Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge drm-fixes-for-v4.17-rc6-urgent into drm-next

Need to backmerge some nouveau fixes to significantly reduce
the nouveau -next conflicts.

Signed-off-by: Dave Airlie <airlied@redhat.com>
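
For readers unfamiliar with the backmerge workflow the message describes, a minimal sketch of what such a merge looks like on the maintainer's side (branch and tag names here follow the commit subject; everything else is an assumption about the local setup):

    git checkout drm-next
    git merge drm-fixes-for-v4.17-rc6-urgent
    # resolve the nouveau conflicts once here, then conclude the merge:
    git commit -s

Resolving the conflicts once in drm-next is what keeps later nouveau -next pulls from hitting the same conflicts again.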

+4020 -2248
+1 -1
Documentation/admin-guide/pm/intel_pstate.rst
···
 
 In this mode ``intel_pstate`` registers utilization update callbacks with the
 CPU scheduler in order to run a P-state selection algorithm, either
-``powersave`` or ``performance``, depending on the ``scaling_cur_freq`` policy
+``powersave`` or ``performance``, depending on the ``scaling_governor`` policy
 setting in ``sysfs``. The current CPU frequency information to be made
 available from the ``scaling_cur_freq`` policy attribute in ``sysfs`` is
 periodically updated by those utilization update callbacks too.
+1 -1
Documentation/admin-guide/pm/sleep-states.rst
···
 ==================================
 
 Depending on its configuration and the capabilities of the platform it runs on,
-the Linux kernel can support up to four system sleep states, includig
+the Linux kernel can support up to four system sleep states, including
 hibernation and up to three variants of system suspend. The sleep states that
 can be supported by the kernel are listed below.
 
+9 -1
Documentation/bpf/bpf_devel_QA.txt
···
   pulls in some header files containing file scope host assembly codes.
 - You can add "-fno-jump-tables" to work around the switch table issue.
 
-Otherwise, you can use bpf target.
+Otherwise, you can use bpf target. Additionally, you _must_ use bpf target
+when:
+
+- Your program uses data structures with pointer or long / unsigned long
+  types that interface with BPF helpers or context data structures. Access
+  into these structures is verified by the BPF verifier and may result
+  in verification failures if the native architecture is not aligned with
+  the BPF architecture, e.g. 64-bit. An example of this is
+  BPF_PROG_TYPE_SK_MSG require '-target bpf'
 
 Happy BPF hacking!
+4 -1
Documentation/device-mapper/thin-provisioning.txt
···
     data device, but just remove the mapping.
 
 read_only: Don't allow any changes to be made to the pool
-	   metadata.
+	   metadata. This mode is only available after the
+	   thin-pool has been created and first used in full
+	   read/write mode. It cannot be specified on initial
+	   thin-pool creation.
 
 error_if_no_space: Error IOs, instead of queueing, if no space.
 
-1
Documentation/devicetree/bindings/ata/ahci-platform.txt
···
 Optional properties:
 - dma-coherent      : Present if dma operations are coherent
 - clocks            : a list of phandle + clock specifier pairs
-- resets            : a list of phandle + reset specifier pairs
 - target-supply     : regulator for SATA target power
 - phys              : reference to the SATA PHY node
 - phy-names         : must be "sata-phy"
+1 -1
Documentation/devicetree/bindings/display/panel/panel-common.txt
···
 require specific display timings. The panel-timing subnode expresses those
 timings as specified in the timing subnode section of the display timing
 bindings defined in
-Documentation/devicetree/bindings/display/display-timing.txt.
+Documentation/devicetree/bindings/display/panel/display-timing.txt.
 
 
 Connectivity
+1
Documentation/devicetree/bindings/dma/renesas,rcar-dmac.txt
···
 		- "renesas,dmac-r8a7794" (R-Car E2)
 		- "renesas,dmac-r8a7795" (R-Car H3)
 		- "renesas,dmac-r8a7796" (R-Car M3-W)
+		- "renesas,dmac-r8a77965" (R-Car M3-N)
 		- "renesas,dmac-r8a77970" (R-Car V3M)
 		- "renesas,dmac-r8a77980" (R-Car V3H)
 
+7
Documentation/devicetree/bindings/input/atmel,maxtouch.txt
···
 - compatible:
     atmel,maxtouch
 
+    The following compatibles have been used in various products but are
+    deprecated:
+        atmel,qt602240_ts
+        atmel,atmel_mxt_ts
+        atmel,atmel_mxt_tp
+        atmel,mXT224
+
 - reg: The I2C address of the device
 
 - interrupts: The sink for the touchpad's IRQ output
+3 -1
Documentation/devicetree/bindings/net/can/rcar_canfd.txt
···
 - compatible: Must contain one or more of the following:
   - "renesas,rcar-gen3-canfd" for R-Car Gen3 compatible controller.
   - "renesas,r8a7795-canfd" for R8A7795 (R-Car H3) compatible controller.
-  - "renesas,r8a7796-canfd" for R8A7796 (R-Car M3) compatible controller.
+  - "renesas,r8a7796-canfd" for R8A7796 (R-Car M3-W) compatible controller.
+  - "renesas,r8a77970-canfd" for R8A77970 (R-Car V3M) compatible controller.
+  - "renesas,r8a77980-canfd" for R8A77980 (R-Car V3H) compatible controller.
 
 When compatible with the generic version, nodes must list the
 SoC-specific version corresponding to the platform first, followed by the
+1
Documentation/devicetree/bindings/net/renesas,ravb.txt
···
 
      - "renesas,etheravb-r8a7795" for the R8A7795 SoC.
      - "renesas,etheravb-r8a7796" for the R8A7796 SoC.
+     - "renesas,etheravb-r8a77965" for the R8A77965 SoC.
      - "renesas,etheravb-r8a77970" for the R8A77970 SoC.
      - "renesas,etheravb-r8a77980" for the R8A77980 SoC.
      - "renesas,etheravb-r8a77995" for the R8A77995 SoC.
+3 -3
Documentation/devicetree/bindings/pinctrl/allwinner,sunxi-pinctrl.txt
···
 configuration, drive strength and pullups. If one of these options is
 not set, its actual value will be unspecified.
 
-This driver supports the generic pin multiplexing and configuration
-bindings. For details on each properties, you can refer to
-./pinctrl-bindings.txt.
+Allwinner A1X Pin Controller supports the generic pin multiplexing and
+configuration bindings. For details on each properties, you can refer to
+./pinctrl-bindings.txt.
 
 Required sub-node properties:
   - pins
+2
Documentation/devicetree/bindings/serial/renesas,sci-serial.txt
···
     - "renesas,hscif-r8a7795" for R8A7795 (R-Car H3) HSCIF compatible UART.
     - "renesas,scif-r8a7796" for R8A7796 (R-Car M3-W) SCIF compatible UART.
     - "renesas,hscif-r8a7796" for R8A7796 (R-Car M3-W) HSCIF compatible UART.
+    - "renesas,scif-r8a77965" for R8A77965 (R-Car M3-N) SCIF compatible UART.
+    - "renesas,hscif-r8a77965" for R8A77965 (R-Car M3-N) HSCIF compatible UART.
     - "renesas,scif-r8a77970" for R8A77970 (R-Car V3M) SCIF compatible UART.
     - "renesas,hscif-r8a77970" for R8A77970 (R-Car V3M) HSCIF compatible UART.
     - "renesas,scif-r8a77980" for R8A77980 (R-Car V3H) SCIF compatible UART.
+1
Documentation/devicetree/bindings/vendor-prefixes.txt
···
 keithkoep	Keith & Koep GmbH
 keymile	Keymile GmbH
 khadas	Khadas
+kiebackpeter	Kieback & Peter GmbH
 kinetic	Kinetic Technologies
 kingnovel	Kingnovel Technology Co., Ltd.
 kosagi	Sutajio Ko-Usagi PTE Ltd.
+8
Documentation/devicetree/overlay-notes.txt
···
 of_overlay_remove_all() which will remove every single one in the correct
 order.
 
+In addition, there is the option to register notifiers that get called on
+overlay operations. See of_overlay_notifier_register/unregister and
+enum of_overlay_notify_action for details.
+
+Note that a notifier callback is not supposed to store pointers to a device
+tree node or its content beyond OF_OVERLAY_POST_REMOVE corresponding to the
+respective node it received.
+
 Overlay DTS Format
 ------------------
 
+2 -2
Documentation/doc-guide/parse-headers.rst
···
 ****
 
 
-Report bugs to Mauro Carvalho Chehab <mchehab@s-opensource.com>
+Report bugs to Mauro Carvalho Chehab <mchehab@kernel.org>
 
 
 COPYRIGHT
 *********
 
 
-Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@s-opensource.com>.
+Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab+samsung@kernel.org>.
 
 License GPLv2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>.
 
+1 -1
Documentation/media/uapi/rc/keytable.c.rst
···
 
 /* keytable.c - This program allows checking/replacing keys at IR
 
-   Copyright (C) 2006-2009 Mauro Carvalho Chehab <mchehab@infradead.org>
+   Copyright (C) 2006-2009 Mauro Carvalho Chehab <mchehab@kernel.org>
 
    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
+1 -1
Documentation/media/uapi/v4l/v4l2grab.c.rst
···
 .. code-block:: c
 
     /* V4L2 video picture grabber
-       Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@infradead.org>
+       Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@kernel.org>
 
        This program is free software; you can redistribute it and/or modify
        it under the terms of the GNU General Public License as published by
+2 -2
Documentation/sphinx/parse-headers.pl
···
 
 =head1 BUGS
 
-Report bugs to Mauro Carvalho Chehab <mchehab@s-opensource.com>
+Report bugs to Mauro Carvalho Chehab <mchehab@kernel.org>
 
 =head1 COPYRIGHT
 
-Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab@s-opensource.com>.
+Copyright (c) 2016 by Mauro Carvalho Chehab <mchehab+samsung@kernel.org>.
 
 License GPLv2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>.
 
+2 -2
Documentation/translations/zh_CN/video4linux/v4l2-framework.txt
···
 help.  Contact the Chinese maintainer if this translation is outdated
 or if there is a problem with the translation.
 
-Maintainer: Mauro Carvalho Chehab <mchehab@infradead.org>
+Maintainer: Mauro Carvalho Chehab <mchehab@kernel.org>
 Chinese maintainer: Fu Wei <tekkamanninja@gmail.com>
 ---------------------------------------------------------------------
 Documentation/video4linux/v4l2-framework.txt 的中文翻译
···
 如果想评论或更新本文的内容,请直接联系原文档的维护者。如果你使用英文
 交流有困难的话,也可以向中文版维护者求助。如果本翻译更新不及时或者翻
 译存在问题,请联系中文版维护者。
-英文版维护者: Mauro Carvalho Chehab <mchehab@infradead.org>
+英文版维护者: Mauro Carvalho Chehab <mchehab@kernel.org>
 中文版维护者: 傅炜 Fu Wei <tekkamanninja@gmail.com>
 中文版翻译者: 傅炜 Fu Wei <tekkamanninja@gmail.com>
 中文版校译者: 傅炜 Fu Wei <tekkamanninja@gmail.com>
+8 -25
MAINTAINERS
···
 -----------------------------------
 
 3C59X NETWORK DRIVER
-M:	Steffen Klassert <klassert@mathematik.tu-chemnitz.de>
+M:	Steffen Klassert <klassert@kernel.org>
 L:	netdev@vger.kernel.org
-S:	Maintained
+S:	Odd Fixes
 F:	Documentation/networking/vortex.txt
 F:	drivers/net/ethernet/3com/3c59x.c
···
 F:	sound/soc/atmel/tse850-pcm5142.c
 
 AZ6007 DVB DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
···
 F:	include/uapi/linux/btrfs*
 
 BTTV VIDEO4LINUX DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
···
 
 CPU POWER MONITORING SUBSYSTEM
 M:	Thomas Renninger <trenn@suse.com>
-M:	Shuah Khan <shuahkh@osg.samsung.com>
 M:	Shuah Khan <shuah@kernel.org>
 L:	linux-pm@vger.kernel.org
 S:	Maintained
···
 F:	drivers/media/dvb-frontends/cx24120*
 
 CX88 VIDEO4LINUX DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
···
 
 EDAC-CORE
 M:	Borislav Petkov <bp@alien8.de>
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-edac@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bp/bp.git for-next
···
 F:	drivers/edac/fsl_ddr_edac.*
 
 EDAC-GHES
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-edac@vger.kernel.org
 S:	Maintained
···
 F:	drivers/edac/i5000_edac.c
 
 EDAC-I5400
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-edac@vger.kernel.org
 S:	Maintained
 F:	drivers/edac/i5400_edac.c
 
 EDAC-I7300
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-edac@vger.kernel.org
 S:	Maintained
 F:	drivers/edac/i7300_edac.c
 
 EDAC-I7CORE
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-edac@vger.kernel.org
 S:	Maintained
···
 F:	drivers/edac/r82600_edac.c
 
 EDAC-SBRIDGE
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-edac@vger.kernel.org
 S:	Maintained
···
 F:	drivers/net/ethernet/ibm/ehea/
 
 EM28XX VIDEO4LINUX DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
···
 S:	Maintained
 F:	Documentation/kbuild/
 F:	Makefile
-F:	scripts/Makefile.*
+F:	scripts/Kbuild*
+F:	scripts/Makefile*
 F:	scripts/basic/
 F:	scripts/mk*
+F:	scripts/mod/
 F:	scripts/package/
 
 KERNEL JANITORS
···
 F:	include/uapi/linux/sunrpc/
 
 KERNEL SELFTEST FRAMEWORK
-M:	Shuah Khan <shuahkh@osg.samsung.com>
 M:	Shuah Khan <shuah@kernel.org>
 L:	linux-kselftest@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/shuah/linux-kselftest.git
···
 F:	drivers/staging/media/tegra-vde/
 
 MEDIA INPUT INFRASTRUCTURE (V4L/DVB)
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 P:	LinuxTV.org Project
 L:	linux-media@vger.kernel.org
···
 F:	net/core/drop_monitor.c
 
 NETWORKING DRIVERS
+M:	"David S. Miller" <davem@davemloft.net>
 L:	netdev@vger.kernel.org
 W:	http://www.linuxfoundation.org/en/Net
 Q:	http://patchwork.ozlabs.org/project/netdev/list/
···
 F:	Documentation/devicetree/bindings/net/nfc/
 
 NFS, SUNRPC, AND LOCKD CLIENTS
-M:	Trond Myklebust <trond.myklebust@primarydata.com>
+M:	Trond Myklebust <trond.myklebust@hammerspace.com>
 M:	Anna Schumaker <anna.schumaker@netapp.com>
 L:	linux-nfs@vger.kernel.org
 W:	http://client.linux-nfs.org
···
 F:	drivers/media/i2c/saa6588*
 
 SAA7134 VIDEO4LINUX DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
···
 SCTP PROTOCOL
 M:	Vlad Yasevich <vyasevich@gmail.com>
 M:	Neil Horman <nhorman@tuxdriver.com>
+M:	Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
 L:	linux-sctp@vger.kernel.org
 W:	http://lksctp.sourceforge.net
 S:	Maintained
···
 F:	drivers/media/radio/si4713/radio-usb-si4713.c
 
 SIANO DVB DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
···
 F:	drivers/media/i2c/tda9840*
 
 TEA5761 TUNER DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
···
 F:	drivers/media/tuners/tea5761.*
 
 TEA5767 TUNER DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
···
 F:	drivers/iommu/tegra*
 
 TEGRA KBC DRIVER
-M:	Rakesh Iyer <riyer@nvidia.com>
 M:	Laxman Dewangan <ldewangan@nvidia.com>
 S:	Supported
 F:	drivers/input/keyboard/tegra-kbc.c
···
 F:	drivers/net/ethernet/ti/tlan.*
 
 TM6000 VIDEO4LINUX DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
···
 
 USB OVER IP DRIVER
 M:	Valentina Manea <valentina.manea.m@gmail.com>
-M:	Shuah Khan <shuahkh@osg.samsung.com>
 M:	Shuah Khan <shuah@kernel.org>
 L:	linux-usb@vger.kernel.org
 S:	Maintained
···
 F:	arch/x86/entry/vdso/
 
 XC2028/3028 TUNER DRIVER
-M:	Mauro Carvalho Chehab <mchehab@s-opensource.com>
 M:	Mauro Carvalho Chehab <mchehab@kernel.org>
 L:	linux-media@vger.kernel.org
 W:	https://linuxtv.org
+2 -2
Makefile
···
 VERSION = 4
 PATCHLEVEL = 17
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
-NAME = Fearless Coyote
+EXTRAVERSION = -rc5
+NAME = Merciless Moray
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
+4
arch/Kconfig
···
 config GCC_PLUGIN_STRUCTLEAK
 	bool "Force initialization of variables containing userspace addresses"
 	depends on GCC_PLUGINS
+	# Currently STRUCTLEAK inserts initialization out of live scope of
+	# variables from KASAN point of view. This leads to KASAN false
+	# positive reports. Prohibit this combination for now.
+	depends on !KASAN_EXTRA
 	help
 	  This plugin zero-initializes any structures containing a
 	  __user attribute. This can prevent some classes of information
+2 -2
arch/arm/boot/dts/imx35.dtsi
···
 		};
 
 		can1: can@53fe4000 {
-			compatible = "fsl,imx35-flexcan";
+			compatible = "fsl,imx35-flexcan", "fsl,imx25-flexcan";
 			reg = <0x53fe4000 0x1000>;
 			clocks = <&clks 33>, <&clks 33>;
 			clock-names = "ipg", "per";
···
 		};
 
 		can2: can@53fe8000 {
-			compatible = "fsl,imx35-flexcan";
+			compatible = "fsl,imx35-flexcan", "fsl,imx25-flexcan";
 			reg = <0x53fe8000 0x1000>;
 			clocks = <&clks 34>, <&clks 34>;
 			clock-names = "ipg", "per";
+2 -2
arch/arm/boot/dts/imx53.dtsi
···
 		};
 
 		can1: can@53fc8000 {
-			compatible = "fsl,imx53-flexcan";
+			compatible = "fsl,imx53-flexcan", "fsl,imx25-flexcan";
 			reg = <0x53fc8000 0x4000>;
 			interrupts = <82>;
 			clocks = <&clks IMX5_CLK_CAN1_IPG_GATE>,
···
 		};
 
 		can2: can@53fcc000 {
-			compatible = "fsl,imx53-flexcan";
+			compatible = "fsl,imx53-flexcan", "fsl,imx25-flexcan";
 			reg = <0x53fcc000 0x4000>;
 			interrupts = <83>;
 			clocks = <&clks IMX5_CLK_CAN2_IPG_GATE>,
+6
arch/arm64/include/asm/cputype.h
···
 #define ARM_CPU_IMP_CAVIUM		0x43
 #define ARM_CPU_IMP_BRCM		0x42
 #define ARM_CPU_IMP_QCOM		0x51
+#define ARM_CPU_IMP_NVIDIA		0x4E
 
 #define ARM_CPU_PART_AEM_V8		0xD0F
 #define ARM_CPU_PART_FOUNDATION		0xD00
···
 #define QCOM_CPU_PART_FALKOR		0xC00
 #define QCOM_CPU_PART_KRYO		0x200
 
+#define NVIDIA_CPU_PART_DENVER		0x003
+#define NVIDIA_CPU_PART_CARMEL		0x004
+
 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
 #define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
···
 #define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
 #define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR)
 #define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO)
+#define MIDR_NVIDIA_DENVER MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_DENVER)
+#define MIDR_NVIDIA_CARMEL MIDR_CPU_MODEL(ARM_CPU_IMP_NVIDIA, NVIDIA_CPU_PART_CARMEL)
 
 #ifndef __ASSEMBLY__
 
+1 -1
arch/arm64/include/asm/kvm_emulate.h
···
 	} else {
 		u64 sctlr = vcpu_read_sys_reg(vcpu, SCTLR_EL1);
 		sctlr |= (1 << 25);
-		vcpu_write_sys_reg(vcpu, SCTLR_EL1, sctlr);
+		vcpu_write_sys_reg(vcpu, sctlr, SCTLR_EL1);
 	}
 }
 
+1
arch/arm64/kernel/cpu_errata.c
···
 	MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
 	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
 	MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
+	MIDR_ALL_VERSIONS(MIDR_NVIDIA_DENVER),
 	{},
 };
 
+19 -5
arch/arm64/kvm/hyp/vgic-v2-cpuif-proxy.c
···
 #include <linux/compiler.h>
 #include <linux/irqchip/arm-gic.h>
 #include <linux/kvm_host.h>
+#include <linux/swab.h>
 
 #include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>
 #include <asm/kvm_mmu.h>
+
+static bool __hyp_text __is_be(struct kvm_vcpu *vcpu)
+{
+	if (vcpu_mode_is_32bit(vcpu))
+		return !!(read_sysreg_el2(spsr) & COMPAT_PSR_E_BIT);
+
+	return !!(read_sysreg(SCTLR_EL1) & SCTLR_ELx_EE);
+}
 
 /*
  * __vgic_v2_perform_cpuif_access -- perform a GICV access on behalf of the
···
 	addr += fault_ipa - vgic->vgic_cpu_base;
 
 	if (kvm_vcpu_dabt_iswrite(vcpu)) {
-		u32 data = vcpu_data_guest_to_host(vcpu,
-						   vcpu_get_reg(vcpu, rd),
-						   sizeof(u32));
+		u32 data = vcpu_get_reg(vcpu, rd);
+		if (__is_be(vcpu)) {
+			/* guest pre-swabbed data, undo this for writel() */
+			data = swab32(data);
+		}
 		writel_relaxed(data, addr);
 	} else {
 		u32 data = readl_relaxed(addr);
-		vcpu_set_reg(vcpu, rd, vcpu_data_host_to_guest(vcpu, data,
-							       sizeof(u32)));
+		if (__is_be(vcpu)) {
+			/* guest expects swabbed data */
+			data = swab32(data);
+		}
+		vcpu_set_reg(vcpu, rd, data);
 	}
 
 	return 1;
+3 -1
arch/arm64/mm/init.c
···
 
 void __init free_initrd_mem(unsigned long start, unsigned long end)
 {
-	if (!keep_initrd)
+	if (!keep_initrd) {
 		free_reserved_area((void *)start, (void *)end, 0, "initrd");
+		memblock_free(__virt_to_phys(start), end - start);
+	}
 }
 
 static int __init keepinitrd_setup(char *__unused)
+6
arch/hexagon/include/asm/io.h
···
 	memcpy((void *) dst, src, count);
 }
 
+static inline void memset_io(volatile void __iomem *addr, int value,
+			     size_t size)
+{
+	memset((void __force *)addr, value, size);
+}
+
 #define PCI_IO_ADDR	(volatile void __iomem *)
 
 /*
+1
arch/hexagon/lib/checksum.c
···
 	memcpy(dst, src, len);
 	return csum_partial(dst, len, sum);
 }
+EXPORT_SYMBOL(csum_partial_copy_nocheck);
+3
arch/parisc/Makefile
···
 
 PHONY += bzImage $(BOOT_TARGETS) $(INSTALL_TARGETS)
 
+# Default kernel to build
+all: bzImage
+
 zImage: vmlinuz
 Image: vmlinux
 
+4 -3
arch/parisc/kernel/drivers.c
···
  * Checks all the children of @parent for a matching @id.  If none
  * found, it allocates a new device and returns it.
  */
-static struct parisc_device * alloc_tree_node(struct device *parent, char id)
+static struct parisc_device * __init alloc_tree_node(
+			struct device *parent, char id)
 {
 	struct match_id_data d = {
 		.id = id,
···
  * devices which are not physically connected (such as extra serial &
  * keyboard ports).  This problem is not yet solved.
  */
-static void walk_native_bus(unsigned long io_io_low, unsigned long io_io_high,
-			    struct device *parent)
+static void __init walk_native_bus(unsigned long io_io_low,
+			unsigned long io_io_high, struct device *parent)
 {
 	int i, devices_found = 0;
 	unsigned long hpa = io_io_low;
+1 -1
arch/parisc/kernel/pci.c
···
  * pcibios_init_bridge() initializes cache line and default latency
  * for pci controllers and pci-pci bridges
  */
-void __init pcibios_init_bridge(struct pci_dev *dev)
+void __ref pcibios_init_bridge(struct pci_dev *dev)
 {
 	unsigned short bridge_ctl, bridge_ctl_new;
 
+1 -1
arch/parisc/kernel/time.c
···
 device_initcall(rtc_init);
 #endif
 
-void read_persistent_clock(struct timespec *ts)
+void read_persistent_clock64(struct timespec64 *ts)
 {
 	static struct pdc_tod tod_data;
 	if (pdc_tod_read(&tod_data) == 0) {
+11
arch/parisc/kernel/traps.c
···
 	if (pdc_instr(&instr) == PDC_OK)
 		ivap[0] = instr;
 
+	/*
+	 * Rules for the checksum of the HPMC handler:
+	 * 1. The IVA does not point to PDC/PDH space (ie: the OS has installed
+	 *    its own IVA).
+	 * 2. The word at IVA + 32 is nonzero.
+	 * 3. If Length (IVA + 60) is not zero, then Length (IVA + 60) and
+	 *    Address (IVA + 56) are word-aligned.
+	 * 4. The checksum of the 8 words starting at IVA + 32 plus the sum of
+	 *    the Length/4 words starting at Address is zero.
+	 */
+
 	/* Compute Checksum for HPMC handler */
 	length = os_hpmc_size;
 	ivap[7] = length;
+1 -1
arch/parisc/mm/init.c
···
 	}
 }
 
-void free_initmem(void)
+void __ref free_initmem(void)
 {
 	unsigned long init_begin = (unsigned long)__init_begin;
 	unsigned long init_end = (unsigned long)__init_end;
+21 -8
arch/powerpc/include/asm/ftrace.h
···
 #endif
 
 #if defined(CONFIG_FTRACE_SYSCALLS) && !defined(__ASSEMBLY__)
-#ifdef PPC64_ELF_ABI_v1
+/*
+ * Some syscall entry functions on powerpc start with "ppc_" (fork and clone,
+ * for instance) or ppc32_/ppc64_. We should also match the sys_ variant with
+ * those.
+ */
 #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
+#ifdef PPC64_ELF_ABI_v1
 static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
 {
-	/*
-	 * Compare the symbol name with the system call name. Skip the .sys or .SyS
-	 * prefix from the symbol name and the sys prefix from the system call name and
-	 * just match the rest. This is only needed on ppc64 since symbol names on
-	 * 32bit do not start with a period so the generic function will work.
-	 */
-	return !strcmp(sym + 4, name + 3);
+	/* We need to skip past the initial dot, and the __se_sys alias */
+	return !strcmp(sym + 1, name) ||
+		(!strncmp(sym, ".__se_sys", 9) && !strcmp(sym + 6, name)) ||
+		(!strncmp(sym, ".ppc_", 5) && !strcmp(sym + 5, name + 4)) ||
+		(!strncmp(sym, ".ppc32_", 7) && !strcmp(sym + 7, name + 4)) ||
+		(!strncmp(sym, ".ppc64_", 7) && !strcmp(sym + 7, name + 4));
+}
+#else
+static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
+{
+	return !strcmp(sym, name) ||
+		(!strncmp(sym, "__se_sys", 8) && !strcmp(sym + 5, name)) ||
+		(!strncmp(sym, "ppc_", 4) && !strcmp(sym + 4, name + 4)) ||
+		(!strncmp(sym, "ppc32_", 6) && !strcmp(sym + 6, name + 4)) ||
+		(!strncmp(sym, "ppc64_", 6) && !strcmp(sym + 6, name + 4));
 }
 #endif
 #endif /* CONFIG_FTRACE_SYSCALLS && !__ASSEMBLY__ */
-1
arch/powerpc/include/asm/paca.h
···
 	u64 saved_msr;			/* MSR saved here by enter_rtas */
 	u16 trap_save;			/* Used when bad stack is encountered */
 	u8 irq_soft_mask;		/* mask for irq soft masking */
-	u8 soft_enabled;		/* irq soft-enable flag */
 	u8 irq_happened;		/* irq happened while soft-disabled */
 	u8 io_sync;			/* writel() needs spin_unlock sync */
 	u8 irq_work_pending;		/* IRQ_WORK interrupt while soft-disable */
+5 -8
arch/powerpc/include/asm/topology.h
···
 extern int stop_topology_update(void);
 extern int prrn_is_enabled(void);
 extern int find_and_online_cpu_nid(int cpu);
+extern int timed_topology_update(int nsecs);
 #else
 static inline int start_topology_update(void)
 {
···
 {
 	return 0;
 }
+static inline int timed_topology_update(int nsecs)
+{
+	return 0;
+}
 #endif /* CONFIG_NUMA && CONFIG_PPC_SPLPAR */
-
-#if defined(CONFIG_HOTPLUG_CPU) || defined(CONFIG_NEED_MULTIPLE_NODES)
-#if defined(CONFIG_PPC_SPLPAR)
-extern int timed_topology_update(int nsecs);
-#else
-#define	timed_topology_update(nsecs)
-#endif /* CONFIG_PPC_SPLPAR */
-#endif /* CONFIG_HOTPLUG_CPU || CONFIG_NEED_MULTIPLE_NODES */
 
 #include <asm-generic/topology.h>
 
+1
arch/sh/Kconfig
···
 	select HAVE_IDE if HAS_IOPORT_MAP
 	select HAVE_MEMBLOCK
 	select HAVE_MEMBLOCK_NODE_MAP
+	select NO_BOOTMEM
 	select ARCH_DISCARD_MEMBLOCK
 	select HAVE_OPROFILE
 	select HAVE_GENERIC_DMA_COHERENT
+4
arch/sh/kernel/cpu/sh2/probe.c
···
 #endif
 
 #if defined(CONFIG_CPU_J2)
+#if defined(CONFIG_SMP)
 	unsigned cpu = hard_smp_processor_id();
+#else
+	unsigned cpu = 0;
+#endif
 	if (cpu == 0) of_scan_flat_dt(scan_cache, NULL);
 	if (j2_ccr_base) __raw_writel(0x80000303, j2_ccr_base + 4*cpu);
 	if (cpu != 0) return;
-1
arch/sh/kernel/setup.c
···
 #include <linux/ioport.h>
 #include <linux/init.h>
 #include <linux/initrd.h>
-#include <linux/bootmem.h>
 #include <linux/console.h>
 #include <linux/root_dev.h>
 #include <linux/utsname.h>
+8 -3
arch/sh/mm/consistent.c
···
 
 	split_page(pfn_to_page(virt_to_phys(ret) >> PAGE_SHIFT), order);
 
-	*dma_handle = virt_to_phys(ret) - PFN_PHYS(dev->dma_pfn_offset);
+	*dma_handle = virt_to_phys(ret);
+	if (!WARN_ON(!dev))
+		*dma_handle -= PFN_PHYS(dev->dma_pfn_offset);
 
 	return ret_nocache;
 }
···
 			      unsigned long attrs)
 {
 	int order = get_order(size);
-	unsigned long pfn = (dma_handle >> PAGE_SHIFT) + dev->dma_pfn_offset;
+	unsigned long pfn = dma_handle >> PAGE_SHIFT;
 	int k;
+
+	if (!WARN_ON(!dev))
+		pfn += dev->dma_pfn_offset;
 
 	for (k = 0; k < (1 << order); k++)
 		__free_pages(pfn_to_page(pfn + k), 0);
···
 	if (!memsize)
 		return 0;
 
-	buf = dma_alloc_coherent(NULL, memsize, &dma_handle, GFP_KERNEL);
+	buf = dma_alloc_coherent(&pdev->dev, memsize, &dma_handle, GFP_KERNEL);
 	if (!buf) {
 		pr_warning("%s: unable to allocate memory\n", name);
 		return -ENOMEM;
+6 -62
arch/sh/mm/init.c
···
 
 	NODE_DATA(nid) = __va(phys);
 	memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
-
-	NODE_DATA(nid)->bdata = &bootmem_node_data[nid];
 #endif
 
 	NODE_DATA(nid)->node_start_pfn = start_pfn;
 	NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
 }
 
-static void __init bootmem_init_one_node(unsigned int nid)
-{
-	unsigned long total_pages, paddr;
-	unsigned long end_pfn;
-	struct pglist_data *p;
-
-	p = NODE_DATA(nid);
-
-	/* Nothing to do.. */
-	if (!p->node_spanned_pages)
-		return;
-
-	end_pfn = pgdat_end_pfn(p);
-
-	total_pages = bootmem_bootmap_pages(p->node_spanned_pages);
-
-	paddr = memblock_alloc(total_pages << PAGE_SHIFT, PAGE_SIZE);
-	if (!paddr)
-		panic("Can't allocate bootmap for nid[%d]\n", nid);
-
-	init_bootmem_node(p, paddr >> PAGE_SHIFT, p->node_start_pfn, end_pfn);
-
-	free_bootmem_with_active_regions(nid, end_pfn);
-
-	/*
-	 * XXX Handle initial reservations for the system memory node
-	 * only for the moment, we'll refactor this later for handling
-	 * reservations in other nodes.
-	 */
-	if (nid == 0) {
-		struct memblock_region *reg;
-
-		/* Reserve the sections we're already using. */
-		for_each_memblock(reserved, reg) {
-			reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
-		}
-	}
-
-	sparse_memory_present_with_active_regions(nid);
-}
-
 static void __init do_init_bootmem(void)
 {
 	struct memblock_region *reg;
-	int i;
 
 	/* Add active regions with valid PFNs. */
 	for_each_memblock(memory, reg) {
···
 
 	plat_mem_setup();
 
-	for_each_online_node(i)
-		bootmem_init_one_node(i);
+	for_each_memblock(memory, reg) {
+		int nid = memblock_get_region_node(reg);
 
+		memory_present(nid, memblock_region_memory_base_pfn(reg),
+			       memblock_region_memory_end_pfn(reg));
+	}
 	sparse_init();
 }
···
 {
 	unsigned long max_zone_pfns[MAX_NR_ZONES];
 	unsigned long vaddr, end;
-	int nid;
 
 	sh_mv.mv_mem_init();
···
 	kmap_coherent_init();
 
 	memset(max_zone_pfns, 0, sizeof(max_zone_pfns));
-
-	for_each_online_node(nid) {
-		pg_data_t *pgdat = NODE_DATA(nid);
-		unsigned long low, start_pfn;
-
-		start_pfn = pgdat->bdata->node_min_pfn;
-		low = pgdat->bdata->node_low_pfn;
-
-		if (max_zone_pfns[ZONE_NORMAL] < low)
-			max_zone_pfns[ZONE_NORMAL] = low;
-
-		printk("Node %u: start_pfn = 0x%lx, low = 0x%lx\n",
-		       nid, start_pfn, low);
-	}
-
+	max_zone_pfns[ZONE_NORMAL] = max_low_pfn;
 	free_area_init_nodes(max_zone_pfns);
 }
 
-19
arch/sh/mm/numa.c
···
  * for more details.
  */
 #include <linux/module.h>
-#include <linux/bootmem.h>
 #include <linux/memblock.h>
 #include <linux/mm.h>
 #include <linux/numa.h>
···
  */
 void __init setup_bootmem_node(int nid, unsigned long start, unsigned long end)
 {
-	unsigned long bootmap_pages;
 	unsigned long start_pfn, end_pfn;
-	unsigned long bootmem_paddr;
 
 	/* Don't allow bogus node assignment */
 	BUG_ON(nid >= MAX_NUMNODES || nid <= 0);
···
 					     SMP_CACHE_BYTES, end));
 	memset(NODE_DATA(nid), 0, sizeof(struct pglist_data));
 
-	NODE_DATA(nid)->bdata = &bootmem_node_data[nid];
 	NODE_DATA(nid)->node_start_pfn = start_pfn;
 	NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
-
-	/* Node-local bootmap */
-	bootmap_pages = bootmem_bootmap_pages(end_pfn - start_pfn);
-	bootmem_paddr = memblock_alloc_base(bootmap_pages << PAGE_SHIFT,
-					    PAGE_SIZE, end);
-	init_bootmem_node(NODE_DATA(nid), bootmem_paddr >> PAGE_SHIFT,
-			  start_pfn, end_pfn);
-
-	free_bootmem_with_active_regions(nid, end_pfn);
-
-	/* Reserve the pgdat and bootmap space with the bootmem allocator */
-	reserve_bootmem_node(NODE_DATA(nid), start_pfn << PAGE_SHIFT,
-			     sizeof(struct pglist_data), BOOTMEM_DEFAULT);
-	reserve_bootmem_node(NODE_DATA(nid), bootmem_paddr,
-			     bootmap_pages << PAGE_SHIFT, BOOTMEM_DEFAULT);
 
 	/* It's up */
 	node_set_online(nid);
+1 -1
arch/sparc/include/uapi/asm/oradax.h
···
  *
  * This program is free software: you can redistribute it and/or modify
  * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 3 of the License, or
+ * the Free Software Foundation, either version 2 of the License, or
  * (at your option) any later version.
  *
  * This program is distributed in the hope that it will be useful,
+1 -1
arch/sparc/kernel/vio.c
···
 	if (err) {
 		printk(KERN_ERR "VIO: Could not register device %s, err=%d\n",
 		       dev_name(&vdev->dev), err);
-		kfree(vdev);
+		put_device(&vdev->dev);
 		return NULL;
 	}
 	if (vdev->dp)
-1
arch/x86/entry/vdso/vdso32/vdso-fakesections.c
···
-#include "../vdso-fakesections.c"
+7 -1
arch/x86/events/core.c
···
 #include <linux/cpu.h>
 #include <linux/bitops.h>
 #include <linux/device.h>
+#include <linux/nospec.h>
 
 #include <asm/apic.h>
 #include <asm/stacktrace.h>
···
 
 	config = attr->config;
 
-	cache_type = (config >>  0) & 0xff;
+	cache_type = (config >> 0) & 0xff;
 	if (cache_type >= PERF_COUNT_HW_CACHE_MAX)
 		return -EINVAL;
+	cache_type = array_index_nospec(cache_type, PERF_COUNT_HW_CACHE_MAX);
 
 	cache_op = (config >>  8) & 0xff;
 	if (cache_op >= PERF_COUNT_HW_CACHE_OP_MAX)
 		return -EINVAL;
+	cache_op = array_index_nospec(cache_op, PERF_COUNT_HW_CACHE_OP_MAX);
 
 	cache_result = (config >> 16) & 0xff;
 	if (cache_result >= PERF_COUNT_HW_CACHE_RESULT_MAX)
 		return -EINVAL;
+	cache_result = array_index_nospec(cache_result, PERF_COUNT_HW_CACHE_RESULT_MAX);
 
 	val = hw_cache_event_ids[cache_type][cache_op][cache_result];
···
 
 	if (attr->config >= x86_pmu.max_events)
 		return -EINVAL;
+
+	attr->config = array_index_nospec((unsigned long)attr->config, x86_pmu.max_events);
 
 	/*
 	 * The generic map:
+2
arch/x86/events/intel/cstate.c
···
 #include <linux/module.h>
 #include <linux/slab.h>
 #include <linux/perf_event.h>
+#include <linux/nospec.h>
 #include <asm/cpu_device_id.h>
 #include <asm/intel-family.h>
 #include "../perf_event.h"
···
 	} else if (event->pmu == &cstate_pkg_pmu) {
 		if (cfg >= PERF_CSTATE_PKG_EVENT_MAX)
 			return -EINVAL;
+		cfg = array_index_nospec((unsigned long)cfg, PERF_CSTATE_PKG_EVENT_MAX);
 		if (!pkg_msr[cfg].attr)
 			return -EINVAL;
 		event->hw.event_base = pkg_msr[cfg].msr;
+6 -3
arch/x86/events/msr.c
···
 // SPDX-License-Identifier: GPL-2.0
 #include <linux/perf_event.h>
+#include <linux/nospec.h>
 #include <asm/intel-family.h>
 
 enum perf_msr_id {
···
 	if (event->attr.type != event->pmu->type)
 		return -ENOENT;
 
-	if (cfg >= PERF_MSR_EVENT_MAX)
-		return -EINVAL;
-
 	/* unsupported modes and filters */
 	if (event->attr.exclude_user   ||
 	    event->attr.exclude_kernel ||
···
 	    event->attr.exclude_guest  ||
 	    event->attr.sample_period) /* no sampling */
 		return -EINVAL;
+
+	if (cfg >= PERF_MSR_EVENT_MAX)
+		return -EINVAL;
+
+	cfg = array_index_nospec((unsigned long)cfg, PERF_MSR_EVENT_MAX);
 
 	if (!msr[cfg].attr)
 		return -EINVAL;
+5 -1
arch/x86/kernel/cpu/common.c
···
 		c->x86_power = edx;
 	}
 
+	if (c->extended_cpuid_level >= 0x80000008) {
+		cpuid(0x80000008, &eax, &ebx, &ecx, &edx);
+		c->x86_capability[CPUID_8000_0008_EBX] = ebx;
+	}
+
 	if (c->extended_cpuid_level >= 0x8000000a)
 		c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
 
···
 
 		c->x86_virt_bits = (eax >> 8) & 0xff;
 		c->x86_phys_bits = eax & 0xff;
-		c->x86_capability[CPUID_8000_0008_EBX] = ebx;
 	}
 #ifdef CONFIG_X86_32
 	else if (cpu_has(c, X86_FEATURE_PAE) || cpu_has(c, X86_FEATURE_PSE36))
+11 -11
arch/x86/kernel/tsc.c
···
 	.resume			= tsc_resume,
 	.mark_unstable		= tsc_cs_mark_unstable,
 	.tick_stable		= tsc_cs_tick_stable,
+	.list			= LIST_HEAD_INIT(clocksource_tsc_early.list),
 };
 
 /*
···
 	.resume			= tsc_resume,
 	.mark_unstable		= tsc_cs_mark_unstable,
 	.tick_stable		= tsc_cs_tick_stable,
+	.list			= LIST_HEAD_INIT(clocksource_tsc.list),
 };
 
 void mark_tsc_unstable(char *reason)
···
 	clear_sched_clock_stable();
 	disable_sched_clock_irqtime();
 	pr_info("Marking TSC unstable due to %s\n", reason);
-	/* Change only the rating, when not registered */
-	if (clocksource_tsc.mult) {
-		clocksource_mark_unstable(&clocksource_tsc);
-	} else {
-		clocksource_tsc.flags |= CLOCK_SOURCE_UNSTABLE;
-		clocksource_tsc.rating = 0;
-	}
+
+	clocksource_mark_unstable(&clocksource_tsc_early);
+	clocksource_mark_unstable(&clocksource_tsc);
 }
 
 EXPORT_SYMBOL_GPL(mark_tsc_unstable);
···
 
 	/* Don't bother refining TSC on unstable systems */
 	if (tsc_unstable)
-		return;
+		goto unreg;
 
 	/*
 	 * Since the work is started early in boot, we may be
···
 
 out:
 	if (tsc_unstable)
-		return;
+		goto unreg;
 
 	if (boot_cpu_has(X86_FEATURE_ART))
 		art_related_clocksource = &clocksource_tsc;
 	clocksource_register_khz(&clocksource_tsc, tsc_khz);
+unreg:
 	clocksource_unregister(&clocksource_tsc_early);
 }
 
···
 	if (!boot_cpu_has(X86_FEATURE_TSC) || tsc_disabled > 0 || !tsc_khz)
 		return 0;
 
-	if (check_tsc_unstable())
-		return 0;
+	if (tsc_unstable)
+		goto unreg;
 
 	if (tsc_clocksource_reliable)
 		clocksource_tsc.flags &= ~CLOCK_SOURCE_MUST_VERIFY;
···
 	if (boot_cpu_has(X86_FEATURE_ART))
 		art_related_clocksource = &clocksource_tsc;
 	clocksource_register_khz(&clocksource_tsc, tsc_khz);
+unreg:
 	clocksource_unregister(&clocksource_tsc_early);
 	return 0;
 }
+20 -17
arch/x86/kvm/lapic.c
···
 	local_irq_restore(flags);
 }
 
-static void start_sw_period(struct kvm_lapic *apic)
-{
-	if (!apic->lapic_timer.period)
-		return;
-
-	if (apic_lvtt_oneshot(apic) &&
-	    ktime_after(ktime_get(),
-			apic->lapic_timer.target_expiration)) {
-		apic_timer_expired(apic);
-		return;
-	}
-
-	hrtimer_start(&apic->lapic_timer.timer,
-		apic->lapic_timer.target_expiration,
-		HRTIMER_MODE_ABS_PINNED);
-}
-
 static void update_target_expiration(struct kvm_lapic *apic, uint32_t old_divisor)
 {
 	ktime_t now, remaining;
···
 	apic->lapic_timer.target_expiration =
 		ktime_add_ns(apic->lapic_timer.target_expiration,
 				apic->lapic_timer.period);
+}
+
+static void start_sw_period(struct kvm_lapic *apic)
+{
+	if (!apic->lapic_timer.period)
+		return;
+
+	if (ktime_after(ktime_get(),
+			apic->lapic_timer.target_expiration)) {
+		apic_timer_expired(apic);
+
+		if (apic_lvtt_oneshot(apic))
+			return;
+
+		advance_periodic_target_expiration(apic);
+	}
+
+	hrtimer_start(&apic->lapic_timer.timer,
+		apic->lapic_timer.target_expiration,
+		HRTIMER_MODE_ABS_PINNED);
 }
 
 bool kvm_lapic_hv_timer_in_use(struct kvm_vcpu *vcpu)
+14 -4
arch/x86/net/bpf_jit_comp.c
···
 			break;
 
 		case BPF_JMP | BPF_JA:
-			jmp_offset = addrs[i + insn->off] - addrs[i];
+			if (insn->off == -1)
+				/* -1 jmp instructions will always jump
+				 * backwards two bytes. Explicitly handling
+				 * this case avoids wasting too many passes
+				 * when there are long sequences of replaced
+				 * dead code.
+				 */
+				jmp_offset = -2;
+			else
+				jmp_offset = addrs[i + insn->off] - addrs[i];
+
 			if (!jmp_offset)
 				/* optimize out nop jumps */
 				break;
···
 	for (pass = 0; pass < 20 || image; pass++) {
 		proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
 		if (proglen <= 0) {
+out_image:
 			image = NULL;
 			if (header)
 				bpf_jit_binary_free(header);
···
 			if (proglen != oldproglen) {
 				pr_err("bpf_jit: proglen=%d != oldproglen=%d\n",
 				       proglen, oldproglen);
-				prog = orig_prog;
-				goto out_addrs;
+				goto out_image;
 			}
 			break;
 		}
···
 		prog = orig_prog;
 	}
 
-	if (!prog->is_func || extra_pass) {
+	if (!image || !prog->is_func || extra_pass) {
 out_addrs:
 		kfree(addrs);
 		kfree(jit_data);
+13
arch/x86/xen/enlighten_hvm.c
···
 {
 	early_memunmap(HYPERVISOR_shared_info, PAGE_SIZE);
 	HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
+
+	/*
+	 * The virtual address of the shared_info page has changed, so
+	 * the vcpu_info pointer for VCPU 0 is now stale.
+	 *
+	 * The prepare_boot_cpu callback will re-initialize it via
+	 * xen_vcpu_setup, but we can't rely on that to be called for
+	 * old Xen versions (xen_have_vector_callback == 0).
+	 *
+	 * It is, in any case, bad to have a stale vcpu_info pointer
+	 * so reset it now.
+	 */
+	xen_vcpu_info_reset(0);
 }
 
 static void __init init_hvm_pv_info(void)
+31 -55
arch/x86/xen/enlighten_pv.c
···
 {
 	unsigned long va = dtr->address;
 	unsigned int size = dtr->size + 1;
-	unsigned pages = DIV_ROUND_UP(size, PAGE_SIZE);
-	unsigned long frames[pages];
-	int f;
+	unsigned long pfn, mfn;
+	int level;
+	pte_t *ptep;
+	void *virt;
 
-	/*
-	 * A GDT can be up to 64k in size, which corresponds to 8192
-	 * 8-byte entries, or 16 4k pages..
-	 */
-
-	BUG_ON(size > 65536);
+	/* @size should be at most GDT_SIZE which is smaller than PAGE_SIZE. */
+	BUG_ON(size > PAGE_SIZE);
 	BUG_ON(va & ~PAGE_MASK);
 
-	for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) {
-		int level;
-		pte_t *ptep;
-		unsigned long pfn, mfn;
-		void *virt;
+	/*
+	 * The GDT is per-cpu and is in the percpu data area.
+	 * That can be virtually mapped, so we need to do a
+	 * page-walk to get the underlying MFN for the
+	 * hypercall.  The page can also be in the kernel's
+	 * linear range, so we need to RO that mapping too.
+	 */
+	ptep = lookup_address(va, &level);
+	BUG_ON(ptep == NULL);
 
-		/*
-		 * The GDT is per-cpu and is in the percpu data area.
-		 * That can be virtually mapped, so we need to do a
-		 * page-walk to get the underlying MFN for the
-		 * hypercall.  The page can also be in the kernel's
-		 * linear range, so we need to RO that mapping too.
-		 */
-		ptep = lookup_address(va, &level);
-		BUG_ON(ptep == NULL);
+	pfn = pte_pfn(*ptep);
+	mfn = pfn_to_mfn(pfn);
+	virt = __va(PFN_PHYS(pfn));
 
-		pfn = pte_pfn(*ptep);
-		mfn = pfn_to_mfn(pfn);
-		virt = __va(PFN_PHYS(pfn));
+	make_lowmem_page_readonly((void *)va);
+	make_lowmem_page_readonly(virt);
 
-		frames[f] = mfn;
-
-		make_lowmem_page_readonly((void *)va);
-		make_lowmem_page_readonly(virt);
-	}
-
-	if (HYPERVISOR_set_gdt(frames, size / sizeof(struct desc_struct)))
+	if (HYPERVISOR_set_gdt(&mfn, size / sizeof(struct desc_struct)))
 		BUG();
 }
 
···
 {
 	unsigned long va = dtr->address;
 	unsigned int size = dtr->size + 1;
-	unsigned pages = DIV_ROUND_UP(size, PAGE_SIZE);
-	unsigned long frames[pages];
-	int f;
+	unsigned long pfn, mfn;
+	pte_t pte;
 
-	/*
-	 * A GDT can be up to 64k in size, which corresponds to 8192
-	 * 8-byte entries, or 16 4k pages..
-	 */
-
-	BUG_ON(size > 65536);
+	/* @size should be at most GDT_SIZE which is smaller than PAGE_SIZE. */
+	BUG_ON(size > PAGE_SIZE);
 	BUG_ON(va & ~PAGE_MASK);
 
-	for (f = 0; va < dtr->address + size; va += PAGE_SIZE, f++) {
-		pte_t pte;
-		unsigned long pfn, mfn;
+	pfn = virt_to_pfn(va);
+	mfn = pfn_to_mfn(pfn);
 
-		pfn = virt_to_pfn(va);
-		mfn = pfn_to_mfn(pfn);
+	pte = pfn_pte(pfn, PAGE_KERNEL_RO);
 
-		pte = pfn_pte(pfn, PAGE_KERNEL_RO);
+	if (HYPERVISOR_update_va_mapping((unsigned long)va, pte, 0))
+		BUG();
 
-		if (HYPERVISOR_update_va_mapping((unsigned long)va, pte, 0))
-			BUG();
-
-		frames[f] = mfn;
-	}
-
-	if (HYPERVISOR_set_gdt(frames, size / sizeof(struct desc_struct)))
+	if (HYPERVISOR_set_gdt(&mfn, size / sizeof(struct desc_struct)))
 		BUG();
 }
 
+28 -12
block/blk-mq.c
···
 {
 	struct mq_inflight *mi = priv;
 
-	if (blk_mq_rq_state(rq) == MQ_RQ_IN_FLIGHT) {
-		/*
-		 * index[0] counts the specific partition that was asked
-		 * for. index[1] counts the ones that are active on the
-		 * whole device, so increment that if mi->part is indeed
-		 * a partition, and not a whole device.
-		 */
-		if (rq->part == mi->part)
-			mi->inflight[0]++;
-		if (mi->part->partno)
-			mi->inflight[1]++;
-	}
+	/*
+	 * index[0] counts the specific partition that was asked for. index[1]
+	 * counts the ones that are active on the whole device, so increment
+	 * that if mi->part is indeed a partition, and not a whole device.
+	 */
+	if (rq->part == mi->part)
+		mi->inflight[0]++;
+	if (mi->part->partno)
+		mi->inflight[1]++;
 }
 
 void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,
···
 
 	inflight[0] = inflight[1] = 0;
 	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight, &mi);
+}
+
+static void blk_mq_check_inflight_rw(struct blk_mq_hw_ctx *hctx,
+				     struct request *rq, void *priv,
+				     bool reserved)
+{
+	struct mq_inflight *mi = priv;
+
+	if (rq->part == mi->part)
+		mi->inflight[rq_data_dir(rq)]++;
+}
+
+void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+			 unsigned int inflight[2])
+{
+	struct mq_inflight mi = { .part = part, .inflight = inflight, };
+
+	inflight[0] = inflight[1] = 0;
+	blk_mq_queue_tag_busy_iter(q, blk_mq_check_inflight_rw, &mi);
 }
 
 void blk_freeze_queue_start(struct request_queue *q)
+3 -1
block/blk-mq.h
···
 }
 
 void blk_mq_in_flight(struct request_queue *q, struct hd_struct *part,
-		      unsigned int inflight[2]);
+		      unsigned int inflight[2]);
+void blk_mq_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+			 unsigned int inflight[2]);
 
 static inline void blk_mq_put_dispatch_budget(struct blk_mq_hw_ctx *hctx)
 {
+12
block/genhd.c
···
 	}
 }
 
+void part_in_flight_rw(struct request_queue *q, struct hd_struct *part,
+		       unsigned int inflight[2])
+{
+	if (q->mq_ops) {
+		blk_mq_in_flight_rw(q, part, inflight);
+		return;
+	}
+
+	inflight[0] = atomic_read(&part->in_flight[0]);
+	inflight[1] = atomic_read(&part->in_flight[1]);
+}
+
 struct hd_struct *__disk_get_part(struct gendisk *disk, int partno)
 {
 	struct disk_part_tbl *ptbl = rcu_dereference(disk->part_tbl);
+6 -4
block/partition-generic.c
···
 		jiffies_to_msecs(part_stat_read(p, time_in_queue)));
 }
 
-ssize_t part_inflight_show(struct device *dev,
-			struct device_attribute *attr, char *buf)
+ssize_t part_inflight_show(struct device *dev, struct device_attribute *attr,
+			   char *buf)
 {
 	struct hd_struct *p = dev_to_part(dev);
+	struct request_queue *q = part_to_disk(p)->queue;
+	unsigned int inflight[2];
 
-	return sprintf(buf, "%8u %8u\n", atomic_read(&p->in_flight[0]),
-		atomic_read(&p->in_flight[1]));
+	part_in_flight_rw(q, p, inflight);
+	return sprintf(buf, "%8u %8u\n", inflight[0], inflight[1]);
 }
 
 #ifdef CONFIG_FAIL_MAKE_REQUEST
+3 -3
drivers/ata/ahci.c
···
 
 	DPRINTK("ENTER\n");
 
-	ahci_stop_engine(ap);
+	hpriv->stop_engine(ap);
 
 	rc = sata_link_hardreset(link, sata_ehc_deb_timing(&link->eh_context),
 				 deadline, &online, NULL);
···
 	bool online;
 	int rc;
 
-	ahci_stop_engine(ap);
+	hpriv->stop_engine(ap);
 
 	/* clear D2H reception area to properly wait for D2H FIS */
 	ata_tf_init(link->device, &tf);
···
 
 	DPRINTK("ENTER\n");
 
-	ahci_stop_engine(ap);
+	hpriv->stop_engine(ap);
 
 	for (i = 0; i < 2; i++) {
 		u16 val;
+7 -1
drivers/ata/ahci.h
···
 	u32			em_msg_type;	/* EM message type */
 	bool			got_runtime_pm; /* Did we do pm_runtime_get? */
 	struct clk		*clks[AHCI_MAX_CLKS]; /* Optional */
-	struct reset_control	*rsts;		/* Optional */
 	struct regulator	**target_pwrs;	/* Optional */
 	/*
 	 * If platform uses PHYs. There is a 1:1 relation between the port number and
···
 	 * be overridden anytime before the host is activated.
 	 */
 	void			(*start_engine)(struct ata_port *ap);
+	/*
+	 * Optional ahci_stop_engine override, if not set this gets set to the
+	 * default ahci_stop_engine during ahci_save_initial_config, this can
+	 * be overridden anytime before the host is activated.
+	 */
+	int			(*stop_engine)(struct ata_port *ap);
+
 	irqreturn_t		(*irq_handler)(int irq, void *dev_instance);
 
 	/* only required for per-port MSI(-X) support */
+56
drivers/ata/ahci_mvebu.c
···
 	writel(0x80, hpriv->mmio + AHCI_VENDOR_SPECIFIC_0_DATA);
 }
 
+/**
+ * ahci_mvebu_stop_engine
+ *
+ * @ap:	Target ata port
+ *
+ * Errata Ref#226 - SATA Disk HOT swap issue when connected through
+ * Port Multiplier in FIS-based Switching mode.
+ *
+ * To avoid the issue, according to design, the bits[11:8, 0] of
+ * register PxFBS are cleared when Port Command and Status (0x18) bit[0]
+ * changes its value from 1 to 0, i.e. falling edge of Port
+ * Command and Status bit[0] sends PULSE that resets PxFBS
+ * bits[11:8; 0].
+ *
+ * This function is used to override function of "ahci_stop_engine"
+ * from libahci.c by adding the mvebu work around(WA) to save PxFBS
+ * value before the PxCMD ST write of 0, then restore PxFBS value.
+ *
+ * Return: 0 on success; Error code otherwise.
+ */
+int ahci_mvebu_stop_engine(struct ata_port *ap)
+{
+	void __iomem *port_mmio = ahci_port_base(ap);
+	u32 tmp, port_fbs;
+
+	tmp = readl(port_mmio + PORT_CMD);
+
+	/* check if the HBA is idle */
+	if ((tmp & (PORT_CMD_START | PORT_CMD_LIST_ON)) == 0)
+		return 0;
+
+	/* save the port PxFBS register for later restore */
+	port_fbs = readl(port_mmio + PORT_FBS);
+
+	/* setting HBA to idle */
+	tmp &= ~PORT_CMD_START;
+	writel(tmp, port_mmio + PORT_CMD);
+
+	/*
+	 * bit #15 PxCMD signal doesn't clear PxFBS,
+	 * restore the PxFBS register right after clearing the PxCMD ST,
+	 * no need to wait for the PxCMD bit #15.
+	 */
+	writel(port_fbs, port_mmio + PORT_FBS);
+
+	/* wait for engine to stop. This could be as long as 500 msec */
+	tmp = ata_wait_register(ap, port_mmio + PORT_CMD,
+				PORT_CMD_LIST_ON, PORT_CMD_LIST_ON, 1, 500);
+	if (tmp & PORT_CMD_LIST_ON)
+		return -EIO;
+
+	return 0;
+}
+
 #ifdef CONFIG_PM_SLEEP
 static int ahci_mvebu_suspend(struct platform_device *pdev, pm_message_t state)
 {
···
 	rc = ahci_platform_enable_resources(hpriv);
 	if (rc)
 		return rc;
+
+	hpriv->stop_engine = ahci_mvebu_stop_engine;
 
 	if (of_device_is_compatible(pdev->dev.of_node,
 				    "marvell,armada-380-ahci")) {
+1 -1
drivers/ata/ahci_qoriq.c
···
 
 	DPRINTK("ENTER\n");
 
-	ahci_stop_engine(ap);
+	hpriv->stop_engine(ap);
 
 	/*
 	 * There is a errata on ls1021a Rev1.0 and Rev2.0 which is:
+2 -2
drivers/ata/ahci_xgene.c
···
 				    PORT_CMD_ISSUE, 0x0, 1, 100))
 		return -EBUSY;
 
-	ahci_stop_engine(ap);
+	hpriv->stop_engine(ap);
 	ahci_start_fis_rx(ap);
 
 	/*
···
 	portrxfis_saved = readl(port_mmio + PORT_FIS_ADDR);
 	portrxfishi_saved = readl(port_mmio + PORT_FIS_ADDR_HI);
 
-	ahci_stop_engine(ap);
+	hpriv->stop_engine(ap);
 
 	rc = xgene_ahci_do_hardreset(link, deadline, &online);
 
+12 -8
drivers/ata/libahci.c
··· 560 560 if (!hpriv->start_engine) 561 561 hpriv->start_engine = ahci_start_engine; 562 562 563 + if (!hpriv->stop_engine) 564 + hpriv->stop_engine = ahci_stop_engine; 565 + 563 566 if (!hpriv->irq_handler) 564 567 hpriv->irq_handler = ahci_single_level_irq_intr; 565 568 } ··· 900 897 static int ahci_deinit_port(struct ata_port *ap, const char **emsg) 901 898 { 902 899 int rc; 900 + struct ahci_host_priv *hpriv = ap->host->private_data; 903 901 904 902 /* disable DMA */ 905 - rc = ahci_stop_engine(ap); 903 + rc = hpriv->stop_engine(ap); 906 904 if (rc) { 907 905 *emsg = "failed to stop engine"; 908 906 return rc; ··· 1314 1310 int busy, rc; 1315 1311 1316 1312 /* stop engine */ 1317 - rc = ahci_stop_engine(ap); 1313 + rc = hpriv->stop_engine(ap); 1318 1314 if (rc) 1319 1315 goto out_restart; 1320 1316 ··· 1553 1549 1554 1550 DPRINTK("ENTER\n"); 1555 1551 1556 - ahci_stop_engine(ap); 1552 + hpriv->stop_engine(ap); 1557 1553 1558 1554 /* clear D2H reception area to properly wait for D2H FIS */ 1559 1555 ata_tf_init(link->device, &tf); ··· 2079 2075 2080 2076 if (!(ap->pflags & ATA_PFLAG_FROZEN)) { 2081 2077 /* restart engine */ 2082 - ahci_stop_engine(ap); 2078 + hpriv->stop_engine(ap); 2083 2079 hpriv->start_engine(ap); 2084 2080 } 2085 2081 2086 2082 sata_pmp_error_handler(ap); 2087 2083 2088 2084 if (!ata_dev_enabled(ap->link.device)) 2089 - ahci_stop_engine(ap); 2085 + hpriv->stop_engine(ap); 2090 2086 } 2091 2087 EXPORT_SYMBOL_GPL(ahci_error_handler); 2092 2088 ··· 2133 2129 return; 2134 2130 2135 2131 /* set DITO, MDAT, DETO and enable DevSlp, need to stop engine first */ 2136 - rc = ahci_stop_engine(ap); 2132 + rc = hpriv->stop_engine(ap); 2137 2133 if (rc) 2138 2134 return; 2139 2135 ··· 2193 2189 return; 2194 2190 } 2195 2191 2196 - rc = ahci_stop_engine(ap); 2192 + rc = hpriv->stop_engine(ap); 2197 2193 if (rc) 2198 2194 return; 2199 2195 ··· 2226 2222 return; 2227 2223 } 2228 2224 2229 - rc = ahci_stop_engine(ap); 2225 + rc = hpriv->stop_engine(ap); 2230 2226 if (rc) 2231 2227 return; 2232 2228
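The libahci hunk above is the pivot of the ahci changes in this merge: ahci_save_initial_config() now installs ahci_stop_engine as a default, and every former direct call goes through hpriv->stop_engine, which is what lets ahci_mvebu, ahci_qoriq, ahci_xgene and sata_highbank interpose or reuse the hook. A minimal sketch of that default-callback pattern in plain C; host_priv, save_initial_config and default_stop_engine are illustrative names, not the kernel API:

#include <stdio.h>

struct host_priv {
        int (*stop_engine)(int port);   /* optional driver override */
};

static int default_stop_engine(int port)
{
        printf("library default: stopping engine on port %d\n", port);
        return 0;
}

static void save_initial_config(struct host_priv *hpriv)
{
        /* Fill in the default only where the driver left the hook unset. */
        if (!hpriv->stop_engine)
                hpriv->stop_engine = default_stop_engine;
}

int main(void)
{
        struct host_priv hpriv = { 0 };

        save_initial_config(&hpriv);
        return hpriv.stop_engine(0);    /* call sites always use the hook */
}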
+3 -21
drivers/ata/libahci_platform.c
··· 25 25 #include <linux/phy/phy.h> 26 26 #include <linux/pm_runtime.h> 27 27 #include <linux/of_platform.h> 28 - #include <linux/reset.h> 29 28 #include "ahci.h" 30 29 31 30 static void ahci_host_stop(struct ata_host *host); ··· 195 196 * following order: 196 197 * 1) Regulator 197 198 * 2) Clocks (through ahci_platform_enable_clks) 198 - * 3) Resets 199 - * 4) Phys 199 + * 3) Phys 200 200 * 201 201 * If resource enabling fails at any point the previous enabled resources 202 202 * are disabled in reverse order. ··· 215 217 if (rc) 216 218 goto disable_regulator; 217 219 218 - rc = reset_control_deassert(hpriv->rsts); 220 + rc = ahci_platform_enable_phys(hpriv); 219 221 if (rc) 220 222 goto disable_clks; 221 223 222 - rc = ahci_platform_enable_phys(hpriv); 223 - if (rc) 224 - goto disable_resets; 225 - 226 224 return 0; 227 - 228 - disable_resets: 229 - reset_control_assert(hpriv->rsts); 230 225 231 226 disable_clks: 232 227 ahci_platform_disable_clks(hpriv); ··· 239 248 * following order: 240 249 * 1) Phys 241 250 * 2) Clocks (through ahci_platform_disable_clks) 242 - * 3) Resets 243 - * 4) Regulator 251 + * 3) Regulator 244 252 */ 245 253 void ahci_platform_disable_resources(struct ahci_host_priv *hpriv) 246 254 { 247 255 ahci_platform_disable_phys(hpriv); 248 - 249 - reset_control_assert(hpriv->rsts); 250 256 251 257 ahci_platform_disable_clks(hpriv); 252 258 ··· 391 403 break; 392 404 } 393 405 hpriv->clks[i] = clk; 394 - } 395 - 396 - hpriv->rsts = devm_reset_control_array_get_optional_shared(dev); 397 - if (IS_ERR(hpriv->rsts)) { 398 - rc = PTR_ERR(hpriv->rsts); 399 - goto err_out; 400 406 } 401 407 402 408 hpriv->nports = child_nodes = of_get_child_count(dev->of_node);
+6
drivers/ata/libata-core.c
··· 4549 4549 ATA_HORKAGE_ZERO_AFTER_TRIM | 4550 4550 ATA_HORKAGE_NOLPM, }, 4551 4551 4552 + /* This specific Samsung model/firmware-rev does not handle LPM well */ 4553 + { "SAMSUNG MZMPC128HBFU-000MV", "CXM14M1Q", ATA_HORKAGE_NOLPM, }, 4554 + 4555 + /* Sandisk devices which are known to not handle LPM well */ 4556 + { "SanDisk SD7UB3Q*G1001", NULL, ATA_HORKAGE_NOLPM, }, 4557 + 4552 4558 /* devices that don't properly handle queued TRIM commands */ 4553 4559 { "Micron_M500_*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4554 4560 ATA_HORKAGE_ZERO_AFTER_TRIM, },
+2 -2
drivers/ata/libata-eh.c
··· 175 175 { } 176 176 #endif /* CONFIG_PM */ 177 177 178 - static void __ata_ehi_pushv_desc(struct ata_eh_info *ehi, const char *fmt, 179 - va_list args) 178 + static __printf(2, 0) void __ata_ehi_pushv_desc(struct ata_eh_info *ehi, 179 + const char *fmt, va_list args) 180 180 { 181 181 ehi->desc_len += vscnprintf(ehi->desc + ehi->desc_len, 182 182 ATA_EH_DESC_LEN - ehi->desc_len,
+1 -1
drivers/ata/sata_highbank.c
··· 410 410 int rc; 411 411 int retry = 100; 412 412 413 - ahci_stop_engine(ap); 413 + hpriv->stop_engine(ap); 414 414 415 415 /* clear D2H reception area to properly wait for D2H FIS */ 416 416 ata_tf_init(link->device, &tf);
+2 -2
drivers/ata/sata_sil24.c
··· 285 285 [PORT_CERR_INCONSISTENT] = { AC_ERR_HSM, ATA_EH_RESET, 286 286 "protocol mismatch" }, 287 287 [PORT_CERR_DIRECTION] = { AC_ERR_HSM, ATA_EH_RESET, 288 - "data directon mismatch" }, 288 + "data direction mismatch" }, 289 289 [PORT_CERR_UNDERRUN] = { AC_ERR_HSM, ATA_EH_RESET, 290 290 "ran out of SGEs while writing" }, 291 291 [PORT_CERR_OVERRUN] = { AC_ERR_HSM, ATA_EH_RESET, 292 292 "ran out of SGEs while reading" }, 293 293 [PORT_CERR_PKT_PROT] = { AC_ERR_HSM, ATA_EH_RESET, 294 - "invalid data directon for ATAPI CDB" }, 294 + "invalid data direction for ATAPI CDB" }, 295 295 [PORT_CERR_SGT_BOUNDARY] = { AC_ERR_SYSTEM, ATA_EH_RESET, 296 296 "SGT not on qword boundary" }, 297 297 [PORT_CERR_SGT_TGTABRT] = { AC_ERR_HOST_BUS, ATA_EH_RESET,
+1 -1
drivers/atm/firestream.c
··· 191 191 "reserved 37", 192 192 "reserved 38", 193 193 "reserved 39", 194 - "reseverd 40", 194 + "reserved 40", 195 195 "reserved 41", 196 196 "reserved 42", 197 197 "reserved 43",
+3
drivers/atm/zatm.c
··· 28 28 #include <asm/io.h> 29 29 #include <linux/atomic.h> 30 30 #include <linux/uaccess.h> 31 + #include <linux/nospec.h> 31 32 32 33 #include "uPD98401.h" 33 34 #include "uPD98402.h" ··· 1459 1458 return -EFAULT; 1460 1459 if (pool < 0 || pool > ZATM_LAST_POOL) 1461 1460 return -EINVAL; 1461 + pool = array_index_nospec(pool, 1462 + ZATM_LAST_POOL + 1); 1462 1463 spin_lock_irqsave(&zatm_dev->lock, flags); 1463 1464 info = zatm_dev->pool_info[pool]; 1464 1465 if (cmd == ZATM_GETPOOLZ) {
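The zatm hunk is a Spectre-v1 (bounds-check bypass) hardening: pool arrives from user space, and even after the range check the CPU may speculate past it, so array_index_nospec() clamps the value before it indexes pool_info. The same pattern in isolation, as a kernel-C sketch (table and NR_ENTRIES are illustrative):

#include <linux/errno.h>
#include <linux/nospec.h>

#define NR_ENTRIES 8

static int table[NR_ENTRIES];

static int read_entry(int idx)
{
        if (idx < 0 || idx >= NR_ENTRIES)
                return -EINVAL;
        /* Clamp idx so it cannot index past table[] even if the CPU
         * speculates beyond the bounds check above. */
        idx = array_index_nospec(idx, NR_ENTRIES);
        return table[idx];
}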
+3 -1
drivers/block/rbd.c
··· 2366 2366 osd_req_op_cls_init(obj_req->osd_req, 0, CEPH_OSD_OP_CALL, "rbd", 2367 2367 "copyup"); 2368 2368 osd_req_op_cls_request_data_bvecs(obj_req->osd_req, 0, 2369 - obj_req->copyup_bvecs, bytes); 2369 + obj_req->copyup_bvecs, 2370 + obj_req->copyup_bvec_count, 2371 + bytes); 2370 2372 2371 2373 switch (obj_req->img_request->op_type) { 2372 2374 case OBJ_OP_WRITE:
+15 -4
drivers/bluetooth/btusb.c
··· 231 231 { USB_DEVICE(0x0930, 0x0227), .driver_info = BTUSB_ATH3012 }, 232 232 { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 }, 233 233 { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 }, 234 + { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 }, 234 235 { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 }, 235 236 { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 }, 236 237 { USB_DEVICE(0x0cf3, 0x311e), .driver_info = BTUSB_ATH3012 }, ··· 264 263 { USB_DEVICE(0x0489, 0xe03c), .driver_info = BTUSB_ATH3012 }, 265 264 266 265 /* QCA ROME chipset */ 267 - { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_QCA_ROME }, 268 266 { USB_DEVICE(0x0cf3, 0xe007), .driver_info = BTUSB_QCA_ROME }, 269 267 { USB_DEVICE(0x0cf3, 0xe009), .driver_info = BTUSB_QCA_ROME }, 270 268 { USB_DEVICE(0x0cf3, 0xe010), .driver_info = BTUSB_QCA_ROME }, ··· 397 397 .matches = { 398 398 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 399 399 DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 3060"), 400 + }, 401 + }, 402 + { 403 + /* Dell XPS 9360 (QCA ROME device 0cf3:e300) */ 404 + .matches = { 405 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 406 + DMI_MATCH(DMI_PRODUCT_NAME, "XPS 13 9360"), 400 407 }, 401 408 }, 402 409 {} ··· 2859 2852 } 2860 2853 #endif 2861 2854 2855 + static void btusb_check_needs_reset_resume(struct usb_interface *intf) 2856 + { 2857 + if (dmi_check_system(btusb_needs_reset_resume_table)) 2858 + interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME; 2859 + } 2860 + 2862 2861 static int btusb_probe(struct usb_interface *intf, 2863 2862 const struct usb_device_id *id) 2864 2863 { ··· 2987 2974 hdev->send = btusb_send_frame; 2988 2975 hdev->notify = btusb_notify; 2989 2976 2990 - if (dmi_check_system(btusb_needs_reset_resume_table)) 2991 - interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME; 2992 - 2993 2977 #ifdef CONFIG_PM 2994 2978 err = btusb_config_oob_wake(hdev); 2995 2979 if (err) ··· 3074 3064 data->setup_on_usb = btusb_setup_qca; 3075 3065 hdev->set_bdaddr = btusb_set_bdaddr_ath3012; 3076 3066 set_bit(HCI_QUIRK_SIMULTANEOUS_DISCOVERY, &hdev->quirks); 3067 + btusb_check_needs_reset_resume(intf); 3077 3068 } 3078 3069 3079 3070 #ifdef CONFIG_BT_HCIBTUSB_RTL
+2 -2
drivers/char/agp/uninorth-agp.c
··· 195 195 return 0; 196 196 } 197 197 198 - int uninorth_remove_memory(struct agp_memory *mem, off_t pg_start, int type) 198 + static int uninorth_remove_memory(struct agp_memory *mem, off_t pg_start, int type) 199 199 { 200 200 size_t i; 201 201 u32 *gp; ··· 470 470 return 0; 471 471 } 472 472 473 - void null_cache_flush(void) 473 + static void null_cache_flush(void) 474 474 { 475 475 mb(); 476 476 }
+1 -1
drivers/clk/clk-cs2000-cp.c
··· 541 541 return ret; 542 542 } 543 543 544 - static int cs2000_resume(struct device *dev) 544 + static int __maybe_unused cs2000_resume(struct device *dev) 545 545 { 546 546 struct cs2000_priv *priv = dev_get_drvdata(dev); 547 547
+9 -1
drivers/clk/clk-mux.c
··· 112 112 return 0; 113 113 } 114 114 115 + static int clk_mux_determine_rate(struct clk_hw *hw, 116 + struct clk_rate_request *req) 117 + { 118 + struct clk_mux *mux = to_clk_mux(hw); 119 + 120 + return clk_mux_determine_rate_flags(hw, req, mux->flags); 121 + } 122 + 115 123 const struct clk_ops clk_mux_ops = { 116 124 .get_parent = clk_mux_get_parent, 117 125 .set_parent = clk_mux_set_parent, 118 - .determine_rate = __clk_mux_determine_rate, 126 + .determine_rate = clk_mux_determine_rate, 119 127 }; 120 128 EXPORT_SYMBOL_GPL(clk_mux_ops); 121 129
+23 -31
drivers/clk/clk-stm32mp1.c
··· 216 216 "pclk5", "pll3_q", "ck_hsi", "ck_csi", "pll4_q", "ck_hse" 217 217 }; 218 218 219 - const char * const usart234578_src[] = { 219 + static const char * const usart234578_src[] = { 220 220 "pclk1", "pll4_q", "ck_hsi", "ck_csi", "ck_hse" 221 221 }; 222 222 223 223 static const char * const usart6_src[] = { 224 224 "pclk2", "pll4_q", "ck_hsi", "ck_csi", "ck_hse" 225 - }; 226 - 227 - static const char * const dfsdm_src[] = { 228 - "pclk2", "ck_mcu" 229 225 }; 230 226 231 227 static const char * const fdcan_src[] = { ··· 312 316 struct clock_config { 313 317 u32 id; 314 318 const char *name; 315 - union { 316 - const char *parent_name; 317 - const char * const *parent_names; 318 - }; 319 + const char *parent_name; 320 + const char * const *parent_names; 319 321 int num_parents; 320 322 unsigned long flags; 321 323 void *cfg; ··· 463 469 } 464 470 } 465 471 466 - const struct clk_ops mp1_gate_clk_ops = { 472 + static const struct clk_ops mp1_gate_clk_ops = { 467 473 .enable = mp1_gate_clk_enable, 468 474 .disable = mp1_gate_clk_disable, 469 475 .is_enabled = clk_gate_is_enabled, ··· 692 698 mp1_gate_clk_disable(hw); 693 699 } 694 700 695 - const struct clk_ops mp1_mgate_clk_ops = { 701 + static const struct clk_ops mp1_mgate_clk_ops = { 696 702 .enable = mp1_mgate_clk_enable, 697 703 .disable = mp1_mgate_clk_disable, 698 704 .is_enabled = clk_gate_is_enabled, ··· 726 732 return 0; 727 733 } 728 734 729 - const struct clk_ops clk_mmux_ops = { 735 + static const struct clk_ops clk_mmux_ops = { 730 736 .get_parent = clk_mmux_get_parent, 731 737 .set_parent = clk_mmux_set_parent, 732 738 .determine_rate = __clk_mux_determine_rate, ··· 1042 1048 u32 offset; 1043 1049 }; 1044 1050 1045 - struct clk_hw *_clk_register_pll(struct device *dev, 1046 - struct clk_hw_onecell_data *clk_data, 1047 - void __iomem *base, spinlock_t *lock, 1048 - const struct clock_config *cfg) 1051 + static struct clk_hw *_clk_register_pll(struct device *dev, 1052 + struct clk_hw_onecell_data *clk_data, 1053 + void __iomem *base, spinlock_t *lock, 1054 + const struct clock_config *cfg) 1049 1055 { 1050 1056 struct stm32_pll_cfg *stm_pll_cfg = cfg->cfg; 1051 1057 ··· 1399 1405 G_USBH, 1400 1406 G_ETHSTP, 1401 1407 G_RTCAPB, 1402 - G_TZC, 1408 + G_TZC1, 1409 + G_TZC2, 1403 1410 G_TZPC, 1404 1411 G_IWDG1, 1405 1412 G_BSEC, ··· 1412 1417 G_LAST 1413 1418 }; 1414 1419 1415 - struct stm32_mgate mp1_mgate[G_LAST]; 1420 + static struct stm32_mgate mp1_mgate[G_LAST]; 1416 1421 1417 1422 #define _K_GATE(_id, _gate_offset, _gate_bit_idx, _gate_flags,\ 1418 1423 _mgate, _ops)\ ··· 1435 1440 &mp1_mgate[_id], &mp1_mgate_clk_ops) 1436 1441 1437 1442 /* Peripheral gates */ 1438 - struct stm32_gate_cfg per_gate_cfg[G_LAST] = { 1443 + static struct stm32_gate_cfg per_gate_cfg[G_LAST] = { 1439 1444 /* Multi gates */ 1440 1445 K_GATE(G_MDIO, RCC_APB1ENSETR, 31, 0), 1441 1446 K_MGATE(G_DAC12, RCC_APB1ENSETR, 29, 0), ··· 1501 1506 K_GATE(G_BSEC, RCC_APB5ENSETR, 16, 0), 1502 1507 K_GATE(G_IWDG1, RCC_APB5ENSETR, 15, 0), 1503 1508 K_GATE(G_TZPC, RCC_APB5ENSETR, 13, 0), 1504 - K_GATE(G_TZC, RCC_APB5ENSETR, 12, 0), 1509 + K_GATE(G_TZC2, RCC_APB5ENSETR, 12, 0), 1510 + K_GATE(G_TZC1, RCC_APB5ENSETR, 11, 0), 1505 1511 K_GATE(G_RTCAPB, RCC_APB5ENSETR, 8, 0), 1506 1512 K_MGATE(G_USART1, RCC_APB5ENSETR, 4, 0), 1507 1513 K_MGATE(G_I2C6, RCC_APB5ENSETR, 3, 0), ··· 1596 1600 M_LAST 1597 1601 }; 1598 1602 1599 - struct stm32_mmux ker_mux[M_LAST]; 1603 + static struct stm32_mmux ker_mux[M_LAST]; 1600 1604 1601 1605 #define _K_MUX(_id, _offset, _shift, 
_width, _mux_flags, _mmux, _ops)\ 1602 1606 [_id] = {\ ··· 1619 1623 _K_MUX(_id, _offset, _shift, _width, _mux_flags,\ 1620 1624 &ker_mux[_id], &clk_mmux_ops) 1621 1625 1622 - const struct stm32_mux_cfg ker_mux_cfg[M_LAST] = { 1626 + static const struct stm32_mux_cfg ker_mux_cfg[M_LAST] = { 1623 1627 /* Kernel multi mux */ 1624 1628 K_MMUX(M_SDMMC12, RCC_SDMMC12CKSELR, 0, 3, 0), 1625 1629 K_MMUX(M_SPI23, RCC_SPI2S23CKSELR, 0, 3, 0), ··· 1856 1860 PCLK(USART1, "usart1", "pclk5", 0, G_USART1), 1857 1861 PCLK(RTCAPB, "rtcapb", "pclk5", CLK_IGNORE_UNUSED | 1858 1862 CLK_IS_CRITICAL, G_RTCAPB), 1859 - PCLK(TZC, "tzc", "pclk5", CLK_IGNORE_UNUSED, G_TZC), 1863 + PCLK(TZC1, "tzc1", "ck_axi", CLK_IGNORE_UNUSED, G_TZC1), 1864 + PCLK(TZC2, "tzc2", "ck_axi", CLK_IGNORE_UNUSED, G_TZC2), 1860 1865 PCLK(TZPC, "tzpc", "pclk5", CLK_IGNORE_UNUSED, G_TZPC), 1861 1866 PCLK(IWDG1, "iwdg1", "pclk5", 0, G_IWDG1), 1862 1867 PCLK(BSEC, "bsec", "pclk5", CLK_IGNORE_UNUSED, G_BSEC), ··· 1913 1916 KCLK(RNG1_K, "rng1_k", rng_src, 0, G_RNG1, M_RNG1), 1914 1917 KCLK(RNG2_K, "rng2_k", rng_src, 0, G_RNG2, M_RNG2), 1915 1918 KCLK(USBPHY_K, "usbphy_k", usbphy_src, 0, G_USBPHY, M_USBPHY), 1916 - KCLK(STGEN_K, "stgen_k", stgen_src, CLK_IGNORE_UNUSED, 1917 - G_STGEN, M_STGEN), 1919 + KCLK(STGEN_K, "stgen_k", stgen_src, CLK_IS_CRITICAL, G_STGEN, M_STGEN), 1918 1920 KCLK(SPDIF_K, "spdif_k", spdif_src, 0, G_SPDIF, M_SPDIF), 1919 1921 KCLK(SPI1_K, "spi1_k", spi123_src, 0, G_SPI1, M_SPI1), 1920 1922 KCLK(SPI2_K, "spi2_k", spi123_src, 0, G_SPI2, M_SPI23), ··· 1944 1948 KCLK(FDCAN_K, "fdcan_k", fdcan_src, 0, G_FDCAN, M_FDCAN), 1945 1949 KCLK(SAI1_K, "sai1_k", sai_src, 0, G_SAI1, M_SAI1), 1946 1950 KCLK(SAI2_K, "sai2_k", sai2_src, 0, G_SAI2, M_SAI2), 1947 - KCLK(SAI3_K, "sai3_k", sai_src, 0, G_SAI2, M_SAI3), 1948 - KCLK(SAI4_K, "sai4_k", sai_src, 0, G_SAI2, M_SAI4), 1951 + KCLK(SAI3_K, "sai3_k", sai_src, 0, G_SAI3, M_SAI3), 1952 + KCLK(SAI4_K, "sai4_k", sai_src, 0, G_SAI4, M_SAI4), 1949 1953 KCLK(ADC12_K, "adc12_k", adc12_src, 0, G_ADC12, M_ADC12), 1950 1954 KCLK(DSI_K, "dsi_k", dsi_src, 0, G_DSI, M_DSI), 1951 1955 KCLK(ADFSDM_K, "adfsdm_k", sai_src, 0, G_ADFSDM, M_SAI1), ··· 1988 1992 _DIV(RCC_MCO2CFGR, 4, 4, 0, NULL)), 1989 1993 1990 1994 /* Debug clocks */ 1991 - FIXED_FACTOR(NO_ID, "ck_axi_div2", "ck_axi", 0, 1, 2), 1992 - 1993 - GATE(DBG, "ck_apb_dbg", "ck_axi_div2", 0, RCC_DBGCFGR, 8, 0), 1994 - 1995 1995 GATE(CK_DBG, "ck_sys_dbg", "ck_axi", 0, RCC_DBGCFGR, 8, 0), 1996 1996 1997 1997 COMPOSITE(CK_TRACE, "ck_trace", ck_trace_src, CLK_OPS_PARENT_ENABLE,
+4 -3
drivers/clk/clk.c
··· 426 426 return now <= rate && now > best; 427 427 } 428 428 429 - static int 430 - clk_mux_determine_rate_flags(struct clk_hw *hw, struct clk_rate_request *req, 431 - unsigned long flags) 429 + int clk_mux_determine_rate_flags(struct clk_hw *hw, 430 + struct clk_rate_request *req, 431 + unsigned long flags) 432 432 { 433 433 struct clk_core *core = hw->core, *parent, *best_parent = NULL; 434 434 int i, num_parents, ret; ··· 488 488 489 489 return 0; 490 490 } 491 + EXPORT_SYMBOL_GPL(clk_mux_determine_rate_flags); 491 492 492 493 struct clk *__clk_lookup(const char *name) 493 494 {
+10 -1
drivers/clk/meson/clk-regmap.c
··· 153 153 val << mux->shift); 154 154 } 155 155 156 + static int clk_regmap_mux_determine_rate(struct clk_hw *hw, 157 + struct clk_rate_request *req) 158 + { 159 + struct clk_regmap *clk = to_clk_regmap(hw); 160 + struct clk_regmap_mux_data *mux = clk_get_regmap_mux_data(clk); 161 + 162 + return clk_mux_determine_rate_flags(hw, req, mux->flags); 163 + } 164 + 156 165 const struct clk_ops clk_regmap_mux_ops = { 157 166 .get_parent = clk_regmap_mux_get_parent, 158 167 .set_parent = clk_regmap_mux_set_parent, 159 - .determine_rate = __clk_mux_determine_rate, 168 + .determine_rate = clk_regmap_mux_determine_rate, 160 169 }; 161 170 EXPORT_SYMBOL_GPL(clk_regmap_mux_ops); 162 171
-2
drivers/clk/meson/gxbb-aoclk.h
··· 17 17 #define AO_RTC_ALT_CLK_CNTL0 0x94 18 18 #define AO_RTC_ALT_CLK_CNTL1 0x98 19 19 20 - extern const struct clk_ops meson_aoclk_gate_regmap_ops; 21 - 22 20 struct aoclk_cec_32k { 23 21 struct clk_hw hw; 24 22 struct regmap *regmap;
+3 -2
drivers/clk/meson/meson8b.c
··· 253 253 .mult = 1, 254 254 .div = 3, 255 255 .hw.init = &(struct clk_init_data){ 256 - .name = "fclk_div_div3", 256 + .name = "fclk_div3_div", 257 257 .ops = &clk_fixed_factor_ops, 258 258 .parent_names = (const char *[]){ "fixed_pll" }, 259 259 .num_parents = 1, ··· 632 632 .hw.init = &(struct clk_init_data){ 633 633 .name = "cpu_clk", 634 634 .ops = &clk_regmap_mux_ro_ops, 635 - .parent_names = (const char *[]){ "xtal", "cpu_out_sel" }, 635 + .parent_names = (const char *[]){ "xtal", 636 + "cpu_scale_out_sel" }, 636 637 .num_parents = 2, 637 638 .flags = (CLK_SET_RATE_PARENT | 638 639 CLK_SET_RATE_NO_REPARENT),
+44 -2
drivers/cpufreq/cppc_cpufreq.c
··· 126 126 cpu->perf_caps.lowest_perf, cpu_num, ret); 127 127 } 128 128 129 + /* 130 + * The PCC subspace describes the rate at which the platform can accept commands 131 + * on the shared PCC channel (including READs which do not count towards freq 132 + * transition requests), so ideally we need to use the PCC values as a fallback 133 + * if we don't have a platform-specific transition_delay_us 134 + */ 135 + #ifdef CONFIG_ARM64 136 + #include <asm/cputype.h> 137 + 138 + static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu) 139 + { 140 + unsigned long implementor = read_cpuid_implementor(); 141 + unsigned long part_num = read_cpuid_part_number(); 142 + unsigned int delay_us = 0; 143 + 144 + switch (implementor) { 145 + case ARM_CPU_IMP_QCOM: 146 + switch (part_num) { 147 + case QCOM_CPU_PART_FALKOR_V1: 148 + case QCOM_CPU_PART_FALKOR: 149 + delay_us = 10000; 150 + break; 151 + default: 152 + delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC; 153 + break; 154 + } 155 + break; 156 + default: 157 + delay_us = cppc_get_transition_latency(cpu) / NSEC_PER_USEC; 158 + break; 159 + } 160 + 161 + return delay_us; 162 + } 163 + 164 + #else 165 + 166 + static unsigned int cppc_cpufreq_get_transition_delay_us(int cpu) 167 + { 168 + return cppc_get_transition_latency(cpu) / NSEC_PER_USEC; 169 + } 170 + #endif 171 + 129 172 static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy) 130 173 { 131 174 struct cppc_cpudata *cpu; ··· 205 162 cpu->perf_caps.highest_perf; 206 163 policy->cpuinfo.max_freq = cppc_dmi_max_khz; 207 164 208 - policy->transition_delay_us = cppc_get_transition_latency(cpu_num) / 209 - NSEC_PER_USEC; 165 + policy->transition_delay_us = cppc_cpufreq_get_transition_delay_us(cpu_num); 210 166 policy->shared_type = cpu->shared_type; 211 167 212 168 if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
+1 -1
drivers/gpio/gpio-aspeed.c
··· 384 384 if (set) 385 385 reg |= bit; 386 386 else 387 - reg &= bit; 387 + reg &= ~bit; 388 388 iowrite32(reg, addr); 389 389 390 390 spin_unlock_irqrestore(&gpio->lock, flags);
+4 -4
drivers/gpio/gpio-pci-idio-16.c
··· 116 116 unsigned long word_mask; 117 117 const unsigned long port_mask = GENMASK(gpio_reg_size - 1, 0); 118 118 unsigned long port_state; 119 - u8 __iomem ports[] = { 120 - idio16gpio->reg->out0_7, idio16gpio->reg->out8_15, 121 - idio16gpio->reg->in0_7, idio16gpio->reg->in8_15, 119 + void __iomem *ports[] = { 120 + &idio16gpio->reg->out0_7, &idio16gpio->reg->out8_15, 121 + &idio16gpio->reg->in0_7, &idio16gpio->reg->in8_15, 122 122 }; 123 123 124 124 /* clear bits array to a clean slate */ ··· 143 143 } 144 144 145 145 /* read bits from current gpio port */ 146 - port_state = ioread8(ports + i); 146 + port_state = ioread8(ports[i]); 147 147 148 148 /* store acquired bits at respective bits array offset */ 149 149 bits[word_index] |= port_state << word_offset;
+11 -11
drivers/gpio/gpio-pcie-idio-24.c
··· 206 206 unsigned long word_mask; 207 207 const unsigned long port_mask = GENMASK(gpio_reg_size - 1, 0); 208 208 unsigned long port_state; 209 - u8 __iomem ports[] = { 210 - idio24gpio->reg->out0_7, idio24gpio->reg->out8_15, 211 - idio24gpio->reg->out16_23, idio24gpio->reg->in0_7, 212 - idio24gpio->reg->in8_15, idio24gpio->reg->in16_23, 209 + void __iomem *ports[] = { 210 + &idio24gpio->reg->out0_7, &idio24gpio->reg->out8_15, 211 + &idio24gpio->reg->out16_23, &idio24gpio->reg->in0_7, 212 + &idio24gpio->reg->in8_15, &idio24gpio->reg->in16_23, 213 213 }; 214 214 const unsigned long out_mode_mask = BIT(1); 215 215 ··· 217 217 bitmap_zero(bits, chip->ngpio); 218 218 219 219 /* get bits are evaluated a gpio port register at a time */ 220 - for (i = 0; i < ARRAY_SIZE(ports); i++) { 220 + for (i = 0; i < ARRAY_SIZE(ports) + 1; i++) { 221 221 /* gpio offset in bits array */ 222 222 bits_offset = i * gpio_reg_size; 223 223 ··· 236 236 237 237 /* read bits from current gpio port (port 6 is TTL GPIO) */ 238 238 if (i < 6) 239 - port_state = ioread8(ports + i); 239 + port_state = ioread8(ports[i]); 240 240 else if (ioread8(&idio24gpio->reg->ctl) & out_mode_mask) 241 241 port_state = ioread8(&idio24gpio->reg->ttl_out0_7); 242 242 else ··· 301 301 const unsigned long port_mask = GENMASK(gpio_reg_size, 0); 302 302 unsigned long flags; 303 303 unsigned int out_state; 304 - u8 __iomem ports[] = { 305 - idio24gpio->reg->out0_7, idio24gpio->reg->out8_15, 306 - idio24gpio->reg->out16_23 304 + void __iomem *ports[] = { 305 + &idio24gpio->reg->out0_7, &idio24gpio->reg->out8_15, 306 + &idio24gpio->reg->out16_23 307 307 }; 308 308 const unsigned long out_mode_mask = BIT(1); 309 309 const unsigned int ttl_offset = 48; ··· 327 327 raw_spin_lock_irqsave(&idio24gpio->lock, flags); 328 328 329 329 /* process output lines */ 330 - out_state = ioread8(ports + i) & ~gpio_mask; 330 + out_state = ioread8(ports[i]) & ~gpio_mask; 331 331 out_state |= (*bits >> bits_offset) & gpio_mask; 332 - iowrite8(out_state, ports + i); 332 + iowrite8(out_state, ports[i]); 333 333 334 334 raw_spin_unlock_irqrestore(&idio24gpio->lock, flags); 335 335 }
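This fix, like the gpio-pci-idio-16 one above it, corrects the same mistake: declaring the register list as u8 __iomem ports[] copied register values onto the stack and then did pointer arithmetic on that copy, rather than reading through each register's MMIO address. A sketch of the corrected shape, assuming an ioremapped register block (struct regs and read_ports are illustrative):

#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/types.h>

struct regs {
        u8 out0_7;
        u8 out8_15;
};

static void read_ports(struct regs __iomem *reg, u8 *vals)
{
        /* Array of register *addresses*; the broken version stored
         * register values in a u8 array and offset into that. */
        void __iomem *ports[] = { &reg->out0_7, &reg->out8_15 };
        size_t i;

        for (i = 0; i < ARRAY_SIZE(ports); i++)
                vals[i] = ioread8(ports[i]);    /* index, don't do ports + i */
}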
+4 -3
drivers/gpio/gpiolib.c
··· 497 497 struct gpiohandle_request handlereq; 498 498 struct linehandle_state *lh; 499 499 struct file *file; 500 - int fd, i, ret; 500 + int fd, i, count = 0, ret; 501 501 u32 lflags; 502 502 503 503 if (copy_from_user(&handlereq, ip, sizeof(handlereq))) ··· 558 558 if (ret) 559 559 goto out_free_descs; 560 560 lh->descs[i] = desc; 561 + count = i; 561 562 562 563 if (lflags & GPIOHANDLE_REQUEST_ACTIVE_LOW) 563 564 set_bit(FLAG_ACTIVE_LOW, &desc->flags); ··· 629 628 out_put_unused_fd: 630 629 put_unused_fd(fd); 631 630 out_free_descs: 632 - for (; i >= 0; i--) 631 + for (i = 0; i < count; i++) 633 632 gpiod_free(lh->descs[i]); 634 633 kfree(lh->label); 635 634 out_free_lh: ··· 903 902 desc = &gdev->descs[offset]; 904 903 ret = gpiod_request(desc, le->label); 905 904 if (ret) 906 - goto out_free_desc; 905 + goto out_free_label; 907 906 le->desc = desc; 908 907 le->eflags = eflags; 909 908
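The linehandle_create() fix replaces a backwards-walking cleanup loop, which could free a descriptor slot that was never successfully requested, with an explicit count of completed requests. The general unwind shape, sketched in plain C with hypothetical acquire()/release() helpers:

struct res;
int acquire(struct res *r);             /* hypothetical helpers */
void release(struct res *r);

static int acquire_all(struct res *r[], int n)
{
        int i, count = 0, ret;

        for (i = 0; i < n; i++) {
                ret = acquire(r[i]);
                if (ret)
                        goto err;
                count = i + 1;          /* r[0..count-1] are now held */
        }
        return 0;

err:
        for (i = 0; i < count; i++)     /* unwind only what succeeded */
                release(r[i]);
        return ret;
}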
+1 -1
drivers/gpu/drm/amd/display/modules/color/color_gamma.c
··· 1451 1451 1452 1452 kfree(rgb_regamma); 1453 1453 rgb_regamma_alloc_fail: 1454 - kfree(rgb_user); 1454 + kvfree(rgb_user); 1455 1455 rgb_user_alloc_fail: 1456 1456 return ret; 1457 1457 }
+1
drivers/gpu/drm/bridge/Kconfig
··· 84 84 tristate "Silicon Image SII8620 HDMI/MHL bridge" 85 85 depends on OF && RC_CORE 86 86 select DRM_KMS_HELPER 87 + imply EXTCON 87 88 help 88 89 Silicon Image SII8620 HDMI/MHL bridge chip driver. 89 90
+8
drivers/gpu/drm/drm_atomic.c
··· 155 155 state->connectors[i].state); 156 156 state->connectors[i].ptr = NULL; 157 157 state->connectors[i].state = NULL; 158 + state->connectors[i].old_state = NULL; 159 + state->connectors[i].new_state = NULL; 158 160 drm_connector_put(connector); 159 161 } 160 162 ··· 171 169 172 170 state->crtcs[i].ptr = NULL; 173 171 state->crtcs[i].state = NULL; 172 + state->crtcs[i].old_state = NULL; 173 + state->crtcs[i].new_state = NULL; 174 174 } 175 175 176 176 for (i = 0; i < config->num_total_plane; i++) { ··· 185 181 state->planes[i].state); 186 182 state->planes[i].ptr = NULL; 187 183 state->planes[i].state = NULL; 184 + state->planes[i].old_state = NULL; 185 + state->planes[i].new_state = NULL; 188 186 } 189 187 190 188 for (i = 0; i < state->num_private_objs; i++) { ··· 196 190 state->private_objs[i].state); 197 191 state->private_objs[i].ptr = NULL; 198 192 state->private_objs[i].state = NULL; 193 + state->private_objs[i].old_state = NULL; 194 + state->private_objs[i].new_state = NULL; 199 195 } 200 196 state->num_private_objs = 0; 201 197
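The drm_atomic hunk extends an existing teardown path: besides dropping each object's reference and clearing ->ptr/->state, it now also clears the cached ->old_state/->new_state aliases so a reused atomic state cannot hand helpers a pointer into freed memory. The shape of that defensive pattern, sketched with illustrative names:

struct obj;
void obj_put(struct obj *o);            /* hypothetical refcount drop */

struct slot {
        struct obj *ptr;
        struct obj *state;
        struct obj *old_state;
        struct obj *new_state;
};

static void clear_slot(struct slot *s)
{
        obj_put(s->ptr);
        s->ptr = NULL;
        s->state = NULL;
        s->old_state = NULL;            /* stale aliases must not survive */
        s->new_state = NULL;
}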
+1
drivers/gpu/drm/drm_file.c
··· 212 212 return -ENOMEM; 213 213 214 214 filp->private_data = priv; 215 + filp->f_mode |= FMODE_UNSIGNED_OFFSET; 215 216 priv->filp = filp; 216 217 priv->pid = get_pid(task_pid(current)); 217 218 priv->minor = minor;
-1
drivers/gpu/drm/nouveau/nouveau_bo.c
··· 214 214 INIT_LIST_HEAD(&nvbo->entry); 215 215 INIT_LIST_HEAD(&nvbo->vma_list); 216 216 nvbo->bo.bdev = &drm->ttm.bdev; 217 - nvbo->cli = cli; 218 217 219 218 /* This is confusing, and doesn't actually mean we want an uncached 220 219 * mapping, but is what NOUVEAU_GEM_DOMAIN_COHERENT gets translated
-2
drivers/gpu/drm/nouveau/nouveau_bo.h
··· 26 26 27 27 struct list_head vma_list; 28 28 29 - struct nouveau_cli *cli; 30 - 31 29 unsigned contig:1; 32 30 unsigned page:5; 33 31 unsigned kind:8;
+3 -3
drivers/gpu/drm/nouveau/nouveau_ttm.c
··· 63 63 struct ttm_mem_reg *reg) 64 64 { 65 65 struct nouveau_bo *nvbo = nouveau_bo(bo); 66 - struct nouveau_drm *drm = nvbo->cli->drm; 66 + struct nouveau_drm *drm = nouveau_bdev(bo->bdev); 67 67 struct nouveau_mem *mem; 68 68 int ret; 69 69 ··· 103 103 struct ttm_mem_reg *reg) 104 104 { 105 105 struct nouveau_bo *nvbo = nouveau_bo(bo); 106 - struct nouveau_drm *drm = nvbo->cli->drm; 106 + struct nouveau_drm *drm = nouveau_bdev(bo->bdev); 107 107 struct nouveau_mem *mem; 108 108 int ret; 109 109 ··· 131 131 struct ttm_mem_reg *reg) 132 132 { 133 133 struct nouveau_bo *nvbo = nouveau_bo(bo); 134 - struct nouveau_drm *drm = nvbo->cli->drm; 134 + struct nouveau_drm *drm = nouveau_bdev(bo->bdev); 135 135 struct nouveau_mem *mem; 136 136 int ret; 137 137
+3 -4
drivers/gpu/drm/nouveau/nv50_display.c
··· 3264 3264 3265 3265 drm_connector_unregister(&mstc->connector); 3266 3266 3267 - drm_modeset_lock_all(drm->dev); 3268 3267 drm_fb_helper_remove_one_connector(&drm->fbcon->helper, &mstc->connector); 3268 + 3269 + drm_modeset_lock(&drm->dev->mode_config.connection_mutex, NULL); 3269 3270 mstc->port = NULL; 3270 - drm_modeset_unlock_all(drm->dev); 3271 + drm_modeset_unlock(&drm->dev->mode_config.connection_mutex); 3271 3272 3272 3273 drm_connector_unreference(&mstc->connector); 3273 3274 } ··· 3278 3277 { 3279 3278 struct nouveau_drm *drm = nouveau_drm(connector->dev); 3280 3279 3281 - drm_modeset_lock_all(drm->dev); 3282 3280 drm_fb_helper_add_one_connector(&drm->fbcon->helper, connector); 3283 - drm_modeset_unlock_all(drm->dev); 3284 3281 3285 3282 drm_connector_register(connector); 3286 3283 }
+13 -7
drivers/gpu/drm/omapdrm/dss/dispc.c
··· 828 828 h_coef = dispc_ovl_get_scale_coef(fir_hinc, true); 829 829 v_coef = dispc_ovl_get_scale_coef(fir_vinc, five_taps); 830 830 831 + if (!h_coef || !v_coef) { 832 + dev_err(&dispc->pdev->dev, "%s: failed to find scale coefs\n", 833 + __func__); 834 + return; 835 + } 836 + 831 837 for (i = 0; i < 8; i++) { 832 838 u32 h, hv; 833 839 ··· 2348 2342 } 2349 2343 2350 2344 if (in_width > maxsinglelinewidth) { 2351 - DSSERR("Cannot scale max input width exceeded"); 2345 + DSSERR("Cannot scale max input width exceeded\n"); 2352 2346 return -EINVAL; 2353 2347 } 2354 2348 return 0; ··· 2430 2424 } 2431 2425 2432 2426 if (in_width > (maxsinglelinewidth * 2)) { 2433 - DSSERR("Cannot setup scaling"); 2434 - DSSERR("width exceeds maximum width possible"); 2427 + DSSERR("Cannot setup scaling\n"); 2428 + DSSERR("width exceeds maximum width possible\n"); 2435 2429 return -EINVAL; 2436 2430 } 2437 2431 2438 2432 if (in_width > maxsinglelinewidth && *five_taps) { 2439 - DSSERR("cannot setup scaling with five taps"); 2433 + DSSERR("cannot setup scaling with five taps\n"); 2440 2434 return -EINVAL; 2441 2435 } 2442 2436 return 0; ··· 2478 2472 in_width > maxsinglelinewidth && ++*decim_x); 2479 2473 2480 2474 if (in_width > maxsinglelinewidth) { 2481 - DSSERR("Cannot scale width exceeds max line width"); 2475 + DSSERR("Cannot scale width exceeds max line width\n"); 2482 2476 return -EINVAL; 2483 2477 } 2484 2478 ··· 2496 2490 * bandwidth. Despite what theory says this appears to 2497 2491 * be true also for 16-bit color formats. 2498 2492 */ 2499 - DSSERR("Not enough bandwidth, too much downscaling (x-decimation factor %d > 4)", *decim_x); 2493 + DSSERR("Not enough bandwidth, too much downscaling (x-decimation factor %d > 4)\n", *decim_x); 2500 2494 2501 2495 return -EINVAL; 2502 2496 } ··· 4639 4633 i734_buf.size, &i734_buf.paddr, 4640 4634 GFP_KERNEL); 4641 4635 if (!i734_buf.vaddr) { 4642 - dev_err(&dispc->pdev->dev, "%s: dma_alloc_writecombine failed", 4636 + dev_err(&dispc->pdev->dev, "%s: dma_alloc_writecombine failed\n", 4643 4637 __func__); 4644 4638 return -ENOMEM; 4645 4639 }
+1 -1
drivers/gpu/drm/omapdrm/dss/hdmi4.c
··· 679 679 struct omap_dss_audio *dss_audio) 680 680 { 681 681 struct omap_hdmi *hd = dev_get_drvdata(dev); 682 - int ret; 682 + int ret = 0; 683 683 684 684 mutex_lock(&hd->lock); 685 685
+6 -1
drivers/gpu/drm/omapdrm/dss/hdmi4_core.c
··· 922 922 { 923 923 const struct hdmi4_features *features; 924 924 struct resource *res; 925 + const struct soc_device_attribute *soc; 925 926 926 - features = soc_device_match(hdmi4_soc_devices)->data; 927 + soc = soc_device_match(hdmi4_soc_devices); 928 + if (!soc) 929 + return -ENODEV; 930 + 931 + features = soc->data; 927 932 core->cts_swmode = features->cts_swmode; 928 933 core->audio_use_mclk = features->audio_use_mclk; 929 934
+1 -1
drivers/gpu/drm/omapdrm/dss/hdmi5.c
··· 671 671 struct omap_dss_audio *dss_audio) 672 672 { 673 673 struct omap_hdmi *hd = dev_get_drvdata(dev); 674 - int ret; 674 + int ret = 0; 675 675 676 676 mutex_lock(&hd->lock); 677 677
+10
drivers/gpu/drm/omapdrm/omap_connector.c
··· 121 121 if (dssdrv->read_edid) { 122 122 void *edid = kzalloc(MAX_EDID, GFP_KERNEL); 123 123 124 + if (!edid) 125 + return 0; 126 + 124 127 if ((dssdrv->read_edid(dssdev, edid, MAX_EDID) > 0) && 125 128 drm_edid_is_valid(edid)) { 126 129 drm_mode_connector_update_edid_property( ··· 141 138 } else { 142 139 struct drm_display_mode *mode = drm_mode_create(dev); 143 140 struct videomode vm = {0}; 141 + 142 + if (!mode) 143 + return 0; 144 144 145 145 dssdrv->get_timings(dssdev, &vm); 146 146 ··· 206 200 if (!r) { 207 201 /* check if vrefresh is still valid */ 208 202 new_mode = drm_mode_duplicate(dev, mode); 203 + 204 + if (!new_mode) 205 + return MODE_BAD; 206 + 209 207 new_mode->clock = vm.pixelclock / 1000; 210 208 new_mode->vrefresh = 0; 211 209 if (mode->vrefresh == drm_mode_vrefresh(new_mode))
+5 -1
drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
··· 401 401 struct tiler_block *tiler_reserve_2d(enum tiler_fmt fmt, u16 w, 402 402 u16 h, u16 align) 403 403 { 404 - struct tiler_block *block = kzalloc(sizeof(*block), GFP_KERNEL); 404 + struct tiler_block *block; 405 405 u32 min_align = 128; 406 406 int ret; 407 407 unsigned long flags; 408 408 u32 slot_bytes; 409 + 410 + block = kzalloc(sizeof(*block), GFP_KERNEL); 411 + if (!block) 412 + return ERR_PTR(-ENOMEM); 409 413 410 414 BUG_ON(!validfmt(fmt)); 411 415
+1 -1
drivers/gpu/drm/omapdrm/tcm-sita.c
··· 90 90 { 91 91 int i; 92 92 unsigned long index; 93 - bool area_free; 93 + bool area_free = false; 94 94 unsigned long slots_per_band = PAGE_SIZE / slot_bytes; 95 95 unsigned long bit_offset = (offset > 0) ? offset / slot_bytes : 0; 96 96 unsigned long curr_bit = bit_offset;
+22 -3
drivers/gpu/drm/vc4/vc4_dpi.c
··· 96 96 struct platform_device *pdev; 97 97 98 98 struct drm_encoder *encoder; 99 - struct drm_connector *connector; 100 99 101 100 void __iomem *regs; 102 101 ··· 163 164 164 165 static void vc4_dpi_encoder_enable(struct drm_encoder *encoder) 165 166 { 167 + struct drm_device *dev = encoder->dev; 166 168 struct drm_display_mode *mode = &encoder->crtc->mode; 167 169 struct vc4_dpi_encoder *vc4_encoder = to_vc4_dpi_encoder(encoder); 168 170 struct vc4_dpi *dpi = vc4_encoder->dpi; 171 + struct drm_connector_list_iter conn_iter; 172 + struct drm_connector *connector = NULL, *connector_scan; 169 173 u32 dpi_c = DPI_ENABLE | DPI_OUTPUT_ENABLE_MODE; 170 174 int ret; 171 175 172 - if (dpi->connector->display_info.num_bus_formats) { 173 - u32 bus_format = dpi->connector->display_info.bus_formats[0]; 176 + /* Look up the connector attached to DPI so we can get the 177 + * bus_format. Ideally the bridge would tell us the 178 + * bus_format we want, but it doesn't yet, so assume that it's 179 + * uniform throughout the bridge chain. 180 + */ 181 + drm_connector_list_iter_begin(dev, &conn_iter); 182 + drm_for_each_connector_iter(connector_scan, &conn_iter) { 183 + if (connector_scan->encoder == encoder) { 184 + connector = connector_scan; 185 + break; 186 + } 187 + } 188 + drm_connector_list_iter_end(&conn_iter); 189 + 190 + if (connector && connector->display_info.num_bus_formats) { 191 + u32 bus_format = connector->display_info.bus_formats[0]; 174 192 175 193 switch (bus_format) { 176 194 case MEDIA_BUS_FMT_RGB888_1X24: ··· 215 199 DRM_ERROR("Unknown media bus format %d\n", bus_format); 216 200 break; 217 201 } 202 + } else { 203 + /* Default to 24bit if no connector found. */ 204 + dpi_c |= VC4_SET_FIELD(DPI_FORMAT_24BIT_888_RGB, DPI_FORMAT); 218 205 } 219 206 220 207 if (mode->flags & DRM_MODE_FLAG_NHSYNC)
+1 -1
drivers/gpu/drm/vc4/vc4_plane.c
··· 505 505 * the scl fields here. 506 506 */ 507 507 if (num_planes == 1) { 508 - scl0 = vc4_get_scl_field(state, 1); 508 + scl0 = vc4_get_scl_field(state, 0); 509 509 scl1 = scl0; 510 510 } else { 511 511 scl0 = vc4_get_scl_field(state, 1);
+4 -3
drivers/hid/Kconfig
··· 462 462 select NEW_LEDS 463 463 select LEDS_CLASS 464 464 ---help--- 465 - Support for Lenovo devices that are not fully compliant with HID standard. 465 + Support for IBM/Lenovo devices that are not fully compliant with HID standard. 466 466 467 - Say Y if you want support for the non-compliant features of the Lenovo 468 - Thinkpad standalone keyboards, e.g: 467 + Say Y if you want support for horizontal scrolling of the IBM/Lenovo 468 + Scrollpoint mice or the non-compliant features of the Lenovo Thinkpad 469 + standalone keyboards, e.g: 469 470 - ThinkPad USB Keyboard with TrackPoint (supports extra LEDs and trackpoint 470 471 configuration) 471 472 - ThinkPad Compact Bluetooth Keyboard with TrackPoint (supports Fn keys)
+9
drivers/hid/hid-ids.h
··· 552 552 #define USB_VENDOR_ID_HUION 0x256c 553 553 #define USB_DEVICE_ID_HUION_TABLET 0x006e 554 554 555 + #define USB_VENDOR_ID_IBM 0x04b3 556 + #define USB_DEVICE_ID_IBM_SCROLLPOINT_III 0x3100 557 + #define USB_DEVICE_ID_IBM_SCROLLPOINT_PRO 0x3103 558 + #define USB_DEVICE_ID_IBM_SCROLLPOINT_OPTICAL 0x3105 559 + #define USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL 0x3108 560 + #define USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL_PRO 0x3109 561 + 555 562 #define USB_VENDOR_ID_IDEACOM 0x1cb6 556 563 #define USB_DEVICE_ID_IDEACOM_IDC6650 0x6650 557 564 #define USB_DEVICE_ID_IDEACOM_IDC6651 0x6651 ··· 691 684 #define USB_DEVICE_ID_LENOVO_TPKBD 0x6009 692 685 #define USB_DEVICE_ID_LENOVO_CUSBKBD 0x6047 693 686 #define USB_DEVICE_ID_LENOVO_CBTKBD 0x6048 687 + #define USB_DEVICE_ID_LENOVO_SCROLLPOINT_OPTICAL 0x6049 694 688 #define USB_DEVICE_ID_LENOVO_TPPRODOCK 0x6067 695 689 #define USB_DEVICE_ID_LENOVO_X1_COVER 0x6085 696 690 #define USB_DEVICE_ID_LENOVO_X1_TAB 0x60a3 ··· 972 964 #define USB_DEVICE_ID_SIS817_TOUCH 0x0817 973 965 #define USB_DEVICE_ID_SIS_TS 0x1013 974 966 #define USB_DEVICE_ID_SIS1030_TOUCH 0x1030 967 + #define USB_DEVICE_ID_SIS10FB_TOUCH 0x10fb 975 968 976 969 #define USB_VENDOR_ID_SKYCABLE 0x1223 977 970 #define USB_DEVICE_ID_SKYCABLE_WIRELESS_PRESENTER 0x3F07
+36
drivers/hid/hid-lenovo.c
··· 6 6 * 7 7 * Copyright (c) 2012 Bernhard Seibold 8 8 * Copyright (c) 2014 Jamie Lentin <jm@lentin.co.uk> 9 + * 10 + * Linux IBM/Lenovo Scrollpoint mouse driver: 11 + * - IBM Scrollpoint III 12 + * - IBM Scrollpoint Pro 13 + * - IBM Scrollpoint Optical 14 + * - IBM Scrollpoint Optical 800dpi 15 + * - IBM Scrollpoint Optical 800dpi Pro 16 + * - Lenovo Scrollpoint Optical 17 + * 18 + * Copyright (c) 2012 Peter De Wachter <pdewacht@gmail.com> 19 + * Copyright (c) 2018 Peter Ganzhorn <peter.ganzhorn@gmail.com> 9 20 */ 10 21 11 22 /* ··· 171 160 return 0; 172 161 } 173 162 163 + static int lenovo_input_mapping_scrollpoint(struct hid_device *hdev, 164 + struct hid_input *hi, struct hid_field *field, 165 + struct hid_usage *usage, unsigned long **bit, int *max) 166 + { 167 + if (usage->hid == HID_GD_Z) { 168 + hid_map_usage(hi, usage, bit, max, EV_REL, REL_HWHEEL); 169 + return 1; 170 + } 171 + return 0; 172 + } 173 + 174 174 static int lenovo_input_mapping(struct hid_device *hdev, 175 175 struct hid_input *hi, struct hid_field *field, 176 176 struct hid_usage *usage, unsigned long **bit, int *max) ··· 193 171 case USB_DEVICE_ID_LENOVO_CUSBKBD: 194 172 case USB_DEVICE_ID_LENOVO_CBTKBD: 195 173 return lenovo_input_mapping_cptkbd(hdev, hi, field, 174 + usage, bit, max); 175 + case USB_DEVICE_ID_IBM_SCROLLPOINT_III: 176 + case USB_DEVICE_ID_IBM_SCROLLPOINT_PRO: 177 + case USB_DEVICE_ID_IBM_SCROLLPOINT_OPTICAL: 178 + case USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL: 179 + case USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL_PRO: 180 + case USB_DEVICE_ID_LENOVO_SCROLLPOINT_OPTICAL: 181 + return lenovo_input_mapping_scrollpoint(hdev, hi, field, 196 182 usage, bit, max); 197 183 default: 198 184 return 0; ··· 913 883 { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_CUSBKBD) }, 914 884 { HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_CBTKBD) }, 915 885 { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_TPPRODOCK) }, 886 + { HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_III) }, 887 + { HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_PRO) }, 888 + { HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_OPTICAL) }, 889 + { HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL) }, 890 + { HID_USB_DEVICE(USB_VENDOR_ID_IBM, USB_DEVICE_ID_IBM_SCROLLPOINT_800DPI_OPTICAL_PRO) }, 891 + { HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, USB_DEVICE_ID_LENOVO_SCROLLPOINT_OPTICAL) }, 916 892 { } 917 893 }; 918 894
+2
drivers/hid/i2c-hid/i2c-hid.c
··· 174 174 I2C_HID_QUIRK_NO_IRQ_AFTER_RESET }, 175 175 { I2C_VENDOR_ID_RAYD, I2C_PRODUCT_ID_RAYD_3118, 176 176 I2C_HID_QUIRK_RESEND_REPORT_DESCR }, 177 + { USB_VENDOR_ID_SIS_TOUCH, USB_DEVICE_ID_SIS10FB_TOUCH, 178 + I2C_HID_QUIRK_RESEND_REPORT_DESCR }, 177 179 { 0, 0 } 178 180 }; 179 181
+16 -20
drivers/hid/intel-ish-hid/ishtp-hid-client.c
··· 77 77 struct ishtp_cl_data *client_data = hid_ishtp_cl->client_data; 78 78 int curr_hid_dev = client_data->cur_hid_dev; 79 79 80 - if (data_len < sizeof(struct hostif_msg_hdr)) { 81 - dev_err(&client_data->cl_device->dev, 82 - "[hid-ish]: error, received %u which is less than data header %u\n", 83 - (unsigned int)data_len, 84 - (unsigned int)sizeof(struct hostif_msg_hdr)); 85 - ++client_data->bad_recv_cnt; 86 - ish_hw_reset(hid_ishtp_cl->dev); 87 - return; 88 - } 89 - 90 80 payload = recv_buf + sizeof(struct hostif_msg_hdr); 91 81 total_len = data_len; 92 82 cur_pos = 0; 93 83 94 84 do { 85 + if (cur_pos + sizeof(struct hostif_msg) > total_len) { 86 + dev_err(&client_data->cl_device->dev, 87 + "[hid-ish]: error, received %u which is less than data header %u\n", 88 + (unsigned int)data_len, 89 + (unsigned int)sizeof(struct hostif_msg_hdr)); 90 + ++client_data->bad_recv_cnt; 91 + ish_hw_reset(hid_ishtp_cl->dev); 92 + break; 93 + } 94 + 95 95 recv_msg = (struct hostif_msg *)(recv_buf + cur_pos); 96 96 payload_len = recv_msg->hdr.size; 97 97 ··· 412 412 { 413 413 struct ishtp_hid_data *hid_data = hid->driver_data; 414 414 struct ishtp_cl_data *client_data = hid_data->client_data; 415 - static unsigned char buf[10]; 416 - unsigned int len; 417 - struct hostif_msg_to_sensor *msg = (struct hostif_msg_to_sensor *)buf; 415 + struct hostif_msg_to_sensor msg = {}; 418 416 int rv; 419 417 int i; 420 418 ··· 424 426 return; 425 427 } 426 428 427 - len = sizeof(struct hostif_msg_to_sensor); 428 - 429 - memset(msg, 0, sizeof(struct hostif_msg_to_sensor)); 430 - msg->hdr.command = (report_type == HID_FEATURE_REPORT) ? 429 + msg.hdr.command = (report_type == HID_FEATURE_REPORT) ? 431 430 HOSTIF_GET_FEATURE_REPORT : HOSTIF_GET_INPUT_REPORT; 432 431 for (i = 0; i < client_data->num_hid_devices; ++i) { 433 432 if (hid == client_data->hid_sensor_hubs[i]) { 434 - msg->hdr.device_id = 433 + msg.hdr.device_id = 435 434 client_data->hid_devices[i].dev_id; 436 435 break; 437 436 } ··· 437 442 if (i == client_data->num_hid_devices) 438 443 return; 439 444 440 - msg->report_id = report_id; 441 - rv = ishtp_cl_send(client_data->hid_ishtp_cl, buf, len); 445 + msg.report_id = report_id; 446 + rv = ishtp_cl_send(client_data->hid_ishtp_cl, (uint8_t *)&msg, 447 + sizeof(msg)); 442 448 if (rv) 443 449 hid_ishtp_trace(client_data, "%s hid %p send failed\n", 444 450 __func__, hid);
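The second change in this file replaces a function-scope static buffer with an on-stack hostif_msg_to_sensor: a static buffer is shared by every caller, so concurrent report requests could overwrite each other's message before it was sent. A minimal plain-C illustration (struct msg and send_buf() are hypothetical):

#include <stddef.h>

struct msg {
        int command;
        int device_id;
        int report_id;
};

int send_buf(const void *buf, size_t len);      /* hypothetical transport */

static int request_report(int device_id, int report_id, int command)
{
        struct msg m = { 0 };   /* private to this call, zero-initialized */

        m.command = command;
        m.device_id = device_id;
        m.report_id = report_id;
        /* A function-scope 'static' buffer here would be shared by all
         * callers and could be rewritten by a concurrent request. */
        return send_buf(&m, sizeof(m));
}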
+1 -1
drivers/hid/intel-ish-hid/ishtp/bus.c
··· 418 418 list_del(&device->device_link); 419 419 spin_unlock_irqrestore(&dev->device_list_lock, flags); 420 420 dev_err(dev->devc, "Failed to register ISHTP client device\n"); 421 - kfree(device); 421 + put_device(&device->dev); 422 422 return NULL; 423 423 } 424 424
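The one-line bus.c change applies the driver-model rule that once device_register() has been attempted, the embedded kobject's refcount owns the allocation: error paths must use put_device(), which routes the free through the release() callback, never a direct kfree(). A sketch of that rule, assuming kernel context (my_device names are illustrative):

#include <linux/device.h>
#include <linux/slab.h>

struct my_device {
        struct device dev;
};

static void my_device_release(struct device *dev)
{
        kfree(container_of(dev, struct my_device, dev));
}

static struct my_device *my_device_add(struct device *parent)
{
        struct my_device *d = kzalloc(sizeof(*d), GFP_KERNEL);
        int err;

        if (!d)
                return NULL;
        d->dev.parent = parent;
        d->dev.release = my_device_release;

        err = device_register(&d->dev);
        if (err) {
                /* Never kfree() a device after device_register(), even on
                 * failure: the refcount owns it now. put_device() lets the
                 * release() callback free it once the last reference drops. */
                put_device(&d->dev);
                return NULL;
        }
        return d;
}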
+3 -1
drivers/hid/wacom_sys.c
··· 1213 1213 devres->root = root; 1214 1214 1215 1215 error = sysfs_create_group(devres->root, group); 1216 - if (error) 1216 + if (error) { 1217 + devres_free(devres); 1217 1218 return error; 1219 + } 1218 1220 1219 1221 devres_add(&wacom->hdev->dev, devres); 1220 1222
+4 -1
drivers/infiniband/Kconfig
··· 61 61 pages on demand instead. 62 62 63 63 config INFINIBAND_ADDR_TRANS 64 - bool 64 + bool "RDMA/CM" 65 65 depends on INFINIBAND 66 66 default y 67 + ---help--- 68 + Support for RDMA communication manager (CM). 69 + This allows for a generic connection abstraction over RDMA. 67 70 68 71 config INFINIBAND_ADDR_TRANS_CONFIGFS 69 72 bool
+35 -20
drivers/infiniband/core/cache.c
··· 291 291 * so lookup free slot only if requested. 292 292 */ 293 293 if (pempty && empty < 0) { 294 - if (data->props & GID_TABLE_ENTRY_INVALID) { 295 - /* Found an invalid (free) entry; allocate it */ 296 - if (data->props & GID_TABLE_ENTRY_DEFAULT) { 297 - if (default_gid) 298 - empty = curr_index; 299 - } else { 300 - empty = curr_index; 301 - } 294 + if (data->props & GID_TABLE_ENTRY_INVALID && 295 + (default_gid == 296 + !!(data->props & GID_TABLE_ENTRY_DEFAULT))) { 297 + /* 298 + * Found an invalid (free) entry; allocate it. 299 + * If default GID is requested, then our 300 + * found slot must be one of the DEFAULT 301 + * reserved slots or we fail. 302 + * This ensures that only DEFAULT reserved 303 + * slots are used for default property GIDs. 304 + */ 305 + empty = curr_index; 302 306 } 303 307 } 304 308 ··· 424 420 return ret; 425 421 } 426 422 427 - int ib_cache_gid_del(struct ib_device *ib_dev, u8 port, 428 - union ib_gid *gid, struct ib_gid_attr *attr) 423 + static int 424 + _ib_cache_gid_del(struct ib_device *ib_dev, u8 port, 425 + union ib_gid *gid, struct ib_gid_attr *attr, 426 + unsigned long mask, bool default_gid) 429 427 { 430 428 struct ib_gid_table *table; 431 429 int ret = 0; ··· 437 431 438 432 mutex_lock(&table->lock); 439 433 440 - ix = find_gid(table, gid, attr, false, 441 - GID_ATTR_FIND_MASK_GID | 442 - GID_ATTR_FIND_MASK_GID_TYPE | 443 - GID_ATTR_FIND_MASK_NETDEV, 444 - NULL); 434 + ix = find_gid(table, gid, attr, default_gid, mask, NULL); 445 435 if (ix < 0) { 446 436 ret = -EINVAL; 447 437 goto out_unlock; ··· 452 450 pr_debug("%s: can't delete gid %pI6 error=%d\n", 453 451 __func__, gid->raw, ret); 454 452 return ret; 453 + } 454 + 455 + int ib_cache_gid_del(struct ib_device *ib_dev, u8 port, 456 + union ib_gid *gid, struct ib_gid_attr *attr) 457 + { 458 + unsigned long mask = GID_ATTR_FIND_MASK_GID | 459 + GID_ATTR_FIND_MASK_GID_TYPE | 460 + GID_ATTR_FIND_MASK_DEFAULT | 461 + GID_ATTR_FIND_MASK_NETDEV; 462 + 463 + return _ib_cache_gid_del(ib_dev, port, gid, attr, mask, false); 455 464 } 456 465 457 466 int ib_cache_gid_del_all_netdev_gids(struct ib_device *ib_dev, u8 port, ··· 741 728 unsigned long gid_type_mask, 742 729 enum ib_cache_gid_default_mode mode) 743 730 { 744 - union ib_gid gid; 731 + union ib_gid gid = { }; 745 732 struct ib_gid_attr gid_attr; 746 733 struct ib_gid_table *table; 747 734 unsigned int gid_type; ··· 749 736 750 737 table = ib_dev->cache.ports[port - rdma_start_port(ib_dev)].gid; 751 738 752 - make_default_gid(ndev, &gid); 739 + mask = GID_ATTR_FIND_MASK_GID_TYPE | 740 + GID_ATTR_FIND_MASK_DEFAULT | 741 + GID_ATTR_FIND_MASK_NETDEV; 753 742 memset(&gid_attr, 0, sizeof(gid_attr)); 754 743 gid_attr.ndev = ndev; 755 744 ··· 762 747 gid_attr.gid_type = gid_type; 763 748 764 749 if (mode == IB_CACHE_GID_DEFAULT_MODE_SET) { 765 - mask = GID_ATTR_FIND_MASK_GID_TYPE | 766 - GID_ATTR_FIND_MASK_DEFAULT; 750 + make_default_gid(ndev, &gid); 767 751 __ib_cache_gid_add(ib_dev, port, &gid, 768 752 &gid_attr, mask, true); 769 753 } else if (mode == IB_CACHE_GID_DEFAULT_MODE_DELETE) { 770 - ib_cache_gid_del(ib_dev, port, &gid, &gid_attr); 754 + _ib_cache_gid_del(ib_dev, port, &gid, 755 + &gid_attr, mask, true); 771 756 } 772 757 } 773 758 }
+43 -17
drivers/infiniband/core/cma.c
··· 382 382 #define CMA_VERSION 0x00 383 383 384 384 struct cma_req_info { 385 + struct sockaddr_storage listen_addr_storage; 386 + struct sockaddr_storage src_addr_storage; 385 387 struct ib_device *device; 386 388 int port; 387 389 union ib_gid local_gid; ··· 868 866 { 869 867 struct ib_qp_attr qp_attr; 870 868 int qp_attr_mask, ret; 871 - union ib_gid sgid; 872 869 873 870 mutex_lock(&id_priv->qp_mutex); 874 871 if (!id_priv->id.qp) { ··· 887 886 888 887 qp_attr.qp_state = IB_QPS_RTR; 889 888 ret = rdma_init_qp_attr(&id_priv->id, &qp_attr, &qp_attr_mask); 890 - if (ret) 891 - goto out; 892 - 893 - ret = ib_query_gid(id_priv->id.device, id_priv->id.port_num, 894 - rdma_ah_read_grh(&qp_attr.ah_attr)->sgid_index, 895 - &sgid, NULL); 896 889 if (ret) 897 890 goto out; 898 891 ··· 1335 1340 } 1336 1341 1337 1342 static struct net_device *cma_get_net_dev(struct ib_cm_event *ib_event, 1338 - const struct cma_req_info *req) 1343 + struct cma_req_info *req) 1339 1344 { 1340 - struct sockaddr_storage listen_addr_storage, src_addr_storage; 1341 - struct sockaddr *listen_addr = (struct sockaddr *)&listen_addr_storage, 1342 - *src_addr = (struct sockaddr *)&src_addr_storage; 1345 + struct sockaddr *listen_addr = 1346 + (struct sockaddr *)&req->listen_addr_storage; 1347 + struct sockaddr *src_addr = (struct sockaddr *)&req->src_addr_storage; 1343 1348 struct net_device *net_dev; 1344 1349 const union ib_gid *gid = req->has_gid ? &req->local_gid : NULL; 1345 1350 int err; ··· 1353 1358 gid, listen_addr); 1354 1359 if (!net_dev) 1355 1360 return ERR_PTR(-ENODEV); 1356 - 1357 - if (!validate_net_dev(net_dev, listen_addr, src_addr)) { 1358 - dev_put(net_dev); 1359 - return ERR_PTR(-EHOSTUNREACH); 1360 - } 1361 1361 1362 1362 return net_dev; 1363 1363 } ··· 1480 1490 } 1481 1491 } 1482 1492 1493 + /* 1494 + * The net namespace might be getting deleted while the route lookup and 1495 + * cm_id lookup are in progress. Therefore, perform the netdevice 1496 + * validation and the cm_id lookup under the rcu lock. 1497 + * The RCU lock, along with the netdevice state check, synchronizes with 1498 + * a netdevice migrating to a different net namespace and also avoids 1499 + * the case where the net namespace gets deleted while the lookup is in 1500 + * progress. 1501 + * If the device state is not IFF_UP, its properties such as ifindex 1502 + * and nd_net cannot be trusted to remain valid without the rcu lock. 1503 + * net/core/dev.c change_net_namespace() makes sure to synchronize with 1504 + * ongoing operations on a net device after the device is closed, using 1505 + * synchronize_net(). 1506 + */ 1507 + rcu_read_lock(); 1508 + if (*net_dev) { 1509 + /* 1510 + * If the netdevice is down, it is likely that it is administratively 1511 + * down or it might be migrating to a different namespace. 1512 + * In that case avoid further processing, as the net namespace 1513 + * or ifindex may change. 1514 + */ 1515 + if (((*net_dev)->flags & IFF_UP) == 0) { 1516 + id_priv = ERR_PTR(-EHOSTUNREACH); 1517 + goto err; 1518 + } 1519 + 1520 + if (!validate_net_dev(*net_dev, 1521 + (struct sockaddr *)&req.listen_addr_storage, 1522 + (struct sockaddr *)&req.src_addr_storage)) { 1523 + id_priv = ERR_PTR(-EHOSTUNREACH); 1524 + goto err; 1525 + } 1526 + } 1527 + 1483 1528 bind_list = cma_ps_find(*net_dev ? dev_net(*net_dev) : &init_net, 1484 1529 rdma_ps_from_service_id(req.service_id), 1485 1530 cma_port_from_service_id(req.service_id)); 1486 1531 id_priv = cma_find_listener(bind_list, cm_id, ib_event, &req, *net_dev); 1532 + err: 1533 + rcu_read_unlock(); 1487 1534 if (IS_ERR(id_priv) && *net_dev) { 1488 1535 dev_put(*net_dev); 1489 1536 *net_dev = NULL; 1490 1537 } 1491 - 1492 1538 return id_priv; 1493 1539 }
+4 -1
drivers/infiniband/core/iwpm_util.c
··· 114 114 struct sockaddr_storage *mapped_sockaddr, 115 115 u8 nl_client) 116 116 { 117 - struct hlist_head *hash_bucket_head; 117 + struct hlist_head *hash_bucket_head = NULL; 118 118 struct iwpm_mapping_info *map_info; 119 119 unsigned long flags; 120 120 int ret = -EINVAL; ··· 142 142 } 143 143 } 144 144 spin_unlock_irqrestore(&iwpm_mapinfo_lock, flags); 145 + 146 + if (!hash_bucket_head) 147 + kfree(map_info); 145 148 return ret; 146 149 } 147 150
+2 -2
drivers/infiniband/core/mad.c
··· 59 59 MODULE_PARM_DESC(recv_queue_size, "Size of receive queue in number of work requests"); 60 60 61 61 static struct list_head ib_mad_port_list; 62 - static u32 ib_mad_client_id = 0; 62 + static atomic_t ib_mad_client_id = ATOMIC_INIT(0); 63 63 64 64 /* Port list lock */ 65 65 static DEFINE_SPINLOCK(ib_mad_port_list_lock); ··· 377 377 } 378 378 379 379 spin_lock_irqsave(&port_priv->reg_lock, flags); 380 - mad_agent_priv->agent.hi_tid = ++ib_mad_client_id; 380 + mad_agent_priv->agent.hi_tid = atomic_inc_return(&ib_mad_client_id); 381 381 382 382 /* 383 383 * Make sure MAD registration (if supplied)
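The mad.c change turns a bare u32 counter bumped with ++ into an atomic_t, since two MAD agents registering concurrently could otherwise be handed the same hi_tid prefix. The pattern in isolation, as a kernel-C sketch (alloc_client_id is an illustrative name):

#include <linux/atomic.h>
#include <linux/types.h>

static atomic_t next_client_id = ATOMIC_INIT(0);

static u32 alloc_client_id(void)
{
        /* One atomic read-modify-write: concurrent callers can never
         * observe the same value, unlike a plain '++shared_counter'. */
        return atomic_inc_return(&next_client_id);
}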
+15 -13
drivers/infiniband/core/roce_gid_mgmt.c
··· 255 255 struct net_device *rdma_ndev) 256 256 { 257 257 struct net_device *real_dev = rdma_vlan_dev_real_dev(event_ndev); 258 + unsigned long gid_type_mask; 258 259 259 260 if (!rdma_ndev) 260 261 return; ··· 265 264 266 265 rcu_read_lock(); 267 266 268 - if (rdma_is_upper_dev_rcu(rdma_ndev, event_ndev) && 269 - is_eth_active_slave_of_bonding_rcu(rdma_ndev, real_dev) == 270 - BONDING_SLAVE_STATE_INACTIVE) { 271 - unsigned long gid_type_mask; 272 - 267 + if (((rdma_ndev != event_ndev && 268 + !rdma_is_upper_dev_rcu(rdma_ndev, event_ndev)) || 269 + is_eth_active_slave_of_bonding_rcu(rdma_ndev, real_dev) 270 + == 271 + BONDING_SLAVE_STATE_INACTIVE)) { 273 272 rcu_read_unlock(); 274 - 275 - gid_type_mask = roce_gid_type_mask_support(ib_dev, port); 276 - 277 - ib_cache_gid_set_default_gid(ib_dev, port, rdma_ndev, 278 - gid_type_mask, 279 - IB_CACHE_GID_DEFAULT_MODE_DELETE); 280 - } else { 281 - rcu_read_unlock(); 273 + return; 282 274 } 275 + 276 + rcu_read_unlock(); 277 + 278 + gid_type_mask = roce_gid_type_mask_support(ib_dev, port); 279 + 280 + ib_cache_gid_set_default_gid(ib_dev, port, rdma_ndev, 281 + gid_type_mask, 282 + IB_CACHE_GID_DEFAULT_MODE_DELETE); 283 283 } 284 284 285 285 static void enum_netdev_ipv4_ips(struct ib_device *ib_dev,
+28 -16
drivers/infiniband/core/ucma.c
··· 159 159 complete(&ctx->comp); 160 160 } 161 161 162 + /* 163 + * Same as ucma_get_ctx but requires that ->cm_id->device is valid, e.g. that the 164 + * CM_ID is bound. 165 + */ 166 + static struct ucma_context *ucma_get_ctx_dev(struct ucma_file *file, int id) 167 + { 168 + struct ucma_context *ctx = ucma_get_ctx(file, id); 169 + 170 + if (IS_ERR(ctx)) 171 + return ctx; 172 + if (!ctx->cm_id->device) { 173 + ucma_put_ctx(ctx); 174 + return ERR_PTR(-EINVAL); 175 + } 176 + return ctx; 177 + } 178 + 162 179 static void ucma_close_event_id(struct work_struct *work) 163 180 { 164 181 struct ucma_event *uevent_close = container_of(work, struct ucma_event, close_work); ··· 700 683 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 701 684 return -EFAULT; 702 685 703 - if (!rdma_addr_size_in6(&cmd.src_addr) || 686 + if ((cmd.src_addr.sin6_family && !rdma_addr_size_in6(&cmd.src_addr)) || 704 687 !rdma_addr_size_in6(&cmd.dst_addr)) 705 688 return -EINVAL; 706 689 ··· 751 734 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 752 735 return -EFAULT; 753 736 754 - ctx = ucma_get_ctx(file, cmd.id); 737 + ctx = ucma_get_ctx_dev(file, cmd.id); 755 738 if (IS_ERR(ctx)) 756 739 return PTR_ERR(ctx); 757 740 ··· 1067 1050 if (!cmd.conn_param.valid) 1068 1051 return -EINVAL; 1069 1052 1070 - ctx = ucma_get_ctx(file, cmd.id); 1053 + ctx = ucma_get_ctx_dev(file, cmd.id); 1071 1054 if (IS_ERR(ctx)) 1072 1055 return PTR_ERR(ctx); 1073 1056 ··· 1109 1092 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 1110 1093 return -EFAULT; 1111 1094 1112 - ctx = ucma_get_ctx(file, cmd.id); 1095 + ctx = ucma_get_ctx_dev(file, cmd.id); 1113 1096 if (IS_ERR(ctx)) 1114 1097 return PTR_ERR(ctx); 1115 1098 ··· 1137 1120 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 1138 1121 return -EFAULT; 1139 1122 1140 - ctx = ucma_get_ctx(file, cmd.id); 1123 + ctx = ucma_get_ctx_dev(file, cmd.id); 1141 1124 if (IS_ERR(ctx)) 1142 1125 return PTR_ERR(ctx); 1143 1126 ··· 1156 1139 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 1157 1140 return -EFAULT; 1158 1141 1159 - ctx = ucma_get_ctx(file, cmd.id); 1142 + ctx = ucma_get_ctx_dev(file, cmd.id); 1160 1143 if (IS_ERR(ctx)) 1161 1144 return PTR_ERR(ctx); 1162 1145 ··· 1184 1167 if (cmd.qp_state > IB_QPS_ERR) 1185 1168 return -EINVAL; 1186 1169 1187 - ctx = ucma_get_ctx(file, cmd.id); 1170 + ctx = ucma_get_ctx_dev(file, cmd.id); 1188 1171 if (IS_ERR(ctx)) 1189 1172 return PTR_ERR(ctx); 1190 - 1191 - if (!ctx->cm_id->device) { 1192 - ret = -EINVAL; 1193 - goto out; 1194 - } 1195 1173 1196 1174 resp.qp_attr_mask = 0; 1197 1175 memset(&qp_attr, 0, sizeof qp_attr); ··· 1328 1316 if (copy_from_user(&cmd, inbuf, sizeof(cmd))) 1329 1317 return -EFAULT; 1330 1318 1319 + if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE)) 1320 + return -EINVAL; 1321 + 1331 1322 ctx = ucma_get_ctx(file, cmd.id); 1332 1323 if (IS_ERR(ctx)) 1333 1324 return PTR_ERR(ctx); 1334 - 1335 - if (unlikely(cmd.optlen > KMALLOC_MAX_SIZE)) 1336 - return -EINVAL; 1337 1325 1338 1326 optval = memdup_user(u64_to_user_ptr(cmd.optval), 1339 1327 cmd.optlen); ··· 1396 1384 else 1397 1385 return -EINVAL; 1398 1386 1399 - ctx = ucma_get_ctx(file, cmd->id); 1387 + ctx = ucma_get_ctx_dev(file, cmd->id); 1400 1388 if (IS_ERR(ctx)) 1401 1389 return PTR_ERR(ctx); 1402 1390
+6
drivers/infiniband/core/uverbs_cmd.c
··· 691 691 692 692 mr->device = pd->device; 693 693 mr->pd = pd; 694 + mr->dm = NULL; 694 695 mr->uobject = uobj; 695 696 atomic_inc(&pd->usecnt); 696 697 mr->res.type = RDMA_RESTRACK_MR; ··· 765 764 return PTR_ERR(uobj); 766 765 767 766 mr = uobj->object; 767 + 768 + if (mr->dm) { 769 + ret = -EINVAL; 770 + goto put_uobjs; 771 + } 768 772 769 773 if (cmd.flags & IB_MR_REREG_ACCESS) { 770 774 ret = ib_check_mr_access(cmd.access_flags);
+9
drivers/infiniband/core/uverbs_ioctl.c
··· 234 234 return -EINVAL; 235 235 } 236 236 237 + for (; i < method_spec->num_buckets; i++) { 238 + struct uverbs_attr_spec_hash *attr_spec_bucket = 239 + method_spec->attr_buckets[i]; 240 + 241 + if (!bitmap_empty(attr_spec_bucket->mandatory_attrs_bitmask, 242 + attr_spec_bucket->num_attrs)) 243 + return -EINVAL; 244 + } 245 + 237 246 return 0; 238 247 } 239 248
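The added loop rejects a method call when any attribute bucket the caller never supplied still carries mandatory attributes, by testing each remaining bucket's mandatory bitmap for emptiness. A small C sketch of the same check over plain 64-bit masks (the bucket layout is invented for illustration):

#include <stdint.h>
#include <stdio.h>

struct attr_bucket { uint64_t mandatory_mask; };

/* Return -1 if any bucket at or past 'first_unsupplied' still carries
 * mandatory attributes -- the caller cannot simply omit those buckets. */
static int check_unsupplied_buckets(const struct attr_bucket *b,
                                    int first_unsupplied, int nbuckets)
{
    for (int i = first_unsupplied; i < nbuckets; i++)
        if (b[i].mandatory_mask != 0)   /* !bitmap_empty() analogue */
            return -1;
    return 0;
}

int main(void)
{
    struct attr_bucket buckets[] = { {0x3}, {0x0}, {0x4} };

    /* Caller supplied only bucket 0: bucket 2 still has mandatory bits. */
    printf("%d\n", check_unsupplied_buckets(buckets, 1, 3)); /* -1 */
    return 0;
}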
+6 -6
drivers/infiniband/core/uverbs_std_types_flow_action.c
··· 363 363 364 364 static const struct uverbs_attr_spec uverbs_flow_action_esp_keymat[] = { 365 365 [IB_UVERBS_FLOW_ACTION_ESP_KEYMAT_AES_GCM] = { 366 - .ptr = { 366 + { .ptr = { 367 367 .type = UVERBS_ATTR_TYPE_PTR_IN, 368 368 UVERBS_ATTR_TYPE(struct ib_uverbs_flow_action_esp_keymat_aes_gcm), 369 369 .flags = UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO, 370 - }, 370 + } }, 371 371 }, 372 372 }; 373 373 374 374 static const struct uverbs_attr_spec uverbs_flow_action_esp_replay[] = { 375 375 [IB_UVERBS_FLOW_ACTION_ESP_REPLAY_NONE] = { 376 - .ptr = { 376 + { .ptr = { 377 377 .type = UVERBS_ATTR_TYPE_PTR_IN, 378 378 /* No need to specify any data */ 379 379 .len = 0, 380 - } 380 + } } 381 381 }, 382 382 [IB_UVERBS_FLOW_ACTION_ESP_REPLAY_BMP] = { 383 - .ptr = { 383 + { .ptr = { 384 384 .type = UVERBS_ATTR_TYPE_PTR_IN, 385 385 UVERBS_ATTR_STRUCT(struct ib_uverbs_flow_action_esp_replay_bmp, size), 386 386 .flags = UVERBS_ATTR_SPEC_F_MIN_SZ_OR_ZERO, 387 - } 387 + } } 388 388 }, 389 389 }; 390 390
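The extra braces are needed because .ptr lives inside an unnamed member of the spec struct; some compilers (the gcc-4.4 era in particular) reject or warn about designated initializers that skip the enclosing braces of an anonymous union. A standalone sketch of the same situation, with invented struct names:

#include <stdio.h>

struct ptr_spec { int type; int len; };

struct attr_spec {
    /* unnamed union member, as in the uverbs attr specs */
    union {
        struct ptr_spec ptr;
        int enum_id;
    };
};

/* The inner braces initialize the unnamed union explicitly; writing
 * only ".ptr = {...}" at the top level trips up older compilers and
 * -Wmissing-braces. */
static const struct attr_spec specs[] = {
    [0] = { { .ptr = { .type = 1, .len = 8 } } },
};

int main(void)
{
    printf("type=%d len=%d\n", specs[0].ptr.type, specs[0].ptr.len);
    return 0;
}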
+1
drivers/infiniband/core/verbs.c
··· 1656 1656 if (!IS_ERR(mr)) { 1657 1657 mr->device = pd->device; 1658 1658 mr->pd = pd; 1659 + mr->dm = NULL; 1659 1660 mr->uobject = NULL; 1660 1661 atomic_inc(&pd->usecnt); 1661 1662 mr->need_inval = false;
+10 -1
drivers/infiniband/hw/cxgb4/cq.c
··· 315 315 * Deal with out-of-order and/or completions that complete 316 316 * prior unsignalled WRs. 317 317 */ 318 - void c4iw_flush_hw_cq(struct c4iw_cq *chp) 318 + void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp) 319 319 { 320 320 struct t4_cqe *hw_cqe, *swcqe, read_cqe; 321 321 struct c4iw_qp *qhp; ··· 338 338 */ 339 339 if (qhp == NULL) 340 340 goto next_cqe; 341 + 342 + if (flush_qhp != qhp) { 343 + spin_lock(&qhp->lock); 344 + 345 + if (qhp->wq.flushed == 1) 346 + goto next_cqe; 347 + } 341 348 342 349 if (CQE_OPCODE(hw_cqe) == FW_RI_TERMINATE) 343 350 goto next_cqe; ··· 397 390 next_cqe: 398 391 t4_hwcq_consume(&chp->cq); 399 392 ret = t4_next_hw_cqe(&chp->cq, &hw_cqe); 393 + if (qhp && flush_qhp != qhp) 394 + spin_unlock(&qhp->lock); 400 395 } 401 396 } 402 397
+8 -1
drivers/infiniband/hw/cxgb4/device.c
··· 875 875 876 876 rdev->status_page->db_off = 0; 877 877 878 + init_completion(&rdev->rqt_compl); 879 + init_completion(&rdev->pbl_compl); 880 + kref_init(&rdev->rqt_kref); 881 + kref_init(&rdev->pbl_kref); 882 + 878 883 return 0; 879 884 err_free_status_page_and_wr_log: 880 885 if (c4iw_wr_log && rdev->wr_log) ··· 898 893 899 894 static void c4iw_rdev_close(struct c4iw_rdev *rdev) 900 895 { 901 - destroy_workqueue(rdev->free_workq); 902 896 kfree(rdev->wr_log); 903 897 c4iw_release_dev_ucontext(rdev, &rdev->uctx); 904 898 free_page((unsigned long)rdev->status_page); 905 899 c4iw_pblpool_destroy(rdev); 906 900 c4iw_rqtpool_destroy(rdev); 901 + wait_for_completion(&rdev->pbl_compl); 902 + wait_for_completion(&rdev->rqt_compl); 907 903 c4iw_ocqp_pool_destroy(rdev); 904 + destroy_workqueue(rdev->free_workq); 908 905 c4iw_destroy_resource(&rdev->resource); 909 906 } 910 907
+5 -1
drivers/infiniband/hw/cxgb4/iw_cxgb4.h
··· 185 185 struct wr_log_entry *wr_log; 186 186 int wr_log_size; 187 187 struct workqueue_struct *free_workq; 188 + struct completion rqt_compl; 189 + struct completion pbl_compl; 190 + struct kref rqt_kref; 191 + struct kref pbl_kref; 188 192 }; 189 193 190 194 static inline int c4iw_fatal_error(struct c4iw_rdev *rdev) ··· 1053 1049 void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size); 1054 1050 u32 c4iw_ocqp_pool_alloc(struct c4iw_rdev *rdev, int size); 1055 1051 void c4iw_ocqp_pool_free(struct c4iw_rdev *rdev, u32 addr, int size); 1056 - void c4iw_flush_hw_cq(struct c4iw_cq *chp); 1052 + void c4iw_flush_hw_cq(struct c4iw_cq *chp, struct c4iw_qp *flush_qhp); 1057 1053 void c4iw_count_rcqes(struct t4_cq *cq, struct t4_wq *wq, int *count); 1058 1054 int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp); 1059 1055 int c4iw_flush_rq(struct t4_wq *wq, struct t4_cq *cq, int count);
+2 -2
drivers/infiniband/hw/cxgb4/qp.c
··· 1343 1343 qhp->wq.flushed = 1; 1344 1344 t4_set_wq_in_error(&qhp->wq); 1345 1345 1346 - c4iw_flush_hw_cq(rchp); 1346 + c4iw_flush_hw_cq(rchp, qhp); 1347 1347 c4iw_count_rcqes(&rchp->cq, &qhp->wq, &count); 1348 1348 rq_flushed = c4iw_flush_rq(&qhp->wq, &rchp->cq, count); 1349 1349 1350 1350 if (schp != rchp) 1351 - c4iw_flush_hw_cq(schp); 1351 + c4iw_flush_hw_cq(schp, qhp); 1352 1352 sq_flushed = c4iw_flush_sq(qhp); 1353 1353 1354 1354 spin_unlock(&qhp->lock);
+24 -2
drivers/infiniband/hw/cxgb4/resource.c
··· 260 260 rdev->stats.pbl.cur += roundup(size, 1 << MIN_PBL_SHIFT); 261 261 if (rdev->stats.pbl.cur > rdev->stats.pbl.max) 262 262 rdev->stats.pbl.max = rdev->stats.pbl.cur; 263 + kref_get(&rdev->pbl_kref); 263 264 } else 264 265 rdev->stats.pbl.fail++; 265 266 mutex_unlock(&rdev->stats.lock); 266 267 return (u32)addr; 268 + } 269 + 270 + static void destroy_pblpool(struct kref *kref) 271 + { 272 + struct c4iw_rdev *rdev; 273 + 274 + rdev = container_of(kref, struct c4iw_rdev, pbl_kref); 275 + gen_pool_destroy(rdev->pbl_pool); 276 + complete(&rdev->pbl_compl); 267 277 } 268 278 269 279 void c4iw_pblpool_free(struct c4iw_rdev *rdev, u32 addr, int size) ··· 283 273 rdev->stats.pbl.cur -= roundup(size, 1 << MIN_PBL_SHIFT); 284 274 mutex_unlock(&rdev->stats.lock); 285 275 gen_pool_free(rdev->pbl_pool, (unsigned long)addr, size); 276 + kref_put(&rdev->pbl_kref, destroy_pblpool); 286 277 } 287 278 288 279 int c4iw_pblpool_create(struct c4iw_rdev *rdev) ··· 321 310 322 311 void c4iw_pblpool_destroy(struct c4iw_rdev *rdev) 323 312 { 324 - gen_pool_destroy(rdev->pbl_pool); 313 + kref_put(&rdev->pbl_kref, destroy_pblpool); 325 314 } 326 315 327 316 /* ··· 342 331 rdev->stats.rqt.cur += roundup(size << 6, 1 << MIN_RQT_SHIFT); 343 332 if (rdev->stats.rqt.cur > rdev->stats.rqt.max) 344 333 rdev->stats.rqt.max = rdev->stats.rqt.cur; 334 + kref_get(&rdev->rqt_kref); 345 335 } else 346 336 rdev->stats.rqt.fail++; 347 337 mutex_unlock(&rdev->stats.lock); 348 338 return (u32)addr; 339 + } 340 + 341 + static void destroy_rqtpool(struct kref *kref) 342 + { 343 + struct c4iw_rdev *rdev; 344 + 345 + rdev = container_of(kref, struct c4iw_rdev, rqt_kref); 346 + gen_pool_destroy(rdev->rqt_pool); 347 + complete(&rdev->rqt_compl); 349 348 } 350 349 351 350 void c4iw_rqtpool_free(struct c4iw_rdev *rdev, u32 addr, int size) ··· 365 344 rdev->stats.rqt.cur -= roundup(size << 6, 1 << MIN_RQT_SHIFT); 366 345 mutex_unlock(&rdev->stats.lock); 367 346 gen_pool_free(rdev->rqt_pool, (unsigned long)addr, size << 6); 347 + kref_put(&rdev->rqt_kref, destroy_rqtpool); 368 348 } 369 349 370 350 int c4iw_rqtpool_create(struct c4iw_rdev *rdev) ··· 402 380 403 381 void c4iw_rqtpool_destroy(struct c4iw_rdev *rdev) 404 382 { 405 - gen_pool_destroy(rdev->rqt_pool); 383 + kref_put(&rdev->rqt_kref, destroy_rqtpool); 406 384 } 407 385 408 386 /*
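Together with the device.c and iw_cxgb4.h hunks above, this implements "destroy the gen_pool only after the last outstanding allocation is freed": the rdev holds one reference per live allocation plus one of its own, and the close path blocks on a completion that the final kref_put signals. A userspace sketch of the same shutdown shape, with C11 atomics and a POSIX semaphore standing in for kref and completion (all names are illustrative):

#include <semaphore.h>
#include <stdatomic.h>
#include <stdio.h>

struct pool {
    atomic_int kref;     /* 1 for the pool + 1 per live allocation */
    sem_t done;          /* completion: posted by the last put */
};

static void pool_get(struct pool *p) { atomic_fetch_add(&p->kref, 1); }

static void pool_put(struct pool *p)
{
    if (atomic_fetch_sub(&p->kref, 1) == 1) {
        printf("last reference dropped: destroying backing pool\n");
        sem_post(&p->done);          /* complete() analogue */
    }
}

static void alloc_block(struct pool *p) { pool_get(p); }
static void free_block(struct pool *p)  { pool_put(p); }

int main(void)
{
    struct pool p = { .kref = 1 };
    sem_init(&p.done, 0, 0);

    alloc_block(&p);           /* outstanding allocation */
    pool_put(&p);              /* "pool destroy" drops its own ref ... */
    free_block(&p);            /* ... actual teardown waits for this */
    sem_wait(&p.done);         /* wait_for_completion() analogue */
    printf("teardown complete\n");
    return 0;
}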
+5 -6
drivers/infiniband/hw/hfi1/affinity.c
··· 412 412 static int get_irq_affinity(struct hfi1_devdata *dd, 413 413 struct hfi1_msix_entry *msix) 414 414 { 415 - int ret; 416 415 cpumask_var_t diff; 417 416 struct hfi1_affinity_node *entry; 418 417 struct cpu_mask_set *set = NULL; ··· 422 423 423 424 extra[0] = '\0'; 424 425 cpumask_clear(&msix->mask); 425 - 426 - ret = zalloc_cpumask_var(&diff, GFP_KERNEL); 427 - if (!ret) 428 - return -ENOMEM; 429 426 430 427 entry = node_affinity_lookup(dd->node); 431 428 ··· 453 458 * finds its CPU here. 454 459 */ 455 460 if (cpu == -1 && set) { 461 + if (!zalloc_cpumask_var(&diff, GFP_KERNEL)) 462 + return -ENOMEM; 463 + 456 464 if (cpumask_equal(&set->mask, &set->used)) { 457 465 /* 458 466 * We've used up all the CPUs, bump up the generation ··· 467 469 cpumask_andnot(diff, &set->mask, &set->used); 468 470 cpu = cpumask_first(diff); 469 471 cpumask_set_cpu(cpu, &set->used); 472 + 473 + free_cpumask_var(diff); 470 474 } 471 475 472 476 cpumask_set_cpu(cpu, &msix->mask); ··· 482 482 hfi1_setup_sdma_notifier(msix); 483 483 } 484 484 485 - free_cpumask_var(diff); 486 485 return 0; 487 486 } 488 487
+15 -4
drivers/infiniband/hw/hfi1/driver.c
··· 433 433 bool do_cnp) 434 434 { 435 435 struct hfi1_ibport *ibp = to_iport(qp->ibqp.device, qp->port_num); 436 + struct hfi1_pportdata *ppd = ppd_from_ibp(ibp); 436 437 struct ib_other_headers *ohdr = pkt->ohdr; 437 438 struct ib_grh *grh = pkt->grh; 438 439 u32 rqpn = 0, bth1; 439 - u16 pkey, rlid, dlid = ib_get_dlid(pkt->hdr); 440 + u16 pkey; 441 + u32 rlid, slid, dlid = 0; 440 442 u8 hdr_type, sc, svc_type; 441 443 bool is_mcast = false; 442 444 445 + /* can be called from prescan */ 443 446 if (pkt->etype == RHF_RCV_TYPE_BYPASS) { 444 447 is_mcast = hfi1_is_16B_mcast(dlid); 445 448 pkey = hfi1_16B_get_pkey(pkt->hdr); 446 449 sc = hfi1_16B_get_sc(pkt->hdr); 450 + dlid = hfi1_16B_get_dlid(pkt->hdr); 451 + slid = hfi1_16B_get_slid(pkt->hdr); 447 452 hdr_type = HFI1_PKT_TYPE_16B; 448 453 } else { 449 454 is_mcast = (dlid > be16_to_cpu(IB_MULTICAST_LID_BASE)) && 450 455 (dlid != be16_to_cpu(IB_LID_PERMISSIVE)); 451 456 pkey = ib_bth_get_pkey(ohdr); 452 457 sc = hfi1_9B_get_sc5(pkt->hdr, pkt->rhf); 458 + dlid = ib_get_dlid(pkt->hdr); 459 + slid = ib_get_slid(pkt->hdr); 453 460 hdr_type = HFI1_PKT_TYPE_9B; 454 461 } 455 462 456 463 switch (qp->ibqp.qp_type) { 464 + case IB_QPT_UD: 465 + dlid = ppd->lid; 466 + rlid = slid; 467 + rqpn = ib_get_sqpn(pkt->ohdr); 468 + svc_type = IB_CC_SVCTYPE_UD; 469 + break; 457 470 case IB_QPT_SMI: 458 471 case IB_QPT_GSI: 459 - case IB_QPT_UD: 460 - rlid = ib_get_slid(pkt->hdr); 472 + rlid = slid; 461 473 rqpn = ib_get_sqpn(pkt->ohdr); 462 474 svc_type = IB_CC_SVCTYPE_UD; 463 475 break; ··· 494 482 dlid, rlid, sc, grh); 495 483 496 484 if (!is_mcast && (bth1 & IB_BECN_SMASK)) { 497 - struct hfi1_pportdata *ppd = ppd_from_ibp(ibp); 498 485 u32 lqpn = bth1 & RVT_QPN_MASK; 499 486 u8 sl = ibp->sc_to_sl[sc]; 500 487
+4 -4
drivers/infiniband/hw/hfi1/hfi.h
··· 1537 1537 void process_becn(struct hfi1_pportdata *ppd, u8 sl, u32 rlid, u32 lqpn, 1538 1538 u32 rqpn, u8 svc_type); 1539 1539 void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn, 1540 - u32 pkey, u32 slid, u32 dlid, u8 sc5, 1540 + u16 pkey, u32 slid, u32 dlid, u8 sc5, 1541 1541 const struct ib_grh *old_grh); 1542 1542 void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp, 1543 - u32 remote_qpn, u32 pkey, u32 slid, u32 dlid, 1543 + u32 remote_qpn, u16 pkey, u32 slid, u32 dlid, 1544 1544 u8 sc5, const struct ib_grh *old_grh); 1545 1545 typedef void (*hfi1_handle_cnp)(struct hfi1_ibport *ibp, struct rvt_qp *qp, 1546 - u32 remote_qpn, u32 pkey, u32 slid, u32 dlid, 1546 + u32 remote_qpn, u16 pkey, u32 slid, u32 dlid, 1547 1547 u8 sc5, const struct ib_grh *old_grh); 1548 1548 1549 1549 #define PKEY_CHECK_INVALID -1 ··· 2437 2437 ((slid >> OPA_16B_SLID_SHIFT) << OPA_16B_SLID_HIGH_SHIFT); 2438 2438 lrh2 = (lrh2 & ~OPA_16B_DLID_MASK) | 2439 2439 ((dlid >> OPA_16B_DLID_SHIFT) << OPA_16B_DLID_HIGH_SHIFT); 2440 - lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | (pkey << OPA_16B_PKEY_SHIFT); 2440 + lrh2 = (lrh2 & ~OPA_16B_PKEY_MASK) | ((u32)pkey << OPA_16B_PKEY_SHIFT); 2441 2441 lrh2 = (lrh2 & ~OPA_16B_L4_MASK) | l4; 2442 2442 2443 2443 hdr->lrh[0] = lrh0;
+31 -12
drivers/infiniband/hw/hfi1/init.c
··· 88 88 * pio buffers per ctxt, etc.) Zero means use one user context per CPU. 89 89 */ 90 90 int num_user_contexts = -1; 91 - module_param_named(num_user_contexts, num_user_contexts, uint, S_IRUGO); 91 + module_param_named(num_user_contexts, num_user_contexts, int, 0444); 92 92 MODULE_PARM_DESC( 93 - num_user_contexts, "Set max number of user contexts to use"); 93 + num_user_contexts, "Set max number of user contexts to use (default: -1 will use the real (non-HT) CPU count)"); 94 94 95 95 uint krcvqs[RXE_NUM_DATA_VL]; 96 96 int krcvqsset; ··· 1209 1209 kfree(ad); 1210 1210 } 1211 1211 1212 - static void __hfi1_free_devdata(struct kobject *kobj) 1212 + /** 1213 + * hfi1_clean_devdata - cleans up per-unit data structure 1214 + * @dd: pointer to a valid devdata structure 1215 + * 1216 + * It cleans up all data structures set up by 1217 + * hfi1_alloc_devdata(). 1218 + */ 1219 + static void hfi1_clean_devdata(struct hfi1_devdata *dd) 1213 1220 { 1214 - struct hfi1_devdata *dd = 1215 - container_of(kobj, struct hfi1_devdata, kobj); 1216 1221 struct hfi1_asic_data *ad; 1217 1222 unsigned long flags; 1218 1223 1219 1224 spin_lock_irqsave(&hfi1_devs_lock, flags); 1220 - idr_remove(&hfi1_unit_table, dd->unit); 1221 - list_del(&dd->list); 1225 + if (!list_empty(&dd->list)) { 1226 + idr_remove(&hfi1_unit_table, dd->unit); 1227 + list_del_init(&dd->list); 1228 + } 1222 1229 ad = release_asic_data(dd); 1223 1230 spin_unlock_irqrestore(&hfi1_devs_lock, flags); 1224 - if (ad) 1225 - finalize_asic_data(dd, ad); 1231 + 1232 + finalize_asic_data(dd, ad); 1226 1233 free_platform_config(dd); 1227 1234 rcu_barrier(); /* wait for rcu callbacks to complete */ 1228 1235 free_percpu(dd->int_counter); 1229 1236 free_percpu(dd->rcv_limit); 1230 1237 free_percpu(dd->send_schedule); 1231 1238 free_percpu(dd->tx_opstats); 1239 + dd->int_counter = NULL; 1240 + dd->rcv_limit = NULL; 1241 + dd->send_schedule = NULL; 1242 + dd->tx_opstats = NULL; 1232 1243 sdma_clean(dd, dd->num_sdma); 1233 1244 rvt_dealloc_device(&dd->verbs_dev.rdi); 1245 + } 1246 + 1247 + static void __hfi1_free_devdata(struct kobject *kobj) 1248 + { 1249 + struct hfi1_devdata *dd = 1250 + container_of(kobj, struct hfi1_devdata, kobj); 1251 + 1252 + hfi1_clean_devdata(dd); 1234 1253 } 1235 1254 1236 1255 static struct kobj_type hfi1_devdata_type = { ··· 1284 1265 return ERR_PTR(-ENOMEM); 1285 1266 dd->num_pports = nports; 1286 1267 dd->pport = (struct hfi1_pportdata *)(dd + 1); 1268 + dd->pcidev = pdev; 1269 + pci_set_drvdata(pdev, dd); 1287 1270 1288 1271 INIT_LIST_HEAD(&dd->list); 1289 1272 idr_preload(GFP_KERNEL); ··· 1352 1331 return dd; 1353 1332 1354 1333 bail: 1355 - if (!list_empty(&dd->list)) 1356 - list_del_init(&dd->list); 1357 - rvt_dealloc_device(&dd->verbs_dev.rdi); 1334 + hfi1_clean_devdata(dd); 1358 1335 return ERR_PTR(ret); 1359 1336 }
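hfi1_clean_devdata() NULLs each per-cpu pointer after freeing it and uses list_empty() to tolerate a second invocation, so the allocator's error path and the final kobject release can share one teardown routine. A small C sketch of the free-and-NULL idiom that makes double cleanup harmless (the struct is invented for illustration):

#include <stdio.h>
#include <stdlib.h>

struct devdata {
    int *int_counter;
    int *rcv_limit;
};

/* Idempotent teardown: free() tolerates NULL, and NULLing each field
 * afterwards means calling clean_devdata() twice is harmless, so both
 * the allocator's error path and the final release can share it. */
static void clean_devdata(struct devdata *dd)
{
    free(dd->int_counter);  dd->int_counter = NULL;
    free(dd->rcv_limit);    dd->rcv_limit = NULL;
}

int main(void)
{
    struct devdata dd = {
        .int_counter = malloc(sizeof(int)),
        .rcv_limit   = malloc(sizeof(int)),
    };

    clean_devdata(&dd);   /* error path */
    clean_devdata(&dd);   /* final release: no double free */
    printf("cleaned twice without a double free\n");
    return 0;
}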
-3
drivers/infiniband/hw/hfi1/pcie.c
··· 163 163 resource_size_t addr; 164 164 int ret = 0; 165 165 166 - dd->pcidev = pdev; 167 - pci_set_drvdata(pdev, dd); 168 - 169 166 addr = pci_resource_start(pdev, 0); 170 167 len = pci_resource_len(pdev, 0); 171 168
+1
drivers/infiniband/hw/hfi1/platform.c
··· 199 199 { 200 200 /* Release memory allocated for eprom or fallback file read. */ 201 201 kfree(dd->platform_config.data); 202 + dd->platform_config.data = NULL; 202 203 } 203 204 204 205 void get_port_type(struct hfi1_pportdata *ppd)
+2
drivers/infiniband/hw/hfi1/qsfp.c
··· 204 204 205 205 void clean_up_i2c(struct hfi1_devdata *dd, struct hfi1_asic_data *ad) 206 206 { 207 + if (!ad) 208 + return; 207 209 clean_i2c_bus(ad->i2c_bus0); 208 210 ad->i2c_bus0 = NULL; 209 211 clean_i2c_bus(ad->i2c_bus1);
+40 -10
drivers/infiniband/hw/hfi1/ruc.c
··· 733 733 ohdr->bth[2] = cpu_to_be32(bth2); 734 734 } 735 735 736 + /** 737 + * hfi1_make_ruc_header_16B - build a 16B header 738 + * @qp: the queue pair 739 + * @ohdr: a pointer to the destination header memory 740 + * @bth0: bth0 passed in from the RC/UC builder 741 + * @bth2: bth2 passed in from the RC/UC builder 742 + * @middle: non-zero indicates ahg "could" be used 743 + * @ps: the current packet state 744 + * 745 + * This routine may disarm ahg under these situations: 746 + * - packet needs a GRH 747 + * - BECN needed 748 + * - migration state not IB_MIG_MIGRATED 749 + */ 736 750 static inline void hfi1_make_ruc_header_16B(struct rvt_qp *qp, 737 751 struct ib_other_headers *ohdr, 738 752 u32 bth0, u32 bth2, int middle, ··· 791 777 else 792 778 middle = 0; 793 779 780 + if (qp->s_flags & RVT_S_ECN) { 781 + qp->s_flags &= ~RVT_S_ECN; 782 + /* we recently received a FECN, so return a BECN */ 783 + becn = true; 784 + middle = 0; 785 + } 794 786 if (middle) 795 787 build_ahg(qp, bth2); 796 788 else ··· 804 784 805 785 bth0 |= pkey; 806 786 bth0 |= extra_bytes << 20; 807 - if (qp->s_flags & RVT_S_ECN) { 808 - qp->s_flags &= ~RVT_S_ECN; 809 - /* we recently received a FECN, so return a BECN */ 810 - becn = true; 811 - } 812 787 hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2); 813 788 814 789 if (!ppd->lid) ··· 821 806 pkey, becn, 0, l4, priv->s_sc); 822 807 } 823 808 809 + /** 810 + * hfi1_make_ruc_header_9B - build a 9B header 811 + * @qp: the queue pair 812 + * @ohdr: a pointer to the destination header memory 813 + * @bth0: bth0 passed in from the RC/UC builder 814 + * @bth2: bth2 passed in from the RC/UC builder 815 + * @middle: non-zero indicates ahg "could" be used 816 + * @ps: the current packet state 817 + * 818 + * This routine may disarm ahg under these situations: 819 + * - packet needs a GRH 820 + * - BECN needed 821 + * - migration state not IB_MIG_MIGRATED 822 + */ 824 823 static inline void hfi1_make_ruc_header_9B(struct rvt_qp *qp, 825 824 struct ib_other_headers *ohdr, 826 825 u32 bth0, u32 bth2, int middle, ··· 868 839 else 869 840 middle = 0; 870 841 842 + if (qp->s_flags & RVT_S_ECN) { 843 + qp->s_flags &= ~RVT_S_ECN; 844 + /* we recently received a FECN, so return a BECN */ 845 + bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT); 846 + middle = 0; 847 + } 871 848 if (middle) 872 849 build_ahg(qp, bth2); 873 850 else ··· 881 846 882 847 bth0 |= pkey; 883 848 bth0 |= extra_bytes << 20; 884 - if (qp->s_flags & RVT_S_ECN) { 885 - qp->s_flags &= ~RVT_S_ECN; 886 - /* we recently received a FECN, so return a BECN */ 887 - bth1 |= (IB_BECN_MASK << IB_BECN_SHIFT); 888 - } 889 849 hfi1_make_ruc_bth(qp, ohdr, bth0, bth1, bth2); 890 850 hfi1_make_ib_hdr(&ps->s_txreq->phdr.hdr.ibh, 891 851 lrh0,
+2 -2
drivers/infiniband/hw/hfi1/ud.c
··· 628 628 } 629 629 630 630 void return_cnp_16B(struct hfi1_ibport *ibp, struct rvt_qp *qp, 631 - u32 remote_qpn, u32 pkey, u32 slid, u32 dlid, 631 + u32 remote_qpn, u16 pkey, u32 slid, u32 dlid, 632 632 u8 sc5, const struct ib_grh *old_grh) 633 633 { 634 634 u64 pbc, pbc_flags = 0; ··· 687 687 } 688 688 689 689 void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn, 690 - u32 pkey, u32 slid, u32 dlid, u8 sc5, 690 + u16 pkey, u32 slid, u32 dlid, u8 sc5, 691 691 const struct ib_grh *old_grh) 692 692 { 693 693 u64 pbc, pbc_flags = 0;
+6 -6
drivers/infiniband/hw/hns/hns_roce_hem.c
··· 912 912 obj_per_chunk = buf_chunk_size / obj_size; 913 913 num_hem = (nobj + obj_per_chunk - 1) / obj_per_chunk; 914 914 bt_chunk_num = bt_chunk_size / 8; 915 - if (table->type >= HEM_TYPE_MTT) 915 + if (type >= HEM_TYPE_MTT) 916 916 num_bt_l0 = bt_chunk_num; 917 917 918 918 table->hem = kcalloc(num_hem, sizeof(*table->hem), ··· 920 920 if (!table->hem) 921 921 goto err_kcalloc_hem_buf; 922 922 923 - if (check_whether_bt_num_3(table->type, hop_num)) { 923 + if (check_whether_bt_num_3(type, hop_num)) { 924 924 unsigned long num_bt_l1; 925 925 926 926 num_bt_l1 = (num_hem + bt_chunk_num - 1) / ··· 939 939 goto err_kcalloc_l1_dma; 940 940 } 941 941 942 - if (check_whether_bt_num_2(table->type, hop_num) || 943 - check_whether_bt_num_3(table->type, hop_num)) { 942 + if (check_whether_bt_num_2(type, hop_num) || 943 + check_whether_bt_num_3(type, hop_num)) { 944 944 table->bt_l0 = kcalloc(num_bt_l0, sizeof(*table->bt_l0), 945 945 GFP_KERNEL); 946 946 if (!table->bt_l0) ··· 1039 1039 void hns_roce_cleanup_hem(struct hns_roce_dev *hr_dev) 1040 1040 { 1041 1041 hns_roce_cleanup_hem_table(hr_dev, &hr_dev->cq_table.table); 1042 - hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.irrl_table); 1043 1042 if (hr_dev->caps.trrl_entry_sz) 1044 1043 hns_roce_cleanup_hem_table(hr_dev, 1045 1044 &hr_dev->qp_table.trrl_table); 1045 + hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.irrl_table); 1046 1046 hns_roce_cleanup_hem_table(hr_dev, &hr_dev->qp_table.qp_table); 1047 1047 hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtpt_table); 1048 - hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtt_table); 1049 1048 if (hns_roce_check_whether_mhop(hr_dev, HEM_TYPE_CQE)) 1050 1049 hns_roce_cleanup_hem_table(hr_dev, 1051 1050 &hr_dev->mr_table.mtt_cqe_table); 1051 + hns_roce_cleanup_hem_table(hr_dev, &hr_dev->mr_table.mtt_table); 1052 1052 }
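Two independent fixes in this hunk: the setup path now reads the type parameter rather than table->type, which has not been assigned yet at that point, and hns_roce_cleanup_hem() tears the tables down in the reverse of their creation order. A tiny C sketch of LIFO teardown (the table names are illustrative):

#include <stdio.h>

static void create(const char *name)  { printf("create  %s\n", name); }
static void destroy(const char *name) { printf("destroy %s\n", name); }

int main(void)
{
    const char *tables[] = { "mtt", "mtpt", "qp", "irrl" };
    const int n = sizeof(tables) / sizeof(tables[0]);

    for (int i = 0; i < n; i++)
        create(tables[i]);

    /* Tear down in reverse creation order, as the reordered cleanup
     * now does: dependents go before the tables they depend on. */
    for (int i = n - 1; i >= 0; i--)
        destroy(tables[i]);
    return 0;
}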
+29 -20
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 71 71 return -EINVAL; 72 72 } 73 73 74 + if (wr->opcode == IB_WR_RDMA_READ) { 75 + dev_err(hr_dev->dev, "Not support inline data!\n"); 76 + return -EINVAL; 77 + } 78 + 74 79 for (i = 0; i < wr->num_sge; i++) { 75 80 memcpy(wqe, ((void *)wr->sg_list[i].addr), 76 81 wr->sg_list[i].length); ··· 153 148 ibqp->qp_type != IB_QPT_GSI && 154 149 ibqp->qp_type != IB_QPT_UD)) { 155 150 dev_err(dev, "Not supported QP(0x%x)type!\n", ibqp->qp_type); 156 - *bad_wr = NULL; 151 + *bad_wr = wr; 157 152 return -EOPNOTSUPP; 158 153 } ··· 187 182 qp->sq.wrid[(qp->sq.head + nreq) & (qp->sq.wqe_cnt - 1)] = 188 183 wr->wr_id; 189 184 190 - owner_bit = ~(qp->sq.head >> ilog2(qp->sq.wqe_cnt)) & 0x1; 185 + owner_bit = 186 + ~(((qp->sq.head + nreq) >> ilog2(qp->sq.wqe_cnt)) & 0x1); 191 187 192 188 /* Corresponding to the QP type, wqe process separately */ 193 189 if (ibqp->qp_type == IB_QPT_GSI) { ··· 462 456 } else { 463 457 dev_err(dev, "Illegal qp_type(0x%x)\n", ibqp->qp_type); 464 458 spin_unlock_irqrestore(&qp->sq.lock, flags); 459 + *bad_wr = wr; 465 460 return -EOPNOTSUPP; 466 461 } 467 462 } ··· 2599 2592 roce_set_field(qpc_mask->byte_4_sqpn_tst, V2_QPC_BYTE_4_SQPN_M, 2600 2593 V2_QPC_BYTE_4_SQPN_S, 0); 2601 2594 2602 - roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2603 - V2_QPC_BYTE_56_DQPN_S, hr_qp->qpn); 2604 - roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2605 - V2_QPC_BYTE_56_DQPN_S, 0); 2595 + if (attr_mask & IB_QP_DEST_QPN) { 2596 + roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2597 + V2_QPC_BYTE_56_DQPN_S, hr_qp->qpn); 2598 + roce_set_field(qpc_mask->byte_56_dqpn_err, 2599 + V2_QPC_BYTE_56_DQPN_M, V2_QPC_BYTE_56_DQPN_S, 0); 2600 + } 2606 2601 roce_set_field(context->byte_168_irrl_idx, 2607 2602 V2_QPC_BYTE_168_SQ_SHIFT_BAK_M, 2608 2603 V2_QPC_BYTE_168_SQ_SHIFT_BAK_S, ··· 2659 2650 return -EINVAL; 2660 2651 } 2661 2652 2662 - if ((attr_mask & IB_QP_ALT_PATH) || (attr_mask & IB_QP_ACCESS_FLAGS) || 2663 - (attr_mask & IB_QP_PKEY_INDEX) || (attr_mask & IB_QP_QKEY)) { 2653 + if (attr_mask & IB_QP_ALT_PATH) { 2664 2654 dev_err(dev, "INIT2RTR attr_mask (0x%x) error\n", attr_mask); 2665 2655 return -EINVAL; 2666 2656 } ··· 2808 2800 V2_QPC_BYTE_140_RR_MAX_S, 0); 2809 2801 } 2810 2802 2811 - roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2812 - V2_QPC_BYTE_56_DQPN_S, attr->dest_qp_num); 2813 - roce_set_field(qpc_mask->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2814 - V2_QPC_BYTE_56_DQPN_S, 0); 2803 + if (attr_mask & IB_QP_DEST_QPN) { 2804 + roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M, 2805 + V2_QPC_BYTE_56_DQPN_S, attr->dest_qp_num); 2806 + roce_set_field(qpc_mask->byte_56_dqpn_err, 2807 + V2_QPC_BYTE_56_DQPN_M, V2_QPC_BYTE_56_DQPN_S, 0); 2808 + } 2815 2809 2816 2810 /* Configure GID index */ 2817 2811 port_num = rdma_ah_get_port_num(&attr->ah_attr); ··· 2855 2845 if (ibqp->qp_type == IB_QPT_GSI || ibqp->qp_type == IB_QPT_UD) 2856 2846 roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M, 2857 2847 V2_QPC_BYTE_24_MTU_S, IB_MTU_4096); 2858 - else 2848 + else if (attr_mask & IB_QP_PATH_MTU) 2859 2849 roce_set_field(context->byte_24_mtu_tc, V2_QPC_BYTE_24_MTU_M, 2860 2850 V2_QPC_BYTE_24_MTU_S, attr->path_mtu); 2861 2851 ··· 2932 2922 return -EINVAL; 2933 2923 } 2934 2924 2935 - /* If exist optional param, return error */ 2936 - if ((attr_mask & IB_QP_ALT_PATH) || (attr_mask & IB_QP_ACCESS_FLAGS) || 2937 - (attr_mask & IB_QP_QKEY) || (attr_mask & IB_QP_PATH_MIG_STATE) || 2938 - (attr_mask & IB_QP_CUR_STATE) || 2939 - (attr_mask & IB_QP_MIN_RNR_TIMER)) { 2925 + /* Not support alternate path and path migration */ 2926 + if ((attr_mask & IB_QP_ALT_PATH) || 2927 + (attr_mask & IB_QP_PATH_MIG_STATE)) { 2940 2928 dev_err(dev, "RTR2RTS attr_mask (0x%x)error\n", attr_mask); 2941 2929 return -EINVAL; 2942 2930 } ··· 3169 3161 (cur_state == IB_QPS_RTR && new_state == IB_QPS_ERR) || 3170 3162 (cur_state == IB_QPS_RTS && new_state == IB_QPS_ERR) || 3171 3163 (cur_state == IB_QPS_SQD && new_state == IB_QPS_ERR) || 3172 - (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR)) { 3164 + (cur_state == IB_QPS_SQE && new_state == IB_QPS_ERR) || 3165 + (cur_state == IB_QPS_ERR && new_state == IB_QPS_ERR)) { 3173 3166 /* Nothing */ 3174 3167 ; 3175 3168 } else { ··· 4487 4478 ret = hns_roce_cmd_mbox(hr_dev, mailbox->dma, 0, eq->eqn, 0, 4488 4479 eq_cmd, HNS_ROCE_CMD_TIMEOUT_MSECS); 4489 4480 if (ret) { 4490 - dev_err(dev, "[mailbox cmd] creat eqc failed.\n"); 4481 + dev_err(dev, "[mailbox cmd] create eqc failed.\n"); 4491 4482 goto err_cmd_mbox; 4492 4483 }
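The recurring pattern in the hns_roce_hw_v2.c hunk is mask-gated modification: dest_qp_num and path_mtu are consumed only when the corresponding attr_mask bit says the caller actually set them, since the fields are otherwise uninitialized input. A compact C sketch of the pattern (the flag names are invented):

#include <stdio.h>

#define ATTR_DEST_QPN  (1u << 0)
#define ATTR_PATH_MTU  (1u << 1)

struct qp_attr { unsigned mask; int dest_qpn; int path_mtu; };
struct qp_ctx  { int dest_qpn; int path_mtu; };

/* Copy only the fields whose mask bit is set; untouched fields keep
 * their previous value instead of absorbing uninitialized input. */
static void modify_qp(struct qp_ctx *ctx, const struct qp_attr *attr)
{
    if (attr->mask & ATTR_DEST_QPN)
        ctx->dest_qpn = attr->dest_qpn;
    if (attr->mask & ATTR_PATH_MTU)
        ctx->path_mtu = attr->path_mtu;
}

int main(void)
{
    struct qp_ctx ctx = { .dest_qpn = 7, .path_mtu = 1024 };
    struct qp_attr attr = { .mask = ATTR_DEST_QPN, .dest_qpn = 42 };

    modify_qp(&ctx, &attr);
    printf("qpn=%d mtu=%d\n", ctx.dest_qpn, ctx.path_mtu); /* 42 1024 */
    return 0;
}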
+1 -1
drivers/infiniband/hw/hns/hns_roce_qp.c
··· 620 620 to_hr_ucontext(ib_pd->uobject->context), 621 621 ucmd.db_addr, &hr_qp->rdb); 622 622 if (ret) { 623 - dev_err(dev, "rp record doorbell map failed!\n"); 623 + dev_err(dev, "rq record doorbell map failed!\n"); 624 624 goto err_mtt; 625 625 } 626 626 }
+1 -1
drivers/infiniband/hw/mlx4/mr.c
··· 346 346 /* Add to the first block the misalignment that it suffers from. */ 347 347 total_len += (first_block_start & ((1ULL << block_shift) - 1ULL)); 348 348 last_block_end = current_block_start + current_block_len; 349 - last_block_aligned_end = round_up(last_block_end, 1 << block_shift); 349 + last_block_aligned_end = round_up(last_block_end, 1ULL << block_shift); 350 350 total_len += (last_block_aligned_end - last_block_end); 351 351 352 352 if (total_len & ((1ULL << block_shift) - 1ULL))
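The mlx4 fix is a classic width bug: 1 << block_shift is int arithmetic, so for a block shift of 31 or more the mask is computed (and sign-extended) wrong before it ever reaches the 64-bit round_up. A short standalone demonstration; the values are arbitrary:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    unsigned block_shift = 31;

    /* "1 << 31" is int arithmetic: in practice it yields INT_MIN
     * (formally undefined) and sign-extends to 0xffffffff80000000
     * when widened for a 64-bit round_up. */
    uint64_t bad  = (uint64_t)(1 << block_shift);
    /* Doing the arithmetic in 64 bits from the start is the fix. */
    uint64_t good = 1ULL << block_shift;

    printf("1   << 31 widened: 0x%016" PRIx64 "\n", bad);
    printf("1ULL << 31       : 0x%016" PRIx64 "\n", good);

    uint64_t end = 0x180000000ull;   /* illustrative block end */
    printf("round_up: 0x%" PRIx64 "\n", (end + good - 1) & ~(good - 1));
    return 0;
}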
+2 -1
drivers/infiniband/hw/mlx4/qp.c
··· 673 673 MLX4_IB_RX_HASH_SRC_PORT_TCP | 674 674 MLX4_IB_RX_HASH_DST_PORT_TCP | 675 675 MLX4_IB_RX_HASH_SRC_PORT_UDP | 676 - MLX4_IB_RX_HASH_DST_PORT_UDP)) { 676 + MLX4_IB_RX_HASH_DST_PORT_UDP | 677 + MLX4_IB_RX_HASH_INNER)) { 677 678 pr_debug("RX Hash fields_mask has unsupported mask (0x%llx)\n", 678 679 ucmd->rx_hash_fields_mask); 679 680 return (-EOPNOTSUPP);
+1
drivers/infiniband/hw/mlx5/Kconfig
··· 1 1 config MLX5_INFINIBAND 2 2 tristate "Mellanox Connect-IB HCA support" 3 3 depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE 4 + depends on INFINIBAND_USER_ACCESS || INFINIBAND_USER_ACCESS=n 4 5 ---help--- 5 6 This driver provides low-level InfiniBand support for 6 7 Mellanox Connect-IB PCI Express host channel adapters (HCAs).
+3 -6
drivers/infiniband/hw/mlx5/main.c
··· 52 52 #include <linux/mlx5/port.h> 53 53 #include <linux/mlx5/vport.h> 54 54 #include <linux/mlx5/fs.h> 55 - #include <linux/mlx5/fs_helpers.h> 56 55 #include <linux/list.h> 57 56 #include <rdma/ib_smi.h> 58 57 #include <rdma/ib_umem.h> ··· 179 180 if (rep_ndev == ndev) 180 181 roce->netdev = (event == NETDEV_UNREGISTER) ? 181 182 NULL : ndev; 182 - } else if (ndev->dev.parent == &ibdev->mdev->pdev->dev) { 183 + } else if (ndev->dev.parent == &mdev->pdev->dev) { 183 184 roce->netdev = (event == NETDEV_UNREGISTER) ? 184 185 NULL : ndev; 185 186 } ··· 4756 4757 { 4757 4758 struct mlx5_ib_dev *dev = to_mdev(ibdev); 4758 4759 4759 - return mlx5_get_vector_affinity(dev->mdev, comp_vector); 4760 + return mlx5_get_vector_affinity_hint(dev->mdev, comp_vector); 4760 4761 } 4761 4762 4762 4763 /* The mlx5_ib_multiport_mutex should be held when calling this function */ ··· 5426 5427 static int mlx5_ib_stage_uar_init(struct mlx5_ib_dev *dev) 5427 5428 { 5428 5429 dev->mdev->priv.uar = mlx5_get_uars_page(dev->mdev); 5429 - if (!dev->mdev->priv.uar) 5430 - return -ENOMEM; 5431 - return 0; 5430 + return PTR_ERR_OR_ZERO(dev->mdev->priv.uar); 5432 5431 } 5433 5432 5434 5433 static void mlx5_ib_stage_uar_cleanup(struct mlx5_ib_dev *dev)
+23 -9
drivers/infiniband/hw/mlx5/mr.c
··· 866 866 int *order) 867 867 { 868 868 struct mlx5_ib_dev *dev = to_mdev(pd->device); 869 + struct ib_umem *u; 869 870 int err; 870 871 871 - *umem = ib_umem_get(pd->uobject->context, start, length, 872 - access_flags, 0); 873 - err = PTR_ERR_OR_ZERO(*umem); 872 + *umem = NULL; 873 + 874 + u = ib_umem_get(pd->uobject->context, start, length, access_flags, 0); 875 + err = PTR_ERR_OR_ZERO(u); 874 876 if (err) { 875 - *umem = NULL; 876 - mlx5_ib_err(dev, "umem get failed (%d)\n", err); 877 + mlx5_ib_dbg(dev, "umem get failed (%d)\n", err); 877 878 return err; 878 879 } 879 880 880 - mlx5_ib_cont_pages(*umem, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages, 881 + mlx5_ib_cont_pages(u, start, MLX5_MKEY_PAGE_SHIFT_MASK, npages, 881 882 page_shift, ncont, order); 882 883 if (!*npages) { 883 884 mlx5_ib_warn(dev, "avoid zero region\n"); 884 - ib_umem_release(*umem); 885 + ib_umem_release(u); 885 886 return -EINVAL; 886 887 } 888 + 889 + *umem = u; 887 890 888 891 mlx5_ib_dbg(dev, "npages %d, ncont %d, order %d, page_shift %d\n", 889 892 *npages, *ncont, *order, *page_shift); ··· 1461 1458 int access_flags = flags & IB_MR_REREG_ACCESS ? 1462 1459 new_access_flags : 1463 1460 mr->access_flags; 1464 - u64 addr = (flags & IB_MR_REREG_TRANS) ? virt_addr : mr->umem->address; 1465 - u64 len = (flags & IB_MR_REREG_TRANS) ? length : mr->umem->length; 1466 1461 int page_shift = 0; 1467 1462 int upd_flags = 0; 1468 1463 int npages = 0; 1469 1464 int ncont = 0; 1470 1465 int order = 0; 1466 + u64 addr, len; 1471 1467 int err; 1472 1468 1473 1469 mlx5_ib_dbg(dev, "start 0x%llx, virt_addr 0x%llx, length 0x%llx, access_flags 0x%x\n", 1474 1470 start, virt_addr, length, access_flags); 1475 1471 1476 1472 atomic_sub(mr->npages, &dev->mdev->priv.reg_pages); 1473 + 1474 + if (!mr->umem) 1475 + return -EINVAL; 1476 + 1477 + if (flags & IB_MR_REREG_TRANS) { 1478 + addr = virt_addr; 1479 + len = length; 1480 + } else { 1481 + addr = mr->umem->address; 1482 + len = mr->umem->length; 1483 + } 1477 1484 1478 1485 if (flags != IB_MR_REREG_PD) { 1479 1486 /* ··· 1492 1479 */ 1493 1480 flags |= IB_MR_REREG_TRANS; 1494 1481 ib_umem_release(mr->umem); 1482 + mr->umem = NULL; 1495 1483 err = mr_umem_get(pd, addr, len, access_flags, &mr->umem, 1496 1484 &npages, &page_shift, &ncont, &order); 1497 1485 if (err)
+14 -10
drivers/infiniband/hw/mlx5/qp.c
··· 259 259 } else { 260 260 if (ucmd) { 261 261 qp->rq.wqe_cnt = ucmd->rq_wqe_count; 262 + if (ucmd->rq_wqe_shift > BITS_PER_BYTE * sizeof(ucmd->rq_wqe_shift)) 263 + return -EINVAL; 262 264 qp->rq.wqe_shift = ucmd->rq_wqe_shift; 265 + if ((1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) < qp->wq_sig) 266 + return -EINVAL; 263 267 qp->rq.max_gs = (1 << qp->rq.wqe_shift) / sizeof(struct mlx5_wqe_data_seg) - qp->wq_sig; 264 268 qp->rq.max_post = qp->rq.wqe_cnt; 265 269 } else { ··· 2455 2451 2456 2452 static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate) 2457 2453 { 2458 - if (rate == IB_RATE_PORT_CURRENT) { 2454 + if (rate == IB_RATE_PORT_CURRENT) 2459 2455 return 0; 2460 - } else if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) { 2461 - return -EINVAL; 2462 - } else { 2463 - while (rate != IB_RATE_2_5_GBPS && 2464 - !(1 << (rate + MLX5_STAT_RATE_OFFSET) & 2465 - MLX5_CAP_GEN(dev->mdev, stat_rate_support))) 2466 - --rate; 2467 - } 2468 2456 2469 - return rate + MLX5_STAT_RATE_OFFSET; 2457 + if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_300_GBPS) 2458 + return -EINVAL; 2459 + 2460 + while (rate != IB_RATE_PORT_CURRENT && 2461 + !(1 << (rate + MLX5_STAT_RATE_OFFSET) & 2462 + MLX5_CAP_GEN(dev->mdev, stat_rate_support))) 2463 + --rate; 2464 + 2465 + return rate ? rate + MLX5_STAT_RATE_OFFSET : rate; 2470 2466 } 2471 2467 2472 2468 static int modify_raw_packet_eth_prio(struct mlx5_core_dev *dev,
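Two defensive checks are added on the mlx5 receive-queue path: the user-supplied rq_wqe_shift is bounded before it is used as a shift amount (shifting by at least the width of the type is undefined), and the derived max_gs is checked before the subtraction can go negative. A sketch of validating an untrusted shift, with illustrative limits and struct sizes:

#include <errno.h>
#include <limits.h>
#include <stdio.h>

struct seg { char bytes[16]; };

/* Reject the shift before using it: "1 << s" is undefined for s >= the
 * bit width, and a tiny stride could make max_gs negative. */
static int set_rq_size(unsigned char wqe_shift, int wq_sig, int *max_gs)
{
    if (wqe_shift > CHAR_BIT * sizeof(wqe_shift))
        return -EINVAL;
    if ((1 << wqe_shift) / (int)sizeof(struct seg) < wq_sig)
        return -EINVAL;

    *max_gs = (1 << wqe_shift) / (int)sizeof(struct seg) - wq_sig;
    return 0;
}

int main(void)
{
    int max_gs;

    printf("%d\n", set_rq_size(200, 0, &max_gs));  /* -EINVAL */
    printf("%d\n", set_rq_size(6, 1, &max_gs));    /* 0, max_gs == 3 */
    return 0;
}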
+1 -1
drivers/infiniband/hw/nes/nes_nic.c
··· 461 461 /** 462 462 * nes_netdev_start_xmit 463 463 */ 464 - static int nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev) 464 + static netdev_tx_t nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev) 465 465 { 466 466 struct nes_vnic *nesvnic = netdev_priv(netdev); 467 467 struct nes_device *nesdev = nesvnic->nesdev;
+1 -1
drivers/infiniband/sw/rxe/rxe_opcode.c
··· 390 390 .name = "IB_OPCODE_RC_SEND_ONLY_INV", 391 391 .mask = RXE_IETH_MASK | RXE_PAYLOAD_MASK | RXE_REQ_MASK 392 392 | RXE_COMP_MASK | RXE_RWR_MASK | RXE_SEND_MASK 393 - | RXE_END_MASK, 393 + | RXE_END_MASK | RXE_START_MASK, 394 394 .length = RXE_BTH_BYTES + RXE_IETH_BYTES, 395 395 .offset = { 396 396 [RXE_BTH] = 0,
-1
drivers/infiniband/sw/rxe/rxe_req.c
··· 728 728 rollback_state(wqe, qp, &rollback_wqe, rollback_psn); 729 729 730 730 if (ret == -EAGAIN) { 731 - kfree_skb(skb); 732 731 rxe_run_task(&qp->req.task, 1); 733 732 goto exit; 734 733 }
+1 -5
drivers/infiniband/sw/rxe/rxe_resp.c
··· 742 742 err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb); 743 743 if (err) { 744 744 pr_err("Failed sending RDMA reply.\n"); 745 - kfree_skb(skb); 746 745 return RESPST_ERR_RNR; 747 746 } 748 747 ··· 953 954 } 954 955 955 956 err = rxe_xmit_packet(rxe, qp, &ack_pkt, skb); 956 - if (err) { 957 + if (err) 957 958 pr_err_ratelimited("Failed sending ack\n"); 958 - kfree_skb(skb); 959 - } 960 959 961 960 err1: 962 961 return err; ··· 1138 1141 if (rc) { 1139 1142 pr_err("Failed resending result. This flow is not handled - skb ignored\n"); 1140 1143 rxe_drop_ref(qp); 1141 - kfree_skb(skb_copy); 1142 1144 rc = RESPST_CLEANUP; 1143 1145 goto out; 1144 1146 }
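The rxe hunks all delete kfree_skb() calls after rxe_xmit_packet() failures: the transmit path already consumes the skb on both success and error, so freeing again was a double free. A userspace sketch of such a consume-even-on-error ownership contract (the function names are invented):

#include <stdio.h>
#include <stdlib.h>

struct buf { char data[64]; };

/* Contract: xmit() takes ownership of 'b' unconditionally -- it frees
 * the buffer itself on failure, so callers must never free it again. */
static int xmit(struct buf *b, int simulate_error)
{
    if (simulate_error) {
        free(b);          /* consumed on the error path too */
        return -1;
    }
    /* ... hand off to the device ... */
    free(b);
    return 0;
}

int main(void)
{
    struct buf *b = malloc(sizeof(*b));

    if (xmit(b, 1) != 0)
        fprintf(stderr, "xmit failed; buffer already consumed\n");
    /* no free(b) here: that was the double free the patches remove */
    return 0;
}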
+1 -1
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 1094 1094 spin_unlock_irqrestore(&priv->lock, flags); 1095 1095 } 1096 1096 1097 - static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev) 1097 + static netdev_tx_t ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev) 1098 1098 { 1099 1099 struct ipoib_dev_priv *priv = ipoib_priv(dev); 1100 1100 struct rdma_netdev *rn = netdev_priv(dev);
+1 -1
drivers/infiniband/ulp/srp/Kconfig
··· 1 1 config INFINIBAND_SRP 2 2 tristate "InfiniBand SCSI RDMA Protocol" 3 - depends on SCSI 3 + depends on SCSI && INFINIBAND_ADDR_TRANS 4 4 select SCSI_SRP_ATTRS 5 5 ---help--- 6 6 Support for the SCSI RDMA Protocol over InfiniBand. This
+1 -1
drivers/infiniband/ulp/srpt/Kconfig
··· 1 1 config INFINIBAND_SRPT 2 2 tristate "InfiniBand SCSI RDMA Protocol target support" 3 - depends on INFINIBAND && TARGET_CORE 3 + depends on INFINIBAND && INFINIBAND_ADDR_TRANS && TARGET_CORE 4 4 ---help--- 5 5 6 6 Support for the SCSI RDMA Protocol (SRP) Target driver. The
+5 -5
drivers/input/input-leds.c
··· 88 88 const struct input_device_id *id) 89 89 { 90 90 struct input_leds *leds; 91 + struct input_led *led; 91 92 unsigned int num_leds; 92 93 unsigned int led_code; 93 94 int led_no; ··· 120 119 121 120 led_no = 0; 122 121 for_each_set_bit(led_code, dev->ledbit, LED_CNT) { 123 - struct input_led *led = &leds->leds[led_no]; 124 - 125 - led->handle = &leds->handle; 126 - led->code = led_code; 127 - 128 122 if (!input_led_info[led_code].name) 129 123 continue; 124 + 125 + led = &leds->leds[led_no]; 126 + led->handle = &leds->handle; 127 + led->code = led_code; 130 128 131 129 led->cdev.name = kasprintf(GFP_KERNEL, "%s::%s", 132 130 dev_name(&dev->dev),
+1 -1
drivers/input/mouse/alps.c
··· 583 583 584 584 x = (s8)(((packet[0] & 0x20) << 2) | (packet[1] & 0x7f)); 585 585 y = (s8)(((packet[0] & 0x10) << 3) | (packet[2] & 0x7f)); 586 - z = packet[4] & 0x7c; 586 + z = packet[4] & 0x7f; 587 587 588 588 /* 589 589 * The x and y values tend to be quite large, and when used
+5 -2
drivers/input/rmi4/rmi_spi.c
··· 147 147 if (len > RMI_SPI_XFER_SIZE_LIMIT) 148 148 return -EINVAL; 149 149 150 - if (rmi_spi->xfer_buf_size < len) 151 - rmi_spi_manage_pools(rmi_spi, len); 150 + if (rmi_spi->xfer_buf_size < len) { 151 + ret = rmi_spi_manage_pools(rmi_spi, len); 152 + if (ret < 0) 153 + return ret; 154 + } 152 155 153 156 if (addr == 0) 154 157 /*
+1 -1
drivers/input/touchscreen/Kconfig
··· 362 362 363 363 If unsure, say N. 364 364 365 - To compile this driver as a moudle, choose M here : the 365 + To compile this driver as a module, choose M here : the 366 366 module will be called hideep_ts. 367 367 368 368 config TOUCHSCREEN_ILI210X
+124 -76
drivers/input/touchscreen/atmel_mxt_ts.c
··· 280 280 struct input_dev *input_dev; 281 281 char phys[64]; /* device physical location */ 282 282 struct mxt_object *object_table; 283 - struct mxt_info info; 283 + struct mxt_info *info; 284 + void *raw_info_block; 284 285 unsigned int irq; 285 286 unsigned int max_x; 286 287 unsigned int max_y; ··· 461 460 { 462 461 u8 appmode = data->client->addr; 463 462 u8 bootloader; 463 + u8 family_id = data->info ? data->info->family_id : 0; 464 464 465 465 switch (appmode) { 466 466 case 0x4a: 467 467 case 0x4b: 468 468 /* Chips after 1664S use different scheme */ 469 - if (retry || data->info.family_id >= 0xa2) { 469 + if (retry || family_id >= 0xa2) { 470 470 bootloader = appmode - 0x24; 471 471 break; 472 472 } ··· 694 692 struct mxt_object *object; 695 693 int i; 696 694 697 - for (i = 0; i < data->info.object_num; i++) { 695 + for (i = 0; i < data->info->object_num; i++) { 698 696 object = data->object_table + i; 699 697 if (object->type == type) 700 698 return object; ··· 1464 1462 data_pos += offset; 1465 1463 } 1466 1464 1467 - if (cfg_info.family_id != data->info.family_id) { 1465 + if (cfg_info.family_id != data->info->family_id) { 1468 1466 dev_err(dev, "Family ID mismatch!\n"); 1469 1467 return -EINVAL; 1470 1468 } 1471 1469 1472 - if (cfg_info.variant_id != data->info.variant_id) { 1470 + if (cfg_info.variant_id != data->info->variant_id) { 1473 1471 dev_err(dev, "Variant ID mismatch!\n"); 1474 1472 return -EINVAL; 1475 1473 } ··· 1514 1512 1515 1513 /* Malloc memory to store configuration */ 1516 1514 cfg_start_ofs = MXT_OBJECT_START + 1517 - data->info.object_num * sizeof(struct mxt_object) + 1515 + data->info->object_num * sizeof(struct mxt_object) + 1518 1516 MXT_INFO_CHECKSUM_SIZE; 1519 1517 config_mem_size = data->mem_size - cfg_start_ofs; 1520 1518 config_mem = kzalloc(config_mem_size, GFP_KERNEL); ··· 1565 1563 return ret; 1566 1564 } 1567 1565 1568 - static int mxt_get_info(struct mxt_data *data) 1569 - { 1570 - struct i2c_client *client = data->client; 1571 - struct mxt_info *info = &data->info; 1572 - int error; 1573 - 1574 - /* Read 7-byte info block starting at address 0 */ 1575 - error = __mxt_read_reg(client, 0, sizeof(*info), info); 1576 - if (error) 1577 - return error; 1578 - 1579 - return 0; 1580 - } 1581 - 1582 1566 static void mxt_free_input_device(struct mxt_data *data) 1583 1567 { 1584 1568 if (data->input_dev) { ··· 1579 1591 video_unregister_device(&data->dbg.vdev); 1580 1592 v4l2_device_unregister(&data->dbg.v4l2); 1581 1593 #endif 1582 - 1583 - kfree(data->object_table); 1584 1594 data->object_table = NULL; 1595 + data->info = NULL; 1596 + kfree(data->raw_info_block); 1597 + data->raw_info_block = NULL; 1585 1598 kfree(data->msg_buf); 1586 1599 data->msg_buf = NULL; 1587 1600 data->T5_address = 0; ··· 1598 1609 data->max_reportid = 0; 1599 1610 } 1600 1611 1601 - static int mxt_get_object_table(struct mxt_data *data) 1612 + static int mxt_parse_object_table(struct mxt_data *data, 1613 + struct mxt_object *object_table) 1602 1614 { 1603 1615 struct i2c_client *client = data->client; 1604 - size_t table_size; 1605 - struct mxt_object *object_table; 1606 - int error; 1607 1616 int i; 1608 1617 u8 reportid; 1609 1618 u16 end_address; 1610 1619 1611 - table_size = data->info.object_num * sizeof(struct mxt_object); 1612 - object_table = kzalloc(table_size, GFP_KERNEL); 1613 - if (!object_table) { 1614 - dev_err(&data->client->dev, "Failed to allocate memory\n"); 1615 - return -ENOMEM; 1616 - } 1617 - 1618 - error = __mxt_read_reg(client, MXT_OBJECT_START, table_size, 1619 - object_table); 1620 - if (error) { 1621 - kfree(object_table); 1622 - return error; 1623 - } 1624 - 1625 1620 /* Valid Report IDs start counting from 1 */ 1626 1621 reportid = 1; 1627 1622 data->mem_size = 0; 1628 - for (i = 0; i < data->info.object_num; i++) { 1623 + for (i = 0; i < data->info->object_num; i++) { 1629 1624 struct mxt_object *object = object_table + i; 1630 1625 u8 min_id, max_id; 1631 1626 ··· 1633 1660 1634 1661 switch (object->type) { 1635 1662 case MXT_GEN_MESSAGE_T5: 1636 - if (data->info.family_id == 0x80 && 1637 - data->info.version < 0x20) { 1663 + if (data->info->family_id == 0x80 && 1664 + data->info->version < 0x20) { 1638 1665 /* 1639 1666 * On mXT224 firmware versions prior to V2.0 1640 1667 * read and discard unused CRC byte otherwise ··· 1689 1716 /* If T44 exists, T5 position has to be directly after */ 1690 1717 if (data->T44_address && (data->T5_address != data->T44_address + 1)) { 1691 1718 dev_err(&client->dev, "Invalid T44 position\n"); 1692 - error = -EINVAL; 1693 - goto free_object_table; 1719 + return -EINVAL; 1694 1720 } 1695 1721 1696 1722 data->msg_buf = kcalloc(data->max_reportid, 1697 1723 data->T5_msg_size, GFP_KERNEL); 1698 - if (!data->msg_buf) { 1699 - dev_err(&client->dev, "Failed to allocate message buffer\n"); 1724 + if (!data->msg_buf) 1725 + return -ENOMEM; 1726 + 1727 + return 0; 1728 + } 1729 + 1730 + static int mxt_read_info_block(struct mxt_data *data) 1731 + { 1732 + struct i2c_client *client = data->client; 1733 + int error; 1734 + size_t size; 1735 + void *id_buf, *buf; 1736 + uint8_t num_objects; 1737 + u32 calculated_crc; 1738 + u8 *crc_ptr; 1739 + 1740 + /* If info block already allocated, free it */ 1741 + if (data->raw_info_block) 1742 + mxt_free_object_table(data); 1743 + 1744 + /* Read 7-byte ID information block starting at address 0 */ 1745 + size = sizeof(struct mxt_info); 1746 + id_buf = kzalloc(size, GFP_KERNEL); 1747 + if (!id_buf) 1748 + return -ENOMEM; 1749 + 1750 + error = __mxt_read_reg(client, 0, size, id_buf); 1751 + if (error) 1752 + goto err_free_mem; 1753 + 1754 + /* Resize buffer to give space for rest of info block */ 1755 + num_objects = ((struct mxt_info *)id_buf)->object_num; 1756 + size += (num_objects * sizeof(struct mxt_object)) 1757 + + MXT_INFO_CHECKSUM_SIZE; 1758 + 1759 + buf = krealloc(id_buf, size, GFP_KERNEL); 1760 + if (!buf) { 1700 1761 error = -ENOMEM; 1701 - goto free_object_table; 1762 + goto err_free_mem; 1763 + } 1764 + id_buf = buf; 1765 + 1766 + /* Read rest of info block */ 1767 + error = __mxt_read_reg(client, MXT_OBJECT_START, 1768 + size - MXT_OBJECT_START, 1769 + id_buf + MXT_OBJECT_START); 1770 + if (error) 1771 + goto err_free_mem; 1772 + 1773 + /* Extract & calculate checksum */ 1774 + crc_ptr = id_buf + size - MXT_INFO_CHECKSUM_SIZE; 1775 + data->info_crc = crc_ptr[0] | (crc_ptr[1] << 8) | (crc_ptr[2] << 16); 1776 + 1777 + calculated_crc = mxt_calculate_crc(id_buf, 0, 1778 + size - MXT_INFO_CHECKSUM_SIZE); 1779 + 1780 + /* 1781 + * CRC mismatch can be caused by data corruption due to I2C comms 1782 + * issue or else device is not using Object Based Protocol (eg i2c-hid) 1783 + */ 1784 + if ((data->info_crc == 0) || (data->info_crc != calculated_crc)) { 1785 + dev_err(&client->dev, 1786 + "Info Block CRC error calculated=0x%06X read=0x%06X\n", 1787 + calculated_crc, data->info_crc); 1788 + error = -EIO; 1789 + goto err_free_mem; 1702 1790 } 1703 1791 1704 1792 data->raw_info_block = id_buf; 1793 + data->info = (struct mxt_info *)id_buf; 1794 + 1795 + dev_info(&client->dev, 1796 + "Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n", 1797 + data->info->family_id, data->info->variant_id, 1798 + data->info->version >> 4, data->info->version & 0xf, 1799 + data->info->build, data->info->object_num); 1800 + 1801 + /* Parse object table information */ 1802 + error = mxt_parse_object_table(data, id_buf + MXT_OBJECT_START); 1803 + if (error) { 1804 + dev_err(&client->dev, "Error %d parsing object table\n", error); 1805 + mxt_free_object_table(data); 1806 + goto err_free_mem; 1807 + } 1808 + 1809 + data->object_table = (struct mxt_object *)(id_buf + MXT_OBJECT_START); 1705 1810 1706 1811 return 0; 1707 1812 1708 - free_object_table: 1709 - mxt_free_object_table(data); 1813 + err_free_mem: 1814 + kfree(id_buf); 1710 1815 return error; 1711 1816 } 1712 1817 ··· 2097 2046 int error; 2098 2047 2099 2048 while (1) { 2100 - error = mxt_get_info(data); 2049 + error = mxt_read_info_block(data); 2101 2050 if (!error) 2102 2051 break; 2103 2052 ··· 2128 2077 msleep(MXT_FW_RESET_TIME); 2129 2078 } 2130 2079 2131 - /* Get object table information */ 2132 - error = mxt_get_object_table(data); 2133 - if (error) { 2134 - dev_err(&client->dev, "Error %d reading object table\n", error); 2135 - return error; 2136 - } 2137 - 2138 2080 error = mxt_acquire_irq(data); 2139 2081 if (error) 2140 - goto err_free_object_table; 2082 + return error; 2141 2083 2142 2084 error = request_firmware_nowait(THIS_MODULE, true, MXT_CFG_NAME, 2143 2085 &client->dev, GFP_KERNEL, data, ··· 2138 2094 if (error) { 2139 2095 dev_err(&client->dev, "Failed to invoke firmware loader: %d\n", 2140 2096 error); 2141 - goto err_free_object_table; 2097 + return error; 2142 2098 } 2143 2099 2144 2100 return 0; 2145 - 2146 - err_free_object_table: 2147 - mxt_free_object_table(data); 2148 - return error; 2149 2101 } 2150 2102 2151 2103 static int mxt_set_t7_power_cfg(struct mxt_data *data, u8 sleep) ··· 2202 2162 static u16 mxt_get_debug_value(struct mxt_data *data, unsigned int x, 2203 2163 unsigned int y) 2204 2164 { 2205 - struct mxt_info *info = &data->info; 2165 + struct mxt_info *info = data->info; 2206 2166 struct mxt_dbg *dbg = &data->dbg; 2207 2167 unsigned int ofs, page; 2208 2168 unsigned int col = 0; ··· 2530 2490 2531 2491 static void mxt_debug_init(struct mxt_data *data) 2532 2492 { 2533 - struct mxt_info *info = &data->info; 2493 + struct mxt_info *info = data->info; 2534 2494 struct mxt_dbg *dbg = &data->dbg; 2535 2495 struct mxt_object *object; 2536 2496 int error; ··· 2616 2576 const struct firmware *cfg) 2617 2577 { 2618 2578 struct device *dev = &data->client->dev; 2619 - struct mxt_info *info = &data->info; 2620 2579 int error; 2621 2580 2622 2581 error = mxt_init_t7_power_cfg(data); ··· 2640 2601 2641 2602 mxt_debug_init(data); 2642 2603 2643 - dev_info(dev, 2644 - "Family: %u Variant: %u Firmware V%u.%u.%02X Objects: %u\n", 2645 - info->family_id, info->variant_id, info->version >> 4, 2646 - info->version & 0xf, info->build, info->object_num); 2647 - 2648 2604 return 0; 2649 2605 } 2650 2606 ··· 2648 2614 struct device_attribute *attr, char *buf) 2649 2615 { 2650 2616 struct mxt_data *data = dev_get_drvdata(dev); 2651 - struct mxt_info *info = &data->info; 2617 + struct mxt_info *info = data->info; 2652 2618 return scnprintf(buf, PAGE_SIZE, "%u.%u.%02X\n", 2653 2619 info->version >> 4, info->version & 0xf, info->build); 2654 2620 } ··· 2658 2624 struct device_attribute *attr, char *buf) 2659 2625 { 2660 2626 struct mxt_data *data = dev_get_drvdata(dev); 2661 - struct mxt_info *info = &data->info; 2627 + struct mxt_info *info = data->info; 2662 2628 return scnprintf(buf, PAGE_SIZE, "%u.%u\n", 2663 2629 info->family_id, info->variant_id); 2664 2630 } ··· 2697 2663 return -ENOMEM; 2698 2664 2699 2665 error = 0; 2700 - for (i = 0; i < data->info.object_num; i++) { 2666 + for (i = 0; i < data->info->object_num; i++) { 2701 2667 object = data->object_table + i; 2702 2668 2703 2669 if (!mxt_object_readable(object->type)) ··· 3069 3035 .driver_data = samus_platform_data, 3070 3036 }, 3071 3037 { 3038 + /* Samsung Chromebook Pro */ 3039 + .ident = "Samsung Chromebook Pro", 3040 + .matches = { 3041 + DMI_MATCH(DMI_SYS_VENDOR, "Google"), 3042 + DMI_MATCH(DMI_PRODUCT_NAME, "Caroline"), 3043 + }, 3044 + .driver_data = samus_platform_data, 3045 + }, 3046 + { 3072 3047 /* Other Google Chromebooks */ 3073 3048 .ident = "Chromebook", 3074 3049 .matches = { ··· 3297 3254 3298 3255 static const struct of_device_id mxt_of_match[] = { 3299 3256 { .compatible = "atmel,maxtouch", }, 3257 + /* Compatibles listed below are deprecated */ 3258 + { .compatible = "atmel,qt602240_ts", }, 3259 + { .compatible = "atmel,atmel_mxt_ts", }, 3260 + { .compatible = "atmel,atmel_mxt_tp", }, 3261 + { .compatible = "atmel,mXT224", }, 3300 3262 {}, 3301 3263 }; 3302 3264 MODULE_DEVICE_TABLE(of, mxt_of_match);
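mxt_read_info_block() above reads the 7-byte ID header, reallocates to the full size implied by object_num, reads the remainder, and verifies the 3-byte trailing checksum over everything before it, rejecting a zero CRC as well. A userspace sketch of that read-grow-verify shape; the additive checksum below is a placeholder, not the device's real 24-bit CRC:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ID_SIZE   7
#define OBJ_SIZE  6
#define CRC_SIZE  3

/* Placeholder checksum; the real device uses a 24-bit CRC. */
static uint32_t checksum24(const uint8_t *p, size_t n)
{
    uint32_t sum = 0;
    while (n--)
        sum = (sum + *p++) & 0xffffff;
    return sum;
}

static uint8_t *read_info_block(const uint8_t *dev, size_t dev_len)
{
    uint8_t *buf = malloc(ID_SIZE);
    if (!buf)
        return NULL;
    memcpy(buf, dev, ID_SIZE);                  /* 1. read the ID header */

    size_t size = ID_SIZE + buf[6] * OBJ_SIZE + CRC_SIZE;
    uint8_t *grown = realloc(buf, size);        /* 2. grow to full size  */
    if (!grown || size > dev_len) {
        free(grown ? grown : buf);
        return NULL;
    }
    buf = grown;
    memcpy(buf + ID_SIZE, dev + ID_SIZE, size - ID_SIZE);

    const uint8_t *crc = buf + size - CRC_SIZE; /* 3. verify trailer CRC */
    uint32_t stored = crc[0] | (crc[1] << 8) | ((uint32_t)crc[2] << 16);
    if (stored == 0 || stored != checksum24(buf, size - CRC_SIZE)) {
        free(buf);
        return NULL;
    }
    return buf;
}

int main(void)
{
    uint8_t dev[ID_SIZE + OBJ_SIZE + CRC_SIZE] = {0};
    dev[6] = 1;                        /* one object table entry */
    dev[ID_SIZE + OBJ_SIZE] = 1;       /* matches the additive checksum */

    uint8_t *info = read_info_block(dev, sizeof(dev));
    printf(info ? "info block OK\n" : "CRC mismatch\n");
    free(info);
    return 0;
}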
+1 -1
drivers/iommu/amd_iommu.c
··· 83 83 84 84 static DEFINE_SPINLOCK(amd_iommu_devtable_lock); 85 85 static DEFINE_SPINLOCK(pd_bitmap_lock); 86 - static DEFINE_SPINLOCK(iommu_table_lock); 87 86 88 87 /* List of all available dev_data structures */ 89 88 static LLIST_HEAD(dev_data_list); ··· 3561 3562 *****************************************************************************/ 3562 3563 3563 3564 static struct irq_chip amd_ir_chip; 3565 + static DEFINE_SPINLOCK(iommu_table_lock); 3564 3566 3565 3567 static void set_dte_irq_entry(u16 devid, struct irq_remap_table *table) 3566 3568 {
+25 -29
drivers/iommu/dma-iommu.c
··· 167 167 * @list: Reserved region list from iommu_get_resv_regions() 168 168 * 169 169 * IOMMU drivers can use this to implement their .get_resv_regions callback 170 - * for general non-IOMMU-specific reservations. Currently, this covers host 171 - * bridge windows for PCI devices and GICv3 ITS region reservation on ACPI 172 - * based ARM platforms that may require HW MSI reservation. 170 + * for general non-IOMMU-specific reservations. Currently, this covers GICv3 171 + * ITS region reservation on ACPI based ARM platforms that may require HW MSI 172 + * reservation. 173 173 */ 174 174 void iommu_dma_get_resv_regions(struct device *dev, struct list_head *list) 175 175 { 176 - struct pci_host_bridge *bridge; 177 - struct resource_entry *window; 178 176 179 - if (!is_of_node(dev->iommu_fwspec->iommu_fwnode) && 180 - iort_iommu_msi_get_resv_regions(dev, list) < 0) 181 - return; 177 + if (!is_of_node(dev->iommu_fwspec->iommu_fwnode)) 178 + iort_iommu_msi_get_resv_regions(dev, list); 182 179 183 - if (!dev_is_pci(dev)) 184 - return; 185 - 186 - bridge = pci_find_host_bridge(to_pci_dev(dev)->bus); 187 - resource_list_for_each_entry(window, &bridge->windows) { 188 - struct iommu_resv_region *region; 189 - phys_addr_t start; 190 - size_t length; 191 - 192 - if (resource_type(window->res) != IORESOURCE_MEM) 193 - continue; 194 - 195 - start = window->res->start - window->offset; 196 - length = window->res->end - window->res->start + 1; 197 - region = iommu_alloc_resv_region(start, length, 0, 198 - IOMMU_RESV_RESERVED); 199 - if (!region) 200 - return; 201 - 202 - list_add_tail(&region->list, list); 203 - } 204 180 } 205 181 EXPORT_SYMBOL(iommu_dma_get_resv_regions); 206 182 ··· 205 229 return 0; 206 230 } 207 231 232 + static void iova_reserve_pci_windows(struct pci_dev *dev, 233 + struct iova_domain *iovad) 234 + { 235 + struct pci_host_bridge *bridge = pci_find_host_bridge(dev->bus); 236 + struct resource_entry *window; 237 + unsigned long lo, hi; 238 + 239 + resource_list_for_each_entry(window, &bridge->windows) { 240 + if (resource_type(window->res) != IORESOURCE_MEM) 241 + continue; 242 + 243 + lo = iova_pfn(iovad, window->res->start - window->offset); 244 + hi = iova_pfn(iovad, window->res->end - window->offset); 245 + reserve_iova(iovad, lo, hi); 246 + } 247 + } 248 + 208 249 static int iova_reserve_iommu_regions(struct device *dev, 209 250 struct iommu_domain *domain) 210 251 { ··· 230 237 struct iommu_resv_region *region; 231 238 LIST_HEAD(resv_regions); 232 239 int ret = 0; 240 + 241 + if (dev_is_pci(dev)) 242 + iova_reserve_pci_windows(to_pci_dev(dev), iovad); 233 243 234 244 iommu_get_resv_regions(dev, &resv_regions); 235 245 list_for_each_entry(region, &resv_regions, list) {
+1 -1
drivers/iommu/dmar.c
··· 1345 1345 struct qi_desc desc; 1346 1346 1347 1347 if (mask) { 1348 - BUG_ON(addr & ((1 << (VTD_PAGE_SHIFT + mask)) - 1)); 1348 + WARN_ON_ONCE(addr & ((1ULL << (VTD_PAGE_SHIFT + mask)) - 1)); 1349 1349 addr |= (1ULL << (VTD_PAGE_SHIFT + mask - 1)) - 1; 1350 1350 desc.high = QI_DEV_IOTLB_ADDR(addr) | QI_DEV_IOTLB_SIZE; 1351 1351 } else
+1 -1
drivers/iommu/intel_irq_remapping.c
··· 1136 1136 irte->dest_id = IRTE_DEST(cfg->dest_apicid); 1137 1137 1138 1138 /* Update the hardware only if the interrupt is in remapped mode. */ 1139 - if (!force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING) 1139 + if (force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING) 1140 1140 modify_irte(&ir_data->irq_2_iommu, irte); 1141 1141 } 1142 1142
+9 -2
drivers/iommu/rockchip-iommu.c
··· 1098 1098 data->iommu = platform_get_drvdata(iommu_dev); 1099 1099 dev->archdata.iommu = data; 1100 1100 1101 - of_dev_put(iommu_dev); 1101 + platform_device_put(iommu_dev); 1102 1102 1103 1103 return 0; 1104 1104 } ··· 1175 1175 for (i = 0; i < iommu->num_clocks; ++i) 1176 1176 iommu->clocks[i].id = rk_iommu_clocks[i]; 1177 1177 1178 + /* 1179 + * iommu clocks should be present for all new devices and devicetrees 1180 + * but there are older devicetrees without clocks out in the wild. 1181 + * So treat clocks as optional for the time being. 1182 + */ 1178 1183 err = devm_clk_bulk_get(iommu->dev, iommu->num_clocks, iommu->clocks); 1179 - if (err) 1184 + if (err == -ENOENT) 1185 + iommu->num_clocks = 0; 1186 + else if (err) 1180 1187 return err; 1181 1188 1182 1189 err = clk_bulk_prepare(iommu->num_clocks, iommu->clocks);
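The probe now distinguishes "clocks absent" (-ENOENT, tolerated for old devicetrees) from real failures, which still abort. A short sketch of that optional-resource pattern; the getter below is a stand-in, not the clk API:

#include <errno.h>
#include <stdio.h>

/* Stand-in for devm_clk_bulk_get(); not the real clk API. */
static int get_clocks(int dt_has_clocks) { return dt_has_clocks ? 0 : -ENOENT; }

static int probe(int dt_has_clocks, int *num_clocks)
{
    int err = get_clocks(dt_has_clocks);

    if (err == -ENOENT)
        *num_clocks = 0;    /* old devicetree: run without clocks */
    else if (err)
        return err;         /* any other failure still aborts */
    return 0;
}

int main(void)
{
    int n = 3;
    printf("probe=%d num_clocks=%d\n", probe(0, &n), n);
    return 0;
}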
+2 -2
drivers/irqchip/qcom-irq-combiner.c
··· 1 - /* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved. 1 + /* Copyright (c) 2015-2018, The Linux Foundation. All rights reserved. 2 2 * 3 3 * This program is free software; you can redistribute it and/or modify 4 4 * it under the terms of the GNU General Public License version 2 and ··· 68 68 69 69 bit = readl_relaxed(combiner->regs[reg].addr); 70 70 status = bit & combiner->regs[reg].enabled; 71 - if (!status) 71 + if (bit && !status) 72 72 pr_warn_ratelimited("Unexpected IRQ on CPU%d: (%08x %08lx %p)\n", 73 73 smp_processor_id(), bit, 74 74 combiner->regs[reg].enabled,
+4 -1
drivers/md/bcache/alloc.c
··· 290 290 if (kthread_should_stop() || \ 291 291 test_bit(CACHE_SET_IO_DISABLE, &ca->set->flags)) { \ 292 292 set_current_state(TASK_RUNNING); \ 293 - return 0; \ 293 + goto out; \ 294 294 } \ 295 295 \ 296 296 schedule(); \ ··· 378 378 bch_prio_write(ca); 379 379 } 380 380 } 381 + out: 382 + wait_for_kthread_stop(); 383 + return 0; 381 384 } 382 385 383 386 /* Allocation */
+4
drivers/md/bcache/bcache.h
··· 392 392 #define DEFAULT_CACHED_DEV_ERROR_LIMIT 64 393 393 atomic_t io_errors; 394 394 unsigned error_limit; 395 + 396 + char backing_dev_name[BDEVNAME_SIZE]; 395 397 }; 396 398 397 399 enum alloc_reserve { ··· 466 464 atomic_long_t meta_sectors_written; 467 465 atomic_long_t btree_sectors_written; 468 466 atomic_long_t sectors_written; 467 + 468 + char cache_dev_name[BDEVNAME_SIZE]; 469 469 }; 470 470 471 471 struct gc_stat {
+1 -2
drivers/md/bcache/debug.c
··· 106 106 107 107 void bch_data_verify(struct cached_dev *dc, struct bio *bio) 108 108 { 109 - char name[BDEVNAME_SIZE]; 110 109 struct bio *check; 111 110 struct bio_vec bv, cbv; 112 111 struct bvec_iter iter, citer = { 0 }; ··· 133 134 bv.bv_len), 134 135 dc->disk.c, 135 136 "verify failed at dev %s sector %llu", 136 - bdevname(dc->bdev, name), 137 + dc->backing_dev_name, 137 138 (uint64_t) bio->bi_iter.bi_sector); 138 139 139 140 kunmap_atomic(p1);
+3 -5
drivers/md/bcache/io.c
··· 52 52 /* IO errors */ 53 53 void bch_count_backing_io_errors(struct cached_dev *dc, struct bio *bio) 54 54 { 55 - char buf[BDEVNAME_SIZE]; 56 55 unsigned errors; 57 56 58 57 WARN_ONCE(!dc, "NULL pointer of struct cached_dev"); ··· 59 60 errors = atomic_add_return(1, &dc->io_errors); 60 61 if (errors < dc->error_limit) 61 62 pr_err("%s: IO error on backing device, unrecoverable", 62 - bio_devname(bio, buf)); 63 + dc->backing_dev_name); 63 64 else 64 65 bch_cached_dev_error(dc); 65 66 } ··· 104 105 } 105 106 106 107 if (error) { 107 - char buf[BDEVNAME_SIZE]; 108 108 unsigned errors = atomic_add_return(1 << IO_ERROR_SHIFT, 109 109 &ca->io_errors); 110 110 errors >>= IO_ERROR_SHIFT; 111 111 112 112 if (errors < ca->set->error_limit) 113 113 pr_err("%s: IO error on %s%s", 114 - bdevname(ca->bdev, buf), m, 114 + ca->cache_dev_name, m, 115 115 is_read ? ", recovering." : "."); 116 116 else 117 117 bch_cache_set_error(ca->set, 118 118 "%s: too many IO errors %s", 119 - bdevname(ca->bdev, buf), m); 119 + ca->cache_dev_name, m); 120 120 } 121 121 } 122 122
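The bcache changes capture the backing and cache device names into the per-device structs once at registration, so error paths never have to call bdevname() on a device that may already be gone. A sketch of snapshotting identity for use in late error reporting (names are illustrative):

#include <stdio.h>
#include <string.h>

#define NAME_SIZE 32

struct cached_dev {
    char backing_dev_name[NAME_SIZE];  /* captured once at register time */
};

static void register_dev(struct cached_dev *dc, const char *name)
{
    /* Snapshot the name now; error paths may run after the backing
     * device is gone, when asking the block layer would be unsafe. */
    snprintf(dc->backing_dev_name, sizeof(dc->backing_dev_name), "%s", name);
}

static void report_io_error(struct cached_dev *dc)
{
    fprintf(stderr, "%s: IO error on backing device\n",
            dc->backing_dev_name);
}

int main(void)
{
    struct cached_dev dc;

    register_dev(&dc, "sdb");
    report_io_error(&dc);
    return 0;
}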
+1 -4
drivers/md/bcache/request.c
··· 649 649 */ 650 650 if (unlikely(s->iop.writeback && 651 651 bio->bi_opf & REQ_PREFLUSH)) { 652 - char buf[BDEVNAME_SIZE]; 653 - 654 - bio_devname(bio, buf); 655 652 pr_err("Can't flush %s: returned bi_status %i", 656 - buf, bio->bi_status); 653 + dc->backing_dev_name, bio->bi_status); 657 654 } else { 658 655 /* set to orig_bio->bi_status in bio_complete() */ 659 656 s->iop.status = bio->bi_status;
+52 -23
drivers/md/bcache/super.c
··· 936 936 static void cached_dev_detach_finish(struct work_struct *w) 937 937 { 938 938 struct cached_dev *dc = container_of(w, struct cached_dev, detach); 939 - char buf[BDEVNAME_SIZE]; 940 939 struct closure cl; 941 940 closure_init_stack(&cl); 942 941 ··· 966 967 967 968 mutex_unlock(&bch_register_lock); 968 969 969 - pr_info("Caching disabled for %s", bdevname(dc->bdev, buf)); 970 + pr_info("Caching disabled for %s", dc->backing_dev_name); 970 971 971 972 /* Drop ref we took in cached_dev_detach() */ 972 973 closure_put(&dc->disk.cl); ··· 998 999 { 999 1000 uint32_t rtime = cpu_to_le32(get_seconds()); 1000 1001 struct uuid_entry *u; 1001 - char buf[BDEVNAME_SIZE]; 1002 1002 struct cached_dev *exist_dc, *t; 1003 - 1004 - bdevname(dc->bdev, buf); 1005 1003 1006 1004 if ((set_uuid && memcmp(set_uuid, c->sb.set_uuid, 16)) || 1007 1005 (!set_uuid && memcmp(dc->sb.set_uuid, c->sb.set_uuid, 16))) 1008 1006 return -ENOENT; 1009 1007 1010 1008 if (dc->disk.c) { 1011 - pr_err("Can't attach %s: already attached", buf); 1009 + pr_err("Can't attach %s: already attached", 1010 + dc->backing_dev_name); 1012 1011 return -EINVAL; 1013 1012 } 1014 1013 1015 1014 if (test_bit(CACHE_SET_STOPPING, &c->flags)) { 1016 - pr_err("Can't attach %s: shutting down", buf); 1015 + pr_err("Can't attach %s: shutting down", 1016 + dc->backing_dev_name); 1017 1017 return -EINVAL; 1018 1018 } 1019 1019 1020 1020 if (dc->sb.block_size < c->sb.block_size) { 1021 1021 /* Will die */ 1022 1022 pr_err("Couldn't attach %s: block size less than set's block size", 1023 - buf); 1023 + dc->backing_dev_name); 1024 1024 return -EINVAL; 1025 1025 } 1026 1026 ··· 1027 1029 list_for_each_entry_safe(exist_dc, t, &c->cached_devs, list) { 1028 1030 if (!memcmp(dc->sb.uuid, exist_dc->sb.uuid, 16)) { 1029 1031 pr_err("Tried to attach %s but duplicate UUID already attached", 1030 - buf); 1032 + dc->backing_dev_name); 1031 1033 1032 1034 return -EINVAL; 1033 1035 } ··· 1045 1047 1046 1048 if (!u) { 1047 1049 if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) { 1048 - pr_err("Couldn't find uuid for %s in set", buf); 1050 + pr_err("Couldn't find uuid for %s in set", 1051 + dc->backing_dev_name); 1049 1052 return -ENOENT; 1050 1053 } 1051 1054 1052 1055 u = uuid_find_empty(c); 1053 1056 if (!u) { 1054 - pr_err("Not caching %s, no room for UUID", buf); 1057 + pr_err("Not caching %s, no room for UUID", 1058 + dc->backing_dev_name); 1055 1059 return -EINVAL; 1056 1060 } 1057 1061 } ··· 1112 1112 up_write(&dc->writeback_lock); 1113 1113 1114 1114 pr_info("Caching %s as %s on set %pU", 1115 - bdevname(dc->bdev, buf), dc->disk.disk->disk_name, 1115 + dc->backing_dev_name, 1116 + dc->disk.disk->disk_name, 1116 1117 dc->disk.c->sb.set_uuid); 1117 1118 return 0; 1118 1119 } ··· 1226 1225 struct block_device *bdev, 1227 1226 struct cached_dev *dc) 1228 1227 { 1229 - char name[BDEVNAME_SIZE]; 1230 1228 const char *err = "cannot allocate memory"; 1231 1229 struct cache_set *c; 1232 1230 1231 + bdevname(bdev, dc->backing_dev_name); 1233 1232 memcpy(&dc->sb, sb, sizeof(struct cache_sb)); 1234 1233 dc->bdev = bdev; 1235 1234 dc->bdev->bd_holder = dc; ··· 1237 1236 bio_init(&dc->sb_bio, dc->sb_bio.bi_inline_vecs, 1); 1238 1237 bio_first_bvec_all(&dc->sb_bio)->bv_page = sb_page; 1239 1238 get_page(sb_page); 1239 + 1240 1240 1241 1241 if (cached_dev_init(dc, sb->block_size << 9)) 1242 1242 goto err; ··· 1249 1247 if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj)) 1250 1248 goto err; 1251 1249 1252 - pr_info("registered backing device %s", 
bdevname(bdev, name)); 1250 + pr_info("registered backing device %s", dc->backing_dev_name); 1253 1251 1254 1252 list_add(&dc->list, &uncached_devices); 1255 1253 list_for_each_entry(c, &bch_cache_sets, list) ··· 1261 1259 1262 1260 return; 1263 1261 err: 1264 - pr_notice("error %s: %s", bdevname(bdev, name), err); 1262 + pr_notice("error %s: %s", dc->backing_dev_name, err); 1265 1263 bcache_device_stop(&dc->disk); 1266 1264 } 1267 1265 ··· 1369 1367 1370 1368 bool bch_cached_dev_error(struct cached_dev *dc) 1371 1369 { 1372 - char name[BDEVNAME_SIZE]; 1370 + struct cache_set *c; 1373 1371 1374 1372 if (!dc || test_bit(BCACHE_DEV_CLOSING, &dc->disk.flags)) 1375 1373 return false; ··· 1379 1377 smp_mb(); 1380 1378 1381 1379 pr_err("stop %s: too many IO errors on backing device %s\n", 1382 - dc->disk.disk->disk_name, bdevname(dc->bdev, name)); 1380 + dc->disk.disk->disk_name, dc->backing_dev_name); 1381 + 1382 + /* 1383 + * If the cached device is still attached to a cache set, 1384 + * internal cache device I/O (writeback scan or garbage 1385 + * collection) may still prevent the bcache device from 1386 + * being stopped, even if dc->io_disable is true and no 1387 + * more I/O requests are accepted. So CACHE_SET_IO_DISABLE 1388 + * should be set in c->flags too, so that internal I/O to 1389 + * the cache device is rejected and stopped immediately. 1390 + * If c is NULL, the bcache device is not attached to any 1391 + * cache set, so there is no CACHE_SET_IO_DISABLE bit to set. 1392 + */ 1393 + c = dc->disk.c; 1394 + if (c && test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags)) 1395 + pr_info("CACHE_SET_IO_DISABLE already set"); 1383 1396 1384 1397 bcache_device_stop(&dc->disk); 1385 1398 return true; ··· 1412 1395 return false; 1413 1396 1414 1397 if (test_and_set_bit(CACHE_SET_IO_DISABLE, &c->flags)) 1415 - pr_warn("CACHE_SET_IO_DISABLE already set"); 1398 + pr_info("CACHE_SET_IO_DISABLE already set"); 1416 1399 1417 1400 /* XXX: we can be called from atomic context 1418 1401 acquire_console_sem(); ··· 1556 1539 */ 1557 1540 pr_warn("stop_when_cache_set_failed of %s is \"auto\" and cache is dirty, stop it to avoid potential data corruption.", 1558 1541 d->disk->disk_name); 1542 + /* 1543 + * There might be a small time gap in which the cache set 1544 + * is released but the bcache device is not. Inside this 1545 + * gap, regular I/O requests go directly to the backing 1546 + * device, since no cache set is attached. In writeback 1547 + * mode this may also introduce inconsistent data while 1548 + * the cache is dirty. 1549 + * Therefore, before calling bcache_device_stop() due 1550 + * to a broken cache device, dc->io_disable should be 1551 + * explicitly set to true. 
1552 + */ 1553 + dc->io_disable = true; 1554 + /* make others know io_disable is true earlier */ 1555 + smp_mb(); 1559 1556 bcache_device_stop(d); 1560 1557 } else { 1561 1558 /* ··· 2034 2003 static int register_cache(struct cache_sb *sb, struct page *sb_page, 2035 2004 struct block_device *bdev, struct cache *ca) 2036 2005 { 2037 - char name[BDEVNAME_SIZE]; 2038 2006 const char *err = NULL; /* must be set for any error case */ 2039 2007 int ret = 0; 2040 2008 2041 - bdevname(bdev, name); 2042 - 2009 + bdevname(bdev, ca->cache_dev_name); 2043 2010 memcpy(&ca->sb, sb, sizeof(struct cache_sb)); 2044 2011 ca->bdev = bdev; 2045 2012 ca->bdev->bd_holder = ca; ··· 2074 2045 goto out; 2075 2046 } 2076 2047 2077 - pr_info("registered cache device %s", name); 2048 + pr_info("registered cache device %s", ca->cache_dev_name); 2078 2049 2079 2050 out: 2080 2051 kobject_put(&ca->kobj); 2081 2052 2082 2053 err: 2083 2054 if (err) 2084 - pr_notice("error %s: %s", name, err); 2055 + pr_notice("error %s: %s", ca->cache_dev_name, err); 2085 2056 2086 2057 return ret; 2087 2058 }
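The `dc->io_disable = true; smp_mb();` pair above is a small flag-publication idiom: the store is made visible to other CPUs before the device teardown proceeds. An illustrative standalone sketch, with an assumed example_dev layout rather than bcache's real structures:

#include <asm/barrier.h>
#include <linux/errno.h>
#include <linux/types.h>

/* Assumed device with an I/O kill switch, mirroring dc->io_disable. */
struct example_dev {
	bool io_disable;
};

static void example_disable_io(struct example_dev *d)
{
	d->io_disable = true;
	smp_mb();	/* make the store visible before anything that follows */
}

static int example_submit_io(struct example_dev *d)
{
	smp_mb();	/* pairs with the barrier in example_disable_io() */
	if (d->io_disable)
		return -EIO;
	/* ... queue the request ... */
	return 0;
}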
+3 -1
drivers/md/bcache/writeback.c
··· 244 244 struct keybuf_key *w = bio->bi_private; 245 245 struct dirty_io *io = w->private; 246 246 247 - if (bio->bi_status) 247 + if (bio->bi_status) { 248 248 SET_KEY_DIRTY(&w->key, false); 249 + bch_count_backing_io_errors(io->dc, bio); 250 + } 249 251 250 252 closure_put(&io->cl); 251 253 }
+3 -2
drivers/md/dm-bufio.c
··· 1681 1681 1682 1682 if (block_size <= KMALLOC_MAX_SIZE && 1683 1683 (block_size < PAGE_SIZE || !is_power_of_2(block_size))) { 1684 - snprintf(slab_name, sizeof slab_name, "dm_bufio_cache-%u", c->block_size); 1685 - c->slab_cache = kmem_cache_create(slab_name, c->block_size, ARCH_KMALLOC_MINALIGN, 1684 + unsigned align = min(1U << __ffs(block_size), (unsigned)PAGE_SIZE); 1685 + snprintf(slab_name, sizeof slab_name, "dm_bufio_cache-%u", block_size); 1686 + c->slab_cache = kmem_cache_create(slab_name, block_size, align, 1686 1687 SLAB_RECLAIM_ACCOUNT, NULL); 1687 1688 if (!c->slab_cache) { 1688 1689 r = -ENOMEM;
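The dm-bufio hunk consistently uses the local block_size for the slab name and size, and replaces ARCH_KMALLOC_MINALIGN with an alignment derived from the block size. As a worked example of the expression (values assumed for illustration): for block_size = 1536, __ffs(1536) = 9, so the alignment becomes min(1 << 9, PAGE_SIZE) = 512, the largest power of two that still divides the block size, capped at one page. A hypothetical wrapper around the same computation:

#include <linux/bitops.h>
#include <linux/kernel.h>
#include <asm/page.h>

/* Largest power of two dividing block_size, capped at PAGE_SIZE. */
static unsigned int example_bufio_align(unsigned int block_size)
{
	return min(1U << __ffs(block_size), (unsigned int)PAGE_SIZE);
}

For a power-of-two block smaller than a page, the result is the block size itself, so buffers stay naturally aligned.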
+1 -1
drivers/md/dm-cache-background-tracker.c
··· 166 166 atomic_read(&b->pending_demotes) >= b->max_work; 167 167 } 168 168 169 - struct bt_work *alloc_work(struct background_tracker *b) 169 + static struct bt_work *alloc_work(struct background_tracker *b) 170 170 { 171 171 if (max_work_reached(b)) 172 172 return NULL;
+1 -1
drivers/md/dm-integrity.c
··· 2440 2440 unsigned i; 2441 2441 for (i = 0; i < ic->journal_sections; i++) 2442 2442 kvfree(sl[i]); 2443 - kfree(sl); 2443 + kvfree(sl); 2444 2444 } 2445 2445 2446 2446 static struct scatterlist **dm_integrity_alloc_journal_scatterlist(struct dm_integrity_c *ic, struct page_list *pl)
+6 -4
drivers/md/dm-raid1.c
··· 23 23 24 24 #define MAX_RECOVERY 1 /* Maximum number of regions recovered in parallel. */ 25 25 26 + #define MAX_NR_MIRRORS (DM_KCOPYD_MAX_REGIONS + 1) 27 + 26 28 #define DM_RAID1_HANDLE_ERRORS 0x01 27 29 #define DM_RAID1_KEEP_LOG 0x02 28 30 #define errors_handled(p) ((p)->features & DM_RAID1_HANDLE_ERRORS) ··· 257 255 unsigned long error_bits; 258 256 259 257 unsigned int i; 260 - struct dm_io_region io[ms->nr_mirrors]; 258 + struct dm_io_region io[MAX_NR_MIRRORS]; 261 259 struct mirror *m; 262 260 struct dm_io_request io_req = { 263 261 .bi_op = REQ_OP_WRITE, ··· 653 651 static void do_write(struct mirror_set *ms, struct bio *bio) 654 652 { 655 653 unsigned int i; 656 - struct dm_io_region io[ms->nr_mirrors], *dest = io; 654 + struct dm_io_region io[MAX_NR_MIRRORS], *dest = io; 657 655 struct mirror *m; 658 656 struct dm_io_request io_req = { 659 657 .bi_op = REQ_OP_WRITE, ··· 1085 1083 argc -= args_used; 1086 1084 1087 1085 if (!argc || sscanf(argv[0], "%u%c", &nr_mirrors, &dummy) != 1 || 1088 - nr_mirrors < 2 || nr_mirrors > DM_KCOPYD_MAX_REGIONS + 1) { 1086 + nr_mirrors < 2 || nr_mirrors > MAX_NR_MIRRORS) { 1089 1087 ti->error = "Invalid number of mirrors"; 1090 1088 dm_dirty_log_destroy(dl); 1091 1089 return -EINVAL; ··· 1406 1404 int num_feature_args = 0; 1407 1405 struct mirror_set *ms = (struct mirror_set *) ti->private; 1408 1406 struct dm_dirty_log *log = dm_rh_dirty_log(ms->rh); 1409 - char buffer[ms->nr_mirrors + 1]; 1407 + char buffer[MAX_NR_MIRRORS + 1]; 1410 1408 1411 1409 switch (type) { 1412 1410 case STATUSTYPE_INFO:
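The dm-raid1 hunk removes variable-length stack arrays (`io[ms->nr_mirrors]`, `buffer[ms->nr_mirrors + 1]`) in favor of a compile-time bound that the constructor already validates. A sketch of the pattern under the same assumption (DM_KCOPYD_MAX_REGIONS + 1 is the real bound; the example_* names are not):

#include <linux/dm-kcopyd.h>
#include <linux/errno.h>

#define EXAMPLE_MAX_NR_MIRRORS	(DM_KCOPYD_MAX_REGIONS + 1)

/* Reject any mirror count the fixed-size stack arrays could not hold. */
static int example_check_nr_mirrors(unsigned int nr_mirrors)
{
	if (nr_mirrors < 2 || nr_mirrors > EXAMPLE_MAX_NR_MIRRORS)
		return -EINVAL;
	return 0;
}

Once every entry point enforces the bound, an array like `struct dm_io_region io[EXAMPLE_MAX_NR_MIRRORS]` has static stack usage, in line with the kernel-wide effort to remove VLAs.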
+4 -3
drivers/md/dm.c
··· 1020 1020 EXPORT_SYMBOL_GPL(dm_set_target_max_io_len); 1021 1021 1022 1022 static struct dm_target *dm_dax_get_live_target(struct mapped_device *md, 1023 - sector_t sector, int *srcu_idx) 1023 + sector_t sector, int *srcu_idx) 1024 + __acquires(md->io_barrier) 1024 1025 { 1025 1026 struct dm_table *map; 1026 1027 struct dm_target *ti; ··· 1038 1037 } 1039 1038 1040 1039 static long dm_dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, 1041 - long nr_pages, void **kaddr, pfn_t *pfn) 1040 + long nr_pages, void **kaddr, pfn_t *pfn) 1042 1041 { 1043 1042 struct mapped_device *md = dax_get_private(dax_dev); 1044 1043 sector_t sector = pgoff * PAGE_SECTORS; ··· 1066 1065 } 1067 1066 1068 1067 static size_t dm_dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, 1069 - void *addr, size_t bytes, struct iov_iter *i) 1068 + void *addr, size_t bytes, struct iov_iter *i) 1070 1069 { 1071 1070 struct mapped_device *md = dax_get_private(dax_dev); 1072 1071 sector_t sector = pgoff * PAGE_SECTORS;
+1 -1
drivers/media/i2c/saa7115.c
··· 20 20 // 21 21 // VBI support (2004) and cleanups (2005) by Hans Verkuil <hverkuil@xs4all.nl> 22 22 // 23 - // Copyright (c) 2005-2006 Mauro Carvalho Chehab <mchehab@infradead.org> 23 + // Copyright (c) 2005-2006 Mauro Carvalho Chehab <mchehab@kernel.org> 24 24 // SAA7111, SAA7113 and SAA7118 support 25 25 26 26 #include "saa711x_regs.h"
+1 -1
drivers/media/i2c/saa711x_regs.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0+ 3 3 * saa711x - Philips SAA711x video decoder register specifications 4 4 * 5 - * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 */ 7 7 8 8 #define R_00_CHIP_VERSION 0x00
+1 -1
drivers/media/i2c/tda7432.c
··· 8 8 * Muting and tone control by Jonathan Isom <jisom@ematic.com> 9 9 * 10 10 * Copyright (c) 2000 Eric Sandeen <eric_sandeen@bigfoot.com> 11 - * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@infradead.org> 11 + * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 12 12 * This code is placed under the terms of the GNU General Public License 13 13 * Based on tda9855.c by Steve VanDeBogart (vandebo@uclink.berkeley.edu) 14 14 * Which was based on tda8425.c by Greg Alexander (c) 1998
+1 -1
drivers/media/i2c/tvp5150.c
··· 2 2 // 3 3 // tvp5150 - Texas Instruments TVP5150A/AM1 and TVP5151 video decoder driver 4 4 // 5 - // Copyright (c) 2005,2006 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + // Copyright (c) 2005,2006 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 7 7 #include <dt-bindings/media/tvp5150.h> 8 8 #include <linux/i2c.h>
+1 -1
drivers/media/i2c/tvp5150_reg.h
··· 3 3 * 4 4 * tvp5150 - Texas Instruments TVP5150A/AM1 video decoder registers 5 5 * 6 - * Copyright (c) 2005,2006 Mauro Carvalho Chehab <mchehab@infradead.org> 6 + * Copyright (c) 2005,2006 Mauro Carvalho Chehab <mchehab@kernel.org> 7 7 */ 8 8 9 9 #define TVP5150_VD_IN_SRC_SEL_1 0x00 /* Video input source selection #1 */
+1 -1
drivers/media/i2c/tvp7002.c
··· 5 5 * Author: Santiago Nunez-Corrales <santiago.nunez@ridgerun.com> 6 6 * 7 7 * This code is partially based upon the TVP5150 driver 8 - * written by Mauro Carvalho Chehab (mchehab@infradead.org), 8 + * written by Mauro Carvalho Chehab <mchehab@kernel.org>, 9 9 * the TVP514x driver written by Vaibhav Hiremath <hvaibhav@ti.com> 10 10 * and the TVP7002 driver in the TI LSP 2.10.00.14. Revisions by 11 11 * Muralidharan Karicheri and Snehaprabha Narnakaje (TI).
+1 -1
drivers/media/i2c/tvp7002_reg.h
··· 5 5 * Author: Santiago Nunez-Corrales <santiago.nunez@ridgerun.com> 6 6 * 7 7 * This code is partially based upon the TVP5150 driver 8 - * written by Mauro Carvalho Chehab (mchehab@infradead.org), 8 + * written by Mauro Carvalho Chehab <mchehab@kernel.org>, 9 9 * the TVP514x driver written by Vaibhav Hiremath <hvaibhav@ti.com> 10 10 * and the TVP7002 driver in the TI LSP 2.10.00.14 11 11 *
+1 -1
drivers/media/media-devnode.c
··· 4 4 * Copyright (C) 2010 Nokia Corporation 5 5 * 6 6 * Based on drivers/media/video/v4l2_dev.c code authored by 7 - * Mauro Carvalho Chehab <mchehab@infradead.org> (version 2) 7 + * Mauro Carvalho Chehab <mchehab@kernel.org> (version 2) 8 8 * Alan Cox, <alan@lxorguk.ukuu.org.uk> (version 1) 9 9 * 10 10 * Contacts: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+1 -1
drivers/media/pci/bt8xx/bttv-audio-hook.c
··· 1 1 /* 2 2 * Handlers for board audio hooks, split from bttv-cards 3 3 * 4 - * Copyright (c) 2006 Mauro Carvalho Chehab (mchehab@infradead.org) 4 + * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 * This code is placed under the terms of the GNU General Public License 6 6 */ 7 7
+1 -1
drivers/media/pci/bt8xx/bttv-audio-hook.h
··· 1 1 /* 2 2 * Handlers for board audio hooks, splitted from bttv-cards 3 3 * 4 - * Copyright (c) 2006 Mauro Carvalho Chehab (mchehab@infradead.org) 4 + * Copyright (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 * This code is placed under the terms of the GNU General Public License 6 6 */ 7 7
+2 -2
drivers/media/pci/bt8xx/bttv-cards.c
··· 2447 2447 }, 2448 2448 /* ---- card 0x88---------------------------------- */ 2449 2449 [BTTV_BOARD_ACORP_Y878F] = { 2450 - /* Mauro Carvalho Chehab <mchehab@infradead.org> */ 2450 + /* Mauro Carvalho Chehab <mchehab@kernel.org> */ 2451 2451 .name = "Acorp Y878F", 2452 2452 .video_inputs = 3, 2453 2453 /* .audio_inputs= 1, */ ··· 2688 2688 }, 2689 2689 [BTTV_BOARD_ENLTV_FM_2] = { 2690 2690 /* Encore TV Tuner Pro ENL TV-FM-2 2691 - Mauro Carvalho Chehab <mchehab@infradead.org */ 2691 + Mauro Carvalho Chehab <mchehab@kernel.org> */ 2692 2692 .name = "Encore ENL TV-FM-2", 2693 2693 .video_inputs = 3, 2694 2694 /* .audio_inputs= 1, */
+1 -1
drivers/media/pci/bt8xx/bttv-driver.c
··· 13 13 (c) 2005-2006 Nickolay V. Shmyrev <nshmyrev@yandex.ru> 14 14 15 15 Fixes to be fully V4L2 compliant by 16 - (c) 2006 Mauro Carvalho Chehab <mchehab@infradead.org> 16 + (c) 2006 Mauro Carvalho Chehab <mchehab@kernel.org> 17 17 18 18 Cropping and overscan support 19 19 Copyright (C) 2005, 2006 Michael H. Schimek <mschimek@gmx.at>
+1 -1
drivers/media/pci/bt8xx/bttv-i2c.c
··· 8 8 & Marcus Metzler (mocm@thp.uni-koeln.de) 9 9 (c) 1999-2003 Gerd Knorr <kraxel@bytesex.org> 10 10 11 - (c) 2005 Mauro Carvalho Chehab <mchehab@infradead.org> 11 + (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org> 12 12 - Multituner support and i2c address binding 13 13 14 14 This program is free software; you can redistribute it and/or modify
+1 -1
drivers/media/pci/cx23885/cx23885-input.c
··· 13 13 * Copyright (C) 2008 <srinivasa.deevi at conexant dot com> 14 14 * Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 15 15 * Markus Rechberger <mrechberger@gmail.com> 16 - * Mauro Carvalho Chehab <mchehab@infradead.org> 16 + * Mauro Carvalho Chehab <mchehab@kernel.org> 17 17 * Sascha Sommer <saschasommer@freenet.de> 18 18 * Copyright (C) 2004, 2005 Chris Pascoe 19 19 * Copyright (C) 2003, 2004 Gerd Knorr
+2 -2
drivers/media/pci/cx88/cx88-alsa.c
··· 4 4 * 5 5 * (c) 2007 Trent Piepho <xyzzy@speakeasy.org> 6 6 * (c) 2005,2006 Ricardo Cerqueira <v4l@cerqueira.org> 7 - * (c) 2005 Mauro Carvalho Chehab <mchehab@infradead.org> 7 + * (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 * Based on a dummy cx88 module by Gerd Knorr <kraxel@bytesex.org> 9 9 * Based on dummy.c by Jaroslav Kysela <perex@perex.cz> 10 10 * ··· 103 103 104 104 MODULE_DESCRIPTION("ALSA driver module for cx2388x based TV cards"); 105 105 MODULE_AUTHOR("Ricardo Cerqueira"); 106 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 106 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 107 107 MODULE_LICENSE("GPL"); 108 108 MODULE_VERSION(CX88_VERSION); 109 109
+1 -1
drivers/media/pci/cx88/cx88-blackbird.c
··· 5 5 * (c) 2004 Jelle Foks <jelle@foks.us> 6 6 * (c) 2004 Gerd Knorr <kraxel@bytesex.org> 7 7 * 8 - * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@infradead.org> 8 + * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 * - video_ioctl2 conversion 10 10 * 11 11 * Includes parts from the ivtv driver <http://sourceforge.net/projects/ivtv/>
+1 -1
drivers/media/pci/cx88/cx88-core.c
··· 4 4 * 5 5 * (c) 2003 Gerd Knorr <kraxel@bytesex.org> [SuSE Labs] 6 6 * 7 - * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@infradead.org> 7 + * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 * - Multituner support 9 9 * - video_ioctl2 conversion 10 10 * - PAL/M fixes
+1 -1
drivers/media/pci/cx88/cx88-i2c.c
··· 8 8 * (c) 2002 Yurij Sysoev <yurij@naturesoft.net> 9 9 * (c) 1999-2003 Gerd Knorr <kraxel@bytesex.org> 10 10 * 11 - * (c) 2005 Mauro Carvalho Chehab <mchehab@infradead.org> 11 + * (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org> 12 12 * - Multituner support and i2c address binding 13 13 * 14 14 * This program is free software; you can redistribute it and/or modify
+1 -1
drivers/media/pci/cx88/cx88-video.c
··· 5 5 * 6 6 * (c) 2003-04 Gerd Knorr <kraxel@bytesex.org> [SuSE Labs] 7 7 * 8 - * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@infradead.org> 8 + * (c) 2005-2006 Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 * - Multituner support 10 10 * - video_ioctl2 conversion 11 11 * - PAL/M fixes
+1 -1
drivers/media/radio/radio-aimslab.c
··· 4 4 * Copyright 1997 M. Kirkwood 5 5 * 6 6 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 7 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 7 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 * Converted to new API by Alan Cox <alan@lxorguk.ukuu.org.uk> 9 9 * Various bugfixes and enhancements by Russell Kroll <rkroll@exploits.org> 10 10 *
+1 -1
drivers/media/radio/radio-aztech.c
··· 2 2 * radio-aztech.c - Aztech radio card driver 3 3 * 4 4 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@xs4all.nl> 5 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 * Adapted to support the Video for Linux API by 7 7 * Russell Kroll <rkroll@exploits.org>. Based on original tuner code by: 8 8 *
+1 -1
drivers/media/radio/radio-gemtek.c
··· 15 15 * Various bugfixes and enhancements by Russell Kroll <rkroll@exploits.org> 16 16 * 17 17 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 18 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 18 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 19 19 * 20 20 * Note: this card seems to swap the left and right audio channels! 21 21 *
+1 -1
drivers/media/radio/radio-maxiradio.c
··· 27 27 * BUGS: 28 28 * - card unmutes if you change frequency 29 29 * 30 - * (c) 2006, 2007 by Mauro Carvalho Chehab <mchehab@infradead.org>: 30 + * (c) 2006, 2007 by Mauro Carvalho Chehab <mchehab@kernel.org>: 31 31 * - Conversion to V4L2 API 32 32 * - Uses video_ioctl2 for parsing and to add debug support 33 33 */
+1 -1
drivers/media/radio/radio-rtrack2.c
··· 7 7 * Various bugfixes and enhancements by Russell Kroll <rkroll@exploits.org> 8 8 * 9 9 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 10 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 10 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 11 11 * 12 12 * Fully tested with actual hardware and the v4l2-compliance tool. 13 13 */
+1 -1
drivers/media/radio/radio-sf16fmi.c
··· 13 13 * No volume control - only mute/unmute - you have to use line volume 14 14 * control on SB-part of SF16-FMI/SF16-FMP/SF16-FMD 15 15 * 16 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 16 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 17 17 */ 18 18 19 19 #include <linux/kernel.h> /* __setup */
+1 -1
drivers/media/radio/radio-terratec.c
··· 17 17 * Volume Control is done digitally 18 18 * 19 19 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 20 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 20 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 21 21 */ 22 22 23 23 #include <linux/module.h> /* Modules */
+1 -1
drivers/media/radio/radio-trust.c
··· 12 12 * Scott McGrath (smcgrath@twilight.vtc.vsc.edu) 13 13 * William McGrath (wmcgrath@twilight.vtc.vsc.edu) 14 14 * 15 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 15 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 16 16 */ 17 17 18 18 #include <stdarg.h>
+1 -1
drivers/media/radio/radio-typhoon.c
··· 25 25 * The frequency change is necessary since the card never seems to be 26 26 * completely silent. 27 27 * 28 - * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@infradead.org> 28 + * Converted to V4L2 API by Mauro Carvalho Chehab <mchehab@kernel.org> 29 29 */ 30 30 31 31 #include <linux/module.h> /* Modules */
+1 -1
drivers/media/radio/radio-zoltrix.c
··· 27 27 * 2002-07-15 - Fix Stereo typo 28 28 * 29 29 * 2006-07-24 - Converted to V4L2 API 30 - * by Mauro Carvalho Chehab <mchehab@infradead.org> 30 + * by Mauro Carvalho Chehab <mchehab@kernel.org> 31 31 * 32 32 * Converted to the radio-isa framework by Hans Verkuil <hans.verkuil@cisco.com> 33 33 *
+1 -1
drivers/media/rc/keymaps/rc-avermedia-m135a.c
··· 12 12 * 13 13 * On Avermedia M135A with IR model RM-JX, the same codes exist on both 14 14 * Positivo (BR) and original IR, initial version and remote control codes 15 - * added by Mauro Carvalho Chehab <mchehab@infradead.org> 15 + * added by Mauro Carvalho Chehab <mchehab@kernel.org> 16 16 * 17 17 * Positivo also ships Avermedia M135A with model RM-K6, extra control 18 18 * codes added by Herton Ronaldo Krzesinski <herton@mandriva.com.br>
+1 -1
drivers/media/rc/keymaps/rc-encore-enltv-fm53.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* Encore ENLTV-FM v5.3 12 - Mauro Carvalho Chehab <mchehab@infradead.org> 12 + Mauro Carvalho Chehab <mchehab@kernel.org> 13 13 */ 14 14 15 15 static struct rc_map_table encore_enltv_fm53[] = {
+1 -1
drivers/media/rc/keymaps/rc-encore-enltv2.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* Encore ENLTV2-FM - silver plastic - "Wand Media" written at the bottom 12 - Mauro Carvalho Chehab <mchehab@infradead.org> */ 12 + Mauro Carvalho Chehab <mchehab@kernel.org> */ 13 13 14 14 static struct rc_map_table encore_enltv2[] = { 15 15 { 0x4c, KEY_POWER2 },
+1 -1
drivers/media/rc/keymaps/rc-kaiomy.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* Kaiomy TVnPC U2 12 - Mauro Carvalho Chehab <mchehab@infradead.org> 12 + Mauro Carvalho Chehab <mchehab@kernel.org> 13 13 */ 14 14 15 15 static struct rc_map_table kaiomy[] = {
+1 -1
drivers/media/rc/keymaps/rc-kworld-plus-tv-analog.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* Kworld Plus TV Analog Lite PCI IR 12 - Mauro Carvalho Chehab <mchehab@infradead.org> 12 + Mauro Carvalho Chehab <mchehab@kernel.org> 13 13 */ 14 14 15 15 static struct rc_map_table kworld_plus_tv_analog[] = {
+1 -1
drivers/media/rc/keymaps/rc-pixelview-new.c
··· 9 9 #include <linux/module.h> 10 10 11 11 /* 12 - Mauro Carvalho Chehab <mchehab@infradead.org> 12 + Mauro Carvalho Chehab <mchehab@kernel.org> 13 13 present on PV MPEG 8000GT 14 14 */ 15 15
+2 -2
drivers/media/tuners/tea5761.c
··· 2 2 // For Philips TEA5761 FM Chip 3 3 // I2C address is always 0x20 (0x10 at 7-bit mode). 4 4 // 5 - // Copyright (c) 2005-2007 Mauro Carvalho Chehab (mchehab@infradead.org) 5 + // Copyright (c) 2005-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 7 7 #include <linux/i2c.h> 8 8 #include <linux/slab.h> ··· 337 337 EXPORT_SYMBOL_GPL(tea5761_autodetection); 338 338 339 339 MODULE_DESCRIPTION("Philips TEA5761 FM tuner driver"); 340 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 340 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 341 341 MODULE_LICENSE("GPL v2");
+2 -2
drivers/media/tuners/tea5767.c
··· 2 2 // For Philips TEA5767 FM Chip used on some TV Cards like Prolink Pixelview 3 3 // I2C address is always 0xC0. 4 4 // 5 - // Copyright (c) 2005 Mauro Carvalho Chehab (mchehab@infradead.org) 5 + // Copyright (c) 2005 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 // 7 7 // tea5767 autodetection thanks to Torsten Seeboth and Atsushi Nakagawa 8 8 // from their contributions on DScaler. ··· 469 469 EXPORT_SYMBOL_GPL(tea5767_autodetection); 470 470 471 471 MODULE_DESCRIPTION("Philips TEA5767 FM tuner driver"); 472 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 472 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 473 473 MODULE_LICENSE("GPL v2");
+1 -1
drivers/media/tuners/tuner-xc2028-types.h
··· 5 5 * This file includes internal types to be used inside tuner-xc2028. 6 6 * Shouldn't be included outside tuner-xc2028 7 7 * 8 - * Copyright (c) 2007-2008 Mauro Carvalho Chehab (mchehab@infradead.org) 8 + * Copyright (c) 2007-2008 Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 */ 10 10 11 11 /* xc3028 firmware types */
+2 -2
drivers/media/tuners/tuner-xc2028.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tuner-xc2028 3 3 // 4 - // Copyright (c) 2007-2008 Mauro Carvalho Chehab (mchehab@infradead.org) 4 + // Copyright (c) 2007-2008 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 // 6 6 // Copyright (c) 2007 Michel Ludwig (michel.ludwig@gmail.com) 7 7 // - frontend interface ··· 1518 1518 1519 1519 MODULE_DESCRIPTION("Xceive xc2028/xc3028 tuner driver"); 1520 1520 MODULE_AUTHOR("Michel Ludwig <michel.ludwig@gmail.com>"); 1521 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 1521 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 1522 1522 MODULE_LICENSE("GPL v2"); 1523 1523 MODULE_FIRMWARE(XC2028_DEFAULT_FIRMWARE); 1524 1524 MODULE_FIRMWARE(XC3028L_DEFAULT_FIRMWARE);
+1 -1
drivers/media/tuners/tuner-xc2028.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0 3 3 * tuner-xc2028 4 4 * 5 - * Copyright (c) 2007-2008 Mauro Carvalho Chehab (mchehab@infradead.org) 5 + * Copyright (c) 2007-2008 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 */ 7 7 8 8 #ifndef __TUNER_XC2028_H__
+1 -1
drivers/media/usb/em28xx/em28xx-camera.c
··· 2 2 // 3 3 // em28xx-camera.c - driver for Empia EM25xx/27xx/28xx USB video capture devices 4 4 // 5 - // Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + // Copyright (C) 2009 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 // Copyright (C) 2013 Frank Schäfer <fschaefer.oss@googlemail.com> 7 7 // 8 8 // This program is free software; you can redistribute it and/or modify
+1 -1
drivers/media/usb/em28xx/em28xx-cards.c
··· 5 5 // 6 6 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 7 7 // Markus Rechberger <mrechberger@gmail.com> 8 - // Mauro Carvalho Chehab <mchehab@infradead.org> 8 + // Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 // Sascha Sommer <saschasommer@freenet.de> 10 10 // Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com> 11 11 //
+2 -2
drivers/media/usb/em28xx/em28xx-core.c
··· 4 4 // 5 5 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 6 6 // Markus Rechberger <mrechberger@gmail.com> 7 - // Mauro Carvalho Chehab <mchehab@infradead.org> 7 + // Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 // Sascha Sommer <saschasommer@freenet.de> 9 9 // Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com> 10 10 // ··· 32 32 33 33 #define DRIVER_AUTHOR "Ludovico Cavedon <cavedon@sssup.it>, " \ 34 34 "Markus Rechberger <mrechberger@gmail.com>, " \ 35 - "Mauro Carvalho Chehab <mchehab@infradead.org>, " \ 35 + "Mauro Carvalho Chehab <mchehab@kernel.org>, " \ 36 36 "Sascha Sommer <saschasommer@freenet.de>" 37 37 38 38 MODULE_AUTHOR(DRIVER_AUTHOR);
+2 -2
drivers/media/usb/em28xx/em28xx-dvb.c
··· 2 2 // 3 3 // DVB device driver for em28xx 4 4 // 5 - // (c) 2008-2011 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + // (c) 2008-2011 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 // 7 7 // (c) 2008 Devin Heitmueller <devin.heitmueller@gmail.com> 8 8 // - Fixes for the driver to properly work with HVR-950 ··· 63 63 #include "tc90522.h" 64 64 #include "qm1d1c0042.h" 65 65 66 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 66 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 67 67 MODULE_LICENSE("GPL v2"); 68 68 MODULE_DESCRIPTION(DRIVER_DESC " - digital TV interface"); 69 69 MODULE_VERSION(EM28XX_VERSION);
+1 -1
drivers/media/usb/em28xx/em28xx-i2c.c
··· 4 4 // 5 5 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 6 6 // Markus Rechberger <mrechberger@gmail.com> 7 - // Mauro Carvalho Chehab <mchehab@infradead.org> 7 + // Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 // Sascha Sommer <saschasommer@freenet.de> 9 9 // Copyright (C) 2013 Frank Schäfer <fschaefer.oss@googlemail.com> 10 10 //
+1 -1
drivers/media/usb/em28xx/em28xx-input.c
··· 4 4 // 5 5 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 6 6 // Markus Rechberger <mrechberger@gmail.com> 7 - // Mauro Carvalho Chehab <mchehab@infradead.org> 7 + // Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 // Sascha Sommer <saschasommer@freenet.de> 9 9 // 10 10 // This program is free software; you can redistribute it and/or modify
+2 -2
drivers/media/usb/em28xx/em28xx-video.c
··· 5 5 // 6 6 // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> 7 7 // Markus Rechberger <mrechberger@gmail.com> 8 - // Mauro Carvalho Chehab <mchehab@infradead.org> 8 + // Mauro Carvalho Chehab <mchehab@kernel.org> 9 9 // Sascha Sommer <saschasommer@freenet.de> 10 10 // Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com> 11 11 // ··· 44 44 45 45 #define DRIVER_AUTHOR "Ludovico Cavedon <cavedon@sssup.it>, " \ 46 46 "Markus Rechberger <mrechberger@gmail.com>, " \ 47 - "Mauro Carvalho Chehab <mchehab@infradead.org>, " \ 47 + "Mauro Carvalho Chehab <mchehab@kernel.org>, " \ 48 48 "Sascha Sommer <saschasommer@freenet.de>" 49 49 50 50 static unsigned int isoc_debug;
+1 -1
drivers/media/usb/em28xx/em28xx.h
··· 4 4 * 5 5 * Copyright (C) 2005 Markus Rechberger <mrechberger@gmail.com> 6 6 * Ludovico Cavedon <cavedon@sssup.it> 7 - * Mauro Carvalho Chehab <mchehab@infradead.org> 7 + * Mauro Carvalho Chehab <mchehab@kernel.org> 8 8 * Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com> 9 9 * 10 10 * Based on the em2800 driver from Sascha Sommer <saschasommer@freenet.de>
+1 -1
drivers/media/usb/gspca/zc3xx-reg.h
··· 1 1 /* 2 2 * zc030x registers 3 3 * 4 - * Copyright (c) 2008 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + * Copyright (c) 2008 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 * 6 6 * The register aliases used here came from this driver: 7 7 * http://zc0302.sourceforge.net/zc0302.php
+1 -1
drivers/media/usb/tm6000/tm6000-cards.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tm6000-cards.c - driver for TM5600/TM6000/TM6010 USB video capture devices 3 3 // 4 - // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 6 6 #include <linux/init.h> 7 7 #include <linux/module.h>
+1 -1
drivers/media/usb/tm6000/tm6000-core.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tm6000-core.c - driver for TM5600/TM6000/TM6010 USB video capture devices 3 3 // 4 - // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 // 6 6 // Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com> 7 7 // - DVB-T support
+1 -1
drivers/media/usb/tm6000/tm6000-i2c.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tm6000-i2c.c - driver for TM5600/TM6000/TM6010 USB video capture devices 3 3 // 4 - // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 // 6 6 // Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com> 7 7 // - Fix SMBus Read Byte command
+1 -1
drivers/media/usb/tm6000/tm6000-regs.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0 3 3 * tm6000-regs.h - driver for TM5600/TM6000/TM6010 USB video capture devices 4 4 * 5 - * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 */ 7 7 8 8 /*
+1 -1
drivers/media/usb/tm6000/tm6000-usb-isoc.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0 3 3 * tm6000-buf.c - driver for TM5600/TM6000/TM6010 USB video capture devices 4 4 * 5 - * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 */ 7 7 8 8 #include <linux/videodev2.h>
+1 -1
drivers/media/usb/tm6000/tm6000-video.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // tm6000-video.c - driver for TM5600/TM6000/TM6010 USB video capture devices 3 3 // 4 - // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 4 + // Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 5 5 // 6 6 // Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com> 7 7 // - Fixed module load/unload
+1 -1
drivers/media/usb/tm6000/tm6000.h
··· 2 2 * SPDX-License-Identifier: GPL-2.0 3 3 * tm6000.h - driver for TM5600/TM6000/TM6010 USB video capture devices 4 4 * 5 - * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@infradead.org> 5 + * Copyright (c) 2006-2007 Mauro Carvalho Chehab <mchehab@kernel.org> 6 6 * 7 7 * Copyright (c) 2007 Michel Ludwig <michel.ludwig@gmail.com> 8 8 * - DVB-T support
+2 -2
drivers/media/v4l2-core/v4l2-dev.c
··· 10 10 * 2 of the License, or (at your option) any later version. 11 11 * 12 12 * Authors: Alan Cox, <alan@lxorguk.ukuu.org.uk> (version 1) 13 - * Mauro Carvalho Chehab <mchehab@infradead.org> (version 2) 13 + * Mauro Carvalho Chehab <mchehab@kernel.org> (version 2) 14 14 * 15 15 * Fixes: 20000516 Claudio Matsuoka <claudio@conectiva.com> 16 16 * - Added procfs support ··· 1072 1072 subsys_initcall(videodev_init); 1073 1073 module_exit(videodev_exit) 1074 1074 1075 - MODULE_AUTHOR("Alan Cox, Mauro Carvalho Chehab <mchehab@infradead.org>"); 1075 + MODULE_AUTHOR("Alan Cox, Mauro Carvalho Chehab <mchehab@kernel.org>"); 1076 1076 MODULE_DESCRIPTION("Device registrar for Video4Linux drivers v2"); 1077 1077 MODULE_LICENSE("GPL"); 1078 1078 MODULE_ALIAS_CHARDEV_MAJOR(VIDEO_MAJOR);
+1 -1
drivers/media/v4l2-core/v4l2-ioctl.c
··· 9 9 * 2 of the License, or (at your option) any later version. 10 10 * 11 11 * Authors: Alan Cox, <alan@lxorguk.ukuu.org.uk> (version 1) 12 - * Mauro Carvalho Chehab <mchehab@infradead.org> (version 2) 12 + * Mauro Carvalho Chehab <mchehab@kernel.org> (version 2) 13 13 */ 14 14 15 15 #include <linux/mm.h>
+3 -3
drivers/media/v4l2-core/videobuf-core.c
··· 1 1 /* 2 2 * generic helper functions for handling video4linux capture buffers 3 3 * 4 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 4 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 5 5 * 6 6 * Highly based on video-buf written originally by: 7 7 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org> 8 - * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org> 8 + * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org> 9 9 * (c) 2006 Ted Walther and John Sokol 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify ··· 38 38 module_param(debug, int, 0644); 39 39 40 40 MODULE_DESCRIPTION("helper module to manage video4linux buffers"); 41 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 41 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 42 42 MODULE_LICENSE("GPL"); 43 43 44 44 #define dprintk(level, fmt, arg...) \
+1 -1
drivers/media/v4l2-core/videobuf-dma-contig.c
··· 7 7 * Copyright (c) 2008 Magnus Damm 8 8 * 9 9 * Based on videobuf-vmalloc.c, 10 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 10 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 11 11 * 12 12 * This program is free software; you can redistribute it and/or modify 13 13 * it under the terms of the GNU General Public License as published by
+3 -3
drivers/media/v4l2-core/videobuf-dma-sg.c
··· 6 6 * into PAGE_SIZE chunks). They also assume the driver does not need 7 7 * to touch the video data. 8 8 * 9 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 9 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 10 10 * 11 11 * Highly based on video-buf written originally by: 12 12 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org> 13 - * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org> 13 + * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org> 14 14 * (c) 2006 Ted Walther and John Sokol 15 15 * 16 16 * This program is free software; you can redistribute it and/or modify ··· 48 48 module_param(debug, int, 0644); 49 49 50 50 MODULE_DESCRIPTION("helper module to manage video4linux dma sg buffers"); 51 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 51 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 52 52 MODULE_LICENSE("GPL"); 53 53 54 54 #define dprintk(level, fmt, arg...) \
+2 -2
drivers/media/v4l2-core/videobuf-vmalloc.c
··· 6 6 * into PAGE_SIZE chunks). They also assume the driver does not need 7 7 * to touch the video data. 8 8 * 9 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 9 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of the GNU General Public License as published by ··· 41 41 module_param(debug, int, 0644); 42 42 43 43 MODULE_DESCRIPTION("helper module to manage video4linux vmalloc buffers"); 44 - MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>"); 44 + MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@kernel.org>"); 45 45 MODULE_LICENSE("GPL"); 46 46 47 47 #define dprintk(level, fmt, arg...) \
+38 -67
drivers/mtd/nand/onenand/omap2.c
··· 375 375 { 376 376 struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd); 377 377 struct onenand_chip *this = mtd->priv; 378 - dma_addr_t dma_src, dma_dst; 379 - int bram_offset; 378 + struct device *dev = &c->pdev->dev; 380 379 void *buf = (void *)buffer; 380 + dma_addr_t dma_src, dma_dst; 381 + int bram_offset, err; 381 382 size_t xtra; 382 - int ret; 383 383 384 384 bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset; 385 - if (bram_offset & 3 || (size_t)buf & 3 || count < 384) 385 + /* 386 + * If the buffer address is not DMA-able, the length is too short to 387 + * make DMA transfers profitable, or panic_write() may be in an 388 + * interrupt context, fall back to PIO mode. 389 + */ 390 + if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 || 391 + count < 384 || in_interrupt() || oops_in_progress) 386 392 goto out_copy; 387 - 388 - /* panic_write() may be in an interrupt context */ 389 - if (in_interrupt() || oops_in_progress) 390 - goto out_copy; 391 - 392 - if (buf >= high_memory) { 393 - struct page *p1; 394 - 395 - if (((size_t)buf & PAGE_MASK) != 396 - ((size_t)(buf + count - 1) & PAGE_MASK)) 397 - goto out_copy; 398 - p1 = vmalloc_to_page(buf); 399 - if (!p1) 400 - goto out_copy; 401 - buf = page_address(p1) + ((size_t)buf & ~PAGE_MASK); 402 - } 403 393 404 394 xtra = count & 3; 405 395 if (xtra) { ··· 397 407 memcpy(buf + count, this->base + bram_offset + count, xtra); 398 408 } 399 409 410 + dma_dst = dma_map_single(dev, buf, count, DMA_FROM_DEVICE); 400 411 dma_src = c->phys_base + bram_offset; 401 - dma_dst = dma_map_single(&c->pdev->dev, buf, count, DMA_FROM_DEVICE); 402 - if (dma_mapping_error(&c->pdev->dev, dma_dst)) { 403 - dev_err(&c->pdev->dev, 404 - "Couldn't DMA map a %d byte buffer\n", 405 - count); 412 + 413 + if (dma_mapping_error(dev, dma_dst)) { 414 + dev_err(dev, "Couldn't DMA map a %d byte buffer\n", count); 406 415 goto out_copy; 407 416 } 408 417 409 - ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count); 410 - dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_FROM_DEVICE); 418 + err = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count); 419 + dma_unmap_single(dev, dma_dst, count, DMA_FROM_DEVICE); 420 + if (!err) 421 + return 0; 411 422 412 - if (ret) { 413 - dev_err(&c->pdev->dev, "timeout waiting for DMA\n"); 414 - goto out_copy; 415 - } 416 - 417 - return 0; 423 + dev_err(dev, "timeout waiting for DMA\n"); 418 424 419 425 out_copy: 420 426 memcpy(buf, this->base + bram_offset, count); ··· 423 437 { 424 438 struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd); 425 439 struct onenand_chip *this = mtd->priv; 426 - dma_addr_t dma_src, dma_dst; 427 - int bram_offset; 440 + struct device *dev = &c->pdev->dev; 428 441 void *buf = (void *)buffer; 429 - int ret; 442 + dma_addr_t dma_src, dma_dst; 443 + int bram_offset, err; 430 444 431 445 bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset; 432 - if (bram_offset & 3 || (size_t)buf & 3 || count < 384) 446 + /* 447 + * If the buffer address is not DMA-able, the length is too short to 448 + * make DMA transfers profitable, or panic_write() may be in an 449 + * interrupt context, fall back to PIO mode. 450 + */ 451 + if (!virt_addr_valid(buf) || bram_offset & 3 || (size_t)buf & 3 || 452 + count < 384 || in_interrupt() || oops_in_progress) 433 453 goto out_copy; 434 454 435 - /* panic_write() may be in an interrupt context */ 436 - if (in_interrupt() || oops_in_progress) 437 - goto out_copy; 438 - 439 - if (buf >= high_memory) { 440 - struct page *p1; 441 - 442 - if (((size_t)buf & PAGE_MASK) != 443 - ((size_t)(buf + count - 1) & PAGE_MASK)) 444 - goto out_copy; 445 - p1 = vmalloc_to_page(buf); 446 - if (!p1) 447 - goto out_copy; 448 - buf = page_address(p1) + ((size_t)buf & ~PAGE_MASK); 449 - } 450 - 451 - dma_src = dma_map_single(&c->pdev->dev, buf, count, DMA_TO_DEVICE); 455 + dma_src = dma_map_single(dev, buf, count, DMA_TO_DEVICE); 452 456 dma_dst = c->phys_base + bram_offset; 453 - if (dma_mapping_error(&c->pdev->dev, dma_src)) { 454 - dev_err(&c->pdev->dev, 455 - "Couldn't DMA map a %d byte buffer\n", 456 - count); 457 - return -1; 458 - } 459 - 460 - ret = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count); 461 - dma_unmap_single(&c->pdev->dev, dma_src, count, DMA_TO_DEVICE); 462 - 463 - if (ret) { 464 - dev_err(&c->pdev->dev, "timeout waiting for DMA\n"); 457 + if (dma_mapping_error(dev, dma_src)) { 458 + dev_err(dev, "Couldn't DMA map a %d byte buffer\n", count); 465 459 goto out_copy; 466 460 } 467 461 468 462 + err = omap2_onenand_dma_transfer(c, dma_src, dma_dst, count); 463 + dma_unmap_single(dev, dma_src, count, DMA_TO_DEVICE); 464 + if (!err) 465 + return 0; 466 + 467 + dev_err(dev, "timeout waiting for DMA\n"); 469 468 470 469 out_copy: 471 470 memcpy(this->base + bram_offset, buf, count);
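Besides unifying the fallback conditions, the rewrite above drops the old vmalloc_to_page() special case: anything failing virt_addr_valid() now simply takes the memcpy() path. A hedged sketch of the same gate (the example_* names are assumptions; the DMA API calls are real):

#include <linux/dma-mapping.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/mm.h>

/* Hypothetical stand-in for programming the engine and waiting. */
static int example_start_dma_and_wait(struct device *dev, dma_addr_t addr,
				      size_t count)
{
	return 0;	/* stub */
}

static int example_dma_write(struct device *dev, void *buf, size_t count)
{
	dma_addr_t addr;
	int err;

	/* vmalloc/highmem buffers, IRQ or panic context: let caller do PIO. */
	if (!virt_addr_valid(buf) || in_interrupt() || oops_in_progress)
		return -EAGAIN;

	addr = dma_map_single(dev, buf, count, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, addr))
		return -EAGAIN;

	err = example_start_dma_and_wait(dev, addr, count);
	dma_unmap_single(dev, addr, count, DMA_TO_DEVICE);
	return err;
}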
+9 -3
drivers/mtd/nand/raw/marvell_nand.c
··· 1074 1074 return ret; 1075 1075 1076 1076 ret = marvell_nfc_wait_op(chip, 1077 - chip->data_interface.timings.sdr.tPROG_max); 1077 + PSEC_TO_MSEC(chip->data_interface.timings.sdr.tPROG_max)); 1078 1078 return ret; 1079 1079 } 1080 1080 ··· 1408 1408 struct marvell_nand_chip *marvell_nand = to_marvell_nand(chip); 1409 1409 struct marvell_nfc *nfc = to_marvell_nfc(chip->controller); 1410 1410 const struct marvell_hw_ecc_layout *lt = to_marvell_nand(chip)->layout; 1411 + u32 xtype; 1411 1412 int ret; 1412 1413 struct marvell_nfc_op nfc_op = { 1413 1414 .ndcb[0] = NDCB0_CMD_TYPE(TYPE_WRITE) | NDCB0_LEN_OVRD, ··· 1424 1423 * last naked write. 1425 1424 */ 1426 1425 if (chunk == 0) { 1427 - nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(XTYPE_WRITE_DISPATCH) | 1426 + if (lt->nchunks == 1) 1427 + xtype = XTYPE_MONOLITHIC_RW; 1428 + else 1429 + xtype = XTYPE_WRITE_DISPATCH; 1430 + 1431 + nfc_op.ndcb[0] |= NDCB0_CMD_XTYPE(xtype) | 1428 1432 NDCB0_ADDR_CYC(marvell_nand->addr_cyc) | 1429 1433 NDCB0_CMD1(NAND_CMD_SEQIN); 1430 1434 nfc_op.ndcb[1] |= NDCB1_ADDRS_PAGE(page); ··· 1500 1494 } 1501 1495 1502 1496 ret = marvell_nfc_wait_op(chip, 1503 - chip->data_interface.timings.sdr.tPROG_max); 1497 + PSEC_TO_MSEC(chip->data_interface.timings.sdr.tPROG_max)); 1504 1498 1505 1499 marvell_nfc_disable_hw_ecc(chip); 1506 1500
+5
drivers/mtd/nand/raw/nand_base.c
··· 706 706 */ 707 707 int nand_soft_waitrdy(struct nand_chip *chip, unsigned long timeout_ms) 708 708 { 709 + const struct nand_sdr_timings *timings; 709 710 u8 status = 0; 710 711 int ret; 711 712 712 713 if (!chip->exec_op) 713 714 return -ENOTSUPP; 715 + 716 + /* Wait tWB before polling the STATUS reg. */ 717 + timings = nand_get_sdr_timings(&chip->data_interface); 718 + ndelay(PSEC_TO_NSEC(timings->tWB_max)); 714 719 715 720 ret = nand_status_op(chip, NULL); 716 721 if (ret)
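The nand_base hunk inserts a tWB wait before the first status poll: without it, the read can sample the ready bit before the device has had time to assert busy, making nand_soft_waitrdy() return early. In isolation, with error handling elided, the added step looks like this (nand_get_sdr_timings() and PSEC_TO_NSEC() are the real helpers; timing values are kept in picoseconds, ndelay() takes nanoseconds):

#include <linux/delay.h>
#include <linux/err.h>
#include <linux/mtd/rawnand.h>

/* Sketch: scale the picosecond timing to ns and busy-wait that long. */
static void example_wait_twb(struct nand_chip *chip)
{
	const struct nand_sdr_timings *t =
		nand_get_sdr_timings(&chip->data_interface);

	if (!IS_ERR(t))
		ndelay(PSEC_TO_NSEC(t->tWB_max));
}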
+9 -6
drivers/net/bonding/bond_alb.c
··· 450 450 { 451 451 int i; 452 452 453 - if (!client_info->slave) 453 + if (!client_info->slave || !is_valid_ether_addr(client_info->mac_dst)) 454 454 return; 455 455 456 456 for (i = 0; i < RLB_ARP_BURST_SIZE; i++) { ··· 943 943 skb->priority = TC_PRIO_CONTROL; 944 944 skb->dev = slave->dev; 945 945 946 + netdev_dbg(slave->bond->dev, 947 + "Send learning packet: dev %s mac %pM vlan %d\n", 948 + slave->dev->name, mac_addr, vid); 949 + 946 950 if (vid) 947 951 __vlan_hwaccel_put_tag(skb, vlan_proto, vid); 948 952 ··· 969 965 u8 *mac_addr = data->mac_addr; 970 966 struct bond_vlan_tag *tags; 971 967 972 - if (is_vlan_dev(upper) && vlan_get_encap_level(upper) == 0) { 973 - if (strict_match && 974 - ether_addr_equal_64bits(mac_addr, 975 - upper->dev_addr)) { 968 + if (is_vlan_dev(upper) && 969 + bond->nest_level == vlan_get_encap_level(upper) - 1) { 970 + if (upper->addr_assign_type == NET_ADDR_STOLEN) { 976 971 alb_send_lp_vid(slave, mac_addr, 977 972 vlan_dev_vlan_proto(upper), 978 973 vlan_dev_vlan_id(upper)); 979 - } else if (!strict_match) { 974 + } else { 980 975 alb_send_lp_vid(slave, upper->dev_addr, 981 976 vlan_dev_vlan_proto(upper), 982 977 vlan_dev_vlan_id(upper));
+2
drivers/net/bonding/bond_main.c
··· 1738 1738 if (bond_mode_uses_xmit_hash(bond)) 1739 1739 bond_update_slave_arr(bond, NULL); 1740 1740 1741 + bond->nest_level = dev_get_nest_level(bond_dev); 1742 + 1741 1743 netdev_info(bond_dev, "Enslaving %s as %s interface with %s link\n", 1742 1744 slave_dev->name, 1743 1745 bond_is_active_slave(new_slave) ? "an active" : "a backup",
+1 -1
drivers/net/can/dev.c
··· 605 605 { 606 606 struct can_priv *priv = netdev_priv(dev); 607 607 608 - netdev_dbg(dev, "bus-off\n"); 608 + netdev_info(dev, "bus-off\n"); 609 609 610 610 netif_carrier_off(dev); 611 611
+14 -12
drivers/net/can/flexcan.c
··· 200 200 #define FLEXCAN_QUIRK_DISABLE_MECR BIT(4) /* Disable Memory error detection */ 201 201 #define FLEXCAN_QUIRK_USE_OFF_TIMESTAMP BIT(5) /* Use timestamp based offloading */ 202 202 #define FLEXCAN_QUIRK_BROKEN_PERR_STATE BIT(6) /* No interrupt for error passive */ 203 + #define FLEXCAN_QUIRK_DEFAULT_BIG_ENDIAN BIT(7) /* default to BE register access */ 203 204 204 205 /* Structure of the message buffer */ 205 206 struct flexcan_mb { ··· 288 287 }; 289 288 290 289 static const struct flexcan_devtype_data fsl_p1010_devtype_data = { 290 + .quirks = FLEXCAN_QUIRK_BROKEN_WERR_STATE | 291 + FLEXCAN_QUIRK_BROKEN_PERR_STATE | 292 + FLEXCAN_QUIRK_DEFAULT_BIG_ENDIAN, 293 + }; 294 + 295 + static const struct flexcan_devtype_data fsl_imx25_devtype_data = { 291 296 .quirks = FLEXCAN_QUIRK_BROKEN_WERR_STATE | 292 297 FLEXCAN_QUIRK_BROKEN_PERR_STATE, 293 298 }; ··· 1258 1251 static const struct of_device_id flexcan_of_match[] = { 1259 1252 { .compatible = "fsl,imx6q-flexcan", .data = &fsl_imx6q_devtype_data, }, 1260 1253 { .compatible = "fsl,imx28-flexcan", .data = &fsl_imx28_devtype_data, }, 1261 - { .compatible = "fsl,imx53-flexcan", .data = &fsl_p1010_devtype_data, }, 1262 - { .compatible = "fsl,imx35-flexcan", .data = &fsl_p1010_devtype_data, }, 1263 - { .compatible = "fsl,imx25-flexcan", .data = &fsl_p1010_devtype_data, }, 1254 + { .compatible = "fsl,imx53-flexcan", .data = &fsl_imx25_devtype_data, }, 1255 + { .compatible = "fsl,imx35-flexcan", .data = &fsl_imx25_devtype_data, }, 1256 + { .compatible = "fsl,imx25-flexcan", .data = &fsl_imx25_devtype_data, }, 1264 1257 { .compatible = "fsl,p1010-flexcan", .data = &fsl_p1010_devtype_data, }, 1265 1258 { .compatible = "fsl,vf610-flexcan", .data = &fsl_vf610_devtype_data, }, 1266 1259 { .compatible = "fsl,ls1021ar2-flexcan", .data = &fsl_ls1021a_r2_devtype_data, }, ··· 1344 1337 1345 1338 priv = netdev_priv(dev); 1346 1339 1347 - if (of_property_read_bool(pdev->dev.of_node, "big-endian")) { 1340 + if (of_property_read_bool(pdev->dev.of_node, "big-endian") || 1341 + devtype_data->quirks & FLEXCAN_QUIRK_DEFAULT_BIG_ENDIAN) { 1348 1342 priv->read = flexcan_read_be; 1349 1343 priv->write = flexcan_write_be; 1350 1344 } else { 1351 - if (of_device_is_compatible(pdev->dev.of_node, 1352 - "fsl,p1010-flexcan")) { 1353 - priv->read = flexcan_read_be; 1354 - priv->write = flexcan_write_be; 1355 - } else { 1356 - priv->read = flexcan_read_le; 1357 - priv->write = flexcan_write_le; 1358 - } 1345 + priv->read = flexcan_read_le; 1346 + priv->write = flexcan_write_le; 1359 1347 } 1360 1348 1361 1349 priv->can.clock.freq = clock_freq;
+7 -4
drivers/net/can/spi/hi311x.c
··· 91 91 #define HI3110_STAT_BUSOFF BIT(2) 92 92 #define HI3110_STAT_ERRP BIT(3) 93 93 #define HI3110_STAT_ERRW BIT(4) 94 + #define HI3110_STAT_TXMTY BIT(7) 94 95 95 96 #define HI3110_BTR0_SJW_SHIFT 6 96 97 #define HI3110_BTR0_BRP_SHIFT 0 ··· 428 427 struct hi3110_priv *priv = netdev_priv(net); 429 428 struct spi_device *spi = priv->spi; 430 429 430 + mutex_lock(&priv->hi3110_lock); 431 431 bec->txerr = hi3110_read(spi, HI3110_READ_TEC); 432 432 bec->rxerr = hi3110_read(spi, HI3110_READ_REC); 433 + mutex_unlock(&priv->hi3110_lock); 433 434 434 435 return 0; 435 436 } ··· 738 735 } 739 736 } 740 737 741 - if (intf == 0) 742 - break; 743 - 744 - if (intf & HI3110_INT_TXCPLT) { 738 + if (priv->tx_len && statf & HI3110_STAT_TXMTY) { 745 739 net->stats.tx_packets++; 746 740 net->stats.tx_bytes += priv->tx_len - 1; 747 741 can_led_event(net, CAN_LED_EVENT_TX); ··· 748 748 } 749 749 netif_wake_queue(net); 750 750 } 751 + 752 + if (intf == 0) 753 + break; 751 754 } 752 755 mutex_unlock(&priv->hi3110_lock); 753 756 return IRQ_HANDLED;
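The hi311x hunk takes the driver's SPI mutex around the two error-counter reads, so .do_get_berr_counter can no longer interleave with a transfer issued from the IRQ thread. A sketch of the shape of the fix, with assumed example_* names and register opcodes (the real driver uses priv->hi3110_lock and HI3110_READ_TEC/REC):

#include <linux/can/dev.h>
#include <linux/mutex.h>
#include <linux/netdevice.h>
#include <linux/types.h>

#define EXAMPLE_READ_TEC	0x01	/* hypothetical opcodes */
#define EXAMPLE_READ_REC	0x02

struct example_priv {
	struct mutex lock;	/* serializes all SPI transfers */
};

/* Hypothetical register read; a real driver would do an SPI transfer. */
static u8 example_spi_read(struct example_priv *priv, u8 reg)
{
	return 0;
}

static int example_get_berr_counter(const struct net_device *net,
				    struct can_berr_counter *bec)
{
	struct example_priv *priv = netdev_priv(net);

	mutex_lock(&priv->lock);
	bec->txerr = example_spi_read(priv, EXAMPLE_READ_TEC);
	bec->rxerr = example_spi_read(priv, EXAMPLE_READ_REC);
	mutex_unlock(&priv->lock);

	return 0;
}

Reading both counters under one critical section also keeps the TEC/REC pair mutually consistent.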
+1 -1
drivers/net/can/usb/kvaser_usb.c
··· 1179 1179 1180 1180 skb = alloc_can_skb(priv->netdev, &cf); 1181 1181 if (!skb) { 1182 - stats->tx_dropped++; 1182 + stats->rx_dropped++; 1183 1183 return; 1184 1184 } 1185 1185
+26
drivers/net/dsa/mv88e6xxx/chip.c
··· 3370 3370 .num_internal_phys = 5, 3371 3371 .max_vid = 4095, 3372 3372 .port_base_addr = 0x10, 3373 + .phy_base_addr = 0x0, 3373 3374 .global1_addr = 0x1b, 3374 3375 .global2_addr = 0x1c, 3375 3376 .age_time_coeff = 15000, ··· 3392 3391 .num_internal_phys = 0, 3393 3392 .max_vid = 4095, 3394 3393 .port_base_addr = 0x10, 3394 + .phy_base_addr = 0x0, 3395 3395 .global1_addr = 0x1b, 3396 3396 .global2_addr = 0x1c, 3397 3397 .age_time_coeff = 15000, ··· 3412 3410 .num_internal_phys = 8, 3413 3411 .max_vid = 4095, 3414 3412 .port_base_addr = 0x10, 3413 + .phy_base_addr = 0x0, 3415 3414 .global1_addr = 0x1b, 3416 3415 .global2_addr = 0x1c, 3417 3416 .age_time_coeff = 15000, ··· 3434 3431 .num_internal_phys = 5, 3435 3432 .max_vid = 4095, 3436 3433 .port_base_addr = 0x10, 3434 + .phy_base_addr = 0x0, 3437 3435 .global1_addr = 0x1b, 3438 3436 .global2_addr = 0x1c, 3439 3437 .age_time_coeff = 15000, ··· 3456 3452 .num_internal_phys = 0, 3457 3453 .max_vid = 4095, 3458 3454 .port_base_addr = 0x10, 3455 + .phy_base_addr = 0x0, 3459 3456 .global1_addr = 0x1b, 3460 3457 .global2_addr = 0x1c, 3461 3458 .age_time_coeff = 15000, ··· 3477 3472 .num_gpio = 11, 3478 3473 .max_vid = 4095, 3479 3474 .port_base_addr = 0x10, 3475 + .phy_base_addr = 0x10, 3480 3476 .global1_addr = 0x1b, 3481 3477 .global2_addr = 0x1c, 3482 3478 .age_time_coeff = 3750, ··· 3499 3493 .num_internal_phys = 5, 3500 3494 .max_vid = 4095, 3501 3495 .port_base_addr = 0x10, 3496 + .phy_base_addr = 0x0, 3502 3497 .global1_addr = 0x1b, 3503 3498 .global2_addr = 0x1c, 3504 3499 .age_time_coeff = 15000, ··· 3521 3514 .num_internal_phys = 0, 3522 3515 .max_vid = 4095, 3523 3516 .port_base_addr = 0x10, 3517 + .phy_base_addr = 0x0, 3524 3518 .global1_addr = 0x1b, 3525 3519 .global2_addr = 0x1c, 3526 3520 .age_time_coeff = 15000, ··· 3543 3535 .num_internal_phys = 5, 3544 3536 .max_vid = 4095, 3545 3537 .port_base_addr = 0x10, 3538 + .phy_base_addr = 0x0, 3546 3539 .global1_addr = 0x1b, 3547 3540 .global2_addr = 0x1c, 3548 3541 .age_time_coeff = 15000, ··· 3566 3557 .num_gpio = 15, 3567 3558 .max_vid = 4095, 3568 3559 .port_base_addr = 0x10, 3560 + .phy_base_addr = 0x0, 3569 3561 .global1_addr = 0x1b, 3570 3562 .global2_addr = 0x1c, 3571 3563 .age_time_coeff = 15000, ··· 3588 3578 .num_internal_phys = 5, 3589 3579 .max_vid = 4095, 3590 3580 .port_base_addr = 0x10, 3581 + .phy_base_addr = 0x0, 3591 3582 .global1_addr = 0x1b, 3592 3583 .global2_addr = 0x1c, 3593 3584 .age_time_coeff = 15000, ··· 3611 3600 .num_gpio = 15, 3612 3601 .max_vid = 4095, 3613 3602 .port_base_addr = 0x10, 3603 + .phy_base_addr = 0x0, 3614 3604 .global1_addr = 0x1b, 3615 3605 .global2_addr = 0x1c, 3616 3606 .age_time_coeff = 15000, ··· 3633 3621 .num_internal_phys = 0, 3634 3622 .max_vid = 4095, 3635 3623 .port_base_addr = 0x10, 3624 + .phy_base_addr = 0x0, 3636 3625 .global1_addr = 0x1b, 3637 3626 .global2_addr = 0x1c, 3638 3627 .age_time_coeff = 15000, ··· 3654 3641 .num_gpio = 16, 3655 3642 .max_vid = 8191, 3656 3643 .port_base_addr = 0x0, 3644 + .phy_base_addr = 0x0, 3657 3645 .global1_addr = 0x1b, 3658 3646 .global2_addr = 0x1c, 3659 3647 .tag_protocol = DSA_TAG_PROTO_DSA, ··· 3677 3663 .num_gpio = 16, 3678 3664 .max_vid = 8191, 3679 3665 .port_base_addr = 0x0, 3666 + .phy_base_addr = 0x0, 3680 3667 .global1_addr = 0x1b, 3681 3668 .global2_addr = 0x1c, 3682 3669 .age_time_coeff = 3750, ··· 3699 3684 .num_internal_phys = 11, 3700 3685 .max_vid = 8191, 3701 3686 .port_base_addr = 0x0, 3687 + .phy_base_addr = 0x0, 3702 3688 .global1_addr = 0x1b, 3703 3689 
.global2_addr = 0x1c, 3704 3690 .age_time_coeff = 3750, ··· 3723 3707 .num_gpio = 15, 3724 3708 .max_vid = 4095, 3725 3709 .port_base_addr = 0x10, 3710 + .phy_base_addr = 0x0, 3726 3711 .global1_addr = 0x1b, 3727 3712 .global2_addr = 0x1c, 3728 3713 .age_time_coeff = 15000, ··· 3747 3730 .num_gpio = 16, 3748 3731 .max_vid = 8191, 3749 3732 .port_base_addr = 0x0, 3733 + .phy_base_addr = 0x0, 3750 3734 .global1_addr = 0x1b, 3751 3735 .global2_addr = 0x1c, 3752 3736 .age_time_coeff = 3750, ··· 3771 3753 .num_gpio = 15, 3772 3754 .max_vid = 4095, 3773 3755 .port_base_addr = 0x10, 3756 + .phy_base_addr = 0x0, 3774 3757 .global1_addr = 0x1b, 3775 3758 .global2_addr = 0x1c, 3776 3759 .age_time_coeff = 15000, ··· 3795 3776 .num_gpio = 15, 3796 3777 .max_vid = 4095, 3797 3778 .port_base_addr = 0x10, 3779 + .phy_base_addr = 0x0, 3798 3780 .global1_addr = 0x1b, 3799 3781 .global2_addr = 0x1c, 3800 3782 .age_time_coeff = 15000, ··· 3818 3798 .num_gpio = 11, 3819 3799 .max_vid = 4095, 3820 3800 .port_base_addr = 0x10, 3801 + .phy_base_addr = 0x10, 3821 3802 .global1_addr = 0x1b, 3822 3803 .global2_addr = 0x1c, 3823 3804 .age_time_coeff = 3750, ··· 3841 3820 .num_internal_phys = 5, 3842 3821 .max_vid = 4095, 3843 3822 .port_base_addr = 0x10, 3823 + .phy_base_addr = 0x0, 3844 3824 .global1_addr = 0x1b, 3845 3825 .global2_addr = 0x1c, 3846 3826 .age_time_coeff = 15000, ··· 3863 3841 .num_internal_phys = 5, 3864 3842 .max_vid = 4095, 3865 3843 .port_base_addr = 0x10, 3844 + .phy_base_addr = 0x0, 3866 3845 .global1_addr = 0x1b, 3867 3846 .global2_addr = 0x1c, 3868 3847 .age_time_coeff = 15000, ··· 3886 3863 .num_gpio = 15, 3887 3864 .max_vid = 4095, 3888 3865 .port_base_addr = 0x10, 3866 + .phy_base_addr = 0x0, 3889 3867 .global1_addr = 0x1b, 3890 3868 .global2_addr = 0x1c, 3891 3869 .age_time_coeff = 15000, ··· 3909 3885 .num_gpio = 16, 3910 3886 .max_vid = 8191, 3911 3887 .port_base_addr = 0x0, 3888 + .phy_base_addr = 0x0, 3912 3889 .global1_addr = 0x1b, 3913 3890 .global2_addr = 0x1c, 3914 3891 .age_time_coeff = 3750, ··· 3932 3907 .num_gpio = 16, 3933 3908 .max_vid = 8191, 3934 3909 .port_base_addr = 0x0, 3910 + .phy_base_addr = 0x0, 3935 3911 .global1_addr = 0x1b, 3936 3912 .global2_addr = 0x1c, 3937 3913 .age_time_coeff = 3750,
+1
drivers/net/dsa/mv88e6xxx/chip.h
··· 114 114 unsigned int num_gpio; 115 115 unsigned int max_vid; 116 116 unsigned int port_base_addr; 117 + unsigned int phy_base_addr; 117 118 unsigned int global1_addr; 118 119 unsigned int global2_addr; 119 120 unsigned int age_time_coeff;
+1 -1
drivers/net/dsa/mv88e6xxx/global2.c
··· 1118 1118 err = irq; 1119 1119 goto out; 1120 1120 } 1121 - bus->irq[chip->info->port_base_addr + phy] = irq; 1121 + bus->irq[chip->info->phy_base_addr + phy] = irq; 1122 1122 } 1123 1123 return 0; 1124 1124 out:
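A note on the chip.c/global2.c pair above: the MDIO bus irq[] table is indexed by a PHY's own SMI address, which only sometimes coincides with the switch's port base. On the two entries whose phy_base_addr is 0x10 the old code happened to work; everywhere else, indexing by port_base_addr wired the interrupts to the wrong addresses. A sketch of the corrected loop, using the names from the diff (the irq_find_mapping() call and the g2_irq domain are reconstructed from the surrounding driver, not shown in this hunk):

	/* bus->irq[] maps MDIO addresses to Linux IRQ numbers; internal
	 * PHYs answer at phy_base_addr, not necessarily port_base_addr.
	 */
	for (phy = 0; phy < chip->info->num_internal_phys; phy++) {
		irq = irq_find_mapping(chip->g2_irq.domain, phy);
		if (irq < 0) {
			err = irq;
			goto out;
		}
		bus->irq[chip->info->phy_base_addr + phy] = irq;
	}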
+3
drivers/net/ethernet/aquantia/atlantic/aq_nic.c
··· 95 95 /*rss rings */ 96 96 cfg->vecs = min(cfg->aq_hw_caps->vecs, AQ_CFG_VECS_DEF); 97 97 cfg->vecs = min(cfg->vecs, num_online_cpus()); 98 + cfg->vecs = min(cfg->vecs, self->irqvecs); 98 99 /* cfg->vecs should be power of 2 for RSS */ 99 100 if (cfg->vecs >= 8U) 100 101 cfg->vecs = 8U; ··· 247 246 248 247 self->ndev->hw_features |= aq_hw_caps->hw_features; 249 248 self->ndev->features = aq_hw_caps->hw_features; 249 + self->ndev->vlan_features |= NETIF_F_HW_CSUM | NETIF_F_RXCSUM | 250 + NETIF_F_RXHASH | NETIF_F_SG | NETIF_F_LRO; 250 251 self->ndev->priv_flags = aq_hw_caps->hw_priv_flags; 251 252 self->ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE; 252 253
+1
drivers/net/ethernet/aquantia/atlantic/aq_nic.h
··· 80 80 81 81 struct pci_dev *pdev; 82 82 unsigned int msix_entry_mask; 83 + u32 irqvecs; 83 84 }; 84 85 85 86 static inline struct device *aq_nic_get_dev(struct aq_nic_s *self)
+9 -9
drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
··· 267 267 numvecs = min(numvecs, num_online_cpus()); 268 268 /*enable interrupts */ 269 269 #if !AQ_CFG_FORCE_LEGACY_INT 270 - err = pci_alloc_irq_vectors(self->pdev, numvecs, numvecs, 271 - PCI_IRQ_MSIX); 270 + numvecs = pci_alloc_irq_vectors(self->pdev, 1, numvecs, 271 + PCI_IRQ_MSIX | PCI_IRQ_MSI | 272 + PCI_IRQ_LEGACY); 272 273 273 - if (err < 0) { 274 - err = pci_alloc_irq_vectors(self->pdev, 1, 1, 275 - PCI_IRQ_MSI | PCI_IRQ_LEGACY); 276 - if (err < 0) 277 - goto err_hwinit; 274 + if (numvecs < 0) { 275 + err = numvecs; 276 + goto err_hwinit; 278 277 } 279 278 #endif 279 + self->irqvecs = numvecs; 280 280 281 281 /* net device init */ 282 282 aq_nic_cfg_start(self); ··· 298 298 kfree(self->aq_hw); 299 299 err_ioremap: 300 300 free_netdev(ndev); 301 - err_pci_func: 302 - pci_release_regions(pdev); 303 301 err_ndev: 302 + pci_release_regions(pdev); 303 + err_pci_func: 304 304 pci_disable_device(pdev); 305 305 return err; 306 306 }
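The atlantic probe change above leans on pci_alloc_irq_vectors() semantics: given a minimum of 1 and the full PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY mask, the PCI core tries each mode in that order and returns how many vectors it actually granted, or a negative errno only once every mode has failed. That single call replaces the old MSI-X-then-fallback dance and yields the self->irqvecs value that aq_nic.c above now clamps the RSS vector count to. The idiom in isolation (pdev and maxvec stand in for the driver's fields):

	int nvec;

	/* try MSI-X first, then MSI, then legacy INTx */
	nvec = pci_alloc_irq_vectors(pdev, 1, maxvec,
				     PCI_IRQ_MSIX | PCI_IRQ_MSI |
				     PCI_IRQ_LEGACY);
	if (nvec < 0)
		return nvec;	/* no interrupt mode could be enabled */

	/* nvec is the number of vectors granted; size the rings to it */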
+13 -5
drivers/net/ethernet/broadcom/bcmsysport.c
··· 2144 2144 .ndo_select_queue = bcm_sysport_select_queue, 2145 2145 }; 2146 2146 2147 - static int bcm_sysport_map_queues(struct net_device *dev, 2147 + static int bcm_sysport_map_queues(struct notifier_block *nb, 2148 2148 struct dsa_notifier_register_info *info) 2149 2149 { 2150 - struct bcm_sysport_priv *priv = netdev_priv(dev); 2151 2150 struct bcm_sysport_tx_ring *ring; 2151 + struct bcm_sysport_priv *priv; 2152 2152 struct net_device *slave_dev; 2153 2153 unsigned int num_tx_queues; 2154 2154 unsigned int q, start, port; 2155 + struct net_device *dev; 2156 + 2157 + priv = container_of(nb, struct bcm_sysport_priv, dsa_notifier); 2158 + if (priv->netdev != info->master) 2159 + return 0; 2160 + 2161 + dev = info->master; 2155 2162 2156 2163 /* We can't be setting up queue inspection for non directly attached 2157 2164 * switches ··· 2181 2174 if (priv->is_lite) 2182 2175 netif_set_real_num_tx_queues(slave_dev, 2183 2176 slave_dev->num_tx_queues / 2); 2177 + 2184 2178 num_tx_queues = slave_dev->real_num_tx_queues; 2185 2179 2186 2180 if (priv->per_port_num_tx_queues && 2187 2181 priv->per_port_num_tx_queues != num_tx_queues) 2188 - netdev_warn(slave_dev, "asymetric number of per-port queues\n"); 2182 + netdev_warn(slave_dev, "asymmetric number of per-port queues\n"); 2189 2183 2190 2184 priv->per_port_num_tx_queues = num_tx_queues; 2191 2185 ··· 2209 2201 return 0; 2210 2202 } 2211 2203 2212 - static int bcm_sysport_dsa_notifier(struct notifier_block *unused, 2204 + static int bcm_sysport_dsa_notifier(struct notifier_block *nb, 2213 2205 unsigned long event, void *ptr) 2214 2206 { 2215 2207 struct dsa_notifier_register_info *info; ··· 2219 2211 2220 2212 info = ptr; 2221 2213 2222 - return notifier_from_errno(bcm_sysport_map_queues(info->master, info)); 2214 + return notifier_from_errno(bcm_sysport_map_queues(nb, info)); 2223 2215 } 2224 2216 2225 2217 #define REV_FMT "v%2x.%02x"
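The bcmsysport rework is the standard embedded-notifier pattern: a notifier callback receives only its struct notifier_block pointer, so the driver recovers its private state with container_of() rather than trusting the net_device in the event payload, and bails out early when the event is for some other master device. In sketch form, with the fields as in the diff:

	static int bcm_sysport_dsa_notifier(struct notifier_block *nb,
					    unsigned long event, void *ptr)
	{
		/* walk back from the embedded notifier_block to the priv */
		struct bcm_sysport_priv *priv =
			container_of(nb, struct bcm_sysport_priv, dsa_notifier);
		struct dsa_notifier_register_info *info = ptr;

		if (priv->netdev != info->master)
			return 0;	/* event belongs to another instance */
		/* ... map the queues ... */
		return 0;
	}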
+5 -4
drivers/net/ethernet/broadcom/tg3.c
··· 8733 8733 tg3_mem_rx_release(tp); 8734 8734 tg3_mem_tx_release(tp); 8735 8735 8736 - /* Protect tg3_get_stats64() from reading freed tp->hw_stats. */ 8737 - tg3_full_lock(tp, 0); 8736 + /* tp->hw_stats can be referenced safely: 8737 + * 1. under rtnl_lock 8738 + * 2. or under tp->lock if TG3_FLAG_INIT_COMPLETE is set. 8739 + */ 8738 8740 if (tp->hw_stats) { 8739 8741 dma_free_coherent(&tp->pdev->dev, sizeof(struct tg3_hw_stats), 8740 8742 tp->hw_stats, tp->stats_mapping); 8741 8743 tp->hw_stats = NULL; 8742 8744 } 8743 - tg3_full_unlock(tp); 8744 8745 } 8745 8746 8746 8747 /* ··· 14179 14178 struct tg3 *tp = netdev_priv(dev); 14180 14179 14181 14180 spin_lock_bh(&tp->lock); 14182 - if (!tp->hw_stats) { 14181 + if (!tp->hw_stats || !tg3_flag(tp, INIT_COMPLETE)) { 14183 14182 *stats = tp->net_stats_prev; 14184 14183 spin_unlock_bh(&tp->lock); 14185 14184 return;
+3 -4
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 3433 3433 sgl = adapter->hma.sgt->sgl; 3434 3434 node = dev_to_node(adapter->pdev_dev); 3435 3435 for_each_sg(sgl, iter, sgt->orig_nents, i) { 3436 - newpage = alloc_pages_node(node, __GFP_NOWARN | GFP_KERNEL, 3437 - page_order); 3436 + newpage = alloc_pages_node(node, __GFP_NOWARN | GFP_KERNEL | 3437 + __GFP_ZERO, page_order); 3438 3438 if (!newpage) { 3439 3439 dev_err(adapter->pdev_dev, 3440 3440 "Not enough memory for HMA page allocation\n"); ··· 5474 5474 } 5475 5475 spin_lock_init(&adapter->mbox_lock); 5476 5476 INIT_LIST_HEAD(&adapter->mlist.list); 5477 + adapter->mbox_log->size = T4_OS_LOG_MBOX_CMDS; 5477 5478 pci_set_drvdata(pdev, adapter); 5478 5479 5479 5480 if (func != ent->driver_data) { ··· 5508 5507 err = -ENOMEM; 5509 5508 goto out_free_adapter; 5510 5509 } 5511 - 5512 - adapter->mbox_log->size = T4_OS_LOG_MBOX_CMDS; 5513 5510 5514 5511 /* PCI device has been enabled */ 5515 5512 adapter->flags |= DEV_ENABLED;
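Two small cxgb4 fixes above: the HMA pages handed to firmware are now allocated with __GFP_ZERO, and the mbox log size assignment moves earlier in probe. __GFP_ZERO makes the allocator return pre-cleared pages, so the hardware never sees stale kernel memory and no per-page memset is needed:

	/* pages arrive already zeroed */
	newpage = alloc_pages_node(node, __GFP_NOWARN | GFP_KERNEL |
				   __GFP_ZERO, page_order);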
+1 -1
drivers/net/ethernet/freescale/ucc_geth_ethtool.c
··· 61 61 static const char tx_fw_stat_gstrings[][ETH_GSTRING_LEN] = { 62 62 "tx-single-collision", 63 63 "tx-multiple-collision", 64 - "tx-late-collsion", 64 + "tx-late-collision", 65 65 "tx-aborted-frames", 66 66 "tx-lost-frames", 67 67 "tx-carrier-sense-errors",
+1 -1
drivers/net/ethernet/intel/ice/ice_controlq.c
··· 1014 1014 desc = ICE_CTL_Q_DESC(cq->rq, ntc); 1015 1015 desc_idx = ntc; 1016 1016 1017 + cq->rq_last_status = (enum ice_aq_err)le16_to_cpu(desc->retval); 1017 1018 flags = le16_to_cpu(desc->flags); 1018 1019 if (flags & ICE_AQ_FLAG_ERR) { 1019 1020 ret_code = ICE_ERR_AQ_ERROR; 1020 - cq->rq_last_status = (enum ice_aq_err)le16_to_cpu(desc->retval); 1021 1021 ice_debug(hw, ICE_DBG_AQ_MSG, 1022 1022 "Control Receive Queue Event received with error 0x%x\n", 1023 1023 cq->rq_last_status);
+1 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_ipsec.c
··· 943 943 kfree(ipsec->ip_tbl); 944 944 kfree(ipsec->rx_tbl); 945 945 kfree(ipsec->tx_tbl); 946 + kfree(ipsec); 946 947 err1: 947 - kfree(adapter->ipsec); 948 948 netdev_err(adapter->netdev, "Unable to allocate memory for SA tables"); 949 949 } 950 950
+3
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
··· 3427 3427 hw->phy.sfp_setup_needed = false; 3428 3428 } 3429 3429 3430 + if (status == IXGBE_ERR_SFP_NOT_SUPPORTED) 3431 + return status; 3432 + 3430 3433 /* Reset PHY */ 3431 3434 if (!hw->phy.reset_disable && hw->phy.ops.reset) 3432 3435 hw->phy.ops.reset(hw);
+1 -1
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 4137 4137 return NETDEV_TX_OK; 4138 4138 } 4139 4139 4140 - static int ixgbevf_xmit_frame(struct sk_buff *skb, struct net_device *netdev) 4140 + static netdev_tx_t ixgbevf_xmit_frame(struct sk_buff *skb, struct net_device *netdev) 4141 4141 { 4142 4142 struct ixgbevf_adapter *adapter = netdev_priv(netdev); 4143 4143 struct ixgbevf_ring *tx_ring;
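The ixgbevf hunk is a pure prototype fix: .ndo_start_xmit is declared in struct net_device_ops as returning netdev_tx_t, not int. The minimal shape of the hook (my_xmit and my_netdev_ops are placeholder names):

	static netdev_tx_t my_xmit(struct sk_buff *skb, struct net_device *dev)
	{
		/* post the skb to a hardware ring ... */
		return NETDEV_TX_OK;	/* or NETDEV_TX_BUSY to be requeued */
	}

	static const struct net_device_ops my_netdev_ops = {
		.ndo_start_xmit	= my_xmit,
	};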
+23 -7
drivers/net/ethernet/marvell/mvpp2.c
··· 942 942 struct clk *pp_clk; 943 943 struct clk *gop_clk; 944 944 struct clk *mg_clk; 945 + struct clk *mg_core_clk; 945 946 struct clk *axi_clk; 946 947 947 948 /* List of pointers to port structures */ ··· 8769 8768 err = clk_prepare_enable(priv->mg_clk); 8770 8769 if (err < 0) 8771 8770 goto err_gop_clk; 8771 + 8772 + priv->mg_core_clk = devm_clk_get(&pdev->dev, "mg_core_clk"); 8773 + if (IS_ERR(priv->mg_core_clk)) { 8774 + priv->mg_core_clk = NULL; 8775 + } else { 8776 + err = clk_prepare_enable(priv->mg_core_clk); 8777 + if (err < 0) 8778 + goto err_mg_clk; 8779 + } 8772 8780 } 8773 8781 8774 8782 priv->axi_clk = devm_clk_get(&pdev->dev, "axi_clk"); 8775 8783 if (IS_ERR(priv->axi_clk)) { 8776 8784 err = PTR_ERR(priv->axi_clk); 8777 8785 if (err == -EPROBE_DEFER) 8778 - goto err_gop_clk; 8786 + goto err_mg_core_clk; 8779 8787 priv->axi_clk = NULL; 8780 8788 } else { 8781 8789 err = clk_prepare_enable(priv->axi_clk); 8782 8790 if (err < 0) 8783 - goto err_gop_clk; 8791 + goto err_mg_core_clk; 8784 8792 } 8785 8793 8786 8794 /* Get system's tclk rate */ ··· 8803 8793 if (priv->hw_version == MVPP22) { 8804 8794 err = dma_set_mask(&pdev->dev, MVPP2_DESC_DMA_MASK); 8805 8795 if (err) 8806 - goto err_mg_clk; 8796 + goto err_axi_clk; 8807 8797 /* Sadly, the BM pools all share the same register to 8808 8798 * store the high 32 bits of their address. So they 8809 8799 * must all have the same high 32 bits, which forces ··· 8811 8801 */ 8812 8802 err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 8813 8803 if (err) 8814 - goto err_mg_clk; 8804 + goto err_axi_clk; 8815 8805 } 8816 8806 8817 8807 /* Initialize network controller */ 8818 8808 err = mvpp2_init(pdev, priv); 8819 8809 if (err < 0) { 8820 8810 dev_err(&pdev->dev, "failed to initialize controller\n"); 8821 - goto err_mg_clk; 8811 + goto err_axi_clk; 8822 8812 } 8823 8813 8824 8814 /* Initialize ports */ ··· 8831 8821 if (priv->port_count == 0) { 8832 8822 dev_err(&pdev->dev, "no ports enabled\n"); 8833 8823 err = -ENODEV; 8834 - goto err_mg_clk; 8824 + goto err_axi_clk; 8835 8825 } 8836 8826 8837 8827 /* Statistics must be gathered regularly because some of them (like ··· 8859 8849 mvpp2_port_remove(priv->port_list[i]); 8860 8850 i++; 8861 8851 } 8862 - err_mg_clk: 8852 + err_axi_clk: 8863 8853 clk_disable_unprepare(priv->axi_clk); 8854 + 8855 + err_mg_core_clk: 8856 + if (priv->hw_version == MVPP22) 8857 + clk_disable_unprepare(priv->mg_core_clk); 8858 + err_mg_clk: 8864 8859 if (priv->hw_version == MVPP22) 8865 8860 clk_disable_unprepare(priv->mg_clk); 8866 8861 err_gop_clk: ··· 8912 8897 return 0; 8913 8898 8914 8899 clk_disable_unprepare(priv->axi_clk); 8900 + clk_disable_unprepare(priv->mg_core_clk); 8915 8901 clk_disable_unprepare(priv->mg_clk); 8916 8902 clk_disable_unprepare(priv->pp_clk); 8917 8903 clk_disable_unprepare(priv->gop_clk);
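The mvpp2 probe grows an optional mg_core_clk and renames its error labels so the unwind remains the exact mirror of acquisition. This is the usual goto-ladder convention: each label names the last step that succeeded, and control falls through the labels to release everything acquired before it. Condensed from the diff (the MVPP22-only guards are omitted here):

	err = clk_prepare_enable(priv->mg_core_clk);
	if (err < 0)
		goto err_mg_clk;	/* mg_core not enabled: skip its disable */
	/* ... */
	err_axi_clk:
		clk_disable_unprepare(priv->axi_clk);
	err_mg_core_clk:
		clk_disable_unprepare(priv->mg_core_clk);
	err_mg_clk:
		clk_disable_unprepare(priv->mg_clk);
	err_gop_clk:
		clk_disable_unprepare(priv->gop_clk);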
+16
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 1027 1027 if (!coal->tx_max_coalesced_frames_irq) 1028 1028 return -EINVAL; 1029 1029 1030 + if (coal->tx_coalesce_usecs > MLX4_EN_MAX_COAL_TIME || 1031 + coal->rx_coalesce_usecs > MLX4_EN_MAX_COAL_TIME || 1032 + coal->rx_coalesce_usecs_low > MLX4_EN_MAX_COAL_TIME || 1033 + coal->rx_coalesce_usecs_high > MLX4_EN_MAX_COAL_TIME) { 1034 + netdev_info(dev, "%s: maximum coalesce time supported is %d usecs\n", 1035 + __func__, MLX4_EN_MAX_COAL_TIME); 1036 + return -ERANGE; 1037 + } 1038 + 1039 + if (coal->tx_max_coalesced_frames > MLX4_EN_MAX_COAL_PKTS || 1040 + coal->rx_max_coalesced_frames > MLX4_EN_MAX_COAL_PKTS) { 1041 + netdev_info(dev, "%s: maximum coalesced frames supported is %d\n", 1042 + __func__, MLX4_EN_MAX_COAL_PKTS); 1043 + return -ERANGE; 1044 + } 1045 + 1030 1046 priv->rx_frames = (coal->rx_max_coalesced_frames == 1031 1047 MLX4_EN_AUTO_CONF) ? 1032 1048 MLX4_EN_RX_COAL_TARGET :
+1 -7
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 3324 3324 MAX_TX_RINGS, GFP_KERNEL); 3325 3325 if (!priv->tx_ring[t]) { 3326 3326 err = -ENOMEM; 3327 - goto err_free_tx; 3327 + goto out; 3328 3328 } 3329 3329 priv->tx_cq[t] = kzalloc(sizeof(struct mlx4_en_cq *) * 3330 3330 MAX_TX_RINGS, GFP_KERNEL); 3331 3331 if (!priv->tx_cq[t]) { 3332 - kfree(priv->tx_ring[t]); 3333 3332 err = -ENOMEM; 3334 3333 goto out; 3335 3334 } ··· 3581 3582 3582 3583 return 0; 3583 3584 3584 - err_free_tx: 3585 - while (t--) { 3586 - kfree(priv->tx_ring[t]); 3587 - kfree(priv->tx_cq[t]); 3588 - } 3589 3585 out: 3590 3586 mlx4_en_destroy_netdev(dev); 3591 3587 return err;
+1 -1
drivers/net/ethernet/mellanox/mlx4/main.c
··· 1317 1317 1318 1318 ret = mlx4_unbond_fs_rules(dev); 1319 1319 if (ret) 1320 - mlx4_warn(dev, "multifunction unbond for flow rules failedi (%d)\n", ret); 1320 + mlx4_warn(dev, "multifunction unbond for flow rules failed (%d)\n", ret); 1321 1321 ret1 = mlx4_unbond_mac_table(dev); 1322 1322 if (ret1) { 1323 1323 mlx4_warn(dev, "multifunction unbond for MAC table failed (%d)\n", ret1);
+5 -2
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 132 132 #define MLX4_EN_TX_COAL_PKTS 16 133 133 #define MLX4_EN_TX_COAL_TIME 0x10 134 134 135 + #define MLX4_EN_MAX_COAL_PKTS U16_MAX 136 + #define MLX4_EN_MAX_COAL_TIME U16_MAX 137 + 135 138 #define MLX4_EN_RX_RATE_LOW 400000 136 139 #define MLX4_EN_RX_COAL_TIME_LOW 0 137 140 #define MLX4_EN_RX_RATE_HIGH 450000 ··· 555 552 u16 rx_usecs_low; 556 553 u32 pkt_rate_high; 557 554 u16 rx_usecs_high; 558 - u16 sample_interval; 559 - u16 adaptive_rx_coal; 555 + u32 sample_interval; 556 + u32 adaptive_rx_coal; 560 557 u32 msg_enable; 561 558 u32 loopback_ok; 562 559 u32 validate_loopback;
+5 -3
drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
··· 1007 1007 1008 1008 mutex_lock(&priv->state_lock); 1009 1009 1010 - if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) 1011 - goto out; 1012 - 1013 1010 new_channels.params = priv->channels.params; 1014 1011 mlx5e_trust_update_tx_min_inline_mode(priv, &new_channels.params); 1012 + 1013 + if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { 1014 + priv->channels.params = new_channels.params; 1015 + goto out; 1016 + } 1015 1017 1016 1018 /* Skip if tx_min_inline is the same */ 1017 1019 if (new_channels.params.tx_min_inline_mode ==
+3 -2
drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
··· 877 877 }; 878 878 879 879 static void mlx5e_build_rep_params(struct mlx5_core_dev *mdev, 880 - struct mlx5e_params *params) 880 + struct mlx5e_params *params, u16 mtu) 881 881 { 882 882 u8 cq_period_mode = MLX5_CAP_GEN(mdev, cq_period_start_from_cqe) ? 883 883 MLX5_CQ_PERIOD_MODE_START_FROM_CQE : 884 884 MLX5_CQ_PERIOD_MODE_START_FROM_EQE; 885 885 886 886 params->hard_mtu = MLX5E_ETH_HARD_MTU; 887 + params->sw_mtu = mtu; 887 888 params->log_sq_size = MLX5E_REP_PARAMS_LOG_SQ_SIZE; 888 889 params->rq_wq_type = MLX5_WQ_TYPE_LINKED_LIST; 889 890 params->log_rq_mtu_frames = MLX5E_REP_PARAMS_LOG_RQ_SIZE; ··· 932 931 933 932 priv->channels.params.num_channels = profile->max_nch(mdev); 934 933 935 - mlx5e_build_rep_params(mdev, &priv->channels.params); 934 + mlx5e_build_rep_params(mdev, &priv->channels.params, netdev->mtu); 936 935 mlx5e_build_rep_netdev(netdev); 937 936 938 937 mlx5e_timestamp_init(priv);
+1 -1
drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
··· 290 290 291 291 if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) { 292 292 netdev_err(priv->netdev, 293 - "\tCan't perform loobpack test while device is down\n"); 293 + "\tCan't perform loopback test while device is down\n"); 294 294 return -ENODEV; 295 295 } 296 296
+6 -1
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 1261 1261 f->mask); 1262 1262 addr_type = key->addr_type; 1263 1263 1264 + /* the HW doesn't support frag first/later */ 1265 + if (mask->flags & FLOW_DIS_FIRST_FRAG) 1266 + return -EOPNOTSUPP; 1267 + 1264 1268 if (mask->flags & FLOW_DIS_IS_FRAGMENT) { 1265 1269 MLX5_SET(fte_match_set_lyr_2_4, headers_c, frag, 1); 1266 1270 MLX5_SET(fte_match_set_lyr_2_4, headers_v, frag, ··· 1868 1864 } 1869 1865 1870 1866 ip_proto = MLX5_GET(fte_match_set_lyr_2_4, headers_v, ip_protocol); 1871 - if (modify_ip_header && ip_proto != IPPROTO_TCP && ip_proto != IPPROTO_UDP) { 1867 + if (modify_ip_header && ip_proto != IPPROTO_TCP && 1868 + ip_proto != IPPROTO_UDP && ip_proto != IPPROTO_ICMP) { 1872 1869 pr_info("can't offload re-write of ip proto %d\n", ip_proto); 1873 1870 return false; 1874 1871 }
+10 -10
drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
··· 255 255 dma_addr = dma_map_single(sq->pdev, skb_data, headlen, 256 256 DMA_TO_DEVICE); 257 257 if (unlikely(dma_mapping_error(sq->pdev, dma_addr))) 258 - return -ENOMEM; 258 + goto dma_unmap_wqe_err; 259 259 260 260 dseg->addr = cpu_to_be64(dma_addr); 261 261 dseg->lkey = sq->mkey_be; ··· 273 273 dma_addr = skb_frag_dma_map(sq->pdev, frag, 0, fsz, 274 274 DMA_TO_DEVICE); 275 275 if (unlikely(dma_mapping_error(sq->pdev, dma_addr))) 276 - return -ENOMEM; 276 + goto dma_unmap_wqe_err; 277 277 278 278 dseg->addr = cpu_to_be64(dma_addr); 279 279 dseg->lkey = sq->mkey_be; ··· 285 285 } 286 286 287 287 return num_dma; 288 + 289 + dma_unmap_wqe_err: 290 + mlx5e_dma_unmap_wqe_err(sq, num_dma); 291 + return -ENOMEM; 288 292 } 289 293 290 294 static inline void ··· 384 380 num_dma = mlx5e_txwqe_build_dsegs(sq, skb, skb_data, headlen, 385 381 (struct mlx5_wqe_data_seg *)cseg + ds_cnt); 386 382 if (unlikely(num_dma < 0)) 387 - goto dma_unmap_wqe_err; 383 + goto err_drop; 388 384 389 385 mlx5e_txwqe_complete(sq, skb, opcode, ds_cnt + num_dma, 390 386 num_bytes, num_dma, wi, cseg); 391 387 392 388 return NETDEV_TX_OK; 393 389 394 - dma_unmap_wqe_err: 390 + err_drop: 395 391 sq->stats.dropped++; 396 - mlx5e_dma_unmap_wqe_err(sq, wi->num_dma); 397 - 398 392 dev_kfree_skb_any(skb); 399 393 400 394 return NETDEV_TX_OK; ··· 647 645 num_dma = mlx5e_txwqe_build_dsegs(sq, skb, skb_data, headlen, 648 646 (struct mlx5_wqe_data_seg *)cseg + ds_cnt); 649 647 if (unlikely(num_dma < 0)) 650 - goto dma_unmap_wqe_err; 648 + goto err_drop; 651 649 652 650 mlx5e_txwqe_complete(sq, skb, opcode, ds_cnt + num_dma, 653 651 num_bytes, num_dma, wi, cseg); 654 652 655 653 return NETDEV_TX_OK; 656 654 657 - dma_unmap_wqe_err: 655 + err_drop: 658 656 sq->stats.dropped++; 659 - mlx5e_dma_unmap_wqe_err(sq, wi->num_dma); 660 - 661 657 dev_kfree_skb_any(skb); 662 658 663 659 return NETDEV_TX_OK;
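The mlx5e TX fix moves DMA unwinding into mlx5e_txwqe_build_dsegs() itself: on a mapping failure the helper now unmaps the num_dma segments it has mapped so far, instead of leaving callers to unmap wi->num_dma, which is stale at that point in the flow. The general map-then-unwind pattern, as a self-contained sketch:

	static int map_all(struct device *dev, void **buf, size_t *len,
			   dma_addr_t *addr, int n)
	{
		int i;

		for (i = 0; i < n; i++) {
			addr[i] = dma_map_single(dev, buf[i], len[i],
						 DMA_TO_DEVICE);
			if (dma_mapping_error(dev, addr[i]))
				goto unwind;
		}
		return 0;

	unwind:
		while (i--)	/* release only what this call mapped */
			dma_unmap_single(dev, addr[i], len[i], DMA_TO_DEVICE);
		return -ENOMEM;
	}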
+28
drivers/net/ethernet/mellanox/mlx5/core/eq.c
··· 34 34 #include <linux/module.h> 35 35 #include <linux/mlx5/driver.h> 36 36 #include <linux/mlx5/cmd.h> 37 + #ifdef CONFIG_RFS_ACCEL 38 + #include <linux/cpu_rmap.h> 39 + #endif 37 40 #include "mlx5_core.h" 38 41 #include "fpga/core.h" 39 42 #include "eswitch.h" ··· 925 922 MLX5_SET(query_eq_in, in, opcode, MLX5_CMD_OP_QUERY_EQ); 926 923 MLX5_SET(query_eq_in, in, eq_number, eq->eqn); 927 924 return mlx5_cmd_exec(dev, in, sizeof(in), out, outlen); 925 + } 926 + 927 + /* This function should only be called after mlx5_cmd_force_teardown_hca */ 928 + void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev) 929 + { 930 + struct mlx5_eq_table *table = &dev->priv.eq_table; 931 + struct mlx5_eq *eq; 932 + 933 + #ifdef CONFIG_RFS_ACCEL 934 + if (dev->rmap) { 935 + free_irq_cpu_rmap(dev->rmap); 936 + dev->rmap = NULL; 937 + } 938 + #endif 939 + list_for_each_entry(eq, &table->comp_eqs_list, list) 940 + free_irq(eq->irqn, eq); 941 + 942 + free_irq(table->pages_eq.irqn, &table->pages_eq); 943 + free_irq(table->async_eq.irqn, &table->async_eq); 944 + free_irq(table->cmd_eq.irqn, &table->cmd_eq); 945 + #ifdef CONFIG_INFINIBAND_ON_DEMAND_PAGING 946 + if (MLX5_CAP_GEN(dev, pg)) 947 + free_irq(table->pfault_eq.irqn, &table->pfault_eq); 948 + #endif 949 + pci_free_irq_vectors(dev->pdev); 928 950 }
+10 -1
drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
··· 2175 2175 memset(vf_stats, 0, sizeof(*vf_stats)); 2176 2176 vf_stats->rx_packets = 2177 2177 MLX5_GET_CTR(out, received_eth_unicast.packets) + 2178 + MLX5_GET_CTR(out, received_ib_unicast.packets) + 2178 2179 MLX5_GET_CTR(out, received_eth_multicast.packets) + 2180 + MLX5_GET_CTR(out, received_ib_multicast.packets) + 2179 2181 MLX5_GET_CTR(out, received_eth_broadcast.packets); 2180 2182 2181 2183 vf_stats->rx_bytes = 2182 2184 MLX5_GET_CTR(out, received_eth_unicast.octets) + 2185 + MLX5_GET_CTR(out, received_ib_unicast.octets) + 2183 2186 MLX5_GET_CTR(out, received_eth_multicast.octets) + 2187 + MLX5_GET_CTR(out, received_ib_multicast.octets) + 2184 2188 MLX5_GET_CTR(out, received_eth_broadcast.octets); 2185 2189 2186 2190 vf_stats->tx_packets = 2187 2191 MLX5_GET_CTR(out, transmitted_eth_unicast.packets) + 2192 + MLX5_GET_CTR(out, transmitted_ib_unicast.packets) + 2188 2193 MLX5_GET_CTR(out, transmitted_eth_multicast.packets) + 2194 + MLX5_GET_CTR(out, transmitted_ib_multicast.packets) + 2189 2195 MLX5_GET_CTR(out, transmitted_eth_broadcast.packets); 2190 2196 2191 2197 vf_stats->tx_bytes = 2192 2198 MLX5_GET_CTR(out, transmitted_eth_unicast.octets) + 2199 + MLX5_GET_CTR(out, transmitted_ib_unicast.octets) + 2193 2200 MLX5_GET_CTR(out, transmitted_eth_multicast.octets) + 2201 + MLX5_GET_CTR(out, transmitted_ib_multicast.octets) + 2194 2202 MLX5_GET_CTR(out, transmitted_eth_broadcast.octets); 2195 2203 2196 2204 vf_stats->multicast = 2197 - MLX5_GET_CTR(out, received_eth_multicast.packets); 2205 + MLX5_GET_CTR(out, received_eth_multicast.packets) + 2206 + MLX5_GET_CTR(out, received_ib_multicast.packets); 2198 2207 2199 2208 vf_stats->broadcast = 2200 2209 MLX5_GET_CTR(out, received_eth_broadcast.packets);
+16 -10
drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
··· 187 187 static void del_sw_hw_rule(struct fs_node *node); 188 188 static bool mlx5_flow_dests_cmp(struct mlx5_flow_destination *d1, 189 189 struct mlx5_flow_destination *d2); 190 + static void cleanup_root_ns(struct mlx5_flow_root_namespace *root_ns); 190 191 static struct mlx5_flow_rule * 191 192 find_flow_rule(struct fs_fte *fte, 192 193 struct mlx5_flow_destination *dest); ··· 482 481 483 482 if (rule->dest_attr.type == MLX5_FLOW_DESTINATION_TYPE_COUNTER && 484 483 --fte->dests_size) { 485 - modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_ACTION); 484 + modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_ACTION) | 485 + BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_FLOW_COUNTERS); 486 486 fte->action.action &= ~MLX5_FLOW_CONTEXT_ACTION_COUNT; 487 487 update_fte = true; 488 488 goto out; ··· 2353 2351 2354 2352 static int init_root_ns(struct mlx5_flow_steering *steering) 2355 2353 { 2354 + int err; 2355 + 2356 2356 steering->root_ns = create_root_ns(steering, FS_FT_NIC_RX); 2357 2357 if (!steering->root_ns) 2358 - goto cleanup; 2358 + return -ENOMEM; 2359 2359 2360 - if (init_root_tree(steering, &root_fs, &steering->root_ns->ns.node)) 2361 - goto cleanup; 2360 + err = init_root_tree(steering, &root_fs, &steering->root_ns->ns.node); 2361 + if (err) 2362 + goto out_err; 2362 2363 2363 2364 set_prio_attrs(steering->root_ns); 2364 - 2365 - if (create_anchor_flow_table(steering)) 2366 - goto cleanup; 2365 + err = create_anchor_flow_table(steering); 2366 + if (err) 2367 + goto out_err; 2367 2368 2368 2369 return 0; 2369 2370 2370 - cleanup: 2371 - mlx5_cleanup_fs(steering->dev); 2372 - return -ENOMEM; 2371 + out_err: 2372 + cleanup_root_ns(steering->root_ns); 2373 + steering->root_ns = NULL; 2374 + return err; 2373 2375 } 2374 2376 2375 2377 static void clean_tree(struct fs_node *node)
+8
drivers/net/ethernet/mellanox/mlx5/core/main.c
··· 1587 1587 1588 1588 mlx5_enter_error_state(dev, true); 1589 1589 1590 + /* Some platforms require freeing the IRQs in the shutdown 1591 + * flow. If they aren't freed they can't be allocated after 1592 + * kexec. There is no need to clean up the mlx5_core software 1593 + * contexts. 1594 + */ 1595 + mlx5_irq_clear_affinity_hints(dev); 1596 + mlx5_core_eq_free_irqs(dev); 1597 + 1590 1598 return 0; 1591 1599 1592 1600 }
+2
drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
··· 128 128 u32 *out, int outlen); 129 129 int mlx5_start_eqs(struct mlx5_core_dev *dev); 130 130 void mlx5_stop_eqs(struct mlx5_core_dev *dev); 131 + /* This function should only be called after mlx5_cmd_force_teardown_hca */ 132 + void mlx5_core_eq_free_irqs(struct mlx5_core_dev *dev); 131 133 struct mlx5_eq *mlx5_eqn2eq(struct mlx5_core_dev *dev, int eqn); 132 134 u32 mlx5_eq_poll_irq_disabled(struct mlx5_eq *eq); 133 135 void mlx5_cq_tasklet_cb(unsigned long data);
+2 -2
drivers/net/ethernet/mellanox/mlxsw/core.c
··· 1100 1100 err_alloc_lag_mapping: 1101 1101 mlxsw_ports_fini(mlxsw_core); 1102 1102 err_ports_init: 1103 - mlxsw_bus->fini(bus_priv); 1104 - err_bus_init: 1105 1103 if (!reload) 1106 1104 devlink_resources_unregister(devlink, NULL); 1107 1105 err_register_resources: 1106 + mlxsw_bus->fini(bus_priv); 1107 + err_bus_init: 1108 1108 if (!reload) 1109 1109 devlink_free(devlink); 1110 1110 err_devlink_alloc:
+5 -7
drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
··· 1718 1718 struct net_device *dev = mlxsw_sp_port->dev; 1719 1719 int err; 1720 1720 1721 - if (bridge_port->bridge_device->multicast_enabled) { 1722 - if (bridge_port->bridge_device->multicast_enabled) { 1723 - err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, 1724 - false); 1725 - if (err) 1726 - netdev_err(dev, "Unable to remove port from SMID\n"); 1727 - } 1721 + if (bridge_port->bridge_device->multicast_enabled && 1722 + !bridge_port->mrouter) { 1723 + err = mlxsw_sp_port_smid_set(mlxsw_sp_port, mid->mid, false); 1724 + if (err) 1725 + netdev_err(dev, "Unable to remove port from SMID\n"); 1728 1726 } 1729 1727 1730 1728 err = mlxsw_sp_port_remove_from_mid(mlxsw_sp_port, mid);
+8 -2
drivers/net/ethernet/netronome/nfp/flower/action.c
··· 183 183 nfp_fl_set_ipv4_udp_tun(struct nfp_fl_set_ipv4_udp_tun *set_tun, 184 184 const struct tc_action *action, 185 185 struct nfp_fl_pre_tunnel *pre_tun, 186 - enum nfp_flower_tun_type tun_type) 186 + enum nfp_flower_tun_type tun_type, 187 + struct net_device *netdev) 187 188 { 188 189 size_t act_size = sizeof(struct nfp_fl_set_ipv4_udp_tun); 189 190 struct ip_tunnel_info *ip_tun = tcf_tunnel_info(action); 190 191 u32 tmp_set_ip_tun_type_index = 0; 191 192 /* Currently support one pre-tunnel so index is always 0. */ 192 193 int pretun_idx = 0; 194 + struct net *net; 193 195 194 196 if (ip_tun->options_len) 195 197 return -EOPNOTSUPP; 198 + 199 + net = dev_net(netdev); 196 200 197 201 set_tun->head.jump_id = NFP_FL_ACTION_OPCODE_SET_IPV4_TUNNEL; 198 202 set_tun->head.len_lw = act_size >> NFP_FL_LW_SIZ; ··· 208 204 209 205 set_tun->tun_type_index = cpu_to_be32(tmp_set_ip_tun_type_index); 210 206 set_tun->tun_id = ip_tun->key.tun_id; 207 + set_tun->ttl = net->ipv4.sysctl_ip_default_ttl; 211 208 212 209 /* Complete pre_tunnel action. */ 213 210 pre_tun->ipv4_dst = ip_tun->key.u.ipv4.dst; ··· 516 511 *a_len += sizeof(struct nfp_fl_pre_tunnel); 517 512 518 513 set_tun = (void *)&nfp_fl->action_data[*a_len]; 519 - err = nfp_fl_set_ipv4_udp_tun(set_tun, a, pre_tun, *tun_type); 514 + err = nfp_fl_set_ipv4_udp_tun(set_tun, a, pre_tun, *tun_type, 515 + netdev); 520 516 if (err) 521 517 return err; 522 518 *a_len += sizeof(struct nfp_fl_set_ipv4_udp_tun);
+4 -1
drivers/net/ethernet/netronome/nfp/flower/cmsg.h
··· 190 190 __be16 reserved; 191 191 __be64 tun_id __packed; 192 192 __be32 tun_type_index; 193 - __be32 extra[3]; 193 + __be16 reserved2; 194 + u8 ttl; 195 + u8 reserved3; 196 + __be32 extra[2]; 194 197 }; 195 198 196 199 /* Metadata with L2 (1W/4B)
+1 -20
drivers/net/ethernet/netronome/nfp/flower/main.c
··· 52 52 53 53 #define NFP_FLOWER_ALLOWED_VER 0x0001000000010000UL 54 54 55 - #define NFP_FLOWER_FRAME_HEADROOM 158 56 - 57 55 static const char *nfp_flower_extra_cap(struct nfp_app *app, struct nfp_net *nn) 58 56 { 59 57 return "FLOWER"; ··· 358 360 } 359 361 360 362 SET_NETDEV_DEV(repr, &priv->nn->pdev->dev); 361 - nfp_net_get_mac_addr(app->pf, port); 363 + nfp_net_get_mac_addr(app->pf, repr, port); 362 364 363 365 cmsg_port_id = nfp_flower_cmsg_phys_port(phys_port); 364 366 err = nfp_repr_init(app, repr, ··· 557 559 app->priv = NULL; 558 560 } 559 561 560 - static int 561 - nfp_flower_check_mtu(struct nfp_app *app, struct net_device *netdev, 562 - int new_mtu) 563 - { 564 - /* The flower fw reserves NFP_FLOWER_FRAME_HEADROOM bytes of the 565 - * supported max MTU to allow for appending tunnel headers. To prevent 566 - * unexpected behaviour this needs to be accounted for. 567 - */ 568 - if (new_mtu > netdev->max_mtu - NFP_FLOWER_FRAME_HEADROOM) { 569 - nfp_err(app->cpp, "New MTU (%d) is not valid\n", new_mtu); 570 - return -EINVAL; 571 - } 572 - 573 - return 0; 574 - } 575 - 576 562 static bool nfp_flower_check_ack(struct nfp_flower_priv *app_priv) 577 563 { 578 564 bool ret; ··· 638 656 .init = nfp_flower_init, 639 657 .clean = nfp_flower_clean, 640 658 641 - .check_mtu = nfp_flower_check_mtu, 642 659 .repr_change_mtu = nfp_flower_repr_change_mtu, 643 660 644 661 .vnic_alloc = nfp_flower_vnic_alloc,
+1 -1
drivers/net/ethernet/netronome/nfp/nfp_app_nic.c
··· 69 69 if (err) 70 70 return err < 0 ? err : 0; 71 71 72 - nfp_net_get_mac_addr(app->pf, nn->port); 72 + nfp_net_get_mac_addr(app->pf, nn->dp.netdev, nn->port); 73 73 74 74 return 0; 75 75 }
+3 -1
drivers/net/ethernet/netronome/nfp/nfp_main.h
··· 171 171 int nfp_hwmon_register(struct nfp_pf *pf); 172 172 void nfp_hwmon_unregister(struct nfp_pf *pf); 173 173 174 - void nfp_net_get_mac_addr(struct nfp_pf *pf, struct nfp_port *port); 174 + void 175 + nfp_net_get_mac_addr(struct nfp_pf *pf, struct net_device *netdev, 176 + struct nfp_port *port); 175 177 176 178 bool nfp_ctrl_tx(struct nfp_net *nn, struct sk_buff *skb); 177 179
+18 -13
drivers/net/ethernet/netronome/nfp/nfp_net_main.c
··· 67 67 /** 68 68 * nfp_net_get_mac_addr() - Get the MAC address. 69 69 * @pf: NFP PF handle 70 + * @netdev: net_device to set MAC address on 70 71 * @port: NFP port structure 71 72 * 72 73 * First try to get the MAC address from NSP ETH table. If that 73 74 * fails generate a random address. 74 75 */ 75 - void nfp_net_get_mac_addr(struct nfp_pf *pf, struct nfp_port *port) 76 + void 77 + nfp_net_get_mac_addr(struct nfp_pf *pf, struct net_device *netdev, 78 + struct nfp_port *port) 76 79 { 77 80 struct nfp_eth_table_port *eth_port; 78 81 79 82 eth_port = __nfp_port_get_eth_port(port); 80 83 if (!eth_port) { 81 - eth_hw_addr_random(port->netdev); 84 + eth_hw_addr_random(netdev); 82 85 return; 83 86 } 84 87 85 - ether_addr_copy(port->netdev->dev_addr, eth_port->mac_addr); 86 - ether_addr_copy(port->netdev->perm_addr, eth_port->mac_addr); 88 + ether_addr_copy(netdev->dev_addr, eth_port->mac_addr); 89 + ether_addr_copy(netdev->perm_addr, eth_port->mac_addr); 87 90 } 88 91 89 92 static struct nfp_eth_table_port * ··· 514 511 return PTR_ERR(mem); 515 512 } 516 513 517 - min_size = NFP_MAC_STATS_SIZE * (pf->eth_tbl->max_index + 1); 518 - pf->mac_stats_mem = nfp_rtsym_map(pf->rtbl, "_mac_stats", 519 - "net.macstats", min_size, 520 - &pf->mac_stats_bar); 521 - if (IS_ERR(pf->mac_stats_mem)) { 522 - if (PTR_ERR(pf->mac_stats_mem) != -ENOENT) { 523 - err = PTR_ERR(pf->mac_stats_mem); 524 - goto err_unmap_ctrl; 514 + if (pf->eth_tbl) { 515 + min_size = NFP_MAC_STATS_SIZE * (pf->eth_tbl->max_index + 1); 516 + pf->mac_stats_mem = nfp_rtsym_map(pf->rtbl, "_mac_stats", 517 + "net.macstats", min_size, 518 + &pf->mac_stats_bar); 519 + if (IS_ERR(pf->mac_stats_mem)) { 520 + if (PTR_ERR(pf->mac_stats_mem) != -ENOENT) { 521 + err = PTR_ERR(pf->mac_stats_mem); 522 + goto err_unmap_ctrl; 523 + } 524 + pf->mac_stats_mem = NULL; 525 525 } 526 - pf->mac_stats_mem = NULL; 527 526 } 528 527 529 528 pf->vf_cfg_mem = nfp_net_pf_map_rtsym(pf, "net.vfcfg",
+6 -4
drivers/net/ethernet/ni/nixge.c
··· 1170 1170 1171 1171 cell = nvmem_cell_get(dev, "address"); 1172 1172 if (IS_ERR(cell)) 1173 - return cell; 1173 + return NULL; 1174 1174 1175 1175 mac = nvmem_cell_read(cell, &cell_size); 1176 1176 nvmem_cell_put(cell); ··· 1183 1183 struct nixge_priv *priv; 1184 1184 struct net_device *ndev; 1185 1185 struct resource *dmares; 1186 - const char *mac_addr; 1186 + const u8 *mac_addr; 1187 1187 int err; 1188 1188 1189 1189 ndev = alloc_etherdev(sizeof(*priv)); ··· 1202 1202 ndev->max_mtu = NIXGE_JUMBO_MTU; 1203 1203 1204 1204 mac_addr = nixge_get_nvmem_address(&pdev->dev); 1205 - if (mac_addr && is_valid_ether_addr(mac_addr)) 1205 + if (mac_addr && is_valid_ether_addr(mac_addr)) { 1206 1206 ether_addr_copy(ndev->dev_addr, mac_addr); 1207 - else 1207 + kfree(mac_addr); 1208 + } else { 1208 1209 eth_hw_addr_random(ndev); 1210 + } 1209 1211 1210 1212 priv = netdev_priv(ndev); 1211 1213 priv->ndev = ndev;
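Three related nixge fixes: nixge_get_nvmem_address() used to leak an ERR_PTR to callers that only test for NULL (an ERR_PTR is non-NULL, so it would then be dereferenced as a MAC address); the address is raw bytes, hence const u8 * rather than const char *; and the buffer returned by nvmem_cell_read() is kmalloc'd and owned by the caller. The ownership rule in miniature:

	cell = nvmem_cell_get(dev, "address");
	if (IS_ERR(cell))
		return NULL;			/* callers test for NULL */

	mac = nvmem_cell_read(cell, &cell_size);	/* kmalloc'd buffer */
	nvmem_cell_put(cell);
	/* ... validate, copy ... */
	ether_addr_copy(ndev->dev_addr, mac);
	kfree(mac);				/* caller owns it: free it */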
+2 -4
drivers/net/ethernet/qlogic/qed/qed_l2.c
··· 115 115 116 116 void qed_l2_setup(struct qed_hwfn *p_hwfn) 117 117 { 118 - if (p_hwfn->hw_info.personality != QED_PCI_ETH && 119 - p_hwfn->hw_info.personality != QED_PCI_ETH_ROCE) 118 + if (!QED_IS_L2_PERSONALITY(p_hwfn)) 120 119 return; 121 120 122 121 mutex_init(&p_hwfn->p_l2_info->lock); ··· 125 126 { 126 127 u32 i; 127 128 128 - if (p_hwfn->hw_info.personality != QED_PCI_ETH && 129 - p_hwfn->hw_info.personality != QED_PCI_ETH_ROCE) 129 + if (!QED_IS_L2_PERSONALITY(p_hwfn)) 130 130 return; 131 131 132 132 if (!p_hwfn->p_l2_info)
+1 -1
drivers/net/ethernet/qlogic/qed/qed_ll2.c
··· 2370 2370 u8 flags = 0; 2371 2371 2372 2372 if (unlikely(skb->ip_summed != CHECKSUM_NONE)) { 2373 - DP_INFO(cdev, "Cannot transmit a checksumed packet\n"); 2373 + DP_INFO(cdev, "Cannot transmit a checksummed packet\n"); 2374 2374 return -EINVAL; 2375 2375 } 2376 2376
+1 -1
drivers/net/ethernet/qlogic/qed/qed_main.c
··· 680 680 tasklet_disable(p_hwfn->sp_dpc); 681 681 p_hwfn->b_sp_dpc_enabled = false; 682 682 DP_VERBOSE(cdev, NETIF_MSG_IFDOWN, 683 - "Disabled sp taskelt [hwfn %d] at %p\n", 683 + "Disabled sp tasklet [hwfn %d] at %p\n", 684 684 i, p_hwfn->sp_dpc); 685 685 } 686 686 }
+1 -1
drivers/net/ethernet/qlogic/qed/qed_roce.c
··· 848 848 849 849 if (!(qp->resp_offloaded)) { 850 850 DP_NOTICE(p_hwfn, 851 - "The responder's qp should be offloded before requester's\n"); 851 + "The responder's qp should be offloaded before requester's\n"); 852 852 return -EINVAL; 853 853 } 854 854
+1 -1
drivers/net/ethernet/qlogic/qede/qede_rdma.c
··· 238 238 } 239 239 240 240 if (!found) { 241 - event_node = kzalloc(sizeof(*event_node), GFP_KERNEL); 241 + event_node = kzalloc(sizeof(*event_node), GFP_ATOMIC); 242 242 if (!event_node) { 243 243 DP_NOTICE(edev, 244 244 "qedr: Could not allocate memory for rdma work\n");
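The qede_rdma allocation runs on a path that is not allowed to sleep, and GFP_KERNEL may block in memory reclaim. GFP_ATOMIC never sleeps, at the cost of failing more readily, which is why the NULL check immediately below the allocation matters:

	/* process context: may sleep while reclaiming memory */
	node = kzalloc(sizeof(*node), GFP_KERNEL);

	/* atomic context (spinlock held, softirq, ...): must not sleep,
	 * but can fail under pressure, so always handle NULL */
	node = kzalloc(sizeof(*node), GFP_ATOMIC);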
+1 -1
drivers/net/ethernet/realtek/8139too.c
··· 2224 2224 struct rtl8139_private *tp = netdev_priv(dev); 2225 2225 const int irq = tp->pci_dev->irq; 2226 2226 2227 - disable_irq(irq); 2227 + disable_irq_nosync(irq); 2228 2228 rtl8139_interrupt(irq, dev); 2229 2229 enable_irq(irq); 2230 2230 }
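The 8139too poll controller (netconsole's path into the driver) can run in hard-IRQ context. disable_irq() is not safe there: it synchronizes with, and may sleep waiting for, any in-flight handler on that line. disable_irq_nosync() only masks the line and returns, which is all this path needs:

	/* mask without waiting for running handlers, run the ISR by
	 * hand, then unmask */
	disable_irq_nosync(irq);
	rtl8139_interrupt(irq, dev);
	enable_irq(irq);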
+3
drivers/net/ethernet/realtek/r8169.c
··· 4981 4981 static void rtl_pll_power_up(struct rtl8169_private *tp) 4982 4982 { 4983 4983 rtl_generic_op(tp, tp->pll_power_ops.up); 4984 + 4985 + /* give MAC/PHY some time to resume */ 4986 + msleep(20); 4984 4987 } 4985 4988 4986 4989 static void rtl_init_pll_power_ops(struct rtl8169_private *tp)
+3 -2
drivers/net/ethernet/sfc/ef10.c
··· 4784 4784 * will set rule->filter_id to EFX_ARFS_FILTER_ID_PENDING, meaning that 4785 4785 * the rule is not removed by efx_rps_hash_del() below. 4786 4786 */ 4787 - ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority, 4788 - filter_idx, true) == 0; 4787 + if (ret) 4788 + ret = efx_ef10_filter_remove_internal(efx, 1U << spec->priority, 4789 + filter_idx, true) == 0; 4789 4790 /* While we can't safely dereference rule (we dropped the lock), we can 4790 4791 * still test it for NULL. 4791 4792 */
+2
drivers/net/ethernet/sfc/rx.c
··· 839 839 int rc; 840 840 841 841 rc = efx->type->filter_insert(efx, &req->spec, true); 842 + if (rc >= 0) 843 + rc %= efx->type->max_rx_ip_filters; 842 844 if (efx->rps_hash_table) { 843 845 spin_lock_bh(&efx->rps_hash_lock); 844 846 rule = efx_rps_hash_find(efx, &req->spec);
+2 -3
drivers/net/ethernet/sun/niu.c
··· 3443 3443 3444 3444 len = (val & RCR_ENTRY_L2_LEN) >> 3445 3445 RCR_ENTRY_L2_LEN_SHIFT; 3446 - len -= ETH_FCS_LEN; 3446 + append_size = len + ETH_HLEN + ETH_FCS_LEN; 3447 3447 3448 3448 addr = (val & RCR_ENTRY_PKT_BUF_ADDR) << 3449 3449 RCR_ENTRY_PKT_BUF_ADDR_SHIFT; ··· 3453 3453 RCR_ENTRY_PKTBUFSZ_SHIFT]; 3454 3454 3455 3455 off = addr & ~PAGE_MASK; 3456 - append_size = rcr_size; 3457 3456 if (num_rcr == 1) { 3458 3457 int ptype; 3459 3458 ··· 3465 3466 else 3466 3467 skb_checksum_none_assert(skb); 3467 3468 } else if (!(val & RCR_ENTRY_MULTI)) 3468 - append_size = len - skb->len; 3469 + append_size = append_size - skb->len; 3469 3470 3470 3471 niu_rx_skb_append(skb, page, off, append_size, rcr_size); 3471 3472 if ((page->index + rp->rbr_block_size) - rcr_size == addr) {
+2
drivers/net/ethernet/ti/cpsw.c
··· 1340 1340 cpsw_ale_add_ucast(cpsw->ale, priv->mac_addr, 1341 1341 HOST_PORT_NUM, ALE_VLAN | 1342 1342 ALE_SECURE, slave->port_vlan); 1343 + cpsw_ale_control_set(cpsw->ale, slave_port, 1344 + ALE_PORT_DROP_UNKNOWN_VLAN, 1); 1343 1345 } 1344 1346 1345 1347 static void soft_reset_slave(struct cpsw_slave *slave)
+2 -1
drivers/net/hyperv/netvsc_drv.c
··· 1840 1840 goto rx_handler_failed; 1841 1841 } 1842 1842 1843 - ret = netdev_upper_dev_link(vf_netdev, ndev, NULL); 1843 + ret = netdev_master_upper_dev_link(vf_netdev, ndev, 1844 + NULL, NULL, NULL); 1844 1845 if (ret != 0) { 1845 1846 netdev_err(vf_netdev, 1846 1847 "can not set master device %s (err = %d)\n",
+1 -1
drivers/net/hyperv/rndis_filter.c
··· 1288 1288 rndis_device->link_state ? "down" : "up"); 1289 1289 1290 1290 if (net_device->nvsp_version < NVSP_PROTOCOL_VERSION_5) 1291 - return net_device; 1291 + goto out; 1292 1292 1293 1293 rndis_filter_query_link_speed(rndis_device, net_device); 1294 1294
+1 -1
drivers/net/ieee802154/atusb.c
··· 1045 1045 atusb->tx_dr.bRequest = ATUSB_TX; 1046 1046 atusb->tx_dr.wValue = cpu_to_le16(0); 1047 1047 1048 - atusb->tx_urb = usb_alloc_urb(0, GFP_ATOMIC); 1048 + atusb->tx_urb = usb_alloc_urb(0, GFP_KERNEL); 1049 1049 if (!atusb->tx_urb) 1050 1050 goto fail; 1051 1051
+10 -5
drivers/net/ieee802154/mcr20a.c
··· 1267 1267 ret = mcr20a_get_platform_data(spi, pdata); 1268 1268 if (ret < 0) { 1269 1269 dev_crit(&spi->dev, "mcr20a_get_platform_data failed.\n"); 1270 - return ret; 1270 + goto free_pdata; 1271 1271 } 1272 1272 1273 1273 /* init reset gpio */ ··· 1275 1275 ret = devm_gpio_request_one(&spi->dev, pdata->rst_gpio, 1276 1276 GPIOF_OUT_INIT_HIGH, "reset"); 1277 1277 if (ret) 1278 - return ret; 1278 + goto free_pdata; 1279 1279 } 1280 1280 1281 1281 /* reset mcr20a */ ··· 1291 1291 hw = ieee802154_alloc_hw(sizeof(*lp), &mcr20a_hw_ops); 1292 1292 if (!hw) { 1293 1293 dev_crit(&spi->dev, "ieee802154_alloc_hw failed\n"); 1294 - return -ENOMEM; 1294 + ret = -ENOMEM; 1295 + goto free_pdata; 1295 1296 } 1296 1297 1297 1298 /* init mcr20a local data */ ··· 1309 1308 /* init buf */ 1310 1309 lp->buf = devm_kzalloc(&spi->dev, SPI_COMMAND_BUFFER, GFP_KERNEL); 1311 1310 1312 - if (!lp->buf) 1313 - return -ENOMEM; 1311 + if (!lp->buf) { 1312 + ret = -ENOMEM; 1313 + goto free_dev; 1314 + } 1314 1315 1315 1316 mcr20a_setup_tx_spi_messages(lp); 1316 1317 mcr20a_setup_rx_spi_messages(lp); ··· 1369 1366 1370 1367 free_dev: 1371 1368 ieee802154_free_hw(lp->hw); 1369 + free_pdata: 1370 + kfree(pdata); 1372 1371 1373 1372 return ret; 1374 1373 }
+10
drivers/net/phy/broadcom.c
··· 720 720 .get_strings = bcm_phy_get_strings, 721 721 .get_stats = bcm53xx_phy_get_stats, 722 722 .probe = bcm53xx_phy_probe, 723 + }, { 724 + .phy_id = PHY_ID_BCM89610, 725 + .phy_id_mask = 0xfffffff0, 726 + .name = "Broadcom BCM89610", 727 + .features = PHY_GBIT_FEATURES, 728 + .flags = PHY_HAS_INTERRUPT, 729 + .config_init = bcm54xx_config_init, 730 + .ack_interrupt = bcm_phy_ack_intr, 731 + .config_intr = bcm_phy_config_intr, 723 732 } }; 724 733 725 734 module_phy_driver(broadcom_drivers); ··· 750 741 { PHY_ID_BCMAC131, 0xfffffff0 }, 751 742 { PHY_ID_BCM5241, 0xfffffff0 }, 752 743 { PHY_ID_BCM5395, 0xfffffff0 }, 744 + { PHY_ID_BCM89610, 0xfffffff0 }, 753 745 { } 754 746 }; 755 747
+10 -1
drivers/net/phy/phy_device.c
··· 535 535 536 536 /* Grab the bits from PHYIR1, and put them in the upper half */ 537 537 phy_reg = mdiobus_read(bus, addr, MII_PHYSID1); 538 - if (phy_reg < 0) 538 + if (phy_reg < 0) { 539 + /* if there is no device, return without an error so scanning 540 + * the bus works properly 541 + */ 542 + if (phy_reg == -EIO || phy_reg == -ENODEV) { 543 + *phy_id = 0xffffffff; 544 + return 0; 545 + } 546 + 539 547 return -EIO; 548 + } 540 549 541 550 *phy_id = (phy_reg & 0xffff) << 16; 542 551
+1 -1
drivers/net/phy/sfp-bus.c
··· 125 125 if (id->base.br_nominal) { 126 126 if (id->base.br_nominal != 255) { 127 127 br_nom = id->base.br_nominal * 100; 128 - br_min = br_nom + id->base.br_nominal * id->ext.br_min; 128 + br_min = br_nom - id->base.br_nominal * id->ext.br_min; 129 129 br_max = br_nom + id->base.br_nominal * id->ext.br_max; 130 130 } else if (id->ext.br_max) { 131 131 br_nom = 250 * id->ext.br_max;
+13
drivers/net/usb/qmi_wwan.c
··· 1098 1098 {QMI_FIXED_INTF(0x05c6, 0x9080, 8)}, 1099 1099 {QMI_FIXED_INTF(0x05c6, 0x9083, 3)}, 1100 1100 {QMI_FIXED_INTF(0x05c6, 0x9084, 4)}, 1101 + {QMI_FIXED_INTF(0x05c6, 0x90b2, 3)}, /* ublox R410M */ 1101 1102 {QMI_FIXED_INTF(0x05c6, 0x920d, 0)}, 1102 1103 {QMI_FIXED_INTF(0x05c6, 0x920d, 5)}, 1103 1104 {QMI_QUIRK_SET_DTR(0x05c6, 0x9625, 4)}, /* YUGA CLM920-NC5 */ ··· 1342 1341 if (!id->driver_info) { 1343 1342 dev_dbg(&intf->dev, "setting defaults for dynamic device id\n"); 1344 1343 id->driver_info = (unsigned long)&qmi_wwan_info; 1344 + } 1345 + 1346 + /* There are devices where the same interface number can be 1347 + * configured as different functions. We should only bind to 1348 + * vendor specific functions when matching on interface number 1349 + */ 1350 + if (id->match_flags & USB_DEVICE_ID_MATCH_INT_NUMBER && 1351 + desc->bInterfaceClass != USB_CLASS_VENDOR_SPEC) { 1352 + dev_dbg(&intf->dev, 1353 + "Rejecting interface number match for class %02x\n", 1354 + desc->bInterfaceClass); 1355 + return -ENODEV; 1345 1356 } 1346 1357 1347 1358 /* Quectel EC20 quirk where we've QMI on interface 4 instead of 0 */
+20 -16
drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c
··· 459 459 kfree(req); 460 460 } 461 461 462 - static void brcmf_fw_request_nvram_done(const struct firmware *fw, void *ctx) 462 + static int brcmf_fw_request_nvram_done(const struct firmware *fw, void *ctx) 463 463 { 464 464 struct brcmf_fw *fwctx = ctx; 465 465 struct brcmf_fw_item *cur; ··· 498 498 brcmf_dbg(TRACE, "nvram %p len %d\n", nvram, nvram_length); 499 499 cur->nv_data.data = nvram; 500 500 cur->nv_data.len = nvram_length; 501 - return; 501 + return 0; 502 502 503 503 fail: 504 - brcmf_dbg(TRACE, "failed: dev=%s\n", dev_name(fwctx->dev)); 505 - fwctx->done(fwctx->dev, -ENOENT, NULL); 506 - brcmf_fw_free_request(fwctx->req); 507 - kfree(fwctx); 504 + return -ENOENT; 508 505 } 509 506 510 507 static int brcmf_fw_request_next_item(struct brcmf_fw *fwctx, bool async) ··· 550 553 brcmf_dbg(TRACE, "enter: firmware %s %sfound\n", cur->path, 551 554 fw ? "" : "not "); 552 555 553 - if (fw) { 554 - if (cur->type == BRCMF_FW_TYPE_BINARY) 555 - cur->binary = fw; 556 - else if (cur->type == BRCMF_FW_TYPE_NVRAM) 557 - brcmf_fw_request_nvram_done(fw, fwctx); 558 - else 559 - release_firmware(fw); 560 - } else if (cur->type == BRCMF_FW_TYPE_NVRAM) { 561 - brcmf_fw_request_nvram_done(NULL, fwctx); 562 - } else if (!(cur->flags & BRCMF_FW_REQF_OPTIONAL)) { 556 + if (!fw) 563 557 ret = -ENOENT; 558 + 559 + switch (cur->type) { 560 + case BRCMF_FW_TYPE_NVRAM: 561 + ret = brcmf_fw_request_nvram_done(fw, fwctx); 562 + break; 563 + case BRCMF_FW_TYPE_BINARY: 564 + cur->binary = fw; 565 + break; 566 + default: 567 + /* something fishy here so bail out early */ 568 + brcmf_err("unknown fw type: %d\n", cur->type); 569 + release_firmware(fw); 570 + ret = -EINVAL; 564 571 goto fail; 565 572 } 573 + 574 + if (ret < 0 && !(cur->flags & BRCMF_FW_REQF_OPTIONAL)) 575 + goto fail; 566 576 567 577 do { 568 578 if (++fwctx->curpos == fwctx->req->n_items) {
+5 -8
drivers/net/wireless/intel/iwlwifi/fw/api/scan.h
··· 8 8 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 9 9 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 10 10 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 11 + * Copyright(c) 2018 Intel Corporation 11 12 * 12 13 * This program is free software; you can redistribute it and/or modify 13 14 * it under the terms of version 2 of the GNU General Public License as ··· 31 30 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 32 31 * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 33 32 * Copyright(c) 2016 - 2017 Intel Deutschland GmbH 34 - * Copyright(c) 2018 Intel Corporation 33 + * Copyright(c) 2018 Intel Corporation 35 34 * All rights reserved. 36 35 * 37 36 * Redistribution and use in source and binary forms, with or without ··· 750 749 } __packed; 751 750 752 751 #define IWL_SCAN_REQ_UMAC_SIZE_V8 sizeof(struct iwl_scan_req_umac) 753 - #define IWL_SCAN_REQ_UMAC_SIZE_V7 (sizeof(struct iwl_scan_req_umac) - \ 754 - 4 * sizeof(u8)) 755 - #define IWL_SCAN_REQ_UMAC_SIZE_V6 (sizeof(struct iwl_scan_req_umac) - \ 756 - 2 * sizeof(u8) - sizeof(__le16)) 757 - #define IWL_SCAN_REQ_UMAC_SIZE_V1 (sizeof(struct iwl_scan_req_umac) - \ 758 - 2 * sizeof(__le32) - 2 * sizeof(u8) - \ 759 - sizeof(__le16)) 752 + #define IWL_SCAN_REQ_UMAC_SIZE_V7 48 753 + #define IWL_SCAN_REQ_UMAC_SIZE_V6 44 754 + #define IWL_SCAN_REQ_UMAC_SIZE_V1 36 760 755 761 756 /** 762 757 * struct iwl_umac_scan_abort
+95 -16
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
··· 76 76 #include "iwl-io.h" 77 77 #include "iwl-csr.h" 78 78 #include "fw/acpi.h" 79 + #include "fw/api/nvm-reg.h" 79 80 80 81 /* NVM offsets (in words) definitions */ 81 82 enum nvm_offsets { ··· 147 146 149, 153, 157, 161, 165, 169, 173, 177, 181 148 147 }; 149 148 150 - #define IWL_NUM_CHANNELS ARRAY_SIZE(iwl_nvm_channels) 151 - #define IWL_NUM_CHANNELS_EXT ARRAY_SIZE(iwl_ext_nvm_channels) 149 + #define IWL_NVM_NUM_CHANNELS ARRAY_SIZE(iwl_nvm_channels) 150 + #define IWL_NVM_NUM_CHANNELS_EXT ARRAY_SIZE(iwl_ext_nvm_channels) 152 151 #define NUM_2GHZ_CHANNELS 14 153 152 #define NUM_2GHZ_CHANNELS_EXT 14 154 153 #define FIRST_2GHZ_HT_MINUS 5 ··· 302 301 const u8 *nvm_chan; 303 302 304 303 if (cfg->nvm_type != IWL_NVM_EXT) { 305 - num_of_ch = IWL_NUM_CHANNELS; 304 + num_of_ch = IWL_NVM_NUM_CHANNELS; 306 305 nvm_chan = &iwl_nvm_channels[0]; 307 306 num_2ghz_channels = NUM_2GHZ_CHANNELS; 308 307 } else { 309 - num_of_ch = IWL_NUM_CHANNELS_EXT; 308 + num_of_ch = IWL_NVM_NUM_CHANNELS_EXT; 310 309 nvm_chan = &iwl_ext_nvm_channels[0]; 311 310 num_2ghz_channels = NUM_2GHZ_CHANNELS_EXT; 312 311 } ··· 721 720 if (cfg->nvm_type != IWL_NVM_EXT) 722 721 data = kzalloc(sizeof(*data) + 723 722 sizeof(struct ieee80211_channel) * 724 - IWL_NUM_CHANNELS, 723 + IWL_NVM_NUM_CHANNELS, 725 724 GFP_KERNEL); 726 725 else 727 726 data = kzalloc(sizeof(*data) + 728 727 sizeof(struct ieee80211_channel) * 729 - IWL_NUM_CHANNELS_EXT, 728 + IWL_NVM_NUM_CHANNELS_EXT, 730 729 GFP_KERNEL); 731 730 if (!data) 732 731 return NULL; ··· 843 842 return flags; 844 843 } 845 844 845 + struct regdb_ptrs { 846 + struct ieee80211_wmm_rule *rule; 847 + u32 token; 848 + }; 849 + 846 850 struct ieee80211_regdomain * 847 851 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 848 - int num_of_ch, __le32 *channels, u16 fw_mcc) 852 + int num_of_ch, __le32 *channels, u16 fw_mcc, 853 + u16 geo_info) 849 854 { 850 855 int ch_idx; 851 856 u16 ch_flags; 852 857 u32 reg_rule_flags, prev_reg_rule_flags = 0; 853 858 const u8 *nvm_chan = cfg->nvm_type == IWL_NVM_EXT ? 854 859 iwl_ext_nvm_channels : iwl_nvm_channels; 855 - struct ieee80211_regdomain *regd; 856 - int size_of_regd; 860 + struct ieee80211_regdomain *regd, *copy_rd; 861 + int size_of_regd, regd_to_copy, wmms_to_copy; 862 + int size_of_wmms = 0; 857 863 struct ieee80211_reg_rule *rule; 864 + struct ieee80211_wmm_rule *wmm_rule, *d_wmm, *s_wmm; 865 + struct regdb_ptrs *regdb_ptrs; 858 866 enum nl80211_band band; 859 867 int center_freq, prev_center_freq = 0; 860 - int valid_rules = 0; 868 + int valid_rules = 0, n_wmms = 0; 869 + int i; 861 870 bool new_rule; 862 871 int max_num_ch = cfg->nvm_type == IWL_NVM_EXT ? 863 872 IWL_NVM_NUM_CHANNELS_EXT : IWL_NVM_NUM_CHANNELS; 864 873 865 874 if (WARN_ON_ONCE(num_of_ch > NL80211_MAX_SUPP_REG_RULES)) 866 875 return ERR_PTR(-EINVAL); ··· 886 875 sizeof(struct ieee80211_regdomain) + 887 876 num_of_ch * sizeof(struct ieee80211_reg_rule); 888 877 889 - regd = kzalloc(size_of_regd, GFP_KERNEL); 878 + if (geo_info & GEO_WMM_ETSI_5GHZ_INFO) 879 + size_of_wmms = 880 + num_of_ch * sizeof(struct ieee80211_wmm_rule); 881 + 882 + regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL); 890 883 if (!regd) 891 884 return ERR_PTR(-ENOMEM); 885 + 886 + regdb_ptrs = kcalloc(num_of_ch, sizeof(*regdb_ptrs), GFP_KERNEL); 887 + if (!regdb_ptrs) { 888 + copy_rd = ERR_PTR(-ENOMEM); 889 + goto out; 890 + } 891 +
892 + /* set alpha2 from FW. */ 893 + regd->alpha2[0] = fw_mcc >> 8; 894 + regd->alpha2[1] = fw_mcc & 0xff; 895 + 896 + wmm_rule = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd); 892 897 893 898 for (ch_idx = 0; ch_idx < num_of_ch; ch_idx++) { 894 899 ch_flags = (u16)__le32_to_cpup(channels + ch_idx); ··· 954 927 955 928 iwl_nvm_print_channel_flags(dev, IWL_DL_LAR, 956 929 nvm_chan[ch_idx], ch_flags); 930 + 931 + if (!(geo_info & GEO_WMM_ETSI_5GHZ_INFO) || 932 + band == NL80211_BAND_2GHZ) 933 + continue; 934 + 935 + if (!reg_query_regdb_wmm(regd->alpha2, center_freq, 936 + &regdb_ptrs[n_wmms].token, wmm_rule)) { 937 + /* Add only new rules */ 938 + for (i = 0; i < n_wmms; i++) { 939 + if (regdb_ptrs[i].token == 940 + regdb_ptrs[n_wmms].token) { 941 + rule->wmm_rule = regdb_ptrs[i].rule; 942 + break; 943 + } 944 + } 945 + if (i == n_wmms) { 946 + rule->wmm_rule = wmm_rule; 947 + regdb_ptrs[n_wmms++].rule = wmm_rule; 948 + wmm_rule++; 949 + } 950 + } 957 951 } 958 952 959 953 regd->n_reg_rules = valid_rules; 954 + regd->n_wmm_rules = n_wmms; 960 955 961 - /* set alpha2 from FW. */ 962 - regd->alpha2[0] = fw_mcc >> 8; 963 - regd->alpha2[1] = fw_mcc & 0xff; 956 + /* 957 + * Narrow down regdom for unused regulatory rules to prevent hole 958 + * between reg rules to wmm rules. 959 + */ 960 + regd_to_copy = sizeof(struct ieee80211_regdomain) + 961 + valid_rules * sizeof(struct ieee80211_reg_rule); 964 962 965 - return regd; 963 + wmms_to_copy = sizeof(struct ieee80211_wmm_rule) * n_wmms; 964 + 965 + copy_rd = kzalloc(regd_to_copy + wmms_to_copy, GFP_KERNEL); 966 + if (!copy_rd) { 967 + copy_rd = ERR_PTR(-ENOMEM); 968 + goto out; 969 + } 970 + 971 + memcpy(copy_rd, regd, regd_to_copy); 972 + memcpy((u8 *)copy_rd + regd_to_copy, (u8 *)regd + size_of_regd, 973 + wmms_to_copy); 974 + 975 + d_wmm = (struct ieee80211_wmm_rule *)((u8 *)copy_rd + regd_to_copy); 976 + s_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd); 977 + 978 + for (i = 0; i < regd->n_reg_rules; i++) { 979 + if (!regd->reg_rules[i].wmm_rule) 980 + continue; 981 + 982 + copy_rd->reg_rules[i].wmm_rule = d_wmm + 983 + (regd->reg_rules[i].wmm_rule - s_wmm) / 984 + sizeof(struct ieee80211_wmm_rule); 985 + } 986 + 987 + out: 988 + kfree(regdb_ptrs); 989 + kfree(regd); 990 + return copy_rd; 966 991 } 967 992 IWL_EXPORT_SYMBOL(iwl_parse_nvm_mcc_info);
+4 -2
drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.h
··· 101 101 * 102 102 * This function parses the regulatory channel data received as a 103 103 * MCC_UPDATE_CMD command. It returns a newly allocation regulatory domain, 104 - * to be fed into the regulatory core. An ERR_PTR is returned on error. 104 + * to be fed into the regulatory core. In case the geo_info is set handle 105 + * accordingly. An ERR_PTR is returned on error. 105 106 * If not given to the regulatory core, the user is responsible for freeing 106 107 * the regdomain returned here with kfree. 107 108 */ 108 109 struct ieee80211_regdomain * 109 110 iwl_parse_nvm_mcc_info(struct device *dev, const struct iwl_cfg *cfg, 110 - int num_of_ch, __le32 *channels, u16 fw_mcc); 111 + int num_of_ch, __le32 *channels, u16 fw_mcc, 112 + u16 geo_info); 111 113 112 114 #endif /* __iwl_nvm_parse_h__ */
+2 -1
drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
··· 311 311 regd = iwl_parse_nvm_mcc_info(mvm->trans->dev, mvm->cfg, 312 312 __le32_to_cpu(resp->n_channels), 313 313 resp->channels, 314 - __le16_to_cpu(resp->mcc)); 314 + __le16_to_cpu(resp->mcc), 315 + __le16_to_cpu(resp->geo_info)); 315 316 /* Store the return source id */ 316 317 src_id = resp->source_id; 317 318 kfree(resp);
+1
drivers/net/wireless/mac80211_hwsim.c
··· 3236 3236 GENL_SET_ERR_MSG(info,"MAC is no valid source addr"); 3237 3237 NL_SET_BAD_ATTR(info->extack, 3238 3238 info->attrs[HWSIM_ATTR_PERM_ADDR]); 3239 + kfree(hwname); 3239 3240 return -EINVAL; 3240 3241 } 3241 3242
-15
drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c
··· 158 158 159 159 static u8 rtl_get_hwpg_single_ant_path(struct rtl_priv *rtlpriv) 160 160 { 161 - struct rtl_mod_params *mod_params = rtlpriv->cfg->mod_params; 162 - 163 - /* override ant_num / ant_path */ 164 - if (mod_params->ant_sel) { 165 - rtlpriv->btcoexist.btc_info.ant_num = 166 - (mod_params->ant_sel == 1 ? ANT_X2 : ANT_X1); 167 - 168 - rtlpriv->btcoexist.btc_info.single_ant_path = 169 - (mod_params->ant_sel == 1 ? 0 : 1); 170 - } 171 161 return rtlpriv->btcoexist.btc_info.single_ant_path; 172 162 } 173 163 ··· 168 178 169 179 static u8 rtl_get_hwpg_ant_num(struct rtl_priv *rtlpriv) 170 180 { 171 - struct rtl_mod_params *mod_params = rtlpriv->cfg->mod_params; 172 181 u8 num; 173 182 174 183 if (rtlpriv->btcoexist.btc_info.ant_num == ANT_X2) 175 184 num = 2; 176 185 else 177 186 num = 1; 178 - 179 - /* override ant_num / ant_path */ 180 - if (mod_params->ant_sel) 181 - num = (mod_params->ant_sel == 1 ? ANT_X2 : ANT_X1) + 1; 182 187 183 188 return num; 184 189 }
+7 -4
drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
··· 848 848 return false; 849 849 } 850 850 851 + if (rtlpriv->cfg->ops->get_btc_status()) 852 + rtlpriv->btcoexist.btc_ops->btc_power_on_setting(rtlpriv); 853 + 851 854 bytetmp = rtl_read_byte(rtlpriv, REG_MULTI_FUNC_CTRL); 852 855 rtl_write_byte(rtlpriv, REG_MULTI_FUNC_CTRL, bytetmp | BIT(3)); 853 856 ··· 2699 2696 rtlpriv->btcoexist.btc_info.bt_type = BT_RTL8723B; 2700 2697 rtlpriv->btcoexist.btc_info.ant_num = (value & 0x1); 2701 2698 rtlpriv->btcoexist.btc_info.single_ant_path = 2702 - (value & 0x40); /*0xc3[6]*/ 2699 + (value & 0x40 ? ANT_AUX : ANT_MAIN); /*0xc3[6]*/ 2703 2700 } else { 2704 2701 rtlpriv->btcoexist.btc_info.btcoexist = 0; 2705 2702 rtlpriv->btcoexist.btc_info.bt_type = BT_RTL8723B; 2706 2703 rtlpriv->btcoexist.btc_info.ant_num = ANT_X2; 2707 - rtlpriv->btcoexist.btc_info.single_ant_path = 0; 2704 + rtlpriv->btcoexist.btc_info.single_ant_path = ANT_MAIN; 2708 2705 } 2709 2706 2710 2707 /* override ant_num / ant_path */ 2711 2708 if (mod_params->ant_sel) { 2712 2709 rtlpriv->btcoexist.btc_info.ant_num = 2713 - (mod_params->ant_sel == 1 ? ANT_X2 : ANT_X1); 2710 + (mod_params->ant_sel == 1 ? ANT_X1 : ANT_X2); 2714 2711 2715 2712 rtlpriv->btcoexist.btc_info.single_ant_path = 2716 - (mod_params->ant_sel == 1 ? 0 : 1); 2713 + (mod_params->ant_sel == 1 ? ANT_AUX : ANT_MAIN); 2717 2714 } 2718 2715 } 2719 2716
+5
drivers/net/wireless/realtek/rtlwifi/wifi.h
··· 2823 2823 ANT_X1 = 1, 2824 2824 }; 2825 2825 2826 + enum bt_ant_path { 2827 + ANT_MAIN = 0, 2828 + ANT_AUX = 1, 2829 + }; 2830 + 2826 2831 enum bt_co_type { 2827 2832 BT_2WIRE = 0, 2828 2833 BT_ISSC_3WIRE = 1,
+1 -1
drivers/nvme/host/Kconfig
··· 27 27 28 28 config NVME_RDMA 29 29 tristate "NVM Express over Fabrics RDMA host driver" 30 - depends on INFINIBAND && BLOCK 30 + depends on INFINIBAND && INFINIBAND_ADDR_TRANS && BLOCK 31 31 select NVME_CORE 32 32 select NVME_FABRICS 33 33 select SG_POOL
+9 -26
drivers/nvme/host/core.c
··· 99 99 100 100 static void nvme_ns_remove(struct nvme_ns *ns); 101 101 static int nvme_revalidate_disk(struct gendisk *disk); 102 + static void nvme_put_subsystem(struct nvme_subsystem *subsys); 102 103 103 104 int nvme_reset_ctrl(struct nvme_ctrl *ctrl) 104 105 { ··· 118 117 ret = nvme_reset_ctrl(ctrl); 119 118 if (!ret) { 120 119 flush_work(&ctrl->reset_work); 121 - if (ctrl->state != NVME_CTRL_LIVE) 120 + if (ctrl->state != NVME_CTRL_LIVE && 121 + ctrl->state != NVME_CTRL_ADMIN_ONLY) 122 122 ret = -ENETRESET; 123 123 } 124 124 ··· 352 350 ida_simple_remove(&head->subsys->ns_ida, head->instance); 353 351 list_del_init(&head->entry); 354 352 cleanup_srcu_struct(&head->srcu); 353 + nvme_put_subsystem(head->subsys); 355 354 kfree(head); 356 355 } 357 356 ··· 767 764 ret = PTR_ERR(meta); 768 765 goto out_unmap; 769 766 } 767 + req->cmd_flags |= REQ_INTEGRITY; 770 768 } 771 769 } 772 770 ··· 2864 2860 goto out_cleanup_srcu; 2865 2861 2866 2862 list_add_tail(&head->entry, &ctrl->subsys->nsheads); 2863 + 2864 + kref_get(&ctrl->subsys->ref); 2865 + 2867 2866 return head; 2868 2867 out_cleanup_srcu: 2869 2868 cleanup_srcu_struct(&head->srcu); ··· 3004 2997 if (nvme_init_ns_head(ns, nsid, id)) 3005 2998 goto out_free_id; 3006 2999 nvme_setup_streams_ns(ctrl, ns); 3007 - 3008 - #ifdef CONFIG_NVME_MULTIPATH 3009 - /* 3010 - * If multipathing is enabled we need to always use the subsystem 3011 - * instance number for numbering our devices to avoid conflicts 3012 - * between subsystems that have multiple controllers and thus use 3013 - * the multipath-aware subsystem node and those that have a single 3014 - * controller and use the controller node directly. 3015 - */ 3016 - if (ns->head->disk) { 3017 - sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance, 3018 - ctrl->cntlid, ns->head->instance); 3019 - flags = GENHD_FL_HIDDEN; 3020 - } else { 3021 - sprintf(disk_name, "nvme%dn%d", ctrl->subsys->instance, 3022 - ns->head->instance); 3023 - } 3024 - #else 3025 - /* 3026 - * But without the multipath code enabled, multiple controller per 3027 - * subsystems are visible as devices and thus we cannot use the 3028 - * subsystem instance. 3029 - */ 3030 - sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance); 3031 - #endif 3000 + nvme_set_disk_name(disk_name, ns, ctrl, &flags); 3032 3001 3033 3002 if ((ctrl->quirks & NVME_QUIRK_LIGHTNVM) && id->vs[0] == 0x1) { 3034 3003 if (nvme_nvm_register(ns, disk_name, node)) {
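Two of the core.c hunks form a pair: nvme_init_ns_head() now takes a reference on the subsystem (the kref_get()) and nvme_free_ns_head() drops it (the new nvme_put_subsystem() call), so the subsystem cannot be freed while a namespace head still points at it. A hedged sketch of that lifetime rule with a plain atomic counter (simplified stand-in types, not the nvme structures):

	#include <stdatomic.h>
	#include <stdlib.h>

	struct subsys {
		atomic_int ref;
	};

	static struct subsys *subsys_alloc(void)
	{
		struct subsys *s = malloc(sizeof(*s));

		if (s)
			atomic_init(&s->ref, 1);	/* creator holds one reference */
		return s;
	}

	static void subsys_get(struct subsys *s)
	{
		atomic_fetch_add(&s->ref, 1);
	}

	static void subsys_put(struct subsys *s)
	{
		if (atomic_fetch_sub(&s->ref, 1) == 1)	/* last reference dropped */
			free(s);
	}

	struct ns_head {
		struct subsys *subsys;
	};

	static struct ns_head *head_alloc(struct subsys *s)
	{
		struct ns_head *h = malloc(sizeof(*h));

		if (!h)
			return NULL;
		h->subsys = s;
		subsys_get(s);		/* mirrors the kref_get() in the patch */
		return h;
	}

	static void head_free(struct ns_head *h)
	{
		subsys_put(h->subsys);	/* mirrors the new nvme_put_subsystem() */
		free(h);
	}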
+6
drivers/nvme/host/fabrics.c
··· 668 668 ret = -ENOMEM; 669 669 goto out; 670 670 } 671 + kfree(opts->transport); 671 672 opts->transport = p; 672 673 break; 673 674 case NVMF_OPT_NQN: ··· 677 676 ret = -ENOMEM; 678 677 goto out; 679 678 } 679 + kfree(opts->subsysnqn); 680 680 opts->subsysnqn = p; 681 681 nqnlen = strlen(opts->subsysnqn); 682 682 if (nqnlen >= NVMF_NQN_SIZE) { ··· 700 698 ret = -ENOMEM; 701 699 goto out; 702 700 } 701 + kfree(opts->traddr); 703 702 opts->traddr = p; 704 703 break; 705 704 case NVMF_OPT_TRSVCID: ··· 709 706 ret = -ENOMEM; 710 707 goto out; 711 708 } 709 + kfree(opts->trsvcid); 712 710 opts->trsvcid = p; 713 711 break; 714 712 case NVMF_OPT_QUEUE_SIZE: ··· 796 792 ret = -EINVAL; 797 793 goto out; 798 794 } 795 + nvmf_host_put(opts->host); 799 796 opts->host = nvmf_host_add(p); 800 797 kfree(p); 801 798 if (!opts->host) { ··· 822 817 ret = -ENOMEM; 823 818 goto out; 824 819 } 820 + kfree(opts->host_traddr); 825 821 opts->host_traddr = p; 826 822 break; 827 823 case NVMF_OPT_HOST_ID:
+23 -1
drivers/nvme/host/multipath.c
··· 15 15 #include "nvme.h" 16 16 17 17 static bool multipath = true; 18 - module_param(multipath, bool, 0644); 18 + module_param(multipath, bool, 0444); 19 19 MODULE_PARM_DESC(multipath, 20 20 "turn on native support for multiple controllers per subsystem"); 21 + 22 + /* 23 + * If multipathing is enabled we need to always use the subsystem instance 24 + * number for numbering our devices to avoid conflicts between subsystems that 25 + * have multiple controllers and thus use the multipath-aware subsystem node 26 + * and those that have a single controller and use the controller node 27 + * directly. 28 + */ 29 + void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns, 30 + struct nvme_ctrl *ctrl, int *flags) 31 + { 32 + if (!multipath) { 33 + sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance); 34 + } else if (ns->head->disk) { 35 + sprintf(disk_name, "nvme%dc%dn%d", ctrl->subsys->instance, 36 + ctrl->cntlid, ns->head->instance); 37 + *flags = GENHD_FL_HIDDEN; 38 + } else { 39 + sprintf(disk_name, "nvme%dn%d", ctrl->subsys->instance, 40 + ns->head->instance); 41 + } 42 + } 21 43 22 44 void nvme_failover_req(struct request *req) 23 45 {
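nvme_set_disk_name() centralizes the naming policy that used to be open-coded in core.c: controller-scoped names without multipath, subsystem-scoped names with it, and a distinct per-path node when the head already owns a multipath disk. A standalone sketch of the three cases (flattened parameters, illustrative only):

	#include <stdio.h>

	static void set_disk_name(char *name, int multipath, int has_mp_disk,
				  int ctrl_inst, int subsys_inst, int cntlid,
				  int ns_inst)
	{
		if (!multipath)
			sprintf(name, "nvme%dn%d", ctrl_inst, ns_inst);
		else if (has_mp_disk)		/* per-path node, hidden from userspace */
			sprintf(name, "nvme%dc%dn%d", subsys_inst, cntlid, ns_inst);
		else
			sprintf(name, "nvme%dn%d", subsys_inst, ns_inst);
	}

	int main(void)
	{
		char name[32];

		set_disk_name(name, 1, 1, 0, 0, 1, 1);
		printf("%s\n", name);		/* prints nvme0c1n1 */
		return 0;
	}

In the driver the hidden case additionally sets GENHD_FL_HIDDEN through the *flags out-parameter; the sketch leaves that out.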
+17
drivers/nvme/host/nvme.h
··· 84 84 * Supports the LightNVM command set if indicated in vs[1]. 85 85 */ 86 86 NVME_QUIRK_LIGHTNVM = (1 << 6), 87 + 88 + /* 89 + * Set MEDIUM priority on SQ creation 90 + */ 91 + NVME_QUIRK_MEDIUM_PRIO_SQ = (1 << 7), 87 92 }; 88 93 89 94 /* ··· 441 436 extern const struct block_device_operations nvme_ns_head_ops; 442 437 443 438 #ifdef CONFIG_NVME_MULTIPATH 439 + void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns, 440 + struct nvme_ctrl *ctrl, int *flags); 444 441 void nvme_failover_req(struct request *req); 445 442 bool nvme_req_needs_failover(struct request *req, blk_status_t error); 446 443 void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl); ··· 468 461 } 469 462 470 463 #else 464 + /* 465 + * Without the multipath code enabled, multiple controllers per subsystem are 466 + * visible as devices and thus we cannot use the subsystem instance. 467 + */ 468 + static inline void nvme_set_disk_name(char *disk_name, struct nvme_ns *ns, 469 + struct nvme_ctrl *ctrl, int *flags) 470 + { 471 + sprintf(disk_name, "nvme%dn%d", ctrl->instance, ns->head->instance); 472 + } 473 + 471 474 static inline void nvme_failover_req(struct request *req) 472 475 { 473 476 }
+11 -1
drivers/nvme/host/pci.c
··· 1093 1093 static int adapter_alloc_sq(struct nvme_dev *dev, u16 qid, 1094 1094 struct nvme_queue *nvmeq) 1095 1095 { 1096 + struct nvme_ctrl *ctrl = &dev->ctrl; 1096 1097 struct nvme_command c; 1097 1098 int flags = NVME_QUEUE_PHYS_CONTIG; 1099 + 1100 + /* 1101 + * Some drives have a bug that auto-enables WRRU if MEDIUM isn't 1102 + * set. Since the URGENT priority encoding is zero, it makes all 1103 + * queues URGENT. 1104 + */ 1105 + if (ctrl->quirks & NVME_QUIRK_MEDIUM_PRIO_SQ) 1106 + flags |= NVME_SQ_PRIO_MEDIUM; 1098 1107 1099 1108 /* 1100 1109 * Note: we (ab)use the fact that the prp fields survive if no data ··· 2710 2701 .driver_data = NVME_QUIRK_STRIPE_SIZE | 2711 2702 NVME_QUIRK_DEALLOCATE_ZEROES, }, 2712 2703 { PCI_VDEVICE(INTEL, 0xf1a5), /* Intel 600P/P3100 */ 2713 - .driver_data = NVME_QUIRK_NO_DEEPEST_PS }, 2704 + .driver_data = NVME_QUIRK_NO_DEEPEST_PS | 2705 + NVME_QUIRK_MEDIUM_PRIO_SQ }, 2714 2706 { PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */ 2715 2707 .driver_data = NVME_QUIRK_IDENTIFY_CNS, }, 2716 2708 { PCI_DEVICE(0x1c58, 0x0003), /* HGST adapter */
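The quirk travels the usual route: a bit defined in nvme.h, attached to the matching PCI ID in the device table, and tested once at the point that matters. A small sketch of that bitmask pattern (the flag values here are stand-ins, not the actual NVME_QUIRK_*/NVME_SQ_* encodings):

	enum {
		QUIRK_NO_DEEPEST_PS	= 1 << 0,
		QUIRK_MEDIUM_PRIO_SQ	= 1 << 7,
	};

	#define SQ_PHYS_CONTIG	0x1	/* assumed encodings for the sketch */
	#define SQ_PRIO_MEDIUM	0x2

	static unsigned int sq_flags(unsigned long quirks)
	{
		unsigned int flags = SQ_PHYS_CONTIG;

		/* Work around drives that mishandle the default (zero) priority. */
		if (quirks & QUIRK_MEDIUM_PRIO_SQ)
			flags |= SQ_PRIO_MEDIUM;
		return flags;
	}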
+1 -1
drivers/nvme/target/Kconfig
··· 27 27 28 28 config NVME_TARGET_RDMA 29 29 tristate "NVMe over Fabrics RDMA target support" 30 - depends on INFINIBAND 30 + depends on INFINIBAND && INFINIBAND_ADDR_TRANS 31 31 depends on NVME_TARGET 32 32 select SGL_ALLOC 33 33 help
+6
drivers/nvme/target/loop.c
··· 469 469 nvme_stop_ctrl(&ctrl->ctrl); 470 470 nvme_loop_shutdown_ctrl(ctrl); 471 471 472 + if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) { 473 + /* state change failure should never happen */ 474 + WARN_ON_ONCE(1); 475 + return; 476 + } 477 + 472 478 ret = nvme_loop_configure_admin_queue(ctrl); 473 479 if (ret) 474 480 goto out_disable;
+21 -9
drivers/of/overlay.c
··· 102 102 103 103 static BLOCKING_NOTIFIER_HEAD(overlay_notify_chain); 104 104 105 + /** 106 + * of_overlay_notifier_register() - Register notifier for overlay operations 107 + * @nb: Notifier block to register 108 + * 109 + * Register for notification on overlay operations on device tree nodes. The 110 + * reported actions are defined by @of_reconfig_change. The notifier callback 111 + * furthermore receives a pointer to the affected device tree node. 112 + * 113 + * Note that a notifier callback is not supposed to store pointers to a device 114 + * tree node or its content beyond @OF_OVERLAY_POST_REMOVE corresponding to the 115 + * respective node it received. 116 + */ 105 117 int of_overlay_notifier_register(struct notifier_block *nb) 106 118 { 107 119 return blocking_notifier_chain_register(&overlay_notify_chain, nb); 108 120 } 109 121 EXPORT_SYMBOL_GPL(of_overlay_notifier_register); 110 122 123 + /** 124 + * of_overlay_notifier_unregister() - Unregister notifier for overlay operations 125 + * @nb: Notifier block to unregister 126 + */ 111 127 int of_overlay_notifier_unregister(struct notifier_block *nb) 112 128 { 113 129 return blocking_notifier_chain_unregister(&overlay_notify_chain, nb); ··· 687 671 of_node_put(ovcs->fragments[i].overlay); 688 672 } 689 673 kfree(ovcs->fragments); 690 - 691 674 /* 692 - * TODO 693 - * 694 - * would like to: kfree(ovcs->overlay_tree); 695 - * but can not since drivers may have pointers into this data 696 - * 697 - * would like to: kfree(ovcs->fdt); 698 - * but can not since drivers may have pointers into this data 675 + * There should be no live pointers into ovcs->overlay_tree and 676 + * ovcs->fdt due to the policy that overlay notifiers are not allowed 677 + * to retain pointers into the overlay devicetree. 699 678 */ 700 - 679 + kfree(ovcs->overlay_tree); 680 + kfree(ovcs->fdt); 701 681 kfree(ovcs); 702 682 }
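The new kernel-doc spells out the contract that the second hunk then relies on: because notifier callbacks may not retain pointers into the overlay past OF_OVERLAY_POST_REMOVE, the release path can finally kfree() the overlay tree and FDT. For readers unfamiliar with notifier chains, a compact userspace analog (simplified: no locking, and any nonzero return vetoes the chain):

	struct notifier {
		int (*cb)(unsigned long action, void *data);
		struct notifier *next;
	};

	static struct notifier *chain;

	static void notifier_register(struct notifier *nb)
	{
		nb->next = chain;
		chain = nb;
	}

	static void notifier_unregister(struct notifier *nb)
	{
		struct notifier **p = &chain;

		while (*p && *p != nb)
			p = &(*p)->next;
		if (*p)
			*p = nb->next;
	}

	static int notifier_call_chain(unsigned long action, void *data)
	{
		struct notifier *nb;
		int rc;

		for (nb = chain; nb; nb = nb->next) {
			rc = nb->cb(action, data);
			if (rc)
				return rc;
		}
		return 0;
	}

The crucial convention is that callbacks treat data as valid only for the duration of the call, which is exactly what lets the caller free it afterwards.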
+1 -1
drivers/parisc/ccio-dma.c
··· 1263 1263 * I/O Page Directory, the resource map, and initializing the 1264 1264 * U2/Uturn chip into virtual mode. 1265 1265 */ 1266 - static void 1266 + static void __init 1267 1267 ccio_ioc_init(struct ioc *ioc) 1268 1268 { 1269 1269 int i;
+27 -10
drivers/pci/pci.c
··· 1910 1910 EXPORT_SYMBOL(pci_pme_active); 1911 1911 1912 1912 /** 1913 - * pci_enable_wake - enable PCI device as wakeup event source 1913 + * __pci_enable_wake - enable PCI device as wakeup event source 1914 1914 * @dev: PCI device affected 1915 1915 * @state: PCI state from which device will issue wakeup events 1916 1916 * @enable: True to enable event generation; false to disable ··· 1928 1928 * Error code depending on the platform is returned if both the platform and 1929 1929 * the native mechanism fail to enable the generation of wake-up events 1930 1930 */ 1931 - int pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable) 1931 + static int __pci_enable_wake(struct pci_dev *dev, pci_power_t state, bool enable) 1932 1932 { 1933 1933 int ret = 0; 1934 1934 ··· 1969 1969 1970 1970 return ret; 1971 1971 } 1972 + 1973 + /** 1974 + * pci_enable_wake - change wakeup settings for a PCI device 1975 + * @pci_dev: Target device 1976 + * @state: PCI state from which device will issue wakeup events 1977 + * @enable: Whether or not to enable event generation 1978 + * 1979 + * If @enable is set, check device_may_wakeup() for the device before calling 1980 + * __pci_enable_wake() for it. 1981 + */ 1982 + int pci_enable_wake(struct pci_dev *pci_dev, pci_power_t state, bool enable) 1983 + { 1984 + if (enable && !device_may_wakeup(&pci_dev->dev)) 1985 + return -EINVAL; 1986 + 1987 + return __pci_enable_wake(pci_dev, state, enable); 1988 + } 1972 1989 EXPORT_SYMBOL(pci_enable_wake); 1973 1990 1974 1991 /** ··· 1998 1981 * should not be called twice in a row to enable wake-up due to PCI PM vs ACPI 1999 1982 * ordering constraints. 2000 1983 * 2001 - * This function only returns error code if the device is not capable of 2002 - * generating PME# from both D3_hot and D3_cold, and the platform is unable to 2003 - * enable wake-up power for it. 1984 + * This function only returns error code if the device is not allowed to wake 1985 + * up the system from sleep or it is not capable of generating PME# from both 1986 + * D3_hot and D3_cold and the platform is unable to enable wake-up power for it. 2004 1987 */ 2005 1988 int pci_wake_from_d3(struct pci_dev *dev, bool enable) 2006 1989 { ··· 2131 2114 2132 2115 dev->runtime_d3cold = target_state == PCI_D3cold; 2133 2116 2134 - pci_enable_wake(dev, target_state, pci_dev_run_wake(dev)); 2117 + __pci_enable_wake(dev, target_state, pci_dev_run_wake(dev)); 2135 2118 2136 2119 error = pci_set_power_state(dev, target_state); 2137 2120 ··· 2155 2138 { 2156 2139 struct pci_bus *bus = dev->bus; 2157 2140 2158 - if (device_can_wakeup(&dev->dev)) 2159 - return true; 2160 - 2161 2141 if (!dev->pme_support) 2162 2142 return false; 2163 2143 2164 2144 /* PME-capable in principle, but not from the target power state */ 2165 - if (!pci_pme_capable(dev, pci_target_state(dev, false))) 2145 + if (!pci_pme_capable(dev, pci_target_state(dev, true))) 2166 2146 return false; 2147 + 2148 + if (device_can_wakeup(&dev->dev)) 2149 + return true; 2167 2150 2168 2151 while (bus->parent) { 2169 2152 struct pci_dev *bridge = bus->self;
+12 -4
drivers/pinctrl/intel/pinctrl-cherryview.c
··· 1622 1622 1623 1623 if (!need_valid_mask) { 1624 1624 irq_base = devm_irq_alloc_descs(pctrl->dev, -1, 0, 1625 - chip->ngpio, NUMA_NO_NODE); 1625 + community->npins, NUMA_NO_NODE); 1626 1626 if (irq_base < 0) { 1627 1627 dev_err(pctrl->dev, "Failed to allocate IRQ numbers\n"); 1628 1628 return irq_base; 1629 1629 } 1630 - } else { 1631 - irq_base = 0; 1632 1630 } 1633 1631 1634 - ret = gpiochip_irqchip_add(chip, &chv_gpio_irqchip, irq_base, 1632 + ret = gpiochip_irqchip_add(chip, &chv_gpio_irqchip, 0, 1635 1633 handle_bad_irq, IRQ_TYPE_NONE); 1636 1634 if (ret) { 1637 1635 dev_err(pctrl->dev, "failed to add IRQ chip\n"); 1638 1636 return ret; 1637 + } 1638 + 1639 + if (!need_valid_mask) { 1640 + for (i = 0; i < community->ngpio_ranges; i++) { 1641 + range = &community->gpio_ranges[i]; 1642 + 1643 + irq_domain_associate_many(chip->irq.domain, irq_base, 1644 + range->base, range->npins); 1645 + irq_base += range->npins; 1646 + } 1639 1647 } 1640 1648 1641 1649 gpiochip_set_chained_irqchip(chip, &chv_gpio_irqchip, irq,
+42 -3
drivers/pinctrl/intel/pinctrl-sunrisepoint.c
··· 36 36 .npins = ((e) - (s) + 1), \ 37 37 } 38 38 39 + #define SPTH_GPP(r, s, e, g) \ 40 + { \ 41 + .reg_num = (r), \ 42 + .base = (s), \ 43 + .size = ((e) - (s) + 1), \ 44 + .gpio_base = (g), \ 45 + } 46 + 47 + #define SPTH_COMMUNITY(b, s, e, g) \ 48 + { \ 49 + .barno = (b), \ 50 + .padown_offset = SPT_PAD_OWN, \ 51 + .padcfglock_offset = SPT_PADCFGLOCK, \ 52 + .hostown_offset = SPT_HOSTSW_OWN, \ 53 + .ie_offset = SPT_GPI_IE, \ 54 + .pin_base = (s), \ 55 + .npins = ((e) - (s) + 1), \ 56 + .gpps = (g), \ 57 + .ngpps = ARRAY_SIZE(g), \ 58 + } 59 + 39 60 /* Sunrisepoint-LP */ 40 61 static const struct pinctrl_pin_desc sptlp_pins[] = { 41 62 /* GPP_A */ ··· 552 531 FUNCTION("i2c2", spth_i2c2_groups), 553 532 }; 554 533 534 + static const struct intel_padgroup spth_community0_gpps[] = { 535 + SPTH_GPP(0, 0, 23, 0), /* GPP_A */ 536 + SPTH_GPP(1, 24, 47, 24), /* GPP_B */ 537 + }; 538 + 539 + static const struct intel_padgroup spth_community1_gpps[] = { 540 + SPTH_GPP(0, 48, 71, 48), /* GPP_C */ 541 + SPTH_GPP(1, 72, 95, 72), /* GPP_D */ 542 + SPTH_GPP(2, 96, 108, 96), /* GPP_E */ 543 + SPTH_GPP(3, 109, 132, 120), /* GPP_F */ 544 + SPTH_GPP(4, 133, 156, 144), /* GPP_G */ 545 + SPTH_GPP(5, 157, 180, 168), /* GPP_H */ 546 + }; 547 + 548 + static const struct intel_padgroup spth_community3_gpps[] = { 549 + SPTH_GPP(0, 181, 191, 192), /* GPP_I */ 550 + }; 551 + 555 552 static const struct intel_community spth_communities[] = { 556 - SPT_COMMUNITY(0, 0, 47), 557 - SPT_COMMUNITY(1, 48, 180), 558 - SPT_COMMUNITY(2, 181, 191), 553 + SPTH_COMMUNITY(0, 0, 47, spth_community0_gpps), 554 + SPTH_COMMUNITY(1, 48, 180, spth_community1_gpps), 555 + SPTH_COMMUNITY(2, 181, 191, spth_community3_gpps), 559 556 }; 560 557 561 558 static const struct intel_pinctrl_soc_data spth_soc_data = {
+1 -1
drivers/pinctrl/meson/pinctrl-meson-axg.c
··· 898 898 899 899 static struct meson_bank meson_axg_aobus_banks[] = { 900 900 /* name first last irq pullen pull dir out in */ 901 - BANK("AO", GPIOAO_0, GPIOAO_9, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 901 + BANK("AO", GPIOAO_0, GPIOAO_13, 0, 13, 0, 16, 0, 0, 0, 0, 0, 16, 1, 0), 902 902 }; 903 903 904 904 static struct meson_pmx_bank meson_axg_periphs_pmx_banks[] = {
+1 -1
drivers/platform/x86/Kconfig
··· 154 154 depends on ACPI_VIDEO || ACPI_VIDEO = n 155 155 depends on RFKILL || RFKILL = n 156 156 depends on SERIO_I8042 157 - select DELL_SMBIOS 157 + depends on DELL_SMBIOS 158 158 select POWER_SUPPLY 159 159 select LEDS_CLASS 160 160 select NEW_LEDS
+3 -1
drivers/platform/x86/asus-wireless.c
··· 178 178 { 179 179 struct asus_wireless_data *data = acpi_driver_data(adev); 180 180 181 - if (data->wq) 181 + if (data->wq) { 182 + devm_led_classdev_unregister(&adev->dev, &data->led); 182 183 destroy_workqueue(data->wq); 184 + } 183 185 return 0; 184 186 } 185 187
+2
drivers/remoteproc/qcom_q6v5_pil.c
··· 1083 1083 dev_err(qproc->dev, "unable to resolve mba region\n"); 1084 1084 return ret; 1085 1085 } 1086 + of_node_put(node); 1086 1087 1087 1088 qproc->mba_phys = r.start; 1088 1089 qproc->mba_size = resource_size(&r); ··· 1101 1100 dev_err(qproc->dev, "unable to resolve mpss region\n"); 1102 1101 return ret; 1103 1102 } 1103 + of_node_put(node); 1104 1104 1105 1105 qproc->mpss_phys = qproc->mpss_reloc = r.start; 1106 1106 qproc->mpss_size = resource_size(&r);
+2 -2
drivers/remoteproc/remoteproc_core.c
··· 1163 1163 if (ret) 1164 1164 return ret; 1165 1165 1166 - ret = rproc_stop(rproc, false); 1166 + ret = rproc_stop(rproc, true); 1167 1167 if (ret) 1168 1168 goto unlock_mutex; 1169 1169 ··· 1316 1316 if (!atomic_dec_and_test(&rproc->power)) 1317 1317 goto out; 1318 1318 1319 - ret = rproc_stop(rproc, true); 1319 + ret = rproc_stop(rproc, false); 1320 1320 if (ret) { 1321 1321 atomic_inc(&rproc->power); 1322 1322 goto out;
+2
drivers/rpmsg/rpmsg_char.c
··· 581 581 unregister_chrdev_region(rpmsg_major, RPMSG_DEV_MAX); 582 582 } 583 583 module_exit(rpmsg_chrdev_exit); 584 + 585 + MODULE_ALIAS("rpmsg:rpmsg_chrdev"); 584 586 MODULE_LICENSE("GPL v2");
+1 -1
drivers/sbus/char/oradax.c
··· 3 3 * 4 4 * This program is free software: you can redistribute it and/or modify 5 5 * it under the terms of the GNU General Public License as published by 6 - * the Free Software Foundation, either version 3 of the License, or 6 + * the Free Software Foundation, either version 2 of the License, or 7 7 * (at your option) any later version. 8 8 * 9 9 * This program is distributed in the hope that it will be useful,
+1 -2
drivers/scsi/isci/port_config.c
··· 291 291 * Note: We have not moved the current phy_index so we will actually 292 292 * compare the starting phy with itself. 293 293 * This is expected and required to add the phy to the port. */ 294 - while (phy_index < SCI_MAX_PHYS) { 294 + for (; phy_index < SCI_MAX_PHYS; phy_index++) { 295 295 if ((phy_mask & (1 << phy_index)) == 0) 296 296 continue; 297 297 sci_phy_get_sas_address(&ihost->phys[phy_index], ··· 311 311 &ihost->phys[phy_index]); 312 312 313 313 assigned_phy_mask |= (1 << phy_index); 314 - phy_index++; 315 314 316 315 } 317 316 }
+5 -2
drivers/scsi/storvsc_drv.c
··· 1722 1722 max_targets = STORVSC_MAX_TARGETS; 1723 1723 max_channels = STORVSC_MAX_CHANNELS; 1724 1724 /* 1725 - * On Windows8 and above, we support sub-channels for storage. 1725 + * On Windows8 and above, we support sub-channels for storage 1726 + * on SCSI and FC controllers. 1726 1727 * The number of sub-channels offered is based on the number of 1727 1728 * VCPUs in the guest. 1728 1729 */ 1729 - max_sub_channels = (num_cpus / storvsc_vcpus_per_sub_channel); 1730 + if (!dev_is_ide) 1731 + max_sub_channels = 1732 + (num_cpus - 1) / storvsc_vcpus_per_sub_channel; 1730 1733 } 1731 1734 1732 1735 scsi_driver.can_queue = (max_outstanding_req_per_channel *
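The sub-channel arithmetic is easy to sanity-check by hand; the corrected (num_cpus - 1) / storvsc_vcpus_per_sub_channel shifts every step by one CPU, presumably so the CPU servicing the primary channel is not counted toward sub-channels (the diff itself does not state the rationale). A throwaway table generator:

	#include <stdio.h>

	int main(void)
	{
		int vcpus_per_sub_channel = 4;	/* the driver's module default */
		int num_cpus;

		for (num_cpus = 1; num_cpus <= 9; num_cpus++)
			printf("cpus=%d old=%d new=%d\n", num_cpus,
			       num_cpus / vcpus_per_sub_channel,
			       (num_cpus - 1) / vcpus_per_sub_channel);
		return 0;
	}

At num_cpus = 4 the old formula already offers a sub-channel while the new one does not, and IDE devices now get none at all.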
+1 -1
drivers/staging/media/imx/imx-media-csi.c
··· 1799 1799 priv->dev->of_node = pdata->of_node; 1800 1800 pinctrl = devm_pinctrl_get_select_default(priv->dev); 1801 1801 if (IS_ERR(pinctrl)) { 1802 - ret = PTR_ERR(priv->vdev); 1802 + ret = PTR_ERR(pinctrl); 1803 1803 dev_dbg(priv->dev, 1804 1804 "devm_pinctrl_get_select_default() failed: %d\n", ret); 1805 1805 if (ret != -ENODEV)
+4 -4
drivers/target/target_core_iblock.c
··· 427 427 { 428 428 struct se_device *dev = cmd->se_dev; 429 429 struct scatterlist *sg = &cmd->t_data_sg[0]; 430 - unsigned char *buf, zero = 0x00, *p = &zero; 431 - int rc, ret; 430 + unsigned char *buf, *not_zero; 431 + int ret; 432 432 433 433 buf = kmap(sg_page(sg)) + sg->offset; 434 434 if (!buf) ··· 437 437 * Fall back to block_execute_write_same() slow-path if 438 438 * incoming WRITE_SAME payload does not contain zeros. 439 439 */ 440 - rc = memcmp(buf, p, cmd->data_length); 440 + not_zero = memchr_inv(buf, 0x00, cmd->data_length); 441 441 kunmap(sg_page(sg)); 442 442 443 - if (rc) 443 + if (not_zero) 444 444 return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 445 445 446 446 ret = blkdev_issue_zeroout(bdev,
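The replaced memcmp() compared cmd->data_length bytes of the payload against a single stack byte, reading past the one-byte buffer for any payload longer than one byte; memchr_inv() instead scans the payload itself for a byte that differs from 0x00. The kernel helper is real (and word-at-a-time optimized); a byte-wise userspace equivalent of its semantics:

	#include <stddef.h>

	/* Return a pointer to the first byte that is not 'c',
	 * or NULL if the whole buffer matches. */
	static const void *memchr_inv_sketch(const void *buf, int c, size_t len)
	{
		const unsigned char *p = buf;

		while (len--) {
			if (*p != (unsigned char)c)
				return p;
			p++;
		}
		return NULL;
	}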
+1 -2
drivers/thermal/int340x_thermal/int3403_thermal.c
··· 194 194 return -EFAULT; 195 195 } 196 196 197 + priv->priv = obj; 197 198 obj->max_state = p->package.count - 1; 198 199 obj->cdev = 199 200 thermal_cooling_device_register(acpi_device_bid(priv->adev), 200 201 priv, &int3403_cooling_ops); 201 202 if (IS_ERR(obj->cdev)) 202 203 result = PTR_ERR(obj->cdev); 203 - 204 - priv->priv = obj; 205 204 206 205 kfree(buf.pointer); 207 206 /* TODO: add ACPI notification support */
+11 -3
drivers/thermal/samsung/exynos_tmu.c
··· 185 185 * @regulator: pointer to the TMU regulator structure. 186 186 * @reg_conf: pointer to structure to register with core thermal. 187 187 * @ntrip: number of supported trip points. 188 + * @enabled: current status of TMU device 188 189 * @tmu_initialize: SoC specific TMU initialization method 189 190 * @tmu_control: SoC specific TMU control method 190 191 * @tmu_read: SoC specific TMU temperature read method ··· 206 205 struct regulator *regulator; 207 206 struct thermal_zone_device *tzd; 208 207 unsigned int ntrip; 208 + bool enabled; 209 209 210 210 int (*tmu_initialize)(struct platform_device *pdev); 211 211 void (*tmu_control)(struct platform_device *pdev, bool on); ··· 400 398 mutex_lock(&data->lock); 401 399 clk_enable(data->clk); 402 400 data->tmu_control(pdev, on); 401 + data->enabled = on; 403 402 clk_disable(data->clk); 404 403 mutex_unlock(&data->lock); 405 404 } ··· 892 889 static int exynos_get_temp(void *p, int *temp) 893 890 { 894 891 struct exynos_tmu_data *data = p; 892 + int value, ret = 0; 895 893 896 - if (!data || !data->tmu_read) 894 + if (!data || !data->tmu_read || !data->enabled) 897 895 return -EINVAL; 898 896 899 897 mutex_lock(&data->lock); 900 898 clk_enable(data->clk); 901 899 902 - *temp = code_to_temp(data, data->tmu_read(data)) * MCELSIUS; 900 + value = data->tmu_read(data); 901 + if (value < 0) 902 + ret = value; 903 + else 904 + *temp = code_to_temp(data, value) * MCELSIUS; 903 905 904 906 clk_disable(data->clk); 905 907 mutex_unlock(&data->lock); 906 908 907 - return 0; 909 + return ret; 908 910 } 909 911 910 912 #ifdef CONFIG_THERMAL_EMULATION
+3 -1
drivers/usb/core/config.c
··· 191 191 static const unsigned short high_speed_maxpacket_maxes[4] = { 192 192 [USB_ENDPOINT_XFER_CONTROL] = 64, 193 193 [USB_ENDPOINT_XFER_ISOC] = 1024, 194 - [USB_ENDPOINT_XFER_BULK] = 512, 194 + 195 + /* Bulk should be 512, but some devices use 1024: we will warn below */ 196 + [USB_ENDPOINT_XFER_BULK] = 1024, 195 197 [USB_ENDPOINT_XFER_INT] = 1024, 196 198 }; 197 199 static const unsigned short super_speed_maxpacket_maxes[4] = {
+2
drivers/usb/dwc2/core.h
··· 985 985 986 986 /* DWC OTG HW Release versions */ 987 987 #define DWC2_CORE_REV_2_71a 0x4f54271a 988 + #define DWC2_CORE_REV_2_72a 0x4f54272a 988 989 #define DWC2_CORE_REV_2_80a 0x4f54280a 989 990 #define DWC2_CORE_REV_2_90a 0x4f54290a 990 991 #define DWC2_CORE_REV_2_91a 0x4f54291a ··· 993 992 #define DWC2_CORE_REV_2_94a 0x4f54294a 994 993 #define DWC2_CORE_REV_3_00a 0x4f54300a 995 994 #define DWC2_CORE_REV_3_10a 0x4f54310a 995 + #define DWC2_CORE_REV_4_00a 0x4f54400a 996 996 #define DWC2_FS_IOT_REV_1_00a 0x5531100a 997 997 #define DWC2_HS_IOT_REV_1_00a 0x5532100a 998 998
+21
drivers/usb/dwc2/gadget.c
··· 3928 3928 if (index && !hs_ep->isochronous) 3929 3929 epctrl |= DXEPCTL_SETD0PID; 3930 3930 3931 + /* WA for full speed ISOC IN in DDMA mode. 3932 + * By clearing the NAK status of the EP, the core will send a ZLP 3933 + * in response to the IN token and assert the NAK interrupt relying 3934 + * on TxFIFO status only. 3935 + */ 3936 + 3937 + if (hsotg->gadget.speed == USB_SPEED_FULL && 3938 + hs_ep->isochronous && dir_in) { 3939 + /* The WA applies only to core versions from 2.72a 3940 + * to 4.00a inclusive, and also to FS_IOT_1.00a 3941 + * and HS_IOT_1.00a. 3942 + */ 3943 + u32 gsnpsid = dwc2_readl(hsotg->regs + GSNPSID); 3944 + 3945 + if ((gsnpsid >= DWC2_CORE_REV_2_72a && 3946 + gsnpsid <= DWC2_CORE_REV_4_00a) || 3947 + gsnpsid == DWC2_FS_IOT_REV_1_00a || 3948 + gsnpsid == DWC2_HS_IOT_REV_1_00a) 3949 + epctrl |= DXEPCTL_CNAK; 3950 + } 3951 + 3931 3952 dev_dbg(hsotg->dev, "%s: write DxEPCTL=0x%08x\n", 3932 3953 __func__, epctrl); 3933 3954
+8 -5
drivers/usb/dwc2/hcd.c
··· 358 358 359 359 static int dwc2_vbus_supply_init(struct dwc2_hsotg *hsotg) 360 360 { 361 + int ret; 362 + 361 363 hsotg->vbus_supply = devm_regulator_get_optional(hsotg->dev, "vbus"); 362 - if (IS_ERR(hsotg->vbus_supply)) 363 - return 0; 364 + if (IS_ERR(hsotg->vbus_supply)) { 365 + ret = PTR_ERR(hsotg->vbus_supply); 366 + hsotg->vbus_supply = NULL; 367 + return ret == -ENODEV ? 0 : ret; 368 + } 364 369 365 370 return regulator_enable(hsotg->vbus_supply); 366 371 } ··· 4347 4342 4348 4343 spin_unlock_irqrestore(&hsotg->lock, flags); 4349 4344 4350 - dwc2_vbus_supply_init(hsotg); 4351 - 4352 - return 0; 4345 + return dwc2_vbus_supply_init(hsotg); 4353 4346 } 4354 4347 4355 4348 /*
+3 -1
drivers/usb/dwc2/pci.c
··· 141 141 goto err; 142 142 143 143 glue = devm_kzalloc(dev, sizeof(*glue), GFP_KERNEL); 144 - if (!glue) 144 + if (!glue) { 145 + ret = -ENOMEM; 145 146 goto err; 147 + } 146 148 147 149 ret = platform_device_add(dwc2); 148 150 if (ret) {
+2 -2
drivers/usb/dwc3/gadget.c
··· 166 166 dwc3_ep_inc_trb(&dep->trb_dequeue); 167 167 } 168 168 169 - void dwc3_gadget_del_and_unmap_request(struct dwc3_ep *dep, 169 + static void dwc3_gadget_del_and_unmap_request(struct dwc3_ep *dep, 170 170 struct dwc3_request *req, int status) 171 171 { 172 172 struct dwc3 *dwc = dep->dwc; ··· 1424 1424 dwc->lock); 1425 1425 1426 1426 if (!r->trb) 1427 - goto out1; 1427 + goto out0; 1428 1428 1429 1429 if (r->num_pending_sgs) { 1430 1430 struct dwc3_trb *trb;
+1 -1
drivers/usb/gadget/function/f_phonet.c
··· 221 221 netif_wake_queue(dev); 222 222 } 223 223 224 - static int pn_net_xmit(struct sk_buff *skb, struct net_device *dev) 224 + static netdev_tx_t pn_net_xmit(struct sk_buff *skb, struct net_device *dev) 225 225 { 226 226 struct phonet_port *port = netdev_priv(dev); 227 227 struct f_phonet *fp;
+2 -1
drivers/usb/host/ehci-mem.c
··· 73 73 if (!qh) 74 74 goto done; 75 75 qh->hw = (struct ehci_qh_hw *) 76 - dma_pool_zalloc(ehci->qh_pool, flags, &dma); 76 + dma_pool_alloc(ehci->qh_pool, flags, &dma); 77 77 if (!qh->hw) 78 78 goto fail; 79 + memset(qh->hw, 0, sizeof *qh->hw); 79 80 qh->qh_dma = dma; 80 81 // INIT_LIST_HEAD (&qh->qh_list); 81 82 INIT_LIST_HEAD (&qh->qtd_list);
+4 -2
drivers/usb/host/ehci-sched.c
··· 1287 1287 } else { 1288 1288 alloc_itd: 1289 1289 spin_unlock_irqrestore(&ehci->lock, flags); 1290 - itd = dma_pool_zalloc(ehci->itd_pool, mem_flags, 1290 + itd = dma_pool_alloc(ehci->itd_pool, mem_flags, 1291 1291 &itd_dma); 1292 1292 spin_lock_irqsave(&ehci->lock, flags); 1293 1293 if (!itd) { ··· 1297 1297 } 1298 1298 } 1299 1299 1300 + memset(itd, 0, sizeof(*itd)); 1300 1301 itd->itd_dma = itd_dma; 1301 1302 itd->frame = NO_FRAME; 1302 1303 list_add(&itd->itd_list, &sched->td_list); ··· 2081 2080 } else { 2082 2081 alloc_sitd: 2083 2082 spin_unlock_irqrestore(&ehci->lock, flags); 2084 - sitd = dma_pool_zalloc(ehci->sitd_pool, mem_flags, 2083 + sitd = dma_pool_alloc(ehci->sitd_pool, mem_flags, 2085 2084 &sitd_dma); 2086 2085 spin_lock_irqsave(&ehci->lock, flags); 2087 2086 if (!sitd) { ··· 2091 2090 } 2092 2091 } 2093 2092 2093 + memset(sitd, 0, sizeof(*sitd)); 2094 2094 sitd->sitd_dma = sitd_dma; 2095 2095 sitd->frame = NO_FRAME; 2096 2096 list_add(&sitd->sitd_list, &iso_sched->td_list);
+1
drivers/usb/host/xhci.c
··· 3621 3621 del_timer_sync(&virt_dev->eps[i].stop_cmd_timer); 3622 3622 } 3623 3623 xhci_debugfs_remove_slot(xhci, udev->slot_id); 3624 + virt_dev->udev = NULL; 3624 3625 ret = xhci_disable_slot(xhci, udev->slot_id); 3625 3626 if (ret) 3626 3627 xhci_free_virt_device(xhci, udev->slot_id);
+2 -1
drivers/usb/musb/musb_gadget.c
··· 417 417 req = next_request(musb_ep); 418 418 request = &req->request; 419 419 420 - trace_musb_req_tx(req); 421 420 csr = musb_readw(epio, MUSB_TXCSR); 422 421 musb_dbg(musb, "<== %s, txcsr %04x", musb_ep->end_point.name, csr); 423 422 ··· 454 455 if (request) { 455 456 u8 is_dma = 0; 456 457 bool short_packet = false; 458 + 459 + trace_musb_req_tx(req); 457 460 458 461 if (dma && (csr & MUSB_TXCSR_DMAENAB)) { 459 462 is_dma = 1;
+3 -1
drivers/usb/musb/musb_host.c
··· 990 990 /* set tx_reinit and schedule the next qh */ 991 991 ep->tx_reinit = 1; 992 992 } 993 - musb_start_urb(musb, is_in, next_qh); 993 + 994 + if (next_qh) 995 + musb_start_urb(musb, is_in, next_qh); 994 996 } 995 997 } 996 998
+5
drivers/usb/serial/option.c
··· 233 233 /* These Quectel products use Qualcomm's vendor ID */ 234 234 #define QUECTEL_PRODUCT_UC20 0x9003 235 235 #define QUECTEL_PRODUCT_UC15 0x9090 236 + /* These u-blox products use Qualcomm's vendor ID */ 237 + #define UBLOX_PRODUCT_R410M 0x90b2 236 238 /* These Yuga products use Qualcomm's vendor ID */ 237 239 #define YUGA_PRODUCT_CLM920_NC5 0x9625 238 240 ··· 1067 1065 /* Yuga products use Qualcomm vendor ID */ 1068 1066 { USB_DEVICE(QUALCOMM_VENDOR_ID, YUGA_PRODUCT_CLM920_NC5), 1069 1067 .driver_info = RSVD(1) | RSVD(4) }, 1068 + /* u-blox products using Qualcomm vendor ID */ 1069 + { USB_DEVICE(QUALCOMM_VENDOR_ID, UBLOX_PRODUCT_R410M), 1070 + .driver_info = RSVD(1) | RSVD(3) }, 1070 1071 /* Quectel products using Quectel vendor ID */ 1071 1072 { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21), 1072 1073 .driver_info = RSVD(4) },
+35 -34
drivers/usb/serial/visor.c
··· 335 335 goto exit; 336 336 } 337 337 338 - if (retval == sizeof(*connection_info)) { 339 - connection_info = (struct visor_connection_info *) 340 - transfer_buffer; 341 - 342 - num_ports = le16_to_cpu(connection_info->num_ports); 343 - for (i = 0; i < num_ports; ++i) { 344 - switch ( 345 - connection_info->connections[i].port_function_id) { 346 - case VISOR_FUNCTION_GENERIC: 347 - string = "Generic"; 348 - break; 349 - case VISOR_FUNCTION_DEBUGGER: 350 - string = "Debugger"; 351 - break; 352 - case VISOR_FUNCTION_HOTSYNC: 353 - string = "HotSync"; 354 - break; 355 - case VISOR_FUNCTION_CONSOLE: 356 - string = "Console"; 357 - break; 358 - case VISOR_FUNCTION_REMOTE_FILE_SYS: 359 - string = "Remote File System"; 360 - break; 361 - default: 362 - string = "unknown"; 363 - break; 364 - } 365 - dev_info(dev, "%s: port %d, is for %s use\n", 366 - serial->type->description, 367 - connection_info->connections[i].port, string); 368 - } 338 + if (retval != sizeof(*connection_info)) { 339 + dev_err(dev, "Invalid connection information received from device\n"); 340 + retval = -ENODEV; 341 + goto exit; 369 342 } 370 - /* 371 - * Handle devices that report invalid stuff here. 372 - */ 343 + 344 + connection_info = (struct visor_connection_info *)transfer_buffer; 345 + 346 + num_ports = le16_to_cpu(connection_info->num_ports); 347 + 348 + /* Handle devices that report invalid stuff here. */ 373 349 if (num_ports == 0 || num_ports > 2) { 374 350 dev_warn(dev, "%s: No valid connect info available\n", 375 351 serial->type->description); 376 352 num_ports = 2; 377 353 } 378 354 355 + for (i = 0; i < num_ports; ++i) { 356 + switch (connection_info->connections[i].port_function_id) { 357 + case VISOR_FUNCTION_GENERIC: 358 + string = "Generic"; 359 + break; 360 + case VISOR_FUNCTION_DEBUGGER: 361 + string = "Debugger"; 362 + break; 363 + case VISOR_FUNCTION_HOTSYNC: 364 + string = "HotSync"; 365 + break; 366 + case VISOR_FUNCTION_CONSOLE: 367 + string = "Console"; 368 + break; 369 + case VISOR_FUNCTION_REMOTE_FILE_SYS: 370 + string = "Remote File System"; 371 + break; 372 + default: 373 + string = "unknown"; 374 + break; 375 + } 376 + dev_info(dev, "%s: port %d, is for %s use\n", 377 + serial->type->description, 378 + connection_info->connections[i].port, string); 379 + } 379 380 dev_info(dev, "%s: Number of ports: %d\n", serial->type->description, 380 381 num_ports); 381 382
+1
drivers/usb/typec/tcpm.c
··· 3725 3725 for (i = 0; i < ARRAY_SIZE(port->port_altmode); i++) 3726 3726 typec_unregister_altmode(port->port_altmode[i]); 3727 3727 typec_unregister_port(port->typec_port); 3728 + usb_role_switch_put(port->role_sw); 3728 3729 tcpm_debugfs_exit(port); 3729 3730 destroy_workqueue(port->wq); 3730 3731 }
+39 -8
drivers/usb/typec/tps6598x.c
··· 73 73 struct device *dev; 74 74 struct regmap *regmap; 75 75 struct mutex lock; /* device lock */ 76 + u8 i2c_protocol:1; 76 77 77 78 struct typec_port *port; 78 79 struct typec_partner *partner; ··· 81 80 struct typec_capability typec_cap; 82 81 }; 83 82 84 + static int 85 + tps6598x_block_read(struct tps6598x *tps, u8 reg, void *val, size_t len) 86 + { 87 + u8 data[len + 1]; 88 + int ret; 89 + 90 + if (!tps->i2c_protocol) 91 + return regmap_raw_read(tps->regmap, reg, val, len); 92 + 93 + ret = regmap_raw_read(tps->regmap, reg, data, sizeof(data)); 94 + if (ret) 95 + return ret; 96 + 97 + if (data[0] < len) 98 + return -EIO; 99 + 100 + memcpy(val, &data[1], len); 101 + return 0; 102 + } 103 + 84 104 static inline int tps6598x_read16(struct tps6598x *tps, u8 reg, u16 *val) 85 105 { 86 - return regmap_raw_read(tps->regmap, reg, val, sizeof(u16)); 105 + return tps6598x_block_read(tps, reg, val, sizeof(u16)); 87 106 } 88 107 89 108 static inline int tps6598x_read32(struct tps6598x *tps, u8 reg, u32 *val) 90 109 { 91 - return regmap_raw_read(tps->regmap, reg, val, sizeof(u32)); 110 + return tps6598x_block_read(tps, reg, val, sizeof(u32)); 92 111 } 93 112 94 113 static inline int tps6598x_read64(struct tps6598x *tps, u8 reg, u64 *val) 95 114 { 96 - return regmap_raw_read(tps->regmap, reg, val, sizeof(u64)); 115 + return tps6598x_block_read(tps, reg, val, sizeof(u64)); 97 116 } 98 117 99 118 static inline int tps6598x_write16(struct tps6598x *tps, u8 reg, u16 val) ··· 142 121 struct tps6598x_rx_identity_reg id; 143 122 int ret; 144 123 145 - ret = regmap_raw_read(tps->regmap, TPS_REG_RX_IDENTITY_SOP, 146 - &id, sizeof(id)); 124 + ret = tps6598x_block_read(tps, TPS_REG_RX_IDENTITY_SOP, 125 + &id, sizeof(id)); 147 126 if (ret) 148 127 return ret; ··· 245 224 } while (val); 246 225 247 226 if (out_len) { 248 - ret = regmap_raw_read(tps->regmap, TPS_REG_DATA1, 249 - out_data, out_len); 227 + ret = tps6598x_block_read(tps, TPS_REG_DATA1, 228 + out_data, out_len); 250 229 if (ret) 251 230 return ret; 252 231 val = out_data[0]; 253 232 } else { 254 - ret = regmap_read(tps->regmap, TPS_REG_DATA1, &val); 233 + ret = tps6598x_block_read(tps, TPS_REG_DATA1, &val, sizeof(u8)); 255 234 if (ret) 256 235 return ret; 257 236 } ··· 405 384 return ret; 406 385 if (!vid) 407 386 return -ENODEV; 387 + 388 + /* 389 + * Check whether the adapter can handle the SMBus protocol. If it 390 + * cannot, the driver needs to take care of block reads separately. 391 + * 392 + * FIXME: Testing with I2C_FUNC_I2C. regmap-i2c uses I2C protocol 393 + * unconditionally if the adapter has I2C_FUNC_I2C set. 394 + */ 395 + if (i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) 396 + tps->i2c_protocol = true; 408 397 409 398 ret = tps6598x_read32(tps, TPS_REG_STATUS, &status); 410 399 if (ret < 0)
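With a plain-I2C adapter the TPS6598x register read apparently comes back length-prefixed: byte 0 carries the count the device actually produced and the payload follows, so the helper reads len + 1 bytes and validates the count before copying the payload out. The unwrap step in isolation (illustrative sketch, not the driver's code):

	#include <errno.h>
	#include <string.h>

	/* raw[] holds len + 1 bytes read from the device:
	 * raw[0] = byte count returned, raw[1..] = payload. */
	static int unwrap_block(const unsigned char *raw, void *val, size_t len)
	{
		if (raw[0] < len)
			return -EIO;	/* device returned fewer bytes than requested */
		memcpy(val, &raw[1], len);
		return 0;
	}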
+7
fs/btrfs/extent-tree.c
··· 3142 3142 struct rb_node *node; 3143 3143 int ret = 0; 3144 3144 3145 + spin_lock(&root->fs_info->trans_lock); 3145 3146 cur_trans = root->fs_info->running_transaction; 3147 + if (cur_trans) 3148 + refcount_inc(&cur_trans->use_count); 3149 + spin_unlock(&root->fs_info->trans_lock); 3146 3150 if (!cur_trans) 3147 3151 return 0; 3148 3152 ··· 3155 3151 head = btrfs_find_delayed_ref_head(delayed_refs, bytenr); 3156 3152 if (!head) { 3157 3153 spin_unlock(&delayed_refs->lock); 3154 + btrfs_put_transaction(cur_trans); 3158 3155 return 0; 3159 3156 } 3160 3157 ··· 3172 3167 mutex_lock(&head->mutex); 3173 3168 mutex_unlock(&head->mutex); 3174 3169 btrfs_put_delayed_ref_head(head); 3170 + btrfs_put_transaction(cur_trans); 3175 3171 return -EAGAIN; 3176 3172 } 3177 3173 spin_unlock(&delayed_refs->lock); ··· 3205 3199 } 3206 3200 spin_unlock(&head->lock); 3207 3201 mutex_unlock(&head->mutex); 3202 + btrfs_put_transaction(cur_trans); 3208 3203 return ret; 3209 3204 } 3210 3205
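The btrfs hunk is the classic lookup-then-pin pattern: fs_info->running_transaction is only stable under trans_lock, so the reference count is bumped while the lock is held, and each exit path added by the patch pairs with a btrfs_put_transaction(). A compact pthreads sketch of the same discipline (simplified; the refcount is protected by the lock rather than atomic):

	#include <pthread.h>
	#include <stdlib.h>

	struct txn {
		int refs;			/* protected by lock */
	};

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static struct txn *running;		/* may change under lock */

	static struct txn *txn_grab(void)
	{
		struct txn *t;

		pthread_mutex_lock(&lock);
		t = running;
		if (t)
			t->refs++;		/* pin before dropping the lock */
		pthread_mutex_unlock(&lock);
		return t;
	}

	static void txn_put(struct txn *t)
	{
		int last;

		pthread_mutex_lock(&lock);
		last = (--t->refs == 0);
		pthread_mutex_unlock(&lock);
		if (last)
			free(t);
	}

Once pinned, the object stays valid after the lock is dropped, but each txn_grab() must be matched on every return path, which is what the added btrfs_put_transaction() calls do.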
+1 -1
fs/btrfs/relocation.c
··· 1841 1841 old_bytenr = btrfs_node_blockptr(parent, slot); 1842 1842 blocksize = fs_info->nodesize; 1843 1843 old_ptr_gen = btrfs_node_ptr_generation(parent, slot); 1844 - btrfs_node_key_to_cpu(parent, &key, slot); 1844 + btrfs_node_key_to_cpu(parent, &first_key, slot); 1845 1845 1846 1846 if (level <= max_level) { 1847 1847 eb = path->nodes[level];
+4
fs/btrfs/send.c
··· 5236 5236 len = btrfs_file_extent_num_bytes(path->nodes[0], ei); 5237 5237 } 5238 5238 5239 + if (offset >= sctx->cur_inode_size) { 5240 + ret = 0; 5241 + goto out; 5242 + } 5239 5243 if (offset + len > sctx->cur_inode_size) 5240 5244 len = sctx->cur_inode_size - offset; 5241 5245 if (len == 0) {
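The send.c check matters because cur_inode_size is unsigned: with offset past EOF, the old len = sctx->cur_inode_size - offset would wrap to a huge value instead of going negative. The clamp in isolation:

	#include <stdint.h>

	/* Clamp [offset, offset+len) to size; 0 means nothing to do.
	 * Without the first test, size - offset underflows when
	 * offset >= size. */
	static uint64_t clamp_to_size(uint64_t offset, uint64_t len, uint64_t size)
	{
		if (offset >= size)
			return 0;
		if (offset + len > size)
			len = size - offset;
		return len;
	}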
+122 -83
fs/ceph/file.c
··· 70 70 */ 71 71 72 72 /* 73 - * Calculate the length sum of direct io vectors that can 74 - * be combined into one page vector. 73 + * How many pages to get in one call to iov_iter_get_pages(). This 74 + * determines the size of the on-stack array used as a buffer. 75 75 */ 76 - static size_t dio_get_pagev_size(const struct iov_iter *it) 77 - { 78 - const struct iovec *iov = it->iov; 79 - const struct iovec *iovend = iov + it->nr_segs; 80 - size_t size; 76 + #define ITER_GET_BVECS_PAGES 64 81 77 82 - size = iov->iov_len - it->iov_offset; 83 - /* 84 - * An iov can be page vectored when both the current tail 85 - * and the next base are page aligned. 86 - */ 87 - while (PAGE_ALIGNED((iov->iov_base + iov->iov_len)) && 88 - (++iov < iovend && PAGE_ALIGNED((iov->iov_base)))) { 89 - size += iov->iov_len; 90 - } 91 - dout("dio_get_pagevlen len = %zu\n", size); 92 - return size; 78 + static ssize_t __iter_get_bvecs(struct iov_iter *iter, size_t maxsize, 79 + struct bio_vec *bvecs) 80 + { 81 + size_t size = 0; 82 + int bvec_idx = 0; 83 + 84 + if (maxsize > iov_iter_count(iter)) 85 + maxsize = iov_iter_count(iter); 86 + 87 + while (size < maxsize) { 88 + struct page *pages[ITER_GET_BVECS_PAGES]; 89 + ssize_t bytes; 90 + size_t start; 91 + int idx = 0; 92 + 93 + bytes = iov_iter_get_pages(iter, pages, maxsize - size, 94 + ITER_GET_BVECS_PAGES, &start); 95 + if (bytes < 0) 96 + return size ?: bytes; 97 + 98 + iov_iter_advance(iter, bytes); 99 + size += bytes; 100 + 101 + for ( ; bytes; idx++, bvec_idx++) { 102 + struct bio_vec bv = { 103 + .bv_page = pages[idx], 104 + .bv_len = min_t(int, bytes, PAGE_SIZE - start), 105 + .bv_offset = start, 106 + }; 107 + 108 + bvecs[bvec_idx] = bv; 109 + bytes -= bv.bv_len; 110 + start = 0; 111 + } 112 + } 113 + 114 + return size; 93 115 } 94 116 95 117 /* 96 - * Allocate a page vector based on (@it, @nbytes). 97 - * The return value is the tuple describing a page vector, 98 - * that is (@pages, @page_align, @num_pages). 118 + * iov_iter_get_pages() only considers one iov_iter segment, no matter 119 + * what maxsize or maxpages are given. For ITER_BVEC that is a single 120 + * page. 121 + * 122 + * Attempt to get up to @maxsize bytes worth of pages from @iter. 123 + * Return the number of bytes in the created bio_vec array, or an error. 99 124 */ 100 - static struct page ** 101 - dio_get_pages_alloc(const struct iov_iter *it, size_t nbytes, 102 - size_t *page_align, int *num_pages) 125 + static ssize_t iter_get_bvecs_alloc(struct iov_iter *iter, size_t maxsize, 126 + struct bio_vec **bvecs, int *num_bvecs) 103 127 { 104 - struct iov_iter tmp_it = *it; 105 - size_t align; 106 - struct page **pages; 107 - int ret = 0, idx, npages; 128 + struct bio_vec *bv; 129 + size_t orig_count = iov_iter_count(iter); 130 + ssize_t bytes; 131 + int npages; 108 132 109 - align = (unsigned long)(it->iov->iov_base + it->iov_offset) & 110 - (PAGE_SIZE - 1); 111 - npages = calc_pages_for(align, nbytes); 112 - pages = kvmalloc(sizeof(*pages) * npages, GFP_KERNEL); 113 - if (!pages) 114 - return ERR_PTR(-ENOMEM); 133 + iov_iter_truncate(iter, maxsize); 134 + npages = iov_iter_npages(iter, INT_MAX); 135 + iov_iter_reexpand(iter, orig_count); 115 136 116 - for (idx = 0; idx < npages; ) { 117 - size_t start; 118 - ret = iov_iter_get_pages(&tmp_it, pages + idx, nbytes, 119 - npages - idx, &start); 120 - if (ret < 0) 121 - goto fail; 137 + /* 138 + * __iter_get_bvecs() may populate only part of the array -- zero it 139 + * out.
140 + */ 141 + bv = kvmalloc_array(npages, sizeof(*bv), GFP_KERNEL | __GFP_ZERO); 142 + if (!bv) 143 + return -ENOMEM; 122 144 123 - iov_iter_advance(&tmp_it, ret); 124 - nbytes -= ret; 125 - idx += (ret + start + PAGE_SIZE - 1) / PAGE_SIZE; 145 + bytes = __iter_get_bvecs(iter, maxsize, bv); 146 + if (bytes < 0) { 147 + /* 148 + * No pages were pinned -- just free the array. 149 + */ 150 + kvfree(bv); 151 + return bytes; 126 152 } 127 153 128 - BUG_ON(nbytes != 0); 129 - *num_pages = npages; 130 - *page_align = align; 131 - dout("dio_get_pages_alloc: got %d pages align %zu\n", npages, align); 132 - return pages; 133 - fail: 134 - ceph_put_page_vector(pages, idx, false); 135 - return ERR_PTR(ret); 154 + *bvecs = bv; 155 + *num_bvecs = npages; 156 + return bytes; 157 + } 158 + 159 + static void put_bvecs(struct bio_vec *bvecs, int num_bvecs, bool should_dirty) 160 + { 161 + int i; 162 + 163 + for (i = 0; i < num_bvecs; i++) { 164 + if (bvecs[i].bv_page) { 165 + if (should_dirty) 166 + set_page_dirty_lock(bvecs[i].bv_page); 167 + put_page(bvecs[i].bv_page); 168 + } 169 + } 170 + kvfree(bvecs); 136 171 } 137 172 138 173 /* ··· 781 746 struct inode *inode = req->r_inode; 782 747 struct ceph_aio_request *aio_req = req->r_priv; 783 748 struct ceph_osd_data *osd_data = osd_req_op_extent_osd_data(req, 0); 784 - int num_pages = calc_pages_for((u64)osd_data->alignment, 785 - osd_data->length); 786 749 787 - dout("ceph_aio_complete_req %p rc %d bytes %llu\n", 788 - inode, rc, osd_data->length); 750 + BUG_ON(osd_data->type != CEPH_OSD_DATA_TYPE_BVECS); 751 + BUG_ON(!osd_data->num_bvecs); 752 + 753 + dout("ceph_aio_complete_req %p rc %d bytes %u\n", 754 + inode, rc, osd_data->bvec_pos.iter.bi_size); 789 755 790 756 if (rc == -EOLDSNAPC) { 791 757 struct ceph_aio_work *aio_work; ··· 804 768 } else if (!aio_req->write) { 805 769 if (rc == -ENOENT) 806 770 rc = 0; 807 - if (rc >= 0 && osd_data->length > rc) { 808 - int zoff = osd_data->alignment + rc; 809 - int zlen = osd_data->length - rc; 771 + if (rc >= 0 && osd_data->bvec_pos.iter.bi_size > rc) { 772 + struct iov_iter i; 773 + int zlen = osd_data->bvec_pos.iter.bi_size - rc; 774 + 810 775 /* 811 776 * If read is satisfied by single OSD request, 812 777 * it can pass EOF.
Otherwise read is within ··· 822 785 aio_req->total_len = rc + zlen; 823 786 } 824 787 825 - if (zlen > 0) 826 - ceph_zero_page_vector_range(zoff, zlen, 827 - osd_data->pages); 788 + iov_iter_bvec(&i, ITER_BVEC, osd_data->bvec_pos.bvecs, 789 + osd_data->num_bvecs, 790 + osd_data->bvec_pos.iter.bi_size); 791 + iov_iter_advance(&i, rc); 792 + iov_iter_zero(zlen, &i); 828 793 } 829 794 } 830 795 831 - ceph_put_page_vector(osd_data->pages, num_pages, aio_req->should_dirty); 796 + put_bvecs(osd_data->bvec_pos.bvecs, osd_data->num_bvecs, 797 + aio_req->should_dirty); 832 798 ceph_osdc_put_request(req); 833 799 834 800 if (rc < 0) ··· 919 879 struct ceph_fs_client *fsc = ceph_inode_to_client(inode); 920 880 struct ceph_vino vino; 921 881 struct ceph_osd_request *req; 922 - struct page **pages; 882 + struct bio_vec *bvecs; 923 883 struct ceph_aio_request *aio_req = NULL; 924 884 int num_pages = 0; 925 885 int flags; ··· 954 914 } 955 915 956 916 while (iov_iter_count(iter) > 0) { 957 - u64 size = dio_get_pagev_size(iter); 958 - size_t start = 0; 917 + u64 size = iov_iter_count(iter); 959 918 ssize_t len; 919 + 920 + if (write) 921 + size = min_t(u64, size, fsc->mount_options->wsize); 922 + else 923 + size = min_t(u64, size, fsc->mount_options->rsize); 960 924 961 925 vino = ceph_vino(inode); 962 926 req = ceph_osdc_new_request(&fsc->client->osdc, &ci->i_layout, ··· 977 933 break; 978 934 } 979 935 980 - if (write) 981 - size = min_t(u64, size, fsc->mount_options->wsize); 982 - else 983 - size = min_t(u64, size, fsc->mount_options->rsize); 984 - 985 - len = size; 986 - pages = dio_get_pages_alloc(iter, len, &start, &num_pages); 987 - if (IS_ERR(pages)) { 936 + len = iter_get_bvecs_alloc(iter, size, &bvecs, &num_pages); 937 + if (len < 0) { 988 938 ceph_osdc_put_request(req); 989 - ret = PTR_ERR(pages); 939 + ret = len; 990 940 break; 991 941 } 942 + if (len != size) 943 + osd_req_op_extent_update(req, 0, len); 992 944 993 945 /* 994 946 * To simplify error handling, allow AIO when IO within i_size ··· 1017 977 req->r_mtime = mtime; 1018 978 } 1019 979 1020 - osd_req_op_extent_osd_data_pages(req, 0, pages, len, start, 1021 - false, false); 980 + osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len); 1022 981 1023 982 if (aio_req) { 1024 983 aio_req->total_len += len; ··· 1030 991 list_add_tail(&req->r_unsafe_item, &aio_req->osd_reqs); 1031 992 1032 993 pos += len; 1033 - iov_iter_advance(iter, len); 1034 994 continue; 1035 995 } 1036 996 ··· 1042 1004 if (ret == -ENOENT) 1043 1005 ret = 0; 1044 1006 if (ret >= 0 && ret < len && pos + ret < size) { 1007 + struct iov_iter i; 1045 1008 int zlen = min_t(size_t, len - ret, 1046 1009 size - pos - ret); 1047 - ceph_zero_page_vector_range(start + ret, zlen, 1048 - pages); 1010 + 1011 + iov_iter_bvec(&i, ITER_BVEC, bvecs, num_pages, 1012 + len); 1013 + iov_iter_advance(&i, ret); 1014 + iov_iter_zero(zlen, &i); 1049 1015 ret += zlen; 1050 1016 } 1051 1017 if (ret >= 0) 1052 1018 len = ret; 1053 1019 } 1054 1020 1055 - ceph_put_page_vector(pages, num_pages, should_dirty); 1056 - 1021 + put_bvecs(bvecs, num_pages, should_dirty); 1057 1022 ceph_osdc_put_request(req); 1058 1023 if (ret < 0) 1059 1024 break; 1060 1025 1061 1026 pos += len; 1062 - iov_iter_advance(iter, len); 1063 - 1064 1027 if (!write && pos >= size) 1065 1028 break; 1066 1029
+1 -1
fs/cifs/Kconfig
··· 197 197 198 198 config CIFS_SMB_DIRECT 199 199 bool "SMB Direct support (Experimental)" 200 - depends on CIFS=m && INFINIBAND || CIFS=y && INFINIBAND=y 200 + depends on CIFS=m && INFINIBAND && INFINIBAND_ADDR_TRANS || CIFS=y && INFINIBAND=y && INFINIBAND_ADDR_TRANS=y 201 201 help 202 202 Enables SMB Direct experimental support for SMB 3.0, 3.02 and 3.1.1. 203 203 SMB Direct allows transferring SMB packets over RDMA. If unsure,
+13
fs/cifs/cifsfs.c
··· 1047 1047 return rc; 1048 1048 } 1049 1049 1050 + /* 1051 + * Directory operations under CIFS/SMB2/SMB3 are synchronous, so fsync() 1052 + * is a dummy operation. 1053 + */ 1054 + static int cifs_dir_fsync(struct file *file, loff_t start, loff_t end, int datasync) 1055 + { 1056 + cifs_dbg(FYI, "Sync directory - name: %pD datasync: 0x%x\n", 1057 + file, datasync); 1058 + 1059 + return 0; 1060 + } 1061 + 1050 1062 static ssize_t cifs_copy_file_range(struct file *src_file, loff_t off, 1051 1063 struct file *dst_file, loff_t destoff, 1052 1064 size_t len, unsigned int flags) ··· 1193 1181 .copy_file_range = cifs_copy_file_range, 1194 1182 .clone_file_range = cifs_clone_file_range, 1195 1183 .llseek = generic_file_llseek, 1184 + .fsync = cifs_dir_fsync, 1196 1185 }; 1197 1186 1198 1187 static void
-8
fs/cifs/connect.c
··· 1977 1977 goto cifs_parse_mount_err; 1978 1978 } 1979 1979 1980 - #ifdef CONFIG_CIFS_SMB_DIRECT 1981 - if (vol->rdma && vol->sign) { 1982 - cifs_dbg(VFS, "Currently SMB direct doesn't support signing." 1983 - " This is being fixed\n"); 1984 - goto cifs_parse_mount_err; 1985 - } 1986 - #endif 1987 - 1988 1980 #ifndef CONFIG_KEYS 1989 1981 /* Muliuser mounts require CONFIG_KEYS support */ 1990 1982 if (vol->multiuser) {
+6
fs/cifs/smb2ops.c
··· 589 589 590 590 SMB2_close(xid, tcon, fid.persistent_fid, fid.volatile_fid); 591 591 592 + /* 593 + * If ea_name is NULL (listxattr) and there are no EAs, return 0 as it's 594 + * not an error. Otherwise, the specified ea_name was not found. 595 + */ 592 596 if (!rc) 593 597 rc = move_smb2_ea_to_cifs(ea_data, buf_size, smb2_data, 594 598 SMB2_MAX_EA_BUF, ea_name); 599 + else if (!ea_name && rc == -ENODATA) 600 + rc = 0; 595 601 596 602 kfree(smb2_data); 597 603 return rc;
+38 -35
fs/cifs/smb2pdu.c
··· 730 730 731 731 int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon) 732 732 { 733 - int rc = 0; 734 - struct validate_negotiate_info_req vneg_inbuf; 733 + int rc; 734 + struct validate_negotiate_info_req *pneg_inbuf; 735 735 struct validate_negotiate_info_rsp *pneg_rsp = NULL; 736 736 u32 rsplen; 737 737 u32 inbuflen; /* max of 4 dialects */ 738 738 739 739 cifs_dbg(FYI, "validate negotiate\n"); 740 740 741 - #ifdef CONFIG_CIFS_SMB_DIRECT 742 - if (tcon->ses->server->rdma) 743 - return 0; 744 - #endif 745 740 746 741 /* In SMB3.11 preauth integrity supersedes validate negotiate */ 747 742 if (tcon->ses->server->dialect == SMB311_PROT_ID) ··· 760 765 if (tcon->ses->session_flags & SMB2_SESSION_FLAG_IS_NULL) 761 766 cifs_dbg(VFS, "Unexpected null user (anonymous) auth flag sent by server\n"); 762 767 763 - vneg_inbuf.Capabilities = 768 + pneg_inbuf = kmalloc(sizeof(*pneg_inbuf), GFP_NOFS); 769 + if (!pneg_inbuf) 770 + return -ENOMEM; 771 + 772 + pneg_inbuf->Capabilities = 764 773 cpu_to_le32(tcon->ses->server->vals->req_capabilities); 765 774 memcpy(pneg_inbuf->Guid, tcon->ses->server->client_guid, 766 775 SMB2_CLIENT_GUID_SIZE); 767 776 768 777 if (tcon->ses->sign) 769 778 pneg_inbuf->SecurityMode = 770 779 cpu_to_le16(SMB2_NEGOTIATE_SIGNING_REQUIRED); 771 780 else if (global_secflags & CIFSSEC_MAY_SIGN) 772 781 pneg_inbuf->SecurityMode = 773 782 cpu_to_le16(SMB2_NEGOTIATE_SIGNING_ENABLED); 774 783 else 775 784 pneg_inbuf->SecurityMode = 0; 776 785 777 786 778 787 if (strcmp(tcon->ses->server->vals->version_string, 779 788 SMB3ANY_VERSION_STRING) == 0) { 780 - vneg_inbuf.Dialects[0] = cpu_to_le16(SMB30_PROT_ID); 781 - vneg_inbuf.Dialects[1] = cpu_to_le16(SMB302_PROT_ID); 782 - vneg_inbuf.DialectCount = cpu_to_le16(2); 789 + pneg_inbuf->Dialects[0] = cpu_to_le16(SMB30_PROT_ID); 790 + pneg_inbuf->Dialects[1] = cpu_to_le16(SMB302_PROT_ID); 791 + pneg_inbuf->DialectCount = cpu_to_le16(2); 783 792 /* structure is big enough for 3 dialects, sending only 2 */ 784 - inbuflen = sizeof(struct validate_negotiate_info_req) - 2; 793 + inbuflen = sizeof(*pneg_inbuf) - 794 + sizeof(pneg_inbuf->Dialects[0]); 785 795 } else if (strcmp(tcon->ses->server->vals->version_string, 786 796 SMBDEFAULT_VERSION_STRING) == 0) { 787 - vneg_inbuf.Dialects[0] = cpu_to_le16(SMB21_PROT_ID); 788 - vneg_inbuf.Dialects[1] = cpu_to_le16(SMB30_PROT_ID); 789 - vneg_inbuf.Dialects[2] = cpu_to_le16(SMB302_PROT_ID); 790 - vneg_inbuf.DialectCount = cpu_to_le16(3); 797 + pneg_inbuf->Dialects[0] = cpu_to_le16(SMB21_PROT_ID); 798 + pneg_inbuf->Dialects[1] = cpu_to_le16(SMB30_PROT_ID); 799 + pneg_inbuf->Dialects[2] = cpu_to_le16(SMB302_PROT_ID); 800 + pneg_inbuf->DialectCount = cpu_to_le16(3); 791 801 /* structure is big enough for 3 dialects */ 792 - inbuflen = sizeof(struct validate_negotiate_info_req); 802 + inbuflen = sizeof(*pneg_inbuf); 793 803 } else { 794 804 /* otherwise specific dialect was requested */ 795 - vneg_inbuf.Dialects[0] = 805 + pneg_inbuf->Dialects[0] = 796 806 cpu_to_le16(tcon->ses->server->vals->protocol_id); 797 - vneg_inbuf.DialectCount = cpu_to_le16(1); 807 + pneg_inbuf->DialectCount = cpu_to_le16(1); 798 808 /* structure is big enough for 3 dialects, sending only 1 */ 799 - inbuflen = sizeof(struct validate_negotiate_info_req) - 4; 809 + inbuflen = sizeof(*pneg_inbuf) - 810 + sizeof(pneg_inbuf->Dialects[0]) * 2; 800 811 } 801 812 802
813 rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID, 803 814 FSCTL_VALIDATE_NEGOTIATE_INFO, true /* is_fsctl */, 804 - (char *)&vneg_inbuf, sizeof(struct validate_negotiate_info_req), 805 - (char **)&pneg_rsp, &rsplen); 815 + (char *)pneg_inbuf, inbuflen, (char **)&pneg_rsp, &rsplen); 806 816 807 817 if (rc != 0) { 808 818 cifs_dbg(VFS, "validate protocol negotiate failed: %d\n", rc); 809 - return -EIO; 819 + rc = -EIO; 820 + goto out_free_inbuf; 810 821 } 811 822 812 - if (rsplen != sizeof(struct validate_negotiate_info_rsp)) { 823 + rc = -EIO; 824 + if (rsplen != sizeof(*pneg_rsp)) { 813 825 cifs_dbg(VFS, "invalid protocol negotiate response size: %d\n", 814 826 rsplen); 815 827 816 828 /* relax check since Mac returns max bufsize allowed on ioctl */ 817 - if ((rsplen > CIFSMaxBufSize) 818 - || (rsplen < sizeof(struct validate_negotiate_info_rsp))) 819 - goto err_rsp_free; 829 + if (rsplen > CIFSMaxBufSize || rsplen < sizeof(*pneg_rsp)) 830 + goto out_free_rsp; 820 831 } 821 832 822 833 /* check validate negotiate info response matches what we got earlier */ ··· 839 838 goto vneg_out; 840 839 841 840 /* validate negotiate successful */ 841 + rc = 0; 842 842 cifs_dbg(FYI, "validate negotiate info successful\n"); 843 - kfree(pneg_rsp); 844 - return 0; 843 + goto out_free_rsp; 845 844 846 845 vneg_out: 847 846 cifs_dbg(VFS, "protocol revalidation - security settings mismatch\n"); 848 - err_rsp_free: 847 + out_free_rsp: 849 848 kfree(pneg_rsp); 850 - return -EIO; 849 + out_free_inbuf: 850 + kfree(pneg_inbuf); 851 + return rc; 851 852 } 852 853 853 854 enum securityEnum
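Besides dropping the SMB Direct opt-out, the smb2pdu.c rework moves the request structure off the stack (validate_negotiate_info_req is too large to sit comfortably in a kernel stack frame) and funnels every exit through the out_free_rsp/out_free_inbuf labels so both buffers are always released. A skeleton of that allocate-then-unwind shape, with hypothetical stand-in types:

	#include <errno.h>
	#include <stdlib.h>

	struct vneg_req { unsigned short dialects[3]; };
	struct vneg_rsp { unsigned short dialect; };

	static int send_ioctl(struct vneg_req *in, struct vneg_rsp **out)
	{
		(void)in;
		*out = calloc(1, sizeof(**out));	/* transport allocates the reply */
		if (!*out)
			return -EIO;
		(*out)->dialect = 0x0302;	/* pretend the server echoed SMB 3.0.2 */
		return 0;
	}

	static int validate_negotiate(void)
	{
		struct vneg_req *in;
		struct vneg_rsp *rsp = NULL;
		int rc;

		in = malloc(sizeof(*in));	/* was a large on-stack struct */
		if (!in)
			return -ENOMEM;

		rc = send_ioctl(in, &rsp);
		if (rc)
			goto out_free_inbuf;	/* reply was never allocated */

		rc = -EIO;
		if (rsp->dialect == 0)		/* stand-in validation failure */
			goto out_free_rsp;

		rc = 0;				/* validation successful */
	out_free_rsp:
		free(rsp);
	out_free_inbuf:
		free(in);
		return rc;
	}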
+1 -1
fs/fs-writeback.c
··· 1961 1961 } 1962 1962 1963 1963 if (!list_empty(&wb->work_list)) 1964 - mod_delayed_work(bdi_wq, &wb->dwork, 0); 1964 + wb_wakeup(wb); 1965 1965 else if (wb_has_dirty_io(wb) && dirty_writeback_interval) 1966 1966 wb_wakeup_delayed(wb); 1967 1967
+12 -2
fs/ocfs2/refcounttree.c
··· 4250 4250 static int ocfs2_reflink(struct dentry *old_dentry, struct inode *dir, 4251 4251 struct dentry *new_dentry, bool preserve) 4252 4252 { 4253 - int error; 4253 + int error, had_lock; 4254 4254 struct inode *inode = d_inode(old_dentry); 4255 4255 struct buffer_head *old_bh = NULL; 4256 4256 struct inode *new_orphan_inode = NULL; 4257 + struct ocfs2_lock_holder oh; 4257 4258 4258 4259 if (!ocfs2_refcount_tree(OCFS2_SB(inode->i_sb))) 4259 4260 return -EOPNOTSUPP; ··· 4296 4295 goto out; 4297 4296 } 4298 4297 4298 + had_lock = ocfs2_inode_lock_tracker(new_orphan_inode, NULL, 1, 4299 + &oh); 4300 + if (had_lock < 0) { 4301 + error = had_lock; 4302 + mlog_errno(error); 4303 + goto out; 4304 + } 4305 + 4299 4306 /* If the security isn't preserved, we need to re-initialize them. */ 4300 4307 if (!preserve) { 4301 4308 error = ocfs2_init_security_and_acl(dir, new_orphan_inode, ··· 4311 4302 if (error) 4312 4303 mlog_errno(error); 4313 4304 } 4314 - out: 4315 4305 if (!error) { 4316 4306 error = ocfs2_mv_orphaned_inode_to_new(dir, new_orphan_inode, 4317 4307 new_dentry); 4318 4308 if (error) 4319 4309 mlog_errno(error); 4320 4310 } 4311 + ocfs2_inode_unlock_tracker(new_orphan_inode, 1, &oh, had_lock); 4321 4312 4313 + out: 4322 4314 if (new_orphan_inode) { 4323 4315 /* 4324 4316 * We need to open_unlock the inode no matter whether we
+16 -7
fs/proc/kcore.c
··· 209 209 { 210 210 struct list_head *head = (struct list_head *)arg; 211 211 struct kcore_list *ent; 212 + struct page *p; 213 + 214 + if (!pfn_valid(pfn)) 215 + return 1; 216 + 217 + p = pfn_to_page(pfn); 218 + if (!memmap_valid_within(pfn, p, page_zone(p))) 219 + return 1; 212 220 213 221 ent = kmalloc(sizeof(*ent), GFP_KERNEL); 214 222 if (!ent) 215 223 return -ENOMEM; 216 - ent->addr = (unsigned long)__va((pfn << PAGE_SHIFT)); 224 + ent->addr = (unsigned long)page_to_virt(p); 217 225 ent->size = nr_pages << PAGE_SHIFT; 218 226 219 - /* Sanity check: Can happen in 32bit arch...maybe */ 220 - if (ent->addr < (unsigned long) __va(0)) 227 + if (!virt_addr_valid(ent->addr)) 221 228 goto free_out; 222 229 223 230 /* cut not-mapped area. ....from ppc-32 code. */ 224 231 if (ULONG_MAX - ent->addr < ent->size) 225 232 ent->size = ULONG_MAX - ent->addr; 226 233 227 - /* cut when vmalloc() area is higher than direct-map area */ 228 - if (VMALLOC_START > (unsigned long)__va(0)) { 229 - if (ent->addr > VMALLOC_START) 230 - goto free_out; 234 + /* 235 + * We've already checked virt_addr_valid so we know this address 236 + * is a valid pointer, therefore we can check against it to determine 237 + * if we need to trim 238 + */ 239 + if (VMALLOC_START > ent->addr) { 231 240 if (VMALLOC_START - ent->addr < ent->size) 232 241 ent->size = VMALLOC_START - ent->addr; 233 242 }
+8 -1
fs/xfs/libxfs/xfs_attr.c
··· 511 511 if (args->flags & ATTR_CREATE) 512 512 return retval; 513 513 retval = xfs_attr_shortform_remove(args); 514 - ASSERT(retval == 0); 514 + if (retval) 515 + return retval; 516 + /* 517 + * Since we have removed the old attr, clear ATTR_REPLACE so 518 + * that the leaf format add routine won't trip over the attr 519 + * not being around. 520 + */ 521 + args->flags &= ~ATTR_REPLACE; 515 522 } 516 523 517 524 if (args->namelen >= XFS_ATTR_SF_ENTSIZE_MAX ||
+4
fs/xfs/libxfs/xfs_bmap.c
··· 725 725 *logflagsp = 0; 726 726 if ((error = xfs_alloc_vextent(&args))) { 727 727 xfs_iroot_realloc(ip, -1, whichfork); 728 + ASSERT(ifp->if_broot == NULL); 729 + XFS_IFORK_FMT_SET(ip, whichfork, XFS_DINODE_FMT_EXTENTS); 728 730 xfs_btree_del_cursor(cur, XFS_BTREE_ERROR); 729 731 return error; 730 732 } 731 733 732 734 if (WARN_ON_ONCE(args.fsbno == NULLFSBLOCK)) { 733 735 xfs_iroot_realloc(ip, -1, whichfork); 736 + ASSERT(ifp->if_broot == NULL); 737 + XFS_IFORK_FMT_SET(ip, whichfork, XFS_DINODE_FMT_EXTENTS); 734 738 xfs_btree_del_cursor(cur, XFS_BTREE_ERROR); 735 739 return -ENOSPC; 736 740 }
+21
fs/xfs/libxfs/xfs_inode_buf.c
··· 466 466 return __this_address; 467 467 if (di_size > XFS_DFORK_DSIZE(dip, mp)) 468 468 return __this_address; 469 + if (dip->di_nextents) 470 + return __this_address; 469 471 /* fall through */ 470 472 case XFS_DINODE_FMT_EXTENTS: 471 473 case XFS_DINODE_FMT_BTREE: ··· 486 484 if (XFS_DFORK_Q(dip)) { 487 485 switch (dip->di_aformat) { 488 486 case XFS_DINODE_FMT_LOCAL: 487 + if (dip->di_anextents) 488 + return __this_address; 489 + /* fall through */ 489 490 case XFS_DINODE_FMT_EXTENTS: 490 491 case XFS_DINODE_FMT_BTREE: 491 492 break; 492 493 default: 493 494 return __this_address; 494 495 } 496 + } else { 497 + /* 498 + * If there is no fork offset, this may be a freshly-made inode 499 + * in a new disk cluster, in which case di_aformat is zeroed. 500 + * Otherwise, such an inode must be in EXTENTS format; this goes 501 + * for freed inodes as well. 502 + */ 503 + switch (dip->di_aformat) { 504 + case 0: 505 + case XFS_DINODE_FMT_EXTENTS: 506 + break; 507 + default: 508 + return __this_address; 509 + } 510 + if (dip->di_anextents) 511 + return __this_address; 495 512 } 496 513 497 514 /* only version 3 or greater inodes are extensively verified here */
+19 -5
fs/xfs/xfs_file.c
··· 778 778 if (error) 779 779 goto out_unlock; 780 780 } else if (mode & FALLOC_FL_INSERT_RANGE) { 781 - unsigned int blksize_mask = i_blocksize(inode) - 1; 781 + unsigned int blksize_mask = i_blocksize(inode) - 1; 782 + loff_t isize = i_size_read(inode); 782 783 783 - new_size = i_size_read(inode) + len; 784 784 if (offset & blksize_mask || len & blksize_mask) { 785 785 error = -EINVAL; 786 786 goto out_unlock; 787 787 } 788 788 789 - /* check the new inode size does not wrap through zero */ 790 - if (new_size > inode->i_sb->s_maxbytes) { 789 + /* 790 + * New inode size must not exceed ->s_maxbytes, accounting for 791 + * possible signed overflow. 792 + */ 793 + if (inode->i_sb->s_maxbytes - isize < len) { 791 794 error = -EFBIG; 792 795 goto out_unlock; 793 796 } 797 + new_size = isize + len; 794 798 795 799 /* Offset should be less than i_size */ 796 - if (offset >= i_size_read(inode)) { 800 + if (offset >= isize) { 797 801 error = -EINVAL; 798 802 goto out_unlock; 799 803 } ··· 880 876 struct file *dst_file, 881 877 u64 dst_loff) 882 878 { 879 + struct inode *srci = file_inode(src_file); 880 + u64 max_dedupe; 883 881 int error; 884 882 883 + /* 884 + * Since we have to read all these pages in to compare them, cut 885 + * it off at MAX_RW_COUNT/2 rounded down to the nearest block. 886 + * That means we won't do more than MAX_RW_COUNT IO per request. 887 + */ 888 + max_dedupe = (MAX_RW_COUNT >> 1) & ~(i_blocksize(srci) - 1); 889 + if (len > max_dedupe) 890 + len = max_dedupe; 885 891 error = xfs_reflink_remap_range(src_file, loff, dst_file, dst_loff, 886 892 len, true); 887 893 if (error)
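The insert-range hunk uses the standard idiom for testing "isize + len <= s_maxbytes" without computing the sum, which could overflow a signed loff_t; rearranged as a subtraction, the comparison cannot wrap. Stated generically, with the names here being illustrative:

#include <linux/types.h>

/* true iff base + len stays within limit; assumes all values are
 * non-negative and base <= limit, as in the i_size/s_maxbytes case */
static bool fits_within(loff_t base, loff_t len, loff_t limit)
{
	return limit - base >= len;
}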
+2 -2
include/dt-bindings/clock/stm32mp1-clks.h
··· 76 76 #define I2C6 63 77 77 #define USART1 64 78 78 #define RTCAPB 65 79 - #define TZC 66 79 + #define TZC1 66 80 80 #define TZPC 67 81 81 #define IWDG1 68 82 82 #define BSEC 69 ··· 123 123 #define CRC1 110 124 124 #define USBH 111 125 125 #define ETHSTP 112 126 + #define TZC2 113 126 127 127 128 /* Kernel clocks */ 128 129 #define SDMMC1_K 118 ··· 229 228 #define CK_MCO2 212 230 229 231 230 /* TRACE & DEBUG clocks */ 232 - #define DBG 213 233 231 #define CK_DBG 214 234 232 #define CK_TRACE 215 235 233
+1
include/kvm/arm_vgic.h
··· 131 131 u32 mpidr; /* GICv3 target VCPU */ 132 132 }; 133 133 u8 source; /* GICv2 SGIs only */ 134 + u8 active_source; /* GICv2 SGIs only */ 134 135 u8 priority; 135 136 enum vgic_irq_config config; /* Level or edge */ 136 137
+3 -1
include/linux/bpf.h
··· 31 31 void (*map_release)(struct bpf_map *map, struct file *map_file); 32 32 void (*map_free)(struct bpf_map *map); 33 33 int (*map_get_next_key)(struct bpf_map *map, void *key, void *next_key); 34 + void (*map_release_uref)(struct bpf_map *map); 34 35 35 36 /* funcs callable from userspace and from eBPF programs */ 36 37 void *(*map_lookup_elem)(struct bpf_map *map, void *key); ··· 352 351 struct bpf_prog **_prog, *__prog; \ 353 352 struct bpf_prog_array *_array; \ 354 353 u32 _ret = 1; \ 354 + preempt_disable(); \ 355 355 rcu_read_lock(); \ 356 356 _array = rcu_dereference(array); \ 357 357 if (unlikely(check_non_null && !_array))\ ··· 364 362 } \ 365 363 _out: \ 366 364 rcu_read_unlock(); \ 365 + preempt_enable_no_resched(); \ 367 366 _ret; \ 368 367 }) 369 368 ··· 437 434 int bpf_fd_array_map_update_elem(struct bpf_map *map, struct file *map_file, 438 435 void *key, void *value, u64 map_flags); 439 436 int bpf_fd_array_map_lookup_elem(struct bpf_map *map, void *key, u32 *value); 440 - void bpf_fd_array_map_clear(struct bpf_map *map); 441 437 int bpf_fd_htab_map_update_elem(struct bpf_map *map, struct file *map_file, 442 438 void *key, void *value, u64 map_flags); 443 439 int bpf_fd_htab_map_lookup_elem(struct bpf_map *map, void *key, u32 *value);
+1
include/linux/brcmphy.h
··· 25 25 #define PHY_ID_BCM54612E 0x03625e60 26 26 #define PHY_ID_BCM54616S 0x03625d10 27 27 #define PHY_ID_BCM57780 0x03625d90 28 + #define PHY_ID_BCM89610 0x03625cd0 28 29 29 30 #define PHY_ID_BCM7250 0xae025280 30 31 #define PHY_ID_BCM7260 0xae025190
+10 -2
include/linux/ceph/osd_client.h
··· 77 77 u32 bio_length; 78 78 }; 79 79 #endif /* CONFIG_BLOCK */ 80 - struct ceph_bvec_iter bvec_pos; 80 + struct { 81 + struct ceph_bvec_iter bvec_pos; 82 + u32 num_bvecs; 83 + }; 81 84 }; 82 85 }; 83 86 ··· 415 412 struct ceph_bio_iter *bio_pos, 416 413 u32 bio_length); 417 414 #endif /* CONFIG_BLOCK */ 415 + void osd_req_op_extent_osd_data_bvecs(struct ceph_osd_request *osd_req, 416 + unsigned int which, 417 + struct bio_vec *bvecs, u32 num_bvecs, 418 + u32 bytes); 418 419 void osd_req_op_extent_osd_data_bvec_pos(struct ceph_osd_request *osd_req, 419 420 unsigned int which, 420 421 struct ceph_bvec_iter *bvec_pos); ··· 433 426 bool own_pages); 434 427 void osd_req_op_cls_request_data_bvecs(struct ceph_osd_request *osd_req, 435 428 unsigned int which, 436 - struct bio_vec *bvecs, u32 bytes); 429 + struct bio_vec *bvecs, u32 num_bvecs, 430 + u32 bytes); 437 431 extern void osd_req_op_cls_response_data_pages(struct ceph_osd_request *, 438 432 unsigned int which, 439 433 struct page **pages, u64 length,
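The anonymous struct introduced above lets num_bvecs travel with bvec_pos inside one arm of the union without widening the other arms, and both members remain directly addressable. Schematically, under hypothetical names:

#include <linux/bvec.h>

union data_repr {
	struct page **pages;	/* one representation */
	struct {		/* anonymous: members accessed directly */
		struct bvec_iter pos;
		u32 num_vecs;	/* released together with pos */
	};
};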
+3
include/linux/clk-provider.h
··· 765 765 int __clk_determine_rate(struct clk_hw *core, struct clk_rate_request *req); 766 766 int __clk_mux_determine_rate_closest(struct clk_hw *hw, 767 767 struct clk_rate_request *req); 768 + int clk_mux_determine_rate_flags(struct clk_hw *hw, 769 + struct clk_rate_request *req, 770 + unsigned long flags); 768 771 void clk_hw_reparent(struct clk_hw *hw, struct clk_hw *new_parent); 769 772 void clk_hw_set_rate_range(struct clk_hw *hw, unsigned long min_rate, 770 773 unsigned long max_rate);
+3 -1
include/linux/genhd.h
··· 368 368 part_stat_add(cpu, gendiskp, field, -subnd) 369 369 370 370 void part_in_flight(struct request_queue *q, struct hd_struct *part, 371 - unsigned int inflight[2]); 371 + unsigned int inflight[2]); 372 + void part_in_flight_rw(struct request_queue *q, struct hd_struct *part, 373 + unsigned int inflight[2]); 372 374 void part_dec_in_flight(struct request_queue *q, struct hd_struct *part, 373 375 int rw); 374 376 void part_inc_in_flight(struct request_queue *q, struct hd_struct *part,
+1
include/linux/kthread.h
··· 62 62 int kthread_park(struct task_struct *k); 63 63 void kthread_unpark(struct task_struct *k); 64 64 void kthread_parkme(void); 65 + void kthread_park_complete(struct task_struct *k); 65 66 66 67 int kthreadd(void *unused); 67 68 extern struct task_struct *kthreadd_task;
+3 -9
include/linux/mlx5/driver.h
··· 1284 1284 }; 1285 1285 1286 1286 static inline const struct cpumask * 1287 - mlx5_get_vector_affinity(struct mlx5_core_dev *dev, int vector) 1287 + mlx5_get_vector_affinity_hint(struct mlx5_core_dev *dev, int vector) 1288 1288 { 1289 - const struct cpumask *mask; 1290 1289 struct irq_desc *desc; 1291 1290 unsigned int irq; 1292 1291 int eqn; 1293 1292 int err; 1294 1293 1295 - err = mlx5_vector2eqn(dev, MLX5_EQ_VEC_COMP_BASE + vector, &eqn, &irq); 1294 + err = mlx5_vector2eqn(dev, vector, &eqn, &irq); 1296 1295 if (err) 1297 1296 return NULL; 1298 1297 1299 1298 desc = irq_to_desc(irq); 1300 - #ifdef CONFIG_GENERIC_IRQ_EFFECTIVE_AFF_MASK 1301 - mask = irq_data_get_effective_affinity_mask(&desc->irq_data); 1302 - #else 1303 - mask = desc->irq_common_data.affinity; 1304 - #endif 1305 - return mask; 1299 + return desc->affinity_hint; 1306 1300 } 1307 1301 1308 1302 #endif /* MLX5_DRIVER_H */
+2
include/linux/oom.h
··· 95 95 return 0; 96 96 } 97 97 98 + void __oom_reap_task_mm(struct mm_struct *mm); 99 + 98 100 extern unsigned long oom_badness(struct task_struct *p, 99 101 struct mem_cgroup *memcg, const nodemask_t *nodemask, 100 102 unsigned long totalpages);
+1
include/linux/rbtree_augmented.h
··· 26 26 27 27 #include <linux/compiler.h> 28 28 #include <linux/rbtree.h> 29 + #include <linux/rcupdate.h> 29 30 30 31 /* 31 32 * Please note - only struct rb_augment_callbacks and the prototypes for
+1
include/linux/rbtree_latch.h
··· 35 35 36 36 #include <linux/rbtree.h> 37 37 #include <linux/seqlock.h> 38 + #include <linux/rcupdate.h> 38 39 39 40 struct latch_tree_node { 40 41 struct rb_node node[2];
+1 -1
include/linux/remoteproc.h
··· 569 569 void rproc_add_subdev(struct rproc *rproc, 570 570 struct rproc_subdev *subdev, 571 571 int (*probe)(struct rproc_subdev *subdev), 572 - void (*remove)(struct rproc_subdev *subdev, bool graceful)); 572 + void (*remove)(struct rproc_subdev *subdev, bool crashed)); 573 573 574 574 void rproc_remove_subdev(struct rproc *rproc, struct rproc_subdev *subdev); 575 575
+45 -5
include/linux/sched.h
··· 112 112 113 113 #ifdef CONFIG_DEBUG_ATOMIC_SLEEP 114 114 115 + /* 116 + * Special states are those that do not use the normal wait-loop pattern. See 117 + * the comment with set_special_state(). 118 + */ 119 + #define is_special_task_state(state) \ 120 + ((state) & (__TASK_STOPPED | __TASK_TRACED | TASK_DEAD)) 121 + 115 122 #define __set_current_state(state_value) \ 116 123 do { \ 124 + WARN_ON_ONCE(is_special_task_state(state_value));\ 117 125 current->task_state_change = _THIS_IP_; \ 118 126 current->state = (state_value); \ 119 127 } while (0) 128 + 120 129 #define set_current_state(state_value) \ 121 130 do { \ 131 + WARN_ON_ONCE(is_special_task_state(state_value));\ 122 132 current->task_state_change = _THIS_IP_; \ 123 133 smp_store_mb(current->state, (state_value)); \ 124 134 } while (0) 125 135 136 + #define set_special_state(state_value) \ 137 + do { \ 138 + unsigned long flags; /* may shadow */ \ 139 + WARN_ON_ONCE(!is_special_task_state(state_value)); \ 140 + raw_spin_lock_irqsave(&current->pi_lock, flags); \ 141 + current->task_state_change = _THIS_IP_; \ 142 + current->state = (state_value); \ 143 + raw_spin_unlock_irqrestore(&current->pi_lock, flags); \ 144 + } while (0) 126 145 #else 127 146 /* 128 147 * set_current_state() includes a barrier so that the write of current->state ··· 163 144 * 164 145 * The above is typically ordered against the wakeup, which does: 165 146 * 166 - * need_sleep = false; 167 - * wake_up_state(p, TASK_UNINTERRUPTIBLE); 147 + * need_sleep = false; 148 + * wake_up_state(p, TASK_UNINTERRUPTIBLE); 168 149 * 169 150 * Where wake_up_state() (and all other wakeup primitives) imply enough 170 151 * barriers to order the store of the variable against wakeup. ··· 173 154 * once it observes the TASK_UNINTERRUPTIBLE store the waking CPU can issue a 174 155 * TASK_RUNNING store which can collide with __set_current_state(TASK_RUNNING). 175 156 * 176 - * This is obviously fine, since they both store the exact same value. 157 + * However, with slightly different timing the wakeup TASK_RUNNING store can 158 + * also collide with the TASK_UNINTERRUPTIBLE store. Loosing that store is not 159 + * a problem either because that will result in one extra go around the loop 160 + * and our @cond test will save the day. 177 161 * 178 162 * Also see the comments of try_to_wake_up(). 179 163 */ 180 - #define __set_current_state(state_value) do { current->state = (state_value); } while (0) 181 - #define set_current_state(state_value) smp_store_mb(current->state, (state_value)) 164 + #define __set_current_state(state_value) \ 165 + current->state = (state_value) 166 + 167 + #define set_current_state(state_value) \ 168 + smp_store_mb(current->state, (state_value)) 169 + 170 + /* 171 + * set_special_state() should be used for those states when the blocking task 172 + * can not use the regular condition based wait-loop. In that case we must 173 + * serialize against wakeups such that any possible in-flight TASK_RUNNING stores 174 + * will not collide with our state change. 175 + */ 176 + #define set_special_state(state_value) \ 177 + do { \ 178 + unsigned long flags; /* may shadow */ \ 179 + raw_spin_lock_irqsave(&current->pi_lock, flags); \ 180 + current->state = (state_value); \ 181 + raw_spin_unlock_irqrestore(&current->pi_lock, flags); \ 182 + } while (0) 183 + 182 184 #endif 183 185 184 186 /* Task command name length: */
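For contrast with set_special_state(), the "normal" pattern the comment refers to looks like the sketch below: the state store sits inside a loop, so a racing wakeup's TASK_RUNNING store costs at most one extra pass and needs no ->pi_lock serialization (cond is a stand-in for any wait condition):

#include <linux/sched.h>

static void wait_for(bool (*cond)(void))
{
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (cond())
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);
}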
+1 -1
include/linux/sched/signal.h
··· 280 280 { 281 281 spin_lock_irq(&current->sighand->siglock); 282 282 if (current->jobctl & JOBCTL_STOP_DEQUEUED) 283 - __set_current_state(TASK_STOPPED); 283 + set_special_state(TASK_STOPPED); 284 284 spin_unlock_irq(&current->sighand->siglock); 285 285 286 286 schedule();
+1 -1
include/linux/usb/composite.h
··· 52 52 #define USB_GADGET_DELAYED_STATUS 0x7fff /* Impossibly large value */ 53 53 54 54 /* big enough to hold our biggest descriptor */ 55 - #define USB_COMP_EP0_BUFSIZ 1024 55 + #define USB_COMP_EP0_BUFSIZ 4096 56 56 57 57 /* OS feature descriptor length <= 4kB */ 58 58 #define USB_COMP_EP0_OS_DESC_BUFSIZ 4096
+17
include/linux/wait_bit.h
··· 305 305 __ret; \ 306 306 }) 307 307 308 + /** 309 + * clear_and_wake_up_bit - clear a bit and wake up anyone waiting on that bit 310 + * 311 + * @bit: the bit of the word being waited on 312 + * @word: the word being waited on, a kernel virtual address 313 + * 314 + * You can use this helper if bitflags are manipulated atomically rather than 315 + * non-atomically under a lock. 316 + */ 317 + static inline void clear_and_wake_up_bit(int bit, void *word) 318 + { 319 + clear_bit_unlock(bit, word); 320 + /* See wake_up_bit() for which memory barrier you need to use. */ 321 + smp_mb__after_atomic(); 322 + wake_up_bit(word, bit); 323 + } 324 + 308 325 #endif /* _LINUX_WAIT_BIT_H */
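A usage sketch for the new helper, with the flag word and bit index hypothetical: the waiter blocks until the bit clears, the owner clears it with release semantics and wakes all waiters in one call:

#include <linux/wait_bit.h>

#define MY_BUSY_BIT 0	/* illustrative bit index */

static int wait_until_idle(unsigned long *flags)
{
	return wait_on_bit(flags, MY_BUSY_BIT, TASK_UNINTERRUPTIBLE);
}

static void mark_idle(unsigned long *flags)
{
	/* clear_bit_unlock() + smp_mb__after_atomic() + wake_up_bit() */
	clear_and_wake_up_bit(MY_BUSY_BIT, flags);
}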
+1 -1
include/media/i2c/tvp7002.h
··· 5 5 * Author: Santiago Nunez-Corrales <santiago.nunez@ridgerun.com> 6 6 * 7 7 * This code is partially based upon the TVP5150 driver 8 - * written by Mauro Carvalho Chehab (mchehab@infradead.org), 8 + * written by Mauro Carvalho Chehab <mchehab@kernel.org>, 9 9 * the TVP514x driver written by Vaibhav Hiremath <hvaibhav@ti.com> 10 10 * and the TVP7002 driver in the TI LSP 2.10.00.14 11 11 *
+2 -2
include/media/videobuf-core.h
··· 1 1 /* 2 2 * generic helper functions for handling video4linux capture buffers 3 3 * 4 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 4 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 5 5 * 6 6 * Highly based on video-buf written originally by: 7 7 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org> 8 - * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org> 8 + * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org> 9 9 * (c) 2006 Ted Walther and John Sokol 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify
+2 -2
include/media/videobuf-dma-sg.h
··· 6 6 * into PAGE_SIZE chunks). They also assume the driver does not need 7 7 * to touch the video data. 8 8 * 9 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 9 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 10 10 * 11 11 * Highly based on video-buf written originally by: 12 12 * (c) 2001,02 Gerd Knorr <kraxel@bytesex.org> 13 - * (c) 2006 Mauro Carvalho Chehab, <mchehab@infradead.org> 13 + * (c) 2006 Mauro Carvalho Chehab, <mchehab@kernel.org> 14 14 * (c) 2006 Ted Walther and John Sokol 15 15 * 16 16 * This program is free software; you can redistribute it and/or modify
+1 -1
include/media/videobuf-vmalloc.h
··· 6 6 * into PAGE_SIZE chunks). They also assume the driver does not need 7 7 * to touch the video data. 8 8 * 9 - * (c) 2007 Mauro Carvalho Chehab, <mchehab@infradead.org> 9 + * (c) 2007 Mauro Carvalho Chehab, <mchehab@kernel.org> 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of the GNU General Public License as published by
+1
include/net/bonding.h
··· 198 198 struct slave __rcu *primary_slave; 199 199 struct bond_up_slave __rcu *slave_arr; /* Array of usable slaves */ 200 200 bool force_primary; 201 + u32 nest_level; 201 202 s32 slave_cnt; /* never change this value outside the attach/detach wrappers */ 202 203 int (*recv_probe)(const struct sk_buff *, struct bonding *, 203 204 struct slave *);
+1 -1
include/net/flow_dissector.h
··· 251 251 * This structure is used to hold a digest of the full flow keys. This is a 252 252 * larger "hash" of a flow to allow definitively matching specific flows where 253 253 * the 32 bit skb->hash is not large enough. The size is limited to 16 bytes so 254 - * that it can by used in CB of skb (see sch_choke for an example). 254 + * that it can be used in CB of skb (see sch_choke for an example). 255 255 */ 256 256 #define FLOW_KEYS_DIGEST_LEN 16 257 257 struct flow_keys_digest {
+1 -1
include/net/mac80211.h
··· 2080 2080 * virtual interface might not be given air time for the transmission of 2081 2081 * the frame, as it is not synced with the AP/P2P GO yet, and thus the 2082 2082 * deauthentication frame might not be transmitted. 2083 - > 2083 + * 2084 2084 * @IEEE80211_HW_DOESNT_SUPPORT_QOS_NDP: The driver (or firmware) doesn't 2085 2085 * support QoS NDP for AP probing - that's most likely a driver bug. 2086 2086 *
+1
include/net/tls.h
··· 148 148 struct scatterlist *partially_sent_record; 149 149 u16 partially_sent_offset; 150 150 unsigned long flags; 151 + bool in_tcp_sendpages; 151 152 152 153 u16 pending_open_record_frags; 153 154 int (*push_pending_record)(struct sock *sk, int flags);
+1
include/net/xfrm.h
··· 375 375 int xfrm_input_register_afinfo(const struct xfrm_input_afinfo *afinfo); 376 376 int xfrm_input_unregister_afinfo(const struct xfrm_input_afinfo *afinfo); 377 377 378 + void xfrm_flush_gc(void); 378 379 void xfrm_state_delete_tunnel(struct xfrm_state *x); 379 380 380 381 struct xfrm_type {
+11 -3
include/trace/events/initcall.h
··· 31 31 TP_ARGS(func), 32 32 33 33 TP_STRUCT__entry( 34 - __field(initcall_t, func) 34 + /* 35 + * Use field_struct to avoid is_signed_type() 36 + * comparison of a function pointer 37 + */ 38 + __field_struct(initcall_t, func) 35 39 ), 36 40 37 41 TP_fast_assign( ··· 52 48 TP_ARGS(func, ret), 53 49 54 50 TP_STRUCT__entry( 55 - __field(initcall_t, func) 56 - __field(int, ret) 51 + /* 52 + * Use field_struct to avoid is_signed_type() 53 + * comparison of a function pointer 54 + */ 55 + __field_struct(initcall_t, func) 56 + __field(int, ret) 57 57 ), 58 58 59 59 TP_fast_assign(
+85
include/trace/events/rxrpc.h
··· 15 15 #define _TRACE_RXRPC_H 16 16 17 17 #include <linux/tracepoint.h> 18 + #include <linux/errqueue.h> 18 19 19 20 /* 20 21 * Define enums for tracing information. ··· 209 208 rxrpc_cong_retransmit_again, 210 209 rxrpc_cong_rtt_window_end, 211 210 rxrpc_cong_saw_nack, 211 + }; 212 + 213 + enum rxrpc_tx_fail_trace { 214 + rxrpc_tx_fail_call_abort, 215 + rxrpc_tx_fail_call_ack, 216 + rxrpc_tx_fail_call_data_frag, 217 + rxrpc_tx_fail_call_data_nofrag, 218 + rxrpc_tx_fail_call_final_resend, 219 + rxrpc_tx_fail_conn_abort, 220 + rxrpc_tx_fail_conn_challenge, 221 + rxrpc_tx_fail_conn_response, 222 + rxrpc_tx_fail_reject, 223 + rxrpc_tx_fail_version_keepalive, 224 + rxrpc_tx_fail_version_reply, 212 225 }; 213 226 214 227 #endif /* end __RXRPC_DECLARE_TRACE_ENUMS_ONCE_ONLY */ ··· 452 437 EM(RXRPC_CALL_LOCAL_ERROR, "LocalError") \ 453 438 E_(RXRPC_CALL_NETWORK_ERROR, "NetError") 454 439 440 + #define rxrpc_tx_fail_traces \ 441 + EM(rxrpc_tx_fail_call_abort, "CallAbort") \ 442 + EM(rxrpc_tx_fail_call_ack, "CallAck") \ 443 + EM(rxrpc_tx_fail_call_data_frag, "CallDataFrag") \ 444 + EM(rxrpc_tx_fail_call_data_nofrag, "CallDataNofrag") \ 445 + EM(rxrpc_tx_fail_call_final_resend, "CallFinalResend") \ 446 + EM(rxrpc_tx_fail_conn_abort, "ConnAbort") \ 447 + EM(rxrpc_tx_fail_conn_challenge, "ConnChall") \ 448 + EM(rxrpc_tx_fail_conn_response, "ConnResp") \ 449 + EM(rxrpc_tx_fail_reject, "Reject") \ 450 + EM(rxrpc_tx_fail_version_keepalive, "VerKeepalive") \ 451 + E_(rxrpc_tx_fail_version_reply, "VerReply") 452 + 455 453 /* 456 454 * Export enum symbols via userspace. 457 455 */ ··· 488 460 rxrpc_propose_ack_outcomes; 489 461 rxrpc_congest_modes; 490 462 rxrpc_congest_changes; 463 + rxrpc_tx_fail_traces; 491 464 492 465 /* 493 466 * Now redefine the EM() and E_() macros to map the enums to the strings that ··· 1401 1372 __entry->call, 1402 1373 __entry->ix, 1403 1374 __entry->anno) 1375 + ); 1376 + 1377 + TRACE_EVENT(rxrpc_rx_icmp, 1378 + TP_PROTO(struct rxrpc_peer *peer, struct sock_extended_err *ee, 1379 + struct sockaddr_rxrpc *srx), 1380 + 1381 + TP_ARGS(peer, ee, srx), 1382 + 1383 + TP_STRUCT__entry( 1384 + __field(unsigned int, peer ) 1385 + __field_struct(struct sock_extended_err, ee ) 1386 + __field_struct(struct sockaddr_rxrpc, srx ) 1387 + ), 1388 + 1389 + TP_fast_assign( 1390 + __entry->peer = peer->debug_id; 1391 + memcpy(&__entry->ee, ee, sizeof(__entry->ee)); 1392 + memcpy(&__entry->srx, srx, sizeof(__entry->srx)); 1393 + ), 1394 + 1395 + TP_printk("P=%08x o=%u t=%u c=%u i=%u d=%u e=%d %pISp", 1396 + __entry->peer, 1397 + __entry->ee.ee_origin, 1398 + __entry->ee.ee_type, 1399 + __entry->ee.ee_code, 1400 + __entry->ee.ee_info, 1401 + __entry->ee.ee_data, 1402 + __entry->ee.ee_errno, 1403 + &__entry->srx.transport) 1404 + ); 1405 + 1406 + TRACE_EVENT(rxrpc_tx_fail, 1407 + TP_PROTO(unsigned int debug_id, rxrpc_serial_t serial, int ret, 1408 + enum rxrpc_tx_fail_trace what), 1409 + 1410 + TP_ARGS(debug_id, serial, ret, what), 1411 + 1412 + TP_STRUCT__entry( 1413 + __field(unsigned int, debug_id ) 1414 + __field(rxrpc_serial_t, serial ) 1415 + __field(int, ret ) 1416 + __field(enum rxrpc_tx_fail_trace, what ) 1417 + ), 1418 + 1419 + TP_fast_assign( 1420 + __entry->debug_id = debug_id; 1421 + __entry->serial = serial; 1422 + __entry->ret = ret; 1423 + __entry->what = what; 1424 + ), 1425 + 1426 + TP_printk("c=%08x r=%x ret=%d %s", 1427 + __entry->debug_id, 1428 + __entry->serial, 1429 + __entry->ret, 1430 + __print_symbolic(__entry->what, rxrpc_tx_fail_traces)) 1404 1431 ); 1405 1432 1406 1433 
#endif /* _TRACE_RXRPC_H */
+6 -10
include/trace/events/sunrpc.h
··· 224 224 TP_ARGS(task, backlog, rtt, execute), 225 225 226 226 TP_STRUCT__entry( 227 + __field(unsigned int, task_id) 228 + __field(unsigned int, client_id) 227 229 __field(u32, xid) 228 230 __field(int, version) 229 231 __string(progname, task->tk_client->cl_program->name) ··· 233 231 __field(unsigned long, backlog) 234 232 __field(unsigned long, rtt) 235 233 __field(unsigned long, execute) 236 - __string(addr, 237 - task->tk_xprt->address_strings[RPC_DISPLAY_ADDR]) 238 - __string(port, 239 - task->tk_xprt->address_strings[RPC_DISPLAY_PORT]) 240 234 ), 241 235 242 236 TP_fast_assign( 237 + __entry->client_id = task->tk_client->cl_clid; 238 + __entry->task_id = task->tk_pid; 243 239 __entry->xid = be32_to_cpu(task->tk_rqstp->rq_xid); 244 240 __entry->version = task->tk_client->cl_vers; 245 241 __assign_str(progname, task->tk_client->cl_program->name) ··· 245 245 __entry->backlog = ktime_to_us(backlog); 246 246 __entry->rtt = ktime_to_us(rtt); 247 247 __entry->execute = ktime_to_us(execute); 248 - __assign_str(addr, 249 - task->tk_xprt->address_strings[RPC_DISPLAY_ADDR]); 250 - __assign_str(port, 251 - task->tk_xprt->address_strings[RPC_DISPLAY_PORT]); 252 248 ), 253 249 254 - TP_printk("peer=[%s]:%s xid=0x%08x %sv%d %s backlog=%lu rtt=%lu execute=%lu", 255 - __get_str(addr), __get_str(port), __entry->xid, 250 + TP_printk("task:%u@%d xid=0x%08x %sv%d %s backlog=%lu rtt=%lu execute=%lu", 251 + __entry->task_id, __entry->client_id, __entry->xid, 256 252 __get_str(progname), __entry->version, __get_str(procname), 257 253 __entry->backlog, __entry->rtt, __entry->execute) 258 254 );
+1 -1
include/uapi/linux/if_infiniband.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 2 2 /* 3 3 * This software is available to you under a choice of one of two 4 4 * licenses. You may choose to be licensed under the terms of the GNU
+2
include/uapi/linux/nl80211.h
··· 2698 2698 #define NL80211_ATTR_KEYS NL80211_ATTR_KEYS 2699 2699 #define NL80211_ATTR_FEATURE_FLAGS NL80211_ATTR_FEATURE_FLAGS 2700 2700 2701 + #define NL80211_WIPHY_NAME_MAXLEN 128 2702 + 2701 2703 #define NL80211_MAX_SUPP_RATES 32 2702 2704 #define NL80211_MAX_SUPP_HT_RATES 77 2703 2705 #define NL80211_MAX_SUPP_REG_RULES 64
+1 -1
include/uapi/linux/rds.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2008 Oracle. All rights reserved. 4 4 *
+1 -1
include/uapi/linux/tls.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2016-2017, Mellanox Technologies. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/cxgb3-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2006 Chelsio, Inc. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/cxgb4-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2009-2010 Chelsio, Inc. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/hns-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2016 Hisilicon Limited. 4 4 *
+1 -1
include/uapi/rdma/ib_user_cm.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2005 Topspin Communications. All rights reserved. 4 4 * Copyright (c) 2005 Intel Corporation. All rights reserved.
+1 -1
include/uapi/rdma/ib_user_ioctl_verbs.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2017-2018, Mellanox Technologies inc. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/ib_user_mad.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2004 Topspin Communications. All rights reserved. 4 4 * Copyright (c) 2005 Voltaire, Inc. All rights reserved.
+1 -1
include/uapi/rdma/ib_user_sa.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2005 Intel Corporation. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/ib_user_verbs.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2005 Topspin Communications. All rights reserved. 4 4 * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.
+1 -1
include/uapi/rdma/mlx4-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2007 Cisco Systems, Inc. All rights reserved. 4 4 * Copyright (c) 2007, 2008 Mellanox Technologies. All rights reserved.
+1 -1
include/uapi/rdma/mlx5-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2013-2015, Mellanox Technologies. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/mthca-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2005 Topspin Communications. All rights reserved. 4 4 * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved.
+1 -1
include/uapi/rdma/nes-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2006 - 2011 Intel Corporation. All rights reserved. 4 4 * Copyright (c) 2005 Topspin Communications. All rights reserved.
+1 -1
include/uapi/rdma/qedr-abi.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* QLogic qedr NIC Driver 3 3 * Copyright (c) 2015-2016 QLogic Corporation 4 4 *
+1 -1
include/uapi/rdma/rdma_user_cm.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2005-2006 Intel Corporation. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/rdma_user_ioctl.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2016 Mellanox Technologies, LTD. All rights reserved. 4 4 *
+1 -1
include/uapi/rdma/rdma_user_rxe.h
··· 1 - /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) */ 1 + /* SPDX-License-Identifier: ((GPL-2.0 WITH Linux-syscall-note) OR Linux-OpenIB) */ 2 2 /* 3 3 * Copyright (c) 2016 Mellanox Technologies Ltd. All rights reserved. 4 4 *
+8 -1
init/main.c
··· 423 423 424 424 /* 425 425 * Enable might_sleep() and smp_processor_id() checks. 426 - * They cannot be enabled earlier because with CONFIG_PRREMPT=y 426 + * They cannot be enabled earlier because with CONFIG_PREEMPT=y 427 427 * kernel_thread() would trigger might_sleep() splats. With 428 428 * CONFIG_PREEMPT_VOLUNTARY=y the init task might have scheduled 429 429 * already, but it's stuck on the kthreadd_done completion. ··· 1034 1034 static void mark_readonly(void) 1035 1035 { 1036 1036 if (rodata_enabled) { 1037 + /* 1038 + * load_module() results in W+X mappings, which are cleaned up 1039 + * with call_rcu_sched(). Let's make sure that queued work is 1040 + * flushed so that we don't hit false positives looking for 1041 + * insecure pages which are W+X. 1042 + */ 1043 + rcu_barrier_sched(); 1037 1044 mark_rodata_ro(); 1038 1045 rodata_test(); 1039 1046 } else
+2 -1
kernel/bpf/arraymap.c
··· 476 476 } 477 477 478 478 /* decrement refcnt of all bpf_progs that are stored in this map */ 479 - void bpf_fd_array_map_clear(struct bpf_map *map) 479 + static void bpf_fd_array_map_clear(struct bpf_map *map) 480 480 { 481 481 struct bpf_array *array = container_of(map, struct bpf_array, map); 482 482 int i; ··· 495 495 .map_fd_get_ptr = prog_fd_array_get_ptr, 496 496 .map_fd_put_ptr = prog_fd_array_put_ptr, 497 497 .map_fd_sys_lookup_elem = prog_fd_array_sys_lookup_elem, 498 + .map_release_uref = bpf_fd_array_map_clear, 498 499 }; 499 500 500 501 static struct bpf_event_entry *bpf_event_entry_gen(struct file *perf_file,
+73 -26
kernel/bpf/sockmap.c
··· 43 43 #include <net/tcp.h> 44 44 #include <linux/ptr_ring.h> 45 45 #include <net/inet_common.h> 46 + #include <linux/sched/signal.h> 46 47 47 48 #define SOCK_CREATE_FLAG_MASK \ 48 49 (BPF_F_NUMA_NODE | BPF_F_RDONLY | BPF_F_WRONLY) ··· 326 325 if (ret > 0) { 327 326 if (apply) 328 327 apply_bytes -= ret; 328 + 329 + sg->offset += ret; 330 + sg->length -= ret; 329 331 size -= ret; 330 332 offset += ret; 331 333 if (uncharge) ··· 336 332 goto retry; 337 333 } 338 334 339 - sg->length = size; 340 - sg->offset = offset; 341 335 return ret; 342 336 } 343 337 ··· 393 391 } while (i != md->sg_end); 394 392 } 395 393 396 - static void free_bytes_sg(struct sock *sk, int bytes, struct sk_msg_buff *md) 394 + static void free_bytes_sg(struct sock *sk, int bytes, 395 + struct sk_msg_buff *md, bool charge) 397 396 { 398 397 struct scatterlist *sg = md->sg_data; 399 398 int i = md->sg_start, free; ··· 404 401 if (bytes < free) { 405 402 sg[i].length -= bytes; 406 403 sg[i].offset += bytes; 407 - sk_mem_uncharge(sk, bytes); 404 + if (charge) 405 + sk_mem_uncharge(sk, bytes); 408 406 break; 409 407 } 410 408 411 - sk_mem_uncharge(sk, sg[i].length); 409 + if (charge) 410 + sk_mem_uncharge(sk, sg[i].length); 412 411 put_page(sg_page(&sg[i])); 413 412 bytes -= sg[i].length; 414 413 sg[i].length = 0; ··· 421 416 if (i == MAX_SKB_FRAGS) 422 417 i = 0; 423 418 } 419 + md->sg_start = i; 424 420 } 425 421 426 422 static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md) ··· 529 523 i = md->sg_start; 530 524 531 525 do { 532 - r->sg_data[i] = md->sg_data[i]; 533 - 534 526 size = (apply && apply_bytes < md->sg_data[i].length) ? 535 527 apply_bytes : md->sg_data[i].length; 536 528 ··· 539 535 } 540 536 541 537 sk_mem_charge(sk, size); 538 + r->sg_data[i] = md->sg_data[i]; 542 539 r->sg_data[i].length = size; 543 540 md->sg_data[i].length -= size; 544 541 md->sg_data[i].offset += size; ··· 580 575 struct sk_msg_buff *md, 581 576 int flags) 582 577 { 578 + bool ingress = !!(md->flags & BPF_F_INGRESS); 583 579 struct smap_psock *psock; 584 580 struct scatterlist *sg; 585 - int i, err, free = 0; 586 - bool ingress = !!(md->flags & BPF_F_INGRESS); 581 + int err = 0; 587 582 588 583 sg = md->sg_data; 589 584 ··· 611 606 out_rcu: 612 607 rcu_read_unlock(); 613 608 out: 614 - i = md->sg_start; 615 - while (sg[i].length) { 616 - free += sg[i].length; 617 - put_page(sg_page(&sg[i])); 618 - sg[i].length = 0; 619 - i++; 620 - if (i == MAX_SKB_FRAGS) 621 - i = 0; 622 - } 623 - return free; 609 + free_bytes_sg(NULL, send, md, false); 610 + return err; 624 611 } 625 612 626 613 static inline void bpf_md_init(struct smap_psock *psock) ··· 697 700 err = bpf_tcp_sendmsg_do_redirect(redir, send, m, flags); 698 701 lock_sock(sk); 699 702 703 + if (unlikely(err < 0)) { 704 + free_start_sg(sk, m); 705 + psock->sg_size = 0; 706 + if (!cork) 707 + *copied -= send; 708 + } else { 709 + psock->sg_size -= send; 710 + } 711 + 700 712 if (cork) { 701 713 free_start_sg(sk, m); 714 + psock->sg_size = 0; 702 715 kfree(m); 703 716 m = NULL; 717 + err = 0; 704 718 } 705 - if (unlikely(err)) 706 - *copied -= err; 707 - else 708 - psock->sg_size -= send; 709 719 break; 710 720 case __SK_DROP: 711 721 default: 712 - free_bytes_sg(sk, send, m); 722 + free_bytes_sg(sk, send, m, true); 713 723 apply_bytes_dec(psock, send); 714 724 *copied -= send; 715 725 psock->sg_size -= send; ··· 734 730 735 731 out_err: 736 732 return err; 733 + } 734 + 735 + static int bpf_wait_data(struct sock *sk, 736 + struct smap_psock *psk, int flags, 737 + long 
timeo, int *err) 738 + { 739 + int rc; 740 + 741 + DEFINE_WAIT_FUNC(wait, woken_wake_function); 742 + 743 + add_wait_queue(sk_sleep(sk), &wait); 744 + sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk); 745 + rc = sk_wait_event(sk, &timeo, 746 + !list_empty(&psk->ingress) || 747 + !skb_queue_empty(&sk->sk_receive_queue), 748 + &wait); 749 + sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk); 750 + remove_wait_queue(sk_sleep(sk), &wait); 751 + 752 + return rc; 737 753 } 738 754 739 755 static int bpf_tcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len, ··· 779 755 return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len); 780 756 781 757 lock_sock(sk); 758 + bytes_ready: 782 759 while (copied != len) { 783 760 struct scatterlist *sg; 784 761 struct sk_msg_buff *md; ··· 832 807 consume_skb(md->skb); 833 808 kfree(md); 834 809 } 810 + } 811 + 812 + if (!copied) { 813 + long timeo; 814 + int data; 815 + int err = 0; 816 + 817 + timeo = sock_rcvtimeo(sk, nonblock); 818 + data = bpf_wait_data(sk, psock, flags, timeo, &err); 819 + 820 + if (data) { 821 + if (!skb_queue_empty(&sk->sk_receive_queue)) { 822 + release_sock(sk); 823 + smap_release_sock(psock, sk); 824 + copied = tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len); 825 + return copied; 826 + } 827 + goto bytes_ready; 828 + } 829 + 830 + if (err) 831 + copied = err; 835 832 } 836 833 837 834 release_sock(sk); ··· 1878 1831 return err; 1879 1832 } 1880 1833 1881 - static void sock_map_release(struct bpf_map *map, struct file *map_file) 1834 + static void sock_map_release(struct bpf_map *map) 1882 1835 { 1883 1836 struct bpf_stab *stab = container_of(map, struct bpf_stab, map); 1884 1837 struct bpf_prog *orig; ··· 1902 1855 .map_get_next_key = sock_map_get_next_key, 1903 1856 .map_update_elem = sock_map_update_elem, 1904 1857 .map_delete_elem = sock_map_delete_elem, 1905 - .map_release = sock_map_release, 1858 + .map_release_uref = sock_map_release, 1906 1859 }; 1907 1860 1908 1861 BPF_CALL_4(bpf_sock_map_update, struct bpf_sock_ops_kern *, bpf_sock,
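bpf_wait_data() above is the stock socket blocking-read idiom; pulled out on its own it reads as below (all calls are existing socket-layer primitives, mirroring the hunk):

#include <net/sock.h>

static int wait_for_ingress(struct sock *sk, long timeo)
{
	DEFINE_WAIT_FUNC(wait, woken_wake_function);
	int rc;

	add_wait_queue(sk_sleep(sk), &wait);
	sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
	rc = sk_wait_event(sk, &timeo,
			   !skb_queue_empty(&sk->sk_receive_queue), &wait);
	sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
	remove_wait_queue(sk_sleep(sk), &wait);
	return rc;	/* nonzero once the condition became true */
}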
+16 -7
kernel/bpf/syscall.c
··· 26 26 #include <linux/cred.h> 27 27 #include <linux/timekeeping.h> 28 28 #include <linux/ctype.h> 29 + #include <linux/nospec.h> 29 30 30 31 #define IS_FD_ARRAY(map) ((map)->map_type == BPF_MAP_TYPE_PROG_ARRAY || \ 31 32 (map)->map_type == BPF_MAP_TYPE_PERF_EVENT_ARRAY || \ ··· 103 102 static struct bpf_map *find_and_alloc_map(union bpf_attr *attr) 104 103 { 105 104 const struct bpf_map_ops *ops; 105 + u32 type = attr->map_type; 106 106 struct bpf_map *map; 107 107 int err; 108 108 109 - if (attr->map_type >= ARRAY_SIZE(bpf_map_types)) 109 + if (type >= ARRAY_SIZE(bpf_map_types)) 110 110 return ERR_PTR(-EINVAL); 111 - ops = bpf_map_types[attr->map_type]; 111 + type = array_index_nospec(type, ARRAY_SIZE(bpf_map_types)); 112 + ops = bpf_map_types[type]; 112 113 if (!ops) 113 114 return ERR_PTR(-EINVAL); 114 115 ··· 125 122 if (IS_ERR(map)) 126 123 return map; 127 124 map->ops = ops; 128 - map->map_type = attr->map_type; 125 + map->map_type = type; 129 126 return map; 130 127 } 131 128 ··· 260 257 static void bpf_map_put_uref(struct bpf_map *map) 261 258 { 262 259 if (atomic_dec_and_test(&map->usercnt)) { 263 - if (map->map_type == BPF_MAP_TYPE_PROG_ARRAY) 264 - bpf_fd_array_map_clear(map); 260 + if (map->ops->map_release_uref) 261 + map->ops->map_release_uref(map); 265 262 } 266 263 } 267 264 ··· 874 871 875 872 static int find_prog_type(enum bpf_prog_type type, struct bpf_prog *prog) 876 873 { 877 - if (type >= ARRAY_SIZE(bpf_prog_types) || !bpf_prog_types[type]) 874 + const struct bpf_prog_ops *ops; 875 + 876 + if (type >= ARRAY_SIZE(bpf_prog_types)) 877 + return -EINVAL; 878 + type = array_index_nospec(type, ARRAY_SIZE(bpf_prog_types)); 879 + ops = bpf_prog_types[type]; 880 + if (!ops) 878 881 return -EINVAL; 879 882 880 883 if (!bpf_prog_is_dev_bound(prog->aux)) 881 - prog->aux->ops = bpf_prog_types[type]; 884 + prog->aux->ops = ops; 882 885 else 883 886 prog->aux->ops = &bpf_offload_prog_ops; 884 887 prog->type = type;
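Both lookups in the syscall hunk follow the same Spectre-v1 recipe: bounds-check the user-controlled index, then clamp it with array_index_nospec() so a mispredicted branch cannot speculatively index past the table. Condensed from the map path above:

#include <linux/nospec.h>

static const struct bpf_map_ops *get_map_ops(u32 type)
{
	if (type >= ARRAY_SIZE(bpf_map_types))
		return NULL;
	/* forces type to 0 if speculation runs past the check */
	type = array_index_nospec(type, ARRAY_SIZE(bpf_map_types));
	return bpf_map_types[type];
}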
+1
kernel/compat.c
··· 34 34 { 35 35 struct compat_timex tx32; 36 36 37 + memset(txc, 0, sizeof(struct timex)); 37 38 if (copy_from_user(&tx32, utp, sizeof(struct compat_timex))) 38 39 return -EFAULT; 39 40
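The added memset() matters because the compat struct covers only a subset of the native one: any field struct timex has but compat_timex lacks would otherwise be left holding stale kernel stack contents. The general shape, with struct foo/compat_foo as hypothetical placeholders:

#include <linux/types.h>
#include <linux/string.h>
#include <linux/uaccess.h>

struct foo	  { u64 a; u64 native_only; };
struct compat_foo { u32 a; };

static int get_foo32(struct foo *dst, const struct compat_foo __user *src)
{
	struct compat_foo c;

	memset(dst, 0, sizeof(*dst));	/* native_only stays zero */
	if (copy_from_user(&c, src, sizeof(c)))
		return -EFAULT;
	dst->a = c.a;
	return 0;
}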
+5 -2
kernel/events/ring_buffer.c
··· 14 14 #include <linux/slab.h> 15 15 #include <linux/circ_buf.h> 16 16 #include <linux/poll.h> 17 + #include <linux/nospec.h> 17 18 18 19 #include "internal.h" 19 20 ··· 868 867 return NULL; 869 868 870 869 /* AUX space */ 871 - if (pgoff >= rb->aux_pgoff) 872 - return virt_to_page(rb->aux_pages[pgoff - rb->aux_pgoff]); 870 + if (pgoff >= rb->aux_pgoff) { 871 + int aux_pgoff = array_index_nospec(pgoff - rb->aux_pgoff, rb->aux_nr_pages); 872 + return virt_to_page(rb->aux_pages[aux_pgoff]); 873 + } 873 874 } 874 875 875 876 return __perf_mmap_to_page(rb, pgoff);
+3 -4
kernel/events/uprobes.c
··· 491 491 if (!uprobe) 492 492 return NULL; 493 493 494 - uprobe->inode = igrab(inode); 494 + uprobe->inode = inode; 495 495 uprobe->offset = offset; 496 496 init_rwsem(&uprobe->register_rwsem); 497 497 init_rwsem(&uprobe->consumer_rwsem); ··· 502 502 if (cur_uprobe) { 503 503 kfree(uprobe); 504 504 uprobe = cur_uprobe; 505 - iput(inode); 506 505 } 507 506 508 507 return uprobe; ··· 700 701 rb_erase(&uprobe->rb_node, &uprobes_tree); 701 702 spin_unlock(&uprobes_treelock); 702 703 RB_CLEAR_NODE(&uprobe->rb_node); /* for uprobe_is_active() */ 703 - iput(uprobe->inode); 704 704 put_uprobe(uprobe); 705 705 } 706 706 ··· 871 873 * tuple). Creation refcount stops uprobe_unregister from freeing the 872 874 * @uprobe even before the register operation is complete. Creation 873 875 * refcount is released when the last @uc for the @uprobe 874 - * unregisters. 876 + * unregisters. Caller of uprobe_register() is required to keep @inode 877 + * (and the containing mount) referenced. 875 878 * 876 879 * Return errno if it cannot successully install probes 877 880 * else return 0 (success)
+23 -27
kernel/kthread.c
··· 55 55 KTHREAD_IS_PER_CPU = 0, 56 56 KTHREAD_SHOULD_STOP, 57 57 KTHREAD_SHOULD_PARK, 58 - KTHREAD_IS_PARKED, 59 58 }; 60 59 61 60 static inline void set_kthread_struct(void *kthread) ··· 176 177 177 178 static void __kthread_parkme(struct kthread *self) 178 179 { 179 - __set_current_state(TASK_PARKED); 180 - while (test_bit(KTHREAD_SHOULD_PARK, &self->flags)) { 181 - if (!test_and_set_bit(KTHREAD_IS_PARKED, &self->flags)) 182 - complete(&self->parked); 180 + for (;;) { 181 + set_current_state(TASK_PARKED); 182 + if (!test_bit(KTHREAD_SHOULD_PARK, &self->flags)) 183 + break; 183 184 schedule(); 184 - __set_current_state(TASK_PARKED); 185 185 } 186 - clear_bit(KTHREAD_IS_PARKED, &self->flags); 187 186 __set_current_state(TASK_RUNNING); 188 187 } 189 188 ··· 190 193 __kthread_parkme(to_kthread(current)); 191 194 } 192 195 EXPORT_SYMBOL_GPL(kthread_parkme); 196 + 197 + void kthread_park_complete(struct task_struct *k) 198 + { 199 + complete(&to_kthread(k)->parked); 200 + } 193 201 194 202 static int kthread(void *_create) 195 203 { ··· 452 450 { 453 451 struct kthread *kthread = to_kthread(k); 454 452 455 - clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags); 456 453 /* 457 - * We clear the IS_PARKED bit here as we don't wait 458 - * until the task has left the park code. So if we'd 459 - * park before that happens we'd see the IS_PARKED bit 460 - * which might be about to be cleared. 454 + * Newly created kthread was parked when the CPU was offline. 455 + * The binding was lost and we need to set it again. 461 456 */ 462 - if (test_and_clear_bit(KTHREAD_IS_PARKED, &kthread->flags)) { 463 - /* 464 - * Newly created kthread was parked when the CPU was offline. 465 - * The binding was lost and we need to set it again. 466 - */ 467 - if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags)) 468 - __kthread_bind(k, kthread->cpu, TASK_PARKED); 469 - wake_up_state(k, TASK_PARKED); 470 - } 457 + if (test_bit(KTHREAD_IS_PER_CPU, &kthread->flags)) 458 + __kthread_bind(k, kthread->cpu, TASK_PARKED); 459 + 460 + clear_bit(KTHREAD_SHOULD_PARK, &kthread->flags); 461 + wake_up_state(k, TASK_PARKED); 471 462 } 472 463 EXPORT_SYMBOL_GPL(kthread_unpark); 473 464 ··· 483 488 if (WARN_ON(k->flags & PF_EXITING)) 484 489 return -ENOSYS; 485 490 486 - if (!test_bit(KTHREAD_IS_PARKED, &kthread->flags)) { 487 - set_bit(KTHREAD_SHOULD_PARK, &kthread->flags); 488 - if (k != current) { 489 - wake_up_process(k); 490 - wait_for_completion(&kthread->parked); 491 - } 491 + if (WARN_ON_ONCE(test_bit(KTHREAD_SHOULD_PARK, &kthread->flags))) 492 + return -EBUSY; 493 + 494 + set_bit(KTHREAD_SHOULD_PARK, &kthread->flags); 495 + if (k != current) { 496 + wake_up_process(k); 497 + wait_for_completion(&kthread->parked); 492 498 } 493 499 494 500 return 0;
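From a caller's perspective the park API is unchanged; a usage sketch under the reworked completion scheme (do_maintenance() is a hypothetical stand-in):

#include <linux/kthread.h>

void do_maintenance(void);	/* hypothetical */

static void quiesce_worker(struct task_struct *worker)
{
	/* now returns -EBUSY if a park is already in flight */
	if (kthread_park(worker))
		return;
	do_maintenance();	/* worker is guaranteed in TASK_PARKED */
	kthread_unpark(worker);	/* rebinds per-CPU threads, then wakes */
}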
+5
kernel/module.c
··· 3517 3517 * walking this with preempt disabled. In all the failure paths, we 3518 3518 * call synchronize_sched(), but we don't want to slow down the success 3519 3519 * path, so use actual RCU here. 3520 + * Note that module_alloc() on most architectures creates W+X page 3521 + * mappings which won't be cleaned up until do_free_init() runs. Any 3522 + * code such as mark_rodata_ro() which depends on those mappings to 3523 + * be cleaned up needs to sync with the queued work - ie 3524 + * rcu_barrier_sched() 3520 3525 */ 3521 3526 call_rcu_sched(&freeinit->rcu, do_free_init); 3522 3527 mutex_unlock(&module_mutex);
+5 -2
kernel/sched/autogroup.c
··· 2 2 /* 3 3 * Auto-group scheduling implementation: 4 4 */ 5 + #include <linux/nospec.h> 5 6 #include "sched.h" 6 7 7 8 unsigned int __read_mostly sysctl_sched_autogroup_enabled = 1; ··· 210 209 static unsigned long next = INITIAL_JIFFIES; 211 210 struct autogroup *ag; 212 211 unsigned long shares; 213 - int err; 212 + int err, idx; 214 213 215 214 if (nice < MIN_NICE || nice > MAX_NICE) 216 215 return -EINVAL; ··· 228 227 229 228 next = HZ / 10 + jiffies; 230 229 ag = autogroup_task_get(p); 231 - shares = scale_load(sched_prio_to_weight[nice + 20]); 230 + 231 + idx = array_index_nospec(nice + 20, 40); 232 + shares = scale_load(sched_prio_to_weight[idx]); 232 233 233 234 down_write(&ag->lock); 234 235 err = sched_group_set_shares(ag->tg, shares);
+28 -28
kernel/sched/core.c
··· 7 7 */ 8 8 #include "sched.h" 9 9 10 + #include <linux/kthread.h> 11 + #include <linux/nospec.h> 12 + 10 13 #include <asm/switch_to.h> 11 14 #include <asm/tlb.h> 12 15 ··· 2721 2718 membarrier_mm_sync_core_before_usermode(mm); 2722 2719 mmdrop(mm); 2723 2720 } 2724 - if (unlikely(prev_state == TASK_DEAD)) { 2725 - if (prev->sched_class->task_dead) 2726 - prev->sched_class->task_dead(prev); 2721 + if (unlikely(prev_state & (TASK_DEAD|TASK_PARKED))) { 2722 + switch (prev_state) { 2723 + case TASK_DEAD: 2724 + if (prev->sched_class->task_dead) 2725 + prev->sched_class->task_dead(prev); 2727 2726 2728 - /* 2729 - * Remove function-return probe instances associated with this 2730 - * task and put them back on the free list. 2731 - */ 2732 - kprobe_flush_task(prev); 2727 + /* 2728 + * Remove function-return probe instances associated with this 2729 + * task and put them back on the free list. 2730 + */ 2731 + kprobe_flush_task(prev); 2733 2732 2734 - /* Task is done with its stack. */ 2735 - put_task_stack(prev); 2733 + /* Task is done with its stack. */ 2734 + put_task_stack(prev); 2736 2735 2737 - put_task_struct(prev); 2736 + put_task_struct(prev); 2737 + break; 2738 + 2739 + case TASK_PARKED: 2740 + kthread_park_complete(prev); 2741 + break; 2742 + } 2738 2743 } 2739 2744 2740 2745 tick_nohz_task_switch(); ··· 3509 3498 3510 3499 void __noreturn do_task_dead(void) 3511 3500 { 3512 - /* 3513 - * The setting of TASK_RUNNING by try_to_wake_up() may be delayed 3514 - * when the following two conditions become true. 3515 - * - There is race condition of mmap_sem (It is acquired by 3516 - * exit_mm()), and 3517 - * - SMI occurs before setting TASK_RUNINNG. 3518 - * (or hypervisor of virtual machine switches to other guest) 3519 - * As a result, we may become TASK_RUNNING after becoming TASK_DEAD 3520 - * 3521 - * To avoid it, we have to wait for releasing tsk->pi_lock which 3522 - * is held by try_to_wake_up() 3523 - */ 3524 - raw_spin_lock_irq(&current->pi_lock); 3525 - raw_spin_unlock_irq(&current->pi_lock); 3526 - 3527 3501 /* Causes final put_task_struct in finish_task_switch(): */ 3528 - __set_current_state(TASK_DEAD); 3502 + set_special_state(TASK_DEAD); 3529 3503 3530 3504 /* Tell freezer to ignore us: */ 3531 3505 current->flags |= PF_NOFREEZE; ··· 6924 6928 struct cftype *cft, s64 nice) 6925 6929 { 6926 6930 unsigned long weight; 6931 + int idx; 6927 6932 6928 6933 if (nice < MIN_NICE || nice > MAX_NICE) 6929 6934 return -ERANGE; 6930 6935 6931 - weight = sched_prio_to_weight[NICE_TO_PRIO(nice) - MAX_RT_PRIO]; 6936 + idx = NICE_TO_PRIO(nice) - MAX_RT_PRIO; 6937 + idx = array_index_nospec(idx, 40); 6938 + weight = sched_prio_to_weight[idx]; 6939 + 6932 6940 return sched_group_set_shares(css_tg(css), scale_load(weight)); 6933 6941 } 6934 6942 #endif
+2 -14
kernel/sched/cpufreq_schedutil.c
··· 305 305 * Do not reduce the frequency if the CPU has not been idle 306 306 * recently, as the reduction is likely to be premature then. 307 307 */ 308 - if (busy && next_f < sg_policy->next_freq) { 308 + if (busy && next_f < sg_policy->next_freq && 309 + sg_policy->next_freq != UINT_MAX) { 309 310 next_f = sg_policy->next_freq; 310 311 311 312 /* Reset cached freq as next_freq has changed */ ··· 397 396 398 397 sg_policy = container_of(irq_work, struct sugov_policy, irq_work); 399 398 400 - /* 401 - * For RT tasks, the schedutil governor shoots the frequency to maximum. 402 - * Special care must be taken to ensure that this kthread doesn't result 403 - * in the same behavior. 404 - * 405 - * This is (mostly) guaranteed by the work_in_progress flag. The flag is 406 - * updated only at the end of the sugov_work() function and before that 407 - * the schedutil governor rejects all other frequency scaling requests. 408 - * 409 - * There is a very rare case though, where the RT thread yields right 410 - * after the work_in_progress flag is cleared. The effects of that are 411 - * neglected for now. 412 - */ 413 399 kthread_queue_work(&sg_policy->worker, &sg_policy->work); 414 400 } 415 401
+2 -57
kernel/sched/fair.c
··· 1854 1854 static void numa_migrate_preferred(struct task_struct *p) 1855 1855 { 1856 1856 unsigned long interval = HZ; 1857 - unsigned long numa_migrate_retry; 1858 1857 1859 1858 /* This task has no NUMA fault statistics yet */ 1860 1859 if (unlikely(p->numa_preferred_nid == -1 || !p->numa_faults)) ··· 1861 1862 1862 1863 /* Periodically retry migrating the task to the preferred node */ 1863 1864 interval = min(interval, msecs_to_jiffies(p->numa_scan_period) / 16); 1864 - numa_migrate_retry = jiffies + interval; 1865 - 1866 - /* 1867 - * Check that the new retry threshold is after the current one. If 1868 - * the retry is in the future, it implies that wake_affine has 1869 - * temporarily asked NUMA balancing to backoff from placement. 1870 - */ 1871 - if (numa_migrate_retry > p->numa_migrate_retry) 1872 - return; 1873 - 1874 - /* Safe to try placing the task on the preferred node */ 1875 - p->numa_migrate_retry = numa_migrate_retry; 1865 + p->numa_migrate_retry = jiffies + interval; 1876 1866 1877 1867 /* Success if task is already running on preferred CPU */ 1878 1868 if (task_node(p) == p->numa_preferred_nid) ··· 5910 5922 return this_eff_load < prev_eff_load ? this_cpu : nr_cpumask_bits; 5911 5923 } 5912 5924 5913 - #ifdef CONFIG_NUMA_BALANCING 5914 - static void 5915 - update_wa_numa_placement(struct task_struct *p, int prev_cpu, int target) 5916 - { 5917 - unsigned long interval; 5918 - 5919 - if (!static_branch_likely(&sched_numa_balancing)) 5920 - return; 5921 - 5922 - /* If balancing has no preference then continue gathering data */ 5923 - if (p->numa_preferred_nid == -1) 5924 - return; 5925 - 5926 - /* 5927 - * If the wakeup is not affecting locality then it is neutral from 5928 - * the perspective of NUMA balacing so continue gathering data. 5929 - */ 5930 - if (cpu_to_node(prev_cpu) == cpu_to_node(target)) 5931 - return; 5932 - 5933 - /* 5934 - * Temporarily prevent NUMA balancing trying to place waker/wakee after 5935 - * wakee has been moved by wake_affine. This will potentially allow 5936 - * related tasks to converge and update their data placement. The 5937 - * 4 * numa_scan_period is to allow the two-pass filter to migrate 5938 - * hot data to the wakers node. 5939 - */ 5940 - interval = max(sysctl_numa_balancing_scan_delay, 5941 - p->numa_scan_period << 2); 5942 - p->numa_migrate_retry = jiffies + msecs_to_jiffies(interval); 5943 - 5944 - interval = max(sysctl_numa_balancing_scan_delay, 5945 - current->numa_scan_period << 2); 5946 - current->numa_migrate_retry = jiffies + msecs_to_jiffies(interval); 5947 - } 5948 - #else 5949 - static void 5950 - update_wa_numa_placement(struct task_struct *p, int prev_cpu, int target) 5951 - { 5952 - } 5953 - #endif 5954 - 5955 5925 static int wake_affine(struct sched_domain *sd, struct task_struct *p, 5956 5926 int this_cpu, int prev_cpu, int sync) 5957 5927 { ··· 5925 5979 if (target == nr_cpumask_bits) 5926 5980 return prev_cpu; 5927 5981 5928 - update_wa_numa_placement(p, prev_cpu, target); 5929 5982 schedstat_inc(sd->ttwu_move_affine); 5930 5983 schedstat_inc(p->se.statistics.nr_wakeups_affine); 5931 5984 return target; ··· 9792 9847 if (curr_cost > this_rq->max_idle_balance_cost) 9793 9848 this_rq->max_idle_balance_cost = curr_cost; 9794 9849 9850 + out: 9795 9851 /* 9796 9852 * While browsing the domains, we released the rq lock, a task could 9797 9853 * have been enqueued in the meantime. 
Since we're not going idle, ··· 9801 9855 if (this_rq->cfs.h_nr_running && !pulled_task) 9802 9856 pulled_task = 1; 9803 9857 9804 - out: 9805 9858 /* Move the next balance forward */ 9806 9859 if (time_after(this_rq->next_balance, next_balance)) 9807 9860 this_rq->next_balance = next_balance;
+15 -2
kernel/signal.c
··· 1961 1961 return; 1962 1962 } 1963 1963 1964 + set_special_state(TASK_TRACED); 1965 + 1964 1966 /* 1965 1967 * We're committing to trapping. TRACED should be visible before 1966 1968 * TRAPPING is cleared; otherwise, the tracer might fail do_wait(). 1967 1969 * Also, transition to TRACED and updates to ->jobctl should be 1968 1970 * atomic with respect to siglock and should be done after the arch 1969 1971 * hook as siglock is released and regrabbed across it. 1972 + * 1973 + * TRACER TRACEE 1974 + * 1975 + * ptrace_attach() 1976 + * [L] wait_on_bit(JOBCTL_TRAPPING) [S] set_special_state(TRACED) 1977 + * do_wait() 1978 + * set_current_state() smp_wmb(); 1979 + * ptrace_do_wait() 1980 + * wait_task_stopped() 1981 + * task_stopped_code() 1982 + * [L] task_is_traced() [S] task_clear_jobctl_trapping(); 1970 1983 */ 1971 - set_current_state(TASK_TRACED); 1984 + smp_wmb(); 1972 1985 1973 1986 current->last_siginfo = info; 1974 1987 current->exit_code = exit_code; ··· 2189 2176 if (task_participate_group_stop(current)) 2190 2177 notify = CLD_STOPPED; 2191 2178 2192 - __set_current_state(TASK_STOPPED); 2179 + set_special_state(TASK_STOPPED); 2193 2180 spin_unlock_irq(&current->sighand->siglock); 2194 2181 2195 2182 /*
+14 -5
kernel/stop_machine.c
··· 21 21 #include <linux/smpboot.h> 22 22 #include <linux/atomic.h> 23 23 #include <linux/nmi.h> 24 + #include <linux/sched/wake_q.h> 24 25 25 26 /* 26 27 * Structure to determine completion condition and record errors. May ··· 66 65 } 67 66 68 67 static void __cpu_stop_queue_work(struct cpu_stopper *stopper, 69 - struct cpu_stop_work *work) 68 + struct cpu_stop_work *work, 69 + struct wake_q_head *wakeq) 70 70 { 71 71 list_add_tail(&work->list, &stopper->works); 72 - wake_up_process(stopper->thread); 72 + wake_q_add(wakeq, stopper->thread); 73 73 } 74 74 75 75 /* queue @work to @stopper. if offline, @work is completed immediately */ 76 76 static bool cpu_stop_queue_work(unsigned int cpu, struct cpu_stop_work *work) 77 77 { 78 78 struct cpu_stopper *stopper = &per_cpu(cpu_stopper, cpu); 79 + DEFINE_WAKE_Q(wakeq); 79 80 unsigned long flags; 80 81 bool enabled; 81 82 82 83 spin_lock_irqsave(&stopper->lock, flags); 83 84 enabled = stopper->enabled; 84 85 if (enabled) 85 - __cpu_stop_queue_work(stopper, work); 86 + __cpu_stop_queue_work(stopper, work, &wakeq); 86 87 else if (work->done) 87 88 cpu_stop_signal_done(work->done); 88 89 spin_unlock_irqrestore(&stopper->lock, flags); 90 + 91 + wake_up_q(&wakeq); 89 92 90 93 return enabled; 91 94 } ··· 234 229 { 235 230 struct cpu_stopper *stopper1 = per_cpu_ptr(&cpu_stopper, cpu1); 236 231 struct cpu_stopper *stopper2 = per_cpu_ptr(&cpu_stopper, cpu2); 232 + DEFINE_WAKE_Q(wakeq); 237 233 int err; 238 234 retry: 239 235 spin_lock_irq(&stopper1->lock); ··· 258 252 goto unlock; 259 253 260 254 err = 0; 261 - __cpu_stop_queue_work(stopper1, work1); 262 - __cpu_stop_queue_work(stopper2, work2); 255 + __cpu_stop_queue_work(stopper1, work1, &wakeq); 256 + __cpu_stop_queue_work(stopper2, work2, &wakeq); 263 257 unlock: 264 258 spin_unlock(&stopper2->lock); 265 259 spin_unlock_irq(&stopper1->lock); ··· 269 263 cpu_relax(); 270 264 goto retry; 271 265 } 266 + 267 + wake_up_q(&wakeq); 268 + 272 269 return err; 273 270 } 274 271 /**
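The stop_machine conversion is the standard wake_q deferral pattern: wakeups are recorded while the stopper lock is held and issued only after it drops, so the woken thread can never immediately contend on that lock. Condensed to its essentials:

#include <linux/sched/wake_q.h>
#include <linux/spinlock.h>

static void queue_then_wake(spinlock_t *lock, struct task_struct *thread)
{
	DEFINE_WAKE_Q(wakeq);
	unsigned long flags;

	spin_lock_irqsave(lock, flags);
	/* ... queue the work, decide whom to wake ... */
	wake_q_add(&wakeq, thread);
	spin_unlock_irqrestore(lock, flags);

	wake_up_q(&wakeq);	/* safe: lock already released */
}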
+44 -19
kernel/time/clocksource.c
··· 119 119 static int watchdog_running; 120 120 static atomic_t watchdog_reset_pending; 121 121 122 + static void inline clocksource_watchdog_lock(unsigned long *flags) 123 + { 124 + spin_lock_irqsave(&watchdog_lock, *flags); 125 + } 126 + 127 + static void inline clocksource_watchdog_unlock(unsigned long *flags) 128 + { 129 + spin_unlock_irqrestore(&watchdog_lock, *flags); 130 + } 131 + 122 132 static int clocksource_watchdog_kthread(void *data); 123 133 static void __clocksource_change_rating(struct clocksource *cs, int rating); 124 134 ··· 152 142 cs->flags &= ~(CLOCK_SOURCE_VALID_FOR_HRES | CLOCK_SOURCE_WATCHDOG); 153 143 cs->flags |= CLOCK_SOURCE_UNSTABLE; 154 144 145 + /* 146 + * If the clocksource is registered clocksource_watchdog_kthread() will 147 + * re-rate and re-select. 148 + */ 149 + if (list_empty(&cs->list)) { 150 + cs->rating = 0; 151 + return; 152 + } 153 + 155 154 if (cs->mark_unstable) 156 155 cs->mark_unstable(cs); 157 156 157 + /* kick clocksource_watchdog_kthread() */ 158 158 if (finished_booting) 159 159 schedule_work(&watchdog_work); 160 160 } ··· 173 153 * clocksource_mark_unstable - mark clocksource unstable via watchdog 174 154 * @cs: clocksource to be marked unstable 175 155 * 176 - * This function is called instead of clocksource_change_rating from 177 - * cpu hotplug code to avoid a deadlock between the clocksource mutex 178 - * and the cpu hotplug mutex. It defers the update of the clocksource 179 - * to the watchdog thread. 156 + * This function is called by the x86 TSC code to mark clocksources as unstable; 157 + * it defers demotion and re-selection to a kthread. 180 158 */ 181 159 void clocksource_mark_unstable(struct clocksource *cs) 182 160 { ··· 182 164 183 165 spin_lock_irqsave(&watchdog_lock, flags); 184 166 if (!(cs->flags & CLOCK_SOURCE_UNSTABLE)) { 185 - if (list_empty(&cs->wd_list)) 167 + if (!list_empty(&cs->list) && list_empty(&cs->wd_list)) 186 168 list_add(&cs->wd_list, &watchdog_list); 187 169 __clocksource_unstable(cs); 188 170 } ··· 337 319 338 320 static void clocksource_enqueue_watchdog(struct clocksource *cs) 339 321 { 340 - unsigned long flags; 322 + INIT_LIST_HEAD(&cs->wd_list); 341 323 342 - spin_lock_irqsave(&watchdog_lock, flags); 343 324 if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) { 344 325 /* cs is a clocksource to be watched. */ 345 326 list_add(&cs->wd_list, &watchdog_list); ··· 348 331 if (cs->flags & CLOCK_SOURCE_IS_CONTINUOUS) 349 332 cs->flags |= CLOCK_SOURCE_VALID_FOR_HRES; 350 333 } 351 - spin_unlock_irqrestore(&watchdog_lock, flags); 352 334 } 353 335 354 336 static void clocksource_select_watchdog(bool fallback) ··· 389 373 390 374 static void clocksource_dequeue_watchdog(struct clocksource *cs) 391 375 { 392 - unsigned long flags; 393 - 394 - spin_lock_irqsave(&watchdog_lock, flags); 395 376 if (cs != watchdog) { 396 377 if (cs->flags & CLOCK_SOURCE_MUST_VERIFY) { 397 378 /* cs is a watched clocksource. 
*/ ··· 397 384 clocksource_stop_watchdog(); 398 385 } 399 386 } 400 - spin_unlock_irqrestore(&watchdog_lock, flags); 401 387 } 402 388 403 389 static int __clocksource_watchdog_kthread(void) 404 390 { 405 391 struct clocksource *cs, *tmp; 406 392 unsigned long flags; 407 - LIST_HEAD(unstable); 408 393 int select = 0; 409 394 410 395 spin_lock_irqsave(&watchdog_lock, flags); 411 396 list_for_each_entry_safe(cs, tmp, &watchdog_list, wd_list) { 412 397 if (cs->flags & CLOCK_SOURCE_UNSTABLE) { 413 398 list_del_init(&cs->wd_list); 414 - list_add(&cs->wd_list, &unstable); 399 + __clocksource_change_rating(cs, 0); 415 400 select = 1; 416 401 } 417 402 if (cs->flags & CLOCK_SOURCE_RESELECT) { ··· 421 410 clocksource_stop_watchdog(); 422 411 spin_unlock_irqrestore(&watchdog_lock, flags); 423 412 424 - /* Needs to be done outside of watchdog lock */ 425 - list_for_each_entry_safe(cs, tmp, &unstable, wd_list) { 426 - list_del_init(&cs->wd_list); 427 - __clocksource_change_rating(cs, 0); 428 - } 429 413 return select; 430 414 } 431 415 ··· 452 446 static inline int __clocksource_watchdog_kthread(void) { return 0; } 453 447 static bool clocksource_is_watchdog(struct clocksource *cs) { return false; } 454 448 void clocksource_mark_unstable(struct clocksource *cs) { } 449 + 450 + static void inline clocksource_watchdog_lock(unsigned long *flags) { } 451 + static void inline clocksource_watchdog_unlock(unsigned long *flags) { } 455 452 456 453 #endif /* CONFIG_CLOCKSOURCE_WATCHDOG */ 457 454 ··· 788 779 */ 789 780 int __clocksource_register_scale(struct clocksource *cs, u32 scale, u32 freq) 790 781 { 782 + unsigned long flags; 791 783 792 784 /* Initialize mult/shift and max_idle_ns */ 793 785 __clocksource_update_freq_scale(cs, scale, freq); 794 786 795 787 /* Add clocksource to the clocksource list */ 796 788 mutex_lock(&clocksource_mutex); 789 + 790 + clocksource_watchdog_lock(&flags); 797 791 clocksource_enqueue(cs); 798 792 clocksource_enqueue_watchdog(cs); 793 + clocksource_watchdog_unlock(&flags); 794 + 799 795 clocksource_select(); 800 796 clocksource_select_watchdog(false); 801 797 mutex_unlock(&clocksource_mutex); ··· 822 808 */ 823 809 void clocksource_change_rating(struct clocksource *cs, int rating) 824 810 { 811 + unsigned long flags; 812 + 825 813 mutex_lock(&clocksource_mutex); 814 + clocksource_watchdog_lock(&flags); 826 815 __clocksource_change_rating(cs, rating); 816 + clocksource_watchdog_unlock(&flags); 817 + 827 818 clocksource_select(); 828 819 clocksource_select_watchdog(false); 829 820 mutex_unlock(&clocksource_mutex); ··· 840 821 */ 841 822 static int clocksource_unbind(struct clocksource *cs) 842 823 { 824 + unsigned long flags; 825 + 843 826 if (clocksource_is_watchdog(cs)) { 844 827 /* Select and try to install a replacement watchdog. */ 845 828 clocksource_select_watchdog(true); ··· 855 834 if (curr_clocksource == cs) 856 835 return -EBUSY; 857 836 } 837 + 838 + clocksource_watchdog_lock(&flags); 858 839 clocksource_dequeue_watchdog(cs); 859 840 list_del_init(&cs->list); 841 + clocksource_watchdog_unlock(&flags); 842 + 860 843 return 0; 861 844 } 862 845
+2 -2
kernel/trace/ftrace.c
··· 5514 5514 ftrace_create_filter_files(&global_ops, d_tracer); 5515 5515 5516 5516 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 5517 - trace_create_file("set_graph_function", 0444, d_tracer, 5517 + trace_create_file("set_graph_function", 0644, d_tracer, 5518 5518 NULL, 5519 5519 &ftrace_graph_fops); 5520 - trace_create_file("set_graph_notrace", 0444, d_tracer, 5520 + trace_create_file("set_graph_notrace", 0644, d_tracer, 5521 5521 NULL, 5522 5522 &ftrace_graph_notrace_fops); 5523 5523 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+3
kernel/trace/trace_events_filter.c
··· 762 762 763 763 static int regex_match_front(char *str, struct regex *r, int len) 764 764 { 765 + if (len < r->len) 766 + return 0; 767 + 765 768 if (strncmp(str, r->pattern, r->len) == 0) 766 769 return 1; 767 770 return 0;
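The added guard matters because strncmp() always compares r->len bytes: without it, an input shorter than the pattern could be reported as a match or read past its buffer. A standalone sketch of the fixed helper:

#include <stdio.h>
#include <string.h>

struct regex {
        const char *pattern;
        int len;
};

static int regex_match_front(const char *str, const struct regex *r, int len)
{
        if (len < r->len)       /* input shorter than pattern: cannot match */
                return 0;
        return strncmp(str, r->pattern, r->len) == 0;
}

int main(void)
{
        const struct regex r = { .pattern = "sched_", .len = 6 };

        printf("%d\n", regex_match_front("sched_switch", &r, 12)); /* 1 */
        printf("%d\n", regex_match_front("sch", &r, 3));           /* 0 */
        return 0;
}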
+12
kernel/trace/trace_events_hist.c
··· 2466 2466 else if (strcmp(modifier, "usecs") == 0) 2467 2467 *flags |= HIST_FIELD_FL_TIMESTAMP_USECS; 2468 2468 else { 2469 + hist_err("Invalid field modifier: ", modifier); 2469 2470 field = ERR_PTR(-EINVAL); 2470 2471 goto out; 2471 2472 } ··· 2482 2481 else { 2483 2482 field = trace_find_event_field(file->event_call, field_name); 2484 2483 if (!field || !field->size) { 2484 + hist_err("Couldn't find field: ", field_name); 2485 2485 field = ERR_PTR(-EINVAL); 2486 2486 goto out; 2487 2487 } ··· 4915 4913 seq_printf(m, "%s", field_name); 4916 4914 } else if (hist_field->flags & HIST_FIELD_FL_TIMESTAMP) 4917 4915 seq_puts(m, "common_timestamp"); 4916 + 4917 + if (hist_field->flags) { 4918 + if (!(hist_field->flags & HIST_FIELD_FL_VAR_REF) && 4919 + !(hist_field->flags & HIST_FIELD_FL_EXPR)) { 4920 + const char *flags = get_hist_field_flags(hist_field); 4921 + 4922 + if (flags) 4923 + seq_printf(m, ".%s", flags); 4924 + } 4925 + } 4918 4926 } 4919 4927 4920 4928 static int event_hist_trigger_print(struct seq_file *m,
+1 -1
kernel/trace/trace_stack.c
··· 472 472 NULL, &stack_trace_fops); 473 473 474 474 #ifdef CONFIG_DYNAMIC_FTRACE 475 - trace_create_file("stack_trace_filter", 0444, d_tracer, 475 + trace_create_file("stack_trace_filter", 0644, d_tracer, 476 476 &trace_ops, &stack_trace_filter_fops); 477 477 #endif 478 478
+14 -21
kernel/trace/trace_uprobe.c
··· 55 55 struct list_head list; 56 56 struct trace_uprobe_filter filter; 57 57 struct uprobe_consumer consumer; 58 + struct path path; 58 59 struct inode *inode; 59 60 char *filename; 60 61 unsigned long offset; ··· 290 289 for (i = 0; i < tu->tp.nr_args; i++) 291 290 traceprobe_free_probe_arg(&tu->tp.args[i]); 292 291 293 - iput(tu->inode); 292 + path_put(&tu->path); 294 293 kfree(tu->tp.call.class->system); 295 294 kfree(tu->tp.call.name); 296 295 kfree(tu->filename); ··· 364 363 static int create_trace_uprobe(int argc, char **argv) 365 364 { 366 365 struct trace_uprobe *tu; 367 - struct inode *inode; 368 366 char *arg, *event, *group, *filename; 369 367 char buf[MAX_EVENT_NAME_LEN]; 370 368 struct path path; ··· 371 371 bool is_delete, is_return; 372 372 int i, ret; 373 373 374 - inode = NULL; 375 374 ret = 0; 376 375 is_delete = false; 377 376 is_return = false; ··· 436 437 } 437 438 /* Find the last occurrence, in case the path contains ':' too. */ 438 439 arg = strrchr(argv[1], ':'); 439 - if (!arg) { 440 - ret = -EINVAL; 441 - goto fail_address_parse; 442 - } 440 + if (!arg) 441 + return -EINVAL; 443 442 444 443 *arg++ = '\0'; 445 444 filename = argv[1]; 446 445 ret = kern_path(filename, LOOKUP_FOLLOW, &path); 447 446 if (ret) 448 - goto fail_address_parse; 447 + return ret; 449 448 450 - inode = igrab(d_real_inode(path.dentry)); 451 - path_put(&path); 452 - 453 - if (!inode || !S_ISREG(inode->i_mode)) { 449 + if (!d_is_reg(path.dentry)) { 454 450 ret = -EINVAL; 455 451 goto fail_address_parse; 456 452 } ··· 484 490 goto fail_address_parse; 485 491 } 486 492 tu->offset = offset; 487 - tu->inode = inode; 493 + tu->path = path; 488 494 tu->filename = kstrdup(filename, GFP_KERNEL); 489 495 490 496 if (!tu->filename) { ··· 552 558 return ret; 553 559 554 560 fail_address_parse: 555 - iput(inode); 561 + path_put(&path); 556 562 557 563 pr_info("Failed to parse address or file.\n"); 558 564 ··· 916 922 goto err_flags; 917 923 918 924 tu->consumer.filter = filter; 925 + tu->inode = d_real_inode(tu->path.dentry); 919 926 ret = uprobe_register(tu->inode, tu->offset, &tu->consumer); 920 927 if (ret) 921 928 goto err_buffer; ··· 962 967 WARN_ON(!uprobe_filter_is_empty(&tu->filter)); 963 968 964 969 uprobe_unregister(tu->inode, tu->offset, &tu->consumer); 970 + tu->inode = NULL; 965 971 tu->tp.flags &= file ? ~TP_FLAG_TRACE : ~TP_FLAG_PROFILE; 966 972 967 973 uprobe_buffer_disable(); ··· 1333 1337 create_local_trace_uprobe(char *name, unsigned long offs, bool is_return) 1334 1338 { 1335 1339 struct trace_uprobe *tu; 1336 - struct inode *inode; 1337 1340 struct path path; 1338 1341 int ret; 1339 1342 ··· 1340 1345 if (ret) 1341 1346 return ERR_PTR(ret); 1342 1347 1343 - inode = igrab(d_inode(path.dentry)); 1344 - path_put(&path); 1345 - 1346 - if (!inode || !S_ISREG(inode->i_mode)) { 1347 - iput(inode); 1348 + if (!d_is_reg(path.dentry)) { 1349 + path_put(&path); 1348 1350 return ERR_PTR(-EINVAL); 1349 1351 } 1350 1352 ··· 1356 1364 if (IS_ERR(tu)) { 1357 1365 pr_info("Failed to allocate trace_uprobe.(%d)\n", 1358 1366 (int)PTR_ERR(tu)); 1367 + path_put(&path); 1359 1368 return ERR_CAST(tu); 1360 1369 } 1361 1370 1362 1371 tu->offset = offs; 1363 - tu->inode = inode; 1372 + tu->path = path; 1364 1373 tu->filename = kstrdup(name, GFP_KERNEL); 1365 1374 init_trace_event_call(tu, &tu->tp.call); 1366 1375
+2 -2
kernel/tracepoint.c
··· 207 207 lockdep_is_held(&tracepoints_mutex)); 208 208 old = func_add(&tp_funcs, func, prio); 209 209 if (IS_ERR(old)) { 210 - WARN_ON_ONCE(1); 210 + WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM); 211 211 return PTR_ERR(old); 212 212 } 213 213 ··· 239 239 lockdep_is_held(&tracepoints_mutex)); 240 240 old = func_remove(&tp_funcs, func); 241 241 if (IS_ERR(old)) { 242 - WARN_ON_ONCE(1); 242 + WARN_ON_ONCE(PTR_ERR(old) != -ENOMEM); 243 243 return PTR_ERR(old); 244 244 } 245 245
+9 -14
lib/errseq.c
··· 111 111 * errseq_sample() - Grab current errseq_t value. 112 112 * @eseq: Pointer to errseq_t to be sampled. 113 113 * 114 - * This function allows callers to sample an errseq_t value, marking it as 115 - * "seen" if required. 114 + * This function allows callers to initialise their errseq_t variable. 115 + * If the error has been "seen", new callers will not see an old error. 116 + * If there is an unseen error in @eseq, the caller of this function will 117 + * see it the next time it checks for an error. 116 118 * 119 + * Context: Any context. 117 120 * Return: The current errseq value. 118 121 */ 119 122 errseq_t errseq_sample(errseq_t *eseq) 120 123 { 121 124 errseq_t old = READ_ONCE(*eseq); 122 - errseq_t new = old; 123 125 124 - /* 125 - * For the common case of no errors ever having been set, we can skip 126 - * marking the SEEN bit. Once an error has been set, the value will 127 - * never go back to zero. 128 - */ 129 - if (old != 0) { 130 - new |= ERRSEQ_SEEN; 131 - if (old != new) 132 - cmpxchg(eseq, old, new); 133 - } 134 - return new; 126 + /* If nobody has seen this error yet, then we can be the first. */ 127 + if (!(old & ERRSEQ_SEEN)) 128 + old = 0; 129 + return old; 135 130 } 136 131 EXPORT_SYMBOL(errseq_sample); 137 132
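With this change, sampling an errseq_t that carries a not-yet-seen error yields 0, so the sampler's next errseq_check() against that cursor reports the error rather than silently marking it seen. A toy userspace model of the semantics (the bit layout is invented for the sketch; the kernel packs errno, a SEEN bit and a counter differently):

#include <stdio.h>

typedef unsigned int errseq_t;

#define ERRSEQ_SEEN 0x100u      /* invented bit position for the sketch */

static void errseq_set(errseq_t *eseq, int err)
{
        /* the kernel also bumps a sequence counter; elided here */
        *eseq = (unsigned int)(-err) & 0xff;
}

static errseq_t errseq_sample(const errseq_t *eseq)
{
        errseq_t old = *eseq;

        /* unseen error: sample as 0 so a later check still reports it */
        if (!(old & ERRSEQ_SEEN))
                old = 0;
        return old;
}

static int errseq_check(const errseq_t *eseq, errseq_t since)
{
        errseq_t cur = *eseq;

        return cur == since ? 0 : -(int)(cur & 0xff);
}

int main(void)
{
        errseq_t es = 0;

        errseq_set(&es, -5);                    /* an error nobody has seen */
        errseq_t since = errseq_sample(&es);    /* 0, not the raw value */
        printf("check: %d\n", errseq_check(&es, since));  /* -5: reported */
        return 0;
}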
+6 -1
lib/find_bit_benchmark.c
··· 132 132 test_find_next_bit(bitmap, BITMAP_LEN); 133 133 test_find_next_zero_bit(bitmap, BITMAP_LEN); 134 134 test_find_last_bit(bitmap, BITMAP_LEN); 135 - test_find_first_bit(bitmap, BITMAP_LEN); 135 + 136 + /* 137 + * test_find_first_bit() may take some time, so 138 + * traverse only part of bitmap to avoid soft lockup. 139 + */ 140 + test_find_first_bit(bitmap, BITMAP_LEN / 10); 136 141 test_find_next_and_bit(bitmap, bitmap2, BITMAP_LEN); 137 142 138 143 pr_err("\nStart testing find_bit() with sparse bitmap\n");
+2 -2
lib/swiotlb.c
··· 714 714 715 715 phys_addr = swiotlb_tbl_map_single(dev, 716 716 __phys_to_dma(dev, io_tlb_start), 717 - 0, size, DMA_FROM_DEVICE, 0); 717 + 0, size, DMA_FROM_DEVICE, attrs); 718 718 if (phys_addr == SWIOTLB_MAP_ERROR) 719 719 goto out_warn; 720 720 ··· 737 737 swiotlb_tbl_unmap_single(dev, phys_addr, size, DMA_TO_DEVICE, 738 738 DMA_ATTR_SKIP_CPU_SYNC); 739 739 out_warn: 740 - if ((attrs & DMA_ATTR_NO_WARN) && printk_ratelimit()) { 740 + if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit()) { 741 741 dev_warn(dev, 742 742 "swiotlb: coherent allocation failed, size=%zu\n", 743 743 size);
+2 -1
mm/backing-dev.c
··· 115 115 bdi, &bdi_debug_stats_fops); 116 116 if (!bdi->debug_stats) { 117 117 debugfs_remove(bdi->debug_dir); 118 + bdi->debug_dir = NULL; 118 119 return -ENOMEM; 119 120 } 120 121 ··· 384 383 * the barrier provided by test_and_clear_bit() above. 385 384 */ 386 385 smp_wmb(); 387 - clear_bit(WB_shutting_down, &wb->state); 386 + clear_and_wake_up_bit(WB_shutting_down, &wb->state); 388 387 } 389 388 390 389 static void wb_exit(struct bdi_writeback *wb)
+1 -3
mm/migrate.c
··· 528 528 int i; 529 529 int index = page_index(page); 530 530 531 - for (i = 0; i < HPAGE_PMD_NR; i++) { 531 + for (i = 1; i < HPAGE_PMD_NR; i++) { 532 532 pslot = radix_tree_lookup_slot(&mapping->i_pages, 533 533 index + i); 534 534 radix_tree_replace_slot(&mapping->i_pages, pslot, 535 535 newpage + i); 536 536 } 537 - } else { 538 - radix_tree_replace_slot(&mapping->i_pages, pslot, newpage); 539 537 } 540 538 541 539 /*
+58 -18
mm/mmap.c
··· 1324 1324 return 0; 1325 1325 } 1326 1326 1327 + static inline u64 file_mmap_size_max(struct file *file, struct inode *inode) 1328 + { 1329 + if (S_ISREG(inode->i_mode)) 1330 + return inode->i_sb->s_maxbytes; 1331 + 1332 + if (S_ISBLK(inode->i_mode)) 1333 + return MAX_LFS_FILESIZE; 1334 + 1335 + /* Special "we do even unsigned file positions" case */ 1336 + if (file->f_mode & FMODE_UNSIGNED_OFFSET) 1337 + return 0; 1338 + 1339 + /* Yes, random drivers might want more. But I'm tired of buggy drivers */ 1340 + return ULONG_MAX; 1341 + } 1342 + 1343 + static inline bool file_mmap_ok(struct file *file, struct inode *inode, 1344 + unsigned long pgoff, unsigned long len) 1345 + { 1346 + u64 maxsize = file_mmap_size_max(file, inode); 1347 + 1348 + if (maxsize && len > maxsize) 1349 + return false; 1350 + maxsize -= len; 1351 + if (pgoff > maxsize >> PAGE_SHIFT) 1352 + return false; 1353 + return true; 1354 + } 1355 + 1327 1356 /* 1328 1357 * The caller must hold down_write(&current->mm->mmap_sem). 1329 1358 */ ··· 1437 1408 if (file) { 1438 1409 struct inode *inode = file_inode(file); 1439 1410 unsigned long flags_mask; 1411 + 1412 + if (!file_mmap_ok(file, inode, pgoff, len)) 1413 + return -EOVERFLOW; 1440 1414 1441 1415 flags_mask = LEGACY_MAP_MASK | file->f_op->mmap_supported_flags; 1442 1416 ··· 3056 3024 /* mm's last user has gone, and its about to be pulled down */ 3057 3025 mmu_notifier_release(mm); 3058 3026 3027 + if (unlikely(mm_is_oom_victim(mm))) { 3028 + /* 3029 + * Manually reap the mm to free as much memory as possible. 3030 + * Then, as the oom reaper does, set MMF_OOM_SKIP to disregard 3031 + * this mm from further consideration. Taking mm->mmap_sem for 3032 + * write after setting MMF_OOM_SKIP will guarantee that the oom 3033 + * reaper will not run on this mm again after mmap_sem is 3034 + * dropped. 3035 + * 3036 + * Nothing can be holding mm->mmap_sem here and the above call 3037 + * to mmu_notifier_release(mm) ensures mmu notifier callbacks in 3038 + * __oom_reap_task_mm() will not block. 3039 + * 3040 + * This needs to be done before calling munlock_vma_pages_all(), 3041 + * which clears VM_LOCKED, otherwise the oom reaper cannot 3042 + * reliably test it. 3043 + */ 3044 + mutex_lock(&oom_lock); 3045 + __oom_reap_task_mm(mm); 3046 + mutex_unlock(&oom_lock); 3047 + 3048 + set_bit(MMF_OOM_SKIP, &mm->flags); 3049 + down_write(&mm->mmap_sem); 3050 + up_write(&mm->mmap_sem); 3051 + } 3052 + 3059 3053 if (mm->locked_vm) { 3060 3054 vma = mm->mmap; 3061 3055 while (vma) { ··· 3103 3045 /* update_hiwater_rss(mm) here? but nobody should be looking */ 3104 3046 /* Use -1 here to ensure all VMAs in the mm are unmapped */ 3105 3047 unmap_vmas(&tlb, vma, 0, -1); 3106 - 3107 - if (unlikely(mm_is_oom_victim(mm))) { 3108 - /* 3109 - * Wait for oom_reap_task() to stop working on this 3110 - * mm. Because MMF_OOM_SKIP is already set before 3111 - * calling down_read(), oom_reap_task() will not run 3112 - * on this "mm" post up_write(). 3113 - * 3114 - * mm_is_oom_victim() cannot be set from under us 3115 - * either because victim->mm is already set to NULL 3116 - * under task_lock before calling mmput and oom_mm is 3117 - * set not NULL by the OOM killer only if victim->mm 3118 - * is found not NULL while holding the task_lock. 3119 - */ 3120 - set_bit(MMF_OOM_SKIP, &mm->flags); 3121 - down_write(&mm->mmap_sem); 3122 - up_write(&mm->mmap_sem); 3123 - } 3124 3048 free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, USER_PGTABLES_CEILING); 3125 3049 tlb_finish_mmu(&tlb, 0, -1); 3126 3050
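file_mmap_ok() closes the page-offset overflow: a mapping is rejected when its length exceeds what the file can address, or when pgoff points past the remaining byte budget. A userspace model of the arithmetic (PAGE_SHIFT and the size limit are illustrative; as in the kernel version, a maxsize of 0 effectively means "no limit" via the unsigned wrap):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

static bool file_mmap_ok(uint64_t maxsize, unsigned long pgoff,
                         unsigned long len)
{
        if (maxsize && len > maxsize)
                return false;
        maxsize -= len;
        /* pgoff is in pages; compare against the remaining byte budget */
        if (pgoff > maxsize >> PAGE_SHIFT)
                return false;
        return true;
}

int main(void)
{
        const uint64_t maxbytes = UINT64_C(1) << 40;    /* pretend 1 TiB limit */

        printf("%d\n", file_mmap_ok(maxbytes, 0, 4096));                 /* 1 */
        printf("%d\n", file_mmap_ok(maxbytes, (unsigned long)-1, 4096)); /* 0 */
        return 0;
}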
+43 -38
mm/oom_kill.c
··· 469 469 return false; 470 470 } 471 471 472 - 473 472 #ifdef CONFIG_MMU 474 473 /* 475 474 * OOM Reaper kernel thread which tries to reap the memory used by the OOM ··· 479 480 static struct task_struct *oom_reaper_list; 480 481 static DEFINE_SPINLOCK(oom_reaper_lock); 481 482 482 - static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) 483 + void __oom_reap_task_mm(struct mm_struct *mm) 483 484 { 484 - struct mmu_gather tlb; 485 485 struct vm_area_struct *vma; 486 + 487 + /* 488 + * Tell all users of get_user/copy_from_user etc... that the content 489 + * is no longer stable. No barriers really needed because unmapping 490 + * should imply barriers already and the reader would hit a page fault 491 + * if it stumbled over a reaped memory. 492 + */ 493 + set_bit(MMF_UNSTABLE, &mm->flags); 494 + 495 + for (vma = mm->mmap ; vma; vma = vma->vm_next) { 496 + if (!can_madv_dontneed_vma(vma)) 497 + continue; 498 + 499 + /* 500 + * Only anonymous pages have a good chance to be dropped 501 + * without additional steps which we cannot afford as we 502 + * are OOM already. 503 + * 504 + * We do not even care about fs backed pages because all 505 + * which are reclaimable have already been reclaimed and 506 + * we do not want to block exit_mmap by keeping mm ref 507 + * count elevated without a good reason. 508 + */ 509 + if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) { 510 + const unsigned long start = vma->vm_start; 511 + const unsigned long end = vma->vm_end; 512 + struct mmu_gather tlb; 513 + 514 + tlb_gather_mmu(&tlb, mm, start, end); 515 + mmu_notifier_invalidate_range_start(mm, start, end); 516 + unmap_page_range(&tlb, vma, start, end, NULL); 517 + mmu_notifier_invalidate_range_end(mm, start, end); 518 + tlb_finish_mmu(&tlb, start, end); 519 + } 520 + } 521 + } 522 + 523 + static bool oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm) 524 + { 486 525 bool ret = true; 487 526 488 527 /* 489 528 * We have to make sure to not race with the victim exit path 490 529 * and cause premature new oom victim selection: 491 - * __oom_reap_task_mm exit_mm 530 + * oom_reap_task_mm exit_mm 492 531 * mmget_not_zero 493 532 * mmput 494 533 * atomic_dec_and_test ··· 571 534 572 535 trace_start_task_reaping(tsk->pid); 573 536 574 - /* 575 - * Tell all users of get_user/copy_from_user etc... that the content 576 - * is no longer stable. No barriers really needed because unmapping 577 - * should imply barriers already and the reader would hit a page fault 578 - * if it stumbled over a reaped memory. 579 - */ 580 - set_bit(MMF_UNSTABLE, &mm->flags); 537 + __oom_reap_task_mm(mm); 581 538 582 - for (vma = mm->mmap ; vma; vma = vma->vm_next) { 583 - if (!can_madv_dontneed_vma(vma)) 584 - continue; 585 - 586 - /* 587 - * Only anonymous pages have a good chance to be dropped 588 - * without additional steps which we cannot afford as we 589 - * are OOM already. 590 - * 591 - * We do not even care about fs backed pages because all 592 - * which are reclaimable have already been reclaimed and 593 - * we do not want to block exit_mmap by keeping mm ref 594 - * count elevated without a good reason. 
595 - */ 596 - if (vma_is_anonymous(vma) || !(vma->vm_flags & VM_SHARED)) { 597 - const unsigned long start = vma->vm_start; 598 - const unsigned long end = vma->vm_end; 599 - 600 - tlb_gather_mmu(&tlb, mm, start, end); 601 - mmu_notifier_invalidate_range_start(mm, start, end); 602 - unmap_page_range(&tlb, vma, start, end, NULL); 603 - mmu_notifier_invalidate_range_end(mm, start, end); 604 - tlb_finish_mmu(&tlb, start, end); 605 - } 606 - } 607 539 pr_info("oom_reaper: reaped process %d (%s), now anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB\n", 608 540 task_pid_nr(tsk), tsk->comm, 609 541 K(get_mm_counter(mm, MM_ANONPAGES)), ··· 593 587 struct mm_struct *mm = tsk->signal->oom_mm; 594 588 595 589 /* Retry the down_read_trylock(mmap_sem) a few times */ 596 - while (attempts++ < MAX_OOM_REAP_RETRIES && !__oom_reap_task_mm(tsk, mm)) 590 + while (attempts++ < MAX_OOM_REAP_RETRIES && !oom_reap_task_mm(tsk, mm)) 597 591 schedule_timeout_idle(HZ/10); 598 592 599 593 if (attempts <= MAX_OOM_REAP_RETRIES || 600 594 test_bit(MMF_OOM_SKIP, &mm->flags)) 601 595 goto done; 602 - 603 596 604 597 pr_info("oom_reaper: unable to reap pid:%d (%s)\n", 605 598 task_pid_nr(tsk), tsk->comm);
+1 -1
mm/sparse.c
··· 629 629 unsigned long pfn; 630 630 631 631 for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) { 632 - unsigned long section_nr = pfn_to_section_nr(start_pfn); 632 + unsigned long section_nr = pfn_to_section_nr(pfn); 633 633 struct mem_section *ms; 634 634 635 635 /*
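The one-character fix above is the classic copy-paste loop bug: deriving the section from start_pfn instead of the loop variable made every iteration operate on the first section. A tiny demo of the corrected loop (constants are illustrative):

#include <stdio.h>

#define PAGES_PER_SECTION 32768UL       /* illustrative section size */

static unsigned long pfn_to_section_nr(unsigned long pfn)
{
        return pfn / PAGES_PER_SECTION;
}

int main(void)
{
        unsigned long start_pfn = 0, end_pfn = 3 * PAGES_PER_SECTION;

        /* with pfn (not start_pfn) this prints 0, 1, 2 - not 0, 0, 0 */
        for (unsigned long pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION)
                printf("section %lu\n", pfn_to_section_nr(pfn));
        return 0;
}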
+5 -1
mm/vmstat.c
··· 1161 1161 "nr_vmscan_immediate_reclaim", 1162 1162 "nr_dirtied", 1163 1163 "nr_written", 1164 - "nr_indirectly_reclaimable", 1164 + "", /* nr_indirectly_reclaimable */ 1165 1165 1166 1166 /* enum writeback_stat_item counters */ 1167 1167 "nr_dirty_threshold", ··· 1739 1739 { 1740 1740 unsigned long *l = arg; 1741 1741 unsigned long off = l - (unsigned long *)m->private; 1742 + 1743 + /* Skip hidden vmstat items. */ 1744 + if (*vmstat_text[off] == '\0') 1745 + return 0; 1742 1746 1743 1747 seq_puts(m, vmstat_text[off]); 1744 1748 seq_put_decimal_ull(m, " ", *l);
+30 -12
mm/z3fold.c
··· 144 144 PAGE_HEADLESS = 0, 145 145 MIDDLE_CHUNK_MAPPED, 146 146 NEEDS_COMPACTING, 147 - PAGE_STALE 147 + PAGE_STALE, 148 + UNDER_RECLAIM 148 149 }; 149 150 150 151 /***************** ··· 174 173 clear_bit(MIDDLE_CHUNK_MAPPED, &page->private); 175 174 clear_bit(NEEDS_COMPACTING, &page->private); 176 175 clear_bit(PAGE_STALE, &page->private); 176 + clear_bit(UNDER_RECLAIM, &page->private); 177 177 178 178 spin_lock_init(&zhdr->page_lock); 179 179 kref_init(&zhdr->refcount); ··· 758 756 atomic64_dec(&pool->pages_nr); 759 757 return; 760 758 } 759 + if (test_bit(UNDER_RECLAIM, &page->private)) { 760 + z3fold_page_unlock(zhdr); 761 + return; 762 + } 761 763 if (test_and_set_bit(NEEDS_COMPACTING, &page->private)) { 762 764 z3fold_page_unlock(zhdr); 763 765 return; ··· 846 840 kref_get(&zhdr->refcount); 847 841 list_del_init(&zhdr->buddy); 848 842 zhdr->cpu = -1; 843 + set_bit(UNDER_RECLAIM, &page->private); 844 + break; 849 845 } 850 846 851 847 list_del_init(&page->lru); ··· 895 887 goto next; 896 888 } 897 889 next: 898 - spin_lock(&pool->lock); 899 890 if (test_bit(PAGE_HEADLESS, &page->private)) { 900 891 if (ret == 0) { 901 - spin_unlock(&pool->lock); 902 892 free_z3fold_page(page); 903 893 return 0; 904 894 } 905 - } else if (kref_put(&zhdr->refcount, release_z3fold_page)) { 906 - atomic64_dec(&pool->pages_nr); 895 + spin_lock(&pool->lock); 896 + list_add(&page->lru, &pool->lru); 907 897 spin_unlock(&pool->lock); 908 - return 0; 898 + } else { 899 + z3fold_page_lock(zhdr); 900 + clear_bit(UNDER_RECLAIM, &page->private); 901 + if (kref_put(&zhdr->refcount, 902 + release_z3fold_page_locked)) { 903 + atomic64_dec(&pool->pages_nr); 904 + return 0; 905 + } 906 + /* 907 + * if we are here, the page is still not completely 908 + * free. Take the global pool lock then to be able 909 + * to add it back to the lru list 910 + */ 911 + spin_lock(&pool->lock); 912 + list_add(&page->lru, &pool->lru); 913 + spin_unlock(&pool->lock); 914 + z3fold_page_unlock(zhdr); 909 915 } 910 916 911 - /* 912 - * Add to the beginning of LRU. 913 - * Pool lock has to be kept here to ensure the page has 914 - * not already been released 915 - */ 916 - list_add(&page->lru, &pool->lru); 917 + /* We started off locked to we need to lock the pool back */ 918 + spin_lock(&pool->lock); 917 919 } 918 920 spin_unlock(&pool->lock); 919 921 return -EAGAIN;
+1 -1
net/9p/trans_common.c
··· 16 16 #include <linux/module.h> 17 17 18 18 /** 19 - * p9_release_req_pages - Release pages after the transaction. 19 + * p9_release_pages - Release pages after the transaction. 20 20 */ 21 21 void p9_release_pages(struct page **pages, int nr_pages) 22 22 {
+2 -2
net/9p/trans_fd.c
··· 1092 1092 }; 1093 1093 1094 1094 /** 1095 - * p9_poll_proc - poll worker thread 1096 - * @a: thread state and arguments 1095 + * p9_poll_workfn - poll worker thread 1096 + * @work: work queue 1097 1097 * 1098 1098 * polls all v9fs transports for new events and queues the appropriate 1099 1099 * work to the work queue
+1 -3
net/9p/trans_rdma.c
··· 68 68 * @pd: Protection Domain pointer 69 69 * @qp: Queue Pair pointer 70 70 * @cq: Completion Queue pointer 71 - * @dm_mr: DMA Memory Region pointer 72 - * @lkey: The local access only memory region key 73 71 * @timeout: Number of uSecs to wait for connection management events 74 72 * @privport: Whether a privileged port may be used 75 73 * @port: The port to use ··· 630 632 } 631 633 632 634 /** 633 - * trans_create_rdma - Transport method for creating atransport instance 635 + * rdma_create_trans - Transport method for creating a transport instance 634 636 * @client: client instance 635 637 * @addr: IP address string 636 638 * @args: Mount options string
+2 -3
net/9p/trans_virtio.c
··· 60 60 61 61 /** 62 62 * struct virtio_chan - per-instance transport information 63 - * @initialized: whether the channel is initialized 64 63 * @inuse: whether the channel is in use 65 64 * @lock: protects multiple elements within this structure 66 65 * @client: client instance ··· 384 385 * @uidata: user bffer that should be ued for zero copy read 385 386 * @uodata: user buffer that shoud be user for zero copy write 386 387 * @inlen: read buffer size 387 - * @olen: write buffer size 388 - * @hdrlen: reader header size, This is the size of response protocol data 388 + * @outlen: write buffer size 389 + * @in_hdr_len: reader header size, This is the size of response protocol data 389 390 * 390 391 */ 391 392 static int
+1 -1
net/9p/trans_xen.c
··· 485 485 486 486 static int xen_9pfs_front_resume(struct xenbus_device *dev) 487 487 { 488 - dev_warn(&dev->dev, "suspsend/resume unsupported\n"); 488 + dev_warn(&dev->dev, "suspend/resume unsupported\n"); 489 489 return 0; 490 490 } 491 491
+7 -2
net/atm/lec.c
··· 41 41 #include <linux/module.h> 42 42 #include <linux/init.h> 43 43 44 + /* Hardening for Spectre-v1 */ 45 + #include <linux/nospec.h> 46 + 44 47 #include "lec.h" 45 48 #include "lec_arpc.h" 46 49 #include "resources.h" ··· 690 687 bytes_left = copy_from_user(&ioc_data, arg, sizeof(struct atmlec_ioc)); 691 688 if (bytes_left != 0) 692 689 pr_info("copy from user failed for %d bytes\n", bytes_left); 693 - if (ioc_data.dev_num < 0 || ioc_data.dev_num >= MAX_LEC_ITF || 694 - !dev_lec[ioc_data.dev_num]) 690 + if (ioc_data.dev_num < 0 || ioc_data.dev_num >= MAX_LEC_ITF) 691 + return -EINVAL; 692 + ioc_data.dev_num = array_index_nospec(ioc_data.dev_num, MAX_LEC_ITF); 693 + if (!dev_lec[ioc_data.dev_num]) 695 694 return -EINVAL; 696 695 vpriv = kmalloc(sizeof(struct lec_vcc_priv), GFP_KERNEL); 697 696 if (!vpriv)
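The reworked bounds check follows the Spectre-v1 pattern: validate dev_num first, then clamp it with array_index_nospec() so a mispredicted branch cannot speculatively index dev_lec[] with an attacker-chosen value. A userspace sketch using the kernel's generic mask fallback (it relies on arithmetic right shift of a negative signed value, as the kernel's version does; constants are illustrative):

#include <stdio.h>

/* ~0UL when index < size, 0UL otherwise, computed without a branch */
static unsigned long array_index_mask_nospec(unsigned long index,
                                             unsigned long size)
{
        return ~(long)(index | (size - 1UL - index)) >> (sizeof(long) * 8 - 1);
}

#define MAX_LEC_ITF 48  /* illustrative bound */

int main(void)
{
        int dev_lec[MAX_LEC_ITF] = { 0 };
        unsigned long idx = 7;          /* pretend this came from userspace */

        if (idx >= MAX_LEC_ITF)
                return 1;
        /* clamp so even a speculated access stays inside the array */
        idx &= array_index_mask_nospec(idx, MAX_LEC_ITF);
        printf("%d\n", dev_lec[idx]);
        return 0;
}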
+2 -2
net/bridge/br_if.c
··· 518 518 return -ELOOP; 519 519 } 520 520 521 - /* Device is already being bridged */ 522 - if (br_port_exists(dev)) 521 + /* Device has master upper dev */ 522 + if (netdev_master_upper_dev_get(dev)) 523 523 return -EBUSY; 524 524 525 525 /* No bridging devices that dislike that (e.g. wireless) */
+23 -4
net/ceph/osd_client.c
··· 157 157 #endif /* CONFIG_BLOCK */ 158 158 159 159 static void ceph_osd_data_bvecs_init(struct ceph_osd_data *osd_data, 160 - struct ceph_bvec_iter *bvec_pos) 160 + struct ceph_bvec_iter *bvec_pos, 161 + u32 num_bvecs) 161 162 { 162 163 osd_data->type = CEPH_OSD_DATA_TYPE_BVECS; 163 164 osd_data->bvec_pos = *bvec_pos; 165 + osd_data->num_bvecs = num_bvecs; 164 166 } 165 167 166 168 #define osd_req_op_data(oreq, whch, typ, fld) \ ··· 239 237 EXPORT_SYMBOL(osd_req_op_extent_osd_data_bio); 240 238 #endif /* CONFIG_BLOCK */ 241 239 240 + void osd_req_op_extent_osd_data_bvecs(struct ceph_osd_request *osd_req, 241 + unsigned int which, 242 + struct bio_vec *bvecs, u32 num_bvecs, 243 + u32 bytes) 244 + { 245 + struct ceph_osd_data *osd_data; 246 + struct ceph_bvec_iter it = { 247 + .bvecs = bvecs, 248 + .iter = { .bi_size = bytes }, 249 + }; 250 + 251 + osd_data = osd_req_op_data(osd_req, which, extent, osd_data); 252 + ceph_osd_data_bvecs_init(osd_data, &it, num_bvecs); 253 + } 254 + EXPORT_SYMBOL(osd_req_op_extent_osd_data_bvecs); 255 + 242 256 void osd_req_op_extent_osd_data_bvec_pos(struct ceph_osd_request *osd_req, 243 257 unsigned int which, 244 258 struct ceph_bvec_iter *bvec_pos) ··· 262 244 struct ceph_osd_data *osd_data; 263 245 264 246 osd_data = osd_req_op_data(osd_req, which, extent, osd_data); 265 - ceph_osd_data_bvecs_init(osd_data, bvec_pos); 247 + ceph_osd_data_bvecs_init(osd_data, bvec_pos, 0); 266 248 } 267 249 EXPORT_SYMBOL(osd_req_op_extent_osd_data_bvec_pos); 268 250 ··· 305 287 306 288 void osd_req_op_cls_request_data_bvecs(struct ceph_osd_request *osd_req, 307 289 unsigned int which, 308 - struct bio_vec *bvecs, u32 bytes) 290 + struct bio_vec *bvecs, u32 num_bvecs, 291 + u32 bytes) 309 292 { 310 293 struct ceph_osd_data *osd_data; 311 294 struct ceph_bvec_iter it = { ··· 315 296 }; 316 297 317 298 osd_data = osd_req_op_data(osd_req, which, cls, request_data); 318 - ceph_osd_data_bvecs_init(osd_data, &it); 299 + ceph_osd_data_bvecs_init(osd_data, &it, num_bvecs); 319 300 osd_req->r_ops[which].cls.indata_len += bytes; 320 301 osd_req->r_ops[which].indata_len += bytes; 321 302 }
+4 -2
net/compat.c
··· 377 377 optname == SO_ATTACH_REUSEPORT_CBPF) 378 378 return do_set_attach_filter(sock, level, optname, 379 379 optval, optlen); 380 - if (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO) 380 + if (!COMPAT_USE_64BIT_TIME && 381 + (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO)) 381 382 return do_set_sock_timeout(sock, level, optname, optval, optlen); 382 383 383 384 return sock_setsockopt(sock, level, optname, optval, optlen); ··· 449 448 static int compat_sock_getsockopt(struct socket *sock, int level, int optname, 450 449 char __user *optval, int __user *optlen) 451 450 { 452 - if (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO) 451 + if (!COMPAT_USE_64BIT_TIME && 452 + (optname == SO_RCVTIMEO || optname == SO_SNDTIMEO)) 453 453 return do_get_sock_timeout(sock, level, optname, optval, optlen); 454 454 return sock_getsockopt(sock, level, optname, optval, optlen); 455 455 }
+5
net/core/ethtool.c
··· 1032 1032 info_size = sizeof(info); 1033 1033 if (copy_from_user(&info, useraddr, info_size)) 1034 1034 return -EFAULT; 1035 + /* Since malicious users may modify the original data, 1036 + * we need to check whether FLOW_RSS is still requested. 1037 + */ 1038 + if (!(info.flow_type & FLOW_RSS)) 1039 + return -EINVAL; 1035 1040 } 1036 1041 1037 1042 if (info.cmd == ETHTOOL_GRXCLSRLALL) {
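This is a double-fetch guard: anything copied from userspace twice must be re-validated after the second copy, since the buffer may change between fetches. A userspace model with a volatile variable standing in for the user page (FLOW_RSS value as in the UAPI header, but treat it as illustrative):

#include <stdio.h>

#define FLOW_RSS 0x20000000u

/* stand-in for a user page another thread can rewrite between copies */
static volatile unsigned int user_flow_type = FLOW_RSS;

static int get_rxnfc(void)
{
        unsigned int flow_type;

        flow_type = user_flow_type;     /* first fetch: size the request */
        if (!(flow_type & FLOW_RSS))
                return -1;

        flow_type = user_flow_type;     /* second fetch: must re-validate */
        if (!(flow_type & FLOW_RSS))
                return -1;              /* flag vanished: reject */
        return 0;
}

int main(void)
{
        printf("%d\n", get_rxnfc());    /* 0 */
        user_flow_type = 0;             /* "malicious" rewrite */
        printf("%d\n", get_rxnfc());    /* -1 */
        return 0;
}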
+1
net/core/filter.c
··· 3240 3240 skb_dst_set(skb, (struct dst_entry *) md); 3241 3241 3242 3242 info = &md->u.tun_info; 3243 + memset(info, 0, sizeof(*info)); 3243 3244 info->mode = IP_TUNNEL_INFO_TX; 3244 3245 3245 3246 info->key.tun_flags = TUNNEL_KEY | TUNNEL_CSUM | TUNNEL_NOCACHE;
+12 -2
net/dccp/ccids/ccid2.c
··· 126 126 DCCPF_SEQ_WMAX)); 127 127 } 128 128 129 + static void dccp_tasklet_schedule(struct sock *sk) 130 + { 131 + struct tasklet_struct *t = &dccp_sk(sk)->dccps_xmitlet; 132 + 133 + if (!test_and_set_bit(TASKLET_STATE_SCHED, &t->state)) { 134 + sock_hold(sk); 135 + __tasklet_schedule(t); 136 + } 137 + } 138 + 129 139 static void ccid2_hc_tx_rto_expire(struct timer_list *t) 130 140 { 131 141 struct ccid2_hc_tx_sock *hc = from_timer(hc, t, tx_rtotimer); ··· 176 166 177 167 /* if we were blocked before, we may now send cwnd=1 packet */ 178 168 if (sender_was_blocked) 179 - tasklet_schedule(&dccp_sk(sk)->dccps_xmitlet); 169 + dccp_tasklet_schedule(sk); 180 170 /* restart backed-off timer */ 181 171 sk_reset_timer(sk, &hc->tx_rtotimer, jiffies + hc->tx_rto); 182 172 out: ··· 716 706 done: 717 707 /* check if incoming Acks allow pending packets to be sent */ 718 708 if (sender_was_blocked && !ccid2_cwnd_network_limited(hc)) 719 - tasklet_schedule(&dccp_sk(sk)->dccps_xmitlet); 709 + dccp_tasklet_schedule(sk); 720 710 dccp_ackvec_parsed_cleanup(&hc->tx_av_chunks); 721 711 } 722 712
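The new helper pairs the reference count with the scheduling race: only the caller that actually flips the tasklet to "scheduled" takes the socket reference, and the tasklet body drops it (see the sock_put() moved into dccp_write_xmitlet() in net/dccp/timer.c below), so each scheduled run owns exactly one reference. A compact userspace model with C11 atomics (names illustrative):

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag sched_bit = ATOMIC_FLAG_INIT;
static atomic_int sk_refcnt = 1;

static void dccp_tasklet_schedule(void)
{
        if (!atomic_flag_test_and_set(&sched_bit)) {
                atomic_fetch_add(&sk_refcnt, 1);        /* sock_hold(sk) */
                /* __tasklet_schedule(t) would go here */
        }
}

static void dccp_write_xmitlet(void)
{
        atomic_flag_clear(&sched_bit);
        /* ... transmit ... */
        atomic_fetch_sub(&sk_refcnt, 1);                /* sock_put(sk) */
}

int main(void)
{
        dccp_tasklet_schedule();
        dccp_tasklet_schedule();        /* lost the race: no extra reference */
        dccp_write_xmitlet();
        printf("refcnt=%d\n", atomic_load(&sk_refcnt)); /* 1: balanced */
        return 0;
}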
+1 -1
net/dccp/timer.c
··· 232 232 else 233 233 dccp_write_xmit(sk); 234 234 bh_unlock_sock(sk); 235 + sock_put(sk); 235 236 } 236 237 237 238 static void dccp_write_xmit_timer(struct timer_list *t) ··· 241 240 struct sock *sk = &dp->dccps_inet_connection.icsk_inet.sk; 242 241 243 242 dccp_write_xmitlet((unsigned long)sk); 244 - sock_put(sk); 245 243 } 246 244 247 245 void dccp_init_xmit_timers(struct sock *sk)
+2 -2
net/ieee802154/6lowpan/6lowpan_i.h
··· 20 20 struct frag_lowpan_compare_key { 21 21 u16 tag; 22 22 u16 d_size; 23 - const struct ieee802154_addr src; 24 - const struct ieee802154_addr dst; 23 + struct ieee802154_addr src; 24 + struct ieee802154_addr dst; 25 25 }; 26 26 27 27 /* Equivalent of ipv4 struct ipq
+7 -7
net/ieee802154/6lowpan/reassembly.c
··· 75 75 { 76 76 struct netns_ieee802154_lowpan *ieee802154_lowpan = 77 77 net_ieee802154_lowpan(net); 78 - struct frag_lowpan_compare_key key = { 79 - .tag = cb->d_tag, 80 - .d_size = cb->d_size, 81 - .src = *src, 82 - .dst = *dst, 83 - }; 78 + struct frag_lowpan_compare_key key = {}; 84 79 struct inet_frag_queue *q; 80 + 81 + key.tag = cb->d_tag; 82 + key.d_size = cb->d_size; 83 + key.src = *src; 84 + key.dst = *dst; 85 85 86 86 q = inet_frag_find(&ieee802154_lowpan->frags, &key); 87 87 if (!q) ··· 372 372 struct lowpan_frag_queue *fq; 373 373 struct net *net = dev_net(skb->dev); 374 374 struct lowpan_802154_cb *cb = lowpan_802154_cb(skb); 375 - struct ieee802154_hdr hdr; 375 + struct ieee802154_hdr hdr = {}; 376 376 int err; 377 377 378 378 if (ieee802154_hdr_peek_addrs(skb, &hdr) < 0)
+5 -2
net/ipv4/ping.c
··· 775 775 ipc.addr = faddr = daddr; 776 776 777 777 if (ipc.opt && ipc.opt->opt.srr) { 778 - if (!daddr) 779 - return -EINVAL; 778 + if (!daddr) { 779 + err = -EINVAL; 780 + goto out_free; 781 + } 780 782 faddr = ipc.opt->opt.faddr; 781 783 } 782 784 tos = get_rttos(&ipc, inet); ··· 844 842 845 843 out: 846 844 ip_rt_put(rt); 845 + out_free: 847 846 if (free) 848 847 kfree(ipc.opt); 849 848 if (!err) {
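The ping.c change converts a bare return -EINVAL into a jump to a label that frees ipc.opt, plugging a leak on the source-route error path; the same pattern is applied to net/ipv4/udp.c further down. A minimal sketch of the goto-cleanup idiom (names illustrative):

#include <stdlib.h>

static int send_one(int bad_srr)
{
        int err = 0;
        char *opt = malloc(16);         /* stands in for ipc.opt */

        if (!opt)
                return -12;             /* -ENOMEM: nothing to clean up yet */

        if (bad_srr) {
                err = -22;              /* -EINVAL */
                goto out_free;          /* NOT a bare return: would leak opt */
        }
        /* ... transmit ... */
out_free:
        free(opt);
        return err;
}

int main(void)
{
        return send_one(1) == -22 ? 0 : 1;
}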
+54 -65
net/ipv4/route.c
··· 709 709 fnhe->fnhe_gw = gw; 710 710 fnhe->fnhe_pmtu = pmtu; 711 711 fnhe->fnhe_mtu_locked = lock; 712 - fnhe->fnhe_expires = expires; 712 + fnhe->fnhe_expires = max(1UL, expires); 713 713 714 714 /* Exception created; mark the cached routes for the nexthop 715 715 * stale, so anyone caching it rechecks if this exception ··· 1297 1297 return mtu - lwtunnel_headroom(dst->lwtstate, mtu); 1298 1298 } 1299 1299 1300 + static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr) 1301 + { 1302 + struct fnhe_hash_bucket *hash; 1303 + struct fib_nh_exception *fnhe, __rcu **fnhe_p; 1304 + u32 hval = fnhe_hashfun(daddr); 1305 + 1306 + spin_lock_bh(&fnhe_lock); 1307 + 1308 + hash = rcu_dereference_protected(nh->nh_exceptions, 1309 + lockdep_is_held(&fnhe_lock)); 1310 + hash += hval; 1311 + 1312 + fnhe_p = &hash->chain; 1313 + fnhe = rcu_dereference_protected(*fnhe_p, lockdep_is_held(&fnhe_lock)); 1314 + while (fnhe) { 1315 + if (fnhe->fnhe_daddr == daddr) { 1316 + rcu_assign_pointer(*fnhe_p, rcu_dereference_protected( 1317 + fnhe->fnhe_next, lockdep_is_held(&fnhe_lock))); 1318 + fnhe_flush_routes(fnhe); 1319 + kfree_rcu(fnhe, rcu); 1320 + break; 1321 + } 1322 + fnhe_p = &fnhe->fnhe_next; 1323 + fnhe = rcu_dereference_protected(fnhe->fnhe_next, 1324 + lockdep_is_held(&fnhe_lock)); 1325 + } 1326 + 1327 + spin_unlock_bh(&fnhe_lock); 1328 + } 1329 + 1300 1330 static struct fib_nh_exception *find_exception(struct fib_nh *nh, __be32 daddr) 1301 1331 { 1302 1332 struct fnhe_hash_bucket *hash = rcu_dereference(nh->nh_exceptions); ··· 1340 1310 1341 1311 for (fnhe = rcu_dereference(hash[hval].chain); fnhe; 1342 1312 fnhe = rcu_dereference(fnhe->fnhe_next)) { 1343 - if (fnhe->fnhe_daddr == daddr) 1313 + if (fnhe->fnhe_daddr == daddr) { 1314 + if (fnhe->fnhe_expires && 1315 + time_after(jiffies, fnhe->fnhe_expires)) { 1316 + ip_del_fnhe(nh, daddr); 1317 + break; 1318 + } 1344 1319 return fnhe; 1320 + } 1345 1321 } 1346 1322 return NULL; 1347 1323 } ··· 1375 1339 fnhe->fnhe_gw = 0; 1376 1340 fnhe->fnhe_pmtu = 0; 1377 1341 fnhe->fnhe_expires = 0; 1342 + fnhe->fnhe_mtu_locked = false; 1378 1343 fnhe_flush_routes(fnhe); 1379 1344 orig = NULL; 1380 1345 } ··· 1673 1636 #endif 1674 1637 } 1675 1638 1676 - static void ip_del_fnhe(struct fib_nh *nh, __be32 daddr) 1677 - { 1678 - struct fnhe_hash_bucket *hash; 1679 - struct fib_nh_exception *fnhe, __rcu **fnhe_p; 1680 - u32 hval = fnhe_hashfun(daddr); 1681 - 1682 - spin_lock_bh(&fnhe_lock); 1683 - 1684 - hash = rcu_dereference_protected(nh->nh_exceptions, 1685 - lockdep_is_held(&fnhe_lock)); 1686 - hash += hval; 1687 - 1688 - fnhe_p = &hash->chain; 1689 - fnhe = rcu_dereference_protected(*fnhe_p, lockdep_is_held(&fnhe_lock)); 1690 - while (fnhe) { 1691 - if (fnhe->fnhe_daddr == daddr) { 1692 - rcu_assign_pointer(*fnhe_p, rcu_dereference_protected( 1693 - fnhe->fnhe_next, lockdep_is_held(&fnhe_lock))); 1694 - fnhe_flush_routes(fnhe); 1695 - kfree_rcu(fnhe, rcu); 1696 - break; 1697 - } 1698 - fnhe_p = &fnhe->fnhe_next; 1699 - fnhe = rcu_dereference_protected(fnhe->fnhe_next, 1700 - lockdep_is_held(&fnhe_lock)); 1701 - } 1702 - 1703 - spin_unlock_bh(&fnhe_lock); 1704 - } 1705 - 1706 1639 /* called in rcu_read_lock() section */ 1707 1640 static int __mkroute_input(struct sk_buff *skb, 1708 1641 const struct fib_result *res, ··· 1726 1719 1727 1720 fnhe = find_exception(&FIB_RES_NH(*res), daddr); 1728 1721 if (do_cache) { 1729 - if (fnhe) { 1722 + if (fnhe) 1730 1723 rth = rcu_dereference(fnhe->fnhe_rth_input); 1731 - if (rth && rth->dst.expires && 1732 - time_after(jiffies, 
rth->dst.expires)) { 1733 - ip_del_fnhe(&FIB_RES_NH(*res), daddr); 1734 - fnhe = NULL; 1735 - } else { 1736 - goto rt_cache; 1737 - } 1738 - } 1739 - 1740 - rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input); 1741 - 1742 - rt_cache: 1724 + else 1725 + rth = rcu_dereference(FIB_RES_NH(*res).nh_rth_input); 1743 1726 if (rt_cache_valid(rth)) { 1744 1727 skb_dst_set_noref(skb, &rth->dst); 1745 1728 goto out; ··· 2213 2216 * the loopback interface and the IP_PKTINFO ipi_ifindex will 2214 2217 * be set to the loopback interface as well. 2215 2218 */ 2216 - fi = NULL; 2219 + do_cache = false; 2217 2220 } 2218 2221 2219 2222 fnhe = NULL; 2220 2223 do_cache &= fi != NULL; 2221 - if (do_cache) { 2224 + if (fi) { 2222 2225 struct rtable __rcu **prth; 2223 2226 struct fib_nh *nh = &FIB_RES_NH(*res); 2224 2227 2225 2228 fnhe = find_exception(nh, fl4->daddr); 2229 + if (!do_cache) 2230 + goto add; 2226 2231 if (fnhe) { 2227 2232 prth = &fnhe->fnhe_rth_output; 2228 - rth = rcu_dereference(*prth); 2229 - if (rth && rth->dst.expires && 2230 - time_after(jiffies, rth->dst.expires)) { 2231 - ip_del_fnhe(nh, fl4->daddr); 2232 - fnhe = NULL; 2233 - } else { 2234 - goto rt_cache; 2233 + } else { 2234 + if (unlikely(fl4->flowi4_flags & 2235 + FLOWI_FLAG_KNOWN_NH && 2236 + !(nh->nh_gw && 2237 + nh->nh_scope == RT_SCOPE_LINK))) { 2238 + do_cache = false; 2239 + goto add; 2235 2240 } 2241 + prth = raw_cpu_ptr(nh->nh_pcpu_rth_output); 2236 2242 } 2237 - 2238 - if (unlikely(fl4->flowi4_flags & 2239 - FLOWI_FLAG_KNOWN_NH && 2240 - !(nh->nh_gw && 2241 - nh->nh_scope == RT_SCOPE_LINK))) { 2242 - do_cache = false; 2243 - goto add; 2244 - } 2245 - prth = raw_cpu_ptr(nh->nh_pcpu_rth_output); 2246 2243 rth = rcu_dereference(*prth); 2247 - 2248 - rt_cache: 2249 2244 if (rt_cache_valid(rth) && dst_hold_safe(&rth->dst)) 2250 2245 return rth; 2251 2246 }
+4 -3
net/ipv4/tcp.c
··· 697 697 { 698 698 return skb->len < size_goal && 699 699 sock_net(sk)->ipv4.sysctl_tcp_autocorking && 700 - skb != tcp_write_queue_head(sk) && 700 + !tcp_rtx_queue_empty(sk) && 701 701 refcount_read(&sk->sk_wmem_alloc) > skb->truesize; 702 702 } 703 703 ··· 1204 1204 uarg->zerocopy = 0; 1205 1205 } 1206 1206 1207 - if (unlikely(flags & MSG_FASTOPEN || inet_sk(sk)->defer_connect)) { 1207 + if (unlikely(flags & MSG_FASTOPEN || inet_sk(sk)->defer_connect) && 1208 + !tp->repair) { 1208 1209 err = tcp_sendmsg_fastopen(sk, msg, &copied_syn, size); 1209 1210 if (err == -EINPROGRESS && copied_syn > 0) 1210 1211 goto out; ··· 2674 2673 case TCP_REPAIR_QUEUE: 2675 2674 if (!tp->repair) 2676 2675 err = -EPERM; 2677 - else if (val < TCP_QUEUES_NR) 2676 + else if ((unsigned int)val < TCP_QUEUES_NR) 2678 2677 tp->repair_queue = val; 2679 2678 else 2680 2679 err = -EINVAL;
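The TCP_REPAIR_QUEUE cast is a signedness fix: a negative val passes the signed comparison against TCP_QUEUES_NR and would later select a bogus queue, while the unsigned compare rejects it. A two-line demonstration (the constant is illustrative; see include/uapi/linux/tcp.h for the real one):

#include <stdio.h>

#define TCP_QUEUES_NR 3

int main(void)
{
        int val = -1;

        printf("signed:   %d\n", val < TCP_QUEUES_NR);               /* 1: accepted! */
        printf("unsigned: %d\n", (unsigned int)val < TCP_QUEUES_NR); /* 0: rejected */
        return 0;
}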
+3 -1
net/ipv4/tcp_bbr.c
··· 806 806 } 807 807 } 808 808 } 809 - bbr->idle_restart = 0; 809 + /* Restart after idle ends only once we process a new S/ACK for data */ 810 + if (rs->delivered > 0) 811 + bbr->idle_restart = 0; 810 812 } 811 813 812 814 static void bbr_update_model(struct sock *sk, const struct rate_sample *rs)
+7 -4
net/ipv4/udp.c
··· 401 401 bool dev_match = (sk->sk_bound_dev_if == dif || 402 402 sk->sk_bound_dev_if == sdif); 403 403 404 - if (exact_dif && !dev_match) 404 + if (!dev_match) 405 405 return -1; 406 - if (sk->sk_bound_dev_if && dev_match) 406 + if (sk->sk_bound_dev_if) 407 407 score += 4; 408 408 } 409 409 ··· 952 952 sock_tx_timestamp(sk, ipc.sockc.tsflags, &ipc.tx_flags); 953 953 954 954 if (ipc.opt && ipc.opt->opt.srr) { 955 - if (!daddr) 956 - return -EINVAL; 955 + if (!daddr) { 956 + err = -EINVAL; 957 + goto out_free; 958 + } 957 959 faddr = ipc.opt->opt.faddr; 958 960 connected = 0; 959 961 } ··· 1076 1074 1077 1075 out: 1078 1076 ip_rt_put(rt); 1077 + out_free: 1079 1078 if (free) 1080 1079 kfree(ipc.opt); 1081 1080 if (!err)
+4 -5
net/ipv6/Kconfig
··· 34 34 bool "IPv6: Route Information (RFC 4191) support" 35 35 depends on IPV6_ROUTER_PREF 36 36 ---help--- 37 - This is experimental support of Route Information. 37 + Support of Route Information. 38 38 39 39 If unsure, say N. 40 40 41 41 config IPV6_OPTIMISTIC_DAD 42 42 bool "IPv6: Enable RFC 4429 Optimistic DAD" 43 43 ---help--- 44 - This is experimental support for optimistic Duplicate 45 - Address Detection. It allows for autoconfigured addresses 46 - to be used more quickly. 44 + Support for optimistic Duplicate Address Detection. It allows for 45 + autoconfigured addresses to be used more quickly. 47 46 48 47 If unsure, say N. 49 48 ··· 279 280 depends on IPV6 280 281 select IP_MROUTE_COMMON 281 282 ---help--- 282 - Experimental support for IPv6 multicast forwarding. 283 + Support for IPv6 multicast forwarding. 283 284 If unsure, say N. 284 285 285 286 config IPV6_MROUTE_MULTIPLE_TABLES
+2 -2
net/ipv6/ip6_vti.c
··· 669 669 else 670 670 mtu = ETH_DATA_LEN - LL_MAX_HEADER - sizeof(struct ipv6hdr); 671 671 672 - dev->mtu = max_t(int, mtu, IPV6_MIN_MTU); 672 + dev->mtu = max_t(int, mtu, IPV4_MIN_MTU); 673 673 } 674 674 675 675 /** ··· 881 881 dev->priv_destructor = vti6_dev_free; 882 882 883 883 dev->type = ARPHRD_TUNNEL6; 884 - dev->min_mtu = IPV6_MIN_MTU; 884 + dev->min_mtu = IPV4_MIN_MTU; 885 885 dev->max_mtu = IP_MAX_MTU - sizeof(struct ipv6hdr); 886 886 dev->flags |= IFF_NOARP; 887 887 dev->addr_len = sizeof(struct in6_addr);
+6 -1
net/ipv6/route.c
··· 1835 1835 const struct ipv6hdr *inner_iph; 1836 1836 const struct icmp6hdr *icmph; 1837 1837 struct ipv6hdr _inner_iph; 1838 + struct icmp6hdr _icmph; 1838 1839 1839 1840 if (likely(outer_iph->nexthdr != IPPROTO_ICMPV6)) 1840 1841 goto out; 1841 1842 1842 - icmph = icmp6_hdr(skb); 1843 + icmph = skb_header_pointer(skb, skb_transport_offset(skb), 1844 + sizeof(_icmph), &_icmph); 1845 + if (!icmph) 1846 + goto out; 1847 + 1843 1848 if (icmph->icmp6_type != ICMPV6_DEST_UNREACH && 1844 1849 icmph->icmp6_type != ICMPV6_PKT_TOOBIG && 1845 1850 icmph->icmp6_type != ICMPV6_TIME_EXCEED &&
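Using skb_header_pointer() instead of icmp6_hdr() matters because the ICMPv6 header may live outside the skb's linear area: the helper copies the bytes into caller storage and returns NULL when the packet is too short, instead of dereferencing a raw offset. A userspace model of that contract (a flat buffer stands in for the skb):

#include <stddef.h>
#include <string.h>

static const void *header_pointer(const unsigned char *pkt, size_t pkt_len,
                                  size_t offset, size_t len, void *buffer)
{
        if (offset > pkt_len || len > pkt_len - offset)
                return NULL;                    /* truncated packet */
        memcpy(buffer, pkt + offset, len);      /* a real skb may be paged */
        return buffer;
}

int main(void)
{
        unsigned char pkt[40] = { 0 };
        unsigned char icmph[8];

        /* offset past the end: rejected instead of read out of bounds */
        return header_pointer(pkt, sizeof(pkt), 48, sizeof(icmph), icmph)
                ? 1 : 0;
}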
+2 -2
net/ipv6/udp.c
··· 148 148 bool dev_match = (sk->sk_bound_dev_if == dif || 149 149 sk->sk_bound_dev_if == sdif); 150 150 151 - if (exact_dif && !dev_match) 151 + if (!dev_match) 152 152 return -1; 153 - if (sk->sk_bound_dev_if && dev_match) 153 + if (sk->sk_bound_dev_if) 154 154 score++; 155 155 } 156 156
+3
net/ipv6/xfrm6_tunnel.c
··· 341 341 struct xfrm6_tunnel_net *xfrm6_tn = xfrm6_tunnel_pernet(net); 342 342 unsigned int i; 343 343 344 + xfrm_state_flush(net, IPSEC_PROTO_ANY, false); 345 + xfrm_flush_gc(); 346 + 344 347 for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++) 345 348 WARN_ON_ONCE(!hlist_empty(&xfrm6_tn->spi_byaddr[i])); 346 349
+35 -10
net/key/af_key.c
··· 437 437 return 0; 438 438 } 439 439 440 + static inline int sadb_key_len(const struct sadb_key *key) 441 + { 442 + int key_bytes = DIV_ROUND_UP(key->sadb_key_bits, 8); 443 + 444 + return DIV_ROUND_UP(sizeof(struct sadb_key) + key_bytes, 445 + sizeof(uint64_t)); 446 + } 447 + 448 + static int verify_key_len(const void *p) 449 + { 450 + const struct sadb_key *key = p; 451 + 452 + if (sadb_key_len(key) > key->sadb_key_len) 453 + return -EINVAL; 454 + 455 + return 0; 456 + } 457 + 440 458 static inline int pfkey_sec_ctx_len(const struct sadb_x_sec_ctx *sec_ctx) 441 459 { 442 460 return DIV_ROUND_UP(sizeof(struct sadb_x_sec_ctx) + ··· 551 533 return -EINVAL; 552 534 if (ext_hdrs[ext_type-1] != NULL) 553 535 return -EINVAL; 554 - if (ext_type == SADB_EXT_ADDRESS_SRC || 555 - ext_type == SADB_EXT_ADDRESS_DST || 556 - ext_type == SADB_EXT_ADDRESS_PROXY || 557 - ext_type == SADB_X_EXT_NAT_T_OA) { 536 + switch (ext_type) { 537 + case SADB_EXT_ADDRESS_SRC: 538 + case SADB_EXT_ADDRESS_DST: 539 + case SADB_EXT_ADDRESS_PROXY: 540 + case SADB_X_EXT_NAT_T_OA: 558 541 if (verify_address_len(p)) 559 542 return -EINVAL; 560 - } 561 - if (ext_type == SADB_X_EXT_SEC_CTX) { 543 + break; 544 + case SADB_X_EXT_SEC_CTX: 562 545 if (verify_sec_ctx_len(p)) 563 546 return -EINVAL; 547 + break; 548 + case SADB_EXT_KEY_AUTH: 549 + case SADB_EXT_KEY_ENCRYPT: 550 + if (verify_key_len(p)) 551 + return -EINVAL; 552 + break; 553 + default: 554 + break; 564 555 } 565 556 ext_hdrs[ext_type-1] = (void *) p; 566 557 } ··· 1131 1104 key = ext_hdrs[SADB_EXT_KEY_AUTH - 1]; 1132 1105 if (key != NULL && 1133 1106 sa->sadb_sa_auth != SADB_X_AALG_NULL && 1134 - ((key->sadb_key_bits+7) / 8 == 0 || 1135 - (key->sadb_key_bits+7) / 8 > key->sadb_key_len * sizeof(uint64_t))) 1107 + key->sadb_key_bits == 0) 1136 1108 return ERR_PTR(-EINVAL); 1137 1109 key = ext_hdrs[SADB_EXT_KEY_ENCRYPT-1]; 1138 1110 if (key != NULL && 1139 1111 sa->sadb_sa_encrypt != SADB_EALG_NULL && 1140 - ((key->sadb_key_bits+7) / 8 == 0 || 1141 - (key->sadb_key_bits+7) / 8 > key->sadb_key_len * sizeof(uint64_t))) 1112 + key->sadb_key_bits == 0) 1142 1113 return ERR_PTR(-EINVAL); 1143 1114 1144 1115 x = xfrm_state_alloc(net);
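verify_key_len() enforces that an SADB key extension is large enough to actually contain the key bits it claims, replacing the open-coded checks in the SA parser above. A userspace model of the check (struct layout per RFC 2367; extension lengths are counted in 8-byte units):

#include <stdint.h>
#include <stdio.h>

struct sadb_key {
        uint16_t sadb_key_len;          /* total ext length, 8-byte units */
        uint16_t sadb_key_exttype;
        uint16_t sadb_key_bits;         /* key material length, in bits */
        uint16_t sadb_key_reserved;
};

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static int verify_key_len(const struct sadb_key *key)
{
        size_t key_bytes = DIV_ROUND_UP(key->sadb_key_bits, 8u);
        size_t units = DIV_ROUND_UP(sizeof(*key) + key_bytes, 8u);

        return units > key->sadb_key_len ? -22 /* -EINVAL */ : 0;
}

int main(void)
{
        /* claims 512 key bits (64 bytes + 8-byte header = 9 units) but
         * declares only 2 units of extension: must be rejected */
        struct sadb_key k = { .sadb_key_len = 2, .sadb_key_bits = 512 };

        printf("%d\n", verify_key_len(&k));     /* -22 */
        return 0;
}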
+3
net/llc/af_llc.c
··· 930 930 if (size > llc->dev->mtu) 931 931 size = llc->dev->mtu; 932 932 copied = size - hdrlen; 933 + rc = -EINVAL; 934 + if (copied < 0) 935 + goto release; 933 936 release_sock(sk); 934 937 skb = sock_alloc_send_skb(sk, size, noblock, &rc); 935 938 lock_sock(sk);
+4
net/mac80211/agg-tx.c
··· 8 8 * Copyright 2007, Michael Wu <flamingice@sourmilk.net> 9 9 * Copyright 2007-2010, Intel Corporation 10 10 * Copyright(c) 2015-2017 Intel Deutschland GmbH 11 + * Copyright (C) 2018 Intel Corporation 11 12 * 12 13 * This program is free software; you can redistribute it and/or modify 13 14 * it under the terms of the GNU General Public License version 2 as ··· 970 969 ieee80211_agg_tx_operational(local, sta, tid); 971 970 972 971 sta->ampdu_mlme.addba_req_num[tid] = 0; 972 + 973 + tid_tx->timeout = 974 + le16_to_cpu(mgmt->u.action.u.addba_resp.timeout); 973 975 974 976 if (tid_tx->timeout) { 975 977 mod_timer(&tid_tx->session_timer,
+19 -8
net/mac80211/mlme.c
··· 36 36 #define IEEE80211_AUTH_TIMEOUT (HZ / 5) 37 37 #define IEEE80211_AUTH_TIMEOUT_LONG (HZ / 2) 38 38 #define IEEE80211_AUTH_TIMEOUT_SHORT (HZ / 10) 39 + #define IEEE80211_AUTH_TIMEOUT_SAE (HZ * 2) 39 40 #define IEEE80211_AUTH_MAX_TRIES 3 40 41 #define IEEE80211_AUTH_WAIT_ASSOC (HZ * 5) 41 42 #define IEEE80211_ASSOC_TIMEOUT (HZ / 5) ··· 1788 1787 params[ac].acm = acm; 1789 1788 params[ac].uapsd = uapsd; 1790 1789 1791 - if (params->cw_min == 0 || 1790 + if (params[ac].cw_min == 0 || 1792 1791 params[ac].cw_min > params[ac].cw_max) { 1793 1792 sdata_info(sdata, 1794 1793 "AP has invalid WMM params (CWmin/max=%d/%d for ACI %d), using defaults\n", ··· 3815 3814 tx_flags); 3816 3815 3817 3816 if (tx_flags == 0) { 3818 - auth_data->timeout = jiffies + IEEE80211_AUTH_TIMEOUT; 3819 - auth_data->timeout_started = true; 3820 - run_again(sdata, auth_data->timeout); 3817 + if (auth_data->algorithm == WLAN_AUTH_SAE) 3818 + auth_data->timeout = jiffies + 3819 + IEEE80211_AUTH_TIMEOUT_SAE; 3820 + else 3821 + auth_data->timeout = jiffies + IEEE80211_AUTH_TIMEOUT; 3821 3822 } else { 3822 3823 auth_data->timeout = 3823 3824 round_jiffies_up(jiffies + IEEE80211_AUTH_TIMEOUT_LONG); 3824 - auth_data->timeout_started = true; 3825 - run_again(sdata, auth_data->timeout); 3826 3825 } 3826 + 3827 + auth_data->timeout_started = true; 3828 + run_again(sdata, auth_data->timeout); 3827 3829 3828 3830 return 0; 3829 3831 } ··· 3898 3894 ifmgd->status_received = false; 3899 3895 if (ifmgd->auth_data && ieee80211_is_auth(fc)) { 3900 3896 if (status_acked) { 3901 - ifmgd->auth_data->timeout = 3902 - jiffies + IEEE80211_AUTH_TIMEOUT_SHORT; 3897 + if (ifmgd->auth_data->algorithm == 3898 + WLAN_AUTH_SAE) 3899 + ifmgd->auth_data->timeout = 3900 + jiffies + 3901 + IEEE80211_AUTH_TIMEOUT_SAE; 3902 + else 3903 + ifmgd->auth_data->timeout = 3904 + jiffies + 3905 + IEEE80211_AUTH_TIMEOUT_SHORT; 3903 3906 run_again(sdata, ifmgd->auth_data->timeout); 3904 3907 } else { 3905 3908 ifmgd->auth_data->timeout = jiffies - 1;
+2 -1
net/mac80211/tx.c
··· 4 4 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz> 5 5 * Copyright 2007 Johannes Berg <johannes@sipsolutions.net> 6 6 * Copyright 2013-2014 Intel Mobile Communications GmbH 7 + * Copyright (C) 2018 Intel Corporation 7 8 * 8 9 * This program is free software; you can redistribute it and/or modify 9 10 * it under the terms of the GNU General Public License version 2 as ··· 1136 1135 } 1137 1136 1138 1137 /* reset session timer */ 1139 - if (reset_agg_timer && tid_tx->timeout) 1138 + if (reset_agg_timer) 1140 1139 tid_tx->last_tx = jiffies; 1141 1140 1142 1141 return queued;
+3 -3
net/netlink/af_netlink.c
··· 2606 2606 { 2607 2607 if (v == SEQ_START_TOKEN) { 2608 2608 seq_puts(seq, 2609 - "sk Eth Pid Groups " 2610 - "Rmem Wmem Dump Locks Drops Inode\n"); 2609 + "sk Eth Pid Groups " 2610 + "Rmem Wmem Dump Locks Drops Inode\n"); 2611 2611 } else { 2612 2612 struct sock *s = v; 2613 2613 struct netlink_sock *nlk = nlk_sk(s); 2614 2614 2615 - seq_printf(seq, "%pK %-3d %-6u %08x %-8d %-8d %d %-8d %-8d %-8lu\n", 2615 + seq_printf(seq, "%pK %-3d %-10u %08x %-8d %-8d %-5d %-8d %-8d %-8lu\n", 2616 2616 s, 2617 2617 s->sk_protocol, 2618 2618 nlk->portid,
+4
net/nsh/nsh.c
··· 57 57 return -ENOMEM; 58 58 nh = (struct nshhdr *)(skb->data); 59 59 length = nsh_hdr_len(nh); 60 + if (length < NSH_BASE_HDR_LEN) 61 + return -EINVAL; 60 62 inner_proto = tun_p_to_eth_p(nh->np); 61 63 if (!pskb_may_pull(skb, length)) 62 64 return -ENOMEM; ··· 92 90 if (unlikely(!pskb_may_pull(skb, NSH_BASE_HDR_LEN))) 93 91 goto out; 94 92 nsh_len = nsh_hdr_len(nsh_hdr(skb)); 93 + if (nsh_len < NSH_BASE_HDR_LEN) 94 + goto out; 95 95 if (unlikely(!pskb_may_pull(skb, nsh_len))) 96 96 goto out; 97 97
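Both hunks enforce the same invariant: the NSH header length is read from a 6-bit field inside the packet itself, so it must be at least the fixed base header size and no more than the bytes actually present. A standalone sketch of the validation (the field extraction and the value of NSH_BASE_HDR_LEN follow the kernel's nsh.h, but treat them as assumptions):

#include <stdint.h>
#include <stdio.h>

#define NSH_BASE_HDR_LEN 8

static int nsh_hdr_len_ok(const uint8_t *pkt, size_t avail, size_t *len_out)
{
        size_t length;

        if (avail < NSH_BASE_HDR_LEN)
                return 0;
        length = (size_t)(pkt[1] & 0x3f) * 4;   /* length in 4-byte words */
        if (length < NSH_BASE_HDR_LEN || length > avail)
                return 0;                       /* header lies about its size */
        *len_out = length;
        return 1;
}

int main(void)
{
        uint8_t pkt[32] = { 0 };
        size_t len;

        pkt[1] = 1;     /* claims 4 bytes: smaller than the base header */
        printf("%d\n", nsh_hdr_len_ok(pkt, sizeof(pkt), &len));  /* 0 */
        return 0;
}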
+3 -6
net/openvswitch/flow_netlink.c
··· 1712 1712 1713 1713 /* The nlattr stream should already have been validated */ 1714 1714 nla_for_each_nested(nla, attr, rem) { 1715 - if (tbl[nla_type(nla)].len == OVS_ATTR_NESTED) { 1716 - if (tbl[nla_type(nla)].next) 1717 - tbl = tbl[nla_type(nla)].next; 1718 - nlattr_set(nla, val, tbl); 1719 - } else { 1715 + if (tbl[nla_type(nla)].len == OVS_ATTR_NESTED) 1716 + nlattr_set(nla, val, tbl[nla_type(nla)].next ? : tbl); 1717 + else 1720 1718 memset(nla_data(nla), val, nla_len(nla)); 1721 - } 1722 1719 1723 1720 if (nla_type(nla) == OVS_KEY_ATTR_CT_STATE) 1724 1721 *(u32 *)nla_data(nla) &= CT_SUPPORTED_MASK;
+2 -1
net/rds/ib_cm.c
··· 547 547 rdsdebug("conn %p pd %p cq %p %p\n", conn, ic->i_pd, 548 548 ic->i_send_cq, ic->i_recv_cq); 549 549 550 - return ret; 550 + goto out; 551 551 552 552 sends_out: 553 553 vfree(ic->i_sends); ··· 572 572 ic->i_send_cq = NULL; 573 573 rds_ibdev_out: 574 574 rds_ib_remove_conn(rds_ibdev, conn); 575 + out: 575 576 rds_ib_dev_put(rds_ibdev); 576 577 577 578 return ret;
+1
net/rds/recv.c
··· 558 558 struct rds_cmsg_rx_trace t; 559 559 int i, j; 560 560 561 + memset(&t, 0, sizeof(t)); 561 562 inc->i_rx_lat_trace[RDS_MSG_RX_CMSG] = local_clock(); 562 563 t.rx_traces = rs->rs_rx_traces; 563 564 for (i = 0; i < rs->rs_rx_traces; i++) {
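The memset closes a kernel-stack infoleak: only rs_rx_traces entries of the on-stack struct are filled before it is copied to userspace, so the unwritten tail and any padding must be zeroed first. A userspace model of the bug class (the layout is illustrative):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct rx_trace {
        uint8_t rx_traces;
        uint8_t rx_trace_pos[3];
        uint64_t rx_trace[3];
};

static void fill_trace(struct rx_trace *t, int n)
{
        memset(t, 0, sizeof(*t));       /* the added line: no stale bytes */
        t->rx_traces = (uint8_t)n;
        for (int i = 0; i < n; i++)
                t->rx_trace[i] = 42;    /* only n of 3 slots are written */
}

int main(void)
{
        struct rx_trace t;

        fill_trace(&t, 1);
        /* everything past the first slot is now guaranteed zero */
        printf("%llu %llu\n", (unsigned long long)t.rx_trace[1],
               (unsigned long long)t.rx_trace[2]);
        return 0;
}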
+6 -1
net/rfkill/rfkill-gpio.c
··· 137 137 138 138 ret = rfkill_register(rfkill->rfkill_dev); 139 139 if (ret < 0) 140 - return ret; 140 + goto err_destroy; 141 141 142 142 platform_set_drvdata(pdev, rfkill); 143 143 144 144 dev_info(&pdev->dev, "%s device registered.\n", rfkill->name); 145 145 146 146 return 0; 147 + 148 + err_destroy: 149 + rfkill_destroy(rfkill->rfkill_dev); 150 + 151 + return ret; 147 152 } 148 153 149 154 static int rfkill_gpio_remove(struct platform_device *pdev)
+1 -1
net/rxrpc/af_rxrpc.c
··· 313 313 memset(&cp, 0, sizeof(cp)); 314 314 cp.local = rx->local; 315 315 cp.key = key; 316 - cp.security_level = 0; 316 + cp.security_level = rx->min_sec_level; 317 317 cp.exclusive = false; 318 318 cp.upgrade = upgrade; 319 319 cp.service_id = srx->srx_service;
+1
net/rxrpc/ar-internal.h
··· 476 476 RXRPC_CALL_SEND_PING, /* A ping will need to be sent */ 477 477 RXRPC_CALL_PINGING, /* Ping in process */ 478 478 RXRPC_CALL_RETRANS_TIMEOUT, /* Retransmission due to timeout occurred */ 479 + RXRPC_CALL_BEGAN_RX_TIMER, /* We began the expect_rx_by timer */ 479 480 }; 480 481 481 482 /*
+8 -3
net/rxrpc/conn_event.c
··· 40 40 } __attribute__((packed)) pkt; 41 41 struct rxrpc_ackinfo ack_info; 42 42 size_t len; 43 - int ioc; 43 + int ret, ioc; 44 44 u32 serial, mtu, call_id, padding; 45 45 46 46 _enter("%d", conn->debug_id); ··· 135 135 break; 136 136 } 137 137 138 - kernel_sendmsg(conn->params.local->socket, &msg, iov, ioc, len); 138 + ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, ioc, len); 139 139 conn->params.peer->last_tx_at = ktime_get_real(); 140 + if (ret < 0) 141 + trace_rxrpc_tx_fail(conn->debug_id, serial, ret, 142 + rxrpc_tx_fail_call_final_resend); 143 + 140 144 _leave(""); 141 - return; 142 145 } 143 146 144 147 /* ··· 239 236 240 237 ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len); 241 238 if (ret < 0) { 239 + trace_rxrpc_tx_fail(conn->debug_id, serial, ret, 240 + rxrpc_tx_fail_conn_abort); 242 241 _debug("sendmsg failed: %d", ret); 243 242 return -EAGAIN; 244 243 }
+1 -1
net/rxrpc/input.c
··· 971 971 if (timo) { 972 972 unsigned long now = jiffies, expect_rx_by; 973 973 974 - expect_rx_by = jiffies + timo; 974 + expect_rx_by = now + timo; 975 975 WRITE_ONCE(call->expect_rx_by, expect_rx_by); 976 976 rxrpc_reduce_call_timer(call, expect_rx_by, now, 977 977 rxrpc_timer_set_for_normal);
+2 -1
net/rxrpc/local_event.c
··· 71 71 72 72 ret = kernel_sendmsg(local->socket, &msg, iov, 2, len); 73 73 if (ret < 0) 74 - _debug("sendmsg failed: %d", ret); 74 + trace_rxrpc_tx_fail(local->debug_id, 0, ret, 75 + rxrpc_tx_fail_version_reply); 75 76 76 77 _leave(""); 77 78 }
+42 -15
net/rxrpc/local_object.c
··· 134 134 } 135 135 } 136 136 137 - /* we want to receive ICMP errors */ 138 - opt = 1; 139 - ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR, 140 - (char *) &opt, sizeof(opt)); 141 - if (ret < 0) { 142 - _debug("setsockopt failed"); 143 - goto error; 144 - } 137 + switch (local->srx.transport.family) { 138 + case AF_INET: 139 + /* we want to receive ICMP errors */ 140 + opt = 1; 141 + ret = kernel_setsockopt(local->socket, SOL_IP, IP_RECVERR, 142 + (char *) &opt, sizeof(opt)); 143 + if (ret < 0) { 144 + _debug("setsockopt failed"); 145 + goto error; 146 + } 145 147 146 - /* we want to set the don't fragment bit */ 147 - opt = IP_PMTUDISC_DO; 148 - ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER, 149 - (char *) &opt, sizeof(opt)); 150 - if (ret < 0) { 151 - _debug("setsockopt failed"); 152 - goto error; 148 + /* we want to set the don't fragment bit */ 149 + opt = IP_PMTUDISC_DO; 150 + ret = kernel_setsockopt(local->socket, SOL_IP, IP_MTU_DISCOVER, 151 + (char *) &opt, sizeof(opt)); 152 + if (ret < 0) { 153 + _debug("setsockopt failed"); 154 + goto error; 155 + } 156 + break; 157 + 158 + case AF_INET6: 159 + /* we want to receive ICMP errors */ 160 + opt = 1; 161 + ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_RECVERR, 162 + (char *) &opt, sizeof(opt)); 163 + if (ret < 0) { 164 + _debug("setsockopt failed"); 165 + goto error; 166 + } 167 + 168 + /* we want to set the don't fragment bit */ 169 + opt = IPV6_PMTUDISC_DO; 170 + ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER, 171 + (char *) &opt, sizeof(opt)); 172 + if (ret < 0) { 173 + _debug("setsockopt failed"); 174 + goto error; 175 + } 176 + break; 177 + 178 + default: 179 + BUG(); 153 180 } 154 181 155 182 /* set the socket up */
+32 -2
net/rxrpc/output.c
··· 210 210 if (ping) 211 211 call->ping_time = now; 212 212 conn->params.peer->last_tx_at = ktime_get_real(); 213 + if (ret < 0) 214 + trace_rxrpc_tx_fail(call->debug_id, serial, ret, 215 + rxrpc_tx_fail_call_ack); 213 216 214 217 if (call->state < RXRPC_CALL_COMPLETE) { 215 218 if (ret < 0) { ··· 297 294 ret = kernel_sendmsg(conn->params.local->socket, 298 295 &msg, iov, 1, sizeof(pkt)); 299 296 conn->params.peer->last_tx_at = ktime_get_real(); 297 + if (ret < 0) 298 + trace_rxrpc_tx_fail(call->debug_id, serial, ret, 299 + rxrpc_tx_fail_call_abort); 300 + 300 301 301 302 rxrpc_put_connection(conn); 302 303 return ret; ··· 394 387 conn->params.peer->last_tx_at = ktime_get_real(); 395 388 396 389 up_read(&conn->params.local->defrag_sem); 390 + if (ret < 0) 391 + trace_rxrpc_tx_fail(call->debug_id, serial, ret, 392 + rxrpc_tx_fail_call_data_nofrag); 397 393 if (ret == -EMSGSIZE) 398 394 goto send_fragmentable; 399 395 ··· 423 413 rxrpc_reduce_call_timer(call, ack_lost_at, nowj, 424 414 rxrpc_timer_set_for_lost_ack); 425 415 } 416 + } 417 + 418 + if (sp->hdr.seq == 1 && 419 + !test_and_set_bit(RXRPC_CALL_BEGAN_RX_TIMER, 420 + &call->flags)) { 421 + unsigned long nowj = jiffies, expect_rx_by; 422 + 423 + expect_rx_by = nowj + call->next_rx_timo; 424 + WRITE_ONCE(call->expect_rx_by, expect_rx_by); 425 + rxrpc_reduce_call_timer(call, expect_rx_by, nowj, 426 + rxrpc_timer_set_for_normal); 426 427 } 427 428 } 428 429 ··· 486 465 #endif 487 466 } 488 467 468 + if (ret < 0) 469 + trace_rxrpc_tx_fail(call->debug_id, serial, ret, 470 + rxrpc_tx_fail_call_data_frag); 471 + 489 472 up_write(&conn->params.local->defrag_sem); 490 473 goto done; 491 474 } ··· 507 482 struct kvec iov[2]; 508 483 size_t size; 509 484 __be32 code; 485 + int ret; 510 486 511 487 _enter("%d", local->debug_id); 512 488 ··· 542 516 whdr.flags ^= RXRPC_CLIENT_INITIATED; 543 517 whdr.flags &= RXRPC_CLIENT_INITIATED; 544 518 545 - kernel_sendmsg(local->socket, &msg, iov, 2, size); 519 + ret = kernel_sendmsg(local->socket, &msg, iov, 2, size); 520 + if (ret < 0) 521 + trace_rxrpc_tx_fail(local->debug_id, 0, ret, 522 + rxrpc_tx_fail_reject); 546 523 } 547 524 548 525 rxrpc_free_skb(skb, rxrpc_skb_rx_freed); ··· 596 567 597 568 ret = kernel_sendmsg(peer->local->socket, &msg, iov, 2, len); 598 569 if (ret < 0) 599 - _debug("sendmsg failed: %d", ret); 570 + trace_rxrpc_tx_fail(peer->debug_id, 0, ret, 571 + rxrpc_tx_fail_version_keepalive); 600 572 601 573 peer->last_tx_at = ktime_get_real(); 602 574 _leave("");
+23 -23
net/rxrpc/peer_event.c
··· 28 28 * Find the peer associated with an ICMP packet. 29 29 */ 30 30 static struct rxrpc_peer *rxrpc_lookup_peer_icmp_rcu(struct rxrpc_local *local, 31 - const struct sk_buff *skb) 31 + const struct sk_buff *skb, 32 + struct sockaddr_rxrpc *srx) 32 33 { 33 34 struct sock_exterr_skb *serr = SKB_EXT_ERR(skb); 34 - struct sockaddr_rxrpc srx; 35 35 36 36 _enter(""); 37 37 38 - memset(&srx, 0, sizeof(srx)); 39 - srx.transport_type = local->srx.transport_type; 40 - srx.transport_len = local->srx.transport_len; 41 - srx.transport.family = local->srx.transport.family; 38 + memset(srx, 0, sizeof(*srx)); 39 + srx->transport_type = local->srx.transport_type; 40 + srx->transport_len = local->srx.transport_len; 41 + srx->transport.family = local->srx.transport.family; 42 42 43 43 /* Can we see an ICMP4 packet on an ICMP6 listening socket? and vice 44 44 * versa? 45 45 */ 46 - switch (srx.transport.family) { 46 + switch (srx->transport.family) { 47 47 case AF_INET: 48 - srx.transport.sin.sin_port = serr->port; 48 + srx->transport.sin.sin_port = serr->port; 49 49 switch (serr->ee.ee_origin) { 50 50 case SO_EE_ORIGIN_ICMP: 51 51 _net("Rx ICMP"); 52 - memcpy(&srx.transport.sin.sin_addr, 52 + memcpy(&srx->transport.sin.sin_addr, 53 53 skb_network_header(skb) + serr->addr_offset, 54 54 sizeof(struct in_addr)); 55 55 break; 56 56 case SO_EE_ORIGIN_ICMP6: 57 57 _net("Rx ICMP6 on v4 sock"); 58 - memcpy(&srx.transport.sin.sin_addr, 58 + memcpy(&srx->transport.sin.sin_addr, 59 59 skb_network_header(skb) + serr->addr_offset + 12, 60 60 sizeof(struct in_addr)); 61 61 break; 62 62 default: 63 - memcpy(&srx.transport.sin.sin_addr, &ip_hdr(skb)->saddr, 63 + memcpy(&srx->transport.sin.sin_addr, &ip_hdr(skb)->saddr, 64 64 sizeof(struct in_addr)); 65 65 break; 66 66 } ··· 68 68 69 69 #ifdef CONFIG_AF_RXRPC_IPV6 70 70 case AF_INET6: 71 - srx.transport.sin6.sin6_port = serr->port; 71 + srx->transport.sin6.sin6_port = serr->port; 72 72 switch (serr->ee.ee_origin) { 73 73 case SO_EE_ORIGIN_ICMP6: 74 74 _net("Rx ICMP6"); 75 - memcpy(&srx.transport.sin6.sin6_addr, 75 + memcpy(&srx->transport.sin6.sin6_addr, 76 76 skb_network_header(skb) + serr->addr_offset, 77 77 sizeof(struct in6_addr)); 78 78 break; 79 79 case SO_EE_ORIGIN_ICMP: 80 80 _net("Rx ICMP on v6 sock"); 81 - srx.transport.sin6.sin6_addr.s6_addr32[0] = 0; 82 - srx.transport.sin6.sin6_addr.s6_addr32[1] = 0; 83 - srx.transport.sin6.sin6_addr.s6_addr32[2] = htonl(0xffff); 84 - memcpy(srx.transport.sin6.sin6_addr.s6_addr + 12, 81 + srx->transport.sin6.sin6_addr.s6_addr32[0] = 0; 82 + srx->transport.sin6.sin6_addr.s6_addr32[1] = 0; 83 + srx->transport.sin6.sin6_addr.s6_addr32[2] = htonl(0xffff); 84 + memcpy(srx->transport.sin6.sin6_addr.s6_addr + 12, 85 85 skb_network_header(skb) + serr->addr_offset, 86 86 sizeof(struct in_addr)); 87 87 break; 88 88 default: 89 - memcpy(&srx.transport.sin6.sin6_addr, 89 + memcpy(&srx->transport.sin6.sin6_addr, 90 90 &ipv6_hdr(skb)->saddr, 91 91 sizeof(struct in6_addr)); 92 92 break; ··· 98 98 BUG(); 99 99 } 100 100 101 - return rxrpc_lookup_peer_rcu(local, &srx); 101 + return rxrpc_lookup_peer_rcu(local, srx); 102 102 } 103 103 104 104 /* ··· 146 146 void rxrpc_error_report(struct sock *sk) 147 147 { 148 148 struct sock_exterr_skb *serr; 149 + struct sockaddr_rxrpc srx; 149 150 struct rxrpc_local *local = sk->sk_user_data; 150 151 struct rxrpc_peer *peer; 151 152 struct sk_buff *skb; ··· 167 166 } 168 167 169 168 rcu_read_lock(); 170 - peer = rxrpc_lookup_peer_icmp_rcu(local, skb); 169 + peer = rxrpc_lookup_peer_icmp_rcu(local, skb, 
&srx); 171 170 if (peer && !rxrpc_get_peer_maybe(peer)) 172 171 peer = NULL; 173 172 if (!peer) { ··· 176 175 _leave(" [no peer]"); 177 176 return; 178 177 } 178 + 179 + trace_rxrpc_rx_icmp(peer, &serr->ee, &srx); 179 180 180 181 if ((serr->ee.ee_origin == SO_EE_ORIGIN_ICMP && 181 182 serr->ee.ee_type == ICMP_DEST_UNREACH && ··· 211 208 _enter(""); 212 209 213 210 ee = &serr->ee; 214 - 215 - _net("Rx Error o=%d t=%d c=%d e=%d", 216 - ee->ee_origin, ee->ee_type, ee->ee_code, ee->ee_errno); 217 211 218 212 err = ee->ee_errno; 219 213
+4 -2
net/rxrpc/rxkad.c
··· 664 664 665 665 ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 2, len); 666 666 if (ret < 0) { 667 - _debug("sendmsg failed: %d", ret); 667 + trace_rxrpc_tx_fail(conn->debug_id, serial, ret, 668 + rxrpc_tx_fail_conn_challenge); 668 669 return -EAGAIN; 669 670 } 670 671 ··· 720 719 721 720 ret = kernel_sendmsg(conn->params.local->socket, &msg, iov, 3, len); 722 721 if (ret < 0) { 723 - _debug("sendmsg failed: %d", ret); 722 + trace_rxrpc_tx_fail(conn->debug_id, serial, ret, 723 + rxrpc_tx_fail_conn_response); 724 724 return -EAGAIN; 725 725 } 726 726
+10
net/rxrpc/sendmsg.c
··· 223 223 224 224 ret = rxrpc_send_data_packet(call, skb, false); 225 225 if (ret < 0) { 226 + switch (ret) { 227 + case -ENETUNREACH: 228 + case -EHOSTUNREACH: 229 + case -ECONNREFUSED: 230 + rxrpc_set_call_completion(call, 231 + RXRPC_CALL_LOCAL_ERROR, 232 + 0, ret); 233 + goto out; 234 + } 226 235 _debug("need instant resend %d", ret); 227 236 rxrpc_instant_resend(call, ix); 228 237 } else { ··· 250 241 rxrpc_timer_set_for_send); 251 242 } 252 243 244 + out: 253 245 rxrpc_free_skb(skb, rxrpc_skb_tx_freed); 254 246 _leave(""); 255 247 }
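Rather than scheduling an instant resend for every failed transmission,
sendmsg.c now completes the call at once when the errno says the peer is
unreachable, since retransmitting cannot help. The classification can be
sketched as a plain errno switch (hedged: rxrpc keys this off the
rxrpc_send_data_packet() return inline, not via a helper like this one):

    #include <errno.h>
    #include <stdbool.h>

    /* Errors that will not heal on retransmission: give up at once. */
    static bool tx_error_is_fatal(int err)
    {
        switch (err) {
        case ENETUNREACH:
        case EHOSTUNREACH:
        case ECONNREFUSED:
            return true;
        default:
            return false;  /* e.g. transient buffer pressure: retry */
        }
    }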
+2 -1
net/sched/act_skbedit.c
··· 121 121 return 0; 122 122 123 123 if (!flags) { 124 - tcf_idr_release(*a, bind); 124 + if (exists) 125 + tcf_idr_release(*a, bind); 125 126 return -EINVAL; 126 127 } 127 128
+4 -1
net/sched/act_skbmod.c
··· 131 131 if (exists && bind) 132 132 return 0; 133 133 134 - if (!lflags) 134 + if (!lflags) { 135 + if (exists) 136 + tcf_idr_release(*a, bind); 135 137 return -EINVAL; 138 + } 136 139 137 140 if (!exists) { 138 141 ret = tcf_idr_create(tn, parm->index, est, a,
+1 -1
net/sched/cls_api.c
··· 152 152 NL_SET_ERR_MSG(extack, "TC classifier not found"); 153 153 err = -ENOENT; 154 154 } 155 - goto errout; 156 155 #endif 156 + goto errout; 157 157 } 158 158 tp->classify = tp->ops->classify; 159 159 tp->protocol = protocol;
+25 -12
net/sched/sch_fq.c
··· 128 128 return f->next == &detached; 129 129 } 130 130 131 + static bool fq_flow_is_throttled(const struct fq_flow *f) 132 + { 133 + return f->next == &throttled; 134 + } 135 + 136 + static void fq_flow_add_tail(struct fq_flow_head *head, struct fq_flow *flow) 137 + { 138 + if (head->first) 139 + head->last->next = flow; 140 + else 141 + head->first = flow; 142 + head->last = flow; 143 + flow->next = NULL; 144 + } 145 + 146 + static void fq_flow_unset_throttled(struct fq_sched_data *q, struct fq_flow *f) 147 + { 148 + rb_erase(&f->rate_node, &q->delayed); 149 + q->throttled_flows--; 150 + fq_flow_add_tail(&q->old_flows, f); 151 + } 152 + 131 153 static void fq_flow_set_throttled(struct fq_sched_data *q, struct fq_flow *f) 132 154 { 133 155 struct rb_node **p = &q->delayed.rb_node, *parent = NULL; ··· 177 155 178 156 static struct kmem_cache *fq_flow_cachep __read_mostly; 179 157 180 - static void fq_flow_add_tail(struct fq_flow_head *head, struct fq_flow *flow) 181 - { 182 - if (head->first) 183 - head->last->next = flow; 184 - else 185 - head->first = flow; 186 - head->last = flow; 187 - flow->next = NULL; 188 - } 189 158 190 159 /* limit number of collected flows per round */ 191 160 #define FQ_GC_MAX 8 ··· 280 267 f->socket_hash != sk->sk_hash)) { 281 268 f->credit = q->initial_quantum; 282 269 f->socket_hash = sk->sk_hash; 270 + if (fq_flow_is_throttled(f)) 271 + fq_flow_unset_throttled(q, f); 283 272 f->time_next_packet = 0ULL; 284 273 } 285 274 return f; ··· 453 438 q->time_next_delayed_flow = f->time_next_packet; 454 439 break; 455 440 } 456 - rb_erase(p, &q->delayed); 457 - q->throttled_flows--; 458 - fq_flow_add_tail(&q->old_flows, f); 441 + fq_flow_unset_throttled(q, f); 459 442 } 460 443 } 461 444
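sch_fq encodes a flow's list membership in the address its ->next pointer
holds, so "is this flow throttled?" is a sentinel comparison, and the new
fq_flow_unset_throttled() keeps the rb-tree erase, the counter update and
the requeue in one place when a reused socket hash resets pacing state.
A sketch of sentinel-based state detection with invented types (the real
code also maintains the delayed rb-tree):

    #include <stdbool.h>

    struct flow { struct flow *next; };

    /* Distinct sentinel addresses encode membership without a
     * separate state field, mirroring fq's detached/throttled marks. */
    static struct flow detached_sentinel, throttled_sentinel;

    static bool flow_is_detached(const struct flow *f)
    {
        return f->next == &detached_sentinel;
    }

    static bool flow_is_throttled(const struct flow *f)
    {
        return f->next == &throttled_sentinel;
    }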
+29 -1
net/sctp/associola.c
··· 1024 1024 struct sctp_endpoint *ep; 1025 1025 struct sctp_chunk *chunk; 1026 1026 struct sctp_inq *inqueue; 1027 - int state; 1027 + int first_time = 1; /* is this the first time through the loop */ 1028 1028 int error = 0; 1029 + int state; 1029 1030 1030 1031 /* The association should be held so we should be safe. */ 1031 1032 ep = asoc->ep; ··· 1037 1036 state = asoc->state; 1038 1037 subtype = SCTP_ST_CHUNK(chunk->chunk_hdr->type); 1039 1038 1039 + /* If the first chunk in the packet is AUTH, do special 1040 + * processing specified in Section 6.3 of SCTP-AUTH spec 1041 + */ 1042 + if (first_time && subtype.chunk == SCTP_CID_AUTH) { 1043 + struct sctp_chunkhdr *next_hdr; 1044 + 1045 + next_hdr = sctp_inq_peek(inqueue); 1046 + if (!next_hdr) 1047 + goto normal; 1048 + 1049 + /* If the next chunk is COOKIE-ECHO, skip the AUTH 1050 + * chunk while saving a pointer to it so we can do 1051 + * Authentication later (during cookie-echo 1052 + * processing). 1053 + */ 1054 + if (next_hdr->type == SCTP_CID_COOKIE_ECHO) { 1055 + chunk->auth_chunk = skb_clone(chunk->skb, 1056 + GFP_ATOMIC); 1057 + chunk->auth = 1; 1058 + continue; 1059 + } 1060 + } 1061 + 1062 + normal: 1040 1063 /* SCTP-AUTH, Section 6.3: 1041 1064 * The receiver has a list of chunk types which it expects 1042 1065 * to be received only after an AUTH-chunk. This list has ··· 1099 1074 /* If there is an error on chunk, discard this packet. */ 1100 1075 if (error && chunk) 1101 1076 chunk->pdiscard = 1; 1077 + 1078 + if (first_time) 1079 + first_time = 0; 1102 1080 } 1103 1081 sctp_association_put(asoc); 1104 1082 }
+1 -1
net/sctp/inqueue.c
··· 217 217 skb_pull(chunk->skb, sizeof(*ch)); 218 218 chunk->subh.v = NULL; /* Subheader is no longer valid. */ 219 219 220 - if (chunk->chunk_end + sizeof(*ch) < skb_tail_pointer(chunk->skb)) { 220 + if (chunk->chunk_end + sizeof(*ch) <= skb_tail_pointer(chunk->skb)) { 221 221 /* This is not a singleton */ 222 222 chunk->singleton = 0; 223 223 } else if (chunk->chunk_end > skb_tail_pointer(chunk->skb)) {
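The inqueue.c change is a one-character off-by-one: another complete
chunk header exists when chunk_end plus one header still fits in the skb,
and "fits" includes ending exactly at the tail pointer, hence <= rather
than <. As a standalone predicate:

    #include <stdbool.h>
    #include <stddef.h>

    /* True if a full header of hdr_len bytes fits between chunk_end
     * and tail; ending exactly at tail counts. */
    static bool has_next_header(const char *chunk_end, const char *tail,
                                size_t hdr_len)
    {
        return chunk_end + hdr_len <= tail;
    }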
+3
net/sctp/ipv6.c
··· 895 895 if (sctp_is_any(sk, addr1) || sctp_is_any(sk, addr2)) 896 896 return 1; 897 897 898 + if (addr1->sa.sa_family == AF_INET && addr2->sa.sa_family == AF_INET) 899 + return addr1->v4.sin_addr.s_addr == addr2->v4.sin_addr.s_addr; 900 + 898 901 return __sctp_v6_cmp_addr(addr1, addr2); 899 902 } 900 903
+1 -1
net/sctp/sm_make_chunk.c
··· 1152 1152 const struct sctp_association *asoc, 1153 1153 const struct sctp_chunk *chunk) 1154 1154 { 1155 - static const char error[] = "Association exceeded its max_retans count"; 1155 + static const char error[] = "Association exceeded its max_retrans count"; 1156 1156 size_t payload_len = sizeof(error) + sizeof(struct sctp_errhdr); 1157 1157 struct sctp_chunk *retval; 1158 1158
+53 -41
net/sctp/sm_statefuns.c
··· 153 153 struct sctp_cmd_seq *commands); 154 154 155 155 static enum sctp_ierror sctp_sf_authenticate( 156 - struct net *net, 157 - const struct sctp_endpoint *ep, 158 156 const struct sctp_association *asoc, 159 - const union sctp_subtype type, 160 157 struct sctp_chunk *chunk); 161 158 162 159 static enum sctp_disposition __sctp_sf_do_9_1_abort( ··· 623 626 return SCTP_DISPOSITION_CONSUME; 624 627 } 625 628 629 + static bool sctp_auth_chunk_verify(struct net *net, struct sctp_chunk *chunk, 630 + const struct sctp_association *asoc) 631 + { 632 + struct sctp_chunk auth; 633 + 634 + if (!chunk->auth_chunk) 635 + return true; 636 + 637 + /* SCTP-AUTH: auth_chunk pointer is only set when the cookie-echo 638 + * is supposed to be authenticated and we have to do delayed 639 + * authentication. We've just recreated the association using 640 + * the information in the cookie and now it's much easier to 641 + * do the authentication. 642 + */ 643 + 644 + /* Make sure that we and the peer are AUTH capable */ 645 + if (!net->sctp.auth_enable || !asoc->peer.auth_capable) 646 + return false; 647 + 648 + /* set-up our fake chunk so that we can process it */ 649 + auth.skb = chunk->auth_chunk; 650 + auth.asoc = chunk->asoc; 651 + auth.sctp_hdr = chunk->sctp_hdr; 652 + auth.chunk_hdr = (struct sctp_chunkhdr *) 653 + skb_push(chunk->auth_chunk, 654 + sizeof(struct sctp_chunkhdr)); 655 + skb_pull(chunk->auth_chunk, sizeof(struct sctp_chunkhdr)); 656 + auth.transport = chunk->transport; 657 + 658 + return sctp_sf_authenticate(asoc, &auth) == SCTP_IERROR_NO_ERROR; 659 + } 660 + 626 661 /* 627 662 * Respond to a normal COOKIE ECHO chunk. 628 663 * We are the side that is being asked for an association. ··· 792 763 if (error) 793 764 goto nomem_init; 794 765 795 - /* SCTP-AUTH: auth_chunk pointer is only set when the cookie-echo 796 - * is supposed to be authenticated and we have to do delayed 797 - * authentication. We've just recreated the association using 798 - * the information in the cookie and now it's much easier to 799 - * do the authentication. 
800 - */ 801 - if (chunk->auth_chunk) { 802 - struct sctp_chunk auth; 803 - enum sctp_ierror ret; 804 - 805 - /* Make sure that we and the peer are AUTH capable */ 806 - if (!net->sctp.auth_enable || !new_asoc->peer.auth_capable) { 807 - sctp_association_free(new_asoc); 808 - return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); 809 - } 810 - 811 - /* set-up our fake chunk so that we can process it */ 812 - auth.skb = chunk->auth_chunk; 813 - auth.asoc = chunk->asoc; 814 - auth.sctp_hdr = chunk->sctp_hdr; 815 - auth.chunk_hdr = (struct sctp_chunkhdr *) 816 - skb_push(chunk->auth_chunk, 817 - sizeof(struct sctp_chunkhdr)); 818 - skb_pull(chunk->auth_chunk, sizeof(struct sctp_chunkhdr)); 819 - auth.transport = chunk->transport; 820 - 821 - ret = sctp_sf_authenticate(net, ep, new_asoc, type, &auth); 822 - if (ret != SCTP_IERROR_NO_ERROR) { 823 - sctp_association_free(new_asoc); 824 - return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); 825 - } 766 + if (!sctp_auth_chunk_verify(net, chunk, new_asoc)) { 767 + sctp_association_free(new_asoc); 768 + return sctp_sf_pdiscard(net, ep, asoc, type, arg, commands); 826 769 } 827 770 828 771 repl = sctp_make_cookie_ack(new_asoc, chunk); ··· 1795 1794 GFP_ATOMIC)) 1796 1795 goto nomem; 1797 1796 1797 + if (sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC)) 1798 + goto nomem; 1799 + 1800 + if (!sctp_auth_chunk_verify(net, chunk, new_asoc)) 1801 + return SCTP_DISPOSITION_DISCARD; 1802 + 1798 1803 /* Make sure no new addresses are being added during the 1799 1804 * restart. Though this is a pretty complicated attack 1800 1805 * since you'd have to get inside the cookie. 1801 1806 */ 1802 - if (!sctp_sf_check_restart_addrs(new_asoc, asoc, chunk, commands)) { 1807 + if (!sctp_sf_check_restart_addrs(new_asoc, asoc, chunk, commands)) 1803 1808 return SCTP_DISPOSITION_CONSUME; 1804 - } 1805 1809 1806 1810 /* If the endpoint is in the SHUTDOWN-ACK-SENT state and recognizes 1807 1811 * the peer has restarted (Action A), it MUST NOT setup a new ··· 1912 1906 GFP_ATOMIC)) 1913 1907 goto nomem; 1914 1908 1909 + if (sctp_auth_asoc_init_active_key(new_asoc, GFP_ATOMIC)) 1910 + goto nomem; 1911 + 1912 + if (!sctp_auth_chunk_verify(net, chunk, new_asoc)) 1913 + return SCTP_DISPOSITION_DISCARD; 1914 + 1915 1915 /* Update the content of current association. */ 1916 1916 sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc)); 1917 1917 sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE, ··· 2015 2003 * a COOKIE ACK. 2016 2004 */ 2017 2005 2006 + if (!sctp_auth_chunk_verify(net, chunk, asoc)) 2007 + return SCTP_DISPOSITION_DISCARD; 2008 + 2018 2009 /* Don't accidentally move back into established state. */ 2019 2010 if (asoc->state < SCTP_STATE_ESTABLISHED) { 2020 2011 sctp_add_cmd_sf(commands, SCTP_CMD_TIMER_STOP, ··· 2065 2050 } 2066 2051 } 2067 2052 2068 - repl = sctp_make_cookie_ack(new_asoc, chunk); 2053 + repl = sctp_make_cookie_ack(asoc, chunk); 2069 2054 if (!repl) 2070 2055 goto nomem; 2071 2056 ··· 4180 4165 * The return value is the disposition of the chunk. 
4181 4166 */ 4182 4167 static enum sctp_ierror sctp_sf_authenticate( 4183 - struct net *net, 4184 - const struct sctp_endpoint *ep, 4185 4168 const struct sctp_association *asoc, 4186 - const union sctp_subtype type, 4187 4169 struct sctp_chunk *chunk) 4188 4170 { 4189 4171 struct sctp_shared_key *sh_key = NULL; ··· 4281 4269 commands); 4282 4270 4283 4271 auth_hdr = (struct sctp_authhdr *)chunk->skb->data; 4284 - error = sctp_sf_authenticate(net, ep, asoc, type, chunk); 4272 + error = sctp_sf_authenticate(asoc, chunk); 4285 4273 switch (error) { 4286 4274 case SCTP_IERROR_AUTH_BAD_HMAC: 4287 4275 /* Generate the ERROR chunk and discard the rest
+2
net/sctp/stream.c
··· 240 240 241 241 new->out = NULL; 242 242 new->in = NULL; 243 + new->outcnt = 0; 244 + new->incnt = 0; 243 245 } 244 246 245 247 static int sctp_send_reconf(struct sctp_association *asoc,
-1
net/sctp/ulpevent.c
··· 715 715 return event; 716 716 717 717 fail_mark: 718 - sctp_chunk_put(chunk); 719 718 kfree_skb(skb); 720 719 fail: 721 720 return NULL;
+29 -32
net/smc/af_smc.c
··· 292 292 smc_copy_sock_settings(&smc->sk, smc->clcsock->sk, SK_FLAGS_CLC_TO_SMC); 293 293 } 294 294 295 + /* register a new rmb */ 296 + static int smc_reg_rmb(struct smc_link *link, struct smc_buf_desc *rmb_desc) 297 + { 298 + /* register memory region for new rmb */ 299 + if (smc_wr_reg_send(link, rmb_desc->mr_rx[SMC_SINGLE_LINK])) { 300 + rmb_desc->regerr = 1; 301 + return -EFAULT; 302 + } 303 + return 0; 304 + } 305 + 295 306 static int smc_clnt_conf_first_link(struct smc_sock *smc) 296 307 { 297 308 struct smc_link_group *lgr = smc->conn.lgr; ··· 332 321 333 322 smc_wr_remember_qp_attr(link); 334 323 335 - rc = smc_wr_reg_send(link, 336 - smc->conn.rmb_desc->mr_rx[SMC_SINGLE_LINK]); 337 - if (rc) 324 + if (smc_reg_rmb(link, smc->conn.rmb_desc)) 338 325 return SMC_CLC_DECL_INTERR; 339 326 340 327 /* send CONFIRM LINK response over RoCE fabric */ ··· 482 473 goto decline_rdma_unlock; 483 474 } 484 475 } else { 485 - struct smc_buf_desc *buf_desc = smc->conn.rmb_desc; 486 - 487 - if (!buf_desc->reused) { 488 - /* register memory region for new rmb */ 489 - rc = smc_wr_reg_send(link, 490 - buf_desc->mr_rx[SMC_SINGLE_LINK]); 491 - if (rc) { 476 + if (!smc->conn.rmb_desc->reused) { 477 + if (smc_reg_rmb(link, smc->conn.rmb_desc)) { 492 478 reason_code = SMC_CLC_DECL_INTERR; 493 479 goto decline_rdma_unlock; 494 480 } ··· 723 719 724 720 link = &lgr->lnk[SMC_SINGLE_LINK]; 725 721 726 - rc = smc_wr_reg_send(link, 727 - smc->conn.rmb_desc->mr_rx[SMC_SINGLE_LINK]); 728 - if (rc) 722 + if (smc_reg_rmb(link, smc->conn.rmb_desc)) 729 723 return SMC_CLC_DECL_INTERR; 730 724 731 725 /* send CONFIRM LINK request to client over the RoCE fabric */ ··· 856 854 smc_rx_init(new_smc); 857 855 858 856 if (local_contact != SMC_FIRST_CONTACT) { 859 - struct smc_buf_desc *buf_desc = new_smc->conn.rmb_desc; 860 - 861 - if (!buf_desc->reused) { 862 - /* register memory region for new rmb */ 863 - rc = smc_wr_reg_send(link, 864 - buf_desc->mr_rx[SMC_SINGLE_LINK]); 865 - if (rc) { 857 + if (!new_smc->conn.rmb_desc->reused) { 858 + if (smc_reg_rmb(link, new_smc->conn.rmb_desc)) { 866 859 reason_code = SMC_CLC_DECL_INTERR; 867 860 goto decline_rdma_unlock; 868 861 } ··· 975 978 } 976 979 977 980 out: 978 - if (lsmc->clcsock) { 979 - sock_release(lsmc->clcsock); 980 - lsmc->clcsock = NULL; 981 - } 982 981 release_sock(lsk); 983 982 sock_put(&lsmc->sk); /* sock_hold in smc_listen */ 984 983 } ··· 1163 1170 /* delegate to CLC child sock */ 1164 1171 release_sock(sk); 1165 1172 mask = smc->clcsock->ops->poll(file, smc->clcsock, wait); 1166 - /* if non-blocking connect finished ... */ 1167 1173 lock_sock(sk); 1168 - if ((sk->sk_state == SMC_INIT) && (mask & EPOLLOUT)) { 1169 - sk->sk_err = smc->clcsock->sk->sk_err; 1170 - if (sk->sk_err) { 1171 - mask |= EPOLLERR; 1172 - } else { 1174 + sk->sk_err = smc->clcsock->sk->sk_err; 1175 + if (sk->sk_err) { 1176 + mask |= EPOLLERR; 1177 + } else { 1178 + /* if non-blocking connect finished ... 
*/ 1179 + if (sk->sk_state == SMC_INIT && 1180 + mask & EPOLLOUT && 1181 + smc->clcsock->sk->sk_state != TCP_CLOSE) { 1173 1182 rc = smc_connect_rdma(smc); 1174 1183 if (rc < 0) 1175 1184 mask |= EPOLLERR; ··· 1315 1320 1316 1321 smc = smc_sk(sk); 1317 1322 lock_sock(sk); 1318 - if (sk->sk_state != SMC_ACTIVE) 1323 + if (sk->sk_state != SMC_ACTIVE) { 1324 + release_sock(sk); 1319 1325 goto out; 1326 + } 1327 + release_sock(sk); 1320 1328 if (smc->use_fallback) 1321 1329 rc = kernel_sendpage(smc->clcsock, page, offset, 1322 1330 size, flags); ··· 1327 1329 rc = sock_no_sendpage(sock, page, offset, size, flags); 1328 1330 1329 1331 out: 1330 - release_sock(sk); 1331 1332 return rc; 1332 1333 } 1333 1334
+19 -3
net/smc/smc_core.c
··· 32 32 33 33 static u32 smc_lgr_num; /* unique link group number */ 34 34 35 + static void smc_buf_free(struct smc_buf_desc *buf_desc, struct smc_link *lnk, 36 + bool is_rmb); 37 + 35 38 static void smc_lgr_schedule_free_work(struct smc_link_group *lgr) 36 39 { 37 40 /* client link group creation always follows the server link group ··· 237 234 conn->sndbuf_size = 0; 238 235 } 239 236 if (conn->rmb_desc) { 240 - conn->rmb_desc->reused = true; 241 - conn->rmb_desc->used = 0; 242 - conn->rmbe_size = 0; 237 + if (!conn->rmb_desc->regerr) { 238 + conn->rmb_desc->reused = 1; 239 + conn->rmb_desc->used = 0; 240 + conn->rmbe_size = 0; 241 + } else { 242 + /* buf registration failed, reuse not possible */ 243 + struct smc_link_group *lgr = conn->lgr; 244 + struct smc_link *lnk; 245 + 246 + write_lock_bh(&lgr->rmbs_lock); 247 + list_del(&conn->rmb_desc->list); 248 + write_unlock_bh(&lgr->rmbs_lock); 249 + 250 + lnk = &lgr->lnk[SMC_SINGLE_LINK]; 251 + smc_buf_free(conn->rmb_desc, lnk, true); 252 + } 243 253 } 244 254 } 245 255
+2 -1
net/smc/smc_core.h
··· 123 123 */ 124 124 u32 order; /* allocation order */ 125 125 u32 used; /* currently used / unused */ 126 - bool reused; /* new created / reused */ 126 + u8 reused : 1; /* new created / reused */ 127 + u8 regerr : 1; /* err during registration */ 127 128 }; 128 129 129 130 struct smc_rtoken { /* address/key of remote RMB */
+1 -4
net/sunrpc/xprtrdma/fmr_ops.c
··· 72 72 if (IS_ERR(mr->fmr.fm_mr)) 73 73 goto out_fmr_err; 74 74 75 + INIT_LIST_HEAD(&mr->mr_list); 75 76 return 0; 76 77 77 78 out_fmr_err: ··· 102 101 { 103 102 LIST_HEAD(unmap_list); 104 103 int rc; 105 - 106 - /* Ensure MW is not on any rl_registered list */ 107 - if (!list_empty(&mr->mr_list)) 108 - list_del(&mr->mr_list); 109 104 110 105 kfree(mr->fmr.fm_physaddrs); 111 106 kfree(mr->mr_sg);
+3 -6
net/sunrpc/xprtrdma/frwr_ops.c
··· 110 110 if (!mr->mr_sg) 111 111 goto out_list_err; 112 112 113 + INIT_LIST_HEAD(&mr->mr_list); 113 114 sg_init_table(mr->mr_sg, depth); 114 115 init_completion(&frwr->fr_linv_done); 115 116 return 0; ··· 133 132 frwr_op_release_mr(struct rpcrdma_mr *mr) 134 133 { 135 134 int rc; 136 - 137 - /* Ensure MR is not on any rl_registered list */ 138 - if (!list_empty(&mr->mr_list)) 139 - list_del(&mr->mr_list); 140 135 141 136 rc = ib_dereg_mr(mr->frwr.fr_mr); 142 137 if (rc) ··· 192 195 return; 193 196 194 197 out_release: 195 - pr_err("rpcrdma: FRWR reset failed %d, %p release\n", rc, mr); 198 + pr_err("rpcrdma: FRWR reset failed %d, %p released\n", rc, mr); 196 199 r_xprt->rx_stats.mrs_orphaned++; 197 200 198 201 spin_lock(&r_xprt->rx_buf.rb_mrlock); ··· 473 476 474 477 list_for_each_entry(mr, mrs, mr_list) 475 478 if (mr->mr_handle == rep->rr_inv_rkey) { 476 - list_del(&mr->mr_list); 479 + list_del_init(&mr->mr_list); 477 480 trace_xprtrdma_remoteinv(mr); 478 481 mr->frwr.fr_state = FRWR_IS_INVALID; 479 482 rpcrdma_mr_unmap_and_put(mr);
+5
net/sunrpc/xprtrdma/verbs.c
··· 1254 1254 list_del(&mr->mr_all); 1255 1255 1256 1256 spin_unlock(&buf->rb_mrlock); 1257 + 1258 + /* Ensure MW is not on any rl_registered list */ 1259 + if (!list_empty(&mr->mr_list)) 1260 + list_del(&mr->mr_list); 1261 + 1257 1262 ia->ri_ops->ro_release_mr(mr); 1258 1263 count++; 1259 1264 spin_lock(&buf->rb_mrlock);
+1 -1
net/sunrpc/xprtrdma/xprt_rdma.h
··· 380 380 struct rpcrdma_mr *mr; 381 381 382 382 mr = list_first_entry(list, struct rpcrdma_mr, mr_list); 383 - list_del(&mr->mr_list); 383 + list_del_init(&mr->mr_list); 384 384 return mr; 385 385 } 386 386
+14 -3
net/tipc/node.c
··· 1950 1950 int tipc_nl_node_get_link(struct sk_buff *skb, struct genl_info *info) 1951 1951 { 1952 1952 struct net *net = genl_info_net(info); 1953 + struct nlattr *attrs[TIPC_NLA_LINK_MAX + 1]; 1953 1954 struct tipc_nl_msg msg; 1954 1955 char *name; 1955 1956 int err; ··· 1958 1957 msg.portid = info->snd_portid; 1959 1958 msg.seq = info->snd_seq; 1960 1959 1961 - if (!info->attrs[TIPC_NLA_LINK_NAME]) 1960 + if (!info->attrs[TIPC_NLA_LINK]) 1962 1961 return -EINVAL; 1963 - name = nla_data(info->attrs[TIPC_NLA_LINK_NAME]); 1962 + 1963 + err = nla_parse_nested(attrs, TIPC_NLA_LINK_MAX, 1964 + info->attrs[TIPC_NLA_LINK], 1965 + tipc_nl_link_policy, info->extack); 1966 + if (err) 1967 + return err; 1968 + 1969 + if (!attrs[TIPC_NLA_LINK_NAME]) 1970 + return -EINVAL; 1971 + 1972 + name = nla_data(attrs[TIPC_NLA_LINK_NAME]); 1964 1973 1965 1974 msg.skb = nlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); 1966 1975 if (!msg.skb) ··· 2255 2244 2256 2245 rtnl_lock(); 2257 2246 for (bearer_id = prev_bearer; bearer_id < MAX_BEARERS; bearer_id++) { 2258 - err = __tipc_nl_add_monitor(net, &msg, prev_bearer); 2247 + err = __tipc_nl_add_monitor(net, &msg, bearer_id); 2259 2248 if (err) 2260 2249 break; 2261 2250 }
+2 -1
net/tipc/socket.c
··· 1516 1516 1517 1517 srcaddr->sock.family = AF_TIPC; 1518 1518 srcaddr->sock.addrtype = TIPC_ADDR_ID; 1519 + srcaddr->sock.scope = 0; 1519 1520 srcaddr->sock.addr.id.ref = msg_origport(hdr); 1520 1521 srcaddr->sock.addr.id.node = msg_orignode(hdr); 1521 1522 srcaddr->sock.addr.name.domain = 0; 1522 - srcaddr->sock.scope = 0; 1523 1523 m->msg_namelen = sizeof(struct sockaddr_tipc); 1524 1524 1525 1525 if (!msg_in_group(hdr)) ··· 1528 1528 /* Group message users may also want to know sending member's id */ 1529 1529 srcaddr->member.family = AF_TIPC; 1530 1530 srcaddr->member.addrtype = TIPC_ADDR_NAME; 1531 + srcaddr->member.scope = 0; 1531 1532 srcaddr->member.addr.name.name.type = msg_nametype(hdr); 1532 1533 srcaddr->member.addr.name.name.instance = TIPC_SKB_CB(skb)->orig_member; 1533 1534 srcaddr->member.addr.name.domain = 0;
+12 -7
net/tls/tls_main.c
··· 114 114 size = sg->length - offset; 115 115 offset += sg->offset; 116 116 117 + ctx->in_tcp_sendpages = true; 117 118 while (1) { 118 119 if (sg_is_last(sg)) 119 120 sendpage_flags = flags; ··· 135 134 offset -= sg->offset; 136 135 ctx->partially_sent_offset = offset; 137 136 ctx->partially_sent_record = (void *)sg; 137 + ctx->in_tcp_sendpages = false; 138 138 return ret; 139 139 } 140 140 ··· 150 148 } 151 149 152 150 clear_bit(TLS_PENDING_CLOSED_RECORD, &ctx->flags); 151 + ctx->in_tcp_sendpages = false; 152 + ctx->sk_write_space(sk); 153 153 154 154 return 0; 155 155 } ··· 221 217 { 222 218 struct tls_context *ctx = tls_get_ctx(sk); 223 219 220 + /* We are already sending pages, ignore notification */ 221 + if (ctx->in_tcp_sendpages) 222 + return; 223 + 224 224 if (!sk->sk_write_pending && tls_is_pending_closed_record(ctx)) { 225 225 gfp_t sk_allocation = sk->sk_allocation; 226 226 int rc; ··· 249 241 struct tls_context *ctx = tls_get_ctx(sk); 250 242 long timeo = sock_sndtimeo(sk, 0); 251 243 void (*sk_proto_close)(struct sock *sk, long timeout); 244 + bool free_ctx = false; 252 245 253 246 lock_sock(sk); 254 247 sk_proto_close = ctx->sk_proto_close; 255 248 256 - if (ctx->conf == TLS_HW_RECORD) 257 - goto skip_tx_cleanup; 258 - 259 - if (ctx->conf == TLS_BASE) { 260 - kfree(ctx); 261 - ctx = NULL; 249 + if (ctx->conf == TLS_BASE || ctx->conf == TLS_HW_RECORD) { 250 + free_ctx = true; 262 251 goto skip_tx_cleanup; 263 252 } 264 253 ··· 292 287 /* free ctx for TLS_HW_RECORD, used by tcp_set_state 293 288 * for sk->sk_prot->unhash [tls_hw_unhash] 294 289 */ 295 - if (ctx && ctx->conf == TLS_HW_RECORD) 290 + if (free_ctx) 296 291 kfree(ctx); 297 292 } 298 293
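The tls_main.c fix marks the context while tls_push_sg() is inside
tcp_sendpage(), because the transport can call back into
tls_write_space() from there and recurse into the send path. The guard is
a plain reentrancy flag; a userspace sketch (single-threaded recursion,
so no atomics are needed, matching the socket-locked kernel case):

    #include <stdbool.h>
    #include <stdio.h>

    struct ctx {
        bool in_send;  /* set while we are inside the send loop */
    };

    static void write_space(struct ctx *c)
    {
        if (c->in_send)  /* notification caused by our own send: skip */
            return;
        printf("push more queued data\n");
    }

    static void send_pages(struct ctx *c)
    {
        c->in_send = true;
        /* ... the transport may invoke write_space() from here ... */
        write_space(c);   /* re-entry is now a harmless no-op */
        c->in_send = false;
        write_space(c);   /* later, real notifications still arrive */
    }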
+3
net/wireless/core.c
··· 95 95 96 96 ASSERT_RTNL(); 97 97 98 + if (strlen(newname) > NL80211_WIPHY_NAME_MAXLEN) 99 + return -EINVAL; 100 + 98 101 /* prohibit calling the thing phy%d when %d is not its number */ 99 102 sscanf(newname, PHY_NAME "%d%n", &wiphy_idx, &taken); 100 103 if (taken == strlen(newname) && wiphy_idx != rdev->wiphy_idx) {
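core.c now bounds the new wiphy name before the sscanf() that enforces
the "phy%d must match its own index" rule, rejecting oversized names
instead of carrying them further. A sketch of validate-then-parse (the
64-byte cap here is illustrative, not the NL80211_WIPHY_NAME_MAXLEN
value):

    #include <stdio.h>
    #include <string.h>

    #define NAME_MAXLEN 64  /* illustrative cap */

    static int rename_phy(const char *newname, int my_idx)
    {
        int idx, taken = 0;

        if (strlen(newname) > NAME_MAXLEN)  /* the fix: bound first */
            return -1;

        /* prohibit calling the thing phy%d when %d is not its number */
        if (sscanf(newname, "phy%d%n", &idx, &taken) == 1 &&
            taken == (int)strlen(newname) && idx != my_idx)
            return -1;

        return 0;
    }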
+1
net/wireless/nl80211.c
··· 9214 9214 9215 9215 if (nla_get_flag(info->attrs[NL80211_ATTR_EXTERNAL_AUTH_SUPPORT])) { 9216 9216 if (!info->attrs[NL80211_ATTR_SOCKET_OWNER]) { 9217 + kzfree(connkeys); 9217 9218 GENL_SET_ERR_MSG(info, 9218 9219 "external auth requires connection ownership"); 9219 9220 return -EINVAL;
+1
net/wireless/reg.c
··· 1026 1026 1027 1027 if (!tmp_rd) { 1028 1028 kfree(regdom); 1029 + kfree(wmm_ptrs); 1029 1030 return -ENOMEM; 1030 1031 } 1031 1032 regdom = tmp_rd;
+6
net/xfrm/xfrm_state.c
··· 2175 2175 return afinfo; 2176 2176 } 2177 2177 2178 + void xfrm_flush_gc(void) 2179 + { 2180 + flush_work(&xfrm_state_gc_work); 2181 + } 2182 + EXPORT_SYMBOL(xfrm_flush_gc); 2183 + 2178 2184 /* Temporarily located here until net/xfrm/xfrm_tunnel.c is created */ 2179 2185 void xfrm_state_delete_tunnel(struct xfrm_state *x) 2180 2186 {
+5 -2
samples/sockmap/Makefile
··· 65 65 # asm/sysreg.h - inline assembly used by it is incompatible with llvm. 66 66 # But, there is no easy way to fix it, so just exclude it since it is 67 67 # useless for BPF samples. 68 + # 69 + # -target bpf option required with SK_MSG programs, this is to ensure 70 + # reading 'void *' data types for data and data_end are __u64 reads. 68 71 $(obj)/%.o: $(src)/%.c 69 72 $(CLANG) $(NOSTDINC_FLAGS) $(LINUXINCLUDE) $(EXTRA_CFLAGS) -I$(obj) \ 70 73 -D__KERNEL__ -D__ASM_SYSREG_H -Wno-unused-value -Wno-pointer-sign \ 71 74 -Wno-compare-distinct-pointer-types \ 72 75 -Wno-gnu-variable-sized-type-not-at-end \ 73 76 -Wno-address-of-packed-member -Wno-tautological-compare \ 74 - -Wno-unknown-warning-option \ 75 - -O2 -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@ 77 + -Wno-unknown-warning-option -O2 -target bpf \ 78 + -emit-llvm -c $< -o -| $(LLC) -march=bpf -filetype=obj -o $@
+1 -1
scripts/Makefile.gcc-plugins
··· 14 14 endif 15 15 16 16 ifdef CONFIG_GCC_PLUGIN_SANCOV 17 - ifeq ($(CFLAGS_KCOV),) 17 + ifeq ($(strip $(CFLAGS_KCOV)),) 18 18 # It is needed because of the gcc-plugin.sh and gcc version checks. 19 19 gcc-plugin-$(CONFIG_GCC_PLUGIN_SANCOV) += sancov_plugin.so 20 20
+1 -1
scripts/Makefile.lib
··· 196 196 $(call if_changed,bison) 197 197 198 198 quiet_cmd_bison_h = YACC $@ 199 - cmd_bison_h = bison -o/dev/null --defines=$@ -t -l $< 199 + cmd_bison_h = $(YACC) -o/dev/null --defines=$@ -t -l $< 200 200 201 201 $(obj)/%.tab.h: $(src)/%.y FORCE 202 202 $(call if_changed,bison_h)
+2 -3
scripts/dtc/checks.c
··· 787 787 FAIL(c, dti, node, "incorrect #size-cells for PCI bridge"); 788 788 789 789 prop = get_property(node, "bus-range"); 790 - if (!prop) { 791 - FAIL(c, dti, node, "missing bus-range for PCI bridge"); 790 + if (!prop) 792 791 return; 793 - } 792 + 794 793 if (prop->val.len != (sizeof(cell_t) * 2)) { 795 794 FAIL_PROP(c, dti, node, prop, "value must be 2 cells"); 796 795 return;
+1 -1
scripts/extract_xc3028.pl
··· 1 1 #!/usr/bin/env perl 2 2 3 - # Copyright (c) Mauro Carvalho Chehab <mchehab@infradead.org> 3 + # Copyright (c) Mauro Carvalho Chehab <mchehab@kernel.org> 4 4 # Released under GPLv2 5 5 # 6 6 # In order to use, you need to:
+4 -1
scripts/faddr2line
··· 170 170 echo "$file_lines" | while read -r line 171 171 do 172 172 echo $line 173 - eval $(echo $line | awk -F "[ :]" '{printf("n1=%d;n2=%d;f=%s",$NF-5, $NF+5, $(NF-1))}') 173 + n=$(echo $line | sed 's/.*:\([0-9]\+\).*/\1/g') 174 + n1=$[$n-5] 175 + n2=$[$n+5] 176 + f=$(echo $line | sed 's/.*at \(.\+\):.*/\1/g') 174 177 awk 'NR>=strtonum("'$n1'") && NR<=strtonum("'$n2'") {printf("%d\t%s\n", NR, $0)}' $f 175 178 done 176 179
+2 -2
scripts/genksyms/Makefile
··· 14 14 # so that 'bison: not found' will be displayed if it is missing. 15 15 ifeq ($(findstring 1,$(KBUILD_ENABLE_EXTRA_GCC_CHECKS)),) 16 16 17 - quiet_cmd_bison_no_warn = $(quet_cmd_bison) 17 + quiet_cmd_bison_no_warn = $(quiet_cmd_bison) 18 18 cmd_bison_no_warn = $(YACC) --version >/dev/null; \ 19 19 $(cmd_bison) 2>/dev/null 20 20 21 21 $(obj)/parse.tab.c: $(src)/parse.y FORCE 22 22 $(call if_changed,bison_no_warn) 23 23 24 - quiet_cmd_bison_h_no_warn = $(quet_cmd_bison_h) 24 + quiet_cmd_bison_h_no_warn = $(quiet_cmd_bison_h) 25 25 cmd_bison_h_no_warn = $(YACC) --version >/dev/null; \ 26 26 $(cmd_bison_h) 2>/dev/null 27 27
+1 -8
scripts/mod/sumversion.c
··· 330 330 goto out; 331 331 } 332 332 333 - /* There will be a line like so: 334 - deps_drivers/net/dummy.o := \ 335 - drivers/net/dummy.c \ 336 - $(wildcard include/config/net/fastroute.h) \ 337 - include/linux/module.h \ 338 - 339 - Sum all files in the same dir or subdirs. 340 - */ 333 + /* Sum all files in the same dir or subdirs. */ 341 334 while ((line = get_next_line(&pos, file, flen)) != NULL) { 342 335 char* p = line; 343 336
+1 -1
scripts/split-man.pl
··· 1 1 #!/usr/bin/perl 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 # 4 - # Author: Mauro Carvalho Chehab <mchehab@s-opensource.com> 4 + # Author: Mauro Carvalho Chehab <mchehab+samsung@kernel.org> 5 5 # 6 6 # Produce manpages from kernel-doc. 7 7 # See Documentation/doc-guide/kernel-doc.rst for instructions
+2
sound/core/pcm_compat.c
··· 423 423 return -ENOTTY; 424 424 if (substream->stream != dir) 425 425 return -EINVAL; 426 + if (substream->runtime->status->state == SNDRV_PCM_STATE_OPEN) 427 + return -EBADFD; 426 428 427 429 if ((ch = substream->runtime->channels) > 128) 428 430 return -EINVAL;
+2 -2
sound/core/seq/seq_virmidi.c
··· 174 174 } 175 175 return; 176 176 } 177 + spin_lock_irqsave(&substream->runtime->lock, flags); 177 178 if (vmidi->event.type != SNDRV_SEQ_EVENT_NONE) { 178 179 if (snd_seq_kernel_client_dispatch(vmidi->client, &vmidi->event, in_atomic(), 0) < 0) 179 - return; 180 + goto out; 180 181 vmidi->event.type = SNDRV_SEQ_EVENT_NONE; 181 182 } 182 - spin_lock_irqsave(&substream->runtime->lock, flags); 183 183 while (1) { 184 184 count = __snd_rawmidi_transmit_peek(substream, buf, sizeof(buf)); 185 185 if (count <= 0)
+15 -2
sound/drivers/aloop.c
··· 831 831 { 832 832 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 833 833 834 + mutex_lock(&loopback->cable_lock); 834 835 ucontrol->value.integer.value[0] = 835 836 loopback->setup[kcontrol->id.subdevice] 836 837 [kcontrol->id.device].rate_shift; 838 + mutex_unlock(&loopback->cable_lock); 837 839 return 0; 838 840 } 839 841 ··· 867 865 { 868 866 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 869 867 868 + mutex_lock(&loopback->cable_lock); 870 869 ucontrol->value.integer.value[0] = 871 870 loopback->setup[kcontrol->id.subdevice] 872 871 [kcontrol->id.device].notify; 872 + mutex_unlock(&loopback->cable_lock); 873 873 return 0; 874 874 } 875 875 ··· 883 879 int change = 0; 884 880 885 881 val = ucontrol->value.integer.value[0] ? 1 : 0; 882 + mutex_lock(&loopback->cable_lock); 886 883 if (val != loopback->setup[kcontrol->id.subdevice] 887 884 [kcontrol->id.device].notify) { 888 885 loopback->setup[kcontrol->id.subdevice] 889 886 [kcontrol->id.device].notify = val; 890 887 change = 1; 891 888 } 889 + mutex_unlock(&loopback->cable_lock); 892 890 return change; 893 891 } 894 892 ··· 898 892 struct snd_ctl_elem_value *ucontrol) 899 893 { 900 894 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 901 - struct loopback_cable *cable = loopback->cables 902 - [kcontrol->id.subdevice][kcontrol->id.device ^ 1]; 895 + struct loopback_cable *cable; 896 + 903 897 unsigned int val = 0; 904 898 899 + mutex_lock(&loopback->cable_lock); 900 + cable = loopback->cables[kcontrol->id.subdevice][kcontrol->id.device ^ 1]; 905 901 if (cable != NULL) { 906 902 unsigned int running = cable->running ^ cable->pause; 907 903 908 904 val = (running & (1 << SNDRV_PCM_STREAM_PLAYBACK)) ? 1 : 0; 909 905 } 906 + mutex_unlock(&loopback->cable_lock); 910 907 ucontrol->value.integer.value[0] = val; 911 908 return 0; 912 909 } ··· 952 943 { 953 944 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 954 945 946 + mutex_lock(&loopback->cable_lock); 955 947 ucontrol->value.integer.value[0] = 956 948 loopback->setup[kcontrol->id.subdevice] 957 949 [kcontrol->id.device].rate; 950 + mutex_unlock(&loopback->cable_lock); 958 951 return 0; 959 952 } 960 953 ··· 976 965 { 977 966 struct loopback *loopback = snd_kcontrol_chip(kcontrol); 978 967 968 + mutex_lock(&loopback->cable_lock); 979 969 ucontrol->value.integer.value[0] = 980 970 loopback->setup[kcontrol->id.subdevice] 981 971 [kcontrol->id.device].channels; 972 + mutex_unlock(&loopback->cable_lock); 982 973 return 0; 983 974 } 984 975
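Each aloop.c hunk wraps a control get/put callback in cable_lock so the
setup fields (and the cable pointer itself) are never read while another
path rewrites them. The shape is the classic lock-around-accessor; a
pthread sketch of the read side:

    #include <pthread.h>

    struct setup {
        pthread_mutex_t lock;
        int rate_shift;
    };

    /* Readers take the same lock as writers so they never observe a
     * half-updated set of fields. */
    static int get_rate_shift(struct setup *s)
    {
        int v;

        pthread_mutex_lock(&s->lock);
        v = s->rate_shift;
        pthread_mutex_unlock(&s->lock);
        return v;
    }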
+3 -2
sound/firewire/amdtp-stream.c
··· 773 773 u32 cycle; 774 774 unsigned int packets; 775 775 776 - s->max_payload_length = amdtp_stream_get_max_payload(s); 777 - 778 776 /* 779 777 * For in-stream, first packet has come. 780 778 * For out-stream, prepared to transmit first packet ··· 876 878 } 877 879 878 880 amdtp_stream_update(s); 881 + 882 + if (s->direction == AMDTP_IN_STREAM) 883 + s->max_payload_length = amdtp_stream_get_max_payload(s); 879 884 880 885 if (s->flags & CIP_NO_HEADER) 881 886 s->tag = TAG_NO_CIP_HEADER;
+1 -1
sound/pci/hda/patch_realtek.c
··· 3832 3832 } 3833 3833 } 3834 3834 3835 - #if IS_REACHABLE(INPUT) 3835 + #if IS_REACHABLE(CONFIG_INPUT) 3836 3836 static void gpio2_mic_hotkey_event(struct hda_codec *codec, 3837 3837 struct hda_jack_callback *event) 3838 3838 {
+6
tools/arch/arm/include/uapi/asm/kvm.h
··· 195 195 #define KVM_REG_ARM_VFP_FPINST 0x1009 196 196 #define KVM_REG_ARM_VFP_FPINST2 0x100A 197 197 198 + /* KVM-as-firmware specific pseudo-registers */ 199 + #define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT) 200 + #define KVM_REG_ARM_FW_REG(r) (KVM_REG_ARM | KVM_REG_SIZE_U64 | \ 201 + KVM_REG_ARM_FW | ((r) & 0xffff)) 202 + #define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) 203 + 198 204 /* Device Control API: ARM VGIC */ 199 205 #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 200 206 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
+6
tools/arch/arm64/include/uapi/asm/kvm.h
··· 206 206 #define KVM_REG_ARM_TIMER_CNT ARM64_SYS_REG(3, 3, 14, 3, 2) 207 207 #define KVM_REG_ARM_TIMER_CVAL ARM64_SYS_REG(3, 3, 14, 0, 2) 208 208 209 + /* KVM-as-firmware specific pseudo-registers */ 210 + #define KVM_REG_ARM_FW (0x0014 << KVM_REG_ARM_COPROC_SHIFT) 211 + #define KVM_REG_ARM_FW_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \ 212 + KVM_REG_ARM_FW | ((r) & 0xffff)) 213 + #define KVM_REG_ARM_PSCI_VERSION KVM_REG_ARM_FW_REG(0) 214 + 209 215 /* Device Control API: ARM VGIC */ 210 216 #define KVM_DEV_ARM_VGIC_GRP_ADDR 0 211 217 #define KVM_DEV_ARM_VGIC_GRP_DIST_REGS 1
+1
tools/arch/x86/include/asm/cpufeatures.h
··· 320 320 #define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */ 321 321 #define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */ 322 322 #define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */ 323 + #define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */ 323 324 324 325 /* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */ 325 326 #define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */
+2
tools/bpf/Makefile
··· 76 76 $(QUIET_LINK)$(CC) $(CFLAGS) -o $@ $^ 77 77 78 78 $(OUTPUT)bpf_exp.lex.c: $(OUTPUT)bpf_exp.yacc.c 79 + $(OUTPUT)bpf_exp.yacc.o: $(OUTPUT)bpf_exp.yacc.c 80 + $(OUTPUT)bpf_exp.lex.o: $(OUTPUT)bpf_exp.lex.c 79 81 80 82 clean: bpftool_clean 81 83 $(call QUIET_CLEAN, bpf-progs)
+5 -2
tools/bpf/bpf_dbg.c
··· 1063 1063 1064 1064 static int cmd_load(char *arg) 1065 1065 { 1066 - char *subcmd, *cont, *tmp = strdup(arg); 1066 + char *subcmd, *cont = NULL, *tmp = strdup(arg); 1067 1067 int ret = CMD_OK; 1068 1068 1069 1069 subcmd = strtok_r(tmp, " ", &cont); ··· 1073 1073 bpf_reset(); 1074 1074 bpf_reset_breakpoints(); 1075 1075 1076 - ret = cmd_load_bpf(cont); 1076 + if (!cont) 1077 + ret = CMD_ERR; 1078 + else 1079 + ret = cmd_load_bpf(cont); 1077 1080 } else if (matches(subcmd, "pcap") == 0) { 1078 1081 ret = cmd_load_pcap(cont); 1079 1082 } else {
+7
tools/include/uapi/linux/kvm.h
··· 676 676 __u8 pad[36]; 677 677 }; 678 678 679 + #define KVM_X86_DISABLE_EXITS_MWAIT (1 << 0) 680 + #define KVM_X86_DISABLE_EXITS_HTL (1 << 1) 681 + #define KVM_X86_DISABLE_EXITS_PAUSE (1 << 2) 682 + #define KVM_X86_DISABLE_VALID_EXITS (KVM_X86_DISABLE_EXITS_MWAIT | \ 683 + KVM_X86_DISABLE_EXITS_HTL | \ 684 + KVM_X86_DISABLE_EXITS_PAUSE) 685 + 679 686 /* for KVM_ENABLE_CAP */ 680 687 struct kvm_enable_cap { 681 688 /* in */
+1 -1
tools/perf/bench/numa.c
··· 175 175 OPT_UINTEGER('s', "nr_secs" , &p0.nr_secs, "max number of seconds to run (default: 5 secs)"), 176 176 OPT_UINTEGER('u', "usleep" , &p0.sleep_usecs, "usecs to sleep per loop iteration"), 177 177 178 - OPT_BOOLEAN('R', "data_reads" , &p0.data_reads, "access the data via writes (can be mixed with -W)"), 178 + OPT_BOOLEAN('R', "data_reads" , &p0.data_reads, "access the data via reads (can be mixed with -W)"), 179 179 OPT_BOOLEAN('W', "data_writes" , &p0.data_writes, "access the data via writes (can be mixed with -R)"), 180 180 OPT_BOOLEAN('B', "data_backwards", &p0.data_backwards, "access the data backwards as well"), 181 181 OPT_BOOLEAN('Z', "data_zero_memset", &p0.data_zero_memset,"access the data via glibc bzero only"),
-1
tools/perf/pmu-events/arch/x86/mapfile.csv
··· 29 29 GenuineIntel-6-4C,v13,silvermont,core 30 30 GenuineIntel-6-2A,v15,sandybridge,core 31 31 GenuineIntel-6-2C,v2,westmereep-dp,core 32 - GenuineIntel-6-2C,v2,westmereep-dp,core 33 32 GenuineIntel-6-25,v2,westmereep-sp,core 34 33 GenuineIntel-6-2F,v2,westmereex,core 35 34 GenuineIntel-6-55,v1,skylakex,core
+4 -4
tools/perf/util/parse-events.y
··· 224 224 event_bpf_file 225 225 226 226 event_pmu: 227 - PE_NAME '/' event_config '/' 227 + PE_NAME opt_event_config 228 228 { 229 229 struct list_head *list, *orig_terms, *terms; 230 230 231 - if (parse_events_copy_term_list($3, &orig_terms)) 231 + if (parse_events_copy_term_list($2, &orig_terms)) 232 232 YYABORT; 233 233 234 234 ALLOC_LIST(list); 235 - if (parse_events_add_pmu(_parse_state, list, $1, $3, false)) { 235 + if (parse_events_add_pmu(_parse_state, list, $1, $2, false)) { 236 236 struct perf_pmu *pmu = NULL; 237 237 int ok = 0; 238 238 char *pattern; ··· 262 262 if (!ok) 263 263 YYABORT; 264 264 } 265 - parse_events_terms__delete($3); 265 + parse_events_terms__delete($2); 266 266 parse_events_terms__delete(orig_terms); 267 267 $$ = list; 268 268 }
+1
tools/power/acpi/Makefile.config
··· 56 56 # to compile vs uClibc, that can be done here as well. 57 57 CROSS = #/usr/i386-linux-uclibc/usr/bin/i386-uclibc- 58 58 CROSS_COMPILE ?= $(CROSS) 59 + LD = $(CC) 59 60 HOSTCC = gcc 60 61 61 62 # check if compiler option is supported
+2 -2
tools/testing/selftests/bpf/test_progs.c
··· 1108 1108 1109 1109 assert(system("dd if=/dev/urandom of=/dev/zero count=4 2> /dev/null") 1110 1110 == 0); 1111 - assert(system("./urandom_read if=/dev/urandom of=/dev/zero count=4 2> /dev/null") == 0); 1111 + assert(system("./urandom_read") == 0); 1112 1112 /* disable stack trace collection */ 1113 1113 key = 0; 1114 1114 val = 1; ··· 1158 1158 } while (bpf_map_get_next_key(stackmap_fd, &previous_key, &key) == 0); 1159 1159 1160 1160 CHECK(build_id_matches < 1, "build id match", 1161 - "Didn't find expected build ID from the map"); 1161 + "Didn't find expected build ID from the map\n"); 1162 1162 1163 1163 disable_pmu: 1164 1164 ioctl(pmu_fd, PERF_EVENT_IOC_DISABLE);
+4 -4
tools/testing/selftests/lib.mk
··· 20 20 21 21 .ONESHELL: 22 22 define RUN_TESTS 23 - @export KSFT_TAP_LEVEL=`echo 1`; 24 - @test_num=`echo 0`; 25 - @echo "TAP version 13"; 26 - @for TEST in $(1); do \ 23 + @export KSFT_TAP_LEVEL=`echo 1`; \ 24 + test_num=`echo 0`; \ 25 + echo "TAP version 13"; \ 26 + for TEST in $(1); do \ 27 27 BASENAME_TEST=`basename $$TEST`; \ 28 28 test_num=`echo $$test_num+1 | bc`; \ 29 29 echo "selftests: $$BASENAME_TEST"; \
+2 -1
tools/testing/selftests/net/Makefile
··· 5 5 CFLAGS += -I../../../../usr/include/ 6 6 7 7 TEST_PROGS := run_netsocktests run_afpackettests test_bpf.sh netdevice.sh rtnetlink.sh 8 - TEST_PROGS += fib_tests.sh fib-onlink-tests.sh in_netns.sh pmtu.sh 8 + TEST_PROGS += fib_tests.sh fib-onlink-tests.sh pmtu.sh 9 + TEST_PROGS_EXTENDED := in_netns.sh 9 10 TEST_GEN_FILES = socket 10 11 TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy 11 12 TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa
+8 -3
tools/testing/selftests/tc-testing/tc-tests/actions/bpf.json
··· 66 66 "cmdUnderTest": "$TC action add action bpf object-file _b.o index 667", 67 67 "expExitCode": "0", 68 68 "verifyCmd": "$TC action get action bpf index 667", 69 - "matchPattern": "action order [0-9]*: bpf _b.o:\\[action\\] id [0-9]* tag 3b185187f1855c4c default-action pipe.*index 667 ref", 69 + "matchPattern": "action order [0-9]*: bpf _b.o:\\[action\\] id [0-9]* tag 3b185187f1855c4c( jited)? default-action pipe.*index 667 ref", 70 70 "matchCount": "1", 71 71 "teardown": [ 72 72 "$TC action flush action bpf", ··· 92 92 "cmdUnderTest": "$TC action add action bpf object-file _c.o index 667", 93 93 "expExitCode": "255", 94 94 "verifyCmd": "$TC action get action bpf index 667", 95 - "matchPattern": "action order [0-9]*: bpf _b.o:\\[action\\] id [0-9].*index 667 ref", 95 + "matchPattern": "action order [0-9]*: bpf _c.o:\\[action\\] id [0-9].*index 667 ref", 96 96 "matchCount": "0", 97 97 "teardown": [ 98 - "$TC action flush action bpf", 98 + [ 99 + "$TC action flush action bpf", 100 + 0, 101 + 1, 102 + 255 103 + ], 99 104 "rm -f _c.o" 100 105 ] 101 106 },
+1 -1
virt/kvm/arm/vgic/vgic-init.c
··· 423 423 * We cannot rely on the vgic maintenance interrupt to be 424 424 * delivered synchronously. This means we can only use it to 425 425 * exit the VM, and we perform the handling of EOIed 426 - * interrupts on the exit path (see vgic_process_maintenance). 426 + * interrupts on the exit path (see vgic_fold_lr_state). 427 427 */ 428 428 return IRQ_HANDLED; 429 429 }
+8 -2
virt/kvm/arm/vgic/vgic-mmio.c
··· 289 289 irq->vcpu->cpu != -1) /* VCPU thread is running */ 290 290 cond_resched_lock(&irq->irq_lock); 291 291 292 - if (irq->hw) 292 + if (irq->hw) { 293 293 vgic_hw_irq_change_active(vcpu, irq, active, !requester_vcpu); 294 - else 294 + } else { 295 + u32 model = vcpu->kvm->arch.vgic.vgic_model; 296 + 295 297 irq->active = active; 298 + if (model == KVM_DEV_TYPE_ARM_VGIC_V2 && 299 + active && vgic_irq_is_sgi(irq->intid)) 300 + irq->active_source = requester_vcpu->vcpu_id; 301 + } 296 302 297 303 if (irq->active) 298 304 vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+22 -16
virt/kvm/arm/vgic/vgic-v2.c
··· 37 37 vgic_v2_write_lr(i, 0); 38 38 } 39 39 40 - void vgic_v2_set_npie(struct kvm_vcpu *vcpu) 41 - { 42 - struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2; 43 - 44 - cpuif->vgic_hcr |= GICH_HCR_NPIE; 45 - } 46 - 47 40 void vgic_v2_set_underflow(struct kvm_vcpu *vcpu) 48 41 { 49 42 struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2; ··· 64 71 int lr; 65 72 unsigned long flags; 66 73 67 - cpuif->vgic_hcr &= ~(GICH_HCR_UIE | GICH_HCR_NPIE); 74 + cpuif->vgic_hcr &= ~GICH_HCR_UIE; 68 75 69 76 for (lr = 0; lr < vgic_cpu->used_lrs; lr++) { 70 77 u32 val = cpuif->vgic_lr[lr]; 71 - u32 intid = val & GICH_LR_VIRTUALID; 78 + u32 cpuid, intid = val & GICH_LR_VIRTUALID; 72 79 struct vgic_irq *irq; 80 + 81 + /* Extract the source vCPU id from the LR */ 82 + cpuid = val & GICH_LR_PHYSID_CPUID; 83 + cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT; 84 + cpuid &= 7; 73 85 74 86 /* Notify fds when the guest EOI'ed a level-triggered SPI */ 75 87 if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid)) ··· 88 90 /* Always preserve the active bit */ 89 91 irq->active = !!(val & GICH_LR_ACTIVE_BIT); 90 92 93 + if (irq->active && vgic_irq_is_sgi(intid)) 94 + irq->active_source = cpuid; 95 + 91 96 /* Edge is the only case where we preserve the pending bit */ 92 97 if (irq->config == VGIC_CONFIG_EDGE && 93 98 (val & GICH_LR_PENDING_BIT)) { 94 99 irq->pending_latch = true; 95 100 96 - if (vgic_irq_is_sgi(intid)) { 97 - u32 cpuid = val & GICH_LR_PHYSID_CPUID; 98 - 99 - cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT; 101 + if (vgic_irq_is_sgi(intid)) 100 102 irq->source |= (1 << cpuid); 101 - } 102 103 } 103 104 104 105 /* ··· 149 152 u32 val = irq->intid; 150 153 bool allow_pending = true; 151 154 152 - if (irq->active) 155 + if (irq->active) { 153 156 val |= GICH_LR_ACTIVE_BIT; 157 + if (vgic_irq_is_sgi(irq->intid)) 158 + val |= irq->active_source << GICH_LR_PHYSID_CPUID_SHIFT; 159 + if (vgic_irq_is_multi_sgi(irq)) { 160 + allow_pending = false; 161 + val |= GICH_LR_EOI; 162 + } 163 + } 154 164 155 165 if (irq->hw) { 156 166 val |= GICH_LR_HW; ··· 194 190 BUG_ON(!src); 195 191 val |= (src - 1) << GICH_LR_PHYSID_CPUID_SHIFT; 196 192 irq->source &= ~(1 << (src - 1)); 197 - if (irq->source) 193 + if (irq->source) { 198 194 irq->pending_latch = true; 195 + val |= GICH_LR_EOI; 196 + } 199 197 } 200 198 } 201 199
+29 -20
virt/kvm/arm/vgic/vgic-v3.c
··· 27 27 static bool common_trap; 28 28 static bool gicv4_enable; 29 29 30 - void vgic_v3_set_npie(struct kvm_vcpu *vcpu) 31 - { 32 - struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3; 33 - 34 - cpuif->vgic_hcr |= ICH_HCR_NPIE; 35 - } 36 - 37 30 void vgic_v3_set_underflow(struct kvm_vcpu *vcpu) 38 31 { 39 32 struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3; ··· 48 55 int lr; 49 56 unsigned long flags; 50 57 51 - cpuif->vgic_hcr &= ~(ICH_HCR_UIE | ICH_HCR_NPIE); 58 + cpuif->vgic_hcr &= ~ICH_HCR_UIE; 52 59 53 60 for (lr = 0; lr < vgic_cpu->used_lrs; lr++) { 54 61 u64 val = cpuif->vgic_lr[lr]; 55 - u32 intid; 62 + u32 intid, cpuid; 56 63 struct vgic_irq *irq; 64 + bool is_v2_sgi = false; 57 65 58 - if (model == KVM_DEV_TYPE_ARM_VGIC_V3) 66 + cpuid = val & GICH_LR_PHYSID_CPUID; 67 + cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT; 68 + 69 + if (model == KVM_DEV_TYPE_ARM_VGIC_V3) { 59 70 intid = val & ICH_LR_VIRTUAL_ID_MASK; 60 - else 71 + } else { 61 72 intid = val & GICH_LR_VIRTUALID; 73 + is_v2_sgi = vgic_irq_is_sgi(intid); 74 + } 62 75 63 76 /* Notify fds when the guest EOI'ed a level-triggered IRQ */ 64 77 if (lr_signals_eoi_mi(val) && vgic_valid_spi(vcpu->kvm, intid)) ··· 80 81 /* Always preserve the active bit */ 81 82 irq->active = !!(val & ICH_LR_ACTIVE_BIT); 82 83 84 + if (irq->active && is_v2_sgi) 85 + irq->active_source = cpuid; 86 + 83 87 /* Edge is the only case where we preserve the pending bit */ 84 88 if (irq->config == VGIC_CONFIG_EDGE && 85 89 (val & ICH_LR_PENDING_BIT)) { 86 90 irq->pending_latch = true; 87 91 88 - if (vgic_irq_is_sgi(intid) && 89 - model == KVM_DEV_TYPE_ARM_VGIC_V2) { 90 - u32 cpuid = val & GICH_LR_PHYSID_CPUID; 91 - 92 - cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT; 92 + if (is_v2_sgi) 93 93 irq->source |= (1 << cpuid); 94 - } 95 94 } 96 95 97 96 /* ··· 130 133 { 131 134 u32 model = vcpu->kvm->arch.vgic.vgic_model; 132 135 u64 val = irq->intid; 133 - bool allow_pending = true; 136 + bool allow_pending = true, is_v2_sgi; 134 137 135 - if (irq->active) 138 + is_v2_sgi = (vgic_irq_is_sgi(irq->intid) && 139 + model == KVM_DEV_TYPE_ARM_VGIC_V2); 140 + 141 + if (irq->active) { 136 142 val |= ICH_LR_ACTIVE_BIT; 143 + if (is_v2_sgi) 144 + val |= irq->active_source << GICH_LR_PHYSID_CPUID_SHIFT; 145 + if (vgic_irq_is_multi_sgi(irq)) { 146 + allow_pending = false; 147 + val |= ICH_LR_EOI; 148 + } 149 + } 137 150 138 151 if (irq->hw) { 139 152 val |= ICH_LR_HW; ··· 181 174 BUG_ON(!src); 182 175 val |= (src - 1) << GICH_LR_PHYSID_CPUID_SHIFT; 183 176 irq->source &= ~(1 << (src - 1)); 184 - if (irq->source) 177 + if (irq->source) { 185 178 irq->pending_latch = true; 179 + val |= ICH_LR_EOI; 180 + } 186 181 } 187 182 } 188 183
+7 -23
virt/kvm/arm/vgic/vgic.c
··· 725 725 vgic_v3_set_underflow(vcpu); 726 726 } 727 727 728 - static inline void vgic_set_npie(struct kvm_vcpu *vcpu) 729 - { 730 - if (kvm_vgic_global_state.type == VGIC_V2) 731 - vgic_v2_set_npie(vcpu); 732 - else 733 - vgic_v3_set_npie(vcpu); 734 - } 735 - 736 728 /* Requires the ap_list_lock to be held. */ 737 729 static int compute_ap_list_depth(struct kvm_vcpu *vcpu, 738 730 bool *multi_sgi) ··· 738 746 DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock)); 739 747 740 748 list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) { 749 + int w; 750 + 741 751 spin_lock(&irq->irq_lock); 742 752 /* GICv2 SGIs can count for more than one... */ 743 - if (vgic_irq_is_sgi(irq->intid) && irq->source) { 744 - int w = hweight8(irq->source); 745 - 746 - count += w; 747 - *multi_sgi |= (w > 1); 748 - } else { 749 - count++; 750 - } 753 + w = vgic_irq_get_lr_count(irq); 751 754 spin_unlock(&irq->irq_lock); 755 + 756 + count += w; 757 + *multi_sgi |= (w > 1); 752 758 } 753 759 return count; 754 760 } ··· 757 767 struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu; 758 768 struct vgic_irq *irq; 759 769 int count; 760 - bool npie = false; 761 770 bool multi_sgi; 762 771 u8 prio = 0xff; 763 772 ··· 786 797 if (likely(vgic_target_oracle(irq) == vcpu)) { 787 798 vgic_populate_lr(vcpu, irq, count++); 788 799 789 - if (irq->source) { 790 - npie = true; 800 + if (irq->source) 791 801 prio = irq->priority; 792 - } 793 802 } 794 803 795 804 spin_unlock(&irq->irq_lock); ··· 799 812 break; 800 813 } 801 814 } 802 - 803 - if (npie) 804 - vgic_set_npie(vcpu); 805 815 806 816 vcpu->arch.vgic_cpu.used_lrs = count; 807 817
+14
virt/kvm/arm/vgic/vgic.h
··· 110 110 return irq->config == VGIC_CONFIG_LEVEL && irq->hw; 111 111 } 112 112 113 + static inline int vgic_irq_get_lr_count(struct vgic_irq *irq) 114 + { 115 + /* Account for the active state as an interrupt */ 116 + if (vgic_irq_is_sgi(irq->intid) && irq->source) 117 + return hweight8(irq->source) + irq->active; 118 + 119 + return irq_is_pending(irq) || irq->active; 120 + } 121 + 122 + static inline bool vgic_irq_is_multi_sgi(struct vgic_irq *irq) 123 + { 124 + return vgic_irq_get_lr_count(irq) > 1; 125 + } 126 + 113 127 /* 114 128 * This struct provides an intermediate representation of the fields contained 115 129 * in the GICH_VMCR and ICH_VMCR registers, such that code exporting the GIC