Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v4.10-rc8' into perf/core, to pick up fixes

Signed-off-by: Ingo Molnar <mingo@kernel.org>

+1939 -1129
-5
Documentation/media/uapi/cec/cec-func-close.rst
··· 33 33 Description 34 34 =========== 35 35 36 - .. note:: 37 - 38 - This documents the proposed CEC API. This API is not yet finalized 39 - and is currently only available as a staging kernel module. 40 - 41 36 Closes the cec device. Resources associated with the file descriptor are 42 37 freed. The device configuration remain unchanged. 43 38
-5
Documentation/media/uapi/cec/cec-func-ioctl.rst
··· 39 39 Description 40 40 =========== 41 41 42 - .. note:: 43 - 44 - This documents the proposed CEC API. This API is not yet finalized 45 - and is currently only available as a staging kernel module. 46 - 47 42 The :c:func:`ioctl()` function manipulates cec device parameters. The 48 43 argument ``fd`` must be an open file descriptor. 49 44
-5
Documentation/media/uapi/cec/cec-func-open.rst
··· 46 46 Description 47 47 =========== 48 48 49 - .. note:: 50 - 51 - This documents the proposed CEC API. This API is not yet finalized 52 - and is currently only available as a staging kernel module. 53 - 54 49 To open a cec device applications call :c:func:`open()` with the 55 50 desired device name. The function has no side effects; the device 56 51 configuration remain unchanged.
-5
Documentation/media/uapi/cec/cec-func-poll.rst
··· 39 39 Description 40 40 =========== 41 41 42 - .. note:: 43 - 44 - This documents the proposed CEC API. This API is not yet finalized 45 - and is currently only available as a staging kernel module. 46 - 47 42 With the :c:func:`poll()` function applications can wait for CEC 48 43 events. 49 44
+12 -5
Documentation/media/uapi/cec/cec-intro.rst
··· 3 3 Introduction 4 4 ============ 5 5 6 - .. note:: 7 - 8 - This documents the proposed CEC API. This API is not yet finalized 9 - and is currently only available as a staging kernel module. 10 - 11 6 HDMI connectors provide a single pin for use by the Consumer Electronics 12 7 Control protocol. This protocol allows different devices connected by an 13 8 HDMI cable to communicate. The protocol for CEC version 1.4 is defined ··· 26 31 Drivers that support CEC will create a CEC device node (/dev/cecX) to 27 32 give userspace access to the CEC adapter. The 28 33 :ref:`CEC_ADAP_G_CAPS` ioctl will tell userspace what it is allowed to do. 34 + 35 + In order to check the support and test it, it is suggested to download 36 + the `v4l-utils <https://git.linuxtv.org/v4l-utils.git/>`_ package. It 37 + provides three tools to handle CEC: 38 + 39 + - cec-ctl: the Swiss army knife of CEC. Allows you to configure, transmit 40 + and monitor CEC messages. 41 + 42 + - cec-compliance: does a CEC compliance test of a remote CEC device to 43 + determine how compliant the CEC implementation is. 44 + 45 + - cec-follower: emulates a CEC follower.
-5
Documentation/media/uapi/cec/cec-ioc-adap-g-caps.rst
··· 29 29 Description 30 30 =========== 31 31 32 - .. note:: 33 - 34 - This documents the proposed CEC API. This API is not yet finalized 35 - and is currently only available as a staging kernel module. 36 - 37 32 All cec devices must support :ref:`ioctl CEC_ADAP_G_CAPS <CEC_ADAP_G_CAPS>`. To query 38 33 device information, applications call the ioctl with a pointer to a 39 34 struct :c:type:`cec_caps`. The driver fills the structure and
-5
Documentation/media/uapi/cec/cec-ioc-adap-g-log-addrs.rst
··· 35 35 Description 36 36 =========== 37 37 38 - .. note:: 39 - 40 - This documents the proposed CEC API. This API is not yet finalized 41 - and is currently only available as a staging kernel module. 42 - 43 38 To query the current CEC logical addresses, applications call 44 39 :ref:`ioctl CEC_ADAP_G_LOG_ADDRS <CEC_ADAP_G_LOG_ADDRS>` with a pointer to a 45 40 struct :c:type:`cec_log_addrs` where the driver stores the logical addresses.
-5
Documentation/media/uapi/cec/cec-ioc-adap-g-phys-addr.rst
··· 35 35 Description 36 36 =========== 37 37 38 - .. note:: 39 - 40 - This documents the proposed CEC API. This API is not yet finalized 41 - and is currently only available as a staging kernel module. 42 - 43 38 To query the current physical address applications call 44 39 :ref:`ioctl CEC_ADAP_G_PHYS_ADDR <CEC_ADAP_G_PHYS_ADDR>` with a pointer to a __u16 where the 45 40 driver stores the physical address.
-5
Documentation/media/uapi/cec/cec-ioc-dqevent.rst
··· 30 30 Description 31 31 =========== 32 32 33 - .. note:: 34 - 35 - This documents the proposed CEC API. This API is not yet finalized 36 - and is currently only available as a staging kernel module. 37 - 38 33 CEC devices can send asynchronous events. These can be retrieved by 39 34 calling :c:func:`CEC_DQEVENT`. If the file descriptor is in 40 35 non-blocking mode and no event is pending, then it will return -1 and
-5
Documentation/media/uapi/cec/cec-ioc-g-mode.rst
··· 31 31 Description 32 32 =========== 33 33 34 - .. note:: 35 - 36 - This documents the proposed CEC API. This API is not yet finalized 37 - and is currently only available as a staging kernel module. 38 - 39 34 By default any filehandle can use :ref:`CEC_TRANSMIT`, but in order to prevent 40 35 applications from stepping on each others toes it must be possible to 41 36 obtain exclusive access to the CEC adapter. This ioctl sets the
-5
Documentation/media/uapi/cec/cec-ioc-receive.rst
··· 34 34 Description 35 35 =========== 36 36 37 - .. note:: 38 - 39 - This documents the proposed CEC API. This API is not yet finalized 40 - and is currently only available as a staging kernel module. 41 - 42 37 To receive a CEC message the application has to fill in the 43 38 ``timeout`` field of struct :c:type:`cec_msg` and pass it to 44 39 :ref:`ioctl CEC_RECEIVE <CEC_RECEIVE>`.
+19 -19
MAINTAINERS
··· 1091 1091 F: drivers/*/*aspeed* 1092 1092 1093 1093 ARM/ATMEL AT91RM9200, AT91SAM9 AND SAMA5 SOC SUPPORT 1094 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 1094 + M: Nicolas Ferre <nicolas.ferre@microchip.com> 1095 1095 M: Alexandre Belloni <alexandre.belloni@free-electrons.com> 1096 1096 M: Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com> 1097 1097 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) ··· 1773 1773 F: include/linux/soc/renesas/ 1774 1774 1775 1775 ARM/SOCFPGA ARCHITECTURE 1776 - M: Dinh Nguyen <dinguyen@opensource.altera.com> 1776 + M: Dinh Nguyen <dinguyen@kernel.org> 1777 1777 S: Maintained 1778 1778 F: arch/arm/mach-socfpga/ 1779 1779 F: arch/arm/boot/dts/socfpga* ··· 1783 1783 T: git git://git.kernel.org/pub/scm/linux/kernel/git/dinguyen/linux.git 1784 1784 1785 1785 ARM/SOCFPGA CLOCK FRAMEWORK SUPPORT 1786 - M: Dinh Nguyen <dinguyen@opensource.altera.com> 1786 + M: Dinh Nguyen <dinguyen@kernel.org> 1787 1787 S: Maintained 1788 1788 F: drivers/clk/socfpga/ 1789 1789 ··· 2175 2175 F: include/uapi/linux/atm* 2176 2176 2177 2177 ATMEL AT91 / AT32 MCI DRIVER 2178 - M: Ludovic Desroches <ludovic.desroches@atmel.com> 2178 + M: Ludovic Desroches <ludovic.desroches@microchip.com> 2179 2179 S: Maintained 2180 2180 F: drivers/mmc/host/atmel-mci.c 2181 2181 2182 2182 ATMEL AT91 SAMA5D2-Compatible Shutdown Controller 2183 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 2183 + M: Nicolas Ferre <nicolas.ferre@microchip.com> 2184 2184 S: Supported 2185 2185 F: drivers/power/reset/at91-sama5d2_shdwc.c 2186 2186 2187 2187 ATMEL SAMA5D2 ADC DRIVER 2188 - M: Ludovic Desroches <ludovic.desroches@atmel.com> 2188 + M: Ludovic Desroches <ludovic.desroches@microchip.com> 2189 2189 L: linux-iio@vger.kernel.org 2190 2190 S: Supported 2191 2191 F: drivers/iio/adc/at91-sama5d2_adc.c 2192 2192 2193 2193 ATMEL Audio ALSA driver 2194 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 2194 + M: Nicolas Ferre <nicolas.ferre@microchip.com> 2195 2195 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 2196 2196 S: Supported 2197 2197 F: sound/soc/atmel 2198 2198 2199 2199 ATMEL XDMA DRIVER 2200 - M: Ludovic Desroches <ludovic.desroches@atmel.com> 2200 + M: Ludovic Desroches <ludovic.desroches@microchip.com> 2201 2201 L: linux-arm-kernel@lists.infradead.org 2202 2202 L: dmaengine@vger.kernel.org 2203 2203 S: Supported 2204 2204 F: drivers/dma/at_xdmac.c 2205 2205 2206 2206 ATMEL I2C DRIVER 2207 - M: Ludovic Desroches <ludovic.desroches@atmel.com> 2207 + M: Ludovic Desroches <ludovic.desroches@microchip.com> 2208 2208 L: linux-i2c@vger.kernel.org 2209 2209 S: Supported 2210 2210 F: drivers/i2c/busses/i2c-at91.c 2211 2211 2212 2212 ATMEL ISI DRIVER 2213 - M: Ludovic Desroches <ludovic.desroches@atmel.com> 2213 + M: Ludovic Desroches <ludovic.desroches@microchip.com> 2214 2214 L: linux-media@vger.kernel.org 2215 2215 S: Supported 2216 2216 F: drivers/media/platform/soc_camera/atmel-isi.c 2217 2217 F: include/media/atmel-isi.h 2218 2218 2219 2219 ATMEL LCDFB DRIVER 2220 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 2220 + M: Nicolas Ferre <nicolas.ferre@microchip.com> 2221 2221 L: linux-fbdev@vger.kernel.org 2222 2222 S: Maintained 2223 2223 F: drivers/video/fbdev/atmel_lcdfb.c 2224 2224 F: include/video/atmel_lcdc.h 2225 2225 2226 2226 ATMEL MACB ETHERNET DRIVER 2227 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 2227 + M: Nicolas Ferre <nicolas.ferre@microchip.com> 2228 2228 S: Supported 2229 2229 F: drivers/net/ethernet/cadence/ ··· 2236 2236 F: drivers/mtd/nand/atmel_nand* 2237 2237 2238 2238 ATMEL SDMMC DRIVER 2239 - M: Ludovic Desroches <ludovic.desroches@atmel.com> 2239 + M: Ludovic Desroches <ludovic.desroches@microchip.com> 2240 2240 L: linux-mmc@vger.kernel.org 2241 2241 S: Supported 2242 2242 F: drivers/mmc/host/sdhci-of-at91.c 2243 2243 2244 2244 ATMEL SPI DRIVER 2245 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 2245 + M: Nicolas Ferre <nicolas.ferre@microchip.com> 2246 2246 S: Supported 2247 2247 F: drivers/spi/spi-atmel.* 2248 2248 2249 2249 ATMEL SSC DRIVER 2250 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 2250 + M: Nicolas Ferre <nicolas.ferre@microchip.com> 2251 2251 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2252 2252 S: Supported 2253 2253 F: drivers/misc/atmel-ssc.c 2254 2254 F: include/linux/atmel-ssc.h 2255 2255 2256 2256 ATMEL Timer Counter (TC) AND CLOCKSOURCE DRIVERS 2257 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 2257 + M: Nicolas Ferre <nicolas.ferre@microchip.com> 2258 2258 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2259 2259 S: Supported 2260 2260 F: drivers/misc/atmel_tclib.c 2261 2261 F: drivers/clocksource/tcb_clksrc.c 2262 2262 2263 2263 ATMEL USBA UDC DRIVER 2264 - M: Nicolas Ferre <nicolas.ferre@atmel.com> 2264 + M: Nicolas Ferre <nicolas.ferre@microchip.com> 2265 2265 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2266 2266 S: Supported 2267 2267 F: drivers/usb/gadget/udc/atmel_usba_udc.* ··· 9736 9736 F: drivers/pinctrl/pinctrl-at91.* 9737 9737 9738 9738 PIN CONTROLLER - ATMEL AT91 PIO4 9739 - M: Ludovic Desroches <ludovic.desroches@atmel.com> 9739 + M: Ludovic Desroches <ludovic.desroches@microchip.com> 9740 9740 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 9741 9741 L: linux-gpio@vger.kernel.org 9742 9742 S: Supported ··· 13065 13065 F: include/uapi/linux/userio.h 13066 13066 13067 13067 VIRTIO CONSOLE DRIVER 13068 - M: Amit Shah <amit.shah@redhat.com> 13068 + M: Amit Shah <amit@kernel.org> 13069 13069 L: virtualization@lists.linux-foundation.org 13070 13070 S: Maintained 13071 13071 F: drivers/char/virtio_console.c
+2 -2
Makefile
··· 1 1 VERSION = 4 2 2 PATCHLEVEL = 10 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc6 4 + EXTRAVERSION = -rc8 5 5 NAME = Fearless Coyote 6 6 7 7 # *DOCUMENTATION* ··· 799 799 KBUILD_ARFLAGS := $(call ar-option,D) 800 800 801 801 # check for 'asm goto' 802 - ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC)), y) 802 + ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC) $(KBUILD_CFLAGS)), y) 803 803 KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO 804 804 KBUILD_AFLAGS += -DCC_HAVE_ASM_GOTO 805 805 endif
+1 -1
arch/arc/kernel/unaligned.c
··· 243 243 244 244 /* clear any remanants of delay slot */ 245 245 if (delay_mode(regs)) { 246 - regs->ret = regs->bta ~1U; 246 + regs->ret = regs->bta & ~1U; 247 247 regs->status32 &= ~STATUS_DE_MASK; 248 248 } else { 249 249 regs->ret += state.instr_len;
+1 -1
arch/arm/boot/dts/Makefile
··· 617 617 orion5x-lacie-ethernet-disk-mini-v2.dtb \ 618 618 orion5x-linkstation-lsgl.dtb \ 619 619 orion5x-linkstation-lswtgl.dtb \ 620 - orion5x-lschl.dtb \ 620 + orion5x-linkstation-lschl.dtb \ 621 621 orion5x-lswsgl.dtb \ 622 622 orion5x-maxtor-shared-storage-2.dtb \ 623 623 orion5x-netgear-wnr854t.dtb \
+8
arch/arm/boot/dts/imx1.dtsi
··· 18 18 / { 19 19 #address-cells = <1>; 20 20 #size-cells = <1>; 21 + /* 22 + * The decompressor and also some bootloaders rely on a 23 + * pre-existing /chosen node to be available to insert the 24 + * command line and merge other ATAGS info. 25 + * Also for U-Boot there must be a pre-existing /memory node. 26 + */ 27 + chosen {}; 28 + memory { device_type = "memory"; reg = <0 0>; }; 21 29 22 30 aliases { 23 31 gpio0 = &gpio1;
+8
arch/arm/boot/dts/imx23.dtsi
··· 16 16 #size-cells = <1>; 17 17 18 18 interrupt-parent = <&icoll>; 19 + /* 20 + * The decompressor and also some bootloaders rely on a 21 + * pre-existing /chosen node to be available to insert the 22 + * command line and merge other ATAGS info. 23 + * Also for U-Boot there must be a pre-existing /memory node. 24 + */ 25 + chosen {}; 26 + memory { device_type = "memory"; reg = <0 0>; }; 19 27 20 28 aliases { 21 29 gpio0 = &gpio0;
+8
arch/arm/boot/dts/imx25.dtsi
··· 14 14 / { 15 15 #address-cells = <1>; 16 16 #size-cells = <1>; 17 + /* 18 + * The decompressor and also some bootloaders rely on a 19 + * pre-existing /chosen node to be available to insert the 20 + * command line and merge other ATAGS info. 21 + * Also for U-Boot there must be a pre-existing /memory node. 22 + */ 23 + chosen {}; 24 + memory { device_type = "memory"; reg = <0 0>; }; 17 25 18 26 aliases { 19 27 ethernet0 = &fec;
+8
arch/arm/boot/dts/imx27.dtsi
··· 19 19 / { 20 20 #address-cells = <1>; 21 21 #size-cells = <1>; 22 + /* 23 + * The decompressor and also some bootloaders rely on a 24 + * pre-existing /chosen node to be available to insert the 25 + * command line and merge other ATAGS info. 26 + * Also for U-Boot there must be a pre-existing /memory node. 27 + */ 28 + chosen {}; 29 + memory { device_type = "memory"; reg = <0 0>; }; 22 30 23 31 aliases { 24 32 ethernet0 = &fec;
+8
arch/arm/boot/dts/imx28.dtsi
··· 17 17 #size-cells = <1>; 18 18 19 19 interrupt-parent = <&icoll>; 20 + /* 21 + * The decompressor and also some bootloaders rely on a 22 + * pre-existing /chosen node to be available to insert the 23 + * command line and merge other ATAGS info. 24 + * Also for U-Boot there must be a pre-existing /memory node. 25 + */ 26 + chosen {}; 27 + memory { device_type = "memory"; reg = <0 0>; }; 20 28 21 29 aliases { 22 30 ethernet0 = &mac0;
+8
arch/arm/boot/dts/imx31.dtsi
··· 12 12 / { 13 13 #address-cells = <1>; 14 14 #size-cells = <1>; 15 + /* 16 + * The decompressor and also some bootloaders rely on a 17 + * pre-existing /chosen node to be available to insert the 18 + * command line and merge other ATAGS info. 19 + * Also for U-Boot there must be a pre-existing /memory node. 20 + */ 21 + chosen {}; 22 + memory { device_type = "memory"; reg = <0 0>; }; 15 23 16 24 aliases { 17 25 serial0 = &uart1;
+8
arch/arm/boot/dts/imx35.dtsi
··· 13 13 / { 14 14 #address-cells = <1>; 15 15 #size-cells = <1>; 16 + /* 17 + * The decompressor and also some bootloaders rely on a 18 + * pre-existing /chosen node to be available to insert the 19 + * command line and merge other ATAGS info. 20 + * Also for U-Boot there must be a pre-existing /memory node. 21 + */ 22 + chosen {}; 23 + memory { device_type = "memory"; reg = <0 0>; }; 16 24 17 25 aliases { 18 26 ethernet0 = &fec;
+8
arch/arm/boot/dts/imx50.dtsi
··· 17 17 / { 18 18 #address-cells = <1>; 19 19 #size-cells = <1>; 20 + /* 21 + * The decompressor and also some bootloaders rely on a 22 + * pre-existing /chosen node to be available to insert the 23 + * command line and merge other ATAGS info. 24 + * Also for U-Boot there must be a pre-existing /memory node. 25 + */ 26 + chosen {}; 27 + memory { device_type = "memory"; reg = <0 0>; }; 20 28 21 29 aliases { 22 30 ethernet0 = &fec;
+8
arch/arm/boot/dts/imx51.dtsi
··· 19 19 / { 20 20 #address-cells = <1>; 21 21 #size-cells = <1>; 22 + /* 23 + * The decompressor and also some bootloaders rely on a 24 + * pre-existing /chosen node to be available to insert the 25 + * command line and merge other ATAGS info. 26 + * Also for U-Boot there must be a pre-existing /memory node. 27 + */ 28 + chosen {}; 29 + memory { device_type = "memory"; reg = <0 0>; }; 22 30 23 31 aliases { 24 32 ethernet0 = &fec;
+8
arch/arm/boot/dts/imx53.dtsi
··· 19 19 / { 20 20 #address-cells = <1>; 21 21 #size-cells = <1>; 22 + /* 23 + * The decompressor and also some bootloaders rely on a 24 + * pre-existing /chosen node to be available to insert the 25 + * command line and merge other ATAGS info. 26 + * Also for U-Boot there must be a pre-existing /memory node. 27 + */ 28 + chosen {}; 29 + memory { device_type = "memory"; reg = <0 0>; }; 22 30 23 31 aliases { 24 32 ethernet0 = &fec;
+1 -1
arch/arm/boot/dts/imx6dl.dtsi
··· 137 137 &gpio4 { 138 138 gpio-ranges = <&iomuxc 5 136 1>, <&iomuxc 6 145 1>, <&iomuxc 7 150 1>, 139 139 <&iomuxc 8 146 1>, <&iomuxc 9 151 1>, <&iomuxc 10 147 1>, 140 - <&iomuxc 11 151 1>, <&iomuxc 12 148 1>, <&iomuxc 13 153 1>, 140 + <&iomuxc 11 152 1>, <&iomuxc 12 148 1>, <&iomuxc 13 153 1>, 141 141 <&iomuxc 14 149 1>, <&iomuxc 15 154 1>, <&iomuxc 16 39 7>, 142 142 <&iomuxc 23 56 1>, <&iomuxc 24 61 7>, <&iomuxc 31 46 1>; 143 143 };
+8
arch/arm/boot/dts/imx6qdl.dtsi
··· 16 16 / { 17 17 #address-cells = <1>; 18 18 #size-cells = <1>; 19 + /* 20 + * The decompressor and also some bootloaders rely on a 21 + * pre-existing /chosen node to be available to insert the 22 + * command line and merge other ATAGS info. 23 + * Also for U-Boot there must be a pre-existing /memory node. 24 + */ 25 + chosen {}; 26 + memory { device_type = "memory"; reg = <0 0>; }; 19 27 20 28 aliases { 21 29 ethernet0 = &fec;
+8
arch/arm/boot/dts/imx6sl.dtsi
··· 14 14 / { 15 15 #address-cells = <1>; 16 16 #size-cells = <1>; 17 + /* 18 + * The decompressor and also some bootloaders rely on a 19 + * pre-existing /chosen node to be available to insert the 20 + * command line and merge other ATAGS info. 21 + * Also for U-Boot there must be a pre-existing /memory node. 22 + */ 23 + chosen {}; 24 + memory { device_type = "memory"; reg = <0 0>; }; 17 25 18 26 aliases { 19 27 ethernet0 = &fec;
+8
arch/arm/boot/dts/imx6sx.dtsi
··· 15 15 / { 16 16 #address-cells = <1>; 17 17 #size-cells = <1>; 18 + /* 19 + * The decompressor and also some bootloaders rely on a 20 + * pre-existing /chosen node to be available to insert the 21 + * command line and merge other ATAGS info. 22 + * Also for U-Boot there must be a pre-existing /memory node. 23 + */ 24 + chosen {}; 25 + memory { device_type = "memory"; reg = <0 0>; }; 18 26 19 27 aliases { 20 28 can0 = &flexcan1;
+8
arch/arm/boot/dts/imx6ul.dtsi
··· 15 15 / { 16 16 #address-cells = <1>; 17 17 #size-cells = <1>; 18 + /* 19 + * The decompressor and also some bootloaders rely on a 20 + * pre-existing /chosen node to be available to insert the 21 + * command line and merge other ATAGS info. 22 + * Also for U-Boot there must be a pre-existing /memory node. 23 + */ 24 + chosen {}; 25 + memory { device_type = "memory"; reg = <0 0>; }; 18 26 19 27 aliases { 20 28 ethernet0 = &fec1;
+8
arch/arm/boot/dts/imx7s.dtsi
··· 50 50 / { 51 51 #address-cells = <1>; 52 52 #size-cells = <1>; 53 + /* 54 + * The decompressor and also some bootloaders rely on a 55 + * pre-existing /chosen node to be available to insert the 56 + * command line and merge other ATAGS info. 57 + * Also for U-Boot there must be a pre-existing /memory node. 58 + */ 59 + chosen {}; 60 + memory { device_type = "memory"; reg = <0 0>; }; 53 61 54 62 aliases { 55 63 gpio0 = &gpio1;
+2 -2
arch/arm/boot/dts/orion5x-lschl.dts → arch/arm/boot/dts/orion5x-linkstation-lschl.dts
··· 2 2 * Device Tree file for Buffalo Linkstation LS-CHLv3 3 3 * 4 4 * Copyright (C) 2016 Ash Hughes <ashley.hughes@blueyonder.co.uk> 5 - * Copyright (C) 2015, 2016 5 + * Copyright (C) 2015-2017 6 6 * Roger Shimizu <rogershimizu@gmail.com> 7 7 * 8 8 * This file is dual-licensed: you can use it either under the terms ··· 52 52 #include <dt-bindings/gpio/gpio.h> 53 53 54 54 / { 55 - model = "Buffalo Linkstation Live v3 (LS-CHL)"; 55 + model = "Buffalo Linkstation LiveV3 (LS-CHL)"; 56 56 compatible = "buffalo,lschl", "marvell,orion5x-88f5182", "marvell,orion5x"; 57 57 58 58 memory { /* 128 MB */
+1
arch/arm/boot/dts/stih407-family.dtsi
··· 680 680 phy-names = "usb2-phy", "usb3-phy"; 681 681 phys = <&usb2_picophy0>, 682 682 <&phy_port2 PHY_TYPE_USB3>; 683 + snps,dis_u3_susphy_quirk; 683 684 }; 684 685 }; 685 686
+2 -2
arch/arm/configs/ezx_defconfig
··· 64 64 CONFIG_NETFILTER_NETLINK_QUEUE=m 65 65 CONFIG_NF_CONNTRACK=m 66 66 CONFIG_NF_CONNTRACK_EVENTS=y 67 - CONFIG_NF_CT_PROTO_SCTP=m 68 - CONFIG_NF_CT_PROTO_UDPLITE=m 67 + CONFIG_NF_CT_PROTO_SCTP=y 68 + CONFIG_NF_CT_PROTO_UDPLITE=y 69 69 CONFIG_NF_CONNTRACK_AMANDA=m 70 70 CONFIG_NF_CONNTRACK_FTP=m 71 71 CONFIG_NF_CONNTRACK_H323=m
+2 -2
arch/arm/configs/imote2_defconfig
··· 56 56 CONFIG_NETFILTER_NETLINK_QUEUE=m 57 57 CONFIG_NF_CONNTRACK=m 58 58 CONFIG_NF_CONNTRACK_EVENTS=y 59 - CONFIG_NF_CT_PROTO_SCTP=m 60 - CONFIG_NF_CT_PROTO_UDPLITE=m 59 + CONFIG_NF_CT_PROTO_SCTP=y 60 + CONFIG_NF_CT_PROTO_UDPLITE=y 61 61 CONFIG_NF_CONNTRACK_AMANDA=m 62 62 CONFIG_NF_CONNTRACK_FTP=m 63 63 CONFIG_NF_CONNTRACK_H323=m
+1 -1
arch/arm/kernel/ptrace.c
··· 600 600 const void *kbuf, const void __user *ubuf) 601 601 { 602 602 int ret; 603 - struct pt_regs newregs; 603 + struct pt_regs newregs = *task_pt_regs(target); 604 604 605 605 ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, 606 606 &newregs,
+1 -1
arch/arm/mach-imx/mmdc.c
··· 60 60 61 61 #define to_mmdc_pmu(p) container_of(p, struct mmdc_pmu, pmu) 62 62 63 - static enum cpuhp_state cpuhp_mmdc_state; 64 63 static int ddr_type; 65 64 66 65 struct fsl_mmdc_devtype_data { ··· 81 82 82 83 #ifdef CONFIG_PERF_EVENTS 83 84 85 + static enum cpuhp_state cpuhp_mmdc_state; 84 86 static DEFINE_IDA(mmdc_ida); 85 87 86 88 PMU_EVENT_ATTR_STRING(total-cycles, mmdc_pmu_total_cycles, "event=0x00")
+2 -2
arch/arm/mm/fault.c
··· 610 610 611 611 void __init early_abt_enable(void) 612 612 { 613 - fsr_info[22].fn = early_abort_handler; 613 + fsr_info[FSR_FS_AEA].fn = early_abort_handler; 614 614 local_abt_enable(); 615 - fsr_info[22].fn = do_bad; 615 + fsr_info[FSR_FS_AEA].fn = do_bad; 616 616 } 617 617 618 618 #ifndef CONFIG_ARM_LPAE
+4
arch/arm/mm/fault.h
··· 11 11 #define FSR_FS5_0 (0x3f) 12 12 13 13 #ifdef CONFIG_ARM_LPAE 14 + #define FSR_FS_AEA 17 15 + 14 16 static inline int fsr_fs(unsigned int fsr) 15 17 { 16 18 return fsr & FSR_FS5_0; 17 19 } 18 20 #else 21 + #define FSR_FS_AEA 22 22 + 19 23 static inline int fsr_fs(unsigned int fsr) 20 24 { 21 25 return (fsr & FSR_FS3_0) | (fsr & FSR_FS4) >> 6;
+18
arch/arm64/boot/dts/amlogic/meson-gx.dtsi
··· 55 55 #address-cells = <2>; 56 56 #size-cells = <2>; 57 57 58 + reserved-memory { 59 + #address-cells = <2>; 60 + #size-cells = <2>; 61 + ranges; 62 + 63 + /* 16 MiB reserved for Hardware ROM Firmware */ 64 + hwrom_reserved: hwrom@0 { 65 + reg = <0x0 0x0 0x0 0x1000000>; 66 + no-map; 67 + }; 68 + 69 + /* 2 MiB reserved for ARM Trusted Firmware (BL31) */ 70 + secmon_reserved: secmon@10000000 { 71 + reg = <0x0 0x10000000 0x0 0x200000>; 72 + no-map; 73 + }; 74 + }; 75 + 58 76 cpus { 59 77 #address-cells = <0x2>; 60 78 #size-cells = <0x0>;
+12
arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
··· 151 151 status = "okay"; 152 152 pinctrl-0 = <&eth_rgmii_pins>; 153 153 pinctrl-names = "default"; 154 + phy-handle = <&eth_phy0>; 155 + 156 + mdio { 157 + compatible = "snps,dwmac-mdio"; 158 + #address-cells = <1>; 159 + #size-cells = <0>; 160 + 161 + eth_phy0: ethernet-phy@0 { 162 + reg = <0>; 163 + eee-broken-1000t; 164 + }; 165 + }; 154 166 }; 155 167 156 168 &ir {
+1 -1
arch/powerpc/Kconfig
··· 164 164 select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE 165 165 select HAVE_ARCH_HARDENED_USERCOPY 166 166 select HAVE_KERNEL_GZIP 167 - select HAVE_CC_STACKPROTECTOR 168 167 169 168 config GENERIC_CSUM 170 169 def_bool CPU_LITTLE_ENDIAN ··· 483 484 bool "Build a relocatable kernel" 484 485 depends on (PPC64 && !COMPILE_TEST) || (FLATMEM && (44x || FSL_BOOKE)) 485 486 select NONSTATIC_KERNEL 487 + select MODULE_REL_CRCS if MODVERSIONS 486 488 help 487 489 This builds a kernel image that is capable of running at the 488 490 location the kernel is loaded at. For ppc32, there is no any
+2
arch/powerpc/include/asm/cpu_has_feature.h
··· 23 23 { 24 24 int i; 25 25 26 + #ifndef __clang__ /* clang can't cope with this */ 26 27 BUILD_BUG_ON(!__builtin_constant_p(feature)); 28 + #endif 27 29 28 30 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG 29 31 if (!static_key_initialized) {
+2
arch/powerpc/include/asm/mmu.h
··· 160 160 { 161 161 int i; 162 162 163 + #ifndef __clang__ /* clang can't cope with this */ 163 164 BUILD_BUG_ON(!__builtin_constant_p(feature)); 165 + #endif 164 166 165 167 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG 166 168 if (!static_key_initialized) {
-4
arch/powerpc/include/asm/module.h
··· 90 90 } 91 91 #endif 92 92 93 - #if defined(CONFIG_MODVERSIONS) && defined(CONFIG_PPC64) 94 - #define ARCH_RELOCATES_KCRCTAB 95 - #define reloc_start PHYSICAL_START 96 - #endif 97 93 #endif /* __KERNEL__ */ 98 94 #endif /* _ASM_POWERPC_MODULE_H */
+2 -1
arch/powerpc/include/asm/reg.h
··· 649 649 #define SRR1_ISI_N_OR_G 0x10000000 /* ISI: Access is no-exec or G */ 650 650 #define SRR1_ISI_PROT 0x08000000 /* ISI: Other protection fault */ 651 651 #define SRR1_WAKEMASK 0x00380000 /* reason for wakeup */ 652 - #define SRR1_WAKEMASK_P8 0x003c0000 /* reason for wakeup on POWER8 */ 652 + #define SRR1_WAKEMASK_P8 0x003c0000 /* reason for wakeup on POWER8 and 9 */ 653 653 #define SRR1_WAKESYSERR 0x00300000 /* System error */ 654 654 #define SRR1_WAKEEE 0x00200000 /* External interrupt */ 655 + #define SRR1_WAKEHVI 0x00240000 /* Hypervisor Virtualization Interrupt (P9) */ 655 656 #define SRR1_WAKEMT 0x00280000 /* mtctrl */ 656 657 #define SRR1_WAKEHMI 0x00280000 /* Hypervisor maintenance */ 657 658 #define SRR1_WAKEDEC 0x00180000 /* Decrementer interrupt */
-40
arch/powerpc/include/asm/stackprotector.h
··· 1 - /* 2 - * GCC stack protector support. 3 - * 4 - * Stack protector works by putting predefined pattern at the start of 5 - * the stack frame and verifying that it hasn't been overwritten when 6 - * returning from the function. The pattern is called stack canary 7 - * and gcc expects it to be defined by a global variable called 8 - * "__stack_chk_guard" on PPC. This unfortunately means that on SMP 9 - * we cannot have a different canary value per task. 10 - */ 11 - 12 - #ifndef _ASM_STACKPROTECTOR_H 13 - #define _ASM_STACKPROTECTOR_H 14 - 15 - #include <linux/random.h> 16 - #include <linux/version.h> 17 - #include <asm/reg.h> 18 - 19 - extern unsigned long __stack_chk_guard; 20 - 21 - /* 22 - * Initialize the stackprotector canary value. 23 - * 24 - * NOTE: this must only be called from functions that never return, 25 - * and it must always be inlined. 26 - */ 27 - static __always_inline void boot_init_stack_canary(void) 28 - { 29 - unsigned long canary; 30 - 31 - /* Try to get a semi random initial value. */ 32 - get_random_bytes(&canary, sizeof(canary)); 33 - canary ^= mftb(); 34 - canary ^= LINUX_VERSION_CODE; 35 - 36 - current->stack_canary = canary; 37 - __stack_chk_guard = current->stack_canary; 38 - } 39 - 40 - #endif /* _ASM_STACKPROTECTOR_H */
+1
arch/powerpc/include/asm/xics.h
··· 44 44 45 45 #ifdef CONFIG_PPC_POWERNV 46 46 extern int icp_opal_init(void); 47 + extern void icp_opal_flush_interrupt(void); 47 48 #else 48 49 static inline int icp_opal_init(void) { return -ENODEV; } 49 50 #endif
-4
arch/powerpc/kernel/Makefile
··· 19 19 CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) 20 20 CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN) 21 21 22 - # -fstack-protector triggers protection checks in this code, 23 - # but it is being used too early to link to meaningful stack_chk logic. 24 - CFLAGS_prom_init.o += $(call cc-option, -fno-stack-protector) 25 - 26 22 ifdef CONFIG_FUNCTION_TRACER 27 23 # Do not trace early boot code 28 24 CFLAGS_REMOVE_cputable.o = -mno-sched-epilog $(CC_FLAGS_FTRACE)
-3
arch/powerpc/kernel/asm-offsets.c
··· 91 91 DEFINE(TI_livepatch_sp, offsetof(struct thread_info, livepatch_sp)); 92 92 #endif 93 93 94 - #ifdef CONFIG_CC_STACKPROTECTOR 95 - DEFINE(TSK_STACK_CANARY, offsetof(struct task_struct, stack_canary)); 96 - #endif 97 94 DEFINE(KSP, offsetof(struct thread_struct, ksp)); 98 95 DEFINE(PT_REGS, offsetof(struct thread_struct, regs)); 99 96 #ifdef CONFIG_BOOKE
+1 -1
arch/powerpc/kernel/eeh_driver.c
··· 545 545 static void *__eeh_clear_pe_frozen_state(void *data, void *flag) 546 546 { 547 547 struct eeh_pe *pe = (struct eeh_pe *)data; 548 - bool *clear_sw_state = flag; 548 + bool clear_sw_state = *(bool *)flag; 549 549 int i, rc = 1; 550 550 551 551 for (i = 0; rc && i < 3; i++)
+1 -5
arch/powerpc/kernel/entry_32.S
··· 674 674 mtspr SPRN_SPEFSCR,r0 /* restore SPEFSCR reg */ 675 675 END_FTR_SECTION_IFSET(CPU_FTR_SPE) 676 676 #endif /* CONFIG_SPE */ 677 - #if defined(CONFIG_CC_STACKPROTECTOR) && !defined(CONFIG_SMP) 678 - lwz r0,TSK_STACK_CANARY(r2) 679 - lis r4,__stack_chk_guard@ha 680 - stw r0,__stack_chk_guard@l(r4) 681 - #endif 677 + 682 678 lwz r0,_CCR(r1) 683 679 mtcrf 0xFF,r0 684 680 /* r3-r12 are destroyed -- Cort */
-8
arch/powerpc/kernel/module_64.c
··· 286 286 for (end = (void *)vers + size; vers < end; vers++) 287 287 if (vers->name[0] == '.') { 288 288 memmove(vers->name, vers->name+1, strlen(vers->name)); 289 - #ifdef ARCH_RELOCATES_KCRCTAB 290 - /* The TOC symbol has no CRC computed. To avoid CRC 291 - * check failing, we must force it to the expected 292 - * value (see CRC check in module.c). 293 - */ 294 - if (!strcmp(vers->name, "TOC.")) 295 - vers->crc = -(unsigned long)reloc_start; 296 - #endif 297 289 } 298 290 } 299 291
-6
arch/powerpc/kernel/process.c
··· 64 64 #include <linux/kprobes.h> 65 65 #include <linux/kdebug.h> 66 66 67 - #ifdef CONFIG_CC_STACKPROTECTOR 68 - #include <linux/stackprotector.h> 69 - unsigned long __stack_chk_guard __read_mostly; 70 - EXPORT_SYMBOL(__stack_chk_guard); 71 - #endif 72 - 73 67 /* Transactional Memory debug */ 74 68 #ifdef TM_DEBUG_SW 75 69 #define TM_DEBUG(x...) printk(KERN_INFO x)
+3
arch/powerpc/kernel/prom_init.c
··· 2834 2834 2835 2835 cpu_pkg = call_prom("instance-to-package", 1, 1, prom_cpu); 2836 2836 2837 + if (!PHANDLE_VALID(cpu_pkg)) 2838 + return; 2839 + 2837 2840 prom_getprop(cpu_pkg, "reg", &rval, sizeof(rval)); 2838 2841 prom.cpu = be32_to_cpu(rval); 2839 2842
+5 -16
arch/powerpc/mm/fault.c
··· 253 253 if (unlikely(debugger_fault_handler(regs))) 254 254 goto bail; 255 255 256 - /* On a kernel SLB miss we can only check for a valid exception entry */ 257 - if (!user_mode(regs) && (address >= TASK_SIZE)) { 256 + /* 257 + * The kernel should never take an execute fault nor should it 258 + * take a page fault to a kernel address. 259 + */ 260 + if (!user_mode(regs) && (is_exec || (address >= TASK_SIZE))) { 258 261 rc = SIGSEGV; 259 262 goto bail; 260 263 } ··· 393 390 #endif /* CONFIG_8xx */ 394 391 395 392 if (is_exec) { 396 - /* 397 - * An execution fault + no execute ? 398 - * 399 - * On CPUs that don't have CPU_FTR_COHERENT_ICACHE we 400 - * deliberately create NX mappings, and use the fault to do the 401 - * cache flush. This is usually handled in hash_page_do_lazy_icache() 402 - * but we could end up here if that races with a concurrent PTE 403 - * update. In that case we need to fall through here to the VMA 404 - * check below. 405 - */ 406 - if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE) && 407 - (regs->msr & SRR1_ISI_N_OR_G)) 408 - goto bad_area; 409 - 410 393 /* 411 394 * Allow execution from readable areas if the MMU does not 412 395 * provide separate controls over reading and executing.
+2 -2
arch/powerpc/mm/pgtable-radix.c
··· 65 65 if (!pmdp) 66 66 return -ENOMEM; 67 67 if (map_page_size == PMD_SIZE) { 68 - ptep = (pte_t *)pudp; 68 + ptep = pmdp_ptep(pmdp); 69 69 goto set_the_pte; 70 70 } 71 71 ptep = pte_alloc_kernel(pmdp, ea); ··· 90 90 } 91 91 pmdp = pmd_offset(pudp, ea); 92 92 if (map_page_size == PMD_SIZE) { 93 - ptep = (pte_t *)pudp; 93 + ptep = pmdp_ptep(pmdp); 94 94 goto set_the_pte; 95 95 } 96 96 if (!pmd_present(*pmdp)) {
+1 -5
arch/powerpc/mm/tlb-radix.c
··· 50 50 for (set = 0; set < POWER9_TLB_SETS_RADIX ; set++) { 51 51 __tlbiel_pid(pid, set, ric); 52 52 } 53 - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) 54 - asm volatile(PPC_INVALIDATE_ERAT : : :"memory"); 55 - return; 53 + asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory"); 56 54 } 57 55 58 56 static inline void _tlbie_pid(unsigned long pid, unsigned long ric) ··· 83 85 asm volatile(PPC_TLBIEL(%0, %4, %3, %2, %1) 84 86 : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory"); 85 87 asm volatile("ptesync": : :"memory"); 86 - if (cpu_has_feature(CPU_FTR_POWER9_DD1)) 87 - asm volatile(PPC_INVALIDATE_ERAT : : :"memory"); 88 88 } 89 89 90 90 static inline void _tlbie_va(unsigned long va, unsigned long pid,
+10 -2
arch/powerpc/platforms/powernv/smp.c
··· 155 155 wmask = SRR1_WAKEMASK_P8; 156 156 157 157 idle_states = pnv_get_supported_cpuidle_states(); 158 + 158 159 /* We don't want to take decrementer interrupts while we are offline, 159 - * so clear LPCR:PECE1. We keep PECE2 enabled. 160 + * so clear LPCR:PECE1. We keep PECE2 (and LPCR_PECE_HVEE on P9) 161 + * enabled so as to let IPIs in. 160 162 */ 161 163 mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1); 162 164 ··· 208 206 * contains 0. 209 207 */ 210 208 if (((srr1 & wmask) == SRR1_WAKEEE) || 209 + ((srr1 & wmask) == SRR1_WAKEHVI) || 211 210 (local_paca->irq_happened & PACA_IRQ_EE)) { 212 - icp_native_flush_interrupt(); 211 + if (cpu_has_feature(CPU_FTR_ARCH_300)) 212 + icp_opal_flush_interrupt(); 213 + else 214 + icp_native_flush_interrupt(); 213 215 } else if ((srr1 & wmask) == SRR1_WAKEHDBELL) { 214 216 unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER); 215 217 asm volatile(PPC_MSGCLR(%0) : : "r" (msg)); ··· 227 221 if (srr1 && !generic_check_cpu_restart(cpu)) 228 222 DBG("CPU%d Unexpected exit while offline !\n", cpu); 229 223 } 224 + 225 + /* Re-enable decrementer interrupts */ 230 226 mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) | LPCR_PECE1); 231 227 DBG("CPU%d coming online...\n", cpu); 232 228 }
+33 -2
arch/powerpc/sysdev/xics/icp-opal.c
··· 120 120 { 121 121 int hw_cpu = get_hard_smp_processor_id(cpu); 122 122 123 + kvmppc_set_host_ipi(cpu, 1); 123 124 opal_int_set_mfrr(hw_cpu, IPI_PRIORITY); 124 125 } 125 126 126 127 static irqreturn_t icp_opal_ipi_action(int irq, void *dev_id) 127 128 { 128 - int hw_cpu = hard_smp_processor_id(); 129 + int cpu = smp_processor_id(); 129 130 130 - opal_int_set_mfrr(hw_cpu, 0xff); 131 + kvmppc_set_host_ipi(cpu, 0); 132 + opal_int_set_mfrr(get_hard_smp_processor_id(cpu), 0xff); 131 133 132 134 return smp_ipi_demux(); 135 + } 136 + 137 + /* 138 + * Called when an interrupt is received on an off-line CPU to 139 + * clear the interrupt, so that the CPU can go back to nap mode. 140 + */ 141 + void icp_opal_flush_interrupt(void) 142 + { 143 + unsigned int xirr; 144 + unsigned int vec; 145 + 146 + do { 147 + xirr = icp_opal_get_xirr(); 148 + vec = xirr & 0x00ffffff; 149 + if (vec == XICS_IRQ_SPURIOUS) 150 + break; 151 + if (vec == XICS_IPI) { 152 + /* Clear pending IPI */ 153 + int cpu = smp_processor_id(); 154 + kvmppc_set_host_ipi(cpu, 0); 155 + opal_int_set_mfrr(get_hard_smp_processor_id(cpu), 0xff); 156 + } else { 157 + pr_err("XICS: hw interrupt 0x%x to offline cpu, " 158 + "disabling\n", vec); 159 + xics_mask_unknown_vec(vec); 160 + } 161 + 162 + /* EOI the interrupt */ 163 + } while (opal_int_eoi(xirr) > 0); 133 164 } 134 165 135 166 #endif /* CONFIG_SMP */
+4 -4
arch/x86/crypto/aesni-intel_glue.c
··· 1085 1085 aesni_simd_skciphers[i]; i++) 1086 1086 simd_skcipher_free(aesni_simd_skciphers[i]); 1087 1087 1088 - for (i = 0; i < ARRAY_SIZE(aesni_simd_skciphers2) && 1089 - aesni_simd_skciphers2[i].simd; i++) 1090 - simd_skcipher_free(aesni_simd_skciphers2[i].simd); 1088 + for (i = 0; i < ARRAY_SIZE(aesni_simd_skciphers2); i++) 1089 + if (aesni_simd_skciphers2[i].simd) 1090 + simd_skcipher_free(aesni_simd_skciphers2[i].simd); 1091 1091 } 1092 1092 1093 1093 static int __init aesni_init(void) ··· 1168 1168 simd = simd_skcipher_create_compat(algname, drvname, basename); 1169 1169 err = PTR_ERR(simd); 1170 1170 if (IS_ERR(simd)) 1171 - goto unregister_simds; 1171 + continue; 1172 1172 1173 1173 aesni_simd_skciphers2[i].simd = simd; 1174 1174 }
+1
arch/x86/include/asm/processor.h
··· 104 104 __u8 x86_phys_bits; 105 105 /* CPUID returned core id bits: */ 106 106 __u8 x86_coreid_bits; 107 + __u8 cu_id; 107 108 /* Max extended CPUID function supported: */ 108 109 __u32 extended_cpuid_level; 109 110 /* Maximum supported CPUID level, -1=no CPUID: */
+2 -2
arch/x86/kernel/apic/io_apic.c
··· 1875 1875 .irq_ack = irq_chip_ack_parent, 1876 1876 .irq_eoi = ioapic_ack_level, 1877 1877 .irq_set_affinity = ioapic_set_affinity, 1878 - .irq_retrigger = irq_chip_retrigger_hierarchy, 1879 1878 .flags = IRQCHIP_SKIP_SET_WAKE, 1880 1879 }; 1881 1880 ··· 1886 1887 .irq_ack = irq_chip_ack_parent, 1887 1888 .irq_eoi = ioapic_ir_ack_level, 1888 1889 .irq_set_affinity = ioapic_set_affinity, 1889 - .irq_retrigger = irq_chip_retrigger_hierarchy, 1890 1890 .flags = IRQCHIP_SKIP_SET_WAKE, 1891 1891 }; 1892 1892 ··· 2115 2117 if (idx != -1 && irq_trigger(idx)) 2116 2118 unmask_ioapic_irq(irq_get_chip_data(0)); 2117 2119 } 2120 + irq_domain_deactivate_irq(irq_data); 2118 2121 irq_domain_activate_irq(irq_data); 2119 2122 if (timer_irq_works()) { 2120 2123 if (disable_timer_pin_1 > 0) ··· 2137 2138 * legacy devices should be connected to IO APIC #0 2138 2139 */ 2139 2140 replace_pin_at_irq_node(data, node, apic1, pin1, apic2, pin2); 2141 + irq_domain_deactivate_irq(irq_data); 2140 2142 irq_domain_activate_irq(irq_data); 2141 2143 legacy_pic->unmask(0); 2142 2144 if (timer_irq_works()) {
+15 -1
arch/x86/kernel/cpu/amd.c
··· 309 309 310 310 /* get information required for multi-node processors */ 311 311 if (boot_cpu_has(X86_FEATURE_TOPOEXT)) { 312 + u32 eax, ebx, ecx, edx; 312 313 313 - node_id = cpuid_ecx(0x8000001e) & 7; 314 + cpuid(0x8000001e, &eax, &ebx, &ecx, &edx); 315 + 316 + node_id = ecx & 0xff; 317 + smp_num_siblings = ((ebx >> 8) & 0xff) + 1; 318 + 319 + if (c->x86 == 0x15) 320 + c->cu_id = ebx & 0xff; 321 + 322 + if (c->x86 >= 0x17) { 323 + c->cpu_core_id = ebx & 0xff; 324 + 325 + if (smp_num_siblings > 1) 326 + c->x86_max_cores /= smp_num_siblings; 327 + } 314 328 315 329 /* 316 330 * We may have multiple LLCs if L3 caches exist, so check if we
+1
arch/x86/kernel/cpu/common.c
··· 1015 1015 c->x86_model_id[0] = '\0'; /* Unset */ 1016 1016 c->x86_max_cores = 1; 1017 1017 c->x86_coreid_bits = 0; 1018 + c->cu_id = 0xff; 1018 1019 #ifdef CONFIG_X86_64 1019 1020 c->x86_clflush_size = 64; 1020 1021 c->x86_phys_bits = 36;
+1
arch/x86/kernel/hpet.c
··· 352 352 } else { 353 353 struct hpet_dev *hdev = EVT_TO_HPET_DEV(evt); 354 354 355 + irq_domain_deactivate_irq(irq_get_irq_data(hdev->irq)); 355 356 irq_domain_activate_irq(irq_get_irq_data(hdev->irq)); 356 357 disable_irq(hdev->irq); 357 358 irq_set_affinity(hdev->irq, cpumask_of(hdev->cpu));
+9 -3
arch/x86/kernel/smpboot.c
··· 433 433 int cpu1 = c->cpu_index, cpu2 = o->cpu_index; 434 434 435 435 if (c->phys_proc_id == o->phys_proc_id && 436 - per_cpu(cpu_llc_id, cpu1) == per_cpu(cpu_llc_id, cpu2) && 437 - c->cpu_core_id == o->cpu_core_id) 438 - return topology_sane(c, o, "smt"); 436 + per_cpu(cpu_llc_id, cpu1) == per_cpu(cpu_llc_id, cpu2)) { 437 + if (c->cpu_core_id == o->cpu_core_id) 438 + return topology_sane(c, o, "smt"); 439 + 440 + if ((c->cu_id != 0xff) && 441 + (o->cu_id != 0xff) && 442 + (c->cu_id == o->cu_id)) 443 + return topology_sane(c, o, "smt"); 444 + } 439 445 440 446 } else if (c->phys_proc_id == o->phys_proc_id && 441 447 c->cpu_core_id == o->cpu_core_id) {
+3 -2
arch/x86/kernel/tsc.c
··· 1356 1356 (unsigned long)cpu_khz / 1000, 1357 1357 (unsigned long)cpu_khz % 1000); 1358 1358 1359 + /* Sanitize TSC ADJUST before cyc2ns gets initialized */ 1360 + tsc_store_and_check_tsc_adjust(true); 1361 + 1359 1362 /* 1360 1363 * Secondary CPUs do not run through tsc_init(), so set up 1361 1364 * all the scale factors for all CPUs, assuming the same ··· 1389 1386 1390 1387 if (unsynchronized_tsc()) 1391 1388 mark_tsc_unstable("TSCs unsynchronized"); 1392 - else 1393 - tsc_store_and_check_tsc_adjust(true); 1394 1389 1395 1390 check_system_tsc_reliable(); 1396 1391
+7 -9
arch/x86/kernel/tsc_sync.c
··· 286 286 if (unsynchronized_tsc()) 287 287 return; 288 288 289 - if (tsc_clocksource_reliable) { 290 - if (cpu == (nr_cpu_ids-1) || system_state != SYSTEM_BOOTING) 291 - pr_info( 292 - "Skipped synchronization checks as TSC is reliable.\n"); 293 - return; 294 - } 295 - 296 289 /* 297 290 * Set the maximum number of test runs to 298 291 * 1 if the CPU does not provide the TSC_ADJUST MSR ··· 373 380 int cpus = 2; 374 381 375 382 /* Also aborts if there is no TSC. */ 376 - if (unsynchronized_tsc() || tsc_clocksource_reliable) 383 + if (unsynchronized_tsc()) 377 384 return; 378 385 379 386 /* 380 387 * Store, verify and sanitize the TSC adjust register. If 381 388 * successful skip the test. 389 + * 390 + * The test is also skipped when the TSC is marked reliable. This 391 + * is true for SoCs which have no fallback clocksource. On these 392 + * SoCs the TSC is frequency synchronized, but still the TSC ADJUST 393 + * register might have been wrecked by the BIOS. 382 394 */ 383 - if (tsc_store_and_check_tsc_adjust(false)) { 395 + if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable) { 384 396 atomic_inc(&skip_test); 385 397 return; 386 398 }
+1
arch/x86/kvm/x86.c
··· 3182 3182 memcpy(dest, xsave, XSAVE_HDR_OFFSET); 3183 3183 3184 3184 /* Set XSTATE_BV */ 3185 + xstate_bv &= vcpu->arch.guest_supported_xcr0 | XFEATURE_MASK_FPSSE; 3185 3186 *(u64 *)(dest + XSAVE_HDR_OFFSET) = xstate_bv; 3186 3187 3187 3188 /*
+2
arch/x86/mm/dump_pagetables.c
··· 15 15 #include <linux/debugfs.h> 16 16 #include <linux/mm.h> 17 17 #include <linux/init.h> 18 + #include <linux/sched.h> 18 19 #include <linux/seq_file.h> 19 20 20 21 #include <asm/pgtable.h> ··· 407 406 } else 408 407 note_page(m, &st, __pgprot(0), 1); 409 408 409 + cond_resched(); 410 410 start++; 411 411 } 412 412
+4 -5
block/blk-lib.c
··· 306 306 if (ret == 0 || (ret && ret != -EOPNOTSUPP)) 307 307 goto out; 308 308 309 - ret = __blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask, 310 - ZERO_PAGE(0), biop); 311 - if (ret == 0 || (ret && ret != -EOPNOTSUPP)) 312 - goto out; 313 - 314 309 ret = 0; 315 310 while (nr_sects != 0) { 316 311 bio = next_bio(bio, min(nr_sects, (sector_t)BIO_MAX_PAGES), ··· 363 368 BLKDEV_DISCARD_ZERO)) 364 369 return 0; 365 370 } 371 + 372 + if (!blkdev_issue_write_same(bdev, sector, nr_sects, gfp_mask, 373 + ZERO_PAGE(0))) 374 + return 0; 366 375 367 376 blk_start_plug(&plug); 368 377 ret = __blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask,
+1 -1
crypto/algif_aead.c
··· 661 661 unlock: 662 662 list_for_each_entry_safe(rsgl, tmp, &ctx->list, list) { 663 663 af_alg_free_sg(&rsgl->sgl); 664 + list_del(&rsgl->list); 664 665 if (rsgl != &ctx->first_rsgl) 665 666 sock_kfree_s(sk, rsgl, sizeof(*rsgl)); 666 - list_del(&rsgl->list); 667 667 } 668 668 INIT_LIST_HEAD(&ctx->list); 669 669 aead_wmem_wakeup(sk);
+5 -1
drivers/acpi/nfit/core.c
··· 2704 2704 struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc); 2705 2705 struct device *dev = acpi_desc->dev; 2706 2706 struct acpi_nfit_flush_work flush; 2707 + int rc; 2707 2708 2708 2709 /* bounce the device lock to flush acpi_nfit_add / acpi_nfit_notify */ 2709 2710 device_lock(dev); ··· 2717 2716 INIT_WORK_ONSTACK(&flush.work, flush_probe); 2718 2717 COMPLETION_INITIALIZER_ONSTACK(flush.cmp); 2719 2718 queue_work(nfit_wq, &flush.work); 2720 - return wait_for_completion_interruptible(&flush.cmp); 2719 + 2720 + rc = wait_for_completion_interruptible(&flush.cmp); 2721 + cancel_work_sync(&flush.work); 2722 + return rc; 2721 2723 } 2722 2724 2723 2725 static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
+1 -4
drivers/base/firmware_class.c
··· 558 558 struct firmware_buf *buf = fw_priv->buf; 559 559 560 560 __fw_load_abort(buf); 561 - 562 - /* avoid user action after loading abort */ 563 - fw_priv->buf = NULL; 564 561 } 565 562 566 563 static LIST_HEAD(pending_fw_head); ··· 710 713 711 714 mutex_lock(&fw_lock); 712 715 fw_buf = fw_priv->buf; 713 - if (!fw_buf) 716 + if (fw_state_is_aborted(&fw_buf->fw_st)) 714 717 goto out; 715 718 716 719 switch (loading) {
+6 -6
drivers/base/memory.c
··· 389 389 { 390 390 struct memory_block *mem = to_memory_block(dev); 391 391 unsigned long start_pfn, end_pfn; 392 + unsigned long valid_start, valid_end, valid_pages; 392 393 unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block; 393 - struct page *first_page; 394 394 struct zone *zone; 395 395 int zone_shift = 0; 396 396 397 397 start_pfn = section_nr_to_pfn(mem->start_section_nr); 398 398 end_pfn = start_pfn + nr_pages; 399 - first_page = pfn_to_page(start_pfn); 400 399 401 400 /* The block contains more than one zone can not be offlined. */ 402 - if (!test_pages_in_a_zone(start_pfn, end_pfn)) 401 + if (!test_pages_in_a_zone(start_pfn, end_pfn, &valid_start, &valid_end)) 403 402 return sprintf(buf, "none\n"); 404 403 405 - zone = page_zone(first_page); 404 + zone = page_zone(pfn_to_page(valid_start)); 405 + valid_pages = valid_end - valid_start; 406 406 407 407 /* MMOP_ONLINE_KEEP */ 408 408 sprintf(buf, "%s", zone->name); 409 409 410 410 /* MMOP_ONLINE_KERNEL */ 411 - zone_can_shift(start_pfn, nr_pages, ZONE_NORMAL, &zone_shift); 411 + zone_can_shift(valid_start, valid_pages, ZONE_NORMAL, &zone_shift); 412 412 if (zone_shift) { 413 413 strcat(buf, " "); 414 414 strcat(buf, (zone + zone_shift)->name); 415 415 } 416 416 417 417 /* MMOP_ONLINE_MOVABLE */ 418 - zone_can_shift(start_pfn, nr_pages, ZONE_MOVABLE, &zone_shift); 418 + zone_can_shift(valid_start, valid_pages, ZONE_MOVABLE, &zone_shift); 419 419 if (zone_shift) { 420 420 strcat(buf, " "); 421 421 strcat(buf, (zone + zone_shift)->name);
+6 -5
drivers/base/power/runtime.c
··· 966 966 unsigned long flags; 967 967 int retval; 968 968 969 - might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe); 970 - 971 969 if (rpmflags & RPM_GET_PUT) { 972 970 if (!atomic_dec_and_test(&dev->power.usage_count)) 973 971 return 0; 974 972 } 973 + 974 + might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe); 975 975 976 976 spin_lock_irqsave(&dev->power.lock, flags); 977 977 retval = rpm_idle(dev, rpmflags); ··· 998 998 unsigned long flags; 999 999 int retval; 1000 1000 1001 - might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe); 1002 - 1003 1001 if (rpmflags & RPM_GET_PUT) { 1004 1002 if (!atomic_dec_and_test(&dev->power.usage_count)) 1005 1003 return 0; 1006 1004 } 1005 + 1006 + might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe); 1007 1007 1008 1008 spin_lock_irqsave(&dev->power.lock, flags); 1009 1009 retval = rpm_suspend(dev, rpmflags); ··· 1029 1029 unsigned long flags; 1030 1030 int retval; 1031 1031 1032 - might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe); 1032 + might_sleep_if(!(rpmflags & RPM_ASYNC) && !dev->power.irq_safe && 1033 + dev->power.runtime_status != RPM_ACTIVE); 1033 1034 1034 1035 if (rpmflags & RPM_GET_PUT) 1035 1036 atomic_inc(&dev->power.usage_count);
-3
drivers/char/hw_random/core.c
··· 92 92 mutex_unlock(&reading_mutex); 93 93 if (bytes_read > 0) 94 94 add_device_randomness(rng_buffer, bytes_read); 95 - memset(rng_buffer, 0, size); 96 95 } 97 96 98 97 static inline void cleanup_rng(struct kref *kref) ··· 287 288 } 288 289 } 289 290 out: 290 - memset(rng_buffer, 0, rng_buffer_size()); 291 291 return ret ? : err; 292 292 293 293 out_unlock_reading: ··· 425 427 /* Outside lock, sure, but y'know: randomness. */ 426 428 add_hwgenerator_randomness((void *)rng_fillbuf, rc, 427 429 rc * current_quality * 8 >> 10); 428 - memset(rng_fillbuf, 0, rng_buffer_size()); 429 430 } 430 431 hwrng_fill = NULL; 431 432 return 0;
+14 -3
drivers/cpufreq/brcmstb-avs-cpufreq.c
··· 784 784 static int brcm_avs_suspend(struct cpufreq_policy *policy) 785 785 { 786 786 struct private_data *priv = policy->driver_data; 787 + int ret; 787 788 788 - return brcm_avs_get_pmap(priv, &priv->pmap); 789 + ret = brcm_avs_get_pmap(priv, &priv->pmap); 790 + if (ret) 791 + return ret; 792 + 793 + /* 794 + * We can't use the P-state returned by brcm_avs_get_pmap(), since 795 + * that's the initial P-state from when the P-map was downloaded to the 796 + * AVS co-processor, not necessarily the P-state we are running at now. 797 + * So, we get the current P-state explicitly. 798 + */ 799 + return brcm_avs_get_pstate(priv, &priv->pmap.state); 789 800 } 790 801 791 802 static int brcm_avs_resume(struct cpufreq_policy *policy) ··· 965 954 brcm_avs_parse_p1(pmap.p1, &mdiv_p0, &pdiv, &ndiv); 966 955 brcm_avs_parse_p2(pmap.p2, &mdiv_p1, &mdiv_p2, &mdiv_p3, &mdiv_p4); 967 956 968 - return sprintf(buf, "0x%08x 0x%08x %u %u %u %u %u %u %u\n", 957 + return sprintf(buf, "0x%08x 0x%08x %u %u %u %u %u %u %u %u %u\n", 969 958 pmap.p1, pmap.p2, ndiv, pdiv, mdiv_p0, mdiv_p1, mdiv_p2, 970 - mdiv_p3, mdiv_p4); 959 + mdiv_p3, mdiv_p4, pmap.mode, pmap.state); 971 960 } 972 961 973 962 static ssize_t show_brcm_avs_voltage(struct cpufreq_policy *policy, char *buf)
+30
drivers/cpufreq/intel_pstate.c
··· 1235 1235 cpudata->epp_default = intel_pstate_get_epp(cpudata, 0); 1236 1236 } 1237 1237 1238 + #define MSR_IA32_POWER_CTL_BIT_EE 19 1239 + 1240 + /* Disable energy efficiency optimization */ 1241 + static void intel_pstate_disable_ee(int cpu) 1242 + { 1243 + u64 power_ctl; 1244 + int ret; 1245 + 1246 + ret = rdmsrl_on_cpu(cpu, MSR_IA32_POWER_CTL, &power_ctl); 1247 + if (ret) 1248 + return; 1249 + 1250 + if (!(power_ctl & BIT(MSR_IA32_POWER_CTL_BIT_EE))) { 1251 + pr_info("Disabling energy efficiency optimization\n"); 1252 + power_ctl |= BIT(MSR_IA32_POWER_CTL_BIT_EE); 1253 + wrmsrl_on_cpu(cpu, MSR_IA32_POWER_CTL, power_ctl); 1254 + } 1255 + } 1256 + 1238 1257 static int atom_get_min_pstate(void) 1239 1258 { 1240 1259 u64 value; ··· 1864 1845 {} 1865 1846 }; 1866 1847 1848 + static const struct x86_cpu_id intel_pstate_cpu_ee_disable_ids[] = { 1849 + ICPU(INTEL_FAM6_KABYLAKE_DESKTOP, core_params), 1850 + {} 1851 + }; 1852 + 1867 1853 static int intel_pstate_init_cpu(unsigned int cpunum) 1868 1854 { 1869 1855 struct cpudata *cpu; ··· 1899 1875 cpu->cpu = cpunum; 1900 1876 1901 1877 if (hwp_active) { 1878 + const struct x86_cpu_id *id; 1879 + 1880 + id = x86_match_cpu(intel_pstate_cpu_ee_disable_ids); 1881 + if (id) 1882 + intel_pstate_disable_ee(cpunum); 1883 + 1902 1884 intel_pstate_hwp_enable(cpu); 1903 1885 pid_params.sample_rate_ms = 50; 1904 1886 pid_params.sample_rate_ns = 50 * NSEC_PER_MSEC;
+1 -1
drivers/crypto/ccp/ccp-dev-v5.c
··· 959 959 static void ccp5_config(struct ccp_device *ccp) 960 960 { 961 961 /* Public side */ 962 - iowrite32(0x00001249, ccp->io_regs + CMD5_REQID_CONFIG_OFFSET); 962 + iowrite32(0x0, ccp->io_regs + CMD5_REQID_CONFIG_OFFSET); 963 963 } 964 964 965 965 static void ccp5other_config(struct ccp_device *ccp)
+1
drivers/crypto/ccp/ccp-dev.h
··· 238 238 struct ccp_device *ccp; 239 239 240 240 spinlock_t lock; 241 + struct list_head created; 241 242 struct list_head pending; 242 243 struct list_head active; 243 244 struct list_head complete;
+5 -1
drivers/crypto/ccp/ccp-dmaengine.c
··· 63 63 ccp_free_desc_resources(chan->ccp, &chan->complete); 64 64 ccp_free_desc_resources(chan->ccp, &chan->active); 65 65 ccp_free_desc_resources(chan->ccp, &chan->pending); 66 + ccp_free_desc_resources(chan->ccp, &chan->created); 66 67 67 68 spin_unlock_irqrestore(&chan->lock, flags); 68 69 } ··· 274 273 spin_lock_irqsave(&chan->lock, flags); 275 274 276 275 cookie = dma_cookie_assign(tx_desc); 276 + list_del(&desc->entry); 277 277 list_add_tail(&desc->entry, &chan->pending); 278 278 279 279 spin_unlock_irqrestore(&chan->lock, flags); ··· 428 426 429 427 spin_lock_irqsave(&chan->lock, sflags); 430 428 431 - list_add_tail(&desc->entry, &chan->pending); 429 + list_add_tail(&desc->entry, &chan->created); 432 430 433 431 spin_unlock_irqrestore(&chan->lock, sflags); 434 432 ··· 612 610 /*TODO: Purge the complete list? */ 613 611 ccp_free_desc_resources(chan->ccp, &chan->active); 614 612 ccp_free_desc_resources(chan->ccp, &chan->pending); 613 + ccp_free_desc_resources(chan->ccp, &chan->created); 615 614 616 615 spin_unlock_irqrestore(&chan->lock, flags); 617 616 ··· 682 679 chan->ccp = ccp; 683 680 684 681 spin_lock_init(&chan->lock); 682 + INIT_LIST_HEAD(&chan->created); 685 683 INIT_LIST_HEAD(&chan->pending); 686 684 INIT_LIST_HEAD(&chan->active); 687 685 INIT_LIST_HEAD(&chan->complete);
+28 -25
drivers/crypto/chelsio/chcr_algo.c
··· 158 158 case CRYPTO_ALG_TYPE_AEAD: 159 159 ctx_req.req.aead_req = (struct aead_request *)req; 160 160 ctx_req.ctx.reqctx = aead_request_ctx(ctx_req.req.aead_req); 161 - dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.req.aead_req->dst, 161 + dma_unmap_sg(&u_ctx->lldi.pdev->dev, ctx_req.ctx.reqctx->dst, 162 162 ctx_req.ctx.reqctx->dst_nents, DMA_FROM_DEVICE); 163 163 if (ctx_req.ctx.reqctx->skb) { 164 164 kfree_skb(ctx_req.ctx.reqctx->skb); ··· 1362 1362 struct chcr_wr *chcr_req; 1363 1363 struct cpl_rx_phys_dsgl *phys_cpl; 1364 1364 struct phys_sge_parm sg_param; 1365 - struct scatterlist *src, *dst; 1366 - struct scatterlist src_sg[2], dst_sg[2]; 1365 + struct scatterlist *src; 1367 1366 unsigned int frags = 0, transhdr_len; 1368 1367 unsigned int ivsize = crypto_aead_ivsize(tfm), dst_size = 0; 1369 1368 unsigned int kctx_len = 0; ··· 1382 1383 1383 1384 if (sg_nents_for_len(req->src, req->assoclen + req->cryptlen) < 0) 1384 1385 goto err; 1385 - src = scatterwalk_ffwd(src_sg, req->src, req->assoclen); 1386 - dst = src; 1386 + src = scatterwalk_ffwd(reqctx->srcffwd, req->src, req->assoclen); 1387 + reqctx->dst = src; 1388 + 1387 1389 if (req->src != req->dst) { 1388 1390 err = chcr_copy_assoc(req, aeadctx); 1389 1391 if (err) 1390 1392 return ERR_PTR(err); 1391 - dst = scatterwalk_ffwd(dst_sg, req->dst, req->assoclen); 1393 + reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst, 1394 + req->assoclen); 1392 1395 } 1393 1396 if (get_aead_subtype(tfm) == CRYPTO_ALG_SUB_TYPE_AEAD_NULL) { 1394 1397 null = 1; 1395 1398 assoclen = 0; 1396 1399 } 1397 - reqctx->dst_nents = sg_nents_for_len(dst, req->cryptlen + 1400 + reqctx->dst_nents = sg_nents_for_len(reqctx->dst, req->cryptlen + 1398 1401 (op_type ? -authsize : authsize)); 1399 1402 if (reqctx->dst_nents <= 0) { 1400 1403 pr_err("AUTHENC:Invalid Destination sg entries\n"); ··· 1461 1460 sg_param.obsize = req->cryptlen + (op_type ? 
-authsize : authsize); 1462 1461 sg_param.qid = qid; 1463 1462 sg_param.align = 0; 1464 - if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, dst, 1463 + if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, reqctx->dst, 1465 1464 &sg_param)) 1466 1465 goto dstmap_fail; 1467 1466 ··· 1712 1711 struct chcr_wr *chcr_req; 1713 1712 struct cpl_rx_phys_dsgl *phys_cpl; 1714 1713 struct phys_sge_parm sg_param; 1715 - struct scatterlist *src, *dst; 1716 - struct scatterlist src_sg[2], dst_sg[2]; 1714 + struct scatterlist *src; 1717 1715 unsigned int frags = 0, transhdr_len, ivsize = AES_BLOCK_SIZE; 1718 1716 unsigned int dst_size = 0, kctx_len; 1719 1717 unsigned int sub_type; ··· 1728 1728 if (sg_nents_for_len(req->src, req->assoclen + req->cryptlen) < 0) 1729 1729 goto err; 1730 1730 sub_type = get_aead_subtype(tfm); 1731 - src = scatterwalk_ffwd(src_sg, req->src, req->assoclen); 1732 - dst = src; 1731 + src = scatterwalk_ffwd(reqctx->srcffwd, req->src, req->assoclen); 1732 + reqctx->dst = src; 1733 + 1733 1734 if (req->src != req->dst) { 1734 1735 err = chcr_copy_assoc(req, aeadctx); 1735 1736 if (err) { 1736 1737 pr_err("AAD copy to destination buffer fails\n"); 1737 1738 return ERR_PTR(err); 1738 1739 } 1739 - dst = scatterwalk_ffwd(dst_sg, req->dst, req->assoclen); 1740 + reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst, 1741 + req->assoclen); 1740 1742 } 1741 - reqctx->dst_nents = sg_nents_for_len(dst, req->cryptlen + 1743 + reqctx->dst_nents = sg_nents_for_len(reqctx->dst, req->cryptlen + 1742 1744 (op_type ? -authsize : authsize)); 1743 1745 if (reqctx->dst_nents <= 0) { 1744 1746 pr_err("CCM:Invalid Destination sg entries\n"); ··· 1779 1777 sg_param.obsize = req->cryptlen + (op_type ? 
-authsize : authsize); 1780 1778 sg_param.qid = qid; 1781 1779 sg_param.align = 0; 1782 - if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, dst, 1780 + if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, reqctx->dst, 1783 1781 &sg_param)) 1784 1782 goto dstmap_fail; 1785 1783 ··· 1811 1809 struct chcr_wr *chcr_req; 1812 1810 struct cpl_rx_phys_dsgl *phys_cpl; 1813 1811 struct phys_sge_parm sg_param; 1814 - struct scatterlist *src, *dst; 1815 - struct scatterlist src_sg[2], dst_sg[2]; 1812 + struct scatterlist *src; 1816 1813 unsigned int frags = 0, transhdr_len; 1817 1814 unsigned int ivsize = AES_BLOCK_SIZE; 1818 1815 unsigned int dst_size = 0, kctx_len; ··· 1833 1832 if (sg_nents_for_len(req->src, req->assoclen + req->cryptlen) < 0) 1834 1833 goto err; 1835 1834 1836 - src = scatterwalk_ffwd(src_sg, req->src, req->assoclen); 1837 - dst = src; 1835 + src = scatterwalk_ffwd(reqctx->srcffwd, req->src, req->assoclen); 1836 + reqctx->dst = src; 1838 1837 if (req->src != req->dst) { 1839 1838 err = chcr_copy_assoc(req, aeadctx); 1840 1839 if (err) 1841 1840 return ERR_PTR(err); 1842 - dst = scatterwalk_ffwd(dst_sg, req->dst, req->assoclen); 1841 + reqctx->dst = scatterwalk_ffwd(reqctx->dstffwd, req->dst, 1842 + req->assoclen); 1843 1843 } 1844 1844 1845 1845 if (!req->cryptlen) ··· 1850 1848 crypt_len = AES_BLOCK_SIZE; 1851 1849 else 1852 1850 crypt_len = req->cryptlen; 1853 - reqctx->dst_nents = sg_nents_for_len(dst, req->cryptlen + 1851 + reqctx->dst_nents = sg_nents_for_len(reqctx->dst, req->cryptlen + 1854 1852 (op_type ? -authsize : authsize)); 1855 1853 if (reqctx->dst_nents <= 0) { 1856 1854 pr_err("GCM:Invalid Destination sg entries\n"); ··· 1925 1923 sg_param.obsize = req->cryptlen + (op_type ? 
-authsize : authsize); 1926 1924 sg_param.qid = qid; 1927 1925 sg_param.align = 0; 1928 - if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, dst, 1926 + if (map_writesg_phys_cpl(&u_ctx->lldi.pdev->dev, phys_cpl, reqctx->dst, 1929 1927 &sg_param)) 1930 1928 goto dstmap_fail; 1931 1929 ··· 1939 1937 write_sg_to_skb(skb, &frags, src, req->cryptlen); 1940 1938 } else { 1941 1939 aes_gcm_empty_pld_pad(req->dst, authsize - 1); 1942 - write_sg_to_skb(skb, &frags, dst, crypt_len); 1940 + write_sg_to_skb(skb, &frags, reqctx->dst, crypt_len); 1941 + 1943 1942 } 1944 1943 1945 1944 create_wreq(ctx, chcr_req, req, skb, kctx_len, size, 1, ··· 2192 2189 unsigned int ck_size; 2193 2190 int ret = 0, key_ctx_size = 0; 2194 2191 2195 - if (get_aead_subtype(aead) == 2196 - CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106) { 2192 + if (get_aead_subtype(aead) == CRYPTO_ALG_SUB_TYPE_AEAD_RFC4106 && 2193 + keylen > 3) { 2197 2194 keylen -= 4; /* nonce/salt is present in the last 4 bytes */ 2198 2195 memcpy(aeadctx->salt, key + keylen, 4); 2199 2196 }
+8 -10
drivers/crypto/chelsio/chcr_core.c
··· 52 52 int assign_chcr_device(struct chcr_dev **dev) 53 53 { 54 54 struct uld_ctx *u_ctx; 55 + int ret = -ENXIO; 55 56 56 57 /* 57 58 * Which device to use if multiple devices are available TODO ··· 60 59 * must go to the same device to maintain the ordering. 61 60 */ 62 61 mutex_lock(&dev_mutex); /* TODO ? */ 63 - u_ctx = list_first_entry(&uld_ctx_list, struct uld_ctx, entry); 64 - if (!u_ctx) { 65 - mutex_unlock(&dev_mutex); 66 - return -ENXIO; 62 + list_for_each_entry(u_ctx, &uld_ctx_list, entry) 63 + if (u_ctx && u_ctx->dev) { 64 + *dev = u_ctx->dev; 65 + ret = 0; 66 + break; 67 67 } 68 - 69 - *dev = u_ctx->dev; 70 68 mutex_unlock(&dev_mutex); 71 - return 0; 69 + return ret; 72 70 } 73 71 74 72 static int chcr_dev_add(struct uld_ctx *u_ctx) ··· 202 202 203 203 static int __init chcr_crypto_init(void) 204 204 { 205 - if (cxgb4_register_uld(CXGB4_ULD_CRYPTO, &chcr_uld_info)) { 205 + if (cxgb4_register_uld(CXGB4_ULD_CRYPTO, &chcr_uld_info)) 206 206 pr_err("ULD register fail: No chcr crypto support in cxgb4"); 207 - return -1; 208 - } 209 207 210 208 return 0; 211 209 }
+3
drivers/crypto/chelsio/chcr_crypto.h
··· 158 158 }; 159 159 struct chcr_aead_reqctx { 160 160 struct sk_buff *skb; 161 + struct scatterlist *dst; 162 + struct scatterlist srcffwd[2]; 163 + struct scatterlist dstffwd[2]; 161 164 short int dst_nents; 162 165 u16 verify; 163 166 u8 iv[CHCR_MAX_CRYPTO_IV_LEN];
+1 -1
drivers/crypto/qat/qat_c62x/adf_drv.c
··· 233 233 &hw_data->accel_capabilities_mask); 234 234 235 235 /* Find and map all the device's BARS */ 236 - i = 0; 236 + i = (hw_data->fuses & ADF_DEVICE_FUSECTL_MASK) ? 1 : 0; 237 237 bar_mask = pci_select_bars(pdev, IORESOURCE_MEM); 238 238 for_each_set_bit(bar_nr, (const unsigned long *)&bar_mask, 239 239 ADF_PCI_MAX_BARS * 2) {
+1
drivers/crypto/qat/qat_common/adf_accel_devices.h
··· 69 69 #define ADF_ERRSOU5 (0x3A000 + 0xD8) 70 70 #define ADF_DEVICE_FUSECTL_OFFSET 0x40 71 71 #define ADF_DEVICE_LEGFUSE_OFFSET 0x4C 72 + #define ADF_DEVICE_FUSECTL_MASK 0x80000000 72 73 #define ADF_PCI_MAX_BARS 3 73 74 #define ADF_DEVICE_NAME_LENGTH 32 74 75 #define ADF_ETR_MAX_RINGS_PER_BANK 16
+2 -2
drivers/crypto/qat/qat_common/qat_hal.c
··· 456 456 unsigned int csr_val; 457 457 int times = 30; 458 458 459 - if (handle->pci_dev->device == ADF_C3XXX_PCI_DEVICE_ID) 459 + if (handle->pci_dev->device != ADF_DH895XCC_PCI_DEVICE_ID) 460 460 return 0; 461 461 462 462 csr_val = ADF_CSR_RD(csr_addr, 0); ··· 716 716 (void __iomem *)((uintptr_t)handle->hal_cap_ae_xfer_csr_addr_v + 717 717 LOCAL_TO_XFER_REG_OFFSET); 718 718 handle->pci_dev = pci_info->pci_dev; 719 - if (handle->pci_dev->device != ADF_C3XXX_PCI_DEVICE_ID) { 719 + if (handle->pci_dev->device == ADF_DH895XCC_PCI_DEVICE_ID) { 720 720 sram_bar = 721 721 &pci_info->pci_bars[hw_data->get_sram_bar_id(hw_data)]; 722 722 handle->hal_sram_addr_v = sram_bar->virt_addr;
+3 -1
drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
··· 254 254 } 255 255 WREG32(mmHDP_REG_COHERENCY_FLUSH_CNTL, 0); 256 256 257 + if (adev->mode_info.num_crtc) 258 + amdgpu_display_set_vga_render_state(adev, false); 259 + 257 260 gmc_v6_0_mc_stop(adev, &save); 258 261 259 262 if (gmc_v6_0_wait_for_idle((void *)adev)) { ··· 286 283 dev_warn(adev->dev, "Wait for MC idle timedout !\n"); 287 284 } 288 285 gmc_v6_0_mc_resume(adev, &save); 289 - amdgpu_display_set_vga_render_state(adev, false); 290 286 } 291 287 292 288 static int gmc_v6_0_mc_init(struct amdgpu_device *adev)
+8 -5
drivers/gpu/drm/drm_atomic.c
··· 2032 2032 } 2033 2033 2034 2034 for_each_crtc_in_state(state, crtc, crtc_state, i) { 2035 + struct drm_pending_vblank_event *event = crtc_state->event; 2035 2036 /* 2036 - * TEST_ONLY and PAGE_FLIP_EVENT are mutually 2037 - * exclusive, if they weren't, this code should be 2038 - * called on success for TEST_ONLY too. 2037 + * Free the allocated event. drm_atomic_helper_setup_commit 2038 + * can allocate an event too, so only free it if it's ours 2039 + * to prevent a double free in drm_atomic_state_clear. 2039 2040 */ 2040 - if (crtc_state->event) 2041 - drm_event_cancel_free(dev, &crtc_state->event->base); 2041 + if (event && (event->base.fence || event->base.file_priv)) { 2042 + drm_event_cancel_free(dev, &event->base); 2043 + crtc_state->event = NULL; 2044 + } 2042 2045 } 2043 2046 2044 2047 if (!fence_state)
-9
drivers/gpu/drm/drm_atomic_helper.c
··· 1666 1666 1667 1667 funcs = plane->helper_private; 1668 1668 1669 - if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc)) 1670 - continue; 1671 - 1672 1669 if (funcs->prepare_fb) { 1673 1670 ret = funcs->prepare_fb(plane, plane_state); 1674 1671 if (ret) ··· 1680 1683 const struct drm_plane_helper_funcs *funcs; 1681 1684 1682 1685 if (j >= i) 1683 - continue; 1684 - 1685 - if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc)) 1686 1686 continue; 1687 1687 1688 1688 funcs = plane->helper_private; ··· 1947 1953 1948 1954 for_each_plane_in_state(old_state, plane, plane_state, i) { 1949 1955 const struct drm_plane_helper_funcs *funcs; 1950 - 1951 - if (!drm_atomic_helper_framebuffer_changed(dev, old_state, plane_state->crtc)) 1952 - continue; 1953 1956 1954 1957 funcs = plane->helper_private; 1955 1958
+18 -5
drivers/gpu/drm/drm_connector.c
··· 225 225 226 226 INIT_LIST_HEAD(&connector->probed_modes); 227 227 INIT_LIST_HEAD(&connector->modes); 228 + mutex_init(&connector->mutex); 228 229 connector->edid_blob_ptr = NULL; 229 230 connector->status = connector_status_unknown; 230 231 ··· 360 359 connector->funcs->atomic_destroy_state(connector, 361 360 connector->state); 362 361 362 + mutex_destroy(&connector->mutex); 363 + 363 364 memset(connector, 0, sizeof(*connector)); 364 365 } 365 366 EXPORT_SYMBOL(drm_connector_cleanup); ··· 377 374 */ 378 375 int drm_connector_register(struct drm_connector *connector) 379 376 { 380 - int ret; 377 + int ret = 0; 381 378 382 - if (connector->registered) 379 + if (!connector->dev->registered) 383 380 return 0; 381 + 382 + mutex_lock(&connector->mutex); 383 + if (connector->registered) 384 + goto unlock; 384 385 385 386 ret = drm_sysfs_connector_add(connector); 386 387 if (ret) 387 - return ret; 388 + goto unlock; 388 389 389 390 ret = drm_debugfs_connector_add(connector); 390 391 if (ret) { ··· 404 397 drm_mode_object_register(connector->dev, &connector->base); 405 398 406 399 connector->registered = true; 407 - return 0; 400 + goto unlock; 408 401 409 402 err_debugfs: 410 403 drm_debugfs_connector_remove(connector); 411 404 err_sysfs: 412 405 drm_sysfs_connector_remove(connector); 406 + unlock: 407 + mutex_unlock(&connector->mutex); 413 408 return ret; 414 409 } 415 410 EXPORT_SYMBOL(drm_connector_register); ··· 424 415 */ 425 416 void drm_connector_unregister(struct drm_connector *connector) 426 417 { 427 - if (!connector->registered) 418 + mutex_lock(&connector->mutex); 419 + if (!connector->registered) { 420 + mutex_unlock(&connector->mutex); 428 421 return; 422 + } 429 423 430 424 if (connector->funcs->early_unregister) 431 425 connector->funcs->early_unregister(connector); ··· 437 425 drm_debugfs_connector_remove(connector); 438 426 439 427 connector->registered = false; 428 + mutex_unlock(&connector->mutex); 440 429 } 441 430 EXPORT_SYMBOL(drm_connector_unregister); 442 431
+4
drivers/gpu/drm/drm_drv.c
··· 745 745 if (ret) 746 746 goto err_minors; 747 747 748 + dev->registered = true; 749 + 748 750 if (dev->driver->load) { 749 751 ret = dev->driver->load(dev, flags); 750 752 if (ret) ··· 786 784 struct drm_map_list *r_list, *list_temp; 787 785 788 786 drm_lastclose(dev); 787 + 788 + dev->registered = false; 789 789 790 790 if (drm_core_check_feature(dev, DRIVER_MODESET)) 791 791 drm_modeset_unregister_all(dev);
+3 -1
drivers/gpu/drm/i915/i915_drv.c
··· 213 213 } else if (id == INTEL_PCH_KBP_DEVICE_ID_TYPE) { 214 214 dev_priv->pch_type = PCH_KBP; 215 215 DRM_DEBUG_KMS("Found KabyPoint PCH\n"); 216 - WARN_ON(!IS_KABYLAKE(dev_priv)); 216 + WARN_ON(!IS_SKYLAKE(dev_priv) && 217 + !IS_KABYLAKE(dev_priv)); 217 218 } else if ((id == INTEL_PCH_P2X_DEVICE_ID_TYPE) || 218 219 (id == INTEL_PCH_P3X_DEVICE_ID_TYPE) || 219 220 ((id == INTEL_PCH_QEMU_DEVICE_ID_TYPE) && ··· 2428 2427 * we can do is to hope that things will still work (and disable RPM). 2429 2428 */ 2430 2429 i915_gem_init_swizzling(dev_priv); 2430 + i915_gem_restore_fences(dev_priv); 2431 2431 2432 2432 intel_runtime_pm_enable_interrupts(dev_priv); 2433 2433
+4 -12
drivers/gpu/drm/i915/i915_drv.h
··· 1012 1012 struct work_struct underrun_work; 1013 1013 1014 1014 struct intel_fbc_state_cache { 1015 + struct i915_vma *vma; 1016 + 1015 1017 struct { 1016 1018 unsigned int mode_flags; 1017 1019 uint32_t hsw_bdw_pixel_rate; ··· 1027 1025 } plane; 1028 1026 1029 1027 struct { 1030 - u64 ilk_ggtt_offset; 1031 1028 uint32_t pixel_format; 1032 1029 unsigned int stride; 1033 - int fence_reg; 1034 - unsigned int tiling_mode; 1035 1030 } fb; 1036 1031 } state_cache; 1037 1032 1038 1033 struct intel_fbc_reg_params { 1034 + struct i915_vma *vma; 1035 + 1039 1036 struct { 1040 1037 enum pipe pipe; 1041 1038 enum plane plane; ··· 1042 1041 } crtc; 1043 1042 1044 1043 struct { 1045 - u64 ggtt_offset; 1046 1044 uint32_t pixel_format; 1047 1045 unsigned int stride; 1048 - int fence_reg; 1049 1046 } fb; 1050 1047 1051 1048 int cfb_size; ··· 3165 3166 const struct i915_ggtt_view *view) 3166 3167 { 3167 3168 return i915_gem_obj_to_vma(obj, &to_i915(obj->base.dev)->ggtt.base, view); 3168 - } 3169 - 3170 - static inline unsigned long 3171 - i915_gem_object_ggtt_offset(struct drm_i915_gem_object *o, 3172 - const struct i915_ggtt_view *view) 3173 - { 3174 - return i915_ggtt_offset(i915_gem_object_to_ggtt(o, view)); 3175 3169 } 3176 3170 3177 3171 /* i915_gem_fence_reg.c */
+11 -3
drivers/gpu/drm/i915/i915_gem.c
··· 2010 2010 for (i = 0; i < dev_priv->num_fence_regs; i++) { 2011 2011 struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i]; 2012 2012 2013 - if (WARN_ON(reg->pin_count)) 2014 - continue; 2013 + /* Ideally we want to assert that the fence register is not 2014 + * live at this point (i.e. that no piece of code will be 2015 + * trying to write through fence + GTT, as that both violates 2016 + * our tracking of activity and associated locking/barriers, 2017 + * but also is illegal given that the hw is powered down). 2018 + * 2019 + * Previously we used reg->pin_count as a "liveness" indicator. 2020 + * That is not sufficient, and we need a more fine-grained 2021 + * tool if we want to have a sanity check here. 2022 + */ 2015 2023 2016 2024 if (!reg->vma) 2017 2025 continue; ··· 3486 3478 vma->display_alignment = max_t(u64, vma->display_alignment, alignment); 3487 3479 3488 3480 /* Treat this as an end-of-frame, like intel_user_framebuffer_dirty() */ 3489 - if (obj->cache_dirty) { 3481 + if (obj->cache_dirty || obj->base.write_domain == I915_GEM_DOMAIN_CPU) { 3490 3482 i915_gem_clflush_object(obj, true); 3491 3483 intel_fb_obj_flush(obj, false, ORIGIN_DIRTYFB); 3492 3484 }
+6 -6
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 1181 1181 if (exec[i].offset != 1182 1182 gen8_canonical_addr(exec[i].offset & PAGE_MASK)) 1183 1183 return -EINVAL; 1184 - 1185 - /* From drm_mm perspective address space is continuous, 1186 - * so from this point we're always using non-canonical 1187 - * form internally. 1188 - */ 1189 - exec[i].offset = gen8_noncanonical_addr(exec[i].offset); 1190 1184 } 1185 + 1186 + /* From drm_mm perspective address space is continuous, 1187 + * so from this point we're always using non-canonical 1188 + * form internally. 1189 + */ 1190 + exec[i].offset = gen8_noncanonical_addr(exec[i].offset); 1191 1191 1192 1192 if (exec[i].alignment && !is_power_of_2(exec[i].alignment)) 1193 1193 return -EINVAL;
+10 -2
drivers/gpu/drm/i915/i915_gem_internal.c
··· 66 66 67 67 max_order = MAX_ORDER; 68 68 #ifdef CONFIG_SWIOTLB 69 - if (swiotlb_nr_tbl()) /* minimum max swiotlb size is IO_TLB_SEGSIZE */ 70 - max_order = min(max_order, ilog2(IO_TLB_SEGPAGES)); 69 + if (swiotlb_nr_tbl()) { 70 + unsigned int max_segment; 71 + 72 + max_segment = swiotlb_max_segment(); 73 + if (max_segment) { 74 + max_segment = max_t(unsigned int, max_segment, 75 + PAGE_SIZE) >> PAGE_SHIFT; 76 + max_order = min(max_order, ilog2(max_segment)); 77 + } 78 + } 71 79 #endif 72 80 73 81 gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
+20
drivers/gpu/drm/i915/intel_atomic_plane.c
··· 85 85 86 86 __drm_atomic_helper_plane_duplicate_state(plane, state); 87 87 88 + intel_state->vma = NULL; 89 + 88 90 return state; 89 91 } 90 92 ··· 102 100 intel_plane_destroy_state(struct drm_plane *plane, 103 101 struct drm_plane_state *state) 104 102 { 103 + struct i915_vma *vma; 104 + 105 + vma = fetch_and_zero(&to_intel_plane_state(state)->vma); 106 + 107 + /* 108 + * FIXME: Normally intel_cleanup_plane_fb handles destruction of vma. 109 + * We currently don't clear all planes during driver unload, so we have 110 + * to be able to unpin vma here for now. 111 + * 112 + * Normally this can only happen during unload when kmscon is disabled 113 + * and userspace doesn't attempt to set a framebuffer at all. 114 + */ 115 + if (vma) { 116 + mutex_lock(&plane->dev->struct_mutex); 117 + intel_unpin_fb_vma(vma); 118 + mutex_unlock(&plane->dev->struct_mutex); 119 + } 120 + 105 121 drm_atomic_helper_plane_destroy_state(plane, state); 106 122 } 107 123
+44 -85
drivers/gpu/drm/i915/intel_display.c
··· 2235 2235 i915_vma_pin_fence(vma); 2236 2236 } 2237 2237 2238 + i915_vma_get(vma); 2238 2239 err: 2239 2240 intel_runtime_pm_put(dev_priv); 2240 2241 return vma; 2241 2242 } 2242 2243 2243 - void intel_unpin_fb_obj(struct drm_framebuffer *fb, unsigned int rotation) 2244 + void intel_unpin_fb_vma(struct i915_vma *vma) 2244 2245 { 2245 - struct drm_i915_gem_object *obj = intel_fb_obj(fb); 2246 - struct i915_ggtt_view view; 2247 - struct i915_vma *vma; 2248 - 2249 - WARN_ON(!mutex_is_locked(&obj->base.dev->struct_mutex)); 2250 - 2251 - intel_fill_fb_ggtt_view(&view, fb, rotation); 2252 - vma = i915_gem_object_to_ggtt(obj, &view); 2246 + lockdep_assert_held(&vma->vm->dev->struct_mutex); 2253 2247 2254 2248 if (WARN_ON_ONCE(!vma)) 2255 2249 return; 2256 2250 2257 2251 i915_vma_unpin_fence(vma); 2258 2252 i915_gem_object_unpin_from_display_plane(vma); 2253 + i915_vma_put(vma); 2259 2254 } 2260 2255 2261 2256 static int intel_fb_pitch(const struct drm_framebuffer *fb, int plane, ··· 2745 2750 struct drm_device *dev = intel_crtc->base.dev; 2746 2751 struct drm_i915_private *dev_priv = to_i915(dev); 2747 2752 struct drm_crtc *c; 2748 - struct intel_crtc *i; 2749 2753 struct drm_i915_gem_object *obj; 2750 2754 struct drm_plane *primary = intel_crtc->base.primary; 2751 2755 struct drm_plane_state *plane_state = primary->state; ··· 2769 2775 * an fb with another CRTC instead 2770 2776 */ 2771 2777 for_each_crtc(dev, c) { 2772 - i = to_intel_crtc(c); 2778 + struct intel_plane_state *state; 2773 2779 2774 2780 if (c == &intel_crtc->base) 2775 2781 continue; 2776 2782 2777 - if (!i->active) 2783 + if (!to_intel_crtc(c)->active) 2778 2784 continue; 2779 2785 2780 - fb = c->primary->fb; 2781 - if (!fb) 2786 + state = to_intel_plane_state(c->primary->state); 2787 + if (!state->vma) 2782 2788 continue; 2783 2789 2784 - obj = intel_fb_obj(fb); 2785 - if (i915_gem_object_ggtt_offset(obj, NULL) == plane_config->base) { 2790 + if (intel_plane_ggtt_offset(state) == plane_config->base) { 2791 + fb = c->primary->fb; 2786 2792 drm_framebuffer_reference(fb); 2787 2793 goto valid_fb; 2788 2794 } ··· 2803 2809 return; 2804 2810 2805 2811 valid_fb: 2812 + mutex_lock(&dev->struct_mutex); 2813 + intel_state->vma = 2814 + intel_pin_and_fence_fb_obj(fb, primary->state->rotation); 2815 + mutex_unlock(&dev->struct_mutex); 2816 + if (IS_ERR(intel_state->vma)) { 2817 + DRM_ERROR("failed to pin boot fb on pipe %d: %li\n", 2818 + intel_crtc->pipe, PTR_ERR(intel_state->vma)); 2819 + 2820 + intel_state->vma = NULL; 2821 + drm_framebuffer_unreference(fb); 2822 + return; 2823 + } 2824 + 2806 2825 plane_state->src_x = 0; 2807 2826 plane_state->src_y = 0; 2808 2827 plane_state->src_w = fb->width << 16; ··· 3111 3104 I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]); 3112 3105 if (INTEL_GEN(dev_priv) >= 4) { 3113 3106 I915_WRITE(DSPSURF(plane), 3114 - intel_fb_gtt_offset(fb, rotation) + 3107 + intel_plane_ggtt_offset(plane_state) + 3115 3108 intel_crtc->dspaddr_offset); 3116 3109 I915_WRITE(DSPTILEOFF(plane), (y << 16) | x); 3117 3110 I915_WRITE(DSPLINOFF(plane), linear_offset); 3118 3111 } else { 3119 3112 I915_WRITE(DSPADDR(plane), 3120 - intel_fb_gtt_offset(fb, rotation) + 3113 + intel_plane_ggtt_offset(plane_state) + 3121 3114 intel_crtc->dspaddr_offset); 3122 3115 } 3123 3116 POSTING_READ(reg); ··· 3214 3207 3215 3208 I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]); 3216 3209 I915_WRITE(DSPSURF(plane), 3217 - intel_fb_gtt_offset(fb, rotation) + 3210 + intel_plane_ggtt_offset(plane_state) + 3218 3211 intel_crtc->dspaddr_offset); 3219 3212 if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) { 3220 3213 I915_WRITE(DSPOFFSET(plane), (y << 16) | x); ··· 3235 3228 3236 3229 return intel_tile_width_bytes(dev_priv, fb_modifier, cpp); 3237 3230 } 3238 - } 3239 - 3240 - u32 intel_fb_gtt_offset(struct drm_framebuffer *fb, 3241 - unsigned int rotation) 3242 - { 3243 - struct drm_i915_gem_object *obj = intel_fb_obj(fb); 3244 - struct i915_ggtt_view view; 3245 - struct i915_vma *vma; 3246 - 3247 - intel_fill_fb_ggtt_view(&view, fb, rotation); 3248 - 3249 - vma = i915_gem_object_to_ggtt(obj, &view); 3250 - if (WARN(!vma, "ggtt vma for display object not found! (view=%u)\n", 3251 - view.type)) 3252 - return -1; 3253 - 3254 - return i915_ggtt_offset(vma); 3255 3231 } 3256 3232 3257 3233 static void skl_detach_scaler(struct intel_crtc *intel_crtc, int id) ··· 3431 3441 } 3432 3442 3433 3443 I915_WRITE(PLANE_SURF(pipe, 0), 3434 - intel_fb_gtt_offset(fb, rotation) + surf_addr); 3444 + intel_plane_ggtt_offset(plane_state) + surf_addr); 3435 3445 3436 3446 POSTING_READ(PLANE_SURF(pipe, 0)); 3437 3447 } ··· 4262 4272 drm_crtc_vblank_put(&intel_crtc->base); 4263 4273 4264 4274 wake_up_all(&dev_priv->pending_flip_queue); 4265 - queue_work(dev_priv->wq, &work->unpin_work); 4266 - 4267 4275 trace_i915_flip_complete(intel_crtc->plane, 4268 4276 work->pending_flip_obj); 4277 + 4278 + queue_work(dev_priv->wq, &work->unpin_work); 4269 4279 } 4270 4280 4271 4281 static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc) ··· 11526 11536 flush_work(&work->mmio_work); 11527 11537 11528 11538 mutex_lock(&dev->struct_mutex); 11529 - intel_unpin_fb_obj(work->old_fb, primary->state->rotation); 11539 + intel_unpin_fb_vma(work->old_vma); 11530 11540 i915_gem_object_put(work->pending_flip_obj); 11531 11541 mutex_unlock(&dev->struct_mutex); 11532 11542 ··· 12236 12246 goto cleanup_pending; 12237 12247 } 12238 12248 12239 - work->gtt_offset = intel_fb_gtt_offset(fb, primary->state->rotation); 12240 - work->gtt_offset += intel_crtc->dspaddr_offset; 12249 + work->old_vma = to_intel_plane_state(primary->state)->vma; 12250 + to_intel_plane_state(primary->state)->vma = vma; 12251 + 12252 + work->gtt_offset = i915_ggtt_offset(vma) + intel_crtc->dspaddr_offset; 12241 12253 work->rotation = crtc->primary->state->rotation; 12242 12254 12243 12255 /* ··· 12293 12301 cleanup_request: 12294 12302 i915_add_request_no_flush(request); 12295 12303 cleanup_unpin: 12296 - intel_unpin_fb_obj(fb, crtc->primary->state->rotation); 12304 + to_intel_plane_state(primary->state)->vma = work->old_vma; 12305 + intel_unpin_fb_vma(vma); 12297 12306 cleanup_pending: 12298 12307 atomic_dec(&intel_crtc->unpin_work_count); 12299 12308 unlock: ··· 14787 14794 DRM_DEBUG_KMS("failed to pin object\n"); 14788 14795 return PTR_ERR(vma); 14789 14796 } 14797 + 14798 + to_intel_plane_state(new_state)->vma = vma; 14790 14799 } 14791 14800 14792 14801 return 0; ··· 14807 14812 intel_cleanup_plane_fb(struct drm_plane *plane, 14808 14813 struct drm_plane_state *old_state) 14809 14814 { 14810 - struct drm_i915_private *dev_priv = to_i915(plane->dev); 14811 - struct intel_plane_state *old_intel_state; 14812 - struct drm_i915_gem_object *old_obj = intel_fb_obj(old_state->fb); 14813 - struct drm_i915_gem_object *obj = intel_fb_obj(plane->state->fb); 14815 + struct i915_vma *vma; 14814 14816 14815 - old_intel_state = to_intel_plane_state(old_state); 14816 - 14817 - if (!obj && !old_obj) 14818 - return; 14819 - 14820 - if (old_obj && (plane->type != DRM_PLANE_TYPE_CURSOR || 14821 - !INTEL_INFO(dev_priv)->cursor_needs_physical)) 14822 - intel_unpin_fb_obj(old_state->fb, old_state->rotation); 14817 + /* Should only be called after a successful intel_prepare_plane_fb()! */ 14818 + vma = fetch_and_zero(&to_intel_plane_state(old_state)->vma); 14819 + if (vma) 14820 + intel_unpin_fb_vma(vma); 14823 14821 } 14824 14822 14825 14823 int ··· 15154 15166 if (!obj) 15155 15167 addr = 0; 15156 15168 else if (!INTEL_INFO(dev_priv)->cursor_needs_physical) 15157 - addr = i915_gem_object_ggtt_offset(obj, NULL); 15169 + addr = intel_plane_ggtt_offset(state); 15158 15170 else 15159 15171 addr = obj->phys_handle->busaddr; 15160 15172 ··· 17054 17066 void intel_modeset_gem_init(struct drm_device *dev) 17055 17067 { 17056 17068 struct drm_i915_private *dev_priv = to_i915(dev); 17057 - struct drm_crtc *c; 17058 - struct drm_i915_gem_object *obj; 17059 17069 17060 17070 intel_init_gt_powersave(dev_priv); 17061 17071 17062 17072 intel_modeset_init_hw(dev); 17063 17073 17064 17074 intel_setup_overlay(dev_priv); 17065 - 17066 - /* 17067 - * Make sure any fbs we allocated at startup are properly 17068 - * pinned & fenced. When we do the allocation it's too early 17069 - * for this. 17070 - */ 17071 - for_each_crtc(dev, c) { 17072 - struct i915_vma *vma; 17073 - 17074 - obj = intel_fb_obj(c->primary->fb); 17075 - if (obj == NULL) 17076 - continue; 17077 - 17078 - mutex_lock(&dev->struct_mutex); 17079 - vma = intel_pin_and_fence_fb_obj(c->primary->fb, 17080 - c->primary->state->rotation); 17081 - mutex_unlock(&dev->struct_mutex); 17082 - if (IS_ERR(vma)) { 17083 - DRM_ERROR("failed to pin boot fb on pipe %d\n", 17084 - to_intel_crtc(c)->pipe); 17085 - drm_framebuffer_unreference(c->primary->fb); 17086 - c->primary->fb = NULL; 17087 - c->primary->crtc = c->primary->state->crtc = NULL; 17088 - update_state_fb(c->primary); 17089 - c->state->plane_mask &= ~(1 << drm_plane_index(c->primary)); 17090 - } 17091 - } 17092 17075 } 17093 17076 17094 17077 int intel_connector_register(struct drm_connector *connector)
+2 -1
drivers/gpu/drm/i915/intel_dpll_mgr.c
··· 1730 1730 return NULL; 1731 1731 1732 1732 if ((encoder->type == INTEL_OUTPUT_DP || 1733 - encoder->type == INTEL_OUTPUT_EDP) && 1733 + encoder->type == INTEL_OUTPUT_EDP || 1734 + encoder->type == INTEL_OUTPUT_DP_MST) && 1734 1735 !bxt_ddi_dp_set_dpll_hw_state(clock, &dpll_hw_state)) 1735 1736 return NULL; 1736 1737
+7 -2
drivers/gpu/drm/i915/intel_drv.h
··· 377 377 struct intel_plane_state { 378 378 struct drm_plane_state base; 379 379 struct drm_rect clip; 380 + struct i915_vma *vma; 380 381 381 382 struct { 382 383 u32 offset; ··· 1047 1046 struct work_struct mmio_work; 1048 1047 1049 1048 struct drm_crtc *crtc; 1049 + struct i915_vma *old_vma; 1050 1050 struct drm_framebuffer *old_fb; 1051 1051 struct drm_i915_gem_object *pending_flip_obj; 1052 1052 struct drm_pending_vblank_event *event; ··· 1275 1273 struct drm_modeset_acquire_ctx *ctx); 1276 1274 struct i915_vma * 1277 1275 intel_pin_and_fence_fb_obj(struct drm_framebuffer *fb, unsigned int rotation); 1278 - void intel_unpin_fb_obj(struct drm_framebuffer *fb, unsigned int rotation); 1276 + void intel_unpin_fb_vma(struct i915_vma *vma); 1279 1277 struct drm_framebuffer * 1280 1278 __intel_framebuffer_create(struct drm_device *dev, 1281 1279 struct drm_mode_fb_cmd2 *mode_cmd, ··· 1364 1362 int skl_update_scaler_crtc(struct intel_crtc_state *crtc_state); 1365 1363 int skl_max_scale(struct intel_crtc *crtc, struct intel_crtc_state *crtc_state); 1366 1364 1367 - u32 intel_fb_gtt_offset(struct drm_framebuffer *fb, unsigned int rotation); 1365 + static inline u32 intel_plane_ggtt_offset(const struct intel_plane_state *state) 1366 + { 1367 + return i915_ggtt_offset(state->vma); 1368 + } 1368 1369 1369 1370 u32 skl_plane_ctl_format(uint32_t pixel_format); 1370 1371 u32 skl_plane_ctl_tiling(uint64_t fb_modifier);
+20 -32
drivers/gpu/drm/i915/intel_fbc.c
··· 173 173 if (IS_I945GM(dev_priv)) 174 174 fbc_ctl |= FBC_CTL_C3_IDLE; /* 945 needs special SR handling */ 175 175 fbc_ctl |= (cfb_pitch & 0xff) << FBC_CTL_STRIDE_SHIFT; 176 - fbc_ctl |= params->fb.fence_reg; 176 + fbc_ctl |= params->vma->fence->id; 177 177 I915_WRITE(FBC_CONTROL, fbc_ctl); 178 178 } 179 179 ··· 193 193 else 194 194 dpfc_ctl |= DPFC_CTL_LIMIT_1X; 195 195 196 - if (params->fb.fence_reg != I915_FENCE_REG_NONE) { 197 - dpfc_ctl |= DPFC_CTL_FENCE_EN | params->fb.fence_reg; 196 + if (params->vma->fence) { 197 + dpfc_ctl |= DPFC_CTL_FENCE_EN | params->vma->fence->id; 198 198 I915_WRITE(DPFC_FENCE_YOFF, params->crtc.fence_y_offset); 199 199 } else { 200 200 I915_WRITE(DPFC_FENCE_YOFF, 0); ··· 251 251 break; 252 252 } 253 253 254 - if (params->fb.fence_reg != I915_FENCE_REG_NONE) { 254 + if (params->vma->fence) { 255 255 dpfc_ctl |= DPFC_CTL_FENCE_EN; 256 256 if (IS_GEN5(dev_priv)) 257 - dpfc_ctl |= params->fb.fence_reg; 257 + dpfc_ctl |= params->vma->fence->id; 258 258 if (IS_GEN6(dev_priv)) { 259 259 I915_WRITE(SNB_DPFC_CTL_SA, 260 - SNB_CPU_FENCE_ENABLE | params->fb.fence_reg); 260 + SNB_CPU_FENCE_ENABLE | 261 + params->vma->fence->id); 261 262 I915_WRITE(DPFC_CPU_FENCE_OFFSET, 262 263 params->crtc.fence_y_offset); 263 264 } ··· 270 269 } 271 270 272 271 I915_WRITE(ILK_DPFC_FENCE_YOFF, params->crtc.fence_y_offset); 273 - I915_WRITE(ILK_FBC_RT_BASE, params->fb.ggtt_offset | ILK_FBC_RT_VALID); 272 + I915_WRITE(ILK_FBC_RT_BASE, 273 + i915_ggtt_offset(params->vma) | ILK_FBC_RT_VALID); 274 274 /* enable it... */ 275 275 I915_WRITE(ILK_DPFC_CONTROL, dpfc_ctl | DPFC_CTL_EN); 276 276 ··· 321 319 break; 322 320 } 323 321 324 - if (params->fb.fence_reg != I915_FENCE_REG_NONE) { 322 + if (params->vma->fence) { 325 323 dpfc_ctl |= IVB_DPFC_CTL_FENCE_EN; 326 324 I915_WRITE(SNB_DPFC_CTL_SA, 327 - SNB_CPU_FENCE_ENABLE | params->fb.fence_reg); 325 + SNB_CPU_FENCE_ENABLE | 326 + params->vma->fence->id); 328 327 I915_WRITE(DPFC_CPU_FENCE_OFFSET, params->crtc.fence_y_offset); 329 328 } else { 330 329 I915_WRITE(SNB_DPFC_CTL_SA,0); ··· 730 727 return effective_w <= max_w && effective_h <= max_h; 731 728 } 732 729 733 - /* XXX replace me when we have VMA tracking for intel_plane_state */ 734 - static int get_fence_id(struct drm_framebuffer *fb) 735 - { 736 - struct i915_vma *vma = i915_gem_object_to_ggtt(intel_fb_obj(fb), NULL); 737 - 738 - return vma && vma->fence ? vma->fence->id : I915_FENCE_REG_NONE; 739 - } 740 - 741 730 static void intel_fbc_update_state_cache(struct intel_crtc *crtc, 742 731 struct intel_crtc_state *crtc_state, 743 732 struct intel_plane_state *plane_state) ··· 738 743 struct intel_fbc *fbc = &dev_priv->fbc; 739 744 struct intel_fbc_state_cache *cache = &fbc->state_cache; 740 745 struct drm_framebuffer *fb = plane_state->base.fb; 741 - struct drm_i915_gem_object *obj; 746 + 747 + cache->vma = NULL; 742 748 743 749 cache->crtc.mode_flags = crtc_state->base.adjusted_mode.flags; 744 750 if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) ··· 754 758 if (!cache->plane.visible) 755 759 return; 756 760 757 - obj = intel_fb_obj(fb); 758 - 759 - /* FIXME: We lack the proper locking here, so only run this on the platforms that need. */ 761 - if (IS_GEN(dev_priv, 5, 6)) 762 - cache->fb.ilk_ggtt_offset = i915_gem_object_ggtt_offset(obj, NULL); 763 761 cache->fb.pixel_format = fb->pixel_format; 764 762 cache->fb.stride = fb->pitches[0]; 765 - cache->fb.fence_reg = get_fence_id(fb); 766 - cache->fb.tiling_mode = i915_gem_object_get_tiling(obj); 763 + 764 + cache->vma = plane_state->vma; 767 765 } 768 766 769 767 static bool intel_fbc_can_activate(struct intel_crtc *crtc) ··· 774 784 return false; 775 785 } 776 786 777 - if (!cache->plane.visible) { 787 + if (!cache->vma) { 778 788 fbc->no_fbc_reason = "primary plane not visible"; 779 789 return false; 780 790 } ··· 797 807 * so have no fence associated with it) due to aperture constaints 798 808 * at the time of pinning. 799 809 */ 800 - if (cache->fb.tiling_mode != I915_TILING_X || 801 - cache->fb.fence_reg == I915_FENCE_REG_NONE) { 810 + if (!cache->vma->fence) { 802 811 fbc->no_fbc_reason = "framebuffer not tiled or fenced"; 803 812 return false; 804 813 } ··· 877 888 * zero. */ 878 889 memset(params, 0, sizeof(*params)); 879 890 891 + params->vma = cache->vma; 892 + 880 893 params->crtc.pipe = crtc->pipe; 881 894 params->crtc.plane = crtc->plane; 882 895 params->crtc.fence_y_offset = get_crtc_fence_y_offset(crtc); 883 896 884 897 params->fb.pixel_format = cache->fb.pixel_format; 885 898 params->fb.stride = cache->fb.stride; 886 - params->fb.fence_reg = cache->fb.fence_reg; 887 899 888 900 params->cfb_size = intel_fbc_calculate_cfb_size(dev_priv, cache); 889 - 890 - params->fb.ggtt_offset = cache->fb.ilk_ggtt_offset; 891 901 } 892 902 893 903 static bool intel_fbc_reg_params_equal(struct intel_fbc_reg_params *params1,
+2 -2
drivers/gpu/drm/i915/intel_fbdev.c
··· 284 284 out_destroy_fbi: 285 285 drm_fb_helper_release_fbi(helper); 286 286 out_unpin: 287 - intel_unpin_fb_obj(&ifbdev->fb->base, DRM_ROTATE_0); 287 + intel_unpin_fb_vma(vma); 288 288 out_unlock: 289 289 mutex_unlock(&dev->struct_mutex); 290 290 return ret; ··· 549 549 550 550 if (ifbdev->fb) { 551 551 mutex_lock(&ifbdev->helper.dev->struct_mutex); 552 - intel_unpin_fb_obj(&ifbdev->fb->base, DRM_ROTATE_0); 552 + intel_unpin_fb_vma(ifbdev->vma); 553 553 mutex_unlock(&ifbdev->helper.dev->struct_mutex); 554 554 555 555 drm_framebuffer_remove(&ifbdev->fb->base);
+4 -4
drivers/gpu/drm/i915/intel_sprite.c
··· 273 273 274 274 I915_WRITE(PLANE_CTL(pipe, plane), plane_ctl); 275 275 I915_WRITE(PLANE_SURF(pipe, plane), 276 - intel_fb_gtt_offset(fb, rotation) + surf_addr); 276 + intel_plane_ggtt_offset(plane_state) + surf_addr); 277 277 POSTING_READ(PLANE_SURF(pipe, plane)); 278 278 } 279 279 ··· 458 458 I915_WRITE(SPSIZE(pipe, plane), (crtc_h << 16) | crtc_w); 459 459 I915_WRITE(SPCNTR(pipe, plane), sprctl); 460 460 I915_WRITE(SPSURF(pipe, plane), 461 - intel_fb_gtt_offset(fb, rotation) + sprsurf_offset); 461 + intel_plane_ggtt_offset(plane_state) + sprsurf_offset); 462 462 POSTING_READ(SPSURF(pipe, plane)); 463 463 } 464 464 ··· 594 594 I915_WRITE(SPRSCALE(pipe), sprscale); 595 595 I915_WRITE(SPRCTL(pipe), sprctl); 596 596 I915_WRITE(SPRSURF(pipe), 597 - intel_fb_gtt_offset(fb, rotation) + sprsurf_offset); 597 + intel_plane_ggtt_offset(plane_state) + sprsurf_offset); 598 598 POSTING_READ(SPRSURF(pipe)); 599 599 } 600 600 ··· 721 721 I915_WRITE(DVSSCALE(pipe), dvsscale); 722 722 I915_WRITE(DVSCNTR(pipe), dvscntr); 723 723 I915_WRITE(DVSSURF(pipe), 724 - intel_fb_gtt_offset(fb, rotation) + dvssurf_offset); 724 + intel_plane_ggtt_offset(plane_state) + dvssurf_offset); 725 725 POSTING_READ(DVSSURF(pipe)); 726 726 } 727 727
+2 -1
drivers/gpu/drm/nouveau/dispnv04/hw.c
··· 222 222 uint32_t mpllP; 223 223 224 224 pci_read_config_dword(pci_get_bus_and_slot(0, 3), 0x6c, &mpllP); 225 + mpllP = (mpllP >> 8) & 0xf; 225 226 if (!mpllP) 226 227 mpllP = 4; 227 228 ··· 233 232 uint32_t clock; 234 233 235 234 pci_read_config_dword(pci_get_bus_and_slot(0, 5), 0x4c, &clock); 236 - return clock; 235 + return clock / 1000; 237 236 } 238 237 239 238 ret = nouveau_hw_get_pllvals(dev, plltype, &pllvals);
+1
drivers/gpu/drm/nouveau/nouveau_fence.h
··· 99 99 struct nouveau_bo *bo; 100 100 struct nouveau_bo *bo_gart; 101 101 u32 *suspend; 102 + struct mutex mutex; 102 103 }; 103 104 104 105 int nv84_fence_context_new(struct nouveau_channel *);
+1 -1
drivers/gpu/drm/nouveau/nouveau_led.h
··· 42 42 } 43 43 44 44 /* nouveau_led.c */ 45 - #if IS_ENABLED(CONFIG_LEDS_CLASS) 45 + #if IS_REACHABLE(CONFIG_LEDS_CLASS) 46 46 int nouveau_led_init(struct drm_device *dev); 47 47 void nouveau_led_suspend(struct drm_device *dev); 48 48 void nouveau_led_resume(struct drm_device *dev);
+2 -1
drivers/gpu/drm/nouveau/nouveau_usif.c
··· 313 313 if (!(ret = nvif_unpack(-ENOSYS, &data, &size, argv->v0, 0, 0, true))) { 314 314 /* block access to objects not created via this interface */ 315 315 owner = argv->v0.owner; 316 - if (argv->v0.object == 0ULL) 316 + if (argv->v0.object == 0ULL && 317 + argv->v0.type != NVIF_IOCTL_V0_DEL) 317 318 argv->v0.owner = NVDRM_OBJECT_ANY; /* except client */ 318 319 else 319 320 argv->v0.owner = NVDRM_OBJECT_USIF;
+6
drivers/gpu/drm/nouveau/nv50_display.c
··· 4052 4052 } 4053 4053 } 4054 4054 4055 + for_each_crtc_in_state(state, crtc, crtc_state, i) { 4056 + if (crtc->state->event) 4057 + drm_crtc_vblank_get(crtc); 4058 + } 4059 + 4055 4060 /* Update plane(s). */ 4056 4061 for_each_plane_in_state(state, plane, plane_state, i) { 4057 4062 struct nv50_wndw_atom *asyw = nv50_wndw_atom(plane->state); ··· 4106 4101 drm_crtc_send_vblank_event(crtc, crtc->state->event); 4107 4102 spin_unlock_irqrestore(&crtc->dev->event_lock, flags); 4108 4103 crtc->state->event = NULL; 4104 + drm_crtc_vblank_put(crtc); 4109 4105 } 4110 4106 } 4111 4107
+6
drivers/gpu/drm/nouveau/nv84_fence.c
··· 107 107 struct nv84_fence_chan *fctx = chan->fence; 108 108 109 109 nouveau_bo_wr32(priv->bo, chan->chid * 16 / 4, fctx->base.sequence); 110 + mutex_lock(&priv->mutex); 110 111 nouveau_bo_vma_del(priv->bo, &fctx->vma_gart); 111 112 nouveau_bo_vma_del(priv->bo, &fctx->vma); 113 + mutex_unlock(&priv->mutex); 112 114 nouveau_fence_context_del(&fctx->base); 113 115 chan->fence = NULL; 114 116 nouveau_fence_context_free(&fctx->base); ··· 136 134 fctx->base.sync32 = nv84_fence_sync32; 137 135 fctx->base.sequence = nv84_fence_read(chan); 138 136 137 + mutex_lock(&priv->mutex); 139 138 ret = nouveau_bo_vma_add(priv->bo, cli->vm, &fctx->vma); 140 139 if (ret == 0) { 141 140 ret = nouveau_bo_vma_add(priv->bo_gart, cli->vm, 142 141 &fctx->vma_gart); 143 142 } 143 + mutex_unlock(&priv->mutex); 144 144 145 145 if (ret) 146 146 nv84_fence_context_del(chan); ··· 215 211 priv->base.contexts = fifo->nr; 216 212 priv->base.context_base = dma_fence_context_alloc(priv->base.contexts); 217 213 priv->base.uevent = true; 214 + 215 + mutex_init(&priv->mutex); 218 216 219 217 /* Use VRAM if there is any ; otherwise fallback to system memory */ 220 218 domain = drm->device.info.ram_size != 0 ? TTM_PL_FLAG_VRAM :
+1 -1
drivers/gpu/drm/nouveau/nvkm/engine/disp/hdagt215.c
··· 59 59 ); 60 60 } 61 61 for (i = 0; i < size; i++) 62 - nvkm_wr32(device, 0x61c440 + soff, (i << 8) | args->v0.data[0]); 62 + nvkm_wr32(device, 0x61c440 + soff, (i << 8) | args->v0.data[i]); 63 63 for (; i < 0x60; i++) 64 64 nvkm_wr32(device, 0x61c440 + soff, (i << 8)); 65 65 nvkm_mask(device, 0x61c448 + soff, 0x80000003, 0x80000003);
-2
drivers/gpu/drm/nouveau/nvkm/engine/disp/nv50.c
··· 433 433 case 0x94: 434 434 case 0x96: 435 435 case 0x98: 436 - case 0xaa: 437 - case 0xac: 438 436 return true; 439 437 default: 440 438 break;
+2 -1
drivers/gpu/drm/radeon/radeon_drv.c
··· 97 97 * 2.46.0 - Add PFP_SYNC_ME support on evergreen 98 98 * 2.47.0 - Add UVD_NO_OP register support 99 99 * 2.48.0 - TA_CS_BC_BASE_ADDR allowed on SI 100 + * 2.49.0 - DRM_RADEON_GEM_INFO ioctl returns correct vram_size/visible values 100 101 */ 101 102 #define KMS_DRIVER_MAJOR 2 102 - #define KMS_DRIVER_MINOR 48 103 + #define KMS_DRIVER_MINOR 49 103 104 #define KMS_DRIVER_PATCHLEVEL 0 104 105 int radeon_driver_load_kms(struct drm_device *dev, unsigned long flags); 105 106 int radeon_driver_unload_kms(struct drm_device *dev);
+2 -2
drivers/gpu/drm/radeon/radeon_gem.c
··· 220 220 221 221 man = &rdev->mman.bdev.man[TTM_PL_VRAM]; 222 222 223 - args->vram_size = rdev->mc.real_vram_size; 224 - args->vram_visible = (u64)man->size << PAGE_SHIFT; 223 + args->vram_size = (u64)man->size << PAGE_SHIFT; 224 + args->vram_visible = rdev->mc.visible_vram_size; 225 225 args->vram_visible -= rdev->vram_pin_size; 226 226 args->gart_size = rdev->mc.gtt_size; 227 227 args->gart_size -= rdev->gart_pin_size;
+1 -1
drivers/gpu/drm/vc4/vc4_plane.c
··· 858 858 } 859 859 } 860 860 plane = &vc4_plane->base; 861 - ret = drm_universal_plane_init(dev, plane, 0xff, 861 + ret = drm_universal_plane_init(dev, plane, 0, 862 862 &vc4_plane_funcs, 863 863 formats, num_formats, 864 864 type, NULL);
+1 -2
drivers/gpu/drm/vmwgfx/vmwgfx_fb.c
··· 481 481 mode_cmd.height = var->yres; 482 482 mode_cmd.pitches[0] = ((var->bits_per_pixel + 7) / 8) * mode_cmd.width; 483 483 mode_cmd.pixel_format = 484 - drm_mode_legacy_fb_format(var->bits_per_pixel, 485 - ((var->bits_per_pixel + 7) / 8) * mode_cmd.width); 484 + drm_mode_legacy_fb_format(var->bits_per_pixel, depth); 486 485 487 486 cur_fb = par->set_fb; 488 487 if (cur_fb && cur_fb->width == mode_cmd.width &&
+1
drivers/hv/ring_buffer.c
··· 383 383 return ret; 384 384 } 385 385 386 + init_cached_read_index(channel); 386 387 next_read_location = hv_get_next_read_location(inring_info); 387 388 next_read_location = hv_copyfrom_ringbuffer(inring_info, &desc, 388 389 sizeof(desc),
+8 -6
drivers/i2c/busses/i2c-piix4.c
··· 58 58 #define SMBSLVDAT (0xC + piix4_smba) 59 59 60 60 /* count for request_region */ 61 - #define SMBIOSIZE 8 61 + #define SMBIOSIZE 9 62 62 63 63 /* PCI Address Constants */ 64 64 #define SMBBA 0x090 ··· 592 592 u8 port; 593 593 int retval; 594 594 595 + mutex_lock(&piix4_mutex_sb800); 596 + 595 597 /* Request the SMBUS semaphore, avoid conflicts with the IMC */ 596 598 smbslvcnt = inb_p(SMBSLVCNT); 597 599 do { ··· 607 605 usleep_range(1000, 2000); 608 606 } while (--retries); 609 607 /* SMBus is still owned by the IMC, we give up */ 610 - if (!retries) 608 + if (!retries) { 609 + mutex_unlock(&piix4_mutex_sb800); 611 610 return -EBUSY; 612 - 613 - mutex_lock(&piix4_mutex_sb800); 611 + } 614 612 615 613 outb_p(piix4_port_sel_sb800, SB800_PIIX4_SMB_IDX); 616 614 smba_en_lo = inb_p(SB800_PIIX4_SMB_IDX + 1); ··· 625 623 626 624 outb_p(smba_en_lo, SB800_PIIX4_SMB_IDX + 1); 627 625 628 - mutex_unlock(&piix4_mutex_sb800); 629 - 630 626 /* Release the semaphore */ 631 627 outb_p(smbslvcnt | 0x20, SMBSLVCNT); 628 + 629 + mutex_unlock(&piix4_mutex_sb800); 632 630 633 631 return retval; 634 632 }
+2 -2
drivers/iio/adc/palmas_gpadc.c
··· 775 775 776 776 static int palmas_gpadc_suspend(struct device *dev) 777 777 { 778 - struct iio_dev *indio_dev = dev_to_iio_dev(dev); 778 + struct iio_dev *indio_dev = dev_get_drvdata(dev); 779 779 struct palmas_gpadc *adc = iio_priv(indio_dev); 780 780 int wakeup = adc->wakeup1_enable || adc->wakeup2_enable; 781 781 int ret; ··· 798 798 799 799 static int palmas_gpadc_resume(struct device *dev) 800 800 { 801 - struct iio_dev *indio_dev = dev_to_iio_dev(dev); 801 + struct iio_dev *indio_dev = dev_get_drvdata(dev); 802 802 struct palmas_gpadc *adc = iio_priv(indio_dev); 803 803 int wakeup = adc->wakeup1_enable || adc->wakeup2_enable; 804 804 int ret;
+2 -2
drivers/iio/health/afe4403.c
··· 422 422 423 423 static int __maybe_unused afe4403_suspend(struct device *dev) 424 424 { 425 - struct iio_dev *indio_dev = dev_to_iio_dev(dev); 425 + struct iio_dev *indio_dev = spi_get_drvdata(to_spi_device(dev)); 426 426 struct afe4403_data *afe = iio_priv(indio_dev); 427 427 int ret; 428 428 ··· 443 443 444 444 static int __maybe_unused afe4403_resume(struct device *dev) 445 445 { 446 - struct iio_dev *indio_dev = dev_to_iio_dev(dev); 446 + struct iio_dev *indio_dev = spi_get_drvdata(to_spi_device(dev)); 447 447 struct afe4403_data *afe = iio_priv(indio_dev); 448 448 int ret; 449 449
+2 -2
drivers/iio/health/afe4404.c
··· 428 428 429 429 static int __maybe_unused afe4404_suspend(struct device *dev) 430 430 { 431 - struct iio_dev *indio_dev = dev_to_iio_dev(dev); 431 + struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev)); 432 432 struct afe4404_data *afe = iio_priv(indio_dev); 433 433 int ret; 434 434 ··· 449 449 450 450 static int __maybe_unused afe4404_resume(struct device *dev) 451 451 { 452 - struct iio_dev *indio_dev = dev_to_iio_dev(dev); 452 + struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev)); 453 453 struct afe4404_data *afe = iio_priv(indio_dev); 454 454 int ret; 455 455
+1 -1
drivers/iio/health/max30100.c
··· 238 238 239 239 mutex_lock(&data->lock); 240 240 241 - while (cnt || (cnt = max30100_fifo_count(data) > 0)) { 241 + while (cnt || (cnt = max30100_fifo_count(data)) > 0) { 242 242 ret = max30100_read_measurement(data); 243 243 if (ret) 244 244 break;
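The max30100.c change above fixes a classic C operator-precedence bug: `>` binds tighter than `=`, so `cnt = max30100_fifo_count(data) > 0` stored the boolean result of the comparison instead of the FIFO count. A minimal userspace sketch of the difference, using a hypothetical `fifo_count()` stand-in for the driver call:

```c
#include <assert.h>

/* Hypothetical stand-in for max30100_fifo_count(): pretend 3 samples wait. */
static int fifo_count(void)
{
	return 3;
}

/* Buggy form: '>' binds tighter than '=', so cnt gets the comparison result. */
static int buggy(void)
{
	int cnt = fifo_count() > 0;	/* cnt == 1, the sample count is lost */
	return cnt;
}

/* Fixed form, as in the patch: parenthesize the assignment first. */
static int fixed(void)
{
	int cnt;

	if ((cnt = fifo_count()) > 0)	/* cnt == 3, comparison still works */
		return cnt;
	return 0;
}
```

With the buggy form the loop body would run at most once per interrupt regardless of how many samples were queued, which is exactly what the one-character parenthesis move repairs.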
+4 -2
drivers/iio/humidity/dht11.c
··· 71 71 * a) select an implementation using busy loop polling on those systems 72 72 * b) use the checksum to do some probabilistic decoding 73 73 */ 74 - #define DHT11_START_TRANSMISSION 18 /* ms */ 74 + #define DHT11_START_TRANSMISSION_MIN 18000 /* us */ 75 + #define DHT11_START_TRANSMISSION_MAX 20000 /* us */ 75 76 #define DHT11_MIN_TIMERES 34000 /* ns */ 76 77 #define DHT11_THRESHOLD 49000 /* ns */ 77 78 #define DHT11_AMBIG_LOW 23000 /* ns */ ··· 229 228 ret = gpio_direction_output(dht11->gpio, 0); 230 229 if (ret) 231 230 goto err; 232 - msleep(DHT11_START_TRANSMISSION); 231 + usleep_range(DHT11_START_TRANSMISSION_MIN, 232 + DHT11_START_TRANSMISSION_MAX); 233 233 ret = gpio_direction_input(dht11->gpio); 234 234 if (ret) 235 235 goto err;
+5 -3
drivers/infiniband/sw/rxe/rxe_mr.c
··· 59 59 60 60 case RXE_MEM_TYPE_MR: 61 61 case RXE_MEM_TYPE_FMR: 62 - return ((iova < mem->iova) || 63 - ((iova + length) > (mem->iova + mem->length))) ? 64 - -EFAULT : 0; 62 + if (iova < mem->iova || 63 + length > mem->length || 64 + iova > mem->iova + mem->length - length) 65 + return -EFAULT; 66 + return 0; 65 67 66 68 default: 67 69 return -EFAULT;
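The rxe_mr.c change replaces the naive containment test `(iova + length) > (mem->iova + mem->length)`, which can wrap around for large `iova` or `length`, with a form whose intermediate values cannot overflow. A hedged userspace sketch of that pattern (names are illustrative, not the kernel's; the region `[base, base + size)` is assumed to be itself non-wrapping):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Overflow-safe containment check in the style of the rxe_mr.c fix:
 * [iova, iova + length) must lie inside [base, base + size).
 * The naive "iova + length > base + size" can wrap when iova or length
 * is attacker-controlled; the rewritten comparisons stay in range.
 */
static int range_ok(uint64_t iova, uint64_t length,
		    uint64_t base, uint64_t size)
{
	if (iova < base ||
	    length > size ||
	    iova > base + size - length)
		return 0;	/* the kernel returns -EFAULT here */
	return 1;
}
```

Note how `length > size` is checked first, so the subtraction `base + size - length` in the final comparison can never underflow.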
+1 -1
drivers/infiniband/sw/rxe/rxe_resp.c
··· 479 479 goto err2; 480 480 } 481 481 482 - resid = mtu; 482 + qp->resp.resid = mtu; 483 483 } else { 484 484 if (pktlen != resid) { 485 485 state = RESPST_ERR_LENGTH;
+14 -6
drivers/input/misc/uinput.c
··· 263 263 return -EINVAL; 264 264 } 265 265 266 - if (test_bit(ABS_MT_SLOT, dev->absbit)) { 267 - nslot = input_abs_get_max(dev, ABS_MT_SLOT) + 1; 268 - error = input_mt_init_slots(dev, nslot, 0); 269 - if (error) 266 + if (test_bit(EV_ABS, dev->evbit)) { 267 + input_alloc_absinfo(dev); 268 + if (!dev->absinfo) { 269 + error = -EINVAL; 270 270 goto fail1; 271 - } else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) { 272 - input_set_events_per_packet(dev, 60); 271 + } 272 + 273 + if (test_bit(ABS_MT_SLOT, dev->absbit)) { 274 + nslot = input_abs_get_max(dev, ABS_MT_SLOT) + 1; 275 + error = input_mt_init_slots(dev, nslot, 0); 276 + if (error) 277 + goto fail1; 278 + } else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) { 279 + input_set_events_per_packet(dev, 60); 280 + } 273 281 } 274 282 275 283 if (test_bit(EV_FF, dev->evbit) && !udev->ff_effects_max) {
+7 -1
drivers/input/rmi4/Kconfig
··· 42 42 config RMI4_F03 43 43 bool "RMI4 Function 03 (PS2 Guest)" 44 44 depends on RMI4_CORE 45 - depends on SERIO=y || RMI4_CORE=SERIO 46 45 help 47 46 Say Y here if you want to add support for RMI4 function 03. 48 47 49 48 Function 03 provides PS2 guest support for RMI4 devices. This 50 49 includes support for TrackPoints on TouchPads. 50 + 51 + config RMI4_F03_SERIO 52 + tristate 53 + depends on RMI4_CORE 54 + depends on RMI4_F03 55 + default RMI4_CORE 56 + select SERIO 51 57 52 58 config RMI4_2D_SENSOR 53 59 bool
+19 -9
drivers/irqchip/irq-keystone.c
··· 19 19 #include <linux/bitops.h> 20 20 #include <linux/module.h> 21 21 #include <linux/moduleparam.h> 22 + #include <linux/interrupt.h> 22 23 #include <linux/irqdomain.h> 23 24 #include <linux/irqchip.h> 24 - #include <linux/irqchip/chained_irq.h> 25 25 #include <linux/of.h> 26 26 #include <linux/of_platform.h> 27 27 #include <linux/mfd/syscon.h> ··· 39 39 struct irq_domain *irqd; 40 40 struct regmap *devctrl_regs; 41 41 u32 devctrl_offset; 42 + raw_spinlock_t wa_lock; 42 43 }; 43 44 44 45 static inline u32 keystone_irq_readl(struct keystone_irq_device *kirq) ··· 84 83 /* nothing to do here */ 85 84 } 86 85 87 - static void keystone_irq_handler(struct irq_desc *desc) 86 + static irqreturn_t keystone_irq_handler(int irq, void *keystone_irq) 88 87 { 89 - unsigned int irq = irq_desc_get_irq(desc); 90 - struct keystone_irq_device *kirq = irq_desc_get_handler_data(desc); 88 + struct keystone_irq_device *kirq = keystone_irq; 89 + unsigned long wa_lock_flags; 91 90 unsigned long pending; 92 91 int src, virq; 93 92 94 93 dev_dbg(kirq->dev, "start irq %d\n", irq); 95 - 96 - chained_irq_enter(irq_desc_get_chip(desc), desc); 97 94 98 95 pending = keystone_irq_readl(kirq); 99 96 keystone_irq_writel(kirq, pending); ··· 110 111 if (!virq) 111 112 dev_warn(kirq->dev, "spurious irq detected hwirq %d, virq %d\n", 112 113 src, virq); 114 + raw_spin_lock_irqsave(&kirq->wa_lock, wa_lock_flags); 113 115 generic_handle_irq(virq); 116 + raw_spin_unlock_irqrestore(&kirq->wa_lock, 117 + wa_lock_flags); 114 118 } 115 119 } 116 120 117 - chained_irq_exit(irq_desc_get_chip(desc), desc); 118 - 119 121 dev_dbg(kirq->dev, "end irq %d\n", irq); 122 + return IRQ_HANDLED; 120 123 } 121 124 122 125 static int keystone_irq_map(struct irq_domain *h, unsigned int virq, ··· 183 182 return -ENODEV; 184 183 } 185 184 185 + raw_spin_lock_init(&kirq->wa_lock); 186 + 186 187 platform_set_drvdata(pdev, kirq); 187 188 188 - irq_set_chained_handler_and_data(kirq->irq, keystone_irq_handler, kirq); 189 + ret = request_irq(kirq->irq, keystone_irq_handler, 190 + 0, dev_name(dev), kirq); 191 + if (ret) { 192 + irq_domain_remove(kirq->irqd); 193 + return ret; 194 + } 189 195 190 196 /* clear all source bits */ 191 197 keystone_irq_writel(kirq, ~0x0); ··· 206 198 { 207 199 struct keystone_irq_device *kirq = platform_get_drvdata(pdev); 208 200 int hwirq; 201 + 202 + free_irq(kirq->irq, kirq); 209 203 210 204 for (hwirq = 0; hwirq < KEYSTONE_N_IRQ; hwirq++) 211 205 irq_dispose_mapping(irq_find_mapping(kirq->irqd, hwirq));
+4
drivers/irqchip/irq-mxs.c
··· 131 131 .irq_ack = icoll_ack_irq, 132 132 .irq_mask = icoll_mask_irq, 133 133 .irq_unmask = icoll_unmask_irq, 134 + .flags = IRQCHIP_MASK_ON_SUSPEND | 135 + IRQCHIP_SKIP_SET_WAKE, 134 136 }; 135 137 136 138 static struct irq_chip asm9260_icoll_chip = { 137 139 .irq_ack = icoll_ack_irq, 138 140 .irq_mask = asm9260_mask_irq, 139 141 .irq_unmask = asm9260_unmask_irq, 142 + .flags = IRQCHIP_MASK_ON_SUSPEND | 143 + IRQCHIP_SKIP_SET_WAKE, 140 144 }; 141 145 142 146 asmlinkage void __exception_irq_entry icoll_handle_irq(struct pt_regs *regs)
+4 -4
drivers/md/dm-crypt.c
··· 1534 1534 return PTR_ERR(key); 1535 1535 } 1536 1536 1537 - rcu_read_lock(); 1537 + down_read(&key->sem); 1538 1538 1539 1539 ukp = user_key_payload(key); 1540 1540 if (!ukp) { 1541 - rcu_read_unlock(); 1541 + up_read(&key->sem); 1542 1542 key_put(key); 1543 1543 kzfree(new_key_string); 1544 1544 return -EKEYREVOKED; 1545 1545 } 1546 1546 1547 1547 if (cc->key_size != ukp->datalen) { 1548 - rcu_read_unlock(); 1548 + up_read(&key->sem); 1549 1549 key_put(key); 1550 1550 kzfree(new_key_string); 1551 1551 return -EINVAL; ··· 1553 1553 1554 1554 memcpy(cc->key, ukp->data, cc->key_size); 1555 1555 1556 - rcu_read_unlock(); 1556 + up_read(&key->sem); 1557 1557 key_put(key); 1558 1558 1559 1559 /* clear the flag since following operations may invalidate previously valid key */
+2 -2
drivers/md/dm-mpath.c
··· 427 427 unsigned long flags; 428 428 struct priority_group *pg; 429 429 struct pgpath *pgpath; 430 - bool bypassed = true; 430 + unsigned bypassed = 1; 431 431 432 432 if (!atomic_read(&m->nr_valid_paths)) { 433 433 clear_bit(MPATHF_QUEUE_IO, &m->flags); ··· 466 466 */ 467 467 do { 468 468 list_for_each_entry(pg, &m->priority_groups, list) { 469 - if (pg->bypassed == bypassed) 469 + if (pg->bypassed == !!bypassed) 470 470 continue; 471 471 pgpath = choose_path_in_pg(m, pg, nr_bytes); 472 472 if (!IS_ERR_OR_NULL(pgpath)) {
+4
drivers/md/dm-rq.c
··· 779 779 int srcu_idx; 780 780 struct dm_table *map = dm_get_live_table(md, &srcu_idx); 781 781 782 + if (unlikely(!map)) { 783 + dm_put_live_table(md, srcu_idx); 784 + return; 785 + } 782 786 ti = dm_table_find_target(map, pos); 783 787 dm_put_live_table(md, srcu_idx); 784 788 }
+1 -1
drivers/media/cec/cec-adap.c
··· 1206 1206 las->log_addr[i] = CEC_LOG_ADDR_INVALID; 1207 1207 if (last_la == CEC_LOG_ADDR_INVALID || 1208 1208 last_la == CEC_LOG_ADDR_UNREGISTERED || 1209 - !(last_la & type2mask[type])) 1209 + !((1 << last_la) & type2mask[type])) 1210 1210 last_la = la_list[0]; 1211 1211 1212 1212 err = cec_config_log_addr(adap, i, last_la);
+25 -7
drivers/mmc/host/mmci.c
··· 1023 1023 if (!host->busy_status && busy_resp && 1024 1024 !(status & (MCI_CMDCRCFAIL|MCI_CMDTIMEOUT)) && 1025 1025 (readl(base + MMCISTATUS) & host->variant->busy_detect_flag)) { 1026 - /* Unmask the busy IRQ */ 1026 + 1027 + /* Clear the busy start IRQ */ 1028 + writel(host->variant->busy_detect_mask, 1029 + host->base + MMCICLEAR); 1030 + 1031 + /* Unmask the busy end IRQ */ 1027 1032 writel(readl(base + MMCIMASK0) | 1028 1033 host->variant->busy_detect_mask, 1029 1034 base + MMCIMASK0); ··· 1043 1038 1044 1039 /* 1045 1040 * At this point we are not busy with a command, we have 1046 - * not received a new busy request, mask the busy IRQ and 1047 - * fall through to process the IRQ. 1041 + * not received a new busy request, clear and mask the busy 1042 + * end IRQ and fall through to process the IRQ. 1048 1043 */ 1049 1044 if (host->busy_status) { 1045 + 1046 + writel(host->variant->busy_detect_mask, 1047 + host->base + MMCICLEAR); 1048 + 1050 1049 writel(readl(base + MMCIMASK0) & 1051 1050 ~host->variant->busy_detect_mask, 1052 1051 base + MMCIMASK0); ··· 1292 1283 } 1293 1284 1294 1285 /* 1295 - * We intentionally clear the MCI_ST_CARDBUSY IRQ here (if it's 1296 - * enabled) since the HW seems to be triggering the IRQ on both 1297 - * edges while monitoring DAT0 for busy completion. 1286 + * We intentionally clear the MCI_ST_CARDBUSY IRQ (if it's 1287 + * enabled) in mmci_cmd_irq() function where ST Micro busy 1288 + * detection variant is handled. Considering the HW seems to be 1289 + * triggering the IRQ on both edges while monitoring DAT0 for 1290 + * busy completion and that same status bit is used to monitor 1291 + * start and end of busy detection, special care must be taken 1292 + * to make sure that both start and end interrupts are always 1293 + * cleared one after the other. 1298 1294 */ 1299 1295 status &= readl(host->base + MMCIMASK0); 1300 - writel(status, host->base + MMCICLEAR); 1296 + if (host->variant->busy_detect) 1297 + writel(status & ~host->variant->busy_detect_mask, 1298 + host->base + MMCICLEAR); 1299 + else 1300 + writel(status, host->base + MMCICLEAR); 1301 1301 1302 1302 dev_dbg(mmc_dev(host->mmc), "irq0 (data+cmd) %08x\n", status); 1303 1303
+2 -1
drivers/mmc/host/sdhci.c
··· 2733 2733 if (intmask & SDHCI_INT_RETUNE) 2734 2734 mmc_retune_needed(host->mmc); 2735 2735 2736 - if (intmask & SDHCI_INT_CARD_INT) { 2736 + if ((intmask & SDHCI_INT_CARD_INT) && 2737 + (host->ier & SDHCI_INT_CARD_INT)) { 2737 2738 sdhci_enable_sdio_irq_nolock(host, false); 2738 2739 host->thread_isr |= SDHCI_INT_CARD_INT; 2739 2740 result = IRQ_WAKE_THREAD;
+96 -12
drivers/net/ethernet/cavium/thunder/thunder_bgx.c
··· 31 31 u8 lmac_type; 32 32 u8 lane_to_sds; 33 33 bool use_training; 34 + bool autoneg; 34 35 bool link_up; 35 36 int lmacid; /* ID within BGX */ 36 37 int lmacid_bd; /* ID on board */ ··· 462 461 /* power down, reset autoneg, autoneg enable */ 463 462 cfg = bgx_reg_read(bgx, lmacid, BGX_GMP_PCS_MRX_CTL); 464 463 cfg &= ~PCS_MRX_CTL_PWR_DN; 465 - cfg |= (PCS_MRX_CTL_RST_AN | PCS_MRX_CTL_AN_EN); 464 + cfg |= PCS_MRX_CTL_RST_AN; 465 + if (lmac->phydev) { 466 + cfg |= PCS_MRX_CTL_AN_EN; 467 + } else { 468 + /* In scenarios where PHY driver is not present or it's a 469 + * non-standard PHY, FW sets AN_EN to inform Linux driver 470 + * to do auto-neg and link polling or not. 471 + */ 472 + if (cfg & PCS_MRX_CTL_AN_EN) 473 + lmac->autoneg = true; 474 + } 466 475 bgx_reg_write(bgx, lmacid, BGX_GMP_PCS_MRX_CTL, cfg); 467 476 468 477 if (lmac->lmac_type == BGX_MODE_QSGMII) { ··· 483 472 return 0; 484 473 } 485 474 486 - if (lmac->lmac_type == BGX_MODE_SGMII) { 475 + if ((lmac->lmac_type == BGX_MODE_SGMII) && lmac->phydev) { 487 476 if (bgx_poll_reg(bgx, lmacid, BGX_GMP_PCS_MRX_STATUS, 488 477 PCS_MRX_STATUS_AN_CPT, false)) { 489 478 dev_err(&bgx->pdev->dev, "BGX AN_CPT not completed\n"); ··· 689 678 return -1; 690 679 } 691 680 681 + static void bgx_poll_for_sgmii_link(struct lmac *lmac) 682 + { 683 + u64 pcs_link, an_result; 684 + u8 speed; 685 + 686 + pcs_link = bgx_reg_read(lmac->bgx, lmac->lmacid, 687 + BGX_GMP_PCS_MRX_STATUS); 688 + 689 + /*Link state bit is sticky, read it again*/ 690 + if (!(pcs_link & PCS_MRX_STATUS_LINK)) 691 + pcs_link = bgx_reg_read(lmac->bgx, lmac->lmacid, 692 + BGX_GMP_PCS_MRX_STATUS); 693 + 694 + if (bgx_poll_reg(lmac->bgx, lmac->lmacid, BGX_GMP_PCS_MRX_STATUS, 695 + PCS_MRX_STATUS_AN_CPT, false)) { 696 + lmac->link_up = false; 697 + lmac->last_speed = SPEED_UNKNOWN; 698 + lmac->last_duplex = DUPLEX_UNKNOWN; 699 + goto next_poll; 700 + } 701 + 702 + lmac->link_up = ((pcs_link & PCS_MRX_STATUS_LINK) != 0) ? true : false; 703 + an_result = bgx_reg_read(lmac->bgx, lmac->lmacid, 704 + BGX_GMP_PCS_ANX_AN_RESULTS); 705 + 706 + speed = (an_result >> 3) & 0x3; 707 + lmac->last_duplex = (an_result >> 1) & 0x1; 708 + switch (speed) { 709 + case 0: 710 + lmac->last_speed = 10; 711 + break; 712 + case 1: 713 + lmac->last_speed = 100; 714 + break; 715 + case 2: 716 + lmac->last_speed = 1000; 717 + break; 718 + default: 719 + lmac->link_up = false; 720 + lmac->last_speed = SPEED_UNKNOWN; 721 + lmac->last_duplex = DUPLEX_UNKNOWN; 722 + break; 723 + } 724 + 725 + next_poll: 726 + 727 + if (lmac->last_link != lmac->link_up) { 728 + if (lmac->link_up) 729 + bgx_sgmii_change_link_state(lmac); 730 + lmac->last_link = lmac->link_up; 731 + } 732 + 733 + queue_delayed_work(lmac->check_link, &lmac->dwork, HZ * 3); 734 + } 735 + 692 736 static void bgx_poll_for_link(struct work_struct *work) 693 737 { 694 738 struct lmac *lmac; 695 739 u64 spu_link, smu_link; 696 740 697 741 lmac = container_of(work, struct lmac, dwork.work); 742 + if (lmac->is_sgmii) { 743 + bgx_poll_for_sgmii_link(lmac); 744 + return; 745 + } 698 746 699 747 /* Receive link is latching low. Force it high and verify it */ 700 748 bgx_reg_modify(lmac->bgx, lmac->lmacid, ··· 845 775 (lmac->lmac_type != BGX_MODE_XLAUI) && 846 776 (lmac->lmac_type != BGX_MODE_40G_KR) && 847 777 (lmac->lmac_type != BGX_MODE_10G_KR)) { 848 - if (!lmac->phydev) 849 - return -ENODEV; 850 - 778 + if (!lmac->phydev) { 779 + if (lmac->autoneg) { 780 + bgx_reg_write(bgx, lmacid, 781 + BGX_GMP_PCS_LINKX_TIMER, 782 + PCS_LINKX_TIMER_COUNT); 783 + goto poll; 784 + } else { 785 + /* Default to below link speed and duplex */ 786 + lmac->link_up = true; 787 + lmac->last_speed = 1000; 788 + lmac->last_duplex = 1; 789 + bgx_sgmii_change_link_state(lmac); 790 + return 0; 791 + } 792 + } 851 793 lmac->phydev->dev_flags = 0; 852 794 853 795 if (phy_connect_direct(&lmac->netdev, lmac->phydev, ··· 868 786 return -ENODEV; 869 787 870 788 phy_start_aneg(lmac->phydev); 871 - } else { 872 - lmac->check_link = alloc_workqueue("check_link", WQ_UNBOUND | 873 - WQ_MEM_RECLAIM, 1); 874 - if (!lmac->check_link) 875 - return -ENOMEM; 876 - INIT_DELAYED_WORK(&lmac->dwork, bgx_poll_for_link); 877 - queue_delayed_work(lmac->check_link, &lmac->dwork, 0); 789 + return 0; 878 790 } 791 + 792 + poll: 793 + lmac->check_link = alloc_workqueue("check_link", WQ_UNBOUND | 794 + WQ_MEM_RECLAIM, 1); 795 + if (!lmac->check_link) 796 + return -ENOMEM; 797 + INIT_DELAYED_WORK(&lmac->dwork, bgx_poll_for_link); 798 + queue_delayed_work(lmac->check_link, &lmac->dwork, 0); 879 799 880 800 return 0; 881 801 }
+5
drivers/net/ethernet/cavium/thunder/thunder_bgx.h
··· 153 153 #define PCS_MRX_CTL_LOOPBACK1 BIT_ULL(14) 154 154 #define PCS_MRX_CTL_RESET BIT_ULL(15) 155 155 #define BGX_GMP_PCS_MRX_STATUS 0x30008 156 + #define PCS_MRX_STATUS_LINK BIT_ULL(2) 156 157 #define PCS_MRX_STATUS_AN_CPT BIT_ULL(5) 158 + #define BGX_GMP_PCS_ANX_ADV 0x30010 157 159 #define BGX_GMP_PCS_ANX_AN_RESULTS 0x30020 160 + #define BGX_GMP_PCS_LINKX_TIMER 0x30040 161 + #define PCS_LINKX_TIMER_COUNT 0x1E84 158 162 #define BGX_GMP_PCS_SGM_AN_ADV 0x30068 159 163 #define BGX_GMP_PCS_MISCX_CTL 0x30078 164 + #define PCS_MISC_CTL_MODE BIT_ULL(8) 160 165 #define PCS_MISC_CTL_DISP_EN BIT_ULL(13) 161 166 #define PCS_MISC_CTL_GMX_ENO BIT_ULL(11) 162 167 #define PCS_MISC_CTL_SAMP_PT_MASK 0x7Full
+2 -6
drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
··· 1014 1014 1015 1015 static inline void dsaf_write_reg(void __iomem *base, u32 reg, u32 value) 1016 1016 { 1017 - u8 __iomem *reg_addr = ACCESS_ONCE(base); 1018 - 1019 - writel(value, reg_addr + reg); 1017 + writel(value, base + reg); 1020 1018 } 1021 1019 1022 1020 #define dsaf_write_dev(a, reg, value) \ ··· 1022 1024 1023 1025 static inline u32 dsaf_read_reg(u8 __iomem *base, u32 reg) 1024 1026 { 1025 - u8 __iomem *reg_addr = ACCESS_ONCE(base); 1026 - 1027 - return readl(reg_addr + reg); 1027 + return readl(base + reg); 1028 1028 } 1029 1029 1030 1030 static inline void dsaf_write_syscon(struct regmap *base, u32 reg, u32 value)
+1 -1
drivers/net/ethernet/hisilicon/hns/hns_enet.c
··· 305 305 struct hns_nic_ring_data *ring_data) 306 306 { 307 307 struct hns_nic_priv *priv = netdev_priv(ndev); 308 - struct device *dev = priv->dev; 309 308 struct hnae_ring *ring = ring_data->ring; 309 + struct device *dev = ring_to_dev(ring); 310 310 struct netdev_queue *dev_queue; 311 311 struct skb_frag_struct *frag; 312 312 int buf_num;
+2 -2
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 1099 1099 memcpy(&new_prof, priv->prof, sizeof(struct mlx4_en_port_profile)); 1100 1100 new_prof.tx_ring_size = tx_size; 1101 1101 new_prof.rx_ring_size = rx_size; 1102 - err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof); 1102 + err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof, true); 1103 1103 if (err) 1104 1104 goto out; 1105 1105 ··· 1774 1774 new_prof.tx_ring_num[TX_XDP] = xdp_count; 1775 1775 new_prof.rx_ring_num = channel->rx_count; 1776 1776 1777 - err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof); 1777 + err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof, true); 1778 1778 if (err) 1779 1779 goto out; 1780 1780
+25 -10
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 2042 2042 if (priv->tx_cq[t] && priv->tx_cq[t][i]) 2043 2043 mlx4_en_destroy_cq(priv, &priv->tx_cq[t][i]); 2044 2044 } 2045 + kfree(priv->tx_ring[t]); 2046 + kfree(priv->tx_cq[t]); 2045 2047 } 2046 2048 2047 2049 for (i = 0; i < priv->rx_ring_num; i++) { ··· 2186 2184 2187 2185 int mlx4_en_try_alloc_resources(struct mlx4_en_priv *priv, 2188 2186 struct mlx4_en_priv *tmp, 2189 - struct mlx4_en_port_profile *prof) 2187 + struct mlx4_en_port_profile *prof, 2188 + bool carry_xdp_prog) 2190 2189 { 2191 - int t; 2190 + struct bpf_prog *xdp_prog; 2191 + int i, t; 2192 2192 2193 2193 mlx4_en_copy_priv(tmp, priv, prof); 2194 2194 ··· 2204 2200 } 2205 2201 return -ENOMEM; 2206 2202 } 2203 + 2204 + /* All rx_rings has the same xdp_prog. Pick the first one. */ 2205 + xdp_prog = rcu_dereference_protected( 2206 + priv->rx_ring[0]->xdp_prog, 2207 + lockdep_is_held(&priv->mdev->state_lock)); 2208 + 2209 + if (xdp_prog && carry_xdp_prog) { 2210 + xdp_prog = bpf_prog_add(xdp_prog, tmp->rx_ring_num); 2211 + if (IS_ERR(xdp_prog)) { 2212 + mlx4_en_free_resources(tmp); 2213 + return PTR_ERR(xdp_prog); 2214 + } 2215 + for (i = 0; i < tmp->rx_ring_num; i++) 2216 + rcu_assign_pointer(tmp->rx_ring[i]->xdp_prog, 2217 + xdp_prog); 2218 + } 2219 + 2207 2220 return 0; 2208 2221 } ··· 2235 2214 { 2236 2215 struct mlx4_en_priv *priv = netdev_priv(dev); 2237 2216 struct mlx4_en_dev *mdev = priv->mdev; 2238 - int t; 2239 2217 2240 2218 en_dbg(DRV, priv, "Destroying netdev on port:%d\n", priv->port); 2241 2219 ··· 2267 2247 2268 2248 mlx4_en_free_resources(priv); 2269 2249 mutex_unlock(&mdev->state_lock); 2270 - 2271 - for (t = 0; t < MLX4_EN_NUM_TX_TYPES; t++) { 2272 - kfree(priv->tx_ring[t]); 2273 - kfree(priv->tx_cq[t]); 2274 - } 2275 2250 2276 2251 free_netdev(dev); 2277 2252 } ··· 2770 2755 en_warn(priv, "Reducing the number of TX rings, to not exceed the max total rings number.\n"); 2771 2756 } 2772 2757 2773 - err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof); 2758 + err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof, false); 2774 2759 if (err) { 2775 2760 if (prog) 2776 2761 bpf_prog_sub(prog, priv->rx_ring_num - 1); ··· 3514 3499 memcpy(&new_prof, priv->prof, sizeof(struct mlx4_en_port_profile)); 3515 3500 memcpy(&new_prof.hwtstamp_config, &ts_config, sizeof(ts_config)); 3516 3501 3517 - err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof); 3502 + err = mlx4_en_try_alloc_resources(priv, tmp, &new_prof, true); 3518 3503 if (err) 3519 3504 goto out; 3520 3505
+4 -1
drivers/net/ethernet/mellanox/mlx4/en_rx.c
··· 514 514 return; 515 515 516 516 for (ring = 0; ring < priv->rx_ring_num; ring++) { 517 - if (mlx4_en_is_ring_empty(priv->rx_ring[ring])) 517 + if (mlx4_en_is_ring_empty(priv->rx_ring[ring])) { 518 + local_bh_disable(); 518 519 napi_reschedule(&priv->rx_cq[ring]->napi); 520 + local_bh_enable(); 521 + } 519 522 } 520 523 } 521 524
+2 -1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 679 679 680 680 int mlx4_en_try_alloc_resources(struct mlx4_en_priv *priv, 681 681 struct mlx4_en_priv *tmp, 682 - struct mlx4_en_port_profile *prof); 682 + struct mlx4_en_port_profile *prof, 683 + bool carry_xdp_prog); 683 684 void mlx4_en_safe_replace_resources(struct mlx4_en_priv *priv, 684 685 struct mlx4_en_priv *tmp); 685 686
+2 -2
drivers/net/hamradio/mkiss.c
··· 648 648 { 649 649 /* Finish setting up the DEVICE info. */ 650 650 dev->mtu = AX_MTU; 651 - dev->hard_header_len = 0; 652 - dev->addr_len = 0; 651 + dev->hard_header_len = AX25_MAX_HEADER_LEN; 652 + dev->addr_len = AX25_ADDR_LEN; 653 653 dev->type = ARPHRD_AX25; 654 654 dev->tx_queue_len = 10; 655 655 dev->header_ops = &ax25_header_ops;
+6
drivers/net/hyperv/netvsc.c
··· 1295 1295 ndev = hv_get_drvdata(device); 1296 1296 buffer = get_per_channel_state(channel); 1297 1297 1298 + /* commit_rd_index() -> hv_signal_on_read() needs this. */ 1299 + init_cached_read_index(channel); 1300 + 1298 1301 do { 1299 1302 desc = get_next_pkt_raw(channel); 1300 1303 if (desc != NULL) { ··· 1350 1347 1351 1348 bufferlen = bytes_recvd; 1352 1349 } 1350 + 1351 + init_cached_read_index(channel); 1352 + 1353 1353 } while (1); 1354 1354 1355 1355 if (bufferlen > NETVSC_PACKET_SIZE)
+1
drivers/net/loopback.c
··· 164 164 { 165 165 dev->mtu = 64 * 1024; 166 166 dev->hard_header_len = ETH_HLEN; /* 14 */ 167 + dev->min_header_len = ETH_HLEN; /* 14 */ 167 168 dev->addr_len = ETH_ALEN; /* 6 */ 168 169 dev->type = ARPHRD_LOOPBACK; /* 0x0001*/ 169 170 dev->flags = IFF_LOOPBACK;
+2 -2
drivers/net/macvtap.c
··· 681 681 size_t linear; 682 682 683 683 if (q->flags & IFF_VNET_HDR) { 684 - vnet_hdr_len = q->vnet_hdr_sz; 684 + vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz); 685 685 686 686 err = -EINVAL; 687 687 if (len < vnet_hdr_len) ··· 820 820 821 821 if (q->flags & IFF_VNET_HDR) { 822 822 struct virtio_net_hdr vnet_hdr; 823 - vnet_hdr_len = q->vnet_hdr_sz; 823 + vnet_hdr_len = READ_ONCE(q->vnet_hdr_sz); 824 824 if (iov_iter_count(iter) < vnet_hdr_len) 825 825 return -EINVAL; 826 826
+2 -4
drivers/net/phy/mdio-bcm-iproc.c
··· 81 81 if (rc) 82 82 return rc; 83 83 84 - iproc_mdio_config_clk(priv->base); 85 - 86 84 /* Prepare the read operation */ 87 85 cmd = (MII_DATA_TA_VAL << MII_DATA_TA_SHIFT) | 88 86 (reg << MII_DATA_RA_SHIFT) | ··· 109 111 rc = iproc_mdio_wait_for_idle(priv->base); 110 112 if (rc) 111 113 return rc; 112 - 113 - iproc_mdio_config_clk(priv->base); 114 114 115 115 /* Prepare the write operation */ 116 116 cmd = (MII_DATA_TA_VAL << MII_DATA_TA_SHIFT) | ··· 158 162 bus->parent = &pdev->dev; 159 163 bus->read = iproc_mdio_read; 160 164 bus->write = iproc_mdio_write; 165 + 166 + iproc_mdio_config_clk(priv->base); 161 167 162 168 rc = of_mdiobus_register(bus, pdev->dev.of_node); 163 169 if (rc) {
+20 -1
drivers/net/phy/phy_device.c
··· 908 908 struct module *ndev_owner = dev->dev.parent->driver->owner; 909 909 struct mii_bus *bus = phydev->mdio.bus; 910 910 struct device *d = &phydev->mdio.dev; 911 + bool using_genphy = false; 911 912 int err; 912 913 913 914 /* For Ethernet device drivers that register their own MDIO bus, we ··· 934 933 d->driver = 935 934 &genphy_driver[GENPHY_DRV_1G].mdiodrv.driver; 936 935 936 + using_genphy = true; 937 + } 938 + 939 + if (!try_module_get(d->driver->owner)) { 940 + dev_err(&dev->dev, "failed to get the device driver module\n"); 941 + err = -EIO; 942 + goto error_put_device; 943 + } 944 + 945 + if (using_genphy) { 937 946 err = d->driver->probe(d); 938 947 if (err >= 0) 939 948 err = device_bind_driver(d); 940 949 941 950 if (err) 942 - goto error; 951 + goto error_module_put; 943 952 } 944 953 945 954 if (phydev->attached_dev) { ··· 986 975 return err; 987 976 988 977 error: 978 + /* phy_detach() does all of the cleanup below */ 989 979 phy_detach(phydev); 980 + return err; 981 + 982 + error_module_put: 983 + module_put(d->driver->owner); 984 + error_put_device: 990 985 put_device(d); 991 986 if (ndev_owner != bus->owner) 992 987 module_put(bus->owner); ··· 1055 1038 phy_suspend(phydev); 1056 1039 1057 1040 phy_led_triggers_unregister(phydev); 1041 + 1042 + module_put(phydev->mdio.dev.driver->owner); 1058 1043 1059 1044 /* If the device had no specific driver before (i.e. - it 1060 1045 * was using the generic driver), we unbind the device
+6 -4
drivers/net/tun.c
··· 1170 1170 } 1171 1171 1172 1172 if (tun->flags & IFF_VNET_HDR) { 1173 - if (len < tun->vnet_hdr_sz) 1173 + int vnet_hdr_sz = READ_ONCE(tun->vnet_hdr_sz); 1174 + 1175 + if (len < vnet_hdr_sz) 1174 1176 return -EINVAL; 1175 - len -= tun->vnet_hdr_sz; 1177 + len -= vnet_hdr_sz; 1176 1178 1177 1179 if (!copy_from_iter_full(&gso, sizeof(gso), from)) 1178 1180 return -EFAULT; ··· 1185 1183 1186 1184 if (tun16_to_cpu(tun, gso.hdr_len) > len) 1187 1185 return -EINVAL; 1188 - iov_iter_advance(from, tun->vnet_hdr_sz - sizeof(gso)); 1186 + iov_iter_advance(from, vnet_hdr_sz - sizeof(gso)); 1189 1187 } 1190 1188 1191 1189 if ((tun->flags & TUN_TYPE_MASK) == IFF_TAP) { ··· 1337 1335 vlan_hlen = VLAN_HLEN; 1338 1336 1339 1337 if (tun->flags & IFF_VNET_HDR) 1340 - vnet_hdr_sz = tun->vnet_hdr_sz; 1338 + vnet_hdr_sz = READ_ONCE(tun->vnet_hdr_sz); 1341 1339 1342 1340 total = skb->len + vlan_hlen + vnet_hdr_sz; 1343 1341
+34 -22
drivers/net/usb/catc.c
··· 776 776 struct net_device *netdev; 777 777 struct catc *catc; 778 778 u8 broadcast[ETH_ALEN]; 779 - int i, pktsz; 779 + int pktsz, ret; 780 780 781 781 if (usb_set_interface(usbdev, 782 782 intf->altsetting->desc.bInterfaceNumber, 1)) { ··· 811 811 if ((!catc->ctrl_urb) || (!catc->tx_urb) || 812 812 (!catc->rx_urb) || (!catc->irq_urb)) { 813 813 dev_err(&intf->dev, "No free urbs available.\n"); 814 - usb_free_urb(catc->ctrl_urb); 815 - usb_free_urb(catc->tx_urb); 816 - usb_free_urb(catc->rx_urb); 817 - usb_free_urb(catc->irq_urb); 818 - free_netdev(netdev); 819 - return -ENOMEM; 814 + ret = -ENOMEM; 815 + goto fail_free; 820 816 } 821 817 822 818 /* The F5U011 has the same vendor/product as the netmate but a device version of 0x130 */ ··· 840 844 catc->irq_buf, 2, catc_irq_done, catc, 1); 841 845 842 846 if (!catc->is_f5u011) { 847 + u32 *buf; 848 + int i; 849 + 843 850 dev_dbg(dev, "Checking memory size\n"); 844 851 845 - i = 0x12345678; 846 - catc_write_mem(catc, 0x7a80, &i, 4); 847 - i = 0x87654321; 848 - catc_write_mem(catc, 0xfa80, &i, 4); 849 - catc_read_mem(catc, 0x7a80, &i, 4); 852 + buf = kmalloc(4, GFP_KERNEL); 853 + if (!buf) { 854 + ret = -ENOMEM; 855 + goto fail_free; 856 + } 857 + 858 + *buf = 0x12345678; 859 + catc_write_mem(catc, 0x7a80, buf, 4); 860 + *buf = 0x87654321; 861 + catc_write_mem(catc, 0xfa80, buf, 4); 862 + catc_read_mem(catc, 0x7a80, buf, 4); 850 863 851 - switch (i) { 864 + switch (*buf) { 852 865 case 0x12345678: 853 866 catc_set_reg(catc, TxBufCount, 8); 854 867 catc_set_reg(catc, RxBufCount, 32); ··· 872 867 dev_dbg(dev, "32k Memory\n"); 873 868 break; 874 869 } 870 + 871 + kfree(buf); 875 872 876 873 dev_dbg(dev, "Getting MAC from SEEROM.\n"); 877 874 ··· 920 913 usb_set_intfdata(intf, catc); 921 914 922 915 SET_NETDEV_DEV(netdev, &intf->dev); 923 - if (register_netdev(netdev) != 0) { 924 - usb_set_intfdata(intf, NULL); 925 - usb_free_urb(catc->ctrl_urb); 926 - usb_free_urb(catc->tx_urb); 927 - usb_free_urb(catc->rx_urb); 928 - usb_free_urb(catc->irq_urb); 929 - free_netdev(netdev); 930 - return -EIO; 931 - } 916 + ret = register_netdev(netdev); 917 + if (ret) 918 + goto fail_clear_intfdata; 919 + 932 920 return 0; 921 + 922 + fail_clear_intfdata: 923 + usb_set_intfdata(intf, NULL); 924 + fail_free: 925 + usb_free_urb(catc->ctrl_urb); 926 + usb_free_urb(catc->tx_urb); 927 + usb_free_urb(catc->rx_urb); 928 + usb_free_urb(catc->irq_urb); 929 + free_netdev(netdev); 930 + return ret; 933 931 } 934 932 935 933 static void catc_disconnect(struct usb_interface *intf)
+25 -4
drivers/net/usb/pegasus.c
··· 126 126 127 127 static int get_registers(pegasus_t *pegasus, __u16 indx, __u16 size, void *data) 128 128 { 129 + u8 *buf; 129 130 int ret; 131 + 132 + buf = kmalloc(size, GFP_NOIO); 133 + if (!buf) 134 + return -ENOMEM; 130 135 131 136 ret = usb_control_msg(pegasus->usb, usb_rcvctrlpipe(pegasus->usb, 0), 132 137 PEGASUS_REQ_GET_REGS, PEGASUS_REQT_READ, 0, 133 - indx, data, size, 1000); 138 + indx, buf, size, 1000); 134 139 if (ret < 0) 135 140 netif_dbg(pegasus, drv, pegasus->net, 136 141 "%s returned %d\n", __func__, ret); 142 + else if (ret <= size) 143 + memcpy(data, buf, ret); 144 + kfree(buf); 137 145 return ret; 138 146 } 139 147 140 - static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size, void *data) 148 + static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size, 149 + const void *data) 141 150 { 151 + u8 *buf; 142 152 int ret; 153 + 154 + buf = kmemdup(data, size, GFP_NOIO); 155 + if (!buf) 156 + return -ENOMEM; 143 157 144 158 ret = usb_control_msg(pegasus->usb, usb_sndctrlpipe(pegasus->usb, 0), 145 159 PEGASUS_REQ_SET_REGS, PEGASUS_REQT_WRITE, 0, 146 - indx, data, size, 100); 160 + indx, buf, size, 100); 147 161 if (ret < 0) 148 162 netif_dbg(pegasus, drv, pegasus->net, 149 163 "%s returned %d\n", __func__, ret); 164 + kfree(buf); 150 165 return ret; 151 166 } 152 167 153 168 static int set_register(pegasus_t *pegasus, __u16 indx, __u8 data) 154 169 { 170 + u8 *buf; 155 171 int ret; 172 + 173 + buf = kmemdup(&data, 1, GFP_NOIO); 174 + if (!buf) 175 + return -ENOMEM; 156 176 157 177 ret = usb_control_msg(pegasus->usb, usb_sndctrlpipe(pegasus->usb, 0), 158 178 PEGASUS_REQ_SET_REG, PEGASUS_REQT_WRITE, data, 159 - indx, &data, 1, 1000); 179 + indx, buf, 1, 1000); 160 180 if (ret < 0) 161 181 netif_dbg(pegasus, drv, pegasus->net, 162 182 "%s returned %d\n", __func__, ret); 183 + kfree(buf); 163 184 return ret; 164 185 } 165 186
+27 -7
drivers/net/usb/rtl8150.c
··· 155 155 */ 156 156 static int get_registers(rtl8150_t * dev, u16 indx, u16 size, void *data) 157 157 { 158 - return usb_control_msg(dev->udev, usb_rcvctrlpipe(dev->udev, 0), 159 - RTL8150_REQ_GET_REGS, RTL8150_REQT_READ, 160 - indx, 0, data, size, 500); 158 + void *buf; 159 + int ret; 160 + 161 + buf = kmalloc(size, GFP_NOIO); 162 + if (!buf) 163 + return -ENOMEM; 164 + 165 + ret = usb_control_msg(dev->udev, usb_rcvctrlpipe(dev->udev, 0), 166 + RTL8150_REQ_GET_REGS, RTL8150_REQT_READ, 167 + indx, 0, buf, size, 500); 168 + if (ret > 0 && ret <= size) 169 + memcpy(data, buf, ret); 170 + kfree(buf); 171 + return ret; 161 172 } 162 173 163 - static int set_registers(rtl8150_t * dev, u16 indx, u16 size, void *data) 174 + static int set_registers(rtl8150_t * dev, u16 indx, u16 size, const void *data) 164 175 { 165 - return usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0), 166 - RTL8150_REQ_SET_REGS, RTL8150_REQT_WRITE, 167 - indx, 0, data, size, 500); 176 + void *buf; 177 + int ret; 178 + 179 + buf = kmemdup(data, size, GFP_NOIO); 180 + if (!buf) 181 + return -ENOMEM; 182 + 183 + ret = usb_control_msg(dev->udev, usb_sndctrlpipe(dev->udev, 0), 184 + RTL8150_REQ_SET_REGS, RTL8150_REQT_WRITE, 185 + indx, 0, buf, size, 500); 186 + kfree(buf); 187 + return ret; 168 188 } 169 189 170 190 static void async_set_reg_cb(struct urb *urb)
+80 -49
drivers/net/usb/sierra_net.c
··· 73 73 /* Private data structure */ 74 74 struct sierra_net_data { 75 75 76 - u8 ethr_hdr_tmpl[ETH_HLEN]; /* ethernet header template for rx'd pkts */ 77 - 78 76 u16 link_up; /* air link up or down */ 79 77 u8 tx_hdr_template[4]; /* part of HIP hdr for tx'd packets */ 80 78 ··· 120 122 121 123 /* LSI Protocol types */ 122 124 #define SIERRA_NET_PROTOCOL_UMTS 0x01 125 + #define SIERRA_NET_PROTOCOL_UMTS_DS 0x04 123 126 /* LSI Coverage */ 124 127 #define SIERRA_NET_COVERAGE_NONE 0x00 125 128 #define SIERRA_NET_COVERAGE_NOPACKET 0x01 ··· 128 129 /* LSI Session */ 129 130 #define SIERRA_NET_SESSION_IDLE 0x00 130 131 /* LSI Link types */ 131 - #define SIERRA_NET_AS_LINK_TYPE_IPv4 0x00 132 + #define SIERRA_NET_AS_LINK_TYPE_IPV4 0x00 133 + #define SIERRA_NET_AS_LINK_TYPE_IPV6 0x02 132 134 133 135 struct lsi_umts { 134 136 u8 protocol; ··· 137 137 __be16 length; 138 138 /* eventually use a union for the rest - assume umts for now */ 139 139 u8 coverage; 140 - u8 unused2[41]; 140 + u8 network_len; /* network name len */ 141 + u8 network[40]; /* network name (UCS2, bigendian) */ 141 142 u8 session_state; 142 143 u8 unused3[33]; 144 + } __packed; 145 + 146 + struct lsi_umts_single { 147 + struct lsi_umts lsi; 143 148 u8 link_type; 144 149 u8 pdp_addr_len; /* NW-supplied PDP address len */ 145 150 u8 pdp_addr[16]; /* NW-supplied PDP address (bigendian)) */ ··· 163 158 u8 reserved[8]; 164 159 } __packed; 165 160 161 + struct lsi_umts_dual { 162 + struct lsi_umts lsi; 163 + u8 pdp_addr4_len; /* NW-supplied PDP IPv4 address len */ 164 + u8 pdp_addr4[4]; /* NW-supplied PDP IPv4 address (bigendian)) */ 165 + u8 pdp_addr6_len; /* NW-supplied PDP IPv6 address len */ 166 + u8 pdp_addr6[16]; /* NW-supplied PDP IPv6 address (bigendian)) */ 167 + u8 unused4[23]; 168 + u8 dns1_addr4_len; /* NW-supplied 1st DNS v4 address len (bigendian) */ 169 + u8 dns1_addr4[4]; /* NW-supplied 1st DNS v4 address */ 170 + u8 dns1_addr6_len; /* NW-supplied 1st DNS v6 address len */ 171 + u8 
dns1_addr6[16]; /* NW-supplied 1st DNS v6 address (bigendian)*/ 172 + u8 dns2_addr4_len; /* NW-supplied 2nd DNS v4 address len (bigendian) */ 173 + u8 dns2_addr4[4]; /* NW-supplied 2nd DNS v4 address */ 174 + u8 dns2_addr6_len; /* NW-supplied 2nd DNS v6 address len */ 175 + u8 dns2_addr6[16]; /* NW-supplied 2nd DNS v6 address (bigendian)*/ 176 + u8 unused5[68]; 177 + } __packed; 178 + 166 179 #define SIERRA_NET_LSI_COMMON_LEN 4 167 - #define SIERRA_NET_LSI_UMTS_LEN (sizeof(struct lsi_umts)) 180 + #define SIERRA_NET_LSI_UMTS_LEN (sizeof(struct lsi_umts_single)) 168 181 #define SIERRA_NET_LSI_UMTS_STATUS_LEN \ 169 182 (SIERRA_NET_LSI_UMTS_LEN - SIERRA_NET_LSI_COMMON_LEN) 183 + #define SIERRA_NET_LSI_UMTS_DS_LEN (sizeof(struct lsi_umts_dual)) 184 + #define SIERRA_NET_LSI_UMTS_DS_STATUS_LEN \ 185 + (SIERRA_NET_LSI_UMTS_DS_LEN - SIERRA_NET_LSI_COMMON_LEN) 170 186 171 187 /* Forward definitions */ 172 188 static void sierra_sync_timer(unsigned long syncdata); ··· 216 190 dev->data[0] = (unsigned long)priv; 217 191 } 218 192 219 - /* is packet IPv4 */ 193 + /* is packet IPv4/IPv6 */ 220 194 static inline int is_ip(struct sk_buff *skb) 221 195 { 222 - return skb->protocol == cpu_to_be16(ETH_P_IP); 196 + return skb->protocol == cpu_to_be16(ETH_P_IP) || 197 + skb->protocol == cpu_to_be16(ETH_P_IPV6); 223 198 } 224 199 225 200 /* ··· 376 349 static int sierra_net_parse_lsi(struct usbnet *dev, char *data, int datalen) 377 350 { 378 351 struct lsi_umts *lsi = (struct lsi_umts *)data; 352 + u32 expected_length; 379 353 380 - if (datalen < sizeof(struct lsi_umts)) { 381 - netdev_err(dev->net, "%s: Data length %d, exp %Zu\n", 382 - __func__, datalen, 383 - sizeof(struct lsi_umts)); 354 + if (datalen < sizeof(struct lsi_umts_single)) { 355 + netdev_err(dev->net, "%s: Data length %d, exp >= %Zu\n", 356 + __func__, datalen, sizeof(struct lsi_umts_single)); 384 357 return -1; 385 - } 386 - 387 - if (lsi->length != cpu_to_be16(SIERRA_NET_LSI_UMTS_STATUS_LEN)) { 388 - 
netdev_err(dev->net, "%s: LSI_UMTS_STATUS_LEN %d, exp %u\n", 389 - __func__, be16_to_cpu(lsi->length), 390 - (u32)SIERRA_NET_LSI_UMTS_STATUS_LEN); 391 - return -1; 392 - } 393 - 394 - /* Validate the protocol - only support UMTS for now */ 395 - if (lsi->protocol != SIERRA_NET_PROTOCOL_UMTS) { 396 - netdev_err(dev->net, "Protocol unsupported, 0x%02x\n", 397 - lsi->protocol); 398 - return -1; 399 - } 400 - 401 - /* Validate the link type */ 402 - if (lsi->link_type != SIERRA_NET_AS_LINK_TYPE_IPv4) { 403 - netdev_err(dev->net, "Link type unsupported: 0x%02x\n", 404 - lsi->link_type); 405 - return -1; 406 - } 407 - 408 - /* Validate the coverage */ 409 - if (lsi->coverage == SIERRA_NET_COVERAGE_NONE 410 - || lsi->coverage == SIERRA_NET_COVERAGE_NOPACKET) { 411 - netdev_err(dev->net, "No coverage, 0x%02x\n", lsi->coverage); 412 - return 0; 413 358 } 414 359 415 360 /* Validate the session state */ 416 361 if (lsi->session_state == SIERRA_NET_SESSION_IDLE) { 417 362 netdev_err(dev->net, "Session idle, 0x%02x\n", 418 - lsi->session_state); 363 + lsi->session_state); 364 + return 0; 365 + } 366 + 367 + /* Validate the protocol - only support UMTS for now */ 368 + if (lsi->protocol == SIERRA_NET_PROTOCOL_UMTS) { 369 + struct lsi_umts_single *single = (struct lsi_umts_single *)lsi; 370 + 371 + /* Validate the link type */ 372 + if (single->link_type != SIERRA_NET_AS_LINK_TYPE_IPV4 && 373 + single->link_type != SIERRA_NET_AS_LINK_TYPE_IPV6) { 374 + netdev_err(dev->net, "Link type unsupported: 0x%02x\n", 375 + single->link_type); 376 + return -1; 377 + } 378 + expected_length = SIERRA_NET_LSI_UMTS_STATUS_LEN; 379 + } else if (lsi->protocol == SIERRA_NET_PROTOCOL_UMTS_DS) { 380 + expected_length = SIERRA_NET_LSI_UMTS_DS_STATUS_LEN; 381 + } else { 382 + netdev_err(dev->net, "Protocol unsupported, 0x%02x\n", 383 + lsi->protocol); 384 + return -1; 385 + } 386 + 387 + if (be16_to_cpu(lsi->length) != expected_length) { 388 + netdev_err(dev->net, "%s: LSI_UMTS_STATUS_LEN %d, exp 
%u\n", 389 + __func__, be16_to_cpu(lsi->length), expected_length); 390 + return -1; 391 + } 392 + 393 + /* Validate the coverage */ 394 + if (lsi->coverage == SIERRA_NET_COVERAGE_NONE || 395 + lsi->coverage == SIERRA_NET_COVERAGE_NOPACKET) { 396 + netdev_err(dev->net, "No coverage, 0x%02x\n", lsi->coverage); 419 397 return 0; 420 398 } 421 399 ··· 684 652 u8 numendpoints; 685 653 u16 fwattr = 0; 686 654 int status; 687 - struct ethhdr *eth; 688 655 struct sierra_net_data *priv; 689 656 static const u8 sync_tmplate[sizeof(priv->sync_msg)] = { 690 657 0x00, 0x00, SIERRA_NET_HIP_MSYNC_ID, 0x00}; ··· 720 689 /* change MAC addr to include, ifacenum, and to be unique */ 721 690 dev->net->dev_addr[ETH_ALEN-2] = atomic_inc_return(&iface_counter); 722 691 dev->net->dev_addr[ETH_ALEN-1] = ifacenum; 723 - 724 - /* we will have to manufacture ethernet headers, prepare template */ 725 - eth = (struct ethhdr *)priv->ethr_hdr_tmpl; 726 - memcpy(&eth->h_dest, dev->net->dev_addr, ETH_ALEN); 727 - eth->h_proto = cpu_to_be16(ETH_P_IP); 728 692 729 693 /* prepare shutdown message template */ 730 694 memcpy(priv->shdwn_msg, shdwn_tmplate, sizeof(priv->shdwn_msg)); ··· 850 824 851 825 skb_pull(skb, hh.hdrlen); 852 826 853 - /* We are going to accept this packet, prepare it */ 854 - memcpy(skb->data, sierra_net_get_private(dev)->ethr_hdr_tmpl, 855 - ETH_HLEN); 827 + /* We are going to accept this packet, prepare it. 828 + * In case protocol is IPv6, keep it, otherwise force IPv4. 829 + */ 830 + skb_reset_mac_header(skb); 831 + if (eth_hdr(skb)->h_proto != cpu_to_be16(ETH_P_IPV6)) 832 + eth_hdr(skb)->h_proto = cpu_to_be16(ETH_P_IP); 833 + eth_zero_addr(eth_hdr(skb)->h_source); 834 + memcpy(eth_hdr(skb)->h_dest, dev->net->dev_addr, ETH_ALEN); 856 835 857 836 /* Last packet in batch handled by usbnet */ 858 837 if (hh.payload_len.word == skb->len)
+7 -2
drivers/net/wireless/realtek/rtlwifi/rtl8192ce/sw.c
··· 92 92 struct rtl_priv *rtlpriv = rtl_priv(hw); 93 93 struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw)); 94 94 struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); 95 - char *fw_name = "rtlwifi/rtl8192cfwU.bin"; 95 + char *fw_name; 96 96 97 97 rtl8192ce_bt_reg_init(hw); 98 98 ··· 164 164 } 165 165 166 166 /* request fw */ 167 - if (IS_81XXC_VENDOR_UMC_B_CUT(rtlhal->version)) 167 + if (IS_VENDOR_UMC_A_CUT(rtlhal->version) && 168 + !IS_92C_SERIAL(rtlhal->version)) 169 + fw_name = "rtlwifi/rtl8192cfwU.bin"; 170 + else if (IS_81XXC_VENDOR_UMC_B_CUT(rtlhal->version)) 168 171 fw_name = "rtlwifi/rtl8192cfwU_B.bin"; 172 + else 173 + fw_name = "rtlwifi/rtl8192cfw.bin"; 169 174 170 175 rtlpriv->max_fw_size = 0x4000; 171 176 pr_info("Using firmware %s\n", fw_name);
+24 -22
drivers/net/xen-netfront.c
··· 281 281 { 282 282 RING_IDX req_prod = queue->rx.req_prod_pvt; 283 283 int notify; 284 + int err = 0; 284 285 285 286 if (unlikely(!netif_carrier_ok(queue->info->netdev))) 286 287 return; ··· 296 295 struct xen_netif_rx_request *req; 297 296 298 297 skb = xennet_alloc_one_rx_buffer(queue); 299 - if (!skb) 298 + if (!skb) { 299 + err = -ENOMEM; 300 300 break; 301 + } 301 302 302 303 id = xennet_rxidx(req_prod); 303 304 ··· 323 320 324 321 queue->rx.req_prod_pvt = req_prod; 325 322 326 - /* Not enough requests? Try again later. */ 327 - if (req_prod - queue->rx.sring->req_prod < NET_RX_SLOTS_MIN) { 323 + /* Try again later if there are not enough requests or skb allocation 324 + * failed. 325 + * Enough requests is quantified as the sum of newly created slots and 326 + * the unconsumed slots at the backend. 327 + */ 328 + if (req_prod - queue->rx.rsp_cons < NET_RX_SLOTS_MIN || 329 + unlikely(err)) { 328 330 mod_timer(&queue->rx_refill_timer, jiffies + (HZ/10)); 329 331 return; 330 332 } ··· 1387 1379 for (i = 0; i < num_queues && info->queues; ++i) { 1388 1380 struct netfront_queue *queue = &info->queues[i]; 1389 1381 1382 + del_timer_sync(&queue->rx_refill_timer); 1383 + 1390 1384 if (queue->tx_irq && (queue->tx_irq == queue->rx_irq)) 1391 1385 unbind_from_irqhandler(queue->tx_irq, queue); 1392 1386 if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) { ··· 1743 1733 1744 1734 if (netif_running(info->netdev)) 1745 1735 napi_disable(&queue->napi); 1746 - del_timer_sync(&queue->rx_refill_timer); 1747 1736 netif_napi_del(&queue->napi); 1748 1737 } 1749 1738 ··· 1831 1822 xennet_destroy_queues(info); 1832 1823 1833 1824 err = xennet_create_queues(info, &num_queues); 1834 - if (err < 0) 1835 - goto destroy_ring; 1825 + if (err < 0) { 1826 + xenbus_dev_fatal(dev, err, "creating queues"); 1827 + kfree(info->queues); 1828 + info->queues = NULL; 1829 + goto out; 1830 + } 1836 1831 1837 1832 /* Create shared ring, alloc event channel -- for each queue */ 1838 1833 for (i 
= 0; i < num_queues; ++i) { 1839 1834 queue = &info->queues[i]; 1840 1835 err = setup_netfront(dev, queue, feature_split_evtchn); 1841 - if (err) { 1842 - /* setup_netfront() will tidy up the current 1843 - * queue on error, but we need to clean up 1844 - * those already allocated. 1845 - */ 1846 - if (i > 0) { 1847 - rtnl_lock(); 1848 - netif_set_real_num_tx_queues(info->netdev, i); 1849 - rtnl_unlock(); 1850 - goto destroy_ring; 1851 - } else { 1852 - goto out; 1853 - } 1854 - } 1836 + if (err) 1837 + goto destroy_ring; 1855 1838 } 1856 1839 1857 1840 again: ··· 1933 1932 xenbus_transaction_end(xbt, 1); 1934 1933 destroy_ring: 1935 1934 xennet_disconnect_backend(info); 1936 - kfree(info->queues); 1937 - info->queues = NULL; 1935 + xennet_destroy_queues(info); 1938 1936 out: 1937 + unregister_netdev(info->netdev); 1938 + xennet_free_netdev(info->netdev); 1939 1939 return err; 1940 1940 } 1941 1941
+10 -7
drivers/nvdimm/namespace_devs.c
··· 52 52 kfree(nsblk); 53 53 } 54 54 55 - static struct device_type namespace_io_device_type = { 55 + static const struct device_type namespace_io_device_type = { 56 56 .name = "nd_namespace_io", 57 57 .release = namespace_io_release, 58 58 }; 59 59 60 - static struct device_type namespace_pmem_device_type = { 60 + static const struct device_type namespace_pmem_device_type = { 61 61 .name = "nd_namespace_pmem", 62 62 .release = namespace_pmem_release, 63 63 }; 64 64 65 - static struct device_type namespace_blk_device_type = { 65 + static const struct device_type namespace_blk_device_type = { 66 66 .name = "nd_namespace_blk", 67 67 .release = namespace_blk_release, 68 68 }; ··· 962 962 struct nvdimm_drvdata *ndd; 963 963 struct nd_label_id label_id; 964 964 u32 flags = 0, remainder; 965 + int rc, i, id = -1; 965 966 u8 *uuid = NULL; 966 - int rc, i; 967 967 968 968 if (dev->driver || ndns->claim) 969 969 return -EBUSY; ··· 972 972 struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev); 973 973 974 974 uuid = nspm->uuid; 975 + id = nspm->id; 975 976 } else if (is_namespace_blk(dev)) { 976 977 struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev); 977 978 978 979 uuid = nsblk->uuid; 979 980 flags = NSLABEL_FLAG_LOCAL; 981 + id = nsblk->id; 980 982 } 981 983 982 984 /* ··· 1041 1039 1042 1040 /* 1043 1041 * Try to delete the namespace if we deleted all of its 1044 - * allocation, this is not the seed device for the region, and 1045 - * it is not actively claimed by a btt instance. 1042 + * allocation, this is not the seed or 0th device for the 1043 + * region, and it is not actively claimed by a btt, pfn, or dax 1044 + * instance. 1046 1045 */ 1047 - if (val == 0 && nd_region->ns_seed != dev && !ndns->claim) 1046 + if (val == 0 && id != 0 && nd_region->ns_seed != dev && !ndns->claim) 1048 1047 nd_device_unregister(dev, ND_ASYNC); 1049 1048 1050 1049 return rc;
+2 -5
drivers/nvdimm/pfn_devs.c
··· 627 627 size = resource_size(&nsio->res); 628 628 npfns = (size - start_pad - end_trunc - SZ_8K) / SZ_4K; 629 629 if (nd_pfn->mode == PFN_MODE_PMEM) { 630 - unsigned long memmap_size; 631 - 632 630 /* 633 631 * vmemmap_populate_hugepages() allocates the memmap array in 634 632 * HPAGE_SIZE chunks. 635 633 */ 636 - memmap_size = ALIGN(64 * npfns, HPAGE_SIZE); 637 - offset = ALIGN(start + SZ_8K + memmap_size + dax_label_reserve, 638 - nd_pfn->align) - start; 634 + offset = ALIGN(start + SZ_8K + 64 * npfns + dax_label_reserve, 635 + max(nd_pfn->align, HPAGE_SIZE)) - start; 639 636 } else if (nd_pfn->mode == PFN_MODE_RAM) 640 637 offset = ALIGN(start + SZ_8K + dax_label_reserve, 641 638 nd_pfn->align) - start;
-6
drivers/pci/hotplug/pciehp_ctrl.c
··· 31 31 #include <linux/kernel.h> 32 32 #include <linux/types.h> 33 33 #include <linux/slab.h> 34 - #include <linux/pm_runtime.h> 35 34 #include <linux/pci.h> 36 35 #include "../pci.h" 37 36 #include "pciehp.h" ··· 98 99 pciehp_green_led_blink(p_slot); 99 100 100 101 /* Check link training status */ 101 - pm_runtime_get_sync(&ctrl->pcie->port->dev); 102 102 retval = pciehp_check_link_status(ctrl); 103 103 if (retval) { 104 104 ctrl_err(ctrl, "Failed to check link status\n"); ··· 118 120 if (retval != -EEXIST) 119 121 goto err_exit; 120 122 } 121 - pm_runtime_put(&ctrl->pcie->port->dev); 122 123 123 124 pciehp_green_led_on(p_slot); 124 125 pciehp_set_attention_status(p_slot, 0); 125 126 return 0; 126 127 127 128 err_exit: 128 - pm_runtime_put(&ctrl->pcie->port->dev); 129 129 set_slot_off(ctrl, p_slot); 130 130 return retval; 131 131 } ··· 137 141 int retval; 138 142 struct controller *ctrl = p_slot->ctrl; 139 143 140 - pm_runtime_get_sync(&ctrl->pcie->port->dev); 141 144 retval = pciehp_unconfigure_device(p_slot); 142 - pm_runtime_put(&ctrl->pcie->port->dev); 143 145 if (retval) 144 146 return retval; 145 147
+10
drivers/pci/msi.c
··· 1206 1206 if (flags & PCI_IRQ_AFFINITY) { 1207 1207 if (!affd) 1208 1208 affd = &msi_default_affd; 1209 + 1210 + if (affd->pre_vectors + affd->post_vectors > min_vecs) 1211 + return -EINVAL; 1212 + 1213 + /* 1214 + * If there aren't any vectors left after applying the pre/post 1215 + * vectors don't bother with assigning affinity. 1216 + */ 1217 + if (affd->pre_vectors + affd->post_vectors == min_vecs) 1218 + affd = NULL; 1209 1219 } else { 1210 1220 if (WARN_ON(affd)) 1211 1221 affd = NULL;
+6 -6
drivers/pci/pci.c
··· 2241 2241 return false; 2242 2242 2243 2243 /* 2244 - * Hotplug ports handled by firmware in System Management Mode 2244 + * Hotplug interrupts cannot be delivered if the link is down, 2245 + * so parents of a hotplug port must stay awake. In addition, 2246 + * hotplug ports handled by firmware in System Management Mode 2245 2247 * may not be put into D3 by the OS (Thunderbolt on non-Macs). 2248 + * For simplicity, disallow in general for now. 2246 2249 */ 2247 - if (bridge->is_hotplug_bridge && !pciehp_is_native(bridge)) 2250 + if (bridge->is_hotplug_bridge) 2248 2251 return false; 2249 2252 2250 2253 if (pci_bridge_d3_force) ··· 2279 2276 !pci_pme_capable(dev, PCI_D3cold)) || 2280 2277 2281 2278 /* If it is a bridge it must be allowed to go to D3. */ 2282 - !pci_power_manageable(dev) || 2283 - 2284 - /* Hotplug interrupts cannot be delivered if the link is down. */ 2285 - dev->is_hotplug_bridge) 2279 + !pci_power_manageable(dev)) 2286 2280 2287 2281 *d3cold_ok = false; 2288 2282
+1 -1
drivers/regulator/axp20x-regulator.c
··· 272 272 64, AXP806_DCDCD_V_CTRL, 0x3f, AXP806_PWR_OUT_CTRL1, 273 273 BIT(3)), 274 274 AXP_DESC(AXP806, DCDCE, "dcdce", "vine", 1100, 3400, 100, 275 - AXP806_DCDCB_V_CTRL, 0x1f, AXP806_PWR_OUT_CTRL1, BIT(4)), 275 + AXP806_DCDCE_V_CTRL, 0x1f, AXP806_PWR_OUT_CTRL1, BIT(4)), 276 276 AXP_DESC(AXP806, ALDO1, "aldo1", "aldoin", 700, 3300, 100, 277 277 AXP806_ALDO1_V_CTRL, 0x1f, AXP806_PWR_OUT_CTRL1, BIT(5)), 278 278 AXP_DESC(AXP806, ALDO2, "aldo2", "aldoin", 700, 3400, 100,
-46
drivers/regulator/fixed.c
··· 30 30 #include <linux/of_gpio.h> 31 31 #include <linux/regulator/of_regulator.h> 32 32 #include <linux/regulator/machine.h> 33 - #include <linux/acpi.h> 34 - #include <linux/property.h> 35 - #include <linux/gpio/consumer.h> 36 33 37 34 struct fixed_voltage_data { 38 35 struct regulator_desc desc; ··· 94 97 return config; 95 98 } 96 99 97 - /** 98 - * acpi_get_fixed_voltage_config - extract fixed_voltage_config structure info 99 - * @dev: device requesting for fixed_voltage_config 100 - * @desc: regulator description 101 - * 102 - * Populates fixed_voltage_config structure by extracting data through ACPI 103 - * interface, returns a pointer to the populated structure of NULL if memory 104 - * alloc fails. 105 - */ 106 - static struct fixed_voltage_config * 107 - acpi_get_fixed_voltage_config(struct device *dev, 108 - const struct regulator_desc *desc) 109 - { 110 - struct fixed_voltage_config *config; 111 - const char *supply_name; 112 - struct gpio_desc *gpiod; 113 - int ret; 114 - 115 - config = devm_kzalloc(dev, sizeof(*config), GFP_KERNEL); 116 - if (!config) 117 - return ERR_PTR(-ENOMEM); 118 - 119 - ret = device_property_read_string(dev, "supply-name", &supply_name); 120 - if (!ret) 121 - config->supply_name = supply_name; 122 - 123 - gpiod = gpiod_get(dev, "gpio", GPIOD_ASIS); 124 - if (IS_ERR(gpiod)) 125 - return ERR_PTR(-ENODEV); 126 - 127 - config->gpio = desc_to_gpio(gpiod); 128 - config->enable_high = device_property_read_bool(dev, 129 - "enable-active-high"); 130 - gpiod_put(gpiod); 131 - 132 - return config; 133 - } 134 - 135 100 static struct regulator_ops fixed_voltage_ops = { 136 101 }; 137 102 ··· 112 153 if (pdev->dev.of_node) { 113 154 config = of_get_fixed_voltage_config(&pdev->dev, 114 155 &drvdata->desc); 115 - if (IS_ERR(config)) 116 - return PTR_ERR(config); 117 - } else if (ACPI_HANDLE(&pdev->dev)) { 118 - config = acpi_get_fixed_voltage_config(&pdev->dev, 119 - &drvdata->desc); 120 156 if (IS_ERR(config)) 121 157 return 
PTR_ERR(config); 122 158 } else {
+1 -1
drivers/regulator/twl6030-regulator.c
··· 452 452 vsel = 62; 453 453 else if ((min_uV > 1800000) && (min_uV <= 1900000)) 454 454 vsel = 61; 455 - else if ((min_uV > 1350000) && (min_uV <= 1800000)) 455 + else if ((min_uV > 1500000) && (min_uV <= 1800000)) 456 456 vsel = 60; 457 457 else if ((min_uV > 1350000) && (min_uV <= 1500000)) 458 458 vsel = 59;
+4 -4
drivers/s390/scsi/zfcp_fsf.c
··· 1583 1583 int zfcp_fsf_open_wka_port(struct zfcp_fc_wka_port *wka_port) 1584 1584 { 1585 1585 struct zfcp_qdio *qdio = wka_port->adapter->qdio; 1586 - struct zfcp_fsf_req *req = NULL; 1586 + struct zfcp_fsf_req *req; 1587 1587 int retval = -EIO; 1588 1588 1589 1589 spin_lock_irq(&qdio->req_q_lock); ··· 1612 1612 zfcp_fsf_req_free(req); 1613 1613 out: 1614 1614 spin_unlock_irq(&qdio->req_q_lock); 1615 - if (req && !IS_ERR(req)) 1615 + if (!retval) 1616 1616 zfcp_dbf_rec_run_wka("fsowp_1", wka_port, req->req_id); 1617 1617 return retval; 1618 1618 } ··· 1638 1638 int zfcp_fsf_close_wka_port(struct zfcp_fc_wka_port *wka_port) 1639 1639 { 1640 1640 struct zfcp_qdio *qdio = wka_port->adapter->qdio; 1641 - struct zfcp_fsf_req *req = NULL; 1641 + struct zfcp_fsf_req *req; 1642 1642 int retval = -EIO; 1643 1643 1644 1644 spin_lock_irq(&qdio->req_q_lock); ··· 1667 1667 zfcp_fsf_req_free(req); 1668 1668 out: 1669 1669 spin_unlock_irq(&qdio->req_q_lock); 1670 - if (req && !IS_ERR(req)) 1670 + if (!retval) 1671 1671 zfcp_dbf_rec_run_wka("fscwp_1", wka_port, req->req_id); 1672 1672 return retval; 1673 1673 }
+6 -2
drivers/scsi/aacraid/comminit.c
··· 50 50 51 51 static inline int aac_is_msix_mode(struct aac_dev *dev) 52 52 { 53 - u32 status; 53 + u32 status = 0; 54 54 55 - status = src_readl(dev, MUnit.OMR); 55 + if (dev->pdev->device == PMC_DEVICE_S6 || 56 + dev->pdev->device == PMC_DEVICE_S7 || 57 + dev->pdev->device == PMC_DEVICE_S8) { 58 + status = src_readl(dev, MUnit.OMR); 59 + } 56 60 return (status & AAC_INT_MODE_MSIX); 57 61 } 58 62
+1
drivers/scsi/ibmvscsi_tgt/ibmvscsi_tgt.c
··· 3816 3816 static const struct target_core_fabric_ops ibmvscsis_ops = { 3817 3817 .module = THIS_MODULE, 3818 3818 .name = "ibmvscsis", 3819 + .max_data_sg_nents = MAX_TXU / PAGE_SIZE, 3819 3820 .get_fabric_name = ibmvscsis_get_fabric_name, 3820 3821 .tpg_get_wwn = ibmvscsis_get_fabric_wwn, 3821 3822 .tpg_get_tag = ibmvscsis_get_tag,
+18
drivers/scsi/mpt3sas/mpt3sas_scsih.c
··· 51 51 #include <linux/workqueue.h> 52 52 #include <linux/delay.h> 53 53 #include <linux/pci.h> 54 + #include <linux/pci-aspm.h> 54 55 #include <linux/interrupt.h> 55 56 #include <linux/aer.h> 56 57 #include <linux/raid_class.h> ··· 4658 4657 struct MPT3SAS_DEVICE *sas_device_priv_data; 4659 4658 u32 response_code = 0; 4660 4659 unsigned long flags; 4660 + unsigned int sector_sz; 4661 4661 4662 4662 mpi_reply = mpt3sas_base_get_reply_virt_addr(ioc, reply); 4663 4663 scmd = _scsih_scsi_lookup_get_clear(ioc, smid); ··· 4717 4715 } 4718 4716 4719 4717 xfer_cnt = le32_to_cpu(mpi_reply->TransferCount); 4718 + 4719 + /* In case of bogus fw or device, we could end up having 4720 + * unaligned partial completion. We can force alignment here, 4721 + * then scsi-ml does not need to handle this misbehavior. 4722 + */ 4723 + sector_sz = scmd->device->sector_size; 4724 + if (unlikely(scmd->request->cmd_type == REQ_TYPE_FS && sector_sz && 4725 + xfer_cnt % sector_sz)) { 4726 + sdev_printk(KERN_INFO, scmd->device, 4727 + "unaligned partial completion avoided (xfer_cnt=%u, sector_sz=%u)\n", 4728 + xfer_cnt, sector_sz); 4729 + xfer_cnt = round_down(xfer_cnt, sector_sz); 4730 + } 4731 + 4720 4732 scsi_set_resid(scmd, scsi_bufflen(scmd) - xfer_cnt); 4721 4733 if (ioc_status & MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE) 4722 4734 log_info = le32_to_cpu(mpi_reply->IOCLogInfo); ··· 8762 8746 8763 8747 switch (hba_mpi_version) { 8764 8748 case MPI2_VERSION: 8749 + pci_disable_link_state(pdev, PCIE_LINK_STATE_L0S | 8750 + PCIE_LINK_STATE_L1 | PCIE_LINK_STATE_CLKPM); 8765 8751 /* Use mpt2sas driver host template for SAS 2.0 HBA's */ 8766 8752 shost = scsi_host_alloc(&mpt2sas_driver_template, 8767 8753 sizeof(struct MPT3SAS_ADAPTER));
+2 -1
drivers/scsi/qla2xxx/qla_isr.c
··· 3242 3242 * from a probe failure context. 3243 3243 */ 3244 3244 if (!ha->rsp_q_map || !ha->rsp_q_map[0]) 3245 - return; 3245 + goto free_irqs; 3246 3246 rsp = ha->rsp_q_map[0]; 3247 3247 3248 3248 if (ha->flags.msix_enabled) { ··· 3262 3262 free_irq(pci_irq_vector(ha->pdev, 0), rsp); 3263 3263 } 3264 3264 3265 + free_irqs: 3265 3266 pci_free_irq_vectors(ha->pdev); 3266 3267 } 3267 3268
+1 -1
drivers/scsi/qla2xxx/qla_os.c
··· 1616 1616 /* Don't abort commands in adapter during EEH 1617 1617 * recovery as it's not accessible/responding. 1618 1618 */ 1619 - if (!ha->flags.eeh_busy) { 1619 + if (GET_CMD_SP(sp) && !ha->flags.eeh_busy) { 1620 1620 /* Get a reference to the sp and drop the lock. 1621 1621 * The reference ensures this sp->done() call 1622 1622 * - and not the call in qla2xxx_eh_abort() -
+10 -1
drivers/scsi/virtio_scsi.c
··· 534 534 { 535 535 struct Scsi_Host *shost = virtio_scsi_host(vscsi->vdev); 536 536 struct virtio_scsi_cmd *cmd = scsi_cmd_priv(sc); 537 + unsigned long flags; 537 538 int req_size; 539 + int ret; 538 540 539 541 BUG_ON(scsi_sg_count(sc) > shost->sg_tablesize); 540 542 ··· 564 562 req_size = sizeof(cmd->req.cmd); 565 563 } 566 564 567 - if (virtscsi_kick_cmd(req_vq, cmd, req_size, sizeof(cmd->resp.cmd)) != 0) 565 + ret = virtscsi_kick_cmd(req_vq, cmd, req_size, sizeof(cmd->resp.cmd)); 566 + if (ret == -EIO) { 567 + cmd->resp.cmd.response = VIRTIO_SCSI_S_BAD_TARGET; 568 + spin_lock_irqsave(&req_vq->vq_lock, flags); 569 + virtscsi_complete_cmd(vscsi, cmd); 570 + spin_unlock_irqrestore(&req_vq->vq_lock, flags); 571 + } else if (ret != 0) { 568 572 return SCSI_MLQUEUE_HOST_BUSY; 573 + } 569 574 return 0; 570 575 } 571 576
+6
drivers/staging/greybus/timesync_platform.c
··· 45 45 46 46 int gb_timesync_platform_lock_bus(struct gb_timesync_svc *pdata) 47 47 { 48 + if (!arche_platform_change_state_cb) 49 + return 0; 50 + 48 51 return arche_platform_change_state_cb(ARCHE_PLATFORM_STATE_TIME_SYNC, 49 52 pdata); 50 53 } 51 54 52 55 void gb_timesync_platform_unlock_bus(void) 53 56 { 57 + if (!arche_platform_change_state_cb) 58 + return; 59 + 54 60 arche_platform_change_state_cb(ARCHE_PLATFORM_STATE_ACTIVE, NULL); 55 61 } 56 62
+1 -3
drivers/staging/lustre/lustre/llite/llite_mmap.c
··· 390 390 result = VM_FAULT_LOCKED; 391 391 break; 392 392 case -ENODATA: 393 + case -EAGAIN: 393 394 case -EFAULT: 394 395 result = VM_FAULT_NOPAGE; 395 396 break; 396 397 case -ENOMEM: 397 398 result = VM_FAULT_OOM; 398 - break; 399 - case -EAGAIN: 400 - result = VM_FAULT_RETRY; 401 399 break; 402 400 default: 403 401 result = VM_FAULT_SIGBUS;
+9 -1
drivers/target/target_core_device.c
··· 352 352 kfree(new); 353 353 return -EINVAL; 354 354 } 355 - BUG_ON(orig->se_lun_acl != NULL); 355 + if (orig->se_lun_acl != NULL) { 356 + pr_warn_ratelimited("Detected existing explicit" 357 + " se_lun_acl->se_lun_group reference for %s" 358 + " mapped_lun: %llu, failing\n", 359 + nacl->initiatorname, mapped_lun); 360 + mutex_unlock(&nacl->lun_entry_mutex); 361 + kfree(new); 362 + return -EINVAL; 363 + } 356 364 357 365 rcu_assign_pointer(new->se_lun, lun); 358 366 rcu_assign_pointer(new->se_lun_acl, lun_acl);
+6 -2
drivers/target/target_core_sbc.c
··· 451 451 int *post_ret) 452 452 { 453 453 struct se_device *dev = cmd->se_dev; 454 + sense_reason_t ret = TCM_NO_SENSE; 454 455 455 456 /* 456 457 * Only set SCF_COMPARE_AND_WRITE_POST to force a response fall-through ··· 459 458 * sent to the backend driver. 460 459 */ 461 460 spin_lock_irq(&cmd->t_state_lock); 462 - if ((cmd->transport_state & CMD_T_SENT) && !cmd->scsi_status) { 461 + if (cmd->transport_state & CMD_T_SENT) { 463 462 cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST; 464 463 *post_ret = 1; 464 + 465 + if (cmd->scsi_status == SAM_STAT_CHECK_CONDITION) 466 + ret = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE; 465 467 } 466 468 spin_unlock_irq(&cmd->t_state_lock); 467 469 ··· 474 470 */ 475 471 up(&dev->caw_sem); 476 472 477 - return TCM_NO_SENSE; 473 + return ret; 478 474 } 479 475 480 476 static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success,
+58 -28
drivers/target/target_core_transport.c
··· 457 457 { 458 458 struct se_node_acl *nacl = container_of(kref, 459 459 struct se_node_acl, acl_kref); 460 + struct se_portal_group *se_tpg = nacl->se_tpg; 460 461 461 - complete(&nacl->acl_free_comp); 462 + if (!nacl->dynamic_stop) { 463 + complete(&nacl->acl_free_comp); 464 + return; 465 + } 466 + 467 + mutex_lock(&se_tpg->acl_node_mutex); 468 + list_del(&nacl->acl_list); 469 + mutex_unlock(&se_tpg->acl_node_mutex); 470 + 471 + core_tpg_wait_for_nacl_pr_ref(nacl); 472 + core_free_device_list_for_node(nacl, se_tpg); 473 + kfree(nacl); 462 474 } 463 475 464 476 void target_put_nacl(struct se_node_acl *nacl) ··· 511 499 void transport_free_session(struct se_session *se_sess) 512 500 { 513 501 struct se_node_acl *se_nacl = se_sess->se_node_acl; 502 + 514 503 /* 515 504 * Drop the se_node_acl->nacl_kref obtained from within 516 505 * core_tpg_get_initiator_node_acl(). 517 506 */ 518 507 if (se_nacl) { 508 + struct se_portal_group *se_tpg = se_nacl->se_tpg; 509 + const struct target_core_fabric_ops *se_tfo = se_tpg->se_tpg_tfo; 510 + unsigned long flags; 511 + 519 512 se_sess->se_node_acl = NULL; 513 + 514 + /* 515 + * Also determine if we need to drop the extra ->cmd_kref if 516 + * it had been previously dynamically generated, and 517 + * the endpoint is not caching dynamic ACLs. 
518 + */ 519 + mutex_lock(&se_tpg->acl_node_mutex); 520 + if (se_nacl->dynamic_node_acl && 521 + !se_tfo->tpg_check_demo_mode_cache(se_tpg)) { 522 + spin_lock_irqsave(&se_nacl->nacl_sess_lock, flags); 523 + if (list_empty(&se_nacl->acl_sess_list)) 524 + se_nacl->dynamic_stop = true; 525 + spin_unlock_irqrestore(&se_nacl->nacl_sess_lock, flags); 526 + 527 + if (se_nacl->dynamic_stop) 528 + list_del(&se_nacl->acl_list); 529 + } 530 + mutex_unlock(&se_tpg->acl_node_mutex); 531 + 532 + if (se_nacl->dynamic_stop) 533 + target_put_nacl(se_nacl); 534 + 520 535 target_put_nacl(se_nacl); 521 536 } 522 537 if (se_sess->sess_cmd_map) { ··· 557 518 void transport_deregister_session(struct se_session *se_sess) 558 519 { 559 520 struct se_portal_group *se_tpg = se_sess->se_tpg; 560 - const struct target_core_fabric_ops *se_tfo; 561 - struct se_node_acl *se_nacl; 562 521 unsigned long flags; 563 - bool drop_nacl = false; 564 522 565 523 if (!se_tpg) { 566 524 transport_free_session(se_sess); 567 525 return; 568 526 } 569 - se_tfo = se_tpg->se_tpg_tfo; 570 527 571 528 spin_lock_irqsave(&se_tpg->session_lock, flags); 572 529 list_del(&se_sess->sess_list); ··· 570 535 se_sess->fabric_sess_ptr = NULL; 571 536 spin_unlock_irqrestore(&se_tpg->session_lock, flags); 572 537 573 - /* 574 - * Determine if we need to do extra work for this initiator node's 575 - * struct se_node_acl if it had been previously dynamically generated. 
576 - */ 577 - se_nacl = se_sess->se_node_acl; 578 - 579 - mutex_lock(&se_tpg->acl_node_mutex); 580 - if (se_nacl && se_nacl->dynamic_node_acl) { 581 - if (!se_tfo->tpg_check_demo_mode_cache(se_tpg)) { 582 - list_del(&se_nacl->acl_list); 583 - drop_nacl = true; 584 - } 585 - } 586 - mutex_unlock(&se_tpg->acl_node_mutex); 587 - 588 - if (drop_nacl) { 589 - core_tpg_wait_for_nacl_pr_ref(se_nacl); 590 - core_free_device_list_for_node(se_nacl, se_tpg); 591 - se_sess->se_node_acl = NULL; 592 - kfree(se_nacl); 593 - } 594 538 pr_debug("TARGET_CORE[%s]: Deregistered fabric_sess\n", 595 539 se_tpg->se_tpg_tfo->get_fabric_name()); 596 540 /* 597 541 * If last kref is dropping now for an explicit NodeACL, awake sleeping 598 542 * ->acl_free_comp caller to wakeup configfs se_node_acl->acl_group 599 543 * removal context from within transport_free_session() code. 544 + * 545 + * For dynamic ACL, target_put_nacl() uses target_complete_nacl() 546 + * to release all remaining generate_node_acl=1 created ACL resources. 
600 547 */ 601 548 602 549 transport_free_session(se_sess); ··· 3127 3110 spin_unlock_irqrestore(&cmd->t_state_lock, flags); 3128 3111 goto check_stop; 3129 3112 } 3130 - cmd->t_state = TRANSPORT_ISTATE_PROCESSING; 3131 3113 spin_unlock_irqrestore(&cmd->t_state_lock, flags); 3132 3114 3133 3115 cmd->se_tfo->queue_tm_rsp(cmd); ··· 3139 3123 struct se_cmd *cmd) 3140 3124 { 3141 3125 unsigned long flags; 3126 + bool aborted = false; 3142 3127 3143 3128 spin_lock_irqsave(&cmd->t_state_lock, flags); 3144 - cmd->transport_state |= CMD_T_ACTIVE; 3129 + if (cmd->transport_state & CMD_T_ABORTED) { 3130 + aborted = true; 3131 + } else { 3132 + cmd->t_state = TRANSPORT_ISTATE_PROCESSING; 3133 + cmd->transport_state |= CMD_T_ACTIVE; 3134 + } 3145 3135 spin_unlock_irqrestore(&cmd->t_state_lock, flags); 3136 + 3137 + if (aborted) { 3138 + pr_warn_ratelimited("handle_tmr caught CMD_T_ABORTED TMR %d" 3139 + "ref_tag: %llu tag: %llu\n", cmd->se_tmr_req->function, 3140 + cmd->se_tmr_req->ref_task_tag, cmd->tag); 3141 + transport_cmd_check_stop_to_fabric(cmd); 3142 + return 0; 3143 + } 3146 3144 3147 3145 INIT_WORK(&cmd->work, target_tmr_work); 3148 3146 queue_work(cmd->se_dev->tmr_wq, &cmd->work);
+1 -1
drivers/target/target_core_xcopy.c
··· 864 864 " CHECK_CONDITION -> sending response\n", rc); 865 865 ec_cmd->scsi_status = SAM_STAT_CHECK_CONDITION; 866 866 } 867 - target_complete_cmd(ec_cmd, SAM_STAT_CHECK_CONDITION); 867 + target_complete_cmd(ec_cmd, ec_cmd->scsi_status); 868 868 } 869 869 870 870 sense_reason_t target_do_xcopy(struct se_cmd *se_cmd)
+4
drivers/usb/core/quirks.c
··· 37 37 /* CBM - Flash disk */ 38 38 { USB_DEVICE(0x0204, 0x6025), .driver_info = USB_QUIRK_RESET_RESUME }, 39 39 40 + /* WORLDE easy key (easykey.25) MIDI controller */ 41 + { USB_DEVICE(0x0218, 0x0401), .driver_info = 42 + USB_QUIRK_CONFIG_INTF_STRINGS }, 43 + 40 44 /* HP 5300/5370C scanner */ 41 45 { USB_DEVICE(0x03f0, 0x0701), .driver_info = 42 46 USB_QUIRK_STRING_FETCH_255 },
+12 -1
drivers/usb/gadget/function/f_fs.c
··· 2269 2269 if (len < sizeof(*d) || h->interface >= ffs->interfaces_count) 2270 2270 return -EINVAL; 2271 2271 length = le32_to_cpu(d->dwSize); 2272 + if (len < length) 2273 + return -EINVAL; 2272 2274 type = le32_to_cpu(d->dwPropertyDataType); 2273 2275 if (type < USB_EXT_PROP_UNICODE || 2274 2276 type > USB_EXT_PROP_UNICODE_MULTI) { ··· 2279 2277 return -EINVAL; 2280 2278 } 2281 2279 pnl = le16_to_cpu(d->wPropertyNameLength); 2280 + if (length < 14 + pnl) { 2281 + pr_vdebug("invalid os descriptor length: %d pnl:%d (descriptor %d)\n", 2282 + length, pnl, type); 2283 + return -EINVAL; 2284 + } 2282 2285 pdl = le32_to_cpu(*(u32 *)((u8 *)data + 10 + pnl)); 2283 2286 if (length != 14 + pnl + pdl) { 2284 2287 pr_vdebug("invalid os descriptor length: %d pnl:%d pdl:%d (descriptor %d)\n", ··· 2370 2363 } 2371 2364 } 2372 2365 if (flags & (1 << i)) { 2366 + if (len < 4) { 2367 + goto error; 2368 + } 2373 2369 os_descs_count = get_unaligned_le32(data); 2374 2370 data += 4; 2375 2371 len -= 4; ··· 2445 2435 2446 2436 ENTER(); 2447 2437 2448 - if (unlikely(get_unaligned_le32(data) != FUNCTIONFS_STRINGS_MAGIC || 2438 + if (unlikely(len < 16 || 2439 + get_unaligned_le32(data) != FUNCTIONFS_STRINGS_MAGIC || 2449 2440 get_unaligned_le32(data + 4) != len)) 2450 2441 goto error; 2451 2442 str_count = get_unaligned_le32(data + 8);
+13 -13
drivers/usb/musb/musb_core.c
··· 594 594 | MUSB_PORT_STAT_RESUME; 595 595 musb->rh_timer = jiffies 596 596 + msecs_to_jiffies(USB_RESUME_TIMEOUT); 597 - musb->need_finish_resume = 1; 598 - 599 597 musb->xceiv->otg->state = OTG_STATE_A_HOST; 600 598 musb->is_active = 1; 601 599 musb_host_resume_root_hub(musb); 600 + schedule_delayed_work(&musb->finish_resume_work, 601 + msecs_to_jiffies(USB_RESUME_TIMEOUT)); 602 602 break; 603 603 case OTG_STATE_B_WAIT_ACON: 604 604 musb->xceiv->otg->state = OTG_STATE_B_PERIPHERAL; ··· 1925 1925 static void musb_irq_work(struct work_struct *data) 1926 1926 { 1927 1927 struct musb *musb = container_of(data, struct musb, irq_work.work); 1928 + int error; 1929 + 1930 + error = pm_runtime_get_sync(musb->controller); 1931 + if (error < 0) { 1932 + dev_err(musb->controller, "Could not enable: %i\n", error); 1933 + 1934 + return; 1935 + } 1928 1936 1929 1937 musb_pm_runtime_check_session(musb); 1930 1938 ··· 1940 1932 musb->xceiv_old_state = musb->xceiv->otg->state; 1941 1933 sysfs_notify(&musb->controller->kobj, NULL, "mode"); 1942 1934 } 1935 + 1936 + pm_runtime_mark_last_busy(musb->controller); 1937 + pm_runtime_put_autosuspend(musb->controller); 1943 1938 } 1944 1939 1945 1940 static void musb_recover_from_babble(struct musb *musb) ··· 2721 2710 mask = MUSB_DEVCTL_BDEVICE | MUSB_DEVCTL_FSDEV | MUSB_DEVCTL_LSDEV; 2722 2711 if ((devctl & mask) != (musb->context.devctl & mask)) 2723 2712 musb->port1_status = 0; 2724 - if (musb->need_finish_resume) { 2725 - musb->need_finish_resume = 0; 2726 - schedule_delayed_work(&musb->finish_resume_work, 2727 - msecs_to_jiffies(USB_RESUME_TIMEOUT)); 2728 - } 2729 2713 2730 2714 /* 2731 2715 * The USB HUB code expects the device to be in RPM_ACTIVE once it came ··· 2771 2765 return 0; 2772 2766 2773 2767 musb_restore_context(musb); 2774 - 2775 - if (musb->need_finish_resume) { 2776 - musb->need_finish_resume = 0; 2777 - schedule_delayed_work(&musb->finish_resume_work, 2778 - msecs_to_jiffies(USB_RESUME_TIMEOUT)); 2779 - } 2780 2768 
2781 2769 spin_lock_irqsave(&musb->lock, flags); 2782 2770 error = musb_run_resume_work(musb);
-1
drivers/usb/musb/musb_core.h
··· 410 410 411 411 /* is_suspended means USB B_PERIPHERAL suspend */ 412 412 unsigned is_suspended:1; 413 - unsigned need_finish_resume :1; 414 413 415 414 /* may_wakeup means remote wakeup is enabled */ 416 415 unsigned may_wakeup:1;
+1
drivers/usb/serial/option.c
··· 2007 2007 { USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_WMD200, 0xff, 0xff, 0xff) }, 2008 2008 { USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_6802, 0xff, 0xff, 0xff) }, 2009 2009 { USB_DEVICE_AND_INTERFACE_INFO(WETELECOM_VENDOR_ID, WETELECOM_PRODUCT_WMD300, 0xff, 0xff, 0xff) }, 2010 + { USB_DEVICE_AND_INTERFACE_INFO(0x03f0, 0x421d, 0xff, 0xff, 0xff) }, /* HP lt2523 (Novatel E371) */ 2010 2011 { } /* Terminating entry */ 2011 2012 }; 2012 2013 MODULE_DEVICE_TABLE(usb, option_ids);
+1
drivers/usb/serial/pl2303.c
··· 49 49 { USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID) }, 50 50 { USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID_RSAQ5) }, 51 51 { USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID) }, 52 + { USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID2) }, 52 53 { USB_DEVICE(ATEN_VENDOR_ID2, ATEN_PRODUCT_ID) }, 53 54 { USB_DEVICE(ELCOM_VENDOR_ID, ELCOM_PRODUCT_ID) }, 54 55 { USB_DEVICE(ELCOM_VENDOR_ID, ELCOM_PRODUCT_ID_UCSGT) },
+1
drivers/usb/serial/pl2303.h
··· 27 27 #define ATEN_VENDOR_ID 0x0557 28 28 #define ATEN_VENDOR_ID2 0x0547 29 29 #define ATEN_PRODUCT_ID 0x2008 30 + #define ATEN_PRODUCT_ID2 0x2118 30 31 31 32 #define IODATA_VENDOR_ID 0x04bb 32 33 #define IODATA_PRODUCT_ID 0x0a03
+1
drivers/usb/serial/qcserial.c
··· 124 124 {USB_DEVICE(0x1410, 0xa021)}, /* Novatel Gobi 3000 Composite */ 125 125 {USB_DEVICE(0x413c, 0x8193)}, /* Dell Gobi 3000 QDL */ 126 126 {USB_DEVICE(0x413c, 0x8194)}, /* Dell Gobi 3000 Composite */ 127 + {USB_DEVICE(0x413c, 0x81a6)}, /* Dell DW5570 QDL (MC8805) */ 127 128 {USB_DEVICE(0x1199, 0x68a4)}, /* Sierra Wireless QDL */ 128 129 {USB_DEVICE(0x1199, 0x68a5)}, /* Sierra Wireless Modem */ 129 130 {USB_DEVICE(0x1199, 0x68a8)}, /* Sierra Wireless QDL */
+27 -6
drivers/vfio/vfio_iommu_spapr_tce.c
··· 1123 1123 mutex_lock(&container->lock); 1124 1124 1125 1125 ret = tce_iommu_create_default_window(container); 1126 - if (ret) 1127 - return ret; 1128 - 1129 - ret = tce_iommu_create_window(container, create.page_shift, 1130 - create.window_size, create.levels, 1131 - &create.start_addr); 1126 + if (!ret) 1127 + ret = tce_iommu_create_window(container, 1128 + create.page_shift, 1129 + create.window_size, create.levels, 1130 + &create.start_addr); 1132 1131 1133 1132 mutex_unlock(&container->lock); 1134 1133 ··· 1245 1246 static long tce_iommu_take_ownership_ddw(struct tce_container *container, 1246 1247 struct iommu_table_group *table_group) 1247 1248 { 1249 + long i, ret = 0; 1250 + 1248 1251 if (!table_group->ops->create_table || !table_group->ops->set_window || 1249 1252 !table_group->ops->release_ownership) { 1250 1253 WARN_ON_ONCE(1); ··· 1255 1254 1256 1255 table_group->ops->take_ownership(table_group); 1257 1256 1257 + /* Set all windows to the new group */ 1258 + for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i) { 1259 + struct iommu_table *tbl = container->tables[i]; 1260 + 1261 + if (!tbl) 1262 + continue; 1263 + 1264 + ret = table_group->ops->set_window(table_group, i, tbl); 1265 + if (ret) 1266 + goto release_exit; 1267 + } 1268 + 1258 1269 return 0; 1270 + 1271 + release_exit: 1272 + for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i) 1273 + table_group->ops->unset_window(table_group, i); 1274 + 1275 + table_group->ops->release_ownership(table_group); 1276 + 1277 + return ret; 1259 1278 } 1260 1279 1261 1280 static int tce_iommu_attach_group(void *iommu_data,
+4 -6
drivers/vhost/vhost.c
··· 130 130 131 131 static void vhost_init_is_le(struct vhost_virtqueue *vq) 132 132 { 133 - if (vhost_has_feature(vq, VIRTIO_F_VERSION_1)) 134 - vq->is_le = true; 133 + vq->is_le = vhost_has_feature(vq, VIRTIO_F_VERSION_1) 134 + || virtio_legacy_is_little_endian(); 135 135 } 136 136 #endif /* CONFIG_VHOST_CROSS_ENDIAN_LEGACY */ 137 137 138 138 static void vhost_reset_is_le(struct vhost_virtqueue *vq) 139 139 { 140 - vq->is_le = virtio_legacy_is_little_endian(); 140 + vhost_init_is_le(vq); 141 141 } 142 142 143 143 struct vhost_flush_struct { ··· 1714 1714 int r; 1715 1715 bool is_le = vq->is_le; 1716 1716 1717 - if (!vq->private_data) { 1718 - vhost_reset_is_le(vq); 1717 + if (!vq->private_data) 1719 1718 return 0; 1720 - } 1721 1719 1722 1720 vhost_init_is_le(vq); 1723 1721
-7
drivers/virtio/virtio_ring.c
··· 159 159 if (xen_domain()) 160 160 return true; 161 161 162 - /* 163 - * On ARM-based machines, the DMA ops will do the right thing, 164 - * so always use them with legacy devices. 165 - */ 166 - if (IS_ENABLED(CONFIG_ARM) || IS_ENABLED(CONFIG_ARM64)) 167 - return !virtio_has_feature(vdev, VIRTIO_F_VERSION_1); 168 - 169 162 return false; 170 163 } 171 164
+24 -15
fs/btrfs/compression.c
··· 1024 1024 unsigned long buf_offset; 1025 1025 unsigned long current_buf_start; 1026 1026 unsigned long start_byte; 1027 + unsigned long prev_start_byte; 1027 1028 unsigned long working_bytes = total_out - buf_start; 1028 1029 unsigned long bytes; 1029 1030 char *kaddr; ··· 1072 1071 if (!bio->bi_iter.bi_size) 1073 1072 return 0; 1074 1073 bvec = bio_iter_iovec(bio, bio->bi_iter); 1075 - 1074 + prev_start_byte = start_byte; 1076 1075 start_byte = page_offset(bvec.bv_page) - disk_start; 1077 1076 1078 1077 /* 1079 - * make sure our new page is covered by this 1080 - * working buffer 1078 + * We need to make sure we're only adjusting 1079 + * our offset into compression working buffer when 1080 + * we're switching pages. Otherwise we can incorrectly 1081 + * keep copying when we were actually done. 1081 1082 */ 1082 - if (total_out <= start_byte) 1083 - return 1; 1083 + if (start_byte != prev_start_byte) { 1084 + /* 1085 + * make sure our new page is covered by this 1086 + * working buffer 1087 + */ 1088 + if (total_out <= start_byte) 1089 + return 1; 1084 1090 1085 - /* 1086 - * the next page in the biovec might not be adjacent 1087 - * to the last page, but it might still be found 1088 - * inside this working buffer. bump our offset pointer 1089 - */ 1090 - if (total_out > start_byte && 1091 - current_buf_start < start_byte) { 1092 - buf_offset = start_byte - buf_start; 1093 - working_bytes = total_out - start_byte; 1094 - current_buf_start = buf_start + buf_offset; 1091 + /* 1092 + * the next page in the biovec might not be adjacent 1093 + * to the last page, but it might still be found 1094 + * inside this working buffer. bump our offset pointer 1095 + */ 1096 + if (total_out > start_byte && 1097 + current_buf_start < start_byte) { 1098 + buf_offset = start_byte - buf_start; 1099 + working_bytes = total_out - start_byte; 1100 + current_buf_start = buf_start + buf_offset; 1101 + } 1095 1102 } 1096 1103 } 1097 1104
+4 -2
fs/btrfs/ioctl.c
··· 5653 5653 #ifdef CONFIG_COMPAT 5654 5654 long btrfs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg) 5655 5655 { 5656 + /* 5657 + * These all access 32-bit values anyway so no further 5658 + * handling is necessary. 5659 + */ 5656 5660 switch (cmd) { 5657 5661 case FS_IOC32_GETFLAGS: 5658 5662 cmd = FS_IOC_GETFLAGS; ··· 5667 5663 case FS_IOC32_GETVERSION: 5668 5664 cmd = FS_IOC_GETVERSION; 5669 5665 break; 5670 - default: 5671 - return -ENOIOCTLCMD; 5672 5666 } 5673 5667 5674 5668 return btrfs_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
+5
fs/dax.c
··· 1031 1031 struct blk_dax_ctl dax = { 0 }; 1032 1032 ssize_t map_len; 1033 1033 1034 + if (fatal_signal_pending(current)) { 1035 + ret = -EINTR; 1036 + break; 1037 + } 1038 + 1034 1039 dax.sector = dax_iomap_sector(iomap, pos); 1035 1040 dax.size = (length + offset + PAGE_SIZE - 1) & PAGE_MASK; 1036 1041 map_len = dax_map_atomic(iomap->bdev, &dax);
+3
fs/iomap.c
··· 114 114 115 115 BUG_ON(pos + len > iomap->offset + iomap->length); 116 116 117 + if (fatal_signal_pending(current)) 118 + return -EINTR; 119 + 117 120 page = grab_cache_page_write_begin(inode->i_mapping, index, flags); 118 121 if (!page) 119 122 return -ENOMEM;
+60 -37
fs/nfsd/vfs.c
··· 332 332 } 333 333 } 334 334 335 + static __be32 336 + nfsd_get_write_access(struct svc_rqst *rqstp, struct svc_fh *fhp, 337 + struct iattr *iap) 338 + { 339 + struct inode *inode = d_inode(fhp->fh_dentry); 340 + int host_err; 341 + 342 + if (iap->ia_size < inode->i_size) { 343 + __be32 err; 344 + 345 + err = nfsd_permission(rqstp, fhp->fh_export, fhp->fh_dentry, 346 + NFSD_MAY_TRUNC | NFSD_MAY_OWNER_OVERRIDE); 347 + if (err) 348 + return err; 349 + } 350 + 351 + host_err = get_write_access(inode); 352 + if (host_err) 353 + goto out_nfserrno; 354 + 355 + host_err = locks_verify_truncate(inode, NULL, iap->ia_size); 356 + if (host_err) 357 + goto out_put_write_access; 358 + return 0; 359 + 360 + out_put_write_access: 361 + put_write_access(inode); 362 + out_nfserrno: 363 + return nfserrno(host_err); 364 + } 365 + 335 366 /* 336 367 * Set various file attributes. After this call fhp needs an fh_put. 337 368 */ ··· 377 346 __be32 err; 378 347 int host_err; 379 348 bool get_write_count; 349 + int size_change = 0; 380 350 381 351 if (iap->ia_valid & (ATTR_ATIME | ATTR_MTIME | ATTR_SIZE)) 382 352 accmode |= NFSD_MAY_WRITE|NFSD_MAY_OWNER_OVERRIDE; ··· 390 358 /* Get inode */ 391 359 err = fh_verify(rqstp, fhp, ftype, accmode); 392 360 if (err) 393 - return err; 361 + goto out; 394 362 if (get_write_count) { 395 363 host_err = fh_want_write(fhp); 396 364 if (host_err) 397 - goto out_host_err; 365 + return nfserrno(host_err); 398 366 } 399 367 400 368 dentry = fhp->fh_dentry; ··· 405 373 iap->ia_valid &= ~ATTR_MODE; 406 374 407 375 if (!iap->ia_valid) 408 - return 0; 376 + goto out; 409 377 410 378 nfsd_sanitize_attrs(inode, iap); 411 379 412 - if (check_guard && guardtime != inode->i_ctime.tv_sec) 413 - return nfserr_notsync; 414 - 415 380 /* 416 381 * The size case is special, it changes the file in addition to the 417 - * attributes, and file systems don't expect it to be mixed with 418 - * "random" attribute changes. 
We thus split out the size change 419 - * into a separate call for vfs_truncate, and do the rest as a 420 - * a separate setattr call. 382 + * attributes. 421 383 */ 422 384 if (iap->ia_valid & ATTR_SIZE) { 423 - struct path path = { 424 - .mnt = fhp->fh_export->ex_path.mnt, 425 - .dentry = dentry, 426 - }; 427 - bool implicit_mtime = false; 385 + err = nfsd_get_write_access(rqstp, fhp, iap); 386 + if (err) 387 + goto out; 388 + size_change = 1; 428 389 429 390 /* 430 - * vfs_truncate implicity updates the mtime IFF the file size 431 - * actually changes. Avoid the additional seattr call below if 432 - * the only other attribute that the client sends is the mtime. 391 + * RFC5661, Section 18.30.4: 392 + * Changing the size of a file with SETATTR indirectly 393 + * changes the time_modify and change attributes. 394 + * 395 + * (and similar for the older RFCs) 433 396 */ 434 - if (iap->ia_size != i_size_read(inode) && 435 - ((iap->ia_valid & ~(ATTR_SIZE | ATTR_MTIME)) == 0)) 436 - implicit_mtime = true; 437 - 438 - host_err = vfs_truncate(&path, iap->ia_size); 439 - if (host_err) 440 - goto out_host_err; 441 - 442 - iap->ia_valid &= ~ATTR_SIZE; 443 - if (implicit_mtime) 444 - iap->ia_valid &= ~ATTR_MTIME; 445 - if (!iap->ia_valid) 446 - goto done; 397 + if (iap->ia_size != i_size_read(inode)) 398 + iap->ia_valid |= ATTR_MTIME; 447 399 } 448 400 449 401 iap->ia_valid |= ATTR_CTIME; 450 402 403 + if (check_guard && guardtime != inode->i_ctime.tv_sec) { 404 + err = nfserr_notsync; 405 + goto out_put_write_access; 406 + } 407 + 451 408 fh_lock(fhp); 452 409 host_err = notify_change(dentry, iap, NULL); 453 410 fh_unlock(fhp); 454 - if (host_err) 455 - goto out_host_err; 411 + err = nfserrno(host_err); 456 412 457 - done: 458 - host_err = commit_metadata(fhp); 459 - out_host_err: 460 - return nfserrno(host_err); 413 + out_put_write_access: 414 + if (size_change) 415 + put_write_access(inode); 416 + if (!err) 417 + err = nfserrno(commit_metadata(fhp)); 418 + out: 419 + 
return err; 461 420 } 462 421 463 422 #if defined(CONFIG_NFSD_V4)
+2 -1
fs/proc/page.c
··· 173 173 u |= kpf_copy_bit(k, KPF_ACTIVE, PG_active); 174 174 u |= kpf_copy_bit(k, KPF_RECLAIM, PG_reclaim); 175 175 176 - u |= kpf_copy_bit(k, KPF_SWAPCACHE, PG_swapcache); 176 + if (PageSwapCache(page)) 177 + u |= 1 << KPF_SWAPCACHE; 177 178 u |= kpf_copy_bit(k, KPF_SWAPBACKED, PG_swapbacked); 178 179 179 180 u |= kpf_copy_bit(k, KPF_UNEVICTABLE, PG_unevictable);
+1 -1
fs/pstore/ram.c
··· 280 280 1, id, type, PSTORE_TYPE_PMSG, 0); 281 281 282 282 /* ftrace is last since it may want to dynamically allocate memory. */ 283 - if (!prz_ok(prz)) { 283 + if (!prz_ok(prz) && cxt->fprzs) { 284 284 if (!(cxt->flags & RAMOOPS_FLAG_FTRACE_PER_CPU)) { 285 285 prz = ramoops_get_next_prz(cxt->fprzs, 286 286 &cxt->ftrace_read_cnt, 1, id, type,
+6 -5
include/asm-generic/export.h
··· 9 9 #ifndef KSYM_ALIGN 10 10 #define KSYM_ALIGN 8 11 11 #endif 12 - #ifndef KCRC_ALIGN 13 - #define KCRC_ALIGN 8 14 - #endif 15 12 #else 16 13 #define __put .long 17 14 #ifndef KSYM_ALIGN 18 15 #define KSYM_ALIGN 4 19 16 #endif 17 + #endif 20 18 #ifndef KCRC_ALIGN 21 19 #define KCRC_ALIGN 4 22 - #endif 23 20 #endif 24 21 25 22 #ifdef CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX ··· 49 52 .section ___kcrctab\sec+\name,"a" 50 53 .balign KCRC_ALIGN 51 54 KSYM(__kcrctab_\name): 52 - __put KSYM(__crc_\name) 55 + #if defined(CONFIG_MODULE_REL_CRCS) 56 + .long KSYM(__crc_\name) - . 57 + #else 58 + .long KSYM(__crc_\name) 59 + #endif 53 60 .weak KSYM(__crc_\name) 54 61 .previous 55 62 #endif
+1
include/drm/drmP.h
··· 517 517 struct drm_minor *control; /**< Control node */ 518 518 struct drm_minor *primary; /**< Primary node */ 519 519 struct drm_minor *render; /**< Render node */ 520 + bool registered; 520 521 521 522 /* currently active master for this device. Protected by master_mutex */ 522 523 struct drm_master *master;
+15 -1
include/drm/drm_connector.h
··· 381 381 * core drm connector interfaces. Everything added from this callback 382 382 * should be unregistered in the early_unregister callback. 383 383 * 384 + * This is called while holding drm_connector->mutex. 385 + * 384 386 * Returns: 385 387 * 386 388 * 0 on success, or a negative error code on failure. ··· 397 395 * late_register(). It is called from drm_connector_unregister(), 398 396 * early in the driver unload sequence to disable userspace access 399 397 * before data structures are torndown. 398 + * 399 + * This is called while holding drm_connector->mutex. 400 400 */ 401 401 void (*early_unregister)(struct drm_connector *connector); 402 402 ··· 563 559 * @interlace_allowed: can this connector handle interlaced modes? 564 560 * @doublescan_allowed: can this connector handle doublescan? 565 561 * @stereo_allowed: can this connector handle stereo modes? 566 - * @registered: is this connector exposed (registered) with userspace? 567 562 * @modes: modes available on this connector (from fill_modes() + user) 568 563 * @status: one of the drm_connector_status enums (connected, not, or unknown) 569 564 * @probed_modes: list of modes derived directly from the display ··· 611 608 char *name; 612 609 613 610 /** 611 + * @mutex: Lock for general connector state, but currently only protects 612 + * @registered. Most of the connector state is still protected by the 613 + * mutex in &drm_mode_config. 614 + */ 615 + struct mutex mutex; 616 + 617 + /** 614 618 * @index: Compacted connector index, which matches the position inside 615 619 * the mode_config.list for drivers not supporting hot-add/removing. Can 616 620 * be used as an array index. It is invariant over the lifetime of the ··· 630 620 bool interlace_allowed; 631 621 bool doublescan_allowed; 632 622 bool stereo_allowed; 623 + /** 624 + * @registered: Is this connector exposed (registered) with userspace? 625 + * Protected by @mutex. 
626 + */ 633 627 bool registered; 634 628 struct list_head modes; /* list of modes on this connector */ 635 629
+1 -3
include/linux/buffer_head.h
··· 243 243 { 244 244 if (err == 0) 245 245 return VM_FAULT_LOCKED; 246 - if (err == -EFAULT) 246 + if (err == -EFAULT || err == -EAGAIN) 247 247 return VM_FAULT_NOPAGE; 248 248 if (err == -ENOMEM) 249 249 return VM_FAULT_OOM; 250 - if (err == -EAGAIN) 251 - return VM_FAULT_RETRY; 252 250 /* -ENOSPC, -EDQUOT, -EIO ... */ 253 251 return VM_FAULT_SIGBUS; 254 252 }
+4 -4
include/linux/cpumask.h
··· 560 560 static inline int cpumask_parse_user(const char __user *buf, int len, 561 561 struct cpumask *dstp) 562 562 { 563 - return bitmap_parse_user(buf, len, cpumask_bits(dstp), nr_cpu_ids); 563 + return bitmap_parse_user(buf, len, cpumask_bits(dstp), nr_cpumask_bits); 564 564 } 565 565 566 566 /** ··· 575 575 struct cpumask *dstp) 576 576 { 577 577 return bitmap_parselist_user(buf, len, cpumask_bits(dstp), 578 - nr_cpu_ids); 578 + nr_cpumask_bits); 579 579 } 580 580 581 581 /** ··· 590 590 char *nl = strchr(buf, '\n'); 591 591 unsigned int len = nl ? (unsigned int)(nl - buf) : strlen(buf); 592 592 593 - return bitmap_parse(buf, len, cpumask_bits(dstp), nr_cpu_ids); 593 + return bitmap_parse(buf, len, cpumask_bits(dstp), nr_cpumask_bits); 594 594 } 595 595 596 596 /** ··· 602 602 */ 603 603 static inline int cpulist_parse(const char *buf, struct cpumask *dstp) 604 604 { 605 - return bitmap_parselist(buf, cpumask_bits(dstp), nr_cpu_ids); 605 + return bitmap_parselist(buf, cpumask_bits(dstp), nr_cpumask_bits); 606 606 } 607 607 608 608 /**
+12 -5
include/linux/export.h
··· 43 43 #ifdef CONFIG_MODVERSIONS 44 44 /* Mark the CRC weak since genksyms apparently decides not to 45 45 * generate a checksums for some symbols */ 46 + #if defined(CONFIG_MODULE_REL_CRCS) 46 47 #define __CRC_SYMBOL(sym, sec) \ 47 - extern __visible void *__crc_##sym __attribute__((weak)); \ 48 - static const unsigned long __kcrctab_##sym \ 49 - __used \ 50 - __attribute__((section("___kcrctab" sec "+" #sym), used)) \ 51 - = (unsigned long) &__crc_##sym; 48 + asm(" .section \"___kcrctab" sec "+" #sym "\", \"a\" \n" \ 49 + " .weak " VMLINUX_SYMBOL_STR(__crc_##sym) " \n" \ 50 + " .long " VMLINUX_SYMBOL_STR(__crc_##sym) " - . \n" \ 51 + " .previous \n"); 52 + #else 53 + #define __CRC_SYMBOL(sym, sec) \ 54 + asm(" .section \"___kcrctab" sec "+" #sym "\", \"a\" \n" \ 55 + " .weak " VMLINUX_SYMBOL_STR(__crc_##sym) " \n" \ 56 + " .long " VMLINUX_SYMBOL_STR(__crc_##sym) " \n" \ 57 + " .previous \n"); 58 + #endif 52 59 #else 53 60 #define __CRC_SYMBOL(sym, sec) 54 61 #endif
+30 -2
include/linux/hyperv.h
··· 128 128 u32 ring_data_startoffset; 129 129 u32 priv_write_index; 130 130 u32 priv_read_index; 131 + u32 cached_read_index; 131 132 }; 132 133 133 134 /* ··· 181 180 return write; 182 181 } 183 182 183 + static inline u32 hv_get_cached_bytes_to_write( 184 + const struct hv_ring_buffer_info *rbi) 185 + { 186 + u32 read_loc, write_loc, dsize, write; 187 + 188 + dsize = rbi->ring_datasize; 189 + read_loc = rbi->cached_read_index; 190 + write_loc = rbi->ring_buffer->write_index; 191 + 192 + write = write_loc >= read_loc ? dsize - (write_loc - read_loc) : 193 + read_loc - write_loc; 194 + return write; 195 + } 184 196 /* 185 197 * VMBUS version is 32 bit entity broken up into 186 198 * two 16 bit quantities: major_number. minor_number. ··· 1502 1488 1503 1489 static inline void hv_signal_on_read(struct vmbus_channel *channel) 1504 1490 { 1505 - u32 cur_write_sz; 1491 + u32 cur_write_sz, cached_write_sz; 1506 1492 u32 pending_sz; 1507 1493 struct hv_ring_buffer_info *rbi = &channel->inbound; 1508 1494 ··· 1526 1512 1527 1513 cur_write_sz = hv_get_bytes_to_write(rbi); 1528 1514 1529 - if (cur_write_sz >= pending_sz) 1515 + if (cur_write_sz < pending_sz) 1516 + return; 1517 + 1518 + cached_write_sz = hv_get_cached_bytes_to_write(rbi); 1519 + if (cached_write_sz < pending_sz) 1530 1520 vmbus_setevent(channel); 1531 1521 1532 1522 return; 1523 + } 1524 + 1525 + static inline void 1526 + init_cached_read_index(struct vmbus_channel *channel) 1527 + { 1528 + struct hv_ring_buffer_info *rbi = &channel->inbound; 1529 + 1530 + rbi->cached_read_index = rbi->ring_buffer->read_index; 1533 1531 } 1534 1532 1535 1533 /* ··· 1594 1568 /* 1595 1569 * This call commits the read index and potentially signals the host. 1596 1570 * Here is the pattern for using the "in-place" consumption APIs: 1571 + * 1572 + * init_cached_read_index(); 1597 1573 * 1598 1574 * while (get_next_pkt_raw() { 1599 1575 * process the packet "in-place";
+17
include/linux/irq.h
··· 184 184 * 185 185 * IRQD_TRIGGER_MASK - Mask for the trigger type bits 186 186 * IRQD_SETAFFINITY_PENDING - Affinity setting is pending 187 + * IRQD_ACTIVATED - Interrupt has already been activated 187 188 * IRQD_NO_BALANCING - Balancing disabled for this IRQ 188 189 * IRQD_PER_CPU - Interrupt is per cpu 189 190 * IRQD_AFFINITY_SET - Interrupt affinity was set ··· 203 202 enum { 204 203 IRQD_TRIGGER_MASK = 0xf, 205 204 IRQD_SETAFFINITY_PENDING = (1 << 8), 205 + IRQD_ACTIVATED = (1 << 9), 206 206 IRQD_NO_BALANCING = (1 << 10), 207 207 IRQD_PER_CPU = (1 << 11), 208 208 IRQD_AFFINITY_SET = (1 << 12), ··· 312 310 static inline bool irqd_affinity_is_managed(struct irq_data *d) 313 311 { 314 312 return __irqd_to_state(d) & IRQD_AFFINITY_MANAGED; 313 + } 314 + 315 + static inline bool irqd_is_activated(struct irq_data *d) 316 + { 317 + return __irqd_to_state(d) & IRQD_ACTIVATED; 318 + } 319 + 320 + static inline void irqd_set_activated(struct irq_data *d) 321 + { 322 + __irqd_to_state(d) |= IRQD_ACTIVATED; 323 + } 324 + 325 + static inline void irqd_clr_activated(struct irq_data *d) 326 + { 327 + __irqd_to_state(d) &= ~IRQD_ACTIVATED; 315 328 } 316 329 317 330 #undef __irqd_to_state
+12 -1
include/linux/log2.h
··· 203 203 * ... and so on. 204 204 */ 205 205 206 - #define order_base_2(n) ilog2(roundup_pow_of_two(n)) 206 + static inline __attribute_const__ 207 + int __order_base_2(unsigned long n) 208 + { 209 + return n > 1 ? ilog2(n - 1) + 1 : 0; 210 + } 207 211 212 + #define order_base_2(n) \ 213 + ( \ 214 + __builtin_constant_p(n) ? ( \ 215 + ((n) == 0 || (n) == 1) ? 0 : \ 216 + ilog2((n) - 1) + 1) : \ 217 + __order_base_2(n) \ 218 + ) 208 219 #endif /* _LINUX_LOG2_H */
+2 -1
include/linux/memory_hotplug.h
··· 85 85 extern int add_one_highpage(struct page *page, int pfn, int bad_ppro); 86 86 /* VM interface that may be used by firmware interface */ 87 87 extern int online_pages(unsigned long, unsigned long, int); 88 - extern int test_pages_in_a_zone(unsigned long, unsigned long); 88 + extern int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn, 89 + unsigned long *valid_start, unsigned long *valid_end); 89 90 extern void __offline_isolated_pages(unsigned long, unsigned long); 90 91 91 92 typedef void (*online_page_callback_t)(struct page *page);
+7 -7
include/linux/module.h
··· 346 346 347 347 /* Exported symbols */ 348 348 const struct kernel_symbol *syms; 349 - const unsigned long *crcs; 349 + const s32 *crcs; 350 350 unsigned int num_syms; 351 351 352 352 /* Kernel parameters. */ ··· 359 359 /* GPL-only exported symbols. */ 360 360 unsigned int num_gpl_syms; 361 361 const struct kernel_symbol *gpl_syms; 362 - const unsigned long *gpl_crcs; 362 + const s32 *gpl_crcs; 363 363 364 364 #ifdef CONFIG_UNUSED_SYMBOLS 365 365 /* unused exported symbols. */ 366 366 const struct kernel_symbol *unused_syms; 367 - const unsigned long *unused_crcs; 367 + const s32 *unused_crcs; 368 368 unsigned int num_unused_syms; 369 369 370 370 /* GPL-only, unused exported symbols. */ 371 371 unsigned int num_unused_gpl_syms; 372 372 const struct kernel_symbol *unused_gpl_syms; 373 - const unsigned long *unused_gpl_crcs; 373 + const s32 *unused_gpl_crcs; 374 374 #endif 375 375 376 376 #ifdef CONFIG_MODULE_SIG ··· 382 382 383 383 /* symbols that will be GPL-only in the near future. */ 384 384 const struct kernel_symbol *gpl_future_syms; 385 - const unsigned long *gpl_future_crcs; 385 + const s32 *gpl_future_crcs; 386 386 unsigned int num_gpl_future_syms; 387 387 388 388 /* Exception table */ ··· 523 523 524 524 struct symsearch { 525 525 const struct kernel_symbol *start, *stop; 526 - const unsigned long *crcs; 526 + const s32 *crcs; 527 527 enum { 528 528 NOT_GPL_ONLY, 529 529 GPL_ONLY, ··· 539 539 */ 540 540 const struct kernel_symbol *find_symbol(const char *name, 541 541 struct module **owner, 542 - const unsigned long **crc, 542 + const s32 **crc, 543 543 bool gplok, 544 544 bool warn); 545 545
+4
include/linux/netdevice.h
··· 1511 1511 * @max_mtu: Interface Maximum MTU value 1512 1512 * @type: Interface hardware type 1513 1513 * @hard_header_len: Maximum hardware header length. 1514 + * @min_header_len: Minimum hardware header length 1514 1515 * 1515 1516 * @needed_headroom: Extra headroom the hardware may need, but not in all 1516 1517 * cases can this be guaranteed ··· 1729 1728 unsigned int max_mtu; 1730 1729 unsigned short type; 1731 1730 unsigned short hard_header_len; 1731 + unsigned short min_header_len; 1732 1732 1733 1733 unsigned short needed_headroom; 1734 1734 unsigned short needed_tailroom; ··· 2696 2694 { 2697 2695 if (likely(len >= dev->hard_header_len)) 2698 2696 return true; 2697 + if (len < dev->min_header_len) 2698 + return false; 2699 2699 2700 2700 if (capable(CAP_SYS_RAWIO)) { 2701 2701 memset(ll_header + len, 0, dev->hard_header_len - len);
+4
include/net/cipso_ipv4.h
··· 309 309 } 310 310 311 311 for (opt_iter = 6; opt_iter < opt_len;) { 312 + if (opt_iter + 1 == opt_len) { 313 + err_offset = opt_iter; 314 + goto out; 315 + } 312 316 tag_len = opt[opt_iter + 1]; 313 317 if ((tag_len == 0) || (tag_len > (opt_len - opt_iter))) { 314 318 err_offset = opt_iter + 1;
+4 -1
include/net/lwtunnel.h
··· 178 178 } 179 179 static inline int lwtunnel_valid_encap_type_attr(struct nlattr *attr, int len) 180 180 { 181 - return -EOPNOTSUPP; 181 + /* return 0 since we are not walking attr looking for 182 + * RTA_ENCAP_TYPE attribute on nexthops. 183 + */ 184 + return 0; 182 185 } 183 186 184 187 static inline int lwtunnel_build_state(struct net_device *dev, u16 encap_type,
+3 -1
include/net/sock.h
··· 2006 2006 void sk_stop_timer(struct sock *sk, struct timer_list *timer); 2007 2007 2008 2008 int __sk_queue_drop_skb(struct sock *sk, struct sk_buff *skb, 2009 - unsigned int flags); 2009 + unsigned int flags, 2010 + void (*destructor)(struct sock *sk, 2011 + struct sk_buff *skb)); 2010 2012 int __sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb); 2011 2013 int sock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb); 2012 2014
+1
include/target/target_core_base.h
··· 538 538 char initiatorname[TRANSPORT_IQN_LEN]; 539 539 /* Used to signal demo mode created ACL, disabled by default */ 540 540 bool dynamic_node_acl; 541 + bool dynamic_stop; 541 542 u32 queue_depth; 542 543 u32 acl_index; 543 544 enum target_prot_type saved_prot_type;
+3 -6
include/uapi/linux/seg6.h
··· 23 23 __u8 type; 24 24 __u8 segments_left; 25 25 __u8 first_segment; 26 - __u8 flag_1; 27 - __u8 flag_2; 28 - __u8 reserved; 26 + __u8 flags; 27 + __u16 reserved; 29 28 30 29 struct in6_addr segments[0]; 31 30 }; 32 31 33 - #define SR6_FLAG1_CLEANUP (1 << 7) 34 32 #define SR6_FLAG1_PROTECTED (1 << 6) 35 33 #define SR6_FLAG1_OAM (1 << 5) 36 34 #define SR6_FLAG1_ALERT (1 << 4) ··· 40 42 #define SR6_TLV_PADDING 4 41 43 #define SR6_TLV_HMAC 5 42 44 43 - #define sr_has_cleanup(srh) ((srh)->flag_1 & SR6_FLAG1_CLEANUP) 44 - #define sr_has_hmac(srh) ((srh)->flag_1 & SR6_FLAG1_HMAC) 45 + #define sr_has_hmac(srh) ((srh)->flags & SR6_FLAG1_HMAC) 45 46 46 47 struct sr6_tlv { 47 48 __u8 type;
+8 -3
include/uapi/rdma/ib_user_verbs.h
··· 37 37 #define IB_USER_VERBS_H 38 38 39 39 #include <linux/types.h> 40 - #include <rdma/ib_verbs.h> 41 40 42 41 /* 43 42 * Increment this value if any changes that break userspace ABI ··· 547 548 }; 548 549 549 550 enum { 550 - IB_USER_LEGACY_LAST_QP_ATTR_MASK = IB_QP_DEST_QPN 551 + /* 552 + * This value is equal to IB_QP_DEST_QPN. 553 + */ 554 + IB_USER_LEGACY_LAST_QP_ATTR_MASK = 1ULL << 20, 551 555 }; 552 556 553 557 enum { 554 - IB_USER_LAST_QP_ATTR_MASK = IB_QP_RATE_LIMIT 558 + /* 559 + * This value is equal to IB_QP_RATE_LIMIT. 560 + */ 561 + IB_USER_LAST_QP_ATTR_MASK = 1ULL << 25, 555 562 }; 556 563 557 564 struct ib_uverbs_ex_create_qp {
+4
init/Kconfig
··· 1987 1987 make them incompatible with the kernel you are running. If 1988 1988 unsure, say N. 1989 1989 1990 + config MODULE_REL_CRCS 1991 + bool 1992 + depends on MODVERSIONS 1993 + 1990 1994 config MODULE_SRCVERSION_ALL 1991 1995 bool "Source checksum for all modules" 1992 1996 help
+15 -10
kernel/events/core.c
··· 3538 3538 int ret; 3539 3539 }; 3540 3540 3541 - static int find_cpu_to_read(struct perf_event *event, int local_cpu) 3541 + static int __perf_event_read_cpu(struct perf_event *event, int event_cpu) 3542 3542 { 3543 - int event_cpu = event->oncpu; 3544 3543 u16 local_pkg, event_pkg; 3545 3544 3546 3545 if (event->group_caps & PERF_EV_CAP_READ_ACTIVE_PKG) { 3547 - event_pkg = topology_physical_package_id(event_cpu); 3548 - local_pkg = topology_physical_package_id(local_cpu); 3546 + int local_cpu = smp_processor_id(); 3547 + 3548 + event_pkg = topology_physical_package_id(event_cpu); 3549 + local_pkg = topology_physical_package_id(local_cpu); 3549 3550 3550 3551 if (event_pkg == local_pkg) 3551 3552 return local_cpu; ··· 3676 3675 3677 3676 static int perf_event_read(struct perf_event *event, bool group) 3678 3677 { 3679 - int ret = 0, cpu_to_read, local_cpu; 3678 + int event_cpu, ret = 0; 3680 3679 3681 3680 /* 3682 3681 * If event is enabled and currently active on a CPU, update the ··· 3689 3688 .ret = 0, 3690 3689 }; 3691 3690 3692 - local_cpu = get_cpu(); 3693 - cpu_to_read = find_cpu_to_read(event, local_cpu); 3694 - put_cpu(); 3691 + event_cpu = READ_ONCE(event->oncpu); 3692 + if ((unsigned)event_cpu >= nr_cpu_ids) 3693 + return 0; 3694 + 3695 + preempt_disable(); 3696 + event_cpu = __perf_event_read_cpu(event, event_cpu); 3695 3697 3696 3698 /* 3697 3699 * Purposely ignore the smp_call_function_single() return 3698 3700 * value. 3699 3701 * 3700 - * If event->oncpu isn't a valid CPU it means the event got 3702 + * If event_cpu isn't a valid CPU it means the event got 3701 3703 * scheduled out and that will have updated the event count. 3702 3704 * 3703 3705 * Therefore, either way, we'll have an up-to-date event count 3704 3706 * after this. 3705 3707 */ 3706 - (void)smp_call_function_single(cpu_to_read, __perf_event_read, &data, 1); 3708 + (void)smp_call_function_single(event_cpu, __perf_event_read, &data, 1); 3709 + preempt_enable(); 3707 3710 ret = data.ret; 3708 3711 } else if (event->state == PERF_EVENT_STATE_INACTIVE) { 3709 3712 struct perf_event_context *ctx = event->ctx;
+30 -14
kernel/irq/irqdomain.c
··· 1346 1346 } 1347 1347 EXPORT_SYMBOL_GPL(irq_domain_free_irqs_parent); 1348 1348 1349 + static void __irq_domain_activate_irq(struct irq_data *irq_data) 1350 + { 1351 + if (irq_data && irq_data->domain) { 1352 + struct irq_domain *domain = irq_data->domain; 1353 + 1354 + if (irq_data->parent_data) 1355 + __irq_domain_activate_irq(irq_data->parent_data); 1356 + if (domain->ops->activate) 1357 + domain->ops->activate(domain, irq_data); 1358 + } 1359 + } 1360 + 1361 + static void __irq_domain_deactivate_irq(struct irq_data *irq_data) 1362 + { 1363 + if (irq_data && irq_data->domain) { 1364 + struct irq_domain *domain = irq_data->domain; 1365 + 1366 + if (domain->ops->deactivate) 1367 + domain->ops->deactivate(domain, irq_data); 1368 + if (irq_data->parent_data) 1369 + __irq_domain_deactivate_irq(irq_data->parent_data); 1370 + } 1371 + } 1372 + 1349 1373 /** 1350 1374 * irq_domain_activate_irq - Call domain_ops->activate recursively to activate 1351 1375 * interrupt ··· 1380 1356 */ 1381 1357 void irq_domain_activate_irq(struct irq_data *irq_data) 1382 1358 { 1383 - if (irq_data && irq_data->domain) { 1384 - struct irq_domain *domain = irq_data->domain; 1385 - 1386 - if (irq_data->parent_data) 1387 - irq_domain_activate_irq(irq_data->parent_data); 1388 - if (domain->ops->activate) 1389 - domain->ops->activate(domain, irq_data); 1359 + if (!irqd_is_activated(irq_data)) { 1360 + __irq_domain_activate_irq(irq_data); 1361 + irqd_set_activated(irq_data); 1390 1362 } 1391 1363 } 1392 1364 ··· 1396 1376 */ 1397 1377 void irq_domain_deactivate_irq(struct irq_data *irq_data) 1398 1378 { 1399 - if (irq_data && irq_data->domain) { 1400 - struct irq_domain *domain = irq_data->domain; 1401 - 1402 - if (domain->ops->deactivate) 1403 - domain->ops->deactivate(domain, irq_data); 1404 - if (irq_data->parent_data) 1405 - irq_domain_deactivate_irq(irq_data->parent_data); 1379 + if (irqd_is_activated(irq_data)) { 1380 + __irq_domain_deactivate_irq(irq_data); 1381 + irqd_clr_activated(irq_data); 1406 1382 } 1407 1383 } 1408 1384
+25 -28
kernel/module.c
··· 389 389 extern const struct kernel_symbol __stop___ksymtab_gpl[]; 390 390 extern const struct kernel_symbol __start___ksymtab_gpl_future[]; 391 391 extern const struct kernel_symbol __stop___ksymtab_gpl_future[]; 392 - extern const unsigned long __start___kcrctab[]; 393 - extern const unsigned long __start___kcrctab_gpl[]; 394 - extern const unsigned long __start___kcrctab_gpl_future[]; 392 + extern const s32 __start___kcrctab[]; 393 + extern const s32 __start___kcrctab_gpl[]; 394 + extern const s32 __start___kcrctab_gpl_future[]; 395 395 #ifdef CONFIG_UNUSED_SYMBOLS 396 396 extern const struct kernel_symbol __start___ksymtab_unused[]; 397 397 extern const struct kernel_symbol __stop___ksymtab_unused[]; 398 398 extern const struct kernel_symbol __start___ksymtab_unused_gpl[]; 399 399 extern const struct kernel_symbol __stop___ksymtab_unused_gpl[]; 400 - extern const unsigned long __start___kcrctab_unused[]; 401 - extern const unsigned long __start___kcrctab_unused_gpl[]; 400 + extern const s32 __start___kcrctab_unused[]; 401 + extern const s32 __start___kcrctab_unused_gpl[]; 402 402 #endif 403 403 404 404 #ifndef CONFIG_MODVERSIONS ··· 497 497 498 498 /* Output */ 499 499 struct module *owner; 500 - const unsigned long *crc; 500 + const s32 *crc; 501 501 const struct kernel_symbol *sym; 502 502 }; 503 503 ··· 563 563 * (optional) module which owns it. Needs preempt disabled or module_mutex. */ 564 564 const struct kernel_symbol *find_symbol(const char *name, 565 565 struct module **owner, 566 - const unsigned long **crc, 566 + const s32 **crc, 567 567 bool gplok, 568 568 bool warn) 569 569 { ··· 1249 1249 } 1250 1250 1251 1251 #ifdef CONFIG_MODVERSIONS 1252 - /* If the arch applies (non-zero) relocations to kernel kcrctab, unapply it. */ 1253 - static unsigned long maybe_relocated(unsigned long crc, 1254 - const struct module *crc_owner) 1252 + 1253 + static u32 resolve_rel_crc(const s32 *crc) 1255 1254 { 1256 - #ifdef ARCH_RELOCATES_KCRCTAB 1257 - if (crc_owner == NULL) 1258 - return crc - (unsigned long)reloc_start; 1259 - #endif 1260 - return crc; 1255 + return *(u32 *)((void *)crc + *crc); 1261 1256 } 1262 1257 1263 1258 static int check_version(Elf_Shdr *sechdrs, 1264 1259 unsigned int versindex, 1265 1260 const char *symname, 1266 1261 struct module *mod, 1267 - const unsigned long *crc, 1268 - const struct module *crc_owner) 1262 + const s32 *crc) 1269 1263 { 1270 1264 unsigned int i, num_versions; 1271 1265 struct modversion_info *versions; ··· 1277 1283 / sizeof(struct modversion_info); 1278 1284 1279 1285 for (i = 0; i < num_versions; i++) { 1286 + u32 crcval; 1287 + 1280 1288 if (strcmp(versions[i].name, symname) != 0) 1281 1289 continue; 1282 1290 1283 - if (versions[i].crc == maybe_relocated(*crc, crc_owner)) 1291 + if (IS_ENABLED(CONFIG_MODULE_REL_CRCS)) 1292 + crcval = resolve_rel_crc(crc); 1293 + else 1294 + crcval = *crc; 1295 + if (versions[i].crc == crcval) 1284 1296 return 1; 1285 - pr_debug("Found checksum %lX vs module %lX\n", 1286 - maybe_relocated(*crc, crc_owner), versions[i].crc); 1297 + pr_debug("Found checksum %X vs module %lX\n", 1298 + crcval, versions[i].crc); 1287 1299 goto bad_version; 1288 1300 } 1289 1301 ··· 1307 1307 unsigned int versindex, 1308 1308 struct module *mod) 1309 1309 { 1310 - const unsigned long *crc; 1310 + const s32 *crc; 1311 1311 1312 1312 /* 1313 1313 * Since this should be found in kernel (which can't be removed), no ··· 1321 1321 } 1322 1322 preempt_enable(); 1323 1323 return check_version(sechdrs, versindex, 1324 - VMLINUX_SYMBOL_STR(module_layout), mod, crc, 1325 - NULL); 1324 + VMLINUX_SYMBOL_STR(module_layout), mod, crc); 1326 1325 } 1327 1326 1328 1327 /* First part is kernel version, which we ignore if module has crcs. */ ··· 1339 1340 unsigned int versindex, 1340 1341 const char *symname, 1341 1342 struct module *mod, 1342 - const unsigned long *crc, 1343 - const struct module *crc_owner) 1343 + const s32 *crc) 1344 1344 { 1345 1345 return 1; 1346 1346 } ··· 1366 1368 { 1367 1369 struct module *owner; 1368 1370 const struct kernel_symbol *sym; 1369 - const unsigned long *crc; 1371 + const s32 *crc; 1370 1372 int err; 1371 1373 1372 1374 /* ··· 1381 1383 if (!sym) 1382 1384 goto unlock; 1383 1385 1384 - if (!check_version(info->sechdrs, info->index.vers, name, mod, crc, 1385 - owner)) { 1386 + if (!check_version(info->sechdrs, info->index.vers, name, mod, crc)) { 1386 1387 sym = ERR_PTR(-EINVAL); 1387 1388 goto getname; 1388 1389 }
+4 -8
kernel/stacktrace.c
··· 18 18 if (WARN_ON(!trace->entries)) 19 19 return; 20 20 21 - for (i = 0; i < trace->nr_entries; i++) { 22 - printk("%*c", 1 + spaces, ' '); 23 - print_ip_sym(trace->entries[i]); 24 - } 21 + for (i = 0; i < trace->nr_entries; i++) 22 + printk("%*c%pS\n", 1 + spaces, ' ', (void *)trace->entries[i]); 25 23 } 26 24 EXPORT_SYMBOL_GPL(print_stack_trace); 27 25 ··· 27 29 struct stack_trace *trace, int spaces) 28 30 { 29 31 int i; 30 - unsigned long ip; 31 32 int generated; 32 33 int total = 0; 33 34 ··· 34 37 return 0; 35 38 36 39 for (i = 0; i < trace->nr_entries; i++) { 37 - ip = trace->entries[i]; 38 - generated = snprintf(buf, size, "%*c[<%p>] %pS\n", 39 - 1 + spaces, ' ', (void *) ip, (void *) ip); 40 + generated = snprintf(buf, size, "%*c%pS\n", 1 + spaces, ' ', 41 + (void *)trace->entries[i]); 40 42 41 43 total += generated; 42 44
+5
kernel/time/tick-sched.c
··· 725 725 */ 726 726 if (delta == 0) { 727 727 tick_nohz_restart(ts, now); 728 + /* 729 + * Make sure next tick stop doesn't get fooled by past 730 + * clock deadline 731 + */ 732 + ts->next_tick = 0; 728 733 goto out; 729 734 } 730 735 }
+1 -1
kernel/trace/trace_kprobe.c
··· 1372 1372 return a1 + a2 + a3 + a4 + a5 + a6; 1373 1373 } 1374 1374 1375 - static struct __init trace_event_file * 1375 + static __init struct trace_event_file * 1376 1376 find_trace_probe_file(struct trace_kprobe *tk, struct trace_array *tr) 1377 1377 { 1378 1378 struct trace_event_file *file;
+1 -2
kernel/ucount.c
··· 227 227 * properly. 228 228 */ 229 229 user_header = register_sysctl("user", empty); 230 + kmemleak_ignore(user_header); 230 231 BUG_ON(!user_header); 231 232 BUG_ON(!setup_userns_sysctls(&init_user_ns)); 232 233 #endif 233 234 return 0; 234 235 } 235 236 subsys_initcall(user_namespace_sysctl_init); 236 - 237 -
+5
mm/filemap.c
··· 1791 1791 1792 1792 cond_resched(); 1793 1793 find_page: 1794 + if (fatal_signal_pending(current)) { 1795 + error = -EINTR; 1796 + goto out; 1797 + } 1798 + 1794 1799 page = find_get_page(mapping, index); 1795 1800 if (!page) { 1796 1801 page_cache_sync_readahead(mapping,
+3
mm/kasan/report.c
··· 13 13 * 14 14 */ 15 15 16 + #include <linux/ftrace.h> 16 17 #include <linux/kernel.h> 17 18 #include <linux/mm.h> 18 19 #include <linux/printk.h> ··· 300 299 301 300 if (likely(!kasan_report_enabled())) 302 301 return; 302 + 303 + disable_trace_on_warning(); 303 304 304 305 info.access_addr = (void *)addr; 305 306 info.access_size = size;
+21 -7
mm/memory_hotplug.c
··· 1483 1483 } 1484 1484 1485 1485 /* 1486 - * Confirm all pages in a range [start, end) is belongs to the same zone. 1486 + * Confirm all pages in a range [start, end) belong to the same zone. 1487 + * When true, return its valid [start, end). 1487 1488 */ 1488 - int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn) 1489 + int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn, 1490 + unsigned long *valid_start, unsigned long *valid_end) 1489 1491 { 1490 1492 unsigned long pfn, sec_end_pfn; 1493 + unsigned long start, end; 1491 1494 struct zone *zone = NULL; 1492 1495 struct page *page; 1493 1496 int i; 1494 - for (pfn = start_pfn, sec_end_pfn = SECTION_ALIGN_UP(start_pfn); 1497 + for (pfn = start_pfn, sec_end_pfn = SECTION_ALIGN_UP(start_pfn + 1); 1495 1498 pfn < end_pfn; 1496 - pfn = sec_end_pfn + 1, sec_end_pfn += PAGES_PER_SECTION) { 1499 + pfn = sec_end_pfn, sec_end_pfn += PAGES_PER_SECTION) { 1497 1500 /* Make sure the memory section is present first */ 1498 1501 if (!present_section_nr(pfn_to_section_nr(pfn))) 1499 1502 continue; ··· 1512 1509 page = pfn_to_page(pfn + i); 1513 1510 if (zone && page_zone(page) != zone) 1514 1511 return 0; 1512 + if (!zone) 1513 + start = pfn + i; 1515 1514 zone = page_zone(page); 1515 + end = pfn + MAX_ORDER_NR_PAGES; 1516 1516 } 1517 1517 } 1518 - return 1; 1518 + 1519 + if (zone) { 1520 + *valid_start = start; 1521 + *valid_end = end; 1522 + return 1; 1523 + } else { 1524 + return 0; 1525 + } 1519 1526 } 1520 1527 1521 1528 /* ··· 1852 1839 long offlined_pages; 1853 1840 int ret, drain, retry_max, node; 1854 1841 unsigned long flags; 1842 + unsigned long valid_start, valid_end; 1855 1843 struct zone *zone; 1856 1844 struct memory_notify arg; 1857 1845 ··· 1863 1849 return -EINVAL; 1864 1850 /* This makes hotplug much easier...and readable. 1865 1851 we assume this for now. */ 1866 - if (!test_pages_in_a_zone(start_pfn, end_pfn)) 1852 + if (!test_pages_in_a_zone(start_pfn, end_pfn, &valid_start, &valid_end)) 1867 1853 return -EINVAL; 1868 1854 1869 - zone = page_zone(pfn_to_page(start_pfn)); 1855 + zone = page_zone(pfn_to_page(valid_start)); 1870 1856 node = zone_to_nid(zone); 1871 1857 nr_pages = end_pfn - start_pfn; 1872 1858
+9 -2
mm/shmem.c
··· 415 415 struct shrink_control *sc, unsigned long nr_to_split) 416 416 { 417 417 LIST_HEAD(list), *pos, *next; 418 + LIST_HEAD(to_remove); 418 419 struct inode *inode; 419 420 struct shmem_inode_info *info; 420 421 struct page *page; ··· 442 441 /* Check if there's anything to gain */ 443 442 if (round_up(inode->i_size, PAGE_SIZE) == 444 443 round_up(inode->i_size, HPAGE_PMD_SIZE)) { 445 - list_del_init(&info->shrinklist); 444 + list_move(&info->shrinklist, &to_remove); 446 445 removed++; 447 - iput(inode); 448 446 goto next; 449 447 } 450 448 ··· 453 453 break; 454 454 } 455 455 spin_unlock(&sbinfo->shrinklist_lock); 456 + 457 + list_for_each_safe(pos, next, &to_remove) { 458 + info = list_entry(pos, struct shmem_inode_info, shrinklist); 459 + inode = &info->vfs_inode; 460 + list_del_init(&info->shrinklist); 461 + iput(inode); 462 + } 456 463 457 464 list_for_each_safe(pos, next, &list) { 458 465 int ret;
+4
mm/slub.c
··· 1422 1422 int err; 1423 1423 unsigned long i, count = oo_objects(s->oo); 1424 1424 1425 + /* Bailout if already initialised */ 1426 + if (s->random_seq) 1427 + return 0; 1428 + 1425 1429 err = cache_random_seq_create(s, count, GFP_KERNEL); 1426 1430 if (err) { 1427 1431 pr_err("SLUB: Unable to initialize free list for %s\n",
+29 -1
mm/zswap.c
··· 78 78 79 79 /* Enable/disable zswap (disabled by default) */ 80 80 static bool zswap_enabled; 81 - module_param_named(enabled, zswap_enabled, bool, 0644); 81 + static int zswap_enabled_param_set(const char *, 82 + const struct kernel_param *); 83 + static struct kernel_param_ops zswap_enabled_param_ops = { 84 + .set = zswap_enabled_param_set, 85 + .get = param_get_bool, 86 + }; 87 + module_param_cb(enabled, &zswap_enabled_param_ops, &zswap_enabled, 0644); 82 88 83 89 /* Crypto compressor to use */ 84 90 #define ZSWAP_COMPRESSOR_DEFAULT "lzo" ··· 181 175 182 176 /* used by param callback function */ 183 177 static bool zswap_init_started; 178 + 179 + /* fatal error during init */ 180 + static bool zswap_init_failed; 184 181 185 182 /********************************* 186 183 * helpers and fwd declarations ··· 633 624 char *s = strstrip((char *)val); 634 625 int ret; 635 626 627 + if (zswap_init_failed) { 628 + pr_err("can't set param, initialization failed\n"); 629 + return -ENODEV; 630 + } 631 + 636 632 /* no change required */ 637 633 if (!strcmp(s, *(char **)kp->arg)) 638 634 return 0; ··· 715 701 const struct kernel_param *kp) 716 702 { 717 703 return __zswap_param_set(val, kp, NULL, zswap_compressor); 704 + } 705 + 706 + static int zswap_enabled_param_set(const char *val, 707 + const struct kernel_param *kp) 708 + { 709 + if (zswap_init_failed) { 710 + pr_err("can't enable, initialization failed\n"); 711 + return -ENODEV; 712 + } 713 + 714 + return param_set_bool(val, kp); 718 715 } 719 716 720 717 /********************************* ··· 1226 1201 dstmem_fail: 1227 1202 zswap_entry_cache_destroy(); 1228 1203 cache_fail: 1204 + /* if built-in, we aren't unloaded on failure; don't allow use */ 1205 + zswap_init_failed = true; 1206 + zswap_enabled = false; 1229 1207 return -ENOMEM; 1230 1208 } 1231 1209 /* must be late so crypto has time to come up */
+6 -2
net/core/datagram.c
··· 332 332 EXPORT_SYMBOL(__skb_free_datagram_locked); 333 333 334 334 int __sk_queue_drop_skb(struct sock *sk, struct sk_buff *skb, 335 - unsigned int flags) 335 + unsigned int flags, 336 + void (*destructor)(struct sock *sk, 337 + struct sk_buff *skb)) 336 338 { 337 339 int err = 0; 338 340 ··· 344 342 if (skb == skb_peek(&sk->sk_receive_queue)) { 345 343 __skb_unlink(skb, &sk->sk_receive_queue); 346 344 atomic_dec(&skb->users); 345 + if (destructor) 346 + destructor(sk, skb); 347 347 err = 0; 348 348 } 349 349 spin_unlock_bh(&sk->sk_receive_queue.lock); ··· 379 375 380 376 int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags) 381 377 { 382 - int err = __sk_queue_drop_skb(sk, skb, flags); 378 + int err = __sk_queue_drop_skb(sk, skb, flags, NULL); 383 379 384 380 kfree_skb(skb); 385 381 sk_mem_reclaim_partial(sk);
+13 -18
net/core/dev.c
··· 1695 1695 1696 1696 static struct static_key netstamp_needed __read_mostly; 1697 1697 #ifdef HAVE_JUMP_LABEL 1698 - /* We are not allowed to call static_key_slow_dec() from irq context 1699 - * If net_disable_timestamp() is called from irq context, defer the 1700 - * static_key_slow_dec() calls. 1701 - */ 1702 1698 static atomic_t netstamp_needed_deferred; 1699 + static void netstamp_clear(struct work_struct *work) 1700 + { 1701 + int deferred = atomic_xchg(&netstamp_needed_deferred, 0); 1702 + 1703 + while (deferred--) 1704 + static_key_slow_dec(&netstamp_needed); 1705 + } 1706 + static DECLARE_WORK(netstamp_work, netstamp_clear); 1703 1707 #endif 1704 1708 1705 1709 void net_enable_timestamp(void) 1706 1710 { 1707 - #ifdef HAVE_JUMP_LABEL 1708 - int deferred = atomic_xchg(&netstamp_needed_deferred, 0); 1709 - 1710 - if (deferred) { 1711 - while (--deferred) 1712 - static_key_slow_dec(&netstamp_needed); 1713 - return; 1714 - } 1715 - #endif 1716 1711 static_key_slow_inc(&netstamp_needed); 1717 1712 } 1718 1713 EXPORT_SYMBOL(net_enable_timestamp); ··· 1715 1720 void net_disable_timestamp(void) 1716 1721 { 1717 1722 #ifdef HAVE_JUMP_LABEL 1718 - if (in_interrupt()) { 1719 - atomic_inc(&netstamp_needed_deferred); 1720 - return; 1721 - } 1722 - #endif 1723 + /* net_disable_timestamp() can be called from non process context */ 1724 + atomic_inc(&netstamp_needed_deferred); 1725 + schedule_work(&netstamp_work); 1726 + #else 1723 1727 static_key_slow_dec(&netstamp_needed); 1728 + #endif 1724 1729 } 1725 1730 EXPORT_SYMBOL(net_disable_timestamp); 1726 1731
+6 -3
net/core/ethtool.c
··· 1405 1405 if (regs.len > reglen) 1406 1406 regs.len = reglen; 1407 1407 1408 - regbuf = vzalloc(reglen); 1409 - if (reglen && !regbuf) 1410 - return -ENOMEM; 1408 + regbuf = NULL; 1409 + if (reglen) { 1410 + regbuf = vzalloc(reglen); 1411 + if (!regbuf) 1412 + return -ENOMEM; 1413 + } 1411 1414 1412 1415 ops->get_regs(dev, &regs, regbuf); 1413 1416
+1
net/dsa/dsa2.c
··· 273 273 if (err) { 274 274 dev_warn(ds->dev, "Failed to create slave %d: %d\n", 275 275 index, err); 276 + ds->ports[index].netdev = NULL; 276 277 return err; 277 278 } 278 279
+1
net/ethernet/eth.c
··· 356 356 dev->header_ops = &eth_header_ops; 357 357 dev->type = ARPHRD_ETHER; 358 358 dev->hard_header_len = ETH_HLEN; 359 + dev->min_header_len = ETH_HLEN; 359 360 dev->mtu = ETH_DATA_LEN; 360 361 dev->min_mtu = ETH_MIN_MTU; 361 362 dev->max_mtu = ETH_DATA_LEN;
+4
net/ipv4/cipso_ipv4.c
··· 1587 1587 goto validate_return_locked; 1588 1588 } 1589 1589 1590 + if (opt_iter + 1 == opt_len) { 1591 + err_offset = opt_iter; 1592 + goto validate_return_locked; 1593 + } 1590 1594 tag_len = tag[1]; 1591 1595 if (tag_len > (opt_len - opt_iter)) { 1592 1596 err_offset = opt_iter + 1;
+1
net/ipv4/igmp.c
··· 1172 1172 psf->sf_crcount = im->crcount; 1173 1173 } 1174 1174 in_dev_put(pmc->interface); 1175 + kfree(pmc); 1175 1176 } 1176 1177 spin_unlock_bh(&im->lock); 1177 1178 }
+8 -1
net/ipv4/ip_sockglue.c
··· 1238 1238 pktinfo->ipi_ifindex = 0; 1239 1239 pktinfo->ipi_spec_dst.s_addr = 0; 1240 1240 } 1241 - skb_dst_drop(skb); 1241 + /* We need to keep the dst for __ip_options_echo() 1242 + * We could restrict the test to opt.ts_needtime || opt.srr, 1243 + * but the following is good enough as IP options are not often used. 1244 + */ 1245 + if (unlikely(IPCB(skb)->opt.optlen)) 1246 + skb_dst_force(skb); 1247 + else 1248 + skb_dst_drop(skb); 1242 1249 } 1243 1250 1244 1251 int ip_setsockopt(struct sock *sk, int level,
+2
net/ipv4/ping.c
··· 642 642 { 643 643 struct sk_buff *skb = skb_peek(&sk->sk_write_queue); 644 644 645 + if (!skb) 646 + return 0; 645 647 pfh->wcheck = csum_partial((char *)&pfh->icmph, 646 648 sizeof(struct icmphdr), pfh->wcheck); 647 649 pfh->icmph.checksum = csum_fold(pfh->wcheck);
+6
net/ipv4/tcp.c
··· 770 770 ret = -EAGAIN; 771 771 break; 772 772 } 773 + /* if __tcp_splice_read() got nothing while we have 774 + * an skb in receive queue, we do not want to loop. 775 + * This might happen with URG data. 776 + */ 777 + if (!skb_queue_empty(&sk->sk_receive_queue)) 778 + break; 773 779 sk_wait_data(sk, &timeo, NULL); 774 780 if (signal_pending(current)) { 775 781 ret = sock_intr_errno(timeo);
+1 -1
net/ipv4/udp.c
··· 1501 1501 return err; 1502 1502 1503 1503 csum_copy_err: 1504 - if (!__sk_queue_drop_skb(sk, skb, flags)) { 1504 + if (!__sk_queue_drop_skb(sk, skb, flags, udp_skb_destructor)) { 1505 1505 UDP_INC_STATS(sock_net(sk), UDP_MIB_CSUMERRORS, is_udplite); 1506 1506 UDP_INC_STATS(sock_net(sk), UDP_MIB_INERRORS, is_udplite); 1507 1507 }
+14 -2
net/ipv6/addrconf.c
··· 3386 3386 } 3387 3387 3388 3388 if (idev) { 3389 - if (idev->if_flags & IF_READY) 3390 - /* device is already configured. */ 3389 + if (idev->if_flags & IF_READY) { 3390 + /* device is already configured - 3391 + * but resend MLD reports, we might 3392 + * have roamed and need to update 3393 + * multicast snooping switches 3394 + */ 3395 + ipv6_mc_up(idev); 3391 3396 break; 3397 + } 3392 3398 idev->if_flags |= IF_READY; 3393 3399 } 3394 3400 ··· 4015 4009 4016 4010 if (bump_id) 4017 4011 rt_genid_bump_ipv6(dev_net(dev)); 4012 + 4013 + /* Make sure that a new temporary address will be created 4014 + * before this temporary address becomes deprecated. 4015 + */ 4016 + if (ifp->flags & IFA_F_TEMPORARY) 4017 + addrconf_verify_rtnl(); 4018 4018 } 4019 4019 4020 4020 static void addrconf_dad_run(struct inet6_dev *idev)
+3 -28
net/ipv6/exthdrs.c
··· 327 327 struct ipv6_sr_hdr *hdr; 328 328 struct inet6_dev *idev; 329 329 struct in6_addr *addr; 330 - bool cleanup = false; 331 330 int accept_seg6; 332 331 333 332 hdr = (struct ipv6_sr_hdr *)skb_transport_header(skb); ··· 350 351 #endif 351 352 352 353 looped_back: 353 - if (hdr->segments_left > 0) { 354 - if (hdr->nexthdr != NEXTHDR_IPV6 && hdr->segments_left == 1 && 355 - sr_has_cleanup(hdr)) 356 - cleanup = true; 357 - } else { 354 + if (hdr->segments_left == 0) { 358 355 if (hdr->nexthdr == NEXTHDR_IPV6) { 359 356 int offset = (hdr->hdrlen + 1) << 3; 360 357 ··· 413 418 414 419 ipv6_hdr(skb)->daddr = *addr; 415 420 416 - if (cleanup) { 417 - int srhlen = (hdr->hdrlen + 1) << 3; 418 - int nh = hdr->nexthdr; 419 - 420 - skb_pull_rcsum(skb, sizeof(struct ipv6hdr) + srhlen); 421 - memmove(skb_network_header(skb) + srhlen, 422 - skb_network_header(skb), 423 - (unsigned char *)hdr - skb_network_header(skb)); 424 - skb->network_header += srhlen; 425 - ipv6_hdr(skb)->nexthdr = nh; 426 - ipv6_hdr(skb)->payload_len = htons(skb->len - 427 - sizeof(struct ipv6hdr)); 428 - skb_push_rcsum(skb, sizeof(struct ipv6hdr)); 429 - } 430 - 431 421 skb_dst_drop(skb); 432 422 433 423 ip6_route_input(skb); ··· 433 453 } 434 454 ipv6_hdr(skb)->hop_limit--; 435 455 436 - /* be sure that srh is still present before reinjecting */ 437 - if (!cleanup) { 438 - skb_pull(skb, sizeof(struct ipv6hdr)); 439 - goto looped_back; 440 - } 441 - skb_set_transport_header(skb, sizeof(struct ipv6hdr)); 442 - IP6CB(skb)->nhoff = offsetof(struct ipv6hdr, nexthdr); 456 + skb_pull(skb, sizeof(struct ipv6hdr)); 457 + goto looped_back; 443 458 } 444 459 445 460 dst_input(skb);
+21 -19
net/ipv6/ip6_gre.c
··· 367 367 368 368 369 369 static void ip6gre_err(struct sk_buff *skb, struct inet6_skb_parm *opt, 370 - u8 type, u8 code, int offset, __be32 info) 370 + u8 type, u8 code, int offset, __be32 info) 371 371 { 372 - const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)skb->data; 373 - __be16 *p = (__be16 *)(skb->data + offset); 374 - int grehlen = offset + 4; 372 + const struct gre_base_hdr *greh; 373 + const struct ipv6hdr *ipv6h; 374 + int grehlen = sizeof(*greh); 375 375 struct ip6_tnl *t; 376 + int key_off = 0; 376 377 __be16 flags; 378 + __be32 key; 377 379 378 - flags = p[0]; 379 - if (flags&(GRE_CSUM|GRE_KEY|GRE_SEQ|GRE_ROUTING|GRE_VERSION)) { 380 - if (flags&(GRE_VERSION|GRE_ROUTING)) 381 - return; 382 - if (flags&GRE_KEY) { 383 - grehlen += 4; 384 - if (flags&GRE_CSUM) 385 - grehlen += 4; 386 - } 380 + if (!pskb_may_pull(skb, offset + grehlen)) 381 + return; 382 + greh = (const struct gre_base_hdr *)(skb->data + offset); 383 + flags = greh->flags; 384 + if (flags & (GRE_VERSION | GRE_ROUTING)) 385 + return; 386 + if (flags & GRE_CSUM) 387 + grehlen += 4; 388 + if (flags & GRE_KEY) { 389 + key_off = grehlen + offset; 390 + grehlen += 4; 387 391 } 388 392 389 - /* If only 8 bytes returned, keyed message will be dropped here */ 390 - if (!pskb_may_pull(skb, grehlen)) 393 + if (!pskb_may_pull(skb, offset + grehlen)) 391 394 return; 392 395 ipv6h = (const struct ipv6hdr *)skb->data; 393 - p = (__be16 *)(skb->data + offset); 396 + greh = (const struct gre_base_hdr *)(skb->data + offset); 397 + key = key_off ? *(__be32 *)(skb->data + key_off) : 0; 394 398 395 399 t = ip6gre_tunnel_lookup(skb->dev, &ipv6h->daddr, &ipv6h->saddr, 396 - flags & GRE_KEY ? 397 - *(((__be32 *)p) + (grehlen / 4) - 1) : 0, 398 - p[1]); 400 + key, greh->protocol); 399 401 if (!t) 400 402 return; 401 403
+1
net/ipv6/mcast.c
··· 779 779 psf->sf_crcount = im->mca_crcount; 780 780 } 781 781 in6_dev_put(pmc->idev); 782 + kfree(pmc); 782 783 } 783 784 spin_unlock_bh(&im->mca_lock); 784 785 }
+4 -4
net/ipv6/seg6_hmac.c
··· 174 174 * hash function (RadioGatun) with up to 1216 bits 175 175 */ 176 176 177 - /* saddr(16) + first_seg(1) + cleanup(1) + keyid(4) + seglist(16n) */ 177 + /* saddr(16) + first_seg(1) + flags(1) + keyid(4) + seglist(16n) */ 178 178 plen = 16 + 1 + 1 + 4 + (hdr->first_segment + 1) * 16; 179 179 180 180 /* this limit allows for 14 segments */ ··· 186 186 * 187 187 * 1. Source IPv6 address (128 bits) 188 188 * 2. first_segment value (8 bits) 189 - * 3. cleanup flag (8 bits: highest bit is cleanup value, others are 0) 189 + * 3. Flags (8 bits) 190 190 * 4. HMAC Key ID (32 bits) 191 191 * 5. All segments in the segments list (n * 128 bits) 192 192 */ ··· 202 202 /* first_segment value */ 203 203 *off++ = hdr->first_segment; 204 204 205 - /* cleanup flag */ 206 - *off++ = !!(sr_has_cleanup(hdr)) << 7; 205 + /* flags */ 206 + *off++ = hdr->flags; 207 207 208 208 /* HMAC Key ID */ 209 209 memcpy(off, &hmackeyid, 4);
+1
net/ipv6/sit.c
··· 1380 1380 err = dst_cache_init(&tunnel->dst_cache, GFP_KERNEL); 1381 1381 if (err) { 1382 1382 free_percpu(dev->tstats); 1383 + dev->tstats = NULL; 1383 1384 return err; 1384 1385 } 1385 1386
+13 -11
net/ipv6/tcp_ipv6.c
··· 991 991 return 0; /* don't send reset */ 992 992 } 993 993 994 + static void tcp_v6_restore_cb(struct sk_buff *skb) 995 + { 996 + /* We need to move header back to the beginning if xfrm6_policy_check() 997 + * and tcp_v6_fill_cb() are going to be called again. 998 + * ip6_datagram_recv_specific_ctl() also expects IP6CB to be there. 999 + */ 1000 + memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6, 1001 + sizeof(struct inet6_skb_parm)); 1002 + } 1003 + 994 1004 static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *skb, 995 1005 struct request_sock *req, 996 1006 struct dst_entry *dst, ··· 1192 1182 sk_gfp_mask(sk, GFP_ATOMIC)); 1193 1183 consume_skb(ireq->pktopts); 1194 1184 ireq->pktopts = NULL; 1195 - if (newnp->pktoptions) 1185 + if (newnp->pktoptions) { 1186 + tcp_v6_restore_cb(newnp->pktoptions); 1196 1187 skb_set_owner_r(newnp->pktoptions, newsk); 1188 + } 1197 1189 } 1198 1190 } 1199 1191 ··· 1208 1196 out: 1209 1197 tcp_listendrop(sk); 1210 1198 return NULL; 1211 - } 1212 - 1213 - static void tcp_v6_restore_cb(struct sk_buff *skb) 1214 - { 1215 - /* We need to move header back to the beginning if xfrm6_policy_check() 1216 - * and tcp_v6_fill_cb() are going to be called again. 1217 - * ip6_datagram_recv_specific_ctl() also expects IP6CB to be there. 1218 - */ 1219 - memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6, 1220 - sizeof(struct inet6_skb_parm)); 1221 1199 } 1222 1200 1223 1201 /* The socket must have it's spinlock held when we get
+1 -1
net/ipv6/udp.c
··· 441 441 return err; 442 442 443 443 csum_copy_err: 444 - if (!__sk_queue_drop_skb(sk, skb, flags)) { 444 + if (!__sk_queue_drop_skb(sk, skb, flags, udp_skb_destructor)) { 445 445 if (is_udp4) { 446 446 UDP_INC_STATS(sock_net(sk), 447 447 UDP_MIB_CSUMERRORS, is_udplite);
+23 -19
net/kcm/kcmsock.c
··· 929 929 goto out_error; 930 930 } 931 931 932 - /* New message, alloc head skb */ 933 - head = alloc_skb(0, sk->sk_allocation); 934 - while (!head) { 935 - kcm_push(kcm); 936 - err = sk_stream_wait_memory(sk, &timeo); 937 - if (err) 938 - goto out_error; 939 - 932 + if (msg_data_left(msg)) { 933 + /* New message, alloc head skb */ 940 934 head = alloc_skb(0, sk->sk_allocation); 935 + while (!head) { 936 + kcm_push(kcm); 937 + err = sk_stream_wait_memory(sk, &timeo); 938 + if (err) 939 + goto out_error; 940 + 941 + head = alloc_skb(0, sk->sk_allocation); 942 + } 943 + 944 + skb = head; 945 + 946 + /* Set ip_summed to CHECKSUM_UNNECESSARY to avoid calling 947 + * csum_and_copy_from_iter from skb_do_copy_data_nocache. 948 + */ 949 + skb->ip_summed = CHECKSUM_UNNECESSARY; 941 950 } 942 - 943 - skb = head; 944 - 945 - /* Set ip_summed to CHECKSUM_UNNECESSARY to avoid calling 946 - * csum_and_copy_from_iter from skb_do_copy_data_nocache. 947 - */ 948 - skb->ip_summed = CHECKSUM_UNNECESSARY; 949 951 950 952 start: 951 953 while (msg_data_left(msg)) { ··· 1020 1018 if (eor) { 1021 1019 bool not_busy = skb_queue_empty(&sk->sk_write_queue); 1022 1020 1023 - /* Message complete, queue it on send buffer */ 1024 - __skb_queue_tail(&sk->sk_write_queue, head); 1025 - kcm->seq_skb = NULL; 1026 - KCM_STATS_INCR(kcm->stats.tx_msgs); 1021 + if (head) { 1022 + /* Message complete, queue it on send buffer */ 1023 + __skb_queue_tail(&sk->sk_write_queue, head); 1024 + kcm->seq_skb = NULL; 1025 + KCM_STATS_INCR(kcm->stats.tx_msgs); 1026 + } 1027 1027 1028 1028 if (msg->msg_flags & MSG_BATCH) { 1029 1029 kcm->tx_wait_more = true;
+1
net/l2tp/l2tp_core.h
··· 263 263 int l2tp_nl_register_ops(enum l2tp_pwtype pw_type, 264 264 const struct l2tp_nl_cmd_ops *ops); 265 265 void l2tp_nl_unregister_ops(enum l2tp_pwtype pw_type); 266 + int l2tp_ioctl(struct sock *sk, int cmd, unsigned long arg); 266 267 267 268 /* Session reference counts. Incremented when code obtains a reference 268 269 * to a session.
+26 -1
net/l2tp/l2tp_ip.c
··· 11 11 12 12 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 13 13 14 + #include <asm/ioctls.h> 14 15 #include <linux/icmp.h> 15 16 #include <linux/module.h> 16 17 #include <linux/skbuff.h> ··· 554 553 return err ? err : copied; 555 554 } 556 555 556 + int l2tp_ioctl(struct sock *sk, int cmd, unsigned long arg) 557 + { 558 + struct sk_buff *skb; 559 + int amount; 560 + 561 + switch (cmd) { 562 + case SIOCOUTQ: 563 + amount = sk_wmem_alloc_get(sk); 564 + break; 565 + case SIOCINQ: 566 + spin_lock_bh(&sk->sk_receive_queue.lock); 567 + skb = skb_peek(&sk->sk_receive_queue); 568 + amount = skb ? skb->len : 0; 569 + spin_unlock_bh(&sk->sk_receive_queue.lock); 570 + break; 571 + 572 + default: 573 + return -ENOIOCTLCMD; 574 + } 575 + 576 + return put_user(amount, (int __user *)arg); 577 + } 578 + EXPORT_SYMBOL(l2tp_ioctl); 579 + 557 580 static struct proto l2tp_ip_prot = { 558 581 .name = "L2TP/IP", 559 582 .owner = THIS_MODULE, ··· 586 561 .bind = l2tp_ip_bind, 587 562 .connect = l2tp_ip_connect, 588 563 .disconnect = l2tp_ip_disconnect, 589 - .ioctl = udp_ioctl, 564 + .ioctl = l2tp_ioctl, 590 565 .destroy = l2tp_ip_destroy_sock, 591 566 .setsockopt = ip_setsockopt, 592 567 .getsockopt = ip_getsockopt,
+1 -1
net/l2tp/l2tp_ip6.c
··· 722 722 .bind = l2tp_ip6_bind, 723 723 .connect = l2tp_ip6_connect, 724 724 .disconnect = l2tp_ip6_disconnect, 725 - .ioctl = udp_ioctl, 725 + .ioctl = l2tp_ioctl, 726 726 .destroy = l2tp_ip6_destroy_sock, 727 727 .setsockopt = ipv6_setsockopt, 728 728 .getsockopt = ipv6_getsockopt,
+3 -3
net/mac80211/fils_aead.c
··· 124 124 125 125 /* CTR */ 126 126 127 - tfm2 = crypto_alloc_skcipher("ctr(aes)", 0, 0); 127 + tfm2 = crypto_alloc_skcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC); 128 128 if (IS_ERR(tfm2)) { 129 129 kfree(tmp); 130 130 return PTR_ERR(tfm2); ··· 183 183 184 184 /* CTR */ 185 185 186 - tfm2 = crypto_alloc_skcipher("ctr(aes)", 0, 0); 186 + tfm2 = crypto_alloc_skcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC); 187 187 if (IS_ERR(tfm2)) 188 188 return PTR_ERR(tfm2); 189 189 /* K2 for CTR */ ··· 272 272 crypt_len = skb->data + skb->len - encr; 273 273 skb_put(skb, AES_BLOCK_SIZE); 274 274 return aes_siv_encrypt(assoc_data->fils_kek, assoc_data->fils_kek_len, 275 - encr, crypt_len, 1, addr, len, encr); 275 + encr, crypt_len, 5, addr, len, encr); 276 276 } 277 277 278 278 int fils_decrypt_assoc_resp(struct ieee80211_sub_if_data *sdata,
+1 -1
net/mac80211/mesh.c
··· 339 339 /* fast-forward to vendor IEs */ 340 340 offset = ieee80211_ie_split_vendor(ifmsh->ie, ifmsh->ie_len, 0); 341 341 342 - if (offset) { 342 + if (offset < ifmsh->ie_len) { 343 343 len = ifmsh->ie_len - offset; 344 344 data = ifmsh->ie + offset; 345 345 if (skb_tailroom(skb) < len)
+4 -3
net/packet/af_packet.c
··· 2755 2755 struct virtio_net_hdr vnet_hdr = { 0 }; 2756 2756 int offset = 0; 2757 2757 struct packet_sock *po = pkt_sk(sk); 2758 - int hlen, tlen; 2758 + int hlen, tlen, linear; 2759 2759 int extra_len = 0; 2760 2760 2761 2761 /* ··· 2816 2816 err = -ENOBUFS; 2817 2817 hlen = LL_RESERVED_SPACE(dev); 2818 2818 tlen = dev->needed_tailroom; 2819 - skb = packet_alloc_skb(sk, hlen + tlen, hlen, len, 2820 - __virtio16_to_cpu(vio_le(), vnet_hdr.hdr_len), 2819 + linear = __virtio16_to_cpu(vio_le(), vnet_hdr.hdr_len); 2820 + linear = max(linear, min_t(int, len, dev->hard_header_len)); 2821 + skb = packet_alloc_skb(sk, hlen + tlen, hlen, len, linear, 2821 2822 msg->msg_flags & MSG_DONTWAIT, &err); 2822 2823 if (skb == NULL) 2823 2824 goto out_unlock;
+3 -2
net/sctp/socket.c
··· 239 239 union sctp_addr *laddr = (union sctp_addr *)addr; 240 240 struct sctp_transport *transport; 241 241 242 - if (sctp_verify_addr(sk, laddr, af->sockaddr_len)) 242 + if (!af || sctp_verify_addr(sk, laddr, af->sockaddr_len)) 243 243 return NULL; 244 244 245 245 addr_asoc = sctp_endpoint_lookup_assoc(sctp_sk(sk)->ep, ··· 7426 7426 */ 7427 7427 release_sock(sk); 7428 7428 current_timeo = schedule_timeout(current_timeo); 7429 - BUG_ON(sk != asoc->base.sk); 7429 + if (sk != asoc->base.sk) 7430 + goto do_error; 7430 7431 lock_sock(sk); 7431 7432 7432 7433 *timeo_p = current_timeo;
+1
net/wireless/nl80211.c
··· 5916 5916 break; 5917 5917 } 5918 5918 cfg->ht_opmode = ht_opmode; 5919 + mask |= (1 << (NL80211_MESHCONF_HT_OPMODE - 1)); 5919 5920 } 5920 5921 FILL_IN_MESH_PARAM_IF_SET(tb, cfg, dot11MeshHWMPactivePathToRootTimeout, 5921 5922 1, 65535, mask,
+2
scripts/Makefile.build
··· 164 164 $(CPP) -D__GENKSYMS__ $(c_flags) $< | \ 165 165 $(GENKSYMS) $(if $(1), -T $(2)) \ 166 166 $(patsubst y,-s _,$(CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX)) \ 167 + $(patsubst y,-R,$(CONFIG_MODULE_REL_CRCS)) \ 167 168 $(if $(KBUILD_PRESERVE),-p) \ 168 169 -r $(firstword $(wildcard $(2:.symtypes=.symref) /dev/null)) 169 170 ··· 338 337 $(CPP) -D__GENKSYMS__ $(c_flags) -xc - | \ 339 338 $(GENKSYMS) $(if $(1), -T $(2)) \ 340 339 $(patsubst y,-s _,$(CONFIG_HAVE_UNDERSCORE_SYMBOL_PREFIX)) \ 340 + $(patsubst y,-R,$(CONFIG_MODULE_REL_CRCS)) \ 341 341 $(if $(KBUILD_PRESERVE),-p) \ 342 342 -r $(firstword $(wildcard $(2:.symtypes=.symref) /dev/null)) 343 343
+14 -5
scripts/genksyms/genksyms.c
··· 44 44 int in_source_file; 45 45 46 46 static int flag_debug, flag_dump_defs, flag_reference, flag_dump_types, 47 - flag_preserve, flag_warnings; 47 + flag_preserve, flag_warnings, flag_rel_crcs; 48 48 static const char *mod_prefix = ""; 49 49 50 50 static int errors; ··· 693 693 fputs(">\n", debugfile); 694 694 695 695 /* Used as a linker script. */ 696 - printf("%s__crc_%s = 0x%08lx ;\n", mod_prefix, name, crc); 696 + printf(!flag_rel_crcs ? "%s__crc_%s = 0x%08lx;\n" : 697 + "SECTIONS { .rodata : ALIGN(4) { " 698 + "%s__crc_%s = .; LONG(0x%08lx); } }\n", 699 + mod_prefix, name, crc); 697 700 } 698 701 } ··· 733 730 734 731 static void genksyms_usage(void) 735 732 { 736 - fputs("Usage:\n" "genksyms [-adDTwqhV] > /path/to/.tmp_obj.ver\n" "\n" 733 + fputs("Usage:\n" "genksyms [-adDTwqhVR] > /path/to/.tmp_obj.ver\n" "\n" 737 734 #ifdef __GNU_LIBRARY__ 738 735 " -s, --symbol-prefix Select symbol prefix\n" 739 736 " -d, --debug Increment the debug level (repeatable)\n" ··· 745 742 " -q, --quiet Disable warnings (default)\n" 746 743 " -h, --help Print this message\n" 747 744 " -V, --version Print the release version\n" 745 + " -R, --relative-crc Emit section relative symbol CRCs\n" 748 746 #else /* __GNU_LIBRARY__ */ 749 747 " -s Select symbol prefix\n" 750 748 " -d Increment the debug level (repeatable)\n" ··· 757 753 " -q Disable warnings (default)\n" 758 754 " -h Print this message\n" 759 755 " -V Print the release version\n" 756 + " -R Emit section relative symbol CRCs\n" 760 757 #endif /* __GNU_LIBRARY__ */ 761 758 , stderr); 762 759 } ··· 779 774 {"preserve", 0, 0, 'p'}, 780 775 {"version", 0, 0, 'V'}, 781 776 {"help", 0, 0, 'h'}, 777 + {"relative-crc", 0, 0, 'R'}, 782 778 {0, 0, 0, 0} 783 779 }; 784 780 785 - while ((o = getopt_long(argc, argv, "s:dwqVDr:T:ph", 781 + while ((o = getopt_long(argc, argv, "s:dwqVDr:T:phR", 786 782 &long_opts[0], NULL)) != EOF) 787 783 #else /* __GNU_LIBRARY__ */ 788 - while ((o = getopt(argc, argv, "s:dwqVDr:T:ph")) != EOF) 784 + while ((o = getopt(argc, argv, "s:dwqVDr:T:phR")) != EOF) 789 785 #endif /* __GNU_LIBRARY__ */ 790 786 switch (o) { ··· 829 823 case 'h': 830 824 genksyms_usage(); 831 825 return 0; 826 + case 'R': 827 + flag_rel_crcs = 1; 828 + break; 832 829 default: 833 830 genksyms_usage(); 834 831 return 1;
+12
scripts/kallsyms.c
··· 219 219 "_SDA2_BASE_", /* ppc */ 220 220 NULL }; 221 221 222 + static char *special_prefixes[] = { 223 + "__crc_", /* modversions */ 224 + NULL }; 225 + 222 226 static char *special_suffixes[] = { 223 227 "_veneer", /* arm */ 224 228 "_from_arm", /* arm */ ··· 262 258 for (i = 0; special_symbols[i]; i++) 263 259 if (strcmp(sym_name, special_symbols[i]) == 0) 264 260 return 0; 261 + 262 + for (i = 0; special_prefixes[i]; i++) { 263 + int l = strlen(special_prefixes[i]); 264 + 265 + if (l <= strlen(sym_name) && 266 + strncmp(sym_name, special_prefixes[i], l) == 0) 267 + return 0; 268 + } 265 269 266 270 for (i = 0; special_suffixes[i]; i++) { 267 271 int l = strlen(sym_name) - strlen(special_suffixes[i]);
+10
scripts/mod/modpost.c
··· 621 621 if (strncmp(symname, CRC_PFX, strlen(CRC_PFX)) == 0) { 622 622 is_crc = true; 623 623 crc = (unsigned int) sym->st_value; 624 + if (sym->st_shndx != SHN_UNDEF && sym->st_shndx != SHN_ABS) { 625 + unsigned int *crcp; 626 + 627 + /* symbol points to the CRC in the ELF object */ 628 + crcp = (void *)info->hdr + sym->st_value + 629 + info->sechdrs[sym->st_shndx].sh_offset - 630 + (info->hdr->e_type != ET_REL ? 631 + info->sechdrs[sym->st_shndx].sh_addr : 0); 632 + crc = *crcp; 633 + } 624 634 sym_update_crc(symname + strlen(CRC_PFX), mod, crc, 625 635 export); 626 636 }
+1 -1
security/selinux/hooks.c
··· 5887 5887 return error; 5888 5888 5889 5889 /* Obtain a SID for the context, if one was specified. */ 5890 - if (size && str[1] && str[1] != '\n') { 5890 + if (size && str[0] && str[0] != '\n') { 5891 5891 if (str[size-1] == '\n') { 5892 5892 str[size-1] = 0; 5893 5893 size--;
+1 -8
sound/core/seq/seq_memory.c
··· 419 419 { 420 420 unsigned long flags; 421 421 struct snd_seq_event_cell *ptr; 422 - int max_count = 5 * HZ; 423 422 424 423 if (snd_BUG_ON(!pool)) 425 424 return -EINVAL; ··· 431 432 if (waitqueue_active(&pool->output_sleep)) 432 433 wake_up(&pool->output_sleep); 433 434 434 - while (atomic_read(&pool->counter) > 0) { 435 - if (max_count == 0) { 436 - pr_warn("ALSA: snd_seq_pool_done timeout: %d cells remain\n", atomic_read(&pool->counter)); 437 - break; 438 - } 435 + while (atomic_read(&pool->counter) > 0) 439 436 schedule_timeout_uninterruptible(1); 440 - max_count--; 441 - } 442 437 443 438 /* release all resources */ 444 439 spin_lock_irqsave(&pool->lock, flags);
+20 -13
sound/core/seq/seq_queue.c
··· 181 181 } 182 182 } 183 183 184 + static void queue_use(struct snd_seq_queue *queue, int client, int use); 185 + 184 186 /* allocate a new queue - 185 187 * return queue index value or negative value for error 186 188 */ ··· 194 192 if (q == NULL) 195 193 return -ENOMEM; 196 194 q->info_flags = info_flags; 195 + queue_use(q, client, 1); 197 196 if (queue_list_add(q) < 0) { 198 197 queue_delete(q); 199 198 return -ENOMEM; 200 199 } 201 - snd_seq_queue_use(q->queue, client, 1); /* use this queue */ 202 200 return q->queue; 203 201 } 204 202 ··· 504 502 return result; 505 503 } 506 504 507 - 508 - /* use or unuse this queue - 509 - * if it is the first client, starts the timer. 510 - * if it is not longer used by any clients, stop the timer. 511 - */ 512 - int snd_seq_queue_use(int queueid, int client, int use) 505 + /* use or unuse this queue */ 506 + static void queue_use(struct snd_seq_queue *queue, int client, int use) 513 507 { 514 - struct snd_seq_queue *queue; 515 - 516 - queue = queueptr(queueid); 517 - if (queue == NULL) 518 - return -EINVAL; 519 - mutex_lock(&queue->timer_mutex); 520 508 if (use) { 521 509 if (!test_and_set_bit(client, queue->clients_bitmap)) 522 510 queue->clients++; ··· 521 529 } else { 522 530 snd_seq_timer_close(queue); 523 531 } 532 + } 533 + 534 + /* use or unuse this queue - 535 + * if it is the first client, starts the timer. 536 + * if it is not longer used by any clients, stop the timer. 537 + */ 538 + int snd_seq_queue_use(int queueid, int client, int use) 539 + { 540 + struct snd_seq_queue *queue; 541 + 542 + queue = queueptr(queueid); 543 + if (queue == NULL) 544 + return -EINVAL; 545 + mutex_lock(&queue->timer_mutex); 546 + queue_use(queue, client, use); 524 547 mutex_unlock(&queue->timer_mutex); 525 548 queuefree(queue); 526 549 return 0;
+1
sound/pci/hda/patch_hdmi.c
··· 3639 3639 HDA_CODEC_ENTRY(0x10de0071, "GPU 71 HDMI/DP", patch_nvhdmi), 3640 3640 HDA_CODEC_ENTRY(0x10de0072, "GPU 72 HDMI/DP", patch_nvhdmi), 3641 3641 HDA_CODEC_ENTRY(0x10de007d, "GPU 7d HDMI/DP", patch_nvhdmi), 3642 + HDA_CODEC_ENTRY(0x10de0080, "GPU 80 HDMI/DP", patch_nvhdmi), 3642 3643 HDA_CODEC_ENTRY(0x10de0082, "GPU 82 HDMI/DP", patch_nvhdmi), 3643 3644 HDA_CODEC_ENTRY(0x10de0083, "GPU 83 HDMI/DP", patch_nvhdmi), 3644 3645 HDA_CODEC_ENTRY(0x10de8001, "MCP73 HDMI", patch_nvhdmi_2ch),
+2 -1
sound/usb/line6/driver.c
··· 754 754 goto error; 755 755 } 756 756 757 + line6_get_interval(line6); 758 + 757 759 if (properties->capabilities & LINE6_CAP_CONTROL) { 758 - line6_get_interval(line6); 759 760 ret = line6_init_cap_control(line6); 760 761 if (ret < 0) 761 762 goto error;