···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 Closes the cec device. Resources associated with the file descriptor are
 freed. The device configuration remain unchanged.
Documentation/media/uapi/cec/cec-func-ioctl.rst (-5)

···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 The :c:func:`ioctl()` function manipulates cec device parameters. The
 argument ``fd`` must be an open file descriptor.
Documentation/media/uapi/cec/cec-func-open.rst (-5)

···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 To open a cec device applications call :c:func:`open()` with the
 desired device name. The function has no side effects; the device
 configuration remain unchanged.
Documentation/media/uapi/cec/cec-func-poll.rst (-5)

···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 With the :c:func:`poll()` function applications can wait for CEC
 events.
Documentation/media/uapi/cec/cec-intro.rst (+12 -5)

···
 Introduction
 ============
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 HDMI connectors provide a single pin for use by the Consumer Electronics
 Control protocol. This protocol allows different devices connected by an
 HDMI cable to communicate. The protocol for CEC version 1.4 is defined
···
 Drivers that support CEC will create a CEC device node (/dev/cecX) to
 give userspace access to the CEC adapter. The
 :ref:`CEC_ADAP_G_CAPS` ioctl will tell userspace what it is allowed to do.
+
+In order to check the support and test it, it is suggested to download
+the `v4l-utils <https://git.linuxtv.org/v4l-utils.git/>`_ package. It
+provides three tools to handle CEC:
+
+- cec-ctl: the Swiss army knife of CEC. Allows you to configure, transmit
+  and monitor CEC messages.
+
+- cec-compliance: does a CEC compliance test of a remote CEC device to
+  determine how compliant the CEC implementation is.
+
+- cec-follower: emulates a CEC follower.
···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 All cec devices must support :ref:`ioctl CEC_ADAP_G_CAPS <CEC_ADAP_G_CAPS>`. To query
 device information, applications call the ioctl with a pointer to a
 struct :c:type:`cec_caps`. The driver fills the structure and
···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 To query the current CEC logical addresses, applications call
 :ref:`ioctl CEC_ADAP_G_LOG_ADDRS <CEC_ADAP_G_LOG_ADDRS>` with a pointer to a
 struct :c:type:`cec_log_addrs` where the driver stores the logical addresses.
···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 To query the current physical address applications call
 :ref:`ioctl CEC_ADAP_G_PHYS_ADDR <CEC_ADAP_G_PHYS_ADDR>` with a pointer to a __u16 where the
 driver stores the physical address.
Documentation/media/uapi/cec/cec-ioc-dqevent.rst (-5)

···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 CEC devices can send asynchronous events. These can be retrieved by
 calling :c:func:`CEC_DQEVENT`. If the file descriptor is in
 non-blocking mode and no event is pending, then it will return -1 and
Documentation/media/uapi/cec/cec-ioc-g-mode.rst (-5)

···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 By default any filehandle can use :ref:`CEC_TRANSMIT`, but in order to prevent
 applications from stepping on each others toes it must be possible to
 obtain exclusive access to the CEC adapter. This ioctl sets the
Documentation/media/uapi/cec/cec-ioc-receive.rst (-5)

···
 Description
 ===========
 
-.. note::
-
-   This documents the proposed CEC API. This API is not yet finalized
-   and is currently only available as a staging kernel module.
-
 To receive a CEC message the application has to fill in the
 ``timeout`` field of struct :c:type:`cec_msg` and pass it to
 :ref:`ioctl CEC_RECEIVE <CEC_RECEIVE>`.
MAINTAINERS (+19 -19)

···
 F:  drivers/*/*aspeed*
 
 ARM/ATMEL AT91RM9200, AT91SAM9 AND SAMA5 SOC SUPPORT
-M:  Nicolas Ferre <nicolas.ferre@atmel.com>
+M:  Nicolas Ferre <nicolas.ferre@microchip.com>
 M:  Alexandre Belloni <alexandre.belloni@free-electrons.com>
 M:  Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com>
 L:  linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
···
 F:  include/linux/soc/renesas/
 
 ARM/SOCFPGA ARCHITECTURE
-M:  Dinh Nguyen <dinguyen@opensource.altera.com>
+M:  Dinh Nguyen <dinguyen@kernel.org>
 S:  Maintained
 F:  arch/arm/mach-socfpga/
 F:  arch/arm/boot/dts/socfpga*
···
 T:  git git://git.kernel.org/pub/scm/linux/kernel/git/dinguyen/linux.git
 
 ARM/SOCFPGA CLOCK FRAMEWORK SUPPORT
-M:  Dinh Nguyen <dinguyen@opensource.altera.com>
+M:  Dinh Nguyen <dinguyen@kernel.org>
 S:  Maintained
 F:  drivers/clk/socfpga/
···
 F:  include/uapi/linux/atm*
 
 ATMEL AT91 / AT32 MCI DRIVER
-M:  Ludovic Desroches <ludovic.desroches@atmel.com>
+M:  Ludovic Desroches <ludovic.desroches@microchip.com>
 S:  Maintained
 F:  drivers/mmc/host/atmel-mci.c
 
 ATMEL AT91 SAMA5D2-Compatible Shutdown Controller
-M:  Nicolas Ferre <nicolas.ferre@atmel.com>
+M:  Nicolas Ferre <nicolas.ferre@microchip.com>
 S:  Supported
 F:  drivers/power/reset/at91-sama5d2_shdwc.c
 
 ATMEL SAMA5D2 ADC DRIVER
-M:  Ludovic Desroches <ludovic.desroches@atmel.com>
+M:  Ludovic Desroches <ludovic.desroches@microchip.com>
 L:  linux-iio@vger.kernel.org
 S:  Supported
 F:  drivers/iio/adc/at91-sama5d2_adc.c
 
 ATMEL Audio ALSA driver
-M:  Nicolas Ferre <nicolas.ferre@atmel.com>
+M:  Nicolas Ferre <nicolas.ferre@microchip.com>
 L:  alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:  Supported
 F:  sound/soc/atmel
 
 ATMEL XDMA DRIVER
-M:  Ludovic Desroches <ludovic.desroches@atmel.com>
+M:  Ludovic Desroches <ludovic.desroches@microchip.com>
 L:  linux-arm-kernel@lists.infradead.org
 L:  dmaengine@vger.kernel.org
 S:  Supported
 F:  drivers/dma/at_xdmac.c
 
 ATMEL I2C DRIVER
-M:  Ludovic Desroches <ludovic.desroches@atmel.com>
+M:  Ludovic Desroches <ludovic.desroches@microchip.com>
 L:  linux-i2c@vger.kernel.org
 S:  Supported
 F:  drivers/i2c/busses/i2c-at91.c
 
 ATMEL ISI DRIVER
-M:  Ludovic Desroches <ludovic.desroches@atmel.com>
+M:  Ludovic Desroches <ludovic.desroches@microchip.com>
 L:  linux-media@vger.kernel.org
 S:  Supported
 F:  drivers/media/platform/soc_camera/atmel-isi.c
 F:  include/media/atmel-isi.h
 
 ATMEL LCDFB DRIVER
-M:  Nicolas Ferre <nicolas.ferre@atmel.com>
+M:  Nicolas Ferre <nicolas.ferre@microchip.com>
 L:  linux-fbdev@vger.kernel.org
 S:  Maintained
 F:  drivers/video/fbdev/atmel_lcdfb.c
 F:  include/video/atmel_lcdc.h
 
 ATMEL MACB ETHERNET DRIVER
-M:  Nicolas Ferre <nicolas.ferre@atmel.com>
+M:  Nicolas Ferre <nicolas.ferre@microchip.com>
 S:  Supported
 F:  drivers/net/ethernet/cadence/
···
 F:  drivers/mtd/nand/atmel_nand*
 
 ATMEL SDMMC DRIVER
-M:  Ludovic Desroches <ludovic.desroches@atmel.com>
+M:  Ludovic Desroches <ludovic.desroches@microchip.com>
 L:  linux-mmc@vger.kernel.org
 S:  Supported
 F:  drivers/mmc/host/sdhci-of-at91.c
 
 ATMEL SPI DRIVER
-M:  Nicolas Ferre <nicolas.ferre@atmel.com>
+M:  Nicolas Ferre <nicolas.ferre@microchip.com>
 S:  Supported
 F:  drivers/spi/spi-atmel.*
 
 ATMEL SSC DRIVER
-M:  Nicolas Ferre <nicolas.ferre@atmel.com>
+M:  Nicolas Ferre <nicolas.ferre@microchip.com>
 L:  linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:  Supported
 F:  drivers/misc/atmel-ssc.c
 F:  include/linux/atmel-ssc.h
 
 ATMEL Timer Counter (TC) AND CLOCKSOURCE DRIVERS
-M:  Nicolas Ferre <nicolas.ferre@atmel.com>
+M:  Nicolas Ferre <nicolas.ferre@microchip.com>
 L:  linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:  Supported
 F:  drivers/misc/atmel_tclib.c
 F:  drivers/clocksource/tcb_clksrc.c
 
 ATMEL USBA UDC DRIVER
-M:  Nicolas Ferre <nicolas.ferre@atmel.com>
+M:  Nicolas Ferre <nicolas.ferre@microchip.com>
 L:  linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:  Supported
 F:  drivers/usb/gadget/udc/atmel_usba_udc.*
···
 F:  drivers/pinctrl/pinctrl-at91.*
 
 PIN CONTROLLER - ATMEL AT91 PIO4
-M:  Ludovic Desroches <ludovic.desroches@atmel.com>
+M:  Ludovic Desroches <ludovic.desroches@microchip.com>
 L:  linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:  linux-gpio@vger.kernel.org
 S:  Supported
···
 F:  include/uapi/linux/userio.h
 
 VIRTIO CONSOLE DRIVER
-M:  Amit Shah <amit.shah@redhat.com>
+M:  Amit Shah <amit@kernel.org>
 L:  virtualization@lists.linux-foundation.org
 S:  Maintained
 F:  drivers/char/virtio_console.c
···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         gpio0 = &gpio1;
arch/arm/boot/dts/imx23.dtsi (+8)

···
     #size-cells = <1>;
 
     interrupt-parent = <&icoll>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         gpio0 = &gpio0;
arch/arm/boot/dts/imx25.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &fec;
arch/arm/boot/dts/imx27.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &fec;
arch/arm/boot/dts/imx28.dtsi (+8)

···
     #size-cells = <1>;
 
     interrupt-parent = <&icoll>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &mac0;
arch/arm/boot/dts/imx31.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         serial0 = &uart1;
arch/arm/boot/dts/imx35.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &fec;
arch/arm/boot/dts/imx50.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &fec;
arch/arm/boot/dts/imx51.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &fec;
arch/arm/boot/dts/imx53.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &fec;
···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &fec;
arch/arm/boot/dts/imx6sl.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &fec;
arch/arm/boot/dts/imx6sx.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         can0 = &flexcan1;
arch/arm/boot/dts/imx6ul.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         ethernet0 = &fec1;
arch/arm/boot/dts/imx7s.dtsi (+8)

···
 / {
     #address-cells = <1>;
     #size-cells = <1>;
+    /*
+     * The decompressor and also some bootloaders rely on a
+     * pre-existing /chosen node to be available to insert the
+     * command line and merge other ATAGS info.
+     * Also for U-Boot there must be a pre-existing /memory node.
+     */
+    chosen {};
+    memory { device_type = "memory"; reg = <0 0>; };
 
     aliases {
         gpio0 = &gpio1;
···
  * Device Tree file for Buffalo Linkstation LS-CHLv3
  *
  * Copyright (C) 2016 Ash Hughes <ashley.hughes@blueyonder.co.uk>
- * Copyright (C) 2015, 2016
+ * Copyright (C) 2015-2017
  * Roger Shimizu <rogershimizu@gmail.com>
  *
  * This file is dual-licensed: you can use it either under the terms
···
 #include <dt-bindings/gpio/gpio.h>
 
 / {
-    model = "Buffalo Linkstation Live v3 (LS-CHL)";
+    model = "Buffalo Linkstation LiveV3 (LS-CHL)";
     compatible = "buffalo,lschl", "marvell,orion5x-88f5182", "marvell,orion5x";
 
     memory { /* 128 MB */
···
     select ARCH_HAS_SCALED_CPUTIME if VIRT_CPU_ACCOUNTING_NATIVE
     select HAVE_ARCH_HARDENED_USERCOPY
     select HAVE_KERNEL_GZIP
-    select HAVE_CC_STACKPROTECTOR
 
 config GENERIC_CSUM
     def_bool CPU_LITTLE_ENDIAN
···
     bool "Build a relocatable kernel"
     depends on (PPC64 && !COMPILE_TEST) || (FLATMEM && (44x || FSL_BOOKE))
     select NONSTATIC_KERNEL
+    select MODULE_REL_CRCS if MODVERSIONS
     help
       This builds a kernel image that is capable of running at the
       location the kernel is loaded at. For ppc32, there is no any
arch/powerpc/include/asm/cpu_has_feature.h (+2)

···
 {
     int i;
 
+#ifndef __clang__ /* clang can't cope with this */
     BUILD_BUG_ON(!__builtin_constant_p(feature));
+#endif
 
 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG
     if (!static_key_initialized) {
arch/powerpc/include/asm/mmu.h (+2)

···
 {
     int i;
 
+#ifndef __clang__ /* clang can't cope with this */
     BUILD_BUG_ON(!__builtin_constant_p(feature));
+#endif
 
 #ifdef CONFIG_JUMP_LABEL_FEATURE_CHECK_DEBUG
     if (!static_key_initialized) {
···
 #define SRR1_ISI_N_OR_G     0x10000000 /* ISI: Access is no-exec or G */
 #define SRR1_ISI_PROT       0x08000000 /* ISI: Other protection fault */
 #define SRR1_WAKEMASK       0x00380000 /* reason for wakeup */
-#define SRR1_WAKEMASK_P8    0x003c0000 /* reason for wakeup on POWER8 */
+#define SRR1_WAKEMASK_P8    0x003c0000 /* reason for wakeup on POWER8 and 9 */
 #define SRR1_WAKESYSERR     0x00300000 /* System error */
 #define SRR1_WAKEEE         0x00200000 /* External interrupt */
+#define SRR1_WAKEHVI        0x00240000 /* Hypervisor Virtualization Interrupt (P9) */
 #define SRR1_WAKEMT         0x00280000 /* mtctrl */
 #define SRR1_WAKEHMI        0x00280000 /* Hypervisor maintenance */
 #define SRR1_WAKEDEC        0x00180000 /* Decrementer interrupt */
arch/powerpc/include/asm/stackprotector.h (-40, file removed)

-/*
- * GCC stack protector support.
- *
- * Stack protector works by putting predefined pattern at the start of
- * the stack frame and verifying that it hasn't been overwritten when
- * returning from the function. The pattern is called stack canary
- * and gcc expects it to be defined by a global variable called
- * "__stack_chk_guard" on PPC. This unfortunately means that on SMP
- * we cannot have a different canary value per task.
- */
-
-#ifndef _ASM_STACKPROTECTOR_H
-#define _ASM_STACKPROTECTOR_H
-
-#include <linux/random.h>
-#include <linux/version.h>
-#include <asm/reg.h>
-
-extern unsigned long __stack_chk_guard;
-
-/*
- * Initialize the stackprotector canary value.
- *
- * NOTE: this must only be called from functions that never return,
- * and it must always be inlined.
- */
-static __always_inline void boot_init_stack_canary(void)
-{
-    unsigned long canary;
-
-    /* Try to get a semi random initial value. */
-    get_random_bytes(&canary, sizeof(canary));
-    canary ^= mftb();
-    canary ^= LINUX_VERSION_CODE;
-
-    current->stack_canary = canary;
-    __stack_chk_guard = current->stack_canary;
-}
-
-#endif /* _ASM_STACKPROTECTOR_H */
arch/powerpc/include/asm/xics.h (+1)

···
 
 #ifdef CONFIG_PPC_POWERNV
 extern int icp_opal_init(void);
+extern void icp_opal_flush_interrupt(void);
 #else
 static inline int icp_opal_init(void) { return -ENODEV; }
 #endif
arch/powerpc/kernel/Makefile (-4)

···
 CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
 
-# -fstack-protector triggers protection checks in this code,
-# but it is being used too early to link to meaningful stack_chk logic.
-CFLAGS_prom_init.o += $(call cc-option, -fno-stack-protector)
-
 ifdef CONFIG_FUNCTION_TRACER
 # Do not trace early boot code
 CFLAGS_REMOVE_cputable.o = -mno-sched-epilog $(CC_FLAGS_FTRACE)
···
     for (end = (void *)vers + size; vers < end; vers++)
         if (vers->name[0] == '.') {
             memmove(vers->name, vers->name+1, strlen(vers->name));
-#ifdef ARCH_RELOCATES_KCRCTAB
-            /* The TOC symbol has no CRC computed. To avoid CRC
-             * check failing, we must force it to the expected
-             * value (see CRC check in module.c).
-             */
-            if (!strcmp(vers->name, "TOC."))
-                vers->crc = -(unsigned long)reloc_start;
-#endif
         }
 }
···
     if (unlikely(debugger_fault_handler(regs)))
         goto bail;
 
-    /* On a kernel SLB miss we can only check for a valid exception entry */
-    if (!user_mode(regs) && (address >= TASK_SIZE)) {
+    /*
+     * The kernel should never take an execute fault nor should it
+     * take a page fault to a kernel address.
+     */
+    if (!user_mode(regs) && (is_exec || (address >= TASK_SIZE))) {
         rc = SIGSEGV;
         goto bail;
     }
···
 #endif /* CONFIG_8xx */
 
     if (is_exec) {
-        /*
-         * An execution fault + no execute ?
-         *
-         * On CPUs that don't have CPU_FTR_COHERENT_ICACHE we
-         * deliberately create NX mappings, and use the fault to do the
-         * cache flush. This is usually handled in hash_page_do_lazy_icache()
-         * but we could end up here if that races with a concurrent PTE
-         * update. In that case we need to fall through here to the VMA
-         * check below.
-         */
-        if (cpu_has_feature(CPU_FTR_COHERENT_ICACHE) &&
-                (regs->msr & SRR1_ISI_N_OR_G))
-            goto bad_area;
-
         /*
          * Allow execution from readable areas if the MMU does not
          * provide separate controls over reading and executing.
···
     for (set = 0; set < POWER9_TLB_SETS_RADIX ; set++) {
         __tlbiel_pid(pid, set, ric);
     }
-    if (cpu_has_feature(CPU_FTR_POWER9_DD1))
-        asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
-    return;
+    asm volatile(PPC_INVALIDATE_ERAT "; isync" : : :"memory");
 }
 
 static inline void _tlbie_pid(unsigned long pid, unsigned long ric)
···
     asm volatile(PPC_TLBIEL(%0, %4, %3, %2, %1)
              : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
     asm volatile("ptesync": : :"memory");
-    if (cpu_has_feature(CPU_FTR_POWER9_DD1))
-        asm volatile(PPC_INVALIDATE_ERAT : : :"memory");
 }
 
 static inline void _tlbie_va(unsigned long va, unsigned long pid,
arch/powerpc/platforms/powernv/smp.c (+10 -2)

···
         wmask = SRR1_WAKEMASK_P8;
 
     idle_states = pnv_get_supported_cpuidle_states();
+
     /* We don't want to take decrementer interrupts while we are offline,
-     * so clear LPCR:PECE1. We keep PECE2 enabled.
+     * so clear LPCR:PECE1. We keep PECE2 (and LPCR_PECE_HVEE on P9)
+     * enabled as to let IPIs in.
      */
     mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1);
···
      * contains 0.
      */
     if (((srr1 & wmask) == SRR1_WAKEEE) ||
+        ((srr1 & wmask) == SRR1_WAKEHVI) ||
         (local_paca->irq_happened & PACA_IRQ_EE)) {
-        icp_native_flush_interrupt();
+        if (cpu_has_feature(CPU_FTR_ARCH_300))
+            icp_opal_flush_interrupt();
+        else
+            icp_native_flush_interrupt();
     } else if ((srr1 & wmask) == SRR1_WAKEHDBELL) {
         unsigned long msg = PPC_DBELL_TYPE(PPC_DBELL_SERVER);
         asm volatile(PPC_MSGCLR(%0) : : "r" (msg));
···
         if (srr1 && !generic_check_cpu_restart(cpu))
             DBG("CPU%d Unexpected exit while offline !\n", cpu);
     }
+
+    /* Re-enable decrementer interrupts */
     mtspr(SPRN_LPCR, mfspr(SPRN_LPCR) | LPCR_PECE1);
     DBG("CPU%d coming online...\n", cpu);
 }
arch/powerpc/sysdev/xics/icp-opal.c (+33 -2)

···
 {
     int hw_cpu = get_hard_smp_processor_id(cpu);
 
+    kvmppc_set_host_ipi(cpu, 1);
     opal_int_set_mfrr(hw_cpu, IPI_PRIORITY);
 }
 
 static irqreturn_t icp_opal_ipi_action(int irq, void *dev_id)
 {
-    int hw_cpu = hard_smp_processor_id();
+    int cpu = smp_processor_id();
 
-    opal_int_set_mfrr(hw_cpu, 0xff);
+    kvmppc_set_host_ipi(cpu, 0);
+    opal_int_set_mfrr(get_hard_smp_processor_id(cpu), 0xff);
 
     return smp_ipi_demux();
+}
+
+/*
+ * Called when an interrupt is received on an off-line CPU to
+ * clear the interrupt, so that the CPU can go back to nap mode.
+ */
+void icp_opal_flush_interrupt(void)
+{
+    unsigned int xirr;
+    unsigned int vec;
+
+    do {
+        xirr = icp_opal_get_xirr();
+        vec = xirr & 0x00ffffff;
+        if (vec == XICS_IRQ_SPURIOUS)
+            break;
+        if (vec == XICS_IPI) {
+            /* Clear pending IPI */
+            int cpu = smp_processor_id();
+            kvmppc_set_host_ipi(cpu, 0);
+            opal_int_set_mfrr(get_hard_smp_processor_id(cpu), 0xff);
+        } else {
+            pr_err("XICS: hw interrupt 0x%x to offline cpu, "
+                   "disabling\n", vec);
+            xics_mask_unknown_vec(vec);
+        }
+
+        /* EOI the interrupt */
+    } while (opal_int_eoi(xirr) > 0);
 }
 
 #endif /* CONFIG_SMP */
arch/x86/crypto/aesni-intel_glue.c (+4 -4)

···
                     aesni_simd_skciphers[i]; i++)
         simd_skcipher_free(aesni_simd_skciphers[i]);
 
-    for (i = 0; i < ARRAY_SIZE(aesni_simd_skciphers2) &&
-            aesni_simd_skciphers2[i].simd; i++)
-        simd_skcipher_free(aesni_simd_skciphers2[i].simd);
+    for (i = 0; i < ARRAY_SIZE(aesni_simd_skciphers2); i++)
+        if (aesni_simd_skciphers2[i].simd)
+            simd_skcipher_free(aesni_simd_skciphers2[i].simd);
 }
 
 static int __init aesni_init(void)
···
         simd = simd_skcipher_create_compat(algname, drvname, basename);
         err = PTR_ERR(simd);
         if (IS_ERR(simd))
-            goto unregister_simds;
+            continue;
 
         aesni_simd_skciphers2[i].simd = simd;
     }
arch/x86/include/asm/processor.h (+1)

···
     __u8            x86_phys_bits;
     /* CPUID returned core id bits: */
     __u8            x86_coreid_bits;
+    __u8            cu_id;
     /* Max extended CPUID function supported: */
     __u32           extended_cpuid_level;
     /* Maximum supported CPUID level, -1=no CPUID: */
···
     int cpu1 = c->cpu_index, cpu2 = o->cpu_index;
 
     if (c->phys_proc_id == o->phys_proc_id &&
-        per_cpu(cpu_llc_id, cpu1) == per_cpu(cpu_llc_id, cpu2) &&
-        c->cpu_core_id == o->cpu_core_id)
-        return topology_sane(c, o, "smt");
+        per_cpu(cpu_llc_id, cpu1) == per_cpu(cpu_llc_id, cpu2)) {
+        if (c->cpu_core_id == o->cpu_core_id)
+            return topology_sane(c, o, "smt");
+
+        if ((c->cu_id != 0xff) &&
+            (o->cu_id != 0xff) &&
+            (c->cu_id == o->cu_id))
+            return topology_sane(c, o, "smt");
+    }
 
     } else if (c->phys_proc_id == o->phys_proc_id &&
            c->cpu_core_id == o->cpu_core_id) {
arch/x86/kernel/tsc.c (+3 -2)

···
         (unsigned long)cpu_khz / 1000,
         (unsigned long)cpu_khz % 1000);
 
+    /* Sanitize TSC ADJUST before cyc2ns gets initialized */
+    tsc_store_and_check_tsc_adjust(true);
+
     /*
      * Secondary CPUs do not run through tsc_init(), so set up
      * all the scale factors for all CPUs, assuming the same
···
 
     if (unsynchronized_tsc())
         mark_tsc_unstable("TSCs unsynchronized");
-    else
-        tsc_store_and_check_tsc_adjust(true);
 
     check_system_tsc_reliable();
arch/x86/kernel/tsc_sync.c (+7 -9)

···
     if (unsynchronized_tsc())
         return;
 
-    if (tsc_clocksource_reliable) {
-        if (cpu == (nr_cpu_ids-1) || system_state != SYSTEM_BOOTING)
-            pr_info(
-            "Skipped synchronization checks as TSC is reliable.\n");
-        return;
-    }
-
     /*
      * Set the maximum number of test runs to
      *  1 if the CPU does not provide the TSC_ADJUST MSR
···
     int cpus = 2;
 
     /* Also aborts if there is no TSC. */
-    if (unsynchronized_tsc() || tsc_clocksource_reliable)
+    if (unsynchronized_tsc())
         return;
 
     /*
      * Store, verify and sanitize the TSC adjust register. If
      * successful skip the test.
+     *
+     * The test is also skipped when the TSC is marked reliable. This
+     * is true for SoCs which have no fallback clocksource. On these
+     * SoCs the TSC is frequency synchronized, but still the TSC ADJUST
+     * register might have been wreckaged by the BIOS..
      */
-    if (tsc_store_and_check_tsc_adjust(false)) {
+    if (tsc_store_and_check_tsc_adjust(false) || tsc_clocksource_reliable) {
         atomic_inc(&skip_test);
         return;
     }
···
 static int brcm_avs_suspend(struct cpufreq_policy *policy)
 {
     struct private_data *priv = policy->driver_data;
+    int ret;
 
-    return brcm_avs_get_pmap(priv, &priv->pmap);
+    ret = brcm_avs_get_pmap(priv, &priv->pmap);
+    if (ret)
+        return ret;
+
+    /*
+     * We can't use the P-state returned by brcm_avs_get_pmap(), since
+     * that's the initial P-state from when the P-map was downloaded to the
+     * AVS co-processor, not necessarily the P-state we are running at now.
+     * So, we get the current P-state explicitly.
+     */
+    return brcm_avs_get_pstate(priv, &priv->pmap.state);
 }
 
 static int brcm_avs_resume(struct cpufreq_policy *policy)
···
     brcm_avs_parse_p1(pmap.p1, &mdiv_p0, &pdiv, &ndiv);
     brcm_avs_parse_p2(pmap.p2, &mdiv_p1, &mdiv_p2, &mdiv_p3, &mdiv_p4);
 
-    return sprintf(buf, "0x%08x 0x%08x %u %u %u %u %u %u %u\n",
+    return sprintf(buf, "0x%08x 0x%08x %u %u %u %u %u %u %u %u %u\n",
         pmap.p1, pmap.p2, ndiv, pdiv, mdiv_p0, mdiv_p1, mdiv_p2,
-        mdiv_p3, mdiv_p4);
+        mdiv_p3, mdiv_p4, pmap.mode, pmap.state);
 }
 
 static ssize_t show_brcm_avs_voltage(struct cpufreq_policy *policy, char *buf)
drivers/cpufreq/intel_pstate.c (+30)

···
     cpudata->epp_default = intel_pstate_get_epp(cpudata, 0);
 }
 
+#define MSR_IA32_POWER_CTL_BIT_EE    19
+
+/* Disable energy efficiency optimization */
+static void intel_pstate_disable_ee(int cpu)
+{
+    u64 power_ctl;
+    int ret;
+
+    ret = rdmsrl_on_cpu(cpu, MSR_IA32_POWER_CTL, &power_ctl);
+    if (ret)
+        return;
+
+    if (!(power_ctl & BIT(MSR_IA32_POWER_CTL_BIT_EE))) {
+        pr_info("Disabling energy efficiency optimization\n");
+        power_ctl |= BIT(MSR_IA32_POWER_CTL_BIT_EE);
+        wrmsrl_on_cpu(cpu, MSR_IA32_POWER_CTL, power_ctl);
+    }
+}
+
 static int atom_get_min_pstate(void)
 {
     u64 value;
···
     {}
 };
 
+static const struct x86_cpu_id intel_pstate_cpu_ee_disable_ids[] = {
+    ICPU(INTEL_FAM6_KABYLAKE_DESKTOP, core_params),
+    {}
+};
+
 static int intel_pstate_init_cpu(unsigned int cpunum)
 {
     struct cpudata *cpu;
···
     cpu->cpu = cpunum;
 
     if (hwp_active) {
+        const struct x86_cpu_id *id;
+
+        id = x86_match_cpu(intel_pstate_cpu_ee_disable_ids);
+        if (id)
+            intel_pstate_disable_ee(cpunum);
+
         intel_pstate_hwp_enable(cpu);
         pid_params.sample_rate_ms = 50;
         pid_params.sample_rate_ns = 50 * NSEC_PER_MSEC;
drivers/crypto/chelsio/chcr_core.c
···
 int assign_chcr_device(struct chcr_dev **dev)
 {
 	struct uld_ctx *u_ctx;
+	int ret = -ENXIO;
 
 	/*
 	 * Which device to use if multiple devices are available TODO
···
 	 * must go to the same device to maintain the ordering.
 	 */
 	mutex_lock(&dev_mutex); /* TODO ? */
-	u_ctx = list_first_entry(&uld_ctx_list, struct uld_ctx, entry);
-	if (!u_ctx) {
-		mutex_unlock(&dev_mutex);
-		return -ENXIO;
+	list_for_each_entry(u_ctx, &uld_ctx_list, entry)
+		if (u_ctx && u_ctx->dev) {
+			*dev = u_ctx->dev;
+			ret = 0;
+			break;
 		}
-
-	*dev = u_ctx->dev;
 	mutex_unlock(&dev_mutex);
-	return 0;
+	return ret;
 }
 
 static int chcr_dev_add(struct uld_ctx *u_ctx)
···
 
 static int __init chcr_crypto_init(void)
 {
-	if (cxgb4_register_uld(CXGB4_ULD_CRYPTO, &chcr_uld_info)) {
+	if (cxgb4_register_uld(CXGB4_ULD_CRYPTO, &chcr_uld_info))
 		pr_err("ULD register fail: No chcr crypto support in cxgb4");
-		return -1;
-	}
 
 	return 0;
 }
drivers/crypto/qat/qat_common/qat_hal.c
···
 	unsigned int csr_val;
 	int times = 30;
 
-	if (handle->pci_dev->device == ADF_C3XXX_PCI_DEVICE_ID)
+	if (handle->pci_dev->device != ADF_DH895XCC_PCI_DEVICE_ID)
 		return 0;
 
 	csr_val = ADF_CSR_RD(csr_addr, 0);
···
 		(void __iomem *)((uintptr_t)handle->hal_cap_ae_xfer_csr_addr_v +
 		LOCAL_TO_XFER_REG_OFFSET);
 	handle->pci_dev = pci_info->pci_dev;
-	if (handle->pci_dev->device != ADF_C3XXX_PCI_DEVICE_ID) {
+	if (handle->pci_dev->device == ADF_DH895XCC_PCI_DEVICE_ID) {
 		sram_bar =
 			&pci_info->pci_bars[hw_data->get_sram_bar_id(hw_data)];
 		handle->hal_sram_addr_v = sram_bar->virt_addr;
+3-1
drivers/gpu/drm/amd/amdgpu/gmc_v6_0.c
···
 	}
 	WREG32(mmHDP_REG_COHERENCY_FLUSH_CNTL, 0);
 
+	if (adev->mode_info.num_crtc)
+		amdgpu_display_set_vga_render_state(adev, false);
+
 	gmc_v6_0_mc_stop(adev, &save);
 
 	if (gmc_v6_0_wait_for_idle((void *)adev)) {
···
 		dev_warn(adev->dev, "Wait for MC idle timedout !\n");
 	}
 	gmc_v6_0_mc_resume(adev, &save);
-	amdgpu_display_set_vga_render_state(adev, false);
 }
 
 static int gmc_v6_0_mc_init(struct amdgpu_device *adev)
+8-5
drivers/gpu/drm/drm_atomic.c
···
 	}
 
 	for_each_crtc_in_state(state, crtc, crtc_state, i) {
+		struct drm_pending_vblank_event *event = crtc_state->event;
 		/*
-		 * TEST_ONLY and PAGE_FLIP_EVENT are mutually
-		 * exclusive, if they weren't, this code should be
-		 * called on success for TEST_ONLY too.
+		 * Free the allocated event. drm_atomic_helper_setup_commit
+		 * can allocate an event too, so only free it if it's ours
+		 * to prevent a double free in drm_atomic_state_clear.
 		 */
-		if (crtc_state->event)
-			drm_event_cancel_free(dev, &crtc_state->event->base);
+		if (event && (event->base.fence || event->base.file_priv)) {
+			drm_event_cancel_free(dev, &event->base);
+			crtc_state->event = NULL;
+		}
 	}
 
 	if (!fence_state)
-9
drivers/gpu/drm/drm_atomic_helper.c
···
 
 		funcs = plane->helper_private;
 
-		if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
-			continue;
-
 		if (funcs->prepare_fb) {
 			ret = funcs->prepare_fb(plane, plane_state);
 			if (ret)
···
 		const struct drm_plane_helper_funcs *funcs;
 
 		if (j >= i)
-			continue;
-
-		if (!drm_atomic_helper_framebuffer_changed(dev, state, plane_state->crtc))
 			continue;
 
 		funcs = plane->helper_private;
···
 
 	for_each_plane_in_state(old_state, plane, plane_state, i) {
 		const struct drm_plane_helper_funcs *funcs;
-
-		if (!drm_atomic_helper_framebuffer_changed(dev, old_state, plane_state->crtc))
-			continue;
 
 		funcs = plane->helper_private;
 
+18-5
drivers/gpu/drm/drm_connector.c
···
 
 	INIT_LIST_HEAD(&connector->probed_modes);
 	INIT_LIST_HEAD(&connector->modes);
+	mutex_init(&connector->mutex);
 	connector->edid_blob_ptr = NULL;
 	connector->status = connector_status_unknown;
 
···
 		connector->funcs->atomic_destroy_state(connector,
 						       connector->state);
 
+	mutex_destroy(&connector->mutex);
+
 	memset(connector, 0, sizeof(*connector));
 }
 EXPORT_SYMBOL(drm_connector_cleanup);
···
  */
 int drm_connector_register(struct drm_connector *connector)
 {
-	int ret;
+	int ret = 0;
 
-	if (connector->registered)
+	if (!connector->dev->registered)
 		return 0;
+
+	mutex_lock(&connector->mutex);
+	if (connector->registered)
+		goto unlock;
 
 	ret = drm_sysfs_connector_add(connector);
 	if (ret)
-		return ret;
+		goto unlock;
 
 	ret = drm_debugfs_connector_add(connector);
 	if (ret) {
···
 	drm_mode_object_register(connector->dev, &connector->base);
 
 	connector->registered = true;
-	return 0;
+	goto unlock;
 
 err_debugfs:
 	drm_debugfs_connector_remove(connector);
 err_sysfs:
 	drm_sysfs_connector_remove(connector);
+unlock:
+	mutex_unlock(&connector->mutex);
 	return ret;
 }
 EXPORT_SYMBOL(drm_connector_register);
···
  */
 void drm_connector_unregister(struct drm_connector *connector)
 {
-	if (!connector->registered)
+	mutex_lock(&connector->mutex);
+	if (!connector->registered) {
+		mutex_unlock(&connector->mutex);
 		return;
+	}
 
 	if (connector->funcs->early_unregister)
 		connector->funcs->early_unregister(connector);
···
 	drm_debugfs_connector_remove(connector);
 
 	connector->registered = false;
+	mutex_unlock(&connector->mutex);
 }
 EXPORT_SYMBOL(drm_connector_unregister);
 
+4
drivers/gpu/drm/drm_drv.c
···
 	if (ret)
 		goto err_minors;
 
+	dev->registered = true;
+
 	if (dev->driver->load) {
 		ret = dev->driver->load(dev, flags);
 		if (ret)
···
 	struct drm_map_list *r_list, *list_temp;
 
 	drm_lastclose(dev);
+
+	dev->registered = false;
 
 	if (drm_core_check_feature(dev, DRIVER_MODESET))
 		drm_modeset_unregister_all(dev);
+3-1
drivers/gpu/drm/i915/i915_drv.c
···
 	} else if (id == INTEL_PCH_KBP_DEVICE_ID_TYPE) {
 		dev_priv->pch_type = PCH_KBP;
 		DRM_DEBUG_KMS("Found KabyPoint PCH\n");
-		WARN_ON(!IS_KABYLAKE(dev_priv));
+		WARN_ON(!IS_SKYLAKE(dev_priv) &&
+			!IS_KABYLAKE(dev_priv));
 	} else if ((id == INTEL_PCH_P2X_DEVICE_ID_TYPE) ||
 		   (id == INTEL_PCH_P3X_DEVICE_ID_TYPE) ||
 		   ((id == INTEL_PCH_QEMU_DEVICE_ID_TYPE) &&
···
 	 * we can do is to hope that things will still work (and disable RPM).
 	 */
 	i915_gem_init_swizzling(dev_priv);
+	i915_gem_restore_fences(dev_priv);
 
 	intel_runtime_pm_enable_interrupts(dev_priv);
 
drivers/gpu/drm/i915/i915_gem.c
···
 	for (i = 0; i < dev_priv->num_fence_regs; i++) {
 		struct drm_i915_fence_reg *reg = &dev_priv->fence_regs[i];
 
-		if (WARN_ON(reg->pin_count))
-			continue;
+		/* Ideally we want to assert that the fence register is not
+		 * live at this point (i.e. that no piece of code will be
+		 * trying to write through fence + GTT, as that both violates
+		 * our tracking of activity and associated locking/barriers,
+		 * but also is illegal given that the hw is powered down).
+		 *
+		 * Previously we used reg->pin_count as a "liveness" indicator.
+		 * That is not sufficient, and we need a more fine-grained
+		 * tool if we want to have a sanity check here.
+		 */
 
 		if (!reg->vma)
 			continue;
···
 	vma->display_alignment = max_t(u64, vma->display_alignment, alignment);
 
 	/* Treat this as an end-of-frame, like intel_user_framebuffer_dirty() */
-	if (obj->cache_dirty) {
+	if (obj->cache_dirty || obj->base.write_domain == I915_GEM_DOMAIN_CPU) {
 		i915_gem_clflush_object(obj, true);
 		intel_fb_obj_flush(obj, false, ORIGIN_DIRTYFB);
 	}
+6-6
drivers/gpu/drm/i915/i915_gem_execbuffer.c
···
 		if (exec[i].offset !=
 		    gen8_canonical_addr(exec[i].offset & PAGE_MASK))
 			return -EINVAL;
-
-		/* From drm_mm perspective address space is continuous,
-		 * so from this point we're always using non-canonical
-		 * form internally.
-		 */
-		exec[i].offset = gen8_noncanonical_addr(exec[i].offset);
 	}
+
+	/* From drm_mm perspective address space is continuous,
+	 * so from this point we're always using non-canonical
+	 * form internally.
+	 */
+	exec[i].offset = gen8_noncanonical_addr(exec[i].offset);
 
 	if (exec[i].alignment && !is_power_of_2(exec[i].alignment))
 		return -EINVAL;
+10-2
drivers/gpu/drm/i915/i915_gem_internal.c
···
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (swiotlb_nr_tbl()) /* minimum max swiotlb size is IO_TLB_SEGSIZE */
-		max_order = min(max_order, ilog2(IO_TLB_SEGPAGES));
+	if (swiotlb_nr_tbl()) {
+		unsigned int max_segment;
+
+		max_segment = swiotlb_max_segment();
+		if (max_segment) {
+			max_segment = max_t(unsigned int, max_segment,
+					    PAGE_SIZE) >> PAGE_SHIFT;
+			max_order = min(max_order, ilog2(max_segment));
+		}
+	}
 #endif
 
 	gfp = GFP_KERNEL | __GFP_HIGHMEM | __GFP_RECLAIMABLE;
+20
drivers/gpu/drm/i915/intel_atomic_plane.c
···
 
 	__drm_atomic_helper_plane_duplicate_state(plane, state);
 
+	intel_state->vma = NULL;
+
 	return state;
 }
 
···
 intel_plane_destroy_state(struct drm_plane *plane,
 			  struct drm_plane_state *state)
 {
+	struct i915_vma *vma;
+
+	vma = fetch_and_zero(&to_intel_plane_state(state)->vma);
+
+	/*
+	 * FIXME: Normally intel_cleanup_plane_fb handles destruction of vma.
+	 * We currently don't clear all planes during driver unload, so we have
+	 * to be able to unpin vma here for now.
+	 *
+	 * Normally this can only happen during unload when kmscon is disabled
+	 * and userspace doesn't attempt to set a framebuffer at all.
+	 */
+	if (vma) {
+		mutex_lock(&plane->dev->struct_mutex);
+		intel_unpin_fb_vma(vma);
+		mutex_unlock(&plane->dev->struct_mutex);
+	}
+
 	drm_atomic_helper_plane_destroy_state(plane, state);
 }
 
+44-85
drivers/gpu/drm/i915/intel_display.c
···
 		i915_vma_pin_fence(vma);
 	}
 
+	i915_vma_get(vma);
 err:
 	intel_runtime_pm_put(dev_priv);
 	return vma;
 }
 
-void intel_unpin_fb_obj(struct drm_framebuffer *fb, unsigned int rotation)
+void intel_unpin_fb_vma(struct i915_vma *vma)
 {
-	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
-	struct i915_ggtt_view view;
-	struct i915_vma *vma;
-
-	WARN_ON(!mutex_is_locked(&obj->base.dev->struct_mutex));
-
-	intel_fill_fb_ggtt_view(&view, fb, rotation);
-	vma = i915_gem_object_to_ggtt(obj, &view);
+	lockdep_assert_held(&vma->vm->dev->struct_mutex);
 
 	if (WARN_ON_ONCE(!vma))
 		return;
 
 	i915_vma_unpin_fence(vma);
 	i915_gem_object_unpin_from_display_plane(vma);
+	i915_vma_put(vma);
 }
 
 static int intel_fb_pitch(const struct drm_framebuffer *fb, int plane,
···
 	struct drm_device *dev = intel_crtc->base.dev;
 	struct drm_i915_private *dev_priv = to_i915(dev);
 	struct drm_crtc *c;
-	struct intel_crtc *i;
 	struct drm_i915_gem_object *obj;
 	struct drm_plane *primary = intel_crtc->base.primary;
 	struct drm_plane_state *plane_state = primary->state;
···
 	 * an fb with another CRTC instead
 	 */
 	for_each_crtc(dev, c) {
-		i = to_intel_crtc(c);
+		struct intel_plane_state *state;
 
 		if (c == &intel_crtc->base)
 			continue;
 
-		if (!i->active)
+		if (!to_intel_crtc(c)->active)
 			continue;
 
-		fb = c->primary->fb;
-		if (!fb)
+		state = to_intel_plane_state(c->primary->state);
+		if (!state->vma)
 			continue;
 
-		obj = intel_fb_obj(fb);
-		if (i915_gem_object_ggtt_offset(obj, NULL) == plane_config->base) {
+		if (intel_plane_ggtt_offset(state) == plane_config->base) {
+			fb = c->primary->fb;
 			drm_framebuffer_reference(fb);
 			goto valid_fb;
 		}
···
 	return;
 
 valid_fb:
+	mutex_lock(&dev->struct_mutex);
+	intel_state->vma =
+		intel_pin_and_fence_fb_obj(fb, primary->state->rotation);
+	mutex_unlock(&dev->struct_mutex);
+	if (IS_ERR(intel_state->vma)) {
+		DRM_ERROR("failed to pin boot fb on pipe %d: %li\n",
+			  intel_crtc->pipe, PTR_ERR(intel_state->vma));
+
+		intel_state->vma = NULL;
+		drm_framebuffer_unreference(fb);
+		return;
+	}
+
 	plane_state->src_x = 0;
 	plane_state->src_y = 0;
 	plane_state->src_w = fb->width << 16;
···
 	I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
 	if (INTEL_GEN(dev_priv) >= 4) {
 		I915_WRITE(DSPSURF(plane),
-			   intel_fb_gtt_offset(fb, rotation) +
+			   intel_plane_ggtt_offset(plane_state) +
 			   intel_crtc->dspaddr_offset);
 		I915_WRITE(DSPTILEOFF(plane), (y << 16) | x);
 		I915_WRITE(DSPLINOFF(plane), linear_offset);
 	} else {
 		I915_WRITE(DSPADDR(plane),
-			   intel_fb_gtt_offset(fb, rotation) +
+			   intel_plane_ggtt_offset(plane_state) +
 			   intel_crtc->dspaddr_offset);
 	}
 	POSTING_READ(reg);
···
 
 	I915_WRITE(DSPSTRIDE(plane), fb->pitches[0]);
 	I915_WRITE(DSPSURF(plane),
-		   intel_fb_gtt_offset(fb, rotation) +
+		   intel_plane_ggtt_offset(plane_state) +
 		   intel_crtc->dspaddr_offset);
 	if (IS_HASWELL(dev_priv) || IS_BROADWELL(dev_priv)) {
 		I915_WRITE(DSPOFFSET(plane), (y << 16) | x);
···
 
 		return intel_tile_width_bytes(dev_priv, fb_modifier, cpp);
 	}
-}
-
-u32 intel_fb_gtt_offset(struct drm_framebuffer *fb,
-			unsigned int rotation)
-{
-	struct drm_i915_gem_object *obj = intel_fb_obj(fb);
-	struct i915_ggtt_view view;
-	struct i915_vma *vma;
-
-	intel_fill_fb_ggtt_view(&view, fb, rotation);
-
-	vma = i915_gem_object_to_ggtt(obj, &view);
-	if (WARN(!vma, "ggtt vma for display object not found! (view=%u)\n",
-		 view.type))
-		return -1;
-
-	return i915_ggtt_offset(vma);
 }
 
 static void skl_detach_scaler(struct intel_crtc *intel_crtc, int id)
···
 	}
 
 	I915_WRITE(PLANE_SURF(pipe, 0),
-		   intel_fb_gtt_offset(fb, rotation) + surf_addr);
+		   intel_plane_ggtt_offset(plane_state) + surf_addr);
 
 	POSTING_READ(PLANE_SURF(pipe, 0));
 }
···
 	drm_crtc_vblank_put(&intel_crtc->base);
 
 	wake_up_all(&dev_priv->pending_flip_queue);
-	queue_work(dev_priv->wq, &work->unpin_work);
-
 	trace_i915_flip_complete(intel_crtc->plane,
 				 work->pending_flip_obj);
+
+	queue_work(dev_priv->wq, &work->unpin_work);
 }
 
 static int intel_crtc_wait_for_pending_flips(struct drm_crtc *crtc)
···
 	flush_work(&work->mmio_work);
 
 	mutex_lock(&dev->struct_mutex);
-	intel_unpin_fb_obj(work->old_fb, primary->state->rotation);
+	intel_unpin_fb_vma(work->old_vma);
 	i915_gem_object_put(work->pending_flip_obj);
 	mutex_unlock(&dev->struct_mutex);
 
···
 		goto cleanup_pending;
 	}
 
-	work->gtt_offset = intel_fb_gtt_offset(fb, primary->state->rotation);
-	work->gtt_offset += intel_crtc->dspaddr_offset;
+	work->old_vma = to_intel_plane_state(primary->state)->vma;
+	to_intel_plane_state(primary->state)->vma = vma;
+
+	work->gtt_offset = i915_ggtt_offset(vma) + intel_crtc->dspaddr_offset;
 	work->rotation = crtc->primary->state->rotation;
 
 	/*
···
 cleanup_request:
 	i915_add_request_no_flush(request);
 cleanup_unpin:
-	intel_unpin_fb_obj(fb, crtc->primary->state->rotation);
+	to_intel_plane_state(primary->state)->vma = work->old_vma;
+	intel_unpin_fb_vma(vma);
 cleanup_pending:
 	atomic_dec(&intel_crtc->unpin_work_count);
 unlock:
···
 			DRM_DEBUG_KMS("failed to pin object\n");
 			return PTR_ERR(vma);
 		}
+
+		to_intel_plane_state(new_state)->vma = vma;
 	}
 
 	return 0;
···
 intel_cleanup_plane_fb(struct drm_plane *plane,
 		       struct drm_plane_state *old_state)
 {
-	struct drm_i915_private *dev_priv = to_i915(plane->dev);
-	struct intel_plane_state *old_intel_state;
-	struct drm_i915_gem_object *old_obj = intel_fb_obj(old_state->fb);
-	struct drm_i915_gem_object *obj = intel_fb_obj(plane->state->fb);
+	struct i915_vma *vma;
 
-	old_intel_state = to_intel_plane_state(old_state);
-
-	if (!obj && !old_obj)
-		return;
-
-	if (old_obj && (plane->type != DRM_PLANE_TYPE_CURSOR ||
-	    !INTEL_INFO(dev_priv)->cursor_needs_physical))
-		intel_unpin_fb_obj(old_state->fb, old_state->rotation);
+	/* Should only be called after a successful intel_prepare_plane_fb()! */
+	vma = fetch_and_zero(&to_intel_plane_state(old_state)->vma);
+	if (vma)
+		intel_unpin_fb_vma(vma);
 }
 
 int
···
 	if (!obj)
 		addr = 0;
 	else if (!INTEL_INFO(dev_priv)->cursor_needs_physical)
-		addr = i915_gem_object_ggtt_offset(obj, NULL);
+		addr = intel_plane_ggtt_offset(state);
 	else
 		addr = obj->phys_handle->busaddr;
 
···
 void intel_modeset_gem_init(struct drm_device *dev)
 {
 	struct drm_i915_private *dev_priv = to_i915(dev);
-	struct drm_crtc *c;
-	struct drm_i915_gem_object *obj;
 
 	intel_init_gt_powersave(dev_priv);
 
 	intel_modeset_init_hw(dev);
 
 	intel_setup_overlay(dev_priv);
-
-	/*
-	 * Make sure any fbs we allocated at startup are properly
-	 * pinned & fenced. When we do the allocation it's too early
-	 * for this.
-	 */
-	for_each_crtc(dev, c) {
-		struct i915_vma *vma;
-
-		obj = intel_fb_obj(c->primary->fb);
-		if (obj == NULL)
-			continue;
-
-		mutex_lock(&dev->struct_mutex);
-		vma = intel_pin_and_fence_fb_obj(c->primary->fb,
-						 c->primary->state->rotation);
-		mutex_unlock(&dev->struct_mutex);
-		if (IS_ERR(vma)) {
-			DRM_ERROR("failed to pin boot fb on pipe %d\n",
-				  to_intel_crtc(c)->pipe);
-			drm_framebuffer_unreference(c->primary->fb);
-			c->primary->fb = NULL;
-			c->primary->crtc = c->primary->state->crtc = NULL;
-			update_state_fb(c->primary);
-			c->state->plane_mask &= ~(1 << drm_plane_index(c->primary));
-		}
-	}
 }
 
 int intel_connector_register(struct drm_connector *connector)
drivers/iio/health/max30100.c
···
 
 	mutex_lock(&data->lock);
 
-	while (cnt || (cnt = max30100_fifo_count(data) > 0)) {
+	while (cnt || (cnt = max30100_fifo_count(data)) > 0) {
 		ret = max30100_read_measurement(data);
 		if (ret)
 			break;
+4-2
drivers/iio/humidity/dht11.c
···
  *   a) select an implementation using busy loop polling on those systems
  *   b) use the checksum to do some probabilistic decoding
  */
-#define DHT11_START_TRANSMISSION	18  /* ms */
+#define DHT11_START_TRANSMISSION_MIN	18000  /* us */
+#define DHT11_START_TRANSMISSION_MAX	20000  /* us */
 #define DHT11_MIN_TIMERES	34000  /* ns */
 #define DHT11_THRESHOLD	49000  /* ns */
 #define DHT11_AMBIG_LOW	23000  /* ns */
···
 	ret = gpio_direction_output(dht11->gpio, 0);
 	if (ret)
 		goto err;
-	msleep(DHT11_START_TRANSMISSION);
+	usleep_range(DHT11_START_TRANSMISSION_MIN,
+		     DHT11_START_TRANSMISSION_MAX);
 	ret = gpio_direction_input(dht11->gpio);
 	if (ret)
 		goto err;
drivers/input/misc/uinput.c
···
 		return -EINVAL;
 	}
 
-	if (test_bit(ABS_MT_SLOT, dev->absbit)) {
-		nslot = input_abs_get_max(dev, ABS_MT_SLOT) + 1;
-		error = input_mt_init_slots(dev, nslot, 0);
-		if (error)
+	if (test_bit(EV_ABS, dev->evbit)) {
+		input_alloc_absinfo(dev);
+		if (!dev->absinfo) {
+			error = -EINVAL;
 			goto fail1;
-	} else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) {
-		input_set_events_per_packet(dev, 60);
+		}
+
+		if (test_bit(ABS_MT_SLOT, dev->absbit)) {
+			nslot = input_abs_get_max(dev, ABS_MT_SLOT) + 1;
+			error = input_mt_init_slots(dev, nslot, 0);
+			if (error)
+				goto fail1;
+		} else if (test_bit(ABS_MT_POSITION_X, dev->absbit)) {
+			input_set_events_per_packet(dev, 60);
+		}
 	}
 
 	if (test_bit(EV_FF, dev->evbit) && !udev->ff_effects_max) {
+7-1
drivers/input/rmi4/Kconfig
···
 config RMI4_F03
 	bool "RMI4 Function 03 (PS2 Guest)"
 	depends on RMI4_CORE
-	depends on SERIO=y || RMI4_CORE=SERIO
 	help
 	  Say Y here if you want to add support for RMI4 function 03.
 
 	  Function 03 provides PS2 guest support for RMI4 devices. This
 	  includes support for TrackPoints on TouchPads.
+
+config RMI4_F03_SERIO
+	tristate
+	depends on RMI4_CORE
+	depends on RMI4_F03
+	default RMI4_CORE
+	select SERIO
 
 config RMI4_2D_SENSOR
 	bool
drivers/mmc/host/mmci.c
···
 	if (!host->busy_status && busy_resp &&
 	    !(status & (MCI_CMDCRCFAIL|MCI_CMDTIMEOUT)) &&
 	    (readl(base + MMCISTATUS) & host->variant->busy_detect_flag)) {
-		/* Unmask the busy IRQ */
+
+		/* Clear the busy start IRQ */
+		writel(host->variant->busy_detect_mask,
+		       host->base + MMCICLEAR);
+
+		/* Unmask the busy end IRQ */
 		writel(readl(base + MMCIMASK0) |
 		       host->variant->busy_detect_mask,
 		       base + MMCIMASK0);
···
 
 	/*
 	 * At this point we are not busy with a command, we have
-	 * not received a new busy request, mask the busy IRQ and
-	 * fall through to process the IRQ.
+	 * not received a new busy request, clear and mask the busy
+	 * end IRQ and fall through to process the IRQ.
 	 */
 	if (host->busy_status) {
+
+		writel(host->variant->busy_detect_mask,
+		       host->base + MMCICLEAR);
+
 		writel(readl(base + MMCIMASK0) &
 		       ~host->variant->busy_detect_mask,
 		       base + MMCIMASK0);
···
 	}
 
 	/*
-	 * We intentionally clear the MCI_ST_CARDBUSY IRQ here (if it's
-	 * enabled) since the HW seems to be triggering the IRQ on both
-	 * edges while monitoring DAT0 for busy completion.
+	 * We intentionally clear the MCI_ST_CARDBUSY IRQ (if it's
+	 * enabled) in mmci_cmd_irq() function where ST Micro busy
+	 * detection variant is handled. Considering the HW seems to be
+	 * triggering the IRQ on both edges while monitoring DAT0 for
+	 * busy completion and that same status bit is used to monitor
+	 * start and end of busy detection, special care must be taken
+	 * to make sure that both start and end interrupts are always
+	 * cleared one after the other.
 	 */
 	status &= readl(host->base + MMCIMASK0);
-	writel(status, host->base + MMCICLEAR);
+	if (host->variant->busy_detect)
+		writel(status & ~host->variant->busy_detect_mask,
+		       host->base + MMCICLEAR);
+	else
+		writel(status, host->base + MMCICLEAR);
 
 	dev_dbg(mmc_dev(host->mmc), "irq0 (data+cmd) %08x\n", status);
 
+2-1
drivers/mmc/host/sdhci.c
···
 	if (intmask & SDHCI_INT_RETUNE)
 		mmc_retune_needed(host->mmc);
 
-	if (intmask & SDHCI_INT_CARD_INT) {
+	if ((intmask & SDHCI_INT_CARD_INT) &&
+	    (host->ier & SDHCI_INT_CARD_INT)) {
 		sdhci_enable_sdio_irq_nolock(host, false);
 		host->thread_isr |= SDHCI_INT_CARD_INT;
 		result = IRQ_WAKE_THREAD;
+96-12
drivers/net/ethernet/cavium/thunder/thunder_bgx.c
···
 	u8			lmac_type;
 	u8			lane_to_sds;
 	bool			use_training;
+	bool			autoneg;
 	bool			link_up;
 	int			lmacid; /* ID within BGX */
 	int			lmacid_bd; /* ID on board */
···
 	/* power down, reset autoneg, autoneg enable */
 	cfg = bgx_reg_read(bgx, lmacid, BGX_GMP_PCS_MRX_CTL);
 	cfg &= ~PCS_MRX_CTL_PWR_DN;
-	cfg |= (PCS_MRX_CTL_RST_AN | PCS_MRX_CTL_AN_EN);
+	cfg |= PCS_MRX_CTL_RST_AN;
+	if (lmac->phydev) {
+		cfg |= PCS_MRX_CTL_AN_EN;
+	} else {
+		/* In scenarios where PHY driver is not present or it's a
+		 * non-standard PHY, FW sets AN_EN to inform Linux driver
+		 * to do auto-neg and link polling or not.
+		 */
+		if (cfg & PCS_MRX_CTL_AN_EN)
+			lmac->autoneg = true;
+	}
 	bgx_reg_write(bgx, lmacid, BGX_GMP_PCS_MRX_CTL, cfg);
 
 	if (lmac->lmac_type == BGX_MODE_QSGMII) {
···
 		return 0;
 	}
 
-	if (lmac->lmac_type == BGX_MODE_SGMII) {
+	if ((lmac->lmac_type == BGX_MODE_SGMII) && lmac->phydev) {
 		if (bgx_poll_reg(bgx, lmacid, BGX_GMP_PCS_MRX_STATUS,
 				 PCS_MRX_STATUS_AN_CPT, false)) {
 			dev_err(&bgx->pdev->dev, "BGX AN_CPT not completed\n");
···
 	return -1;
 }
 
+static void bgx_poll_for_sgmii_link(struct lmac *lmac)
+{
+	u64 pcs_link, an_result;
+	u8 speed;
+
+	pcs_link = bgx_reg_read(lmac->bgx, lmac->lmacid,
+				BGX_GMP_PCS_MRX_STATUS);
+
+	/*Link state bit is sticky, read it again*/
+	if (!(pcs_link & PCS_MRX_STATUS_LINK))
+		pcs_link = bgx_reg_read(lmac->bgx, lmac->lmacid,
+					BGX_GMP_PCS_MRX_STATUS);
+
+	if (bgx_poll_reg(lmac->bgx, lmac->lmacid, BGX_GMP_PCS_MRX_STATUS,
+			 PCS_MRX_STATUS_AN_CPT, false)) {
+		lmac->link_up = false;
+		lmac->last_speed = SPEED_UNKNOWN;
+		lmac->last_duplex = DUPLEX_UNKNOWN;
+		goto next_poll;
+	}
+
+	lmac->link_up = ((pcs_link & PCS_MRX_STATUS_LINK) != 0) ? true : false;
+	an_result = bgx_reg_read(lmac->bgx, lmac->lmacid,
+				 BGX_GMP_PCS_ANX_AN_RESULTS);
+
+	speed = (an_result >> 3) & 0x3;
+	lmac->last_duplex = (an_result >> 1) & 0x1;
+	switch (speed) {
+	case 0:
+		lmac->last_speed = 10;
+		break;
+	case 1:
+		lmac->last_speed = 100;
+		break;
+	case 2:
+		lmac->last_speed = 1000;
+		break;
+	default:
+		lmac->link_up = false;
+		lmac->last_speed = SPEED_UNKNOWN;
+		lmac->last_duplex = DUPLEX_UNKNOWN;
+		break;
+	}
+
+next_poll:
+
+	if (lmac->last_link != lmac->link_up) {
+		if (lmac->link_up)
+			bgx_sgmii_change_link_state(lmac);
+		lmac->last_link = lmac->link_up;
+	}
+
+	queue_delayed_work(lmac->check_link, &lmac->dwork, HZ * 3);
+}
+
 static void bgx_poll_for_link(struct work_struct *work)
 {
 	struct lmac *lmac;
 	u64 spu_link, smu_link;
 
 	lmac = container_of(work, struct lmac, dwork.work);
+	if (lmac->is_sgmii) {
+		bgx_poll_for_sgmii_link(lmac);
+		return;
+	}
 
 	/* Receive link is latching low. Force it high and verify it */
 	bgx_reg_modify(lmac->bgx, lmac->lmacid,
···
 	    (lmac->lmac_type != BGX_MODE_XLAUI) &&
 	    (lmac->lmac_type != BGX_MODE_40G_KR) &&
 	    (lmac->lmac_type != BGX_MODE_10G_KR)) {
-		if (!lmac->phydev)
-			return -ENODEV;
-
+		if (!lmac->phydev) {
+			if (lmac->autoneg) {
+				bgx_reg_write(bgx, lmacid,
+					      BGX_GMP_PCS_LINKX_TIMER,
+					      PCS_LINKX_TIMER_COUNT);
+				goto poll;
+			} else {
+				/* Default to below link speed and duplex */
+				lmac->link_up = true;
+				lmac->last_speed = 1000;
+				lmac->last_duplex = 1;
+				bgx_sgmii_change_link_state(lmac);
+				return 0;
+			}
+		}
 		lmac->phydev->dev_flags = 0;
 
 		if (phy_connect_direct(&lmac->netdev, lmac->phydev,
···
 			return -ENODEV;
 
 		phy_start_aneg(lmac->phydev);
-	} else {
-		lmac->check_link = alloc_workqueue("check_link", WQ_UNBOUND |
-						   WQ_MEM_RECLAIM, 1);
-		if (!lmac->check_link)
-			return -ENOMEM;
-		INIT_DELAYED_WORK(&lmac->dwork, bgx_poll_for_link);
-		queue_delayed_work(lmac->check_link, &lmac->dwork, 0);
+		return 0;
 	}
+
+poll:
+	lmac->check_link = alloc_workqueue("check_link", WQ_UNBOUND |
+					   WQ_MEM_RECLAIM, 1);
+	if (!lmac->check_link)
+		return -ENOMEM;
+	INIT_DELAYED_WORK(&lmac->dwork, bgx_poll_for_link);
+	queue_delayed_work(lmac->check_link, &lmac->dwork, 0);
 
 	return 0;
 }
drivers/net/phy/phy_device.c
···
 	struct module *ndev_owner = dev->dev.parent->driver->owner;
 	struct mii_bus *bus = phydev->mdio.bus;
 	struct device *d = &phydev->mdio.dev;
+	bool using_genphy = false;
 	int err;
 
 	/* For Ethernet device drivers that register their own MDIO bus, we
···
 			d->driver =
 				&genphy_driver[GENPHY_DRV_1G].mdiodrv.driver;
 
+		using_genphy = true;
+	}
+
+	if (!try_module_get(d->driver->owner)) {
+		dev_err(&dev->dev, "failed to get the device driver module\n");
+		err = -EIO;
+		goto error_put_device;
+	}
+
+	if (using_genphy) {
 		err = d->driver->probe(d);
 		if (err >= 0)
 			err = device_bind_driver(d);
 
 		if (err)
-			goto error;
+			goto error_module_put;
 	}
 
 	if (phydev->attached_dev) {
···
 	return err;
 
 error:
+	/* phy_detach() does all of the cleanup below */
 	phy_detach(phydev);
+	return err;
+
+error_module_put:
+	module_put(d->driver->owner);
+error_put_device:
 	put_device(d);
 	if (ndev_owner != bus->owner)
 		module_put(bus->owner);
···
 	phy_suspend(phydev);
 
 	phy_led_triggers_unregister(phydev);
+
+	module_put(phydev->mdio.dev.driver->owner);
 
 	/* If the device had no specific driver before (i.e. - it
 	 * was using the generic driver), we unbind the device
+6-4
drivers/net/tun.c
···
 	}
 
 	if (tun->flags & IFF_VNET_HDR) {
-		if (len < tun->vnet_hdr_sz)
+		int vnet_hdr_sz = READ_ONCE(tun->vnet_hdr_sz);
+
+		if (len < vnet_hdr_sz)
 			return -EINVAL;
-		len -= tun->vnet_hdr_sz;
+		len -= vnet_hdr_sz;
 
 		if (!copy_from_iter_full(&gso, sizeof(gso), from))
 			return -EFAULT;
···
 
 		if (tun16_to_cpu(tun, gso.hdr_len) > len)
 			return -EINVAL;
-		iov_iter_advance(from, tun->vnet_hdr_sz - sizeof(gso));
+		iov_iter_advance(from, vnet_hdr_sz - sizeof(gso));
 	}
 
 	if ((tun->flags & TUN_TYPE_MASK) == IFF_TAP) {
···
 		vlan_hlen = VLAN_HLEN;
 
 	if (tun->flags & IFF_VNET_HDR)
-		vnet_hdr_sz = tun->vnet_hdr_sz;
+		vnet_hdr_sz = READ_ONCE(tun->vnet_hdr_sz);
 
 	total = skb->len + vlan_hlen + vnet_hdr_sz;
 
+34-22
drivers/net/usb/catc.c
@@ -776 +776 @@
 	struct net_device *netdev;
 	struct catc *catc;
 	u8 broadcast[ETH_ALEN];
-	int i, pktsz;
+	int pktsz, ret;

 	if (usb_set_interface(usbdev,
 			intf->altsetting->desc.bInterfaceNumber, 1)) {
@@ -811 +811 @@
 	if ((!catc->ctrl_urb) || (!catc->tx_urb) ||
 	    (!catc->rx_urb) || (!catc->irq_urb)) {
 		dev_err(&intf->dev, "No free urbs available.\n");
-		usb_free_urb(catc->ctrl_urb);
-		usb_free_urb(catc->tx_urb);
-		usb_free_urb(catc->rx_urb);
-		usb_free_urb(catc->irq_urb);
-		free_netdev(netdev);
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto fail_free;
 	}

 	/* The F5U011 has the same vendor/product as the netmate but a device version of 0x130 */
@@ -840 +844 @@
 		catc->irq_buf, 2, catc_irq_done, catc, 1);

 	if (!catc->is_f5u011) {
+		u32 *buf;
+		int i;
+
 		dev_dbg(dev, "Checking memory size\n");

-		i = 0x12345678;
-		catc_write_mem(catc, 0x7a80, &i, 4);
-		i = 0x87654321;
-		catc_write_mem(catc, 0xfa80, &i, 4);
-		catc_read_mem(catc, 0x7a80, &i, 4);
+		buf = kmalloc(4, GFP_KERNEL);
+		if (!buf) {
+			ret = -ENOMEM;
+			goto fail_free;
+		}
+
+		*buf = 0x12345678;
+		catc_write_mem(catc, 0x7a80, buf, 4);
+		*buf = 0x87654321;
+		catc_write_mem(catc, 0xfa80, buf, 4);
+		catc_read_mem(catc, 0x7a80, buf, 4);

-		switch (i) {
+		switch (*buf) {
 		case 0x12345678:
 			catc_set_reg(catc, TxBufCount, 8);
 			catc_set_reg(catc, RxBufCount, 32);
@@ -872 +867 @@
 			dev_dbg(dev, "32k Memory\n");
 			break;
 		}
+
+		kfree(buf);

 		dev_dbg(dev, "Getting MAC from SEEROM.\n");

@@ -920 +913 @@
 	usb_set_intfdata(intf, catc);

 	SET_NETDEV_DEV(netdev, &intf->dev);
-	if (register_netdev(netdev) != 0) {
-		usb_set_intfdata(intf, NULL);
-		usb_free_urb(catc->ctrl_urb);
-		usb_free_urb(catc->tx_urb);
-		usb_free_urb(catc->rx_urb);
-		usb_free_urb(catc->irq_urb);
-		free_netdev(netdev);
-		return -EIO;
-	}
+	ret = register_netdev(netdev);
+	if (ret)
+		goto fail_clear_intfdata;
+
 	return 0;
+
+fail_clear_intfdata:
+	usb_set_intfdata(intf, NULL);
+fail_free:
+	usb_free_urb(catc->ctrl_urb);
+	usb_free_urb(catc->tx_urb);
+	usb_free_urb(catc->rx_urb);
+	usb_free_urb(catc->irq_urb);
+	free_netdev(netdev);
+	return ret;
 }

 static void catc_disconnect(struct usb_interface *intf)
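The catc probe rework above does two things: it funnels every failure path through a single goto unwind ladder instead of duplicating the cleanup, and it moves the memory-size probe into a kmalloc'd buffer because USB transfer buffers must be DMA-able heap memory, not an on-stack variable. A userspace sketch of the unwind pattern (function and resource names hypothetical, malloc/free standing in for URB allocation):

```c
#include <assert.h>
#include <stdlib.h>

static int unwound_count;	/* counts how many times the ladder ran on failure */

/* Hypothetical probe() modeled on the catc rework: each setup step jumps
 * to one unwind ladder on failure. free() tolerates NULL just as
 * usb_free_urb() tolerates a NULL urb, so the ladder is safe no matter
 * how far setup got before failing. */
static int fake_probe(int fail_at)
{
	int ret = 0;
	char *ctrl = NULL, *tx = NULL;

	ctrl = malloc(16);
	if (!ctrl || fail_at == 1) {
		ret = -1;
		goto fail_free;
	}

	tx = malloc(16);
	if (!tx || fail_at == 2) {
		ret = -1;
		goto fail_free;
	}

	/* Success. A real driver would keep the resources registered;
	 * this sketch falls through and releases them either way. */
fail_free:
	free(ctrl);
	free(tx);
	if (ret)
		unwound_count++;
	return ret;
}
```

The design point is that the cleanup code exists exactly once, so adding a new resource later cannot leave one early-exit path leaking it.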
+25-4
drivers/net/usb/pegasus.c
@@ -126 +126 @@

 static int get_registers(pegasus_t *pegasus, __u16 indx, __u16 size, void *data)
 {
+	u8 *buf;
 	int ret;
+
+	buf = kmalloc(size, GFP_NOIO);
+	if (!buf)
+		return -ENOMEM;

 	ret = usb_control_msg(pegasus->usb, usb_rcvctrlpipe(pegasus->usb, 0),
 			      PEGASUS_REQ_GET_REGS, PEGASUS_REQT_READ, 0,
-			      indx, data, size, 1000);
+			      indx, buf, size, 1000);
 	if (ret < 0)
 		netif_dbg(pegasus, drv, pegasus->net,
 			  "%s returned %d\n", __func__, ret);
+	else if (ret <= size)
+		memcpy(data, buf, ret);
+	kfree(buf);
 	return ret;
 }

-static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size, void *data)
+static int set_registers(pegasus_t *pegasus, __u16 indx, __u16 size,
+			 const void *data)
 {
+	u8 *buf;
 	int ret;
+
+	buf = kmemdup(data, size, GFP_NOIO);
+	if (!buf)
+		return -ENOMEM;

 	ret = usb_control_msg(pegasus->usb, usb_sndctrlpipe(pegasus->usb, 0),
 			      PEGASUS_REQ_SET_REGS, PEGASUS_REQT_WRITE, 0,
-			      indx, data, size, 100);
+			      indx, buf, size, 100);
 	if (ret < 0)
 		netif_dbg(pegasus, drv, pegasus->net,
 			  "%s returned %d\n", __func__, ret);
+	kfree(buf);
 	return ret;
 }

 static int set_register(pegasus_t *pegasus, __u16 indx, __u8 data)
 {
+	u8 *buf;
 	int ret;
+
+	buf = kmemdup(&data, 1, GFP_NOIO);
+	if (!buf)
+		return -ENOMEM;

 	ret = usb_control_msg(pegasus->usb, usb_sndctrlpipe(pegasus->usb, 0),
 			      PEGASUS_REQ_SET_REG, PEGASUS_REQT_WRITE, data,
-			      indx, &data, 1, 1000);
+			      indx, buf, 1, 1000);
 	if (ret < 0)
 		netif_dbg(pegasus, drv, pegasus->net,
 			  "%s returned %d\n", __func__, ret);
+	kfree(buf);
 	return ret;
 }

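The pegasus change is the same DMA-safety fix as in catc: every register access now bounces through a freshly allocated buffer, kmalloc() for reads and kmemdup() for writes, so usb_control_msg() never sees a caller's stack buffer. A minimal userspace analog of kmemdup() (malloc plus memcpy), assuming only the C standard library:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Userspace analog of the kernel's kmemdup(): allocate a fresh buffer
 * and copy len bytes into it. In the driver the point of the copy is
 * that the duplicate lives in heap memory, which is DMA-able, while
 * the caller's buffer may be on the stack. */
static void *memdup(const void *src, size_t len)
{
	void *p = malloc(len);

	if (p)
		memcpy(p, src, len);
	return p;
}

/* Helper for the checks below: duplicate a string and verify the copy
 * is a distinct allocation with identical contents. */
static int memdup_matches(const char *s)
{
	char *d = memdup(s, strlen(s) + 1);
	int ok = d && strcmp(d, s) == 0 && (const char *)d != s;

	free(d);
	return ok;
}
```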
drivers/net/xen-netfront.c

@@ -281 +281 @@
 {
 	RING_IDX req_prod = queue->rx.req_prod_pvt;
 	int notify;
+	int err = 0;

 	if (unlikely(!netif_carrier_ok(queue->info->netdev)))
 		return;
@@ -296 +295 @@
 		struct xen_netif_rx_request *req;

 		skb = xennet_alloc_one_rx_buffer(queue);
-		if (!skb)
+		if (!skb) {
+			err = -ENOMEM;
 			break;
+		}

 		id = xennet_rxidx(req_prod);

@@ -323 +320 @@

 	queue->rx.req_prod_pvt = req_prod;

-	/* Not enough requests? Try again later. */
-	if (req_prod - queue->rx.sring->req_prod < NET_RX_SLOTS_MIN) {
+	/* Try again later if there are not enough requests or skb allocation
+	 * failed.
+	 * Enough requests is quantified as the sum of newly created slots and
+	 * the unconsumed slots at the backend.
+	 */
+	if (req_prod - queue->rx.rsp_cons < NET_RX_SLOTS_MIN ||
+	    unlikely(err)) {
 		mod_timer(&queue->rx_refill_timer, jiffies + (HZ/10));
 		return;
 	}
@@ -1387 +1379 @@
 	for (i = 0; i < num_queues && info->queues; ++i) {
 		struct netfront_queue *queue = &info->queues[i];

+		del_timer_sync(&queue->rx_refill_timer);
+
 		if (queue->tx_irq && (queue->tx_irq == queue->rx_irq))
 			unbind_from_irqhandler(queue->tx_irq, queue);
 		if (queue->tx_irq && (queue->tx_irq != queue->rx_irq)) {
@@ -1743 +1733 @@

 		if (netif_running(info->netdev))
 			napi_disable(&queue->napi);
-		del_timer_sync(&queue->rx_refill_timer);
 		netif_napi_del(&queue->napi);
 	}

@@ -1831 +1822 @@
 	xennet_destroy_queues(info);

 	err = xennet_create_queues(info, &num_queues);
-	if (err < 0)
-		goto destroy_ring;
+	if (err < 0) {
+		xenbus_dev_fatal(dev, err, "creating queues");
+		kfree(info->queues);
+		info->queues = NULL;
+		goto out;
+	}

 	/* Create shared ring, alloc event channel -- for each queue */
 	for (i = 0; i < num_queues; ++i) {
 		queue = &info->queues[i];
 		err = setup_netfront(dev, queue, feature_split_evtchn);
-		if (err) {
-			/* setup_netfront() will tidy up the current
-			 * queue on error, but we need to clean up
-			 * those already allocated.
-			 */
-			if (i > 0) {
-				rtnl_lock();
-				netif_set_real_num_tx_queues(info->netdev, i);
-				rtnl_unlock();
-				goto destroy_ring;
-			} else {
-				goto out;
-			}
-		}
+		if (err)
+			goto destroy_ring;
 	}

again:
@@ -1933 +1932 @@
 	xenbus_transaction_end(xbt, 1);
 destroy_ring:
 	xennet_disconnect_backend(info);
-	kfree(info->queues);
-	info->queues = NULL;
+	xennet_destroy_queues(info);
 out:
+	unregister_netdev(info->netdev);
+	xennet_free_netdev(info->netdev);
 	return err;
 }

+10-7
drivers/nvdimm/namespace_devs.c
@@ -52 +52 @@
 	kfree(nsblk);
 }

-static struct device_type namespace_io_device_type = {
+static const struct device_type namespace_io_device_type = {
 	.name = "nd_namespace_io",
 	.release = namespace_io_release,
 };

-static struct device_type namespace_pmem_device_type = {
+static const struct device_type namespace_pmem_device_type = {
 	.name = "nd_namespace_pmem",
 	.release = namespace_pmem_release,
 };

-static struct device_type namespace_blk_device_type = {
+static const struct device_type namespace_blk_device_type = {
 	.name = "nd_namespace_blk",
 	.release = namespace_blk_release,
 };
@@ -962 +962 @@
 	struct nvdimm_drvdata *ndd;
 	struct nd_label_id label_id;
 	u32 flags = 0, remainder;
+	int rc, i, id = -1;
 	u8 *uuid = NULL;
-	int rc, i;

 	if (dev->driver || ndns->claim)
 		return -EBUSY;
@@ -972 +972 @@
 		struct nd_namespace_pmem *nspm = to_nd_namespace_pmem(dev);

 		uuid = nspm->uuid;
+		id = nspm->id;
 	} else if (is_namespace_blk(dev)) {
 		struct nd_namespace_blk *nsblk = to_nd_namespace_blk(dev);

 		uuid = nsblk->uuid;
 		flags = NSLABEL_FLAG_LOCAL;
+		id = nsblk->id;
 	}

 	/*
@@ -1041 +1039 @@

 	/*
 	 * Try to delete the namespace if we deleted all of its
-	 * allocation, this is not the seed device for the region, and
-	 * it is not actively claimed by a btt instance.
+	 * allocation, this is not the seed or 0th device for the
+	 * region, and it is not actively claimed by a btt, pfn, or dax
+	 * instance.
 	 */
-	if (val == 0 && nd_region->ns_seed != dev && !ndns->claim)
+	if (val == 0 && id != 0 && nd_region->ns_seed != dev && !ndns->claim)
 		nd_device_unregister(dev, ND_ASYNC);

 	return rc;
drivers/pci/hotplug/pciehp_ctrl.c

@@ -31 +31 @@
 #include <linux/kernel.h>
 #include <linux/types.h>
 #include <linux/slab.h>
-#include <linux/pm_runtime.h>
 #include <linux/pci.h>
 #include "../pci.h"
 #include "pciehp.h"
@@ -98 +99 @@
 	pciehp_green_led_blink(p_slot);

 	/* Check link training status */
-	pm_runtime_get_sync(&ctrl->pcie->port->dev);
 	retval = pciehp_check_link_status(ctrl);
 	if (retval) {
 		ctrl_err(ctrl, "Failed to check link status\n");
@@ -118 +120 @@
 		if (retval != -EEXIST)
 			goto err_exit;
 	}
-	pm_runtime_put(&ctrl->pcie->port->dev);

 	pciehp_green_led_on(p_slot);
 	pciehp_set_attention_status(p_slot, 0);
 	return 0;

err_exit:
-	pm_runtime_put(&ctrl->pcie->port->dev);
 	set_slot_off(ctrl, p_slot);
 	return retval;
 }
@@ -137 +141 @@
 	int retval;
 	struct controller *ctrl = p_slot->ctrl;

-	pm_runtime_get_sync(&ctrl->pcie->port->dev);
 	retval = pciehp_unconfigure_device(p_slot);
-	pm_runtime_put(&ctrl->pcie->port->dev);
 	if (retval)
 		return retval;

+10
drivers/pci/msi.c
@@ -1206 +1206 @@
 	if (flags & PCI_IRQ_AFFINITY) {
 		if (!affd)
 			affd = &msi_default_affd;
+
+		if (affd->pre_vectors + affd->post_vectors > min_vecs)
+			return -EINVAL;
+
+		/*
+		 * If there aren't any vectors left after applying the pre/post
+		 * vectors don't bother with assigning affinity.
+		 */
+		if (affd->pre_vectors + affd->post_vectors == min_vecs)
+			affd = NULL;
 	} else {
 		if (WARN_ON(affd))
 			affd = NULL;
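The two checks added above guard the affinity descriptor: reserving more pre/post vectors than the caller's minimum is an impossible request, and reserving exactly the minimum leaves nothing to spread, so affinity assignment is skipped. A standalone sketch of that policy, with simplified names and -1 standing in for -EINVAL (this is not the kernel function, just its decision logic):

```c
#include <assert.h>

/* Sketch of the validation added to the PCI IRQ vector allocation path:
 * -1 corresponds to -EINVAL (pre + post exceed the minimum vector
 * count), 0 means "drop the affinity descriptor" (affd = NULL, nothing
 * left to spread), 1 means "spread affinity across the vectors that
 * remain after the pre/post reservation". */
static int affinity_policy(int pre, int post, int min_vecs)
{
	if (pre + post > min_vecs)
		return -1;
	if (pre + post == min_vecs)
		return 0;
	return 1;
}
```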
+6-6
drivers/pci/pci.c
@@ -2241 +2241 @@
 		return false;

 	/*
-	 * Hotplug ports handled by firmware in System Management Mode
+	 * Hotplug interrupts cannot be delivered if the link is down,
+	 * so parents of a hotplug port must stay awake. In addition,
+	 * hotplug ports handled by firmware in System Management Mode
 	 * may not be put into D3 by the OS (Thunderbolt on non-Macs).
+	 * For simplicity, disallow in general for now.
 	 */
-	if (bridge->is_hotplug_bridge && !pciehp_is_native(bridge))
+	if (bridge->is_hotplug_bridge)
 		return false;

 	if (pci_bridge_d3_force)
@@ -2279 +2276 @@
 	    !pci_pme_capable(dev, PCI_D3cold)) ||

 	    /* If it is a bridge it must be allowed to go to D3. */
-	    !pci_power_manageable(dev) ||
-
-	    /* Hotplug interrupts cannot be delivered if the link is down. */
-	    dev->is_hotplug_bridge)
+	    !pci_power_manageable(dev))

 		*d3cold_ok = false;

drivers/scsi/qla2xxx/qla_os.c

@@ -1616 +1616 @@
 			/* Don't abort commands in adapter during EEH
 			 * recovery as it's not accessible/responding.
 			 */
-			if (!ha->flags.eeh_busy) {
+			if (GET_CMD_SP(sp) && !ha->flags.eeh_busy) {
 				/* Get a reference to the sp and drop the lock.
 				 * The reference ensures this sp->done() call
 				 * - and not the call in qla2xxx_eh_abort() -
+10-1
drivers/scsi/virtio_scsi.c
@@ -534 +534 @@
 {
 	struct Scsi_Host *shost = virtio_scsi_host(vscsi->vdev);
 	struct virtio_scsi_cmd *cmd = scsi_cmd_priv(sc);
+	unsigned long flags;
 	int req_size;
+	int ret;

 	BUG_ON(scsi_sg_count(sc) > shost->sg_tablesize);

@@ -564 +562 @@
 		req_size = sizeof(cmd->req.cmd);
 	}

-	if (virtscsi_kick_cmd(req_vq, cmd, req_size, sizeof(cmd->resp.cmd)) != 0)
+	ret = virtscsi_kick_cmd(req_vq, cmd, req_size, sizeof(cmd->resp.cmd));
+	if (ret == -EIO) {
+		cmd->resp.cmd.response = VIRTIO_SCSI_S_BAD_TARGET;
+		spin_lock_irqsave(&req_vq->vq_lock, flags);
+		virtscsi_complete_cmd(vscsi, cmd);
+		spin_unlock_irqrestore(&req_vq->vq_lock, flags);
+	} else if (ret != 0) {
 		return SCSI_MLQUEUE_HOST_BUSY;
+	}
 	return 0;
 }

+6
drivers/staging/greybus/timesync_platform.c
@@ -45 +45 @@

int gb_timesync_platform_lock_bus(struct gb_timesync_svc *pdata)
{
+	if (!arche_platform_change_state_cb)
+		return 0;
+
	return arche_platform_change_state_cb(ARCHE_PLATFORM_STATE_TIME_SYNC,
					      pdata);
}

void gb_timesync_platform_unlock_bus(void)
{
+	if (!arche_platform_change_state_cb)
+		return;
+
	arche_platform_change_state_cb(ARCHE_PLATFORM_STATE_ACTIVE, NULL);
}

+1-3
drivers/staging/lustre/lustre/llite/llite_mmap.c
@@ -390 +390 @@
 		result = VM_FAULT_LOCKED;
 		break;
 	case -ENODATA:
+	case -EAGAIN:
 	case -EFAULT:
 		result = VM_FAULT_NOPAGE;
 		break;
 	case -ENOMEM:
 		result = VM_FAULT_OOM;
-		break;
-	case -EAGAIN:
-		result = VM_FAULT_RETRY;
 		break;
 	default:
 		result = VM_FAULT_SIGBUS;
drivers/target/target_core_sbc.c

@@ -451 +451 @@
 				 int *post_ret)
 {
 	struct se_device *dev = cmd->se_dev;
+	sense_reason_t ret = TCM_NO_SENSE;

 	/*
 	 * Only set SCF_COMPARE_AND_WRITE_POST to force a response fall-through
 	 * sent to the backend driver.
 	 */
 	spin_lock_irq(&cmd->t_state_lock);
-	if ((cmd->transport_state & CMD_T_SENT) && !cmd->scsi_status) {
+	if (cmd->transport_state & CMD_T_SENT) {
 		cmd->se_cmd_flags |= SCF_COMPARE_AND_WRITE_POST;
 		*post_ret = 1;
+
+		if (cmd->scsi_status == SAM_STAT_CHECK_CONDITION)
+			ret = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
 	}
 	spin_unlock_irq(&cmd->t_state_lock);

@@ -474 +470 @@
 	 */
 	up(&dev->caw_sem);

-	return TCM_NO_SENSE;
+	return ret;
 }

 static sense_reason_t compare_and_write_callback(struct se_cmd *cmd, bool success,
+58-28
drivers/target/target_core_transport.c
@@ -457 +457 @@
 {
 	struct se_node_acl *nacl = container_of(kref,
 				struct se_node_acl, acl_kref);
+	struct se_portal_group *se_tpg = nacl->se_tpg;

-	complete(&nacl->acl_free_comp);
+	if (!nacl->dynamic_stop) {
+		complete(&nacl->acl_free_comp);
+		return;
+	}
+
+	mutex_lock(&se_tpg->acl_node_mutex);
+	list_del(&nacl->acl_list);
+	mutex_unlock(&se_tpg->acl_node_mutex);
+
+	core_tpg_wait_for_nacl_pr_ref(nacl);
+	core_free_device_list_for_node(nacl, se_tpg);
+	kfree(nacl);
 }

 void target_put_nacl(struct se_node_acl *nacl)
@@ -511 +499 @@
void transport_free_session(struct se_session *se_sess)
{
	struct se_node_acl *se_nacl = se_sess->se_node_acl;
+
	/*
	 * Drop the se_node_acl->nacl_kref obtained from within
	 * core_tpg_get_initiator_node_acl().
	 */
	if (se_nacl) {
+		struct se_portal_group *se_tpg = se_nacl->se_tpg;
+		const struct target_core_fabric_ops *se_tfo = se_tpg->se_tpg_tfo;
+		unsigned long flags;
+
		se_sess->se_node_acl = NULL;
+
+		/*
+		 * Also determine if we need to drop the extra ->cmd_kref if
+		 * it had been previously dynamically generated, and
+		 * the endpoint is not caching dynamic ACLs.
+		 */
+		mutex_lock(&se_tpg->acl_node_mutex);
+		if (se_nacl->dynamic_node_acl &&
+		    !se_tfo->tpg_check_demo_mode_cache(se_tpg)) {
+			spin_lock_irqsave(&se_nacl->nacl_sess_lock, flags);
+			if (list_empty(&se_nacl->acl_sess_list))
+				se_nacl->dynamic_stop = true;
+			spin_unlock_irqrestore(&se_nacl->nacl_sess_lock, flags);
+
+			if (se_nacl->dynamic_stop)
+				list_del(&se_nacl->acl_list);
+		}
+		mutex_unlock(&se_tpg->acl_node_mutex);
+
+		if (se_nacl->dynamic_stop)
+			target_put_nacl(se_nacl);
+
		target_put_nacl(se_nacl);
	}
	if (se_sess->sess_cmd_map) {
@@ -557 +518 @@
void transport_deregister_session(struct se_session *se_sess)
{
	struct se_portal_group *se_tpg = se_sess->se_tpg;
-	const struct target_core_fabric_ops *se_tfo;
-	struct se_node_acl *se_nacl;
	unsigned long flags;
-	bool drop_nacl = false;

	if (!se_tpg) {
		transport_free_session(se_sess);
		return;
	}
-	se_tfo = se_tpg->se_tpg_tfo;

	spin_lock_irqsave(&se_tpg->session_lock, flags);
	list_del(&se_sess->sess_list);
@@ -570 +535 @@
	se_sess->fabric_sess_ptr = NULL;
	spin_unlock_irqrestore(&se_tpg->session_lock, flags);

-	/*
-	 * Determine if we need to do extra work for this initiator node's
-	 * struct se_node_acl if it had been previously dynamically generated.
-	 */
-	se_nacl = se_sess->se_node_acl;
-
-	mutex_lock(&se_tpg->acl_node_mutex);
-	if (se_nacl && se_nacl->dynamic_node_acl) {
-		if (!se_tfo->tpg_check_demo_mode_cache(se_tpg)) {
-			list_del(&se_nacl->acl_list);
-			drop_nacl = true;
-		}
-	}
-	mutex_unlock(&se_tpg->acl_node_mutex);
-
-	if (drop_nacl) {
-		core_tpg_wait_for_nacl_pr_ref(se_nacl);
-		core_free_device_list_for_node(se_nacl, se_tpg);
-		se_sess->se_node_acl = NULL;
-		kfree(se_nacl);
-	}
	pr_debug("TARGET_CORE[%s]: Deregistered fabric_sess\n",
		se_tpg->se_tpg_tfo->get_fabric_name());
	/*
	 * If last kref is dropping now for an explicit NodeACL, awake sleeping
	 * ->acl_free_comp caller to wakeup configfs se_node_acl->acl_group
	 * removal context from within transport_free_session() code.
+	 *
+	 * For dynamic ACL, target_put_nacl() uses target_complete_nacl()
+	 * to release all remaining generate_node_acl=1 created ACL resources.
	 */

	transport_free_session(se_sess);
@@ -3127 +3110 @@
		spin_unlock_irqrestore(&cmd->t_state_lock, flags);
		goto check_stop;
	}
-	cmd->t_state = TRANSPORT_ISTATE_PROCESSING;
	spin_unlock_irqrestore(&cmd->t_state_lock, flags);

	cmd->se_tfo->queue_tm_rsp(cmd);
@@ -3139 +3123 @@
		struct se_cmd *cmd)
{
	unsigned long flags;
+	bool aborted = false;

	spin_lock_irqsave(&cmd->t_state_lock, flags);
-	cmd->transport_state |= CMD_T_ACTIVE;
+	if (cmd->transport_state & CMD_T_ABORTED) {
+		aborted = true;
+	} else {
+		cmd->t_state = TRANSPORT_ISTATE_PROCESSING;
+		cmd->transport_state |= CMD_T_ACTIVE;
+	}
	spin_unlock_irqrestore(&cmd->t_state_lock, flags);
+
+	if (aborted) {
+		pr_warn_ratelimited("handle_tmr caught CMD_T_ABORTED TMR %d"
+			"ref_tag: %llu tag: %llu\n", cmd->se_tmr_req->function,
+			cmd->se_tmr_req->ref_task_tag, cmd->tag);
+		transport_cmd_check_stop_to_fabric(cmd);
+		return 0;
+	}

	INIT_WORK(&cmd->work, target_tmr_work);
	queue_work(cmd->se_dev->tmr_wq, &cmd->work);
drivers/vfio/vfio_iommu_spapr_tce.c

@@ -1123 +1123 @@
	mutex_lock(&container->lock);

	ret = tce_iommu_create_default_window(container);
-	if (ret)
-		return ret;
-
-	ret = tce_iommu_create_window(container, create.page_shift,
-			create.window_size, create.levels,
-			&create.start_addr);
+	if (!ret)
+		ret = tce_iommu_create_window(container,
+				create.page_shift,
+				create.window_size, create.levels,
+				&create.start_addr);

	mutex_unlock(&container->lock);

@@ -1245 +1246 @@
static long tce_iommu_take_ownership_ddw(struct tce_container *container,
		struct iommu_table_group *table_group)
{
+	long i, ret = 0;
+
	if (!table_group->ops->create_table || !table_group->ops->set_window ||
			!table_group->ops->release_ownership) {
		WARN_ON_ONCE(1);
@@ -1255 +1254 @@

	table_group->ops->take_ownership(table_group);

+	/* Set all windows to the new group */
+	for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i) {
+		struct iommu_table *tbl = container->tables[i];
+
+		if (!tbl)
+			continue;
+
+		ret = table_group->ops->set_window(table_group, i, tbl);
+		if (ret)
+			goto release_exit;
+	}
+
	return 0;
+
+release_exit:
+	for (i = 0; i < IOMMU_TABLE_GROUP_MAX_TABLES; ++i)
+		table_group->ops->unset_window(table_group, i);
+
+	table_group->ops->release_ownership(table_group);
+
+	return ret;
}

static int tce_iommu_attach_group(void *iommu_data,
drivers/virtio/virtio_ring.c

@@ -159 +159 @@
	if (xen_domain())
		return true;

-	/*
-	 * On ARM-based machines, the DMA ops will do the right thing,
-	 * so always use them with legacy devices.
-	 */
-	if (IS_ENABLED(CONFIG_ARM) || IS_ENABLED(CONFIG_ARM64))
-		return !virtio_has_feature(vdev, VIRTIO_F_VERSION_1);
-
	return false;
}

+24-15
fs/btrfs/compression.c
@@ -1024 +1024 @@
	unsigned long buf_offset;
	unsigned long current_buf_start;
	unsigned long start_byte;
+	unsigned long prev_start_byte;
	unsigned long working_bytes = total_out - buf_start;
	unsigned long bytes;
	char *kaddr;
@@ -1072 +1071 @@
		if (!bio->bi_iter.bi_size)
			return 0;
		bvec = bio_iter_iovec(bio, bio->bi_iter);
-
+		prev_start_byte = start_byte;
		start_byte = page_offset(bvec.bv_page) - disk_start;

		/*
-		 * make sure our new page is covered by this
-		 * working buffer
+		 * We need to make sure we're only adjusting
+		 * our offset into compression working buffer when
+		 * we're switching pages. Otherwise we can incorrectly
+		 * keep copying when we were actually done.
		 */
-		if (total_out <= start_byte)
-			return 1;
+		if (start_byte != prev_start_byte) {
+			/*
+			 * make sure our new page is covered by this
+			 * working buffer
+			 */
+			if (total_out <= start_byte)
+				return 1;

-		/*
-		 * the next page in the biovec might not be adjacent
-		 * to the last page, but it might still be found
-		 * inside this working buffer. bump our offset pointer
-		 */
-		if (total_out > start_byte &&
-		    current_buf_start < start_byte) {
-			buf_offset = start_byte - buf_start;
-			working_bytes = total_out - start_byte;
-			current_buf_start = buf_start + buf_offset;
+			/*
+			 * the next page in the biovec might not be adjacent
+			 * to the last page, but it might still be found
+			 * inside this working buffer. bump our offset pointer
+			 */
+			if (total_out > start_byte &&
+			    current_buf_start < start_byte) {
+				buf_offset = start_byte - buf_start;
+				working_bytes = total_out - start_byte;
+				current_buf_start = buf_start + buf_offset;
+			}
		}
	}

+4-2
fs/btrfs/ioctl.c
@@ -5653 +5653 @@
#ifdef CONFIG_COMPAT
long btrfs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
+	/*
+	 * These all access 32-bit values anyway so no further
+	 * handling is necessary.
+	 */
	switch (cmd) {
	case FS_IOC32_GETFLAGS:
		cmd = FS_IOC_GETFLAGS;
@@ -5667 +5663 @@
	case FS_IOC32_GETVERSION:
		cmd = FS_IOC_GETVERSION;
		break;
-	default:
-		return -ENOIOCTLCMD;
	}

	return btrfs_ioctl(file, cmd, (unsigned long) compat_ptr(arg));
fs/iomap.c

@@ -114 +114 @@

	BUG_ON(pos + len > iomap->offset + iomap->length);

+	if (fatal_signal_pending(current))
+		return -EINTR;
+
	page = grab_cache_page_write_begin(inode->i_mapping, index, flags);
	if (!page)
		return -ENOMEM;
+60-37
fs/nfsd/vfs.c
@@ -332 +332 @@
	}
}

+static __be32
+nfsd_get_write_access(struct svc_rqst *rqstp, struct svc_fh *fhp,
+		struct iattr *iap)
+{
+	struct inode *inode = d_inode(fhp->fh_dentry);
+	int host_err;
+
+	if (iap->ia_size < inode->i_size) {
+		__be32 err;
+
+		err = nfsd_permission(rqstp, fhp->fh_export, fhp->fh_dentry,
+				NFSD_MAY_TRUNC | NFSD_MAY_OWNER_OVERRIDE);
+		if (err)
+			return err;
+	}
+
+	host_err = get_write_access(inode);
+	if (host_err)
+		goto out_nfserrno;
+
+	host_err = locks_verify_truncate(inode, NULL, iap->ia_size);
+	if (host_err)
+		goto out_put_write_access;
+	return 0;
+
+out_put_write_access:
+	put_write_access(inode);
+out_nfserrno:
+	return nfserrno(host_err);
+}
+
/*
 * Set various file attributes. After this call fhp needs an fh_put.
 */
@@ -377 +346 @@
	__be32		err;
	int		host_err;
	bool		get_write_count;
+	int		size_change = 0;

	if (iap->ia_valid & (ATTR_ATIME | ATTR_MTIME | ATTR_SIZE))
		accmode |= NFSD_MAY_WRITE|NFSD_MAY_OWNER_OVERRIDE;
@@ -390 +358 @@
	/* Get inode */
	err = fh_verify(rqstp, fhp, ftype, accmode);
	if (err)
-		return err;
+		goto out;
	if (get_write_count) {
		host_err = fh_want_write(fhp);
		if (host_err)
-			goto out_host_err;
+			return nfserrno(host_err);
	}

	dentry = fhp->fh_dentry;
@@ -405 +373 @@
	iap->ia_valid &= ~ATTR_MODE;

	if (!iap->ia_valid)
-		return 0;
+		goto out;

	nfsd_sanitize_attrs(inode, iap);

-	if (check_guard && guardtime != inode->i_ctime.tv_sec)
-		return nfserr_notsync;
-
	/*
	 * The size case is special, it changes the file in addition to the
-	 * attributes, and file systems don't expect it to be mixed with
-	 * "random" attribute changes.  We thus split out the size change
-	 * into a separate call for vfs_truncate, and do the rest as a
-	 * a separate setattr call.
+	 * attributes.
	 */
	if (iap->ia_valid & ATTR_SIZE) {
-		struct path path = {
-			.mnt	= fhp->fh_export->ex_path.mnt,
-			.dentry	= dentry,
-		};
-		bool implicit_mtime = false;
+		err = nfsd_get_write_access(rqstp, fhp, iap);
+		if (err)
+			goto out;
+		size_change = 1;

		/*
-		 * vfs_truncate implicity updates the mtime IFF the file size
-		 * actually changes. Avoid the additional seattr call below if
-		 * the only other attribute that the client sends is the mtime.
+		 * RFC5661, Section 18.30.4:
+		 *   Changing the size of a file with SETATTR indirectly
+		 *   changes the time_modify and change attributes.
+		 *
+		 * (and similar for the older RFCs)
		 */
-		if (iap->ia_size != i_size_read(inode) &&
-		    ((iap->ia_valid & ~(ATTR_SIZE | ATTR_MTIME)) == 0))
-			implicit_mtime = true;
-
-		host_err = vfs_truncate(&path, iap->ia_size);
-		if (host_err)
-			goto out_host_err;
-
-		iap->ia_valid &= ~ATTR_SIZE;
-		if (implicit_mtime)
-			iap->ia_valid &= ~ATTR_MTIME;
-		if (!iap->ia_valid)
-			goto done;
+		if (iap->ia_size != i_size_read(inode))
+			iap->ia_valid |= ATTR_MTIME;
	}

	iap->ia_valid |= ATTR_CTIME;

+	if (check_guard && guardtime != inode->i_ctime.tv_sec) {
+		err = nfserr_notsync;
+		goto out_put_write_access;
+	}
+
	fh_lock(fhp);
	host_err = notify_change(dentry, iap, NULL);
	fh_unlock(fhp);
-	if (host_err)
-		goto out_host_err;
+	err = nfserrno(host_err);

-done:
-	host_err = commit_metadata(fhp);
-out_host_err:
-	return nfserrno(host_err);
+out_put_write_access:
+	if (size_change)
+		put_write_access(inode);
+	if (!err)
+		err = nfserrno(commit_metadata(fhp));
+out:
+	return err;
}

#if defined(CONFIG_NFSD_V4)
+2-1
fs/proc/page.c
@@ -173 +173 @@
	u |= kpf_copy_bit(k, KPF_ACTIVE,	PG_active);
	u |= kpf_copy_bit(k, KPF_RECLAIM,	PG_reclaim);

-	u |= kpf_copy_bit(k, KPF_SWAPCACHE,	PG_swapcache);
+	if (PageSwapCache(page))
+		u |= 1 << KPF_SWAPCACHE;
	u |= kpf_copy_bit(k, KPF_SWAPBACKED,	PG_swapbacked);

	u |= kpf_copy_bit(k, KPF_UNEVICTABLE,	PG_unevictable);
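The surrounding lines all use the kpf_copy_bit() helper to translate kernel page-flag bits into the stable /proc/kpageflags ABI positions; the fix switches KPF_SWAPCACHE to the PageSwapCache() predicate because PG_swapcache is no longer a dedicated bit (it aliases another flag and is only meaningful together with PG_swapbacked). A sketch of the helper, assuming its usual shift-and-mask definition:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the kpf_copy_bit() helper used above: if kernel flag bit
 * kbit is set in kflags, return a word with the userspace-ABI bit ubit
 * set, else 0. The caller ORs these together to build the exported
 * flags word. */
static uint64_t kpf_copy_bit(uint64_t kflags, int ubit, int kbit)
{
	return ((kflags >> kbit) & 1) << ubit;
}
```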
+1-1
fs/pstore/ram.c
@@ -280 +280 @@
				  1, id, type, PSTORE_TYPE_PMSG, 0);

	/* ftrace is last since it may want to dynamically allocate memory. */
-	if (!prz_ok(prz)) {
+	if (!prz_ok(prz) && cxt->fprzs) {
		if (!(cxt->flags & RAMOOPS_FLAG_FTRACE_PER_CPU)) {
			prz = ramoops_get_next_prz(cxt->fprzs,
					&cxt->ftrace_read_cnt, 1, id, type,
include/drm/drmP.h

@@ -517 +517 @@
	struct drm_minor *control;		/**< Control node */
	struct drm_minor *primary;		/**< Primary node */
	struct drm_minor *render;		/**< Render node */
+	bool registered;

	/* currently active master for this device. Protected by master_mutex */
	struct drm_master *master;
+15-1
include/drm/drm_connector.h
@@ -381 +381 @@
	 * core drm connector interfaces. Everything added from this callback
	 * should be unregistered in the early_unregister callback.
	 *
+	 * This is called while holding drm_connector->mutex.
+	 *
	 * Returns:
	 *
	 * 0 on success, or a negative error code on failure.
@@ -397 +395 @@
	 * late_register(). It is called from drm_connector_unregister(),
	 * early in the driver unload sequence to disable userspace access
	 * before data structures are torndown.
+	 *
+	 * This is called while holding drm_connector->mutex.
	 */
	void (*early_unregister)(struct drm_connector *connector);

@@ -563 +559 @@
 * @interlace_allowed: can this connector handle interlaced modes?
 * @doublescan_allowed: can this connector handle doublescan?
 * @stereo_allowed: can this connector handle stereo modes?
- * @registered: is this connector exposed (registered) with userspace?
 * @modes: modes available on this connector (from fill_modes() + user)
 * @status: one of the drm_connector_status enums (connected, not, or unknown)
 * @probed_modes: list of modes derived directly from the display
@@ -611 +608 @@
	char *name;

	/**
+	 * @mutex: Lock for general connector state, but currently only protects
+	 * @registered. Most of the connector state is still protected by the
+	 * mutex in &drm_mode_config.
+	 */
+	struct mutex mutex;
+
+	/**
	 * @index: Compacted connector index, which matches the position inside
	 * the mode_config.list for drivers not supporting hot-add/removing. Can
	 * be used as an array index. It is invariant over the lifetime of the
@@ -630 +620 @@
	bool interlace_allowed;
	bool doublescan_allowed;
	bool stereo_allowed;
+	/**
+	 * @registered: Is this connector exposed (registered) with userspace?
+	 * Protected by @mutex.
+	 */
	bool registered;
	struct list_head modes; /* list of modes on this connector */

+1-3
include/linux/buffer_head.h
@@ -243 +243 @@
{
	if (err == 0)
		return VM_FAULT_LOCKED;
-	if (err == -EFAULT)
+	if (err == -EFAULT || err == -EAGAIN)
		return VM_FAULT_NOPAGE;
	if (err == -ENOMEM)
		return VM_FAULT_OOM;
-	if (err == -EAGAIN)
-		return VM_FAULT_RETRY;
	/* -ENOSPC, -EDQUOT, -EIO ... */
	return VM_FAULT_SIGBUS;
}
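Together with the lustre llite_mmap.c hunk above, this change folds -EAGAIN into VM_FAULT_NOPAGE: VM_FAULT_RETRY may only be returned by fault handlers that actually follow the retry protocol (dropping mmap_sem with FAULT_FLAG_ALLOW_RETRY), which these callers do not. A userspace sketch of the fixed mapping, with illustrative stand-in values for the VM_FAULT_* codes:

```c
#include <assert.h>
#include <errno.h>

/* Userspace sketch of the fixed block_page_mkwrite_return() mapping.
 * The VM_FAULT_* values here are illustrative stand-ins, not the
 * kernel's actual bit values. */
enum {
	VM_FAULT_LOCKED = 1,
	VM_FAULT_NOPAGE = 2,
	VM_FAULT_OOM    = 4,
	VM_FAULT_SIGBUS = 8,
};

static int mkwrite_return(int err)
{
	if (err == 0)
		return VM_FAULT_LOCKED;
	if (err == -EFAULT || err == -EAGAIN)	/* -EAGAIN no longer maps to RETRY */
		return VM_FAULT_NOPAGE;
	if (err == -ENOMEM)
		return VM_FAULT_OOM;
	return VM_FAULT_SIGBUS;			/* -ENOSPC, -EDQUOT, -EIO ... */
}
```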
include/linux/memory_hotplug.h

@@ -85 +85 @@
extern int add_one_highpage(struct page *page, int pfn, int bad_ppro);
/* VM interface that may be used by firmware interface */
extern int online_pages(unsigned long, unsigned long, int);
-extern int test_pages_in_a_zone(unsigned long, unsigned long);
+extern int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn,
+			unsigned long *valid_start, unsigned long *valid_end);
extern void __offline_isolated_pages(unsigned long, unsigned long);

typedef void (*online_page_callback_t)(struct page *page);
+7-7
include/linux/module.h
@@ -346 +346 @@

	/* Exported symbols */
	const struct kernel_symbol *syms;
-	const unsigned long *crcs;
+	const s32 *crcs;
	unsigned int num_syms;

	/* Kernel parameters. */
@@ -359 +359 @@
	/* GPL-only exported symbols. */
	unsigned int num_gpl_syms;
	const struct kernel_symbol *gpl_syms;
-	const unsigned long *gpl_crcs;
+	const s32 *gpl_crcs;

#ifdef CONFIG_UNUSED_SYMBOLS
	/* unused exported symbols. */
	const struct kernel_symbol *unused_syms;
-	const unsigned long *unused_crcs;
+	const s32 *unused_crcs;
	unsigned int num_unused_syms;

	/* GPL-only, unused exported symbols. */
	unsigned int num_unused_gpl_syms;
	const struct kernel_symbol *unused_gpl_syms;
-	const unsigned long *unused_gpl_crcs;
+	const s32 *unused_gpl_crcs;
#endif

#ifdef CONFIG_MODULE_SIG
@@ -382 +382 @@

	/* symbols that will be GPL-only in the near future. */
	const struct kernel_symbol *gpl_future_syms;
-	const unsigned long *gpl_future_crcs;
+	const s32 *gpl_future_crcs;
	unsigned int num_gpl_future_syms;

	/* Exception table */
@@ -523 +523 @@

struct symsearch {
	const struct kernel_symbol *start, *stop;
-	const unsigned long *crcs;
+	const s32 *crcs;
	enum {
		NOT_GPL_ONLY,
		GPL_ONLY,
@@ -539 +539 @@
 */
const struct kernel_symbol *find_symbol(const char *name,
					struct module **owner,
-					const unsigned long **crc,
+					const s32 **crc,
					bool gplok,
					bool warn);

+4
include/linux/netdevice.h
···
  * @max_mtu:    Interface Maximum MTU value
  * @type:       Interface hardware type
  * @hard_header_len: Maximum hardware header length.
+ * @min_header_len:  Minimum hardware header length
  *
  * @needed_headroom: Extra headroom the hardware may need, but not in all
  *                   cases can this be guaranteed
···
     unsigned int        max_mtu;
     unsigned short      type;
     unsigned short      hard_header_len;
+    unsigned short      min_header_len;

     unsigned short      needed_headroom;
     unsigned short      needed_tailroom;
···
 {
     if (likely(len >= dev->hard_header_len))
         return true;
+    if (len < dev->min_header_len)
+        return false;

     if (capable(CAP_SYS_RAWIO)) {
         memset(ll_header + len, 0, dev->hard_header_len - len);
···
 }
 static inline int lwtunnel_valid_encap_type_attr(struct nlattr *attr, int len)
 {
-    return -EOPNOTSUPP;
+    /* return 0 since we are not walking attr looking for
+     * RTA_ENCAP_TYPE attribute on nexthops.
+     */
+    return 0;
 }

 static inline int lwtunnel_build_state(struct net_device *dev, u16 encap_type,
···
 #define IB_USER_VERBS_H

 #include <linux/types.h>
-#include <rdma/ib_verbs.h>

 /*
  * Increment this value if any changes that break userspace ABI
···
 };

 enum {
-    IB_USER_LEGACY_LAST_QP_ATTR_MASK = IB_QP_DEST_QPN
+    /*
+     * This value is equal to IB_QP_DEST_QPN.
+     */
+    IB_USER_LEGACY_LAST_QP_ATTR_MASK = 1ULL << 20,
 };

 enum {
-    IB_USER_LAST_QP_ATTR_MASK = IB_QP_RATE_LIMIT
+    /*
+     * This value is equal to IB_QP_RATE_LIMIT.
+     */
+    IB_USER_LAST_QP_ATTR_MASK = 1ULL << 25,
 };

 struct ib_uverbs_ex_create_qp {
+4
init/Kconfig
···
       make them incompatible with the kernel you are running.  If
       unsure, say N.

+config MODULE_REL_CRCS
+    bool
+    depends on MODVERSIONS
+
 config MODULE_SRCVERSION_ALL
     bool "Source checksum for all modules"
     help
+15-10
kernel/events/core.c
···
     int ret;
 };

-static int find_cpu_to_read(struct perf_event *event, int local_cpu)
+static int __perf_event_read_cpu(struct perf_event *event, int event_cpu)
 {
-    int event_cpu = event->oncpu;
     u16 local_pkg, event_pkg;

     if (event->group_caps & PERF_EV_CAP_READ_ACTIVE_PKG) {
-        event_pkg = topology_physical_package_id(event_cpu);
-        local_pkg = topology_physical_package_id(local_cpu);
+        int local_cpu = smp_processor_id();
+
+        event_pkg = topology_physical_package_id(event_cpu);
+        local_pkg = topology_physical_package_id(local_cpu);

         if (event_pkg == local_pkg)
             return local_cpu;
···
 static int perf_event_read(struct perf_event *event, bool group)
 {
-    int ret = 0, cpu_to_read, local_cpu;
+    int event_cpu, ret = 0;

     /*
      * If event is enabled and currently active on a CPU, update the
···
         .ret = 0,
     };

-    local_cpu = get_cpu();
-    cpu_to_read = find_cpu_to_read(event, local_cpu);
-    put_cpu();
+    event_cpu = READ_ONCE(event->oncpu);
+    if ((unsigned)event_cpu >= nr_cpu_ids)
+        return 0;
+
+    preempt_disable();
+    event_cpu = __perf_event_read_cpu(event, event_cpu);

     /*
      * Purposely ignore the smp_call_function_single() return
      * value.
      *
-     * If event->oncpu isn't a valid CPU it means the event got
+     * If event_cpu isn't a valid CPU it means the event got
      * scheduled out and that will have updated the event count.
      *
      * Therefore, either way, we'll have an up-to-date event count
      * after this.
      */
-    (void)smp_call_function_single(cpu_to_read, __perf_event_read, &data, 1);
+    (void)smp_call_function_single(event_cpu, __perf_event_read, &data, 1);
+    preempt_enable();
     ret = data.ret;
 } else if (event->state == PERF_EVENT_STATE_INACTIVE) {
     struct perf_event_context *ctx = event->ctx;
+30-14
kernel/irq/irqdomain.c
···
 }
 EXPORT_SYMBOL_GPL(irq_domain_free_irqs_parent);

+static void __irq_domain_activate_irq(struct irq_data *irq_data)
+{
+    if (irq_data && irq_data->domain) {
+        struct irq_domain *domain = irq_data->domain;
+
+        if (irq_data->parent_data)
+            __irq_domain_activate_irq(irq_data->parent_data);
+        if (domain->ops->activate)
+            domain->ops->activate(domain, irq_data);
+    }
+}
+
+static void __irq_domain_deactivate_irq(struct irq_data *irq_data)
+{
+    if (irq_data && irq_data->domain) {
+        struct irq_domain *domain = irq_data->domain;
+
+        if (domain->ops->deactivate)
+            domain->ops->deactivate(domain, irq_data);
+        if (irq_data->parent_data)
+            __irq_domain_deactivate_irq(irq_data->parent_data);
+    }
+}
+
 /**
  * irq_domain_activate_irq - Call domain_ops->activate recursively to activate
  *                           interrupt
···
  */
 void irq_domain_activate_irq(struct irq_data *irq_data)
 {
-    if (irq_data && irq_data->domain) {
-        struct irq_domain *domain = irq_data->domain;
-
-        if (irq_data->parent_data)
-            irq_domain_activate_irq(irq_data->parent_data);
-        if (domain->ops->activate)
-            domain->ops->activate(domain, irq_data);
+    if (!irqd_is_activated(irq_data)) {
+        __irq_domain_activate_irq(irq_data);
+        irqd_set_activated(irq_data);
     }
 }
···
  */
 void irq_domain_deactivate_irq(struct irq_data *irq_data)
 {
-    if (irq_data && irq_data->domain) {
-        struct irq_domain *domain = irq_data->domain;
-
-        if (domain->ops->deactivate)
-            domain->ops->deactivate(domain, irq_data);
-        if (irq_data->parent_data)
-            irq_domain_deactivate_irq(irq_data->parent_data);
+    if (irqd_is_activated(irq_data)) {
+        __irq_domain_deactivate_irq(irq_data);
+        irqd_clr_activated(irq_data);
     }
 }
+25-28
kernel/module.c
···
 extern const struct kernel_symbol __stop___ksymtab_gpl[];
 extern const struct kernel_symbol __start___ksymtab_gpl_future[];
 extern const struct kernel_symbol __stop___ksymtab_gpl_future[];
-extern const unsigned long __start___kcrctab[];
-extern const unsigned long __start___kcrctab_gpl[];
-extern const unsigned long __start___kcrctab_gpl_future[];
+extern const s32 __start___kcrctab[];
+extern const s32 __start___kcrctab_gpl[];
+extern const s32 __start___kcrctab_gpl_future[];
 #ifdef CONFIG_UNUSED_SYMBOLS
 extern const struct kernel_symbol __start___ksymtab_unused[];
 extern const struct kernel_symbol __stop___ksymtab_unused[];
 extern const struct kernel_symbol __start___ksymtab_unused_gpl[];
 extern const struct kernel_symbol __stop___ksymtab_unused_gpl[];
-extern const unsigned long __start___kcrctab_unused[];
-extern const unsigned long __start___kcrctab_unused_gpl[];
+extern const s32 __start___kcrctab_unused[];
+extern const s32 __start___kcrctab_unused_gpl[];
 #endif

 #ifndef CONFIG_MODVERSIONS
···
     /* Output */
     struct module *owner;
-    const unsigned long *crc;
+    const s32 *crc;
     const struct kernel_symbol *sym;
 };
···
  * (optional) module which owns it.  Needs preempt disabled or module_mutex. */
 const struct kernel_symbol *find_symbol(const char *name,
                                         struct module **owner,
-                                        const unsigned long **crc,
+                                        const s32 **crc,
                                         bool gplok,
                                         bool warn)
 {
···
 }

 #ifdef CONFIG_MODVERSIONS
-/* If the arch applies (non-zero) relocations to kernel kcrctab, unapply it. */
-static unsigned long maybe_relocated(unsigned long crc,
-                                     const struct module *crc_owner)
+
+static u32 resolve_rel_crc(const s32 *crc)
 {
-#ifdef ARCH_RELOCATES_KCRCTAB
-    if (crc_owner == NULL)
-        return crc - (unsigned long)reloc_start;
-#endif
-    return crc;
+    return *(u32 *)((void *)crc + *crc);
 }

 static int check_version(Elf_Shdr *sechdrs,
                          unsigned int versindex,
                          const char *symname,
                          struct module *mod,
-                         const unsigned long *crc,
-                         const struct module *crc_owner)
+                         const s32 *crc)
 {
     unsigned int i, num_versions;
     struct modversion_info *versions;
···
         / sizeof(struct modversion_info);

     for (i = 0; i < num_versions; i++) {
+        u32 crcval;
+
         if (strcmp(versions[i].name, symname) != 0)
             continue;

-        if (versions[i].crc == maybe_relocated(*crc, crc_owner))
+        if (IS_ENABLED(CONFIG_MODULE_REL_CRCS))
+            crcval = resolve_rel_crc(crc);
+        else
+            crcval = *crc;
+        if (versions[i].crc == crcval)
             return 1;
-        pr_debug("Found checksum %lX vs module %lX\n",
-                 maybe_relocated(*crc, crc_owner), versions[i].crc);
+        pr_debug("Found checksum %X vs module %lX\n",
+                 crcval, versions[i].crc);
         goto bad_version;
     }
···
                                    unsigned int versindex,
                                    struct module *mod)
 {
-    const unsigned long *crc;
+    const s32 *crc;

     /*
      * Since this should be found in kernel (which can't be removed), no
···
     }
     preempt_enable();
     return check_version(sechdrs, versindex,
-                         VMLINUX_SYMBOL_STR(module_layout), mod, crc,
-                         NULL);
+                         VMLINUX_SYMBOL_STR(module_layout), mod, crc);
 }

 /* First part is kernel version, which we ignore if module has crcs. */
···
                          unsigned int versindex,
                          const char *symname,
                          struct module *mod,
-                         const unsigned long *crc,
-                         const struct module *crc_owner)
+                         const s32 *crc)
 {
     return 1;
 }
···
 {
     struct module *owner;
     const struct kernel_symbol *sym;
-    const unsigned long *crc;
+    const s32 *crc;
     int err;

     /*
···
     if (!sym)
         goto unlock;

-    if (!check_version(info->sechdrs, info->index.vers, name, mod, crc,
-                       owner)) {
+    if (!check_version(info->sechdrs, info->index.vers, name, mod, crc)) {
         sym = ERR_PTR(-EINVAL);
         goto getname;
     }
+4-8
kernel/stacktrace.c
···
     if (WARN_ON(!trace->entries))
         return;

-    for (i = 0; i < trace->nr_entries; i++) {
-        printk("%*c", 1 + spaces, ' ');
-        print_ip_sym(trace->entries[i]);
-    }
+    for (i = 0; i < trace->nr_entries; i++)
+        printk("%*c%pS\n", 1 + spaces, ' ', (void *)trace->entries[i]);
 }
 EXPORT_SYMBOL_GPL(print_stack_trace);
···
                         struct stack_trace *trace, int spaces)
 {
     int i;
-    unsigned long ip;
     int generated;
     int total = 0;
···
         return 0;

     for (i = 0; i < trace->nr_entries; i++) {
-        ip = trace->entries[i];
-        generated = snprintf(buf, size, "%*c[<%p>] %pS\n",
-                             1 + spaces, ' ', (void *) ip, (void *) ip);
+        generated = snprintf(buf, size, "%*c%pS\n", 1 + spaces, ' ',
+                             (void *)trace->entries[i]);

         total += generated;
+5
kernel/time/tick-sched.c
···
      */
     if (delta == 0) {
         tick_nohz_restart(ts, now);
+        /*
+         * Make sure next tick stop doesn't get fooled by past
+         * clock deadline
+         */
+        ts->next_tick = 0;
         goto out;
     }
 }
···
 }

 /*
- * Confirm all pages in a range [start, end) is belongs to the same zone.
+ * Confirm all pages in a range [start, end) belong to the same zone.
+ * When true, return its valid [start, end).
  */
-int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn)
+int test_pages_in_a_zone(unsigned long start_pfn, unsigned long end_pfn,
+                         unsigned long *valid_start, unsigned long *valid_end)
 {
     unsigned long pfn, sec_end_pfn;
+    unsigned long start, end;
     struct zone *zone = NULL;
     struct page *page;
     int i;
-    for (pfn = start_pfn, sec_end_pfn = SECTION_ALIGN_UP(start_pfn);
+    for (pfn = start_pfn, sec_end_pfn = SECTION_ALIGN_UP(start_pfn + 1);
          pfn < end_pfn;
-         pfn = sec_end_pfn + 1, sec_end_pfn += PAGES_PER_SECTION) {
+         pfn = sec_end_pfn, sec_end_pfn += PAGES_PER_SECTION) {
         /* Make sure the memory section is present first */
         if (!present_section_nr(pfn_to_section_nr(pfn)))
             continue;
···
             page = pfn_to_page(pfn + i);
             if (zone && page_zone(page) != zone)
                 return 0;
+            if (!zone)
+                start = pfn + i;
             zone = page_zone(page);
+            end = pfn + MAX_ORDER_NR_PAGES;
         }
     }
-    return 1;
+
+    if (zone) {
+        *valid_start = start;
+        *valid_end = end;
+        return 1;
+    } else {
+        return 0;
+    }
 }

 /*
···
     long offlined_pages;
     int ret, drain, retry_max, node;
     unsigned long flags;
+    unsigned long valid_start, valid_end;
     struct zone *zone;
     struct memory_notify arg;
···
         return -EINVAL;
     /* This makes hotplug much easier...and readable.
        we assume this for now. */
-    if (!test_pages_in_a_zone(start_pfn, end_pfn))
+    if (!test_pages_in_a_zone(start_pfn, end_pfn, &valid_start, &valid_end))
         return -EINVAL;

-    zone = page_zone(pfn_to_page(start_pfn));
+    zone = page_zone(pfn_to_page(valid_start));
     node = zone_to_nid(zone);
     nr_pages = end_pfn - start_pfn;
+9-2
mm/shmem.c
···
                                   struct shrink_control *sc, unsigned long nr_to_split)
 {
     LIST_HEAD(list), *pos, *next;
+    LIST_HEAD(to_remove);
     struct inode *inode;
     struct shmem_inode_info *info;
     struct page *page;
···
         /* Check if there's anything to gain */
         if (round_up(inode->i_size, PAGE_SIZE) ==
                 round_up(inode->i_size, HPAGE_PMD_SIZE)) {
-            list_del_init(&info->shrinklist);
+            list_move(&info->shrinklist, &to_remove);
             removed++;
-            iput(inode);
             goto next;
         }
···
         break;
     }
     spin_unlock(&sbinfo->shrinklist_lock);
+
+    list_for_each_safe(pos, next, &to_remove) {
+        info = list_entry(pos, struct shmem_inode_info, shrinklist);
+        inode = &info->vfs_inode;
+        list_del_init(&info->shrinklist);
+        iput(inode);
+    }

     list_for_each_safe(pos, next, &list) {
         int ret;
+4
mm/slub.c
···
     int err;
     unsigned long i, count = oo_objects(s->oo);

+    /* Bailout if already initialised */
+    if (s->random_seq)
+        return 0;
+
     err = cache_random_seq_create(s, count, GFP_KERNEL);
     if (err) {
         pr_err("SLUB: Unable to initialize free list for %s\n",
+29-1
mm/zswap.c
···
 /* Enable/disable zswap (disabled by default) */
 static bool zswap_enabled;
-module_param_named(enabled, zswap_enabled, bool, 0644);
+static int zswap_enabled_param_set(const char *,
+                                   const struct kernel_param *);
+static struct kernel_param_ops zswap_enabled_param_ops = {
+    .set = zswap_enabled_param_set,
+    .get = param_get_bool,
+};
+module_param_cb(enabled, &zswap_enabled_param_ops, &zswap_enabled, 0644);

 /* Crypto compressor to use */
 #define ZSWAP_COMPRESSOR_DEFAULT "lzo"
···
 /* used by param callback function */
 static bool zswap_init_started;
+
+/* fatal error during init */
+static bool zswap_init_failed;

 /*********************************
 * helpers and fwd declarations
···
     char *s = strstrip((char *)val);
     int ret;

+    if (zswap_init_failed) {
+        pr_err("can't set param, initialization failed\n");
+        return -ENODEV;
+    }
+
     /* no change required */
     if (!strcmp(s, *(char **)kp->arg))
         return 0;
···
                                       const struct kernel_param *kp)
 {
     return __zswap_param_set(val, kp, NULL, zswap_compressor);
+}
+
+static int zswap_enabled_param_set(const char *val,
+                                   const struct kernel_param *kp)
+{
+    if (zswap_init_failed) {
+        pr_err("can't enable, initialization failed\n");
+        return -ENODEV;
+    }
+
+    return param_set_bool(val, kp);
 }

 /*********************************
···
 dstmem_fail:
     zswap_entry_cache_destroy();
 cache_fail:
+    /* if built-in, we aren't unloaded on failure; don't allow use */
+    zswap_init_failed = true;
+    zswap_enabled = false;
     return -ENOMEM;
 }
 /* must be late so crypto has time to come up */
+6-2
net/core/datagram.c
···
 EXPORT_SYMBOL(__skb_free_datagram_locked);

 int __sk_queue_drop_skb(struct sock *sk, struct sk_buff *skb,
-                        unsigned int flags)
+                        unsigned int flags,
+                        void (*destructor)(struct sock *sk,
+                                           struct sk_buff *skb))
 {
     int err = 0;
···
     if (skb == skb_peek(&sk->sk_receive_queue)) {
         __skb_unlink(skb, &sk->sk_receive_queue);
         atomic_dec(&skb->users);
+        if (destructor)
+            destructor(sk, skb);
         err = 0;
     }
     spin_unlock_bh(&sk->sk_receive_queue.lock);
···
 int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags)
 {
-    int err = __sk_queue_drop_skb(sk, skb, flags);
+    int err = __sk_queue_drop_skb(sk, skb, flags, NULL);

     kfree_skb(skb);
     sk_mem_reclaim_partial(sk);
+13-18
net/core/dev.c
···

 static struct static_key netstamp_needed __read_mostly;
 #ifdef HAVE_JUMP_LABEL
-/* We are not allowed to call static_key_slow_dec() from irq context
- * If net_disable_timestamp() is called from irq context, defer the
- * static_key_slow_dec() calls.
- */
 static atomic_t netstamp_needed_deferred;
+static void netstamp_clear(struct work_struct *work)
+{
+    int deferred = atomic_xchg(&netstamp_needed_deferred, 0);
+
+    while (deferred--)
+        static_key_slow_dec(&netstamp_needed);
+}
+static DECLARE_WORK(netstamp_work, netstamp_clear);
 #endif

 void net_enable_timestamp(void)
 {
-#ifdef HAVE_JUMP_LABEL
-    int deferred = atomic_xchg(&netstamp_needed_deferred, 0);
-
-    if (deferred) {
-        while (--deferred)
-            static_key_slow_dec(&netstamp_needed);
-        return;
-    }
-#endif
     static_key_slow_inc(&netstamp_needed);
 }
 EXPORT_SYMBOL(net_enable_timestamp);
···
 void net_disable_timestamp(void)
 {
 #ifdef HAVE_JUMP_LABEL
-    if (in_interrupt()) {
-        atomic_inc(&netstamp_needed_deferred);
-        return;
-    }
-#endif
+    /* net_disable_timestamp() can be called from non process context */
+    atomic_inc(&netstamp_needed_deferred);
+    schedule_work(&netstamp_work);
+#else
     static_key_slow_dec(&netstamp_needed);
+#endif
 }
 EXPORT_SYMBOL(net_disable_timestamp);
+6-3
net/core/ethtool.c
···
     if (regs.len > reglen)
         regs.len = reglen;

-    regbuf = vzalloc(reglen);
-    if (reglen && !regbuf)
-        return -ENOMEM;
+    regbuf = NULL;
+    if (reglen) {
+        regbuf = vzalloc(reglen);
+        if (!regbuf)
+            return -ENOMEM;
+    }

     ops->get_regs(dev, &regs, regbuf);
+1
net/dsa/dsa2.c
···
     if (err) {
         dev_warn(ds->dev, "Failed to create slave %d: %d\n",
                  index, err);
+        ds->ports[index].netdev = NULL;
         return err;
     }
···
         pktinfo->ipi_ifindex = 0;
         pktinfo->ipi_spec_dst.s_addr = 0;
     }
-    skb_dst_drop(skb);
+    /* We need to keep the dst for __ip_options_echo()
+     * We could restrict the test to opt.ts_needtime || opt.srr,
+     * but the following is good enough as IP options are not often used.
+     */
+    if (unlikely(IPCB(skb)->opt.optlen))
+        skb_dst_force(skb);
+    else
+        skb_dst_drop(skb);
 }

 int ip_setsockopt(struct sock *sk, int level,
···
             ret = -EAGAIN;
             break;
         }
+        /* if __tcp_splice_read() got nothing while we have
+         * an skb in receive queue, we do not want to loop.
+         * This might happen with URG data.
+         */
+        if (!skb_queue_empty(&sk->sk_receive_queue))
+            break;
         sk_wait_data(sk, &timeo, NULL);
         if (signal_pending(current)) {
             ret = sock_intr_errno(timeo);
···
     }

     if (idev) {
-        if (idev->if_flags & IF_READY)
-            /* device is already configured. */
+        if (idev->if_flags & IF_READY) {
+            /* device is already configured -
+             * but resend MLD reports, we might
+             * have roamed and need to update
+             * multicast snooping switches
+             */
+            ipv6_mc_up(idev);
             break;
+        }
         idev->if_flags |= IF_READY;
     }
···

     if (bump_id)
         rt_genid_bump_ipv6(dev_net(dev));
+
+    /* Make sure that a new temporary address will be created
+     * before this temporary address becomes deprecated.
+     */
+    if (ifp->flags & IFA_F_TEMPORARY)
+        addrconf_verify_rtnl();
 }

 static void addrconf_dad_run(struct inet6_dev *idev)
+3-28
net/ipv6/exthdrs.c
···
     struct ipv6_sr_hdr *hdr;
     struct inet6_dev *idev;
     struct in6_addr *addr;
-    bool cleanup = false;
     int accept_seg6;

     hdr = (struct ipv6_sr_hdr *)skb_transport_header(skb);
···
 #endif

 looped_back:
-    if (hdr->segments_left > 0) {
-        if (hdr->nexthdr != NEXTHDR_IPV6 && hdr->segments_left == 1 &&
-            sr_has_cleanup(hdr))
-            cleanup = true;
-    } else {
+    if (hdr->segments_left == 0) {
         if (hdr->nexthdr == NEXTHDR_IPV6) {
             int offset = (hdr->hdrlen + 1) << 3;
···

     ipv6_hdr(skb)->daddr = *addr;

-    if (cleanup) {
-        int srhlen = (hdr->hdrlen + 1) << 3;
-        int nh = hdr->nexthdr;
-
-        skb_pull_rcsum(skb, sizeof(struct ipv6hdr) + srhlen);
-        memmove(skb_network_header(skb) + srhlen,
-                skb_network_header(skb),
-                (unsigned char *)hdr - skb_network_header(skb));
-        skb->network_header += srhlen;
-        ipv6_hdr(skb)->nexthdr = nh;
-        ipv6_hdr(skb)->payload_len = htons(skb->len -
-                                           sizeof(struct ipv6hdr));
-        skb_push_rcsum(skb, sizeof(struct ipv6hdr));
-    }
-
     skb_dst_drop(skb);

     ip6_route_input(skb);
···
         }
         ipv6_hdr(skb)->hop_limit--;

-        /* be sure that srh is still present before reinjecting */
-        if (!cleanup) {
-            skb_pull(skb, sizeof(struct ipv6hdr));
-            goto looped_back;
-        }
-        skb_set_transport_header(skb, sizeof(struct ipv6hdr));
-        IP6CB(skb)->nhoff = offsetof(struct ipv6hdr, nexthdr);
+        skb_pull(skb, sizeof(struct ipv6hdr));
+        goto looped_back;
     }

     dst_input(skb);
+21-19
net/ipv6/ip6_gre.c
···

 static void ip6gre_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
-                       u8 type, u8 code, int offset, __be32 info)
+                       u8 type, u8 code, int offset, __be32 info)
 {
-    const struct ipv6hdr *ipv6h = (const struct ipv6hdr *)skb->data;
-    __be16 *p = (__be16 *)(skb->data + offset);
-    int grehlen = offset + 4;
+    const struct gre_base_hdr *greh;
+    const struct ipv6hdr *ipv6h;
+    int grehlen = sizeof(*greh);
     struct ip6_tnl *t;
+    int key_off = 0;
     __be16 flags;
+    __be32 key;

-    flags = p[0];
-    if (flags&(GRE_CSUM|GRE_KEY|GRE_SEQ|GRE_ROUTING|GRE_VERSION)) {
-        if (flags&(GRE_VERSION|GRE_ROUTING))
-            return;
-        if (flags&GRE_KEY) {
-            grehlen += 4;
-            if (flags&GRE_CSUM)
-                grehlen += 4;
-        }
+    if (!pskb_may_pull(skb, offset + grehlen))
+        return;
+    greh = (const struct gre_base_hdr *)(skb->data + offset);
+    flags = greh->flags;
+    if (flags & (GRE_VERSION | GRE_ROUTING))
+        return;
+    if (flags & GRE_CSUM)
+        grehlen += 4;
+    if (flags & GRE_KEY) {
+        key_off = grehlen + offset;
+        grehlen += 4;
     }

-    /* If only 8 bytes returned, keyed message will be dropped here */
-    if (!pskb_may_pull(skb, grehlen))
+    if (!pskb_may_pull(skb, offset + grehlen))
         return;
     ipv6h = (const struct ipv6hdr *)skb->data;
-    p = (__be16 *)(skb->data + offset);
+    greh = (const struct gre_base_hdr *)(skb->data + offset);
+    key = key_off ? *(__be32 *)(skb->data + key_off) : 0;

     t = ip6gre_tunnel_lookup(skb->dev, &ipv6h->daddr, &ipv6h->saddr,
-                             flags & GRE_KEY ?
-                             *(((__be32 *)p) + (grehlen / 4) - 1) : 0,
-                             p[1]);
+                             key, greh->protocol);
     if (!t)
         return;
···
     return 0; /* don't send reset */
 }

+static void tcp_v6_restore_cb(struct sk_buff *skb)
+{
+    /* We need to move header back to the beginning if xfrm6_policy_check()
+     * and tcp_v6_fill_cb() are going to be called again.
+     * ip6_datagram_recv_specific_ctl() also expects IP6CB to be there.
+     */
+    memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6,
+            sizeof(struct inet6_skb_parm));
+}
+
 static struct sock *tcp_v6_syn_recv_sock(const struct sock *sk, struct sk_buff *skb,
                                          struct request_sock *req,
                                          struct dst_entry *dst,
···
                     sk_gfp_mask(sk, GFP_ATOMIC));
             consume_skb(ireq->pktopts);
             ireq->pktopts = NULL;
-            if (newnp->pktoptions)
+            if (newnp->pktoptions) {
+                tcp_v6_restore_cb(newnp->pktoptions);
                 skb_set_owner_r(newnp->pktoptions, newsk);
+            }
         }
     }
···
 out:
     tcp_listendrop(sk);
     return NULL;
-}
-
-static void tcp_v6_restore_cb(struct sk_buff *skb)
-{
-    /* We need to move header back to the beginning if xfrm6_policy_check()
-     * and tcp_v6_fill_cb() are going to be called again.
-     * ip6_datagram_recv_specific_ctl() also expects IP6CB to be there.
-     */
-    memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6,
-            sizeof(struct inet6_skb_parm));
 }

 /* The socket must have it's spinlock held when we get
+1-1
net/ipv6/udp.c
···
     return err;

 csum_copy_err:
-    if (!__sk_queue_drop_skb(sk, skb, flags)) {
+    if (!__sk_queue_drop_skb(sk, skb, flags, udp_skb_destructor)) {
         if (is_udp4) {
             UDP_INC_STATS(sock_net(sk),
                           UDP_MIB_CSUMERRORS, is_udplite);
+23-19
net/kcm/kcmsock.c
···
         goto out_error;
     }

-    /* New message, alloc head skb */
-    head = alloc_skb(0, sk->sk_allocation);
-    while (!head) {
-        kcm_push(kcm);
-        err = sk_stream_wait_memory(sk, &timeo);
-        if (err)
-            goto out_error;
-
+    if (msg_data_left(msg)) {
+        /* New message, alloc head skb */
         head = alloc_skb(0, sk->sk_allocation);
+        while (!head) {
+            kcm_push(kcm);
+            err = sk_stream_wait_memory(sk, &timeo);
+            if (err)
+                goto out_error;
+
+            head = alloc_skb(0, sk->sk_allocation);
+        }
+
+        skb = head;
+
+        /* Set ip_summed to CHECKSUM_UNNECESSARY to avoid calling
+         * csum_and_copy_from_iter from skb_do_copy_data_nocache.
+         */
+        skb->ip_summed = CHECKSUM_UNNECESSARY;
     }
-
-    skb = head;
-
-    /* Set ip_summed to CHECKSUM_UNNECESSARY to avoid calling
-     * csum_and_copy_from_iter from skb_do_copy_data_nocache.
-     */
-    skb->ip_summed = CHECKSUM_UNNECESSARY;

 start:
     while (msg_data_left(msg)) {
···
     if (eor) {
         bool not_busy = skb_queue_empty(&sk->sk_write_queue);

-        /* Message complete, queue it on send buffer */
-        __skb_queue_tail(&sk->sk_write_queue, head);
-        kcm->seq_skb = NULL;
-        KCM_STATS_INCR(kcm->stats.tx_msgs);
+        if (head) {
+            /* Message complete, queue it on send buffer */
+            __skb_queue_tail(&sk->sk_write_queue, head);
+            kcm->seq_skb = NULL;
+            KCM_STATS_INCR(kcm->stats.tx_msgs);
+        }

         if (msg->msg_flags & MSG_BATCH) {
             kcm->tx_wait_more = true;
+1
net/l2tp/l2tp_core.h
···
 int l2tp_nl_register_ops(enum l2tp_pwtype pw_type,
                          const struct l2tp_nl_cmd_ops *ops);
 void l2tp_nl_unregister_ops(enum l2tp_pwtype pw_type);
+int l2tp_ioctl(struct sock *sk, int cmd, unsigned long arg);

 /* Session reference counts. Incremented when code obtains a reference
  * to a session.
···
         "_SDA2_BASE_",      /* ppc */
         NULL };

+    static char *special_prefixes[] = {
+        "__crc_",       /* modversions */
+        NULL };
+
     static char *special_suffixes[] = {
         "_veneer",      /* arm */
         "_from_arm",    /* arm */
···
     for (i = 0; special_symbols[i]; i++)
         if (strcmp(sym_name, special_symbols[i]) == 0)
             return 0;
+
+    for (i = 0; special_prefixes[i]; i++) {
+        int l = strlen(special_prefixes[i]);
+
+        if (l <= strlen(sym_name) &&
+            strncmp(sym_name, special_prefixes[i], l) == 0)
+            return 0;
+    }

     for (i = 0; special_suffixes[i]; i++) {
         int l = strlen(sym_name) - strlen(special_suffixes[i]);
+10
scripts/mod/modpost.c
···
     if (strncmp(symname, CRC_PFX, strlen(CRC_PFX)) == 0) {
         is_crc = true;
         crc = (unsigned int) sym->st_value;
+        if (sym->st_shndx != SHN_UNDEF && sym->st_shndx != SHN_ABS) {
+            unsigned int *crcp;
+
+            /* symbol points to the CRC in the ELF object */
+            crcp = (void *)info->hdr + sym->st_value +
+                   info->sechdrs[sym->st_shndx].sh_offset -
+                   (info->hdr->e_type != ET_REL ?
+                    info->sechdrs[sym->st_shndx].sh_addr : 0);
+            crc = *crcp;
+        }
         sym_update_crc(symname + strlen(CRC_PFX), mod, crc,
                        export);
     }
+1-1
security/selinux/hooks.c
···
         return error;

     /* Obtain a SID for the context, if one was specified. */
-    if (size && str[1] && str[1] != '\n') {
+    if (size && str[0] && str[0] != '\n') {
         if (str[size-1] == '\n') {
             str[size-1] = 0;
             size--;
         }
+1-8
sound/core/seq/seq_memory.c
···
 {
     unsigned long flags;
     struct snd_seq_event_cell *ptr;
-    int max_count = 5 * HZ;

     if (snd_BUG_ON(!pool))
         return -EINVAL;
···
     if (waitqueue_active(&pool->output_sleep))
         wake_up(&pool->output_sleep);

-    while (atomic_read(&pool->counter) > 0) {
-        if (max_count == 0) {
-            pr_warn("ALSA: snd_seq_pool_done timeout: %d cells remain\n", atomic_read(&pool->counter));
-            break;
-        }
+    while (atomic_read(&pool->counter) > 0)
         schedule_timeout_uninterruptible(1);
-        max_count--;
-    }

     /* release all resources */
     spin_lock_irqsave(&pool->lock, flags);
+20-13
sound/core/seq/seq_queue.c
···
     }
 }

+static void queue_use(struct snd_seq_queue *queue, int client, int use);
+
 /* allocate a new queue -
  * return queue index value or negative value for error
  */
···
     if (q == NULL)
         return -ENOMEM;
     q->info_flags = info_flags;
+    queue_use(q, client, 1);
     if (queue_list_add(q) < 0) {
         queue_delete(q);
         return -ENOMEM;
     }
-    snd_seq_queue_use(q->queue, client, 1); /* use this queue */
     return q->queue;
 }
···
     return result;
 }

-
-/* use or unuse this queue -
- * if it is the first client, starts the timer.
- * if it is not longer used by any clients, stop the timer.
- */
-int snd_seq_queue_use(int queueid, int client, int use)
+/* use or unuse this queue */
+static void queue_use(struct snd_seq_queue *queue, int client, int use)
 {
-    struct snd_seq_queue *queue;
-
-    queue = queueptr(queueid);
-    if (queue == NULL)
-        return -EINVAL;
-    mutex_lock(&queue->timer_mutex);
     if (use) {
         if (!test_and_set_bit(client, queue->clients_bitmap))
             queue->clients++;
···
     } else {
         snd_seq_timer_close(queue);
     }
+}
+
+/* use or unuse this queue -
+ * if it is the first client, starts the timer.
+ * if it is not longer used by any clients, stop the timer.
+ */
+int snd_seq_queue_use(int queueid, int client, int use)
+{
+    struct snd_seq_queue *queue;
+
+    queue = queueptr(queueid);
+    if (queue == NULL)
+        return -EINVAL;
+    mutex_lock(&queue->timer_mutex);
+    queue_use(queue, client, use);
     mutex_unlock(&queue->timer_mutex);
     queuefree(queue);
     return 0;