Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c

sge.c was overlapping two changes, one to use the new
__dev_alloc_page() in net-next, and one to use s->fl_pg_order in net.

ixgbe_phy.c was a set of overlapping whitespace changes.

Signed-off-by: David S. Miller <davem@davemloft.net>

+3960 -2481
+4 -1
Documentation/devicetree/bindings/thermal/rcar-thermal.txt
···
 - "renesas,thermal-r8a73a4" (R-Mobile AP6)
 - "renesas,thermal-r8a7779" (R-Car H1)
 - "renesas,thermal-r8a7790" (R-Car H2)
-- "renesas,thermal-r8a7791" (R-Car M2)
+- "renesas,thermal-r8a7791" (R-Car M2-W)
+- "renesas,thermal-r8a7792" (R-Car V2H)
+- "renesas,thermal-r8a7793" (R-Car M2-N)
+- "renesas,thermal-r8a7794" (R-Car E2)
 - reg : Address range of the thermal registers.
   The 1st reg will be recognized as common register
   if it has "interrupts".
+1 -1
Documentation/kernel-parameters.txt
···
 
 	usb-storage.delay_use=
 			[UMS] The delay in seconds before a new device is
-			scanned for Logical Units (default 5).
+			scanned for Logical Units (default 1).
 
 	usb-storage.quirks=
 			[UMS] A list of quirks entries to supplement or
+5 -7
Documentation/video4linux/vivid.txt
···
 key, not quality.
 
 multiplanar: select whether each device instance supports multi-planar formats,
-	and thus the V4L2 multi-planar API. By default the first device instance
-	is single-planar, the second multi-planar, and it keeps alternating.
+	and thus the V4L2 multi-planar API. By default device instances are
+	single-planar.
 
 	This module option can override that for each instance. Values are:
 
-		0: use alternating single and multi-planar devices.
 		1: this is a single-planar instance.
 		2: this is a multi-planar instance.
···
 0 otherwise.
 
 The driver has to be configured to support the multiplanar formats. By default
-the first driver instance is single-planar, the second is multi-planar, and it
-keeps alternating. This can be changed by setting the multiplanar module option,
-see section 1 for more details on that option.
+the driver instances are single-planar. This can be changed by setting the
+multiplanar module option, see section 1 for more details on that option.
 
 If the driver instance is using the multiplanar formats/API, then the first
 single planar format (YUYV) and the multiplanar NV16M and NV61M formats the
···
 to see the blended framebuffer overlay that's being written to by the second
 instance. This setup would require the following commands:
 
-	$ sudo modprobe vivid n_devs=2 node_types=0x10101,0x1 multiplanar=1,1
+	$ sudo modprobe vivid n_devs=2 node_types=0x10101,0x1
 	$ v4l2-ctl -d1 --find-fb
 	/dev/fb1 is the framebuffer associated with base address 0x12800000
 	$ sudo v4l2-ctl -d2 --set-fbuf fb=1
+11 -11
MAINTAINERS
···
 ARM/ZYNQ ARCHITECTURE
 M:	Michal Simek <michal.simek@xilinx.com>
+R:	Sören Brinkmann <soren.brinkmann@xilinx.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 W:	http://wiki.xilinx.com
 T:	git git://git.xilinx.com/linux-xlnx.git
···
 BROADCOM BCM2835 ARM ARCHITECTURE
 M:	Stephen Warren <swarren@wwwdotorg.org>
+M:	Lee Jones <lee@kernel.org>
 L:	linux-rpi-kernel@lists.infradead.org (moderated for non-subscribers)
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/swarren/linux-rpi.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/rpi/linux-rpi.git
 S:	Maintained
 N:	bcm2835
···
 F:	drivers/block/cpqarray.*
 
 HEWLETT-PACKARD SMART ARRAY RAID DRIVER (hpsa)
-M:	"Stephen M. Cameron" <scameron@beardog.cce.hp.com>
+M:	Don Brace <don.brace@pmcs.com>
 L:	iss_storagedev@hp.com
+L:	storagedev@pmcs.com
+L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	Documentation/scsi/hpsa.txt
 F:	drivers/scsi/hpsa*.[ch]
···
 F:	include/uapi/linux/cciss*.h
 
 HEWLETT-PACKARD SMART CISS RAID DRIVER (cciss)
-M:	Mike Miller <mike.miller@hp.com>
+M:	Don Brace <don.brace@pmcs.com>
 L:	iss_storagedev@hp.com
+L:	storagedev@pmcs.com
+L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	Documentation/blockdev/cciss.txt
 F:	drivers/block/cciss*
···
 S:	Maintained
 F:	drivers/iio/
 F:	drivers/staging/iio/
+F:	include/linux/iio/
 
 IKANOS/ADI EAGLE ADSL USB DRIVER
 M:	Matthieu Castet <castet.matthieu@free.fr>
···
 PIN CONTROL SUBSYSTEM
 M:	Linus Walleij <linus.walleij@linaro.org>
+L:	linux-gpio@vger.kernel.org
 S:	Maintained
 F:	drivers/pinctrl/
 F:	include/linux/pinctrl/
···
 TI DAVINCI MACHINE SUPPORT
 M:	Sekhar Nori <nsekhar@ti.com>
 M:	Kevin Hilman <khilman@deeprootsystems.com>
-L:	davinci-linux-open-source@linux.davincidsp.com (moderated for non-subscribers)
 T:	git git://gitorious.org/linux-davinci/linux-davinci.git
 Q:	http://patchwork.kernel.org/project/linux-davinci/list/
 S:	Supported
···
 TI DAVINCI SERIES MEDIA DRIVER
 M:	Lad, Prabhakar <prabhakar.csengg@gmail.com>
 L:	linux-media@vger.kernel.org
-L:	davinci-linux-open-source@linux.davincidsp.com (moderated for non-subscribers)
 W:	http://linuxtv.org/
 Q:	http://patchwork.linuxtv.org/project/linux-media/list/
 T:	git git://linuxtv.org/mhadli/v4l-dvb-davinci_devices.git
···
 UNIVERSAL FLASH STORAGE HOST CONTROLLER DRIVER
 M:	Vinayak Holikatti <vinholikatti@gmail.com>
-M:	Santosh Y <santoshsy@gmail.com>
 L:	linux-scsi@vger.kernel.org
 S:	Supported
 F:	Documentation/scsi/ufs.txt
···
 S:	Maintained
 F:	Documentation/hid/hiddev.txt
 F:	drivers/hid/usbhid/
-
-USB/IP DRIVERS
-L:	linux-usb@vger.kernel.org
-S:	Orphan
-F:	drivers/staging/usbip/
 
 USB ISP116X DRIVER
 M:	Olav Kongas <ok@artecdesign.ee>
+2 -2
Makefile
···
 VERSION = 3
 PATCHLEVEL = 18
 SUBLEVEL = 0
-EXTRAVERSION = -rc2
-NAME = Shuffling Zombie Juror
+EXTRAVERSION = -rc4
+NAME = Diseased Newt
 
 # *DOCUMENTATION*
 # To see a list of typical targets execute "make help"
+1 -1
arch/arm/Kconfig.debug
···
 	default 0xf1c28000 if DEBUG_SUNXI_UART0
 	default 0xf1c28400 if DEBUG_SUNXI_UART1
 	default 0xf1f02800 if DEBUG_SUNXI_R_UART
-	default 0xf2100000 if DEBUG_PXA_UART1
+	default 0xf6200000 if DEBUG_PXA_UART1
 	default 0xf4090000 if ARCH_LPC32XX
 	default 0xf4200000 if ARCH_GEMINI
 	default 0xf7000000 if DEBUG_S3C24XX_UART && (DEBUG_S3C_UART0 || \
+19
arch/arm/boot/dts/vf610-cosmic.dts
···
 
 };
 
+&esdhc1 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_esdhc1>;
+	bus-width = <4>;
+	status = "okay";
+};
+
 &fec1 {
 	phy-mode = "rmii";
 	pinctrl-names = "default";
···
 &iomuxc {
 	vf610-cosmic {
+		pinctrl_esdhc1: esdhc1grp {
+			fsl,pins = <
+				VF610_PAD_PTA24__ESDHC1_CLK	0x31ef
+				VF610_PAD_PTA25__ESDHC1_CMD	0x31ef
+				VF610_PAD_PTA26__ESDHC1_DAT0	0x31ef
+				VF610_PAD_PTA27__ESDHC1_DAT1	0x31ef
+				VF610_PAD_PTA28__ESDHC1_DATA2	0x31ef
+				VF610_PAD_PTA29__ESDHC1_DAT3	0x31ef
+				VF610_PAD_PTB28__GPIO_98	0x219d
+			>;
+		};
+
 		pinctrl_fec1: fec1grp {
 			fsl,pins = <
 				VF610_PAD_PTC9__ENET_RMII1_MDC	0x30d2
+4
arch/arm/boot/dts/zynq-parallella.dts
···
 	};
 };
 
+&clkc {
+	fclk-enable = <0xf>;
+};
+
 &gem0 {
 	status = "okay";
 	phy-mode = "rgmii-id";
+9
arch/arm/common/edma.c
···
 #include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/edma.h>
+#include <linux/dma-mapping.h>
 #include <linux/of_address.h>
 #include <linux/of_device.h>
 #include <linux/of_dma.h>
···
 	struct device_node	*node = pdev->dev.of_node;
 	struct device		*dev = &pdev->dev;
 	int			ret;
+	struct platform_device_info edma_dev_info = {
+		.name = "edma-dma-engine",
+		.dma_mask = DMA_BIT_MASK(32),
+		.parent = &pdev->dev,
+	};
 
 	if (node) {
 		/* Check if this is a second instance registered */
···
 			edma_write_array(j, EDMA_QRAE, i, 0x0);
 		}
 		arch_num_cc++;
+
+		edma_dev_info.id = j;
+		platform_device_register_full(&edma_dev_info);
 	}
 
 	return 0;
+1
arch/arm/configs/imx_v4_v5_defconfig
···
 # CONFIG_HW_RANDOM is not set
 CONFIG_I2C_CHARDEV=y
 CONFIG_I2C_IMX=y
+CONFIG_SPI=y
 CONFIG_SPI_IMX=y
 CONFIG_SPI_SPIDEV=y
 CONFIG_GPIO_SYSFS=y
+1
arch/arm/configs/imx_v6_v7_defconfig
···
 CONFIG_I2C_ALGOPCF=m
 CONFIG_I2C_ALGOPCA=m
 CONFIG_I2C_IMX=y
+CONFIG_SPI=y
 CONFIG_SPI_IMX=y
 CONFIG_GPIO_SYSFS=y
 CONFIG_GPIO_MC9S08DZ60=y
+2
arch/arm/configs/multi_v7_defconfig
···
 CONFIG_SPI_XILINX=y
 CONFIG_PINCTRL_AS3722=y
 CONFIG_PINCTRL_PALMAS=y
+CONFIG_PINCTRL_APQ8084=y
 CONFIG_GPIO_SYSFS=y
 CONFIG_GPIO_GENERIC_PLATFORM=y
 CONFIG_GPIO_DWAPB=y
···
 CONFIG_NVEC_PAZ00=y
 CONFIG_QCOM_GSBI=y
 CONFIG_COMMON_CLK_QCOM=y
+CONFIG_APQ_MMCC_8084=y
 CONFIG_MSM_GCC_8660=y
 CONFIG_MSM_MMCC_8960=y
 CONFIG_MSM_MMCC_8974=y
+2 -2
arch/arm/configs/omap2plus_defconfig
···
 CONFIG_IP_PNP_BOOTP=y
 CONFIG_IP_PNP_RARP=y
 # CONFIG_INET_LRO is not set
-CONFIG_IPV6=y
 CONFIG_NETFILTER=y
 CONFIG_CAN=m
 CONFIG_CAN_C_CAN=m
···
 CONFIG_MTD_CFI=y
 CONFIG_MTD_CFI_INTELEXT=y
 CONFIG_MTD_NAND=y
+CONFIG_MTD_NAND_ECC_BCH=y
 CONFIG_MTD_NAND_OMAP2=y
 CONFIG_MTD_ONENAND=y
 CONFIG_MTD_ONENAND_VERIFY_WRITE=y
···
 CONFIG_FANOTIFY=y
 CONFIG_QUOTA=y
 CONFIG_QFMT_V2=y
-CONFIG_AUTOFS4_FS=y
+CONFIG_AUTOFS4_FS=m
 CONFIG_MSDOS_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_TMPFS=y
+29 -44
arch/arm/configs/socfpga_defconfig
···
-CONFIG_EXPERIMENTAL=y
 CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
+CONFIG_HIGH_RES_TIMERS=y
 CONFIG_IKCONFIG=y
 CONFIG_IKCONFIG_PROC=y
 CONFIG_LOG_BUF_SHIFT=14
···
 CONFIG_OPROFILE=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
-CONFIG_HOTPLUG=y
 # CONFIG_LBDAF is not set
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARCH_SOCFPGA=y
-CONFIG_MACH_SOCFPGA_CYCLONE5=y
 CONFIG_ARM_THUMBEE=y
-# CONFIG_ARCH_VEXPRESS_CORTEX_A5_A9_ERRATA is not set
-# CONFIG_CACHE_L2X0 is not set
-CONFIG_HIGH_RES_TIMERS=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=2
 CONFIG_AEABI=y
 CONFIG_ZBOOT_ROM_TEXT=0x0
 CONFIG_ZBOOT_ROM_BSS=0x0
-CONFIG_CMDLINE=""
 CONFIG_VFP=y
 CONFIG_NEON=y
 CONFIG_NET=y
···
 CONFIG_IP_PNP_DHCP=y
 CONFIG_IP_PNP_BOOTP=y
 CONFIG_IP_PNP_RARP=y
+CONFIG_IPV6=y
+CONFIG_NETWORK_PHY_TIMESTAMPING=y
+CONFIG_VLAN_8021Q=y
+CONFIG_VLAN_8021Q_GVRP=y
 CONFIG_CAN=y
-CONFIG_CAN_RAW=y
-CONFIG_CAN_BCM=y
-CONFIG_CAN_GW=y
-CONFIG_CAN_DEV=y
-CONFIG_CAN_CALC_BITTIMING=y
 CONFIG_CAN_C_CAN=y
 CONFIG_CAN_C_CAN_PLATFORM=y
 CONFIG_CAN_DEBUG_DEVICES=y
 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
 CONFIG_DEVTMPFS=y
-CONFIG_PROC_DEVICETREE=y
+CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_BLK_DEV_RAM=y
 CONFIG_BLK_DEV_RAM_COUNT=2
 CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_SRAM=y
 CONFIG_SCSI=y
 # CONFIG_SCSI_PROC_FS is not set
 CONFIG_BLK_DEV_SD=y
 # CONFIG_SCSI_LOWLEVEL is not set
 CONFIG_NETDEVICES=y
 CONFIG_STMMAC_ETH=y
-CONFIG_MICREL_PHY=y
-# CONFIG_STMMAC_PHY_ID_ZERO_WORKAROUND is not set
-CONFIG_INPUT_EVDEV=y
 CONFIG_DWMAC_SOCFPGA=y
-CONFIG_PPS=y
-CONFIG_NETWORK_PHY_TIMESTAMPING=y
-CONFIG_PTP_1588_CLOCK=y
-CONFIG_VLAN_8021Q=y
-CONFIG_VLAN_8021Q_GVRP=y
-CONFIG_GARP=y
-CONFIG_IPV6=y
+CONFIG_MICREL_PHY=y
+CONFIG_INPUT_EVDEV=y
 # CONFIG_SERIO_SERPORT is not set
 CONFIG_SERIO_AMBAKMI=y
 CONFIG_LEGACY_PTY_COUNT=16
···
 CONFIG_SERIAL_8250_NR_UARTS=2
 CONFIG_SERIAL_8250_RUNTIME_UARTS=2
 CONFIG_SERIAL_8250_DW=y
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_DESIGNWARE_PLATFORM=y
 CONFIG_GPIOLIB=y
 CONFIG_GPIO_SYSFS=y
 CONFIG_GPIO_DWAPB=y
-# CONFIG_RTC_HCTOSYS is not set
+CONFIG_PMBUS=y
+CONFIG_SENSORS_LTC2978=y
+CONFIG_SENSORS_LTC2978_REGULATOR=y
 CONFIG_WATCHDOG=y
 CONFIG_DW_WATCHDOG=y
+CONFIG_REGULATOR=y
+CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_USB=y
+CONFIG_USB_DWC2=y
+CONFIG_USB_DWC2_HOST=y
+CONFIG_MMC=y
+CONFIG_MMC_DW=y
 CONFIG_EXT2_FS=y
 CONFIG_EXT2_FS_XATTR=y
 CONFIG_EXT2_FS_POSIX_ACL=y
 CONFIG_EXT3_FS=y
-CONFIG_NFS_FS=y
-CONFIG_ROOT_NFS=y
-# CONFIG_DNOTIFY is not set
-# CONFIG_INOTIFY_USER is not set
-CONFIG_FHANDLE=y
+CONFIG_EXT4_FS=y
 CONFIG_VFAT_FS=y
 CONFIG_NTFS_FS=y
 CONFIG_NTFS_RW=y
 CONFIG_TMPFS=y
-CONFIG_JFFS2_FS=y
+CONFIG_CONFIGFS_FS=y
+CONFIG_NFS_FS=y
+CONFIG_ROOT_NFS=y
 CONFIG_NLS_CODEPAGE_437=y
 CONFIG_NLS_ISO8859_1=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_INFO=y
 CONFIG_MAGIC_SYSRQ=y
 CONFIG_DETECT_HUNG_TASK=y
 # CONFIG_SCHED_DEBUG is not set
-CONFIG_DEBUG_INFO=y
 CONFIG_ENABLE_DEFAULT_TRACERS=y
 CONFIG_DEBUG_USER=y
 CONFIG_XZ_DEC=y
-CONFIG_I2C=y
-CONFIG_I2C_DESIGNWARE_CORE=y
-CONFIG_I2C_DESIGNWARE_PLATFORM=y
-CONFIG_I2C_CHARDEV=y
-CONFIG_MMC=y
-CONFIG_MMC_DW=y
-CONFIG_PM=y
-CONFIG_SUSPEND=y
-CONFIG_MMC_UNSAFE_RESUME=y
-CONFIG_USB=y
-CONFIG_USB_DWC2=y
-CONFIG_USB_DWC2_HOST=y
-CONFIG_USB_DWC2_PLATFORM=y
+1
arch/arm/include/uapi/asm/unistd.h
···
 #define __NR_seccomp			(__NR_SYSCALL_BASE+383)
 #define __NR_getrandom			(__NR_SYSCALL_BASE+384)
 #define __NR_memfd_create		(__NR_SYSCALL_BASE+385)
+#define __NR_bpf			(__NR_SYSCALL_BASE+386)
 
 /*
  * The following SWIs are ARM private.
+1
arch/arm/kernel/calls.S
···
 		CALL(sys_seccomp)
 		CALL(sys_getrandom)
/* 385 */	CALL(sys_memfd_create)
+		CALL(sys_bpf)
 #ifndef syscalls_counted
 .equ syscalls_padding, ((NR_syscalls + 3) & ~3) - NR_syscalls
 #define syscalls_counted
+89 -39
arch/arm/mach-imx/clk-vf610.c
···
 #define PFD_PLL1_BASE		(anatop_base + 0x2b0)
 #define PFD_PLL2_BASE		(anatop_base + 0x100)
 #define PFD_PLL3_BASE		(anatop_base + 0xf0)
+#define PLL1_CTRL		(anatop_base + 0x270)
+#define PLL2_CTRL		(anatop_base + 0x30)
 #define PLL3_CTRL		(anatop_base + 0x10)
+#define PLL4_CTRL		(anatop_base + 0x70)
+#define PLL5_CTRL		(anatop_base + 0xe0)
+#define PLL6_CTRL		(anatop_base + 0xa0)
 #define PLL7_CTRL		(anatop_base + 0x20)
+#define ANA_MISC1		(anatop_base + 0x160)
 
 static void __iomem *anatop_base;
 static void __iomem *ccm_base;
···
 /* sources for multiplexer clocks, this is used multiple times */
 static const char *fast_sels[]	= { "firc", "fxosc", };
 static const char *slow_sels[]	= { "sirc_32k", "sxosc", };
-static const char *pll1_sels[]	= { "pll1_main", "pll1_pfd1", "pll1_pfd2", "pll1_pfd3", "pll1_pfd4", };
-static const char *pll2_sels[]	= { "pll2_main", "pll2_pfd1", "pll2_pfd2", "pll2_pfd3", "pll2_pfd4", };
-static const char *sys_sels[]	= { "fast_clk_sel", "slow_clk_sel", "pll2_pfd_sel", "pll2_main", "pll1_pfd_sel", "pll3_main", };
+static const char *pll1_sels[]	= { "pll1_sys", "pll1_pfd1", "pll1_pfd2", "pll1_pfd3", "pll1_pfd4", };
+static const char *pll2_sels[]	= { "pll2_bus", "pll2_pfd1", "pll2_pfd2", "pll2_pfd3", "pll2_pfd4", };
+static const char *pll_bypass_src_sels[] = { "fast_clk_sel", "lvds1_in", };
+static const char *pll1_bypass_sels[] = { "pll1", "pll1_bypass_src", };
+static const char *pll2_bypass_sels[] = { "pll2", "pll2_bypass_src", };
+static const char *pll3_bypass_sels[] = { "pll3", "pll3_bypass_src", };
+static const char *pll4_bypass_sels[] = { "pll4", "pll4_bypass_src", };
+static const char *pll5_bypass_sels[] = { "pll5", "pll5_bypass_src", };
+static const char *pll6_bypass_sels[] = { "pll6", "pll6_bypass_src", };
+static const char *pll7_bypass_sels[] = { "pll7", "pll7_bypass_src", };
+static const char *sys_sels[]	= { "fast_clk_sel", "slow_clk_sel", "pll2_pfd_sel", "pll2_bus", "pll1_pfd_sel", "pll3_usb_otg", };
 static const char *ddr_sels[]	= { "pll2_pfd2", "sys_sel", };
 static const char *rmii_sels[]	= { "enet_ext", "audio_ext", "enet_50m", "enet_25m", };
 static const char *enet_ts_sels[]	= { "enet_ext", "fxosc", "audio_ext", "usb", "enet_ts", "enet_25m", "enet_50m", };
-static const char *esai_sels[]	= { "audio_ext", "mlb", "spdif_rx", "pll4_main_div", };
-static const char *sai_sels[]	= { "audio_ext", "mlb", "spdif_rx", "pll4_main_div", };
+static const char *esai_sels[]	= { "audio_ext", "mlb", "spdif_rx", "pll4_audio_div", };
+static const char *sai_sels[]	= { "audio_ext", "mlb", "spdif_rx", "pll4_audio_div", };
 static const char *nfc_sels[]	= { "platform_bus", "pll1_pfd1", "pll3_pfd1", "pll3_pfd3", };
-static const char *qspi_sels[]	= { "pll3_main", "pll3_pfd4", "pll2_pfd4", "pll1_pfd4", };
-static const char *esdhc_sels[]	= { "pll3_main", "pll3_pfd3", "pll1_pfd3", "platform_bus", };
-static const char *dcu_sels[]	= { "pll1_pfd2", "pll3_main", };
+static const char *qspi_sels[]	= { "pll3_usb_otg", "pll3_pfd4", "pll2_pfd4", "pll1_pfd4", };
+static const char *esdhc_sels[]	= { "pll3_usb_otg", "pll3_pfd3", "pll1_pfd3", "platform_bus", };
+static const char *dcu_sels[]	= { "pll1_pfd2", "pll3_usb_otg", };
 static const char *gpu_sels[]	= { "pll2_pfd2", "pll3_pfd2", };
-static const char *vadc_sels[]	= { "pll6_main_div", "pll3_main_div", "pll3_main", };
+static const char *vadc_sels[]	= { "pll6_video_div", "pll3_usb_otg_div", "pll3_usb_otg", };
 /* FTM counter clock source, not module clock */
 static const char *ftm_ext_sels[]	= {"sirc_128k", "sxosc", "fxosc_half", "audio_ext", };
 static const char *ftm_fix_sels[]	= { "sxosc", "ipg_bus", };
 
-static struct clk_div_table pll4_main_div_table[] = {
+static struct clk_div_table pll4_audio_div_table[] = {
 	{ .val = 0, .div = 1 },
 	{ .val = 1, .div = 2 },
 	{ .val = 2, .div = 6 },
···
 	clk[VF610_CLK_AUDIO_EXT] = imx_obtain_fixed_clock("audio_ext", 0);
 	clk[VF610_CLK_ENET_EXT] = imx_obtain_fixed_clock("enet_ext", 0);
 
+	/* Clock source from external clock via LVDs PAD */
+	clk[VF610_CLK_ANACLK1] = imx_obtain_fixed_clock("anaclk1", 0);
+
 	clk[VF610_CLK_FXOSC_HALF] = imx_clk_fixed_factor("fxosc_half", "fxosc", 1, 2);
 
 	np = of_find_compatible_node(NULL, NULL, "fsl,vf610-anatop");
···
 	clk[VF610_CLK_SLOW_CLK_SEL] = imx_clk_mux("slow_clk_sel", CCM_CCSR, 4, 1, slow_sels, ARRAY_SIZE(slow_sels));
 	clk[VF610_CLK_FASK_CLK_SEL] = imx_clk_mux("fast_clk_sel", CCM_CCSR, 5, 1, fast_sels, ARRAY_SIZE(fast_sels));
 
-	clk[VF610_CLK_PLL1_MAIN] = imx_clk_fixed_factor("pll1_main", "fast_clk_sel", 22, 1);
-	clk[VF610_CLK_PLL1_PFD1] = imx_clk_pfd("pll1_pfd1", "pll1_main", PFD_PLL1_BASE, 0);
-	clk[VF610_CLK_PLL1_PFD2] = imx_clk_pfd("pll1_pfd2", "pll1_main", PFD_PLL1_BASE, 1);
-	clk[VF610_CLK_PLL1_PFD3] = imx_clk_pfd("pll1_pfd3", "pll1_main", PFD_PLL1_BASE, 2);
-	clk[VF610_CLK_PLL1_PFD4] = imx_clk_pfd("pll1_pfd4", "pll1_main", PFD_PLL1_BASE, 3);
-
-	clk[VF610_CLK_PLL2_MAIN] = imx_clk_fixed_factor("pll2_main", "fast_clk_sel", 22, 1);
-	clk[VF610_CLK_PLL2_PFD1] = imx_clk_pfd("pll2_pfd1", "pll2_main", PFD_PLL2_BASE, 0);
-	clk[VF610_CLK_PLL2_PFD2] = imx_clk_pfd("pll2_pfd2", "pll2_main", PFD_PLL2_BASE, 1);
-	clk[VF610_CLK_PLL2_PFD3] = imx_clk_pfd("pll2_pfd3", "pll2_main", PFD_PLL2_BASE, 2);
-	clk[VF610_CLK_PLL2_PFD4] = imx_clk_pfd("pll2_pfd4", "pll2_main", PFD_PLL2_BASE, 3);
-
-	clk[VF610_CLK_PLL3_MAIN] = imx_clk_fixed_factor("pll3_main", "fast_clk_sel", 20, 1);
-	clk[VF610_CLK_PLL3_PFD1] = imx_clk_pfd("pll3_pfd1", "pll3_main", PFD_PLL3_BASE, 0);
-	clk[VF610_CLK_PLL3_PFD2] = imx_clk_pfd("pll3_pfd2", "pll3_main", PFD_PLL3_BASE, 1);
-	clk[VF610_CLK_PLL3_PFD3] = imx_clk_pfd("pll3_pfd3", "pll3_main", PFD_PLL3_BASE, 2);
-	clk[VF610_CLK_PLL3_PFD4] = imx_clk_pfd("pll3_pfd4", "pll3_main", PFD_PLL3_BASE, 3);
-
-	clk[VF610_CLK_PLL4_MAIN] = imx_clk_fixed_factor("pll4_main", "fast_clk_sel", 25, 1);
-	/* Enet pll: fixed 50Mhz */
-	clk[VF610_CLK_PLL5_MAIN] = imx_clk_fixed_factor("pll5_main", "fast_clk_sel", 125, 6);
-	/* pll6: default 960Mhz */
-	clk[VF610_CLK_PLL6_MAIN] = imx_clk_fixed_factor("pll6_main", "fast_clk_sel", 40, 1);
-	/* pll7: USB1 PLL at 480MHz */
-	clk[VF610_CLK_PLL7_MAIN] = imx_clk_pllv3(IMX_PLLV3_USB, "pll7_main", "fast_clk_sel", PLL7_CTRL, 0x2);
+	clk[VF610_CLK_PLL1_BYPASS_SRC] = imx_clk_mux("pll1_bypass_src", PLL1_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+	clk[VF610_CLK_PLL2_BYPASS_SRC] = imx_clk_mux("pll2_bypass_src", PLL2_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+	clk[VF610_CLK_PLL3_BYPASS_SRC] = imx_clk_mux("pll3_bypass_src", PLL3_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+	clk[VF610_CLK_PLL4_BYPASS_SRC] = imx_clk_mux("pll4_bypass_src", PLL4_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+	clk[VF610_CLK_PLL5_BYPASS_SRC] = imx_clk_mux("pll5_bypass_src", PLL5_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+	clk[VF610_CLK_PLL6_BYPASS_SRC] = imx_clk_mux("pll6_bypass_src", PLL6_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+	clk[VF610_CLK_PLL7_BYPASS_SRC] = imx_clk_mux("pll7_bypass_src", PLL7_CTRL, 14, 1, pll_bypass_src_sels, ARRAY_SIZE(pll_bypass_src_sels));
+
+	clk[VF610_CLK_PLL1] = imx_clk_pllv3(IMX_PLLV3_GENERIC, "pll1", "pll1_bypass_src", PLL1_CTRL, 0x1);
+	clk[VF610_CLK_PLL2] = imx_clk_pllv3(IMX_PLLV3_GENERIC, "pll2", "pll2_bypass_src", PLL2_CTRL, 0x1);
+	clk[VF610_CLK_PLL3] = imx_clk_pllv3(IMX_PLLV3_USB, "pll3", "pll3_bypass_src", PLL3_CTRL, 0x1);
+	clk[VF610_CLK_PLL4] = imx_clk_pllv3(IMX_PLLV3_AV, "pll4", "pll4_bypass_src", PLL4_CTRL, 0x7f);
+	clk[VF610_CLK_PLL5] = imx_clk_pllv3(IMX_PLLV3_ENET, "pll5", "pll5_bypass_src", PLL5_CTRL, 0x3);
+	clk[VF610_CLK_PLL6] = imx_clk_pllv3(IMX_PLLV3_AV, "pll6", "pll6_bypass_src", PLL6_CTRL, 0x7f);
+	clk[VF610_CLK_PLL7] = imx_clk_pllv3(IMX_PLLV3_USB, "pll7", "pll7_bypass_src", PLL7_CTRL, 0x1);
+
+	clk[VF610_PLL1_BYPASS] = imx_clk_mux_flags("pll1_bypass", PLL1_CTRL, 16, 1, pll1_bypass_sels, ARRAY_SIZE(pll1_bypass_sels), CLK_SET_RATE_PARENT);
+	clk[VF610_PLL2_BYPASS] = imx_clk_mux_flags("pll2_bypass", PLL2_CTRL, 16, 1, pll2_bypass_sels, ARRAY_SIZE(pll2_bypass_sels), CLK_SET_RATE_PARENT);
+	clk[VF610_PLL3_BYPASS] = imx_clk_mux_flags("pll3_bypass", PLL3_CTRL, 16, 1, pll3_bypass_sels, ARRAY_SIZE(pll3_bypass_sels), CLK_SET_RATE_PARENT);
+	clk[VF610_PLL4_BYPASS] = imx_clk_mux_flags("pll4_bypass", PLL4_CTRL, 16, 1, pll4_bypass_sels, ARRAY_SIZE(pll4_bypass_sels), CLK_SET_RATE_PARENT);
+	clk[VF610_PLL5_BYPASS] = imx_clk_mux_flags("pll5_bypass", PLL5_CTRL, 16, 1, pll5_bypass_sels, ARRAY_SIZE(pll5_bypass_sels), CLK_SET_RATE_PARENT);
+	clk[VF610_PLL6_BYPASS] = imx_clk_mux_flags("pll6_bypass", PLL6_CTRL, 16, 1, pll6_bypass_sels, ARRAY_SIZE(pll6_bypass_sels), CLK_SET_RATE_PARENT);
+	clk[VF610_PLL7_BYPASS] = imx_clk_mux_flags("pll7_bypass", PLL7_CTRL, 16, 1, pll7_bypass_sels, ARRAY_SIZE(pll7_bypass_sels), CLK_SET_RATE_PARENT);
+
+	/* Do not bypass PLLs initially */
+	clk_set_parent(clk[VF610_PLL1_BYPASS], clk[VF610_CLK_PLL1]);
+	clk_set_parent(clk[VF610_PLL2_BYPASS], clk[VF610_CLK_PLL2]);
+	clk_set_parent(clk[VF610_PLL3_BYPASS], clk[VF610_CLK_PLL3]);
+	clk_set_parent(clk[VF610_PLL4_BYPASS], clk[VF610_CLK_PLL4]);
+	clk_set_parent(clk[VF610_PLL5_BYPASS], clk[VF610_CLK_PLL5]);
+	clk_set_parent(clk[VF610_PLL6_BYPASS], clk[VF610_CLK_PLL6]);
+	clk_set_parent(clk[VF610_PLL7_BYPASS], clk[VF610_CLK_PLL7]);
+
+	clk[VF610_CLK_PLL1_SYS] = imx_clk_gate("pll1_sys", "pll1_bypass", PLL1_CTRL, 13);
+	clk[VF610_CLK_PLL2_BUS] = imx_clk_gate("pll2_bus", "pll2_bypass", PLL2_CTRL, 13);
+	clk[VF610_CLK_PLL3_USB_OTG] = imx_clk_gate("pll3_usb_otg", "pll3_bypass", PLL3_CTRL, 13);
+	clk[VF610_CLK_PLL4_AUDIO] = imx_clk_gate("pll4_audio", "pll4_bypass", PLL4_CTRL, 13);
+	clk[VF610_CLK_PLL5_ENET] = imx_clk_gate("pll5_enet", "pll5_bypass", PLL5_CTRL, 13);
+	clk[VF610_CLK_PLL6_VIDEO] = imx_clk_gate("pll6_video", "pll6_bypass", PLL6_CTRL, 13);
+	clk[VF610_CLK_PLL7_USB_HOST] = imx_clk_gate("pll7_usb_host", "pll7_bypass", PLL7_CTRL, 13);
+
+	clk[VF610_CLK_LVDS1_IN] = imx_clk_gate_exclusive("lvds1_in", "anaclk1", ANA_MISC1, 12, BIT(10));
+
+	clk[VF610_CLK_PLL1_PFD1] = imx_clk_pfd("pll1_pfd1", "pll1_sys", PFD_PLL1_BASE, 0);
+	clk[VF610_CLK_PLL1_PFD2] = imx_clk_pfd("pll1_pfd2", "pll1_sys", PFD_PLL1_BASE, 1);
+	clk[VF610_CLK_PLL1_PFD3] = imx_clk_pfd("pll1_pfd3", "pll1_sys", PFD_PLL1_BASE, 2);
+	clk[VF610_CLK_PLL1_PFD4] = imx_clk_pfd("pll1_pfd4", "pll1_sys", PFD_PLL1_BASE, 3);
+
+	clk[VF610_CLK_PLL2_PFD1] = imx_clk_pfd("pll2_pfd1", "pll2_bus", PFD_PLL2_BASE, 0);
+	clk[VF610_CLK_PLL2_PFD2] = imx_clk_pfd("pll2_pfd2", "pll2_bus", PFD_PLL2_BASE, 1);
+	clk[VF610_CLK_PLL2_PFD3] = imx_clk_pfd("pll2_pfd3", "pll2_bus", PFD_PLL2_BASE, 2);
+	clk[VF610_CLK_PLL2_PFD4] = imx_clk_pfd("pll2_pfd4", "pll2_bus", PFD_PLL2_BASE, 3);
+
+	clk[VF610_CLK_PLL3_PFD1] = imx_clk_pfd("pll3_pfd1", "pll3_usb_otg", PFD_PLL3_BASE, 0);
+	clk[VF610_CLK_PLL3_PFD2] = imx_clk_pfd("pll3_pfd2", "pll3_usb_otg", PFD_PLL3_BASE, 1);
+	clk[VF610_CLK_PLL3_PFD3] = imx_clk_pfd("pll3_pfd3", "pll3_usb_otg", PFD_PLL3_BASE, 2);
+	clk[VF610_CLK_PLL3_PFD4] = imx_clk_pfd("pll3_pfd4", "pll3_usb_otg", PFD_PLL3_BASE, 3);
 
 	clk[VF610_CLK_PLL1_PFD_SEL] = imx_clk_mux("pll1_pfd_sel", CCM_CCSR, 16, 3, pll1_sels, 5);
 	clk[VF610_CLK_PLL2_PFD_SEL] = imx_clk_mux("pll2_pfd_sel", CCM_CCSR, 19, 3, pll2_sels, 5);
···
 	clk[VF610_CLK_PLATFORM_BUS] = imx_clk_divider("platform_bus", "sys_bus", CCM_CACRR, 3, 3);
 	clk[VF610_CLK_IPG_BUS] = imx_clk_divider("ipg_bus", "platform_bus", CCM_CACRR, 11, 2);
 
-	clk[VF610_CLK_PLL3_MAIN_DIV] = imx_clk_divider("pll3_main_div", "pll3_main", CCM_CACRR, 20, 1);
-	clk[VF610_CLK_PLL4_MAIN_DIV] = clk_register_divider_table(NULL, "pll4_main_div", "pll4_main", 0, CCM_CACRR, 6, 3, 0, pll4_main_div_table, &imx_ccm_lock);
-	clk[VF610_CLK_PLL6_MAIN_DIV] = imx_clk_divider("pll6_main_div", "pll6_main", CCM_CACRR, 21, 1);
+	clk[VF610_CLK_PLL3_MAIN_DIV] = imx_clk_divider("pll3_usb_otg_div", "pll3_usb_otg", CCM_CACRR, 20, 1);
+	clk[VF610_CLK_PLL4_MAIN_DIV] = clk_register_divider_table(NULL, "pll4_audio_div", "pll4_audio", 0, CCM_CACRR, 6, 3, 0, pll4_audio_div_table, &imx_ccm_lock);
+	clk[VF610_CLK_PLL6_MAIN_DIV] = imx_clk_divider("pll6_video_div", "pll6_video", CCM_CACRR, 21, 1);
 
-	clk[VF610_CLK_USBPHY0] = imx_clk_gate("usbphy0", "pll3_main", PLL3_CTRL, 6);
-	clk[VF610_CLK_USBPHY1] = imx_clk_gate("usbphy1", "pll7_main", PLL7_CTRL, 6);
+	clk[VF610_CLK_USBPHY0] = imx_clk_gate("usbphy0", "pll3_usb_otg", PLL3_CTRL, 6);
+	clk[VF610_CLK_USBPHY1] = imx_clk_gate("usbphy1", "pll7_usb_host", PLL7_CTRL, 6);
 
 	clk[VF610_CLK_USBC0] = imx_clk_gate2("usbc0", "ipg_bus", CCM_CCGR1, CCM_CCGRx_CGn(4));
 	clk[VF610_CLK_USBC1] = imx_clk_gate2("usbc1", "ipg_bus", CCM_CCGR7, CCM_CCGRx_CGn(4));
···
 	clk[VF610_CLK_QSPI1_X1_DIV] = imx_clk_divider("qspi1_x1", "qspi1_x2", CCM_CSCDR3, 11, 1);
 	clk[VF610_CLK_QSPI1] = imx_clk_gate2("qspi1", "qspi1_x1", CCM_CCGR8, CCM_CCGRx_CGn(4));
 
-	clk[VF610_CLK_ENET_50M] = imx_clk_fixed_factor("enet_50m", "pll5_main", 1, 10);
-	clk[VF610_CLK_ENET_25M] = imx_clk_fixed_factor("enet_25m", "pll5_main", 1, 20);
+	clk[VF610_CLK_ENET_50M] = imx_clk_fixed_factor("enet_50m", "pll5_enet", 1, 10);
+	clk[VF610_CLK_ENET_25M] = imx_clk_fixed_factor("enet_25m", "pll5_enet", 1, 20);
 	clk[VF610_CLK_ENET_SEL] = imx_clk_mux("enet_sel", CCM_CSCMR2, 4, 2, rmii_sels, 4);
 	clk[VF610_CLK_ENET_TS_SEL] = imx_clk_mux("enet_ts_sel", CCM_CSCMR2, 0, 3, enet_ts_sels, 7);
 	clk[VF610_CLK_ENET] = imx_clk_gate("enet", "enet_sel", CCM_CSCDR1, 24);
+2 -2
arch/arm/mach-ixp4xx/include/mach/io.h
···
 	u32 n, byte_enables, data;
 
 	if (!is_pci_memory(addr)) {
-		__raw_writeb(value, addr);
+		__raw_writeb(value, p);
 		return;
 	}
···
 	u32 n, byte_enables, data;
 
 	if (!is_pci_memory(addr))
-		return __raw_readb(addr);
+		return __raw_readb(p);
 
 	n = addr % 4;
 	byte_enables = (0xf & ~BIT(n)) << IXP4XX_PCI_NP_CBE_BESL;
+4
arch/arm/mach-omap2/omap_device.c
···
 static int __init omap_device_late_init(void)
 {
 	bus_for_each_dev(&platform_bus_type, NULL, NULL, omap_device_late_idle);
+
+	WARN(!of_have_populated_dt(),
+		"legacy booting deprecated, please update to boot with .dts\n");
+
 	return 0;
 }
 omap_late_initcall_sync(omap_device_late_init);
+5
arch/arm/mach-pxa/include/mach/addr-map.h
···
 #define DMEMC_SIZE	0x00100000
 
 /*
+ * Reserved space for low level debug virtual addresses within
+ * 0xf6200000..0xf6201000
+ */
+
+/*
  * Internal Memory Controller (PXA27x and later)
  */
 #define IMEMC_PHYS	0x58000000
+18 -8
arch/arm/mm/cache-l2x0.c
··· 956 956 * @associativity: variable to return the calculated associativity in 957 957 * @max_way_size: the maximum size in bytes for the cache ways 958 958 */ 959 - static void __init l2x0_cache_size_of_parse(const struct device_node *np, 959 + static int __init l2x0_cache_size_of_parse(const struct device_node *np, 960 960 u32 *aux_val, u32 *aux_mask, 961 961 u32 *associativity, 962 962 u32 max_way_size) ··· 974 974 of_property_read_u32(np, "cache-line-size", &line_size); 975 975 976 976 if (!cache_size || !sets) 977 - return; 977 + return -ENODEV; 978 978 979 979 /* All these l2 caches have the same line = block size actually */ 980 980 if (!line_size) { ··· 1009 1009 1010 1010 if (way_size > max_way_size) { 1011 1011 pr_err("L2C OF: set size %dKB is too large\n", way_size); 1012 - return; 1012 + return -EINVAL; 1013 1013 } 1014 1014 1015 1015 pr_info("L2C OF: override cache size: %d bytes (%dKB)\n", ··· 1027 1027 if (way_size_bits < 1 || way_size_bits > 6) { 1028 1028 pr_err("L2C OF: cache way size illegal: %dKB is not mapped\n", 1029 1029 way_size); 1030 - return; 1030 + return -EINVAL; 1031 1031 } 1032 1032 1033 1033 mask |= L2C_AUX_CTRL_WAY_SIZE_MASK; ··· 1036 1036 *aux_val &= ~mask; 1037 1037 *aux_val |= val; 1038 1038 *aux_mask &= ~mask; 1039 + 1040 + return 0; 1039 1041 } 1040 1042 1041 1043 static void __init l2x0_of_parse(const struct device_node *np, ··· 1048 1046 u32 dirty = 0; 1049 1047 u32 val = 0, mask = 0; 1050 1048 u32 assoc; 1049 + int ret; 1051 1050 1052 1051 of_property_read_u32(np, "arm,tag-latency", &tag); 1053 1052 if (tag) { ··· 1071 1068 val |= (dirty - 1) << L2X0_AUX_CTRL_DIRTY_LATENCY_SHIFT; 1072 1069 } 1073 1070 1074 - l2x0_cache_size_of_parse(np, aux_val, aux_mask, &assoc, SZ_256K); 1071 + ret = l2x0_cache_size_of_parse(np, aux_val, aux_mask, &assoc, SZ_256K); 1072 + if (ret) 1073 + return; 1074 + 1075 1075 if (assoc > 8) { 1076 1076 pr_err("l2x0 of: cache setting yield too high associativity\n"); 1077 1077 pr_err("l2x0 of: %d 
calculated, max 8\n", assoc); ··· 1131 1125 u32 tag[3] = { 0, 0, 0 }; 1132 1126 u32 filter[2] = { 0, 0 }; 1133 1127 u32 assoc; 1128 + int ret; 1134 1129 1135 1130 of_property_read_u32_array(np, "arm,tag-latency", tag, ARRAY_SIZE(tag)); 1136 1131 if (tag[0] && tag[1] && tag[2]) ··· 1159 1152 l2x0_base + L310_ADDR_FILTER_START); 1160 1153 } 1161 1154 1162 - l2x0_cache_size_of_parse(np, aux_val, aux_mask, &assoc, SZ_512K); 1155 + ret = l2x0_cache_size_of_parse(np, aux_val, aux_mask, &assoc, SZ_512K); 1156 + if (ret) 1157 + return; 1158 + 1163 1159 switch (assoc) { 1164 1160 case 16: 1165 1161 *aux_val &= ~L2X0_AUX_CTRL_ASSOC_MASK; ··· 1174 1164 *aux_mask &= ~L2X0_AUX_CTRL_ASSOC_MASK; 1175 1165 break; 1176 1166 default: 1177 - pr_err("PL310 OF: cache setting yield illegal associativity\n"); 1178 - pr_err("PL310 OF: %d calculated, only 8 and 16 legal\n", assoc); 1167 + pr_err("L2C-310 OF cache associativity %d invalid, only 8 or 16 permitted\n", 1168 + assoc); 1179 1169 break; 1180 1170 } 1181 1171 }
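The cache-l2x0.c change above converts l2x0_cache_size_of_parse() from void to an int returning 0 or a negative errno, so both call sites can bail out instead of continuing with an uninitialized associativity. A minimal user-space sketch of that error-propagation pattern (function name and property values are hypothetical, 64-byte cache lines assumed):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical sketch of the pattern the patch introduces: return 0 on
 * success or a negative errno, and let the caller return early on failure
 * rather than use a bogus associativity value. */
static int parse_cache_size(unsigned int cache_size, unsigned int sets,
			    unsigned int *assoc)
{
	if (!cache_size || !sets)
		return -ENODEV;	/* DT properties absent: nothing to override */

	*assoc = cache_size / (sets * 64);	/* 64-byte lines assumed */
	if (!*assoc)
		return -EINVAL;	/* inconsistent property values */
	return 0;
}
```

This mirrors the new `ret = l2x0_cache_size_of_parse(...); if (ret) return;` shape at both call sites.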
-1
arch/arm/mm/dma-mapping.c
··· 1198 1198 { 1199 1199 return dma_common_pages_remap(pages, size, 1200 1200 VM_ARM_DMA_CONSISTENT | VM_USERMAP, prot, caller); 1201 - return NULL; 1202 1201 } 1203 1202 1204 1203 /*
+3
arch/arm/mm/highmem.c
··· 127 127 { 128 128 unsigned long vaddr; 129 129 int idx, type; 130 + struct page *page = pfn_to_page(pfn); 130 131 131 132 pagefault_disable(); 133 + if (!PageHighMem(page)) 134 + return page_address(page); 132 135 133 136 type = kmap_atomic_idx_push(); 134 137 idx = type + KM_TYPE_NR * smp_processor_id();
+24 -2
arch/arm64/configs/defconfig
··· 35 35 CONFIG_ARCH_THUNDER=y 36 36 CONFIG_ARCH_VEXPRESS=y 37 37 CONFIG_ARCH_XGENE=y 38 + CONFIG_PCI=y 39 + CONFIG_PCI_MSI=y 40 + CONFIG_PCI_XGENE=y 38 41 CONFIG_SMP=y 39 42 CONFIG_PREEMPT=y 40 43 CONFIG_KSM=y ··· 55 52 CONFIG_IP_PNP_BOOTP=y 56 53 # CONFIG_INET_LRO is not set 57 54 # CONFIG_IPV6 is not set 55 + CONFIG_BPF_JIT=y 58 56 # CONFIG_WIRELESS is not set 59 57 CONFIG_NET_9P=y 60 58 CONFIG_NET_9P_VIRTIO=y ··· 69 65 CONFIG_BLK_DEV_SD=y 70 66 # CONFIG_SCSI_LOWLEVEL is not set 71 67 CONFIG_ATA=y 68 + CONFIG_SATA_AHCI=y 69 + CONFIG_SATA_AHCI_PLATFORM=y 72 70 CONFIG_AHCI_XGENE=y 73 - CONFIG_PHY_XGENE=y 74 71 CONFIG_PATA_PLATFORM=y 75 72 CONFIG_PATA_OF_PLATFORM=y 76 73 CONFIG_NETDEVICES=y 77 74 CONFIG_TUN=y 78 75 CONFIG_VIRTIO_NET=y 76 + CONFIG_NET_XGENE=y 79 77 CONFIG_SMC91X=y 80 78 CONFIG_SMSC911X=y 81 - CONFIG_NET_XGENE=y 82 79 # CONFIG_WLAN is not set 83 80 CONFIG_INPUT_EVDEV=y 84 81 # CONFIG_SERIO_SERPORT is not set ··· 92 87 CONFIG_SERIAL_OF_PLATFORM=y 93 88 CONFIG_VIRTIO_CONSOLE=y 94 89 # CONFIG_HW_RANDOM is not set 90 + # CONFIG_HMC_DRV is not set 91 + CONFIG_SPI=y 92 + CONFIG_SPI_PL022=y 93 + CONFIG_GPIO_PL061=y 94 + CONFIG_GPIO_XGENE=y 95 95 # CONFIG_HWMON is not set 96 96 CONFIG_REGULATOR=y 97 97 CONFIG_REGULATOR_FIXED_VOLTAGE=y ··· 107 97 # CONFIG_LOGO_LINUX_MONO is not set 108 98 # CONFIG_LOGO_LINUX_VGA16 is not set 109 99 CONFIG_USB=y 100 + CONFIG_USB_EHCI_HCD=y 101 + CONFIG_USB_EHCI_HCD_PLATFORM=y 110 102 CONFIG_USB_ISP1760_HCD=y 103 + CONFIG_USB_OHCI_HCD=y 104 + CONFIG_USB_OHCI_HCD_PLATFORM=y 111 105 CONFIG_USB_STORAGE=y 106 + CONFIG_USB_ULPI=y 112 107 CONFIG_MMC=y 113 108 CONFIG_MMC_ARMMMCI=y 109 + CONFIG_MMC_SDHCI=y 110 + CONFIG_MMC_SDHCI_PLTFM=y 111 + CONFIG_MMC_SPI=y 112 + CONFIG_RTC_CLASS=y 113 + CONFIG_RTC_DRV_EFI=y 114 + CONFIG_RTC_DRV_XGENE=y 114 115 CONFIG_VIRTIO_BALLOON=y 115 116 CONFIG_VIRTIO_MMIO=y 116 117 # CONFIG_IOMMU_SUPPORT is not set 118 + CONFIG_PHY_XGENE=y 117 119 CONFIG_EXT2_FS=y 118 120 CONFIG_EXT3_FS=y 119 121 # 
CONFIG_EXT3_DEFAULTS_TO_ORDERED is not set
+2
arch/arm64/include/asm/unistd32.h
··· 792 792 __SYSCALL(__NR_getrandom, sys_getrandom) 793 793 #define __NR_memfd_create 385 794 794 __SYSCALL(__NR_memfd_create, sys_memfd_create) 795 + #define __NR_bpf 386 796 + __SYSCALL(__NR_bpf, sys_bpf)
+1 -1
arch/arm64/kernel/psci.c
··· 528 528 if (WARN_ON_ONCE(!index)) 529 529 return -EINVAL; 530 530 531 - if (state->type == PSCI_POWER_STATE_TYPE_STANDBY) 531 + if (state[index - 1].type == PSCI_POWER_STATE_TYPE_STANDBY) 532 532 ret = psci_ops.cpu_suspend(state[index - 1], 0); 533 533 else 534 534 ret = __cpu_suspend(index, psci_suspend_finisher);
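The psci.c hunk fixes an indexing bug: `state->type` always inspects entry 0 of the per-CPU idle-state table, while the PSCI suspend index is 1-based, so the intended entry is `state[index - 1]`. A stand-alone sketch of the difference (the struct and enum here are hypothetical stand-ins, not the kernel's psci_power_state):

```c
#include <assert.h>

enum ps_type { PS_TYPE_STANDBY, PS_TYPE_POWERDOWN };
struct ps_state { enum ps_type type; };

/* Buggy form: state->type is state[0].type regardless of index. */
static enum ps_type suspend_type_buggy(const struct ps_state *state, int index)
{
	(void)index;		/* index is (wrongly) ignored */
	return state->type;
}

/* Fixed form: the index is 1-based, so look at entry index - 1. */
static enum ps_type suspend_type_fixed(const struct ps_state *state, int index)
{
	return state[index - 1].type;
}
```

With a table whose deeper states are powerdown states, the buggy form would take the standby path for every index.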
+1 -1
arch/m68k/include/asm/unistd.h
··· 4 4 #include <uapi/asm/unistd.h> 5 5 6 6 7 - #define NR_syscalls 354 7 + #define NR_syscalls 355 8 8 9 9 #define __ARCH_WANT_OLD_READDIR 10 10 #define __ARCH_WANT_OLD_STAT
+1
arch/m68k/include/uapi/asm/unistd.h
··· 359 359 #define __NR_renameat2 351 360 360 #define __NR_getrandom 352 361 361 #define __NR_memfd_create 353 362 + #define __NR_bpf 354 362 363 363 364 #endif /* _UAPI_ASM_M68K_UNISTD_H_ */
+1
arch/m68k/kernel/syscalltable.S
··· 374 374 .long sys_renameat2 375 375 .long sys_getrandom 376 376 .long sys_memfd_create 377 + .long sys_bpf 377 378
+9
arch/mips/Makefile
··· 93 93 KBUILD_AFLAGS_MODULE += -mlong-calls 94 94 KBUILD_CFLAGS_MODULE += -mlong-calls 95 95 96 + # 97 + # pass -msoft-float to GAS if it supports it. However on newer binutils 98 + # (specifically newer than 2.24.51.20140728) we then also need to explicitly 99 + # set ".set hardfloat" in all files which manipulate floating point registers. 100 + # 101 + ifneq ($(call as-option,-Wa$(comma)-msoft-float,),) 102 + cflags-y += -DGAS_HAS_SET_HARDFLOAT -Wa,-msoft-float 103 + endif 104 + 96 105 cflags-y += -ffreestanding 97 106 98 107 #
+2
arch/mips/cavium-octeon/octeon-irq.c
··· 809 809 .irq_set_type = octeon_irq_ciu_gpio_set_type, 810 810 #ifdef CONFIG_SMP 811 811 .irq_set_affinity = octeon_irq_ciu_set_affinity_v2, 812 + .irq_cpu_offline = octeon_irq_cpu_offline_ciu, 812 813 #endif 813 814 .flags = IRQCHIP_SET_TYPE_MASKED, 814 815 }; ··· 824 823 .irq_set_type = octeon_irq_ciu_gpio_set_type, 825 824 #ifdef CONFIG_SMP 826 825 .irq_set_affinity = octeon_irq_ciu_set_affinity, 826 + .irq_cpu_offline = octeon_irq_cpu_offline_ciu, 827 827 #endif 828 828 .flags = IRQCHIP_SET_TYPE_MASKED, 829 829 };
+6
arch/mips/include/asm/asmmacro-32.h
··· 13 13 #include <asm/mipsregs.h> 14 14 15 15 .macro fpu_save_single thread tmp=t0 16 + .set push 17 + SET_HARDFLOAT 16 18 cfc1 \tmp, fcr31 17 19 swc1 $f0, THREAD_FPR0_LS64(\thread) 18 20 swc1 $f1, THREAD_FPR1_LS64(\thread) ··· 49 47 swc1 $f30, THREAD_FPR30_LS64(\thread) 50 48 swc1 $f31, THREAD_FPR31_LS64(\thread) 51 49 sw \tmp, THREAD_FCR31(\thread) 50 + .set pop 52 51 .endm 53 52 54 53 .macro fpu_restore_single thread tmp=t0 54 + .set push 55 + SET_HARDFLOAT 55 56 lw \tmp, THREAD_FCR31(\thread) 56 57 lwc1 $f0, THREAD_FPR0_LS64(\thread) 57 58 lwc1 $f1, THREAD_FPR1_LS64(\thread) ··· 89 84 lwc1 $f30, THREAD_FPR30_LS64(\thread) 90 85 lwc1 $f31, THREAD_FPR31_LS64(\thread) 91 86 ctc1 \tmp, fcr31 87 + .set pop 92 88 .endm 93 89 94 90 .macro cpu_save_nonscratch thread
+18
arch/mips/include/asm/asmmacro.h
··· 57 57 #endif /* CONFIG_CPU_MIPSR2 */ 58 58 59 59 .macro fpu_save_16even thread tmp=t0 60 + .set push 61 + SET_HARDFLOAT 60 62 cfc1 \tmp, fcr31 61 63 sdc1 $f0, THREAD_FPR0_LS64(\thread) 62 64 sdc1 $f2, THREAD_FPR2_LS64(\thread) ··· 77 75 sdc1 $f28, THREAD_FPR28_LS64(\thread) 78 76 sdc1 $f30, THREAD_FPR30_LS64(\thread) 79 77 sw \tmp, THREAD_FCR31(\thread) 78 + .set pop 80 79 .endm 81 80 82 81 .macro fpu_save_16odd thread 83 82 .set push 84 83 .set mips64r2 84 + SET_HARDFLOAT 85 85 sdc1 $f1, THREAD_FPR1_LS64(\thread) 86 86 sdc1 $f3, THREAD_FPR3_LS64(\thread) 87 87 sdc1 $f5, THREAD_FPR5_LS64(\thread) ··· 114 110 .endm 115 111 116 112 .macro fpu_restore_16even thread tmp=t0 113 + .set push 114 + SET_HARDFLOAT 117 115 lw \tmp, THREAD_FCR31(\thread) 118 116 ldc1 $f0, THREAD_FPR0_LS64(\thread) 119 117 ldc1 $f2, THREAD_FPR2_LS64(\thread) ··· 139 133 .macro fpu_restore_16odd thread 140 134 .set push 141 135 .set mips64r2 136 + SET_HARDFLOAT 142 137 ldc1 $f1, THREAD_FPR1_LS64(\thread) 143 138 ldc1 $f3, THREAD_FPR3_LS64(\thread) 144 139 ldc1 $f5, THREAD_FPR5_LS64(\thread) ··· 284 277 .macro cfcmsa rd, cs 285 278 .set push 286 279 .set noat 280 + SET_HARDFLOAT 287 281 .insn 288 282 .word CFC_MSA_INSN | (\cs << 11) 289 283 move \rd, $1 ··· 294 286 .macro ctcmsa cd, rs 295 287 .set push 296 288 .set noat 289 + SET_HARDFLOAT 297 290 move $1, \rs 298 291 .word CTC_MSA_INSN | (\cd << 6) 299 292 .set pop ··· 303 294 .macro ld_d wd, off, base 304 295 .set push 305 296 .set noat 297 + SET_HARDFLOAT 306 298 add $1, \base, \off 307 299 .word LDD_MSA_INSN | (\wd << 6) 308 300 .set pop ··· 312 302 .macro st_d wd, off, base 313 303 .set push 314 304 .set noat 305 + SET_HARDFLOAT 315 306 add $1, \base, \off 316 307 .word STD_MSA_INSN | (\wd << 6) 317 308 .set pop ··· 321 310 .macro copy_u_w rd, ws, n 322 311 .set push 323 312 .set noat 313 + SET_HARDFLOAT 324 314 .insn 325 315 .word COPY_UW_MSA_INSN | (\n << 16) | (\ws << 11) 326 316 /* move triggers an assembler bug... 
*/ ··· 332 320 .macro copy_u_d rd, ws, n 333 321 .set push 334 322 .set noat 323 + SET_HARDFLOAT 335 324 .insn 336 325 .word COPY_UD_MSA_INSN | (\n << 16) | (\ws << 11) 337 326 /* move triggers an assembler bug... */ ··· 343 330 .macro insert_w wd, n, rs 344 331 .set push 345 332 .set noat 333 + SET_HARDFLOAT 346 334 /* move triggers an assembler bug... */ 347 335 or $1, \rs, zero 348 336 .word INSERT_W_MSA_INSN | (\n << 16) | (\wd << 6) ··· 353 339 .macro insert_d wd, n, rs 354 340 .set push 355 341 .set noat 342 + SET_HARDFLOAT 356 343 /* move triggers an assembler bug... */ 357 344 or $1, \rs, zero 358 345 .word INSERT_D_MSA_INSN | (\n << 16) | (\wd << 6) ··· 396 381 st_d 31, THREAD_FPR31, \thread 397 382 .set push 398 383 .set noat 384 + SET_HARDFLOAT 399 385 cfcmsa $1, MSA_CSR 400 386 sw $1, THREAD_MSA_CSR(\thread) 401 387 .set pop ··· 405 389 .macro msa_restore_all thread 406 390 .set push 407 391 .set noat 392 + SET_HARDFLOAT 408 393 lw $1, THREAD_MSA_CSR(\thread) 409 394 ctcmsa MSA_CSR, $1 410 395 .set pop ··· 458 441 .macro msa_init_all_upper 459 442 .set push 460 443 .set noat 444 + SET_HARDFLOAT 461 445 not $1, zero 462 446 msa_init_upper 0 463 447 .set pop
+14
arch/mips/include/asm/fpregdef.h
··· 14 14 15 15 #include <asm/sgidefs.h> 16 16 17 + /* 18 + * starting with binutils 2.24.51.20140729, MIPS binutils warn about mixing 19 + * hardfloat and softfloat object files. The kernel build uses soft-float by 20 + * default, so we also need to pass -msoft-float along to GAS if it supports it. 21 + * But this in turn causes assembler errors in files which access hardfloat 22 + * registers. We detect if GAS supports "-msoft-float" in the Makefile and 23 + * explicitly put ".set hardfloat" where floating point registers are touched. 24 + */ 25 + #ifdef GAS_HAS_SET_HARDFLOAT 26 + #define SET_HARDFLOAT .set hardfloat 27 + #else 28 + #define SET_HARDFLOAT 29 + #endif 30 + 17 31 #if _MIPS_SIM == _MIPS_SIM_ABI32 18 32 19 33 /*
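The new SET_HARDFLOAT macro expands to a real `.set hardfloat` directive only when the Makefile probe defined GAS_HAS_SET_HARDFLOAT, and to nothing on older assemblers. The same compile-time switch can be sketched in C with string macros (the define is forced on here purely for illustration; in the kernel it comes from the as-option probe in arch/mips/Makefile):

```c
#include <assert.h>
#include <string.h>

#define GAS_HAS_SET_HARDFLOAT	/* assumption: the assembler probe succeeded */

#ifdef GAS_HAS_SET_HARDFLOAT
#define SET_HARDFLOAT "\t.set\thardfloat\n"
#else
#define SET_HARDFLOAT ""	/* old GAS: directive omitted entirely */
#endif

/* In inline asm the directive would be spliced in front of an FP access,
 * e.g. asm(SET_HARDFLOAT "cfc1 %0, $31" : "=r"(val)); on a MIPS target. */
static const char *hardfloat_directive(void)
{
	return SET_HARDFLOAT;
}
```

Either expansion assembles cleanly, which is the point: files touching FP registers opt back into hardfloat only where the assembler understands the directive.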
+2 -2
arch/mips/include/asm/fpu.h
··· 145 145 if (is_msa_enabled()) { 146 146 if (save) { 147 147 save_msa(current); 148 - asm volatile("cfc1 %0, $31" 149 - : "=r"(current->thread.fpu.fcr31)); 148 + current->thread.fpu.fcr31 = 149 + read_32bit_cp1_register(CP1_STATUS); 150 150 } 151 151 disable_msa(); 152 152 clear_thread_flag(TIF_USEDMSA);
+10 -1
arch/mips/include/asm/mipsregs.h
··· 1324 1324 /* 1325 1325 * Macros to access the floating point coprocessor control registers 1326 1326 */ 1327 - #define read_32bit_cp1_register(source) \ 1327 + #define _read_32bit_cp1_register(source, gas_hardfloat) \ 1328 1328 ({ \ 1329 1329 int __res; \ 1330 1330 \ ··· 1334 1334 " # gas fails to assemble cfc1 for some archs, \n" \ 1335 1335 " # like Octeon. \n" \ 1336 1336 " .set mips1 \n" \ 1337 + " "STR(gas_hardfloat)" \n" \ 1337 1338 " cfc1 %0,"STR(source)" \n" \ 1338 1339 " .set pop \n" \ 1339 1340 : "=r" (__res)); \ 1340 1341 __res; \ 1341 1342 }) 1343 + 1344 + #ifdef GAS_HAS_SET_HARDFLOAT 1345 + #define read_32bit_cp1_register(source) \ 1346 + _read_32bit_cp1_register(source, .set hardfloat) 1347 + #else 1348 + #define read_32bit_cp1_register(source) \ 1349 + _read_32bit_cp1_register(source, ) 1350 + #endif 1342 1351 1343 1352 #ifdef HAVE_AS_DSP 1344 1353 #define rddsp(mask) \

+9 -6
arch/mips/include/uapi/asm/unistd.h
··· 375 375 #define __NR_seccomp (__NR_Linux + 352)
376 376 #define __NR_getrandom (__NR_Linux + 353)
377 377 #define __NR_memfd_create (__NR_Linux + 354)
378 + #define __NR_bpf (__NR_Linux + 355)
378 379
379 380 /*
380 381 * Offset of the last Linux o32 flavoured syscall
381 382 */
382 - #define __NR_Linux_syscalls 354
383 + #define __NR_Linux_syscalls 355
383 384
384 385 #endif /* _MIPS_SIM == _MIPS_SIM_ABI32 */
385 386
386 387 #define __NR_O32_Linux 4000
387 - #define __NR_O32_Linux_syscalls 354
388 + #define __NR_O32_Linux_syscalls 355
388 389
389 390 #if _MIPS_SIM == _MIPS_SIM_ABI64
390 391
··· 708 707 #define __NR_seccomp (__NR_Linux + 312)
709 708 #define __NR_getrandom (__NR_Linux + 313)
710 709 #define __NR_memfd_create (__NR_Linux + 314)
710 + #define __NR_bpf (__NR_Linux + 315)
711 711
712 712 /*
713 713 * Offset of the last Linux 64-bit flavoured syscall
714 714 */
715 - #define __NR_Linux_syscalls 314
715 + #define __NR_Linux_syscalls 315
716 716
717 717 #endif /* _MIPS_SIM == _MIPS_SIM_ABI64 */
718 718
719 719 #define __NR_64_Linux 5000
720 - #define __NR_64_Linux_syscalls 314
720 + #define __NR_64_Linux_syscalls 315
721 721
722 722 #if _MIPS_SIM == _MIPS_SIM_NABI32
723 723
··· 1045 1043 #define __NR_seccomp (__NR_Linux + 316)
1046 1044 #define __NR_getrandom (__NR_Linux + 317)
1047 1045 #define __NR_memfd_create (__NR_Linux + 318)
1046 + #define __NR_bpf (__NR_Linux + 319)
1048 1047
1049 1048 /*
1050 1049 * Offset of the last N32 flavoured syscall
1051 1050 */
1052 - #define __NR_Linux_syscalls 318
1051 + #define __NR_Linux_syscalls 319
1053 1052
1054 1053 #endif /* _MIPS_SIM == _MIPS_SIM_NABI32 */
1055 1054
1056 1055 #define __NR_N32_Linux 6000
1057 - #define __NR_N32_Linux_syscalls 318
1056 + #define __NR_N32_Linux_syscalls 319
1058 1057
1059 1058 #endif /* _UAPI_ASM_UNISTD_H */
+2 -6
arch/mips/kernel/branch.c
··· 144 144 case mm_bc1t_op: 145 145 preempt_disable(); 146 146 if (is_fpu_owner()) 147 - asm volatile("cfc1\t%0,$31" : "=r" (fcr31)); 147 + fcr31 = read_32bit_cp1_register(CP1_STATUS); 148 148 else 149 149 fcr31 = current->thread.fpu.fcr31; 150 150 preempt_enable(); ··· 562 562 case cop1_op: 563 563 preempt_disable(); 564 564 if (is_fpu_owner()) 565 - asm volatile( 566 - ".set push\n" 567 - "\t.set mips1\n" 568 - "\tcfc1\t%0,$31\n" 569 - "\t.set pop" : "=r" (fcr31)); 565 + fcr31 = read_32bit_cp1_register(CP1_STATUS); 570 566 else 571 567 fcr31 = current->thread.fpu.fcr31; 572 568 preempt_enable();
+1
arch/mips/kernel/genex.S
··· 358 358 .set push 359 359 /* gas fails to assemble cfc1 for some archs (octeon).*/ \ 360 360 .set mips1 361 + SET_HARDFLOAT 361 362 cfc1 a1, fcr31 362 363 li a2, ~(0x3f << 12) 363 364 and a2, a1
+6
arch/mips/kernel/r2300_fpu.S
··· 28 28 .set mips1 29 29 /* Save floating point context */ 30 30 LEAF(_save_fp_context) 31 + .set push 32 + SET_HARDFLOAT 31 33 li v0, 0 # assume success 32 34 cfc1 t1,fcr31 33 35 EX(swc1 $f0,(SC_FPREGS+0)(a0)) ··· 67 65 EX(sw t1,(SC_FPC_CSR)(a0)) 68 66 cfc1 t0,$0 # implementation/version 69 67 jr ra 68 + .set pop 70 69 .set nomacro 71 70 EX(sw t0,(SC_FPC_EIR)(a0)) 72 71 .set macro ··· 83 80 * stack frame which might have been changed by the user. 84 81 */ 85 82 LEAF(_restore_fp_context) 83 + .set push 84 + SET_HARDFLOAT 86 85 li v0, 0 # assume success 87 86 EX(lw t0,(SC_FPC_CSR)(a0)) 88 87 EX(lwc1 $f0,(SC_FPREGS+0)(a0)) ··· 121 116 EX(lwc1 $f31,(SC_FPREGS+248)(a0)) 122 117 jr ra 123 118 ctc1 t0,fcr31 119 + .set pop 124 120 END(_restore_fp_context) 125 121 .set reorder 126 122
+5
arch/mips/kernel/r2300_switch.S
··· 120 120 121 121 #define FPU_DEFAULT 0x00000000 122 122 123 + .set push 124 + SET_HARDFLOAT 125 + 123 126 LEAF(_init_fpu) 124 127 mfc0 t0, CP0_STATUS 125 128 li t1, ST0_CU1 ··· 168 165 mtc1 t0, $f31 169 166 jr ra 170 167 END(_init_fpu) 168 + 169 + .set pop
+25 -2
arch/mips/kernel/r4k_fpu.S
··· 19 19 #include <asm/asm-offsets.h> 20 20 #include <asm/regdef.h> 21 21 22 + /* preprocessor replaces the fp in ".set fp=64" with $30 otherwise */ 23 + #undef fp 24 + 22 25 .macro EX insn, reg, src 23 26 .set push 27 + SET_HARDFLOAT 24 28 .set nomacro 25 29 .ex\@: \insn \reg, \src 26 30 .set pop ··· 37 33 .set arch=r4000 38 34 39 35 LEAF(_save_fp_context) 36 + .set push 37 + SET_HARDFLOAT 40 38 cfc1 t1, fcr31 39 + .set pop 41 40 42 41 #if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) 43 42 .set push 43 + SET_HARDFLOAT 44 44 #ifdef CONFIG_CPU_MIPS32_R2 45 - .set mips64r2 45 + .set mips32r2 46 + .set fp=64 46 47 mfc0 t0, CP0_STATUS 47 48 sll t0, t0, 5 48 49 bgez t0, 1f # skip storing odd if FR=0 ··· 73 64 1: .set pop 74 65 #endif 75 66 67 + .set push 68 + SET_HARDFLOAT 76 69 /* Store the 16 even double precision registers */ 77 70 EX sdc1 $f0, SC_FPREGS+0(a0) 78 71 EX sdc1 $f2, SC_FPREGS+16(a0) ··· 95 84 EX sw t1, SC_FPC_CSR(a0) 96 85 jr ra 97 86 li v0, 0 # success 87 + .set pop 98 88 END(_save_fp_context) 99 89 100 90 #ifdef CONFIG_MIPS32_COMPAT 101 91 /* Save 32-bit process floating point context */ 102 92 LEAF(_save_fp_context32) 93 + .set push 94 + SET_HARDFLOAT 103 95 cfc1 t1, fcr31 104 96 105 97 mfc0 t0, CP0_STATUS ··· 148 134 EX sw t1, SC32_FPC_CSR(a0) 149 135 cfc1 t0, $0 # implementation/version 150 136 EX sw t0, SC32_FPC_EIR(a0) 137 + .set pop 151 138 152 139 jr ra 153 140 li v0, 0 # success ··· 165 150 166 151 #if defined(CONFIG_64BIT) || defined(CONFIG_CPU_MIPS32_R2) 167 152 .set push 153 + SET_HARDFLOAT 168 154 #ifdef CONFIG_CPU_MIPS32_R2 169 - .set mips64r2 155 + .set mips32r2 156 + .set fp=64 170 157 mfc0 t0, CP0_STATUS 171 158 sll t0, t0, 5 172 159 bgez t0, 1f # skip loading odd if FR=0 ··· 192 175 EX ldc1 $f31, SC_FPREGS+248(a0) 193 176 1: .set pop 194 177 #endif 178 + .set push 179 + SET_HARDFLOAT 195 180 EX ldc1 $f0, SC_FPREGS+0(a0) 196 181 EX ldc1 $f2, SC_FPREGS+16(a0) 197 182 EX ldc1 $f4, SC_FPREGS+32(a0) ··· 211 192 EX ldc1 $f28, 
SC_FPREGS+224(a0) 212 193 EX ldc1 $f30, SC_FPREGS+240(a0) 213 194 ctc1 t1, fcr31 195 + .set pop 214 196 jr ra 215 197 li v0, 0 # success 216 198 END(_restore_fp_context) ··· 219 199 #ifdef CONFIG_MIPS32_COMPAT 220 200 LEAF(_restore_fp_context32) 221 201 /* Restore an o32 sigcontext. */ 202 + .set push 203 + SET_HARDFLOAT 222 204 EX lw t1, SC32_FPC_CSR(a0) 223 205 224 206 mfc0 t0, CP0_STATUS ··· 264 242 ctc1 t1, fcr31 265 243 jr ra 266 244 li v0, 0 # success 245 + .set pop 267 246 END(_restore_fp_context32) 268 247 #endif 269 248
+14 -1
arch/mips/kernel/r4k_switch.S
··· 22 22 23 23 #include <asm/asmmacro.h> 24 24 25 + /* preprocessor replaces the fp in ".set fp=64" with $30 otherwise */ 26 + #undef fp 27 + 25 28 /* 26 29 * Offset to the current process status flags, the first 32 bytes of the 27 30 * stack are not used. ··· 68 65 bgtz a3, 1f 69 66 70 67 /* Save 128b MSA vector context + scalar FP control & status. */ 68 + .set push 69 + SET_HARDFLOAT 71 70 cfc1 t1, fcr31 72 71 msa_save_all a0 72 + .set pop /* SET_HARDFLOAT */ 73 + 73 74 sw t1, THREAD_FCR31(a0) 74 75 b 2f 75 76 ··· 168 161 169 162 #define FPU_DEFAULT 0x00000000 170 163 164 + .set push 165 + SET_HARDFLOAT 166 + 171 167 LEAF(_init_fpu) 172 168 mfc0 t0, CP0_STATUS 173 169 li t1, ST0_CU1 ··· 242 232 243 233 #ifdef CONFIG_CPU_MIPS32_R2 244 234 .set push 245 - .set mips64r2 235 + .set mips32r2 236 + .set fp=64 246 237 sll t0, t0, 5 # is Status.FR set? 247 238 bgez t0, 1f # no: skip setting upper 32b 248 239 ··· 302 291 #endif 303 292 jr ra 304 293 END(_init_fpu) 294 + 295 + .set pop /* SET_HARDFLOAT */
+5
arch/mips/kernel/r6000_fpu.S
··· 18 18 19 19 .set noreorder 20 20 .set mips2 21 + .set push 22 + SET_HARDFLOAT 23 + 21 24 /* Save floating point context */ 22 25 LEAF(_save_fp_context) 23 26 mfc0 t0,CP0_STATUS ··· 88 85 1: jr ra 89 86 nop 90 87 END(_restore_fp_context) 88 + 89 + .set pop /* SET_HARDFLOAT */
+1
arch/mips/kernel/scall32-o32.S
··· 579 579 PTR sys_seccomp 580 580 PTR sys_getrandom 581 581 PTR sys_memfd_create 582 + PTR sys_bpf /* 4355 */
+1
arch/mips/kernel/scall64-64.S
··· 434 434 PTR sys_seccomp 435 435 PTR sys_getrandom 436 436 PTR sys_memfd_create 437 + PTR sys_bpf /* 5315 */ 437 438 .size sys_call_table,.-sys_call_table
+1
arch/mips/kernel/scall64-n32.S
··· 427 427 PTR sys_seccomp 428 428 PTR sys_getrandom 429 429 PTR sys_memfd_create 430 + PTR sys_bpf 430 431 .size sysn32_call_table,.-sysn32_call_table
+1
arch/mips/kernel/scall64-o32.S
··· 564 564 PTR sys_seccomp 565 565 PTR sys_getrandom 566 566 PTR sys_memfd_create 567 + PTR sys_bpf /* 4355 */ 567 568 .size sys32_call_table,.-sys32_call_table
+2 -1
arch/mips/kernel/setup.c
··· 683 683 dma_contiguous_reserve(PFN_PHYS(max_low_pfn)); 684 684 /* Tell bootmem about cma reserved memblock section */ 685 685 for_each_memblock(reserved, reg) 686 - reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT); 686 + if (reg->size != 0) 687 + reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT); 687 688 } 688 689 689 690 static void __init resource_init(void)
+2 -2
arch/mips/lib/r3k_dump_tlb.c
··· 34 34 entrylo0 = read_c0_entrylo0(); 35 35 36 36 /* Unused entries have a virtual address of KSEG0. */ 37 - if ((entryhi & 0xffffe000) != 0x80000000 37 + if ((entryhi & 0xfffff000) != 0x80000000 38 38 && (entryhi & 0xfc0) == asid) { 39 39 /* 40 40 * Only print entries in use ··· 43 43 44 44 printk("va=%08lx asid=%08lx" 45 45 " [pa=%06lx n=%d d=%d v=%d g=%d]", 46 - (entryhi & 0xffffe000), 46 + (entryhi & 0xfffff000), 47 47 entryhi & 0xfc0, 48 48 entrylo0 & PAGE_MASK, 49 49 (entrylo0 & (1 << 11)) ? 1 : 0,
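The r3k_dump_tlb.c hunk widens the EntryHi mask from 0xffffe000 to 0xfffff000: the R3000 uses fixed 4KB pages, so the virtual page number occupies bits 31:12 and the old mask (bits 31:13) silently dropped bit 12 of the dumped address. A quick sketch (mask values from the patch, the sample address is hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define R3K_VPN_MASK_OLD 0xffffe000u	/* bits 31:13 - loses bit 12 */
#define R3K_VPN_MASK_NEW 0xfffff000u	/* bits 31:12 - 4KB page frames */

/* Extract the virtual address of a TLB entry from EntryHi, as the
 * corrected dump code does. */
static uint32_t r3k_tlb_vaddr(uint32_t entryhi)
{
	return entryhi & R3K_VPN_MASK_NEW;
}
```

With the old mask, an entry mapping 0x80001000 would have been printed as 0x80000000 and could also be mistaken for the unused-entry sentinel (KSEG0 base).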
+4 -2
arch/mips/lib/strnlen_user.S
··· 40 40 .else 41 41 EX(lbe, t0, (v0), .Lfault\@) 42 42 .endif 43 - PTR_ADDIU v0, 1 43 + .set noreorder 44 44 bnez t0, 1b 45 - 1: PTR_SUBU v0, a0 45 + 1: PTR_ADDIU v0, 1 46 + .set reorder 47 + PTR_SUBU v0, a0 46 48 jr ra 47 49 END(__strnlen_\func\()_asm) 48 50
+1 -5
arch/mips/math-emu/cp1emu.c
··· 584 584 if (insn.i_format.rs == bc_op) { 585 585 preempt_disable(); 586 586 if (is_fpu_owner()) 587 - asm volatile( 588 - ".set push\n" 589 - "\t.set mips1\n" 590 - "\tcfc1\t%0,$31\n" 591 - "\t.set pop" : "=r" (fcr31)); 587 + fcr31 = read_32bit_cp1_register(CP1_STATUS); 592 588 else 593 589 fcr31 = current->thread.fpu.fcr31; 594 590 preempt_enable();
+1 -3
arch/mips/pci/msi-xlp.c
··· 443 443 msg.data = 0xc00 | msixvec; 444 444 445 445 ret = irq_set_msi_desc(xirq, desc); 446 - if (ret < 0) { 447 - destroy_irq(xirq); 446 + if (ret < 0) 448 447 return ret; 449 - } 450 448 451 449 write_msi_msg(xirq, &msg); 452 450 return 0;
+26 -26
arch/powerpc/include/asm/fadump.h
··· 70 70 #define CPU_UNKNOWN (~((u32)0)) 71 71 72 72 /* Utility macros */ 73 - #define SKIP_TO_NEXT_CPU(reg_entry) \ 74 - ({ \ 75 - while (reg_entry->reg_id != REG_ID("CPUEND")) \ 76 - reg_entry++; \ 77 - reg_entry++; \ 73 + #define SKIP_TO_NEXT_CPU(reg_entry) \ 74 + ({ \ 75 + while (be64_to_cpu(reg_entry->reg_id) != REG_ID("CPUEND")) \ 76 + reg_entry++; \ 77 + reg_entry++; \ 78 78 }) 79 79 80 80 /* Kernel Dump section info */ 81 81 struct fadump_section { 82 - u32 request_flag; 83 - u16 source_data_type; 84 - u16 error_flags; 85 - u64 source_address; 86 - u64 source_len; 87 - u64 bytes_dumped; 88 - u64 destination_address; 82 + __be32 request_flag; 83 + __be16 source_data_type; 84 + __be16 error_flags; 85 + __be64 source_address; 86 + __be64 source_len; 87 + __be64 bytes_dumped; 88 + __be64 destination_address; 89 89 }; 90 90 91 91 /* ibm,configure-kernel-dump header. */ 92 92 struct fadump_section_header { 93 - u32 dump_format_version; 94 - u16 dump_num_sections; 95 - u16 dump_status_flag; 96 - u32 offset_first_dump_section; 93 + __be32 dump_format_version; 94 + __be16 dump_num_sections; 95 + __be16 dump_status_flag; 96 + __be32 offset_first_dump_section; 97 97 98 98 /* Fields for disk dump option. */ 99 - u32 dd_block_size; 100 - u64 dd_block_offset; 101 - u64 dd_num_blocks; 102 - u32 dd_offset_disk_path; 99 + __be32 dd_block_size; 100 + __be64 dd_block_offset; 101 + __be64 dd_num_blocks; 102 + __be32 dd_offset_disk_path; 103 103 104 104 /* Maximum time allowed to prevent an automatic dump-reboot. */ 105 - u32 max_time_auto; 105 + __be32 max_time_auto; 106 106 }; 107 107 108 108 /* ··· 174 174 175 175 /* Register save area header. */ 176 176 struct fadump_reg_save_area_header { 177 - u64 magic_number; 178 - u32 version; 179 - u32 num_cpu_offset; 177 + __be64 magic_number; 178 + __be32 version; 179 + __be32 num_cpu_offset; 180 180 }; 181 181 182 182 /* Register entry. 
*/ 183 183 struct fadump_reg_entry { 184 - u64 reg_id; 185 - u64 reg_value; 184 + __be64 reg_id; 185 + __be64 reg_value; 186 186 }; 187 187 188 188 /* fadump crash info structure */
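The fadump.h structures switch from plain u32/u64 to __be32/__be64 because the firmware-side dump header is big-endian while the kernel may run little-endian; every access therefore needs an explicit be*_to_cpu/cpu_to_be* conversion, which is what the fadump.c hunk applies throughout. A user-space sketch of the idea on a little-endian host (my_be64_to_cpu is a hypothetical stand-in for the kernel's be64_to_cpu):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the kernel's be64_to_cpu on a little-endian host: an
 * unconditional byte swap. On a big-endian kernel the real helper is a
 * no-op, which is exactly why the typed __be64 fields plus accessors are
 * needed instead of raw u64 reads. */
static uint64_t my_be64_to_cpu(uint64_t v)
{
	uint64_t r = 0;
	int i;

	for (i = 0; i < 8; i++)
		r = (r << 8) | ((v >> (8 * i)) & 0xff);	/* reverse byte order */
	return r;
}
```

Reading a __be64 field without the conversion would yield byte-swapped region addresses and lengths on little-endian PowerPC, corrupting the dump layout calculations.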
+6
arch/powerpc/kernel/entry_64.S
··· 659 659 3: 660 660 #endif 661 661 bl save_nvgprs 662 + /* 663 + * Use a non volatile GPR to save and restore our thread_info flags 664 + * across the call to restore_interrupts. 665 + */ 666 + mr r30,r4 662 667 bl restore_interrupts 668 + mr r4,r30 663 669 addi r3,r1,STACK_FRAME_OVERHEAD 664 670 bl do_notify_resume 665 671 b ret_from_except
+57 -57
arch/powerpc/kernel/fadump.c
··· 58 58 const __be32 *sections; 59 59 int i, num_sections; 60 60 int size; 61 - const int *token; 61 + const __be32 *token; 62 62 63 63 if (depth != 1 || strcmp(uname, "rtas") != 0) 64 64 return 0; ··· 72 72 return 1; 73 73 74 74 fw_dump.fadump_supported = 1; 75 - fw_dump.ibm_configure_kernel_dump = *token; 75 + fw_dump.ibm_configure_kernel_dump = be32_to_cpu(*token); 76 76 77 77 /* 78 78 * The 'ibm,kernel-dump' rtas node is present only if there is ··· 147 147 memset(fdm, 0, sizeof(struct fadump_mem_struct)); 148 148 addr = addr & PAGE_MASK; 149 149 150 - fdm->header.dump_format_version = 0x00000001; 151 - fdm->header.dump_num_sections = 3; 150 + fdm->header.dump_format_version = cpu_to_be32(0x00000001); 151 + fdm->header.dump_num_sections = cpu_to_be16(3); 152 152 fdm->header.dump_status_flag = 0; 153 153 fdm->header.offset_first_dump_section = 154 - (u32)offsetof(struct fadump_mem_struct, cpu_state_data); 154 + cpu_to_be32((u32)offsetof(struct fadump_mem_struct, cpu_state_data)); 155 155 156 156 /* 157 157 * Fields for disk dump option. ··· 167 167 168 168 /* Kernel dump sections */ 169 169 /* cpu state data section. 
*/ 170 - fdm->cpu_state_data.request_flag = FADUMP_REQUEST_FLAG; 171 - fdm->cpu_state_data.source_data_type = FADUMP_CPU_STATE_DATA; 170 + fdm->cpu_state_data.request_flag = cpu_to_be32(FADUMP_REQUEST_FLAG); 171 + fdm->cpu_state_data.source_data_type = cpu_to_be16(FADUMP_CPU_STATE_DATA); 172 172 fdm->cpu_state_data.source_address = 0; 173 - fdm->cpu_state_data.source_len = fw_dump.cpu_state_data_size; 174 - fdm->cpu_state_data.destination_address = addr; 173 + fdm->cpu_state_data.source_len = cpu_to_be64(fw_dump.cpu_state_data_size); 174 + fdm->cpu_state_data.destination_address = cpu_to_be64(addr); 175 175 addr += fw_dump.cpu_state_data_size; 176 176 177 177 /* hpte region section */ 178 - fdm->hpte_region.request_flag = FADUMP_REQUEST_FLAG; 179 - fdm->hpte_region.source_data_type = FADUMP_HPTE_REGION; 178 + fdm->hpte_region.request_flag = cpu_to_be32(FADUMP_REQUEST_FLAG); 179 + fdm->hpte_region.source_data_type = cpu_to_be16(FADUMP_HPTE_REGION); 180 180 fdm->hpte_region.source_address = 0; 181 - fdm->hpte_region.source_len = fw_dump.hpte_region_size; 182 - fdm->hpte_region.destination_address = addr; 181 + fdm->hpte_region.source_len = cpu_to_be64(fw_dump.hpte_region_size); 182 + fdm->hpte_region.destination_address = cpu_to_be64(addr); 183 183 addr += fw_dump.hpte_region_size; 184 184 185 185 /* RMA region section */ 186 - fdm->rmr_region.request_flag = FADUMP_REQUEST_FLAG; 187 - fdm->rmr_region.source_data_type = FADUMP_REAL_MODE_REGION; 188 - fdm->rmr_region.source_address = RMA_START; 189 - fdm->rmr_region.source_len = fw_dump.boot_memory_size; 190 - fdm->rmr_region.destination_address = addr; 186 + fdm->rmr_region.request_flag = cpu_to_be32(FADUMP_REQUEST_FLAG); 187 + fdm->rmr_region.source_data_type = cpu_to_be16(FADUMP_REAL_MODE_REGION); 188 + fdm->rmr_region.source_address = cpu_to_be64(RMA_START); 189 + fdm->rmr_region.source_len = cpu_to_be64(fw_dump.boot_memory_size); 190 + fdm->rmr_region.destination_address = cpu_to_be64(addr); 191 191 addr += 
fw_dump.boot_memory_size; 192 192 193 193 return addr; ··· 272 272 * first kernel. 273 273 */ 274 274 if (fdm_active) 275 - fw_dump.boot_memory_size = fdm_active->rmr_region.source_len; 275 + fw_dump.boot_memory_size = be64_to_cpu(fdm_active->rmr_region.source_len); 276 276 else 277 277 fw_dump.boot_memory_size = fadump_calculate_reserve_size(); 278 278 ··· 314 314 (unsigned long)(base >> 20)); 315 315 316 316 fw_dump.fadumphdr_addr = 317 - fdm_active->rmr_region.destination_address + 318 - fdm_active->rmr_region.source_len; 317 + be64_to_cpu(fdm_active->rmr_region.destination_address) + 318 + be64_to_cpu(fdm_active->rmr_region.source_len); 319 319 pr_debug("fadumphdr_addr = %p\n", 320 320 (void *) fw_dump.fadumphdr_addr); 321 321 } else { ··· 472 472 { 473 473 memset(regs, 0, sizeof(struct pt_regs)); 474 474 475 - while (reg_entry->reg_id != REG_ID("CPUEND")) { 476 - fadump_set_regval(regs, reg_entry->reg_id, 477 - reg_entry->reg_value); 475 + while (be64_to_cpu(reg_entry->reg_id) != REG_ID("CPUEND")) { 476 + fadump_set_regval(regs, be64_to_cpu(reg_entry->reg_id), 477 + be64_to_cpu(reg_entry->reg_value)); 478 478 reg_entry++; 479 479 } 480 480 reg_entry++; ··· 603 603 if (!fdm->cpu_state_data.bytes_dumped) 604 604 return -EINVAL; 605 605 606 - addr = fdm->cpu_state_data.destination_address; 606 + addr = be64_to_cpu(fdm->cpu_state_data.destination_address); 607 607 vaddr = __va(addr); 608 608 609 609 reg_header = vaddr; 610 - if (reg_header->magic_number != REGSAVE_AREA_MAGIC) { 610 + if (be64_to_cpu(reg_header->magic_number) != REGSAVE_AREA_MAGIC) { 611 611 printk(KERN_ERR "Unable to read register save area.\n"); 612 612 return -ENOENT; 613 613 } 614 614 pr_debug("--------CPU State Data------------\n"); 615 - pr_debug("Magic Number: %llx\n", reg_header->magic_number); 616 - pr_debug("NumCpuOffset: %x\n", reg_header->num_cpu_offset); 615 + pr_debug("Magic Number: %llx\n", be64_to_cpu(reg_header->magic_number)); 616 + pr_debug("NumCpuOffset: %x\n", 
be32_to_cpu(reg_header->num_cpu_offset)); 617 617 618 - vaddr += reg_header->num_cpu_offset; 619 - num_cpus = *((u32 *)(vaddr)); 618 + vaddr += be32_to_cpu(reg_header->num_cpu_offset); 619 + num_cpus = be32_to_cpu(*((__be32 *)(vaddr))); 620 620 pr_debug("NumCpus : %u\n", num_cpus); 621 621 vaddr += sizeof(u32); 622 622 reg_entry = (struct fadump_reg_entry *)vaddr; ··· 639 639 fdh = __va(fw_dump.fadumphdr_addr); 640 640 641 641 for (i = 0; i < num_cpus; i++) { 642 - if (reg_entry->reg_id != REG_ID("CPUSTRT")) { 642 + if (be64_to_cpu(reg_entry->reg_id) != REG_ID("CPUSTRT")) { 643 643 printk(KERN_ERR "Unable to read CPU state data\n"); 644 644 rc = -ENOENT; 645 645 goto error_out; 646 646 } 647 647 /* Lower 4 bytes of reg_value contains logical cpu id */ 648 - cpu = reg_entry->reg_value & FADUMP_CPU_ID_MASK; 648 + cpu = be64_to_cpu(reg_entry->reg_value) & FADUMP_CPU_ID_MASK; 649 649 if (fdh && !cpumask_test_cpu(cpu, &fdh->cpu_online_mask)) { 650 650 SKIP_TO_NEXT_CPU(reg_entry); 651 651 continue; ··· 692 692 return -EINVAL; 693 693 694 694 /* Check if the dump data is valid. */ 695 - if ((fdm_active->header.dump_status_flag == FADUMP_ERROR_FLAG) || 695 + if ((be16_to_cpu(fdm_active->header.dump_status_flag) == FADUMP_ERROR_FLAG) || 696 696 (fdm_active->cpu_state_data.error_flags != 0) || 697 697 (fdm_active->rmr_region.error_flags != 0)) { 698 698 printk(KERN_ERR "Dump taken by platform is not valid\n"); ··· 828 828 static inline unsigned long fadump_relocate(unsigned long paddr) 829 829 { 830 830 if (paddr > RMA_START && paddr < fw_dump.boot_memory_size) 831 - return fdm.rmr_region.destination_address + paddr; 831 + return be64_to_cpu(fdm.rmr_region.destination_address) + paddr; 832 832 else 833 833 return paddr; 834 834 } ··· 902 902 * to the specified destination_address. Hence set 903 903 * the correct offset. 
904 904 */ 905 - phdr->p_offset = fdm.rmr_region.destination_address; 905 + phdr->p_offset = be64_to_cpu(fdm.rmr_region.destination_address); 906 906 } 907 907 908 908 phdr->p_paddr = mbase; ··· 951 951 952 952 fadump_setup_crash_memory_ranges(); 953 953 954 - addr = fdm.rmr_region.destination_address + fdm.rmr_region.source_len; 954 + addr = be64_to_cpu(fdm.rmr_region.destination_address) + be64_to_cpu(fdm.rmr_region.source_len); 955 955 /* Initialize fadump crash info header. */ 956 956 addr = init_fadump_header(addr); 957 957 vaddr = __va(addr); ··· 1023 1023 /* Invalidate the registration only if dump is active. */ 1024 1024 if (fw_dump.dump_active) { 1025 1025 init_fadump_mem_struct(&fdm, 1026 - fdm_active->cpu_state_data.destination_address); 1026 + be64_to_cpu(fdm_active->cpu_state_data.destination_address)); 1027 1027 fadump_invalidate_dump(&fdm); 1028 1028 } 1029 1029 } ··· 1063 1063 return; 1064 1064 } 1065 1065 1066 - destination_address = fdm_active->cpu_state_data.destination_address; 1066 + destination_address = be64_to_cpu(fdm_active->cpu_state_data.destination_address); 1067 1067 fadump_cleanup(); 1068 1068 mutex_unlock(&fadump_mutex); 1069 1069 ··· 1183 1183 seq_printf(m, 1184 1184 "CPU : [%#016llx-%#016llx] %#llx bytes, " 1185 1185 "Dumped: %#llx\n", 1186 - fdm_ptr->cpu_state_data.destination_address, 1187 - fdm_ptr->cpu_state_data.destination_address + 1188 - fdm_ptr->cpu_state_data.source_len - 1, 1189 - fdm_ptr->cpu_state_data.source_len, 1190 - fdm_ptr->cpu_state_data.bytes_dumped); 1186 + be64_to_cpu(fdm_ptr->cpu_state_data.destination_address), 1187 + be64_to_cpu(fdm_ptr->cpu_state_data.destination_address) + 1188 + be64_to_cpu(fdm_ptr->cpu_state_data.source_len) - 1, 1189 + be64_to_cpu(fdm_ptr->cpu_state_data.source_len), 1190 + be64_to_cpu(fdm_ptr->cpu_state_data.bytes_dumped)); 1191 1191 seq_printf(m, 1192 1192 "HPTE: [%#016llx-%#016llx] %#llx bytes, " 1193 1193 "Dumped: %#llx\n", 1194 - fdm_ptr->hpte_region.destination_address, 1195 - 
fdm_ptr->hpte_region.destination_address + 1196 - fdm_ptr->hpte_region.source_len - 1, 1197 - fdm_ptr->hpte_region.source_len, 1198 - fdm_ptr->hpte_region.bytes_dumped); 1194 + be64_to_cpu(fdm_ptr->hpte_region.destination_address), 1195 + be64_to_cpu(fdm_ptr->hpte_region.destination_address) + 1196 + be64_to_cpu(fdm_ptr->hpte_region.source_len) - 1, 1197 + be64_to_cpu(fdm_ptr->hpte_region.source_len), 1198 + be64_to_cpu(fdm_ptr->hpte_region.bytes_dumped)); 1199 1199 seq_printf(m, 1200 1200 "DUMP: [%#016llx-%#016llx] %#llx bytes, " 1201 1201 "Dumped: %#llx\n", 1202 - fdm_ptr->rmr_region.destination_address, 1203 - fdm_ptr->rmr_region.destination_address + 1204 - fdm_ptr->rmr_region.source_len - 1, 1205 - fdm_ptr->rmr_region.source_len, 1206 - fdm_ptr->rmr_region.bytes_dumped); 1202 + be64_to_cpu(fdm_ptr->rmr_region.destination_address), 1203 + be64_to_cpu(fdm_ptr->rmr_region.destination_address) + 1204 + be64_to_cpu(fdm_ptr->rmr_region.source_len) - 1, 1205 + be64_to_cpu(fdm_ptr->rmr_region.source_len), 1206 + be64_to_cpu(fdm_ptr->rmr_region.bytes_dumped)); 1207 1207 1208 1208 if (!fdm_active || 1209 1209 (fw_dump.reserve_dump_area_start == 1210 - fdm_ptr->cpu_state_data.destination_address)) 1210 + be64_to_cpu(fdm_ptr->cpu_state_data.destination_address))) 1211 1211 goto out; 1212 1212 1213 1213 /* Dump is active. Show reserved memory region. 
*/ ··· 1215 1215 " : [%#016llx-%#016llx] %#llx bytes, " 1216 1216 "Dumped: %#llx\n", 1217 1217 (unsigned long long)fw_dump.reserve_dump_area_start, 1218 - fdm_ptr->cpu_state_data.destination_address - 1, 1219 - fdm_ptr->cpu_state_data.destination_address - 1218 + be64_to_cpu(fdm_ptr->cpu_state_data.destination_address) - 1, 1219 + be64_to_cpu(fdm_ptr->cpu_state_data.destination_address) - 1220 1220 fw_dump.reserve_dump_area_start, 1221 - fdm_ptr->cpu_state_data.destination_address - 1221 + be64_to_cpu(fdm_ptr->cpu_state_data.destination_address) - 1222 1222 fw_dump.reserve_dump_area_start); 1223 1223 out: 1224 1224 if (fdm_active)
+1 -1
arch/powerpc/mm/init_32.c
··· 103 103 /* 104 104 * Check for command-line options that affect what MMU_init will do. 105 105 */ 106 - void MMU_setup(void) 106 + void __init MMU_setup(void) 107 107 { 108 108 /* Check for nobats option (used in mapin_ram). */ 109 109 if (strstr(boot_command_line, "nobats")) {
+59
arch/powerpc/platforms/powernv/opal-lpc.c
··· 216 216 &data, len); 217 217 if (rc) 218 218 return -ENXIO; 219 + 220 + /* 221 + * Now there is some trickery with the data returned by OPAL 222 + * as it's the desired data right justified in a 32-bit BE 223 + * word. 224 + * 225 + * This is a very bad interface and I'm to blame for it :-( 226 + * 227 + * So we can't just apply a 32-bit swap to what comes from OPAL, 228 + * because user space expects the *bytes* to be in their proper 229 + * respective positions (ie, LPC position). 230 + * 231 + * So what we really want to do here is to shift data right 232 + * appropriately on a LE kernel. 233 + * 234 + * IE. If the LPC transaction has bytes B0, B1, B2 and B3 in that 235 + * order, we have in memory written to by OPAL at the "data" 236 + * pointer: 237 + * 238 + * Bytes: OPAL "data" LE "data" 239 + * 32-bit: B0 B1 B2 B3 B0B1B2B3 B3B2B1B0 240 + * 16-bit: B0 B1 0000B0B1 B1B00000 241 + * 8-bit: B0 000000B0 B0000000 242 + * 243 + * So a BE kernel will have the leftmost of the above in the MSB 244 + * and rightmost in the LSB and can just then "cast" the u32 "data" 245 + * down to the appropriate quantity and write it. 246 + * 247 + * However, an LE kernel can't. It doesn't need to swap because a 248 + * load from data followed by a store to user are going to preserve 249 + * the byte ordering which is the wire byte order which is what the 250 + * user wants, but in order to "crop" to the right size, we need to 251 + * shift right first. 
252 + */ 219 253 switch(len) { 220 254 case 4: 221 255 rc = __put_user((u32)data, (u32 __user *)ubuf); 222 256 break; 223 257 case 2: 258 + #ifdef __LITTLE_ENDIAN__ 259 + data >>= 16; 260 + #endif 224 261 rc = __put_user((u16)data, (u16 __user *)ubuf); 225 262 break; 226 263 default: 264 + #ifdef __LITTLE_ENDIAN__ 265 + data >>= 24; 266 + #endif 227 267 rc = __put_user((u8)data, (u8 __user *)ubuf); 228 268 break; 229 269 } ··· 303 263 else if (todo > 1 && (pos & 1) == 0) 304 264 len = 2; 305 265 } 266 + 267 + /* 268 + * Similarly to the read case, we have some trickery here but 269 + * it's different to handle. We need to pass the value to OPAL in 270 + * a register whose layout depends on the access size. We want 271 + * to reproduce the memory layout of the user, however we aren't 272 + * doing a load from user and a store to another memory location 273 + * which would achieve that. Here we pass the value to OPAL via 274 + * a register which is expected to contain the "BE" interpretation 275 + * of the byte sequence. IE: for a 32-bit access, byte 0 should be 276 + * in the MSB. So here we *do* need to byteswap on LE. 277 + * 278 + * User bytes: LE "data" OPAL "data" 279 + * 32-bit: B0 B1 B2 B3 B3B2B1B0 B0B1B2B3 280 + * 16-bit: B0 B1 0000B1B0 0000B0B1 281 + * 8-bit: B0 000000B0 000000B0 282 + */ 306 283 switch(len) { 307 284 case 4: 308 285 rc = __get_user(data, (u32 __user *)ubuf); 286 + data = cpu_to_be32(data); 309 287 break; 310 288 case 2: 311 289 rc = __get_user(data, (u16 __user *)ubuf); 290 + data = cpu_to_be16(data); 312 291 break; 313 292 default: 314 293 rc = __get_user(data, (u8 __user *)ubuf);
+2 -2
arch/powerpc/platforms/pseries/dlpar.c
··· 382 382 BUG_ON(get_cpu_current_state(cpu) 383 383 != CPU_STATE_OFFLINE); 384 384 cpu_maps_update_done(); 385 - rc = cpu_up(cpu); 385 + rc = device_online(get_cpu_device(cpu)); 386 386 if (rc) 387 387 goto out; 388 388 cpu_maps_update_begin(); ··· 467 467 if (get_cpu_current_state(cpu) == CPU_STATE_ONLINE) { 468 468 set_preferred_offline_state(cpu, CPU_STATE_OFFLINE); 469 469 cpu_maps_update_done(); 470 - rc = cpu_down(cpu); 470 + rc = device_offline(get_cpu_device(cpu)); 471 471 if (rc) 472 472 goto out; 473 473 cpu_maps_update_begin();
+12 -2
arch/powerpc/platforms/pseries/lpar.c
··· 43 43 #include <asm/trace.h> 44 44 #include <asm/firmware.h> 45 45 #include <asm/plpar_wrappers.h> 46 + #include <asm/fadump.h> 46 47 47 48 #include "pseries.h" 48 49 ··· 248 247 } 249 248 250 249 #ifdef __LITTLE_ENDIAN__ 251 - /* Reset exceptions to big endian */ 252 - if (firmware_has_feature(FW_FEATURE_SET_MODE)) { 250 + /* 251 + * Reset exceptions to big endian. 252 + * 253 + * FIXME this is a hack for kexec, we need to reset the exception 254 + * endian before starting the new kernel and this is a convenient place 255 + * to do it. 256 + * 257 + * This is also called on boot when a fadump happens. In that case we 258 + * must not change the exception endian mode. 259 + */ 260 + if (firmware_has_feature(FW_FEATURE_SET_MODE) && !is_fadump_active()) { 253 261 long rc; 254 262 255 263 rc = pseries_big_endian_exceptions();
+14 -22
arch/s390/configs/default_defconfig
··· 35 35 CONFIG_MODULE_FORCE_UNLOAD=y 36 36 CONFIG_MODVERSIONS=y 37 37 CONFIG_MODULE_SRCVERSION_ALL=y 38 - CONFIG_BLK_DEV_INTEGRITY=y 39 38 CONFIG_BLK_DEV_THROTTLING=y 40 39 CONFIG_PARTITION_ADVANCED=y 41 40 CONFIG_IBM_PARTITION=y ··· 244 245 CONFIG_NFT_CHAIN_ROUTE_IPV4=m 245 246 CONFIG_NFT_CHAIN_NAT_IPV4=m 246 247 CONFIG_NF_TABLES_ARP=m 248 + CONFIG_NF_NAT_IPV4=m 247 249 CONFIG_IP_NF_IPTABLES=m 248 250 CONFIG_IP_NF_MATCH_AH=m 249 251 CONFIG_IP_NF_MATCH_ECN=m ··· 252 252 CONFIG_IP_NF_MATCH_TTL=m 253 253 CONFIG_IP_NF_FILTER=m 254 254 CONFIG_IP_NF_TARGET_REJECT=m 255 - CONFIG_IP_NF_TARGET_ULOG=m 256 - CONFIG_NF_NAT_IPV4=m 257 - CONFIG_IP_NF_TARGET_MASQUERADE=m 258 - CONFIG_IP_NF_TARGET_NETMAP=m 259 - CONFIG_IP_NF_TARGET_REDIRECT=m 260 255 CONFIG_IP_NF_MANGLE=m 261 256 CONFIG_IP_NF_TARGET_CLUSTERIP=m 262 257 CONFIG_IP_NF_TARGET_ECN=m ··· 265 270 CONFIG_NF_TABLES_IPV6=m 266 271 CONFIG_NFT_CHAIN_ROUTE_IPV6=m 267 272 CONFIG_NFT_CHAIN_NAT_IPV6=m 273 + CONFIG_NF_NAT_IPV6=m 268 274 CONFIG_IP6_NF_IPTABLES=m 269 275 CONFIG_IP6_NF_MATCH_AH=m 270 276 CONFIG_IP6_NF_MATCH_EUI64=m ··· 282 286 CONFIG_IP6_NF_MANGLE=m 283 287 CONFIG_IP6_NF_RAW=m 284 288 CONFIG_IP6_NF_SECURITY=m 285 - CONFIG_NF_NAT_IPV6=m 286 - CONFIG_IP6_NF_TARGET_MASQUERADE=m 287 - CONFIG_IP6_NF_TARGET_NPT=m 288 289 CONFIG_NF_TABLES_BRIDGE=m 289 290 CONFIG_NET_SCTPPROBE=m 290 291 CONFIG_RDS=m ··· 367 374 CONFIG_CHR_DEV_SG=y 368 375 CONFIG_CHR_DEV_SCH=m 369 376 CONFIG_SCSI_ENCLOSURE=m 370 - CONFIG_SCSI_MULTI_LUN=y 371 377 CONFIG_SCSI_CONSTANTS=y 372 378 CONFIG_SCSI_LOGGING=y 373 379 CONFIG_SCSI_SPI_ATTRS=m 380 + CONFIG_SCSI_FC_ATTRS=y 374 381 CONFIG_SCSI_SAS_LIBSAS=m 375 382 CONFIG_SCSI_SRP_ATTRS=m 376 383 CONFIG_ISCSI_TCP=m 377 - CONFIG_LIBFCOE=m 378 384 CONFIG_SCSI_DEBUG=m 379 385 CONFIG_ZFCP=y 380 386 CONFIG_SCSI_VIRTIO=m ··· 419 427 CONFIG_NLMON=m 420 428 CONFIG_VHOST_NET=m 421 429 # CONFIG_NET_VENDOR_ARC is not set 422 - # CONFIG_NET_CADENCE is not set 423 430 # CONFIG_NET_VENDOR_CHELSIO is not set 424 431 # 
CONFIG_NET_VENDOR_INTEL is not set 425 432 # CONFIG_NET_VENDOR_MARVELL is not set ··· 472 481 CONFIG_JFS_POSIX_ACL=y 473 482 CONFIG_JFS_SECURITY=y 474 483 CONFIG_JFS_STATISTICS=y 475 - CONFIG_XFS_FS=m 484 + CONFIG_XFS_FS=y 476 485 CONFIG_XFS_QUOTA=y 477 486 CONFIG_XFS_POSIX_ACL=y 478 487 CONFIG_XFS_RT=y 479 488 CONFIG_XFS_DEBUG=y 480 489 CONFIG_GFS2_FS=m 481 490 CONFIG_OCFS2_FS=m 482 - CONFIG_BTRFS_FS=m 491 + CONFIG_BTRFS_FS=y 483 492 CONFIG_BTRFS_FS_POSIX_ACL=y 484 493 CONFIG_NILFS2_FS=m 485 494 CONFIG_FANOTIFY=y ··· 565 574 CONFIG_DETECT_HUNG_TASK=y 566 575 CONFIG_TIMER_STATS=y 567 576 CONFIG_DEBUG_RT_MUTEXES=y 568 - CONFIG_RT_MUTEX_TESTER=y 569 577 CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y 570 578 CONFIG_PROVE_LOCKING=y 571 579 CONFIG_LOCK_STAT=y ··· 590 600 CONFIG_FAULT_INJECTION_STACKTRACE_FILTER=y 591 601 CONFIG_LATENCYTOP=y 592 602 CONFIG_DEBUG_STRICT_USER_COPY_CHECKS=y 603 + CONFIG_IRQSOFF_TRACER=y 604 + CONFIG_PREEMPT_TRACER=y 605 + CONFIG_SCHED_TRACER=y 606 + CONFIG_FTRACE_SYSCALLS=y 607 + CONFIG_STACK_TRACER=y 593 608 CONFIG_BLK_DEV_IO_TRACE=y 594 - # CONFIG_KPROBE_EVENT is not set 609 + CONFIG_UPROBE_EVENT=y 595 610 CONFIG_LKDTM=m 596 611 CONFIG_TEST_LIST_SORT=y 597 612 CONFIG_KPROBES_SANITY_TEST=y ··· 604 609 CONFIG_INTERVAL_TREE_TEST=m 605 610 CONFIG_PERCPU_TEST=m 606 611 CONFIG_ATOMIC64_SELFTEST=y 612 + CONFIG_TEST_STRING_HELPERS=y 613 + CONFIG_TEST_KSTRTOX=y 607 614 CONFIG_DMA_API_DEBUG=y 615 + CONFIG_TEST_BPF=m 608 616 # CONFIG_STRICT_DEVMEM is not set 609 617 CONFIG_S390_PTDUMP=y 610 618 CONFIG_ENCRYPTED_KEYS=m ··· 671 673 CONFIG_X509_CERTIFICATE_PARSER=m 672 674 CONFIG_CRC7=m 673 675 CONFIG_CRC8=m 674 - CONFIG_XZ_DEC_X86=y 675 - CONFIG_XZ_DEC_POWERPC=y 676 - CONFIG_XZ_DEC_IA64=y 677 - CONFIG_XZ_DEC_ARM=y 678 - CONFIG_XZ_DEC_ARMTHUMB=y 679 - CONFIG_XZ_DEC_SPARC=y 680 676 CONFIG_CORDIC=m 681 677 CONFIG_CMM=m 682 678 CONFIG_APPLDATA_BASE=y
+5 -20
arch/s390/configs/gcov_defconfig
··· 35 35 CONFIG_MODULE_FORCE_UNLOAD=y 36 36 CONFIG_MODVERSIONS=y 37 37 CONFIG_MODULE_SRCVERSION_ALL=y 38 - CONFIG_BLK_DEV_INTEGRITY=y 39 38 CONFIG_BLK_DEV_THROTTLING=y 40 39 CONFIG_PARTITION_ADVANCED=y 41 40 CONFIG_IBM_PARTITION=y ··· 242 243 CONFIG_NFT_CHAIN_ROUTE_IPV4=m 243 244 CONFIG_NFT_CHAIN_NAT_IPV4=m 244 245 CONFIG_NF_TABLES_ARP=m 246 + CONFIG_NF_NAT_IPV4=m 245 247 CONFIG_IP_NF_IPTABLES=m 246 248 CONFIG_IP_NF_MATCH_AH=m 247 249 CONFIG_IP_NF_MATCH_ECN=m ··· 250 250 CONFIG_IP_NF_MATCH_TTL=m 251 251 CONFIG_IP_NF_FILTER=m 252 252 CONFIG_IP_NF_TARGET_REJECT=m 253 - CONFIG_IP_NF_TARGET_ULOG=m 254 - CONFIG_NF_NAT_IPV4=m 255 - CONFIG_IP_NF_TARGET_MASQUERADE=m 256 - CONFIG_IP_NF_TARGET_NETMAP=m 257 - CONFIG_IP_NF_TARGET_REDIRECT=m 258 253 CONFIG_IP_NF_MANGLE=m 259 254 CONFIG_IP_NF_TARGET_CLUSTERIP=m 260 255 CONFIG_IP_NF_TARGET_ECN=m ··· 263 268 CONFIG_NF_TABLES_IPV6=m 264 269 CONFIG_NFT_CHAIN_ROUTE_IPV6=m 265 270 CONFIG_NFT_CHAIN_NAT_IPV6=m 271 + CONFIG_NF_NAT_IPV6=m 266 272 CONFIG_IP6_NF_IPTABLES=m 267 273 CONFIG_IP6_NF_MATCH_AH=m 268 274 CONFIG_IP6_NF_MATCH_EUI64=m ··· 280 284 CONFIG_IP6_NF_MANGLE=m 281 285 CONFIG_IP6_NF_RAW=m 282 286 CONFIG_IP6_NF_SECURITY=m 283 - CONFIG_NF_NAT_IPV6=m 284 - CONFIG_IP6_NF_TARGET_MASQUERADE=m 285 - CONFIG_IP6_NF_TARGET_NPT=m 286 287 CONFIG_NF_TABLES_BRIDGE=m 287 288 CONFIG_NET_SCTPPROBE=m 288 289 CONFIG_RDS=m ··· 364 371 CONFIG_CHR_DEV_SG=y 365 372 CONFIG_CHR_DEV_SCH=m 366 373 CONFIG_SCSI_ENCLOSURE=m 367 - CONFIG_SCSI_MULTI_LUN=y 368 374 CONFIG_SCSI_CONSTANTS=y 369 375 CONFIG_SCSI_LOGGING=y 370 376 CONFIG_SCSI_SPI_ATTRS=m 377 + CONFIG_SCSI_FC_ATTRS=y 371 378 CONFIG_SCSI_SAS_LIBSAS=m 372 379 CONFIG_SCSI_SRP_ATTRS=m 373 380 CONFIG_ISCSI_TCP=m 374 - CONFIG_LIBFCOE=m 375 381 CONFIG_SCSI_DEBUG=m 376 382 CONFIG_ZFCP=y 377 383 CONFIG_SCSI_VIRTIO=m ··· 416 424 CONFIG_NLMON=m 417 425 CONFIG_VHOST_NET=m 418 426 # CONFIG_NET_VENDOR_ARC is not set 419 - # CONFIG_NET_CADENCE is not set 420 427 # CONFIG_NET_VENDOR_CHELSIO is not set 421 428 # 
CONFIG_NET_VENDOR_INTEL is not set 422 429 # CONFIG_NET_VENDOR_MARVELL is not set ··· 469 478 CONFIG_JFS_POSIX_ACL=y 470 479 CONFIG_JFS_SECURITY=y 471 480 CONFIG_JFS_STATISTICS=y 472 - CONFIG_XFS_FS=m 481 + CONFIG_XFS_FS=y 473 482 CONFIG_XFS_QUOTA=y 474 483 CONFIG_XFS_POSIX_ACL=y 475 484 CONFIG_XFS_RT=y 476 485 CONFIG_GFS2_FS=m 477 486 CONFIG_OCFS2_FS=m 478 - CONFIG_BTRFS_FS=m 487 + CONFIG_BTRFS_FS=y 479 488 CONFIG_BTRFS_FS_POSIX_ACL=y 480 489 CONFIG_NILFS2_FS=m 481 490 CONFIG_FANOTIFY=y ··· 617 626 CONFIG_X509_CERTIFICATE_PARSER=m 618 627 CONFIG_CRC7=m 619 628 CONFIG_CRC8=m 620 - CONFIG_XZ_DEC_X86=y 621 - CONFIG_XZ_DEC_POWERPC=y 622 - CONFIG_XZ_DEC_IA64=y 623 - CONFIG_XZ_DEC_ARM=y 624 - CONFIG_XZ_DEC_ARMTHUMB=y 625 - CONFIG_XZ_DEC_SPARC=y 626 629 CONFIG_CORDIC=m 627 630 CONFIG_CMM=m 628 631 CONFIG_APPLDATA_BASE=y
+9 -21
arch/s390/configs/performance_defconfig
··· 33 33 CONFIG_MODULE_FORCE_UNLOAD=y 34 34 CONFIG_MODVERSIONS=y 35 35 CONFIG_MODULE_SRCVERSION_ALL=y 36 - CONFIG_BLK_DEV_INTEGRITY=y 37 36 CONFIG_BLK_DEV_THROTTLING=y 38 37 CONFIG_PARTITION_ADVANCED=y 39 38 CONFIG_IBM_PARTITION=y ··· 240 241 CONFIG_NFT_CHAIN_ROUTE_IPV4=m 241 242 CONFIG_NFT_CHAIN_NAT_IPV4=m 242 243 CONFIG_NF_TABLES_ARP=m 244 + CONFIG_NF_NAT_IPV4=m 243 245 CONFIG_IP_NF_IPTABLES=m 244 246 CONFIG_IP_NF_MATCH_AH=m 245 247 CONFIG_IP_NF_MATCH_ECN=m ··· 248 248 CONFIG_IP_NF_MATCH_TTL=m 249 249 CONFIG_IP_NF_FILTER=m 250 250 CONFIG_IP_NF_TARGET_REJECT=m 251 - CONFIG_IP_NF_TARGET_ULOG=m 252 - CONFIG_NF_NAT_IPV4=m 253 - CONFIG_IP_NF_TARGET_MASQUERADE=m 254 - CONFIG_IP_NF_TARGET_NETMAP=m 255 - CONFIG_IP_NF_TARGET_REDIRECT=m 256 251 CONFIG_IP_NF_MANGLE=m 257 252 CONFIG_IP_NF_TARGET_CLUSTERIP=m 258 253 CONFIG_IP_NF_TARGET_ECN=m ··· 261 266 CONFIG_NF_TABLES_IPV6=m 262 267 CONFIG_NFT_CHAIN_ROUTE_IPV6=m 263 268 CONFIG_NFT_CHAIN_NAT_IPV6=m 269 + CONFIG_NF_NAT_IPV6=m 264 270 CONFIG_IP6_NF_IPTABLES=m 265 271 CONFIG_IP6_NF_MATCH_AH=m 266 272 CONFIG_IP6_NF_MATCH_EUI64=m ··· 278 282 CONFIG_IP6_NF_MANGLE=m 279 283 CONFIG_IP6_NF_RAW=m 280 284 CONFIG_IP6_NF_SECURITY=m 281 - CONFIG_NF_NAT_IPV6=m 282 - CONFIG_IP6_NF_TARGET_MASQUERADE=m 283 - CONFIG_IP6_NF_TARGET_NPT=m 284 285 CONFIG_NF_TABLES_BRIDGE=m 285 286 CONFIG_NET_SCTPPROBE=m 286 287 CONFIG_RDS=m ··· 362 369 CONFIG_CHR_DEV_SG=y 363 370 CONFIG_CHR_DEV_SCH=m 364 371 CONFIG_SCSI_ENCLOSURE=m 365 - CONFIG_SCSI_MULTI_LUN=y 366 372 CONFIG_SCSI_CONSTANTS=y 367 373 CONFIG_SCSI_LOGGING=y 368 374 CONFIG_SCSI_SPI_ATTRS=m 375 + CONFIG_SCSI_FC_ATTRS=y 369 376 CONFIG_SCSI_SAS_LIBSAS=m 370 377 CONFIG_SCSI_SRP_ATTRS=m 371 378 CONFIG_ISCSI_TCP=m 372 - CONFIG_LIBFCOE=m 373 379 CONFIG_SCSI_DEBUG=m 374 380 CONFIG_ZFCP=y 375 381 CONFIG_SCSI_VIRTIO=m ··· 414 422 CONFIG_NLMON=m 415 423 CONFIG_VHOST_NET=m 416 424 # CONFIG_NET_VENDOR_ARC is not set 417 - # CONFIG_NET_CADENCE is not set 418 425 # CONFIG_NET_VENDOR_CHELSIO is not set 419 426 # 
CONFIG_NET_VENDOR_INTEL is not set 420 427 # CONFIG_NET_VENDOR_MARVELL is not set ··· 467 476 CONFIG_JFS_POSIX_ACL=y 468 477 CONFIG_JFS_SECURITY=y 469 478 CONFIG_JFS_STATISTICS=y 470 - CONFIG_XFS_FS=m 479 + CONFIG_XFS_FS=y 471 480 CONFIG_XFS_QUOTA=y 472 481 CONFIG_XFS_POSIX_ACL=y 473 482 CONFIG_XFS_RT=y 474 483 CONFIG_GFS2_FS=m 475 484 CONFIG_OCFS2_FS=m 476 - CONFIG_BTRFS_FS=m 485 + CONFIG_BTRFS_FS=y 477 486 CONFIG_BTRFS_FS_POSIX_ACL=y 478 487 CONFIG_NILFS2_FS=m 479 488 CONFIG_FANOTIFY=y ··· 541 550 CONFIG_RCU_TORTURE_TEST=m 542 551 CONFIG_RCU_CPU_STALL_TIMEOUT=60 543 552 CONFIG_LATENCYTOP=y 553 + CONFIG_SCHED_TRACER=y 554 + CONFIG_FTRACE_SYSCALLS=y 555 + CONFIG_STACK_TRACER=y 544 556 CONFIG_BLK_DEV_IO_TRACE=y 545 - # CONFIG_KPROBE_EVENT is not set 557 + CONFIG_UPROBE_EVENT=y 546 558 CONFIG_LKDTM=m 547 559 CONFIG_PERCPU_TEST=m 548 560 CONFIG_ATOMIC64_SELFTEST=y ··· 612 618 CONFIG_X509_CERTIFICATE_PARSER=m 613 619 CONFIG_CRC7=m 614 620 CONFIG_CRC8=m 615 - CONFIG_XZ_DEC_X86=y 616 - CONFIG_XZ_DEC_POWERPC=y 617 - CONFIG_XZ_DEC_IA64=y 618 - CONFIG_XZ_DEC_ARM=y 619 - CONFIG_XZ_DEC_ARMTHUMB=y 620 - CONFIG_XZ_DEC_SPARC=y 621 621 CONFIG_CORDIC=m 622 622 CONFIG_CMM=m 623 623 CONFIG_APPLDATA_BASE=y
+2 -8
arch/s390/configs/zfcpdump_defconfig
··· 22 22 CONFIG_CRASH_DUMP=y 23 23 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set 24 24 # CONFIG_SECCOMP is not set 25 - # CONFIG_IUCV is not set 26 25 CONFIG_NET=y 26 + # CONFIG_IUCV is not set 27 27 CONFIG_ATM=y 28 28 CONFIG_ATM_LANE=y 29 29 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" ··· 36 36 CONFIG_SCSI=y 37 37 CONFIG_BLK_DEV_SD=y 38 38 CONFIG_SCSI_ENCLOSURE=y 39 - CONFIG_SCSI_MULTI_LUN=y 40 39 CONFIG_SCSI_CONSTANTS=y 41 40 CONFIG_SCSI_LOGGING=y 41 + CONFIG_SCSI_FC_ATTRS=y 42 42 CONFIG_SCSI_SRP_ATTRS=y 43 43 CONFIG_ZFCP=y 44 44 # CONFIG_INPUT_MOUSEDEV_PSAUX is not set ··· 75 75 CONFIG_RCU_CPU_STALL_TIMEOUT=60 76 76 # CONFIG_FTRACE is not set 77 77 # CONFIG_STRICT_DEVMEM is not set 78 - CONFIG_XZ_DEC_X86=y 79 - CONFIG_XZ_DEC_POWERPC=y 80 - CONFIG_XZ_DEC_IA64=y 81 - CONFIG_XZ_DEC_ARM=y 82 - CONFIG_XZ_DEC_ARMTHUMB=y 83 - CONFIG_XZ_DEC_SPARC=y 84 78 # CONFIG_PFAULT is not set 85 79 # CONFIG_S390_HYPFS_FS is not set 86 80 # CONFIG_VIRTUALIZATION is not set
+2 -3
arch/s390/defconfig
··· 92 92 CONFIG_BLK_DEV_SR=y 93 93 CONFIG_BLK_DEV_SR_VENDOR=y 94 94 CONFIG_CHR_DEV_SG=y 95 - CONFIG_SCSI_MULTI_LUN=y 96 95 CONFIG_SCSI_CONSTANTS=y 97 96 CONFIG_SCSI_LOGGING=y 98 97 CONFIG_SCSI_SCAN_ASYNC=y 98 + CONFIG_SCSI_FC_ATTRS=y 99 99 CONFIG_ZFCP=y 100 100 CONFIG_SCSI_VIRTIO=y 101 101 CONFIG_NETDEVICES=y ··· 164 164 CONFIG_CRYPTO_XCBC=m 165 165 CONFIG_CRYPTO_VMAC=m 166 166 CONFIG_CRYPTO_CRC32=m 167 - CONFIG_CRYPTO_CRCT10DIF=m 168 167 CONFIG_CRYPTO_MD4=m 169 168 CONFIG_CRYPTO_MICHAEL_MIC=m 170 169 CONFIG_CRYPTO_RMD128=m 171 170 CONFIG_CRYPTO_RMD160=m 172 171 CONFIG_CRYPTO_RMD256=m 173 172 CONFIG_CRYPTO_RMD320=m 174 - CONFIG_CRYPTO_SHA256=m 173 + CONFIG_CRYPTO_SHA256=y 175 174 CONFIG_CRYPTO_SHA512=m 176 175 CONFIG_CRYPTO_TGR192=m 177 176 CONFIG_CRYPTO_WP512=m
+2
arch/s390/kernel/ftrace.c
··· 121 121 { 122 122 struct ftrace_graph_ent trace; 123 123 124 + if (unlikely(ftrace_graph_is_dead())) 125 + goto out; 124 126 if (unlikely(atomic_read(&current->tracing_graph_pause))) 125 127 goto out; 126 128 ip = (ip & PSW_ADDR_INSN) - MCOUNT_INSN_SIZE;
+8 -4
arch/s390/kernel/vdso32/clock_gettime.S
··· 19 19 .type __kernel_clock_gettime,@function 20 20 __kernel_clock_gettime: 21 21 .cfi_startproc 22 + ahi %r15,-16 22 23 basr %r5,0 23 24 0: al %r5,21f-0b(%r5) /* get &_vdso_data */ 24 25 chi %r2,__CLOCK_REALTIME_COARSE ··· 35 34 1: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */ 36 35 tml %r4,0x0001 /* pending update ? loop */ 37 36 jnz 1b 38 - stcke 24(%r15) /* Store TOD clock */ 39 - lm %r0,%r1,25(%r15) 37 + stcke 0(%r15) /* Store TOD clock */ 38 + lm %r0,%r1,1(%r15) 40 39 s %r0,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */ 41 40 sl %r1,__VDSO_XTIME_STAMP+4(%r5) 42 41 brc 3,2f ··· 71 70 8: st %r2,0(%r3) /* store tp->tv_sec */ 72 71 st %r1,4(%r3) /* store tp->tv_nsec */ 73 72 lhi %r2,0 73 + ahi %r15,16 74 74 br %r14 75 75 76 76 /* CLOCK_MONOTONIC_COARSE */ ··· 98 96 11: l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */ 99 97 tml %r4,0x0001 /* pending update ? loop */ 100 98 jnz 11b 101 - stcke 24(%r15) /* Store TOD clock */ 102 - lm %r0,%r1,25(%r15) 99 + stcke 0(%r15) /* Store TOD clock */ 100 + lm %r0,%r1,1(%r15) 103 101 s %r0,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */ 104 102 sl %r1,__VDSO_XTIME_STAMP+4(%r5) 105 103 brc 3,12f ··· 134 132 17: st %r2,0(%r3) /* store tp->tv_sec */ 135 133 st %r1,4(%r3) /* store tp->tv_nsec */ 136 134 lhi %r2,0 135 + ahi %r15,16 137 136 br %r14 138 137 139 138 /* Fallback to system call */ 140 139 19: lhi %r1,__NR_clock_gettime 141 140 svc 0 141 + ahi %r15,16 142 142 br %r14 143 143 144 144 20: .long 1000000000
+8 -6
arch/s390/kernel/vdso32/gettimeofday.S
··· 19 19 .type __kernel_gettimeofday,@function 20 20 __kernel_gettimeofday: 21 21 .cfi_startproc 22 + ahi %r15,-16 22 23 basr %r5,0 23 24 0: al %r5,13f-0b(%r5) /* get &_vdso_data */ 24 25 1: ltr %r3,%r3 /* check if tz is NULL */ ··· 30 29 l %r4,__VDSO_UPD_COUNT+4(%r5) /* load update counter */ 31 30 tml %r4,0x0001 /* pending update ? loop */ 32 31 jnz 1b 33 - stcke 24(%r15) /* Store TOD clock */ 34 - lm %r0,%r1,25(%r15) 32 + stcke 0(%r15) /* Store TOD clock */ 33 + lm %r0,%r1,1(%r15) 35 34 s %r0,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */ 36 35 sl %r1,__VDSO_XTIME_STAMP+4(%r5) 37 36 brc 3,3f 38 37 ahi %r0,-1 39 38 3: ms %r0,__VDSO_TK_MULT(%r5) /* * tk->mult */ 40 - st %r0,24(%r15) 39 + st %r0,0(%r15) 41 40 l %r0,__VDSO_TK_MULT(%r5) 42 41 ltr %r1,%r1 43 42 mr %r0,%r0 44 43 jnm 4f 45 44 a %r0,__VDSO_TK_MULT(%r5) 46 - 4: al %r0,24(%r15) 45 + 4: al %r0,0(%r15) 47 46 al %r0,__VDSO_XTIME_NSEC(%r5) /* + xtime */ 48 47 al %r1,__VDSO_XTIME_NSEC+4(%r5) 49 48 brc 12,5f 50 49 ahi %r0,1 51 - 5: mvc 24(4,%r15),__VDSO_XTIME_SEC+4(%r5) 50 + 5: mvc 0(4,%r15),__VDSO_XTIME_SEC+4(%r5) 52 51 cl %r4,__VDSO_UPD_COUNT+4(%r5) /* check update counter */ 53 52 jne 1b 54 53 l %r4,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */ 55 54 srdl %r0,0(%r4) /* >> tk->shift */ 56 - l %r4,24(%r15) /* get tv_sec from stack */ 55 + l %r4,0(%r15) /* get tv_sec from stack */ 57 56 basr %r5,0 58 57 6: ltr %r0,%r0 59 58 jnz 7f ··· 72 71 9: srl %r0,6 73 72 st %r0,4(%r2) /* store tv->tv_usec */ 74 73 10: slr %r2,%r2 74 + ahi %r15,16 75 75 br %r14 76 76 11: .long 1000000000 77 77 12: .long 274877907
+9 -4
arch/s390/kernel/vdso64/clock_gettime.S
··· 19 19 .type __kernel_clock_gettime,@function 20 20 __kernel_clock_gettime: 21 21 .cfi_startproc 22 + aghi %r15,-16 22 23 larl %r5,_vdso_data 23 24 cghi %r2,__CLOCK_REALTIME_COARSE 24 25 je 4f ··· 38 37 0: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */ 39 38 tmll %r4,0x0001 /* pending update ? loop */ 40 39 jnz 0b 41 - stcke 48(%r15) /* Store TOD clock */ 40 + stcke 0(%r15) /* Store TOD clock */ 42 41 lgf %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */ 43 42 lg %r0,__VDSO_WTOM_SEC(%r5) 44 - lg %r1,49(%r15) 43 + lg %r1,1(%r15) 45 44 sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */ 46 45 msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */ 47 46 alg %r1,__VDSO_WTOM_NSEC(%r5) ··· 57 56 2: stg %r0,0(%r3) /* store tp->tv_sec */ 58 57 stg %r1,8(%r3) /* store tp->tv_nsec */ 59 58 lghi %r2,0 59 + aghi %r15,16 60 60 br %r14 61 61 62 62 /* CLOCK_MONOTONIC_COARSE */ ··· 84 82 5: lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */ 85 83 tmll %r4,0x0001 /* pending update ? loop */ 86 84 jnz 5b 87 - stcke 48(%r15) /* Store TOD clock */ 85 + stcke 0(%r15) /* Store TOD clock */ 88 86 lgf %r2,__VDSO_TK_SHIFT(%r5) /* Timekeeper shift */ 89 - lg %r1,49(%r15) 87 + lg %r1,1(%r15) 90 88 sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */ 91 89 msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */ 92 90 alg %r1,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */ ··· 103 101 7: stg %r0,0(%r3) /* store tp->tv_sec */ 104 102 stg %r1,8(%r3) /* store tp->tv_nsec */ 105 103 lghi %r2,0 104 + aghi %r15,16 106 105 br %r14 107 106 108 107 /* CLOCK_THREAD_CPUTIME_ID for this thread */ ··· 137 134 slgr %r4,%r0 /* r4 = tv_nsec */ 138 135 stg %r4,8(%r3) 139 136 lghi %r2,0 137 + aghi %r15,16 140 138 br %r14 141 139 142 140 /* Fallback to system call */ 143 141 12: lghi %r1,__NR_clock_gettime 144 142 svc 0 143 + aghi %r15,16 145 144 br %r14 146 145 147 146 13: .quad 1000000000
+4 -2
arch/s390/kernel/vdso64/gettimeofday.S
··· 19 19 .type __kernel_gettimeofday,@function 20 20 __kernel_gettimeofday: 21 21 .cfi_startproc 22 + aghi %r15,-16 22 23 larl %r5,_vdso_data 23 24 0: ltgr %r3,%r3 /* check if tz is NULL */ 24 25 je 1f ··· 29 28 lg %r4,__VDSO_UPD_COUNT(%r5) /* load update counter */ 30 29 tmll %r4,0x0001 /* pending update ? loop */ 31 30 jnz 0b 32 - stcke 48(%r15) /* Store TOD clock */ 33 - lg %r1,49(%r15) 31 + stcke 0(%r15) /* Store TOD clock */ 32 + lg %r1,1(%r15) 34 33 sg %r1,__VDSO_XTIME_STAMP(%r5) /* TOD - cycle_last */ 35 34 msgf %r1,__VDSO_TK_MULT(%r5) /* * tk->mult */ 36 35 alg %r1,__VDSO_XTIME_NSEC(%r5) /* + tk->xtime_nsec */ ··· 51 50 srlg %r0,%r0,6 52 51 stg %r0,8(%r2) /* store tv->tv_usec */ 53 52 4: lghi %r2,0 53 + aghi %r15,16 54 54 br %r14 55 55 5: .quad 1000000000 56 56 .long 274877907
+4
arch/s390/kernel/vtime.c
··· 66 66 clock = S390_lowcore.last_update_clock; 67 67 asm volatile( 68 68 " stpt %0\n" /* Store current cpu timer value */ 69 + #ifdef CONFIG_HAVE_MARCH_Z9_109_FEATURES 70 + " stckf %1" /* Store current tod clock value */ 71 + #else 69 72 " stck %1" /* Store current tod clock value */ 73 + #endif 70 74 : "=m" (S390_lowcore.last_update_timer), 71 75 "=m" (S390_lowcore.last_update_clock)); 72 76 S390_lowcore.system_timer += timer - S390_lowcore.last_update_timer;
+48 -15
arch/x86/kvm/emulate.c
··· 574 574 case 4: 575 575 ctxt->_eip = (u32)dst; 576 576 break; 577 + #ifdef CONFIG_X86_64 577 578 case 8: 578 579 if ((cs_l && is_noncanonical_address(dst)) || 579 - (!cs_l && (dst & ~(u32)-1))) 580 + (!cs_l && (dst >> 32) != 0)) 580 581 return emulate_gp(ctxt, 0); 581 582 ctxt->_eip = dst; 582 583 break; 584 + #endif 583 585 default: 584 586 WARN(1, "unsupported eip assignment size\n"); 585 587 } ··· 643 641 644 642 static int __linearize(struct x86_emulate_ctxt *ctxt, 645 643 struct segmented_address addr, 646 - unsigned size, bool write, bool fetch, 644 + unsigned *max_size, unsigned size, 645 + bool write, bool fetch, 647 646 ulong *linear) 648 647 { 649 648 struct desc_struct desc; ··· 655 652 unsigned cpl; 656 653 657 654 la = seg_base(ctxt, addr.seg) + addr.ea; 655 + *max_size = 0; 658 656 switch (ctxt->mode) { 659 657 case X86EMUL_MODE_PROT64: 660 658 if (((signed long)la << 16) >> 16 != la) 661 659 return emulate_gp(ctxt, 0); 660 + 661 + *max_size = min_t(u64, ~0u, (1ull << 48) - la); 662 + if (size > *max_size) 663 + goto bad; 662 664 break; 663 665 default: 664 666 usable = ctxt->ops->get_segment(ctxt, &sel, &desc, NULL, ··· 681 673 if ((ctxt->mode == X86EMUL_MODE_REAL) && !fetch && 682 674 (ctxt->d & NoBigReal)) { 683 675 /* la is between zero and 0xffff */ 684 - if (la > 0xffff || (u32)(la + size - 1) > 0xffff) 676 + if (la > 0xffff) 685 677 goto bad; 678 + *max_size = 0x10000 - la; 686 679 } else if ((desc.type & 8) || !(desc.type & 4)) { 687 680 /* expand-up segment */ 688 - if (addr.ea > lim || (u32)(addr.ea + size - 1) > lim) 681 + if (addr.ea > lim) 689 682 goto bad; 683 + *max_size = min_t(u64, ~0u, (u64)lim + 1 - addr.ea); 690 684 } else { 691 685 /* expand-down segment */ 692 - if (addr.ea <= lim || (u32)(addr.ea + size - 1) <= lim) 686 + if (addr.ea <= lim) 693 687 goto bad; 694 688 lim = desc.d ? 
0xffffffff : 0xffff; 695 - if (addr.ea > lim || (u32)(addr.ea + size - 1) > lim) 689 + if (addr.ea > lim) 696 690 goto bad; 691 + *max_size = min_t(u64, ~0u, (u64)lim + 1 - addr.ea); 697 692 } 693 + if (size > *max_size) 694 + goto bad; 698 695 cpl = ctxt->ops->cpl(ctxt); 699 696 if (!(desc.type & 8)) { 700 697 /* data segment */ ··· 724 711 return X86EMUL_CONTINUE; 725 712 bad: 726 713 if (addr.seg == VCPU_SREG_SS) 727 - return emulate_ss(ctxt, sel); 714 + return emulate_ss(ctxt, 0); 728 715 else 729 - return emulate_gp(ctxt, sel); 716 + return emulate_gp(ctxt, 0); 730 717 } 731 718 732 719 static int linearize(struct x86_emulate_ctxt *ctxt, ··· 734 721 unsigned size, bool write, 735 722 ulong *linear) 736 723 { 737 - return __linearize(ctxt, addr, size, write, false, linear); 724 + unsigned max_size; 725 + return __linearize(ctxt, addr, &max_size, size, write, false, linear); 738 726 } 739 727 740 728 ··· 760 746 static int __do_insn_fetch_bytes(struct x86_emulate_ctxt *ctxt, int op_size) 761 747 { 762 748 int rc; 763 - unsigned size; 749 + unsigned size, max_size; 764 750 unsigned long linear; 765 751 int cur_size = ctxt->fetch.end - ctxt->fetch.data; 766 752 struct segmented_address addr = { .seg = VCPU_SREG_CS, 767 753 .ea = ctxt->eip + cur_size }; 768 754 769 - size = 15UL ^ cur_size; 770 - rc = __linearize(ctxt, addr, size, false, true, &linear); 755 + /* 756 + * We do not know exactly how many bytes will be needed, and 757 + * __linearize is expensive, so fetch as much as possible. We 758 + * just have to avoid going beyond the 15 byte limit, the end 759 + * of the segment, or the end of the page. 760 + * 761 + * __linearize is called with size 0 so that it does not do any 762 + * boundary check itself. Instead, we use max_size to check 763 + * against op_size. 
764 + */ 765 + rc = __linearize(ctxt, addr, &max_size, 0, false, true, &linear); 771 766 if (unlikely(rc != X86EMUL_CONTINUE)) 772 767 return rc; 773 768 769 + size = min_t(unsigned, 15UL ^ cur_size, max_size); 774 770 size = min_t(unsigned, size, PAGE_SIZE - offset_in_page(linear)); 775 771 776 772 /* ··· 790 766 * still, we must have hit the 15-byte boundary. 791 767 */ 792 768 if (unlikely(size < op_size)) 793 - return X86EMUL_UNHANDLEABLE; 769 + return emulate_gp(ctxt, 0); 770 + 794 771 rc = ctxt->ops->fetch(ctxt, linear, ctxt->fetch.end, 795 772 size, &ctxt->exception); 796 773 if (unlikely(rc != X86EMUL_CONTINUE)) ··· 2037 2012 2038 2013 rc = assign_eip_far(ctxt, ctxt->src.val, new_desc.l); 2039 2014 if (rc != X86EMUL_CONTINUE) { 2040 - WARN_ON(!ctxt->mode != X86EMUL_MODE_PROT64); 2015 + WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64); 2041 2016 /* assigning eip failed; restore the old cs */ 2042 2017 ops->set_segment(ctxt, old_sel, &old_desc, 0, VCPU_SREG_CS); 2043 2018 return rc; ··· 2134 2109 return rc; 2135 2110 rc = assign_eip_far(ctxt, eip, new_desc.l); 2136 2111 if (rc != X86EMUL_CONTINUE) { 2137 - WARN_ON(!ctxt->mode != X86EMUL_MODE_PROT64); 2112 + WARN_ON(ctxt->mode != X86EMUL_MODE_PROT64); 2138 2113 ops->set_segment(ctxt, old_cs, &old_desc, 0, VCPU_SREG_CS); 2139 2114 } 2140 2115 return rc; ··· 4287 4262 fetch_register_operand(op); 4288 4263 break; 4289 4264 case OpCL: 4265 + op->type = OP_IMM; 4290 4266 op->bytes = 1; 4291 4267 op->val = reg_read(ctxt, VCPU_REGS_RCX) & 0xff; 4292 4268 break; ··· 4295 4269 rc = decode_imm(ctxt, op, 1, true); 4296 4270 break; 4297 4271 case OpOne: 4272 + op->type = OP_IMM; 4298 4273 op->bytes = 1; 4299 4274 op->val = 1; 4300 4275 break; ··· 4354 4327 ctxt->memop.bytes = ctxt->op_bytes + 2; 4355 4328 goto mem_common; 4356 4329 case OpES: 4330 + op->type = OP_IMM; 4357 4331 op->val = VCPU_SREG_ES; 4358 4332 break; 4359 4333 case OpCS: 4334 + op->type = OP_IMM; 4360 4335 op->val = VCPU_SREG_CS; 4361 4336 break; 4362 4337 
case OpSS: 4338 + op->type = OP_IMM; 4363 4339 op->val = VCPU_SREG_SS; 4364 4340 break; 4365 4341 case OpDS: 4342 + op->type = OP_IMM; 4366 4343 op->val = VCPU_SREG_DS; 4367 4344 break; 4368 4345 case OpFS: 4346 + op->type = OP_IMM; 4369 4347 op->val = VCPU_SREG_FS; 4370 4348 break; 4371 4349 case OpGS: 4350 + op->type = OP_IMM; 4372 4351 op->val = VCPU_SREG_GS; 4373 4352 break; 4374 4353 case OpImplicit:
+5 -1
arch/x86/kvm/vmx.c
··· 4579 4579 vmcs_write32(TPR_THRESHOLD, 0); 4580 4580 } 4581 4581 4582 - kvm_vcpu_reload_apic_access_page(vcpu); 4582 + kvm_make_request(KVM_REQ_APIC_PAGE_RELOAD, vcpu); 4583 4583 4584 4584 if (vmx_vm_has_apicv(vcpu->kvm)) 4585 4585 memset(&vmx->pi_desc, 0, sizeof(struct pi_desc)); ··· 6426 6426 const unsigned long *fields = shadow_read_write_fields; 6427 6427 const int num_fields = max_shadow_read_write_fields; 6428 6428 6429 + preempt_disable(); 6430 + 6429 6431 vmcs_load(shadow_vmcs); 6430 6432 6431 6433 for (i = 0; i < num_fields; i++) { ··· 6451 6449 6452 6450 vmcs_clear(shadow_vmcs); 6453 6451 vmcs_load(vmx->loaded_vmcs->vmcs); 6452 + 6453 + preempt_enable(); 6454 6454 } 6455 6455 6456 6456 static void copy_vmcs12_to_shadow(struct vcpu_vmx *vmx)
+2 -2
arch/xtensa/Kconfig
··· 319 319 320 320 config XTENSA_PLATFORM_XTFPGA 321 321 bool "XTFPGA" 322 + select ETHOC if ETHERNET 322 323 select SERIAL_CONSOLE 323 - select ETHOC 324 324 select XTENSA_CALIBRATE_CCOUNT 325 325 help 326 326 XTFPGA is the name of Tensilica board family (LX60, LX110, LX200, ML605). ··· 367 367 config BLK_DEV_SIMDISK 368 368 tristate "Host file-based simulated block device support" 369 369 default n 370 - depends on XTENSA_PLATFORM_ISS 370 + depends on XTENSA_PLATFORM_ISS && BLOCK 371 371 help 372 372 Create block devices that map to files in the host file system. 373 373 Device binding to host file may be changed at runtime via proc
+16
arch/xtensa/boot/dts/lx200mx.dts
··· 1 + /dts-v1/; 2 + /include/ "xtfpga.dtsi" 3 + /include/ "xtfpga-flash-16m.dtsi" 4 + 5 + / { 6 + compatible = "cdns,xtensa-lx200"; 7 + memory@0 { 8 + device_type = "memory"; 9 + reg = <0x00000000 0x06000000>; 10 + }; 11 + pic: pic { 12 + compatible = "cdns,xtensa-mx"; 13 + #interrupt-cells = <2>; 14 + interrupt-controller; 15 + }; 16 + };
+131
arch/xtensa/configs/generic_kc705_defconfig
···
1 + CONFIG_SYSVIPC=y
2 + CONFIG_POSIX_MQUEUE=y
3 + CONFIG_FHANDLE=y
4 + CONFIG_IRQ_DOMAIN_DEBUG=y
5 + CONFIG_NO_HZ_IDLE=y
6 + CONFIG_HIGH_RES_TIMERS=y
7 + CONFIG_IRQ_TIME_ACCOUNTING=y
8 + CONFIG_BSD_PROCESS_ACCT=y
9 + CONFIG_CGROUP_DEBUG=y
10 + CONFIG_CGROUP_FREEZER=y
11 + CONFIG_CGROUP_DEVICE=y
12 + CONFIG_CPUSETS=y
13 + CONFIG_CGROUP_CPUACCT=y
14 + CONFIG_RESOURCE_COUNTERS=y
15 + CONFIG_MEMCG=y
16 + CONFIG_NAMESPACES=y
17 + CONFIG_SCHED_AUTOGROUP=y
18 + CONFIG_RELAY=y
19 + CONFIG_BLK_DEV_INITRD=y
20 + CONFIG_EXPERT=y
21 + CONFIG_SYSCTL_SYSCALL=y
22 + CONFIG_KALLSYMS_ALL=y
23 + CONFIG_PROFILING=y
24 + CONFIG_OPROFILE=y
25 + CONFIG_MODULES=y
26 + CONFIG_MODULE_UNLOAD=y
27 + # CONFIG_IOSCHED_DEADLINE is not set
28 + # CONFIG_IOSCHED_CFQ is not set
29 + CONFIG_XTENSA_VARIANT_DC233C=y
30 + CONFIG_XTENSA_UNALIGNED_USER=y
31 + CONFIG_PREEMPT=y
32 + CONFIG_HIGHMEM=y
33 + # CONFIG_PCI is not set
34 + CONFIG_XTENSA_PLATFORM_XTFPGA=y
35 + CONFIG_CMDLINE_BOOL=y
36 + CONFIG_CMDLINE="earlycon=uart8250,mmio32,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug"
37 + CONFIG_USE_OF=y
38 + CONFIG_BUILTIN_DTB="kc705"
39 + # CONFIG_COMPACTION is not set
40 + # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
41 + CONFIG_NET=y
42 + CONFIG_PACKET=y
43 + CONFIG_UNIX=y
44 + CONFIG_INET=y
45 + CONFIG_IP_MULTICAST=y
46 + CONFIG_IP_PNP=y
47 + CONFIG_IP_PNP_DHCP=y
48 + CONFIG_IP_PNP_BOOTP=y
49 + CONFIG_IP_PNP_RARP=y
50 + # CONFIG_IPV6 is not set
51 + CONFIG_NETFILTER=y
52 + # CONFIG_WIRELESS is not set
53 + CONFIG_DEVTMPFS=y
54 + CONFIG_DEVTMPFS_MOUNT=y
55 + # CONFIG_STANDALONE is not set
56 + CONFIG_MTD=y
57 + CONFIG_MTD_CFI=y
58 + CONFIG_MTD_JEDECPROBE=y
59 + CONFIG_MTD_CFI_INTELEXT=y
60 + CONFIG_MTD_CFI_AMDSTD=y
61 + CONFIG_MTD_CFI_STAA=y
62 + CONFIG_MTD_PHYSMAP_OF=y
63 + CONFIG_MTD_UBI=y
64 + CONFIG_BLK_DEV_LOOP=y
65 + CONFIG_BLK_DEV_RAM=y
66 + CONFIG_SCSI=y
67 + CONFIG_BLK_DEV_SD=y
68 + CONFIG_NETDEVICES=y
69 + # CONFIG_NET_VENDOR_ARC is not set
70 + # CONFIG_NET_VENDOR_BROADCOM is not set
71 + # CONFIG_NET_VENDOR_INTEL is not set
72 + # CONFIG_NET_VENDOR_MARVELL is not set
73 + # CONFIG_NET_VENDOR_MICREL is not set
74 + # CONFIG_NET_VENDOR_NATSEMI is not set
75 + # CONFIG_NET_VENDOR_SAMSUNG is not set
76 + # CONFIG_NET_VENDOR_SEEQ is not set
77 + # CONFIG_NET_VENDOR_SMSC is not set
78 + # CONFIG_NET_VENDOR_STMICRO is not set
79 + # CONFIG_NET_VENDOR_VIA is not set
80 + # CONFIG_NET_VENDOR_WIZNET is not set
81 + CONFIG_MARVELL_PHY=y
82 + # CONFIG_WLAN is not set
83 + # CONFIG_INPUT_MOUSEDEV is not set
84 + # CONFIG_INPUT_KEYBOARD is not set
85 + # CONFIG_INPUT_MOUSE is not set
86 + # CONFIG_SERIO is not set
87 + CONFIG_SERIAL_8250=y
88 + # CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
89 + CONFIG_SERIAL_8250_CONSOLE=y
90 + CONFIG_SERIAL_OF_PLATFORM=y
91 + CONFIG_HW_RANDOM=y
92 + # CONFIG_HWMON is not set
93 + CONFIG_WATCHDOG=y
94 + CONFIG_WATCHDOG_NOWAYOUT=y
95 + CONFIG_SOFT_WATCHDOG=y
96 + # CONFIG_VGA_CONSOLE is not set
97 + # CONFIG_USB_SUPPORT is not set
98 + # CONFIG_IOMMU_SUPPORT is not set
99 + CONFIG_EXT3_FS=y
100 + CONFIG_EXT4_FS=y
101 + CONFIG_FANOTIFY=y
102 + CONFIG_VFAT_FS=y
103 + CONFIG_PROC_KCORE=y
104 + CONFIG_TMPFS=y
105 + CONFIG_TMPFS_POSIX_ACL=y
106 + CONFIG_UBIFS_FS=y
107 + CONFIG_NFS_FS=y
108 + CONFIG_NFS_V4=y
109 + CONFIG_NFS_SWAP=y
110 + CONFIG_ROOT_NFS=y
111 + CONFIG_SUNRPC_DEBUG=y
112 + CONFIG_NLS_CODEPAGE_437=y
113 + CONFIG_NLS_ISO8859_1=y
114 + CONFIG_PRINTK_TIME=y
115 + CONFIG_DYNAMIC_DEBUG=y
116 + CONFIG_DEBUG_INFO=y
117 + CONFIG_MAGIC_SYSRQ=y
118 + CONFIG_LOCKUP_DETECTOR=y
119 + # CONFIG_SCHED_DEBUG is not set
120 + CONFIG_SCHEDSTATS=y
121 + CONFIG_TIMER_STATS=y
122 + CONFIG_DEBUG_RT_MUTEXES=y
123 + CONFIG_DEBUG_SPINLOCK=y
124 + CONFIG_DEBUG_MUTEXES=y
125 + CONFIG_DEBUG_ATOMIC_SLEEP=y
126 + CONFIG_STACKTRACE=y
127 + CONFIG_RCU_TRACE=y
128 + # CONFIG_FTRACE is not set
129 + CONFIG_LD_NO_RELAX=y
130 + # CONFIG_S32C1I_SELFTEST is not set
131 + CONFIG_CRYPTO_ANSI_CPRNG=y
+135
arch/xtensa/configs/smp_lx200_defconfig
···
1 + CONFIG_SYSVIPC=y
2 + CONFIG_POSIX_MQUEUE=y
3 + CONFIG_FHANDLE=y
4 + CONFIG_IRQ_DOMAIN_DEBUG=y
5 + CONFIG_NO_HZ_IDLE=y
6 + CONFIG_HIGH_RES_TIMERS=y
7 + CONFIG_IRQ_TIME_ACCOUNTING=y
8 + CONFIG_BSD_PROCESS_ACCT=y
9 + CONFIG_CGROUP_DEBUG=y
10 + CONFIG_CGROUP_FREEZER=y
11 + CONFIG_CGROUP_DEVICE=y
12 + CONFIG_CPUSETS=y
13 + CONFIG_CGROUP_CPUACCT=y
14 + CONFIG_RESOURCE_COUNTERS=y
15 + CONFIG_MEMCG=y
16 + CONFIG_NAMESPACES=y
17 + CONFIG_SCHED_AUTOGROUP=y
18 + CONFIG_RELAY=y
19 + CONFIG_BLK_DEV_INITRD=y
20 + CONFIG_EXPERT=y
21 + CONFIG_SYSCTL_SYSCALL=y
22 + CONFIG_KALLSYMS_ALL=y
23 + CONFIG_PROFILING=y
24 + CONFIG_OPROFILE=y
25 + CONFIG_MODULES=y
26 + CONFIG_MODULE_UNLOAD=y
27 + # CONFIG_IOSCHED_DEADLINE is not set
28 + # CONFIG_IOSCHED_CFQ is not set
29 + CONFIG_XTENSA_VARIANT_CUSTOM=y
30 + CONFIG_XTENSA_VARIANT_CUSTOM_NAME="test_mmuhifi_c3"
31 + CONFIG_XTENSA_UNALIGNED_USER=y
32 + CONFIG_PREEMPT=y
33 + CONFIG_HAVE_SMP=y
34 + CONFIG_SMP=y
35 + CONFIG_HOTPLUG_CPU=y
36 + # CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX is not set
37 + # CONFIG_PCI is not set
38 + CONFIG_XTENSA_PLATFORM_XTFPGA=y
39 + CONFIG_CMDLINE_BOOL=y
40 + CONFIG_CMDLINE="earlycon=uart8250,mmio32,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug"
41 + CONFIG_USE_OF=y
42 + CONFIG_BUILTIN_DTB="lx200mx"
43 + # CONFIG_COMPACTION is not set
44 + # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
45 + CONFIG_NET=y
46 + CONFIG_PACKET=y
47 + CONFIG_UNIX=y
48 + CONFIG_INET=y
49 + CONFIG_IP_MULTICAST=y
50 + CONFIG_IP_PNP=y
51 + CONFIG_IP_PNP_DHCP=y
52 + CONFIG_IP_PNP_BOOTP=y
53 + CONFIG_IP_PNP_RARP=y
54 + # CONFIG_IPV6 is not set
55 + CONFIG_NETFILTER=y
56 + # CONFIG_WIRELESS is not set
57 + CONFIG_DEVTMPFS=y
58 + CONFIG_DEVTMPFS_MOUNT=y
59 + # CONFIG_STANDALONE is not set
60 + CONFIG_MTD=y
61 + CONFIG_MTD_CFI=y
62 + CONFIG_MTD_JEDECPROBE=y
63 + CONFIG_MTD_CFI_INTELEXT=y
64 + CONFIG_MTD_CFI_AMDSTD=y
65 + CONFIG_MTD_CFI_STAA=y
66 + CONFIG_MTD_PHYSMAP_OF=y
67 + CONFIG_MTD_UBI=y
68 + CONFIG_BLK_DEV_LOOP=y
69 + CONFIG_BLK_DEV_RAM=y
70 + CONFIG_SCSI=y
71 + CONFIG_BLK_DEV_SD=y
72 + CONFIG_NETDEVICES=y
73 + # CONFIG_NET_VENDOR_ARC is not set
74 + # CONFIG_NET_VENDOR_BROADCOM is not set
75 + # CONFIG_NET_VENDOR_INTEL is not set
76 + # CONFIG_NET_VENDOR_MARVELL is not set
77 + # CONFIG_NET_VENDOR_MICREL is not set
78 + # CONFIG_NET_VENDOR_NATSEMI is not set
79 + # CONFIG_NET_VENDOR_SAMSUNG is not set
80 + # CONFIG_NET_VENDOR_SEEQ is not set
81 + # CONFIG_NET_VENDOR_SMSC is not set
82 + # CONFIG_NET_VENDOR_STMICRO is not set
83 + # CONFIG_NET_VENDOR_VIA is not set
84 + # CONFIG_NET_VENDOR_WIZNET is not set
85 + CONFIG_MARVELL_PHY=y
86 + # CONFIG_WLAN is not set
87 + # CONFIG_INPUT_MOUSEDEV is not set
88 + # CONFIG_INPUT_KEYBOARD is not set
89 + # CONFIG_INPUT_MOUSE is not set
90 + # CONFIG_SERIO is not set
91 + CONFIG_SERIAL_8250=y
92 + # CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
93 + CONFIG_SERIAL_8250_CONSOLE=y
94 + CONFIG_SERIAL_OF_PLATFORM=y
95 + CONFIG_HW_RANDOM=y
96 + # CONFIG_HWMON is not set
97 + CONFIG_WATCHDOG=y
98 + CONFIG_WATCHDOG_NOWAYOUT=y
99 + CONFIG_SOFT_WATCHDOG=y
100 + # CONFIG_VGA_CONSOLE is not set
101 + # CONFIG_USB_SUPPORT is not set
102 + # CONFIG_IOMMU_SUPPORT is not set
103 + CONFIG_EXT3_FS=y
104 + CONFIG_EXT4_FS=y
105 + CONFIG_FANOTIFY=y
106 + CONFIG_VFAT_FS=y
107 + CONFIG_PROC_KCORE=y
108 + CONFIG_TMPFS=y
109 + CONFIG_TMPFS_POSIX_ACL=y
110 + CONFIG_UBIFS_FS=y
111 + CONFIG_NFS_FS=y
112 + CONFIG_NFS_V4=y
113 + CONFIG_NFS_SWAP=y
114 + CONFIG_ROOT_NFS=y
115 + CONFIG_SUNRPC_DEBUG=y
116 + CONFIG_NLS_CODEPAGE_437=y
117 + CONFIG_NLS_ISO8859_1=y
118 + CONFIG_PRINTK_TIME=y
119 + CONFIG_DYNAMIC_DEBUG=y
120 + CONFIG_DEBUG_INFO=y
121 + CONFIG_MAGIC_SYSRQ=y
122 + CONFIG_DEBUG_VM=y
123 + CONFIG_LOCKUP_DETECTOR=y
124 + CONFIG_SCHEDSTATS=y
125 + CONFIG_TIMER_STATS=y
126 + CONFIG_DEBUG_RT_MUTEXES=y
127 + CONFIG_DEBUG_SPINLOCK=y
128 + CONFIG_DEBUG_MUTEXES=y
129 + CONFIG_DEBUG_ATOMIC_SLEEP=y
130 + CONFIG_STACKTRACE=y
131 + CONFIG_RCU_TRACE=y
132 + # CONFIG_FTRACE is not set
133 + CONFIG_LD_NO_RELAX=y
134 + # CONFIG_S32C1I_SELFTEST is not set
135 + CONFIG_CRYPTO_ANSI_CPRNG=y
+2
arch/xtensa/include/asm/pgtable.h
··· 277 277 static inline pte_t pte_mkspecial(pte_t pte) 278 278 { return pte; } 279 279 280 + #define pgprot_noncached(prot) (__pgprot(pgprot_val(prot) & ~_PAGE_CA_MASK)) 281 + 280 282 /* 281 283 * Conversion functions: convert a page and protection to a page entry, 282 284 * and a page entry and page directory to the page they refer to.
+10 -2
arch/xtensa/include/uapi/asm/unistd.h
··· 384 384 #define __NR_pivot_root 175 385 385 __SYSCALL(175, sys_pivot_root, 2) 386 386 #define __NR_umount 176 387 - __SYSCALL(176, sys_umount, 2) 387 + __SYSCALL(176, sys_oldumount, 1) 388 + #define __ARCH_WANT_SYS_OLDUMOUNT 388 389 #define __NR_swapoff 177 389 390 __SYSCALL(177, sys_swapoff, 1) 390 391 #define __NR_sync 178 ··· 743 742 #define __NR_renameat2 336 744 743 __SYSCALL(336, sys_renameat2, 5) 745 744 746 - #define __NR_syscall_count 337 745 + #define __NR_seccomp 337 746 + __SYSCALL(337, sys_seccomp, 3) 747 + #define __NR_getrandom 338 748 + __SYSCALL(338, sys_getrandom, 3) 749 + #define __NR_memfd_create 339 750 + __SYSCALL(339, sys_memfd_create, 2) 751 + 752 + #define __NR_syscall_count 340 747 753 748 754 /* 749 755 * sysxtensa syscall handler
+11 -8
drivers/base/Kconfig
··· 171 171 Drivers should "select" this option if they desire to use the 172 172 device coredump mechanism. 173 173 174 - config DISABLE_DEV_COREDUMP 175 - bool "Disable device coredump" if EXPERT 174 + config ALLOW_DEV_COREDUMP 175 + bool "Allow device coredump" if EXPERT 176 + default y 176 177 help 177 - Disable the device coredump mechanism despite drivers wanting to 178 - use it; this allows for more sensitive systems or systems that 179 - don't want to ever access the information to not have the code, 180 - nor keep any data. 178 + This option controls if the device coredump mechanism is available or 179 + not; if disabled, the mechanism will be omitted even if drivers that 180 + can use it are enabled. 181 + Say 'N' for more sensitive systems or systems that don't want 182 + to ever access the information to not have the code, nor keep any 183 + data. 181 184 182 - If unsure, say N. 185 + If unsure, say Y. 183 186 184 187 config DEV_COREDUMP 185 188 bool 186 189 default y if WANT_DEV_COREDUMP 187 - depends on !DISABLE_DEV_COREDUMP 190 + depends on ALLOW_DEV_COREDUMP 188 191 189 192 config DEBUG_DRIVER 190 193 bool "Driver Core verbose debug messages"
+3 -1
drivers/base/core.c
··· 724 724 return &dir->kobj; 725 725 } 726 726 727 + static DEFINE_MUTEX(gdp_mutex); 727 728 728 729 static struct kobject *get_device_parent(struct device *dev, 729 730 struct device *parent) 730 731 { 731 732 if (dev->class) { 732 - static DEFINE_MUTEX(gdp_mutex); 733 733 struct kobject *kobj = NULL; 734 734 struct kobject *parent_kobj; 735 735 struct kobject *k; ··· 793 793 glue_dir->kset != &dev->class->p->glue_dirs) 794 794 return; 795 795 796 + mutex_lock(&gdp_mutex); 796 797 kobject_put(glue_dir); 798 + mutex_unlock(&gdp_mutex); 797 799 } 798 800 799 801 static void cleanup_device_parent(struct device *dev)
+19 -16
drivers/block/rbd.c
···
342 342
343 343 struct list_head rq_queue; /* incoming rq queue */
344 344 spinlock_t lock; /* queue, flags, open_count */
345 - struct workqueue_struct *rq_wq;
346 345 struct work_struct rq_work;
347 346
348 347 struct rbd_image_header header;
···
400 401
401 402 static int rbd_major;
402 403 static DEFINE_IDA(rbd_dev_id_ida);
404 +
405 + static struct workqueue_struct *rbd_wq;
403 406
404 407 /*
405 408 * Default to false for now, as single-major requires >= 0.75 version of
···
3453 3452 }
3454 3453
3455 3454 if (queued)
3456 - queue_work(rbd_dev->rq_wq, &rbd_dev->rq_work);
3455 + queue_work(rbd_wq, &rbd_dev->rq_work);
3457 3456 }
3458 3457
3459 3458 /*
···
3533 3532 page_count = (u32) calc_pages_for(offset, length);
3534 3533 pages = ceph_alloc_page_vector(page_count, GFP_KERNEL);
3535 3534 if (IS_ERR(pages))
3536 - ret = PTR_ERR(pages);
3535 + return PTR_ERR(pages);
3537 3536
3538 3537 ret = -ENOMEM;
3539 3538 obj_request = rbd_obj_request_create(object_name, offset, length,
···
5243 5242 set_capacity(rbd_dev->disk, rbd_dev->mapping.size / SECTOR_SIZE);
5244 5243 set_disk_ro(rbd_dev->disk, rbd_dev->mapping.read_only);
5245 5244
5246 - rbd_dev->rq_wq = alloc_workqueue("%s", WQ_MEM_RECLAIM, 0,
5247 - rbd_dev->disk->disk_name);
5248 - if (!rbd_dev->rq_wq) {
5249 - ret = -ENOMEM;
5250 - goto err_out_mapping;
5251 - }
5252 -
5253 5245 ret = rbd_bus_add_dev(rbd_dev);
5254 5246 if (ret)
5255 - goto err_out_workqueue;
5247 + goto err_out_mapping;
5256 5248
5257 5249 /* Everything's ready. Announce the disk to the world. */
5258 5250
···
5257 5263
5258 5264 return ret;
5259 5265
5260 - err_out_workqueue:
5261 - destroy_workqueue(rbd_dev->rq_wq);
5262 - rbd_dev->rq_wq = NULL;
5263 5266 err_out_mapping:
5264 5267 rbd_dev_mapping_clear(rbd_dev);
5265 5268 err_out_disk:
···
5503 5512 {
5504 5513 struct rbd_device *rbd_dev = dev_to_rbd_dev(dev);
5505 5514
5506 - destroy_workqueue(rbd_dev->rq_wq);
5507 5515 rbd_free_disk(rbd_dev);
5508 5516 clear_bit(RBD_DEV_FLAG_EXISTS, &rbd_dev->flags);
5509 5517 rbd_dev_mapping_clear(rbd_dev);
···
5706 5716 if (rc)
5707 5717 return rc;
5708 5718
5719 + /*
5720 + * The number of active work items is limited by the number of
5721 + * rbd devices, so leave @max_active at default.
5722 + */
5723 + rbd_wq = alloc_workqueue(RBD_DRV_NAME, WQ_MEM_RECLAIM, 0);
5724 + if (!rbd_wq) {
5725 + rc = -ENOMEM;
5726 + goto err_out_slab;
5727 + }
5728 +
5709 5729 if (single_major) {
5710 5730 rbd_major = register_blkdev(0, RBD_DRV_NAME);
5711 5731 if (rbd_major < 0) {
5712 5732 rc = rbd_major;
5713 - goto err_out_slab;
5733 + goto err_out_wq;
5714 5734 }
5715 5735 }
5716 5736
···
5738 5738 err_out_blkdev:
5739 5739 if (single_major)
5740 5740 unregister_blkdev(rbd_major, RBD_DRV_NAME);
5741 + err_out_wq:
5742 + destroy_workqueue(rbd_wq);
5741 5743 err_out_slab:
5742 5744 rbd_slab_exit();
5743 5745 return rc;
···
5751 5749 rbd_sysfs_cleanup();
5752 5750 if (single_major)
5753 5751 unregister_blkdev(rbd_major, RBD_DRV_NAME);
5752 + destroy_workqueue(rbd_wq);
5754 5753 rbd_slab_exit();
5755 5754 }
+2 -1
drivers/block/zram/zram_drv.c
··· 560 560 } 561 561 562 562 if (page_zero_filled(uncmem)) { 563 - kunmap_atomic(user_mem); 563 + if (user_mem) 564 + kunmap_atomic(user_mem); 564 565 /* Free memory associated with this sector now. */ 565 566 bit_spin_lock(ZRAM_ACCESS, &meta->table[index].value); 566 567 zram_free_page(zram, index);
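The zram hunk above guards a cleanup call that must not run on a NULL pointer: on some paths `user_mem` was never mapped with kmap_atomic(), so unmapping it unconditionally was a bug. A minimal userspace sketch of the same guarded-cleanup rule (the names here are illustrative stand-ins, not the zram API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Stand-in for an unmap/teardown routine that must never see NULL. */
static int unmap_calls;

static void fake_unmap(void *mem)
{
	assert(mem != NULL);	/* unmapping NULL would be the original bug */
	unmap_calls++;
	free(mem);
}

/* The shape of the fix: only tear down what was actually set up. */
static void release_if_mapped(void *mem)
{
	if (mem)
		fake_unmap(mem);
}
```

Calling `release_if_mapped(NULL)` is then a harmless no-op, mirroring the patched path where the buffer was never mapped.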
+7 -4
drivers/char/hw_random/pseries-rng.c
··· 25 25 #include <asm/vio.h> 26 26 27 27 28 - static int pseries_rng_data_read(struct hwrng *rng, u32 *data) 28 + static int pseries_rng_read(struct hwrng *rng, void *data, size_t max, bool wait) 29 29 { 30 + u64 buffer[PLPAR_HCALL_BUFSIZE]; 31 + size_t size = max < 8 ? max : 8; 30 32 int rc; 31 33 32 - rc = plpar_hcall(H_RANDOM, (unsigned long *)data); 34 + rc = plpar_hcall(H_RANDOM, (unsigned long *)buffer); 33 35 if (rc != H_SUCCESS) { 34 36 pr_err_ratelimited("H_RANDOM call failed %d\n", rc); 35 37 return -EIO; 36 38 } 39 + memcpy(data, buffer, size); 37 40 38 41 /* The hypervisor interface returns 64 bits */ 39 - return 8; 42 + return size; 40 43 } 41 44 42 45 /** ··· 58 55 59 56 static struct hwrng pseries_rng = { 60 57 .name = KBUILD_MODNAME, 61 - .data_read = pseries_rng_data_read, 58 + .read = pseries_rng_read, 62 59 }; 63 60 64 61 static int __init pseries_rng_probe(struct vio_dev *dev,
+1 -1
drivers/char/raw.c
··· 285 285 286 286 static const struct file_operations raw_fops = { 287 287 .read = new_sync_read, 288 - .read_iter = generic_file_read_iter, 288 + .read_iter = blkdev_read_iter, 289 289 .write = new_sync_write, 290 290 .write_iter = blkdev_write_iter, 291 291 .fsync = blkdev_fsync,
+2 -2
drivers/char/virtio_console.c
··· 1449 1449 spin_lock_init(&port->outvq_lock); 1450 1450 init_waitqueue_head(&port->waitqueue); 1451 1451 1452 - virtio_device_ready(portdev->vdev); 1453 - 1454 1452 /* Fill the in_vq with buffers so the host can send us data. */ 1455 1453 nr_added_bufs = fill_queue(port->in_vq, &port->inbuf_lock); 1456 1454 if (!nr_added_bufs) { ··· 2023 2025 2024 2026 spin_lock_init(&portdev->ports_lock); 2025 2027 INIT_LIST_HEAD(&portdev->ports); 2028 + 2029 + virtio_device_ready(portdev->vdev); 2026 2030 2027 2031 if (multiport) { 2028 2032 unsigned int nr_added_bufs;
+14 -15
drivers/crypto/caam/key_gen.c
··· 48 48 u32 *desc; 49 49 struct split_key_result result; 50 50 dma_addr_t dma_addr_in, dma_addr_out; 51 - int ret = 0; 51 + int ret = -ENOMEM; 52 52 53 53 desc = kmalloc(CAAM_CMD_SZ * 6 + CAAM_PTR_SZ * 2, GFP_KERNEL | GFP_DMA); 54 54 if (!desc) { 55 55 dev_err(jrdev, "unable to allocate key input memory\n"); 56 - return -ENOMEM; 56 + return ret; 57 57 } 58 - 59 - init_job_desc(desc, 0); 60 58 61 59 dma_addr_in = dma_map_single(jrdev, (void *)key_in, keylen, 62 60 DMA_TO_DEVICE); 63 61 if (dma_mapping_error(jrdev, dma_addr_in)) { 64 62 dev_err(jrdev, "unable to map key input memory\n"); 65 - kfree(desc); 66 - return -ENOMEM; 63 + goto out_free; 67 64 } 65 + 66 + dma_addr_out = dma_map_single(jrdev, key_out, split_key_pad_len, 67 + DMA_FROM_DEVICE); 68 + if (dma_mapping_error(jrdev, dma_addr_out)) { 69 + dev_err(jrdev, "unable to map key output memory\n"); 70 + goto out_unmap_in; 71 + } 72 + 73 + init_job_desc(desc, 0); 68 74 append_key(desc, dma_addr_in, keylen, CLASS_2 | KEY_DEST_CLASS_REG); 69 75 70 76 /* Sets MDHA up into an HMAC-INIT */ ··· 87 81 * FIFO_STORE with the explicit split-key content store 88 82 * (0x26 output type) 89 83 */ 90 - dma_addr_out = dma_map_single(jrdev, key_out, split_key_pad_len, 91 - DMA_FROM_DEVICE); 92 - if (dma_mapping_error(jrdev, dma_addr_out)) { 93 - dev_err(jrdev, "unable to map key output memory\n"); 94 - kfree(desc); 95 - return -ENOMEM; 96 - } 97 84 append_fifo_store(desc, dma_addr_out, split_key_len, 98 85 LDST_CLASS_2_CCB | FIFOST_TYPE_SPLIT_KEK); 99 86 ··· 114 115 115 116 dma_unmap_single(jrdev, dma_addr_out, split_key_pad_len, 116 117 DMA_FROM_DEVICE); 118 + out_unmap_in: 117 119 dma_unmap_single(jrdev, dma_addr_in, keylen, DMA_TO_DEVICE); 118 - 120 + out_free: 119 121 kfree(desc); 120 - 121 122 return ret; 122 123 } 123 124 EXPORT_SYMBOL(gen_split_key);
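The key_gen.c rework converts scattered free-and-return error handling into the kernel's usual goto-unwind ladder: resources are acquired in order, and each error label releases exactly the resources acquired before the failure point, in reverse order. A compact userspace sketch of that idiom (resource names are made up for illustration, not taken from the CAAM driver):

```c
#include <stdlib.h>

/*
 * Acquire two resources in order; on failure, jump to the label that
 * unwinds only what was actually acquired -- the key_gen.c pattern.
 */
static int do_work(int fail_second)
{
	char *in, *out_buf;
	int ret = -1;		/* assume failure until the success path */

	in = malloc(16);
	if (!in)
		goto out;

	/* fail_second simulates the second acquisition failing */
	out_buf = fail_second ? NULL : malloc(16);
	if (!out_buf)
		goto out_free_in;

	ret = 0;		/* success: use the resources here */
	free(out_buf);
out_free_in:
	free(in);
out:
	return ret;
}
```

The success path deliberately falls through the later labels, so cleanup code exists in exactly one place.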
+1 -2
drivers/crypto/qat/qat_common/adf_accel_devices.h
··· 198 198 struct dentry *debugfs_dir; 199 199 struct list_head list; 200 200 struct module *owner; 201 - uint8_t accel_id; 202 - uint8_t numa_node; 203 201 struct adf_accel_pci accel_pci_dev; 202 + uint8_t accel_id; 204 203 } __packed; 205 204 #endif
+7 -5
drivers/crypto/qat/qat_common/adf_transport.c
··· 419 419 WRITE_CSR_RING_BASE(csr_addr, bank_num, i, 0); 420 420 ring = &bank->rings[i]; 421 421 if (hw_data->tx_rings_mask & (1 << i)) { 422 - ring->inflights = kzalloc_node(sizeof(atomic_t), 423 - GFP_KERNEL, 424 - accel_dev->numa_node); 422 + ring->inflights = 423 + kzalloc_node(sizeof(atomic_t), 424 + GFP_KERNEL, 425 + dev_to_node(&GET_DEV(accel_dev))); 425 426 if (!ring->inflights) 426 427 goto err; 427 428 } else { ··· 470 469 int i, ret; 471 470 472 471 etr_data = kzalloc_node(sizeof(*etr_data), GFP_KERNEL, 473 - accel_dev->numa_node); 472 + dev_to_node(&GET_DEV(accel_dev))); 474 473 if (!etr_data) 475 474 return -ENOMEM; 476 475 477 476 num_banks = GET_MAX_BANKS(accel_dev); 478 477 size = num_banks * sizeof(struct adf_etr_bank_data); 479 - etr_data->banks = kzalloc_node(size, GFP_KERNEL, accel_dev->numa_node); 478 + etr_data->banks = kzalloc_node(size, GFP_KERNEL, 479 + dev_to_node(&GET_DEV(accel_dev))); 480 480 if (!etr_data->banks) { 481 481 ret = -ENOMEM; 482 482 goto err_bank;
+5 -2
drivers/crypto/qat/qat_common/qat_algs.c
··· 596 596 if (unlikely(!n)) 597 597 return -EINVAL; 598 598 599 - bufl = kmalloc_node(sz, GFP_ATOMIC, inst->accel_dev->numa_node); 599 + bufl = kmalloc_node(sz, GFP_ATOMIC, 600 + dev_to_node(&GET_DEV(inst->accel_dev))); 600 601 if (unlikely(!bufl)) 601 602 return -ENOMEM; 602 603 ··· 606 605 goto err; 607 606 608 607 for_each_sg(assoc, sg, assoc_n, i) { 608 + if (!sg->length) 609 + continue; 609 610 bufl->bufers[bufs].addr = dma_map_single(dev, 610 611 sg_virt(sg), 611 612 sg->length, ··· 643 640 struct qat_alg_buf *bufers; 644 641 645 642 buflout = kmalloc_node(sz, GFP_ATOMIC, 646 - inst->accel_dev->numa_node); 643 + dev_to_node(&GET_DEV(inst->accel_dev))); 647 644 if (unlikely(!buflout)) 648 645 goto err; 649 646 bloutp = dma_map_single(dev, buflout, sz, DMA_TO_DEVICE);
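One qat_algs.c hunk above skips zero-length scatterlist entries before DMA-mapping them, since mapping an empty buffer is wasted work at best and an error at worst. The same filter, sketched over a plain array with hypothetical names (this is not the scatterlist API):

```c
#include <stddef.h>

struct buf {
	const void *addr;
	size_t len;
};

/*
 * Count the entries that would actually be mapped, skipping empty
 * ones the way the patched for_each_sg() loop does.
 */
static size_t count_mappable(const struct buf *bufs, size_t n)
{
	size_t mapped = 0;

	for (size_t i = 0; i < n; i++) {
		if (!bufs[i].len)
			continue;	/* nothing to map for an empty entry */
		mapped++;
	}
	return mapped;
}
```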
+5 -3
drivers/crypto/qat/qat_common/qat_crypto.c
··· 109 109 110 110 list_for_each(itr, adf_devmgr_get_head()) { 111 111 accel_dev = list_entry(itr, struct adf_accel_dev, list); 112 - if (accel_dev->numa_node == node && adf_dev_started(accel_dev)) 112 + if ((node == dev_to_node(&GET_DEV(accel_dev)) || 113 + dev_to_node(&GET_DEV(accel_dev)) < 0) 114 + && adf_dev_started(accel_dev)) 113 115 break; 114 116 accel_dev = NULL; 115 117 } 116 118 if (!accel_dev) { 117 - pr_err("QAT: Could not find device on give node\n"); 119 + pr_err("QAT: Could not find device on node %d\n", node); 118 120 accel_dev = adf_devmgr_get_first(); 119 121 } 120 122 if (!accel_dev || !adf_dev_started(accel_dev)) ··· 166 164 167 165 for (i = 0; i < num_inst; i++) { 168 166 inst = kzalloc_node(sizeof(*inst), GFP_KERNEL, 169 - accel_dev->numa_node); 167 + dev_to_node(&GET_DEV(accel_dev))); 170 168 if (!inst) 171 169 goto err; 172 170
+1 -1
drivers/crypto/qat/qat_dh895xcc/adf_admin.c
··· 108 108 uint64_t reg_val; 109 109 110 110 admin = kzalloc_node(sizeof(*accel_dev->admin), GFP_KERNEL, 111 - accel_dev->numa_node); 111 + dev_to_node(&GET_DEV(accel_dev))); 112 112 if (!admin) 113 113 return -ENOMEM; 114 114 admin->virt_addr = dma_zalloc_coherent(&GET_DEV(accel_dev), PAGE_SIZE,
+12 -20
drivers/crypto/qat/qat_dh895xcc/adf_drv.c
··· 119 119 kfree(accel_dev); 120 120 } 121 121 122 - static uint8_t adf_get_dev_node_id(struct pci_dev *pdev) 123 - { 124 - unsigned int bus_per_cpu = 0; 125 - struct cpuinfo_x86 *c = &cpu_data(num_online_cpus() - 1); 126 - 127 - if (!c->phys_proc_id) 128 - return 0; 129 - 130 - bus_per_cpu = 256 / (c->phys_proc_id + 1); 131 - 132 - if (bus_per_cpu != 0) 133 - return pdev->bus->number / bus_per_cpu; 134 - return 0; 135 - } 136 - 137 122 static int qat_dev_start(struct adf_accel_dev *accel_dev) 138 123 { 139 124 int cpus = num_online_cpus(); ··· 220 235 void __iomem *pmisc_bar_addr = NULL; 221 236 char name[ADF_DEVICE_NAME_LENGTH]; 222 237 unsigned int i, bar_nr; 223 - uint8_t node; 224 238 int ret; 225 239 226 240 switch (ent->device) { ··· 230 246 return -ENODEV; 231 247 } 232 248 233 - node = adf_get_dev_node_id(pdev); 234 - accel_dev = kzalloc_node(sizeof(*accel_dev), GFP_KERNEL, node); 249 + if (num_possible_nodes() > 1 && dev_to_node(&pdev->dev) < 0) { 250 + /* If the accelerator is connected to a node with no memory 251 + * there is no point in using the accelerator since the remote 252 + * memory transaction will be very slow. */ 253 + dev_err(&pdev->dev, "Invalid NUMA configuration.\n"); 254 + return -EINVAL; 255 + } 256 + 257 + accel_dev = kzalloc_node(sizeof(*accel_dev), GFP_KERNEL, 258 + dev_to_node(&pdev->dev)); 235 259 if (!accel_dev) 236 260 return -ENOMEM; 237 261 238 - accel_dev->numa_node = node; 239 262 INIT_LIST_HEAD(&accel_dev->crypto_list); 240 263 241 264 /* Add accel device to accel table. ··· 255 264 256 265 accel_dev->owner = THIS_MODULE; 257 266 /* Allocate and configure device configuration structure */ 258 - hw_data = kzalloc_node(sizeof(*hw_data), GFP_KERNEL, node); 267 + hw_data = kzalloc_node(sizeof(*hw_data), GFP_KERNEL, 268 + dev_to_node(&pdev->dev)); 259 269 if (!hw_data) { 260 270 ret = -ENOMEM; 261 271 goto out_err;
+1 -1
drivers/crypto/qat/qat_dh895xcc/adf_isr.c
··· 168 168 uint32_t msix_num_entries = hw_data->num_banks + 1; 169 169 170 170 entries = kzalloc_node(msix_num_entries * sizeof(*entries), 171 - GFP_KERNEL, accel_dev->numa_node); 171 + GFP_KERNEL, dev_to_node(&GET_DEV(accel_dev))); 172 172 if (!entries) 173 173 return -ENOMEM; 174 174
+1 -39
drivers/dma/edma.c
··· 1107 1107 } 1108 1108 EXPORT_SYMBOL(edma_filter_fn); 1109 1109 1110 - static struct platform_device *pdev0, *pdev1; 1111 - 1112 - static const struct platform_device_info edma_dev_info0 = { 1113 - .name = "edma-dma-engine", 1114 - .id = 0, 1115 - .dma_mask = DMA_BIT_MASK(32), 1116 - }; 1117 - 1118 - static const struct platform_device_info edma_dev_info1 = { 1119 - .name = "edma-dma-engine", 1120 - .id = 1, 1121 - .dma_mask = DMA_BIT_MASK(32), 1122 - }; 1123 - 1124 1110 static int edma_init(void) 1125 1111 { 1126 - int ret = platform_driver_register(&edma_driver); 1127 - 1128 - if (ret == 0) { 1129 - pdev0 = platform_device_register_full(&edma_dev_info0); 1130 - if (IS_ERR(pdev0)) { 1131 - platform_driver_unregister(&edma_driver); 1132 - ret = PTR_ERR(pdev0); 1133 - goto out; 1134 - } 1135 - } 1136 - 1137 - if (!of_have_populated_dt() && EDMA_CTLRS == 2) { 1138 - pdev1 = platform_device_register_full(&edma_dev_info1); 1139 - if (IS_ERR(pdev1)) { 1140 - platform_driver_unregister(&edma_driver); 1141 - platform_device_unregister(pdev0); 1142 - ret = PTR_ERR(pdev1); 1143 - } 1144 - } 1145 - 1146 - out: 1147 - return ret; 1112 + return platform_driver_register(&edma_driver); 1148 1113 } 1149 1114 subsys_initcall(edma_init); 1150 1115 1151 1116 static void __exit edma_exit(void) 1152 1117 { 1153 - platform_device_unregister(pdev0); 1154 - if (pdev1) 1155 - platform_device_unregister(pdev1); 1156 1118 platform_driver_unregister(&edma_driver); 1157 1119 } 1158 1120 module_exit(edma_exit);
+10 -11
drivers/gpu/drm/armada/armada_crtc.c
··· 260 260 * Tell the DRM core that vblank IRQs aren't going to happen for 261 261 * a while. This cleans up any pending vblank events for us. 262 262 */ 263 - drm_vblank_off(dev, dcrtc->num); 263 + drm_crtc_vblank_off(&dcrtc->crtc); 264 264 265 265 /* Handle any pending flip event. */ 266 266 spin_lock_irq(&dev->event_lock); ··· 289 289 armada_drm_crtc_update(dcrtc); 290 290 if (dpms_blanked(dpms)) 291 291 armada_drm_vblank_off(dcrtc); 292 + else 293 + drm_crtc_vblank_on(&dcrtc->crtc); 292 294 } 293 295 } 294 296 ··· 528 526 /* Wait for pending flips to complete */ 529 527 wait_event(dcrtc->frame_wait, !dcrtc->frame_work); 530 528 531 - drm_vblank_pre_modeset(crtc->dev, dcrtc->num); 529 + drm_crtc_vblank_off(crtc); 532 530 533 531 crtc->mode = *adj; 534 532 ··· 619 617 620 618 armada_drm_crtc_update(dcrtc); 621 619 622 - drm_vblank_post_modeset(crtc->dev, dcrtc->num); 620 + drm_crtc_vblank_on(crtc); 623 621 armada_drm_crtc_finish_fb(dcrtc, old_fb, dpms_blanked(dcrtc->dpms)); 624 622 625 623 return 0; ··· 947 945 armada_reg_queue_end(work->regs, i); 948 946 949 947 /* 950 - * Hold the old framebuffer for the work - DRM appears to drop our 951 - * reference to the old framebuffer in drm_mode_page_flip_ioctl(). 948 + * Ensure that we hold a reference on the new framebuffer. 949 + * This has to match the behaviour in mode_set. 952 950 */ 953 - drm_framebuffer_reference(work->old_fb); 951 + drm_framebuffer_reference(fb); 954 952 955 953 ret = armada_drm_crtc_queue_frame_work(dcrtc, work); 956 954 if (ret) { 957 - /* 958 - * Undo our reference above; DRM does not drop the reference 959 - * to this object on error, so that's okay. 960 - */ 961 - drm_framebuffer_unreference(work->old_fb); 955 + /* Undo our reference above */ 956 + drm_framebuffer_unreference(fb); 962 957 kfree(work); 963 958 return ret; 964 959 }
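The armada page-flip hunk rebalances reference counting: the flip work now takes its own reference on the new framebuffer before queuing, and drops that same reference if queuing fails, so every get is paired with a put on the same object. A minimal reference-counting sketch of that rule (plain counters, not the DRM framebuffer API):

```c
struct obj {
	int refs;
};

static void get_ref(struct obj *o) { o->refs++; }
static void put_ref(struct obj *o) { o->refs--; }

/*
 * Queue-or-fail in the style of the patched flip path: the reference
 * taken for the pending work is dropped on the same object if queuing
 * fails, so the caller's counts are unchanged on error.
 */
static int queue_flip(struct obj *new_fb, int queue_ok)
{
	get_ref(new_fb);		/* held by the queued work */
	if (!queue_ok) {
		put_ref(new_fb);	/* undo our reference above */
		return -1;
	}
	return 0;			/* work now owns one reference */
}
```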
+2 -1
drivers/gpu/drm/armada/armada_drv.c
··· 190 190 if (ret) 191 191 goto err_comp; 192 192 193 + dev->irq_enabled = true; 193 194 dev->vblank_disable_allowed = 1; 194 195 195 196 ret = armada_fbdev_init(dev); ··· 332 331 .desc = "Armada SoC DRM", 333 332 .date = "20120730", 334 333 .driver_features = DRIVER_GEM | DRIVER_MODESET | 335 - DRIVER_PRIME, 334 + DRIVER_HAVE_IRQ | DRIVER_PRIME, 336 335 .ioctls = armada_ioctls, 337 336 .fops = &armada_drm_fops, 338 337 };
-5
drivers/gpu/drm/exynos/exynos_dp_core.c
··· 1355 1355 void *data) 1356 1356 { 1357 1357 struct exynos_drm_display *display = dev_get_drvdata(dev); 1358 - struct exynos_dp_device *dp = display->ctx; 1359 - struct drm_encoder *encoder = dp->encoder; 1360 1358 1361 1359 exynos_dp_dpms(display, DRM_MODE_DPMS_OFF); 1362 - 1363 - exynos_dp_connector_destroy(&dp->connector); 1364 - encoder->funcs->destroy(encoder); 1365 1360 } 1366 1361 1367 1362 static const struct component_ops exynos_dp_ops = {
+4 -1
drivers/gpu/drm/exynos/exynos_drm_crtc.c
··· 71 71 !atomic_read(&exynos_crtc->pending_flip), 72 72 HZ/20)) 73 73 atomic_set(&exynos_crtc->pending_flip, 0); 74 - drm_vblank_off(crtc->dev, exynos_crtc->pipe); 74 + drm_crtc_vblank_off(crtc); 75 75 } 76 76 77 77 if (manager->ops->dpms) 78 78 manager->ops->dpms(manager, mode); 79 79 80 80 exynos_crtc->dpms = mode; 81 + 82 + if (mode == DRM_MODE_DPMS_ON) 83 + drm_crtc_vblank_on(crtc); 81 84 } 82 85 83 86 static void exynos_drm_crtc_prepare(struct drm_crtc *crtc)
-4
drivers/gpu/drm/exynos/exynos_drm_dpi.c
··· 338 338 339 339 int exynos_dpi_remove(struct device *dev) 340 340 { 341 - struct drm_encoder *encoder = exynos_dpi_display.encoder; 342 341 struct exynos_dpi *ctx = exynos_dpi_display.ctx; 343 342 344 343 exynos_dpi_dpms(&exynos_dpi_display, DRM_MODE_DPMS_OFF); 345 - 346 - exynos_dpi_connector_destroy(&ctx->connector); 347 - encoder->funcs->destroy(encoder); 348 344 349 345 if (ctx->panel) 350 346 drm_panel_detach(ctx->panel);
+25 -18
drivers/gpu/drm/exynos/exynos_drm_drv.c
··· 87 87 88 88 plane = exynos_plane_init(dev, possible_crtcs, 89 89 DRM_PLANE_TYPE_OVERLAY); 90 - if (IS_ERR(plane)) 91 - goto err_mode_config_cleanup; 92 - } 90 + if (!IS_ERR(plane)) 91 + continue; 93 92 94 - /* init kms poll for handling hpd */ 95 - drm_kms_helper_poll_init(dev); 96 - 97 - ret = drm_vblank_init(dev, MAX_CRTC); 98 - if (ret) 93 + ret = PTR_ERR(plane); 99 94 goto err_mode_config_cleanup; 95 + } 100 96 101 97 /* setup possible_clones. */ 102 98 exynos_drm_encoder_setup(dev); ··· 102 106 /* Try to bind all sub drivers. */ 103 107 ret = component_bind_all(dev->dev, dev); 104 108 if (ret) 105 - goto err_cleanup_vblank; 109 + goto err_mode_config_cleanup; 110 + 111 + ret = drm_vblank_init(dev, dev->mode_config.num_crtc); 112 + if (ret) 113 + goto err_unbind_all; 106 114 107 115 /* Probe non kms sub drivers and virtual display driver. */ 108 116 ret = exynos_drm_device_subdrv_probe(dev); 109 117 if (ret) 110 - goto err_unbind_all; 111 - 112 - /* force connectors detection */ 113 - drm_helper_hpd_irq_event(dev); 118 + goto err_cleanup_vblank; 114 119 115 120 /* 116 121 * enable drm irq mode. 
··· 130 133 */ 131 134 dev->vblank_disable_allowed = true; 132 135 136 + /* init kms poll for handling hpd */ 137 + drm_kms_helper_poll_init(dev); 138 + 139 + /* force connectors detection */ 140 + drm_helper_hpd_irq_event(dev); 141 + 133 142 return 0; 134 143 135 - err_unbind_all: 136 - component_unbind_all(dev->dev, dev); 137 144 err_cleanup_vblank: 138 145 drm_vblank_cleanup(dev); 146 + err_unbind_all: 147 + component_unbind_all(dev->dev, dev); 139 148 err_mode_config_cleanup: 140 149 drm_mode_config_cleanup(dev); 141 150 drm_release_iommu_mapping(dev); ··· 158 155 exynos_drm_fbdev_fini(dev); 159 156 drm_kms_helper_poll_fini(dev); 160 157 161 - component_unbind_all(dev->dev, dev); 162 158 drm_vblank_cleanup(dev); 159 + component_unbind_all(dev->dev, dev); 163 160 drm_mode_config_cleanup(dev); 164 161 drm_release_iommu_mapping(dev); 165 162 ··· 194 191 195 192 drm_modeset_lock_all(dev); 196 193 list_for_each_entry(connector, &dev->mode_config.connector_list, head) { 197 - if (connector->funcs->dpms) 198 - connector->funcs->dpms(connector, connector->dpms); 194 + if (connector->funcs->dpms) { 195 + int dpms = connector->dpms; 196 + 197 + connector->dpms = DRM_MODE_DPMS_OFF; 198 + connector->funcs->dpms(connector, dpms); 199 + } 199 200 } 200 201 drm_modeset_unlock_all(dev); 201 202
-4
drivers/gpu/drm/exynos/exynos_drm_dsi.c
··· 1660 1660 void *data) 1661 1661 { 1662 1662 struct exynos_dsi *dsi = exynos_dsi_display.ctx; 1663 - struct drm_encoder *encoder = dsi->encoder; 1664 1663 1665 1664 exynos_dsi_dpms(&exynos_dsi_display, DRM_MODE_DPMS_OFF); 1666 - 1667 - exynos_dsi_connector_destroy(&dsi->connector); 1668 - encoder->funcs->destroy(encoder); 1669 1665 1670 1666 mipi_dsi_host_unregister(&dsi->dsi_host); 1671 1667 }
-4
drivers/gpu/drm/exynos/exynos_drm_vidi.c
··· 630 630 { 631 631 struct exynos_drm_manager *mgr = platform_get_drvdata(pdev); 632 632 struct vidi_context *ctx = mgr->ctx; 633 - struct drm_encoder *encoder = ctx->encoder; 634 633 635 634 if (ctx->raw_edid != (struct edid *)fake_edid_info) { 636 635 kfree(ctx->raw_edid); ··· 637 638 638 639 return -EINVAL; 639 640 } 640 - 641 - encoder->funcs->destroy(encoder); 642 - drm_connector_cleanup(&ctx->connector); 643 641 644 642 return 0; 645 643 }
-6
drivers/gpu/drm/exynos/exynos_hdmi.c
··· 2312 2312 2313 2313 static void hdmi_unbind(struct device *dev, struct device *master, void *data) 2314 2314 { 2315 - struct exynos_drm_display *display = get_hdmi_display(dev); 2316 - struct drm_encoder *encoder = display->encoder; 2317 - struct hdmi_context *hdata = display->ctx; 2318 - 2319 - hdmi_connector_destroy(&hdata->connector); 2320 - encoder->funcs->destroy(encoder); 2321 2315 } 2322 2316 2323 2317 static const struct component_ops hdmi_component_ops = {
+10
drivers/gpu/drm/i915/i915_drv.c
··· 986 986 return i915_drm_freeze(drm_dev); 987 987 } 988 988 989 + static int i915_pm_freeze_late(struct device *dev) 990 + { 991 + struct pci_dev *pdev = to_pci_dev(dev); 992 + struct drm_device *drm_dev = pci_get_drvdata(pdev); 993 + struct drm_i915_private *dev_priv = drm_dev->dev_private; 994 + 995 + return intel_suspend_complete(dev_priv); 996 + } 997 + 989 998 static int i915_pm_thaw_early(struct device *dev) 990 999 { 991 1000 struct pci_dev *pdev = to_pci_dev(dev); ··· 1579 1570 .resume_early = i915_pm_resume_early, 1580 1571 .resume = i915_pm_resume, 1581 1572 .freeze = i915_pm_freeze, 1573 + .freeze_late = i915_pm_freeze_late, 1582 1574 .thaw_early = i915_pm_thaw_early, 1583 1575 .thaw = i915_pm_thaw, 1584 1576 .poweroff = i915_pm_poweroff,
+16
drivers/gpu/drm/i915/i915_gem_gtt.c
··· 1902 1902 GEN8_PPAT(6, GEN8_PPAT_WB | GEN8_PPAT_LLCELLC | GEN8_PPAT_AGE(2)) | 1903 1903 GEN8_PPAT(7, GEN8_PPAT_WB | GEN8_PPAT_LLCELLC | GEN8_PPAT_AGE(3)); 1904 1904 1905 + if (!USES_PPGTT(dev_priv->dev)) 1906 + /* Spec: "For GGTT, there is NO pat_sel[2:0] from the entry, 1907 + * so RTL will always use the value corresponding to 1908 + * pat_sel = 000". 1909 + * So let's disable cache for GGTT to avoid screen corruptions. 1910 + * MOCS still can be used though. 1911 + * - System agent ggtt writes (i.e. cpu gtt mmaps) already work 1912 + * before this patch, i.e. the same uncached + snooping access 1913 + * like on gen6/7 seems to be in effect. 1914 + * - So this just fixes blitter/render access. Again it looks 1915 + * like it's not just uncached access, but uncached + snooping. 1916 + * So we can still hold onto all our assumptions wrt cpu 1917 + * clflushing on LLC machines. 1918 + */ 1919 + pat = GEN8_PPAT(0, GEN8_PPAT_UC); 1920 + 1905 1921 /* XXX: spec defines this as 2 distinct registers. It's unclear if a 64b 1906 1922 * write would work. */ 1907 1923 I915_WRITE(GEN8_PRIVATE_PAT, pat);
+4 -1
drivers/gpu/drm/i915/intel_display.c
··· 4585 4585 * BSpec erroneously claims we should aim for 4MHz, but 4586 4586 * in fact 1MHz is the correct frequency. 4587 4587 */ 4588 - I915_WRITE(GMBUSFREQ_VLV, dev_priv->vlv_cdclk_freq); 4588 + I915_WRITE(GMBUSFREQ_VLV, DIV_ROUND_UP(dev_priv->vlv_cdclk_freq, 1000)); 4589 4589 } 4590 4590 4591 4591 /* Adjust CDclk dividers to allow high res or save power if possible */ ··· 12884 12884 12885 12885 /* Acer C720 Chromebook (Core i3 4005U) */ 12886 12886 { 0x0a16, 0x1025, 0x0a11, quirk_backlight_present }, 12887 + 12888 + /* Apple Macbook 2,1 (Core 2 T7400) */ 12889 + { 0x27a2, 0x8086, 0x7270, quirk_backlight_present }, 12887 12890 12888 12891 /* Toshiba CB35 Chromebook (Celeron 2955U) */ 12889 12892 { 0x0a06, 0x1179, 0x0a88, quirk_backlight_present },
+22 -2
drivers/gpu/drm/i915/intel_dp.c
··· 2806 2806 ssize_t ret; 2807 2807 int i; 2808 2808 2809 + /* 2810 + * Sometime we just get the same incorrect byte repeated 2811 + * over the entire buffer. Doing just one throw away read 2812 + * initially seems to "solve" it. 2813 + */ 2814 + drm_dp_dpcd_read(aux, DP_DPCD_REV, buffer, 1); 2815 + 2809 2816 for (i = 0; i < 3; i++) { 2810 2817 ret = drm_dp_dpcd_read(aux, offset, buffer, size); 2811 2818 if (ret == size) ··· 3731 3724 } 3732 3725 } 3733 3726 3734 - /* Training Pattern 3 support */ 3727 + /* Training Pattern 3 support, both source and sink */ 3735 3728 if (intel_dp->dpcd[DP_DPCD_REV] >= 0x12 && 3736 - intel_dp->dpcd[DP_MAX_LANE_COUNT] & DP_TPS3_SUPPORTED) { 3729 + intel_dp->dpcd[DP_MAX_LANE_COUNT] & DP_TPS3_SUPPORTED && 3730 + (IS_HASWELL(dev_priv) || INTEL_INFO(dev_priv)->gen >= 8)) { 3737 3731 intel_dp->use_tps3 = true; 3738 3732 DRM_DEBUG_KMS("Displayport TPS3 supported\n"); 3739 3733 } else ··· 4498 4490 4499 4491 if (intel_dig_port->base.type != INTEL_OUTPUT_EDP) 4500 4492 intel_dig_port->base.type = INTEL_OUTPUT_DISPLAYPORT; 4493 + 4494 + if (long_hpd && intel_dig_port->base.type == INTEL_OUTPUT_EDP) { 4495 + /* 4496 + * vdd off can generate a long pulse on eDP which 4497 + * would require vdd on to handle it, and thus we 4498 + * would end up in an endless cycle of 4499 + * "vdd off -> long hpd -> vdd on -> detect -> vdd off -> ..." 4500 + */ 4501 + DRM_DEBUG_KMS("ignoring long hpd on eDP port %c\n", 4502 + port_name(intel_dig_port->port)); 4503 + return false; 4504 + } 4501 4505 4502 4506 DRM_DEBUG_KMS("got hpd irq on port %c - %s\n", 4503 4507 port_name(intel_dig_port->port),
+15 -2
drivers/gpu/drm/i915/intel_panel.c
··· 1098 1098 struct drm_device *dev = connector->base.dev; 1099 1099 struct drm_i915_private *dev_priv = dev->dev_private; 1100 1100 struct intel_panel *panel = &connector->panel; 1101 + int min; 1101 1102 1102 1103 WARN_ON(panel->backlight.max == 0); 1103 1104 1105 + /* 1106 + * XXX: If the vbt value is 255, it makes min equal to max, which leads 1107 + * to problems. There are such machines out there. Either our 1108 + * interpretation is wrong or the vbt has bogus data. Or both. Safeguard 1109 + * against this by letting the minimum be at most (arbitrarily chosen) 1110 + * 25% of the max. 1111 + */ 1112 + min = clamp_t(int, dev_priv->vbt.backlight.min_brightness, 0, 64); 1113 + if (min != dev_priv->vbt.backlight.min_brightness) { 1114 + DRM_DEBUG_KMS("clamping VBT min backlight %d/255 to %d/255\n", 1115 + dev_priv->vbt.backlight.min_brightness, min); 1116 + } 1117 + 1104 1118 /* vbt value is a coefficient in range [0..255] */ 1105 - return scale(dev_priv->vbt.backlight.min_brightness, 0, 255, 1106 - 0, panel->backlight.max); 1119 + return scale(min, 0, 255, 0, panel->backlight.max); 1107 1120 } 1108 1121 1109 1122 static int bdw_setup_backlight(struct intel_connector *connector)
+5 -2
drivers/gpu/drm/radeon/cik.c
··· 4313 4313 /* init the CE partitions. CE only used for gfx on CIK */ 4314 4314 radeon_ring_write(ring, PACKET3(PACKET3_SET_BASE, 2)); 4315 4315 radeon_ring_write(ring, PACKET3_BASE_INDEX(CE_PARTITION_BASE)); 4316 - radeon_ring_write(ring, 0xc000); 4317 - radeon_ring_write(ring, 0xc000); 4316 + radeon_ring_write(ring, 0x8000); 4317 + radeon_ring_write(ring, 0x8000); 4318 4318 4319 4319 /* setup clear context state */ 4320 4320 radeon_ring_write(ring, PACKET3(PACKET3_PREAMBLE_CNTL, 0)); ··· 9446 9446 struct drm_display_mode *mode = NULL; 9447 9447 u32 num_heads = 0, lb_size; 9448 9448 int i; 9449 + 9450 + if (!rdev->mode_info.mode_config_initialized) 9451 + return; 9449 9452 9450 9453 radeon_update_display_priority(rdev); 9451 9454
+12 -9
drivers/gpu/drm/radeon/cik_sdma.c
··· 667 667 { 668 668 struct radeon_ib ib; 669 669 unsigned i; 670 + unsigned index; 670 671 int r; 671 - void __iomem *ptr = (void *)rdev->vram_scratch.ptr; 672 672 u32 tmp = 0; 673 + u64 gpu_addr; 673 674 674 - if (!ptr) { 675 - DRM_ERROR("invalid vram scratch pointer\n"); 676 - return -EINVAL; 677 - } 675 + if (ring->idx == R600_RING_TYPE_DMA_INDEX) 676 + index = R600_WB_DMA_RING_TEST_OFFSET; 677 + else 678 + index = CAYMAN_WB_DMA1_RING_TEST_OFFSET; 679 + 680 + gpu_addr = rdev->wb.gpu_addr + index; 678 681 679 682 tmp = 0xCAFEDEAD; 680 - writel(tmp, ptr); 683 + rdev->wb.wb[index/4] = cpu_to_le32(tmp); 681 684 682 685 r = radeon_ib_get(rdev, ring->idx, &ib, NULL, 256); 683 686 if (r) { ··· 689 686 } 690 687 691 688 ib.ptr[0] = SDMA_PACKET(SDMA_OPCODE_WRITE, SDMA_WRITE_SUB_OPCODE_LINEAR, 0); 692 - ib.ptr[1] = rdev->vram_scratch.gpu_addr & 0xfffffffc; 693 - ib.ptr[2] = upper_32_bits(rdev->vram_scratch.gpu_addr); 689 + ib.ptr[1] = lower_32_bits(gpu_addr); 690 + ib.ptr[2] = upper_32_bits(gpu_addr); 694 691 ib.ptr[3] = 1; 695 692 ib.ptr[4] = 0xDEADBEEF; 696 693 ib.length_dw = 5; ··· 707 704 return r; 708 705 } 709 706 for (i = 0; i < rdev->usec_timeout; i++) { 710 - tmp = readl(ptr); 707 + tmp = le32_to_cpu(rdev->wb.wb[index/4]); 711 708 if (tmp == 0xDEADBEEF) 712 709 break; 713 710 DRM_UDELAY(1);
+5 -3
drivers/gpu/drm/radeon/evergreen.c
··· 2345 2345 u32 num_heads = 0, lb_size; 2346 2346 int i; 2347 2347 2348 + if (!rdev->mode_info.mode_config_initialized) 2349 + return; 2350 + 2348 2351 radeon_update_display_priority(rdev); 2349 2352 2350 2353 for (i = 0; i < rdev->num_crtc; i++) { ··· 2555 2552 WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 1); 2556 2553 tmp |= EVERGREEN_CRTC_BLANK_DATA_EN; 2557 2554 WREG32(EVERGREEN_CRTC_BLANK_CONTROL + crtc_offsets[i], tmp); 2555 + WREG32(EVERGREEN_CRTC_UPDATE_LOCK + crtc_offsets[i], 0); 2558 2556 } 2559 2557 } else { 2560 2558 tmp = RREG32(EVERGREEN_CRTC_CONTROL + crtc_offsets[i]); ··· 3009 3005 u32 vgt_cache_invalidation; 3010 3006 u32 hdp_host_path_cntl, tmp; 3011 3007 u32 disabled_rb_mask; 3012 - int i, j, num_shader_engines, ps_thread_count; 3008 + int i, j, ps_thread_count; 3013 3009 3014 3010 switch (rdev->family) { 3015 3011 case CHIP_CYPRESS: ··· 3306 3302 rdev->config.evergreen.tile_config |= 0 << 8; 3307 3303 rdev->config.evergreen.tile_config |= 3308 3304 ((gb_addr_config & 0x30000000) >> 28) << 12; 3309 - 3310 - num_shader_engines = (gb_addr_config & NUM_SHADER_ENGINES(3) >> 12) + 1; 3311 3305 3312 3306 if ((rdev->family >= CHIP_CEDAR) && (rdev->family <= CHIP_HEMLOCK)) { 3313 3307 u32 efuse_straps_4;
+16 -3
drivers/gpu/drm/radeon/kv_dpm.c
··· 2725 2725 2726 2726 pi->sram_end = SMC_RAM_END; 2727 2727 2728 - pi->enable_nb_dpm = true; 2728 + /* Enabling nb dpm on an asrock system prevents dpm from working */ 2729 + if (rdev->pdev->subsystem_vendor == 0x1849) 2730 + pi->enable_nb_dpm = false; 2731 + else 2732 + pi->enable_nb_dpm = true; 2729 2733 2730 2734 pi->caps_power_containment = true; 2731 2735 pi->caps_cac = true; ··· 2744 2740 pi->caps_sclk_ds = true; 2745 2741 pi->enable_auto_thermal_throttling = true; 2746 2742 pi->disable_nb_ps3_in_battery = false; 2747 - if (radeon_bapm == 0) 2743 + if (radeon_bapm == -1) { 2744 + /* There are stability issues reported on with 2745 + * bapm enabled on an asrock system. 2746 + */ 2747 + if (rdev->pdev->subsystem_vendor == 0x1849) 2748 + pi->bapm_enable = false; 2749 + else 2750 + pi->bapm_enable = true; 2751 + } else if (radeon_bapm == 0) { 2748 2752 pi->bapm_enable = false; 2749 - else 2753 + } else { 2750 2754 pi->bapm_enable = true; 2755 + } 2751 2756 pi->voltage_drop_t = 0; 2752 2757 pi->caps_sclk_throttle_low_notification = false; 2753 2758 pi->caps_fps = false; /* true? */
+3
drivers/gpu/drm/radeon/r100.c
··· 3207 3207 uint32_t pixel_bytes1 = 0; 3208 3208 uint32_t pixel_bytes2 = 0; 3209 3209 3210 + if (!rdev->mode_info.mode_config_initialized) 3211 + return; 3212 + 3210 3213 radeon_update_display_priority(rdev); 3211 3214 3212 3215 if (rdev->mode_info.crtcs[0]->base.enabled) {
+10 -10
drivers/gpu/drm/radeon/r600_dma.c
··· 338 338 { 339 339 struct radeon_ib ib; 340 340 unsigned i; 341 + unsigned index; 341 342 int r; 342 - void __iomem *ptr = (void *)rdev->vram_scratch.ptr; 343 343 u32 tmp = 0; 344 + u64 gpu_addr; 344 345 345 - if (!ptr) { 346 - DRM_ERROR("invalid vram scratch pointer\n"); 347 - return -EINVAL; 348 - } 346 + if (ring->idx == R600_RING_TYPE_DMA_INDEX) 347 + index = R600_WB_DMA_RING_TEST_OFFSET; 348 + else 349 + index = CAYMAN_WB_DMA1_RING_TEST_OFFSET; 349 350 350 - tmp = 0xCAFEDEAD; 351 - writel(tmp, ptr); 351 + gpu_addr = rdev->wb.gpu_addr + index; 352 352 353 353 r = radeon_ib_get(rdev, ring->idx, &ib, NULL, 256); 354 354 if (r) { ··· 357 357 } 358 358 359 359 ib.ptr[0] = DMA_PACKET(DMA_PACKET_WRITE, 0, 0, 1); 360 - ib.ptr[1] = rdev->vram_scratch.gpu_addr & 0xfffffffc; 361 - ib.ptr[2] = upper_32_bits(rdev->vram_scratch.gpu_addr) & 0xff; 360 + ib.ptr[1] = lower_32_bits(gpu_addr); 361 + ib.ptr[2] = upper_32_bits(gpu_addr) & 0xff; 362 362 ib.ptr[3] = 0xDEADBEEF; 363 363 ib.length_dw = 4; 364 364 ··· 374 374 return r; 375 375 } 376 376 for (i = 0; i < rdev->usec_timeout; i++) { 377 - tmp = readl(ptr); 377 + tmp = le32_to_cpu(rdev->wb.wb[index/4]); 378 378 if (tmp == 0xDEADBEEF) 379 379 break; 380 380 DRM_UDELAY(1);
+2 -4
drivers/gpu/drm/radeon/radeon_bios.c
··· 658 658 r = igp_read_bios_from_vram(rdev); 659 659 if (r == false) 660 660 r = radeon_read_bios(rdev); 661 - if (r == false) { 661 + if (r == false) 662 662 r = radeon_read_disabled_bios(rdev); 663 - } 664 - if (r == false) { 663 + if (r == false) 665 664 r = radeon_read_platform_bios(rdev); 666 - } 667 665 if (r == false || rdev->bios == NULL) { 668 666 DRM_ERROR("Unable to locate a BIOS ROM\n"); 669 667 rdev->bios = NULL;
+1 -1
drivers/gpu/drm/radeon/radeon_cs.c
··· 450 450 kfree(parser->track); 451 451 kfree(parser->relocs); 452 452 kfree(parser->relocs_ptr); 453 - kfree(parser->vm_bos); 453 + drm_free_large(parser->vm_bos); 454 454 for (i = 0; i < parser->nchunks; i++) 455 455 drm_free_large(parser->chunks[i].kdata); 456 456 kfree(parser->chunks);
+2 -2
drivers/gpu/drm/radeon/radeon_ring.c
··· 314 314 } 315 315 316 316 /* and then save the content of the ring */ 317 - *data = kmalloc_array(size, sizeof(uint32_t), GFP_KERNEL); 317 + *data = drm_malloc_ab(size, sizeof(uint32_t)); 318 318 if (!*data) { 319 319 mutex_unlock(&rdev->ring_lock); 320 320 return 0; ··· 356 356 } 357 357 358 358 radeon_ring_unlock_commit(rdev, ring, false); 359 - kfree(data); 359 + drm_free_large(data); 360 360 return 0; 361 361 } 362 362
+2 -2
drivers/gpu/drm/radeon/radeon_vm.c
··· 132 132 struct radeon_cs_reloc *list; 133 133 unsigned i, idx; 134 134 135 - list = kmalloc_array(vm->max_pde_used + 2, 136 - sizeof(struct radeon_cs_reloc), GFP_KERNEL); 135 + list = drm_malloc_ab(vm->max_pde_used + 2, 136 + sizeof(struct radeon_cs_reloc)); 137 137 if (!list) 138 138 return NULL; 139 139
+3
drivers/gpu/drm/radeon/rs600.c
··· 879 879 u32 d1mode_priority_a_cnt, d2mode_priority_a_cnt; 880 880 /* FIXME: implement full support */ 881 881 882 + if (!rdev->mode_info.mode_config_initialized) 883 + return; 884 + 882 885 radeon_update_display_priority(rdev); 883 886 884 887 if (rdev->mode_info.crtcs[0]->base.enabled)
+3
drivers/gpu/drm/radeon/rs690.c
··· 579 579 u32 d1mode_priority_a_cnt, d1mode_priority_b_cnt; 580 580 u32 d2mode_priority_a_cnt, d2mode_priority_b_cnt; 581 581 582 + if (!rdev->mode_info.mode_config_initialized) 583 + return; 584 + 582 585 radeon_update_display_priority(rdev); 583 586 584 587 if (rdev->mode_info.crtcs[0]->base.enabled)
+3
drivers/gpu/drm/radeon/rv515.c
··· 1277 1277 struct drm_display_mode *mode0 = NULL; 1278 1278 struct drm_display_mode *mode1 = NULL; 1279 1279 1280 + if (!rdev->mode_info.mode_config_initialized) 1281 + return; 1282 + 1280 1283 radeon_update_display_priority(rdev); 1281 1284 1282 1285 if (rdev->mode_info.crtcs[0]->base.enabled)
+3
drivers/gpu/drm/radeon/si.c
··· 2384 2384 u32 num_heads = 0, lb_size; 2385 2385 int i; 2386 2386 2387 + if (!rdev->mode_info.mode_config_initialized) 2388 + return; 2389 + 2387 2390 radeon_update_display_priority(rdev); 2388 2391 2389 2392 for (i = 0; i < rdev->num_crtc; i++) {
+1 -1
drivers/gpu/drm/radeon/si_dpm.c
··· 6256 6256 if ((rps->class2 & ATOM_PPLIB_CLASSIFICATION2_ULV) && 6257 6257 index == 0) { 6258 6258 /* XXX disable for A0 tahiti */ 6259 - si_pi->ulv.supported = true; 6259 + si_pi->ulv.supported = false; 6260 6260 si_pi->ulv.pl = *pl; 6261 6261 si_pi->ulv.one_pcie_lane_in_ulv = false; 6262 6262 si_pi->ulv.volt_change_delay = SISLANDS_ULVVOLTAGECHANGEDELAY_DFLT;
+2 -1
drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf_res.c
··· 246 246 struct drm_hash_item *hash; 247 247 int ret; 248 248 249 - ret = drm_ht_find_item(&man->resources, user_key, &hash); 249 + ret = drm_ht_find_item(&man->resources, user_key | (res_type << 24), 250 + &hash); 250 251 if (likely(ret != 0)) 251 252 return -EINVAL; 252 253
+5 -1
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 688 688 goto out_err0; 689 689 } 690 690 691 - if (unlikely(dev_priv->prim_bb_mem < dev_priv->vram_size)) 691 + /* 692 + * Limit back buffer size to VRAM size. Remove this once 693 + * screen targets are implemented. 694 + */ 695 + if (dev_priv->prim_bb_mem > dev_priv->vram_size) 692 696 dev_priv->prim_bb_mem = dev_priv->vram_size; 693 697 694 698 mutex_unlock(&dev_priv->hw_mutex);
+17 -7
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
··· 187 187 * can do this since the caller in the drm core doesn't check anything 188 188 * which is protected by any looks. 189 189 */ 190 - drm_modeset_unlock(&crtc->mutex); 190 + drm_modeset_unlock_crtc(crtc); 191 191 drm_modeset_lock_all(dev_priv->dev); 192 192 193 193 /* A lot of the code assumes this */ ··· 252 252 ret = 0; 253 253 out: 254 254 drm_modeset_unlock_all(dev_priv->dev); 255 - drm_modeset_lock(&crtc->mutex, NULL); 255 + drm_modeset_lock_crtc(crtc); 256 256 257 257 return ret; 258 258 } ··· 273 273 * can do this since the caller in the drm core doesn't check anything 274 274 * which is protected by any looks. 275 275 */ 276 - drm_modeset_unlock(&crtc->mutex); 276 + drm_modeset_unlock_crtc(crtc); 277 277 drm_modeset_lock_all(dev_priv->dev); 278 278 279 279 vmw_cursor_update_position(dev_priv, shown, ··· 281 281 du->cursor_y + du->hotspot_y); 282 282 283 283 drm_modeset_unlock_all(dev_priv->dev); 284 - drm_modeset_lock(&crtc->mutex, NULL); 284 + drm_modeset_lock_crtc(crtc); 285 285 286 286 return 0; 287 287 } ··· 1950 1950 DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_PVSYNC) 1951 1951 }; 1952 1952 int i; 1953 + u32 assumed_bpp = 2; 1954 + 1955 + /* 1956 + * If using screen objects, then assume 32-bpp because that's what the 1957 + * SVGA device is assuming 1958 + */ 1959 + if (dev_priv->sou_priv) 1960 + assumed_bpp = 4; 1953 1961 1954 1962 /* Add preferred mode */ 1955 1963 { ··· 1968 1960 mode->vdisplay = du->pref_height; 1969 1961 vmw_guess_mode_timing(mode); 1970 1962 1971 - if (vmw_kms_validate_mode_vram(dev_priv, mode->hdisplay * 2, 1972 - mode->vdisplay)) { 1963 + if (vmw_kms_validate_mode_vram(dev_priv, 1964 + mode->hdisplay * assumed_bpp, 1965 + mode->vdisplay)) { 1973 1966 drm_mode_probed_add(connector, mode); 1974 1967 } else { 1975 1968 drm_mode_destroy(dev, mode); ··· 1992 1983 bmode->vdisplay > max_height) 1993 1984 continue; 1994 1985 1995 - if (!vmw_kms_validate_mode_vram(dev_priv, bmode->hdisplay * 2, 1986 + if (!vmw_kms_validate_mode_vram(dev_priv, 1987 + bmode->hdisplay * assumed_bpp, 1996 1988 bmode->vdisplay)) 1997 1989 continue; 1998 1990
+1
drivers/hid/hid-core.c
··· 1659 1659 hdev->hiddev_disconnect(hdev); 1660 1660 if (hdev->claimed & HID_CLAIMED_HIDRAW) 1661 1661 hidraw_disconnect(hdev); 1662 + hdev->claimed = 0; 1662 1663 } 1663 1664 EXPORT_SYMBOL_GPL(hid_disconnect); 1664 1665
+1
drivers/hid/hid-ids.h
··· 299 299 #define USB_VENDOR_ID_ELAN 0x04f3 300 300 #define USB_DEVICE_ID_ELAN_TOUCHSCREEN 0x0089 301 301 #define USB_DEVICE_ID_ELAN_TOUCHSCREEN_009B 0x009b 302 + #define USB_DEVICE_ID_ELAN_TOUCHSCREEN_0103 0x0103 302 303 #define USB_DEVICE_ID_ELAN_TOUCHSCREEN_016F 0x016f 303 304 304 305 #define USB_VENDOR_ID_ELECOM 0x056e
+1
drivers/hid/usbhid/hid-quirks.c
··· 72 72 { USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET }, 73 73 { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN, HID_QUIRK_ALWAYS_POLL }, 74 74 { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_009B, HID_QUIRK_ALWAYS_POLL }, 75 + { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_0103, HID_QUIRK_ALWAYS_POLL }, 75 76 { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN_016F, HID_QUIRK_ALWAYS_POLL }, 76 77 { USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET }, 77 78 { USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
+1 -1
drivers/hwmon/fam15h_power.c
··· 234 234 { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_NB_F4) }, 235 235 { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M30H_NB_F4) }, 236 236 { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_NB_F4) }, 237 - { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F3) }, 237 + { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F4) }, 238 238 {} 239 239 }; 240 240 MODULE_DEVICE_TABLE(pci, fam15h_power_id_table);
+4 -2
drivers/hwmon/ibmpowernv.c
··· 181 181 182 182 opal = of_find_node_by_path("/ibm,opal/sensors"); 183 183 if (!opal) { 184 - dev_err(&pdev->dev, "Opal node 'sensors' not found\n"); 184 + dev_dbg(&pdev->dev, "Opal node 'sensors' not found\n"); 185 185 return -ENODEV; 186 186 } 187 187 ··· 335 335 336 336 err = platform_driver_probe(&ibmpowernv_driver, ibmpowernv_probe); 337 337 if (err) { 338 - pr_err("Platfrom driver probe failed\n"); 338 + if (err != -ENODEV) 339 + pr_err("Platform driver probe failed (%d)\n", err); 340 + 339 341 goto exit_device_del; 340 342 } 341 343
+10 -3
drivers/hwmon/pwm-fan.c
··· 161 161 static int pwm_fan_resume(struct device *dev) 162 162 { 163 163 struct pwm_fan_ctx *ctx = dev_get_drvdata(dev); 164 + unsigned long duty; 165 + int ret; 164 166 165 - if (ctx->pwm_value) 166 - return pwm_enable(ctx->pwm); 167 - return 0; 167 + if (ctx->pwm_value == 0) 168 + return 0; 169 + 170 + duty = DIV_ROUND_UP(ctx->pwm_value * (ctx->pwm->period - 1), MAX_PWM); 171 + ret = pwm_config(ctx->pwm, duty, ctx->pwm->period); 172 + if (ret) 173 + return ret; 174 + return pwm_enable(ctx->pwm); 168 175 } 169 176 #endif 170 177
-5
drivers/i2c/algos/i2c-algo-bit.c
··· 12 12 but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 GNU General Public License for more details. 15 - 16 - You should have received a copy of the GNU General Public License 17 - along with this program; if not, write to the Free Software 18 - Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 19 - MA 02110-1301 USA. 20 15 * ------------------------------------------------------------------------- */ 21 16 22 17 /* With some changes from Frodo Looijaard <frodol@dds.nl>, Kyösti Mälkki
-5
drivers/i2c/algos/i2c-algo-pca.c
··· 12 12 * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 19 - * MA 02110-1301 USA. 20 15 */ 21 16 22 17 #include <linux/kernel.h>
-5
drivers/i2c/algos/i2c-algo-pcf.c
··· 14 14 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 15 * GNU General Public License for more details. 16 16 * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 20 - * MA 02110-1301 USA. 21 - * 22 17 * With some changes from Kyösti Mälkki <kmalkki@cc.hut.fi> and 23 18 * Frodo Looijaard <frodol@dds.nl>, and also from Martin Bailey 24 19 * <mbailey@littlefeet-inc.com>
+1 -6
drivers/i2c/algos/i2c-algo-pcf.h
··· 12 12 This program is distributed in the hope that it will be useful, 13 13 but WITHOUT ANY WARRANTY; without even the implied warranty of 14 14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - GNU General Public License for more details. 16 - 17 - You should have received a copy of the GNU General Public License 18 - along with this program; if not, write to the Free Software 19 - Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 20 - MA 02110-1301 USA. */ 15 + GNU General Public License for more details. */ 21 16 /* -------------------------------------------------------------------- */ 22 17 23 18 /* With some changes from Frodo Looijaard <frodol@dds.nl> */
-4
drivers/i2c/busses/i2c-ali1535.c
··· 14 14 * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 15 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 16 * GNU General Public License for more details. 17 - * 18 - * You should have received a copy of the GNU General Public License 19 - * along with this program; if not, write to the Free Software 20 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 21 17 */ 22 18 23 19 /*
-4
drivers/i2c/busses/i2c-ali15x3.c
··· 12 12 but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 GNU General Public License for more details. 15 - 16 - You should have received a copy of the GNU General Public License 17 - along with this program; if not, write to the Free Software 18 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 19 15 */ 20 16 21 17 /*
-4
drivers/i2c/busses/i2c-amd756-s4882.c
··· 12 12 * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 19 15 */ 20 16 21 17 /*
-4
drivers/i2c/busses/i2c-amd756.c
··· 15 15 but WITHOUT ANY WARRANTY; without even the implied warranty of 16 16 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 17 GNU General Public License for more details. 18 - 19 - You should have received a copy of the GNU General Public License 20 - along with this program; if not, write to the Free Software 21 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 22 18 */ 23 19 24 20 /*
+1 -1
drivers/i2c/busses/i2c-at91.c
··· 434 434 } 435 435 } 436 436 437 - ret = wait_for_completion_io_timeout(&dev->cmd_complete, 437 + ret = wait_for_completion_timeout(&dev->cmd_complete, 438 438 dev->adapter.timeout); 439 439 if (ret == 0) { 440 440 dev_err(dev->dev, "controller timed out\n");
-4
drivers/i2c/busses/i2c-au1550.c
··· 21 21 * but WITHOUT ANY WARRANTY; without even the implied warranty of 22 22 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 23 23 * GNU General Public License for more details. 24 - * 25 - * You should have received a copy of the GNU General Public License 26 - * along with this program; if not, write to the Free Software 27 - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 28 24 */ 29 25 30 26 #include <linux/delay.h>
-4
drivers/i2c/busses/i2c-cpm.c
··· 23 23 * but WITHOUT ANY WARRANTY; without even the implied warranty of 24 24 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 25 25 * GNU General Public License for more details. 26 - * 27 - * You should have received a copy of the GNU General Public License 28 - * along with this program; if not, write to the Free Software 29 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 30 26 */ 31 27 32 28 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-davinci.c
··· 17 17 * but WITHOUT ANY WARRANTY; without even the implied warranty of 18 18 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 19 19 * GNU General Public License for more details. 20 - * 21 - * You should have received a copy of the GNU General Public License 22 - * along with this program; if not, write to the Free Software 23 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 24 20 * ---------------------------------------------------------------------------- 25 21 * 26 22 */
-4
drivers/i2c/busses/i2c-designware-core.c
··· 18 18 * but WITHOUT ANY WARRANTY; without even the implied warranty of 19 19 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 20 * GNU General Public License for more details. 21 - * 22 - * You should have received a copy of the GNU General Public License 23 - * along with this program; if not, write to the Free Software 24 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 25 21 * ---------------------------------------------------------------------------- 26 22 * 27 23 */
-4
drivers/i2c/busses/i2c-designware-core.h
··· 18 18 * but WITHOUT ANY WARRANTY; without even the implied warranty of 19 19 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 20 * GNU General Public License for more details. 21 - * 22 - * You should have received a copy of the GNU General Public License 23 - * along with this program; if not, write to the Free Software 24 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 25 21 * ---------------------------------------------------------------------------- 26 22 * 27 23 */
-4
drivers/i2c/busses/i2c-designware-pcidrv.c
··· 19 19 * but WITHOUT ANY WARRANTY; without even the implied warranty of 20 20 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 21 21 * GNU General Public License for more details. 22 - * 23 - * You should have received a copy of the GNU General Public License 24 - * along with this program; if not, write to the Free Software 25 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 26 22 * ---------------------------------------------------------------------------- 27 23 * 28 24 */
-4
drivers/i2c/busses/i2c-designware-platdrv.c
··· 18 18 * but WITHOUT ANY WARRANTY; without even the implied warranty of 19 19 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 20 * GNU General Public License for more details. 21 - * 22 - * You should have received a copy of the GNU General Public License 23 - * along with this program; if not, write to the Free Software 24 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 25 21 * ---------------------------------------------------------------------------- 26 22 * 27 23 */
-4
drivers/i2c/busses/i2c-eg20t.c
··· 9 9 * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 10 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 11 * GNU General Public License for more details. 12 - * 13 - * You should have received a copy of the GNU General Public License 14 - * along with this program; if not, write to the Free Software 15 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA. 16 12 */ 17 13 18 14 #include <linux/module.h>
+1 -5
drivers/i2c/busses/i2c-elektor.c
··· 12 12 This program is distributed in the hope that it will be useful, 13 13 but WITHOUT ANY WARRANTY; without even the implied warranty of 14 14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - GNU General Public License for more details. 16 - 17 - You should have received a copy of the GNU General Public License 18 - along with this program; if not, write to the Free Software 19 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ 15 + GNU General Public License for more details. */ 20 16 /* ------------------------------------------------------------------------- */ 21 17 22 18 /* With some changes from Kyösti Mälkki <kmalkki@cc.hut.fi> and even
-4
drivers/i2c/busses/i2c-hydra.c
··· 15 15 but WITHOUT ANY WARRANTY; without even the implied warranty of 16 16 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 17 GNU General Public License for more details. 18 - 19 - You should have received a copy of the GNU General Public License 20 - along with this program; if not, write to the Free Software 21 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 22 18 */ 23 19 24 20 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-i801.c
··· 15 15 but WITHOUT ANY WARRANTY; without even the implied warranty of 16 16 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 17 GNU General Public License for more details. 18 - 19 - You should have received a copy of the GNU General Public License 20 - along with this program; if not, write to the Free Software 21 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 22 18 */ 23 19 24 20 /*
-5
drivers/i2c/busses/i2c-imx.c
··· 11 11 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 12 * GNU General Public License for more details. 13 13 * 14 - * You should have received a copy of the GNU General Public License 15 - * along with this program; if not, write to the Free Software 16 - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, 17 - * USA. 18 - * 19 14 * Author: 20 15 * Darius Augulis, Teltonika Inc. 21 16 *
+1 -5
drivers/i2c/busses/i2c-iop3xx.h
··· 11 11 This program is distributed in the hope that it will be useful, 12 12 but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - GNU General Public License for more details. 15 - 16 - You should have received a copy of the GNU General Public License 17 - along with this program; if not, write to the Free Software 18 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. */ 14 + GNU General Public License for more details. */ 19 15 /* ------------------------------------------------------------------------- */ 20 16 21 17
-4
drivers/i2c/busses/i2c-isch.c
··· 14 14 but WITHOUT ANY WARRANTY; without even the implied warranty of 15 15 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 16 GNU General Public License for more details. 17 - 18 - You should have received a copy of the GNU General Public License 19 - along with this program; if not, write to the Free Software 20 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 21 17 */ 22 18 23 19 /*
-4
drivers/i2c/busses/i2c-ismt.c
··· 14 14 * WITHOUT ANY WARRANTY; without even the implied warranty of 15 15 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 16 16 * General Public License for more details. 17 - * 18 - * You should have received a copy of the GNU General Public License 19 - * along with this program; if not, write to the Free Software 20 - * Foundation, Inc., 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. 21 17 * The full GNU General Public License is included in this distribution 22 18 * in the file called LICENSE.GPL. 23 19 *
-4
drivers/i2c/busses/i2c-nforce2-s4985.c
··· 12 12 * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 19 15 */ 20 16 21 17 /*
-4
drivers/i2c/busses/i2c-nforce2.c
··· 17 17 but WITHOUT ANY WARRANTY; without even the implied warranty of 18 18 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 19 19 GNU General Public License for more details. 20 - 21 - You should have received a copy of the GNU General Public License 22 - along with this program; if not, write to the Free Software 23 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 24 20 */ 25 21 26 22 /*
-4
drivers/i2c/busses/i2c-omap.c
··· 22 22 * but WITHOUT ANY WARRANTY; without even the implied warranty of 23 23 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 24 24 * GNU General Public License for more details. 25 - * 26 - * You should have received a copy of the GNU General Public License 27 - * along with this program; if not, write to the Free Software 28 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 29 25 */ 30 26 31 27 #include <linux/module.h>
-4
drivers/i2c/busses/i2c-parport-light.c
··· 18 18 but WITHOUT ANY WARRANTY; without even the implied warranty of 19 19 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 20 GNU General Public License for more details. 21 - 22 - You should have received a copy of the GNU General Public License 23 - along with this program; if not, write to the Free Software 24 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 25 21 * ------------------------------------------------------------------------ */ 26 22 27 23 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-parport.c
··· 18 18 but WITHOUT ANY WARRANTY; without even the implied warranty of 19 19 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 20 20 GNU General Public License for more details. 21 - 22 - You should have received a copy of the GNU General Public License 23 - along with this program; if not, write to the Free Software 24 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 25 21 * ------------------------------------------------------------------------ */ 26 22 27 23 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-parport.h
··· 12 12 but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 GNU General Public License for more details. 15 - 16 - You should have received a copy of the GNU General Public License 17 - along with this program; if not, write to the Free Software 18 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 19 15 * ------------------------------------------------------------------------ */ 20 16 21 17 #define PORT_DATA 0
-4
drivers/i2c/busses/i2c-pasemi.c
··· 11 11 * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 12 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 13 * GNU General Public License for more details. 14 - * 15 - * You should have received a copy of the GNU General Public License 16 - * along with this program; if not, write to the Free Software 17 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 14 */ 19 15 20 16 #include <linux/module.h>
-4
drivers/i2c/busses/i2c-pca-isa.c
··· 12 12 * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 19 15 */ 20 16 21 17 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-piix4.c
··· 11 11 but WITHOUT ANY WARRANTY; without even the implied warranty of 12 12 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 13 GNU General Public License for more details. 14 - 15 - You should have received a copy of the GNU General Public License 16 - along with this program; if not, write to the Free Software 17 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 18 14 */ 19 15 20 16 /*
-4
drivers/i2c/busses/i2c-pmcmsp.c
··· 18 18 * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 19 19 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF 20 20 * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 21 - * 22 - * You should have received a copy of the GNU General Public License along 23 - * with this program; if not, write to the Free Software Foundation, Inc., 24 - * 675 Mass Ave, Cambridge, MA 02139, USA. 25 21 */ 26 22 27 23 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-powermac.c
··· 14 14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 15 GNU General Public License for more details. 16 16 17 - You should have received a copy of the GNU General Public License 18 - along with this program; if not, write to the Free Software 19 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 20 - 21 17 */ 22 18 23 19 #include <linux/module.h>
-4
drivers/i2c/busses/i2c-s3c2410.c
··· 14 14 * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 15 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 16 * GNU General Public License for more details. 17 - * 18 - * You should have received a copy of the GNU General Public License 19 - * along with this program; if not, write to the Free Software 20 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 21 17 */ 22 18 23 19 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-sh_mobile.c
··· 14 14 * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 15 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 16 * GNU General Public License for more details. 17 - * 18 - * You should have received a copy of the GNU General Public License 19 - * along with this program; if not, write to the Free Software 20 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 21 17 */ 22 18 23 19 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-sibyte.c
··· 12 12 * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. 19 15 */ 20 16 21 17 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-simtec.c
··· 12 12 * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 * GNU General Public License for more details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 19 15 */ 20 16 21 17 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-sis5595.c
··· 11 11 but WITHOUT ANY WARRANTY; without even the implied warranty of 12 12 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 13 GNU General Public License for more details. 14 - 15 - You should have received a copy of the GNU General Public License 16 - along with this program; if not, write to the Free Software 17 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 18 14 */ 19 15 20 16 /* Note: we assume there can only be one SIS5595 with one SMBus interface */
-4
drivers/i2c/busses/i2c-sis630.c
··· 10 10 but WITHOUT ANY WARRANTY; without even the implied warranty of 11 11 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 12 GNU General Public License for more details. 13 - 14 - You should have received a copy of the GNU General Public License 15 - along with this program; if not, write to the Free Software 16 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 17 13 */ 18 14 19 15 /*
-4
drivers/i2c/busses/i2c-sis96x.c
··· 10 10 but WITHOUT ANY WARRANTY; without even the implied warranty of 11 11 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 12 GNU General Public License for more details. 13 - 14 - You should have received a copy of the GNU General Public License 15 - along with this program; if not, write to the Free Software 16 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 17 13 */ 18 14 19 15 /*
-4
drivers/i2c/busses/i2c-taos-evm.c
··· 13 13 * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 14 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 15 * GNU General Public License for more details. 16 - * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 20 16 */ 21 17 22 18 #include <linux/delay.h>
-4
drivers/i2c/busses/i2c-via.c
··· 12 12 but WITHOUT ANY WARRANTY; without even the implied warranty of 13 13 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 14 GNU General Public License for more details. 15 - 16 - You should have received a copy of the GNU General Public License 17 - along with this program; if not, write to the Free Software 18 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 19 15 */ 20 16 21 17 #include <linux/kernel.h>
-4
drivers/i2c/busses/i2c-viapro.c
··· 13 13 but WITHOUT ANY WARRANTY; without even the implied warranty of 14 14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 15 GNU General Public License for more details. 16 - 17 - You should have received a copy of the GNU General Public License 18 - along with this program; if not, write to the Free Software 19 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 20 16 */ 21 17 22 18 /*
-4
drivers/i2c/busses/i2c-xiic.c
··· 12 12 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 13 * GNU General Public License for more details. 14 14 * 15 - * You should have received a copy of the GNU General Public License 16 - * along with this program; if not, write to the Free Software 17 - * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 18 - * 19 15 * 20 16 * This code was implemented by Mocean Laboratories AB when porting linux 21 17 * to the automotive development board Russellville. The copyright holder
-4
drivers/i2c/busses/scx200_acb.c
··· 17 17 but WITHOUT ANY WARRANTY; without even the implied warranty of 18 18 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 19 19 General Public License for more details. 20 - 21 - You should have received a copy of the GNU General Public License 22 - along with this program; if not, write to the Free Software 23 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 24 20 */ 25 21 26 22 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-5
drivers/i2c/i2c-boardinfo.c
··· 10 10 * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 11 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 12 * GNU General Public License for more details. 13 - * 14 - * You should have received a copy of the GNU General Public License 15 - * along with this program; if not, write to the Free Software 16 - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 17 - * MA 02110-1301 USA. 18 13 */ 19 14 20 15 #include <linux/kernel.h>
+4 -6
drivers/i2c/i2c-core.c
··· 10 10 This program is distributed in the hope that it will be useful, 11 11 but WITHOUT ANY WARRANTY; without even the implied warranty of 12 12 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 - GNU General Public License for more details. 14 - 15 - You should have received a copy of the GNU General Public License 16 - along with this program; if not, write to the Free Software 17 - Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 18 - MA 02110-1301 USA. */ 13 + GNU General Public License for more details. */ 19 14 /* ------------------------------------------------------------------------- */ 20 15 21 16 /* With some changes from Kyösti Mälkki <kmalkki@cc.hut.fi>. ··· 664 669 dev_dbg(dev, "remove\n"); 665 670 status = driver->remove(client); 666 671 } 672 + 673 + if (dev->of_node) 674 + irq_dispose_mapping(client->irq); 667 675 668 676 dev_pm_domain_detach(&client->dev, true); 669 677 return status;
-5
drivers/i2c/i2c-core.h
··· 10 10 * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 11 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 12 * GNU General Public License for more details. 13 - * 14 - * You should have received a copy of the GNU General Public License 15 - * along with this program; if not, write to the Free Software 16 - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 17 - * MA 02110-1301 USA. 18 13 */ 19 14 20 15 #include <linux/rwsem.h>
-5
drivers/i2c/i2c-dev.c
··· 14 14 but WITHOUT ANY WARRANTY; without even the implied warranty of 15 15 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 16 GNU General Public License for more details. 17 - 18 - You should have received a copy of the GNU General Public License 19 - along with this program; if not, write to the Free Software 20 - Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 21 - MA 02110-1301 USA. 22 17 */ 23 18 24 19 /* Note that this is a complete rewrite of Simon Vogl's i2c-dev module.
-5
drivers/i2c/i2c-smbus.c
··· 13 13 * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 14 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 15 * GNU General Public License for more details. 16 - * 17 - * You should have received a copy of the GNU General Public License 18 - * along with this program; if not, write to the Free Software 19 - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 20 - * MA 02110-1301 USA. 21 16 */ 22 17 23 18 #include <linux/kernel.h>
-4
drivers/i2c/i2c-stub.c
··· 13 13 but WITHOUT ANY WARRANTY; without even the implied warranty of 14 14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 15 GNU General Public License for more details. 16 - 17 - You should have received a copy of the GNU General Public License 18 - along with this program; if not, write to the Free Software 19 - Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. 20 16 */ 21 17 22 18 #define DEBUG 1
+1 -1
drivers/iio/accel/kxcjk-1013.c
··· 894 894 895 895 static const struct iio_event_spec kxcjk1013_event = { 896 896 .type = IIO_EV_TYPE_THRESH, 897 - .dir = IIO_EV_DIR_RISING | IIO_EV_DIR_FALLING, 897 + .dir = IIO_EV_DIR_EITHER, 898 898 .mask_separate = BIT(IIO_EV_INFO_VALUE) | 899 899 BIT(IIO_EV_INFO_ENABLE) | 900 900 BIT(IIO_EV_INFO_PERIOD)
+1 -1
drivers/iio/common/st_sensors/st_sensors_buffer.c
··· 71 71 goto st_sensors_free_memory; 72 72 } 73 73 74 - for (i = 0; i < n * num_data_channels; i++) { 74 + for (i = 0; i < n * byte_for_channel; i++) { 75 75 if (i < n) 76 76 buf[i] = rx_array[i]; 77 77 else
+5 -2
drivers/iio/light/tsl4531.c
··· 230 230 return i2c_smbus_write_byte_data(to_i2c_client(dev), TSL4531_CONTROL, 231 231 TSL4531_MODE_NORMAL); 232 232 } 233 - #endif 234 233 235 234 static SIMPLE_DEV_PM_OPS(tsl4531_pm_ops, tsl4531_suspend, tsl4531_resume); 235 + #define TSL4531_PM_OPS (&tsl4531_pm_ops) 236 + #else 237 + #define TSL4531_PM_OPS NULL 238 + #endif 236 239 237 240 static const struct i2c_device_id tsl4531_id[] = { 238 241 { "tsl4531", 0 }, ··· 246 243 static struct i2c_driver tsl4531_driver = { 247 244 .driver = { 248 245 .name = TSL4531_DRV_NAME, 249 - .pm = &tsl4531_pm_ops, 246 + .pm = TSL4531_PM_OPS, 250 247 .owner = THIS_MODULE, 251 248 }, 252 249 .probe = tsl4531_probe,
+1 -1
drivers/iio/proximity/as3935.c
··· 330 330 return -EINVAL; 331 331 } 332 332 333 - indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(st)); 333 + indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st)); 334 334 if (!indio_dev) 335 335 return -ENOMEM; 336 336
+17 -6
drivers/irqchip/irq-armada-370-xp.c
··· 43 43 #define ARMADA_370_XP_INT_CLEAR_ENABLE_OFFS (0x34) 44 44 #define ARMADA_370_XP_INT_SOURCE_CTL(irq) (0x100 + irq*4) 45 45 #define ARMADA_370_XP_INT_SOURCE_CPU_MASK 0xF 46 + #define ARMADA_370_XP_INT_IRQ_FIQ_MASK(cpuid) ((BIT(0) | BIT(8)) << cpuid) 46 47 47 48 #define ARMADA_370_XP_CPU_INTACK_OFFS (0x44) 48 49 #define ARMADA_375_PPI_CAUSE (0x10) ··· 407 406 struct irq_desc *desc) 408 407 { 409 408 struct irq_chip *chip = irq_get_chip(irq); 410 - unsigned long irqmap, irqn; 409 + unsigned long irqmap, irqn, irqsrc, cpuid; 411 410 unsigned int cascade_irq; 412 411 413 412 chained_irq_enter(chip, desc); 414 413 415 414 irqmap = readl_relaxed(per_cpu_int_base + ARMADA_375_PPI_CAUSE); 416 - 417 - if (irqmap & BIT(0)) { 418 - armada_370_xp_handle_msi_irq(NULL, true); 419 - irqmap &= ~BIT(0); 420 - } 415 + cpuid = cpu_logical_map(smp_processor_id()); 421 416 422 417 for_each_set_bit(irqn, &irqmap, BITS_PER_LONG) { 418 + irqsrc = readl_relaxed(main_int_base + 419 + ARMADA_370_XP_INT_SOURCE_CTL(irqn)); 420 + 421 + /* Check if the interrupt is not masked on current CPU. 422 + * Test IRQ (0-1) and FIQ (8-9) mask bits. 423 + */ 424 + if (!(irqsrc & ARMADA_370_XP_INT_IRQ_FIQ_MASK(cpuid))) 425 + continue; 426 + 427 + if (irqn == 1) { 428 + armada_370_xp_handle_msi_irq(NULL, true); 429 + continue; 430 + } 431 + 423 432 cascade_irq = irq_find_mapping(armada_370_xp_mpic_domain, irqn); 424 433 generic_handle_irq(cascade_irq); 425 434 }
+6 -6
drivers/md/dm-bufio.c
··· 1434 1434 1435 1435 /* 1436 1436 * Test if the buffer is unused and too old, and commit it. 1437 - * At if noio is set, we must not do any I/O because we hold 1438 - * dm_bufio_clients_lock and we would risk deadlock if the I/O gets rerouted to 1439 - * different bufio client. 1437 + * And if GFP_NOFS is used, we must not do any I/O because we hold 1438 + * dm_bufio_clients_lock and we would risk deadlock if the I/O gets 1439 + * rerouted to different bufio client. 1440 1440 */ 1441 1441 static int __cleanup_old_buffer(struct dm_buffer *b, gfp_t gfp, 1442 1442 unsigned long max_jiffies) ··· 1444 1444 if (jiffies - b->last_accessed < max_jiffies) 1445 1445 return 0; 1446 1446 1447 - if (!(gfp & __GFP_IO)) { 1447 + if (!(gfp & __GFP_FS)) { 1448 1448 if (test_bit(B_READING, &b->state) || 1449 1449 test_bit(B_WRITING, &b->state) || 1450 1450 test_bit(B_DIRTY, &b->state)) ··· 1486 1486 unsigned long freed; 1487 1487 1488 1488 c = container_of(shrink, struct dm_bufio_client, shrinker); 1489 - if (sc->gfp_mask & __GFP_IO) 1489 + if (sc->gfp_mask & __GFP_FS) 1490 1490 dm_bufio_lock(c); 1491 1491 else if (!dm_bufio_trylock(c)) 1492 1492 return SHRINK_STOP; ··· 1503 1503 unsigned long count; 1504 1504 1505 1505 c = container_of(shrink, struct dm_bufio_client, shrinker); 1506 - if (sc->gfp_mask & __GFP_IO) 1506 + if (sc->gfp_mask & __GFP_FS) 1507 1507 dm_bufio_lock(c); 1508 1508 else if (!dm_bufio_trylock(c)) 1509 1509 return 0;
+12 -5
drivers/md/dm-raid.c
··· 789 789 __le32 layout; 790 790 __le32 stripe_sectors; 791 791 792 - __u8 pad[452]; /* Round struct to 512 bytes. */ 793 - /* Always set to 0 when writing. */ 792 + /* Remainder of a logical block is zero-filled when writing (see super_sync()). */ 794 793 } __packed; 795 794 796 795 static int read_disk_sb(struct md_rdev *rdev, int size) ··· 826 827 test_bit(Faulty, &(rs->dev[i].rdev.flags))) 827 828 failed_devices |= (1ULL << i); 828 829 829 - memset(sb, 0, sizeof(*sb)); 830 + memset(sb + 1, 0, rdev->sb_size - sizeof(*sb)); 830 831 831 832 sb->magic = cpu_to_le32(DM_RAID_MAGIC); 832 833 sb->features = cpu_to_le32(0); /* No features yet */ ··· 861 862 uint64_t events_sb, events_refsb; 862 863 863 864 rdev->sb_start = 0; 864 - rdev->sb_size = sizeof(*sb); 865 + rdev->sb_size = bdev_logical_block_size(rdev->meta_bdev); 866 + if (rdev->sb_size < sizeof(*sb) || rdev->sb_size > PAGE_SIZE) { 867 + DMERR("superblock size of a logical block is no longer valid"); 868 + return -EINVAL; 869 + } 865 870 866 871 ret = read_disk_sb(rdev, rdev->sb_size); 867 872 if (ret) ··· 1172 1169 raid456 = (rs->md.level == 4 || rs->md.level == 5 || rs->md.level == 6); 1173 1170 1174 1171 for (i = 0; i < rs->md.raid_disks; i++) { 1175 - struct request_queue *q = bdev_get_queue(rs->dev[i].rdev.bdev); 1172 + struct request_queue *q; 1176 1173 1174 + if (!rs->dev[i].rdev.bdev) 1175 + continue; 1176 + 1177 + q = bdev_get_queue(rs->dev[i].rdev.bdev); 1177 1178 if (!q || !blk_queue_discard(q)) 1178 1179 return; 1179 1180
+3 -1
drivers/md/dm-stripe.c
··· 159 159 sc->stripes_shift = __ffs(stripes); 160 160 161 161 r = dm_set_target_max_io_len(ti, chunk_size); 162 - if (r) 162 + if (r) { 163 + kfree(sc); 163 164 return r; 165 + } 164 166 165 167 ti->num_flush_bios = stripes; 166 168 ti->num_discard_bios = stripes;
+12 -4
drivers/md/dm-thin.c
··· 1936 1936 return DM_MAPIO_SUBMITTED; 1937 1937 } 1938 1938 1939 + /* 1940 + * We must hold the virtual cell before doing the lookup, otherwise 1941 + * there's a race with discard. 1942 + */ 1943 + build_virtual_key(tc->td, block, &key); 1944 + if (dm_bio_detain(tc->pool->prison, &key, bio, &cell1, &cell_result)) 1945 + return DM_MAPIO_SUBMITTED; 1946 + 1939 1947 r = dm_thin_find_block(td, block, 0, &result); 1940 1948 1941 1949 /* ··· 1967 1959 * shared flag will be set in their case. 1968 1960 */ 1969 1961 thin_defer_bio(tc, bio); 1962 + cell_defer_no_holder_no_free(tc, &cell1); 1970 1963 return DM_MAPIO_SUBMITTED; 1971 1964 } 1972 - 1973 - build_virtual_key(tc->td, block, &key); 1974 - if (dm_bio_detain(tc->pool->prison, &key, bio, &cell1, &cell_result)) 1975 - return DM_MAPIO_SUBMITTED; 1976 1965 1977 1966 build_data_key(tc->td, result.block, &key); 1978 1967 if (dm_bio_detain(tc->pool->prison, &key, bio, &cell2, &cell_result)) { ··· 1991 1986 * of doing so. 1992 1987 */ 1993 1988 handle_unserviceable_bio(tc->pool, bio); 1989 + cell_defer_no_holder_no_free(tc, &cell1); 1994 1990 return DM_MAPIO_SUBMITTED; 1995 1991 } 1996 1992 /* fall through */ ··· 2002 1996 * provide the hint to load the metadata into cache. 2003 1997 */ 2004 1998 thin_defer_bio(tc, bio); 1999 + cell_defer_no_holder_no_free(tc, &cell1); 2005 2000 return DM_MAPIO_SUBMITTED; 2006 2001 2007 2002 default: ··· 2012 2005 * pool is switched to fail-io mode. 2013 2006 */ 2014 2007 bio_io_error(bio); 2008 + cell_defer_no_holder_no_free(tc, &cell1); 2015 2009 return DM_MAPIO_SUBMITTED; 2016 2010 } 2017 2011 }
+6
drivers/md/persistent-data/dm-btree-internal.h
··· 42 42 } __packed; 43 43 44 44 45 + /* 46 + * Locks a block using the btree node validator. 47 + */ 48 + int bn_read_lock(struct dm_btree_info *info, dm_block_t b, 49 + struct dm_block **result); 50 + 45 51 void inc_children(struct dm_transaction_manager *tm, struct btree_node *n, 46 52 struct dm_btree_value_type *vt); 47 53
+1 -1
drivers/md/persistent-data/dm-btree-spine.c
··· 92 92 93 93 /*----------------------------------------------------------------*/ 94 94 95 - static int bn_read_lock(struct dm_btree_info *info, dm_block_t b, 95 + int bn_read_lock(struct dm_btree_info *info, dm_block_t b, 96 96 struct dm_block **result) 97 97 { 98 98 return dm_tm_read_lock(info->tm, b, &btree_node_validator, result);
+10 -14
drivers/md/persistent-data/dm-btree.c
··· 847 847 * FIXME: We shouldn't use a recursive algorithm when we have limited stack 848 848 * space. Also this only works for single level trees. 849 849 */ 850 - static int walk_node(struct ro_spine *s, dm_block_t block, 850 + static int walk_node(struct dm_btree_info *info, dm_block_t block, 851 851 int (*fn)(void *context, uint64_t *keys, void *leaf), 852 852 void *context) 853 853 { 854 854 int r; 855 855 unsigned i, nr; 856 + struct dm_block *node; 856 857 struct btree_node *n; 857 858 uint64_t keys; 858 859 859 - r = ro_step(s, block); 860 - n = ro_node(s); 860 + r = bn_read_lock(info, block, &node); 861 + if (r) 862 + return r; 863 + 864 + n = dm_block_data(node); 861 865 862 866 nr = le32_to_cpu(n->header.nr_entries); 863 867 for (i = 0; i < nr; i++) { 864 868 if (le32_to_cpu(n->header.flags) & INTERNAL_NODE) { 865 - r = walk_node(s, value64(n, i), fn, context); 869 + r = walk_node(info, value64(n, i), fn, context); 866 870 if (r) 867 871 goto out; 868 872 } else { ··· 878 874 } 879 875 880 876 out: 881 - ro_pop(s); 877 + dm_tm_unlock(info->tm, node); 882 878 return r; 883 879 } 884 880 ··· 886 882 int (*fn)(void *context, uint64_t *keys, void *leaf), 887 883 void *context) 888 884 { 889 - int r; 890 - struct ro_spine spine; 891 - 892 885 BUG_ON(info->levels > 1); 893 - 894 - init_ro_spine(&spine, info); 895 - r = walk_node(&spine, root, fn, context); 896 - exit_ro_spine(&spine); 897 - 898 - return r; 886 + return walk_node(info, root, fn, context); 899 887 } 900 888 EXPORT_SYMBOL_GPL(dm_btree_walk);
+6
drivers/media/dvb-core/dvb_frontend.c
··· 962 962 case SYS_ATSC: 963 963 c->modulation = VSB_8; 964 964 break; 965 + case SYS_ISDBS: 966 + c->symbol_rate = 28860000; 967 + c->rolloff = ROLLOFF_35; 968 + c->bandwidth_hz = c->symbol_rate / 100 * 135; 969 + break; 965 970 default: 966 971 c->modulation = QAM_AUTO; 967 972 break; ··· 2077 2072 break; 2078 2073 case SYS_DVBS: 2079 2074 case SYS_TURBO: 2075 + case SYS_ISDBS: 2080 2076 rolloff = 135; 2081 2077 break; 2082 2078 case SYS_DVBS2:
+7
drivers/media/dvb-frontends/ds3000.c
··· 864 864 memcpy(&state->frontend.ops, &ds3000_ops, 865 865 sizeof(struct dvb_frontend_ops)); 866 866 state->frontend.demodulator_priv = state; 867 + 868 + /* 869 + * Some devices like T480 starts with voltage on. Be sure 870 + * to turn voltage off during init, as this can otherwise 871 + * interfere with Unicable SCR systems. 872 + */ 873 + ds3000_set_voltage(&state->frontend, SEC_VOLTAGE_OFF); 867 874 return &state->frontend; 868 875 869 876 error3:
+2 -2
drivers/media/dvb-frontends/sp2.c
··· 266 266 return s->status; 267 267 } 268 268 269 - int sp2_init(struct sp2 *s) 269 + static int sp2_init(struct sp2 *s) 270 270 { 271 271 int ret = 0; 272 272 u8 buf; ··· 348 348 return ret; 349 349 } 350 350 351 - int sp2_exit(struct i2c_client *client) 351 + static int sp2_exit(struct i2c_client *client) 352 352 { 353 353 struct sp2 *s; 354 354
+8 -10
drivers/media/dvb-frontends/tc90522.c
··· 216 216 c->delivery_system = SYS_ISDBS; 217 217 218 218 layers = 0; 219 - ret = reg_read(state, 0xe8, val, 3); 219 + ret = reg_read(state, 0xe6, val, 5); 220 220 if (ret == 0) { 221 - int slots; 222 221 u8 v; 223 222 223 + c->stream_id = val[0] << 8 | val[1]; 224 + 224 225 /* high/single layer */ 225 - v = (val[0] & 0x70) >> 4; 226 + v = (val[2] & 0x70) >> 4; 226 227 c->modulation = (v == 7) ? PSK_8 : QPSK; 227 228 c->fec_inner = fec_conv_sat[v]; 228 229 c->layer[0].fec = c->fec_inner; 229 230 c->layer[0].modulation = c->modulation; 230 - c->layer[0].segment_count = val[1] & 0x3f; /* slots */ 231 + c->layer[0].segment_count = val[3] & 0x3f; /* slots */ 231 232 232 233 /* low layer */ 233 - v = (val[0] & 0x07); 234 + v = (val[2] & 0x07); 234 235 c->layer[1].fec = fec_conv_sat[v]; 235 236 if (v == 0) /* no low layer */ 236 237 c->layer[1].segment_count = 0; 237 238 else 238 - c->layer[1].segment_count = val[2] & 0x3f; /* slots */ 239 + c->layer[1].segment_count = val[4] & 0x3f; /* slots */ 239 240 /* actually, BPSK if v==1, but not defined in fe_modulation_t */ 240 241 c->layer[1].modulation = QPSK; 241 242 layers = (v > 0) ? 2 : 1; 242 - 243 - slots = c->layer[0].segment_count + c->layer[1].segment_count; 244 - c->symbol_rate = 28860000 * slots / 48; 245 243 } 246 244 247 245 /* statistics */ ··· 361 363 u8 v; 362 364 363 365 c->isdbt_partial_reception = val[0] & 0x01; 364 - c->isdbt_sb_mode = (val[0] & 0xc0) == 0x01; 366 + c->isdbt_sb_mode = (val[0] & 0xc0) == 0x40; 365 367 366 368 /* layer A */ 367 369 v = (val[2] & 0x78) >> 3;
+3 -8
drivers/media/platform/vivid/vivid-core.c
··· 100 100 "\t\t bit 0=crop, 1=compose, 2=scale,\n" 101 101 "\t\t -1=user-controlled (default)"); 102 102 103 - static unsigned multiplanar[VIVID_MAX_DEVS]; 103 + static unsigned multiplanar[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = 1 }; 104 104 module_param_array(multiplanar, uint, NULL, 0444); 105 - MODULE_PARM_DESC(multiplanar, " 0 (default) is alternating single and multiplanar devices,\n" 106 - "\t\t 1 is single planar devices,\n" 107 - "\t\t 2 is multiplanar devices"); 105 + MODULE_PARM_DESC(multiplanar, " 1 (default) creates a single planar device, 2 creates a multiplanar device."); 108 106 109 107 /* Default: video + vbi-cap (raw and sliced) + radio rx + radio tx + sdr + vbi-out + vid-out */ 110 108 static unsigned node_types[VIVID_MAX_DEVS] = { [0 ... (VIVID_MAX_DEVS - 1)] = 0x1d3d }; ··· 667 669 /* start detecting feature set */ 668 670 669 671 /* do we use single- or multi-planar? */ 670 - if (multiplanar[inst] == 0) 671 - dev->multiplanar = inst & 1; 672 - else 673 - dev->multiplanar = multiplanar[inst] > 1; 672 + dev->multiplanar = multiplanar[inst] > 1; 674 673 v4l2_info(&dev->v4l2_dev, "using %splanar format API\n", 675 674 dev->multiplanar ? "multi" : "single "); 676 675
+2 -1
drivers/media/rc/imon.c
··· 1678 1678 if (press_type == 0) 1679 1679 rc_keyup(ictx->rdev); 1680 1680 else { 1681 - if (ictx->rc_type == RC_BIT_RC6_MCE) 1681 + if (ictx->rc_type == RC_BIT_RC6_MCE || 1682 + ictx->rc_type == RC_BIT_OTHER) 1682 1683 rc_keydown(ictx->rdev, 1683 1684 ictx->rc_type == RC_BIT_RC6_MCE ? RC_TYPE_RC6_MCE : RC_TYPE_OTHER, 1684 1685 ictx->rc_scancode, ictx->rc_toggle);
+1 -1
drivers/media/rc/ir-hix5hd2.c
··· 297 297 return 0; 298 298 } 299 299 300 - #ifdef CONFIG_PM 300 + #ifdef CONFIG_PM_SLEEP 301 301 static int hix5hd2_ir_suspend(struct device *dev) 302 302 { 303 303 struct hix5hd2_ir_priv *priv = dev_get_drvdata(dev);
+1 -1
drivers/media/rc/ir-rc5-decoder.c
··· 53 53 u32 scancode; 54 54 enum rc_type protocol; 55 55 56 - if (!(dev->enabled_protocols & (RC_BIT_RC5 | RC_BIT_RC5X))) 56 + if (!(dev->enabled_protocols & (RC_BIT_RC5 | RC_BIT_RC5X | RC_BIT_RC5_SZ))) 57 57 return 0; 58 58 59 59 if (!is_timing_event(ev)) {
-1
drivers/media/rc/rc-ir-raw.c
··· 262 262 return -ENOMEM; 263 263 264 264 dev->raw->dev = dev; 265 - dev->enabled_protocols = ~0; 266 265 dev->change_protocol = change_protocol; 267 266 rc = kfifo_alloc(&dev->raw->kfifo, 268 267 sizeof(struct ir_raw_event) * MAX_IR_EVENT_SIZE,
+2
drivers/media/rc/rc-main.c
··· 1421 1421 1422 1422 if (dev->change_protocol) { 1423 1423 u64 rc_type = (1 << rc_map->rc_type); 1424 + if (dev->driver_type == RC_DRIVER_IR_RAW) 1425 + rc_type |= RC_BIT_LIRC; 1424 1426 rc = dev->change_protocol(dev, &rc_type); 1425 1427 if (rc < 0) 1426 1428 goto out_raw;
+13 -1
drivers/mfd/max77693.c
··· 240 240 goto err_irq_charger; 241 241 } 242 242 243 - ret = regmap_add_irq_chip(max77693->regmap, max77693->irq, 243 + ret = regmap_add_irq_chip(max77693->regmap_muic, max77693->irq, 244 244 IRQF_ONESHOT | IRQF_SHARED | 245 245 IRQF_TRIGGER_FALLING, 0, 246 246 &max77693_muic_irq_chip, ··· 248 248 if (ret) { 249 249 dev_err(max77693->dev, "failed to add irq chip: %d\n", ret); 250 250 goto err_irq_muic; 251 + } 252 + 253 + /* Unmask interrupts from all blocks in interrupt source register */ 254 + ret = regmap_update_bits(max77693->regmap, 255 + MAX77693_PMIC_REG_INTSRC_MASK, 256 + SRC_IRQ_ALL, (unsigned int)~SRC_IRQ_ALL); 257 + if (ret < 0) { 258 + dev_err(max77693->dev, 259 + "Could not unmask interrupts in INTSRC: %d\n", 260 + ret); 261 + goto err_intsrc; 251 262 } 252 263 253 264 pm_runtime_set_active(max77693->dev); ··· 272 261 273 262 err_mfd: 274 263 mfd_remove_devices(max77693->dev); 264 + err_intsrc: 275 265 regmap_del_irq_chip(max77693->irq, max77693->irq_data_muic); 276 266 err_irq_muic: 277 267 regmap_del_irq_chip(max77693->irq, max77693->irq_data_charger);
+2
drivers/mfd/rtsx_pcr.c
··· 947 947 mutex_unlock(&pcr->pcr_mutex); 948 948 } 949 949 950 + #ifdef CONFIG_PM 950 951 static void rtsx_pci_power_off(struct rtsx_pcr *pcr, u8 pm_state) 951 952 { 952 953 if (pcr->ops->turn_off_led) ··· 962 961 if (pcr->ops->force_power_down) 963 962 pcr->ops->force_power_down(pcr, pm_state); 964 963 } 964 + #endif 965 965 966 966 static int rtsx_pci_init_hw(struct rtsx_pcr *pcr) 967 967 {
+1 -1
drivers/mfd/stmpe.h
··· 269 269 #define STMPE24XX_REG_CHIP_ID 0x80 270 270 #define STMPE24XX_REG_IEGPIOR_LSB 0x18 271 271 #define STMPE24XX_REG_ISGPIOR_MSB 0x19 272 - #define STMPE24XX_REG_GPMR_LSB 0xA5 272 + #define STMPE24XX_REG_GPMR_LSB 0xA4 273 273 #define STMPE24XX_REG_GPSR_LSB 0x85 274 274 #define STMPE24XX_REG_GPCR_LSB 0x88 275 275 #define STMPE24XX_REG_GPDR_LSB 0x8B
+52
drivers/mfd/twl4030-power.c
··· 44 44 #define PWR_DEVSLP BIT(1) 45 45 #define PWR_DEVOFF BIT(0) 46 46 47 + /* Register bits for CFG_P1_TRANSITION (also for P2 and P3) */ 48 + #define STARTON_SWBUG BIT(7) /* Start on watchdog */ 49 + #define STARTON_VBUS BIT(5) /* Start on VBUS */ 50 + #define STARTON_VBAT BIT(4) /* Start on battery insert */ 51 + #define STARTON_RTC BIT(3) /* Start on RTC */ 52 + #define STARTON_USB BIT(2) /* Start on USB host */ 53 + #define STARTON_CHG BIT(1) /* Start on charger */ 54 + #define STARTON_PWON BIT(0) /* Start on PWRON button */ 55 + 47 56 #define SEQ_OFFSYNC (1 << 0) 48 57 49 58 #define PHY_TO_OFF_PM_MASTER(p) (p - 0x36) ··· 615 606 return 0; 616 607 } 617 608 609 + static int twl4030_starton_mask_and_set(u8 bitmask, u8 bitvalues) 610 + { 611 + u8 regs[3] = { TWL4030_PM_MASTER_CFG_P1_TRANSITION, 612 + TWL4030_PM_MASTER_CFG_P2_TRANSITION, 613 + TWL4030_PM_MASTER_CFG_P3_TRANSITION, }; 614 + u8 val; 615 + int i, err; 616 + 617 + err = twl_i2c_write_u8(TWL_MODULE_PM_MASTER, TWL4030_PM_MASTER_KEY_CFG1, 618 + TWL4030_PM_MASTER_PROTECT_KEY); 619 + if (err) 620 + goto relock; 621 + err = twl_i2c_write_u8(TWL_MODULE_PM_MASTER, 622 + TWL4030_PM_MASTER_KEY_CFG2, 623 + TWL4030_PM_MASTER_PROTECT_KEY); 624 + if (err) 625 + goto relock; 626 + 627 + for (i = 0; i < sizeof(regs); i++) { 628 + err = twl_i2c_read_u8(TWL_MODULE_PM_MASTER, 629 + &val, regs[i]); 630 + if (err) 631 + break; 632 + val = (~bitmask & val) | (bitmask & bitvalues); 633 + err = twl_i2c_write_u8(TWL_MODULE_PM_MASTER, 634 + val, regs[i]); 635 + if (err) 636 + break; 637 + } 638 + 639 + if (err) 640 + pr_err("TWL4030 Register access failed: %i\n", err); 641 + 642 + relock: 643 + return twl_i2c_write_u8(TWL_MODULE_PM_MASTER, 0, 644 + TWL4030_PM_MASTER_PROTECT_KEY); 645 + } 646 + 618 647 /* 619 648 * In master mode, start the power off sequence. 
620 649 * After a successful execution, TWL shuts down the power to the SoC ··· 661 614 void twl4030_power_off(void) 662 615 { 663 616 int err; 617 + 618 + /* Disable start on charger or VBUS as it can break poweroff */ 619 + err = twl4030_starton_mask_and_set(STARTON_VBUS | STARTON_CHG, 0); 620 + if (err) 621 + pr_err("TWL4030 Unable to configure start-up\n"); 664 622 665 623 err = twl_i2c_write_u8(TWL_MODULE_PM_MASTER, PWR_DEVOFF, 666 624 TWL4030_PM_MASTER_P1_SW_EVENTS);
+3 -2
drivers/mfd/viperboard.c
··· 93 93 version >> 8, version & 0xff, 94 94 vb->usb_dev->bus->busnum, vb->usb_dev->devnum); 95 95 96 - ret = mfd_add_devices(&interface->dev, -1, vprbrd_devs, 97 - ARRAY_SIZE(vprbrd_devs), NULL, 0, NULL); 96 + ret = mfd_add_devices(&interface->dev, PLATFORM_DEVID_AUTO, 97 + vprbrd_devs, ARRAY_SIZE(vprbrd_devs), NULL, 0, 98 + NULL); 98 99 if (ret != 0) { 99 100 dev_err(&interface->dev, "Failed to add mfd devices to core."); 100 101 goto error;
+8 -13
drivers/mmc/core/host.c
··· 311 311 struct device_node *np; 312 312 u32 bus_width; 313 313 int len, ret; 314 - bool cap_invert, gpio_invert; 314 + bool cd_cap_invert, cd_gpio_invert = false; 315 + bool ro_cap_invert, ro_gpio_invert = false; 315 316 316 317 if (!host->parent || !host->parent->of_node) 317 318 return 0; ··· 360 359 if (of_find_property(np, "non-removable", &len)) { 361 360 host->caps |= MMC_CAP_NONREMOVABLE; 362 361 } else { 363 - if (of_property_read_bool(np, "cd-inverted")) 364 - cap_invert = true; 365 - else 366 - cap_invert = false; 362 + cd_cap_invert = of_property_read_bool(np, "cd-inverted"); 367 363 368 364 if (of_find_property(np, "broken-cd", &len)) 369 365 host->caps |= MMC_CAP_NEEDS_POLL; 370 366 371 367 ret = mmc_gpiod_request_cd(host, "cd", 0, true, 372 - 0, &gpio_invert); 368 + 0, &cd_gpio_invert); 373 369 if (ret) { 374 370 if (ret == -EPROBE_DEFER) 375 371 return ret; ··· 389 391 * both inverted, the end result is that the CD line is 390 392 * not inverted. 391 393 */ 392 - if (cap_invert ^ gpio_invert) 394 + if (cd_cap_invert ^ cd_gpio_invert) 393 395 host->caps2 |= MMC_CAP2_CD_ACTIVE_HIGH; 394 396 } 395 397 396 398 /* Parse Write Protection */ 397 - if (of_property_read_bool(np, "wp-inverted")) 398 - cap_invert = true; 399 - else 400 - cap_invert = false; 399 + ro_cap_invert = of_property_read_bool(np, "wp-inverted"); 401 400 402 - ret = mmc_gpiod_request_ro(host, "wp", 0, false, 0, &gpio_invert); 401 + ret = mmc_gpiod_request_ro(host, "wp", 0, false, 0, &ro_gpio_invert); 403 402 if (ret) { 404 403 if (ret == -EPROBE_DEFER) 405 404 goto out; ··· 409 414 dev_info(host->parent, "Got WP GPIO\n"); 410 415 411 416 /* See the comment on CD inversion above */ 412 - if (cap_invert ^ gpio_invert) 417 + if (ro_cap_invert ^ ro_gpio_invert) 413 418 host->caps2 |= MMC_CAP2_RO_ACTIVE_HIGH; 414 419 415 420 if (of_find_property(np, "cap-sd-highspeed", &len))
+2
drivers/mtd/chips/cfi_cmdset_0001.c
··· 2590 2590 2591 2591 /* Go to known state. Chip may have been power cycled */ 2592 2592 if (chip->state == FL_PM_SUSPENDED) { 2593 + /* Refresh LH28F640BF Partition Config. Register */ 2594 + fixup_LH28F640BF(mtd); 2593 2595 map_write(map, CMD(0xFF), cfi->chips[i].start); 2594 2596 chip->oldstate = chip->state = FL_READY; 2595 2597 wake_up(&chip->wq);
+56 -8
drivers/mtd/devices/m25p80.c
··· 193 193 { 194 194 struct mtd_part_parser_data ppdata; 195 195 struct flash_platform_data *data; 196 - const struct spi_device_id *id = NULL; 197 196 struct m25p *flash; 198 197 struct spi_nor *nor; 199 198 enum read_mode mode = SPI_NOR_NORMAL; 199 + char *flash_name = NULL; 200 200 int ret; 201 201 202 202 data = dev_get_platdata(&spi->dev); ··· 236 236 * If that's the case, respect "type" and ignore a "name". 237 237 */ 238 238 if (data && data->type) 239 - id = spi_nor_match_id(data->type); 239 + flash_name = data->type; 240 + else 241 + flash_name = spi->modalias; 240 242 241 - /* If we didn't get name from platform, simply use "modalias". */ 242 - if (!id) 243 - id = spi_get_device_id(spi); 244 - 245 - ret = spi_nor_scan(nor, id, mode); 243 + ret = spi_nor_scan(nor, flash_name, mode); 246 244 if (ret) 247 245 return ret; 248 246 ··· 261 263 } 262 264 263 265 266 + /* 267 + * XXX This needs to be kept in sync with spi_nor_ids. We can't share 268 + * it with spi-nor, because if this is built as a module then modpost 269 + * won't be able to read it and add appropriate aliases. 
270 + */ 271 + static const struct spi_device_id m25p_ids[] = { 272 + {"at25fs010"}, {"at25fs040"}, {"at25df041a"}, {"at25df321a"}, 273 + {"at25df641"}, {"at26f004"}, {"at26df081a"}, {"at26df161a"}, 274 + {"at26df321"}, {"at45db081d"}, 275 + {"en25f32"}, {"en25p32"}, {"en25q32b"}, {"en25p64"}, 276 + {"en25q64"}, {"en25qh128"}, {"en25qh256"}, 277 + {"f25l32pa"}, 278 + {"mr25h256"}, {"mr25h10"}, 279 + {"gd25q32"}, {"gd25q64"}, 280 + {"160s33b"}, {"320s33b"}, {"640s33b"}, 281 + {"mx25l2005a"}, {"mx25l4005a"}, {"mx25l8005"}, {"mx25l1606e"}, 282 + {"mx25l3205d"}, {"mx25l3255e"}, {"mx25l6405d"}, {"mx25l12805d"}, 283 + {"mx25l12855e"},{"mx25l25635e"},{"mx25l25655e"},{"mx66l51235l"}, 284 + {"mx66l1g55g"}, 285 + {"n25q064"}, {"n25q128a11"}, {"n25q128a13"}, {"n25q256a"}, 286 + {"n25q512a"}, {"n25q512ax3"}, {"n25q00"}, 287 + {"pm25lv512"}, {"pm25lv010"}, {"pm25lq032"}, 288 + {"s25sl032p"}, {"s25sl064p"}, {"s25fl256s0"}, {"s25fl256s1"}, 289 + {"s25fl512s"}, {"s70fl01gs"}, {"s25sl12800"}, {"s25sl12801"}, 290 + {"s25fl129p0"}, {"s25fl129p1"}, {"s25sl004a"}, {"s25sl008a"}, 291 + {"s25sl016a"}, {"s25sl032a"}, {"s25sl064a"}, {"s25fl008k"}, 292 + {"s25fl016k"}, {"s25fl064k"}, 293 + {"sst25vf040b"},{"sst25vf080b"},{"sst25vf016b"},{"sst25vf032b"}, 294 + {"sst25vf064c"},{"sst25wf512"}, {"sst25wf010"}, {"sst25wf020"}, 295 + {"sst25wf040"}, 296 + {"m25p05"}, {"m25p10"}, {"m25p20"}, {"m25p40"}, 297 + {"m25p80"}, {"m25p16"}, {"m25p32"}, {"m25p64"}, 298 + {"m25p128"}, {"n25q032"}, 299 + {"m25p05-nonjedec"}, {"m25p10-nonjedec"}, {"m25p20-nonjedec"}, 300 + {"m25p40-nonjedec"}, {"m25p80-nonjedec"}, {"m25p16-nonjedec"}, 301 + {"m25p32-nonjedec"}, {"m25p64-nonjedec"}, {"m25p128-nonjedec"}, 302 + {"m45pe10"}, {"m45pe80"}, {"m45pe16"}, 303 + {"m25pe20"}, {"m25pe80"}, {"m25pe16"}, 304 + {"m25px16"}, {"m25px32"}, {"m25px32-s0"}, {"m25px32-s1"}, 305 + {"m25px64"}, 306 + {"w25x10"}, {"w25x20"}, {"w25x40"}, {"w25x80"}, 307 + {"w25x16"}, {"w25x32"}, {"w25q32"}, {"w25q32dw"}, 308 + {"w25x64"}, 
{"w25q64"}, {"w25q128"}, {"w25q80"}, 309 + {"w25q80bl"}, {"w25q128"}, {"w25q256"}, {"cat25c11"}, 310 + {"cat25c03"}, {"cat25c09"}, {"cat25c17"}, {"cat25128"}, 311 + { }, 312 + }; 313 + MODULE_DEVICE_TABLE(spi, m25p_ids); 314 + 315 + 264 316 static struct spi_driver m25p80_driver = { 265 317 .driver = { 266 318 .name = "m25p80", 267 319 .owner = THIS_MODULE, 268 320 }, 269 - .id_table = spi_nor_ids, 321 + .id_table = m25p_ids, 270 322 .probe = m25p_probe, 271 323 .remove = m25p_remove, 272 324
+1 -1
drivers/mtd/nand/omap_elm.c
··· 115 115 116 116 if (!info) { 117 117 dev_err(dev, "Unable to configure elm - device not probed?\n"); 118 - return -ENODEV; 118 + return -EPROBE_DEFER; 119 119 } 120 120 /* ELM cannot detect ECC errors for chunks > 1KB */ 121 121 if (ecc_step_size > ((ELM_ECC_SIZE + 1) / 2)) {
+1 -6
drivers/mtd/spi-nor/fsl-quadspi.c
··· 881 881 882 882 /* iterate the subnodes. */ 883 883 for_each_available_child_of_node(dev->of_node, np) { 884 - const struct spi_device_id *id; 885 884 char modalias[40]; 886 885 887 886 /* skip the holes */ ··· 908 909 if (of_modalias_node(np, modalias, sizeof(modalias)) < 0) 909 910 goto map_failed; 910 911 911 - id = spi_nor_match_id(modalias); 912 - if (!id) 913 - goto map_failed; 914 - 915 912 ret = of_property_read_u32(np, "spi-max-frequency", 916 913 &q->clk_rate); 917 914 if (ret < 0) ··· 916 921 /* set the chip address for READID */ 917 922 fsl_qspi_set_base_addr(q, nor); 918 923 919 - ret = spi_nor_scan(nor, id, SPI_NOR_QUAD); 924 + ret = spi_nor_scan(nor, modalias, SPI_NOR_QUAD); 920 925 if (ret) 921 926 goto map_failed; 922 927
+10 -6
drivers/mtd/spi-nor/spi-nor.c
··· 28 28 29 29 #define JEDEC_MFR(_jedec_id) ((_jedec_id) >> 16) 30 30 31 + static const struct spi_device_id *spi_nor_match_id(const char *name); 32 + 31 33 /* 32 34 * Read the status register, returning its value in the location 33 35 * Return the status register value. ··· 475 473 * more nor chips. This current list focusses on newer chips, which 476 474 * have been converging on command sets which including JEDEC ID. 477 475 */ 478 - const struct spi_device_id spi_nor_ids[] = { 476 + static const struct spi_device_id spi_nor_ids[] = { 479 477 /* Atmel -- some are (confusingly) marketed as "DataFlash" */ 480 478 { "at25fs010", INFO(0x1f6601, 0, 32 * 1024, 4, SECT_4K) }, 481 479 { "at25fs040", INFO(0x1f6604, 0, 64 * 1024, 8, SECT_4K) }, ··· 639 637 { "cat25128", CAT25_INFO(2048, 8, 64, 2, SPI_NOR_NO_ERASE | SPI_NOR_NO_FR) }, 640 638 { }, 641 639 }; 642 - EXPORT_SYMBOL_GPL(spi_nor_ids); 643 640 644 641 static const struct spi_device_id *spi_nor_read_id(struct spi_nor *nor) 645 642 { ··· 912 911 return 0; 913 912 } 914 913 915 - int spi_nor_scan(struct spi_nor *nor, const struct spi_device_id *id, 916 - enum read_mode mode) 914 + int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode) 917 915 { 916 + const struct spi_device_id *id = NULL; 918 917 struct flash_info *info; 919 918 struct device *dev = nor->dev; 920 919 struct mtd_info *mtd = nor->mtd; ··· 925 924 ret = spi_nor_check(nor); 926 925 if (ret) 927 926 return ret; 927 + 928 + id = spi_nor_match_id(name); 929 + if (!id) 930 + return -ENOENT; 928 931 929 932 info = (void *)id->driver_data; 930 933 ··· 1118 1113 } 1119 1114 EXPORT_SYMBOL_GPL(spi_nor_scan); 1120 1115 1121 - const struct spi_device_id *spi_nor_match_id(char *name) 1116 + static const struct spi_device_id *spi_nor_match_id(const char *name) 1122 1117 { 1123 1118 const struct spi_device_id *id = spi_nor_ids; 1124 1119 ··· 1129 1124 } 1130 1125 return NULL; 1131 1126 } 1132 - EXPORT_SYMBOL_GPL(spi_nor_match_id); 1133 1127 1134 
1128 MODULE_LICENSE("GPL"); 1135 1129 MODULE_AUTHOR("Huang Shijie <shijie8@gmail.com>");
+10 -1
drivers/net/ethernet/broadcom/genet/bcmgenet.c
··· 2140 2140 goto err_irq0; 2141 2141 } 2142 2142 2143 + /* Re-configure the port multiplexer towards the PHY device */ 2144 + bcmgenet_mii_config(priv->dev, false); 2145 + 2146 + phy_connect_direct(dev, priv->phydev, bcmgenet_mii_setup, 2147 + priv->phy_interface); 2148 + 2143 2149 bcmgenet_netif_start(dev); 2144 2150 2145 2151 return 0; ··· 2189 2183 netif_dbg(priv, ifdown, dev, "bcmgenet_close\n"); 2190 2184 2191 2185 bcmgenet_netif_stop(dev); 2186 + 2187 + /* Really kill the PHY state machine and disconnect from it */ 2188 + phy_disconnect(priv->phydev); 2192 2189 2193 2190 /* Disable MAC receive */ 2194 2191 umac_enable_set(priv, CMD_RX_EN, false); ··· 2694 2685 2695 2686 phy_init_hw(priv->phydev); 2696 2687 /* Speed settings must be restored */ 2697 - bcmgenet_mii_config(priv->dev); 2688 + bcmgenet_mii_config(priv->dev, false); 2698 2689 2699 2690 /* disable ethernet MAC while updating its registers */ 2700 2691 umac_enable_set(priv, CMD_TX_EN | CMD_RX_EN, false);
+2 -1
drivers/net/ethernet/broadcom/genet/bcmgenet.h
··· 617 617 618 618 /* MDIO routines */ 619 619 int bcmgenet_mii_init(struct net_device *dev); 620 - int bcmgenet_mii_config(struct net_device *dev); 620 + int bcmgenet_mii_config(struct net_device *dev, bool init); 621 621 void bcmgenet_mii_exit(struct net_device *dev); 622 622 void bcmgenet_mii_reset(struct net_device *dev); 623 + void bcmgenet_mii_setup(struct net_device *dev); 623 624 624 625 /* Wake-on-LAN routines */ 625 626 void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol);
+5 -4
drivers/net/ethernet/broadcom/genet/bcmmii.c
··· 77 77 /* setup netdev link state when PHY link status change and 78 78 * update UMAC and RGMII block when link up 79 79 */ 80 - static void bcmgenet_mii_setup(struct net_device *dev) 80 + void bcmgenet_mii_setup(struct net_device *dev) 81 81 { 82 82 struct bcmgenet_priv *priv = netdev_priv(dev); 83 83 struct phy_device *phydev = priv->phydev; ··· 211 211 bcmgenet_sys_writel(priv, reg, SYS_PORT_CTRL); 212 212 } 213 213 214 - int bcmgenet_mii_config(struct net_device *dev) 214 + int bcmgenet_mii_config(struct net_device *dev, bool init) 215 215 { 216 216 struct bcmgenet_priv *priv = netdev_priv(dev); 217 217 struct phy_device *phydev = priv->phydev; ··· 298 298 return -EINVAL; 299 299 } 300 300 301 - dev_info(kdev, "configuring instance for %s\n", phy_name); 301 + if (init) 302 + dev_info(kdev, "configuring instance for %s\n", phy_name); 302 303 303 304 return 0; 304 305 } ··· 351 350 * PHY speed which is needed for bcmgenet_mii_config() to configure 352 351 * things appropriately. 353 352 */ 354 - ret = bcmgenet_mii_config(dev); 353 + ret = bcmgenet_mii_config(dev, true); 355 354 if (ret) { 356 355 phy_disconnect(priv->phydev); 357 356 return ret;
+21 -10
drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
··· 79 79 app.protocol = dcb->app_priority[i].protocolid; 80 80 81 81 if (dcb->dcb_version == FW_PORT_DCB_VER_IEEE) { 82 + app.priority = dcb->app_priority[i].user_prio_map; 82 83 app.selector = dcb->app_priority[i].sel_field + 1; 83 - err = dcb_ieee_setapp(dev, &app); 84 + err = dcb_ieee_delapp(dev, &app); 84 85 } else { 85 86 app.selector = !!(dcb->app_priority[i].sel_field); 86 87 err = dcb_setapp(dev, &app); ··· 123 122 case CXGB4_DCB_INPUT_FW_ENABLED: { 124 123 /* we're going to use Firmware DCB */ 125 124 dcb->state = CXGB4_DCB_STATE_FW_INCOMPLETE; 126 - dcb->supported = CXGB4_DCBX_FW_SUPPORT; 125 + dcb->supported = DCB_CAP_DCBX_LLD_MANAGED; 126 + if (dcb->dcb_version == FW_PORT_DCB_VER_IEEE) 127 + dcb->supported |= DCB_CAP_DCBX_VER_IEEE; 128 + else 129 + dcb->supported |= DCB_CAP_DCBX_VER_CEE; 127 130 break; 128 131 } 129 132 ··· 441 436 *up_tc_map = (1 << tc); 442 437 443 438 /* prio_type is link strict */ 444 - *prio_type = 0x2; 439 + if (*pgid != 0xF) 440 + *prio_type = 0x2; 445 441 } 446 442 447 443 static void cxgb4_getpgtccfg_tx(struct net_device *dev, int tc, 448 444 u8 *prio_type, u8 *pgid, u8 *bw_per, 449 445 u8 *up_tc_map) 450 446 { 451 - return cxgb4_getpgtccfg(dev, tc, prio_type, pgid, bw_per, up_tc_map, 1); 447 + /* tc 0 is written at MSB position */ 448 + return cxgb4_getpgtccfg(dev, (7 - tc), prio_type, pgid, bw_per, 449 + up_tc_map, 1); 452 450 } 453 451 454 452 ··· 459 451 u8 *prio_type, u8 *pgid, u8 *bw_per, 460 452 u8 *up_tc_map) 461 453 { 462 - return cxgb4_getpgtccfg(dev, tc, prio_type, pgid, bw_per, up_tc_map, 0); 454 + /* tc 0 is written at MSB position */ 455 + return cxgb4_getpgtccfg(dev, (7 - tc), prio_type, pgid, bw_per, 456 + up_tc_map, 0); 463 457 } 464 458 465 459 static void cxgb4_setpgtccfg_tx(struct net_device *dev, int tc, ··· 471 461 struct fw_port_cmd pcmd; 472 462 struct port_info *pi = netdev2pinfo(dev); 473 463 struct adapter *adap = pi->adapter; 464 + int fw_tc = 7 - tc; 474 465 u32 _pgid; 475 466 int err; 476 467 ··· 
490 479 } 491 480 492 481 _pgid = be32_to_cpu(pcmd.u.dcb.pgid.pgid); 493 - _pgid &= ~(0xF << (tc * 4)); 494 - _pgid |= pgid << (tc * 4); 482 + _pgid &= ~(0xF << (fw_tc * 4)); 483 + _pgid |= pgid << (fw_tc * 4); 495 484 pcmd.u.dcb.pgid.pgid = cpu_to_be32(_pgid); 496 485 497 486 INIT_PORT_DCB_WRITE_CMD(pcmd, pi->port_id); ··· 604 593 priority >= CXGB4_MAX_PRIORITY) 605 594 *pfccfg = 0; 606 595 else 607 - *pfccfg = (pi->dcb.pfcen >> priority) & 1; 596 + *pfccfg = (pi->dcb.pfcen >> (7 - priority)) & 1; 608 597 } 609 598 610 599 /* Enable/disable Priority Pause Frames for the specified Traffic Class ··· 629 618 pcmd.u.dcb.pfc.pfcen = pi->dcb.pfcen; 630 619 631 620 if (pfccfg) 632 - pcmd.u.dcb.pfc.pfcen |= (1 << priority); 621 + pcmd.u.dcb.pfc.pfcen |= (1 << (7 - priority)); 633 622 else 634 - pcmd.u.dcb.pfc.pfcen &= (~(1 << priority)); 623 + pcmd.u.dcb.pfc.pfcen &= (~(1 << (7 - priority))); 635 624 636 625 err = t4_wr_mbox(adap, adap->mbox, &pcmd, sizeof(pcmd), &pcmd); 637 626 if (err != FW_PORT_DCB_CFG_SUCCESS) {
+27 -3
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 2914 2914 int t4_sge_init(struct adapter *adap) 2915 2915 { 2916 2916 struct sge *s = &adap->sge; 2917 - u32 sge_control, sge_conm_ctrl; 2917 + u32 sge_control, sge_control2, sge_conm_ctrl; 2918 + unsigned int ingpadboundary, ingpackboundary; 2918 2919 int ret, egress_threshold; 2919 2920 2920 2921 /* ··· 2925 2924 sge_control = t4_read_reg(adap, SGE_CONTROL); 2926 2925 s->pktshift = PKTSHIFT_GET(sge_control); 2927 2926 s->stat_len = (sge_control & EGRSTATUSPAGESIZE_MASK) ? 128 : 64; 2928 - s->fl_align = 1 << (INGPADBOUNDARY_GET(sge_control) + 2929 - X_INGPADBOUNDARY_SHIFT); 2927 + 2928 + /* T4 uses a single control field to specify both the PCIe Padding and 2929 + * Packing Boundary. T5 introduced the ability to specify these 2930 + * separately. The actual Ingress Packet Data alignment boundary 2931 + * within Packed Buffer Mode is the maximum of these two 2932 + * specifications. 2933 + */ 2934 + ingpadboundary = 1 << (INGPADBOUNDARY_GET(sge_control) + 2935 + X_INGPADBOUNDARY_SHIFT); 2936 + if (is_t4(adap->params.chip)) { 2937 + s->fl_align = ingpadboundary; 2938 + } else { 2939 + /* T5 has a different interpretation of one of the PCIe Packing 2940 + * Boundary values. 2941 + */ 2942 + sge_control2 = t4_read_reg(adap, SGE_CONTROL2_A); 2943 + ingpackboundary = INGPACKBOUNDARY_G(sge_control2); 2944 + if (ingpackboundary == INGPACKBOUNDARY_16B_X) 2945 + ingpackboundary = 16; 2946 + else 2947 + ingpackboundary = 1 << (ingpackboundary + 2948 + INGPACKBOUNDARY_SHIFT_X); 2949 + 2950 + s->fl_align = max(ingpadboundary, ingpackboundary); 2951 + } 2930 2952 2931 2953 if (adap->flags & USING_SOFT_PARAMS) 2932 2954 ret = t4_sge_init_soft(adap);
+45 -6
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 3130 3130 HOSTPAGESIZEPF6(sge_hps) | 3131 3131 HOSTPAGESIZEPF7(sge_hps)); 3132 3132 3133 - t4_set_reg_field(adap, SGE_CONTROL, 3134 - INGPADBOUNDARY_MASK | 3135 - EGRSTATUSPAGESIZE_MASK, 3136 - INGPADBOUNDARY(fl_align_log - 5) | 3137 - EGRSTATUSPAGESIZE(stat_len != 64)); 3138 - 3133 + if (is_t4(adap->params.chip)) { 3134 + t4_set_reg_field(adap, SGE_CONTROL, 3135 + INGPADBOUNDARY_MASK | 3136 + EGRSTATUSPAGESIZE_MASK, 3137 + INGPADBOUNDARY(fl_align_log - 5) | 3138 + EGRSTATUSPAGESIZE(stat_len != 64)); 3139 + } else { 3140 + /* T5 introduced the separation of the Free List Padding and 3141 + * Packing Boundaries. Thus, we can select a smaller Padding 3142 + * Boundary to avoid uselessly chewing up PCIe Link and Memory 3143 + * Bandwidth, and use a Packing Boundary which is large enough 3144 + * to avoid false sharing between CPUs, etc. 3145 + * 3146 + * For the PCI Link, the smaller the Padding Boundary the 3147 + * better. For the Memory Controller, a smaller Padding 3148 + * Boundary is better until we cross under the Memory Line 3149 + * Size (the minimum unit of transfer to/from Memory). If we 3150 + * have a Padding Boundary which is smaller than the Memory 3151 + * Line Size, that'll involve a Read-Modify-Write cycle on the 3152 + * Memory Controller which is never good. For T5 the smallest 3153 + * Padding Boundary which we can select is 32 bytes which is 3154 + * larger than any known Memory Controller Line Size so we'll 3155 + * use that. 3156 + * 3157 + * T5 has a different interpretation of the "0" value for the 3158 + * Packing Boundary. This corresponds to 16 bytes instead of 3159 + * the expected 32 bytes. We never have a Packing Boundary 3160 + * less than 32 bytes so we can't use that special value but 3161 + * on the other hand, if we wanted 32 bytes, the best we can 3162 + * really do is 64 bytes. 
3163 + */ 3164 + if (fl_align <= 32) { 3165 + fl_align = 64; 3166 + fl_align_log = 6; 3167 + } 3168 + t4_set_reg_field(adap, SGE_CONTROL, 3169 + INGPADBOUNDARY_MASK | 3170 + EGRSTATUSPAGESIZE_MASK, 3171 + INGPADBOUNDARY(INGPCIEBOUNDARY_32B_X) | 3172 + EGRSTATUSPAGESIZE(stat_len != 64)); 3173 + t4_set_reg_field(adap, SGE_CONTROL2_A, 3174 + INGPACKBOUNDARY_V(INGPACKBOUNDARY_M), 3175 + INGPACKBOUNDARY_V(fl_align_log - 3176 + INGPACKBOUNDARY_SHIFT_X)); 3177 + } 3139 3178 /* 3140 3179 * Adjust various SGE Free List Host Buffer Sizes. 3141 3180 *
+10
drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
··· 95 95 #define X_INGPADBOUNDARY_SHIFT 5 96 96 97 97 #define SGE_CONTROL 0x1008 98 + #define SGE_CONTROL2_A 0x1124 98 99 #define DCASYSTYPE 0x00080000U 99 100 #define RXPKTCPLMODE_MASK 0x00040000U 100 101 #define RXPKTCPLMODE_SHIFT 18 ··· 107 106 #define PKTSHIFT_SHIFT 10 108 107 #define PKTSHIFT(x) ((x) << PKTSHIFT_SHIFT) 109 108 #define PKTSHIFT_GET(x) (((x) & PKTSHIFT_MASK) >> PKTSHIFT_SHIFT) 109 + #define INGPCIEBOUNDARY_32B_X 0 110 110 #define INGPCIEBOUNDARY_MASK 0x00000380U 111 111 #define INGPCIEBOUNDARY_SHIFT 7 112 112 #define INGPCIEBOUNDARY(x) ((x) << INGPCIEBOUNDARY_SHIFT) ··· 116 114 #define INGPADBOUNDARY(x) ((x) << INGPADBOUNDARY_SHIFT) 117 115 #define INGPADBOUNDARY_GET(x) (((x) & INGPADBOUNDARY_MASK) \ 118 116 >> INGPADBOUNDARY_SHIFT) 117 + #define INGPACKBOUNDARY_16B_X 0 118 + #define INGPACKBOUNDARY_SHIFT_X 5 119 + 120 + #define INGPACKBOUNDARY_S 16 121 + #define INGPACKBOUNDARY_M 0x7U 122 + #define INGPACKBOUNDARY_V(x) ((x) << INGPACKBOUNDARY_S) 123 + #define INGPACKBOUNDARY_G(x) (((x) >> INGPACKBOUNDARY_S) \ 124 + & INGPACKBOUNDARY_M) 119 125 #define EGRPCIEBOUNDARY_MASK 0x0000000eU 120 126 #define EGRPCIEBOUNDARY_SHIFT 1 121 127 #define EGRPCIEBOUNDARY(x) ((x) << EGRPCIEBOUNDARY_SHIFT)
+8
drivers/net/ethernet/chelsio/cxgb4vf/adapter.h
··· 299 299 u16 timer_val[SGE_NTIMERS]; /* interrupt holdoff timer array */ 300 300 u8 counter_val[SGE_NCOUNTERS]; /* interrupt RX threshold array */ 301 301 302 + /* Decoded Adapter Parameters. 303 + */ 304 + u32 fl_pg_order; /* large page allocation size */ 305 + u32 stat_len; /* length of status page at ring end */ 306 + u32 pktshift; /* padding between CPL & packet data */ 307 + u32 fl_align; /* response queue message alignment */ 308 + u32 fl_starve_thres; /* Free List starvation threshold */ 309 + 302 310 /* 303 311 * Reverse maps from Absolute Queue IDs to associated queue pointers. 304 312 * The absolute Queue IDs are in a compact range which start at a
+90 -46
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
··· 51 51 #include "../cxgb4/t4_msg.h" 52 52 53 53 /* 54 - * Decoded Adapter Parameters. 55 - */ 56 - static u32 FL_PG_ORDER; /* large page allocation size */ 57 - static u32 STAT_LEN; /* length of status page at ring end */ 58 - static u32 PKTSHIFT; /* padding between CPL and packet data */ 59 - static u32 FL_ALIGN; /* response queue message alignment */ 60 - 61 - /* 62 54 * Constants ... 63 55 */ 64 56 enum { ··· 92 100 */ 93 101 TX_QCHECK_PERIOD = (HZ / 2), 94 102 MAX_TIMER_TX_RECLAIM = 100, 95 - 96 - /* 97 - * An FL with <= FL_STARVE_THRES buffers is starving and a periodic 98 - * timer will attempt to refill it. 99 - */ 100 - FL_STARVE_THRES = 4, 101 103 102 104 /* 103 105 * Suspend an Ethernet TX queue with fewer available descriptors than ··· 250 264 251 265 /** 252 266 * fl_starving - return whether a Free List is starving. 267 + * @adapter: pointer to the adapter 253 268 * @fl: the Free List 254 269 * 255 270 * Tests specified Free List to see whether the number of buffers 256 271 * available to the hardware has falled below our "starvation" 257 272 * threshold. 258 273 */ 259 - static inline bool fl_starving(const struct sge_fl *fl) 274 + static inline bool fl_starving(const struct adapter *adapter, 275 + const struct sge_fl *fl) 260 276 { 261 - return fl->avail - fl->pend_cred <= FL_STARVE_THRES; 277 + const struct sge *s = &adapter->sge; 278 + 279 + return fl->avail - fl->pend_cred <= s->fl_starve_thres; 262 280 } 263 281 264 282 /** ··· 447 457 448 458 /** 449 459 * get_buf_size - return the size of an RX Free List buffer. 460 + * @adapter: pointer to the associated adapter 450 461 * @sdesc: pointer to the software buffer descriptor 451 462 */ 452 - static inline int get_buf_size(const struct rx_sw_desc *sdesc) 463 + static inline int get_buf_size(const struct adapter *adapter, 464 + const struct rx_sw_desc *sdesc) 453 465 { 454 - return FL_PG_ORDER > 0 && (sdesc->dma_addr & RX_LARGE_BUF) 455 - ? 
(PAGE_SIZE << FL_PG_ORDER) 456 - : PAGE_SIZE; 466 + const struct sge *s = &adapter->sge; 467 + 468 + return (s->fl_pg_order > 0 && (sdesc->dma_addr & RX_LARGE_BUF) 469 + ? (PAGE_SIZE << s->fl_pg_order) : PAGE_SIZE); 457 470 } 458 471 459 472 /** ··· 476 483 477 484 if (is_buf_mapped(sdesc)) 478 485 dma_unmap_page(adapter->pdev_dev, get_buf_addr(sdesc), 479 - get_buf_size(sdesc), PCI_DMA_FROMDEVICE); 486 + get_buf_size(adapter, sdesc), 487 + PCI_DMA_FROMDEVICE); 480 488 put_page(sdesc->page); 481 489 sdesc->page = NULL; 482 490 if (++fl->cidx == fl->size) ··· 505 511 506 512 if (is_buf_mapped(sdesc)) 507 513 dma_unmap_page(adapter->pdev_dev, get_buf_addr(sdesc), 508 - get_buf_size(sdesc), PCI_DMA_FROMDEVICE); 514 + get_buf_size(adapter, sdesc), 515 + PCI_DMA_FROMDEVICE); 509 516 sdesc->page = NULL; 510 517 if (++fl->cidx == fl->size) 511 518 fl->cidx = 0; ··· 584 589 static unsigned int refill_fl(struct adapter *adapter, struct sge_fl *fl, 585 590 int n, gfp_t gfp) 586 591 { 592 + struct sge *s = &adapter->sge; 587 593 struct page *page; 588 594 dma_addr_t dma_addr; 589 595 unsigned int cred = fl->avail; ··· 606 610 * If we don't support large pages, drop directly into the small page 607 611 * allocation code. 
608 612 */ 609 - if (FL_PG_ORDER == 0) 613 + if (s->fl_pg_order == 0) 610 614 goto alloc_small_pages; 611 615 612 616 while (n) { 613 - page = __dev_alloc_pages(gfp, FL_PG_ORDER); 617 + page = __dev_alloc_pages(gfp, s->fl_pg_order); 614 618 if (unlikely(!page)) { 615 619 /* 616 620 * We've failed inour attempt to allocate a "large ··· 620 624 fl->large_alloc_failed++; 621 625 break; 622 626 } 623 - poison_buf(page, PAGE_SIZE << FL_PG_ORDER); 627 + poison_buf(page, PAGE_SIZE << s->fl_pg_order); 624 628 625 629 dma_addr = dma_map_page(adapter->pdev_dev, page, 0, 626 - PAGE_SIZE << FL_PG_ORDER, 630 + PAGE_SIZE << s->fl_pg_order, 627 631 PCI_DMA_FROMDEVICE); 628 632 if (unlikely(dma_mapping_error(adapter->pdev_dev, dma_addr))) { 629 633 /* ··· 634 638 * because DMA mapping resources are typically 635 639 * critical resources once they become scarse. 636 640 */ 637 - __free_pages(page, FL_PG_ORDER); 641 + __free_pages(page, s->fl_pg_order); 638 642 goto out; 639 643 } 640 644 dma_addr |= RX_LARGE_BUF; ··· 690 694 fl->pend_cred += cred; 691 695 ring_fl_db(adapter, fl); 692 696 693 - if (unlikely(fl_starving(fl))) { 697 + if (unlikely(fl_starving(adapter, fl))) { 694 698 smp_wmb(); 695 699 set_bit(fl->cntxt_id, adapter->sge.starving_fl); 696 700 } ··· 1465 1469 static void do_gro(struct sge_eth_rxq *rxq, const struct pkt_gl *gl, 1466 1470 const struct cpl_rx_pkt *pkt) 1467 1471 { 1472 + struct adapter *adapter = rxq->rspq.adapter; 1473 + struct sge *s = &adapter->sge; 1468 1474 int ret; 1469 1475 struct sk_buff *skb; 1470 1476 ··· 1477 1479 return; 1478 1480 } 1479 1481 1480 - copy_frags(skb, gl, PKTSHIFT); 1481 - skb->len = gl->tot_len - PKTSHIFT; 1482 + copy_frags(skb, gl, s->pktshift); 1483 + skb->len = gl->tot_len - s->pktshift; 1482 1484 skb->data_len = skb->len; 1483 1485 skb->truesize += skb->data_len; 1484 1486 skb->ip_summed = CHECKSUM_UNNECESSARY; ··· 1515 1517 bool csum_ok = pkt->csum_calc && !pkt->err_vec && 1516 1518 (rspq->netdev->features & NETIF_F_RXCSUM); 
1517 1519 struct sge_eth_rxq *rxq = container_of(rspq, struct sge_eth_rxq, rspq); 1520 + struct adapter *adapter = rspq->adapter; 1521 + struct sge *s = &adapter->sge; 1518 1522 1519 1523 /* 1520 1524 * If this is a good TCP packet and we have Generic Receive Offload ··· 1538 1538 rxq->stats.rx_drops++; 1539 1539 return 0; 1540 1540 } 1541 - __skb_pull(skb, PKTSHIFT); 1541 + __skb_pull(skb, s->pktshift); 1542 1542 skb->protocol = eth_type_trans(skb, rspq->netdev); 1543 1543 skb_record_rx_queue(skb, rspq->idx); 1544 1544 rxq->stats.pkts++; ··· 1649 1649 static int process_responses(struct sge_rspq *rspq, int budget) 1650 1650 { 1651 1651 struct sge_eth_rxq *rxq = container_of(rspq, struct sge_eth_rxq, rspq); 1652 + struct adapter *adapter = rspq->adapter; 1653 + struct sge *s = &adapter->sge; 1652 1654 int budget_left = budget; 1653 1655 1654 1656 while (likely(budget_left)) { ··· 1700 1698 BUG_ON(frag >= MAX_SKB_FRAGS); 1701 1699 BUG_ON(rxq->fl.avail == 0); 1702 1700 sdesc = &rxq->fl.sdesc[rxq->fl.cidx]; 1703 - bufsz = get_buf_size(sdesc); 1701 + bufsz = get_buf_size(adapter, sdesc); 1704 1702 fp->page = sdesc->page; 1705 1703 fp->offset = rspq->offset; 1706 1704 fp->size = min(bufsz, len); ··· 1729 1727 */ 1730 1728 ret = rspq->handler(rspq, rspq->cur_desc, &gl); 1731 1729 if (likely(ret == 0)) 1732 - rspq->offset += ALIGN(fp->size, FL_ALIGN); 1730 + rspq->offset += ALIGN(fp->size, s->fl_align); 1733 1731 else 1734 1732 restore_rx_bufs(&gl, &rxq->fl, frag); 1735 1733 } else if (likely(rsp_type == RSP_TYPE_CPL)) { ··· 1966 1964 * schedule napi but the FL is no longer starving. 1967 1965 * No biggie. 
1968 1966 */ 1969 - if (fl_starving(fl)) { 1967 + if (fl_starving(adapter, fl)) { 1970 1968 struct sge_eth_rxq *rxq; 1971 1969 1972 1970 rxq = container_of(fl, struct sge_eth_rxq, fl); ··· 2050 2048 int intr_dest, 2051 2049 struct sge_fl *fl, rspq_handler_t hnd) 2052 2050 { 2051 + struct sge *s = &adapter->sge; 2053 2052 struct port_info *pi = netdev_priv(dev); 2054 2053 struct fw_iq_cmd cmd, rpl; 2055 2054 int ret, iqandst, flsz = 0; ··· 2121 2118 fl->size = roundup(fl->size, FL_PER_EQ_UNIT); 2122 2119 fl->desc = alloc_ring(adapter->pdev_dev, fl->size, 2123 2120 sizeof(__be64), sizeof(struct rx_sw_desc), 2124 - &fl->addr, &fl->sdesc, STAT_LEN); 2121 + &fl->addr, &fl->sdesc, s->stat_len); 2125 2122 if (!fl->desc) { 2126 2123 ret = -ENOMEM; 2127 2124 goto err; ··· 2133 2130 * free list ring) in Egress Queue Units. 2134 2131 */ 2135 2132 flsz = (fl->size / FL_PER_EQ_UNIT + 2136 - STAT_LEN / EQ_UNIT); 2133 + s->stat_len / EQ_UNIT); 2137 2134 2138 2135 /* 2139 2136 * Fill in all the relevant firmware Ingress Queue Command ··· 2221 2218 struct net_device *dev, struct netdev_queue *devq, 2222 2219 unsigned int iqid) 2223 2220 { 2221 + struct sge *s = &adapter->sge; 2224 2222 int ret, nentries; 2225 2223 struct fw_eq_eth_cmd cmd, rpl; 2226 2224 struct port_info *pi = netdev_priv(dev); ··· 2230 2226 * Calculate the size of the hardware TX Queue (including the Status 2231 2227 * Page on the end of the TX Queue) in units of TX Descriptors. 
2232 2228 */ 2233 - nentries = txq->q.size + STAT_LEN / sizeof(struct tx_desc); 2229 + nentries = txq->q.size + s->stat_len / sizeof(struct tx_desc); 2234 2230 2235 2231 /* 2236 2232 * Allocate the hardware ring for the TX ring (with space for its ··· 2239 2235 txq->q.desc = alloc_ring(adapter->pdev_dev, txq->q.size, 2240 2236 sizeof(struct tx_desc), 2241 2237 sizeof(struct tx_sw_desc), 2242 - &txq->q.phys_addr, &txq->q.sdesc, STAT_LEN); 2238 + &txq->q.phys_addr, &txq->q.sdesc, s->stat_len); 2243 2239 if (!txq->q.desc) 2244 2240 return -ENOMEM; 2245 2241 ··· 2312 2308 */ 2313 2309 static void free_txq(struct adapter *adapter, struct sge_txq *tq) 2314 2310 { 2311 + struct sge *s = &adapter->sge; 2312 + 2315 2313 dma_free_coherent(adapter->pdev_dev, 2316 - tq->size * sizeof(*tq->desc) + STAT_LEN, 2314 + tq->size * sizeof(*tq->desc) + s->stat_len, 2317 2315 tq->desc, tq->phys_addr); 2318 2316 tq->cntxt_id = 0; 2319 2317 tq->sdesc = NULL; ··· 2329 2323 static void free_rspq_fl(struct adapter *adapter, struct sge_rspq *rspq, 2330 2324 struct sge_fl *fl) 2331 2325 { 2326 + struct sge *s = &adapter->sge; 2332 2327 unsigned int flid = fl ? fl->cntxt_id : 0xffff; 2333 2328 2334 2329 t4vf_iq_free(adapter, FW_IQ_TYPE_FL_INT_CAP, ··· 2345 2338 if (fl) { 2346 2339 free_rx_bufs(adapter, fl, fl->avail); 2347 2340 dma_free_coherent(adapter->pdev_dev, 2348 - fl->size * sizeof(*fl->desc) + STAT_LEN, 2341 + fl->size * sizeof(*fl->desc) + s->stat_len, 2349 2342 fl->desc, fl->addr); 2350 2343 kfree(fl->sdesc); 2351 2344 fl->sdesc = NULL; ··· 2431 2424 u32 fl0 = sge_params->sge_fl_buffer_size[0]; 2432 2425 u32 fl1 = sge_params->sge_fl_buffer_size[1]; 2433 2426 struct sge *s = &adapter->sge; 2427 + unsigned int ingpadboundary, ingpackboundary; 2434 2428 2435 2429 /* 2436 2430 * Start by vetting the basic SGE parameters which have been set up by ··· 2452 2444 * Now translate the adapter parameters into our internal forms. 
2453 2445 */ 2454 2446 if (fl1) 2455 - FL_PG_ORDER = ilog2(fl1) - PAGE_SHIFT; 2456 - STAT_LEN = ((sge_params->sge_control & EGRSTATUSPAGESIZE_MASK) 2457 - ? 128 : 64); 2458 - PKTSHIFT = PKTSHIFT_GET(sge_params->sge_control); 2459 - FL_ALIGN = 1 << (INGPADBOUNDARY_GET(sge_params->sge_control) + 2460 - SGE_INGPADBOUNDARY_SHIFT); 2447 + s->fl_pg_order = ilog2(fl1) - PAGE_SHIFT; 2448 + s->stat_len = ((sge_params->sge_control & EGRSTATUSPAGESIZE_MASK) 2449 + ? 128 : 64); 2450 + s->pktshift = PKTSHIFT_GET(sge_params->sge_control); 2451 + 2452 + /* T4 uses a single control field to specify both the PCIe Padding and 2453 + * Packing Boundary. T5 introduced the ability to specify these 2454 + * separately. The actual Ingress Packet Data alignment boundary 2455 + * within Packed Buffer Mode is the maximum of these two 2456 + * specifications. (Note that it makes no real practical sense to 2457 + * have the Padding Boundary be larger than the Packing Boundary but you 2458 + * could set the chip up that way and, in fact, legacy T4 code would 2459 + * end up doing this because it would initialize the Padding Boundary and 2460 + * leave the Packing Boundary initialized to 0 (16 bytes).) 2461 + */ 2462 + ingpadboundary = 1 << (INGPADBOUNDARY_GET(sge_params->sge_control) + 2463 + X_INGPADBOUNDARY_SHIFT); 2464 + if (is_t4(adapter->params.chip)) { 2465 + s->fl_align = ingpadboundary; 2466 + } else { 2467 + /* T5 has a different interpretation of one of the PCIe Packing 2468 + * Boundary values. 2469 + */ 2470 + ingpackboundary = INGPACKBOUNDARY_G(sge_params->sge_control2); 2471 + if (ingpackboundary == INGPACKBOUNDARY_16B_X) 2472 + ingpackboundary = 16; 2473 + else 2474 + ingpackboundary = 1 << (ingpackboundary + 2475 + INGPACKBOUNDARY_SHIFT_X); 2476 + 2477 + s->fl_align = max(ingpadboundary, ingpackboundary); 2478 + } 2479 + 2480 + /* A FL with <= fl_starve_thres buffers is starving and a periodic 2481 + * timer will attempt to refill it. 
This needs to be larger than the 2482 + * SGE's Egress Congestion Threshold. If it isn't, then we can get 2483 + * stuck waiting for new packets while the SGE is waiting for us to 2484 + * give it more Free List entries. (Note that the SGE's Egress 2485 + * Congestion Threshold is in units of 2 Free List pointers.) 2486 + */ 2487 + s->fl_starve_thres 2488 + = EGRTHRESHOLD_GET(sge_params->sge_congestion_control)*2 + 1; 2461 2489 2462 2490 /* 2463 2491 * Set up tasklet timers.
+2
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_common.h
··· 134 134 */ 135 135 struct sge_params { 136 136 u32 sge_control; /* padding, boundaries, lengths, etc. */ 137 + u32 sge_control2; /* T5: more of the same */ 137 138 u32 sge_host_page_size; /* RDMA page sizes */ 138 139 u32 sge_queues_per_page; /* RDMA queues/page */ 139 140 u32 sge_user_mode_limits; /* limits for BAR2 user mode accesses */ 140 141 u32 sge_fl_buffer_size[16]; /* free list buffer sizes */ 141 142 u32 sge_ingress_rx_threshold; /* RX counter interrupt threshold[4] */ 143 + u32 sge_congestion_control; /* congestion thresholds, etc. */ 142 144 u32 sge_timer_value_0_and_1; /* interrupt coalescing timer values */ 143 145 u32 sge_timer_value_2_and_3; 144 146 u32 sge_timer_value_4_and_5;
+27 -1
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
··· 468 468 sge_params->sge_timer_value_2_and_3 = vals[5]; 469 469 sge_params->sge_timer_value_4_and_5 = vals[6]; 470 470 471 + /* T4 uses a single control field to specify both the PCIe Padding and 472 + * Packing Boundary. T5 introduced the ability to specify these 473 + * separately with the Padding Boundary in SGE_CONTROL and the Packing 474 + * Boundary in SGE_CONTROL2. So for T5 and later we need to grab 475 + * SGE_CONTROL2 in order to determine how ingress packet data will be 476 + * laid out in Packed Buffer Mode. Unfortunately, older versions of 477 + * the firmware won't let us retrieve SGE_CONTROL2, so if we get a 478 + * failure grabbing it we throw an error since we can't figure out the 479 + * right value. 480 + */ 481 + if (!is_t4(adapter->params.chip)) { 482 + params[0] = (FW_PARAMS_MNEM(FW_PARAMS_MNEM_REG) | 483 + FW_PARAMS_PARAM_XYZ(SGE_CONTROL2_A)); 484 + v = t4vf_query_params(adapter, 1, params, vals); 485 + if (v != FW_SUCCESS) { 486 + dev_err(adapter->pdev_dev, 487 + "Unable to get SGE Control2; " 488 + "probably old firmware.\n"); 489 + return v; 490 + } 491 + sge_params->sge_control2 = vals[0]; 492 + } 493 + 471 494 params[0] = (FW_PARAMS_MNEM(FW_PARAMS_MNEM_REG) | 472 495 FW_PARAMS_PARAM_XYZ(SGE_INGRESS_RX_THRESHOLD)); 473 - v = t4vf_query_params(adapter, 1, params, vals); 496 + params[1] = (FW_PARAMS_MNEM(FW_PARAMS_MNEM_REG) | 497 + FW_PARAMS_PARAM_XYZ(SGE_CONM_CTRL)); 498 + v = t4vf_query_params(adapter, 2, params, vals); 474 499 if (v) 475 500 return v; 476 501 sge_params->sge_ingress_rx_threshold = vals[0]; 502 + sge_params->sge_congestion_control = vals[1]; 477 503 478 504 return 0; 479 505 }
+19 -4
drivers/net/ethernet/freescale/fec_main.c
··· 298 298 return bufaddr; 299 299 } 300 300 301 + static void swap_buffer2(void *dst_buf, void *src_buf, int len) 302 + { 303 + int i; 304 + unsigned int *src = src_buf; 305 + unsigned int *dst = dst_buf; 306 + 307 + for (i = 0; i < len; i += 4, src++, dst++) 308 + *dst = swab32p(src); 309 + } 310 + 301 311 static void fec_dump(struct net_device *ndev) 302 312 { 303 313 struct fec_enet_private *fep = netdev_priv(ndev); ··· 1317 1307 } 1318 1308 1319 1309 static bool fec_enet_copybreak(struct net_device *ndev, struct sk_buff **skb, 1320 - struct bufdesc *bdp, u32 length) 1310 + struct bufdesc *bdp, u32 length, bool swap) 1321 1311 { 1322 1312 struct fec_enet_private *fep = netdev_priv(ndev); 1323 1313 struct sk_buff *new_skb; ··· 1332 1322 dma_sync_single_for_cpu(&fep->pdev->dev, bdp->cbd_bufaddr, 1333 1323 FEC_ENET_RX_FRSIZE - fep->rx_align, 1334 1324 DMA_FROM_DEVICE); 1335 - memcpy(new_skb->data, (*skb)->data, length); 1325 + if (!swap) 1326 + memcpy(new_skb->data, (*skb)->data, length); 1327 + else 1328 + swap_buffer2(new_skb->data, (*skb)->data, length); 1336 1329 *skb = new_skb; 1337 1330 1338 1331 return true; ··· 1365 1352 u16 vlan_tag; 1366 1353 int index = 0; 1367 1354 bool is_copybreak; 1355 + bool need_swap = id_entry->driver_data & FEC_QUIRK_SWAP_FRAME; 1368 1356 1369 1357 #ifdef CONFIG_M532x 1370 1358 flush_cache_all(); ··· 1429 1415 * include that when passing upstream as it messes up 1430 1416 * bridging applications. 
1431 1417 */ 1432 - is_copybreak = fec_enet_copybreak(ndev, &skb, bdp, pkt_len - 4); 1418 + is_copybreak = fec_enet_copybreak(ndev, &skb, bdp, pkt_len - 4, 1419 + need_swap); 1433 1420 if (!is_copybreak) { 1434 1421 skb_new = netdev_alloc_skb(ndev, FEC_ENET_RX_FRSIZE); 1435 1422 if (unlikely(!skb_new)) { ··· 1445 1430 prefetch(skb->data - NET_IP_ALIGN); 1446 1431 skb_put(skb, pkt_len - 4); 1447 1432 data = skb->data; 1448 - if (id_entry->driver_data & FEC_QUIRK_SWAP_FRAME) 1433 + if (!is_copybreak && need_swap) 1449 1434 swap_buffer(data, pkt_len); 1450 1435 1451 1436 /* Extract the enhanced buffer descriptor */
-1
drivers/net/ethernet/intel/ixgbe/ixgbe_phy.c
··· 706 706 707 707 hw->phy.ops.write_reg(hw, MDIO_CTRL1, 708 708 MDIO_MMD_AN, autoneg_reg); 709 - 710 709 return 0; 711 710 } 712 711
+14 -8
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 2291 2291 ret = mlx4_SET_PORT_VXLAN(priv->mdev->dev, priv->port, 2292 2292 VXLAN_STEER_BY_OUTER_MAC, 1); 2293 2293 out: 2294 - if (ret) 2294 + if (ret) { 2295 2295 en_err(priv, "failed setting L2 tunnel configuration ret %d\n", ret); 2296 + return; 2297 + } 2298 + 2299 + /* set offloads */ 2300 + priv->dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_RXCSUM | 2301 + NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL; 2302 + priv->dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL; 2303 + priv->dev->features |= NETIF_F_GSO_UDP_TUNNEL; 2296 2304 } 2297 2305 2298 2306 static void mlx4_en_del_vxlan_offloads(struct work_struct *work) ··· 2308 2300 int ret; 2309 2301 struct mlx4_en_priv *priv = container_of(work, struct mlx4_en_priv, 2310 2302 vxlan_del_task); 2303 + /* unset offloads */ 2304 + priv->dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_RXCSUM | 2305 + NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL); 2306 + priv->dev->hw_features &= ~NETIF_F_GSO_UDP_TUNNEL; 2307 + priv->dev->features &= ~NETIF_F_GSO_UDP_TUNNEL; 2311 2308 2312 2309 ret = mlx4_SET_PORT_VXLAN(priv->mdev->dev, priv->port, 2313 2310 VXLAN_STEER_BY_OUTER_MAC, 0); ··· 2595 2582 2596 2583 if (mdev->dev->caps.steering_mode != MLX4_STEERING_MODE_A0) 2597 2584 dev->priv_flags |= IFF_UNICAST_FLT; 2598 - 2599 - if (mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) { 2600 - dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_RXCSUM | 2601 - NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL; 2602 - dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL; 2603 - dev->features |= NETIF_F_GSO_UDP_TUNNEL; 2604 - } 2605 2585 2606 2586 mdev->pndev[port] = dev; 2607 2587
+1 -2
drivers/net/ethernet/qualcomm/Kconfig
··· 5 5 config NET_VENDOR_QUALCOMM 6 6 bool "Qualcomm devices" 7 7 default y 8 - depends on SPI_MASTER && OF_GPIO 9 8 ---help--- 10 9 If you have a network (Ethernet) card belonging to this class, say Y 11 10 and read the Ethernet-HOWTO, available from ··· 19 20 20 21 config QCA7000 21 22 tristate "Qualcomm Atheros QCA7000 support" 22 - depends on SPI_MASTER && OF_GPIO 23 + depends on SPI_MASTER && OF 23 24 ---help--- 24 25 This SPI protocol driver supports the Qualcomm Atheros QCA7000. 25 26
+50 -11
drivers/net/ethernet/smsc/smsc911x.c
··· 1342 1342 spin_unlock(&pdata->mac_lock); 1343 1343 } 1344 1344 1345 + static int smsc911x_phy_general_power_up(struct smsc911x_data *pdata) 1346 + { 1347 + int rc = 0; 1348 + 1349 + if (!pdata->phy_dev) 1350 + return rc; 1351 + 1352 + /* If the internal PHY is in General Power-Down mode, everything except 1353 + * the management interface is powered down and stays in that condition 1354 + * as long as PHY register bit 0.11 is HIGH. 1355 + * 1356 + * In that case, clear bit 0.11 so the PHY powers up and we can 1357 + * access the PHY registers. 1358 + */ 1359 + rc = phy_read(pdata->phy_dev, MII_BMCR); 1360 + if (rc < 0) { 1361 + SMSC_WARN(pdata, drv, "Failed reading PHY control reg"); 1362 + return rc; 1363 + } 1364 + 1365 + /* If the PHY general power-down bit is not set, it is not necessary 1366 + * to disable the general power-down mode. 1367 + */ 1368 + if (rc & BMCR_PDOWN) { 1369 + rc = phy_write(pdata->phy_dev, MII_BMCR, rc & ~BMCR_PDOWN); 1370 + if (rc < 0) { 1371 + SMSC_WARN(pdata, drv, "Failed writing PHY control reg"); 1372 + return rc; 1373 + } 1374 + 1375 + usleep_range(1000, 1500); 1376 + } 1377 + 1378 + return 0; 1379 + } 1380 + 1345 1381 static int smsc911x_phy_disable_energy_detect(struct smsc911x_data *pdata) 1346 1382 { 1347 1383 int rc = 0; ··· 1392 1356 return rc; 1393 1357 } 1394 1358 1395 - /* 1396 - * If energy is detected the PHY is already awake so is not necessary 1397 - * to disable the energy detect power-down mode. 
1398 - */ 1399 - if ((rc & MII_LAN83C185_EDPWRDOWN) && 1400 - !(rc & MII_LAN83C185_ENERGYON)) { 1359 + /* Only disable if energy detect mode is already enabled */ 1360 + if (rc & MII_LAN83C185_EDPWRDOWN) { 1401 1361 /* Disable energy detect mode for this SMSC Transceivers */ 1402 1362 rc = phy_write(pdata->phy_dev, MII_LAN83C185_CTRL_STATUS, 1403 1363 rc & (~MII_LAN83C185_EDPWRDOWN)); ··· 1402 1370 SMSC_WARN(pdata, drv, "Failed writing PHY control reg"); 1403 1371 return rc; 1404 1372 } 1405 - 1406 - mdelay(1); 1373 + /* Allow PHY to wakeup */ 1374 + mdelay(2); 1407 1375 } 1408 1376 1409 1377 return 0; ··· 1425 1393 1426 1394 /* Only enable if energy detect mode is already disabled */ 1427 1395 if (!(rc & MII_LAN83C185_EDPWRDOWN)) { 1428 - mdelay(100); 1429 1396 /* Enable energy detect mode for this SMSC Transceivers */ 1430 1397 rc = phy_write(pdata->phy_dev, MII_LAN83C185_CTRL_STATUS, 1431 1398 rc | MII_LAN83C185_EDPWRDOWN); ··· 1433 1402 SMSC_WARN(pdata, drv, "Failed writing PHY control reg"); 1434 1403 return rc; 1435 1404 } 1436 - 1437 - mdelay(1); 1438 1405 } 1439 1406 return 0; 1440 1407 } ··· 1442 1413 unsigned int timeout; 1443 1414 unsigned int temp; 1444 1415 int ret; 1416 + 1417 + /* 1418 + * Make sure to power-up the PHY chip before doing a reset, otherwise 1419 + * the reset fails. 1420 + */ 1421 + ret = smsc911x_phy_general_power_up(pdata); 1422 + if (ret) { 1423 + SMSC_WARN(pdata, drv, "Failed to power-up the PHY chip"); 1424 + return ret; 1425 + } 1445 1426 1446 1427 /* 1447 1428 * LAN9210/LAN9211/LAN9220/LAN9221 chips have an internal PHY that
+1 -1
drivers/net/ethernet/ti/cpts.c
··· 264 264 265 265 switch (ptp_class & PTP_CLASS_PMASK) { 266 266 case PTP_CLASS_IPV4: 267 - offset += ETH_HLEN + IPV4_HLEN(data) + UDP_HLEN; 267 + offset += ETH_HLEN + IPV4_HLEN(data + offset) + UDP_HLEN; 268 268 break; 269 269 case PTP_CLASS_IPV6: 270 270 offset += ETH_HLEN + IP6_HLEN + UDP_HLEN;
+2 -2
drivers/net/phy/dp83640.c
··· 791 791 792 792 switch (type & PTP_CLASS_PMASK) { 793 793 case PTP_CLASS_IPV4: 794 - offset += ETH_HLEN + IPV4_HLEN(data) + UDP_HLEN; 794 + offset += ETH_HLEN + IPV4_HLEN(data + offset) + UDP_HLEN; 795 795 break; 796 796 case PTP_CLASS_IPV6: 797 797 offset += ETH_HLEN + IP6_HLEN + UDP_HLEN; ··· 934 934 935 935 switch (type & PTP_CLASS_PMASK) { 936 936 case PTP_CLASS_IPV4: 937 - offset += ETH_HLEN + IPV4_HLEN(data) + UDP_HLEN; 937 + offset += ETH_HLEN + IPV4_HLEN(data + offset) + UDP_HLEN; 938 938 break; 939 939 case PTP_CLASS_IPV6: 940 940 offset += ETH_HLEN + IP6_HLEN + UDP_HLEN;
+24 -12
drivers/net/phy/phy.c
··· 352 352 { 353 353 struct mii_ioctl_data *mii_data = if_mii(ifr); 354 354 u16 val = mii_data->val_in; 355 + bool change_autoneg = false; 355 356 356 357 switch (cmd) { 357 358 case SIOCGMIIPHY: ··· 368 367 if (mii_data->phy_id == phydev->addr) { 369 368 switch (mii_data->reg_num) { 370 369 case MII_BMCR: 371 - if ((val & (BMCR_RESET | BMCR_ANENABLE)) == 0) 370 + if ((val & (BMCR_RESET | BMCR_ANENABLE)) == 0) { 371 + if (phydev->autoneg == AUTONEG_ENABLE) 372 + change_autoneg = true; 372 373 phydev->autoneg = AUTONEG_DISABLE; 373 - else 374 + if (val & BMCR_FULLDPLX) 375 + phydev->duplex = DUPLEX_FULL; 376 + else 377 + phydev->duplex = DUPLEX_HALF; 378 + if (val & BMCR_SPEED1000) 379 + phydev->speed = SPEED_1000; 380 + else if (val & BMCR_SPEED100) 381 + phydev->speed = SPEED_100; 382 + else phydev->speed = SPEED_10; 383 + } 384 + else { 385 + if (phydev->autoneg == AUTONEG_DISABLE) 386 + change_autoneg = true; 374 387 phydev->autoneg = AUTONEG_ENABLE; 375 - if (!phydev->autoneg && (val & BMCR_FULLDPLX)) 376 - phydev->duplex = DUPLEX_FULL; 377 - else 378 - phydev->duplex = DUPLEX_HALF; 379 - if (!phydev->autoneg && (val & BMCR_SPEED1000)) 380 - phydev->speed = SPEED_1000; 381 - else if (!phydev->autoneg && 382 - (val & BMCR_SPEED100)) 383 - phydev->speed = SPEED_100; 388 + } 384 389 break; 385 390 case MII_ADVERTISE: 386 - phydev->advertising = val; 391 + phydev->advertising = mii_adv_to_ethtool_adv_t(val); 392 + change_autoneg = true; 387 393 break; 388 394 default: 389 395 /* do nothing */ ··· 404 396 if (mii_data->reg_num == MII_BMCR && 405 397 val & BMCR_RESET) 406 398 return phy_init_hw(phydev); 399 + 400 + if (change_autoneg) 401 + return phy_start_aneg(phydev); 402 + 407 403 return 0; 408 404 409 405 case SIOCSHWTSTAMP:
+20 -20
drivers/net/ppp/ppp_generic.c
··· 755 755 756 756 err = get_filter(argp, &code); 757 757 if (err >= 0) { 758 + struct bpf_prog *pass_filter = NULL; 758 759 struct sock_fprog_kern fprog = { 759 760 .len = err, 760 761 .filter = code, 761 762 }; 762 763 763 - ppp_lock(ppp); 764 - if (ppp->pass_filter) { 765 - bpf_prog_destroy(ppp->pass_filter); 766 - ppp->pass_filter = NULL; 764 + err = 0; 765 + if (fprog.filter) 766 + err = bpf_prog_create(&pass_filter, &fprog); 767 + if (!err) { 768 + ppp_lock(ppp); 769 + if (ppp->pass_filter) 770 + bpf_prog_destroy(ppp->pass_filter); 771 + ppp->pass_filter = pass_filter; 772 + ppp_unlock(ppp); 767 773 } 768 - if (fprog.filter != NULL) 769 - err = bpf_prog_create(&ppp->pass_filter, 770 - &fprog); 771 - else 772 - err = 0; 773 774 kfree(code); 774 - ppp_unlock(ppp); 775 775 } 776 776 break; 777 777 } ··· 781 781 782 782 err = get_filter(argp, &code); 783 783 if (err >= 0) { 784 + struct bpf_prog *active_filter = NULL; 784 785 struct sock_fprog_kern fprog = { 785 786 .len = err, 786 787 .filter = code, 787 788 }; 788 789 789 - ppp_lock(ppp); 790 - if (ppp->active_filter) { 791 - bpf_prog_destroy(ppp->active_filter); 792 - ppp->active_filter = NULL; 790 + err = 0; 791 + if (fprog.filter) 792 + err = bpf_prog_create(&active_filter, &fprog); 793 + if (!err) { 794 + ppp_lock(ppp); 795 + if (ppp->active_filter) 796 + bpf_prog_destroy(ppp->active_filter); 797 + ppp->active_filter = active_filter; 798 + ppp_unlock(ppp); 793 799 } 794 - if (fprog.filter != NULL) 795 - err = bpf_prog_create(&ppp->active_filter, 796 - &fprog); 797 - else 798 - err = 0; 799 800 kfree(code); 800 - ppp_unlock(ppp); 801 801 } 802 802 break; 803 803 }
+1 -13
drivers/net/usb/asix_devices.c
··· 465 465 return ret; 466 466 } 467 467 468 - ret = asix_sw_reset(dev, AX_SWRESET_IPPD | AX_SWRESET_PRL); 469 - if (ret < 0) 470 - return ret; 471 - 472 - msleep(150); 473 - 474 - ret = asix_sw_reset(dev, AX_SWRESET_CLEAR); 475 - if (ret < 0) 476 - return ret; 477 - 478 - msleep(150); 479 - 480 - ret = asix_sw_reset(dev, embd_phy ? AX_SWRESET_IPRL : AX_SWRESET_PRTE); 468 + ax88772_reset(dev); 481 469 482 470 /* Read PHYID register *AFTER* the PHY was reset properly */ 483 471 phyid = asix_get_phyid(dev);
+21 -10
drivers/net/vxlan.c
··· 275 275 return list_first_entry(&fdb->remotes, struct vxlan_rdst, list); 276 276 } 277 277 278 - /* Find VXLAN socket based on network namespace and UDP port */ 279 - static struct vxlan_sock *vxlan_find_sock(struct net *net, __be16 port) 278 + /* Find VXLAN socket based on network namespace, address family and UDP port */ 279 + static struct vxlan_sock *vxlan_find_sock(struct net *net, 280 + sa_family_t family, __be16 port) 280 281 { 281 282 struct vxlan_sock *vs; 282 283 283 284 hlist_for_each_entry_rcu(vs, vs_head(net, port), hlist) { 284 - if (inet_sk(vs->sock->sk)->inet_sport == port) 285 + if (inet_sk(vs->sock->sk)->inet_sport == port && 286 + inet_sk(vs->sock->sk)->sk.sk_family == family) 285 287 return vs; 286 288 } 287 289 return NULL; ··· 302 300 } 303 301 304 302 /* Look up VNI in a per net namespace table */ 305 - static struct vxlan_dev *vxlan_find_vni(struct net *net, u32 id, __be16 port) 303 + static struct vxlan_dev *vxlan_find_vni(struct net *net, u32 id, 304 + sa_family_t family, __be16 port) 306 305 { 307 306 struct vxlan_sock *vs; 308 307 309 - vs = vxlan_find_sock(net, port); 308 + vs = vxlan_find_sock(net, family, port); 310 309 if (!vs) 311 310 return NULL; 312 311 ··· 623 620 __be16 type; 624 621 int vxlan_len = sizeof(struct vxlanhdr) + sizeof(struct ethhdr); 625 622 int err = -ENOSYS; 623 + 624 + udp_tunnel_gro_complete(skb, nhoff); 626 625 627 626 eh = (struct ethhdr *)(skb->data + nhoff + sizeof(struct vxlanhdr)); 628 627 type = eh->h_proto; ··· 1776 1771 struct vxlan_dev *dst_vxlan; 1777 1772 1778 1773 ip_rt_put(rt); 1779 - dst_vxlan = vxlan_find_vni(vxlan->net, vni, dst_port); 1774 + dst_vxlan = vxlan_find_vni(vxlan->net, vni, 1775 + dst->sa.sa_family, dst_port); 1780 1776 if (!dst_vxlan) 1781 1777 goto tx_error; 1782 1778 vxlan_encap_bypass(skb, vxlan, dst_vxlan); ··· 1831 1825 struct vxlan_dev *dst_vxlan; 1832 1826 1833 1827 dst_release(ndst); 1834 - dst_vxlan = vxlan_find_vni(vxlan->net, vni, dst_port); 1828 + dst_vxlan = 
vxlan_find_vni(vxlan->net, vni, 1829 + dst->sa.sa_family, dst_port); 1835 1830 if (!dst_vxlan) 1836 1831 goto tx_error; 1837 1832 vxlan_encap_bypass(skb, vxlan, dst_vxlan); ··· 1992 1985 struct vxlan_dev *vxlan = netdev_priv(dev); 1993 1986 struct vxlan_net *vn = net_generic(vxlan->net, vxlan_net_id); 1994 1987 struct vxlan_sock *vs; 1988 + bool ipv6 = vxlan->flags & VXLAN_F_IPV6; 1995 1989 1996 1990 dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); 1997 1991 if (!dev->tstats) 1998 1992 return -ENOMEM; 1999 1993 2000 1994 spin_lock(&vn->sock_lock); 2001 - vs = vxlan_find_sock(vxlan->net, vxlan->dst_port); 1995 + vs = vxlan_find_sock(vxlan->net, ipv6 ? AF_INET6 : AF_INET, 1996 + vxlan->dst_port); 2002 1997 if (vs) { 2003 1998 /* If we have a socket with same port already, reuse it */ 2004 1999 atomic_inc(&vs->refcnt); ··· 2394 2385 { 2395 2386 struct vxlan_net *vn = net_generic(net, vxlan_net_id); 2396 2387 struct vxlan_sock *vs; 2388 + bool ipv6 = flags & VXLAN_F_IPV6; 2397 2389 2398 2390 vs = vxlan_socket_create(net, port, rcv, data, flags); 2399 2391 if (!IS_ERR(vs)) ··· 2404 2394 return vs; 2405 2395 2406 2396 spin_lock(&vn->sock_lock); 2407 - vs = vxlan_find_sock(net, port); 2397 + vs = vxlan_find_sock(net, ipv6 ? AF_INET6 : AF_INET, port); 2408 2398 if (vs) { 2409 2399 if (vs->rcv == rcv) 2410 2400 atomic_inc(&vs->refcnt); ··· 2563 2553 nla_get_u8(data[IFLA_VXLAN_UDP_ZERO_CSUM6_RX])) 2564 2554 vxlan->flags |= VXLAN_F_UDP_ZERO_CSUM6_RX; 2565 2555 2566 - if (vxlan_find_vni(net, vni, vxlan->dst_port)) { 2556 + if (vxlan_find_vni(net, vni, use_ipv6 ? AF_INET6 : AF_INET, 2557 + vxlan->dst_port)) { 2567 2558 pr_info("duplicate VNI %u\n", vni); 2568 2559 return -EEXIST; 2569 2560 }
+9 -1
drivers/net/wireless/iwlwifi/mvm/fw.c
··· 284 284 285 285 lockdep_assert_held(&mvm->mutex); 286 286 287 - if (WARN_ON_ONCE(mvm->init_ucode_complete)) 287 + if (WARN_ON_ONCE(mvm->init_ucode_complete || mvm->calibrating)) 288 288 return 0; 289 289 290 290 iwl_init_notification_wait(&mvm->notif_wait, ··· 334 334 goto out; 335 335 } 336 336 337 + mvm->calibrating = true; 338 + 337 339 /* Send TX valid antennas before triggering calibrations */ 338 340 ret = iwl_send_tx_ant_cfg(mvm, mvm->fw->valid_tx_ant); 339 341 if (ret) ··· 360 358 MVM_UCODE_CALIB_TIMEOUT); 361 359 if (!ret) 362 360 mvm->init_ucode_complete = true; 361 + 362 + if (ret && iwl_mvm_is_radio_killed(mvm)) { 363 + IWL_DEBUG_RF_KILL(mvm, "RFKILL while calibrating.\n"); 364 + ret = 1; 365 + } 363 366 goto out; 364 367 365 368 error: 366 369 iwl_remove_notification(&mvm->notif_wait, &calib_wait); 367 370 out: 371 + mvm->calibrating = false; 368 372 if (iwlmvm_mod_params.init_dbg && !mvm->nvm_data) { 369 373 /* we want to debug INIT and we have no NVM - fake */ 370 374 mvm->nvm_data = kzalloc(sizeof(struct iwl_nvm_data) +
+1
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 825 825 826 826 mvm->scan_status = IWL_MVM_SCAN_NONE; 827 827 mvm->ps_disabled = false; 828 + mvm->calibrating = false; 828 829 829 830 /* just in case one was running */ 830 831 ieee80211_remain_on_channel_expired(mvm->hw);
+1
drivers/net/wireless/iwlwifi/mvm/mvm.h
··· 548 548 enum iwl_ucode_type cur_ucode; 549 549 bool ucode_loaded; 550 550 bool init_ucode_complete; 551 + bool calibrating; 551 552 u32 error_event_table; 552 553 u32 log_event_table; 553 554 u32 umac_error_event_table;
+11 -1
drivers/net/wireless/iwlwifi/mvm/ops.c
··· 427 427 } 428 428 mvm->sf_state = SF_UNINIT; 429 429 mvm->low_latency_agg_frame_limit = 6; 430 + mvm->cur_ucode = IWL_UCODE_INIT; 430 431 431 432 mutex_init(&mvm->mutex); 432 433 mutex_init(&mvm->d0i3_suspend_mutex); ··· 758 757 static bool iwl_mvm_set_hw_rfkill_state(struct iwl_op_mode *op_mode, bool state) 759 758 { 760 759 struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode); 760 + bool calibrating = ACCESS_ONCE(mvm->calibrating); 761 761 762 762 if (state) 763 763 set_bit(IWL_MVM_STATUS_HW_RFKILL, &mvm->status); ··· 767 765 768 766 wiphy_rfkill_set_hw_state(mvm->hw->wiphy, iwl_mvm_is_radio_killed(mvm)); 769 767 770 - return state && mvm->cur_ucode != IWL_UCODE_INIT; 768 + /* iwl_run_init_mvm_ucode is waiting for results, abort it */ 769 + if (calibrating) 770 + iwl_abort_notification_waits(&mvm->notif_wait); 771 + 772 + /* 773 + * Stop the device if we run OPERATIONAL firmware or if we are in the 774 + * middle of the calibrations. 775 + */ 776 + return state && (mvm->cur_ucode != IWL_UCODE_INIT || calibrating); 771 777 } 772 778 773 779 static void iwl_mvm_free_skb(struct iwl_op_mode *op_mode, struct sk_buff *skb)
+2 -2
drivers/net/wireless/iwlwifi/pcie/trans.c
··· 913 913 * restart. So don't process again if the device is 914 914 * already dead. 915 915 */ 916 - if (test_bit(STATUS_DEVICE_ENABLED, &trans->status)) { 916 + if (test_and_clear_bit(STATUS_DEVICE_ENABLED, &trans->status)) { 917 + IWL_DEBUG_INFO(trans, "DEVICE_ENABLED bit was set and is now cleared\n"); 917 918 iwl_pcie_tx_stop(trans); 918 919 iwl_pcie_rx_stop(trans); 919 920 ··· 944 943 /* clear all status bits */ 945 944 clear_bit(STATUS_SYNC_HCMD_ACTIVE, &trans->status); 946 945 clear_bit(STATUS_INT_ENABLED, &trans->status); 947 - clear_bit(STATUS_DEVICE_ENABLED, &trans->status); 948 946 clear_bit(STATUS_TPOWER_PMI, &trans->status); 949 947 clear_bit(STATUS_RFKILL, &trans->status); 950 948
+3 -1
drivers/net/wireless/mac80211_hwsim.c
··· 2191 2191 if (err != 0) { 2192 2192 printk(KERN_DEBUG "mac80211_hwsim: device_bind_driver failed (%d)\n", 2193 2193 err); 2194 - goto failed_hw; 2194 + goto failed_bind; 2195 2195 } 2196 2196 2197 2197 skb_queue_head_init(&data->pending); ··· 2397 2397 return idx; 2398 2398 2399 2399 failed_hw: 2400 + device_release_driver(data->dev); 2401 + failed_bind: 2400 2402 device_unregister(data->dev); 2401 2403 failed_drvdata: 2402 2404 ieee80211_free_hw(hw);
+22 -66
drivers/of/base.c
··· 1280 1280 EXPORT_SYMBOL_GPL(of_property_read_string); 1281 1281 1282 1282 /** 1283 - * of_property_read_string_index - Find and read a string from a multiple 1284 - * strings property. 1285 - * @np: device node from which the property value is to be read. 1286 - * @propname: name of the property to be searched. 1287 - * @index: index of the string in the list of strings 1288 - * @out_string: pointer to null terminated return string, modified only if 1289 - * return value is 0. 1290 - * 1291 - * Search for a property in a device tree node and retrieve a null 1292 - * terminated string value (pointer to data, not a copy) in the list of strings 1293 - * contained in that property. 1294 - * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if 1295 - * property does not have a value, and -EILSEQ if the string is not 1296 - * null-terminated within the length of the property data. 1297 - * 1298 - * The out_string pointer is modified only if a valid string can be decoded. 
1299 - */ 1300 - int of_property_read_string_index(struct device_node *np, const char *propname, 1301 - int index, const char **output) 1302 - { 1303 - struct property *prop = of_find_property(np, propname, NULL); 1304 - int i = 0; 1305 - size_t l = 0, total = 0; 1306 - const char *p; 1307 - 1308 - if (!prop) 1309 - return -EINVAL; 1310 - if (!prop->value) 1311 - return -ENODATA; 1312 - if (strnlen(prop->value, prop->length) >= prop->length) 1313 - return -EILSEQ; 1314 - 1315 - p = prop->value; 1316 - 1317 - for (i = 0; total < prop->length; total += l, p += l) { 1318 - l = strlen(p) + 1; 1319 - if (i++ == index) { 1320 - *output = p; 1321 - return 0; 1322 - } 1323 - } 1324 - return -ENODATA; 1325 - } 1326 - EXPORT_SYMBOL_GPL(of_property_read_string_index); 1327 - 1328 - /** 1329 1283 * of_property_match_string() - Find string in a list and return index 1330 1284 * @np: pointer to node containing string list property 1331 1285 * @propname: string list property name ··· 1305 1351 end = p + prop->length; 1306 1352 1307 1353 for (i = 0; p < end; i++, p += l) { 1308 - l = strlen(p) + 1; 1354 + l = strnlen(p, end - p) + 1; 1309 1355 if (p + l > end) 1310 1356 return -EILSEQ; 1311 1357 pr_debug("comparing %s with %s\n", string, p); ··· 1317 1363 EXPORT_SYMBOL_GPL(of_property_match_string); 1318 1364 1319 1365 /** 1320 - * of_property_count_strings - Find and return the number of strings from a 1321 - * multiple strings property. 1366 + * of_property_read_string_helper() - Utility helper for parsing string properties 1322 1367 * @np: device node from which the property value is to be read. 1323 1368 * @propname: name of the property to be searched. 1369 + * @out_strs: output array of string pointers. 1370 + * @sz: number of array elements to read. 1371 + * @skip: Number of strings to skip over at beginning of list. 1324 1372 * 1325 - * Search for a property in a device tree node and retrieve the number of null 1326 - * terminated string contain in it. 
Returns the number of strings on 1327 - * success, -EINVAL if the property does not exist, -ENODATA if property 1328 - * does not have a value, and -EILSEQ if the string is not null-terminated 1329 - * within the length of the property data. 1373 + * Don't call this function directly. It is a utility helper for the 1374 + * of_property_read_string*() family of functions. 1330 1375 */ 1331 - int of_property_count_strings(struct device_node *np, const char *propname) 1376 + int of_property_read_string_helper(struct device_node *np, const char *propname, 1377 + const char **out_strs, size_t sz, int skip) 1332 1378 { 1333 1379 struct property *prop = of_find_property(np, propname, NULL); 1334 - int i = 0; 1335 - size_t l = 0, total = 0; 1336 - const char *p; 1380 + int l = 0, i = 0; 1381 + const char *p, *end; 1337 1382 1338 1383 if (!prop) 1339 1384 return -EINVAL; 1340 1385 if (!prop->value) 1341 1386 return -ENODATA; 1342 - if (strnlen(prop->value, prop->length) >= prop->length) 1343 - return -EILSEQ; 1344 - 1345 1387 p = prop->value; 1388 + end = p + prop->length; 1346 1389 1347 - for (i = 0; total < prop->length; total += l, p += l, i++) 1348 - l = strlen(p) + 1; 1349 - 1350 - return i; 1390 + for (i = 0; p < end && (!out_strs || i < skip + sz); i++, p += l) { 1391 + l = strnlen(p, end - p) + 1; 1392 + if (p + l > end) 1393 + return -EILSEQ; 1394 + if (out_strs && i >= skip) 1395 + *out_strs++ = p; 1396 + } 1397 + i -= skip; 1398 + return i <= 0 ? -ENODATA : i; 1351 1399 } 1352 - EXPORT_SYMBOL_GPL(of_property_count_strings); 1400 + EXPORT_SYMBOL_GPL(of_property_read_string_helper); 1353 1401 1354 1402 void of_print_phandle_args(const char *msg, const struct of_phandle_args *args) 1355 1403 {
+60 -6
drivers/of/selftest.c
··· 339 339 selftest(rc == -EINVAL, "expected:%i got:%i\n", -EINVAL, rc);
340 340 }
341 341
342 - static void __init of_selftest_property_match_string(void)
342 + static void __init of_selftest_property_string(void)
343 343 {
344 + const char *strings[4];
344 345 struct device_node *np;
345 346 int rc;
346 347
··· 358 357 rc = of_property_match_string(np, "phandle-list-names", "third");
359 358 selftest(rc == 2, "third expected:0 got:%i\n", rc);
360 359 rc = of_property_match_string(np, "phandle-list-names", "fourth");
361 - selftest(rc == -ENODATA, "unmatched string; rc=%i", rc);
360 + selftest(rc == -ENODATA, "unmatched string; rc=%i\n", rc);
362 361 rc = of_property_match_string(np, "missing-property", "blah");
363 - selftest(rc == -EINVAL, "missing property; rc=%i", rc);
362 + selftest(rc == -EINVAL, "missing property; rc=%i\n", rc);
364 363 rc = of_property_match_string(np, "empty-property", "blah");
365 - selftest(rc == -ENODATA, "empty property; rc=%i", rc);
364 + selftest(rc == -ENODATA, "empty property; rc=%i\n", rc);
366 365 rc = of_property_match_string(np, "unterminated-string", "blah");
367 - selftest(rc == -EILSEQ, "unterminated string; rc=%i", rc);
366 + selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
367 +
368 + /* of_property_count_strings() tests */
369 + rc = of_property_count_strings(np, "string-property");
370 + selftest(rc == 1, "Incorrect string count; rc=%i\n", rc);
371 + rc = of_property_count_strings(np, "phandle-list-names");
372 + selftest(rc == 3, "Incorrect string count; rc=%i\n", rc);
373 + rc = of_property_count_strings(np, "unterminated-string");
374 + selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
375 + rc = of_property_count_strings(np, "unterminated-string-list");
376 + selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
377 +
378 + /* of_property_read_string_index() tests */
379 + rc = of_property_read_string_index(np, "string-property", 0, strings);
380 + selftest(rc == 0 && !strcmp(strings[0], "foobar"), "of_property_read_string_index() failure; rc=%i\n", rc);
381 + strings[0] = NULL;
382 + rc = of_property_read_string_index(np, "string-property", 1, strings);
383 + selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
384 + rc = of_property_read_string_index(np, "phandle-list-names", 0, strings);
385 + selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
386 + rc = of_property_read_string_index(np, "phandle-list-names", 1, strings);
387 + selftest(rc == 0 && !strcmp(strings[0], "second"), "of_property_read_string_index() failure; rc=%i\n", rc);
388 + rc = of_property_read_string_index(np, "phandle-list-names", 2, strings);
389 + selftest(rc == 0 && !strcmp(strings[0], "third"), "of_property_read_string_index() failure; rc=%i\n", rc);
390 + strings[0] = NULL;
391 + rc = of_property_read_string_index(np, "phandle-list-names", 3, strings);
392 + selftest(rc == -ENODATA && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
393 + strings[0] = NULL;
394 + rc = of_property_read_string_index(np, "unterminated-string", 0, strings);
395 + selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
396 + rc = of_property_read_string_index(np, "unterminated-string-list", 0, strings);
397 + selftest(rc == 0 && !strcmp(strings[0], "first"), "of_property_read_string_index() failure; rc=%i\n", rc);
398 + strings[0] = NULL;
399 + rc = of_property_read_string_index(np, "unterminated-string-list", 2, strings); /* should fail */
400 + selftest(rc == -EILSEQ && strings[0] == NULL, "of_property_read_string_index() failure; rc=%i\n", rc);
401 + strings[1] = NULL;
402 +
403 + /* of_property_read_string_array() tests */
404 + rc = of_property_read_string_array(np, "string-property", strings, 4);
405 + selftest(rc == 1, "Incorrect string count; rc=%i\n", rc);
406 + rc = of_property_read_string_array(np, "phandle-list-names", strings, 4);
407 + selftest(rc == 3, "Incorrect string count; rc=%i\n", rc);
408 + rc = of_property_read_string_array(np, "unterminated-string", strings, 4);
409 + selftest(rc == -EILSEQ, "unterminated string; rc=%i\n", rc);
410 + /* -- An incorrectly formed string should cause a failure */
411 + rc = of_property_read_string_array(np, "unterminated-string-list", strings, 4);
412 + selftest(rc == -EILSEQ, "unterminated string array; rc=%i\n", rc);
413 + /* -- parsing the correctly formed strings should still work: */
414 + strings[2] = NULL;
415 + rc = of_property_read_string_array(np, "unterminated-string-list", strings, 2);
416 + selftest(rc == 2 && strings[2] == NULL, "of_property_read_string_array() failure; rc=%i\n", rc);
417 + strings[1] = NULL;
418 + rc = of_property_read_string_array(np, "phandle-list-names", strings, 1);
419 + selftest(rc == 1 && strings[1] == NULL, "Overwrote end of string array; rc=%i, str='%s'\n", rc, strings[1]);
368 420 }
369 421
370 422 #define propcmp(p1, p2) (((p1)->length == (p2)->length) && \
··· 935 881 of_selftest_find_node_by_name();
936 882 of_selftest_dynamic();
937 883 of_selftest_parse_phandle_with_args();
938 - of_selftest_property_match_string();
884 + of_selftest_property_string();
939 885 of_selftest_property_copy();
940 886 of_selftest_changeset();
941 887 of_selftest_parse_interrupts();
+2
drivers/of/testcase-data/tests-phandle.dtsi
··· 39 39 phandle-list-bad-args = <&provider2 1 0>, 40 40 <&provider3 0>; 41 41 empty-property; 42 + string-property = "foobar"; 42 43 unterminated-string = [40 41 42 43]; 44 + unterminated-string-list = "first", "second", [40 41 42 43]; 43 45 }; 44 46 }; 45 47 };
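The consolidated helper added in drivers/of/base.c walks a property blob as a sequence of NUL-terminated strings, bounds-checking each one so an unterminated blob (like the `unterminated-string` test property above) fails with -EILSEQ instead of running off the end. Here is a minimal userspace sketch of that walk; the function and macro names are illustrative, not the kernel API, and a small bounded-length loop stands in for strnlen():

```c
#include <stddef.h>
#include <string.h>

/* Stand-ins for the kernel's errno values. */
#define MY_ENODATA 61
#define MY_EILSEQ  84

/* Bounded string length, mirroring what strnlen() does. */
static size_t bounded_len(const char *s, size_t max)
{
    size_t n = 0;
    while (n < max && s[n])
        n++;
    return n;
}

/*
 * Walk `length` bytes of NUL-terminated strings. Skip the first `skip`
 * entries, store up to `sz` pointers in out_strs (if non-NULL), and
 * return the number of strings seen past `skip`; -MY_ENODATA if none,
 * -MY_EILSEQ if a string runs past the end of the blob.
 */
static int read_string_helper(const char *value, size_t length,
                              const char **out_strs, size_t sz, int skip)
{
    const char *p = value, *end = value + length;
    size_t l;
    int i;

    for (i = 0; p < end && (!out_strs || i < skip + (int)sz); i++, p += l) {
        l = bounded_len(p, (size_t)(end - p)) + 1;
        if (p + l > end)
            return -MY_EILSEQ;       /* not NUL-terminated in bounds */
        if (out_strs && i >= skip)
            *out_strs++ = p;
    }
    i -= skip;
    return i <= 0 ? -MY_ENODATA : i;
}
```

With out_strs NULL this counts strings (the count case); with skip set it reads a single index, which is how one helper can back count, index, and array reads.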
+4 -2
drivers/phy/phy-omap-usb2.c
··· 258 258 otg->phy = &phy->phy; 259 259 260 260 platform_set_drvdata(pdev, phy); 261 + pm_runtime_enable(phy->dev); 261 262 262 263 generic_phy = devm_phy_create(phy->dev, NULL, &ops, NULL); 263 - if (IS_ERR(generic_phy)) 264 + if (IS_ERR(generic_phy)) { 265 + pm_runtime_disable(phy->dev); 264 266 return PTR_ERR(generic_phy); 267 + } 265 268 266 269 phy_set_drvdata(generic_phy, phy); 267 270 268 - pm_runtime_enable(phy->dev); 269 271 phy_provider = devm_of_phy_provider_register(phy->dev, 270 272 of_phy_simple_xlate); 271 273 if (IS_ERR(phy_provider)) {
+6 -2
drivers/pinctrl/pinctrl-baytrail.c
··· 227 227 spin_lock_irqsave(&vg->lock, flags); 228 228 value = readl(reg); 229 229 230 + WARN(value & BYT_DIRECT_IRQ_EN, 231 + "Bad pad config for io mode, force direct_irq_en bit clearing"); 232 + 230 233 /* For level trigges the BYT_TRIG_POS and BYT_TRIG_NEG bits 231 234 * are used to indicate high and low level triggering 232 235 */ 233 - value &= ~(BYT_TRIG_POS | BYT_TRIG_NEG | BYT_TRIG_LVL); 236 + value &= ~(BYT_DIRECT_IRQ_EN | BYT_TRIG_POS | BYT_TRIG_NEG | 237 + BYT_TRIG_LVL); 234 238 235 239 switch (type) { 236 240 case IRQ_TYPE_LEVEL_HIGH: ··· 322 318 "Potential Error: Setting GPIO with direct_irq_en to output"); 323 319 324 320 reg_val = readl(reg) | BYT_DIR_MASK; 325 - reg_val &= ~BYT_OUTPUT_EN; 321 + reg_val &= ~(BYT_OUTPUT_EN | BYT_INPUT_EN); 326 322 327 323 if (value) 328 324 writel(reg_val | BYT_LEVEL, reg);
+11
drivers/platform/x86/acer-wmi.c
··· 579 579 DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5741"), 580 580 }, 581 581 }, 582 + { 583 + /* 584 + * Note no video_set_backlight_video_vendor, we must use the 585 + * acer interface, as there is no native backlight interface. 586 + */ 587 + .ident = "Acer KAV80", 588 + .matches = { 589 + DMI_MATCH(DMI_SYS_VENDOR, "Acer"), 590 + DMI_MATCH(DMI_PRODUCT_NAME, "KAV80"), 591 + }, 592 + }, 582 593 {} 583 594 }; 584 595
+9
drivers/platform/x86/asus-nb-wmi.c
··· 182 182 }, 183 183 { 184 184 .callback = dmi_matched, 185 + .ident = "ASUSTeK COMPUTER INC. X550VB", 186 + .matches = { 187 + DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 188 + DMI_MATCH(DMI_PRODUCT_NAME, "X550VB"), 189 + }, 190 + .driver_data = &quirk_asus_wapf4, 191 + }, 192 + { 193 + .callback = dmi_matched, 185 194 .ident = "ASUSTeK COMPUTER INC. X55A", 186 195 .matches = { 187 196 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+7
drivers/platform/x86/ideapad-laptop.c
··· 837 837 DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo Yoga 2"), 838 838 }, 839 839 }, 840 + { 841 + .ident = "Lenovo Yoga 3 Pro 1370", 842 + .matches = { 843 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 844 + DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo YOGA 3 Pro-1370"), 845 + }, 846 + }, 840 847 {} 841 848 }; 842 849
+10
drivers/platform/x86/samsung-laptop.c
··· 1561 1561 }, 1562 1562 { 1563 1563 .callback = samsung_dmi_matched, 1564 + .ident = "NC210", 1565 + .matches = { 1566 + DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."), 1567 + DMI_MATCH(DMI_PRODUCT_NAME, "NC210/NC110"), 1568 + DMI_MATCH(DMI_BOARD_NAME, "NC210/NC110"), 1569 + }, 1570 + .driver_data = &samsung_broken_acpi_video, 1571 + }, 1572 + { 1573 + .callback = samsung_dmi_matched, 1564 1574 .ident = "730U3E/740U3E", 1565 1575 .matches = { 1566 1576 DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD."),
+6
drivers/platform/x86/toshiba_acpi.c
··· 240 240 DMI_MATCH(DMI_PRODUCT_NAME, "Qosmio X75-A"), 241 241 }, 242 242 }, 243 + { 244 + .matches = { 245 + DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"), 246 + DMI_MATCH(DMI_PRODUCT_NAME, "TECRA A50-A"), 247 + }, 248 + }, 243 249 {} 244 250 }; 245 251
+1 -1
drivers/regulator/max1586.c
··· 163 163 struct max1586_platform_data *pdata) 164 164 { 165 165 struct max1586_subdev_data *sub; 166 - struct of_regulator_match rmatch[ARRAY_SIZE(max1586_reg)]; 166 + struct of_regulator_match rmatch[ARRAY_SIZE(max1586_reg)] = { }; 167 167 struct device_node *np = dev->of_node; 168 168 int i, matched; 169 169
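Several regulator hunks in this merge apply the same one-line fix: an on-stack `struct of_regulator_match` array (or `struct regulator_config`) was handed to a parser uninitialized, so entries the parser never matched kept indeterminate stack contents. A small sketch of why zero-initializing automatic storage matters; the struct and parser here are hypothetical stand-ins, and `{ 0 }` is used for strict portability where the kernel (GNU C) writes `{ }`:

```c
#include <stddef.h>

/* Stand-in for of_regulator_match: a parser fills in only the
 * entries it actually finds in the device tree. */
struct match {
    const char *name;
    void *init_data;
};

/* Hypothetical parser that matches only the first entry. */
static void parse_first(struct match *m, size_t n)
{
    if (n > 0)
        m[0].name = "ldo1";
}

/* Returns 1 if every unmatched entry's init_data is still NULL,
 * which only holds when the array was zero-initialized. */
static int unmatched_are_null(void)
{
    struct match rmatch[4] = { 0 };  /* all members zeroed */

    parse_first(rmatch, 4);
    return rmatch[1].init_data == NULL &&
           rmatch[2].init_data == NULL &&
           rmatch[3].init_data == NULL;
}
```

Without the initializer, code that later dereferences `init_data` for unmatched entries reads stack garbage, which is exactly the class of bug the `= { }` additions close.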
+1 -1
drivers/regulator/max77686.c
··· 395 395 struct max77686_dev *iodev = dev_get_drvdata(pdev->dev.parent); 396 396 struct device_node *pmic_np, *regulators_np; 397 397 struct max77686_regulator_data *rdata; 398 - struct of_regulator_match rmatch; 398 + struct of_regulator_match rmatch = { }; 399 399 unsigned int i; 400 400 401 401 pmic_np = iodev->dev->of_node;
+1 -1
drivers/regulator/max77693.c
··· 227 227 struct max77693_dev *iodev = dev_get_drvdata(pdev->dev.parent); 228 228 struct max77693_regulator_data *rdata = NULL; 229 229 int num_rdata, i; 230 - struct regulator_config config; 230 + struct regulator_config config = { }; 231 231 232 232 num_rdata = max77693_pmic_init_rdata(&pdev->dev, &rdata); 233 233 if (!rdata || num_rdata <= 0) {
+1 -1
drivers/regulator/max77802.c
··· 454 454 struct max77686_dev *iodev = dev_get_drvdata(pdev->dev.parent); 455 455 struct device_node *pmic_np, *regulators_np; 456 456 struct max77686_regulator_data *rdata; 457 - struct of_regulator_match rmatch; 457 + struct of_regulator_match rmatch = { }; 458 458 unsigned int i; 459 459 460 460 pmic_np = iodev->dev->of_node;
+1 -1
drivers/regulator/max8660.c
··· 335 335 int matched, i; 336 336 struct device_node *np; 337 337 struct max8660_subdev_data *sub; 338 - struct of_regulator_match rmatch[ARRAY_SIZE(max8660_reg)]; 338 + struct of_regulator_match rmatch[ARRAY_SIZE(max8660_reg)] = { }; 339 339 340 340 np = of_get_child_by_name(dev->of_node, "regulators"); 341 341 if (!np) {
+2 -1
drivers/regulator/of_regulator.c
··· 211 211 search = dev->of_node; 212 212 213 213 if (!search) { 214 - dev_err(dev, "Failed to find regulator container node\n"); 214 + dev_dbg(dev, "Failed to find regulator container node '%s'\n", 215 + desc->regulators_node); 215 216 return NULL; 216 217 } 217 218
+1 -1
drivers/regulator/s2mpa01.c
··· 341 341 { 342 342 struct sec_pmic_dev *iodev = dev_get_drvdata(pdev->dev.parent); 343 343 struct sec_platform_data *pdata = dev_get_platdata(iodev->dev); 344 - struct of_regulator_match rdata[S2MPA01_REGULATOR_MAX]; 344 + struct of_regulator_match rdata[S2MPA01_REGULATOR_MAX] = { }; 345 345 struct device_node *reg_np = NULL; 346 346 struct regulator_config config = { }; 347 347 struct s2mpa01_info *s2mpa01;
-1
drivers/s390/kvm/virtio_ccw.c
··· 888 888 struct virtio_ccw_device *vcdev = dev_get_drvdata(&cdev->dev); 889 889 int i; 890 890 struct virtqueue *vq; 891 - struct virtio_driver *drv; 892 891 893 892 if (!vcdev) 894 893 return;
+37 -5
drivers/scsi/cxgbi/libcxgbi.c
··· 399 399 * If the source port is outside our allocation range, the caller is 400 400 * responsible for keeping track of their port usage. 401 401 */ 402 + 403 + static struct cxgbi_sock *find_sock_on_port(struct cxgbi_device *cdev, 404 + unsigned char port_id) 405 + { 406 + struct cxgbi_ports_map *pmap = &cdev->pmap; 407 + unsigned int i; 408 + unsigned int used; 409 + 410 + if (!pmap->max_connect || !pmap->used) 411 + return NULL; 412 + 413 + spin_lock_bh(&pmap->lock); 414 + used = pmap->used; 415 + for (i = 0; used && i < pmap->max_connect; i++) { 416 + struct cxgbi_sock *csk = pmap->port_csk[i]; 417 + 418 + if (csk) { 419 + if (csk->port_id == port_id) { 420 + spin_unlock_bh(&pmap->lock); 421 + return csk; 422 + } 423 + used--; 424 + } 425 + } 426 + spin_unlock_bh(&pmap->lock); 427 + 428 + return NULL; 429 + } 430 + 402 431 static int sock_get_port(struct cxgbi_sock *csk) 403 432 { 404 433 struct cxgbi_device *cdev = csk->cdev; ··· 778 749 csk->daddr6.sin6_addr = daddr6->sin6_addr; 779 750 csk->daddr6.sin6_port = daddr6->sin6_port; 780 751 csk->daddr6.sin6_family = daddr6->sin6_family; 752 + csk->saddr6.sin6_family = daddr6->sin6_family; 781 753 csk->saddr6.sin6_addr = pref_saddr; 782 754 783 755 neigh_release(n); ··· 2677 2647 break; 2678 2648 case ISCSI_HOST_PARAM_IPADDRESS: 2679 2649 { 2680 - __be32 addr; 2681 - 2682 - addr = cxgbi_get_iscsi_ipv4(chba); 2683 - len = sprintf(buf, "%pI4", &addr); 2650 + struct cxgbi_sock *csk = find_sock_on_port(chba->cdev, 2651 + chba->port_id); 2652 + if (csk) { 2653 + len = sprintf(buf, "%pIS", 2654 + (struct sockaddr *)&csk->saddr); 2655 + } 2684 2656 log_debug(1 << CXGBI_DBG_ISCSI, 2685 - "hba %s, ipv4 %pI4.\n", chba->ndev->name, &addr); 2657 + "hba %s, addr %s.\n", chba->ndev->name, buf); 2686 2658 break; 2687 2659 } 2688 2660 default:
-5
drivers/scsi/cxgbi/libcxgbi.h
··· 700 700 chba->ndev->name); 701 701 } 702 702 703 - static inline __be32 cxgbi_get_iscsi_ipv4(struct cxgbi_hba *chba) 704 - { 705 - return chba->ipv4addr; 706 - } 707 - 708 703 struct cxgbi_device *cxgbi_device_register(unsigned int, unsigned int); 709 704 void cxgbi_device_unregister(struct cxgbi_device *); 710 705 void cxgbi_device_unregister_all(unsigned int flag);
+5
drivers/scsi/scsi_lib.c
··· 1893 1893 blk_mq_start_request(req); 1894 1894 } 1895 1895 1896 + if (blk_queue_tagged(q)) 1897 + req->cmd_flags |= REQ_QUEUED; 1898 + else 1899 + req->cmd_flags &= ~REQ_QUEUED; 1900 + 1896 1901 scsi_init_cmd_errh(cmd); 1897 1902 cmd->scsi_done = scsi_mq_done; 1898 1903
+1
drivers/soc/versatile/soc-realview.c
··· 26 26 { .compatible = "arm,realview-pb11mp-soc", }, 27 27 { .compatible = "arm,realview-pba8-soc", }, 28 28 { .compatible = "arm,realview-pbx-soc", }, 29 + { } 29 30 }; 30 31 31 32 static u32 realview_coreid;
+2 -2
drivers/spi/spi-fsl-dspi.c
··· 46 46 47 47 #define SPI_TCR 0x08 48 48 49 - #define SPI_CTAR(x) (0x0c + (x * 4)) 49 + #define SPI_CTAR(x) (0x0c + (((x) & 0x3) * 4)) 50 50 #define SPI_CTAR_FMSZ(x) (((x) & 0x0000000f) << 27) 51 51 #define SPI_CTAR_CPOL(x) ((x) << 26) 52 52 #define SPI_CTAR_CPHA(x) ((x) << 25) ··· 70 70 71 71 #define SPI_PUSHR 0x34 72 72 #define SPI_PUSHR_CONT (1 << 31) 73 - #define SPI_PUSHR_CTAS(x) (((x) & 0x00000007) << 28) 73 + #define SPI_PUSHR_CTAS(x) (((x) & 0x00000003) << 28) 74 74 #define SPI_PUSHR_EOQ (1 << 27) 75 75 #define SPI_PUSHR_CTCNT (1 << 26) 76 76 #define SPI_PUSHR_PCS(x) (((1 << x) & 0x0000003f) << 16)
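The DSPI fix above tightens both macros to two bits: the block has four CTAR registers and PUSHR's CTAS field is only two bits wide, so an out-of-range index must alias onto a valid register rather than address past the block. The two macros below are taken verbatim from the hunk; the aliasing behaviour they produce can be checked directly:

```c
/* From the spi-fsl-dspi hunk: mask the CTAR index to two bits so a
 * chip select above 3 wraps onto a valid CTAR register offset instead
 * of computing an address outside the register block. */
#define SPI_CTAR(x)       (0x0c + (((x) & 0x3) * 4))

/* Likewise, CTAS is a two-bit field in PUSHR; mask before shifting so
 * stray high bits cannot corrupt neighbouring fields. */
#define SPI_PUSHR_CTAS(x) (((x) & 0x00000003) << 28)
```

Masking macro arguments like this is cheap insurance for register accessors whose index comes from per-device data (here, the chip-select number).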
+5 -2
drivers/spi/spi-pxa2xx.c
··· 1274 1274 if (status != 0) 1275 1275 return status; 1276 1276 write_SSCR0(0, drv_data->ioaddr); 1277 - clk_disable_unprepare(ssp->clk); 1277 + 1278 + if (!pm_runtime_suspended(dev)) 1279 + clk_disable_unprepare(ssp->clk); 1278 1280 1279 1281 return 0; 1280 1282 } ··· 1290 1288 pxa2xx_spi_dma_resume(drv_data); 1291 1289 1292 1290 /* Enable the SSP clock */ 1293 - clk_prepare_enable(ssp->clk); 1291 + if (!pm_runtime_suspended(dev)) 1292 + clk_prepare_enable(ssp->clk); 1294 1293 1295 1294 /* Restore LPSS private register bits */ 1296 1295 lpss_ssp_setup(drv_data);
+8 -5
drivers/staging/android/logger.c
··· 420 420 struct logger_log *log = file_get_log(iocb->ki_filp); 421 421 struct logger_entry header; 422 422 struct timespec now; 423 - size_t len, count; 423 + size_t len, count, w_off; 424 424 425 425 count = min_t(size_t, iocb->ki_nbytes, LOGGER_ENTRY_MAX_PAYLOAD); 426 426 ··· 452 452 memcpy(log->buffer + log->w_off, &header, len); 453 453 memcpy(log->buffer, (char *)&header + len, sizeof(header) - len); 454 454 455 - len = min(count, log->size - log->w_off); 455 + /* Work with a copy until we are ready to commit the whole entry */ 456 + w_off = logger_offset(log, log->w_off + sizeof(struct logger_entry)); 456 457 457 - if (copy_from_iter(log->buffer + log->w_off, len, from) != len) { 458 + len = min(count, log->size - w_off); 459 + 460 + if (copy_from_iter(log->buffer + w_off, len, from) != len) { 458 461 /* 459 - * Note that by not updating w_off, this abandons the 462 + * Note that by not updating log->w_off, this abandons the 460 463 * portion of the new entry that *was* successfully 461 464 * copied, just above. This is intentional to avoid 462 465 * message corruption from missing fragments. ··· 473 470 return -EFAULT; 474 471 } 475 472 476 - log->w_off = logger_offset(log, log->w_off + count); 473 + log->w_off = logger_offset(log, w_off + count); 477 474 mutex_unlock(&log->mutex); 478 475 479 476 /* wake up any blocked readers */
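The logger fix above computes the payload position into a local `w_off` and only publishes `log->w_off` after the entire entry (header plus payload) has been copied, so a faulting user copy leaves no half-written entry behind. A userspace sketch of that commit-late pattern follows; it is simplified (plain modulo instead of the driver's power-of-two ring, and a flag standing in for a failing copy_from_iter()):

```c
#include <stddef.h>
#include <string.h>

struct log_ring {
    char  *buffer;
    size_t size;
    size_t w_off;   /* published write offset */
};

/* Wrap an offset around the ring. */
static size_t ring_offset(const struct log_ring *log, size_t n)
{
    return n % log->size;
}

/*
 * Write header then payload, wrapping each across the ring boundary.
 * The payload goes to a *local* w_off; log->w_off is only advanced
 * once everything is in place. copy_fails simulates a faulting user
 * copy: on failure the partial entry is simply abandoned (returns -1).
 */
static int ring_write(struct log_ring *log, const char *hdr, size_t hdr_len,
                      const char *payload, size_t count, int copy_fails)
{
    size_t len, w_off;

    /* header, possibly wrapping */
    len = hdr_len < log->size - log->w_off ? hdr_len : log->size - log->w_off;
    memcpy(log->buffer + log->w_off, hdr, len);
    memcpy(log->buffer, hdr + len, hdr_len - len);

    /* work with a copy until the whole entry is committed */
    w_off = ring_offset(log, log->w_off + hdr_len);
    if (copy_fails)
        return -1;                    /* log->w_off untouched */

    len = count < log->size - w_off ? count : log->size - w_off;
    memcpy(log->buffer + w_off, payload, len);
    memcpy(log->buffer, payload + len, count - len);

    log->w_off = ring_offset(log, w_off + count);  /* publish */
    return 0;
}
```

Readers only ever follow the published offset, so an abandoned partial write is invisible to them; that is the corruption the original code risked by bumping `log->w_off` before the payload copy succeeded.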
+1 -1
drivers/staging/comedi/Kconfig
··· 426 426 427 427 config COMEDI_II_PCI20KC 428 428 tristate "Intelligent Instruments PCI-20001C carrier support" 429 + depends on HAS_IOMEM 429 430 ---help--- 430 431 Enable support for Intelligent Instruments PCI-20001C carrier 431 432 PCI-20001, PCI-20006 and PCI-20341 ··· 668 667 config COMEDI_ADDI_APCI_3120 669 668 tristate "ADDI-DATA APCI_3120/3001 support" 670 669 depends on HAS_DMA 671 - depends on VIRT_TO_BUS 672 670 ---help--- 673 671 Enable support for ADDI-DATA APCI_3120/3001 cards 674 672
+14 -12
drivers/staging/comedi/comedi_fops.c
··· 1462 1462 unsigned int *chanlist; 1463 1463 int ret; 1464 1464 1465 - /* user_chanlist could be NULL for do_cmdtest ioctls */ 1466 - if (!user_chanlist) 1467 - return 0; 1468 - 1465 + cmd->chanlist = NULL; 1469 1466 chanlist = memdup_user(user_chanlist, 1470 1467 cmd->chanlist_len * sizeof(unsigned int)); 1471 1468 if (IS_ERR(chanlist)) ··· 1606 1609 1607 1610 s = &dev->subdevices[cmd.subdev]; 1608 1611 1609 - /* load channel/gain list */ 1610 - ret = __comedi_get_user_chanlist(dev, s, user_chanlist, &cmd); 1611 - if (ret) 1612 - return ret; 1612 + /* user_chanlist can be NULL for COMEDI_CMDTEST ioctl */ 1613 + if (user_chanlist) { 1614 + /* load channel/gain list */ 1615 + ret = __comedi_get_user_chanlist(dev, s, user_chanlist, &cmd); 1616 + if (ret) 1617 + return ret; 1618 + } 1613 1619 1614 1620 ret = s->do_cmdtest(dev, s, &cmd); 1621 + 1622 + kfree(cmd.chanlist); /* free kernel copy of user chanlist */ 1615 1623 1616 1624 /* restore chanlist pointer before copying back */ 1617 1625 cmd.chanlist = (unsigned int __force *)user_chanlist; ··· 1644 1642 1645 1643 */ 1646 1644 1647 - static int do_lock_ioctl(struct comedi_device *dev, unsigned int arg, 1645 + static int do_lock_ioctl(struct comedi_device *dev, unsigned long arg, 1648 1646 void *file) 1649 1647 { 1650 1648 int ret = 0; ··· 1681 1679 This function isn't protected by the semaphore, since 1682 1680 we already own the lock. 
1683 1681 */ 1684 - static int do_unlock_ioctl(struct comedi_device *dev, unsigned int arg, 1682 + static int do_unlock_ioctl(struct comedi_device *dev, unsigned long arg, 1685 1683 void *file) 1686 1684 { 1687 1685 struct comedi_subdevice *s; ··· 1716 1714 nothing 1717 1715 1718 1716 */ 1719 - static int do_cancel_ioctl(struct comedi_device *dev, unsigned int arg, 1717 + static int do_cancel_ioctl(struct comedi_device *dev, unsigned long arg, 1720 1718 void *file) 1721 1719 { 1722 1720 struct comedi_subdevice *s; ··· 1753 1751 nothing 1754 1752 1755 1753 */ 1756 - static int do_poll_ioctl(struct comedi_device *dev, unsigned int arg, 1754 + static int do_poll_ioctl(struct comedi_device *dev, unsigned long arg, 1757 1755 void *file) 1758 1756 { 1759 1757 struct comedi_subdevice *s;
+8 -4
drivers/staging/iio/adc/mxs-lradc.c
··· 1559 1559 /* Grab all IRQ sources */ 1560 1560 for (i = 0; i < of_cfg->irq_count; i++) { 1561 1561 lradc->irq[i] = platform_get_irq(pdev, i); 1562 - if (lradc->irq[i] < 0) 1563 - return lradc->irq[i]; 1562 + if (lradc->irq[i] < 0) { 1563 + ret = lradc->irq[i]; 1564 + goto err_clk; 1565 + } 1564 1566 1565 1567 ret = devm_request_irq(dev, lradc->irq[i], 1566 1568 mxs_lradc_handle_irq, 0, 1567 1569 of_cfg->irq_name[i], iio); 1568 1570 if (ret) 1569 - return ret; 1571 + goto err_clk; 1570 1572 } 1571 1573 1572 1574 lradc->vref_mv = of_cfg->vref_mv; ··· 1590 1588 &mxs_lradc_trigger_handler, 1591 1589 &mxs_lradc_buffer_ops); 1592 1590 if (ret) 1593 - return ret; 1591 + goto err_clk; 1594 1592 1595 1593 ret = mxs_lradc_trigger_init(iio); 1596 1594 if (ret) ··· 1645 1643 mxs_lradc_trigger_remove(iio); 1646 1644 err_trig: 1647 1645 iio_triggered_buffer_cleanup(iio); 1646 + err_clk: 1647 + clk_disable_unprepare(lradc->clk); 1648 1648 return ret; 1649 1649 } 1650 1650
+6 -9
drivers/staging/iio/impedance-analyzer/ad5933.c
··· 115 115 .channel = 0, 116 116 .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED), 117 117 .address = AD5933_REG_TEMP_DATA, 118 + .scan_index = -1, 118 119 .scan_type = { 119 120 .sign = 's', 120 121 .realbits = 14, ··· 125 124 .type = IIO_VOLTAGE, 126 125 .indexed = 1, 127 126 .channel = 0, 128 - .extend_name = "real_raw", 129 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | 130 - BIT(IIO_CHAN_INFO_SCALE), 127 + .extend_name = "real", 131 128 .address = AD5933_REG_REAL_DATA, 132 129 .scan_index = 0, 133 130 .scan_type = { ··· 137 138 .type = IIO_VOLTAGE, 138 139 .indexed = 1, 139 140 .channel = 0, 140 - .extend_name = "imag_raw", 141 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | 142 - BIT(IIO_CHAN_INFO_SCALE), 141 + .extend_name = "imag", 143 142 .address = AD5933_REG_IMAG_DATA, 144 143 .scan_index = 1, 145 144 .scan_type = { ··· 746 749 indio_dev->name = id->name; 747 750 indio_dev->modes = INDIO_DIRECT_MODE; 748 751 indio_dev->channels = ad5933_channels; 749 - indio_dev->num_channels = 1; /* only register temp0_input */ 752 + indio_dev->num_channels = ARRAY_SIZE(ad5933_channels); 750 753 751 754 ret = ad5933_register_ring_funcs_and_init(indio_dev); 752 755 if (ret) 753 756 goto error_disable_reg; 754 757 755 - /* skip temp0_input, register in0_(real|imag)_raw */ 756 - ret = iio_buffer_register(indio_dev, &ad5933_channels[1], 2); 758 + ret = iio_buffer_register(indio_dev, ad5933_channels, 759 + ARRAY_SIZE(ad5933_channels)); 757 760 if (ret) 758 761 goto error_unreg_ring; 759 762
-1
drivers/staging/iio/meter/ade7758.h
··· 119 119 u8 *tx; 120 120 u8 *rx; 121 121 struct mutex buf_lock; 122 - const struct iio_chan_spec *ade7758_ring_channels; 123 122 struct spi_transfer ring_xfer[4]; 124 123 struct spi_message ring_msg; 125 124 /*
+11 -46
drivers/staging/iio/meter/ade7758_core.c
··· 634 634 .type = IIO_VOLTAGE,
635 635 .indexed = 1,
636 636 .channel = 0,
637 - .extend_name = "raw",
638 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
639 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
640 637 .address = AD7758_WT(AD7758_PHASE_A, AD7758_VOLTAGE),
641 638 .scan_index = 0,
642 639 .scan_type = {
··· 645 648 .type = IIO_CURRENT,
646 649 .indexed = 1,
647 650 .channel = 0,
648 - .extend_name = "raw",
649 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
650 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
651 651 .address = AD7758_WT(AD7758_PHASE_A, AD7758_CURRENT),
652 652 .scan_index = 1,
653 653 .scan_type = {
··· 656 662 .type = IIO_POWER,
657 663 .indexed = 1,
658 664 .channel = 0,
659 - .extend_name = "apparent_raw",
660 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
661 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
665 + .extend_name = "apparent",
662 666 .address = AD7758_WT(AD7758_PHASE_A, AD7758_APP_PWR),
663 667 .scan_index = 2,
664 668 .scan_type = {
··· 668 676 .type = IIO_POWER,
669 677 .indexed = 1,
670 678 .channel = 0,
671 - .extend_name = "active_raw",
672 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
673 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
679 + .extend_name = "active",
674 680 .address = AD7758_WT(AD7758_PHASE_A, AD7758_ACT_PWR),
675 681 .scan_index = 3,
676 682 .scan_type = {
··· 680 690 .type = IIO_POWER,
681 691 .indexed = 1,
682 692 .channel = 0,
683 - .extend_name = "reactive_raw",
684 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
685 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
693 + .extend_name = "reactive",
686 694 .address = AD7758_WT(AD7758_PHASE_A, AD7758_REACT_PWR),
687 695 .scan_index = 4,
688 696 .scan_type = {
··· 692 704 .type = IIO_VOLTAGE,
693 705 .indexed = 1,
694 706 .channel = 1,
695 - .extend_name = "raw",
696 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
697 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
698 707 .address = AD7758_WT(AD7758_PHASE_B, AD7758_VOLTAGE),
699 708 .scan_index = 5,
700 709 .scan_type = {
··· 703 718 .type = IIO_CURRENT,
704 719 .indexed = 1,
705 720 .channel = 1,
706 - .extend_name = "raw",
707 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
708 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
709 721 .address = AD7758_WT(AD7758_PHASE_B, AD7758_CURRENT),
710 722 .scan_index = 6,
711 723 .scan_type = {
··· 714 732 .type = IIO_POWER,
715 733 .indexed = 1,
716 734 .channel = 1,
717 - .extend_name = "apparent_raw",
718 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
719 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
735 + .extend_name = "apparent",
720 736 .address = AD7758_WT(AD7758_PHASE_B, AD7758_APP_PWR),
721 737 .scan_index = 7,
722 738 .scan_type = {
··· 726 746 .type = IIO_POWER,
727 747 .indexed = 1,
728 748 .channel = 1,
729 - .extend_name = "active_raw",
730 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
731 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
749 + .extend_name = "active",
732 750 .address = AD7758_WT(AD7758_PHASE_B, AD7758_ACT_PWR),
733 751 .scan_index = 8,
734 752 .scan_type = {
··· 738 760 .type = IIO_POWER,
739 761 .indexed = 1,
740 762 .channel = 1,
741 - .extend_name = "reactive_raw",
742 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
743 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
763 + .extend_name = "reactive",
744 764 .address = AD7758_WT(AD7758_PHASE_B, AD7758_REACT_PWR),
745 765 .scan_index = 9,
746 766 .scan_type = {
··· 750 774 .type = IIO_VOLTAGE,
751 775 .indexed = 1,
752 776 .channel = 2,
753 - .extend_name = "raw",
754 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
755 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
756 777 .address = AD7758_WT(AD7758_PHASE_C, AD7758_VOLTAGE),
757 778 .scan_index = 10,
758 779 .scan_type = {
··· 761 788 .type = IIO_CURRENT,
762 789 .indexed = 1,
763 790 .channel = 2,
764 - .extend_name = "raw",
765 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
766 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
767 791 .address = AD7758_WT(AD7758_PHASE_C, AD7758_CURRENT),
768 792 .scan_index = 11,
769 793 .scan_type = {
··· 772 802 .type = IIO_POWER,
773 803 .indexed = 1,
774 804 .channel = 2,
775 - .extend_name = "apparent_raw",
776 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
777 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
805 + .extend_name = "apparent",
778 806 .address = AD7758_WT(AD7758_PHASE_C, AD7758_APP_PWR),
779 807 .scan_index = 12,
780 808 .scan_type = {
··· 784 816 .type = IIO_POWER,
785 817 .indexed = 1,
786 818 .channel = 2,
787 - .extend_name = "active_raw",
788 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
789 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
819 + .extend_name = "active",
790 820 .address = AD7758_WT(AD7758_PHASE_C, AD7758_ACT_PWR),
791 821 .scan_index = 13,
792 822 .scan_type = {
··· 796 830 .type = IIO_POWER,
797 831 .indexed = 1,
798 832 .channel = 2,
799 - .extend_name = "reactive_raw",
800 - .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
801 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
833 + .extend_name = "reactive",
802 834 .address = AD7758_WT(AD7758_PHASE_C, AD7758_REACT_PWR),
803 835 .scan_index = 14,
804 836 .scan_type = {
··· 837 873 goto error_free_rx;
838 874 }
839 875 st->us = spi;
840 - st->ade7758_ring_channels = &ade7758_channels[0];
841 876 mutex_init(&st->buf_lock);
842 877
843 878 indio_dev->name = spi->dev.driver->name;
844 879 indio_dev->dev.parent = &spi->dev;
845 880 indio_dev->info = &ade7758_info;
846 881 indio_dev->modes = INDIO_DIRECT_MODE;
882 + indio_dev->channels = ade7758_channels;
883 + indio_dev->num_channels = ARRAY_SIZE(ade7758_channels);
847 884
848 885 ret = ade7758_configure_ring(indio_dev);
849 886 if (ret)
+2 -3
drivers/staging/iio/meter/ade7758_ring.c
··· 85 85 **/ 86 86 static int ade7758_ring_preenable(struct iio_dev *indio_dev) 87 87 { 88 - struct ade7758_state *st = iio_priv(indio_dev); 89 88 unsigned channel; 90 89 91 - if (!bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength)) 90 + if (bitmap_empty(indio_dev->active_scan_mask, indio_dev->masklength)) 92 91 return -EINVAL; 93 92 94 93 channel = find_first_bit(indio_dev->active_scan_mask, 95 94 indio_dev->masklength); 96 95 97 96 ade7758_write_waveform_type(&indio_dev->dev, 98 - st->ade7758_ring_channels[channel].address); 97 + indio_dev->channels[channel].address); 99 98 100 99 return 0; 101 100 }
+1 -1
drivers/staging/rtl8723au/include/rtw_eeprom.h
··· 107 107 }; 108 108 109 109 struct eeprom_priv { 110 + u8 mac_addr[6]; /* PermanentAddress */ 110 111 u8 bautoload_fail_flag; 111 112 u8 bloadfile_fail_flag; 112 113 u8 bloadmac_fail_flag; 113 114 /* u8 bempty; */ 114 115 /* u8 sys_config; */ 115 - u8 mac_addr[6]; /* PermanentAddress */ 116 116 /* u8 config0; */ 117 117 u16 channel_plan; 118 118 /* u8 country_string[3]; */
+29 -16
drivers/thermal/imx_thermal.c
··· 459 459 int measure_freq; 460 460 int ret; 461 461 462 + if (!cpufreq_get_current_driver()) { 463 + dev_dbg(&pdev->dev, "no cpufreq driver!"); 464 + return -EPROBE_DEFER; 465 + } 462 466 data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL); 463 467 if (!data) 464 468 return -ENOMEM; ··· 525 521 return ret; 526 522 } 527 523 524 + data->thermal_clk = devm_clk_get(&pdev->dev, NULL); 525 + if (IS_ERR(data->thermal_clk)) { 526 + ret = PTR_ERR(data->thermal_clk); 527 + if (ret != -EPROBE_DEFER) 528 + dev_err(&pdev->dev, 529 + "failed to get thermal clk: %d\n", ret); 530 + cpufreq_cooling_unregister(data->cdev); 531 + return ret; 532 + } 533 + 534 + /* 535 + * Thermal sensor needs clk on to get correct value, normally 536 + * we should enable its clk before taking measurement and disable 537 + * clk after measurement is done, but if alarm function is enabled, 538 + * hardware will auto measure the temperature periodically, so we 539 + * need to keep the clk always on for alarm function. 
540 + */ 541 + ret = clk_prepare_enable(data->thermal_clk); 542 + if (ret) { 543 + dev_err(&pdev->dev, "failed to enable thermal clk: %d\n", ret); 544 + cpufreq_cooling_unregister(data->cdev); 545 + return ret; 546 + } 547 + 528 548 data->tz = thermal_zone_device_register("imx_thermal_zone", 529 549 IMX_TRIP_NUM, 530 550 BIT(IMX_TRIP_PASSIVE), data, ··· 559 531 ret = PTR_ERR(data->tz); 560 532 dev_err(&pdev->dev, 561 533 "failed to register thermal zone device %d\n", ret); 534 + clk_disable_unprepare(data->thermal_clk); 562 535 cpufreq_cooling_unregister(data->cdev); 563 536 return ret; 564 - } 565 - 566 - data->thermal_clk = devm_clk_get(&pdev->dev, NULL); 567 - if (IS_ERR(data->thermal_clk)) { 568 - dev_warn(&pdev->dev, "failed to get thermal clk!\n"); 569 - } else { 570 - /* 571 - * Thermal sensor needs clk on to get correct value, normally 572 - * we should enable its clk before taking measurement and disable 573 - * clk after measurement is done, but if alarm function is enabled, 574 - * hardware will auto measure the temperature periodically, so we 575 - * need to keep the clk always on for alarm function. 576 - */ 577 - ret = clk_prepare_enable(data->thermal_clk); 578 - if (ret) 579 - dev_warn(&pdev->dev, "failed to enable thermal clk: %d\n", ret); 580 537 } 581 538 582 539 /* Enable measurements at ~ 10 Hz */
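The imx_thermal reordering above acquires resources in a fixed order (cpufreq check, cooling device, clock, thermal zone) and unwinds them in reverse on any failure, returning `-EPROBE_DEFER` when cpufreq is not yet up. A userspace sketch of that control flow, with made-up boolean inputs and state flags standing in for the real kernel calls:

```c
/* Stand-in for the kernel's -EPROBE_DEFER. */
enum { EPROBE_DEFER = 517 };

/* Sketch of the reordered probe path: acquire in order, release in
 * reverse on failure. The flags model cpufreq_cooling_register(),
 * devm_clk_get()/clk_prepare_enable() and
 * thermal_zone_device_register(). */
static int probe(int have_cpufreq, int clk_get_fails, int zone_reg_fails,
                 int *clk_enabled, int *cooling_registered)
{
    *clk_enabled = 0;
    *cooling_registered = 0;

    if (!have_cpufreq)
        return -EPROBE_DEFER;      /* retry once cpufreq is available */

    *cooling_registered = 1;       /* cpufreq_cooling_register() */

    if (clk_get_fails) {
        *cooling_registered = 0;   /* unwind cooling device */
        return -1;
    }
    *clk_enabled = 1;              /* clk_prepare_enable() */

    if (zone_reg_fails) {
        *clk_enabled = 0;          /* clk_disable_unprepare() */
        *cooling_registered = 0;   /* cpufreq_cooling_unregister() */
        return -1;
    }
    return 0;
}
```

The point of the patch is visible in the failure cases: no path leaves the clock enabled or the cooling device registered after an error, where the old code merely warned and carried on with a possibly unusable clock.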
+7 -1
drivers/thermal/int340x_thermal/int3403_thermal.c
··· 92 92 if (ACPI_FAILURE(status)) 93 93 return -EIO; 94 94 95 - *temp = DECI_KELVIN_TO_MILLI_CELSIUS(hyst, KELVIN_OFFSET); 95 + /* 96 + * Thermal hysteresis represents a temperature difference. 97 + * Kelvin and Celsius have same degree size. So the 98 + * conversion here between tenths of degree Kelvin unit 99 + * and Milli-Celsius unit is just to multiply 100. 100 + */ 101 + *temp = hyst * 100; 96 102 97 103 return 0; 98 104 }
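The int3403 hunk's comment is worth spelling out: a hysteresis value is a temperature *difference*, so the 273.15 Kelvin/Celsius offset cancels and only the unit scale remains, tenths of a degree to thousandths. A one-line sketch of that conversion:

```c
/* Hysteresis is a temperature delta, so converting tenths of a degree
 * Kelvin to milli-Celsius needs no 273.15 offset, only the scale:
 * 1 deci-degree = 100 milli-degrees. */
static long deci_kelvin_delta_to_millicelsius(long hyst)
{
    return hyst * 100;
}
```

For example, an ACPI hysteresis of 20 (2.0 K) becomes 2000 milli-Celsius.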
+32 -8
drivers/thermal/of-thermal.c
··· 387 387 int (*get_trend)(void *, long *)) 388 388 { 389 389 struct device_node *np, *child, *sensor_np; 390 + struct thermal_zone_device *tzd = ERR_PTR(-ENODEV); 390 391 391 392 np = of_find_node_by_name(NULL, "thermal-zones"); 392 393 if (!np) 393 394 return ERR_PTR(-ENODEV); 394 395 395 - if (!dev || !dev->of_node) 396 + if (!dev || !dev->of_node) { 397 + of_node_put(np); 396 398 return ERR_PTR(-EINVAL); 399 + } 397 400 398 - sensor_np = dev->of_node; 401 + sensor_np = of_node_get(dev->of_node); 399 402 400 403 for_each_child_of_node(np, child) { 401 404 struct of_phandle_args sensor_specs; ··· 425 422 } 426 423 427 424 if (sensor_specs.np == sensor_np && id == sensor_id) { 428 - of_node_put(np); 429 - return thermal_zone_of_add_sensor(child, sensor_np, 430 - data, 431 - get_temp, 432 - get_trend); 425 + tzd = thermal_zone_of_add_sensor(child, sensor_np, 426 + data, 427 + get_temp, 428 + get_trend); 429 + of_node_put(sensor_specs.np); 430 + of_node_put(child); 431 + goto exit; 433 432 } 433 + of_node_put(sensor_specs.np); 434 434 } 435 + exit: 436 + of_node_put(sensor_np); 435 437 of_node_put(np); 436 438 437 - return ERR_PTR(-ENODEV); 439 + return tzd; 438 440 } 439 441 EXPORT_SYMBOL_GPL(thermal_zone_of_sensor_register); 440 442 ··· 631 623 632 624 /* Required for cooling map matching */ 633 625 trip->np = np; 626 + of_node_get(np); 634 627 635 628 return 0; 636 629 } ··· 739 730 return tz; 740 731 741 732 free_tbps: 733 + for (i = 0; i < tz->num_tbps; i++) 734 + of_node_put(tz->tbps[i].cooling_device); 742 735 kfree(tz->tbps); 743 736 free_trips: 737 + for (i = 0; i < tz->ntrips; i++) 738 + of_node_put(tz->trips[i].np); 744 739 kfree(tz->trips); 740 + of_node_put(gchild); 745 741 free_tz: 746 742 kfree(tz); 747 743 of_node_put(child); ··· 756 742 757 743 static inline void of_thermal_free_zone(struct __thermal_zone *tz) 758 744 { 745 + int i; 746 + 747 + for (i = 0; i < tz->num_tbps; i++) 748 + of_node_put(tz->tbps[i].cooling_device); 759 749 
kfree(tz->tbps); 750 + for (i = 0; i < tz->ntrips; i++) 751 + of_node_put(tz->trips[i].np); 760 752 kfree(tz->trips); 761 753 kfree(tz); 762 754 } ··· 834 814 /* attempting to build remaining zones still */ 835 815 } 836 816 } 817 + of_node_put(np); 837 818 838 819 return 0; 839 820 840 821 exit_free: 822 + of_node_put(child); 823 + of_node_put(np); 841 824 of_thermal_free_zone(tz); 842 825 843 826 /* no memory available, so free what we have built */ ··· 882 859 kfree(zone->ops); 883 860 of_thermal_free_zone(zone->devdata); 884 861 } 862 + of_node_put(np); 885 863 }
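The of-thermal patch above is entirely about balancing `of_node_get()`/`of_node_put()`: every handle that escapes a loop iteration (or is stored, like `trip->np`) must hold its own reference, and every exit path must drop what it took. A toy reference counter modelling that discipline (all names here are illustrative, not the devicetree API):

```c
/* Toy node with an explicit reference count. */
struct toy_node { int refcount; };

static struct toy_node *toy_get(struct toy_node *n)
{
    n->refcount++;
    return n;
}

static void toy_put(struct toy_node *n)
{
    n->refcount--;
}

/* Mimics the fixed sensor lookup: take references up front, drop the
 * per-iteration child reference on every path, and release everything
 * before returning. */
static int lookup(struct toy_node *root, struct toy_node *sensor,
                  struct toy_node *children[], int nchildren, int match)
{
    int i, found = -1;

    toy_get(root);
    toy_get(sensor);
    for (i = 0; i < nchildren; i++) {
        struct toy_node *child = toy_get(children[i]);

        if (i == match) {
            found = i;
            toy_put(child);     /* the early-exit path must put too */
            break;
        }
        toy_put(child);
    }
    toy_put(sensor);
    toy_put(root);
    return found;
}
```

After any call, every count should be back where it started; the bug class the patch fixes is an early `return` that skips one of the `put` calls, leaking a reference.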
+1 -1
drivers/thermal/samsung/exynos_thermal_common.h
··· 27 27 #define SENSOR_NAME_LEN 16 28 28 #define MAX_TRIP_COUNT 8 29 29 #define MAX_COOLING_DEVICE 4 30 - #define MAX_THRESHOLD_LEVS 5 30 + #define MAX_TRIMINFO_CTRL_REG 2 31 31 32 32 #define ACTIVE_INTERVAL 500 33 33 #define IDLE_INTERVAL 10000
+57 -113
drivers/thermal/samsung/exynos_tmu.c
··· 77 77 struct exynos_tmu_platform_data *pdata = data->pdata; 78 78 int temp_code; 79 79 80 - if (pdata->cal_mode == HW_MODE) 81 - return temp; 82 - 83 - if (data->soc == SOC_ARCH_EXYNOS4210) 84 - /* temp should range between 25 and 125 */ 85 - if (temp < 25 || temp > 125) { 86 - temp_code = -EINVAL; 87 - goto out; 88 - } 89 - 90 80 switch (pdata->cal_type) { 91 81 case TYPE_TWO_POINT_TRIMMING: 92 82 temp_code = (temp - pdata->first_point_trim) * ··· 91 101 temp_code = temp + pdata->default_temp_offset; 92 102 break; 93 103 } 94 - out: 104 + 95 105 return temp_code; 96 106 } 97 107 ··· 103 113 { 104 114 struct exynos_tmu_platform_data *pdata = data->pdata; 105 115 int temp; 106 - 107 - if (pdata->cal_mode == HW_MODE) 108 - return temp_code; 109 - 110 - if (data->soc == SOC_ARCH_EXYNOS4210) 111 - /* temp_code should range between 75 and 175 */ 112 - if (temp_code < 75 || temp_code > 175) { 113 - temp = -ENODATA; 114 - goto out; 115 - } 116 116 117 117 switch (pdata->cal_type) { 118 118 case TYPE_TWO_POINT_TRIMMING: ··· 118 138 temp = temp_code - pdata->default_temp_offset; 119 139 break; 120 140 } 121 - out: 141 + 122 142 return temp; 143 + } 144 + 145 + static void exynos_tmu_clear_irqs(struct exynos_tmu_data *data) 146 + { 147 + const struct exynos_tmu_registers *reg = data->pdata->registers; 148 + unsigned int val_irq; 149 + 150 + val_irq = readl(data->base + reg->tmu_intstat); 151 + /* 152 + * Clear the interrupts. Please note that the documentation for 153 + * Exynos3250, Exynos4412, Exynos5250 and Exynos5260 incorrectly 154 + * states that INTCLEAR register has a different placing of bits 155 + * responsible for FALL IRQs than INTSTAT register. Exynos5420 156 + * and Exynos5440 documentation is correct (Exynos4210 doesn't 157 + * support FALL IRQs at all). 
158 + */ 159 + writel(val_irq, data->base + reg->tmu_intclear); 123 160 } 124 161 125 162 static int exynos_tmu_initialize(struct platform_device *pdev) ··· 144 147 struct exynos_tmu_data *data = platform_get_drvdata(pdev); 145 148 struct exynos_tmu_platform_data *pdata = data->pdata; 146 149 const struct exynos_tmu_registers *reg = pdata->registers; 147 - unsigned int status, trim_info = 0, con; 150 + unsigned int status, trim_info = 0, con, ctrl; 148 151 unsigned int rising_threshold = 0, falling_threshold = 0; 149 - int ret = 0, threshold_code, i, trigger_levs = 0; 152 + int ret = 0, threshold_code, i; 150 153 151 154 mutex_lock(&data->lock); 152 155 clk_enable(data->clk); ··· 161 164 } 162 165 } 163 166 164 - if (TMU_SUPPORTS(pdata, TRIM_RELOAD)) 165 - __raw_writel(1, data->base + reg->triminfo_ctrl); 166 - 167 - if (pdata->cal_mode == HW_MODE) 168 - goto skip_calib_data; 167 + if (TMU_SUPPORTS(pdata, TRIM_RELOAD)) { 168 + for (i = 0; i < reg->triminfo_ctrl_count; i++) { 169 + if (pdata->triminfo_reload[i]) { 170 + ctrl = readl(data->base + 171 + reg->triminfo_ctrl[i]); 172 + ctrl |= pdata->triminfo_reload[i]; 173 + writel(ctrl, data->base + 174 + reg->triminfo_ctrl[i]); 175 + } 176 + } 177 + } 169 178 170 179 /* Save trimming info in order to perform calibration */ 171 180 if (data->soc == SOC_ARCH_EXYNOS5440) { ··· 200 197 trim_info = readl(data->base + reg->triminfo_data); 201 198 } 202 199 data->temp_error1 = trim_info & EXYNOS_TMU_TEMP_MASK; 203 - data->temp_error2 = ((trim_info >> reg->triminfo_85_shift) & 200 + data->temp_error2 = ((trim_info >> EXYNOS_TRIMINFO_85_SHIFT) & 204 201 EXYNOS_TMU_TEMP_MASK); 205 202 206 203 if (!data->temp_error1 || ··· 210 207 211 208 if (!data->temp_error2) 212 209 data->temp_error2 = 213 - (pdata->efuse_value >> reg->triminfo_85_shift) & 210 + (pdata->efuse_value >> EXYNOS_TRIMINFO_85_SHIFT) & 214 211 EXYNOS_TMU_TEMP_MASK; 215 - 216 - skip_calib_data: 217 - if (pdata->max_trigger_level > MAX_THRESHOLD_LEVS) { 218 - 
dev_err(&pdev->dev, "Invalid max trigger level\n"); 219 - ret = -EINVAL; 220 - goto out; 221 - } 222 - 223 - for (i = 0; i < pdata->max_trigger_level; i++) { 224 - if (!pdata->trigger_levels[i]) 225 - continue; 226 - 227 - if ((pdata->trigger_type[i] == HW_TRIP) && 228 - (!pdata->trigger_levels[pdata->max_trigger_level - 1])) { 229 - dev_err(&pdev->dev, "Invalid hw trigger level\n"); 230 - ret = -EINVAL; 231 - goto out; 232 - } 233 - 234 - /* Count trigger levels except the HW trip*/ 235 - if (!(pdata->trigger_type[i] == HW_TRIP)) 236 - trigger_levs++; 237 - } 238 212 239 213 rising_threshold = readl(data->base + reg->threshold_th0); 240 214 241 215 if (data->soc == SOC_ARCH_EXYNOS4210) { 242 216 /* Write temperature code for threshold */ 243 217 threshold_code = temp_to_code(data, pdata->threshold); 244 - if (threshold_code < 0) { 245 - ret = threshold_code; 246 - goto out; 247 - } 248 218 writeb(threshold_code, 249 219 data->base + reg->threshold_temp); 250 - for (i = 0; i < trigger_levs; i++) 220 + for (i = 0; i < pdata->non_hw_trigger_levels; i++) 251 221 writeb(pdata->trigger_levels[i], data->base + 252 222 reg->threshold_th0 + i * sizeof(reg->threshold_th0)); 253 223 254 - writel(reg->intclr_rise_mask, data->base + reg->tmu_intclear); 224 + exynos_tmu_clear_irqs(data); 255 225 } else { 256 226 /* Write temperature code for rising and falling threshold */ 257 - for (i = 0; 258 - i < trigger_levs && i < EXYNOS_MAX_TRIGGER_PER_REG; i++) { 227 + for (i = 0; i < pdata->non_hw_trigger_levels; i++) { 259 228 threshold_code = temp_to_code(data, 260 229 pdata->trigger_levels[i]); 261 - if (threshold_code < 0) { 262 - ret = threshold_code; 263 - goto out; 264 - } 265 230 rising_threshold &= ~(0xff << 8 * i); 266 231 rising_threshold |= threshold_code << 8 * i; 267 232 if (pdata->threshold_falling) { 268 233 threshold_code = temp_to_code(data, 269 234 pdata->trigger_levels[i] - 270 235 pdata->threshold_falling); 271 - if (threshold_code > 0) 272 - falling_threshold |= 
273 - threshold_code << 8 * i; 236 + falling_threshold |= threshold_code << 8 * i; 274 237 } 275 238 } 276 239 ··· 245 276 writel(falling_threshold, 246 277 data->base + reg->threshold_th1); 247 278 248 - writel((reg->intclr_rise_mask << reg->intclr_rise_shift) | 249 - (reg->intclr_fall_mask << reg->intclr_fall_shift), 250 - data->base + reg->tmu_intclear); 279 + exynos_tmu_clear_irqs(data); 251 280 252 281 /* if last threshold limit is also present */ 253 282 i = pdata->max_trigger_level - 1; ··· 253 286 (pdata->trigger_type[i] == HW_TRIP)) { 254 287 threshold_code = temp_to_code(data, 255 288 pdata->trigger_levels[i]); 256 - if (threshold_code < 0) { 257 - ret = threshold_code; 258 - goto out; 259 - } 260 289 if (i == EXYNOS_MAX_TRIGGER_PER_REG - 1) { 261 290 /* 1-4 level to be assigned in th0 reg */ 262 291 rising_threshold &= ~(0xff << 8 * i); ··· 288 325 struct exynos_tmu_data *data = platform_get_drvdata(pdev); 289 326 struct exynos_tmu_platform_data *pdata = data->pdata; 290 327 const struct exynos_tmu_registers *reg = pdata->registers; 291 - unsigned int con, interrupt_en, cal_val; 328 + unsigned int con, interrupt_en; 292 329 293 330 mutex_lock(&data->lock); 294 331 clk_enable(data->clk); ··· 298 335 if (pdata->test_mux) 299 336 con |= (pdata->test_mux << reg->test_mux_addr_shift); 300 337 301 - if (pdata->reference_voltage) { 302 - con &= ~(reg->buf_vref_sel_mask << reg->buf_vref_sel_shift); 303 - con |= pdata->reference_voltage << reg->buf_vref_sel_shift; 304 - } 338 + con &= ~(EXYNOS_TMU_REF_VOLTAGE_MASK << EXYNOS_TMU_REF_VOLTAGE_SHIFT); 339 + con |= pdata->reference_voltage << EXYNOS_TMU_REF_VOLTAGE_SHIFT; 305 340 306 - if (pdata->gain) { 307 - con &= ~(reg->buf_slope_sel_mask << reg->buf_slope_sel_shift); 308 - con |= (pdata->gain << reg->buf_slope_sel_shift); 309 - } 341 + con &= ~(EXYNOS_TMU_BUF_SLOPE_SEL_MASK << EXYNOS_TMU_BUF_SLOPE_SEL_SHIFT); 342 + con |= (pdata->gain << EXYNOS_TMU_BUF_SLOPE_SEL_SHIFT); 310 343 311 344 if 
(pdata->noise_cancel_mode) { 312 345 con &= ~(reg->therm_trip_mode_mask << ··· 310 351 con |= (pdata->noise_cancel_mode << reg->therm_trip_mode_shift); 311 352 } 312 353 313 - if (pdata->cal_mode == HW_MODE) { 314 - con &= ~(reg->calib_mode_mask << reg->calib_mode_shift); 315 - cal_val = 0; 316 - switch (pdata->cal_type) { 317 - case TYPE_TWO_POINT_TRIMMING: 318 - cal_val = 3; 319 - break; 320 - case TYPE_ONE_POINT_TRIMMING_85: 321 - cal_val = 2; 322 - break; 323 - case TYPE_ONE_POINT_TRIMMING_25: 324 - cal_val = 1; 325 - break; 326 - case TYPE_NONE: 327 - break; 328 - default: 329 - dev_err(&pdev->dev, "Invalid calibration type, using none\n"); 330 - } 331 - con |= cal_val << reg->calib_mode_shift; 332 - } 333 - 334 354 if (on) { 335 - con |= (1 << reg->core_en_shift); 355 + con |= (1 << EXYNOS_TMU_CORE_EN_SHIFT); 336 356 interrupt_en = 337 357 pdata->trigger_enable[3] << reg->inten_rise3_shift | 338 358 pdata->trigger_enable[2] << reg->inten_rise2_shift | ··· 321 383 interrupt_en |= 322 384 interrupt_en << reg->inten_fall0_shift; 323 385 } else { 324 - con &= ~(1 << reg->core_en_shift); 386 + con &= ~(1 << EXYNOS_TMU_CORE_EN_SHIFT); 325 387 interrupt_en = 0; /* Disable all interrupts */ 326 388 } 327 389 writel(interrupt_en, data->base + reg->tmu_inten); ··· 342 404 clk_enable(data->clk); 343 405 344 406 temp_code = readb(data->base + reg->tmu_cur_temp); 345 - temp = code_to_temp(data, temp_code); 346 407 408 + if (data->soc == SOC_ARCH_EXYNOS4210) 409 + /* temp_code should range between 75 and 175 */ 410 + if (temp_code < 75 || temp_code > 175) { 411 + temp = -ENODATA; 412 + goto out; 413 + } 414 + 415 + temp = code_to_temp(data, temp_code); 416 + out: 347 417 clk_disable(data->clk); 348 418 mutex_unlock(&data->lock); 349 419 ··· 411 465 struct exynos_tmu_data, irq_work); 412 466 struct exynos_tmu_platform_data *pdata = data->pdata; 413 467 const struct exynos_tmu_registers *reg = pdata->registers; 414 - unsigned int val_irq, val_type; 468 + unsigned int 
val_type; 415 469 416 470 if (!IS_ERR(data->clk_sec)) 417 471 clk_enable(data->clk_sec); ··· 429 483 clk_enable(data->clk); 430 484 431 485 /* TODO: take action based on particular interrupt */ 432 - val_irq = readl(data->base + reg->tmu_intstat); 433 - /* clear the interrupts */ 434 - writel(val_irq, data->base + reg->tmu_intclear); 486 + exynos_tmu_clear_irqs(data); 435 487 436 488 clk_disable(data->clk); 437 489 mutex_unlock(&data->lock);
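The exynos_tmu cleanup keeps the TYPE_TWO_POINT_TRIMMING conversion as the core of `temp_to_code()`/`code_to_temp()`: linear interpolation between two fused calibration points. A self-contained sketch of that pair (the trim values in the test are made-up sample data, not real Exynos fuse contents):

```c
/* Two-point trimming: the sensor code is interpolated linearly between
 * the fused codes measured at the two trim temperatures (25 C / 85 C
 * on these SoCs). */
struct trim {
    int temp_error1;        /* fused code at first_point_trim */
    int temp_error2;        /* fused code at second_point_trim */
    int first_point_trim;   /* typically 25 C */
    int second_point_trim;  /* typically 85 C */
};

static int temp_to_code(const struct trim *t, int temp)
{
    return (temp - t->first_point_trim) *
               (t->temp_error2 - t->temp_error1) /
               (t->second_point_trim - t->first_point_trim) +
           t->temp_error1;
}

static int code_to_temp(const struct trim *t, int code)
{
    return (code - t->temp_error1) *
               (t->second_point_trim - t->first_point_trim) /
               (t->temp_error2 - t->temp_error1) +
           t->first_point_trim;
}
```

Note that the patch also moves the Exynos4210 code-range sanity check (75..175) out of `code_to_temp()` into the read path, so the conversion itself stays a pure function like the sketch above.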
+9 -80
drivers/thermal/samsung/exynos_tmu.h
··· 34 34 TYPE_NONE, 35 35 }; 36 36 37 - enum calibration_mode { 38 - SW_MODE, 39 - HW_MODE, 40 - }; 41 - 42 37 enum soc_type { 43 38 SOC_ARCH_EXYNOS3250 = 1, 44 39 SOC_ARCH_EXYNOS4210, ··· 77 82 * bitfields. The register validity, offsets and bitfield values may vary 78 83 * slightly across different exynos SOC's. 79 84 * @triminfo_data: register containing 2 pont trimming data 80 - * @triminfo_25_shift: shift bit of the 25 C trim value in triminfo_data reg. 81 - * @triminfo_85_shift: shift bit of the 85 C trim value in triminfo_data reg. 82 85 * @triminfo_ctrl: trim info controller register. 83 - * @triminfo_reload_shift: shift of triminfo reload enable bit in triminfo_ctrl 84 - reg. 86 + * @triminfo_ctrl_count: the number of trim info controller register. 85 87 * @tmu_ctrl: TMU main controller register. 86 88 * @test_mux_addr_shift: shift bits of test mux address. 87 - * @buf_vref_sel_shift: shift bits of reference voltage in tmu_ctrl register. 88 - * @buf_vref_sel_mask: mask bits of reference voltage in tmu_ctrl register. 89 89 * @therm_trip_mode_shift: shift bits of tripping mode in tmu_ctrl register. 90 90 * @therm_trip_mode_mask: mask bits of tripping mode in tmu_ctrl register. 91 91 * @therm_trip_en_shift: shift bits of tripping enable in tmu_ctrl register. 92 - * @buf_slope_sel_shift: shift bits of amplifier gain value in tmu_ctrl 93 - register. 94 - * @buf_slope_sel_mask: mask bits of amplifier gain value in tmu_ctrl register. 95 - * @calib_mode_shift: shift bits of calibration mode value in tmu_ctrl 96 - register. 97 - * @calib_mode_mask: mask bits of calibration mode value in tmu_ctrl 98 - register. 99 - * @therm_trip_tq_en_shift: shift bits of thermal trip enable by TQ pin in 100 - tmu_ctrl register. 101 - * @core_en_shift: shift bits of TMU core enable bit in tmu_ctrl register. 102 92 * @tmu_status: register drescribing the TMU status. 103 93 * @tmu_cur_temp: register containing the current temperature of the TMU. 
104 - * @tmu_cur_temp_shift: shift bits of current temp value in tmu_cur_temp 105 - register. 106 94 * @threshold_temp: register containing the base threshold level. 107 95 * @threshold_th0: Register containing first set of rising levels. 108 - * @threshold_th0_l0_shift: shift bits of level0 threshold temperature. 109 - * @threshold_th0_l1_shift: shift bits of level1 threshold temperature. 110 - * @threshold_th0_l2_shift: shift bits of level2 threshold temperature. 111 - * @threshold_th0_l3_shift: shift bits of level3 threshold temperature. 112 96 * @threshold_th1: Register containing second set of rising levels. 113 - * @threshold_th1_l0_shift: shift bits of level0 threshold temperature. 114 - * @threshold_th1_l1_shift: shift bits of level1 threshold temperature. 115 - * @threshold_th1_l2_shift: shift bits of level2 threshold temperature. 116 - * @threshold_th1_l3_shift: shift bits of level3 threshold temperature. 117 97 * @threshold_th2: Register containing third set of rising levels. 118 - * @threshold_th2_l0_shift: shift bits of level0 threshold temperature. 119 - * @threshold_th3: Register containing fourth set of rising levels. 120 98 * @threshold_th3_l0_shift: shift bits of level0 threshold temperature. 121 99 * @tmu_inten: register containing the different threshold interrupt 122 100 enable bits. ··· 98 130 * @inten_rise2_shift: shift bits of rising 2 interrupt bits. 99 131 * @inten_rise3_shift: shift bits of rising 3 interrupt bits. 100 132 * @inten_fall0_shift: shift bits of falling 0 interrupt bits. 101 - * @inten_fall1_shift: shift bits of falling 1 interrupt bits. 102 - * @inten_fall2_shift: shift bits of falling 2 interrupt bits. 103 - * @inten_fall3_shift: shift bits of falling 3 interrupt bits. 104 133 * @tmu_intstat: Register containing the interrupt status values. 105 134 * @tmu_intclear: Register for clearing the raised interrupt status. 
106 - * @intclr_fall_shift: shift bits for interrupt clear fall 0 107 - * @intclr_rise_shift: shift bits of all rising interrupt bits. 108 - * @intclr_rise_mask: mask bits of all rising interrupt bits. 109 - * @intclr_fall_mask: mask bits of all rising interrupt bits. 110 135 * @emul_con: TMU emulation controller register. 111 136 * @emul_temp_shift: shift bits of emulation temperature. 112 137 * @emul_time_shift: shift bits of emulation time. 113 - * @emul_time_mask: mask bits of emulation time. 114 138 * @tmu_irqstatus: register to find which TMU generated interrupts. 115 139 * @tmu_pmin: register to get/set the Pmin value. 116 140 */ 117 141 struct exynos_tmu_registers { 118 142 u32 triminfo_data; 119 - u32 triminfo_25_shift; 120 - u32 triminfo_85_shift; 121 143 122 - u32 triminfo_ctrl; 123 - u32 triminfo_ctrl1; 124 - u32 triminfo_reload_shift; 144 + u32 triminfo_ctrl[MAX_TRIMINFO_CTRL_REG]; 145 + u32 triminfo_ctrl_count; 125 146 126 147 u32 tmu_ctrl; 127 148 u32 test_mux_addr_shift; 128 - u32 buf_vref_sel_shift; 129 - u32 buf_vref_sel_mask; 130 149 u32 therm_trip_mode_shift; 131 150 u32 therm_trip_mode_mask; 132 151 u32 therm_trip_en_shift; 133 - u32 buf_slope_sel_shift; 134 - u32 buf_slope_sel_mask; 135 - u32 calib_mode_shift; 136 - u32 calib_mode_mask; 137 - u32 therm_trip_tq_en_shift; 138 - u32 core_en_shift; 139 152 140 153 u32 tmu_status; 141 154 142 155 u32 tmu_cur_temp; 143 - u32 tmu_cur_temp_shift; 144 156 145 157 u32 threshold_temp; 146 158 147 159 u32 threshold_th0; 148 - u32 threshold_th0_l0_shift; 149 - u32 threshold_th0_l1_shift; 150 - u32 threshold_th0_l2_shift; 151 - u32 threshold_th0_l3_shift; 152 - 153 160 u32 threshold_th1; 154 - u32 threshold_th1_l0_shift; 155 - u32 threshold_th1_l1_shift; 156 - u32 threshold_th1_l2_shift; 157 - u32 threshold_th1_l3_shift; 158 - 159 161 u32 threshold_th2; 160 - u32 threshold_th2_l0_shift; 161 - 162 - u32 threshold_th3; 163 162 u32 threshold_th3_l0_shift; 164 163 165 164 u32 tmu_inten; ··· 135 200 u32 
inten_rise2_shift; 136 201 u32 inten_rise3_shift; 137 202 u32 inten_fall0_shift; 138 - u32 inten_fall1_shift; 139 - u32 inten_fall2_shift; 140 - u32 inten_fall3_shift; 141 203 142 204 u32 tmu_intstat; 143 205 144 206 u32 tmu_intclear; 145 - u32 intclr_fall_shift; 146 - u32 intclr_rise_shift; 147 - u32 intclr_fall_mask; 148 - u32 intclr_rise_mask; 149 207 150 208 u32 emul_con; 151 209 u32 emul_temp_shift; 152 210 u32 emul_time_shift; 153 - u32 emul_time_mask; 154 211 155 212 u32 tmu_irqstatus; 156 213 u32 tmu_pmin; ··· 177 250 * 1 = enable trigger_level[] interrupt, 178 251 * 0 = disable trigger_level[] interrupt 179 252 * @max_trigger_level: max trigger level supported by the TMU 253 + * @non_hw_trigger_levels: number of defined non-hardware trigger levels 180 254 * @gain: gain of amplifier in the positive-TC generator block 181 - * 0 <= gain <= 15 255 + * 0 < gain <= 15 182 256 * @reference_voltage: reference voltage of amplifier 183 257 * in the positive-TC generator block 184 - * 0 <= reference_voltage <= 31 258 + * 0 < reference_voltage <= 31 185 259 * @noise_cancel_mode: noise cancellation mode 186 260 * 000, 100, 101, 110 and 111 can be different modes 187 261 * @type: determines the type of SOC ··· 193 265 * @second_point_trim: temp value of the second point trimming 194 266 * @default_temp_offset: default temperature offset in case of no trimming 195 267 * @test_mux; information if SoC supports test MUX 268 + * @triminfo_reload: reload value to read TRIMINFO register 196 269 * @cal_type: calibration type for temperature 197 - * @cal_mode: calibration mode for temperature 198 270 * @freq_clip_table: Table representing frequency reduction percentage. 199 271 * @freq_tab_count: Count of the above table as frequency reduction may 200 272 * applicable to only some of the trigger levels. 
··· 212 284 enum trigger_type trigger_type[MAX_TRIP_COUNT]; 213 285 bool trigger_enable[MAX_TRIP_COUNT]; 214 286 u8 max_trigger_level; 287 + u8 non_hw_trigger_levels; 215 288 u8 gain; 216 289 u8 reference_voltage; 217 290 u8 noise_cancel_mode; ··· 224 295 u8 second_point_trim; 225 296 u8 default_temp_offset; 226 297 u8 test_mux; 298 + u8 triminfo_reload[MAX_TRIMINFO_CTRL_REG]; 227 299 228 300 enum calibration_type cal_type; 229 - enum calibration_mode cal_mode; 230 301 enum soc_type type; 231 302 struct freq_clip_table freq_tab[4]; 232 303 unsigned int freq_tab_count;
+22 -83
drivers/thermal/samsung/exynos_tmu_data.c
··· 27 27 #if defined(CONFIG_CPU_EXYNOS4210) 28 28 static const struct exynos_tmu_registers exynos4210_tmu_registers = { 29 29 .triminfo_data = EXYNOS_TMU_REG_TRIMINFO, 30 - .triminfo_25_shift = EXYNOS_TRIMINFO_25_SHIFT, 31 - .triminfo_85_shift = EXYNOS_TRIMINFO_85_SHIFT, 32 30 .tmu_ctrl = EXYNOS_TMU_REG_CONTROL, 33 - .buf_vref_sel_shift = EXYNOS_TMU_REF_VOLTAGE_SHIFT, 34 - .buf_vref_sel_mask = EXYNOS_TMU_REF_VOLTAGE_MASK, 35 - .buf_slope_sel_shift = EXYNOS_TMU_BUF_SLOPE_SEL_SHIFT, 36 - .buf_slope_sel_mask = EXYNOS_TMU_BUF_SLOPE_SEL_MASK, 37 - .core_en_shift = EXYNOS_TMU_CORE_EN_SHIFT, 38 31 .tmu_status = EXYNOS_TMU_REG_STATUS, 39 32 .tmu_cur_temp = EXYNOS_TMU_REG_CURRENT_TEMP, 40 33 .threshold_temp = EXYNOS4210_TMU_REG_THRESHOLD_TEMP, ··· 39 46 .inten_rise3_shift = EXYNOS_TMU_INTEN_RISE3_SHIFT, 40 47 .tmu_intstat = EXYNOS_TMU_REG_INTSTAT, 41 48 .tmu_intclear = EXYNOS_TMU_REG_INTCLEAR, 42 - .intclr_rise_mask = EXYNOS4210_TMU_TRIG_LEVEL_MASK, 43 49 }; 44 50 45 51 struct exynos_tmu_init_data const exynos4210_default_tmu_data = { ··· 56 64 .trigger_type[1] = THROTTLE_ACTIVE, 57 65 .trigger_type[2] = SW_TRIP, 58 66 .max_trigger_level = 4, 67 + .non_hw_trigger_levels = 3, 59 68 .gain = 15, 60 69 .reference_voltage = 7, 61 70 .cal_type = TYPE_ONE_POINT_TRIMMING, ··· 86 93 #if defined(CONFIG_SOC_EXYNOS3250) 87 94 static const struct exynos_tmu_registers exynos3250_tmu_registers = { 88 95 .triminfo_data = EXYNOS_TMU_REG_TRIMINFO, 89 - .triminfo_25_shift = EXYNOS_TRIMINFO_25_SHIFT, 90 - .triminfo_85_shift = EXYNOS_TRIMINFO_85_SHIFT, 96 + .triminfo_ctrl[0] = EXYNOS_TMU_TRIMINFO_CON1, 97 + .triminfo_ctrl[1] = EXYNOS_TMU_TRIMINFO_CON2, 98 + .triminfo_ctrl_count = 2, 91 99 .tmu_ctrl = EXYNOS_TMU_REG_CONTROL, 92 100 .test_mux_addr_shift = EXYNOS4412_MUX_ADDR_SHIFT, 93 - .buf_vref_sel_shift = EXYNOS_TMU_REF_VOLTAGE_SHIFT, 94 - .buf_vref_sel_mask = EXYNOS_TMU_REF_VOLTAGE_MASK, 95 101 .therm_trip_mode_shift = EXYNOS_TMU_TRIP_MODE_SHIFT, 96 102 .therm_trip_mode_mask = 
EXYNOS_TMU_TRIP_MODE_MASK, 97 103 .therm_trip_en_shift = EXYNOS_TMU_THERM_TRIP_EN_SHIFT, 98 - .buf_slope_sel_shift = EXYNOS_TMU_BUF_SLOPE_SEL_SHIFT, 99 - .buf_slope_sel_mask = EXYNOS_TMU_BUF_SLOPE_SEL_MASK, 100 - .core_en_shift = EXYNOS_TMU_CORE_EN_SHIFT, 101 104 .tmu_status = EXYNOS_TMU_REG_STATUS, 102 105 .tmu_cur_temp = EXYNOS_TMU_REG_CURRENT_TEMP, 103 106 .threshold_th0 = EXYNOS_THD_TEMP_RISE, ··· 105 116 .inten_fall0_shift = EXYNOS_TMU_INTEN_FALL0_SHIFT, 106 117 .tmu_intstat = EXYNOS_TMU_REG_INTSTAT, 107 118 .tmu_intclear = EXYNOS_TMU_REG_INTCLEAR, 108 - .intclr_fall_shift = EXYNOS_TMU_CLEAR_FALL_INT_SHIFT, 109 - .intclr_rise_shift = EXYNOS_TMU_RISE_INT_SHIFT, 110 - .intclr_rise_mask = EXYNOS_TMU_RISE_INT_MASK, 111 - .intclr_fall_mask = EXYNOS_TMU_FALL_INT_MASK, 112 119 .emul_con = EXYNOS_EMUL_CON, 113 120 .emul_temp_shift = EXYNOS_EMUL_DATA_SHIFT, 114 121 .emul_time_shift = EXYNOS_EMUL_TIME_SHIFT, 115 - .emul_time_mask = EXYNOS_EMUL_TIME_MASK, 116 122 }; 117 123 118 124 #define EXYNOS3250_TMU_DATA \ ··· 125 141 .trigger_type[2] = SW_TRIP, \ 126 142 .trigger_type[3] = HW_TRIP, \ 127 143 .max_trigger_level = 4, \ 144 + .non_hw_trigger_levels = 3, \ 128 145 .gain = 8, \ 129 146 .reference_voltage = 16, \ 130 147 .noise_cancel_mode = 4, \ ··· 145 160 .temp_level = 95, \ 146 161 }, \ 147 162 .freq_tab_count = 2, \ 163 + .triminfo_reload[0] = EXYNOS_TRIMINFO_RELOAD_ENABLE, \ 164 + .triminfo_reload[1] = EXYNOS_TRIMINFO_RELOAD_ENABLE, \ 148 165 .registers = &exynos3250_tmu_registers, \ 149 - .features = (TMU_SUPPORT_EMULATION | \ 166 + .features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_TRIM_RELOAD | \ 150 167 TMU_SUPPORT_FALLING_TRIP | TMU_SUPPORT_READY_STATUS | \ 151 168 TMU_SUPPORT_EMUL_TIME) 152 169 #endif ··· 169 182 #if defined(CONFIG_SOC_EXYNOS4412) || defined(CONFIG_SOC_EXYNOS5250) 170 183 static const struct exynos_tmu_registers exynos4412_tmu_registers = { 171 184 .triminfo_data = EXYNOS_TMU_REG_TRIMINFO, 172 - .triminfo_25_shift = EXYNOS_TRIMINFO_25_SHIFT, 
173 - .triminfo_85_shift = EXYNOS_TRIMINFO_85_SHIFT, 174 - .triminfo_ctrl = EXYNOS_TMU_TRIMINFO_CON, 175 - .triminfo_reload_shift = EXYNOS_TRIMINFO_RELOAD_SHIFT, 185 + .triminfo_ctrl[0] = EXYNOS_TMU_TRIMINFO_CON2, 186 + .triminfo_ctrl_count = 1, 176 187 .tmu_ctrl = EXYNOS_TMU_REG_CONTROL, 177 188 .test_mux_addr_shift = EXYNOS4412_MUX_ADDR_SHIFT, 178 - .buf_vref_sel_shift = EXYNOS_TMU_REF_VOLTAGE_SHIFT, 179 - .buf_vref_sel_mask = EXYNOS_TMU_REF_VOLTAGE_MASK, 180 189 .therm_trip_mode_shift = EXYNOS_TMU_TRIP_MODE_SHIFT, 181 190 .therm_trip_mode_mask = EXYNOS_TMU_TRIP_MODE_MASK, 182 191 .therm_trip_en_shift = EXYNOS_TMU_THERM_TRIP_EN_SHIFT, 183 - .buf_slope_sel_shift = EXYNOS_TMU_BUF_SLOPE_SEL_SHIFT, 184 - .buf_slope_sel_mask = EXYNOS_TMU_BUF_SLOPE_SEL_MASK, 185 - .core_en_shift = EXYNOS_TMU_CORE_EN_SHIFT, 186 192 .tmu_status = EXYNOS_TMU_REG_STATUS, 187 193 .tmu_cur_temp = EXYNOS_TMU_REG_CURRENT_TEMP, 188 194 .threshold_th0 = EXYNOS_THD_TEMP_RISE, ··· 188 208 .inten_fall0_shift = EXYNOS_TMU_INTEN_FALL0_SHIFT, 189 209 .tmu_intstat = EXYNOS_TMU_REG_INTSTAT, 190 210 .tmu_intclear = EXYNOS_TMU_REG_INTCLEAR, 191 - .intclr_fall_shift = EXYNOS_TMU_CLEAR_FALL_INT_SHIFT, 192 - .intclr_rise_shift = EXYNOS_TMU_RISE_INT_SHIFT, 193 - .intclr_rise_mask = EXYNOS_TMU_RISE_INT_MASK, 194 - .intclr_fall_mask = EXYNOS_TMU_FALL_INT_MASK, 195 211 .emul_con = EXYNOS_EMUL_CON, 196 212 .emul_temp_shift = EXYNOS_EMUL_DATA_SHIFT, 197 213 .emul_time_shift = EXYNOS_EMUL_TIME_SHIFT, 198 - .emul_time_mask = EXYNOS_EMUL_TIME_MASK, 199 214 }; 200 215 201 216 #define EXYNOS4412_TMU_DATA \ ··· 208 233 .trigger_type[2] = SW_TRIP, \ 209 234 .trigger_type[3] = HW_TRIP, \ 210 235 .max_trigger_level = 4, \ 236 + .non_hw_trigger_levels = 3, \ 211 237 .gain = 8, \ 212 238 .reference_voltage = 16, \ 213 239 .noise_cancel_mode = 4, \ ··· 228 252 .temp_level = 95, \ 229 253 }, \ 230 254 .freq_tab_count = 2, \ 255 + .triminfo_reload[0] = EXYNOS_TRIMINFO_RELOAD_ENABLE, \ 231 256 .registers = 
&exynos4412_tmu_registers, \
232 257 .features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_TRIM_RELOAD | \
233 258 TMU_SUPPORT_FALLING_TRIP | TMU_SUPPORT_READY_STATUS | \
···
263 286 #if defined(CONFIG_SOC_EXYNOS5260)
264 287 static const struct exynos_tmu_registers exynos5260_tmu_registers = {
265 288 .triminfo_data = EXYNOS_TMU_REG_TRIMINFO,
266 - .triminfo_25_shift = EXYNOS_TRIMINFO_25_SHIFT,
267 - .triminfo_85_shift = EXYNOS_TRIMINFO_85_SHIFT,
268 289 .tmu_ctrl = EXYNOS_TMU_REG_CONTROL,
269 - .tmu_ctrl = EXYNOS_TMU_REG_CONTROL1,
270 - .buf_vref_sel_shift = EXYNOS_TMU_REF_VOLTAGE_SHIFT,
271 - .buf_vref_sel_mask = EXYNOS_TMU_REF_VOLTAGE_MASK,
272 290 .therm_trip_mode_shift = EXYNOS_TMU_TRIP_MODE_SHIFT,
273 291 .therm_trip_mode_mask = EXYNOS_TMU_TRIP_MODE_MASK,
274 292 .therm_trip_en_shift = EXYNOS_TMU_THERM_TRIP_EN_SHIFT,
275 - .buf_slope_sel_shift = EXYNOS_TMU_BUF_SLOPE_SEL_SHIFT,
276 - .buf_slope_sel_mask = EXYNOS_TMU_BUF_SLOPE_SEL_MASK,
277 - .core_en_shift = EXYNOS_TMU_CORE_EN_SHIFT,
278 293 .tmu_status = EXYNOS_TMU_REG_STATUS,
279 294 .tmu_cur_temp = EXYNOS_TMU_REG_CURRENT_TEMP,
280 295 .threshold_th0 = EXYNOS_THD_TEMP_RISE,
···
279 310 .inten_fall0_shift = EXYNOS_TMU_INTEN_FALL0_SHIFT,
280 311 .tmu_intstat = EXYNOS5260_TMU_REG_INTSTAT,
281 312 .tmu_intclear = EXYNOS5260_TMU_REG_INTCLEAR,
282 - .intclr_fall_shift = EXYNOS5420_TMU_CLEAR_FALL_INT_SHIFT,
283 - .intclr_rise_shift = EXYNOS_TMU_RISE_INT_SHIFT,
284 - .intclr_rise_mask = EXYNOS5260_TMU_RISE_INT_MASK,
285 - .intclr_fall_mask = EXYNOS5260_TMU_FALL_INT_MASK,
286 313 .emul_con = EXYNOS5260_EMUL_CON,
287 314 .emul_temp_shift = EXYNOS_EMUL_DATA_SHIFT,
288 315 .emul_time_shift = EXYNOS_EMUL_TIME_SHIFT,
289 - .emul_time_mask = EXYNOS_EMUL_TIME_MASK,
290 316 };
291 317
292 318 #define __EXYNOS5260_TMU_DATA \
···
299 335 .trigger_type[2] = SW_TRIP, \
300 336 .trigger_type[3] = HW_TRIP, \
301 337 .max_trigger_level = 4, \
338 + .non_hw_trigger_levels = 3, \
302 339 .gain = 8, \
303 340 .reference_voltage = 16, \
304 341 .noise_cancel_mode = 4, \
···
324 359 #define EXYNOS5260_TMU_DATA \
325 360 __EXYNOS5260_TMU_DATA \
326 361 .type = SOC_ARCH_EXYNOS5260, \
327 - .features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_TRIM_RELOAD | \
328 - TMU_SUPPORT_FALLING_TRIP | TMU_SUPPORT_READY_STATUS | \
329 - TMU_SUPPORT_EMUL_TIME)
362 + .features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_FALLING_TRIP | \
363 + TMU_SUPPORT_READY_STATUS | TMU_SUPPORT_EMUL_TIME)
330 364
331 365 struct exynos_tmu_init_data const exynos5260_default_tmu_data = {
332 366 .tmu_data = {
···
342 378 #if defined(CONFIG_SOC_EXYNOS5420)
343 379 static const struct exynos_tmu_registers exynos5420_tmu_registers = {
344 380 .triminfo_data = EXYNOS_TMU_REG_TRIMINFO,
345 - .triminfo_25_shift = EXYNOS_TRIMINFO_25_SHIFT,
346 - .triminfo_85_shift = EXYNOS_TRIMINFO_85_SHIFT,
347 381 .tmu_ctrl = EXYNOS_TMU_REG_CONTROL,
348 - .buf_vref_sel_shift = EXYNOS_TMU_REF_VOLTAGE_SHIFT,
349 - .buf_vref_sel_mask = EXYNOS_TMU_REF_VOLTAGE_MASK,
350 382 .therm_trip_mode_shift = EXYNOS_TMU_TRIP_MODE_SHIFT,
351 383 .therm_trip_mode_mask = EXYNOS_TMU_TRIP_MODE_MASK,
352 384 .therm_trip_en_shift = EXYNOS_TMU_THERM_TRIP_EN_SHIFT,
353 - .buf_slope_sel_shift = EXYNOS_TMU_BUF_SLOPE_SEL_SHIFT,
354 - .buf_slope_sel_mask = EXYNOS_TMU_BUF_SLOPE_SEL_MASK,
355 - .core_en_shift = EXYNOS_TMU_CORE_EN_SHIFT,
356 385 .tmu_status = EXYNOS_TMU_REG_STATUS,
357 386 .tmu_cur_temp = EXYNOS_TMU_REG_CURRENT_TEMP,
358 387 .threshold_th0 = EXYNOS_THD_TEMP_RISE,
···
359 402 .inten_fall0_shift = EXYNOS_TMU_INTEN_FALL0_SHIFT,
360 403 .tmu_intstat = EXYNOS_TMU_REG_INTSTAT,
361 404 .tmu_intclear = EXYNOS_TMU_REG_INTCLEAR,
362 - .intclr_fall_shift = EXYNOS5420_TMU_CLEAR_FALL_INT_SHIFT,
363 - .intclr_rise_shift = EXYNOS_TMU_RISE_INT_SHIFT,
364 - .intclr_rise_mask = EXYNOS_TMU_RISE_INT_MASK,
365 - .intclr_fall_mask = EXYNOS_TMU_FALL_INT_MASK,
366 405 .emul_con = EXYNOS_EMUL_CON,
367 406 .emul_temp_shift = EXYNOS_EMUL_DATA_SHIFT,
368 407 .emul_time_shift = EXYNOS_EMUL_TIME_SHIFT,
369 - .emul_time_mask = EXYNOS_EMUL_TIME_MASK,
370 408 };
371 409
372 410 #define __EXYNOS5420_TMU_DATA \
···
379 427 .trigger_type[2] = SW_TRIP, \
380 428 .trigger_type[3] = HW_TRIP, \
381 429 .max_trigger_level = 4, \
430 + .non_hw_trigger_levels = 3, \
382 431 .gain = 8, \
383 432 .reference_voltage = 16, \
384 433 .noise_cancel_mode = 4, \
···
404 451 #define EXYNOS5420_TMU_DATA \
405 452 __EXYNOS5420_TMU_DATA \
406 453 .type = SOC_ARCH_EXYNOS5250, \
407 - .features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_TRIM_RELOAD | \
408 - TMU_SUPPORT_FALLING_TRIP | TMU_SUPPORT_READY_STATUS | \
409 - TMU_SUPPORT_EMUL_TIME)
454 + .features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_FALLING_TRIP | \
455 + TMU_SUPPORT_READY_STATUS | TMU_SUPPORT_EMUL_TIME)
410 456
411 457 #define EXYNOS5420_TMU_DATA_SHARED \
412 458 __EXYNOS5420_TMU_DATA \
413 459 .type = SOC_ARCH_EXYNOS5420_TRIMINFO, \
414 - .features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_TRIM_RELOAD | \
415 - TMU_SUPPORT_FALLING_TRIP | TMU_SUPPORT_READY_STATUS | \
416 - TMU_SUPPORT_EMUL_TIME | TMU_SUPPORT_ADDRESS_MULTIPLE)
460 + .features = (TMU_SUPPORT_EMULATION | TMU_SUPPORT_FALLING_TRIP | \
461 + TMU_SUPPORT_READY_STATUS | TMU_SUPPORT_EMUL_TIME | \
462 + TMU_SUPPORT_ADDRESS_MULTIPLE)
417 463
418 464 struct exynos_tmu_init_data const exynos5420_default_tmu_data = {
419 465 .tmu_data = {
···
429 477 #if defined(CONFIG_SOC_EXYNOS5440)
430 478 static const struct exynos_tmu_registers exynos5440_tmu_registers = {
431 479 .triminfo_data = EXYNOS5440_TMU_S0_7_TRIM,
432 - .triminfo_25_shift = EXYNOS_TRIMINFO_25_SHIFT,
433 - .triminfo_85_shift = EXYNOS_TRIMINFO_85_SHIFT,
434 480 .tmu_ctrl = EXYNOS5440_TMU_S0_7_CTRL,
435 - .buf_vref_sel_shift = EXYNOS_TMU_REF_VOLTAGE_SHIFT,
436 - .buf_vref_sel_mask = EXYNOS_TMU_REF_VOLTAGE_MASK,
437 481 .therm_trip_mode_shift = EXYNOS_TMU_TRIP_MODE_SHIFT,
438 482 .therm_trip_mode_mask = EXYNOS_TMU_TRIP_MODE_MASK,
439 483 .therm_trip_en_shift = EXYNOS_TMU_THERM_TRIP_EN_SHIFT,
440 - .buf_slope_sel_shift = EXYNOS_TMU_BUF_SLOPE_SEL_SHIFT,
441 - .buf_slope_sel_mask = EXYNOS_TMU_BUF_SLOPE_SEL_MASK,
442 - .calib_mode_shift = EXYNOS_TMU_CALIB_MODE_SHIFT,
443 - .calib_mode_mask = EXYNOS_TMU_CALIB_MODE_MASK,
444 - .core_en_shift = EXYNOS_TMU_CORE_EN_SHIFT,
445 484 .tmu_status = EXYNOS5440_TMU_S0_7_STATUS,
446 485 .tmu_cur_temp = EXYNOS5440_TMU_S0_7_TEMP,
447 486 .threshold_th0 = EXYNOS5440_TMU_S0_7_TH0,
···
447 504 .inten_fall0_shift = EXYNOS5440_TMU_INTEN_FALL0_SHIFT,
448 505 .tmu_intstat = EXYNOS5440_TMU_S0_7_IRQ,
449 506 .tmu_intclear = EXYNOS5440_TMU_S0_7_IRQ,
450 - .intclr_fall_shift = EXYNOS5440_TMU_CLEAR_FALL_INT_SHIFT,
451 - .intclr_rise_shift = EXYNOS5440_TMU_RISE_INT_SHIFT,
452 - .intclr_rise_mask = EXYNOS5440_TMU_RISE_INT_MASK,
453 - .intclr_fall_mask = EXYNOS5440_TMU_FALL_INT_MASK,
454 507 .tmu_irqstatus = EXYNOS5440_TMU_IRQ_STATUS,
455 508 .emul_con = EXYNOS5440_TMU_S0_7_DEBUG,
456 509 .emul_temp_shift = EXYNOS_EMUL_DATA_SHIFT,
···
460 521 .trigger_type[0] = SW_TRIP, \
461 522 .trigger_type[4] = HW_TRIP, \
462 523 .max_trigger_level = 5, \
524 + .non_hw_trigger_levels = 1, \
463 525 .gain = 5, \
464 526 .reference_voltage = 16, \
465 527 .noise_cancel_mode = 4, \
466 528 .cal_type = TYPE_ONE_POINT_TRIMMING, \
467 - .cal_mode = 0, \
468 529 .efuse_value = 0x5b2d, \
469 530 .min_efuse_value = 16, \
470 531 .max_efuse_value = 76, \
+6 -47
drivers/thermal/samsung/exynos_tmu_data.h
···
39 39 #define EXYNOS_TMU_BUF_SLOPE_SEL_SHIFT 8
40 40 #define EXYNOS_TMU_CORE_EN_SHIFT 0
41 41
42 + /* Exynos3250 specific registers */
43 + #define EXYNOS_TMU_TRIMINFO_CON1 0x10
44 +
42 45 /* Exynos4210 specific registers */
43 46 #define EXYNOS4210_TMU_REG_THRESHOLD_TEMP 0x44
44 47 #define EXYNOS4210_TMU_REG_TRIG_LEVEL0 0x50
45 - #define EXYNOS4210_TMU_REG_TRIG_LEVEL1 0x54
46 - #define EXYNOS4210_TMU_REG_TRIG_LEVEL2 0x58
47 - #define EXYNOS4210_TMU_REG_TRIG_LEVEL3 0x5C
48 - #define EXYNOS4210_TMU_REG_PAST_TEMP0 0x60
49 - #define EXYNOS4210_TMU_REG_PAST_TEMP1 0x64
50 - #define EXYNOS4210_TMU_REG_PAST_TEMP2 0x68
51 - #define EXYNOS4210_TMU_REG_PAST_TEMP3 0x6C
52 48
53 - #define EXYNOS4210_TMU_TRIG_LEVEL0_MASK 0x1
54 - #define EXYNOS4210_TMU_TRIG_LEVEL1_MASK 0x10
55 - #define EXYNOS4210_TMU_TRIG_LEVEL2_MASK 0x100
56 - #define EXYNOS4210_TMU_TRIG_LEVEL3_MASK 0x1000
57 - #define EXYNOS4210_TMU_TRIG_LEVEL_MASK 0x1111
58 - #define EXYNOS4210_TMU_INTCLEAR_VAL 0x1111
59 -
60 - /* Exynos5250 and Exynos4412 specific registers */
61 - #define EXYNOS_TMU_TRIMINFO_CON 0x14
49 + /* Exynos5250, Exynos4412, Exynos3250 specific registers */
50 + #define EXYNOS_TMU_TRIMINFO_CON2 0x14
62 51 #define EXYNOS_THD_TEMP_RISE 0x50
63 52 #define EXYNOS_THD_TEMP_FALL 0x54
64 53 #define EXYNOS_EMUL_CON 0x80
65 54
66 - #define EXYNOS_TRIMINFO_RELOAD_SHIFT 1
55 + #define EXYNOS_TRIMINFO_RELOAD_ENABLE 1
67 56 #define EXYNOS_TRIMINFO_25_SHIFT 0
68 57 #define EXYNOS_TRIMINFO_85_SHIFT 8
69 - #define EXYNOS_TMU_RISE_INT_MASK 0x111
70 - #define EXYNOS_TMU_RISE_INT_SHIFT 0
71 - #define EXYNOS_TMU_FALL_INT_MASK 0x111
72 - #define EXYNOS_TMU_CLEAR_RISE_INT 0x111
73 - #define EXYNOS_TMU_CLEAR_FALL_INT (0x111 << 12)
74 - #define EXYNOS_TMU_CLEAR_FALL_INT_SHIFT 12
75 - #define EXYNOS5420_TMU_CLEAR_FALL_INT_SHIFT 16
76 - #define EXYNOS5440_TMU_CLEAR_FALL_INT_SHIFT 4
77 58 #define EXYNOS_TMU_TRIP_MODE_SHIFT 13
78 59 #define EXYNOS_TMU_TRIP_MODE_MASK 0x7
79 60 #define EXYNOS_TMU_THERM_TRIP_EN_SHIFT 12
80 - #define EXYNOS_TMU_CALIB_MODE_SHIFT 4
81 - #define EXYNOS_TMU_CALIB_MODE_MASK 0x3
82 61
83 62 #define EXYNOS_TMU_INTEN_RISE0_SHIFT 0
84 63 #define EXYNOS_TMU_INTEN_RISE1_SHIFT 4
85 64 #define EXYNOS_TMU_INTEN_RISE2_SHIFT 8
86 65 #define EXYNOS_TMU_INTEN_RISE3_SHIFT 12
87 66 #define EXYNOS_TMU_INTEN_FALL0_SHIFT 16
88 - #define EXYNOS_TMU_INTEN_FALL1_SHIFT 20
89 - #define EXYNOS_TMU_INTEN_FALL2_SHIFT 24
90 - #define EXYNOS_TMU_INTEN_FALL3_SHIFT 28
91 67
92 68 #define EXYNOS_EMUL_TIME 0x57F0
93 69 #define EXYNOS_EMUL_TIME_MASK 0xffff
···
75 99 #define EXYNOS_MAX_TRIGGER_PER_REG 4
76 100
77 101 /* Exynos5260 specific */
78 - #define EXYNOS_TMU_REG_CONTROL1 0x24
79 102 #define EXYNOS5260_TMU_REG_INTEN 0xC0
80 103 #define EXYNOS5260_TMU_REG_INTSTAT 0xC4
81 104 #define EXYNOS5260_TMU_REG_INTCLEAR 0xC8
82 - #define EXYNOS5260_TMU_CLEAR_RISE_INT 0x1111
83 - #define EXYNOS5260_TMU_CLEAR_FALL_INT (0x1111 << 16)
84 - #define EXYNOS5260_TMU_RISE_INT_MASK 0x1111
85 - #define EXYNOS5260_TMU_FALL_INT_MASK 0x1111
86 105 #define EXYNOS5260_EMUL_CON 0x100
87 106
88 107 /* Exynos4412 specific */
···
93 122 #define EXYNOS5440_TMU_S0_7_TH0 0x110
94 123 #define EXYNOS5440_TMU_S0_7_TH1 0x130
95 124 #define EXYNOS5440_TMU_S0_7_TH2 0x150
96 - #define EXYNOS5440_TMU_S0_7_EVTEN 0x1F0
97 125 #define EXYNOS5440_TMU_S0_7_IRQEN 0x210
98 126 #define EXYNOS5440_TMU_S0_7_IRQ 0x230
99 127 /* exynos5440 common registers */
100 128 #define EXYNOS5440_TMU_IRQ_STATUS 0x000
101 129 #define EXYNOS5440_TMU_PMIN 0x004
102 - #define EXYNOS5440_TMU_TEMP 0x008
103 130
104 - #define EXYNOS5440_TMU_RISE_INT_MASK 0xf
105 - #define EXYNOS5440_TMU_RISE_INT_SHIFT 0
106 - #define EXYNOS5440_TMU_FALL_INT_MASK 0xf
107 131 #define EXYNOS5440_TMU_INTEN_RISE0_SHIFT 0
108 132 #define EXYNOS5440_TMU_INTEN_RISE1_SHIFT 1
109 133 #define EXYNOS5440_TMU_INTEN_RISE2_SHIFT 2
110 134 #define EXYNOS5440_TMU_INTEN_RISE3_SHIFT 3
111 135 #define EXYNOS5440_TMU_INTEN_FALL0_SHIFT 4
112 - #define EXYNOS5440_TMU_INTEN_FALL1_SHIFT 5
113 - #define EXYNOS5440_TMU_INTEN_FALL2_SHIFT 6
114 - #define EXYNOS5440_TMU_INTEN_FALL3_SHIFT 7
115 - #define EXYNOS5440_TMU_TH_RISE0_SHIFT 0
116 - #define EXYNOS5440_TMU_TH_RISE1_SHIFT 8
117 - #define EXYNOS5440_TMU_TH_RISE2_SHIFT 16
118 - #define EXYNOS5440_TMU_TH_RISE3_SHIFT 24
119 136 #define EXYNOS5440_TMU_TH_RISE4_SHIFT 24
120 137 #define EXYNOS5440_EFUSE_SWAP_OFFSET 8
121 138
+1 -2
drivers/thermal/thermal_core.c
···
1575 1575
1576 1576 thermal_zone_device_update(tz);
1577 1577
1578 - if (!result)
1579 - return tz;
1578 + return tz;
1580 1579
1581 1580 unregister:
1582 1581 release_idr(&thermal_tz_idr, &thermal_idr_lock, tz->id);
+9 -4
drivers/tty/n_tty.c
···
2413 2413
2414 2414 poll_wait(file, &tty->read_wait, wait);
2415 2415 poll_wait(file, &tty->write_wait, wait);
2416 - if (input_available_p(tty, 1))
2417 - mask |= POLLIN | POLLRDNORM;
2418 - if (tty->packet && tty->link->ctrl_status)
2419 - mask |= POLLPRI | POLLIN | POLLRDNORM;
2420 2416 if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
2421 2417 mask |= POLLHUP;
2418 + if (input_available_p(tty, 1))
2419 + mask |= POLLIN | POLLRDNORM;
2420 + else if (mask & POLLHUP) {
2421 + tty_flush_to_ldisc(tty);
2422 + if (input_available_p(tty, 1))
2423 + mask |= POLLIN | POLLRDNORM;
2424 + }
2425 + if (tty->packet && tty->link->ctrl_status)
2426 + mask |= POLLPRI | POLLIN | POLLRDNORM;
2422 2427 if (tty_hung_up_p(file))
2423 2428 mask |= POLLHUP;
2424 2429 if (!(mask & (POLLHUP | POLLIN | POLLRDNORM))) {
+1 -1
drivers/tty/serial/8250/8250_mtk.c
···
81 81 /* Set to highest baudrate supported */
82 82 if (baud >= 1152000)
83 83 baud = 921600;
84 - quot = DIV_ROUND_CLOSEST(port->uartclk, 256 * baud);
84 + quot = (port->uartclk / (256 * baud)) + 1;
85 85 }
86 86
87 87 /*
+1 -1
drivers/tty/serial/of_serial.c
···
158 158 if (of_find_property(ofdev->dev.of_node, "used-by-rtas", NULL))
159 159 return -EBUSY;
160 160
161 - info = kmalloc(sizeof(*info), GFP_KERNEL);
161 + info = kzalloc(sizeof(*info), GFP_KERNEL);
162 162 if (info == NULL)
163 163 return -ENOMEM;
164 164
+1 -1
drivers/tty/serial/serial_core.c
···
363 363 * The spd_hi, spd_vhi, spd_shi, spd_warp kludge...
364 364 * Die! Die! Die!
365 365 */
366 - if (baud == 38400)
366 + if (try == 0 && baud == 38400)
367 367 baud = altbaud;
368 368
369 369 /*
+12 -3
drivers/tty/tty_io.c
···
1709 1709 int pty_master, tty_closing, o_tty_closing, do_sleep;
1710 1710 int idx;
1711 1711 char buf[64];
1712 + long timeout = 0;
1713 + int once = 1;
1712 1714
1713 1715 if (tty_paranoia_check(tty, inode, __func__))
1714 1716 return 0;
···
1791 1789 if (!do_sleep)
1792 1790 break;
1793 1791
1794 - printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
1795 - __func__, tty_name(tty, buf));
1792 + if (once) {
1793 + once = 0;
1794 + printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
1795 + __func__, tty_name(tty, buf));
1796 + }
1796 1797 tty_unlock_pair(tty, o_tty);
1797 1798 mutex_unlock(&tty_mutex);
1798 - schedule();
1799 + schedule_timeout_killable(timeout);
1800 + if (timeout < 120 * HZ)
1801 + timeout = 2 * timeout + 1;
1802 + else
1803 + timeout = MAX_SCHEDULE_TIMEOUT;
1799 1804 }
1800 1805
1801 1806 /*
+7
drivers/tty/vt/consolemap.c
···
539 539
540 540 /* Save original vc_unipagdir_loc in case we allocate a new one */
541 541 p = *vc->vc_uni_pagedir_loc;
542 +
543 + if (!p) {
544 + err = -EINVAL;
545 +
546 + goto out_unlock;
547 + }
542 548
543 549 if (p->refcount > 1) {
544 550 int j, k;
···
629 623 set_inverse_transl(vc, p, i); /* Update inverse translations */
630 624 set_inverse_trans_unicode(vc, p);
631 625
626 + out_unlock:
632 627 console_unlock();
633 628 return err;
634 629 }
-1
drivers/usb/chipidea/core.c
···
742 742 ci_role_destroy(ci);
743 743 ci_hdrc_enter_lpm(ci, true);
744 744 usb_phy_shutdown(ci->transceiver);
745 - kfree(ci->hw_bank.regmap);
746 745
747 746 return 0;
748 747 }
+21 -4
drivers/usb/class/cdc-acm.c
···
60 60
61 61 static DEFINE_MUTEX(acm_table_lock);
62 62
63 + static void acm_tty_set_termios(struct tty_struct *tty,
64 + struct ktermios *termios_old);
65 +
63 66 /*
64 67 * acm_table accessors
65 68 */
···
148 145 /* devices aren't required to support these requests.
149 146 * the cdc acm descriptor tells whether they do...
150 147 */
151 - #define acm_set_control(acm, control) \
152 - acm_ctrl_msg(acm, USB_CDC_REQ_SET_CONTROL_LINE_STATE, control, NULL, 0)
148 + static inline int acm_set_control(struct acm *acm, int control)
149 + {
150 + if (acm->quirks & QUIRK_CONTROL_LINE_STATE)
151 + return -EOPNOTSUPP;
152 +
153 + return acm_ctrl_msg(acm, USB_CDC_REQ_SET_CONTROL_LINE_STATE,
154 + control, NULL, 0);
155 + }
156 +
153 157 #define acm_set_line(acm, line) \
154 158 acm_ctrl_msg(acm, USB_CDC_REQ_SET_LINE_CODING, 0, line, sizeof *(line))
155 159 #define acm_send_break(acm, ms) \
···
563 553 "%s - usb_submit_urb(ctrl irq) failed\n", __func__);
564 554 goto error_submit_urb;
565 555 }
556 +
557 + acm_tty_set_termios(tty, NULL);
566 558
567 559 /*
568 560 * Unthrottle device in case the TTY was closed while throttled.
···
992 980 /* FIXME: Needs to clear unsupported bits in the termios */
993 981 acm->clocal = ((termios->c_cflag & CLOCAL) != 0);
994 982
995 - if (!newline.dwDTERate) {
983 + if (C_BAUD(tty) == B0) {
996 984 newline.dwDTERate = acm->line.dwDTERate;
997 985 newctrl &= ~ACM_CTRL_DTR;
998 - } else
986 + } else if (termios_old && (termios_old->c_cflag & CBAUD) == B0) {
999 987 newctrl |= ACM_CTRL_DTR;
988 + }
1000 989
1001 990 if (newctrl != acm->ctrlout)
1002 991 acm_set_control(acm, acm->ctrlout = newctrl);
···
1327 1314 tty_port_init(&acm->port);
1328 1315 acm->port.ops = &acm_port_ops;
1329 1316 init_usb_anchor(&acm->delayed);
1317 + acm->quirks = quirks;
1330 1318
1331 1319 buf = usb_alloc_coherent(usb_dev, ctrlsize, GFP_KERNEL, &acm->ctrl_dma);
1332 1320 if (!buf) {
···
1695 1681 { USB_DEVICE(0x0572, 0x1328), /* Shiro / Aztech USB MODEM UM-3100 */
1696 1682 .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
1697 1683 },
1684 + { USB_DEVICE(0x20df, 0x0001), /* Simtec Electronics Entropy Key */
1685 + .driver_info = QUIRK_CONTROL_LINE_STATE, },
1686 + { USB_DEVICE(0x2184, 0x001c) }, /* GW Instek AFG-2225 */
1698 1687 { USB_DEVICE(0x22b8, 0x6425), /* Motorola MOTOMAGX phones */
1699 1688 },
1700 1689 /* Motorola H24 HSPA module: */
+2
drivers/usb/class/cdc-acm.h
···
121 121 unsigned int throttle_req:1; /* throttle requested */
122 122 u8 bInterval;
123 123 struct usb_anchor delayed; /* writes queued for a device about to be woken */
124 + unsigned long quirks;
124 125 };
125 126
126 127 #define CDC_DATA_INTERFACE_TYPE 0x0a
···
133 132 #define NOT_A_MODEM BIT(3)
134 133 #define NO_DATA_INTERFACE BIT(4)
135 134 #define IGNORE_DEVICE BIT(5)
135 + #define QUIRK_CONTROL_LINE_STATE BIT(6)
+2
drivers/usb/core/hcd.c
···
2060 2060 return -EINVAL;
2061 2061 if (dev->speed != USB_SPEED_SUPER)
2062 2062 return -EINVAL;
2063 + if (dev->state < USB_STATE_CONFIGURED)
2064 + return -ENODEV;
2063 2065
2064 2066 for (i = 0; i < num_eps; i++) {
2065 2067 /* Streams only apply to bulk endpoints. */
+5 -5
drivers/usb/core/hub.c
···
4468 4468 if (retval)
4469 4469 goto fail;
4470 4470
4471 - if (hcd->usb_phy && !hdev->parent)
4472 - usb_phy_notify_connect(hcd->usb_phy, udev->speed);
4473 -
4474 4471 /*
4475 4472 * Some superspeed devices have finished the link training process
4476 4473 * and attached to a superspeed hub port, but the device descriptor
···
4624 4627
4625 4628 /* Disconnect any existing devices under this port */
4626 4629 if (udev) {
4627 - if (hcd->usb_phy && !hdev->parent &&
4628 - !(portstatus & USB_PORT_STAT_CONNECTION))
4630 + if (hcd->usb_phy && !hdev->parent)
4629 4631 usb_phy_notify_disconnect(hcd->usb_phy, udev->speed);
4630 4632 usb_disconnect(&port_dev->child);
4631 4633 }
···
4779 4783 port_dev->child = NULL;
4780 4784 spin_unlock_irq(&device_state_lock);
4781 4785 mutex_unlock(&usb_port_peer_mutex);
4786 + } else {
4787 + if (hcd->usb_phy && !hdev->parent)
4788 + usb_phy_notify_connect(hcd->usb_phy,
4789 + udev->speed);
4782 4790 }
4783 4791 }
4784 4792
+6
drivers/usb/core/quirks.c
···
97 97 { USB_DEVICE(0x04f3, 0x0089), .driver_info =
98 98 USB_QUIRK_DEVICE_QUALIFIER },
99 99
100 + { USB_DEVICE(0x04f3, 0x009b), .driver_info =
101 + USB_QUIRK_DEVICE_QUALIFIER },
102 +
103 + { USB_DEVICE(0x04f3, 0x016f), .driver_info =
104 + USB_QUIRK_DEVICE_QUALIFIER },
105 +
100 106 /* Roland SC-8820 */
101 107 { USB_DEVICE(0x0582, 0x0007), .driver_info = USB_QUIRK_RESET_RESUME },
102 108
+1 -1
drivers/usb/dwc2/core.h
···
619 619 unsigned port_suspend_change:1;
620 620 unsigned port_over_current_change:1;
621 621 unsigned port_l1_change:1;
622 - unsigned reserved:26;
622 + unsigned reserved:25;
623 623 } b;
624 624 } flags;
625 625
+8 -8
drivers/usb/dwc2/gadget.c
···
2327 2327
2328 2328 u32 usb_status = readl(hsotg->regs + GOTGCTL);
2329 2329
2330 - dev_info(hsotg->dev, "%s: USBRst\n", __func__);
2330 + dev_dbg(hsotg->dev, "%s: USBRst\n", __func__);
2331 2331 dev_dbg(hsotg->dev, "GNPTXSTS=%08x\n",
2332 2332 readl(hsotg->regs + GNPTXSTS));
2333 2333
···
2561 2561 hs_ep->fifo_size = val;
2562 2562 break;
2563 2563 }
2564 - if (i == 8)
2565 - return -ENOMEM;
2564 + if (i == 8) {
2565 + ret = -ENOMEM;
2566 + goto error;
2567 + }
2566 2568 }
2567 2569
2568 2570 /* for non control endpoints, set PID to D0 */
···
2581 2579 /* enable the endpoint interrupt */
2582 2580 s3c_hsotg_ctrl_epint(hsotg, index, dir_in, 1);
2583 2581
2582 + error:
2584 2583 spin_unlock_irqrestore(&hsotg->lock, flags);
2585 2584 return ret;
2586 2585 }
···
2937 2934
2938 2935 spin_lock_irqsave(&hsotg->lock, flags);
2939 2936
2940 - if (!driver)
2941 - hsotg->driver = NULL;
2942 -
2937 + hsotg->driver = NULL;
2943 2938 hsotg->gadget.speed = USB_SPEED_UNKNOWN;
2944 2939
2945 2940 spin_unlock_irqrestore(&hsotg->lock, flags);
···
3568 3567 s3c_hsotg_initep(hsotg, &hsotg->eps[epnum], epnum);
3569 3568
3570 3569 /* disable power and clock */
3570 + s3c_hsotg_phy_disable(hsotg);
3571 3571
3572 3572 ret = regulator_bulk_disable(ARRAY_SIZE(hsotg->supplies),
3573 3573 hsotg->supplies);
···
3576 3574 dev_err(hsotg->dev, "failed to disable supplies: %d\n", ret);
3577 3575 goto err_ep_mem;
3578 3576 }
3579 -
3580 - s3c_hsotg_phy_disable(hsotg);
3581 3577
3582 3578 ret = usb_add_gadget_udc(&pdev->dev, &hsotg->gadget);
3583 3579 if (ret)
+2 -13
drivers/usb/dwc3/dwc3-omap.c
···
597 597 {
598 598 struct dwc3_omap *omap = dev_get_drvdata(dev);
599 599
600 - dwc3_omap_write_irqmisc_set(omap, 0x00);
600 + dwc3_omap_disable_irqs(omap);
601 601
602 602 return 0;
603 603 }
···
605 605 static void dwc3_omap_complete(struct device *dev)
606 606 {
607 607 struct dwc3_omap *omap = dev_get_drvdata(dev);
608 - u32 reg;
609 608
610 - reg = (USBOTGSS_IRQMISC_OEVT |
611 - USBOTGSS_IRQMISC_DRVVBUS_RISE |
612 - USBOTGSS_IRQMISC_CHRGVBUS_RISE |
613 - USBOTGSS_IRQMISC_DISCHRGVBUS_RISE |
614 - USBOTGSS_IRQMISC_IDPULLUP_RISE |
615 - USBOTGSS_IRQMISC_DRVVBUS_FALL |
616 - USBOTGSS_IRQMISC_CHRGVBUS_FALL |
617 - USBOTGSS_IRQMISC_DISCHRGVBUS_FALL |
618 - USBOTGSS_IRQMISC_IDPULLUP_FALL);
619 -
620 - dwc3_omap_write_irqmisc_set(omap, reg);
609 + dwc3_omap_enable_irqs(omap);
621 610 }
622 611
623 612 static int dwc3_omap_suspend(struct device *dev)
+2
drivers/usb/dwc3/dwc3-pci.c
···
30 30 #define PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3 0xabcd
31 31 #define PCI_DEVICE_ID_INTEL_BYT 0x0f37
32 32 #define PCI_DEVICE_ID_INTEL_MRFLD 0x119e
33 + #define PCI_DEVICE_ID_INTEL_BSW 0x22B7
33 34
34 35 struct dwc3_pci {
35 36 struct device *dev;
···
182 181 PCI_DEVICE(PCI_VENDOR_ID_SYNOPSYS,
183 182 PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3),
184 183 },
184 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BSW), },
185 185 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BYT), },
186 186 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MRFLD), },
187 187 { } /* Terminating Entry */
+36 -12
drivers/usb/dwc3/ep0.c
···
256 256
257 257 /* stall is always issued on EP0 */
258 258 dep = dwc->eps[0];
259 - __dwc3_gadget_ep_set_halt(dep, 1);
259 + __dwc3_gadget_ep_set_halt(dep, 1, false);
260 260 dep->flags = DWC3_EP_ENABLED;
261 261 dwc->delayed_status = false;
262 262
···
271 271 dwc3_ep0_out_start(dwc);
272 272 }
273 273
274 - int dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value)
274 + int __dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value)
275 275 {
276 276 struct dwc3_ep *dep = to_dwc3_ep(ep);
277 277 struct dwc3 *dwc = dep->dwc;
···
279 279 dwc3_ep0_stall_and_restart(dwc);
280 280
281 281 return 0;
282 + }
283 +
284 + int dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value)
285 + {
286 + struct dwc3_ep *dep = to_dwc3_ep(ep);
287 + struct dwc3 *dwc = dep->dwc;
288 + unsigned long flags;
289 + int ret;
290 +
291 + spin_lock_irqsave(&dwc->lock, flags);
292 + ret = __dwc3_gadget_ep0_set_halt(ep, value);
293 + spin_unlock_irqrestore(&dwc->lock, flags);
294 +
295 + return ret;
282 296 }
283 297
284 298 void dwc3_ep0_out_start(struct dwc3 *dwc)
···
480 466 return -EINVAL;
481 467 if (set == 0 && (dep->flags & DWC3_EP_WEDGE))
482 468 break;
483 - ret = __dwc3_gadget_ep_set_halt(dep, set);
469 + ret = __dwc3_gadget_ep_set_halt(dep, set, true);
484 470 if (ret)
485 471 return -EINVAL;
486 472 break;
···
789 775
790 776 dwc->ep0_next_event = DWC3_EP0_NRDY_STATUS;
791 777
792 - r = next_request(&ep0->request_list);
793 - ur = &r->request;
794 -
795 778 trb = dwc->ep0_trb;
796 779
797 780 status = DWC3_TRB_SIZE_TRBSTS(trb->size);
···
800 789
801 790 return;
802 791 }
792 +
793 + r = next_request(&ep0->request_list);
794 + if (!r)
795 + return;
796 +
797 + ur = &r->request;
803 798
804 799 length = trb->size & DWC3_TRB_SIZE_MASK;
805 800
···
828 811
829 812 dwc3_ep0_stall_and_restart(dwc);
830 813 } else {
831 - /*
832 - * handle the case where we have to send a zero packet. This
833 - * seems to be case when req.length > maxpacket. Could it be?
834 - */
835 - if (r)
836 - dwc3_gadget_giveback(ep0, r, 0);
814 + dwc3_gadget_giveback(ep0, r, 0);
815 +
816 + if (IS_ALIGNED(ur->length, ep0->endpoint.maxpacket) &&
817 + ur->length && ur->zero) {
818 + int ret;
819 +
820 + dwc->ep0_next_event = DWC3_EP0_COMPLETE;
821 +
822 + ret = dwc3_ep0_start_trans(dwc, epnum,
823 + dwc->ctrl_req_addr, 0,
824 + DWC3_TRBCTL_CONTROL_DATA);
825 + WARN_ON(ret < 0);
826 + }
837 827 }
838 828 }
839 829
+23 -16
drivers/usb/dwc3/gadget.c
··· 525 525 if (!usb_endpoint_xfer_isoc(desc)) 526 526 return 0; 527 527 528 - memset(&trb_link, 0, sizeof(trb_link)); 529 - 530 528 /* Link TRB for ISOC. The HWO bit is never reset */ 531 529 trb_st_hw = &dep->trb_pool[0]; 532 530 533 531 trb_link = &dep->trb_pool[DWC3_TRB_NUM - 1]; 532 + memset(trb_link, 0, sizeof(*trb_link)); 534 533 535 534 trb_link->bpl = lower_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw)); 536 535 trb_link->bph = upper_32_bits(dwc3_trb_dma_offset(dep, trb_st_hw)); ··· 580 581 581 582 /* make sure HW endpoint isn't stalled */ 582 583 if (dep->flags & DWC3_EP_STALL) 583 - __dwc3_gadget_ep_set_halt(dep, 0); 584 + __dwc3_gadget_ep_set_halt(dep, 0, false); 584 585 585 586 reg = dwc3_readl(dwc->regs, DWC3_DALEPENA); 586 587 reg &= ~DWC3_DALEPENA_EP(dep->number); ··· 1201 1202 return ret; 1202 1203 } 1203 1204 1204 - int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value) 1205 + int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol) 1205 1206 { 1206 1207 struct dwc3_gadget_ep_cmd_params params; 1207 1208 struct dwc3 *dwc = dep->dwc; 1208 1209 int ret; 1209 1210 1211 + if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) { 1212 + dev_err(dwc->dev, "%s is of Isochronous type\n", dep->name); 1213 + return -EINVAL; 1214 + } 1215 + 1210 1216 memset(&params, 0x00, sizeof(params)); 1211 1217 1212 1218 if (value) { 1219 + if (!protocol && ((dep->direction && dep->flags & DWC3_EP_BUSY) || 1220 + (!list_empty(&dep->req_queued) || 1221 + !list_empty(&dep->request_list)))) { 1222 + dev_dbg(dwc->dev, "%s: pending request, cannot halt\n", 1223 + dep->name); 1224 + return -EAGAIN; 1225 + } 1226 + 1213 1227 ret = dwc3_send_gadget_ep_cmd(dwc, dep->number, 1214 1228 DWC3_DEPCMD_SETSTALL, &params); 1215 1229 if (ret) ··· 1253 1241 int ret; 1254 1242 1255 1243 spin_lock_irqsave(&dwc->lock, flags); 1256 - 1257 - if (usb_endpoint_xfer_isoc(dep->endpoint.desc)) { 1258 - dev_err(dwc->dev, "%s is of Isochronous type\n", dep->name); 1259 - ret = -EINVAL; 
1260 - goto out; 1261 - } 1262 - 1263 - ret = __dwc3_gadget_ep_set_halt(dep, value); 1264 - out: 1244 + ret = __dwc3_gadget_ep_set_halt(dep, value, false); 1265 1245 spin_unlock_irqrestore(&dwc->lock, flags); 1266 1246 1267 1247 return ret; ··· 1264 1260 struct dwc3_ep *dep = to_dwc3_ep(ep); 1265 1261 struct dwc3 *dwc = dep->dwc; 1266 1262 unsigned long flags; 1263 + int ret; 1267 1264 1268 1265 spin_lock_irqsave(&dwc->lock, flags); 1269 1266 dep->flags |= DWC3_EP_WEDGE; 1270 - spin_unlock_irqrestore(&dwc->lock, flags); 1271 1267 1272 1268 if (dep->number == 0 || dep->number == 1) 1273 - return dwc3_gadget_ep0_set_halt(ep, 1); 1269 + ret = __dwc3_gadget_ep0_set_halt(ep, 1); 1274 1270 else 1275 - return dwc3_gadget_ep_set_halt(ep, 1); 1271 + ret = __dwc3_gadget_ep_set_halt(dep, 1, false); 1272 + spin_unlock_irqrestore(&dwc->lock, flags); 1273 + 1274 + return ret; 1276 1275 } 1277 1276 1278 1277 /* -------------------------------------------------------------------------- */
+2 -1
drivers/usb/dwc3/gadget.h
···
82 82 void dwc3_ep0_interrupt(struct dwc3 *dwc,
83 83 const struct dwc3_event_depevt *event);
84 84 void dwc3_ep0_out_start(struct dwc3 *dwc);
85 + int __dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value);
85 86 int dwc3_gadget_ep0_set_halt(struct usb_ep *ep, int value);
86 87 int dwc3_gadget_ep0_queue(struct usb_ep *ep, struct usb_request *request,
87 88 gfp_t gfp_flags);
88 - int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value);
89 + int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol);
89 90
90 91 /**
91 92 * dwc3_gadget_ep_get_transfer_index - Gets transfer index from HW
+38 -15
drivers/usb/dwc3/trace.h
···
73 73 TP_PROTO(struct usb_ctrlrequest *ctrl),
74 74 TP_ARGS(ctrl),
75 75 TP_STRUCT__entry(
76 - __field(struct usb_ctrlrequest *, ctrl)
76 + __field(__u8, bRequestType)
77 + __field(__u8, bRequest)
78 + __field(__le16, wValue)
79 + __field(__le16, wIndex)
80 + __field(__le16, wLength)
77 81 ),
78 82 TP_fast_assign(
79 - __entry->ctrl = ctrl;
83 + __entry->bRequestType = ctrl->bRequestType;
84 + __entry->bRequest = ctrl->bRequest;
85 + __entry->wValue = ctrl->wValue;
86 + __entry->wIndex = ctrl->wIndex;
87 + __entry->wLength = ctrl->wLength;
80 88 ),
81 89 TP_printk("bRequestType %02x bRequest %02x wValue %04x wIndex %04x wLength %d",
82 - __entry->ctrl->bRequestType, __entry->ctrl->bRequest,
83 - le16_to_cpu(__entry->ctrl->wValue), le16_to_cpu(__entry->ctrl->wIndex),
84 - le16_to_cpu(__entry->ctrl->wLength)
90 + __entry->bRequestType, __entry->bRequest,
91 + le16_to_cpu(__entry->wValue), le16_to_cpu(__entry->wIndex),
92 + le16_to_cpu(__entry->wLength)
85 93 )
86 94 );
···
102 94 TP_PROTO(struct dwc3_request *req),
103 95 TP_ARGS(req),
104 96 TP_STRUCT__entry(
97 + __dynamic_array(char, name, DWC3_MSG_MAX)
105 98 __field(struct dwc3_request *, req)
99 + __field(unsigned, actual)
100 + __field(unsigned, length)
101 + __field(int, status)
106 102 ),
107 103 TP_fast_assign(
104 + snprintf(__get_str(name), DWC3_MSG_MAX, "%s", req->dep->name);
108 105 __entry->req = req;
106 + __entry->actual = req->request.actual;
107 + __entry->length = req->request.length;
108 + __entry->status = req->request.status;
109 109 ),
110 110 TP_printk("%s: req %p length %u/%u ==> %d",
111 - __entry->req->dep->name, __entry->req,
112 - __entry->req->request.actual, __entry->req->request.length,
113 - __entry->req->request.status
111 + __get_str(name), __entry->req, __entry->actual, __entry->length,
112 + __entry->status
114 113 )
115 114 );
···
173 158 struct dwc3_gadget_ep_cmd_params *params),
174 159 TP_ARGS(dep, cmd, params),
175 160 TP_STRUCT__entry(
176 - __field(struct dwc3_ep *, dep)
161 + __dynamic_array(char, name, DWC3_MSG_MAX)
177 162 __field(unsigned int, cmd)
178 163 __field(struct dwc3_gadget_ep_cmd_params *, params)
179 164 ),
180 165 TP_fast_assign(
181 - __entry->dep = dep;
166 + snprintf(__get_str(name), DWC3_MSG_MAX, "%s", dep->name);
182 167 __entry->cmd = cmd;
183 168 __entry->params = params;
184 169 ),
185 170 TP_printk("%s: cmd '%s' [%d] params %08x %08x %08x\n",
186 - __entry->dep->name, dwc3_gadget_ep_cmd_string(__entry->cmd),
171 + __get_str(name), dwc3_gadget_ep_cmd_string(__entry->cmd),
187 172 __entry->cmd, __entry->params->param0,
188 173 __entry->params->param1, __entry->params->param2
189 174 )
···
199 184 TP_PROTO(struct dwc3_ep *dep, struct dwc3_trb *trb),
200 185 TP_ARGS(dep, trb),
201 186 TP_STRUCT__entry(
202 - __field(struct dwc3_ep *, dep)
187 + __dynamic_array(char, name, DWC3_MSG_MAX)
203 188 __field(struct dwc3_trb *, trb)
189 + __field(u32, bpl)
190 + __field(u32, bph)
191 + __field(u32, size)
192 + __field(u32, ctrl)
204 193 ),
205 194 TP_fast_assign(
206 - __entry->dep = dep;
195 + snprintf(__get_str(name), DWC3_MSG_MAX, "%s", dep->name);
207 196 __entry->trb = trb;
197 + __entry->bpl = trb->bpl;
198 + __entry->bph = trb->bph;
199 + __entry->size = trb->size;
200 + __entry->ctrl = trb->ctrl;
208 201 ),
209 202 TP_printk("%s: trb %p bph %08x bpl %08x size %08x ctrl %08x\n",
210 - __entry->dep->name, __entry->trb, __entry->trb->bph,
211 - __entry->trb->bpl, __entry->trb->size, __entry->trb->ctrl
203 + __get_str(name), __entry->trb, __entry->bph, __entry->bpl,
204 + __entry->size, __entry->ctrl
212 205 )
213 206 );
+1 -1
drivers/usb/gadget/composite.c
···
560 560 usb_ext->bLength = USB_DT_USB_EXT_CAP_SIZE;
561 561 usb_ext->bDescriptorType = USB_DT_DEVICE_CAPABILITY;
562 562 usb_ext->bDevCapabilityType = USB_CAP_TYPE_EXT;
563 - usb_ext->bmAttributes = cpu_to_le32(USB_LPM_SUPPORT);
563 + usb_ext->bmAttributes = cpu_to_le32(USB_LPM_SUPPORT | USB_BESL_SUPPORT);
564 564
565 565 /*
566 566 * The Superspeed USB Capability descriptor shall be implemented by all
+4 -4
drivers/usb/gadget/function/f_acm.c
···
433 433 dev_vdbg(&cdev->gadget->dev,
434 434 "reset acm control interface %d\n", intf);
435 435 usb_ep_disable(acm->notify);
436 - } else {
437 - dev_vdbg(&cdev->gadget->dev,
438 - "init acm ctrl interface %d\n", intf);
436 + }
437 +
438 + if (!acm->notify->desc)
439 439 if (config_ep_by_speed(cdev->gadget, f, acm->notify))
440 440 return -EINVAL;
441 - }
441 +
442 442 usb_ep_enable(acm->notify);
443 443 acm->notify->driver_data = acm;
444 444
-1
drivers/usb/gadget/function/f_eem.c
···
325 325 return 0;
326 326
327 327 fail:
328 - usb_free_all_descriptors(f);
329 328 if (eem->port.out_ep)
330 329 eem->port.out_ep->driver_data = NULL;
331 330 if (eem->port.in_ep)
+34 -8
drivers/usb/gadget/function/f_fs.c
···
647 647 if (io_data->read && ret > 0) {
648 648 int i;
649 649 size_t pos = 0;
650 +
651 + /*
652 + * Since req->length may be bigger than io_data->len (after
653 + * being rounded up to maxpacketsize), we may end up with more
654 + * data then user space has space for.
655 + */
656 + ret = min_t(int, ret, io_data->len);
657 +
650 658 use_mm(io_data->mm);
651 659 for (i = 0; i < io_data->nr_segs; i++) {
660 + size_t len = min_t(size_t, ret - pos,
661 + io_data->iovec[i].iov_len);
662 + if (!len)
663 + break;
652 664 if (unlikely(copy_to_user(io_data->iovec[i].iov_base,
653 - &io_data->buf[pos],
654 - io_data->iovec[i].iov_len))) {
665 + &io_data->buf[pos], len))) {
655 666 ret = -EFAULT;
656 667 break;
657 668 }
658 - pos += io_data->iovec[i].iov_len;
669 + pos += len;
659 670 }
660 671 unuse_mm(io_data->mm);
661 672 }
···
698 687 struct ffs_epfile *epfile = file->private_data;
699 688 struct ffs_ep *ep;
700 689 char *data = NULL;
701 - ssize_t ret, data_len;
690 + ssize_t ret, data_len = -EINVAL;
702 691 int halt;
703 692
704 693 /* Are we still active? */
···
798 787 /* Fire the request */
799 788 struct usb_request *req;
800 789
790 + /*
791 + * Sanity Check: even though data_len can't be used
792 + * uninitialized at the time I write this comment, some
793 + * compilers complain about this situation.
794 + * In order to keep the code clean from warnings, data_len is
795 + * being initialized to -EINVAL during its declaration, which
796 + * means we can't rely on compiler anymore to warn no future
797 + * changes won't result in data_len being used uninitialized.
798 + * For such reason, we're adding this redundant sanity check
799 + * here.
800 + */
801 + if (unlikely(data_len == -EINVAL)) {
802 + WARN(1, "%s: data_len == -EINVAL\n", __func__);
803 + ret = -EINVAL;
804 + goto error_lock;
805 + }
806 +
801 807 if (io_data->aio) {
802 808 req = usb_ep_alloc_request(ep->ep, GFP_KERNEL);
803 809 if (unlikely(!req))
804 810 goto error_lock;
805 811
806 812 req->buf = data;
807 - req->length = io_data->len;
813 + req->length = data_len;
808 814
809 815 io_data->buf = data;
810 816 io_data->ep = ep->ep;
···
843 815
844 816 req = ep->req;
845 817 req->buf = data;
846 - req->length = io_data->len;
818 + req->length = data_len;
847 819
848 820 req->context = &done;
849 821 req->complete = ffs_epfile_io_complete;
···
2690 2662
2691 2663 func->conf = c;
2692 2664 func->gadget = c->cdev->gadget;
2693 -
2694 - ffs_data_get(func->ffs);
2695 2665
2696 2666 /*
2697 2667 * in drivers/usb/gadget/configfs.c:configfs_composite_bind()
+3 -2
drivers/usb/gadget/function/f_hid.c
··· 621 621 dev = MKDEV(major, hidg->minor); 622 622 status = cdev_add(&hidg->cdev, dev, 1); 623 623 if (status) 624 - goto fail; 624 + goto fail_free_descs; 625 625 626 626 device_create(hidg_class, NULL, dev, NULL, "%s%d", "hidg", hidg->minor); 627 627 628 628 return 0; 629 629 630 + fail_free_descs: 631 + usb_free_all_descriptors(f); 630 632 fail: 631 633 ERROR(f->config->cdev, "hidg_bind FAILED\n"); 632 634 if (hidg->req != NULL) { ··· 637 635 usb_ep_free_request(hidg->in_ep, hidg->req); 638 636 } 639 637 640 - usb_free_all_descriptors(f); 641 638 return status; 642 639 } 643 640
+43 -46
drivers/usb/gadget/function/f_loopback.c
··· 253 253 254 254 case 0: /* normal completion? */ 255 255 if (ep == loop->out_ep) { 256 - /* loop this OUT packet back IN to the host */ 257 256 req->zero = (req->actual < req->length); 258 257 req->length = req->actual; 259 - status = usb_ep_queue(loop->in_ep, req, GFP_ATOMIC); 260 - if (status == 0) 261 - return; 262 - 263 - /* "should never get here" */ 264 - ERROR(cdev, "can't loop %s to %s: %d\n", 265 - ep->name, loop->in_ep->name, 266 - status); 267 258 } 268 259 269 260 /* queue the buffer for some later OUT packet */ 270 261 req->length = buflen; 271 - status = usb_ep_queue(loop->out_ep, req, GFP_ATOMIC); 262 + status = usb_ep_queue(ep, req, GFP_ATOMIC); 272 263 if (status == 0) 273 264 return; 274 265 ··· 299 308 return alloc_ep_req(ep, len, buflen); 300 309 } 301 310 302 - static int 303 - enable_loopback(struct usb_composite_dev *cdev, struct f_loopback *loop) 311 + static int enable_endpoint(struct usb_composite_dev *cdev, struct f_loopback *loop, 312 + struct usb_ep *ep) 304 313 { 305 - int result = 0; 306 - struct usb_ep *ep; 307 314 struct usb_request *req; 308 315 unsigned i; 316 + int result; 309 317 310 - /* one endpoint writes data back IN to the host */ 311 - ep = loop->in_ep; 312 - result = config_ep_by_speed(cdev->gadget, &(loop->function), ep); 313 - if (result) 314 - return result; 315 - result = usb_ep_enable(ep); 316 - if (result < 0) 317 - return result; 318 - ep->driver_data = loop; 319 - 320 - /* one endpoint just reads OUT packets */ 321 - ep = loop->out_ep; 318 + /* 319 + * one endpoint writes data back IN to the host while another endpoint 320 + * just reads OUT packets 321 + */ 322 322 result = config_ep_by_speed(cdev->gadget, &(loop->function), ep); 323 323 if (result) 324 324 goto fail0; 325 - 326 325 result = usb_ep_enable(ep); 327 - if (result < 0) { 328 - fail0: 329 - ep = loop->in_ep; 330 - usb_ep_disable(ep); 331 - ep->driver_data = NULL; 332 - return result; 333 - } 326 + if (result < 0) 327 + goto fail0; 334 328 
ep->driver_data = loop; 335 329 336 - /* allocate a bunch of read buffers and queue them all at once. 330 + /* 331 + * allocate a bunch of read buffers and queue them all at once. 337 332 * we buffer at most 'qlen' transfers; fewer if any need more 338 333 * than 'buflen' bytes each. 339 334 */ 340 335 for (i = 0; i < qlen && result == 0; i++) { 341 336 req = lb_alloc_ep_req(ep, 0); 342 - if (req) { 343 - req->complete = loopback_complete; 344 - result = usb_ep_queue(ep, req, GFP_ATOMIC); 345 - if (result) 346 - ERROR(cdev, "%s queue req --> %d\n", 347 - ep->name, result); 348 - } else { 349 - usb_ep_disable(ep); 350 - ep->driver_data = NULL; 351 - result = -ENOMEM; 352 - goto fail0; 337 + if (!req) 338 + goto fail1; 339 + 340 + req->complete = loopback_complete; 341 + result = usb_ep_queue(ep, req, GFP_ATOMIC); 342 + if (result) { 343 + ERROR(cdev, "%s queue req --> %d\n", 344 + ep->name, result); 345 + goto fail1; 353 346 } 354 347 } 348 + 349 + return 0; 350 + 351 + fail1: 352 + usb_ep_disable(ep); 353 + 354 + fail0: 355 + return result; 356 + } 357 + 358 + static int 359 + enable_loopback(struct usb_composite_dev *cdev, struct f_loopback *loop) 360 + { 361 + int result = 0; 362 + 363 + result = enable_endpoint(cdev, loop, loop->in_ep); 364 + if (result) 365 + return result; 366 + 367 + result = enable_endpoint(cdev, loop, loop->out_ep); 368 + if (result) 369 + return result; 355 370 356 371 DBG(cdev, "%s enabled\n", loop->function.name); 357 372 return result;
-1
drivers/usb/gadget/function/f_ncm.c
··· 1461 1461 return 0; 1462 1462 1463 1463 fail: 1464 - usb_free_all_descriptors(f); 1465 1464 if (ncm->notify_req) { 1466 1465 kfree(ncm->notify_req->buf); 1467 1466 usb_ep_free_request(ncm->notify, ncm->notify_req);
+4 -5
drivers/usb/gadget/function/f_obex.c
··· 35 35 struct gserial port; 36 36 u8 ctrl_id; 37 37 u8 data_id; 38 + u8 cur_alt; 38 39 u8 port_num; 39 40 u8 can_activate; 40 41 }; ··· 236 235 } else 237 236 goto fail; 238 237 238 + obex->cur_alt = alt; 239 + 239 240 return 0; 240 241 241 242 fail: ··· 248 245 { 249 246 struct f_obex *obex = func_to_obex(f); 250 247 251 - if (intf == obex->ctrl_id) 252 - return 0; 253 - 254 - return obex->port.in->driver_data ? 1 : 0; 248 + return obex->cur_alt; 255 249 } 256 250 257 251 static void obex_disable(struct usb_function *f) ··· 397 397 return 0; 398 398 399 399 fail: 400 - usb_free_all_descriptors(f); 401 400 /* we might as well release our claims on endpoints */ 402 401 if (obex->port.out) 403 402 obex->port.out->driver_data = NULL;
+1 -1
drivers/usb/gadget/function/f_phonet.c
··· 570 570 err_req: 571 571 for (i = 0; i < phonet_rxq_size && fp->out_reqv[i]; i++) 572 572 usb_ep_free_request(fp->out_ep, fp->out_reqv[i]); 573 - err: 574 573 usb_free_all_descriptors(f); 574 + err: 575 575 if (fp->out_ep) 576 576 fp->out_ep->driver_data = NULL; 577 577 if (fp->in_ep)
+6 -3
drivers/usb/gadget/function/f_rndis.c
··· 802 802 803 803 if (rndis->manufacturer && rndis->vendorID && 804 804 rndis_set_param_vendor(rndis->config, rndis->vendorID, 805 - rndis->manufacturer)) 806 - goto fail; 805 + rndis->manufacturer)) { 806 + status = -EINVAL; 807 + goto fail_free_descs; 808 + } 807 809 808 810 /* NOTE: all that is done without knowing or caring about 809 811 * the network link ... which is unavailable to this code ··· 819 817 rndis->notify->name); 820 818 return 0; 821 819 820 + fail_free_descs: 821 + usb_free_all_descriptors(f); 822 822 fail: 823 823 kfree(f->os_desc_table); 824 824 f->os_desc_n = 0; 825 - usb_free_all_descriptors(f); 826 825 827 826 if (rndis->notify_req) { 828 827 kfree(rndis->notify_req->buf);
-1
drivers/usb/gadget/function/f_subset.c
··· 380 380 return 0; 381 381 382 382 fail: 383 - usb_free_all_descriptors(f); 384 383 /* we might as well release our claims on endpoints */ 385 384 if (geth->port.out_ep) 386 385 geth->port.out_ep->driver_data = NULL;
+19 -4
drivers/usb/gadget/function/f_uac2.c
··· 512 512 return 0; 513 513 } 514 514 515 + static void snd_uac2_release(struct device *dev) 516 + { 517 + dev_dbg(dev, "releasing '%s'\n", dev_name(dev)); 518 + } 519 + 515 520 static int alsa_uac2_init(struct audio_dev *agdev) 516 521 { 517 522 struct snd_uac2_chip *uac2 = &agdev->uac2; ··· 528 523 529 524 uac2->pdev.id = 0; 530 525 uac2->pdev.name = uac2_name; 526 + uac2->pdev.dev.release = snd_uac2_release; 531 527 532 528 /* Register snd_uac2 driver */ 533 529 err = platform_driver_register(&uac2->pdrv); ··· 778 772 779 773 .bEndpointAddress = USB_DIR_OUT, 780 774 .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, 775 + .wMaxPacketSize = cpu_to_le16(1023), 781 776 .bInterval = 1, 782 777 }; 783 778 ··· 787 780 .bDescriptorType = USB_DT_ENDPOINT, 788 781 789 782 .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, 783 + .wMaxPacketSize = cpu_to_le16(1024), 790 784 .bInterval = 4, 791 785 }; 792 786 ··· 855 847 856 848 .bEndpointAddress = USB_DIR_IN, 857 849 .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, 850 + .wMaxPacketSize = cpu_to_le16(1023), 858 851 .bInterval = 1, 859 852 }; 860 853 ··· 864 855 .bDescriptorType = USB_DT_ENDPOINT, 865 856 866 857 .bmAttributes = USB_ENDPOINT_XFER_ISOC | USB_ENDPOINT_SYNC_ASYNC, 858 + .wMaxPacketSize = cpu_to_le16(1024), 867 859 .bInterval = 4, 868 860 }; 869 861 ··· 956 946 { 957 947 struct snd_uac2_chip *uac2 = prm->uac2; 958 948 int i; 949 + 950 + if (!prm->ep_enabled) 951 + return; 959 952 960 953 prm->ep_enabled = false; 961 954 ··· 1084 1071 prm->rbuf = kzalloc(prm->max_psize * USB_XFERS, GFP_KERNEL); 1085 1072 if (!prm->rbuf) { 1086 1073 prm->max_psize = 0; 1087 - goto err; 1074 + goto err_free_descs; 1088 1075 } 1089 1076 1090 1077 prm = &agdev->uac2.p_prm; ··· 1092 1079 prm->rbuf = kzalloc(prm->max_psize * USB_XFERS, GFP_KERNEL); 1093 1080 if (!prm->rbuf) { 1094 1081 prm->max_psize = 0; 1095 - goto err; 1082 + goto err_free_descs; 1096 1083 } 1097 1084 1098 1085 ret = 
alsa_uac2_init(agdev); 1099 1086 if (ret) 1100 - goto err; 1087 + goto err_free_descs; 1101 1088 return 0; 1089 + 1090 + err_free_descs: 1091 + usb_free_all_descriptors(fn); 1102 1092 err: 1103 1093 kfree(agdev->uac2.p_prm.rbuf); 1104 1094 kfree(agdev->uac2.c_prm.rbuf); 1105 - usb_free_all_descriptors(fn); 1106 1095 if (agdev->in_ep) 1107 1096 agdev->in_ep->driver_data = NULL; 1108 1097 if (agdev->out_ep)
+44 -10
drivers/usb/gadget/function/f_uvc.c
··· 279 279 else if (interface != uvc->streaming_intf) 280 280 return -EINVAL; 281 281 else 282 - return uvc->state == UVC_STATE_STREAMING ? 1 : 0; 282 + return uvc->video.ep->driver_data ? 1 : 0; 283 283 } 284 284 285 285 static int 286 286 uvc_function_set_alt(struct usb_function *f, unsigned interface, unsigned alt) 287 287 { 288 288 struct uvc_device *uvc = to_uvc(f); 289 + struct usb_composite_dev *cdev = f->config->cdev; 289 290 struct v4l2_event v4l2_event; 290 291 struct uvc_event *uvc_event = (void *)&v4l2_event.u.data; 291 292 int ret; 292 293 293 - INFO(f->config->cdev, "uvc_function_set_alt(%u, %u)\n", interface, alt); 294 + INFO(cdev, "uvc_function_set_alt(%u, %u)\n", interface, alt); 294 295 295 296 if (interface == uvc->control_intf) { 296 297 if (alt) 297 298 return -EINVAL; 298 299 300 + if (uvc->control_ep->driver_data) { 301 + INFO(cdev, "reset UVC Control\n"); 302 + usb_ep_disable(uvc->control_ep); 303 + uvc->control_ep->driver_data = NULL; 304 + } 305 + 306 + if (!uvc->control_ep->desc) 307 + if (config_ep_by_speed(cdev->gadget, f, uvc->control_ep)) 308 + return -EINVAL; 309 + 310 + usb_ep_enable(uvc->control_ep); 311 + uvc->control_ep->driver_data = uvc; 312 + 299 313 if (uvc->state == UVC_STATE_DISCONNECTED) { 300 314 memset(&v4l2_event, 0, sizeof(v4l2_event)); 301 315 v4l2_event.type = UVC_EVENT_CONNECT; 302 - uvc_event->speed = f->config->cdev->gadget->speed; 316 + uvc_event->speed = cdev->gadget->speed; 303 317 v4l2_event_queue(uvc->vdev, &v4l2_event); 304 318 305 319 uvc->state = UVC_STATE_CONNECTED; ··· 335 321 if (uvc->state != UVC_STATE_STREAMING) 336 322 return 0; 337 323 338 - if (uvc->video.ep) 324 + if (uvc->video.ep) { 339 325 usb_ep_disable(uvc->video.ep); 326 + uvc->video.ep->driver_data = NULL; 327 + } 340 328 341 329 memset(&v4l2_event, 0, sizeof(v4l2_event)); 342 330 v4l2_event.type = UVC_EVENT_STREAMOFF; ··· 351 335 if (uvc->state != UVC_STATE_CONNECTED) 352 336 return 0; 353 337 354 - if (uvc->video.ep) { 355 - ret = 
config_ep_by_speed(f->config->cdev->gadget, 356 - &(uvc->func), uvc->video.ep); 357 - if (ret) 358 - return ret; 359 - usb_ep_enable(uvc->video.ep); 338 + if (!uvc->video.ep) 339 + return -EINVAL; 340 + 341 + if (uvc->video.ep->driver_data) { 342 + INFO(cdev, "reset UVC\n"); 343 + usb_ep_disable(uvc->video.ep); 344 + uvc->video.ep->driver_data = NULL; 360 345 } 346 + 347 + ret = config_ep_by_speed(f->config->cdev->gadget, 348 + &(uvc->func), uvc->video.ep); 349 + if (ret) 350 + return ret; 351 + usb_ep_enable(uvc->video.ep); 352 + uvc->video.ep->driver_data = uvc; 361 353 362 354 memset(&v4l2_event, 0, sizeof(v4l2_event)); 363 355 v4l2_event.type = UVC_EVENT_STREAMON; ··· 390 366 v4l2_event_queue(uvc->vdev, &v4l2_event); 391 367 392 368 uvc->state = UVC_STATE_DISCONNECTED; 369 + 370 + if (uvc->video.ep->driver_data) { 371 + usb_ep_disable(uvc->video.ep); 372 + uvc->video.ep->driver_data = NULL; 373 + } 374 + 375 + if (uvc->control_ep->driver_data) { 376 + usb_ep_disable(uvc->control_ep); 377 + uvc->control_ep->driver_data = NULL; 378 + } 393 379 } 394 380 395 381 /* --------------------------------------------------------------------------
+2 -1
drivers/usb/gadget/function/uvc_video.c
··· 352 352 353 353 if (!enable) { 354 354 for (i = 0; i < UVC_NUM_REQUESTS; ++i) 355 - usb_ep_dequeue(video->ep, video->req[i]); 355 + if (video->req[i]) 356 + usb_ep_dequeue(video->ep, video->req[i]); 356 357 357 358 uvc_video_free_requests(video); 358 359 uvcg_queue_enable(&video->queue, 0);
+1
drivers/usb/gadget/udc/Kconfig
··· 357 357 358 358 config USB_GADGET_XILINX 359 359 tristate "Xilinx USB Driver" 360 + depends on HAS_DMA 360 361 depends on OF || COMPILE_TEST 361 362 help 362 363 USB peripheral controller driver for Xilinx USB2 device.
+5
drivers/usb/gadget/udc/udc-core.c
··· 507 507 { 508 508 struct usb_udc *udc = container_of(dev, struct usb_udc, dev); 509 509 510 + if (!udc->driver) { 511 + dev_err(dev, "soft-connect without a gadget driver\n"); 512 + return -EOPNOTSUPP; 513 + } 514 + 510 515 if (sysfs_streq(buf, "connect")) { 511 516 usb_gadget_udc_start(udc->gadget, udc->driver); 512 517 usb_gadget_connect(udc->gadget);
+2 -2
drivers/usb/host/Kconfig
··· 234 234 235 235 config USB_EHCI_EXYNOS 236 236 tristate "EHCI support for Samsung S5P/EXYNOS SoC Series" 237 - depends on PLAT_S5P || ARCH_EXYNOS 237 + depends on ARCH_S5PV210 || ARCH_EXYNOS 238 238 help 239 239 Enable support for the Samsung Exynos SOC's on-chip EHCI controller. 240 240 ··· 550 550 551 551 config USB_OHCI_EXYNOS 552 552 tristate "OHCI support for Samsung S5P/EXYNOS SoC Series" 553 - depends on PLAT_S5P || ARCH_EXYNOS 553 + depends on ARCH_S5PV210 || ARCH_EXYNOS 554 554 help 555 555 Enable support for the Samsung Exynos SOC's on-chip OHCI controller. 556 556
+1 -1
drivers/usb/host/hwa-hc.c
··· 607 607 wa->wa_descr = wa_descr = (struct usb_wa_descriptor *) hdr; 608 608 if (le16_to_cpu(wa_descr->bcdWAVersion) > 0x0100) 609 609 dev_warn(dev, "Wire Adapter v%d.%d newer than groked v1.0\n", 610 - le16_to_cpu(wa_descr->bcdWAVersion) & 0xff00 >> 8, 610 + (le16_to_cpu(wa_descr->bcdWAVersion) & 0xff00) >> 8, 611 611 le16_to_cpu(wa_descr->bcdWAVersion) & 0x00ff); 612 612 result = 0; 613 613 error:
+4 -14
drivers/usb/host/xhci-pci.c
··· 128 128 xhci->quirks |= XHCI_AVOID_BEI; 129 129 } 130 130 if (pdev->vendor == PCI_VENDOR_ID_INTEL && 131 - (pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI || 132 - pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI)) { 133 - /* Workaround for occasional spurious wakeups from S5 (or 134 - * any other sleep) on Haswell machines with LPT and LPT-LP 135 - * with the new Intel BIOS 136 - */ 137 - /* Limit the quirk to only known vendors, as this triggers 138 - * yet another BIOS bug on some other machines 139 - * https://bugzilla.kernel.org/show_bug.cgi?id=66171 140 - */ 141 - if (pdev->subsystem_vendor == PCI_VENDOR_ID_HP) 142 - xhci->quirks |= XHCI_SPURIOUS_WAKEUP; 143 - } 144 - if (pdev->vendor == PCI_VENDOR_ID_INTEL && 145 131 pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI) { 146 132 xhci->quirks |= XHCI_SPURIOUS_REBOOT; 147 133 } ··· 146 160 /* See https://bugzilla.kernel.org/show_bug.cgi?id=79511 */ 147 161 if (pdev->vendor == PCI_VENDOR_ID_VIA && 148 162 pdev->device == 0x3432) 163 + xhci->quirks |= XHCI_BROKEN_STREAMS; 164 + 165 + if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && 166 + pdev->device == 0x1042) 149 167 xhci->quirks |= XHCI_BROKEN_STREAMS; 150 168 151 169 if (xhci->quirks & XHCI_RESET_ON_RESUME)
+2 -1
drivers/usb/musb/musb_cppi41.c
··· 209 209 } 210 210 } 211 211 212 - if (!list_empty(&controller->early_tx_list)) { 212 + if (!list_empty(&controller->early_tx_list) && 213 + !hrtimer_is_queued(&controller->early_tx)) { 213 214 ret = HRTIMER_RESTART; 214 215 hrtimer_forward_now(&controller->early_tx, 215 216 ktime_set(0, 20 * NSEC_PER_USEC));
+15 -3
drivers/usb/musb/musb_dsps.c
··· 868 868 struct dsps_glue *glue = dev_get_drvdata(dev); 869 869 const struct dsps_musb_wrapper *wrp = glue->wrp; 870 870 struct musb *musb = platform_get_drvdata(glue->musb); 871 - void __iomem *mbase = musb->ctrl_base; 871 + void __iomem *mbase; 872 872 873 873 del_timer_sync(&glue->timer); 874 + 875 + if (!musb) 876 + /* This can happen if the musb device is in -EPROBE_DEFER */ 877 + return 0; 878 + 879 + mbase = musb->ctrl_base; 874 880 glue->context.control = dsps_readl(mbase, wrp->control); 875 881 glue->context.epintr = dsps_readl(mbase, wrp->epintr_set); 876 882 glue->context.coreintr = dsps_readl(mbase, wrp->coreintr_set); ··· 893 887 struct dsps_glue *glue = dev_get_drvdata(dev); 894 888 const struct dsps_musb_wrapper *wrp = glue->wrp; 895 889 struct musb *musb = platform_get_drvdata(glue->musb); 896 - void __iomem *mbase = musb->ctrl_base; 890 + void __iomem *mbase; 897 891 892 + if (!musb) 893 + return 0; 894 + 895 + mbase = musb->ctrl_base; 898 896 dsps_writel(mbase, wrp->control, glue->context.control); 899 897 dsps_writel(mbase, wrp->epintr_set, glue->context.epintr); 900 898 dsps_writel(mbase, wrp->coreintr_set, glue->context.coreintr); ··· 906 896 dsps_writel(mbase, wrp->mode, glue->context.mode); 907 897 dsps_writel(mbase, wrp->tx_mode, glue->context.tx_mode); 908 898 dsps_writel(mbase, wrp->rx_mode, glue->context.rx_mode); 909 - setup_timer(&glue->timer, otg_timer, (unsigned long) musb); 899 + if (musb->xceiv->state == OTG_STATE_B_IDLE && 900 + musb->port_mode == MUSB_PORT_MODE_DUAL_ROLE) 901 + mod_timer(&glue->timer, jiffies + wrp->poll_seconds * HZ); 910 902 911 903 return 0; 912 904 }
+1
drivers/usb/serial/cp210x.c
··· 155 155 { USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */ 156 156 { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */ 157 157 { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */ 158 + { USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */ 158 159 { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */ 159 160 { USB_DEVICE(0x1D6F, 0x0010) }, /* Seluxit ApS RF Dongle */ 160 161 { USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */
+3
drivers/usb/serial/ftdi_sio.c
··· 140 140 * /sys/bus/usb-serial/drivers/ftdi_sio/new_id and send a patch or report. 141 141 */ 142 142 static const struct usb_device_id id_table_combined[] = { 143 + { USB_DEVICE(FTDI_VID, FTDI_BRICK_PID) }, 143 144 { USB_DEVICE(FTDI_VID, FTDI_ZEITCONTROL_TAGTRACE_MIFARE_PID) }, 144 145 { USB_DEVICE(FTDI_VID, FTDI_CTI_MINI_PID) }, 145 146 { USB_DEVICE(FTDI_VID, FTDI_CTI_NANO_PID) }, ··· 662 661 { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_5_PID) }, 663 662 { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_6_PID) }, 664 663 { USB_DEVICE(FTDI_VID, XSENS_CONVERTER_7_PID) }, 664 + { USB_DEVICE(XSENS_VID, XSENS_AWINDA_DONGLE_PID) }, 665 + { USB_DEVICE(XSENS_VID, XSENS_AWINDA_STATION_PID) }, 665 666 { USB_DEVICE(XSENS_VID, XSENS_CONVERTER_PID) }, 666 667 { USB_DEVICE(XSENS_VID, XSENS_MTW_PID) }, 667 668 { USB_DEVICE(FTDI_VID, FTDI_OMNI1509) },
+11 -1
drivers/usb/serial/ftdi_sio_ids.h
··· 30 30 31 31 /*** third-party PIDs (using FTDI_VID) ***/ 32 32 33 + /* 34 + * Certain versions of the official Windows FTDI driver reprogrammed 35 + * counterfeit FTDI devices to PID 0. Support these devices anyway. 36 + */ 37 + #define FTDI_BRICK_PID 0x0000 38 + 33 39 #define FTDI_LUMEL_PD12_PID 0x6002 34 40 35 41 /* ··· 149 143 * Xsens Technologies BV products (http://www.xsens.com). 150 144 */ 151 145 #define XSENS_VID 0x2639 152 - #define XSENS_CONVERTER_PID 0xD00D /* Xsens USB-serial converter */ 146 + #define XSENS_AWINDA_STATION_PID 0x0101 147 + #define XSENS_AWINDA_DONGLE_PID 0x0102 153 148 #define XSENS_MTW_PID 0x0200 /* Xsens MTw */ 149 + #define XSENS_CONVERTER_PID 0xD00D /* Xsens USB-serial converter */ 150 + 151 + /* Xsens devices using FTDI VID */ 154 152 #define XSENS_CONVERTER_0_PID 0xD388 /* Xsens USB converter */ 155 153 #define XSENS_CONVERTER_1_PID 0xD389 /* Xsens Wireless Receiver */ 156 154 #define XSENS_CONVERTER_2_PID 0xD38A
+3 -17
drivers/usb/serial/kobil_sct.c
··· 335 335 port->interrupt_out_urb->transfer_buffer_length = length; 336 336 337 337 priv->cur_pos = priv->cur_pos + length; 338 - result = usb_submit_urb(port->interrupt_out_urb, GFP_NOIO); 338 + result = usb_submit_urb(port->interrupt_out_urb, 339 + GFP_ATOMIC); 339 340 dev_dbg(&port->dev, "%s - Send write URB returns: %i\n", __func__, result); 340 341 todo = priv->filled - priv->cur_pos; 341 342 ··· 351 350 if (priv->device_type == KOBIL_ADAPTER_B_PRODUCT_ID || 352 351 priv->device_type == KOBIL_ADAPTER_K_PRODUCT_ID) { 353 352 result = usb_submit_urb(port->interrupt_in_urb, 354 - GFP_NOIO); 353 + GFP_ATOMIC); 355 354 dev_dbg(&port->dev, "%s - Send read URB returns: %i\n", __func__, result); 356 355 } 357 356 } ··· 415 414 int result; 416 415 int dtr = 0; 417 416 int rts = 0; 418 - unsigned char *transfer_buffer; 419 - int transfer_buffer_length = 8; 420 417 421 418 /* FIXME: locking ? */ 422 419 priv = usb_get_serial_port_data(port); ··· 423 424 /* This device doesn't support ioctl calls */ 424 425 return -EINVAL; 425 426 } 426 - 427 - /* allocate memory for transfer buffer */ 428 - transfer_buffer = kzalloc(transfer_buffer_length, GFP_KERNEL); 429 - if (!transfer_buffer) 430 - return -ENOMEM; 431 427 432 428 if (set & TIOCM_RTS) 433 429 rts = 1; ··· 463 469 KOBIL_TIMEOUT); 464 470 } 465 471 dev_dbg(dev, "%s - Send set_status_line URB returns: %i\n", __func__, result); 466 - kfree(transfer_buffer); 467 472 return (result < 0) ? 
result : 0; 468 473 } 469 474 ··· 523 530 { 524 531 struct usb_serial_port *port = tty->driver_data; 525 532 struct kobil_private *priv = usb_get_serial_port_data(port); 526 - unsigned char *transfer_buffer; 527 - int transfer_buffer_length = 8; 528 533 int result; 529 534 530 535 if (priv->device_type == KOBIL_USBTWIN_PRODUCT_ID || ··· 532 541 533 542 switch (cmd) { 534 543 case TCFLSH: 535 - transfer_buffer = kmalloc(transfer_buffer_length, GFP_KERNEL); 536 - if (!transfer_buffer) 537 - return -ENOBUFS; 538 - 539 544 result = usb_control_msg(port->serial->dev, 540 545 usb_sndctrlpipe(port->serial->dev, 0), 541 546 SUSBCRequest_Misc, ··· 546 559 dev_dbg(&port->dev, 547 560 "%s - Send reset_all_queues (FLUSH) URB returns: %i\n", 548 561 __func__, result); 549 - kfree(transfer_buffer); 550 562 return (result < 0) ? -EIO: 0; 551 563 default: 552 564 return -ENOIOCTLCMD;
+1 -1
drivers/usb/serial/opticon.c
··· 215 215 216 216 /* The connected devices do not have a bulk write endpoint, 217 217 * to transmit data to the barcode device the control endpoint is used */ 218 - dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_NOIO); 218 + dr = kmalloc(sizeof(struct usb_ctrlrequest), GFP_ATOMIC); 219 219 if (!dr) { 220 220 count = -ENOMEM; 221 221 goto error_no_dr;
+10
drivers/usb/serial/option.c
··· 269 269 #define TELIT_PRODUCT_DE910_DUAL 0x1010 270 270 #define TELIT_PRODUCT_UE910_V2 0x1012 271 271 #define TELIT_PRODUCT_LE920 0x1200 272 + #define TELIT_PRODUCT_LE910 0x1201 272 273 273 274 /* ZTE PRODUCTS */ 274 275 #define ZTE_VENDOR_ID 0x19d2 ··· 363 362 364 363 /* Haier products */ 365 364 #define HAIER_VENDOR_ID 0x201e 365 + #define HAIER_PRODUCT_CE81B 0x10f8 366 366 #define HAIER_PRODUCT_CE100 0x2009 367 367 368 368 /* Cinterion (formerly Siemens) products */ ··· 589 587 590 588 static const struct option_blacklist_info zte_1255_blacklist = { 591 589 .reserved = BIT(3) | BIT(4), 590 + }; 591 + 592 + static const struct option_blacklist_info telit_le910_blacklist = { 593 + .sendsetup = BIT(0), 594 + .reserved = BIT(1) | BIT(2), 592 595 }; 593 596 594 597 static const struct option_blacklist_info telit_le920_blacklist = { ··· 1145 1138 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) }, 1146 1139 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) }, 1147 1140 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) }, 1141 + { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE910), 1142 + .driver_info = (kernel_ulong_t)&telit_le910_blacklist }, 1148 1143 { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE920), 1149 1144 .driver_info = (kernel_ulong_t)&telit_le920_blacklist }, 1150 1145 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MF622, 0xff, 0xff, 0xff) }, /* ZTE WCDMA products */ ··· 1630 1621 { USB_DEVICE(LONGCHEER_VENDOR_ID, ZOOM_PRODUCT_4597) }, 1631 1622 { USB_DEVICE(LONGCHEER_VENDOR_ID, IBALL_3_5G_CONNECT) }, 1632 1623 { USB_DEVICE(HAIER_VENDOR_ID, HAIER_PRODUCT_CE100) }, 1624 + { USB_DEVICE_AND_INTERFACE_INFO(HAIER_VENDOR_ID, HAIER_PRODUCT_CE81B, 0xff, 0xff, 0xff) }, 1633 1625 /* Pirelli */ 1634 1626 { USB_DEVICE_INTERFACE_CLASS(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_1, 0xff) }, 1635 1627 { USB_DEVICE_INTERFACE_CLASS(PIRELLI_VENDOR_ID, PIRELLI_PRODUCT_C100_2, 0xff) },
+2 -2
drivers/usb/storage/initializers.c
··· 52 52 us->iobuf[0] = 0x1; 53 53 result = usb_stor_control_msg(us, us->send_ctrl_pipe, 54 54 0x0C, USB_RECIP_INTERFACE | USB_TYPE_VENDOR, 55 - 0x01, 0x0, us->iobuf, 0x1, USB_CTRL_SET_TIMEOUT); 55 + 0x01, 0x0, us->iobuf, 0x1, 5 * HZ); 56 56 usb_stor_dbg(us, "-- result is %d\n", result); 57 57 58 58 return 0; ··· 100 100 result = usb_stor_control_msg(us, us->send_ctrl_pipe, 101 101 USB_REQ_SET_FEATURE, 102 102 USB_TYPE_STANDARD | USB_RECIP_DEVICE, 103 - 0x01, 0x0, NULL, 0x0, 1000); 103 + 0x01, 0x0, NULL, 0x0, 1 * HZ); 104 104 usb_stor_dbg(us, "Huawei mode set result is %d\n", result); 105 105 return 0; 106 106 }
+2
drivers/usb/storage/realtek_cr.c
··· 626 626 return 0; 627 627 } 628 628 629 + #ifdef CONFIG_PM 629 630 static int config_autodelink_before_power_down(struct us_data *us) 630 631 { 631 632 struct rts51x_chip *chip = (struct rts51x_chip *)(us->extra); ··· 717 716 } 718 717 } 719 718 } 719 + #endif 720 720 721 721 #ifdef CONFIG_REALTEK_AUTOPM 722 722 static void fw5895_set_mmc_wp(struct us_data *us)
+26
drivers/usb/storage/transport.c
··· 1118 1118 */ 1119 1119 if (result == USB_STOR_XFER_LONG) 1120 1120 fake_sense = 1; 1121 + 1122 + /* 1123 + * Sometimes a device will mistakenly skip the data phase 1124 + * and go directly to the status phase without sending a 1125 + * zero-length packet. If we get a 13-byte response here, 1126 + * check whether it really is a CSW. 1127 + */ 1128 + if (result == USB_STOR_XFER_SHORT && 1129 + srb->sc_data_direction == DMA_FROM_DEVICE && 1130 + transfer_length - scsi_get_resid(srb) == 1131 + US_BULK_CS_WRAP_LEN) { 1132 + struct scatterlist *sg = NULL; 1133 + unsigned int offset = 0; 1134 + 1135 + if (usb_stor_access_xfer_buf((unsigned char *) bcs, 1136 + US_BULK_CS_WRAP_LEN, srb, &sg, 1137 + &offset, FROM_XFER_BUF) == 1138 + US_BULK_CS_WRAP_LEN && 1139 + bcs->Signature == 1140 + cpu_to_le32(US_BULK_CS_SIGN)) { 1141 + usb_stor_dbg(us, "Device skipped data phase\n"); 1142 + scsi_set_resid(srb, transfer_length); 1143 + goto skipped_data_phase; 1144 + } 1145 + } 1121 1146 } 1122 1147 1123 1148 /* See flow chart on pg 15 of the Bulk Only Transport spec for ··· 1178 1153 if (result != USB_STOR_XFER_GOOD) 1179 1154 return USB_STOR_TRANSPORT_ERROR; 1180 1155 1156 + skipped_data_phase: 1181 1157 /* check bulk status */ 1182 1158 residue = le32_to_cpu(bcs->Residue); 1183 1159 usb_stor_dbg(us, "Bulk Status S 0x%x T 0x%x R %u Stat 0x%x\n",
+28
drivers/usb/storage/unusual_uas.h
··· 54 54 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 55 55 US_FL_NO_ATA_1X), 56 56 57 + /* Reported-by: Hans de Goede <hdegoede@redhat.com> */ 58 + UNUSUAL_DEV(0x0bc2, 0x3320, 0x0000, 0x9999, 59 + "Seagate", 60 + "Expansion Desk", 61 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 62 + US_FL_NO_ATA_1X), 63 + 64 + /* Reported-by: Bogdan Mihalcea <bogdan.mihalcea@infim.ro> */ 65 + UNUSUAL_DEV(0x0bc2, 0xa003, 0x0000, 0x9999, 66 + "Seagate", 67 + "Backup Plus", 68 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 69 + US_FL_NO_ATA_1X), 70 + 57 71 /* https://bbs.archlinux.org/viewtopic.php?id=183190 */ 58 72 UNUSUAL_DEV(0x0bc2, 0xab20, 0x0000, 0x9999, 73 + "Seagate", 74 + "Backup+ BK", 75 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 76 + US_FL_NO_ATA_1X), 77 + 78 + /* https://bbs.archlinux.org/viewtopic.php?id=183190 */ 79 + UNUSUAL_DEV(0x0bc2, 0xab21, 0x0000, 0x9999, 59 80 "Seagate", 60 81 "Backup+ BK", 61 82 USB_SC_DEVICE, USB_PR_DEVICE, NULL, ··· 96 75 "ASM1051", 97 76 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 98 77 US_FL_IGNORE_UAS), 78 + 79 + /* Reported-by: Hans de Goede <hdegoede@redhat.com> */ 80 + UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999, 81 + "VIA", 82 + "VL711", 83 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 84 + US_FL_NO_ATA_1X),
+2 -1
fs/block_dev.c
··· 1585 1585 } 1586 1586 EXPORT_SYMBOL_GPL(blkdev_write_iter); 1587 1587 1588 - static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) 1588 + ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) 1589 1589 { 1590 1590 struct file *file = iocb->ki_filp; 1591 1591 struct inode *bd_inode = file->f_mapping->host; ··· 1599 1599 iov_iter_truncate(to, size); 1600 1600 return generic_file_read_iter(iocb, to); 1601 1601 } 1602 + EXPORT_SYMBOL_GPL(blkdev_read_iter); 1602 1603 1603 1604 /* 1604 1605 * Try to release a page associated with block device when the system
+1 -1
fs/btrfs/ctree.h
··· 3276 3276 struct btrfs_root *root, unsigned long count); 3277 3277 int btrfs_async_run_delayed_refs(struct btrfs_root *root, 3278 3278 unsigned long count, int wait); 3279 - int btrfs_lookup_extent(struct btrfs_root *root, u64 start, u64 len); 3279 + int btrfs_lookup_data_extent(struct btrfs_root *root, u64 start, u64 len); 3280 3280 int btrfs_lookup_extent_info(struct btrfs_trans_handle *trans, 3281 3281 struct btrfs_root *root, u64 bytenr, 3282 3282 u64 offset, int metadata, u64 *refs, u64 *flags);
+22 -21
fs/btrfs/disk-io.c
··· 3817 3817 struct btrfs_super_block *sb = fs_info->super_copy; 3818 3818 int ret = 0; 3819 3819 3820 - if (sb->root_level > BTRFS_MAX_LEVEL) { 3821 - printk(KERN_ERR "BTRFS: tree_root level too big: %d > %d\n", 3822 - sb->root_level, BTRFS_MAX_LEVEL); 3820 + if (btrfs_super_root_level(sb) >= BTRFS_MAX_LEVEL) { 3821 + printk(KERN_ERR "BTRFS: tree_root level too big: %d >= %d\n", 3822 + btrfs_super_root_level(sb), BTRFS_MAX_LEVEL); 3823 3823 ret = -EINVAL; 3824 3824 } 3825 - if (sb->chunk_root_level > BTRFS_MAX_LEVEL) { 3826 - printk(KERN_ERR "BTRFS: chunk_root level too big: %d > %d\n", 3827 - sb->chunk_root_level, BTRFS_MAX_LEVEL); 3825 + if (btrfs_super_chunk_root_level(sb) >= BTRFS_MAX_LEVEL) { 3826 + printk(KERN_ERR "BTRFS: chunk_root level too big: %d >= %d\n", 3827 + btrfs_super_chunk_root_level(sb), BTRFS_MAX_LEVEL); 3828 3828 ret = -EINVAL; 3829 3829 } 3830 - if (sb->log_root_level > BTRFS_MAX_LEVEL) { 3831 - printk(KERN_ERR "BTRFS: log_root level too big: %d > %d\n", 3832 - sb->log_root_level, BTRFS_MAX_LEVEL); 3830 + if (btrfs_super_log_root_level(sb) >= BTRFS_MAX_LEVEL) { 3831 + printk(KERN_ERR "BTRFS: log_root level too big: %d >= %d\n", 3832 + btrfs_super_log_root_level(sb), BTRFS_MAX_LEVEL); 3833 3833 ret = -EINVAL; 3834 3834 } 3835 3835 ··· 3837 3837 * The common minimum, we don't know if we can trust the nodesize/sectorsize 3838 3838 * items yet, they'll be verified later. Issue just a warning. 
3839 3839 */ 3840 - if (!IS_ALIGNED(sb->root, 4096)) 3840 + if (!IS_ALIGNED(btrfs_super_root(sb), 4096)) 3841 3841 printk(KERN_WARNING "BTRFS: tree_root block unaligned: %llu\n", 3842 3842 sb->root); 3843 - if (!IS_ALIGNED(sb->chunk_root, 4096)) 3843 + if (!IS_ALIGNED(btrfs_super_chunk_root(sb), 4096)) 3844 3844 printk(KERN_WARNING "BTRFS: tree_root block unaligned: %llu\n", 3845 3845 sb->chunk_root); 3846 - if (!IS_ALIGNED(sb->log_root, 4096)) 3846 + if (!IS_ALIGNED(btrfs_super_log_root(sb), 4096)) 3847 3847 printk(KERN_WARNING "BTRFS: tree_root block unaligned: %llu\n", 3848 - sb->log_root); 3848 + btrfs_super_log_root(sb)); 3849 3849 3850 3850 if (memcmp(fs_info->fsid, sb->dev_item.fsid, BTRFS_UUID_SIZE) != 0) { 3851 3851 printk(KERN_ERR "BTRFS: dev_item UUID does not match fsid: %pU != %pU\n", ··· 3857 3857 * Hint to catch really bogus numbers, bitflips or so, more exact checks are 3858 3858 * done later 3859 3859 */ 3860 - if (sb->num_devices > (1UL << 31)) 3860 + if (btrfs_super_num_devices(sb) > (1UL << 31)) 3861 3861 printk(KERN_WARNING "BTRFS: suspicious number of devices: %llu\n", 3862 - sb->num_devices); 3862 + btrfs_super_num_devices(sb)); 3863 3863 3864 - if (sb->bytenr != BTRFS_SUPER_INFO_OFFSET) { 3864 + if (btrfs_super_bytenr(sb) != BTRFS_SUPER_INFO_OFFSET) { 3865 3865 printk(KERN_ERR "BTRFS: super offset mismatch %llu != %u\n", 3866 - sb->bytenr, BTRFS_SUPER_INFO_OFFSET); 3866 + btrfs_super_bytenr(sb), BTRFS_SUPER_INFO_OFFSET); 3867 3867 ret = -EINVAL; 3868 3868 } 3869 3869 ··· 3871 3871 * The generation is a global counter, we'll trust it more than the others 3872 3872 * but it's still possible that it's the one that's wrong. 
3873 3873 */ 3874 - if (sb->generation < sb->chunk_root_generation) 3874 + if (btrfs_super_generation(sb) < btrfs_super_chunk_root_generation(sb)) 3875 3875 printk(KERN_WARNING 3876 3876 "BTRFS: suspicious: generation < chunk_root_generation: %llu < %llu\n", 3877 - sb->generation, sb->chunk_root_generation); 3878 - if (sb->generation < sb->cache_generation && sb->cache_generation != (u64)-1) 3877 + btrfs_super_generation(sb), btrfs_super_chunk_root_generation(sb)); 3878 + if (btrfs_super_generation(sb) < btrfs_super_cache_generation(sb) 3879 + && btrfs_super_cache_generation(sb) != (u64)-1) 3879 3880 printk(KERN_WARNING 3880 3881 "BTRFS: suspicious: generation < cache_generation: %llu < %llu\n", 3881 - sb->generation, sb->cache_generation); 3882 + btrfs_super_generation(sb), btrfs_super_cache_generation(sb)); 3882 3883 3883 3884 return ret; 3884 3885 }
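The hunk above tightens the btrfs superblock sanity checks in two ways: fields are read through the endian-safe `btrfs_super_*()` accessors instead of raw struct members, and the level bound becomes `>= BTRFS_MAX_LEVEL`, since valid levels run from 0 to `BTRFS_MAX_LEVEL - 1`. The same two checks can be sketched in userspace C (the `get_le64` decoder stands in for the generated accessors; `MAX_LEVEL` and all names here are illustrative, not the kernel's):

```c
#include <stdint.h>

#define MAX_LEVEL 8  /* stand-in for BTRFS_MAX_LEVEL: valid levels are 0..7 */

/* Decode a little-endian on-disk u64 — the job btrfs_super_root_level()
 * and friends do for the raw superblock fields. */
static uint64_t get_le64(const uint8_t *p)
{
	uint64_t v = 0;
	for (int i = 7; i >= 0; i--)
		v = (v << 8) | p[i];
	return v;
}

/* The tightened bound from the patch: level == MAX_LEVEL is rejected too. */
static int level_ok(uint64_t level)
{
	return level < MAX_LEVEL;
}

/* The alignment warning: tree root blocks should sit on 4 KiB boundaries. */
static int root_aligned(uint64_t bytenr)
{
	return (bytenr % 4096) == 0;
}
```

The `>=` change matters precisely at the boundary: a superblock claiming level `MAX_LEVEL` would previously pass the old `>` check and index one past the end of per-level arrays.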
+2 -16
fs/btrfs/extent-tree.c
··· 710 710 rcu_read_unlock(); 711 711 } 712 712 713 - /* simple helper to search for an existing extent at a given offset */ 714 - int btrfs_lookup_extent(struct btrfs_root *root, u64 start, u64 len) 713 + /* simple helper to search for an existing data extent at a given offset */ 714 + int btrfs_lookup_data_extent(struct btrfs_root *root, u64 start, u64 len) 715 715 { 716 716 int ret; 717 717 struct btrfs_key key; ··· 726 726 key.type = BTRFS_EXTENT_ITEM_KEY; 727 727 ret = btrfs_search_slot(NULL, root->fs_info->extent_root, &key, path, 728 728 0, 0); 729 - if (ret > 0) { 730 - btrfs_item_key_to_cpu(path->nodes[0], &key, path->slots[0]); 731 - if (key.objectid == start && 732 - key.type == BTRFS_METADATA_ITEM_KEY) 733 - ret = 0; 734 - } 735 729 btrfs_free_path(path); 736 730 return ret; 737 731 } ··· 780 786 else 781 787 key.type = BTRFS_EXTENT_ITEM_KEY; 782 788 783 - again: 784 789 ret = btrfs_search_slot(trans, root->fs_info->extent_root, 785 790 &key, path, 0, 0); 786 791 if (ret < 0) ··· 794 801 key.type == BTRFS_EXTENT_ITEM_KEY && 795 802 key.offset == root->nodesize) 796 803 ret = 0; 797 - } 798 - if (ret) { 799 - key.objectid = bytenr; 800 - key.type = BTRFS_EXTENT_ITEM_KEY; 801 - key.offset = root->nodesize; 802 - btrfs_release_path(path); 803 - goto again; 804 804 } 805 805 } 806 806
+1 -1
fs/btrfs/file-item.c
··· 413 413 ret = 0; 414 414 fail: 415 415 while (ret < 0 && !list_empty(&tmplist)) { 416 - sums = list_entry(&tmplist, struct btrfs_ordered_sum, list); 416 + sums = list_entry(tmplist.next, struct btrfs_ordered_sum, list); 417 417 list_del(&sums->list); 418 418 kfree(sums); 419 419 }
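The one-line fix above repairs a classic `container_of` slip: `list_entry(&tmplist, ...)` computes the offset from the list *head* itself, producing a pointer into unrelated memory, while `list_entry(tmplist.next, ...)` resolves the first real element. A self-contained sketch of the pattern, with a minimal `list_head`/`list_entry` modeled on the kernel's (struct and helper names here are illustrative):

```c
#include <stddef.h>

/* Minimal doubly-linked intrusive list, modeled on the kernel's list_head. */
struct list_head { struct list_head *next, *prev; };

static void list_add_tail_min(struct list_head *node, struct list_head *head)
{
	node->prev = head->prev;
	node->next = head;
	head->prev->next = node;
	head->prev = node;
}

/* container_of in miniature: step back from the embedded member to the
 * enclosing struct. Passing the list head here is the bug being fixed. */
#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-in for struct btrfs_ordered_sum: payload plus an embedded node. */
struct ordered_sum {
	int bytes;
	struct list_head list;
};
```

With `tmplist` as the head, `list_entry(tmplist.next, struct ordered_sum, list)` recovers the first element; the pre-fix `&tmplist` variant points at memory near the head, so the subsequent `list_del`/`kfree` in the error path operated on garbage.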
+1
fs/btrfs/super.c
··· 2151 2151 extent_map_exit(); 2152 2152 extent_io_exit(); 2153 2153 btrfs_interface_exit(); 2154 + btrfs_end_io_wq_exit(); 2154 2155 unregister_filesystem(&btrfs_fs_type); 2155 2156 btrfs_exit_sysfs(); 2156 2157 btrfs_cleanup_fs_uuids();
+1 -1
fs/btrfs/tree-log.c
··· 672 672 * is this extent already allocated in the extent 673 673 * allocation tree? If so, just add a reference 674 674 */ 675 - ret = btrfs_lookup_extent(root, ins.objectid, 675 + ret = btrfs_lookup_data_extent(root, ins.objectid, 676 676 ins.offset); 677 677 if (ret == 0) { 678 678 ret = btrfs_inc_extent_ref(trans, root,
+1 -1
fs/ceph/caps.c
··· 2638 2638 2639 2639 for (i = 0; i < CEPH_CAP_BITS; i++) 2640 2640 if ((dirty & (1 << i)) && 2641 - flush_tid == ci->i_cap_flush_tid[i]) 2641 + (u16)flush_tid == ci->i_cap_flush_tid[i]) 2642 2642 cleaned |= 1 << i; 2643 2643 2644 2644 dout("handle_cap_flush_ack inode %p mds%d seq %d on %s cleaned %s,"
+2 -22
fs/isofs/inode.c
··· 29 29 #define BEQUIET 30 30 31 31 static int isofs_hashi(const struct dentry *parent, struct qstr *qstr); 32 - static int isofs_hash(const struct dentry *parent, struct qstr *qstr); 33 32 static int isofs_dentry_cmpi(const struct dentry *parent, 34 - const struct dentry *dentry, 35 - unsigned int len, const char *str, const struct qstr *name); 36 - static int isofs_dentry_cmp(const struct dentry *parent, 37 33 const struct dentry *dentry, 38 34 unsigned int len, const char *str, const struct qstr *name); 39 35 ··· 130 134 131 135 132 136 static const struct dentry_operations isofs_dentry_ops[] = { 133 - { 134 - .d_hash = isofs_hash, 135 - .d_compare = isofs_dentry_cmp, 136 - }, 137 137 { 138 138 .d_hash = isofs_hashi, 139 139 .d_compare = isofs_dentry_cmpi, ··· 250 258 } 251 259 252 260 static int 253 - isofs_hash(const struct dentry *dentry, struct qstr *qstr) 254 - { 255 - return isofs_hash_common(qstr, 0); 256 - } 257 - 258 - static int 259 261 isofs_hashi(const struct dentry *dentry, struct qstr *qstr) 260 262 { 261 263 return isofs_hashi_common(qstr, 0); 262 - } 263 - 264 - static int 265 - isofs_dentry_cmp(const struct dentry *parent, const struct dentry *dentry, 266 - unsigned int len, const char *str, const struct qstr *name) 267 - { 268 - return isofs_dentry_cmp_common(len, str, name, 0, 0); 269 264 } 270 265 271 266 static int ··· 909 930 if (opt.check == 'r') 910 931 table++; 911 932 912 - s->s_d_op = &isofs_dentry_ops[table]; 933 + if (table) 934 + s->s_d_op = &isofs_dentry_ops[table - 1]; 913 935 914 936 /* get the root dentry */ 915 937 s->s_root = d_make_root(inode);
+4 -18
fs/isofs/namei.c
··· 18 18 isofs_cmp(struct dentry *dentry, const char *compare, int dlen) 19 19 { 20 20 struct qstr qstr; 21 - 22 - if (!compare) 23 - return 1; 24 - 25 - /* check special "." and ".." files */ 26 - if (dlen == 1) { 27 - /* "." */ 28 - if (compare[0] == 0) { 29 - if (!dentry->d_name.len) 30 - return 0; 31 - compare = "."; 32 - } else if (compare[0] == 1) { 33 - compare = ".."; 34 - dlen = 2; 35 - } 36 - } 37 - 38 21 qstr.name = compare; 39 22 qstr.len = dlen; 23 + if (likely(!dentry->d_op)) 24 + return dentry->d_name.len != dlen || memcmp(dentry->d_name.name, compare, dlen); 40 25 return dentry->d_op->d_compare(NULL, NULL, dentry->d_name.len, dentry->d_name.name, &qstr); 41 26 } 42 27 ··· 131 146 (!(de->flags[-sbi->s_high_sierra] & 1))) && 132 147 (sbi->s_showassoc || 133 148 (!(de->flags[-sbi->s_high_sierra] & 4)))) { 134 - match = (isofs_cmp(dentry, dpnt, dlen) == 0); 149 + if (dpnt && (dlen > 1 || dpnt[0] > 1)) 150 + match = (isofs_cmp(dentry, dpnt, dlen) == 0); 135 151 } 136 152 if (match) { 137 153 isofs_normalize_block_and_offset(de,
+1 -1
fs/namei.c
··· 2497 2497 } 2498 2498 2499 2499 mutex_lock_nested(&p1->d_inode->i_mutex, I_MUTEX_PARENT); 2500 - mutex_lock_nested(&p2->d_inode->i_mutex, I_MUTEX_CHILD); 2500 + mutex_lock_nested(&p2->d_inode->i_mutex, I_MUTEX_PARENT2); 2501 2501 return NULL; 2502 2502 } 2503 2503 EXPORT_SYMBOL(lock_rename);
+21 -15
fs/notify/fsnotify.c
··· 229 229 &fsnotify_mark_srcu); 230 230 } 231 231 232 + /* 233 + * We need to merge inode & vfsmount mark lists so that inode mark 234 + * ignore masks are properly reflected for mount mark notifications. 235 + * That's why this traversal is so complicated... 236 + */ 232 237 while (inode_node || vfsmount_node) { 233 - inode_group = vfsmount_group = NULL; 238 + inode_group = NULL; 239 + inode_mark = NULL; 240 + vfsmount_group = NULL; 241 + vfsmount_mark = NULL; 234 242 235 243 if (inode_node) { 236 244 inode_mark = hlist_entry(srcu_dereference(inode_node, &fsnotify_mark_srcu), ··· 252 244 vfsmount_group = vfsmount_mark->group; 253 245 } 254 246 255 - if (inode_group > vfsmount_group) { 256 - /* handle inode */ 257 - ret = send_to_group(to_tell, inode_mark, NULL, mask, 258 - data, data_is, cookie, file_name); 259 - /* we didn't use the vfsmount_mark */ 260 - vfsmount_group = NULL; 261 - } else if (vfsmount_group > inode_group) { 262 - ret = send_to_group(to_tell, NULL, vfsmount_mark, mask, 263 - data, data_is, cookie, file_name); 264 - inode_group = NULL; 265 - } else { 266 - ret = send_to_group(to_tell, inode_mark, vfsmount_mark, 267 - mask, data, data_is, cookie, 268 - file_name); 247 + if (inode_group && vfsmount_group) { 248 + int cmp = fsnotify_compare_groups(inode_group, 249 + vfsmount_group); 250 + if (cmp > 0) { 251 + inode_group = NULL; 252 + inode_mark = NULL; 253 + } else if (cmp < 0) { 254 + vfsmount_group = NULL; 255 + vfsmount_mark = NULL; 256 + } 269 257 } 258 + ret = send_to_group(to_tell, inode_mark, vfsmount_mark, mask, 259 + data, data_is, cookie, file_name); 270 260 271 261 if (ret && (mask & ALL_FSNOTIFY_PERM_EVENTS)) 272 262 goto out;
+4
fs/notify/fsnotify.h
··· 12 12 /* protects reads of inode and vfsmount marks list */ 13 13 extern struct srcu_struct fsnotify_mark_srcu; 14 14 15 + /* compare two groups for sorting of marks lists */ 16 + extern int fsnotify_compare_groups(struct fsnotify_group *a, 17 + struct fsnotify_group *b); 18 + 15 19 extern void fsnotify_set_inode_mark_mask_locked(struct fsnotify_mark *fsn_mark, 16 20 __u32 mask); 17 21 /* add a mark to an inode */
+3 -5
fs/notify/inode_mark.c
··· 194 194 { 195 195 struct fsnotify_mark *lmark, *last = NULL; 196 196 int ret = 0; 197 + int cmp; 197 198 198 199 mark->flags |= FSNOTIFY_MARK_FLAG_INODE; 199 200 ··· 220 219 goto out; 221 220 } 222 221 223 - if (mark->group->priority < lmark->group->priority) 224 - continue; 225 - 226 - if ((mark->group->priority == lmark->group->priority) && 227 - (mark->group < lmark->group)) 222 + cmp = fsnotify_compare_groups(lmark->group, mark->group); 223 + if (cmp < 0) 228 224 continue; 229 225 230 226 hlist_add_before_rcu(&mark->i.i_list, &lmark->i.i_list);
+36
fs/notify/mark.c
··· 210 210 } 211 211 212 212 /* 213 + * Sorting function for lists of fsnotify marks. 214 + * 215 + * Fanotify supports different notification classes (reflected as priority of 216 + * notification group). Events shall be passed to notification groups in 217 + * decreasing priority order. To achieve this marks in notification lists for 218 + * inodes and vfsmounts are sorted so that priorities of corresponding groups 219 + * are descending. 220 + * 221 + * Furthermore correct handling of the ignore mask requires processing inode 222 + * and vfsmount marks of each group together. Using the group address as 223 + * further sort criterion provides a unique sorting order and thus we can 224 + * merge inode and vfsmount lists of marks in linear time and find groups 225 + * present in both lists. 226 + * 227 + * A return value of 1 signifies that b has priority over a. 228 + * A return value of 0 signifies that the two marks have to be handled together. 229 + * A return value of -1 signifies that a has priority over b. 230 + */ 231 + int fsnotify_compare_groups(struct fsnotify_group *a, struct fsnotify_group *b) 232 + { 233 + if (a == b) 234 + return 0; 235 + if (!a) 236 + return 1; 237 + if (!b) 238 + return -1; 239 + if (a->priority < b->priority) 240 + return 1; 241 + if (a->priority > b->priority) 242 + return -1; 243 + if (a < b) 244 + return 1; 245 + return -1; 246 + } 247 + 248 + /* 213 249 * Attach an initialized mark to a given group and fs object. 214 250 * These marks may be used for the fsnotify backend to determine which 215 251 * event types should be delivered to which group.
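`fsnotify_compare_groups()` above defines a strict total order — priority first, then the group address as tie-break — which is what lets the inode and vfsmount mark lists be merged like two sorted streams in the traversal patched earlier. A userspace sketch of the comparator with the same contract (toy `struct group`, kernel groups carry much more state):

```c
#include <stddef.h>

/* Toy stand-in for struct fsnotify_group: only the priority matters here. */
struct group {
	int priority;
};

/*
 * Same contract as the patch's fsnotify_compare_groups():
 *   1  -> b has priority over a
 *   0  -> same group, marks must be handled together
 *  -1  -> a has priority over b
 * Higher numeric priority sorts first; ties are broken by address so the
 * order is total and the two mark lists merge in linear time.
 */
static int compare_groups(const struct group *a, const struct group *b)
{
	if (a == b)
		return 0;
	if (!a)
		return 1;
	if (!b)
		return -1;
	if (a->priority < b->priority)
		return 1;
	if (a->priority > b->priority)
		return -1;
	return (a < b) ? 1 : -1;
}
```

Because NULL loses to any group, the merge loop can treat "list exhausted" as an always-lower-priority entry, which keeps the traversal in fsnotify.c free of special cases.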
+3 -5
fs/notify/vfsmount_mark.c
··· 153 153 struct mount *m = real_mount(mnt); 154 154 struct fsnotify_mark *lmark, *last = NULL; 155 155 int ret = 0; 156 + int cmp; 156 157 157 158 mark->flags |= FSNOTIFY_MARK_FLAG_VFSMOUNT; 158 159 ··· 179 178 goto out; 180 179 } 181 180 182 - if (mark->group->priority < lmark->group->priority) 183 - continue; 184 - 185 - if ((mark->group->priority == lmark->group->priority) && 186 - (mark->group < lmark->group)) 181 + cmp = fsnotify_compare_groups(lmark->group, mark->group); 182 + if (cmp < 0) 187 183 continue; 188 184 189 185 hlist_add_before_rcu(&mark->m.m_list, &lmark->m.m_list);
+1 -1
fs/ocfs2/cluster/tcp.c
··· 925 925 size_t veclen, size_t total) 926 926 { 927 927 int ret; 928 - struct msghdr msg; 928 + struct msghdr msg = {.msg_flags = 0,}; 929 929 930 930 if (sock == NULL) { 931 931 ret = -EINVAL;
+11 -8
fs/overlayfs/readdir.c
···
21 21 unsigned int len;
22 22 unsigned int type;
23 23 u64 ino;
24 - bool is_whiteout;
25 24 struct list_head l_node;
26 25 struct rb_node node;
26 + bool is_whiteout;
27 + bool is_cursor;
27 28 char name[];
28 29 };
29 30 
···
93 92 p->type = d_type;
94 93 p->ino = ino;
95 94 p->is_whiteout = false;
95 + p->is_cursor = false;
96 96 }
97 97 
98 98 return p;
···
168 166 {
169 167 struct ovl_dir_cache *cache = od->cache;
170 168 
171 - list_del(&od->cursor.l_node);
169 + list_del_init(&od->cursor.l_node);
172 170 WARN_ON(cache->refcount <= 0);
173 171 cache->refcount--;
174 172 if (!cache->refcount) {
···
253 251 
254 252 mutex_lock(&dir->d_inode->i_mutex);
255 253 list_for_each_entry(p, rdd->list, l_node) {
256 - if (!p->name)
254 + if (p->is_cursor)
257 255 continue;
258 256 
259 257 if (p->type != DT_CHR)
···
309 307 }
310 308 out:
311 309 return err;
312 - 
313 310 }
314 311 
315 312 static void ovl_seek_cursor(struct ovl_dir_file *od, loff_t pos)
···
317 316 loff_t off = 0;
318 317 
319 318 list_for_each_entry(p, &od->cache->entries, l_node) {
320 - if (!p->name)
319 + if (p->is_cursor)
321 320 continue;
322 321 if (off >= pos)
323 322 break;
···
390 389 
391 390 p = list_entry(od->cursor.l_node.next, struct ovl_cache_entry, l_node);
392 391 /* Skip cursors */
393 - if (p->name) {
392 + if (!p->is_cursor) {
394 393 if (!p->is_whiteout) {
395 394 if (!dir_emit(ctx, p->name, p->len, p->ino, p->type))
396 395 break;
···
455 454 if (!od->is_upper && ovl_path_type(dentry) == OVL_PATH_MERGE) {
456 455 struct inode *inode = file_inode(file);
457 456 
458 - realfile = od->upperfile;
457 + realfile = lockless_dereference(od->upperfile);
459 458 if (!realfile) {
460 459 struct path upperpath;
461 460 
462 461 ovl_path_upper(dentry, &upperpath);
463 462 realfile = ovl_path_open(&upperpath, O_RDONLY);
463 + smp_mb__before_spinlock();
464 464 mutex_lock(&inode->i_mutex);
465 465 if (!od->upperfile) {
466 466 if (IS_ERR(realfile)) {
···
520 518 od->realfile = realfile;
521 519 
od->is_real = (type != OVL_PATH_MERGE); 522 520 od->is_upper = (type != OVL_PATH_LOWER); 521 + od->cursor.is_cursor = true; 523 522 file->private_data = od; 524 523 525 524 return 0; ··· 572 569 { 573 570 struct ovl_cache_entry *p; 574 571 575 - mutex_lock_nested(&upper->d_inode->i_mutex, I_MUTEX_PARENT); 572 + mutex_lock_nested(&upper->d_inode->i_mutex, I_MUTEX_CHILD); 576 573 list_for_each_entry(p, list, l_node) { 577 574 struct dentry *dentry; 578 575
+20 -52
fs/xfs/xfs_bmap_util.c
··· 1338 1338 goto out; 1339 1339 } 1340 1340 1341 - 1341 + /* 1342 + * Preallocate and zero a range of a file. This mechanism has the allocation 1343 + * semantics of fallocate and in addition converts data in the range to zeroes. 1344 + */ 1342 1345 int 1343 1346 xfs_zero_file_space( 1344 1347 struct xfs_inode *ip, ··· 1349 1346 xfs_off_t len) 1350 1347 { 1351 1348 struct xfs_mount *mp = ip->i_mount; 1352 - uint granularity; 1353 - xfs_off_t start_boundary; 1354 - xfs_off_t end_boundary; 1349 + uint blksize; 1355 1350 int error; 1356 1351 1357 1352 trace_xfs_zero_file_space(ip); 1358 1353 1359 - granularity = max_t(uint, 1 << mp->m_sb.sb_blocklog, PAGE_CACHE_SIZE); 1354 + blksize = 1 << mp->m_sb.sb_blocklog; 1360 1355 1361 1356 /* 1362 - * Round the range of extents we are going to convert inwards. If the 1363 - * offset is aligned, then it doesn't get changed so we zero from the 1364 - * start of the block offset points to. 1357 + * Punch a hole and prealloc the range. We use hole punch rather than 1358 + * unwritten extent conversion for two reasons: 1359 + * 1360 + * 1.) Hole punch handles partial block zeroing for us. 1361 + * 1362 + * 2.) If prealloc returns ENOSPC, the file range is still zero-valued 1363 + * by virtue of the hole punch. 1365 1364 */ 1366 - start_boundary = round_up(offset, granularity); 1367 - end_boundary = round_down(offset + len, granularity); 1365 + error = xfs_free_file_space(ip, offset, len); 1366 + if (error) 1367 + goto out; 1368 1368 1369 - ASSERT(start_boundary >= offset); 1370 - ASSERT(end_boundary <= offset + len); 1371 - 1372 - if (start_boundary < end_boundary - 1) { 1373 - /* 1374 - * Writeback the range to ensure any inode size updates due to 1375 - * appending writes make it to disk (otherwise we could just 1376 - * punch out the delalloc blocks). 
1377 - */ 1378 - error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping, 1379 - start_boundary, end_boundary - 1); 1380 - if (error) 1381 - goto out; 1382 - truncate_pagecache_range(VFS_I(ip), start_boundary, 1383 - end_boundary - 1); 1384 - 1385 - /* convert the blocks */ 1386 - error = xfs_alloc_file_space(ip, start_boundary, 1387 - end_boundary - start_boundary - 1, 1388 - XFS_BMAPI_PREALLOC | XFS_BMAPI_CONVERT); 1389 - if (error) 1390 - goto out; 1391 - 1392 - /* We've handled the interior of the range, now for the edges */ 1393 - if (start_boundary != offset) { 1394 - error = xfs_iozero(ip, offset, start_boundary - offset); 1395 - if (error) 1396 - goto out; 1397 - } 1398 - 1399 - if (end_boundary != offset + len) 1400 - error = xfs_iozero(ip, end_boundary, 1401 - offset + len - end_boundary); 1402 - 1403 - } else { 1404 - /* 1405 - * It's either a sub-granularity range or the range spanned lies 1406 - * partially across two adjacent blocks. 1407 - */ 1408 - error = xfs_iozero(ip, offset, len); 1409 - } 1410 - 1369 + error = xfs_alloc_file_space(ip, round_down(offset, blksize), 1370 + round_up(offset + len, blksize) - 1371 + round_down(offset, blksize), 1372 + XFS_BMAPI_PREALLOC); 1411 1373 out: 1412 1374 return error; 1413 1375
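The rewritten `xfs_zero_file_space()` leans on two operations: punch the exact byte range (hole punch zeroes partial blocks for free), then preallocate the range rounded outward to filesystem block boundaries. The rounding arithmetic the hunk performs with `round_down()`/`round_up()` can be sketched as (helper names illustrative; the kernel macros assume a power-of-two alignment, as here):

```c
#include <stdint.h>

/* round down/up to an alignment, as the patch does with the fs block size */
static uint64_t rdown(uint64_t x, uint64_t a) { return x - (x % a); }
static uint64_t rup(uint64_t x, uint64_t a)   { return rdown(x + a - 1, a); }

/* compute the preallocation extent covering the byte range
 * [offset, offset + len), expanded to whole blocks */
static void prealloc_range(uint64_t offset, uint64_t len, uint64_t blksize,
			   uint64_t *start, uint64_t *count)
{
	*start = rdown(offset, blksize);
	*count = rup(offset + len, blksize) - *start;
}
```

Rounding outward is safe only because the hole punch already zeroed the exact byte range: any extra head/tail the preallocation covers reads back as zeroes either way.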
+125 -131
fs/xfs/xfs_itable.c
··· 236 236 XFS_WANT_CORRUPTED_RETURN(stat == 1); 237 237 238 238 /* Check if the record contains the inode in request */ 239 - if (irec->ir_startino + XFS_INODES_PER_CHUNK <= agino) 240 - return -EINVAL; 239 + if (irec->ir_startino + XFS_INODES_PER_CHUNK <= agino) { 240 + *icount = 0; 241 + return 0; 242 + } 241 243 242 244 idx = agino - irec->ir_startino + 1; 243 245 if (idx < XFS_INODES_PER_CHUNK && ··· 264 262 265 263 #define XFS_BULKSTAT_UBLEFT(ubleft) ((ubleft) >= statstruct_size) 266 264 265 + struct xfs_bulkstat_agichunk { 266 + char __user **ac_ubuffer;/* pointer into user's buffer */ 267 + int ac_ubleft; /* bytes left in user's buffer */ 268 + int ac_ubelem; /* spaces used in user's buffer */ 269 + }; 270 + 267 271 /* 268 272 * Process inodes in chunk with a pointer to a formatter function 269 273 * that will iget the inode and fill in the appropriate structure. 270 274 */ 271 - int 275 + static int 272 276 xfs_bulkstat_ag_ichunk( 273 277 struct xfs_mount *mp, 274 278 xfs_agnumber_t agno, 275 279 struct xfs_inobt_rec_incore *irbp, 276 280 bulkstat_one_pf formatter, 277 281 size_t statstruct_size, 278 - struct xfs_bulkstat_agichunk *acp) 282 + struct xfs_bulkstat_agichunk *acp, 283 + xfs_agino_t *last_agino) 279 284 { 280 - xfs_ino_t lastino = acp->ac_lastino; 281 285 char __user **ubufp = acp->ac_ubuffer; 282 - int ubleft = acp->ac_ubleft; 283 - int ubelem = acp->ac_ubelem; 284 - int chunkidx, clustidx; 286 + int chunkidx; 285 287 int error = 0; 286 - xfs_agino_t agino; 288 + xfs_agino_t agino = irbp->ir_startino; 287 289 288 - for (agino = irbp->ir_startino, chunkidx = clustidx = 0; 289 - XFS_BULKSTAT_UBLEFT(ubleft) && 290 - irbp->ir_freecount < XFS_INODES_PER_CHUNK; 291 - chunkidx++, clustidx++, agino++) { 292 - int fmterror; /* bulkstat formatter result */ 290 + for (chunkidx = 0; chunkidx < XFS_INODES_PER_CHUNK; 291 + chunkidx++, agino++) { 292 + int fmterror; 293 293 int ubused; 294 - xfs_ino_t ino = XFS_AGINO_TO_INO(mp, agno, agino); 295 294 296 - 
ASSERT(chunkidx < XFS_INODES_PER_CHUNK); 295 + /* inode won't fit in buffer, we are done */ 296 + if (acp->ac_ubleft < statstruct_size) 297 + break; 297 298 298 299 /* Skip if this inode is free */ 299 - if (XFS_INOBT_MASK(chunkidx) & irbp->ir_free) { 300 - lastino = ino; 300 + if (XFS_INOBT_MASK(chunkidx) & irbp->ir_free) 301 301 continue; 302 - } 303 - 304 - /* 305 - * Count used inodes as free so we can tell when the 306 - * chunk is used up. 307 - */ 308 - irbp->ir_freecount++; 309 302 310 303 /* Get the inode and fill in a single buffer */ 311 304 ubused = statstruct_size; 312 - error = formatter(mp, ino, *ubufp, ubleft, &ubused, &fmterror); 313 - if (fmterror == BULKSTAT_RV_NOTHING) { 314 - if (error && error != -ENOENT && error != -EINVAL) { 315 - ubleft = 0; 316 - break; 317 - } 318 - lastino = ino; 319 - continue; 320 - } 321 - if (fmterror == BULKSTAT_RV_GIVEUP) { 322 - ubleft = 0; 305 + error = formatter(mp, XFS_AGINO_TO_INO(mp, agno, agino), 306 + *ubufp, acp->ac_ubleft, &ubused, &fmterror); 307 + 308 + if (fmterror == BULKSTAT_RV_GIVEUP || 309 + (error && error != -ENOENT && error != -EINVAL)) { 310 + acp->ac_ubleft = 0; 323 311 ASSERT(error); 324 312 break; 325 313 } 326 - if (*ubufp) 327 - *ubufp += ubused; 328 - ubleft -= ubused; 329 - ubelem++; 330 - lastino = ino; 314 + 315 + /* be careful not to leak error if at end of chunk */ 316 + if (fmterror == BULKSTAT_RV_NOTHING || error) { 317 + error = 0; 318 + continue; 319 + } 320 + 321 + *ubufp += ubused; 322 + acp->ac_ubleft -= ubused; 323 + acp->ac_ubelem++; 331 324 } 332 325 333 - acp->ac_lastino = lastino; 334 - acp->ac_ubleft = ubleft; 335 - acp->ac_ubelem = ubelem; 326 + /* 327 + * Post-update *last_agino. At this point, agino will always point one 328 + * inode past the last inode we processed successfully. Hence we 329 + * substract that inode when setting the *last_agino cursor so that we 330 + * return the correct cookie to userspace. 
On the next bulkstat call, 331 + * the inode under the lastino cookie will be skipped as we have already 332 + * processed it here. 333 + */ 334 + *last_agino = agino - 1; 336 335 337 336 return error; 338 337 } ··· 356 353 xfs_agino_t agino; /* inode # in allocation group */ 357 354 xfs_agnumber_t agno; /* allocation group number */ 358 355 xfs_btree_cur_t *cur; /* btree cursor for ialloc btree */ 359 - int end_of_ag; /* set if we've seen the ag end */ 360 - int error; /* error code */ 361 - int fmterror;/* bulkstat formatter result */ 362 - int i; /* loop index */ 363 - int icount; /* count of inodes good in irbuf */ 364 356 size_t irbsize; /* size of irec buffer in bytes */ 365 - xfs_ino_t ino; /* inode number (filesystem) */ 366 - xfs_inobt_rec_incore_t *irbp; /* current irec buffer pointer */ 367 357 xfs_inobt_rec_incore_t *irbuf; /* start of irec buffer */ 368 - xfs_inobt_rec_incore_t *irbufend; /* end of good irec buffer entries */ 369 - xfs_ino_t lastino; /* last inode number returned */ 370 358 int nirbuf; /* size of irbuf */ 371 - int rval; /* return value error code */ 372 - int tmp; /* result value from btree calls */ 373 359 int ubcount; /* size of user's buffer */ 374 - int ubleft; /* bytes left in user's buffer */ 375 - char __user *ubufp; /* pointer into user's buffer */ 376 - int ubelem; /* spaces used in user's buffer */ 360 + struct xfs_bulkstat_agichunk ac; 361 + int error = 0; 377 362 378 363 /* 379 364 * Get the last inode value, see if there's nothing to do. 
380 365 */ 381 - ino = (xfs_ino_t)*lastinop; 382 - lastino = ino; 383 - agno = XFS_INO_TO_AGNO(mp, ino); 384 - agino = XFS_INO_TO_AGINO(mp, ino); 366 + agno = XFS_INO_TO_AGNO(mp, *lastinop); 367 + agino = XFS_INO_TO_AGINO(mp, *lastinop); 385 368 if (agno >= mp->m_sb.sb_agcount || 386 - ino != XFS_AGINO_TO_INO(mp, agno, agino)) { 369 + *lastinop != XFS_AGINO_TO_INO(mp, agno, agino)) { 387 370 *done = 1; 388 371 *ubcountp = 0; 389 372 return 0; 390 373 } 391 374 392 375 ubcount = *ubcountp; /* statstruct's */ 393 - ubleft = ubcount * statstruct_size; /* bytes */ 394 - *ubcountp = ubelem = 0; 376 + ac.ac_ubuffer = &ubuffer; 377 + ac.ac_ubleft = ubcount * statstruct_size; /* bytes */; 378 + ac.ac_ubelem = 0; 379 + 380 + *ubcountp = 0; 395 381 *done = 0; 396 - fmterror = 0; 397 - ubufp = ubuffer; 382 + 398 383 irbuf = kmem_zalloc_greedy(&irbsize, PAGE_SIZE, PAGE_SIZE * 4); 399 384 if (!irbuf) 400 385 return -ENOMEM; ··· 393 402 * Loop over the allocation groups, starting from the last 394 403 * inode returned; 0 means start of the allocation group. 
395 404 */ 396 - rval = 0; 397 - while (XFS_BULKSTAT_UBLEFT(ubleft) && agno < mp->m_sb.sb_agcount) { 398 - cond_resched(); 405 + while (agno < mp->m_sb.sb_agcount) { 406 + struct xfs_inobt_rec_incore *irbp = irbuf; 407 + struct xfs_inobt_rec_incore *irbufend = irbuf + nirbuf; 408 + bool end_of_ag = false; 409 + int icount = 0; 410 + int stat; 411 + 399 412 error = xfs_ialloc_read_agi(mp, NULL, agno, &agbp); 400 413 if (error) 401 414 break; ··· 409 414 */ 410 415 cur = xfs_inobt_init_cursor(mp, NULL, agbp, agno, 411 416 XFS_BTNUM_INO); 412 - irbp = irbuf; 413 - irbufend = irbuf + nirbuf; 414 - end_of_ag = 0; 415 - icount = 0; 416 417 if (agino > 0) { 417 418 /* 418 419 * In the middle of an allocation group, we need to get ··· 418 427 419 428 error = xfs_bulkstat_grab_ichunk(cur, agino, &icount, &r); 420 429 if (error) 421 - break; 430 + goto del_cursor; 422 431 if (icount) { 423 432 irbp->ir_startino = r.ir_startino; 424 433 irbp->ir_freecount = r.ir_freecount; 425 434 irbp->ir_free = r.ir_free; 426 435 irbp++; 427 - agino = r.ir_startino + XFS_INODES_PER_CHUNK; 428 436 } 429 437 /* Increment to the next record */ 430 - error = xfs_btree_increment(cur, 0, &tmp); 438 + error = xfs_btree_increment(cur, 0, &stat); 431 439 } else { 432 440 /* Start of ag. 
Lookup the first inode chunk */ 433 - error = xfs_inobt_lookup(cur, 0, XFS_LOOKUP_GE, &tmp); 441 + error = xfs_inobt_lookup(cur, 0, XFS_LOOKUP_GE, &stat); 434 442 } 435 - if (error) 436 - break; 443 + if (error || stat == 0) { 444 + end_of_ag = true; 445 + goto del_cursor; 446 + } 437 447 438 448 /* 439 449 * Loop through inode btree records in this ag, ··· 443 451 while (irbp < irbufend && icount < ubcount) { 444 452 struct xfs_inobt_rec_incore r; 445 453 446 - error = xfs_inobt_get_rec(cur, &r, &i); 447 - if (error || i == 0) { 448 - end_of_ag = 1; 449 - break; 454 + error = xfs_inobt_get_rec(cur, &r, &stat); 455 + if (error || stat == 0) { 456 + end_of_ag = true; 457 + goto del_cursor; 450 458 } 451 459 452 460 /* ··· 461 469 irbp++; 462 470 icount += XFS_INODES_PER_CHUNK - r.ir_freecount; 463 471 } 464 - /* 465 - * Set agino to after this chunk and bump the cursor. 466 - */ 467 - agino = r.ir_startino + XFS_INODES_PER_CHUNK; 468 - error = xfs_btree_increment(cur, 0, &tmp); 472 + error = xfs_btree_increment(cur, 0, &stat); 473 + if (error || stat == 0) { 474 + end_of_ag = true; 475 + goto del_cursor; 476 + } 469 477 cond_resched(); 470 478 } 479 + 471 480 /* 472 - * Drop the btree buffers and the agi buffer. 473 - * We can't hold any of the locks these represent 474 - * when calling iget. 481 + * Drop the btree buffers and the agi buffer as we can't hold any 482 + * of the locks these represent when calling iget. If there is a 483 + * pending error, then we are done. 475 484 */ 485 + del_cursor: 476 486 xfs_btree_del_cursor(cur, XFS_BTREE_NOERROR); 477 487 xfs_buf_relse(agbp); 488 + if (error) 489 + break; 478 490 /* 479 - * Now format all the good inodes into the user's buffer. 491 + * Now format all the good inodes into the user's buffer. The 492 + * call to xfs_bulkstat_ag_ichunk() sets up the agino pointer 493 + * for the next loop iteration. 
480 494 */ 481 495 irbufend = irbp; 482 496 for (irbp = irbuf; 483 - irbp < irbufend && XFS_BULKSTAT_UBLEFT(ubleft); irbp++) { 484 - struct xfs_bulkstat_agichunk ac; 485 - 486 - ac.ac_lastino = lastino; 487 - ac.ac_ubuffer = &ubuffer; 488 - ac.ac_ubleft = ubleft; 489 - ac.ac_ubelem = ubelem; 497 + irbp < irbufend && ac.ac_ubleft >= statstruct_size; 498 + irbp++) { 490 499 error = xfs_bulkstat_ag_ichunk(mp, agno, irbp, 491 - formatter, statstruct_size, &ac); 500 + formatter, statstruct_size, &ac, 501 + &agino); 492 502 if (error) 493 - rval = error; 494 - 495 - lastino = ac.ac_lastino; 496 - ubleft = ac.ac_ubleft; 497 - ubelem = ac.ac_ubelem; 503 + break; 498 504 499 505 cond_resched(); 500 506 } 507 + 501 508 /* 502 - * Set up for the next loop iteration. 509 + * If we've run out of space or had a formatting error, we 510 + * are now done 503 511 */ 504 - if (XFS_BULKSTAT_UBLEFT(ubleft)) { 505 - if (end_of_ag) { 506 - agno++; 507 - agino = 0; 508 - } else 509 - agino = XFS_INO_TO_AGINO(mp, lastino); 510 - } else 512 + if (ac.ac_ubleft < statstruct_size || error) 511 513 break; 514 + 515 + if (end_of_ag) { 516 + agno++; 517 + agino = 0; 518 + } 512 519 } 513 520 /* 514 521 * Done, we're either out of filesystem or space to put the data. 515 522 */ 516 523 kmem_free(irbuf); 517 - *ubcountp = ubelem; 518 - /* 519 - * Found some inodes, return them now and return the error next time. 520 - */ 521 - if (ubelem) 522 - rval = 0; 523 - if (agno >= mp->m_sb.sb_agcount) { 524 - /* 525 - * If we ran out of filesystem, mark lastino as off 526 - * the end of the filesystem, so the next call 527 - * will return immediately. 528 - */ 529 - *lastinop = (xfs_ino_t)XFS_AGINO_TO_INO(mp, agno, 0); 530 - *done = 1; 531 - } else 532 - *lastinop = (xfs_ino_t)lastino; 524 + *ubcountp = ac.ac_ubelem; 533 525 534 - return rval; 526 + /* 527 + * We found some inodes, so clear the error status and return them. 
528 + * The lastino pointer will point directly at the inode that triggered 529 + * any error that occurred, so on the next call the error will be 530 + * triggered again and propagated to userspace as there will be no 531 + * formatted inodes in the buffer. 532 + */ 533 + if (ac.ac_ubelem) 534 + error = 0; 535 + 536 + /* 537 + * If we ran out of filesystem, lastino will point off the end of 538 + * the filesystem so the next call will return immediately. 539 + */ 540 + *lastinop = XFS_AGINO_TO_INO(mp, agno, agino); 541 + if (agno >= mp->m_sb.sb_agcount) 542 + *done = 1; 543 + 544 + return error; 535 545 } 536 546 537 547 int
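The reworked chunk formatter above maintains one invariant: on exit, `agino` points one inode past the last one processed, so `*last_agino = agino - 1` is a cookie the next bulkstat call can resume from. A toy model of that loop — free-bitmap skip, fixed-size output slots, post-updated cursor — with all names and sizes illustrative rather than XFS's:

```c
#include <stdint.h>

#define INODES_PER_CHUNK 64

/* Walk one chunk's free-inode bitmap, "format" each allocated inode into
 * an output slot while space remains, and post-update the cursor to the
 * last inode actually processed, so a resumed call skips everything
 * already returned. Returns the number of slots used. */
static int walk_chunk(uint64_t startino, uint64_t free_mask,
		      uint64_t *out, int slots, uint64_t *last)
{
	int used = 0;
	uint64_t agino = startino;

	for (int idx = 0; idx < INODES_PER_CHUNK; idx++, agino++) {
		if (used >= slots)              /* no room for another record */
			break;
		if (free_mask & (1ULL << idx))  /* skip free inodes */
			continue;
		out[used++] = agino;            /* stand-in for the formatter */
	}
	*last = agino - 1;  /* cookie: one past last processed, minus one */
	return used;
}
```

Because the loop breaks *before* consuming the inode that no longer fits, `agino - 1` always names the last inode that really made it into the buffer — the property the patch's comment is careful to spell out.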
-16
fs/xfs/xfs_itable.h
··· 30 30 int *ubused, 31 31 int *stat); 32 32 33 - struct xfs_bulkstat_agichunk { 34 - xfs_ino_t ac_lastino; /* last inode returned */ 35 - char __user **ac_ubuffer;/* pointer into user's buffer */ 36 - int ac_ubleft; /* bytes left in user's buffer */ 37 - int ac_ubelem; /* spaces used in user's buffer */ 38 - }; 39 - 40 - int 41 - xfs_bulkstat_ag_ichunk( 42 - struct xfs_mount *mp, 43 - xfs_agnumber_t agno, 44 - struct xfs_inobt_rec_incore *irbp, 45 - bulkstat_one_pf formatter, 46 - size_t statstruct_size, 47 - struct xfs_bulkstat_agichunk *acp); 48 - 49 33 /* 50 34 * Values for stat return value. 51 35 */
-1
include/drm/drm_pciids.h
··· 74 74 {0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \ 75 75 {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \ 76 76 {0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV250|RADEON_IS_MOBILITY}, \ 77 - {0x1002, 0x4C6E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_RV280|RADEON_IS_MOBILITY}, \ 78 77 {0x1002, 0x4E44, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \ 79 78 {0x1002, 0x4E45, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \ 80 79 {0x1002, 0x4E46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_R300}, \
+31 -8
include/dt-bindings/clock/vf610-clock.h
··· 21 21 #define VF610_CLK_FASK_CLK_SEL 8 22 22 #define VF610_CLK_AUDIO_EXT 9 23 23 #define VF610_CLK_ENET_EXT 10 24 - #define VF610_CLK_PLL1_MAIN 11 24 + #define VF610_CLK_PLL1_SYS 11 25 25 #define VF610_CLK_PLL1_PFD1 12 26 26 #define VF610_CLK_PLL1_PFD2 13 27 27 #define VF610_CLK_PLL1_PFD3 14 28 28 #define VF610_CLK_PLL1_PFD4 15 29 - #define VF610_CLK_PLL2_MAIN 16 29 + #define VF610_CLK_PLL2_BUS 16 30 30 #define VF610_CLK_PLL2_PFD1 17 31 31 #define VF610_CLK_PLL2_PFD2 18 32 32 #define VF610_CLK_PLL2_PFD3 19 33 33 #define VF610_CLK_PLL2_PFD4 20 34 - #define VF610_CLK_PLL3_MAIN 21 34 + #define VF610_CLK_PLL3_USB_OTG 21 35 35 #define VF610_CLK_PLL3_PFD1 22 36 36 #define VF610_CLK_PLL3_PFD2 23 37 37 #define VF610_CLK_PLL3_PFD3 24 38 38 #define VF610_CLK_PLL3_PFD4 25 39 - #define VF610_CLK_PLL4_MAIN 26 40 - #define VF610_CLK_PLL5_MAIN 27 41 - #define VF610_CLK_PLL6_MAIN 28 39 + #define VF610_CLK_PLL4_AUDIO 26 40 + #define VF610_CLK_PLL5_ENET 27 41 + #define VF610_CLK_PLL6_VIDEO 28 42 42 #define VF610_CLK_PLL3_MAIN_DIV 29 43 43 #define VF610_CLK_PLL4_MAIN_DIV 30 44 44 #define VF610_CLK_PLL6_MAIN_DIV 31 ··· 166 166 #define VF610_CLK_DMAMUX3 153 167 167 #define VF610_CLK_FLEXCAN0_EN 154 168 168 #define VF610_CLK_FLEXCAN1_EN 155 169 - #define VF610_CLK_PLL7_MAIN 156 169 + #define VF610_CLK_PLL7_USB_HOST 156 170 170 #define VF610_CLK_USBPHY0 157 171 171 #define VF610_CLK_USBPHY1 158 172 - #define VF610_CLK_END 159 172 + #define VF610_CLK_LVDS1_IN 159 173 + #define VF610_CLK_ANACLK1 160 174 + #define VF610_CLK_PLL1_BYPASS_SRC 161 175 + #define VF610_CLK_PLL2_BYPASS_SRC 162 176 + #define VF610_CLK_PLL3_BYPASS_SRC 163 177 + #define VF610_CLK_PLL4_BYPASS_SRC 164 178 + #define VF610_CLK_PLL5_BYPASS_SRC 165 179 + #define VF610_CLK_PLL6_BYPASS_SRC 166 180 + #define VF610_CLK_PLL7_BYPASS_SRC 167 181 + #define VF610_CLK_PLL1 168 182 + #define VF610_CLK_PLL2 169 183 + #define VF610_CLK_PLL3 170 184 + #define VF610_CLK_PLL4 171 185 + #define VF610_CLK_PLL5 172 186 + #define VF610_CLK_PLL6 173
187 + #define VF610_CLK_PLL7 174 188 + #define VF610_PLL1_BYPASS 175 189 + #define VF610_PLL2_BYPASS 176 190 + #define VF610_PLL3_BYPASS 177 191 + #define VF610_PLL4_BYPASS 178 192 + #define VF610_PLL5_BYPASS 179 193 + #define VF610_PLL6_BYPASS 180 194 + #define VF610_PLL7_BYPASS 181 195 + #define VF610_CLK_END 182 173 196 174 197 #endif /* __DT_BINDINGS_CLOCK_VF610_H */
+1 -2
include/linux/blkdev.h
··· 1136 1136 /* 1137 1137 * tag stuff 1138 1138 */ 1139 - #define blk_rq_tagged(rq) \ 1140 - ((rq)->mq_ctx || ((rq)->cmd_flags & REQ_QUEUED)) 1139 + #define blk_rq_tagged(rq) ((rq)->cmd_flags & REQ_QUEUED) 1141 1140 extern int blk_queue_start_tag(struct request_queue *, struct request *); 1142 1141 extern struct request *blk_queue_find_tag(struct request_queue *, int); 1143 1142 extern void blk_queue_end_tag(struct request_queue *, struct request *);
+1
include/linux/bootmem.h
··· 46 46 extern unsigned long init_bootmem(unsigned long addr, unsigned long memend); 47 47 48 48 extern unsigned long free_all_bootmem(void); 49 + extern void reset_node_managed_pages(pg_data_t *pgdat); 49 50 extern void reset_all_zones_managed_pages(void); 50 51 51 52 extern void free_bootmem_node(pg_data_t *pgdat,
+4 -4
include/linux/cma.h
··· 18 18 extern phys_addr_t cma_get_base(struct cma *cma); 19 19 extern unsigned long cma_get_size(struct cma *cma); 20 20 21 - extern int __init cma_declare_contiguous(phys_addr_t size, 22 - phys_addr_t base, phys_addr_t limit, 21 + extern int __init cma_declare_contiguous(phys_addr_t base, 22 + phys_addr_t size, phys_addr_t limit, 23 23 phys_addr_t alignment, unsigned int order_per_bit, 24 24 bool fixed, struct cma **res_cma); 25 - extern int cma_init_reserved_mem(phys_addr_t size, 26 - phys_addr_t base, int order_per_bit, 25 + extern int cma_init_reserved_mem(phys_addr_t base, 26 + phys_addr_t size, int order_per_bit, 27 27 struct cma **res_cma); 28 28 extern struct page *cma_alloc(struct cma *cma, int count, unsigned int align); 29 29 extern bool cma_release(struct cma *cma, struct page *pages, int count);
+7 -3
include/linux/fs.h
··· 639 639 * 2: child/target 640 640 * 3: xattr 641 641 * 4: second non-directory 642 - * The last is for certain operations (such as rename) which lock two 642 + * 5: second parent (when locking independent directories in rename) 643 + * 644 + * I_MUTEX_NONDIR2 is for certain operations (such as rename) which lock two 643 645 * non-directories at once. 644 646 * 645 647 * The locking order between these classes is 646 - * parent -> child -> normal -> xattr -> second non-directory 648 + * parent[2] -> child -> grandchild -> normal -> xattr -> second non-directory 647 649 */ 648 650 enum inode_i_mutex_lock_class 649 651 { ··· 653 651 I_MUTEX_PARENT, 654 652 I_MUTEX_CHILD, 655 653 I_MUTEX_XATTR, 656 - I_MUTEX_NONDIR2 654 + I_MUTEX_NONDIR2, 655 + I_MUTEX_PARENT2, 657 656 }; 658 657 659 658 void lock_two_nondirectories(struct inode *, struct inode*); ··· 2469 2466 extern ssize_t new_sync_write(struct file *filp, const char __user *buf, size_t len, loff_t *ppos); 2470 2467 2471 2468 /* fs/block_dev.c */ 2469 + extern ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to); 2472 2470 extern ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from); 2473 2471 extern int blkdev_fsync(struct file *filp, loff_t start, loff_t end, 2474 2472 int datasync);
+7
include/linux/mfd/max77693-private.h
··· 330 330 MAX77693_IRQ_GROUP_NR, 331 331 }; 332 332 333 + #define SRC_IRQ_CHARGER BIT(0) 334 + #define SRC_IRQ_TOP BIT(1) 335 + #define SRC_IRQ_FLASH BIT(2) 336 + #define SRC_IRQ_MUIC BIT(3) 337 + #define SRC_IRQ_ALL (SRC_IRQ_CHARGER | SRC_IRQ_TOP \ 338 + | SRC_IRQ_FLASH | SRC_IRQ_MUIC) 339 + 333 340 #define LED_IRQ_FLED2_OPEN BIT(0) 334 341 #define LED_IRQ_FLED2_SHORT BIT(1) 335 342 #define LED_IRQ_FLED1_OPEN BIT(2)
+9
include/linux/mmzone.h
··· 431 431 */ 432 432 int nr_migrate_reserve_block; 433 433 434 + #ifdef CONFIG_MEMORY_ISOLATION 435 + /* 436 + * Number of isolated pageblock. It is used to solve incorrect 437 + * freepage counting problem due to racy retrieving migratetype 438 + * of pageblock. Protected by zone->lock. 439 + */ 440 + unsigned long nr_isolate_pageblock; 441 + #endif 442 + 434 443 #ifdef CONFIG_MEMORY_HOTPLUG 435 444 /* see spanned/present_pages for more description */ 436 445 seqlock_t span_seqlock;
+3 -18
include/linux/mtd/spi-nor.h
··· 187 187 /** 188 188 * spi_nor_scan() - scan the SPI NOR 189 189 * @nor: the spi_nor structure 190 - * @id: the spi_device_id provided by the driver 190 + * @name: the chip type name 191 191 * @mode: the read mode supported by the driver 192 192 * 193 193 * The drivers can use this fuction to scan the SPI NOR. 194 194 * In the scanning, it will try to get all the necessary information to 195 195 * fill the mtd_info{} and the spi_nor{}. 196 196 * 197 - * The board may assigns a spi_device_id with @id which be used to compared with 198 - * the spi_device_id detected by the scanning. 197 + * The chip type name can be provided through the @name parameter. 199 198 * 200 199 * Return: 0 for success, others for failure. 201 200 */ 202 - int spi_nor_scan(struct spi_nor *nor, const struct spi_device_id *id, 203 - enum read_mode mode); 204 - extern const struct spi_device_id spi_nor_ids[]; 205 - 206 - /** 207 - * spi_nor_match_id() - find the spi_device_id by the name 208 - * @name: the name of the spi_device_id 209 - * 210 - * The drivers use this function to find the spi_device_id 211 - * specified by the @name. 212 - * 213 - * Return: returns the right spi_device_id pointer on success, 214 - * and returns NULL on failure. 215 - */ 216 - const struct spi_device_id *spi_nor_match_id(char *name); 201 + int spi_nor_scan(struct spi_nor *nor, const char *name, enum read_mode mode); 217 202 218 203 #endif
+70 -14
include/linux/of.h
··· 267 267 extern int of_property_read_string(struct device_node *np, 268 268 const char *propname, 269 269 const char **out_string); 270 - extern int of_property_read_string_index(struct device_node *np, 271 - const char *propname, 272 - int index, const char **output); 273 270 extern int of_property_match_string(struct device_node *np, 274 271 const char *propname, 275 272 const char *string); 276 - extern int of_property_count_strings(struct device_node *np, 277 - const char *propname); 273 + extern int of_property_read_string_helper(struct device_node *np, 274 + const char *propname, 275 + const char **out_strs, size_t sz, int index); 278 276 extern int of_device_is_compatible(const struct device_node *device, 279 277 const char *); 280 278 extern int of_device_is_available(const struct device_node *device); ··· 484 486 return -ENOSYS; 485 487 } 486 488 487 - static inline int of_property_read_string_index(struct device_node *np, 488 - const char *propname, int index, 489 - const char **out_string) 490 - { 491 - return -ENOSYS; 492 - } 493 - 494 - static inline int of_property_count_strings(struct device_node *np, 495 - const char *propname) 489 + static inline int of_property_read_string_helper(struct device_node *np, 490 + const char *propname, 491 + const char **out_strs, size_t sz, int index) 496 492 { 497 493 return -ENOSYS; 498 494 } ··· 657 665 const char *propname) 658 666 { 659 667 return of_property_count_elems_of_size(np, propname, sizeof(u64)); 668 + } 669 + 670 + /** 671 + * of_property_read_string_array() - Read an array of strings from a multiple 672 + * strings property. 673 + * @np: device node from which the property value is to be read. 674 + * @propname: name of the property to be searched. 675 + * @out_strs: output array of string pointers. 676 + * @sz: number of array elements to read. 
677 + * 678 + * Search for a property in a device tree node and retrieve a list of 679 + * terminated string values (pointer to data, not a copy) in that property. 680 + * 681 + * If @out_strs is NULL, the number of strings in the property is returned. 682 + */ 683 + static inline int of_property_read_string_array(struct device_node *np, 684 + const char *propname, const char **out_strs, 685 + size_t sz) 686 + { 687 + return of_property_read_string_helper(np, propname, out_strs, sz, 0); 688 + } 689 + 690 + /** 691 + * of_property_count_strings() - Find and return the number of strings from a 692 + * multiple strings property. 693 + * @np: device node from which the property value is to be read. 694 + * @propname: name of the property to be searched. 695 + * 696 + * Search for a property in a device tree node and retrieve the number of null 697 + * terminated string contain in it. Returns the number of strings on 698 + * success, -EINVAL if the property does not exist, -ENODATA if property 699 + * does not have a value, and -EILSEQ if the string is not null-terminated 700 + * within the length of the property data. 701 + */ 702 + static inline int of_property_count_strings(struct device_node *np, 703 + const char *propname) 704 + { 705 + return of_property_read_string_helper(np, propname, NULL, 0, 0); 706 + } 707 + 708 + /** 709 + * of_property_read_string_index() - Find and read a string from a multiple 710 + * strings property. 711 + * @np: device node from which the property value is to be read. 712 + * @propname: name of the property to be searched. 713 + * @index: index of the string in the list of strings 714 + * @out_string: pointer to null terminated return string, modified only if 715 + * return value is 0. 716 + * 717 + * Search for a property in a device tree node and retrieve a null 718 + * terminated string value (pointer to data, not a copy) in the list of strings 719 + * contained in that property. 
720 + * Returns 0 on success, -EINVAL if the property does not exist, -ENODATA if 721 + * property does not have a value, and -EILSEQ if the string is not 722 + * null-terminated within the length of the property data. 723 + * 724 + * The out_string pointer is modified only if a valid string can be decoded. 725 + */ 726 + static inline int of_property_read_string_index(struct device_node *np, 727 + const char *propname, 728 + int index, const char **output) 729 + { 730 + int rc = of_property_read_string_helper(np, propname, output, 1, index); 731 + return rc < 0 ? rc : 0; 660 732 } 661 733 662 734 /**
+8
include/linux/page-isolation.h
··· 2 2 #define __LINUX_PAGEISOLATION_H 3 3 4 4 #ifdef CONFIG_MEMORY_ISOLATION 5 + static inline bool has_isolate_pageblock(struct zone *zone) 6 + { 7 + return zone->nr_isolate_pageblock; 8 + } 5 9 static inline bool is_migrate_isolate_page(struct page *page) 6 10 { 7 11 return get_pageblock_migratetype(page) == MIGRATE_ISOLATE; ··· 15 11 return migratetype == MIGRATE_ISOLATE; 16 12 } 17 13 #else 14 + static inline bool has_isolate_pageblock(struct zone *zone) 15 + { 16 + return false; 17 + } 18 18 static inline bool is_migrate_isolate_page(struct page *page) 19 19 { 20 20 return false;
+6 -1
include/linux/pci-acpi.h
··· 41 41 42 42 if (pci_is_root_bus(pbus)) 43 43 dev = pbus->bridge; 44 - else 44 + else { 45 + /* If pbus is a virtual bus, there is no bridge to it */ 46 + if (!pbus->self) 47 + return NULL; 48 + 45 49 dev = &pbus->self->dev; 50 + } 46 51 47 52 return ACPI_HANDLE(dev); 48 53 }
+15
include/linux/rcupdate.h
··· 617 617 #define RCU_INITIALIZER(v) (typeof(*(v)) __force __rcu *)(v) 618 618 619 619 /** 620 + * lockless_dereference() - safely load a pointer for later dereference 621 + * @p: The pointer to load 622 + * 623 + * Similar to rcu_dereference(), but for situations where the pointed-to 624 + * object's lifetime is managed by something other than RCU. That 625 + * "something other" might be reference counting or simple immortality. 626 + */ 627 + #define lockless_dereference(p) \ 628 + ({ \ 629 + typeof(p) _________p1 = ACCESS_ONCE(p); \ 630 + smp_read_barrier_depends(); /* Dependency order vs. p above. */ \ 631 + (_________p1); \ 632 + }) 633 + 634 + /** 620 635 * rcu_assign_pointer() - assign to RCU-protected pointer 621 636 * @p: pointer to assign to 622 637 * @v: value to assign (publish)
+1 -1
include/linux/ring_buffer.h
··· 97 97 __ring_buffer_alloc((size), (flags), &__key); \ 98 98 }) 99 99 100 - int ring_buffer_wait(struct ring_buffer *buffer, int cpu); 100 + int ring_buffer_wait(struct ring_buffer *buffer, int cpu, bool full); 101 101 int ring_buffer_poll_wait(struct ring_buffer *buffer, int cpu, 102 102 struct file *filp, poll_table *poll_table); 103 103
+9
include/net/udp_tunnel.h
··· 100 100 return iptunnel_handle_offloads(skb, udp_csum, type); 101 101 } 102 102 103 + static inline void udp_tunnel_gro_complete(struct sk_buff *skb, int nhoff) 104 + { 105 + struct udphdr *uh; 106 + 107 + uh = (struct udphdr *)(skb->data + nhoff - sizeof(struct udphdr)); 108 + skb_shinfo(skb)->gso_type |= uh->check ? 109 + SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL; 110 + } 111 + 103 112 static inline void udp_tunnel_encap_enable(struct socket *sock) 104 113 { 105 114 #if IS_ENABLED(CONFIG_IPV6)

+4 -4
include/scsi/scsi_tcq.h
··· 67 67 if (!sdev->tagged_supported) 68 68 return; 69 69 70 - if (!shost_use_blk_mq(sdev->host) && 71 - !blk_queue_tagged(sdev->request_queue)) 70 + if (shost_use_blk_mq(sdev->host)) 71 + queue_flag_set_unlocked(QUEUE_FLAG_QUEUED, sdev->request_queue); 72 + else if (!blk_queue_tagged(sdev->request_queue)) 72 73 blk_queue_init_tags(sdev->request_queue, depth, 73 74 sdev->host->bqt); 74 75 ··· 82 81 **/ 83 82 static inline void scsi_deactivate_tcq(struct scsi_device *sdev, int depth) 84 83 { 85 - if (!shost_use_blk_mq(sdev->host) && 86 - blk_queue_tagged(sdev->request_queue)) 84 + if (blk_queue_tagged(sdev->request_queue)) 87 85 blk_queue_free_tags(sdev->request_queue); 88 86 scsi_adjust_queue_depth(sdev, 0, depth); 89 87 }
+1 -1
init/main.c
··· 544 544 static_command_line, __start___param, 545 545 __stop___param - __start___param, 546 546 -1, -1, &unknown_bootoption); 547 - if (after_dashes) 547 + if (!IS_ERR_OR_NULL(after_dashes)) 548 548 parse_args("Setting init args", after_dashes, NULL, 0, -1, -1, 549 549 set_init_arg); 550 550
+1 -1
kernel/audit.c
··· 739 739 740 740 ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_FEATURE_CHANGE); 741 741 audit_log_task_info(ab, current); 742 - audit_log_format(ab, "feature=%s old=%u new=%u old_lock=%u new_lock=%u res=%d", 742 + audit_log_format(ab, " feature=%s old=%u new=%u old_lock=%u new_lock=%u res=%d", 743 743 audit_feature_names[which], !!old_feature, !!new_feature, 744 744 !!old_lock, !!new_lock, res); 745 745 audit_log_end(ab);
+1
kernel/audit_tree.c
··· 154 154 chunk->owners[i].index = i; 155 155 } 156 156 fsnotify_init_mark(&chunk->mark, audit_tree_destroy_watch); 157 + chunk->mark.mask = FS_IN_IGNORED; 157 158 return chunk; 158 159 } 159 160
+1
kernel/panic.c
··· 244 244 * 'I' - Working around severe firmware bug. 245 245 * 'O' - Out-of-tree module has been loaded. 246 246 * 'E' - Unsigned module has been loaded. 247 + * 'L' - A soft lockup has previously occurred. 247 248 * 248 249 * The string is overwritten by the next call to print_tainted(). 249 250 */
+54 -27
kernel/trace/ring_buffer.c
··· 538 538 * ring_buffer_wait - wait for input to the ring buffer 539 539 * @buffer: buffer to wait on 540 540 * @cpu: the cpu buffer to wait on 541 + * @full: wait until a full page is available, if @cpu != RING_BUFFER_ALL_CPUS 541 542 * 542 543 * If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon 543 544 * as data is added to any of the @buffer's cpu buffers. Otherwise 544 545 * it will wait for data to be added to a specific cpu buffer. 545 546 */ 546 - int ring_buffer_wait(struct ring_buffer *buffer, int cpu) 547 + int ring_buffer_wait(struct ring_buffer *buffer, int cpu, bool full) 547 548 { 548 - struct ring_buffer_per_cpu *cpu_buffer; 549 + struct ring_buffer_per_cpu *uninitialized_var(cpu_buffer); 549 550 DEFINE_WAIT(wait); 550 551 struct rb_irq_work *work; 552 + int ret = 0; 551 553 552 554 /* 553 555 * Depending on what the caller is waiting for, either any ··· 566 564 } 567 565 568 566 569 - prepare_to_wait(&work->waiters, &wait, TASK_INTERRUPTIBLE); 567 + while (true) { 568 + prepare_to_wait(&work->waiters, &wait, TASK_INTERRUPTIBLE); 570 569 571 - /* 572 - * The events can happen in critical sections where 573 - * checking a work queue can cause deadlocks. 574 - * After adding a task to the queue, this flag is set 575 - * only to notify events to try to wake up the queue 576 - * using irq_work. 577 - * 578 - * We don't clear it even if the buffer is no longer 579 - * empty. The flag only causes the next event to run 580 - * irq_work to do the work queue wake up. The worse 581 - * that can happen if we race with !trace_empty() is that 582 - * an event will cause an irq_work to try to wake up 583 - * an empty queue. 584 - * 585 - * There's no reason to protect this flag either, as 586 - * the work queue and irq_work logic will do the necessary 587 - * synchronization for the wake ups. The only thing 588 - * that is necessary is that the wake up happens after 589 - * a task has been queued. It's OK for spurious wake ups. 
590 - */ 591 - work->waiters_pending = true; 570 + /* 571 + * The events can happen in critical sections where 572 + * checking a work queue can cause deadlocks. 573 + * After adding a task to the queue, this flag is set 574 + * only to notify events to try to wake up the queue 575 + * using irq_work. 576 + * 577 + * We don't clear it even if the buffer is no longer 578 + * empty. The flag only causes the next event to run 579 + * irq_work to do the work queue wake up. The worse 580 + * that can happen if we race with !trace_empty() is that 581 + * an event will cause an irq_work to try to wake up 582 + * an empty queue. 583 + * 584 + * There's no reason to protect this flag either, as 585 + * the work queue and irq_work logic will do the necessary 586 + * synchronization for the wake ups. The only thing 587 + * that is necessary is that the wake up happens after 588 + * a task has been queued. It's OK for spurious wake ups. 589 + */ 590 + work->waiters_pending = true; 592 591 593 - if ((cpu == RING_BUFFER_ALL_CPUS && ring_buffer_empty(buffer)) || 594 - (cpu != RING_BUFFER_ALL_CPUS && ring_buffer_empty_cpu(buffer, cpu))) 592 + if (signal_pending(current)) { 593 + ret = -EINTR; 594 + break; 595 + } 596 + 597 + if (cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) 598 + break; 599 + 600 + if (cpu != RING_BUFFER_ALL_CPUS && 601 + !ring_buffer_empty_cpu(buffer, cpu)) { 602 + unsigned long flags; 603 + bool pagebusy; 604 + 605 + if (!full) 606 + break; 607 + 608 + raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags); 609 + pagebusy = cpu_buffer->reader_page == cpu_buffer->commit_page; 610 + raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags); 611 + 612 + if (!pagebusy) 613 + break; 614 + } 615 + 595 616 schedule(); 617 + } 596 618 597 619 finish_wait(&work->waiters, &wait); 598 - return 0; 620 + 621 + return ret; 599 622 } 600 623 601 624 /**
+15 -18
kernel/trace/trace.c
··· 1076 1076 } 1077 1077 #endif /* CONFIG_TRACER_MAX_TRACE */ 1078 1078 1079 - static int wait_on_pipe(struct trace_iterator *iter) 1079 + static int wait_on_pipe(struct trace_iterator *iter, bool full) 1080 1080 { 1081 1081 /* Iterators are static, they should be filled or empty */ 1082 1082 if (trace_buffer_iter(iter, iter->cpu_file)) 1083 1083 return 0; 1084 1084 1085 - return ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file); 1085 + return ring_buffer_wait(iter->trace_buffer->buffer, iter->cpu_file, 1086 + full); 1086 1087 } 1087 1088 1088 1089 #ifdef CONFIG_FTRACE_STARTUP_TEST ··· 4435 4434 4436 4435 mutex_unlock(&iter->mutex); 4437 4436 4438 - ret = wait_on_pipe(iter); 4437 + ret = wait_on_pipe(iter, false); 4439 4438 4440 4439 mutex_lock(&iter->mutex); 4441 4440 4442 4441 if (ret) 4443 4442 return ret; 4444 - 4445 - if (signal_pending(current)) 4446 - return -EINTR; 4447 4443 } 4448 4444 4449 4445 return 1; ··· 5370 5372 goto out_unlock; 5371 5373 } 5372 5374 mutex_unlock(&trace_types_lock); 5373 - ret = wait_on_pipe(iter); 5375 + ret = wait_on_pipe(iter, false); 5374 5376 mutex_lock(&trace_types_lock); 5375 5377 if (ret) { 5376 5378 size = ret; 5377 - goto out_unlock; 5378 - } 5379 - if (signal_pending(current)) { 5380 - size = -EINTR; 5381 5379 goto out_unlock; 5382 5380 } 5383 5381 goto again; ··· 5494 5500 }; 5495 5501 struct buffer_ref *ref; 5496 5502 int entries, size, i; 5497 - ssize_t ret; 5503 + ssize_t ret = 0; 5498 5504 5499 5505 mutex_lock(&trace_types_lock); 5500 5506 ··· 5532 5538 int r; 5533 5539 5534 5540 ref = kzalloc(sizeof(*ref), GFP_KERNEL); 5535 - if (!ref) 5541 + if (!ref) { 5542 + ret = -ENOMEM; 5536 5543 break; 5544 + } 5537 5545 5538 5546 ref->ref = 1; 5539 5547 ref->buffer = iter->trace_buffer->buffer; 5540 5548 ref->page = ring_buffer_alloc_read_page(ref->buffer, iter->cpu_file); 5541 5549 if (!ref->page) { 5550 + ret = -ENOMEM; 5542 5551 kfree(ref); 5543 5552 break; 5544 5553 }
··· 5579 5582 5580 5583 /* did we read anything? */ 5581 5584 if (!spd.nr_pages) { 5585 + if (ret) 5586 + goto out; 5587 + 5582 5588 if ((file->f_flags & O_NONBLOCK) || (flags & SPLICE_F_NONBLOCK)) { 5583 5589 ret = -EAGAIN; 5584 5590 goto out; 5585 5591 } 5586 5592 mutex_unlock(&trace_types_lock); 5587 - ret = wait_on_pipe(iter); 5593 + ret = wait_on_pipe(iter, true); 5588 5594 mutex_lock(&trace_types_lock); 5589 5595 if (ret) 5590 5596 goto out; 5591 - if (signal_pending(current)) { 5592 - ret = -EINTR; 5593 - goto out; 5594 - } 5597 + 5595 5598 goto again; 5596 5600 }
+5 -5
lib/rhashtable.c
··· 229 229 ht->shift++; 230 230 231 231 /* For each new bucket, search the corresponding old bucket 232 - * for the first entry that hashes to the new bucket, and 232 + * for the first entry that hashes to the new bucket, and 233 233 * link the new bucket to that entry. Since all the entries 234 234 * which will end up in the new bucket appear in the same 235 235 * old bucket, this constructs an entirely valid new hash ··· 247 247 } 248 248 249 249 /* Publish the new table pointer. Lookups may now traverse 250 - * the new table, but they will not benefit from any 251 - * additional efficiency until later steps unzip the buckets. 250 + * the new table, but they will not benefit from any 251 + * additional efficiency until later steps unzip the buckets. 252 252 */ 253 253 rcu_assign_pointer(ht->tbl, new_tbl); 254 254 ··· 304 304 305 305 ht->shift--; 306 306 307 - /* Link each bucket in the new table to the first bucket 307 + /* Link each bucket in the new table to the first bucket 308 308 * in the old table that contains entries which will hash 309 309 * to the new bucket. 310 310 */ 311 311 for (i = 0; i < ntbl->size; i++) { 312 312 ntbl->buckets[i] = tbl->buckets[i]; 313 313 314 - /* Link each bucket in the new table to the first bucket 314 + /* Link each bucket in the new table to the first bucket 315 315 * in the old table that contains entries which will hash 316 316 * to the new bucket. 317 317 */
+5 -4
mm/bootmem.c
··· 243 243 244 244 static int reset_managed_pages_done __initdata; 245 245 246 - static inline void __init reset_node_managed_pages(pg_data_t *pgdat) 246 + void reset_node_managed_pages(pg_data_t *pgdat) 247 247 { 248 248 struct zone *z; 249 - 250 - if (reset_managed_pages_done) 251 - return; 252 249 253 250 for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++) 254 251 z->managed_pages = 0; ··· 255 258 { 256 259 struct pglist_data *pgdat; 257 260 261 + if (reset_managed_pages_done) 262 + return; 263 + 258 264 for_each_online_pgdat(pgdat) 259 265 reset_node_managed_pages(pgdat); 266 + 260 267 reset_managed_pages_done = 1; 261 268 } 262 269
+45 -25
mm/cma.c
··· 124 124 125 125 err: 126 126 kfree(cma->bitmap); 127 + cma->count = 0; 127 128 return -EINVAL; 128 129 } 129 130 ··· 218 217 phys_addr_t highmem_start = __pa(high_memory); 219 218 int ret = 0; 220 219 221 - pr_debug("%s(size %lx, base %08lx, limit %08lx alignment %08lx)\n", 222 - __func__, (unsigned long)size, (unsigned long)base, 223 - (unsigned long)limit, (unsigned long)alignment); 220 + pr_debug("%s(size %pa, base %pa, limit %pa alignment %pa)\n", 221 + __func__, &size, &base, &limit, &alignment); 224 222 225 223 if (cma_area_count == ARRAY_SIZE(cma_areas)) { 226 224 pr_err("Not enough slots for CMA reserved regions!\n"); ··· 244 244 size = ALIGN(size, alignment); 245 245 limit &= ~(alignment - 1); 246 246 247 + if (!base) 248 + fixed = false; 249 + 247 250 /* size should be aligned with order_per_bit */ 248 251 if (!IS_ALIGNED(size >> PAGE_SHIFT, 1 << order_per_bit)) 249 252 return -EINVAL; 250 253 251 254 /* 252 - * adjust limit to avoid crossing low/high memory boundary for 253 - * automatically allocated regions 255 + * If allocating at a fixed base the request region must not cross the 256 + * low/high memory boundary. 254 257 */ 255 - if (((limit == 0 || limit > memblock_end) && 256 - (memblock_end - size < highmem_start && 257 - memblock_end > highmem_start)) || 258 - (!fixed && limit > highmem_start && limit - size < highmem_start)) { 259 - limit = highmem_start; 260 - } 261 - 262 - if (fixed && base < highmem_start && base+size > highmem_start) { 258 + if (fixed && base < highmem_start && base + size > highmem_start) { 263 259 ret = -EINVAL; 264 - pr_err("Region at %08lx defined on low/high memory boundary (%08lx)\n", 265 - (unsigned long)base, (unsigned long)highmem_start); 260 + pr_err("Region at %pa defined on low/high memory boundary (%pa)\n", 261 + &base, &highmem_start); 266 262 goto err; 267 263 } 268 264
265 + /* 266 + * If the limit is unspecified or above the memblock end, its effective 267 + * value will be the memblock end. Set it explicitly to simplify further 268 + * checks. 269 + */ 270 + if (limit == 0 || limit > memblock_end) 271 + limit = memblock_end; 272 + 269 273 /* Reserve memory */ 270 - if (base && fixed) { 274 + if (fixed) { 271 275 if (memblock_is_region_reserved(base, size) || 272 276 memblock_reserve(base, size) < 0) { 273 277 ret = -EBUSY; 274 278 goto err; 275 279 } 276 280 } else { 277 - phys_addr_t addr = memblock_alloc_range(size, alignment, base, 278 - limit); 279 - if (!addr) { 280 - ret = -ENOMEM; 281 - goto err; 282 - } else { 283 - base = addr; 281 + phys_addr_t addr = 0; 282 + 283 + /* 284 + * All pages in the reserved area must come from the same zone. 285 + * If the requested region crosses the low/high memory boundary, 286 + * try allocating from high memory first and fall back to low 287 + * memory in case of failure. 288 + */ 289 + if (base < highmem_start && limit > highmem_start) { 290 + addr = memblock_alloc_range(size, alignment, 291 + highmem_start, limit); 292 + limit = highmem_start; 284 293 } 294 + 295 + if (!addr) { 296 + addr = memblock_alloc_range(size, alignment, base, 297 + limit); 298 + if (!addr) { 299 + ret = -ENOMEM; 300 + goto err; 301 + } 302 + } 303 + 304 + base = addr; 285 305 } 286 306 287 307 ret = cma_init_reserved_mem(base, size, order_per_bit, res_cma); 288 308 if (ret) 289 309 goto err; 290 310 291 - pr_info("Reserved %ld MiB at %08lx\n", (unsigned long)size / SZ_1M, 292 - (unsigned long)base); 311 + pr_info("Reserved %ld MiB at %pa\n", (unsigned long)size / SZ_1M, 312 + &base); 293 313 return 0; 294 315 err:
+16 -2
mm/compaction.c
··· 479 479 480 480 block_end_pfn = min(block_end_pfn, end_pfn); 481 481 482 + /* 483 + * pfn could pass the block_end_pfn if isolated freepage 484 + * is more than pageblock order. In this case, we adjust 485 + * scanning range to right one. 486 + */ 487 + if (pfn >= block_end_pfn) { 488 + block_end_pfn = ALIGN(pfn + 1, pageblock_nr_pages); 489 + block_end_pfn = min(block_end_pfn, end_pfn); 490 + } 491 + 482 492 if (!pageblock_pfn_to_page(pfn, block_end_pfn, cc->zone)) 483 493 break; 484 494 ··· 1039 1029 } 1040 1030 1041 1031 acct_isolated(zone, cc); 1042 - /* Record where migration scanner will be restarted */ 1043 - cc->migrate_pfn = low_pfn; 1032 + /* 1033 + * Record where migration scanner will be restarted. If we end up in 1034 + * the same pageblock as the free scanner, make the scanners fully 1035 + * meet so that compact_finished() terminates compaction. 1036 + */ 1037 + cc->migrate_pfn = (end_pfn <= cc->free_pfn) ? low_pfn : cc->free_pfn; 1044 1038 1045 1039 return cc->nr_migratepages ? ISOLATE_SUCCESS : ISOLATE_NONE; 1046 1040 }
+25
mm/internal.h
··· 108 108 /* 109 109 * in mm/page_alloc.c 110 110 */ 111 + 112 + /* 113 + * Locate the struct page for both the matching buddy in our 114 + * pair (buddy1) and the combined O(n+1) page they form (page). 115 + * 116 + * 1) Any buddy B1 will have an order O twin B2 which satisfies 117 + * the following equation: 118 + * B2 = B1 ^ (1 << O) 119 + * For example, if the starting buddy (buddy2) is #8 its order 120 + * 1 buddy is #10: 121 + * B2 = 8 ^ (1 << 1) = 8 ^ 2 = 10 122 + * 123 + * 2) Any buddy B will have an order O+1 parent P which 124 + * satisfies the following equation: 125 + * P = B & ~(1 << O) 126 + * 127 + * Assumption: *_mem_map is contiguous at least up to MAX_ORDER 128 + */ 129 + static inline unsigned long 130 + __find_buddy_index(unsigned long page_idx, unsigned int order) 131 + { 132 + return page_idx ^ (1 << order); 133 + } 134 + 135 + extern int __isolate_free_page(struct page *page, unsigned int order); 111 136 extern void __free_pages_bootmem(struct page *page, unsigned int order); 112 137 extern void prep_compound_page(struct page *page, unsigned long order); 113 138 #ifdef CONFIG_MEMORY_FAILURE
+26
mm/memory_hotplug.c
··· 31 31 #include <linux/stop_machine.h> 32 32 #include <linux/hugetlb.h> 33 33 #include <linux/memblock.h> 34 + #include <linux/bootmem.h> 34 35 35 36 #include <asm/tlbflush.h> 36 37 ··· 1067 1066 } 1068 1067 #endif /* CONFIG_MEMORY_HOTPLUG_SPARSE */ 1069 1068 1069 + static void reset_node_present_pages(pg_data_t *pgdat) 1070 + { 1071 + struct zone *z; 1072 + 1073 + for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++) 1074 + z->present_pages = 0; 1075 + 1076 + pgdat->node_present_pages = 0; 1077 + } 1078 + 1070 1079 /* we are OK calling __meminit stuff here - we have CONFIG_MEMORY_HOTPLUG */ 1071 1080 static pg_data_t __ref *hotadd_new_pgdat(int nid, u64 start) 1072 1081 { ··· 1106 1095 mutex_lock(&zonelists_mutex); 1107 1096 build_all_zonelists(pgdat, NULL); 1108 1097 mutex_unlock(&zonelists_mutex); 1098 + 1099 + /* 1100 + * zone->managed_pages is set to an approximate value in 1101 + * free_area_init_core(), which will cause 1102 + * /sys/device/system/node/nodeX/meminfo has wrong data. 1103 + * So reset it to 0 before any memory is onlined. 1104 + */ 1105 + reset_node_managed_pages(pgdat); 1106 + 1107 + /* 1108 + * When memory is hot-added, all the memory is in offline state. So 1109 + * clear all zones' present_pages because they will be updated in 1110 + * online_pages() and offline_pages(). 1111 + */ 1112 + reset_node_present_pages(pgdat); 1109 1113 1110 1114 return pgdat; 1111 1115 }
+5 -3
mm/nobootmem.c
··· 145 145 146 146 static int reset_managed_pages_done __initdata; 147 147 148 - static inline void __init reset_node_managed_pages(pg_data_t *pgdat) 148 + void reset_node_managed_pages(pg_data_t *pgdat) 149 149 { 150 150 struct zone *z; 151 151 152 - if (reset_managed_pages_done) 153 - return; 154 152 for (z = pgdat->node_zones; z < pgdat->node_zones + MAX_NR_ZONES; z++) 155 153 z->managed_pages = 0; 156 154 } ··· 157 159 { 158 160 struct pglist_data *pgdat; 159 161 162 + if (reset_managed_pages_done) 163 + return; 164 + 160 165 for_each_online_pgdat(pgdat) 161 166 reset_node_managed_pages(pgdat); 167 + 162 168 reset_managed_pages_done = 1; 163 169 } 164 170
+29 -39
mm/page_alloc.c
··· 467 467 } 468 468 469 469 /* 470 - * Locate the struct page for both the matching buddy in our 471 - * pair (buddy1) and the combined O(n+1) page they form (page). 472 - * 473 - * 1) Any buddy B1 will have an order O twin B2 which satisfies 474 - * the following equation: 475 - * B2 = B1 ^ (1 << O) 476 - * For example, if the starting buddy (buddy2) is #8 its order 477 - * 1 buddy is #10: 478 - * B2 = 8 ^ (1 << 1) = 8 ^ 2 = 10 479 - * 480 - * 2) Any buddy B will have an order O+1 parent P which 481 - * satisfies the following equation: 482 - * P = B & ~(1 << O) 483 - * 484 - * Assumption: *_mem_map is contiguous at least up to MAX_ORDER 485 - */ 486 - static inline unsigned long 487 - __find_buddy_index(unsigned long page_idx, unsigned int order) 488 - { 489 - return page_idx ^ (1 << order); 490 - } 491 - 492 - /* 493 470 * This function checks whether a page is free && is the buddy 494 471 * we can do coalesce a page and its buddy if 495 472 * (a) the buddy is not in a hole && ··· 546 569 unsigned long combined_idx; 547 570 unsigned long uninitialized_var(buddy_idx); 548 571 struct page *buddy; 572 + int max_order = MAX_ORDER; 549 573 550 574 VM_BUG_ON(!zone_is_initialized(zone)); 551 575 ··· 555 577 return; 556 578 557 579 VM_BUG_ON(migratetype == -1); 580 + if (is_migrate_isolate(migratetype)) { 581 + /* 582 + * We restrict max order of merging to prevent merge 583 + * between freepages on isolate pageblock and normal 584 + * pageblock. Without this, pageblock isolation 585 + * could cause incorrect freepage accounting. 
586 + */ 587 + max_order = min(MAX_ORDER, pageblock_order + 1); 588 + } else { 589 + __mod_zone_freepage_state(zone, 1 << order, migratetype); 590 + } 558 591 559 - page_idx = pfn & ((1 << MAX_ORDER) - 1); 592 + page_idx = pfn & ((1 << max_order) - 1); 560 593 561 594 VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page); 562 595 VM_BUG_ON_PAGE(bad_range(zone, page), page); 563 596 564 - while (order < MAX_ORDER-1) { 597 + while (order < max_order - 1) { 565 598 buddy_idx = __find_buddy_index(page_idx, order); 566 599 buddy = page + (buddy_idx - page_idx); 567 600 if (!page_is_buddy(page, buddy, order)) ··· 583 594 */ 584 595 if (page_is_guard(buddy)) { 585 596 clear_page_guard_flag(buddy); 586 - set_page_private(page, 0); 587 - __mod_zone_freepage_state(zone, 1 << order, 588 - migratetype); 597 + set_page_private(buddy, 0); 598 + if (!is_migrate_isolate(migratetype)) { 599 + __mod_zone_freepage_state(zone, 1 << order, 600 + migratetype); 601 + } 589 602 } else { 590 603 list_del(&buddy->lru); 591 604 zone->free_area[order].nr_free--; ··· 706 715 /* must delete as __free_one_page list manipulates */ 707 716 list_del(&page->lru); 708 717 mt = get_freepage_migratetype(page); 718 + if (unlikely(has_isolate_pageblock(zone))) 719 + mt = get_pageblock_migratetype(page); 720 + 709 721 /* MIGRATE_MOVABLE list may include MIGRATE_RESERVEs */ 710 722 __free_one_page(page, page_to_pfn(page), zone, 0, mt); 711 723 trace_mm_page_pcpu_drain(page, 0, mt); 712 - if (likely(!is_migrate_isolate_page(page))) { 713 - __mod_zone_page_state(zone, NR_FREE_PAGES, 1); 714 - if (is_migrate_cma(mt)) 715 - __mod_zone_page_state(zone, NR_FREE_CMA_PAGES, 1); 716 - } 717 724 } while (--to_free && --batch_free && !list_empty(list)); 718 725 } 719 726 spin_unlock(&zone->lock); ··· 728 739 if (nr_scanned) 729 740 __mod_zone_page_state(zone, NR_PAGES_SCANNED, -nr_scanned); 730 741 742 + if (unlikely(has_isolate_pageblock(zone) || 743 + is_migrate_isolate(migratetype))) { 744 + migratetype = 
get_pfnblock_migratetype(page, pfn); 745 + } 731 746 __free_one_page(page, pfn, zone, order, migratetype); 732 - if (unlikely(!is_migrate_isolate(migratetype))) 733 - __mod_zone_freepage_state(zone, 1 << order, migratetype); 734 747 spin_unlock(&zone->lock); 735 748 } 736 749 ··· 1475 1484 } 1476 1485 EXPORT_SYMBOL_GPL(split_page); 1477 1486 1478 - static int __isolate_free_page(struct page *page, unsigned int order) 1487 + int __isolate_free_page(struct page *page, unsigned int order) 1479 1488 { 1480 1489 unsigned long watermark; 1481 1490 struct zone *zone; ··· 6399 6408 6400 6409 /* Make sure the range is really isolated. */ 6401 6410 if (test_pages_isolated(outer_start, end, false)) { 6402 - pr_warn("alloc_contig_range test_pages_isolated(%lx, %lx) failed\n", 6403 - outer_start, end); 6411 + pr_info("%s: [%lx, %lx) PFNs busy\n", 6412 + __func__, outer_start, end); 6404 6413 ret = -EBUSY; 6405 6414 goto done; 6406 6415 } 6407 - 6408 6416 6409 6417 /* Grab isolated pages from freelists. */ 6410 6418 outer_end = isolate_freepages_range(&cc, outer_start, end);
+41 -2
mm/page_isolation.c
··· 60 60 int migratetype = get_pageblock_migratetype(page); 61 61 62 62 set_pageblock_migratetype(page, MIGRATE_ISOLATE); 63 + zone->nr_isolate_pageblock++; 63 64 nr_pages = move_freepages_block(zone, page, MIGRATE_ISOLATE); 64 65 65 66 __mod_zone_freepage_state(zone, -nr_pages, migratetype); ··· 76 75 { 77 76 struct zone *zone; 78 77 unsigned long flags, nr_pages; 78 + struct page *isolated_page = NULL; 79 + unsigned int order; 80 + unsigned long page_idx, buddy_idx; 81 + struct page *buddy; 79 82 80 83 zone = page_zone(page); 81 84 spin_lock_irqsave(&zone->lock, flags); 82 85 if (get_pageblock_migratetype(page) != MIGRATE_ISOLATE) 83 86 goto out; 84 - nr_pages = move_freepages_block(zone, page, migratetype); 85 - __mod_zone_freepage_state(zone, nr_pages, migratetype); 87 + 88 + /* 89 + * Because a freepage with order greater than pageblock_order 90 + * on an isolated pageblock is restricted from merging (to keep 91 + * freepage accounting correct), a free buddy page may exist. 92 + * move_freepages_block() doesn't handle merging, so we need 93 + * another approach: isolating and then freeing the page causes 94 + * the buddies to be merged. 95 + */ 96 + if (PageBuddy(page)) { 97 + order = page_order(page); 98 + if (order >= pageblock_order) { 99 + page_idx = page_to_pfn(page) & ((1 << MAX_ORDER) - 1); 100 + buddy_idx = __find_buddy_index(page_idx, order); 101 + buddy = page + (buddy_idx - page_idx); 102 + 103 + if (!is_migrate_isolate_page(buddy)) { 104 + __isolate_free_page(page, order); 105 + set_page_refcounted(page); 106 + isolated_page = page; 107 + } 108 + } 109 + } 110 + 111 + /* 112 + * If we isolated a freepage with order greater than pageblock_order, 113 + * there should be no other freepage in the range, so we can skip 114 + * the costly pageblock scan for freepages to move. 
115 + */ 116 + if (!isolated_page) { 117 + nr_pages = move_freepages_block(zone, page, migratetype); 118 + __mod_zone_freepage_state(zone, nr_pages, migratetype); 119 + } 86 120 set_pageblock_migratetype(page, migratetype); 121 + zone->nr_isolate_pageblock--; 87 122 out: 88 123 spin_unlock_irqrestore(&zone->lock, flags); 124 + if (isolated_page) 125 + __free_pages(isolated_page, order); 89 126 } 90 127 91 128 static inline struct page *
+4
mm/slab_common.c
··· 259 259 if (s->size - size >= sizeof(void *)) 260 260 continue; 261 261 262 + if (IS_ENABLED(CONFIG_SLAB) && align && 263 + (align > s->align || s->align % align)) 264 + continue; 265 + 262 266 return s; 263 267 } 264 268 return NULL;
+3 -3
mm/truncate.c
··· 715 715 * necessary) to @newsize. It will be typically be called from the filesystem's 716 716 * setattr function when ATTR_SIZE is passed in. 717 717 * 718 - * Must be called with inode_mutex held and before all filesystem specific 719 - * block truncation has been performed. 718 + * Must be called with a lock serializing truncates and writes (generally 719 + * i_mutex but e.g. xfs uses a different lock) and before all filesystem 720 + * specific block truncation has been performed. 720 721 */ 721 722 void truncate_setsize(struct inode *inode, loff_t newsize) 722 723 { ··· 756 755 struct page *page; 757 756 pgoff_t index; 758 757 759 - WARN_ON(!mutex_is_locked(&inode->i_mutex)); 760 758 WARN_ON(to > inode->i_size); 761 759 762 760 if (from >= to || bsize == PAGE_CACHE_SIZE)
+10 -15
net/ceph/auth_x.c
··· 149 149 struct ceph_crypto_key old_key; 150 150 void *ticket_buf = NULL; 151 151 void *tp, *tpend; 152 + void **ptp; 152 153 struct ceph_timespec new_validity; 153 154 struct ceph_crypto_key new_session_key; 154 155 struct ceph_buffer *new_ticket_blob; ··· 209 208 goto out; 210 209 } 211 210 tp = ticket_buf; 212 - dlen = ceph_decode_32(&tp); 211 + ptp = &tp; 212 + tpend = *ptp + dlen; 213 213 } else { 214 214 /* unencrypted */ 215 - ceph_decode_32_safe(p, end, dlen, bad); 216 - ticket_buf = kmalloc(dlen, GFP_NOFS); 217 - if (!ticket_buf) { 218 - ret = -ENOMEM; 219 - goto out; 220 - } 221 - tp = ticket_buf; 222 - ceph_decode_need(p, end, dlen, bad); 223 - ceph_decode_copy(p, ticket_buf, dlen); 215 + ptp = p; 216 + tpend = end; 224 217 } 225 - tpend = tp + dlen; 218 + ceph_decode_32_safe(ptp, tpend, dlen, bad); 226 219 dout(" ticket blob is %d bytes\n", dlen); 227 - ceph_decode_need(&tp, tpend, 1 + sizeof(u64), bad); 228 - blob_struct_v = ceph_decode_8(&tp); 229 - new_secret_id = ceph_decode_64(&tp); 230 - ret = ceph_decode_buffer(&new_ticket_blob, &tp, tpend); 220 + ceph_decode_need(ptp, tpend, 1 + sizeof(u64), bad); 221 + blob_struct_v = ceph_decode_8(ptp); 222 + new_secret_id = ceph_decode_64(ptp); 223 + ret = ceph_decode_buffer(&new_ticket_blob, ptp, tpend); 231 224 if (ret) 232 225 goto out; 233 226
+132 -37
net/ceph/crypto.c
··· 90 90 91 91 static const u8 *aes_iv = (u8 *)CEPH_AES_IV; 92 92 93 + /* 94 + * Should be used for buffers allocated with ceph_kvmalloc(). 95 + * Currently these are encrypt out-buffer (ceph_buffer) and decrypt 96 + * in-buffer (msg front). 97 + * 98 + * Dispose of @sgt with teardown_sgtable(). 99 + * 100 + * @prealloc_sg is to avoid memory allocation inside sg_alloc_table() 101 + * in cases where a single sg is sufficient. No attempt to reduce the 102 + * number of sgs by squeezing physically contiguous pages together is 103 + * made though, for simplicity. 104 + */ 105 + static int setup_sgtable(struct sg_table *sgt, struct scatterlist *prealloc_sg, 106 + const void *buf, unsigned int buf_len) 107 + { 108 + struct scatterlist *sg; 109 + const bool is_vmalloc = is_vmalloc_addr(buf); 110 + unsigned int off = offset_in_page(buf); 111 + unsigned int chunk_cnt = 1; 112 + unsigned int chunk_len = PAGE_ALIGN(off + buf_len); 113 + int i; 114 + int ret; 115 + 116 + if (buf_len == 0) { 117 + memset(sgt, 0, sizeof(*sgt)); 118 + return -EINVAL; 119 + } 120 + 121 + if (is_vmalloc) { 122 + chunk_cnt = chunk_len >> PAGE_SHIFT; 123 + chunk_len = PAGE_SIZE; 124 + } 125 + 126 + if (chunk_cnt > 1) { 127 + ret = sg_alloc_table(sgt, chunk_cnt, GFP_NOFS); 128 + if (ret) 129 + return ret; 130 + } else { 131 + WARN_ON(chunk_cnt != 1); 132 + sg_init_table(prealloc_sg, 1); 133 + sgt->sgl = prealloc_sg; 134 + sgt->nents = sgt->orig_nents = 1; 135 + } 136 + 137 + for_each_sg(sgt->sgl, sg, sgt->orig_nents, i) { 138 + struct page *page; 139 + unsigned int len = min(chunk_len - off, buf_len); 140 + 141 + if (is_vmalloc) 142 + page = vmalloc_to_page(buf); 143 + else 144 + page = virt_to_page(buf); 145 + 146 + sg_set_page(sg, page, len, off); 147 + 148 + off = 0; 149 + buf += len; 150 + buf_len -= len; 151 + } 152 + WARN_ON(buf_len != 0); 153 + 154 + return 0; 155 + } 156 + 157 + static void teardown_sgtable(struct sg_table *sgt) 158 + { 159 + if (sgt->orig_nents > 1) 160 + sg_free_table(sgt); 
161 + } 162 + 93 163 static int ceph_aes_encrypt(const void *key, int key_len, 94 164 void *dst, size_t *dst_len, 95 165 const void *src, size_t src_len) 96 166 { 97 - struct scatterlist sg_in[2], sg_out[1]; 167 + struct scatterlist sg_in[2], prealloc_sg; 168 + struct sg_table sg_out; 98 169 struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher(); 99 170 struct blkcipher_desc desc = { .tfm = tfm, .flags = 0 }; 100 171 int ret; ··· 181 110 182 111 *dst_len = src_len + zero_padding; 183 112 184 - crypto_blkcipher_setkey((void *)tfm, key, key_len); 185 113 sg_init_table(sg_in, 2); 186 114 sg_set_buf(&sg_in[0], src, src_len); 187 115 sg_set_buf(&sg_in[1], pad, zero_padding); 188 - sg_init_table(sg_out, 1); 189 - sg_set_buf(sg_out, dst, *dst_len); 116 + ret = setup_sgtable(&sg_out, &prealloc_sg, dst, *dst_len); 117 + if (ret) 118 + goto out_tfm; 119 + 120 + crypto_blkcipher_setkey((void *)tfm, key, key_len); 190 121 iv = crypto_blkcipher_crt(tfm)->iv; 191 122 ivsize = crypto_blkcipher_ivsize(tfm); 192 - 193 123 memcpy(iv, aes_iv, ivsize); 124 + 194 125 /* 195 126 print_hex_dump(KERN_ERR, "enc key: ", DUMP_PREFIX_NONE, 16, 1, 196 127 key, key_len, 1); ··· 201 128 print_hex_dump(KERN_ERR, "enc pad: ", DUMP_PREFIX_NONE, 16, 1, 202 129 pad, zero_padding, 1); 203 130 */ 204 - ret = crypto_blkcipher_encrypt(&desc, sg_out, sg_in, 131 + ret = crypto_blkcipher_encrypt(&desc, sg_out.sgl, sg_in, 205 132 src_len + zero_padding); 206 - crypto_free_blkcipher(tfm); 207 - if (ret < 0) 133 + if (ret < 0) { 208 134 pr_err("ceph_aes_crypt failed %d\n", ret); 135 + goto out_sg; 136 + } 209 137 /* 210 138 print_hex_dump(KERN_ERR, "enc out: ", DUMP_PREFIX_NONE, 16, 1, 211 139 dst, *dst_len, 1); 212 140 */ 213 - return 0; 141 + 142 + out_sg: 143 + teardown_sgtable(&sg_out); 144 + out_tfm: 145 + crypto_free_blkcipher(tfm); 146 + return ret; 214 147 } 215 148 216 149 static int ceph_aes_encrypt2(const void *key, int key_len, void *dst, ··· 224 145 const void *src1, size_t src1_len, 225 146 
const void *src2, size_t src2_len) 226 147 { 227 - struct scatterlist sg_in[3], sg_out[1]; 148 + struct scatterlist sg_in[3], prealloc_sg; 149 + struct sg_table sg_out; 228 150 struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher(); 229 151 struct blkcipher_desc desc = { .tfm = tfm, .flags = 0 }; 230 152 int ret; ··· 241 161 242 162 *dst_len = src1_len + src2_len + zero_padding; 243 163 244 - crypto_blkcipher_setkey((void *)tfm, key, key_len); 245 164 sg_init_table(sg_in, 3); 246 165 sg_set_buf(&sg_in[0], src1, src1_len); 247 166 sg_set_buf(&sg_in[1], src2, src2_len); 248 167 sg_set_buf(&sg_in[2], pad, zero_padding); 249 - sg_init_table(sg_out, 1); 250 - sg_set_buf(sg_out, dst, *dst_len); 168 + ret = setup_sgtable(&sg_out, &prealloc_sg, dst, *dst_len); 169 + if (ret) 170 + goto out_tfm; 171 + 172 + crypto_blkcipher_setkey((void *)tfm, key, key_len); 251 173 iv = crypto_blkcipher_crt(tfm)->iv; 252 174 ivsize = crypto_blkcipher_ivsize(tfm); 253 - 254 175 memcpy(iv, aes_iv, ivsize); 176 + 255 177 /* 256 178 print_hex_dump(KERN_ERR, "enc key: ", DUMP_PREFIX_NONE, 16, 1, 257 179 key, key_len, 1); ··· 264 182 print_hex_dump(KERN_ERR, "enc pad: ", DUMP_PREFIX_NONE, 16, 1, 265 183 pad, zero_padding, 1); 266 184 */ 267 - ret = crypto_blkcipher_encrypt(&desc, sg_out, sg_in, 185 + ret = crypto_blkcipher_encrypt(&desc, sg_out.sgl, sg_in, 268 186 src1_len + src2_len + zero_padding); 269 - crypto_free_blkcipher(tfm); 270 - if (ret < 0) 187 + if (ret < 0) { 271 188 pr_err("ceph_aes_crypt2 failed %d\n", ret); 189 + goto out_sg; 190 + } 272 191 /* 273 192 print_hex_dump(KERN_ERR, "enc out: ", DUMP_PREFIX_NONE, 16, 1, 274 193 dst, *dst_len, 1); 275 194 */ 276 - return 0; 195 + 196 + out_sg: 197 + teardown_sgtable(&sg_out); 198 + out_tfm: 199 + crypto_free_blkcipher(tfm); 200 + return ret; 277 201 } 278 202 279 203 static int ceph_aes_decrypt(const void *key, int key_len, 280 204 void *dst, size_t *dst_len, 281 205 const void *src, size_t src_len) 282 206 { 283 - struct 
scatterlist sg_in[1], sg_out[2]; 207 + struct sg_table sg_in; 208 + struct scatterlist sg_out[2], prealloc_sg; 284 209 struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher(); 285 210 struct blkcipher_desc desc = { .tfm = tfm }; 286 211 char pad[16]; ··· 299 210 if (IS_ERR(tfm)) 300 211 return PTR_ERR(tfm); 301 212 302 - crypto_blkcipher_setkey((void *)tfm, key, key_len); 303 - sg_init_table(sg_in, 1); 304 213 sg_init_table(sg_out, 2); 305 - sg_set_buf(sg_in, src, src_len); 306 214 sg_set_buf(&sg_out[0], dst, *dst_len); 307 215 sg_set_buf(&sg_out[1], pad, sizeof(pad)); 216 + ret = setup_sgtable(&sg_in, &prealloc_sg, src, src_len); 217 + if (ret) 218 + goto out_tfm; 308 219 220 + crypto_blkcipher_setkey((void *)tfm, key, key_len); 309 221 iv = crypto_blkcipher_crt(tfm)->iv; 310 222 ivsize = crypto_blkcipher_ivsize(tfm); 311 - 312 223 memcpy(iv, aes_iv, ivsize); 313 224 314 225 /* ··· 317 228 print_hex_dump(KERN_ERR, "dec in: ", DUMP_PREFIX_NONE, 16, 1, 318 229 src, src_len, 1); 319 230 */ 320 - 321 - ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in, src_len); 322 - crypto_free_blkcipher(tfm); 231 + ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in.sgl, src_len); 323 232 if (ret < 0) { 324 233 pr_err("ceph_aes_decrypt failed %d\n", ret); 325 - return ret; 234 + goto out_sg; 326 235 } 327 236 328 237 if (src_len <= *dst_len) ··· 338 251 print_hex_dump(KERN_ERR, "dec out: ", DUMP_PREFIX_NONE, 16, 1, 339 252 dst, *dst_len, 1); 340 253 */ 341 - return 0; 254 + 255 + out_sg: 256 + teardown_sgtable(&sg_in); 257 + out_tfm: 258 + crypto_free_blkcipher(tfm); 259 + return ret; 342 260 } 343 261 344 262 static int ceph_aes_decrypt2(const void *key, int key_len, ··· 351 259 void *dst2, size_t *dst2_len, 352 260 const void *src, size_t src_len) 353 261 { 354 - struct scatterlist sg_in[1], sg_out[3]; 262 + struct sg_table sg_in; 263 + struct scatterlist sg_out[3], prealloc_sg; 355 264 struct crypto_blkcipher *tfm = ceph_crypto_alloc_cipher(); 356 265 struct blkcipher_desc 
desc = { .tfm = tfm }; 357 266 char pad[16]; ··· 364 271 if (IS_ERR(tfm)) 365 272 return PTR_ERR(tfm); 366 273 367 - sg_init_table(sg_in, 1); 368 - sg_set_buf(sg_in, src, src_len); 369 274 sg_init_table(sg_out, 3); 370 275 sg_set_buf(&sg_out[0], dst1, *dst1_len); 371 276 sg_set_buf(&sg_out[1], dst2, *dst2_len); 372 277 sg_set_buf(&sg_out[2], pad, sizeof(pad)); 278 + ret = setup_sgtable(&sg_in, &prealloc_sg, src, src_len); 279 + if (ret) 280 + goto out_tfm; 373 281 374 282 crypto_blkcipher_setkey((void *)tfm, key, key_len); 375 283 iv = crypto_blkcipher_crt(tfm)->iv; 376 284 ivsize = crypto_blkcipher_ivsize(tfm); 377 - 378 285 memcpy(iv, aes_iv, ivsize); 379 286 380 287 /* ··· 383 290 print_hex_dump(KERN_ERR, "dec in: ", DUMP_PREFIX_NONE, 16, 1, 384 291 src, src_len, 1); 385 292 */ 386 - 387 - ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in, src_len); 388 - crypto_free_blkcipher(tfm); 293 + ret = crypto_blkcipher_decrypt(&desc, sg_out, sg_in.sgl, src_len); 389 294 if (ret < 0) { 390 295 pr_err("ceph_aes_decrypt failed %d\n", ret); 391 - return ret; 296 + goto out_sg; 392 297 } 393 298 394 299 if (src_len <= *dst1_len) ··· 416 325 dst2, *dst2_len, 1); 417 326 */ 418 327 419 - return 0; 328 + out_sg: 329 + teardown_sgtable(&sg_in); 330 + out_tfm: 331 + crypto_free_blkcipher(tfm); 332 + return ret; 420 333 } 421 334 422 335
+9 -1
net/ceph/messenger.c
··· 484 484 IPPROTO_TCP, &sock); 485 485 if (ret) 486 486 return ret; 487 - sock->sk->sk_allocation = GFP_NOFS; 487 + sock->sk->sk_allocation = GFP_NOFS | __GFP_MEMALLOC; 488 488 489 489 #ifdef CONFIG_LOCKDEP 490 490 lockdep_set_class(&sock->sk->sk_lock, &socket_class); ··· 509 509 510 510 return ret; 511 511 } 512 + 513 + sk_set_memalloc(sock->sk); 514 + 512 515 con->sock = sock; 513 516 return 0; 514 517 } ··· 2772 2769 { 2773 2770 struct ceph_connection *con = container_of(work, struct ceph_connection, 2774 2771 work.work); 2772 + unsigned long pflags = current->flags; 2775 2773 bool fault; 2774 + 2775 + current->flags |= PF_MEMALLOC; 2776 2776 2777 2777 mutex_lock(&con->mutex); 2778 2778 while (true) { ··· 2830 2824 con_fault_finish(con); 2831 2825 2832 2826 con->ops->put(con); 2827 + 2828 + tsk_restore_flags(current, pflags, PF_MEMALLOC); 2833 2829 } 2834 2830 2835 2831 /*
+5 -2
net/ceph/osd_client.c
··· 1007 1007 static void __remove_osd(struct ceph_osd_client *osdc, struct ceph_osd *osd) 1008 1008 { 1009 1009 dout("__remove_osd %p\n", osd); 1010 - BUG_ON(!list_empty(&osd->o_requests)); 1011 - BUG_ON(!list_empty(&osd->o_linger_requests)); 1010 + WARN_ON(!list_empty(&osd->o_requests)); 1011 + WARN_ON(!list_empty(&osd->o_linger_requests)); 1012 1012 1013 1013 rb_erase(&osd->o_node, &osdc->osds); 1014 1014 list_del_init(&osd->o_osd_lru); ··· 1254 1254 if (list_empty(&req->r_osd_item)) 1255 1255 req->r_osd = NULL; 1256 1256 } 1257 + 1258 + list_del_init(&req->r_req_lru_item); /* can be on notarget */ 1257 1259 ceph_osdc_put_request(req); 1258 1260 } 1259 1261 ··· 1397 1395 if (req->r_osd) { 1398 1396 __cancel_request(req); 1399 1397 list_del_init(&req->r_osd_item); 1398 + list_del_init(&req->r_linger_osd_item); 1400 1399 req->r_osd = NULL; 1401 1400 } 1402 1401
+2
net/ipv4/fou.c
··· 227 227 int err = -ENOSYS; 228 228 const struct net_offload **offloads; 229 229 230 + udp_tunnel_gro_complete(skb, nhoff); 231 + 230 232 rcu_read_lock(); 231 233 offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads; 232 234 ops = rcu_dereference(offloads[proto]);
+1 -1
net/ipv4/ip_sockglue.c
··· 195 195 for (cmsg = CMSG_FIRSTHDR(msg); cmsg; cmsg = CMSG_NXTHDR(msg, cmsg)) { 196 196 if (!CMSG_OK(msg, cmsg)) 197 197 return -EINVAL; 198 - #if defined(CONFIG_IPV6) 198 + #if IS_ENABLED(CONFIG_IPV6) 199 199 if (allow_ipv6 && 200 200 cmsg->cmsg_level == SOL_IPV6 && 201 201 cmsg->cmsg_type == IPV6_PKTINFO) {
-2
net/irda/af_irda.c
··· 1052 1052 1053 1053 if (sk->sk_state != TCP_ESTABLISHED) { 1054 1054 sock->state = SS_UNCONNECTED; 1055 - if (sk->sk_prot->disconnect(sk, flags)) 1056 - sock->state = SS_DISCONNECTING; 1057 1055 err = sock_error(sk); 1058 1056 if (!err) 1059 1057 err = -ECONNRESET;
+1 -1
net/mac80211/ibss.c
··· 805 805 806 806 memset(&params, 0, sizeof(params)); 807 807 memset(&csa_ie, 0, sizeof(csa_ie)); 808 - err = ieee80211_parse_ch_switch_ie(sdata, elems, beacon, 808 + err = ieee80211_parse_ch_switch_ie(sdata, elems, 809 809 ifibss->chandef.chan->band, 810 810 sta_flags, ifibss->bssid, &csa_ie); 811 811 /* can't switch to destination channel, fail */
+1 -2
net/mac80211/ieee80211_i.h
··· 1705 1705 * ieee80211_parse_ch_switch_ie - parses channel switch IEs 1706 1706 * @sdata: the sdata of the interface which has received the frame 1707 1707 * @elems: parsed 802.11 elements received with the frame 1708 - * @beacon: indicates if the frame was a beacon or probe response 1709 1708 * @current_band: indicates the current band 1710 1709 * @sta_flags: contains information about own capabilities and restrictions 1711 1710 * to decide which channel switch announcements can be accepted. Only the ··· 1718 1719 * Return: 0 on success, <0 on error and >0 if there is nothing to parse. 1719 1720 */ 1720 1721 int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata, 1721 - struct ieee802_11_elems *elems, bool beacon, 1722 + struct ieee802_11_elems *elems, 1722 1723 enum ieee80211_band current_band, 1723 1724 u32 sta_flags, u8 *bssid, 1724 1725 struct ieee80211_csa_ie *csa_ie);
+12 -6
net/mac80211/iface.c
··· 777 777 int i, flushed; 778 778 struct ps_data *ps; 779 779 struct cfg80211_chan_def chandef; 780 + bool cancel_scan; 780 781 781 782 clear_bit(SDATA_STATE_RUNNING, &sdata->state); 782 783 783 - if (rcu_access_pointer(local->scan_sdata) == sdata) 784 + cancel_scan = rcu_access_pointer(local->scan_sdata) == sdata; 785 + if (cancel_scan) 784 786 ieee80211_scan_cancel(local); 785 787 786 788 /* ··· 913 911 list_del(&sdata->u.vlan.list); 914 912 mutex_unlock(&local->mtx); 915 913 RCU_INIT_POINTER(sdata->vif.chanctx_conf, NULL); 914 + /* see comment in the default case below */ 915 + ieee80211_free_keys(sdata, true); 916 916 /* no need to tell driver */ 917 917 break; 918 918 case NL80211_IFTYPE_MONITOR: ··· 940 936 /* 941 937 * When we get here, the interface is marked down. 942 938 * Free the remaining keys, if there are any 943 - * (shouldn't be, except maybe in WDS mode?) 939 + * (which can happen in AP mode if userspace sets 940 + * keys before the interface is operating, and maybe 941 + * also in WDS mode) 944 942 * 945 943 * Force the key freeing to always synchronize_net() 946 944 * to wait for the RX path in case it is using this 947 - * interface enqueuing frames * at this very time on 945 + * interface enqueuing frames at this very time on 948 946 * another CPU. 949 947 */ 950 948 ieee80211_free_keys(sdata, true); 951 - 952 - /* fall through */ 953 - case NL80211_IFTYPE_AP: 954 949 skb_queue_purge(&sdata->skb_queue); 955 950 } 956 951 ··· 1006 1003 } 1007 1004 1008 1005 ieee80211_recalc_ps(local, -1); 1006 + 1007 + if (cancel_scan) 1008 + flush_delayed_work(&local->scan_work); 1009 1009 1010 1010 if (local->open_count == 0) { 1011 1011 ieee80211_stop_device(local);
+1 -1
net/mac80211/mesh.c
··· 874 874 875 875 memset(&params, 0, sizeof(params)); 876 876 memset(&csa_ie, 0, sizeof(csa_ie)); 877 - err = ieee80211_parse_ch_switch_ie(sdata, elems, beacon, band, 877 + err = ieee80211_parse_ch_switch_ie(sdata, elems, band, 878 878 sta_flags, sdata->vif.addr, 879 879 &csa_ie); 880 880 if (err < 0)
+3 -2
net/mac80211/mlme.c
··· 1117 1117 1118 1118 current_band = cbss->channel->band; 1119 1119 memset(&csa_ie, 0, sizeof(csa_ie)); 1120 - res = ieee80211_parse_ch_switch_ie(sdata, elems, beacon, current_band, 1120 + res = ieee80211_parse_ch_switch_ie(sdata, elems, current_band, 1121 1121 ifmgd->flags, 1122 1122 ifmgd->associated->bssid, &csa_ie); 1123 1123 if (res < 0) ··· 1216 1216 ieee80211_queue_work(&local->hw, &ifmgd->chswitch_work); 1217 1217 else 1218 1218 mod_timer(&ifmgd->chswitch_timer, 1219 - TU_TO_EXP_TIME(csa_ie.count * cbss->beacon_interval)); 1219 + TU_TO_EXP_TIME((csa_ie.count - 1) * 1220 + cbss->beacon_interval)); 1220 1221 } 1221 1222 1222 1223 static bool
+7 -7
net/mac80211/rx.c
··· 1685 1685 sc = le16_to_cpu(hdr->seq_ctrl); 1686 1686 frag = sc & IEEE80211_SCTL_FRAG; 1687 1687 1688 - if (likely((!ieee80211_has_morefrags(fc) && frag == 0) || 1689 - is_multicast_ether_addr(hdr->addr1))) { 1690 - /* not fragmented */ 1688 + if (likely(!ieee80211_has_morefrags(fc) && frag == 0)) 1689 + goto out; 1690 + 1691 + if (is_multicast_ether_addr(hdr->addr1)) { 1692 + rx->local->dot11MulticastReceivedFrameCount++; 1691 1693 goto out; 1692 1694 } 1695 + 1693 1696 I802_DEBUG_INC(rx->local->rx_handlers_fragments); 1694 1697 1695 1698 if (skb_linearize(rx->skb)) ··· 1785 1782 out: 1786 1783 if (rx->sta) 1787 1784 rx->sta->rx_packets++; 1788 - if (is_multicast_ether_addr(hdr->addr1)) 1789 - rx->local->dot11MulticastReceivedFrameCount++; 1790 - else 1791 - ieee80211_led_rx(rx->local); 1785 + ieee80211_led_rx(rx->local); 1792 1786 return RX_CONTINUE; 1793 1787 } 1794 1788
+6 -12
net/mac80211/spectmgmt.c
··· 22 22 #include "wme.h" 23 23 24 24 int ieee80211_parse_ch_switch_ie(struct ieee80211_sub_if_data *sdata, 25 - struct ieee802_11_elems *elems, bool beacon, 25 + struct ieee802_11_elems *elems, 26 26 enum ieee80211_band current_band, 27 27 u32 sta_flags, u8 *bssid, 28 28 struct ieee80211_csa_ie *csa_ie) ··· 91 91 return -EINVAL; 92 92 } 93 93 94 - if (!beacon && sec_chan_offs) { 94 + if (sec_chan_offs) { 95 95 secondary_channel_offset = sec_chan_offs->sec_chan_offs; 96 - } else if (beacon && ht_oper) { 97 - secondary_channel_offset = 98 - ht_oper->ht_param & IEEE80211_HT_PARAM_CHA_SEC_OFFSET; 99 96 } else if (!(sta_flags & IEEE80211_STA_DISABLE_HT)) { 100 - /* If it's not a beacon, HT is enabled and the IE not present, 101 - * it's 20 MHz, 802.11-2012 8.5.2.6: 102 - * This element [the Secondary Channel Offset Element] is 103 - * present when switching to a 40 MHz channel. It may be 104 - * present when switching to a 20 MHz channel (in which 105 - * case the secondary channel offset is set to SCN). 106 - */ 97 + /* If the secondary channel offset IE is not present, 98 + * we can't know what the post-CSA offset is, so the 99 + * best we can do is use 20 MHz. 100 + */ 107 101 secondary_channel_offset = IEEE80211_HT_PARAM_CHA_SEC_NONE; 108 102 
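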
+3 -2
net/netlink/af_netlink.c
··· 1440 1440 return; 1441 1441 1442 1442 for (undo = 0; undo < group; undo++) 1443 - if (test_bit(group, &groups)) 1443 + if (test_bit(undo, &groups)) 1444 1444 nlk->netlink_unbind(undo); 1445 1445 } 1446 1446 ··· 1492 1492 netlink_insert(sk, net, nladdr->nl_pid) : 1493 1493 netlink_autobind(sock); 1494 1494 if (err) { 1495 - netlink_unbind(nlk->ngroups - 1, groups, nlk); 1495 + netlink_unbind(nlk->ngroups, groups, nlk); 1496 1496 return err; 1497 1497 } 1498 1498 } ··· 2509 2509 nl_table[unit].module = module; 2510 2510 if (cfg) { 2511 2511 nl_table[unit].bind = cfg->bind; 2512 + nl_table[unit].unbind = cfg->unbind; 2512 2513 nl_table[unit].flags = cfg->flags; 2513 2514 if (cfg->compare) 2514 2515 nl_table[unit].compare = cfg->compare;
-2
net/sctp/auth.c
··· 862 862 list_add(&cur_key->key_list, sh_keys); 863 863 864 864 cur_key->key = key; 865 - sctp_auth_key_hold(key); 866 - 867 865 return 0; 868 866 nomem: 869 867 if (!replace)
+3
net/sctp/sm_make_chunk.c
··· 2609 2609 addr_param = param.v + sizeof(sctp_addip_param_t); 2610 2610 2611 2611 af = sctp_get_af_specific(param_type2af(param.p->type)); 2612 + if (af == NULL) 2613 + break; 2614 + 2612 2615 af->from_addr_param(&addr, addr_param, 2613 2616 htons(asoc->peer.port), 0); 2614 2617
+4 -3
security/selinux/hooks.c
··· 4725 4725 err = selinux_nlmsg_lookup(sksec->sclass, nlh->nlmsg_type, &perm); 4726 4726 if (err) { 4727 4727 if (err == -EINVAL) { 4728 - WARN_ONCE(1, "selinux_nlmsg_perm: unrecognized netlink message:" 4729 - " protocol=%hu nlmsg_type=%hu sclass=%hu\n", 4730 - sk->sk_protocol, nlh->nlmsg_type, sksec->sclass); 4728 + printk(KERN_WARNING 4729 + "SELinux: unrecognized netlink message:" 4730 + " protocol=%hu nlmsg_type=%hu sclass=%hu\n", 4731 + sk->sk_protocol, nlh->nlmsg_type, sksec->sclass); 4731 4732 if (!selinux_enforcing || security_get_allow_unknown()) 4732 4733 err = 0; 4733 4734 }
+4
sound/pci/hda/hda_intel.c
··· 219 219 "{Intel, LPT_LP}," 220 220 "{Intel, WPT_LP}," 221 221 "{Intel, SPT}," 222 + "{Intel, SPT_LP}," 222 223 "{Intel, HPT}," 223 224 "{Intel, PBG}," 224 225 "{Intel, SCH}," ··· 2004 2003 .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, 2005 2004 /* Sunrise Point */ 2006 2005 { PCI_DEVICE(0x8086, 0xa170), 2006 + .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, 2007 + /* Sunrise Point-LP */ 2008 + { PCI_DEVICE(0x8086, 0x9d70), 2007 2009 .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, 2008 2010 /* Haswell */ 2009 2011 { PCI_DEVICE(0x8086, 0x0a0c),
+31
sound/pci/hda/patch_conexant.c
··· 43 43 unsigned int num_eapds; 44 44 hda_nid_t eapds[4]; 45 45 bool dynamic_eapd; 46 + hda_nid_t mute_led_eapd; 46 47 47 48 unsigned int parse_flags; /* flag for snd_hda_parse_pin_defcfg() */ 48 49 ··· 164 163 cx_auto_turn_eapd(codec, spec->num_eapds, spec->eapds, enabled); 165 164 } 166 165 166 + /* turn on/off EAPD according to Master switch (inversely!) for mute LED */ 167 + static void cx_auto_vmaster_hook_mute_led(void *private_data, int enabled) 168 + { 169 + struct hda_codec *codec = private_data; 170 + struct conexant_spec *spec = codec->spec; 171 + 172 + snd_hda_codec_write(codec, spec->mute_led_eapd, 0, 173 + AC_VERB_SET_EAPD_BTLENABLE, 174 + enabled ? 0x00 : 0x02); 175 + } 176 + 167 177 static int cx_auto_build_controls(struct hda_codec *codec) 168 178 { 169 179 int err; ··· 235 223 CXT_FIXUP_TOSHIBA_P105, 236 224 CXT_FIXUP_HP_530, 237 225 CXT_FIXUP_CAP_MIX_AMP_5047, 226 + CXT_FIXUP_MUTE_LED_EAPD, 238 227 }; 239 228 240 229 /* for hda_fixup_thinkpad_acpi() */ ··· 570 557 } 571 558 } 572 559 560 + static void cxt_fixup_mute_led_eapd(struct hda_codec *codec, 561 + const struct hda_fixup *fix, int action) 562 + { 563 + struct conexant_spec *spec = codec->spec; 564 + 565 + if (action == HDA_FIXUP_ACT_PRE_PROBE) { 566 + spec->mute_led_eapd = 0x1b; 567 + spec->dynamic_eapd = 1; 568 + spec->gen.vmaster_mute.hook = cx_auto_vmaster_hook_mute_led; 569 + } 570 + } 571 + 573 572 /* 574 573 * Fix max input level on mixer widget to 0dB 575 574 * (originally it has 0x2b steps with 0dB offset 0x14) ··· 730 705 .type = HDA_FIXUP_FUNC, 731 706 .v.func = cxt_fixup_cap_mix_amp_5047, 732 707 }, 708 + [CXT_FIXUP_MUTE_LED_EAPD] = { 709 + .type = HDA_FIXUP_FUNC, 710 + .v.func = cxt_fixup_mute_led_eapd, 711 + }, 733 712 }; 734 713 735 714 static const struct snd_pci_quirk cxt5045_fixups[] = { ··· 791 762 SND_PCI_QUIRK(0x17aa, 0x21cf, "Lenovo T520", CXT_PINCFG_LENOVO_TP410), 792 763 SND_PCI_QUIRK(0x17aa, 0x21da, "Lenovo X220", CXT_PINCFG_LENOVO_TP410), 793 764 
SND_PCI_QUIRK(0x17aa, 0x21db, "Lenovo X220-tablet", CXT_PINCFG_LENOVO_TP410), 765 + SND_PCI_QUIRK(0x17aa, 0x38af, "Lenovo IdeaPad Z560", CXT_FIXUP_MUTE_LED_EAPD), 794 766 SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC), 795 767 SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC), 796 768 SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC), ··· 810 780 { .id = CXT_PINCFG_LEMOTE_A1004, .name = "lemote-a1004" }, 811 781 { .id = CXT_PINCFG_LEMOTE_A1205, .name = "lemote-a1205" }, 812 782 { .id = CXT_FIXUP_OLPC_XO, .name = "olpc-xo" }, 783 + { .id = CXT_FIXUP_MUTE_LED_EAPD, .name = "mute-led-eapd" }, 813 784 {} 814 785 }; 815 786
+159 -60
sound/pci/hda/patch_realtek.c
··· 288 288 snd_hda_jack_unsol_event(codec, res >> 2); 289 289 } 290 290 291 + /* Change EAPD to verb control */ 292 + static void alc_fill_eapd_coef(struct hda_codec *codec) 293 + { 294 + int coef; 295 + 296 + coef = alc_get_coef0(codec); 297 + 298 + switch (codec->vendor_id) { 299 + case 0x10ec0262: 300 + alc_update_coef_idx(codec, 0x7, 0, 1<<5); 301 + break; 302 + case 0x10ec0267: 303 + case 0x10ec0268: 304 + alc_update_coef_idx(codec, 0x7, 0, 1<<13); 305 + break; 306 + case 0x10ec0269: 307 + if ((coef & 0x00f0) == 0x0010) 308 + alc_update_coef_idx(codec, 0xd, 0, 1<<14); 309 + if ((coef & 0x00f0) == 0x0020) 310 + alc_update_coef_idx(codec, 0x4, 1<<15, 0); 311 + if ((coef & 0x00f0) == 0x0030) 312 + alc_update_coef_idx(codec, 0x10, 1<<9, 0); 313 + break; 314 + case 0x10ec0280: 315 + case 0x10ec0284: 316 + case 0x10ec0290: 317 + case 0x10ec0292: 318 + alc_update_coef_idx(codec, 0x4, 1<<15, 0); 319 + break; 320 + case 0x10ec0233: 321 + case 0x10ec0255: 322 + case 0x10ec0282: 323 + case 0x10ec0283: 324 + case 0x10ec0286: 325 + case 0x10ec0288: 326 + alc_update_coef_idx(codec, 0x10, 1<<9, 0); 327 + break; 328 + case 0x10ec0285: 329 + case 0x10ec0293: 330 + alc_update_coef_idx(codec, 0xa, 1<<13, 0); 331 + break; 332 + case 0x10ec0662: 333 + if ((coef & 0x00f0) == 0x0030) 334 + alc_update_coef_idx(codec, 0x4, 1<<10, 0); /* EAPD Ctrl */ 335 + break; 336 + case 0x10ec0272: 337 + case 0x10ec0273: 338 + case 0x10ec0663: 339 + case 0x10ec0665: 340 + case 0x10ec0670: 341 + case 0x10ec0671: 342 + case 0x10ec0672: 343 + alc_update_coef_idx(codec, 0xd, 0, 1<<14); /* EAPD Ctrl */ 344 + break; 345 + case 0x10ec0668: 346 + alc_update_coef_idx(codec, 0x7, 3<<13, 0); 347 + break; 348 + case 0x10ec0867: 349 + alc_update_coef_idx(codec, 0x4, 1<<10, 0); 350 + break; 351 + case 0x10ec0888: 352 + if ((coef & 0x00f0) == 0x0020 || (coef & 0x00f0) == 0x0030) 353 + alc_update_coef_idx(codec, 0x7, 1<<5, 0); 354 + break; 355 + case 0x10ec0892: 356 + alc_update_coef_idx(codec, 0x7, 1<<5, 0); 357 
+ break; 358 + case 0x10ec0899: 359 + case 0x10ec0900: 360 + alc_update_coef_idx(codec, 0x7, 1<<1, 0); 361 + break; 362 + } 363 + } 364 + 291 365 /* additional initialization for ALC888 variants */ 292 366 static void alc888_coef_init(struct hda_codec *codec) 293 367 { 294 - if (alc_get_coef0(codec) == 0x20) 295 - /* alc888S-VC */ 296 - alc_write_coef_idx(codec, 7, 0x830); 297 - else 298 - /* alc888-VB */ 299 - alc_write_coef_idx(codec, 7, 0x3030); 300 - } 301 - 302 - /* additional initialization for ALC889 variants */ 303 - static void alc889_coef_init(struct hda_codec *codec) 304 - { 305 - alc_update_coef_idx(codec, 7, 0, 0x2010); 368 + switch (alc_get_coef0(codec) & 0x00f0) { 369 + /* alc888-VA */ 370 + case 0x00: 371 + /* alc888-VB */ 372 + case 0x10: 373 + alc_update_coef_idx(codec, 7, 0, 0x2030); /* Turn EAPD to High */ 374 + break; 375 + } 306 376 } 307 377 308 378 /* turn on/off EAPD control (only if available) */ ··· 413 343 /* generic EAPD initialization */ 414 344 static void alc_auto_init_amp(struct hda_codec *codec, int type) 415 345 { 346 + alc_fill_eapd_coef(codec); 416 347 alc_auto_setup_eapd(codec, true); 417 348 switch (type) { 418 349 case ALC_INIT_GPIO1: ··· 430 359 case 0x10ec0260: 431 360 alc_update_coefex_idx(codec, 0x1a, 7, 0, 0x2010); 432 361 break; 433 - case 0x10ec0262: 434 362 case 0x10ec0880: 435 363 case 0x10ec0882: 436 364 case 0x10ec0883: 437 365 case 0x10ec0885: 438 - case 0x10ec0887: 439 - /*case 0x10ec0889:*/ /* this causes an SPDIF problem */ 440 - case 0x10ec0900: 441 - alc889_coef_init(codec); 366 + alc_update_coef_idx(codec, 7, 0, 0x2030); 442 367 break; 443 368 case 0x10ec0888: 444 369 alc888_coef_init(codec); 445 370 break; 446 - #if 0 /* XXX: This may cause the silent output on speaker on some machines */ 447 - case 0x10ec0267: 448 - case 0x10ec0268: 449 - alc_update_coef_idx(codec, 7, 0, 0x3000); 450 - break; 451 - #endif /* XXX */ 452 371 } 453 372 break; 454 373 } ··· 1771 1710 { 1772 1711 if (action != 
HDA_FIXUP_ACT_INIT) 1773 1712 return; 1774 - alc889_coef_init(codec); 1713 + alc_update_coef_idx(codec, 7, 0, 0x2030); 1775 1714 } 1776 1715 1777 1716 /* toggle speaker-output according to the hp-jack state */ ··· 3411 3350 } 3412 3351 } 3413 3352 3353 + static void alc280_fixup_hp_gpio4(struct hda_codec *codec, 3354 + const struct hda_fixup *fix, int action) 3355 + { 3356 + /* Like hp_gpio_mic1_led, but also needs GPIO4 low to enable headphone amp */ 3357 + struct alc_spec *spec = codec->spec; 3358 + static const struct hda_verb gpio_init[] = { 3359 + { 0x01, AC_VERB_SET_GPIO_MASK, 0x18 }, 3360 + { 0x01, AC_VERB_SET_GPIO_DIRECTION, 0x18 }, 3361 + {} 3362 + }; 3363 + 3364 + if (action == HDA_FIXUP_ACT_PRE_PROBE) { 3365 + spec->gen.vmaster_mute.hook = alc269_fixup_hp_gpio_mute_hook; 3366 + spec->gen.cap_sync_hook = alc269_fixup_hp_cap_mic_mute_hook; 3367 + spec->gpio_led = 0; 3368 + spec->cap_mute_led_nid = 0x18; 3369 + snd_hda_add_verbs(codec, gpio_init); 3370 + codec->power_filter = led_power_filter; 3371 + } 3372 + } 3373 + 3414 3374 static void alc269_fixup_hp_line1_mic1_led(struct hda_codec *codec, 3415 3375 const struct hda_fixup *fix, int action) 3416 3376 { ··· 4299 4217 ALC283_FIXUP_BXBT2807_MIC, 4300 4218 ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED, 4301 4219 ALC282_FIXUP_ASPIRE_V5_PINS, 4220 + ALC280_FIXUP_HP_GPIO4, 4302 4221 }; 4303 4222 4304 4223 static const struct hda_fixup alc269_fixups[] = { ··· 4763 4680 { }, 4764 4681 }, 4765 4682 }, 4766 - 4683 + [ALC280_FIXUP_HP_GPIO4] = { 4684 + .type = HDA_FIXUP_FUNC, 4685 + .v.func = alc280_fixup_hp_gpio4, 4686 + }, 4767 4687 }; 4768 4688 4769 4689 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 4814 4728 SND_PCI_QUIRK(0x103c, 0x22cf, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 4815 4729 SND_PCI_QUIRK(0x103c, 0x22dc, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4816 4730 SND_PCI_QUIRK(0x103c, 0x22fb, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4817 - SND_PCI_QUIRK(0x103c, 0x8004, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4818 
4731 /* ALC290 */ 4819 4732 SND_PCI_QUIRK(0x103c, 0x221b, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4820 4733 SND_PCI_QUIRK(0x103c, 0x2221, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4821 4734 SND_PCI_QUIRK(0x103c, 0x2225, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4822 4735 SND_PCI_QUIRK(0x103c, 0x2246, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4823 - SND_PCI_QUIRK(0x103c, 0x2247, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4824 - SND_PCI_QUIRK(0x103c, 0x2248, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4825 - SND_PCI_QUIRK(0x103c, 0x2249, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4826 4736 SND_PCI_QUIRK(0x103c, 0x2253, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4827 4737 SND_PCI_QUIRK(0x103c, 0x2254, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4828 4738 SND_PCI_QUIRK(0x103c, 0x2255, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4829 4739 SND_PCI_QUIRK(0x103c, 0x2256, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4830 4740 SND_PCI_QUIRK(0x103c, 0x2257, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4831 - SND_PCI_QUIRK(0x103c, 0x2258, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4832 4741 SND_PCI_QUIRK(0x103c, 0x2259, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4833 4742 SND_PCI_QUIRK(0x103c, 0x225a, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4834 4743 SND_PCI_QUIRK(0x103c, 0x2260, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), ··· 4832 4751 SND_PCI_QUIRK(0x103c, 0x2265, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 4833 4752 SND_PCI_QUIRK(0x103c, 0x2272, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4834 4753 SND_PCI_QUIRK(0x103c, 0x2273, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4835 - SND_PCI_QUIRK(0x103c, 0x2277, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4836 4754 SND_PCI_QUIRK(0x103c, 0x2278, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED), 4837 4755 SND_PCI_QUIRK(0x103c, 0x227f, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), 4838 4756 SND_PCI_QUIRK(0x103c, 0x2282, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1), ··· 4884 4804 SND_PCI_QUIRK(0x17aa, 0x220e, "Thinkpad T440p", ALC292_FIXUP_TPT440_DOCK), 4885 4805 SND_PCI_QUIRK(0x17aa, 0x2210, "Thinkpad T540p", ALC292_FIXUP_TPT440_DOCK), 4886 4806 
SND_PCI_QUIRK(0x17aa, 0x2212, "Thinkpad T440", ALC292_FIXUP_TPT440_DOCK), 4887 - SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4807 + SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad X240", ALC292_FIXUP_TPT440_DOCK), 4888 4808 SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 4889 4809 SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP), 4890 4810 SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), ··· 5064 4984 {0x17, 0x40000000}, 5065 4985 {0x1d, 0x40700001}, 5066 4986 {0x21, 0x02211040}), 4987 + SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC280_FIXUP_HP_GPIO4, 4988 + {0x12, 0x90a60130}, 4989 + {0x13, 0x40000000}, 4990 + {0x14, 0x90170110}, 4991 + {0x15, 0x0421101f}, 4992 + {0x16, 0x411111f0}, 4993 + {0x17, 0x411111f0}, 4994 + {0x18, 0x411111f0}, 4995 + {0x19, 0x411111f0}, 4996 + {0x1a, 0x04a11020}, 4997 + {0x1b, 0x411111f0}, 4998 + {0x1d, 0x40748605}, 4999 + {0x1e, 0x411111f0}), 5067 5000 SND_HDA_PIN_QUIRK(0x10ec0280, 0x103c, "HP", ALC269_FIXUP_HP_GPIO_MIC1_LED, 5068 5001 {0x12, 0x90a60140}, 5069 5002 {0x13, 0x40000000}, ··· 5286 5193 alc_write_coef_idx(codec, 0x17, val | (1<<7)); 5287 5194 } 5288 5195 } 5289 - 5290 - /* Class D */ 5291 - alc_update_coef_idx(codec, 0xd, 0, 1<<14); 5292 5196 5293 5197 /* HP */ 5294 5198 alc_update_coef_idx(codec, 0x4, 0, 1<<11); ··· 5741 5651 } 5742 5652 } 5743 5653 5654 + static struct coef_fw alc668_coefs[] = { 5655 + WRITE_COEF(0x01, 0xbebe), WRITE_COEF(0x02, 0xaaaa), WRITE_COEF(0x03, 0x0), 5656 + WRITE_COEF(0x04, 0x0180), WRITE_COEF(0x06, 0x0), WRITE_COEF(0x07, 0x0f80), 5657 + WRITE_COEF(0x08, 0x0031), WRITE_COEF(0x0a, 0x0060), WRITE_COEF(0x0b, 0x0), 5658 + WRITE_COEF(0x0c, 0x7cf7), WRITE_COEF(0x0d, 0x1080), WRITE_COEF(0x0e, 0x7f7f), 5659 + WRITE_COEF(0x0f, 0xcccc), WRITE_COEF(0x10, 0xddcc), WRITE_COEF(0x11, 0x0001), 5660 + WRITE_COEF(0x13, 0x0), WRITE_COEF(0x14, 0x2aa0), WRITE_COEF(0x17, 0xa940), 5661 + WRITE_COEF(0x19, 0x0), 
WRITE_COEF(0x1a, 0x0), WRITE_COEF(0x1b, 0x0), 5662 + WRITE_COEF(0x1c, 0x0), WRITE_COEF(0x1d, 0x0), WRITE_COEF(0x1e, 0x7418), 5663 + WRITE_COEF(0x1f, 0x0804), WRITE_COEF(0x20, 0x4200), WRITE_COEF(0x21, 0x0468), 5664 + WRITE_COEF(0x22, 0x8ccc), WRITE_COEF(0x23, 0x0250), WRITE_COEF(0x24, 0x7418), 5665 + WRITE_COEF(0x27, 0x0), WRITE_COEF(0x28, 0x8ccc), WRITE_COEF(0x2a, 0xff00), 5666 + WRITE_COEF(0x2b, 0x8000), WRITE_COEF(0xa7, 0xff00), WRITE_COEF(0xa8, 0x8000), 5667 + WRITE_COEF(0xaa, 0x2e17), WRITE_COEF(0xab, 0xa0c0), WRITE_COEF(0xac, 0x0), 5668 + WRITE_COEF(0xad, 0x0), WRITE_COEF(0xae, 0x2ac6), WRITE_COEF(0xaf, 0xa480), 5669 + WRITE_COEF(0xb0, 0x0), WRITE_COEF(0xb1, 0x0), WRITE_COEF(0xb2, 0x0), 5670 + WRITE_COEF(0xb3, 0x0), WRITE_COEF(0xb4, 0x0), WRITE_COEF(0xb5, 0x1040), 5671 + WRITE_COEF(0xb6, 0xd697), WRITE_COEF(0xb7, 0x902b), WRITE_COEF(0xb8, 0xd697), 5672 + WRITE_COEF(0xb9, 0x902b), WRITE_COEF(0xba, 0xb8ba), WRITE_COEF(0xbb, 0xaaab), 5673 + WRITE_COEF(0xbc, 0xaaaf), WRITE_COEF(0xbd, 0x6aaa), WRITE_COEF(0xbe, 0x1c02), 5674 + WRITE_COEF(0xc0, 0x00ff), WRITE_COEF(0xc1, 0x0fa6), 5675 + {} 5676 + }; 5677 + 5678 + static void alc668_restore_default_value(struct hda_codec *codec) 5679 + { 5680 + alc_process_coef_fw(codec, alc668_coefs); 5681 + } 5682 + 5744 5683 enum { 5745 5684 ALC662_FIXUP_ASPIRE, 5746 5685 ALC662_FIXUP_LED_GPIO1, ··· 6196 6077 {} 6197 6078 }; 6198 6079 6199 - static void alc662_fill_coef(struct hda_codec *codec) 6200 - { 6201 - int coef; 6202 - 6203 - coef = alc_get_coef0(codec); 6204 - 6205 - switch (codec->vendor_id) { 6206 - case 0x10ec0662: 6207 - if ((coef & 0x00f0) == 0x0030) 6208 - alc_update_coef_idx(codec, 0x4, 1<<10, 0); /* EAPD Ctrl */ 6209 - break; 6210 - case 0x10ec0272: 6211 - case 0x10ec0273: 6212 - case 0x10ec0663: 6213 - case 0x10ec0665: 6214 - case 0x10ec0670: 6215 - case 0x10ec0671: 6216 - case 0x10ec0672: 6217 - alc_update_coef_idx(codec, 0xd, 0, 1<<14); /* EAPD Ctrl */ 6218 - break; 6219 - } 6220 - } 6221 - 6222 6080 /* 6223 
6081 */ 6224 6082 static int patch_alc662(struct hda_codec *codec) ··· 6214 6118 6215 6119 alc_fix_pll_init(codec, 0x20, 0x04, 15); 6216 6120 6217 - spec->init_hook = alc662_fill_coef; 6218 - alc662_fill_coef(codec); 6121 + switch (codec->vendor_id) { 6122 + case 0x10ec0668: 6123 + spec->init_hook = alc668_restore_default_value; 6124 + break; 6125 + } 6219 6126 6220 6127 snd_hda_pick_fixup(codec, alc662_fixup_models, 6221 6128 alc662_fixup_tbl, alc662_fixups);
+7 -2
sound/usb/card.c
··· 591 591 { 592 592 struct snd_card *card; 593 593 struct list_head *p; 594 + bool was_shutdown; 594 595 595 596 if (chip == (void *)-1L) 596 597 return; 597 598 598 599 card = chip->card; 599 600 down_write(&chip->shutdown_rwsem); 601 + was_shutdown = chip->shutdown; 600 602 chip->shutdown = 1; 601 603 up_write(&chip->shutdown_rwsem); 602 604 603 605 mutex_lock(&register_mutex); 604 - chip->num_interfaces--; 605 - if (chip->num_interfaces <= 0) { 606 + if (!was_shutdown) { 606 607 struct snd_usb_endpoint *ep; 607 608 608 609 snd_card_disconnect(card); ··· 623 622 list_for_each(p, &chip->mixer_list) { 624 623 snd_usb_mixer_disconnect(p); 625 624 } 625 + } 626 + 627 + chip->num_interfaces--; 628 + if (chip->num_interfaces <= 0) { 626 629 usb_chip[chip->index] = NULL; 627 630 mutex_unlock(&register_mutex); 628 631 snd_card_free_when_closed(card);
+6
sound/usb/mixer_quirks.c
··· 885 885 return changed; 886 886 } 887 887 888 + static void kctl_private_value_free(struct snd_kcontrol *kctl) 889 + { 890 + kfree((void *)kctl->private_value); 891 + } 892 + 888 893 static int snd_ftu_create_effect_switch(struct usb_mixer_interface *mixer, 889 894 int validx, int bUnitID) 890 895 { ··· 924 919 return -ENOMEM; 925 920 } 926 921 922 + kctl->private_free = kctl_private_value_free; 927 923 err = snd_ctl_add(mixer->chip->card, kctl); 928 924 if (err < 0) 929 925 return err;
+1 -1
tools/testing/selftests/ftrace/ftracetest
··· 82 82 } 83 83 84 84 # Parameters 85 - DEBUGFS_DIR=`grep debugfs /proc/mounts | cut -f2 -d' '` 85 + DEBUGFS_DIR=`grep debugfs /proc/mounts | cut -f2 -d' ' | head -1` 86 86 TRACING_DIR=$DEBUGFS_DIR/tracing 87 87 TOP_DIR=`absdir $0` 88 88 TEST_DIR=$TOP_DIR/test.d
+1 -1
tools/testing/selftests/net/psock_fanout.c
··· 128 128 struct tpacket2_hdr *header = ring; 129 129 int count = 0; 130 130 131 - while (header->tp_status & TP_STATUS_USER && count < RING_NUM_FRAMES) { 131 + while (count < RING_NUM_FRAMES && header->tp_status & TP_STATUS_USER) { 132 132 count++; 133 133 header = ring + (count * getpagesize()); 134 134 }