Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
net/dsa/slave.c

net/dsa/slave.c simply had overlapping changes.

Signed-off-by: David S. Miller <davem@davemloft.net>

+2420 -1370
+1 -1
Documentation/Changes
···
43 43   o grub 0.93                    # grub --version || grub-install --version
44 44   o mcelog 0.6                   # mcelog --version
45 45   o iptables 1.4.2               # iptables -V
46    -  o openssl & libcrypto 1.0.1k  # openssl version
   46 +  o openssl & libcrypto 1.0.0   # openssl version
47 47
48 48
49 49   Kernel compilation
+18 -2
Documentation/devicetree/bindings/interrupt-controller/qca,ath79-misc-intc.txt
···
 4  4   interrupt.
 5  5
 6  6   Required Properties:
 7    -  - compatible: has to be "qca,<soctype>-cpu-intc", "qca,ar7100-misc-intc"
 8    -    as fallback
    7 +  - compatible: has to be "qca,<soctype>-cpu-intc", "qca,ar7100-misc-intc" or
    8 +    "qca,<soctype>-cpu-intc", "qca,ar7240-misc-intc"
 9  9   - reg: Base address and size of the controllers memory area
10 10   - interrupt-parent: phandle of the parent interrupt controller.
11 11   - interrupts: Interrupt specifier for the controllers interrupt.
12 12   - interrupt-controller : Identifies the node as an interrupt controller
13 13   - #interrupt-cells : Specifies the number of cells needed to encode interrupt
14 14     source, should be 1
   15 +
   16 +  Compatible fallback depends on the SoC. Use ar7100 for ar71xx and ar913x,
   17 +  use ar7240 for all other SoCs.
15 18
16 19   Please refer to interrupts.txt in this directory for details of the common
17 20   Interrupt Controllers bindings used by client devices.
···
23 20
24 21   interrupt-controller@18060010 {
25 22       compatible = "qca,ar9132-misc-intc", qca,ar7100-misc-intc";
   23 +     reg = <0x18060010 0x4>;
   24 +
   25 +     interrupt-parent = <&cpuintc>;
   26 +     interrupts = <6>;
   27 +
   28 +     interrupt-controller;
   29 +     #interrupt-cells = <1>;
   30 + };
   31 +
   32 + Another example:
   33 +
   34 + interrupt-controller@18060010 {
   35 +     compatible = "qca,ar9331-misc-intc", qca,ar7240-misc-intc";
26 36       reg = <0x18060010 0x4>;
27 37
28 38       interrupt-parent = <&cpuintc>;
+1
Documentation/devicetree/bindings/usb/ci-hdrc-usb2.txt
···
 6  6       "lsi,zevio-usb"
 7  7       "qcom,ci-hdrc"
 8  8       "chipidea,usb2"
    9 +      "xlnx,zynq-usb-2.20a"
 9 10   - reg: base address and length of the registers
10 11   - interrupts: interrupt for the USB controller
11 12
+38 -13
Documentation/power/pci.txt
···
 979  979  (alternatively, the runtime_suspend() callback will have to check if the
 980  980  device should really be suspended and return -EAGAIN if that is not the case).
 981  981
 982      - The runtime PM of PCI devices is disabled by default.  It is also blocked by
 983      - pci_pm_init() that runs the pm_runtime_forbid() helper function.  If a PCI
 984      - driver implements the runtime PM callbacks and intends to use the runtime PM
 985      - framework provided by the PM core and the PCI subsystem, it should enable this
 986      - feature by executing the pm_runtime_enable() helper function.  However, the
 987      - driver should not call the pm_runtime_allow() helper function unblocking
 988      - the runtime PM of the device.  Instead, it should allow user space or some
 989      - platform-specific code to do that (user space can do it via sysfs), although
 990      - once it has called pm_runtime_enable(), it must be prepared to handle the
      982 + The runtime PM of PCI devices is enabled by default by the PCI core.  PCI
      983 + device drivers do not need to enable it and should not attempt to do so.
      984 + However, it is blocked by pci_pm_init() that runs the pm_runtime_forbid()
      985 + helper function.  In addition to that, the runtime PM usage counter of
      986 + each PCI device is incremented by local_pci_probe() before executing the
      987 + probe callback provided by the device's driver.
      988 +
      989 + If a PCI driver implements the runtime PM callbacks and intends to use the
      990 + runtime PM framework provided by the PM core and the PCI subsystem, it needs
      991 + to decrement the device's runtime PM usage counter in its probe callback
      992 + function.  If it doesn't do that, the counter will always be different from
      993 + zero for the device and it will never be runtime-suspended.  The simplest
      994 + way to do that is by calling pm_runtime_put_noidle(), but if the driver
      995 + wants to schedule an autosuspend right away, for example, it may call
      996 + pm_runtime_put_autosuspend() instead for this purpose.  Generally, it
      997 + just needs to call a function that decrements the device's usage counter
      998 + from its probe routine to make runtime PM work for the device.
      999 +
     1000 + It is important to remember that the driver's runtime_suspend() callback
     1001 + may be executed right after the usage counter has been decremented, because
     1002 + user space may already have caused the pm_runtime_allow() helper function
     1003 + unblocking the runtime PM of the device to run via sysfs, so the driver must
     1004 + be prepared to cope with that.
     1005 +
     1006 + The driver itself should not call pm_runtime_allow(), though.  Instead, it
     1007 + should let user space or some platform-specific code do that (user space can
     1008 + do it via sysfs as stated above), but it must be prepared to handle the
 991 1009  runtime PM of the device correctly as soon as pm_runtime_allow() is called
 992      - (which may happen at any time).  [It also is possible that user space causes
 993      - pm_runtime_allow() to be called via sysfs before the driver is loaded, so in
 994      - fact the driver has to be prepared to handle the runtime PM of the device as
 995      - soon as it calls pm_runtime_enable().]
     1010 + (which may happen at any time, even before the driver is loaded).
     1011 +
     1012 + When the driver's remove callback runs, it has to balance the decrement of
     1013 + the device's runtime PM usage counter at probe time.  For this reason,
     1014 + if it has decremented the counter in its probe callback, it must run
     1015 + pm_runtime_get_noresume() in its remove callback.  [Since the core carries
     1016 + out a runtime resume of the device and bumps up the device's usage counter
     1017 + before running the driver's remove callback, the runtime PM of the device
     1018 + is effectively disabled for the duration of the remove execution and all
     1019 + runtime PM helper functions incrementing the device's usage counter are
     1020 + then effectively equivalent to pm_runtime_get_noresume().]
 996 1021
 997 1022  The runtime PM framework works by processing requests to suspend or resume
 998 1023  devices, or to check if they are idle (in which cases it is reasonable to
+1
Documentation/ptp/testptp.c
···
18 18    * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
19 19    */
20 20   #define _GNU_SOURCE
   21 +  #define __SANE_USERSPACE_TYPES__ /* For PPC64, to get LL64 types */
21 22   #include <errno.h>
22 23   #include <fcntl.h>
23 24   #include <inttypes.h>
+6 -8
MAINTAINERS
···
  615   615  F: drivers/hwmon/fam15h_power.c
  616   616
  617   617  AMD GEODE CS5536 USB DEVICE CONTROLLER DRIVER
  618       - M: Thomas Dahlmann <dahlmann.thomas@arcor.de>
  619   618  L: linux-geode@lists.infradead.org (moderated for non-subscribers)
  620       - S: Supported
        619 + S: Orphan
  621   620  F: drivers/usb/gadget/udc/amd5536udc.*
  622   621
  623   622  AMD GEODE PROCESSOR/CHIPSET SUPPORT
···
 3400  3401
 3401  3402  DIGI EPCA PCI PRODUCTS
 3402  3403  M: Lidza Louina <lidza.louina@gmail.com>
 3403       - M: Mark Hounschell <markh@compro.net>
 3404  3404  M: Daeseok Youn <daeseok.youn@gmail.com>
 3405  3405  L: driverdev-devel@linuxdriverproject.org
 3406  3406  S: Maintained
···
 5957  5959  KERNEL VIRTUAL MACHINE (KVM) FOR AMD-V
 5958  5960  M: Joerg Roedel <joro@8bytes.org>
 5959  5961  L: kvm@vger.kernel.org
 5960       - W: http://kvm.qumranet.com
       5962 + W: http://www.linux-kvm.org/
 5961  5963  S: Maintained
 5962  5964  F: arch/x86/include/asm/svm.h
 5963  5965  F: arch/x86/kvm/svm.c
···
 5965  5967  KERNEL VIRTUAL MACHINE (KVM) FOR POWERPC
 5966  5968  M: Alexander Graf <agraf@suse.com>
 5967  5969  L: kvm-ppc@vger.kernel.org
 5968       - W: http://kvm.qumranet.com
       5970 + W: http://www.linux-kvm.org/
 5969  5971  T: git git://github.com/agraf/linux-2.6.git
 5970  5972  S: Supported
 5971  5973  F: arch/powerpc/include/asm/kvm*
···
 9914  9916  STAGING - LUSTRE PARALLEL FILESYSTEM
 9915  9917  M: Oleg Drokin <oleg.drokin@intel.com>
 9916  9918  M: Andreas Dilger <andreas.dilger@intel.com>
 9917       - L: HPDD-discuss@lists.01.org (moderated for non-subscribers)
 9918       - W: http://lustre.opensfs.org/
       9919 + L: lustre-devel@lists.lustre.org (moderated for non-subscribers)
       9920 + W: http://wiki.lustre.org/
 9919  9921  S: Maintained
 9920  9922  F: drivers/staging/lustre
 9921  9923
···
11207 11209  F: include/linux/vlynq.h
11208 11210
11209 11211  VME SUBSYSTEM
11210       - M: Martyn Welch <martyn.welch@ge.com>
      11212 + M: Martyn Welch <martyn@welchs.me.uk>
11211 11213  M: Manohar Vanga <manohar.vanga@gmail.com>
11212 11214  M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
11213 11215  L: devel@driverdev.osuosl.org
+1 -1
Makefile
···
1 1  VERSION = 4
2 2  PATCHLEVEL = 3
3 3  SUBLEVEL = 0
4   - EXTRAVERSION = -rc2
  4 + EXTRAVERSION = -rc3
5 5  NAME = Hurr durr I'ma sheep
6 6
7 7  # *DOCUMENTATION*
+2 -2
arch/arm/boot/dts/am335x-phycore-som.dtsi
···
252 252  };
253 253
254 254  vdd1_reg: regulator@2 {
255     -     /* VDD_MPU voltage limits 0.95V - 1.26V with +/-4% tolerance */
    255 +     /* VDD_MPU voltage limits 0.95V - 1.325V with +/-4% tolerance */
256 256      regulator-name = "vdd_mpu";
257 257      regulator-min-microvolt = <912500>;
258     -     regulator-max-microvolt = <1312500>;
    258 +     regulator-max-microvolt = <1378000>;
259 259      regulator-boot-on;
260 260      regulator-always-on;
261 261  };
+29 -17
arch/arm/boot/dts/am57xx-beagle-x15.dts
···
 98  98      pinctrl-0 = <&extcon_usb1_pins>;
 99  99  };
100 100
101     - extcon_usb2: extcon_usb2 {
102     -     compatible = "linux,extcon-usb-gpio";
103     -     id-gpio = <&gpio7 24 GPIO_ACTIVE_HIGH>;
104     -     pinctrl-names = "default";
105     -     pinctrl-0 = <&extcon_usb2_pins>;
106     - };
107     -
108 101  hdmi0: connector {
109 102      compatible = "hdmi-connector";
110 103      label = "hdmi";
···
319 326      >;
320 327  };
321 328
322     - extcon_usb2_pins: extcon_usb2_pins {
323     -     pinctrl-single,pins = <
324     -         0x3e8 (PIN_INPUT_PULLUP | MUX_MODE14) /* uart1_ctsn.gpio7_24 */
325     -     >;
326     - };
327     -
328 329  tpd12s015_pins: pinmux_tpd12s015_pins {
329 330      pinctrl-single,pins = <
330 331          0x3b0 (PIN_OUTPUT | MUX_MODE14) /* gpio7_10 CT_CP_HPD */
···
419 432  };
420 433
421 434  ldo3_reg: ldo3 {
422     -     /* VDDA_1V8_PHY */
    435 +     /* VDDA_1V8_PHYA */
423 436      regulator-name = "ldo3";
    437 +     regulator-min-microvolt = <1800000>;
    438 +     regulator-max-microvolt = <1800000>;
    439 +     regulator-always-on;
    440 +     regulator-boot-on;
    441 + };
    442 +
    443 + ldo4_reg: ldo4 {
    444 +     /* VDDA_1V8_PHYB */
    445 +     regulator-name = "ldo4";
424 446      regulator-min-microvolt = <1800000>;
425 447      regulator-max-microvolt = <1800000>;
426 448      regulator-always-on;
···
491 495      gpio-controller;
492 496      #gpio-cells = <2>;
493 497  };
    498 +
    499 + extcon_usb2: tps659038_usb {
    500 +     compatible = "ti,palmas-usb-vid";
    501 +     ti,enable-vbus-detection;
    502 +     ti,enable-id-detection;
    503 +     id-gpios = <&gpio7 24 GPIO_ACTIVE_HIGH>;
    504 + };
    505 +
494 506  };
495 507
496 508  tmp102: tmp102@48 {
···
521 517  mcp_rtc: rtc@6f {
522 518      compatible = "microchip,mcp7941x";
523 519      reg = <0x6f>;
524     -     interrupts = <GIC_SPI 2 IRQ_TYPE_EDGE_RISING>;  /* IRQ_SYS_1N */
    520 +     interrupts-extended = <&crossbar_mpu GIC_SPI 2 IRQ_TYPE_EDGE_RISING>,
    521 +                           <&dra7_pmx_core 0x424>;
525 522
526 523      pinctrl-names = "default";
527 524      pinctrl-0 = <&mcp79410_pins_default>;
···
584 579      pinctrl-0 = <&mmc1_pins_default>;
585 580
586 581      vmmc-supply = <&ldo1_reg>;
587     -     vmmc_aux-supply = <&vdd_3v3>;
588 582      bus-width = <4>;
589 583      cd-gpios = <&gpio6 27 0>;  /* gpio 219 */
590 584  };
···
627 623  };
628 624
629 625  &usb2 {
    626 +     /*
    627 +      * Stand alone usage is peripheral only.
    628 +      * However, with some resistor modifications
    629 +      * this port can be used via expansion connectors
    630 +      * as "host" or "dual-role". If so, provide
    631 +      * the necessary dr_mode override in the expansion
    632 +      * board's DT.
    633 +      */
630 634      dr_mode = "peripheral";
631 635  };
···
693 681
694 682  &hdmi {
695 683      status = "ok";
696     -     vdda-supply = <&ldo3_reg>;
    684 +     vdda-supply = <&ldo4_reg>;
697 685
698 686      pinctrl-names = "default";
699 687      pinctrl-0 = <&hdmi_pins>;
+2 -2
arch/arm/boot/dts/dm8148-evm.dts
···
19 19
20 20  &cpsw_emac0 {
21 21      phy_id = <&davinci_mdio>, <0>;
22    -     phy-mode = "mii";
   22 +     phy-mode = "rgmii";
23 23  };
24 24
25 25  &cpsw_emac1 {
26 26      phy_id = <&davinci_mdio>, <1>;
27    -     phy-mode = "mii";
   27 +     phy-mode = "rgmii";
28 28  };
+3 -3
arch/arm/boot/dts/dm8148-t410.dts
···
 8  8  #include "dm814x.dtsi"
 9  9
10 10  / {
11    -     model = "DM8148 EVM";
   11 +     model = "HP t410 Smart Zero Client";
12 12      compatible = "hp,t410", "ti,dm8148";
13 13
14 14      memory {
···
19 19
20 20  &cpsw_emac0 {
21 21      phy_id = <&davinci_mdio>, <0>;
22    -     phy-mode = "mii";
   22 +     phy-mode = "rgmii";
23 23  };
24 24
25 25  &cpsw_emac1 {
26 26      phy_id = <&davinci_mdio>, <1>;
27    -     phy-mode = "mii";
   27 +     phy-mode = "rgmii";
28 28  };
+4 -4
arch/arm/boot/dts/dm814x.dtsi
···
181 181      ti,hwmods = "timer3";
182 182  };
183 183
184     - control: control@160000 {
    184 + control: control@140000 {
185 185      compatible = "ti,dm814-scm", "simple-bus";
186     -     reg = <0x160000 0x16d000>;
    186 +     reg = <0x140000 0x16d000>;
187 187      #address-cells = <1>;
188 188      #size-cells = <1>;
189 189      ranges = <0 0x160000 0x16d000>;
···
321 321      mac-address = [ 00 00 00 00 00 00 ];
322 322  };
323 323
324     - phy_sel: cpsw-phy-sel@0x48160650 {
    324 + phy_sel: cpsw-phy-sel@48140650 {
325 325      compatible = "ti,am3352-cpsw-phy-sel";
326     -     reg= <0x48160650 0x4>;
    326 +     reg= <0x48140650 0x4>;
327 327      reg-names = "gmii-sel";
328 328  };
329 329  };
+3 -2
arch/arm/boot/dts/dra7.dtsi
···
 120  120      reg = <0x0 0x1400>;
 121  121      #address-cells = <1>;
 122  122      #size-cells = <1>;
      123 +     ranges = <0 0x0 0x1400>;
 123  124
 124  125      pbias_regulator: pbias_regulator {
 125      -         compatible = "ti,pbias-omap";
      126 +         compatible = "ti,pbias-dra7", "ti,pbias-omap";
 126  127          reg = <0xe00 0x4>;
 127  128          syscon = <&scm_conf>;
 128  129          pbias_mmc_reg: pbias_mmc_omap5 {
···
1418 1417      ti,irqs-safe-map = <0>;
1419 1418  };
1420 1419
1421      - mac: ethernet@4a100000 {
     1420 + mac: ethernet@48484000 {
1422 1421      compatible = "ti,dra7-cpsw","ti,cpsw";
1423 1422      ti,hwmods = "gmac";
1424 1423      clocks = <&dpll_gmac_ck>, <&gmac_gmii_ref_clk_div>;
+2 -1
arch/arm/boot/dts/omap2430.dtsi
···
56 56  reg = <0x270 0x240>;
57 57  #address-cells = <1>;
58 58  #size-cells = <1>;
   59 + ranges = <0 0x270 0x240>;
59 60
60 61  scm_clocks: clocks {
61 62      #address-cells = <1>;
···
64 63  };
65 64
66 65  pbias_regulator: pbias_regulator {
67    -     compatible = "ti,pbias-omap";
   66 +     compatible = "ti,pbias-omap2", "ti,pbias-omap";
68 67      reg = <0x230 0x4>;
69 68      syscon = <&scm_conf>;
70 69      pbias_mmc_reg: pbias_mmc_omap2430 {
+1 -1
arch/arm/boot/dts/omap3-beagle.dts
···
202 202
203 203  tfp410_pins: pinmux_tfp410_pins {
204 204      pinctrl-single,pins = <
205     -         0x194 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */
    205 +         0x196 (PIN_OUTPUT | MUX_MODE4) /* hdq_sio.gpio_170 */
206 206      >;
207 207  };
208 208
-6
arch/arm/boot/dts/omap3-igep.dtsi
···
78 78      >;
79 79  };
80 80
81    - smsc9221_pins: pinmux_smsc9221_pins {
82    -     pinctrl-single,pins = <
83    -         0x1a2 (PIN_INPUT | MUX_MODE4) /* mcspi1_cs2.gpio_176 */
84    -     >;
85    - };
86    -
87 81  i2c1_pins: pinmux_i2c1_pins {
88 82      pinctrl-single,pins = <
89 83          0x18a (PIN_INPUT | MUX_MODE0) /* i2c1_scl.i2c1_scl */
+6
arch/arm/boot/dts/omap3-igep0020-common.dtsi
···
156 156          OMAP3_CORE1_IOPAD(0x217a, PIN_INPUT | MUX_MODE0) /* uart2_rx.uart2_rx */
157 157      >;
158 158  };
    159 +
    160 + smsc9221_pins: pinmux_smsc9221_pins {
    161 +     pinctrl-single,pins = <
    162 +         OMAP3_CORE1_IOPAD(0x21d2, PIN_INPUT | MUX_MODE4) /* mcspi1_cs2.gpio_176 */
    163 +     >;
    164 + };
159 165  };
160 166
161 167  &omap3_pmx_core2 {
+13 -12
arch/arm/boot/dts/omap3.dtsi
···
113 113  };
114 114
115 115  scm_conf: scm_conf@270 {
116     -     compatible = "syscon";
    116 +     compatible = "syscon", "simple-bus";
117 117      reg = <0x270 0x330>;
118 118      #address-cells = <1>;
119 119      #size-cells = <1>;
    120 +     ranges = <0 0x270 0x330>;
    121 +
    122 +     pbias_regulator: pbias_regulator {
    123 +         compatible = "ti,pbias-omap3", "ti,pbias-omap";
    124 +         reg = <0x2b0 0x4>;
    125 +         syscon = <&scm_conf>;
    126 +         pbias_mmc_reg: pbias_mmc_omap2430 {
    127 +             regulator-name = "pbias_mmc_omap2430";
    128 +             regulator-min-microvolt = <1800000>;
    129 +             regulator-max-microvolt = <3000000>;
    130 +         };
    131 +     };
120 132
121 133      scm_clocks: clocks {
122 134          #address-cells = <1>;
···
212 200      #dma-cells = <1>;
213 201      dma-channels = <32>;
214 202      dma-requests = <96>;
215     - };
216     -
217     - pbias_regulator: pbias_regulator {
218     -     compatible = "ti,pbias-omap";
219     -     reg = <0x2b0 0x4>;
220     -     syscon = <&scm_conf>;
221     -     pbias_mmc_reg: pbias_mmc_omap2430 {
222     -         regulator-name = "pbias_mmc_omap2430";
223     -         regulator-min-microvolt = <1800000>;
224     -         regulator-max-microvolt = <3000000>;
225     -     };
226 203  };
227 204
228 205  gpio1: gpio@48310000 {
+2 -1
arch/arm/boot/dts/omap4.dtsi
···
196 196  reg = <0x5a0 0x170>;
197 197  #address-cells = <1>;
198 198  #size-cells = <1>;
    199 + ranges = <0 0x5a0 0x170>;
199 200
200 201  pbias_regulator: pbias_regulator {
201     -     compatible = "ti,pbias-omap";
    202 +     compatible = "ti,pbias-omap4", "ti,pbias-omap";
202 203      reg = <0x60 0x4>;
203 204      syscon = <&omap4_padconf_global>;
204 205      pbias_mmc_reg: pbias_mmc_omap4 {
+2 -2
arch/arm/boot/dts/omap5-uevm.dts
···
174 174
175 175  i2c5_pins: pinmux_i2c5_pins {
176 176      pinctrl-single,pins = <
177     -         0x184 (PIN_INPUT | MUX_MODE0) /* i2c5_scl */
178     -         0x186 (PIN_INPUT | MUX_MODE0) /* i2c5_sda */
    177 +         0x186 (PIN_INPUT | MUX_MODE0) /* i2c5_scl */
    178 +         0x188 (PIN_INPUT | MUX_MODE0) /* i2c5_sda */
179 179      >;
180 180  };
181 181
+2 -1
arch/arm/boot/dts/omap5.dtsi
···
185 185  reg = <0x5a0 0xec>;
186 186  #address-cells = <1>;
187 187  #size-cells = <1>;
    188 + ranges = <0 0x5a0 0xec>;
188 189
189 190  pbias_regulator: pbias_regulator {
190     -     compatible = "ti,pbias-omap";
    191 +     compatible = "ti,pbias-omap5", "ti,pbias-omap";
191 192      reg = <0x60 0x4>;
192 193      syscon = <&omap5_padconf_global>;
193 194      pbias_mmc_reg: pbias_mmc_omap5 {
+1
arch/arm/boot/dts/rk3288-veyron.dtsi
···
158 158  };
159 159
160 160  &hdmi {
    161 +     ddc-i2c-bus = <&i2c5>;
161 162      status = "okay";
162 163  };
163 164
+36 -38
arch/arm/boot/dts/stih407.dtsi
···
103 103              <&clk_s_d0_quadfs 0>,
104 104              <&clk_s_d2_quadfs 0>,
105 105              <&clk_s_d2_quadfs 0>;
106     -         ranges;
    106 +     };
107 107
108     -         sti-hdmi@8d04000 {
109     -             compatible = "st,stih407-hdmi";
110     -             reg = <0x8d04000 0x1000>;
111     -             reg-names = "hdmi-reg";
112     -             interrupts = <GIC_SPI 106 IRQ_TYPE_NONE>;
113     -             interrupt-names = "irq";
114     -             clock-names = "pix",
115     -                           "tmds",
116     -                           "phy",
117     -                           "audio",
118     -                           "main_parent",
119     -                           "aux_parent";
    108 +     sti-hdmi@8d04000 {
    109 +         compatible = "st,stih407-hdmi";
    110 +         reg = <0x8d04000 0x1000>;
    111 +         reg-names = "hdmi-reg";
    112 +         interrupts = <GIC_SPI 106 IRQ_TYPE_NONE>;
    113 +         interrupt-names = "irq";
    114 +         clock-names = "pix",
    115 +                       "tmds",
    116 +                       "phy",
    117 +                       "audio",
    118 +                       "main_parent",
    119 +                       "aux_parent";
120 120
121     -             clocks = <&clk_s_d2_flexgen CLK_PIX_HDMI>,
122     -                      <&clk_s_d2_flexgen CLK_TMDS_HDMI>,
123     -                      <&clk_s_d2_flexgen CLK_REF_HDMIPHY>,
124     -                      <&clk_s_d0_flexgen CLK_PCM_0>,
125     -                      <&clk_s_d2_quadfs 0>,
126     -                      <&clk_s_d2_quadfs 1>;
    121 +         clocks = <&clk_s_d2_flexgen CLK_PIX_HDMI>,
    122 +                  <&clk_s_d2_flexgen CLK_TMDS_HDMI>,
    123 +                  <&clk_s_d2_flexgen CLK_REF_HDMIPHY>,
    124 +                  <&clk_s_d0_flexgen CLK_PCM_0>,
    125 +                  <&clk_s_d2_quadfs 0>,
    126 +                  <&clk_s_d2_quadfs 1>;
127 127
128     -             hdmi,hpd-gpio = <&pio5 3>;
129     -             reset-names = "hdmi";
130     -             resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
131     -             ddc = <&hdmiddc>;
    128 +         hdmi,hpd-gpio = <&pio5 3>;
    129 +         reset-names = "hdmi";
    130 +         resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
    131 +         ddc = <&hdmiddc>;
    132 +     };
132 133
133     -         };
134     -
135     -         sti-hda@8d02000 {
136     -             compatible = "st,stih407-hda";
137     -             reg = <0x8d02000 0x400>, <0x92b0120 0x4>;
138     -             reg-names = "hda-reg", "video-dacs-ctrl";
139     -             clock-names = "pix",
140     -                           "hddac",
141     -                           "main_parent",
142     -                           "aux_parent";
143     -             clocks = <&clk_s_d2_flexgen CLK_PIX_HDDAC>,
144     -                      <&clk_s_d2_flexgen CLK_HDDAC>,
145     -                      <&clk_s_d2_quadfs 0>,
146     -                      <&clk_s_d2_quadfs 1>;
147     -         };
    134 +     sti-hda@8d02000 {
    135 +         compatible = "st,stih407-hda";
    136 +         reg = <0x8d02000 0x400>, <0x92b0120 0x4>;
    137 +         reg-names = "hda-reg", "video-dacs-ctrl";
    138 +         clock-names = "pix",
    139 +                       "hddac",
    140 +                       "main_parent",
    141 +                       "aux_parent";
    142 +         clocks = <&clk_s_d2_flexgen CLK_PIX_HDDAC>,
    143 +                  <&clk_s_d2_flexgen CLK_HDDAC>,
    144 +                  <&clk_s_d2_quadfs 0>,
    145 +                  <&clk_s_d2_quadfs 1>;
148 146      };
149 147  };
150 148  };
+36 -38
arch/arm/boot/dts/stih410.dtsi
···
178 178              <&clk_s_d0_quadfs 0>,
179 179              <&clk_s_d2_quadfs 0>,
180 180              <&clk_s_d2_quadfs 0>;
181     -         ranges;
    181 +     };
182 182
183     -         sti-hdmi@8d04000 {
184     -             compatible = "st,stih407-hdmi";
185     -             reg = <0x8d04000 0x1000>;
186     -             reg-names = "hdmi-reg";
187     -             interrupts = <GIC_SPI 106 IRQ_TYPE_NONE>;
188     -             interrupt-names = "irq";
189     -             clock-names = "pix",
190     -                           "tmds",
191     -                           "phy",
192     -                           "audio",
193     -                           "main_parent",
194     -                           "aux_parent";
    183 +     sti-hdmi@8d04000 {
    184 +         compatible = "st,stih407-hdmi";
    185 +         reg = <0x8d04000 0x1000>;
    186 +         reg-names = "hdmi-reg";
    187 +         interrupts = <GIC_SPI 106 IRQ_TYPE_NONE>;
    188 +         interrupt-names = "irq";
    189 +         clock-names = "pix",
    190 +                       "tmds",
    191 +                       "phy",
    192 +                       "audio",
    193 +                       "main_parent",
    194 +                       "aux_parent";
195 195
196     -             clocks = <&clk_s_d2_flexgen CLK_PIX_HDMI>,
197     -                      <&clk_s_d2_flexgen CLK_TMDS_HDMI>,
198     -                      <&clk_s_d2_flexgen CLK_REF_HDMIPHY>,
199     -                      <&clk_s_d0_flexgen CLK_PCM_0>,
200     -                      <&clk_s_d2_quadfs 0>,
201     -                      <&clk_s_d2_quadfs 1>;
    196 +         clocks = <&clk_s_d2_flexgen CLK_PIX_HDMI>,
    197 +                  <&clk_s_d2_flexgen CLK_TMDS_HDMI>,
    198 +                  <&clk_s_d2_flexgen CLK_REF_HDMIPHY>,
    199 +                  <&clk_s_d0_flexgen CLK_PCM_0>,
    200 +                  <&clk_s_d2_quadfs 0>,
    201 +                  <&clk_s_d2_quadfs 1>;
202 202
203     -             hdmi,hpd-gpio = <&pio5 3>;
204     -             reset-names = "hdmi";
205     -             resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
206     -             ddc = <&hdmiddc>;
    203 +         hdmi,hpd-gpio = <&pio5 3>;
    204 +         reset-names = "hdmi";
    205 +         resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
    206 +         ddc = <&hdmiddc>;
    207 +     };
207 208
208     -         };
209     -
210     -         sti-hda@8d02000 {
211     -             compatible = "st,stih407-hda";
212     -             reg = <0x8d02000 0x400>, <0x92b0120 0x4>;
213     -             reg-names = "hda-reg", "video-dacs-ctrl";
214     -             clock-names = "pix",
215     -                           "hddac",
216     -                           "main_parent",
217     -                           "aux_parent";
218     -             clocks = <&clk_s_d2_flexgen CLK_PIX_HDDAC>,
219     -                      <&clk_s_d2_flexgen CLK_HDDAC>,
220     -                      <&clk_s_d2_quadfs 0>,
221     -                      <&clk_s_d2_quadfs 1>;
222     -         };
    209 +     sti-hda@8d02000 {
    210 +         compatible = "st,stih407-hda";
    211 +         reg = <0x8d02000 0x400>, <0x92b0120 0x4>;
    212 +         reg-names = "hda-reg", "video-dacs-ctrl";
    213 +         clock-names = "pix",
    214 +                       "hddac",
    215 +                       "main_parent",
    216 +                       "aux_parent";
    217 +         clocks = <&clk_s_d2_flexgen CLK_PIX_HDDAC>,
    218 +                  <&clk_s_d2_flexgen CLK_HDDAC>,
    219 +                  <&clk_s_d2_quadfs 0>,
    220 +                  <&clk_s_d2_quadfs 1>;
223 221      };
224 222  };
225 223
+4 -1
arch/arm/configs/omap2plus_defconfig
···
240 240  CONFIG_PINCTRL_SINGLE=y
241 241  CONFIG_DEBUG_GPIO=y
242 242  CONFIG_GPIO_SYSFS=y
243     - CONFIG_GPIO_PCF857X=m
    243 + CONFIG_GPIO_PCA953X=m
    244 + CONFIG_GPIO_PCF857X=y
244 245  CONFIG_GPIO_TWL4030=y
245 246  CONFIG_GPIO_PALMAS=y
246 247  CONFIG_W1=m
···
351 350  CONFIG_USB_MUSB_OMAP2PLUS=m
352 351  CONFIG_USB_MUSB_AM35X=m
353 352  CONFIG_USB_MUSB_DSPS=m
    353 + CONFIG_USB_INVENTRA_DMA=y
    354 + CONFIG_USB_TI_CPPI41_DMA=y
354 355  CONFIG_USB_DWC3=m
355 356  CONFIG_USB_TEST=m
356 357  CONFIG_AM335X_PHY_USB=y
+1 -1
arch/arm/include/asm/unistd.h
···
19 19   * This may need to be greater than __NR_last_syscall+1 in order to
20 20   * account for the padding in the syscall table
21 21   */
22    - #define __NR_syscalls  (388)
   22 + #define __NR_syscalls  (392)
23 23
24 24  /*
25 25   * *NOTE*: This is a ghost syscall private to the kernel.  Only the
+2
arch/arm/include/uapi/asm/unistd.h
···
414 414  #define __NR_memfd_create (__NR_SYSCALL_BASE+385)
415 415  #define __NR_bpf (__NR_SYSCALL_BASE+386)
416 416  #define __NR_execveat (__NR_SYSCALL_BASE+387)
    417 + #define __NR_userfaultfd (__NR_SYSCALL_BASE+388)
    418 + #define __NR_membarrier (__NR_SYSCALL_BASE+389)
417 419
418 420  /*
419 421   * The following SWIs are ARM private.
+2
arch/arm/kernel/calls.S
···
397 397  /* 385 */ CALL(sys_memfd_create)
398 398            CALL(sys_bpf)
399 399            CALL(sys_execveat)
    400 +           CALL(sys_userfaultfd)
    401 +           CALL(sys_membarrier)
400 402  #ifndef syscalls_counted
401 403  .equ syscalls_padding, ((NR_syscalls + 3) & ~3) - NR_syscalls
402 404  #define syscalls_counted
+5 -1
arch/arm/mach-omap2/Kconfig
···
44 44      select ARM_CPU_SUSPEND if PM
45 45      select ARM_GIC
46 46      select HAVE_ARM_SCU if SMP
47    -     select HAVE_ARM_TWD if SMP
48 47      select HAVE_ARM_ARCH_TIMER
49 48      select ARM_ERRATA_798181 if SMP
   49 +     select OMAP_INTERCONNECT
50 50      select OMAP_INTERCONNECT_BARRIER
   51 +     select PM_OPP if PM
51 52
52 53  config SOC_AM33XX
53 54      bool "TI AM33XX"
···
71 70      select ARCH_OMAP2PLUS
72 71      select ARM_CPU_SUSPEND if PM
73 72      select ARM_GIC
   73 +     select HAVE_ARM_SCU if SMP
74 74      select HAVE_ARM_ARCH_TIMER
75 75      select IRQ_CROSSBAR
76 76      select ARM_ERRATA_798181 if SMP
   77 +     select OMAP_INTERCONNECT
77 78      select OMAP_INTERCONNECT_BARRIER
   79 +     select PM_OPP if PM
78 80
79 81  config ARCH_OMAP2PLUS
80 82      bool
-7
arch/arm/mach-omap2/board-generic.c
···
20 20
21 21  #include "common.h"
22 22
23    - #if !(defined(CONFIG_ARCH_OMAP2) || defined(CONFIG_ARCH_OMAP3))
24    - #define intc_of_init NULL
25    - #endif
26    - #ifndef CONFIG_ARCH_OMAP4
27    - #define gic_of_init NULL
28    - #endif
29    -
30 23  static const struct of_device_id omap_dt_match_table[] __initconst = {
31 24      { .compatible = "simple-bus", },
32 25      { .compatible = "ti,omap-infra", },
+6 -2
arch/arm/mach-omap2/id.c
···
653 653          omap_revision = DRA752_REV_ES1_0;
654 654          break;
655 655      case 1:
656     -     default:
657 656          omap_revision = DRA752_REV_ES1_1;
    657 +         break;
    658 +     case 2:
    659 +     default:
    660 +         omap_revision = DRA752_REV_ES2_0;
    661 +         break;
658 662      }
659 663      break;
···
678 674      /* Unknown default to latest silicon rev as default*/
679 675      pr_warn("%s: unknown idcode=0x%08x (hawkeye=0x%08x,rev=0x%x)\n",
680 676          __func__, idcode, hawkeye, rev);
681     -     omap_revision = DRA752_REV_ES1_1;
    677 +     omap_revision = DRA752_REV_ES2_0;
682 678  }
683 679
684 680  sprintf(soc_name, "DRA%03x", omap_rev() >> 16);
+1
arch/arm/mach-omap2/io.c
···
676 676  void __init am43xx_init_late(void)
677 677  {
678 678      omap_common_late_init();
    679 +     omap2_clk_enable_autoidle_all();
679 680  }
680 681  #endif
681 682
+2 -1
arch/arm/mach-omap2/omap_device.c
···
901 901      if (od->hwmods[i]->flags & HWMOD_INIT_NO_IDLE)
902 902          return 0;
903 903
904     -     if (od->_driver_status != BUS_NOTIFY_BOUND_DRIVER) {
    904 +     if (od->_driver_status != BUS_NOTIFY_BOUND_DRIVER &&
    905 +         od->_driver_status != BUS_NOTIFY_BIND_DRIVER) {
905 906          if (od->_state == OMAP_DEVICE_STATE_ENABLED) {
906 907              dev_warn(dev, "%s: enabled but no driver. Idling\n",
907 908                  __func__);
+2 -1
arch/arm/mach-omap2/pm.h
···
103 103  #define PM_OMAP4_ROM_SMP_BOOT_ERRATUM_GICD (1 << 0)
104 104  #define PM_OMAP4_CPU_OSWR_DISABLE (1 << 1)
105 105
106     - #if defined(CONFIG_PM) && defined(CONFIG_ARCH_OMAP4)
    106 + #if defined(CONFIG_PM) && (defined(CONFIG_ARCH_OMAP4) ||\
    107 +     defined(CONFIG_SOC_OMAP5) || defined(CONFIG_SOC_DRA7XX))
107 108  extern u16 pm44xx_errata;
108 109  #define IS_PM44XX_ERRATUM(id) (pm44xx_errata & (id))
109 110  #else
+2
arch/arm/mach-omap2/soc.h
···
469 469  #define DRA7XX_CLASS 0x07000000
470 470  #define DRA752_REV_ES1_0 (DRA7XX_CLASS | (0x52 << 16) | (0x10 << 8))
471 471  #define DRA752_REV_ES1_1 (DRA7XX_CLASS | (0x52 << 16) | (0x11 << 8))
    472 + #define DRA752_REV_ES2_0 (DRA7XX_CLASS | (0x52 << 16) | (0x20 << 8))
    473 + #define DRA722_REV_ES1_0 (DRA7XX_CLASS | (0x22 << 16) | (0x10 << 8))
472 474  #define DRA722_REV_ES1_0 (DRA7XX_CLASS | (0x22 << 16) | (0x10 << 8))
473 475
474 476  void omap2xxx_check_revision(void);
+2 -6
arch/arm/mach-omap2/timer.c
···
297 297      if (IS_ERR(src))
298 298          return PTR_ERR(src);
299 299
300     -     r = clk_set_parent(timer->fclk, src);
301     -     if (r < 0) {
302     -         pr_warn("%s: %s cannot set source\n", __func__, oh->name);
303     -         clk_put(src);
304     -         return r;
305     -     }
    300 +     WARN(clk_set_parent(timer->fclk, src) < 0,
    301 +          "Cannot set timer parent clock, no PLL clock driver?");
306 302
307 303      clk_put(src);
308 304
+1 -1
arch/arm/mach-omap2/vc.c
···
300 300
301 301      val = voltdm->read(OMAP3_PRM_POLCTRL_OFFSET);
302 302      if (!(val & OMAP3430_PRM_POLCTRL_CLKREQ_POL) ||
303     -         (val & OMAP3430_PRM_POLCTRL_CLKREQ_POL)) {
    303 +         (val & OMAP3430_PRM_POLCTRL_OFFMODE_POL)) {
304 304          val |= OMAP3430_PRM_POLCTRL_CLKREQ_POL;
305 305          val &= ~OMAP3430_PRM_POLCTRL_OFFMODE_POL;
306 306          pr_debug("PM: fixing sys_clkreq and sys_off_mode polarity to 0x%x\n",
+1 -1
arch/arm/mach-pxa/balloon3.c
···
502 502          balloon3_irq_enabled;
503 503      do {
504 504          struct irq_data *d = irq_desc_get_irq_data(desc);
505     -         struct irq_chip *chip = irq_data_get_chip(d);
    505 +         struct irq_chip *chip = irq_desc_get_chip(desc);
506 506          unsigned int irq;
507 507
508 508          /* clear useless edge notification */
+7
arch/arm/mach-pxa/include/mach/addr-map.h
···
44 44   */
45 45
46 46  /*
   47 +  * DFI Bus for NAND, PXA3xx only
   48 +  */
   49 + #define NAND_PHYS  0x43100000
   50 + #define NAND_VIRT  IOMEM(0xf6300000)
   51 + #define NAND_SIZE  0x00100000
   52 +
   53 + /*
47 54   * Internal Memory Controller (PXA27x and later)
48 55   */
49 56  #define IMEMC_PHYS 0x58000000
+20 -1
arch/arm/mach-pxa/pxa3xx.c
···
 47  47  #define ISRAM_START 0x5c000000
 48  48  #define ISRAM_SIZE  SZ_256K
 49  49
     50 + /*
     51 +  * NAND NFC: DFI bus arbitration subset
     52 +  */
     53 + #define NDCR              (*(volatile u32 __iomem*)(NAND_VIRT + 0))
     54 + #define NDCR_ND_ARB_EN    (1 << 12)
     55 + #define NDCR_ND_ARB_CNTL  (1 << 19)
     56 +
 50  57  static void __iomem *sram;
 51  58  static unsigned long wakeup_src;
···
369 362      .pfn = __phys_to_pfn(PXA3XX_SMEMC_BASE),
370 363      .length = SMEMC_SIZE,
371 364      .type = MT_DEVICE
372     - }
    365 + }, {
    366 +     .virtual = (unsigned long)NAND_VIRT,
    367 +     .pfn = __phys_to_pfn(NAND_PHYS),
    368 +     .length = NAND_SIZE,
    369 +     .type = MT_DEVICE
    370 + },
373 371  };
374 372
375 373  void __init pxa3xx_map_io(void)
···
430 418   * preserve them here in case they will be referenced later
431 419   */
432 420  ASCR &= ~(ASCR_RDH | ASCR_D1S | ASCR_D2S | ASCR_D3S);
    421 +
    422 + /*
    423 +  * Disable DFI bus arbitration, to prevent a system bus lock if
    424 +  * somebody disables the NAND clock (unused clock) while this
    425 +  * bit remains set.
    426 +  */
    427 + NDCR = (NDCR & ~NDCR_ND_ARB_EN) | NDCR_ND_ARB_CNTL;
433 428
434 429  if ((ret = pxa_init_dma(IRQ_DMA, 32)))
435 430      return ret;
+25 -5
arch/arm/mm/alignment.c
···
365 365  user:
366 366      if (LDST_L_BIT(instr)) {
367 367          unsigned long val;
    368 +         unsigned int __ua_flags = uaccess_save_and_enable();
    369 +
368 370          get16t_unaligned_check(val, addr);
    371 +         uaccess_restore(__ua_flags);
369 372
370 373          /* signed half-word? */
371 374          if (instr & 0x40)
372 375              val = (signed long)((signed short) val);
373 376
374 377          regs->uregs[rd] = val;
375     -     } else
    378 +     } else {
    379 +         unsigned int __ua_flags = uaccess_save_and_enable();
376 380          put16t_unaligned_check(regs->uregs[rd], addr);
    381 +         uaccess_restore(__ua_flags);
    382 +     }
377 383
378 384      return TYPE_LDST;
···
426 420
427 421  user:
428 422      if (load) {
429     -         unsigned long val;
    423 +         unsigned long val, val2;
    424 +         unsigned int __ua_flags = uaccess_save_and_enable();
    425 +
430 426          get32t_unaligned_check(val, addr);
    427 +         get32t_unaligned_check(val2, addr + 4);
    428 +
    429 +         uaccess_restore(__ua_flags);
    430 +
431 431          regs->uregs[rd] = val;
432     -         get32t_unaligned_check(val, addr + 4);
433     -         regs->uregs[rd2] = val;
    432 +         regs->uregs[rd2] = val2;
434 433      } else {
    434 +         unsigned int __ua_flags = uaccess_save_and_enable();
435 435          put32t_unaligned_check(regs->uregs[rd], addr);
436 436          put32t_unaligned_check(regs->uregs[rd2], addr + 4);
    437 +         uaccess_restore(__ua_flags);
437 438      }
438 439
439 440      return TYPE_LDST;
···
471 458  trans:
472 459      if (LDST_L_BIT(instr)) {
473 460          unsigned int val;
    461 +         unsigned int __ua_flags = uaccess_save_and_enable();
474 462          get32t_unaligned_check(val, addr);
    463 +         uaccess_restore(__ua_flags);
475 464          regs->uregs[rd] = val;
476     -     } else
    465 +     } else {
    466 +         unsigned int __ua_flags = uaccess_save_and_enable();
477 467          put32t_unaligned_check(regs->uregs[rd], addr);
    468 +         uaccess_restore(__ua_flags);
    469 +     }
478 470      return TYPE_LDST;
479 471
480 472  fault:
···
549 531  #endif
550 532
551 533      if (user_mode(regs)) {
    534 +         unsigned int __ua_flags = uaccess_save_and_enable();
552 535          for (regbits = REGMASK_BITS(instr), rd = 0; regbits;
553 536              regbits >>= 1, rd += 1)
554 537              if (regbits & 1) {
···
561 542              put32t_unaligned_check(regs->uregs[rd], eaddr);
562 543              eaddr += 4;
563 544          }
    545 +         uaccess_restore(__ua_flags);
564 546      } else {
565 547          for (regbits = REGMASK_BITS(instr), rd = 0; regbits;
566 548              regbits >>= 1, rd += 1)
-1
arch/arm/plat-pxa/ssp.c
··· 107 107 { .compatible = "mvrl,pxa168-ssp", .data = (void *) PXA168_SSP }, 108 108 { .compatible = "mrvl,pxa910-ssp", .data = (void *) PXA910_SSP }, 109 109 { .compatible = "mrvl,ce4100-ssp", .data = (void *) CE4100_SSP }, 110 - { .compatible = "mrvl,lpss-ssp", .data = (void *) LPSS_SSP }, 111 110 { }, 112 111 }; 113 112 MODULE_DEVICE_TABLE(of, pxa_ssp_of_ids);
+20 -2
arch/mips/ath79/irq.c
··· 293 293 294 294 return 0; 295 295 } 296 - IRQCHIP_DECLARE(ath79_misc_intc, "qca,ar7100-misc-intc", 297 - ath79_misc_intc_of_init); 296 + 297 + static int __init ar7100_misc_intc_of_init( 298 + struct device_node *node, struct device_node *parent) 299 + { 300 + ath79_misc_irq_chip.irq_mask_ack = ar71xx_misc_irq_mask; 301 + return ath79_misc_intc_of_init(node, parent); 302 + } 303 + 304 + IRQCHIP_DECLARE(ar7100_misc_intc, "qca,ar7100-misc-intc", 305 + ar7100_misc_intc_of_init); 306 + 307 + static int __init ar7240_misc_intc_of_init( 308 + struct device_node *node, struct device_node *parent) 309 + { 310 + ath79_misc_irq_chip.irq_ack = ar724x_misc_irq_ack; 311 + return ath79_misc_intc_of_init(node, parent); 312 + } 313 + 314 + IRQCHIP_DECLARE(ar7240_misc_intc, "qca,ar7240-misc-intc", 315 + ar7240_misc_intc_of_init); 298 316 299 317 static int __init ar79_cpu_intc_of_init( 300 318 struct device_node *node, struct device_node *parent)
+3
arch/mips/include/asm/cpu-features.h
··· 20 20 #ifndef cpu_has_tlb 21 21 #define cpu_has_tlb (cpu_data[0].options & MIPS_CPU_TLB) 22 22 #endif 23 + #ifndef cpu_has_ftlb 24 + #define cpu_has_ftlb (cpu_data[0].options & MIPS_CPU_FTLB) 25 + #endif 23 26 #ifndef cpu_has_tlbinv 24 27 #define cpu_has_tlbinv (cpu_data[0].options & MIPS_CPU_TLBINV) 25 28 #endif
+1
arch/mips/include/asm/cpu.h
··· 385 385 #define MIPS_CPU_CDMM 0x4000000000ull /* CPU has Common Device Memory Map */ 386 386 #define MIPS_CPU_BP_GHIST 0x8000000000ull /* R12K+ Branch Prediction Global History */ 387 387 #define MIPS_CPU_SP 0x10000000000ull /* Small (1KB) page support */ 388 + #define MIPS_CPU_FTLB 0x20000000000ull /* CPU has Fixed-page-size TLB */ 388 389 389 390 /* 390 391 * CPU ASE encodings
+9
arch/mips/include/asm/maar.h
··· 66 66 } 67 67 68 68 /** 69 + * maar_init() - initialise MAARs 70 + * 71 + * Performs initialisation of MAARs for the current CPU, making use of the 72 + * platforms implementation of platform_maar_init where necessary and 73 + * duplicating the setup it provides on secondary CPUs. 74 + */ 75 + extern void maar_init(void); 76 + 77 + /** 69 78 * struct maar_config - MAAR configuration data 70 79 * @lower: The lowest address that the MAAR pair will affect. Must be 71 80 * aligned to a 2^16 byte boundary.
+39
arch/mips/include/asm/mips-cm.h
··· 194 194 BUILD_CM_R_(gic_status, MIPS_CM_GCB_OFS + 0xd0) 195 195 BUILD_CM_R_(cpc_status, MIPS_CM_GCB_OFS + 0xf0) 196 196 BUILD_CM_RW(l2_config, MIPS_CM_GCB_OFS + 0x130) 197 + BUILD_CM_RW(sys_config2, MIPS_CM_GCB_OFS + 0x150) 197 198 198 199 /* Core Local & Core Other register accessor functions */ 199 200 BUILD_CM_Cx_RW(reset_release, 0x00) ··· 317 316 #define CM_GCR_L2_CONFIG_ASSOC_SHF 0 318 317 #define CM_GCR_L2_CONFIG_ASSOC_MSK (_ULCAST_(0xff) << 0) 319 318 319 + /* GCR_SYS_CONFIG2 register fields */ 320 + #define CM_GCR_SYS_CONFIG2_MAXVPW_SHF 0 321 + #define CM_GCR_SYS_CONFIG2_MAXVPW_MSK (_ULCAST_(0xf) << 0) 322 + 320 323 /* GCR_Cx_COHERENCE register fields */ 321 324 #define CM_GCR_Cx_COHERENCE_COHDOMAINEN_SHF 0 322 325 #define CM_GCR_Cx_COHERENCE_COHDOMAINEN_MSK (_ULCAST_(0xff) << 0) ··· 408 403 return 0; 409 404 410 405 return read_gcr_rev(); 406 + } 407 + 408 + /** 409 + * mips_cm_max_vp_width() - return the width in bits of VP indices 410 + * 411 + * Return: the width, in bits, of VP indices in fields that combine core & VP 412 + * indices. 413 + */ 414 + static inline unsigned int mips_cm_max_vp_width(void) 415 + { 416 + extern int smp_num_siblings; 417 + 418 + if (mips_cm_revision() >= CM_REV_CM3) 419 + return read_gcr_sys_config2() & CM_GCR_SYS_CONFIG2_MAXVPW_MSK; 420 + 421 + return smp_num_siblings; 422 + } 423 + 424 + /** 425 + * mips_cm_vp_id() - calculate the hardware VP ID for a CPU 426 + * @cpu: the CPU whose VP ID to calculate 427 + * 428 + * Hardware such as the GIC uses identifiers for VPs which may not match the 429 + * CPU numbers used by Linux. This function calculates the hardware VP 430 + * identifier corresponding to a given CPU. 431 + * 432 + * Return: the VP ID for the CPU. 
433 + */ 434 + static inline unsigned int mips_cm_vp_id(unsigned int cpu) 435 + { 436 + unsigned int core = cpu_data[cpu].core; 437 + unsigned int vp = cpu_vpe_id(&cpu_data[cpu]); 438 + 439 + return (core * mips_cm_max_vp_width()) + vp; 411 440 } 412 441 413 442 #endif /* __MIPS_ASM_MIPS_CM_H__ */
+2
arch/mips/include/asm/mipsregs.h
··· 487 487 488 488 /* Bits specific to the MIPS32/64 PRA. */ 489 489 #define MIPS_CONF_MT (_ULCAST_(7) << 7) 490 + #define MIPS_CONF_MT_TLB (_ULCAST_(1) << 7) 491 + #define MIPS_CONF_MT_FTLB (_ULCAST_(4) << 7) 490 492 #define MIPS_CONF_AR (_ULCAST_(7) << 10) 491 493 #define MIPS_CONF_AT (_ULCAST_(3) << 13) 492 494 #define MIPS_CONF_M (_ULCAST_(1) << 31)
+13 -8
arch/mips/kernel/cpu-probe.c
··· 410 410 static inline unsigned int decode_config0(struct cpuinfo_mips *c) 411 411 { 412 412 unsigned int config0; 413 - int isa; 413 + int isa, mt; 414 414 415 415 config0 = read_c0_config(); 416 416 417 417 /* 418 418 * Look for Standard TLB or Dual VTLB and FTLB 419 419 */ 420 - if ((((config0 & MIPS_CONF_MT) >> 7) == 1) || 421 - (((config0 & MIPS_CONF_MT) >> 7) == 4)) 420 + mt = config0 & MIPS_CONF_MT; 421 + if (mt == MIPS_CONF_MT_TLB) 422 422 c->options |= MIPS_CPU_TLB; 423 + else if (mt == MIPS_CONF_MT_FTLB) 424 + c->options |= MIPS_CPU_TLB | MIPS_CPU_FTLB; 423 425 424 426 isa = (config0 & MIPS_CONF_AT) >> 13; 425 427 switch (isa) { ··· 561 559 if (cpu_has_tlb) { 562 560 if (((config4 & MIPS_CONF4_IE) >> 29) == 2) 563 561 c->options |= MIPS_CPU_TLBINV; 562 + 564 563 /* 565 - * This is a bit ugly. R6 has dropped that field from 566 - * config4 and the only valid configuration is VTLB+FTLB so 567 - * set a good value for mmuextdef for that case. 564 + * R6 has dropped the MMUExtDef field from config4. 565 + * On R6 the fields always describe the FTLB, and only if it is 566 + * present according to Config.MT. 568 567 */ 569 - if (cpu_has_mips_r6) 568 + if (!cpu_has_mips_r6) 569 + mmuextdef = config4 & MIPS_CONF4_MMUEXTDEF; 570 + else if (cpu_has_ftlb) 570 571 mmuextdef = MIPS_CONF4_MMUEXTDEF_VTLBSIZEEXT; 571 572 else 572 - mmuextdef = config4 & MIPS_CONF4_MMUEXTDEF; 573 + mmuextdef = 0; 573 574 574 575 switch (mmuextdef) { 575 576 case MIPS_CONF4_MMUEXTDEF_MMUSIZEEXT:
+9 -1
arch/mips/kernel/setup.c
··· 338 338 if (end <= reserved_end) 339 339 continue; 340 340 #ifdef CONFIG_BLK_DEV_INITRD 341 - /* mapstart should be after initrd_end */ 341 + /* Skip zones before initrd and initrd itself */ 342 342 if (initrd_end && end <= (unsigned long)PFN_UP(__pa(initrd_end))) 343 343 continue; 344 344 #endif ··· 370 370 #endif 371 371 max_low_pfn = PFN_DOWN(HIGHMEM_START); 372 372 } 373 + 374 + #ifdef CONFIG_BLK_DEV_INITRD 375 + /* 376 + * mapstart should be after initrd_end 377 + */ 378 + if (initrd_end) 379 + mapstart = max(mapstart, (unsigned long)PFN_UP(__pa(initrd_end))); 380 + #endif 373 381 374 382 /* 375 383 * Initialize the boot-time allocator with low memory only.
+2
arch/mips/kernel/smp.c
··· 42 42 #include <asm/mmu_context.h> 43 43 #include <asm/time.h> 44 44 #include <asm/setup.h> 45 + #include <asm/maar.h> 45 46 46 47 cpumask_t cpu_callin_map; /* Bitmask of started secondaries */ 47 48 ··· 158 157 mips_clockevent_init(); 159 158 mp_ops->init_secondary(); 160 159 cpu_report(); 160 + maar_init(); 161 161 162 162 /* 163 163 * XXX parity protection should be folded in here when it's converted
+3
arch/mips/loongson64/common/env.c
··· 64 64 } 65 65 if (memsize == 0) 66 66 memsize = 256; 67 + 68 + loongson_sysconf.nr_uarts = 1; 69 + 67 70 pr_info("memsize=%u, highmemsize=%u\n", memsize, highmemsize); 68 71 #else 69 72 struct boot_params *boot_p;
+114 -63
arch/mips/mm/init.c
··· 44 44 #include <asm/pgalloc.h> 45 45 #include <asm/tlb.h> 46 46 #include <asm/fixmap.h> 47 + #include <asm/maar.h> 47 48 48 49 /* 49 50 * We have up to 8 empty zeroed pages so we can map one of the right colour ··· 253 252 #endif 254 253 } 255 254 255 + unsigned __weak platform_maar_init(unsigned num_pairs) 256 + { 257 + struct maar_config cfg[BOOT_MEM_MAP_MAX]; 258 + unsigned i, num_configured, num_cfg = 0; 259 + phys_addr_t skip; 260 + 261 + for (i = 0; i < boot_mem_map.nr_map; i++) { 262 + switch (boot_mem_map.map[i].type) { 263 + case BOOT_MEM_RAM: 264 + case BOOT_MEM_INIT_RAM: 265 + break; 266 + default: 267 + continue; 268 + } 269 + 270 + skip = 0x10000 - (boot_mem_map.map[i].addr & 0xffff); 271 + 272 + cfg[num_cfg].lower = boot_mem_map.map[i].addr; 273 + cfg[num_cfg].lower += skip; 274 + 275 + cfg[num_cfg].upper = cfg[num_cfg].lower; 276 + cfg[num_cfg].upper += boot_mem_map.map[i].size - 1; 277 + cfg[num_cfg].upper -= skip; 278 + 279 + cfg[num_cfg].attrs = MIPS_MAAR_S; 280 + num_cfg++; 281 + } 282 + 283 + num_configured = maar_config(cfg, num_cfg, num_pairs); 284 + if (num_configured < num_cfg) 285 + pr_warn("Not enough MAAR pairs (%u) for all bootmem regions (%u)\n", 286 + num_pairs, num_cfg); 287 + 288 + return num_configured; 289 + } 290 + 291 + void maar_init(void) 292 + { 293 + unsigned num_maars, used, i; 294 + phys_addr_t lower, upper, attr; 295 + static struct { 296 + struct maar_config cfgs[3]; 297 + unsigned used; 298 + } recorded = { { { 0 } }, 0 }; 299 + 300 + if (!cpu_has_maar) 301 + return; 302 + 303 + /* Detect the number of MAARs */ 304 + write_c0_maari(~0); 305 + back_to_back_c0_hazard(); 306 + num_maars = read_c0_maari() + 1; 307 + 308 + /* MAARs should be in pairs */ 309 + WARN_ON(num_maars % 2); 310 + 311 + /* Set MAARs using values we recorded already */ 312 + if (recorded.used) { 313 + used = maar_config(recorded.cfgs, recorded.used, num_maars / 2); 314 + BUG_ON(used != recorded.used); 315 + } else { 316 + /* Configure the required 
MAARs */ 317 + used = platform_maar_init(num_maars / 2); 318 + } 319 + 320 + /* Disable any further MAARs */ 321 + for (i = (used * 2); i < num_maars; i++) { 322 + write_c0_maari(i); 323 + back_to_back_c0_hazard(); 324 + write_c0_maar(0); 325 + back_to_back_c0_hazard(); 326 + } 327 + 328 + if (recorded.used) 329 + return; 330 + 331 + pr_info("MAAR configuration:\n"); 332 + for (i = 0; i < num_maars; i += 2) { 333 + write_c0_maari(i); 334 + back_to_back_c0_hazard(); 335 + upper = read_c0_maar(); 336 + 337 + write_c0_maari(i + 1); 338 + back_to_back_c0_hazard(); 339 + lower = read_c0_maar(); 340 + 341 + attr = lower & upper; 342 + lower = (lower & MIPS_MAAR_ADDR) << 4; 343 + upper = ((upper & MIPS_MAAR_ADDR) << 4) | 0xffff; 344 + 345 + pr_info(" [%d]: ", i / 2); 346 + if (!(attr & MIPS_MAAR_V)) { 347 + pr_cont("disabled\n"); 348 + continue; 349 + } 350 + 351 + pr_cont("%pa-%pa", &lower, &upper); 352 + 353 + if (attr & MIPS_MAAR_S) 354 + pr_cont(" speculate"); 355 + 356 + pr_cont("\n"); 357 + 358 + /* Record the setup for use on secondary CPUs */ 359 + if (used <= ARRAY_SIZE(recorded.cfgs)) { 360 + recorded.cfgs[recorded.used].lower = lower; 361 + recorded.cfgs[recorded.used].upper = upper; 362 + recorded.cfgs[recorded.used].attrs = attr; 363 + recorded.used++; 364 + } 365 + } 366 + } 367 + 256 368 #ifndef CONFIG_NEED_MULTIPLE_NODES 257 369 int page_is_ram(unsigned long pagenr) 258 370 { ··· 446 332 free_highmem_page(page); 447 333 } 448 334 #endif 449 - } 450 - 451 - unsigned __weak platform_maar_init(unsigned num_pairs) 452 - { 453 - struct maar_config cfg[BOOT_MEM_MAP_MAX]; 454 - unsigned i, num_configured, num_cfg = 0; 455 - phys_addr_t skip; 456 - 457 - for (i = 0; i < boot_mem_map.nr_map; i++) { 458 - switch (boot_mem_map.map[i].type) { 459 - case BOOT_MEM_RAM: 460 - case BOOT_MEM_INIT_RAM: 461 - break; 462 - default: 463 - continue; 464 - } 465 - 466 - skip = 0x10000 - (boot_mem_map.map[i].addr & 0xffff); 467 - 468 - cfg[num_cfg].lower = 
boot_mem_map.map[i].addr; 469 - cfg[num_cfg].lower += skip; 470 - 471 - cfg[num_cfg].upper = cfg[num_cfg].lower; 472 - cfg[num_cfg].upper += boot_mem_map.map[i].size - 1; 473 - cfg[num_cfg].upper -= skip; 474 - 475 - cfg[num_cfg].attrs = MIPS_MAAR_S; 476 - num_cfg++; 477 - } 478 - 479 - num_configured = maar_config(cfg, num_cfg, num_pairs); 480 - if (num_configured < num_cfg) 481 - pr_warn("Not enough MAAR pairs (%u) for all bootmem regions (%u)\n", 482 - num_pairs, num_cfg); 483 - 484 - return num_configured; 485 - } 486 - 487 - static void maar_init(void) 488 - { 489 - unsigned num_maars, used, i; 490 - 491 - if (!cpu_has_maar) 492 - return; 493 - 494 - /* Detect the number of MAARs */ 495 - write_c0_maari(~0); 496 - back_to_back_c0_hazard(); 497 - num_maars = read_c0_maari() + 1; 498 - 499 - /* MAARs should be in pairs */ 500 - WARN_ON(num_maars % 2); 501 - 502 - /* Configure the required MAARs */ 503 - used = platform_maar_init(num_maars / 2); 504 - 505 - /* Disable any further MAARs */ 506 - for (i = (used * 2); i < num_maars; i++) { 507 - write_c0_maari(i); 508 - back_to_back_c0_hazard(); 509 - write_c0_maar(0); 510 - back_to_back_c0_hazard(); 511 - } 512 335 } 513 336 514 337 void __init mem_init(void)
+47 -3
arch/mips/net/bpf_jit_asm.S
··· 64 64 PTR_ADDU t1, $r_skb_data, offset 65 65 lw $r_A, 0(t1) 66 66 #ifdef CONFIG_CPU_LITTLE_ENDIAN 67 + # if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) 67 68 wsbh t0, $r_A 68 69 rotr $r_A, t0, 16 70 + # else 71 + sll t0, $r_A, 24 72 + srl t1, $r_A, 24 73 + srl t2, $r_A, 8 74 + or t0, t0, t1 75 + andi t2, t2, 0xff00 76 + andi t1, $r_A, 0xff00 77 + or t0, t0, t2 78 + sll t1, t1, 8 79 + or $r_A, t0, t1 80 + # endif 69 81 #endif 70 82 jr $r_ra 71 83 move $r_ret, zero ··· 92 80 PTR_ADDU t1, $r_skb_data, offset 93 81 lh $r_A, 0(t1) 94 82 #ifdef CONFIG_CPU_LITTLE_ENDIAN 83 + # if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) 95 84 wsbh t0, $r_A 96 85 seh $r_A, t0 86 + # else 87 + sll t0, $r_A, 24 88 + andi t1, $r_A, 0xff00 89 + sra t0, t0, 16 90 + srl t1, t1, 8 91 + or $r_A, t0, t1 92 + # endif 97 93 #endif 98 94 jr $r_ra 99 95 move $r_ret, zero ··· 168 148 NESTED(bpf_slow_path_word, (6 * SZREG), $r_sp) 169 149 bpf_slow_path_common(4) 170 150 #ifdef CONFIG_CPU_LITTLE_ENDIAN 151 + # if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) 171 152 wsbh t0, $r_s0 172 153 jr $r_ra 173 154 rotr $r_A, t0, 16 174 - #endif 155 + # else 156 + sll t0, $r_s0, 24 157 + srl t1, $r_s0, 24 158 + srl t2, $r_s0, 8 159 + or t0, t0, t1 160 + andi t2, t2, 0xff00 161 + andi t1, $r_s0, 0xff00 162 + or t0, t0, t2 163 + sll t1, t1, 8 175 164 jr $r_ra 176 - move $r_A, $r_s0 165 + or $r_A, t0, t1 166 + # endif 167 + #else 168 + jr $r_ra 169 + move $r_A, $r_s0 170 + #endif 177 171 178 172 END(bpf_slow_path_word) 179 173 180 174 NESTED(bpf_slow_path_half, (6 * SZREG), $r_sp) 181 175 bpf_slow_path_common(2) 182 176 #ifdef CONFIG_CPU_LITTLE_ENDIAN 177 + # if defined(__mips_isa_rev) && (__mips_isa_rev >= 2) 183 178 jr $r_ra 184 179 wsbh $r_A, $r_s0 185 - #endif 180 + # else 181 + sll t0, $r_s0, 8 182 + andi t1, $r_s0, 0xff00 183 + andi t0, t0, 0xff00 184 + srl t1, t1, 8 185 + jr $r_ra 186 + or $r_A, t0, t1 187 + # endif 188 + #else 186 189 jr $r_ra 187 190 move $r_A, $r_s0 191 + #endif 188 192 
189 193 END(bpf_slow_path_half) 190 194
+1
arch/tile/kernel/usb.c
··· 22 22 #include <linux/platform_device.h> 23 23 #include <linux/usb/tilegx.h> 24 24 #include <linux/init.h> 25 + #include <linux/module.h> 25 26 #include <linux/types.h> 26 27 27 28 static u64 ehci_dmamask = DMA_BIT_MASK(32);
+15 -1
arch/x86/entry/entry_64.S
··· 1128 1128 1129 1129 /* Runs on exception stack */ 1130 1130 ENTRY(nmi) 1131 + /* 1132 + * Fix up the exception frame if we're on Xen. 1133 + * PARAVIRT_ADJUST_EXCEPTION_FRAME is guaranteed to push at most 1134 + * one value to the stack on native, so it may clobber the rdx 1135 + * scratch slot, but it won't clobber any of the important 1136 + * slots past it. 1137 + * 1138 + * Xen is a different story, because the Xen frame itself overlaps 1139 + * the "NMI executing" variable. 1140 + */ 1131 1141 PARAVIRT_ADJUST_EXCEPTION_FRAME 1142 + 1132 1143 /* 1133 1144 * We allow breakpoints in NMIs. If a breakpoint occurs, then 1134 1145 * the iretq it performs will take us out of NMI context. ··· 1190 1179 * we don't want to enable interrupts, because then we'll end 1191 1180 * up in an awkward situation in which IRQs are on but NMIs 1192 1181 * are off. 1182 + * 1183 + * We also must not push anything to the stack before switching 1184 + * stacks lest we corrupt the "NMI executing" variable. 1193 1185 */ 1194 1186 1195 - SWAPGS 1187 + SWAPGS_UNSAFE_STACK 1196 1188 cld 1197 1189 movq %rsp, %rdx 1198 1190 movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp
+2
arch/x86/include/asm/efi.h
··· 86 86 extern void __iomem *__init efi_ioremap(unsigned long addr, unsigned long size, 87 87 u32 type, u64 attribute); 88 88 89 + #ifdef CONFIG_KASAN 89 90 /* 90 91 * CONFIG_KASAN may redefine memset to __memset. __memset function is present 91 92 * only in kernel binary. Since the EFI stub linked into a separate binary it ··· 96 95 #undef memcpy 97 96 #undef memset 98 97 #undef memmove 98 + #endif 99 99 100 100 #endif /* CONFIG_X86_32 */ 101 101
+2
arch/x86/include/asm/msr-index.h
··· 141 141 #define DEBUGCTLMSR_BTS_OFF_USR (1UL << 10) 142 142 #define DEBUGCTLMSR_FREEZE_LBRS_ON_PMI (1UL << 11) 143 143 144 + #define MSR_PEBS_FRONTEND 0x000003f7 145 + 144 146 #define MSR_IA32_POWER_CTL 0x000001fc 145 147 146 148 #define MSR_IA32_MC0_CTL 0x00000400
+1
arch/x86/include/asm/pvclock-abi.h
··· 41 41 42 42 #define PVCLOCK_TSC_STABLE_BIT (1 << 0) 43 43 #define PVCLOCK_GUEST_STOPPED (1 << 1) 44 + /* PVCLOCK_COUNTS_FROM_ZERO broke ABI and can't be used anymore. */ 44 45 #define PVCLOCK_COUNTS_FROM_ZERO (1 << 2) 45 46 #endif /* __ASSEMBLY__ */ 46 47 #endif /* _ASM_X86_PVCLOCK_ABI_H */
+1
arch/x86/kernel/cpu/perf_event.h
··· 47 47 EXTRA_REG_RSP_1 = 1, /* offcore_response_1 */ 48 48 EXTRA_REG_LBR = 2, /* lbr_select */ 49 49 EXTRA_REG_LDLAT = 3, /* ld_lat_threshold */ 50 + EXTRA_REG_FE = 4, /* fe_* */ 50 51 51 52 EXTRA_REG_MAX /* number of entries needed */ 52 53 };
+15 -2
arch/x86/kernel/cpu/perf_event_intel.c
··· 205 205 INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x3fffff8fffull, RSP_0), 206 206 INTEL_UEVENT_EXTRA_REG(0x01bb, MSR_OFFCORE_RSP_1, 0x3fffff8fffull, RSP_1), 207 207 INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd), 208 + /* 209 + * Note the low 8 bits eventsel code is not a continuous field, containing 210 + * some #GPing bits. These are masked out. 211 + */ 212 + INTEL_UEVENT_EXTRA_REG(0x01c6, MSR_PEBS_FRONTEND, 0x7fff17, FE), 208 213 EVENT_EXTRA_END 209 214 }; 210 215 ··· 255 250 FIXED_EVENT_CONSTRAINT(0x003c, 1), /* CPU_CLK_UNHALTED.CORE */ 256 251 FIXED_EVENT_CONSTRAINT(0x0300, 2), /* CPU_CLK_UNHALTED.REF */ 257 252 INTEL_UEVENT_CONSTRAINT(0x148, 0x4), /* L1D_PEND_MISS.PENDING */ 258 - INTEL_EVENT_CONSTRAINT(0xa3, 0x4), /* CYCLE_ACTIVITY.* */ 253 + INTEL_UEVENT_CONSTRAINT(0x8a3, 0x4), /* CYCLE_ACTIVITY.CYCLES_L1D_MISS */ 259 254 EVENT_CONSTRAINT_END 260 255 }; 261 256 ··· 2896 2891 2897 2892 PMU_FORMAT_ATTR(ldlat, "config1:0-15"); 2898 2893 2894 + PMU_FORMAT_ATTR(frontend, "config1:0-23"); 2895 + 2899 2896 static struct attribute *intel_arch3_formats_attr[] = { 2900 2897 &format_attr_event.attr, 2901 2898 &format_attr_umask.attr, ··· 2911 2904 2912 2905 &format_attr_offcore_rsp.attr, /* XXX do NHM/WSM + SNB breakout */ 2913 2906 &format_attr_ldlat.attr, /* PEBS load latency */ 2907 + NULL, 2908 + }; 2909 + 2910 + static struct attribute *skl_format_attr[] = { 2911 + &format_attr_frontend.attr, 2914 2912 NULL, 2915 2913 }; 2916 2914 ··· 3528 3516 3529 3517 x86_pmu.hw_config = hsw_hw_config; 3530 3518 x86_pmu.get_event_constraints = hsw_get_event_constraints; 3531 - x86_pmu.cpu_events = hsw_events_attrs; 3519 + x86_pmu.format_attrs = merge_attr(intel_arch3_formats_attr, 3520 + skl_format_attr); 3532 3521 WARN_ON(!x86_pmu.format_attrs); 3533 3522 x86_pmu.cpu_events = hsw_events_attrs; 3534 3523 pr_cont("Skylake events, ");
+2 -2
arch/x86/kernel/cpu/perf_event_msr.c
··· 10 10 PERF_MSR_EVENT_MAX, 11 11 }; 12 12 13 - bool test_aperfmperf(int idx) 13 + static bool test_aperfmperf(int idx) 14 14 { 15 15 return boot_cpu_has(X86_FEATURE_APERFMPERF); 16 16 } 17 17 18 - bool test_intel(int idx) 18 + static bool test_intel(int idx) 19 19 { 20 20 if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL || 21 21 boot_cpu_data.x86 != 6)
+12 -4
arch/x86/kernel/paravirt.c
··· 41 41 #include <asm/timer.h> 42 42 #include <asm/special_insns.h> 43 43 44 - /* nop stub */ 45 - void _paravirt_nop(void) 46 - { 47 - } 44 + /* 45 + * nop stub, which must not clobber anything *including the stack* to 46 + * avoid confusing the entry prologues. 47 + */ 48 + extern void _paravirt_nop(void); 49 + asm (".pushsection .entry.text, \"ax\"\n" 50 + ".global _paravirt_nop\n" 51 + "_paravirt_nop:\n\t" 52 + "ret\n\t" 53 + ".size _paravirt_nop, . - _paravirt_nop\n\t" 54 + ".type _paravirt_nop, @function\n\t" 55 + ".popsection"); 48 56 49 57 /* identity function, which can be inlined */ 50 58 u32 _paravirt_ident_32(u32 x)
+13 -112
arch/x86/kvm/svm.c
··· 514 514 struct vcpu_svm *svm = to_svm(vcpu); 515 515 516 516 if (svm->vmcb->control.next_rip != 0) { 517 - WARN_ON(!static_cpu_has(X86_FEATURE_NRIPS)); 517 + WARN_ON_ONCE(!static_cpu_has(X86_FEATURE_NRIPS)); 518 518 svm->next_rip = svm->vmcb->control.next_rip; 519 519 } 520 520 ··· 866 866 set_msr_interception(msrpm, MSR_IA32_LASTINTTOIP, 0, 0); 867 867 } 868 868 869 - #define MTRR_TYPE_UC_MINUS 7 870 - #define MTRR2PROTVAL_INVALID 0xff 871 - 872 - static u8 mtrr2protval[8]; 873 - 874 - static u8 fallback_mtrr_type(int mtrr) 875 - { 876 - /* 877 - * WT and WP aren't always available in the host PAT. Treat 878 - * them as UC and UC- respectively. Everything else should be 879 - * there. 880 - */ 881 - switch (mtrr) 882 - { 883 - case MTRR_TYPE_WRTHROUGH: 884 - return MTRR_TYPE_UNCACHABLE; 885 - case MTRR_TYPE_WRPROT: 886 - return MTRR_TYPE_UC_MINUS; 887 - default: 888 - BUG(); 889 - } 890 - } 891 - 892 - static void build_mtrr2protval(void) 893 - { 894 - int i; 895 - u64 pat; 896 - 897 - for (i = 0; i < 8; i++) 898 - mtrr2protval[i] = MTRR2PROTVAL_INVALID; 899 - 900 - /* Ignore the invalid MTRR types. */ 901 - mtrr2protval[2] = 0; 902 - mtrr2protval[3] = 0; 903 - 904 - /* 905 - * Use host PAT value to figure out the mapping from guest MTRR 906 - * values to nested page table PAT/PCD/PWT values. We do not 907 - * want to change the host PAT value every time we enter the 908 - * guest. 
909 - */ 910 - rdmsrl(MSR_IA32_CR_PAT, pat); 911 - for (i = 0; i < 8; i++) { 912 - u8 mtrr = pat >> (8 * i); 913 - 914 - if (mtrr2protval[mtrr] == MTRR2PROTVAL_INVALID) 915 - mtrr2protval[mtrr] = __cm_idx2pte(i); 916 - } 917 - 918 - for (i = 0; i < 8; i++) { 919 - if (mtrr2protval[i] == MTRR2PROTVAL_INVALID) { 920 - u8 fallback = fallback_mtrr_type(i); 921 - mtrr2protval[i] = mtrr2protval[fallback]; 922 - BUG_ON(mtrr2protval[i] == MTRR2PROTVAL_INVALID); 923 - } 924 - } 925 - } 926 - 927 869 static __init int svm_hardware_setup(void) 928 870 { 929 871 int cpu; ··· 932 990 } else 933 991 kvm_disable_tdp(); 934 992 935 - build_mtrr2protval(); 936 993 return 0; 937 994 938 995 err: ··· 1086 1145 return target_tsc - tsc; 1087 1146 } 1088 1147 1089 - static void svm_set_guest_pat(struct vcpu_svm *svm, u64 *g_pat) 1090 - { 1091 - struct kvm_vcpu *vcpu = &svm->vcpu; 1092 - 1093 - /* Unlike Intel, AMD takes the guest's CR0.CD into account. 1094 - * 1095 - * AMD does not have IPAT. To emulate it for the case of guests 1096 - * with no assigned devices, just set everything to WB. If guests 1097 - * have assigned devices, however, we cannot force WB for RAM 1098 - * pages only, so use the guest PAT directly. 1099 - */ 1100 - if (!kvm_arch_has_assigned_device(vcpu->kvm)) 1101 - *g_pat = 0x0606060606060606; 1102 - else 1103 - *g_pat = vcpu->arch.pat; 1104 - } 1105 - 1106 - static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) 1107 - { 1108 - u8 mtrr; 1109 - 1110 - /* 1111 - * 1. MMIO: trust guest MTRR, so same as item 3. 1112 - * 2. No passthrough: always map as WB, and force guest PAT to WB as well 1113 - * 3. Passthrough: can't guarantee the result, try to trust guest. 
1114 - */ 1115 - if (!is_mmio && !kvm_arch_has_assigned_device(vcpu->kvm)) 1116 - return 0; 1117 - 1118 - if (!kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED) && 1119 - kvm_read_cr0(vcpu) & X86_CR0_CD) 1120 - return _PAGE_NOCACHE; 1121 - 1122 - mtrr = kvm_mtrr_get_guest_memory_type(vcpu, gfn); 1123 - return mtrr2protval[mtrr]; 1124 - } 1125 - 1126 1148 static void init_vmcb(struct vcpu_svm *svm, bool init_event) 1127 1149 { 1128 1150 struct vmcb_control_area *control = &svm->vmcb->control; ··· 1182 1278 clr_cr_intercept(svm, INTERCEPT_CR3_READ); 1183 1279 clr_cr_intercept(svm, INTERCEPT_CR3_WRITE); 1184 1280 save->g_pat = svm->vcpu.arch.pat; 1185 - svm_set_guest_pat(svm, &save->g_pat); 1186 1281 save->cr3 = 0; 1187 1282 save->cr4 = 0; 1188 1283 } ··· 1576 1673 1577 1674 if (!vcpu->fpu_active) 1578 1675 cr0 |= X86_CR0_TS; 1579 - 1580 - /* These are emulated via page tables. */ 1581 - cr0 &= ~(X86_CR0_CD | X86_CR0_NW); 1582 - 1676 + /* 1677 + * re-enable caching here because the QEMU bios 1678 + * does not do it - this results in some delay at 1679 + * reboot 1680 + */ 1681 + if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_CD_NW_CLEARED)) 1682 + cr0 &= ~(X86_CR0_CD | X86_CR0_NW); 1583 1683 svm->vmcb->save.cr0 = cr0; 1584 1684 mark_dirty(svm->vmcb, VMCB_CR); 1585 1685 update_cr0_intercept(svm); ··· 3257 3351 case MSR_VM_IGNNE: 3258 3352 vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data); 3259 3353 break; 3260 - case MSR_IA32_CR_PAT: 3261 - if (npt_enabled) { 3262 - if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data)) 3263 - return 1; 3264 - vcpu->arch.pat = data; 3265 - svm_set_guest_pat(svm, &svm->vmcb->save.g_pat); 3266 - mark_dirty(svm->vmcb, VMCB_NPT); 3267 - break; 3268 - } 3269 - /* fall through */ 3270 3354 default: 3271 3355 return kvm_set_msr_common(vcpu, msr); 3272 3356 } ··· 4089 4193 static bool svm_has_high_real_mode_segbase(void) 4090 4194 { 4091 4195 return true; 4196 + } 4197 + 4198 + static u64 svm_get_mt_mask(struct 
kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio) 4199 + { 4200 + return 0; 4092 4201 } 4093 4202 4094 4203 static void svm_cpuid_update(struct kvm_vcpu *vcpu)
+8 -3
arch/x86/kvm/vmx.c
··· 8617 8617 u64 ipat = 0; 8618 8618 8619 8619 /* For VT-d and EPT combination 8620 - * 1. MMIO: guest may want to apply WC, trust it. 8620 + * 1. MMIO: always map as UC 8621 8621 * 2. EPT with VT-d: 8622 8622 * a. VT-d without snooping control feature: can't guarantee the 8623 - * result, try to trust guest. So the same as item 1. 8623 + * result, try to trust guest. 8624 8624 * b. VT-d with snooping control feature: snooping control feature of 8625 8625 * VT-d engine can guarantee the cache correctness. Just set it 8626 8626 * to WB to keep consistent with host. So the same as item 3. 8627 8627 * 3. EPT without VT-d: always map as WB and set IPAT=1 to keep 8628 8628 * consistent with host MTRR 8629 8629 */ 8630 - if (!is_mmio && !kvm_arch_has_noncoherent_dma(vcpu->kvm)) { 8630 + if (is_mmio) { 8631 + cache = MTRR_TYPE_UNCACHABLE; 8632 + goto exit; 8633 + } 8634 + 8635 + if (!kvm_arch_has_noncoherent_dma(vcpu->kvm)) { 8631 8636 ipat = VMX_EPT_IPAT_BIT; 8632 8637 cache = MTRR_TYPE_WRBACK; 8633 8638 goto exit;
-4
arch/x86/kvm/x86.c
··· 1708 1708 vcpu->pvclock_set_guest_stopped_request = false; 1709 1709 } 1710 1710 1711 - pvclock_flags |= PVCLOCK_COUNTS_FROM_ZERO; 1712 - 1713 1711 /* If the host uses TSC clocksource, then it is stable */ 1714 1712 if (use_master_clock) 1715 1713 pvclock_flags |= PVCLOCK_TSC_STABLE_BIT; ··· 2005 2007 &vcpu->requests); 2006 2008 2007 2009 ka->boot_vcpu_runs_old_kvmclock = tmp; 2008 - 2009 - ka->kvmclock_offset = -get_kernel_ns(); 2010 2010 } 2011 2011 2012 2012 vcpu->arch.time = data;
-4
crypto/asymmetric_keys/x509_public_key.c
··· 332 332 srlen = cert->raw_serial_size; 333 333 q = cert->raw_serial; 334 334 } 335 - if (srlen > 1 && *q == 0) { 336 - srlen--; 337 - q++; 338 - } 339 335 340 336 ret = -ENOMEM; 341 337 desc = kmalloc(sulen + 2 + srlen * 2 + 1, GFP_KERNEL);
+2
drivers/acpi/ec.c
··· 1044 1044 goto err_exit; 1045 1045 1046 1046 mutex_lock(&ec->mutex); 1047 + result = -ENODATA; 1047 1048 list_for_each_entry(handler, &ec->list, node) { 1048 1049 if (value == handler->query_bit) { 1050 + result = 0; 1049 1051 q->handler = acpi_ec_get_query_handler(handler); 1050 1052 ec_dbg_evt("Query(0x%02x) scheduled", 1051 1053 q->handler->query_bit);
+1
drivers/acpi/pci_irq.c
··· 372 372 373 373 /* Interrupt Line values above 0xF are forbidden */ 374 374 if (dev->irq > 0 && (dev->irq <= 0xF) && 375 + acpi_isa_irq_available(dev->irq) && 375 376 (acpi_isa_irq_to_gsi(dev->irq, &dev_gsi) == 0)) { 376 377 dev_warn(&dev->dev, "PCI INT %c: no GSI - using ISA IRQ %d\n", 377 378 pin_name(dev->pin), dev->irq);
+14 -2
drivers/acpi/pci_link.c
··· 498 498 PIRQ_PENALTY_PCI_POSSIBLE; 499 499 } 500 500 } 501 - /* Add a penalty for the SCI */ 502 - acpi_irq_penalty[acpi_gbl_FADT.sci_interrupt] += PIRQ_PENALTY_PCI_USING; 501 + 503 502 return 0; 504 503 } 505 504 ··· 551 552 acpi_irq_penalty[link->irq.possible[i]]) 552 553 irq = link->irq.possible[i]; 553 554 } 555 + } 556 + if (acpi_irq_penalty[irq] >= PIRQ_PENALTY_ISA_ALWAYS) { 557 + printk(KERN_ERR PREFIX "No IRQ available for %s [%s]. " 558 + "Try pci=noacpi or acpi=off\n", 559 + acpi_device_name(link->device), 560 + acpi_device_bid(link->device)); 561 + return -ENODEV; 554 562 } 555 563 556 564 /* Attempt to enable the link device at this IRQ. */ ··· 825 819 else 826 820 acpi_irq_penalty[irq] += PIRQ_PENALTY_PCI_USING; 827 821 } 822 + } 823 + 824 + bool acpi_isa_irq_available(int irq) 825 + { 826 + return irq >= 0 && (irq >= ARRAY_SIZE(acpi_irq_penalty) || 827 + acpi_irq_penalty[irq] < PIRQ_PENALTY_ISA_ALWAYS); 828 828 } 829 829 830 830 /*
+8 -2
drivers/base/cacheinfo.c
··· 148 148 149 149 if (sibling == cpu) /* skip itself */ 150 150 continue; 151 + 151 152 sib_cpu_ci = get_cpu_cacheinfo(sibling); 153 + if (!sib_cpu_ci->info_list) 154 + continue; 155 + 152 156 sib_leaf = sib_cpu_ci->info_list + index; 153 157 cpumask_clear_cpu(cpu, &sib_leaf->shared_cpu_map); 154 158 cpumask_clear_cpu(sibling, &this_leaf->shared_cpu_map); ··· 163 159 164 160 static void free_cache_attributes(unsigned int cpu) 165 161 { 162 + if (!per_cpu_cacheinfo(cpu)) 163 + return; 164 + 166 165 cache_shared_cpu_map_remove(cpu); 167 166 168 167 kfree(per_cpu_cacheinfo(cpu)); ··· 521 514 break; 522 515 case CPU_DEAD: 523 516 cache_remove_dev(cpu); 524 - if (per_cpu_cacheinfo(cpu)) 525 - free_cache_attributes(cpu); 517 + free_cache_attributes(cpu); 526 518 break; 527 519 } 528 520 return notifier_from_errno(rc);
+12 -5
drivers/base/power/opp.c
··· 892 892 u32 microvolt[3] = {0}; 893 893 int count, ret; 894 894 895 - count = of_property_count_u32_elems(opp->np, "opp-microvolt"); 896 - if (!count) 895 + /* Missing property isn't a problem, but an invalid entry is */ 896 + if (!of_find_property(opp->np, "opp-microvolt", NULL)) 897 897 return 0; 898 + 899 + count = of_property_count_u32_elems(opp->np, "opp-microvolt"); 900 + if (count < 0) { 901 + dev_err(dev, "%s: Invalid opp-microvolt property (%d)\n", 902 + __func__, count); 903 + return count; 904 + } 898 905 899 906 /* There can be one or three elements here */ 900 907 if (count != 1 && count != 3) { ··· 1070 1063 * share a common logic which is isolated here. 1071 1064 * 1072 1065 * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the 1073 - * copy operation, returns 0 if no modifcation was done OR modification was 1066 + * copy operation, returns 0 if no modification was done OR modification was 1074 1067 * successful. 1075 1068 * 1076 1069 * Locking: The internal device_opp and opp structures are RCU protected. ··· 1158 1151 * mutex locking or synchronize_rcu() blocking calls cannot be used. 1159 1152 * 1160 1153 * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the 1161 - * copy operation, returns 0 if no modifcation was done OR modification was 1154 + * copy operation, returns 0 if no modification was done OR modification was 1162 1155 * successful. 1163 1156 */ 1164 1157 int dev_pm_opp_enable(struct device *dev, unsigned long freq) ··· 1184 1177 * mutex locking or synchronize_rcu() blocking calls cannot be used. 1185 1178 * 1186 1179 * Return: -EINVAL for bad pointers, -ENOMEM if no memory available for the 1187 - * copy operation, returns 0 if no modifcation was done OR modification was 1180 + * copy operation, returns 0 if no modification was done OR modification was 1188 1181 * successful. 1189 1182 */ 1190 1183 int dev_pm_opp_disable(struct device *dev, unsigned long freq)
+4 -3
drivers/char/hw_random/xgene-rng.c
··· 344 344 if (IS_ERR(ctx->csr_base)) 345 345 return PTR_ERR(ctx->csr_base); 346 346 347 - ctx->irq = platform_get_irq(pdev, 0); 348 - if (ctx->irq < 0) { 347 + rc = platform_get_irq(pdev, 0); 348 + if (rc < 0) { 349 349 dev_err(&pdev->dev, "No IRQ resource\n"); 350 - return ctx->irq; 350 + return rc; 351 351 } 352 + ctx->irq = rc; 352 353 353 354 dev_dbg(&pdev->dev, "APM X-Gene RNG BASE %p ALARM IRQ %d", 354 355 ctx->csr_base, ctx->irq);
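The xgene-rng change follows a common kernel pattern: capture `platform_get_irq()`'s return value in a signed local, test it, and only then store it in the context field, so a negative errno is never laundered through an unsigned member where `< 0` can no longer detect it. A small sketch of the shape, with `fake_get_irq` as a hypothetical stand-in for `platform_get_irq()`:

```c
#include <assert.h>

struct fake_ctx {
	unsigned int irq; /* many driver context fields are unsigned */
};

/* Hypothetical stand-in for platform_get_irq(): a valid IRQ number,
 * or a negative errno (-6, i.e. -ENXIO) when none is wired up. */
static int fake_get_irq(int present)
{
	return present ? 42 : -6;
}

/* Patched shape: check the signed return value first, then assign
 * it to the unsigned field, propagating the errno untouched. */
static int fake_probe(struct fake_ctx *ctx, int present)
{
	int rc = fake_get_irq(present);

	if (rc < 0)
		return rc; /* the driver logs dev_err() and bails here */
	ctx->irq = rc;
	return 0;
}
```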
+27
drivers/crypto/marvell/cesa.h
··· 687 687 688 688 int mv_cesa_queue_req(struct crypto_async_request *req); 689 689 690 + /* 691 + * Helper function that indicates whether a crypto request needs to be 692 + * cleaned up or not after being enqueued using mv_cesa_queue_req(). 693 + */ 694 + static inline int mv_cesa_req_needs_cleanup(struct crypto_async_request *req, 695 + int ret) 696 + { 697 + /* 698 + * The queue still had some space, the request was queued 699 + * normally, so there's no need to clean it up. 700 + */ 701 + if (ret == -EINPROGRESS) 702 + return false; 703 + 704 + /* 705 + * The queue had no space left, but since the request is 706 + * flagged with CRYPTO_TFM_REQ_MAY_BACKLOG, it was added to 707 + * the backlog and will be processed later. There's no need to 708 + * clean it up. 709 + */ 710 + if (ret == -EBUSY && req->flags & CRYPTO_TFM_REQ_MAY_BACKLOG) 711 + return false; 712 + 713 + /* Request wasn't queued, we need to clean it up */ 714 + return true; 715 + } 716 + 690 717 /* TDMA functions */ 691 718 692 719 static inline void mv_cesa_req_dma_iter_init(struct mv_cesa_dma_iter *iter,
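The helper this hunk adds encodes a small decision table over the crypto queue's return codes, which the cipher.c and hash.c hunks below then reuse in place of the too-coarse `ret && ret != -EINPROGRESS` test. A standalone sketch of the same table on plain values (the `MAY_BACKLOG` bit below is an arbitrary stand-in for `CRYPTO_TFM_REQ_MAY_BACKLOG`, not the kernel's constant):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for CRYPTO_TFM_REQ_MAY_BACKLOG; only the
 * flag test matters for this sketch, not the bit's value. */
#define MAY_BACKLOG 0x400

/* Same decision table as mv_cesa_req_needs_cleanup(), on bare
 * values instead of a struct crypto_async_request. */
static bool needs_cleanup(unsigned int flags, int ret)
{
	if (ret == -EINPROGRESS)
		return false; /* queued normally, engine owns it */
	if (ret == -EBUSY && (flags & MAY_BACKLOG))
		return false; /* backlogged, processed later */
	return true;          /* never made it onto the queue */
}
```

The case the old test got wrong is the second one: `-EBUSY` with `MAY_BACKLOG` set means the request is still owned by the engine's backlog, so freeing it from the caller would be a use-after-free.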
+3 -4
drivers/crypto/marvell/cipher.c
··· 189 189 { 190 190 struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req); 191 191 struct mv_cesa_ablkcipher_req *creq = ablkcipher_request_ctx(ablkreq); 192 - 193 192 creq->req.base.engine = engine; 194 193 195 194 if (creq->req.base.type == CESA_DMA_REQ) ··· 430 431 return ret; 431 432 432 433 ret = mv_cesa_queue_req(&req->base); 433 - if (ret && ret != -EINPROGRESS) 434 + if (mv_cesa_req_needs_cleanup(&req->base, ret)) 434 435 mv_cesa_ablkcipher_cleanup(req); 435 436 436 437 return ret; ··· 550 551 return ret; 551 552 552 553 ret = mv_cesa_queue_req(&req->base); 553 - if (ret && ret != -EINPROGRESS) 554 + if (mv_cesa_req_needs_cleanup(&req->base, ret)) 554 555 mv_cesa_ablkcipher_cleanup(req); 555 556 556 557 return ret; ··· 692 693 return ret; 693 694 694 695 ret = mv_cesa_queue_req(&req->base); 695 - if (ret && ret != -EINPROGRESS) 696 + if (mv_cesa_req_needs_cleanup(&req->base, ret)) 696 697 mv_cesa_ablkcipher_cleanup(req); 697 698 698 699 return ret;
+3 -5
drivers/crypto/marvell/hash.c
··· 739 739 return 0; 740 740 741 741 ret = mv_cesa_queue_req(&req->base); 742 - if (ret && ret != -EINPROGRESS) { 742 + if (mv_cesa_req_needs_cleanup(&req->base, ret)) 743 743 mv_cesa_ahash_cleanup(req); 744 - return ret; 745 - } 746 744 747 745 return ret; 748 746 } ··· 764 766 return 0; 765 767 766 768 ret = mv_cesa_queue_req(&req->base); 767 - if (ret && ret != -EINPROGRESS) 769 + if (mv_cesa_req_needs_cleanup(&req->base, ret)) 768 770 mv_cesa_ahash_cleanup(req); 769 771 770 772 return ret; ··· 789 791 return 0; 790 792 791 793 ret = mv_cesa_queue_req(&req->base); 792 - if (ret && ret != -EINPROGRESS) 794 + if (mv_cesa_req_needs_cleanup(&req->base, ret)) 793 795 mv_cesa_ahash_cleanup(req); 794 796 795 797 return ret;
+3
drivers/crypto/qat/qat_common/adf_aer.c
··· 88 88 struct pci_dev *parent = pdev->bus->self; 89 89 uint16_t bridge_ctl = 0; 90 90 91 + if (accel_dev->is_vf) 92 + return; 93 + 91 94 dev_info(&GET_DEV(accel_dev), "Resetting device qat_dev%d\n", 92 95 accel_dev->accel_id); 93 96
+1 -1
drivers/extcon/extcon.c
··· 159 159 static bool is_extcon_changed(u32 prev, u32 new, int idx, bool *attached) 160 160 { 161 161 if (((prev >> idx) & 0x1) != ((new >> idx) & 0x1)) { 162 - *attached = new ? true : false; 162 + *attached = ((new >> idx) & 0x1) ? true : false; 163 163 return true; 164 164 } 165 165
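The extcon one-liner matters when several cable bits are set at once: the old `*attached = new ? true : false` looked at the whole state word, so detaching cable `idx` while another cable stayed plugged in was still reported as an attach. A sketch of the corrected per-bit test (the return encoding here is ours for testability, not extcon's API):

```c
#include <assert.h>

/* Returns -1 if bit idx did not change between prev and now,
 * otherwise the new state of that bit (1 = attached, 0 = detached).
 * Mirrors the fixed is_extcon_changed() logic on bare integers. */
static int cable_event(unsigned int prev, unsigned int now, int idx)
{
	if (((prev >> idx) & 0x1) == ((now >> idx) & 0x1))
		return -1;
	return (now >> idx) & 0x1;
}
```

With `prev = 0x3` and `now = 0x2`, bit 0 dropped while bit 1 is still set: the fixed code reports a detach, whereas testing the whole word (`now != 0`) would have reported an attach.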
+8
drivers/firmware/Kconfig
··· 139 139 bool 140 140 depends on ARM || ARM64 141 141 142 + config QCOM_SCM_32 143 + def_bool y 144 + depends on QCOM_SCM && ARM 145 + 146 + config QCOM_SCM_64 147 + def_bool y 148 + depends on QCOM_SCM && ARM64 149 + 142 150 source "drivers/firmware/broadcom/Kconfig" 143 151 source "drivers/firmware/google/Kconfig" 144 152 source "drivers/firmware/efi/Kconfig"
+2 -1
drivers/firmware/Makefile
··· 13 13 obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o 14 14 obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o 15 15 obj-$(CONFIG_QCOM_SCM) += qcom_scm.o 16 - obj-$(CONFIG_QCOM_SCM) += qcom_scm-32.o 16 + obj-$(CONFIG_QCOM_SCM_64) += qcom_scm-64.o 17 + obj-$(CONFIG_QCOM_SCM_32) += qcom_scm-32.o 17 18 CFLAGS_qcom_scm-32.o :=$(call as-instr,.arch_extension sec,-DREQUIRES_SEC=1) 18 19 19 20 obj-y += broadcom/
+63
drivers/firmware/qcom_scm-64.c
··· 1 + /* Copyright (c) 2015, The Linux Foundation. All rights reserved. 2 + * 3 + * This program is free software; you can redistribute it and/or modify 4 + * it under the terms of the GNU General Public License version 2 and 5 + * only version 2 as published by the Free Software Foundation. 6 + * 7 + * This program is distributed in the hope that it will be useful, 8 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 + * GNU General Public License for more details. 11 + */ 12 + 13 + #include <linux/io.h> 14 + #include <linux/errno.h> 15 + #include <linux/qcom_scm.h> 16 + 17 + /** 18 + * qcom_scm_set_cold_boot_addr() - Set the cold boot address for cpus 19 + * @entry: Entry point function for the cpus 20 + * @cpus: The cpumask of cpus that will use the entry point 21 + * 22 + * Set the cold boot address of the cpus. Any cpu outside the supported 23 + * range would be removed from the cpu present mask. 24 + */ 25 + int __qcom_scm_set_cold_boot_addr(void *entry, const cpumask_t *cpus) 26 + { 27 + return -ENOTSUPP; 28 + } 29 + 30 + /** 31 + * qcom_scm_set_warm_boot_addr() - Set the warm boot address for cpus 32 + * @entry: Entry point function for the cpus 33 + * @cpus: The cpumask of cpus that will use the entry point 34 + * 35 + * Set the Linux entry point for the SCM to transfer control to when coming 36 + * out of a power down. CPU power down may be executed on cpuidle or hotplug. 37 + */ 38 + int __qcom_scm_set_warm_boot_addr(void *entry, const cpumask_t *cpus) 39 + { 40 + return -ENOTSUPP; 41 + } 42 + 43 + /** 44 + * qcom_scm_cpu_power_down() - Power down the cpu 45 + * @flags - Flags to flush cache 46 + * 47 + * This is an end point to power down cpu. If there was a pending interrupt, 48 + * the control would return from this function, otherwise, the cpu jumps to the 49 + * warm boot entry point set for this cpu upon reset. 50 + */ 51 + void __qcom_scm_cpu_power_down(u32 flags) 52 + { 53 + } 54 + 55 + int __qcom_scm_is_call_available(u32 svc_id, u32 cmd_id) 56 + { 57 + return -ENOTSUPP; 58 + } 59 + 60 + int __qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp) 61 + { 62 + return -ENOTSUPP; 63 + }
+17
drivers/hv/channel_mgmt.c
··· 204 204 spin_lock_irqsave(&vmbus_connection.channel_lock, flags); 205 205 list_del(&channel->listentry); 206 206 spin_unlock_irqrestore(&vmbus_connection.channel_lock, flags); 207 + 208 + primary_channel = channel; 207 209 } else { 208 210 primary_channel = channel->primary_channel; 209 211 spin_lock_irqsave(&primary_channel->lock, flags); ··· 213 211 primary_channel->num_sc--; 214 212 spin_unlock_irqrestore(&primary_channel->lock, flags); 215 213 } 214 + 215 + /* 216 + * We need to free the bit for init_vp_index() to work in the case 217 + * of sub-channel, when we reload drivers like hv_netvsc. 218 + */ 219 + cpumask_clear_cpu(channel->target_cpu, 220 + &primary_channel->alloced_cpus_in_node); 221 + 216 222 free_channel(channel); 217 223 } 218 224 ··· 468 458 continue; 469 459 } 470 460 461 + /* 462 + * NOTE: in the case of sub-channel, we clear the sub-channel 463 + * related bit(s) in primary->alloced_cpus_in_node in 464 + * hv_process_channel_removal(), so when we reload drivers 465 + * like hv_netvsc in SMP guest, here we're able to re-allocate 466 + * bit from primary->alloced_cpus_in_node. 467 + */ 471 468 if (!cpumask_test_cpu(cur_cpu, 472 469 &primary->alloced_cpus_in_node)) { 473 470 cpumask_set_cpu(cur_cpu,
+1
drivers/hwmon/abx500.c
··· 470 470 { .compatible = "stericsson,abx500-temp" }, 471 471 {}, 472 472 }; 473 + MODULE_DEVICE_TABLE(of, abx500_temp_match); 473 474 #endif 474 475 475 476 static struct platform_driver abx500_temp_driver = {
+1
drivers/hwmon/gpio-fan.c
··· 539 539 { .compatible = "gpio-fan", }, 540 540 {}, 541 541 }; 542 + MODULE_DEVICE_TABLE(of, of_gpio_fan_match); 542 543 #endif /* CONFIG_OF_GPIO */ 543 544 544 545 static int gpio_fan_probe(struct platform_device *pdev)
+1
drivers/hwmon/pwm-fan.c
··· 323 323 { .compatible = "pwm-fan", }, 324 324 {}, 325 325 }; 326 + MODULE_DEVICE_TABLE(of, of_pwm_fan_match); 326 327 327 328 static struct platform_driver pwm_fan_driver = { 328 329 .probe = pwm_fan_probe,
+10 -2
drivers/idle/intel_idle.c
··· 620 620 .name = "C6-SKL", 621 621 .desc = "MWAIT 0x20", 622 622 .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED, 623 - .exit_latency = 75, 623 + .exit_latency = 85, 624 624 .target_residency = 200, 625 625 .enter = &intel_idle, 626 626 .enter_freeze = intel_idle_freeze, }, ··· 636 636 .name = "C8-SKL", 637 637 .desc = "MWAIT 0x40", 638 638 .flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED, 639 - .exit_latency = 174, 639 + .exit_latency = 200, 640 640 .target_residency = 800, 641 + .enter = &intel_idle, 642 + .enter_freeze = intel_idle_freeze, }, 643 + { 644 + .name = "C9-SKL", 645 + .desc = "MWAIT 0x50", 646 + .flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED, 647 + .exit_latency = 480, 648 + .target_residency = 5000, 641 649 .enter = &intel_idle, 642 650 .enter_freeze = intel_idle_freeze, }, 643 651 {
+1 -66
drivers/infiniband/hw/mlx5/main.c
··· 245 245 props->device_cap_flags |= IB_DEVICE_BAD_QKEY_CNTR; 246 246 if (MLX5_CAP_GEN(mdev, apm)) 247 247 props->device_cap_flags |= IB_DEVICE_AUTO_PATH_MIG; 248 - props->device_cap_flags |= IB_DEVICE_LOCAL_DMA_LKEY; 249 248 if (MLX5_CAP_GEN(mdev, xrc)) 250 249 props->device_cap_flags |= IB_DEVICE_XRC; 251 250 props->device_cap_flags |= IB_DEVICE_MEM_MGT_EXTENSIONS; ··· 794 795 return 0; 795 796 } 796 797 797 - static int alloc_pa_mkey(struct mlx5_ib_dev *dev, u32 *key, u32 pdn) 798 - { 799 - struct mlx5_create_mkey_mbox_in *in; 800 - struct mlx5_mkey_seg *seg; 801 - struct mlx5_core_mr mr; 802 - int err; 803 - 804 - in = kzalloc(sizeof(*in), GFP_KERNEL); 805 - if (!in) 806 - return -ENOMEM; 807 - 808 - seg = &in->seg; 809 - seg->flags = MLX5_PERM_LOCAL_READ | MLX5_ACCESS_MODE_PA; 810 - seg->flags_pd = cpu_to_be32(pdn | MLX5_MKEY_LEN64); 811 - seg->qpn_mkey7_0 = cpu_to_be32(0xffffff << 8); 812 - seg->start_addr = 0; 813 - 814 - err = mlx5_core_create_mkey(dev->mdev, &mr, in, sizeof(*in), 815 - NULL, NULL, NULL); 816 - if (err) { 817 - mlx5_ib_warn(dev, "failed to create mkey, %d\n", err); 818 - goto err_in; 819 - } 820 - 821 - kfree(in); 822 - *key = mr.key; 823 - 824 - return 0; 825 - 826 - err_in: 827 - kfree(in); 828 - 829 - return err; 830 - } 831 - 832 - static void free_pa_mkey(struct mlx5_ib_dev *dev, u32 key) 833 - { 834 - struct mlx5_core_mr mr; 835 - int err; 836 - 837 - memset(&mr, 0, sizeof(mr)); 838 - mr.key = key; 839 - err = mlx5_core_destroy_mkey(dev->mdev, &mr); 840 - if (err) 841 - mlx5_ib_warn(dev, "failed to destroy mkey 0x%x\n", key); 842 - } 843 - 844 798 static struct ib_pd *mlx5_ib_alloc_pd(struct ib_device *ibdev, 845 799 struct ib_ucontext *context, 846 800 struct ib_udata *udata) ··· 819 867 kfree(pd); 820 868 return ERR_PTR(-EFAULT); 821 869 } 822 - } else { 823 - err = alloc_pa_mkey(to_mdev(ibdev), &pd->pa_lkey, pd->pdn); 824 - if (err) { 825 - mlx5_core_dealloc_pd(to_mdev(ibdev)->mdev, pd->pdn); 826 - kfree(pd); 827 - return ERR_PTR(err); 828 - } 829 870 } 830 871 831 872 return &pd->ibpd; ··· 828 883 { 829 884 struct mlx5_ib_dev *mdev = to_mdev(pd->device); 830 885 struct mlx5_ib_pd *mpd = to_mpd(pd); 831 - 832 - if (!pd->uobject) 833 - free_pa_mkey(mdev, mpd->pa_lkey); 834 886 835 887 mlx5_core_dealloc_pd(mdev->mdev, mpd->pdn); 836 888 kfree(mpd); ··· 1187 1245 struct ib_srq_init_attr attr; 1188 1246 struct mlx5_ib_dev *dev; 1189 1247 struct ib_cq_init_attr cq_attr = {.cqe = 1}; 1190 - u32 rsvd_lkey; 1191 1248 int ret = 0; 1192 1249 1193 1250 dev = container_of(devr, struct mlx5_ib_dev, devr); 1194 - 1195 - ret = mlx5_core_query_special_context(dev->mdev, &rsvd_lkey); 1196 - if (ret) { 1197 - pr_err("Failed to query special context %d\n", ret); 1198 - return ret; 1199 - } 1200 - dev->ib_dev.local_dma_lkey = rsvd_lkey; 1201 1251 1202 1252 devr->p0 = mlx5_ib_alloc_pd(&dev->ib_dev, NULL, NULL); 1203 1253 if (IS_ERR(devr->p0)) { ··· 1352 1418 strlcpy(dev->ib_dev.name, "mlx5_%d", IB_DEVICE_NAME_MAX); 1353 1419 dev->ib_dev.owner = THIS_MODULE; 1354 1420 dev->ib_dev.node_type = RDMA_NODE_IB_CA; 1421 + dev->ib_dev.local_dma_lkey = 0 /* not supported for now */; 1355 1422 dev->num_ports = MLX5_CAP_GEN(mdev, num_ports); 1356 1423 dev->ib_dev.phys_port_cnt = dev->num_ports; 1357 1424 dev->ib_dev.num_comp_vectors =
-2
drivers/infiniband/hw/mlx5/mlx5_ib.h
··· 103 103 struct mlx5_ib_pd { 104 104 struct ib_pd ibpd; 105 105 u32 pdn; 106 - u32 pa_lkey; 107 106 }; 108 107 109 108 /* Use macros here so that don't have to duplicate ··· 212 213 int uuarn; 213 214 214 215 int create_type; 215 - u32 pa_lkey; 216 216 217 217 /* Store signature errors */ 218 218 bool signature_en;
+1 -3
drivers/infiniband/hw/mlx5/qp.c
··· 925 925 err = create_kernel_qp(dev, init_attr, qp, &in, &inlen); 926 926 if (err) 927 927 mlx5_ib_dbg(dev, "err %d\n", err); 928 - else 929 - qp->pa_lkey = to_mpd(pd)->pa_lkey; 930 928 } 931 929 932 930 if (err) ··· 2043 2045 mfrpl->mapped_page_list[i] = cpu_to_be64(page_list[i] | perm); 2044 2046 dseg->addr = cpu_to_be64(mfrpl->map); 2045 2047 dseg->byte_count = cpu_to_be32(ALIGN(sizeof(u64) * wr->wr.fast_reg.page_list_len, 64)); 2046 - dseg->lkey = cpu_to_be32(pd->pa_lkey); 2048 + dseg->lkey = cpu_to_be32(pd->ibpd.local_dma_lkey); 2047 2049 } 2048 2050 2049 2051 static __be32 send_ieth(struct ib_send_wr *wr)
+3 -1
drivers/infiniband/ulp/ipoib/ipoib.h
··· 80 80 IPOIB_NUM_WC = 4, 81 81 82 82 IPOIB_MAX_PATH_REC_QUEUE = 3, 83 - IPOIB_MAX_MCAST_QUEUE = 3, 83 + IPOIB_MAX_MCAST_QUEUE = 64, 84 84 85 85 IPOIB_FLAG_OPER_UP = 0, 86 86 IPOIB_FLAG_INITIALIZED = 1, ··· 548 548 549 549 int ipoib_mcast_attach(struct net_device *dev, u16 mlid, 550 550 union ib_gid *mgid, int set_qkey); 551 + int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast); 552 + struct ipoib_mcast *__ipoib_mcast_find(struct net_device *dev, void *mgid); 551 553 552 554 int ipoib_init_qp(struct net_device *dev); 553 555 int ipoib_transport_dev_init(struct net_device *dev, struct ib_device *ca);
+18
drivers/infiniband/ulp/ipoib/ipoib_main.c
··· 1149 1149 unsigned long dt; 1150 1150 unsigned long flags; 1151 1151 int i; 1152 + LIST_HEAD(remove_list); 1153 + struct ipoib_mcast *mcast, *tmcast; 1154 + struct net_device *dev = priv->dev; 1152 1155 1153 1156 if (test_bit(IPOIB_STOP_NEIGH_GC, &priv->flags)) 1154 1157 return; ··· 1179 1176 lockdep_is_held(&priv->lock))) != NULL) { 1180 1177 /* was the neigh idle for two GC periods */ 1181 1178 if (time_after(neigh_obsolete, neigh->alive)) { 1179 + u8 *mgid = neigh->daddr + 4; 1180 + 1181 + /* Is this multicast ? */ 1182 + if (*mgid == 0xff) { 1183 + mcast = __ipoib_mcast_find(dev, mgid); 1184 + 1185 + if (mcast && test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) { 1186 + list_del(&mcast->list); 1187 + rb_erase(&mcast->rb_node, &priv->multicast_tree); 1188 + list_add_tail(&mcast->list, &remove_list); 1189 + } 1190 + } 1191 + 1182 1192 rcu_assign_pointer(*np, 1183 1193 rcu_dereference_protected(neigh->hnext, 1184 1194 lockdep_is_held(&priv->lock))); ··· 1207 1191 1208 1192 out_unlock: 1209 1193 spin_unlock_irqrestore(&priv->lock, flags); 1194 + list_for_each_entry_safe(mcast, tmcast, &remove_list, list) 1195 + ipoib_mcast_leave(dev, mcast); 1210 1196 } 1211 1197 1212 1198 static void ipoib_reap_neigh(struct work_struct *work)
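The GC hunk's `u8 *mgid = neigh->daddr + 4` relies on the IPoIB hardware-address layout: a 4-byte QPN followed by the 16-byte GID, where IB multicast GIDs always begin with the prefix byte 0xff. A standalone sketch of that test (the helper name is ours, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define IPOIB_HW_ADDR_LEN 20 /* 4-byte QPN + 16-byte GID */

/* True when the neighbour's hardware address holds a multicast GID:
 * skip the QPN, then check the IB multicast prefix byte 0xff. */
static bool ipoib_addr_is_multicast(const uint8_t daddr[IPOIB_HW_ADDR_LEN])
{
	const uint8_t *mgid = daddr + 4;

	return mgid[0] == 0xff;
}
```

Note the offset matters: a 0xff in the QPN bytes (as in the second test below) must not be mistaken for a multicast GID.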
+14 -12
drivers/infiniband/ulp/ipoib/ipoib_multicast.c
··· 153 153 return mcast; 154 154 } 155 155 156 - static struct ipoib_mcast *__ipoib_mcast_find(struct net_device *dev, void *mgid) 156 + struct ipoib_mcast *__ipoib_mcast_find(struct net_device *dev, void *mgid) 157 157 { 158 158 struct ipoib_dev_priv *priv = netdev_priv(dev); 159 159 struct rb_node *n = priv->multicast_tree.rb_node; ··· 508 508 rec.hop_limit = priv->broadcast->mcmember.hop_limit; 509 509 510 510 /* 511 - * Historically Linux IPoIB has never properly supported SEND 512 - * ONLY join. It emulated it by not providing all the required 513 - * attributes, which is enough to prevent group creation and 514 - * detect if there are full members or not. A major problem 515 - * with supporting SEND ONLY is detecting when the group is 516 - * auto-destroyed as IPoIB will cache the MLID.. 511 + * Send-only IB Multicast joins do not work at the core 512 + * IB layer yet, so we can't use them here. However, 513 + * we are emulating an Ethernet multicast send, which 514 + * does not require a multicast subscription and will 515 + * still send properly. The most appropriate thing to 516 + * do is to create the group if it doesn't exist as that 517 + * most closely emulates the behavior, from a user space 518 + * application perspective, of Ethernet multicast 519 + * operation. For now, we do a full join, maybe later 520 + * when the core IB layers support send only joins we 521 + * will use them. 517 522 */ 518 - #if 1 519 - if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) 520 - comp_mask &= ~IB_SA_MCMEMBER_REC_TRAFFIC_CLASS; 521 - #else 523 + #if 0 522 524 if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags)) 523 525 rec.join_state = 4; 524 526 #endif ··· 677 675 return 0; 678 676 } 679 677 680 - static int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast) 678 + int ipoib_mcast_leave(struct net_device *dev, struct ipoib_mcast *mcast) 681 679 { 682 680 struct ipoib_dev_priv *priv = netdev_priv(dev); 683 681 int ret = 0;
+5
drivers/infiniband/ulp/iser/iscsi_iser.c
··· 97 97 module_param_named(max_sectors, iser_max_sectors, uint, S_IRUGO | S_IWUSR); 98 98 MODULE_PARM_DESC(max_sectors, "Max number of sectors in a single scsi command (default:1024"); 99 99 100 + bool iser_always_reg = true; 101 + module_param_named(always_register, iser_always_reg, bool, S_IRUGO); 102 + MODULE_PARM_DESC(always_register, 103 + "Always register memory, even for continuous memory regions (default:true)"); 104 + 100 105 bool iser_pi_enable = false; 101 106 module_param_named(pi_enable, iser_pi_enable, bool, S_IRUGO); 102 107 MODULE_PARM_DESC(pi_enable, "Enable T10-PI offload support (default:disabled)");
+1
drivers/infiniband/ulp/iser/iscsi_iser.h
··· 611 611 extern bool iser_pi_enable; 612 612 extern int iser_pi_guard; 613 613 extern unsigned int iser_max_sectors; 614 + extern bool iser_always_reg; 614 615 615 616 int iser_assign_reg_ops(struct iser_device *device); 616 617
+12 -6
drivers/infiniband/ulp/iser/iser_memory.c
··· 803 803 iser_reg_prot_sg(struct iscsi_iser_task *task, 804 804 struct iser_data_buf *mem, 805 805 struct iser_fr_desc *desc, 806 + bool use_dma_key, 806 807 struct iser_mem_reg *reg) 807 808 { 808 809 struct iser_device *device = task->iser_conn->ib_conn.device; 809 810 810 - if (mem->dma_nents == 1) 811 + if (use_dma_key) 811 812 return iser_reg_dma(device, mem, reg); 812 813 813 814 return device->reg_ops->reg_mem(task, mem, &desc->pi_ctx->rsc, reg); ··· 818 817 iser_reg_data_sg(struct iscsi_iser_task *task, 819 818 struct iser_data_buf *mem, 820 819 struct iser_fr_desc *desc, 820 + bool use_dma_key, 821 821 struct iser_mem_reg *reg) 822 822 { 823 823 struct iser_device *device = task->iser_conn->ib_conn.device; 824 824 825 - if (mem->dma_nents == 1) 825 + if (use_dma_key) 826 826 return iser_reg_dma(device, mem, reg); 827 827 828 828 return device->reg_ops->reg_mem(task, mem, &desc->rsc, reg); ··· 838 836 struct iser_mem_reg *reg = &task->rdma_reg[dir]; 839 837 struct iser_mem_reg *data_reg; 840 838 struct iser_fr_desc *desc = NULL; 839 + bool use_dma_key; 841 840 int err; 842 841 843 842 err = iser_handle_unaligned_buf(task, mem, dir); 844 843 if (unlikely(err)) 845 844 return err; 846 845 847 - if (mem->dma_nents != 1 || 848 - scsi_get_prot_op(task->sc) != SCSI_PROT_NORMAL) { 846 + use_dma_key = (mem->dma_nents == 1 && !iser_always_reg && 847 + scsi_get_prot_op(task->sc) == SCSI_PROT_NORMAL); 848 + 849 + if (!use_dma_key) { 849 850 desc = device->reg_ops->reg_desc_get(ib_conn); 850 851 reg->mem_h = desc; 851 852 } ··· 858 853 else 859 854 data_reg = &task->desc.data_reg; 860 855 861 - err = iser_reg_data_sg(task, mem, desc, data_reg); 856 + err = iser_reg_data_sg(task, mem, desc, use_dma_key, data_reg); 862 857 if (unlikely(err)) 863 858 goto err_reg; 864 859 ··· 871 866 if (unlikely(err)) 872 867 goto err_reg; 873 868 874 - err = iser_reg_prot_sg(task, mem, desc, prot_reg); 869 + err = iser_reg_prot_sg(task, mem, desc, 870 + use_dma_key, prot_reg); 875 871 if (unlikely(err)) 876 872 goto err_reg; 877 873 }
+13 -8
drivers/infiniband/ulp/iser/iser_verbs.c
··· 133 133 (unsigned long)comp); 134 134 } 135 135 136 - device->mr = ib_get_dma_mr(device->pd, IB_ACCESS_LOCAL_WRITE | 137 - IB_ACCESS_REMOTE_WRITE | 138 - IB_ACCESS_REMOTE_READ); 139 - if (IS_ERR(device->mr)) 140 - goto dma_mr_err; 136 + if (!iser_always_reg) { 137 + int access = IB_ACCESS_LOCAL_WRITE | 138 + IB_ACCESS_REMOTE_WRITE | 139 + IB_ACCESS_REMOTE_READ; 140 + 141 + device->mr = ib_get_dma_mr(device->pd, access); 142 + if (IS_ERR(device->mr)) 143 + goto dma_mr_err; 144 + } 141 145 142 146 INIT_IB_EVENT_HANDLER(&device->event_handler, device->ib_device, 143 147 iser_event_handler); ··· 151 147 return 0; 152 148 153 149 handler_err: 154 - ib_dereg_mr(device->mr); 150 + if (device->mr) 151 + ib_dereg_mr(device->mr); 155 152 dma_mr_err: 156 153 for (i = 0; i < device->comps_used; i++) 157 154 tasklet_kill(&device->comps[i].tasklet); ··· 178 173 static void iser_free_device_ib_res(struct iser_device *device) 179 174 { 180 175 int i; 181 - BUG_ON(device->mr == NULL); 182 176 183 177 for (i = 0; i < device->comps_used; i++) { 184 178 struct iser_comp *comp = &device->comps[i]; ··· 188 184 } 189 185 190 186 (void)ib_unregister_event_handler(&device->event_handler); 191 - (void)ib_dereg_mr(device->mr); 187 + if (device->mr) 188 + (void)ib_dereg_mr(device->mr); 192 189 ib_dealloc_pd(device->pd); 193 190 194 191 kfree(device->comps);
+186 -107
drivers/infiniband/ulp/isert/ib_isert.c
··· 238 238 rx_sg->lkey = device->pd->local_dma_lkey; 239 239 } 240 240 241 - isert_conn->rx_desc_head = 0; 242 - 243 241 return 0; 244 242 245 243 dma_map_fail: ··· 632 634 isert_init_conn(struct isert_conn *isert_conn) 633 635 { 634 636 isert_conn->state = ISER_CONN_INIT; 635 - INIT_LIST_HEAD(&isert_conn->accept_node); 637 + INIT_LIST_HEAD(&isert_conn->node); 636 638 init_completion(&isert_conn->login_comp); 637 639 init_completion(&isert_conn->login_req_comp); 638 640 init_completion(&isert_conn->wait); ··· 760 762 ret = isert_rdma_post_recvl(isert_conn); 761 763 if (ret) 762 764 goto out_conn_dev; 763 - /* 764 - * Obtain the second reference now before isert_rdma_accept() to 765 - * ensure that any initiator generated REJECT CM event that occurs 766 - * asynchronously won't drop the last reference until the error path 767 - * in iscsi_target_login_sess_out() does it's ->iscsit_free_conn() -> 768 - * isert_free_conn() -> isert_put_conn() -> kref_put(). 769 - */ 770 - if (!kref_get_unless_zero(&isert_conn->kref)) { 771 - isert_warn("conn %p connect_release is running\n", isert_conn); 772 - goto out_conn_dev; 773 - } 774 765 775 766 ret = isert_rdma_accept(isert_conn); 776 767 if (ret) 777 768 goto out_conn_dev; 778 769 779 - mutex_lock(&isert_np->np_accept_mutex); 780 - list_add_tail(&isert_conn->accept_node, &isert_np->np_accept_list); 781 - mutex_unlock(&isert_np->np_accept_mutex); 770 + mutex_lock(&isert_np->mutex); 771 + list_add_tail(&isert_conn->node, &isert_np->accepted); 772 + mutex_unlock(&isert_np->mutex); 782 773 783 - isert_info("np %p: Allow accept_np to continue\n", np); 784 - up(&isert_np->np_sem); 785 774 return 0; 786 775 787 776 out_conn_dev: ··· 816 831 isert_connected_handler(struct rdma_cm_id *cma_id) 817 832 { 818 833 struct isert_conn *isert_conn = cma_id->qp->qp_context; 834 + struct isert_np *isert_np = cma_id->context; 819 835 820 836 isert_info("conn %p\n", isert_conn); 821 837 822 838 mutex_lock(&isert_conn->mutex); 823 - if (isert_conn->state != ISER_CONN_FULL_FEATURE) 824 - isert_conn->state = ISER_CONN_UP; 839 + isert_conn->state = ISER_CONN_UP; 840 + kref_get(&isert_conn->kref); 825 841 mutex_unlock(&isert_conn->mutex); 842 + 843 + mutex_lock(&isert_np->mutex); 844 + list_move_tail(&isert_conn->node, &isert_np->pending); 845 + mutex_unlock(&isert_np->mutex); 846 + 847 + isert_info("np %p: Allow accept_np to continue\n", isert_np); 848 + up(&isert_np->sem); 826 849 } 827 850 828 851 static void ··· 896 903 897 904 switch (event) { 898 905 case RDMA_CM_EVENT_DEVICE_REMOVAL: 899 - isert_np->np_cm_id = NULL; 906 + isert_np->cm_id = NULL; 900 907 break; 901 908 case RDMA_CM_EVENT_ADDR_CHANGE: 902 - isert_np->np_cm_id = isert_setup_id(isert_np); 909 + isert_np->cm_id = isert_setup_id(isert_np); 903 910 if (IS_ERR(isert_np->np_cm_id)) { 910 + if (IS_ERR(isert_np->cm_id)) { 904 911 isert_err("isert np %p setup id failed: %ld\n", 905 - isert_np, PTR_ERR(isert_np->np_cm_id)); 906 - isert_np->np_cm_id = NULL; 912 + isert_np, PTR_ERR(isert_np->cm_id)); 913 + isert_np->cm_id = NULL; 907 914 } 908 915 break; 909 916 default: ··· 922 929 struct isert_conn *isert_conn; 923 930 bool terminating = false; 924 931 925 - if (isert_np->np_cm_id == cma_id) 932 + if (isert_np->cm_id == cma_id) 926 933 return isert_np_cma_handler(cma_id->context, event); 927 934 928 935 isert_conn = cma_id->qp->qp_context; ··· 938 945 if (terminating) 939 946 goto out; 940 947 941 - mutex_lock(&isert_np->np_accept_mutex); 942 - if (!list_empty(&isert_conn->accept_node)) { 943 - list_del_init(&isert_conn->accept_node); 948 + mutex_lock(&isert_np->mutex); 949 + if (!list_empty(&isert_conn->node)) { 950 + list_del_init(&isert_conn->node); 944 951 isert_put_conn(isert_conn); 945 952 queue_work(isert_release_wq, &isert_conn->release_work); 946 953 } 947 - mutex_unlock(&isert_np->np_accept_mutex); 954 + mutex_unlock(&isert_np->mutex); 948 955 949 956 out: 950 957 return 0; ··· 955 962 { 956 963 struct isert_conn *isert_conn = cma_id->qp->qp_context; 957 964 965 + list_del_init(&isert_conn->node); 958 966 isert_conn->cm_id = NULL; 959 967 isert_put_conn(isert_conn); 960 968 ··· 1000 1006 } 1001 1007 1002 1008 static int 1003 - isert_post_recv(struct isert_conn *isert_conn, u32 count) 1009 + isert_post_recvm(struct isert_conn *isert_conn, u32 count) 1004 1010 { 1005 1011 struct ib_recv_wr *rx_wr, *rx_wr_failed; 1006 1012 int i, ret; 1007 - unsigned int rx_head = isert_conn->rx_desc_head; 1008 1013 struct iser_rx_desc *rx_desc; 1009 1014 1010 1015 for (rx_wr = isert_conn->rx_wr, i = 0; i < count; i++, rx_wr++) { 1011 - rx_desc = &isert_conn->rx_descs[rx_head]; 1012 - rx_wr->wr_id = (uintptr_t)rx_desc; 1013 - rx_wr->sg_list = &rx_desc->rx_sg; 1014 - rx_wr->num_sge = 1; 1015 - rx_wr->next = rx_wr + 1; 1016 - rx_head = (rx_head + 1) & (ISERT_QP_MAX_RECV_DTOS - 1); 1016 + rx_desc = &isert_conn->rx_descs[i]; 1017 + rx_wr->wr_id = (uintptr_t)rx_desc; 1018 + rx_wr->sg_list = &rx_desc->rx_sg; 1019 + rx_wr->num_sge = 1; 1020 + rx_wr->next = rx_wr + 1; 1017 1021 } 1018 - 1019 1022 rx_wr--; 1020 1023 rx_wr->next = NULL; /* mark end of work requests list */ 1021 1024 1022 1025 isert_conn->post_recv_buf_count += count; 1023 1026 ret = ib_post_recv(isert_conn->qp, isert_conn->rx_wr, 1024 - &rx_wr_failed); 1027 - &rx_wr_failed); 1025 1028 if (ret) { 1026 1029 isert_err("ib_post_recv() failed with ret: %d\n", ret); 1027 1030 isert_conn->post_recv_buf_count -= count; 1028 - } else { 1029 - isert_dbg("Posted %d RX buffers\n", count); 1030 - isert_conn->rx_desc_head = rx_head; 1031 1031 } 1032 + 1033 + return ret; 1034 + } 1035 + 1036 + static int 1037 + isert_post_recv(struct isert_conn *isert_conn, struct iser_rx_desc *rx_desc) 1038 + { 1039 + struct ib_recv_wr *rx_wr_failed, rx_wr; 1040 + int ret; 1041 + 1042 + rx_wr.wr_id = (uintptr_t)rx_desc; 1043 + rx_wr.sg_list = &rx_desc->rx_sg; 1044 + rx_wr.num_sge = 1; 1045 + rx_wr.next = NULL; 1046 + 1047 + isert_conn->post_recv_buf_count++; 1048 + ret = ib_post_recv(isert_conn->qp, &rx_wr, &rx_wr_failed); 1049 + if (ret) { 1050 + isert_err("ib_post_recv() failed with ret: %d\n", ret); 1051 + isert_conn->post_recv_buf_count--; 1052 + } 1053 + 1032 1054 return ret; 1033 1055 } 1034 1056 ··· 1215 1205 if (ret) 1216 1206 return ret; 1217 1207 1218 - ret = isert_post_recv(isert_conn, ISERT_MIN_POSTED_RX); 1208 + ret = isert_post_recvm(isert_conn, 1209 + ISERT_QP_MAX_RECV_DTOS); 1219 1210 if (ret) 1220 1211 return ret; 1221 1212 ··· 1289 1278 } 1290 1279 1291 1280 static struct iscsi_cmd 1292 - *isert_allocate_cmd(struct iscsi_conn *conn) 1281 + *isert_allocate_cmd(struct iscsi_conn *conn, struct iser_rx_desc *rx_desc) 1293 1282 { 1294 1283 struct isert_conn *isert_conn = conn->context; 1295 1284 struct isert_cmd *isert_cmd; ··· 1303 1292 isert_cmd = iscsit_priv_cmd(cmd); 1304 1293 isert_cmd->conn = isert_conn; 1305 1294 isert_cmd->iscsi_cmd = cmd; 1295 + isert_cmd->rx_desc = rx_desc; 1306 1296 1307 1297 return cmd; 1308 1298 } ··· 1315 1303 { 1316 1304 struct iscsi_conn *conn = isert_conn->conn; 1317 1305 struct iscsi_scsi_req *hdr = (struct iscsi_scsi_req *)buf; 1318 - struct scatterlist *sg; 1319 1306 int imm_data, imm_data_len, unsol_data, sg_nents, rc; 1320 1307 bool dump_payload = false; 1308 + unsigned int data_len; 1321 1309 1322 1310 rc = iscsit_setup_scsi_cmd(conn, cmd, buf); 1323 1311 if (rc < 0) ··· 1326 1314 imm_data = cmd->immediate_data; 1327 1315 imm_data_len = cmd->first_burst_len; 1328 1316 unsol_data = cmd->unsolicited_data; 1317 + data_len = cmd->se_cmd.data_length; 1329 1318 1319 + if (imm_data && imm_data_len == data_len) 1320 + cmd->se_cmd.se_cmd_flags |= SCF_PASSTHROUGH_SG_TO_MEM_NOALLOC; 1330 1321 rc = iscsit_process_scsi_cmd(conn, cmd, hdr); 1331 1322 if (rc < 0) { 1332 1323 return 0; ··· 1341 1326 if (!imm_data) 1342 1327 return 0; 1343 1328 1344 - sg = &cmd->se_cmd.t_data_sg[0]; 1345 - sg_nents = max(1UL, DIV_ROUND_UP(imm_data_len, PAGE_SIZE)); 1346 - 1347 - isert_dbg("Copying Immediate SG: %p sg_nents: %u from %p imm_data_len: %d\n", 1348 - sg, sg_nents, &rx_desc->data[0], imm_data_len); 1349 - 1350 - sg_copy_from_buffer(sg, sg_nents, &rx_desc->data[0], imm_data_len); 1329 + if (imm_data_len != data_len) { 1330 + sg_nents = max(1UL, DIV_ROUND_UP(imm_data_len, PAGE_SIZE)); 1331 + sg_copy_from_buffer(cmd->se_cmd.t_data_sg, sg_nents, 1332 + &rx_desc->data[0], imm_data_len); 1333 + isert_dbg("Copy Immediate sg_nents: %u imm_data_len: %d\n", 1334 + sg_nents, imm_data_len); 1335 + } else { 1336 + sg_init_table(&isert_cmd->sg, 1); 1337 + cmd->se_cmd.t_data_sg = &isert_cmd->sg; 1338 + cmd->se_cmd.t_data_nents = 1; 1339 + sg_set_buf(&isert_cmd->sg, &rx_desc->data[0], imm_data_len); 1340 + isert_dbg("Transfer Immediate imm_data_len: %d\n", 1341 + imm_data_len); 1342 + } 1351 1343 1352 1344 cmd->write_data_done += imm_data_len; 1353 1345 ··· 1429 1407 if (rc < 0) 1430 1408 return rc; 1431 1409 1410 + /* 1411 + * multiple data-outs on the same command can arrive - 1412 + * so post the buffer before hand 1413 + */ 1414 + rc = isert_post_recv(isert_conn, rx_desc); 1415 + if (rc) { 1416 + isert_err("ib_post_recv failed with %d\n", rc); 1417 + return rc; 1418 + } 1432 1419 return 0; 1433 1420 } 1434 1421 ··· 1510 1479 1511 1480 switch (opcode) { 1512 1481 case ISCSI_OP_SCSI_CMD: 1513 - cmd = isert_allocate_cmd(conn); 1482 + cmd = isert_allocate_cmd(conn, rx_desc); 1514 1483 if (!cmd) 1515 1484 break; 1516 1485 ··· 1524 1493 rx_desc, (unsigned char *)hdr); 1525 1494 break; 1526 1495 case ISCSI_OP_NOOP_OUT: 1527 - cmd = isert_allocate_cmd(conn); 1496 + cmd = isert_allocate_cmd(conn, rx_desc); 1528 1497 if (!cmd) 1529 1498 break; 1530 1499 ··· 1537 1506 (unsigned char *)hdr); 1538 1507 break; 1539 1508 case ISCSI_OP_SCSI_TMFUNC: 1540 - cmd = isert_allocate_cmd(conn); 1509 + cmd = isert_allocate_cmd(conn, rx_desc); 1541 1510 if (!cmd) 1542 1511 break; 1543 1512 ··· 1545 1514 (unsigned char *)hdr); 1546 1515 break; 1547 1516 case ISCSI_OP_LOGOUT: 1548 - cmd = isert_allocate_cmd(conn); 1517 + cmd = isert_allocate_cmd(conn, rx_desc); 1549 1518 if (!cmd) 1550 1519 break; 1551 1520 1552 1521 ret = iscsit_handle_logout_cmd(conn, cmd, (unsigned char *)hdr); 1553 1522 break; 1554 1523 case ISCSI_OP_TEXT: 1555 - if (be32_to_cpu(hdr->ttt) != 0xFFFFFFFF) { 1524 + if (be32_to_cpu(hdr->ttt) != 0xFFFFFFFF) 1556 1525 cmd = iscsit_find_cmd_from_itt(conn, hdr->itt); 1557 - if (!cmd) 1558 - break; 1559 - } else { 1560 - cmd = isert_allocate_cmd(conn); 1561 - if (!cmd) 1562 - break; 1563 - } 1526 + else 1527 + cmd = isert_allocate_cmd(conn, rx_desc); 1528 + 1529 + if (!cmd) 1530 + break; 1564 1531 1565 1532 isert_cmd = iscsit_priv_cmd(cmd); 1566 1533 ret = isert_handle_text_cmd(isert_conn, isert_cmd, cmd, ··· 1618 1589 struct ib_device *ib_dev = isert_conn->cm_id->device; 1619 1590 struct iscsi_hdr *hdr; 1620 1591 u64 rx_dma; 1621 - int rx_buflen, outstanding; 1592 + int rx_buflen; 1622 1593 1623 1594 if ((char *)desc == isert_conn->login_req_buf) { 1624 1595 rx_dma = isert_conn->login_req_dma; ··· 1658 1629 DMA_FROM_DEVICE); 1659 1630 1660 1631 isert_conn->post_recv_buf_count--; 1661 - isert_dbg("Decremented post_recv_buf_count: %d\n", 1662 - isert_conn->post_recv_buf_count); 1663 - 1664 - if ((char *)desc == isert_conn->login_req_buf) 1665 - return; 1666 - 1667 - outstanding = isert_conn->post_recv_buf_count; 1668 - if (outstanding + ISERT_MIN_POSTED_RX <= ISERT_QP_MAX_RECV_DTOS) { 1669 - int err, count = min(ISERT_QP_MAX_RECV_DTOS - outstanding, 1670 - ISERT_MIN_POSTED_RX); 1671 - err = isert_post_recv(isert_conn, count); 1672 - if (err) { 1673 - isert_err("isert_post_recv() count: %d failed, %d\n", 1674 - count, err); 1675 - } 1676 - } 1677 1632 } 1678 1633 1679 1634 static int ··· 2168 2155 { 2169 2156 struct ib_send_wr *wr_failed; 2170 2157 int ret; 2158 + 2159 + ret = isert_post_recv(isert_conn, isert_cmd->rx_desc); 2160 + if (ret) { 2161 + isert_err("ib_post_recv failed with %d\n", ret); 2162 + return ret; 2163 + } 2171 2164 2172 
2165 ret = ib_post_send(isert_conn->qp, &isert_cmd->tx_desc.send_wr, 2173 2166 &wr_failed); ··· 2969 2950 &isert_cmd->tx_desc.send_wr); 2970 2951 isert_cmd->rdma_wr.s_send_wr.next = &isert_cmd->tx_desc.send_wr; 2971 2952 wr->send_wr_num += 1; 2953 + 2954 + rc = isert_post_recv(isert_conn, isert_cmd->rx_desc); 2955 + if (rc) { 2956 + isert_err("ib_post_recv failed with %d\n", rc); 2957 + return rc; 2958 + } 2972 2959 } 2973 2960 2974 2961 rc = ib_post_send(isert_conn->qp, wr->send_wr, &wr_failed); ··· 3024 2999 static int 3025 3000 isert_immediate_queue(struct iscsi_conn *conn, struct iscsi_cmd *cmd, int state) 3026 3001 { 3027 - int ret; 3002 + struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd); 3003 + int ret = 0; 3028 3004 3029 3005 switch (state) { 3006 + case ISTATE_REMOVE: 3007 + spin_lock_bh(&conn->cmd_lock); 3008 + list_del_init(&cmd->i_conn_node); 3009 + spin_unlock_bh(&conn->cmd_lock); 3010 + isert_put_cmd(isert_cmd, true); 3011 + break; 3030 3012 case ISTATE_SEND_NOPIN_WANT_RESPONSE: 3031 3013 ret = isert_put_nopin(cmd, conn, false); 3032 3014 break; ··· 3138 3106 isert_err("Unable to allocate struct isert_np\n"); 3139 3107 return -ENOMEM; 3140 3108 } 3141 - sema_init(&isert_np->np_sem, 0); 3142 - mutex_init(&isert_np->np_accept_mutex); 3143 - INIT_LIST_HEAD(&isert_np->np_accept_list); 3144 - init_completion(&isert_np->np_login_comp); 3109 + sema_init(&isert_np->sem, 0); 3110 + mutex_init(&isert_np->mutex); 3111 + INIT_LIST_HEAD(&isert_np->accepted); 3112 + INIT_LIST_HEAD(&isert_np->pending); 3145 3113 isert_np->np = np; 3146 3114 3147 3115 /* ··· 3157 3125 goto out; 3158 3126 } 3159 3127 3160 - isert_np->np_cm_id = isert_lid; 3128 + isert_np->cm_id = isert_lid; 3161 3129 np->np_context = isert_np; 3162 3130 3163 3131 return 0; ··· 3246 3214 int ret; 3247 3215 3248 3216 accept_wait: 3249 - ret = down_interruptible(&isert_np->np_sem); 3217 + ret = down_interruptible(&isert_np->sem); 3250 3218 if (ret) 3251 3219 return -ENODEV; 3252 3220 ··· 3263 3231 } 
3264 3232 spin_unlock_bh(&np->np_thread_lock); 3265 3233 3266 - mutex_lock(&isert_np->np_accept_mutex); 3267 - if (list_empty(&isert_np->np_accept_list)) { 3268 - mutex_unlock(&isert_np->np_accept_mutex); 3234 + mutex_lock(&isert_np->mutex); 3235 + if (list_empty(&isert_np->pending)) { 3236 + mutex_unlock(&isert_np->mutex); 3269 3237 goto accept_wait; 3270 3238 } 3271 - isert_conn = list_first_entry(&isert_np->np_accept_list, 3272 - struct isert_conn, accept_node); 3273 - list_del_init(&isert_conn->accept_node); 3274 - mutex_unlock(&isert_np->np_accept_mutex); 3239 + isert_conn = list_first_entry(&isert_np->pending, 3240 + struct isert_conn, node); 3241 + list_del_init(&isert_conn->node); 3242 + mutex_unlock(&isert_np->mutex); 3275 3243 3276 3244 conn->context = isert_conn; 3277 3245 isert_conn->conn = conn; ··· 3289 3257 struct isert_np *isert_np = np->np_context; 3290 3258 struct isert_conn *isert_conn, *n; 3291 3259 3292 - if (isert_np->np_cm_id) 3293 - rdma_destroy_id(isert_np->np_cm_id); 3260 + if (isert_np->cm_id) 3261 + rdma_destroy_id(isert_np->cm_id); 3294 3262 3295 3263 /* 3296 3264 * FIXME: At this point we don't have a good way to insure 3297 3265 * that at this point we don't have hanging connections that 3298 3266 * completed RDMA establishment but didn't start iscsi login 3299 3267 * process. So work-around this by cleaning up what ever piled 3300 - * up in np_accept_list. 3268 + * up in accepted and pending lists. 
3301 3269 	 */
3302 - 	mutex_lock(&isert_np->np_accept_mutex);
3303 - 	if (!list_empty(&isert_np->np_accept_list)) {
3304 - 		isert_info("Still have isert connections, cleaning up...\n");
3270 + 	mutex_lock(&isert_np->mutex);
3271 + 	if (!list_empty(&isert_np->pending)) {
3272 + 		isert_info("Still have isert pending connections\n");
3305 3273 		list_for_each_entry_safe(isert_conn, n,
3306 - 					 &isert_np->np_accept_list,
3307 - 					 accept_node) {
3274 + 					 &isert_np->pending,
3275 + 					 node) {
3308 3276 			isert_info("cleaning isert_conn %p state (%d)\n",
3309 3277 				   isert_conn, isert_conn->state);
3310 3278 			isert_connect_release(isert_conn);
3311 3279 		}
3312 3280 	}
3313 - 	mutex_unlock(&isert_np->np_accept_mutex);
3281 +
3282 + 	if (!list_empty(&isert_np->accepted)) {
3283 + 		isert_info("Still have isert accepted connections\n");
3284 + 		list_for_each_entry_safe(isert_conn, n,
3285 + 					 &isert_np->accepted,
3286 + 					 node) {
3287 + 			isert_info("cleaning isert_conn %p state (%d)\n",
3288 + 				   isert_conn, isert_conn->state);
3289 + 			isert_connect_release(isert_conn);
3290 + 		}
3291 + 	}
3292 + 	mutex_unlock(&isert_np->mutex);
3314 3293
3315 3294 	np->np_context = NULL;
3316 3295 	kfree(isert_np);
··· 3388 3345 	wait_for_completion(&isert_conn->wait_comp_err);
3389 3346 }
3390 3347
3348 + /**
3349 + * isert_put_unsol_pending_cmds() - Drop commands waiting for
3350 + * unsolicited dataout
3351 + * @conn: iscsi connection
3352 + *
3353 + * We might still have commands that are waiting for unsolicited
3354 + * dataout messages. 
We must put the extra reference on those 3355 + * before blocking on the target_wait_for_session_cmds 3356 + */ 3357 + static void 3358 + isert_put_unsol_pending_cmds(struct iscsi_conn *conn) 3359 + { 3360 + struct iscsi_cmd *cmd, *tmp; 3361 + static LIST_HEAD(drop_cmd_list); 3362 + 3363 + spin_lock_bh(&conn->cmd_lock); 3364 + list_for_each_entry_safe(cmd, tmp, &conn->conn_cmd_list, i_conn_node) { 3365 + if ((cmd->cmd_flags & ICF_NON_IMMEDIATE_UNSOLICITED_DATA) && 3366 + (cmd->write_data_done < conn->sess->sess_ops->FirstBurstLength) && 3367 + (cmd->write_data_done < cmd->se_cmd.data_length)) 3368 + list_move_tail(&cmd->i_conn_node, &drop_cmd_list); 3369 + } 3370 + spin_unlock_bh(&conn->cmd_lock); 3371 + 3372 + list_for_each_entry_safe(cmd, tmp, &drop_cmd_list, i_conn_node) { 3373 + list_del_init(&cmd->i_conn_node); 3374 + if (cmd->i_state != ISTATE_REMOVE) { 3375 + struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd); 3376 + 3377 + isert_info("conn %p dropping cmd %p\n", conn, cmd); 3378 + isert_put_cmd(isert_cmd, true); 3379 + } 3380 + } 3381 + } 3382 + 3391 3383 static void isert_wait_conn(struct iscsi_conn *conn) 3392 3384 { 3393 3385 struct isert_conn *isert_conn = conn->context; ··· 3441 3363 isert_conn_terminate(isert_conn); 3442 3364 mutex_unlock(&isert_conn->mutex); 3443 3365 3444 - isert_wait4cmds(conn); 3445 3366 isert_wait4flush(isert_conn); 3367 + isert_put_unsol_pending_cmds(conn); 3368 + isert_wait4cmds(conn); 3446 3369 isert_wait4logout(isert_conn); 3447 3370 3448 3371 queue_work(isert_release_wq, &isert_conn->release_work);
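The immediate-data hunk in ib_isert.c above adds a zero-copy path: when the immediate payload covers the entire transfer, the command's scatterlist is pointed at the receive descriptor (sg_set_buf) instead of being copied out of it (sg_copy_from_buffer). A user-space sketch of that decision, using illustrative stand-in types rather than the kernel scatterlist API:

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in for a single-entry scatterlist. */
struct sketch_sg { const char *buf; unsigned int len; };

/* If the immediate data is the entire transfer, alias the rx buffer
 * (zero copy, rx buffer must stay owned until the command completes);
 * otherwise copy the partial payload into the command's own buffer.
 * Returns 1 on the zero-copy path, 0 on the copy path. */
static int setup_imm_data(struct sketch_sg *sg, char *own_buf,
                          const char *rx_buf, unsigned int imm_len,
                          unsigned int data_len)
{
    if (imm_len == data_len) {
        sg->buf = rx_buf;          /* point, don't copy */
        sg->len = imm_len;
        return 1;
    }
    memcpy(own_buf, rx_buf, imm_len);  /* partial burst: copy out */
    sg->buf = own_buf;
    sg->len = imm_len;
    return 0;
}
```

This is why the patch also threads `rx_desc` through `isert_allocate_cmd()`: the descriptor can no longer be reposted until the command that aliases it is done.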
+9 -12
drivers/infiniband/ulp/isert/ib_isert.h
··· 113 113 }; 114 114 115 115 struct isert_rdma_wr { 116 - struct list_head wr_list; 117 116 struct isert_cmd *isert_cmd; 118 117 enum iser_ib_op_code iser_ib_op; 119 118 struct ib_sge *ib_sge; ··· 133 134 uint64_t write_va; 134 135 u64 pdu_buf_dma; 135 136 u32 pdu_buf_len; 136 - u32 read_va_off; 137 - u32 write_va_off; 138 - u32 rdma_wr_num; 139 137 struct isert_conn *conn; 140 138 struct iscsi_cmd *iscsi_cmd; 141 139 struct iser_tx_desc tx_desc; 140 + struct iser_rx_desc *rx_desc; 142 141 struct isert_rdma_wr rdma_wr; 143 142 struct work_struct comp_work; 143 + struct scatterlist sg; 144 144 }; 145 145 146 146 struct isert_device; ··· 157 159 u64 login_req_dma; 158 160 int login_req_len; 159 161 u64 login_rsp_dma; 160 - unsigned int rx_desc_head; 161 162 struct iser_rx_desc *rx_descs; 162 - struct ib_recv_wr rx_wr[ISERT_MIN_POSTED_RX]; 163 + struct ib_recv_wr rx_wr[ISERT_QP_MAX_RECV_DTOS]; 163 164 struct iscsi_conn *conn; 164 - struct list_head accept_node; 165 + struct list_head node; 165 166 struct completion login_comp; 166 167 struct completion login_req_comp; 167 168 struct iser_tx_desc login_tx_desc; ··· 219 222 220 223 struct isert_np { 221 224 struct iscsi_np *np; 222 - struct semaphore np_sem; 223 - struct rdma_cm_id *np_cm_id; 224 - struct mutex np_accept_mutex; 225 - struct list_head np_accept_list; 226 - struct completion np_login_comp; 225 + struct semaphore sem; 226 + struct rdma_cm_id *cm_id; 227 + struct mutex mutex; 228 + struct list_head accepted; 229 + struct list_head pending; 227 230 };
+1
drivers/input/joystick/Kconfig
··· 196 196 config JOYSTICK_ZHENHUA 197 197 tristate "5-byte Zhenhua RC transmitter" 198 198 select SERIO 199 + select BITREVERSE 199 200 help 200 201 Say Y here if you have a Zhen Hua PPM-4CH transmitter which is 201 202 supplied with a ready to fly micro electric indoor helicopters
+1 -1
drivers/iommu/Kconfig
··· 43 43 endmenu 44 44 45 45 config IOMMU_IOVA 46 - bool 46 + tristate 47 47 48 48 config OF_IOMMU 49 49 def_bool y
+5 -3
drivers/iommu/intel-iommu.c
··· 3215 3215 3216 3216 /* Restrict dma_mask to the width that the iommu can handle */ 3217 3217 dma_mask = min_t(uint64_t, DOMAIN_MAX_ADDR(domain->gaw), dma_mask); 3218 + /* Ensure we reserve the whole size-aligned region */ 3219 + nrpages = __roundup_pow_of_two(nrpages); 3218 3220 3219 3221 if (!dmar_forcedac && dma_mask > DMA_BIT_MASK(32)) { 3220 3222 /* ··· 3713 3711 static int __init iommu_init_mempool(void) 3714 3712 { 3715 3713 int ret; 3716 - ret = iommu_iova_cache_init(); 3714 + ret = iova_cache_get(); 3717 3715 if (ret) 3718 3716 return ret; 3719 3717 ··· 3727 3725 3728 3726 kmem_cache_destroy(iommu_domain_cache); 3729 3727 domain_error: 3730 - iommu_iova_cache_destroy(); 3728 + iova_cache_put(); 3731 3729 3732 3730 return -ENOMEM; 3733 3731 } ··· 3736 3734 { 3737 3735 kmem_cache_destroy(iommu_devinfo_cache); 3738 3736 kmem_cache_destroy(iommu_domain_cache); 3739 - iommu_iova_cache_destroy(); 3737 + iova_cache_put(); 3740 3738 } 3741 3739 3742 3740 static void quirk_ioat_snb_local_iommu(struct pci_dev *pdev)
+69 -51
drivers/iommu/iova.c
··· 18 18 */ 19 19 20 20 #include <linux/iova.h> 21 + #include <linux/module.h> 21 22 #include <linux/slab.h> 22 - 23 - static struct kmem_cache *iommu_iova_cache; 24 - 25 - int iommu_iova_cache_init(void) 26 - { 27 - int ret = 0; 28 - 29 - iommu_iova_cache = kmem_cache_create("iommu_iova", 30 - sizeof(struct iova), 31 - 0, 32 - SLAB_HWCACHE_ALIGN, 33 - NULL); 34 - if (!iommu_iova_cache) { 35 - pr_err("Couldn't create iova cache\n"); 36 - ret = -ENOMEM; 37 - } 38 - 39 - return ret; 40 - } 41 - 42 - void iommu_iova_cache_destroy(void) 43 - { 44 - kmem_cache_destroy(iommu_iova_cache); 45 - } 46 - 47 - struct iova *alloc_iova_mem(void) 48 - { 49 - return kmem_cache_alloc(iommu_iova_cache, GFP_ATOMIC); 50 - } 51 - 52 - void free_iova_mem(struct iova *iova) 53 - { 54 - kmem_cache_free(iommu_iova_cache, iova); 55 - } 56 23 57 24 void 58 25 init_iova_domain(struct iova_domain *iovad, unsigned long granule, ··· 39 72 iovad->start_pfn = start_pfn; 40 73 iovad->dma_32bit_pfn = pfn_32bit; 41 74 } 75 + EXPORT_SYMBOL_GPL(init_iova_domain); 42 76 43 77 static struct rb_node * 44 78 __get_cached_rbnode(struct iova_domain *iovad, unsigned long *limit_pfn) ··· 88 120 } 89 121 } 90 122 91 - /* Computes the padding size required, to make the 92 - * the start address naturally aligned on its size 123 + /* 124 + * Computes the padding size required, to make the start address 125 + * naturally aligned on the power-of-two order of its size 93 126 */ 94 - static int 95 - iova_get_pad_size(int size, unsigned int limit_pfn) 127 + static unsigned int 128 + iova_get_pad_size(unsigned int size, unsigned int limit_pfn) 96 129 { 97 - unsigned int pad_size = 0; 98 - unsigned int order = ilog2(size); 99 - 100 - if (order) 101 - pad_size = (limit_pfn + 1) % (1 << order); 102 - 103 - return pad_size; 130 + return (limit_pfn + 1 - size) & (__roundup_pow_of_two(size) - 1); 104 131 } 105 132 106 133 static int __alloc_and_insert_iova_range(struct iova_domain *iovad, ··· 205 242 
rb_insert_color(&iova->node, root); 206 243 } 207 244 245 + static struct kmem_cache *iova_cache; 246 + static unsigned int iova_cache_users; 247 + static DEFINE_MUTEX(iova_cache_mutex); 248 + 249 + struct iova *alloc_iova_mem(void) 250 + { 251 + return kmem_cache_alloc(iova_cache, GFP_ATOMIC); 252 + } 253 + EXPORT_SYMBOL(alloc_iova_mem); 254 + 255 + void free_iova_mem(struct iova *iova) 256 + { 257 + kmem_cache_free(iova_cache, iova); 258 + } 259 + EXPORT_SYMBOL(free_iova_mem); 260 + 261 + int iova_cache_get(void) 262 + { 263 + mutex_lock(&iova_cache_mutex); 264 + if (!iova_cache_users) { 265 + iova_cache = kmem_cache_create( 266 + "iommu_iova", sizeof(struct iova), 0, 267 + SLAB_HWCACHE_ALIGN, NULL); 268 + if (!iova_cache) { 269 + mutex_unlock(&iova_cache_mutex); 270 + printk(KERN_ERR "Couldn't create iova cache\n"); 271 + return -ENOMEM; 272 + } 273 + } 274 + 275 + iova_cache_users++; 276 + mutex_unlock(&iova_cache_mutex); 277 + 278 + return 0; 279 + } 280 + EXPORT_SYMBOL_GPL(iova_cache_get); 281 + 282 + void iova_cache_put(void) 283 + { 284 + mutex_lock(&iova_cache_mutex); 285 + if (WARN_ON(!iova_cache_users)) { 286 + mutex_unlock(&iova_cache_mutex); 287 + return; 288 + } 289 + iova_cache_users--; 290 + if (!iova_cache_users) 291 + kmem_cache_destroy(iova_cache); 292 + mutex_unlock(&iova_cache_mutex); 293 + } 294 + EXPORT_SYMBOL_GPL(iova_cache_put); 295 + 208 296 /** 209 297 * alloc_iova - allocates an iova 210 298 * @iovad: - iova domain in question ··· 279 265 if (!new_iova) 280 266 return NULL; 281 267 282 - /* If size aligned is set then round the size to 283 - * to next power of two. 
284 - */ 285 - if (size_aligned) 286 - size = __roundup_pow_of_two(size); 287 - 288 268 ret = __alloc_and_insert_iova_range(iovad, size, limit_pfn, 289 269 new_iova, size_aligned); 290 270 ··· 289 281 290 282 return new_iova; 291 283 } 284 + EXPORT_SYMBOL_GPL(alloc_iova); 292 285 293 286 /** 294 287 * find_iova - find's an iova for a given pfn ··· 330 321 spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags); 331 322 return NULL; 332 323 } 324 + EXPORT_SYMBOL_GPL(find_iova); 333 325 334 326 /** 335 327 * __free_iova - frees the given iova ··· 349 339 spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags); 350 340 free_iova_mem(iova); 351 341 } 342 + EXPORT_SYMBOL_GPL(__free_iova); 352 343 353 344 /** 354 345 * free_iova - finds and frees the iova for a given pfn ··· 367 356 __free_iova(iovad, iova); 368 357 369 358 } 359 + EXPORT_SYMBOL_GPL(free_iova); 370 360 371 361 /** 372 362 * put_iova_domain - destroys the iova doamin ··· 390 378 } 391 379 spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags); 392 380 } 381 + EXPORT_SYMBOL_GPL(put_iova_domain); 393 382 394 383 static int 395 384 __is_range_overlap(struct rb_node *node, ··· 480 467 spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags); 481 468 return iova; 482 469 } 470 + EXPORT_SYMBOL_GPL(reserve_iova); 483 471 484 472 /** 485 473 * copy_reserved_iova - copies the reserved between domains ··· 507 493 } 508 494 spin_unlock_irqrestore(&from->iova_rbtree_lock, flags); 509 495 } 496 + EXPORT_SYMBOL_GPL(copy_reserved_iova); 510 497 511 498 struct iova * 512 499 split_and_remove_iova(struct iova_domain *iovad, struct iova *iova, ··· 549 534 free_iova_mem(prev); 550 535 return NULL; 551 536 } 537 + 538 + MODULE_AUTHOR("Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>"); 539 + MODULE_LICENSE("GPL");
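The rewritten iova_get_pad_size() above collapses the old order/modulo logic into a single mask: the pad is whatever makes the candidate start pfn (limit_pfn + 1 - size - pad) a multiple of the size rounded up to a power of two. A small user-space sketch of the arithmetic (helper names are illustrative):

```c
#include <assert.h>

/* Round v up to the next power of two (v >= 1). */
static unsigned int roundup_pow_of_two_u32(unsigned int v)
{
    unsigned int p = 1;
    while (p < v)
        p <<= 1;
    return p;
}

/* Mirror of the new iova_get_pad_size(): the padding that makes
 * limit_pfn + 1 - size - pad a multiple of roundup_pow_of_two(size). */
static unsigned int pad_size(unsigned int size, unsigned int limit_pfn)
{
    return (limit_pfn + 1 - size) & (roundup_pow_of_two_u32(size) - 1);
}
```

Note this also explains why the old `size = __roundup_pow_of_two(size)` in alloc_iova() is dropped: alignment now comes from the pad computation, so the allocation itself no longer has to be inflated.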
+16 -8
drivers/irqchip/irq-atmel-aic5.c
··· 88 88 { 89 89 struct irq_domain *domain = d->domain; 90 90 struct irq_domain_chip_generic *dgc = domain->gc; 91 - struct irq_chip_generic *gc = dgc->gc[0]; 91 + struct irq_chip_generic *bgc = dgc->gc[0]; 92 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 92 93 93 - /* Disable interrupt on AIC5 */ 94 - irq_gc_lock(gc); 94 + /* 95 + * Disable interrupt on AIC5. We always take the lock of the 96 + * first irq chip as all chips share the same registers. 97 + */ 98 + irq_gc_lock(bgc); 95 99 irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR); 96 100 irq_reg_writel(gc, 1, AT91_AIC5_IDCR); 97 101 gc->mask_cache &= ~d->mask; 98 - irq_gc_unlock(gc); 102 + irq_gc_unlock(bgc); 99 103 } 100 104 101 105 static void aic5_unmask(struct irq_data *d) 102 106 { 103 107 struct irq_domain *domain = d->domain; 104 108 struct irq_domain_chip_generic *dgc = domain->gc; 105 - struct irq_chip_generic *gc = dgc->gc[0]; 109 + struct irq_chip_generic *bgc = dgc->gc[0]; 110 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 106 111 107 - /* Enable interrupt on AIC5 */ 108 - irq_gc_lock(gc); 112 + /* 113 + * Enable interrupt on AIC5. We always take the lock of the 114 + * first irq chip as all chips share the same registers. 115 + */ 116 + irq_gc_lock(bgc); 109 117 irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR); 110 118 irq_reg_writel(gc, 1, AT91_AIC5_IECR); 111 119 gc->mask_cache |= d->mask; 112 - irq_gc_unlock(gc); 120 + irq_gc_unlock(bgc); 113 121 } 114 122 115 123 static int aic5_retrigger(struct irq_data *d)
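The aic5 fix keeps programming registers through the per-irq generic chip but serializes on the first chip's lock, because all AIC5 generic chips alias one register block. The pattern in isolation, with a pthread mutex standing in for irq_gc_lock() (all names illustrative):

```c
#include <assert.h>
#include <pthread.h>

/* Several "chips" front the same register file, so every one of them
 * must take the same (first chip's) lock before touching it. */
struct chip { pthread_mutex_t *shared_lock; unsigned int mask_cache; };

static unsigned int shared_reg;  /* stand-in for the aliased SSR/IDCR block */

static void chip_mask(struct chip *c, unsigned int bit)
{
    pthread_mutex_lock(c->shared_lock);   /* lock of chip 0, not of c */
    shared_reg &= ~bit;                   /* shared hardware state */
    c->mask_cache &= ~bit;                /* per-chip software cache */
    pthread_mutex_unlock(c->shared_lock);
}
```

Locking each chip's own lock (the pre-patch behavior) would let two chips race on `shared_reg`; locking only chip 0's lock restores mutual exclusion while still updating the correct per-chip mask cache.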
+10 -2
drivers/irqchip/irq-mips-gic.c
··· 320 320 intrmask[i] = gic_read(intrmask_reg); 321 321 pending_reg += gic_reg_step; 322 322 intrmask_reg += gic_reg_step; 323 + 324 + if (!config_enabled(CONFIG_64BIT) || mips_cm_is64) 325 + continue; 326 + 327 + pending[i] |= (u64)gic_read(pending_reg) << 32; 328 + intrmask[i] |= (u64)gic_read(intrmask_reg) << 32; 329 + pending_reg += gic_reg_step; 330 + intrmask_reg += gic_reg_step; 323 331 } 324 332 325 333 bitmap_and(pending, pending, intrmask, gic_shared_intrs); ··· 434 426 spin_lock_irqsave(&gic_lock, flags); 435 427 436 428 /* Re-route this IRQ */ 437 - gic_map_to_vpe(irq, cpumask_first(&tmp)); 429 + gic_map_to_vpe(irq, mips_cm_vp_id(cpumask_first(&tmp))); 438 430 439 431 /* Update the pcpu_masks */ 440 432 for (i = 0; i < NR_CPUS; i++) ··· 607 599 GIC_SHARED_TO_HWIRQ(intr)); 608 600 int i; 609 601 610 - gic_map_to_vpe(intr, cpu); 602 + gic_map_to_vpe(intr, mips_cm_vp_id(cpu)); 611 603 for (i = 0; i < NR_CPUS; i++) 612 604 clear_bit(intr, pcpu_masks[i].pcpu_mask); 613 605 set_bit(intr, pcpu_masks[cpu].pcpu_mask);
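The mips-gic hunk assembles each 64-bit pending/intrmask word from two 32-bit register reads when the register file is only 32 bits wide. The composition step on its own:

```c
#include <assert.h>
#include <stdint.h>

/* Combine two 32-bit register halves into one 64-bit bitmap word,
 * low half first, matching the order of the paired gic_read()s above. */
static uint64_t read64_from_halves(uint32_t lo, uint32_t hi)
{
    return (uint64_t)lo | ((uint64_t)hi << 32);
}
```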
+2 -1
drivers/misc/mei/debugfs.c
··· 204 204 if (!dir) 205 205 return -ENOMEM; 206 206 207 + dev->dbgfs_dir = dir; 208 + 207 209 f = debugfs_create_file("meclients", S_IRUSR, dir, 208 210 dev, &mei_dbgfs_fops_meclients); 209 211 if (!f) { ··· 230 228 dev_err(dev->dev, "allow_fixed_address: registration failed\n"); 231 229 goto err; 232 230 } 233 - dev->dbgfs_dir = dir; 234 231 return 0; 235 232 err: 236 233 mei_dbgfs_deregister(dev);
+4 -2
drivers/mmc/core/core.c
··· 134 134 int err = cmd->error; 135 135 136 136 /* Flag re-tuning needed on CRC errors */ 137 - if (err == -EILSEQ || (mrq->sbc && mrq->sbc->error == -EILSEQ) || 137 + if ((cmd->opcode != MMC_SEND_TUNING_BLOCK && 138 + cmd->opcode != MMC_SEND_TUNING_BLOCK_HS200) && 139 + (err == -EILSEQ || (mrq->sbc && mrq->sbc->error == -EILSEQ) || 138 140 (mrq->data && mrq->data->error == -EILSEQ) || 139 - (mrq->stop && mrq->stop->error == -EILSEQ)) 141 + (mrq->stop && mrq->stop->error == -EILSEQ))) 140 142 mmc_retune_needed(host); 141 143 142 144 if (err && cmd->retries && mmc_host_is_spi(host)) {
+2 -2
drivers/mmc/core/host.c
··· 457 457 0, &cd_gpio_invert); 458 458 if (!ret) 459 459 dev_info(host->parent, "Got CD GPIO\n"); 460 - else if (ret != -ENOENT) 460 + else if (ret != -ENOENT && ret != -ENOSYS) 461 461 return ret; 462 462 463 463 /* ··· 481 481 ret = mmc_gpiod_request_ro(host, "wp", 0, false, 0, &ro_gpio_invert); 482 482 if (!ret) 483 483 dev_info(host->parent, "Got WP GPIO\n"); 484 - else if (ret != -ENOENT) 484 + else if (ret != -ENOENT && ret != -ENOSYS) 485 485 return ret; 486 486 487 487 if (of_property_read_bool(np, "disable-wp"))
+22 -44
drivers/mmc/host/pxamci.c
··· 28 28 #include <linux/clk.h> 29 29 #include <linux/err.h> 30 30 #include <linux/mmc/host.h> 31 + #include <linux/mmc/slot-gpio.h> 31 32 #include <linux/io.h> 32 33 #include <linux/regulator/consumer.h> 33 34 #include <linux/gpio.h> ··· 455 454 { 456 455 struct pxamci_host *host = mmc_priv(mmc); 457 456 458 - if (host->pdata && gpio_is_valid(host->pdata->gpio_card_ro)) { 459 - if (host->pdata->gpio_card_ro_invert) 460 - return !gpio_get_value(host->pdata->gpio_card_ro); 461 - else 462 - return gpio_get_value(host->pdata->gpio_card_ro); 463 - } 457 + if (host->pdata && gpio_is_valid(host->pdata->gpio_card_ro)) 458 + return mmc_gpio_get_ro(mmc); 464 459 if (host->pdata && host->pdata->get_ro) 465 460 return !!host->pdata->get_ro(mmc_dev(mmc)); 466 461 /* ··· 548 551 549 552 static const struct mmc_host_ops pxamci_ops = { 550 553 .request = pxamci_request, 554 + .get_cd = mmc_gpio_get_cd, 551 555 .get_ro = pxamci_get_ro, 552 556 .set_ios = pxamci_set_ios, 553 557 .enable_sdio_irq = pxamci_enable_sdio_irq, ··· 788 790 gpio_power = host->pdata->gpio_power; 789 791 } 790 792 if (gpio_is_valid(gpio_power)) { 791 - ret = gpio_request(gpio_power, "mmc card power"); 793 + ret = devm_gpio_request(&pdev->dev, gpio_power, 794 + "mmc card power"); 792 795 if (ret) { 793 - dev_err(&pdev->dev, "Failed requesting gpio_power %d\n", gpio_power); 796 + dev_err(&pdev->dev, "Failed requesting gpio_power %d\n", 797 + gpio_power); 794 798 goto out; 795 799 } 796 800 gpio_direction_output(gpio_power, 797 801 host->pdata->gpio_power_invert); 798 802 } 799 - if (gpio_is_valid(gpio_ro)) { 800 - ret = gpio_request(gpio_ro, "mmc card read only"); 801 - if (ret) { 802 - dev_err(&pdev->dev, "Failed requesting gpio_ro %d\n", gpio_ro); 803 - goto err_gpio_ro; 804 - } 805 - gpio_direction_input(gpio_ro); 803 + if (gpio_is_valid(gpio_ro)) 804 + ret = mmc_gpio_request_ro(mmc, gpio_ro); 805 + if (ret) { 806 + dev_err(&pdev->dev, "Failed requesting gpio_ro %d\n", gpio_ro); 807 + goto out; 808 + } 
else { 809 + mmc->caps |= host->pdata->gpio_card_ro_invert ? 810 + MMC_CAP2_RO_ACTIVE_HIGH : 0; 806 811 } 807 - if (gpio_is_valid(gpio_cd)) { 808 - ret = gpio_request(gpio_cd, "mmc card detect"); 809 - if (ret) { 810 - dev_err(&pdev->dev, "Failed requesting gpio_cd %d\n", gpio_cd); 811 - goto err_gpio_cd; 812 - } 813 - gpio_direction_input(gpio_cd); 814 812 815 - ret = request_irq(gpio_to_irq(gpio_cd), pxamci_detect_irq, 816 - IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING, 817 - "mmc card detect", mmc); 818 - if (ret) { 819 - dev_err(&pdev->dev, "failed to request card detect IRQ\n"); 820 - goto err_request_irq; 821 - } 813 + if (gpio_is_valid(gpio_cd)) 814 + ret = mmc_gpio_request_cd(mmc, gpio_cd, 0); 815 + if (ret) { 816 + dev_err(&pdev->dev, "Failed requesting gpio_cd %d\n", gpio_cd); 817 + goto out; 822 818 } 823 819 824 820 if (host->pdata && host->pdata->init) ··· 827 835 828 836 return 0; 829 837 830 - err_request_irq: 831 - gpio_free(gpio_cd); 832 - err_gpio_cd: 833 - gpio_free(gpio_ro); 834 - err_gpio_ro: 835 - gpio_free(gpio_power); 836 - out: 838 + out: 837 839 if (host) { 838 840 if (host->dma_chan_rx) 839 841 dma_release_channel(host->dma_chan_rx); ··· 859 873 gpio_ro = host->pdata->gpio_card_ro; 860 874 gpio_power = host->pdata->gpio_power; 861 875 } 862 - if (gpio_is_valid(gpio_cd)) { 863 - free_irq(gpio_to_irq(gpio_cd), mmc); 864 - gpio_free(gpio_cd); 865 - } 866 - if (gpio_is_valid(gpio_ro)) 867 - gpio_free(gpio_ro); 868 - if (gpio_is_valid(gpio_power)) 869 - gpio_free(gpio_power); 870 876 if (host->vcc) 871 877 regulator_put(host->vcc); 872 878
+39 -14
drivers/mmc/host/sunxi-mmc.c
··· 210 210 #define SDXC_IDMAC_DES0_CES BIT(30) /* card error summary */ 211 211 #define SDXC_IDMAC_DES0_OWN BIT(31) /* 1-idma owns it, 0-host owns it */ 212 212 213 + #define SDXC_CLK_400K 0 214 + #define SDXC_CLK_25M 1 215 + #define SDXC_CLK_50M 2 216 + #define SDXC_CLK_50M_DDR 3 217 + 218 + struct sunxi_mmc_clk_delay { 219 + u32 output; 220 + u32 sample; 221 + }; 222 + 213 223 struct sunxi_idma_des { 214 224 u32 config; 215 225 u32 buf_size; ··· 239 229 struct clk *clk_mmc; 240 230 struct clk *clk_sample; 241 231 struct clk *clk_output; 232 + const struct sunxi_mmc_clk_delay *clk_delays; 242 233 243 234 /* irq */ 244 235 spinlock_t lock; ··· 665 654 666 655 /* determine delays */ 667 656 if (rate <= 400000) { 668 - oclk_dly = 180; 669 - sclk_dly = 42; 657 + oclk_dly = host->clk_delays[SDXC_CLK_400K].output; 658 + sclk_dly = host->clk_delays[SDXC_CLK_400K].sample; 670 659 } else if (rate <= 25000000) { 671 - oclk_dly = 180; 672 - sclk_dly = 75; 660 + oclk_dly = host->clk_delays[SDXC_CLK_25M].output; 661 + sclk_dly = host->clk_delays[SDXC_CLK_25M].sample; 673 662 } else if (rate <= 50000000) { 674 663 if (ios->timing == MMC_TIMING_UHS_DDR50) { 675 - oclk_dly = 60; 676 - sclk_dly = 120; 664 + oclk_dly = host->clk_delays[SDXC_CLK_50M_DDR].output; 665 + sclk_dly = host->clk_delays[SDXC_CLK_50M_DDR].sample; 677 666 } else { 678 - oclk_dly = 90; 679 - sclk_dly = 150; 667 + oclk_dly = host->clk_delays[SDXC_CLK_50M].output; 668 + sclk_dly = host->clk_delays[SDXC_CLK_50M].sample; 680 669 } 681 - } else if (rate <= 100000000) { 682 - oclk_dly = 6; 683 - sclk_dly = 24; 684 - } else if (rate <= 200000000) { 685 - oclk_dly = 3; 686 - sclk_dly = 12; 687 670 } else { 688 671 return -EINVAL; 689 672 } ··· 876 871 static const struct of_device_id sunxi_mmc_of_match[] = { 877 872 { .compatible = "allwinner,sun4i-a10-mmc", }, 878 873 { .compatible = "allwinner,sun5i-a13-mmc", }, 874 + { .compatible = "allwinner,sun9i-a80-mmc", }, 879 875 { /* sentinel */ } 880 876 }; 881 877 
MODULE_DEVICE_TABLE(of, sunxi_mmc_of_match); ··· 890 884 .hw_reset = sunxi_mmc_hw_reset, 891 885 }; 892 886 887 + static const struct sunxi_mmc_clk_delay sunxi_mmc_clk_delays[] = { 888 + [SDXC_CLK_400K] = { .output = 180, .sample = 180 }, 889 + [SDXC_CLK_25M] = { .output = 180, .sample = 75 }, 890 + [SDXC_CLK_50M] = { .output = 90, .sample = 120 }, 891 + [SDXC_CLK_50M_DDR] = { .output = 60, .sample = 120 }, 892 + }; 893 + 894 + static const struct sunxi_mmc_clk_delay sun9i_mmc_clk_delays[] = { 895 + [SDXC_CLK_400K] = { .output = 180, .sample = 180 }, 896 + [SDXC_CLK_25M] = { .output = 180, .sample = 75 }, 897 + [SDXC_CLK_50M] = { .output = 150, .sample = 120 }, 898 + [SDXC_CLK_50M_DDR] = { .output = 90, .sample = 120 }, 899 + }; 900 + 893 901 static int sunxi_mmc_resource_request(struct sunxi_mmc_host *host, 894 902 struct platform_device *pdev) 895 903 { ··· 914 894 host->idma_des_size_bits = 13; 915 895 else 916 896 host->idma_des_size_bits = 16; 897 + 898 + if (of_device_is_compatible(np, "allwinner,sun9i-a80-mmc")) 899 + host->clk_delays = sun9i_mmc_clk_delays; 900 + else 901 + host->clk_delays = sunxi_mmc_clk_delays; 917 902 918 903 ret = mmc_regulator_get_supply(host->mmc); 919 904 if (ret) {
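The sunxi-mmc change replaces an if/else ladder of magic delay numbers with per-SoC tables indexed by a clock class, so supporting a new SoC (here sun9i/A80) only means adding a table. A sketch of the same lookup shape, with values copied from the generic table above and the -EINVAL path for rates beyond 50 MHz omitted:

```c
#include <assert.h>

enum clk_class { CLK_400K, CLK_25M, CLK_50M, CLK_50M_DDR };

struct clk_delay { unsigned int output, sample; };

/* Same values as sunxi_mmc_clk_delays in the patch. */
static const struct clk_delay generic_delays[] = {
    [CLK_400K]    = { .output = 180, .sample = 180 },
    [CLK_25M]     = { .output = 180, .sample =  75 },
    [CLK_50M]     = { .output =  90, .sample = 120 },
    [CLK_50M_DDR] = { .output =  60, .sample = 120 },
};

/* Map a requested rate (and DDR flag) to a class, then index the
 * SoC's table. Rates above 50 MHz would be rejected in the driver. */
static enum clk_class rate_to_class(unsigned long rate, int ddr)
{
    if (rate <= 400000)
        return CLK_400K;
    if (rate <= 25000000)
        return CLK_25M;
    return ddr ? CLK_50M_DDR : CLK_50M;
}
```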
+5
drivers/mtd/ubi/io.c
··· 926 926 goto bad; 927 927 } 928 928 929 + if (data_size > ubi->leb_size) { 930 + ubi_err(ubi, "bad data_size"); 931 + goto bad; 932 + } 933 + 929 934 if (vol_type == UBI_VID_STATIC) { 930 935 /* 931 936 * Although from high-level point of view static volumes may
+1
drivers/mtd/ubi/vtbl.c
··· 649 649 if (ubi->corr_peb_count) 650 650 ubi_err(ubi, "%d PEBs are corrupted and not used", 651 651 ubi->corr_peb_count); 652 + return -ENOSPC; 652 653 } 653 654 ubi->rsvd_pebs += reserved_pebs; 654 655 ubi->avail_pebs -= reserved_pebs;
+1
drivers/mtd/ubi/wl.c
··· 1601 1601 if (ubi->corr_peb_count) 1602 1602 ubi_err(ubi, "%d PEBs are corrupted and not used", 1603 1603 ubi->corr_peb_count); 1604 + err = -ENOSPC; 1604 1605 goto out_free; 1605 1606 } 1606 1607 ubi->avail_pebs -= reserved_pebs;
+2
drivers/net/dsa/mv88e6xxx.c
··· 2126 2126 reg |= PORT_CONTROL_FRAME_ETHER_TYPE_DSA; 2127 2127 else 2128 2128 reg |= PORT_CONTROL_FRAME_MODE_DSA; 2129 + reg |= PORT_CONTROL_FORWARD_UNKNOWN | 2130 + PORT_CONTROL_FORWARD_UNKNOWN_MC; 2129 2131 } 2130 2132 2131 2133 if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
+7 -6
drivers/net/ethernet/brocade/bna/bfa_ioc.c
··· 1543 1543 } 1544 1544 1545 1545 /* Flush FLI data fifo. */ 1546 - static u32 1546 + static int 1547 1547 bfa_flash_fifo_flush(void __iomem *pci_bar) 1548 1548 { 1549 1549 u32 i; ··· 1573 1573 } 1574 1574 1575 1575 /* Read flash status. */ 1576 - static u32 1576 + static int 1577 1577 bfa_flash_status_read(void __iomem *pci_bar) 1578 1578 { 1579 1579 union bfa_flash_dev_status_reg dev_status; 1580 - u32 status; 1580 + int status; 1581 1581 u32 ret_status; 1582 1582 int i; 1583 1583 ··· 1611 1611 } 1612 1612 1613 1613 /* Start flash read operation. */ 1614 - static u32 1614 + static int 1615 1615 bfa_flash_read_start(void __iomem *pci_bar, u32 offset, u32 len, 1616 1616 char *buf) 1617 1617 { 1618 - u32 status; 1618 + int status; 1619 1619 1620 1620 /* len must be mutiple of 4 and not exceeding fifo size */ 1621 1621 if (len == 0 || len > BFA_FLASH_FIFO_SIZE || (len & 0x03) != 0) ··· 1703 1703 bfa_flash_raw_read(void __iomem *pci_bar, u32 offset, char *buf, 1704 1704 u32 len) 1705 1705 { 1706 - u32 n, status; 1706 + u32 n; 1707 + int status; 1707 1708 u32 off, l, s, residue, fifo_sz; 1708 1709 1709 1710 residue = len;
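The bfa_ioc.c return-type changes matter because these helpers return negative error codes: with a u32 return type, a `status < 0` check compares an unsigned value and is always false, silently swallowing errors. A minimal reproduction of that bug class (names illustrative):

```c
#include <assert.h>

#define ERR_TIMEOUT (-1)

/* Same body, different return type. The unsigned one converts -1 to
 * UINT_MAX on return. */
static unsigned int flush_unsigned(void) { return ERR_TIMEOUT; }
static int          flush_signed(void)   { return ERR_TIMEOUT; }

/* With a u32 return the error test compares 0xffffffff < 0: dead code. */
static int saw_error_unsigned(void) { return flush_unsigned() < 0; }
static int saw_error_signed(void)   { return flush_signed() < 0; }
```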
+1 -1
drivers/net/ethernet/hisilicon/hip04_eth.c
··· 816 816 struct net_device *ndev; 817 817 struct hip04_priv *priv; 818 818 struct resource *res; 819 - unsigned int irq; 819 + int irq; 820 820 int ret; 821 821 822 822 ndev = alloc_etherdev(sizeof(struct hip04_priv));
+3 -3
drivers/net/ethernet/ibm/emac/core.h
··· 460 460 u32 index; 461 461 }; 462 462 463 - #define EMAC_ETHTOOL_REGS_VER 0 464 - #define EMAC4_ETHTOOL_REGS_VER 1 465 - #define EMAC4SYNC_ETHTOOL_REGS_VER 2 463 + #define EMAC_ETHTOOL_REGS_VER 3 464 + #define EMAC4_ETHTOOL_REGS_VER 4 465 + #define EMAC4SYNC_ETHTOOL_REGS_VER 5 466 466 467 467 #endif /* __IBM_NEWEMAC_CORE_H */
+9
drivers/net/ethernet/intel/i40e/i40e_adminq.c
··· 953 953 /* take the lock before we start messing with the ring */ 954 954 mutex_lock(&hw->aq.arq_mutex); 955 955 956 + if (hw->aq.arq.count == 0) { 957 + i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, 958 + "AQRX: Admin queue not initialized.\n"); 959 + ret_code = I40E_ERR_QUEUE_EMPTY; 960 + goto clean_arq_element_err; 961 + } 962 + 956 963 /* set next_to_use to head */ 957 964 ntu = (rd32(hw, hw->aq.arq.head) & I40E_PF_ARQH_ARQH_MASK); 958 965 if (ntu == ntc) { ··· 1021 1014 /* Set pending if needed, unlock and return */ 1022 1015 if (pending != NULL) 1023 1016 *pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc); 1017 + 1018 + clean_arq_element_err: 1024 1019 mutex_unlock(&hw->aq.arq_mutex); 1025 1020 1026 1021 if (i40e_is_nvm_update_op(&e->desc)) {
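The i40e adminq fix validates arq.count only after taking arq_mutex and bails out through a dedicated label, so the early return can never leak the lock. The shape of that guard, sketched with a pthread mutex (struct layout, names, and the error value are illustrative, not the i40e API):

```c
#include <assert.h>
#include <pthread.h>

#define ERR_QUEUE_EMPTY (-64)  /* illustrative error code */

struct queue { pthread_mutex_t lock; unsigned int count; };

/* Take the lock first, validate state, and leave through one unlock
 * path whether we bail early or finish the work. */
static int clean_element(struct queue *q, int *processed)
{
    int ret = 0;

    pthread_mutex_lock(&q->lock);
    if (q->count == 0) {
        ret = ERR_QUEUE_EMPTY;   /* queue torn down or never initialized */
        goto unlock;
    }
    *processed = 1;              /* real element processing elided */
unlock:
    pthread_mutex_unlock(&q->lock);
    return ret;
}
```

The same guard appears twice in this merge (i40e and i40evf) because both drivers share the adminq code shape.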
+2 -1
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 2688 2688 rx_ctx.lrxqthresh = 2; 2689 2689 rx_ctx.crcstrip = 1; 2690 2690 rx_ctx.l2tsel = 1; 2691 - rx_ctx.showiv = 1; 2691 + /* this controls whether VLAN is stripped from inner headers */ 2692 + rx_ctx.showiv = 0; 2692 2693 #ifdef I40E_FCOE 2693 2694 rx_ctx.fc_ena = (vsi->type == I40E_VSI_FCOE); 2694 2695 #endif
+9
drivers/net/ethernet/intel/i40evf/i40e_adminq.c
··· 894 894 /* take the lock before we start messing with the ring */ 895 895 mutex_lock(&hw->aq.arq_mutex); 896 896 897 + if (hw->aq.arq.count == 0) { 898 + i40e_debug(hw, I40E_DEBUG_AQ_MESSAGE, 899 + "AQRX: Admin queue not initialized.\n"); 900 + ret_code = I40E_ERR_QUEUE_EMPTY; 901 + goto clean_arq_element_err; 902 + } 903 + 897 904 /* set next_to_use to head */ 898 905 ntu = (rd32(hw, hw->aq.arq.head) & I40E_VF_ARQH1_ARQH_MASK); 899 906 if (ntu == ntc) { ··· 962 955 /* Set pending if needed, unlock and return */ 963 956 if (pending != NULL) 964 957 *pending = (ntc > ntu ? hw->aq.arq.count : 0) + (ntu - ntc); 958 + 959 + clean_arq_element_err: 965 960 mutex_unlock(&hw->aq.arq_mutex); 966 961 967 962 return ret_code;
+4 -3
drivers/net/ethernet/mellanox/mlx4/mcg.c
··· 1184 1184 if (prot == MLX4_PROT_ETH) { 1185 1185 /* manage the steering entry for promisc mode */ 1186 1186 if (new_entry) 1187 - new_steering_entry(dev, port, steer, index, qp->qpn); 1187 + err = new_steering_entry(dev, port, steer, 1188 + index, qp->qpn); 1188 1189 else 1189 - existing_steering_entry(dev, port, steer, 1190 - index, qp->qpn); 1190 + err = existing_steering_entry(dev, port, steer, 1191 + index, qp->qpn); 1191 1192 } 1192 1193 if (err && link && index != -1) { 1193 1194 if (index < dev->caps.num_mgms)
-22
drivers/net/ethernet/mellanox/mlx5/core/fw.c
··· 200 200 201 201 return err; 202 202 } 203 - 204 - int mlx5_core_query_special_context(struct mlx5_core_dev *dev, u32 *rsvd_lkey) 205 - { 206 - struct mlx5_cmd_query_special_contexts_mbox_in in; 207 - struct mlx5_cmd_query_special_contexts_mbox_out out; 208 - int err; 209 - 210 - memset(&in, 0, sizeof(in)); 211 - memset(&out, 0, sizeof(out)); 212 - in.hdr.opcode = cpu_to_be16(MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS); 213 - err = mlx5_cmd_exec(dev, &in, sizeof(in), &out, sizeof(out)); 214 - if (err) 215 - return err; 216 - 217 - if (out.hdr.status) 218 - err = mlx5_cmd_status_to_err(&out.hdr); 219 - 220 - *rsvd_lkey = be32_to_cpu(out.resd_lkey); 221 - 222 - return err; 223 - } 224 - EXPORT_SYMBOL(mlx5_core_query_special_context);
+1 -1
drivers/net/ethernet/realtek/r8169.c
··· 6081 6081 { 6082 6082 void __iomem *ioaddr = tp->mmio_addr; 6083 6083 struct pci_dev *pdev = tp->pci_dev; 6084 - u16 rg_saw_cnt; 6084 + int rg_saw_cnt; 6085 6085 u32 data; 6086 6086 static const struct ephy_info e_info_8168h_1[] = { 6087 6087 { 0x1e, 0x0800, 0x0001 },
+4 -3
drivers/pci/pci-driver.c
··· 299 299 * Unbound PCI devices are always put in D0, regardless of 300 300 * runtime PM status. During probe, the device is set to 301 301 * active and the usage count is incremented. If the driver 302 - * supports runtime PM, it should call pm_runtime_put_noidle() 303 - * in its probe routine and pm_runtime_get_noresume() in its 304 - * remove routine. 302 + * supports runtime PM, it should call pm_runtime_put_noidle(), 303 + * or any other runtime PM helper function decrementing the usage 304 + * count, in its probe routine and pm_runtime_get_noresume() in 305 + * its remove routine. 305 306 */ 306 307 pm_runtime_get_sync(dev); 307 308 pci_dev->driver = pci_drv;
+20
drivers/staging/android/TODO
··· 5 5 - add proper arch dependencies as needed 6 6 - audit userspace interfaces to make sure they are sane 7 7 8 + 9 + ion/ 10 + - Remove ION_IOC_SYNC: Flushing for devices should be purely a kernel internal 11 + interface on top of dma-buf. flush_for_device needs to be added to dma-buf 12 + first. 13 + - Remove ION_IOC_CUSTOM: Atm used for cache flushing for cpu access in some 14 + vendor trees. Should be replaced with an ioctl on the dma-buf to expose the 15 + begin/end_cpu_access hooks to userspace. 16 + - Clarify the tricks ion plays with explicitly managing coherency behind the 17 + dma api's back (this is absolutely needed for high-perf gpu drivers): Add an 18 + explicit coherency management mode to flush_for_device to be used by drivers 19 + which want to manage caches themselves and which indicates whether cpu caches 20 + need flushing. 21 + - With those removed there's probably no use for ION_IOC_IMPORT anymore either 22 + since ion would just be the central allocator for shared buffers. 23 + - Add dt-binding to expose cma regions as ion heaps, with the rule that any 24 + such cma regions must already be used by some device for dma. I.e. ion only 25 + exposes existing cma regions and doesn't reserve unecessarily memory when 26 + booting a system which doesn't use ion. 27 + 8 28 Please send patches to Greg Kroah-Hartman <greg@kroah.com> and Cc: 9 29 Arve Hjønnevåg <arve@android.com> and Riley Andrews <riandrews@android.com>
+3 -3
drivers/staging/android/ion/ion.c
··· 1179 1179 mutex_unlock(&client->lock); 1180 1180 goto end; 1181 1181 } 1182 - mutex_unlock(&client->lock); 1183 1182 1184 1183 handle = ion_handle_create(client, buffer); 1185 - if (IS_ERR(handle)) 1184 + if (IS_ERR(handle)) { 1185 + mutex_unlock(&client->lock); 1186 1186 goto end; 1187 + } 1187 1188 1188 - mutex_lock(&client->lock); 1189 1189 ret = ion_handle_add(client, handle); 1190 1190 mutex_unlock(&client->lock); 1191 1191 if (ret) {
+1 -1
drivers/staging/fbtft/fb_uc1611.c
··· 76 76 77 77 /* Set CS active high */ 78 78 par->spi->mode |= SPI_CS_HIGH; 79 - ret = par->spi->master->setup(par->spi); 79 + ret = spi_setup(par->spi); 80 80 if (ret) { 81 81 dev_err(par->info->device, "Could not set SPI_CS_HIGH\n"); 82 82 return ret;
+2 -2
drivers/staging/fbtft/fb_watterott.c
··· 169 169 /* enable SPI interface by having CS and MOSI low during reset */ 170 170 save_mode = par->spi->mode; 171 171 par->spi->mode |= SPI_CS_HIGH; 172 - ret = par->spi->master->setup(par->spi); /* set CS inactive low */ 172 + ret = spi_setup(par->spi); /* set CS inactive low */ 173 173 if (ret) { 174 174 dev_err(par->info->device, "Could not set SPI_CS_HIGH\n"); 175 175 return ret; ··· 180 180 par->fbtftops.reset(par); 181 181 mdelay(1000); 182 182 par->spi->mode = save_mode; 183 - ret = par->spi->master->setup(par->spi); 183 + ret = spi_setup(par->spi); 184 184 if (ret) { 185 185 dev_err(par->info->device, "Could not restore SPI mode\n"); 186 186 return ret;
+3 -7
drivers/staging/fbtft/fbtft-core.c
··· 1436 1436 1437 1437 /* 9-bit SPI setup */ 1438 1438 if (par->spi && display->buswidth == 9) { 1439 - par->spi->bits_per_word = 9; 1440 - ret = par->spi->master->setup(par->spi); 1441 - if (ret) { 1439 + if (par->spi->master->bits_per_word_mask & SPI_BPW_MASK(9)) { 1440 + par->spi->bits_per_word = 9; 1441 + } else { 1442 1442 dev_warn(&par->spi->dev, 1443 1443 "9-bit SPI not available, emulating using 8-bit.\n"); 1444 - par->spi->bits_per_word = 8; 1445 - ret = par->spi->master->setup(par->spi); 1446 - if (ret) 1447 - goto out_release; 1448 1444 /* allocate buffer with room for dc bits */ 1449 1445 par->extra = devm_kzalloc(par->info->device, 1450 1446 par->txbuf.len + (par->txbuf.len / 8) + 8,
+4 -7
drivers/staging/fbtft/flexfb.c
··· 463 463 } 464 464 par->fbtftops.write_register = fbtft_write_reg8_bus9; 465 465 par->fbtftops.write_vmem = fbtft_write_vmem16_bus9; 466 - sdev->bits_per_word = 9; 467 - ret = sdev->master->setup(sdev); 468 - if (ret) { 466 + if (par->spi->master->bits_per_word_mask 467 + & SPI_BPW_MASK(9)) { 468 + par->spi->bits_per_word = 9; 469 + } else { 469 470 dev_warn(dev, 470 471 "9-bit SPI not available, emulating using 8-bit.\n"); 471 - sdev->bits_per_word = 8; 472 - ret = sdev->master->setup(sdev); 473 - if (ret) 474 - goto out_release; 475 472 /* allocate buffer with room for dc bits */ 476 473 par->extra = devm_kzalloc(par->info->device, 477 474 par->txbuf.len + (par->txbuf.len / 8) + 8,
+6 -10
drivers/staging/lustre/README.txt
··· 14 14 Lustre has independent Metadata and Data servers that clients can access 15 15 in parallel to maximize performance. 16 16 17 - In order to use Lustre client you will need to download lustre client 18 - tools from 19 - https://downloads.hpdd.intel.com/public/lustre/latest-feature-release/ 20 - the package name is lustre-client. 17 + In order to use Lustre client you will need to download the "lustre-client" 18 + package that contains the userspace tools from http://lustre.org/download/ 21 19 22 20 You will need to install and configure your Lustre servers separately. 23 21 ··· 74 76 75 77 More Information 76 78 ================ 77 - You can get more information at 78 - OpenSFS website: http://lustre.opensfs.org/about/ 79 - Intel HPDD wiki: https://wiki.hpdd.intel.com 79 + You can get more information at the Lustre website: http://wiki.lustre.org/ 80 80 81 - Out of tree Lustre client and server code is available at: 82 - http://git.whamcloud.com/fs/lustre-release.git 81 + Source for the userspace tools and out-of-tree client and server code 82 + is available at: http://git.hpdd.intel.com/fs/lustre-release.git 83 83 84 84 Latest binary packages: 85 - http://lustre.opensfs.org/download-lustre/ 85 + http://lustre.org/download/
+1
drivers/staging/most/Kconfig
··· 1 1 menuconfig MOST 2 2 tristate "MOST driver" 3 + depends on HAS_DMA 3 4 select MOSTCORE 4 5 default n 5 6 ---help---
+1
drivers/staging/most/hdm-dim2/Kconfig
··· 5 5 config HDM_DIM2 6 6 tristate "DIM2 HDM" 7 7 depends on AIM_NETWORK 8 + depends on HAS_IOMEM 8 9 9 10 ---help--- 10 11 Say Y here if you want to connect via MediaLB to network transceiver.
+1 -1
drivers/staging/most/hdm-usb/Kconfig
··· 4 4 5 5 config HDM_USB 6 6 tristate "USB HDM" 7 - depends on USB 7 + depends on USB && NET 8 8 select AIM_NETWORK 9 9 ---help--- 10 10 Say Y here if you want to connect via USB to network tranceiver.
+1
drivers/staging/most/mostcore/Kconfig
··· 4 4 5 5 config MOSTCORE 6 6 tristate "MOST Core" 7 + depends on HAS_DMA 7 8 8 9 ---help--- 9 10 Say Y here if you want to enable MOST support.
-1
drivers/staging/unisys/visorbus/Makefile
··· 10 10 visorbus-y += periodic_work.o 11 11 12 12 ccflags-y += -Idrivers/staging/unisys/include 13 - ccflags-y += -Idrivers/staging/unisys/visorutil
+9 -4
drivers/staging/unisys/visorbus/visorbus_main.c
··· 37 37 #define POLLJIFFIES_TESTWORK 100 38 38 #define POLLJIFFIES_NORMALCHANNEL 10 39 39 40 + static int busreg_rc = -ENODEV; /* stores the result from bus registration */ 41 + 40 42 static int visorbus_uevent(struct device *xdev, struct kobj_uevent_env *env); 41 43 static int visorbus_match(struct device *xdev, struct device_driver *xdrv); 42 44 static void fix_vbus_dev_info(struct visor_device *visordev); ··· 865 863 { 866 864 int rc = 0; 867 865 866 + if (busreg_rc < 0) 867 + return -ENODEV; /*can't register on a nonexistent bus*/ 868 + 868 869 drv->driver.name = drv->name; 869 870 drv->driver.bus = &visorbus_type; 870 871 drv->driver.probe = visordriver_probe_device; ··· 890 885 if (rc < 0) 891 886 return rc; 892 887 rc = register_driver_attributes(drv); 888 + if (rc < 0) 889 + driver_unregister(&drv->driver); 893 890 return rc; 894 891 } 895 892 EXPORT_SYMBOL_GPL(visorbus_register_visor_driver); ··· 1267 1260 static int 1268 1261 create_bus_type(void) 1269 1262 { 1270 - int rc = 0; 1271 - 1272 - rc = bus_register(&visorbus_type); 1273 - return rc; 1263 + busreg_rc = bus_register(&visorbus_type); 1264 + return busreg_rc; 1274 1265 } 1275 1266 1276 1267 /** Remove the one-and-only one instance of the visor bus type (visorbus_type).
+11 -7
drivers/staging/unisys/visornic/visornic_main.c
··· 1189 1189 spin_lock_irqsave(&devdata->priv_lock, flags); 1190 1190 atomic_dec(&devdata->num_rcvbuf_in_iovm); 1191 1191 1192 - /* update rcv stats - call it with priv_lock held */ 1193 - devdata->net_stats.rx_packets++; 1194 - devdata->net_stats.rx_bytes = skb->len; 1195 - 1196 1192 /* set length to how much was ACTUALLY received - 1197 1193 * NOTE: rcv_done_len includes actual length of data rcvd 1198 1194 * including ethhdr 1199 1195 */ 1200 1196 skb->len = cmdrsp->net.rcv.rcv_done_len; 1197 + 1198 + /* update rcv stats - call it with priv_lock held */ 1199 + devdata->net_stats.rx_packets++; 1200 + devdata->net_stats.rx_bytes += skb->len; 1201 1201 1202 1202 /* test enabled while holding lock */ 1203 1203 if (!(devdata->enabled && devdata->enab_dis_acked)) { ··· 1924 1924 "%s debugfs_create_dir %s failed\n", 1925 1925 __func__, netdev->name); 1926 1926 err = -ENOMEM; 1927 - goto cleanup_xmit_cmdrsp; 1927 + goto cleanup_register_netdev; 1928 1928 } 1929 1929 1930 1930 dev_info(&dev->device, "%s success netdev=%s\n", 1931 1931 __func__, netdev->name); 1932 1932 return 0; 1933 + 1934 + cleanup_register_netdev: 1935 + unregister_netdev(netdev); 1933 1936 1934 1937 cleanup_napi_add: 1935 1938 del_timer_sync(&devdata->irq_poll_timer); ··· 2131 2128 if (!dev_num_pool) 2132 2129 goto cleanup_workqueue; 2133 2130 2134 - visorbus_register_visor_driver(&visornic_driver); 2135 - return 0; 2131 + err = visorbus_register_visor_driver(&visornic_driver); 2132 + if (!err) 2133 + return 0; 2136 2134 2137 2135 cleanup_workqueue: 2138 2136 if (visornic_timeout_reset_workqueue) {
+3 -2
drivers/target/iscsi/iscsi_target_parameters.c
··· 407 407 TYPERANGE_UTF8, USE_INITIAL_ONLY); 408 408 if (!param) 409 409 goto out; 410 + 410 411 /* 411 412 * Extra parameters for ISER from RFC-5046 412 413 */ ··· 497 496 } else if (!strcmp(param->name, SESSIONTYPE)) { 498 497 SET_PSTATE_NEGOTIATE(param); 499 498 } else if (!strcmp(param->name, IFMARKER)) { 500 - SET_PSTATE_NEGOTIATE(param); 499 + SET_PSTATE_REJECT(param); 501 500 } else if (!strcmp(param->name, OFMARKER)) { 502 - SET_PSTATE_NEGOTIATE(param); 501 + SET_PSTATE_REJECT(param); 503 502 } else if (!strcmp(param->name, IFMARKINT)) { 504 503 SET_PSTATE_REJECT(param); 505 504 } else if (!strcmp(param->name, OFMARKINT)) {
+26 -19
drivers/target/target_core_device.c
··· 62 62 struct se_session *se_sess = se_cmd->se_sess; 63 63 struct se_node_acl *nacl = se_sess->se_node_acl; 64 64 struct se_dev_entry *deve; 65 + sense_reason_t ret = TCM_NO_SENSE; 65 66 66 67 rcu_read_lock(); 67 68 deve = target_nacl_find_deve(nacl, unpacked_lun); 68 69 if (deve) { 69 70 atomic_long_inc(&deve->total_cmds); 70 - 71 - if ((se_cmd->data_direction == DMA_TO_DEVICE) && 72 - (deve->lun_flags & TRANSPORT_LUNFLAGS_READ_ONLY)) { 73 - pr_err("TARGET_CORE[%s]: Detected WRITE_PROTECTED LUN" 74 - " Access for 0x%08llx\n", 75 - se_cmd->se_tfo->get_fabric_name(), 76 - unpacked_lun); 77 - rcu_read_unlock(); 78 - return TCM_WRITE_PROTECTED; 79 - } 80 71 81 72 if (se_cmd->data_direction == DMA_TO_DEVICE) 82 73 atomic_long_add(se_cmd->data_length, ··· 84 93 85 94 percpu_ref_get(&se_lun->lun_ref); 86 95 se_cmd->lun_ref_active = true; 96 + 97 + if ((se_cmd->data_direction == DMA_TO_DEVICE) && 98 + (deve->lun_flags & TRANSPORT_LUNFLAGS_READ_ONLY)) { 99 + pr_err("TARGET_CORE[%s]: Detected WRITE_PROTECTED LUN" 100 + " Access for 0x%08llx\n", 101 + se_cmd->se_tfo->get_fabric_name(), 102 + unpacked_lun); 103 + rcu_read_unlock(); 104 + ret = TCM_WRITE_PROTECTED; 105 + goto ref_dev; 106 + } 87 107 } 88 108 rcu_read_unlock(); ··· 111 109 unpacked_lun); 112 110 return TCM_NON_EXISTENT_LUN; 113 111 } 114 - /* 115 - * Force WRITE PROTECT for virtual LUN 0 116 - */ 117 - if ((se_cmd->data_direction != DMA_FROM_DEVICE) && 118 - (se_cmd->data_direction != DMA_NONE)) 119 - return TCM_WRITE_PROTECTED; 120 112 121 113 se_lun = se_sess->se_tpg->tpg_virt_lun0; 122 114 se_cmd->se_lun = se_sess->se_tpg->tpg_virt_lun0; ··· 119 123 120 124 percpu_ref_get(&se_lun->lun_ref); 121 125 se_cmd->lun_ref_active = true; 126 + 127 + /* 128 + * Force WRITE PROTECT for virtual LUN 0 129 + */ 130 + if ((se_cmd->data_direction != DMA_FROM_DEVICE) && 131 + (se_cmd->data_direction != DMA_NONE)) { 132 + ret = TCM_WRITE_PROTECTED; 133 + goto ref_dev; 134 + } 122 135 } 123 136 /* 124 137 * RCU reference protected by percpu se_lun->lun_ref taken above that ··· 135 130 * pointer can be kfree_rcu() by the final se_lun->lun_group put via 136 131 * target_core_fabric_configfs.c:target_fabric_port_release 137 132 */ 133 + ref_dev: 138 134 se_cmd->se_dev = rcu_dereference_raw(se_lun->lun_se_dev); 139 135 atomic_long_inc(&se_cmd->se_dev->num_cmds); 140 136 ··· 146 140 atomic_long_add(se_cmd->data_length, 147 141 &se_cmd->se_dev->read_bytes); 148 142 149 - return 0; 143 + return ret; 150 144 } 151 145 EXPORT_SYMBOL(transport_lookup_cmd_lun); 152 146 ··· 433 427 434 428 hlist_del_rcu(&orig->link); 435 429 clear_bit(DEF_PR_REG_ACTIVE, &orig->deve_flags); 436 - rcu_assign_pointer(orig->se_lun, NULL); 437 - rcu_assign_pointer(orig->se_lun_acl, NULL); 438 430 orig->lun_flags = 0; 439 431 orig->creation_time = 0; 440 432 orig->attach_count--; ··· 442 438 */ 443 439 kref_put(&orig->pr_kref, target_pr_kref_release); 444 440 wait_for_completion(&orig->pr_comp); 441 442 + rcu_assign_pointer(orig->se_lun, NULL); 443 + rcu_assign_pointer(orig->se_lun_acl, NULL); 445 444 446 445 kfree_rcu(orig, rcu_head); 447 446
+1 -1
drivers/target/target_core_hba.c
··· 187 187 188 188 bool target_sense_desc_format(struct se_device *dev) 189 189 { 190 - return dev->transport->get_blocks(dev) > U32_MAX; 190 + return (dev) ? dev->transport->get_blocks(dev) > U32_MAX : false; 191 191 }
+2
drivers/target/target_core_iblock.c
··· 105 105 mode = FMODE_READ|FMODE_EXCL; 106 106 if (!ib_dev->ibd_readonly) 107 107 mode |= FMODE_WRITE; 108 + else 109 + dev->dev_flags |= DF_READ_ONLY; 108 110 109 111 bd = blkdev_get_by_path(ib_dev->ibd_udev_path, mode, ib_dev); 110 112 if (IS_ERR(bd)) {
+67 -24
drivers/target/target_core_pr.c
··· 618 618 struct se_device *dev, 619 619 struct se_node_acl *nacl, 620 620 struct se_lun *lun, 621 - struct se_dev_entry *deve, 621 + struct se_dev_entry *dest_deve, 622 622 u64 mapped_lun, 623 623 unsigned char *isid, 624 624 u64 sa_res_key, ··· 640 640 INIT_LIST_HEAD(&pr_reg->pr_reg_atp_mem_list); 641 641 atomic_set(&pr_reg->pr_res_holders, 0); 642 642 pr_reg->pr_reg_nacl = nacl; 643 - pr_reg->pr_reg_deve = deve; 643 + /* 644 + * For destination registrations for ALL_TG_PT=1 and SPEC_I_PT=1, 645 + * the se_dev_entry->pr_ref will have been already obtained by 646 + * core_get_se_deve_from_rtpi() or __core_scsi3_alloc_registration(). 647 + * 648 + * Otherwise, locate se_dev_entry now and obtain a reference until 649 + * registration completes in __core_scsi3_add_registration(). 650 + */ 651 + if (dest_deve) { 652 + pr_reg->pr_reg_deve = dest_deve; 653 + } else { 654 + rcu_read_lock(); 655 + pr_reg->pr_reg_deve = target_nacl_find_deve(nacl, mapped_lun); 656 + if (!pr_reg->pr_reg_deve) { 657 + rcu_read_unlock(); 658 + pr_err("Unable to locate PR deve %s mapped_lun: %llu\n", 659 + nacl->initiatorname, mapped_lun); 660 + kmem_cache_free(t10_pr_reg_cache, pr_reg); 661 + return NULL; 662 + } 663 + kref_get(&pr_reg->pr_reg_deve->pr_kref); 664 + rcu_read_unlock(); 665 + } 644 666 pr_reg->pr_res_mapped_lun = mapped_lun; 645 667 pr_reg->pr_aptpl_target_lun = lun->unpacked_lun; 646 668 pr_reg->tg_pt_sep_rtpi = lun->lun_rtpi; ··· 958 936 !(strcmp(pr_reg->pr_tport, t_port)) && 959 937 (pr_reg->pr_reg_tpgt == tpgt) && 960 938 (pr_reg->pr_aptpl_target_lun == target_lun)) { 939 + /* 940 + * Obtain the ->pr_reg_deve pointer + reference, that 941 + * is released by __core_scsi3_add_registration() below. 
942 + */ 943 + rcu_read_lock(); 944 + pr_reg->pr_reg_deve = target_nacl_find_deve(nacl, mapped_lun); 945 + if (!pr_reg->pr_reg_deve) { 946 + pr_err("Unable to locate PR APTPL %s mapped_lun:" 947 + " %llu\n", nacl->initiatorname, mapped_lun); 948 + rcu_read_unlock(); 949 + continue; 950 + } 951 + kref_get(&pr_reg->pr_reg_deve->pr_kref); 952 + rcu_read_unlock(); 961 953 962 954 pr_reg->pr_reg_nacl = nacl; 963 955 pr_reg->tg_pt_sep_rtpi = lun->lun_rtpi; 964 - 965 956 list_del(&pr_reg->pr_reg_aptpl_list); 966 957 spin_unlock(&pr_tmpl->aptpl_reg_lock); 967 958 /* 968 959 * At this point all of the pointers in *pr_reg will 969 960 * be setup, so go ahead and add the registration. 970 961 */ 971 - 972 962 __core_scsi3_add_registration(dev, nacl, pr_reg, 0, 0); 973 963 /* 974 964 * If this registration is the reservation holder, ··· 1078 1044 1079 1045 __core_scsi3_dump_registration(tfo, dev, nacl, pr_reg, register_type); 1080 1046 spin_unlock(&pr_tmpl->registration_lock); 1081 - 1082 - rcu_read_lock(); 1083 - deve = pr_reg->pr_reg_deve; 1084 - if (deve) 1085 - set_bit(DEF_PR_REG_ACTIVE, &deve->deve_flags); 1086 - rcu_read_unlock(); 1087 - 1088 1047 /* 1089 1048 * Skip extra processing for ALL_TG_PT=0 or REGISTER_AND_MOVE. 1090 1049 */ 1091 1050 if (!pr_reg->pr_reg_all_tg_pt || register_move) 1092 - return; 1051 + goto out; 1093 1052 /* 1094 1053 * Walk pr_reg->pr_reg_atp_list and add registrations for ALL_TG_PT=1 1095 1054 * allocated in __core_scsi3_alloc_registration() ··· 1102 1075 __core_scsi3_dump_registration(tfo, dev, nacl_tmp, pr_reg_tmp, 1103 1076 register_type); 1104 1077 spin_unlock(&pr_tmpl->registration_lock); 1105 - 1078 + /* 1079 + * Drop configfs group dependency reference and deve->pr_kref 1080 + * obtained from __core_scsi3_alloc_registration() code. 
1081 + */ 1106 1082 rcu_read_lock(); 1107 1083 deve = pr_reg_tmp->pr_reg_deve; 1108 - if (deve) 1084 + if (deve) { 1109 1085 set_bit(DEF_PR_REG_ACTIVE, &deve->deve_flags); 1086 + core_scsi3_lunacl_undepend_item(deve); 1087 + pr_reg_tmp->pr_reg_deve = NULL; 1088 + } 1110 1089 rcu_read_unlock(); 1111 - 1112 - /* 1113 - * Drop configfs group dependency reference from 1114 - * __core_scsi3_alloc_registration() 1115 - */ 1116 - core_scsi3_lunacl_undepend_item(pr_reg_tmp->pr_reg_deve); 1117 1090 } 1091 + out: 1092 + /* 1093 + * Drop deve->pr_kref obtained in __core_scsi3_do_alloc_registration() 1094 + */ 1095 + rcu_read_lock(); 1096 + deve = pr_reg->pr_reg_deve; 1097 + if (deve) { 1098 + set_bit(DEF_PR_REG_ACTIVE, &deve->deve_flags); 1099 + kref_put(&deve->pr_kref, target_pr_kref_release); 1100 + pr_reg->pr_reg_deve = NULL; 1101 + } 1102 + rcu_read_unlock(); 1118 1103 } 1119 1104 1120 1105 static int core_scsi3_alloc_registration( ··· 1824 1785 dest_node_acl->initiatorname, i_buf, (dest_se_deve) ? 1825 1786 dest_se_deve->mapped_lun : 0); 1826 1787 1827 - if (!dest_se_deve) 1788 + if (!dest_se_deve) { 1789 + kref_put(&local_pr_reg->pr_reg_deve->pr_kref, 1790 + target_pr_kref_release); 1828 1791 continue; 1829 - 1792 + } 1830 1793 core_scsi3_lunacl_undepend_item(dest_se_deve); 1831 1794 core_scsi3_nodeacl_undepend_item(dest_node_acl); 1832 1795 core_scsi3_tpg_undepend_item(dest_tpg); ··· 1864 1823 1865 1824 kmem_cache_free(t10_pr_reg_cache, dest_pr_reg); 1866 1825 1867 - if (!dest_se_deve) 1826 + if (!dest_se_deve) { 1827 + kref_put(&local_pr_reg->pr_reg_deve->pr_kref, 1828 + target_pr_kref_release); 1868 1829 continue; 1869 - 1830 + } 1870 1831 core_scsi3_lunacl_undepend_item(dest_se_deve); 1871 1832 core_scsi3_nodeacl_undepend_item(dest_node_acl); 1872 1833 core_scsi3_tpg_undepend_item(dest_tpg);
+4 -1
drivers/target/target_core_tpg.c
··· 668 668 list_add_tail(&lun->lun_dev_link, &dev->dev_sep_list); 669 669 spin_unlock(&dev->se_port_lock); 670 670 671 - lun->lun_access = lun_access; 671 + if (dev->dev_flags & DF_READ_ONLY) 672 + lun->lun_access = TRANSPORT_LUNFLAGS_READ_ONLY; 673 + else 674 + lun->lun_access = lun_access; 672 675 if (!(dev->se_hba->hba_flags & HBA_FLAGS_INTERNAL_USE)) 673 676 hlist_add_head_rcu(&lun->link, &tpg->tpg_lun_hlist); 674 677 mutex_unlock(&tpg->tpg_lun_mutex);
+10
drivers/thermal/power_allocator.c
··· 144 144 switch_on_temp = 0; 145 145 146 146 temperature_threshold = control_temp - switch_on_temp; 147 + /* 148 + * estimate_pid_constants() tries to find appropriate default 149 + * values for thermal zones that don't provide them. If a 150 + * system integrator has configured a thermal zone with two 151 + * passive trip points at the same temperature, that person 152 + * hasn't put any effort to set up the thermal zone properly 153 + * so just give up. 154 + */ 155 + if (!temperature_threshold) 156 + return; 147 157 148 158 if (!tz->tzp->k_po || force) 149 159 tz->tzp->k_po = int_to_frac(sustainable_power) /
+1 -1
drivers/thunderbolt/nhi.c
··· 643 643 { 644 644 .class = PCI_CLASS_SYSTEM_OTHER << 8, .class_mask = ~0, 645 645 .vendor = PCI_VENDOR_ID_INTEL, .device = 0x156c, 646 - .subvendor = 0x2222, .subdevice = 0x1111, 646 + .subvendor = PCI_ANY_ID, .subdevice = PCI_ANY_ID, 647 647 }, 648 648 { 0,} 649 649 };
+2
drivers/tty/serial/8250/8250_port.c
··· 2910 2910 } 2911 2911 2912 2912 #endif /* CONFIG_SERIAL_8250_CONSOLE */ 2913 + 2914 + MODULE_LICENSE("GPL");
+1 -1
drivers/usb/chipidea/ci_hdrc_imx.c
··· 61 61 { .compatible = "fsl,imx27-usb", .data = &imx27_usb_data}, 62 62 { .compatible = "fsl,imx6q-usb", .data = &imx6q_usb_data}, 63 63 { .compatible = "fsl,imx6sl-usb", .data = &imx6sl_usb_data}, 64 - { .compatible = "fsl,imx6sx-usb", .data = &imx6sl_usb_data}, 64 + { .compatible = "fsl,imx6sx-usb", .data = &imx6sx_usb_data}, 65 65 { /* sentinel */ } 66 66 }; 67 67 MODULE_DEVICE_TABLE(of, ci_hdrc_imx_dt_ids);
+19 -6
drivers/usb/chipidea/ci_hdrc_usb2.c
··· 12 12 #include <linux/dma-mapping.h> 13 13 #include <linux/module.h> 14 14 #include <linux/of.h> 15 + #include <linux/of_platform.h> 15 16 #include <linux/phy/phy.h> 16 17 #include <linux/platform_device.h> 17 18 #include <linux/usb/chipidea.h> ··· 31 30 .flags = CI_HDRC_DISABLE_STREAMING, 32 31 }; 33 32 33 + static struct ci_hdrc_platform_data ci_zynq_pdata = { 34 + .capoffset = DEF_CAPOFFSET, 35 + }; 36 + 37 + static const struct of_device_id ci_hdrc_usb2_of_match[] = { 38 + { .compatible = "chipidea,usb2"}, 39 + { .compatible = "xlnx,zynq-usb-2.20a", .data = &ci_zynq_pdata}, 40 + { } 41 + }; 42 + MODULE_DEVICE_TABLE(of, ci_hdrc_usb2_of_match); 43 + 34 44 static int ci_hdrc_usb2_probe(struct platform_device *pdev) 35 45 { 36 46 struct device *dev = &pdev->dev; 37 47 struct ci_hdrc_usb2_priv *priv; 38 48 struct ci_hdrc_platform_data *ci_pdata = dev_get_platdata(dev); 39 49 int ret; 50 + const struct of_device_id *match; 40 51 41 52 if (!ci_pdata) { 42 53 ci_pdata = devm_kmalloc(dev, sizeof(*ci_pdata), GFP_KERNEL); 43 54 *ci_pdata = ci_default_pdata; /* struct copy */ 55 + } 56 + 57 + match = of_match_device(ci_hdrc_usb2_of_match, &pdev->dev); 58 + if (match && match->data) { 59 + /* struct copy */ 60 + *ci_pdata = *(struct ci_hdrc_platform_data *)match->data; 44 61 } 45 62 46 63 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); ··· 114 95 115 96 return 0; 116 97 } 117 - 118 - static const struct of_device_id ci_hdrc_usb2_of_match[] = { 119 - { .compatible = "chipidea,usb2" }, 120 - { } 121 - }; 122 - MODULE_DEVICE_TABLE(of, ci_hdrc_usb2_of_match); 123 98 124 99 static struct platform_driver ci_hdrc_usb2_driver = { 125 100 .probe = ci_hdrc_usb2_probe,
+44 -40
drivers/usb/chipidea/udc.c
··· 656 656 return 0; 657 657 } 658 658 659 + static int _ep_set_halt(struct usb_ep *ep, int value, bool check_transfer) 660 + { 661 + struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep); 662 + int direction, retval = 0; 663 + unsigned long flags; 664 + 665 + if (ep == NULL || hwep->ep.desc == NULL) 666 + return -EINVAL; 667 + 668 + if (usb_endpoint_xfer_isoc(hwep->ep.desc)) 669 + return -EOPNOTSUPP; 670 + 671 + spin_lock_irqsave(hwep->lock, flags); 672 + 673 + if (value && hwep->dir == TX && check_transfer && 674 + !list_empty(&hwep->qh.queue) && 675 + !usb_endpoint_xfer_control(hwep->ep.desc)) { 676 + spin_unlock_irqrestore(hwep->lock, flags); 677 + return -EAGAIN; 678 + } 679 + 680 + direction = hwep->dir; 681 + do { 682 + retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value); 683 + 684 + if (!value) 685 + hwep->wedge = 0; 686 + 687 + if (hwep->type == USB_ENDPOINT_XFER_CONTROL) 688 + hwep->dir = (hwep->dir == TX) ? RX : TX; 689 + 690 + } while (hwep->dir != direction); 691 + 692 + spin_unlock_irqrestore(hwep->lock, flags); 693 + return retval; 694 + } 695 + 696 + 659 697 /** 660 698 * _gadget_stop_activity: stops all USB activity, flushes & disables all endpts 661 699 * @gadget: gadget ··· 1089 1051 num += ci->hw_ep_max / 2; 1090 1052 1091 1053 spin_unlock(&ci->lock); 1092 - err = usb_ep_set_halt(&ci->ci_hw_ep[num].ep); 1054 + err = _ep_set_halt(&ci->ci_hw_ep[num].ep, 1, false); 1093 1055 spin_lock(&ci->lock); 1094 1056 if (!err) 1095 1057 isr_setup_status_phase(ci); ··· 1155 1117 1156 1118 if (err < 0) { 1157 1119 spin_unlock(&ci->lock); 1158 - if (usb_ep_set_halt(&hwep->ep)) 1159 - dev_err(ci->dev, "error: ep_set_halt\n"); 1120 + if (_ep_set_halt(&hwep->ep, 1, false)) 1121 + dev_err(ci->dev, "error: _ep_set_halt\n"); 1160 1122 spin_lock(&ci->lock); 1161 1123 } 1162 1124 } ··· 1187 1149 err = isr_setup_status_phase(ci); 1188 1150 if (err < 0) { 1189 1151 spin_unlock(&ci->lock); 1190 - if (usb_ep_set_halt(&hwep->ep)) 1152 + if (_ep_set_halt(&hwep->ep, 1, false)) 1191 1153 dev_err(ci->dev, 1192 - "error: ep_set_halt\n"); 1154 + "error: _ep_set_halt\n"); 1193 1155 spin_lock(&ci->lock); 1194 1156 } 1195 1157 } ··· 1435 1397 */ 1436 1398 static int ep_set_halt(struct usb_ep *ep, int value) 1437 1399 { 1438 - struct ci_hw_ep *hwep = container_of(ep, struct ci_hw_ep, ep); 1439 - int direction, retval = 0; 1440 - unsigned long flags; 1441 - 1442 - if (ep == NULL || hwep->ep.desc == NULL) 1443 - return -EINVAL; 1444 - 1445 - if (usb_endpoint_xfer_isoc(hwep->ep.desc)) 1446 - return -EOPNOTSUPP; 1447 - 1448 - spin_lock_irqsave(hwep->lock, flags); 1449 - 1450 - #ifndef STALL_IN 1451 - /* g_file_storage MS compliant but g_zero fails chapter 9 compliance */ 1452 - if (value && hwep->type == USB_ENDPOINT_XFER_BULK && hwep->dir == TX && 1453 - !list_empty(&hwep->qh.queue)) { 1454 - spin_unlock_irqrestore(hwep->lock, flags); 1455 - return -EAGAIN; 1456 - } 1457 - #endif 1458 - 1459 - direction = hwep->dir; 1460 - do { 1461 - retval |= hw_ep_set_halt(hwep->ci, hwep->num, hwep->dir, value); 1462 - 1463 - if (!value) 1464 - hwep->wedge = 0; 1465 - 1466 - if (hwep->type == USB_ENDPOINT_XFER_CONTROL) 1467 - hwep->dir = (hwep->dir == TX) ? RX : TX; 1468 - 1469 - } while (hwep->dir != direction); 1470 - 1471 - spin_unlock_irqrestore(hwep->lock, flags); 1472 - return retval; 1400 + return _ep_set_halt(ep, value, true); 1473 1401 } 1474 1402 1475 1403 /**
+3 -2
drivers/usb/core/config.c
··· 112 112 cfgno, inum, asnum, ep->desc.bEndpointAddress); 113 113 ep->ss_ep_comp.bmAttributes = 16; 114 114 } else if (usb_endpoint_xfer_isoc(&ep->desc) && 115 - desc->bmAttributes > 2) { 115 + USB_SS_MULT(desc->bmAttributes) > 3) { 116 116 dev_warn(ddev, "Isoc endpoint has Mult of %d in " 117 117 "config %d interface %d altsetting %d ep %d: " 118 118 "setting to 3\n", desc->bmAttributes + 1, ··· 121 121 } 122 122 123 123 if (usb_endpoint_xfer_isoc(&ep->desc)) 124 - max_tx = (desc->bMaxBurst + 1) * (desc->bmAttributes + 1) * 124 + max_tx = (desc->bMaxBurst + 1) * 125 + (USB_SS_MULT(desc->bmAttributes)) * 125 126 usb_endpoint_maxp(&ep->desc); 126 127 else if (usb_endpoint_xfer_int(&ep->desc)) 127 128 max_tx = usb_endpoint_maxp(&ep->desc) *
+2 -2
drivers/usb/dwc3/dwc3-omap.c
··· 514 514 goto err1; 515 515 } 516 516 517 - dwc3_omap_enable_irqs(omap); 518 - 519 517 ret = dwc3_omap_extcon_register(omap); 520 518 if (ret < 0) 521 519 goto err2; ··· 523 525 dev_err(&pdev->dev, "failed to create dwc3 core\n"); 524 526 goto err3; 525 527 } 528 + 529 + dwc3_omap_enable_irqs(omap); 526 530 527 531 return 0; 528 532
-4
drivers/usb/dwc3/gadget.c
··· 2665 2665 int i; 2666 2666 irqreturn_t ret = IRQ_NONE; 2667 2667 2668 - spin_lock(&dwc->lock); 2669 - 2670 2668 for (i = 0; i < dwc->num_event_buffers; i++) { 2671 2669 irqreturn_t status; 2672 2670 ··· 2672 2674 if (status == IRQ_WAKE_THREAD) 2673 2675 ret = status; 2674 2676 } 2675 - 2676 - spin_unlock(&dwc->lock); 2677 2677 2678 2678 return ret; 2679 2679 }
+1
drivers/usb/gadget/epautoconf.c
··· 186 186 187 187 list_for_each_entry (ep, &gadget->ep_list, ep_list) { 188 188 ep->claimed = false; 189 + ep->driver_data = NULL; 189 190 } 190 191 gadget->in_epnum = 0; 191 192 gadget->out_epnum = 0;
+20 -23
drivers/usb/gadget/udc/amd5536udc.c
··· 3138 3138 writel(AMD_BIT(UDC_DEVCFG_SOFTRESET), &dev->regs->cfg); 3139 3139 if (dev->irq_registered) 3140 3140 free_irq(pdev->irq, dev); 3141 - if (dev->regs) 3142 - iounmap(dev->regs); 3141 + if (dev->virt_addr) 3142 + iounmap(dev->virt_addr); 3143 3143 if (dev->mem_region) 3144 3144 release_mem_region(pci_resource_start(pdev, 0), 3145 3145 pci_resource_len(pdev, 0)); ··· 3226 3226 3227 3227 /* init */ 3228 3228 dev = kzalloc(sizeof(struct udc), GFP_KERNEL); 3229 - if (!dev) { 3230 - retval = -ENOMEM; 3231 - goto finished; 3232 - } 3229 + if (!dev) 3230 + return -ENOMEM; 3233 3231 3234 3232 /* pci setup */ 3235 3233 if (pci_enable_device(pdev) < 0) { 3236 - kfree(dev); 3237 - dev = NULL; 3238 3234 retval = -ENODEV; 3239 - goto finished; 3235 + goto err_pcidev; 3240 3236 } 3241 3237 dev->active = 1; 3242 3238 ··· 3242 3246 3243 3247 if (!request_mem_region(resource, len, name)) { 3244 3248 dev_dbg(&pdev->dev, "pci device used already\n"); 3245 - kfree(dev); 3246 - dev = NULL; 3247 3249 retval = -EBUSY; 3248 - goto finished; 3250 + goto err_memreg; 3249 3251 } 3250 3252 dev->mem_region = 1; 3251 3253 3252 3254 dev->virt_addr = ioremap_nocache(resource, len); 3253 3255 if (dev->virt_addr == NULL) { 3254 3256 dev_dbg(&pdev->dev, "start address cannot be mapped\n"); 3255 - kfree(dev); 3256 - dev = NULL; 3257 3257 retval = -EFAULT; 3258 - goto finished; 3258 + goto err_ioremap; 3259 3259 } 3260 3260 3261 3261 if (!pdev->irq) { 3262 3262 dev_err(&pdev->dev, "irq not set\n"); 3263 - kfree(dev); 3264 - dev = NULL; 3265 3263 retval = -ENODEV; 3266 - goto finished; 3264 + goto err_irq; 3267 3265 } 3268 3266 3269 3267 spin_lock_init(&dev->lock); ··· 3273 3283 3274 3284 if (request_irq(pdev->irq, udc_irq, IRQF_SHARED, name, dev) != 0) { 3275 3285 dev_dbg(&pdev->dev, "request_irq(%d) fail\n", pdev->irq); 3276 - kfree(dev); 3277 - dev = NULL; 3278 3286 retval = -EBUSY; 3279 - goto finished; 3287 + goto err_irq; 3280 3288 } 3281 3289 dev->irq_registered = 1; 3282 3290 ··· 
3302 3314 return 0; 3303 3315 3304 3316 finished: 3305 - if (dev) 3306 - udc_pci_remove(pdev); 3317 + udc_pci_remove(pdev); 3318 + return retval; 3319 + 3320 + err_irq: 3321 + iounmap(dev->virt_addr); 3322 + err_ioremap: 3323 + release_mem_region(resource, len); 3324 + err_memreg: 3325 + pci_disable_device(pdev); 3326 + err_pcidev: 3327 + kfree(dev); 3307 3328 return retval; 3308 3329 } 3309 3330
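The amd5536udc rework replaces scattered `kfree(dev); goto finished` cleanup with the kernel's label-ladder idiom: each failure jumps to the label that releases exactly what was acquired so far, in reverse order. A standalone sketch of the pattern, with `malloc`/`free` standing in for the real resources and a `fail_at` parameter to simulate each failure point:

```c
#include <stdlib.h>
#include <assert.h>

/* Label-ladder unwind sketch; resource types here are stand-ins. */
static int probe_sketch(int fail_at)
{
	void *dev, *region, *mapping;
	int retval = 0;

	dev = malloc(16);		/* stands in for kzalloc() */
	if (!dev)
		return -1;

	region = malloc(16);		/* request_mem_region() */
	if (fail_at == 1) {		/* simulate allocation failure */
		free(region);
		region = NULL;
	}
	if (!region) {
		retval = -2;
		goto err_memreg;
	}

	mapping = malloc(16);		/* ioremap_nocache() */
	if (fail_at == 2) {		/* simulate mapping failure */
		free(mapping);
		mapping = NULL;
	}
	if (!mapping) {
		retval = -3;
		goto err_ioremap;
	}

	/* success: for this demo, fall through the ladder to tear down */
	free(mapping);
err_ioremap:
	free(region);			/* release_mem_region() */
err_memreg:
	free(dev);			/* kfree(dev) */
	return retval;
}
```

The real probe returns before the labels on success; the demo falls through so every path is exercised.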
+11
drivers/usb/gadget/udc/atmel_usba_udc.c
··· 2002 2002 ep->udc = udc; 2003 2003 INIT_LIST_HEAD(&ep->queue); 2004 2004 2005 + if (ep->index == 0) { 2006 + ep->ep.caps.type_control = true; 2007 + } else { 2008 + ep->ep.caps.type_iso = ep->can_isoc; 2009 + ep->ep.caps.type_bulk = true; 2010 + ep->ep.caps.type_int = true; 2011 + } 2012 + 2013 + ep->ep.caps.dir_in = true; 2014 + ep->ep.caps.dir_out = true; 2015 + 2005 2016 if (i) 2006 2017 list_add_tail(&ep->ep.ep_list, &udc->gadget.ep_list); 2007 2018
+1 -2
drivers/usb/gadget/udc/bdc/bdc_core.c
··· 324 324 bdc->scratchpad.buff, bdc->scratchpad.sp_dma); 325 325 326 326 /* Destroy the dma pools */ 327 - if (bdc->bd_table_pool) 328 - dma_pool_destroy(bdc->bd_table_pool); 327 + dma_pool_destroy(bdc->bd_table_pool); 329 328 330 329 /* Free the bdc_ep array */ 331 330 kfree(bdc->bdc_ep_array);
+30 -16
drivers/usb/gadget/udc/dummy_hcd.c
··· 1348 1348 { 1349 1349 struct dummy *dum = dum_hcd->dum; 1350 1350 struct dummy_request *req; 1351 + int sent = 0; 1351 1352 1352 1353 top: 1353 1354 /* if there's no request queued, the device is NAKing; return */ ··· 1386 1385 if (len == 0) 1387 1386 break; 1388 1387 1389 - /* use an extra pass for the final short packet */ 1390 - if (len > ep->ep.maxpacket) { 1391 - rescan = 1; 1392 - len -= (len % ep->ep.maxpacket); 1388 + /* send multiple of maxpacket first, then remainder */ 1389 + if (len >= ep->ep.maxpacket) { 1390 + is_short = 0; 1391 + if (len % ep->ep.maxpacket) 1392 + rescan = 1; 1393 + len -= len % ep->ep.maxpacket; 1394 + } else { 1395 + is_short = 1; 1393 1396 } 1394 - is_short = (len % ep->ep.maxpacket) != 0; 1395 1397 1396 1398 len = dummy_perform_transfer(urb, req, len); 1397 1399 ··· 1403 1399 req->req.status = len; 1404 1400 } else { 1405 1401 limit -= len; 1402 + sent += len; 1406 1403 urb->actual_length += len; 1407 1404 req->req.actual += len; 1408 1405 } ··· 1426 1421 *status = -EOVERFLOW; 1427 1422 else 1428 1423 *status = 0; 1429 - } else if (!to_host) { 1424 + } else { 1430 1425 *status = 0; 1431 1426 if (host_len > dev_len) 1432 1427 req->req.status = -EOVERFLOW; ··· 1434 1429 req->req.status = 0; 1435 1430 } 1436 1431 1437 - /* many requests terminate without a short packet */ 1432 + /* 1433 + * many requests terminate without a short packet. 1434 + * send a zlp if demanded by flags. 
1435 + */ 1438 1436 } else { 1439 - if (req->req.length == req->req.actual 1440 - && !req->req.zero) 1441 - req->req.status = 0; 1442 - if (urb->transfer_buffer_length == urb->actual_length 1443 - && !(urb->transfer_flags 1444 - & URB_ZERO_PACKET)) 1445 - *status = 0; 1437 + if (req->req.length == req->req.actual) { 1438 + if (req->req.zero && to_host) 1439 + rescan = 1; 1440 + else 1441 + req->req.status = 0; 1442 + } 1443 + if (urb->transfer_buffer_length == urb->actual_length) { 1444 + if (urb->transfer_flags & URB_ZERO_PACKET && 1445 + !to_host) 1446 + rescan = 1; 1447 + else 1448 + *status = 0; 1449 + } 1446 1450 } 1447 1451 1448 1452 /* device side completion --> continuable */ ··· 1474 1460 if (rescan) 1475 1461 goto top; 1476 1462 } 1477 - return limit; 1463 + return sent; 1478 1464 } 1479 1465 1480 1466 static int periodic_bytes(struct dummy *dum, struct dummy_ep *ep) ··· 1904 1890 default: 1905 1891 treat_control_like_bulk: 1906 1892 ep->last_io = jiffies; 1907 - total = transfer(dum_hcd, urb, ep, limit, &status); 1893 + total -= transfer(dum_hcd, urb, ep, limit, &status); 1908 1894 break; 1909 1895 } 1910 1896
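The dummy_hcd rework encodes two transfer rules: send the multiple-of-maxpacket portion first and rescan for the remainder, and terminate an exact-multiple transfer with a zero-length packet (ZLP) only when `req.zero` / `URB_ZERO_PACKET` demands one. A small sketch of both decisions:

```c
#include <assert.h>

/* Portion of a buffer that can go out as full packets; the remainder
 * (the short packet) is handled on a rescan pass. */
static unsigned int full_packets_part(unsigned int len, unsigned int maxpacket)
{
	return len - len % maxpacket;
}

/* A ZLP is needed only when the transfer is an exact multiple of
 * maxpacket AND the zero flag asks for explicit termination. */
static int needs_zlp(unsigned int total_len, unsigned int maxpacket,
		     int zero_flag)
{
	return zero_flag && total_len % maxpacket == 0;
}
```

A transfer that ends on a short packet self-terminates, which is why `needs_zlp(1000, 512, 1)` is false while `needs_zlp(1024, 512, 1)` is true.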
+1 -2
drivers/usb/gadget/udc/gr_udc.c
··· 2117 2117 return -EBUSY; 2118 2118 2119 2119 gr_dfs_delete(dev); 2120 - if (dev->desc_pool) 2121 - dma_pool_destroy(dev->desc_pool); 2120 + dma_pool_destroy(dev->desc_pool); 2122 2121 platform_set_drvdata(pdev, NULL); 2123 2122 2124 2123 gr_free_request(&dev->epi[0].ep, &dev->ep0reqi->req);
+1 -2
drivers/usb/gadget/udc/mv_u3d_core.c
··· 1767 1767 usb_del_gadget_udc(&u3d->gadget); 1768 1768 1769 1769 /* free memory allocated in probe */ 1770 - if (u3d->trb_pool) 1771 - dma_pool_destroy(u3d->trb_pool); 1770 + dma_pool_destroy(u3d->trb_pool); 1772 1771 1773 1772 if (u3d->ep_context) 1774 1773 dma_free_coherent(&dev->dev, u3d->ep_context_size,
+1 -2
drivers/usb/gadget/udc/mv_udc_core.c
··· 2100 2100 } 2101 2101 2102 2102 /* free memory allocated in probe */ 2103 - if (udc->dtd_pool) 2104 - dma_pool_destroy(udc->dtd_pool); 2103 + dma_pool_destroy(udc->dtd_pool); 2105 2104 2106 2105 if (udc->ep_dqh) 2107 2106 dma_free_coherent(&pdev->dev, udc->ep_dqh_size,
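The bdc_core, gr_udc, mv_u3d and mv_udc hunks above all rely on the same contract: like `kfree()`, `dma_pool_destroy()` is a no-op when passed NULL, so the caller-side `if (pool)` check is redundant. A sketch with a stand-in destroy function illustrating the contract:

```c
#include <stdlib.h>
#include <assert.h>

static int destroy_calls;

/* Stand-in with dma_pool_destroy()'s NULL-safety: bail out silently
 * on NULL instead of requiring the caller to check. */
static void destroy_sketch(void *pool)
{
	if (!pool)
		return;
	destroy_calls++;
	free(pool);
}

/* Destroying NULL must leave the call counter untouched. */
static int null_destroy_is_noop(void)
{
	int before = destroy_calls;

	destroy_sketch(NULL);
	return destroy_calls == before;
}
```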
+8 -9
drivers/usb/host/xhci-mem.c
··· 1498 1498 * use Event Data TRBs, and we don't chain in a link TRB on short 1499 1499 * transfers, we're basically dividing by 1. 1500 1500 * 1501 - * xHCI 1.0 specification indicates that the Average TRB Length should 1502 - * be set to 8 for control endpoints. 1501 + * xHCI 1.0 and 1.1 specification indicates that the Average TRB Length 1502 + * should be set to 8 for control endpoints. 1503 1503 */ 1504 - if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version == 0x100) 1504 + if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version >= 0x100) 1505 1505 ep_ctx->tx_info |= cpu_to_le32(AVG_TRB_LENGTH_FOR_EP(8)); 1506 1506 else 1507 1507 ep_ctx->tx_info |= ··· 1792 1792 int size; 1793 1793 int i, j, num_ports; 1794 1794 1795 - if (timer_pending(&xhci->cmd_timer)) 1796 - del_timer_sync(&xhci->cmd_timer); 1795 + del_timer_sync(&xhci->cmd_timer); 1797 1796 1798 1797 /* Free the Event Ring Segment Table and the actual Event Ring */ 1799 1798 size = sizeof(struct xhci_erst_entry)*(xhci->erst.num_entries); ··· 2320 2321 2321 2322 INIT_LIST_HEAD(&xhci->cmd_list); 2322 2323 2324 + /* init command timeout timer */ 2325 + setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout, 2326 + (unsigned long)xhci); 2327 + 2323 2328 page_size = readl(&xhci->op_regs->page_size); 2324 2329 xhci_dbg_trace(xhci, trace_xhci_dbg_init, 2325 2330 "Supported page size register = 0x%x", page_size); ··· 2507 2504 xhci_dbg_trace(xhci, trace_xhci_dbg_init, 2508 2505 "Wrote ERST address to ir_set 0."); 2509 2506 xhci_print_ir_set(xhci, 0); 2510 - 2511 - /* init command timeout timer */ 2512 - setup_timer(&xhci->cmd_timer, xhci_handle_command_timeout, 2513 - (unsigned long)xhci); 2514 2507 2515 2508 /* 2516 2509 * XXX: Might need to set the Interrupter Moderation Register to
+45 -45
drivers/usb/host/xhci-pci.c
··· 180 180 "QUIRK: Resetting on resume"); 181 181 } 182 182 183 - /* 184 - * In some Intel xHCI controllers, in order to get D3 working, 185 - * through a vendor specific SSIC CONFIG register at offset 0x883c, 186 - * SSIC PORT need to be marked as "unused" before putting xHCI 187 - * into D3. After D3 exit, the SSIC port need to be marked as "used". 188 - * Without this change, xHCI might not enter D3 state. 189 - * Make sure PME works on some Intel xHCI controllers by writing 1 to clear 190 - * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4 191 - */ 192 - static void xhci_pme_quirk(struct usb_hcd *hcd, bool suspend) 193 - { 194 - struct xhci_hcd *xhci = hcd_to_xhci(hcd); 195 - struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 196 - u32 val; 197 - void __iomem *reg; 198 - 199 - if (pdev->vendor == PCI_VENDOR_ID_INTEL && 200 - pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) { 201 - 202 - reg = (void __iomem *) xhci->cap_regs + PORT2_SSIC_CONFIG_REG2; 203 - 204 - /* Notify SSIC that SSIC profile programming is not done */ 205 - val = readl(reg) & ~PROG_DONE; 206 - writel(val, reg); 207 - 208 - /* Mark SSIC port as unused(suspend) or used(resume) */ 209 - val = readl(reg); 210 - if (suspend) 211 - val |= SSIC_PORT_UNUSED; 212 - else 213 - val &= ~SSIC_PORT_UNUSED; 214 - writel(val, reg); 215 - 216 - /* Notify SSIC that SSIC profile programming is done */ 217 - val = readl(reg) | PROG_DONE; 218 - writel(val, reg); 219 - readl(reg); 220 - } 221 - 222 - reg = (void __iomem *) xhci->cap_regs + 0x80a4; 223 - val = readl(reg); 224 - writel(val | BIT(28), reg); 225 - readl(reg); 226 - } 227 - 228 183 #ifdef CONFIG_ACPI 229 184 static void xhci_pme_acpi_rtd3_enable(struct pci_dev *dev) 230 185 { ··· 300 345 } 301 346 302 347 #ifdef CONFIG_PM 348 + /* 349 + * In some Intel xHCI controllers, in order to get D3 working, 350 + * through a vendor specific SSIC CONFIG register at offset 0x883c, 351 + * SSIC PORT need to be marked as 
"unused" before putting xHCI 352 + * into D3. After D3 exit, the SSIC port need to be marked as "used". 353 + * Without this change, xHCI might not enter D3 state. 354 + * Make sure PME works on some Intel xHCI controllers by writing 1 to clear 355 + * the Internal PME flag bit in vendor specific PMCTRL register at offset 0x80a4 356 + */ 357 + static void xhci_pme_quirk(struct usb_hcd *hcd, bool suspend) 358 + { 359 + struct xhci_hcd *xhci = hcd_to_xhci(hcd); 360 + struct pci_dev *pdev = to_pci_dev(hcd->self.controller); 361 + u32 val; 362 + void __iomem *reg; 363 + 364 + if (pdev->vendor == PCI_VENDOR_ID_INTEL && 365 + pdev->device == PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI) { 366 + 367 + reg = (void __iomem *) xhci->cap_regs + PORT2_SSIC_CONFIG_REG2; 368 + 369 + /* Notify SSIC that SSIC profile programming is not done */ 370 + val = readl(reg) & ~PROG_DONE; 371 + writel(val, reg); 372 + 373 + /* Mark SSIC port as unused(suspend) or used(resume) */ 374 + val = readl(reg); 375 + if (suspend) 376 + val |= SSIC_PORT_UNUSED; 377 + else 378 + val &= ~SSIC_PORT_UNUSED; 379 + writel(val, reg); 380 + 381 + /* Notify SSIC that SSIC profile programming is done */ 382 + val = readl(reg) | PROG_DONE; 383 + writel(val, reg); 384 + readl(reg); 385 + } 386 + 387 + reg = (void __iomem *) xhci->cap_regs + 0x80a4; 388 + val = readl(reg); 389 + writel(val | BIT(28), reg); 390 + readl(reg); 391 + } 392 + 303 393 static int xhci_pci_suspend(struct usb_hcd *hcd, bool do_wakeup) 304 394 { 305 395 struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+11 -2
drivers/usb/host/xhci-ring.c
··· 302 302 ret = xhci_handshake(&xhci->op_regs->cmd_ring, 303 303 CMD_RING_RUNNING, 0, 5 * 1000 * 1000); 304 304 if (ret < 0) { 305 + /* we are about to kill xhci, give it one more chance */ 306 + xhci_write_64(xhci, temp_64 | CMD_RING_ABORT, 307 + &xhci->op_regs->cmd_ring); 308 + udelay(1000); 309 + ret = xhci_handshake(&xhci->op_regs->cmd_ring, 310 + CMD_RING_RUNNING, 0, 3 * 1000 * 1000); 311 + if (ret == 0) 312 + return 0; 313 + 305 314 xhci_err(xhci, "Stopped the command ring failed, " 306 315 "maybe the host is dead\n"); 307 316 xhci->xhc_state |= XHCI_STATE_DYING; ··· 3470 3461 if (start_cycle == 0) 3471 3462 field |= 0x1; 3472 3463 3473 - /* xHCI 1.0 6.4.1.2.1: Transfer Type field */ 3474 - if (xhci->hci_version == 0x100) { 3464 + /* xHCI 1.0/1.1 6.4.1.2.1: Transfer Type field */ 3465 + if (xhci->hci_version >= 0x100) { 3475 3466 if (urb->transfer_buffer_length > 0) { 3476 3467 if (setup->bRequestType & USB_DIR_IN) 3477 3468 field |= TRB_TX_TYPE(TRB_DATA_IN);
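The xhci-ring hunk gives the command-ring abort "one more chance": if the first handshake poll times out, it re-writes `CMD_RING_ABORT` and polls again before marking the host dying. A sketch of that retry shape, with the hardware handshake simulated by a counter that says after how many polls the running bit clears:

```c
#include <assert.h>

struct fake_ring {
	int clears_after;	/* poll count at which the ring stops */
	int polls;		/* cumulative polls so far */
};

/* Simulated xhci_handshake(): up to max_polls attempts per call. */
static int handshake(struct fake_ring *r, int max_polls)
{
	int i;

	for (i = 0; i < max_polls; i++) {
		r->polls++;
		if (r->polls >= r->clears_after)
			return 0;
	}
	return -1;			/* -ETIMEDOUT in the driver */
}

static int stop_cmd_ring(struct fake_ring *r)
{
	if (handshake(r, 5) == 0)	/* first wait (5 s in the driver) */
		return 0;
	/* about to kill the xhci: re-issue the abort, poll once more */
	if (handshake(r, 3) == 0)	/* second chance (3 s) */
		return 0;
	return -1;			/* host presumed dead */
}

/* Convenience wrapper for a ring that stops after n polls. */
static int stop_demo(int clears_after)
{
	struct fake_ring r = { clears_after, 0 };

	return stop_cmd_ring(&r);
}
```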
+11 -13
drivers/usb/host/xhci.c
··· 146 146 "waited %u microseconds.\n", 147 147 XHCI_MAX_HALT_USEC); 148 148 if (!ret) 149 - xhci->xhc_state &= ~XHCI_STATE_HALTED; 149 + xhci->xhc_state &= ~(XHCI_STATE_HALTED | XHCI_STATE_DYING); 150 + 150 151 return ret; 151 152 } 152 153 ··· 655 654 } 656 655 EXPORT_SYMBOL_GPL(xhci_run); 657 656 658 - static void xhci_only_stop_hcd(struct usb_hcd *hcd) 659 - { 660 - struct xhci_hcd *xhci = hcd_to_xhci(hcd); 661 - 662 - spin_lock_irq(&xhci->lock); 663 - xhci_halt(xhci); 664 - spin_unlock_irq(&xhci->lock); 665 - } 666 - 667 657 /* 668 658 * Stop xHCI driver. 669 659 * ··· 669 677 u32 temp; 670 678 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 671 679 672 - if (!usb_hcd_is_primary_hcd(hcd)) { 673 - xhci_only_stop_hcd(xhci->shared_hcd); 680 + if (xhci->xhc_state & XHCI_STATE_HALTED) 674 681 return; 675 - } 676 682 683 + mutex_lock(&xhci->mutex); 677 684 spin_lock_irq(&xhci->lock); 685 + xhci->xhc_state |= XHCI_STATE_HALTED; 686 + xhci->cmd_ring_state = CMD_RING_STATE_STOPPED; 687 + 678 688 /* Make sure the xHC is halted for a USB3 roothub 679 689 * (xhci_stop() could be called as part of failed init). 680 690 */ ··· 711 717 xhci_dbg_trace(xhci, trace_xhci_dbg_init, 712 718 "xhci_stop completed - status = %x", 713 719 readl(&xhci->op_regs->status)); 720 + mutex_unlock(&xhci->mutex); 714 721 } 715 722 716 723 /* ··· 3787 3792 struct xhci_command *command = NULL; 3788 3793 3789 3794 mutex_lock(&xhci->mutex); 3795 + 3796 + if (xhci->xhc_state) /* dying or halted */ 3797 + goto out; 3790 3798 3791 3799 if (!udev->slot_id) { 3792 3800 xhci_dbg_trace(xhci, trace_xhci_dbg_address,
+7
drivers/usb/musb/musb_core.c
··· 1051 1051 * (c) peripheral initiates, using SRP 1052 1052 */ 1053 1053 if (musb->port_mode != MUSB_PORT_MODE_HOST && 1054 + musb->xceiv->otg->state != OTG_STATE_A_WAIT_BCON && 1054 1055 (devctl & MUSB_DEVCTL_VBUS) == MUSB_DEVCTL_VBUS) { 1055 1056 musb->is_active = 1; 1056 1057 } else { ··· 2449 2448 struct musb *musb = dev_to_musb(dev); 2450 2449 unsigned long flags; 2451 2450 2451 + musb_platform_disable(musb); 2452 + musb_generic_disable(musb); 2453 + 2452 2454 spin_lock_irqsave(&musb->lock, flags); 2453 2455 2454 2456 if (is_peripheral_active(musb)) { ··· 2505 2501 pm_runtime_disable(dev); 2506 2502 pm_runtime_set_active(dev); 2507 2503 pm_runtime_enable(dev); 2504 + 2505 + musb_start(musb); 2506 + 2508 2507 return 0; 2509 2508 } 2510 2509
+3
drivers/usb/musb/musb_cppi41.c
··· 551 551 } else { 552 552 cppi41_set_autoreq_mode(cppi41_channel, EP_MODE_AUTOREQ_NONE); 553 553 554 + /* delay to drain to cppi dma pipeline for isoch */ 555 + udelay(250); 556 + 554 557 csr = musb_readw(epio, MUSB_RXCSR); 555 558 csr &= ~(MUSB_RXCSR_H_REQPKT | MUSB_RXCSR_DMAENAB); 556 559 musb_writew(epio, MUSB_RXCSR, csr);
+5 -2
drivers/usb/musb/musb_dsps.c
··· 225 225 226 226 dsps_writel(reg_base, wrp->epintr_set, epmask); 227 227 dsps_writel(reg_base, wrp->coreintr_set, coremask); 228 - /* start polling for ID change. */ 229 - mod_timer(&glue->timer, jiffies + msecs_to_jiffies(wrp->poll_timeout)); 228 + /* start polling for ID change in dual-role idle mode */ 229 + if (musb->xceiv->otg->state == OTG_STATE_B_IDLE && 230 + musb->port_mode == MUSB_PORT_MODE_DUAL_ROLE) 231 + mod_timer(&glue->timer, jiffies + 232 + msecs_to_jiffies(wrp->poll_timeout)); 230 233 dsps_musb_try_idle(musb, 0); 231 234 } 232 235
+2
drivers/usb/musb/ux500.c
··· 379 379 {} 380 380 }; 381 381 382 + MODULE_DEVICE_TABLE(of, ux500_match); 383 + 382 384 static struct platform_driver ux500_driver = { 383 385 .probe = ux500_probe, 384 386 .remove = ux500_remove,
+1 -1
drivers/usb/phy/Kconfig
··· 155 155 config USB_QCOM_8X16_PHY 156 156 tristate "Qualcomm APQ8016/MSM8916 on-chip USB PHY controller support" 157 157 depends on ARCH_QCOM || COMPILE_TEST 158 - depends on RESET_CONTROLLER 158 + depends on RESET_CONTROLLER && EXTCON 159 159 select USB_PHY 160 160 select USB_ULPI_VIEWPORT 161 161 help
+2 -1
drivers/usb/phy/phy-generic.c
··· 232 232 clk_rate = pdata->clk_rate; 233 233 needs_vcc = pdata->needs_vcc; 234 234 if (gpio_is_valid(pdata->gpio_reset)) { 235 - err = devm_gpio_request_one(dev, pdata->gpio_reset, 0, 235 + err = devm_gpio_request_one(dev, pdata->gpio_reset, 236 + GPIOF_ACTIVE_LOW, 236 237 dev_name(dev)); 237 238 if (!err) 238 239 nop->gpiod_reset =
+1
drivers/usb/phy/phy-isp1301.c
··· 31 31 { "isp1301", 0 }, 32 32 { } 33 33 }; 34 + MODULE_DEVICE_TABLE(i2c, isp1301_id); 34 35 35 36 static struct i2c_client *isp1301_i2c_client; 36 37
+24
drivers/usb/serial/option.c
··· 278 278 #define ZTE_PRODUCT_MF622 0x0001 279 279 #define ZTE_PRODUCT_MF628 0x0015 280 280 #define ZTE_PRODUCT_MF626 0x0031 281 + #define ZTE_PRODUCT_ZM8620_X 0x0396 282 + #define ZTE_PRODUCT_ME3620_MBIM 0x0426 283 + #define ZTE_PRODUCT_ME3620_X 0x1432 284 + #define ZTE_PRODUCT_ME3620_L 0x1433 281 285 #define ZTE_PRODUCT_AC2726 0xfff1 282 286 #define ZTE_PRODUCT_MG880 0xfffd 283 287 #define ZTE_PRODUCT_CDMA_TECH 0xfffe ··· 546 542 547 543 static const struct option_blacklist_info zte_mc2716_z_blacklist = { 548 544 .sendsetup = BIT(1) | BIT(2) | BIT(3), 545 + }; 546 + 547 + static const struct option_blacklist_info zte_me3620_mbim_blacklist = { 548 + .reserved = BIT(2) | BIT(3) | BIT(4), 549 + }; 550 + 551 + static const struct option_blacklist_info zte_me3620_xl_blacklist = { 552 + .reserved = BIT(3) | BIT(4) | BIT(5), 553 + }; 554 + 555 + static const struct option_blacklist_info zte_zm8620_x_blacklist = { 556 + .reserved = BIT(3) | BIT(4) | BIT(5), 549 557 }; 550 558 551 559 static const struct option_blacklist_info huawei_cdc12_blacklist = { ··· 1607 1591 .driver_info = (kernel_ulong_t)&zte_ad3812_z_blacklist }, 1608 1592 { USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, ZTE_PRODUCT_MC2716, 0xff, 0xff, 0xff), 1609 1593 .driver_info = (kernel_ulong_t)&zte_mc2716_z_blacklist }, 1594 + { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_L), 1595 + .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist }, 1596 + { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_MBIM), 1597 + .driver_info = (kernel_ulong_t)&zte_me3620_mbim_blacklist }, 1598 + { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ME3620_X), 1599 + .driver_info = (kernel_ulong_t)&zte_me3620_xl_blacklist }, 1600 + { USB_DEVICE(ZTE_VENDOR_ID, ZTE_PRODUCT_ZM8620_X), 1601 + .driver_info = (kernel_ulong_t)&zte_zm8620_x_blacklist }, 1610 1602 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x01) }, 1611 1603 { USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x02, 0x05) }, 1612 1604 { 
USB_VENDOR_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0xff, 0x86, 0x10) },
+31
drivers/usb/serial/whiteheat.c
··· 80 80 static int whiteheat_firmware_attach(struct usb_serial *serial); 81 81 82 82 /* function prototypes for the Connect Tech WhiteHEAT serial converter */ 83 + static int whiteheat_probe(struct usb_serial *serial, 84 + const struct usb_device_id *id); 83 85 static int whiteheat_attach(struct usb_serial *serial); 84 86 static void whiteheat_release(struct usb_serial *serial); 85 87 static int whiteheat_port_probe(struct usb_serial_port *port); ··· 118 116 .description = "Connect Tech - WhiteHEAT", 119 117 .id_table = id_table_std, 120 118 .num_ports = 4, 119 + .probe = whiteheat_probe, 121 120 .attach = whiteheat_attach, 122 121 .release = whiteheat_release, 123 122 .port_probe = whiteheat_port_probe, ··· 220 217 /***************************************************************************** 221 218 * Connect Tech's White Heat serial driver functions 222 219 *****************************************************************************/ 220 + 221 + static int whiteheat_probe(struct usb_serial *serial, 222 + const struct usb_device_id *id) 223 + { 224 + struct usb_host_interface *iface_desc; 225 + struct usb_endpoint_descriptor *endpoint; 226 + size_t num_bulk_in = 0; 227 + size_t num_bulk_out = 0; 228 + size_t min_num_bulk; 229 + unsigned int i; 230 + 231 + iface_desc = serial->interface->cur_altsetting; 232 + 233 + for (i = 0; i < iface_desc->desc.bNumEndpoints; i++) { 234 + endpoint = &iface_desc->endpoint[i].desc; 235 + if (usb_endpoint_is_bulk_in(endpoint)) 236 + ++num_bulk_in; 237 + if (usb_endpoint_is_bulk_out(endpoint)) 238 + ++num_bulk_out; 239 + } 240 + 241 + min_num_bulk = COMMAND_PORT + 1; 242 + if (num_bulk_in < min_num_bulk || num_bulk_out < min_num_bulk) 243 + return -ENODEV; 244 + 245 + return 0; 246 + } 247 + 223 248 static int whiteheat_attach(struct usb_serial *serial) 224 249 { 225 250 struct usb_serial_port *command_port;
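The new `whiteheat_probe()` guards against malicious or broken devices by counting the interface's bulk IN and OUT endpoints before `attach` indexes them. A simplified sketch of that validation, with the descriptor reduced to address and attributes (the `COMMAND_PORT` value of 4 is taken as an assumption here):

```c
#include <assert.h>

#define COMMAND_PORT 4	/* index the driver will access; assumed value */

/* Simplified endpoint descriptor: bit 7 of addr = IN direction,
 * low two bits of attr = transfer type (2 = bulk). */
struct ep_desc {
	unsigned char addr;
	unsigned char attr;
};

/* Refuse to bind unless the device offers at least COMMAND_PORT + 1
 * bulk endpoints in each direction, as the probe hunk above does. */
static int enough_bulk_eps(const struct ep_desc *eps, int n)
{
	int in = 0, out = 0, i;

	for (i = 0; i < n; i++) {
		if ((eps[i].attr & 0x3) != 0x2)
			continue;
		if (eps[i].addr & 0x80)
			in++;
		else
			out++;
	}
	return in >= COMMAND_PORT + 1 && out >= COMMAND_PORT + 1;
}

/* A well-formed interface: five bulk IN plus five bulk OUT endpoints. */
static const struct ep_desc demo_eps[] = {
	{0x81, 2}, {0x82, 2}, {0x83, 2}, {0x84, 2}, {0x85, 2},
	{0x01, 2}, {0x02, 2}, {0x03, 2}, {0x04, 2}, {0x05, 2},
};
```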
+2 -1
drivers/watchdog/Kconfig
··· 817 817 tristate "Intel TCO Timer/Watchdog" 818 818 depends on (X86 || IA64) && PCI 819 819 select WATCHDOG_CORE 820 + depends on I2C || I2C=n 820 821 select LPC_ICH if !EXPERT 821 - select I2C_I801 if !EXPERT 822 + select I2C_I801 if !EXPERT && I2C 822 823 ---help--- 823 824 Hardware driver for the intel TCO timer based watchdog devices. 824 825 These drivers are included in the Intel 82801 I/O Controller
+8 -2
drivers/watchdog/bcm2835_wdt.c
··· 36 36 #define PM_RSTC_WRCFG_FULL_RESET 0x00000020 37 37 #define PM_RSTC_RESET 0x00000102 38 38 39 + /* 40 + * The Raspberry Pi firmware uses the RSTS register to know which partition 41 + * to boot from. The partition value is spread into bits 0, 2, 4, 6, 8, 10. 42 + * Partition 63 is a special partition used by the firmware to indicate halt. 43 + */ 44 + #define PM_RSTS_RASPBERRYPI_HALT 0x555 45 + 39 46 #define SECS_TO_WDOG_TICKS(x) ((x) << 16) 40 47 #define WDOG_TICKS_TO_SECS(x) ((x) >> 16) 41 48 ··· 158 151 * hard reset. 159 152 */ 160 153 val = readl_relaxed(wdt->base + PM_RSTS); 161 - val &= PM_RSTC_WRCFG_CLR; 162 - val |= PM_PASSWORD | PM_RSTS_HADWRH_SET; 154 + val |= PM_PASSWORD | PM_RSTS_RASPBERRYPI_HALT; 163 155 writel_relaxed(val, wdt->base + PM_RSTS); 164 156 165 157 /* Continue with normal reset mechanism */
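The `PM_RSTS_RASPBERRYPI_HALT` constant follows directly from the comment above: spreading the six bits of partition 63 across bits 0, 2, 4, 6, 8 and 10 yields 0x555. A sketch of the spreading rule:

```c
#include <assert.h>

/* Spread a 6-bit partition number into RSTS bits 0, 2, 4, 6, 8, 10,
 * as the Raspberry Pi firmware expects. Partition 63 -> 0x555. */
static unsigned int spread_partition(unsigned int part)
{
	unsigned int val = 0;
	int i;

	for (i = 0; i < 6; i++)
		if (part & (1u << i))
			val |= 1u << (2 * i);
	return val;
}
```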
+1
drivers/watchdog/gef_wdt.c
··· 303 303 }, 304 304 {}, 305 305 }; 306 + MODULE_DEVICE_TABLE(of, gef_wdt_ids); 306 307 307 308 static struct platform_driver gef_wdt_driver = { 308 309 .driver = {
+1
drivers/watchdog/mena21_wdt.c
··· 253 253 { .compatible = "men,a021-wdt" }, 254 254 { }, 255 255 }; 256 + MODULE_DEVICE_TABLE(of, a21_wdt_ids); 256 257 257 258 static struct platform_driver a21_wdt_driver = { 258 259 .probe = a21_wdt_probe,
+1
drivers/watchdog/moxart_wdt.c
··· 168 168 { .compatible = "moxa,moxart-watchdog" }, 169 169 { }, 170 170 }; 171 + MODULE_DEVICE_TABLE(of, moxart_watchdog_match); 171 172 172 173 static struct platform_driver moxart_wdt_driver = { 173 174 .probe = moxart_wdt_probe,
+51 -2
fs/cifs/cifsencrypt.c
··· 444 444 return 0; 445 445 } 446 446 447 + /* Server has provided av pairs/target info in the type 2 challenge 448 + * packet and we have plucked it and stored within smb session. 449 + * We parse that blob here to find the server given timestamp 450 + * as part of ntlmv2 authentication (or local current time as 451 + * default in case of failure) 452 + */ 453 + static __le64 454 + find_timestamp(struct cifs_ses *ses) 455 + { 456 + unsigned int attrsize; 457 + unsigned int type; 458 + unsigned int onesize = sizeof(struct ntlmssp2_name); 459 + unsigned char *blobptr; 460 + unsigned char *blobend; 461 + struct ntlmssp2_name *attrptr; 462 + 463 + if (!ses->auth_key.len || !ses->auth_key.response) 464 + return 0; 465 + 466 + blobptr = ses->auth_key.response; 467 + blobend = blobptr + ses->auth_key.len; 468 + 469 + while (blobptr + onesize < blobend) { 470 + attrptr = (struct ntlmssp2_name *) blobptr; 471 + type = le16_to_cpu(attrptr->type); 472 + if (type == NTLMSSP_AV_EOL) 473 + break; 474 + blobptr += 2; /* advance attr type */ 475 + attrsize = le16_to_cpu(attrptr->length); 476 + blobptr += 2; /* advance attr size */ 477 + if (blobptr + attrsize > blobend) 478 + break; 479 + if (type == NTLMSSP_AV_TIMESTAMP) { 480 + if (attrsize == sizeof(u64)) 481 + return *((__le64 *)blobptr); 482 + } 483 + blobptr += attrsize; /* advance attr value */ 484 + } 485 + 486 + return cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME)); 487 + } 488 + 447 489 static int calc_ntlmv2_hash(struct cifs_ses *ses, char *ntlmv2_hash, 448 490 const struct nls_table *nls_cp) 449 491 { ··· 683 641 struct ntlmv2_resp *ntlmv2; 684 642 char ntlmv2_hash[16]; 685 643 unsigned char *tiblob = NULL; /* target info blob */ 644 + __le64 rsp_timestamp; 686 645 687 646 if (ses->server->negflavor == CIFS_NEGFLAVOR_EXTENDED) { 688 647 if (!ses->domainName) { ··· 702 659 } 703 660 } 704 661 662 + /* Must be within 5 minutes of the server (or in range +/-2h 663 + * in case of Mac OS X), so simply carry over server 
timestamp 664 + * (as Windows 7 does) 665 + */ 666 + rsp_timestamp = find_timestamp(ses); 667 + 705 668 baselen = CIFS_SESS_KEY_SIZE + sizeof(struct ntlmv2_resp); 706 669 tilen = ses->auth_key.len; 707 670 tiblob = ses->auth_key.response; ··· 724 675 (ses->auth_key.response + CIFS_SESS_KEY_SIZE); 725 676 ntlmv2->blob_signature = cpu_to_le32(0x00000101); 726 677 ntlmv2->reserved = 0; 727 - /* Must be within 5 minutes of the server */ 728 - ntlmv2->time = cpu_to_le64(cifs_UnixTimeToNT(CURRENT_TIME)); 678 + ntlmv2->time = rsp_timestamp; 679 + 729 680 get_random_bytes(&ntlmv2->client_chal, sizeof(ntlmv2->client_chal)); 730 681 ntlmv2->reserved2 = 0; 731 682
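The new `find_timestamp()` is a type-length-value walk over the server's av-pair blob: each entry carries a 16-bit type and 16-bit length followed by the value, terminated by an EOL pair (type 0), and the scan must bail out on truncated entries. A standalone sketch of that walk (the protocol is little-endian; decoding byte-by-byte keeps the sketch endian-independent):

```c
#include <assert.h>

/* Return the offset of the value for av-pair type `want`, or -1 when
 * the pair is absent, the blob is truncated, or EOL is reached first. */
static int find_av_value(const unsigned char *blob, unsigned int len,
			 unsigned int want)
{
	unsigned int off = 0;

	while (off + 4 <= len) {
		unsigned int type = blob[off] | (blob[off + 1] << 8);
		unsigned int size = blob[off + 2] | (blob[off + 3] << 8);

		if (type == 0)			/* NTLMSSP_AV_EOL */
			break;
		off += 4;			/* skip type + length */
		if (off + size > len)		/* truncated pair: stop */
			break;
		if (type == want)
			return (int)off;
		off += size;			/* advance past the value */
	}
	return -1;
}

/* Type 7 (MsvAvTimestamp), length 8, an 8-byte value, then EOL. */
static const unsigned char demo_blob[] = {
	0x07, 0x00, 0x08, 0x00, 1, 2, 3, 4, 5, 6, 7, 8,
	0x00, 0x00, 0x00, 0x00,
};
```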
+6 -2
fs/cifs/smb2ops.c
··· 50 50 break; 51 51 default: 52 52 server->echoes = true; 53 - server->oplocks = true; 53 + if (enable_oplocks) { 54 + server->oplocks = true; 55 + server->oplock_credits = 1; 56 + } else 57 + server->oplocks = false; 58 + 54 59 server->echo_credits = 1; 55 - server->oplock_credits = 1; 56 60 } 57 61 server->credits -= server->echo_credits + server->oplock_credits; 58 62 return 0;
+69 -15
fs/cifs/smb2pdu.c
··· 46 46 #include "smb2status.h" 47 47 #include "smb2glob.h" 48 48 #include "cifspdu.h" 49 + #include "cifs_spnego.h" 49 50 50 51 /* 51 52 * The following table defines the expected "StructureSize" of SMB2 requests ··· 487 486 cifs_dbg(FYI, "missing security blob on negprot\n"); 488 487 489 488 rc = cifs_enable_signing(server, ses->sign); 490 - #ifdef CONFIG_SMB2_ASN1 /* BB REMOVEME when updated asn1.c ready */ 491 489 if (rc) 492 490 goto neg_exit; 493 - if (blob_length) 491 + if (blob_length) { 494 492 rc = decode_negTokenInit(security_blob, blob_length, server); 495 - if (rc == 1) 496 - rc = 0; 497 - else if (rc == 0) { 498 - rc = -EIO; 499 - goto neg_exit; 493 + if (rc == 1) 494 + rc = 0; 495 + else if (rc == 0) 496 + rc = -EIO; 500 497 } 501 - #endif 502 - 503 498 neg_exit: 504 499 free_rsp_buf(resp_buftype, rsp); 505 500 return rc; ··· 589 592 __le32 phase = NtLmNegotiate; /* NTLMSSP, if needed, is multistage */ 590 593 struct TCP_Server_Info *server = ses->server; 591 594 u16 blob_length = 0; 592 - char *security_blob; 595 + struct key *spnego_key = NULL; 596 + char *security_blob = NULL; 593 597 char *ntlmssp_blob = NULL; 594 598 bool use_spnego = false; /* else use raw ntlmssp */ 595 599 ··· 618 620 ses->ntlmssp->sesskey_per_smbsess = true; 619 621 620 622 /* FIXME: allow for other auth types besides NTLMSSP (e.g. 
krb5) */ 621 - ses->sectype = RawNTLMSSP; 623 + if (ses->sectype != Kerberos && ses->sectype != RawNTLMSSP) 624 + ses->sectype = RawNTLMSSP; 622 625 623 626 ssetup_ntlmssp_authenticate: 624 627 if (phase == NtLmChallenge) ··· 648 649 iov[0].iov_base = (char *)req; 649 650 /* 4 for rfc1002 length field and 1 for pad */ 650 651 iov[0].iov_len = get_rfc1002_length(req) + 4 - 1; 651 - if (phase == NtLmNegotiate) { 652 + 653 + if (ses->sectype == Kerberos) { 654 + #ifdef CONFIG_CIFS_UPCALL 655 + struct cifs_spnego_msg *msg; 656 + 657 + spnego_key = cifs_get_spnego_key(ses); 658 + if (IS_ERR(spnego_key)) { 659 + rc = PTR_ERR(spnego_key); 660 + spnego_key = NULL; 661 + goto ssetup_exit; 662 + } 663 + 664 + msg = spnego_key->payload.data; 665 + /* 666 + * check version field to make sure that cifs.upcall is 667 + * sending us a response in an expected form 668 + */ 669 + if (msg->version != CIFS_SPNEGO_UPCALL_VERSION) { 670 + cifs_dbg(VFS, 671 + "bad cifs.upcall version. Expected %d got %d", 672 + CIFS_SPNEGO_UPCALL_VERSION, msg->version); 673 + rc = -EKEYREJECTED; 674 + goto ssetup_exit; 675 + } 676 + ses->auth_key.response = kmemdup(msg->data, msg->sesskey_len, 677 + GFP_KERNEL); 678 + if (!ses->auth_key.response) { 679 + cifs_dbg(VFS, 680 + "Kerberos can't allocate (%u bytes) memory", 681 + msg->sesskey_len); 682 + rc = -ENOMEM; 683 + goto ssetup_exit; 684 + } 685 + ses->auth_key.len = msg->sesskey_len; 686 + blob_length = msg->secblob_len; 687 + iov[1].iov_base = msg->data + msg->sesskey_len; 688 + iov[1].iov_len = blob_length; 689 + #else 690 + rc = -EOPNOTSUPP; 691 + goto ssetup_exit; 692 + #endif /* CONFIG_CIFS_UPCALL */ 693 + } else if (phase == NtLmNegotiate) { /* if not krb5 must be ntlmssp */ 652 694 ntlmssp_blob = kmalloc(sizeof(struct _NEGOTIATE_MESSAGE), 653 695 GFP_KERNEL); 654 696 if (ntlmssp_blob == NULL) { ··· 712 672 /* with raw NTLMSSP we don't encapsulate in SPNEGO */ 713 673 security_blob = ntlmssp_blob; 714 674 } 675 + iov[1].iov_base = 
security_blob; 676 + iov[1].iov_len = blob_length; 715 677 } else if (phase == NtLmAuthenticate) { 716 678 req->hdr.SessionId = ses->Suid; 717 679 ntlmssp_blob = kzalloc(sizeof(struct _NEGOTIATE_MESSAGE) + 500, ··· 741 699 } else { 742 700 security_blob = ntlmssp_blob; 743 701 } 702 + iov[1].iov_base = security_blob; 703 + iov[1].iov_len = blob_length; 744 704 } else { 745 705 cifs_dbg(VFS, "illegal ntlmssp phase\n"); 746 706 rc = -EIO; ··· 754 710 cpu_to_le16(sizeof(struct smb2_sess_setup_req) - 755 711 1 /* pad */ - 4 /* rfc1001 len */); 756 712 req->SecurityBufferLength = cpu_to_le16(blob_length); 757 - iov[1].iov_base = security_blob; 758 - iov[1].iov_len = blob_length; 759 713 760 714 inc_rfc1001_len(req, blob_length - 1 /* pad */); 761 715 ··· 764 722 765 723 kfree(security_blob); 766 724 rsp = (struct smb2_sess_setup_rsp *)iov[0].iov_base; 725 + ses->Suid = rsp->hdr.SessionId; 767 726 if (resp_buftype != CIFS_NO_BUFFER && 768 727 rsp->hdr.Status == STATUS_MORE_PROCESSING_REQUIRED) { 769 728 if (phase != NtLmNegotiate) { ··· 782 739 /* NTLMSSP Negotiate sent now processing challenge (response) */ 783 740 phase = NtLmChallenge; /* process ntlmssp challenge */ 784 741 rc = 0; /* MORE_PROCESSING is not an error here but expected */ 785 - ses->Suid = rsp->hdr.SessionId; 786 742 rc = decode_ntlmssp_challenge(rsp->Buffer, 787 743 le16_to_cpu(rsp->SecurityBufferLength), ses); 788 744 } ··· 837 795 if (!server->sign) { 838 796 kfree(ses->auth_key.response); 839 797 ses->auth_key.response = NULL; 798 + } 799 + if (spnego_key) { 800 + key_invalidate(spnego_key); 801 + key_put(spnego_key); 840 802 } 841 803 kfree(ses->ntlmssp); 842 804 ··· 922 876 if (tcon && tcon->bad_network_name) 923 877 return -ENOENT; 924 878 879 + if ((tcon->seal) && 880 + ((ses->server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION) == 0)) { 881 + cifs_dbg(VFS, "encryption requested but no server support"); 882 + return -EOPNOTSUPP; 883 + } 884 + 925 885 unc_path = kmalloc(MAX_SHARENAME_LENGTH * 2, 
GFP_KERNEL); 926 886 if (unc_path == NULL) 927 887 return -ENOMEM; ··· 1007 955 ((tcon->share_flags & SHI1005_FLAGS_DFS) == 0)) 1008 956 cifs_dbg(VFS, "DFS capability contradicts DFS flag\n"); 1009 957 init_copy_chunk_defaults(tcon); 958 + if (tcon->share_flags & SHI1005_FLAGS_ENCRYPT_DATA) 959 + cifs_dbg(VFS, "Encrypted shares not supported"); 1010 960 if (tcon->ses->server->ops->validate_negotiate) 1011 961 rc = tcon->ses->server->ops->validate_negotiate(xid, tcon); 1012 962 tcon_exit:
+12 -1
fs/dax.c
··· 569 569 if (!buffer_size_valid(&bh) || bh.b_size < PMD_SIZE) 570 570 goto fallback; 571 571 572 + sector = bh.b_blocknr << (blkbits - 9); 573 + 572 574 if (buffer_unwritten(&bh) || buffer_new(&bh)) { 573 575 int i; 576 + 577 + length = bdev_direct_access(bh.b_bdev, sector, &kaddr, &pfn, 578 + bh.b_size); 579 + if (length < 0) { 580 + result = VM_FAULT_SIGBUS; 581 + goto out; 582 + } 583 + if ((length < PMD_SIZE) || (pfn & PG_PMD_COLOUR)) 584 + goto fallback; 585 + 574 586 for (i = 0; i < PTRS_PER_PMD; i++) 575 587 clear_pmem(kaddr + i * PAGE_SIZE, PAGE_SIZE); 576 588 wmb_pmem(); ··· 635 623 result = VM_FAULT_NOPAGE; 636 624 spin_unlock(ptl); 637 625 } else { 638 - sector = bh.b_blocknr << (blkbits - 9); 639 626 length = bdev_direct_access(bh.b_bdev, sector, &kaddr, &pfn, 640 627 bh.b_size); 641 628 if (length < 0) {
-3
fs/ubifs/xattr.c
··· 652 652 { 653 653 int err; 654 654 655 - mutex_lock(&inode->i_mutex); 656 655 err = security_inode_init_security(inode, dentry, qstr, 657 656 &init_xattrs, 0); 658 - mutex_unlock(&inode->i_mutex); 659 - 660 657 if (err) { 661 658 struct ubifs_info *c = dentry->i_sb->s_fs_info; 662 659 ubifs_err(c, "cannot initialize security for inode %lu, error %d",
+1
include/linux/acpi.h
··· 217 217 218 218 int acpi_pci_irq_enable (struct pci_dev *dev); 219 219 void acpi_penalize_isa_irq(int irq, int active); 220 + bool acpi_isa_irq_available(int irq); 220 221 void acpi_penalize_sci_irq(int irq, int trigger, int polarity); 221 222 void acpi_pci_irq_disable (struct pci_dev *dev); 222 223
+2 -2
include/linux/iova.h
··· 68 68 return iova >> iova_shift(iovad); 69 69 } 70 70 71 - int iommu_iova_cache_init(void); 72 - void iommu_iova_cache_destroy(void); 71 + int iova_cache_get(void); 72 + void iova_cache_put(void); 73 73 74 74 struct iova *alloc_iova_mem(void); 75 75 void free_iova_mem(struct iova *iova);
-1
include/linux/memcontrol.h
··· 242 242 * percpu counter. 243 243 */ 244 244 struct mem_cgroup_stat_cpu __percpu *stat; 245 - spinlock_t pcp_counter_lock; 246 245 247 246 #if defined(CONFIG_MEMCG_KMEM) && defined(CONFIG_INET) 248 247 struct cg_proto tcp_mem;
-11
include/linux/mlx5/device.h
··· 402 402 u8 rsvd[8]; 403 403 }; 404 404 405 - struct mlx5_cmd_query_special_contexts_mbox_in { 406 - struct mlx5_inbox_hdr hdr; 407 - u8 rsvd[8]; 408 - }; 409 - 410 - struct mlx5_cmd_query_special_contexts_mbox_out { 411 - struct mlx5_outbox_hdr hdr; 412 - __be32 dump_fill_mkey; 413 - __be32 resd_lkey; 414 - }; 415 - 416 405 struct mlx5_cmd_layout { 417 406 u8 type; 418 407 u8 rsvd0[3];
-1
include/linux/mlx5/driver.h
··· 845 845 int mlx5_register_interface(struct mlx5_interface *intf); 846 846 void mlx5_unregister_interface(struct mlx5_interface *intf); 847 847 int mlx5_core_query_vendor_id(struct mlx5_core_dev *mdev, u32 *vendor_id); 848 - int mlx5_core_query_special_context(struct mlx5_core_dev *dev, u32 *rsvd_lkey); 849 848 850 849 struct mlx5_profile { 851 850 u64 mask;
+21
include/linux/mm.h
··· 905 905 #endif 906 906 } 907 907 908 + #ifdef CONFIG_MEMCG 909 + static inline struct mem_cgroup *page_memcg(struct page *page) 910 + { 911 + return page->mem_cgroup; 912 + } 913 + 914 + static inline void set_page_memcg(struct page *page, struct mem_cgroup *memcg) 915 + { 916 + page->mem_cgroup = memcg; 917 + } 918 + #else 919 + static inline struct mem_cgroup *page_memcg(struct page *page) 920 + { 921 + return NULL; 922 + } 923 + 924 + static inline void set_page_memcg(struct page *page, struct mem_cgroup *memcg) 925 + { 926 + } 927 + #endif 928 + 908 929 /* 909 930 * Some inline functions in vmstat.h depend on page_zone() 910 931 */
+5 -6
include/linux/rcupdate.h
··· 230 230 struct rcu_synchronize *rs_array); 231 231 232 232 #define _wait_rcu_gp(checktiny, ...) \ 233 - do { \ 234 - call_rcu_func_t __crcu_array[] = { __VA_ARGS__ }; \ 235 - const int __n = ARRAY_SIZE(__crcu_array); \ 236 - struct rcu_synchronize __rs_array[__n]; \ 237 - \ 238 - __wait_rcu_gp(checktiny, __n, __crcu_array, __rs_array); \ 233 + do { \ 234 + call_rcu_func_t __crcu_array[] = { __VA_ARGS__ }; \ 235 + struct rcu_synchronize __rs_array[ARRAY_SIZE(__crcu_array)]; \ 236 + __wait_rcu_gp(checktiny, ARRAY_SIZE(__crcu_array), \ 237 + __crcu_array, __rs_array); \ 239 238 } while (0) 240 239 241 240 #define wait_rcu_gp(...) _wait_rcu_gp(false, __VA_ARGS__)
+1 -1
include/linux/skbuff.h
··· 2708 2708 if (skb->ip_summed == CHECKSUM_COMPLETE) 2709 2709 skb->csum = csum_sub(skb->csum, csum_partial(start, len, 0)); 2710 2710 else if (skb->ip_summed == CHECKSUM_PARTIAL && 2711 - skb_checksum_start_offset(skb) <= len) 2711 + skb_checksum_start_offset(skb) < 0) 2712 2712 skb->ip_summed = CHECKSUM_NONE; 2713 2713 } 2714 2714
+5 -1
include/net/af_unix.h
··· 63 63 #define UNIX_GC_MAYBE_CYCLE 1 64 64 struct socket_wq peer_wq; 65 65 }; 66 - #define unix_sk(__sk) ((struct unix_sock *)__sk) 66 + 67 + static inline struct unix_sock *unix_sk(struct sock *sk) 68 + { 69 + return (struct unix_sock *)sk; 70 + } 67 71 68 72 #define peer_wait peer_wq.wait 69 73
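The af_unix change above replaces a bare cast-through macro with a `static inline`, so the compiler type-checks the argument instead of silently casting any pointer. A minimal user-space sketch of the same pattern — the `base`/`derived` names are hypothetical stand-ins for `struct sock` / `struct unix_sock`, not kernel types:

```c
#include <assert.h>

/* Hypothetical stand-ins for struct sock / struct unix_sock. */
struct base { int refcount; };
struct derived { struct base b; int extra; }; /* base must be first member */

/* Macro version: accepts any pointer type without complaint. */
#define derived_of_macro(p) ((struct derived *)(p))

/* Inline version: the compiler rejects or warns on a wrong pointer type. */
static inline struct derived *derived_of(struct base *p)
{
	return (struct derived *)p;
}
```

The cast is valid because the base struct is the first member, so both pointers share an address; the inline form merely makes misuse visible at compile time.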
+1
include/target/target_core_base.h
··· 730 730 #define DF_EMULATED_VPD_UNIT_SERIAL 0x00000004 731 731 #define DF_USING_UDEV_PATH 0x00000008 732 732 #define DF_USING_ALIAS 0x00000010 733 + #define DF_READ_ONLY 0x00000020 733 734 /* Physical device queue depth */ 734 735 u32 queue_depth; 735 736 /* Used for SPC-2 reservations enforce of ISIDs */
-2
include/uapi/linux/userfaultfd.h
··· 11 11 12 12 #include <linux/types.h> 13 13 14 - #include <linux/compiler.h> 15 - 16 14 #define UFFD_API ((__u64)0xAA) 17 15 /* 18 16 * After implementing the respective features it will become:
+7 -7
ipc/msg.c
··· 137 137 return retval; 138 138 } 139 139 140 - /* ipc_addid() locks msq upon success. */ 141 - id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni); 142 - if (id < 0) { 143 - ipc_rcu_putref(msq, msg_rcu_free); 144 - return id; 145 - } 146 - 147 140 msq->q_stime = msq->q_rtime = 0; 148 141 msq->q_ctime = get_seconds(); 149 142 msq->q_cbytes = msq->q_qnum = 0; ··· 145 152 INIT_LIST_HEAD(&msq->q_messages); 146 153 INIT_LIST_HEAD(&msq->q_receivers); 147 154 INIT_LIST_HEAD(&msq->q_senders); 155 + 156 + /* ipc_addid() locks msq upon success. */ 157 + id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni); 158 + if (id < 0) { 159 + ipc_rcu_putref(msq, msg_rcu_free); 160 + return id; 161 + } 148 162 149 163 ipc_unlock_object(&msq->q_perm); 150 164 rcu_read_unlock();
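The ipc/msg.c reordering (mirrored in ipc/shm.c below) moves `ipc_addid()` after all field initialization, because `ipc_addid()` publishes the object where concurrent lookups can reach it. A simplified single-threaded sketch of the initialize-then-publish pattern — the `registry` and `msg_queue` here are hypothetical, not the kernel's IPC machinery:

```c
#include <assert.h>
#include <stddef.h>

struct msg_queue { int ready; long bytes; };

static struct msg_queue *registry[16];

/* Stand-in for ipc_addid(): makes the object globally reachable. */
static int registry_add(struct msg_queue *q)
{
	for (int i = 0; i < 16; i++) {
		if (!registry[i]) {
			registry[i] = q;
			return i;
		}
	}
	return -1;
}

static int msg_queue_create(struct msg_queue *q)
{
	/* 1. Fully initialize every field first ... */
	q->bytes = 0;
	q->ready = 1;
	/* 2. ... and only then publish; a lookup racing with creation
	 * can never observe a half-initialized queue. */
	return registry_add(q);
}
```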
+7 -6
ipc/shm.c
··· 551 551 if (IS_ERR(file)) 552 552 goto no_file; 553 553 554 - id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni); 555 - if (id < 0) { 556 - error = id; 557 - goto no_id; 558 - } 559 - 560 554 shp->shm_cprid = task_tgid_vnr(current); 561 555 shp->shm_lprid = 0; 562 556 shp->shm_atim = shp->shm_dtim = 0; ··· 559 565 shp->shm_nattch = 0; 560 566 shp->shm_file = file; 561 567 shp->shm_creator = current; 568 + 569 + id = ipc_addid(&shm_ids(ns), &shp->shm_perm, ns->shm_ctlmni); 570 + if (id < 0) { 571 + error = id; 572 + goto no_id; 573 + } 574 + 562 575 list_add(&shp->shm_clist, &current->sysvshm.shm_clist); 563 576 564 577 /*
+4 -4
ipc/util.c
··· 237 237 rcu_read_lock(); 238 238 spin_lock(&new->lock); 239 239 240 + current_euid_egid(&euid, &egid); 241 + new->cuid = new->uid = euid; 242 + new->gid = new->cgid = egid; 243 + 240 244 id = idr_alloc(&ids->ipcs_idr, new, 241 245 (next_id < 0) ? 0 : ipcid_to_idx(next_id), 0, 242 246 GFP_NOWAIT); ··· 252 248 } 253 249 254 250 ids->in_use++; 255 - 256 - current_euid_egid(&euid, &egid); 257 - new->cuid = new->uid = euid; 258 - new->gid = new->cgid = egid; 259 251 260 252 if (next_id < 0) { 261 253 new->seq = ids->seq++;
+81 -33
kernel/events/core.c
··· 1243 1243 PERF_EVENT_STATE_INACTIVE; 1244 1244 } 1245 1245 1246 - /* 1247 - * Called at perf_event creation and when events are attached/detached from a 1248 - * group. 1249 - */ 1250 - static void perf_event__read_size(struct perf_event *event) 1246 + static void __perf_event_read_size(struct perf_event *event, int nr_siblings) 1251 1247 { 1252 1248 int entry = sizeof(u64); /* value */ 1253 1249 int size = 0; ··· 1259 1263 entry += sizeof(u64); 1260 1264 1261 1265 if (event->attr.read_format & PERF_FORMAT_GROUP) { 1262 - nr += event->group_leader->nr_siblings; 1266 + nr += nr_siblings; 1263 1267 size += sizeof(u64); 1264 1268 } 1265 1269 ··· 1267 1271 event->read_size = size; 1268 1272 } 1269 1273 1270 - static void perf_event__header_size(struct perf_event *event) 1274 + static void __perf_event_header_size(struct perf_event *event, u64 sample_type) 1271 1275 { 1272 1276 struct perf_sample_data *data; 1273 - u64 sample_type = event->attr.sample_type; 1274 1277 u16 size = 0; 1275 - 1276 - perf_event__read_size(event); 1277 1278 1278 1279 if (sample_type & PERF_SAMPLE_IP) 1279 1280 size += sizeof(data->ip); ··· 1294 1301 size += sizeof(data->txn); 1295 1302 1296 1303 event->header_size = size; 1304 + } 1305 + 1306 + /* 1307 + * Called at perf_event creation and when events are attached/detached from a 1308 + * group. 1309 + */ 1310 + static void perf_event__header_size(struct perf_event *event) 1311 + { 1312 + __perf_event_read_size(event, 1313 + event->group_leader->nr_siblings); 1314 + __perf_event_header_size(event, event->attr.sample_type); 1297 1315 } 1298 1316 1299 1317 static void perf_event__id_header_size(struct perf_event *event) ··· 1332 1328 size += sizeof(data->cpu_entry); 1333 1329 1334 1330 event->id_header_size = size; 1331 + } 1332 + 1333 + static bool perf_event_validate_size(struct perf_event *event) 1334 + { 1335 + /* 1336 + * The values computed here will be over-written when we actually 1337 + * attach the event. 
1338 + */ 1339 + __perf_event_read_size(event, event->group_leader->nr_siblings + 1); 1340 + __perf_event_header_size(event, event->attr.sample_type & ~PERF_SAMPLE_READ); 1341 + perf_event__id_header_size(event); 1342 + 1343 + /* 1344 + * Sum the lot; should not exceed the 64k limit we have on records. 1345 + * Conservative limit to allow for callchains and other variable fields. 1346 + */ 1347 + if (event->read_size + event->header_size + 1348 + event->id_header_size + sizeof(struct perf_event_header) >= 16*1024) 1349 + return false; 1350 + 1351 + return true; 1335 1352 } 1336 1353 1337 1354 static void perf_group_attach(struct perf_event *event) ··· 8322 8297 8323 8298 if (move_group) { 8324 8299 gctx = group_leader->ctx; 8300 + mutex_lock_double(&gctx->mutex, &ctx->mutex); 8301 + } else { 8302 + mutex_lock(&ctx->mutex); 8303 + } 8325 8304 8305 + if (!perf_event_validate_size(event)) { 8306 + err = -E2BIG; 8307 + goto err_locked; 8308 + } 8309 + 8310 + /* 8311 + * Must be under the same ctx::mutex as perf_install_in_context(), 8312 + * because we need to serialize with concurrent event creation. 8313 + */ 8314 + if (!exclusive_event_installable(event, ctx)) { 8315 + /* exclusive and group stuff are assumed mutually exclusive */ 8316 + WARN_ON_ONCE(move_group); 8317 + 8318 + err = -EBUSY; 8319 + goto err_locked; 8320 + } 8321 + 8322 + WARN_ON_ONCE(ctx->parent_ctx); 8323 + 8324 + if (move_group) { 8326 8325 /* 8327 8326 * See perf_event_ctx_lock() for comments on the details 8328 8327 * of swizzling perf_event::ctx. 
8329 8328 */ 8330 - mutex_lock_double(&gctx->mutex, &ctx->mutex); 8331 - 8332 8329 perf_remove_from_context(group_leader, false); 8333 8330 8334 8331 list_for_each_entry(sibling, &group_leader->sibling_list, ··· 8358 8311 perf_remove_from_context(sibling, false); 8359 8312 put_ctx(gctx); 8360 8313 } 8361 - } else { 8362 - mutex_lock(&ctx->mutex); 8363 - } 8364 8314 8365 - WARN_ON_ONCE(ctx->parent_ctx); 8366 - 8367 - if (move_group) { 8368 8315 /* 8369 8316 * Wait for everybody to stop referencing the events through 8370 8317 * the old lists, before installing it on new lists. ··· 8390 8349 perf_event__state_init(group_leader); 8391 8350 perf_install_in_context(ctx, group_leader, group_leader->cpu); 8392 8351 get_ctx(ctx); 8352 + 8353 + /* 8354 + * Now that all events are installed in @ctx, nothing 8355 + * references @gctx anymore, so drop the last reference we have 8356 + * on it. 8357 + */ 8358 + put_ctx(gctx); 8393 8359 } 8394 8360 8395 - if (!exclusive_event_installable(event, ctx)) { 8396 - err = -EBUSY; 8397 - mutex_unlock(&ctx->mutex); 8398 - fput(event_file); 8399 - goto err_context; 8400 - } 8361 + /* 8362 + * Precalculate sample_data sizes; do while holding ctx::mutex such 8363 + * that we're serialized against further additions and before 8364 + * perf_install_in_context() which is the point the event is active and 8365 + * can use these values. 
8366 + */ 8367 + perf_event__header_size(event); 8368 + perf_event__id_header_size(event); 8401 8369 8402 8370 perf_install_in_context(ctx, event, event->cpu); 8403 8371 perf_unpin_context(ctx); 8404 8372 8405 - if (move_group) { 8373 + if (move_group) 8406 8374 mutex_unlock(&gctx->mutex); 8407 - put_ctx(gctx); 8408 - } 8409 8375 mutex_unlock(&ctx->mutex); 8410 8376 8411 8377 put_online_cpus(); ··· 8424 8376 mutex_unlock(&current->perf_event_mutex); 8425 8377 8426 8378 /* 8427 - * Precalculate sample_data sizes 8428 - */ 8429 - perf_event__header_size(event); 8430 - perf_event__id_header_size(event); 8431 - 8432 - /* 8433 8379 * Drop the reference on the group_event after placing the 8434 8380 * new event on the sibling_list. This ensures destruction 8435 8381 * of the group leader will find the pointer to itself in ··· 8433 8391 fd_install(event_fd, event_file); 8434 8392 return event_fd; 8435 8393 8394 + err_locked: 8395 + if (move_group) 8396 + mutex_unlock(&gctx->mutex); 8397 + mutex_unlock(&ctx->mutex); 8398 + /* err_file: */ 8399 + fput(event_file); 8436 8400 err_context: 8437 8401 perf_unpin_context(ctx); 8438 8402 put_ctx(ctx);
+5 -5
kernel/locking/lockdep.c
··· 3068 3068 static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass, 3069 3069 int trylock, int read, int check, int hardirqs_off, 3070 3070 struct lockdep_map *nest_lock, unsigned long ip, 3071 - int references) 3071 + int references, int pin_count) 3072 3072 { 3073 3073 struct task_struct *curr = current; 3074 3074 struct lock_class *class = NULL; ··· 3157 3157 hlock->waittime_stamp = 0; 3158 3158 hlock->holdtime_stamp = lockstat_clock(); 3159 3159 #endif 3160 - hlock->pin_count = 0; 3160 + hlock->pin_count = pin_count; 3161 3161 3162 3162 if (check && !mark_irqflags(curr, hlock)) 3163 3163 return 0; ··· 3343 3343 hlock_class(hlock)->subclass, hlock->trylock, 3344 3344 hlock->read, hlock->check, hlock->hardirqs_off, 3345 3345 hlock->nest_lock, hlock->acquire_ip, 3346 - hlock->references)) 3346 + hlock->references, hlock->pin_count)) 3347 3347 return 0; 3348 3348 } 3349 3349 ··· 3433 3433 hlock_class(hlock)->subclass, hlock->trylock, 3434 3434 hlock->read, hlock->check, hlock->hardirqs_off, 3435 3435 hlock->nest_lock, hlock->acquire_ip, 3436 - hlock->references)) 3436 + hlock->references, hlock->pin_count)) 3437 3437 return 0; 3438 3438 } 3439 3439 ··· 3583 3583 current->lockdep_recursion = 1; 3584 3584 trace_lock_acquire(lock, subclass, trylock, read, check, nest_lock, ip); 3585 3585 __lock_acquire(lock, subclass, trylock, read, check, 3586 - irqs_disabled_flags(flags), nest_lock, ip, 0); 3586 + irqs_disabled_flags(flags), nest_lock, ip, 0, 0); 3587 3587 current->lockdep_recursion = 0; 3588 3588 raw_local_irq_restore(flags); 3589 3589 }
+5
kernel/rcu/tree.c
··· 3868 3868 static void __init 3869 3869 rcu_boot_init_percpu_data(int cpu, struct rcu_state *rsp) 3870 3870 { 3871 + static struct lock_class_key rcu_exp_sched_rdp_class; 3871 3872 unsigned long flags; 3872 3873 struct rcu_data *rdp = per_cpu_ptr(rsp->rda, cpu); 3873 3874 struct rcu_node *rnp = rcu_get_root(rsp); ··· 3884 3883 mutex_init(&rdp->exp_funnel_mutex); 3885 3884 rcu_boot_init_nocb_percpu_data(rdp); 3886 3885 raw_spin_unlock_irqrestore(&rnp->lock, flags); 3886 + if (rsp == &rcu_sched_state) 3887 + lockdep_set_class_and_name(&rdp->exp_funnel_mutex, 3888 + &rcu_exp_sched_rdp_class, 3889 + "rcu_data_exp_sched"); 3887 3890 } 3888 3891 3889 3892 /*
+11 -3
kernel/sched/core.c
··· 4934 4934 idle->state = TASK_RUNNING; 4935 4935 idle->se.exec_start = sched_clock(); 4936 4936 4937 - do_set_cpus_allowed(idle, cpumask_of(cpu)); 4937 + #ifdef CONFIG_SMP 4938 + /* 4939 + * Its possible that init_idle() gets called multiple times on a task, 4940 + * in that case do_set_cpus_allowed() will not do the right thing. 4941 + * 4942 + * And since this is boot we can forgo the serialization. 4943 + */ 4944 + set_cpus_allowed_common(idle, cpumask_of(cpu)); 4945 + #endif 4938 4946 /* 4939 4947 * We're having a chicken and egg problem, even though we are 4940 4948 * holding rq->lock, the cpu isn't yet set to this cpu so the ··· 4959 4951 4960 4952 rq->curr = rq->idle = idle; 4961 4953 idle->on_rq = TASK_ON_RQ_QUEUED; 4962 - #if defined(CONFIG_SMP) 4954 + #ifdef CONFIG_SMP 4963 4955 idle->on_cpu = 1; 4964 4956 #endif 4965 4957 raw_spin_unlock(&rq->lock); ··· 4974 4966 idle->sched_class = &idle_sched_class; 4975 4967 ftrace_graph_init_idle_task(idle, cpu); 4976 4968 vtime_init_idle(idle, cpu); 4977 - #if defined(CONFIG_SMP) 4969 + #ifdef CONFIG_SMP 4978 4970 sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu); 4979 4971 #endif 4980 4972 }
+1 -1
mm/dmapool.c
··· 394 394 list_for_each_entry(page, &pool->page_list, page_list) { 395 395 if (dma < page->dma) 396 396 continue; 397 - if (dma < (page->dma + pool->allocation)) 397 + if ((dma - page->dma) < pool->allocation) 398 398 return page; 399 399 } 400 400 return NULL;
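The dmapool fix rewrites `dma < page->dma + pool->allocation` as `(dma - page->dma) < pool->allocation`: the former can wrap when the base address sits near the top of the address space, while unsigned subtraction is modular, so the rewritten compare cannot overflow. A small illustration with `uint64_t` addresses:

```c
#include <assert.h>
#include <stdint.h>

/* Overflow-prone: base + size may wrap to a small value. */
static int in_range_bad(uint64_t addr, uint64_t base, uint64_t size)
{
	return addr >= base && addr < base + size;
}

/* Wrap-safe: unsigned subtraction is modular; if addr < base, the
 * difference wraps to a huge value and fails the size compare. */
static int in_range_ok(uint64_t addr, uint64_t base, uint64_t size)
{
	return (addr - base) < size;
}
```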
+8
mm/hugetlb.c
··· 3202 3202 continue; 3203 3203 3204 3204 /* 3205 + * Shared VMAs have their own reserves and do not affect 3206 + * MAP_PRIVATE accounting but it is possible that a shared 3207 + * VMA is using the same page so check and skip such VMAs. 3208 + */ 3209 + if (iter_vma->vm_flags & VM_MAYSHARE) 3210 + continue; 3211 + 3212 + /* 3205 3213 * Unmap the page from other VMAs without their own reserves. 3206 3214 * They get marked to be SIGKILLed if they fault in these 3207 3215 * areas. This is because a future no-page fault on this VMA
+18 -13
mm/memcontrol.c
··· 644 644 } 645 645 646 646 /* 647 + * Return page count for single (non recursive) @memcg. 648 + * 647 649 * Implementation Note: reading percpu statistics for memcg. 648 650 * 649 651 * Both of vmstat[] and percpu_counter has threshold and do periodic 650 652 * synchronization to implement "quick" read. There are trade-off between 651 653 * reading cost and precision of value. Then, we may have a chance to implement 652 - * a periodic synchronizion of counter in memcg's counter. 654 + * a periodic synchronization of counter in memcg's counter. 653 655 * 654 656 * But this _read() function is used for user interface now. The user accounts 655 657 * memory usage by memory cgroup and he _always_ requires exact value because ··· 661 659 * 662 660 * If there are kernel internal actions which can make use of some not-exact 663 661 * value, and reading all cpu value can be performance bottleneck in some 664 - * common workload, threashold and synchonization as vmstat[] should be 662 + * common workload, threshold and synchronization as vmstat[] should be 665 663 * implemented. 666 664 */ 667 - static long mem_cgroup_read_stat(struct mem_cgroup *memcg, 668 - enum mem_cgroup_stat_index idx) 665 + static unsigned long 666 + mem_cgroup_read_stat(struct mem_cgroup *memcg, enum mem_cgroup_stat_index idx) 669 667 { 670 668 long val = 0; 671 669 int cpu; 672 670 671 + /* Per-cpu values can be negative, use a signed accumulator */ 673 672 for_each_possible_cpu(cpu) 674 673 val += per_cpu(memcg->stat->count[idx], cpu); 674 + /* 675 + * Summing races with updates, so val may be negative. Avoid exposing 676 + * transient negative values. 
677 + */ 678 + if (val < 0) 679 + val = 0; 675 680 return val; 676 681 } 677 682 ··· 1263 1254 for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { 1264 1255 if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) 1265 1256 continue; 1266 - pr_cont(" %s:%ldKB", mem_cgroup_stat_names[i], 1257 + pr_cont(" %s:%luKB", mem_cgroup_stat_names[i], 1267 1258 K(mem_cgroup_read_stat(iter, i))); 1268 1259 } 1269 1260 ··· 2828 2819 enum mem_cgroup_stat_index idx) 2829 2820 { 2830 2821 struct mem_cgroup *iter; 2831 - long val = 0; 2822 + unsigned long val = 0; 2832 2823 2833 - /* Per-cpu values can be negative, use a signed accumulator */ 2834 2824 for_each_mem_cgroup_tree(iter, memcg) 2835 2825 val += mem_cgroup_read_stat(iter, idx); 2836 2826 2837 - if (val < 0) /* race ? */ 2838 - val = 0; 2839 2827 return val; 2840 2828 } 2841 2829 ··· 3175 3169 for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { 3176 3170 if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) 3177 3171 continue; 3178 - seq_printf(m, "%s %ld\n", mem_cgroup_stat_names[i], 3172 + seq_printf(m, "%s %lu\n", mem_cgroup_stat_names[i], 3179 3173 mem_cgroup_read_stat(memcg, i) * PAGE_SIZE); 3180 3174 } 3181 3175 ··· 3200 3194 (u64)memsw * PAGE_SIZE); 3201 3195 3202 3196 for (i = 0; i < MEM_CGROUP_STAT_NSTATS; i++) { 3203 - long long val = 0; 3197 + unsigned long long val = 0; 3204 3198 3205 3199 if (i == MEM_CGROUP_STAT_SWAP && !do_swap_account) 3206 3200 continue; 3207 3201 for_each_mem_cgroup_tree(mi, memcg) 3208 3202 val += mem_cgroup_read_stat(mi, i) * PAGE_SIZE; 3209 - seq_printf(m, "total_%s %lld\n", mem_cgroup_stat_names[i], val); 3203 + seq_printf(m, "total_%s %llu\n", mem_cgroup_stat_names[i], val); 3210 3204 } 3211 3205 3212 3206 for (i = 0; i < MEM_CGROUP_EVENTS_NSTATS; i++) { ··· 4185 4179 if (memcg_wb_domain_init(memcg, GFP_KERNEL)) 4186 4180 goto out_free_stat; 4187 4181 4188 - spin_lock_init(&memcg->pcp_counter_lock); 4189 4182 return memcg; 4190 4183 4191 4184 out_free_stat:
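The memcontrol change sums the per-cpu counters in a signed accumulator and clamps at zero before returning an `unsigned long`: individual per-cpu deltas can be negative (a page charged on one CPU, uncharged on another), and a sum racing with updates can be transiently negative even though the true count never is. A user-space sketch of the idea — the per-cpu array here is a plain array, not the kernel's percpu machinery:

```c
#include <assert.h>

#define NR_CPUS 4

/* Per-cpu deltas: entries may individually be negative. */
static long percpu_count[NR_CPUS];

/* Sum with a signed accumulator, clamp before exposing to readers. */
static unsigned long read_stat(void)
{
	long val = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		val += percpu_count[cpu];
	/* A racing update can make the sum transiently negative;
	 * never report that to userspace. */
	if (val < 0)
		val = 0;
	return (unsigned long)val;
}
```

Without the clamp, assigning a negative `long` to `unsigned long` would report an enormous bogus count.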
+11 -1
mm/migrate.c
··· 740 740 if (PageSwapBacked(page)) 741 741 SetPageSwapBacked(newpage); 742 742 743 + /* 744 + * Indirectly called below, migrate_page_copy() copies PG_dirty and thus 745 + * needs newpage's memcg set to transfer memcg dirty page accounting. 746 + * So perform memcg migration in two steps: 747 + * 1. set newpage->mem_cgroup (here) 748 + * 2. clear page->mem_cgroup (below) 749 + */ 750 + set_page_memcg(newpage, page_memcg(page)); 751 + 743 752 mapping = page_mapping(page); 744 753 if (!mapping) 745 754 rc = migrate_page(mapping, newpage, page, mode); ··· 765 756 rc = fallback_migrate_page(mapping, newpage, page, mode); 766 757 767 758 if (rc != MIGRATEPAGE_SUCCESS) { 759 + set_page_memcg(newpage, NULL); 768 760 newpage->mapping = NULL; 769 761 } else { 770 - mem_cgroup_migrate(page, newpage, false); 762 + set_page_memcg(page, NULL); 771 763 if (page_was_mapped) 772 764 remove_migration_ptes(page, newpage); 773 765 page->mapping = NULL;
+10 -3
mm/slab.c
··· 2190 2190 size += BYTES_PER_WORD; 2191 2191 } 2192 2192 #if FORCED_DEBUG && defined(CONFIG_DEBUG_PAGEALLOC) 2193 - if (size >= kmalloc_size(INDEX_NODE + 1) 2194 - && cachep->object_size > cache_line_size() 2195 - && ALIGN(size, cachep->align) < PAGE_SIZE) { 2193 + /* 2194 + * To activate debug pagealloc, off-slab management is necessary 2195 + * requirement. In early phase of initialization, small sized slab 2196 + * doesn't get initialized so it would not be possible. So, we need 2197 + * to check size >= 256. It guarantees that all necessary small 2198 + * sized slab is initialized in current slab initialization sequence. 2199 + */ 2200 + if (!slab_early_init && size >= kmalloc_size(INDEX_NODE) && 2201 + size >= 256 && cachep->object_size > cache_line_size() && 2202 + ALIGN(size, cachep->align) < PAGE_SIZE) { 2196 2203 cachep->obj_offset += PAGE_SIZE - ALIGN(size, cachep->align); 2197 2204 size = PAGE_SIZE; 2198 2205 }
+1 -2
net/core/net-sysfs.c
··· 31 31 static const char fmt_hex[] = "%#x\n"; 32 32 static const char fmt_long_hex[] = "%#lx\n"; 33 33 static const char fmt_dec[] = "%d\n"; 34 - static const char fmt_udec[] = "%u\n"; 35 34 static const char fmt_ulong[] = "%lu\n"; 36 35 static const char fmt_u64[] = "%llu\n"; 37 36 ··· 201 202 if (netif_running(netdev)) { 202 203 struct ethtool_cmd cmd; 203 204 if (!__ethtool_get_settings(netdev, &cmd)) 204 - ret = sprintf(buf, fmt_udec, ethtool_cmd_speed(&cmd)); 205 + ret = sprintf(buf, fmt_dec, ethtool_cmd_speed(&cmd)); 205 206 } 206 207 rtnl_unlock(); 207 208 return ret;
+5 -4
net/core/skbuff.c
··· 2958 2958 */ 2959 2959 unsigned char *skb_pull_rcsum(struct sk_buff *skb, unsigned int len) 2960 2960 { 2961 + unsigned char *data = skb->data; 2962 + 2961 2963 BUG_ON(len > skb->len); 2962 - skb->len -= len; 2963 - BUG_ON(skb->len < skb->data_len); 2964 - skb_postpull_rcsum(skb, skb->data, len); 2965 - return skb->data += len; 2964 + __skb_pull(skb, len); 2965 + skb_postpull_rcsum(skb, data, len); 2966 + return skb->data; 2966 2967 } 2967 2968 EXPORT_SYMBOL_GPL(skb_pull_rcsum); 2968 2969
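The skb_pull_rcsum fix captures `skb->data` before `__skb_pull()` advances it, so the checksum adjustment runs over the bytes that were actually removed rather than the bytes that remain. A simplified sketch of that ordering — `struct buf` and the byte-sum are hypothetical stand-ins, not sk_buff or the real Internet checksum:

```c
#include <assert.h>
#include <stddef.h>

struct buf { unsigned char *data; size_t len; unsigned sum; };

static unsigned sum_bytes(const unsigned char *p, size_t n)
{
	unsigned s = 0;

	while (n--)
		s += *p++;
	return s;
}

/* Pull n bytes off the front and subtract their contribution from the
 * running sum. The start pointer must be captured BEFORE data moves. */
static unsigned char *pull_and_fixup(struct buf *b, size_t n)
{
	unsigned char *start = b->data; /* capture first */

	b->data += n;
	b->len -= n;
	b->sum -= sum_bytes(start, n);  /* sum over the removed region */
	return b->data;
}
```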
+1
net/ipv4/fib_frontend.c
··· 340 340 fl4.flowi4_tos = tos; 341 341 fl4.flowi4_scope = RT_SCOPE_UNIVERSE; 342 342 fl4.flowi4_tun_key.tun_id = 0; 343 + fl4.flowi4_flags = 0; 343 344 344 345 no_addr = idev->ifa_list == NULL; 345 346
+1
net/ipv4/route.c
··· 1743 1743 fl4.flowi4_mark = skb->mark; 1744 1744 fl4.flowi4_tos = tos; 1745 1745 fl4.flowi4_scope = RT_SCOPE_UNIVERSE; 1746 + fl4.flowi4_flags = 0; 1746 1747 fl4.daddr = daddr; 1747 1748 fl4.saddr = saddr; 1748 1749 err = fib_lookup(net, &fl4, &res, 0);
+2 -1
net/ipv6/route.c
··· 1169 1169 1170 1170 fl6->flowi6_iif = LOOPBACK_IFINDEX; 1171 1171 1172 - if ((sk && sk->sk_bound_dev_if) || rt6_need_strict(&fl6->daddr)) 1172 + if ((sk && sk->sk_bound_dev_if) || rt6_need_strict(&fl6->daddr) || 1173 + fl6->flowi6_oif) 1173 1174 flags |= RT6_LOOKUP_F_IFACE; 1174 1175 1175 1176 if (!ipv6_addr_any(&fl6->saddr))
+9 -2
net/l2tp/l2tp_core.c
··· 1319 1319 tunnel = container_of(work, struct l2tp_tunnel, del_work); 1320 1320 sk = l2tp_tunnel_sock_lookup(tunnel); 1321 1321 if (!sk) 1322 - return; 1322 + goto out; 1323 1323 1324 1324 sock = sk->sk_socket; 1325 1325 ··· 1341 1341 } 1342 1342 1343 1343 l2tp_tunnel_sock_put(sk); 1344 + out: 1345 + l2tp_tunnel_dec_refcount(tunnel); 1344 1346 } 1345 1347 1346 1348 /* Create a socket for the tunnel, if one isn't set up by ··· 1638 1636 */ 1639 1637 int l2tp_tunnel_delete(struct l2tp_tunnel *tunnel) 1640 1638 { 1639 + l2tp_tunnel_inc_refcount(tunnel); 1641 1640 l2tp_tunnel_closeall(tunnel); 1642 - return (false == queue_work(l2tp_wq, &tunnel->del_work)); 1641 + if (false == queue_work(l2tp_wq, &tunnel->del_work)) { 1642 + l2tp_tunnel_dec_refcount(tunnel); 1643 + return 1; 1644 + } 1645 + return 0; 1643 1646 } 1644 1647 EXPORT_SYMBOL_GPL(l2tp_tunnel_delete); 1645 1648
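The l2tp change pins the tunnel with a reference before queueing the deferred delete, and drops that reference either when the work runs or immediately when `queue_work()` reports the item was already pending, so the work item can never outlive the object it points at. A minimal user-space sketch of the take-before-queue / drop-on-failure pattern — plain counters stand in for kernel refcounts, and the queue outcome is passed in rather than coming from a real workqueue:

```c
#include <assert.h>

struct obj { int refs; int freed; };

static void obj_get(struct obj *o) { o->refs++; }

static void obj_put(struct obj *o)
{
	if (--o->refs == 0)
		o->freed = 1; /* stand-in for the actual free */
}

/* queue_succeeds simulates queue_work()'s return value. */
static int obj_delete(struct obj *o, int queue_succeeds)
{
	obj_get(o);          /* pin the object for the work item */
	if (!queue_succeeds) {
		obj_put(o);  /* never queued: drop the pin right away */
		return 1;
	}
	return 0;            /* the work handler drops the pin later */
}
```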
+11 -9
net/sctp/associola.c
··· 1208 1208 * within this document. 1209 1209 * 1210 1210 * Our basic strategy is to round-robin transports in priorities 1211 - * according to sctp_state_prio_map[] e.g., if no such 1211 + * according to sctp_trans_score() e.g., if no such 1212 1212 * transport with state SCTP_ACTIVE exists, round-robin through 1213 1213 * SCTP_UNKNOWN, etc. You get the picture. 1214 1214 */ 1215 - static const u8 sctp_trans_state_to_prio_map[] = { 1216 - [SCTP_ACTIVE] = 3, /* best case */ 1217 - [SCTP_UNKNOWN] = 2, 1218 - [SCTP_PF] = 1, 1219 - [SCTP_INACTIVE] = 0, /* worst case */ 1220 - }; 1221 - 1222 1215 static u8 sctp_trans_score(const struct sctp_transport *trans) 1223 1216 { 1224 - return sctp_trans_state_to_prio_map[trans->state]; 1217 + switch (trans->state) { 1218 + case SCTP_ACTIVE: 1219 + return 3; /* best case */ 1220 + case SCTP_UNKNOWN: 1221 + return 2; 1222 + case SCTP_PF: 1223 + return 1; 1224 + default: /* case SCTP_INACTIVE */ 1225 + return 0; /* worst case */ 1226 + } 1225 1227 } 1226 1228 1227 1229 static struct sctp_transport *sctp_trans_elect_tie(struct sctp_transport *trans1,
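The sctp change replaces the state-indexed priority table with a switch: a transport state not covered by the fixed-size array would have indexed past its end, whereas the switch's default arm gives every unlisted state the worst score. A sketch of the switch form — the enum values here are illustrative, not sctp's exact definitions:

```c
#include <assert.h>

enum trans_state { ACTIVE, UNKNOWN, PF, INACTIVE, UNCONFIRMED };

/* States not listed explicitly (including any added later) fall
 * through to the worst score instead of reading a fixed-size table
 * out of bounds. */
static unsigned char trans_score(enum trans_state s)
{
	switch (s) {
	case ACTIVE:  return 3; /* best case */
	case UNKNOWN: return 2;
	case PF:      return 1;
	default:      return 0; /* INACTIVE and anything unexpected */
	}
}
```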
+24 -20
net/sctp/sm_sideeffect.c
··· 244 244 int error; 245 245 struct sctp_transport *transport = (struct sctp_transport *) peer; 246 246 struct sctp_association *asoc = transport->asoc; 247 - struct net *net = sock_net(asoc->base.sk); 247 + struct sock *sk = asoc->base.sk; 248 + struct net *net = sock_net(sk); 248 249 249 250 /* Check whether a task is in the sock. */ 250 251 251 - bh_lock_sock(asoc->base.sk); 252 - if (sock_owned_by_user(asoc->base.sk)) { 252 + bh_lock_sock(sk); 253 + if (sock_owned_by_user(sk)) { 253 254 pr_debug("%s: sock is busy\n", __func__); 254 255 255 256 /* Try again later. */ ··· 273 272 transport, GFP_ATOMIC); 274 273 275 274 if (error) 276 - asoc->base.sk->sk_err = -error; 275 + sk->sk_err = -error; 277 276 278 277 out_unlock: 279 - bh_unlock_sock(asoc->base.sk); 278 + bh_unlock_sock(sk); 280 279 sctp_transport_put(transport); 281 280 } 282 281 ··· 286 285 static void sctp_generate_timeout_event(struct sctp_association *asoc, 287 286 sctp_event_timeout_t timeout_type) 288 287 { 289 - struct net *net = sock_net(asoc->base.sk); 288 + struct sock *sk = asoc->base.sk; 289 + struct net *net = sock_net(sk); 290 290 int error = 0; 291 291 292 - bh_lock_sock(asoc->base.sk); 293 - if (sock_owned_by_user(asoc->base.sk)) { 292 + bh_lock_sock(sk); 293 + if (sock_owned_by_user(sk)) { 294 294 pr_debug("%s: sock is busy: timer %d\n", __func__, 295 295 timeout_type); 296 296 ··· 314 312 (void *)timeout_type, GFP_ATOMIC); 315 313 316 314 if (error) 317 - asoc->base.sk->sk_err = -error; 315 + sk->sk_err = -error; 318 316 319 317 out_unlock: 320 - bh_unlock_sock(asoc->base.sk); 318 + bh_unlock_sock(sk); 321 319 sctp_association_put(asoc); 322 320 } 323 321 ··· 367 365 int error = 0; 368 366 struct sctp_transport *transport = (struct sctp_transport *) data; 369 367 struct sctp_association *asoc = transport->asoc; 370 - struct net *net = sock_net(asoc->base.sk); 368 + struct sock *sk = asoc->base.sk; 369 + struct net *net = sock_net(sk); 371 370 372 - bh_lock_sock(asoc->base.sk); 373 - 
if (sock_owned_by_user(asoc->base.sk)) { 371 + bh_lock_sock(sk); 372 + if (sock_owned_by_user(sk)) { 374 373 pr_debug("%s: sock is busy\n", __func__); 375 374 376 375 /* Try again later. */ ··· 391 388 asoc->state, asoc->ep, asoc, 392 389 transport, GFP_ATOMIC); 393 390 394 - if (error) 395 - asoc->base.sk->sk_err = -error; 391 + if (error) 392 + sk->sk_err = -error; 396 393 397 394 out_unlock: 398 - bh_unlock_sock(asoc->base.sk); 395 + bh_unlock_sock(sk); 399 396 sctp_transport_put(transport); 400 397 } 401 398 ··· 406 403 { 407 404 struct sctp_transport *transport = (struct sctp_transport *) data; 408 405 struct sctp_association *asoc = transport->asoc; 409 - struct net *net = sock_net(asoc->base.sk); 406 + struct sock *sk = asoc->base.sk; 407 + struct net *net = sock_net(sk); 410 408 411 - bh_lock_sock(asoc->base.sk); 412 - if (sock_owned_by_user(asoc->base.sk)) { 409 + bh_lock_sock(sk); 410 + if (sock_owned_by_user(sk)) { 413 411 pr_debug("%s: sock is busy\n", __func__); 414 412 415 413 /* Try again later. */ ··· 431 427 asoc->state, asoc->ep, asoc, transport, GFP_ATOMIC); 432 428 433 429 out_unlock: 434 - bh_unlock_sock(asoc->base.sk); 430 + bh_unlock_sock(sk); 435 431 sctp_association_put(asoc); 436 432 } 437 433
-19
net/sunrpc/xprtrdma/fmr_ops.c
··· 39 39 fmr_op_open(struct rpcrdma_ia *ia, struct rpcrdma_ep *ep, 40 40 struct rpcrdma_create_data_internal *cdata) 41 41 { 42 - struct ib_device_attr *devattr = &ia->ri_devattr; 43 - struct ib_mr *mr; 44 - 45 - /* Obtain an lkey to use for the regbufs, which are 46 - * protected from remote access. 47 - */ 48 - if (devattr->device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY) { 49 - ia->ri_dma_lkey = ia->ri_device->local_dma_lkey; 50 - } else { 51 - mr = ib_get_dma_mr(ia->ri_pd, IB_ACCESS_LOCAL_WRITE); 52 - if (IS_ERR(mr)) { 53 - pr_err("%s: ib_get_dma_mr for failed with %lX\n", 54 - __func__, PTR_ERR(mr)); 55 - return -ENOMEM; 56 - } 57 - ia->ri_dma_lkey = ia->ri_dma_mr->lkey; 58 - ia->ri_dma_mr = mr; 59 - } 60 - 61 42 return 0; 62 43 } 63 44
-5
net/sunrpc/xprtrdma/frwr_ops.c
··· 189 189 struct ib_device_attr *devattr = &ia->ri_devattr; 190 190 int depth, delta; 191 191 192 - /* Obtain an lkey to use for the regbufs, which are 193 - * protected from remote access. 194 - */ 195 - ia->ri_dma_lkey = ia->ri_device->local_dma_lkey; 196 - 197 192 ia->ri_max_frmr_depth = 198 193 min_t(unsigned int, RPCRDMA_MAX_DATA_SEGS, 199 194 devattr->max_fast_reg_page_list_len);
+1 -9
net/sunrpc/xprtrdma/physical_ops.c
··· 23 23 physical_op_open(struct rpcrdma_ia *ia, struct rpcrdma_ep *ep, 24 24 struct rpcrdma_create_data_internal *cdata) 25 25 { 26 - struct ib_device_attr *devattr = &ia->ri_devattr; 27 26 struct ib_mr *mr; 28 27 29 28 /* Obtain an rkey to use for RPC data payloads. ··· 36 37 __func__, PTR_ERR(mr)); 37 38 return -ENOMEM; 38 39 } 40 + 39 41 ia->ri_dma_mr = mr; 40 - 41 - /* Obtain an lkey to use for regbufs. 42 - */ 43 - if (devattr->device_cap_flags & IB_DEVICE_LOCAL_DMA_LKEY) 44 - ia->ri_dma_lkey = ia->ri_device->local_dma_lkey; 45 - else 46 - ia->ri_dma_lkey = ia->ri_dma_mr->lkey; 47 - 48 42 return 0; 49 43 } 50 44
+1 -1
net/sunrpc/xprtrdma/verbs.c
··· 1252 1252 goto out_free; 1253 1253 1254 1254 iov->length = size; 1255 - iov->lkey = ia->ri_dma_lkey; 1255 + iov->lkey = ia->ri_pd->local_dma_lkey; 1256 1256 rb->rg_size = size; 1257 1257 rb->rg_owner = NULL; 1258 1258 return rb;
-1
net/sunrpc/xprtrdma/xprt_rdma.h
··· 65 65 struct rdma_cm_id *ri_id; 66 66 struct ib_pd *ri_pd; 67 67 struct ib_mr *ri_dma_mr; 68 - u32 ri_dma_lkey; 69 68 struct completion ri_done; 70 69 int ri_async_rc; 71 70 unsigned int ri_max_frmr_depth;
+14 -1
net/unix/af_unix.c
··· 2179 2179 if (UNIXCB(skb).fp) 2180 2180 scm.fp = scm_fp_dup(UNIXCB(skb).fp); 2181 2181 2182 - sk_peek_offset_fwd(sk, chunk); 2182 + if (skip) { 2183 + sk_peek_offset_fwd(sk, chunk); 2184 + skip -= chunk; 2185 + } 2183 2186 2187 + if (UNIXCB(skb).fp) 2188 + break; 2189 + 2190 + last = skb; 2191 + last_len = skb->len; 2192 + unix_state_lock(sk); 2193 + skb = skb_peek_next(skb, &sk->sk_receive_queue); 2194 + if (skb) 2195 + goto again; 2196 + unix_state_unlock(sk); 2184 2197 break; 2185 2198 } 2186 2199 } while (size);
+7 -7
samples/kprobes/jprobe_example.c
··· 1 1 /* 2 2 * Here's a sample kernel module showing the use of jprobes to dump 3 - * the arguments of do_fork(). 3 + * the arguments of _do_fork(). 4 4 * 5 5 * For more information on theory of operation of jprobes, see 6 6 * Documentation/kprobes.txt 7 7 * 8 8 * Build and insert the kernel module as done in the kprobe example. 9 9 * You will see the trace data in /var/log/messages and on the 10 - * console whenever do_fork() is invoked to create a new process. 10 + * console whenever _do_fork() is invoked to create a new process. 11 11 * (Some messages may be suppressed if syslogd is configured to 12 12 * eliminate duplicate messages.) 13 13 */ ··· 17 17 #include <linux/kprobes.h> 18 18 19 19 /* 20 - * Jumper probe for do_fork. 20 + * Jumper probe for _do_fork. 21 21 * Mirror principle enables access to arguments of the probed routine 22 22 * from the probe handler. 23 23 */ 24 24 25 - /* Proxy routine having the same arguments as actual do_fork() routine */ 26 - static long jdo_fork(unsigned long clone_flags, unsigned long stack_start, 25 + /* Proxy routine having the same arguments as actual _do_fork() routine */ 26 + static long j_do_fork(unsigned long clone_flags, unsigned long stack_start, 27 27 unsigned long stack_size, int __user *parent_tidptr, 28 28 int __user *child_tidptr) 29 29 { ··· 36 36 } 37 37 38 38 static struct jprobe my_jprobe = { 39 - .entry = jdo_fork, 39 + .entry = j_do_fork, 40 40 .kp = { 41 - .symbol_name = "do_fork", 41 + .symbol_name = "_do_fork", 42 42 }, 43 43 }; 44 44
+3 -3
samples/kprobes/kprobe_example.c
··· 1 1 /* 2 2 * NOTE: This example is works on x86 and powerpc. 3 3 * Here's a sample kernel module showing the use of kprobes to dump a 4 - * stack trace and selected registers when do_fork() is called. 4 + * stack trace and selected registers when _do_fork() is called. 5 5 * 6 6 * For more information on theory of operation of kprobes, see 7 7 * Documentation/kprobes.txt 8 8 * 9 9 * You will see the trace data in /var/log/messages and on the console 10 - * whenever do_fork() is invoked to create a new process. 10 + * whenever _do_fork() is invoked to create a new process. 11 11 */ 12 12 13 13 #include <linux/kernel.h> ··· 16 16 17 17 /* For each probe you need to allocate a kprobe structure */ 18 18 static struct kprobe kp = { 19 - .symbol_name = "do_fork", 19 + .symbol_name = "_do_fork", 20 20 }; 21 21 22 22 /* kprobe pre_handler: called just before the probed instruction is executed */
+2 -2
samples/kprobes/kretprobe_example.c
··· 7 7 * 8 8 * usage: insmod kretprobe_example.ko func=<func_name> 9 9 * 10 - * If no func_name is specified, do_fork is instrumented 10 + * If no func_name is specified, _do_fork is instrumented 11 11 * 12 12 * For more information on theory of operation of kretprobes, see 13 13 * Documentation/kprobes.txt ··· 25 25 #include <linux/limits.h> 26 26 #include <linux/sched.h> 27 27 28 - static char func_name[NAME_MAX] = "do_fork"; 28 + static char func_name[NAME_MAX] = "_do_fork"; 29 29 module_param_string(func, func_name, NAME_MAX, S_IRUGO); 30 30 MODULE_PARM_DESC(func, "Function to kretprobe; this module will report the" 31 31 " function's execution time");
-4
scripts/extract-cert.c
··· 17 17 #include <stdint.h> 18 18 #include <stdbool.h> 19 19 #include <string.h> 20 - #include <getopt.h> 21 20 #include <err.h> 22 - #include <arpa/inet.h> 23 21 #include <openssl/bio.h> 24 - #include <openssl/evp.h> 25 22 #include <openssl/pem.h> 26 - #include <openssl/pkcs7.h> 27 23 #include <openssl/err.h> 28 24 #include <openssl/engine.h> 29 25
+77 -17
scripts/sign-file.c
··· 20 20 #include <getopt.h> 21 21 #include <err.h> 22 22 #include <arpa/inet.h> 23 + #include <openssl/opensslv.h> 23 24 #include <openssl/bio.h> 24 25 #include <openssl/evp.h> 25 26 #include <openssl/pem.h> 26 - #include <openssl/cms.h> 27 27 #include <openssl/err.h> 28 28 #include <openssl/engine.h> 29 + 30 + /* 31 + * Use CMS if we have openssl-1.0.0 or newer available - otherwise we have to 32 + * assume that it's not available and its header file is missing and that we 33 + * should use PKCS#7 instead. Switching to the older PKCS#7 format restricts 34 + * the options we have on specifying the X.509 certificate we want. 35 + * 36 + * Further, older versions of OpenSSL don't support manually adding signers to 37 + * the PKCS#7 message so have to accept that we get a certificate included in 38 + * the signature message. Nor do such older versions of OpenSSL support 39 + * signing with anything other than SHA1 - so we're stuck with that if such is 40 + * the case. 41 + */ 42 + #if OPENSSL_VERSION_NUMBER < 0x10000000L 43 + #define USE_PKCS7 44 + #endif 45 + #ifndef USE_PKCS7 46 + #include <openssl/cms.h> 47 + #else 48 + #include <openssl/pkcs7.h> 49 + #endif 29 50 30 51 struct module_signature { 31 52 uint8_t algo; /* Public-key crypto algorithm [0] */ ··· 131 110 struct module_signature sig_info = { .id_type = PKEY_ID_PKCS7 }; 132 111 char *hash_algo = NULL; 133 112 char *private_key_name, *x509_name, *module_name, *dest_name; 134 - bool save_cms = false, replace_orig; 113 + bool save_sig = false, replace_orig; 135 114 bool sign_only = false; 136 115 unsigned char buf[4096]; 137 - unsigned long module_size, cms_size; 138 - unsigned int use_keyid = 0, use_signed_attrs = CMS_NOATTR; 116 + unsigned long module_size, sig_size; 117 + unsigned int use_signed_attrs; 139 118 const EVP_MD *digest_algo; 140 119 EVP_PKEY *private_key; 120 + #ifndef USE_PKCS7 141 121 CMS_ContentInfo *cms; 122 + unsigned int use_keyid = 0; 123 + #else 124 + PKCS7 *pkcs7; 125 + #endif 142 126 
X509 *x509; 143 127 BIO *b, *bd = NULL, *bm; 144 128 int opt, n; 145 - 146 129 OpenSSL_add_all_algorithms(); 147 130 ERR_load_crypto_strings(); 148 131 ERR_clear_error(); 149 132 150 133 key_pass = getenv("KBUILD_SIGN_PIN"); 151 134 135 + #ifndef USE_PKCS7 136 + use_signed_attrs = CMS_NOATTR; 137 + #else 138 + use_signed_attrs = PKCS7_NOATTR; 139 + #endif 140 + 152 141 do { 153 142 opt = getopt(argc, argv, "dpk"); 154 143 switch (opt) { 155 - case 'p': save_cms = true; break; 156 - case 'd': sign_only = true; save_cms = true; break; 144 + case 'p': save_sig = true; break; 145 + case 'd': sign_only = true; save_sig = true; break; 146 + #ifndef USE_PKCS7 157 147 case 'k': use_keyid = CMS_USE_KEYID; break; 148 + #endif 158 149 case -1: break; 159 150 default: format(); 160 151 } ··· 189 156 "asprintf"); 190 157 replace_orig = true; 191 158 } 159 + 160 + #ifdef USE_PKCS7 161 + if (strcmp(hash_algo, "sha1") != 0) { 162 + fprintf(stderr, "sign-file: %s only supports SHA1 signing\n", 163 + OPENSSL_VERSION_TEXT); 164 + exit(3); 165 + } 166 + #endif 192 167 193 168 /* Read the private key and the X.509 cert the PKCS#7 message 194 169 * will point to. ··· 254 213 bm = BIO_new_file(module_name, "rb"); 255 214 ERR(!bm, "%s", module_name); 256 215 257 - /* Load the CMS message from the digest buffer. */ 216 + #ifndef USE_PKCS7 217 + /* Load the signature message from the digest buffer. 
*/ 258 218 cms = CMS_sign(NULL, NULL, NULL, NULL,
259 219 CMS_NOCERTS | CMS_PARTIAL | CMS_BINARY | CMS_DETACHED | CMS_STREAM);
260 220 ERR(!cms, "CMS_sign");
··· 263 221 ERR(!CMS_add1_signer(cms, x509, private_key, digest_algo,
264 222 CMS_NOCERTS | CMS_BINARY | CMS_NOSMIMECAP |
265 223 use_keyid | use_signed_attrs),
266 - "CMS_sign_add_signer");
224 + "CMS_add1_signer");
267 225 ERR(CMS_final(cms, bm, NULL, CMS_NOCERTS | CMS_BINARY) < 0,
268 226 "CMS_final");
269 227
270 - if (save_cms) {
271 - char *cms_name;
228 + #else
229 + pkcs7 = PKCS7_sign(x509, private_key, NULL, bm,
230 + PKCS7_NOCERTS | PKCS7_BINARY |
231 + PKCS7_DETACHED | use_signed_attrs);
232 + ERR(!pkcs7, "PKCS7_sign");
233 + #endif
272 234
273 - ERR(asprintf(&cms_name, "%s.p7s", module_name) < 0, "asprintf");
274 - b = BIO_new_file(cms_name, "wb");
275 - ERR(!b, "%s", cms_name);
276 - ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) < 0, "%s", cms_name);
235 + if (save_sig) {
236 + char *sig_file_name;
237 +
238 + ERR(asprintf(&sig_file_name, "%s.p7s", module_name) < 0,
239 + "asprintf");
240 + b = BIO_new_file(sig_file_name, "wb");
241 + ERR(!b, "%s", sig_file_name);
242 + #ifndef USE_PKCS7
243 + ERR(i2d_CMS_bio_stream(b, cms, NULL, 0) < 0,
244 + "%s", sig_file_name);
245 + #else
246 + ERR(i2d_PKCS7_bio(b, pkcs7) < 0,
247 + "%s", sig_file_name);
248 + #endif
277 249 BIO_free(b);
278 250 }
279 251
··· 303 247 ERR(n < 0, "%s", module_name);
304 248 module_size = BIO_number_written(bd);
305 249
250 + #ifndef USE_PKCS7
306 251 ERR(i2d_CMS_bio_stream(bd, cms, NULL, 0) < 0, "%s", dest_name);
307 - cms_size = BIO_number_written(bd) - module_size;
308 - sig_info.sig_len = htonl(cms_size);
252 + #else
253 + ERR(i2d_PKCS7_bio(bd, pkcs7) < 0, "%s", dest_name);
254 + #endif
255 + sig_size = BIO_number_written(bd) - module_size;
256 + sig_info.sig_len = htonl(sig_size);
309 257 ERR(BIO_write(bd, &sig_info, sizeof(sig_info)) < 0, "%s", dest_name);
310 258 ERR(BIO_write(bd, magic_number, sizeof(magic_number) - 1) < 0,
311 259 "%s", dest_name);
+4 -4
security/keys/gc.c
··· 134 134 kdebug("- %u", key->serial); 135 135 key_check(key); 136 136 137 + /* Throw away the key data */ 138 + if (key->type->destroy) 139 + key->type->destroy(key); 140 + 137 141 security_key_free(key); 138 142 139 143 /* deal with the user's key tracking and quota */ ··· 151 147 atomic_dec(&key->user->nkeys); 152 148 if (test_bit(KEY_FLAG_INSTANTIATED, &key->flags)) 153 149 atomic_dec(&key->user->nikeys); 154 - 155 - /* now throw away the key memory */ 156 - if (key->type->destroy) 157 - key->type->destroy(key); 158 150 159 151 key_user_put(key->user); 160 152
+6 -2
tools/build/Makefile.feature
··· 41 41 libelf-getphdrnum \ 42 42 libelf-mmap \ 43 43 libnuma \ 44 + numa_num_possible_cpus \ 44 45 libperl \ 45 46 libpython \ 46 47 libpython-version \ ··· 52 51 timerfd \ 53 52 libdw-dwarf-unwind \ 54 53 zlib \ 55 - lzma 54 + lzma \ 55 + get_cpuid 56 56 57 57 FEATURE_DISPLAY ?= \ 58 58 dwarf \ ··· 63 61 libbfd \ 64 62 libelf \ 65 63 libnuma \ 64 + numa_num_possible_cpus \ 66 65 libperl \ 67 66 libpython \ 68 67 libslang \ 69 68 libunwind \ 70 69 libdw-dwarf-unwind \ 71 70 zlib \ 72 - lzma 71 + lzma \ 72 + get_cpuid 73 73 74 74 # Set FEATURE_CHECK_(C|LD)FLAGS-all for all FEATURE_TESTS features. 75 75 # If in the future we need per-feature checks/flags for features not
+9 -1
tools/build/feature/Makefile
··· 19 19 test-libelf-getphdrnum.bin \ 20 20 test-libelf-mmap.bin \ 21 21 test-libnuma.bin \ 22 + test-numa_num_possible_cpus.bin \ 22 23 test-libperl.bin \ 23 24 test-libpython.bin \ 24 25 test-libpython-version.bin \ ··· 35 34 test-compile-x32.bin \ 36 35 test-zlib.bin \ 37 36 test-lzma.bin \ 38 - test-bpf.bin 37 + test-bpf.bin \ 38 + test-get_cpuid.bin 39 39 40 40 CC := $(CROSS_COMPILE)gcc -MD 41 41 PKG_CONFIG := $(CROSS_COMPILE)pkg-config ··· 87 85 $(BUILD) -lelf 88 86 89 87 test-libnuma.bin: 88 + $(BUILD) -lnuma 89 + 90 + test-numa_num_possible_cpus.bin: 90 91 $(BUILD) -lnuma 91 92 92 93 test-libunwind.bin: ··· 166 161 167 162 test-lzma.bin: 168 163 $(BUILD) -llzma 164 + 165 + test-get_cpuid.bin: 166 + $(BUILD) 169 167 170 168 test-bpf.bin: 171 169 $(BUILD)
+10
tools/build/feature/test-all.c
··· 77 77 # include "test-libnuma.c" 78 78 #undef main 79 79 80 + #define main main_test_numa_num_possible_cpus 81 + # include "test-numa_num_possible_cpus.c" 82 + #undef main 83 + 80 84 #define main main_test_timerfd 81 85 # include "test-timerfd.c" 82 86 #undef main ··· 121 117 # include "test-lzma.c" 122 118 #undef main 123 119 120 + #define main main_test_get_cpuid 121 + # include "test-get_cpuid.c" 122 + #undef main 123 + 124 124 int main(int argc, char *argv[]) 125 125 { 126 126 main_test_libpython(); ··· 144 136 main_test_libbfd(); 145 137 main_test_backtrace(); 146 138 main_test_libnuma(); 139 + main_test_numa_num_possible_cpus(); 147 140 main_test_timerfd(); 148 141 main_test_stackprotector_all(); 149 142 main_test_libdw_dwarf_unwind(); ··· 152 143 main_test_zlib(); 153 144 main_test_pthread_attr_setaffinity_np(); 154 145 main_test_lzma(); 146 + main_test_get_cpuid(); 155 147 156 148 return 0; 157 149 }
+7
tools/build/feature/test-get_cpuid.c
··· 1 + #include <cpuid.h> 2 + 3 + int main(void) 4 + { 5 + unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0; 6 + return __get_cpuid(0x15, &eax, &ebx, &ecx, &edx); 7 + }
+6
tools/build/feature/test-numa_num_possible_cpus.c
··· 1 + #include <numa.h> 2 + 3 + int main(void) 4 + { 5 + return numa_num_possible_cpus(); 6 + }
+20 -3
tools/lib/traceevent/event-parse.c
··· 3795 3795 struct format_field *field; 3796 3796 struct printk_map *printk; 3797 3797 long long val, fval; 3798 - unsigned long addr; 3798 + unsigned long long addr; 3799 3799 char *str; 3800 3800 unsigned char *hex; 3801 3801 int print; ··· 3828 3828 */ 3829 3829 if (!(field->flags & FIELD_IS_ARRAY) && 3830 3830 field->size == pevent->long_size) { 3831 - addr = *(unsigned long *)(data + field->offset); 3831 + 3832 + /* Handle heterogeneous recording and processing 3833 + * architectures 3834 + * 3835 + * CASE I: 3836 + * Traces recorded on 32-bit devices (32-bit 3837 + * addressing) and processed on 64-bit devices: 3838 + * In this case, only 32 bits should be read. 3839 + * 3840 + * CASE II: 3841 + * Traces recorded on 64 bit devices and processed 3842 + * on 32-bit devices: 3843 + * In this case, 64 bits must be read. 3844 + */ 3845 + addr = (pevent->long_size == 8) ? 3846 + *(unsigned long long *)(data + field->offset) : 3847 + (unsigned long long)*(unsigned int *)(data + field->offset); 3848 + 3832 3849 /* Check if it matches a print format */ 3833 3850 printk = find_printk(pevent, addr); 3834 3851 if (printk) 3835 3852 trace_seq_puts(s, printk->printk); 3836 3853 else 3837 - trace_seq_printf(s, "%lx", addr); 3854 + trace_seq_printf(s, "%llx", addr); 3838 3855 break; 3839 3856 } 3840 3857 str = malloc(len + 1);
-15
tools/perf/Documentation/intel-pt.txt
··· 364 364 365 365 CYC packets are not requested by default. 366 366 367 - no_force_psb This is a driver option and is not in the IA32_RTIT_CTL MSR. 368 - 369 - It stops the driver resetting the byte count to zero whenever 370 - enabling the trace (for example on context switches) which in 371 - turn results in no PSB being forced. However some processors 372 - will produce a PSB anyway. 373 - 374 - In any case, there is still a PSB when the trace is enabled for 375 - the first time. 376 - 377 - no_force_psb can be used to slightly decrease the trace size but 378 - may make it harder for the decoder to recover from errors. 379 - 380 - no_force_psb is not selected by default. 381 - 382 367 383 368 new snapshot option 384 369 -------------------
+15 -5
tools/perf/config/Makefile
··· 573 573 msg := $(warning No numa.h found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev); 574 574 NO_LIBNUMA := 1 575 575 else 576 - CFLAGS += -DHAVE_LIBNUMA_SUPPORT 577 - EXTLIBS += -lnuma 578 - $(call detected,CONFIG_NUMA) 576 + ifeq ($(feature-numa_num_possible_cpus), 0) 577 + msg := $(warning Old numa library found, disables 'perf bench numa mem' benchmark, please install numactl-devel/libnuma-devel/libnuma-dev >= 2.0.8); 578 + NO_LIBNUMA := 1 579 + else 580 + CFLAGS += -DHAVE_LIBNUMA_SUPPORT 581 + EXTLIBS += -lnuma 582 + $(call detected,CONFIG_NUMA) 583 + endif 579 584 endif 580 585 endif 581 586 ··· 626 621 endif 627 622 628 623 ifndef NO_AUXTRACE 629 - $(call detected,CONFIG_AUXTRACE) 630 - CFLAGS += -DHAVE_AUXTRACE_SUPPORT 624 + ifeq ($(feature-get_cpuid), 0) 625 + msg := $(warning Your gcc lacks the __get_cpuid() builtin, disables support for auxtrace/Intel PT, please install a newer gcc); 626 + NO_AUXTRACE := 1 627 + else 628 + $(call detected,CONFIG_AUXTRACE) 629 + CFLAGS += -DHAVE_AUXTRACE_SUPPORT 630 + endif 631 631 endif 632 632 633 633 # Among the variables below, these:
+7 -6
tools/perf/util/probe-event.c
··· 270 270 int ret = 0; 271 271 272 272 if (module) { 273 - list_for_each_entry(dso, &host_machine->dsos.head, node) { 274 - if (!dso->kernel) 275 - continue; 276 - if (strncmp(dso->short_name + 1, module, 277 - dso->short_name_len - 2) == 0) 278 - goto found; 273 + char module_name[128]; 274 + 275 + snprintf(module_name, sizeof(module_name), "[%s]", module); 276 + map = map_groups__find_by_name(&host_machine->kmaps, MAP__FUNCTION, module_name); 277 + if (map) { 278 + dso = map->dso; 279 + goto found; 279 280 } 280 281 pr_debug("Failed to find module %s.\n", module); 281 282 return -ENOENT;
+4 -1
tools/perf/util/session.c
··· 1580 1580 file_offset = page_offset; 1581 1581 head = data_offset - page_offset; 1582 1582 1583 - if (data_size && (data_offset + data_size < file_size)) 1583 + if (data_size == 0) 1584 + goto out; 1585 + 1586 + if (data_offset + data_size < file_size) 1584 1587 file_size = data_offset + data_size; 1585 1588 1586 1589 ui_progress__init(&prog, file_size, "Processing events...");
+14 -2
tools/perf/util/stat.c
··· 196 196 memset(counter->per_pkg_mask, 0, MAX_NR_CPUS); 197 197 } 198 198 199 - static int check_per_pkg(struct perf_evsel *counter, int cpu, bool *skip) 199 + static int check_per_pkg(struct perf_evsel *counter, 200 + struct perf_counts_values *vals, int cpu, bool *skip) 200 201 { 201 202 unsigned long *mask = counter->per_pkg_mask; 202 203 struct cpu_map *cpus = perf_evsel__cpus(counter); ··· 219 218 counter->per_pkg_mask = mask; 220 219 } 221 220 221 + /* 222 + * we do not consider an event that has not run as a good 223 + * instance to mark a package as used (skip=1). Otherwise 224 + * we may run into a situation where the first CPU in a package 225 + * is not running anything, yet the second is, and this function 226 + * would mark the package as used after the first CPU and would 227 + * not read the values from the second CPU. 228 + */ 229 + if (!(vals->run && vals->ena)) 230 + return 0; 231 + 222 232 s = cpu_map__get_socket(cpus, cpu); 223 233 if (s < 0) 224 234 return -1; ··· 247 235 static struct perf_counts_values zero; 248 236 bool skip = false; 249 237 250 - if (check_per_pkg(evsel, cpu, &skip)) { 238 + if (check_per_pkg(evsel, count, cpu, &skip)) { 251 239 pr_err("failed to read per-pkg counter\n"); 252 240 return -1; 253 241 }
+13 -22
tools/perf/util/symbol-elf.c
··· 38 38 #endif 39 39 40 40 #ifndef HAVE_ELF_GETPHDRNUM_SUPPORT 41 - int elf_getphdrnum(Elf *elf, size_t *dst) 41 + static int elf_getphdrnum(Elf *elf, size_t *dst) 42 42 { 43 43 GElf_Ehdr gehdr; 44 44 GElf_Ehdr *ehdr; ··· 1271 1271 static int kcore__init(struct kcore *kcore, char *filename, int elfclass, 1272 1272 bool temp) 1273 1273 { 1274 - GElf_Ehdr *ehdr; 1275 - 1276 1274 kcore->elfclass = elfclass; 1277 1275 1278 1276 if (temp) ··· 1287 1289 if (!gelf_newehdr(kcore->elf, elfclass)) 1288 1290 goto out_end; 1289 1291 1290 - ehdr = gelf_getehdr(kcore->elf, &kcore->ehdr); 1291 - if (!ehdr) 1292 - goto out_end; 1292 + memset(&kcore->ehdr, 0, sizeof(GElf_Ehdr)); 1293 1293 1294 1294 return 0; 1295 1295 ··· 1344 1348 static int kcore__add_phdr(struct kcore *kcore, int idx, off_t offset, 1345 1349 u64 addr, u64 len) 1346 1350 { 1347 - GElf_Phdr gphdr; 1348 - GElf_Phdr *phdr; 1351 + GElf_Phdr phdr = { 1352 + .p_type = PT_LOAD, 1353 + .p_flags = PF_R | PF_W | PF_X, 1354 + .p_offset = offset, 1355 + .p_vaddr = addr, 1356 + .p_paddr = 0, 1357 + .p_filesz = len, 1358 + .p_memsz = len, 1359 + .p_align = page_size, 1360 + }; 1349 1361 1350 - phdr = gelf_getphdr(kcore->elf, idx, &gphdr); 1351 - if (!phdr) 1352 - return -1; 1353 - 1354 - phdr->p_type = PT_LOAD; 1355 - phdr->p_flags = PF_R | PF_W | PF_X; 1356 - phdr->p_offset = offset; 1357 - phdr->p_vaddr = addr; 1358 - phdr->p_paddr = 0; 1359 - phdr->p_filesz = len; 1360 - phdr->p_memsz = len; 1361 - phdr->p_align = page_size; 1362 - 1363 - if (!gelf_update_phdr(kcore->elf, idx, phdr)) 1362 + if (!gelf_update_phdr(kcore->elf, idx, &phdr)) 1364 1363 return -1; 1365 1364 1366 1365 return 0;
+1 -1
tools/perf/util/util.c
··· 709 709 710 710 dir = opendir(procfs__mountpoint()); 711 711 if (!dir) 712 - return -1; 712 + return false; 713 713 714 714 /* Walk through the directory. */ 715 715 while (ret && (d = readdir(dir)) != NULL) {
+34 -5
tools/power/x86/turbostat/turbostat.c
··· 71 71 unsigned int extra_msr_offset64; 72 72 unsigned int extra_delta_offset32; 73 73 unsigned int extra_delta_offset64; 74 + unsigned int aperf_mperf_multiplier = 1; 74 75 int do_smi; 75 76 double bclk; 77 + double base_hz; 78 + double tsc_tweak = 1.0; 76 79 unsigned int show_pkg; 77 80 unsigned int show_core; 78 81 unsigned int show_cpu; ··· 505 502 /* %Busy */ 506 503 if (has_aperf) { 507 504 if (!skip_c0) 508 - outp += sprintf(outp, "%8.2f", 100.0 * t->mperf/t->tsc); 505 + outp += sprintf(outp, "%8.2f", 100.0 * t->mperf/t->tsc/tsc_tweak); 509 506 else 510 507 outp += sprintf(outp, "********"); 511 508 } ··· 513 510 /* Bzy_MHz */ 514 511 if (has_aperf) 515 512 outp += sprintf(outp, "%8.0f", 516 - 1.0 * t->tsc / units * t->aperf / t->mperf / interval_float); 513 + 1.0 * t->tsc * tsc_tweak / units * t->aperf / t->mperf / interval_float); 517 514 518 515 /* TSC_MHz */ 519 516 outp += sprintf(outp, "%8.0f", 1.0 * t->tsc/units/interval_float); ··· 987 984 return -3; 988 985 if (get_msr(cpu, MSR_IA32_MPERF, &t->mperf)) 989 986 return -4; 987 + t->aperf = t->aperf * aperf_mperf_multiplier; 988 + t->mperf = t->mperf * aperf_mperf_multiplier; 990 989 } 991 990 992 991 if (do_smi) { ··· 1153 1148 int slv_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCLRSV, PCLRSV, PCL__4, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV}; 1154 1149 int amt_pkg_cstate_limits[16] = {PCL__0, PCL__1, PCL__2, PCLRSV, PCLRSV, PCLRSV, PCL__6, PCL__7, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV}; 1155 1150 int phi_pkg_cstate_limits[16] = {PCL__0, PCL__2, PCL_6N, PCL_6R, PCLRSV, PCLRSV, PCLRSV, PCLUNL, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV, PCLRSV}; 1151 + 1152 + 1153 + static void 1154 + calculate_tsc_tweak() 1155 + { 1156 + unsigned long long msr; 1157 + unsigned int base_ratio; 1158 + 1159 + get_msr(base_cpu, MSR_NHM_PLATFORM_INFO, &msr); 1160 + base_ratio = (msr >> 8) & 0xFF; 1161 + base_hz = base_ratio * bclk * 1000000; 
1162 + tsc_tweak = base_hz / tsc_hz; 1163 + } 1156 1164 1157 1165 static void 1158 1166 dump_nhm_platform_info(void) ··· 1944 1926 1945 1927 switch (model) { 1946 1928 case 0x3A: /* IVB */ 1947 - case 0x3E: /* IVB Xeon */ 1948 - 1949 1929 case 0x3C: /* HSW */ 1950 1930 case 0x3F: /* HSX */ 1951 1931 case 0x45: /* HSW */ ··· 2559 2543 return 0; 2560 2544 } 2561 2545 2546 + unsigned int get_aperf_mperf_multiplier(unsigned int family, unsigned int model) 2547 + { 2548 + if (is_knl(family, model)) 2549 + return 1024; 2550 + return 1; 2551 + } 2552 + 2562 2553 #define SLM_BCLK_FREQS 5 2563 2554 double slm_freq_table[SLM_BCLK_FREQS] = { 83.3, 100.0, 133.3, 116.7, 80.0}; 2564 2555 ··· 2767 2744 } 2768 2745 } 2769 2746 2747 + if (has_aperf) 2748 + aperf_mperf_multiplier = get_aperf_mperf_multiplier(family, model); 2749 + 2770 2750 do_nhm_platform_info = do_nhm_cstates = do_smi = probe_nhm_msrs(family, model); 2771 2751 do_snb_cstates = has_snb_msrs(family, model); 2772 2752 do_pc2 = do_snb_cstates && (pkg_cstate_limit >= PCL__2); ··· 2787 2761 2788 2762 if (debug) 2789 2763 dump_cstate_pstate_config_info(); 2764 + 2765 + if (has_skl_msrs(family, model)) 2766 + calculate_tsc_tweak(); 2790 2767 2791 2768 return; 2792 2769 } ··· 3119 3090 } 3120 3091 3121 3092 void print_version() { 3122 - fprintf(stderr, "turbostat version 4.7 17-June, 2015" 3093 + fprintf(stderr, "turbostat version 4.8 26-Sep, 2015" 3123 3094 " - Len Brown <lenb@kernel.org>\n"); 3124 3095 } 3125 3096