Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v4.1-rc3' into patchwork

Linux 4.1-rc3

* tag 'v4.1-rc3': (381 commits)
Linux 4.1-rc3
drm: Zero out invalid vblank timestamp in drm_update_vblank_count.
m32r: make flush_cpumask non-volatile.
mnt: Fix fs_fully_visible to verify the root directory is visible
path_openat(): fix double fput()
namei: d_is_negative() should be checked before ->d_seq validation
ARM: dts: Add keep-power-in-suspend to WiFi SDIO node for exynos5250-snow
ARM: dts: Fix typo in trip point temperature for exynos5420/5440
ARM: dts: add 'rtc_src' clock to rtc node for exynos4412-odroid boards
ARM: dts: Make DP a consumer of DISP1 power domain on Exynos5420
MAINTAINERS: add Conexant Digicolor machines entry
MAINTAINERS: socfpga: update the git repo for SoCFPGA
drm/tegra: Don't use vblank_disable_immediate on incapable driver.
mmc: dw_mmc: dw_mci_get_cd check MMC_CAP_NONREMOVABLE
mmc: dw_mmc: init desc in dw_mci_idmac_init
ARM: multi_v7_defconfig: Select more FSL SoCs
MAINTAINERS: replace an AT91 maintainer
drivers: CCI: fix used_mask init in validate_group()
drm/radeon: stop trying to suspend UVD sessions
drm/radeon: more strictly validate the UVD codec
...

+5254 -2772
+7
CREDITS
···
 D: Co-author of German book ``Linux-Kernel-Programmierung''
 D: Co-founder of Berlin Linux User Group

+N: Andrew Victor
+E: linux@maxim.org.za
+W: http://maxim.org.za/at91_26.html
+D: First maintainer of Atmel ARM-based SoC, aka AT91
+D: Introduced support for at91rm9200, the first chip of AT91 family
+S: South Africa
+
 N: Riku Voipio
 E: riku.voipio@iki.fi
 D: Author of PCA9532 LED and Fintek f75375s hwmon driver
+4 -1
Documentation/IPMI.txt
···

 The addresses are normal I2C addresses.  The adapter is the string
 name of the adapter, as shown in /sys/class/i2c-adapter/i2c-<n>/name.
-It is *NOT* i2c-<n> itself.
+It is *NOT* i2c-<n> itself.  Also, the comparison is done ignoring
+spaces, so if the name is "This is an I2C chip" you can say
+adapter_name=ThisisanI2cchip.  This is because it's hard to pass in
+spaces in kernel parameters.

 The debug flags are bit flags for each BMC found, they are:
 IPMI messages: 1, driver state: 2, timing: 4, I2C probe: 8
+1 -1
Documentation/acpi/enumeration.txt
···
 GPIO support
 ~~~~~~~~~~~~
 ACPI 5 introduced two new resources to describe GPIO connections: GpioIo
-and GpioInt. These resources are used be used to pass GPIO numbers used by
+and GpioInt. These resources can be used to pass GPIO numbers used by
 the device to the driver. ACPI 5.1 extended this with _DSD (Device
 Specific Data) which made it possible to name the GPIOs among other things.
+3 -3
Documentation/acpi/gpio-properties.txt
···
 _DSD Device Properties Related to GPIO
 --------------------------------------

-With the release of ACPI 5.1 and the _DSD configuration objecte names
-can finally be given to GPIOs (and other things as well) returned by
-_CRS. Previously, we were only able to use an integer index to find
+With the release of ACPI 5.1, the _DSD configuration object finally
+allows names to be given to GPIOs (and other things as well) returned
+by _CRS. Previously, we were only able to use an integer index to find
 the corresponding GPIO, which is pretty error prone (it depends on
 the _CRS output ordering, for example).
+1
Documentation/devicetree/bindings/arm/omap/l3-noc.txt
···
 Required properties:
 - compatible : Should be "ti,omap3-l3-smx" for OMAP3 family
 	       Should be "ti,omap4-l3-noc" for OMAP4 family
+	       Should be "ti,omap5-l3-noc" for OMAP5 family
 	       Should be "ti,dra7-l3-noc" for DRA7 family
 	       Should be "ti,am4372-l3-noc" for AM43 family
 - reg: Contains L3 register address range for each noc domain.
+1 -1
Documentation/devicetree/bindings/dma/fsl-mxs-dma.txt
···
 		       80 81 68 69
 		       70 71 72 73
 		       74 75 76 77>;
-	interrupt-names = "auart4-rx", "aurat4-tx", "spdif-tx", "empty",
+	interrupt-names = "auart4-rx", "auart4-tx", "spdif-tx", "empty",
 			  "saif0", "saif1", "i2c0", "i2c1",
 			  "auart0-rx", "auart0-tx", "auart1-rx", "auart1-tx",
 			  "auart2-rx", "auart2-tx", "auart3-rx", "auart3-tx";
+30
Documentation/devicetree/bindings/rtc/abracon,abx80x.txt
+Abracon ABX80X I2C ultra low power RTC/Alarm chip
+
+The Abracon ABX80X family consists of the ab0801, ab0803, ab0804, ab0805, ab1801,
+ab1803, ab1804 and ab1805. The ab0805 is the superset of ab080x and the ab1805
+is the superset of ab180x.
+
+Required properties:
+
+ - "compatible": should be one of:
+		"abracon,abx80x"
+		"abracon,ab0801"
+		"abracon,ab0803"
+		"abracon,ab0804"
+		"abracon,ab0805"
+		"abracon,ab1801"
+		"abracon,ab1803"
+		"abracon,ab1804"
+		"abracon,ab1805"
+	Using "abracon,abx80x" will enable chip autodetection.
+ - "reg": I2C bus address of the device
+
+Optional properties:
+
+The abx804 and abx805 have a trickle charger that is able to charge the
+connected battery or supercap. Both the following properties have to be defined
+and valid to enable charging:
+
+ - "abracon,tc-diode": should be "standard" (0.6V) or "schottky" (0.3V)
+ - "abracon,tc-resistor": should be <0>, <3>, <6> or <11>. 0 disables the output
+   resistor, the other values are in ohm.
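A node using this new binding might look like the following sketch; the I2C address, the chip variant, and the trickle-charger values are illustrative assumptions, not taken from the binding document:

```dts
rtc@69 {
	/* illustrative values; check the board schematic */
	compatible = "abracon,ab1805";
	reg = <0x69>;
	abracon,tc-diode = "schottky";	/* 0.3V drop */
	abracon,tc-resistor = <3>;	/* 3 ohm output resistor */
};
```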
+5 -3
Documentation/kasan.txt
···
 bugs.

 KASan uses compile-time instrumentation for checking every memory access,
-therefore you will need a certain version of GCC > 4.9.2
+therefore you will need a gcc version of 4.9.2 or later. KASan can detect out
+of bounds accesses to stack or global variables, but only if gcc 5.0 or later
+was used to build the kernel.

 Currently KASan is supported only for x86_64 architecture and requires that the
 kernel be built with the SLUB allocator.
···

 and choose between CONFIG_KASAN_OUTLINE and CONFIG_KASAN_INLINE. Outline/inline
 is compiler instrumentation types. The former produces smaller binary the
-latter is 1.1 - 2 times faster. Inline instrumentation requires GCC 5.0 or
-latter.
+latter is 1.1 - 2 times faster. Inline instrumentation requires a gcc version
+of 5.0 or later.

 Currently KASAN works only with the SLUB memory allocator.
 For better bug detection and nicer report, enable CONFIG_STACKTRACE and put
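As a sketch, a .config enabling the options this document discusses could contain the following illustrative fragment (availability depends on the kernel version and on a new-enough gcc, as noted above):

```
CONFIG_SLUB=y
CONFIG_KASAN=y
CONFIG_KASAN_INLINE=y
CONFIG_STACKTRACE=y
```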
+2
Documentation/kernel-parameters.txt
···
 				READ_CAPACITY_16 command);
 			f = NO_REPORT_OPCODES (don't use report opcodes
 				command, uas only);
+			g = MAX_SECTORS_240 (don't transfer more than
+				240 sectors at a time, uas only);
 			h = CAPACITY_HEURISTICS (decrease the
 				reported device capacity by one
 				sector if the number is odd);
+3 -3
Documentation/module-signing.txt
···
 should be altered from the default:

 	[ req_distinguished_name ]
-	O = Magrathea
-	CN = Glacier signing key
-	emailAddress = slartibartfast@magrathea.h2g2
+	#O = Unspecified company
+	CN = Build time autogenerated kernel key
+	#emailAddress = unspecified.user@unspecified.company

 The generated RSA key size can also be set with:
+9
Documentation/networking/mpls-sysctl.txt
···

 	Possible values: 0 - 1048575
 	Default: 0
+
+conf/<interface>/input - BOOL
+	Control whether packets can be input on this interface.
+
+	If disabled, packets will be discarded without further
+	processing.
+
+	0 - disabled (default)
+	not 0 - enabled
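For example, the new per-interface knob could be enabled with a sysctl.conf-style fragment like the following (the interface name eth0 is an assumption):

```
# illustrative /etc/sysctl.d/ fragment
net.mpls.conf.eth0.input = 1
```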
+1 -1
Documentation/networking/scaling.txt
···

 - The current CPU's queue head counter >= the recorded tail counter
   value in rps_dev_flow[i]
-- The current CPU is unset (equal to RPS_NO_CPU)
+- The current CPU is unset (>= nr_cpu_ids)
 - The current CPU is offline

 After this check, the packet is sent to the (possibly updated) current
+16 -16
Documentation/powerpc/transactional_memory.txt
···
 Syscalls
 ========

-Syscalls made from within an active transaction will not be performed and the
-transaction will be doomed by the kernel with the failure code TM_CAUSE_SYSCALL
-| TM_CAUSE_PERSISTENT.
+Performing syscalls from within a transaction is not recommended, and can
+lead to unpredictable results.

-Syscalls made from within a suspended transaction are performed as normal and
-the transaction is not explicitly doomed by the kernel.  However, what the
-kernel does to perform the syscall may result in the transaction being doomed
-by the hardware.  The syscall is performed in suspended mode so any side
-effects will be persistent, independent of transaction success or failure.  No
-guarantees are provided by the kernel about which syscalls will affect
-transaction success.
+Syscalls do not by design abort transactions, but beware: the kernel code will
+not be running in transactional state.  The effect of syscalls will always
+remain visible, but depending on the call they may abort your transaction as a
+side-effect, read soon-to-be-aborted transactional data that should not remain
+invisible, etc.  If you constantly retry a transaction that constantly aborts
+itself by calling a syscall, you'll have a livelock and make no progress.

-Care must be taken when relying on syscalls to abort during active transactions
-if the calls are made via a library.  Libraries may cache values (which may
-give the appearance of success) or perform operations that cause transaction
-failure before entering the kernel (which may produce different failure codes).
-Examples are glibc's getpid() and lazy symbol resolution.
+Simple syscalls (e.g. sigprocmask()) "could" be OK.  Even things like write()
+from, say, printf() should be OK as long as the kernel does not access any
+memory that was accessed transactionally.
+
+Consider any syscalls that happen to work as debug-only -- not recommended for
+production use.  Best to queue them up till after the transaction is over.


 Signals
···
 TM_CAUSE_RESCHED       Thread was rescheduled.
 TM_CAUSE_TLBI          Software TLB invalid.
 TM_CAUSE_FAC_UNAV      FP/VEC/VSX unavailable trap.
-TM_CAUSE_SYSCALL       Syscall from active transaction.
+TM_CAUSE_SYSCALL       Currently unused; future syscalls that must abort
+                       transactions for consistency will use this.
 TM_CAUSE_SIGNAL        Signal delivered.
 TM_CAUSE_MISC          Currently unused.
 TM_CAUSE_ALIGNMENT     Alignment fault.
+25 -9
MAINTAINERS
···
 F:	arch/arm/mach-alpine/

 ARM/ATMEL AT91RM9200 AND AT91SAM ARM ARCHITECTURES
-M:	Andrew Victor <linux@maxim.org.za>
 M:	Nicolas Ferre <nicolas.ferre@atmel.com>
+M:	Alexandre Belloni <alexandre.belloni@free-electrons.com>
 M:	Jean-Christophe Plagniol-Villard <plagnioj@jcrosoft.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-W:	http://maxim.org.za/at91_26.html
 W:	http://www.linux4sam.org
 S:	Supported
 F:	arch/arm/mach-at91/
···
 F:	drivers/clocksource/timer-prima2.c
 F:	drivers/clocksource/timer-atlas7.c
 N:	[^a-z]sirf
+
+ARM/CONEXANT DIGICOLOR MACHINE SUPPORT
+M:	Baruch Siach <baruch@tkos.co.il>
+L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S:	Maintained
+N:	digicolor

 ARM/EBSA110 MACHINE SUPPORT
 M:	Russell King <linux@arm.linux.org.uk>
···
 M:	Dinh Nguyen <dinguyen@opensource.altera.com>
 S:	Maintained
 F:	arch/arm/mach-socfpga/
+F:	arch/arm/boot/dts/socfpga*
+F:	arch/arm/configs/socfpga_defconfig
 W:	http://www.rocketboards.org
-T:	git://git.rocketboards.org/linux-socfpga.git
-T:	git://git.rocketboards.org/linux-socfpga-next.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dinguyen/linux.git

 ARM/SOCFPGA CLOCK FRAMEWORK SUPPORT
 M:	Dinh Nguyen <dinguyen@opensource.altera.com>
···
 F:	drivers/net/ethernet/broadcom/bnx2x/

 BROADCOM BCM281XX/BCM11XXX/BCM216XX ARM ARCHITECTURE
-M:	Christian Daudt <bcm@fixthebug.org>
 M:	Florian Fainelli <f.fainelli@gmail.com>
+M:	Ray Jui <rjui@broadcom.com>
+M:	Scott Branden <sbranden@broadcom.com>
 L:	bcm-kernel-feedback-list@broadcom.com
 T:	git git://github.com/broadcom/mach-bcm
 S:	Maintained
···
 F:	drivers/usb/gadget/udc/bcm63xx_udc.*

 BROADCOM BCM7XXX ARM ARCHITECTURE
-M:	Marc Carino <marc.ceeeee@gmail.com>
 M:	Brian Norris <computersforpeace@gmail.com>
 M:	Gregory Fong <gregory.0xf0@gmail.com>
 M:	Florian Fainelli <f.fainelli@gmail.com>
···
 F:	drivers/gpu/drm/rcar-du/
 F:	drivers/gpu/drm/shmobile/
 F:	include/linux/platform_data/shmob_drm.h
+
+DRM DRIVERS FOR ROCKCHIP
+M:	Mark Yao <mark.yao@rock-chips.com>
+L:	dri-devel@lists.freedesktop.org
+S:	Maintained
+F:	drivers/gpu/drm/rockchip/
+F:	Documentation/devicetree/bindings/video/rockchip*

 DSBR100 USB FM RADIO DRIVER
 M:	Alexey Klimov <klimov.linux@gmail.com>
···
 F:	drivers/video/fbdev/imsttfb.c

 INFINIBAND SUBSYSTEM
-M:	Roland Dreier <roland@kernel.org>
+M:	Doug Ledford <dledford@redhat.com>
 M:	Sean Hefty <sean.hefty@intel.com>
 M:	Hal Rosenstock <hal.rosenstock@gmail.com>
 L:	linux-rdma@vger.kernel.org
 W:	http://www.openfabrics.org/
 Q:	http://patchwork.kernel.org/project/linux-rdma/list/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/roland/infiniband.git
+T:	git git://github.com/dledford/linux.git
 S:	Supported
 F:	Documentation/infiniband/
 F:	drivers/infiniband/
 F:	include/uapi/linux/if_infiniband.h
+F:	include/uapi/rdma/
+F:	include/rdma/

 INOTIFY
 M:	John McCutchan <john@johnmccutchan.com>
···
 LED SUBSYSTEM
 M:	Bryan Wu <cooloney@gmail.com>
 M:	Richard Purdie <rpurdie@rpsys.net>
+M:	Jacek Anaszewski <j.anaszewski@samsung.com>
 L:	linux-leds@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/cooloney/linux-leds.git
 S:	Maintained
···
 F:	include/uapi/linux/virtio_console.h

 VIRTIO CORE, NET AND BLOCK DRIVERS
-M:	Rusty Russell <rusty@rustcorp.com.au>
 M:	"Michael S. Tsirkin" <mst@redhat.com>
 L:	virtualization@lists.linux-foundation.org
 S:	Maintained
···
 ZRAM COMPRESSED RAM BLOCK DEVICE DRVIER
 M:	Minchan Kim <minchan@kernel.org>
 M:	Nitin Gupta <ngupta@vflare.org>
+R:	Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
 L:	linux-kernel@vger.kernel.org
 S:	Maintained
 F:	drivers/block/zram/
+1 -1
Makefile
···
 VERSION = 4
 PATCHLEVEL = 1
 SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc3
 NAME = Hurr durr I'ma sheep

 # *DOCUMENTATION*
+2 -2
arch/arm/boot/dts/am437x-sk-evm.dts
···
 	pinctrl-0 = <&matrix_keypad_pins>;

 	debounce-delay-ms = <5>;
-	col-scan-delay-us = <1500>;
+	col-scan-delay-us = <5>;

 	row-gpios = <&gpio5 5 GPIO_ACTIVE_HIGH	/* Bank5, pin5 */
 		     &gpio5 6 GPIO_ACTIVE_HIGH>;	/* Bank5, pin6 */
···
 	interrupt-parent = <&gpio0>;
 	interrupts = <31 0>;

-	wake-gpios = <&gpio1 28 GPIO_ACTIVE_HIGH>;
+	reset-gpios = <&gpio1 28 GPIO_ACTIVE_LOW>;

 	touchscreen-size-x = <480>;
 	touchscreen-size-y = <272>;
+6 -5
arch/arm/boot/dts/am57xx-beagle-x15.dts
···
 	aliases {
 		rtc0 = &mcp_rtc;
 		rtc1 = &tps659038_rtc;
+		rtc2 = &rtc;
 	};

 	memory {
···
 	gpio_fan: gpio_fan {
 		/* Based on 5v 500mA AFB02505HHB */
 		compatible = "gpio-fan";
-		gpios = <&tps659038_gpio 1 GPIO_ACTIVE_HIGH>;
+		gpios = <&tps659038_gpio 2 GPIO_ACTIVE_HIGH>;
 		gpio-fan,speed-map = <0 0>,
 				     <13000 1>;
 		#cooling-cells = <2>;
···

 	uart3_pins_default: uart3_pins_default {
 		pinctrl-single,pins = <
-			0x248 (PIN_INPUT_SLEW | MUX_MODE0) /* uart3_rxd.rxd */
-			0x24c (PIN_INPUT_SLEW | MUX_MODE0) /* uart3_txd.txd */
+			0x3f8 (PIN_INPUT_SLEW | MUX_MODE2) /* uart2_ctsn.uart3_rxd */
+			0x3fc (PIN_INPUT_SLEW | MUX_MODE1) /* uart2_rtsn.uart3_txd */
 		>;
 	};
···
 	mcp_rtc: rtc@6f {
 		compatible = "microchip,mcp7941x";
 		reg = <0x6f>;
-		interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_LOW>;  /* IRQ_SYS_1N */
+		interrupts = <GIC_SPI 2 IRQ_TYPE_EDGE_RISING>;  /* IRQ_SYS_1N */

 		pinctrl-names = "default";
 		pinctrl-0 = <&mcp79410_pins_default>;
···
 &uart3 {
 	status = "okay";
 	interrupts-extended = <&crossbar_mpu GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>,
-			      <&dra7_pmx_core 0x248>;
+			      <&dra7_pmx_core 0x3f8>;

 	pinctrl-names = "default";
 	pinctrl-0 = <&uart3_pins_default>;
+4
arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts
···
 	};

 	internal-regs {
+		rtc@10300 {
+			/* No crystal connected to the internal RTC */
+			status = "disabled";
+		};
 		serial@12000 {
 			status = "okay";
 		};
+5 -5
arch/arm/boot/dts/dra7.dtsi
···
 	ti,clock-cycles = <16>;

 	reg = <0x4ae07ddc 0x4>, <0x4ae07de0 0x4>,
-	      <0x4ae06014 0x4>, <0x4a003b20 0x8>,
+	      <0x4ae06014 0x4>, <0x4a003b20 0xc>,
 	      <0x4ae0c158 0x4>;
 	reg-names = "setup-address", "control-address",
 		    "int-address", "efuse-address",
···
 	ti,clock-cycles = <16>;

 	reg = <0x4ae07e34 0x4>, <0x4ae07e24 0x4>,
-	      <0x4ae06010 0x4>, <0x4a0025cc 0x8>,
+	      <0x4ae06010 0x4>, <0x4a0025cc 0xc>,
 	      <0x4a002470 0x4>;
 	reg-names = "setup-address", "control-address",
 		    "int-address", "efuse-address",
···
 	ti,clock-cycles = <16>;

 	reg = <0x4ae07e30 0x4>, <0x4ae07e20 0x4>,
-	      <0x4ae06010 0x4>, <0x4a0025e0 0x8>,
+	      <0x4ae06010 0x4>, <0x4a0025e0 0xc>,
 	      <0x4a00246c 0x4>;
 	reg-names = "setup-address", "control-address",
 		    "int-address", "efuse-address",
···
 	ti,clock-cycles = <16>;

 	reg = <0x4ae07de4 0x4>, <0x4ae07de8 0x4>,
-	      <0x4ae06010 0x4>, <0x4a003b08 0x8>,
+	      <0x4ae06010 0x4>, <0x4a003b08 0xc>,
 	      <0x4ae0c154 0x4>;
 	reg-names = "setup-address", "control-address",
 		    "int-address", "efuse-address",
···
 	status = "disabled";
 };

-rtc@48838000 {
+rtc: rtc@48838000 {
 	compatible = "ti,am3352-rtc";
 	reg = <0x48838000 0x100>;
 	interrupts = <GIC_SPI 217 IRQ_TYPE_LEVEL_HIGH>,
+3
arch/arm/boot/dts/exynos4412-odroid-common.dtsi
···

 #include <dt-bindings/sound/samsung-i2s.h>
 #include <dt-bindings/input/input.h>
+#include <dt-bindings/clock/maxim,max77686.h>
 #include "exynos4412.dtsi"

 / {
···

 	rtc@10070000 {
 		status = "okay";
+		clocks = <&clock CLK_RTC>, <&max77686 MAX77686_CLK_AP>;
+		clock-names = "rtc", "rtc_src";
 	};

 	g2d@10800000 {
+1
arch/arm/boot/dts/exynos5250-snow.dts
···
 	num-slots = <1>;
 	broken-cd;
 	cap-sdio-irq;
+	keep-power-in-suspend;
 	card-detect-delay = <200>;
 	samsung,dw-mshc-ciu-div = <3>;
 	samsung,dw-mshc-sdr-timing = <2 3>;
+1 -1
arch/arm/boot/dts/exynos5420-trip-points.dtsi
···
 	type = "active";
 };
 cpu-crit-0 {
-	temperature = <1200000>; /* millicelsius */
+	temperature = <120000>; /* millicelsius */
 	hysteresis = <0>; /* millicelsius */
 	type = "critical";
 };
+1
arch/arm/boot/dts/exynos5420.dtsi
···
 	clock-names = "dp";
 	phys = <&dp_phy>;
 	phy-names = "dp";
+	power-domains = <&disp_pd>;
 };

 mipi_phy: video-phy@10040714 {
+1 -1
arch/arm/boot/dts/exynos5440-trip-points.dtsi
···
 	type = "active";
 };
 cpu-crit-0 {
-	temperature = <1050000>; /* millicelsius */
+	temperature = <105000>; /* millicelsius */
 	hysteresis = <0>; /* millicelsius */
 	type = "critical";
 };
+3 -1
arch/arm/boot/dts/imx23-olinuxino.dts
···
 */

 /dts-v1/;
+#include <dt-bindings/gpio/gpio.h>
 #include "imx23.dtsi"

 / {
···

 	ahb@80080000 {
 		usb0: usb@80080000 {
+			dr_mode = "host";
 			vbus-supply = <&reg_usb0_vbus>;
 			status = "okay";
 		};
···

 		user {
 			label = "green";
-			gpios = <&gpio2 1 1>;
+			gpios = <&gpio2 1 GPIO_ACTIVE_HIGH>;
 		};
 	};
 };
+1
arch/arm/boot/dts/imx25.dtsi
···

 pwm4: pwm@53fc8000 {
 	compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
+	#pwm-cells = <2>;
 	reg = <0x53fc8000 0x4000>;
 	clocks = <&clks 108>, <&clks 52>;
 	clock-names = "ipg", "per";
+1 -1
arch/arm/boot/dts/imx28.dtsi
···
 		       80 81 68 69
 		       70 71 72 73
 		       74 75 76 77>;
-	interrupt-names = "auart4-rx", "aurat4-tx", "spdif-tx", "empty",
+	interrupt-names = "auart4-rx", "auart4-tx", "spdif-tx", "empty",
 			  "saif0", "saif1", "i2c0", "i2c1",
 			  "auart0-rx", "auart0-tx", "auart1-rx", "auart1-tx",
 			  "auart2-rx", "auart2-tx", "auart3-rx", "auart3-tx";
+2
arch/arm/boot/dts/imx6qdl-phytec-pfla02.dtsi
···
 	regulator-min-microvolt = <5000000>;
 	regulator-max-microvolt = <5000000>;
 	gpio = <&gpio4 15 0>;
+	enable-active-high;
 };

 reg_usb_h1_vbus: regulator@1 {
···
 	regulator-min-microvolt = <5000000>;
 	regulator-max-microvolt = <5000000>;
 	gpio = <&gpio1 0 0>;
+	enable-active-high;
 };
 };

-1
arch/arm/boot/dts/imx6qdl-sabreauto.dtsi
···
 &i2c3 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_i2c3>;
-	pinctrl-assert-gpios = <&gpio5 4 GPIO_ACTIVE_HIGH>;
 	status = "okay";

 	max7310_a: gpio@30 {
+4
arch/arm/boot/dts/omap3-n900.dts
···
 	DRVDD-supply = <&vmmc2>;
 	IOVDD-supply = <&vio>;
 	DVDD-supply = <&vio>;
+
+	ai3x-micbias-vg = <1>;
 };

 tlv320aic3x_aux: tlv320aic3x@19 {
···
 	DRVDD-supply = <&vmmc2>;
 	IOVDD-supply = <&vio>;
 	DVDD-supply = <&vio>;
+
+	ai3x-micbias-vg = <2>;
 };

 tsl2563: tsl2563@29 {
+2
arch/arm/boot/dts/omap3.dtsi
···
 };

 mmu_isp: mmu@480bd400 {
+	#iommu-cells = <0>;
 	compatible = "ti,omap2-iommu";
 	reg = <0x480bd400 0x80>;
 	interrupts = <24>;
···
 };

 mmu_iva: mmu@5d000000 {
+	#iommu-cells = <0>;
 	compatible = "ti,omap2-iommu";
 	reg = <0x5d000000 0x80>;
 	interrupts = <28>;
+1 -1
arch/arm/boot/dts/omap5.dtsi
···
 	 * hierarchy.
 	 */
 	ocp {
-		compatible = "ti,omap4-l3-noc", "simple-bus";
+		compatible = "ti,omap5-l3-noc", "simple-bus";
 		#address-cells = <1>;
 		#size-cells = <1>;
 		ranges;
+1 -1
arch/arm/boot/dts/r8a7791-koelsch.dts
···
 	compatible = "adi,adv7511w";
 	reg = <0x39>;
 	interrupt-parent = <&gpio3>;
-	interrupts = <29 IRQ_TYPE_EDGE_FALLING>;
+	interrupts = <29 IRQ_TYPE_LEVEL_LOW>;

 	adi,input-depth = <8>;
 	adi,input-colorspace = "rgb";
-17
arch/arm/boot/dts/ste-dbx5x0.dtsi
···
 	status = "disabled";
 };

-vmmci: regulator-gpio {
-	compatible = "regulator-gpio";
-
-	regulator-min-microvolt = <1800000>;
-	regulator-max-microvolt = <2900000>;
-	regulator-name = "mmci-reg";
-	regulator-type = "voltage";
-
-	startup-delay-us = <100>;
-	enable-active-high;
-
-	states = <1800000 0x1
-		  2900000 0x0>;
-
-	status = "disabled";
-};
-
 mcde@a0350000 {
 	compatible = "stericsson,mcde";
 	reg = <0xa0350000 0x1000>, /* MCDE */
+15
arch/arm/boot/dts/ste-href.dtsi
···
 	pinctrl-1 = <&i2c3_sleep_mode>;
 };

+vmmci: regulator-gpio {
+	compatible = "regulator-gpio";
+
+	regulator-min-microvolt = <1800000>;
+	regulator-max-microvolt = <2900000>;
+	regulator-name = "mmci-reg";
+	regulator-type = "voltage";
+
+	startup-delay-us = <100>;
+	enable-active-high;
+
+	states = <1800000 0x1
+		  2900000 0x0>;
+};
+
 // External Micro SD slot
 sdi0_per1@80126000 {
 	arm,primecell-periphid = <0x10480180>;
+13
arch/arm/boot/dts/ste-snowball.dts
···
 };

 vmmci: regulator-gpio {
+	compatible = "regulator-gpio";
+
 	gpios = <&gpio7 4 0x4>;
 	enable-gpio = <&gpio6 25 0x4>;
+
+	regulator-min-microvolt = <1800000>;
+	regulator-max-microvolt = <2900000>;
+	regulator-name = "mmci-reg";
+	regulator-type = "voltage";
+
+	startup-delay-us = <100>;
+	enable-active-high;
+
+	states = <1800000 0x1
+		  2900000 0x0>;
 };

 // External Micro SD slot
+3
arch/arm/configs/multi_v7_defconfig
···
 CONFIG_ARCH_KEYSTONE=y
 CONFIG_ARCH_MESON=y
 CONFIG_ARCH_MXC=y
+CONFIG_SOC_IMX50=y
 CONFIG_SOC_IMX51=y
 CONFIG_SOC_IMX53=y
 CONFIG_SOC_IMX6Q=y
 CONFIG_SOC_IMX6SL=y
+CONFIG_SOC_IMX6SX=y
 CONFIG_SOC_VF610=y
+CONFIG_SOC_LS1021A=y
 CONFIG_ARCH_OMAP3=y
 CONFIG_ARCH_OMAP4=y
 CONFIG_SOC_OMAP5=y
+1 -1
arch/arm/configs/omap2plus_defconfig
···
 CONFIG_DMA_OMAP=y
 # CONFIG_IOMMU_SUPPORT is not set
 CONFIG_EXTCON=m
-CONFIG_EXTCON_GPIO=m
+CONFIG_EXTCON_USB_GPIO=m
 CONFIG_EXTCON_PALMAS=m
 CONFIG_TI_EMIF=m
 CONFIG_PWM=y
+1 -1
arch/arm/include/asm/dma-iommu.h
···
 };

 struct dma_iommu_mapping *
-arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size);
+arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, u64 size);

 void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping);
+1
arch/arm/include/asm/xen/page.h
···
 bool xen_arch_need_swiotlb(struct device *dev,
 			   unsigned long pfn,
 			   unsigned long mfn);
+unsigned long xen_get_swiotlb_free_pages(unsigned int order);

 #endif /* _ASM_ARM_XEN_PAGE_H */
+7 -2
arch/arm/kernel/perf_event_cpu.c
···

 static int of_pmu_irq_cfg(struct platform_device *pdev)
 {
-	int i;
+	int i, irq;
 	int *irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL);

 	if (!irqs)
 		return -ENOMEM;
+
+	/* Don't bother with PPIs; they're already affine */
+	irq = platform_get_irq(pdev, 0);
+	if (irq >= 0 && irq_is_percpu(irq))
+		return 0;

 	for (i = 0; i < pdev->num_resources; ++i) {
 		struct device_node *dn;
···
 				      i);
 		if (!dn) {
 			pr_warn("Failed to parse %s/interrupt-affinity[%d]\n",
-				of_node_full_name(dn), i);
+				of_node_full_name(pdev->dev.of_node), i);
 			break;
 		}

+1 -1
arch/arm/mach-imx/devices/platform-sdhci-esdhc-imx.c
···
 /*
- * Copyright (C) 2010 Pengutronix, Wolfram Sang <w.sang@pengutronix.de>
+ * Copyright (C) 2010 Pengutronix, Wolfram Sang <kernel@pengutronix.de>
  *
  * This program is free software; you can redistribute it and/or modify it under
  * the terms of the GNU General Public License version 2 as published by the
+1
arch/arm/mach-omap2/prm-regbits-34xx.h
···
 #define OMAP3430_VC_CMD_ONLP_SHIFT			16
 #define OMAP3430_VC_CMD_RET_SHIFT			8
 #define OMAP3430_VC_CMD_OFF_SHIFT			0
+#define OMAP3430_SREN_MASK				(1 << 4)
 #define OMAP3430_HSEN_MASK				(1 << 3)
 #define OMAP3430_MCODE_MASK				(0x7 << 0)
 #define OMAP3430_VALID_MASK				(1 << 24)
+1
arch/arm/mach-omap2/prm-regbits-44xx.h
···
 #define OMAP4430_GLOBAL_WARM_SW_RST_SHIFT		1
 #define OMAP4430_GLOBAL_WUEN_MASK			(1 << 16)
 #define OMAP4430_HSMCODE_MASK				(0x7 << 0)
+#define OMAP4430_SRMODEEN_MASK				(1 << 4)
 #define OMAP4430_HSMODEEN_MASK				(1 << 3)
 #define OMAP4430_HSSCLL_SHIFT				24
 #define OMAP4430_ICEPICK_RST_SHIFT			9
+10 -2
arch/arm/mach-omap2/vc.c
···
 	 * idle. And we can also scale voltages to zero for off-idle.
 	 * Note that no actual voltage scaling during off-idle will
 	 * happen unless the board specific twl4030 PMIC scripts are
-	 * loaded.
+	 * loaded. See also omap_vc_i2c_init for comments regarding
+	 * erratum i531.
 	 */
 	val = voltdm->read(OMAP3_PRM_VOLTCTRL_OFFSET);
 	if (!(val & OMAP3430_PRM_VOLTCTRL_SEL_OFF)) {
···
 		return;
 	}

+	/*
+	 * Note that for omap3 OMAP3430_SREN_MASK clears SREN to work around
+	 * erratum i531 "Extra Power Consumed When Repeated Start Operation
+	 * Mode Is Enabled on I2C Interface Dedicated for Smart Reflex (I2C4)".
+	 * Otherwise I2C4 eventually leads into about 23mW extra power being
+	 * consumed even during off idle using VMODE.
+	 */
 	i2c_high_speed = voltdm->pmic->i2c_high_speed;
 	if (i2c_high_speed)
-		voltdm->rmw(vc->common->i2c_cfg_hsen_mask,
+		voltdm->rmw(vc->common->i2c_cfg_clear_mask,
 			    vc->common->i2c_cfg_hsen_mask,
 			    vc->common->i2c_cfg_reg);

+2
arch/arm/mach-omap2/vc.h
···
 * @cmd_ret_shift: RET field shift in PRM_VC_CMD_VAL_* register
 * @cmd_off_shift: OFF field shift in PRM_VC_CMD_VAL_* register
 * @i2c_cfg_reg: I2C configuration register offset
+ * @i2c_cfg_clear_mask: high-speed mode bit clear mask in I2C config register
 * @i2c_cfg_hsen_mask: high-speed mode bit field mask in I2C config register
 * @i2c_mcode_mask: MCODE field mask for I2C config register
 *
···
 	u8 cmd_ret_shift;
 	u8 cmd_off_shift;
 	u8 i2c_cfg_reg;
+	u8 i2c_cfg_clear_mask;
 	u8 i2c_cfg_hsen_mask;
 	u8 i2c_mcode_mask;
 };
+1
arch/arm/mach-omap2/vc3xxx_data.c
···
 	.cmd_onlp_shift = OMAP3430_VC_CMD_ONLP_SHIFT,
 	.cmd_ret_shift = OMAP3430_VC_CMD_RET_SHIFT,
 	.cmd_off_shift = OMAP3430_VC_CMD_OFF_SHIFT,
+	.i2c_cfg_clear_mask = OMAP3430_SREN_MASK | OMAP3430_HSEN_MASK,
 	.i2c_cfg_hsen_mask = OMAP3430_HSEN_MASK,
 	.i2c_cfg_reg = OMAP3_PRM_VC_I2C_CFG_OFFSET,
 	.i2c_mcode_mask = OMAP3430_MCODE_MASK,
+1
arch/arm/mach-omap2/vc44xx_data.c
···
 	.cmd_ret_shift = OMAP4430_RET_SHIFT,
 	.cmd_off_shift = OMAP4430_OFF_SHIFT,
 	.i2c_cfg_reg = OMAP4_PRM_VC_CFG_I2C_MODE_OFFSET,
+	.i2c_cfg_clear_mask = OMAP4430_SRMODEEN_MASK | OMAP4430_HSMODEEN_MASK,
 	.i2c_cfg_hsen_mask = OMAP4430_HSMODEEN_MASK,
 	.i2c_mcode_mask = OMAP4430_HSMCODE_MASK,
 };
+9
arch/arm/mach-pxa/Kconfig
···
 config PXA310_ULPI
 	bool

+config PXA_SYSTEMS_CPLDS
+	tristate "Motherboard cplds"
+	default ARCH_LUBBOCK || MACH_MAINSTONE
+	help
+	  This driver supports the Lubbock and Mainstone multifunction chip
+	  found on the pxa25x development platform system (Lubbock) and pxa27x
+	  development platform system (Mainstone). This IO board supports the
+	  interrupts handling, ethernet controller, flash chips, etc ...
+
 endif
+1
arch/arm/mach-pxa/Makefile
···
 obj-$(CONFIG_MACH_RAUMFELD_SPEAKER)	+= raumfeld.o
 obj-$(CONFIG_MACH_ZIPIT2)		+= z2.o

+obj-$(CONFIG_PXA_SYSTEMS_CPLDS)	+= pxa_cplds_irqs.o
 obj-$(CONFIG_TOSA_BT)			+= tosa-bt.o
+4 -3
arch/arm/mach-pxa/include/mach/lubbock.h
··· 37 37 #define LUB_GP __LUB_REG(LUBBOCK_FPGA_PHYS + 0x100) 38 38 39 39 /* Board specific IRQs */ 40 - #define LUBBOCK_IRQ(x) (IRQ_BOARD_START + (x)) 40 + #define LUBBOCK_NR_IRQS IRQ_BOARD_START 41 + 42 + #define LUBBOCK_IRQ(x) (LUBBOCK_NR_IRQS + (x)) 41 43 #define LUBBOCK_SD_IRQ LUBBOCK_IRQ(0) 42 44 #define LUBBOCK_SA1111_IRQ LUBBOCK_IRQ(1) 43 45 #define LUBBOCK_USB_IRQ LUBBOCK_IRQ(2) /* usb connect */ ··· 49 47 #define LUBBOCK_USB_DISC_IRQ LUBBOCK_IRQ(6) /* usb disconnect */ 50 48 #define LUBBOCK_LAST_IRQ LUBBOCK_IRQ(6) 51 49 52 - #define LUBBOCK_SA1111_IRQ_BASE (IRQ_BOARD_START + 16) 53 - #define LUBBOCK_NR_IRQS (IRQ_BOARD_START + 16 + 55) 50 + #define LUBBOCK_SA1111_IRQ_BASE (LUBBOCK_NR_IRQS + 32) 54 51 55 52 #ifndef __ASSEMBLY__ 56 53 extern void lubbock_set_misc_wr(unsigned int mask, unsigned int set);
+3 -3
arch/arm/mach-pxa/include/mach/mainstone.h
··· 120 120 #define MST_PCMCIA_PWR_VCC_50 0x4 /* voltage VCC = 5.0V */ 121 121 122 122 /* board specific IRQs */ 123 - #define MAINSTONE_IRQ(x) (IRQ_BOARD_START + (x)) 123 + #define MAINSTONE_NR_IRQS IRQ_BOARD_START 124 + 125 + #define MAINSTONE_IRQ(x) (MAINSTONE_NR_IRQS + (x)) 124 126 #define MAINSTONE_MMC_IRQ MAINSTONE_IRQ(0) 125 127 #define MAINSTONE_USIM_IRQ MAINSTONE_IRQ(1) 126 128 #define MAINSTONE_USBC_IRQ MAINSTONE_IRQ(2) ··· 137 135 #define MAINSTONE_S1_CD_IRQ MAINSTONE_IRQ(13) 138 136 #define MAINSTONE_S1_STSCHG_IRQ MAINSTONE_IRQ(14) 139 137 #define MAINSTONE_S1_IRQ MAINSTONE_IRQ(15) 140 - 141 - #define MAINSTONE_NR_IRQS (IRQ_BOARD_START + 16) 142 138 143 139 #endif
+29 -79
arch/arm/mach-pxa/lubbock.c
··· 12 12 * published by the Free Software Foundation. 13 13 */ 14 14 #include <linux/gpio.h> 15 + #include <linux/gpio/machine.h> 15 16 #include <linux/module.h> 16 17 #include <linux/kernel.h> 17 18 #include <linux/init.h> ··· 123 122 local_irq_restore(flags); 124 123 } 125 124 EXPORT_SYMBOL(lubbock_set_misc_wr); 126 - 127 - static unsigned long lubbock_irq_enabled; 128 - 129 - static void lubbock_mask_irq(struct irq_data *d) 130 - { 131 - int lubbock_irq = (d->irq - LUBBOCK_IRQ(0)); 132 - LUB_IRQ_MASK_EN = (lubbock_irq_enabled &= ~(1 << lubbock_irq)); 133 - } 134 - 135 - static void lubbock_unmask_irq(struct irq_data *d) 136 - { 137 - int lubbock_irq = (d->irq - LUBBOCK_IRQ(0)); 138 - /* the irq can be acknowledged only if deasserted, so it's done here */ 139 - LUB_IRQ_SET_CLR &= ~(1 << lubbock_irq); 140 - LUB_IRQ_MASK_EN = (lubbock_irq_enabled |= (1 << lubbock_irq)); 141 - } 142 - 143 - static struct irq_chip lubbock_irq_chip = { 144 - .name = "FPGA", 145 - .irq_ack = lubbock_mask_irq, 146 - .irq_mask = lubbock_mask_irq, 147 - .irq_unmask = lubbock_unmask_irq, 148 - }; 149 - 150 - static void lubbock_irq_handler(unsigned int irq, struct irq_desc *desc) 151 - { 152 - unsigned long pending = LUB_IRQ_SET_CLR & lubbock_irq_enabled; 153 - do { 154 - /* clear our parent irq */ 155 - desc->irq_data.chip->irq_ack(&desc->irq_data); 156 - if (likely(pending)) { 157 - irq = LUBBOCK_IRQ(0) + __ffs(pending); 158 - generic_handle_irq(irq); 159 - } 160 - pending = LUB_IRQ_SET_CLR & lubbock_irq_enabled; 161 - } while (pending); 162 - } 163 - 164 - static void __init lubbock_init_irq(void) 165 - { 166 - int irq; 167 - 168 - pxa25x_init_irq(); 169 - 170 - /* setup extra lubbock irqs */ 171 - for (irq = LUBBOCK_IRQ(0); irq <= LUBBOCK_LAST_IRQ; irq++) { 172 - irq_set_chip_and_handler(irq, &lubbock_irq_chip, 173 - handle_level_irq); 174 - set_irq_flags(irq, IRQF_VALID | IRQF_PROBE); 175 - } 176 - 177 - irq_set_chained_handler(PXA_GPIO_TO_IRQ(0), lubbock_irq_handler); 178 - 
irq_set_irq_type(PXA_GPIO_TO_IRQ(0), IRQ_TYPE_EDGE_FALLING); 179 - } 180 - 181 - #ifdef CONFIG_PM 182 - 183 - static void lubbock_irq_resume(void) 184 - { 185 - LUB_IRQ_MASK_EN = lubbock_irq_enabled; 186 - } 187 - 188 - static struct syscore_ops lubbock_irq_syscore_ops = { 189 - .resume = lubbock_irq_resume, 190 - }; 191 - 192 - static int __init lubbock_irq_device_init(void) 193 - { 194 - if (machine_is_lubbock()) { 195 - register_syscore_ops(&lubbock_irq_syscore_ops); 196 - return 0; 197 - } 198 - return -ENODEV; 199 - } 200 - 201 - device_initcall(lubbock_irq_device_init); 202 - 203 - #endif 204 125 205 126 static int lubbock_udc_is_connected(void) 206 127 { ··· 306 383 }, 307 384 }; 308 385 386 + static struct resource lubbock_cplds_resources[] = { 387 + [0] = { 388 + .start = LUBBOCK_FPGA_PHYS + 0xc0, 389 + .end = LUBBOCK_FPGA_PHYS + 0xe0 - 1, 390 + .flags = IORESOURCE_MEM, 391 + }, 392 + [1] = { 393 + .start = PXA_GPIO_TO_IRQ(0), 394 + .end = PXA_GPIO_TO_IRQ(0), 395 + .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_LOWEDGE, 396 + }, 397 + [2] = { 398 + .start = LUBBOCK_IRQ(0), 399 + .end = LUBBOCK_IRQ(6), 400 + .flags = IORESOURCE_IRQ, 401 + }, 402 + }; 403 + 404 + static struct platform_device lubbock_cplds_device = { 405 + .name = "pxa_cplds_irqs", 406 + .id = -1, 407 + .resource = &lubbock_cplds_resources[0], 408 + .num_resources = 3, 409 + }; 410 + 411 + 309 412 static struct platform_device *devices[] __initdata = { 310 413 &sa1111_device, 311 414 &smc91x_device, 312 415 &lubbock_flash_device[0], 313 416 &lubbock_flash_device[1], 417 + &lubbock_cplds_device, 314 418 }; 315 419 316 420 static struct pxafb_mode_info sharp_lm8v31_mode = { ··· 598 648 /* Maintainer: MontaVista Software Inc. */ 599 649 .map_io = lubbock_map_io, 600 650 .nr_irqs = LUBBOCK_NR_IRQS, 601 - .init_irq = lubbock_init_irq, 651 + .init_irq = pxa25x_init_irq, 602 652 .handle_irq = pxa25x_handle_irq, 603 653 .init_time = pxa_timer_init, 604 654 .init_machine = lubbock_init,
+28 -87
arch/arm/mach-pxa/mainstone.c
··· 13 13 * published by the Free Software Foundation. 14 14 */ 15 15 #include <linux/gpio.h> 16 + #include <linux/gpio/machine.h> 16 17 #include <linux/init.h> 17 18 #include <linux/platform_device.h> 18 19 #include <linux/syscore_ops.h> ··· 122 121 /* GPIO */ 123 122 GPIO1_GPIO | WAKEUP_ON_EDGE_BOTH, 124 123 }; 125 - 126 - static unsigned long mainstone_irq_enabled; 127 - 128 - static void mainstone_mask_irq(struct irq_data *d) 129 - { 130 - int mainstone_irq = (d->irq - MAINSTONE_IRQ(0)); 131 - MST_INTMSKENA = (mainstone_irq_enabled &= ~(1 << mainstone_irq)); 132 - } 133 - 134 - static void mainstone_unmask_irq(struct irq_data *d) 135 - { 136 - int mainstone_irq = (d->irq - MAINSTONE_IRQ(0)); 137 - /* the irq can be acknowledged only if deasserted, so it's done here */ 138 - MST_INTSETCLR &= ~(1 << mainstone_irq); 139 - MST_INTMSKENA = (mainstone_irq_enabled |= (1 << mainstone_irq)); 140 - } 141 - 142 - static struct irq_chip mainstone_irq_chip = { 143 - .name = "FPGA", 144 - .irq_ack = mainstone_mask_irq, 145 - .irq_mask = mainstone_mask_irq, 146 - .irq_unmask = mainstone_unmask_irq, 147 - }; 148 - 149 - static void mainstone_irq_handler(unsigned int irq, struct irq_desc *desc) 150 - { 151 - unsigned long pending = MST_INTSETCLR & mainstone_irq_enabled; 152 - do { 153 - /* clear useless edge notification */ 154 - desc->irq_data.chip->irq_ack(&desc->irq_data); 155 - if (likely(pending)) { 156 - irq = MAINSTONE_IRQ(0) + __ffs(pending); 157 - generic_handle_irq(irq); 158 - } 159 - pending = MST_INTSETCLR & mainstone_irq_enabled; 160 - } while (pending); 161 - } 162 - 163 - static void __init mainstone_init_irq(void) 164 - { 165 - int irq; 166 - 167 - pxa27x_init_irq(); 168 - 169 - /* setup extra Mainstone irqs */ 170 - for(irq = MAINSTONE_IRQ(0); irq <= MAINSTONE_IRQ(15); irq++) { 171 - irq_set_chip_and_handler(irq, &mainstone_irq_chip, 172 - handle_level_irq); 173 - if (irq == MAINSTONE_IRQ(10) || irq == MAINSTONE_IRQ(14)) 174 - set_irq_flags(irq, IRQF_VALID | 
IRQF_PROBE | IRQF_NOAUTOEN); 175 - else 176 - set_irq_flags(irq, IRQF_VALID | IRQF_PROBE); 177 - } 178 - set_irq_flags(MAINSTONE_IRQ(8), 0); 179 - set_irq_flags(MAINSTONE_IRQ(12), 0); 180 - 181 - MST_INTMSKENA = 0; 182 - MST_INTSETCLR = 0; 183 - 184 - irq_set_chained_handler(PXA_GPIO_TO_IRQ(0), mainstone_irq_handler); 185 - irq_set_irq_type(PXA_GPIO_TO_IRQ(0), IRQ_TYPE_EDGE_FALLING); 186 - } 187 - 188 - #ifdef CONFIG_PM 189 - 190 - static void mainstone_irq_resume(void) 191 - { 192 - MST_INTMSKENA = mainstone_irq_enabled; 193 - } 194 - 195 - static struct syscore_ops mainstone_irq_syscore_ops = { 196 - .resume = mainstone_irq_resume, 197 - }; 198 - 199 - static int __init mainstone_irq_device_init(void) 200 - { 201 - if (machine_is_mainstone()) 202 - register_syscore_ops(&mainstone_irq_syscore_ops); 203 - 204 - return 0; 205 - } 206 - 207 - device_initcall(mainstone_irq_device_init); 208 - 209 - #endif 210 - 211 124 212 125 static struct resource smc91x_resources[] = { 213 126 [0] = { ··· 402 487 }, 403 488 }; 404 489 490 + static struct resource mst_cplds_resources[] = { 491 + [0] = { 492 + .start = MST_FPGA_PHYS + 0xc0, 493 + .end = MST_FPGA_PHYS + 0xe0 - 1, 494 + .flags = IORESOURCE_MEM, 495 + }, 496 + [1] = { 497 + .start = PXA_GPIO_TO_IRQ(0), 498 + .end = PXA_GPIO_TO_IRQ(0), 499 + .flags = IORESOURCE_IRQ | IORESOURCE_IRQ_LOWEDGE, 500 + }, 501 + [2] = { 502 + .start = MAINSTONE_IRQ(0), 503 + .end = MAINSTONE_IRQ(15), 504 + .flags = IORESOURCE_IRQ, 505 + }, 506 + }; 507 + 508 + static struct platform_device mst_cplds_device = { 509 + .name = "pxa_cplds_irqs", 510 + .id = -1, 511 + .resource = &mst_cplds_resources[0], 512 + .num_resources = 3, 513 + }; 514 + 405 515 static struct platform_device *platform_devices[] __initdata = { 406 516 &smc91x_device, 407 517 &mst_flash_device[0], 408 518 &mst_flash_device[1], 409 519 &mst_gpio_keys_device, 520 + &mst_cplds_device, 410 521 }; 411 522 412 523 static struct pxaohci_platform_data mainstone_ohci_platform_data = { 
··· 659 718 .atag_offset = 0x100, /* BLOB boot parameter setting */ 660 719 .map_io = mainstone_map_io, 661 720 .nr_irqs = MAINSTONE_NR_IRQS, 662 - .init_irq = mainstone_init_irq, 721 + .init_irq = pxa27x_init_irq, 663 722 .handle_irq = pxa27x_handle_irq, 664 723 .init_time = pxa_timer_init, 665 724 .init_machine = mainstone_init,
+200
arch/arm/mach-pxa/pxa_cplds_irqs.c
··· 1 + /* 2 + * Intel Reference Systems cplds 3 + * 4 + * Copyright (C) 2014 Robert Jarzmik 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License as published by 8 + * the Free Software Foundation; either version 2 of the License, or 9 + * (at your option) any later version. 10 + * 11 + * Cplds motherboard driver, supporting lubbock and mainstone SoC board. 12 + */ 13 + 14 + #include <linux/bitops.h> 15 + #include <linux/gpio.h> 16 + #include <linux/gpio/consumer.h> 17 + #include <linux/interrupt.h> 18 + #include <linux/io.h> 19 + #include <linux/irq.h> 20 + #include <linux/irqdomain.h> 21 + #include <linux/mfd/core.h> 22 + #include <linux/module.h> 23 + #include <linux/of_platform.h> 24 + 25 + #define FPGA_IRQ_MASK_EN 0x0 26 + #define FPGA_IRQ_SET_CLR 0x10 27 + 28 + #define CPLDS_NB_IRQ 32 29 + 30 + struct cplds { 31 + void __iomem *base; 32 + int irq; 33 + unsigned int irq_mask; 34 + struct gpio_desc *gpio0; 35 + struct irq_domain *irqdomain; 36 + }; 37 + 38 + static irqreturn_t cplds_irq_handler(int in_irq, void *d) 39 + { 40 + struct cplds *fpga = d; 41 + unsigned long pending; 42 + unsigned int bit; 43 + 44 + pending = readl(fpga->base + FPGA_IRQ_SET_CLR) & fpga->irq_mask; 45 + for_each_set_bit(bit, &pending, CPLDS_NB_IRQ) 46 + generic_handle_irq(irq_find_mapping(fpga->irqdomain, bit)); 47 + 48 + return IRQ_HANDLED; 49 + } 50 + 51 + static void cplds_irq_mask_ack(struct irq_data *d) 52 + { 53 + struct cplds *fpga = irq_data_get_irq_chip_data(d); 54 + unsigned int cplds_irq = irqd_to_hwirq(d); 55 + unsigned int set, bit = BIT(cplds_irq); 56 + 57 + fpga->irq_mask &= ~bit; 58 + writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN); 59 + set = readl(fpga->base + FPGA_IRQ_SET_CLR); 60 + writel(set & ~bit, fpga->base + FPGA_IRQ_SET_CLR); 61 + } 62 + 63 + static void cplds_irq_unmask(struct irq_data *d) 64 + { 65 + struct cplds *fpga = irq_data_get_irq_chip_data(d); 66 + unsigned 
int cplds_irq = irqd_to_hwirq(d); 67 + unsigned int bit = BIT(cplds_irq); 68 + 69 + fpga->irq_mask |= bit; 70 + writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN); 71 + } 72 + 73 + static struct irq_chip cplds_irq_chip = { 74 + .name = "pxa_cplds", 75 + .irq_mask_ack = cplds_irq_mask_ack, 76 + .irq_unmask = cplds_irq_unmask, 77 + .flags = IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_SKIP_SET_WAKE, 78 + }; 79 + 80 + static int cplds_irq_domain_map(struct irq_domain *d, unsigned int irq, 81 + irq_hw_number_t hwirq) 82 + { 83 + struct cplds *fpga = d->host_data; 84 + 85 + irq_set_chip_and_handler(irq, &cplds_irq_chip, handle_level_irq); 86 + irq_set_chip_data(irq, fpga); 87 + 88 + return 0; 89 + } 90 + 91 + static const struct irq_domain_ops cplds_irq_domain_ops = { 92 + .xlate = irq_domain_xlate_twocell, 93 + .map = cplds_irq_domain_map, 94 + }; 95 + 96 + static int cplds_resume(struct platform_device *pdev) 97 + { 98 + struct cplds *fpga = platform_get_drvdata(pdev); 99 + 100 + writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN); 101 + 102 + return 0; 103 + } 104 + 105 + static int cplds_probe(struct platform_device *pdev) 106 + { 107 + struct resource *res; 108 + struct cplds *fpga; 109 + int ret; 110 + unsigned int base_irq = 0; 111 + unsigned long irqflags = 0; 112 + 113 + fpga = devm_kzalloc(&pdev->dev, sizeof(*fpga), GFP_KERNEL); 114 + if (!fpga) 115 + return -ENOMEM; 116 + 117 + res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 118 + if (res) { 119 + fpga->irq = (unsigned int)res->start; 120 + irqflags = res->flags; 121 + } 122 + if (!fpga->irq) 123 + return -ENODEV; 124 + 125 + base_irq = platform_get_irq(pdev, 1); 126 + if (base_irq < 0) 127 + base_irq = 0; 128 + 129 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 130 + fpga->base = devm_ioremap_resource(&pdev->dev, res); 131 + if (IS_ERR(fpga->base)) 132 + return PTR_ERR(fpga->base); 133 + 134 + platform_set_drvdata(pdev, fpga); 135 + 136 + writel(fpga->irq_mask, fpga->base + FPGA_IRQ_MASK_EN); 137 + 
writel(0, fpga->base + FPGA_IRQ_SET_CLR); 138 + 139 + ret = devm_request_irq(&pdev->dev, fpga->irq, cplds_irq_handler, 140 + irqflags, dev_name(&pdev->dev), fpga); 141 + if (ret == -ENOSYS) 142 + return -EPROBE_DEFER; 143 + 144 + if (ret) { 145 + dev_err(&pdev->dev, "couldn't request main irq%d: %d\n", 146 + fpga->irq, ret); 147 + return ret; 148 + } 149 + 150 + irq_set_irq_wake(fpga->irq, 1); 151 + fpga->irqdomain = irq_domain_add_linear(pdev->dev.of_node, 152 + CPLDS_NB_IRQ, 153 + &cplds_irq_domain_ops, fpga); 154 + if (!fpga->irqdomain) 155 + return -ENODEV; 156 + 157 + if (base_irq) { 158 + ret = irq_create_strict_mappings(fpga->irqdomain, base_irq, 0, 159 + CPLDS_NB_IRQ); 160 + if (ret) { 161 + dev_err(&pdev->dev, "couldn't create the irq mapping %d..%d\n", 162 + base_irq, base_irq + CPLDS_NB_IRQ); 163 + return ret; 164 + } 165 + } 166 + 167 + return 0; 168 + } 169 + 170 + static int cplds_remove(struct platform_device *pdev) 171 + { 172 + struct cplds *fpga = platform_get_drvdata(pdev); 173 + 174 + irq_set_chip_and_handler(fpga->irq, NULL, NULL); 175 + 176 + return 0; 177 + } 178 + 179 + static const struct of_device_id cplds_id_table[] = { 180 + { .compatible = "intel,lubbock-cplds-irqs", }, 181 + { .compatible = "intel,mainstone-cplds-irqs", }, 182 + { } 183 + }; 184 + MODULE_DEVICE_TABLE(of, cplds_id_table); 185 + 186 + static struct platform_driver cplds_driver = { 187 + .driver = { 188 + .name = "pxa_cplds_irqs", 189 + .of_match_table = of_match_ptr(cplds_id_table), 190 + }, 191 + .probe = cplds_probe, 192 + .remove = cplds_remove, 193 + .resume = cplds_resume, 194 + }; 195 + 196 + module_platform_driver(cplds_driver); 197 + 198 + MODULE_DESCRIPTION("PXA Cplds interrupts driver"); 199 + MODULE_AUTHOR("Robert Jarzmik <robert.jarzmik@free.fr>"); 200 + MODULE_LICENSE("GPL");
+33
arch/arm/mach-rockchip/pm.c
··· 44 44 static phys_addr_t rk3288_bootram_phy; 45 45 46 46 static struct regmap *pmu_regmap; 47 + static struct regmap *grf_regmap; 47 48 static struct regmap *sgrf_regmap; 48 49 49 50 static u32 rk3288_pmu_pwr_mode_con; 51 + static u32 rk3288_grf_soc_con0; 50 52 static u32 rk3288_sgrf_soc_con0; 51 53 52 54 static inline u32 rk3288_l2_config(void) ··· 72 70 { 73 71 u32 mode_set, mode_set1; 74 72 73 + regmap_read(grf_regmap, RK3288_GRF_SOC_CON0, &rk3288_grf_soc_con0); 74 + 75 75 regmap_read(sgrf_regmap, RK3288_SGRF_SOC_CON0, &rk3288_sgrf_soc_con0); 76 76 77 77 regmap_read(pmu_regmap, RK3288_PMU_PWRMODE_CON, 78 78 &rk3288_pmu_pwr_mode_con); 79 + 80 + /* 81 + * We need to set the GRF_FORCE_JTAG bit here for the debug module, 82 + * otherwise it may become inaccessible after resume. 83 + * This creates a potential security issue, as the sdmmc pins may 84 + * accept jtag data for a short time during resume if no card is 85 + * inserted. 86 + * But this is of course also true for the regular boot, before we 87 + * turn off the jtag/sdmmc autodetect. 88 + */ 89 + regmap_write(grf_regmap, RK3288_GRF_SOC_CON0, GRF_FORCE_JTAG | 90 + GRF_FORCE_JTAG_WRITE); 79 91 80 92 /* 81 93 * SGRF_FAST_BOOT_EN - system to boot from FAST_BOOT_ADDR ··· 98 82 regmap_write(sgrf_regmap, RK3288_SGRF_SOC_CON0, 99 83 SGRF_PCLK_WDT_GATE | SGRF_FAST_BOOT_EN 100 84 | SGRF_PCLK_WDT_GATE_WRITE | SGRF_FAST_BOOT_EN_WRITE); 85 + 86 + /* 87 + * The dapswjdp cannot auto-reset before resume, which may cause it to 88 + * access an illegal address during resume. Let's disable it before 89 + * suspend, and the MASKROM will enable it again. 
90 + */ 91 + regmap_write(sgrf_regmap, RK3288_SGRF_CPU_CON0, SGRF_DAPDEVICEEN_WRITE); 101 92 102 93 /* booting address of resuming system is from this register value */ 103 94 regmap_write(sgrf_regmap, RK3288_SGRF_FAST_BOOT_ADDR, ··· 151 128 regmap_write(sgrf_regmap, RK3288_SGRF_SOC_CON0, 152 129 rk3288_sgrf_soc_con0 | SGRF_PCLK_WDT_GATE_WRITE 153 130 | SGRF_FAST_BOOT_EN_WRITE); 131 + 132 + regmap_write(grf_regmap, RK3288_GRF_SOC_CON0, rk3288_grf_soc_con0 | 133 + GRF_FORCE_JTAG_WRITE); 154 134 } 155 135 156 136 static int rockchip_lpmode_enter(unsigned long arg) ··· 209 183 "rockchip,rk3288-sgrf"); 210 184 if (IS_ERR(sgrf_regmap)) { 211 185 pr_err("%s: could not find sgrf regmap\n", __func__); 186 + return PTR_ERR(sgrf_regmap); 187 + } 188 + 189 + grf_regmap = syscon_regmap_lookup_by_compatible( 190 + "rockchip,rk3288-grf"); 191 + if (IS_ERR(grf_regmap)) { 192 + pr_err("%s: could not find grf regmap\n", __func__); 212 193 return PTR_ERR(grf_regmap); 213 194
+8
arch/arm/mach-rockchip/pm.h
··· 48 48 #define RK3288_PMU_WAKEUP_RST_CLR_CNT 0x44 49 49 #define RK3288_PMU_PWRMODE_CON1 0x90 50 50 51 + #define RK3288_GRF_SOC_CON0 0x244 52 + #define GRF_FORCE_JTAG BIT(12) 53 + #define GRF_FORCE_JTAG_WRITE BIT(28) 54 + 51 55 #define RK3288_SGRF_SOC_CON0 (0x0000) 52 56 #define RK3288_SGRF_FAST_BOOT_ADDR (0x0120) 53 57 #define SGRF_PCLK_WDT_GATE BIT(6) 54 58 #define SGRF_PCLK_WDT_GATE_WRITE BIT(22) 55 59 #define SGRF_FAST_BOOT_EN BIT(8) 56 60 #define SGRF_FAST_BOOT_EN_WRITE BIT(24) 61 + 62 + #define RK3288_SGRF_CPU_CON0 (0x40) 63 + #define SGRF_DAPDEVICEEN BIT(0) 64 + #define SGRF_DAPDEVICEEN_WRITE BIT(16) 57 65 58 66 #define RK3288_CRU_MODE_CON 0x50 59 67 #define RK3288_CRU_SEL0_CON 0x60
+19
arch/arm/mach-rockchip/rockchip.c
··· 30 30 #include "pm.h" 31 31 32 32 #define RK3288_GRF_SOC_CON0 0x244 33 + #define RK3288_TIMER6_7_PHYS 0xff810000 33 34 34 35 static void __init rockchip_timer_init(void) 35 36 { 36 37 if (of_machine_is_compatible("rockchip,rk3288")) { 37 38 struct regmap *grf; 39 + void __iomem *reg_base; 40 + 41 + /* 42 + * Most/all uboot versions for rk3288 don't enable timer7 43 + * which is needed for the architected timer to work. 44 + * So make sure it is running during early boot. 45 + */ 46 + reg_base = ioremap(RK3288_TIMER6_7_PHYS, SZ_16K); 47 + if (reg_base) { 48 + writel(0, reg_base + 0x30); 49 + writel(0xffffffff, reg_base + 0x20); 50 + writel(0xffffffff, reg_base + 0x24); 51 + writel(1, reg_base + 0x30); 52 + dsb(); 53 + iounmap(reg_base); 54 + } else { 55 + pr_err("rockchip: could not map timer7 registers\n"); 56 + } 38 57 39 58 /* 40 59 * Disable auto jtag/sdmmc switching that causes issues
+5 -8
arch/arm/mm/dma-mapping.c
··· 1878 1878 * arm_iommu_attach_device function. 1879 1879 */ 1880 1880 struct dma_iommu_mapping * 1881 - arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, size_t size) 1881 + arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, u64 size) 1882 1882 { 1883 1883 unsigned int bits = size >> PAGE_SHIFT; 1884 1884 unsigned int bitmap_size = BITS_TO_LONGS(bits) * sizeof(long); 1885 1885 struct dma_iommu_mapping *mapping; 1886 1886 int extensions = 1; 1887 1887 int err = -ENOMEM; 1888 + 1889 + /* currently only 32-bit DMA address space is supported */ 1890 + if (size > DMA_BIT_MASK(32) + 1) 1891 + return ERR_PTR(-ERANGE); 1888 1892 1889 1893 if (!bitmap_size) 1890 1894 return ERR_PTR(-EINVAL); ··· 2059 2055 struct dma_iommu_mapping *mapping; 2060 2056 2061 2057 if (!iommu) 2062 - return false; 2063 - 2064 - /* 2065 - * currently arm_iommu_create_mapping() takes a max of size_t 2066 - * for size param. So check this limit for now. 2067 - */ 2068 - if (size > SIZE_MAX) 2069 2058 return false; 2070 2059 2071 2060 mapping = arm_iommu_create_mapping(dev->bus, dma_base, size);
-2
arch/arm/mm/proc-arm1020.S
··· 22 22 * 23 23 * These are the low level assembler for performing cache and TLB 24 24 * functions on the arm1020. 25 - * 26 - * CONFIG_CPU_ARM1020_CPU_IDLE -> nohlt 27 25 */ 28 26 #include <linux/linkage.h> 29 27 #include <linux/init.h>
-2
arch/arm/mm/proc-arm1020e.S
··· 22 22 * 23 23 * These are the low level assembler for performing cache and TLB 24 24 * functions on the arm1020e. 25 - * 26 - * CONFIG_CPU_ARM1020_CPU_IDLE -> nohlt 27 25 */ 28 26 #include <linux/linkage.h> 29 27 #include <linux/init.h>
-3
arch/arm/mm/proc-arm925.S
··· 441 441 .type __arm925_setup, #function 442 442 __arm925_setup: 443 443 mov r0, #0 444 - #if defined(CONFIG_CPU_ICACHE_STREAMING_DISABLE) 445 - orr r0,r0,#1 << 7 446 - #endif 447 444 448 445 /* Transparent on, D-cache clean & flush mode. See NOTE2 above */ 449 446 orr r0,r0,#1 << 1 @ transparent mode on
-1
arch/arm/mm/proc-feroceon.S
··· 602 602 PMD_SECT_AP_WRITE | \ 603 603 PMD_SECT_AP_READ 604 604 initfn __feroceon_setup, __\name\()_proc_info 605 - .long __feroceon_setup 606 605 .long cpu_arch_name 607 606 .long cpu_elf_name 608 607 .long HWCAP_SWP|HWCAP_HALF|HWCAP_THUMB|HWCAP_FAST_MULT|HWCAP_EDSP
+15
arch/arm/xen/mm.c
··· 4 4 #include <linux/gfp.h> 5 5 #include <linux/highmem.h> 6 6 #include <linux/export.h> 7 + #include <linux/memblock.h> 7 8 #include <linux/of_address.h> 8 9 #include <linux/slab.h> 9 10 #include <linux/types.h> ··· 21 20 #include <asm/xen/page.h> 22 21 #include <asm/xen/hypercall.h> 23 22 #include <asm/xen/interface.h> 23 + 24 + unsigned long xen_get_swiotlb_free_pages(unsigned int order) 25 + { 26 + struct memblock_region *reg; 27 + gfp_t flags = __GFP_NOWARN; 28 + 29 + for_each_memblock(memory, reg) { 30 + if (reg->base < (phys_addr_t)0xffffffff) { 31 + flags |= __GFP_DMA; 32 + break; 33 + } 34 + } 35 + return __get_free_pages(flags, order); 36 + } 24 37 25 38 enum dma_cache_op { 26 39 DMA_UNMAP,
+1
arch/arm64/Kconfig
··· 31 31 select GENERIC_EARLY_IOREMAP 32 32 select GENERIC_IRQ_PROBE 33 33 select GENERIC_IRQ_SHOW 34 + select GENERIC_IRQ_SHOW_LEVEL 34 35 select GENERIC_PCI_IOMAP 35 36 select GENERIC_SCHED_CLOCK 36 37 select GENERIC_SMP_IDLE_THREAD
+16
arch/arm64/include/asm/barrier.h
··· 65 65 do { \ 66 66 compiletime_assert_atomic_type(*p); \ 67 67 switch (sizeof(*p)) { \ 68 + case 1: \ 69 + asm volatile ("stlrb %w1, %0" \ 70 + : "=Q" (*p) : "r" (v) : "memory"); \ 71 + break; \ 72 + case 2: \ 73 + asm volatile ("stlrh %w1, %0" \ 74 + : "=Q" (*p) : "r" (v) : "memory"); \ 75 + break; \ 68 76 case 4: \ 69 77 asm volatile ("stlr %w1, %0" \ 70 78 : "=Q" (*p) : "r" (v) : "memory"); \ ··· 89 81 typeof(*p) ___p1; \ 90 82 compiletime_assert_atomic_type(*p); \ 91 83 switch (sizeof(*p)) { \ 84 + case 1: \ 85 + asm volatile ("ldarb %w0, %1" \ 86 + : "=r" (___p1) : "Q" (*p) : "memory"); \ 87 + break; \ 88 + case 2: \ 89 + asm volatile ("ldarh %w0, %1" \ 90 + : "=r" (___p1) : "Q" (*p) : "memory"); \ 91 + break; \ 92 92 case 4: \ 93 93 asm volatile ("ldar %w0, %1" \ 94 94 : "=r" (___p1) : "Q" (*p) : "memory"); \
+7 -2
arch/arm64/kernel/perf_event.c
··· 1310 1310 1311 1311 static int armpmu_device_probe(struct platform_device *pdev) 1312 1312 { 1313 - int i, *irqs; 1313 + int i, irq, *irqs; 1314 1314 1315 1315 if (!cpu_pmu) 1316 1316 return -ENODEV; ··· 1318 1318 irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL); 1319 1319 if (!irqs) 1320 1320 return -ENOMEM; 1321 + 1322 + /* Don't bother with PPIs; they're already affine */ 1323 + irq = platform_get_irq(pdev, 0); 1324 + if (irq >= 0 && irq_is_percpu(irq)) 1325 + return 0; 1321 1326 1322 1327 for (i = 0; i < pdev->num_resources; ++i) { 1323 1328 struct device_node *dn; ··· 1332 1327 i); 1333 1328 if (!dn) { 1334 1329 pr_warn("Failed to parse %s/interrupt-affinity[%d]\n", 1335 - of_node_full_name(dn), i); 1330 + of_node_full_name(pdev->dev.of_node), i); 1336 1331 break; 1337 1332 } 1338 1333
+4 -5
arch/arm64/mm/dma-mapping.c
··· 67 67 68 68 *ret_page = phys_to_page(phys); 69 69 ptr = (void *)val; 70 - if (flags & __GFP_ZERO) 71 - memset(ptr, 0, size); 70 + memset(ptr, 0, size); 72 71 } 73 72 74 73 return ptr; ··· 104 105 struct page *page; 105 106 void *addr; 106 107 107 - size = PAGE_ALIGN(size); 108 108 page = dma_alloc_from_contiguous(dev, size >> PAGE_SHIFT, 109 109 get_order(size)); 110 110 if (!page) ··· 111 113 112 114 *dma_handle = phys_to_dma(dev, page_to_phys(page)); 113 115 addr = page_address(page); 114 - if (flags & __GFP_ZERO) 115 - memset(addr, 0, size); 116 + memset(addr, 0, size); 116 117 return addr; 117 118 } else { 118 119 return swiotlb_alloc_coherent(dev, size, dma_handle, flags); ··· 191 194 struct dma_attrs *attrs) 192 195 { 193 196 void *swiotlb_addr = phys_to_virt(dma_to_phys(dev, dma_handle)); 197 + 198 + size = PAGE_ALIGN(size); 194 199 195 200 if (!is_device_dma_coherent(dev)) { 196 201 if (__free_from_pool(vaddr, size))
+3 -3
arch/m32r/kernel/smp.c
··· 45 45 /* 46 46 * For flush_tlb_others() 47 47 */ 48 - static volatile cpumask_t flush_cpumask; 48 + static cpumask_t flush_cpumask; 49 49 static struct mm_struct *flush_mm; 50 50 static struct vm_area_struct *flush_vma; 51 51 static volatile unsigned long flush_va; ··· 415 415 */ 416 416 send_IPI_mask(&cpumask, INVALIDATE_TLB_IPI, 0); 417 417 418 - while (!cpumask_empty((cpumask_t*)&flush_cpumask)) { 418 + while (!cpumask_empty(&flush_cpumask)) { 419 419 /* nothing. lockup detection does not belong here */ 420 420 mb(); 421 421 } ··· 468 468 __flush_tlb_page(va); 469 469 } 470 470 } 471 - cpumask_clear_cpu(cpu_id, (cpumask_t*)&flush_cpumask); 471 + cpumask_clear_cpu(cpu_id, &flush_cpumask); 472 472 } 473 473 474 474 /*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*/
+1 -1
arch/powerpc/include/uapi/asm/tm.h
··· 11 11 #define TM_CAUSE_RESCHED 0xde 12 12 #define TM_CAUSE_TLBI 0xdc 13 13 #define TM_CAUSE_FAC_UNAV 0xda 14 - #define TM_CAUSE_SYSCALL 0xd8 14 + #define TM_CAUSE_SYSCALL 0xd8 /* future use */ 15 15 #define TM_CAUSE_MISC 0xd6 /* future use */ 16 16 #define TM_CAUSE_SIGNAL 0xd4 17 17 #define TM_CAUSE_ALIGNMENT 0xd2
+10 -1
arch/powerpc/kernel/eeh.c
··· 749 749 eeh_unfreeze_pe(pe, false); 750 750 eeh_pe_state_clear(pe, EEH_PE_CFG_BLOCKED); 751 751 eeh_pe_dev_traverse(pe, eeh_restore_dev_state, dev); 752 + eeh_pe_state_clear(pe, EEH_PE_ISOLATED); 752 753 break; 753 754 case pcie_hot_reset: 755 + eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 754 756 eeh_ops->set_option(pe, EEH_OPT_FREEZE_PE); 755 757 eeh_pe_dev_traverse(pe, eeh_disable_and_save_dev_state, dev); 756 758 eeh_pe_state_mark(pe, EEH_PE_CFG_BLOCKED); 757 759 eeh_ops->reset(pe, EEH_RESET_HOT); 758 760 break; 759 761 case pcie_warm_reset: 762 + eeh_pe_state_mark(pe, EEH_PE_ISOLATED); 760 763 eeh_ops->set_option(pe, EEH_OPT_FREEZE_PE); 761 764 eeh_pe_dev_traverse(pe, eeh_disable_and_save_dev_state, dev); 762 765 eeh_pe_state_mark(pe, EEH_PE_CFG_BLOCKED); 763 766 eeh_ops->reset(pe, EEH_RESET_FUNDAMENTAL); 764 767 break; 765 768 default: 766 - eeh_pe_state_clear(pe, EEH_PE_CFG_BLOCKED); 769 + eeh_pe_state_clear(pe, EEH_PE_ISOLATED | EEH_PE_CFG_BLOCKED); 767 770 return -EINVAL; 768 771 }; 769 772 ··· 1061 1058 if (!edev || !eeh_enabled()) 1062 1059 return; 1063 1060 1061 + if (!eeh_has_flag(EEH_PROBE_MODE_DEVTREE)) 1062 + return; 1063 + 1064 1064 /* USB Bus children of PCI devices will not have BUID's */ 1065 1065 phb = edev->phb; 1066 1066 if (NULL == phb || ··· 1117 1111 pr_debug("EEH: Already referenced !\n"); 1118 1112 return; 1119 1113 } 1114 + 1115 + if (eeh_has_flag(EEH_PROBE_MODE_DEV)) 1116 + eeh_ops->probe(pdn, NULL); 1120 1117 1121 1118 /* 1122 1119 * The EEH cache might not be removed correctly because of
-19
arch/powerpc/kernel/entry_64.S
··· 34 34 #include <asm/ftrace.h> 35 35 #include <asm/hw_irq.h> 36 36 #include <asm/context_tracking.h> 37 - #include <asm/tm.h> 38 37 39 38 /* 40 39 * System calls. ··· 145 146 andi. r11,r10,_TIF_SYSCALL_DOTRACE 146 147 bne syscall_dotrace 147 148 .Lsyscall_dotrace_cont: 148 - #ifdef CONFIG_PPC_TRANSACTIONAL_MEM 149 - BEGIN_FTR_SECTION 150 - b 1f 151 - END_FTR_SECTION_IFCLR(CPU_FTR_TM) 152 - extrdi. r11, r12, 1, (63-MSR_TS_T_LG) /* transaction active? */ 153 - beq+ 1f 154 - 155 - /* Doom the transaction and don't perform the syscall: */ 156 - mfmsr r11 157 - li r12, 1 158 - rldimi r11, r12, MSR_TM_LG, 63-MSR_TM_LG 159 - mtmsrd r11, 0 160 - li r11, (TM_CAUSE_SYSCALL|TM_CAUSE_PERSISTENT) 161 - TABORT(R11) 162 - 163 - b .Lsyscall_exit 164 - 1: 165 - #endif 166 149 cmpldi 0,r0,NR_syscalls 167 150 bge- syscall_enosys 168 151
+2
arch/powerpc/kernel/idle_power7.S
··· 501 501 CHECK_HMI_INTERRUPT 502 502 END_FTR_SECTION_IFSET(CPU_FTR_HVMODE) 503 503 ld r1,PACAR1(r13) 504 + ld r6,_CCR(r1) 504 505 ld r4,_MSR(r1) 505 506 ld r5,_NIP(r1) 506 507 addi r1,r1,INT_FRAME_SIZE 508 + mtcr r6 507 509 mtspr SPRN_SRR1,r4 508 510 mtspr SPRN_SRR0,r5 509 511 rfid
+1 -1
arch/powerpc/kvm/book3s_xics.c
··· 12 12 #include <linux/err.h> 13 13 #include <linux/gfp.h> 14 14 #include <linux/anon_inodes.h> 15 + #include <linux/spinlock.h> 15 16 16 17 #include <asm/uaccess.h> 17 18 #include <asm/kvm_book3s.h> ··· 21 20 #include <asm/xics.h> 22 21 #include <asm/debug.h> 23 22 #include <asm/time.h> 24 - #include <asm/spinlock.h> 25 23 26 24 #include <linux/debugfs.h> 27 25 #include <linux/seq_file.h>
+1 -1
arch/powerpc/platforms/powernv/pci-ioda.c
··· 2693 2693 hose->last_busno = 0xff; 2694 2694 } 2695 2695 hose->private_data = phb; 2696 - hose->controller_ops = pnv_pci_controller_ops; 2697 2696 phb->hub_id = hub_id; 2698 2697 phb->opal_id = phb_id; 2699 2698 phb->type = ioda_type; ··· 2811 2812 pnv_pci_controller_ops.enable_device_hook = pnv_pci_enable_device_hook; 2812 2813 pnv_pci_controller_ops.window_alignment = pnv_pci_window_alignment; 2813 2814 pnv_pci_controller_ops.reset_secondary_bus = pnv_pci_reset_secondary_bus; 2815 + hose->controller_ops = pnv_pci_controller_ops; 2814 2816 2815 2817 #ifdef CONFIG_PCI_IOV 2816 2818 ppc_md.pcibios_fixup_sriov = pnv_pci_ioda_fixup_iov_resources;
+4 -6
arch/powerpc/platforms/pseries/dlpar.c
··· 412 412 if (rc) 413 413 return -EINVAL; 414 414 415 + rc = dlpar_acquire_drc(drc_index); 416 + if (rc) 417 + return -EINVAL; 418 + 415 419 parent = of_find_node_by_path("/cpus"); 416 420 if (!parent) 417 421 return -ENODEV; ··· 425 421 return -EINVAL; 426 422 427 423 of_node_put(parent); 428 - 429 - rc = dlpar_acquire_drc(drc_index); 430 - if (rc) { 431 - dlpar_free_cc_nodes(dn); 432 - return -EINVAL; 433 - } 434 424 435 425 rc = dlpar_attach_node(dn); 436 426 if (rc) {
+1 -1
arch/s390/Kconfig
··· 115 115 select HAVE_ARCH_SECCOMP_FILTER 116 116 select HAVE_ARCH_TRACEHOOK 117 117 select HAVE_ARCH_TRANSPARENT_HUGEPAGE 118 - select HAVE_BPF_JIT if PACK_STACK && HAVE_MARCH_Z9_109_FEATURES 118 + select HAVE_BPF_JIT if PACK_STACK && HAVE_MARCH_Z196_FEATURES 119 119 select HAVE_CMPXCHG_DOUBLE 120 120 select HAVE_CMPXCHG_LOCAL 121 121 select HAVE_DEBUG_KMEMLEAK
+91 -31
arch/s390/crypto/crypt_s390.h
··· 3 3 * 4 4 * Support for s390 cryptographic instructions. 5 5 * 6 - * Copyright IBM Corp. 2003, 2007 6 + * Copyright IBM Corp. 2003, 2015 7 7 * Author(s): Thomas Spatzier 8 8 * Jan Glauber (jan.glauber@de.ibm.com) 9 + * Harald Freudenberger (freude@de.ibm.com) 9 10 * 10 11 * This program is free software; you can redistribute it and/or modify it 11 12 * under the terms of the GNU General Public License as published by the Free ··· 29 28 #define CRYPT_S390_MSA 0x1 30 29 #define CRYPT_S390_MSA3 0x2 31 30 #define CRYPT_S390_MSA4 0x4 31 + #define CRYPT_S390_MSA5 0x8 32 32 33 33 /* s390 cryptographic operations */ 34 34 enum crypt_s390_operations { 35 - CRYPT_S390_KM = 0x0100, 36 - CRYPT_S390_KMC = 0x0200, 37 - CRYPT_S390_KIMD = 0x0300, 38 - CRYPT_S390_KLMD = 0x0400, 39 - CRYPT_S390_KMAC = 0x0500, 40 - CRYPT_S390_KMCTR = 0x0600 35 + CRYPT_S390_KM = 0x0100, 36 + CRYPT_S390_KMC = 0x0200, 37 + CRYPT_S390_KIMD = 0x0300, 38 + CRYPT_S390_KLMD = 0x0400, 39 + CRYPT_S390_KMAC = 0x0500, 40 + CRYPT_S390_KMCTR = 0x0600, 41 + CRYPT_S390_PPNO = 0x0700 41 42 }; 42 43 43 44 /* ··· 141 138 KMAC_TDEA_192 = CRYPT_S390_KMAC | 3 142 139 }; 143 140 141 + /* 142 + * function codes for PPNO (PERFORM PSEUDORANDOM NUMBER 143 + * OPERATION) instruction 144 + */ 145 + enum crypt_s390_ppno_func { 146 + PPNO_QUERY = CRYPT_S390_PPNO | 0, 147 + PPNO_SHA512_DRNG_GEN = CRYPT_S390_PPNO | 3, 148 + PPNO_SHA512_DRNG_SEED = CRYPT_S390_PPNO | 0x83 149 + }; 150 + 144 151 /** 145 152 * crypt_s390_km: 146 153 * @func: the function code passed to KM; see crypt_s390_km_func ··· 175 162 int ret; 176 163 177 164 asm volatile( 178 - "0: .insn rre,0xb92e0000,%3,%1 \n" /* KM opcode */ 179 - "1: brc 1,0b \n" /* handle partial completion */ 165 + "0: .insn rre,0xb92e0000,%3,%1\n" /* KM opcode */ 166 + "1: brc 1,0b\n" /* handle partial completion */ 180 167 " la %0,0\n" 181 168 "2:\n" 182 - EX_TABLE(0b,2b) EX_TABLE(1b,2b) 169 + EX_TABLE(0b, 2b) EX_TABLE(1b, 2b) 183 170 : "=d" (ret), "+a" (__src), "+d" (__src_len), "+a" 
(__dest) 184 171 : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory"); 185 172 if (ret < 0) ··· 211 198 int ret; 212 199 213 200 asm volatile( 214 - "0: .insn rre,0xb92f0000,%3,%1 \n" /* KMC opcode */ 215 - "1: brc 1,0b \n" /* handle partial completion */ 201 + "0: .insn rre,0xb92f0000,%3,%1\n" /* KMC opcode */ 202 + "1: brc 1,0b\n" /* handle partial completion */ 216 203 " la %0,0\n" 217 204 "2:\n" 218 - EX_TABLE(0b,2b) EX_TABLE(1b,2b) 205 + EX_TABLE(0b, 2b) EX_TABLE(1b, 2b) 219 206 : "=d" (ret), "+a" (__src), "+d" (__src_len), "+a" (__dest) 220 207 : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory"); 221 208 if (ret < 0) ··· 246 233 int ret; 247 234 248 235 asm volatile( 249 - "0: .insn rre,0xb93e0000,%1,%1 \n" /* KIMD opcode */ 250 - "1: brc 1,0b \n" /* handle partial completion */ 236 + "0: .insn rre,0xb93e0000,%1,%1\n" /* KIMD opcode */ 237 + "1: brc 1,0b\n" /* handle partial completion */ 251 238 " la %0,0\n" 252 239 "2:\n" 253 - EX_TABLE(0b,2b) EX_TABLE(1b,2b) 240 + EX_TABLE(0b, 2b) EX_TABLE(1b, 2b) 254 241 : "=d" (ret), "+a" (__src), "+d" (__src_len) 255 242 : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory"); 256 243 if (ret < 0) ··· 280 267 int ret; 281 268 282 269 asm volatile( 283 - "0: .insn rre,0xb93f0000,%1,%1 \n" /* KLMD opcode */ 284 - "1: brc 1,0b \n" /* handle partial completion */ 270 + "0: .insn rre,0xb93f0000,%1,%1\n" /* KLMD opcode */ 271 + "1: brc 1,0b\n" /* handle partial completion */ 285 272 " la %0,0\n" 286 273 "2:\n" 287 - EX_TABLE(0b,2b) EX_TABLE(1b,2b) 274 + EX_TABLE(0b, 2b) EX_TABLE(1b, 2b) 288 275 : "=d" (ret), "+a" (__src), "+d" (__src_len) 289 276 : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory"); 290 277 if (ret < 0) ··· 315 302 int ret; 316 303 317 304 asm volatile( 318 - "0: .insn rre,0xb91e0000,%1,%1 \n" /* KLAC opcode */ 319 - "1: brc 1,0b \n" /* handle partial completion */ 305 + "0: .insn rre,0xb91e0000,%1,%1\n" /* KLAC opcode */ 306 + "1: brc 1,0b\n" /* handle partial completion */ 320 307 " la 
%0,0\n" 321 308 "2:\n" 322 - EX_TABLE(0b,2b) EX_TABLE(1b,2b) 309 + EX_TABLE(0b, 2b) EX_TABLE(1b, 2b) 323 310 : "=d" (ret), "+a" (__src), "+d" (__src_len) 324 311 : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory"); 325 312 if (ret < 0) ··· 353 340 int ret = -1; 354 341 355 342 asm volatile( 356 - "0: .insn rrf,0xb92d0000,%3,%1,%4,0 \n" /* KMCTR opcode */ 357 - "1: brc 1,0b \n" /* handle partial completion */ 343 + "0: .insn rrf,0xb92d0000,%3,%1,%4,0\n" /* KMCTR opcode */ 344 + "1: brc 1,0b\n" /* handle partial completion */ 358 345 " la %0,0\n" 359 346 "2:\n" 360 - EX_TABLE(0b,2b) EX_TABLE(1b,2b) 347 + EX_TABLE(0b, 2b) EX_TABLE(1b, 2b) 361 348 : "+d" (ret), "+a" (__src), "+d" (__src_len), "+a" (__dest), 362 349 "+a" (__ctr) 363 350 : "d" (__func), "a" (__param) : "cc", "memory"); 364 351 if (ret < 0) 365 352 return ret; 366 353 return (func & CRYPT_S390_FUNC_MASK) ? src_len - __src_len : __src_len; 354 + } 355 + 356 + /** 357 + * crypt_s390_ppno: 358 + * @func: the function code passed to PPNO; see crypt_s390_ppno_func 359 + * @param: address of parameter block; see POP for details on each func 360 + * @dest: address of destination memory area 361 + * @dest_len: size of destination memory area in bytes 362 + * @seed: address of seed data 363 + * @seed_len: size of seed data in bytes 364 + * 365 + * Executes the PPNO (PERFORM PSEUDORANDOM NUMBER OPERATION) 366 + * operation of the CPU. 
367 + *
368 + * Returns -1 for failure, 0 for the query func, number of random
369 + * bytes stored in dest buffer for generate function
370 + */
371 + static inline int crypt_s390_ppno(long func, void *param,
372 + u8 *dest, long dest_len,
373 + const u8 *seed, long seed_len)
374 + {
375 + register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
376 + register void *__param asm("1") = param; /* param block (240 bytes) */
377 + register u8 *__dest asm("2") = dest; /* buf for recv random bytes */
378 + register long __dest_len asm("3") = dest_len; /* requested random bytes */
379 + register const u8 *__seed asm("4") = seed; /* buf with seed data */
380 + register long __seed_len asm("5") = seed_len; /* bytes in seed buf */
381 + int ret = -1;
382 +
383 + asm volatile (
384 + "0: .insn rre,0xb93c0000,%1,%5\n" /* PPNO opcode */
385 + "1: brc 1,0b\n" /* handle partial completion */
386 + " la %0,0\n"
387 + "2:\n"
388 + EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
389 + : "+d" (ret), "+a"(__dest), "+d"(__dest_len)
390 + : "d"(__func), "a"(__param), "a"(__seed), "d"(__seed_len)
391 + : "cc", "memory");
392 + if (ret < 0)
393 + return ret;
394 + return (func & CRYPT_S390_FUNC_MASK) ? dest_len - __dest_len : 0;
367 395 }
368 396
369 397 /**
···
427 373 return 0;
428 374 if (facility_mask & CRYPT_S390_MSA4 && !test_facility(77))
429 375 return 0;
376 + if (facility_mask & CRYPT_S390_MSA5 && !test_facility(57))
377 + return 0;
378 +
430 379 switch (func & CRYPT_S390_OP_MASK) {
431 380 case CRYPT_S390_KM:
432 381 ret = crypt_s390_km(KM_QUERY, &status, NULL, NULL, 0);
···
447 390 ret = crypt_s390_kmac(KMAC_QUERY, &status, NULL, 0);
448 391 break;
449 392 case CRYPT_S390_KMCTR:
450 - ret = crypt_s390_kmctr(KMCTR_QUERY, &status, NULL, NULL, 0,
451 - NULL);
393 + ret = crypt_s390_kmctr(KMCTR_QUERY, &status,
394 + NULL, NULL, 0, NULL);
395 + break;
396 + case CRYPT_S390_PPNO:
397 + ret = crypt_s390_ppno(PPNO_QUERY, &status,
398 + NULL, 0, NULL, 0);
452 399 break;
453 400 default:
454 401 return 0;
···
480 419 int ret = -1;
481 420
482 421 asm volatile(
483 - "0: .insn rre,0xb92c0000,0,0 \n" /* PCC opcode */
484 - "1: brc 1,0b \n" /* handle partial completion */
422 + "0: .insn rre,0xb92c0000,0,0\n" /* PCC opcode */
423 + "1: brc 1,0b\n" /* handle partial completion */
485 424 " la %0,0\n"
486 425 "2:\n"
487 - EX_TABLE(0b,2b) EX_TABLE(1b,2b)
426 + EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
488 427 : "+d" (ret)
489 428 : "d" (__func), "a" (__param) : "cc", "memory");
490 429 return ret;
491 430 }
492 -
493 431
494 432 #endif /* _CRYPTO_ARCH_S390_CRYPT_S390_H */
+783 -75
arch/s390/crypto/prng.c
···
1 1 /*
2 - * Copyright IBM Corp. 2006, 2007
2 + * Copyright IBM Corp. 2006, 2015
3 3 * Author(s): Jan Glauber <jan.glauber@de.ibm.com>
4 + * Harald Freudenberger <freude@de.ibm.com>
4 5 * Driver for the s390 pseudo random number generator
5 6 */
7 +
8 + #define KMSG_COMPONENT "prng"
9 + #define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
10 +
6 11 #include <linux/fs.h>
12 + #include <linux/fips.h>
7 13 #include <linux/init.h>
8 14 #include <linux/kernel.h>
15 + #include <linux/device.h>
9 16 #include <linux/miscdevice.h>
10 17 #include <linux/module.h>
11 18 #include <linux/moduleparam.h>
19 + #include <linux/mutex.h>
12 20 #include <linux/random.h>
13 21 #include <linux/slab.h>
14 22 #include <asm/debug.h>
15 23 #include <asm/uaccess.h>
24 + #include <asm/timex.h>
16 25
17 26 #include "crypt_s390.h"
18 27
19 28 MODULE_LICENSE("GPL");
20 - MODULE_AUTHOR("Jan Glauber <jan.glauber@de.ibm.com>");
29 + MODULE_AUTHOR("IBM Corporation");
21 30 MODULE_DESCRIPTION("s390 PRNG interface");
22 31
23 - static int prng_chunk_size = 256;
24 - module_param(prng_chunk_size, int, S_IRUSR | S_IRGRP | S_IROTH);
32 +
33 + #define PRNG_MODE_AUTO 0
34 + #define PRNG_MODE_TDES 1
35 + #define PRNG_MODE_SHA512 2
36 +
37 + static unsigned int prng_mode = PRNG_MODE_AUTO;
38 + module_param_named(mode, prng_mode, int, 0);
39 + MODULE_PARM_DESC(prng_mode, "PRNG mode: 0 - auto, 1 - TDES, 2 - SHA512");
40 +
41 +
42 + #define PRNG_CHUNKSIZE_TDES_MIN 8
43 + #define PRNG_CHUNKSIZE_TDES_MAX (64*1024)
44 + #define PRNG_CHUNKSIZE_SHA512_MIN 64
45 + #define PRNG_CHUNKSIZE_SHA512_MAX (64*1024)
46 +
47 + static unsigned int prng_chunk_size = 256;
48 + module_param_named(chunksize, prng_chunk_size, int, 0);
25 49 MODULE_PARM_DESC(prng_chunk_size, "PRNG read chunk size in bytes");
26 50
27 - static int prng_entropy_limit = 4096;
28 - module_param(prng_entropy_limit, int, S_IRUSR | S_IRGRP | S_IROTH | S_IWUSR);
29 - MODULE_PARM_DESC(prng_entropy_limit,
30 - "PRNG add entropy after that much bytes were produced");
51 +
52 + #define PRNG_RESEED_LIMIT_TDES 4096
53 + #define PRNG_RESEED_LIMIT_TDES_LOWER 4096
54 + #define PRNG_RESEED_LIMIT_SHA512 100000
55 + #define PRNG_RESEED_LIMIT_SHA512_LOWER 10000
56 +
57 + static unsigned int prng_reseed_limit;
58 + module_param_named(reseed_limit, prng_reseed_limit, int, 0);
59 + MODULE_PARM_DESC(prng_reseed_limit, "PRNG reseed limit");
60 +
31 61
32 62 /*
33 63 * Any one who considers arithmetical methods of producing random digits is,
34 64 * of course, in a state of sin. -- John von Neumann
35 65 */
36 66
37 - struct s390_prng_data {
38 - unsigned long count; /* how many bytes were produced */
39 - char *buf;
67 + static int prng_errorflag;
68 +
69 + #define PRNG_GEN_ENTROPY_FAILED 1
70 + #define PRNG_SELFTEST_FAILED 2
71 + #define PRNG_INSTANTIATE_FAILED 3
72 + #define PRNG_SEED_FAILED 4
73 + #define PRNG_RESEED_FAILED 5
74 + #define PRNG_GEN_FAILED 6
75 +
76 + struct prng_ws_s {
77 + u8 parm_block[32];
78 + u32 reseed_counter;
79 + u64 byte_counter;
40 80 };
41 81
42 - static struct s390_prng_data *p;
43 -
44 - /* copied from libica, use a non-zero initial parameter block */
45 - static unsigned char parm_block[32] = {
46 - 0x0F,0x2B,0x8E,0x63,0x8C,0x8E,0xD2,0x52,0x64,0xB7,0xA0,0x7B,0x75,0x28,0xB8,0xF4,
47 - 0x75,0x5F,0xD2,0xA6,0x8D,0x97,0x11,0xFF,0x49,0xD8,0x23,0xF3,0x7E,0x21,0xEC,0xA0,
82 + struct ppno_ws_s {
83 + u32 res;
84 + u32 reseed_counter;
85 + u64 stream_bytes;
86 + u8 V[112];
87 + u8 C[112];
48 88 };
49 89
50 - static int prng_open(struct inode *inode, struct file *file)
90 + struct prng_data_s {
91 + struct mutex mutex;
92 + union {
93 + struct prng_ws_s prngws;
94 + struct ppno_ws_s ppnows;
95 + };
96 + u8 *buf;
97 + u32 rest;
98 + u8 *prev;
99 + };
100 +
101 + static struct prng_data_s *prng_data;
102 +
103 + /* initial parameter block for tdes mode, copied from libica */
104 + static const u8 initial_parm_block[32] __initconst = {
105 + 0x0F, 0x2B, 0x8E, 0x63, 0x8C, 0x8E, 0xD2, 0x52,
106 + 0x64, 0xB7, 
0xA0, 0x7B, 0x75, 0x28, 0xB8, 0xF4, 107 + 0x75, 0x5F, 0xD2, 0xA6, 0x8D, 0x97, 0x11, 0xFF, 108 + 0x49, 0xD8, 0x23, 0xF3, 0x7E, 0x21, 0xEC, 0xA0 }; 109 + 110 + 111 + /*** helper functions ***/ 112 + 113 + static int generate_entropy(u8 *ebuf, size_t nbytes) 51 114 { 52 - return nonseekable_open(inode, file); 115 + int n, ret = 0; 116 + u8 *pg, *h, hash[32]; 117 + 118 + pg = (u8 *) __get_free_page(GFP_KERNEL); 119 + if (!pg) { 120 + prng_errorflag = PRNG_GEN_ENTROPY_FAILED; 121 + return -ENOMEM; 122 + } 123 + 124 + while (nbytes) { 125 + /* fill page with urandom bytes */ 126 + get_random_bytes(pg, PAGE_SIZE); 127 + /* exor page with stckf values */ 128 + for (n = 0; n < sizeof(PAGE_SIZE/sizeof(u64)); n++) { 129 + u64 *p = ((u64 *)pg) + n; 130 + *p ^= get_tod_clock_fast(); 131 + } 132 + n = (nbytes < sizeof(hash)) ? nbytes : sizeof(hash); 133 + if (n < sizeof(hash)) 134 + h = hash; 135 + else 136 + h = ebuf; 137 + /* generate sha256 from this page */ 138 + if (crypt_s390_kimd(KIMD_SHA_256, h, 139 + pg, PAGE_SIZE) != PAGE_SIZE) { 140 + prng_errorflag = PRNG_GEN_ENTROPY_FAILED; 141 + ret = -EIO; 142 + goto out; 143 + } 144 + if (n < sizeof(hash)) 145 + memcpy(ebuf, hash, n); 146 + ret += n; 147 + ebuf += n; 148 + nbytes -= n; 149 + } 150 + 151 + out: 152 + free_page((unsigned long)pg); 153 + return ret; 53 154 } 54 155 55 - static void prng_add_entropy(void) 156 + 157 + /*** tdes functions ***/ 158 + 159 + static void prng_tdes_add_entropy(void) 56 160 { 57 161 __u64 entropy[4]; 58 162 unsigned int i; 59 163 int ret; 60 164 61 165 for (i = 0; i < 16; i++) { 62 - ret = crypt_s390_kmc(KMC_PRNG, parm_block, (char *)entropy, 63 - (char *)entropy, sizeof(entropy)); 166 + ret = crypt_s390_kmc(KMC_PRNG, prng_data->prngws.parm_block, 167 + (char *)entropy, (char *)entropy, 168 + sizeof(entropy)); 64 169 BUG_ON(ret < 0 || ret != sizeof(entropy)); 65 - memcpy(parm_block, entropy, sizeof(entropy)); 170 + memcpy(prng_data->prngws.parm_block, entropy, sizeof(entropy)); 66 171 } 67 
172 } 68 173 69 - static void prng_seed(int nbytes) 174 + 175 + static void prng_tdes_seed(int nbytes) 70 176 { 71 177 char buf[16]; 72 178 int i = 0; 73 179 74 - BUG_ON(nbytes > 16); 180 + BUG_ON(nbytes > sizeof(buf)); 181 + 75 182 get_random_bytes(buf, nbytes); 76 183 77 184 /* Add the entropy */ 78 185 while (nbytes >= 8) { 79 - *((__u64 *)parm_block) ^= *((__u64 *)(buf+i)); 80 - prng_add_entropy(); 186 + *((__u64 *)prng_data->prngws.parm_block) ^= *((__u64 *)(buf+i)); 187 + prng_tdes_add_entropy(); 81 188 i += 8; 82 189 nbytes -= 8; 83 190 } 84 - prng_add_entropy(); 191 + prng_tdes_add_entropy(); 192 + prng_data->prngws.reseed_counter = 0; 85 193 } 86 194 87 - static ssize_t prng_read(struct file *file, char __user *ubuf, size_t nbytes, 88 - loff_t *ppos) 89 - { 90 - int chunk, n; 91 - int ret = 0; 92 - int tmp; 93 195 94 - /* nbytes can be arbitrary length, we split it into chunks */ 196 + static int __init prng_tdes_instantiate(void) 197 + { 198 + int datalen; 199 + 200 + pr_debug("prng runs in TDES mode with " 201 + "chunksize=%d and reseed_limit=%u\n", 202 + prng_chunk_size, prng_reseed_limit); 203 + 204 + /* memory allocation, prng_data struct init, mutex init */ 205 + datalen = sizeof(struct prng_data_s) + prng_chunk_size; 206 + prng_data = kzalloc(datalen, GFP_KERNEL); 207 + if (!prng_data) { 208 + prng_errorflag = PRNG_INSTANTIATE_FAILED; 209 + return -ENOMEM; 210 + } 211 + mutex_init(&prng_data->mutex); 212 + prng_data->buf = ((u8 *)prng_data) + sizeof(struct prng_data_s); 213 + memcpy(prng_data->prngws.parm_block, initial_parm_block, 32); 214 + 215 + /* initialize the PRNG, add 128 bits of entropy */ 216 + prng_tdes_seed(16); 217 + 218 + return 0; 219 + } 220 + 221 + 222 + static void prng_tdes_deinstantiate(void) 223 + { 224 + pr_debug("The prng module stopped " 225 + "after running in triple DES mode\n"); 226 + kzfree(prng_data); 227 + } 228 + 229 + 230 + /*** sha512 functions ***/ 231 + 232 + static int __init prng_sha512_selftest(void) 233 + { 234 
+ /* NIST DRBG testvector for Hash Drbg, Sha-512, Count #0 */ 235 + static const u8 seed[] __initconst = { 236 + 0x6b, 0x50, 0xa7, 0xd8, 0xf8, 0xa5, 0x5d, 0x7a, 237 + 0x3d, 0xf8, 0xbb, 0x40, 0xbc, 0xc3, 0xb7, 0x22, 238 + 0xd8, 0x70, 0x8d, 0xe6, 0x7f, 0xda, 0x01, 0x0b, 239 + 0x03, 0xc4, 0xc8, 0x4d, 0x72, 0x09, 0x6f, 0x8c, 240 + 0x3e, 0xc6, 0x49, 0xcc, 0x62, 0x56, 0xd9, 0xfa, 241 + 0x31, 0xdb, 0x7a, 0x29, 0x04, 0xaa, 0xf0, 0x25 }; 242 + static const u8 V0[] __initconst = { 243 + 0x00, 0xad, 0xe3, 0x6f, 0x9a, 0x01, 0xc7, 0x76, 244 + 0x61, 0x34, 0x35, 0xf5, 0x4e, 0x24, 0x74, 0x22, 245 + 0x21, 0x9a, 0x29, 0x89, 0xc7, 0x93, 0x2e, 0x60, 246 + 0x1e, 0xe8, 0x14, 0x24, 0x8d, 0xd5, 0x03, 0xf1, 247 + 0x65, 0x5d, 0x08, 0x22, 0x72, 0xd5, 0xad, 0x95, 248 + 0xe1, 0x23, 0x1e, 0x8a, 0xa7, 0x13, 0xd9, 0x2b, 249 + 0x5e, 0xbc, 0xbb, 0x80, 0xab, 0x8d, 0xe5, 0x79, 250 + 0xab, 0x5b, 0x47, 0x4e, 0xdd, 0xee, 0x6b, 0x03, 251 + 0x8f, 0x0f, 0x5c, 0x5e, 0xa9, 0x1a, 0x83, 0xdd, 252 + 0xd3, 0x88, 0xb2, 0x75, 0x4b, 0xce, 0x83, 0x36, 253 + 0x57, 0x4b, 0xf1, 0x5c, 0xca, 0x7e, 0x09, 0xc0, 254 + 0xd3, 0x89, 0xc6, 0xe0, 0xda, 0xc4, 0x81, 0x7e, 255 + 0x5b, 0xf9, 0xe1, 0x01, 0xc1, 0x92, 0x05, 0xea, 256 + 0xf5, 0x2f, 0xc6, 0xc6, 0xc7, 0x8f, 0xbc, 0xf4 }; 257 + static const u8 C0[] __initconst = { 258 + 0x00, 0xf4, 0xa3, 0xe5, 0xa0, 0x72, 0x63, 0x95, 259 + 0xc6, 0x4f, 0x48, 0xd0, 0x8b, 0x5b, 0x5f, 0x8e, 260 + 0x6b, 0x96, 0x1f, 0x16, 0xed, 0xbc, 0x66, 0x94, 261 + 0x45, 0x31, 0xd7, 0x47, 0x73, 0x22, 0xa5, 0x86, 262 + 0xce, 0xc0, 0x4c, 0xac, 0x63, 0xb8, 0x39, 0x50, 263 + 0xbf, 0xe6, 0x59, 0x6c, 0x38, 0x58, 0x99, 0x1f, 264 + 0x27, 0xa7, 0x9d, 0x71, 0x2a, 0xb3, 0x7b, 0xf9, 265 + 0xfb, 0x17, 0x86, 0xaa, 0x99, 0x81, 0xaa, 0x43, 266 + 0xe4, 0x37, 0xd3, 0x1e, 0x6e, 0xe5, 0xe6, 0xee, 267 + 0xc2, 0xed, 0x95, 0x4f, 0x53, 0x0e, 0x46, 0x8a, 268 + 0xcc, 0x45, 0xa5, 0xdb, 0x69, 0x0d, 0x81, 0xc9, 269 + 0x32, 0x92, 0xbc, 0x8f, 0x33, 0xe6, 0xf6, 0x09, 270 + 0x7c, 0x8e, 0x05, 0x19, 0x0d, 0xf1, 0xb6, 0xcc, 271 + 0xf3, 0x02, 
0x21, 0x90, 0x25, 0xec, 0xed, 0x0e }; 272 + static const u8 random[] __initconst = { 273 + 0x95, 0xb7, 0xf1, 0x7e, 0x98, 0x02, 0xd3, 0x57, 274 + 0x73, 0x92, 0xc6, 0xa9, 0xc0, 0x80, 0x83, 0xb6, 275 + 0x7d, 0xd1, 0x29, 0x22, 0x65, 0xb5, 0xf4, 0x2d, 276 + 0x23, 0x7f, 0x1c, 0x55, 0xbb, 0x9b, 0x10, 0xbf, 277 + 0xcf, 0xd8, 0x2c, 0x77, 0xa3, 0x78, 0xb8, 0x26, 278 + 0x6a, 0x00, 0x99, 0x14, 0x3b, 0x3c, 0x2d, 0x64, 279 + 0x61, 0x1e, 0xee, 0xb6, 0x9a, 0xcd, 0xc0, 0x55, 280 + 0x95, 0x7c, 0x13, 0x9e, 0x8b, 0x19, 0x0c, 0x7a, 281 + 0x06, 0x95, 0x5f, 0x2c, 0x79, 0x7c, 0x27, 0x78, 282 + 0xde, 0x94, 0x03, 0x96, 0xa5, 0x01, 0xf4, 0x0e, 283 + 0x91, 0x39, 0x6a, 0xcf, 0x8d, 0x7e, 0x45, 0xeb, 284 + 0xdb, 0xb5, 0x3b, 0xbf, 0x8c, 0x97, 0x52, 0x30, 285 + 0xd2, 0xf0, 0xff, 0x91, 0x06, 0xc7, 0x61, 0x19, 286 + 0xae, 0x49, 0x8e, 0x7f, 0xbc, 0x03, 0xd9, 0x0f, 287 + 0x8e, 0x4c, 0x51, 0x62, 0x7a, 0xed, 0x5c, 0x8d, 288 + 0x42, 0x63, 0xd5, 0xd2, 0xb9, 0x78, 0x87, 0x3a, 289 + 0x0d, 0xe5, 0x96, 0xee, 0x6d, 0xc7, 0xf7, 0xc2, 290 + 0x9e, 0x37, 0xee, 0xe8, 0xb3, 0x4c, 0x90, 0xdd, 291 + 0x1c, 0xf6, 0xa9, 0xdd, 0xb2, 0x2b, 0x4c, 0xbd, 292 + 0x08, 0x6b, 0x14, 0xb3, 0x5d, 0xe9, 0x3d, 0xa2, 293 + 0xd5, 0xcb, 0x18, 0x06, 0x69, 0x8c, 0xbd, 0x7b, 294 + 0xbb, 0x67, 0xbf, 0xe3, 0xd3, 0x1f, 0xd2, 0xd1, 295 + 0xdb, 0xd2, 0xa1, 0xe0, 0x58, 0xa3, 0xeb, 0x99, 296 + 0xd7, 0xe5, 0x1f, 0x1a, 0x93, 0x8e, 0xed, 0x5e, 297 + 0x1c, 0x1d, 0xe2, 0x3a, 0x6b, 0x43, 0x45, 0xd3, 298 + 0x19, 0x14, 0x09, 0xf9, 0x2f, 0x39, 0xb3, 0x67, 299 + 0x0d, 0x8d, 0xbf, 0xb6, 0x35, 0xd8, 0xe6, 0xa3, 300 + 0x69, 0x32, 0xd8, 0x10, 0x33, 0xd1, 0x44, 0x8d, 301 + 0x63, 0xb4, 0x03, 0xdd, 0xf8, 0x8e, 0x12, 0x1b, 302 + 0x6e, 0x81, 0x9a, 0xc3, 0x81, 0x22, 0x6c, 0x13, 303 + 0x21, 0xe4, 0xb0, 0x86, 0x44, 0xf6, 0x72, 0x7c, 304 + 0x36, 0x8c, 0x5a, 0x9f, 0x7a, 0x4b, 0x3e, 0xe2 }; 305 + 306 + int ret = 0; 307 + u8 buf[sizeof(random)]; 308 + struct ppno_ws_s ws; 309 + 310 + memset(&ws, 0, sizeof(ws)); 311 + 312 + /* initial seed */ 313 + ret = 
crypt_s390_ppno(PPNO_SHA512_DRNG_SEED, 314 + &ws, NULL, 0, 315 + seed, sizeof(seed)); 316 + if (ret < 0) { 317 + pr_err("The prng self test seed operation for the " 318 + "SHA-512 mode failed with rc=%d\n", ret); 319 + prng_errorflag = PRNG_SELFTEST_FAILED; 320 + return -EIO; 321 + } 322 + 323 + /* check working states V and C */ 324 + if (memcmp(ws.V, V0, sizeof(V0)) != 0 325 + || memcmp(ws.C, C0, sizeof(C0)) != 0) { 326 + pr_err("The prng self test state test " 327 + "for the SHA-512 mode failed\n"); 328 + prng_errorflag = PRNG_SELFTEST_FAILED; 329 + return -EIO; 330 + } 331 + 332 + /* generate random bytes */ 333 + ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN, 334 + &ws, buf, sizeof(buf), 335 + NULL, 0); 336 + if (ret < 0) { 337 + pr_err("The prng self test generate operation for " 338 + "the SHA-512 mode failed with rc=%d\n", ret); 339 + prng_errorflag = PRNG_SELFTEST_FAILED; 340 + return -EIO; 341 + } 342 + ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN, 343 + &ws, buf, sizeof(buf), 344 + NULL, 0); 345 + if (ret < 0) { 346 + pr_err("The prng self test generate operation for " 347 + "the SHA-512 mode failed with rc=%d\n", ret); 348 + prng_errorflag = PRNG_SELFTEST_FAILED; 349 + return -EIO; 350 + } 351 + 352 + /* check against expected data */ 353 + if (memcmp(buf, random, sizeof(random)) != 0) { 354 + pr_err("The prng self test data test " 355 + "for the SHA-512 mode failed\n"); 356 + prng_errorflag = PRNG_SELFTEST_FAILED; 357 + return -EIO; 358 + } 359 + 360 + return 0; 361 + } 362 + 363 + 364 + static int __init prng_sha512_instantiate(void) 365 + { 366 + int ret, datalen; 367 + u8 seed[64]; 368 + 369 + pr_debug("prng runs in SHA-512 mode " 370 + "with chunksize=%d and reseed_limit=%u\n", 371 + prng_chunk_size, prng_reseed_limit); 372 + 373 + /* memory allocation, prng_data struct init, mutex init */ 374 + datalen = sizeof(struct prng_data_s) + prng_chunk_size; 375 + if (fips_enabled) 376 + datalen += prng_chunk_size; 377 + prng_data = kzalloc(datalen, 
GFP_KERNEL); 378 + if (!prng_data) { 379 + prng_errorflag = PRNG_INSTANTIATE_FAILED; 380 + return -ENOMEM; 381 + } 382 + mutex_init(&prng_data->mutex); 383 + prng_data->buf = ((u8 *)prng_data) + sizeof(struct prng_data_s); 384 + 385 + /* selftest */ 386 + ret = prng_sha512_selftest(); 387 + if (ret) 388 + goto outfree; 389 + 390 + /* generate initial seed bytestring, first 48 bytes of entropy */ 391 + ret = generate_entropy(seed, 48); 392 + if (ret != 48) 393 + goto outfree; 394 + /* followed by 16 bytes of unique nonce */ 395 + get_tod_clock_ext(seed + 48); 396 + 397 + /* initial seed of the ppno drng */ 398 + ret = crypt_s390_ppno(PPNO_SHA512_DRNG_SEED, 399 + &prng_data->ppnows, NULL, 0, 400 + seed, sizeof(seed)); 401 + if (ret < 0) { 402 + prng_errorflag = PRNG_SEED_FAILED; 403 + ret = -EIO; 404 + goto outfree; 405 + } 406 + 407 + /* if fips mode is enabled, generate a first block of random 408 + bytes for the FIPS 140-2 Conditional Self Test */ 409 + if (fips_enabled) { 410 + prng_data->prev = prng_data->buf + prng_chunk_size; 411 + ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN, 412 + &prng_data->ppnows, 413 + prng_data->prev, 414 + prng_chunk_size, 415 + NULL, 0); 416 + if (ret < 0 || ret != prng_chunk_size) { 417 + prng_errorflag = PRNG_GEN_FAILED; 418 + ret = -EIO; 419 + goto outfree; 420 + } 421 + } 422 + 423 + return 0; 424 + 425 + outfree: 426 + kfree(prng_data); 427 + return ret; 428 + } 429 + 430 + 431 + static void prng_sha512_deinstantiate(void) 432 + { 433 + pr_debug("The prng module stopped after running in SHA-512 mode\n"); 434 + kzfree(prng_data); 435 + } 436 + 437 + 438 + static int prng_sha512_reseed(void) 439 + { 440 + int ret; 441 + u8 seed[32]; 442 + 443 + /* generate 32 bytes of fresh entropy */ 444 + ret = generate_entropy(seed, sizeof(seed)); 445 + if (ret != sizeof(seed)) 446 + return ret; 447 + 448 + /* do a reseed of the ppno drng with this bytestring */ 449 + ret = crypt_s390_ppno(PPNO_SHA512_DRNG_SEED, 450 + &prng_data->ppnows, NULL, 0, 
451 + seed, sizeof(seed)); 452 + if (ret) { 453 + prng_errorflag = PRNG_RESEED_FAILED; 454 + return -EIO; 455 + } 456 + 457 + return 0; 458 + } 459 + 460 + 461 + static int prng_sha512_generate(u8 *buf, size_t nbytes) 462 + { 463 + int ret; 464 + 465 + /* reseed needed ? */ 466 + if (prng_data->ppnows.reseed_counter > prng_reseed_limit) { 467 + ret = prng_sha512_reseed(); 468 + if (ret) 469 + return ret; 470 + } 471 + 472 + /* PPNO generate */ 473 + ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN, 474 + &prng_data->ppnows, buf, nbytes, 475 + NULL, 0); 476 + if (ret < 0 || ret != nbytes) { 477 + prng_errorflag = PRNG_GEN_FAILED; 478 + return -EIO; 479 + } 480 + 481 + /* FIPS 140-2 Conditional Self Test */ 482 + if (fips_enabled) { 483 + if (!memcmp(prng_data->prev, buf, nbytes)) { 484 + prng_errorflag = PRNG_GEN_FAILED; 485 + return -EILSEQ; 486 + } 487 + memcpy(prng_data->prev, buf, nbytes); 488 + } 489 + 490 + return ret; 491 + } 492 + 493 + 494 + /*** file io functions ***/ 495 + 496 + static int prng_open(struct inode *inode, struct file *file) 497 + { 498 + return nonseekable_open(inode, file); 499 + } 500 + 501 + 502 + static ssize_t prng_tdes_read(struct file *file, char __user *ubuf, 503 + size_t nbytes, loff_t *ppos) 504 + { 505 + int chunk, n, tmp, ret = 0; 506 + 507 + /* lock prng_data struct */ 508 + if (mutex_lock_interruptible(&prng_data->mutex)) 509 + return -ERESTARTSYS; 510 + 95 511 while (nbytes) { 96 - /* same as in extract_entropy_user in random.c */ 97 512 if (need_resched()) { 98 513 if (signal_pending(current)) { 99 514 if (ret == 0) 100 515 ret = -ERESTARTSYS; 101 516 break; 102 517 } 518 + /* give mutex free before calling schedule() */ 519 + mutex_unlock(&prng_data->mutex); 103 520 schedule(); 521 + /* occopy mutex again */ 522 + if (mutex_lock_interruptible(&prng_data->mutex)) { 523 + if (ret == 0) 524 + ret = -ERESTARTSYS; 525 + return ret; 526 + } 104 527 } 105 528 106 529 /* ··· 535 112 /* PRNG only likes multiples of 8 bytes */ 536 113 n = 
(chunk + 7) & -8; 537 114 538 - if (p->count > prng_entropy_limit) 539 - prng_seed(8); 115 + if (prng_data->prngws.reseed_counter > prng_reseed_limit) 116 + prng_tdes_seed(8); 540 117 541 118 /* if the CPU supports PRNG stckf is present too */ 542 - asm volatile(".insn s,0xb27c0000,%0" 543 - : "=m" (*((unsigned long long *)p->buf)) : : "cc"); 119 + *((unsigned long long *)prng_data->buf) = get_tod_clock_fast(); 544 120 545 121 /* 546 122 * Beside the STCKF the input for the TDES-EDE is the output ··· 554 132 * Note: you can still get strict X9.17 conformity by setting 555 133 * prng_chunk_size to 8 bytes. 556 134 */ 557 - tmp = crypt_s390_kmc(KMC_PRNG, parm_block, p->buf, p->buf, n); 558 - BUG_ON((tmp < 0) || (tmp != n)); 135 + tmp = crypt_s390_kmc(KMC_PRNG, prng_data->prngws.parm_block, 136 + prng_data->buf, prng_data->buf, n); 137 + if (tmp < 0 || tmp != n) { 138 + ret = -EIO; 139 + break; 140 + } 559 141 560 - p->count += n; 142 + prng_data->prngws.byte_counter += n; 143 + prng_data->prngws.reseed_counter += n; 561 144 562 - if (copy_to_user(ubuf, p->buf, chunk)) 145 + if (copy_to_user(ubuf, prng_data->buf, chunk)) 563 146 return -EFAULT; 564 147 565 148 nbytes -= chunk; 566 149 ret += chunk; 567 150 ubuf += chunk; 568 151 } 152 + 153 + /* unlock prng_data struct */ 154 + mutex_unlock(&prng_data->mutex); 155 + 569 156 return ret; 570 157 } 571 158 572 - static const struct file_operations prng_fops = { 159 + 160 + static ssize_t prng_sha512_read(struct file *file, char __user *ubuf, 161 + size_t nbytes, loff_t *ppos) 162 + { 163 + int n, ret = 0; 164 + u8 *p; 165 + 166 + /* if errorflag is set do nothing and return 'broken pipe' */ 167 + if (prng_errorflag) 168 + return -EPIPE; 169 + 170 + /* lock prng_data struct */ 171 + if (mutex_lock_interruptible(&prng_data->mutex)) 172 + return -ERESTARTSYS; 173 + 174 + while (nbytes) { 175 + if (need_resched()) { 176 + if (signal_pending(current)) { 177 + if (ret == 0) 178 + ret = -ERESTARTSYS; 179 + break; 180 + } 181 + 
/* give mutex free before calling schedule() */
182 + mutex_unlock(&prng_data->mutex);
183 + schedule();
184 + /* occopy mutex again */
185 + if (mutex_lock_interruptible(&prng_data->mutex)) {
186 + if (ret == 0)
187 + ret = -ERESTARTSYS;
188 + return ret;
189 + }
190 + }
191 + if (prng_data->rest) {
192 + /* push left over random bytes from the previous read */
193 + p = prng_data->buf + prng_chunk_size - prng_data->rest;
194 + n = (nbytes < prng_data->rest) ?
195 + nbytes : prng_data->rest;
196 + prng_data->rest -= n;
197 + } else {
198 + /* generate one chunk of random bytes into read buf */
199 + p = prng_data->buf;
200 + n = prng_sha512_generate(p, prng_chunk_size);
201 + if (n < 0) {
202 + ret = n;
203 + break;
204 + }
205 + if (nbytes < prng_chunk_size) {
206 + n = nbytes;
207 + prng_data->rest = prng_chunk_size - n;
208 + } else {
209 + n = prng_chunk_size;
210 + prng_data->rest = 0;
211 + }
212 + }
213 + if (copy_to_user(ubuf, p, n)) {
214 + ret = -EFAULT;
215 + break;
216 + }
217 + ubuf += n;
218 + nbytes -= n;
219 + ret += n;
220 + }
221 +
222 + /* unlock prng_data struct */
223 + mutex_unlock(&prng_data->mutex);
224 +
225 + return ret;
226 + }
227 +
228 +
229 + /*** sysfs stuff ***/
230 +
231 + static const struct file_operations prng_sha512_fops = {
573 232 .owner = THIS_MODULE,
574 233 .open = &prng_open,
575 234 .release = NULL,
576 - .read = &prng_read,
235 + .read = &prng_sha512_read,
236 + .llseek = noop_llseek,
237 + };
238 + static const struct file_operations prng_tdes_fops = {
239 + .owner = THIS_MODULE,
240 + .open = &prng_open,
241 + .release = NULL,
242 + .read = &prng_tdes_read,
577 243 .llseek = noop_llseek,
578 244 };
579 245
580 - static struct miscdevice prng_dev = {
246 + static struct miscdevice prng_sha512_dev = {
581 247 .name = "prandom",
582 248 .minor = MISC_DYNAMIC_MINOR,
583 - .fops = &prng_fops,
249 + .fops = &prng_sha512_fops,
584 250 };
251 + static struct miscdevice prng_tdes_dev = {
252 + .name = "prandom",
253 + .minor = MISC_DYNAMIC_MINOR,
254 + .fops = &prng_tdes_fops,
255 + };
256 +
257 +
258 + /* chunksize attribute (ro) */
259 + static ssize_t prng_chunksize_show(struct device *dev,
260 + struct device_attribute *attr,
261 + char *buf)
262 + {
263 + return snprintf(buf, PAGE_SIZE, "%u\n", prng_chunk_size);
264 + }
265 + static DEVICE_ATTR(chunksize, 0444, prng_chunksize_show, NULL);
266 +
267 + /* counter attribute (ro) */
268 + static ssize_t prng_counter_show(struct device *dev,
269 + struct device_attribute *attr,
270 + char *buf)
271 + {
272 + u64 counter;
273 +
274 + if (mutex_lock_interruptible(&prng_data->mutex))
275 + return -ERESTARTSYS;
276 + if (prng_mode == PRNG_MODE_SHA512)
277 + counter = prng_data->ppnows.stream_bytes;
278 + else
279 + counter = prng_data->prngws.byte_counter;
280 + mutex_unlock(&prng_data->mutex);
281 +
282 + return snprintf(buf, PAGE_SIZE, "%llu\n", counter);
283 + }
284 + static DEVICE_ATTR(byte_counter, 0444, prng_counter_show, NULL);
285 +
286 + /* errorflag attribute (ro) */
287 + static ssize_t prng_errorflag_show(struct device *dev,
288 + struct device_attribute *attr,
289 + char *buf)
290 + {
291 + return snprintf(buf, PAGE_SIZE, "%d\n", prng_errorflag);
292 + }
293 + static DEVICE_ATTR(errorflag, 0444, prng_errorflag_show, NULL);
294 +
295 + /* mode attribute (ro) */
296 + static ssize_t prng_mode_show(struct device *dev,
297 + struct device_attribute *attr,
298 + char *buf)
299 + {
300 + if (prng_mode == PRNG_MODE_TDES)
301 + return snprintf(buf, PAGE_SIZE, "TDES\n");
302 + else
303 + return snprintf(buf, PAGE_SIZE, "SHA512\n");
304 + }
305 + static DEVICE_ATTR(mode, 0444, prng_mode_show, NULL);
306 +
307 + /* reseed attribute (w) */
308 + static ssize_t prng_reseed_store(struct device *dev,
309 + struct device_attribute *attr,
310 + const char *buf, size_t count)
311 + {
312 + if (mutex_lock_interruptible(&prng_data->mutex))
313 + return -ERESTARTSYS;
314 + prng_sha512_reseed();
315 + mutex_unlock(&prng_data->mutex);
316 +
317 + 
return count; 318 + } 319 + static DEVICE_ATTR(reseed, 0200, NULL, prng_reseed_store); 320 + 321 + /* reseed limit attribute (rw) */ 322 + static ssize_t prng_reseed_limit_show(struct device *dev, 323 + struct device_attribute *attr, 324 + char *buf) 325 + { 326 + return snprintf(buf, PAGE_SIZE, "%u\n", prng_reseed_limit); 327 + } 328 + static ssize_t prng_reseed_limit_store(struct device *dev, 329 + struct device_attribute *attr, 330 + const char *buf, size_t count) 331 + { 332 + unsigned limit; 333 + 334 + if (sscanf(buf, "%u\n", &limit) != 1) 335 + return -EINVAL; 336 + 337 + if (prng_mode == PRNG_MODE_SHA512) { 338 + if (limit < PRNG_RESEED_LIMIT_SHA512_LOWER) 339 + return -EINVAL; 340 + } else { 341 + if (limit < PRNG_RESEED_LIMIT_TDES_LOWER) 342 + return -EINVAL; 343 + } 344 + 345 + prng_reseed_limit = limit; 346 + 347 + return count; 348 + } 349 + static DEVICE_ATTR(reseed_limit, 0644, 350 + prng_reseed_limit_show, prng_reseed_limit_store); 351 + 352 + /* strength attribute (ro) */ 353 + static ssize_t prng_strength_show(struct device *dev, 354 + struct device_attribute *attr, 355 + char *buf) 356 + { 357 + return snprintf(buf, PAGE_SIZE, "256\n"); 358 + } 359 + static DEVICE_ATTR(strength, 0444, prng_strength_show, NULL); 360 + 361 + static struct attribute *prng_sha512_dev_attrs[] = { 362 + &dev_attr_errorflag.attr, 363 + &dev_attr_chunksize.attr, 364 + &dev_attr_byte_counter.attr, 365 + &dev_attr_mode.attr, 366 + &dev_attr_reseed.attr, 367 + &dev_attr_reseed_limit.attr, 368 + &dev_attr_strength.attr, 369 + NULL 370 + }; 371 + static struct attribute *prng_tdes_dev_attrs[] = { 372 + &dev_attr_chunksize.attr, 373 + &dev_attr_byte_counter.attr, 374 + &dev_attr_mode.attr, 375 + NULL 376 + }; 377 + 378 + static struct attribute_group prng_sha512_dev_attr_group = { 379 + .attrs = prng_sha512_dev_attrs 380 + }; 381 + static struct attribute_group prng_tdes_dev_attr_group = { 382 + .attrs = prng_tdes_dev_attrs 383 + }; 384 + 385 + 386 + /*** module init and exit 
***/ 585 387 586 388 static int __init prng_init(void) 587 389 { ··· 815 169 if (!crypt_s390_func_available(KMC_PRNG, CRYPT_S390_MSA)) 816 170 return -EOPNOTSUPP; 817 171 818 - if (prng_chunk_size < 8) 819 - return -EINVAL; 820 - 821 - p = kmalloc(sizeof(struct s390_prng_data), GFP_KERNEL); 822 - if (!p) 823 - return -ENOMEM; 824 - p->count = 0; 825 - 826 - p->buf = kmalloc(prng_chunk_size, GFP_KERNEL); 827 - if (!p->buf) { 828 - ret = -ENOMEM; 829 - goto out_free; 172 + /* choose prng mode */ 173 + if (prng_mode != PRNG_MODE_TDES) { 174 + /* check for MSA5 support for PPNO operations */ 175 + if (!crypt_s390_func_available(PPNO_SHA512_DRNG_GEN, 176 + CRYPT_S390_MSA5)) { 177 + if (prng_mode == PRNG_MODE_SHA512) { 178 + pr_err("The prng module cannot " 179 + "start in SHA-512 mode\n"); 180 + return -EOPNOTSUPP; 181 + } 182 + prng_mode = PRNG_MODE_TDES; 183 + } else 184 + prng_mode = PRNG_MODE_SHA512; 830 185 } 831 186 832 - /* initialize the PRNG, add 128 bits of entropy */ 833 - prng_seed(16); 187 + if (prng_mode == PRNG_MODE_SHA512) { 834 188 835 - ret = misc_register(&prng_dev); 836 - if (ret) 837 - goto out_buf; 838 - return 0; 189 + /* SHA512 mode */ 839 190 840 - out_buf: 841 - kfree(p->buf); 842 - out_free: 843 - kfree(p); 191 + if (prng_chunk_size < PRNG_CHUNKSIZE_SHA512_MIN 192 + || prng_chunk_size > PRNG_CHUNKSIZE_SHA512_MAX) 193 + return -EINVAL; 194 + prng_chunk_size = (prng_chunk_size + 0x3f) & ~0x3f; 195 + 196 + if (prng_reseed_limit == 0) 197 + prng_reseed_limit = PRNG_RESEED_LIMIT_SHA512; 198 + else if (prng_reseed_limit < PRNG_RESEED_LIMIT_SHA512_LOWER) 199 + return -EINVAL; 200 + 201 + ret = prng_sha512_instantiate(); 202 + if (ret) 203 + goto out; 204 + 205 + ret = misc_register(&prng_sha512_dev); 206 + if (ret) { 207 + prng_sha512_deinstantiate(); 208 + goto out; 209 + } 210 + ret = sysfs_create_group(&prng_sha512_dev.this_device->kobj, 211 + &prng_sha512_dev_attr_group); 212 + if (ret) { 213 + misc_deregister(&prng_sha512_dev); 214 + 
prng_sha512_deinstantiate(); 215 + goto out; 216 + } 217 + 218 + } else { 219 + 220 + /* TDES mode */ 221 + 222 + if (prng_chunk_size < PRNG_CHUNKSIZE_TDES_MIN 223 + || prng_chunk_size > PRNG_CHUNKSIZE_TDES_MAX) 224 + return -EINVAL; 225 + prng_chunk_size = (prng_chunk_size + 0x07) & ~0x07; 226 + 227 + if (prng_reseed_limit == 0) 228 + prng_reseed_limit = PRNG_RESEED_LIMIT_TDES; 229 + else if (prng_reseed_limit < PRNG_RESEED_LIMIT_TDES_LOWER) 230 + return -EINVAL; 231 + 232 + ret = prng_tdes_instantiate(); 233 + if (ret) 234 + goto out; 235 + 236 + ret = misc_register(&prng_tdes_dev); 237 + if (ret) { 238 + prng_tdes_deinstantiate(); 239 + goto out; 240 + } 241 + ret = sysfs_create_group(&prng_tdes_dev.this_device->kobj, 242 + &prng_tdes_dev_attr_group); 243 + if (ret) { 244 + misc_deregister(&prng_tdes_dev); 245 + prng_tdes_deinstantiate(); 246 + goto out; 247 + } 248 + 249 + } 250 + 251 + out: 844 252 return ret; 845 253 } 846 254 255 + 847 256 static void __exit prng_exit(void) 848 257 { 849 - /* wipe me */ 850 - kzfree(p->buf); 851 - kfree(p); 852 - 853 - misc_deregister(&prng_dev); 258 + if (prng_mode == PRNG_MODE_SHA512) { 259 + sysfs_remove_group(&prng_sha512_dev.this_device->kobj, 260 + &prng_sha512_dev_attr_group); 261 + misc_deregister(&prng_sha512_dev); 262 + prng_sha512_deinstantiate(); 263 + } else { 264 + sysfs_remove_group(&prng_tdes_dev.this_device->kobj, 265 + &prng_tdes_dev_attr_group); 266 + misc_deregister(&prng_tdes_dev); 267 + prng_tdes_deinstantiate(); 268 + } 854 269 } 270 + 855 271 856 272 module_init(prng_init); 857 273 module_exit(prng_exit);
+3
arch/s390/include/asm/kexec.h
··· 26 26 /* Not more than 2GB */ 27 27 #define KEXEC_CONTROL_MEMORY_LIMIT (1UL<<31) 28 28 29 + /* Allocate control page with GFP_DMA */ 30 + #define KEXEC_CONTROL_MEMORY_GFP GFP_DMA 31 + 29 32 /* Maximum address we can use for the crash control pages */ 30 33 #define KEXEC_CRASH_CONTROL_MEMORY_LIMIT (-1UL) 31 34
+3 -1
arch/s390/include/asm/mmu.h
··· 14 14 unsigned long asce_bits; 15 15 unsigned long asce_limit; 16 16 unsigned long vdso_base; 17 - /* The mmu context has extended page tables. */ 17 + /* The mmu context allocates 4K page tables. */ 18 + unsigned int alloc_pgste:1; 19 + /* The mmu context uses extended page tables. */ 18 20 unsigned int has_pgste:1; 19 21 /* The mmu context uses storage keys. */ 20 22 unsigned int use_skey:1;
+3
arch/s390/include/asm/mmu_context.h
··· 20 20 mm->context.flush_mm = 0; 21 21 mm->context.asce_bits = _ASCE_TABLE_LENGTH | _ASCE_USER_BITS; 22 22 mm->context.asce_bits |= _ASCE_TYPE_REGION3; 23 + #ifdef CONFIG_PGSTE 24 + mm->context.alloc_pgste = page_table_allocate_pgste; 23 25 mm->context.has_pgste = 0; 24 26 mm->context.use_skey = 0; 27 + #endif 25 28 mm->context.asce_limit = STACK_TOP_MAX; 26 29 crst_table_init((unsigned long *) mm->pgd, pgd_entry_type(mm)); 27 30 return 0;
+1
arch/s390/include/asm/pgalloc.h
··· 21 21 unsigned long *page_table_alloc(struct mm_struct *); 22 22 void page_table_free(struct mm_struct *, unsigned long *); 23 23 void page_table_free_rcu(struct mmu_gather *, unsigned long *, unsigned long); 24 + extern int page_table_allocate_pgste; 24 25 25 26 int set_guest_storage_key(struct mm_struct *mm, unsigned long addr, 26 27 unsigned long key, bool nq);
+72 -95
arch/s390/include/asm/pgtable.h
··· 12 12 #define _ASM_S390_PGTABLE_H 13 13 14 14 /* 15 - * The Linux memory management assumes a three-level page table setup. For 16 - * s390 31 bit we "fold" the mid level into the top-level page table, so 17 - * that we physically have the same two-level page table as the s390 mmu 18 - * expects in 31 bit mode. For s390 64 bit we use three of the five levels 19 - * the hardware provides (region first and region second tables are not 20 - * used). 15 + * The Linux memory management assumes a three-level page table setup. 16 + * For s390 64 bit we use up to four of the five levels the hardware 17 + * provides (region first tables are not used). 21 18 * 22 19 * The "pgd_xxx()" functions are trivial for a folded two-level 23 20 * setup: the pgd is never bad, and a pmd always exists (as it's folded ··· 98 101 99 102 #ifndef __ASSEMBLY__ 100 103 /* 101 - * The vmalloc and module area will always be on the topmost area of the kernel 102 - * mapping. We reserve 96MB (31bit) / 128GB (64bit) for vmalloc and modules. 104 + * The vmalloc and module area will always be on the topmost area of the 105 + * kernel mapping. We reserve 128GB (64bit) for vmalloc and modules. 103 106 * On 64 bit kernels we have a 2GB area at the top of the vmalloc area where 104 107 * modules will reside. 
That makes sure that inter module branches always 105 108 * happen without trampolines and in addition the placement within a 2GB frame ··· 128 131 } 129 132 130 133 /* 131 - * A 31 bit pagetable entry of S390 has following format: 132 - * | PFRA | | OS | 133 - * 0 0IP0 134 - * 00000000001111111111222222222233 135 - * 01234567890123456789012345678901 136 - * 137 - * I Page-Invalid Bit: Page is not available for address-translation 138 - * P Page-Protection Bit: Store access not possible for page 139 - * 140 - * A 31 bit segmenttable entry of S390 has following format: 141 - * | P-table origin | |PTL 142 - * 0 IC 143 - * 00000000001111111111222222222233 144 - * 01234567890123456789012345678901 145 - * 146 - * I Segment-Invalid Bit: Segment is not available for address-translation 147 - * C Common-Segment Bit: Segment is not private (PoP 3-30) 148 - * PTL Page-Table-Length: Page-table length (PTL+1*16 entries -> up to 256) 149 - * 150 - * The 31 bit segmenttable origin of S390 has following format: 151 - * 152 - * |S-table origin | | STL | 153 - * X **GPS 154 - * 00000000001111111111222222222233 155 - * 01234567890123456789012345678901 156 - * 157 - * X Space-Switch event: 158 - * G Segment-Invalid Bit: * 159 - * P Private-Space Bit: Segment is not private (PoP 3-30) 160 - * S Storage-Alteration: 161 - * STL Segment-Table-Length: Segment-table length (STL+1*16 entries -> up to 2048) 162 - * 163 134 * A 64 bit pagetable entry of S390 has following format: 164 135 * | PFRA |0IPC| OS | 165 136 * 0000000000111111111122222222223333333333444444444455555555556666 ··· 185 220 186 221 /* Software bits in the page table entry */ 187 222 #define _PAGE_PRESENT 0x001 /* SW pte present bit */ 188 - #define _PAGE_TYPE 0x002 /* SW pte type bit */ 189 223 #define _PAGE_YOUNG 0x004 /* SW pte young bit */ 190 224 #define _PAGE_DIRTY 0x008 /* SW pte dirty bit */ 191 225 #define _PAGE_READ 0x010 /* SW pte read bit */ ··· 204 240 * table lock held. 
205 241 * 206 242 * The following table gives the different possible bit combinations for 207 - * the pte hardware and software bits in the last 12 bits of a pte: 243 + * the pte hardware and software bits in the last 12 bits of a pte 244 + * (. unassigned bit, x don't care, t swap type): 208 245 * 209 246 * 842100000000 210 247 * 000084210000 211 248 * 000000008421 212 - * .IR...wrdytp 213 - * empty .10...000000 214 - * swap .10...xxxx10 215 - * file .11...xxxxx0 216 - * prot-none, clean, old .11...000001 217 - * prot-none, clean, young .11...000101 218 - * prot-none, dirty, old .10...001001 219 - * prot-none, dirty, young .10...001101 220 - * read-only, clean, old .11...010001 221 - * read-only, clean, young .01...010101 222 - * read-only, dirty, old .11...011001 223 - * read-only, dirty, young .01...011101 224 - * read-write, clean, old .11...110001 225 - * read-write, clean, young .01...110101 226 - * read-write, dirty, old .10...111001 227 - * read-write, dirty, young .00...111101 249 + * .IR.uswrdy.p 250 + * empty .10.00000000 251 + * swap .11..ttttt.0 252 + * prot-none, clean, old .11.xx0000.1 253 + * prot-none, clean, young .11.xx0001.1 254 + * prot-none, dirty, old .10.xx0010.1 255 + * prot-none, dirty, young .10.xx0011.1 256 + * read-only, clean, old .11.xx0100.1 257 + * read-only, clean, young .01.xx0101.1 258 + * read-only, dirty, old .11.xx0110.1 259 + * read-only, dirty, young .01.xx0111.1 260 + * read-write, clean, old .11.xx1100.1 261 + * read-write, clean, young .01.xx1101.1 262 + * read-write, dirty, old .10.xx1110.1 263 + * read-write, dirty, young .00.xx1111.1 264 + * HW-bits: R read-only, I invalid 265 + * SW-bits: p present, y young, d dirty, r read, w write, s special, 266 + * u unused, l large 228 267 * 229 - * pte_present is true for the bit pattern .xx...xxxxx1, (pte & 0x001) == 0x001 230 - * pte_none is true for the bit pattern .10...xxxx00, (pte & 0x603) == 0x400 231 - * pte_swap is true for the bit pattern .10...xxxx10, (pte & 0x603) == 
0x402 268 + * pte_none is true for the bit pattern .10.00000000, pte == 0x400 269 + * pte_swap is true for the bit pattern .11..ooooo.0, (pte & 0x201) == 0x200 270 + * pte_present is true for the bit pattern .xx.xxxxxx.1, (pte & 0x001) == 0x001 232 271 */ 233 272 234 273 /* Bits in the segment/region table address-space-control-element */ ··· 302 335 * read-write, dirty, young 11..0...0...11 303 336 * The segment table origin is used to distinguish empty (origin==0) from 304 337 * read-write, old segment table entries (origin!=0) 338 + * HW-bits: R read-only, I invalid 339 + * SW-bits: y young, d dirty, r read, w write 305 340 */ 306 341 307 342 #define _SEGMENT_ENTRY_SPLIT_BIT 11 /* THP splitting bit number */ ··· 387 418 { 388 419 #ifdef CONFIG_PGSTE 389 420 if (unlikely(mm->context.has_pgste)) 421 + return 1; 422 + #endif 423 + return 0; 424 + } 425 + 426 + static inline int mm_alloc_pgste(struct mm_struct *mm) 427 + { 428 + #ifdef CONFIG_PGSTE 429 + if (unlikely(mm->context.alloc_pgste)) 390 430 return 1; 391 431 #endif 392 432 return 0; ··· 560 582 561 583 static inline int pte_swap(pte_t pte) 562 584 { 563 - /* Bit pattern: (pte & 0x603) == 0x402 */ 564 - return (pte_val(pte) & (_PAGE_INVALID | _PAGE_PROTECT | 565 - _PAGE_TYPE | _PAGE_PRESENT)) 566 - == (_PAGE_INVALID | _PAGE_TYPE); 585 + /* Bit pattern: (pte & 0x201) == 0x200 */ 586 + return (pte_val(pte) & (_PAGE_PROTECT | _PAGE_PRESENT)) 587 + == _PAGE_PROTECT; 567 588 } 568 589 569 590 static inline int pte_special(pte_t pte) ··· 1563 1586 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 1564 1587 1565 1588 /* 1566 - * 31 bit swap entry format: 1567 - * A page-table entry has some bits we have to treat in a special way. 1568 - * Bits 0, 20 and bit 23 have to be zero, otherwise an specification 1569 - * exception will occur instead of a page translation exception. The 1570 - * specifiation exception has the bad habit not to store necessary 1571 - * information in the lowcore. 
1572 - * Bits 21, 22, 30 and 31 are used to indicate the page type. 1573 - * A swap pte is indicated by bit pattern (pte & 0x603) == 0x402 1574 - * This leaves the bits 1-19 and bits 24-29 to store type and offset. 1575 - * We use the 5 bits from 25-29 for the type and the 20 bits from 1-19 1576 - * plus 24 for the offset. 1577 - * 0| offset |0110|o|type |00| 1578 - * 0 0000000001111111111 2222 2 22222 33 1579 - * 0 1234567890123456789 0123 4 56789 01 1580 - * 1581 1589 * 64 bit swap entry format: 1582 1590 * A page-table entry has some bits we have to treat in a special way. 1583 1591 * Bits 52 and bit 55 have to be zero, otherwise an specification 1584 1592 * exception will occur instead of a page translation exception. The 1585 1593 * specifiation exception has the bad habit not to store necessary 1586 1594 * information in the lowcore. 1587 - * Bits 53, 54, 62 and 63 are used to indicate the page type. 1588 - * A swap pte is indicated by bit pattern (pte & 0x603) == 0x402 1589 - * This leaves the bits 0-51 and bits 56-61 to store type and offset. 1590 - * We use the 5 bits from 57-61 for the type and the 53 bits from 0-51 1591 - * plus 56 for the offset. 1592 - * | offset |0110|o|type |00| 1593 - * 0000000000111111111122222222223333333333444444444455 5555 5 55566 66 1594 - * 0123456789012345678901234567890123456789012345678901 2345 6 78901 23 1595 + * Bits 54 and 63 are used to indicate the page type. 1596 + * A swap pte is indicated by bit pattern (pte & 0x201) == 0x200 1597 + * This leaves the bits 0-51 and bits 56-62 to store type and offset. 1598 + * We use the 5 bits from 57-61 for the type and the 52 bits from 0-51 1599 + * for the offset. 
1600 + * | offset |01100|type |00| 1601 + * |0000000000111111111122222222223333333333444444444455|55555|55566|66| 1602 + * |0123456789012345678901234567890123456789012345678901|23456|78901|23| 1595 1603 */ 1596 1604 1597 - #define __SWP_OFFSET_MASK (~0UL >> 11) 1605 + #define __SWP_OFFSET_MASK ((1UL << 52) - 1) 1606 + #define __SWP_OFFSET_SHIFT 12 1607 + #define __SWP_TYPE_MASK ((1UL << 5) - 1) 1608 + #define __SWP_TYPE_SHIFT 2 1598 1609 1599 1610 static inline pte_t mk_swap_pte(unsigned long type, unsigned long offset) 1600 1611 { 1601 1612 pte_t pte; 1602 - offset &= __SWP_OFFSET_MASK; 1603 - pte_val(pte) = _PAGE_INVALID | _PAGE_TYPE | ((type & 0x1f) << 2) | 1604 - ((offset & 1UL) << 7) | ((offset & ~1UL) << 11); 1613 + 1614 + pte_val(pte) = _PAGE_INVALID | _PAGE_PROTECT; 1615 + pte_val(pte) |= (offset & __SWP_OFFSET_MASK) << __SWP_OFFSET_SHIFT; 1616 + pte_val(pte) |= (type & __SWP_TYPE_MASK) << __SWP_TYPE_SHIFT; 1605 1617 return pte; 1606 1618 } 1607 1619 1608 - #define __swp_type(entry) (((entry).val >> 2) & 0x1f) 1609 - #define __swp_offset(entry) (((entry).val >> 11) | (((entry).val >> 7) & 1)) 1610 - #define __swp_entry(type,offset) ((swp_entry_t) { pte_val(mk_swap_pte((type),(offset))) }) 1620 + static inline unsigned long __swp_type(swp_entry_t entry) 1621 + { 1622 + return (entry.val >> __SWP_TYPE_SHIFT) & __SWP_TYPE_MASK; 1623 + } 1624 + 1625 + static inline unsigned long __swp_offset(swp_entry_t entry) 1626 + { 1627 + return (entry.val >> __SWP_OFFSET_SHIFT) & __SWP_OFFSET_MASK; 1628 + } 1629 + 1630 + static inline swp_entry_t __swp_entry(unsigned long type, unsigned long offset) 1631 + { 1632 + return (swp_entry_t) { pte_val(mk_swap_pte(type, offset)) }; 1633 + } 1611 1634 1612 1635 #define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) }) 1613 1636 #define __swp_entry_to_pte(x) ((pte_t) { (x).val })
+36 -30
arch/s390/mm/hugetlbpage.c
··· 14 14 15 15 /* 16 16 * Convert encoding pte bits pmd bits 17 - * .IR...wrdytp dy..R...I...wr 18 - * empty .10...000000 -> 00..0...1...00 19 - * prot-none, clean, old .11...000001 -> 00..1...1...00 20 - * prot-none, clean, young .11...000101 -> 01..1...1...00 21 - * prot-none, dirty, old .10...001001 -> 10..1...1...00 22 - * prot-none, dirty, young .10...001101 -> 11..1...1...00 23 - * read-only, clean, old .11...010001 -> 00..1...1...01 24 - * read-only, clean, young .01...010101 -> 01..1...0...01 25 - * read-only, dirty, old .11...011001 -> 10..1...1...01 26 - * read-only, dirty, young .01...011101 -> 11..1...0...01 27 - * read-write, clean, old .11...110001 -> 00..0...1...11 28 - * read-write, clean, young .01...110101 -> 01..0...0...11 29 - * read-write, dirty, old .10...111001 -> 10..0...1...11 30 - * read-write, dirty, young .00...111101 -> 11..0...0...11 17 + * lIR.uswrdy.p dy..R...I...wr 18 + * empty 010.000000.0 -> 00..0...1...00 19 + * prot-none, clean, old 111.000000.1 -> 00..1...1...00 20 + * prot-none, clean, young 111.000001.1 -> 01..1...1...00 21 + * prot-none, dirty, old 111.000010.1 -> 10..1...1...00 22 + * prot-none, dirty, young 111.000011.1 -> 11..1...1...00 23 + * read-only, clean, old 111.000100.1 -> 00..1...1...01 24 + * read-only, clean, young 101.000101.1 -> 01..1...0...01 25 + * read-only, dirty, old 111.000110.1 -> 10..1...1...01 26 + * read-only, dirty, young 101.000111.1 -> 11..1...0...01 27 + * read-write, clean, old 111.001100.1 -> 00..1...1...11 28 + * read-write, clean, young 101.001101.1 -> 01..1...0...11 29 + * read-write, dirty, old 110.001110.1 -> 10..0...1...11 30 + * read-write, dirty, young 100.001111.1 -> 11..0...0...11 31 + * HW-bits: R read-only, I invalid 32 + * SW-bits: p present, y young, d dirty, r read, w write, s special, 33 + * u unused, l large 31 34 */ 32 35 if (pte_present(pte)) { 33 36 pmd_val(pmd) = pte_val(pte) & PAGE_MASK; ··· 51 48 52 49 /* 53 50 * Convert encoding pmd bits pte bits 54 - * dy..R...I...wr 
.IR...wrdytp 55 - * empty 00..0...1...00 -> .10...001100 56 - * prot-none, clean, old 00..0...1...00 -> .10...000001 57 - * prot-none, clean, young 01..0...1...00 -> .10...000101 58 - * prot-none, dirty, old 10..0...1...00 -> .10...001001 59 - * prot-none, dirty, young 11..0...1...00 -> .10...001101 60 - * read-only, clean, old 00..1...1...01 -> .11...010001 61 - * read-only, clean, young 01..1...1...01 -> .11...010101 62 - * read-only, dirty, old 10..1...1...01 -> .11...011001 63 - * read-only, dirty, young 11..1...1...01 -> .11...011101 64 - * read-write, clean, old 00..0...1...11 -> .10...110001 65 - * read-write, clean, young 01..0...1...11 -> .10...110101 66 - * read-write, dirty, old 10..0...1...11 -> .10...111001 67 - * read-write, dirty, young 11..0...1...11 -> .10...111101 51 + * dy..R...I...wr lIR.uswrdy.p 52 + * empty 00..0...1...00 -> 010.000000.0 53 + * prot-none, clean, old 00..1...1...00 -> 111.000000.1 54 + * prot-none, clean, young 01..1...1...00 -> 111.000001.1 55 + * prot-none, dirty, old 10..1...1...00 -> 111.000010.1 56 + * prot-none, dirty, young 11..1...1...00 -> 111.000011.1 57 + * read-only, clean, old 00..1...1...01 -> 111.000100.1 58 + * read-only, clean, young 01..1...0...01 -> 101.000101.1 59 + * read-only, dirty, old 10..1...1...01 -> 111.000110.1 60 + * read-only, dirty, young 11..1...0...01 -> 101.000111.1 61 + * read-write, clean, old 00..1...1...11 -> 111.001100.1 62 + * read-write, clean, young 01..1...0...11 -> 101.001101.1 63 + * read-write, dirty, old 10..0...1...11 -> 110.001110.1 64 + * read-write, dirty, young 11..0...0...11 -> 100.001111.1 65 + * HW-bits: R read-only, I invalid 66 + * SW-bits: p present, y young, d dirty, r read, w write, s special, 67 + * u unused, l large 68 68 */ 69 69 if (pmd_present(pmd)) { 70 70 pte_val(pte) = pmd_val(pmd) & _SEGMENT_ENTRY_ORIGIN_LARGE; ··· 76 70 pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_WRITE) << 4; 77 71 pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_INVALID) << 5; 78 72 
pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_PROTECT); 79 - pmd_val(pmd) |= (pte_val(pte) & _PAGE_DIRTY) << 10; 80 - pmd_val(pmd) |= (pte_val(pte) & _PAGE_YOUNG) << 10; 73 + pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_DIRTY) >> 10; 74 + pte_val(pte) |= (pmd_val(pmd) & _SEGMENT_ENTRY_YOUNG) >> 10; 81 75 } else 82 76 pte_val(pte) = _PAGE_INVALID; 83 77 return pte;
+43 -99
arch/s390/mm/pgtable.c
··· 18 18 #include <linux/rcupdate.h> 19 19 #include <linux/slab.h> 20 20 #include <linux/swapops.h> 21 + #include <linux/sysctl.h> 21 22 #include <linux/ksm.h> 22 23 #include <linux/mman.h> 23 24 ··· 921 920 } 922 921 EXPORT_SYMBOL(get_guest_storage_key); 923 922 923 + static int page_table_allocate_pgste_min = 0; 924 + static int page_table_allocate_pgste_max = 1; 925 + int page_table_allocate_pgste = 0; 926 + EXPORT_SYMBOL(page_table_allocate_pgste); 927 + 928 + static struct ctl_table page_table_sysctl[] = { 929 + { 930 + .procname = "allocate_pgste", 931 + .data = &page_table_allocate_pgste, 932 + .maxlen = sizeof(int), 933 + .mode = S_IRUGO | S_IWUSR, 934 + .proc_handler = proc_dointvec, 935 + .extra1 = &page_table_allocate_pgste_min, 936 + .extra2 = &page_table_allocate_pgste_max, 937 + }, 938 + { } 939 + }; 940 + 941 + static struct ctl_table page_table_sysctl_dir[] = { 942 + { 943 + .procname = "vm", 944 + .maxlen = 0, 945 + .mode = 0555, 946 + .child = page_table_sysctl, 947 + }, 948 + { } 949 + }; 950 + 951 + static int __init page_table_register_sysctl(void) 952 + { 953 + return register_sysctl_table(page_table_sysctl_dir) ? 
0 : -ENOMEM; 954 + } 955 + __initcall(page_table_register_sysctl); 956 + 924 957 #else /* CONFIG_PGSTE */ 925 958 926 959 static inline int page_table_with_pgste(struct page *page) ··· 998 963 struct page *uninitialized_var(page); 999 964 unsigned int mask, bit; 1000 965 1001 - if (mm_has_pgste(mm)) 966 + if (mm_alloc_pgste(mm)) 1002 967 return page_table_alloc_pgste(mm); 1003 968 /* Allocate fragments of a 4K page as 1K/2K page table */ 1004 969 spin_lock_bh(&mm->context.list_lock); ··· 1200 1165 } 1201 1166 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ 1202 1167 1203 - static unsigned long page_table_realloc_pmd(struct mmu_gather *tlb, 1204 - struct mm_struct *mm, pud_t *pud, 1205 - unsigned long addr, unsigned long end) 1206 - { 1207 - unsigned long next, *table, *new; 1208 - struct page *page; 1209 - spinlock_t *ptl; 1210 - pmd_t *pmd; 1211 - 1212 - pmd = pmd_offset(pud, addr); 1213 - do { 1214 - next = pmd_addr_end(addr, end); 1215 - again: 1216 - if (pmd_none_or_clear_bad(pmd)) 1217 - continue; 1218 - table = (unsigned long *) pmd_deref(*pmd); 1219 - page = pfn_to_page(__pa(table) >> PAGE_SHIFT); 1220 - if (page_table_with_pgste(page)) 1221 - continue; 1222 - /* Allocate new page table with pgstes */ 1223 - new = page_table_alloc_pgste(mm); 1224 - if (!new) 1225 - return -ENOMEM; 1226 - 1227 - ptl = pmd_lock(mm, pmd); 1228 - if (likely((unsigned long *) pmd_deref(*pmd) == table)) { 1229 - /* Nuke pmd entry pointing to the "short" page table */ 1230 - pmdp_flush_lazy(mm, addr, pmd); 1231 - pmd_clear(pmd); 1232 - /* Copy ptes from old table to new table */ 1233 - memcpy(new, table, PAGE_SIZE/2); 1234 - clear_table(table, _PAGE_INVALID, PAGE_SIZE/2); 1235 - /* Establish new table */ 1236 - pmd_populate(mm, pmd, (pte_t *) new); 1237 - /* Free old table with rcu, there might be a walker! 
*/ 1238 - page_table_free_rcu(tlb, table, addr); 1239 - new = NULL; 1240 - } 1241 - spin_unlock(ptl); 1242 - if (new) { 1243 - page_table_free_pgste(new); 1244 - goto again; 1245 - } 1246 - } while (pmd++, addr = next, addr != end); 1247 - 1248 - return addr; 1249 - } 1250 - 1251 - static unsigned long page_table_realloc_pud(struct mmu_gather *tlb, 1252 - struct mm_struct *mm, pgd_t *pgd, 1253 - unsigned long addr, unsigned long end) 1254 - { 1255 - unsigned long next; 1256 - pud_t *pud; 1257 - 1258 - pud = pud_offset(pgd, addr); 1259 - do { 1260 - next = pud_addr_end(addr, end); 1261 - if (pud_none_or_clear_bad(pud)) 1262 - continue; 1263 - next = page_table_realloc_pmd(tlb, mm, pud, addr, next); 1264 - if (unlikely(IS_ERR_VALUE(next))) 1265 - return next; 1266 - } while (pud++, addr = next, addr != end); 1267 - 1268 - return addr; 1269 - } 1270 - 1271 - static unsigned long page_table_realloc(struct mmu_gather *tlb, struct mm_struct *mm, 1272 - unsigned long addr, unsigned long end) 1273 - { 1274 - unsigned long next; 1275 - pgd_t *pgd; 1276 - 1277 - pgd = pgd_offset(mm, addr); 1278 - do { 1279 - next = pgd_addr_end(addr, end); 1280 - if (pgd_none_or_clear_bad(pgd)) 1281 - continue; 1282 - next = page_table_realloc_pud(tlb, mm, pgd, addr, next); 1283 - if (unlikely(IS_ERR_VALUE(next))) 1284 - return next; 1285 - } while (pgd++, addr = next, addr != end); 1286 - 1287 - return 0; 1288 - } 1289 - 1290 1168 /* 1291 1169 * switch on pgstes for its userspace process (for kvm) 1292 1170 */ 1293 1171 int s390_enable_sie(void) 1294 1172 { 1295 - struct task_struct *tsk = current; 1296 - struct mm_struct *mm = tsk->mm; 1297 - struct mmu_gather tlb; 1173 + struct mm_struct *mm = current->mm; 1298 1174 1299 1175 /* Do we have pgstes? 
if yes, we are done */ 1300 - if (mm_has_pgste(tsk->mm)) 1176 + if (mm_has_pgste(mm)) 1301 1177 return 0; 1302 - 1178 + /* Fail if the page tables are 2K */ 1179 + if (!mm_alloc_pgste(mm)) 1180 + return -EINVAL; 1303 1181 down_write(&mm->mmap_sem); 1182 + mm->context.has_pgste = 1; 1304 1183 /* split thp mappings and disable thp for future mappings */ 1305 1184 thp_split_mm(mm); 1306 - /* Reallocate the page tables with pgstes */ 1307 - tlb_gather_mmu(&tlb, mm, 0, TASK_SIZE); 1308 - if (!page_table_realloc(&tlb, mm, 0, TASK_SIZE)) 1309 - mm->context.has_pgste = 1; 1310 - tlb_finish_mmu(&tlb, 0, TASK_SIZE); 1311 1185 up_write(&mm->mmap_sem); 1312 - return mm->context.has_pgste ? 0 : -ENOMEM; 1186 + return 0; 1313 1187 } 1314 1188 EXPORT_SYMBOL_GPL(s390_enable_sie); 1315 1189
+1 -1
arch/tile/kernel/setup.c
··· 774 774 * though, there'll be no lowmem, so we just alloc_bootmem 775 775 * the memmap. There will be no percpu memory either. 776 776 */ 777 - if (i != 0 && cpumask_test_cpu(i, &isolnodes)) { 777 + if (i != 0 && node_isset(i, isolnodes)) { 778 778 node_memmap_pfn[i] = 779 779 alloc_bootmem_pfn(0, memmap_size, 0); 780 780 BUG_ON(node_percpu[i] != 0);
+2
arch/x86/boot/compressed/eboot.c
··· 1109 1109 if (!cmdline_ptr) 1110 1110 goto fail; 1111 1111 hdr->cmd_line_ptr = (unsigned long)cmdline_ptr; 1112 + /* Fill in upper bits of command line address, NOP on 32 bit */ 1113 + boot_params->ext_cmd_line_ptr = (u64)(unsigned long)cmdline_ptr >> 32; 1112 1114 1113 1115 hdr->ramdisk_image = 0; 1114 1116 hdr->ramdisk_size = 0;
+1 -1
arch/x86/include/asm/hypervisor.h
··· 50 50 /* Recognized hypervisors */ 51 51 extern const struct hypervisor_x86 x86_hyper_vmware; 52 52 extern const struct hypervisor_x86 x86_hyper_ms_hyperv; 53 - extern const struct hypervisor_x86 x86_hyper_xen_hvm; 53 + extern const struct hypervisor_x86 x86_hyper_xen; 54 54 extern const struct hypervisor_x86 x86_hyper_kvm; 55 55 56 56 extern void init_hypervisor(struct cpuinfo_x86 *c);
-1
arch/x86/include/asm/pvclock.h
··· 95 95 96 96 struct pvclock_vsyscall_time_info { 97 97 struct pvclock_vcpu_time_info pvti; 98 - u32 migrate_count; 99 98 } __attribute__((__aligned__(SMP_CACHE_BYTES))); 100 99 101 100 #define PVTI_SIZE sizeof(struct pvclock_vsyscall_time_info)
+1 -1
arch/x86/include/asm/spinlock.h
··· 169 169 struct __raw_tickets tmp = READ_ONCE(lock->tickets); 170 170 171 171 tmp.head &= ~TICKET_SLOWPATH_FLAG; 172 - return (tmp.tail - tmp.head) > TICKET_LOCK_INC; 172 + return (__ticket_t)(tmp.tail - tmp.head) > TICKET_LOCK_INC; 173 173 } 174 174 #define arch_spin_is_contended arch_spin_is_contended 175 175
+5
arch/x86/include/asm/xen/page.h
··· 269 269 return false; 270 270 } 271 271 272 + static inline unsigned long xen_get_swiotlb_free_pages(unsigned int order) 273 + { 274 + return __get_free_pages(__GFP_NOWARN, order); 275 + } 276 + 272 277 #endif /* _ASM_X86_XEN_PAGE_H */
+2 -2
arch/x86/kernel/cpu/hypervisor.c
··· 27 27 28 28 static const __initconst struct hypervisor_x86 * const hypervisors[] = 29 29 { 30 - #ifdef CONFIG_XEN_PVHVM 31 - &x86_hyper_xen_hvm, 30 + #ifdef CONFIG_XEN 31 + &x86_hyper_xen, 32 32 #endif 33 33 &x86_hyper_vmware, 34 34 &x86_hyper_ms_hyperv,
+38 -28
arch/x86/kernel/cpu/perf_event_intel.c
··· 2533 2533 return x86_event_sysfs_show(page, config, event); 2534 2534 } 2535 2535 2536 - static __initconst const struct x86_pmu core_pmu = { 2537 - .name = "core", 2538 - .handle_irq = x86_pmu_handle_irq, 2539 - .disable_all = x86_pmu_disable_all, 2540 - .enable_all = core_pmu_enable_all, 2541 - .enable = core_pmu_enable_event, 2542 - .disable = x86_pmu_disable_event, 2543 - .hw_config = x86_pmu_hw_config, 2544 - .schedule_events = x86_schedule_events, 2545 - .eventsel = MSR_ARCH_PERFMON_EVENTSEL0, 2546 - .perfctr = MSR_ARCH_PERFMON_PERFCTR0, 2547 - .event_map = intel_pmu_event_map, 2548 - .max_events = ARRAY_SIZE(intel_perfmon_event_map), 2549 - .apic = 1, 2550 - /* 2551 - * Intel PMCs cannot be accessed sanely above 32 bit width, 2552 - * so we install an artificial 1<<31 period regardless of 2553 - * the generic event period: 2554 - */ 2555 - .max_period = (1ULL << 31) - 1, 2556 - .get_event_constraints = intel_get_event_constraints, 2557 - .put_event_constraints = intel_put_event_constraints, 2558 - .event_constraints = intel_core_event_constraints, 2559 - .guest_get_msrs = core_guest_get_msrs, 2560 - .format_attrs = intel_arch_formats_attr, 2561 - .events_sysfs_show = intel_event_sysfs_show, 2562 - }; 2563 - 2564 2536 struct intel_shared_regs *allocate_shared_regs(int cpu) 2565 2537 { 2566 2538 struct intel_shared_regs *regs; ··· 2713 2741 &format_attr_offcore_rsp.attr, /* XXX do NHM/WSM + SNB breakout */ 2714 2742 &format_attr_ldlat.attr, /* PEBS load latency */ 2715 2743 NULL, 2744 + }; 2745 + 2746 + static __initconst const struct x86_pmu core_pmu = { 2747 + .name = "core", 2748 + .handle_irq = x86_pmu_handle_irq, 2749 + .disable_all = x86_pmu_disable_all, 2750 + .enable_all = core_pmu_enable_all, 2751 + .enable = core_pmu_enable_event, 2752 + .disable = x86_pmu_disable_event, 2753 + .hw_config = x86_pmu_hw_config, 2754 + .schedule_events = x86_schedule_events, 2755 + .eventsel = MSR_ARCH_PERFMON_EVENTSEL0, 2756 + .perfctr = MSR_ARCH_PERFMON_PERFCTR0, 
2757 + .event_map = intel_pmu_event_map, 2758 + .max_events = ARRAY_SIZE(intel_perfmon_event_map), 2759 + .apic = 1, 2760 + /* 2761 + * Intel PMCs cannot be accessed sanely above 32-bit width, 2762 + * so we install an artificial 1<<31 period regardless of 2763 + * the generic event period: 2764 + */ 2765 + .max_period = (1ULL<<31) - 1, 2766 + .get_event_constraints = intel_get_event_constraints, 2767 + .put_event_constraints = intel_put_event_constraints, 2768 + .event_constraints = intel_core_event_constraints, 2769 + .guest_get_msrs = core_guest_get_msrs, 2770 + .format_attrs = intel_arch_formats_attr, 2771 + .events_sysfs_show = intel_event_sysfs_show, 2772 + 2773 + /* 2774 + * Virtual (or funny metal) CPU can define x86_pmu.extra_regs 2775 + * together with PMU version 1 and thus be using core_pmu with 2776 + * shared_regs. We need following callbacks here to allocate 2777 + * it properly. 2778 + */ 2779 + .cpu_prepare = intel_pmu_cpu_prepare, 2780 + .cpu_starting = intel_pmu_cpu_starting, 2781 + .cpu_dying = intel_pmu_cpu_dying, 2716 2782 }; 2717 2783 2718 2784 static __initconst const struct x86_pmu intel_pmu = {
+12
arch/x86/kernel/cpu/perf_event_intel_uncore_snb.c
··· 1 1 /* Nehalem/SandBridge/Haswell uncore support */ 2 2 #include "perf_event_intel_uncore.h" 3 3 4 + /* Uncore IMC PCI IDs */ 5 + #define PCI_DEVICE_ID_INTEL_SNB_IMC 0x0100 6 + #define PCI_DEVICE_ID_INTEL_IVB_IMC 0x0154 7 + #define PCI_DEVICE_ID_INTEL_IVB_E3_IMC 0x0150 8 + #define PCI_DEVICE_ID_INTEL_HSW_IMC 0x0c00 9 + #define PCI_DEVICE_ID_INTEL_HSW_U_IMC 0x0a04 10 + 4 11 /* SNB event control */ 5 12 #define SNB_UNC_CTL_EV_SEL_MASK 0x000000ff 6 13 #define SNB_UNC_CTL_UMASK_MASK 0x0000ff00 ··· 479 472 PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_HSW_IMC), 480 473 .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 481 474 }, 475 + { /* IMC */ 476 + PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_HSW_U_IMC), 477 + .driver_data = UNCORE_PCI_DEV_DATA(SNB_PCI_UNCORE_IMC, 0), 478 + }, 482 479 { /* end: all zeroes */ }, 483 480 }; 484 481 ··· 513 502 IMC_DEV(IVB_IMC, &ivb_uncore_pci_driver), /* 3rd Gen Core processor */ 514 503 IMC_DEV(IVB_E3_IMC, &ivb_uncore_pci_driver), /* Xeon E3-1200 v2/3rd Gen Core processor */ 515 504 IMC_DEV(HSW_IMC, &hsw_uncore_pci_driver), /* 4th Gen Core Processor */ 505 + IMC_DEV(HSW_U_IMC, &hsw_uncore_pci_driver), /* 4th Gen Core ULT Mobile Processor */ 516 506 { /* end marker */ } 517 507 }; 518 508
+8 -6
arch/x86/kernel/process.c
··· 57 57 .io_bitmap = { [0 ... IO_BITMAP_LONGS] = ~0 }, 58 58 #endif 59 59 }; 60 - EXPORT_PER_CPU_SYMBOL_GPL(cpu_tss); 60 + EXPORT_PER_CPU_SYMBOL(cpu_tss); 61 61 62 62 #ifdef CONFIG_X86_64 63 63 static DEFINE_PER_CPU(unsigned char, is_idle); ··· 156 156 /* FPU state will be reallocated lazily at the first use. */ 157 157 drop_fpu(tsk); 158 158 free_thread_xstate(tsk); 159 - } else if (!used_math()) { 160 - /* kthread execs. TODO: cleanup this horror. */ 161 - if (WARN_ON(init_fpu(tsk))) 162 - force_sig(SIGKILL, tsk); 163 - user_fpu_begin(); 159 + } else { 160 + if (!tsk_used_math(tsk)) { 161 + /* kthread execs. TODO: cleanup this horror. */ 162 + if (WARN_ON(init_fpu(tsk))) 163 + force_sig(SIGKILL, tsk); 164 + user_fpu_begin(); 165 + } 164 166 restore_init_xstate(); 165 167 } 166 168 }
-44
arch/x86/kernel/pvclock.c
··· 141 141 set_normalized_timespec(ts, now.tv_sec, now.tv_nsec); 142 142 } 143 143 144 - static struct pvclock_vsyscall_time_info *pvclock_vdso_info; 145 - 146 - static struct pvclock_vsyscall_time_info * 147 - pvclock_get_vsyscall_user_time_info(int cpu) 148 - { 149 - if (!pvclock_vdso_info) { 150 - BUG(); 151 - return NULL; 152 - } 153 - 154 - return &pvclock_vdso_info[cpu]; 155 - } 156 - 157 - struct pvclock_vcpu_time_info *pvclock_get_vsyscall_time_info(int cpu) 158 - { 159 - return &pvclock_get_vsyscall_user_time_info(cpu)->pvti; 160 - } 161 - 162 144 #ifdef CONFIG_X86_64 163 - static int pvclock_task_migrate(struct notifier_block *nb, unsigned long l, 164 - void *v) 165 - { 166 - struct task_migration_notifier *mn = v; 167 - struct pvclock_vsyscall_time_info *pvti; 168 - 169 - pvti = pvclock_get_vsyscall_user_time_info(mn->from_cpu); 170 - 171 - /* this is NULL when pvclock vsyscall is not initialized */ 172 - if (unlikely(pvti == NULL)) 173 - return NOTIFY_DONE; 174 - 175 - pvti->migrate_count++; 176 - 177 - return NOTIFY_DONE; 178 - } 179 - 180 - static struct notifier_block pvclock_migrate = { 181 - .notifier_call = pvclock_task_migrate, 182 - }; 183 - 184 145 /* 185 146 * Initialize the generic pvclock vsyscall state. This will allocate 186 147 * a/some page(s) for the per-vcpu pvclock information, set up a ··· 155 194 156 195 WARN_ON (size != PVCLOCK_VSYSCALL_NR_PAGES*PAGE_SIZE); 157 196 158 - pvclock_vdso_info = i; 159 - 160 197 for (idx = 0; idx <= (PVCLOCK_FIXMAP_END-PVCLOCK_FIXMAP_BEGIN); idx++) { 161 198 __set_fixmap(PVCLOCK_FIXMAP_BEGIN + idx, 162 199 __pa(i) + (idx*PAGE_SIZE), 163 200 PAGE_KERNEL_VVAR); 164 201 } 165 - 166 - 167 - register_task_migration_notifier(&pvclock_migrate); 168 202 169 203 return 0; 170 204 }
+28 -5
arch/x86/kvm/x86.c
··· 1669 1669 &guest_hv_clock, sizeof(guest_hv_clock)))) 1670 1670 return 0; 1671 1671 1672 - /* 1673 - * The interface expects us to write an even number signaling that the 1674 - * update is finished. Since the guest won't see the intermediate 1675 - * state, we just increase by 2 at the end. 1672 + /* This VCPU is paused, but it's legal for a guest to read another 1673 + * VCPU's kvmclock, so we really have to follow the specification where 1674 + * it says that version is odd if data is being modified, and even after 1675 + * it is consistent. 1676 + * 1677 + * Version field updates must be kept separate. This is because 1678 + * kvm_write_guest_cached might use a "rep movs" instruction, and 1679 + * writes within a string instruction are weakly ordered. So there 1680 + * are three writes overall. 1681 + * 1682 + * As a small optimization, only write the version field in the first 1683 + * and third write. The vcpu->pv_time cache is still valid, because the 1684 + * version field is the first in the struct. 1676 1685 */ 1677 - vcpu->hv_clock.version = guest_hv_clock.version + 2; 1686 + BUILD_BUG_ON(offsetof(struct pvclock_vcpu_time_info, version) != 0); 1687 + 1688 + vcpu->hv_clock.version = guest_hv_clock.version + 1; 1689 + kvm_write_guest_cached(v->kvm, &vcpu->pv_time, 1690 + &vcpu->hv_clock, 1691 + sizeof(vcpu->hv_clock.version)); 1692 + 1693 + smp_wmb(); 1678 1694 1679 1695 /* retain PVCLOCK_GUEST_STOPPED if set in guest copy */ 1680 1696 pvclock_flags = (guest_hv_clock.flags & PVCLOCK_GUEST_STOPPED); ··· 1711 1695 kvm_write_guest_cached(v->kvm, &vcpu->pv_time, 1712 1696 &vcpu->hv_clock, 1713 1697 sizeof(vcpu->hv_clock)); 1698 + 1699 + smp_wmb(); 1700 + 1701 + vcpu->hv_clock.version++; 1702 + kvm_write_guest_cached(v->kvm, &vcpu->pv_time, 1703 + &vcpu->hv_clock, 1704 + sizeof(vcpu->hv_clock.version)); 1714 1705 return 0; 1715 1706 } 1716 1707
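The kvm/x86.c hunk above implements the pvclock versioning protocol it describes: the version field goes odd before the data is touched, the data stores are separated from the version stores by write barriers, and a final increment makes the version even again. A reader that observes the same even version before and after its copy knows the copy is consistent. A minimal user-space sketch of that protocol (struct and function names here are illustrative, not the kernel's; `__atomic_thread_fence` stands in for `smp_wmb()`/`smp_rmb()`):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for pvclock_vcpu_time_info: version comes first. */
struct clock_page {
    uint32_t version;        /* odd while being updated, even when stable */
    uint64_t tsc;
    uint64_t system_time;
};

/* Writer side: three ordered store groups, mirroring the patch above. */
static void clock_update(struct clock_page *p, uint64_t tsc, uint64_t st)
{
    p->version++;                                 /* now odd: update begins */
    __atomic_thread_fence(__ATOMIC_RELEASE);      /* stand-in for smp_wmb() */
    p->tsc = tsc;
    p->system_time = st;
    __atomic_thread_fence(__ATOMIC_RELEASE);
    p->version++;                                 /* even again: consistent */
}

/* Reader side: retry until an even, unchanged version brackets the copy. */
static uint64_t clock_read(const struct clock_page *p)
{
    uint32_t v;
    uint64_t st;

    do {
        v = p->version;
        __atomic_thread_fence(__ATOMIC_ACQUIRE);  /* stand-in for smp_rmb() */
        st = p->system_time;
        __atomic_thread_fence(__ATOMIC_ACQUIRE);
    } while ((v & 1) || v != p->version);

    return st;
}
```

The `BUILD_BUG_ON(offsetof(...) != 0)` in the patch guards the small optimization of writing only the version field in the first and third writes through the same cached mapping.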
+8 -6
arch/x86/mm/ioremap.c
··· 351 351 */ 352 352 void *xlate_dev_mem_ptr(phys_addr_t phys) 353 353 { 354 - void *addr; 355 - unsigned long start = phys & PAGE_MASK; 354 + unsigned long start = phys & PAGE_MASK; 355 + unsigned long offset = phys & ~PAGE_MASK; 356 + unsigned long vaddr; 356 357 357 358 /* If page is RAM, we can use __va. Otherwise ioremap and unmap. */ 358 359 if (page_is_ram(start >> PAGE_SHIFT)) 359 360 return __va(phys); 360 361 361 - addr = (void __force *)ioremap_cache(start, PAGE_SIZE); 362 - if (addr) 363 - addr = (void *)((unsigned long)addr | (phys & ~PAGE_MASK)); 362 + vaddr = (unsigned long)ioremap_cache(start, PAGE_SIZE); 363 + /* Only add the offset on success and return NULL if the ioremap() failed: */ 364 + if (vaddr) 365 + vaddr += offset; 364 366 365 - return addr; 367 + return (void *)vaddr; 366 368 } 367 369 368 370 void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
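The rewritten `xlate_dev_mem_ptr()` splits the physical address into a page-aligned base and an in-page offset, maps one page, and re-adds the offset only when the mapping succeeded, so a failed `ioremap_cache()` still yields NULL instead of a bogus offset-only pointer. The address arithmetic can be checked in isolation (4 KiB page constants assumed):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Split a physical address the way the patched function does. */
static uintptr_t page_base(uintptr_t phys)   { return phys & PAGE_MASK; }
static uintptr_t page_offset(uintptr_t phys) { return phys & ~PAGE_MASK; }

/*
 * Combine a mapping result with the in-page offset. A failed mapping
 * (NULL, i.e. 0) must stay NULL rather than become a small non-NULL
 * pointer -- exactly what the "if (vaddr)" check in the patch ensures.
 */
static uintptr_t map_fixup(uintptr_t vaddr, uintptr_t offset)
{
    if (vaddr)
        vaddr += offset;
    return vaddr;
}
```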
+22 -2
arch/x86/pci/acpi.c
··· 325 325 kfree(info); 326 326 } 327 327 328 + /* 329 + * An IO port or MMIO resource assigned to a PCI host bridge may be 330 + * consumed by the host bridge itself or available to its child 331 + * bus/devices. The ACPI specification defines a bit (Producer/Consumer) 332 + * to tell whether the resource is consumed by the host bridge itself, 333 + * but firmware hasn't used that bit consistently, so we can't rely on it. 334 + * 335 + * On x86 and IA64 platforms, all IO port and MMIO resources are assumed 336 + * to be available to child bus/devices except one special case: 337 + * IO port [0xCF8-0xCFF] is consumed by the host bridge itself 338 + * to access PCI configuration space. 339 + * 340 + * So explicitly filter out PCI CFG IO ports[0xCF8-0xCFF]. 341 + */ 342 + static bool resource_is_pcicfg_ioport(struct resource *res) 343 + { 344 + return (res->flags & IORESOURCE_IO) && 345 + res->start == 0xCF8 && res->end == 0xCFF; 346 + } 347 + 328 348 static void probe_pci_root_info(struct pci_root_info *info, 329 349 struct acpi_device *device, 330 350 int busnum, int domain, ··· 366 346 "no IO and memory resources present in _CRS\n"); 367 347 else 368 348 resource_list_for_each_entry_safe(entry, tmp, list) { 369 - if ((entry->res->flags & IORESOURCE_WINDOW) == 0 || 370 - (entry->res->flags & IORESOURCE_DISABLED)) 349 + if ((entry->res->flags & IORESOURCE_DISABLED) || 350 + resource_is_pcicfg_ioport(entry->res)) 371 351 resource_list_destroy_entry(entry); 372 352 else 373 353 entry->res->name = info->name;
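The new `resource_is_pcicfg_ioport()` helper is a plain predicate: an I/O port resource spanning exactly 0xCF8-0xCFF (the legacy PCI configuration address/data ports) is the one window consumed by the host bridge itself. A stand-alone version, with a minimal resource struct assumed in place of the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

#define IORESOURCE_IO 0x00000100  /* same value as the kernel flag */

/* Simplified stand-in for the kernel's struct resource. */
struct resource {
    unsigned long start, end, flags;
};

/* True only for the PCI CF8/CFF configuration-space I/O window. */
static bool resource_is_pcicfg_ioport(const struct resource *res)
{
    return (res->flags & IORESOURCE_IO) &&
           res->start == 0xCF8 && res->end == 0xCFF;
}
```

Filtering by this predicate, rather than by the unreliable Producer/Consumer window flag the old code tested, is the point of the patch.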
+15 -19
arch/x86/vdso/vclock_gettime.c
··· 82 82 cycle_t ret; 83 83 u64 last; 84 84 u32 version; 85 - u32 migrate_count; 86 85 u8 flags; 87 86 unsigned cpu, cpu1; 88 87 89 88 90 89 /* 91 - * When looping to get a consistent (time-info, tsc) pair, we 92 - * also need to deal with the possibility we can switch vcpus, 93 - * so make sure we always re-fetch time-info for the current vcpu. 90 + * Note: hypervisor must guarantee that: 91 + * 1. cpu ID number maps 1:1 to per-CPU pvclock time info. 92 + * 2. that per-CPU pvclock time info is updated if the 93 + * underlying CPU changes. 94 + * 3. that version is increased whenever underlying CPU 95 + * changes. 96 + * 94 97 */ 95 98 do { 96 99 cpu = __getcpu() & VGETCPU_CPU_MASK; ··· 102 99 * __getcpu() calls (Gleb). 103 100 */ 104 101 105 - /* Make sure migrate_count will change if we leave the VCPU. */ 106 - do { 107 - pvti = get_pvti(cpu); 108 - migrate_count = pvti->migrate_count; 109 - 110 - cpu1 = cpu; 111 - cpu = __getcpu() & VGETCPU_CPU_MASK; 112 - } while (unlikely(cpu != cpu1)); 102 + pvti = get_pvti(cpu); 113 103 114 104 version = __pvclock_read_cycles(&pvti->pvti, &ret, &flags); 115 105 116 106 /* 117 107 * Test we're still on the cpu as well as the version. 118 - * - We must read TSC of pvti's VCPU. 119 - * - KVM doesn't follow the versioning protocol, so data could 120 - * change before version if we left the VCPU. 108 + * We could have been migrated just after the first 109 + * vgetcpu but before fetching the version, so we 110 + * wouldn't notice a version change. 121 111 */ 122 - smp_rmb(); 123 - } while (unlikely((pvti->pvti.version & 1) || 124 - pvti->pvti.version != version || 125 - pvti->migrate_count != migrate_count)); 112 + cpu1 = __getcpu() & VGETCPU_CPU_MASK; 113 + } while (unlikely(cpu != cpu1 || 114 + (pvti->pvti.version & 1) || 115 + pvti->pvti.version != version)); 126 116 127 117 if (unlikely(!(flags & PVCLOCK_TSC_STABLE_BIT))) 128 118 *mode = VCLOCK_NONE;
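With the migrate_count field gone, the simplified vDSO loop leans on the three hypervisor guarantees listed in the new comment and only rechecks the CPU id after reading the version, catching a migration that slipped in between the first `__getcpu()` and the version fetch. The retry structure can be sketched single-threaded, with a fake CPU id source standing in for `__getcpu()` (all names illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Fake per-CPU time info and CPU id source, for illustration only. */
struct fake_pvti { uint32_t version; uint64_t cycles; };
static struct fake_pvti pvti_array[2] = { {2, 111}, {2, 222} };
static unsigned fake_cpu;
static unsigned fake_getcpu(void) { return fake_cpu; }

/*
 * Mirror of the patched loop: fetch cpu, read (version, data), then
 * confirm we are still on the same cpu and the version is even and
 * unchanged; otherwise a migration or an update raced us -- retry.
 */
static uint64_t pvclock_read(void)
{
    unsigned cpu, cpu1;
    uint32_t version;
    uint64_t ret;
    struct fake_pvti *p;

    do {
        cpu = fake_getcpu();
        p = &pvti_array[cpu];
        version = p->version;
        ret = p->cycles;
        cpu1 = fake_getcpu();
    } while (cpu != cpu1 || (p->version & 1) || p->version != version);

    return ret;
}
```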
+19 -10
arch/x86/xen/enlighten.c
··· 1760 1760 1761 1761 static void __init xen_hvm_guest_init(void) 1762 1762 { 1763 + if (xen_pv_domain()) 1764 + return; 1765 + 1763 1766 init_hvm_pv_info(); 1764 1767 1765 1768 xen_hvm_init_shared_info(); ··· 1778 1775 xen_hvm_init_time_ops(); 1779 1776 xen_hvm_init_mmu_ops(); 1780 1777 } 1778 + #endif 1781 1779 1782 1780 static bool xen_nopv = false; 1783 1781 static __init int xen_parse_nopv(char *arg) ··· 1788 1784 } 1789 1785 early_param("xen_nopv", xen_parse_nopv); 1790 1786 1791 - static uint32_t __init xen_hvm_platform(void) 1787 + static uint32_t __init xen_platform(void) 1792 1788 { 1793 1789 if (xen_nopv) 1794 - return 0; 1795 - 1796 - if (xen_pv_domain()) 1797 1790 return 0; 1798 1791 1799 1792 return xen_cpuid_base(); ··· 1810 1809 } 1811 1810 EXPORT_SYMBOL_GPL(xen_hvm_need_lapic); 1812 1811 1813 - const struct hypervisor_x86 x86_hyper_xen_hvm __refconst = { 1814 - .name = "Xen HVM", 1815 - .detect = xen_hvm_platform, 1812 + static void xen_set_cpu_features(struct cpuinfo_x86 *c) 1813 + { 1814 + if (xen_pv_domain()) 1815 + clear_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS); 1816 + } 1817 + 1818 + const struct hypervisor_x86 x86_hyper_xen = { 1819 + .name = "Xen", 1820 + .detect = xen_platform, 1821 + #ifdef CONFIG_XEN_PVHVM 1816 1822 .init_platform = xen_hvm_guest_init, 1817 - .x2apic_available = xen_x2apic_para_available, 1818 - }; 1819 - EXPORT_SYMBOL(x86_hyper_xen_hvm); 1820 1823 #endif 1824 + .x2apic_available = xen_x2apic_para_available, 1825 + .set_cpu_features = xen_set_cpu_features, 1826 + }; 1827 + EXPORT_SYMBOL(x86_hyper_xen);
+10
arch/x86/xen/suspend.c
··· 88 88 tick_resume_local(); 89 89 } 90 90 91 + static void xen_vcpu_notify_suspend(void *data) 92 + { 93 + tick_suspend_local(); 94 + } 95 + 91 96 void xen_arch_resume(void) 92 97 { 93 98 on_each_cpu(xen_vcpu_notify_restore, NULL, 1); 99 + } 100 + 101 + void xen_arch_suspend(void) 102 + { 103 + on_each_cpu(xen_vcpu_notify_suspend, NULL, 1); 94 104 }
+2
block/blk-core.c
··· 552 552 q->queue_lock = &q->__queue_lock; 553 553 spin_unlock_irq(lock); 554 554 555 + bdi_destroy(&q->backing_dev_info); 556 + 555 557 /* @q is and will stay empty, shutdown and put */ 556 558 blk_put_queue(q); 557 559 }
+36 -24
block/blk-mq.c
··· 677 677 data.next = blk_rq_timeout(round_jiffies_up(data.next)); 678 678 mod_timer(&q->timeout, data.next); 679 679 } else { 680 - queue_for_each_hw_ctx(q, hctx, i) 681 - blk_mq_tag_idle(hctx); 680 + queue_for_each_hw_ctx(q, hctx, i) { 681 + /* the hctx may be unmapped, so check it here */ 682 + if (blk_mq_hw_queue_mapped(hctx)) 683 + blk_mq_tag_idle(hctx); 684 + } 682 685 } 683 686 } 684 687 ··· 858 855 spin_lock(&hctx->lock); 859 856 list_splice(&rq_list, &hctx->dispatch); 860 857 spin_unlock(&hctx->lock); 858 + /* 859 + * the queue is expected stopped with BLK_MQ_RQ_QUEUE_BUSY, but 860 + * it's possible the queue is stopped and restarted again 861 + * before this. Queue restart will dispatch requests. And since 862 + * requests in rq_list aren't added into hctx->dispatch yet, 863 + * the requests in rq_list might get lost. 864 + * 865 + * blk_mq_run_hw_queue() already checks the STOPPED bit 866 + **/ 867 + blk_mq_run_hw_queue(hctx, true); 861 868 } 862 869 } 863 870 ··· 1584 1571 return NOTIFY_OK; 1585 1572 } 1586 1573 1587 - static int blk_mq_hctx_cpu_online(struct blk_mq_hw_ctx *hctx, int cpu) 1588 - { 1589 - struct request_queue *q = hctx->queue; 1590 - struct blk_mq_tag_set *set = q->tag_set; 1591 - 1592 - if (set->tags[hctx->queue_num]) 1593 - return NOTIFY_OK; 1594 - 1595 - set->tags[hctx->queue_num] = blk_mq_init_rq_map(set, hctx->queue_num); 1596 - if (!set->tags[hctx->queue_num]) 1597 - return NOTIFY_STOP; 1598 - 1599 - hctx->tags = set->tags[hctx->queue_num]; 1600 - return NOTIFY_OK; 1601 - } 1602 - 1603 1574 static int blk_mq_hctx_notify(void *data, unsigned long action, 1604 1575 unsigned int cpu) 1605 1576 { ··· 1591 1594 1592 1595 if (action == CPU_DEAD || action == CPU_DEAD_FROZEN) 1593 1596 return blk_mq_hctx_cpu_offline(hctx, cpu); 1594 - else if (action == CPU_ONLINE || action == CPU_ONLINE_FROZEN) 1595 - return blk_mq_hctx_cpu_online(hctx, cpu); 1597 + 1598 + /* 1599 + * In case of CPU online, tags may be reallocated 1600 + * in 
blk_mq_map_swqueue() after mapping is updated. 1601 + */ 1596 1602 1597 1603 return NOTIFY_OK; 1598 1604 } ··· 1775 1775 unsigned int i; 1776 1776 struct blk_mq_hw_ctx *hctx; 1777 1777 struct blk_mq_ctx *ctx; 1778 + struct blk_mq_tag_set *set = q->tag_set; 1778 1779 1779 1780 queue_for_each_hw_ctx(q, hctx, i) { 1780 1781 cpumask_clear(hctx->cpumask); ··· 1804 1803 * disable it and free the request entries. 1805 1804 */ 1806 1805 if (!hctx->nr_ctx) { 1807 - struct blk_mq_tag_set *set = q->tag_set; 1808 - 1809 1806 if (set->tags[i]) { 1810 1807 blk_mq_free_rq_map(set, set->tags[i], i); 1811 1808 set->tags[i] = NULL; 1812 - hctx->tags = NULL; 1813 1809 } 1810 + hctx->tags = NULL; 1814 1811 continue; 1815 1812 } 1813 + 1814 + /* unmapped hw queue can be remapped after CPU topo changed */ 1815 + if (!set->tags[i]) 1816 + set->tags[i] = blk_mq_init_rq_map(set, i); 1817 + hctx->tags = set->tags[i]; 1818 + WARN_ON(!hctx->tags); 1816 1819 1817 1820 /* 1818 1821 * Set the map size to the number of mapped software queues. ··· 2095 2090 */ 2096 2091 list_for_each_entry(q, &all_q_list, all_q_node) 2097 2092 blk_mq_freeze_queue_start(q); 2098 - list_for_each_entry(q, &all_q_list, all_q_node) 2093 + list_for_each_entry(q, &all_q_list, all_q_node) { 2099 2094 blk_mq_freeze_queue_wait(q); 2095 + 2096 + /* 2097 + * timeout handler can't touch hw queue during the 2098 + * reinitialization 2099 + */ 2100 + del_timer_sync(&q->timeout); 2101 + } 2100 2102 2101 2103 list_for_each_entry(q, &all_q_list, all_q_node) 2102 2104 blk_mq_queue_reinit(q);
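The blk-mq change moves tag-map lifetime out of the CPU-online notifier and into the remap pass itself: an unmapped hardware queue frees its tags, and a queue that becomes mapped again after a CPU topology change allocates them on demand. A toy sketch of that remap rule (arrays and allocator are stand-ins, not the kernel structures):

```c
#include <assert.h>
#include <stdlib.h>

#define NR_HW 4

/* Illustrative stand-ins for the per-hw-queue tag maps. */
static int *tags[NR_HW];

static int *alloc_tag_map(void) { return calloc(8, sizeof(int)); }

/*
 * Mirror of the remap logic in the patch: a hw queue with no mapped
 * software contexts drops its tag map; a newly (re)mapped one
 * allocates it here, instead of in a CPU-online notifier.
 */
static void remap(const int nr_ctx[NR_HW])
{
    for (int i = 0; i < NR_HW; i++) {
        if (!nr_ctx[i]) {            /* unmapped: free the entries */
            free(tags[i]);
            tags[i] = NULL;
            continue;
        }
        if (!tags[i])                /* remapped after CPU topo change */
            tags[i] = alloc_tag_map();
    }
}
```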
-2
block/blk-sysfs.c
··· 522 522 523 523 blk_trace_shutdown(q); 524 524 525 - bdi_destroy(&q->backing_dev_info); 526 - 527 525 ida_simple_remove(&blk_queue_ida, q->id); 528 526 call_rcu(&q->rcu_head, blk_free_queue_rcu); 529 527 }
+1 -1
block/bounce.c
··· 221 221 if (page_to_pfn(page) <= queue_bounce_pfn(q) && !force) 222 222 continue; 223 223 224 - inc_zone_page_state(to->bv_page, NR_BOUNCE); 225 224 to->bv_page = mempool_alloc(pool, q->bounce_gfp); 225 + inc_zone_page_state(to->bv_page, NR_BOUNCE); 226 226 227 227 if (rw == WRITE) { 228 228 char *vto, *vfrom;
+1 -5
block/elevator.c
··· 157 157 158 158 eq = kzalloc_node(sizeof(*eq), GFP_KERNEL, q->node); 159 159 if (unlikely(!eq)) 160 - goto err; 160 + return NULL; 161 161 162 162 eq->type = e; 163 163 kobject_init(&eq->kobj, &elv_ktype); ··· 165 165 hash_init(eq->hash); 166 166 167 167 return eq; 168 - err: 169 - kfree(eq); 170 - elevator_put(e); 171 - return NULL; 172 168 } 173 169 EXPORT_SYMBOL(elevator_alloc); 174 170
+2
drivers/acpi/acpi_pnp.c
··· 304 304 {"PNPb006"}, 305 305 /* cs423x-pnpbios */ 306 306 {"CSC0100"}, 307 + {"CSC0103"}, 308 + {"CSC0110"}, 307 309 {"CSC0000"}, 308 310 {"GIM0100"}, /* Guillemot Turtlebeach something appears to be cs4232 compatible */ 309 311 /* es18xx-pnpbios */
+1 -1
drivers/acpi/resource.c
··· 573 573 * @ares: Input ACPI resource object. 574 574 * @types: Valid resource types of IORESOURCE_XXX 575 575 * 576 - * This is a hepler function to support acpi_dev_get_resources(), which filters 576 + * This is a helper function to support acpi_dev_get_resources(), which filters 577 577 * ACPI resource objects according to resource types. 578 578 */ 579 579 int acpi_dev_filter_resource_type(struct acpi_resource *ares,
+1 -1
drivers/acpi/sbs.c
··· 684 684 if (!sbs_manager_broken) { 685 685 result = acpi_manager_get_info(sbs); 686 686 if (!result) { 687 - sbs->manager_present = 0; 687 + sbs->manager_present = 1; 688 688 for (id = 0; id < MAX_SBS_BAT; ++id) 689 689 if ((sbs->batteries_supported & (1 << id))) 690 690 acpi_battery_add(sbs, id);
+22
drivers/acpi/sbshc.c
··· 14 14 #include <linux/delay.h> 15 15 #include <linux/module.h> 16 16 #include <linux/interrupt.h> 17 + #include <linux/dmi.h> 17 18 #include "sbshc.h" 18 19 19 20 #define PREFIX "ACPI: " ··· 88 87 ACPI_SMB_ALARM_DATA = 0x26, /* 2 bytes alarm data */ 89 88 }; 90 89 90 + static bool macbook; 91 + 91 92 static inline int smb_hc_read(struct acpi_smb_hc *hc, u8 address, u8 *data) 92 93 { 93 94 return ec_read(hc->offset + address, data); ··· 135 132 } 136 133 137 134 mutex_lock(&hc->lock); 135 + if (macbook) 136 + udelay(5); 138 137 if (smb_hc_read(hc, ACPI_SMB_PROTOCOL, &temp)) 139 138 goto end; 140 139 if (temp) { ··· 262 257 acpi_handle handle, acpi_ec_query_func func, 263 258 void *data); 264 259 260 + static int macbook_dmi_match(const struct dmi_system_id *d) 261 + { 262 + pr_debug("Detected MacBook, enabling workaround\n"); 263 + macbook = true; 264 + return 0; 265 + } 266 + 267 + static struct dmi_system_id acpi_smbus_dmi_table[] = { 268 + { macbook_dmi_match, "Apple MacBook", { 269 + DMI_MATCH(DMI_BOARD_VENDOR, "Apple"), 270 + DMI_MATCH(DMI_PRODUCT_NAME, "MacBook") }, 271 + }, 272 + { }, 273 + }; 274 + 265 275 static int acpi_smbus_hc_add(struct acpi_device *device) 266 276 { 267 277 int status; 268 278 unsigned long long val; 269 279 struct acpi_smb_hc *hc; 280 + 281 + dmi_check_system(acpi_smbus_dmi_table); 270 282 271 283 if (!device) 272 284 return -EINVAL;
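The MacBook workaround above uses a standard DMI quirk table: each entry pairs substring matches against firmware-reported identity fields with a callback that flips a flag when they all hit. The matching itself reduces to substring checks; a simplified sketch with `strstr()` standing in for what `dmi_check_system()` does internally (the struct here is hypothetical, much reduced from the kernel's `dmi_system_id`):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical, simplified view of a DMI quirk entry. */
struct dmi_quirk {
    const char *board_vendor;   /* substring that must appear */
    const char *product_name;
};

/* DMI_MATCH semantics: every listed field is a substring match. */
static bool quirk_matches(const struct dmi_quirk *q,
                          const char *vendor, const char *product)
{
    return strstr(vendor, q->board_vendor) != NULL &&
           strstr(product, q->product_name) != NULL;
}
```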
+1 -1
drivers/block/loop.c
··· 1620 1620 1621 1621 static void loop_remove(struct loop_device *lo) 1622 1622 { 1623 - del_gendisk(lo->lo_disk); 1624 1623 blk_cleanup_queue(lo->lo_queue); 1624 + del_gendisk(lo->lo_disk); 1625 1625 blk_mq_free_tag_set(&lo->tag_set); 1626 1626 put_disk(lo->lo_disk); 1627 1627 kfree(lo);
+2 -1
drivers/block/nvme-scsi.c
··· 944 944 static int nvme_trans_bdev_limits_page(struct nvme_ns *ns, struct sg_io_hdr *hdr, 945 945 u8 *inq_response, int alloc_len) 946 946 { 947 - __be32 max_sectors = cpu_to_be32(queue_max_hw_sectors(ns->queue)); 947 + __be32 max_sectors = cpu_to_be32( 948 + nvme_block_nr(ns, queue_max_hw_sectors(ns->queue))); 948 949 __be32 max_discard = cpu_to_be32(ns->queue->limits.max_discard_sectors); 949 950 __be32 discard_desc_count = cpu_to_be32(0x100); 950 951
+5
drivers/block/rbd.c
··· 2264 2264 result, xferred); 2265 2265 if (!img_request->result) 2266 2266 img_request->result = result; 2267 + /* 2268 + * Need to end I/O on the entire obj_request worth of 2269 + * bytes in case of error. 2270 + */ 2271 + xferred = obj_request->length; 2267 2272 } 2268 2273 2269 2274 /* Image object requests don't own their page array */
+11 -24
drivers/block/xen-blkback/blkback.c
··· 265 265 atomic_dec(&blkif->persistent_gnt_in_use); 266 266 } 267 267 268 - static void free_persistent_gnts_unmap_callback(int result, 269 - struct gntab_unmap_queue_data *data) 270 - { 271 - struct completion *c = data->data; 272 - 273 - /* BUG_ON used to reproduce existing behaviour, 274 - but is this the best way to deal with this? */ 275 - BUG_ON(result); 276 - complete(c); 277 - } 278 - 279 268 static void free_persistent_gnts(struct xen_blkif *blkif, struct rb_root *root, 280 269 unsigned int num) 281 270 { ··· 274 285 struct rb_node *n; 275 286 int segs_to_unmap = 0; 276 287 struct gntab_unmap_queue_data unmap_data; 277 - struct completion unmap_completion; 278 288 279 - init_completion(&unmap_completion); 280 - 281 - unmap_data.data = &unmap_completion; 282 - unmap_data.done = &free_persistent_gnts_unmap_callback; 283 289 unmap_data.pages = pages; 284 290 unmap_data.unmap_ops = unmap; 285 291 unmap_data.kunmap_ops = NULL; ··· 294 310 !rb_next(&persistent_gnt->node)) { 295 311 296 312 unmap_data.count = segs_to_unmap; 297 - gnttab_unmap_refs_async(&unmap_data); 298 - wait_for_completion(&unmap_completion); 313 + BUG_ON(gnttab_unmap_refs_sync(&unmap_data)); 299 314 300 315 put_free_pages(blkif, pages, segs_to_unmap); 301 316 segs_to_unmap = 0; ··· 312 329 struct gnttab_unmap_grant_ref unmap[BLKIF_MAX_SEGMENTS_PER_REQUEST]; 313 330 struct page *pages[BLKIF_MAX_SEGMENTS_PER_REQUEST]; 314 331 struct persistent_gnt *persistent_gnt; 315 - int ret, segs_to_unmap = 0; 332 + int segs_to_unmap = 0; 316 333 struct xen_blkif *blkif = container_of(work, typeof(*blkif), persistent_purge_work); 334 + struct gntab_unmap_queue_data unmap_data; 335 + 336 + unmap_data.pages = pages; 337 + unmap_data.unmap_ops = unmap; 338 + unmap_data.kunmap_ops = NULL; 317 339 318 340 while(!list_empty(&blkif->persistent_purge_list)) { 319 341 persistent_gnt = list_first_entry(&blkif->persistent_purge_list, ··· 334 346 pages[segs_to_unmap] = persistent_gnt->page; 335 347 336 348 if 
(++segs_to_unmap == BLKIF_MAX_SEGMENTS_PER_REQUEST) { 337 - ret = gnttab_unmap_refs(unmap, NULL, pages, 338 - segs_to_unmap); 339 - BUG_ON(ret); 349 + unmap_data.count = segs_to_unmap; 350 + BUG_ON(gnttab_unmap_refs_sync(&unmap_data)); 340 351 put_free_pages(blkif, pages, segs_to_unmap); 341 352 segs_to_unmap = 0; 342 353 } 343 354 kfree(persistent_gnt); 344 355 } 345 356 if (segs_to_unmap > 0) { 346 - ret = gnttab_unmap_refs(unmap, NULL, pages, segs_to_unmap); 347 - BUG_ON(ret); 357 + unmap_data.count = segs_to_unmap; 358 + BUG_ON(gnttab_unmap_refs_sync(&unmap_data)); 348 359 put_free_pages(blkif, pages, segs_to_unmap); 349 360 } 350 361 }
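The blkback patch replaces the open-coded completion dance (callback sets a completion, caller waits on it) with `gnttab_unmap_refs_sync()`, which centralizes the same wrapper. The general pattern of turning an async callback API into a synchronous call can be sketched as below; in this single-threaded toy the "async" operation completes immediately, where the kernel's would fire from another context (all names are illustrative):

```c
#include <assert.h>

/* Toy completion -- stands in for the kernel's struct completion. */
struct completion { int done; };

/* Hypothetical async unmap API: a request with a done-callback + cookie. */
struct unmap_req {
    int count;
    void (*done)(int result, void *data);
    void *data;
};

/* In this sketch the async op finishes before returning. */
static void unmap_refs_async(struct unmap_req *req)
{
    int result = (req->count >= 0) ? 0 : -1;
    req->done(result, req->data);
}

/* The glue the old blkback code open-coded at each call site. */
struct sync_cookie { struct completion c; int result; };

static void sync_done(int result, void *data)
{
    struct sync_cookie *s = data;
    s->result = result;
    s->c.done = 1;               /* complete(&s->c) */
}

/* What a _sync() helper centralizes: kick off the op, then wait. */
static int unmap_refs_sync(struct unmap_req *req)
{
    struct sync_cookie s = { { 0 }, 0 };

    req->done = sync_done;
    req->data = &s;
    unmap_refs_async(req);
    while (!s.c.done)
        ;                        /* wait_for_completion(&s.c) */
    return s.result;
}
```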
+23
drivers/block/zram/zram_drv.c
··· 74 74 return (struct zram *)dev_to_disk(dev)->private_data; 75 75 } 76 76 77 + static ssize_t compact_store(struct device *dev, 78 + struct device_attribute *attr, const char *buf, size_t len) 79 + { 80 + unsigned long nr_migrated; 81 + struct zram *zram = dev_to_zram(dev); 82 + struct zram_meta *meta; 83 + 84 + down_read(&zram->init_lock); 85 + if (!init_done(zram)) { 86 + up_read(&zram->init_lock); 87 + return -EINVAL; 88 + } 89 + 90 + meta = zram->meta; 91 + nr_migrated = zs_compact(meta->mem_pool); 92 + atomic64_add(nr_migrated, &zram->stats.num_migrated); 93 + up_read(&zram->init_lock); 94 + 95 + return len; 96 + } 97 + 77 98 static ssize_t disksize_show(struct device *dev, 78 99 struct device_attribute *attr, char *buf) 79 100 { ··· 1059 1038 .owner = THIS_MODULE 1060 1039 }; 1061 1040 1041 + static DEVICE_ATTR_WO(compact); 1062 1042 static DEVICE_ATTR_RW(disksize); 1063 1043 static DEVICE_ATTR_RO(initstate); 1064 1044 static DEVICE_ATTR_WO(reset); ··· 1136 1114 &dev_attr_num_writes.attr, 1137 1115 &dev_attr_failed_reads.attr, 1138 1116 &dev_attr_failed_writes.attr, 1117 + &dev_attr_compact.attr, 1139 1118 &dev_attr_invalid_io.attr, 1140 1119 &dev_attr_notify_free.attr, 1141 1120 &dev_attr_zero_pages.attr,
+1 -1
drivers/bus/arm-cci.c
··· 660 660 * Initialise the fake PMU. We only need to populate the 661 661 * used_mask for the purposes of validation. 662 662 */ 663 - .used_mask = CPU_BITS_NONE, 663 + .used_mask = { 0 }, 664 664 }; 665 665 666 666 if (!validate_event(event->pmu, &fake_pmu, leader))
+3 -2
drivers/bus/omap_l3_noc.c
··· 1 1 /* 2 2 * OMAP L3 Interconnect error handling driver 3 3 * 4 - * Copyright (C) 2011-2014 Texas Instruments Incorporated - http://www.ti.com/ 4 + * Copyright (C) 2011-2015 Texas Instruments Incorporated - http://www.ti.com/ 5 5 * Santosh Shilimkar <santosh.shilimkar@ti.com> 6 6 * Sricharan <r.sricharan@ti.com> 7 7 * ··· 233 233 } 234 234 235 235 static const struct of_device_id l3_noc_match[] = { 236 - {.compatible = "ti,omap4-l3-noc", .data = &omap_l3_data}, 236 + {.compatible = "ti,omap4-l3-noc", .data = &omap4_l3_data}, 237 + {.compatible = "ti,omap5-l3-noc", .data = &omap5_l3_data}, 237 238 {.compatible = "ti,dra7-l3-noc", .data = &dra_l3_data}, 238 239 {.compatible = "ti,am4372-l3-noc", .data = &am4372_l3_data}, 239 240 {},
+40 -14
drivers/bus/omap_l3_noc.h
··· 1 1 /* 2 2 * OMAP L3 Interconnect error handling driver header 3 3 * 4 - * Copyright (C) 2011-2014 Texas Instruments Incorporated - http://www.ti.com/ 4 + * Copyright (C) 2011-2015 Texas Instruments Incorporated - http://www.ti.com/ 5 5 * Santosh Shilimkar <santosh.shilimkar@ti.com> 6 6 * sricharan <r.sricharan@ti.com> 7 7 * ··· 175 175 }; 176 176 177 177 178 - static struct l3_target_data omap_l3_target_data_clk3[] = { 179 - {0x0100, "EMUSS",}, 180 - {0x0300, "DEBUG SOURCE",}, 181 - {0x0, "HOST CLK3",}, 178 + static struct l3_target_data omap4_l3_target_data_clk3[] = { 179 + {0x0100, "DEBUGSS",}, 182 180 }; 183 181 184 - static struct l3_flagmux_data omap_l3_flagmux_clk3 = { 182 + static struct l3_flagmux_data omap4_l3_flagmux_clk3 = { 185 183 .offset = 0x0200, 186 - .l3_targ = omap_l3_target_data_clk3, 187 - .num_targ_data = ARRAY_SIZE(omap_l3_target_data_clk3), 184 + .l3_targ = omap4_l3_target_data_clk3, 185 + .num_targ_data = ARRAY_SIZE(omap4_l3_target_data_clk3), 188 186 }; 189 187 190 188 static struct l3_masters_data omap_l3_masters[] = { ··· 213 215 { 0x32, "USBHOSTFS"} 214 216 }; 215 217 216 - static struct l3_flagmux_data *omap_l3_flagmux[] = { 218 + static struct l3_flagmux_data *omap4_l3_flagmux[] = { 217 219 &omap_l3_flagmux_clk1, 218 220 &omap_l3_flagmux_clk2, 219 - &omap_l3_flagmux_clk3, 221 + &omap4_l3_flagmux_clk3, 220 222 }; 221 223 222 - static const struct omap_l3 omap_l3_data = { 223 - .l3_flagmux = omap_l3_flagmux, 224 - .num_modules = ARRAY_SIZE(omap_l3_flagmux), 224 + static const struct omap_l3 omap4_l3_data = { 225 + .l3_flagmux = omap4_l3_flagmux, 226 + .num_modules = ARRAY_SIZE(omap4_l3_flagmux), 225 227 .l3_masters = omap_l3_masters, 226 228 .num_masters = ARRAY_SIZE(omap_l3_masters), 227 229 /* The 6 MSBs of register field used to distinguish initiator */ 228 230 .mst_addr_mask = 0xFC, 231 + }; 232 + 233 + /* OMAP5 data */ 234 + static struct l3_target_data omap5_l3_target_data_clk3[] = { 235 + {0x0100, "L3INSTR",}, 236 + {0x0300, 
"DEBUGSS",}, 237 + {0x0, "HOSTCLK3",}, 238 + }; 239 + 240 + static struct l3_flagmux_data omap5_l3_flagmux_clk3 = { 241 + .offset = 0x0200, 242 + .l3_targ = omap5_l3_target_data_clk3, 243 + .num_targ_data = ARRAY_SIZE(omap5_l3_target_data_clk3), 244 + }; 245 + 246 + static struct l3_flagmux_data *omap5_l3_flagmux[] = { 247 + &omap_l3_flagmux_clk1, 248 + &omap_l3_flagmux_clk2, 249 + &omap5_l3_flagmux_clk3, 250 + }; 251 + 252 + static const struct omap_l3 omap5_l3_data = { 253 + .l3_flagmux = omap5_l3_flagmux, 254 + .num_modules = ARRAY_SIZE(omap5_l3_flagmux), 255 + .l3_masters = omap_l3_masters, 256 + .num_masters = ARRAY_SIZE(omap_l3_masters), 257 + /* The 6 MSBs of register field used to distinguish initiator */ 258 + .mst_addr_mask = 0x7E0, 229 259 }; 230 260 231 261 /* DRA7 data */ ··· 300 274 301 275 static struct l3_target_data dra_l3_target_data_clk2[] = { 302 276 {0x0, "HOST CLK1",}, 303 - {0x0, "HOST CLK2",}, 277 + {0x800000, "HOST CLK2",}, 304 278 {0xdead, L3_TARGET_NOT_SUPPORTED,}, 305 279 {0x3400, "SHA2_2",}, 306 280 {0x0900, "BB2D",},
+9 -9
drivers/char/hw_random/bcm63xx-rng.c
··· 57 57 val &= ~RNG_EN; 58 58 __raw_writel(val, priv->regs + RNG_CTRL); 59 59 60 - clk_didsable_unprepare(prov->clk); 60 + clk_disable_unprepare(priv->clk); 61 61 } 62 62 63 63 static int bcm63xx_rng_data_present(struct hwrng *rng, int wait) ··· 97 97 priv->rng.name = pdev->name; 98 98 priv->rng.init = bcm63xx_rng_init; 99 99 priv->rng.cleanup = bcm63xx_rng_cleanup; 100 - prov->rng.data_present = bcm63xx_rng_data_present; 100 + priv->rng.data_present = bcm63xx_rng_data_present; 101 101 priv->rng.data_read = bcm63xx_rng_data_read; 102 102 103 103 priv->clk = devm_clk_get(&pdev->dev, "ipsec"); 104 104 if (IS_ERR(priv->clk)) { 105 - error = PTR_ERR(priv->clk); 106 - dev_err(&pdev->dev, "no clock for device: %d\n", error); 107 - return error; 105 + ret = PTR_ERR(priv->clk); 106 + dev_err(&pdev->dev, "no clock for device: %d\n", ret); 107 + return ret; 108 108 } 109 109 110 110 if (!devm_request_mem_region(&pdev->dev, r->start, ··· 120 120 return -ENOMEM; 121 121 } 122 122 123 - error = devm_hwrng_register(&pdev->dev, &priv->rng); 124 - if (error) { 123 + ret = devm_hwrng_register(&pdev->dev, &priv->rng); 124 + if (ret) { 125 125 dev_err(&pdev->dev, "failed to register rng device: %d\n", 126 - error); 127 - return error; 126 + ret); 127 + return ret; 128 128 } 129 129 130 130 dev_info(&pdev->dev, "registered RNG driver\n");
+2 -2
drivers/char/ipmi/ipmi_msghandler.c
··· 2000 2000 seq_printf(m, " %x", intf->channels[i].address); 2001 2001 seq_putc(m, '\n'); 2002 2002 2003 - return seq_has_overflowed(m); 2003 + return 0; 2004 2004 } 2005 2005 2006 2006 static int smi_ipmb_proc_open(struct inode *inode, struct file *file) ··· 2023 2023 ipmi_version_major(&intf->bmc->id), 2024 2024 ipmi_version_minor(&intf->bmc->id)); 2025 2025 2026 - return seq_has_overflowed(m); 2026 + return 0; 2027 2027 } 2028 2028 2029 2029 static int smi_version_proc_open(struct inode *inode, struct file *file)
+9 -7
drivers/char/ipmi/ipmi_si_intf.c
··· 942 942 * If we are running to completion, start it and run 943 943 * transactions until everything is clear. 944 944 */ 945 - smi_info->curr_msg = msg; 946 - smi_info->waiting_msg = NULL; 945 + smi_info->waiting_msg = msg; 947 946 948 947 /* 949 948 * Run to completion means we are single-threaded, no ··· 2243 2244 acpi_handle handle; 2244 2245 acpi_status status; 2245 2246 unsigned long long tmp; 2246 - int rv; 2247 + int rv = -EINVAL; 2247 2248 2248 2249 acpi_dev = pnp_acpi_device(dev); 2249 2250 if (!acpi_dev) ··· 2261 2262 2262 2263 /* _IFT tells us the interface type: KCS, BT, etc */ 2263 2264 status = acpi_evaluate_integer(handle, "_IFT", NULL, &tmp); 2264 - if (ACPI_FAILURE(status)) 2265 + if (ACPI_FAILURE(status)) { 2266 + dev_err(&dev->dev, "Could not find ACPI IPMI interface type\n"); 2265 2267 goto err_free; 2268 + } 2266 2269 2267 2270 switch (tmp) { 2268 2271 case 1: ··· 2277 2276 info->si_type = SI_BT; 2278 2277 break; 2279 2278 case 4: /* SSIF, just ignore */ 2279 + rv = -ENODEV; 2280 2280 goto err_free; 2281 2281 default: 2282 2282 dev_info(&dev->dev, "unknown IPMI type %lld\n", tmp); ··· 2338 2336 2339 2337 err_free: 2340 2338 kfree(info); 2341 - return -EINVAL; 2339 + return rv; 2342 2340 } 2343 2341 2344 2342 static void ipmi_pnp_remove(struct pnp_dev *dev) ··· 3082 3080 3083 3081 seq_printf(m, "%s\n", si_to_str[smi->si_type]); 3084 3082 3085 - return seq_has_overflowed(m); 3083 + return 0; 3086 3084 } 3087 3085 3088 3086 static int smi_type_proc_open(struct inode *inode, struct file *file) ··· 3155 3153 smi->irq, 3156 3154 smi->slave_addr); 3157 3155 3158 - return seq_has_overflowed(m); 3156 + return 0; 3159 3157 } 3160 3158 3161 3159 static int smi_params_proc_open(struct inode *inode, struct file *file)
+178 -35
drivers/char/ipmi/ipmi_ssif.c
··· 31 31 * interface into the I2C driver, I believe. 32 32 */ 33 33 34 - #include <linux/version.h> 35 34 #if defined(MODVERSIONS) 36 35 #include <linux/modversions.h> 37 36 #endif ··· 165 166 /* Number of watchdog pretimeouts. */ 166 167 SSIF_STAT_watchdog_pretimeouts, 167 168 169 + /* Number of alers received. */ 170 + SSIF_STAT_alerts, 171 + 168 172 /* Always add statistics before this value, it must be last. */ 169 173 SSIF_NUM_STATS 170 174 }; ··· 216 214 #define WDT_PRE_TIMEOUT_INT 0x08 217 215 unsigned char msg_flags; 218 216 217 + u8 global_enables; 219 218 bool has_event_buffer; 219 + bool supports_alert; 220 + 221 + /* 222 + * Used to tell what we should do with alerts. If we are 223 + * waiting on a response, read the data immediately. 224 + */ 225 + bool got_alert; 226 + bool waiting_alert; 220 227 221 228 /* 222 229 * If set to true, this will request events the next time the ··· 489 478 490 479 if (ssif_info->i2c_read_write == I2C_SMBUS_WRITE) { 491 480 result = i2c_smbus_write_block_data( 492 - ssif_info->client, SSIF_IPMI_REQUEST, 481 + ssif_info->client, ssif_info->i2c_command, 493 482 ssif_info->i2c_data[0], 494 483 ssif_info->i2c_data + 1); 495 484 ssif_info->done_handler(ssif_info, result, NULL, 0); 496 485 } else { 497 486 result = i2c_smbus_read_block_data( 498 - ssif_info->client, SSIF_IPMI_RESPONSE, 487 + ssif_info->client, ssif_info->i2c_command, 499 488 ssif_info->i2c_data); 500 489 if (result < 0) 501 490 ssif_info->done_handler(ssif_info, result, ··· 529 518 static void msg_done_handler(struct ssif_info *ssif_info, int result, 530 519 unsigned char *data, unsigned int len); 531 520 532 - static void retry_timeout(unsigned long data) 521 + static void start_get(struct ssif_info *ssif_info) 533 522 { 534 - struct ssif_info *ssif_info = (void *) data; 535 523 int rv; 536 524 537 - if (ssif_info->stopping) 538 - return; 539 - 540 525 ssif_info->rtc_us_timer = 0; 526 + ssif_info->multi_pos = 0; 541 527 542 528 rv = ssif_i2c_send(ssif_info, 
msg_done_handler, I2C_SMBUS_READ, 543 529 SSIF_IPMI_RESPONSE, ··· 546 538 547 539 msg_done_handler(ssif_info, -EIO, NULL, 0); 548 540 } 541 + } 542 + 543 + static void retry_timeout(unsigned long data) 544 + { 545 + struct ssif_info *ssif_info = (void *) data; 546 + unsigned long oflags, *flags; 547 + bool waiting; 548 + 549 + if (ssif_info->stopping) 550 + return; 551 + 552 + flags = ipmi_ssif_lock_cond(ssif_info, &oflags); 553 + waiting = ssif_info->waiting_alert; 554 + ssif_info->waiting_alert = false; 555 + ipmi_ssif_unlock_cond(ssif_info, flags); 556 + 557 + if (waiting) 558 + start_get(ssif_info); 559 + } 560 + 561 + 562 + static void ssif_alert(struct i2c_client *client, unsigned int data) 563 + { 564 + struct ssif_info *ssif_info = i2c_get_clientdata(client); 565 + unsigned long oflags, *flags; 566 + bool do_get = false; 567 + 568 + ssif_inc_stat(ssif_info, alerts); 569 + 570 + flags = ipmi_ssif_lock_cond(ssif_info, &oflags); 571 + if (ssif_info->waiting_alert) { 572 + ssif_info->waiting_alert = false; 573 + del_timer(&ssif_info->retry_timer); 574 + do_get = true; 575 + } else if (ssif_info->curr_msg) { 576 + ssif_info->got_alert = true; 577 + } 578 + ipmi_ssif_unlock_cond(ssif_info, flags); 579 + if (do_get) 580 + start_get(ssif_info); 549 581 } 550 582 551 583 static int start_resend(struct ssif_info *ssif_info); ··· 607 559 if (ssif_info->retries_left > 0) { 608 560 ssif_inc_stat(ssif_info, receive_retries); 609 561 562 + flags = ipmi_ssif_lock_cond(ssif_info, &oflags); 563 + ssif_info->waiting_alert = true; 564 + ssif_info->rtc_us_timer = SSIF_MSG_USEC; 610 565 mod_timer(&ssif_info->retry_timer, 611 566 jiffies + SSIF_MSG_JIFFIES); 612 - ssif_info->rtc_us_timer = SSIF_MSG_USEC; 567 + ipmi_ssif_unlock_cond(ssif_info, flags); 613 568 return; 614 569 } 615 570 ··· 632 581 ssif_inc_stat(ssif_info, received_message_parts); 633 582 634 583 /* Remove the multi-part read marker. 
*/ 635 - for (i = 0; i < (len-2); i++) 636 - ssif_info->data[i] = data[i+2]; 637 584 len -= 2; 585 + for (i = 0; i < len; i++) 586 + ssif_info->data[i] = data[i+2]; 638 587 ssif_info->multi_len = len; 639 588 ssif_info->multi_pos = 1; 640 589 ··· 661 610 goto continue_op; 662 611 } 663 612 664 - blocknum = data[ssif_info->multi_len]; 613 + blocknum = data[0]; 665 614 666 - if (ssif_info->multi_len+len-1 > IPMI_MAX_MSG_LENGTH) { 615 + if (ssif_info->multi_len + len - 1 > IPMI_MAX_MSG_LENGTH) { 667 616 /* Received message too big, abort the operation. */ 668 617 result = -E2BIG; 669 618 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG) ··· 673 622 } 674 623 675 624 /* Remove the blocknum from the data. */ 676 - for (i = 0; i < (len-1); i++) 677 - ssif_info->data[i+ssif_info->multi_len] = data[i+1]; 678 625 len--; 626 + for (i = 0; i < len; i++) 627 + ssif_info->data[i + ssif_info->multi_len] = data[i + 1]; 679 628 ssif_info->multi_len += len; 680 629 if (blocknum == 0xff) { 681 630 /* End of read */ 682 631 len = ssif_info->multi_len; 683 632 data = ssif_info->data; 684 - } else if ((blocknum+1) != ssif_info->multi_pos) { 633 + } else if (blocknum + 1 != ssif_info->multi_pos) { 685 634 /* 686 635 * Out of sequence block, just abort. Block 687 636 * numbers start at zero for the second block, ··· 701 650 if (rv < 0) { 702 651 if (ssif_info->ssif_debug & SSIF_DEBUG_MSG) 703 652 pr_info(PFX 704 - "Error from i2c_non_blocking_op(2)\n"); 653 + "Error from ssif_i2c_send\n"); 705 654 706 655 result = -EIO; 707 656 } else ··· 881 830 } 882 831 883 832 if (ssif_info->multi_data) { 884 - /* In the middle of a multi-data write. */ 833 + /* 834 + * In the middle of a multi-data write. See the comment 835 + * in the SSIF_MULTI_n_PART case in the probe function 836 + * for details on the intricacies of this. 
837 + */ 885 838 int left; 886 839 887 840 ssif_inc_stat(ssif_info, sent_messages_parts); ··· 919 864 msg_done_handler(ssif_info, -EIO, NULL, 0); 920 865 } 921 866 } else { 867 + unsigned long oflags, *flags; 868 + bool got_alert; 869 + 922 870 ssif_inc_stat(ssif_info, sent_messages); 923 871 ssif_inc_stat(ssif_info, sent_messages_parts); 924 872 925 - /* Wait a jiffie then request the next message */ 926 - ssif_info->retries_left = SSIF_RECV_RETRIES; 927 - ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC; 928 - mod_timer(&ssif_info->retry_timer, 929 - jiffies + SSIF_MSG_PART_JIFFIES); 930 - return; 873 + flags = ipmi_ssif_lock_cond(ssif_info, &oflags); 874 + got_alert = ssif_info->got_alert; 875 + if (got_alert) { 876 + ssif_info->got_alert = false; 877 + ssif_info->waiting_alert = false; 878 + } 879 + 880 + if (got_alert) { 881 + ipmi_ssif_unlock_cond(ssif_info, flags); 882 + /* The alert already happened, try now. */ 883 + retry_timeout((unsigned long) ssif_info); 884 + } else { 885 + /* Wait a jiffie then request the next message */ 886 + ssif_info->waiting_alert = true; 887 + ssif_info->retries_left = SSIF_RECV_RETRIES; 888 + ssif_info->rtc_us_timer = SSIF_MSG_PART_USEC; 889 + mod_timer(&ssif_info->retry_timer, 890 + jiffies + SSIF_MSG_PART_JIFFIES); 891 + ipmi_ssif_unlock_cond(ssif_info, flags); 892 + } 931 893 } 932 894 } 933 895 ··· 952 880 { 953 881 int rv; 954 882 int command; 883 + 884 + ssif_info->got_alert = false; 955 885 956 886 if (ssif_info->data_len > 32) { 957 887 command = SSIF_IPMI_MULTI_PART_REQUEST_START; ··· 989 915 return -E2BIG; 990 916 991 917 ssif_info->retries_left = SSIF_SEND_RETRIES; 992 - memcpy(ssif_info->data+1, data, len); 918 + memcpy(ssif_info->data + 1, data, len); 993 919 ssif_info->data_len = len; 994 920 return start_resend(ssif_info); 995 921 } ··· 1274 1200 { 1275 1201 seq_puts(m, "ssif\n"); 1276 1202 1277 - return seq_has_overflowed(m); 1203 + return 0; 1278 1204 } 1279 1205 1280 1206 static int smi_type_proc_open(struct 
inode *inode, struct file *file) ··· 1317 1243 ssif_get_stat(ssif_info, events)); 1318 1244 seq_printf(m, "watchdog_pretimeouts: %u\n", 1319 1245 ssif_get_stat(ssif_info, watchdog_pretimeouts)); 1246 + seq_printf(m, "alerts: %u\n", 1247 + ssif_get_stat(ssif_info, alerts)); 1320 1248 return 0; 1321 1249 } 1322 1250 ··· 1334 1258 .release = single_release, 1335 1259 }; 1336 1260 1261 + static int strcmp_nospace(char *s1, char *s2) 1262 + { 1263 + while (*s1 && *s2) { 1264 + while (isspace(*s1)) 1265 + s1++; 1266 + while (isspace(*s2)) 1267 + s2++; 1268 + if (*s1 > *s2) 1269 + return 1; 1270 + if (*s1 < *s2) 1271 + return -1; 1272 + s1++; 1273 + s2++; 1274 + } 1275 + return 0; 1276 + } 1277 + 1337 1278 static struct ssif_addr_info *ssif_info_find(unsigned short addr, 1338 1279 char *adapter_name, 1339 1280 bool match_null_name) ··· 1365 1272 /* One is NULL and one is not */ 1366 1273 continue; 1367 1274 } 1368 - if (strcmp(info->adapter_name, adapter_name)) 1369 - /* Names to not match */ 1275 + if (adapter_name && 1276 + strcmp_nospace(info->adapter_name, 1277 + adapter_name)) 1278 + /* Names do not match */ 1370 1279 continue; 1371 1280 } 1372 1281 found = info; ··· 1400 1305 #endif 1401 1306 return false; 1402 1307 } 1308 + 1309 + /* 1310 + * Global enables we care about. 
1311 + */ 1312 + #define GLOBAL_ENABLES_MASK (IPMI_BMC_EVT_MSG_BUFF | IPMI_BMC_RCV_MSG_INTR | \ 1313 + IPMI_BMC_EVT_MSG_INTR) 1403 1314 1404 1315 static int ssif_probe(struct i2c_client *client, const struct i2c_device_id *id) 1405 1316 { ··· 1492 1391 break; 1493 1392 1494 1393 case SSIF_MULTI_2_PART: 1495 - if (ssif_info->max_xmit_msg_size > 64) 1496 - ssif_info->max_xmit_msg_size = 64; 1394 + if (ssif_info->max_xmit_msg_size > 63) 1395 + ssif_info->max_xmit_msg_size = 63; 1497 1396 if (ssif_info->max_recv_msg_size > 62) 1498 1397 ssif_info->max_recv_msg_size = 62; 1499 1398 break; 1500 1399 1501 1400 case SSIF_MULTI_n_PART: 1401 + /* 1402 + * The specification is rather confusing at 1403 + * this point, but I think I understand what 1404 + * is meant. At least I have a workable 1405 + * solution. With multi-part messages, you 1406 + * cannot send a message that is a multiple of 1407 + * 32-bytes in length, because the start and 1408 + * middle messages are 32-bytes and the end 1409 + * message must be at least one byte. You 1410 + * can't fudge on an extra byte, that would 1411 + * screw up things like fru data writes. So 1412 + * we limit the length to 63 bytes. That way 1413 + * a 32-byte message gets sent as a single 1414 + * part. A larger message will be a 32-byte 1415 + * start and the next message is always going 1416 + * to be 1-31 bytes in length. Not ideal, but 1417 + * it should work. 
1418 + */ 1419 + if (ssif_info->max_xmit_msg_size > 63) 1420 + ssif_info->max_xmit_msg_size = 63; 1502 1421 break; 1503 1422 1504 1423 default: ··· 1528 1407 } else { 1529 1408 no_support: 1530 1409 /* Assume no multi-part or PEC support */ 1531 - pr_info(PFX "Error fetching SSIF: %d %d %2.2x, your system probably doesn't support this command so using defaults\n", 1410 + pr_info(PFX "Error fetching SSIF: %d %d %2.2x, your system probably doesn't support this command so using defaults\n", 1532 1411 rv, len, resp[2]); 1533 1412 1534 1413 ssif_info->max_xmit_msg_size = 32; ··· 1557 1436 goto found; 1558 1437 } 1559 1438 1439 + ssif_info->global_enables = resp[3]; 1440 + 1560 1441 if (resp[3] & IPMI_BMC_EVT_MSG_BUFF) { 1561 1442 ssif_info->has_event_buffer = true; 1562 1443 /* buffer is already enabled, nothing to do. */ ··· 1567 1444 1568 1445 msg[0] = IPMI_NETFN_APP_REQUEST << 2; 1569 1446 msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD; 1570 - msg[2] = resp[3] | IPMI_BMC_EVT_MSG_BUFF; 1447 + msg[2] = ssif_info->global_enables | IPMI_BMC_EVT_MSG_BUFF; 1571 1448 rv = do_cmd(client, 3, msg, &len, resp); 1572 1449 if (rv || (len < 2)) { 1573 - pr_warn(PFX "Error getting global enables: %d %d %2.2x\n", 1450 + pr_warn(PFX "Error setting global enables: %d %d %2.2x\n", 1574 1451 rv, len, resp[2]); 1575 1452 rv = 0; /* Not fatal */ 1576 1453 goto found; 1577 1454 } 1578 1455 1579 - if (resp[2] == 0) 1456 + if (resp[2] == 0) { 1580 1457 /* A successful return means the event buffer is supported. 
*/ 1581 1458 ssif_info->has_event_buffer = true; 1459 + ssif_info->global_enables |= IPMI_BMC_EVT_MSG_BUFF; 1460 + } 1461 + 1462 + msg[0] = IPMI_NETFN_APP_REQUEST << 2; 1463 + msg[1] = IPMI_SET_BMC_GLOBAL_ENABLES_CMD; 1464 + msg[2] = ssif_info->global_enables | IPMI_BMC_RCV_MSG_INTR; 1465 + rv = do_cmd(client, 3, msg, &len, resp); 1466 + if (rv || (len < 2)) { 1467 + pr_warn(PFX "Error setting global enables: %d %d %2.2x\n", 1468 + rv, len, resp[2]); 1469 + rv = 0; /* Not fatal */ 1470 + goto found; 1471 + } 1472 + 1473 + if (resp[2] == 0) { 1474 + /* A successful return means the alert is supported. */ 1475 + ssif_info->supports_alert = true; 1476 + ssif_info->global_enables |= IPMI_BMC_RCV_MSG_INTR; 1477 + } 1582 1478 1583 1479 found: 1584 1480 ssif_info->intf_num = atomic_inc_return(&next_intf); ··· 1955 1813 }, 1956 1814 .probe = ssif_probe, 1957 1815 .remove = ssif_remove, 1816 + .alert = ssif_alert, 1958 1817 .id_table = ssif_id, 1959 1818 .detect = ssif_detect 1960 1819 }; ··· 1975 1832 rv = new_ssif_client(addr[i], adapter_name[i], 1976 1833 dbg[i], slave_addrs[i], 1977 1834 SI_HARDCODED); 1978 - if (!rv) 1835 + if (rv) 1979 1836 pr_err(PFX 1980 1837 "Couldn't add hardcoded device at addr 0x%x\n", 1981 1838 addr[i]);
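The long SSIF_MULTI_n_PART comment in the hunk above argues that capping transmits at 63 bytes keeps the final part of a multi-part write non-empty. A standalone sketch of that arithmetic (the helper name is ours, not kernel code):

```c
#include <assert.h>
#include <stddef.h>

/* Size of the last part when a payload of `len` bytes is sent as
 * 32-byte start/middle parts plus a final part. A length that is an
 * exact multiple of 32 (and greater than 32) would need a zero-byte
 * end part, which SSIF does not allow -- hence the 63-byte cap in
 * the patch: 32 bytes go out as a single part, and 33..63 bytes
 * leave a final part of 1..31 bytes. */
static size_t ssif_last_part_len(size_t len)
{
	if (len <= 32)
		return len;	/* fits in a single part */
	return len % 32;	/* 0 flags the illegal empty end part */
}
```

With the 63-byte cap in place, `ssif_last_part_len` never returns 0 for a legal transmit length.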
+16
drivers/cpuidle/cpuidle.c
··· 158 158 int entered_state; 159 159 160 160 struct cpuidle_state *target_state = &drv->states[index]; 161 + bool broadcast = !!(target_state->flags & CPUIDLE_FLAG_TIMER_STOP); 161 162 ktime_t time_start, time_end; 162 163 s64 diff; 164 + 165 + /* 166 + * Tell the time framework to switch to a broadcast timer because our 167 + * local timer will be shut down. If a local timer is used from another 168 + * CPU as a broadcast timer, this call may fail if it is not available. 169 + */ 170 + if (broadcast && tick_broadcast_enter()) 171 + return -EBUSY; 163 172 164 173 trace_cpu_idle_rcuidle(index, dev->cpu); 165 174 time_start = ktime_get(); ··· 177 168 178 169 time_end = ktime_get(); 179 170 trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu); 171 + 172 + if (broadcast) { 173 + if (WARN_ON_ONCE(!irqs_disabled())) 174 + local_irq_disable(); 175 + 176 + tick_broadcast_exit(); 177 + } 180 178 181 179 if (!cpuidle_state_is_coupled(dev, drv, entered_state)) 182 180 local_irq_enable();
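The cpuidle hunk brackets idle entry with broadcast-timer handover: enter broadcast mode before the local timer is shut down, bail out with -EBUSY if no broadcast timer can take over, and exit broadcast mode afterwards. The control-flow shape, reduced to a test double (the stub and flag below are hypothetical, not kernel APIs):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for tick_broadcast_enter(): returns nonzero
 * when no broadcast timer is available to take over timekeeping. */
static bool broadcast_available;

static int tick_broadcast_enter_stub(void)
{
	return broadcast_available ? 0 : 1;
}

/* Shape of the fix: if the target state stops the local timer, the
 * handover must succeed before idle entry is attempted at all. */
static int enter_state(bool timer_stop)
{
	if (timer_stop && tick_broadcast_enter_stub())
		return -16;	/* -EBUSY */
	/* ... idle entry and the matching tick_broadcast_exit()
	 * would go here ... */
	return 0;
}
```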
+1
drivers/dma/Kconfig
··· 437 437 438 438 config XGENE_DMA 439 439 tristate "APM X-Gene DMA support" 440 + depends on ARCH_XGENE || COMPILE_TEST 440 441 select DMA_ENGINE 441 442 select DMA_ENGINE_RAID 442 443 select ASYNC_TX_ENABLE_CHANNEL_SWITCH
+4
drivers/dma/dmaengine.c
··· 571 571 572 572 chan = private_candidate(&mask, device, NULL, NULL); 573 573 if (chan) { 574 + dma_cap_set(DMA_PRIVATE, device->cap_mask); 575 + device->privatecnt++; 574 576 err = dma_chan_get(chan); 575 577 if (err) { 576 578 pr_debug("%s: failed to get %s: (%d)\n", 577 579 __func__, dma_chan_name(chan), err); 578 580 chan = NULL; 581 + if (--device->privatecnt == 0) 582 + dma_cap_clear(DMA_PRIVATE, device->cap_mask); 579 583 } 580 584 } 581 585
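The dmaengine hunk ties the DMA_PRIVATE capability bit to `privatecnt`, so the bit is only cleared when the last private user goes away — including on the new error path. A minimal, hypothetical sketch of that refcount-guarded-flag idiom:

```c
#include <assert.h>
#include <stdbool.h>

struct caps {
	bool private_cap;	/* stands in for DMA_PRIVATE */
	int privatecnt;
};

/* Taking a channel sets the flag and bumps the count... */
static void private_get(struct caps *c)
{
	c->private_cap = true;
	c->privatecnt++;
}

/* ...and the flag is cleared only when the count drops back to
 * zero, mirroring the `if (--device->privatecnt == 0)` error path
 * added in the patch. */
static void private_put(struct caps *c)
{
	if (--c->privatecnt == 0)
		c->private_cap = false;
}
```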
+2
drivers/dma/sh/usb-dmac.c
··· 673 673 * Power management 674 674 */ 675 675 676 + #ifdef CONFIG_PM 676 677 static int usb_dmac_runtime_suspend(struct device *dev) 677 678 { 678 679 struct usb_dmac *dmac = dev_get_drvdata(dev); ··· 691 690 692 691 return usb_dmac_init(dmac); 693 692 } 693 + #endif /* CONFIG_PM */ 694 694 695 695 static const struct dev_pm_ops usb_dmac_pm = { 696 696 SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend, usb_dmac_runtime_resume,
+3 -3
drivers/firmware/efi/runtime-map.c
··· 120 120 entry = kzalloc(sizeof(*entry), GFP_KERNEL); 121 121 if (!entry) { 122 122 kset_unregister(map_kset); 123 - return entry; 123 + map_kset = NULL; 124 + return ERR_PTR(-ENOMEM); 124 125 } 125 126 126 127 memcpy(&entry->md, efi_runtime_map + nr * efi_memdesc_size, ··· 133 132 if (ret) { 134 133 kobject_put(&entry->kobj); 135 134 kset_unregister(map_kset); 135 + map_kset = NULL; 136 136 return ERR_PTR(ret); 137 137 } 138 138 ··· 197 195 entry = *(map_entries + j); 198 196 kobject_put(&entry->kobj); 199 197 } 200 - if (map_kset) 201 - kset_unregister(map_kset); 202 198 out: 203 199 return ret; 204 200 }
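The runtime-map fix clears `map_kset` right after unregistering it on each error path, so the shared cleanup code (which checks the pointer) cannot unregister the same kset twice. The release-and-NULL idiom in isolation, with a counting stub in place of the real `kset_unregister()`:

```c
#include <assert.h>
#include <stddef.h>

static int unregister_calls;

static void fake_unregister(int *kset)
{
	(void)kset;
	unregister_calls++;
}

/* Unregister and return NULL so the caller's stored pointer is
 * cleared in the same statement; a second cleanup pass that checks
 * the pointer then does nothing. */
static int *release_kset(int *kset)
{
	if (kset)
		fake_unregister(kset);
	return NULL;
}
```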
+9 -39
drivers/gpio/gpio-omap.c
··· 1054 1054 dev_err(bank->dev, "Could not get gpio dbck\n"); 1055 1055 } 1056 1056 1057 - static void 1058 - omap_mpuio_alloc_gc(struct gpio_bank *bank, unsigned int irq_start, 1059 - unsigned int num) 1060 - { 1061 - struct irq_chip_generic *gc; 1062 - struct irq_chip_type *ct; 1063 - 1064 - gc = irq_alloc_generic_chip("MPUIO", 1, irq_start, bank->base, 1065 - handle_simple_irq); 1066 - if (!gc) { 1067 - dev_err(bank->dev, "Memory alloc failed for gc\n"); 1068 - return; 1069 - } 1070 - 1071 - ct = gc->chip_types; 1072 - 1073 - /* NOTE: No ack required, reading IRQ status clears it. */ 1074 - ct->chip.irq_mask = irq_gc_mask_set_bit; 1075 - ct->chip.irq_unmask = irq_gc_mask_clr_bit; 1076 - ct->chip.irq_set_type = omap_gpio_irq_type; 1077 - 1078 - if (bank->regs->wkup_en) 1079 - ct->chip.irq_set_wake = omap_gpio_wake_enable; 1080 - 1081 - ct->regs.mask = OMAP_MPUIO_GPIO_INT / bank->stride; 1082 - irq_setup_generic_chip(gc, IRQ_MSK(num), IRQ_GC_INIT_MASK_CACHE, 1083 - IRQ_NOREQUEST | IRQ_NOPROBE, 0); 1084 - } 1085 - 1086 1057 static int omap_gpio_chip_init(struct gpio_bank *bank, struct irq_chip *irqc) 1087 1058 { 1088 - int j; 1089 1059 static int gpio; 1090 1060 int irq_base = 0; 1091 1061 int ret; ··· 1102 1132 } 1103 1133 #endif 1104 1134 1135 + /* MPUIO is a bit different, reading IRQ status clears it */ 1136 + if (bank->is_mpuio) { 1137 + irqc->irq_ack = dummy_irq_chip.irq_ack; 1138 + irqc->irq_mask = irq_gc_mask_set_bit; 1139 + irqc->irq_unmask = irq_gc_mask_clr_bit; 1140 + if (!bank->regs->wkup_en) 1141 + irqc->irq_set_wake = NULL; 1142 + } 1143 + 1105 1144 ret = gpiochip_irqchip_add(&bank->chip, irqc, 1106 1145 irq_base, omap_gpio_irq_handler, 1107 1146 IRQ_TYPE_NONE); ··· 1123 1144 1124 1145 gpiochip_set_chained_irqchip(&bank->chip, irqc, 1125 1146 bank->irq, omap_gpio_irq_handler); 1126 - 1127 - for (j = 0; j < bank->width; j++) { 1128 - int irq = irq_find_mapping(bank->chip.irqdomain, j); 1129 - if (bank->is_mpuio) { 1130 - omap_mpuio_alloc_gc(bank, irq, 
bank->width); 1131 - irq_set_chip_and_handler(irq, NULL, NULL); 1132 - set_irq_flags(irq, 0); 1133 - } 1134 - } 1135 1147 1136 1148 return 0; 1137 1149 }
+1 -1
drivers/gpio/gpiolib-acpi.c
··· 550 550 551 551 length = min(agpio->pin_table_length, (u16)(pin_index + bits)); 552 552 for (i = pin_index; i < length; ++i) { 553 - unsigned pin = agpio->pin_table[i]; 553 + int pin = agpio->pin_table[i]; 554 554 struct acpi_gpio_connection *conn; 555 555 struct gpio_desc *desc; 556 556 bool found;
+19
drivers/gpio/gpiolib-sysfs.c
··· 551 551 */ 552 552 int gpiod_export(struct gpio_desc *desc, bool direction_may_change) 553 553 { 554 + struct gpio_chip *chip; 554 555 unsigned long flags; 555 556 int status; 556 557 const char *ioname = NULL; ··· 569 568 return -EINVAL; 570 569 } 571 570 571 + chip = desc->chip; 572 + 572 573 mutex_lock(&sysfs_lock); 574 + 575 + /* check if chip is being removed */ 576 + if (!chip || !chip->exported) { 577 + status = -ENODEV; 578 + goto fail_unlock; 579 + } 573 580 574 581 spin_lock_irqsave(&gpio_lock, flags); 575 582 if (!test_bit(FLAG_REQUESTED, &desc->flags) || ··· 792 783 { 793 784 int status; 794 785 struct device *dev; 786 + struct gpio_desc *desc; 787 + unsigned int i; 795 788 796 789 mutex_lock(&sysfs_lock); 797 790 dev = class_find_device(&gpio_class, NULL, chip, match_export); 798 791 if (dev) { 799 792 put_device(dev); 800 793 device_unregister(dev); 794 + /* prevent further gpiod exports */ 801 795 chip->exported = false; 802 796 status = 0; 803 797 } else ··· 809 797 810 798 if (status) 811 799 chip_dbg(chip, "%s: status %d\n", __func__, status); 800 + 801 + /* unregister gpiod class devices owned by sysfs */ 802 + for (i = 0; i < chip->ngpio; i++) { 803 + desc = &chip->desc[i]; 804 + if (test_and_clear_bit(FLAG_SYSFS, &desc->flags)) 805 + gpiod_free(desc); 806 + } 812 807 } 813 808 814 809 static int __init gpiolib_sysfs_init(void)
+5 -2
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
··· 430 430 431 431 BUG_ON(!dqm || !qpd); 432 432 433 - BUG_ON(!list_empty(&qpd->queues_list)); 433 + pr_debug("In func %s\n", __func__); 434 434 435 - pr_debug("kfd: In func %s\n", __func__); 435 + pr_debug("qpd->queues_list is %s\n", 436 + list_empty(&qpd->queues_list) ? "empty" : "not empty"); 436 437 437 438 retval = 0; 438 439 mutex_lock(&dqm->lock); ··· 882 881 mutex_unlock(&dqm->lock); 883 882 return -ENOMEM; 884 883 } 884 + 885 + init_sdma_vm(dqm, q, qpd); 885 886 886 887 retval = mqd->init_mqd(mqd, &q->mqd, &q->mqd_mem_obj, 887 888 &q->gart_mqd_addr, &q->properties);
+2 -2
drivers/gpu/drm/amd/amdkfd/kfd_topology.c
··· 728 728 sysfs_show_32bit_prop(buffer, "max_engine_clk_fcompute", 729 729 dev->gpu->kfd2kgd->get_max_engine_clock_in_mhz( 730 730 dev->gpu->kgd)); 731 + 731 732 sysfs_show_64bit_prop(buffer, "local_mem_size", 732 - dev->gpu->kfd2kgd->get_vmem_size( 733 - dev->gpu->kgd)); 733 + (unsigned long long int) 0); 734 734 735 735 sysfs_show_32bit_prop(buffer, "fw_version", 736 736 dev->gpu->kfd2kgd->get_fw_version(
+4 -5
drivers/gpu/drm/drm_irq.c
··· 131 131 132 132 /* Reinitialize corresponding vblank timestamp if high-precision query 133 133 * available. Skip this step if query unsupported or failed. Will 134 - * reinitialize delayed at next vblank interrupt in that case. 134 + * reinitialize delayed at next vblank interrupt in that case and 135 + * assign 0 for now, to mark the vblanktimestamp as invalid. 135 136 */ 136 - if (rc) { 137 - tslot = atomic_read(&vblank->count) + diff; 138 - vblanktimestamp(dev, crtc, tslot) = t_vblank; 139 - } 137 + tslot = atomic_read(&vblank->count) + diff; 138 + vblanktimestamp(dev, crtc, tslot) = rc ? t_vblank : (struct timeval) {0, 0}; 140 139 141 140 smp_mb__before_atomic(); 142 141 atomic_add(diff, &vblank->count);
+2
drivers/gpu/drm/i915/i915_reg.h
··· 6074 6074 #define GTFIFOCTL 0x120008 6075 6075 #define GT_FIFO_FREE_ENTRIES_MASK 0x7f 6076 6076 #define GT_FIFO_NUM_RESERVED_ENTRIES 20 6077 + #define GT_FIFO_CTL_BLOCK_ALL_POLICY_STALL (1 << 12) 6078 + #define GT_FIFO_CTL_RC6_POLICY_STALL (1 << 11) 6077 6079 6078 6080 #define HSW_IDICR 0x9008 6079 6081 #define IDIHASHMSK(x) (((x) & 0x3f) << 16)
-3
drivers/gpu/drm/i915/intel_display.c
··· 13635 13635 }; 13636 13636 13637 13637 static struct intel_quirk intel_quirks[] = { 13638 - /* HP Mini needs pipe A force quirk (LP: #322104) */ 13639 - { 0x27ae, 0x103c, 0x361a, quirk_pipea_force }, 13640 - 13641 13638 /* Toshiba Protege R-205, S-209 needs pipe A force quirk */ 13642 13639 { 0x2592, 0x1179, 0x0001, quirk_pipea_force }, 13643 13640
+5 -4
drivers/gpu/drm/i915/intel_dp.c
··· 1348 1348 1349 1349 pipe_config->has_dp_encoder = true; 1350 1350 pipe_config->has_drrs = false; 1351 - pipe_config->has_audio = intel_dp->has_audio; 1351 + pipe_config->has_audio = intel_dp->has_audio && port != PORT_A; 1352 1352 1353 1353 if (is_edp(intel_dp) && intel_connector->panel.fixed_mode) { 1354 1354 intel_fixed_panel_mode(intel_connector->panel.fixed_mode, ··· 2211 2211 int dotclock; 2212 2212 2213 2213 tmp = I915_READ(intel_dp->output_reg); 2214 - if (tmp & DP_AUDIO_OUTPUT_ENABLE) 2215 - pipe_config->has_audio = true; 2214 + 2215 + pipe_config->has_audio = tmp & DP_AUDIO_OUTPUT_ENABLE && port != PORT_A; 2216 2216 2217 2217 if ((port == PORT_A) || !HAS_PCH_CPT(dev)) { 2218 2218 if (tmp & DP_SYNC_HS_HIGH) ··· 3812 3812 if (val == 0) 3813 3813 break; 3814 3814 3815 - intel_dp->sink_rates[i] = val * 200; 3815 + /* Value read is in kHz while drm clock is saved in deca-kHz */ 3816 + intel_dp->sink_rates[i] = (val * 200) / 10; 3816 3817 } 3817 3818 intel_dp->num_sink_rates = i; 3818 3819 }
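The intel_dp hunk's comment notes that DPCD supported-link-rate entries are in units of 200 kHz while drm keeps clocks in 10 kHz ("deca-kHz") units, hence the extra `/ 10`. A tiny helper (the name is ours) showing the conversion:

```c
#include <assert.h>

/* DPCD link-rate value (units of 200 kHz) -> drm clock
 * (units of 10 kHz). */
static int dpcd_rate_to_drm_clock(int val)
{
	return (val * 200) / 10;
}
```

For example, a 1.62 Gbps link rate is stored in DPCD as 8100 (8100 × 200 kHz = 1.62 GHz), and drm expects 162000.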
+24 -2
drivers/gpu/drm/i915/intel_lvds.c
··· 813 813 static const struct dmi_system_id intel_dual_link_lvds[] = { 814 814 { 815 815 .callback = intel_dual_link_lvds_callback, 816 - .ident = "Apple MacBook Pro (Core i5/i7 Series)", 816 + .ident = "Apple MacBook Pro 15\" (2010)", 817 + .matches = { 818 + DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."), 819 + DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro6,2"), 820 + }, 821 + }, 822 + { 823 + .callback = intel_dual_link_lvds_callback, 824 + .ident = "Apple MacBook Pro 15\" (2011)", 817 825 .matches = { 818 826 DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."), 819 827 DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro8,2"), 828 + }, 829 + }, 830 + { 831 + .callback = intel_dual_link_lvds_callback, 832 + .ident = "Apple MacBook Pro 15\" (2012)", 833 + .matches = { 834 + DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."), 835 + DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro9,1"), 820 836 }, 821 837 }, 822 838 { } /* terminating entry */ ··· 863 847 /* use the module option value if specified */ 864 848 if (i915.lvds_channel_mode > 0) 865 849 return i915.lvds_channel_mode == 2; 850 + 851 + /* single channel LVDS is limited to 112 MHz */ 852 + if (lvds_encoder->attached_connector->base.panel.fixed_mode->clock 853 + > 112999) 854 + return true; 866 855 867 856 if (dmi_check_system(intel_dual_link_lvds)) 868 857 return true; ··· 1132 1111 out: 1133 1112 mutex_unlock(&dev->mode_config.mutex); 1134 1113 1114 + intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode); 1115 + 1135 1116 lvds_encoder->is_dual_link = compute_is_dual_link_lvds(lvds_encoder); 1136 1117 DRM_DEBUG_KMS("detected %s-link lvds configuration\n", 1137 1118 lvds_encoder->is_dual_link ? "dual" : "single"); ··· 1148 1125 } 1149 1126 drm_connector_register(connector); 1150 1127 1151 - intel_panel_init(&intel_connector->panel, fixed_mode, downclock_mode); 1152 1128 intel_panel_setup_backlight(connector, INVALID_PIPE); 1153 1129 1154 1130 return;
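The dual-link check added above rests on one fact: a single LVDS channel tops out at 112 MHz, and mode clocks are kept in kHz (as in `struct drm_display_mode`), so any fixed mode above 112999 kHz must use dual-link regardless of the DMI quirk table. The predicate in isolation (the name is ours):

```c
#include <assert.h>
#include <stdbool.h>

/* clock_khz is the panel fixed mode's pixel clock in kHz;
 * single-channel LVDS is limited to 112 MHz. */
static bool lvds_needs_dual_link(int clock_khz)
{
	return clock_khz > 112999;
}
```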
+8
drivers/gpu/drm/i915/intel_uncore.c
··· 360 360 __raw_i915_write32(dev_priv, GTFIFODBG, 361 361 __raw_i915_read32(dev_priv, GTFIFODBG)); 362 362 363 + /* WaDisableShadowRegForCpd:chv */ 364 + if (IS_CHERRYVIEW(dev)) { 365 + __raw_i915_write32(dev_priv, GTFIFOCTL, 366 + __raw_i915_read32(dev_priv, GTFIFOCTL) | 367 + GT_FIFO_CTL_BLOCK_ALL_POLICY_STALL | 368 + GT_FIFO_CTL_RC6_POLICY_STALL); 369 + } 370 + 363 371 intel_uncore_forcewake_reset(dev, restore_forcewake); 364 372 } 365 373
+3
drivers/gpu/drm/radeon/atombios_crtc.c
··· 580 580 else 581 581 radeon_crtc->pll_flags |= RADEON_PLL_PREFER_LOW_REF_DIV; 582 582 583 + /* if there is no audio, set MINM_OVER_MAXP */ 584 + if (!drm_detect_monitor_audio(radeon_connector_edid(connector))) 585 + radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP; 583 586 if (rdev->family < CHIP_RV770) 584 587 radeon_crtc->pll_flags |= RADEON_PLL_PREFER_MINM_OVER_MAXP; 585 588 /* use frac fb div on APUs */
+2 -4
drivers/gpu/drm/radeon/atombios_encoders.c
··· 1761 1761 struct drm_device *dev = encoder->dev; 1762 1762 struct radeon_device *rdev = dev->dev_private; 1763 1763 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 1764 - struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 1765 1764 int encoder_mode = atombios_get_encoder_mode(encoder); 1766 1765 1767 1766 DRM_DEBUG_KMS("encoder dpms %d to mode %d, devices %08x, active_devices %08x\n", 1768 1767 radeon_encoder->encoder_id, mode, radeon_encoder->devices, 1769 1768 radeon_encoder->active_device); 1770 1769 1771 - if (connector && (radeon_audio != 0) && 1770 + if ((radeon_audio != 0) && 1772 1771 ((encoder_mode == ATOM_ENCODER_MODE_HDMI) || 1773 - (ENCODER_MODE_IS_DP(encoder_mode) && 1774 - drm_detect_monitor_audio(radeon_connector_edid(connector))))) 1772 + ENCODER_MODE_IS_DP(encoder_mode))) 1775 1773 radeon_audio_dpms(encoder, mode); 1776 1774 1777 1775 switch (radeon_encoder->encoder_id) {
-25
drivers/gpu/drm/radeon/dce6_afmt.c
··· 295 295 WREG32(DCCG_AUDIO_DTO1_MODULE, clock); 296 296 } 297 297 } 298 - 299 - void dce6_dp_enable(struct drm_encoder *encoder, bool enable) 300 - { 301 - struct drm_device *dev = encoder->dev; 302 - struct radeon_device *rdev = dev->dev_private; 303 - struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 304 - struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 305 - 306 - if (!dig || !dig->afmt) 307 - return; 308 - 309 - if (enable) { 310 - WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset, 311 - EVERGREEN_DP_SEC_TIMESTAMP_MODE(1)); 312 - WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 313 - EVERGREEN_DP_SEC_ASP_ENABLE | /* Audio packet transmission */ 314 - EVERGREEN_DP_SEC_ATP_ENABLE | /* Audio timestamp packet transmission */ 315 - EVERGREEN_DP_SEC_AIP_ENABLE | /* Audio infoframe packet transmission */ 316 - EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */ 317 - } else { 318 - WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0); 319 - } 320 - 321 - dig->afmt->enabled = enable; 322 - }
+34 -19
drivers/gpu/drm/radeon/evergreen_hdmi.c
··· 219 219 WREG32(AFMT_AVI_INFO3 + offset, 220 220 frame[0xC] | (frame[0xD] << 8) | (buffer[1] << 24)); 221 221 222 - WREG32_OR(HDMI_INFOFRAME_CONTROL0 + offset, 223 - HDMI_AVI_INFO_SEND | /* enable AVI info frames */ 224 - HDMI_AVI_INFO_CONT); /* required for audio info values to be updated */ 225 - 226 222 WREG32_P(HDMI_INFOFRAME_CONTROL1 + offset, 227 - HDMI_AVI_INFO_LINE(2), /* anything other than 0 */ 228 - ~HDMI_AVI_INFO_LINE_MASK); 223 + HDMI_AVI_INFO_LINE(2), /* anything other than 0 */ 224 + ~HDMI_AVI_INFO_LINE_MASK); 229 225 } 230 226 231 227 void dce4_hdmi_audio_set_dto(struct radeon_device *rdev,
··· 366 370 WREG32(AFMT_AUDIO_PACKET_CONTROL2 + offset, 367 371 AFMT_AUDIO_CHANNEL_ENABLE(0xff)); 368 372 373 + WREG32(HDMI_AUDIO_PACKET_CONTROL + offset, 374 + HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */ 375 + HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */ 376 + 369 377 /* allow 60958 channel status and send audio packets fields to be updated */ 370 - WREG32(AFMT_AUDIO_PACKET_CONTROL + offset, 371 - AFMT_AUDIO_SAMPLE_SEND | AFMT_RESET_FIFO_WHEN_AUDIO_DIS | AFMT_60958_CS_UPDATE); 378 + WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + offset, 379 + AFMT_RESET_FIFO_WHEN_AUDIO_DIS | AFMT_60958_CS_UPDATE); 372 380 } 373 381 374 382
··· 398 398 return; 399 399 400 400 if (enable) { 401 - WREG32(HDMI_INFOFRAME_CONTROL1 + dig->afmt->offset, 402 - HDMI_AUDIO_INFO_LINE(2)); /* anything other than 0 */ 401 + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 403 402 404 - WREG32(HDMI_AUDIO_PACKET_CONTROL + dig->afmt->offset, 405 - HDMI_AUDIO_DELAY_EN(1) | /* set the default audio delay */ 406 - HDMI_AUDIO_PACKETS_PER_LINE(3)); /* should be suffient for all audio modes and small enough for all hblanks */ 407 - 408 - WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 409 - HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */ 410 - HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */ 403 + if (drm_detect_monitor_audio(radeon_connector_edid(connector))) { 404 + WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 405 + HDMI_AVI_INFO_SEND | /* enable AVI info frames */ 406 + HDMI_AVI_INFO_CONT | /* required for audio info values to be updated */ 407 + HDMI_AUDIO_INFO_SEND | /* enable audio info frames (frames won't be set until audio is enabled) */ 408 + HDMI_AUDIO_INFO_CONT); /* required for audio info values to be updated */ 409 + WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset, 410 + AFMT_AUDIO_SAMPLE_SEND); 411 + } else { 412 + WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 413 + HDMI_AVI_INFO_SEND | /* enable AVI info frames */ 414 + HDMI_AVI_INFO_CONT); /* required for audio info values to be updated */ 415 + WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset, 416 + ~AFMT_AUDIO_SAMPLE_SEND); 417 + } 411 418 } else { 419 + WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset, 420 + ~AFMT_AUDIO_SAMPLE_SEND); 412 421 WREG32(HDMI_INFOFRAME_CONTROL0 + dig->afmt->offset, 0); 413 422 }
··· 433 424 struct radeon_device *rdev = dev->dev_private; 434 425 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 435 426 struct radeon_encoder_atom_dig *dig = radeon_encoder->enc_priv; 427 + struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 436 428 437 429 if (!dig || !dig->afmt) 438 430 return; 439 431 440 - if (enable) { 432 + if (enable && drm_detect_monitor_audio(radeon_connector_edid(connector))) { 441 433 struct drm_connector *connector = radeon_get_connector_for_encoder(encoder); 442 434 struct radeon_connector *radeon_connector = to_radeon_connector(connector); 443 435 struct radeon_connector_atom_dig *dig_connector; 444 436 uint32_t val; 445 437 438 + WREG32_OR(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset, 439 + AFMT_AUDIO_SAMPLE_SEND); 440 + 446 441 WREG32(EVERGREEN_DP_SEC_TIMESTAMP + dig->afmt->offset, 447 442 EVERGREEN_DP_SEC_TIMESTAMP_MODE(1)); 448 443 449 - if (radeon_connector->con_priv) { 444 + if (!ASIC_IS_DCE6(rdev) && radeon_connector->con_priv) { 450 445 dig_connector = radeon_connector->con_priv; 451 446 val = RREG32(EVERGREEN_DP_SEC_AUD_N + dig->afmt->offset); 452 447 val &= ~EVERGREEN_DP_SEC_N_BASE_MULTIPLE(0xf);
··· 470 457 EVERGREEN_DP_SEC_STREAM_ENABLE); /* Master enable for secondary stream engine */ 471 458 } else { 472 459 WREG32(EVERGREEN_DP_SEC_CNTL + dig->afmt->offset, 0); 460 + WREG32_AND(AFMT_AUDIO_PACKET_CONTROL + dig->afmt->offset, 461 + ~AFMT_AUDIO_SAMPLE_SEND); 473 462 } 474 463 475 464 dig->afmt->enabled = enable;
+6 -5
drivers/gpu/drm/radeon/r600_hdmi.c
··· 228 228 WREG32(HDMI0_AVI_INFO3 + offset, 229 229 frame[0xC] | (frame[0xD] << 8) | (buffer[1] << 24)); 230 230 231 - WREG32_OR(HDMI0_INFOFRAME_CONTROL0 + offset, 232 - HDMI0_AVI_INFO_SEND | /* enable AVI info frames */ 233 - HDMI0_AVI_INFO_CONT); /* send AVI info frames every frame/field */ 234 - 235 231 WREG32_OR(HDMI0_INFOFRAME_CONTROL1 + offset, 236 - HDMI0_AVI_INFO_LINE(2)); /* anything other than 0 */ 232 + HDMI0_AVI_INFO_LINE(2)); /* anything other than 0 */ 233 + 234 + WREG32_OR(HDMI0_INFOFRAME_CONTROL0 + offset, 235 + HDMI0_AVI_INFO_SEND | /* enable AVI info frames */ 236 + HDMI0_AVI_INFO_CONT); /* send AVI info frames every frame/field */ 237 + 237 238 } 238 239 239 240 /*
-1
drivers/gpu/drm/radeon/radeon.h
··· 1673 1673 struct radeon_bo *vcpu_bo; 1674 1674 void *cpu_addr; 1675 1675 uint64_t gpu_addr; 1676 - void *saved_bo; 1677 1676 atomic_t handles[RADEON_MAX_UVD_HANDLES]; 1678 1677 struct drm_file *filp[RADEON_MAX_UVD_HANDLES]; 1679 1678 unsigned img_size[RADEON_MAX_UVD_HANDLES];
+1 -1
drivers/gpu/drm/radeon/radeon_asic.c
··· 1202 1202 static struct radeon_asic_ring rv770_uvd_ring = { 1203 1203 .ib_execute = &uvd_v1_0_ib_execute, 1204 1204 .emit_fence = &uvd_v2_2_fence_emit, 1205 - .emit_semaphore = &uvd_v1_0_semaphore_emit, 1205 + .emit_semaphore = &uvd_v2_2_semaphore_emit, 1206 1206 .cs_parse = &radeon_uvd_cs_parse, 1207 1207 .ring_test = &uvd_v1_0_ring_test, 1208 1208 .ib_test = &uvd_v1_0_ib_test,
+4
drivers/gpu/drm/radeon/radeon_asic.h
··· 949 949 int uvd_v2_2_resume(struct radeon_device *rdev); 950 950 void uvd_v2_2_fence_emit(struct radeon_device *rdev, 951 951 struct radeon_fence *fence); 952 + bool uvd_v2_2_semaphore_emit(struct radeon_device *rdev, 953 + struct radeon_ring *ring, 954 + struct radeon_semaphore *semaphore, 955 + bool emit_wait); 952 956 953 957 /* uvd v3.1 */ 954 958 bool uvd_v3_1_semaphore_emit(struct radeon_device *rdev,
+20 -14
drivers/gpu/drm/radeon/radeon_audio.c
··· 102 102 void r600_hdmi_enable(struct drm_encoder *encoder, bool enable); 103 103 void evergreen_hdmi_enable(struct drm_encoder *encoder, bool enable); 104 104 void evergreen_dp_enable(struct drm_encoder *encoder, bool enable); 105 - void dce6_dp_enable(struct drm_encoder *encoder, bool enable); 106 105 107 106 static const u32 pin_offsets[7] = 108 107 {
··· 239 240 .set_avi_packet = evergreen_set_avi_packet, 240 241 .set_audio_packet = dce4_set_audio_packet, 241 242 .mode_set = radeon_audio_dp_mode_set, 242 - .dpms = dce6_dp_enable, 243 + .dpms = evergreen_dp_enable, 243 244 }; 244 245 245 246 static void radeon_audio_interface_init(struct radeon_device *rdev)
··· 460 461 if (!connector || !connector->encoder) 461 462 return; 462 463 464 + if (!radeon_encoder_is_digital(connector->encoder)) 465 + return; 466 + 463 467 rdev = connector->encoder->dev->dev_private; 468 + 469 + if (!radeon_audio_chipset_supported(rdev)) 470 + return; 471 + 464 472 radeon_encoder = to_radeon_encoder(connector->encoder); 465 473 dig = radeon_encoder->enc_priv; 466 474 475 + if (!dig->afmt) 476 + return; 477 + 467 478 if (status == connector_status_connected) { 468 - struct radeon_connector *radeon_connector; 469 - int sink_type; 470 - 471 - if (!drm_detect_monitor_audio(radeon_connector_edid(connector))) { 472 - radeon_encoder->audio = NULL; 473 - return; 474 - } 475 - 476 - radeon_connector = to_radeon_connector(connector); 477 - sink_type = radeon_dp_getsinktype(radeon_connector); 479 + struct radeon_connector *radeon_connector = to_radeon_connector(connector); 478 480 479 481 if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort && 480 - sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) 482 + radeon_dp_getsinktype(radeon_connector) == 483 + CONNECTOR_OBJECT_ID_DISPLAYPORT) 481 484 radeon_encoder->audio = rdev->audio.dp_funcs; 482 485 else 483 486 radeon_encoder->audio = rdev->audio.hdmi_funcs; 484 487 485 488 dig->afmt->pin = radeon_audio_get_pin(connector->encoder); 486 - radeon_audio_enable(rdev, dig->afmt->pin, 0xf); 489 + if (drm_detect_monitor_audio(radeon_connector_edid(connector))) { 490 + radeon_audio_enable(rdev, dig->afmt->pin, 0xf); 491 + } else { 492 + radeon_audio_enable(rdev, dig->afmt->pin, 0); 493 + dig->afmt->pin = NULL; 494 + } 487 495 } else { 488 496 radeon_audio_enable(rdev, dig->afmt->pin, 0); 489 497 dig->afmt->pin = NULL;
+6 -2
drivers/gpu/drm/radeon/radeon_connectors.c
··· 1379 1379 /* updated in get modes as well since we need to know if it's analog or digital */ 1380 1380 radeon_connector_update_scratch_regs(connector, ret); 1381 1381 1382 - if (radeon_audio != 0) 1382 + if (radeon_audio != 0) { 1383 + radeon_connector_get_edid(connector); 1383 1384 radeon_audio_detect(connector, ret); 1385 + } 1384 1386 1385 1387 exit: 1386 1388 pm_runtime_mark_last_busy(connector->dev->dev); ··· 1719 1717 1720 1718 radeon_connector_update_scratch_regs(connector, ret); 1721 1719 1722 - if (radeon_audio != 0) 1720 + if (radeon_audio != 0) { 1721 + radeon_connector_get_edid(connector); 1723 1722 radeon_audio_detect(connector, ret); 1723 + } 1724 1724 1725 1725 out: 1726 1726 pm_runtime_mark_last_busy(connector->dev->dev);
+2 -2
drivers/gpu/drm/radeon/radeon_cs.c
··· 88 88 p->dma_reloc_idx = 0; 89 89 /* FIXME: we assume that each relocs use 4 dwords */ 90 90 p->nrelocs = chunk->length_dw / 4; 91 - p->relocs = kcalloc(p->nrelocs, sizeof(struct radeon_bo_list), GFP_KERNEL); 91 + p->relocs = drm_calloc_large(p->nrelocs, sizeof(struct radeon_bo_list)); 92 92 if (p->relocs == NULL) { 93 93 return -ENOMEM; 94 94 } ··· 428 428 } 429 429 } 430 430 kfree(parser->track); 431 - kfree(parser->relocs); 431 + drm_free_large(parser->relocs); 432 432 drm_free_large(parser->vm_bos); 433 433 for (i = 0; i < parser->nchunks; i++) 434 434 drm_free_large(parser->chunks[i].kdata);
+8 -5
drivers/gpu/drm/radeon/radeon_mn.c
··· 135 135 while (it) { 136 136 struct radeon_mn_node *node; 137 137 struct radeon_bo *bo; 138 - int r; 138 + long r; 139 139 140 140 node = container_of(it, struct radeon_mn_node, it); 141 141 it = interval_tree_iter_next(it, start, end); 142 142 143 143 list_for_each_entry(bo, &node->bos, mn_list) { 144 144 145 + if (!bo->tbo.ttm || bo->tbo.ttm->state != tt_bound) 146 + continue; 147 + 145 148 r = radeon_bo_reserve(bo, true); 146 149 if (r) { 147 - DRM_ERROR("(%d) failed to reserve user bo\n", r); 150 + DRM_ERROR("(%ld) failed to reserve user bo\n", r); 148 151 continue; 149 152 } 150 153 151 154 r = reservation_object_wait_timeout_rcu(bo->tbo.resv, 152 155 true, false, MAX_SCHEDULE_TIMEOUT); 153 - if (r) 154 - DRM_ERROR("(%d) failed to wait for user bo\n", r); 156 + if (r <= 0) 157 + DRM_ERROR("(%ld) failed to wait for user bo\n", r); 155 158 156 159 radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU); 157 160 r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false); 158 161 if (r) 159 - DRM_ERROR("(%d) failed to validate user bo\n", r); 162 + DRM_ERROR("(%ld) failed to validate user bo\n", r); 160 163 161 164 radeon_bo_unreserve(bo); 162 165 }
+3 -5
drivers/gpu/drm/radeon/radeon_ttm.c
··· 591 591 { 592 592 struct radeon_device *rdev = radeon_get_rdev(ttm->bdev); 593 593 struct radeon_ttm_tt *gtt = (void *)ttm; 594 - struct scatterlist *sg; 595 - int i; 594 + struct sg_page_iter sg_iter; 596 595 597 596 int write = !(gtt->userflags & RADEON_GEM_USERPTR_READONLY); 598 597 enum dma_data_direction direction = write ? ··· 604 605 /* free the sg table and pages again */ 605 606 dma_unmap_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction); 606 607 607 - for_each_sg(ttm->sg->sgl, sg, ttm->sg->nents, i) { 608 - struct page *page = sg_page(sg); 609 - 608 + for_each_sg_page(ttm->sg->sgl, &sg_iter, ttm->sg->nents, 0) { 609 + struct page *page = sg_page_iter_page(&sg_iter); 610 610 if (!(gtt->userflags & RADEON_GEM_USERPTR_READONLY)) 611 611 set_page_dirty(page); 612 612
+93 -51
drivers/gpu/drm/radeon/radeon_uvd.c
··· 204 204 205 205 int radeon_uvd_suspend(struct radeon_device *rdev) 206 206 { 207 - unsigned size; 208 - void *ptr; 209 - int i; 207 + int i, r; 210 208 211 209 if (rdev->uvd.vcpu_bo == NULL) 212 210 return 0; 213 211 214 - for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) 215 - if (atomic_read(&rdev->uvd.handles[i])) 216 - break; 212 + for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) { 213 + uint32_t handle = atomic_read(&rdev->uvd.handles[i]); 214 + if (handle != 0) { 215 + struct radeon_fence *fence; 217 216 218 - if (i == RADEON_MAX_UVD_HANDLES) 219 - return 0; 217 + radeon_uvd_note_usage(rdev); 220 218 221 - size = radeon_bo_size(rdev->uvd.vcpu_bo); 222 - size -= rdev->uvd_fw->size; 219 + r = radeon_uvd_get_destroy_msg(rdev, 220 + R600_RING_TYPE_UVD_INDEX, handle, &fence); 221 + if (r) { 222 + DRM_ERROR("Error destroying UVD (%d)!\n", r); 223 + continue; 224 + } 223 225 224 - ptr = rdev->uvd.cpu_addr; 225 - ptr += rdev->uvd_fw->size; 226 + radeon_fence_wait(fence, false); 227 + radeon_fence_unref(&fence); 226 228 227 - rdev->uvd.saved_bo = kmalloc(size, GFP_KERNEL); 228 - memcpy(rdev->uvd.saved_bo, ptr, size); 229 + rdev->uvd.filp[i] = NULL; 230 + atomic_set(&rdev->uvd.handles[i], 0); 231 + } 232 + } 229 233 230 234 return 0; 231 235 }
··· 250 246 ptr = rdev->uvd.cpu_addr; 251 247 ptr += rdev->uvd_fw->size; 252 248 253 - if (rdev->uvd.saved_bo != NULL) { 254 - memcpy(ptr, rdev->uvd.saved_bo, size); 255 - kfree(rdev->uvd.saved_bo); 256 - rdev->uvd.saved_bo = NULL; 257 - } else 258 - memset(ptr, 0, size); 249 + memset(ptr, 0, size); 259 250 260 251 return 0; 261 252 }
··· 395 396 return 0; 396 397 } 397 398 399 + static int radeon_uvd_validate_codec(struct radeon_cs_parser *p, 400 + unsigned stream_type) 401 + { 402 + switch (stream_type) { 403 + case 0: /* H264 */ 404 + case 1: /* VC1 */ 405 + /* always supported */ 406 + return 0; 407 + 408 + case 3: /* MPEG2 */ 409 + case 4: /* MPEG4 */ 410 + /* only since UVD 3 */ 411 + if (p->rdev->family >= CHIP_PALM) 412 + return 0; 413 + 414 + /* fall through */ 415 + default: 416 + DRM_ERROR("UVD codec not supported by hardware %d!\n", 417 + stream_type); 418 + return -EINVAL; 419 + } 420 + } 421 + 398 422 static int radeon_uvd_cs_msg(struct radeon_cs_parser *p, struct radeon_bo *bo, 399 423 unsigned offset, unsigned buf_sizes[]) 400 424 {
··· 458 436 return -EINVAL; 459 437 } 460 438 461 - if (msg_type == 1) { 462 - /* it's a decode msg, calc buffer sizes */ 463 - r = radeon_uvd_cs_msg_decode(msg, buf_sizes); 464 - /* calc image size (width * height) */ 465 - img_size = msg[6] * msg[7]; 439 + switch (msg_type) { 440 + case 0: 441 + /* it's a create msg, calc image size (width * height) */ 442 + img_size = msg[7] * msg[8]; 443 + 444 + r = radeon_uvd_validate_codec(p, msg[4]); 466 445 radeon_bo_kunmap(bo); 467 446 if (r) 468 447 return r; 469 448 470 - } else if (msg_type == 2) { 449 + /* try to alloc a new handle */ 450 + for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) { 451 + if (atomic_read(&p->rdev->uvd.handles[i]) == handle) { 452 + DRM_ERROR("Handle 0x%x already in use!\n", handle); 453 + return -EINVAL; 454 + } 455 + 456 + if (!atomic_cmpxchg(&p->rdev->uvd.handles[i], 0, handle)) { 457 + p->rdev->uvd.filp[i] = p->filp; 458 + p->rdev->uvd.img_size[i] = img_size; 459 + return 0; 460 + } 461 + } 462 + 463 + DRM_ERROR("No more free UVD handles!\n"); 464 + return -EINVAL; 465 + 466 + case 1: 467 + /* it's a decode msg, validate codec and calc buffer sizes */ 468 + r = radeon_uvd_validate_codec(p, msg[4]); 469 + if (!r) 470 + r = radeon_uvd_cs_msg_decode(msg, buf_sizes); 471 + radeon_bo_kunmap(bo); 472 + if (r) 473 + return r; 474 + 475 + /* validate the handle */ 476 + for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) { 477 + if (atomic_read(&p->rdev->uvd.handles[i]) == handle) { 478 + if (p->rdev->uvd.filp[i] != p->filp) { 479 + DRM_ERROR("UVD handle collision detected!\n"); 480 + return -EINVAL; 481 + } 482 + return 0; 483 + } 484 + } 485 + 486 + DRM_ERROR("Invalid UVD handle 0x%x!\n", handle); 487 + return -ENOENT; 488 + 489 + case 2: 471 490 /* it's a destroy msg, free the handle */ 472 491 for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) 473 492 atomic_cmpxchg(&p->rdev->uvd.handles[i], handle, 0); 474 493 radeon_bo_kunmap(bo); 475 494 return 0; 476 - } else { 477 - /* it's a create msg, calc image size (width * height) */ 478 - img_size = msg[7] * msg[8]; 479 - radeon_bo_kunmap(bo); 480 495 481 - if (msg_type != 0) { 482 - DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type); 483 - return -EINVAL; 484 - } 496 + default: 485 497 486 - /* it's a create msg, no special handling needed */ 498 + DRM_ERROR("Illegal UVD message type (%d)!\n", msg_type); 499 + return -EINVAL; 487 500 } 488 501 489 - /* create or decode, validate the handle */ 490 - for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) { 491 - if (atomic_read(&p->rdev->uvd.handles[i]) == handle) 492 - return 0; 493 - } 494 - 495 - /* handle not found try to alloc a new one */ 496 - for (i = 0; i < RADEON_MAX_UVD_HANDLES; ++i) { 497 - if (!atomic_cmpxchg(&p->rdev->uvd.handles[i], 0, handle)) { 498 - p->rdev->uvd.filp[i] = p->filp; 499 - p->rdev->uvd.img_size[i] = img_size; 500 - return 0; 501 - } 502 - } 503 - 504 - DRM_ERROR("No more free UVD handles!\n"); 502 + BUG(); 505 503 return -EINVAL; 506 504 } 507 505
+48 -17
drivers/gpu/drm/radeon/radeon_vce.c
··· 493 493 * 494 494 * @p: parser context 495 495 * @handle: handle to validate 496 + * @allocated: allocated a new handle? 496 497 * 497 498 * Validates the handle and return the found session index or -EINVAL 498 499 * we we don't have another free session index. 499 500 */ 500 - int radeon_vce_validate_handle(struct radeon_cs_parser *p, uint32_t handle) 501 + static int radeon_vce_validate_handle(struct radeon_cs_parser *p, 502 + uint32_t handle, bool *allocated) 501 503 { 502 504 unsigned i; 503 505 506 + *allocated = false; 507 + 504 508 /* validate the handle */ 505 509 for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i) { 506 - if (atomic_read(&p->rdev->vce.handles[i]) == handle) 510 + if (atomic_read(&p->rdev->vce.handles[i]) == handle) { 511 + if (p->rdev->vce.filp[i] != p->filp) { 512 + DRM_ERROR("VCE handle collision detected!\n"); 513 + return -EINVAL; 514 + } 507 515 return i; 516 + } 508 517 } 509 518 510 519 /* handle not found try to alloc a new one */
··· 521 512 if (!atomic_cmpxchg(&p->rdev->vce.handles[i], 0, handle)) { 522 513 p->rdev->vce.filp[i] = p->filp; 523 514 p->rdev->vce.img_size[i] = 0; 515 + *allocated = true; 524 516 return i; 525 517 } 526 518 }
··· 539 529 int radeon_vce_cs_parse(struct radeon_cs_parser *p) 540 530 { 541 531 int session_idx = -1; 542 - bool destroyed = false; 532 + bool destroyed = false, created = false, allocated = false; 543 533 uint32_t tmp, handle = 0; 544 534 uint32_t *size = &tmp; 545 - int i, r; 535 + int i, r = 0; 546 536 547 537 while (p->idx < p->chunk_ib->length_dw) { 548 538 uint32_t len = radeon_get_ib_value(p, p->idx);
··· 550 540 551 541 if ((len < 8) || (len & 3)) { 552 542 DRM_ERROR("invalid VCE command length (%d)!\n", len); 553 - return -EINVAL; 543 + r = -EINVAL; 544 + goto out; 554 545 } 555 546 556 547 if (destroyed) { 557 548 DRM_ERROR("No other command allowed after destroy!\n"); 558 - return -EINVAL; 549 + r = -EINVAL; 550 + goto out; 559 551 } 560 552 561 553 switch (cmd) { 562 554 case 0x00000001: // session 563 555 handle = radeon_get_ib_value(p, p->idx + 2); 564 - session_idx = radeon_vce_validate_handle(p, handle); 556 + session_idx = radeon_vce_validate_handle(p, handle, 557 + &allocated); 565 558 if (session_idx < 0) 566 559 return session_idx; 567 560 size = &p->rdev->vce.img_size[session_idx];
··· 574 561 break; 575 562 576 563 case 0x01000001: // create 564 + created = true; 565 + if (!allocated) { 566 + DRM_ERROR("Handle already in use!\n"); 567 + r = -EINVAL; 568 + goto out; 569 + } 570 + 577 571 *size = radeon_get_ib_value(p, p->idx + 8) * 578 572 radeon_get_ib_value(p, p->idx + 10) * 579 573 8 * 3 / 2;
··· 598 578 r = radeon_vce_cs_reloc(p, p->idx + 10, p->idx + 9, 599 579 *size); 600 580 if (r) 601 - return r; 581 + goto out; 602 582 603 583 r = radeon_vce_cs_reloc(p, p->idx + 12, p->idx + 11, 604 584 *size / 3); 605 585 if (r) 606 - return r; 586 + goto out; 607 587 break; 608 588 609 589 case 0x02000001: // destroy
··· 614 594 r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2, 615 595 *size * 2); 616 596 if (r) 617 - return r; 597 + goto out; 618 598 break; 619 599 620 600 case 0x05000004: // video bitstream buffer
··· 622 602 r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2, 623 603 tmp); 624 604 if (r) 625 - return r; 605 + goto out; 626 606 break; 627 607 628 608 case 0x05000005: // feedback buffer 629 609 r = radeon_vce_cs_reloc(p, p->idx + 3, p->idx + 2, 630 610 4096); 631 611 if (r) 632 - return r; 612 + goto out; 633 613 break; 634 614 635 615 default: 636 616 DRM_ERROR("invalid VCE command (0x%x)!\n", cmd); 637 - return -EINVAL; 617 + r = -EINVAL; 618 + goto out; 638 619 } 639 620 640 621 if (session_idx == -1) { 641 622 DRM_ERROR("no session command at start of IB\n"); 642 - return -EINVAL; 623 + r = -EINVAL; 624 + goto out; 643 625 } 644 626 645 627 p->idx += len / 4; 646 628 } 647 629 648 - if (destroyed) { 649 - /* IB contains a destroy msg, free the handle */ 630 + if (allocated && !created) { 631 + DRM_ERROR("New session without create command!\n"); 632 + r = -ENOENT; 633 + } 634 + 635 + out: 636 + if ((!r && destroyed) || (r && allocated)) { 637 + /* 638 + * IB contains a destroy msg or we have allocated an 639 + * handle and got an error, anyway free the handle 640 + */ 650 641 for (i = 0; i < RADEON_MAX_VCE_HANDLES; ++i) 651 642 atomic_cmpxchg(&p->rdev->vce.handles[i], handle, 0); 652 643 } 653 644 654 - return 0; 645 + return r; 655 646 } 656 647 657 648 /**
+21 -15
drivers/gpu/drm/radeon/radeon_vm.c
··· 473 473 } 474 474 475 475 mutex_lock(&vm->mutex); 476 + soffset /= RADEON_GPU_PAGE_SIZE; 477 + eoffset /= RADEON_GPU_PAGE_SIZE; 478 + if (soffset || eoffset) { 479 + struct interval_tree_node *it; 480 + it = interval_tree_iter_first(&vm->va, soffset, eoffset - 1); 481 + if (it && it != &bo_va->it) { 482 + struct radeon_bo_va *tmp; 483 + tmp = container_of(it, struct radeon_bo_va, it); 484 + /* bo and tmp overlap, invalid offset */ 485 + dev_err(rdev->dev, "bo %p va 0x%010Lx conflict with " 486 + "(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo, 487 + soffset, tmp->bo, tmp->it.start, tmp->it.last); 488 + mutex_unlock(&vm->mutex); 489 + return -EINVAL; 490 + } 491 + } 492 + 476 493 if (bo_va->it.start || bo_va->it.last) { 477 494 if (bo_va->addr) { 478 495 /* add a clone of the bo_va to clear the old address */
··· 507 490 spin_lock(&vm->status_lock); 508 491 list_add(&tmp->vm_status, &vm->freed); 509 492 spin_unlock(&vm->status_lock); 493 + 494 + bo_va->addr = 0; 510 495 } 511 496 512 497 interval_tree_remove(&bo_va->it, &vm->va);
··· 516 497 bo_va->it.last = 0; 517 498 } 518 499 519 - soffset /= RADEON_GPU_PAGE_SIZE; 520 - eoffset /= RADEON_GPU_PAGE_SIZE; 521 500 if (soffset || eoffset) { 522 - struct interval_tree_node *it; 523 - it = interval_tree_iter_first(&vm->va, soffset, eoffset - 1); 524 - if (it) { 525 - struct radeon_bo_va *tmp; 526 - tmp = container_of(it, struct radeon_bo_va, it); 527 - /* bo and tmp overlap, invalid offset */ 528 - dev_err(rdev->dev, "bo %p va 0x%010Lx conflict with " 529 - "(bo %p 0x%010lx 0x%010lx)\n", bo_va->bo, 530 - soffset, tmp->bo, tmp->it.start, tmp->it.last); 531 - mutex_unlock(&vm->mutex); 532 - return -EINVAL; 533 - } 534 501 bo_va->it.start = soffset; 535 502 bo_va->it.last = eoffset - 1; 536 503 interval_tree_insert(&bo_va->it, &vm->va);
··· 1112 1107 list_del(&bo_va->bo_list); 1113 1108 1114 1109 mutex_lock(&vm->mutex); 1115 - interval_tree_remove(&bo_va->it, &vm->va); 1110 + if (bo_va->it.start || bo_va->it.last) 1111 + interval_tree_remove(&bo_va->it, &vm->va); 1116 1112 spin_lock(&vm->status_lock); 1117 1113 list_del(&bo_va->vm_status); 1118 1114
+3
drivers/gpu/drm/radeon/rv770d.h
··· 989 989 ((n) & 0x3FFF) << 16) 990 990 991 991 /* UVD */ 992 + #define UVD_SEMA_ADDR_LOW 0xef00 993 + #define UVD_SEMA_ADDR_HIGH 0xef04 994 + #define UVD_SEMA_CMD 0xef08 992 995 #define UVD_GPCOM_VCPU_CMD 0xef0c 993 996 #define UVD_GPCOM_VCPU_DATA0 0xef10 994 997 #define UVD_GPCOM_VCPU_DATA1 0xef14
+1
drivers/gpu/drm/radeon/si_dpm.c
··· 2924 2924 static struct si_dpm_quirk si_dpm_quirk_list[] = { 2925 2925 /* PITCAIRN - https://bugs.freedesktop.org/show_bug.cgi?id=76490 */ 2926 2926 { PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 }, 2927 + { PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 }, 2927 2928 { 0, 0, 0, 0 }, 2928 2929 }; 2929 2930
+2 -12
drivers/gpu/drm/radeon/uvd_v1_0.c
··· 466 466 struct radeon_semaphore *semaphore, 467 467 bool emit_wait) 468 468 { 469 - uint64_t addr = semaphore->gpu_addr; 470 - 471 - radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_LOW, 0)); 472 - radeon_ring_write(ring, (addr >> 3) & 0x000FFFFF); 473 - 474 - radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_HIGH, 0)); 475 - radeon_ring_write(ring, (addr >> 23) & 0x000FFFFF); 476 - 477 - radeon_ring_write(ring, PACKET0(UVD_SEMA_CMD, 0)); 478 - radeon_ring_write(ring, emit_wait ? 1 : 0); 479 - 480 - return true; 469 + /* disable semaphores for UVD V1 hardware */ 470 + return false; 481 471 } 482 472 483 473 /**
+29
drivers/gpu/drm/radeon/uvd_v2_2.c
··· 60 60 } 61 61 62 62 /** 63 + * uvd_v2_2_semaphore_emit - emit semaphore command 64 + * 65 + * @rdev: radeon_device pointer 66 + * @ring: radeon_ring pointer 67 + * @semaphore: semaphore to emit commands for 68 + * @emit_wait: true if we should emit a wait command 69 + * 70 + * Emit a semaphore command (either wait or signal) to the UVD ring. 71 + */ 72 + bool uvd_v2_2_semaphore_emit(struct radeon_device *rdev, 73 + struct radeon_ring *ring, 74 + struct radeon_semaphore *semaphore, 75 + bool emit_wait) 76 + { 77 + uint64_t addr = semaphore->gpu_addr; 78 + 79 + radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_LOW, 0)); 80 + radeon_ring_write(ring, (addr >> 3) & 0x000FFFFF); 81 + 82 + radeon_ring_write(ring, PACKET0(UVD_SEMA_ADDR_HIGH, 0)); 83 + radeon_ring_write(ring, (addr >> 23) & 0x000FFFFF); 84 + 85 + radeon_ring_write(ring, PACKET0(UVD_SEMA_CMD, 0)); 86 + radeon_ring_write(ring, emit_wait ? 1 : 0); 87 + 88 + return true; 89 + } 90 + 91 + /** 63 92 * uvd_v2_2_resume - memory controller programming 64 93 * 65 94 * @rdev: radeon_device pointer
+5 -4
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 1409 1409 struct vop *vop; 1410 1410 struct resource *res; 1411 1411 size_t alloc_size; 1412 - int ret; 1412 + int ret, irq; 1413 1413 1414 1414 of_id = of_match_device(vop_driver_dt_match, dev); 1415 1415 vop_data = of_id->data; ··· 1445 1445 return ret; 1446 1446 } 1447 1447 1448 - vop->irq = platform_get_irq(pdev, 0); 1449 - if (vop->irq < 0) { 1448 + irq = platform_get_irq(pdev, 0); 1449 + if (irq < 0) { 1450 1450 dev_err(dev, "cannot find irq for vop\n"); 1451 - return vop->irq; 1451 + return irq; 1452 1452 } 1453 + vop->irq = (unsigned int)irq; 1453 1454 1454 1455 spin_lock_init(&vop->reg_lock); 1455 1456 spin_lock_init(&vop->irq_lock);
-1
drivers/gpu/drm/tegra/drm.c
··· 173 173 drm->irq_enabled = true; 174 174 175 175 /* syncpoints are used for full 32-bit hardware VBLANK counters */ 176 - drm->vblank_disable_immediate = true; 177 176 drm->max_vblank_count = 0xffffffff; 178 177 179 178 err = drm_vblank_init(drm, drm->mode_config.num_crtc);
+3 -10
drivers/infiniband/core/addr.c
··· 472 472 } sgid_addr, dgid_addr; 473 473 474 474 475 - ret = rdma_gid2ip(&sgid_addr._sockaddr, sgid); 476 - if (ret) 477 - return ret; 478 - 479 - ret = rdma_gid2ip(&dgid_addr._sockaddr, dgid); 480 - if (ret) 481 - return ret; 475 + rdma_gid2ip(&sgid_addr._sockaddr, sgid); 476 + rdma_gid2ip(&dgid_addr._sockaddr, dgid); 482 477 483 478 memset(&dev_addr, 0, sizeof(dev_addr)); 484 479 ··· 507 512 struct sockaddr_in6 _sockaddr_in6; 508 513 } gid_addr; 509 514 510 - ret = rdma_gid2ip(&gid_addr._sockaddr, sgid); 515 + rdma_gid2ip(&gid_addr._sockaddr, sgid); 511 516 512 - if (ret) 513 - return ret; 514 517 memset(&dev_addr, 0, sizeof(dev_addr)); 515 518 ret = rdma_translate_ip(&gid_addr._sockaddr, &dev_addr, vlan_id); 516 519 if (ret)
+11 -12
drivers/infiniband/core/cm.c
··· 437 437 return cm_id_priv; 438 438 } 439 439 440 - static void cm_mask_copy(u8 *dst, u8 *src, u8 *mask) 440 + static void cm_mask_copy(u32 *dst, const u32 *src, const u32 *mask) 441 441 { 442 442 int i; 443 443 444 - for (i = 0; i < IB_CM_COMPARE_SIZE / sizeof(unsigned long); i++) 445 - ((unsigned long *) dst)[i] = ((unsigned long *) src)[i] & 446 - ((unsigned long *) mask)[i]; 444 + for (i = 0; i < IB_CM_COMPARE_SIZE; i++) 445 + dst[i] = src[i] & mask[i]; 447 446 } 448 447 449 448 static int cm_compare_data(struct ib_cm_compare_data *src_data, 450 449 struct ib_cm_compare_data *dst_data) 451 450 { 452 - u8 src[IB_CM_COMPARE_SIZE]; 453 - u8 dst[IB_CM_COMPARE_SIZE]; 451 + u32 src[IB_CM_COMPARE_SIZE]; 452 + u32 dst[IB_CM_COMPARE_SIZE]; 454 453 455 454 if (!src_data || !dst_data) 456 455 return 0; 457 456 458 457 cm_mask_copy(src, src_data->data, dst_data->mask); 459 458 cm_mask_copy(dst, dst_data->data, src_data->mask); 460 - return memcmp(src, dst, IB_CM_COMPARE_SIZE); 459 + return memcmp(src, dst, sizeof(src)); 461 460 } 462 461 463 - static int cm_compare_private_data(u8 *private_data, 462 + static int cm_compare_private_data(u32 *private_data, 464 463 struct ib_cm_compare_data *dst_data) 465 464 { 466 - u8 src[IB_CM_COMPARE_SIZE]; 465 + u32 src[IB_CM_COMPARE_SIZE]; 467 466 468 467 if (!dst_data) 469 468 return 0; 470 469 471 470 cm_mask_copy(src, private_data, dst_data->mask); 472 - return memcmp(src, dst_data->data, IB_CM_COMPARE_SIZE); 471 + return memcmp(src, dst_data->data, sizeof(src)); 473 472 } 474 473 475 474 /*
··· 537 538 538 539 static struct cm_id_private * cm_find_listen(struct ib_device *device, 539 540 __be64 service_id, 540 - u8 *private_data) 541 + u32 *private_data) 541 542 { 542 543 struct rb_node *node = cm.listen_service_table.rb_node; 543 544 struct cm_id_private *cm_id_priv;
··· 952 953 cm_mask_copy(cm_id_priv->compare_data->data, 953 954 compare_data->data, compare_data->mask); 954 955 memcpy(cm_id_priv->compare_data->mask, compare_data->mask, 955 - IB_CM_COMPARE_SIZE); 956 + sizeof(compare_data->mask)); 956 957 } 957 958 958 959 cm_id->state = IB_CM_LISTEN;
+2 -2
drivers/infiniband/core/cm_msgs.h
··· 103 103 /* local ACK timeout:5, rsvd:3 */ 104 104 u8 alt_offset139; 105 105 106 - u8 private_data[IB_CM_REQ_PRIVATE_DATA_SIZE]; 106 + u32 private_data[IB_CM_REQ_PRIVATE_DATA_SIZE / sizeof(u32)]; 107 107 108 108 } __attribute__ ((packed)); 109 109 ··· 801 801 __be16 rsvd; 802 802 __be64 service_id; 803 803 804 - u8 private_data[IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE]; 804 + u32 private_data[IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE / sizeof(u32)]; 805 805 } __attribute__ ((packed)); 806 806 807 807 struct cm_sidr_rep_msg {
+17 -10
drivers/infiniband/core/cma.c
··· 859 859 memcpy(&ib->sib_addr, &path->dgid, 16); 860 860 } 861 861 862 + static __be16 ss_get_port(const struct sockaddr_storage *ss) 863 + { 864 + if (ss->ss_family == AF_INET) 865 + return ((struct sockaddr_in *)ss)->sin_port; 866 + else if (ss->ss_family == AF_INET6) 867 + return ((struct sockaddr_in6 *)ss)->sin6_port; 868 + BUG(); 869 + } 870 + 862 871 static void cma_save_ip4_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_id, 863 872 struct cma_hdr *hdr) 864 873 { 865 - struct sockaddr_in *listen4, *ip4; 874 + struct sockaddr_in *ip4; 866 875 867 - listen4 = (struct sockaddr_in *) &listen_id->route.addr.src_addr; 868 876 ip4 = (struct sockaddr_in *) &id->route.addr.src_addr; 869 - ip4->sin_family = listen4->sin_family; 877 + ip4->sin_family = AF_INET; 870 878 ip4->sin_addr.s_addr = hdr->dst_addr.ip4.addr; 871 - ip4->sin_port = listen4->sin_port; 879 + ip4->sin_port = ss_get_port(&listen_id->route.addr.src_addr); 872 880 873 881 ip4 = (struct sockaddr_in *) &id->route.addr.dst_addr; 874 882 ip4->sin_family = AF_INET; 875 883 ip4->sin_addr.s_addr = hdr->src_addr.ip4.addr; 876 884 ip4->sin_port = hdr->port; 877 885 }
··· 887 879 static void cma_save_ip6_info(struct rdma_cm_id *id, struct rdma_cm_id *listen_id, 888 880 struct cma_hdr *hdr) 889 881 { 890 - struct sockaddr_in6 *listen6, *ip6; 882 + struct sockaddr_in6 *ip6; 891 883 892 - listen6 = (struct sockaddr_in6 *) &listen_id->route.addr.src_addr; 893 884 ip6 = (struct sockaddr_in6 *) &id->route.addr.src_addr; 894 - ip6->sin6_family = listen6->sin6_family; 885 + ip6->sin6_family = AF_INET6; 895 886 ip6->sin6_addr = hdr->dst_addr.ip6; 896 - ip6->sin6_port = listen6->sin6_port; 887 + ip6->sin6_port = ss_get_port(&listen_id->route.addr.src_addr); 897 888 898 889 ip6 = (struct sockaddr_in6 *) &id->route.addr.dst_addr; 899 890 ip6->sin6_family = AF_INET6; 900 891 ip6->sin6_addr = hdr->src_addr.ip6; 901 892 ip6->sin6_port = hdr->port; 902 893 }
+72 -1
drivers/infiniband/core/iwpm_msg.c
··· 468 468 } 469 469 EXPORT_SYMBOL(iwpm_add_mapping_cb); 470 470 471 - /* netlink attribute policy for the response to add and query mapping request */ 471 + /* netlink attribute policy for the response to add and query mapping request 472 + * and response with remote address info */ 472 473 static const struct nla_policy resp_query_policy[IWPM_NLA_RQUERY_MAPPING_MAX] = { 473 474 [IWPM_NLA_QUERY_MAPPING_SEQ] = { .type = NLA_U32 }, 474 475 [IWPM_NLA_QUERY_LOCAL_ADDR] = { .len = sizeof(struct sockaddr_storage) },
··· 559 558 return 0; 560 559 } 561 560 EXPORT_SYMBOL(iwpm_add_and_query_mapping_cb); 561 + 562 + /* 563 + * iwpm_remote_info_cb - Process a port mapper message, containing 564 + * the remote connecting peer address info 565 + */ 566 + int iwpm_remote_info_cb(struct sk_buff *skb, struct netlink_callback *cb) 567 + { 568 + struct nlattr *nltb[IWPM_NLA_RQUERY_MAPPING_MAX]; 569 + struct sockaddr_storage *local_sockaddr, *remote_sockaddr; 570 + struct sockaddr_storage *mapped_loc_sockaddr, *mapped_rem_sockaddr; 571 + struct iwpm_remote_info *rem_info; 572 + const char *msg_type; 573 + u8 nl_client; 574 + int ret = -EINVAL; 575 + 576 + msg_type = "Remote Mapping info"; 577 + if (iwpm_parse_nlmsg(cb, IWPM_NLA_RQUERY_MAPPING_MAX, 578 + resp_query_policy, nltb, msg_type)) 579 + return ret; 580 + 581 + nl_client = RDMA_NL_GET_CLIENT(cb->nlh->nlmsg_type); 582 + if (!iwpm_valid_client(nl_client)) { 583 + pr_info("%s: Invalid port mapper client = %d\n", 584 + __func__, nl_client); 585 + return ret; 586 + } 587 + atomic_set(&echo_nlmsg_seq, cb->nlh->nlmsg_seq); 588 + 589 + local_sockaddr = (struct sockaddr_storage *) 590 + nla_data(nltb[IWPM_NLA_QUERY_LOCAL_ADDR]); 591 + remote_sockaddr = (struct sockaddr_storage *) 592 + nla_data(nltb[IWPM_NLA_QUERY_REMOTE_ADDR]); 593 + mapped_loc_sockaddr = (struct sockaddr_storage *) 594 + nla_data(nltb[IWPM_NLA_RQUERY_MAPPED_LOC_ADDR]); 595 + mapped_rem_sockaddr = (struct sockaddr_storage *) 596 + nla_data(nltb[IWPM_NLA_RQUERY_MAPPED_REM_ADDR]); 597 + 598 + if (mapped_loc_sockaddr->ss_family != local_sockaddr->ss_family || 599 + mapped_rem_sockaddr->ss_family != remote_sockaddr->ss_family) { 600 + pr_info("%s: Sockaddr family doesn't match the requested one\n", 601 + __func__); 602 + return ret; 603 + } 604 + rem_info = kzalloc(sizeof(struct iwpm_remote_info), GFP_ATOMIC); 605 + if (!rem_info) { 606 + pr_err("%s: Unable to allocate a remote info\n", __func__); 607 + ret = -ENOMEM; 608 + return ret; 609 + } 610 + memcpy(&rem_info->mapped_loc_sockaddr, mapped_loc_sockaddr, 611 + sizeof(struct sockaddr_storage)); 612 + memcpy(&rem_info->remote_sockaddr, remote_sockaddr, 613 + sizeof(struct sockaddr_storage)); 614 + memcpy(&rem_info->mapped_rem_sockaddr, mapped_rem_sockaddr, 615 + sizeof(struct sockaddr_storage)); 616 + rem_info->nl_client = nl_client; 617 + 618 + iwpm_add_remote_info(rem_info); 619 + 620 + iwpm_print_sockaddr(local_sockaddr, 621 + "remote_info: Local sockaddr:"); 622 + iwpm_print_sockaddr(mapped_loc_sockaddr, 623 + "remote_info: Mapped local sockaddr:"); 624 + iwpm_print_sockaddr(remote_sockaddr, 625 + "remote_info: Remote sockaddr:"); 626 + iwpm_print_sockaddr(mapped_rem_sockaddr, 627 + "remote_info: Mapped remote sockaddr:"); 628 + return ret; 629 + } 630 + EXPORT_SYMBOL(iwpm_remote_info_cb); 562 631 563 632 /* netlink attribute policy for the received request for mapping info */ 564 633 static const struct nla_policy resp_mapinfo_policy[IWPM_NLA_MAPINFO_REQ_MAX] = {
+175 -33
drivers/infiniband/core/iwpm_util.c
··· 33 33 
34 34 #include "iwpm_util.h"
35 35 
36 - #define IWPM_HASH_BUCKET_SIZE 512
37 - #define IWPM_HASH_BUCKET_MASK (IWPM_HASH_BUCKET_SIZE - 1)
36 + #define IWPM_MAPINFO_HASH_SIZE 512
37 + #define IWPM_MAPINFO_HASH_MASK (IWPM_MAPINFO_HASH_SIZE - 1)
38 + #define IWPM_REMINFO_HASH_SIZE 64
39 + #define IWPM_REMINFO_HASH_MASK (IWPM_REMINFO_HASH_SIZE - 1)
38 40 
39 41 static LIST_HEAD(iwpm_nlmsg_req_list);
40 42 static DEFINE_SPINLOCK(iwpm_nlmsg_req_lock);
··· 44 42 static struct hlist_head *iwpm_hash_bucket;
45 43 static DEFINE_SPINLOCK(iwpm_mapinfo_lock);
46 44 
45 + static struct hlist_head *iwpm_reminfo_bucket;
46 + static DEFINE_SPINLOCK(iwpm_reminfo_lock);
47 + 
47 48 static DEFINE_MUTEX(iwpm_admin_lock);
48 49 static struct iwpm_admin_data iwpm_admin;
49 50 
50 51 int iwpm_init(u8 nl_client)
51 52 {
53 + int ret = 0;
52 54 if (iwpm_valid_client(nl_client))
53 55 return -EINVAL;
54 56 mutex_lock(&iwpm_admin_lock);
55 57 if (atomic_read(&iwpm_admin.refcount) == 0) {
56 - iwpm_hash_bucket = kzalloc(IWPM_HASH_BUCKET_SIZE *
58 + iwpm_hash_bucket = kzalloc(IWPM_MAPINFO_HASH_SIZE *
57 59 sizeof(struct hlist_head), GFP_KERNEL);
58 60 if (!iwpm_hash_bucket) {
59 - mutex_unlock(&iwpm_admin_lock);
61 + ret = -ENOMEM;
60 62 pr_err("%s Unable to create mapinfo hash table\n", __func__);
61 - return -ENOMEM;
63 + goto init_exit;
64 + }
65 + iwpm_reminfo_bucket = kzalloc(IWPM_REMINFO_HASH_SIZE *
66 + sizeof(struct hlist_head), GFP_KERNEL);
67 + if (!iwpm_reminfo_bucket) {
68 + kfree(iwpm_hash_bucket);
69 + ret = -ENOMEM;
70 + pr_err("%s Unable to create reminfo hash table\n", __func__);
71 + goto init_exit;
62 72 }
63 73 }
64 74 atomic_inc(&iwpm_admin.refcount);
75 + init_exit:
65 76 mutex_unlock(&iwpm_admin_lock);
66 - iwpm_set_valid(nl_client, 1);
67 - return 0;
77 + if (!ret) {
78 + iwpm_set_valid(nl_client, 1);
79 + pr_debug("%s: Mapinfo and reminfo tables are created\n",
80 + __func__);
81 + }
82 + return ret;
68 83 }
69 84 EXPORT_SYMBOL(iwpm_init);
70 85 
71 86 static void free_hash_bucket(void);
87 + static void free_reminfo_bucket(void);
72 88 
73 89 int iwpm_exit(u8 nl_client)
74 90 {
··· 101 81 }
102 82 if (atomic_dec_and_test(&iwpm_admin.refcount)) {
103 83 free_hash_bucket();
104 - pr_debug("%s: Mapinfo hash table is destroyed\n", __func__);
84 + free_reminfo_bucket();
85 + pr_debug("%s: Resources are destroyed\n", __func__);
105 86 }
106 87 mutex_unlock(&iwpm_admin_lock);
107 88 iwpm_set_valid(nl_client, 0);
··· 110 89 }
111 90 EXPORT_SYMBOL(iwpm_exit);
112 91 
113 - static struct hlist_head *get_hash_bucket_head(struct sockaddr_storage *,
92 + static struct hlist_head *get_mapinfo_hash_bucket(struct sockaddr_storage *,
114 93 struct sockaddr_storage *);
115 94 
116 95 int iwpm_create_mapinfo(struct sockaddr_storage *local_sockaddr,
··· 120 99 struct hlist_head *hash_bucket_head;
121 100 struct iwpm_mapping_info *map_info;
122 101 unsigned long flags;
102 + int ret = -EINVAL;
123 103 
124 104 if (!iwpm_valid_client(nl_client))
125 - return -EINVAL;
105 + return ret;
126 106 map_info = kzalloc(sizeof(struct iwpm_mapping_info), GFP_KERNEL);
127 107 if (!map_info) {
128 108 pr_err("%s: Unable to allocate a mapping info\n", __func__);
··· 137 115 
138 116 spin_lock_irqsave(&iwpm_mapinfo_lock, flags);
139 117 if (iwpm_hash_bucket) {
140 - hash_bucket_head = get_hash_bucket_head(
118 + hash_bucket_head = get_mapinfo_hash_bucket(
141 119 &map_info->local_sockaddr,
142 120 &map_info->mapped_sockaddr);
143 - hlist_add_head(&map_info->hlist_node, hash_bucket_head);
121 + if (hash_bucket_head) {
122 + hlist_add_head(&map_info->hlist_node, hash_bucket_head);
123 + ret = 0;
124 + }
144 125 }
145 126 spin_unlock_irqrestore(&iwpm_mapinfo_lock, flags);
146 - return 0;
127 + return ret;
147 128 }
148 129 EXPORT_SYMBOL(iwpm_create_mapinfo);
149 130 
··· 161 136 
162 137 spin_lock_irqsave(&iwpm_mapinfo_lock, flags);
163 138 if (iwpm_hash_bucket) {
164 - hash_bucket_head = get_hash_bucket_head(
139 + hash_bucket_head = get_mapinfo_hash_bucket(
165 140 local_sockaddr,
166 141 mapped_local_addr);
142 + if (!hash_bucket_head)
143 + goto remove_mapinfo_exit;
144 + 
167 145 hlist_for_each_entry_safe(map_info, tmp_hlist_node,
168 146 hash_bucket_head, hlist_node) {
169 147 
··· 180 152 }
181 153 }
182 154 }
155 + remove_mapinfo_exit:
183 156 spin_unlock_irqrestore(&iwpm_mapinfo_lock, flags);
184 157 return ret;
185 158 }
··· 195 166 
196 167 /* remove all the mapinfo data from the list */
197 168 spin_lock_irqsave(&iwpm_mapinfo_lock, flags);
198 - for (i = 0; i < IWPM_HASH_BUCKET_SIZE; i++) {
169 + for (i = 0; i < IWPM_MAPINFO_HASH_SIZE; i++) {
199 170 hlist_for_each_entry_safe(map_info, tmp_hlist_node,
200 171 &iwpm_hash_bucket[i], hlist_node) {
201 172 
··· 208 179 iwpm_hash_bucket = NULL;
209 180 spin_unlock_irqrestore(&iwpm_mapinfo_lock, flags);
210 181 }
182 + 
183 + static void free_reminfo_bucket(void)
184 + {
185 + struct hlist_node *tmp_hlist_node;
186 + struct iwpm_remote_info *rem_info;
187 + unsigned long flags;
188 + int i;
189 + 
190 + /* remove all the remote info from the list */
191 + spin_lock_irqsave(&iwpm_reminfo_lock, flags);
192 + for (i = 0; i < IWPM_REMINFO_HASH_SIZE; i++) {
193 + hlist_for_each_entry_safe(rem_info, tmp_hlist_node,
194 + &iwpm_reminfo_bucket[i], hlist_node) {
195 + 
196 + hlist_del_init(&rem_info->hlist_node);
197 + kfree(rem_info);
198 + }
199 + }
200 + /* free the hash list */
201 + kfree(iwpm_reminfo_bucket);
202 + iwpm_reminfo_bucket = NULL;
203 + spin_unlock_irqrestore(&iwpm_reminfo_lock, flags);
204 + }
205 + 
206 + static struct hlist_head *get_reminfo_hash_bucket(struct sockaddr_storage *,
207 + struct sockaddr_storage *);
208 + 
209 + void iwpm_add_remote_info(struct iwpm_remote_info *rem_info)
210 + {
211 + struct hlist_head *hash_bucket_head;
212 + unsigned long flags;
213 + 
214 + spin_lock_irqsave(&iwpm_reminfo_lock, flags);
215 + if (iwpm_reminfo_bucket) {
216 + hash_bucket_head = get_reminfo_hash_bucket(
217 + &rem_info->mapped_loc_sockaddr,
218 + &rem_info->mapped_rem_sockaddr);
219 + if (hash_bucket_head)
220 + hlist_add_head(&rem_info->hlist_node, hash_bucket_head);
221 + }
222 + spin_unlock_irqrestore(&iwpm_reminfo_lock, flags);
223 + }
224 + 
225 + int iwpm_get_remote_info(struct sockaddr_storage *mapped_loc_addr,
226 + struct sockaddr_storage *mapped_rem_addr,
227 + struct sockaddr_storage *remote_addr,
228 + u8 nl_client)
229 + {
230 + struct hlist_node *tmp_hlist_node;
231 + struct hlist_head *hash_bucket_head;
232 + struct iwpm_remote_info *rem_info = NULL;
233 + unsigned long flags;
234 + int ret = -EINVAL;
235 + 
236 + if (!iwpm_valid_client(nl_client)) {
237 + pr_info("%s: Invalid client = %d\n", __func__, nl_client);
238 + return ret;
239 + }
240 + spin_lock_irqsave(&iwpm_reminfo_lock, flags);
241 + if (iwpm_reminfo_bucket) {
242 + hash_bucket_head = get_reminfo_hash_bucket(
243 + mapped_loc_addr,
244 + mapped_rem_addr);
245 + if (!hash_bucket_head)
246 + goto get_remote_info_exit;
247 + hlist_for_each_entry_safe(rem_info, tmp_hlist_node,
248 + hash_bucket_head, hlist_node) {
249 + 
250 + if (!iwpm_compare_sockaddr(&rem_info->mapped_loc_sockaddr,
251 + mapped_loc_addr) &&
252 + !iwpm_compare_sockaddr(&rem_info->mapped_rem_sockaddr,
253 + mapped_rem_addr)) {
254 + 
255 + memcpy(remote_addr, &rem_info->remote_sockaddr,
256 + sizeof(struct sockaddr_storage));
257 + iwpm_print_sockaddr(remote_addr,
258 + "get_remote_info: Remote sockaddr:");
259 + 
260 + hlist_del_init(&rem_info->hlist_node);
261 + kfree(rem_info);
262 + ret = 0;
263 + break;
264 + }
265 + }
266 + }
267 + get_remote_info_exit:
268 + spin_unlock_irqrestore(&iwpm_reminfo_lock, flags);
269 + return ret;
270 + }
271 + EXPORT_SYMBOL(iwpm_get_remote_info);
211 272 
212 273 struct iwpm_nlmsg_request *iwpm_get_nlmsg_request(__u32 nlmsg_seq,
213 274 u8 nl_client, gfp_t gfp)
··· 528 409 return hash;
529 410 }
530 411 
531 - static struct hlist_head *get_hash_bucket_head(struct sockaddr_storage
532 - *local_sockaddr,
533 - struct sockaddr_storage
534 - *mapped_sockaddr)
412 + static int get_hash_bucket(struct sockaddr_storage *a_sockaddr,
413 + struct sockaddr_storage *b_sockaddr, u32 *hash)
535 414 {
536 - u32 local_hash, mapped_hash, hash;
415 + u32 a_hash, b_hash;
537 416 
538 - if (local_sockaddr->ss_family == AF_INET) {
539 - local_hash = iwpm_ipv4_jhash((struct sockaddr_in *) local_sockaddr);
540 - mapped_hash = iwpm_ipv4_jhash((struct sockaddr_in *) mapped_sockaddr);
417 + if (a_sockaddr->ss_family == AF_INET) {
418 + a_hash = iwpm_ipv4_jhash((struct sockaddr_in *) a_sockaddr);
419 + b_hash = iwpm_ipv4_jhash((struct sockaddr_in *) b_sockaddr);
541 420 
542 - } else if (local_sockaddr->ss_family == AF_INET6) {
543 - local_hash = iwpm_ipv6_jhash((struct sockaddr_in6 *) local_sockaddr);
544 - mapped_hash = iwpm_ipv6_jhash((struct sockaddr_in6 *) mapped_sockaddr);
421 + } else if (a_sockaddr->ss_family == AF_INET6) {
422 + a_hash = iwpm_ipv6_jhash((struct sockaddr_in6 *) a_sockaddr);
423 + b_hash = iwpm_ipv6_jhash((struct sockaddr_in6 *) b_sockaddr);
545 424 } else {
546 425 pr_err("%s: Invalid sockaddr family\n", __func__);
547 - return NULL;
426 + return -EINVAL;
548 427 }
549 428 
550 - if (local_hash == mapped_hash) /* if port mapper isn't available */
551 - hash = local_hash;
429 + if (a_hash == b_hash) /* if port mapper isn't available */
430 + *hash = a_hash;
552 431 else
553 - hash = jhash_2words(local_hash, mapped_hash, 0);
432 + *hash = jhash_2words(a_hash, b_hash, 0);
433 + return 0;
434 + }
554 435 
555 - return &iwpm_hash_bucket[hash & IWPM_HASH_BUCKET_MASK];
436 + static struct hlist_head *get_mapinfo_hash_bucket(struct sockaddr_storage
437 + *local_sockaddr, struct sockaddr_storage
438 + *mapped_sockaddr)
439 + {
440 + u32 hash;
441 + int ret;
442 + 
443 + ret = get_hash_bucket(local_sockaddr, mapped_sockaddr, &hash);
444 + if (ret)
445 + return NULL;
446 + return &iwpm_hash_bucket[hash & IWPM_MAPINFO_HASH_MASK];
447 + }
448 + 
449 + static struct hlist_head *get_reminfo_hash_bucket(struct sockaddr_storage
450 + *mapped_loc_sockaddr, struct sockaddr_storage
451 + *mapped_rem_sockaddr)
452 + {
453 + u32 hash;
454 + int ret;
455 + 
456 + ret = get_hash_bucket(mapped_loc_sockaddr, mapped_rem_sockaddr, &hash);
457 + if (ret)
458 + return NULL;
459 + return &iwpm_reminfo_bucket[hash & IWPM_REMINFO_HASH_MASK];
556 460 }
557 461 
558 462 static int send_mapinfo_num(u32 mapping_num, u8 nl_client, int iwpm_pid)
··· 654 512 }
655 513 skb_num++;
656 514 spin_lock_irqsave(&iwpm_mapinfo_lock, flags);
657 - for (i = 0; i < IWPM_HASH_BUCKET_SIZE; i++) {
515 + for (i = 0; i < IWPM_MAPINFO_HASH_SIZE; i++) {
658 516 hlist_for_each_entry(map_info, &iwpm_hash_bucket[i],
659 517 hlist_node) {
660 518 if (map_info->nl_client != nl_client)
··· 737 595 
738 596 spin_lock_irqsave(&iwpm_mapinfo_lock, flags);
739 597 if (iwpm_hash_bucket) {
740 - for (i = 0; i < IWPM_HASH_BUCKET_SIZE; i++) {
598 + for (i = 0; i < IWPM_MAPINFO_HASH_SIZE; i++) {
741 599 if (!hlist_empty(&iwpm_hash_bucket[i])) {
742 600 full_bucket = 1;
743 601 break;
+15
drivers/infiniband/core/iwpm_util.h
··· 76 76 u8 nl_client;
77 77 };
78 78 
79 + struct iwpm_remote_info {
80 + struct hlist_node hlist_node;
81 + struct sockaddr_storage remote_sockaddr;
82 + struct sockaddr_storage mapped_loc_sockaddr;
83 + struct sockaddr_storage mapped_rem_sockaddr;
84 + u8 nl_client;
85 + };
86 + 
79 87 struct iwpm_admin_data {
80 88 atomic_t refcount;
81 89 atomic_t nlmsg_seq;
··· 134 126 * Returns the sequence number for the netlink message.
135 127 */
136 128 int iwpm_get_nlmsg_seq(void);
129 + 
130 + /**
131 + * iwpm_add_reminfo - Add remote address info of the connecting peer
132 + * to the remote info hash table
133 + * @reminfo: The remote info to be added
134 + */
135 + void iwpm_add_remote_info(struct iwpm_remote_info *reminfo);
137 136 
138 137 /**
139 138 * iwpm_valid_client - Check if the port mapper client is valid
+7 -7
drivers/infiniband/core/umem_odp.c
··· 446 446 int remove_existing_mapping = 0;
447 447 int ret = 0;
448 448 
449 - mutex_lock(&umem->odp_data->umem_mutex);
450 449 /*
451 450 * Note: we avoid writing if seq is different from the initial seq, to
452 451 * handle case of a racing notifier. This check also allows us to bail
··· 478 479 }
479 480 
480 481 out:
481 - mutex_unlock(&umem->odp_data->umem_mutex);
482 - 
483 482 /* On Demand Paging - avoid pinning the page */
484 483 if (umem->context->invalidate_range || !stored_page)
485 484 put_page(page);
··· 583 586 
584 587 bcnt -= min_t(size_t, npages << PAGE_SHIFT, bcnt);
585 588 user_virt += npages << PAGE_SHIFT;
589 + mutex_lock(&umem->odp_data->umem_mutex);
586 590 for (j = 0; j < npages; ++j) {
587 591 ret = ib_umem_odp_map_dma_single_page(
588 592 umem, k, base_virt_addr, local_page_list[j],
··· 592 594 break;
593 595 k++;
594 596 }
597 + mutex_unlock(&umem->odp_data->umem_mutex);
595 598 
596 599 if (ret < 0) {
597 600 /* Release left over pages when handling errors. */
··· 632 633 * faults from completion. We might be racing with other
633 634 * invalidations, so we must make sure we free each page only
634 635 * once. */
636 + mutex_lock(&umem->odp_data->umem_mutex);
635 637 for (addr = virt; addr < bound; addr += (u64)umem->page_size) {
636 638 idx = (addr - ib_umem_start(umem)) / PAGE_SIZE;
637 - mutex_lock(&umem->odp_data->umem_mutex);
638 639 if (umem->odp_data->page_list[idx]) {
639 640 struct page *page = umem->odp_data->page_list[idx];
640 - struct page *head_page = compound_head(page);
641 641 dma_addr_t dma = umem->odp_data->dma_list[idx];
642 642 dma_addr_t dma_addr = dma & ODP_DMA_ADDR_MASK;
643 643 
··· 644 646 
645 647 ib_dma_unmap_page(dev, dma_addr, PAGE_SIZE,
646 648 DMA_BIDIRECTIONAL);
647 - if (dma & ODP_WRITE_ALLOWED_BIT)
649 + if (dma & ODP_WRITE_ALLOWED_BIT) {
650 + struct page *head_page = compound_head(page);
648 651 /*
649 652 * set_page_dirty prefers being called with
650 653 * the page lock. However, MMU notifiers are
··· 656 657 * be removed.
657 658 */
658 659 set_page_dirty(head_page);
660 + }
659 661 /* on demand pinning support */
660 662 if (!umem->context->invalidate_range)
661 663 put_page(page);
662 664 umem->odp_data->page_list[idx] = NULL;
663 665 umem->odp_data->dma_list[idx] = 0;
664 666 }
665 - mutex_unlock(&umem->odp_data->umem_mutex);
666 667 }
668 + mutex_unlock(&umem->odp_data->umem_mutex);
667 669 }
668 670 EXPORT_SYMBOL(ib_umem_odp_unmap_dma_pages);
+71 -16
drivers/infiniband/hw/cxgb4/cm.c
··· 583 583 sizeof(ep->com.mapped_remote_addr));
584 584 }
585 585 
586 + static int get_remote_addr(struct c4iw_ep *ep)
587 + {
588 + int ret;
589 + 
590 + print_addr(&ep->com, __func__, "get_remote_addr");
591 + 
592 + ret = iwpm_get_remote_info(&ep->com.mapped_local_addr,
593 + &ep->com.mapped_remote_addr,
594 + &ep->com.remote_addr, RDMA_NL_C4IW);
595 + if (ret)
596 + pr_info(MOD "Unable to find remote peer addr info - err %d\n",
597 + ret);
598 + 
599 + return ret;
600 + }
601 + 
586 602 static void best_mtu(const unsigned short *mtus, unsigned short mtu,
587 603 unsigned int *idx, int use_ts, int ipv6)
588 604 {
··· 691 675 if (is_t5(ep->com.dev->rdev.lldi.adapter_type)) {
692 676 opt2 |= T5_OPT_2_VALID_F;
693 677 opt2 |= CONG_CNTRL_V(CONG_ALG_TAHOE);
694 - opt2 |= CONG_CNTRL_VALID; /* OPT_2_ISS for T5 */
678 + opt2 |= T5_ISS_F;
695 679 }
696 680 t4_set_arp_err_handler(skb, ep, act_open_req_arp_failure);
697 681 
··· 2058 2042 status, status2errno(status));
2059 2043 
2060 2044 if (is_neg_adv(status)) {
2061 - dev_warn(&dev->rdev.lldi.pdev->dev,
2062 - "Connection problems for atid %u status %u (%s)\n",
2063 - atid, status, neg_adv_str(status));
2045 + PDBG("%s Connection problems for atid %u status %u (%s)\n",
2046 + __func__, atid, status, neg_adv_str(status));
2047 + ep->stats.connect_neg_adv++;
2048 + mutex_lock(&dev->rdev.stats.lock);
2049 + dev->rdev.stats.neg_adv++;
2050 + mutex_unlock(&dev->rdev.stats.lock);
2064 2051 return 0;
2065 2052 }
2066 2053 
··· 2233 2214 u32 isn = (prandom_u32() & ~7UL) - 1;
2234 2215 opt2 |= T5_OPT_2_VALID_F;
2235 2216 opt2 |= CONG_CNTRL_V(CONG_ALG_TAHOE);
2236 - opt2 |= CONG_CNTRL_VALID; /* OPT_2_ISS for T5 */
2217 + opt2 |= T5_ISS_F;
2237 2218 rpl5 = (void *)rpl;
2238 2219 memset(&rpl5->iss, 0, roundup(sizeof(*rpl5)-sizeof(*rpl), 16));
2239 2220 if (peer2peer)
··· 2371 2352 state_set(&child_ep->com, CONNECTING);
2372 2353 child_ep->com.dev = dev;
2373 2354 child_ep->com.cm_id = NULL;
2355 + 
2356 + /*
2357 + * The mapped_local and mapped_remote addresses get setup with
2358 + * the actual 4-tuple. The local address will be based on the
2359 + * actual local address of the connection, but on the port number
2360 + * of the parent listening endpoint. The remote address is
2361 + * setup based on a query to the IWPM since we don't know what it
2362 + * originally was before mapping. If no mapping was done, then
2363 + * mapped_remote == remote, and mapped_local == local.
2364 + */
2374 2365 if (iptype == 4) {
2375 2366 struct sockaddr_in *sin = (struct sockaddr_in *)
2376 - &child_ep->com.local_addr;
2367 + &child_ep->com.mapped_local_addr;
2368 + 
2377 2369 sin->sin_family = PF_INET;
2378 2370 sin->sin_port = local_port;
2379 2371 sin->sin_addr.s_addr = *(__be32 *)local_ip;
2380 - sin = (struct sockaddr_in *)&child_ep->com.remote_addr;
2372 + 
2373 + sin = (struct sockaddr_in *)&child_ep->com.local_addr;
2374 + sin->sin_family = PF_INET;
2375 + sin->sin_port = ((struct sockaddr_in *)
2376 + &parent_ep->com.local_addr)->sin_port;
2377 + sin->sin_addr.s_addr = *(__be32 *)local_ip;
2378 + 
2379 + sin = (struct sockaddr_in *)&child_ep->com.mapped_remote_addr;
2381 2380 sin->sin_family = PF_INET;
2382 2381 sin->sin_port = peer_port;
2383 2382 sin->sin_addr.s_addr = *(__be32 *)peer_ip;
2384 2383 } else {
2385 2384 struct sockaddr_in6 *sin6 = (struct sockaddr_in6 *)
2386 - &child_ep->com.local_addr;
2385 + &child_ep->com.mapped_local_addr;
2386 + 
2387 2387 sin6->sin6_family = PF_INET6;
2388 2388 sin6->sin6_port = local_port;
2389 2389 memcpy(sin6->sin6_addr.s6_addr, local_ip, 16);
2390 - sin6 = (struct sockaddr_in6 *)&child_ep->com.remote_addr;
2390 + 
2391 + sin6 = (struct sockaddr_in6 *)&child_ep->com.local_addr;
2392 + sin6->sin6_family = PF_INET6;
2393 + sin6->sin6_port = ((struct sockaddr_in6 *)
2394 + &parent_ep->com.local_addr)->sin6_port;
2395 + memcpy(sin6->sin6_addr.s6_addr, local_ip, 16);
2396 + 
2397 + sin6 = (struct sockaddr_in6 *)&child_ep->com.mapped_remote_addr;
2391 2398 sin6->sin6_family = PF_INET6;
2392 2399 sin6->sin6_port = peer_port;
2393 2400 memcpy(sin6->sin6_addr.s6_addr, peer_ip, 16);
2394 2401 }
2402 + memcpy(&child_ep->com.remote_addr, &child_ep->com.mapped_remote_addr,
2403 + sizeof(child_ep->com.remote_addr));
2404 + get_remote_addr(child_ep);
2405 + 
2395 2406 c4iw_get_ep(&parent_ep->com);
2396 2407 child_ep->parent_ep = parent_ep;
2397 2408 child_ep->tos = PASS_OPEN_TOS_G(ntohl(req->tos_stid));
··· 2569 2520 
2570 2521 ep = lookup_tid(t, tid);
2571 2522 if (is_neg_adv(req->status)) {
2572 - dev_warn(&dev->rdev.lldi.pdev->dev,
2573 - "Negative advice on abort - tid %u status %d (%s)\n",
2574 - ep->hwtid, req->status, neg_adv_str(req->status));
2523 + PDBG("%s Negative advice on abort- tid %u status %d (%s)\n",
2524 + __func__, ep->hwtid, req->status,
2525 + neg_adv_str(req->status));
2526 + ep->stats.abort_neg_adv++;
2527 + mutex_lock(&dev->rdev.stats.lock);
2528 + dev->rdev.stats.neg_adv++;
2529 + mutex_unlock(&dev->rdev.stats.lock);
2575 2530 return 0;
2576 2531 }
2577 2532 PDBG("%s ep %p tid %u state %u\n", __func__, ep, ep->hwtid,
··· 3624 3571 * TP will ignore any value > 0 for MSS index.
3625 3572 */
3626 3573 req->tcb.opt0 = cpu_to_be64(MSS_IDX_V(0xF));
3627 - req->cookie = (unsigned long)skb;
3574 + req->cookie = (uintptr_t)skb;
3628 3575 
3629 3576 set_wr_txq(req_skb, CPL_PRIORITY_CONTROL, port_id);
3630 3577 ret = cxgb4_ofld_send(dev->rdev.lldi.ports[0], req_skb);
··· 3984 3931 return 0;
3985 3932 }
3986 3933 if (is_neg_adv(req->status)) {
3987 - dev_warn(&dev->rdev.lldi.pdev->dev,
3988 - "Negative advice on abort - tid %u status %d (%s)\n",
3989 - ep->hwtid, req->status, neg_adv_str(req->status));
3934 + PDBG("%s Negative advice on abort- tid %u status %d (%s)\n",
3935 + __func__, ep->hwtid, req->status,
3936 + neg_adv_str(req->status));
3937 + ep->stats.abort_neg_adv++;
3938 + dev->rdev.stats.neg_adv++;
3990 3939 kfree_skb(skb);
3991 3940 return 0;
3992 3941 }
+14 -8
drivers/infiniband/hw/cxgb4/cq.c
··· 55 55 FW_RI_RES_WR_NRES_V(1) |
56 56 FW_WR_COMPL_F);
57 57 res_wr->len16_pkd = cpu_to_be32(DIV_ROUND_UP(wr_len, 16));
58 - res_wr->cookie = (unsigned long) &wr_wait;
58 + res_wr->cookie = (uintptr_t)&wr_wait;
59 59 res = res_wr->res;
60 60 res->u.cq.restype = FW_RI_RES_TYPE_CQ;
61 61 res->u.cq.op = FW_RI_RES_OP_RESET;
··· 125 125 FW_RI_RES_WR_NRES_V(1) |
126 126 FW_WR_COMPL_F);
127 127 res_wr->len16_pkd = cpu_to_be32(DIV_ROUND_UP(wr_len, 16));
128 - res_wr->cookie = (unsigned long) &wr_wait;
128 + res_wr->cookie = (uintptr_t)&wr_wait;
129 129 res = res_wr->res;
130 130 res->u.cq.restype = FW_RI_RES_TYPE_CQ;
131 131 res->u.cq.op = FW_RI_RES_OP_WRITE;
··· 156 156 goto err4;
157 157 
158 158 cq->gen = 1;
159 - cq->gts = rdev->lldi.gts_reg;
160 159 cq->rdev = rdev;
161 160 if (user) {
162 - cq->ugts = (u64)pci_resource_start(rdev->lldi.pdev, 2) +
163 - (cq->cqid << rdev->cqshift);
164 - cq->ugts &= PAGE_MASK;
161 + u32 off = (cq->cqid << rdev->cqshift) & PAGE_MASK;
162 + 
163 + cq->ugts = (u64)rdev->bar2_pa + off;
164 + } else if (is_t4(rdev->lldi.adapter_type)) {
165 + cq->gts = rdev->lldi.gts_reg;
166 + cq->qid_mask = -1U;
167 + } else {
168 + u32 off = ((cq->cqid << rdev->cqshift) & PAGE_MASK) + 12;
169 + 
170 + cq->gts = rdev->bar2_kva + off;
171 + cq->qid_mask = rdev->qpmask;
165 172 }
166 173 return 0;
167 174 err4:
··· 977 970 }
978 971 PDBG("%s cqid 0x%0x chp %p size %u memsize %zu, dma_addr 0x%0llx\n",
979 972 __func__, chp->cq.cqid, chp, chp->cq.size,
980 - chp->cq.memsize,
981 - (unsigned long long) chp->cq.dma_addr);
973 + chp->cq.memsize, (unsigned long long) chp->cq.dma_addr);
982 974 return &chp->ibcq;
983 975 err5:
984 976 kfree(mm2);
+34 -3
drivers/infiniband/hw/cxgb4/device.c
··· 93 93 [RDMA_NL_IWPM_ADD_MAPPING] = {.dump = iwpm_add_mapping_cb},
94 94 [RDMA_NL_IWPM_QUERY_MAPPING] = {.dump = iwpm_add_and_query_mapping_cb},
95 95 [RDMA_NL_IWPM_HANDLE_ERR] = {.dump = iwpm_mapping_error_cb},
96 + [RDMA_NL_IWPM_REMOTE_INFO] = {.dump = iwpm_remote_info_cb},
96 97 [RDMA_NL_IWPM_MAPINFO] = {.dump = iwpm_mapping_info_cb},
97 98 [RDMA_NL_IWPM_MAPINFO_NUM] = {.dump = iwpm_ack_mapping_info_cb}
98 99 };
··· 152 151 int prev_ts_set = 0;
153 152 int idx, end;
154 153 
155 - #define ts2ns(ts) div64_ul((ts) * dev->rdev.lldi.cclk_ps, 1000)
154 + #define ts2ns(ts) div64_u64((ts) * dev->rdev.lldi.cclk_ps, 1000)
156 155 
157 156 idx = atomic_read(&dev->rdev.wr_log_idx) &
158 157 (dev->rdev.wr_log_size - 1);
··· 490 489 dev->rdev.stats.act_ofld_conn_fails);
491 490 seq_printf(seq, "PAS_OFLD_CONN_FAILS: %10llu\n",
492 491 dev->rdev.stats.pas_ofld_conn_fails);
492 + seq_printf(seq, "NEG_ADV_RCVD: %10llu\n", dev->rdev.stats.neg_adv);
493 493 seq_printf(seq, "AVAILABLE IRD: %10u\n", dev->avail_ird);
494 494 return 0;
495 495 }
··· 562 560 cc = snprintf(epd->buf + epd->pos, space,
563 561 "ep %p cm_id %p qp %p state %d flags 0x%lx "
564 562 "history 0x%lx hwtid %d atid %d "
563 + "conn_na %u abort_na %u "
565 564 "%pI4:%d/%d <-> %pI4:%d/%d\n",
566 565 ep, ep->com.cm_id, ep->com.qp,
567 566 (int)ep->com.state, ep->com.flags,
568 567 ep->com.history, ep->hwtid, ep->atid,
568 + ep->stats.connect_neg_adv,
569 + ep->stats.abort_neg_adv,
569 570 &lsin->sin_addr, ntohs(lsin->sin_port),
570 571 ntohs(mapped_lsin->sin_port),
571 572 &rsin->sin_addr, ntohs(rsin->sin_port),
··· 586 581 cc = snprintf(epd->buf + epd->pos, space,
587 582 "ep %p cm_id %p qp %p state %d flags 0x%lx "
588 583 "history 0x%lx hwtid %d atid %d "
584 + "conn_na %u abort_na %u "
589 585 "%pI6:%d/%d <-> %pI6:%d/%d\n",
590 586 ep, ep->com.cm_id, ep->com.qp,
591 587 (int)ep->com.state, ep->com.flags,
592 588 ep->com.history, ep->hwtid, ep->atid,
589 + ep->stats.connect_neg_adv,
590 + ep->stats.abort_neg_adv,
593 591 &lsin6->sin6_addr, ntohs(lsin6->sin6_port),
594 592 ntohs(mapped_lsin6->sin6_port),
595 593 &rsin6->sin6_addr, ntohs(rsin6->sin6_port),
··· 773 765 c4iw_init_dev_ucontext(rdev, &rdev->uctx);
774 766 
775 767 /*
768 + * This implementation assumes udb_density == ucq_density! Eventually
769 + * we might need to support this but for now fail the open. Also the
770 + * cqid and qpid range must match for now.
771 + */
772 + if (rdev->lldi.udb_density != rdev->lldi.ucq_density) {
773 + pr_err(MOD "%s: unsupported udb/ucq densities %u/%u\n",
774 + pci_name(rdev->lldi.pdev), rdev->lldi.udb_density,
775 + rdev->lldi.ucq_density);
776 + err = -EINVAL;
777 + goto err1;
778 + }
779 + if (rdev->lldi.vr->qp.start != rdev->lldi.vr->cq.start ||
780 + rdev->lldi.vr->qp.size != rdev->lldi.vr->cq.size) {
781 + pr_err(MOD "%s: unsupported qp and cq id ranges "
782 + "qp start %u size %u cq start %u size %u\n",
783 + pci_name(rdev->lldi.pdev), rdev->lldi.vr->qp.start,
784 + rdev->lldi.vr->qp.size, rdev->lldi.vr->cq.size,
785 + rdev->lldi.vr->cq.size);
786 + err = -EINVAL;
787 + goto err1;
788 + }
789 + 
790 + /*
776 791 * qpshift is the number of bits to shift the qpid left in order
777 792 * to get the correct address of the doorbell for that qp.
778 793 */
··· 815 784 rdev->lldi.vr->qp.size,
816 785 rdev->lldi.vr->cq.start,
817 786 rdev->lldi.vr->cq.size);
818 - PDBG("udb len 0x%x udb base %llx db_reg %p gts_reg %p qpshift %lu "
787 + PDBG("udb len 0x%x udb base %p db_reg %p gts_reg %p qpshift %lu "
819 788 "qpmask 0x%x cqshift %lu cqmask 0x%x\n",
820 789 (unsigned)pci_resource_len(rdev->lldi.pdev, 2),
821 790 (void *)pci_resource_start(rdev->lldi.pdev, 2),
822 791 rdev->lldi.db_reg,
823 792 rdev->lldi.gts_reg,
824 793 rdev->qpshift, rdev->qpmask,
+7
drivers/infiniband/hw/cxgb4/iw_cxgb4.h
··· 137 137 u64 tcam_full;
138 138 u64 act_ofld_conn_fails;
139 139 u64 pas_ofld_conn_fails;
140 + u64 neg_adv;
140 141 };
141 142 
142 143 struct c4iw_hw_queue {
··· 815 814 int backlog;
816 815 };
817 816 
817 + struct c4iw_ep_stats {
818 + unsigned connect_neg_adv;
819 + unsigned abort_neg_adv;
820 + };
821 + 
818 822 struct c4iw_ep {
819 823 struct c4iw_ep_common com;
820 824 struct c4iw_ep *parent_ep;
··· 852 846 unsigned int retry_count;
853 847 int snd_win;
854 848 int rcv_win;
849 + struct c4iw_ep_stats stats;
855 850 };
856 851 
857 852 static inline void print_addr(struct c4iw_ep_common *epc, const char *func,
+3 -3
drivers/infiniband/hw/cxgb4/mem.c
··· 144 144 if (i == (num_wqe-1)) {
145 145 req->wr.wr_hi = cpu_to_be32(FW_WR_OP_V(FW_ULPTX_WR) |
146 146 FW_WR_COMPL_F);
147 - req->wr.wr_lo = (__force __be64)(unsigned long) &wr_wait;
147 + req->wr.wr_lo = (__force __be64)&wr_wait;
148 148 } else
149 149 req->wr.wr_hi = cpu_to_be32(FW_WR_OP_V(FW_ULPTX_WR));
150 150 req->wr.wr_mid = cpu_to_be32(
··· 676 676 mhp->attr.zbva = 0;
677 677 mhp->attr.va_fbo = 0;
678 678 mhp->attr.page_size = 0;
679 - mhp->attr.len = ~0UL;
679 + mhp->attr.len = ~0ULL;
680 680 mhp->attr.pbl_size = 0;
681 681 
682 682 ret = write_tpt_entry(&rhp->rdev, 0, &stag, 1, php->pdid,
683 683 FW_RI_STAG_NSMR, mhp->attr.perms,
684 - mhp->attr.mw_bind_enable, 0, 0, ~0UL, 0, 0, 0);
684 + mhp->attr.mw_bind_enable, 0, 0, ~0ULL, 0, 0, 0);
685 685 if (ret)
686 686 goto err1;
687 687 
+5 -5
drivers/infiniband/hw/cxgb4/qp.c
··· 275 275 FW_RI_RES_WR_NRES_V(2) |
276 276 FW_WR_COMPL_F);
277 277 res_wr->len16_pkd = cpu_to_be32(DIV_ROUND_UP(wr_len, 16));
278 - res_wr->cookie = (unsigned long) &wr_wait;
278 + res_wr->cookie = (uintptr_t)&wr_wait;
279 279 res = res_wr->res;
280 280 res->u.sqrq.restype = FW_RI_RES_TYPE_SQ;
281 281 res->u.sqrq.op = FW_RI_RES_OP_WRITE;
··· 1209 1209 wqe->flowid_len16 = cpu_to_be32(
1210 1210 FW_WR_FLOWID_V(ep->hwtid) |
1211 1211 FW_WR_LEN16_V(DIV_ROUND_UP(sizeof(*wqe), 16)));
1212 - wqe->cookie = (unsigned long) &ep->com.wr_wait;
1212 + wqe->cookie = (uintptr_t)&ep->com.wr_wait;
1213 1213 
1214 1214 wqe->u.fini.type = FW_RI_TYPE_FINI;
1215 1215 ret = c4iw_ofld_send(&rhp->rdev, skb);
··· 1279 1279 FW_WR_FLOWID_V(qhp->ep->hwtid) |
1280 1280 FW_WR_LEN16_V(DIV_ROUND_UP(sizeof(*wqe), 16)));
1281 1281 
1282 - wqe->cookie = (unsigned long) &qhp->ep->com.wr_wait;
1282 + wqe->cookie = (uintptr_t)&qhp->ep->com.wr_wait;
1283 1283 
1284 1284 wqe->u.init.type = FW_RI_TYPE_INIT;
1285 1285 wqe->u.init.mpareqbit_p2ptype =
··· 1766 1766 mm2->len = PAGE_ALIGN(qhp->wq.rq.memsize);
1767 1767 insert_mmap(ucontext, mm2);
1768 1768 mm3->key = uresp.sq_db_gts_key;
1769 - mm3->addr = (__force unsigned long) qhp->wq.sq.udb;
1769 + mm3->addr = (__force unsigned long)qhp->wq.sq.udb;
1770 1770 mm3->len = PAGE_SIZE;
1771 1771 insert_mmap(ucontext, mm3);
1772 1772 mm4->key = uresp.rq_db_gts_key;
1773 - mm4->addr = (__force unsigned long) qhp->wq.rq.udb;
1773 + mm4->addr = (__force unsigned long)qhp->wq.rq.udb;
1774 1774 mm4->len = PAGE_SIZE;
1775 1775 insert_mmap(ucontext, mm4);
1776 1776 if (mm5) {
+4 -3
drivers/infiniband/hw/cxgb4/t4.h
··· 539 539 size_t memsize;
540 540 __be64 bits_type_ts;
541 541 u32 cqid;
542 + u32 qid_mask;
542 543 int vector;
543 544 u16 size; /* including status page */
544 545 u16 cidx;
··· 564 563 set_bit(CQ_ARMED, &cq->flags);
565 564 while (cq->cidx_inc > CIDXINC_M) {
566 565 val = SEINTARM_V(0) | CIDXINC_V(CIDXINC_M) | TIMERREG_V(7) |
567 - INGRESSQID_V(cq->cqid);
566 + INGRESSQID_V(cq->cqid & cq->qid_mask);
568 567 writel(val, cq->gts);
569 568 cq->cidx_inc -= CIDXINC_M;
570 569 }
571 570 val = SEINTARM_V(se) | CIDXINC_V(cq->cidx_inc) | TIMERREG_V(6) |
572 - INGRESSQID_V(cq->cqid);
571 + INGRESSQID_V(cq->cqid & cq->qid_mask);
573 572 writel(val, cq->gts);
574 573 cq->cidx_inc = 0;
575 574 return 0;
··· 602 601 u32 val;
603 602 
604 603 val = SEINTARM_V(0) | CIDXINC_V(cq->cidx_inc) | TIMERREG_V(7) |
605 - INGRESSQID_V(cq->cqid);
604 + INGRESSQID_V(cq->cqid & cq->qid_mask);
606 605 writel(val, cq->gts);
607 606 cq->cidx_inc = 0;
608 607 }
+3 -1
drivers/infiniband/hw/cxgb4/t4fw_ri_api.h
··· 848 848 #define CONG_CNTRL_V(x) ((x) << CONG_CNTRL_S)
849 849 #define CONG_CNTRL_G(x) (((x) >> CONG_CNTRL_S) & CONG_CNTRL_M)
850 850 
851 - #define CONG_CNTRL_VALID (1 << 18)
851 + #define T5_ISS_S 18
852 + #define T5_ISS_V(x) ((x) << T5_ISS_S)
853 + #define T5_ISS_F T5_ISS_V(1U)
852 854 
853 855 #endif /* _T4FW_RI_API_H_ */
+1
drivers/infiniband/hw/nes/nes.c
··· 116 116 [RDMA_NL_IWPM_REG_PID] = {.dump = iwpm_register_pid_cb}, 117 117 [RDMA_NL_IWPM_ADD_MAPPING] = {.dump = iwpm_add_mapping_cb}, 118 118 [RDMA_NL_IWPM_QUERY_MAPPING] = {.dump = iwpm_add_and_query_mapping_cb}, 119 + [RDMA_NL_IWPM_REMOTE_INFO] = {.dump = iwpm_remote_info_cb}, 119 120 [RDMA_NL_IWPM_HANDLE_ERR] = {.dump = iwpm_mapping_error_cb}, 120 121 [RDMA_NL_IWPM_MAPINFO] = {.dump = iwpm_mapping_info_cb}, 121 122 [RDMA_NL_IWPM_MAPINFO_NUM] = {.dump = iwpm_ack_mapping_info_cb}
+47 -16
drivers/infiniband/hw/nes/nes_cm.c
··· 596 596 memcpy(pm_msg->if_name, nesvnic->netdev->name, IWPM_IFNAME_SIZE); 597 597 } 598 598 599 + static void record_sockaddr_info(struct sockaddr_storage *addr_info, 600 + nes_addr_t *ip_addr, u16 *port_num) 601 + { 602 + struct sockaddr_in *in_addr = (struct sockaddr_in *)addr_info; 603 + 604 + if (in_addr->sin_family == AF_INET) { 605 + *ip_addr = ntohl(in_addr->sin_addr.s_addr); 606 + *port_num = ntohs(in_addr->sin_port); 607 + } 608 + } 609 + 599 610 /* 600 611 * nes_record_pm_msg - Save the received mapping info 601 612 */ 602 613 static void nes_record_pm_msg(struct nes_cm_info *cm_info, 603 614 struct iwpm_sa_data *pm_msg) 604 615 { 605 - struct sockaddr_in *mapped_loc_addr = 606 - (struct sockaddr_in *)&pm_msg->mapped_loc_addr; 607 - struct sockaddr_in *mapped_rem_addr = 608 - (struct sockaddr_in *)&pm_msg->mapped_rem_addr; 616 + record_sockaddr_info(&pm_msg->mapped_loc_addr, 617 + &cm_info->mapped_loc_addr, &cm_info->mapped_loc_port); 609 618 610 - if (mapped_loc_addr->sin_family == AF_INET) { 611 - cm_info->mapped_loc_addr = 612 - ntohl(mapped_loc_addr->sin_addr.s_addr); 613 - cm_info->mapped_loc_port = ntohs(mapped_loc_addr->sin_port); 614 - } 615 - if (mapped_rem_addr->sin_family == AF_INET) { 616 - cm_info->mapped_rem_addr = 617 - ntohl(mapped_rem_addr->sin_addr.s_addr); 618 - cm_info->mapped_rem_port = ntohs(mapped_rem_addr->sin_port); 619 - } 619 + record_sockaddr_info(&pm_msg->mapped_rem_addr, 620 + &cm_info->mapped_rem_addr, &cm_info->mapped_rem_port); 621 + } 622 + 623 + /* 624 + * nes_get_remote_addr - Get the address info of the remote connecting peer 625 + */ 626 + static int nes_get_remote_addr(struct nes_cm_node *cm_node) 627 + { 628 + struct sockaddr_storage mapped_loc_addr, mapped_rem_addr; 629 + struct sockaddr_storage remote_addr; 630 + int ret; 631 + 632 + nes_create_sockaddr(htonl(cm_node->mapped_loc_addr), 633 + htons(cm_node->mapped_loc_port), &mapped_loc_addr); 634 + nes_create_sockaddr(htonl(cm_node->mapped_rem_addr), 635 + htons(cm_node->mapped_rem_port), &mapped_rem_addr); 636 + 637 + ret = iwpm_get_remote_info(&mapped_loc_addr, &mapped_rem_addr, 638 + &remote_addr, RDMA_NL_NES); 639 + if (ret) 640 + nes_debug(NES_DBG_CM, "Unable to find remote peer address info\n"); 641 + else 642 + record_sockaddr_info(&remote_addr, &cm_node->rem_addr, 643 + &cm_node->rem_port); 644 + return ret; 620 645 } 621 646 622 647 /** ··· 1591 1566 return NULL; 1592 1567 1593 1568 /* set our node specific transport info */ 1594 - cm_node->loc_addr = cm_info->loc_addr; 1569 + if (listener) { 1570 + cm_node->loc_addr = listener->loc_addr; 1571 + cm_node->loc_port = listener->loc_port; 1572 + } else { 1573 + cm_node->loc_addr = cm_info->loc_addr; 1574 + cm_node->loc_port = cm_info->loc_port; 1575 + } 1595 1576 cm_node->rem_addr = cm_info->rem_addr; 1596 - cm_node->loc_port = cm_info->loc_port; 1597 1577 cm_node->rem_port = cm_info->rem_port; 1598 1578 1599 1579 cm_node->mapped_loc_addr = cm_info->mapped_loc_addr; ··· 2181 2151 cm_node->state = NES_CM_STATE_ESTABLISHED; 2182 2152 if (datasize) { 2183 2153 cm_node->tcp_cntxt.rcv_nxt = inc_sequence + datasize; 2154 + nes_get_remote_addr(cm_node); 2184 2155 handle_rcv_mpa(cm_node, skb); 2185 2156 } else { /* rcvd ACK only */ 2186 2157 dev_kfree_skb_any(skb);
-1
drivers/infiniband/hw/qib/qib.h
··· 1136 1136 extern u32 qib_cpulist_count; 1137 1137 extern unsigned long *qib_cpulist; 1138 1138 1139 - extern unsigned qib_wc_pat; 1140 1139 extern unsigned qib_cc_table_size; 1141 1140 int qib_init(struct qib_devdata *, int); 1142 1141 int init_chip_wc_pat(struct qib_devdata *dd, u32);
+2 -1
drivers/infiniband/hw/qib/qib_file_ops.c
··· 835 835 vma->vm_flags &= ~VM_MAYREAD; 836 836 vma->vm_flags |= VM_DONTCOPY | VM_DONTEXPAND; 837 837 838 - if (qib_wc_pat) 838 + /* We used PAT if wc_cookie == 0 */ 839 + if (!dd->wc_cookie) 839 840 vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot); 840 841 841 842 ret = io_remap_pfn_range(vma, vma->vm_start, phys >> PAGE_SHIFT,
+3 -5
drivers/infiniband/hw/qib/qib_iba6120.c
··· 3315 3315 qib_6120_config_ctxts(dd); 3316 3316 qib_set_ctxtcnt(dd); 3317 3317 3318 - if (qib_wc_pat) { 3319 - ret = init_chip_wc_pat(dd, 0); 3320 - if (ret) 3321 - goto bail; 3322 - } 3318 + ret = init_chip_wc_pat(dd, 0); 3319 + if (ret) 3320 + goto bail; 3323 3321 set_6120_baseaddrs(dd); /* set chip access pointers now */ 3324 3322 3325 3323 ret = 0;
+3 -5
drivers/infiniband/hw/qib/qib_iba7220.c
··· 4126 4126 qib_7220_config_ctxts(dd); 4127 4127 qib_set_ctxtcnt(dd); /* needed for PAT setup */ 4128 4128 4129 - if (qib_wc_pat) { 4130 - ret = init_chip_wc_pat(dd, 0); 4131 - if (ret) 4132 - goto bail; 4133 - } 4129 + ret = init_chip_wc_pat(dd, 0); 4130 + if (ret) 4131 + goto bail; 4134 4132 set_7220_baseaddrs(dd); /* set chip access pointers now */ 4135 4133 4136 4134 ret = 0;
+20 -21
drivers/infiniband/hw/qib/qib_iba7322.c
··· 6429 6429 unsigned features, pidx, sbufcnt; 6430 6430 int ret, mtu; 6431 6431 u32 sbufs, updthresh; 6432 + resource_size_t vl15off; 6432 6433 6433 6434 /* pport structs are contiguous, allocated after devdata */ 6434 6435 ppd = (struct qib_pportdata *)(dd + 1); ··· 6678 6677 qib_7322_config_ctxts(dd); 6679 6678 qib_set_ctxtcnt(dd); 6680 6679 6681 - if (qib_wc_pat) { 6682 - resource_size_t vl15off; 6683 - /* 6684 - * We do not set WC on the VL15 buffers to avoid 6685 - * a rare problem with unaligned writes from 6686 - * interrupt-flushed store buffers, so we need 6687 - * to map those separately here. We can't solve 6688 - * this for the rarely used mtrr case. 6689 - */ 6690 - ret = init_chip_wc_pat(dd, 0); 6691 - if (ret) 6692 - goto bail; 6680 + /* 6681 + * We do not set WC on the VL15 buffers to avoid 6682 + * a rare problem with unaligned writes from 6683 + * interrupt-flushed store buffers, so we need 6684 + * to map those separately here. We can't solve 6685 + * this for the rarely used mtrr case. 6686 + */ 6687 + ret = init_chip_wc_pat(dd, 0); 6688 + if (ret) 6689 + goto bail; 6693 6690 6694 - /* vl15 buffers start just after the 4k buffers */ 6695 - vl15off = dd->physaddr + (dd->piobufbase >> 32) + 6696 - dd->piobcnt4k * dd->align4k; 6697 - dd->piovl15base = ioremap_nocache(vl15off, 6698 - NUM_VL15_BUFS * dd->align4k); 6699 - if (!dd->piovl15base) { 6700 - ret = -ENOMEM; 6701 - goto bail; 6702 - } 6691 + /* vl15 buffers start just after the 4k buffers */ 6692 + vl15off = dd->physaddr + (dd->piobufbase >> 32) + 6693 + dd->piobcnt4k * dd->align4k; 6694 + dd->piovl15base = ioremap_nocache(vl15off, 6695 + NUM_VL15_BUFS * dd->align4k); 6696 + if (!dd->piovl15base) { 6697 + ret = -ENOMEM; 6698 + goto bail; 6703 6699 } 6700 + 6704 6701 qib_7322_set_baseaddrs(dd); /* set chip access pointers now */ 6705 6702 6706 6703 ret = 0;
+7 -19
drivers/infiniband/hw/qib/qib_init.c
··· 91 91 unsigned qib_cc_table_size; 92 92 module_param_named(cc_table_size, qib_cc_table_size, uint, S_IRUGO); 93 93 MODULE_PARM_DESC(cc_table_size, "Congestion control table entries 0 (CCA disabled - default), min = 128, max = 1984"); 94 - /* 95 - * qib_wc_pat parameter: 96 - * 0 is WC via MTRR 97 - * 1 is WC via PAT 98 - * If PAT initialization fails, code reverts back to MTRR 99 - */ 100 - unsigned qib_wc_pat = 1; /* default (1) is to use PAT, not MTRR */ 101 - module_param_named(wc_pat, qib_wc_pat, uint, S_IRUGO); 102 - MODULE_PARM_DESC(wc_pat, "enable write-combining via PAT mechanism"); 103 94 104 95 static void verify_interrupt(unsigned long); 105 96 ··· 1368 1377 spin_unlock(&dd->pport[pidx].cc_shadow_lock); 1369 1378 } 1370 1379 1371 - if (!qib_wc_pat) 1372 - qib_disable_wc(dd); 1380 + qib_disable_wc(dd); 1373 1381 1374 1382 if (dd->pioavailregs_dma) { 1375 1383 dma_free_coherent(&dd->pcidev->dev, PAGE_SIZE, ··· 1537 1547 goto bail; 1538 1548 } 1539 1549 1540 - if (!qib_wc_pat) { 1541 - ret = qib_enable_wc(dd); 1542 - if (ret) { 1543 - qib_dev_err(dd, 1544 - "Write combining not enabled (err %d): performance may be poor\n", 1545 - -ret); 1546 - ret = 0; 1547 - } 1550 + ret = qib_enable_wc(dd); 1551 + if (ret) { 1552 + qib_dev_err(dd, 1553 + "Write combining not enabled (err %d): performance may be poor\n", 1554 + -ret); 1555 + ret = 0; 1548 1556 } 1549 1557 1550 1558 qib_verify_pioperf(dd);
+4 -27
drivers/infiniband/hw/qib/qib_wc_x86_64.c
··· 116 116 } 117 117 118 118 if (!ret) { 119 - int cookie; 120 - 121 - cookie = mtrr_add(pioaddr, piolen, MTRR_TYPE_WRCOMB, 0); 122 - if (cookie < 0) { 123 - { 124 - qib_devinfo(dd->pcidev, 125 - "mtrr_add() WC for PIO bufs failed (%d)\n", 126 - cookie); 127 - ret = -EINVAL; 128 - } 129 - } else { 130 - dd->wc_cookie = cookie; 131 - dd->wc_base = (unsigned long) pioaddr; 132 - dd->wc_len = (unsigned long) piolen; 133 - } 119 + dd->wc_cookie = arch_phys_wc_add(pioaddr, piolen); 120 + if (dd->wc_cookie < 0) 121 + ret = -EINVAL; 134 122 } 135 123 136 124 return ret; ··· 130 142 */ 131 143 void qib_disable_wc(struct qib_devdata *dd) 132 144 { 133 - if (dd->wc_cookie) { 134 - int r; 135 - 136 - r = mtrr_del(dd->wc_cookie, dd->wc_base, 137 - dd->wc_len); 138 - if (r < 0) 139 - qib_devinfo(dd->pcidev, 140 - "mtrr_del(%lx, %lx, %lx) failed: %d\n", 141 - dd->wc_cookie, dd->wc_base, 142 - dd->wc_len, r); 143 - dd->wc_cookie = 0; /* even on failure */ 144 - } 145 + arch_phys_wc_del(dd->wc_cookie); 145 146 } 146 147 147 148 /**
+2 -2
drivers/infiniband/ulp/ipoib/ipoib_cm.c
··· 386 386 rx->rx_ring[i].mapping, 387 387 GFP_KERNEL)) { 388 388 ipoib_warn(priv, "failed to allocate receive buffer %d\n", i); 389 - ret = -ENOMEM; 390 - goto err_count; 389 + ret = -ENOMEM; 390 + goto err_count; 391 391 } 392 392 ret = ipoib_cm_post_receive_nonsrq(dev, rx, &t->wr, t->sge, i); 393 393 if (ret) {
+1 -70
drivers/irqchip/irq-gic.c
··· 82 82 #define NR_GIC_CPU_IF 8 83 83 static u8 gic_cpu_map[NR_GIC_CPU_IF] __read_mostly; 84 84 85 - /* 86 - * Supported arch specific GIC irq extension. 87 - * Default make them NULL. 88 - */ 89 - struct irq_chip gic_arch_extn = { 90 - .irq_eoi = NULL, 91 - .irq_mask = NULL, 92 - .irq_unmask = NULL, 93 - .irq_retrigger = NULL, 94 - .irq_set_type = NULL, 95 - .irq_set_wake = NULL, 96 - }; 97 - 98 85 #ifndef MAX_GIC_NR 99 86 #define MAX_GIC_NR 1 100 87 #endif ··· 154 167 155 168 static void gic_mask_irq(struct irq_data *d) 156 169 { 157 - unsigned long flags; 158 - 159 - raw_spin_lock_irqsave(&irq_controller_lock, flags); 160 170 gic_poke_irq(d, GIC_DIST_ENABLE_CLEAR); 161 - if (gic_arch_extn.irq_mask) 162 - gic_arch_extn.irq_mask(d); 163 - raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 164 171 } 165 172 166 173 static void gic_unmask_irq(struct irq_data *d) 167 174 { 168 - unsigned long flags; 169 - 170 - raw_spin_lock_irqsave(&irq_controller_lock, flags); 171 - if (gic_arch_extn.irq_unmask) 172 - gic_arch_extn.irq_unmask(d); 173 175 gic_poke_irq(d, GIC_DIST_ENABLE_SET); 174 - raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 175 176 } 176 177 177 178 static void gic_eoi_irq(struct irq_data *d) 178 179 { 179 - if (gic_arch_extn.irq_eoi) { 180 - raw_spin_lock(&irq_controller_lock); 181 - gic_arch_extn.irq_eoi(d); 182 - raw_spin_unlock(&irq_controller_lock); 183 - } 184 - 185 180 writel_relaxed(gic_irq(d), gic_cpu_base(d) + GIC_CPU_EOI); 186 181 } 187 182 ··· 220 251 { 221 252 void __iomem *base = gic_dist_base(d); 222 253 unsigned int gicirq = gic_irq(d); 223 - unsigned long flags; 224 - int ret; 225 254 226 255 /* Interrupt configuration for SGIs can't be changed */ 227 256 if (gicirq < 16) ··· 230 263 type != IRQ_TYPE_EDGE_RISING) 231 264 return -EINVAL; 232 265 233 - raw_spin_lock_irqsave(&irq_controller_lock, flags); 234 - 235 - if (gic_arch_extn.irq_set_type) 236 - gic_arch_extn.irq_set_type(d, type); 237 - 238 - ret = 
gic_configure_irq(gicirq, type, base, NULL); 239 - 240 - raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 241 - 242 - return ret; 243 - } 244 - 245 - static int gic_retrigger(struct irq_data *d) 246 - { 247 - if (gic_arch_extn.irq_retrigger) 248 - return gic_arch_extn.irq_retrigger(d); 249 - 250 - /* the genirq layer expects 0 if we can't retrigger in hardware */ 251 - return 0; 266 + return gic_configure_irq(gicirq, type, base, NULL); 252 267 } 253 268 254 269 #ifdef CONFIG_SMP ··· 259 310 260 311 return IRQ_SET_MASK_OK; 261 312 } 262 - #endif 263 - 264 - #ifdef CONFIG_PM 265 - static int gic_set_wake(struct irq_data *d, unsigned int on) 266 - { 267 - int ret = -ENXIO; 268 - 269 - if (gic_arch_extn.irq_set_wake) 270 - ret = gic_arch_extn.irq_set_wake(d, on); 271 - 272 - return ret; 273 - } 274 - 275 - #else 276 - #define gic_set_wake NULL 277 313 #endif 278 314 279 315 static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs) ··· 319 385 .irq_unmask = gic_unmask_irq, 320 386 .irq_eoi = gic_eoi_irq, 321 387 .irq_set_type = gic_set_type, 322 - .irq_retrigger = gic_retrigger, 323 388 #ifdef CONFIG_SMP 324 389 .irq_set_affinity = gic_set_affinity, 325 390 #endif 326 - .irq_set_wake = gic_set_wake, 327 391 .irq_get_irqchip_state = gic_irq_get_irqchip_state, 328 392 .irq_set_irqchip_state = gic_irq_set_irqchip_state, 329 393 }; ··· 987 1055 set_handle_irq(gic_handle_irq); 988 1056 } 989 1057 990 - gic_chip.flags |= gic_arch_extn.flags; 991 1058 gic_dist_init(gic); 992 1059 gic_cpu_init(gic); 993 1060 gic_pm_init(gic);
+6 -6
drivers/md/dm-crypt.c
··· 925 925 926 926 switch (r) { 927 927 /* async */ 928 - case -EINPROGRESS: 929 928 case -EBUSY: 930 929 wait_for_completion(&ctx->restart); 931 930 reinit_completion(&ctx->restart); 931 + /* fall through*/ 932 + case -EINPROGRESS: 932 933 ctx->req = NULL; 933 934 ctx->cc_sector++; 934 935 continue; ··· 1346 1345 struct dm_crypt_io *io = container_of(ctx, struct dm_crypt_io, ctx); 1347 1346 struct crypt_config *cc = io->cc; 1348 1347 1349 - if (error == -EINPROGRESS) 1348 + if (error == -EINPROGRESS) { 1349 + complete(&ctx->restart); 1350 1350 return; 1351 + } 1351 1352 1352 1353 if (!error && cc->iv_gen_ops && cc->iv_gen_ops->post) 1353 1354 error = cc->iv_gen_ops->post(cc, iv_of_dmreq(cc, dmreq), dmreq); ··· 1360 1357 crypt_free_req(cc, req_of_dmreq(cc, dmreq), io->base_bio); 1361 1358 1362 1359 if (!atomic_dec_and_test(&ctx->cc_pending)) 1363 - goto done; 1360 + return; 1364 1361 1365 1362 if (bio_data_dir(io->base_bio) == READ) 1366 1363 kcryptd_crypt_read_done(io); 1367 1364 else 1368 1365 kcryptd_crypt_write_io_submit(io, 1); 1369 - done: 1370 - if (!completion_done(&ctx->restart)) 1371 - complete(&ctx->restart); 1372 1366 } 1373 1367 1374 1368 static void kcryptd_crypt(struct work_struct *work)
+9 -8
drivers/md/dm-ioctl.c
··· 1298 1298 goto err_unlock_md_type; 1299 1299 } 1300 1300 1301 - if (dm_get_md_type(md) == DM_TYPE_NONE) 1301 + if (dm_get_md_type(md) == DM_TYPE_NONE) { 1302 1302 /* Initial table load: acquire type of table. */ 1303 1303 dm_set_md_type(md, dm_table_get_type(t)); 1304 - else if (dm_get_md_type(md) != dm_table_get_type(t)) { 1304 + 1305 + /* setup md->queue to reflect md's type (may block) */ 1306 + r = dm_setup_md_queue(md); 1307 + if (r) { 1308 + DMWARN("unable to set up device queue for new table."); 1309 + goto err_unlock_md_type; 1310 + } 1311 + } else if (dm_get_md_type(md) != dm_table_get_type(t)) { 1305 1312 DMWARN("can't change device type after initial table load."); 1306 1313 r = -EINVAL; 1307 1314 goto err_unlock_md_type; 1308 1315 } 1309 1316 1310 - /* setup md->queue to reflect md's type (may block) */ 1311 - r = dm_setup_md_queue(md); 1312 - if (r) { 1313 - DMWARN("unable to set up device queue for new table."); 1314 - goto err_unlock_md_type; 1315 - } 1316 1317 dm_unlock_md_type(md); 1317 1318 1318 1319 /* stage inactive table */
+12 -7
drivers/md/dm.c
··· 1082 1082 dm_put(md); 1083 1083 } 1084 1084 1085 - static void free_rq_clone(struct request *clone) 1085 + static void free_rq_clone(struct request *clone, bool must_be_mapped) 1086 1086 { 1087 1087 struct dm_rq_target_io *tio = clone->end_io_data; 1088 1088 struct mapped_device *md = tio->md; 1089 1089 1090 + WARN_ON_ONCE(must_be_mapped && !clone->q); 1091 + 1090 1092 blk_rq_unprep_clone(clone); 1091 1093 1092 - if (clone->q->mq_ops) 1094 + if (md->type == DM_TYPE_MQ_REQUEST_BASED) 1095 + /* stacked on blk-mq queue(s) */ 1093 1096 tio->ti->type->release_clone_rq(clone); 1094 1097 else if (!md->queue->mq_ops) 1095 1098 /* request_fn queue stacked on request_fn queue(s) */ 1096 1099 free_clone_request(md, clone); 1100 + /* 1101 + * NOTE: for the blk-mq queue stacked on request_fn queue(s) case: 1102 + * no need to call free_clone_request() because we leverage blk-mq by 1103 + * allocating the clone at the end of the blk-mq pdu (see: clone_rq) 1104 + */ 1097 1105 1098 1106 if (!md->queue->mq_ops) 1099 1107 free_rq_tio(tio); ··· 1132 1124 rq->sense_len = clone->sense_len; 1133 1125 } 1134 1126 1135 - free_rq_clone(clone); 1127 + free_rq_clone(clone, true); 1136 1128 if (!rq->q->mq_ops) 1137 1129 blk_end_request_all(rq, error); 1138 1130 else ··· 1151 1143 } 1152 1144 1153 1145 if (clone) 1154 - free_rq_clone(clone); 1146 + free_rq_clone(clone, false); 1155 1147 } 1156 1148 1157 1149 /* ··· 2669 2661 static int dm_init_request_based_queue(struct mapped_device *md) 2670 2662 { 2671 2663 struct request_queue *q = NULL; 2672 - 2673 - if (md->queue->elevator) 2674 - return 0; 2675 2664 2676 2665 /* Fully initialize the queue */ 2677 2666 q = blk_init_allocated_queue(md->queue, dm_request_fn, NULL);
+2 -2
drivers/md/md.c
··· 4818 4818 if (mddev->sysfs_state) 4819 4819 sysfs_put(mddev->sysfs_state); 4820 4820 4821 + if (mddev->queue) 4822 + blk_cleanup_queue(mddev->queue); 4821 4823 if (mddev->gendisk) { 4822 4824 del_gendisk(mddev->gendisk); 4823 4825 put_disk(mddev->gendisk); 4824 4826 } 4825 - if (mddev->queue) 4826 - blk_cleanup_queue(mddev->queue); 4827 4827 4828 4828 kfree(mddev); 4829 4829 }
+7 -7
drivers/media/platform/marvell-ccic/mcam-core.c
··· 117 117 .planar = false, 118 118 }, 119 119 { 120 - .desc = "UYVY 4:2:2", 121 - .pixelformat = V4L2_PIX_FMT_UYVY, 120 + .desc = "YVYU 4:2:2", 121 + .pixelformat = V4L2_PIX_FMT_YVYU, 122 122 .mbus_code = MEDIA_BUS_FMT_YUYV8_2X8, 123 123 .bpp = 2, 124 124 .planar = false, ··· 740 740 741 741 switch (fmt->pixelformat) { 742 742 case V4L2_PIX_FMT_YUYV: 743 - case V4L2_PIX_FMT_UYVY: 743 + case V4L2_PIX_FMT_YVYU: 744 744 widthy = fmt->width * 2; 745 745 widthuv = 0; 746 746 break; ··· 767 767 case V4L2_PIX_FMT_YUV420: 768 768 case V4L2_PIX_FMT_YVU420: 769 769 mcam_reg_write_mask(cam, REG_CTRL0, 770 - C0_DF_YUV | C0_YUV_420PL | C0_YUVE_YVYU, C0_DF_MASK); 770 + C0_DF_YUV | C0_YUV_420PL | C0_YUVE_VYUY, C0_DF_MASK); 771 771 break; 772 772 case V4L2_PIX_FMT_YUYV: 773 773 mcam_reg_write_mask(cam, REG_CTRL0, 774 - C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_UYVY, C0_DF_MASK); 774 + C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_NOSWAP, C0_DF_MASK); 775 775 break; 776 - case V4L2_PIX_FMT_UYVY: 776 + case V4L2_PIX_FMT_YVYU: 777 777 mcam_reg_write_mask(cam, REG_CTRL0, 778 - C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_YUYV, C0_DF_MASK); 778 + C0_DF_YUV | C0_YUV_PACKED | C0_YUVE_SWAP24, C0_DF_MASK); 779 779 break; 780 780 case V4L2_PIX_FMT_RGB444: 781 781 mcam_reg_write_mask(cam, REG_CTRL0,
+4 -4
drivers/media/platform/marvell-ccic/mcam-core.h
··· 331 331 #define C0_YUVE_YVYU 0x00010000 /* Y1CrY0Cb */ 332 332 #define C0_YUVE_VYUY 0x00020000 /* CrY1CbY0 */ 333 333 #define C0_YUVE_UYVY 0x00030000 /* CbY1CrY0 */ 334 - #define C0_YUVE_XYUV 0x00000000 /* 420: .YUV */ 335 - #define C0_YUVE_XYVU 0x00010000 /* 420: .YVU */ 336 - #define C0_YUVE_XUVY 0x00020000 /* 420: .UVY */ 337 - #define C0_YUVE_XVUY 0x00030000 /* 420: .VUY */ 334 + #define C0_YUVE_NOSWAP 0x00000000 /* no bytes swapping */ 335 + #define C0_YUVE_SWAP13 0x00010000 /* swap byte 1 and 3 */ 336 + #define C0_YUVE_SWAP24 0x00020000 /* swap byte 2 and 4 */ 337 + #define C0_YUVE_SWAP1324 0x00030000 /* swap bytes 1&3 and 2&4 */ 338 338 /* Bayer bits 18,19 if needed */ 339 339 #define C0_EOF_VSYNC 0x00400000 /* Generate EOF by VSYNC */ 340 340 #define C0_VEDGE_CTRL 0x00800000 /* Detect falling edge of VSYNC */
+6 -1
drivers/media/platform/soc_camera/rcar_vin.c
··· 135 135 #define VIN_MAX_WIDTH 2048 136 136 #define VIN_MAX_HEIGHT 2048 137 137 138 + #define TIMEOUT_MS 100 139 + 138 140 enum chip_id { 139 141 RCAR_GEN2, 140 142 RCAR_H1, ··· 822 820 if (priv->state == STOPPING) { 823 821 priv->request_to_stop = true; 824 822 spin_unlock_irq(&priv->lock); 825 - wait_for_completion(&priv->capture_stop); 823 + if (!wait_for_completion_timeout( 824 + &priv->capture_stop, 825 + msecs_to_jiffies(TIMEOUT_MS))) 826 + priv->state = STOPPED; 826 827 spin_lock_irq(&priv->lock); 827 828 } 828 829 }
+12
drivers/mmc/card/block.c
··· 1029 1029 md->reset_done &= ~type; 1030 1030 } 1031 1031 1032 + int mmc_access_rpmb(struct mmc_queue *mq) 1033 + { 1034 + struct mmc_blk_data *md = mq->data; 1035 + /* 1036 + * If this is an RPMB partition access, return true 1037 + */ 1038 + if (md && md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) 1039 + return true; 1040 + 1041 + return false; 1042 + } 1043 + 1032 1044 static int mmc_blk_issue_discard_rq(struct mmc_queue *mq, struct request *req) 1033 1045 { 1034 1046 struct mmc_blk_data *md = mq->data;
+1 -1
drivers/mmc/card/queue.c
··· 38 38 return BLKPREP_KILL; 39 39 } 40 40 41 - if (mq && mmc_card_removed(mq->card)) 41 + if (mq && (mmc_card_removed(mq->card) || mmc_access_rpmb(mq))) 42 42 return BLKPREP_KILL; 43 43 44 44 req->cmd_flags |= REQ_DONTPREP;
+2
drivers/mmc/card/queue.h
··· 73 73 extern int mmc_packed_init(struct mmc_queue *, struct mmc_card *); 74 74 extern void mmc_packed_clean(struct mmc_queue *); 75 75 76 + extern int mmc_access_rpmb(struct mmc_queue *); 77 + 76 78 #endif
+1
drivers/mmc/core/core.c
··· 2651 2651 switch (mode) { 2652 2652 case PM_HIBERNATION_PREPARE: 2653 2653 case PM_SUSPEND_PREPARE: 2654 + case PM_RESTORE_PREPARE: 2654 2655 spin_lock_irqsave(&host->lock, flags); 2655 2656 host->rescan_disable = 1; 2656 2657 spin_unlock_irqrestore(&host->lock, flags);
+5 -2
drivers/mmc/host/dw_mmc.c
··· 589 589 host->ring_size = PAGE_SIZE / sizeof(struct idmac_desc); 590 590 591 591 /* Forward link the descriptor list */ 592 - for (i = 0, p = host->sg_cpu; i < host->ring_size - 1; i++, p++) 592 + for (i = 0, p = host->sg_cpu; i < host->ring_size - 1; i++, p++) { 593 593 p->des3 = cpu_to_le32(host->sg_dma + 594 594 (sizeof(struct idmac_desc) * (i + 1))); 595 + p->des1 = 0; 596 + } 595 597 596 598 /* Set the last descriptor as the end-of-ring descriptor */ 597 599 p->des3 = cpu_to_le32(host->sg_dma); ··· 1302 1300 int gpio_cd = mmc_gpio_get_cd(mmc); 1303 1301 1304 1302 /* Use platform get_cd function, else try onboard card detect */ 1305 - if (brd->quirks & DW_MCI_QUIRK_BROKEN_CARD_DETECTION) 1303 + if ((brd->quirks & DW_MCI_QUIRK_BROKEN_CARD_DETECTION) || 1304 + (mmc->caps & MMC_CAP_NONREMOVABLE)) 1306 1305 present = 1; 1307 1306 else if (!IS_ERR_VALUE(gpio_cd)) 1308 1307 present = gpio_cd;
+1 -1
drivers/mmc/host/sh_mmcif.c
··· 1408 1408 host = mmc_priv(mmc); 1409 1409 host->mmc = mmc; 1410 1410 host->addr = reg; 1411 - host->timeout = msecs_to_jiffies(1000); 1411 + host->timeout = msecs_to_jiffies(10000); 1412 1412 host->ccs_enable = !pd || !pd->ccs_unsupported; 1413 1413 host->clk_ctrl2_enable = pd && pd->clk_ctrl2_present; 1414 1414
+12
drivers/net/bonding/bond_main.c
··· 82 82 #include <net/bond_3ad.h> 83 83 #include <net/bond_alb.h> 84 84 85 + #include "bonding_priv.h" 86 + 85 87 /*---------------------------- Module parameters ----------------------------*/ 86 88 87 89 /* monitor all links that often (in milliseconds). <=0 disables monitoring */ ··· 4544 4542 int bond_create(struct net *net, const char *name) 4545 4543 { 4546 4544 struct net_device *bond_dev; 4545 + struct bonding *bond; 4546 + struct alb_bond_info *bond_info; 4547 4547 int res; 4548 4548 4549 4549 rtnl_lock(); ··· 4558 4554 rtnl_unlock(); 4559 4555 return -ENOMEM; 4560 4556 } 4557 + 4558 + /* 4559 + * Initialize rx_hashtbl_used_head to RLB_NULL_INDEX. 4560 + * It is set to 0 by default which is wrong. 4561 + */ 4562 + bond = netdev_priv(bond_dev); 4563 + bond_info = &(BOND_ALB_INFO(bond)); 4564 + bond_info->rx_hashtbl_used_head = RLB_NULL_INDEX; 4561 4565 4562 4566 dev_net_set(bond_dev, net); 4563 4567 bond_dev->rtnl_link_ops = &bond_link_ops;
+1
drivers/net/bonding/bond_procfs.c
··· 4 4 #include <net/netns/generic.h> 5 5 #include <net/bonding.h> 6 6 7 + #include "bonding_priv.h" 7 8 8 9 static void *bond_info_seq_start(struct seq_file *seq, loff_t *pos) 9 10 __acquires(RCU)
+25
drivers/net/bonding/bonding_priv.h
··· 1 + /* 2 + * Bond several ethernet interfaces into a Cisco, running 'Etherchannel'. 3 + * 4 + * Portions are (c) Copyright 1995 Simon "Guru Aleph-Null" Janes 5 + * NCM: Network and Communications Management, Inc. 6 + * 7 + * BUT, I'm the one who modified it for ethernet, so: 8 + * (c) Copyright 1999, Thomas Davis, tadavis@lbl.gov 9 + * 10 + * This software may be used and distributed according to the terms 11 + * of the GNU Public License, incorporated herein by reference. 12 + * 13 + */ 14 + 15 + #ifndef _BONDING_PRIV_H 16 + #define _BONDING_PRIV_H 17 + 18 + #define DRV_VERSION "3.7.1" 19 + #define DRV_RELDATE "April 27, 2011" 20 + #define DRV_NAME "bonding" 21 + #define DRV_DESCRIPTION "Ethernet Channel Bonding Driver" 22 + 23 + #define bond_version DRV_DESCRIPTION ": v" DRV_VERSION " (" DRV_RELDATE ")\n" 24 + 25 + #endif
+1 -1
drivers/net/can/Kconfig
··· 112 112 113 113 config CAN_GRCAN 114 114 tristate "Aeroflex Gaisler GRCAN and GRHCAN CAN devices" 115 - depends on OF 115 + depends on OF && HAS_DMA 116 116 ---help--- 117 117 Say Y here if you want to use Aeroflex Gaisler GRCAN or GRHCAN. 118 118 Note that the driver supports little endian, even though little
+1 -1
drivers/net/can/usb/kvaser_usb.c
··· 1102 1102 1103 1103 if (msg->u.rx_can_header.flag & (MSG_FLAG_ERROR_FRAME | 1104 1104 MSG_FLAG_NERR)) { 1105 - netdev_err(priv->netdev, "Unknow error (flags: 0x%02x)\n", 1105 + netdev_err(priv->netdev, "Unknown error (flags: 0x%02x)\n", 1106 1106 msg->u.rx_can_header.flag); 1107 1107 1108 1108 stats->rx_errors++;
+1 -1
drivers/net/ethernet/8390/etherh.c
··· 523 523 char *s; 524 524 525 525 if (!ecard_readchunk(&cd, ec, 0xf5, 0)) { 526 - printk(KERN_ERR "%s: unable to read podule description string\n", 526 + printk(KERN_ERR "%s: unable to read module description string\n", 527 527 dev_name(&ec->dev)); 528 528 goto no_addr; 529 529 }
+1 -4
drivers/net/ethernet/altera/altera_msgdmahw.h
··· 58 58 /* Tx buffer control flags 59 59 */ 60 60 #define MSGDMA_DESC_CTL_TX_FIRST (MSGDMA_DESC_CTL_GEN_SOP | \ 61 - MSGDMA_DESC_CTL_TR_ERR_IRQ | \ 62 61 MSGDMA_DESC_CTL_GO) 63 62 64 - #define MSGDMA_DESC_CTL_TX_MIDDLE (MSGDMA_DESC_CTL_TR_ERR_IRQ | \ 65 - MSGDMA_DESC_CTL_GO) 63 + #define MSGDMA_DESC_CTL_TX_MIDDLE (MSGDMA_DESC_CTL_GO) 66 64 67 65 #define MSGDMA_DESC_CTL_TX_LAST (MSGDMA_DESC_CTL_GEN_EOP | \ 68 66 MSGDMA_DESC_CTL_TR_COMP_IRQ | \ 69 - MSGDMA_DESC_CTL_TR_ERR_IRQ | \ 70 67 MSGDMA_DESC_CTL_GO) 71 68 72 69 #define MSGDMA_DESC_CTL_TX_SINGLE (MSGDMA_DESC_CTL_GEN_SOP | \
+35 -8
drivers/net/ethernet/altera/altera_tse_main.c
··· 391 391 "RCV pktstatus %08X pktlength %08X\n", 392 392 pktstatus, pktlength); 393 393 394 + /* DMA transfer from TSE starts with 2 additional bytes for 395 + * IP payload alignment. Status returned by get_rx_status() 396 + * contains DMA transfer length. Packet is 2 bytes shorter. 397 + */ 398 + pktlength -= 2; 399 + 394 400 count++; 395 401 next_entry = (++priv->rx_cons) % priv->rx_ring_size; 396 402 ··· 783 777 struct altera_tse_private *priv = netdev_priv(dev); 784 778 struct phy_device *phydev; 785 779 struct device_node *phynode; 780 + bool fixed_link = false; 781 + int rc = 0; 786 782 787 783 /* Avoid init phy in case of no phy present */ 788 784 if (!priv->phy_iface) ··· 797 789 phynode = of_parse_phandle(priv->device->of_node, "phy-handle", 0); 798 790 799 791 if (!phynode) { 800 - netdev_dbg(dev, "no phy-handle found\n"); 801 - if (!priv->mdio) { 802 - netdev_err(dev, 803 - "No phy-handle nor local mdio specified\n"); 804 - return -ENODEV; 792 + /* check if a fixed-link is defined in device-tree */ 793 + if (of_phy_is_fixed_link(priv->device->of_node)) { 794 + rc = of_phy_register_fixed_link(priv->device->of_node); 795 + if (rc < 0) { 796 + netdev_err(dev, "cannot register fixed PHY\n"); 797 + return rc; 798 + } 799 + 800 + /* In the case of a fixed PHY, the DT node associated 801 + to the PHY is the Ethernet MAC DT node. 802 + */ 803 + phynode = of_node_get(priv->device->of_node); 804 + fixed_link = true; 805 + 806 + netdev_dbg(dev, "fixed-link detected\n"); 807 + phydev = of_phy_connect(dev, phynode, 808 + &altera_tse_adjust_link, 809 + 0, priv->phy_iface); 810 + } else { 811 + netdev_dbg(dev, "no phy-handle found\n"); 812 + if (!priv->mdio) { 813 + netdev_err(dev, "No phy-handle nor local mdio specified\n"); 814 + return -ENODEV; 815 + } 816 + phydev = connect_local_phy(dev); 805 817 } 806 818 } else { 807 819 netdev_dbg(dev, "phy-handle found\n"); 808 820 phydev = of_phy_connect(dev, phynode, ··· 846 819 /* Broken HW is sometimes missing the pull-up resistor on the 847 820 * MDIO line, which results in reads to non-existent devices returning 848 821 * 0 rather than 0xffff. Catch this here and treat 0 as a non-existent 849 822 * device as well. If a fixed-link is used the phy_id is always 0. 850 823 * Note: phydev->phy_id is the result of reading the UID PHY registers. 851 824 */ 852 825 if ((phydev->phy_id == 0) && !fixed_link) { 853 826 netdev_err(dev, "Bad PHY UID 0x%08x\n", phydev->phy_id); 854 827 phy_disconnect(phydev); 855 828 return -ENODEV;
+1 -1
drivers/net/ethernet/amd/Kconfig
··· 179 179 180 180 config AMD_XGBE 181 181 tristate "AMD 10GbE Ethernet driver" 182 - depends on (OF_NET || ACPI) && HAS_IOMEM 182 + depends on (OF_NET || ACPI) && HAS_IOMEM && HAS_DMA 183 183 select PHYLIB 184 184 select AMD_XGBE_PHY 185 185 select BITREVERSE
+2 -3
drivers/net/ethernet/arc/Kconfig
··· 25 25 config ARC_EMAC 26 26 tristate "ARC EMAC support" 27 27 select ARC_EMAC_CORE 28 - depends on OF_IRQ 29 - depends on OF_NET 28 + depends on OF_IRQ && OF_NET && HAS_DMA 30 29 ---help--- 31 30 On some legacy ARC (Synopsys) FPGA boards such as ARCAngel4/ML50x 32 31 non-standard on-chip ethernet device ARC EMAC 10/100 is used. ··· 34 35 config EMAC_ROCKCHIP 35 36 tristate "Rockchip EMAC support" 36 37 select ARC_EMAC_CORE 37 - depends on OF_IRQ && OF_NET && REGULATOR 38 + depends on OF_IRQ && OF_NET && REGULATOR && HAS_DMA 38 39 ---help--- 39 40 Support for Rockchip RK3066/RK3188 EMAC ethernet controllers. 40 41 This selects Rockchip SoC glue layer support for the
+1 -1
drivers/net/ethernet/atheros/atl1e/atl1e_hw.h
··· 129 129 #define TWSI_CTRL_LD_SLV_ADDR_SHIFT 8 130 130 #define TWSI_CTRL_SW_LDSTART 0x800 131 131 #define TWSI_CTRL_HW_LDSTART 0x1000 132 - #define TWSI_CTRL_SMB_SLV_ADDR_MASK 0x0x7F 132 + #define TWSI_CTRL_SMB_SLV_ADDR_MASK 0x7F 133 133 #define TWSI_CTRL_SMB_SLV_ADDR_SHIFT 15 134 134 #define TWSI_CTRL_LD_EXIST 0x400000 135 135 #define TWSI_CTRL_READ_FREQ_SEL_MASK 0x3
+1 -1
drivers/net/ethernet/broadcom/bcmsysport.h
··· 543 543 u32 jbr; /* RO # of xmited jabber count*/ 544 544 u32 bytes; /* RO # of xmited byte count */ 545 545 u32 pok; /* RO # of xmited good pkt */ 546 - u32 uc; /* RO (0x0x4f0)# of xmited unitcast pkt */ 546 + u32 uc; /* RO (0x4f0) # of xmited unicast pkt */ 547 547 }; 548 548 549 549 struct bcm_sysport_mib {
+1 -1
drivers/net/ethernet/broadcom/bgmac.c
··· 1260 1260 1261 1261 /* Poll again if more events arrived in the meantime */ 1262 1262 if (bgmac_read(bgmac, BGMAC_INT_STATUS) & (BGMAC_IS_TX0 | BGMAC_IS_RX)) 1263 - return handled; 1263 + return weight; 1264 1264 1265 1265 if (handled < weight) { 1266 1266 napi_complete(napi);
+1 -3
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 521 521 }; 522 522 523 523 enum bnx2x_tpa_mode_t { 524 + TPA_MODE_DISABLED, 524 525 TPA_MODE_LRO, 525 526 TPA_MODE_GRO 526 527 }; ··· 590 589 591 590 /* TPA related */ 592 591 struct bnx2x_agg_info *tpa_info; 593 - u8 disable_tpa; 594 592 #ifdef BNX2X_STOP_ON_ERROR 595 593 u64 tpa_queue_used; 596 594 #endif ··· 1545 1545 #define USING_MSIX_FLAG (1 << 5) 1546 1546 #define USING_MSI_FLAG (1 << 6) 1547 1547 #define DISABLE_MSI_FLAG (1 << 7) 1548 - #define TPA_ENABLE_FLAG (1 << 8) 1549 1548 #define NO_MCP_FLAG (1 << 9) 1550 - #define GRO_ENABLE_FLAG (1 << 10) 1551 1549 #define MF_FUNC_DIS (1 << 11) 1552 1550 #define OWN_CNIC_IRQ (1 << 12) 1553 1551 #define NO_ISCSI_OOO_FLAG (1 << 13)
+64 -52
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 947 947 u16 frag_size, pages; 948 948 #ifdef BNX2X_STOP_ON_ERROR 949 949 /* sanity check */ 950 - if (fp->disable_tpa && 950 + if (fp->mode == TPA_MODE_DISABLED && 951 951 (CQE_TYPE_START(cqe_fp_type) || 952 952 CQE_TYPE_STOP(cqe_fp_type))) 953 - BNX2X_ERR("START/STOP packet while disable_tpa type %x\n", 953 + BNX2X_ERR("START/STOP packet while TPA disabled, type %x\n", 954 954 CQE_TYPE(cqe_fp_type)); 955 955 #endif 956 956 ··· 1396 1396 DP(NETIF_MSG_IFUP, 1397 1397 "mtu %d rx_buf_size %d\n", bp->dev->mtu, fp->rx_buf_size); 1398 1398 1399 - if (!fp->disable_tpa) { 1399 + if (fp->mode != TPA_MODE_DISABLED) { 1400 1400 /* Fill the per-aggregation pool */ 1401 1401 for (i = 0; i < MAX_AGG_QS(bp); i++) { 1402 1402 struct bnx2x_agg_info *tpa_info = ··· 1410 1410 BNX2X_ERR("Failed to allocate TPA skb pool for queue[%d] - disabling TPA on this queue!\n", 1411 1411 j); 1412 1412 bnx2x_free_tpa_pool(bp, fp, i); 1413 - fp->disable_tpa = 1; 1413 + fp->mode = TPA_MODE_DISABLED; 1414 1414 break; 1415 1415 } 1416 1416 dma_unmap_addr_set(first_buf, mapping, 0); ··· 1438 1438 ring_prod); 1439 1439 bnx2x_free_tpa_pool(bp, fp, 1440 1440 MAX_AGG_QS(bp)); 1441 - fp->disable_tpa = 1; 1441 + fp->mode = TPA_MODE_DISABLED; 1442 1442 ring_prod = 0; 1443 1443 break; 1444 1444 } ··· 1560 1560 1561 1561 bnx2x_free_rx_bds(fp); 1562 1562 1563 - if (!fp->disable_tpa) 1563 + if (fp->mode != TPA_MODE_DISABLED) 1564 1564 bnx2x_free_tpa_pool(bp, fp, MAX_AGG_QS(bp)); 1565 1565 } 1566 1566 } ··· 2477 2477 /* set the tpa flag for each queue. The tpa flag determines the queue 2478 2478 * minimal size so it must be set prior to queue memory allocation 2479 2479 */ 2480 - fp->disable_tpa = !(bp->flags & TPA_ENABLE_FLAG || 2481 - (bp->flags & GRO_ENABLE_FLAG && 2482 - bnx2x_mtu_allows_gro(bp->dev->mtu))); 2483 - if (bp->flags & TPA_ENABLE_FLAG) 2480 + if (bp->dev->features & NETIF_F_LRO) 2484 2481 fp->mode = TPA_MODE_LRO; 2485 - else if (bp->flags & GRO_ENABLE_FLAG) 2482 + else if (bp->dev->features & NETIF_F_GRO && 2483 + bnx2x_mtu_allows_gro(bp->dev->mtu)) 2486 2484 fp->mode = TPA_MODE_GRO; 2485 + else 2486 + fp->mode = TPA_MODE_DISABLED; 2487 2487 2488 - /* We don't want TPA on an FCoE L2 ring */ 2489 - if (IS_FCOE_FP(fp)) 2490 - fp->disable_tpa = 1; 2488 + /* We don't want TPA if it's disabled in bp 2489 + * or if this is an FCoE L2 ring. 2490 + */ 2491 + if (bp->disable_tpa || IS_FCOE_FP(fp)) 2492 + fp->mode = TPA_MODE_DISABLED; 2491 2493 } 2492 2494 2493 2495 int bnx2x_load_cnic(struct bnx2x *bp) ··· 2610 2608 /* 2611 2609 * Zero fastpath structures preserving invariants like napi, which are 2612 2610 * allocated only once, fp index, max_cos, bp pointer. 2613 - * Also set fp->disable_tpa and txdata_ptr. 2611 + * Also set fp->mode and txdata_ptr. 2614 2612 */ 2615 2613 DP(NETIF_MSG_IFUP, "num queues: %d", bp->num_queues); 2616 2614 for_each_queue(bp, i) ··· 3249 3247 3250 3248 if ((bp->state == BNX2X_STATE_CLOSED) || 3251 3249 (bp->state == BNX2X_STATE_ERROR) || 3252 - (bp->flags & (TPA_ENABLE_FLAG | GRO_ENABLE_FLAG))) 3250 + (bp->dev->features & (NETIF_F_LRO | NETIF_F_GRO))) 3253 3251 return LL_FLUSH_FAILED; 3254 3252 3255 3253 if (!bnx2x_fp_lock_poll(fp)) ··· 4545 4543 * In these cases we disable the queue 4546 4544 * Min size is different for OOO, TPA and non-TPA queues 4547 4545 */ 4548 - if (ring_size < (fp->disable_tpa ? 4546 + if (ring_size < (fp->mode == TPA_MODE_DISABLED ? 4549 4547 MIN_RX_SIZE_NONTPA : MIN_RX_SIZE_TPA)) { 4550 4548 /* release memory allocated for this queue */ 4551 4549 bnx2x_free_fp_mem_at(bp, index); ··· 4811 4809 { 4812 4810 struct bnx2x *bp = netdev_priv(dev); 4813 4811 4812 + if (pci_num_vf(bp->pdev)) { 4813 + netdev_features_t changed = dev->features ^ features; 4814 + 4815 + /* Revert the requested changes in features if they 4816 + * would require internal reload of PF in bnx2x_set_features(). 4817 + */ 4818 + if (!(features & NETIF_F_RXCSUM) && !bp->disable_tpa) { 4819 + features &= ~NETIF_F_RXCSUM; 4820 + features |= dev->features & NETIF_F_RXCSUM; 4821 + } 4822 + 4823 + if (changed & NETIF_F_LOOPBACK) { 4824 + features &= ~NETIF_F_LOOPBACK; 4825 + features |= dev->features & NETIF_F_LOOPBACK; 4826 + } 4827 + } 4828 + 4814 4829 /* TPA requires Rx CSUM offloading */ 4815 4830 if (!(features & NETIF_F_RXCSUM)) { 4816 4831 features &= ~NETIF_F_LRO; 4817 4832 features &= ~NETIF_F_GRO; 4818 4833 } 4819 - 4820 - /* Note: do not disable SW GRO in kernel when HW GRO is off */ 4821 - if (bp->disable_tpa) 4822 - features &= ~NETIF_F_LRO; 4823 4834 4824 4835 return features; 4825 4836 } ··· 4840 4825 int bnx2x_set_features(struct net_device *dev, netdev_features_t features) 4841 4826 { 4842 4827 struct bnx2x *bp = netdev_priv(dev); 4843 - u32 flags = bp->flags; 4844 - u32 changes; 4828 + netdev_features_t changes = features ^ dev->features; 4845 4829 bool bnx2x_reload = false; 4830 + int rc; 4846 4831 4847 - if (features & NETIF_F_LRO) 4848 - flags |= TPA_ENABLE_FLAG; 4849 - else 4850 - flags &= ~TPA_ENABLE_FLAG; 4851 - 4852 - if (features & NETIF_F_GRO) 4853 - flags |= GRO_ENABLE_FLAG; 4854 - else 4855 - flags &= ~GRO_ENABLE_FLAG; 4856 - 4857 - if (features & NETIF_F_LOOPBACK) { 4858 - if (bp->link_params.loopback_mode != LOOPBACK_BMAC) { 4859 - bp->link_params.loopback_mode = LOOPBACK_BMAC; 4860 - bnx2x_reload = true; 4861 - } 4862 - } else { 4863 - if (bp->link_params.loopback_mode != LOOPBACK_NONE) { 4864 - bp->link_params.loopback_mode = LOOPBACK_NONE; 4865 - bnx2x_reload = true; 4832 + /* VFs or non SRIOV PFs should be able to change loopback feature */ 4833 + if (!pci_num_vf(bp->pdev)) { 4834 + if (features & NETIF_F_LOOPBACK) { 4835 + if (bp->link_params.loopback_mode != LOOPBACK_BMAC) { 4836 + bp->link_params.loopback_mode = LOOPBACK_BMAC; 4837 + bnx2x_reload = true; 4838 + } 4839 + } else { 4840 + if (bp->link_params.loopback_mode != LOOPBACK_NONE) { 4841 + bp->link_params.loopback_mode = LOOPBACK_NONE; 4842 + bnx2x_reload = true; 4843 + } 4866 4844 } 4867 4845 } 4868 4846 4869 - changes = flags ^ bp->flags; 4870 - 4871 4847 /* if GRO is changed while LRO is enabled, don't force a reload */ 4872 - if ((changes & GRO_ENABLE_FLAG) && (flags & TPA_ENABLE_FLAG)) 4873 - changes &= ~GRO_ENABLE_FLAG; 4848 + if ((changes & NETIF_F_GRO) && (features & NETIF_F_LRO)) 4849 + changes &= ~NETIF_F_GRO; 4874 4850 4875 4851 /* if GRO is changed while HW TPA is off, don't force a reload */ 4876 - if ((changes & GRO_ENABLE_FLAG) && bp->disable_tpa) 4877 - changes &= ~GRO_ENABLE_FLAG; 4852 + if ((changes & NETIF_F_GRO) && bp->disable_tpa) 4853 + changes &= ~NETIF_F_GRO; 4878 4854 4879 4855 if (changes) 4880 4856 bnx2x_reload = true; 4881 4857 4882 - bp->flags = flags; 4883 - 4884 4858 if (bnx2x_reload) { 4885 - if (bp->recovery_state == BNX2X_RECOVERY_DONE) 4886 - return bnx2x_reload_if_running(dev); 4859 + if (bp->recovery_state == BNX2X_RECOVERY_DONE) { 4860 + dev->features = features; 4861 + rc = bnx2x_reload_if_running(dev); 4862 + return rc ? rc : 1; 4863 + } 4887 4864 /* else: bnx2x_nic_load() will be called at end of recovery */ 4888 4865 } 4889 4866 ··· 4937 4930 return -ENODEV; 4938 4931 } 4939 4932 bp = netdev_priv(dev); 4933 + 4934 + if (pci_num_vf(bp->pdev)) { 4935 + DP(BNX2X_MSG_IOV, "VFs are enabled, can not change MTU\n"); 4936 + return -EPERM; 4937 + } 4940 4938 4941 4939 if (bp->recovery_state != BNX2X_RECOVERY_DONE) { 4942 4940 BNX2X_ERR("Handling parity error recovery. Try again later\n");
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
··· 969 969 { 970 970 int i; 971 971 972 - if (fp->disable_tpa) 972 + if (fp->mode == TPA_MODE_DISABLED) 973 973 return; 974 974 975 975 for (i = 0; i < last; i++)
+17
drivers/net/ethernet/broadcom/bnx2x/bnx2x_ethtool.c
··· 1843 1843 "set ring params command parameters: rx_pending = %d, tx_pending = %d\n", 1844 1844 ering->rx_pending, ering->tx_pending); 1845 1845 1846 + if (pci_num_vf(bp->pdev)) { 1847 + DP(BNX2X_MSG_IOV, 1848 + "VFs are enabled, can not change ring parameters\n"); 1849 + return -EPERM; 1850 + } 1851 + 1846 1852 if (bp->recovery_state != BNX2X_RECOVERY_DONE) { 1847 1853 DP(BNX2X_MSG_ETHTOOL, 1848 1854 "Handling parity error recovery. Try again later\n"); ··· 2905 2899 u8 is_serdes, link_up; 2906 2900 int rc, cnt = 0; 2907 2901 2902 + if (pci_num_vf(bp->pdev)) { 2903 + DP(BNX2X_MSG_IOV, 2904 + "VFs are enabled, can not perform self test\n"); 2905 + return; 2906 + } 2907 + 2908 2908 if (bp->recovery_state != BNX2X_RECOVERY_DONE) { 2909 2909 netdev_err(bp->dev, 2910 2910 "Handling parity error recovery. Try again later\n"); ··· 3479 3467 "set-channels command parameters: rx = %d, tx = %d, other = %d, combined = %d\n", 3480 3468 channels->rx_count, channels->tx_count, channels->other_count, 3481 3469 channels->combined_count); 3470 + 3471 + if (pci_num_vf(bp->pdev)) { 3472 + DP(BNX2X_MSG_IOV, "VFs are enabled, can not set channels\n"); 3473 + return -EPERM; 3474 + } 3482 3475 3483 3476 /* We don't support separate rx / tx channels. 3484 3477 * We don't allow setting 'other' channels.
+10 -7
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 3128 3128 __set_bit(BNX2X_Q_FLG_FORCE_DEFAULT_PRI, &flags); 3129 3129 } 3130 3130 3131 - if (!fp->disable_tpa) { 3131 + if (fp->mode != TPA_MODE_DISABLED) { 3132 3132 __set_bit(BNX2X_Q_FLG_TPA, &flags); 3133 3133 __set_bit(BNX2X_Q_FLG_TPA_IPV6, &flags); 3134 3134 if (fp->mode == TPA_MODE_GRO) ··· 3176 3176 u16 sge_sz = 0; 3177 3177 u16 tpa_agg_size = 0; 3178 3178 3179 - if (!fp->disable_tpa) { 3179 + if (fp->mode != TPA_MODE_DISABLED) { 3180 3180 pause->sge_th_lo = SGE_TH_LO(bp); 3181 3181 pause->sge_th_hi = SGE_TH_HI(bp); 3182 3182 ··· 3304 3304 /* This flag is relevant for E1x only. 3305 3305 * E2 doesn't have a TPA configuration in a function level. 3306 3306 */ 3307 - flags |= (bp->flags & TPA_ENABLE_FLAG) ? FUNC_FLG_TPA : 0; 3307 + flags |= (bp->dev->features & NETIF_F_LRO) ? FUNC_FLG_TPA : 0; 3308 3308 3309 3309 func_init.func_flgs = flags; 3310 3310 func_init.pf_id = BP_FUNC(bp); ··· 12107 12107 12108 12108 /* Set TPA flags */ 12109 12109 if (bp->disable_tpa) { 12110 - bp->flags &= ~(TPA_ENABLE_FLAG | GRO_ENABLE_FLAG); 12110 + bp->dev->hw_features &= ~NETIF_F_LRO; 12111 12111 bp->dev->features &= ~NETIF_F_LRO; 12112 - } else { 12113 - bp->flags |= (TPA_ENABLE_FLAG | GRO_ENABLE_FLAG); 12114 - bp->dev->features |= NETIF_F_LRO; 12115 12112 } 12116 12113 12117 12114 if (CHIP_IS_E1(bp)) ··· 13367 13370 int max_cos_est; 13368 13371 bool is_vf; 13369 13372 int cnic_cnt; 13373 + 13374 + /* Management FW 'remembers' living interfaces. Allow it some time 13375 + * to forget previously living interfaces, allowing a proper re-load. 13376 + */ 13377 + if (is_kdump_kernel()) 13378 + msleep(5000); 13370 13379 13371 13380 /* An estimated maximum supported CoS number according to the chip 13372 13381 * version.
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_vfpf.c
··· 594 594 bnx2x_vfpf_prep(bp, &req->first_tlv, CHANNEL_TLV_SETUP_Q, sizeof(*req)); 595 595 596 596 /* select tpa mode to request */ 597 - if (!fp->disable_tpa) { 597 + if (fp->mode != TPA_MODE_DISABLED) { 598 598 flags |= VFPF_QUEUE_FLG_TPA; 599 599 flags |= VFPF_QUEUE_FLG_TPA_IPV6; 600 600 if (fp->mode == TPA_MODE_GRO)
+3 -1
drivers/net/ethernet/broadcom/tg3.c
··· 18129 18129 18130 18130 rtnl_lock(); 18131 18131 18132 - tp->pcierr_recovery = true; 18132 + /* We needn't recover from permanent error */ 18133 + if (state == pci_channel_io_frozen) 18134 + tp->pcierr_recovery = true; 18133 18135 18134 18136 /* We probably don't have netdev yet */ 18135 18137 if (!netdev || !netif_running(netdev))
+5 -2
drivers/net/ethernet/cadence/macb.c
··· 707 707 708 708 /* properly align Ethernet header */ 709 709 skb_reserve(skb, NET_IP_ALIGN); 710 + } else { 711 + bp->rx_ring[entry].addr &= ~MACB_BIT(RX_USED); 712 + bp->rx_ring[entry].ctrl = 0; 710 713 } 711 714 } 712 715 ··· 1476 1473 for (i = 0; i < TX_RING_SIZE; i++) { 1477 1474 bp->queues[0].tx_ring[i].addr = 0; 1478 1475 bp->queues[0].tx_ring[i].ctrl = MACB_BIT(TX_USED); 1479 - bp->queues[0].tx_head = 0; 1480 - bp->queues[0].tx_tail = 0; 1481 1476 } 1477 + bp->queues[0].tx_head = 0; 1478 + bp->queues[0].tx_tail = 0; 1482 1479 bp->queues[0].tx_ring[TX_RING_SIZE - 1].ctrl |= MACB_BIT(TX_WRAP); 1483 1480 1484 1481 bp->rx_tail = 0;
+1 -1
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 492 492 memoffset = (mtype * (edc_size * 1024 * 1024)); 493 493 else { 494 494 mc_size = EXT_MEM0_SIZE_G(t4_read_reg(adap, 495 - MA_EXT_MEMORY1_BAR_A)); 495 + MA_EXT_MEMORY0_BAR_A)); 496 496 memoffset = (MEM_MC0 * edc_size + mc_size) * 1024 * 1024; 497 497 } 498 498
+3 -2
drivers/net/ethernet/emulex/benet/be_main.c
··· 4846 4846 } 4847 4847 4848 4848 static int be_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, 4849 - struct net_device *dev, u32 filter_mask) 4849 + struct net_device *dev, u32 filter_mask, 4850 + int nlflags) 4850 4851 { 4851 4852 struct be_adapter *adapter = netdev_priv(dev); 4852 4853 int status = 0; ··· 4869 4868 return ndo_dflt_bridge_getlink(skb, pid, seq, dev, 4870 4869 hsw_mode == PORT_FWD_TYPE_VEPA ? 4871 4870 BRIDGE_MODE_VEPA : BRIDGE_MODE_VEB, 4872 - 0, 0); 4871 + 0, 0, nlflags); 4873 4872 } 4874 4873 4875 4874 #ifdef CONFIG_BE2NET_VXLAN
+4 -1
drivers/net/ethernet/freescale/fec_main.c
··· 988 988 rcntl |= 0x40000000 | 0x00000020; 989 989 990 990 /* RGMII, RMII or MII */ 991 - if (fep->phy_interface == PHY_INTERFACE_MODE_RGMII) 991 + if (fep->phy_interface == PHY_INTERFACE_MODE_RGMII || 992 + fep->phy_interface == PHY_INTERFACE_MODE_RGMII_ID || 993 + fep->phy_interface == PHY_INTERFACE_MODE_RGMII_RXID || 994 + fep->phy_interface == PHY_INTERFACE_MODE_RGMII_TXID) 992 995 rcntl |= (1 << 6); 993 996 else if (fep->phy_interface == PHY_INTERFACE_MODE_RMII) 994 997 rcntl |= (1 << 8);
+4 -2
drivers/net/ethernet/ibm/ehea/ehea_main.c
··· 3347 3347 { 3348 3348 int ret = 0; 3349 3349 3350 - if (atomic_inc_and_test(&ehea_memory_hooks_registered)) 3350 + if (atomic_inc_return(&ehea_memory_hooks_registered) > 1) 3351 3351 return 0; 3352 3352 3353 3353 ret = ehea_create_busmap(); ··· 3381 3381 out2: 3382 3382 unregister_reboot_notifier(&ehea_reboot_nb); 3383 3383 out: 3384 + atomic_dec(&ehea_memory_hooks_registered); 3384 3385 return ret; 3385 3386 } 3386 3387 3387 3388 static void ehea_unregister_memory_hooks(void) 3388 3389 { 3389 - if (atomic_read(&ehea_memory_hooks_registered)) 3390 + /* Only remove the hooks if we've registered them */ 3391 + if (atomic_read(&ehea_memory_hooks_registered) == 0) 3390 3392 return; 3391 3393 3392 3394 unregister_reboot_notifier(&ehea_reboot_nb);
+2 -2
drivers/net/ethernet/ibm/ibmveth.c
··· 1238 1238 return -EINVAL; 1239 1239 1240 1240 for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) 1241 - if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size) 1241 + if (new_mtu_oh <= adapter->rx_buff_pool[i].buff_size) 1242 1242 break; 1243 1243 1244 1244 if (i == IBMVETH_NUM_BUFF_POOLS) ··· 1257 1257 for (i = 0; i < IBMVETH_NUM_BUFF_POOLS; i++) { 1258 1258 adapter->rx_buff_pool[i].active = 1; 1259 1259 1260 - if (new_mtu_oh < adapter->rx_buff_pool[i].buff_size) { 1260 + if (new_mtu_oh <= adapter->rx_buff_pool[i].buff_size) { 1261 1261 dev->mtu = new_mtu; 1262 1262 vio_cmo_set_dev_desired(viodev, 1263 1263 ibmveth_get_desired_dma
+4 -3
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 8053 8053 #ifdef HAVE_BRIDGE_FILTER 8054 8054 static int i40e_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, 8055 8055 struct net_device *dev, 8056 - u32 __always_unused filter_mask) 8056 + u32 __always_unused filter_mask, int nlflags) 8057 8057 #else 8058 8058 static int i40e_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, 8059 - struct net_device *dev) 8059 + struct net_device *dev, int nlflags) 8060 8060 #endif /* HAVE_BRIDGE_FILTER */ 8061 8061 { 8062 8062 struct i40e_netdev_priv *np = netdev_priv(dev); ··· 8078 8078 if (!veb) 8079 8079 return 0; 8080 8080 8081 - return ndo_dflt_bridge_getlink(skb, pid, seq, dev, veb->bridge_mode); 8081 + return ndo_dflt_bridge_getlink(skb, pid, seq, dev, veb->bridge_mode, 8082 + nlflags); 8082 8083 } 8083 8084 #endif /* HAVE_BRIDGE_ATTRIBS */ 8084 8085
+2 -2
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
··· 8044 8044 8045 8045 static int ixgbe_ndo_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, 8046 8046 struct net_device *dev, 8047 - u32 filter_mask) 8047 + u32 filter_mask, int nlflags) 8048 8048 { 8049 8049 struct ixgbe_adapter *adapter = netdev_priv(dev); 8050 8050 ··· 8052 8052 return 0; 8053 8053 8054 8054 return ndo_dflt_bridge_getlink(skb, pid, seq, dev, 8055 - adapter->bridge_mode, 0, 0); 8055 + adapter->bridge_mode, 0, 0, nlflags); 8056 8056 } 8057 8057 8058 8058 static void *ixgbe_fwd_add(struct net_device *pdev, struct net_device *vdev)
+5 -11
drivers/net/ethernet/marvell/pxa168_eth.c
··· 1508 1508 np = of_parse_phandle(pdev->dev.of_node, "phy-handle", 0); 1509 1509 if (!np) { 1510 1510 dev_err(&pdev->dev, "missing phy-handle\n"); 1511 - return -EINVAL; 1511 + err = -EINVAL; 1512 + goto err_netdev; 1512 1513 } 1513 1514 of_property_read_u32(np, "reg", &pep->phy_addr); 1514 1515 pep->phy_intf = of_get_phy_mode(pdev->dev.of_node); ··· 1527 1526 pep->smi_bus = mdiobus_alloc(); 1528 1527 if (pep->smi_bus == NULL) { 1529 1528 err = -ENOMEM; 1530 - goto err_base; 1529 + goto err_netdev; 1531 1530 } 1532 1531 pep->smi_bus->priv = pep; 1533 1532 pep->smi_bus->name = "pxa168_eth smi"; ··· 1552 1551 mdiobus_unregister(pep->smi_bus); 1553 1552 err_free_mdio: 1554 1553 mdiobus_free(pep->smi_bus); 1555 - err_base: 1556 - iounmap(pep->base); 1557 1554 err_netdev: 1558 1555 free_netdev(dev); 1559 1556 err_clk: 1560 - clk_disable(clk); 1561 - clk_put(clk); 1557 + clk_disable_unprepare(clk); 1562 1558 return err; 1563 1559 } 1564 1560 ··· 1572 1574 if (pep->phy) 1573 1575 phy_disconnect(pep->phy); 1574 1576 if (pep->clk) { 1575 - clk_disable(pep->clk); 1576 - clk_put(pep->clk); 1577 - pep->clk = NULL; 1577 + clk_disable_unprepare(pep->clk); 1578 1578 } 1579 1579 1580 - iounmap(pep->base); 1581 - pep->base = NULL; 1582 1580 mdiobus_unregister(pep->smi_bus); 1583 1581 mdiobus_free(pep->smi_bus); 1584 1582 unregister_netdev(dev);
+16 -13
drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
··· 1102 1102 struct mlx4_en_priv *priv = netdev_priv(dev); 1103 1103 1104 1104 /* check if requested function is supported by the device */ 1105 - if ((hfunc == ETH_RSS_HASH_TOP && 1106 - !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP)) || 1107 - (hfunc == ETH_RSS_HASH_XOR && 1108 - !(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR))) 1109 - return -EINVAL; 1105 + if (hfunc == ETH_RSS_HASH_TOP) { 1106 + if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_TOP)) 1107 + return -EINVAL; 1108 + if (!(dev->features & NETIF_F_RXHASH)) 1109 + en_warn(priv, "Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n"); 1110 + return 0; 1111 + } else if (hfunc == ETH_RSS_HASH_XOR) { 1112 + if (!(priv->mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_RSS_XOR)) 1113 + return -EINVAL; 1114 + if (dev->features & NETIF_F_RXHASH) 1115 + en_warn(priv, "Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n"); 1116 + return 0; 1117 + } 1110 1118 1111 - priv->rss_hash_fn = hfunc; 1112 - if (hfunc == ETH_RSS_HASH_TOP && !(dev->features & NETIF_F_RXHASH)) 1113 - en_warn(priv, 1114 - "Toeplitz hash function should be used in conjunction with RX hashing for optimal performance\n"); 1115 - if (hfunc == ETH_RSS_HASH_XOR && (dev->features & NETIF_F_RXHASH)) 1116 - en_warn(priv, 1117 - "Enabling both XOR Hash function and RX Hashing can limit RPS functionality\n"); 1118 - return 0; 1119 + return -EINVAL; 1119 1120 } 1120 1121 1121 1122 static int mlx4_en_get_rxfh(struct net_device *dev, u32 *ring_index, u8 *key, ··· 1190 1189 priv->prof->rss_rings = rss_rings; 1191 1190 if (key) 1192 1191 memcpy(priv->rss_key, key, MLX4_EN_RSS_KEY_SIZE); 1192 + if (hfunc != ETH_RSS_HASH_NO_CHANGE) 1193 + priv->rss_hash_fn = hfunc; 1193 1194 1194 1195 if (port_up) { 1195 1196 err = mlx4_en_start_port(dev);
+2 -1
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 1467 1467 if (mdev->dev->caps.flags2 & MLX4_DEV_CAP_FLAG2_TS) 1468 1468 mlx4_en_ptp_overflow_check(mdev); 1469 1469 1470 + mlx4_en_recover_from_oom(priv); 1470 1471 queue_delayed_work(mdev->workqueue, &priv->service_task, 1471 1472 SERVICE_TASK_DELAY); 1472 1473 } ··· 1722 1721 cq_err: 1723 1722 while (rx_index--) { 1724 1723 mlx4_en_deactivate_cq(priv, priv->rx_cq[rx_index]); 1725 - mlx4_en_free_affinity_hint(priv, i); 1724 + mlx4_en_free_affinity_hint(priv, rx_index); 1726 1725 } 1727 1726 for (i = 0; i < priv->rx_ring_num; i++) 1728 1727 mlx4_en_deactivate_rx_ring(priv, priv->rx_ring[i]);
+24 -2
drivers/net/ethernet/mellanox/mlx4/en_rx.c
··· 244 244 return mlx4_en_alloc_frags(priv, rx_desc, frags, ring->page_alloc, gfp); 245 245 } 246 246 247 + static inline bool mlx4_en_is_ring_empty(struct mlx4_en_rx_ring *ring) 248 + { 249 + BUG_ON((u32)(ring->prod - ring->cons) > ring->actual_size); 250 + return ring->prod == ring->cons; 251 + } 252 + 247 253 static inline void mlx4_en_update_rx_prod_db(struct mlx4_en_rx_ring *ring) 248 254 { 249 255 *ring->wqres.db.db = cpu_to_be32(ring->prod & 0xffff); ··· 321 315 ring->cons, ring->prod); 322 316 323 317 /* Unmap and free Rx buffers */ 324 - BUG_ON((u32) (ring->prod - ring->cons) > ring->actual_size); 325 - while (ring->cons != ring->prod) { 318 + while (!mlx4_en_is_ring_empty(ring)) { 326 319 index = ring->cons & ring->size_mask; 327 320 en_dbg(DRV, priv, "Processing descriptor:%d\n", index); 328 321 mlx4_en_free_rx_desc(priv, ring, index); ··· 494 489 ring_ind--; 495 490 } 496 491 return err; 492 + } 493 + 494 + /* We recover from out of memory by scheduling our napi poll 495 + * function (mlx4_en_process_cq), which tries to allocate 496 + * all missing RX buffers (call to mlx4_en_refill_rx_buffers). 497 + */ 498 + void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv) 499 + { 500 + int ring; 501 + 502 + if (!priv->port_up) 503 + return; 504 + 505 + for (ring = 0; ring < priv->rx_ring_num; ring++) { 506 + if (mlx4_en_is_ring_empty(priv->rx_ring[ring])) 507 + napi_reschedule(&priv->rx_cq[ring]->napi); 508 + } 497 509 } 498 510 499 511 void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv,
+5 -3
drivers/net/ethernet/mellanox/mlx4/en_tx.c
··· 143 143 ring->hwtstamp_tx_type = priv->hwtstamp_config.tx_type; 144 144 ring->queue_index = queue_index; 145 145 146 - if (queue_index < priv->num_tx_rings_p_up && cpu_online(queue_index)) 147 - cpumask_set_cpu(queue_index, &ring->affinity_mask); 146 + if (queue_index < priv->num_tx_rings_p_up) 147 + cpumask_set_cpu_local_first(queue_index, 148 + priv->mdev->dev->numa_node, 149 + &ring->affinity_mask); 148 150 149 151 *pring = ring; 150 152 return 0; ··· 215 213 216 214 err = mlx4_qp_to_ready(mdev->dev, &ring->wqres.mtt, &ring->context, 217 215 &ring->qp, &ring->qp_state); 218 - if (!user_prio && cpu_online(ring->queue_index)) 216 + if (!cpumask_empty(&ring->affinity_mask)) 219 217 netif_set_xps_queue(priv->dev, &ring->affinity_mask, 220 218 ring->queue_index); 221 219
+14 -4
drivers/net/ethernet/mellanox/mlx4/fw.c
··· 56 56 #define MLX4_GET(dest, source, offset) \ 57 57 do { \ 58 58 void *__p = (char *) (source) + (offset); \ 59 + u64 val; \ 59 60 switch (sizeof (dest)) { \ 60 61 case 1: (dest) = *(u8 *) __p; break; \ 61 62 case 2: (dest) = be16_to_cpup(__p); break; \ 62 63 case 4: (dest) = be32_to_cpup(__p); break; \ 63 - case 8: (dest) = be64_to_cpup(__p); break; \ 64 + case 8: val = get_unaligned((u64 *)__p); \ 65 + (dest) = be64_to_cpu(val); break; \ 64 66 default: __buggy_use_of_MLX4_GET(); \ 65 67 } \ 66 68 } while (0) ··· 1607 1605 * swaps each 4-byte word before passing it back to 1608 1606 * us. Therefore we need to swab it before printing. 1609 1607 */ 1610 - for (i = 0; i < 4; ++i) 1611 - ((u32 *) board_id)[i] = 1612 - swab32(*(u32 *) (vsd + VSD_OFFSET_MLX_BOARD_ID + i * 4)); 1608 + u32 *bid_u32 = (u32 *)board_id; 1609 + 1610 + for (i = 0; i < 4; ++i) { 1611 + u32 *addr; 1612 + u32 val; 1613 + 1614 + addr = (u32 *) (vsd + VSD_OFFSET_MLX_BOARD_ID + i * 4); 1615 + val = get_unaligned(addr); 1616 + val = swab32(val); 1617 + put_unaligned(val, &bid_u32[i]); 1618 + } 1613 1619 } 1614 1620 } 1615 1621
+1
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
··· 774 774 void mlx4_en_deactivate_tx_ring(struct mlx4_en_priv *priv, 775 775 struct mlx4_en_tx_ring *ring); 776 776 void mlx4_en_set_num_rx_rings(struct mlx4_en_dev *mdev); 777 + void mlx4_en_recover_from_oom(struct mlx4_en_priv *priv); 777 778 int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv, 778 779 struct mlx4_en_rx_ring **pring, 779 780 u32 size, u16 stride, int node);
+9 -29
drivers/net/ethernet/myricom/myri10ge/myri10ge.c
··· 69 69 #include <net/ip.h> 70 70 #include <net/tcp.h> 71 71 #include <asm/byteorder.h> 72 - #include <asm/io.h> 73 72 #include <asm/processor.h> 74 - #ifdef CONFIG_MTRR 75 - #include <asm/mtrr.h> 76 - #endif 77 73 #include <net/busy_poll.h> 78 74 79 75 #include "myri10ge_mcp.h" ··· 238 242 unsigned int rdma_tags_available; 239 243 int intr_coal_delay; 240 244 __be32 __iomem *intr_coal_delay_ptr; 241 - int mtrr; 242 - int wc_enabled; 245 + int wc_cookie; 243 246 int down_cnt; 244 247 wait_queue_head_t down_wq; 245 248 struct work_struct watchdog_work; ··· 1900 1905 "tx_aborted_errors", "tx_carrier_errors", "tx_fifo_errors", 1901 1906 "tx_heartbeat_errors", "tx_window_errors", 1902 1907 /* device-specific stats */ 1903 - "tx_boundary", "WC", "irq", "MSI", "MSIX", 1908 + "tx_boundary", "irq", "MSI", "MSIX", 1904 1909 "read_dma_bw_MBs", "write_dma_bw_MBs", "read_write_dma_bw_MBs", 1905 1910 "serial_number", "watchdog_resets", 1906 1911 #ifdef CONFIG_MYRI10GE_DCA ··· 1979 1984 data[i] = ((u64 *)&link_stats)[i]; 1980 1985 1981 1986 data[i++] = (unsigned int)mgp->tx_boundary; 1982 - data[i++] = (unsigned int)mgp->wc_enabled; 1983 1987 data[i++] = (unsigned int)mgp->pdev->irq; 1984 1988 data[i++] = (unsigned int)mgp->msi_enabled; 1985 1989 data[i++] = (unsigned int)mgp->msix_enabled; ··· 4034 4040 4035 4041 mgp->board_span = pci_resource_len(pdev, 0); 4036 4042 mgp->iomem_base = pci_resource_start(pdev, 0); 4037 - mgp->mtrr = -1; 4038 - mgp->wc_enabled = 0; 4039 - #ifdef CONFIG_MTRR 4040 - mgp->mtrr = mtrr_add(mgp->iomem_base, mgp->board_span, 4041 - MTRR_TYPE_WRCOMB, 1); 4042 - if (mgp->mtrr >= 0) 4043 - mgp->wc_enabled = 1; 4044 - #endif 4043 + mgp->wc_cookie = arch_phys_wc_add(mgp->iomem_base, mgp->board_span); 4045 4044 mgp->sram = ioremap_wc(mgp->iomem_base, mgp->board_span); 4046 4045 if (mgp->sram == NULL) { 4047 4046 dev_err(&pdev->dev, "ioremap failed for %ld bytes at 0x%lx\n", ··· 4133 4146 goto abort_with_state; 4134 4147 } 4135 4148 if (mgp->msix_enabled) 4136 - dev_info(dev, "%d MSI-X IRQs, tx bndry %d, fw %s, WC %s\n", 4149 + dev_info(dev, "%d MSI-X IRQs, tx bndry %d, fw %s, MTRR %s, WC Enabled\n", 4137 4150 mgp->num_slices, mgp->tx_boundary, mgp->fw_name, 4138 - (mgp->wc_enabled ? "Enabled" : "Disabled")); 4151 + (mgp->wc_cookie > 0 ? "Enabled" : "Disabled")); 4139 4152 else 4140 - dev_info(dev, "%s IRQ %d, tx bndry %d, fw %s, WC %s\n", 4153 + dev_info(dev, "%s IRQ %d, tx bndry %d, fw %s, MTRR %s, WC Enabled\n", 4141 4154 mgp->msi_enabled ? "MSI" : "xPIC", 4142 4155 pdev->irq, mgp->tx_boundary, mgp->fw_name, 4143 - (mgp->wc_enabled ? "Enabled" : "Disabled")); 4156 + (mgp->wc_cookie > 0 ? "Enabled" : "Disabled")); 4144 4157 4145 4158 board_number++; 4146 4159 return 0; ··· 4162 4175 iounmap(mgp->sram); 4163 4176 4164 4177 abort_with_mtrr: 4165 - #ifdef CONFIG_MTRR 4166 - if (mgp->mtrr >= 0) 4167 - mtrr_del(mgp->mtrr, mgp->iomem_base, mgp->board_span); 4168 - #endif 4178 + arch_phys_wc_del(mgp->wc_cookie); 4169 4179 dma_free_coherent(&pdev->dev, sizeof(*mgp->cmd), 4170 4180 mgp->cmd, mgp->cmd_bus); ··· 4204 4220 pci_restore_state(pdev); 4205 4221 4206 4222 iounmap(mgp->sram); 4207 - 4208 - #ifdef CONFIG_MTRR 4209 - if (mgp->mtrr >= 0) 4210 - mtrr_del(mgp->mtrr, mgp->iomem_base, mgp->board_span); 4211 - #endif 4223 + arch_phys_wc_del(mgp->wc_cookie); 4212 4224 myri10ge_free_slices(mgp); 4213 4225 kfree(mgp->msix_vectors); 4214 4226 dma_free_coherent(&pdev->dev, sizeof(*mgp->cmd),
+2 -2
drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c
··· 135 135 int i, j; 136 136 struct nx_host_tx_ring *tx_ring = adapter->tx_ring; 137 137 138 - spin_lock(&adapter->tx_clean_lock); 138 + spin_lock_bh(&adapter->tx_clean_lock); 139 139 cmd_buf = tx_ring->cmd_buf_arr; 140 140 for (i = 0; i < tx_ring->num_desc; i++) { 141 141 buffrag = cmd_buf->frag_array; ··· 159 159 } 160 160 cmd_buf++; 161 161 } 162 - spin_unlock(&adapter->tx_clean_lock); 162 + spin_unlock_bh(&adapter->tx_clean_lock); 163 163 } 164 164 165 165 void netxen_free_sw_resources(struct netxen_adapter *adapter)
+3 -2
drivers/net/ethernet/rocker/rocker.c
··· 4176 4176 4177 4177 static int rocker_port_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, 4178 4178 struct net_device *dev, 4179 - u32 filter_mask) 4179 + u32 filter_mask, int nlflags) 4180 4180 { 4181 4181 struct rocker_port *rocker_port = netdev_priv(dev); 4182 4182 u16 mode = BRIDGE_MODE_UNDEF; 4183 4183 u32 mask = BR_LEARNING | BR_LEARNING_SYNC; 4184 4184 4185 4185 return ndo_dflt_bridge_getlink(skb, pid, seq, dev, mode, 4186 - rocker_port->brport_flags, mask); 4186 + rocker_port->brport_flags, mask, 4187 + nlflags); 4187 4188 } 4188 4189 4189 4190 static int rocker_port_get_phys_port_name(struct net_device *dev,
+6 -2
drivers/net/ethernet/ti/netcp_ethss.c
··· 1765 1765 ALE_PORT_STATE, 1766 1766 ALE_PORT_STATE_FORWARD); 1767 1767 1768 - if (ndev && slave->open) 1768 + if (ndev && slave->open && 1769 + slave->link_interface != SGMII_LINK_MAC_PHY && 1770 + slave->link_interface != XGMII_LINK_MAC_PHY) 1769 1771 netif_carrier_on(ndev); 1770 1772 } else { 1771 1773 writel(mac_control, GBE_REG_ADDR(slave, emac_regs, ··· 1775 1773 cpsw_ale_control_set(gbe_dev->ale, slave->port_num, 1776 1774 ALE_PORT_STATE, 1777 1775 ALE_PORT_STATE_DISABLE); 1778 - if (ndev) 1776 + if (ndev && 1777 + slave->link_interface != SGMII_LINK_MAC_PHY && 1778 + slave->link_interface != XGMII_LINK_MAC_PHY) 1779 1779 netif_carrier_off(ndev); 1780 1780 } 1781 1781
+12 -1
drivers/net/hyperv/hyperv_net.h
··· 128 128 struct hv_netvsc_packet { 129 129 /* Bookkeeping stuff */ 130 130 u32 status; 131 - bool part_of_skb; 132 131 133 132 bool is_data_pkt; 134 133 bool xmit_more; /* from skb */ ··· 611 612 u32 count; /* counter of batched packets */ 612 613 }; 613 614 615 + /* The context of the netvsc device */ 616 + struct net_device_context { 617 + /* point back to our device context */ 618 + struct hv_device *device_ctx; 619 + struct delayed_work dwork; 620 + struct work_struct work; 621 + u32 msg_enable; /* debug level */ 622 + }; 623 + 614 624 /* Per netvsc device */ 615 625 struct netvsc_device { 616 626 struct hv_device *dev; ··· 675 667 struct multi_send_data msd[NR_CPUS]; 676 668 u32 max_pkt; /* max number of pkt in one send, e.g. 8 */ 677 669 u32 pkt_align; /* alignment bytes, e.g. 8 */ 670 + 671 + /* The net device context */ 672 + struct net_device_context *nd_ctx; 678 673 }; 679 674 680 675 /* NdisInitialize message */
+3 -5
drivers/net/hyperv/netvsc.c
··· 889 889 } else { 890 890 packet->page_buf_cnt = 0; 891 891 packet->total_data_buflen += msd_len; 892 - if (!packet->part_of_skb) { 893 - skb = (struct sk_buff *)(unsigned long)packet-> 894 - send_completion_tid; 895 - packet->send_completion_tid = 0; 896 - } 897 892 } 898 893 899 894 if (msdp->pkt) ··· 1191 1196 * in struct netvsc_device *. 1192 1197 */ 1193 1198 ndev = net_device->ndev; 1199 + 1200 + /* Add netvsc_device context to netvsc_device */ 1201 + net_device->nd_ctx = netdev_priv(ndev); 1194 1202 1195 1203 /* Initialize the NetVSC channel extension */ 1196 1204 init_completion(&net_device->channel_init_wait);
+21 -26
drivers/net/hyperv/netvsc_drv.c
··· 40 40 41 41 #include "hyperv_net.h" 42 42 43 - struct net_device_context { 44 - /* point back to our device context */ 45 - struct hv_device *device_ctx; 46 - struct delayed_work dwork; 47 - struct work_struct work; 48 - }; 49 43 50 44 #define RING_SIZE_MIN 64 51 45 static int ring_size = 128; 52 46 module_param(ring_size, int, S_IRUGO); 53 47 MODULE_PARM_DESC(ring_size, "Ring buffer size (# of pages)"); 48 + 49 + static const u32 default_msg = NETIF_MSG_DRV | NETIF_MSG_PROBE | 50 + NETIF_MSG_LINK | NETIF_MSG_IFUP | 51 + NETIF_MSG_IFDOWN | NETIF_MSG_RX_ERR | 52 + NETIF_MSG_TX_ERR; 53 + 54 + static int debug = -1; 55 + module_param(debug, int, S_IRUGO); 56 + MODULE_PARM_DESC(debug, "Debug level (0=none,...,16=all)"); 54 57 55 58 static void do_set_multicast(struct work_struct *w) 56 59 { ··· 238 235 struct sk_buff *skb = (struct sk_buff *) 239 236 (unsigned long)packet->send_completion_tid; 240 237 241 - if (!packet->part_of_skb) 242 - kfree(packet); 243 - 244 238 if (skb) 245 239 dev_kfree_skb_any(skb); 246 240 } ··· 389 389 u32 net_trans_info; 390 390 u32 hash; 391 391 u32 skb_length; 392 - u32 head_room; 393 392 u32 pkt_sz; 394 393 struct hv_page_buffer page_buf[MAX_PAGE_BUFFER_COUNT]; 395 394 ··· 401 402 402 403 check_size: 403 404 skb_length = skb->len; 404 - head_room = skb_headroom(skb); 405 405 num_data_pgs = netvsc_get_slots(skb) + 2; 406 406 if (num_data_pgs > MAX_PAGE_BUFFER_COUNT && linear) { 407 407 net_alert_ratelimited("packet too big: %u pages (%u bytes)\n", ··· 419 421 420 422 pkt_sz = sizeof(struct hv_netvsc_packet) + RNDIS_AND_PPI_SIZE; 421 423 422 - if (head_room < pkt_sz) { 423 - packet = kmalloc(pkt_sz, GFP_ATOMIC); 424 - if (!packet) { 425 - /* out of memory, drop packet */ 426 - netdev_err(net, "unable to alloc hv_netvsc_packet\n"); 427 - ret = -ENOMEM; 428 - goto drop; 429 - } 430 - packet->part_of_skb = false; 431 - } else { 432 - /* Use the headroom for building up the packet */ 433 - packet = (struct hv_netvsc_packet *)skb->head; 434 - packet->part_of_skb = true;
424 + ret = skb_cow_head(skb, pkt_sz); 425 + if (ret) { 426 + netdev_err(net, "unable to alloc hv_netvsc_packet\n"); 427 + ret = -ENOMEM; 428 + goto drop; 429 + } 435 429 } 430 + /* Use the headroom for building up the packet */ 431 + packet = (struct hv_netvsc_packet *)skb->head; 436 432 437 433 packet->status = 0; 438 434 packet->xmit_more = skb->xmit_more; ··· 583 591 net->stats.tx_bytes += skb_length; 584 592 net->stats.tx_packets++; 585 593 } else { 586 - if (packet && !packet->part_of_skb) 587 - kfree(packet); 588 594 if (ret != -EAGAIN) { 589 595 dev_kfree_skb_any(skb); 590 596 net->stats.tx_dropped++; ··· 878 888 879 889 net_device_ctx = netdev_priv(net); 880 890 net_device_ctx->device_ctx = dev; 891 + net_device_ctx->msg_enable = netif_msg_init(debug, default_msg); 892 + if (netif_msg_probe(net_device_ctx)) 893 + netdev_dbg(net, "netvsc msg_enable: %d\n", 894 + net_device_ctx->msg_enable); 895 + 881 896 hv_set_drvdata(dev, net); 882 897 INIT_DELAYED_WORK(&net_device_ctx->dwork, netvsc_link_change); 883 898 INIT_WORK(&net_device_ctx->work, do_set_multicast);
+2 -1
drivers/net/hyperv/rndis_filter.c
··· 429 429 430 430 rndis_msg = pkt->data; 431 431 432 - dump_rndis_message(dev, rndis_msg); 432 + if (netif_msg_rx_err(net_dev->nd_ctx)) 433 + dump_rndis_message(dev, rndis_msg); 433 434 434 435 switch (rndis_msg->ndis_msg_type) { 435 436 case RNDIS_MSG_PACKET:
+9 -5
drivers/net/phy/mdio-gpio.c
··· 80 80 * assume the pin serves as pull-up. If direction is 81 81 * output, the default value is high. 82 82 */ 83 - gpio_set_value(bitbang->mdo, 1 ^ bitbang->mdo_active_low); 83 + gpio_set_value_cansleep(bitbang->mdo, 84 + 1 ^ bitbang->mdo_active_low); 84 85 return; 85 86 } 86 87 ··· 97 96 struct mdio_gpio_info *bitbang = 98 97 container_of(ctrl, struct mdio_gpio_info, ctrl); 99 98 100 - return gpio_get_value(bitbang->mdio) ^ bitbang->mdio_active_low; 99 + return gpio_get_value_cansleep(bitbang->mdio) ^ 100 + bitbang->mdio_active_low; 101 101 } 102 102 103 103 static void mdio_set(struct mdiobb_ctrl *ctrl, int what) ··· 107 105 container_of(ctrl, struct mdio_gpio_info, ctrl); 108 106 109 107 if (bitbang->mdo) 110 - gpio_set_value(bitbang->mdo, what ^ bitbang->mdo_active_low); 108 + gpio_set_value_cansleep(bitbang->mdo, 109 + what ^ bitbang->mdo_active_low); 111 110 else 112 - gpio_set_value(bitbang->mdio, what ^ bitbang->mdio_active_low); 111 + gpio_set_value_cansleep(bitbang->mdio, 112 + what ^ bitbang->mdio_active_low); 113 113 } 114 114 115 115 static void mdc_set(struct mdiobb_ctrl *ctrl, int what) ··· 119 115 struct mdio_gpio_info *bitbang = 120 116 container_of(ctrl, struct mdio_gpio_info, ctrl); 121 117 122 - gpio_set_value(bitbang->mdc, what ^ bitbang->mdc_active_low); 118 + gpio_set_value_cansleep(bitbang->mdc, what ^ bitbang->mdc_active_low); 123 119 } 124 120 125 121 static struct mdiobb_ops mdio_gpio_ops = {
+17 -43
drivers/net/phy/mdio-mux-gpio.c
··· 12 12 #include <linux/module.h> 13 13 #include <linux/phy.h> 14 14 #include <linux/mdio-mux.h> 15 - #include <linux/of_gpio.h> 15 + #include <linux/gpio/consumer.h> 16 16 17 17 #define DRV_VERSION "1.1" 18 18 #define DRV_DESCRIPTION "GPIO controlled MDIO bus multiplexer driver" 19 19 20 - #define MDIO_MUX_GPIO_MAX_BITS 8 21 - 22 20 struct mdio_mux_gpio_state { 23 - struct gpio_desc *gpio[MDIO_MUX_GPIO_MAX_BITS]; 24 - unsigned int num_gpios; 21 + struct gpio_descs *gpios; 25 22 void *mux_handle; 26 23 }; 27 24 28 25 static int mdio_mux_gpio_switch_fn(int current_child, int desired_child, 29 26 void *data) 30 27 { 31 - int values[MDIO_MUX_GPIO_MAX_BITS]; 32 - unsigned int n; 33 28 struct mdio_mux_gpio_state *s = data; 29 + int values[s->gpios->ndescs]; 30 + unsigned int n; 34 31 35 32 if (current_child == desired_child) 36 33 return 0; 37 34 38 - for (n = 0; n < s->num_gpios; n++) { 35 + for (n = 0; n < s->gpios->ndescs; n++) 39 36 values[n] = (desired_child >> n) & 1; 40 - } 41 - gpiod_set_array_cansleep(s->num_gpios, s->gpio, values); 37 + 38 + gpiod_set_array_cansleep(s->gpios->ndescs, s->gpios->desc, values); 42 39 43 40 return 0; 44 41 } ··· 43 46 static int mdio_mux_gpio_probe(struct platform_device *pdev) 44 47 { 45 48 struct mdio_mux_gpio_state *s; 46 - int num_gpios; 47 - unsigned int n; 48 49 int r; 49 - 50 - if (!pdev->dev.of_node) 51 - return -ENODEV; 52 - 53 - num_gpios = of_gpio_count(pdev->dev.of_node); 54 - if (num_gpios <= 0 || num_gpios > MDIO_MUX_GPIO_MAX_BITS) 55 - return -ENODEV; 56 50 57 51 s = devm_kzalloc(&pdev->dev, sizeof(*s), GFP_KERNEL); 58 52 if (!s) 59 53 return -ENOMEM; 60 54 61 - s->num_gpios = num_gpios; 62 - 63 - for (n = 0; n < num_gpios; ) { 64 - struct gpio_desc *gpio = gpiod_get_index(&pdev->dev, NULL, n, 65 - GPIOD_OUT_LOW); 66 - if (IS_ERR(gpio)) { 67 - r = PTR_ERR(gpio); 68 - goto err; 69 - } 70 - s->gpio[n] = gpio; 71 - n++; 72 - } 55 + s->gpios = gpiod_get_array(&pdev->dev, NULL, GPIOD_OUT_LOW); 56 + if (IS_ERR(s->gpios))
57 + return PTR_ERR(s->gpios); 73 58 74 59 r = mdio_mux_init(&pdev->dev, 75 60 mdio_mux_gpio_switch_fn, &s->mux_handle, s); 76 61 77 - if (r == 0) { 78 - pdev->dev.platform_data = s; 79 - return 0; 62 + if (r != 0) { 63 + gpiod_put_array(s->gpios); 64 + return r; 80 65 } 81 - err: 82 - while (n) { 83 - n--; 84 - gpiod_put(s->gpio[n]); 85 - } 86 - return r; 66 + 67 + pdev->dev.platform_data = s; 68 + return 0; 87 69 } 88 70 89 71 static int mdio_mux_gpio_remove(struct platform_device *pdev) 90 72 { 91 - unsigned int n; 92 73 struct mdio_mux_gpio_state *s = dev_get_platdata(&pdev->dev); 93 74 mdio_mux_uninit(s->mux_handle); 94 - for (n = 0; n < s->num_gpios; n++) 95 - gpiod_put(s->gpio[n]); 75 + gpiod_put_array(s->gpios); 96 76 return 0; 97 77 } 98 78
+20 -16
drivers/net/ppp/ppp_mppe.c
··· 478 478 struct blkcipher_desc desc = { .tfm = state->arc4 }; 479 479 unsigned ccount; 480 480 int flushed = MPPE_BITS(ibuf) & MPPE_BIT_FLUSHED; 481 - int sanity = 0; 482 481 struct scatterlist sg_in[1], sg_out[1]; 483 482 484 483 if (isize <= PPP_HDRLEN + MPPE_OVHD) { ··· 513 514 "mppe_decompress[%d]: ENCRYPTED bit not set!\n", 514 515 state->unit); 515 516 state->sanity_errors += 100; 516 - sanity = 1; 517 + goto sanity_error; 517 518 } 518 519 if (!state->stateful && !flushed) { 519 520 printk(KERN_DEBUG "mppe_decompress[%d]: FLUSHED bit not set in " 520 521 "stateless mode!\n", state->unit); 521 522 state->sanity_errors += 100; 522 - sanity = 1; 523 + goto sanity_error; 523 524 } 524 525 if (state->stateful && ((ccount & 0xff) == 0xff) && !flushed) { 525 526 printk(KERN_DEBUG "mppe_decompress[%d]: FLUSHED bit not set on " 526 527 "flag packet!\n", state->unit); 527 528 state->sanity_errors += 100; 528 - sanity = 1; 529 - } 530 - 531 - if (sanity) { 532 - if (state->sanity_errors < SANITY_MAX) 533 - return DECOMP_ERROR; 534 - else 535 - /* 536 - * Take LCP down if the peer is sending too many bogons. 537 - * We don't want to do this for a single or just a few 538 - * instances since it could just be due to packet corruption. 539 - */ 540 - return DECOMP_FATALERROR; 529 + goto sanity_error; 541 530 } 542 531 543 532 /* ··· 533 546 */ 534 547 535 548 if (!state->stateful) { 549 + /* Discard late packet */ 550 + if ((ccount - state->ccount) % MPPE_CCOUNT_SPACE 551 + > MPPE_CCOUNT_SPACE / 2) { 552 + state->sanity_errors++; 553 + goto sanity_error; 554 + } 555 + 536 556 /* RFC 3078, sec 8.1. Rekey for every packet. */ 537 557 while (state->ccount != ccount) { 538 558 mppe_rekey(state, 0); ··· 643 649 state->sanity_errors >>= 1; 644 650 645 651 return osize; 652 + 653 + sanity_error: 654 + if (state->sanity_errors < SANITY_MAX) 655 + return DECOMP_ERROR; 656 + else 657 + /* Take LCP down if the peer is sending too many bogons. 
658 + * We don't want to do this for a single or just a few 659 + * instances since it could just be due to packet corruption. 660 + */ 661 + return DECOMP_FATALERROR; 646 662 } 647 663 648 664 /*
+1 -5
drivers/net/vxlan.c
··· 730 730 /* Only change unicasts */ 731 731 if (!(is_multicast_ether_addr(f->eth_addr) || 732 732 is_zero_ether_addr(f->eth_addr))) { 733 - int rc = vxlan_fdb_replace(f, ip, port, vni, 733 + notify |= vxlan_fdb_replace(f, ip, port, vni, 734 734 ifindex); 735 - 736 - if (rc < 0) 737 - return rc; 738 - notify |= rc; 739 735 } else 740 736 return -EOPNOTSUPP; 741 737 }
+4 -6
drivers/pinctrl/core.c
··· 1110 1110 EXPORT_SYMBOL_GPL(devm_pinctrl_put); 1111 1111 1112 1112 int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps, 1113 - bool dup, bool locked) 1113 + bool dup) 1114 1114 { 1115 1115 int i, ret; 1116 1116 struct pinctrl_maps *maps_node; ··· 1178 1178 maps_node->maps = maps; 1179 1179 } 1180 1180 1181 - if (!locked) 1182 - mutex_lock(&pinctrl_maps_mutex); 1181 + mutex_lock(&pinctrl_maps_mutex); 1183 1182 list_add_tail(&maps_node->node, &pinctrl_maps); 1184 - if (!locked) 1185 - mutex_unlock(&pinctrl_maps_mutex); 1183 + mutex_unlock(&pinctrl_maps_mutex); 1186 1184 1187 1185 return 0; 1188 1186 } ··· 1195 1197 int pinctrl_register_mappings(struct pinctrl_map const *maps, 1196 1198 unsigned num_maps) 1197 1199 { 1198 - return pinctrl_register_map(maps, num_maps, true, false); 1200 + return pinctrl_register_map(maps, num_maps, true); 1199 1201 } 1200 1202 1201 1203 void pinctrl_unregister_map(struct pinctrl_map const *map)
+1 -1
drivers/pinctrl/core.h
··· 183 183 } 184 184 185 185 int pinctrl_register_map(struct pinctrl_map const *maps, unsigned num_maps, 186 - bool dup, bool locked); 186 + bool dup); 187 187 void pinctrl_unregister_map(struct pinctrl_map const *map); 188 188 189 189 extern int pinctrl_force_sleep(struct pinctrl_dev *pctldev);
+1 -1
drivers/pinctrl/devicetree.c
··· 92 92 dt_map->num_maps = num_maps; 93 93 list_add_tail(&dt_map->node, &p->dt_maps); 94 94 95 - return pinctrl_register_map(map, num_maps, false, true); 95 + return pinctrl_register_map(map, num_maps, false); 96 96 } 97 97 98 98 struct pinctrl_dev *of_pinctrl_get(struct device_node *np)
+2
drivers/pinctrl/mediatek/pinctrl-mtk-common.c
··· 881 881 if (!mtk_eint_get_mask(pctl, eint_num)) { 882 882 mtk_eint_mask(d); 883 883 unmask = 1; 884 + } else { 885 + unmask = 0; 884 886 } 885 887 886 888 clr_bit = 0xff << eint_offset;
+1 -1
drivers/pinctrl/mvebu/pinctrl-armada-370.c
··· 364 364 MPP_FUNCTION(0x5, "audio", "mclk"), 365 365 MPP_FUNCTION(0x6, "uart0", "cts")), 366 366 MPP_MODE(63, 367 - MPP_FUNCTION(0x0, "gpo", NULL), 367 + MPP_FUNCTION(0x0, "gpio", NULL), 368 368 MPP_FUNCTION(0x1, "spi0", "sck"), 369 369 MPP_FUNCTION(0x2, "tclk", NULL)), 370 370 MPP_MODE(64,
+8 -6
drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
··· 260 260 val = 1; 261 261 } 262 262 263 + val = val << PMIC_GPIO_REG_MODE_DIR_SHIFT; 263 264 val |= pad->function << PMIC_GPIO_REG_MODE_FUNCTION_SHIFT; 264 265 val |= pad->out_value & PMIC_GPIO_REG_MODE_VALUE_SHIFT; 265 266 ··· 418 417 return ret; 419 418 420 419 val = pad->buffer_type << PMIC_GPIO_REG_OUT_TYPE_SHIFT; 421 - val = pad->strength << PMIC_GPIO_REG_OUT_STRENGTH_SHIFT; 420 + val |= pad->strength << PMIC_GPIO_REG_OUT_STRENGTH_SHIFT; 422 421 423 422 ret = pmic_gpio_write(state, pad, PMIC_GPIO_REG_DIG_OUT_CTL, val); 424 423 if (ret < 0) ··· 467 466 seq_puts(s, " ---"); 468 467 } else { 469 468 470 - if (!pad->input_enabled) { 469 + if (pad->input_enabled) { 471 470 ret = pmic_gpio_read(state, pad, PMIC_MPP_REG_RT_STS); 472 - if (!ret) { 473 - ret &= PMIC_MPP_REG_RT_STS_VAL_MASK; 474 - pad->out_value = ret; 475 - } 471 + if (ret < 0) 472 + return; 473 + 474 + ret &= PMIC_MPP_REG_RT_STS_VAL_MASK; 475 + pad->out_value = ret; 476 476 } 477 477 478 478 seq_printf(s, " %-4s", pad->output_enabled ? "out" : "in");
+6 -4
drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
··· 370 370 } 371 371 } 372 372 373 + val = val << PMIC_MPP_REG_MODE_DIR_SHIFT; 373 374 val |= pad->function << PMIC_MPP_REG_MODE_FUNCTION_SHIFT; 374 375 val |= pad->out_value & PMIC_MPP_REG_MODE_VALUE_MASK; 375 376 ··· 577 576 578 577 if (pad->input_enabled) { 579 578 ret = pmic_mpp_read(state, pad, PMIC_MPP_REG_RT_STS); 580 - if (!ret) { 581 - ret &= PMIC_MPP_REG_RT_STS_VAL_MASK; 582 - pad->out_value = ret; 583 - } 579 + if (ret < 0) 580 + return; 581 + 582 + ret &= PMIC_MPP_REG_RT_STS_VAL_MASK; 583 + pad->out_value = ret; 584 584 } 585 585 586 586 seq_printf(s, " %-4s", pad->output_enabled ? "out" : "in");
+7
drivers/platform/x86/ideapad-laptop.c
··· 830 830 */ 831 831 static const struct dmi_system_id no_hw_rfkill_list[] = { 832 832 { 833 + .ident = "Lenovo G40-30", 834 + .matches = { 835 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 836 + DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo G40-30"), 837 + }, 838 + }, 839 + { 833 840 .ident = "Lenovo Yoga 2 11 / 13 / Pro", 834 841 .matches = { 835 842 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+1 -1
drivers/platform/x86/thinkpad_acpi.c
··· 2115 2115 return 0; 2116 2116 } 2117 2117 2118 - void static hotkey_mask_warn_incomplete_mask(void) 2118 + static void hotkey_mask_warn_incomplete_mask(void) 2119 2119 { 2120 2120 /* log only what the user can fix... */ 2121 2121 const u32 wantedmask = hotkey_driver_mask &
+10
drivers/rtc/Kconfig
··· 164 164 This driver can also be built as a module. If so, the module 165 165 will be called rtc-ab-b5ze-s3. 166 166 167 + config RTC_DRV_ABX80X 168 + tristate "Abracon ABx80x" 169 + help 170 + If you say yes here you get support for Abracon AB080X and AB180X 171 + families of ultra-low-power battery- and capacitor-backed real-time 172 + clock chips. 173 + 174 + This driver can also be built as a module. If so, the module 175 + will be called rtc-abx80x. 176 + 167 177 config RTC_DRV_AS3722 168 178 tristate "ams AS3722 RTC driver" 169 179 depends on MFD_AS3722
+1
drivers/rtc/Makefile
··· 25 25 obj-$(CONFIG_RTC_DRV_AB3100) += rtc-ab3100.o 26 26 obj-$(CONFIG_RTC_DRV_AB8500) += rtc-ab8500.o 27 27 obj-$(CONFIG_RTC_DRV_ABB5ZES3) += rtc-ab-b5ze-s3.o 28 + obj-$(CONFIG_RTC_DRV_ABX80X) += rtc-abx80x.o 28 29 obj-$(CONFIG_RTC_DRV_ARMADA38X) += rtc-armada38x.o 29 30 obj-$(CONFIG_RTC_DRV_AS3722) += rtc-as3722.o 30 31 obj-$(CONFIG_RTC_DRV_AT32AP700X)+= rtc-at32ap700x.o
+307
drivers/rtc/rtc-abx80x.c
··· 1 + /* 2 + * A driver for the I2C members of the Abracon AB x8xx RTC family, 3 + * and compatible: AB 1805 and AB 0805 4 + * 5 + * Copyright 2014-2015 Macq S.A. 6 + * 7 + * Author: Philippe De Muyter <phdm@macqel.be> 8 + * Author: Alexandre Belloni <alexandre.belloni@free-electrons.com> 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + * 14 + */ 15 + 16 + #include <linux/bcd.h> 17 + #include <linux/i2c.h> 18 + #include <linux/module.h> 19 + #include <linux/rtc.h> 20 + 21 + #define ABX8XX_REG_HTH 0x00 22 + #define ABX8XX_REG_SC 0x01 23 + #define ABX8XX_REG_MN 0x02 24 + #define ABX8XX_REG_HR 0x03 25 + #define ABX8XX_REG_DA 0x04 26 + #define ABX8XX_REG_MO 0x05 27 + #define ABX8XX_REG_YR 0x06 28 + #define ABX8XX_REG_WD 0x07 29 + 30 + #define ABX8XX_REG_CTRL1 0x10 31 + #define ABX8XX_CTRL_WRITE BIT(1) 32 + #define ABX8XX_CTRL_12_24 BIT(6) 33 + 34 + #define ABX8XX_REG_CFG_KEY 0x1f 35 + #define ABX8XX_CFG_KEY_MISC 0x9d 36 + 37 + #define ABX8XX_REG_ID0 0x28 38 + 39 + #define ABX8XX_REG_TRICKLE 0x20 40 + #define ABX8XX_TRICKLE_CHARGE_ENABLE 0xa0 41 + #define ABX8XX_TRICKLE_STANDARD_DIODE 0x8 42 + #define ABX8XX_TRICKLE_SCHOTTKY_DIODE 0x4 43 + 44 + static u8 trickle_resistors[] = {0, 3, 6, 11}; 45 + 46 + enum abx80x_chip {AB0801, AB0803, AB0804, AB0805, 47 + AB1801, AB1803, AB1804, AB1805, ABX80X}; 48 + 49 + struct abx80x_cap { 50 + u16 pn; 51 + bool has_tc; 52 + }; 53 + 54 + static struct abx80x_cap abx80x_caps[] = { 55 + [AB0801] = {.pn = 0x0801}, 56 + [AB0803] = {.pn = 0x0803}, 57 + [AB0804] = {.pn = 0x0804, .has_tc = true}, 58 + [AB0805] = {.pn = 0x0805, .has_tc = true}, 59 + [AB1801] = {.pn = 0x1801}, 60 + [AB1803] = {.pn = 0x1803}, 61 + [AB1804] = {.pn = 0x1804, .has_tc = true}, 62 + [AB1805] = {.pn = 0x1805, .has_tc = true}, 63 + [ABX80X] = {.pn = 0} 64 + }; 65 + 66 + static struct i2c_driver abx80x_driver; 
67 + 68 + static int abx80x_enable_trickle_charger(struct i2c_client *client, 69 + u8 trickle_cfg) 70 + { 71 + int err; 72 + 73 + /* 74 + * Write the configuration key register to enable access to the Trickle 75 + * register 76 + */ 77 + err = i2c_smbus_write_byte_data(client, ABX8XX_REG_CFG_KEY, 78 + ABX8XX_CFG_KEY_MISC); 79 + if (err < 0) { 80 + dev_err(&client->dev, "Unable to write configuration key\n"); 81 + return -EIO; 82 + } 83 + 84 + err = i2c_smbus_write_byte_data(client, ABX8XX_REG_TRICKLE, 85 + ABX8XX_TRICKLE_CHARGE_ENABLE | 86 + trickle_cfg); 87 + if (err < 0) { 88 + dev_err(&client->dev, "Unable to write trickle register\n"); 89 + return -EIO; 90 + } 91 + 92 + return 0; 93 + } 94 + 95 + static int abx80x_rtc_read_time(struct device *dev, struct rtc_time *tm) 96 + { 97 + struct i2c_client *client = to_i2c_client(dev); 98 + unsigned char buf[8]; 99 + int err; 100 + 101 + err = i2c_smbus_read_i2c_block_data(client, ABX8XX_REG_HTH, 102 + sizeof(buf), buf); 103 + if (err < 0) { 104 + dev_err(&client->dev, "Unable to read date\n"); 105 + return -EIO; 106 + } 107 + 108 + tm->tm_sec = bcd2bin(buf[ABX8XX_REG_SC] & 0x7F); 109 + tm->tm_min = bcd2bin(buf[ABX8XX_REG_MN] & 0x7F); 110 + tm->tm_hour = bcd2bin(buf[ABX8XX_REG_HR] & 0x3F); 111 + tm->tm_wday = buf[ABX8XX_REG_WD] & 0x7; 112 + tm->tm_mday = bcd2bin(buf[ABX8XX_REG_DA] & 0x3F); 113 + tm->tm_mon = bcd2bin(buf[ABX8XX_REG_MO] & 0x1F) - 1; 114 + tm->tm_year = bcd2bin(buf[ABX8XX_REG_YR]) + 100; 115 + 116 + err = rtc_valid_tm(tm); 117 + if (err < 0) 118 + dev_err(&client->dev, "retrieved date/time is not valid.\n"); 119 + 120 + return err; 121 + } 122 + 123 + static int abx80x_rtc_set_time(struct device *dev, struct rtc_time *tm) 124 + { 125 + struct i2c_client *client = to_i2c_client(dev); 126 + unsigned char buf[8]; 127 + int err; 128 + 129 + if (tm->tm_year < 100) 130 + return -EINVAL; 131 + 132 + buf[ABX8XX_REG_HTH] = 0; 133 + buf[ABX8XX_REG_SC] = bin2bcd(tm->tm_sec); 134 + buf[ABX8XX_REG_MN] = bin2bcd(tm->tm_min);
135 + buf[ABX8XX_REG_HR] = bin2bcd(tm->tm_hour); 136 + buf[ABX8XX_REG_DA] = bin2bcd(tm->tm_mday); 137 + buf[ABX8XX_REG_MO] = bin2bcd(tm->tm_mon + 1); 138 + buf[ABX8XX_REG_YR] = bin2bcd(tm->tm_year - 100); 139 + buf[ABX8XX_REG_WD] = tm->tm_wday; 140 + 141 + err = i2c_smbus_write_i2c_block_data(client, ABX8XX_REG_HTH, 142 + sizeof(buf), buf); 143 + if (err < 0) { 144 + dev_err(&client->dev, "Unable to write to date registers\n"); 145 + return -EIO; 146 + } 147 + 148 + return 0; 149 + } 150 + 151 + static const struct rtc_class_ops abx80x_rtc_ops = { 152 + .read_time = abx80x_rtc_read_time, 153 + .set_time = abx80x_rtc_set_time, 154 + }; 155 + 156 + static int abx80x_dt_trickle_cfg(struct device_node *np) 157 + { 158 + const char *diode; 159 + int trickle_cfg = 0; 160 + int i, ret; 161 + u32 tmp; 162 + 163 + ret = of_property_read_string(np, "abracon,tc-diode", &diode); 164 + if (ret) 165 + return ret; 166 + 167 + if (!strcmp(diode, "standard")) 168 + trickle_cfg |= ABX8XX_TRICKLE_STANDARD_DIODE; 169 + else if (!strcmp(diode, "schottky")) 170 + trickle_cfg |= ABX8XX_TRICKLE_SCHOTTKY_DIODE; 171 + else 172 + return -EINVAL; 173 + 174 + ret = of_property_read_u32(np, "abracon,tc-resistor", &tmp); 175 + if (ret) 176 + return ret; 177 + 178 + for (i = 0; i < sizeof(trickle_resistors); i++) 179 + if (trickle_resistors[i] == tmp) 180 + break; 181 + 182 + if (i == sizeof(trickle_resistors)) 183 + return -EINVAL; 184 + 185 + return (trickle_cfg | i); 186 + } 187 + 188 + static int abx80x_probe(struct i2c_client *client, 189 + const struct i2c_device_id *id) 190 + { 191 + struct device_node *np = client->dev.of_node; 192 + struct rtc_device *rtc; 193 + int i, data, err, trickle_cfg = -EINVAL; 194 + char buf[7]; 195 + unsigned int part = id->driver_data; 196 + unsigned int partnumber; 197 + unsigned int majrev, minrev; 198 + unsigned int lot; 199 + unsigned int wafer; 200 + unsigned int uid; 201 + 202 + if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C))
203 + return -ENODEV; 204 + 205 + err = i2c_smbus_read_i2c_block_data(client, ABX8XX_REG_ID0, 206 + sizeof(buf), buf); 207 + if (err < 0) { 208 + dev_err(&client->dev, "Unable to read partnumber\n"); 209 + return -EIO; 210 + } 211 + 212 + partnumber = (buf[0] << 8) | buf[1]; 213 + majrev = buf[2] >> 3; 214 + minrev = buf[2] & 0x7; 215 + lot = ((buf[4] & 0x80) << 2) | ((buf[6] & 0x80) << 1) | buf[3]; 216 + uid = ((buf[4] & 0x7f) << 8) | buf[5]; 217 + wafer = (buf[6] & 0x7c) >> 2; 218 + dev_info(&client->dev, "model %04x, revision %u.%u, lot %x, wafer %x, uid %x\n", 219 + partnumber, majrev, minrev, lot, wafer, uid); 220 + 221 + data = i2c_smbus_read_byte_data(client, ABX8XX_REG_CTRL1); 222 + if (data < 0) { 223 + dev_err(&client->dev, "Unable to read control register\n"); 224 + return -EIO; 225 + } 226 + 227 + err = i2c_smbus_write_byte_data(client, ABX8XX_REG_CTRL1, 228 + ((data & ~ABX8XX_CTRL_12_24) | 229 + ABX8XX_CTRL_WRITE)); 230 + if (err < 0) { 231 + dev_err(&client->dev, "Unable to write control register\n"); 232 + return -EIO; 233 + } 234 + 235 + /* part autodetection */ 236 + if (part == ABX80X) { 237 + for (i = 0; abx80x_caps[i].pn; i++) 238 + if (partnumber == abx80x_caps[i].pn) 239 + break; 240 + if (abx80x_caps[i].pn == 0) { 241 + dev_err(&client->dev, "Unknown part: %04x\n", 242 + partnumber); 243 + return -EINVAL; 244 + } 245 + part = i; 246 + } 247 + 248 + if (partnumber != abx80x_caps[part].pn) { 249 + dev_err(&client->dev, "partnumber mismatch %04x != %04x\n", 250 + partnumber, abx80x_caps[part].pn); 251 + return -EINVAL; 252 + } 253 + 254 + if (np && abx80x_caps[part].has_tc) 255 + trickle_cfg = abx80x_dt_trickle_cfg(np); 256 + 257 + if (trickle_cfg > 0) { 258 + dev_info(&client->dev, "Enabling trickle charger: %02x\n", 259 + trickle_cfg); 260 + abx80x_enable_trickle_charger(client, trickle_cfg); 261 + } 262 + 263 + rtc = devm_rtc_device_register(&client->dev, abx80x_driver.driver.name, 264 + &abx80x_rtc_ops, THIS_MODULE); 265 +
266 + if (IS_ERR(rtc)) 267 + return PTR_ERR(rtc); 268 + 269 + i2c_set_clientdata(client, rtc); 270 + 271 + return 0; 272 + } 273 + 274 + static int abx80x_remove(struct i2c_client *client) 275 + { 276 + return 0; 277 + } 278 + 279 + static const struct i2c_device_id abx80x_id[] = { 280 + { "abx80x", ABX80X }, 281 + { "ab0801", AB0801 }, 282 + { "ab0803", AB0803 }, 283 + { "ab0804", AB0804 }, 284 + { "ab0805", AB0805 }, 285 + { "ab1801", AB1801 }, 286 + { "ab1803", AB1803 }, 287 + { "ab1804", AB1804 }, 288 + { "ab1805", AB1805 }, 289 + { } 290 + }; 291 + MODULE_DEVICE_TABLE(i2c, abx80x_id); 292 + 293 + static struct i2c_driver abx80x_driver = { 294 + .driver = { 295 + .name = "rtc-abx80x", 296 + }, 297 + .probe = abx80x_probe, 298 + .remove = abx80x_remove, 299 + .id_table = abx80x_id, 300 + }; 301 + 302 + module_i2c_driver(abx80x_driver); 303 + 304 + MODULE_AUTHOR("Philippe De Muyter <phdm@macqel.be>"); 305 + MODULE_AUTHOR("Alexandre Belloni <alexandre.belloni@free-electrons.com>"); 306 + MODULE_DESCRIPTION("Abracon ABX80X RTC driver"); 307 + MODULE_LICENSE("GPL v2");
+12 -12
drivers/rtc/rtc-armada38x.c
··· 40 40 void __iomem *regs; 41 41 void __iomem *regs_soc; 42 42 spinlock_t lock; 43 + /* 44 + * While setting the time, the RTC TIME register should not be 45 + * accessed. Setting the RTC time involves sleeping during 46 + * 100ms, so a mutex instead of a spinlock is used to protect 47 + * it 48 + */ 49 + struct mutex mutex_time; 43 50 int irq; 44 51 }; 45 52 ··· 66 59 struct armada38x_rtc *rtc = dev_get_drvdata(dev); 67 60 unsigned long time, time_check, flags; 68 61 69 - spin_lock_irqsave(&rtc->lock, flags); 70 - 62 + mutex_lock(&rtc->mutex_time); 71 63 time = readl(rtc->regs + RTC_TIME); 72 64 /* 73 65 * WA for failing time set attempts. As stated in HW ERRATA if ··· 77 71 if ((time_check - time) > 1) 78 72 time_check = readl(rtc->regs + RTC_TIME); 79 73 80 - spin_unlock_irqrestore(&rtc->lock, flags); 74 + mutex_unlock(&rtc->mutex_time); 81 75 82 76 rtc_time_to_tm(time_check, tm); 83 77 ··· 100 94 * then wait for 100ms before writing to the time register to be 101 95 * sure that the data will be taken into account. 102 96 */ 103 - spin_lock_irqsave(&rtc->lock, flags); 104 - 97 + mutex_lock(&rtc->mutex_time); 105 98 rtc_delayed_write(0, rtc, RTC_STATUS); 106 - 107 - spin_unlock_irqrestore(&rtc->lock, flags); 108 - 109 99 msleep(100); 110 - 111 - spin_lock_irqsave(&rtc->lock, flags); 112 - 113 100 rtc_delayed_write(time, rtc, RTC_TIME); 101 + mutex_unlock(&rtc->mutex_time); 114 102 115 - spin_unlock_irqrestore(&rtc->lock, flags); 116 103 out: 117 104 return ret; 118 105 } ··· 229 230 return -ENOMEM; 230 231 231 232 spin_lock_init(&rtc->lock); 233 + mutex_init(&rtc->mutex_time); 232 234 233 235 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "rtc"); 234 236 rtc->regs = devm_ioremap_resource(&pdev->dev, res);
+2
drivers/s390/char/con3215.c
··· 667 667 info->buffer = kzalloc(RAW3215_BUFFER_SIZE, GFP_KERNEL | GFP_DMA); 668 668 info->inbuf = kzalloc(RAW3215_INBUF_SIZE, GFP_KERNEL | GFP_DMA); 669 669 if (!info->buffer || !info->inbuf) { 670 + kfree(info->inbuf); 671 + kfree(info->buffer); 670 672 kfree(info); 671 673 return NULL; 672 674 }
+13 -44
drivers/scsi/3w-9xxx.c
··· 149 149 static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Entry *sglistarg); 150 150 static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int request_id); 151 151 static char *twa_string_lookup(twa_message_type *table, unsigned int aen_code); 152 - static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id); 153 152 154 153 /* Functions */ 155 154 ··· 1339 1340 } 1340 1341 1341 1342 /* Now complete the io */ 1343 + scsi_dma_unmap(cmd); 1344 + cmd->scsi_done(cmd); 1342 1345 tw_dev->state[request_id] = TW_S_COMPLETED; 1343 1346 twa_free_request_id(tw_dev, request_id); 1344 1347 tw_dev->posted_request_count--; 1345 - tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]); 1346 - twa_unmap_scsi_data(tw_dev, request_id); 1347 1348 } 1348 1349 1349 1350 /* Check for valid status after each drain */ ··· 1400 1401 } 1401 1402 } 1402 1403 } /* End twa_load_sgl() */ 1403 - 1404 - /* This function will perform a pci-dma mapping for a scatter gather list */ 1405 - static int twa_map_scsi_sg_data(TW_Device_Extension *tw_dev, int request_id) 1406 - { 1407 - int use_sg; 1408 - struct scsi_cmnd *cmd = tw_dev->srb[request_id]; 1409 - 1410 - use_sg = scsi_dma_map(cmd); 1411 - if (!use_sg) 1412 - return 0; 1413 - else if (use_sg < 0) { 1414 - TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1c, "Failed to map scatter gather list"); 1415 - return 0; 1416 - } 1417 - 1418 - cmd->SCp.phase = TW_PHASE_SGLIST; 1419 - cmd->SCp.have_data_in = use_sg; 1420 - 1421 - return use_sg; 1422 - } /* End twa_map_scsi_sg_data() */ 1423 1404 1424 1405 /* This function will poll for a response interrupt of a request */ 1425 1406 static int twa_poll_response(TW_Device_Extension *tw_dev, int request_id, int seconds) ··· 1579 1600 (tw_dev->state[i] != TW_S_INITIAL) && 1580 1601 (tw_dev->state[i] != TW_S_COMPLETED)) { 1581 1602 if (tw_dev->srb[i]) { 1582 - tw_dev->srb[i]->result = (DID_RESET << 16); 1583 - tw_dev->srb[i]->scsi_done(tw_dev->srb[i]);
1584 - twa_unmap_scsi_data(tw_dev, i); 1603 + struct scsi_cmnd *cmd = tw_dev->srb[i]; 1604 + 1605 + cmd->result = (DID_RESET << 16); 1606 + scsi_dma_unmap(cmd); 1607 + cmd->scsi_done(cmd); 1585 1608 } 1586 1609 } 1587 1610 } ··· 1762 1781 /* Save the scsi command for use by the ISR */ 1763 1782 tw_dev->srb[request_id] = SCpnt; 1764 1783 1765 - /* Initialize phase to zero */ 1766 - SCpnt->SCp.phase = TW_PHASE_INITIAL; 1767 - 1768 1784 retval = twa_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL); 1769 1785 switch (retval) { 1770 1786 case SCSI_MLQUEUE_HOST_BUSY: 1787 + scsi_dma_unmap(SCpnt); 1771 1788 twa_free_request_id(tw_dev, request_id); 1772 - twa_unmap_scsi_data(tw_dev, request_id); 1773 1789 break; 1774 1790 case 1: 1791 + SCpnt->result = (DID_ERROR << 16); 1792 + scsi_dma_unmap(SCpnt); 1793 + done(SCpnt); 1775 1794 tw_dev->state[request_id] = TW_S_COMPLETED; 1776 1795 twa_free_request_id(tw_dev, request_id); 1777 - twa_unmap_scsi_data(tw_dev, request_id); 1778 - SCpnt->result = (DID_ERROR << 16); 1779 - done(SCpnt); 1780 1796 retval = 0; 1781 1797 } 1782 1798 out: ··· 1841 1863 command_packet->sg_list[0].address = TW_CPU_TO_SGL(tw_dev->generic_buffer_phys[request_id]); 1842 1864 command_packet->sg_list[0].length = cpu_to_le32(TW_MIN_SGL_LENGTH); 1843 1865 } else { 1844 - sg_count = twa_map_scsi_sg_data(tw_dev, request_id); 1845 - if (sg_count == 0) 1866 + sg_count = scsi_dma_map(srb); 1867 + if (sg_count < 0) 1846 1868 goto out; 1847 1869 1848 1870 scsi_for_each_sg(srb, sg, sg_count, i) { ··· 1956 1978 (table[index].text != (char *)0)); index++); 1957 1979 return(table[index].text); 1958 1980 } /* End twa_string_lookup() */ 1959 - 1960 - /* This function will perform a pci-dma unmap */ 1961 - static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id) 1962 - { 1963 - struct scsi_cmnd *cmd = tw_dev->srb[request_id]; 1964 - 1965 - if (cmd->SCp.phase == TW_PHASE_SGLIST) 1966 - scsi_dma_unmap(cmd);
1967 - } /* End twa_unmap_scsi_data() */ 1968 1981 1969 1982 /* This function gets called when a disk is coming on-line */ 1970 1983 static int twa_slave_configure(struct scsi_device *sdev)
-5
drivers/scsi/3w-9xxx.h
··· 324 324 #define TW_CURRENT_DRIVER_BUILD 0 325 325 #define TW_CURRENT_DRIVER_BRANCH 0 326 326 327 - /* Phase defines */ 328 - #define TW_PHASE_INITIAL 0 329 - #define TW_PHASE_SINGLE 1 330 - #define TW_PHASE_SGLIST 2 331 - 332 327 /* Misc defines */ 333 328 #define TW_9550SX_DRAIN_COMPLETED 0xFFFF 334 329 #define TW_SECTOR_SIZE 512
+10 -40
drivers/scsi/3w-sas.c
··· 290 290 return 0; 291 291 } /* End twl_post_command_packet() */ 292 292 293 - /* This function will perform a pci-dma mapping for a scatter gather list */ 294 - static int twl_map_scsi_sg_data(TW_Device_Extension *tw_dev, int request_id) 295 - { 296 - int use_sg; 297 - struct scsi_cmnd *cmd = tw_dev->srb[request_id]; 298 - 299 - use_sg = scsi_dma_map(cmd); 300 - if (!use_sg) 301 - return 0; 302 - else if (use_sg < 0) { 303 - TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1, "Failed to map scatter gather list"); 304 - return 0; 305 - } 306 - 307 - cmd->SCp.phase = TW_PHASE_SGLIST; 308 - cmd->SCp.have_data_in = use_sg; 309 - 310 - return use_sg; 311 - } /* End twl_map_scsi_sg_data() */ 312 - 313 293 /* This function hands scsi cdb's to the firmware */ 314 294 static int twl_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Entry_ISO *sglistarg) 315 295 { ··· 337 357 if (!sglistarg) { 338 358 /* Map sglist from scsi layer to cmd packet */ 339 359 if (scsi_sg_count(srb)) { 340 - sg_count = twl_map_scsi_sg_data(tw_dev, request_id); 341 - if (sg_count == 0) 360 + sg_count = scsi_dma_map(srb); 361 + if (sg_count <= 0) 342 362 goto out; 343 363 344 364 scsi_for_each_sg(srb, sg, sg_count, i) { ··· 1082 1102 return retval; 1083 1103 } /* End twl_initialize_device_extension() */ 1084 1104 1085 - /* This function will perform a pci-dma unmap */ 1086 - static void twl_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id) 1087 - { 1088 - struct scsi_cmnd *cmd = tw_dev->srb[request_id]; 1089 - 1090 - if (cmd->SCp.phase == TW_PHASE_SGLIST) 1091 - scsi_dma_unmap(cmd); 1092 - } /* End twl_unmap_scsi_data() */ 1093 - 1094 1105 /* This function will handle attention interrupts */ 1095 1106 static int twl_handle_attention_interrupt(TW_Device_Extension *tw_dev) 1096 1107 { ··· 1222 1251 } 1223 1252 1224 1253 /* Now complete the io */ 1254 + scsi_dma_unmap(cmd); 1255 + cmd->scsi_done(cmd); 1225 1256 tw_dev->state[request_id] = TW_S_COMPLETED; 
1226 1257 twl_free_request_id(tw_dev, request_id); 1227 1258 tw_dev->posted_request_count--; 1228 - tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]); 1229 - twl_unmap_scsi_data(tw_dev, request_id); 1230 1259 } 1231 1260 1232 1261 /* Check for another response interrupt */ ··· 1371 1400 if ((tw_dev->state[i] != TW_S_FINISHED) && 1372 1401 (tw_dev->state[i] != TW_S_INITIAL) && 1373 1402 (tw_dev->state[i] != TW_S_COMPLETED)) { 1374 - if (tw_dev->srb[i]) { 1375 - tw_dev->srb[i]->result = (DID_RESET << 16); 1376 - tw_dev->srb[i]->scsi_done(tw_dev->srb[i]); 1377 - twl_unmap_scsi_data(tw_dev, i); 1403 + struct scsi_cmnd *cmd = tw_dev->srb[i]; 1404 + 1405 + if (cmd) { 1406 + cmd->result = (DID_RESET << 16); 1407 + scsi_dma_unmap(cmd); 1408 + cmd->scsi_done(cmd); 1378 1409 } 1379 1410 } 1380 1411 } ··· 1479 1506 1480 1507 /* Save the scsi command for use by the ISR */ 1481 1508 tw_dev->srb[request_id] = SCpnt; 1482 - 1483 - /* Initialize phase to zero */ 1484 - SCpnt->SCp.phase = TW_PHASE_INITIAL; 1485 1509 1486 1510 retval = twl_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL); 1487 1511 if (retval) {
-4
drivers/scsi/3w-sas.h
··· 103 103 #define TW_CURRENT_DRIVER_BUILD 0 104 104 #define TW_CURRENT_DRIVER_BRANCH 0 105 105 106 - /* Phase defines */ 107 - #define TW_PHASE_INITIAL 0 108 - #define TW_PHASE_SGLIST 2 109 - 110 106 /* Misc defines */ 111 107 #define TW_SECTOR_SIZE 512 112 108 #define TW_MAX_UNITS 32
+6 -36
drivers/scsi/3w-xxxx.c
··· 1271 1271 return 0; 1272 1272 } /* End tw_initialize_device_extension() */ 1273 1273 1274 - static int tw_map_scsi_sg_data(struct pci_dev *pdev, struct scsi_cmnd *cmd) 1275 - { 1276 - int use_sg; 1277 - 1278 - dprintk(KERN_WARNING "3w-xxxx: tw_map_scsi_sg_data()\n"); 1279 - 1280 - use_sg = scsi_dma_map(cmd); 1281 - if (use_sg < 0) { 1282 - printk(KERN_WARNING "3w-xxxx: tw_map_scsi_sg_data(): pci_map_sg() failed.\n"); 1283 - return 0; 1284 - } 1285 - 1286 - cmd->SCp.phase = TW_PHASE_SGLIST; 1287 - cmd->SCp.have_data_in = use_sg; 1288 - 1289 - return use_sg; 1290 - } /* End tw_map_scsi_sg_data() */ 1291 - 1292 - static void tw_unmap_scsi_data(struct pci_dev *pdev, struct scsi_cmnd *cmd) 1293 - { 1294 - dprintk(KERN_WARNING "3w-xxxx: tw_unmap_scsi_data()\n"); 1295 - 1296 - if (cmd->SCp.phase == TW_PHASE_SGLIST) 1297 - scsi_dma_unmap(cmd); 1298 - } /* End tw_unmap_scsi_data() */ 1299 - 1300 1274 /* This function will reset a device extension */ 1301 1275 static int tw_reset_device_extension(TW_Device_Extension *tw_dev) 1302 1276 { ··· 1293 1319 srb = tw_dev->srb[i]; 1294 1320 if (srb != NULL) { 1295 1321 srb->result = (DID_RESET << 16); 1296 - tw_dev->srb[i]->scsi_done(tw_dev->srb[i]); 1297 - tw_unmap_scsi_data(tw_dev->tw_pci_dev, tw_dev->srb[i]); 1322 + scsi_dma_unmap(srb); 1323 + srb->scsi_done(srb); 1298 1324 } 1299 1325 } 1300 1326 } ··· 1741 1767 command_packet->byte8.io.lba = lba; 1742 1768 command_packet->byte6.block_count = num_sectors; 1743 1769 1744 - use_sg = tw_map_scsi_sg_data(tw_dev->tw_pci_dev, tw_dev->srb[request_id]); 1745 - if (!use_sg) 1770 + use_sg = scsi_dma_map(srb); 1771 + if (use_sg <= 0) 1746 1772 return 1; 1747 1773 1748 1774 scsi_for_each_sg(tw_dev->srb[request_id], sg, use_sg, i) { ··· 1928 1954 1929 1955 /* Save the scsi command for use by the ISR */ 1930 1956 tw_dev->srb[request_id] = SCpnt; 1931 - 1932 - /* Initialize phase to zero */ 1933 - SCpnt->SCp.phase = TW_PHASE_INITIAL; 1934 1957 1935 1958 switch (*command) { 1936 1959 case 
READ_10: ··· 2156 2185 2157 2186 /* Now complete the io */ 2158 2187 if ((error != TW_ISR_DONT_COMPLETE)) { 2188 + scsi_dma_unmap(tw_dev->srb[request_id]); 2189 + tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]); 2159 2190 tw_dev->state[request_id] = TW_S_COMPLETED; 2160 2191 tw_state_request_finish(tw_dev, request_id); 2161 2192 tw_dev->posted_request_count--; 2162 - tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]); 2163 - 2164 - tw_unmap_scsi_data(tw_dev->tw_pci_dev, tw_dev->srb[request_id]); 2165 2193 } 2166 2194 } 2167 2195
-5
drivers/scsi/3w-xxxx.h
··· 195 195 #define TW_AEN_SMART_FAIL 0x000F 196 196 #define TW_AEN_SBUF_FAIL 0x0024 197 197 198 - /* Phase defines */ 199 - #define TW_PHASE_INITIAL 0 200 - #define TW_PHASE_SINGLE 1 201 - #define TW_PHASE_SGLIST 2 202 - 203 198 /* Misc defines */ 204 199 #define TW_ALIGNMENT_6000 64 /* 64 bytes */ 205 200 #define TW_ALIGNMENT_7000 4 /* 4 bytes */
+11 -12
drivers/scsi/aha1542.c
··· 375 375 u8 lun = cmd->device->lun; 376 376 unsigned long flags; 377 377 int bufflen = scsi_bufflen(cmd); 378 - int mbo; 378 + int mbo, sg_count; 379 379 struct mailbox *mb = aha1542->mb; 380 380 struct ccb *ccb = aha1542->ccb; 381 + struct chain *cptr; 381 382 382 383 if (*cmd->cmnd == REQUEST_SENSE) { 383 384 /* Don't do the command - we have the sense data already */ ··· 398 397 print_hex_dump_bytes("command: ", DUMP_PREFIX_NONE, cmd->cmnd, cmd->cmd_len); 399 398 } 400 399 #endif 400 + if (bufflen) { /* allocate memory before taking host_lock */ 401 + sg_count = scsi_sg_count(cmd); 402 + cptr = kmalloc(sizeof(*cptr) * sg_count, GFP_KERNEL | GFP_DMA); 403 + if (!cptr) 404 + return SCSI_MLQUEUE_HOST_BUSY; 405 + } 406 + 401 407 /* Use the outgoing mailboxes in a round-robin fashion, because this 402 408 is how the host adapter will scan for them */ 403 409 ··· 449 441 450 442 if (bufflen) { 451 443 struct scatterlist *sg; 452 - struct chain *cptr; 453 - int i, sg_count = scsi_sg_count(cmd); 444 + int i; 454 445 455 446 ccb[mbo].op = 2; /* SCSI Initiator Command w/scatter-gather */ 456 - cmd->host_scribble = kmalloc(sizeof(*cptr)*sg_count, 457 - GFP_KERNEL | GFP_DMA); 458 - cptr = (struct chain *) cmd->host_scribble; 459 - if (cptr == NULL) { 460 - /* free the claimed mailbox slot */ 461 - aha1542->int_cmds[mbo] = NULL; 462 - spin_unlock_irqrestore(sh->host_lock, flags); 463 - return SCSI_MLQUEUE_HOST_BUSY; 464 - } 447 + cmd->host_scribble = (void *)cptr; 465 448 scsi_for_each_sg(cmd, sg, sg_count, i) { 466 449 any2scsi(cptr[i].dataptr, isa_page_to_bus(sg_page(sg)) 467 450 + sg->offset);
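The aha1542 hunk above moves the `kmalloc()` out from under `host_lock`, so the driver never allocates (and possibly sleeps) while holding the spinlock, and can bail out with `SCSI_MLQUEUE_HOST_BUSY` before any state is claimed. A minimal userspace sketch of the same discipline, with a pthread mutex standing in for the host lock (all names here are illustrative, not the driver's):

```c
#include <pthread.h>
#include <stdlib.h>

/* Stand-in for SCSI_MLQUEUE_HOST_BUSY: tell the caller to retry later. */
#define QUEUE_BUSY 1

static pthread_mutex_t host_lock = PTHREAD_MUTEX_INITIALIZER;

static int queue_command(size_t sg_count, void **scribble)
{
    void *cptr = NULL;

    if (sg_count) {            /* allocate memory before taking host_lock */
        cptr = calloc(sg_count, 16);
        if (!cptr)
            return QUEUE_BUSY; /* nothing locked or claimed yet, safe to bail */
    }

    pthread_mutex_lock(&host_lock);
    *scribble = cptr;          /* hand the buffer to the queued command */
    pthread_mutex_unlock(&host_lock);
    return 0;
}
```

The original code allocated inside the locked region and then had to undo the claimed mailbox slot on failure; hoisting the allocation removes that error path entirely.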
+1
drivers/scsi/scsi_devinfo.c
··· 226 226 {"PIONEER", "CD-ROM DRM-624X", NULL, BLIST_FORCELUN | BLIST_SINGLELUN}, 227 227 {"Promise", "VTrak E610f", NULL, BLIST_SPARSELUN | BLIST_NO_RSOC}, 228 228 {"Promise", "", NULL, BLIST_SPARSELUN}, 229 + {"QNAP", "iSCSI Storage", NULL, BLIST_MAX_1024}, 229 230 {"QUANTUM", "XP34301", "1071", BLIST_NOTQ}, 230 231 {"REGAL", "CDC-4X", NULL, BLIST_MAX5LUN | BLIST_SINGLELUN}, 231 232 {"SanDisk", "ImageMate CF-SD1", NULL, BLIST_FORCELUN},
+6
drivers/scsi/scsi_scan.c
··· 897 897 */ 898 898 if (*bflags & BLIST_MAX_512) 899 899 blk_queue_max_hw_sectors(sdev->request_queue, 512); 900 + /* 901 + * Max 1024 sector transfer length for targets that report incorrect 902 + * max/optimal lengths and relied on the old block layer safe default 903 + */ 904 + else if (*bflags & BLIST_MAX_1024) 905 + blk_queue_max_hw_sectors(sdev->request_queue, 1024); 900 906 901 907 /* 902 908 * Some devices may not want to have a start command automatically
+3 -4
drivers/sh/pm_runtime.c
··· 80 80 if (IS_ENABLED(CONFIG_ARCH_SHMOBILE_MULTI)) { 81 81 if (!of_machine_is_compatible("renesas,emev2") && 82 82 !of_machine_is_compatible("renesas,r7s72100") && 83 - !of_machine_is_compatible("renesas,r8a73a4") && 84 83 #ifndef CONFIG_PM_GENERIC_DOMAINS_OF 84 + !of_machine_is_compatible("renesas,r8a73a4") && 85 85 !of_machine_is_compatible("renesas,r8a7740") && 86 + !of_machine_is_compatible("renesas,sh73a0") && 86 87 #endif 87 88 !of_machine_is_compatible("renesas,r8a7778") && 88 89 !of_machine_is_compatible("renesas,r8a7779") && ··· 91 90 !of_machine_is_compatible("renesas,r8a7791") && 92 91 !of_machine_is_compatible("renesas,r8a7792") && 93 92 !of_machine_is_compatible("renesas,r8a7793") && 94 - !of_machine_is_compatible("renesas,r8a7794") && 95 - !of_machine_is_compatible("renesas,sh7372") && 96 - !of_machine_is_compatible("renesas,sh73a0")) 93 + !of_machine_is_compatible("renesas,r8a7794")) 97 94 return 0; 98 95 } 99 96
+1
drivers/staging/media/omap4iss/Kconfig
··· 2 2 bool "OMAP 4 Camera support" 3 3 depends on VIDEO_V4L2=y && VIDEO_V4L2_SUBDEV_API && I2C=y && ARCH_OMAP4 4 4 depends on HAS_DMA 5 + select MFD_SYSCON 5 6 select VIDEOBUF2_DMA_CONTIG 6 7 ---help--- 7 8 Driver for an OMAP 4 ISS controller.
+11
drivers/staging/media/omap4iss/iss.c
··· 17 17 #include <linux/dma-mapping.h> 18 18 #include <linux/i2c.h> 19 19 #include <linux/interrupt.h> 20 + #include <linux/mfd/syscon.h> 20 21 #include <linux/module.h> 21 22 #include <linux/platform_device.h> 22 23 #include <linux/slab.h> ··· 1386 1385 iss->dev->coherent_dma_mask = DMA_BIT_MASK(32); 1387 1386 1388 1387 platform_set_drvdata(pdev, iss); 1388 + 1389 + /* 1390 + * TODO: When implementing DT support switch to syscon regmap lookup by 1391 + * phandle. 1392 + */ 1393 + iss->syscon = syscon_regmap_lookup_by_compatible("syscon"); 1394 + if (IS_ERR(iss->syscon)) { 1395 + ret = PTR_ERR(iss->syscon); 1396 + goto error; 1397 + } 1389 1398 1390 1399 /* Clocks */ 1391 1400 ret = iss_map_mem_resource(pdev, iss, OMAP4_ISS_MEM_TOP);
+4
drivers/staging/media/omap4iss/iss.h
··· 29 29 #include "iss_ipipe.h" 30 30 #include "iss_resizer.h" 31 31 32 + struct regmap; 33 + 32 34 #define to_iss_device(ptr_module) \ 33 35 container_of(ptr_module, struct iss_device, ptr_module) 34 36 #define to_device(ptr_module) \ ··· 81 79 82 80 /* 83 81 * struct iss_device - ISS device structure. 82 + * @syscon: Regmap for the syscon register space 84 83 * @crashed: Bitmask of crashed entities (indexed by entity ID) 85 84 */ 86 85 struct iss_device { ··· 96 93 97 94 struct resource *res[OMAP4_ISS_MEM_LAST]; 98 95 void __iomem *regs[OMAP4_ISS_MEM_LAST]; 96 + struct regmap *syscon; 99 97 100 98 u64 raw_dmamask; 101 99
+7 -5
drivers/staging/media/omap4iss/iss_csiphy.c
··· 13 13 14 14 #include <linux/delay.h> 15 15 #include <linux/device.h> 16 + #include <linux/regmap.h> 16 17 17 18 #include "../../../../arch/arm/mach-omap2/control.h" 18 19 ··· 141 140 * - bit [18] : CSIPHY1 CTRLCLK enable 142 141 * - bit [17:16] : CSIPHY1 config: 00 d-phy, 01/10 ccp2 143 142 */ 144 - cam_rx_ctrl = omap4_ctrl_pad_readl( 145 - OMAP4_CTRL_MODULE_PAD_CORE_CONTROL_CAMERA_RX); 146 - 143 + /* 144 + * TODO: When implementing DT support specify the CONTROL_CAMERA_RX 145 + * register offset in the syscon property instead of hardcoding it. 146 + */ 147 + regmap_read(iss->syscon, 0x68, &cam_rx_ctrl); 147 148 148 149 if (subdevs->interface == ISS_INTERFACE_CSI2A_PHY1) { 149 150 cam_rx_ctrl &= ~(OMAP4_CAMERARX_CSI21_LANEENABLE_MASK | ··· 169 166 cam_rx_ctrl |= OMAP4_CAMERARX_CSI22_CTRLCLKEN_MASK; 170 167 } 171 168 172 - omap4_ctrl_pad_writel(cam_rx_ctrl, 173 - OMAP4_CTRL_MODULE_PAD_CORE_CONTROL_CAMERA_RX); 169 + regmap_write(iss->syscon, 0x68, cam_rx_ctrl); 174 170 175 171 /* Reset used lane count */ 176 172 csi2->phy->used_data_lanes = 0;
+17 -1
drivers/tty/hvc/hvc_xen.c
··· 299 299 return 0; 300 300 } 301 301 302 + static void xen_console_update_evtchn(struct xencons_info *info) 303 + { 304 + if (xen_hvm_domain()) { 305 + uint64_t v; 306 + int err; 307 + 308 + err = hvm_get_parameter(HVM_PARAM_CONSOLE_EVTCHN, &v); 309 + if (!err && v) 310 + info->evtchn = v; 311 + } else 312 + info->evtchn = xen_start_info->console.domU.evtchn; 313 + } 314 + 302 315 void xen_console_resume(void) 303 316 { 304 317 struct xencons_info *info = vtermno_to_xencons(HVC_COOKIE); 305 - if (info != NULL && info->irq) 318 + if (info != NULL && info->irq) { 319 + if (!xen_initial_domain()) 320 + xen_console_update_evtchn(info); 306 321 rebind_evtchn_irq(info->evtchn, info->irq); 322 + } 307 323 } 308 324 309 325 static void xencons_disconnect_backend(struct xencons_info *info)
+23 -2
drivers/tty/serial/8250/8250_pci.c
··· 1998 1998 #define PCIE_DEVICE_ID_WCH_CH382_2S1P 0x3250 1999 1999 #define PCIE_DEVICE_ID_WCH_CH384_4S 0x3470 2000 2000 2001 + #define PCI_DEVICE_ID_EXAR_XR17V8358 0x8358 2002 + 2001 2003 /* Unknown vendors/cards - this should not be in linux/pci_ids.h */ 2002 2004 #define PCI_SUBDEVICE_ID_UNKNOWN_0x1584 0x1584 2003 2005 #define PCI_SUBDEVICE_ID_UNKNOWN_0x1588 0x1588 ··· 2522 2520 .subdevice = PCI_ANY_ID, 2523 2521 .setup = pci_xr17v35x_setup, 2524 2522 }, 2523 + { 2524 + .vendor = PCI_VENDOR_ID_EXAR, 2525 + .device = PCI_DEVICE_ID_EXAR_XR17V8358, 2526 + .subvendor = PCI_ANY_ID, 2527 + .subdevice = PCI_ANY_ID, 2528 + .setup = pci_xr17v35x_setup, 2529 + }, 2525 2530 /* 2526 2531 * Xircom cards 2527 2532 */ ··· 3008 2999 pbn_exar_XR17V352, 3009 3000 pbn_exar_XR17V354, 3010 3001 pbn_exar_XR17V358, 3002 + pbn_exar_XR17V8358, 3011 3003 pbn_exar_ibm_saturn, 3012 3004 pbn_pasemi_1682M, 3013 3005 pbn_ni8430_2, ··· 3690 3680 [pbn_exar_XR17V358] = { 3691 3681 .flags = FL_BASE0, 3692 3682 .num_ports = 8, 3683 + .base_baud = 7812500, 3684 + .uart_offset = 0x400, 3685 + .reg_shift = 0, 3686 + .first_offset = 0, 3687 + }, 3688 + [pbn_exar_XR17V8358] = { 3689 + .flags = FL_BASE0, 3690 + .num_ports = 16, 3693 3691 .base_baud = 7812500, 3694 3692 .uart_offset = 0x400, 3695 3693 .reg_shift = 0, ··· 5098 5080 0, 5099 5081 0, pbn_exar_XR17C158 }, 5100 5082 /* 5101 - * Exar Corp. XR17V35[248] Dual/Quad/Octal PCIe UARTs 5083 + * Exar Corp. XR17V[48]35[248] Dual/Quad/Octal/Hexa PCIe UARTs 5102 5084 */ 5103 5085 { PCI_VENDOR_ID_EXAR, PCI_DEVICE_ID_EXAR_XR17V352, 5104 5086 PCI_ANY_ID, PCI_ANY_ID, ··· 5112 5094 PCI_ANY_ID, PCI_ANY_ID, 5113 5095 0, 5114 5096 0, pbn_exar_XR17V358 }, 5115 - 5097 + { PCI_VENDOR_ID_EXAR, PCI_DEVICE_ID_EXAR_XR17V8358, 5098 + PCI_ANY_ID, PCI_ANY_ID, 5099 + 0, 5100 + 0, pbn_exar_XR17V8358 }, 5116 5101 /* 5117 5102 * Topic TP560 Data/Fax/Voice 56k modem (reported by Evan Clarke) 5118 5103 */
+2
drivers/tty/serial/atmel_serial.c
··· 880 880 config.direction = DMA_MEM_TO_DEV; 881 881 config.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 882 882 config.dst_addr = port->mapbase + ATMEL_US_THR; 883 + config.dst_maxburst = 1; 883 884 884 885 ret = dmaengine_slave_config(atmel_port->chan_tx, 885 886 &config); ··· 1060 1059 config.direction = DMA_DEV_TO_MEM; 1061 1060 config.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE; 1062 1061 config.src_addr = port->mapbase + ATMEL_US_RHR; 1062 + config.src_maxburst = 1; 1063 1063 1064 1064 ret = dmaengine_slave_config(atmel_port->chan_rx, 1065 1065 &config);
-1
drivers/tty/serial/of_serial.c
··· 346 346 { .compatible = "ibm,qpace-nwp-serial", 347 347 .data = (void *)PORT_NWPSERIAL, }, 348 348 #endif 349 - { .type = "serial", .data = (void *)PORT_UNKNOWN, }, 350 349 { /* end of list */ }, 351 350 }; 352 351
+3 -2
drivers/tty/serial/samsung.c
··· 1068 1068 spin_lock_irqsave(&port->lock, flags); 1069 1069 1070 1070 ufcon = rd_regl(port, S3C2410_UFCON); 1071 - ufcon |= S3C2410_UFCON_RESETRX | S3C2410_UFCON_RESETTX | 1072 - S5PV210_UFCON_RXTRIG8; 1071 + ufcon |= S3C2410_UFCON_RESETRX | S5PV210_UFCON_RXTRIG8; 1072 + if (!uart_console(port)) 1073 + ufcon |= S3C2410_UFCON_RESETTX; 1073 1074 wr_regl(port, S3C2410_UFCON, ufcon); 1074 1075 1075 1076 enable_rx_pio(ourport);
+1 -1
drivers/tty/serial/serial_core.c
··· 1770 1770 * @port: the port to write the message 1771 1771 * @s: array of characters 1772 1772 * @count: number of characters in string to write 1773 - * @write: function to write character to port 1773 + * @putchar: function to write character to port 1774 1774 */ 1775 1775 void uart_console_write(struct uart_port *port, const char *s, 1776 1776 unsigned int count,
+6 -5
drivers/tty/serial/uartlite.c
··· 632 632 633 633 static int ulite_probe(struct platform_device *pdev) 634 634 { 635 - struct resource *res, *res2; 635 + struct resource *res; 636 + int irq; 636 637 int id = pdev->id; 637 638 #ifdef CONFIG_OF 638 639 const __be32 *prop; ··· 647 646 if (!res) 648 647 return -ENODEV; 649 648 650 - res2 = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 651 - if (!res2) 652 - return -ENODEV; 649 + irq = platform_get_irq(pdev, 0); 650 + if (irq <= 0) 651 + return -ENXIO; 653 652 654 - return ulite_assign(&pdev->dev, id, res->start, res2->start); 653 + return ulite_assign(&pdev->dev, id, res->start, irq); 655 654 } 656 655 657 656 static int ulite_remove(struct platform_device *pdev)
+6 -6
drivers/tty/serial/xilinx_uartps.c
··· 1331 1331 */ 1332 1332 static int cdns_uart_probe(struct platform_device *pdev) 1333 1333 { 1334 - int rc, id; 1334 + int rc, id, irq; 1335 1335 struct uart_port *port; 1336 - struct resource *res, *res2; 1336 + struct resource *res; 1337 1337 struct cdns_uart *cdns_uart_data; 1338 1338 1339 1339 cdns_uart_data = devm_kzalloc(&pdev->dev, sizeof(*cdns_uart_data), ··· 1380 1380 goto err_out_clk_disable; 1381 1381 } 1382 1382 1383 - res2 = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 1384 - if (!res2) { 1385 - rc = -ENODEV; 1383 + irq = platform_get_irq(pdev, 0); 1384 + if (irq <= 0) { 1385 + rc = -ENXIO; 1386 1386 goto err_out_clk_disable; 1387 1387 } 1388 1388 ··· 1411 1411 * and triggers invocation of the config_port() entry point. 1412 1412 */ 1413 1413 port->mapbase = res->start; 1414 - port->irq = res2->start; 1414 + port->irq = irq; 1415 1415 port->dev = &pdev->dev; 1416 1416 port->uartclk = clk_get_rate(cdns_uart_data->uartclk); 1417 1417 port->private_data = cdns_uart_data;
+2 -1
drivers/tty/tty_ioctl.c
··· 536 536 * Locking: termios_rwsem 537 537 */ 538 538 539 - static int tty_set_termios(struct tty_struct *tty, struct ktermios *new_termios) 539 + int tty_set_termios(struct tty_struct *tty, struct ktermios *new_termios) 540 540 { 541 541 struct ktermios old_termios; 542 542 struct tty_ldisc *ld; ··· 569 569 up_write(&tty->termios_rwsem); 570 570 return 0; 571 571 } 572 + EXPORT_SYMBOL_GPL(tty_set_termios); 572 573 573 574 /** 574 575 * set_termios - set termios values for a tty
-4
drivers/usb/chipidea/otg_fsm.c
··· 520 520 { 521 521 struct ci_hdrc *ci = container_of(fsm, struct ci_hdrc, fsm); 522 522 523 - mutex_unlock(&fsm->lock); 524 523 if (on) { 525 524 ci_role_stop(ci); 526 525 ci_role_start(ci, CI_ROLE_HOST); ··· 528 529 hw_device_reset(ci); 529 530 ci_role_start(ci, CI_ROLE_GADGET); 530 531 } 531 - mutex_lock(&fsm->lock); 532 532 return 0; 533 533 } 534 534 ··· 535 537 { 536 538 struct ci_hdrc *ci = container_of(fsm, struct ci_hdrc, fsm); 537 539 538 - mutex_unlock(&fsm->lock); 539 540 if (on) 540 541 usb_gadget_vbus_connect(&ci->gadget); 541 542 else 542 543 usb_gadget_vbus_disconnect(&ci->gadget); 543 - mutex_lock(&fsm->lock); 544 544 545 545 return 0; 546 546 }
+6 -1
drivers/usb/class/cdc-acm.c
··· 1142 1142 } 1143 1143 1144 1144 while (buflen > 0) { 1145 + elength = buffer[0]; 1146 + if (!elength) { 1147 + dev_err(&intf->dev, "skipping garbage byte\n"); 1148 + elength = 1; 1149 + goto next_desc; 1150 + } 1145 1151 if (buffer[1] != USB_DT_CS_INTERFACE) { 1146 1152 dev_err(&intf->dev, "skipping garbage\n"); 1147 1153 goto next_desc; 1148 1154 } 1149 - elength = buffer[0]; 1150 1155 1151 1156 switch (buffer[2]) { 1152 1157 case USB_CDC_UNION_TYPE: /* we've found it */
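The cdc-acm change above reorders the descriptor walk so a zero length byte can no longer stall the loop: `elength` is now read before the type check, and a zero is treated as a single garbage byte to step over, guaranteeing `buflen` shrinks every iteration. A hedged userspace sketch of that parsing pattern (the `{length, type, payload...}` layout and the 0x24 value mirror USB class-specific descriptors, but the function itself is illustrative):

```c
#include <stddef.h>
#include <stdint.h>

#define DT_CS_INTERFACE 0x24  /* USB_DT_CS_INTERFACE */

/* Walk a {length, type, payload...} buffer.  Reading the length first and
 * treating 0 as "skip one byte" is what keeps buflen shrinking on every
 * iteration, so a corrupt element can no longer wedge the loop. */
static int count_cs_descriptors(const uint8_t *buf, size_t buflen)
{
    int found = 0;

    while (buflen > 0) {
        size_t elength = buf[0];

        if (elength == 0)
            elength = 1;                 /* skip the garbage byte */
        else if (elength > buflen)
            break;                       /* truncated element, stop */
        else if (elength >= 2 && buf[1] == DT_CS_INTERFACE)
            found++;

        buf += elength;
        buflen -= elength;
    }
    return found;
}
```

Before the fix, a zero `elength` left the cursor in place and the `while (buflen > 0)` condition true forever.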
+10 -3
drivers/usb/host/ehci-msm.c
··· 88 88 } 89 89 90 90 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 91 - hcd->regs = devm_ioremap_resource(&pdev->dev, res); 92 - if (IS_ERR(hcd->regs)) { 93 - ret = PTR_ERR(hcd->regs); 91 + if (!res) { 92 + dev_err(&pdev->dev, "Unable to get memory resource\n"); 93 + ret = -ENODEV; 94 94 goto put_hcd; 95 95 } 96 + 96 97 hcd->rsrc_start = res->start; 97 98 hcd->rsrc_len = resource_size(res); 99 + hcd->regs = devm_ioremap(&pdev->dev, hcd->rsrc_start, hcd->rsrc_len); 100 + if (!hcd->regs) { 101 + dev_err(&pdev->dev, "ioremap failed\n"); 102 + ret = -ENOMEM; 103 + goto put_hcd; 104 + } 98 105 99 106 /* 100 107 * OTG driver takes care of PHY initialization, clock management,
+9 -2
drivers/usb/storage/uas-detect.h
··· 51 51 } 52 52 53 53 static int uas_use_uas_driver(struct usb_interface *intf, 54 - const struct usb_device_id *id) 54 + const struct usb_device_id *id, 55 + unsigned long *flags_ret) 55 56 { 56 57 struct usb_host_endpoint *eps[4] = { }; 57 58 struct usb_device *udev = interface_to_usbdev(intf); ··· 74 73 * this writing the following versions exist: 75 74 * ASM1051 - no uas support version 76 75 * ASM1051 - with broken (*) uas support 77 - * ASM1053 - with working uas support 76 + * ASM1053 - with working uas support, but problems with large xfers 78 77 * ASM1153 - with working uas support 79 78 * 80 79 * Devices with these chips re-use a number of device-ids over the ··· 104 103 } else if (usb_ss_max_streams(&eps[1]->ss_ep_comp) == 32) { 105 104 /* Possibly an ASM1051, disable uas */ 106 105 flags |= US_FL_IGNORE_UAS; 106 + } else { 107 + /* ASM1053, these have issues with large transfers */ 108 + flags |= US_FL_MAX_SECTORS_240; 107 109 } 108 110 } 109 111 ··· 135 131 "Please try an other USB controller if you wish to use UAS.\n"); 136 132 return 0; 137 133 } 134 + 135 + if (flags_ret) 136 + *flags_ret = flags; 138 137 139 138 return 1; 140 139 }
+12 -4
drivers/usb/storage/uas.c
··· 759 759 760 760 static int uas_slave_alloc(struct scsi_device *sdev) 761 761 { 762 - sdev->hostdata = (void *)sdev->host->hostdata; 762 + struct uas_dev_info *devinfo = 763 + (struct uas_dev_info *)sdev->host->hostdata; 764 + 765 + sdev->hostdata = devinfo; 763 766 764 767 /* USB has unusual DMA-alignment requirements: Although the 765 768 * starting address of each scatter-gather element doesn't matter, ··· 780 777 * will require changes to the block layer. 781 778 */ 782 779 blk_queue_update_dma_alignment(sdev->request_queue, (512 - 1)); 780 + 781 + if (devinfo->flags & US_FL_MAX_SECTORS_64) 782 + blk_queue_max_hw_sectors(sdev->request_queue, 64); 783 + else if (devinfo->flags & US_FL_MAX_SECTORS_240) 784 + blk_queue_max_hw_sectors(sdev->request_queue, 240); 783 785 784 786 return 0; 785 787 } ··· 895 887 struct Scsi_Host *shost = NULL; 896 888 struct uas_dev_info *devinfo; 897 889 struct usb_device *udev = interface_to_usbdev(intf); 890 + unsigned long dev_flags; 898 891 899 - if (!uas_use_uas_driver(intf, id)) 892 + if (!uas_use_uas_driver(intf, id, &dev_flags)) 900 893 return -ENODEV; 901 894 902 895 if (uas_switch_interface(udev, intf)) ··· 919 910 devinfo->udev = udev; 920 911 devinfo->resetting = 0; 921 912 devinfo->shutdown = 0; 922 - devinfo->flags = id->driver_info; 923 - usb_stor_adjust_quirks(udev, &devinfo->flags); 913 + devinfo->flags = dev_flags; 924 914 init_usb_anchor(&devinfo->cmd_urbs); 925 915 init_usb_anchor(&devinfo->sense_urbs); 926 916 init_usb_anchor(&devinfo->data_urbs);
+6 -2
drivers/usb/storage/usb.c
··· 479 479 US_FL_SINGLE_LUN | US_FL_NO_WP_DETECT | 480 480 US_FL_NO_READ_DISC_INFO | US_FL_NO_READ_CAPACITY_16 | 481 481 US_FL_INITIAL_READ10 | US_FL_WRITE_CACHE | 482 - US_FL_NO_ATA_1X | US_FL_NO_REPORT_OPCODES); 482 + US_FL_NO_ATA_1X | US_FL_NO_REPORT_OPCODES | 483 + US_FL_MAX_SECTORS_240); 483 484 484 485 p = quirks; 485 486 while (*p) { ··· 520 519 break; 521 520 case 'f': 522 521 f |= US_FL_NO_REPORT_OPCODES; 522 + break; 523 + case 'g': 524 + f |= US_FL_MAX_SECTORS_240; 523 525 break; 524 526 case 'h': 525 527 f |= US_FL_CAPACITY_HEURISTICS; ··· 1084 1080 1085 1081 /* If uas is enabled and this device can do uas then ignore it. */ 1086 1082 #if IS_ENABLED(CONFIG_USB_UAS) 1087 - if (uas_use_uas_driver(intf, id)) 1083 + if (uas_use_uas_driver(intf, id, NULL)) 1088 1084 return -ENXIO; 1089 1085 #endif 1090 1086
+7 -1
drivers/vfio/pci/vfio_pci.c
··· 907 907 mutex_lock(&vdev->igate); 908 908 909 909 if (vdev->req_trigger) { 910 - dev_dbg(&vdev->pdev->dev, "Requesting device from user\n"); 910 + if (!(count % 10)) 911 + dev_notice_ratelimited(&vdev->pdev->dev, 912 + "Relaying device request to user (#%u)\n", 913 + count); 911 914 eventfd_signal(vdev->req_trigger, 1); 915 + } else if (count == 0) { 916 + dev_warn(&vdev->pdev->dev, 917 + "No device request channel registered, blocked until released by user\n"); 912 918 } 913 919 914 920 mutex_unlock(&vdev->igate);
+18 -3
drivers/vfio/vfio.c
··· 710 710 void *device_data = device->device_data; 711 711 struct vfio_unbound_dev *unbound; 712 712 unsigned int i = 0; 713 + long ret; 714 + bool interrupted = false; 713 715 714 716 /* 715 717 * The group exists so long as we have a device reference. Get ··· 757 755 758 756 vfio_device_put(device); 759 757 760 - } while (wait_event_interruptible_timeout(vfio.release_q, 761 - !vfio_dev_present(group, dev), 762 - HZ * 10) <= 0); 758 + if (interrupted) { 759 + ret = wait_event_timeout(vfio.release_q, 760 + !vfio_dev_present(group, dev), HZ * 10); 761 + } else { 762 + ret = wait_event_interruptible_timeout(vfio.release_q, 763 + !vfio_dev_present(group, dev), HZ * 10); 764 + if (ret == -ERESTARTSYS) { 765 + interrupted = true; 766 + dev_warn(dev, 767 + "Device is currently in use, task" 768 + " \"%s\" (%d) " 769 + "blocked until device is released", 770 + current->comm, task_pid_nr(current)); 771 + } 772 + } 773 + } while (ret <= 0); 763 774 764 775 vfio_group_put(group); 765 776
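The vfio.c hunk restructures the release wait so that the first signal switches the loop from an interruptible to an uninterruptible wait (after warning which task is blocked), instead of letting a stream of signals make `wait_event_interruptible_timeout()` return immediately on every pass. A small userspace sketch of that state machine, with stub wait functions standing in for the wait_event helpers (the stubs, the 512 value, and all names are illustrative):

```c
#include <stdbool.h>

#define ERESTARTSYS 512  /* stand-in for the kernel errno */

static int calls;

/* Stubs: the first interruptible wait is "broken" by a signal; any later
 * wait reports the device as released (positive return). */
static long wait_interruptible(void)   { return ++calls == 1 ? -ERESTARTSYS : 1; }
static long wait_uninterruptible(void) { ++calls; return 1; }

static int wait_for_release(void)
{
    bool interrupted = false;
    long ret;

    do {
        if (interrupted) {
            ret = wait_uninterruptible();
        } else {
            ret = wait_interruptible();
            if (ret == -ERESTARTSYS)
                interrupted = true;  /* warn once, stop honouring signals */
        }
    } while (ret <= 0);              /* timeout or signal: keep waiting */
    return calls;                    /* how many waits it took */
}
```

The key point is that `interrupted` is sticky: once a signal has been seen, subsequent iterations never take the interruptible branch again.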
+10
drivers/xen/events/events_2l.c
··· 345 345 return IRQ_HANDLED; 346 346 } 347 347 348 + static void evtchn_2l_resume(void) 349 + { 350 + int i; 351 + 352 + for_each_online_cpu(i) 353 + memset(per_cpu(cpu_evtchn_mask, i), 0, sizeof(xen_ulong_t) * 354 + EVTCHN_2L_NR_CHANNELS/BITS_PER_EVTCHN_WORD); 355 + } 356 + 348 357 static const struct evtchn_ops evtchn_ops_2l = { 349 358 .max_channels = evtchn_2l_max_channels, 350 359 .nr_channels = evtchn_2l_max_channels, ··· 365 356 .mask = evtchn_2l_mask, 366 357 .unmask = evtchn_2l_unmask, 367 358 .handle_events = evtchn_2l_handle_events, 359 + .resume = evtchn_2l_resume, 368 360 }; 369 361 370 362 void __init xen_evtchn_2l_init(void)
+4 -3
drivers/xen/events/events_base.c
··· 529 529 if (rc) 530 530 goto err; 531 531 532 - bind_evtchn_to_cpu(evtchn, 0); 533 532 info->evtchn = evtchn; 533 + bind_evtchn_to_cpu(evtchn, 0); 534 534 535 535 rc = xen_evtchn_port_setup(info); 536 536 if (rc) ··· 1279 1279 1280 1280 mutex_unlock(&irq_mapping_update_lock); 1281 1281 1282 - /* new event channels are always bound to cpu 0 */ 1283 - irq_set_affinity(irq, cpumask_of(0)); 1282 + bind_evtchn_to_cpu(evtchn, info->cpu); 1283 + /* This will be deferred until interrupt is processed */ 1284 + irq_set_affinity(irq, cpumask_of(info->cpu)); 1284 1285 1285 1286 /* Unmask the event channel. */ 1286 1287 enable_irq(irq);
+3 -25
drivers/xen/gntdev.c
··· 327 327 return err; 328 328 } 329 329 330 - struct unmap_grant_pages_callback_data 331 - { 332 - struct completion completion; 333 - int result; 334 - }; 335 - 336 - static void unmap_grant_callback(int result, 337 - struct gntab_unmap_queue_data *data) 338 - { 339 - struct unmap_grant_pages_callback_data* d = data->data; 340 - 341 - d->result = result; 342 - complete(&d->completion); 343 - } 344 - 345 330 static int __unmap_grant_pages(struct grant_map *map, int offset, int pages) 346 331 { 347 332 int i, err = 0; 348 333 struct gntab_unmap_queue_data unmap_data; 349 - struct unmap_grant_pages_callback_data data; 350 - 351 - init_completion(&data.completion); 352 - unmap_data.data = &data; 353 - unmap_data.done= &unmap_grant_callback; 354 334 355 335 if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) { 356 336 int pgno = (map->notify.addr >> PAGE_SHIFT); ··· 347 367 unmap_data.pages = map->pages + offset; 348 368 unmap_data.count = pages; 349 369 350 - gnttab_unmap_refs_async(&unmap_data); 351 - 352 - wait_for_completion(&data.completion); 353 - if (data.result) 354 - return data.result; 370 + err = gnttab_unmap_refs_sync(&unmap_data); 371 + if (err) 372 + return err; 355 373 356 374 for (i = 0; i < pages; i++) { 357 375 if (map->unmap_ops[offset+i].status)
+28
drivers/xen/grant-table.c
··· 123 123 int (*query_foreign_access)(grant_ref_t ref); 124 124 }; 125 125 126 + struct unmap_refs_callback_data { 127 + struct completion completion; 128 + int result; 129 + }; 130 + 126 131 static struct gnttab_ops *gnttab_interface; 127 132 128 133 static int grant_table_version; ··· 867 862 __gnttab_unmap_refs_async(item); 868 863 } 869 864 EXPORT_SYMBOL_GPL(gnttab_unmap_refs_async); 865 + 866 + static void unmap_refs_callback(int result, 867 + struct gntab_unmap_queue_data *data) 868 + { 869 + struct unmap_refs_callback_data *d = data->data; 870 + 871 + d->result = result; 872 + complete(&d->completion); 873 + } 874 + 875 + int gnttab_unmap_refs_sync(struct gntab_unmap_queue_data *item) 876 + { 877 + struct unmap_refs_callback_data data; 878 + 879 + init_completion(&data.completion); 880 + item->data = &data; 881 + item->done = &unmap_refs_callback; 882 + gnttab_unmap_refs_async(item); 883 + wait_for_completion(&data.completion); 884 + 885 + return data.result; 886 + } 887 + EXPORT_SYMBOL_GPL(gnttab_unmap_refs_sync); 870 888 871 889 static int gnttab_map_frames_v1(xen_pfn_t *frames, unsigned int nr_gframes) 872 890 {
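The grant-table hunk factors the completion-based wait out of gntdev into a reusable `gnttab_unmap_refs_sync()`: the callback fills in a result and fires a completion, and the wrapper blocks on it. That async-to-sync pattern looks roughly like this as a userspace sketch using pthreads (the async operation is a stand-in, not the Xen API):

```c
#include <pthread.h>

/* Poor man's kernel completion: mutex + condvar + done flag + result. */
struct completion {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int done;
    int result;
};

static void complete(struct completion *c, int result)
{
    pthread_mutex_lock(&c->lock);
    c->result = result;
    c->done = 1;
    pthread_cond_signal(&c->cond);
    pthread_mutex_unlock(&c->lock);
}

/* Stand-in async operation: invokes the callback when "finished". */
static void op_async(void (*done)(struct completion *, int),
                     struct completion *c)
{
    done(c, 0);  /* pretend the work succeeded */
}

static int op_sync(void)
{
    struct completion c = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0
    };

    op_async(complete, &c);

    pthread_mutex_lock(&c.lock);
    while (!c.done)
        pthread_cond_wait(&c.cond, &c.lock);
    pthread_mutex_unlock(&c.lock);
    return c.result;
}
```

Moving the wrapper into grant-table.c lets gntdev (in the hunk above) drop its private callback/completion boilerplate and just check the returned error.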
+6 -3
drivers/xen/manage.c
··· 131 131 goto out_resume; 132 132 } 133 133 134 + xen_arch_suspend(); 135 + 134 136 si.cancelled = 1; 135 137 136 138 err = stop_machine(xen_suspend, &si, cpumask_of(0)); ··· 150 148 si.cancelled = 1; 151 149 } 152 150 151 + xen_arch_resume(); 152 + 153 153 out_resume: 154 - if (!si.cancelled) { 155 - xen_arch_resume(); 154 + if (!si.cancelled) 156 155 xs_resume(); 157 - } else 156 + else 158 157 xs_suspend_cancel(); 159 158 160 159 dpm_resume_end(si.cancelled ? PMSG_THAW : PMSG_RESTORE);
+1 -1
drivers/xen/swiotlb-xen.c
··· 235 235 #define SLABS_PER_PAGE (1 << (PAGE_SHIFT - IO_TLB_SHIFT)) 236 236 #define IO_TLB_MIN_SLABS ((1<<20) >> IO_TLB_SHIFT) 237 237 while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) { 238 - xen_io_tlb_start = (void *)__get_free_pages(__GFP_NOWARN, order); 238 + xen_io_tlb_start = (void *)xen_get_swiotlb_free_pages(order); 239 239 if (xen_io_tlb_start) 240 240 break; 241 241 order--;
+3 -3
drivers/xen/xen-pciback/conf_space.c
··· 16 16 #include "conf_space.h" 17 17 #include "conf_space_quirks.h" 18 18 19 - bool permissive; 20 - module_param(permissive, bool, 0644); 19 + bool xen_pcibk_permissive; 20 + module_param_named(permissive, xen_pcibk_permissive, bool, 0644); 21 21 22 22 /* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word, 23 23 * xen_pcibk_write_config_word, and xen_pcibk_write_config_byte are created. */ ··· 262 262 * This means that some fields may still be read-only because 263 263 * they have entries in the config_field list that intercept 264 264 * the write and do nothing. */ 265 - if (dev_data->permissive || permissive) { 265 + if (dev_data->permissive || xen_pcibk_permissive) { 266 266 switch (size) { 267 267 case 1: 268 268 err = pci_write_config_byte(dev, offset,
+1 -1
drivers/xen/xen-pciback/conf_space.h
··· 64 64 void *data; 65 65 }; 66 66 67 - extern bool permissive; 67 + extern bool xen_pcibk_permissive; 68 68 69 69 #define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset) 70 70
+1 -1
drivers/xen/xen-pciback/conf_space_header.c
··· 118 118 119 119 cmd->val = value; 120 120 121 - if (!permissive && (!dev_data || !dev_data->permissive)) 121 + if (!xen_pcibk_permissive && (!dev_data || !dev_data->permissive)) 122 122 return 0; 123 123 124 124 /* Only allow the guest to control certain bits. */
+29
drivers/xen/xenbus/xenbus_probe.c
··· 57 57 #include <xen/xen.h> 58 58 #include <xen/xenbus.h> 59 59 #include <xen/events.h> 60 + #include <xen/xen-ops.h> 60 61 #include <xen/page.h> 61 62 62 63 #include <xen/hvm.h> ··· 736 735 return err; 737 736 } 738 737 738 + static int xenbus_resume_cb(struct notifier_block *nb, 739 + unsigned long action, void *data) 740 + { 741 + int err = 0; 742 + 743 + if (xen_hvm_domain()) { 744 + uint64_t v; 745 + 746 + err = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN, &v); 747 + if (!err && v) 748 + xen_store_evtchn = v; 749 + else 750 + pr_warn("Cannot update xenstore event channel: %d\n", 751 + err); 752 + } else 753 + xen_store_evtchn = xen_start_info->store_evtchn; 754 + 755 + return err; 756 + } 757 + 758 + static struct notifier_block xenbus_resume_nb = { 759 + .notifier_call = xenbus_resume_cb, 760 + }; 761 + 739 762 static int __init xenbus_init(void) 740 763 { 741 764 int err = 0; ··· 817 792 pr_warn("Error initializing xenstore comms: %i\n", err); 818 793 goto out_error; 819 794 } 795 + 796 + if ((xen_store_domain_type != XS_LOCAL) && 797 + (xen_store_domain_type != XS_UNKNOWN)) 798 + xen_resume_notifier_register(&xenbus_resume_nb); 820 799 821 800 #ifdef CONFIG_XEN_COMPAT_XENFS 822 801 /*
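The hunk above hooks xenbus into resume handling by registering a `notifier_block` whose `.notifier_call` re-reads the store event channel. A minimal sketch of the notifier-chain pattern itself, with illustrative names rather than the kernel's notifier API: subscribers register once, and the core walks the chain per event, stopping at the first nonzero return.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal notifier chain: a singly linked list of callbacks. */
struct notifier_block {
	int (*notifier_call)(struct notifier_block *nb,
			     unsigned long action, void *data);
	struct notifier_block *next;
};

static struct notifier_block *resume_chain;

static void notifier_register(struct notifier_block **chain,
			      struct notifier_block *nb)
{
	nb->next = *chain;
	*chain = nb;
}

static int notifier_call_chain(struct notifier_block *chain,
			       unsigned long action, void *data)
{
	int ret = 0;

	for (; chain; chain = chain->next) {
		ret = chain->notifier_call(chain, action, data);
		if (ret)
			break;	/* first error stops the chain */
	}
	return ret;
}

/* A subscriber in the style of xenbus_resume_cb(). */
static int resumes_seen;

static int resume_cb(struct notifier_block *nb, unsigned long action,
		     void *data)
{
	resumes_seen++;
	return 0;
}

static struct notifier_block resume_nb = {
	.notifier_call = resume_cb,
};
```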
+2
fs/btrfs/delayed-inode.c
··· 1802 1802 set_nlink(inode, btrfs_stack_inode_nlink(inode_item)); 1803 1803 inode_set_bytes(inode, btrfs_stack_inode_nbytes(inode_item)); 1804 1804 BTRFS_I(inode)->generation = btrfs_stack_inode_generation(inode_item); 1805 + BTRFS_I(inode)->last_trans = btrfs_stack_inode_transid(inode_item); 1806 + 1805 1807 inode->i_version = btrfs_stack_inode_sequence(inode_item); 1806 1808 inode->i_rdev = 0; 1807 1809 *rdev = btrfs_stack_inode_rdev(inode_item);
+57 -33
fs/btrfs/extent-tree.c
··· 3178 3178 bi = btrfs_item_ptr_offset(leaf, path->slots[0]); 3179 3179 write_extent_buffer(leaf, &cache->item, bi, sizeof(cache->item)); 3180 3180 btrfs_mark_buffer_dirty(leaf); 3181 - btrfs_release_path(path); 3182 3181 fail: 3182 + btrfs_release_path(path); 3183 3183 if (ret) 3184 3184 btrfs_abort_transaction(trans, root, ret); 3185 3185 return ret; ··· 3305 3305 3306 3306 spin_lock(&block_group->lock); 3307 3307 if (block_group->cached != BTRFS_CACHE_FINISHED || 3308 - !btrfs_test_opt(root, SPACE_CACHE) || 3309 - block_group->delalloc_bytes) { 3308 + !btrfs_test_opt(root, SPACE_CACHE)) { 3310 3309 /* 3311 3310 * don't bother trying to write stuff out _if_ 3312 3311 * a) we're not cached, ··· 3407 3408 int loops = 0; 3408 3409 3409 3410 spin_lock(&cur_trans->dirty_bgs_lock); 3410 - if (!list_empty(&cur_trans->dirty_bgs)) { 3411 - list_splice_init(&cur_trans->dirty_bgs, &dirty); 3411 + if (list_empty(&cur_trans->dirty_bgs)) { 3412 + spin_unlock(&cur_trans->dirty_bgs_lock); 3413 + return 0; 3412 3414 } 3415 + list_splice_init(&cur_trans->dirty_bgs, &dirty); 3413 3416 spin_unlock(&cur_trans->dirty_bgs_lock); 3414 3417 3415 3418 again: 3416 - if (list_empty(&dirty)) { 3417 - btrfs_free_path(path); 3418 - return 0; 3419 - } 3420 - 3421 3419 /* 3422 3420 * make sure all the block groups on our dirty list actually 3423 3421 * exist ··· 3427 3431 return -ENOMEM; 3428 3432 } 3429 3433 3434 + /* 3435 + * cache_write_mutex is here only to save us from balance or automatic 3436 + * removal of empty block groups deleting this block group while we are 3437 + * writing out the cache 3438 + */ 3439 + mutex_lock(&trans->transaction->cache_write_mutex); 3430 3440 while (!list_empty(&dirty)) { 3431 3441 cache = list_first_entry(&dirty, 3432 3442 struct btrfs_block_group_cache, 3433 3443 dirty_list); 3434 - 3435 - /* 3436 - * cache_write_mutex is here only to save us from balance 3437 - * deleting this block group while we are writing out the 3438 - * cache 3439 - */ 3440 - 
mutex_lock(&trans->transaction->cache_write_mutex); 3441 - 3442 3444 /* 3443 3445 * this can happen if something re-dirties a block 3444 3446 * group that is already under IO. Just wait for it to ··· 3489 3495 } 3490 3496 if (!ret) 3491 3497 ret = write_one_cache_group(trans, root, path, cache); 3492 - mutex_unlock(&trans->transaction->cache_write_mutex); 3493 3498 3494 3499 /* if its not on the io list, we need to put the block group */ 3495 3500 if (should_put) ··· 3496 3503 3497 3504 if (ret) 3498 3505 break; 3506 + 3507 + /* 3508 + * Avoid blocking other tasks for too long. It might even save 3509 + * us from writing caches for block groups that are going to be 3510 + * removed. 3511 + */ 3512 + mutex_unlock(&trans->transaction->cache_write_mutex); 3513 + mutex_lock(&trans->transaction->cache_write_mutex); 3499 3514 } 3515 + mutex_unlock(&trans->transaction->cache_write_mutex); 3500 3516 3501 3517 /* 3502 3518 * go through delayed refs for all the stuff we've just kicked off ··· 3516 3514 loops++; 3517 3515 spin_lock(&cur_trans->dirty_bgs_lock); 3518 3516 list_splice_init(&cur_trans->dirty_bgs, &dirty); 3517 + /* 3518 + * dirty_bgs_lock protects us from concurrent block group 3519 + * deletes too (not just cache_write_mutex). 3520 + */ 3521 + if (!list_empty(&dirty)) { 3522 + spin_unlock(&cur_trans->dirty_bgs_lock); 3523 + goto again; 3524 + } 3519 3525 spin_unlock(&cur_trans->dirty_bgs_lock); 3520 - goto again; 3521 3526 } 3522 3527 3523 3528 btrfs_free_path(path); ··· 7546 7537 * returns the key for the extent through ins, and a tree buffer for 7547 7538 * the first block of the extent through buf. 7548 7539 * 7549 - * returns the tree buffer or NULL. 7540 + * returns the tree buffer or an ERR_PTR on error. 
7550 7541 */ 7551 7542 struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans, 7552 7543 struct btrfs_root *root, ··· 7557 7548 struct btrfs_key ins; 7558 7549 struct btrfs_block_rsv *block_rsv; 7559 7550 struct extent_buffer *buf; 7551 + struct btrfs_delayed_extent_op *extent_op; 7560 7552 u64 flags = 0; 7561 7553 int ret; 7562 7554 u32 blocksize = root->nodesize; ··· 7578 7568 7579 7569 ret = btrfs_reserve_extent(root, blocksize, blocksize, 7580 7570 empty_size, hint, &ins, 0, 0); 7581 - if (ret) { 7582 - unuse_block_rsv(root->fs_info, block_rsv, blocksize); 7583 - return ERR_PTR(ret); 7584 - } 7571 + if (ret) 7572 + goto out_unuse; 7585 7573 7586 7574 buf = btrfs_init_new_buffer(trans, root, ins.objectid, level); 7587 - BUG_ON(IS_ERR(buf)); /* -ENOMEM */ 7575 + if (IS_ERR(buf)) { 7576 + ret = PTR_ERR(buf); 7577 + goto out_free_reserved; 7578 + } 7588 7579 7589 7580 if (root_objectid == BTRFS_TREE_RELOC_OBJECTID) { 7590 7581 if (parent == 0) ··· 7595 7584 BUG_ON(parent > 0); 7596 7585 7597 7586 if (root_objectid != BTRFS_TREE_LOG_OBJECTID) { 7598 - struct btrfs_delayed_extent_op *extent_op; 7599 7587 extent_op = btrfs_alloc_delayed_extent_op(); 7600 - BUG_ON(!extent_op); /* -ENOMEM */ 7588 + if (!extent_op) { 7589 + ret = -ENOMEM; 7590 + goto out_free_buf; 7591 + } 7601 7592 if (key) 7602 7593 memcpy(&extent_op->key, key, sizeof(extent_op->key)); 7603 7594 else ··· 7614 7601 extent_op->level = level; 7615 7602 7616 7603 ret = btrfs_add_delayed_tree_ref(root->fs_info, trans, 7617 - ins.objectid, 7618 - ins.offset, parent, root_objectid, 7619 - level, BTRFS_ADD_DELAYED_EXTENT, 7620 - extent_op, 0); 7621 - BUG_ON(ret); /* -ENOMEM */ 7604 + ins.objectid, ins.offset, 7605 + parent, root_objectid, level, 7606 + BTRFS_ADD_DELAYED_EXTENT, 7607 + extent_op, 0); 7608 + if (ret) 7609 + goto out_free_delayed; 7622 7610 } 7623 7611 return buf; 7612 + 7613 + out_free_delayed: 7614 + btrfs_free_delayed_extent_op(extent_op); 7615 + out_free_buf: 7616 + 
free_extent_buffer(buf); 7617 + out_free_reserved: 7618 + btrfs_free_reserved_extent(root, ins.objectid, ins.offset, 0); 7619 + out_unuse: 7620 + unuse_block_rsv(root->fs_info, block_rsv, blocksize); 7621 + return ERR_PTR(ret); 7624 7622 } 7625 7623 7626 7624 struct walk_control {
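The `btrfs_alloc_tree_block()` rework above replaces `BUG_ON()` on allocation failure with layered `goto` unwinding: each acquired resource gets a label, and a failure at step N jumps to the label that releases steps N-1..1 in reverse order. A sketch of that pattern under toy resources (the `fail_at` knob and the acquire/release pair are illustrative, not btrfs structures):

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

static int fail_at;	/* inject a failure at this step (0 = none) */
static int live;	/* currently held resources */

static void *acquire(int step)
{
	void *r;

	if (step == fail_at)
		return NULL;
	r = malloc(1);
	if (r)
		live++;
	return r;
}

static void release(void *r)
{
	free(r);
	live--;
}

static int do_alloc(void)
{
	void *rsv, *buf, *op;
	int ret;

	rsv = acquire(1);		/* cf. use_block_rsv() */
	if (!rsv)
		return -ENOSPC;

	buf = acquire(2);		/* cf. btrfs_init_new_buffer() */
	if (!buf) {
		ret = -ENOMEM;
		goto out_release_rsv;
	}

	op = acquire(3);		/* cf. btrfs_alloc_delayed_extent_op() */
	if (!op) {
		ret = -ENOMEM;
		goto out_release_buf;
	}

	/* success: everything handed off (here, simply dropped) */
	release(op);
	release(buf);
	release(rsv);
	return 0;

out_release_buf:
	release(buf);
out_release_rsv:
	release(rsv);
	return ret;
}
```

Whatever step fails, `live` returns to zero, which is the invariant the btrfs change restores: no leaked reservation, buffer, or extent op on any error path.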
+28 -26
fs/btrfs/extent_io.c
··· 4560 4560 do { 4561 4561 index--; 4562 4562 page = eb->pages[index]; 4563 - if (page && mapped) { 4563 + if (!page) 4564 + continue; 4565 + if (mapped) 4564 4566 spin_lock(&page->mapping->private_lock); 4567 + /* 4568 + * We do this since we'll remove the pages after we've 4569 + * removed the eb from the radix tree, so we could race 4570 + * and have this page now attached to the new eb. So 4571 + * only clear page_private if it's still connected to 4572 + * this eb. 4573 + */ 4574 + if (PagePrivate(page) && 4575 + page->private == (unsigned long)eb) { 4576 + BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)); 4577 + BUG_ON(PageDirty(page)); 4578 + BUG_ON(PageWriteback(page)); 4565 4579 /* 4566 - * We do this since we'll remove the pages after we've 4567 - * removed the eb from the radix tree, so we could race 4568 - * and have this page now attached to the new eb. So 4569 - * only clear page_private if it's still connected to 4570 - * this eb. 4580 + * We need to make sure we haven't be attached 4581 + * to a new eb. 4571 4582 */ 4572 - if (PagePrivate(page) && 4573 - page->private == (unsigned long)eb) { 4574 - BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)); 4575 - BUG_ON(PageDirty(page)); 4576 - BUG_ON(PageWriteback(page)); 4577 - /* 4578 - * We need to make sure we haven't be attached 4579 - * to a new eb. 
4580 - */ 4581 - ClearPagePrivate(page); 4582 - set_page_private(page, 0); 4583 - /* One for the page private */ 4584 - page_cache_release(page); 4585 - } 4586 - spin_unlock(&page->mapping->private_lock); 4587 - 4588 - } 4589 - if (page) { 4590 - /* One for when we alloced the page */ 4583 + ClearPagePrivate(page); 4584 + set_page_private(page, 0); 4585 + /* One for the page private */ 4591 4586 page_cache_release(page); 4592 4587 } 4588 + 4589 + if (mapped) 4590 + spin_unlock(&page->mapping->private_lock); 4591 + 4592 + /* One for when we alloced the page */ 4593 + page_cache_release(page); 4593 4594 } while (index != 0); 4594 4595 } 4595 4596 ··· 4871 4870 mark_extent_buffer_accessed(exists, p); 4872 4871 goto free_eb; 4873 4872 } 4873 + exists = NULL; 4874 4874 4875 4875 /* 4876 4876 * Do this so attach doesn't complain and we need to ··· 4935 4933 return eb; 4936 4934 4937 4935 free_eb: 4936 + WARN_ON(!atomic_dec_and_test(&eb->refs)); 4938 4937 for (i = 0; i < num_pages; i++) { 4939 4938 if (eb->pages[i]) 4940 4939 unlock_page(eb->pages[i]); 4941 4940 } 4942 4941 4943 - WARN_ON(!atomic_dec_and_test(&eb->refs)); 4944 4942 btrfs_release_extent_buffer(eb); 4945 4943 return exists; 4946 4944 }
+7 -5
fs/btrfs/free-space-cache.c
··· 86 86 87 87 mapping_set_gfp_mask(inode->i_mapping, 88 88 mapping_gfp_mask(inode->i_mapping) & 89 - ~(GFP_NOFS & ~__GFP_HIGHMEM)); 89 + ~(__GFP_FS | __GFP_HIGHMEM)); 90 90 91 91 return inode; 92 92 } ··· 1218 1218 * 1219 1219 * This function writes out a free space cache struct to disk for quick recovery 1220 1220 * on mount. This will return 0 if it was successfull in writing the cache out, 1221 - * and -1 if it was not. 1221 + * or an errno if it was not. 1222 1222 */ 1223 1223 static int __btrfs_write_out_cache(struct btrfs_root *root, struct inode *inode, 1224 1224 struct btrfs_free_space_ctl *ctl, ··· 1235 1235 int must_iput = 0; 1236 1236 1237 1237 if (!i_size_read(inode)) 1238 - return -1; 1238 + return -EIO; 1239 1239 1240 1240 WARN_ON(io_ctl->pages); 1241 1241 ret = io_ctl_init(io_ctl, inode, root, 1); 1242 1242 if (ret) 1243 - return -1; 1243 + return ret; 1244 1244 1245 1245 if (block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA)) { 1246 1246 down_write(&block_group->data_rwsem); ··· 1258 1258 } 1259 1259 1260 1260 /* Lock all pages first so we can lock the extent safely. */ 1261 - io_ctl_prepare_pages(io_ctl, inode, 0); 1261 + ret = io_ctl_prepare_pages(io_ctl, inode, 0); 1262 + if (ret) 1263 + goto out; 1262 1264 1263 1265 lock_extent_bits(&BTRFS_I(inode)->io_tree, 0, i_size_read(inode) - 1, 1264 1266 0, &cached_state);
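The first free-space-cache.c hunk swaps `~(GFP_NOFS & ~__GFP_HIGHMEM)` for `~(__GFP_FS | __GFP_HIGHMEM)`: the intent is to clear exactly the FS and HIGHMEM bits from the mapping's gfp mask, and the old expression cleared the wrong bits. A toy demonstration of the difference (flag values below are illustrative, not the real GFP constants):

```c
#include <assert.h>

/* Illustrative stand-ins for GFP bits. */
#define F_WAIT    0x1u
#define F_IO      0x2u
#define F_FS      0x4u
#define F_HIGHMEM 0x8u
#define TOY_GFP_KERNEL (F_WAIT | F_IO | F_FS)
#define TOY_GFP_NOFS   (F_WAIT | F_IO)

/* The fixed form: clear exactly the named bits. */
static unsigned int clear_flags(unsigned int mask, unsigned int to_clear)
{
	return mask & ~to_clear;
}
```

With a starting mask of `TOY_GFP_KERNEL | F_HIGHMEM`, the fixed form leaves `F_WAIT | F_IO`; the old expression `~(TOY_GFP_NOFS & ~F_HIGHMEM)` instead strips WAIT and IO while keeping FS and HIGHMEM, the opposite of the intent.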
+13 -10
fs/btrfs/inode.c
··· 3632 3632 BTRFS_I(inode)->generation = btrfs_inode_generation(leaf, inode_item); 3633 3633 BTRFS_I(inode)->last_trans = btrfs_inode_transid(leaf, inode_item); 3634 3634 3635 - /* 3636 - * If we were modified in the current generation and evicted from memory 3637 - * and then re-read we need to do a full sync since we don't have any 3638 - * idea about which extents were modified before we were evicted from 3639 - * cache. 3640 - */ 3641 - if (BTRFS_I(inode)->last_trans == root->fs_info->generation) 3642 - set_bit(BTRFS_INODE_NEEDS_FULL_SYNC, 3643 - &BTRFS_I(inode)->runtime_flags); 3644 - 3645 3635 inode->i_version = btrfs_inode_sequence(leaf, inode_item); 3646 3636 inode->i_generation = BTRFS_I(inode)->generation; 3647 3637 inode->i_rdev = 0; ··· 3641 3651 BTRFS_I(inode)->flags = btrfs_inode_flags(leaf, inode_item); 3642 3652 3643 3653 cache_index: 3654 + /* 3655 + * If we were modified in the current generation and evicted from memory 3656 + * and then re-read we need to do a full sync since we don't have any 3657 + * idea about which extents were modified before we were evicted from 3658 + * cache. 3659 + * 3660 + * This is required for both inode re-read from disk and delayed inode 3661 + * in delayed_nodes_tree. 3662 + */ 3663 + if (BTRFS_I(inode)->last_trans == root->fs_info->generation) 3664 + set_bit(BTRFS_INODE_NEEDS_FULL_SYNC, 3665 + &BTRFS_I(inode)->runtime_flags); 3666 + 3644 3667 path->slots[0]++; 3645 3668 if (inode->i_nlink != 1 || 3646 3669 path->slots[0] >= btrfs_header_nritems(leaf))
+2 -1
fs/btrfs/ioctl.c
··· 2410 2410 "Attempt to delete subvolume %llu during send", 2411 2411 dest->root_key.objectid); 2412 2412 err = -EPERM; 2413 - goto out_dput; 2413 + goto out_unlock_inode; 2414 2414 } 2415 2415 2416 2416 d_invalidate(dentry); ··· 2505 2505 root_flags & ~BTRFS_ROOT_SUBVOL_DEAD); 2506 2506 spin_unlock(&dest->root_item_lock); 2507 2507 } 2508 + out_unlock_inode: 2508 2509 mutex_unlock(&inode->i_mutex); 2509 2510 if (!err) { 2510 2511 shrink_dcache_sb(root->fs_info->sb);
+11 -4
fs/btrfs/volumes.c
··· 1058 1058 struct extent_map *em; 1059 1059 struct list_head *search_list = &trans->transaction->pending_chunks; 1060 1060 int ret = 0; 1061 + u64 physical_start = *start; 1061 1062 1062 1063 again: 1063 1064 list_for_each_entry(em, search_list, list) { ··· 1069 1068 for (i = 0; i < map->num_stripes; i++) { 1070 1069 if (map->stripes[i].dev != device) 1071 1070 continue; 1072 - if (map->stripes[i].physical >= *start + len || 1071 + if (map->stripes[i].physical >= physical_start + len || 1073 1072 map->stripes[i].physical + em->orig_block_len <= 1074 - *start) 1073 + physical_start) 1075 1074 continue; 1076 1075 *start = map->stripes[i].physical + 1077 1076 em->orig_block_len; ··· 1194 1193 */ 1195 1194 if (contains_pending_extent(trans, device, 1196 1195 &search_start, 1197 - hole_size)) 1198 - hole_size = 0; 1196 + hole_size)) { 1197 + if (key.offset >= search_start) { 1198 + hole_size = key.offset - search_start; 1199 + } else { 1200 + WARN_ON_ONCE(1); 1201 + hole_size = 0; 1202 + } 1203 + } 1199 1204 1200 1205 if (hole_size > max_hole_size) { 1201 1206 max_hole_start = search_start;
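The second volumes.c hunk guards an unsigned subtraction: `contains_pending_extent()` may advance `search_start` past `key.offset`, in which case `key.offset - search_start` would wrap around as a u64 and report an enormous hole. A sketch of the guarded form (names illustrative):

```c
#include <assert.h>
#include <stdint.h>

static uint64_t hole_size_or_zero(uint64_t key_offset, uint64_t search_start)
{
	if (key_offset >= search_start)
		return key_offset - search_start;
	/* would underflow: treat as no usable hole */
	return 0;
}
```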
+1 -1
fs/configfs/mount.c
··· 173 173 MODULE_VERSION("0.0.2"); 174 174 MODULE_DESCRIPTION("Simple RAM filesystem for user driven kernel subsystem configuration."); 175 175 176 - module_init(configfs_init); 176 + core_initcall(configfs_init); 177 177 module_exit(configfs_exit);
+1 -1
fs/efivarfs/super.c
··· 121 121 int len, i; 122 122 int err = -ENOMEM; 123 123 124 - entry = kmalloc(sizeof(*entry), GFP_KERNEL); 124 + entry = kzalloc(sizeof(*entry), GFP_KERNEL); 125 125 if (!entry) 126 126 return err; 127 127
+7 -2
fs/ext4/Kconfig
··· 64 64 If you are not using a security module that requires using 65 65 extended attributes for file security labels, say N. 66 66 67 - config EXT4_FS_ENCRYPTION 68 - bool "Ext4 Encryption" 67 + config EXT4_ENCRYPTION 68 + tristate "Ext4 Encryption" 69 69 depends on EXT4_FS 70 70 select CRYPTO_AES 71 71 select CRYPTO_CBC ··· 80 80 feature is similar to ecryptfs, but it is more memory 81 81 efficient since it avoids caching the encrypted and 82 82 decrypted pages in the page cache. 83 + 84 + config EXT4_FS_ENCRYPTION 85 + bool 86 + default y 87 + depends on EXT4_ENCRYPTION 83 88 84 89 config EXT4_DEBUG 85 90 bool "EXT4 debugging support"
+143 -133
fs/ext4/crypto_fname.c
··· 66 66 int res = 0; 67 67 char iv[EXT4_CRYPTO_BLOCK_SIZE]; 68 68 struct scatterlist sg[1]; 69 + int padding = 4 << (ctx->flags & EXT4_POLICY_FLAGS_PAD_MASK); 69 70 char *workbuf; 70 71 71 72 if (iname->len <= 0 || iname->len > ctx->lim) ··· 74 73 75 74 ciphertext_len = (iname->len < EXT4_CRYPTO_BLOCK_SIZE) ? 76 75 EXT4_CRYPTO_BLOCK_SIZE : iname->len; 76 + ciphertext_len = ext4_fname_crypto_round_up(ciphertext_len, padding); 77 77 ciphertext_len = (ciphertext_len > ctx->lim) 78 78 ? ctx->lim : ciphertext_len; 79 79 ··· 103 101 /* Create encryption request */ 104 102 sg_init_table(sg, 1); 105 103 sg_set_page(sg, ctx->workpage, PAGE_SIZE, 0); 106 - ablkcipher_request_set_crypt(req, sg, sg, iname->len, iv); 104 + ablkcipher_request_set_crypt(req, sg, sg, ciphertext_len, iv); 107 105 res = crypto_ablkcipher_encrypt(req); 108 106 if (res == -EINPROGRESS || res == -EBUSY) { 109 107 BUG_ON(req->base.data != &ecr); ··· 200 198 return oname->len; 201 199 } 202 200 201 + static const char *lookup_table = 202 + "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,"; 203 + 203 204 /** 204 205 * ext4_fname_encode_digest() - 205 206 * 206 207 * Encodes the input digest using characters from the set [a-zA-Z0-9_+]. 207 208 * The encoded string is roughly 4/3 times the size of the input string. 
208 209 */ 209 - int ext4_fname_encode_digest(char *dst, char *src, u32 len) 210 + static int digest_encode(const char *src, int len, char *dst) 210 211 { 211 - static const char *lookup_table = 212 - "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_+"; 213 - u32 current_chunk, num_chunks, i; 214 - char tmp_buf[3]; 215 - u32 c0, c1, c2, c3; 212 + int i = 0, bits = 0, ac = 0; 213 + char *cp = dst; 216 214 217 - current_chunk = 0; 218 - num_chunks = len/3; 219 - for (i = 0; i < num_chunks; i++) { 220 - c0 = src[3*i] & 0x3f; 221 - c1 = (((src[3*i]>>6)&0x3) | ((src[3*i+1] & 0xf)<<2)) & 0x3f; 222 - c2 = (((src[3*i+1]>>4)&0xf) | ((src[3*i+2] & 0x3)<<4)) & 0x3f; 223 - c3 = (src[3*i+2]>>2) & 0x3f; 224 - dst[4*i] = lookup_table[c0]; 225 - dst[4*i+1] = lookup_table[c1]; 226 - dst[4*i+2] = lookup_table[c2]; 227 - dst[4*i+3] = lookup_table[c3]; 228 - } 229 - if (i*3 < len) { 230 - memset(tmp_buf, 0, 3); 231 - memcpy(tmp_buf, &src[3*i], len-3*i); 232 - c0 = tmp_buf[0] & 0x3f; 233 - c1 = (((tmp_buf[0]>>6)&0x3) | ((tmp_buf[1] & 0xf)<<2)) & 0x3f; 234 - c2 = (((tmp_buf[1]>>4)&0xf) | ((tmp_buf[2] & 0x3)<<4)) & 0x3f; 235 - c3 = (tmp_buf[2]>>2) & 0x3f; 236 - dst[4*i] = lookup_table[c0]; 237 - dst[4*i+1] = lookup_table[c1]; 238 - dst[4*i+2] = lookup_table[c2]; 239 - dst[4*i+3] = lookup_table[c3]; 215 + while (i < len) { 216 + ac += (((unsigned char) src[i]) << bits); 217 + bits += 8; 218 + do { 219 + *cp++ = lookup_table[ac & 0x3f]; 220 + ac >>= 6; 221 + bits -= 6; 222 + } while (bits >= 6); 240 223 i++; 241 224 } 242 - return (i * 4); 225 + if (bits) 226 + *cp++ = lookup_table[ac & 0x3f]; 227 + return cp - dst; 243 228 } 244 229 245 - /** 246 - * ext4_fname_hash() - 247 - * 248 - * This function computes the hash of the input filename, and sets the output 249 - * buffer to the *encoded* digest. It returns the length of the digest as its 250 - * return value. Errors are returned as negative numbers. 
We trust the caller 251 - * to allocate sufficient memory to oname string. 252 - */ 253 - static int ext4_fname_hash(struct ext4_fname_crypto_ctx *ctx, 254 - const struct ext4_str *iname, 255 - struct ext4_str *oname) 230 + static int digest_decode(const char *src, int len, char *dst) 256 231 { 257 - struct scatterlist sg; 258 - struct hash_desc desc = { 259 - .tfm = (struct crypto_hash *)ctx->htfm, 260 - .flags = CRYPTO_TFM_REQ_MAY_SLEEP 261 - }; 262 - int res = 0; 232 + int i = 0, bits = 0, ac = 0; 233 + const char *p; 234 + char *cp = dst; 263 235 264 - if (iname->len <= EXT4_FNAME_CRYPTO_DIGEST_SIZE) { 265 - res = ext4_fname_encode_digest(oname->name, iname->name, 266 - iname->len); 267 - oname->len = res; 268 - return res; 236 + while (i < len) { 237 + p = strchr(lookup_table, src[i]); 238 + if (p == NULL || src[i] == 0) 239 + return -2; 240 + ac += (p - lookup_table) << bits; 241 + bits += 6; 242 + if (bits >= 8) { 243 + *cp++ = ac & 0xff; 244 + ac >>= 8; 245 + bits -= 8; 246 + } 247 + i++; 269 248 } 270 - 271 - sg_init_one(&sg, iname->name, iname->len); 272 - res = crypto_hash_init(&desc); 273 - if (res) { 274 - printk(KERN_ERR 275 - "%s: Error initializing crypto hash; res = [%d]\n", 276 - __func__, res); 277 - goto out; 278 - } 279 - res = crypto_hash_update(&desc, &sg, iname->len); 280 - if (res) { 281 - printk(KERN_ERR 282 - "%s: Error updating crypto hash; res = [%d]\n", 283 - __func__, res); 284 - goto out; 285 - } 286 - res = crypto_hash_final(&desc, 287 - &oname->name[EXT4_FNAME_CRYPTO_DIGEST_SIZE]); 288 - if (res) { 289 - printk(KERN_ERR 290 - "%s: Error finalizing crypto hash; res = [%d]\n", 291 - __func__, res); 292 - goto out; 293 - } 294 - /* Encode the digest as a printable string--this will increase the 295 - * size of the digest */ 296 - oname->name[0] = 'I'; 297 - res = ext4_fname_encode_digest(oname->name+1, 298 - &oname->name[EXT4_FNAME_CRYPTO_DIGEST_SIZE], 299 - EXT4_FNAME_CRYPTO_DIGEST_SIZE) + 1; 300 - oname->len = res; 301 - out: 302 - 
return res; 249 + if (ac) 250 + return -1; 251 + return cp - dst; 303 252 } 304 253 305 254 /** ··· 358 405 if (IS_ERR(ctx)) 359 406 return ctx; 360 407 408 + ctx->flags = ei->i_crypt_policy_flags; 361 409 if (ctx->has_valid_key) { 362 410 if (ctx->key.mode != EXT4_ENCRYPTION_MODE_AES_256_CTS) { 363 411 printk_once(KERN_WARNING ··· 471 517 u32 namelen) 472 518 { 473 519 u32 ciphertext_len; 520 + int padding = 4 << (ctx->flags & EXT4_POLICY_FLAGS_PAD_MASK); 474 521 475 522 if (ctx == NULL) 476 523 return -EIO; ··· 479 524 return -EACCES; 480 525 ciphertext_len = (namelen < EXT4_CRYPTO_BLOCK_SIZE) ? 481 526 EXT4_CRYPTO_BLOCK_SIZE : namelen; 527 + ciphertext_len = ext4_fname_crypto_round_up(ciphertext_len, padding); 482 528 ciphertext_len = (ciphertext_len > ctx->lim) 483 529 ? ctx->lim : ciphertext_len; 484 530 return (int) ciphertext_len; ··· 495 539 u32 ilen, struct ext4_str *crypto_str) 496 540 { 497 541 unsigned int olen; 542 + int padding = 4 << (ctx->flags & EXT4_POLICY_FLAGS_PAD_MASK); 498 543 499 544 if (!ctx) 500 545 return -EIO; 501 - olen = ext4_fname_crypto_round_up(ilen, EXT4_CRYPTO_BLOCK_SIZE); 546 + if (padding < EXT4_CRYPTO_BLOCK_SIZE) 547 + padding = EXT4_CRYPTO_BLOCK_SIZE; 548 + olen = ext4_fname_crypto_round_up(ilen, padding); 502 549 crypto_str->len = olen; 503 550 if (olen < EXT4_FNAME_CRYPTO_DIGEST_SIZE*2) 504 551 olen = EXT4_FNAME_CRYPTO_DIGEST_SIZE*2; ··· 530 571 * ext4_fname_disk_to_usr() - converts a filename from disk space to user space 531 572 */ 532 573 int _ext4_fname_disk_to_usr(struct ext4_fname_crypto_ctx *ctx, 533 - const struct ext4_str *iname, 534 - struct ext4_str *oname) 574 + struct dx_hash_info *hinfo, 575 + const struct ext4_str *iname, 576 + struct ext4_str *oname) 535 577 { 578 + char buf[24]; 579 + int ret; 580 + 536 581 if (ctx == NULL) 537 582 return -EIO; 538 583 if (iname->len < 3) { ··· 550 587 } 551 588 if (ctx->has_valid_key) 552 589 return ext4_fname_decrypt(ctx, iname, oname); 553 - else 554 - return 
ext4_fname_hash(ctx, iname, oname); 590 + 591 + if (iname->len <= EXT4_FNAME_CRYPTO_DIGEST_SIZE) { 592 + ret = digest_encode(iname->name, iname->len, oname->name); 593 + oname->len = ret; 594 + return ret; 595 + } 596 + if (hinfo) { 597 + memcpy(buf, &hinfo->hash, 4); 598 + memcpy(buf+4, &hinfo->minor_hash, 4); 599 + } else 600 + memset(buf, 0, 8); 601 + memcpy(buf + 8, iname->name + iname->len - 16, 16); 602 + oname->name[0] = '_'; 603 + ret = digest_encode(buf, 24, oname->name+1); 604 + oname->len = ret + 1; 605 + return ret + 1; 555 606 } 556 607 557 608 int ext4_fname_disk_to_usr(struct ext4_fname_crypto_ctx *ctx, 609 + struct dx_hash_info *hinfo, 558 610 const struct ext4_dir_entry_2 *de, 559 611 struct ext4_str *oname) 560 612 { 561 613 struct ext4_str iname = {.name = (unsigned char *) de->name, 562 614 .len = de->name_len }; 563 615 564 - return _ext4_fname_disk_to_usr(ctx, &iname, oname); 616 + return _ext4_fname_disk_to_usr(ctx, hinfo, &iname, oname); 565 617 } 566 618 567 619 ··· 618 640 const struct qstr *iname, 619 641 struct dx_hash_info *hinfo) 620 642 { 621 - struct ext4_str tmp, tmp2; 643 + struct ext4_str tmp; 622 644 int ret = 0; 645 + char buf[EXT4_FNAME_CRYPTO_DIGEST_SIZE+1]; 623 646 624 - if (!ctx || !ctx->has_valid_key || 647 + if (!ctx || 625 648 ((iname->name[0] == '.') && 626 649 ((iname->len == 1) || 627 650 ((iname->name[1] == '.') && (iname->len == 2))))) { 628 651 ext4fs_dirhash(iname->name, iname->len, hinfo); 652 + return 0; 653 + } 654 + 655 + if (!ctx->has_valid_key && iname->name[0] == '_') { 656 + if (iname->len != 33) 657 + return -ENOENT; 658 + ret = digest_decode(iname->name+1, iname->len, buf); 659 + if (ret != 24) 660 + return -ENOENT; 661 + memcpy(&hinfo->hash, buf, 4); 662 + memcpy(&hinfo->minor_hash, buf + 4, 4); 663 + return 0; 664 + } 665 + 666 + if (!ctx->has_valid_key && iname->name[0] != '_') { 667 + if (iname->len > 43) 668 + return -ENOENT; 669 + ret = digest_decode(iname->name, iname->len, buf); 670 + 
ext4fs_dirhash(buf, ret, hinfo); 629 671 return 0; 630 672 } 631 673 ··· 655 657 return ret; 656 658 657 659 ret = ext4_fname_encrypt(ctx, iname, &tmp); 658 - if (ret < 0) 659 - goto out; 660 - 661 - tmp2.len = (4 * ((EXT4_FNAME_CRYPTO_DIGEST_SIZE + 2) / 3)) + 1; 662 - tmp2.name = kmalloc(tmp2.len + 1, GFP_KERNEL); 663 - if (tmp2.name == NULL) { 664 - ret = -ENOMEM; 665 - goto out; 660 + if (ret >= 0) { 661 + ext4fs_dirhash(tmp.name, tmp.len, hinfo); 662 + ret = 0; 666 663 } 667 664 668 - ret = ext4_fname_hash(ctx, &tmp, &tmp2); 669 - if (ret > 0) 670 - ext4fs_dirhash(tmp2.name, tmp2.len, hinfo); 671 - ext4_fname_crypto_free_buffer(&tmp2); 672 - out: 673 665 ext4_fname_crypto_free_buffer(&tmp); 674 666 return ret; 675 667 } 676 668 677 - /** 678 - * ext4_fname_disk_to_htree() - converts a filename from disk space to htree-access string 679 - */ 680 - int ext4_fname_disk_to_hash(struct ext4_fname_crypto_ctx *ctx, 681 - const struct ext4_dir_entry_2 *de, 682 - struct dx_hash_info *hinfo) 669 + int ext4_fname_match(struct ext4_fname_crypto_ctx *ctx, struct ext4_str *cstr, 670 + int len, const char * const name, 671 + struct ext4_dir_entry_2 *de) 683 672 { 684 - struct ext4_str iname = {.name = (unsigned char *) de->name, 685 - .len = de->name_len}; 686 - struct ext4_str tmp; 687 - int ret; 673 + int ret = -ENOENT; 674 + int bigname = (*name == '_'); 688 675 689 - if (!ctx || 690 - ((iname.name[0] == '.') && 691 - ((iname.len == 1) || 692 - ((iname.name[1] == '.') && (iname.len == 2))))) { 693 - ext4fs_dirhash(iname.name, iname.len, hinfo); 694 - return 0; 676 + if (ctx->has_valid_key) { 677 + if (cstr->name == NULL) { 678 + struct qstr istr; 679 + 680 + ret = ext4_fname_crypto_alloc_buffer(ctx, len, cstr); 681 + if (ret < 0) 682 + goto errout; 683 + istr.name = name; 684 + istr.len = len; 685 + ret = ext4_fname_encrypt(ctx, &istr, cstr); 686 + if (ret < 0) 687 + goto errout; 688 + } 689 + } else { 690 + if (cstr->name == NULL) { 691 + cstr->name = kmalloc(32, 
GFP_KERNEL); 692 + if (cstr->name == NULL) 693 + return -ENOMEM; 694 + if ((bigname && (len != 33)) || 695 + (!bigname && (len > 43))) 696 + goto errout; 697 + ret = digest_decode(name+bigname, len-bigname, 698 + cstr->name); 699 + if (ret < 0) { 700 + ret = -ENOENT; 701 + goto errout; 702 + } 703 + cstr->len = ret; 704 + } 705 + if (bigname) { 706 + if (de->name_len < 16) 707 + return 0; 708 + ret = memcmp(de->name + de->name_len - 16, 709 + cstr->name + 8, 16); 710 + return (ret == 0) ? 1 : 0; 711 + } 695 712 } 696 - 697 - tmp.len = (4 * ((EXT4_FNAME_CRYPTO_DIGEST_SIZE + 2) / 3)) + 1; 698 - tmp.name = kmalloc(tmp.len + 1, GFP_KERNEL); 699 - if (tmp.name == NULL) 700 - return -ENOMEM; 701 - 702 - ret = ext4_fname_hash(ctx, &iname, &tmp); 703 - if (ret > 0) 704 - ext4fs_dirhash(tmp.name, tmp.len, hinfo); 705 - ext4_fname_crypto_free_buffer(&tmp); 713 + if (de->name_len != cstr->len) 714 + return 0; 715 + ret = memcmp(de->name, cstr->name, cstr->len); 716 + return (ret == 0) ? 1 : 0; 717 + errout: 718 + kfree(cstr->name); 719 + cstr->name = NULL; 706 720 return ret; 707 721 }
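The `digest_encode()`/`digest_decode()` pair introduced above is a base64-style codec: 6 bits per output character, little-endian bit packing through the `ac` accumulator, and an alphabet ending in "+,". Copied here in self-contained form so the round trip can be exercised directly:

```c
#include <assert.h>
#include <string.h>

static const char *lookup_table =
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+,";

static int digest_encode(const char *src, int len, char *dst)
{
	int i = 0, bits = 0, ac = 0;
	char *cp = dst;

	while (i < len) {
		ac += (((unsigned char) src[i]) << bits);
		bits += 8;
		do {
			*cp++ = lookup_table[ac & 0x3f];
			ac >>= 6;
			bits -= 6;
		} while (bits >= 6);
		i++;
	}
	if (bits)
		*cp++ = lookup_table[ac & 0x3f];
	return cp - dst;
}

static int digest_decode(const char *src, int len, char *dst)
{
	int i = 0, bits = 0, ac = 0;
	const char *p;
	char *cp = dst;

	while (i < len) {
		/* strchr() on the NUL terminator would match, hence
		 * the explicit src[i] == 0 check. */
		p = strchr(lookup_table, src[i]);
		if (p == NULL || src[i] == 0)
			return -2;
		ac += (p - lookup_table) << bits;
		bits += 6;
		if (bits >= 8) {
			*cp++ = ac & 0xff;
			ac >>= 8;
			bits -= 8;
		}
		i++;
	}
	if (ac)
		return -1;
	return cp - dst;
}
```

Each 3 input bytes become 4 output characters (so the 24-byte hash-plus-tail buffer encodes to 32 characters, 33 with the leading '_'), and `digest_decode()` rejects characters outside the alphabet as well as trailing leftover bits.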
+1
fs/ext4/crypto_key.c
··· 110 110 } 111 111 res = 0; 112 112 113 + ei->i_crypt_policy_flags = ctx.flags; 113 114 if (S_ISREG(inode->i_mode)) 114 115 crypt_key->mode = ctx.contents_encryption_mode; 115 116 else if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode))
+9 -5
fs/ext4/crypto_policy.c
··· 37 37 return 0; 38 38 return (memcmp(ctx.master_key_descriptor, policy->master_key_descriptor, 39 39 EXT4_KEY_DESCRIPTOR_SIZE) == 0 && 40 + (ctx.flags == 41 + policy->flags) && 40 42 (ctx.contents_encryption_mode == 41 43 policy->contents_encryption_mode) && 42 44 (ctx.filenames_encryption_mode == ··· 58 56 printk(KERN_WARNING 59 57 "%s: Invalid contents encryption mode %d\n", __func__, 60 58 policy->contents_encryption_mode); 61 - res = -EINVAL; 62 - goto out; 59 + return -EINVAL; 63 60 } 64 61 if (!ext4_valid_filenames_enc_mode(policy->filenames_encryption_mode)) { 65 62 printk(KERN_WARNING 66 63 "%s: Invalid filenames encryption mode %d\n", __func__, 67 64 policy->filenames_encryption_mode); 68 - res = -EINVAL; 69 - goto out; 65 + return -EINVAL; 70 66 } 67 + if (policy->flags & ~EXT4_POLICY_FLAGS_VALID) 68 + return -EINVAL; 71 69 ctx.contents_encryption_mode = policy->contents_encryption_mode; 72 70 ctx.filenames_encryption_mode = policy->filenames_encryption_mode; 71 + ctx.flags = policy->flags; 73 72 BUILD_BUG_ON(sizeof(ctx.nonce) != EXT4_KEY_DERIVATION_NONCE_SIZE); 74 73 get_random_bytes(ctx.nonce, EXT4_KEY_DERIVATION_NONCE_SIZE); 75 74 76 75 res = ext4_xattr_set(inode, EXT4_XATTR_INDEX_ENCRYPTION, 77 76 EXT4_XATTR_NAME_ENCRYPTION_CONTEXT, &ctx, 78 77 sizeof(ctx), 0); 79 - out: 80 78 if (!res) 81 79 ext4_set_inode_flag(inode, EXT4_INODE_ENCRYPT); 82 80 return res; ··· 117 115 policy->version = 0; 118 116 policy->contents_encryption_mode = ctx.contents_encryption_mode; 119 117 policy->filenames_encryption_mode = ctx.filenames_encryption_mode; 118 + policy->flags = ctx.flags; 120 119 memcpy(&policy->master_key_descriptor, ctx.master_key_descriptor, 121 120 EXT4_KEY_DESCRIPTOR_SIZE); 122 121 return 0; ··· 179 176 EXT4_ENCRYPTION_MODE_AES_256_XTS; 180 177 ctx.filenames_encryption_mode = 181 178 EXT4_ENCRYPTION_MODE_AES_256_CTS; 179 + ctx.flags = 0; 182 180 memset(ctx.master_key_descriptor, 0x42, 183 181 EXT4_KEY_DESCRIPTOR_SIZE); 184 182 res = 0;
+1 -1
fs/ext4/dir.c
··· 249 249 } else { 250 250 /* Directory is encrypted */ 251 251 err = ext4_fname_disk_to_usr(enc_ctx, 252 - de, &fname_crypto_str); 252 + NULL, de, &fname_crypto_str); 253 253 if (err < 0) 254 254 goto errout; 255 255 if (!dir_emit(ctx,
+7 -9
fs/ext4/ext4.h
··· 911 911 912 912 /* on-disk additional length */ 913 913 __u16 i_extra_isize; 914 + char i_crypt_policy_flags; 914 915 915 916 /* Indicate the inline data space. */ 916 917 u16 i_inline_off; ··· 1066 1065 1067 1066 /* Metadata checksum algorithm codes */ 1068 1067 #define EXT4_CRC32C_CHKSUM 1 1069 - 1070 - /* Encryption algorithms */ 1071 - #define EXT4_ENCRYPTION_MODE_INVALID 0 1072 - #define EXT4_ENCRYPTION_MODE_AES_256_XTS 1 1073 - #define EXT4_ENCRYPTION_MODE_AES_256_GCM 2 1074 - #define EXT4_ENCRYPTION_MODE_AES_256_CBC 3 1075 1068 1076 1069 /* 1077 1070 * Structure of the super block ··· 2088 2093 int ext4_fname_crypto_alloc_buffer(struct ext4_fname_crypto_ctx *ctx, 2089 2094 u32 ilen, struct ext4_str *crypto_str); 2090 2095 int _ext4_fname_disk_to_usr(struct ext4_fname_crypto_ctx *ctx, 2096 + struct dx_hash_info *hinfo, 2091 2097 const struct ext4_str *iname, 2092 2098 struct ext4_str *oname); 2093 2099 int ext4_fname_disk_to_usr(struct ext4_fname_crypto_ctx *ctx, 2100 + struct dx_hash_info *hinfo, 2094 2101 const struct ext4_dir_entry_2 *de, 2095 2102 struct ext4_str *oname); 2096 2103 int ext4_fname_usr_to_disk(struct ext4_fname_crypto_ctx *ctx, ··· 2101 2104 int ext4_fname_usr_to_hash(struct ext4_fname_crypto_ctx *ctx, 2102 2105 const struct qstr *iname, 2103 2106 struct dx_hash_info *hinfo); 2104 - int ext4_fname_disk_to_hash(struct ext4_fname_crypto_ctx *ctx, 2105 - const struct ext4_dir_entry_2 *de, 2106 - struct dx_hash_info *hinfo); 2107 2107 int ext4_fname_crypto_namelen_on_disk(struct ext4_fname_crypto_ctx *ctx, 2108 2108 u32 namelen); 2109 + int ext4_fname_match(struct ext4_fname_crypto_ctx *ctx, struct ext4_str *cstr, 2110 + int len, const char * const name, 2111 + struct ext4_dir_entry_2 *de); 2112 + 2109 2113 2110 2114 #ifdef CONFIG_EXT4_FS_ENCRYPTION 2111 2115 void ext4_put_fname_crypto_ctx(struct ext4_fname_crypto_ctx **ctx);
+10 -1
fs/ext4/ext4_crypto.h
··· 20 20 char version; 21 21 char contents_encryption_mode; 22 22 char filenames_encryption_mode; 23 + char flags; 23 24 char master_key_descriptor[EXT4_KEY_DESCRIPTOR_SIZE]; 24 25 } __attribute__((__packed__)); 25 26 26 27 #define EXT4_ENCRYPTION_CONTEXT_FORMAT_V1 1 27 28 #define EXT4_KEY_DERIVATION_NONCE_SIZE 16 29 + 30 + #define EXT4_POLICY_FLAGS_PAD_4 0x00 31 + #define EXT4_POLICY_FLAGS_PAD_8 0x01 32 + #define EXT4_POLICY_FLAGS_PAD_16 0x02 33 + #define EXT4_POLICY_FLAGS_PAD_32 0x03 34 + #define EXT4_POLICY_FLAGS_PAD_MASK 0x03 35 + #define EXT4_POLICY_FLAGS_VALID 0x03 28 36 29 37 /** 30 38 * Encryption context for inode ··· 49 41 char format; 50 42 char contents_encryption_mode; 51 43 char filenames_encryption_mode; 52 - char reserved; 44 + char flags; 53 45 char master_key_descriptor[EXT4_KEY_DESCRIPTOR_SIZE]; 54 46 char nonce[EXT4_KEY_DERIVATION_NONCE_SIZE]; 55 47 } __attribute__((__packed__)); ··· 128 120 struct crypto_hash *htfm; 129 121 struct page *workpage; 130 122 struct ext4_encryption_key key; 123 + unsigned flags : 8; 131 124 unsigned has_valid_key : 1; 132 125 unsigned ctfm_key_is_ready : 1; 133 126 };
+8 -7
fs/ext4/extents.c
··· 4927 4927 if (ret) 4928 4928 return ret; 4929 4929 4930 - /* 4931 - * currently supporting (pre)allocate mode for extent-based 4932 - * files _only_ 4933 - */ 4934 - if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) 4935 - return -EOPNOTSUPP; 4936 - 4937 4930 if (mode & FALLOC_FL_COLLAPSE_RANGE) 4938 4931 return ext4_collapse_range(inode, offset, len); 4939 4932 ··· 4947 4954 flags |= EXT4_GET_BLOCKS_KEEP_SIZE; 4948 4955 4949 4956 mutex_lock(&inode->i_mutex); 4957 + 4958 + /* 4959 + * We only support preallocation for extent-based files 4960 + */ 4961 + if (!(ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))) { 4962 + ret = -EOPNOTSUPP; 4963 + goto out; 4964 + } 4950 4965 4951 4966 if (!(mode & FALLOC_FL_KEEP_SIZE) && 4952 4967 offset + len > i_size_read(inode)) {
+8
fs/ext4/extents_status.c
··· 703 703 704 704 BUG_ON(end < lblk); 705 705 706 + if ((status & EXTENT_STATUS_DELAYED) && 707 + (status & EXTENT_STATUS_WRITTEN)) { 708 + ext4_warning(inode->i_sb, "Inserting extent [%u/%u] as " 709 + "delayed and written, which can potentially " 710 + "cause data loss.\n", lblk, len); 711 + WARN_ON(1); 712 + } 713 + 706 714 newes.es_lblk = lblk; 707 715 newes.es_len = len; 708 716 ext4_es_store_pblock_status(&newes, pblk, status);
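The new sanity check rejects an extent flagged as both delayed (no blocks allocated yet) and written (already on disk), since the two states are mutually exclusive. The predicate can be sketched on its own; the flag values and function name here are illustrative, not ext4's:

```c
#include <stdbool.h>

#define EXTENT_STATUS_WRITTEN  0x1
#define EXTENT_STATUS_DELAYED  0x2

/* An extent cannot be both unallocated (delayed) and on disk (written). */
static bool es_status_valid(unsigned status)
{
    return (status & (EXTENT_STATUS_DELAYED | EXTENT_STATUS_WRITTEN))
           != (EXTENT_STATUS_DELAYED | EXTENT_STATUS_WRITTEN);
}
```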
+2
fs/ext4/inode.c
··· 531 531 status = map->m_flags & EXT4_MAP_UNWRITTEN ? 532 532 EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN; 533 533 if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) && 534 + !(status & EXTENT_STATUS_WRITTEN) && 534 535 ext4_find_delalloc_range(inode, map->m_lblk, 535 536 map->m_lblk + map->m_len - 1)) 536 537 status |= EXTENT_STATUS_DELAYED; ··· 636 635 status = map->m_flags & EXT4_MAP_UNWRITTEN ? 637 636 EXTENT_STATUS_UNWRITTEN : EXTENT_STATUS_WRITTEN; 638 637 if (!(flags & EXT4_GET_BLOCKS_DELALLOC_RESERVE) && 638 + !(status & EXTENT_STATUS_WRITTEN) && 639 639 ext4_find_delalloc_range(inode, map->m_lblk, 640 640 map->m_lblk + map->m_len - 1)) 641 641 status |= EXTENT_STATUS_DELAYED;
+6 -66
fs/ext4/namei.c
··· 640 640 ext4_put_fname_crypto_ctx(&ctx); 641 641 ctx = NULL; 642 642 } 643 - res = ext4_fname_disk_to_usr(ctx, de, 643 + res = ext4_fname_disk_to_usr(ctx, NULL, de, 644 644 &fname_crypto_str); 645 645 if (res < 0) { 646 646 printk(KERN_WARNING "Error " ··· 653 653 name = fname_crypto_str.name; 654 654 len = fname_crypto_str.len; 655 655 } 656 - res = ext4_fname_disk_to_hash(ctx, de, 657 - &h); 658 - if (res < 0) { 659 - printk(KERN_WARNING "Error " 660 - "converting filename " 661 - "from disk to htree" 662 - "\n"); 663 - h.hash = 0xDEADBEEF; 664 - } 656 + ext4fs_dirhash(de->name, de->name_len, 657 + &h); 665 658 printk("%*.s:(E)%x.%u ", len, name, 666 659 h.hash, (unsigned) ((char *) de 667 660 - base)); ··· 1001 1008 /* silently ignore the rest of the block */ 1002 1009 break; 1003 1010 } 1004 - #ifdef CONFIG_EXT4_FS_ENCRYPTION 1005 - err = ext4_fname_disk_to_hash(ctx, de, hinfo); 1006 - if (err < 0) { 1007 - count = err; 1008 - goto errout; 1009 - } 1010 - #else 1011 1011 ext4fs_dirhash(de->name, de->name_len, hinfo); 1012 - #endif 1013 1012 if ((hinfo->hash < start_hash) || 1014 1013 ((hinfo->hash == start_hash) && 1015 1014 (hinfo->minor_hash < start_minor_hash))) ··· 1017 1032 &tmp_str); 1018 1033 } else { 1019 1034 /* Directory is encrypted */ 1020 - err = ext4_fname_disk_to_usr(ctx, de, 1035 + err = ext4_fname_disk_to_usr(ctx, hinfo, de, 1021 1036 &fname_crypto_str); 1022 1037 if (err < 0) { 1023 1038 count = err; ··· 1178 1193 int count = 0; 1179 1194 char *base = (char *) de; 1180 1195 struct dx_hash_info h = *hinfo; 1181 - #ifdef CONFIG_EXT4_FS_ENCRYPTION 1182 - struct ext4_fname_crypto_ctx *ctx = NULL; 1183 - int err; 1184 - 1185 - ctx = ext4_get_fname_crypto_ctx(dir, EXT4_NAME_LEN); 1186 - if (IS_ERR(ctx)) 1187 - return PTR_ERR(ctx); 1188 - #endif 1189 1196 1190 1197 while ((char *) de < base + blocksize) { 1191 1198 if (de->name_len && de->inode) { 1192 - #ifdef CONFIG_EXT4_FS_ENCRYPTION 1193 - err = ext4_fname_disk_to_hash(ctx, de, &h); 1194 - if 
(err < 0) { 1195 - ext4_put_fname_crypto_ctx(&ctx); 1196 - return err; 1197 - } 1198 - #else 1199 1199 ext4fs_dirhash(de->name, de->name_len, &h); 1200 - #endif 1201 1200 map_tail--; 1202 1201 map_tail->hash = h.hash; 1203 1202 map_tail->offs = ((char *) de - base)>>2; ··· 1192 1223 /* XXX: do we need to check rec_len == 0 case? -Chris */ 1193 1224 de = ext4_next_entry(de, blocksize); 1194 1225 } 1195 - #ifdef CONFIG_EXT4_FS_ENCRYPTION 1196 - ext4_put_fname_crypto_ctx(&ctx); 1197 - #endif 1198 1226 return count; 1199 1227 } 1200 1228 ··· 1253 1287 return 0; 1254 1288 1255 1289 #ifdef CONFIG_EXT4_FS_ENCRYPTION 1256 - if (ctx) { 1257 - /* Directory is encrypted */ 1258 - res = ext4_fname_disk_to_usr(ctx, de, fname_crypto_str); 1259 - if (res < 0) 1260 - return res; 1261 - if (len != res) 1262 - return 0; 1263 - res = memcmp(name, fname_crypto_str->name, len); 1264 - return (res == 0) ? 1 : 0; 1265 - } 1290 + if (ctx) 1291 + return ext4_fname_match(ctx, fname_crypto_str, len, name, de); 1266 1292 #endif 1267 1293 if (len != de->name_len) 1268 1294 return 0; ··· 1281 1323 ctx = ext4_get_fname_crypto_ctx(dir, EXT4_NAME_LEN); 1282 1324 if (IS_ERR(ctx)) 1283 1325 return -1; 1284 - 1285 - if (ctx != NULL) { 1286 - /* Allocate buffer to hold maximum name length */ 1287 - res = ext4_fname_crypto_alloc_buffer(ctx, EXT4_NAME_LEN, 1288 - &fname_crypto_str); 1289 - if (res < 0) { 1290 - ext4_put_fname_crypto_ctx(&ctx); 1291 - return -1; 1292 - } 1293 - } 1294 1326 1295 1327 de = (struct ext4_dir_entry_2 *)search_buf; 1296 1328 dlimit = search_buf + buf_size; ··· 1820 1872 return res; 1821 1873 } 1822 1874 reclen = EXT4_DIR_REC_LEN(res); 1823 - 1824 - /* Allocate buffer to hold maximum name length */ 1825 - res = ext4_fname_crypto_alloc_buffer(ctx, EXT4_NAME_LEN, 1826 - &fname_crypto_str); 1827 - if (res < 0) { 1828 - ext4_put_fname_crypto_ctx(&ctx); 1829 - return -1; 1830 - } 1831 1875 } 1832 1876 1833 1877 de = (struct ext4_dir_entry_2 *)buf;
+5 -2
fs/ext4/resize.c
··· 1432 1432 goto exit; 1433 1433 /* 1434 1434 * We will always be modifying at least the superblock and GDT 1435 - * block. If we are adding a group past the last current GDT block, 1435 + * blocks. If we are adding a group past the last current GDT block, 1436 1436 * we will also modify the inode and the dindirect block. If we 1437 1437 * are adding a group with superblock/GDT backups we will also 1438 1438 * modify each of the reserved GDT dindirect blocks. 1439 1439 */ 1440 - credit = flex_gd->count * 4 + reserved_gdb; 1440 + credit = 3; /* sb, resize inode, resize inode dindirect */ 1441 + /* GDT blocks */ 1442 + credit += 1 + DIV_ROUND_UP(flex_gd->count, EXT4_DESC_PER_BLOCK(sb)); 1443 + credit += reserved_gdb; /* Reserved GDT dindirect blocks */ 1441 1444 handle = ext4_journal_start_sb(sb, EXT4_HT_RESIZE, credit); 1442 1445 if (IS_ERR(handle)) { 1443 1446 err = PTR_ERR(handle);
+1 -1
fs/ext4/symlink.c
··· 74 74 goto errout; 75 75 } 76 76 pstr.name = paddr; 77 - res = _ext4_fname_disk_to_usr(ctx, &cstr, &pstr); 77 + res = _ext4_fname_disk_to_usr(ctx, NULL, &cstr, &pstr); 78 78 if (res < 0) 79 79 goto errout; 80 80 /* Null-terminate the name */
+7
fs/f2fs/data.c
··· 1513 1513 { 1514 1514 struct inode *inode = mapping->host; 1515 1515 struct f2fs_sb_info *sbi = F2FS_I_SB(inode); 1516 + bool locked = false; 1516 1517 int ret; 1517 1518 long diff; 1518 1519 ··· 1534 1533 1535 1534 diff = nr_pages_to_write(sbi, DATA, wbc); 1536 1535 1536 + if (!S_ISDIR(inode->i_mode)) { 1537 + mutex_lock(&sbi->writepages); 1538 + locked = true; 1539 + } 1537 1540 ret = write_cache_pages(mapping, wbc, __f2fs_writepage, mapping); 1541 + if (locked) 1542 + mutex_unlock(&sbi->writepages); 1538 1543 1539 1544 f2fs_submit_merged_bio(sbi, DATA, WRITE); 1540 1545
+1
fs/f2fs/f2fs.h
··· 625 625 struct mutex cp_mutex; /* checkpoint procedure lock */ 626 626 struct rw_semaphore cp_rwsem; /* blocking FS operations */ 627 627 struct rw_semaphore node_write; /* locking node writes */ 628 + struct mutex writepages; /* mutex for writepages() */ 628 629 wait_queue_head_t cp_wait; 629 630 630 631 struct inode_management im[MAX_INO_ENTRY]; /* manage inode cache */
+3 -5
fs/f2fs/namei.c
··· 298 298 299 299 static void *f2fs_follow_link(struct dentry *dentry, struct nameidata *nd) 300 300 { 301 - struct page *page; 301 + struct page *page = page_follow_link_light(dentry, nd); 302 302 303 - page = page_follow_link_light(dentry, nd); 304 - if (IS_ERR(page)) 303 + if (IS_ERR_OR_NULL(page)) 305 304 return page; 306 305 307 306 /* this is broken symlink case */ 308 307 if (*nd_get_link(nd) == 0) { 309 - kunmap(page); 310 - page_cache_release(page); 308 + page_put_link(dentry, nd, page); 311 309 return ERR_PTR(-ENOENT); 312 310 } 313 311 return page;
+1
fs/f2fs/super.c
··· 1035 1035 sbi->raw_super = raw_super; 1036 1036 sbi->raw_super_buf = raw_super_buf; 1037 1037 mutex_init(&sbi->gc_mutex); 1038 + mutex_init(&sbi->writepages); 1038 1039 mutex_init(&sbi->cp_mutex); 1039 1040 init_rwsem(&sbi->node_write); 1040 1041 clear_sbi_flag(sbi, SBI_POR_DOING);
+15 -7
fs/namei.c
··· 1415 1415 */ 1416 1416 if (nd->flags & LOOKUP_RCU) { 1417 1417 unsigned seq; 1418 + bool negative; 1418 1419 dentry = __d_lookup_rcu(parent, &nd->last, &seq); 1419 1420 if (!dentry) 1420 1421 goto unlazy; ··· 1425 1424 * the dentry name information from lookup. 1426 1425 */ 1427 1426 *inode = dentry->d_inode; 1427 + negative = d_is_negative(dentry); 1428 1428 if (read_seqcount_retry(&dentry->d_seq, seq)) 1429 1429 return -ECHILD; 1430 + if (negative) 1431 + return -ENOENT; 1430 1432 1431 1433 /* 1432 1434 * This sequence count validates that the parent had no ··· 1476 1472 goto need_lookup; 1477 1473 } 1478 1474 1475 + if (unlikely(d_is_negative(dentry))) { 1476 + dput(dentry); 1477 + return -ENOENT; 1478 + } 1479 1479 path->mnt = mnt; 1480 1480 path->dentry = dentry; 1481 1481 err = follow_managed(path, nd->flags); ··· 1591 1583 goto out_err; 1592 1584 1593 1585 inode = path->dentry->d_inode; 1586 + err = -ENOENT; 1587 + if (d_is_negative(path->dentry)) 1588 + goto out_path_put; 1594 1589 } 1595 - err = -ENOENT; 1596 - if (d_is_negative(path->dentry)) 1597 - goto out_path_put; 1598 1590 1599 1591 if (should_follow_link(path->dentry, follow)) { 1600 1592 if (nd->flags & LOOKUP_RCU) { ··· 3044 3036 3045 3037 BUG_ON(nd->flags & LOOKUP_RCU); 3046 3038 inode = path->dentry->d_inode; 3047 - finish_lookup: 3048 - /* we _can_ be in RCU mode here */ 3049 3039 error = -ENOENT; 3050 3040 if (d_is_negative(path->dentry)) { 3051 3041 path_to_nameidata(path, nd); 3052 3042 goto out; 3053 3043 } 3054 - 3044 + finish_lookup: 3045 + /* we _can_ be in RCU mode here */ 3055 3046 if (should_follow_link(path->dentry, !symlink_ok)) { 3056 3047 if (nd->flags & LOOKUP_RCU) { 3057 3048 if (unlikely(nd->path.mnt != path->mnt || ··· 3233 3226 3234 3227 if (unlikely(file->f_flags & __O_TMPFILE)) { 3235 3228 error = do_tmpfile(dfd, pathname, nd, flags, op, file, &opened); 3236 - goto out; 3229 + goto out2; 3237 3230 } 3238 3231 3239 3232 error = path_init(dfd, pathname, flags, nd); ··· 
3263 3256 } 3264 3257 out: 3265 3258 path_cleanup(nd); 3259 + out2: 3266 3260 if (!(opened & FILE_OPENED)) { 3267 3261 BUG_ON(!error); 3268 3262 put_filp(file);
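The `lookup_fast()` reordering follows the usual seqcount read discipline: snapshot every field you need (including the negative-dentry status), validate the sequence count, and only then act on the snapshot. A minimal single-threaded sketch of that pattern; the struct and helper names are stand-ins, not the kernel's:

```c
#include <stdbool.h>
#include <stddef.h>

struct dentry_sketch {
    unsigned seq;   /* even = stable; a writer would bump it */
    void *inode;    /* NULL stands in for a negative dentry */
};

/* Returns 0 on success, -1 if the snapshot raced with a writer
 * (think -ECHILD), -2 (think -ENOENT) if the validated snapshot
 * turned out to be negative. */
static int lookup_sketch(const struct dentry_sketch *d, void **inode_out)
{
    unsigned seq = d->seq;            /* read_seqcount_begin() */
    void *inode = d->inode;           /* snapshot the fields... */
    bool negative = (inode == NULL);  /* ...including negative status */
    if (d->seq != seq)                /* read_seqcount_retry() */
        return -1;
    if (negative)                     /* act only after validation */
        return -2;
    *inode_out = inode;
    return 0;
}

/* Tiny driver so the two outcomes are easy to exercise. */
static int lookup_demo(int negative)
{
    static int stub;
    struct dentry_sketch d = { 2, negative ? NULL : &stub };
    void *out = NULL;
    return lookup_sketch(&d, &out);
}
```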
+6
fs/namespace.c
··· 3179 3179 if (mnt->mnt.mnt_sb->s_type != type) 3180 3180 continue; 3181 3181 3182 + /* This mount is not fully visible if its root directory 3183 + * is not the root directory of the filesystem. 3184 + */ 3185 + if (mnt->mnt.mnt_root != mnt->mnt.mnt_sb->s_root) 3186 + continue; 3187 + 3182 3188 /* This mount is not fully visible if there are any child mounts 3183 3189 * that cover anything except for empty directories. 3184 3190 */
+1 -1
fs/nilfs2/btree.c
··· 388 388 nchildren = nilfs_btree_node_get_nchildren(node); 389 389 390 390 if (unlikely(level < NILFS_BTREE_LEVEL_NODE_MIN || 391 - level > NILFS_BTREE_LEVEL_MAX || 391 + level >= NILFS_BTREE_LEVEL_MAX || 392 392 nchildren < 0 || 393 393 nchildren > NILFS_BTREE_ROOT_NCHILDREN_MAX)) { 394 394 pr_crit("NILFS: bad btree root (inode number=%lu): level = %d, flags = 0x%x, nchildren = %d\n",
+13
fs/ocfs2/dlm/dlmmaster.c
··· 757 757 if (tmpres) { 758 758 spin_unlock(&dlm->spinlock); 759 759 spin_lock(&tmpres->spinlock); 760 + 761 + /* 762 + * Right after dlm spinlock was released, dlm_thread could have 763 + * purged the lockres. Check if lockres got unhashed. If so 764 + * start over. 765 + */ 766 + if (hlist_unhashed(&tmpres->hash_node)) { 767 + spin_unlock(&tmpres->spinlock); 768 + dlm_lockres_put(tmpres); 769 + tmpres = NULL; 770 + goto lookup; 771 + } 772 + 760 773 /* Wait on the thread that is mastering the resource */ 761 774 if (tmpres->owner == DLM_LOCK_RES_OWNER_UNKNOWN) { 762 775 __dlm_wait_on_lockres(tmpres);
+11 -1
fs/splice.c
··· 1161 1161 long ret, bytes; 1162 1162 umode_t i_mode; 1163 1163 size_t len; 1164 - int i, flags; 1164 + int i, flags, more; 1165 1165 1166 1166 /* 1167 1167 * We require the input being a regular file, as we don't want to ··· 1204 1204 * Don't block on output, we have to drain the direct pipe. 1205 1205 */ 1206 1206 sd->flags &= ~SPLICE_F_NONBLOCK; 1207 + more = sd->flags & SPLICE_F_MORE; 1207 1208 1208 1209 while (len) { 1209 1210 size_t read_len; ··· 1217 1216 read_len = ret; 1218 1217 sd->total_len = read_len; 1219 1218 1219 + /* 1220 + * If more data is pending, set SPLICE_F_MORE. 1221 + * If this is the last chunk and SPLICE_F_MORE was not set 1222 + * initially, clear it. 1223 + */ 1224 + if (read_len < len) 1225 + sd->flags |= SPLICE_F_MORE; 1226 + else if (!more) 1227 + sd->flags &= ~SPLICE_F_MORE; 1220 1228 /* 1221 1229 * NOTE: nonblocking mode only applies to the input. We 1222 1230 * must not do the output in nonblocking mode as then we
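The splice change keeps SPLICE_F_MORE set while data is still pending in the loop and restores the caller's original setting on the final chunk. The per-iteration flag update can be sketched in isolation; the flag value and function name here are illustrative:

```c
#include <stddef.h>

#define SPLICE_F_MORE 0x4

/* One iteration's flag update: read_len bytes were just read, len bytes
 * remain overall, `more` records whether the caller had SPLICE_F_MORE
 * set before the loop started. */
static unsigned splice_more_flags(unsigned flags, int more,
                                  size_t read_len, size_t len)
{
    if (read_len < len)
        flags |= SPLICE_F_MORE;   /* more data still to push */
    else if (!more)
        flags &= ~SPLICE_F_MORE;  /* last chunk, caller didn't ask for MORE */
    return flags;
}
```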
-1
include/acpi/actypes.h
··· 124 124 #ifndef ACPI_USE_SYSTEM_INTTYPES 125 125 126 126 typedef unsigned char u8; 127 - typedef unsigned char u8; 128 127 typedef unsigned short u16; 129 128 typedef short s16; 130 129 typedef COMPILER_DEPENDENT_UINT64 u64;
+1 -1
include/linux/blk_types.h
··· 220 220 221 221 /* This mask is used for both bio and request merge checking */ 222 222 #define REQ_NOMERGE_FLAGS \ 223 - (REQ_NOMERGE | REQ_STARTED | REQ_SOFTBARRIER | REQ_FLUSH | REQ_FUA) 223 + (REQ_NOMERGE | REQ_STARTED | REQ_SOFTBARRIER | REQ_FLUSH | REQ_FUA | REQ_FLUSH_SEQ) 224 224 225 225 #define REQ_RAHEAD (1ULL << __REQ_RAHEAD) 226 226 #define REQ_THROTTLED (1ULL << __REQ_THROTTLED)
+15 -1
include/linux/compiler-gcc.h
··· 9 9 + __GNUC_MINOR__ * 100 \ 10 10 + __GNUC_PATCHLEVEL__) 11 11 12 - 13 12 /* Optimization barrier */ 13 + 14 14 /* The "volatile" is due to gcc bugs */ 15 15 #define barrier() __asm__ __volatile__("": : :"memory") 16 + /* 17 + * This version is used, e.g., to prevent dead-store elimination on @ptr, 18 + * where gcc and llvm may behave differently when otherwise using a 19 + * normal barrier(): while gcc gets along with a plain barrier(), 20 + * llvm needs an explicit input variable to be assumed clobbered. 21 + * The issue is as follows: while the inline asm might access any 22 + * memory it wants, the compiler could have fit all of @ptr into 23 + * registers instead, and since @ptr never escaped from them, it 24 + * proved that the inline asm wasn't touching any of it. This 25 + * version works well with both compilers, i.e. we're telling the 26 + * compiler that the inline asm absolutely may see the contents 27 + * of @ptr. See also: https://llvm.org/bugs/show_bug.cgi?id=15495 28 + */ 29 + #define barrier_data(ptr) __asm__ __volatile__("": :"r"(ptr) :"memory") 16 30 17 31 /* 18 32 * This macro obfuscates arithmetic on a variable address so that gcc
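The classic consumer of `barrier_data()` is secure-wipe code: without a barrier that takes the pointer as an asm input, clang may delete the final `memset()` as a dead store. A hedged userspace sketch; the macro body mirrors the patch, while `secure_zero` is a made-up name (the kernel's equivalent is `memzero_explicit()`):

```c
#include <string.h>

/* Same shape as the patch's barrier_data(): the "r"(ptr) input tells
 * the compiler the asm may read whatever ptr points at, so a preceding
 * memset() cannot be proven dead and eliminated. */
#define barrier_data(ptr) __asm__ __volatile__("" : : "r"(ptr) : "memory")

/* Illustrative wipe helper: the barrier keeps the memset even at -O2. */
static void secure_zero(void *p, size_t n)
{
    memset(p, 0, n);
    barrier_data(p);
}
```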
+3
include/linux/compiler-intel.h
··· 13 13 /* Intel ECC compiler doesn't support gcc specific asm stmts. 14 14 * It uses intrinsics to do the equivalent things. 15 15 */ 16 + #undef barrier_data 16 17 #undef RELOC_HIDE 17 18 #undef OPTIMIZER_HIDE_VAR 19 + 20 + #define barrier_data(ptr) barrier() 18 21 19 22 #define RELOC_HIDE(ptr, off) \ 20 23 ({ unsigned long __ptr; \
+4
include/linux/compiler.h
··· 169 169 # define barrier() __memory_barrier() 170 170 #endif 171 171 172 + #ifndef barrier_data 173 + # define barrier_data(ptr) barrier() 174 + #endif 175 + 172 176 /* Unreachable code */ 173 177 #ifndef unreachable 174 178 # define unreachable() do { } while (1)
+1 -1
include/linux/ftrace_event.h
··· 46 46 const unsigned char *buf, int len); 47 47 48 48 const char *ftrace_print_array_seq(struct trace_seq *p, 49 - const void *buf, int buf_len, 49 + const void *buf, int count, 50 50 size_t el_size); 51 51 52 52 struct trace_iterator;
-2
include/linux/irqchip/arm-gic.h
··· 95 95 96 96 struct device_node; 97 97 98 - extern struct irq_chip gic_arch_extn; 99 - 100 98 void gic_set_irqchip_flags(unsigned long flags); 101 99 void gic_init_bases(unsigned int, int, void __iomem *, void __iomem *, 102 100 u32 offset, struct device_node *);
+4
include/linux/kexec.h
··· 40 40 #error KEXEC_CONTROL_MEMORY_LIMIT not defined 41 41 #endif 42 42 43 + #ifndef KEXEC_CONTROL_MEMORY_GFP 44 + #define KEXEC_CONTROL_MEMORY_GFP GFP_KERNEL 45 + #endif 46 + 43 47 #ifndef KEXEC_CONTROL_PAGE_SIZE 44 48 #error KEXEC_CONTROL_PAGE_SIZE not defined 45 49 #endif
+11 -5
include/linux/netdevice.h
··· 60 60 struct wireless_dev; 61 61 /* 802.15.4 specific */ 62 62 struct wpan_dev; 63 + struct mpls_dev; 63 64 64 65 void netdev_set_default_ethtool_ops(struct net_device *dev, 65 66 const struct ethtool_ops *ops); ··· 977 976 * int (*ndo_bridge_setlink)(struct net_device *dev, struct nlmsghdr *nlh, 978 977 * u16 flags) 979 978 * int (*ndo_bridge_getlink)(struct sk_buff *skb, u32 pid, u32 seq, 980 - * struct net_device *dev, u32 filter_mask) 979 + * struct net_device *dev, u32 filter_mask, 980 + * int nlflags) 981 981 * int (*ndo_bridge_dellink)(struct net_device *dev, struct nlmsghdr *nlh, 982 982 * u16 flags); 983 983 * ··· 1174 1172 int (*ndo_bridge_getlink)(struct sk_buff *skb, 1175 1173 u32 pid, u32 seq, 1176 1174 struct net_device *dev, 1177 - u32 filter_mask); 1175 + u32 filter_mask, 1176 + int nlflags); 1178 1177 int (*ndo_bridge_dellink)(struct net_device *dev, 1179 1178 struct nlmsghdr *nlh, 1180 1179 u16 flags); ··· 1630 1627 void *ax25_ptr; 1631 1628 struct wireless_dev *ieee80211_ptr; 1632 1629 struct wpan_dev *ieee802154_ptr; 1630 + #if IS_ENABLED(CONFIG_MPLS_ROUTING) 1631 + struct mpls_dev __rcu *mpls_ptr; 1632 + #endif 1633 1633 1634 1634 /* 1635 1635 * Cache lines mostly used on receive path (including eth_type_trans()) ··· 2027 2021 ({ \ 2028 2022 typeof(type) __percpu *pcpu_stats = alloc_percpu(type); \ 2029 2023 if (pcpu_stats) { \ 2030 - int i; \ 2031 - for_each_possible_cpu(i) { \ 2024 + int __cpu; \ 2025 + for_each_possible_cpu(__cpu) { \ 2032 2026 typeof(type) *stat; \ 2033 - stat = per_cpu_ptr(pcpu_stats, i); \ 2027 + stat = per_cpu_ptr(pcpu_stats, __cpu); \ 2034 2028 u64_stats_init(&stat->syncp); \ 2035 2029 } \ 2036 2030 } \
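The `i` → `__cpu` rename in the per-cpu stats allocation macro is plain macro hygiene: a statement-expression that declares a common name like `i` captures any `i` appearing in the macro's argument at the expansion site. A toy demonstration of the hazard; nothing here is kernel API, and both macros are invented for illustration:

```c
/* BAD: declares `n`, which shadows any `n` used inside the argument
 * expression (e.g. SUM_TO_BAD(vals[n]) would read the inner `n`). */
#define SUM_TO_BAD(x) \
    ({ int n, __s = 0; for (n = 1; n <= (x); n++) __s += n; __s; })

/* GOOD: a reserved-looking name avoids capturing the caller's variables. */
#define SUM_TO(x) \
    ({ int __n, __s = 0; for (__n = 1; __n <= (x); __n++) __s += __n; __s; })

/* Caller iterates with `n`; the argument vals[n] must see *this* n. */
static int sum_each(const int *vals, int count)
{
    int n, total = 0;
    for (n = 0; n < count; n++)
        total += SUM_TO(vals[n]);
    return total;
}
```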
+14 -2
include/linux/netfilter_bridge.h
··· 39 39 40 40 static inline int nf_bridge_get_physinif(const struct sk_buff *skb) 41 41 { 42 - return skb->nf_bridge ? skb->nf_bridge->physindev->ifindex : 0; 42 + struct nf_bridge_info *nf_bridge; 43 + 44 + if (skb->nf_bridge == NULL) 45 + return 0; 46 + 47 + nf_bridge = skb->nf_bridge; 48 + return nf_bridge->physindev ? nf_bridge->physindev->ifindex : 0; 43 49 } 44 50 45 51 static inline int nf_bridge_get_physoutif(const struct sk_buff *skb) 46 52 { 47 - return skb->nf_bridge ? skb->nf_bridge->physoutdev->ifindex : 0; 53 + struct nf_bridge_info *nf_bridge; 54 + 55 + if (skb->nf_bridge == NULL) 56 + return 0; 57 + 58 + nf_bridge = skb->nf_bridge; 59 + return nf_bridge->physoutdev ? nf_bridge->physoutdev->ifindex : 0; 48 60 } 49 61 50 62 static inline struct net_device *
+1 -1
include/linux/nilfs2_fs.h
··· 460 460 /* level */ 461 461 #define NILFS_BTREE_LEVEL_DATA 0 462 462 #define NILFS_BTREE_LEVEL_NODE_MIN (NILFS_BTREE_LEVEL_DATA + 1) 463 - #define NILFS_BTREE_LEVEL_MAX 14 463 + #define NILFS_BTREE_LEVEL_MAX 14 /* Max level (exclusive) */ 464 464 465 465 /** 466 466 * struct nilfs_palloc_group_desc - block group descriptor
-4
include/linux/pci_ids.h
··· 2541 2541 2542 2542 #define PCI_VENDOR_ID_INTEL 0x8086 2543 2543 #define PCI_DEVICE_ID_INTEL_EESSC 0x0008 2544 - #define PCI_DEVICE_ID_INTEL_SNB_IMC 0x0100 2545 - #define PCI_DEVICE_ID_INTEL_IVB_IMC 0x0154 2546 - #define PCI_DEVICE_ID_INTEL_IVB_E3_IMC 0x0150 2547 - #define PCI_DEVICE_ID_INTEL_HSW_IMC 0x0c00 2548 2544 #define PCI_DEVICE_ID_INTEL_PXHD_0 0x0320 2549 2545 #define PCI_DEVICE_ID_INTEL_PXHD_1 0x0321 2550 2546 #define PCI_DEVICE_ID_INTEL_PXH_0 0x0329
+2 -1
include/linux/rhashtable.h
··· 282 282 static inline bool rht_grow_above_100(const struct rhashtable *ht, 283 283 const struct bucket_table *tbl) 284 284 { 285 - return atomic_read(&ht->nelems) > tbl->size; 285 + return atomic_read(&ht->nelems) > tbl->size && 286 + (!ht->p.max_size || tbl->size < ht->p.max_size); 286 287 } 287 288 288 289 /* The bucket lock is selected based on the hash and protects mutations
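The amended `rht_grow_above_100()` only reports pressure when growth is still allowed: load above 100% *and* the table below the configured `max_size`, with 0 meaning unlimited. As a standalone predicate; the struct here is a simplified stand-in for the rhashtable types:

```c
#include <stdbool.h>

struct tbl_sketch {
    unsigned nelems;    /* elements currently stored */
    unsigned size;      /* current bucket count */
    unsigned max_size;  /* configured cap, 0 = unlimited */
};

/* Grow when there are more elements than buckets, unless the table
 * has already reached its configured maximum size. */
static bool grow_above_100(const struct tbl_sketch *t)
{
    return t->nelems > t->size &&
           (!t->max_size || t->size < t->max_size);
}
```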
+1 -1
include/linux/rtnetlink.h
··· 122 122 123 123 extern int ndo_dflt_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, 124 124 struct net_device *dev, u16 mode, 125 - u32 flags, u32 mask); 125 + u32 flags, u32 mask, int nlflags); 126 126 #endif /* __LINUX_RTNETLINK_H */
-8
include/linux/sched.h
··· 175 175 extern void calc_global_load(unsigned long ticks); 176 176 extern void update_cpu_load_nohz(void); 177 177 178 - /* Notifier for when a task gets migrated to a new CPU */ 179 - struct task_migration_notifier { 180 - struct task_struct *task; 181 - int from_cpu; 182 - int to_cpu; 183 - }; 184 - extern void register_task_migration_notifier(struct notifier_block *n); 185 - 186 178 extern unsigned long get_parent_ip(unsigned long addr); 187 179 188 180 extern void dump_cpu_task(int cpu);
+1
include/linux/skbuff.h
··· 773 773 774 774 struct sk_buff *__alloc_skb(unsigned int size, gfp_t priority, int flags, 775 775 int node); 776 + struct sk_buff *__build_skb(void *data, unsigned int frag_size); 776 777 struct sk_buff *build_skb(void *data, unsigned int frag_size); 777 778 static inline struct sk_buff *alloc_skb(unsigned int size, 778 779 gfp_t priority)
+1
include/linux/tty.h
··· 491 491 492 492 extern void tty_termios_copy_hw(struct ktermios *new, struct ktermios *old); 493 493 extern int tty_termios_hw_change(struct ktermios *a, struct ktermios *b); 494 + extern int tty_set_termios(struct tty_struct *tty, struct ktermios *kt); 494 495 495 496 extern struct tty_ldisc *tty_ldisc_ref(struct tty_struct *); 496 497 extern void tty_ldisc_deref(struct tty_ldisc *);
+2
include/linux/usb_usual.h
··· 77 77 /* Cannot handle ATA_12 or ATA_16 CDBs */ \ 78 78 US_FLAG(NO_REPORT_OPCODES, 0x04000000) \ 79 79 /* Cannot handle MI_REPORT_SUPPORTED_OPERATION_CODES */ \ 80 + US_FLAG(MAX_SECTORS_240, 0x08000000) \ 81 + /* Sets max_sectors to 240 */ \ 80 82 81 83 #define US_FLAG(name, value) US_FL_##name = value , 82 84 enum { US_DO_ALL_FLAGS };
+1 -1
include/linux/util_macros.h
··· 5 5 ({ \ 6 6 typeof(as) __fc_i, __fc_as = (as) - 1; \ 7 7 typeof(x) __fc_x = (x); \ 8 - typeof(*a) *__fc_a = (a); \ 8 + typeof(*a) const *__fc_a = (a); \ 9 9 for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) { \ 10 10 if (__fc_x op DIV_ROUND_CLOSEST(__fc_a[__fc_i] + \ 11 11 __fc_a[__fc_i + 1], 2)) \
-7
include/net/bonding.h
··· 30 30 #include <net/bond_alb.h> 31 31 #include <net/bond_options.h> 32 32 33 - #define DRV_VERSION "3.7.1" 34 - #define DRV_RELDATE "April 27, 2011" 35 - #define DRV_NAME "bonding" 36 - #define DRV_DESCRIPTION "Ethernet Channel Bonding Driver" 37 - 38 - #define bond_version DRV_DESCRIPTION ": v" DRV_VERSION " (" DRV_RELDATE ")\n" 39 - 40 33 #define BOND_MAX_ARP_TARGETS 16 41 34 42 35 #define BOND_DEFAULT_MIIMON 100
+1 -19
include/net/inet_connection_sock.h
··· 279 279 void inet_csk_reqsk_queue_hash_add(struct sock *sk, struct request_sock *req, 280 280 unsigned long timeout); 281 281 282 - static inline void inet_csk_reqsk_queue_removed(struct sock *sk, 283 - struct request_sock *req) 284 - { 285 - reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req); 286 - } 287 - 288 282 static inline void inet_csk_reqsk_queue_added(struct sock *sk, 289 283 const unsigned long timeout) 290 284 { ··· 300 306 return reqsk_queue_is_full(&inet_csk(sk)->icsk_accept_queue); 301 307 } 302 308 303 - static inline void inet_csk_reqsk_queue_unlink(struct sock *sk, 304 - struct request_sock *req) 305 - { 306 - reqsk_queue_unlink(&inet_csk(sk)->icsk_accept_queue, req); 307 - } 308 - 309 - static inline void inet_csk_reqsk_queue_drop(struct sock *sk, 310 - struct request_sock *req) 311 - { 312 - inet_csk_reqsk_queue_unlink(sk, req); 313 - inet_csk_reqsk_queue_removed(sk, req); 314 - reqsk_put(req); 315 - } 309 + void inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req); 316 310 317 311 void inet_csk_destroy_sock(struct sock *sk); 318 312 void inet_csk_prepare_forced_close(struct sock *sk);
-18
include/net/request_sock.h
··· 212 212 return queue->rskq_accept_head == NULL; 213 213 } 214 214 215 - static inline void reqsk_queue_unlink(struct request_sock_queue *queue, 216 - struct request_sock *req) 217 - { 218 - struct listen_sock *lopt = queue->listen_opt; 219 - struct request_sock **prev; 220 - 221 - spin_lock(&queue->syn_wait_lock); 222 - 223 - prev = &lopt->syn_table[req->rsk_hash]; 224 - while (*prev != req) 225 - prev = &(*prev)->dl_next; 226 - *prev = req->dl_next; 227 - 228 - spin_unlock(&queue->syn_wait_lock); 229 - if (del_timer(&req->rsk_timer)) 230 - reqsk_put(req); 231 - } 232 - 233 215 static inline void reqsk_queue_add(struct request_sock_queue *queue, 234 216 struct request_sock *req, 235 217 struct sock *parent,
+1 -2
include/rdma/ib_addr.h
··· 160 160 } 161 161 162 162 /* Important - sockaddr should be a union of sockaddr_in and sockaddr_in6 */ 163 - static inline int rdma_gid2ip(struct sockaddr *out, union ib_gid *gid) 163 + static inline void rdma_gid2ip(struct sockaddr *out, union ib_gid *gid) 164 164 { 165 165 if (ipv6_addr_v4mapped((struct in6_addr *)gid)) { 166 166 struct sockaddr_in *out_in = (struct sockaddr_in *)out; ··· 173 173 out_in->sin6_family = AF_INET6; 174 174 memcpy(&out_in->sin6_addr.s6_addr, gid->raw, 16); 175 175 } 176 - return 0; 177 176 } 178 177 179 178 static inline void iboe_addr_get_sgid(struct rdma_dev_addr *dev_addr,
+4 -3
include/rdma/ib_cm.h
··· 105 105 IB_CM_SIDR_REQ_PRIVATE_DATA_SIZE = 216, 106 106 IB_CM_SIDR_REP_PRIVATE_DATA_SIZE = 136, 107 107 IB_CM_SIDR_REP_INFO_LENGTH = 72, 108 - IB_CM_COMPARE_SIZE = 64 108 + /* compare done u32 at a time */ 109 + IB_CM_COMPARE_SIZE = (64 / sizeof(u32)) 109 110 }; 110 111 111 112 struct ib_cm_id; ··· 338 337 #define IB_SDP_SERVICE_ID_MASK cpu_to_be64(0xFFFFFFFFFFFF0000ULL) 339 338 340 339 struct ib_cm_compare_data { 341 - u8 data[IB_CM_COMPARE_SIZE]; 342 - u8 mask[IB_CM_COMPARE_SIZE]; 340 + u32 data[IB_CM_COMPARE_SIZE]; 341 + u32 mask[IB_CM_COMPARE_SIZE]; 343 342 }; 344 343 345 344 /**
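Switching `IB_CM_COMPARE_SIZE` to u32 units (64 bytes → 16 words) lets the private-data compare run word-at-a-time with the mask applied per word. A sketch of that masked compare; the function name is illustrative, not the ib_cm API:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Compare done u32 at a time: 64 bytes = 16 words. */
#define IB_CM_COMPARE_SIZE (64 / sizeof(uint32_t))

/* Data matches when, under the mask, every word is equal. */
static bool cm_masked_match(const uint32_t *data, const uint32_t *mask,
                            const uint32_t *pkt)
{
    for (size_t i = 0; i < IB_CM_COMPARE_SIZE; i++)
        if ((pkt[i] & mask[i]) != (data[i] & mask[i]))
            return false;
    return true;
}
```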
+25
include/rdma/iw_portmap.h
··· 148 148 int iwpm_add_and_query_mapping_cb(struct sk_buff *, struct netlink_callback *); 149 149 150 150 /** 151 + * iwpm_remote_info_cb - Process remote connecting peer address info, which 152 + * the port mapper has received from the connecting peer 153 + * 154 + * @cb: Contains the received message (payload and netlink header) 155 + * 156 + * Stores the IPv4/IPv6 address info in a hash table 157 + */ 158 + int iwpm_remote_info_cb(struct sk_buff *, struct netlink_callback *); 159 + 160 + /** 151 161 * iwpm_mapping_error_cb - Process port mapper notification for error 152 162 * 153 163 * @skb: ··· 183 173 * @cb: Contains the received message (payload and netlink header) 184 174 */ 185 175 int iwpm_ack_mapping_info_cb(struct sk_buff *, struct netlink_callback *); 176 + 177 + /** 178 + * iwpm_get_remote_info - Get the remote connecting peer address info 179 + * 180 + * @mapped_loc_addr: Mapped local address of the listening peer 181 + * @mapped_rem_addr: Mapped remote address of the connecting peer 182 + * @remote_addr: To store the remote address of the connecting peer 183 + * @nl_client: The index of the netlink client 184 + * 185 + * The remote address info is retrieved and provided to the client in 186 + * the remote_addr. After that it is removed from the hash table 187 + */ 188 + int iwpm_get_remote_info(struct sockaddr_storage *mapped_loc_addr, 189 + struct sockaddr_storage *mapped_rem_addr, 190 + struct sockaddr_storage *remote_addr, u8 nl_client); 186 191 187 192 /** 188 193 * iwpm_create_mapinfo - Store local and mapped IPv4/IPv6 address
+1
include/scsi/scsi_devinfo.h
··· 36 36 for sequential scan */ 37 37 #define BLIST_TRY_VPD_PAGES 0x10000000 /* Attempt to read VPD pages */ 38 38 #define BLIST_NO_RSOC 0x20000000 /* don't try to issue RSOC */ 39 + #define BLIST_MAX_1024 0x40000000 /* maximum 1024 sector cdb length */ 39 40 40 41 #endif
+1 -1
include/sound/designware_i2s.h
··· 1 1 /* 2 - * Copyright (ST) 2012 Rajeev Kumar (rajeev-dlh.kumar@st.com) 2 + * Copyright (ST) 2012 Rajeev Kumar (rajeevkumar.linux@gmail.com) 3 3 * 4 4 * This program is free software; you can redistribute it and/or modify 5 5 * it under the terms of the GNU General Public License as published by
+9 -5
include/sound/emu10k1.h
··· 41 41 42 42 #define EMUPAGESIZE 4096 43 43 #define MAXREQVOICES 8 44 - #define MAXPAGES 8192 44 + #define MAXPAGES0 4096 /* 32 bit mode */ 45 + #define MAXPAGES1 8192 /* 31 bit mode */ 45 46 #define RESERVED 0 46 47 #define NUM_MIDI 16 47 48 #define NUM_G 64 /* use all channels */ ··· 51 50 52 51 /* FIXME? - according to the OSS driver the EMU10K1 needs a 29 bit DMA mask */ 53 52 #define EMU10K1_DMA_MASK 0x7fffffffUL /* 31bit */ 54 - #define AUDIGY_DMA_MASK 0x7fffffffUL /* 31bit FIXME - 32 should work? */ 55 - /* See ALSA bug #1276 - rlrevell */ 53 + #define AUDIGY_DMA_MASK 0xffffffffUL /* 32bit mode */ 56 54 57 55 #define TMEMSIZE 256*1024 58 56 #define TMEMSIZEREG 4 ··· 466 466 467 467 #define MAPB 0x0d /* Cache map B */ 468 468 469 - #define MAP_PTE_MASK 0xffffe000 /* The 19 MSBs of the PTE indexed by the PTI */ 470 - #define MAP_PTI_MASK 0x00001fff /* The 13 bit index to one of the 8192 PTE dwords */ 469 + #define MAP_PTE_MASK0 0xfffff000 /* The 20 MSBs of the PTE indexed by the PTI */ 470 + #define MAP_PTI_MASK0 0x00000fff /* The 12 bit index to one of the 4096 PTE dwords */ 471 + 472 + #define MAP_PTE_MASK1 0xffffe000 /* The 19 MSBs of the PTE indexed by the PTI */ 473 + #define MAP_PTI_MASK1 0x00001fff /* The 13 bit index to one of the 8192 PTE dwords */ 471 474 472 475 /* 0x0e, 0x0f: Not used */ 473 476 ··· 1707 1704 unsigned short model; /* subsystem id */ 1708 1705 unsigned int card_type; /* EMU10K1_CARD_* */ 1709 1706 unsigned int ecard_ctrl; /* ecard control bits */ 1707 + unsigned int address_mode; /* address mode */ 1710 1708 unsigned long dma_mask; /* PCI DMA mask */ 1711 1709 unsigned int delay_pcm_irq; /* in samples */ 1712 1710 int max_cache_pages; /* max memory size / PAGE_SIZE */
+1 -1
include/sound/soc-dapm.h
··· 287 287 .access = SNDRV_CTL_ELEM_ACCESS_TLV_READ | SNDRV_CTL_ELEM_ACCESS_READWRITE,\ 288 288 .tlv.p = (tlv_array), \ 289 289 .get = snd_soc_dapm_get_volsw, .put = snd_soc_dapm_put_volsw, \ 290 - .private_value = SOC_SINGLE_VALUE(reg, shift, max, invert, 0) } 290 + .private_value = SOC_SINGLE_VALUE(reg, shift, max, invert, 1) } 291 291 #define SOC_DAPM_SINGLE_TLV_VIRT(xname, max, tlv_array) \ 292 292 SOC_DAPM_SINGLE(xname, SND_SOC_NOPM, 0, max, 0, tlv_array) 293 293 #define SOC_DAPM_ENUM(xname, xenum) \
+12
include/sound/soc.h
··· 387 387 int snd_soc_register_card(struct snd_soc_card *card); 388 388 int snd_soc_unregister_card(struct snd_soc_card *card); 389 389 int devm_snd_soc_register_card(struct device *dev, struct snd_soc_card *card); 390 + #ifdef CONFIG_PM_SLEEP 390 391 int snd_soc_suspend(struct device *dev); 391 392 int snd_soc_resume(struct device *dev); 393 + #else 394 + static inline int snd_soc_suspend(struct device *dev) 395 + { 396 + return 0; 397 + } 398 + 399 + static inline int snd_soc_resume(struct device *dev) 400 + { 401 + return 0; 402 + } 403 + #endif 392 404 int snd_soc_poweroff(struct device *dev); 393 405 int snd_soc_register_platform(struct device *dev, 394 406 const struct snd_soc_platform_driver *platform_drv);
+1 -1
include/sound/spear_dma.h
··· 1 1 /* 2 2 * linux/spear_dma.h 3 3 * 4 - * Copyright (ST) 2012 Rajeev Kumar (rajeev-dlh.kumar@st.com) 4 + * Copyright (ST) 2012 Rajeev Kumar (rajeevkumar.linux@gmail.com) 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License as published by
+1 -1
include/uapi/linux/virtio_ring.h
··· 155 155 } 156 156 157 157 /* The following is used with USED_EVENT_IDX and AVAIL_EVENT_IDX */ 158 - /* Assuming a given event_idx value from the other size, if 158 + /* Assuming a given event_idx value from the other side, if 159 159 * we have just incremented index from old to new_idx, 160 160 * should we trigger an event? */ 161 161 static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
+1
include/uapi/rdma/rdma_netlink.h
··· 37 37 RDMA_NL_IWPM_ADD_MAPPING, 38 38 RDMA_NL_IWPM_QUERY_MAPPING, 39 39 RDMA_NL_IWPM_REMOVE_MAPPING, 40 + RDMA_NL_IWPM_REMOTE_INFO, 40 41 RDMA_NL_IWPM_HANDLE_ERR, 41 42 RDMA_NL_IWPM_MAPINFO, 42 43 RDMA_NL_IWPM_MAPINFO_NUM,
+1
include/xen/grant_table.h
··· 191 191 struct gnttab_unmap_grant_ref *kunmap_ops, 192 192 struct page **pages, unsigned int count); 193 193 void gnttab_unmap_refs_async(struct gntab_unmap_queue_data* item); 194 + int gnttab_unmap_refs_sync(struct gntab_unmap_queue_data *item); 194 195 195 196 196 197 /* Perform a batch of grant map/copy operations. Retry every batch slot
+1
include/xen/xen-ops.h
··· 13 13 14 14 void xen_timer_resume(void); 15 15 void xen_arch_resume(void); 16 + void xen_arch_suspend(void); 16 17 17 18 void xen_resume_notifier_register(struct notifier_block *nb); 18 19 void xen_resume_notifier_unregister(struct notifier_block *nb);
+3 -2
init/do_mounts.c
··· 225 225 #endif 226 226 227 227 if (strncmp(name, "/dev/", 5) != 0) { 228 - unsigned maj, min; 228 + unsigned maj, min, offset; 229 229 char dummy; 230 230 231 - if (sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2) { 231 + if ((sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2) || 232 + (sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset, &dummy) == 3)) { 232 233 res = MKDEV(maj, min); 233 234 if (maj != MAJOR(res) || min != MINOR(res)) 234 235 goto fail;
+3 -3
kernel/Makefile
··· 197 197 @echo >>x509.genkey "x509_extensions = myexts" 198 198 @echo >>x509.genkey 199 199 @echo >>x509.genkey "[ req_distinguished_name ]" 200 - @echo >>x509.genkey "O = Magrathea" 201 - @echo >>x509.genkey "CN = Glacier signing key" 202 - @echo >>x509.genkey "emailAddress = slartibartfast@magrathea.h2g2" 200 + @echo >>x509.genkey "#O = Unspecified company" 201 + @echo >>x509.genkey "CN = Build time autogenerated kernel key" 202 + @echo >>x509.genkey "#emailAddress = unspecified.user@unspecified.company" 203 203 @echo >>x509.genkey 204 204 @echo >>x509.genkey "[ myexts ]" 205 205 @echo >>x509.genkey "basicConstraints=critical,CA:FALSE"
+6 -6
kernel/bpf/core.c
··· 357 357 ALU64_MOD_X: 358 358 if (unlikely(SRC == 0)) 359 359 return 0; 360 - tmp = DST; 361 - DST = do_div(tmp, SRC); 360 + div64_u64_rem(DST, SRC, &tmp); 361 + DST = tmp; 362 362 CONT; 363 363 ALU_MOD_X: 364 364 if (unlikely(SRC == 0)) ··· 367 367 DST = do_div(tmp, (u32) SRC); 368 368 CONT; 369 369 ALU64_MOD_K: 370 - tmp = DST; 371 - DST = do_div(tmp, IMM); 370 + div64_u64_rem(DST, IMM, &tmp); 371 + DST = tmp; 372 372 CONT; 373 373 ALU_MOD_K: 374 374 tmp = (u32) DST; ··· 377 377 ALU64_DIV_X: 378 378 if (unlikely(SRC == 0)) 379 379 return 0; 380 - do_div(DST, SRC); 380 + DST = div64_u64(DST, SRC); 381 381 CONT; 382 382 ALU_DIV_X: 383 383 if (unlikely(SRC == 0)) ··· 387 387 DST = (u32) tmp; 388 388 CONT; 389 389 ALU64_DIV_K: 390 - do_div(DST, IMM); 390 + DST = div64_u64(DST, IMM); 391 391 CONT; 392 392 ALU_DIV_K: 393 393 tmp = (u32) DST;
+1
kernel/irq/dummychip.c
··· 57 57 .irq_ack = noop, 58 58 .irq_mask = noop, 59 59 .irq_unmask = noop, 60 + .flags = IRQCHIP_SKIP_SET_WAKE, 60 61 }; 61 62 EXPORT_SYMBOL_GPL(dummy_irq_chip);
+1 -1
kernel/kexec.c
··· 707 707 do { 708 708 unsigned long pfn, epfn, addr, eaddr; 709 709 710 - pages = kimage_alloc_pages(GFP_KERNEL, order); 710 + pages = kimage_alloc_pages(KEXEC_CONTROL_MEMORY_GFP, order); 711 711 if (!pages) 712 712 break; 713 713 pfn = page_to_pfn(pages);
+9 -7
kernel/rcu/tree.c
··· 162 162 static int kthread_prio = CONFIG_RCU_KTHREAD_PRIO; 163 163 module_param(kthread_prio, int, 0644); 164 164 165 - /* Delay in jiffies for grace-period initialization delays. */ 166 - static int gp_init_delay = IS_ENABLED(CONFIG_RCU_TORTURE_TEST_SLOW_INIT) 167 - ? CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY 168 - : 0; 165 + /* Delay in jiffies for grace-period initialization delays, debug only. */ 166 + #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT 167 + static int gp_init_delay = CONFIG_RCU_TORTURE_TEST_SLOW_INIT_DELAY; 169 168 module_param(gp_init_delay, int, 0644); 169 + #else /* #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */ 170 + static const int gp_init_delay; 171 + #endif /* #else #ifdef CONFIG_RCU_TORTURE_TEST_SLOW_INIT */ 172 + #define PER_RCU_NODE_PERIOD 10 /* Number of grace periods between delays. */ 170 173 171 174 /* 172 175 * Track the rcutorture test sequence number and the update version ··· 1846 1843 raw_spin_unlock_irq(&rnp->lock); 1847 1844 cond_resched_rcu_qs(); 1848 1845 ACCESS_ONCE(rsp->gp_activity) = jiffies; 1849 - if (IS_ENABLED(CONFIG_RCU_TORTURE_TEST_SLOW_INIT) && 1850 - gp_init_delay > 0 && 1851 - !(rsp->gpnum % (rcu_num_nodes * 10))) 1846 + if (gp_init_delay > 0 && 1847 + !(rsp->gpnum % (rcu_num_nodes * PER_RCU_NODE_PERIOD))) 1852 1848 schedule_timeout_uninterruptible(gp_init_delay); 1853 1849 } 1854 1850
-15
kernel/sched/core.c
··· 1016 1016 rq_clock_skip_update(rq, true); 1017 1017 } 1018 1018 1019 - static ATOMIC_NOTIFIER_HEAD(task_migration_notifier); 1020 - 1021 - void register_task_migration_notifier(struct notifier_block *n) 1022 - { 1023 - atomic_notifier_chain_register(&task_migration_notifier, n); 1024 - } 1025 - 1026 1019 #ifdef CONFIG_SMP 1027 1020 void set_task_cpu(struct task_struct *p, unsigned int new_cpu) 1028 1021 { ··· 1046 1053 trace_sched_migrate_task(p, new_cpu); 1047 1054 1048 1055 if (task_cpu(p) != new_cpu) { 1049 - struct task_migration_notifier tmn; 1050 - 1051 1056 if (p->sched_class->migrate_task_rq) 1052 1057 p->sched_class->migrate_task_rq(p, new_cpu); 1053 1058 p->se.nr_migrations++; 1054 1059 perf_sw_event_sched(PERF_COUNT_SW_CPU_MIGRATIONS, 1, 0); 1055 - 1056 - tmn.task = p; 1057 - tmn.from_cpu = task_cpu(p); 1058 - tmn.to_cpu = new_cpu; 1059 - 1060 - atomic_notifier_call_chain(&task_migration_notifier, 0, &tmn); 1061 1060 } 1062 1061 1063 1062 __set_task_cpu(p, new_cpu);
+2 -14
kernel/sched/idle.c
··· 81 81 struct cpuidle_device *dev = __this_cpu_read(cpuidle_devices); 82 82 struct cpuidle_driver *drv = cpuidle_get_cpu_driver(dev); 83 83 int next_state, entered_state; 84 - unsigned int broadcast; 85 84 bool reflect; 86 85 87 86 /* ··· 149 150 goto exit_idle; 150 151 } 151 152 152 - broadcast = drv->states[next_state].flags & CPUIDLE_FLAG_TIMER_STOP; 153 - 154 - /* 155 - * Tell the time framework to switch to a broadcast timer 156 - * because our local timer will be shutdown. If a local timer 157 - * is used from another cpu as a broadcast timer, this call may 158 - * fail if it is not available 159 - */ 160 - if (broadcast && tick_broadcast_enter()) 161 - goto use_default; 162 - 163 153 /* Take note of the planned idle state. */ 164 154 idle_set_state(this_rq(), &drv->states[next_state]); 165 155 ··· 162 174 /* The cpu is no longer idle or about to enter idle. */ 163 175 idle_set_state(this_rq(), NULL); 164 176 165 - if (broadcast) 166 - tick_broadcast_exit(); 177 + if (entered_state == -EBUSY) 178 + goto use_default; 167 179 168 180 /* 169 181 * Give the governor an opportunity to reflect on the outcome
+1 -5
kernel/time/clockevents.c
··· 117 117 /* Transition with new state-specific callbacks */ 118 118 switch (state) { 119 119 case CLOCK_EVT_STATE_DETACHED: 120 - /* 121 - * This is an internal state, which is guaranteed to go from 122 - * SHUTDOWN to DETACHED. No driver interaction required. 123 - */ 124 - return 0; 120 + /* The clockevent device is getting replaced. Shut it down. */ 125 121 126 122 case CLOCK_EVT_STATE_SHUTDOWN: 127 123 return dev->set_state_shutdown(dev);
+2 -1
kernel/trace/trace_output.c
··· 178 178 EXPORT_SYMBOL(ftrace_print_hex_seq); 179 179 180 180 const char * 181 - ftrace_print_array_seq(struct trace_seq *p, const void *buf, int buf_len, 181 + ftrace_print_array_seq(struct trace_seq *p, const void *buf, int count, 182 182 size_t el_size) 183 183 { 184 184 const char *ret = trace_seq_buffer_ptr(p); 185 185 const char *prefix = ""; 186 186 void *ptr = (void *)buf; 187 + size_t buf_len = count * el_size; 187 188 188 189 trace_seq_putc(p, '{'); 189 190
+1
lib/Kconfig.debug
··· 1281 1281 int "How much to slow down RCU grace-period initialization" 1282 1282 range 0 5 1283 1283 default 3 1284 + depends on RCU_TORTURE_TEST_SLOW_INIT 1284 1285 help 1285 1286 This option specifies the number of jiffies to wait between 1286 1287 each rcu_node structure initialization.
+6 -2
lib/Kconfig.kasan
··· 10 10 help 11 11 Enables kernel address sanitizer - runtime memory debugger, 12 12 designed to find out-of-bounds accesses and use-after-free bugs. 13 - This is strictly debugging feature. It consumes about 1/8 14 - of available memory and brings about ~x3 performance slowdown. 13 + This is strictly a debugging feature and it requires a gcc version 14 + of 4.9.2 or later. Detection of out of bounds accesses to stack or 15 + global variables requires gcc 5.0 or later. 16 + This feature consumes about 1/8 of available memory and brings about 17 + ~x3 performance slowdown. 15 18 For better error detection enable CONFIG_STACKTRACE, 16 19 and add slub_debug=U to boot cmdline. 17 20 ··· 43 40 memory accesses. This is faster than outline (in some workloads 44 41 it gives about x2 boost over outline instrumentation), but 45 42 make kernel's .text size much bigger. 43 + This requires a gcc version of 5.0 or later. 46 44 47 45 endchoice 48 46
-41
lib/find_last_bit.c
··· 1 - /* find_last_bit.c: fallback find next bit implementation 2 - * 3 - * Copyright (C) 2008 IBM Corporation 4 - * Written by Rusty Russell <rusty@rustcorp.com.au> 5 - * (Inspired by David Howell's find_next_bit implementation) 6 - * 7 - * Rewritten by Yury Norov <yury.norov@gmail.com> to decrease 8 - * size and improve performance, 2015. 9 - * 10 - * This program is free software; you can redistribute it and/or 11 - * modify it under the terms of the GNU General Public License 12 - * as published by the Free Software Foundation; either version 13 - * 2 of the License, or (at your option) any later version. 14 - */ 15 - 16 - #include <linux/bitops.h> 17 - #include <linux/bitmap.h> 18 - #include <linux/export.h> 19 - #include <linux/kernel.h> 20 - 21 - #ifndef find_last_bit 22 - 23 - unsigned long find_last_bit(const unsigned long *addr, unsigned long size) 24 - { 25 - if (size) { 26 - unsigned long val = BITMAP_LAST_WORD_MASK(size); 27 - unsigned long idx = (size-1) / BITS_PER_LONG; 28 - 29 - do { 30 - val &= addr[idx]; 31 - if (val) 32 - return idx * BITS_PER_LONG + __fls(val); 33 - 34 - val = ~0ul; 35 - } while (idx--); 36 - } 37 - return size; 38 - } 39 - EXPORT_SYMBOL(find_last_bit); 40 - 41 - #endif
+8 -3
lib/rhashtable.c
··· 405 405 406 406 if (rht_grow_above_75(ht, tbl)) 407 407 size *= 2; 408 - /* More than two rehashes (not resizes) detected. */ 409 - else if (WARN_ON(old_tbl != tbl && old_tbl->size == size)) 408 + /* Do not schedule more than one rehash */ 409 + else if (old_tbl != tbl) 410 410 return -EBUSY; 411 411 412 412 new_tbl = bucket_table_alloc(ht, size, GFP_ATOMIC); 413 - if (new_tbl == NULL) 413 + if (new_tbl == NULL) { 414 + /* Schedule async resize/rehash to try allocation 415 + * non-atomic context. 416 + */ 417 + schedule_work(&ht->run_work); 414 418 return -ENOMEM; 419 + } 415 420 416 421 err = rhashtable_rehash_attach(ht, tbl, new_tbl); 417 422 if (err) {
+1 -1
lib/string.c
··· 607 607 void memzero_explicit(void *s, size_t count) 608 608 { 609 609 memset(s, 0, count); 610 - barrier(); 610 + barrier_data(s); 611 611 } 612 612 EXPORT_SYMBOL(memzero_explicit); 613 613
+8 -5
mm/hwpoison-inject.c
··· 34 34 if (!hwpoison_filter_enable) 35 35 goto inject; 36 36 37 - if (!PageLRU(p) && !PageHuge(p)) 38 - shake_page(p, 0); 37 + if (!PageLRU(hpage) && !PageHuge(p)) 38 + shake_page(hpage, 0); 39 39 /* 40 40 * This implies unable to support non-LRU pages. 41 41 */ 42 - if (!PageLRU(p) && !PageHuge(p)) 43 - return 0; 42 + if (!PageLRU(hpage) && !PageHuge(p)) 43 + goto put_out; 44 44 45 45 /* 46 46 * do a racy check with elevated page count, to make sure PG_hwpoison ··· 52 52 err = hwpoison_filter(hpage); 53 53 unlock_page(hpage); 54 54 if (err) 55 - return 0; 55 + goto put_out; 56 56 57 57 inject: 58 58 pr_info("Injecting memory failure at pfn %#lx\n", pfn); 59 59 return memory_failure(pfn, 18, MF_COUNT_INCREASED); 60 + put_out: 61 + put_page(hpage); 62 + return 0; 60 63 } 61 64 62 65 static int hwpoison_unpoison(void *data, u64 val)
+8 -8
mm/memory-failure.c
··· 1187 1187 * The check (unnecessarily) ignores LRU pages being isolated and 1188 1188 * walked by the page reclaim code, however that's not a big loss. 1189 1189 */ 1190 - if (!PageHuge(p) && !PageTransTail(p)) { 1191 - if (!PageLRU(p)) 1192 - shake_page(p, 0); 1193 - if (!PageLRU(p)) { 1190 + if (!PageHuge(p)) { 1191 + if (!PageLRU(hpage)) 1192 + shake_page(hpage, 0); 1193 + if (!PageLRU(hpage)) { 1194 1194 /* 1195 1195 * shake_page could have turned it free. 1196 1196 */ ··· 1777 1777 } else if (ret == 0) { /* for free pages */ 1778 1778 if (PageHuge(page)) { 1779 1779 set_page_hwpoison_huge_page(hpage); 1780 - dequeue_hwpoisoned_huge_page(hpage); 1781 - atomic_long_add(1 << compound_order(hpage), 1780 + if (!dequeue_hwpoisoned_huge_page(hpage)) 1781 + atomic_long_add(1 << compound_order(hpage), 1782 1782 &num_poisoned_pages); 1783 1783 } else { 1784 - SetPageHWPoison(page); 1785 - atomic_long_inc(&num_poisoned_pages); 1784 + if (!TestSetPageHWPoison(page)) 1785 + atomic_long_inc(&num_poisoned_pages); 1786 1786 } 1787 1787 } 1788 1788 unset_migratetype_isolate(page, MIGRATE_MOVABLE);
+3 -3
mm/page-writeback.c
··· 580 580 long x; 581 581 582 582 x = div64_s64(((s64)setpoint - (s64)dirty) << RATELIMIT_CALC_SHIFT, 583 - limit - setpoint + 1); 583 + (limit - setpoint) | 1); 584 584 pos_ratio = x; 585 585 pos_ratio = pos_ratio * x >> RATELIMIT_CALC_SHIFT; 586 586 pos_ratio = pos_ratio * x >> RATELIMIT_CALC_SHIFT; ··· 807 807 * scale global setpoint to bdi's: 808 808 * bdi_setpoint = setpoint * bdi_thresh / thresh 809 809 */ 810 - x = div_u64((u64)bdi_thresh << 16, thresh + 1); 810 + x = div_u64((u64)bdi_thresh << 16, thresh | 1); 811 811 bdi_setpoint = setpoint * (u64)x >> 16; 812 812 /* 813 813 * Use span=(8*write_bw) in single bdi case as indicated by ··· 822 822 823 823 if (bdi_dirty < x_intercept - span / 4) { 824 824 pos_ratio = div64_u64(pos_ratio * (x_intercept - bdi_dirty), 825 - x_intercept - bdi_setpoint + 1); 825 + (x_intercept - bdi_setpoint) | 1); 826 826 } else 827 827 pos_ratio /= 4; 828 828
+1 -1
net/bridge/br_mdb.c
··· 170 170 struct br_port_msg *bpm; 171 171 struct nlattr *nest, *nest2; 172 172 173 - nlh = nlmsg_put(skb, pid, seq, type, sizeof(*bpm), NLM_F_MULTI); 173 + nlh = nlmsg_put(skb, pid, seq, type, sizeof(*bpm), 0); 174 174 if (!nlh) 175 175 return -EMSGSIZE; 176 176
+2 -2
net/bridge/br_netlink.c
··· 394 394 * Dump information about all ports, in response to GETLINK 395 395 */ 396 396 int br_getlink(struct sk_buff *skb, u32 pid, u32 seq, 397 - struct net_device *dev, u32 filter_mask) 397 + struct net_device *dev, u32 filter_mask, int nlflags) 398 398 { 399 399 struct net_bridge_port *port = br_port_get_rtnl(dev); 400 400 ··· 402 402 !(filter_mask & RTEXT_FILTER_BRVLAN_COMPRESSED)) 403 403 return 0; 404 404 405 - return br_fill_ifinfo(skb, port, pid, seq, RTM_NEWLINK, NLM_F_MULTI, 405 + return br_fill_ifinfo(skb, port, pid, seq, RTM_NEWLINK, nlflags, 406 406 filter_mask, dev); 407 407 } 408 408
+1 -1
net/bridge/br_private.h
··· 828 828 int br_setlink(struct net_device *dev, struct nlmsghdr *nlmsg, u16 flags); 829 829 int br_dellink(struct net_device *dev, struct nlmsghdr *nlmsg, u16 flags); 830 830 int br_getlink(struct sk_buff *skb, u32 pid, u32 seq, struct net_device *dev, 831 - u32 filter_mask); 831 + u32 filter_mask, int nlflags); 832 832 833 833 #ifdef CONFIG_SYSFS 834 834 /* br_sysfs_if.c */
+6 -6
net/core/dev.c
··· 3079 3079 set_rps_cpu(struct net_device *dev, struct sk_buff *skb, 3080 3080 struct rps_dev_flow *rflow, u16 next_cpu) 3081 3081 { 3082 - if (next_cpu != RPS_NO_CPU) { 3082 + if (next_cpu < nr_cpu_ids) { 3083 3083 #ifdef CONFIG_RFS_ACCEL 3084 3084 struct netdev_rx_queue *rxqueue; 3085 3085 struct rps_dev_flow_table *flow_table; ··· 3184 3184 * If the desired CPU (where last recvmsg was done) is 3185 3185 * different from current CPU (one in the rx-queue flow 3186 3186 * table entry), switch if one of the following holds: 3187 - * - Current CPU is unset (equal to RPS_NO_CPU). 3187 + * - Current CPU is unset (>= nr_cpu_ids). 3188 3188 * - Current CPU is offline. 3189 3189 * - The current CPU's queue tail has advanced beyond the 3190 3190 * last packet that was enqueued using this table entry. ··· 3192 3192 * have been dequeued, thus preserving in order delivery. 3193 3193 */ 3194 3194 if (unlikely(tcpu != next_cpu) && 3195 - (tcpu == RPS_NO_CPU || !cpu_online(tcpu) || 3195 + (tcpu >= nr_cpu_ids || !cpu_online(tcpu) || 3196 3196 ((int)(per_cpu(softnet_data, tcpu).input_queue_head - 3197 3197 rflow->last_qtail)) >= 0)) { 3198 3198 tcpu = next_cpu; 3199 3199 rflow = set_rps_cpu(dev, skb, rflow, next_cpu); 3200 3200 } 3201 3201 3202 - if (tcpu != RPS_NO_CPU && cpu_online(tcpu)) { 3202 + if (tcpu < nr_cpu_ids && cpu_online(tcpu)) { 3203 3203 *rflowp = rflow; 3204 3204 cpu = tcpu; 3205 3205 goto done; ··· 3240 3240 struct rps_dev_flow_table *flow_table; 3241 3241 struct rps_dev_flow *rflow; 3242 3242 bool expire = true; 3243 - int cpu; 3243 + unsigned int cpu; 3244 3244 3245 3245 rcu_read_lock(); 3246 3246 flow_table = rcu_dereference(rxqueue->rps_flow_table); 3247 3247 if (flow_table && flow_id <= flow_table->mask) { 3248 3248 rflow = &flow_table->flows[flow_id]; 3249 3249 cpu = ACCESS_ONCE(rflow->cpu); 3250 - if (rflow->filter == filter_id && cpu != RPS_NO_CPU && 3250 + if (rflow->filter == filter_id && cpu < nr_cpu_ids && 3251 3251 ((int)(per_cpu(softnet_data, 
cpu).input_queue_head - 3252 3252 rflow->last_qtail) < 3253 3253 (int)(10 * flow_table->mask)))
+7 -5
net/core/rtnetlink.c
··· 2854 2854 2855 2855 int ndo_dflt_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq, 2856 2856 struct net_device *dev, u16 mode, 2857 - u32 flags, u32 mask) 2857 + u32 flags, u32 mask, int nlflags) 2858 2858 { 2859 2859 struct nlmsghdr *nlh; 2860 2860 struct ifinfomsg *ifm; ··· 2863 2863 u8 operstate = netif_running(dev) ? dev->operstate : IF_OPER_DOWN; 2864 2864 struct net_device *br_dev = netdev_master_upper_dev_get(dev); 2865 2865 2866 - nlh = nlmsg_put(skb, pid, seq, RTM_NEWLINK, sizeof(*ifm), NLM_F_MULTI); 2866 + nlh = nlmsg_put(skb, pid, seq, RTM_NEWLINK, sizeof(*ifm), nlflags); 2867 2867 if (nlh == NULL) 2868 2868 return -EMSGSIZE; 2869 2869 ··· 2969 2969 if (br_dev && br_dev->netdev_ops->ndo_bridge_getlink) { 2970 2970 if (idx >= cb->args[0] && 2971 2971 br_dev->netdev_ops->ndo_bridge_getlink( 2972 - skb, portid, seq, dev, filter_mask) < 0) 2972 + skb, portid, seq, dev, filter_mask, 2973 + NLM_F_MULTI) < 0) 2973 2974 break; 2974 2975 idx++; 2975 2976 } ··· 2978 2977 if (ops->ndo_bridge_getlink) { 2979 2978 if (idx >= cb->args[0] && 2980 2979 ops->ndo_bridge_getlink(skb, portid, seq, dev, 2981 - filter_mask) < 0) 2980 + filter_mask, 2981 + NLM_F_MULTI) < 0) 2982 2982 break; 2983 2983 idx++; 2984 2984 } ··· 3020 3018 goto errout; 3021 3019 } 3022 3020 3023 - err = dev->netdev_ops->ndo_bridge_getlink(skb, 0, 0, dev, 0); 3021 + err = dev->netdev_ops->ndo_bridge_getlink(skb, 0, 0, dev, 0, 0); 3024 3022 if (err < 0) 3025 3023 goto errout; 3026 3024
+24 -6
net/core/skbuff.c
··· 280 280 EXPORT_SYMBOL(__alloc_skb); 281 281 282 282 /** 283 - * build_skb - build a network buffer 283 + * __build_skb - build a network buffer 284 284 * @data: data buffer provided by caller 285 - * @frag_size: size of fragment, or 0 if head was kmalloced 285 + * @frag_size: size of data, or 0 if head was kmalloced 286 286 * 287 287 * Allocate a new &sk_buff. Caller provides space holding head and 288 288 * skb_shared_info. @data must have been allocated by kmalloc() only if 289 - * @frag_size is 0, otherwise data should come from the page allocator. 289 + * @frag_size is 0, otherwise data should come from the page allocator 290 + * or vmalloc() 290 291 * The return is the new skb buffer. 291 292 * On a failure the return is %NULL, and @data is not freed. 292 293 * Notes : ··· 298 297 * before giving packet to stack. 299 298 * RX rings only contains data buffers, not full skbs. 300 299 */ 301 - struct sk_buff *build_skb(void *data, unsigned int frag_size) 300 + struct sk_buff *__build_skb(void *data, unsigned int frag_size) 302 301 { 303 302 struct skb_shared_info *shinfo; 304 303 struct sk_buff *skb; ··· 312 311 313 312 memset(skb, 0, offsetof(struct sk_buff, tail)); 314 313 skb->truesize = SKB_TRUESIZE(size); 315 - skb->head_frag = frag_size != 0; 316 314 atomic_set(&skb->users, 1); 317 315 skb->head = data; 318 316 skb->data = data; ··· 326 326 atomic_set(&shinfo->dataref, 1); 327 327 kmemcheck_annotate_variable(shinfo->destructor_arg); 328 328 329 + return skb; 330 + } 331 + 332 + /* build_skb() is wrapper over __build_skb(), that specifically 333 + * takes care of skb->head and skb->pfmemalloc 334 + * This means that if @frag_size is not zero, then @data must be backed 335 + * by a page fragment, not kmalloc() or vmalloc() 336 + */ 337 + struct sk_buff *build_skb(void *data, unsigned int frag_size) 338 + { 339 + struct sk_buff *skb = __build_skb(data, frag_size); 340 + 341 + if (skb && frag_size) { 342 + skb->head_frag = 1; 343 + if 
(virt_to_head_page(data)->pfmemalloc) 344 + skb->pfmemalloc = 1; 345 + } 329 346 return skb; 330 347 } 331 348 EXPORT_SYMBOL(build_skb); ··· 365 348 gfp_t gfp = gfp_mask; 366 349 367 350 if (order) { 368 - gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY; 351 + gfp_mask |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY | 352 + __GFP_NOMEMALLOC; 369 353 page = alloc_pages_node(NUMA_NO_NODE, gfp_mask, order); 370 354 nc->frag.size = PAGE_SIZE << (page ? order : 0); 371 355 }
+2 -1
net/dccp/ipv4.c
··· 453 453 iph->saddr, iph->daddr); 454 454 if (req) { 455 455 nsk = dccp_check_req(sk, skb, req); 456 - reqsk_put(req); 456 + if (!nsk) 457 + reqsk_put(req); 457 458 return nsk; 458 459 } 459 460 nsk = inet_lookup_established(sock_net(sk), &dccp_hashinfo,
+2 -1
net/dccp/ipv6.c
··· 301 301 &iph->daddr, inet6_iif(skb)); 302 302 if (req) { 303 303 nsk = dccp_check_req(sk, skb, req); 304 - reqsk_put(req); 304 + if (!nsk) 305 + reqsk_put(req); 305 306 return nsk; 306 307 } 307 308 nsk = __inet6_lookup_established(sock_net(sk), &dccp_hashinfo,
+1 -2
net/dccp/minisocks.c
··· 186 186 if (child == NULL) 187 187 goto listen_overflow; 188 188 189 - inet_csk_reqsk_queue_unlink(sk, req); 190 - inet_csk_reqsk_queue_removed(sk, req); 189 + inet_csk_reqsk_queue_drop(sk, req); 191 190 inet_csk_reqsk_queue_add(sk, req, child); 192 191 out: 193 192 return child;
+1 -1
net/dsa/dsa.c
··· 633 633 if (cd->sw_addr > PHY_MAX_ADDR) 634 634 continue; 635 635 636 - if (!of_property_read_u32(np, "eeprom-length", &eeprom_len)) 636 + if (!of_property_read_u32(child, "eeprom-length", &eeprom_len)) 637 637 cd->eeprom_len = eeprom_len; 638 638 639 639 for_each_available_child_of_node(child, port) {
+34
net/ipv4/inet_connection_sock.c
··· 564 564 } 565 565 EXPORT_SYMBOL(inet_rtx_syn_ack); 566 566 567 + /* return true if req was found in the syn_table[] */ 568 + static bool reqsk_queue_unlink(struct request_sock_queue *queue, 569 + struct request_sock *req) 570 + { 571 + struct listen_sock *lopt = queue->listen_opt; 572 + struct request_sock **prev; 573 + bool found = false; 574 + 575 + spin_lock(&queue->syn_wait_lock); 576 + 577 + for (prev = &lopt->syn_table[req->rsk_hash]; *prev != NULL; 578 + prev = &(*prev)->dl_next) { 579 + if (*prev == req) { 580 + *prev = req->dl_next; 581 + found = true; 582 + break; 583 + } 584 + } 585 + 586 + spin_unlock(&queue->syn_wait_lock); 587 + if (del_timer(&req->rsk_timer)) 588 + reqsk_put(req); 589 + return found; 590 + } 591 + 592 + void inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req) 593 + { 594 + if (reqsk_queue_unlink(&inet_csk(sk)->icsk_accept_queue, req)) { 595 + reqsk_queue_removed(&inet_csk(sk)->icsk_accept_queue, req); 596 + reqsk_put(req); 597 + } 598 + } 599 + EXPORT_SYMBOL(inet_csk_reqsk_queue_drop); 600 + 567 601 static void reqsk_timer_handler(unsigned long data) 568 602 { 569 603 struct request_sock *req = (struct request_sock *)data;
+1
net/ipv4/ping.c
··· 158 158 if (sk_hashed(sk)) { 159 159 write_lock_bh(&ping_table.lock); 160 160 hlist_nulls_del(&sk->sk_nulls_node); 161 + sk_nulls_node_init(&sk->sk_nulls_node); 161 162 sock_put(sk); 162 163 isk->inet_num = 0; 163 164 isk->inet_sport = 0;
+1 -4
net/ipv4/route.c
··· 962 962 if (dst_metric_locked(dst, RTAX_MTU)) 963 963 return; 964 964 965 - if (dst->dev->mtu < mtu) 966 - return; 967 - 968 - if (rt->rt_pmtu && rt->rt_pmtu < mtu) 965 + if (ipv4_mtu(dst) < mtu) 969 966 return; 970 967 971 968 if (mtu < ip_rt_min_pmtu)
+2 -1
net/ipv4/tcp_ipv4.c
··· 1348 1348 req = inet_csk_search_req(sk, th->source, iph->saddr, iph->daddr); 1349 1349 if (req) { 1350 1350 nsk = tcp_check_req(sk, skb, req, false); 1351 - reqsk_put(req); 1351 + if (!nsk) 1352 + reqsk_put(req); 1352 1353 return nsk; 1353 1354 } 1354 1355
+4 -3
net/ipv4/tcp_minisocks.c
··· 755 755 if (!child) 756 756 goto listen_overflow; 757 757 758 - inet_csk_reqsk_queue_unlink(sk, req); 759 - inet_csk_reqsk_queue_removed(sk, req); 760 - 758 + inet_csk_reqsk_queue_drop(sk, req); 761 759 inet_csk_reqsk_queue_add(sk, req, child); 760 + /* Warning: caller must not call reqsk_put(req); 761 + * child stole last reference on it. 762 + */ 762 763 return child; 763 764 764 765 listen_overflow:
+46 -20
net/ipv4/tcp_output.c
··· 2812 2812 } 2813 2813 } 2814 2814 2815 - /* Send a fin. The caller locks the socket for us. This cannot be 2816 - * allowed to fail queueing a FIN frame under any circumstances. 2815 + /* We allow to exceed memory limits for FIN packets to expedite 2816 + * connection tear down and (memory) recovery. 2817 + * Otherwise tcp_send_fin() could be tempted to either delay FIN 2818 + * or even be forced to close flow without any FIN. 2819 + */ 2820 + static void sk_forced_wmem_schedule(struct sock *sk, int size) 2821 + { 2822 + int amt, status; 2823 + 2824 + if (size <= sk->sk_forward_alloc) 2825 + return; 2826 + amt = sk_mem_pages(size); 2827 + sk->sk_forward_alloc += amt * SK_MEM_QUANTUM; 2828 + sk_memory_allocated_add(sk, amt, &status); 2829 + } 2830 + 2831 + /* Send a FIN. The caller locks the socket for us. 2832 + * We should try to send a FIN packet really hard, but eventually give up. 2817 2833 */ 2818 2834 void tcp_send_fin(struct sock *sk) 2819 2835 { 2836 + struct sk_buff *skb, *tskb = tcp_write_queue_tail(sk); 2820 2837 struct tcp_sock *tp = tcp_sk(sk); 2821 - struct sk_buff *skb = tcp_write_queue_tail(sk); 2822 - int mss_now; 2823 2838 2824 - /* Optimization, tack on the FIN if we have a queue of 2825 - * unsent frames. But be careful about outgoing SACKS 2826 - * and IP options. 2839 + /* Optimization, tack on the FIN if we have one skb in write queue and 2840 + * this skb was not yet sent, or we are under memory pressure. 2841 + * Note: in the latter case, FIN packet will be sent after a timeout, 2842 + * as TCP stack thinks it has already been transmitted. 
2827 2843 */ 2828 - mss_now = tcp_current_mss(sk); 2829 - 2830 - if (tcp_send_head(sk)) { 2831 - TCP_SKB_CB(skb)->tcp_flags |= TCPHDR_FIN; 2832 - TCP_SKB_CB(skb)->end_seq++; 2844 + if (tskb && (tcp_send_head(sk) || sk_under_memory_pressure(sk))) { 2845 + coalesce: 2846 + TCP_SKB_CB(tskb)->tcp_flags |= TCPHDR_FIN; 2847 + TCP_SKB_CB(tskb)->end_seq++; 2833 2848 tp->write_seq++; 2834 - } else { 2835 - /* Socket is locked, keep trying until memory is available. */ 2836 - for (;;) { 2837 - skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation); 2838 - if (skb) 2839 - break; 2840 - yield(); 2849 + if (!tcp_send_head(sk)) { 2850 + /* This means tskb was already sent. 2851 + * Pretend we included the FIN on previous transmit. 2852 + * We need to set tp->snd_nxt to the value it would have 2853 + * if FIN had been sent. This is because retransmit path 2854 + * does not change tp->snd_nxt. 2855 + */ 2856 + tp->snd_nxt++; 2857 + return; 2841 2858 } 2859 + } else { 2860 + skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation); 2861 + if (unlikely(!skb)) { 2862 + if (tskb) 2863 + goto coalesce; 2864 + return; 2865 + } 2866 + skb_reserve(skb, MAX_TCP_HEADER); 2867 + sk_forced_wmem_schedule(sk, skb->truesize); 2842 2868 /* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). */ 2843 2869 tcp_init_nondata_skb(skb, tp->write_seq, 2844 2870 TCPHDR_ACK | TCPHDR_FIN); 2845 2871 tcp_queue_skb(sk, skb); 2846 2872 } 2847 - __tcp_push_pending_frames(sk, mss_now, TCP_NAGLE_OFF); 2873 + __tcp_push_pending_frames(sk, tcp_current_mss(sk), TCP_NAGLE_OFF); 2848 2874 } 2849 2875 2850 2876 /* We get here when a process closes a file descriptor (either due to
+1 -8
net/ipv6/ip6_gre.c
··· 1246 1246 static int ip6gre_tunnel_init(struct net_device *dev) 1247 1247 { 1248 1248 struct ip6_tnl *tunnel; 1249 - int i; 1250 1249 1251 1250 tunnel = netdev_priv(dev); 1252 1251 ··· 1259 1260 if (ipv6_addr_any(&tunnel->parms.raddr)) 1260 1261 dev->header_ops = &ip6gre_header_ops; 1261 1262 1262 - dev->tstats = alloc_percpu(struct pcpu_sw_netstats); 1263 + dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats); 1263 1264 if (!dev->tstats) 1264 1265 return -ENOMEM; 1265 - 1266 - for_each_possible_cpu(i) { 1267 - struct pcpu_sw_netstats *ip6gre_tunnel_stats; 1268 - ip6gre_tunnel_stats = per_cpu_ptr(dev->tstats, i); 1269 - u64_stats_init(&ip6gre_tunnel_stats->syncp); 1270 - } 1271 1266 1272 1267 return 0; 1273 1268 }
+2 -1
net/ipv6/tcp_ipv6.c
··· 946 946 &ipv6_hdr(skb)->daddr, tcp_v6_iif(skb)); 947 947 if (req) { 948 948 nsk = tcp_check_req(sk, skb, req, false); 949 - reqsk_put(req); 949 + if (!nsk) 950 + reqsk_put(req); 950 951 return nsk; 951 952 } 952 953 nsk = __inet6_lookup_established(sock_net(sk), &tcp_hashinfo,
+122 -3
net/mpls/af_mpls.c
··· 53 53 return rt; 54 54 } 55 55 56 + static inline struct mpls_dev *mpls_dev_get(const struct net_device *dev) 57 + { 58 + return rcu_dereference_rtnl(dev->mpls_ptr); 59 + } 60 + 56 61 static bool mpls_output_possible(const struct net_device *dev) 57 62 { 58 63 return dev && (dev->flags & IFF_UP) && netif_carrier_ok(dev); ··· 141 136 struct mpls_route *rt; 142 137 struct mpls_entry_decoded dec; 143 138 struct net_device *out_dev; 139 + struct mpls_dev *mdev; 144 140 unsigned int hh_len; 145 141 unsigned int new_header_size; 146 142 unsigned int mtu; 147 143 int err; 148 144 149 145 /* Careful this entire function runs inside of an rcu critical section */ 146 + 147 + mdev = mpls_dev_get(dev); 148 + if (!mdev || !mdev->input_enabled) 149 + goto drop; 150 150 151 151 if (skb->pkt_type != PACKET_HOST) 152 152 goto drop; ··· 362 352 if (!dev) 363 353 goto errout; 364 354 365 - /* For now just support ethernet devices */ 355 + /* Ensure this is a supported device */ 366 356 err = -EINVAL; 367 - if ((dev->type != ARPHRD_ETHER) && (dev->type != ARPHRD_LOOPBACK)) 357 + if (!mpls_dev_get(dev)) 368 358 goto errout; 369 359 370 360 err = -EINVAL; ··· 438 428 return err; 439 429 } 440 430 431 + #define MPLS_PERDEV_SYSCTL_OFFSET(field) \ 432 + (&((struct mpls_dev *)0)->field) 433 + 434 + static const struct ctl_table mpls_dev_table[] = { 435 + { 436 + .procname = "input", 437 + .maxlen = sizeof(int), 438 + .mode = 0644, 439 + .proc_handler = proc_dointvec, 440 + .data = MPLS_PERDEV_SYSCTL_OFFSET(input_enabled), 441 + }, 442 + { } 443 + }; 444 + 445 + static int mpls_dev_sysctl_register(struct net_device *dev, 446 + struct mpls_dev *mdev) 447 + { 448 + char path[sizeof("net/mpls/conf/") + IFNAMSIZ]; 449 + struct ctl_table *table; 450 + int i; 451 + 452 + table = kmemdup(&mpls_dev_table, sizeof(mpls_dev_table), GFP_KERNEL); 453 + if (!table) 454 + goto out; 455 + 456 + /* Table data contains only offsets relative to the base of 457 + * the mdev at this point, so make them 
absolute. 458 + */ 459 + for (i = 0; i < ARRAY_SIZE(mpls_dev_table); i++) 460 + table[i].data = (char *)mdev + (uintptr_t)table[i].data; 461 + 462 + snprintf(path, sizeof(path), "net/mpls/conf/%s", dev->name); 463 + 464 + mdev->sysctl = register_net_sysctl(dev_net(dev), path, table); 465 + if (!mdev->sysctl) 466 + goto free; 467 + 468 + return 0; 469 + 470 + free: 471 + kfree(table); 472 + out: 473 + return -ENOBUFS; 474 + } 475 + 476 + static void mpls_dev_sysctl_unregister(struct mpls_dev *mdev) 477 + { 478 + struct ctl_table *table; 479 + 480 + table = mdev->sysctl->ctl_table_arg; 481 + unregister_net_sysctl_table(mdev->sysctl); 482 + kfree(table); 483 + } 484 + 485 + static struct mpls_dev *mpls_add_dev(struct net_device *dev) 486 + { 487 + struct mpls_dev *mdev; 488 + int err = -ENOMEM; 489 + 490 + ASSERT_RTNL(); 491 + 492 + mdev = kzalloc(sizeof(*mdev), GFP_KERNEL); 493 + if (!mdev) 494 + return ERR_PTR(err); 495 + 496 + err = mpls_dev_sysctl_register(dev, mdev); 497 + if (err) 498 + goto free; 499 + 500 + rcu_assign_pointer(dev->mpls_ptr, mdev); 501 + 502 + return mdev; 503 + 504 + free: 505 + kfree(mdev); 506 + return ERR_PTR(err); 507 + } 508 + 441 509 static void mpls_ifdown(struct net_device *dev) 442 510 { 443 511 struct mpls_route __rcu **platform_label; 444 512 struct net *net = dev_net(dev); 513 + struct mpls_dev *mdev; 445 514 unsigned index; 446 515 447 516 platform_label = rtnl_dereference(net->mpls.platform_label); ··· 532 443 continue; 533 444 rt->rt_dev = NULL; 534 445 } 446 + 447 + mdev = mpls_dev_get(dev); 448 + if (!mdev) 449 + return; 450 + 451 + mpls_dev_sysctl_unregister(mdev); 452 + 453 + RCU_INIT_POINTER(dev->mpls_ptr, NULL); 454 + 455 + kfree(mdev); 535 456 } 536 457 537 458 static int mpls_dev_notify(struct notifier_block *this, unsigned long event, 538 459 void *ptr) 539 460 { 540 461 struct net_device *dev = netdev_notifier_info_to_dev(ptr); 462 + struct mpls_dev *mdev; 541 463 542 464 switch(event) { 465 + case NETDEV_REGISTER: 466 
+ /* For now just support ethernet devices */ 467 + if ((dev->type == ARPHRD_ETHER) || 468 + (dev->type == ARPHRD_LOOPBACK)) { 469 + mdev = mpls_add_dev(dev); 470 + if (IS_ERR(mdev)) 471 + return notifier_from_errno(PTR_ERR(mdev)); 472 + } 473 + break; 474 + 543 475 case NETDEV_UNREGISTER: 544 476 mpls_ifdown(dev); 545 477 break; ··· 645 535 */ 646 536 if ((dec.bos != bos) || dec.ttl || dec.tc) 647 537 return -EINVAL; 538 + 539 + switch (dec.label) { 540 + case LABEL_IMPLICIT_NULL: 541 + /* RFC3032: This is a label that an LSR may 542 + * assign and distribute, but which never 543 + * actually appears in the encapsulation. 544 + */ 545 + return -EINVAL; 546 + } 648 547 649 548 label[i] = dec.label; 650 549 } ··· 1031 912 return ret; 1032 913 } 1033 914 1034 - static struct ctl_table mpls_table[] = { 915 + static const struct ctl_table mpls_table[] = { 1035 916 { 1036 917 .procname = "platform_labels", 1037 918 .data = NULL,
+6
net/mpls/internal.h
··· 22 22 u8 bos; 23 23 }; 24 24 25 + struct mpls_dev { 26 + int input_enabled; 27 + 28 + struct ctl_table_header *sysctl; 29 + }; 30 + 25 31 struct sk_buff; 26 32 27 33 static inline struct mpls_shim_hdr *mpls_hdr(const struct sk_buff *skb)
+1 -2
net/netfilter/nf_tables_api.c
··· 4340 4340 case NFT_CONTINUE: 4341 4341 case NFT_BREAK: 4342 4342 case NFT_RETURN: 4343 - desc->len = sizeof(data->verdict); 4344 4343 break; 4345 4344 case NFT_JUMP: 4346 4345 case NFT_GOTO: ··· 4354 4355 4355 4356 chain->use++; 4356 4357 data->verdict.chain = chain; 4357 - desc->len = sizeof(data); 4358 4358 break; 4359 4359 } 4360 4360 4361 + desc->len = sizeof(data->verdict); 4361 4362 desc->type = NFT_DATA_VERDICT; 4362 4363 return 0; 4363 4364 }
+2
net/netfilter/nft_reject.c
··· 63 63 if (nla_put_u8(skb, NFTA_REJECT_ICMP_CODE, priv->icmp_code)) 64 64 goto nla_put_failure; 65 65 break; 66 + default: 67 + break; 66 68 } 67 69 68 70 return 0;
+2
net/netfilter/nft_reject_inet.c
··· 108 108 if (nla_put_u8(skb, NFTA_REJECT_ICMP_CODE, priv->icmp_code)) 109 109 goto nla_put_failure; 110 110 break; 111 + default: 112 + break; 111 113 } 112 114 113 115 return 0;
+2 -4
net/netlink/af_netlink.c
··· 1629 1629 if (data == NULL) 1630 1630 return NULL; 1631 1631 1632 - skb = build_skb(data, size); 1632 + skb = __build_skb(data, size); 1633 1633 if (skb == NULL) 1634 1634 vfree(data); 1635 - else { 1636 - skb->head_frag = 0; 1635 + else 1637 1636 skb->destructor = netlink_skb_destructor; 1638 - } 1639 1637 1640 1638 return skb; 1641 1639 }
-2
net/sched/act_connmark.c
··· 63 63 skb->mark = c->mark; 64 64 /* using overlimits stats to count how many packets marked */ 65 65 ca->tcf_qstats.overlimits++; 66 - nf_ct_put(c); 67 66 goto out; 68 67 } 69 68 ··· 81 82 nf_ct_put(c); 82 83 83 84 out: 84 - skb->nfct = NULL; 85 85 spin_unlock(&ca->tcf_lock); 86 86 return ca->tcf_action; 87 87 }
+9 -8
net/tipc/bearer.c
··· 591 591 592 592 /* Caller should hold rtnl_lock to protect the bearer */ 593 593 static int __tipc_nl_add_bearer(struct tipc_nl_msg *msg, 594 - struct tipc_bearer *bearer) 594 + struct tipc_bearer *bearer, int nlflags) 595 595 { 596 596 void *hdr; 597 597 struct nlattr *attrs; 598 598 struct nlattr *prop; 599 599 600 600 hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_family, 601 - NLM_F_MULTI, TIPC_NL_BEARER_GET); 601 + nlflags, TIPC_NL_BEARER_GET); 602 602 if (!hdr) 603 603 return -EMSGSIZE; 604 604 ··· 657 657 if (!bearer) 658 658 continue; 659 659 660 - err = __tipc_nl_add_bearer(&msg, bearer); 660 + err = __tipc_nl_add_bearer(&msg, bearer, NLM_F_MULTI); 661 661 if (err) 662 662 break; 663 663 } ··· 705 705 goto err_out; 706 706 } 707 707 708 - err = __tipc_nl_add_bearer(&msg, bearer); 708 + err = __tipc_nl_add_bearer(&msg, bearer, 0); 709 709 if (err) 710 710 goto err_out; 711 711 rtnl_unlock(); ··· 857 857 } 858 858 859 859 static int __tipc_nl_add_media(struct tipc_nl_msg *msg, 860 - struct tipc_media *media) 860 + struct tipc_media *media, int nlflags) 861 861 { 862 862 void *hdr; 863 863 struct nlattr *attrs; 864 864 struct nlattr *prop; 865 865 866 866 hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_family, 867 - NLM_F_MULTI, TIPC_NL_MEDIA_GET); 867 + nlflags, TIPC_NL_MEDIA_GET); 868 868 if (!hdr) 869 869 return -EMSGSIZE; 870 870 ··· 916 916 917 917 rtnl_lock(); 918 918 for (; media_info_array[i] != NULL; i++) { 919 - err = __tipc_nl_add_media(&msg, media_info_array[i]); 919 + err = __tipc_nl_add_media(&msg, media_info_array[i], 920 + NLM_F_MULTI); 920 921 if (err) 921 922 break; 922 923 } ··· 964 963 goto err_out; 965 964 } 966 965 967 - err = __tipc_nl_add_media(&msg, media); 966 + err = __tipc_nl_add_media(&msg, media, 0); 968 967 if (err) 969 968 goto err_out; 970 969 rtnl_unlock();
+6 -10
net/tipc/link.c
··· 1145 1145 } 1146 1146 /* Synchronize with parallel link if applicable */ 1147 1147 if (unlikely((l_ptr->flags & LINK_SYNCHING) && !msg_dup(msg))) { 1148 - link_handle_out_of_seq_msg(l_ptr, skb); 1149 - if (link_synch(l_ptr)) 1150 - link_retrieve_defq(l_ptr, &head); 1151 - skb = NULL; 1152 - goto unlock; 1148 + if (!link_synch(l_ptr)) 1149 + goto unlock; 1153 1150 } 1154 1151 l_ptr->next_in_no++; 1155 1152 if (unlikely(!skb_queue_empty(&l_ptr->deferdq))) ··· 2010 2013 2011 2014 /* Caller should hold appropriate locks to protect the link */ 2012 2015 static int __tipc_nl_add_link(struct net *net, struct tipc_nl_msg *msg, 2013 - struct tipc_link *link) 2016 + struct tipc_link *link, int nlflags) 2014 2017 { 2015 2018 int err; 2016 2019 void *hdr; ··· 2019 2022 struct tipc_net *tn = net_generic(net, tipc_net_id); 2020 2023 2021 2024 hdr = genlmsg_put(msg->skb, msg->portid, msg->seq, &tipc_genl_family, 2022 - NLM_F_MULTI, TIPC_NL_LINK_GET); 2025 + nlflags, TIPC_NL_LINK_GET); 2023 2026 if (!hdr) 2024 2027 return -EMSGSIZE; 2025 2028 ··· 2092 2095 if (!node->links[i]) 2093 2096 continue; 2094 2097 2095 - err = __tipc_nl_add_link(net, msg, node->links[i]); 2098 + err = __tipc_nl_add_link(net, msg, node->links[i], NLM_F_MULTI); 2096 2099 if (err) 2097 2100 return err; 2098 2101 } ··· 2140 2143 err = __tipc_nl_add_node_links(net, &msg, node, 2141 2144 &prev_link); 2142 2145 tipc_node_unlock(node); 2143 - tipc_node_put(node); 2144 2146 if (err) 2145 2147 goto out; 2146 2148 ··· 2206 2210 goto err_out; 2207 2211 } 2208 2212 2209 - err = __tipc_nl_add_link(net, &msg, link); 2213 + err = __tipc_nl_add_link(net, &msg, link, 0); 2210 2214 if (err) 2211 2215 goto err_out; 2212 2216
+3 -6
net/tipc/server.c
··· 102 102 } 103 103 saddr->scope = -TIPC_NODE_SCOPE; 104 104 kernel_bind(sock, (struct sockaddr *)saddr, sizeof(*saddr)); 105 - sk_release_kernel(sk); 105 + sock_release(sock); 106 106 con->sock = NULL; 107 107 } 108 108 ··· 321 321 struct socket *sock = NULL; 322 322 int ret; 323 323 324 - ret = sock_create_kern(AF_TIPC, SOCK_SEQPACKET, 0, &sock); 324 + ret = __sock_create(s->net, AF_TIPC, SOCK_SEQPACKET, 0, &sock, 1); 325 325 if (ret < 0) 326 326 return NULL; 327 - 328 - sk_change_net(sock->sk, s->net); 329 - 330 327 ret = kernel_setsockopt(sock, SOL_TIPC, TIPC_IMPORTANCE, 331 328 (char *)&s->imp, sizeof(s->imp)); 332 329 if (ret < 0) ··· 373 376 374 377 create_err: 375 378 kernel_sock_shutdown(sock, SHUT_RDWR); 376 - sk_release_kernel(sock->sk); 379 + sock_release(sock); 377 380 return NULL; 378 381 } 379 382
+2 -1
net/tipc/socket.c
··· 1764 1764 int tipc_sk_rcv(struct net *net, struct sk_buff_head *inputq) 1765 1765 { 1766 1766 u32 dnode, dport = 0; 1767 - int err = -TIPC_ERR_NO_PORT; 1767 + int err; 1768 1768 struct sk_buff *skb; 1769 1769 struct tipc_sock *tsk; 1770 1770 struct tipc_net *tn; 1771 1771 struct sock *sk; 1772 1772 1773 1773 while (skb_queue_len(inputq)) { 1774 + err = -TIPC_ERR_NO_PORT; 1774 1775 skb = NULL; 1775 1776 dport = tipc_skb_peek_port(inputq, dport); 1776 1777 tsk = tipc_sk_lookup(net, dport);
+28 -42
net/unix/garbage.c
··· 95 95 96 96 unsigned int unix_tot_inflight; 97 97 98 - 99 98 struct sock *unix_get_socket(struct file *filp) 100 99 { 101 100 struct sock *u_sock = NULL; 102 101 struct inode *inode = file_inode(filp); 103 102 104 - /* 105 - * Socket ? 106 - */ 103 + /* Socket ? */ 107 104 if (S_ISSOCK(inode->i_mode) && !(filp->f_mode & FMODE_PATH)) { 108 105 struct socket *sock = SOCKET_I(inode); 109 106 struct sock *s = sock->sk; 110 107 111 - /* 112 - * PF_UNIX ? 113 - */ 108 + /* PF_UNIX ? */ 114 109 if (s && sock->ops && sock->ops->family == PF_UNIX) 115 110 u_sock = s; 116 111 } 117 112 return u_sock; 118 113 } 119 114 120 - /* 121 - * Keep the number of times in flight count for the file 122 - * descriptor if it is for an AF_UNIX socket. 115 + /* Keep the number of times in flight count for the file 116 + * descriptor if it is for an AF_UNIX socket. 123 117 */ 124 118 125 119 void unix_inflight(struct file *fp) 126 120 { 127 121 struct sock *s = unix_get_socket(fp); 122 + 128 123 if (s) { 129 124 struct unix_sock *u = unix_sk(s); 125 + 130 126 spin_lock(&unix_gc_lock); 127 + 131 128 if (atomic_long_inc_return(&u->inflight) == 1) { 132 129 BUG_ON(!list_empty(&u->link)); 133 130 list_add_tail(&u->link, &gc_inflight_list); ··· 139 142 void unix_notinflight(struct file *fp) 140 143 { 141 144 struct sock *s = unix_get_socket(fp); 145 + 142 146 if (s) { 143 147 struct unix_sock *u = unix_sk(s); 148 + 144 149 spin_lock(&unix_gc_lock); 145 150 BUG_ON(list_empty(&u->link)); 151 + 146 152 if (atomic_long_dec_and_test(&u->inflight)) 147 153 list_del_init(&u->link); 148 154 unix_tot_inflight--; ··· 161 161 162 162 spin_lock(&x->sk_receive_queue.lock); 163 163 skb_queue_walk_safe(&x->sk_receive_queue, skb, next) { 164 - /* 165 - * Do we have file descriptors ? 166 - */ 164 + /* Do we have file descriptors ? 
*/ 167 165 if (UNIXCB(skb).fp) { 168 166 bool hit = false; 169 - /* 170 - * Process the descriptors of this socket 171 - */ 167 + /* Process the descriptors of this socket */ 172 168 int nfd = UNIXCB(skb).fp->count; 173 169 struct file **fp = UNIXCB(skb).fp->fp; 170 + 174 171 while (nfd--) { 175 - /* 176 - * Get the socket the fd matches 177 - * if it indeed does so 178 - */ 172 + /* Get the socket the fd matches if it indeed does so */ 179 173 struct sock *sk = unix_get_socket(*fp++); 174 + 180 175 if (sk) { 181 176 struct unix_sock *u = unix_sk(sk); 182 177 183 - /* 184 - * Ignore non-candidates, they could 178 + /* Ignore non-candidates, they could 185 179 * have been added to the queues after 186 180 * starting the garbage collection 187 181 */ 188 182 if (test_bit(UNIX_GC_CANDIDATE, &u->gc_flags)) { 189 183 hit = true; 184 + 190 185 func(u); 191 186 } 192 187 } ··· 198 203 static void scan_children(struct sock *x, void (*func)(struct unix_sock *), 199 204 struct sk_buff_head *hitlist) 200 205 { 201 - if (x->sk_state != TCP_LISTEN) 206 + if (x->sk_state != TCP_LISTEN) { 202 207 scan_inflight(x, func, hitlist); 203 - else { 208 + } else { 204 209 struct sk_buff *skb; 205 210 struct sk_buff *next; 206 211 struct unix_sock *u; 207 212 LIST_HEAD(embryos); 208 213 209 - /* 210 - * For a listening socket collect the queued embryos 214 + /* For a listening socket collect the queued embryos 211 215 * and perform a scan on them as well. 212 216 */ 213 217 spin_lock(&x->sk_receive_queue.lock); 214 218 skb_queue_walk_safe(&x->sk_receive_queue, skb, next) { 215 219 u = unix_sk(skb->sk); 216 220 217 - /* 218 - * An embryo cannot be in-flight, so it's safe 221 + /* An embryo cannot be in-flight, so it's safe 219 222 * to use the list link. 
220 223 */ 221 224 BUG_ON(!list_empty(&u->link)); ··· 242 249 static void inc_inflight_move_tail(struct unix_sock *u) 243 250 { 244 251 atomic_long_inc(&u->inflight); 245 - /* 246 - * If this still might be part of a cycle, move it to the end 252 + /* If this still might be part of a cycle, move it to the end 247 253 * of the list, so that it's checked even if it was already 248 254 * passed over 249 255 */ ··· 255 263 256 264 void wait_for_unix_gc(void) 257 265 { 258 - /* 259 - * If number of inflight sockets is insane, 266 + /* If number of inflight sockets is insane, 260 267 * force a garbage collect right now. 261 268 */ 262 269 if (unix_tot_inflight > UNIX_INFLIGHT_TRIGGER_GC && !gc_in_progress) ··· 279 288 goto out; 280 289 281 290 gc_in_progress = true; 282 - /* 283 - * First, select candidates for garbage collection. Only 291 + /* First, select candidates for garbage collection. Only 284 292 * in-flight sockets are considered, and from those only ones 285 293 * which don't have any external reference. 286 294 * ··· 310 320 } 311 321 } 312 322 313 - /* 314 - * Now remove all internal in-flight reference to children of 323 + /* Now remove all internal in-flight reference to children of 315 324 * the candidates. 316 325 */ 317 326 list_for_each_entry(u, &gc_candidates, link) 318 327 scan_children(&u->sk, dec_inflight, NULL); 319 328 320 - /* 321 - * Restore the references for children of all candidates, 329 + /* Restore the references for children of all candidates, 322 330 * which have remaining references. Do this recursively, so 323 331 * only those remain, which form cyclic references. 324 332 * ··· 338 350 } 339 351 list_del(&cursor); 340 352 341 - /* 342 - * not_cycle_list contains those sockets which do not make up a 353 + /* not_cycle_list contains those sockets which do not make up a 343 354 * cycle. Restore these to the inflight list. 
344 355 */ 345 356 while (!list_empty(&not_cycle_list)) { ··· 347 360 list_move_tail(&u->link, &gc_inflight_list); 348 361 } 349 362 350 - /* 351 - * Now gc_candidates contains only garbage. Restore original 363 + /* Now gc_candidates contains only garbage. Restore original 352 364 * inflight counters for these as well, and remove the skbuffs 353 365 * which are creating the cycle(s). 354 366 */
+4 -2
sound/pci/emu10k1/emu10k1.c
··· 183 183 } 184 184 #endif 185 185 186 - strcpy(card->driver, emu->card_capabilities->driver); 187 - strcpy(card->shortname, emu->card_capabilities->name); 186 + strlcpy(card->driver, emu->card_capabilities->driver, 187 + sizeof(card->driver)); 188 + strlcpy(card->shortname, emu->card_capabilities->name, 189 + sizeof(card->shortname)); 188 190 snprintf(card->longname, sizeof(card->longname), 189 191 "%s (rev.%d, serial:0x%x) at 0x%lx, irq %i", 190 192 card->shortname, emu->revision, emu->serial, emu->port, emu->irq);
+2 -2
sound/pci/emu10k1/emu10k1_callback.c
··· 415 415 snd_emu10k1_ptr_write(hw, Z2, ch, 0); 416 416 417 417 /* invalidate maps */ 418 - temp = (hw->silent_page.addr << 1) | MAP_PTI_MASK; 418 + temp = (hw->silent_page.addr << hw->address_mode) | (hw->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0); 419 419 snd_emu10k1_ptr_write(hw, MAPA, ch, temp); 420 420 snd_emu10k1_ptr_write(hw, MAPB, ch, temp); 421 421 #if 0 ··· 436 436 snd_emu10k1_ptr_write(hw, CDF, ch, sample); 437 437 438 438 /* invalidate maps */ 439 - temp = ((unsigned int)hw->silent_page.addr << 1) | MAP_PTI_MASK; 439 + temp = ((unsigned int)hw->silent_page.addr << hw->address_mode) | (hw->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0); 440 440 snd_emu10k1_ptr_write(hw, MAPA, ch, temp); 441 441 snd_emu10k1_ptr_write(hw, MAPB, ch, temp); 442 442
+14 -7
sound/pci/emu10k1/emu10k1_main.c
··· 282 282 snd_emu10k1_ptr_write(emu, TCB, 0, 0); /* taken from original driver */ 283 283 snd_emu10k1_ptr_write(emu, TCBS, 0, 4); /* taken from original driver */ 284 284 285 - silent_page = (emu->silent_page.addr << 1) | MAP_PTI_MASK; 285 + silent_page = (emu->silent_page.addr << emu->address_mode) | (emu->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0); 286 286 for (ch = 0; ch < NUM_G; ch++) { 287 287 snd_emu10k1_ptr_write(emu, MAPA, ch, silent_page); 288 288 snd_emu10k1_ptr_write(emu, MAPB, ch, silent_page); ··· 346 346 } else if (emu->audigy) { /* enable analog output */ 347 347 unsigned int reg = inl(emu->port + A_IOCFG); 348 348 outl(reg | A_IOCFG_GPOUT0, emu->port + A_IOCFG); 349 + } 350 + 351 + if (emu->address_mode == 0) { 352 + /* use 16M in 4G */ 353 + outl(inl(emu->port + HCFG) | HCFG_EXPANDED_MEM, emu->port + HCFG); 349 354 } 350 355 351 356 return 0; ··· 1451 1446 * 1452 1447 */ 1453 1448 {.vendor = 0x1102, .device = 0x0008, .subsystem = 0x20011102, 1454 - .driver = "Audigy2", .name = "SB Audigy 2 ZS Notebook [SB0530]", 1449 + .driver = "Audigy2", .name = "Audigy 2 ZS Notebook [SB0530]", 1455 1450 .id = "Audigy2", 1456 1451 .emu10k2_chip = 1, 1457 1452 .ca0108_chip = 1, ··· 1601 1596 .adc_1361t = 1, /* 24 bit capture instead of 16bit */ 1602 1597 .ac97_chip = 1} , 1603 1598 {.vendor = 0x1102, .device = 0x0004, .subsystem = 0x10051102, 1604 - .driver = "Audigy2", .name = "SB Audigy 2 Platinum EX [SB0280]", 1599 + .driver = "Audigy2", .name = "Audigy 2 Platinum EX [SB0280]", 1605 1600 .id = "Audigy2", 1606 1601 .emu10k2_chip = 1, 1607 1602 .ca0102_chip = 1, ··· 1907 1902 1908 1903 is_audigy = emu->audigy = c->emu10k2_chip; 1909 1904 1905 + /* set addressing mode */ 1906 + emu->address_mode = is_audigy ? 0 : 1; 1910 1907 /* set the DMA transfer mask */ 1911 - emu->dma_mask = is_audigy ? AUDIGY_DMA_MASK : EMU10K1_DMA_MASK; 1908 + emu->dma_mask = emu->address_mode ? 
EMU10K1_DMA_MASK : AUDIGY_DMA_MASK; 1912 1909 if (pci_set_dma_mask(pci, emu->dma_mask) < 0 || 1913 1910 pci_set_consistent_dma_mask(pci, emu->dma_mask) < 0) { 1914 1911 dev_err(card->dev, ··· 1935 1928 1936 1929 emu->max_cache_pages = max_cache_bytes >> PAGE_SHIFT; 1937 1930 if (snd_dma_alloc_pages(SNDRV_DMA_TYPE_DEV, snd_dma_pci_data(pci), 1938 - 32 * 1024, &emu->ptb_pages) < 0) { 1931 + (emu->address_mode ? 32 : 16) * 1024, &emu->ptb_pages) < 0) { 1939 1932 err = -ENOMEM; 1940 1933 goto error; 1941 1934 } ··· 2034 2027 2035 2028 /* Clear silent pages and set up pointers */ 2036 2029 memset(emu->silent_page.area, 0, PAGE_SIZE); 2037 - silent_page = emu->silent_page.addr << 1; 2038 - for (idx = 0; idx < MAXPAGES; idx++) 2030 + silent_page = emu->silent_page.addr << emu->address_mode; 2031 + for (idx = 0; idx < (emu->address_mode ? MAXPAGES1 : MAXPAGES0); idx++) 2039 2032 ((u32 *)emu->ptb_pages.area)[idx] = cpu_to_le32(silent_page | idx); 2040 2033 2041 2034 /* set up voice indices */
+1 -1
sound/pci/emu10k1/emupcm.c
··· 380 380 snd_emu10k1_ptr_write(emu, Z1, voice, 0); 381 381 snd_emu10k1_ptr_write(emu, Z2, voice, 0); 382 382 /* invalidate maps */ 383 - silent_page = ((unsigned int)emu->silent_page.addr << 1) | MAP_PTI_MASK; 383 + silent_page = ((unsigned int)emu->silent_page.addr << emu->address_mode) | (emu->address_mode ? MAP_PTI_MASK1 : MAP_PTI_MASK0); 384 384 snd_emu10k1_ptr_write(emu, MAPA, voice, silent_page); 385 385 snd_emu10k1_ptr_write(emu, MAPB, voice, silent_page); 386 386 /* modulation envelope */
+6 -5
sound/pci/emu10k1/memory.c
··· 34 34 * aligned pages in others 35 35 */ 36 36 #define __set_ptb_entry(emu,page,addr) \ 37 - (((u32 *)(emu)->ptb_pages.area)[page] = cpu_to_le32(((addr) << 1) | (page))) 37 + (((u32 *)(emu)->ptb_pages.area)[page] = cpu_to_le32(((addr) << (emu->address_mode)) | (page))) 38 38 39 39 #define UNIT_PAGES (PAGE_SIZE / EMUPAGESIZE) 40 - #define MAX_ALIGN_PAGES (MAXPAGES / UNIT_PAGES) 40 + #define MAX_ALIGN_PAGES0 (MAXPAGES0 / UNIT_PAGES) 41 + #define MAX_ALIGN_PAGES1 (MAXPAGES1 / UNIT_PAGES) 41 42 /* get aligned page from offset address */ 42 43 #define get_aligned_page(offset) ((offset) >> PAGE_SHIFT) 43 44 /* get offset address from aligned page */ ··· 125 124 } 126 125 page = blk->mapped_page + blk->pages; 127 126 } 128 - size = MAX_ALIGN_PAGES - page; 127 + size = (emu->address_mode ? MAX_ALIGN_PAGES1 : MAX_ALIGN_PAGES0) - page; 129 128 if (size >= max_size) { 130 129 *nextp = pos; 131 130 return page; ··· 182 181 q = get_emu10k1_memblk(p, mapped_link); 183 182 end_page = q->mapped_page; 184 183 } else 185 - end_page = MAX_ALIGN_PAGES; 184 + end_page = (emu->address_mode ? MAX_ALIGN_PAGES1 : MAX_ALIGN_PAGES0); 186 185 187 186 /* remove links */ 188 187 list_del(&blk->mapped_link); ··· 308 307 if (snd_BUG_ON(!emu)) 309 308 return NULL; 310 309 if (snd_BUG_ON(runtime->dma_bytes <= 0 || 311 - runtime->dma_bytes >= MAXPAGES * EMUPAGESIZE)) 310 + runtime->dma_bytes >= (emu->address_mode ? MAXPAGES1 : MAXPAGES0) * EMUPAGESIZE)) 312 311 return NULL; 313 312 hdr = emu->memhdr; 314 313 if (snd_BUG_ON(!hdr))
+14 -10
sound/pci/hda/hda_codec.c
··· 873 873 struct hda_pcm *pcm; 874 874 va_list args; 875 875 876 - va_start(args, fmt); 877 876 pcm = kzalloc(sizeof(*pcm), GFP_KERNEL); 878 877 if (!pcm) 879 878 return NULL; 880 879 881 880 pcm->codec = codec; 882 881 kref_init(&pcm->kref); 882 + va_start(args, fmt); 883 883 pcm->name = kvasprintf(GFP_KERNEL, fmt, args); 884 + va_end(args); 884 885 if (!pcm->name) { 885 886 kfree(pcm); 886 887 return NULL; ··· 2083 2082 .put = vmaster_mute_mode_put, 2084 2083 }; 2085 2084 2085 + /* meta hook to call each driver's vmaster hook */ 2086 + static void vmaster_hook(void *private_data, int enabled) 2087 + { 2088 + struct hda_vmaster_mute_hook *hook = private_data; 2089 + 2090 + if (hook->mute_mode != HDA_VMUTE_FOLLOW_MASTER) 2091 + enabled = hook->mute_mode; 2092 + hook->hook(hook->codec, enabled); 2093 + } 2094 + 2086 2095 /** 2087 2096 * snd_hda_add_vmaster_hook - Add a vmaster hook for mute-LED 2088 2097 * @codec: the HDA codec ··· 2111 2100 2112 2101 if (!hook->hook || !hook->sw_kctl) 2113 2102 return 0; 2114 - snd_ctl_add_vmaster_hook(hook->sw_kctl, hook->hook, codec); 2115 2103 hook->codec = codec; 2116 2104 hook->mute_mode = HDA_VMUTE_FOLLOW_MASTER; 2105 + snd_ctl_add_vmaster_hook(hook->sw_kctl, vmaster_hook, hook); 2117 2106 if (!expose_enum_ctl) 2118 2107 return 0; 2119 2108 kctl = snd_ctl_new1(&vmaster_mute_mode, hook); ··· 2139 2128 */ 2140 2129 if (hook->codec->bus->shutdown) 2141 2130 return; 2142 - switch (hook->mute_mode) { 2143 - case HDA_VMUTE_FOLLOW_MASTER: 2144 - snd_ctl_sync_vmaster_hook(hook->sw_kctl); 2145 - break; 2146 - default: 2147 - hook->hook(hook->codec, hook->mute_mode); 2148 - break; 2149 - } 2131 + snd_ctl_sync_vmaster_hook(hook->sw_kctl); 2150 2132 } 2151 2133 EXPORT_SYMBOL_GPL(snd_hda_sync_vmaster_hook); 2152 2134
+2 -1
sound/pci/hda/hda_generic.c
··· 3259 3259 val = PIN_IN; 3260 3260 if (cfg->inputs[i].type == AUTO_PIN_MIC) 3261 3261 val |= snd_hda_get_default_vref(codec, pin); 3262 - if (pin != spec->hp_mic_pin) 3262 + if (pin != spec->hp_mic_pin && 3263 + !snd_hda_codec_get_pin_target(codec, pin)) 3263 3264 set_pin_target(codec, pin, val, false); 3264 3265 3265 3266 if (mixer) {
+12 -4
sound/pci/hda/patch_realtek.c
··· 4190 4190 static void alc_fixup_dell_xps13(struct hda_codec *codec, 4191 4191 const struct hda_fixup *fix, int action) 4192 4192 { 4193 - if (action == HDA_FIXUP_ACT_PROBE) { 4194 - struct alc_spec *spec = codec->spec; 4195 - struct hda_input_mux *imux = &spec->gen.input_mux; 4196 - int i; 4193 + struct alc_spec *spec = codec->spec; 4194 + struct hda_input_mux *imux = &spec->gen.input_mux; 4195 + int i; 4197 4196 4197 + switch (action) { 4198 + case HDA_FIXUP_ACT_PRE_PROBE: 4199 + /* mic pin 0x19 must be initialized with Vref Hi-Z, otherwise 4200 + * it causes a click noise at start up 4201 + */ 4202 + snd_hda_codec_set_pin_target(codec, 0x19, PIN_VREFHIZ); 4203 + break; 4204 + case HDA_FIXUP_ACT_PROBE: 4198 4205 spec->shutup = alc_shutup_dell_xps13; 4199 4206 4200 4207 /* Make the internal mic the default input source. */ ··· 4211 4204 break; 4212 4205 } 4213 4206 } 4207 + break; 4214 4208 } 4215 4209 } 4216 4210
+1
sound/pci/hda/thinkpad_helper.c
··· 72 72 if (led_set_func(TPACPI_LED_MUTE, false) >= 0) { 73 73 old_vmaster_hook = spec->vmaster_mute.hook; 74 74 spec->vmaster_mute.hook = update_tpacpi_mute_led; 75 + spec->vmaster_mute_enum = 1; 75 76 removefunc = false; 76 77 } 77 78 if (led_set_func(TPACPI_LED_MICMUTE, false) >= 0) {
+12 -1
sound/soc/codecs/rt5645.c
··· 18 18 #include <linux/platform_device.h> 19 19 #include <linux/spi/spi.h> 20 20 #include <linux/gpio.h> 21 + #include <linux/acpi.h> 21 22 #include <sound/core.h> 22 23 #include <sound/pcm.h> 23 24 #include <sound/pcm_params.h> ··· 2657 2656 }; 2658 2657 MODULE_DEVICE_TABLE(i2c, rt5645_i2c_id); 2659 2658 2659 + #ifdef CONFIG_ACPI 2660 + static struct acpi_device_id rt5645_acpi_match[] = { 2661 + { "10EC5645", 0 }, 2662 + { "10EC5650", 0 }, 2663 + {}, 2664 + }; 2665 + MODULE_DEVICE_TABLE(acpi, rt5645_acpi_match); 2666 + #endif 2667 + 2660 2668 static int rt5645_i2c_probe(struct i2c_client *i2c, 2661 2669 const struct i2c_device_id *id) 2662 2670 { ··· 2780 2770 2781 2771 case RT5645_DMIC_DATA_GPIO12: 2782 2772 regmap_update_bits(rt5645->regmap, RT5645_DMIC_CTRL1, 2783 - RT5645_DMIC_1_DP_MASK, RT5645_DMIC_2_DP_GPIO12); 2773 + RT5645_DMIC_2_DP_MASK, RT5645_DMIC_2_DP_GPIO12); 2784 2774 regmap_update_bits(rt5645->regmap, RT5645_GPIO_CTRL1, 2785 2775 RT5645_GP12_PIN_MASK, 2786 2776 RT5645_GP12_PIN_DMIC2_SDA); ··· 2882 2872 .driver = { 2883 2873 .name = "rt5645", 2884 2874 .owner = THIS_MODULE, 2875 + .acpi_match_table = ACPI_PTR(rt5645_acpi_match), 2885 2876 }, 2886 2877 .probe = rt5645_i2c_probe, 2887 2878 .remove = rt5645_i2c_remove,
+4 -1
sound/soc/codecs/rt5677.c
··· 62 62 {RT5677_PR_BASE + 0x1e, 0x0000}, 63 63 {RT5677_PR_BASE + 0x12, 0x0eaa}, 64 64 {RT5677_PR_BASE + 0x14, 0x018a}, 65 + {RT5677_PR_BASE + 0x15, 0x0490}, 66 + {RT5677_PR_BASE + 0x38, 0x0f71}, 67 + {RT5677_PR_BASE + 0x39, 0x0f71}, 65 68 }; 66 69 #define RT5677_INIT_REG_LEN ARRAY_SIZE(init_list) 67 70 ··· 917 914 { 918 915 struct snd_soc_codec *codec = snd_soc_dapm_to_codec(w->dapm); 919 916 struct rt5677_priv *rt5677 = snd_soc_codec_get_drvdata(codec); 920 - int idx = rl6231_calc_dmic_clk(rt5677->sysclk); 917 + int idx = rl6231_calc_dmic_clk(rt5677->lrck[RT5677_AIF1] << 8); 921 918 922 919 if (idx < 0) 923 920 dev_err(codec->dev, "Failed to set DMIC clock\n");
+2 -2
sound/soc/codecs/tfa9879.c
··· 280 280 int i; 281 281 282 282 tfa9879 = devm_kzalloc(&i2c->dev, sizeof(*tfa9879), GFP_KERNEL); 283 - if (IS_ERR(tfa9879)) 284 - return PTR_ERR(tfa9879); 283 + if (!tfa9879) 284 + return -ENOMEM; 285 285 286 286 i2c_set_clientdata(i2c, tfa9879); 287 287
+1 -1
sound/soc/fsl/fsl_ssi.c
··· 1357 1357 } 1358 1358 1359 1359 ssi_private->irq = platform_get_irq(pdev, 0); 1360 - if (!ssi_private->irq) { 1360 + if (ssi_private->irq < 0) { 1361 1361 dev_err(&pdev->dev, "no irq for node %s\n", pdev->name); 1362 1362 return ssi_private->irq; 1363 1363 }
+1 -1
sound/soc/intel/Makefile
··· 4 4 # Platform Support 5 5 obj-$(CONFIG_SND_SOC_INTEL_HASWELL) += haswell/ 6 6 obj-$(CONFIG_SND_SOC_INTEL_BAYTRAIL) += baytrail/ 7 - obj-$(CONFIG_SND_SOC_INTEL_BAYTRAIL) += atom/ 7 + obj-$(CONFIG_SND_SST_MFLD_PLATFORM) += atom/ 8 8 9 9 # Machine support 10 10 obj-$(CONFIG_SND_SOC_INTEL_SST) += boards/
-1
sound/soc/intel/baytrail/sst-baytrail-ipc.c
··· 759 759 dsp_new_err: 760 760 sst_ipc_fini(ipc); 761 761 ipc_init_err: 762 - kfree(byt); 763 762 764 763 return err; 765 764 }
-1
sound/soc/intel/haswell/sst-haswell-ipc.c
··· 2201 2201 dsp_new_err: 2202 2202 sst_ipc_fini(ipc); 2203 2203 ipc_init_err: 2204 - kfree(hsw); 2205 2204 return ret; 2206 2205 } 2207 2206 EXPORT_SYMBOL_GPL(sst_hsw_dsp_init);
+1 -1
sound/soc/qcom/lpass-cpu.c
··· 194 194 int cmd, struct snd_soc_dai *dai) 195 195 { 196 196 struct lpass_data *drvdata = snd_soc_dai_get_drvdata(dai); 197 - int ret; 197 + int ret = -EINVAL; 198 198 199 199 switch (cmd) { 200 200 case SNDRV_PCM_TRIGGER_START:
+2 -2
sound/soc/samsung/s3c24xx-i2s.c
··· 461 461 return -ENOENT; 462 462 } 463 463 s3c24xx_i2s.regs = devm_ioremap_resource(&pdev->dev, res); 464 - if (s3c24xx_i2s.regs == NULL) 465 - return -ENXIO; 464 + if (IS_ERR(s3c24xx_i2s.regs)) 465 + return PTR_ERR(s3c24xx_i2s.regs); 466 466 467 467 s3c24xx_i2s_pcm_stereo_out.dma_addr = res->start + S3C2410_IISFIFO; 468 468 s3c24xx_i2s_pcm_stereo_in.dma_addr = res->start + S3C2410_IISFIFO;
+1
sound/soc/sh/rcar/dma.c
··· 156 156 (void *)id); 157 157 } 158 158 if (IS_ERR_OR_NULL(dmaen->chan)) { 159 + dmaen->chan = NULL; 159 160 dev_err(dev, "can't get dma channel\n"); 160 161 goto rsnd_dma_channel_err; 161 162 }
+1 -10
sound/synth/emux/emux_oss.c
··· 118 118 if (snd_BUG_ON(!arg || !emu)) 119 119 return -ENXIO; 120 120 121 - mutex_lock(&emu->register_mutex); 122 - 123 - if (!snd_emux_inc_count(emu)) { 124 - mutex_unlock(&emu->register_mutex); 121 + if (!snd_emux_inc_count(emu)) 125 122 return -EFAULT; 126 - } 127 123 128 124 memset(&callback, 0, sizeof(callback)); 129 125 callback.owner = THIS_MODULE; ··· 131 135 if (p == NULL) { 132 136 snd_printk(KERN_ERR "can't create port\n"); 133 137 snd_emux_dec_count(emu); 134 - mutex_unlock(&emu->register_mutex); 135 138 return -ENOMEM; 136 139 } 137 140 ··· 143 148 reset_port_mode(p, arg->seq_mode); 144 149 145 150 snd_emux_reset_port(p); 146 - 147 - mutex_unlock(&emu->register_mutex); 148 151 return 0; 149 152 } 150 153 ··· 188 195 if (snd_BUG_ON(!emu)) 189 196 return -ENXIO; 190 197 191 - mutex_lock(&emu->register_mutex); 192 198 snd_emux_sounds_off_all(p); 193 199 snd_soundfont_close_check(emu->sflist, SF_CLIENT_NO(p->chset.port)); 194 200 snd_seq_event_port_detach(p->chset.client, p->chset.port); 195 201 snd_emux_dec_count(emu); 196 202 197 - mutex_unlock(&emu->register_mutex); 198 203 return 0; 199 204 } 200 205
+21 -8
sound/synth/emux/emux_seq.c
··· 124 124 if (emu->voices) 125 125 snd_emux_terminate_all(emu); 126 126 127 - mutex_lock(&emu->register_mutex); 128 127 if (emu->client >= 0) { 129 128 snd_seq_delete_kernel_client(emu->client); 130 129 emu->client = -1; 131 130 } 132 - mutex_unlock(&emu->register_mutex); 133 131 } 134 132 135 133 ··· 267 269 /* 268 270 * increment usage count 269 271 */ 270 - int 271 - snd_emux_inc_count(struct snd_emux *emu) 272 + static int 273 + __snd_emux_inc_count(struct snd_emux *emu) 272 274 { 273 275 emu->used++; 274 276 if (!try_module_get(emu->ops.owner)) ··· 282 284 return 1; 283 285 } 284 286 287 + int snd_emux_inc_count(struct snd_emux *emu) 288 + { 289 + int ret; 290 + 291 + mutex_lock(&emu->register_mutex); 292 + ret = __snd_emux_inc_count(emu); 293 + mutex_unlock(&emu->register_mutex); 294 + return ret; 295 + } 285 296 286 297 /* 287 298 * decrease usage count 288 299 */ 289 - void 290 - snd_emux_dec_count(struct snd_emux *emu) 300 + static void 301 + __snd_emux_dec_count(struct snd_emux *emu) 291 302 { 292 303 module_put(emu->card->module); 293 304 emu->used--; ··· 305 298 module_put(emu->ops.owner); 306 299 } 307 300 301 + void snd_emux_dec_count(struct snd_emux *emu) 302 + { 303 + mutex_lock(&emu->register_mutex); 304 + __snd_emux_dec_count(emu); 305 + mutex_unlock(&emu->register_mutex); 306 + } 308 307 309 308 /* 310 309 * Routine that is called upon a first use of a particular port ··· 330 317 331 318 mutex_lock(&emu->register_mutex); 332 319 snd_emux_init_port(p); 333 - snd_emux_inc_count(emu); 320 + __snd_emux_inc_count(emu); 334 321 mutex_unlock(&emu->register_mutex); 335 322 return 0; 336 323 } ··· 353 340 354 341 mutex_lock(&emu->register_mutex); 355 342 snd_emux_sounds_off_all(p); 356 - snd_emux_dec_count(emu); 343 + __snd_emux_dec_count(emu); 357 344 mutex_unlock(&emu->register_mutex); 358 345 return 0; 359 346 }
+1 -1
tools/lib/api/Makefile
··· 16 16 LIBFILE = $(OUTPUT)libapi.a 17 17 18 18 CFLAGS := $(EXTRA_WARNINGS) $(EXTRA_CFLAGS) 19 - CFLAGS += -ggdb3 -Wall -Wextra -std=gnu99 -Werror -O6 -D_FORTIFY_SOURCE=2 -fPIC 19 + CFLAGS += -ggdb3 -Wall -Wextra -std=gnu99 -Werror -O6 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 -fPIC 20 20 CFLAGS += -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 21 21 22 22 RM = rm -f
+1 -1
tools/lib/traceevent/event-parse.c
··· 3865 3865 } else if (el_size == 4) { 3866 3866 trace_seq_printf(s, "%u", *(uint32_t *)num); 3867 3867 } else if (el_size == 8) { 3868 - trace_seq_printf(s, "%lu", *(uint64_t *)num); 3868 + trace_seq_printf(s, "%"PRIu64, *(uint64_t *)num); 3869 3869 } else { 3870 3870 trace_seq_printf(s, "BAD SIZE:%d 0x%x", 3871 3871 el_size, *(uint8_t *)num);
+8 -7
tools/perf/bench/futex-requeue.c
··· 132 132 if (!fshared) 133 133 futex_flag = FUTEX_PRIVATE_FLAG; 134 134 135 + if (nrequeue > nthreads) 136 + nrequeue = nthreads; 137 + 135 138 printf("Run summary [PID %d]: Requeuing %d threads (from [%s] %p to %p), " 136 139 "%d at a time.\n\n", getpid(), nthreads, 137 140 fshared ? "shared":"private", &futex1, &futex2, nrequeue); ··· 164 161 165 162 /* Ok, all threads are patiently blocked, start requeueing */ 166 163 gettimeofday(&start, NULL); 167 - for (nrequeued = 0; nrequeued < nthreads; nrequeued += nrequeue) { 164 + while (nrequeued < nthreads) { 168 165 /* 169 166 * Do not wakeup any tasks blocked on futex1, allowing 170 167 * us to really measure futex_wait functionality. 171 168 */ 172 - futex_cmp_requeue(&futex1, 0, &futex2, 0, 173 - nrequeue, futex_flag); 169 + nrequeued += futex_cmp_requeue(&futex1, 0, &futex2, 0, 170 + nrequeue, futex_flag); 174 171 } 172 + 175 173 gettimeofday(&end, NULL); 176 174 timersub(&end, &start, &runtime); 177 - 178 - if (nrequeued > nthreads) 179 - nrequeued = nthreads; 180 175 181 176 update_stats(&requeued_stats, nrequeued); 182 177 update_stats(&requeuetime_stats, runtime.tv_usec); ··· 185 184 } 186 185 187 186 /* everybody should be blocked on futex2, wake'em up */ 188 - nrequeued = futex_wake(&futex2, nthreads, futex_flag); 187 + nrequeued = futex_wake(&futex2, nrequeued, futex_flag); 189 188 if (nthreads != nrequeued) 190 189 warnx("couldn't wakeup all tasks (%d/%d)", nrequeued, nthreads); 191 190
+10 -2
tools/perf/bench/numa.c
··· 180 180 OPT_INTEGER('H', "thp" , &p0.thp, "MADV_NOHUGEPAGE < 0 < MADV_HUGEPAGE"), 181 181 OPT_BOOLEAN('c', "show_convergence", &p0.show_convergence, "show convergence details"), 182 182 OPT_BOOLEAN('m', "measure_convergence", &p0.measure_convergence, "measure convergence latency"), 183 - OPT_BOOLEAN('q', "quiet" , &p0.show_quiet, "bzero the initial allocations"), 183 + OPT_BOOLEAN('q', "quiet" , &p0.show_quiet, "quiet mode"), 184 184 OPT_BOOLEAN('S', "serialize-startup", &p0.serialize_startup,"serialize thread startup"), 185 185 186 186 /* Special option string parsing callbacks: */ ··· 828 828 td = g->threads + task_nr; 829 829 830 830 node = numa_node_of_cpu(td->curr_cpu); 831 + if (node < 0) /* curr_cpu was likely still -1 */ 832 + return 0; 833 + 831 834 node_present[node] = 1; 832 835 } 833 836 ··· 884 881 885 882 for (p = 0; p < g->p.nr_proc; p++) { 886 883 unsigned int nodes = count_process_nodes(p); 884 + 885 + if (!nodes) { 886 + *strong = 0; 887 + return; 888 + } 887 889 888 890 nodes_min = min(nodes, nodes_min); 889 891 nodes_max = max(nodes, nodes_max); ··· 1403 1395 if (!name) 1404 1396 name = "main,"; 1405 1397 1406 - if (g->p.show_quiet) 1398 + if (!g->p.show_quiet) 1407 1399 printf(" %-30s %15.3f, %-15s %s\n", name, val, txt_unit, txt_short); 1408 1400 else 1409 1401 printf(" %14.3f %s\n", val, txt_long);
+29 -29
tools/perf/builtin-kmem.c
··· 319 319 return 0; 320 320 } 321 321 322 - static struct page_stat *search_page_alloc_stat(struct page_stat *stat, bool create) 322 + static struct page_stat *search_page_alloc_stat(struct page_stat *pstat, bool create) 323 323 { 324 324 struct rb_node **node = &page_alloc_tree.rb_node; 325 325 struct rb_node *parent = NULL; ··· 331 331 parent = *node; 332 332 data = rb_entry(*node, struct page_stat, node); 333 333 334 - cmp = page_stat_cmp(data, stat); 334 + cmp = page_stat_cmp(data, pstat); 335 335 if (cmp < 0) 336 336 node = &parent->rb_left; 337 337 else if (cmp > 0) ··· 345 345 346 346 data = zalloc(sizeof(*data)); 347 347 if (data != NULL) { 348 - data->page = stat->page; 349 - data->order = stat->order; 350 - data->gfp_flags = stat->gfp_flags; 351 - data->migrate_type = stat->migrate_type; 348 + data->page = pstat->page; 349 + data->order = pstat->order; 350 + data->gfp_flags = pstat->gfp_flags; 351 + data->migrate_type = pstat->migrate_type; 352 352 353 353 rb_link_node(&data->node, parent, node); 354 354 rb_insert_color(&data->node, &page_alloc_tree); ··· 375 375 unsigned int migrate_type = perf_evsel__intval(evsel, sample, 376 376 "migratetype"); 377 377 u64 bytes = kmem_page_size << order; 378 - struct page_stat *stat; 378 + struct page_stat *pstat; 379 379 struct page_stat this = { 380 380 .order = order, 381 381 .gfp_flags = gfp_flags, ··· 401 401 * This is to find the current page (with correct gfp flags and 402 402 * migrate type) at free event. 403 403 */ 404 - stat = search_page(page, true); 405 - if (stat == NULL) 404 + pstat = search_page(page, true); 405 + if (pstat == NULL) 406 406 return -ENOMEM; 407 407 408 - stat->order = order; 409 - stat->gfp_flags = gfp_flags; 410 - stat->migrate_type = migrate_type; 408 + pstat->order = order; 409 + pstat->gfp_flags = gfp_flags; 410 + pstat->migrate_type = migrate_type; 411 411 412 412 this.page = page; 413 - stat = search_page_alloc_stat(&this, true); 414 - if (stat == NULL) 413 + pstat = search_page_alloc_stat(&this, true); 414 + if (pstat == NULL) 415 415 return -ENOMEM; 416 416 417 - stat->nr_alloc++; 418 - stat->alloc_bytes += bytes; 417 + pstat->nr_alloc++; 418 + pstat->alloc_bytes += bytes; 419 419 420 420 order_stats[order][migrate_type]++; 421 421 ··· 428 428 u64 page; 429 429 unsigned int order = perf_evsel__intval(evsel, sample, "order"); 430 430 u64 bytes = kmem_page_size << order; 431 - struct page_stat *stat; 431 + struct page_stat *pstat; 432 432 struct page_stat this = { 433 433 .order = order, 434 434 }; ··· 441 441 nr_page_frees++; 442 442 total_page_free_bytes += bytes; 443 443 444 - stat = search_page(page, false); 445 - if (stat == NULL) { 444 + pstat = search_page(page, false); 445 + if (pstat == NULL) { 446 446 pr_debug2("missing free at page %"PRIx64" (order: %d)\n", 447 447 page, order); 448 448 ··· 453 453 } 454 454 455 455 this.page = page; 456 - this.gfp_flags = stat->gfp_flags; 457 - this.migrate_type = stat->migrate_type; 456 + this.gfp_flags = pstat->gfp_flags; 457 + this.migrate_type = pstat->migrate_type; 458 458 459 - rb_erase(&stat->node, &page_tree); 460 - free(stat); 459 + rb_erase(&pstat->node, &page_tree); 460 + free(pstat); 461 461 462 - stat = search_page_alloc_stat(&this, false); 463 - if (stat == NULL) 462 + pstat = search_page_alloc_stat(&this, false); 463 + if (pstat == NULL) 464 464 return -ENOENT; 465 465 466 - stat->nr_free++; 467 - stat->free_bytes += bytes; 466 + pstat->nr_free++; 467 + pstat->free_bytes += bytes; 468 468 469 469 return 0; 470 470 } ··· 640 640 nr_page_frees, total_page_free_bytes / 1024); 641 641 printf("\n"); 642 642 643 - printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc+freed requests", 643 + printf("%-30s: %'16"PRIu64" [ %'16"PRIu64" KB ]\n", "Total alloc+freed requests", 644 644 nr_alloc_freed, (total_alloc_freed_bytes) / 1024); 645 - printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total alloc-only requests", 645 + printf("%-30s: %'16"PRIu64" [ %'16"PRIu64" KB ]\n", "Total alloc-only requests", 646 646 nr_page_allocs - nr_alloc_freed, 647 647 (total_page_alloc_bytes - total_alloc_freed_bytes) / 1024); 648 648 printf("%-30s: %'16lu [ %'16"PRIu64" KB ]\n", "Total free-only requests",
+1 -1
tools/perf/builtin-report.c
··· 329 329 fprintf(stdout, "\n\n"); 330 330 } 331 331 332 - if (sort_order == default_sort_order && 332 + if (sort_order == NULL && 333 333 parent_pattern == default_parent_pattern) { 334 334 fprintf(stdout, "#\n# (%s)\n#\n", help); 335 335
+1 -1
tools/perf/builtin-top.c
··· 733 733 "Kernel address maps (/proc/{kallsyms,modules}) are restricted.\n\n" 734 734 "Check /proc/sys/kernel/kptr_restrict.\n\n" 735 735 "Kernel%s samples will not be resolved.\n", 736 - !RB_EMPTY_ROOT(&al.map->dso->symbols[MAP__FUNCTION]) ? 736 + al.map && !RB_EMPTY_ROOT(&al.map->dso->symbols[MAP__FUNCTION]) ? 737 737 " modules" : ""); 738 738 if (use_browser <= 0) 739 739 sleep(5);
+8 -2
tools/perf/builtin-trace.c
··· 2241 2241 if (err < 0) 2242 2242 goto out_error_mmap; 2243 2243 2244 + if (!target__none(&trace->opts.target)) 2245 + perf_evlist__enable(evlist); 2246 + 2244 2247 if (forks) 2245 2248 perf_evlist__start_workload(evlist); 2246 - else 2247 - perf_evlist__enable(evlist); 2248 2249 2249 2250 trace->multiple_threads = evlist->threads->map[0] == -1 || 2250 2251 evlist->threads->nr > 1 || ··· 2273 2272 2274 2273 if (interrupted) 2275 2274 goto out_disable; 2275 + 2276 + if (done && !draining) { 2277 + perf_evlist__disable(evlist); 2278 + draining = true; 2279 + } 2276 2280 } 2277 2281 } 2278 2282
+2
tools/perf/util/probe-event.c
··· 1084 1084 * 1085 1085 * TODO:Group name support 1086 1086 */ 1087 + if (!arg) 1088 + return -EINVAL; 1087 1089 1088 1090 ptr = strpbrk(arg, ";=@+%"); 1089 1091 if (ptr && *ptr == '=') { /* Event name */
+3 -1
tools/perf/util/probe-finder.c
··· 578 578 /* Search child die for local variables and parameters. */ 579 579 if (!die_find_variable_at(sc_die, pf->pvar->var, pf->addr, &vr_die)) { 580 580 /* Search again in global variables */ 581 - if (!die_find_variable_at(&pf->cu_die, pf->pvar->var, 0, &vr_die)) 581 + if (!die_find_variable_at(&pf->cu_die, pf->pvar->var, 582 + 0, &vr_die)) { 582 583 pr_warning("Failed to find '%s' in this function.\n", 583 584 pf->pvar->var); 584 585 ret = -ENOENT; 586 + } 585 587 } 586 588 if (ret >= 0) 587 589 ret = convert_variable(&vr_die, pf);
+1 -1
tools/testing/selftests/powerpc/pmu/Makefile
··· 26 26 $(MAKE) -s -C ebb emit_tests 27 27 endef 28 28 29 - DEFAULT_INSTALL := $(INSTALL_RULE) 29 + DEFAULT_INSTALL_RULE := $(INSTALL_RULE) 30 30 override define INSTALL_RULE 31 31 $(DEFAULT_INSTALL_RULE) 32 32 $(MAKE) -C ebb install
+1 -1
tools/testing/selftests/powerpc/tm/Makefile
··· 1 - TEST_PROGS := tm-resched-dscr tm-syscall 1 + TEST_PROGS := tm-resched-dscr 2 2 3 3 all: $(TEST_PROGS) 4 4