Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'samsung-mach-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into next/soc

Samsung updates for v4.2

- add failure (exception) handling for of_iomap(), of_find_device_by_node()
  and kstrdup()

- add common PS_HOLD based poweroff for all Exynos SoCs
- add exynos_get/set_boot_addr() helpers
- constify platform_device_id and irq_domain_ops
- get current parent clock for power domain on/off
- use core_initcall to register power domain driver
- make exynos_core_restart() less verbose

- add coupled CPUidle support for exynos3250

- fix exynos_boot_secondary() return value on timeout
- fix clk_enable() in s3c24xx adc
- fix missing of_node_put() for power domains

* tag 'samsung-mach-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung: (301 commits)
ARM: EXYNOS: register power domain driver from core_initcall
ARM: EXYNOS: use PS_HOLD based poweroff for all supported SoCs
ARM: SAMSUNG: Constify platform_device_id
ARM: EXYNOS: Constify irq_domain_ops
ARM: EXYNOS: add coupled cpuidle support for Exynos3250
ARM: EXYNOS: add exynos_get_boot_addr() helper
ARM: EXYNOS: add exynos_set_boot_addr() helper
ARM: EXYNOS: make exynos_core_restart() less verbose
ARM: EXYNOS: fix exynos_boot_secondary() return value on timeout
ARM: EXYNOS: Get current parent clock for power domain on/off
ARM: SAMSUNG: fix clk_enable() WARNing in S3C24XX ADC
ARM: EXYNOS: Add missing of_node_put() when parsing power domains
ARM: EXYNOS: Handle of_find_device_by_node() and kstrdup() failures
ARM: EXYNOS: Handle of of_iomap() failure
Linux 4.1-rc4
....

+3721 -3838
+4 -3
Documentation/devicetree/bindings/arm/exynos/power_domain.txt
@@
   domains.
 - clock-names: The following clocks can be specified:
 	- oscclk: Oscillator clock.
-	- pclkN, clkN: Pairs of parent of input clock and input clock to the
-		devices in this power domain. Maximum of 4 pairs (N = 0 to 3)
-		are supported currently.
+	- clkN: Input clocks to the devices in this power domain. These clocks
+		will be reparented to oscclk before switching power domain off.
+		Their original parent will be brought back after turning on
+		the domain. Maximum of 4 clocks (N = 0 to 3) are supported.
 	- asbN: Clocks required by asynchronous bridges (ASB) present in
 	  the power domain. These clock should be enabled during power
 	  domain on/off operations.
+3 -3
Documentation/devicetree/bindings/mtd/m25p80.txt → Documentation/devicetree/bindings/mtd/jedec,spi-nor.txt
@@
   is not Linux-only, but in case of Linux, see the "m25p_ids"
   table in drivers/mtd/devices/m25p80.c for the list of supported
   chips.
-  Must also include "nor-jedec" for any SPI NOR flash that can be
-  identified by the JEDEC READ ID opcode (0x9F).
+  Must also include "jedec,spi-nor" for any SPI NOR flash that can
+  be identified by the JEDEC READ ID opcode (0x9F).
 - reg : Chip-Select number
 - spi-max-frequency : Maximum frequency of the SPI bus the chip can operate at
@@
 	flash: m25p80@0 {
 		#address-cells = <1>;
 		#size-cells = <1>;
-		compatible = "spansion,m25p80", "nor-jedec";
+		compatible = "spansion,m25p80", "jedec,spi-nor";
 		reg = <0>;
 		spi-max-frequency = <40000000>;
 		m25p,fast-read;
+3
Documentation/serial/tty.txt
@@
 
 TTY_OTHER_CLOSED	Device is a pty and the other side has closed.
 
+TTY_OTHER_DONE		Device is a pty and the other side has closed and
+			all pending input processing has been completed.
+
 TTY_NO_WRITE_SPLIT	Prevent driver from splitting up writes into
 			smaller chunks.
 
+35 -15
MAINTAINERS
@@
 ARM/CORTINA SYSTEMS GEMINI ARM ARCHITECTURE
 M:	Hans Ulli Kroll <ulli.kroll@googlemail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-T:	git git://git.berlios.de/gemini-board
+T:	git git://github.com/ulli-kroll/linux.git
 S:	Maintained
 F:	arch/arm/mach-gemini/
@@
 M:	Philipp Zabel <philipp.zabel@gmail.com>
 S:	Maintained
 
-ARM/Marvell Armada 370 and Armada XP SOC support
+ARM/Marvell Kirkwood and Armada 370, 375, 38x, XP SOC support
 M:	Jason Cooper <jason@lakedaemon.net>
 M:	Andrew Lunn <andrew@lunn.ch>
 M:	Gregory Clement <gregory.clement@free-electrons.com>
@@
 S:	Maintained
 F:	arch/arm/mach-mvebu/
 F:	drivers/rtc/rtc-armada38x.c
+F:	arch/arm/boot/dts/armada*
+F:	arch/arm/boot/dts/kirkwood*
+
 
 ARM/Marvell Berlin SoC support
 M:	Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	arch/arm/mach-berlin/
+F:	arch/arm/boot/dts/berlin*
+
 
 ARM/Marvell Dove/MV78xx0/Orion SOC support
 M:	Jason Cooper <jason@lakedaemon.net>
@@
 F:	arch/arm/mach-mv78xx0/
 F:	arch/arm/mach-orion5x/
 F:	arch/arm/plat-orion/
+F:	arch/arm/boot/dts/dove*
+F:	arch/arm/boot/dts/orion5x*
+
 
 ARM/Orion SoC/Technologic Systems TS-78xx platform support
 M:	Alexander Clouter <alex@digriz.org.uk>
@@
 
 ARM/SAMSUNG EXYNOS ARM ARCHITECTURES
 M:	Kukjin Kim <kgene@kernel.org>
+M:	Krzysztof Kozlowski <k.kozlowski@samsung.com>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
 S:	Maintained
@@
 F:	drivers/net/wireless/b43legacy/
 
 BACKLIGHT CLASS/SUBSYSTEM
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 M:	Lee Jones <lee.jones@linaro.org>
 S:	Maintained
 F:	drivers/video/backlight/
@@
 F:	Documentation/extcon/
 
 EXYNOS DP DRIVER
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Maintained
 F:	drivers/gpu/drm/exynos/exynos_dp*
@@
 F:	include/uapi/linux/gfs2_ondisk.h
 
 GIGASET ISDN DRIVERS
-M:	Hansjoerg Lipp <hjlipp@web.de>
-M:	Tilman Schmidt <tilman@imap.cc>
+M:	Paul Bolle <pebolle@tiscali.nl>
 L:	gigaset307x-common@lists.sourceforge.net
 W:	http://gigaset307x.sourceforge.net/
-S:	Maintained
+S:	Odd Fixes
 F:	Documentation/isdn/README.gigaset
 F:	drivers/isdn/gigaset/
 F:	include/uapi/linux/gigaset_dev.h
@@
 L:	linux-rdma@vger.kernel.org
 W:	http://www.openfabrics.org/
 Q:	http://patchwork.kernel.org/project/linux-rdma/list/
-T:	git git://github.com/dledford/linux.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git
 S:	Supported
 F:	Documentation/infiniband/
 F:	drivers/infiniband/
@@
 S:	Maintained
 F:	arch/nios2/
 
+NOKIA N900 POWER SUPPLY DRIVERS
+M:	Pali Rohár <pali.rohar@gmail.com>
+S:	Maintained
+F:	include/linux/power/bq2415x_charger.h
+F:	include/linux/power/bq27x00_battery.h
+F:	include/linux/power/isp1704_charger.h
+F:	drivers/power/bq2415x_charger.c
+F:	drivers/power/bq27x00_battery.c
+F:	drivers/power/isp1704_charger.c
+F:	drivers/power/rx51_battery.c
+
 NTB DRIVER
 M:	Jon Mason <jdmason@kudzu.us>
 M:	Dave Jiang <dave.jiang@intel.com>
@@
 F:	drivers/pci/host/*rcar*
 
 PCI DRIVER FOR SAMSUNG EXYNOS
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 L:	linux-pci@vger.kernel.org
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	linux-samsung-soc@vger.kernel.org (moderated for non-subscribers)
@@
 F:	drivers/pci/host/pci-exynos.c
 
 PCI DRIVER FOR SYNOPSIS DESIGNWARE
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 L:	linux-pci@vger.kernel.org
 S:	Maintained
 F:	drivers/pci/host/*designware*
@@
 F:	sound/soc/samsung/
 
 SAMSUNG FRAMEBUFFER DRIVER
-M:	Jingoo Han <jg1.han@samsung.com>
+M:	Jingoo Han <jingoohan1@gmail.com>
 L:	linux-fbdev@vger.kernel.org
 S:	Maintained
 F:	drivers/video/fbdev/s3c-fb.c
@@
 S:	Supported
 F:	drivers/scsi/be2iscsi/
 
-SERVER ENGINES 10Gbps NIC - BladeEngine 2 DRIVER
-M:	Sathya Perla <sathya.perla@emulex.com>
-M:	Subbu Seetharaman <subbu.seetharaman@emulex.com>
-M:	Ajit Khaparde <ajit.khaparde@emulex.com>
+Emulex 10Gbps NIC BE2, BE3-R, Lancer, Skyhawk-R DRIVER
+M:	Sathya Perla <sathya.perla@avagotech.com>
+M:	Ajit Khaparde <ajit.khaparde@avagotech.com>
+M:	Padmanabh Ratnakar <padmanabh.ratnakar@avagotech.com>
+M:	Sriharsha Basavapatna <sriharsha.basavapatna@avagotech.com>
 L:	netdev@vger.kernel.org
 W:	http://www.emulex.com
 S:	Supported
+1 -1
Makefile
@@
 VERSION = 4
 PATCHLEVEL = 1
 SUBLEVEL = 0
-EXTRAVERSION = -rc3
+EXTRAVERSION = -rc4
 NAME = Hurr durr I'ma sheep
 
 # *DOCUMENTATION*
-13
arch/arc/Kconfig.debug
@@
 
 source "lib/Kconfig.debug"
 
-config EARLY_PRINTK
-	bool "Early printk" if EMBEDDED
-	default y
-	help
-	  Write kernel log output directly into the VGA buffer or to a serial
-	  port.
-
-	  This is useful for kernel debugging when your machine crashes very
-	  early before the console code is initialized. For normal operation
-	  it is not recommended because it looks ugly and doesn't cooperate
-	  with klogd/syslogd or the X server. You should normally N here,
-	  unless you want to debug such a crash.
-
 config 16KSTACKS
 	bool "Use 16Kb for kernel stacks instead of 8Kb"
 	help
+1 -1
arch/arc/include/asm/atomic.h
@@
 	atomic_ops_unlock(flags);					\
 }
 
-#define ATOMIC_OP_RETURN(op, c_op)					\
+#define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
 static inline int atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	unsigned long flags;						\
+2 -2
arch/arc/mm/cache_arc700.c
@@
  * Machine specific helpers for Entire D-Cache or Per Line ops
  */
 
-static unsigned int __before_dc_op(const int op)
+static inline unsigned int __before_dc_op(const int op)
 {
 	unsigned int reg = reg;
 
@@
 	return reg;
 }
 
-static void __after_dc_op(const int op, unsigned int reg)
+static inline void __after_dc_op(const int op, unsigned int reg)
 {
 	if (op & OP_FLUSH)	/* flush / flush-n-inv both wait */
 		while (read_aux_reg(ARC_REG_DC_CTRL) & DC_CTRL_FLUSH_STATUS);
+1 -1
arch/arm/boot/dts/armada-375.dtsi
@@
 	mainpll: mainpll {
 		compatible = "fixed-clock";
 		#clock-cells = <0>;
-		clock-frequency = <2000000000>;
+		clock-frequency = <1000000000>;
 	};
 	/* 25 MHz reference crystal */
 	refclk: oscillator {
+1 -1
arch/arm/boot/dts/armada-38x.dtsi
@@
 	mainpll: mainpll {
 		compatible = "fixed-clock";
 		#clock-cells = <0>;
-		clock-frequency = <2000000000>;
+		clock-frequency = <1000000000>;
 	};
 
 	/* 25 MHz reference crystal */
+1 -1
arch/arm/boot/dts/armada-39x.dtsi
@@
 	mainpll: mainpll {
 		compatible = "fixed-clock";
 		#clock-cells = <0>;
-		clock-frequency = <2000000000>;
+		clock-frequency = <1000000000>;
 	};
 };
 };
+1
arch/arm/boot/dts/dove-cubox.dts
@@
 
 	/* connect xtal input to 25MHz reference */
 	clocks = <&ref25>;
+	clock-names = "xtal";
 
 	/* connect xtal input as source of pll0 and pll1 */
 	silabs,pll-source = <0 0>, <1 0>;
+1
arch/arm/boot/dts/exynos5420-peach-pit.dts
@@
 	num-slots = <1>;
 	broken-cd;
 	cap-sdio-irq;
+	keep-power-in-suspend;
 	card-detect-delay = <200>;
 	clock-frequency = <400000000>;
 	samsung,dw-mshc-ciu-div = <1>;
+1
arch/arm/boot/dts/exynos5800-peach-pi.dts
@@
 	num-slots = <1>;
 	broken-cd;
 	cap-sdio-irq;
+	keep-power-in-suspend;
 	card-detect-delay = <200>;
 	clock-frequency = <400000000>;
 	samsung,dw-mshc-ciu-div = <1>;
+4 -4
arch/arm/boot/dts/tegra124.dtsi
@@
 		 <&tegra_car TEGRA124_CLK_PLL_U>,
 		 <&tegra_car TEGRA124_CLK_USBD>;
 	clock-names = "reg", "pll_u", "utmi-pads";
-	resets = <&tegra_car 59>, <&tegra_car 22>;
+	resets = <&tegra_car 22>, <&tegra_car 22>;
 	reset-names = "usb", "utmi-pads";
 	nvidia,hssync-start-delay = <0>;
 	nvidia,idle-wait-delay = <17>;
@@
 	nvidia,hssquelch-level = <2>;
 	nvidia,hsdiscon-level = <5>;
 	nvidia,xcvr-hsslew = <12>;
+	nvidia,has-utmi-pad-registers;
 	status = "disabled";
 };
@@
 		 <&tegra_car TEGRA124_CLK_PLL_U>,
 		 <&tegra_car TEGRA124_CLK_USBD>;
 	clock-names = "reg", "pll_u", "utmi-pads";
-	resets = <&tegra_car 22>, <&tegra_car 22>;
+	resets = <&tegra_car 58>, <&tegra_car 22>;
 	reset-names = "usb", "utmi-pads";
 	nvidia,hssync-start-delay = <0>;
 	nvidia,idle-wait-delay = <17>;
@@
 	nvidia,hssquelch-level = <2>;
 	nvidia,hsdiscon-level = <5>;
 	nvidia,xcvr-hsslew = <12>;
-	nvidia,has-utmi-pad-registers;
 	status = "disabled";
 };
@@
 		 <&tegra_car TEGRA124_CLK_PLL_U>,
 		 <&tegra_car TEGRA124_CLK_USBD>;
 	clock-names = "reg", "pll_u", "utmi-pads";
-	resets = <&tegra_car 58>, <&tegra_car 22>;
+	resets = <&tegra_car 59>, <&tegra_car 22>;
 	reset-names = "usb", "utmi-pads";
 	nvidia,hssync-start-delay = <0>;
 	nvidia,idle-wait-delay = <17>;
+1
arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
@@
 	compatible = "arm,cortex-a15-pmu";
 	interrupts = <0 68 4>,
 		     <0 69 4>;
+	interrupt-affinity = <&cpu0>, <&cpu1>;
 };
 
 oscclk6a: oscclk6a {
+7 -4
arch/arm/boot/dts/vexpress-v2p-ca9.dts
@@
 	#address-cells = <1>;
 	#size-cells = <0>;
 
-	cpu@0 {
+	A9_0: cpu@0 {
 		device_type = "cpu";
 		compatible = "arm,cortex-a9";
 		reg = <0>;
 		next-level-cache = <&L2>;
 	};
 
-	cpu@1 {
+	A9_1: cpu@1 {
 		device_type = "cpu";
 		compatible = "arm,cortex-a9";
 		reg = <1>;
 		next-level-cache = <&L2>;
 	};
 
-	cpu@2 {
+	A9_2: cpu@2 {
 		device_type = "cpu";
 		compatible = "arm,cortex-a9";
 		reg = <2>;
 		next-level-cache = <&L2>;
 	};
 
-	cpu@3 {
+	A9_3: cpu@3 {
 		device_type = "cpu";
 		compatible = "arm,cortex-a9";
 		reg = <3>;
@@
 	compatible = "arm,pl310-cache";
 	reg = <0x1e00a000 0x1000>;
 	interrupts = <0 43 4>;
+	cache-unified;
 	cache-level = <2>;
 	arm,data-latency = <1 1 1>;
 	arm,tag-latency = <1 1 1>;
@@
 		     <0 61 4>,
 		     <0 62 4>,
 		     <0 63 4>;
+	interrupt-affinity = <&A9_0>, <&A9_1>, <&A9_2>, <&A9_3>;
+
 };
 
 dcc {
+4
arch/arm/include/asm/firmware.h
@@
 	 */
 	int (*set_cpu_boot_addr)(int cpu, unsigned long boot_addr);
 	/*
+	 * Gets boot address of specified physical CPU
+	 */
+	int (*get_cpu_boot_addr)(int cpu, unsigned long *boot_addr);
+	/*
 	 * Boots specified physical CPU
 	 */
 	int (*cpu_boot)(int cpu);
+5 -1
arch/arm/mach-exynos/common.h
@@
 
 extern struct cpuidle_exynos_data cpuidle_coupled_exynos_data;
 
+extern void exynos_set_delayed_reset_assertion(bool enable);
+
 extern void s5p_init_cpu(void __iomem *cpuid_addr);
 extern unsigned int samsung_rev(void);
-extern void __iomem *cpu_boot_reg_base(void);
+extern void exynos_core_restart(u32 core_id);
+extern int exynos_set_boot_addr(u32 core_id, unsigned long boot_addr);
+extern int exynos_get_boot_addr(u32 core_id, unsigned long *boot_addr);
 
 static inline void pmu_raw_writel(u32 val, u32 offset)
 {
+29 -1
arch/arm/mach-exynos/exynos.c
@@
 }
 
 /*
+ * Set or clear the USE_DELAYED_RESET_ASSERTION option. Used by smp code
+ * and suspend.
+ *
+ * This is necessary only on Exynos4 SoCs. When system is running
+ * USE_DELAYED_RESET_ASSERTION should be set so the ARM CLK clock down
+ * feature could properly detect global idle state when secondary CPU is
+ * powered down.
+ *
+ * However this should not be set when such system is going into suspend.
+ */
+void exynos_set_delayed_reset_assertion(bool enable)
+{
+	if (of_machine_is_compatible("samsung,exynos4")) {
+		unsigned int tmp, core_id;
+
+		for (core_id = 0; core_id < num_possible_cpus(); core_id++) {
+			tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
+			if (enable)
+				tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
+			else
+				tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
+			pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
+		}
+	}
+}
+
+/*
  * Apparently, these SoCs are not able to wake-up from suspend using
  * the PMU. Too bad. Should they suddenly become capable of such a
  * feat, the matches below should be moved to suspend.c.
@@
 	exynos_sysram_init();
 
 #if defined(CONFIG_SMP) && defined(CONFIG_ARM_EXYNOS_CPUIDLE)
-	if (of_machine_is_compatible("samsung,exynos4210"))
+	if (of_machine_is_compatible("samsung,exynos4210") ||
+	    of_machine_is_compatible("samsung,exynos3250"))
 		exynos_cpuidle.dev.platform_data = &cpuidle_coupled_exynos_data;
 #endif
 	if (of_machine_is_compatible("samsung,exynos4210") ||
+18
arch/arm/mach-exynos/firmware.c
@@
 		     sysram_ns_base_addr + 0x24);
 	__raw_writel(EXYNOS_AFTR_MAGIC, sysram_ns_base_addr + 0x20);
 	if (soc_is_exynos3250()) {
+		flush_cache_all();
 		exynos_smc(SMC_CMD_SAVE, OP_TYPE_CORE,
 			   SMC_POWERSTATE_IDLE, 0);
 		exynos_smc(SMC_CMD_SHUTDOWN, OP_TYPE_CLUSTER,
@@
 	return 0;
 }
 
+static int exynos_get_cpu_boot_addr(int cpu, unsigned long *boot_addr)
+{
+	void __iomem *boot_reg;
+
+	if (!sysram_ns_base_addr)
+		return -ENODEV;
+
+	boot_reg = sysram_ns_base_addr + 0x1c;
+
+	if (soc_is_exynos4412())
+		boot_reg += 4 * cpu;
+
+	*boot_addr = __raw_readl(boot_reg);
+	return 0;
+}
+
 static int exynos_cpu_suspend(unsigned long arg)
 {
 	flush_cache_all();
@@
 static const struct firmware_ops exynos_firmware_ops = {
 	.do_idle	= IS_ENABLED(CONFIG_EXYNOS_CPU_SUSPEND) ? exynos_do_idle : NULL,
 	.set_cpu_boot_addr	= exynos_set_cpu_boot_addr,
+	.get_cpu_boot_addr	= exynos_get_cpu_boot_addr,
 	.cpu_boot	= exynos_cpu_boot,
 	.suspend	= IS_ENABLED(CONFIG_PM_SLEEP) ? exynos_suspend : NULL,
 	.resume		= IS_ENABLED(CONFIG_EXYNOS_CPU_SUSPEND) ? exynos_resume : NULL,
+60 -63
arch/arm/mach-exynos/platsmp.c
@@
 
 extern void exynos4_secondary_startup(void);
 
-/*
- * Set or clear the USE_DELAYED_RESET_ASSERTION option, set on Exynos4 SoCs
- * during hot-(un)plugging CPUx.
- *
- * The feature can be cleared safely during first boot of secondary CPU.
- *
- * Exynos4 SoCs require setting USE_DELAYED_RESET_ASSERTION during powering
- * down a CPU so the CPU idle clock down feature could properly detect global
- * idle state when CPUx is off.
- */
-static void exynos_set_delayed_reset_assertion(u32 core_id, bool enable)
-{
-	if (soc_is_exynos4()) {
-		unsigned int tmp;
-
-		tmp = pmu_raw_readl(EXYNOS_ARM_CORE_OPTION(core_id));
-		if (enable)
-			tmp |= S5P_USE_DELAYED_RESET_ASSERTION;
-		else
-			tmp &= ~(S5P_USE_DELAYED_RESET_ASSERTION);
-		pmu_raw_writel(tmp, EXYNOS_ARM_CORE_OPTION(core_id));
-	}
-}
-
 #ifdef CONFIG_HOTPLUG_CPU
 static inline void cpu_leave_lowpower(u32 core_id)
 {
@@
 	  : "=&r" (v)
 	  : "Ir" (CR_C), "Ir" (0x40)
 	  : "cc");
-
-	exynos_set_delayed_reset_assertion(core_id, false);
 }
 
 static inline void platform_do_lowpower(unsigned int cpu, int *spurious)
@@
 
 		/* Turn the CPU off on next WFI instruction. */
 		exynos_cpu_power_down(core_id);
-
-		/*
-		 * Exynos4 SoCs require setting
-		 * USE_DELAYED_RESET_ASSERTION so the CPU idle
-		 * clock down feature could properly detect
-		 * global idle state when CPUx is off.
-		 */
-		exynos_set_delayed_reset_assertion(core_id, true);
 
 		wfi();
 
@@
 			S5P_CORE_LOCAL_PWR_EN);
 }
 
-void __iomem *cpu_boot_reg_base(void)
+static void __iomem *cpu_boot_reg_base(void)
 {
 	if (soc_is_exynos4210() && samsung_rev() == EXYNOS4210_REV_1_1)
 		return pmu_base_addr + S5P_INFORM5;
@@
  *
  * Currently this is needed only when booting secondary CPU on Exynos3250.
  */
-static void exynos_core_restart(u32 core_id)
+void exynos_core_restart(u32 core_id)
 {
 	u32 val;
 
@@
 	val |= S5P_CORE_WAKEUP_FROM_LOCAL_CFG;
 	pmu_raw_writel(val, EXYNOS_ARM_CORE_STATUS(core_id));
 
-	pr_info("CPU%u: Software reset\n", core_id);
 	pmu_raw_writel(EXYNOS_CORE_PO_RESET(core_id), EXYNOS_SWRESET);
 }
 
@@
 	 */
 	spin_lock(&boot_lock);
 	spin_unlock(&boot_lock);
+}
+
+int exynos_set_boot_addr(u32 core_id, unsigned long boot_addr)
+{
+	int ret;
+
+	/*
+	 * Try to set boot address using firmware first
+	 * and fall back to boot register if it fails.
+	 */
+	ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr);
+	if (ret && ret != -ENOSYS)
+		goto fail;
+	if (ret == -ENOSYS) {
+		void __iomem *boot_reg = cpu_boot_reg(core_id);
+
+		if (IS_ERR(boot_reg)) {
+			ret = PTR_ERR(boot_reg);
+			goto fail;
+		}
+		__raw_writel(boot_addr, boot_reg);
+		ret = 0;
+	}
+fail:
+	return ret;
+}
+
+int exynos_get_boot_addr(u32 core_id, unsigned long *boot_addr)
+{
+	int ret;
+
+	/*
+	 * Try to get boot address using firmware first
+	 * and fall back to boot register if it fails.
+	 */
+	ret = call_firmware_op(get_cpu_boot_addr, core_id, boot_addr);
+	if (ret && ret != -ENOSYS)
+		goto fail;
+	if (ret == -ENOSYS) {
+		void __iomem *boot_reg = cpu_boot_reg(core_id);
+
+		if (IS_ERR(boot_reg)) {
+			ret = PTR_ERR(boot_reg);
+			goto fail;
+		}
+		*boot_addr = __raw_readl(boot_reg);
+		ret = 0;
+	}
+fail:
+	return ret;
 }
 
 static int exynos_boot_secondary(unsigned int cpu, struct task_struct *idle)
@@
 
 	boot_addr = virt_to_phys(exynos4_secondary_startup);
 
-	/*
-	 * Try to set boot address using firmware first
-	 * and fall back to boot register if it fails.
-	 */
-	ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr);
-	if (ret && ret != -ENOSYS)
+	ret = exynos_set_boot_addr(core_id, boot_addr);
+	if (ret)
 		goto fail;
-	if (ret == -ENOSYS) {
-		void __iomem *boot_reg = cpu_boot_reg(core_id);
-
-		if (IS_ERR(boot_reg)) {
-			ret = PTR_ERR(boot_reg);
-			goto fail;
-		}
-		__raw_writel(boot_addr, boot_reg);
-	}
 
 	call_firmware_op(cpu_boot, core_id);
 
@@
 		udelay(10);
 	}
 
-	/* No harm if this is called during first boot of secondary CPU */
-	exynos_set_delayed_reset_assertion(core_id, false);
+	if (pen_release != -1)
+		ret = -ETIMEDOUT;
 
 	/*
 	 * now the secondary core is starting up let it run its
@@
 
 	exynos_sysram_init();
 
+	exynos_set_delayed_reset_assertion(true);
+
 	if (read_cpuid_part() == ARM_CPU_PART_CORTEX_A9)
 		scu_enable(scu_base_addr());
 
@@
 		core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
 		boot_addr = virt_to_phys(exynos4_secondary_startup);
 
-		ret = call_firmware_op(set_cpu_boot_addr, core_id, boot_addr);
-		if (ret && ret != -ENOSYS)
+		ret = exynos_set_boot_addr(core_id, boot_addr);
+		if (ret)
 			break;
-		if (ret == -ENOSYS) {
-			void __iomem *boot_reg = cpu_boot_reg(core_id);
-
-			if (IS_ERR(boot_reg))
-				break;
-			__raw_writel(boot_addr, boot_reg);
-		}
 	}
 }
 
+43 -8
arch/arm/mach-exynos/pm.c
@@
 #include <asm/firmware.h>
 #include <asm/smp_scu.h>
 #include <asm/suspend.h>
+#include <asm/cacheflush.h>
 
 #include <mach/map.h>
 
@@
 	 * sequence, let's wait for one of these to happen
 	 */
 	while (exynos_cpu_power_state(1)) {
+		unsigned long boot_addr;
+
 		/*
 		 * The other cpu may skip idle and boot back
 		 * up again
@@
 		 * boot back up again, getting stuck in the
 		 * boot rom code
 		 */
-		if (__raw_readl(cpu_boot_reg_base()) == 0)
+		ret = exynos_get_boot_addr(1, &boot_addr);
+		if (ret)
+			goto fail;
+		ret = -1;
+		if (boot_addr == 0)
 			goto abort;
 
 		cpu_relax();
@@
 
 abort:
 	if (cpu_online(1)) {
+		unsigned long boot_addr = virt_to_phys(exynos_cpu_resume);
+
 		/*
 		 * Set the boot vector to something non-zero
 		 */
-		__raw_writel(virt_to_phys(exynos_cpu_resume),
-			     cpu_boot_reg_base());
+		ret = exynos_set_boot_addr(1, boot_addr);
+		if (ret)
+			goto fail;
 		dsb();
 
 		/*
@@
 		while (exynos_cpu_power_state(1) != S5P_CORE_LOCAL_PWR_EN)
 			cpu_relax();
 
+		if (soc_is_exynos3250()) {
+			while (!pmu_raw_readl(S5P_PMU_SPARE2) &&
+			       !atomic_read(&cpu1_wakeup))
+				cpu_relax();
+
+			if (!atomic_read(&cpu1_wakeup))
+				exynos_core_restart(1);
+		}
+
 		while (!atomic_read(&cpu1_wakeup)) {
+			smp_rmb();
+
 			/*
 			 * Poke cpu1 out of the boot rom
 			 */
-			__raw_writel(virt_to_phys(exynos_cpu_resume),
-				     cpu_boot_reg_base());
 
-			arch_send_wakeup_ipi_mask(cpumask_of(1));
+			ret = exynos_set_boot_addr(1, boot_addr);
+			if (ret)
+				goto fail;
+
+			call_firmware_op(cpu_boot, 1);
+
+			if (soc_is_exynos3250())
+				dsb_sev();
+			else
+				arch_send_wakeup_ipi_mask(cpumask_of(1));
 		}
 	}
-
+fail:
 	return ret;
 }
 
 static int exynos_wfi_finisher(unsigned long flags)
 {
+	if (soc_is_exynos3250())
+		flush_cache_all();
 	cpu_do_idle();
 
 	return -1;
@@
 	 */
 	exynos_cpu_power_down(1);
 
+	if (soc_is_exynos3250())
+		pmu_raw_writel(0, S5P_PMU_SPARE2);
+
 	ret = cpu_suspend(0, exynos_wfi_finisher);
 
 	cpu_pm_exit();
@@
 
 static void exynos_pre_enter_aftr(void)
 {
-	__raw_writel(virt_to_phys(exynos_cpu_resume), cpu_boot_reg_base());
+	unsigned long boot_addr = virt_to_phys(exynos_cpu_resume);
+
+	(void)exynos_set_boot_addr(1, boot_addr);
 }
 
 static void exynos_post_enter_aftr(void)
+35 -22
arch/arm/mach-exynos/pm_domains.c
@@
 	for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
 		if (IS_ERR(pd->clk[i]))
 			break;
+		pd->pclk[i] = clk_get_parent(pd->clk[i]);
 		if (clk_set_parent(pd->clk[i], pd->oscclk))
 			pr_err("%s: error setting oscclk as parent to clock %d\n",
 					pd->name, i);
@@
 	for (i = 0; i < MAX_CLK_PER_DOMAIN; i++) {
 		if (IS_ERR(pd->clk[i]))
 			break;
+
+		if (IS_ERR(pd->pclk[i]))
+			continue; /* Skip on first power up */
 		if (clk_set_parent(pd->clk[i], pd->pclk[i]))
 			pr_err("%s: error setting parent to clock%d\n",
 					pd->name, i);
@@
 
 static __init int exynos4_pm_init_power_domain(void)
 {
-	struct platform_device *pdev;
 	struct device_node *np;
 
 	for_each_compatible_node(np, NULL, "samsung,exynos4210-pd") {
 		struct exynos_pm_domain *pd;
 		int on, i;
-		struct device *dev;
-
-		pdev = of_find_device_by_node(np);
-		dev = &pdev->dev;
 
 		pd = kzalloc(sizeof(*pd), GFP_KERNEL);
 		if (!pd) {
 			pr_err("%s: failed to allocate memory for domain\n",
 					__func__);
+			of_node_put(np);
+			return -ENOMEM;
+		}
+		pd->pd.name = kstrdup_const(strrchr(np->full_name, '/') + 1,
+					    GFP_KERNEL);
+		if (!pd->pd.name) {
+			kfree(pd);
+			of_node_put(np);
 			return -ENOMEM;
 		}
 
-		pd->pd.name = kstrdup(dev_name(dev), GFP_KERNEL);
 		pd->name = pd->pd.name;
 		pd->base = of_iomap(np, 0);
+		if (!pd->base) {
+			pr_warn("%s: failed to map memory\n", __func__);
+			kfree(pd->pd.name);
+			kfree(pd);
+			of_node_put(np);
+			continue;
+		}
+
 		pd->pd.power_off = exynos_pd_power_off;
 		pd->pd.power_on = exynos_pd_power_on;
 
@@
 			char clk_name[8];
 
 			snprintf(clk_name, sizeof(clk_name), "asb%d", i);
-			pd->asb_clk[i] = clk_get(dev, clk_name);
+			pd->asb_clk[i] = of_clk_get_by_name(np, clk_name);
 			if (IS_ERR(pd->asb_clk[i]))
 				break;
 		}
 
-		pd->oscclk = clk_get(dev, "oscclk");
+		pd->oscclk = of_clk_get_by_name(np, "oscclk");
 		if (IS_ERR(pd->oscclk))
 			goto no_clk;
 
@@
 			char clk_name[8];
 
 			snprintf(clk_name, sizeof(clk_name), "clk%d", i);
-			pd->clk[i] = clk_get(dev, clk_name);
+			pd->clk[i] = of_clk_get_by_name(np, clk_name);
 			if (IS_ERR(pd->clk[i]))
 				break;
-			snprintf(clk_name, sizeof(clk_name), "pclk%d", i);
-			pd->pclk[i] = clk_get(dev, clk_name);
-			if (IS_ERR(pd->pclk[i])) {
-				clk_put(pd->clk[i]);
-				pd->clk[i] = ERR_PTR(-EINVAL);
-				break;
-			}
+			/*
+			 * Skip setting parent on first power up.
+			 * The parent at this time may not be useful at all.
+			 */
+			pd->pclk[i] = ERR_PTR(-EINVAL);
 		}
 
 		if (IS_ERR(pd->clk[0]))
@@
 		args.np = np;
 		args.args_count = 0;
 		child_domain = of_genpd_get_from_provider(&args);
-		if (!child_domain)
-			continue;
+		if (IS_ERR(child_domain))
+			goto next_pd;
 
 		if (of_parse_phandle_with_args(np, "power-domains",
 					 "#power-domain-cells", 0, &args) != 0)
-			continue;
+			goto next_pd;
 
 		parent_domain = of_genpd_get_from_provider(&args);
-		if (!parent_domain)
-			continue;
+		if (IS_ERR(parent_domain))
+			goto next_pd;
 
 		if (pm_genpd_add_subdomain(parent_domain, child_domain))
 			pr_warn("%s failed to add subdomain: %s\n",
@@
 		else
 			pr_info("%s has as child subdomain: %s.\n",
 				parent_domain->name, child_domain->name);
+next_pd:
 		of_node_put(np);
 	}
 
 	return 0;
 }
-arch_initcall(exynos4_pm_init_power_domain);
+core_initcall(exynos4_pm_init_power_domain);
+3 -3
arch/arm/mach-exynos/pmu.c
@@
 	EXYNOS5420_CMU_RESET_FSYS_SYS_PWR_REG,
 };
 
-static void exynos5_power_off(void)
+static void exynos_power_off(void)
 {
 	unsigned int tmp;
 
@@
 		EXYNOS5420_ARM_INTR_SPREAD_USE_STANDBYWFI);
 
 	pmu_raw_writel(0x1, EXYNOS5420_UP_SCHEDULER);
-
-	pm_power_off = exynos5_power_off;
 	pr_info("EXYNOS5420 PMU initialized\n");
 }
 
@@
 	ret = register_restart_handler(&pmu_restart_handler);
 	if (ret)
 		dev_warn(dev, "can't register restart handler err=%d\n", ret);
+
+	pm_power_off = exynos_power_off;
 
 	dev_dbg(dev, "Exynos PMU Driver probe done\n");
 	return 0;
+7 -2
arch/arm/mach-exynos/suspend.c
··· 223 223 return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &parent_args); 224 224 } 225 225 226 - static struct irq_domain_ops exynos_pmu_domain_ops = { 226 + static const struct irq_domain_ops exynos_pmu_domain_ops = { 227 227 .xlate = exynos_pmu_domain_xlate, 228 228 .alloc = exynos_pmu_domain_alloc, 229 229 .free = irq_domain_free_irqs_common, ··· 342 342 343 343 static void exynos_pm_prepare(void) 344 344 { 345 + exynos_set_delayed_reset_assertion(false); 346 + 345 347 /* Set wake-up mask registers */ 346 348 exynos_pm_set_wakeup_mask(); 347 349 ··· 484 482 485 483 /* Clear SLEEP mode set in INFORM1 */ 486 484 pmu_raw_writel(0x0, S5P_INFORM1); 485 + exynos_set_delayed_reset_assertion(true); 487 486 } 488 487 489 488 static void exynos3250_pm_resume(void) ··· 726 723 return; 727 724 } 728 725 729 - if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL))) 726 + if (WARN_ON(!of_find_property(np, "interrupt-controller", NULL))) { 730 727 pr_warn("Outdated DT detected, suspend/resume will NOT work\n"); 728 + return; 729 + } 731 730 732 731 pm_data = (const struct exynos_pm_data *) match->data; 733 732
+3 -1
arch/arm/mach-gemini/common.h
··· 12 12 #ifndef __GEMINI_COMMON_H__ 13 13 #define __GEMINI_COMMON_H__ 14 14 15 + #include <linux/reboot.h> 16 + 15 17 struct mtd_partition; 16 18 17 19 extern void gemini_map_io(void); ··· 28 26 struct mtd_partition *parts, 29 27 unsigned int nr_parts); 30 28 31 - extern void gemini_restart(char mode, const char *cmd); 29 + extern void gemini_restart(enum reboot_mode mode, const char *cmd); 32 30 33 31 #endif /* __GEMINI_COMMON_H__ */
+3 -1
arch/arm/mach-gemini/reset.c
··· 14 14 #include <mach/hardware.h> 15 15 #include <mach/global_reg.h> 16 16 17 - void gemini_restart(char mode, const char *cmd) 17 + #include "common.h" 18 + 19 + void gemini_restart(enum reboot_mode mode, const char *cmd) 18 20 { 19 21 __raw_writel(RESET_GLOBAL | RESET_CPU1, 20 22 IO_ADDRESS(GEMINI_GLOBAL_BASE) + GLOBAL_RESET);
+14 -54
arch/arm/mach-omap2/omap_hwmod.c
··· 171 171 */
172 172 #define LINKS_PER_OCP_IF 2
173 173
174 + /*
175 + * Address offset (in bytes) between the reset control and the reset
176 + * status registers: 4 bytes on OMAP4
177 + */
178 + #define OMAP4_RST_CTRL_ST_OFFSET 4
179 +
174 180 /**
175 181 * struct omap_hwmod_soc_ops - fn ptrs for some SoC-specific operations
176 182 * @enable_module: function to enable a module (via MODULEMODE)
··· 3022 3016 if (ohri->st_shift)
3023 3017 pr_err("omap_hwmod: %s: %s: hwmod data error: OMAP4 does not support st_shift\n",
3024 3018 oh->name, ohri->name);
3025 - return omap_prm_deassert_hardreset(ohri->rst_shift, 0,
3019 + return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->rst_shift,
3026 3020 oh->clkdm->pwrdm.ptr->prcm_partition,
3027 3021 oh->clkdm->pwrdm.ptr->prcm_offs,
3028 - oh->prcm.omap4.rstctrl_offs, 0);
3022 + oh->prcm.omap4.rstctrl_offs,
3023 + oh->prcm.omap4.rstctrl_offs +
3024 + OMAP4_RST_CTRL_ST_OFFSET);
3029 3025 }
3030 3026
3031 3027 /**
··· 3056 3048 }
3057 3049
3058 3050 /**
3059 - * _am33xx_assert_hardreset - call AM33XX PRM hardreset fn with hwmod args
3060 - * @oh: struct omap_hwmod * to assert hardreset
3061 - * @ohri: hardreset line data
3062 - *
3063 - * Call am33xx_prminst_assert_hardreset() with parameters extracted
3064 - * from the hwmod @oh and the hardreset line data @ohri. Only
3065 - * intended for use as an soc_ops function pointer. Passes along the
3066 - * return value from am33xx_prminst_assert_hardreset(). XXX This
3067 - * function is scheduled for removal when the PRM code is moved into
3068 - * drivers/.
3069 - */
3070 - static int _am33xx_assert_hardreset(struct omap_hwmod *oh,
3071 - struct omap_hwmod_rst_info *ohri)
3072 -
3073 - {
3074 - return omap_prm_assert_hardreset(ohri->rst_shift, 0,
3075 - oh->clkdm->pwrdm.ptr->prcm_offs,
3076 - oh->prcm.omap4.rstctrl_offs);
3077 - }
3078 -
3079 - /**
3080 3051 * _am33xx_deassert_hardreset - call AM33XX PRM hardreset fn with hwmod args
3081 3052 * @oh: struct omap_hwmod * to deassert hardreset
3082 3053 * @ohri: hardreset line data
··· 3070 3083 static int _am33xx_deassert_hardreset(struct omap_hwmod *oh,
3071 3084 struct omap_hwmod_rst_info *ohri)
3072 3085 {
3073 - return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift, 0,
3086 + return omap_prm_deassert_hardreset(ohri->rst_shift, ohri->st_shift,
3087 + oh->clkdm->pwrdm.ptr->prcm_partition,
3074 3088 oh->clkdm->pwrdm.ptr->prcm_offs,
3075 3089 oh->prcm.omap4.rstctrl_offs,
3076 3090 oh->prcm.omap4.rstst_offs);
3077 - }
3078 -
3079 - /**
3080 - * _am33xx_is_hardreset_asserted - call AM33XX PRM hardreset fn with hwmod args
3081 - * @oh: struct omap_hwmod * to test hardreset
3082 - * @ohri: hardreset line data
3083 - *
3084 - * Call am33xx_prminst_is_hardreset_asserted() with parameters
3085 - * extracted from the hwmod @oh and the hardreset line data @ohri.
3086 - * Only intended for use as an soc_ops function pointer. Passes along
3087 - * the return value from am33xx_prminst_is_hardreset_asserted(). XXX
3088 - * This function is scheduled for removal when the PRM code is moved
3089 - * into drivers/.
3090 - */ 3091 - static int _am33xx_is_hardreset_asserted(struct omap_hwmod *oh, 3092 - struct omap_hwmod_rst_info *ohri) 3093 - { 3094 - return omap_prm_is_hardreset_asserted(ohri->rst_shift, 0, 3095 - oh->clkdm->pwrdm.ptr->prcm_offs, 3096 - oh->prcm.omap4.rstctrl_offs); 3097 3091 } 3098 3092 3099 3093 /* Public functions */ ··· 3876 3908 soc_ops.init_clkdm = _init_clkdm; 3877 3909 soc_ops.update_context_lost = _omap4_update_context_lost; 3878 3910 soc_ops.get_context_lost = _omap4_get_context_lost; 3879 - } else if (soc_is_am43xx()) { 3911 + } else if (cpu_is_ti816x() || soc_is_am33xx() || soc_is_am43xx()) { 3880 3912 soc_ops.enable_module = _omap4_enable_module; 3881 3913 soc_ops.disable_module = _omap4_disable_module; 3882 3914 soc_ops.wait_target_ready = _omap4_wait_target_ready; 3883 3915 soc_ops.assert_hardreset = _omap4_assert_hardreset; 3884 - soc_ops.deassert_hardreset = _omap4_deassert_hardreset; 3885 - soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted; 3886 - soc_ops.init_clkdm = _init_clkdm; 3887 - } else if (cpu_is_ti816x() || soc_is_am33xx()) { 3888 - soc_ops.enable_module = _omap4_enable_module; 3889 - soc_ops.disable_module = _omap4_disable_module; 3890 - soc_ops.wait_target_ready = _omap4_wait_target_ready; 3891 - soc_ops.assert_hardreset = _am33xx_assert_hardreset; 3892 3916 soc_ops.deassert_hardreset = _am33xx_deassert_hardreset; 3893 - soc_ops.is_hardreset_asserted = _am33xx_is_hardreset_asserted; 3917 + soc_ops.is_hardreset_asserted = _omap4_is_hardreset_asserted; 3894 3918 soc_ops.init_clkdm = _init_clkdm; 3895 3919 } else { 3896 3920 WARN(1, "omap_hwmod: unknown SoC type\n");
+70
arch/arm/mach-omap2/omap_hwmod_43xx_data.c
··· 544 544 },
545 545 };
546 546
547 + static struct omap_hwmod_class_sysconfig am43xx_vpfe_sysc = {
548 + .rev_offs = 0x0,
549 + .sysc_offs = 0x104,
550 + .sysc_flags = SYSC_HAS_MIDLEMODE | SYSC_HAS_SIDLEMODE,
551 + .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART |
552 + MSTANDBY_FORCE | MSTANDBY_SMART | MSTANDBY_NO),
553 + .sysc_fields = &omap_hwmod_sysc_type2,
554 + };
555 +
556 + static struct omap_hwmod_class am43xx_vpfe_hwmod_class = {
557 + .name = "vpfe",
558 + .sysc = &am43xx_vpfe_sysc,
559 + };
560 +
561 + static struct omap_hwmod am43xx_vpfe0_hwmod = {
562 + .name = "vpfe0",
563 + .class = &am43xx_vpfe_hwmod_class,
564 + .clkdm_name = "l3s_clkdm",
565 + .prcm = {
566 + .omap4 = {
567 + .modulemode = MODULEMODE_SWCTRL,
568 + .clkctrl_offs = AM43XX_CM_PER_VPFE0_CLKCTRL_OFFSET,
569 + },
570 + },
571 + };
572 +
573 + static struct omap_hwmod am43xx_vpfe1_hwmod = {
574 + .name = "vpfe1",
575 + .class = &am43xx_vpfe_hwmod_class,
576 + .clkdm_name = "l3s_clkdm",
577 + .prcm = {
578 + .omap4 = {
579 + .modulemode = MODULEMODE_SWCTRL,
580 + .clkctrl_offs = AM43XX_CM_PER_VPFE1_CLKCTRL_OFFSET,
581 + },
582 + },
583 + };
584 +
547 585 /* Interfaces */
548 586 static struct omap_hwmod_ocp_if am43xx_l3_main__l4_hs = {
549 587 .master = &am33xx_l3_main_hwmod,
··· 863 825 .user = OCP_USER_MPU | OCP_USER_SDMA,
864 826 };
865 827
828 + static struct omap_hwmod_ocp_if am43xx_l3__vpfe0 = {
829 + .master = &am43xx_vpfe0_hwmod,
830 + .slave = &am33xx_l3_main_hwmod,
831 + .clk = "l3_gclk",
832 + .user = OCP_USER_MPU | OCP_USER_SDMA,
833 + };
834 +
835 + static struct omap_hwmod_ocp_if am43xx_l3__vpfe1 = {
836 + .master = &am43xx_vpfe1_hwmod,
837 + .slave = &am33xx_l3_main_hwmod,
838 + .clk = "l3_gclk",
839 + .user = OCP_USER_MPU | OCP_USER_SDMA,
840 + };
841 +
842 + static struct omap_hwmod_ocp_if am43xx_l4_ls__vpfe0 = {
843 + .master = &am33xx_l4_ls_hwmod,
844 + .slave = &am43xx_vpfe0_hwmod,
845 + .clk = "l4ls_gclk",
846 + .user = OCP_USER_MPU | OCP_USER_SDMA,
847 + };
848 + 849 + static struct omap_hwmod_ocp_if am43xx_l4_ls__vpfe1 = { 850 + .master = &am33xx_l4_ls_hwmod, 851 + .slave = &am43xx_vpfe1_hwmod, 852 + .clk = "l4ls_gclk", 853 + .user = OCP_USER_MPU | OCP_USER_SDMA, 854 + }; 855 + 866 856 static struct omap_hwmod_ocp_if *am43xx_hwmod_ocp_ifs[] __initdata = { 867 857 &am33xx_l4_wkup__synctimer, 868 858 &am43xx_l4_ls__timer8, ··· 991 925 &am43xx_l4_ls__dss_dispc, 992 926 &am43xx_l4_ls__dss_rfbi, 993 927 &am43xx_l4_ls__hdq1w, 928 + &am43xx_l3__vpfe0, 929 + &am43xx_l3__vpfe1, 930 + &am43xx_l4_ls__vpfe0, 931 + &am43xx_l4_ls__vpfe1, 994 932 NULL, 995 933 }; 996 934
+2 -1
arch/arm/mach-omap2/prcm43xx.h
··· 144 144 #define AM43XX_CM_PER_USBPHYOCP2SCP1_CLKCTRL_OFFSET 0x05C0 145 145 #define AM43XX_CM_PER_DSS_CLKCTRL_OFFSET 0x0a20 146 146 #define AM43XX_CM_PER_HDQ1W_CLKCTRL_OFFSET 0x04a0 147 - 147 + #define AM43XX_CM_PER_VPFE0_CLKCTRL_OFFSET 0x0068 148 + #define AM43XX_CM_PER_VPFE1_CLKCTRL_OFFSET 0x0070 148 149 #endif
+7 -13
arch/arm/mach-omap2/prminst44xx.c
··· 87 87 return v;
88 88 }
89 89
90 - /*
91 - * Address offset (in bytes) between the reset control and the reset
92 - * status registers: 4 bytes on OMAP4
93 - */
94 - #define OMAP4_RST_CTRL_ST_OFFSET 4
95 -
96 90 /**
97 91 * omap4_prminst_is_hardreset_asserted - read the HW reset line state of
98 92 * submodules contained in the hwmod module
··· 135 141 * omap4_prminst_deassert_hardreset - deassert a submodule hardreset line and
136 142 * wait
137 143 * @shift: register bit shift corresponding to the reset line to deassert
138 - * @st_shift: status bit offset, not used for OMAP4+
144 + * @st_shift: status bit offset corresponding to the reset line
139 145 * @part: PRM partition
140 146 * @inst: PRM instance offset
141 147 * @rstctrl_offs: reset register offset
142 - * @st_offs: reset status register offset, not used for OMAP4+
148 + * @rstst_offs: reset status register offset
143 149 *
144 150 * Some IPs like dsp, ipu or iva contain processors that require an HW
145 151 * reset line to be asserted / deasserted in order to fully enable the
··· 151 157 * of reset, or -EBUSY if the submodule did not exit reset promptly.
152 158 */ 153 159 int omap4_prminst_deassert_hardreset(u8 shift, u8 st_shift, u8 part, s16 inst, 154 - u16 rstctrl_offs, u16 st_offs) 160 + u16 rstctrl_offs, u16 rstst_offs) 155 161 { 156 162 int c; 157 163 u32 mask = 1 << shift; 158 - u16 rstst_offs = rstctrl_offs + OMAP4_RST_CTRL_ST_OFFSET; 164 + u32 st_mask = 1 << st_shift; 159 165 160 166 /* Check the current status to avoid de-asserting the line twice */ 161 167 if (omap4_prminst_is_hardreset_asserted(shift, part, inst, ··· 163 169 return -EEXIST; 164 170 165 171 /* Clear the reset status by writing 1 to the status bit */ 166 - omap4_prminst_rmw_inst_reg_bits(0xffffffff, mask, part, inst, 172 + omap4_prminst_rmw_inst_reg_bits(0xffffffff, st_mask, part, inst, 167 173 rstst_offs); 168 174 /* de-assert the reset control line */ 169 175 omap4_prminst_rmw_inst_reg_bits(mask, 0, part, inst, rstctrl_offs); 170 176 /* wait the status to be set */ 171 - omap_test_timeout(omap4_prminst_is_hardreset_asserted(shift, part, inst, 172 - rstst_offs), 177 + omap_test_timeout(omap4_prminst_is_hardreset_asserted(st_shift, part, 178 + inst, rstst_offs), 173 179 MAX_MODULE_HARDRESET_WAIT, c); 174 180 175 181 return (c == MAX_MODULE_HARDRESET_WAIT) ? -EBUSY : 0;
+5 -8
arch/arm/mach-omap2/timer.c
··· 298 298 if (IS_ERR(src)) 299 299 return PTR_ERR(src); 300 300 301 - if (clk_get_parent(timer->fclk) != src) { 302 - r = clk_set_parent(timer->fclk, src); 303 - if (r < 0) { 304 - pr_warn("%s: %s cannot set source\n", __func__, 305 - oh->name); 306 - clk_put(src); 307 - return r; 308 - } 301 + r = clk_set_parent(timer->fclk, src); 302 + if (r < 0) { 303 + pr_warn("%s: %s cannot set source\n", __func__, oh->name); 304 + clk_put(src); 305 + return r; 309 306 } 310 307 311 308 clk_put(src);
-26
arch/arm/mach-rockchip/pm.c
··· 44 44 static phys_addr_t rk3288_bootram_phy; 45 45 46 46 static struct regmap *pmu_regmap; 47 - static struct regmap *grf_regmap; 48 47 static struct regmap *sgrf_regmap; 49 48 50 49 static u32 rk3288_pmu_pwr_mode_con; 51 - static u32 rk3288_grf_soc_con0; 52 50 static u32 rk3288_sgrf_soc_con0; 53 51 54 52 static inline u32 rk3288_l2_config(void) ··· 70 72 { 71 73 u32 mode_set, mode_set1; 72 74 73 - regmap_read(grf_regmap, RK3288_GRF_SOC_CON0, &rk3288_grf_soc_con0); 74 - 75 75 regmap_read(sgrf_regmap, RK3288_SGRF_SOC_CON0, &rk3288_sgrf_soc_con0); 76 76 77 77 regmap_read(pmu_regmap, RK3288_PMU_PWRMODE_CON, 78 78 &rk3288_pmu_pwr_mode_con); 79 - 80 - /* 81 - * We need set this bit GRF_FORCE_JTAG here, for the debug module, 82 - * otherwise, it may become inaccessible after resume. 83 - * This creates a potential security issue, as the sdmmc pins may 84 - * accept jtag data for a short time during resume if no card is 85 - * inserted. 86 - * But this is of course also true for the regular boot, before we 87 - * turn of the jtag/sdmmc autodetect. 88 - */ 89 - regmap_write(grf_regmap, RK3288_GRF_SOC_CON0, GRF_FORCE_JTAG | 90 - GRF_FORCE_JTAG_WRITE); 91 79 92 80 /* 93 81 * SGRF_FAST_BOOT_EN - system to boot from FAST_BOOT_ADDR ··· 135 151 regmap_write(sgrf_regmap, RK3288_SGRF_SOC_CON0, 136 152 rk3288_sgrf_soc_con0 | SGRF_PCLK_WDT_GATE_WRITE 137 153 | SGRF_FAST_BOOT_EN_WRITE); 138 - 139 - regmap_write(grf_regmap, RK3288_GRF_SOC_CON0, rk3288_grf_soc_con0 | 140 - GRF_FORCE_JTAG_WRITE); 141 154 } 142 155 143 156 static int rockchip_lpmode_enter(unsigned long arg) ··· 190 209 "rockchip,rk3288-sgrf"); 191 210 if (IS_ERR(sgrf_regmap)) { 192 211 pr_err("%s: could not find sgrf regmap\n", __func__); 193 - return PTR_ERR(pmu_regmap); 194 - } 195 - 196 - grf_regmap = syscon_regmap_lookup_by_compatible( 197 - "rockchip,rk3288-grf"); 198 - if (IS_ERR(grf_regmap)) { 199 - pr_err("%s: could not find grf regmap\n", __func__); 200 212 return PTR_ERR(pmu_regmap); 201 213 } 202 214
-4
arch/arm/mach-rockchip/pm.h
··· 48 48 #define RK3288_PMU_WAKEUP_RST_CLR_CNT 0x44 49 49 #define RK3288_PMU_PWRMODE_CON1 0x90 50 50 51 - #define RK3288_GRF_SOC_CON0 0x244 52 - #define GRF_FORCE_JTAG BIT(12) 53 - #define GRF_FORCE_JTAG_WRITE BIT(28) 54 - 55 51 #define RK3288_SGRF_SOC_CON0 (0x0000) 56 52 #define RK3288_SGRF_FAST_BOOT_ADDR (0x0120) 57 53 #define SGRF_PCLK_WDT_GATE BIT(6)
+39 -3
arch/arm/net/bpf_jit_32.c
··· 54 54 #define SEEN_DATA (1 << (BPF_MEMWORDS + 3))
55 55
56 56 #define FLAG_NEED_X_RESET (1 << 0)
57 + #define FLAG_IMM_OVERFLOW (1 << 1)
57 58
58 59 struct jit_ctx {
59 60 const struct bpf_prog *skf;
··· 294 293 /* PC in ARM mode == address of the instruction + 8 */
295 294 imm = offset - (8 + ctx->idx * 4);
296 295
296 + if (imm & ~0xfff) {
297 + /*
298 + * literal pool is too far, signal it into flags. we
299 + * can only detect it on the second pass unfortunately.
300 + */
301 + ctx->flags |= FLAG_IMM_OVERFLOW;
302 + return 0;
303 + }
304 +
297 305 return imm;
298 306 }
··· 459 449 return;
460 450 }
461 451 #endif
462 - if (rm != ARM_R0)
463 - emit(ARM_MOV_R(ARM_R0, rm), ctx);
452 +
453 + /*
454 + * For BPF_ALU | BPF_DIV | BPF_K instructions, rm is ARM_R4
455 + * (r_A) and rn is ARM_R0 (r_scratch) so load rn first into
456 + * ARM_R1 to avoid accidentally overwriting ARM_R0 with rm
457 + * before using it as a source for ARM_R1.
458 + *
459 + * For BPF_ALU | BPF_DIV | BPF_X rm is ARM_R4 (r_A) and rn is
460 + * ARM_R5 (r_X) so there is no particular register overlap
461 + * issues.
462 + */
464 463 if (rn != ARM_R1)
465 464 emit(ARM_MOV_R(ARM_R1, rn), ctx);
465 + if (rm != ARM_R0)
466 + emit(ARM_MOV_R(ARM_R0, rm), ctx);
466 467
467 468 ctx->seen |= SEEN_CALL;
468 469 emit_mov_i(ARM_R3, (u32)jit_udiv, ctx);
··· 876 855 default:
877 856 return -1;
878 857 }
858 +
859 + if (ctx->flags & FLAG_IMM_OVERFLOW)
860 + /*
861 + * this instruction generated an overflow when
862 + * trying to access the literal pool, so
863 + * delegate this filter to the kernel interpreter.
864 + */ 865 + return -1; 879 866 } 880 867 881 868 /* compute offsets only during the first pass */ ··· 946 917 ctx.idx = 0; 947 918 948 919 build_prologue(&ctx); 949 - build_body(&ctx); 920 + if (build_body(&ctx) < 0) { 921 + #if __LINUX_ARM_ARCH__ < 7 922 + if (ctx.imm_count) 923 + kfree(ctx.imms); 924 + #endif 925 + bpf_jit_binary_free(header); 926 + goto out; 927 + } 950 928 build_epilogue(&ctx); 951 929 952 930 flush_icache_range((u32)ctx.target, (u32)(ctx.target + ctx.idx));
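The new `FLAG_IMM_OVERFLOW` path above hinges on ARM's single-data-transfer encoding, which only provides a 12-bit field for a PC-relative literal-pool offset. A minimal userspace model of the `imm & ~0xfff` check (the helper name is ours, not the JIT's):

```c
#include <assert.h>
#include <stdint.h>

/* ARM ldr/str immediate offsets are encoded in 12 bits, so a
 * literal-pool reference more than 4095 bytes away (or a negative
 * offset, by this conservative check) cannot be encoded and the JIT
 * must fall back to the interpreter. */
static int imm12_overflows(int32_t imm)
{
    return (imm & ~0xfff) != 0;
}
```

As the in-tree comment notes, the overflow is only detectable on the second JIT pass, once real offsets are known.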
+3 -3
arch/arm/plat-samsung/adc.c
··· 389 389 if (ret) 390 390 return ret; 391 391 392 - clk_enable(adc->clk); 392 + clk_prepare_enable(adc->clk); 393 393 394 394 tmp = adc->prescale | S3C2410_ADCCON_PRSCEN; 395 395 ··· 413 413 { 414 414 struct adc_device *adc = platform_get_drvdata(pdev); 415 415 416 - clk_disable(adc->clk); 416 + clk_disable_unprepare(adc->clk); 417 417 regulator_disable(adc->vdd); 418 418 419 419 return 0; ··· 475 475 #define s3c_adc_resume NULL 476 476 #endif 477 477 478 - static struct platform_device_id s3c_adc_driver_ids[] = { 478 + static const struct platform_device_id s3c_adc_driver_ids[] = { 479 479 { 480 480 .name = "s3c24xx-adc", 481 481 .driver_data = TYPE_ADCV1,
+27 -4
arch/arm64/boot/dts/arm/juno-motherboard.dtsi
··· 21 21 clock-output-names = "juno_mb:clk25mhz"; 22 22 }; 23 23 24 + v2m_refclk1mhz: refclk1mhz { 25 + compatible = "fixed-clock"; 26 + #clock-cells = <0>; 27 + clock-frequency = <1000000>; 28 + clock-output-names = "juno_mb:refclk1mhz"; 29 + }; 30 + 31 + v2m_refclk32khz: refclk32khz { 32 + compatible = "fixed-clock"; 33 + #clock-cells = <0>; 34 + clock-frequency = <32768>; 35 + clock-output-names = "juno_mb:refclk32khz"; 36 + }; 37 + 24 38 motherboard { 25 39 compatible = "arm,vexpress,v2p-p1", "simple-bus"; 26 40 #address-cells = <2>; /* SMB chipselect number and offset */ ··· 80 66 #size-cells = <1>; 81 67 ranges = <0 3 0 0x200000>; 82 68 69 + v2m_sysctl: sysctl@020000 { 70 + compatible = "arm,sp810", "arm,primecell"; 71 + reg = <0x020000 0x1000>; 72 + clocks = <&v2m_refclk32khz>, <&v2m_refclk1mhz>, <&mb_clk24mhz>; 73 + clock-names = "refclk", "timclk", "apb_pclk"; 74 + #clock-cells = <1>; 75 + clock-output-names = "timerclken0", "timerclken1", "timerclken2", "timerclken3"; 76 + }; 77 + 83 78 mmci@050000 { 84 79 compatible = "arm,pl180", "arm,primecell"; 85 80 reg = <0x050000 0x1000>; ··· 129 106 compatible = "arm,sp804", "arm,primecell"; 130 107 reg = <0x110000 0x10000>; 131 108 interrupts = <9>; 132 - clocks = <&mb_clk24mhz>, <&soc_smc50mhz>; 133 - clock-names = "timclken1", "apb_pclk"; 109 + clocks = <&v2m_sysctl 0>, <&v2m_sysctl 1>, <&mb_clk24mhz>; 110 + clock-names = "timclken1", "timclken2", "apb_pclk"; 134 111 }; 135 112 136 113 v2m_timer23: timer@120000 { 137 114 compatible = "arm,sp804", "arm,primecell"; 138 115 reg = <0x120000 0x10000>; 139 116 interrupts = <9>; 140 - clocks = <&mb_clk24mhz>, <&soc_smc50mhz>; 141 - clock-names = "timclken1", "apb_pclk"; 117 + clocks = <&v2m_sysctl 2>, <&v2m_sysctl 3>, <&mb_clk24mhz>; 118 + clock-names = "timclken1", "timclken2", "apb_pclk"; 142 119 }; 143 120 144 121 rtc@170000 {
+19 -3
arch/arm64/crypto/crc32-arm64.c
··· 147 147 { 148 148 struct chksum_desc_ctx *ctx = shash_desc_ctx(desc); 149 149 150 + put_unaligned_le32(ctx->crc, out); 151 + return 0; 152 + } 153 + 154 + static int chksumc_final(struct shash_desc *desc, u8 *out) 155 + { 156 + struct chksum_desc_ctx *ctx = shash_desc_ctx(desc); 157 + 150 158 put_unaligned_le32(~ctx->crc, out); 151 159 return 0; 152 160 } 153 161 154 162 static int __chksum_finup(u32 crc, const u8 *data, unsigned int len, u8 *out) 155 163 { 156 - put_unaligned_le32(~crc32_arm64_le_hw(crc, data, len), out); 164 + put_unaligned_le32(crc32_arm64_le_hw(crc, data, len), out); 157 165 return 0; 158 166 } 159 167 ··· 207 199 { 208 200 struct chksum_ctx *mctx = crypto_tfm_ctx(tfm); 209 201 202 + mctx->key = 0; 203 + return 0; 204 + } 205 + 206 + static int crc32c_cra_init(struct crypto_tfm *tfm) 207 + { 208 + struct chksum_ctx *mctx = crypto_tfm_ctx(tfm); 209 + 210 210 mctx->key = ~0; 211 211 return 0; 212 212 } ··· 245 229 .setkey = chksum_setkey, 246 230 .init = chksum_init, 247 231 .update = chksumc_update, 248 - .final = chksum_final, 232 + .final = chksumc_final, 249 233 .finup = chksumc_finup, 250 234 .digest = chksumc_digest, 251 235 .descsize = sizeof(struct chksum_desc_ctx), ··· 257 241 .cra_alignmask = 0, 258 242 .cra_ctxsize = sizeof(struct chksum_ctx), 259 243 .cra_module = THIS_MODULE, 260 - .cra_init = crc32_cra_init, 244 + .cra_init = crc32c_cra_init, 261 245 } 262 246 }; 263 247
+3
arch/arm64/crypto/sha1-ce-glue.c
··· 74 74 75 75 static int sha1_ce_final(struct shash_desc *desc, u8 *out) 76 76 { 77 + struct sha1_ce_state *sctx = shash_desc_ctx(desc); 78 + 79 + sctx->finalize = 0; 77 80 kernel_neon_begin_partial(16); 78 81 sha1_base_do_finalize(desc, (sha1_block_fn *)sha1_ce_transform); 79 82 kernel_neon_end();
+3
arch/arm64/crypto/sha2-ce-glue.c
··· 75 75 76 76 static int sha256_ce_final(struct shash_desc *desc, u8 *out) 77 77 { 78 + struct sha256_ce_state *sctx = shash_desc_ctx(desc); 79 + 80 + sctx->finalize = 0; 78 81 kernel_neon_begin_partial(28); 79 82 sha256_base_do_finalize(desc, (sha256_block_fn *)sha2_ce_transform); 80 83 kernel_neon_end();
+1 -52
arch/arm64/kernel/alternative.c
··· 24 24 #include <asm/cacheflush.h>
25 25 #include <asm/alternative.h>
26 26 #include <asm/cpufeature.h>
27 - #include <asm/insn.h>
28 27 #include <linux/stop_machine.h>
29 28
30 29 extern struct alt_instr __alt_instructions[], __alt_instructions_end[];
··· 33 34 struct alt_instr *end;
34 35 };
35 36
36 - /*
37 - * Decode the imm field of a b/bl instruction, and return the byte
38 - * offset as a signed value (so it can be used when computing a new
39 - * branch target).
40 - */
41 - static s32 get_branch_offset(u32 insn)
42 - {
43 - s32 imm = aarch64_insn_decode_immediate(AARCH64_INSN_IMM_26, insn);
44 -
45 - /* sign-extend the immediate before turning it into a byte offset */
46 - return (imm << 6) >> 4;
47 - }
48 -
49 - static u32 get_alt_insn(u8 *insnptr, u8 *altinsnptr)
50 - {
51 - u32 insn;
52 -
53 - aarch64_insn_read(altinsnptr, &insn);
54 -
55 - /* Stop the world on instructions we don't support... */
56 - BUG_ON(aarch64_insn_is_cbz(insn));
57 - BUG_ON(aarch64_insn_is_cbnz(insn));
58 - BUG_ON(aarch64_insn_is_bcond(insn));
59 - /* ... and there is probably more.
*/ 60 - 61 - if (aarch64_insn_is_b(insn) || aarch64_insn_is_bl(insn)) { 62 - enum aarch64_insn_branch_type type; 63 - unsigned long target; 64 - 65 - if (aarch64_insn_is_b(insn)) 66 - type = AARCH64_INSN_BRANCH_NOLINK; 67 - else 68 - type = AARCH64_INSN_BRANCH_LINK; 69 - 70 - target = (unsigned long)altinsnptr + get_branch_offset(insn); 71 - insn = aarch64_insn_gen_branch_imm((unsigned long)insnptr, 72 - target, type); 73 - } 74 - 75 - return insn; 76 - } 77 - 78 37 static int __apply_alternatives(void *alt_region) 79 38 { 80 39 struct alt_instr *alt; ··· 40 83 u8 *origptr, *replptr; 41 84 42 85 for (alt = region->begin; alt < region->end; alt++) { 43 - u32 insn; 44 - int i; 45 - 46 86 if (!cpus_have_cap(alt->cpufeature)) 47 87 continue; 48 88 ··· 49 95 50 96 origptr = (u8 *)&alt->orig_offset + alt->orig_offset; 51 97 replptr = (u8 *)&alt->alt_offset + alt->alt_offset; 52 - 53 - for (i = 0; i < alt->alt_len; i += sizeof(insn)) { 54 - insn = get_alt_insn(origptr + i, replptr + i); 55 - aarch64_insn_write(origptr + i, insn); 56 - } 57 - 98 + memcpy(origptr, replptr, alt->alt_len); 58 99 flush_icache_range((uintptr_t)origptr, 59 100 (uintptr_t)(origptr + alt->alt_len)); 60 101 }
+4 -4
arch/arm64/kernel/perf_event.c
··· 1315 1315 if (!cpu_pmu) 1316 1316 return -ENODEV; 1317 1317 1318 - irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL); 1319 - if (!irqs) 1320 - return -ENOMEM; 1321 - 1322 1318 /* Don't bother with PPIs; they're already affine */ 1323 1319 irq = platform_get_irq(pdev, 0); 1324 1320 if (irq >= 0 && irq_is_percpu(irq)) 1325 1321 return 0; 1322 + 1323 + irqs = kcalloc(pdev->num_resources, sizeof(*irqs), GFP_KERNEL); 1324 + if (!irqs) 1325 + return -ENOMEM; 1326 1326 1327 1327 for (i = 0; i < pdev->num_resources; ++i) { 1328 1328 struct device_node *dn;
+2
arch/arm64/mm/dump.c
··· 328 328 for (j = 0; j < pg_level[i].num; j++) 329 329 pg_level[i].mask |= pg_level[i].bits[j].mask; 330 330 331 + #ifdef CONFIG_SPARSEMEM_VMEMMAP 331 332 address_markers[VMEMMAP_START_NR].start_address = 332 333 (unsigned long)virt_to_page(PAGE_OFFSET); 333 334 address_markers[VMEMMAP_END_NR].start_address = 334 335 (unsigned long)virt_to_page(high_memory); 336 + #endif 335 337 336 338 pe = debugfs_create_file("kernel_page_tables", 0400, NULL, NULL, 337 339 &ptdump_fops);
+1 -1
arch/arm64/net/bpf_jit_comp.c
··· 487 487 return -EINVAL; 488 488 } 489 489 490 - imm64 = (u64)insn1.imm << 32 | imm; 490 + imm64 = (u64)insn1.imm << 32 | (u32)imm; 491 491 emit_a64_mov_i64(dst, imm64, ctx); 492 492 493 493 return 1;
+1 -1
arch/mips/Makefile
··· 277 277 ifdef CONFIG_MIPS 278 278 CHECKFLAGS += $(shell $(CC) $(KBUILD_CFLAGS) -dM -E -x c /dev/null | \ 279 279 egrep -vw '__GNUC_(|MINOR_|PATCHLEVEL_)_' | \ 280 - sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/") 280 + sed -e "s/^\#define /-D'/" -e "s/ /'='/" -e "s/$$/'/" -e 's/\$$/&&/g') 281 281 ifdef CONFIG_64BIT 282 282 CHECKFLAGS += -m64 283 283 endif
+2 -2
arch/mips/include/asm/elf.h
··· 304 304 \ 305 305 current->thread.abi = &mips_abi; \ 306 306 \ 307 - current->thread.fpu.fcr31 = current_cpu_data.fpu_csr31; \ 307 + current->thread.fpu.fcr31 = boot_cpu_data.fpu_csr31; \ 308 308 } while (0) 309 309 310 310 #endif /* CONFIG_32BIT */ ··· 366 366 else \ 367 367 current->thread.abi = &mips_abi; \ 368 368 \ 369 - current->thread.fpu.fcr31 = current_cpu_data.fpu_csr31; \ 369 + current->thread.fpu.fcr31 = boot_cpu_data.fpu_csr31; \ 370 370 \ 371 371 p = personality(current->personality); \ 372 372 if (p != PER_LINUX32 && p != PER_LINUX) \
+1 -1
arch/mips/include/asm/smp.h
··· 45 45 #define SMP_DUMP 0x8 46 46 #define SMP_ASK_C0COUNT 0x10 47 47 48 - extern volatile cpumask_t cpu_callin_map; 48 + extern cpumask_t cpu_callin_map; 49 49 50 50 /* Mask of CPUs which are currently definitely operating coherently */ 51 51 extern cpumask_t cpu_coherent_mask;
+17 -15
arch/mips/kernel/elf.c
··· 76 76 77 77 /* Lets see if this is an O32 ELF */ 78 78 if (ehdr32->e_ident[EI_CLASS] == ELFCLASS32) { 79 - /* FR = 1 for N32 */ 80 - if (ehdr32->e_flags & EF_MIPS_ABI2) 81 - state->overall_fp_mode = FP_FR1; 82 - else 83 - /* Set a good default FPU mode for O32 */ 84 - state->overall_fp_mode = cpu_has_mips_r6 ? 85 - FP_FRE : FP_FR0; 86 - 87 79 if (ehdr32->e_flags & EF_MIPS_FP64) { 88 80 /* 89 81 * Set MIPS_ABI_FP_OLD_64 for EF_MIPS_FP64. We will override it ··· 96 104 (char *)&abiflags, 97 105 sizeof(abiflags)); 98 106 } else { 99 - /* FR=1 is really the only option for 64-bit */ 100 - state->overall_fp_mode = FP_FR1; 101 - 102 107 if (phdr64->p_type != PT_MIPS_ABIFLAGS) 103 108 return 0; 104 109 if (phdr64->p_filesz < sizeof(abiflags)) ··· 126 137 struct elf32_hdr *ehdr = _ehdr; 127 138 struct mode_req prog_req, interp_req; 128 139 int fp_abi, interp_fp_abi, abi0, abi1, max_abi; 140 + bool is_mips64; 129 141 130 142 if (!config_enabled(CONFIG_MIPS_O32_FP64_SUPPORT)) 131 143 return 0; ··· 142 152 abi0 = abi1 = fp_abi; 143 153 } 144 154 145 - /* ABI limits. O32 = FP_64A, N32/N64 = FP_SOFT */ 146 - max_abi = ((ehdr->e_ident[EI_CLASS] == ELFCLASS32) && 147 - (!(ehdr->e_flags & EF_MIPS_ABI2))) ? 148 - MIPS_ABI_FP_64A : MIPS_ABI_FP_SOFT; 155 + is_mips64 = (ehdr->e_ident[EI_CLASS] == ELFCLASS64) || 156 + (ehdr->e_flags & EF_MIPS_ABI2); 157 + 158 + if (is_mips64) { 159 + /* MIPS64 code always uses FR=1, thus the default is easy */ 160 + state->overall_fp_mode = FP_FR1; 161 + 162 + /* Disallow access to the various FPXX & FP64 ABIs */ 163 + max_abi = MIPS_ABI_FP_SOFT; 164 + } else { 165 + /* Default to a mode capable of running code expecting FR=0 */ 166 + state->overall_fp_mode = cpu_has_mips_r6 ? FP_FRE : FP_FR0; 167 + 168 + /* Allow all ABIs we know about */ 169 + max_abi = MIPS_ABI_FP_64A; 170 + } 149 171 150 172 if ((abi0 > max_abi && abi0 != MIPS_ABI_FP_UNKNOWN) || 151 173 (abi1 > max_abi && abi1 != MIPS_ABI_FP_UNKNOWN))
+1 -1
arch/mips/kernel/ptrace.c
··· 176 176 177 177 __get_user(value, data + 64); 178 178 fcr31 = child->thread.fpu.fcr31; 179 - mask = current_cpu_data.fpu_msk31; 179 + mask = boot_cpu_data.fpu_msk31; 180 180 child->thread.fpu.fcr31 = (value & ~mask) | (fcr31 & mask); 181 181 182 182 /* FIR may not be written. */
+1 -1
arch/mips/kernel/smp-cps.c
··· 92 92 #ifdef CONFIG_MIPS_MT_FPAFF 93 93 /* If we have an FPU, enroll ourselves in the FPU-full mask */ 94 94 if (cpu_has_fpu) 95 - cpu_set(0, mt_fpu_cpumask); 95 + cpumask_set_cpu(0, &mt_fpu_cpumask); 96 96 #endif /* CONFIG_MIPS_MT_FPAFF */ 97 97 } 98 98
+4 -2
arch/mips/kernel/smp.c
··· 43 43 #include <asm/time.h> 44 44 #include <asm/setup.h> 45 45 46 - volatile cpumask_t cpu_callin_map; /* Bitmask of started secondaries */ 46 + cpumask_t cpu_callin_map; /* Bitmask of started secondaries */ 47 47 48 48 int __cpu_number_map[NR_CPUS]; /* Map physical to logical */ 49 49 EXPORT_SYMBOL(__cpu_number_map); ··· 218 218 /* 219 219 * Trust is futile. We should really have timeouts ... 220 220 */ 221 - while (!cpumask_test_cpu(cpu, &cpu_callin_map)) 221 + while (!cpumask_test_cpu(cpu, &cpu_callin_map)) { 222 222 udelay(100); 223 + schedule(); 224 + } 223 225 224 226 synchronise_count_master(cpu); 225 227 return 0;
-1
arch/mips/kernel/traps.c
··· 269 269 */ 270 270 printk("epc : %0*lx %pS\n", field, regs->cp0_epc, 271 271 (void *) regs->cp0_epc); 272 - printk(" %s\n", print_tainted()); 273 272 printk("ra : %0*lx %pS\n", field, regs->regs[31], 274 273 (void *) regs->regs[31]); 275 274
-6
arch/mips/kvm/emulate.c
··· 2389 2389 { 2390 2390 unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr]; 2391 2391 enum emulation_result er = EMULATE_DONE; 2392 - unsigned long curr_pc; 2393 2392 2394 2393 if (run->mmio.len > sizeof(*gpr)) { 2395 2394 kvm_err("Bad MMIO length: %d", run->mmio.len); ··· 2396 2397 goto done; 2397 2398 } 2398 2399 2399 - /* 2400 - * Update PC and hold onto current PC in case there is 2401 - * an error and we want to rollback the PC 2402 - */ 2403 - curr_pc = vcpu->arch.pc; 2404 2400 er = update_pc(vcpu, vcpu->arch.pending_load_cause); 2405 2401 if (er == EMULATE_FAIL) 2406 2402 return er;
+2 -2
arch/mips/math-emu/cp1emu.c
··· 889 889 break; 890 890 891 891 case FPCREG_RID: 892 - value = current_cpu_data.fpu_id; 892 + value = boot_cpu_data.fpu_id; 893 893 break; 894 894 895 895 default: ··· 921 921 (void *)xcp->cp0_epc, MIPSInst_RT(ir), value); 922 922 923 923 /* Preserve read-only bits. */ 924 - mask = current_cpu_data.fpu_msk31; 924 + mask = boot_cpu_data.fpu_msk31; 925 925 fcr31 = (value & ~mask) | (fcr31 & mask); 926 926 break; 927 927
+1 -1
arch/mips/mm/tlb-r4k.c
··· 495 495 496 496 if (cpu_has_rixi) { 497 497 /* 498 - * Enable the no read, no exec bits, and enable large virtual 498 + * Enable the no read, no exec bits, and enable large physical 499 499 * address. 500 500 */ 501 501 #ifdef CONFIG_64BIT
+2 -2
arch/mips/sgi-ip32/ip32-platform.c
··· 130 130 .resource = ip32_rtc_resources, 131 131 }; 132 132 133 - +static int __init sgio2_rtc_devinit(void) 133 + static __init int sgio2_rtc_devinit(void) 134 134 { 135 135 return platform_device_register(&ip32_rtc_device); 136 136 } 137 137 138 - device_initcall(sgio2_cmos_devinit); 138 + device_initcall(sgio2_rtc_devinit);
+4
arch/parisc/include/asm/elf.h
··· 348 348 349 349 #define ELF_HWCAP 0 350 350 351 + #define STACK_RND_MASK (is_32bit_task() ? \ 352 + 0x7ff >> (PAGE_SHIFT - 12) : \ 353 + 0x3ffff >> (PAGE_SHIFT - 12)) 354 + 351 355 struct mm_struct; 352 356 extern unsigned long arch_randomize_brk(struct mm_struct *); 353 357 #define arch_randomize_brk arch_randomize_brk
+6 -4
arch/parisc/kernel/process.c
··· 181 181 return 1; 182 182 } 183 183 184 + /* 185 + * Copy architecture-specific thread state 186 + */ 184 187 int 185 188 copy_thread(unsigned long clone_flags, unsigned long usp, 186 - unsigned long arg, struct task_struct *p) 189 + unsigned long kthread_arg, struct task_struct *p) 187 190 { 188 191 struct pt_regs *cregs = &(p->thread.regs); 189 192 void *stack = task_stack_page(p); ··· 198 195 extern void * const child_return; 199 196 200 197 if (unlikely(p->flags & PF_KTHREAD)) { 198 + /* kernel thread */ 201 199 memset(cregs, 0, sizeof(struct pt_regs)); 202 200 if (!usp) /* idle thread */ 203 201 return 0; 204 - 205 - /* kernel thread */ 206 202 /* Must exit via ret_from_kernel_thread in order 207 203 * to call schedule_tail() 208 204 */ ··· 217 215 #else 218 216 cregs->gr[26] = usp; 219 217 #endif 220 - cregs->gr[25] = arg; 218 + cregs->gr[25] = kthread_arg; 221 219 } else { 222 220 /* user thread */ 223 221 /* usp must be word aligned. This also prevents users from
+3
arch/parisc/kernel/sys_parisc.c
··· 77 77 if (stack_base > STACK_SIZE_MAX) 78 78 stack_base = STACK_SIZE_MAX; 79 79 80 + /* Add space for stack randomization. */ 81 + stack_base += (STACK_RND_MASK << PAGE_SHIFT); 82 + 80 83 return PAGE_ALIGN(STACK_TOP - stack_base); 81 84 } 82 85
+3 -4
arch/x86/kernel/cpu/perf_event_intel.c
··· 1134 1134 [ C(LL ) ] = { 1135 1135 [ C(OP_READ) ] = { 1136 1136 [ C(RESULT_ACCESS) ] = SLM_DMND_READ|SLM_LLC_ACCESS, 1137 - [ C(RESULT_MISS) ] = SLM_DMND_READ|SLM_LLC_MISS, 1137 + [ C(RESULT_MISS) ] = 0, 1138 1138 }, 1139 1139 [ C(OP_WRITE) ] = { 1140 1140 [ C(RESULT_ACCESS) ] = SLM_DMND_WRITE|SLM_LLC_ACCESS, ··· 1184 1184 [ C(OP_READ) ] = { 1185 1185 /* OFFCORE_RESPONSE.ANY_DATA.LOCAL_CACHE */ 1186 1186 [ C(RESULT_ACCESS) ] = 0x01b7, 1187 - /* OFFCORE_RESPONSE.ANY_DATA.ANY_LLC_MISS */ 1188 - [ C(RESULT_MISS) ] = 0x01b7, 1187 + [ C(RESULT_MISS) ] = 0, 1189 1188 }, 1190 1189 [ C(OP_WRITE) ] = { 1191 1190 /* OFFCORE_RESPONSE.ANY_RFO.LOCAL_CACHE */ ··· 1216 1217 [ C(ITLB) ] = { 1217 1218 [ C(OP_READ) ] = { 1218 1219 [ C(RESULT_ACCESS) ] = 0x00c0, /* INST_RETIRED.ANY_P */ 1219 - [ C(RESULT_MISS) ] = 0x0282, /* ITLB.MISSES */ 1220 + [ C(RESULT_MISS) ] = 0x40205, /* PAGE_WALKS.I_SIDE_WALKS */ 1220 1221 }, 1221 1222 [ C(OP_WRITE) ] = { 1222 1223 [ C(RESULT_ACCESS) ] = -1,
+1
arch/x86/kernel/cpu/perf_event_intel_rapl.c
··· 722 722 break; 723 723 case 60: /* Haswell */ 724 724 case 69: /* Haswell-Celeron */ 725 + case 61: /* Broadwell */ 725 726 rapl_cntr_mask = RAPL_IDX_HSW; 726 727 rapl_pmu_events_group.attrs = rapl_events_hsw_attr; 727 728 break;
+28
arch/x86/net/bpf_jit_comp.c
··· 559 559 if (is_ereg(dst_reg)) 560 560 EMIT1(0x41); 561 561 EMIT3(0xC1, add_1reg(0xC8, dst_reg), 8); 562 + 563 + /* emit 'movzwl eax, ax' */ 564 + if (is_ereg(dst_reg)) 565 + EMIT3(0x45, 0x0F, 0xB7); 566 + else 567 + EMIT2(0x0F, 0xB7); 568 + EMIT1(add_2reg(0xC0, dst_reg, dst_reg)); 562 569 break; 563 570 case 32: 564 571 /* emit 'bswap eax' to swap lower 4 bytes */ ··· 584 577 break; 585 578 586 579 case BPF_ALU | BPF_END | BPF_FROM_LE: 580 + switch (imm32) { 581 + case 16: 582 + /* emit 'movzwl eax, ax' to zero extend 16-bit 583 + * into 64 bit 584 + */ 585 + if (is_ereg(dst_reg)) 586 + EMIT3(0x45, 0x0F, 0xB7); 587 + else 588 + EMIT2(0x0F, 0xB7); 589 + EMIT1(add_2reg(0xC0, dst_reg, dst_reg)); 590 + break; 591 + case 32: 592 + /* emit 'mov eax, eax' to clear upper 32-bits */ 593 + if (is_ereg(dst_reg)) 594 + EMIT1(0x45); 595 + EMIT2(0x89, add_2reg(0xC0, dst_reg, dst_reg)); 596 + break; 597 + case 64: 598 + /* nop */ 599 + break; 600 + } 587 601 break; 588 602 589 603 /* ST: *(u8*)(dst_reg + off) = imm */
+1 -1
arch/x86/vdso/Makefile
··· 51 51 $(obj)/vdso64.so.dbg: $(src)/vdso.lds $(vobjs) FORCE 52 52 $(call if_changed,vdso) 53 53 54 - HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi 54 + HOST_EXTRACFLAGS += -I$(srctree)/tools/include -I$(srctree)/include/uapi -I$(srctree)/arch/x86/include/uapi 55 55 hostprogs-y += vdso2c 56 56 57 57 quiet_cmd_vdso2c = VDSO2C $@
+3 -10
drivers/acpi/acpica/utglobal.c
··· 102 102 {"_SB_", ACPI_TYPE_DEVICE, NULL}, 103 103 {"_SI_", ACPI_TYPE_LOCAL_SCOPE, NULL}, 104 104 {"_TZ_", ACPI_TYPE_DEVICE, NULL}, 105 - /* 106 - * March, 2015: 107 - * The _REV object is in the process of being deprecated, because 108 - * other ACPI implementations permanently return 2. Thus, it 109 - * has little or no value. Return 2 for compatibility with 110 - * other ACPI implementations. 111 - */ 112 - {"_REV", ACPI_TYPE_INTEGER, ACPI_CAST_PTR(char, 2)}, 105 + {"_REV", ACPI_TYPE_INTEGER, (char *)ACPI_CA_SUPPORT_LEVEL}, 113 106 {"_OS_", ACPI_TYPE_STRING, ACPI_OS_NAME}, 114 - {"_GL_", ACPI_TYPE_MUTEX, ACPI_CAST_PTR(char, 1)}, 107 + {"_GL_", ACPI_TYPE_MUTEX, (char *)1}, 115 108 116 109 #if !defined (ACPI_NO_METHOD_EXECUTION) || defined (ACPI_CONSTANT_EVAL_ONLY) 117 - {"_OSI", ACPI_TYPE_METHOD, ACPI_CAST_PTR(char, 1)}, 110 + {"_OSI", ACPI_TYPE_METHOD, (char *)1}, 118 111 #endif 119 112 120 113 /* Table terminator */
+2 -4
drivers/acpi/osl.c
··· 182 182 request_mem_region(addr, length, desc); 183 183 } 184 184 185 - static int __init acpi_reserve_resources(void) 185 + static void __init acpi_reserve_resources(void) 186 186 { 187 187 acpi_request_region(&acpi_gbl_FADT.xpm1a_event_block, acpi_gbl_FADT.pm1_event_length, 188 188 "ACPI PM1a_EVT_BLK"); ··· 211 211 if (!(acpi_gbl_FADT.gpe1_block_length & 0x1)) 212 212 acpi_request_region(&acpi_gbl_FADT.xgpe1_block, 213 213 acpi_gbl_FADT.gpe1_block_length, "ACPI GPE1_BLK"); 214 - 215 - return 0; 216 214 } 217 - device_initcall(acpi_reserve_resources); 218 215 219 216 void acpi_os_printf(const char *fmt, ...) 220 217 { ··· 1842 1845 1843 1846 acpi_status __init acpi_os_initialize1(void) 1844 1847 { 1848 + acpi_reserve_resources(); 1845 1849 kacpid_wq = alloc_workqueue("kacpid", 0, 1); 1846 1850 kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 1); 1847 1851 kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);
+1 -9
drivers/ata/Kconfig
··· 270 270 config SATA_DWC 271 271 tristate "DesignWare Cores SATA support" 272 272 depends on 460EX 273 + select DW_DMAC 273 274 help 274 275 This option enables support for the on-chip SATA controller of the 275 276 AppliedMicro processor 460EX. ··· 727 726 help 728 727 This option enables support for the NatSemi/AMD SC1200 SoC 729 728 companion chip used with the Geode processor family. 730 - 731 - If unsure, say N. 732 - 733 - config PATA_SCC 734 - tristate "Toshiba's Cell Reference Set IDE support" 735 - depends on PCI && PPC_CELLEB 736 - help 737 - This option enables support for the built-in IDE controller on 738 - Toshiba Cell Reference Board. 739 729 740 730 If unsure, say N. 741 731
-1
drivers/ata/Makefile
··· 75 75 obj-$(CONFIG_PATA_RADISYS) += pata_radisys.o 76 76 obj-$(CONFIG_PATA_RDC) += pata_rdc.o 77 77 obj-$(CONFIG_PATA_SC1200) += pata_sc1200.o 78 - obj-$(CONFIG_PATA_SCC) += pata_scc.o 79 78 obj-$(CONFIG_PATA_SCH) += pata_sch.o 80 79 obj-$(CONFIG_PATA_SERVERWORKS) += pata_serverworks.o 81 80 obj-$(CONFIG_PATA_SIL680) += pata_sil680.o
+95 -8
drivers/ata/ahci.c
··· 66 66 board_ahci_yes_fbs, 67 67 68 68 /* board IDs for specific chipsets in alphabetical order */ 69 + board_ahci_avn, 69 70 board_ahci_mcp65, 70 71 board_ahci_mcp77, 71 72 board_ahci_mcp89, ··· 85 84 static int ahci_init_one(struct pci_dev *pdev, const struct pci_device_id *ent); 86 85 static int ahci_vt8251_hardreset(struct ata_link *link, unsigned int *class, 87 86 unsigned long deadline); 87 + static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class, 88 + unsigned long deadline); 88 89 static void ahci_mcp89_apple_enable(struct pci_dev *pdev); 89 90 static bool is_mcp89_apple(struct pci_dev *pdev); 90 91 static int ahci_p5wdh_hardreset(struct ata_link *link, unsigned int *class, ··· 108 105 static struct ata_port_operations ahci_p5wdh_ops = { 109 106 .inherits = &ahci_ops, 110 107 .hardreset = ahci_p5wdh_hardreset, 108 + }; 109 + 110 + static struct ata_port_operations ahci_avn_ops = { 111 + .inherits = &ahci_ops, 112 + .hardreset = ahci_avn_hardreset, 111 113 }; 112 114 113 115 static const struct ata_port_info ahci_port_info[] = { ··· 159 151 .port_ops = &ahci_ops, 160 152 }, 161 153 /* by chipsets */ 154 + [board_ahci_avn] = { 155 + .flags = AHCI_FLAG_COMMON, 156 + .pio_mask = ATA_PIO4, 157 + .udma_mask = ATA_UDMA6, 158 + .port_ops = &ahci_avn_ops, 159 + }, 162 160 [board_ahci_mcp65] = { 163 161 AHCI_HFLAGS (AHCI_HFLAG_NO_FPDMA_AA | AHCI_HFLAG_NO_PMP | 164 162 AHCI_HFLAG_YES_NCQ), ··· 304 290 { PCI_VDEVICE(INTEL, 0x1f27), board_ahci }, /* Avoton RAID */ 305 291 { PCI_VDEVICE(INTEL, 0x1f2e), board_ahci }, /* Avoton RAID */ 306 292 { PCI_VDEVICE(INTEL, 0x1f2f), board_ahci }, /* Avoton RAID */ 307 - { PCI_VDEVICE(INTEL, 0x1f32), board_ahci }, /* Avoton AHCI */ 308 - { PCI_VDEVICE(INTEL, 0x1f33), board_ahci }, /* Avoton AHCI */ 309 - { PCI_VDEVICE(INTEL, 0x1f34), board_ahci }, /* Avoton RAID */ 310 - { PCI_VDEVICE(INTEL, 0x1f35), board_ahci }, /* Avoton RAID */ 311 - { PCI_VDEVICE(INTEL, 0x1f36), board_ahci }, /* Avoton RAID */ 312 - { 
PCI_VDEVICE(INTEL, 0x1f37), board_ahci }, /* Avoton RAID */ 313 - { PCI_VDEVICE(INTEL, 0x1f3e), board_ahci }, /* Avoton RAID */ 314 - { PCI_VDEVICE(INTEL, 0x1f3f), board_ahci }, /* Avoton RAID */ 293 + { PCI_VDEVICE(INTEL, 0x1f32), board_ahci_avn }, /* Avoton AHCI */ 294 + { PCI_VDEVICE(INTEL, 0x1f33), board_ahci_avn }, /* Avoton AHCI */ 295 + { PCI_VDEVICE(INTEL, 0x1f34), board_ahci_avn }, /* Avoton RAID */ 296 + { PCI_VDEVICE(INTEL, 0x1f35), board_ahci_avn }, /* Avoton RAID */ 297 + { PCI_VDEVICE(INTEL, 0x1f36), board_ahci_avn }, /* Avoton RAID */ 298 + { PCI_VDEVICE(INTEL, 0x1f37), board_ahci_avn }, /* Avoton RAID */ 299 + { PCI_VDEVICE(INTEL, 0x1f3e), board_ahci_avn }, /* Avoton RAID */ 300 + { PCI_VDEVICE(INTEL, 0x1f3f), board_ahci_avn }, /* Avoton RAID */ 315 301 { PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Wellsburg RAID */ 316 302 { PCI_VDEVICE(INTEL, 0x2827), board_ahci }, /* Wellsburg RAID */ 317 303 { PCI_VDEVICE(INTEL, 0x8d02), board_ahci }, /* Wellsburg AHCI */ ··· 683 669 } 684 670 return rc; 685 671 } 672 + 673 + /* 674 + * ahci_avn_hardreset - attempt more aggressive recovery of Avoton ports. 675 + * 676 + * It has been observed with some SSDs that the timing of events in the 677 + * link synchronization phase can leave the port in a state that can not 678 + * be recovered by a SATA-hard-reset alone. The failing signature is 679 + * SStatus.DET stuck at 1 ("Device presence detected but Phy 680 + * communication not established"). It was found that unloading and 681 + * reloading the driver when this problem occurs allows the drive 682 + * connection to be recovered (DET advanced to 0x3). The critical 683 + * component of reloading the driver is that the port state machines are 684 + * reset by bouncing "port enable" in the AHCI PCS configuration 685 + * register. So, reproduce that effect by bouncing a port whenever we 686 + * see DET==1 after a reset. 
687 + */ 688 + static int ahci_avn_hardreset(struct ata_link *link, unsigned int *class, 689 + unsigned long deadline) 690 + { 691 + const unsigned long *timing = sata_ehc_deb_timing(&link->eh_context); 692 + struct ata_port *ap = link->ap; 693 + struct ahci_port_priv *pp = ap->private_data; 694 + struct ahci_host_priv *hpriv = ap->host->private_data; 695 + u8 *d2h_fis = pp->rx_fis + RX_FIS_D2H_REG; 696 + unsigned long tmo = deadline - jiffies; 697 + struct ata_taskfile tf; 698 + bool online; 699 + int rc, i; 700 + 701 + DPRINTK("ENTER\n"); 702 + 703 + ahci_stop_engine(ap); 704 + 705 + for (i = 0; i < 2; i++) { 706 + u16 val; 707 + u32 sstatus; 708 + int port = ap->port_no; 709 + struct ata_host *host = ap->host; 710 + struct pci_dev *pdev = to_pci_dev(host->dev); 711 + 712 + /* clear D2H reception area to properly wait for D2H FIS */ 713 + ata_tf_init(link->device, &tf); 714 + tf.command = ATA_BUSY; 715 + ata_tf_to_fis(&tf, 0, 0, d2h_fis); 716 + 717 + rc = sata_link_hardreset(link, timing, deadline, &online, 718 + ahci_check_ready); 719 + 720 + if (sata_scr_read(link, SCR_STATUS, &sstatus) != 0 || 721 + (sstatus & 0xf) != 1) 722 + break; 723 + 724 + ata_link_printk(link, KERN_INFO, "avn bounce port%d\n", 725 + port); 726 + 727 + pci_read_config_word(pdev, 0x92, &val); 728 + val &= ~(1 << port); 729 + pci_write_config_word(pdev, 0x92, val); 730 + ata_msleep(ap, 1000); 731 + val |= 1 << port; 732 + pci_write_config_word(pdev, 0x92, val); 733 + deadline += tmo; 734 + } 735 + 736 + hpriv->start_engine(ap); 737 + 738 + if (online) 739 + *class = ahci_dev_classify(ap); 740 + 741 + DPRINTK("EXIT, rc=%d, class=%u\n", rc, *class); 742 + return rc; 743 + } 744 + 686 745 687 746 #ifdef CONFIG_PM 688 747 static int ahci_pci_device_suspend(struct pci_dev *pdev, pm_message_t mesg)
+24 -25
drivers/ata/ahci_st.c
··· 37 37 struct reset_control *pwr; 38 38 struct reset_control *sw_rst; 39 39 struct reset_control *pwr_rst; 40 - struct ahci_host_priv *hpriv; 41 40 }; 42 41 43 42 static void st_ahci_configure_oob(void __iomem *mmio) ··· 54 55 writel(new_val, mmio + ST_AHCI_OOBR); 55 56 } 56 57 57 - static int st_ahci_deassert_resets(struct device *dev) 58 + static int st_ahci_deassert_resets(struct ahci_host_priv *hpriv, 59 + struct device *dev) 58 60 { 59 - struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev); 61 + struct st_ahci_drv_data *drv_data = hpriv->plat_data; 60 62 int err; 61 63 62 64 if (drv_data->pwr) { ··· 90 90 static void st_ahci_host_stop(struct ata_host *host) 91 91 { 92 92 struct ahci_host_priv *hpriv = host->private_data; 93 + struct st_ahci_drv_data *drv_data = hpriv->plat_data; 93 94 struct device *dev = host->dev; 94 - struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev); 95 95 int err; 96 96 97 97 if (drv_data->pwr) { ··· 103 103 ahci_platform_disable_resources(hpriv); 104 104 } 105 105 106 - static int st_ahci_probe_resets(struct platform_device *pdev) 106 + static int st_ahci_probe_resets(struct ahci_host_priv *hpriv, 107 + struct device *dev) 107 108 { 108 - struct st_ahci_drv_data *drv_data = platform_get_drvdata(pdev); 109 + struct st_ahci_drv_data *drv_data = hpriv->plat_data; 109 110 110 - drv_data->pwr = devm_reset_control_get(&pdev->dev, "pwr-dwn"); 111 + drv_data->pwr = devm_reset_control_get(dev, "pwr-dwn"); 111 112 if (IS_ERR(drv_data->pwr)) { 112 - dev_info(&pdev->dev, "power reset control not defined\n"); 113 + dev_info(dev, "power reset control not defined\n"); 113 114 drv_data->pwr = NULL; 114 115 } 115 116 116 - drv_data->sw_rst = devm_reset_control_get(&pdev->dev, "sw-rst"); 117 + drv_data->sw_rst = devm_reset_control_get(dev, "sw-rst"); 117 118 if (IS_ERR(drv_data->sw_rst)) { 118 - dev_info(&pdev->dev, "soft reset control not defined\n"); 119 + dev_info(dev, "soft reset control not defined\n"); 119 120 drv_data->sw_rst = NULL; 
120 121 } 121 122 122 - drv_data->pwr_rst = devm_reset_control_get(&pdev->dev, "pwr-rst"); 123 + drv_data->pwr_rst = devm_reset_control_get(dev, "pwr-rst"); 123 124 if (IS_ERR(drv_data->pwr_rst)) { 124 - dev_dbg(&pdev->dev, "power soft reset control not defined\n"); 125 + dev_dbg(dev, "power soft reset control not defined\n"); 125 126 drv_data->pwr_rst = NULL; 126 127 } 127 128 128 - return st_ahci_deassert_resets(&pdev->dev); 129 + return st_ahci_deassert_resets(hpriv, dev); 129 130 } 130 131 131 132 static struct ata_port_operations st_ahci_port_ops = { ··· 155 154 if (!drv_data) 156 155 return -ENOMEM; 157 156 158 - platform_set_drvdata(pdev, drv_data); 159 - 160 157 hpriv = ahci_platform_get_resources(pdev); 161 158 if (IS_ERR(hpriv)) 162 159 return PTR_ERR(hpriv); 160 + hpriv->plat_data = drv_data; 163 161 164 - drv_data->hpriv = hpriv; 165 - 166 - err = st_ahci_probe_resets(pdev); 162 + err = st_ahci_probe_resets(hpriv, &pdev->dev); 167 163 if (err) 168 164 return err; 169 165 ··· 168 170 if (err) 169 171 return err; 170 172 171 - st_ahci_configure_oob(drv_data->hpriv->mmio); 173 + st_ahci_configure_oob(hpriv->mmio); 172 174 173 175 err = ahci_platform_init_host(pdev, hpriv, &st_ahci_port_info, 174 176 &ahci_platform_sht); ··· 183 185 #ifdef CONFIG_PM_SLEEP 184 186 static int st_ahci_suspend(struct device *dev) 185 187 { 186 - struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev); 187 - struct ahci_host_priv *hpriv = drv_data->hpriv; 188 + struct ata_host *host = dev_get_drvdata(dev); 189 + struct ahci_host_priv *hpriv = host->private_data; 190 + struct st_ahci_drv_data *drv_data = hpriv->plat_data; 188 191 int err; 189 192 190 193 err = ahci_platform_suspend_host(dev); ··· 207 208 208 209 static int st_ahci_resume(struct device *dev) 209 210 { 210 - struct st_ahci_drv_data *drv_data = dev_get_drvdata(dev); 211 - struct ahci_host_priv *hpriv = drv_data->hpriv; 211 + struct ata_host *host = dev_get_drvdata(dev); 212 + struct ahci_host_priv *hpriv = 
host->private_data; 212 213 int err; 213 214 214 215 err = ahci_platform_enable_resources(hpriv); 215 216 if (err) 216 217 return err; 217 218 218 - err = st_ahci_deassert_resets(dev); 219 + err = st_ahci_deassert_resets(hpriv, dev); 219 220 if (err) { 220 221 ahci_platform_disable_resources(hpriv); 221 222 return err; 222 223 } 223 224 224 - st_ahci_configure_oob(drv_data->hpriv->mmio); 225 + st_ahci_configure_oob(hpriv->mmio); 225 226 226 227 return ahci_platform_resume_host(dev); 227 228 }
+1 -2
drivers/ata/libahci.c
··· 1707 1707 if (unlikely(resetting)) 1708 1708 status &= ~PORT_IRQ_BAD_PMP; 1709 1709 1710 - /* if LPM is enabled, PHYRDY doesn't mean anything */ 1711 - if (ap->link.lpm_policy > ATA_LPM_MAX_POWER) { 1710 + if (sata_lpm_ignore_phy_events(&ap->link)) { 1712 1711 status &= ~PORT_IRQ_PHYRDY; 1713 1712 ahci_scr_write(&ap->link, SCR_ERROR, SERR_PHYRDY_CHG); 1714 1713 }
+33 -1
drivers/ata/libata-core.c
··· 4235 4235 ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4236 4236 { "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4237 4237 ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4238 - { "Samsung SSD 850 PRO*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4238 + { "Samsung SSD 8*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4239 4239 ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4240 4240 4241 4241 /* ··· 6751 6751 6752 6752 return tmp; 6753 6753 } 6754 + 6755 + /** 6756 + * sata_lpm_ignore_phy_events - test if PHY event should be ignored 6757 + * @link: Link receiving the event 6758 + * 6759 + * Test whether the received PHY event has to be ignored or not. 6760 + * 6761 + * LOCKING: 6762 + * None: 6763 + * 6764 + * RETURNS: 6765 + * True if the event has to be ignored. 6766 + */ 6767 + bool sata_lpm_ignore_phy_events(struct ata_link *link) 6768 + { 6769 + unsigned long lpm_timeout = link->last_lpm_change + 6770 + msecs_to_jiffies(ATA_TMOUT_SPURIOUS_PHY); 6771 + 6772 + /* if LPM is enabled, PHYRDY doesn't mean anything */ 6773 + if (link->lpm_policy > ATA_LPM_MAX_POWER) 6774 + return true; 6775 + 6776 + /* ignore the first PHY event after the LPM policy changed 6777 + * as it is might be spurious 6778 + */ 6779 + if ((link->flags & ATA_LFLAG_CHANGED) && 6780 + time_before(jiffies, lpm_timeout)) 6781 + return true; 6782 + 6783 + return false; 6784 + } 6785 + EXPORT_SYMBOL_GPL(sata_lpm_ignore_phy_events); 6754 6786 6755 6787 /* 6756 6788 * Dummy port_ops
+3
drivers/ata/libata-eh.c
··· 3597 3597 } 3598 3598 } 3599 3599 3600 + link->last_lpm_change = jiffies; 3601 + link->flags |= ATA_LFLAG_CHANGED; 3602 + 3600 3603 return 0; 3601 3604 3602 3605 fail:
-1110
drivers/ata/pata_scc.c
··· 1 - /* 2 - * Support for IDE interfaces on Celleb platform 3 - * 4 - * (C) Copyright 2006 TOSHIBA CORPORATION 5 - * 6 - * This code is based on drivers/ata/ata_piix.c: 7 - * Copyright 2003-2005 Red Hat Inc 8 - * Copyright 2003-2005 Jeff Garzik 9 - * Copyright (C) 1998-1999 Andrzej Krzysztofowicz, Author and Maintainer 10 - * Copyright (C) 1998-2000 Andre Hedrick <andre@linux-ide.org> 11 - * Copyright (C) 2003 Red Hat Inc 12 - * 13 - * and drivers/ata/ahci.c: 14 - * Copyright 2004-2005 Red Hat, Inc. 15 - * 16 - * and drivers/ata/libata-core.c: 17 - * Copyright 2003-2004 Red Hat, Inc. All rights reserved. 18 - * Copyright 2003-2004 Jeff Garzik 19 - * 20 - * This program is free software; you can redistribute it and/or modify 21 - * it under the terms of the GNU General Public License as published by 22 - * the Free Software Foundation; either version 2 of the License, or 23 - * (at your option) any later version. 24 - * 25 - * This program is distributed in the hope that it will be useful, 26 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 27 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 28 - * GNU General Public License for more details. 29 - * 30 - * You should have received a copy of the GNU General Public License along 31 - * with this program; if not, write to the Free Software Foundation, Inc., 32 - * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
33 - */ 34 - 35 - #include <linux/kernel.h> 36 - #include <linux/module.h> 37 - #include <linux/pci.h> 38 - #include <linux/blkdev.h> 39 - #include <linux/delay.h> 40 - #include <linux/device.h> 41 - #include <scsi/scsi_host.h> 42 - #include <linux/libata.h> 43 - 44 - #define DRV_NAME "pata_scc" 45 - #define DRV_VERSION "0.3" 46 - 47 - #define PCI_DEVICE_ID_TOSHIBA_SCC_ATA 0x01b4 48 - 49 - /* PCI BARs */ 50 - #define SCC_CTRL_BAR 0 51 - #define SCC_BMID_BAR 1 52 - 53 - /* offset of CTRL registers */ 54 - #define SCC_CTL_PIOSHT 0x000 55 - #define SCC_CTL_PIOCT 0x004 56 - #define SCC_CTL_MDMACT 0x008 57 - #define SCC_CTL_MCRCST 0x00C 58 - #define SCC_CTL_SDMACT 0x010 59 - #define SCC_CTL_SCRCST 0x014 60 - #define SCC_CTL_UDENVT 0x018 61 - #define SCC_CTL_TDVHSEL 0x020 62 - #define SCC_CTL_MODEREG 0x024 63 - #define SCC_CTL_ECMODE 0xF00 64 - #define SCC_CTL_MAEA0 0xF50 65 - #define SCC_CTL_MAEC0 0xF54 66 - #define SCC_CTL_CCKCTRL 0xFF0 67 - 68 - /* offset of BMID registers */ 69 - #define SCC_DMA_CMD 0x000 70 - #define SCC_DMA_STATUS 0x004 71 - #define SCC_DMA_TABLE_OFS 0x008 72 - #define SCC_DMA_INTMASK 0x010 73 - #define SCC_DMA_INTST 0x014 74 - #define SCC_DMA_PTERADD 0x018 75 - #define SCC_REG_CMD_ADDR 0x020 76 - #define SCC_REG_DATA 0x000 77 - #define SCC_REG_ERR 0x004 78 - #define SCC_REG_FEATURE 0x004 79 - #define SCC_REG_NSECT 0x008 80 - #define SCC_REG_LBAL 0x00C 81 - #define SCC_REG_LBAM 0x010 82 - #define SCC_REG_LBAH 0x014 83 - #define SCC_REG_DEVICE 0x018 84 - #define SCC_REG_STATUS 0x01C 85 - #define SCC_REG_CMD 0x01C 86 - #define SCC_REG_ALTSTATUS 0x020 87 - 88 - /* register value */ 89 - #define TDVHSEL_MASTER 0x00000001 90 - #define TDVHSEL_SLAVE 0x00000004 91 - 92 - #define MODE_JCUSFEN 0x00000080 93 - 94 - #define ECMODE_VALUE 0x01 95 - 96 - #define CCKCTRL_ATARESET 0x00040000 97 - #define CCKCTRL_BUFCNT 0x00020000 98 - #define CCKCTRL_CRST 0x00010000 99 - #define CCKCTRL_OCLKEN 0x00000100 100 - #define CCKCTRL_ATACLKOEN 0x00000002 101 - #define 
CCKCTRL_LCLKEN 0x00000001 102 - 103 - #define QCHCD_IOS_SS 0x00000001 104 - 105 - #define QCHSD_STPDIAG 0x00020000 106 - 107 - #define INTMASK_MSK 0xD1000012 108 - #define INTSTS_SERROR 0x80000000 109 - #define INTSTS_PRERR 0x40000000 110 - #define INTSTS_RERR 0x10000000 111 - #define INTSTS_ICERR 0x01000000 112 - #define INTSTS_BMSINT 0x00000010 113 - #define INTSTS_BMHE 0x00000008 114 - #define INTSTS_IOIRQS 0x00000004 115 - #define INTSTS_INTRQ 0x00000002 116 - #define INTSTS_ACTEINT 0x00000001 117 - 118 - 119 - /* PIO transfer mode table */ 120 - /* JCHST */ 121 - static const unsigned long JCHSTtbl[2][7] = { 122 - {0x0E, 0x05, 0x02, 0x03, 0x02, 0x00, 0x00}, /* 100MHz */ 123 - {0x13, 0x07, 0x04, 0x04, 0x03, 0x00, 0x00} /* 133MHz */ 124 - }; 125 - 126 - /* JCHHT */ 127 - static const unsigned long JCHHTtbl[2][7] = { 128 - {0x0E, 0x02, 0x02, 0x02, 0x02, 0x00, 0x00}, /* 100MHz */ 129 - {0x13, 0x03, 0x03, 0x03, 0x03, 0x00, 0x00} /* 133MHz */ 130 - }; 131 - 132 - /* JCHCT */ 133 - static const unsigned long JCHCTtbl[2][7] = { 134 - {0x1D, 0x1D, 0x1C, 0x0B, 0x06, 0x00, 0x00}, /* 100MHz */ 135 - {0x27, 0x26, 0x26, 0x0E, 0x09, 0x00, 0x00} /* 133MHz */ 136 - }; 137 - 138 - /* DMA transfer mode table */ 139 - /* JCHDCTM/JCHDCTS */ 140 - static const unsigned long JCHDCTxtbl[2][7] = { 141 - {0x0A, 0x06, 0x04, 0x03, 0x01, 0x00, 0x00}, /* 100MHz */ 142 - {0x0E, 0x09, 0x06, 0x04, 0x02, 0x01, 0x00} /* 133MHz */ 143 - }; 144 - 145 - /* JCSTWTM/JCSTWTS */ 146 - static const unsigned long JCSTWTxtbl[2][7] = { 147 - {0x06, 0x04, 0x03, 0x02, 0x02, 0x02, 0x00}, /* 100MHz */ 148 - {0x09, 0x06, 0x04, 0x02, 0x02, 0x02, 0x02} /* 133MHz */ 149 - }; 150 - 151 - /* JCTSS */ 152 - static const unsigned long JCTSStbl[2][7] = { 153 - {0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x00}, /* 100MHz */ 154 - {0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05} /* 133MHz */ 155 - }; 156 - 157 - /* JCENVT */ 158 - static const unsigned long JCENVTtbl[2][7] = { 159 - {0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x00}, /* 
100MHz */ 160 - {0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02} /* 133MHz */ 161 - }; 162 - 163 - /* JCACTSELS/JCACTSELM */ 164 - static const unsigned long JCACTSELtbl[2][7] = { 165 - {0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00}, /* 100MHz */ 166 - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01} /* 133MHz */ 167 - }; 168 - 169 - static const struct pci_device_id scc_pci_tbl[] = { 170 - { PCI_VDEVICE(TOSHIBA_2, PCI_DEVICE_ID_TOSHIBA_SCC_ATA), 0}, 171 - { } /* terminate list */ 172 - }; 173 - 174 - /** 175 - * scc_set_piomode - Initialize host controller PATA PIO timings 176 - * @ap: Port whose timings we are configuring 177 - * @adev: um 178 - * 179 - * Set PIO mode for device. 180 - * 181 - * LOCKING: 182 - * None (inherited from caller). 183 - */ 184 - 185 - static void scc_set_piomode (struct ata_port *ap, struct ata_device *adev) 186 - { 187 - unsigned int pio = adev->pio_mode - XFER_PIO_0; 188 - void __iomem *ctrl_base = ap->host->iomap[SCC_CTRL_BAR]; 189 - void __iomem *cckctrl_port = ctrl_base + SCC_CTL_CCKCTRL; 190 - void __iomem *piosht_port = ctrl_base + SCC_CTL_PIOSHT; 191 - void __iomem *pioct_port = ctrl_base + SCC_CTL_PIOCT; 192 - unsigned long reg; 193 - int offset; 194 - 195 - reg = in_be32(cckctrl_port); 196 - if (reg & CCKCTRL_ATACLKOEN) 197 - offset = 1; /* 133MHz */ 198 - else 199 - offset = 0; /* 100MHz */ 200 - 201 - reg = JCHSTtbl[offset][pio] << 16 | JCHHTtbl[offset][pio]; 202 - out_be32(piosht_port, reg); 203 - reg = JCHCTtbl[offset][pio]; 204 - out_be32(pioct_port, reg); 205 - } 206 - 207 - /** 208 - * scc_set_dmamode - Initialize host controller PATA DMA timings 209 - * @ap: Port whose timings we are configuring 210 - * @adev: um 211 - * 212 - * Set UDMA mode for device. 213 - * 214 - * LOCKING: 215 - * None (inherited from caller). 
216 - */ 217 - 218 - static void scc_set_dmamode (struct ata_port *ap, struct ata_device *adev) 219 - { 220 - unsigned int udma = adev->dma_mode; 221 - unsigned int is_slave = (adev->devno != 0); 222 - u8 speed = udma; 223 - void __iomem *ctrl_base = ap->host->iomap[SCC_CTRL_BAR]; 224 - void __iomem *cckctrl_port = ctrl_base + SCC_CTL_CCKCTRL; 225 - void __iomem *mdmact_port = ctrl_base + SCC_CTL_MDMACT; 226 - void __iomem *mcrcst_port = ctrl_base + SCC_CTL_MCRCST; 227 - void __iomem *sdmact_port = ctrl_base + SCC_CTL_SDMACT; 228 - void __iomem *scrcst_port = ctrl_base + SCC_CTL_SCRCST; 229 - void __iomem *udenvt_port = ctrl_base + SCC_CTL_UDENVT; 230 - void __iomem *tdvhsel_port = ctrl_base + SCC_CTL_TDVHSEL; 231 - int offset, idx; 232 - 233 - if (in_be32(cckctrl_port) & CCKCTRL_ATACLKOEN) 234 - offset = 1; /* 133MHz */ 235 - else 236 - offset = 0; /* 100MHz */ 237 - 238 - if (speed >= XFER_UDMA_0) 239 - idx = speed - XFER_UDMA_0; 240 - else 241 - return; 242 - 243 - if (is_slave) { 244 - out_be32(sdmact_port, JCHDCTxtbl[offset][idx]); 245 - out_be32(scrcst_port, JCSTWTxtbl[offset][idx]); 246 - out_be32(tdvhsel_port, 247 - (in_be32(tdvhsel_port) & ~TDVHSEL_SLAVE) | (JCACTSELtbl[offset][idx] << 2)); 248 - } else { 249 - out_be32(mdmact_port, JCHDCTxtbl[offset][idx]); 250 - out_be32(mcrcst_port, JCSTWTxtbl[offset][idx]); 251 - out_be32(tdvhsel_port, 252 - (in_be32(tdvhsel_port) & ~TDVHSEL_MASTER) | JCACTSELtbl[offset][idx]); 253 - } 254 - out_be32(udenvt_port, 255 - JCTSStbl[offset][idx] << 16 | JCENVTtbl[offset][idx]); 256 - } 257 - 258 - unsigned long scc_mode_filter(struct ata_device *adev, unsigned long mask) 259 - { 260 - /* errata A308 workaround: limit ATAPI UDMA mode to UDMA4 */ 261 - if (adev->class == ATA_DEV_ATAPI && 262 - (mask & (0xE0 << ATA_SHIFT_UDMA))) { 263 - printk(KERN_INFO "%s: limit ATAPI UDMA to UDMA4\n", DRV_NAME); 264 - mask &= ~(0xE0 << ATA_SHIFT_UDMA); 265 - } 266 - return mask; 267 - } 268 - 269 - /** 270 - * scc_tf_load - send taskfile 
registers to host controller 271 - * @ap: Port to which output is sent 272 - * @tf: ATA taskfile register set 273 - * 274 - * Note: Original code is ata_sff_tf_load(). 275 - */ 276 - 277 - static void scc_tf_load (struct ata_port *ap, const struct ata_taskfile *tf) 278 - { 279 - struct ata_ioports *ioaddr = &ap->ioaddr; 280 - unsigned int is_addr = tf->flags & ATA_TFLAG_ISADDR; 281 - 282 - if (tf->ctl != ap->last_ctl) { 283 - out_be32(ioaddr->ctl_addr, tf->ctl); 284 - ap->last_ctl = tf->ctl; 285 - ata_wait_idle(ap); 286 - } 287 - 288 - if (is_addr && (tf->flags & ATA_TFLAG_LBA48)) { 289 - out_be32(ioaddr->feature_addr, tf->hob_feature); 290 - out_be32(ioaddr->nsect_addr, tf->hob_nsect); 291 - out_be32(ioaddr->lbal_addr, tf->hob_lbal); 292 - out_be32(ioaddr->lbam_addr, tf->hob_lbam); 293 - out_be32(ioaddr->lbah_addr, tf->hob_lbah); 294 - VPRINTK("hob: feat 0x%X nsect 0x%X, lba 0x%X 0x%X 0x%X\n", 295 - tf->hob_feature, 296 - tf->hob_nsect, 297 - tf->hob_lbal, 298 - tf->hob_lbam, 299 - tf->hob_lbah); 300 - } 301 - 302 - if (is_addr) { 303 - out_be32(ioaddr->feature_addr, tf->feature); 304 - out_be32(ioaddr->nsect_addr, tf->nsect); 305 - out_be32(ioaddr->lbal_addr, tf->lbal); 306 - out_be32(ioaddr->lbam_addr, tf->lbam); 307 - out_be32(ioaddr->lbah_addr, tf->lbah); 308 - VPRINTK("feat 0x%X nsect 0x%X lba 0x%X 0x%X 0x%X\n", 309 - tf->feature, 310 - tf->nsect, 311 - tf->lbal, 312 - tf->lbam, 313 - tf->lbah); 314 - } 315 - 316 - if (tf->flags & ATA_TFLAG_DEVICE) { 317 - out_be32(ioaddr->device_addr, tf->device); 318 - VPRINTK("device 0x%X\n", tf->device); 319 - } 320 - 321 - ata_wait_idle(ap); 322 - } 323 - 324 - /** 325 - * scc_check_status - Read device status reg & clear interrupt 326 - * @ap: port where the device is 327 - * 328 - * Note: Original code is ata_check_status(). 
329 - */ 330 - 331 - static u8 scc_check_status (struct ata_port *ap) 332 - { 333 - return in_be32(ap->ioaddr.status_addr); 334 - } 335 - 336 - /** 337 - * scc_tf_read - input device's ATA taskfile shadow registers 338 - * @ap: Port from which input is read 339 - * @tf: ATA taskfile register set for storing input 340 - * 341 - * Note: Original code is ata_sff_tf_read(). 342 - */ 343 - 344 - static void scc_tf_read (struct ata_port *ap, struct ata_taskfile *tf) 345 - { 346 - struct ata_ioports *ioaddr = &ap->ioaddr; 347 - 348 - tf->command = scc_check_status(ap); 349 - tf->feature = in_be32(ioaddr->error_addr); 350 - tf->nsect = in_be32(ioaddr->nsect_addr); 351 - tf->lbal = in_be32(ioaddr->lbal_addr); 352 - tf->lbam = in_be32(ioaddr->lbam_addr); 353 - tf->lbah = in_be32(ioaddr->lbah_addr); 354 - tf->device = in_be32(ioaddr->device_addr); 355 - 356 - if (tf->flags & ATA_TFLAG_LBA48) { 357 - out_be32(ioaddr->ctl_addr, tf->ctl | ATA_HOB); 358 - tf->hob_feature = in_be32(ioaddr->error_addr); 359 - tf->hob_nsect = in_be32(ioaddr->nsect_addr); 360 - tf->hob_lbal = in_be32(ioaddr->lbal_addr); 361 - tf->hob_lbam = in_be32(ioaddr->lbam_addr); 362 - tf->hob_lbah = in_be32(ioaddr->lbah_addr); 363 - out_be32(ioaddr->ctl_addr, tf->ctl); 364 - ap->last_ctl = tf->ctl; 365 - } 366 - } 367 - 368 - /** 369 - * scc_exec_command - issue ATA command to host controller 370 - * @ap: port to which command is being issued 371 - * @tf: ATA taskfile register set 372 - * 373 - * Note: Original code is ata_sff_exec_command(). 
374 - */ 375 - 376 - static void scc_exec_command (struct ata_port *ap, 377 - const struct ata_taskfile *tf) 378 - { 379 - DPRINTK("ata%u: cmd 0x%X\n", ap->print_id, tf->command); 380 - 381 - out_be32(ap->ioaddr.command_addr, tf->command); 382 - ata_sff_pause(ap); 383 - } 384 - 385 - /** 386 - * scc_check_altstatus - Read device alternate status reg 387 - * @ap: port where the device is 388 - */ 389 - 390 - static u8 scc_check_altstatus (struct ata_port *ap) 391 - { 392 - return in_be32(ap->ioaddr.altstatus_addr); 393 - } 394 - 395 - /** 396 - * scc_dev_select - Select device 0/1 on ATA bus 397 - * @ap: ATA channel to manipulate 398 - * @device: ATA device (numbered from zero) to select 399 - * 400 - * Note: Original code is ata_sff_dev_select(). 401 - */ 402 - 403 - static void scc_dev_select (struct ata_port *ap, unsigned int device) 404 - { 405 - u8 tmp; 406 - 407 - if (device == 0) 408 - tmp = ATA_DEVICE_OBS; 409 - else 410 - tmp = ATA_DEVICE_OBS | ATA_DEV1; 411 - 412 - out_be32(ap->ioaddr.device_addr, tmp); 413 - ata_sff_pause(ap); 414 - } 415 - 416 - /** 417 - * scc_set_devctl - Write device control reg 418 - * @ap: port where the device is 419 - * @ctl: value to write 420 - */ 421 - 422 - static void scc_set_devctl(struct ata_port *ap, u8 ctl) 423 - { 424 - out_be32(ap->ioaddr.ctl_addr, ctl); 425 - } 426 - 427 - /** 428 - * scc_bmdma_setup - Set up PCI IDE BMDMA transaction 429 - * @qc: Info associated with this ATA transaction. 430 - * 431 - * Note: Original code is ata_bmdma_setup(). 
432 - */ 433 - 434 - static void scc_bmdma_setup (struct ata_queued_cmd *qc) 435 - { 436 - struct ata_port *ap = qc->ap; 437 - unsigned int rw = (qc->tf.flags & ATA_TFLAG_WRITE); 438 - u8 dmactl; 439 - void __iomem *mmio = ap->ioaddr.bmdma_addr; 440 - 441 - /* load PRD table addr */ 442 - out_be32(mmio + SCC_DMA_TABLE_OFS, ap->bmdma_prd_dma); 443 - 444 - /* specify data direction, triple-check start bit is clear */ 445 - dmactl = in_be32(mmio + SCC_DMA_CMD); 446 - dmactl &= ~(ATA_DMA_WR | ATA_DMA_START); 447 - if (!rw) 448 - dmactl |= ATA_DMA_WR; 449 - out_be32(mmio + SCC_DMA_CMD, dmactl); 450 - 451 - /* issue r/w command */ 452 - ap->ops->sff_exec_command(ap, &qc->tf); 453 - } 454 - 455 - /** 456 - * scc_bmdma_start - Start a PCI IDE BMDMA transaction 457 - * @qc: Info associated with this ATA transaction. 458 - * 459 - * Note: Original code is ata_bmdma_start(). 460 - */ 461 - 462 - static void scc_bmdma_start (struct ata_queued_cmd *qc) 463 - { 464 - struct ata_port *ap = qc->ap; 465 - u8 dmactl; 466 - void __iomem *mmio = ap->ioaddr.bmdma_addr; 467 - 468 - /* start host DMA transaction */ 469 - dmactl = in_be32(mmio + SCC_DMA_CMD); 470 - out_be32(mmio + SCC_DMA_CMD, dmactl | ATA_DMA_START); 471 - } 472 - 473 - /** 474 - * scc_devchk - PATA device presence detection 475 - * @ap: ATA channel to examine 476 - * @device: Device to examine (starting at zero) 477 - * 478 - * Note: Original code is ata_devchk(). 
479 - */ 480 - 481 - static unsigned int scc_devchk (struct ata_port *ap, 482 - unsigned int device) 483 - { 484 - struct ata_ioports *ioaddr = &ap->ioaddr; 485 - u8 nsect, lbal; 486 - 487 - ap->ops->sff_dev_select(ap, device); 488 - 489 - out_be32(ioaddr->nsect_addr, 0x55); 490 - out_be32(ioaddr->lbal_addr, 0xaa); 491 - 492 - out_be32(ioaddr->nsect_addr, 0xaa); 493 - out_be32(ioaddr->lbal_addr, 0x55); 494 - 495 - out_be32(ioaddr->nsect_addr, 0x55); 496 - out_be32(ioaddr->lbal_addr, 0xaa); 497 - 498 - nsect = in_be32(ioaddr->nsect_addr); 499 - lbal = in_be32(ioaddr->lbal_addr); 500 - 501 - if ((nsect == 0x55) && (lbal == 0xaa)) 502 - return 1; /* we found a device */ 503 - 504 - return 0; /* nothing found */ 505 - } 506 - 507 - /** 508 - * scc_wait_after_reset - wait for devices to become ready after reset 509 - * 510 - * Note: Original code is ata_sff_wait_after_reset 511 - */ 512 - 513 - static int scc_wait_after_reset(struct ata_link *link, unsigned int devmask, 514 - unsigned long deadline) 515 - { 516 - struct ata_port *ap = link->ap; 517 - struct ata_ioports *ioaddr = &ap->ioaddr; 518 - unsigned int dev0 = devmask & (1 << 0); 519 - unsigned int dev1 = devmask & (1 << 1); 520 - int rc, ret = 0; 521 - 522 - /* Spec mandates ">= 2ms" before checking status. We wait 523 - * 150ms, because that was the magic delay used for ATAPI 524 - * devices in Hale Landis's ATADRVR, for the period of time 525 - * between when the ATA command register is written, and then 526 - * status is checked. Because waiting for "a while" before 527 - * checking status is fine, post SRST, we perform this magic 528 - * delay here as well. 529 - * 530 - * Old drivers/ide uses the 2mS rule and then waits for ready. 531 - */ 532 - ata_msleep(ap, 150); 533 - 534 - /* always check readiness of the master device */ 535 - rc = ata_sff_wait_ready(link, deadline); 536 - /* -ENODEV means the odd clown forgot the D7 pulldown resistor 537 - * and TF status is 0xff, bail out on it too. 
538 - */ 539 - if (rc) 540 - return rc; 541 - 542 - /* if device 1 was found in ata_devchk, wait for register 543 - * access briefly, then wait for BSY to clear. 544 - */ 545 - if (dev1) { 546 - int i; 547 - 548 - ap->ops->sff_dev_select(ap, 1); 549 - 550 - /* Wait for register access. Some ATAPI devices fail 551 - * to set nsect/lbal after reset, so don't waste too 552 - * much time on it. We're gonna wait for !BSY anyway. 553 - */ 554 - for (i = 0; i < 2; i++) { 555 - u8 nsect, lbal; 556 - 557 - nsect = in_be32(ioaddr->nsect_addr); 558 - lbal = in_be32(ioaddr->lbal_addr); 559 - if ((nsect == 1) && (lbal == 1)) 560 - break; 561 - ata_msleep(ap, 50); /* give drive a breather */ 562 - } 563 - 564 - rc = ata_sff_wait_ready(link, deadline); 565 - if (rc) { 566 - if (rc != -ENODEV) 567 - return rc; 568 - ret = rc; 569 - } 570 - } 571 - 572 - /* is all this really necessary? */ 573 - ap->ops->sff_dev_select(ap, 0); 574 - if (dev1) 575 - ap->ops->sff_dev_select(ap, 1); 576 - if (dev0) 577 - ap->ops->sff_dev_select(ap, 0); 578 - 579 - return ret; 580 - } 581 - 582 - /** 583 - * scc_bus_softreset - PATA device software reset 584 - * 585 - * Note: Original code is ata_bus_softreset(). 586 - */ 587 - 588 - static int scc_bus_softreset(struct ata_port *ap, unsigned int devmask, 589 - unsigned long deadline) 590 - { 591 - struct ata_ioports *ioaddr = &ap->ioaddr; 592 - 593 - DPRINTK("ata%u: bus reset via SRST\n", ap->print_id); 594 - 595 - /* software reset. 
causes dev0 to be selected */ 596 - out_be32(ioaddr->ctl_addr, ap->ctl); 597 - udelay(20); 598 - out_be32(ioaddr->ctl_addr, ap->ctl | ATA_SRST); 599 - udelay(20); 600 - out_be32(ioaddr->ctl_addr, ap->ctl); 601 - 602 - return scc_wait_after_reset(&ap->link, devmask, deadline); 603 - } 604 - 605 - /** 606 - * scc_softreset - reset host port via ATA SRST 607 - * @ap: port to reset 608 - * @classes: resulting classes of attached devices 609 - * @deadline: deadline jiffies for the operation 610 - * 611 - * Note: Original code is ata_sff_softreset(). 612 - */ 613 - 614 - static int scc_softreset(struct ata_link *link, unsigned int *classes, 615 - unsigned long deadline) 616 - { 617 - struct ata_port *ap = link->ap; 618 - unsigned int slave_possible = ap->flags & ATA_FLAG_SLAVE_POSS; 619 - unsigned int devmask = 0; 620 - int rc; 621 - u8 err; 622 - 623 - DPRINTK("ENTER\n"); 624 - 625 - /* determine if device 0/1 are present */ 626 - if (scc_devchk(ap, 0)) 627 - devmask |= (1 << 0); 628 - if (slave_possible && scc_devchk(ap, 1)) 629 - devmask |= (1 << 1); 630 - 631 - /* select device 0 again */ 632 - ap->ops->sff_dev_select(ap, 0); 633 - 634 - /* issue bus reset */ 635 - DPRINTK("about to softreset, devmask=%x\n", devmask); 636 - rc = scc_bus_softreset(ap, devmask, deadline); 637 - if (rc) { 638 - ata_port_err(ap, "SRST failed (err_mask=0x%x)\n", rc); 639 - return -EIO; 640 - } 641 - 642 - /* determine by signature whether we have ATA or ATAPI devices */ 643 - classes[0] = ata_sff_dev_classify(&ap->link.device[0], 644 - devmask & (1 << 0), &err); 645 - if (slave_possible && err != 0x81) 646 - classes[1] = ata_sff_dev_classify(&ap->link.device[1], 647 - devmask & (1 << 1), &err); 648 - 649 - DPRINTK("EXIT, classes[0]=%u [1]=%u\n", classes[0], classes[1]); 650 - return 0; 651 - } 652 - 653 - /** 654 - * scc_bmdma_stop - Stop PCI IDE BMDMA transfer 655 - * @qc: Command we are ending DMA for 656 - */ 657 - 658 - static void scc_bmdma_stop (struct ata_queued_cmd *qc) 659 - { 
660 - struct ata_port *ap = qc->ap; 661 - void __iomem *ctrl_base = ap->host->iomap[SCC_CTRL_BAR]; 662 - void __iomem *bmid_base = ap->host->iomap[SCC_BMID_BAR]; 663 - u32 reg; 664 - 665 - while (1) { 666 - reg = in_be32(bmid_base + SCC_DMA_INTST); 667 - 668 - if (reg & INTSTS_SERROR) { 669 - printk(KERN_WARNING "%s: SERROR\n", DRV_NAME); 670 - out_be32(bmid_base + SCC_DMA_INTST, INTSTS_SERROR|INTSTS_BMSINT); 671 - out_be32(bmid_base + SCC_DMA_CMD, 672 - in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START); 673 - continue; 674 - } 675 - 676 - if (reg & INTSTS_PRERR) { 677 - u32 maea0, maec0; 678 - maea0 = in_be32(ctrl_base + SCC_CTL_MAEA0); 679 - maec0 = in_be32(ctrl_base + SCC_CTL_MAEC0); 680 - printk(KERN_WARNING "%s: PRERR [addr:%x cmd:%x]\n", DRV_NAME, maea0, maec0); 681 - out_be32(bmid_base + SCC_DMA_INTST, INTSTS_PRERR|INTSTS_BMSINT); 682 - out_be32(bmid_base + SCC_DMA_CMD, 683 - in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START); 684 - continue; 685 - } 686 - 687 - if (reg & INTSTS_RERR) { 688 - printk(KERN_WARNING "%s: Response Error\n", DRV_NAME); 689 - out_be32(bmid_base + SCC_DMA_INTST, INTSTS_RERR|INTSTS_BMSINT); 690 - out_be32(bmid_base + SCC_DMA_CMD, 691 - in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START); 692 - continue; 693 - } 694 - 695 - if (reg & INTSTS_ICERR) { 696 - out_be32(bmid_base + SCC_DMA_CMD, 697 - in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START); 698 - printk(KERN_WARNING "%s: Illegal Configuration\n", DRV_NAME); 699 - out_be32(bmid_base + SCC_DMA_INTST, INTSTS_ICERR|INTSTS_BMSINT); 700 - continue; 701 - } 702 - 703 - if (reg & INTSTS_BMSINT) { 704 - unsigned int classes; 705 - unsigned long deadline = ata_deadline(jiffies, ATA_TMOUT_BOOT); 706 - printk(KERN_WARNING "%s: Internal Bus Error\n", DRV_NAME); 707 - out_be32(bmid_base + SCC_DMA_INTST, INTSTS_BMSINT); 708 - /* TBD: SW reset */ 709 - scc_softreset(&ap->link, &classes, deadline); 710 - continue; 711 - } 712 - 713 - if (reg & INTSTS_BMHE) { 714 - out_be32(bmid_base + 
SCC_DMA_INTST, INTSTS_BMHE); 715 - continue; 716 - } 717 - 718 - if (reg & INTSTS_ACTEINT) { 719 - out_be32(bmid_base + SCC_DMA_INTST, INTSTS_ACTEINT); 720 - continue; 721 - } 722 - 723 - if (reg & INTSTS_IOIRQS) { 724 - out_be32(bmid_base + SCC_DMA_INTST, INTSTS_IOIRQS); 725 - continue; 726 - } 727 - break; 728 - } 729 - 730 - /* clear start/stop bit */ 731 - out_be32(bmid_base + SCC_DMA_CMD, 732 - in_be32(bmid_base + SCC_DMA_CMD) & ~ATA_DMA_START); 733 - 734 - /* one-PIO-cycle guaranteed wait, per spec, for HDMA1:0 transition */ 735 - ata_sff_dma_pause(ap); /* dummy read */ 736 - } 737 - 738 - /** 739 - * scc_bmdma_status - Read PCI IDE BMDMA status 740 - * @ap: Port associated with this ATA transaction. 741 - */ 742 - 743 - static u8 scc_bmdma_status (struct ata_port *ap) 744 - { 745 - void __iomem *mmio = ap->ioaddr.bmdma_addr; 746 - u8 host_stat = in_be32(mmio + SCC_DMA_STATUS); 747 - u32 int_status = in_be32(mmio + SCC_DMA_INTST); 748 - struct ata_queued_cmd *qc = ata_qc_from_tag(ap, ap->link.active_tag); 749 - static int retry = 0; 750 - 751 - /* return if IOS_SS is cleared */ 752 - if (!(in_be32(mmio + SCC_DMA_CMD) & ATA_DMA_START)) 753 - return host_stat; 754 - 755 - /* errata A252,A308 workaround: Step4 */ 756 - if ((scc_check_altstatus(ap) & ATA_ERR) 757 - && (int_status & INTSTS_INTRQ)) 758 - return (host_stat | ATA_DMA_INTR); 759 - 760 - /* errata A308 workaround Step5 */ 761 - if (int_status & INTSTS_IOIRQS) { 762 - host_stat |= ATA_DMA_INTR; 763 - 764 - /* We don't check ATAPI DMA because it is limited to UDMA4 */ 765 - if ((qc->tf.protocol == ATA_PROT_DMA && 766 - qc->dev->xfer_mode > XFER_UDMA_4)) { 767 - if (!(int_status & INTSTS_ACTEINT)) { 768 - printk(KERN_WARNING "ata%u: operation failed (transfer data loss)\n", 769 - ap->print_id); 770 - host_stat |= ATA_DMA_ERR; 771 - if (retry++) 772 - ap->udma_mask &= ~(1 << qc->dev->xfer_mode); 773 - } else 774 - retry = 0; 775 - } 776 - } 777 - 778 - return host_stat; 779 - } 780 - 781 - /** 782 - * 
scc_data_xfer - Transfer data by PIO 783 - * @dev: device for this I/O 784 - * @buf: data buffer 785 - * @buflen: buffer length 786 - * @rw: read/write 787 - * 788 - * Note: Original code is ata_sff_data_xfer(). 789 - */ 790 - 791 - static unsigned int scc_data_xfer (struct ata_device *dev, unsigned char *buf, 792 - unsigned int buflen, int rw) 793 - { 794 - struct ata_port *ap = dev->link->ap; 795 - unsigned int words = buflen >> 1; 796 - unsigned int i; 797 - __le16 *buf16 = (__le16 *) buf; 798 - void __iomem *mmio = ap->ioaddr.data_addr; 799 - 800 - /* Transfer multiple of 2 bytes */ 801 - if (rw == READ) 802 - for (i = 0; i < words; i++) 803 - buf16[i] = cpu_to_le16(in_be32(mmio)); 804 - else 805 - for (i = 0; i < words; i++) 806 - out_be32(mmio, le16_to_cpu(buf16[i])); 807 - 808 - /* Transfer trailing 1 byte, if any. */ 809 - if (unlikely(buflen & 0x01)) { 810 - __le16 align_buf[1] = { 0 }; 811 - unsigned char *trailing_buf = buf + buflen - 1; 812 - 813 - if (rw == READ) { 814 - align_buf[0] = cpu_to_le16(in_be32(mmio)); 815 - memcpy(trailing_buf, align_buf, 1); 816 - } else { 817 - memcpy(align_buf, trailing_buf, 1); 818 - out_be32(mmio, le16_to_cpu(align_buf[0])); 819 - } 820 - words++; 821 - } 822 - 823 - return words << 1; 824 - } 825 - 826 - /** 827 - * scc_postreset - standard postreset callback 828 - * @ap: the target ata_port 829 - * @classes: classes of attached devices 830 - * 831 - * Note: Original code is ata_sff_postreset(). 832 - */ 833 - 834 - static void scc_postreset(struct ata_link *link, unsigned int *classes) 835 - { 836 - struct ata_port *ap = link->ap; 837 - 838 - DPRINTK("ENTER\n"); 839 - 840 - /* is double-select really necessary? 
*/ 841 - if (classes[0] != ATA_DEV_NONE) 842 - ap->ops->sff_dev_select(ap, 1); 843 - if (classes[1] != ATA_DEV_NONE) 844 - ap->ops->sff_dev_select(ap, 0); 845 - 846 - /* bail out if no device is present */ 847 - if (classes[0] == ATA_DEV_NONE && classes[1] == ATA_DEV_NONE) { 848 - DPRINTK("EXIT, no device\n"); 849 - return; 850 - } 851 - 852 - /* set up device control */ 853 - out_be32(ap->ioaddr.ctl_addr, ap->ctl); 854 - 855 - DPRINTK("EXIT\n"); 856 - } 857 - 858 - /** 859 - * scc_irq_clear - Clear PCI IDE BMDMA interrupt. 860 - * @ap: Port associated with this ATA transaction. 861 - * 862 - * Note: Original code is ata_bmdma_irq_clear(). 863 - */ 864 - 865 - static void scc_irq_clear (struct ata_port *ap) 866 - { 867 - void __iomem *mmio = ap->ioaddr.bmdma_addr; 868 - 869 - if (!mmio) 870 - return; 871 - 872 - out_be32(mmio + SCC_DMA_STATUS, in_be32(mmio + SCC_DMA_STATUS)); 873 - } 874 - 875 - /** 876 - * scc_port_start - Set port up for dma. 877 - * @ap: Port to initialize 878 - * 879 - * Allocate space for PRD table using ata_bmdma_port_start(). 880 - * Set PRD table address for PTERADD. (PRD Transfer End Read) 881 - */ 882 - 883 - static int scc_port_start (struct ata_port *ap) 884 - { 885 - void __iomem *mmio = ap->ioaddr.bmdma_addr; 886 - int rc; 887 - 888 - rc = ata_bmdma_port_start(ap); 889 - if (rc) 890 - return rc; 891 - 892 - out_be32(mmio + SCC_DMA_PTERADD, ap->bmdma_prd_dma); 893 - return 0; 894 - } 895 - 896 - /** 897 - * scc_port_stop - Undo scc_port_start() 898 - * @ap: Port to shut down 899 - * 900 - * Reset PTERADD. 
901 - */ 902 - 903 - static void scc_port_stop (struct ata_port *ap) 904 - { 905 - void __iomem *mmio = ap->ioaddr.bmdma_addr; 906 - 907 - out_be32(mmio + SCC_DMA_PTERADD, 0); 908 - } 909 - 910 - static struct scsi_host_template scc_sht = { 911 - ATA_BMDMA_SHT(DRV_NAME), 912 - }; 913 - 914 - static struct ata_port_operations scc_pata_ops = { 915 - .inherits = &ata_bmdma_port_ops, 916 - 917 - .set_piomode = scc_set_piomode, 918 - .set_dmamode = scc_set_dmamode, 919 - .mode_filter = scc_mode_filter, 920 - 921 - .sff_tf_load = scc_tf_load, 922 - .sff_tf_read = scc_tf_read, 923 - .sff_exec_command = scc_exec_command, 924 - .sff_check_status = scc_check_status, 925 - .sff_check_altstatus = scc_check_altstatus, 926 - .sff_dev_select = scc_dev_select, 927 - .sff_set_devctl = scc_set_devctl, 928 - 929 - .bmdma_setup = scc_bmdma_setup, 930 - .bmdma_start = scc_bmdma_start, 931 - .bmdma_stop = scc_bmdma_stop, 932 - .bmdma_status = scc_bmdma_status, 933 - .sff_data_xfer = scc_data_xfer, 934 - 935 - .cable_detect = ata_cable_80wire, 936 - .softreset = scc_softreset, 937 - .postreset = scc_postreset, 938 - 939 - .sff_irq_clear = scc_irq_clear, 940 - 941 - .port_start = scc_port_start, 942 - .port_stop = scc_port_stop, 943 - }; 944 - 945 - static struct ata_port_info scc_port_info[] = { 946 - { 947 - .flags = ATA_FLAG_SLAVE_POSS, 948 - .pio_mask = ATA_PIO4, 949 - /* No MWDMA */ 950 - .udma_mask = ATA_UDMA6, 951 - .port_ops = &scc_pata_ops, 952 - }, 953 - }; 954 - 955 - /** 956 - * scc_reset_controller - initialize SCC PATA controller. 
957 - */ 958 - 959 - static int scc_reset_controller(struct ata_host *host) 960 - { 961 - void __iomem *ctrl_base = host->iomap[SCC_CTRL_BAR]; 962 - void __iomem *bmid_base = host->iomap[SCC_BMID_BAR]; 963 - void __iomem *cckctrl_port = ctrl_base + SCC_CTL_CCKCTRL; 964 - void __iomem *mode_port = ctrl_base + SCC_CTL_MODEREG; 965 - void __iomem *ecmode_port = ctrl_base + SCC_CTL_ECMODE; 966 - void __iomem *intmask_port = bmid_base + SCC_DMA_INTMASK; 967 - void __iomem *dmastatus_port = bmid_base + SCC_DMA_STATUS; 968 - u32 reg = 0; 969 - 970 - out_be32(cckctrl_port, reg); 971 - reg |= CCKCTRL_ATACLKOEN; 972 - out_be32(cckctrl_port, reg); 973 - reg |= CCKCTRL_LCLKEN | CCKCTRL_OCLKEN; 974 - out_be32(cckctrl_port, reg); 975 - reg |= CCKCTRL_CRST; 976 - out_be32(cckctrl_port, reg); 977 - 978 - for (;;) { 979 - reg = in_be32(cckctrl_port); 980 - if (reg & CCKCTRL_CRST) 981 - break; 982 - udelay(5000); 983 - } 984 - 985 - reg |= CCKCTRL_ATARESET; 986 - out_be32(cckctrl_port, reg); 987 - out_be32(ecmode_port, ECMODE_VALUE); 988 - out_be32(mode_port, MODE_JCUSFEN); 989 - out_be32(intmask_port, INTMASK_MSK); 990 - 991 - if (in_be32(dmastatus_port) & QCHSD_STPDIAG) { 992 - printk(KERN_WARNING "%s: failed to detect 80c cable. (PDIAG# is high)\n", DRV_NAME); 993 - return -EIO; 994 - } 995 - 996 - return 0; 997 - } 998 - 999 - /** 1000 - * scc_setup_ports - initialize ioaddr with SCC PATA port offsets. 
1001 - * @ioaddr: IO address structure to be initialized 1002 - * @base: base address of BMID region 1003 - */ 1004 - 1005 - static void scc_setup_ports (struct ata_ioports *ioaddr, void __iomem *base) 1006 - { 1007 - ioaddr->cmd_addr = base + SCC_REG_CMD_ADDR; 1008 - ioaddr->altstatus_addr = ioaddr->cmd_addr + SCC_REG_ALTSTATUS; 1009 - ioaddr->ctl_addr = ioaddr->cmd_addr + SCC_REG_ALTSTATUS; 1010 - ioaddr->bmdma_addr = base; 1011 - ioaddr->data_addr = ioaddr->cmd_addr + SCC_REG_DATA; 1012 - ioaddr->error_addr = ioaddr->cmd_addr + SCC_REG_ERR; 1013 - ioaddr->feature_addr = ioaddr->cmd_addr + SCC_REG_FEATURE; 1014 - ioaddr->nsect_addr = ioaddr->cmd_addr + SCC_REG_NSECT; 1015 - ioaddr->lbal_addr = ioaddr->cmd_addr + SCC_REG_LBAL; 1016 - ioaddr->lbam_addr = ioaddr->cmd_addr + SCC_REG_LBAM; 1017 - ioaddr->lbah_addr = ioaddr->cmd_addr + SCC_REG_LBAH; 1018 - ioaddr->device_addr = ioaddr->cmd_addr + SCC_REG_DEVICE; 1019 - ioaddr->status_addr = ioaddr->cmd_addr + SCC_REG_STATUS; 1020 - ioaddr->command_addr = ioaddr->cmd_addr + SCC_REG_CMD; 1021 - } 1022 - 1023 - static int scc_host_init(struct ata_host *host) 1024 - { 1025 - struct pci_dev *pdev = to_pci_dev(host->dev); 1026 - int rc; 1027 - 1028 - rc = scc_reset_controller(host); 1029 - if (rc) 1030 - return rc; 1031 - 1032 - rc = dma_set_mask(&pdev->dev, ATA_DMA_MASK); 1033 - if (rc) 1034 - return rc; 1035 - rc = dma_set_coherent_mask(&pdev->dev, ATA_DMA_MASK); 1036 - if (rc) 1037 - return rc; 1038 - 1039 - scc_setup_ports(&host->ports[0]->ioaddr, host->iomap[SCC_BMID_BAR]); 1040 - 1041 - pci_set_master(pdev); 1042 - 1043 - return 0; 1044 - } 1045 - 1046 - /** 1047 - * scc_init_one - Register SCC PATA device with kernel services 1048 - * @pdev: PCI device to register 1049 - * @ent: Entry in scc_pci_tbl matching with @pdev 1050 - * 1051 - * LOCKING: 1052 - * Inherited from PCI layer (may sleep). 1053 - * 1054 - * RETURNS: 1055 - * Zero on success, or -ERRNO value. 
1056 - */ 1057 - 1058 - static int scc_init_one (struct pci_dev *pdev, const struct pci_device_id *ent) 1059 - { 1060 - unsigned int board_idx = (unsigned int) ent->driver_data; 1061 - const struct ata_port_info *ppi[] = { &scc_port_info[board_idx], NULL }; 1062 - struct ata_host *host; 1063 - int rc; 1064 - 1065 - ata_print_version_once(&pdev->dev, DRV_VERSION); 1066 - 1067 - host = ata_host_alloc_pinfo(&pdev->dev, ppi, 1); 1068 - if (!host) 1069 - return -ENOMEM; 1070 - 1071 - rc = pcim_enable_device(pdev); 1072 - if (rc) 1073 - return rc; 1074 - 1075 - rc = pcim_iomap_regions(pdev, (1 << SCC_CTRL_BAR) | (1 << SCC_BMID_BAR), DRV_NAME); 1076 - if (rc == -EBUSY) 1077 - pcim_pin_device(pdev); 1078 - if (rc) 1079 - return rc; 1080 - host->iomap = pcim_iomap_table(pdev); 1081 - 1082 - ata_port_pbar_desc(host->ports[0], SCC_CTRL_BAR, -1, "ctrl"); 1083 - ata_port_pbar_desc(host->ports[0], SCC_BMID_BAR, -1, "bmid"); 1084 - 1085 - rc = scc_host_init(host); 1086 - if (rc) 1087 - return rc; 1088 - 1089 - return ata_host_activate(host, pdev->irq, ata_bmdma_interrupt, 1090 - IRQF_SHARED, &scc_sht); 1091 - } 1092 - 1093 - static struct pci_driver scc_pci_driver = { 1094 - .name = DRV_NAME, 1095 - .id_table = scc_pci_tbl, 1096 - .probe = scc_init_one, 1097 - .remove = ata_pci_remove_one, 1098 - #ifdef CONFIG_PM_SLEEP 1099 - .suspend = ata_pci_device_suspend, 1100 - .resume = ata_pci_device_resume, 1101 - #endif 1102 - }; 1103 - 1104 - module_pci_driver(scc_pci_driver); 1105 - 1106 - MODULE_AUTHOR("Toshiba corp"); 1107 - MODULE_DESCRIPTION("SCSI low-level driver for Toshiba SCC PATA controller"); 1108 - MODULE_LICENSE("GPL"); 1109 - MODULE_DEVICE_TABLE(pci, scc_pci_tbl); 1110 - MODULE_VERSION(DRV_VERSION);
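The removed scc_devchk() above uses the classic SFF presence probe: write complementary 0x55/0xaa patterns into the sector-count and LBA-low scratch registers three times and check that the final pair reads back intact, since a floating (device-less) bus will not latch the writes. A minimal Python sketch of that probe logic, using dicts in place of MMIO registers (the register models here are purely illustrative, not part of the driver):

```python
class LatchingRegs(dict):
    """Models a present device: reads return what was last written."""
    pass

class FloatingBus(dict):
    """Models an empty cable: writes are lost, reads float high."""
    def __getitem__(self, key):
        return 0xFF

def devchk(regs):
    # Same pattern sequence as the removed scc_devchk(): alternate
    # 0x55/0xAA writes to nsect/lbal, then verify the last pair.
    for nsect, lbal in ((0x55, 0xAA), (0xAA, 0x55), (0x55, 0xAA)):
        regs["nsect"] = nsect
        regs["lbal"] = lbal
    return regs["nsect"] == 0x55 and regs["lbal"] == 0xAA

print(devchk(LatchingRegs()))  # device present
print(devchk(FloatingBus()))   # nothing attached
```

The triple write is how the original code shakes out bus capacitance holding a stale value; only the final read-back pair is actually checked.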
-3
drivers/bluetooth/bt3c_cs.c
··· 227 227 iobase = info->p_dev->resource[0]->start; 228 228 229 229 avail = bt3c_read(iobase, 0x7006); 230 - //printk("bt3c_cs: receiving %d bytes\n", avail); 231 230 232 231 bt3c_address(iobase, 0x7480); 233 232 while (size < avail) { ··· 249 250 250 251 bt_cb(info->rx_skb)->pkt_type = inb(iobase + DATA_L); 251 252 inb(iobase + DATA_H); 252 - //printk("bt3c: PACKET_TYPE=%02x\n", bt_cb(info->rx_skb)->pkt_type); 253 253 254 254 switch (bt_cb(info->rx_skb)->pkt_type) { 255 255 ··· 362 364 if (stat & 0x0001) 363 365 bt3c_receive(info); 364 366 if (stat & 0x0002) { 365 - //BT_ERR("Ack (stat=0x%04x)", stat); 366 367 clear_bit(XMIT_SENDING, &(info->tx_state)); 367 368 bt3c_write_wakeup(info); 368 369 }
+79 -69
drivers/bluetooth/btbcm.c
··· 95 95 } 96 96 EXPORT_SYMBOL_GPL(btbcm_set_bdaddr); 97 97 98 + int btbcm_patchram(struct hci_dev *hdev, const char *firmware) 99 + { 100 + const struct hci_command_hdr *cmd; 101 + const struct firmware *fw; 102 + const u8 *fw_ptr; 103 + size_t fw_size; 104 + struct sk_buff *skb; 105 + u16 opcode; 106 + int err; 107 + 108 + err = request_firmware(&fw, firmware, &hdev->dev); 109 + if (err < 0) { 110 + BT_INFO("%s: BCM: Patch %s not found", hdev->name, firmware); 111 + return err; 112 + } 113 + 114 + /* Start Download */ 115 + skb = __hci_cmd_sync(hdev, 0xfc2e, 0, NULL, HCI_INIT_TIMEOUT); 116 + if (IS_ERR(skb)) { 117 + err = PTR_ERR(skb); 118 + BT_ERR("%s: BCM: Download Minidrv command failed (%d)", 119 + hdev->name, err); 120 + goto done; 121 + } 122 + kfree_skb(skb); 123 + 124 + /* 50 msec delay after Download Minidrv completes */ 125 + msleep(50); 126 + 127 + fw_ptr = fw->data; 128 + fw_size = fw->size; 129 + 130 + while (fw_size >= sizeof(*cmd)) { 131 + const u8 *cmd_param; 132 + 133 + cmd = (struct hci_command_hdr *)fw_ptr; 134 + fw_ptr += sizeof(*cmd); 135 + fw_size -= sizeof(*cmd); 136 + 137 + if (fw_size < cmd->plen) { 138 + BT_ERR("%s: BCM: Patch %s is corrupted", hdev->name, 139 + firmware); 140 + err = -EINVAL; 141 + goto done; 142 + } 143 + 144 + cmd_param = fw_ptr; 145 + fw_ptr += cmd->plen; 146 + fw_size -= cmd->plen; 147 + 148 + opcode = le16_to_cpu(cmd->opcode); 149 + 150 + skb = __hci_cmd_sync(hdev, opcode, cmd->plen, cmd_param, 151 + HCI_INIT_TIMEOUT); 152 + if (IS_ERR(skb)) { 153 + err = PTR_ERR(skb); 154 + BT_ERR("%s: BCM: Patch command %04x failed (%d)", 155 + hdev->name, opcode, err); 156 + goto done; 157 + } 158 + kfree_skb(skb); 159 + } 160 + 161 + /* 250 msec delay after Launch Ram completes */ 162 + msleep(250); 163 + 164 + done: 165 + release_firmware(fw); 166 + return err; 167 + } 168 + EXPORT_SYMBOL(btbcm_patchram); 169 + 98 170 static int btbcm_reset(struct hci_dev *hdev) 99 171 { 100 172 struct sk_buff *skb; ··· 270 198 271 199 int 
btbcm_setup_patchram(struct hci_dev *hdev) 272 200 { 273 - const struct hci_command_hdr *cmd; 274 - const struct firmware *fw; 275 - const u8 *fw_ptr; 276 - size_t fw_size; 277 201 char fw_name[64]; 278 - u16 opcode, subver, rev, pid, vid; 202 + u16 subver, rev, pid, vid; 279 203 const char *hw_name = NULL; 280 204 struct sk_buff *skb; 281 205 struct hci_rp_read_local_version *ver; ··· 341 273 hw_name ? : "BCM", (subver & 0x7000) >> 13, 342 274 (subver & 0x1f00) >> 8, (subver & 0x00ff), rev & 0x0fff); 343 275 344 - err = request_firmware(&fw, fw_name, &hdev->dev); 345 - if (err < 0) { 346 - BT_INFO("%s: BCM: patch %s not found", hdev->name, fw_name); 276 + err = btbcm_patchram(hdev, fw_name); 277 + if (err == -ENOENT) 347 278 return 0; 348 - } 349 279 350 - /* Start Download */ 351 - skb = __hci_cmd_sync(hdev, 0xfc2e, 0, NULL, HCI_INIT_TIMEOUT); 352 - if (IS_ERR(skb)) { 353 - err = PTR_ERR(skb); 354 - BT_ERR("%s: BCM: Download Minidrv command failed (%d)", 355 - hdev->name, err); 356 - goto reset; 357 - } 358 - kfree_skb(skb); 359 - 360 - /* 50 msec delay after Download Minidrv completes */ 361 - msleep(50); 362 - 363 - fw_ptr = fw->data; 364 - fw_size = fw->size; 365 - 366 - while (fw_size >= sizeof(*cmd)) { 367 - const u8 *cmd_param; 368 - 369 - cmd = (struct hci_command_hdr *)fw_ptr; 370 - fw_ptr += sizeof(*cmd); 371 - fw_size -= sizeof(*cmd); 372 - 373 - if (fw_size < cmd->plen) { 374 - BT_ERR("%s: BCM: patch %s is corrupted", hdev->name, 375 - fw_name); 376 - err = -EINVAL; 377 - goto reset; 378 - } 379 - 380 - cmd_param = fw_ptr; 381 - fw_ptr += cmd->plen; 382 - fw_size -= cmd->plen; 383 - 384 - opcode = le16_to_cpu(cmd->opcode); 385 - 386 - skb = __hci_cmd_sync(hdev, opcode, cmd->plen, cmd_param, 387 - HCI_INIT_TIMEOUT); 388 - if (IS_ERR(skb)) { 389 - err = PTR_ERR(skb); 390 - BT_ERR("%s: BCM: patch command %04x failed (%d)", 391 - hdev->name, opcode, err); 392 - goto reset; 393 - } 394 - kfree_skb(skb); 395 - } 396 - 397 - /* 250 msec delay after Launch Ram 
completes */ 398 - msleep(250); 399 - 400 - reset: 401 280 /* Reset */ 402 281 err = btbcm_reset(hdev); 403 282 if (err) 404 - goto done; 283 + return err; 405 284 406 285 /* Read Local Version Info */ 407 286 skb = btbcm_read_local_version(hdev); 408 - if (IS_ERR(skb)) { 409 - err = PTR_ERR(skb); 410 - goto done; 411 - } 287 + if (IS_ERR(skb)) 288 + return PTR_ERR(skb); 412 289 413 290 ver = (struct hci_rp_read_local_version *)skb->data; 414 291 rev = le16_to_cpu(ver->hci_rev); ··· 368 355 369 356 set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks); 370 357 371 - done: 372 - release_firmware(fw); 373 - 374 - return err; 358 + return 0; 375 359 } 376 360 EXPORT_SYMBOL_GPL(btbcm_setup_patchram); 377 361
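The btbcm_patchram() helper factored out above streams the firmware file as a flat sequence of raw HCI commands: each record is a packed `struct hci_command_hdr` (little-endian u16 opcode, u8 parameter length) followed by that many parameter bytes, and a short trailer or length mismatch means the patch is corrupted. A sketch of that framing in Python (the opcodes and payload in the sample blob are made up for illustration):

```python
import struct

def parse_patch_stream(blob):
    """Split a Broadcom patch blob into (opcode, params) records,
    mirroring the loop in btbcm_patchram(): a little-endian u16
    opcode, a u8 parameter length, then that many parameter bytes."""
    off, cmds = 0, []
    while len(blob) - off >= 3:  # sizeof(struct hci_command_hdr)
        opcode, plen = struct.unpack_from("<HB", blob, off)
        off += 3
        if len(blob) - off < plen:
            # mirrors the driver's "Patch %s is corrupted" / -EINVAL path
            raise ValueError("patch is corrupted")
        cmds.append((opcode, blob[off:off + plen]))
        off += plen
    return cmds

# Two fake vendor-command records: one with a 2-byte payload, one empty.
blob = bytes([0x4c, 0xfc, 0x02, 0xaa, 0xbb, 0x4e, 0xfc, 0x00])
print(parse_patch_stream(blob))
```

In the driver each parsed record is replayed to the controller with __hci_cmd_sync(); splitting the parser out like this is only a model of the framing, since the real helper also inserts the Download Minidrv and post-Launch-Ram delays shown above.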
+6
drivers/bluetooth/btbcm.h
··· 25 25 26 26 int btbcm_check_bdaddr(struct hci_dev *hdev); 27 27 int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr); 28 + int btbcm_patchram(struct hci_dev *hdev, const char *firmware); 28 29 29 30 int btbcm_setup_patchram(struct hci_dev *hdev); 30 31 int btbcm_setup_apple(struct hci_dev *hdev); ··· 38 37 } 39 38 40 39 static inline int btbcm_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr) 40 + { 41 + return -EOPNOTSUPP; 42 + } 43 + 44 + static inline int btbcm_patchram(struct hci_dev *hdev, const char *firmware) 41 45 { 42 46 return -EOPNOTSUPP; 43 47 }
+401 -2
drivers/bluetooth/btusb.c
··· 24 24 #include <linux/module.h> 25 25 #include <linux/usb.h> 26 26 #include <linux/firmware.h> 27 + #include <asm/unaligned.h> 27 28 28 29 #include <net/bluetooth/bluetooth.h> 29 30 #include <net/bluetooth/hci_core.h> ··· 58 57 #define BTUSB_AMP 0x4000 59 58 #define BTUSB_QCA_ROME 0x8000 60 59 #define BTUSB_BCM_APPLE 0x10000 60 + #define BTUSB_REALTEK 0x20000 61 61 62 62 static const struct usb_device_id btusb_table[] = { 63 63 /* Generic Bluetooth USB device */ ··· 289 287 /* Other Intel Bluetooth devices */ 290 288 { USB_VENDOR_AND_INTERFACE_INFO(0x8087, 0xe0, 0x01, 0x01), 291 289 .driver_info = BTUSB_IGNORE }, 290 + 291 + /* Realtek Bluetooth devices */ 292 + { USB_VENDOR_AND_INTERFACE_INFO(0x0bda, 0xe0, 0x01, 0x01), 293 + .driver_info = BTUSB_REALTEK }, 294 + 295 + /* Additional Realtek 8723AE Bluetooth devices */ 296 + { USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK }, 297 + { USB_DEVICE(0x13d3, 0x3394), .driver_info = BTUSB_REALTEK }, 298 + 299 + /* Additional Realtek 8723BE Bluetooth devices */ 300 + { USB_DEVICE(0x0489, 0xe085), .driver_info = BTUSB_REALTEK }, 301 + { USB_DEVICE(0x0489, 0xe08b), .driver_info = BTUSB_REALTEK }, 302 + { USB_DEVICE(0x13d3, 0x3410), .driver_info = BTUSB_REALTEK }, 303 + { USB_DEVICE(0x13d3, 0x3416), .driver_info = BTUSB_REALTEK }, 304 + { USB_DEVICE(0x13d3, 0x3459), .driver_info = BTUSB_REALTEK }, 305 + 306 + /* Additional Realtek 8821AE Bluetooth devices */ 307 + { USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK }, 308 + { USB_DEVICE(0x13d3, 0x3414), .driver_info = BTUSB_REALTEK }, 309 + { USB_DEVICE(0x13d3, 0x3458), .driver_info = BTUSB_REALTEK }, 310 + { USB_DEVICE(0x13d3, 0x3461), .driver_info = BTUSB_REALTEK }, 311 + { USB_DEVICE(0x13d3, 0x3462), .driver_info = BTUSB_REALTEK }, 292 312 293 313 { } /* Terminating entry */ 294 314 }; ··· 916 892 */ 917 893 if (data->setup_on_usb) { 918 894 err = data->setup_on_usb(hdev); 919 - if (err <0) 895 + if (err < 0) 920 896 return err; 921 897 } 922 898 ··· 
1367 1343 kfree_skb(skb); 1368 1344 1369 1345 return ret; 1346 + } 1347 + 1348 + #define RTL_FRAG_LEN 252 1349 + 1350 + struct rtl_download_cmd { 1351 + __u8 index; 1352 + __u8 data[RTL_FRAG_LEN]; 1353 + } __packed; 1354 + 1355 + struct rtl_download_response { 1356 + __u8 status; 1357 + __u8 index; 1358 + } __packed; 1359 + 1360 + struct rtl_rom_version_evt { 1361 + __u8 status; 1362 + __u8 version; 1363 + } __packed; 1364 + 1365 + struct rtl_epatch_header { 1366 + __u8 signature[8]; 1367 + __le32 fw_version; 1368 + __le16 num_patches; 1369 + } __packed; 1370 + 1371 + #define RTL_EPATCH_SIGNATURE "Realtech" 1372 + #define RTL_ROM_LMP_3499 0x3499 1373 + #define RTL_ROM_LMP_8723A 0x1200 1374 + #define RTL_ROM_LMP_8723B 0x8723 1375 + #define RTL_ROM_LMP_8821A 0x8821 1376 + #define RTL_ROM_LMP_8761A 0x8761 1377 + 1378 + static int rtl_read_rom_version(struct hci_dev *hdev, u8 *version) 1379 + { 1380 + struct rtl_rom_version_evt *rom_version; 1381 + struct sk_buff *skb; 1382 + int ret; 1383 + 1384 + /* Read RTL ROM version command */ 1385 + skb = __hci_cmd_sync(hdev, 0xfc6d, 0, NULL, HCI_INIT_TIMEOUT); 1386 + if (IS_ERR(skb)) { 1387 + BT_ERR("%s: Read ROM version failed (%ld)", 1388 + hdev->name, PTR_ERR(skb)); 1389 + return PTR_ERR(skb); 1390 + } 1391 + 1392 + if (skb->len != sizeof(*rom_version)) { 1393 + BT_ERR("%s: RTL version event length mismatch", hdev->name); 1394 + kfree_skb(skb); 1395 + return -EIO; 1396 + } 1397 + 1398 + rom_version = (struct rtl_rom_version_evt *)skb->data; 1399 + BT_INFO("%s: rom_version status=%x version=%x", 1400 + hdev->name, rom_version->status, rom_version->version); 1401 + 1402 + ret = rom_version->status; 1403 + if (ret == 0) 1404 + *version = rom_version->version; 1405 + 1406 + kfree_skb(skb); 1407 + return ret; 1408 + } 1409 + 1410 + static int rtl8723b_parse_firmware(struct hci_dev *hdev, u16 lmp_subver, 1411 + const struct firmware *fw, 1412 + unsigned char **_buf) 1413 + { 1414 + const u8 extension_sig[] = { 0x51, 0x04, 0xfd, 
0x77 }; 1415 + struct rtl_epatch_header *epatch_info; 1416 + unsigned char *buf; 1417 + int i, ret, len; 1418 + size_t min_size; 1419 + u8 opcode, length, data, rom_version = 0; 1420 + int project_id = -1; 1421 + const unsigned char *fwptr, *chip_id_base; 1422 + const unsigned char *patch_length_base, *patch_offset_base; 1423 + u32 patch_offset = 0; 1424 + u16 patch_length, num_patches; 1425 + const u16 project_id_to_lmp_subver[] = { 1426 + RTL_ROM_LMP_8723A, 1427 + RTL_ROM_LMP_8723B, 1428 + RTL_ROM_LMP_8821A, 1429 + RTL_ROM_LMP_8761A 1430 + }; 1431 + 1432 + ret = rtl_read_rom_version(hdev, &rom_version); 1433 + if (ret) 1434 + return -bt_to_errno(ret); 1435 + 1436 + min_size = sizeof(struct rtl_epatch_header) + sizeof(extension_sig) + 3; 1437 + if (fw->size < min_size) 1438 + return -EINVAL; 1439 + 1440 + fwptr = fw->data + fw->size - sizeof(extension_sig); 1441 + if (memcmp(fwptr, extension_sig, sizeof(extension_sig)) != 0) { 1442 + BT_ERR("%s: extension section signature mismatch", hdev->name); 1443 + return -EINVAL; 1444 + } 1445 + 1446 + /* Loop from the end of the firmware parsing instructions, until 1447 + * we find an instruction that identifies the "project ID" for the 1448 + * hardware supported by this firmware file. 1449 + * Once we have that, we double-check that that project_id is suitable 1450 + * for the hardware we are working with.
1451 + */ 1452 + while (fwptr >= fw->data + (sizeof(struct rtl_epatch_header) + 3)) { 1453 + opcode = *--fwptr; 1454 + length = *--fwptr; 1455 + data = *--fwptr; 1456 + 1457 + BT_DBG("check op=%x len=%x data=%x", opcode, length, data); 1458 + 1459 + if (opcode == 0xff) /* EOF */ 1460 + break; 1461 + 1462 + if (length == 0) { 1463 + BT_ERR("%s: found instruction with length 0", 1464 + hdev->name); 1465 + return -EINVAL; 1466 + } 1467 + 1468 + if (opcode == 0 && length == 1) { 1469 + project_id = data; 1470 + break; 1471 + } 1472 + 1473 + fwptr -= length; 1474 + } 1475 + 1476 + if (project_id < 0) { 1477 + BT_ERR("%s: failed to find version instruction", hdev->name); 1478 + return -EINVAL; 1479 + } 1480 + 1481 + if (project_id >= ARRAY_SIZE(project_id_to_lmp_subver)) { 1482 + BT_ERR("%s: unknown project id %d", hdev->name, project_id); 1483 + return -EINVAL; 1484 + } 1485 + 1486 + if (lmp_subver != project_id_to_lmp_subver[project_id]) { 1487 + BT_ERR("%s: firmware is for %x but this is a %x", hdev->name, 1488 + project_id_to_lmp_subver[project_id], lmp_subver); 1489 + return -EINVAL; 1490 + } 1491 + 1492 + epatch_info = (struct rtl_epatch_header *)fw->data; 1493 + if (memcmp(epatch_info->signature, RTL_EPATCH_SIGNATURE, 8) != 0) { 1494 + BT_ERR("%s: bad EPATCH signature", hdev->name); 1495 + return -EINVAL; 1496 + } 1497 + 1498 + num_patches = le16_to_cpu(epatch_info->num_patches); 1499 + BT_DBG("fw_version=%x, num_patches=%d", 1500 + le32_to_cpu(epatch_info->fw_version), num_patches); 1501 + 1502 + /* After the rtl_epatch_header there is a funky patch metadata section. 1503 + * Assuming 2 patches, the layout is: 1504 + * ChipID1 ChipID2 PatchLength1 PatchLength2 PatchOffset1 PatchOffset2 1505 + * 1506 + * Find the right patch for this chip. 
1507 + */ 1508 + min_size += 8 * num_patches; 1509 + if (fw->size < min_size) 1510 + return -EINVAL; 1511 + 1512 + chip_id_base = fw->data + sizeof(struct rtl_epatch_header); 1513 + patch_length_base = chip_id_base + (sizeof(u16) * num_patches); 1514 + patch_offset_base = patch_length_base + (sizeof(u16) * num_patches); 1515 + for (i = 0; i < num_patches; i++) { 1516 + u16 chip_id = get_unaligned_le16(chip_id_base + 1517 + (i * sizeof(u16))); 1518 + if (chip_id == rom_version + 1) { 1519 + patch_length = get_unaligned_le16(patch_length_base + 1520 + (i * sizeof(u16))); 1521 + patch_offset = get_unaligned_le32(patch_offset_base + 1522 + (i * sizeof(u32))); 1523 + break; 1524 + } 1525 + } 1526 + 1527 + if (!patch_offset) { 1528 + BT_ERR("%s: didn't find patch for chip id %d", 1529 + hdev->name, rom_version); 1530 + return -EINVAL; 1531 + } 1532 + 1533 + BT_DBG("length=%x offset=%x index %d", patch_length, patch_offset, i); 1534 + min_size = patch_offset + patch_length; 1535 + if (fw->size < min_size) 1536 + return -EINVAL; 1537 + 1538 + /* Copy the firmware into a new buffer and write the version at 1539 + * the end. 
1540 + */ 1541 + len = patch_length; 1542 + buf = kmemdup(fw->data + patch_offset, patch_length, GFP_KERNEL); 1543 + if (!buf) 1544 + return -ENOMEM; 1545 + 1546 + memcpy(buf + patch_length - 4, &epatch_info->fw_version, 4); 1547 + 1548 + *_buf = buf; 1549 + return len; 1550 + } 1551 + 1552 + static int rtl_download_firmware(struct hci_dev *hdev, 1553 + const unsigned char *data, int fw_len) 1554 + { 1555 + struct rtl_download_cmd *dl_cmd; 1556 + int frag_num = fw_len / RTL_FRAG_LEN + 1; 1557 + int frag_len = RTL_FRAG_LEN; 1558 + int ret = 0; 1559 + int i; 1560 + 1561 + dl_cmd = kmalloc(sizeof(struct rtl_download_cmd), GFP_KERNEL); 1562 + if (!dl_cmd) 1563 + return -ENOMEM; 1564 + 1565 + for (i = 0; i < frag_num; i++) { 1566 + struct rtl_download_response *dl_resp; 1567 + struct sk_buff *skb; 1568 + 1569 + BT_DBG("download fw (%d/%d)", i, frag_num); 1570 + 1571 + dl_cmd->index = i; 1572 + if (i == (frag_num - 1)) { 1573 + dl_cmd->index |= 0x80; /* data end */ 1574 + frag_len = fw_len % RTL_FRAG_LEN; 1575 + } 1576 + memcpy(dl_cmd->data, data, frag_len); 1577 + 1578 + /* Send download command */ 1579 + skb = __hci_cmd_sync(hdev, 0xfc20, frag_len + 1, dl_cmd, 1580 + HCI_INIT_TIMEOUT); 1581 + if (IS_ERR(skb)) { 1582 + BT_ERR("%s: download fw command failed (%ld)", 1583 + hdev->name, PTR_ERR(skb)); 1584 + ret = -PTR_ERR(skb); 1585 + goto out; 1586 + } 1587 + 1588 + if (skb->len != sizeof(*dl_resp)) { 1589 + BT_ERR("%s: download fw event length mismatch", 1590 + hdev->name); 1591 + kfree_skb(skb); 1592 + ret = -EIO; 1593 + goto out; 1594 + } 1595 + 1596 + dl_resp = (struct rtl_download_response *)skb->data; 1597 + if (dl_resp->status != 0) { 1598 + kfree_skb(skb); 1599 + ret = bt_to_errno(dl_resp->status); 1600 + goto out; 1601 + } 1602 + 1603 + kfree_skb(skb); 1604 + data += RTL_FRAG_LEN; 1605 + } 1606 + 1607 + out: 1608 + kfree(dl_cmd); 1609 + return ret; 1610 + } 1611 + 1612 + static int btusb_setup_rtl8723a(struct hci_dev *hdev) 1613 + { 1614 + struct btusb_data 
*data = dev_get_drvdata(&hdev->dev); 1615 + struct usb_device *udev = interface_to_usbdev(data->intf); 1616 + const struct firmware *fw; 1617 + int ret; 1618 + 1619 + BT_INFO("%s: rtl: loading rtl_bt/rtl8723a_fw.bin", hdev->name); 1620 + ret = request_firmware(&fw, "rtl_bt/rtl8723a_fw.bin", &udev->dev); 1621 + if (ret < 0) { 1622 + BT_ERR("%s: Failed to load rtl_bt/rtl8723a_fw.bin", hdev->name); 1623 + return ret; 1624 + } 1625 + 1626 + if (fw->size < 8) { 1627 + ret = -EINVAL; 1628 + goto out; 1629 + } 1630 + 1631 + /* Check that the firmware doesn't have the epatch signature 1632 + * (which is only for RTL8723B and newer). 1633 + */ 1634 + if (!memcmp(fw->data, RTL_EPATCH_SIGNATURE, 8)) { 1635 + BT_ERR("%s: unexpected EPATCH signature!", hdev->name); 1636 + ret = -EINVAL; 1637 + goto out; 1638 + } 1639 + 1640 + ret = rtl_download_firmware(hdev, fw->data, fw->size); 1641 + 1642 + out: 1643 + release_firmware(fw); 1644 + return ret; 1645 + } 1646 + 1647 + static int btusb_setup_rtl8723b(struct hci_dev *hdev, u16 lmp_subver, 1648 + const char *fw_name) 1649 + { 1650 + struct btusb_data *data = dev_get_drvdata(&hdev->dev); 1651 + struct usb_device *udev = interface_to_usbdev(data->intf); 1652 + unsigned char *fw_data = NULL; 1653 + const struct firmware *fw; 1654 + int ret; 1655 + 1656 + BT_INFO("%s: rtl: loading %s", hdev->name, fw_name); 1657 + ret = request_firmware(&fw, fw_name, &udev->dev); 1658 + if (ret < 0) { 1659 + BT_ERR("%s: Failed to load %s", hdev->name, fw_name); 1660 + return ret; 1661 + } 1662 + 1663 + ret = rtl8723b_parse_firmware(hdev, lmp_subver, fw, &fw_data); 1664 + if (ret < 0) 1665 + goto out; 1666 + 1667 + ret = rtl_download_firmware(hdev, fw_data, ret); 1668 + kfree(fw_data); 1669 + if (ret < 0) 1670 + goto out; 1671 + 1672 + out: 1673 + release_firmware(fw); 1674 + return ret; 1675 + } 1676 + 1677 + static int btusb_setup_realtek(struct hci_dev *hdev) 1678 + { 1679 + struct sk_buff *skb; 1680 + struct hci_rp_read_local_version *resp; 1681 + 
u16 lmp_subver; 1682 + 1683 + skb = btusb_read_local_version(hdev); 1684 + if (IS_ERR(skb)) 1685 + return -PTR_ERR(skb); 1686 + 1687 + resp = (struct hci_rp_read_local_version *)skb->data; 1688 + BT_INFO("%s: rtl: examining hci_ver=%02x hci_rev=%04x lmp_ver=%02x " 1689 + "lmp_subver=%04x", hdev->name, resp->hci_ver, resp->hci_rev, 1690 + resp->lmp_ver, resp->lmp_subver); 1691 + 1692 + lmp_subver = le16_to_cpu(resp->lmp_subver); 1693 + kfree_skb(skb); 1694 + 1695 + /* Match a set of subver values that correspond to stock firmware, 1696 + * which is not compatible with standard btusb. 1697 + * If matched, upload an alternative firmware that does conform to 1698 + * standard btusb. Once that firmware is uploaded, the subver changes 1699 + * to a different value. 1700 + */ 1701 + switch (lmp_subver) { 1702 + case RTL_ROM_LMP_8723A: 1703 + case RTL_ROM_LMP_3499: 1704 + return btusb_setup_rtl8723a(hdev); 1705 + case RTL_ROM_LMP_8723B: 1706 + return btusb_setup_rtl8723b(hdev, lmp_subver, 1707 + "rtl_bt/rtl8723b_fw.bin"); 1708 + case RTL_ROM_LMP_8821A: 1709 + return btusb_setup_rtl8723b(hdev, lmp_subver, 1710 + "rtl_bt/rtl8821a_fw.bin"); 1711 + case RTL_ROM_LMP_8761A: 1712 + return btusb_setup_rtl8723b(hdev, lmp_subver, 1713 + "rtl_bt/rtl8761a_fw.bin"); 1714 + default: 1715 + BT_INFO("rtl: assuming no firmware upload needed."); 1716 + return 0; 1717 + } 1370 1718 } 1371 1719 1372 1720 static const struct firmware *btusb_setup_intel_get_fw(struct hci_dev *hdev, ··· 2973 2577 int i, err; 2974 2578 2975 2579 err = btusb_qca_send_vendor_req(hdev, QCA_GET_TARGET_VERSION, &ver, 2976 - sizeof(ver)); 2580 + sizeof(ver)); 2977 2581 if (err < 0) 2978 2582 return err; 2979 2583 ··· 3171 2775 data->setup_on_usb = btusb_setup_qca; 3172 2776 hdev->set_bdaddr = btusb_set_bdaddr_ath3012; 3173 2777 } 2778 + 2779 + if (id->driver_info & BTUSB_REALTEK) 2780 + hdev->setup = btusb_setup_realtek; 3174 2781 3175 2782 if (id->driver_info & BTUSB_AMP) { 3176 2783 /* AMP controllers do not support 
SCO packets */
+68 -40
drivers/bluetooth/hci_ath.c
··· 95 95 hci_uart_tx_wakeup(hu); 96 96 } 97 97 98 - /* Initialize protocol */ 99 98 static int ath_open(struct hci_uart *hu) 100 99 { 101 100 struct ath_struct *ath; ··· 115 116 return 0; 116 117 } 117 118 118 - /* Flush protocol data */ 119 - static int ath_flush(struct hci_uart *hu) 120 - { 121 - struct ath_struct *ath = hu->priv; 122 - 123 - BT_DBG("hu %p", hu); 124 - 125 - skb_queue_purge(&ath->txq); 126 - 127 - return 0; 128 - } 129 - 130 - /* Close protocol */ 131 119 static int ath_close(struct hci_uart *hu) 132 120 { 133 121 struct ath_struct *ath = hu->priv; ··· 133 147 return 0; 134 148 } 135 149 150 + static int ath_flush(struct hci_uart *hu) 151 + { 152 + struct ath_struct *ath = hu->priv; 153 + 154 + BT_DBG("hu %p", hu); 155 + 156 + skb_queue_purge(&ath->txq); 157 + 158 + return 0; 159 + } 160 + 161 + static int ath_set_bdaddr(struct hci_dev *hdev, const bdaddr_t *bdaddr) 162 + { 163 + struct sk_buff *skb; 164 + u8 buf[10]; 165 + int err; 166 + 167 + buf[0] = 0x01; 168 + buf[1] = 0x01; 169 + buf[2] = 0x00; 170 + buf[3] = sizeof(bdaddr_t); 171 + memcpy(buf + 4, bdaddr, sizeof(bdaddr_t)); 172 + 173 + skb = __hci_cmd_sync(hdev, 0xfc0b, sizeof(buf), buf, HCI_INIT_TIMEOUT); 174 + if (IS_ERR(skb)) { 175 + err = PTR_ERR(skb); 176 + BT_ERR("%s: Change address command failed (%d)", 177 + hdev->name, err); 178 + return err; 179 + } 180 + kfree_skb(skb); 181 + 182 + return 0; 183 + } 184 + 185 + static int ath_setup(struct hci_uart *hu) 186 + { 187 + BT_DBG("hu %p", hu); 188 + 189 + hu->hdev->set_bdaddr = ath_set_bdaddr; 190 + 191 + return 0; 192 + } 193 + 194 + static const struct h4_recv_pkt ath_recv_pkts[] = { 195 + { H4_RECV_ACL, .recv = hci_recv_frame }, 196 + { H4_RECV_SCO, .recv = hci_recv_frame }, 197 + { H4_RECV_EVENT, .recv = hci_recv_frame }, 198 + }; 199 + 200 + static int ath_recv(struct hci_uart *hu, const void *data, int count) 201 + { 202 + struct ath_struct *ath = hu->priv; 203 + 204 + ath->rx_skb = h4_recv_buf(hu->hdev, ath->rx_skb, data, 
count, 205 + ath_recv_pkts, ARRAY_SIZE(ath_recv_pkts)); 206 + if (IS_ERR(ath->rx_skb)) { 207 + int err = PTR_ERR(ath->rx_skb); 208 + BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err); 209 + return err; 210 + } 211 + 212 + return count; 213 + } 214 + 136 215 #define HCI_OP_ATH_SLEEP 0xFC04 137 216 138 - /* Enqueue frame for transmittion */ 139 217 static int ath_enqueue(struct hci_uart *hu, struct sk_buff *skb) 140 218 { 141 219 struct ath_struct *ath = hu->priv; ··· 209 159 return 0; 210 160 } 211 161 212 - /* 213 - * Update power management enable flag with parameters of 162 + /* Update power management enable flag with parameters of 214 163 * HCI sleep enable vendor specific HCI command. 215 164 */ 216 165 if (bt_cb(skb)->pkt_type == HCI_COMMAND_PKT) { ··· 239 190 return skb_dequeue(&ath->txq); 240 191 } 241 192 242 - static const struct h4_recv_pkt ath_recv_pkts[] = { 243 - { H4_RECV_ACL, .recv = hci_recv_frame }, 244 - { H4_RECV_SCO, .recv = hci_recv_frame }, 245 - { H4_RECV_EVENT, .recv = hci_recv_frame }, 246 - }; 247 - 248 - /* Recv data */ 249 - static int ath_recv(struct hci_uart *hu, const void *data, int count) 250 - { 251 - struct ath_struct *ath = hu->priv; 252 - 253 - ath->rx_skb = h4_recv_buf(hu->hdev, ath->rx_skb, data, count, 254 - ath_recv_pkts, ARRAY_SIZE(ath_recv_pkts)); 255 - if (IS_ERR(ath->rx_skb)) { 256 - int err = PTR_ERR(ath->rx_skb); 257 - BT_ERR("%s: Frame reassembly failed (%d)", hu->hdev->name, err); 258 - return err; 259 - } 260 - 261 - return count; 262 - } 263 - 264 193 static const struct hci_uart_proto athp = { 265 194 .id = HCI_UART_ATH3K, 266 195 .name = "ATH3K", 267 196 .open = ath_open, 268 197 .close = ath_close, 198 + .flush = ath_flush, 199 + .setup = ath_setup, 269 200 .recv = ath_recv, 270 201 .enqueue = ath_enqueue, 271 202 .dequeue = ath_dequeue, 272 - .flush = ath_flush, 273 203 }; 274 204 275 205 int __init ath_init(void)
+12 -12
drivers/extcon/extcon-usb-gpio.c
··· 119 119 return PTR_ERR(info->id_gpiod); 120 120 } 121 121 122 + info->edev = devm_extcon_dev_allocate(dev, usb_extcon_cable); 123 + if (IS_ERR(info->edev)) { 124 + dev_err(dev, "failed to allocate extcon device\n"); 125 + return -ENOMEM; 126 + } 127 + 128 + ret = devm_extcon_dev_register(dev, info->edev); 129 + if (ret < 0) { 130 + dev_err(dev, "failed to register extcon device\n"); 131 + return ret; 132 + } 133 + 122 134 ret = gpiod_set_debounce(info->id_gpiod, 123 135 USB_GPIO_DEBOUNCE_MS * 1000); 124 136 if (ret < 0) ··· 151 139 pdev->name, info); 152 140 if (ret < 0) { 153 141 dev_err(dev, "failed to request handler for ID IRQ\n"); 154 - return ret; 155 - } 156 - 157 - info->edev = devm_extcon_dev_allocate(dev, usb_extcon_cable); 158 - if (IS_ERR(info->edev)) { 159 - dev_err(dev, "failed to allocate extcon device\n"); 160 - return -ENOMEM; 161 - } 162 - 163 - ret = devm_extcon_dev_register(dev, info->edev); 164 - if (ret < 0) { 165 - dev_err(dev, "failed to register extcon device\n"); 166 142 return ret; 167 143 } 168 144
+6 -6
drivers/firmware/dmi_scan.c
··· 499 499 buf += 16; 500 500 501 501 if (memcmp(buf, "_DMI_", 5) == 0 && dmi_checksum(buf, 15)) { 502 + if (smbios_ver) 503 + dmi_ver = smbios_ver; 504 + else 505 + dmi_ver = (buf[14] & 0xF0) << 4 | (buf[14] & 0x0F); 502 506 dmi_num = get_unaligned_le16(buf + 12); 503 507 dmi_len = get_unaligned_le16(buf + 6); 504 508 dmi_base = get_unaligned_le32(buf + 8); 505 509 506 510 if (dmi_walk_early(dmi_decode) == 0) { 507 511 if (smbios_ver) { 508 - dmi_ver = smbios_ver; 509 - pr_info("SMBIOS %d.%d%s present.\n", 510 - dmi_ver >> 8, dmi_ver & 0xFF, 511 - (dmi_ver < 0x0300) ? "" : ".x"); 512 + pr_info("SMBIOS %d.%d present.\n", 513 + dmi_ver >> 8, dmi_ver & 0xFF); 512 514 } else { 513 - dmi_ver = (buf[14] & 0xF0) << 4 | 514 - (buf[14] & 0x0F); 515 515 pr_info("Legacy DMI %d.%d present.\n", 516 516 dmi_ver >> 8, dmi_ver & 0xFF); 517 517 }
+10 -3
drivers/gpu/drm/i915/i915_drv.c
··· 699 699 intel_init_pch_refclk(dev); 700 700 drm_mode_config_reset(dev); 701 701 702 + /* 703 + * Interrupts have to be enabled before any batches are run. If not the 704 + * GPU will hang. i915_gem_init_hw() will initiate batches to 705 + * update/restore the context. 706 + * 707 + * Modeset enabling in intel_modeset_init_hw() also needs working 708 + * interrupts. 709 + */ 710 + intel_runtime_pm_enable_interrupts(dev_priv); 711 + 702 712 mutex_lock(&dev->struct_mutex); 703 713 if (i915_gem_init_hw(dev)) { 704 714 DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n"); 705 715 atomic_set_mask(I915_WEDGED, &dev_priv->gpu_error.reset_counter); 706 716 } 707 717 mutex_unlock(&dev->struct_mutex); 708 - 709 - /* We need working interrupts for modeset enabling ... */ 710 - intel_runtime_pm_enable_interrupts(dev_priv); 711 718 712 719 intel_modeset_init_hw(dev); 713 720
+2 -2
drivers/gpu/drm/radeon/cik.c
··· 5822 5822 L2_CACHE_BIGK_FRAGMENT_SIZE(4)); 5823 5823 /* setup context0 */ 5824 5824 WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12); 5825 - WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12); 5825 + WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1); 5826 5826 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12); 5827 5827 WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR, 5828 5828 (u32)(rdev->dummy_page.addr >> 12)); ··· 5837 5837 /* restore context1-15 */ 5838 5838 /* set vm size, must be a multiple of 4 */ 5839 5839 WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0); 5840 - WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn); 5840 + WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1); 5841 5841 for (i = 1; i < 16; i++) { 5842 5842 if (i < 8) 5843 5843 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
+1 -1
drivers/gpu/drm/radeon/evergreen.c
··· 2485 2485 WREG32(MC_VM_MB_L1_TLB2_CNTL, tmp); 2486 2486 WREG32(MC_VM_MB_L1_TLB3_CNTL, tmp); 2487 2487 WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12); 2488 - WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12); 2488 + WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1); 2489 2489 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12); 2490 2490 WREG32(VM_CONTEXT0_CNTL, ENABLE_CONTEXT | PAGE_TABLE_DEPTH(0) | 2491 2491 RANGE_PROTECTION_FAULT_ENABLE_DEFAULT);
+3 -2
drivers/gpu/drm/radeon/ni.c
··· 1282 1282 L2_CACHE_BIGK_FRAGMENT_SIZE(6)); 1283 1283 /* setup context0 */ 1284 1284 WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12); 1285 - WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12); 1285 + WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1); 1286 1286 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12); 1287 1287 WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR, 1288 1288 (u32)(rdev->dummy_page.addr >> 12)); ··· 1301 1301 */ 1302 1302 for (i = 1; i < 8; i++) { 1303 1303 WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR + (i << 2), 0); 1304 - WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2), rdev->vm_manager.max_pfn); 1304 + WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR + (i << 2), 1305 + rdev->vm_manager.max_pfn - 1); 1305 1306 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2), 1306 1307 rdev->vm_manager.saved_table_addr[i]); 1307 1308 }
+1 -1
drivers/gpu/drm/radeon/r600.c
··· 1112 1112 WREG32(MC_VM_L1_TLB_MCB_RD_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE); 1113 1113 WREG32(MC_VM_L1_TLB_MCB_WR_SEM_CNTL, tmp | ENABLE_SEMAPHORE_MODE); 1114 1114 WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12); 1115 - WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12); 1115 + WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1); 1116 1116 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12); 1117 1117 WREG32(VM_CONTEXT0_CNTL, ENABLE_CONTEXT | PAGE_TABLE_DEPTH(0) | 1118 1118 RANGE_PROTECTION_FAULT_ENABLE_DEFAULT);
+3
drivers/gpu/drm/radeon/radeon_dp_mst.c
··· 666 666 int ret; 667 667 u8 msg[1]; 668 668 669 + if (!radeon_mst) 670 + return 0; 671 + 669 672 if (dig_connector->dpcd[DP_DPCD_REV] < 0x12) 670 673 return 0; 671 674
+1 -1
drivers/gpu/drm/radeon/rv770.c
··· 921 921 WREG32(MC_VM_MB_L1_TLB2_CNTL, tmp); 922 922 WREG32(MC_VM_MB_L1_TLB3_CNTL, tmp); 923 923 WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12); 924 - WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12); 924 + WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1); 925 925 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12); 926 926 WREG32(VM_CONTEXT0_CNTL, ENABLE_CONTEXT | PAGE_TABLE_DEPTH(0) | 927 927 RANGE_PROTECTION_FAULT_ENABLE_DEFAULT);
+2 -2
drivers/gpu/drm/radeon/si.c
··· 4303 4303 L2_CACHE_BIGK_FRAGMENT_SIZE(4)); 4304 4304 /* setup context0 */ 4305 4305 WREG32(VM_CONTEXT0_PAGE_TABLE_START_ADDR, rdev->mc.gtt_start >> 12); 4306 - WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, rdev->mc.gtt_end >> 12); 4306 + WREG32(VM_CONTEXT0_PAGE_TABLE_END_ADDR, (rdev->mc.gtt_end >> 12) - 1); 4307 4307 WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR, rdev->gart.table_addr >> 12); 4308 4308 WREG32(VM_CONTEXT0_PROTECTION_FAULT_DEFAULT_ADDR, 4309 4309 (u32)(rdev->dummy_page.addr >> 12)); ··· 4318 4318 /* empty context1-15 */ 4319 4319 /* set vm size, must be a multiple of 4 */ 4320 4320 WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0); 4321 - WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn); 4321 + WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn - 1); 4322 4322 /* Assign the pt base to something valid for now; the pts used for 4323 4323 * the VMs are determined by the application and setup and assigned 4324 4324 * on the fly in the vm part of radeon_gart.c
-9
drivers/ide/Kconfig
··· 643 643 help 644 644 This driver adds support for Toshiba TC86C001 GOKU-S chip. 645 645 646 - config BLK_DEV_CELLEB 647 - tristate "Toshiba's Cell Reference Set IDE support" 648 - depends on PPC_CELLEB 649 - select BLK_DEV_IDEDMA_PCI 650 - help 651 - This driver provides support for the on-board IDE controller on 652 - Toshiba Cell Reference Board. 653 - If unsure, say Y. 654 - 655 646 endif 656 647 657 648 # TODO: BLK_DEV_IDEDMA_PCI -> BLK_DEV_IDEDMA_SFF
-1
drivers/ide/Makefile
··· 38 38 obj-$(CONFIG_BLK_DEV_ALI15X3) += alim15x3.o 39 39 obj-$(CONFIG_BLK_DEV_AMD74XX) += amd74xx.o 40 40 obj-$(CONFIG_BLK_DEV_ATIIXP) += atiixp.o 41 - obj-$(CONFIG_BLK_DEV_CELLEB) += scc_pata.o 42 41 obj-$(CONFIG_BLK_DEV_CMD64X) += cmd64x.o 43 42 obj-$(CONFIG_BLK_DEV_CS5520) += cs5520.o 44 43 obj-$(CONFIG_BLK_DEV_CS5530) += cs5530.o
-887
drivers/ide/scc_pata.c
··· 1 - /* 2 - * Support for IDE interfaces on Celleb platform 3 - * 4 - * (C) Copyright 2006 TOSHIBA CORPORATION 5 - * 6 - * This code is based on drivers/ide/pci/siimage.c: 7 - * Copyright (C) 2001-2002 Andre Hedrick <andre@linux-ide.org> 8 - * Copyright (C) 2003 Red Hat 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License as published by 12 - * the Free Software Foundation; either version 2 of the License, or 13 - * (at your option) any later version. 14 - * 15 - * This program is distributed in the hope that it will be useful, 16 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 17 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 18 - * GNU General Public License for more details. 19 - * 20 - * You should have received a copy of the GNU General Public License along 21 - * with this program; if not, write to the Free Software Foundation, Inc., 22 - * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. 
23 - */ 24 - 25 - #include <linux/types.h> 26 - #include <linux/module.h> 27 - #include <linux/pci.h> 28 - #include <linux/delay.h> 29 - #include <linux/ide.h> 30 - #include <linux/init.h> 31 - 32 - #define PCI_DEVICE_ID_TOSHIBA_SCC_ATA 0x01b4 33 - 34 - #define SCC_PATA_NAME "scc IDE" 35 - 36 - #define TDVHSEL_MASTER 0x00000001 37 - #define TDVHSEL_SLAVE 0x00000004 38 - 39 - #define MODE_JCUSFEN 0x00000080 40 - 41 - #define CCKCTRL_ATARESET 0x00040000 42 - #define CCKCTRL_BUFCNT 0x00020000 43 - #define CCKCTRL_CRST 0x00010000 44 - #define CCKCTRL_OCLKEN 0x00000100 45 - #define CCKCTRL_ATACLKOEN 0x00000002 46 - #define CCKCTRL_LCLKEN 0x00000001 47 - 48 - #define QCHCD_IOS_SS 0x00000001 49 - 50 - #define QCHSD_STPDIAG 0x00020000 51 - 52 - #define INTMASK_MSK 0xD1000012 53 - #define INTSTS_SERROR 0x80000000 54 - #define INTSTS_PRERR 0x40000000 55 - #define INTSTS_RERR 0x10000000 56 - #define INTSTS_ICERR 0x01000000 57 - #define INTSTS_BMSINT 0x00000010 58 - #define INTSTS_BMHE 0x00000008 59 - #define INTSTS_IOIRQS 0x00000004 60 - #define INTSTS_INTRQ 0x00000002 61 - #define INTSTS_ACTEINT 0x00000001 62 - 63 - #define ECMODE_VALUE 0x01 64 - 65 - static struct scc_ports { 66 - unsigned long ctl, dma; 67 - struct ide_host *host; /* for removing port from system */ 68 - } scc_ports[MAX_HWIFS]; 69 - 70 - /* PIO transfer mode table */ 71 - /* JCHST */ 72 - static unsigned long JCHSTtbl[2][7] = { 73 - {0x0E, 0x05, 0x02, 0x03, 0x02, 0x00, 0x00}, /* 100MHz */ 74 - {0x13, 0x07, 0x04, 0x04, 0x03, 0x00, 0x00} /* 133MHz */ 75 - }; 76 - 77 - /* JCHHT */ 78 - static unsigned long JCHHTtbl[2][7] = { 79 - {0x0E, 0x02, 0x02, 0x02, 0x02, 0x00, 0x00}, /* 100MHz */ 80 - {0x13, 0x03, 0x03, 0x03, 0x03, 0x00, 0x00} /* 133MHz */ 81 - }; 82 - 83 - /* JCHCT */ 84 - static unsigned long JCHCTtbl[2][7] = { 85 - {0x1D, 0x1D, 0x1C, 0x0B, 0x06, 0x00, 0x00}, /* 100MHz */ 86 - {0x27, 0x26, 0x26, 0x0E, 0x09, 0x00, 0x00} /* 133MHz */ 87 - }; 88 - 89 - 90 - /* DMA transfer mode table */ 91 - /* 
JCHDCTM/JCHDCTS */ 92 - static unsigned long JCHDCTxtbl[2][7] = { 93 - {0x0A, 0x06, 0x04, 0x03, 0x01, 0x00, 0x00}, /* 100MHz */ 94 - {0x0E, 0x09, 0x06, 0x04, 0x02, 0x01, 0x00} /* 133MHz */ 95 - }; 96 - 97 - /* JCSTWTM/JCSTWTS */ 98 - static unsigned long JCSTWTxtbl[2][7] = { 99 - {0x06, 0x04, 0x03, 0x02, 0x02, 0x02, 0x00}, /* 100MHz */ 100 - {0x09, 0x06, 0x04, 0x02, 0x02, 0x02, 0x02} /* 133MHz */ 101 - }; 102 - 103 - /* JCTSS */ 104 - static unsigned long JCTSStbl[2][7] = { 105 - {0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x00}, /* 100MHz */ 106 - {0x05, 0x05, 0x05, 0x05, 0x05, 0x05, 0x05} /* 133MHz */ 107 - }; 108 - 109 - /* JCENVT */ 110 - static unsigned long JCENVTtbl[2][7] = { 111 - {0x01, 0x01, 0x01, 0x01, 0x01, 0x01, 0x00}, /* 100MHz */ 112 - {0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02} /* 133MHz */ 113 - }; 114 - 115 - /* JCACTSELS/JCACTSELM */ 116 - static unsigned long JCACTSELtbl[2][7] = { 117 - {0x00, 0x00, 0x00, 0x00, 0x01, 0x01, 0x00}, /* 100MHz */ 118 - {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01} /* 133MHz */ 119 - }; 120 - 121 - 122 - static u8 scc_ide_inb(unsigned long port) 123 - { 124 - u32 data = in_be32((void*)port); 125 - return (u8)data; 126 - } 127 - 128 - static void scc_exec_command(ide_hwif_t *hwif, u8 cmd) 129 - { 130 - out_be32((void *)hwif->io_ports.command_addr, cmd); 131 - eieio(); 132 - in_be32((void *)(hwif->dma_base + 0x01c)); 133 - eieio(); 134 - } 135 - 136 - static u8 scc_read_status(ide_hwif_t *hwif) 137 - { 138 - return (u8)in_be32((void *)hwif->io_ports.status_addr); 139 - } 140 - 141 - static u8 scc_read_altstatus(ide_hwif_t *hwif) 142 - { 143 - return (u8)in_be32((void *)hwif->io_ports.ctl_addr); 144 - } 145 - 146 - static u8 scc_dma_sff_read_status(ide_hwif_t *hwif) 147 - { 148 - return (u8)in_be32((void *)(hwif->dma_base + 4)); 149 - } 150 - 151 - static void scc_write_devctl(ide_hwif_t *hwif, u8 ctl) 152 - { 153 - out_be32((void *)hwif->io_ports.ctl_addr, ctl); 154 - eieio(); 155 - in_be32((void *)(hwif->dma_base + 0x01c)); 156 
- eieio(); 157 - } 158 - 159 - static void scc_ide_insw(unsigned long port, void *addr, u32 count) 160 - { 161 - u16 *ptr = (u16 *)addr; 162 - while (count--) { 163 - *ptr++ = le16_to_cpu(in_be32((void*)port)); 164 - } 165 - } 166 - 167 - static void scc_ide_insl(unsigned long port, void *addr, u32 count) 168 - { 169 - u16 *ptr = (u16 *)addr; 170 - while (count--) { 171 - *ptr++ = le16_to_cpu(in_be32((void*)port)); 172 - *ptr++ = le16_to_cpu(in_be32((void*)port)); 173 - } 174 - } 175 - 176 - static void scc_ide_outb(u8 addr, unsigned long port) 177 - { 178 - out_be32((void*)port, addr); 179 - } 180 - 181 - static void 182 - scc_ide_outsw(unsigned long port, void *addr, u32 count) 183 - { 184 - u16 *ptr = (u16 *)addr; 185 - while (count--) { 186 - out_be32((void*)port, cpu_to_le16(*ptr++)); 187 - } 188 - } 189 - 190 - static void 191 - scc_ide_outsl(unsigned long port, void *addr, u32 count) 192 - { 193 - u16 *ptr = (u16 *)addr; 194 - while (count--) { 195 - out_be32((void*)port, cpu_to_le16(*ptr++)); 196 - out_be32((void*)port, cpu_to_le16(*ptr++)); 197 - } 198 - } 199 - 200 - /** 201 - * scc_set_pio_mode - set host controller for PIO mode 202 - * @hwif: port 203 - * @drive: drive 204 - * 205 - * Load the timing settings for this device mode into the 206 - * controller. 
207 - */ 208 - 209 - static void scc_set_pio_mode(ide_hwif_t *hwif, ide_drive_t *drive) 210 - { 211 - struct scc_ports *ports = ide_get_hwifdata(hwif); 212 - unsigned long ctl_base = ports->ctl; 213 - unsigned long cckctrl_port = ctl_base + 0xff0; 214 - unsigned long piosht_port = ctl_base + 0x000; 215 - unsigned long pioct_port = ctl_base + 0x004; 216 - unsigned long reg; 217 - int offset; 218 - const u8 pio = drive->pio_mode - XFER_PIO_0; 219 - 220 - reg = in_be32((void __iomem *)cckctrl_port); 221 - if (reg & CCKCTRL_ATACLKOEN) { 222 - offset = 1; /* 133MHz */ 223 - } else { 224 - offset = 0; /* 100MHz */ 225 - } 226 - reg = JCHSTtbl[offset][pio] << 16 | JCHHTtbl[offset][pio]; 227 - out_be32((void __iomem *)piosht_port, reg); 228 - reg = JCHCTtbl[offset][pio]; 229 - out_be32((void __iomem *)pioct_port, reg); 230 - } 231 - 232 - /** 233 - * scc_set_dma_mode - set host controller for DMA mode 234 - * @hwif: port 235 - * @drive: drive 236 - * 237 - * Load the timing settings for this device mode into the 238 - * controller. 
239 - */ 240 - 241 - static void scc_set_dma_mode(ide_hwif_t *hwif, ide_drive_t *drive) 242 - { 243 - struct scc_ports *ports = ide_get_hwifdata(hwif); 244 - unsigned long ctl_base = ports->ctl; 245 - unsigned long cckctrl_port = ctl_base + 0xff0; 246 - unsigned long mdmact_port = ctl_base + 0x008; 247 - unsigned long mcrcst_port = ctl_base + 0x00c; 248 - unsigned long sdmact_port = ctl_base + 0x010; 249 - unsigned long scrcst_port = ctl_base + 0x014; 250 - unsigned long udenvt_port = ctl_base + 0x018; 251 - unsigned long tdvhsel_port = ctl_base + 0x020; 252 - int is_slave = drive->dn & 1; 253 - int offset, idx; 254 - unsigned long reg; 255 - unsigned long jcactsel; 256 - const u8 speed = drive->dma_mode; 257 - 258 - reg = in_be32((void __iomem *)cckctrl_port); 259 - if (reg & CCKCTRL_ATACLKOEN) { 260 - offset = 1; /* 133MHz */ 261 - } else { 262 - offset = 0; /* 100MHz */ 263 - } 264 - 265 - idx = speed - XFER_UDMA_0; 266 - 267 - jcactsel = JCACTSELtbl[offset][idx]; 268 - if (is_slave) { 269 - out_be32((void __iomem *)sdmact_port, JCHDCTxtbl[offset][idx]); 270 - out_be32((void __iomem *)scrcst_port, JCSTWTxtbl[offset][idx]); 271 - jcactsel = jcactsel << 2; 272 - out_be32((void __iomem *)tdvhsel_port, (in_be32((void __iomem *)tdvhsel_port) & ~TDVHSEL_SLAVE) | jcactsel); 273 - } else { 274 - out_be32((void __iomem *)mdmact_port, JCHDCTxtbl[offset][idx]); 275 - out_be32((void __iomem *)mcrcst_port, JCSTWTxtbl[offset][idx]); 276 - out_be32((void __iomem *)tdvhsel_port, (in_be32((void __iomem *)tdvhsel_port) & ~TDVHSEL_MASTER) | jcactsel); 277 - } 278 - reg = JCTSStbl[offset][idx] << 16 | JCENVTtbl[offset][idx]; 279 - out_be32((void __iomem *)udenvt_port, reg); 280 - } 281 - 282 - static void scc_dma_host_set(ide_drive_t *drive, int on) 283 - { 284 - ide_hwif_t *hwif = drive->hwif; 285 - u8 unit = drive->dn & 1; 286 - u8 dma_stat = scc_dma_sff_read_status(hwif); 287 - 288 - if (on) 289 - dma_stat |= (1 << (5 + unit)); 290 - else 291 - dma_stat &= ~(1 << (5 + unit)); 
292 - 293 - scc_ide_outb(dma_stat, hwif->dma_base + 4); 294 - } 295 - 296 - /** 297 - * scc_dma_setup - begin a DMA phase 298 - * @drive: target device 299 - * @cmd: command 300 - * 301 - * Build an IDE DMA PRD (IDE speak for scatter gather table) 302 - * and then set up the DMA transfer registers. 303 - * 304 - * Returns 0 on success. If a PIO fallback is required then 1 305 - * is returned. 306 - */ 307 - 308 - static int scc_dma_setup(ide_drive_t *drive, struct ide_cmd *cmd) 309 - { 310 - ide_hwif_t *hwif = drive->hwif; 311 - u32 rw = (cmd->tf_flags & IDE_TFLAG_WRITE) ? 0 : ATA_DMA_WR; 312 - u8 dma_stat; 313 - 314 - /* fall back to pio! */ 315 - if (ide_build_dmatable(drive, cmd) == 0) 316 - return 1; 317 - 318 - /* PRD table */ 319 - out_be32((void __iomem *)(hwif->dma_base + 8), hwif->dmatable_dma); 320 - 321 - /* specify r/w */ 322 - out_be32((void __iomem *)hwif->dma_base, rw); 323 - 324 - /* read DMA status for INTR & ERROR flags */ 325 - dma_stat = scc_dma_sff_read_status(hwif); 326 - 327 - /* clear INTR & ERROR flags */ 328 - out_be32((void __iomem *)(hwif->dma_base + 4), dma_stat | 6); 329 - 330 - return 0; 331 - } 332 - 333 - static void scc_dma_start(ide_drive_t *drive) 334 - { 335 - ide_hwif_t *hwif = drive->hwif; 336 - u8 dma_cmd = scc_ide_inb(hwif->dma_base); 337 - 338 - /* start DMA */ 339 - scc_ide_outb(dma_cmd | 1, hwif->dma_base); 340 - } 341 - 342 - static int __scc_dma_end(ide_drive_t *drive) 343 - { 344 - ide_hwif_t *hwif = drive->hwif; 345 - u8 dma_stat, dma_cmd; 346 - 347 - /* get DMA command mode */ 348 - dma_cmd = scc_ide_inb(hwif->dma_base); 349 - /* stop DMA */ 350 - scc_ide_outb(dma_cmd & ~1, hwif->dma_base); 351 - /* get DMA status */ 352 - dma_stat = scc_dma_sff_read_status(hwif); 353 - /* clear the INTR & ERROR bits */ 354 - scc_ide_outb(dma_stat | 6, hwif->dma_base + 4); 355 - /* verify good DMA status */ 356 - return (dma_stat & 7) != 4 ? 
(0x10 | dma_stat) : 0; 357 - } 358 - 359 - /** 360 - * scc_dma_end - Stop DMA 361 - * @drive: IDE drive 362 - * 363 - * Check and clear INT Status register. 364 - * Then call __scc_dma_end(). 365 - */ 366 - 367 - static int scc_dma_end(ide_drive_t *drive) 368 - { 369 - ide_hwif_t *hwif = drive->hwif; 370 - void __iomem *dma_base = (void __iomem *)hwif->dma_base; 371 - unsigned long intsts_port = hwif->dma_base + 0x014; 372 - u32 reg; 373 - int dma_stat, data_loss = 0; 374 - static int retry = 0; 375 - 376 - /* errata A308 workaround: Step5 (check data loss) */ 377 - /* We don't check non ide_disk because it is limited to UDMA4 */ 378 - if (!(in_be32((void __iomem *)hwif->io_ports.ctl_addr) 379 - & ATA_ERR) && 380 - drive->media == ide_disk && drive->current_speed > XFER_UDMA_4) { 381 - reg = in_be32((void __iomem *)intsts_port); 382 - if (!(reg & INTSTS_ACTEINT)) { 383 - printk(KERN_WARNING "%s: operation failed (transfer data loss)\n", 384 - drive->name); 385 - data_loss = 1; 386 - if (retry++) { 387 - struct request *rq = hwif->rq; 388 - ide_drive_t *drive; 389 - int i; 390 - 391 - /* ERROR_RESET and drive->crc_count are needed 392 - * to reduce DMA transfer mode in retry process. 
393 - */ 394 - if (rq) 395 - rq->errors |= ERROR_RESET; 396 - 397 - ide_port_for_each_dev(i, drive, hwif) 398 - drive->crc_count++; 399 - } 400 - } 401 - } 402 - 403 - while (1) { 404 - reg = in_be32((void __iomem *)intsts_port); 405 - 406 - if (reg & INTSTS_SERROR) { 407 - printk(KERN_WARNING "%s: SERROR\n", SCC_PATA_NAME); 408 - out_be32((void __iomem *)intsts_port, INTSTS_SERROR|INTSTS_BMSINT); 409 - 410 - out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS); 411 - continue; 412 - } 413 - 414 - if (reg & INTSTS_PRERR) { 415 - u32 maea0, maec0; 416 - unsigned long ctl_base = hwif->config_data; 417 - 418 - maea0 = in_be32((void __iomem *)(ctl_base + 0xF50)); 419 - maec0 = in_be32((void __iomem *)(ctl_base + 0xF54)); 420 - 421 - printk(KERN_WARNING "%s: PRERR [addr:%x cmd:%x]\n", SCC_PATA_NAME, maea0, maec0); 422 - 423 - out_be32((void __iomem *)intsts_port, INTSTS_PRERR|INTSTS_BMSINT); 424 - 425 - out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS); 426 - continue; 427 - } 428 - 429 - if (reg & INTSTS_RERR) { 430 - printk(KERN_WARNING "%s: Response Error\n", SCC_PATA_NAME); 431 - out_be32((void __iomem *)intsts_port, INTSTS_RERR|INTSTS_BMSINT); 432 - 433 - out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS); 434 - continue; 435 - } 436 - 437 - if (reg & INTSTS_ICERR) { 438 - out_be32(dma_base, in_be32(dma_base) & ~QCHCD_IOS_SS); 439 - 440 - printk(KERN_WARNING "%s: Illegal Configuration\n", SCC_PATA_NAME); 441 - out_be32((void __iomem *)intsts_port, INTSTS_ICERR|INTSTS_BMSINT); 442 - continue; 443 - } 444 - 445 - if (reg & INTSTS_BMSINT) { 446 - printk(KERN_WARNING "%s: Internal Bus Error\n", SCC_PATA_NAME); 447 - out_be32((void __iomem *)intsts_port, INTSTS_BMSINT); 448 - 449 - ide_do_reset(drive); 450 - continue; 451 - } 452 - 453 - if (reg & INTSTS_BMHE) { 454 - out_be32((void __iomem *)intsts_port, INTSTS_BMHE); 455 - continue; 456 - } 457 - 458 - if (reg & INTSTS_ACTEINT) { 459 - out_be32((void __iomem *)intsts_port, INTSTS_ACTEINT); 460 - continue; 461 
- } 462 - 463 - if (reg & INTSTS_IOIRQS) { 464 - out_be32((void __iomem *)intsts_port, INTSTS_IOIRQS); 465 - continue; 466 - } 467 - break; 468 - } 469 - 470 - dma_stat = __scc_dma_end(drive); 471 - if (data_loss) 472 - dma_stat |= 2; /* emulate DMA error (to retry command) */ 473 - return dma_stat; 474 - } 475 - 476 - /* returns 1 if dma irq issued, 0 otherwise */ 477 - static int scc_dma_test_irq(ide_drive_t *drive) 478 - { 479 - ide_hwif_t *hwif = drive->hwif; 480 - u32 int_stat = in_be32((void __iomem *)hwif->dma_base + 0x014); 481 - 482 - /* SCC errata A252,A308 workaround: Step4 */ 483 - if ((in_be32((void __iomem *)hwif->io_ports.ctl_addr) 484 - & ATA_ERR) && 485 - (int_stat & INTSTS_INTRQ)) 486 - return 1; 487 - 488 - /* SCC errata A308 workaround: Step5 (polling IOIRQS) */ 489 - if (int_stat & INTSTS_IOIRQS) 490 - return 1; 491 - 492 - return 0; 493 - } 494 - 495 - static u8 scc_udma_filter(ide_drive_t *drive) 496 - { 497 - ide_hwif_t *hwif = drive->hwif; 498 - u8 mask = hwif->ultra_mask; 499 - 500 - /* errata A308 workaround: limit non ide_disk drive to UDMA4 */ 501 - if ((drive->media != ide_disk) && (mask & 0xE0)) { 502 - printk(KERN_INFO "%s: limit %s to UDMA4\n", 503 - SCC_PATA_NAME, drive->name); 504 - mask = ATA_UDMA4; 505 - } 506 - 507 - return mask; 508 - } 509 - 510 - /** 511 - * setup_mmio_scc - map CTRL/BMID region 512 - * @dev: PCI device we are configuring 513 - * @name: device name 514 - * 515 - */ 516 - 517 - static int setup_mmio_scc (struct pci_dev *dev, const char *name) 518 - { 519 - void __iomem *ctl_addr; 520 - void __iomem *dma_addr; 521 - int i, ret; 522 - 523 - for (i = 0; i < MAX_HWIFS; i++) { 524 - if (scc_ports[i].ctl == 0) 525 - break; 526 - } 527 - if (i >= MAX_HWIFS) 528 - return -ENOMEM; 529 - 530 - ret = pci_request_selected_regions(dev, (1 << 2) - 1, name); 531 - if (ret < 0) { 532 - printk(KERN_ERR "%s: can't reserve resources\n", name); 533 - return ret; 534 - } 535 - 536 - ctl_addr = pci_ioremap_bar(dev, 0); 537 - if 
(!ctl_addr) 538 - goto fail_0; 539 - 540 - dma_addr = pci_ioremap_bar(dev, 1); 541 - if (!dma_addr) 542 - goto fail_1; 543 - 544 - pci_set_master(dev); 545 - scc_ports[i].ctl = (unsigned long)ctl_addr; 546 - scc_ports[i].dma = (unsigned long)dma_addr; 547 - pci_set_drvdata(dev, (void *) &scc_ports[i]); 548 - 549 - return 1; 550 - 551 - fail_1: 552 - iounmap(ctl_addr); 553 - fail_0: 554 - return -ENOMEM; 555 - } 556 - 557 - static int scc_ide_setup_pci_device(struct pci_dev *dev, 558 - const struct ide_port_info *d) 559 - { 560 - struct scc_ports *ports = pci_get_drvdata(dev); 561 - struct ide_host *host; 562 - struct ide_hw hw, *hws[] = { &hw }; 563 - int i, rc; 564 - 565 - memset(&hw, 0, sizeof(hw)); 566 - for (i = 0; i <= 8; i++) 567 - hw.io_ports_array[i] = ports->dma + 0x20 + i * 4; 568 - hw.irq = dev->irq; 569 - hw.dev = &dev->dev; 570 - 571 - rc = ide_host_add(d, hws, 1, &host); 572 - if (rc) 573 - return rc; 574 - 575 - ports->host = host; 576 - 577 - return 0; 578 - } 579 - 580 - /** 581 - * init_setup_scc - set up an SCC PATA Controller 582 - * @dev: PCI device 583 - * @d: IDE port info 584 - * 585 - * Perform the initial set up for this device. 
586 - */ 587 - 588 - static int init_setup_scc(struct pci_dev *dev, const struct ide_port_info *d) 589 - { 590 - unsigned long ctl_base; 591 - unsigned long dma_base; 592 - unsigned long cckctrl_port; 593 - unsigned long intmask_port; 594 - unsigned long mode_port; 595 - unsigned long ecmode_port; 596 - u32 reg = 0; 597 - struct scc_ports *ports; 598 - int rc; 599 - 600 - rc = pci_enable_device(dev); 601 - if (rc) 602 - goto end; 603 - 604 - rc = setup_mmio_scc(dev, d->name); 605 - if (rc < 0) 606 - goto end; 607 - 608 - ports = pci_get_drvdata(dev); 609 - ctl_base = ports->ctl; 610 - dma_base = ports->dma; 611 - cckctrl_port = ctl_base + 0xff0; 612 - intmask_port = dma_base + 0x010; 613 - mode_port = ctl_base + 0x024; 614 - ecmode_port = ctl_base + 0xf00; 615 - 616 - /* controller initialization */ 617 - reg = 0; 618 - out_be32((void*)cckctrl_port, reg); 619 - reg |= CCKCTRL_ATACLKOEN; 620 - out_be32((void*)cckctrl_port, reg); 621 - reg |= CCKCTRL_LCLKEN | CCKCTRL_OCLKEN; 622 - out_be32((void*)cckctrl_port, reg); 623 - reg |= CCKCTRL_CRST; 624 - out_be32((void*)cckctrl_port, reg); 625 - 626 - for (;;) { 627 - reg = in_be32((void*)cckctrl_port); 628 - if (reg & CCKCTRL_CRST) 629 - break; 630 - udelay(5000); 631 - } 632 - 633 - reg |= CCKCTRL_ATARESET; 634 - out_be32((void*)cckctrl_port, reg); 635 - 636 - out_be32((void*)ecmode_port, ECMODE_VALUE); 637 - out_be32((void*)mode_port, MODE_JCUSFEN); 638 - out_be32((void*)intmask_port, INTMASK_MSK); 639 - 640 - rc = scc_ide_setup_pci_device(dev, d); 641 - 642 - end: 643 - return rc; 644 - } 645 - 646 - static void scc_tf_load(ide_drive_t *drive, struct ide_taskfile *tf, u8 valid) 647 - { 648 - struct ide_io_ports *io_ports = &drive->hwif->io_ports; 649 - 650 - if (valid & IDE_VALID_FEATURE) 651 - scc_ide_outb(tf->feature, io_ports->feature_addr); 652 - if (valid & IDE_VALID_NSECT) 653 - scc_ide_outb(tf->nsect, io_ports->nsect_addr); 654 - if (valid & IDE_VALID_LBAL) 655 - scc_ide_outb(tf->lbal, io_ports->lbal_addr); 656 
- if (valid & IDE_VALID_LBAM) 657 - scc_ide_outb(tf->lbam, io_ports->lbam_addr); 658 - if (valid & IDE_VALID_LBAH) 659 - scc_ide_outb(tf->lbah, io_ports->lbah_addr); 660 - if (valid & IDE_VALID_DEVICE) 661 - scc_ide_outb(tf->device, io_ports->device_addr); 662 - } 663 - 664 - static void scc_tf_read(ide_drive_t *drive, struct ide_taskfile *tf, u8 valid) 665 - { 666 - struct ide_io_ports *io_ports = &drive->hwif->io_ports; 667 - 668 - if (valid & IDE_VALID_ERROR) 669 - tf->error = scc_ide_inb(io_ports->feature_addr); 670 - if (valid & IDE_VALID_NSECT) 671 - tf->nsect = scc_ide_inb(io_ports->nsect_addr); 672 - if (valid & IDE_VALID_LBAL) 673 - tf->lbal = scc_ide_inb(io_ports->lbal_addr); 674 - if (valid & IDE_VALID_LBAM) 675 - tf->lbam = scc_ide_inb(io_ports->lbam_addr); 676 - if (valid & IDE_VALID_LBAH) 677 - tf->lbah = scc_ide_inb(io_ports->lbah_addr); 678 - if (valid & IDE_VALID_DEVICE) 679 - tf->device = scc_ide_inb(io_ports->device_addr); 680 - } 681 - 682 - static void scc_input_data(ide_drive_t *drive, struct ide_cmd *cmd, 683 - void *buf, unsigned int len) 684 - { 685 - unsigned long data_addr = drive->hwif->io_ports.data_addr; 686 - 687 - len++; 688 - 689 - if (drive->io_32bit) { 690 - scc_ide_insl(data_addr, buf, len / 4); 691 - 692 - if ((len & 3) >= 2) 693 - scc_ide_insw(data_addr, (u8 *)buf + (len & ~3), 1); 694 - } else 695 - scc_ide_insw(data_addr, buf, len / 2); 696 - } 697 - 698 - static void scc_output_data(ide_drive_t *drive, struct ide_cmd *cmd, 699 - void *buf, unsigned int len) 700 - { 701 - unsigned long data_addr = drive->hwif->io_ports.data_addr; 702 - 703 - len++; 704 - 705 - if (drive->io_32bit) { 706 - scc_ide_outsl(data_addr, buf, len / 4); 707 - 708 - if ((len & 3) >= 2) 709 - scc_ide_outsw(data_addr, (u8 *)buf + (len & ~3), 1); 710 - } else 711 - scc_ide_outsw(data_addr, buf, len / 2); 712 - } 713 - 714 - /** 715 - * init_mmio_iops_scc - set up the iops for MMIO 716 - * @hwif: interface to set up 717 - * 718 - */ 719 - 720 - static void 
init_mmio_iops_scc(ide_hwif_t *hwif) 721 - { 722 - struct pci_dev *dev = to_pci_dev(hwif->dev); 723 - struct scc_ports *ports = pci_get_drvdata(dev); 724 - unsigned long dma_base = ports->dma; 725 - 726 - ide_set_hwifdata(hwif, ports); 727 - 728 - hwif->dma_base = dma_base; 729 - hwif->config_data = ports->ctl; 730 - } 731 - 732 - /** 733 - * init_iops_scc - set up iops 734 - * @hwif: interface to set up 735 - * 736 - * Do the basic setup for the SCC hardware interface 737 - * and then do the MMIO setup. 738 - */ 739 - 740 - static void init_iops_scc(ide_hwif_t *hwif) 741 - { 742 - struct pci_dev *dev = to_pci_dev(hwif->dev); 743 - 744 - hwif->hwif_data = NULL; 745 - if (pci_get_drvdata(dev) == NULL) 746 - return; 747 - init_mmio_iops_scc(hwif); 748 - } 749 - 750 - static int scc_init_dma(ide_hwif_t *hwif, const struct ide_port_info *d) 751 - { 752 - return ide_allocate_dma_engine(hwif); 753 - } 754 - 755 - static u8 scc_cable_detect(ide_hwif_t *hwif) 756 - { 757 - return ATA_CBL_PATA80; 758 - } 759 - 760 - /** 761 - * init_hwif_scc - set up hwif 762 - * @hwif: interface to set up 763 - * 764 - * We do the basic set up of the interface structure. The SCC 765 - * requires several custom handlers so we override the default 766 - * ide DMA handlers appropriately. 
767 - */ 768 - 769 - static void init_hwif_scc(ide_hwif_t *hwif) 770 - { 771 - /* PTERADD */ 772 - out_be32((void __iomem *)(hwif->dma_base + 0x018), hwif->dmatable_dma); 773 - 774 - if (in_be32((void __iomem *)(hwif->config_data + 0xff0)) & CCKCTRL_ATACLKOEN) 775 - hwif->ultra_mask = ATA_UDMA6; /* 133MHz */ 776 - else 777 - hwif->ultra_mask = ATA_UDMA5; /* 100MHz */ 778 - } 779 - 780 - static const struct ide_tp_ops scc_tp_ops = { 781 - .exec_command = scc_exec_command, 782 - .read_status = scc_read_status, 783 - .read_altstatus = scc_read_altstatus, 784 - .write_devctl = scc_write_devctl, 785 - 786 - .dev_select = ide_dev_select, 787 - .tf_load = scc_tf_load, 788 - .tf_read = scc_tf_read, 789 - 790 - .input_data = scc_input_data, 791 - .output_data = scc_output_data, 792 - }; 793 - 794 - static const struct ide_port_ops scc_port_ops = { 795 - .set_pio_mode = scc_set_pio_mode, 796 - .set_dma_mode = scc_set_dma_mode, 797 - .udma_filter = scc_udma_filter, 798 - .cable_detect = scc_cable_detect, 799 - }; 800 - 801 - static const struct ide_dma_ops scc_dma_ops = { 802 - .dma_host_set = scc_dma_host_set, 803 - .dma_setup = scc_dma_setup, 804 - .dma_start = scc_dma_start, 805 - .dma_end = scc_dma_end, 806 - .dma_test_irq = scc_dma_test_irq, 807 - .dma_lost_irq = ide_dma_lost_irq, 808 - .dma_timer_expiry = ide_dma_sff_timer_expiry, 809 - .dma_sff_read_status = scc_dma_sff_read_status, 810 - }; 811 - 812 - static const struct ide_port_info scc_chipset = { 813 - .name = "sccIDE", 814 - .init_iops = init_iops_scc, 815 - .init_dma = scc_init_dma, 816 - .init_hwif = init_hwif_scc, 817 - .tp_ops = &scc_tp_ops, 818 - .port_ops = &scc_port_ops, 819 - .dma_ops = &scc_dma_ops, 820 - .host_flags = IDE_HFLAG_SINGLE, 821 - .irq_flags = IRQF_SHARED, 822 - .pio_mask = ATA_PIO4, 823 - .chipset = ide_pci, 824 - }; 825 - 826 - /** 827 - * scc_init_one - pci layer discovery entry 828 - * @dev: PCI device 829 - * @id: ident table entry 830 - * 831 - * Called by the PCI code when it finds an 
SCC PATA controller. 832 - * We then use the IDE PCI generic helper to do most of the work. 833 - */ 834 - 835 - static int scc_init_one(struct pci_dev *dev, const struct pci_device_id *id) 836 - { 837 - return init_setup_scc(dev, &scc_chipset); 838 - } 839 - 840 - /** 841 - * scc_remove - pci layer remove entry 842 - * @dev: PCI device 843 - * 844 - * Called by the PCI code when it removes an SCC PATA controller. 845 - */ 846 - 847 - static void scc_remove(struct pci_dev *dev) 848 - { 849 - struct scc_ports *ports = pci_get_drvdata(dev); 850 - struct ide_host *host = ports->host; 851 - 852 - ide_host_remove(host); 853 - 854 - iounmap((void*)ports->dma); 855 - iounmap((void*)ports->ctl); 856 - pci_release_selected_regions(dev, (1 << 2) - 1); 857 - memset(ports, 0, sizeof(*ports)); 858 - } 859 - 860 - static const struct pci_device_id scc_pci_tbl[] = { 861 - { PCI_VDEVICE(TOSHIBA_2, PCI_DEVICE_ID_TOSHIBA_SCC_ATA), 0 }, 862 - { 0, }, 863 - }; 864 - MODULE_DEVICE_TABLE(pci, scc_pci_tbl); 865 - 866 - static struct pci_driver scc_pci_driver = { 867 - .name = "SCC IDE", 868 - .id_table = scc_pci_tbl, 869 - .probe = scc_init_one, 870 - .remove = scc_remove, 871 - }; 872 - 873 - static int __init scc_ide_init(void) 874 - { 875 - return ide_pci_register_driver(&scc_pci_driver); 876 - } 877 - 878 - static void __exit scc_ide_exit(void) 879 - { 880 - pci_unregister_driver(&scc_pci_driver); 881 - } 882 - 883 - module_init(scc_ide_init); 884 - module_exit(scc_ide_exit); 885 - 886 - MODULE_DESCRIPTION("PCI driver module for Toshiba SCC IDE"); 887 - MODULE_LICENSE("GPL");
+18 -3
drivers/iio/accel/mma9551_core.c
··· 389 389 { 390 390 int ret, i; 391 391 int len_words = len / sizeof(u16); 392 - __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS]; 392 + __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS / 2]; 393 + 394 + if (len_words > ARRAY_SIZE(be_buf)) { 395 + dev_err(&client->dev, "Invalid buffer size %d\n", len); 396 + return -EINVAL; 397 + } 393 398 394 399 ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_CONFIG, 395 400 reg, NULL, 0, (u8 *) be_buf, len); ··· 429 424 { 430 425 int ret, i; 431 426 int len_words = len / sizeof(u16); 432 - __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS]; 427 + __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS / 2]; 428 + 429 + if (len_words > ARRAY_SIZE(be_buf)) { 430 + dev_err(&client->dev, "Invalid buffer size %d\n", len); 431 + return -EINVAL; 432 + } 433 433 434 434 ret = mma9551_transfer(client, app_id, MMA9551_CMD_READ_STATUS, 435 435 reg, NULL, 0, (u8 *) be_buf, len); ··· 469 459 { 470 460 int i; 471 461 int len_words = len / sizeof(u16); 472 - __be16 be_buf[MMA9551_MAX_MAILBOX_DATA_REGS]; 462 + __be16 be_buf[(MMA9551_MAX_MAILBOX_DATA_REGS - 1) / 2]; 463 + 464 + if (len_words > ARRAY_SIZE(be_buf)) { 465 + dev_err(&client->dev, "Invalid buffer size %d\n", len); 466 + return -EINVAL; 467 + } 473 468 474 469 for (i = 0; i < len_words; i++) 475 470 be_buf[i] = cpu_to_be16(buf[i]);
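The hunks above halve the scratch buffer (the mailbox holds byte registers, so `len_words` counts 16-bit words) and reject any request that would overrun it before touching the buffer. A userspace sketch of that guard; the size, names, and wire format here are illustrative, not the driver's real API:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

#define MAILBOX_DATA_REGS 16  /* hypothetical stand-in for the driver's constant */

/* Validate the requested length against the scratch buffer *first*,
 * mirroring the ARRAY_SIZE() check added in mma9551_core.c. */
static int read_words(const uint8_t *raw, size_t len, uint16_t *out)
{
	uint16_t buf[MAILBOX_DATA_REGS / 2];
	size_t len_words = len / sizeof(uint16_t);
	size_t i;

	if (len_words > sizeof(buf) / sizeof(buf[0]))
		return -EINVAL;	/* caller asked for more than fits */

	for (i = 0; i < len_words; i++)	/* big-endian wire -> host order */
		buf[i] = (uint16_t)(raw[2 * i] << 8 | raw[2 * i + 1]);

	for (i = 0; i < len_words; i++)
		out[i] = buf[i];
	return 0;
}
```

The point of the fix is ordering: the bound is checked before any transfer fills the on-stack array, so a bad `len` can no longer smash the stack.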
+10 -8
drivers/iio/accel/mma9553.c
··· 54 54 #define MMA9553_MASK_CONF_STEPCOALESCE GENMASK(7, 0) 55 55 56 56 #define MMA9553_REG_CONF_ACTTHD 0x0E 57 + #define MMA9553_MAX_ACTTHD GENMASK(15, 0) 57 58 58 59 /* Pedometer status registers (R-only) */ 59 60 #define MMA9553_REG_STATUS 0x00 ··· 317 316 static int mma9553_read_activity_stepcnt(struct mma9553_data *data, 318 317 u8 *activity, u16 *stepcnt) 319 318 { 320 - u32 status_stepcnt; 321 - u16 status; 319 + u16 buf[2]; 322 320 int ret; 323 321 324 322 ret = mma9551_read_status_words(data->client, MMA9551_APPID_PEDOMETER, 325 - MMA9553_REG_STATUS, sizeof(u32), 326 - (u16 *) &status_stepcnt); 323 + MMA9553_REG_STATUS, sizeof(u32), buf); 327 324 if (ret < 0) { 328 325 dev_err(&data->client->dev, 329 326 "error reading status and stepcnt\n"); 330 327 return ret; 331 328 } 332 329 333 - status = status_stepcnt & MMA9553_MASK_CONF_WORD; 334 - *activity = mma9553_get_bits(status, MMA9553_MASK_STATUS_ACTIVITY); 335 - *stepcnt = status_stepcnt >> 16; 330 + *activity = mma9553_get_bits(buf[0], MMA9553_MASK_STATUS_ACTIVITY); 331 + *stepcnt = buf[1]; 336 332 337 333 return 0; 338 334 } ··· 870 872 case IIO_EV_INFO_PERIOD: 871 873 switch (chan->type) { 872 874 case IIO_ACTIVITY: 875 + if (val < 0 || val > MMA9553_ACTIVITY_THD_TO_SEC( 876 + MMA9553_MAX_ACTTHD)) 877 + return -EINVAL; 873 878 mutex_lock(&data->mutex); 874 879 ret = mma9553_set_config(data, MMA9553_REG_CONF_ACTTHD, 875 880 &data->conf.actthd, ··· 972 971 .modified = 1, \ 973 972 .channel2 = _chan2, \ 974 973 .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED), \ 975 - .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_CALIBHEIGHT), \ 974 + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_CALIBHEIGHT) | \ 975 + BIT(IIO_CHAN_INFO_ENABLE), \ 976 976 .event_spec = mma9553_activity_events, \ 977 977 .num_event_specs = ARRAY_SIZE(mma9553_activity_events), \ 978 978 .ext_info = mma9553_ext_info, \
+1
drivers/iio/accel/st_accel_core.c
··· 546 546 547 547 indio_dev->modes = INDIO_DIRECT_MODE; 548 548 indio_dev->info = &accel_info; 549 + mutex_init(&adata->tb.buf_lock); 549 550 550 551 st_sensors_power_enable(indio_dev); 551 552
+6 -6
drivers/iio/adc/axp288_adc.c
··· 53 53 .channel = 0, 54 54 .address = AXP288_TS_ADC_H, 55 55 .datasheet_name = "TS_PIN", 56 + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 56 57 }, { 57 58 .indexed = 1, 58 59 .type = IIO_TEMP, 59 60 .channel = 1, 60 61 .address = AXP288_PMIC_ADC_H, 61 62 .datasheet_name = "PMIC_TEMP", 63 + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 62 64 }, { 63 65 .indexed = 1, 64 66 .type = IIO_TEMP, 65 67 .channel = 2, 66 68 .address = AXP288_GP_ADC_H, 67 69 .datasheet_name = "GPADC", 70 + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 68 71 }, { 69 72 .indexed = 1, 70 73 .type = IIO_CURRENT, 71 74 .channel = 3, 72 75 .address = AXP20X_BATT_CHRG_I_H, 73 76 .datasheet_name = "BATT_CHG_I", 74 - .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED), 77 + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 75 78 }, { 76 79 .indexed = 1, 77 80 .type = IIO_CURRENT, 78 81 .channel = 4, 79 82 .address = AXP20X_BATT_DISCHRG_I_H, 80 83 .datasheet_name = "BATT_DISCHRG_I", 81 - .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED), 84 + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 82 85 }, { 83 86 .indexed = 1, 84 87 .type = IIO_VOLTAGE, 85 88 .channel = 5, 86 89 .address = AXP20X_BATT_V_H, 87 90 .datasheet_name = "BATT_V", 88 - .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED), 91 + .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 89 92 }, 90 93 }; 91 94 ··· 153 150 if (axp288_adc_set_ts(info->regmap, AXP288_ADC_TS_PIN_ON, 154 151 chan->address)) 155 152 dev_err(&indio_dev->dev, "TS pin restore\n"); 156 - break; 157 - case IIO_CHAN_INFO_PROCESSED: 158 - ret = axp288_adc_read_channel(val, chan->address, info->regmap); 159 153 break; 160 154 default: 161 155 ret = -EINVAL;
+34 -26
drivers/iio/adc/cc10001_adc.c
··· 35 35 #define CC10001_ADC_EOC_SET BIT(0) 36 36 37 37 #define CC10001_ADC_CHSEL_SAMPLED 0x0c 38 - #define CC10001_ADC_POWER_UP 0x10 39 - #define CC10001_ADC_POWER_UP_SET BIT(0) 38 + #define CC10001_ADC_POWER_DOWN 0x10 39 + #define CC10001_ADC_POWER_DOWN_SET BIT(0) 40 + 40 41 #define CC10001_ADC_DEBUG 0x14 41 42 #define CC10001_ADC_DATA_COUNT 0x20 42 43 ··· 63 62 u16 *buf; 64 63 65 64 struct mutex lock; 66 - unsigned long channel_map; 67 65 unsigned int start_delay_ns; 68 66 unsigned int eoc_delay_ns; 69 67 }; ··· 79 79 return readl(adc_dev->reg_base + reg); 80 80 } 81 81 82 + static void cc10001_adc_power_up(struct cc10001_adc_device *adc_dev) 83 + { 84 + cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN, 0); 85 + ndelay(adc_dev->start_delay_ns); 86 + } 87 + 88 + static void cc10001_adc_power_down(struct cc10001_adc_device *adc_dev) 89 + { 90 + cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_DOWN, 91 + CC10001_ADC_POWER_DOWN_SET); 92 + } 93 + 82 94 static void cc10001_adc_start(struct cc10001_adc_device *adc_dev, 83 95 unsigned int channel) 84 96 { ··· 100 88 val = (channel & CC10001_ADC_CH_MASK) | CC10001_ADC_MODE_SINGLE_CONV; 101 89 cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val); 102 90 91 + udelay(1); 103 92 val = cc10001_adc_read_reg(adc_dev, CC10001_ADC_CONFIG); 104 93 val = val | CC10001_ADC_START_CONV; 105 94 cc10001_adc_write_reg(adc_dev, CC10001_ADC_CONFIG, val); ··· 142 129 struct iio_dev *indio_dev; 143 130 unsigned int delay_ns; 144 131 unsigned int channel; 132 + unsigned int scan_idx; 145 133 bool sample_invalid; 146 134 u16 *data; 147 135 int i; ··· 153 139 154 140 mutex_lock(&adc_dev->lock); 155 141 156 - cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 157 - CC10001_ADC_POWER_UP_SET); 158 - 159 - /* Wait for 8 (6+2) clock cycles before activating START */ 160 - ndelay(adc_dev->start_delay_ns); 142 + cc10001_adc_power_up(adc_dev); 161 143 162 144 /* Calculate delay step for eoc and sampled data */ 163 145 delay_ns = 
adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT; 164 146 165 147 i = 0; 166 148 sample_invalid = false; 167 - for_each_set_bit(channel, indio_dev->active_scan_mask, 149 + for_each_set_bit(scan_idx, indio_dev->active_scan_mask, 168 150 indio_dev->masklength) { 169 151 152 + channel = indio_dev->channels[scan_idx].channel; 170 153 cc10001_adc_start(adc_dev, channel); 171 154 172 155 data[i] = cc10001_adc_poll_done(indio_dev, channel, delay_ns); ··· 177 166 } 178 167 179 168 done: 180 - cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0); 169 + cc10001_adc_power_down(adc_dev); 181 170 182 171 mutex_unlock(&adc_dev->lock); 183 172 ··· 196 185 unsigned int delay_ns; 197 186 u16 val; 198 187 199 - cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 200 - CC10001_ADC_POWER_UP_SET); 201 - 202 - /* Wait for 8 (6+2) clock cycles before activating START */ 203 - ndelay(adc_dev->start_delay_ns); 188 + cc10001_adc_power_up(adc_dev); 204 189 205 190 /* Calculate delay step for eoc and sampled data */ 206 191 delay_ns = adc_dev->eoc_delay_ns / CC10001_MAX_POLL_COUNT; ··· 205 198 206 199 val = cc10001_adc_poll_done(indio_dev, chan->channel, delay_ns); 207 200 208 - cc10001_adc_write_reg(adc_dev, CC10001_ADC_POWER_UP, 0); 201 + cc10001_adc_power_down(adc_dev); 209 202 210 203 return val; 211 204 } ··· 231 224 232 225 case IIO_CHAN_INFO_SCALE: 233 226 ret = regulator_get_voltage(adc_dev->reg); 234 - if (ret) 227 + if (ret < 0) 235 228 return ret; 236 229 237 230 *val = ret / 1000; ··· 262 255 .update_scan_mode = &cc10001_update_scan_mode, 263 256 }; 264 257 265 - static int cc10001_adc_channel_init(struct iio_dev *indio_dev) 258 + static int cc10001_adc_channel_init(struct iio_dev *indio_dev, 259 + unsigned long channel_map) 266 260 { 267 - struct cc10001_adc_device *adc_dev = iio_priv(indio_dev); 268 261 struct iio_chan_spec *chan_array, *timestamp; 269 262 unsigned int bit, idx = 0; 270 263 271 - indio_dev->num_channels = bitmap_weight(&adc_dev->channel_map, 272 - 
CC10001_ADC_NUM_CHANNELS); 264 + indio_dev->num_channels = bitmap_weight(&channel_map, 265 + CC10001_ADC_NUM_CHANNELS) + 1; 273 266 274 - chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels + 1, 267 + chan_array = devm_kcalloc(&indio_dev->dev, indio_dev->num_channels, 275 268 sizeof(struct iio_chan_spec), 276 269 GFP_KERNEL); 277 270 if (!chan_array) 278 271 return -ENOMEM; 279 272 280 - for_each_set_bit(bit, &adc_dev->channel_map, CC10001_ADC_NUM_CHANNELS) { 273 + for_each_set_bit(bit, &channel_map, CC10001_ADC_NUM_CHANNELS) { 281 274 struct iio_chan_spec *chan = &chan_array[idx]; 282 275 283 276 chan->type = IIO_VOLTAGE; ··· 312 305 unsigned long adc_clk_rate; 313 306 struct resource *res; 314 307 struct iio_dev *indio_dev; 308 + unsigned long channel_map; 315 309 int ret; 316 310 317 311 indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*adc_dev)); ··· 321 313 322 314 adc_dev = iio_priv(indio_dev); 323 315 324 - adc_dev->channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0); 316 + channel_map = GENMASK(CC10001_ADC_NUM_CHANNELS - 1, 0); 325 317 if (!of_property_read_u32(node, "adc-reserved-channels", &ret)) 326 - adc_dev->channel_map &= ~ret; 318 + channel_map &= ~ret; 327 319 328 320 adc_dev->reg = devm_regulator_get(&pdev->dev, "vref"); 329 321 if (IS_ERR(adc_dev->reg)) ··· 369 361 adc_dev->start_delay_ns = adc_dev->eoc_delay_ns * CC10001_WAIT_CYCLES; 370 362 371 363 /* Setup the ADC channels available on the device */ 372 - ret = cc10001_adc_channel_init(indio_dev); 364 + ret = cc10001_adc_channel_init(indio_dev, channel_map); 373 365 if (ret < 0) 374 366 goto err_disable_clk; 375 367
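Besides the scan-index/channel mapping, note the one-character fix `if (ret)` → `if (ret < 0)`: `regulator_get_voltage()` returns the voltage in microvolts (a positive number) on success, so the old test treated every successful reading as an error. A minimal userspace sketch of the corrected pattern, with a hypothetical stand-in for the regulator call:

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for regulator_get_voltage(): microvolts (positive)
 * on success, negative errno on failure. */
static int get_vref_uv(int fail)
{
	return fail ? -ENODEV : 3300000;
}

static int read_scale_mv(int fail, int *scale_mv)
{
	int ret = get_vref_uv(fail);

	if (ret < 0)		/* the corrected check from this hunk */
		return ret;
	*scale_mv = ret / 1000;
	return 0;
}
```

With `if (ret)` the success path above would be unreachable for any non-zero reference voltage.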
+3 -3
drivers/iio/adc/mcp320x.c
··· 60 60 struct spi_message msg; 61 61 struct spi_transfer transfer[2]; 62 62 63 - u8 tx_buf; 64 - u8 rx_buf[2]; 65 - 66 63 struct regulator *reg; 67 64 struct mutex lock; 68 65 const struct mcp320x_chip_info *chip_info; 66 + 67 + u8 tx_buf ____cacheline_aligned; 68 + u8 rx_buf[2]; 69 69 }; 70 70 71 71 static int mcp320x_channel_to_tx_data(int device_index,
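Moving `tx_buf`/`rx_buf` to the end of the struct with `____cacheline_aligned` keeps the DMA-visible transfer buffers from sharing a cache line with fields the CPU updates concurrently. A userspace sketch of the resulting layout using C11 `alignas`; the 64-byte line size and struct fields are assumptions, not the driver's real definition:

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

#define CACHELINE_BYTES 64	/* assumed; the kernel macro uses the arch's size */

struct mcp_like_state {
	void *msg;		/* fields the CPU may touch while DMA runs */
	int lock;

	/* DMA buffers last, starting on a fresh cache line, so cache
	 * maintenance on them cannot clobber the fields above. */
	alignas(CACHELINE_BYTES) uint8_t tx_buf;
	uint8_t rx_buf[2];	/* shares the tail line only with tx_buf */
};
```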
+4 -3
drivers/iio/adc/qcom-spmi-vadc.c
··· 18 18 #include <linux/iio/iio.h> 19 19 #include <linux/interrupt.h> 20 20 #include <linux/kernel.h> 21 + #include <linux/math64.h> 21 22 #include <linux/module.h> 22 23 #include <linux/of.h> 23 24 #include <linux/platform_device.h> ··· 472 471 const struct vadc_channel_prop *prop, u16 adc_code) 473 472 { 474 473 const struct vadc_prescale_ratio *prescale; 475 - s32 voltage; 474 + s64 voltage; 476 475 477 476 voltage = adc_code - vadc->graph[prop->calibration].gnd; 478 477 voltage *= vadc->graph[prop->calibration].dx; 479 - voltage = voltage / vadc->graph[prop->calibration].dy; 478 + voltage = div64_s64(voltage, vadc->graph[prop->calibration].dy); 480 479 481 480 if (prop->calibration == VADC_CALIB_ABSOLUTE) 482 481 voltage += vadc->graph[prop->calibration].dx; ··· 488 487 489 488 voltage = voltage * prescale->den; 490 489 491 - return voltage / prescale->num; 490 + return div64_s64(voltage, prescale->num); 492 491 } 493 492 494 493 static int vadc_decimation_from_dt(u32 value)
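The vadc hunk widens the intermediate to `s64` and switches to `div64_s64()`: the product `(adc_code - gnd) * dx` can exceed 32 bits long before the division brings it back into range, and 32-bit kernels cannot use plain `/` on 64-bit operands anyway (hence `<linux/math64.h>`). A hedged userspace equivalent with `int64_t`; the constants below are illustrative, not real calibration data:

```c
#include <assert.h>
#include <stdint.h>

/* Widen first, divide second -- the 32-bit product would overflow. */
static int64_t scale_code(int32_t code, int32_t gnd, int32_t dx, int32_t dy)
{
	int64_t v = (int64_t)(code - gnd) * dx;

	return v / dy;	/* userspace "/"; the kernel needs div64_s64() here */
}
```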
+3 -2
drivers/iio/adc/xilinx-xadc-core.c
··· 856 856 switch (chan->address) { 857 857 case XADC_REG_VCCINT: 858 858 case XADC_REG_VCCAUX: 859 + case XADC_REG_VREFP: 859 860 case XADC_REG_VCCBRAM: 860 861 case XADC_REG_VCCPINT: 861 862 case XADC_REG_VCCPAUX: ··· 997 996 .num_event_specs = (_alarm) ? ARRAY_SIZE(xadc_voltage_events) : 0, \ 998 997 .scan_index = (_scan_index), \ 999 998 .scan_type = { \ 1000 - .sign = 'u', \ 999 + .sign = ((_addr) == XADC_REG_VREFN) ? 's' : 'u', \ 1001 1000 .realbits = 12, \ 1002 1001 .storagebits = 16, \ 1003 1002 .shift = 4, \ ··· 1009 1008 static const struct iio_chan_spec xadc_channels[] = { 1010 1009 XADC_CHAN_TEMP(0, 8, XADC_REG_TEMP), 1011 1010 XADC_CHAN_VOLTAGE(0, 9, XADC_REG_VCCINT, "vccint", true), 1012 - XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCINT, "vccaux", true), 1011 + XADC_CHAN_VOLTAGE(1, 10, XADC_REG_VCCAUX, "vccaux", true), 1013 1012 XADC_CHAN_VOLTAGE(2, 14, XADC_REG_VCCBRAM, "vccbram", true), 1014 1013 XADC_CHAN_VOLTAGE(3, 5, XADC_REG_VCCPINT, "vccpint", true), 1015 1014 XADC_CHAN_VOLTAGE(4, 6, XADC_REG_VCCPAUX, "vccpaux", true),
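Flipping VREFN's `scan_type` sign to `'s'` matters because that channel can legitimately read negative; consumers then sign-extend the 12-bit sample that sits left-justified in a 16-bit word (`realbits` 12, `storagebits` 16, `shift` 4, per the macro above). A sketch of both extractions, assuming arithmetic right shift of signed values as on gcc/clang:

```c
#include <assert.h>
#include <stdint.h>

/* Signed channel (e.g. VREFN with sign 's'): cast then arithmetic shift. */
static int32_t sample_signed(uint16_t raw)
{
	return (int16_t)raw >> 4;
}

/* Unsigned channel: plain logical shift. */
static uint32_t sample_unsigned(uint16_t raw)
{
	return raw >> 4;
}
```

With sign `'u'`, a raw `0xFFF0` would have been reported as 4095 instead of -1.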
+3 -3
drivers/iio/adc/xilinx-xadc.h
··· 145 145 #define XADC_REG_MAX_VCCPINT 0x28 146 146 #define XADC_REG_MAX_VCCPAUX 0x29 147 147 #define XADC_REG_MAX_VCCO_DDR 0x2a 148 - #define XADC_REG_MIN_VCCPINT 0x2b 149 - #define XADC_REG_MIN_VCCPAUX 0x2c 150 - #define XADC_REG_MIN_VCCO_DDR 0x2d 148 + #define XADC_REG_MIN_VCCPINT 0x2c 149 + #define XADC_REG_MIN_VCCPAUX 0x2d 150 + #define XADC_REG_MIN_VCCO_DDR 0x2e 151 151 152 152 #define XADC_REG_CONF0 0x40 153 153 #define XADC_REG_CONF1 0x41
-2
drivers/iio/common/st_sensors/st_sensors_core.c
··· 304 304 struct st_sensors_platform_data *of_pdata; 305 305 int err = 0; 306 306 307 - mutex_init(&sdata->tb.buf_lock); 308 - 309 307 /* If OF/DT pdata exists, it will take precedence of anything else */ 310 308 of_pdata = st_sensors_of_probe(indio_dev->dev.parent, pdata); 311 309 if (of_pdata)
+1
drivers/iio/gyro/st_gyro_core.c
··· 400 400 401 401 indio_dev->modes = INDIO_DIRECT_MODE; 402 402 indio_dev->info = &gyro_info; 403 + mutex_init(&gdata->tb.buf_lock); 403 404 404 405 st_sensors_power_enable(indio_dev); 405 406
+2 -1
drivers/iio/kfifo_buf.c
··· 38 38 kfifo_free(&buf->kf); 39 39 ret = __iio_allocate_kfifo(buf, buf->buffer.bytes_per_datum, 40 40 buf->buffer.length); 41 - buf->update_needed = false; 41 + if (ret >= 0) 42 + buf->update_needed = false; 42 43 } else { 43 44 kfifo_reset_out(&buf->kf); 44 45 }
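The kfifo_buf hunk is a small ordering fix with a general shape: only clear the "update needed" flag when the reallocation actually succeeded, so a failed attempt is retried instead of leaving a stale buffer in use. A minimal sketch of the pattern (the struct and callbacks are made up for illustration):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

struct buf_state {
	bool update_needed;
};

static int alloc_ok(void)   { return 0; }
static int alloc_fail(void) { return -ENOMEM; }

static int request_update(struct buf_state *b, int (*realloc_fn)(void))
{
	int ret = 0;

	if (b->update_needed) {
		ret = realloc_fn();
		if (ret >= 0)
			b->update_needed = false;	/* only forget on success */
	}
	return ret;
}
```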
+5 -7
drivers/iio/light/hid-sensor-prox.c
··· 43 43 static const struct iio_chan_spec prox_channels[] = { 44 44 { 45 45 .type = IIO_PROXIMITY, 46 - .modified = 1, 47 - .channel2 = IIO_NO_MOD, 48 46 .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 49 47 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) | 50 48 BIT(IIO_CHAN_INFO_SCALE) | ··· 251 253 struct iio_dev *indio_dev; 252 254 struct prox_state *prox_state; 253 255 struct hid_sensor_hub_device *hsdev = pdev->dev.platform_data; 254 - struct iio_chan_spec *channels; 255 256 256 257 indio_dev = devm_iio_device_alloc(&pdev->dev, 257 258 sizeof(struct prox_state)); ··· 269 272 return ret; 270 273 } 271 274 272 - channels = kmemdup(prox_channels, sizeof(prox_channels), GFP_KERNEL); 273 - if (!channels) { 275 + indio_dev->channels = kmemdup(prox_channels, sizeof(prox_channels), 276 + GFP_KERNEL); 277 + if (!indio_dev->channels) { 274 278 dev_err(&pdev->dev, "failed to duplicate channels\n"); 275 279 return -ENOMEM; 276 280 } 277 281 278 - ret = prox_parse_report(pdev, hsdev, channels, 282 + ret = prox_parse_report(pdev, hsdev, 283 + (struct iio_chan_spec *)indio_dev->channels, 279 284 HID_USAGE_SENSOR_PROX, prox_state); 280 285 if (ret) { 281 286 dev_err(&pdev->dev, "failed to setup attributes\n"); 282 287 goto error_free_dev_mem; 283 288 } 284 289 285 - indio_dev->channels = channels; 286 290 indio_dev->num_channels = 287 291 ARRAY_SIZE(prox_channels); 288 292 indio_dev->dev.parent = &pdev->dev;
+1
drivers/iio/magnetometer/st_magn_core.c
··· 369 369 370 370 indio_dev->modes = INDIO_DIRECT_MODE; 371 371 indio_dev->info = &magn_info; 372 + mutex_init(&mdata->tb.buf_lock); 372 373 373 374 st_sensors_power_enable(indio_dev); 374 375
+1
drivers/iio/pressure/bmp280.c
··· 172 172 var2 = (((((adc_temp >> 4) - ((s32)le16_to_cpu(buf[T1]))) * 173 173 ((adc_temp >> 4) - ((s32)le16_to_cpu(buf[T1])))) >> 12) * 174 174 ((s32)(s16)le16_to_cpu(buf[T3]))) >> 14; 175 + data->t_fine = var1 + var2; 175 176 176 177 return (data->t_fine * 5 + 128) >> 8; 177 178 }
-2
drivers/iio/pressure/hid-sensor-press.c
··· 47 47 static const struct iio_chan_spec press_channels[] = { 48 48 { 49 49 .type = IIO_PRESSURE, 50 - .modified = 1, 51 - .channel2 = IIO_NO_MOD, 52 50 .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), 53 51 .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_OFFSET) | 54 52 BIT(IIO_CHAN_INFO_SCALE) |
+1
drivers/iio/pressure/st_pressure_core.c
··· 417 417 418 418 indio_dev->modes = INDIO_DIRECT_MODE; 419 419 indio_dev->info = &press_info; 420 + mutex_init(&press_data->tb.buf_lock); 420 421 421 422 st_sensors_power_enable(indio_dev); 422 423
+1 -1
drivers/infiniband/core/iwpm_msg.c
··· 33 33 34 34 #include "iwpm_util.h" 35 35 36 - static const char iwpm_ulib_name[] = "iWarpPortMapperUser"; 36 + static const char iwpm_ulib_name[IWPM_ULIBNAME_SIZE] = "iWarpPortMapperUser"; 37 37 static int iwpm_ulib_version = 3; 38 38 static int iwpm_user_pid = IWPM_PID_UNDEFINED; 39 39 static atomic_t echo_nlmsg_seq;
+8 -8
drivers/infiniband/hw/cxgb4/cm.c
··· 583 583 sizeof(ep->com.mapped_remote_addr)); 584 584 } 585 585 586 - static int get_remote_addr(struct c4iw_ep *ep) 586 + static int get_remote_addr(struct c4iw_ep *parent_ep, struct c4iw_ep *child_ep) 587 587 { 588 588 int ret; 589 589 590 - print_addr(&ep->com, __func__, "get_remote_addr"); 590 + print_addr(&parent_ep->com, __func__, "get_remote_addr parent_ep "); 591 + print_addr(&child_ep->com, __func__, "get_remote_addr child_ep "); 591 592 592 - ret = iwpm_get_remote_info(&ep->com.mapped_local_addr, 593 - &ep->com.mapped_remote_addr, 594 - &ep->com.remote_addr, RDMA_NL_C4IW); 593 + ret = iwpm_get_remote_info(&parent_ep->com.mapped_local_addr, 594 + &child_ep->com.mapped_remote_addr, 595 + &child_ep->com.remote_addr, RDMA_NL_C4IW); 595 596 if (ret) 596 - pr_info(MOD "Unable to find remote peer addr info - err %d\n", 597 - ret); 597 + PDBG("Unable to find remote peer addr info - err %d\n", ret); 598 598 599 599 return ret; 600 600 } ··· 2420 2420 } 2421 2421 memcpy(&child_ep->com.remote_addr, &child_ep->com.mapped_remote_addr, 2422 2422 sizeof(child_ep->com.remote_addr)); 2423 - get_remote_addr(child_ep); 2423 + get_remote_addr(parent_ep, child_ep); 2424 2424 2425 2425 c4iw_get_ep(&parent_ep->com); 2426 2426 child_ep->parent_ep = parent_ep;
+2 -2
drivers/infiniband/hw/cxgb4/device.c
··· 1386 1386 t4_sq_host_wq_pidx(&qp->wq), 1387 1387 t4_sq_wq_size(&qp->wq)); 1388 1388 if (ret) { 1389 - pr_err(KERN_ERR MOD "%s: Fatal error - " 1389 + pr_err(MOD "%s: Fatal error - " 1390 1390 "DB overflow recovery failed - " 1391 1391 "error syncing SQ qid %u\n", 1392 1392 pci_name(ctx->lldi.pdev), qp->wq.sq.qid); ··· 1402 1402 t4_rq_wq_size(&qp->wq)); 1403 1403 1404 1404 if (ret) { 1405 - pr_err(KERN_ERR MOD "%s: Fatal error - " 1405 + pr_err(MOD "%s: Fatal error - " 1406 1406 "DB overflow recovery failed - " 1407 1407 "error syncing RQ qid %u\n", 1408 1408 pci_name(ctx->lldi.pdev), qp->wq.rq.qid);
+2 -2
drivers/infiniband/hw/ehca/ehca_mcast.c
··· 77 77 return -EINVAL; 78 78 } 79 79 80 - memcpy(&my_gid.raw, gid->raw, sizeof(union ib_gid)); 80 + memcpy(&my_gid, gid->raw, sizeof(union ib_gid)); 81 81 82 82 subnet_prefix = be64_to_cpu(my_gid.global.subnet_prefix); 83 83 interface_id = be64_to_cpu(my_gid.global.interface_id); ··· 114 114 return -EINVAL; 115 115 } 116 116 117 - memcpy(&my_gid.raw, gid->raw, sizeof(union ib_gid)); 117 + memcpy(&my_gid, gid->raw, sizeof(union ib_gid)); 118 118 119 119 subnet_prefix = be64_to_cpu(my_gid.global.subnet_prefix); 120 120 interface_id = be64_to_cpu(my_gid.global.interface_id);
+1 -2
drivers/infiniband/hw/mlx4/main.c
··· 1569 1569 MLX4_CMD_TIME_CLASS_B, 1570 1570 MLX4_CMD_WRAPPED); 1571 1571 if (err) 1572 - pr_warn(KERN_WARNING 1573 - "set port %d command failed\n", gw->port); 1572 + pr_warn("set port %d command failed\n", gw->port); 1574 1573 } 1575 1574 1576 1575 mlx4_free_cmd_mailbox(dev, mailbox);
+1 -1
drivers/infiniband/hw/mlx5/qp.c
··· 1392 1392 1393 1393 if (ah->ah_flags & IB_AH_GRH) { 1394 1394 if (ah->grh.sgid_index >= gen->port[port - 1].gid_table_len) { 1395 - pr_err(KERN_ERR "sgid_index (%u) too large. max is %d\n", 1395 + pr_err("sgid_index (%u) too large. max is %d\n", 1396 1396 ah->grh.sgid_index, gen->port[port - 1].gid_table_len); 1397 1397 return -EINVAL; 1398 1398 }
+1 -1
drivers/infiniband/hw/qib/qib.h
··· 903 903 /* PCI Device ID (here for NodeInfo) */ 904 904 u16 deviceid; 905 905 /* for write combining settings */ 906 - unsigned long wc_cookie; 906 + int wc_cookie; 907 907 unsigned long wc_base; 908 908 unsigned long wc_len; 909 909
+2 -1
drivers/infiniband/hw/qib/qib_wc_x86_64.c
··· 118 118 if (!ret) { 119 119 dd->wc_cookie = arch_phys_wc_add(pioaddr, piolen); 120 120 if (dd->wc_cookie < 0) 121 - ret = -EINVAL; 121 + /* use error from routine */ 122 + ret = dd->wc_cookie; 122 123 } 123 124 124 125 return ret;
+1
drivers/iommu/amd_iommu_v2.c
··· 266 266 267 267 static void put_pasid_state_wait(struct pasid_state *pasid_state) 268 268 { 269 + atomic_dec(&pasid_state->count); 269 270 wait_event(pasid_state->wq, !atomic_read(&pasid_state->count)); 270 271 free_pasid_state(pasid_state); 271 272 }
+2 -28
drivers/iommu/arm-smmu.c
··· 224 224 #define RESUME_TERMINATE (1 << 0) 225 225 226 226 #define TTBCR2_SEP_SHIFT 15 227 - #define TTBCR2_SEP_MASK 0x7 228 - 229 - #define TTBCR2_ADDR_32 0 230 - #define TTBCR2_ADDR_36 1 231 - #define TTBCR2_ADDR_40 2 232 - #define TTBCR2_ADDR_42 3 233 - #define TTBCR2_ADDR_44 4 234 - #define TTBCR2_ADDR_48 5 227 + #define TTBCR2_SEP_UPSTREAM (0x7 << TTBCR2_SEP_SHIFT) 235 228 236 229 #define TTBRn_HI_ASID_SHIFT 16 237 230 ··· 786 793 writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR); 787 794 if (smmu->version > ARM_SMMU_V1) { 788 795 reg = pgtbl_cfg->arm_lpae_s1_cfg.tcr >> 32; 789 - switch (smmu->va_size) { 790 - case 32: 791 - reg |= (TTBCR2_ADDR_32 << TTBCR2_SEP_SHIFT); 792 - break; 793 - case 36: 794 - reg |= (TTBCR2_ADDR_36 << TTBCR2_SEP_SHIFT); 795 - break; 796 - case 40: 797 - reg |= (TTBCR2_ADDR_40 << TTBCR2_SEP_SHIFT); 798 - break; 799 - case 42: 800 - reg |= (TTBCR2_ADDR_42 << TTBCR2_SEP_SHIFT); 801 - break; 802 - case 44: 803 - reg |= (TTBCR2_ADDR_44 << TTBCR2_SEP_SHIFT); 804 - break; 805 - case 48: 806 - reg |= (TTBCR2_ADDR_48 << TTBCR2_SEP_SHIFT); 807 - break; 808 - } 796 + reg |= TTBCR2_SEP_UPSTREAM; 809 797 writel_relaxed(reg, cb_base + ARM_SMMU_CB_TTBCR2); 810 798 } 811 799 } else {
+1 -3
drivers/iommu/rockchip-iommu.c
··· 1004 1004 return 0; 1005 1005 } 1006 1006 1007 - #ifdef CONFIG_OF 1008 1007 static const struct of_device_id rk_iommu_dt_ids[] = { 1009 1008 { .compatible = "rockchip,iommu" }, 1010 1009 { /* sentinel */ } 1011 1010 }; 1012 1011 MODULE_DEVICE_TABLE(of, rk_iommu_dt_ids); 1013 - #endif 1014 1012 1015 1013 static struct platform_driver rk_iommu_driver = { 1016 1014 .probe = rk_iommu_probe, 1017 1015 .remove = rk_iommu_remove, 1018 1016 .driver = { 1019 1017 .name = "rk_iommu", 1020 - .of_match_table = of_match_ptr(rk_iommu_dt_ids), 1018 + .of_match_table = rk_iommu_dt_ids, 1021 1019 }, 1022 1020 }; 1023 1021
+1 -1
drivers/irqchip/irq-tegra.c
··· 264 264 265 265 irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i, 266 266 &tegra_ictlr_chip, 267 - &info->base[ictlr]); 267 + info->base[ictlr]); 268 268 } 269 269 270 270 parent_args = *args;
+3 -2
drivers/md/raid0.c
··· 188 188 } 189 189 dev[j] = rdev1; 190 190 191 - disk_stack_limits(mddev->gendisk, rdev1->bdev, 192 - rdev1->data_offset << 9); 191 + if (mddev->queue) 192 + disk_stack_limits(mddev->gendisk, rdev1->bdev, 193 + rdev1->data_offset << 9); 193 194 194 195 if (rdev1->bdev->bd_disk->queue->merge_bvec_fn) 195 196 conf->has_merge_bvec = 1;
+70 -53
drivers/md/raid5.c
··· 1078 1078 pr_debug("skip op %ld on disc %d for sector %llu\n", 1079 1079 bi->bi_rw, i, (unsigned long long)sh->sector); 1080 1080 clear_bit(R5_LOCKED, &sh->dev[i].flags); 1081 - if (sh->batch_head) 1082 - set_bit(STRIPE_BATCH_ERR, 1083 - &sh->batch_head->state); 1084 1081 set_bit(STRIPE_HANDLE, &sh->state); 1085 1082 } 1086 1083 ··· 1968 1971 put_cpu(); 1969 1972 } 1970 1973 1974 + static struct stripe_head *alloc_stripe(struct kmem_cache *sc, gfp_t gfp) 1975 + { 1976 + struct stripe_head *sh; 1977 + 1978 + sh = kmem_cache_zalloc(sc, gfp); 1979 + if (sh) { 1980 + spin_lock_init(&sh->stripe_lock); 1981 + spin_lock_init(&sh->batch_lock); 1982 + INIT_LIST_HEAD(&sh->batch_list); 1983 + INIT_LIST_HEAD(&sh->lru); 1984 + atomic_set(&sh->count, 1); 1985 + } 1986 + return sh; 1987 + } 1971 1988 static int grow_one_stripe(struct r5conf *conf, gfp_t gfp) 1972 1989 { 1973 1990 struct stripe_head *sh; 1974 - sh = kmem_cache_zalloc(conf->slab_cache, gfp); 1991 + 1992 + sh = alloc_stripe(conf->slab_cache, gfp); 1975 1993 if (!sh) 1976 1994 return 0; 1977 1995 1978 1996 sh->raid_conf = conf; 1979 - 1980 - spin_lock_init(&sh->stripe_lock); 1981 1997 1982 1998 if (grow_buffers(sh, gfp)) { 1983 1999 shrink_buffers(sh); ··· 2000 1990 sh->hash_lock_index = 2001 1991 conf->max_nr_stripes % NR_STRIPE_HASH_LOCKS; 2002 1992 /* we just created an active stripe so... */ 2003 - atomic_set(&sh->count, 1); 2004 1993 atomic_inc(&conf->active_stripes); 2005 - INIT_LIST_HEAD(&sh->lru); 2006 1994 2007 - spin_lock_init(&sh->batch_lock); 2008 - INIT_LIST_HEAD(&sh->batch_list); 2009 - sh->batch_head = NULL; 2010 1995 release_stripe(sh); 2011 1996 conf->max_nr_stripes++; 2012 1997 return 1; ··· 2065 2060 return ret; 2066 2061 } 2067 2062 2063 + static int resize_chunks(struct r5conf *conf, int new_disks, int new_sectors) 2064 + { 2065 + unsigned long cpu; 2066 + int err = 0; 2067 + 2068 + mddev_suspend(conf->mddev); 2069 + get_online_cpus(); 2070 + for_each_present_cpu(cpu) { 2071 + struct raid5_percpu *percpu; 2072 + struct flex_array *scribble; 2073 + 2074 + percpu = per_cpu_ptr(conf->percpu, cpu); 2075 + scribble = scribble_alloc(new_disks, 2076 + new_sectors / STRIPE_SECTORS, 2077 + GFP_NOIO); 2078 + 2079 + if (scribble) { 2080 + flex_array_free(percpu->scribble); 2081 + percpu->scribble = scribble; 2082 + } else { 2083 + err = -ENOMEM; 2084 + break; 2085 + } 2086 + } 2087 + put_online_cpus(); 2088 + mddev_resume(conf->mddev); 2089 + return err; 2090 + } 2091 + 2068 2092 static int resize_stripes(struct r5conf *conf, int newsize) 2069 2093 { 2070 2094 /* Make all the stripes able to hold 'newsize' devices.
··· 2122 2088 struct stripe_head *osh, *nsh; 2123 2089 LIST_HEAD(newstripes); 2124 2090 struct disk_info *ndisks; 2125 - unsigned long cpu; 2126 2091 int err; 2127 2092 struct kmem_cache *sc; 2128 2093 int i; ··· 2142 2109 return -ENOMEM; 2143 2110 2144 2111 for (i = conf->max_nr_stripes; i; i--) { 2145 - nsh = kmem_cache_zalloc(sc, GFP_KERNEL); 2112 + nsh = alloc_stripe(sc, GFP_KERNEL); 2146 2113 if (!nsh) 2147 2114 break; 2148 2115 2149 2116 nsh->raid_conf = conf; 2150 - spin_lock_init(&nsh->stripe_lock); 2151 - 2152 2117 list_add(&nsh->lru, &newstripes); 2153 2118 } 2154 2119 if (i) { ··· 2173 2142 lock_device_hash_lock(conf, hash)); 2174 2143 osh = get_free_stripe(conf, hash); 2175 2144 unlock_device_hash_lock(conf, hash); 2176 - atomic_set(&nsh->count, 1); 2145 + 2177 2146 for(i=0; i<conf->pool_size; i++) { 2178 2147 nsh->dev[i].page = osh->dev[i].page; 2179 2148 nsh->dev[i].orig_page = osh->dev[i].page; 2180 2149 } 2181 - for( ; i<newsize; i++) 2182 - nsh->dev[i].page = NULL; 2183 2150 nsh->hash_lock_index = hash; 2184 2151 kmem_cache_free(conf->slab_cache, osh); 2185 2152 cnt++; ··· 2203 2174 } else 2204 2175 err = -ENOMEM; 2205 2176 2206 - get_online_cpus(); 2207 - for_each_present_cpu(cpu) { 2208 - struct raid5_percpu *percpu; 2209 - struct flex_array *scribble; 2210 - 2211 - percpu = per_cpu_ptr(conf->percpu, cpu); 2212 - scribble = scribble_alloc(newsize, conf->chunk_sectors / 2213 - STRIPE_SECTORS, GFP_NOIO); 2214 - 2215 - if (scribble) { 2216 - flex_array_free(percpu->scribble); 2217 - percpu->scribble = scribble; 2218 - } else { 2219 - err = -ENOMEM; 2220 - break; 2221 - } 2222 - } 2223 - put_online_cpus(); 2224 - 2225 2177 /* Step 4, return new stripes to service */ 2226 2178 while(!list_empty(&newstripes)) { 2227 2179 nsh = list_entry(newstripes.next, struct stripe_head, lru); ··· 2222 2212 2223 2213 conf->slab_cache = sc; 2224 2214 conf->active_name = 1-conf->active_name; 2225 - conf->pool_size = newsize; 2215 + if (!err) 2216 + conf->pool_size = newsize; 2226 2217 return err; 2227 2218 } 2228 2219 ··· 2445 2434 } 2446 2435 rdev_dec_pending(rdev, conf->mddev); 2447 2436 2448 - if (sh->batch_head && !uptodate) 2437 + if (sh->batch_head && !uptodate && !replacement) 2449 2438 set_bit(STRIPE_BATCH_ERR, &sh->batch_head->state); 2450 2439 2451 2440 if (!test_and_clear_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags)) ··· 3289 3278 /* reconstruct-write isn't being forced */ 3290 3279 return 0; 3291 3280 for (i = 0; i < s->failed; i++) { 3292 - if (!test_bit(R5_UPTODATE, &fdev[i]->flags) && 3281 + if (s->failed_num[i] != sh->pd_idx && 3282 + s->failed_num[i] != sh->qd_idx && 3283 + !test_bit(R5_UPTODATE, &fdev[i]->flags) && 3293 3284 !test_bit(R5_OVERWRITE, &fdev[i]->flags)) 3294 3285 return 1; 3295 3286 } ··· 3311 3298 */ 3312 3299 BUG_ON(test_bit(R5_Wantcompute, &dev->flags)); 3313 3300 BUG_ON(test_bit(R5_Wantread, &dev->flags)); 3301 + BUG_ON(sh->batch_head); 3314 3302 if ((s->uptodate == disks - 1) && 3315 3303 (s->failed && (disk_idx == s->failed_num[0] || 3316 3304 disk_idx == s->failed_num[1]))) { ··· 3380 3366 { 3381 3367 int i; 3382 3368 3383 - BUG_ON(sh->batch_head); 3384 3369 /* look for blocks to read/compute, skip this if a compute 3385 3370 * is already in flight, or if the stripe contents are in the 3386 3371 * midst of changing due to a write ··· 4211 4198 return; 4212 4199 4213 4200 head_sh = sh; 4214 - do { 4215 - sh = list_first_entry(&sh->batch_list, 4216 - struct stripe_head, batch_list); 4217 - BUG_ON(sh == head_sh); 4218 - } while (!test_bit(STRIPE_DEGRADED, &sh->state)); 4219 4201 4220 - while (sh != head_sh) { 4221 - next = list_first_entry(&sh->batch_list, 4222 - struct stripe_head, batch_list); 4202 + list_for_each_entry_safe(sh, next, &head_sh->batch_list, batch_list) { 4203 + 4223 4204 list_del_init(&sh->batch_list); 4224 4205 4225 4206 set_mask_bits(&sh->state, ~STRIPE_EXPAND_SYNC_FLAG, ··· 4233 4226 4234 4227 set_bit(STRIPE_HANDLE, &sh->state); 4235 4228 release_stripe(sh); 4236 - 4237 - sh = next; 4238 4229 } 4239 4230 } ··· 6226 6221 percpu->spare_page = alloc_page(GFP_KERNEL); 6227 6222 if (!percpu->scribble) 6228 6223 percpu->scribble = scribble_alloc(max(conf->raid_disks, 6229 - conf->previous_raid_disks), conf->chunk_sectors / 6230 - STRIPE_SECTORS, GFP_KERNEL); 6224 + conf->previous_raid_disks), 6225 + max(conf->chunk_sectors, 6226 + conf->prev_chunk_sectors) 6227 + / STRIPE_SECTORS, 6228 + GFP_KERNEL); 6231 6229 6232 6230 if (!percpu->scribble || (conf->level == 6 && !percpu->spare_page)) { 6233 6231 free_scratch_buffer(conf, percpu); ··· 7206 7198 if (!check_stripe_cache(mddev)) 7207 7199 return -ENOSPC; 7208 7200 7201 + if (mddev->new_chunk_sectors > mddev->chunk_sectors || 7202 + mddev->delta_disks > 0) 7203 + if (resize_chunks(conf, 7204 + conf->previous_raid_disks 7205 + + max(0, mddev->delta_disks), 7206 + max(mddev->new_chunk_sectors, 7207 + mddev->chunk_sectors) 7208 + ) < 0) 7209 + return -ENOMEM; 7209 7210 return resize_stripes(conf, (conf->previous_raid_disks 7210 7211 + mddev->delta_disks)); 7211 7212 }
+3 -3
drivers/mtd/devices/m25p80.c
··· 223 223 */ 224 224 if (data && data->type) 225 225 flash_name = data->type; 226 - else if (!strcmp(spi->modalias, "nor-jedec")) 226 + else if (!strcmp(spi->modalias, "spi-nor")) 227 227 flash_name = NULL; /* auto-detect */ 228 228 else 229 229 flash_name = spi->modalias; ··· 255 255 * since most of these flash are compatible to some extent, and their 256 256 * differences can often be differentiated by the JEDEC read-ID command, we 257 257 * encourage new users to add support to the spi-nor library, and simply bind 258 - * against a generic string here (e.g., "nor-jedec"). 258 + * against a generic string here (e.g., "jedec,spi-nor"). 259 259 * 260 260 * Many flash names are kept here in this list (as well as in spi-nor.c) to 261 261 * keep them available as module aliases for existing platforms. ··· 305 305 * Generic support for SPI NOR that can be identified by the JEDEC READ 306 306 * ID opcode (0x9F). Use this, if possible. 307 307 */ 308 - {"nor-jedec"}, 308 + {"spi-nor"}, 309 309 { }, 310 310 }; 311 311 MODULE_DEVICE_TABLE(spi, m25p_ids);
+4 -2
drivers/mtd/tests/readtest.c
··· 191 191 err = ret; 192 192 } 193 193 194 - err = mtdtest_relax(); 195 - if (err) 194 + ret = mtdtest_relax(); 195 + if (ret) { 196 + err = ret; 196 197 goto out; 198 + } 197 199 } 198 200 199 201 if (err)
+2
drivers/mtd/ubi/block.c
··· 310 310 blk_rq_map_sg(req->q, req, pdu->usgl.sg); 311 311 312 312 ret = ubiblock_read(pdu); 313 + rq_flush_dcache_pages(req); 314 + 313 315 blk_mq_end_request(req, ret); 314 316 } 315 317
+4 -3
drivers/net/can/xilinx_can.c
··· 509 509 cf->can_id |= CAN_RTR_FLAG; 510 510 } 511 511 512 - if (!(id_xcan & XCAN_IDR_SRR_MASK)) { 513 - data[0] = priv->read_reg(priv, XCAN_RXFIFO_DW1_OFFSET); 514 - data[1] = priv->read_reg(priv, XCAN_RXFIFO_DW2_OFFSET); 512 + /* DW1/DW2 must always be read to remove message from RXFIFO */ 513 + data[0] = priv->read_reg(priv, XCAN_RXFIFO_DW1_OFFSET); 514 + data[1] = priv->read_reg(priv, XCAN_RXFIFO_DW2_OFFSET); 515 515 516 + if (!(cf->can_id & CAN_RTR_FLAG)) { 516 517 /* Change Xilinx CAN data format to socketCAN data format */ 517 518 if (cf->can_dlc > 0) 518 519 *(__be32 *)(cf->data) = cpu_to_be32(data[0]);
+3
drivers/net/dsa/mv88e6xxx.c
··· 1469 1469 #if IS_ENABLED(CONFIG_NET_DSA_MV88E6171) 1470 1470 unregister_switch_driver(&mv88e6171_switch_driver); 1471 1471 #endif 1472 + #if IS_ENABLED(CONFIG_NET_DSA_MV88E6352) 1473 + unregister_switch_driver(&mv88e6352_switch_driver); 1474 + #endif 1472 1475 #if IS_ENABLED(CONFIG_NET_DSA_MV88E6123_61_65) 1473 1476 unregister_switch_driver(&mv88e6123_61_65_switch_driver); 1474 1477 #endif
+1
drivers/net/ethernet/amd/Kconfig
··· 180 180 config AMD_XGBE 181 181 tristate "AMD 10GbE Ethernet driver" 182 182 depends on (OF_NET || ACPI) && HAS_IOMEM && HAS_DMA 183 + depends on ARM64 || COMPILE_TEST 183 184 select PHYLIB 184 185 select AMD_XGBE_PHY 185 186 select BITREVERSE
+1
drivers/net/ethernet/apm/xgene/Kconfig
··· 1 1 config NET_XGENE 2 2 tristate "APM X-Gene SoC Ethernet Driver" 3 3 depends on HAS_DMA 4 + depends on ARCH_XGENE || COMPILE_TEST 4 5 select PHYLIB 5 6 help 6 7 This is the Ethernet driver for the on-chip ethernet interface on the
+5 -5
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 4786 4786 { 4787 4787 struct bnx2x *bp = netdev_priv(dev); 4788 4788 4789 + if (pci_num_vf(bp->pdev)) { 4790 + DP(BNX2X_MSG_IOV, "VFs are enabled, can not change MTU\n"); 4791 + return -EPERM; 4792 + } 4793 + 4789 4794 if (bp->recovery_state != BNX2X_RECOVERY_DONE) { 4790 4795 BNX2X_ERR("Can't perform change MTU during parity recovery\n"); 4791 4796 return -EAGAIN; ··· 4942 4937 return -ENODEV; 4943 4938 } 4944 4939 bp = netdev_priv(dev); 4945 - 4946 - if (pci_num_vf(bp->pdev)) { 4947 - DP(BNX2X_MSG_IOV, "VFs are enabled, can not change MTU\n"); 4948 - return -EPERM; 4949 - } 4950 4940 4951 4941 if (bp->recovery_state != BNX2X_RECOVERY_DONE) { 4952 4942 BNX2X_ERR("Handling parity error recovery. Try again later\n");
+7 -2
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 13371 13371 /* Management FW 'remembers' living interfaces. Allow it some time 13372 13372 * to forget previously living interfaces, allowing a proper re-load. 13373 13373 */ 13374 - if (is_kdump_kernel()) 13375 - msleep(5000); 13374 + if (is_kdump_kernel()) { 13375 + ktime_t now = ktime_get_boottime(); 13376 + ktime_t fw_ready_time = ktime_set(5, 0); 13377 + 13378 + if (ktime_before(now, fw_ready_time)) 13379 + msleep(ktime_ms_delta(fw_ready_time, now)); 13380 + } 13376 13381 13377 13382 /* An estimated maximum supported CoS number according to the chip 13378 13383 * version.
+10 -1
drivers/net/ethernet/cadence/macb.c
··· 981 981 struct macb_queue *queue = dev_id; 982 982 struct macb *bp = queue->bp; 983 983 struct net_device *dev = bp->dev; 984 - u32 status; 984 + u32 status, ctrl; 985 985 986 986 status = queue_readl(queue, ISR); 987 987 ··· 1036 1036 * Link change detection isn't possible with RMII, so we'll 1037 1037 * add that if/when we get our hands on a full-blown MII PHY. 1038 1038 */ 1039 + 1040 + if (status & MACB_BIT(RXUBR)) { 1041 + ctrl = macb_readl(bp, NCR); 1042 + macb_writel(bp, NCR, ctrl & ~MACB_BIT(RE)); 1043 + macb_writel(bp, NCR, ctrl | MACB_BIT(RE)); 1044 + 1045 + if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE) 1046 + macb_writel(bp, ISR, MACB_BIT(RXUBR)); 1047 + } 1039 1048 1040 1049 if (status & MACB_BIT(ISR_ROVR)) { 1041 1050 /* We missed at least one packet */
+1
drivers/net/ethernet/intel/e1000e/e1000.h
··· 40 40 #include <linux/ptp_classify.h> 41 41 #include <linux/mii.h> 42 42 #include <linux/mdio.h> 43 + #include <linux/pm_qos.h> 43 44 #include "hw.h" 44 45 45 46 struct e1000_info;
+2 -2
drivers/net/ethernet/intel/fm10k/fm10k_main.c
··· 610 610 unsigned int total_bytes = 0, total_packets = 0; 611 611 u16 cleaned_count = fm10k_desc_unused(rx_ring); 612 612 613 - do { 613 + while (likely(total_packets < budget)) { 614 614 union fm10k_rx_desc *rx_desc; 615 615 616 616 /* return some buffers to hardware, one at a time is too slow */ ··· 659 659 660 660 /* update budget accounting */ 661 661 total_packets++; 662 - } while (likely(total_packets < budget)); 662 + } 663 663 664 664 /* place incomplete frames back on ring for completion */ 665 665 rx_ring->skb = skb;
+3 -1
drivers/net/ethernet/intel/igb/igb_main.c
··· 1036 1036 adapter->tx_ring[q_vector->tx.ring->queue_index] = NULL; 1037 1037 1038 1038 if (q_vector->rx.ring) 1039 - adapter->tx_ring[q_vector->rx.ring->queue_index] = NULL; 1039 + adapter->rx_ring[q_vector->rx.ring->queue_index] = NULL; 1040 1040 1041 1041 netif_napi_del(&q_vector->napi); 1042 1042 ··· 1207 1207 q_vector = adapter->q_vector[v_idx]; 1208 1208 if (!q_vector) 1209 1209 q_vector = kzalloc(size, GFP_KERNEL); 1210 + else 1211 + memset(q_vector, 0, size); 1210 1212 if (!q_vector) 1211 1213 return -ENOMEM; 1212 1214
+1 -1
drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
··· 3612 3612 u8 *dst_mac = skb_header_pointer(skb, 0, 0, NULL); 3613 3613 3614 3614 if (!dst_mac || is_link_local_ether_addr(dst_mac)) { 3615 - dev_kfree_skb(skb); 3615 + dev_kfree_skb_any(skb); 3616 3616 return NETDEV_TX_OK; 3617 3617 } 3618 3618
+1 -1
drivers/net/ethernet/mellanox/mlx4/en_port.c
··· 139 139 int i; 140 140 int offset = next - start; 141 141 142 - for (i = 0; i <= num; i++) { 142 + for (i = 0; i < num; i++) { 143 143 ret += be64_to_cpu(*curr); 144 144 curr += offset; 145 145 }
+7 -7
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 2845 2845 { 2846 2846 int err; 2847 2847 int eqn = vhcr->in_modifier; 2848 - int res_id = (slave << 8) | eqn; 2848 + int res_id = (slave << 10) | eqn; 2849 2849 struct mlx4_eq_context *eqc = inbox->buf; 2850 2850 int mtt_base = eq_get_mtt_addr(eqc) / dev->caps.mtt_entry_sz; 2851 2851 int mtt_size = eq_get_mtt_size(eqc); ··· 3051 3051 struct mlx4_cmd_info *cmd) 3052 3052 { 3053 3053 int eqn = vhcr->in_modifier; 3054 - int res_id = eqn | (slave << 8); 3054 + int res_id = eqn | (slave << 10); 3055 3055 struct res_eq *eq; 3056 3056 int err; 3057 3057 ··· 3108 3108 return 0; 3109 3109 3110 3110 mutex_lock(&priv->mfunc.master.gen_eqe_mutex[slave]); 3111 - res_id = (slave << 8) | event_eq->eqn; 3111 + res_id = (slave << 10) | event_eq->eqn; 3112 3112 err = get_res(dev, slave, res_id, RES_EQ, &req); 3113 3113 if (err) 3114 3114 goto unlock; ··· 3131 3131 3132 3132 memcpy(mailbox->buf, (u8 *) eqe, 28); 3133 3133 3134 - in_modifier = (slave & 0xff) | ((event_eq->eqn & 0xff) << 16); 3134 + in_modifier = (slave & 0xff) | ((event_eq->eqn & 0x3ff) << 16); 3135 3135 3136 3136 err = mlx4_cmd(dev, mailbox->dma, in_modifier, 0, 3137 3137 MLX4_CMD_GEN_EQE, MLX4_CMD_TIME_CLASS_B, ··· 3157 3157 struct mlx4_cmd_info *cmd) 3158 3158 { 3159 3159 int eqn = vhcr->in_modifier; 3160 - int res_id = eqn | (slave << 8); 3160 + int res_id = eqn | (slave << 10); 3161 3161 struct res_eq *eq; 3162 3162 int err; 3163 3163 ··· 4714 4714 break; 4715 4715 4716 4716 case RES_EQ_HW: 4717 - err = mlx4_cmd(dev, slave, eqn & 0xff, 4717 + err = mlx4_cmd(dev, slave, eqn & 0x3ff, 4718 4718 1, MLX4_CMD_HW2SW_EQ, 4719 4719 MLX4_CMD_TIME_CLASS_A, 4720 4720 MLX4_CMD_NATIVE); 4721 4721 if (err) 4722 4722 mlx4_dbg(dev, "rem_slave_eqs: failed to move slave %d eqs %d to SW ownership\n", 4723 - slave, eqn); 4723 + slave, eqn & 0x3ff); 4724 4724 atomic_dec(&eq->mtt->ref_count); 4725 4725 state = RES_EQ_RESERVED; 4726 4726 break;
+2 -2
drivers/net/ethernet/qlogic/netxen/netxen_nic_init.c
··· 1764 1764 int done = 0; 1765 1765 struct nx_host_tx_ring *tx_ring = adapter->tx_ring; 1766 1766 1767 - if (!spin_trylock(&adapter->tx_clean_lock)) 1767 + if (!spin_trylock_bh(&adapter->tx_clean_lock)) 1768 1768 return 1; 1769 1769 1770 1770 sw_consumer = tx_ring->sw_consumer; ··· 1819 1819 */ 1820 1820 hw_consumer = le32_to_cpu(*(tx_ring->hw_consumer)); 1821 1821 done = (sw_consumer == hw_consumer); 1822 - spin_unlock(&adapter->tx_clean_lock); 1822 + spin_unlock_bh(&adapter->tx_clean_lock); 1823 1823 1824 1824 return done; 1825 1825 }
+2 -2
drivers/net/ethernet/qualcomm/qca_spi.c
··· 912 912 qca->spi_dev = spi_device; 913 913 qca->legacy_mode = legacy_mode; 914 914 915 + spi_set_drvdata(spi_device, qcaspi_devs); 916 + 915 917 mac = of_get_mac_address(spi_device->dev.of_node); 916 918 917 919 if (mac) ··· 945 943 free_netdev(qcaspi_devs); 946 944 return -EFAULT; 947 945 } 948 - 949 - spi_set_drvdata(spi_device, qcaspi_devs); 950 946 951 947 qcaspi_init_device_debugfs(qca); 952 948
+2 -2
drivers/net/ethernet/realtek/r8169.c
··· 6884 6884 rtl8169_start_xmit(nskb, tp->dev); 6885 6885 } while (segs); 6886 6886 6887 - dev_kfree_skb(skb); 6887 + dev_consume_skb_any(skb); 6888 6888 } else if (skb->ip_summed == CHECKSUM_PARTIAL) { 6889 6889 if (skb_checksum_help(skb) < 0) 6890 6890 goto drop; ··· 6896 6896 drop: 6897 6897 stats = &tp->dev->stats; 6898 6898 stats->tx_dropped++; 6899 - dev_kfree_skb(skb); 6899 + dev_kfree_skb_any(skb); 6900 6900 } 6901 6901 } 6902 6902
+12 -8
drivers/net/ethernet/smsc/smc91x.c
··· 2238 2238 const struct of_device_id *match = NULL; 2239 2239 struct smc_local *lp; 2240 2240 struct net_device *ndev; 2241 - struct resource *res, *ires; 2241 + struct resource *res; 2242 2242 unsigned int __iomem *addr; 2243 2243 unsigned long irq_flags = SMC_IRQ_FLAGS; 2244 + unsigned long irq_resflags; 2244 2245 int ret; 2245 2246 2246 2247 ndev = alloc_etherdev(sizeof(struct smc_local)); ··· 2333 2332 goto out_free_netdev; 2334 2333 } 2335 2334 2336 - ires = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 2337 - if (!ires) { 2335 + ndev->irq = platform_get_irq(pdev, 0); 2336 + if (ndev->irq <= 0) { 2338 2337 ret = -ENODEV; 2339 2338 goto out_release_io; 2340 2339 } 2341 - 2342 - ndev->irq = ires->start; 2343 - 2344 - if (irq_flags == -1 || ires->flags & IRQF_TRIGGER_MASK) 2345 - irq_flags = ires->flags & IRQF_TRIGGER_MASK; 2340 + /* 2341 + * If this platform does not specify any special irqflags, or if 2342 + * the resource supplies a trigger, override the irqflags with 2343 + * the trigger flags from the resource. 2344 + */ 2345 + irq_resflags = irqd_get_trigger_type(irq_get_irq_data(ndev->irq)); 2346 + if (irq_flags == -1 || irq_resflags & IRQF_TRIGGER_MASK) 2347 + irq_flags = irq_resflags & IRQF_TRIGGER_MASK; 2346 2348 2347 2349 ret = smc_request_attrib(pdev, ndev); 2348 2350 if (ret)
+6 -6
drivers/net/ethernet/smsc/smsc911x.c
··· 2418 2418 struct net_device *dev; 2419 2419 struct smsc911x_data *pdata; 2420 2420 struct smsc911x_platform_config *config = dev_get_platdata(&pdev->dev); 2421 - struct resource *res, *irq_res; 2421 + struct resource *res; 2422 2422 unsigned int intcfg = 0; 2423 - int res_size, irq_flags; 2423 + int res_size, irq, irq_flags; 2424 2424 int retval; 2425 2425 2426 2426 res = platform_get_resource_byname(pdev, IORESOURCE_MEM, ··· 2434 2434 } 2435 2435 res_size = resource_size(res); 2436 2436 2437 - irq_res = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 2438 - if (!irq_res) { 2437 + irq = platform_get_irq(pdev, 0); 2438 + if (irq <= 0) { 2439 2439 pr_warn("Could not allocate irq resource\n"); 2440 2440 retval = -ENODEV; 2441 2441 goto out_0; ··· 2455 2455 SET_NETDEV_DEV(dev, &pdev->dev); 2456 2456 2457 2457 pdata = netdev_priv(dev); 2458 - dev->irq = irq_res->start; 2459 - irq_flags = irq_res->flags & IRQF_TRIGGER_MASK; 2458 + dev->irq = irq; 2459 + irq_flags = irq_get_trigger_type(irq); 2460 2460 pdata->ioaddr = ioremap_nocache(res->start, res_size); 2461 2461 2462 2462 pdata->dev = dev;
+1
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 23 23 *******************************************************************************/ 24 24 25 25 #include <linux/platform_device.h> 26 + #include <linux/module.h> 26 27 #include <linux/io.h> 27 28 #include <linux/of.h> 28 29 #include <linux/of_net.h>
+2 -2
drivers/net/ethernet/xilinx/ll_temac_main.c
··· 707 707 708 708 cur_p->app0 |= STS_CTRL_APP0_SOP; 709 709 cur_p->len = skb_headlen(skb); 710 - cur_p->phys = dma_map_single(ndev->dev.parent, skb->data, skb->len, 711 - DMA_TO_DEVICE); 710 + cur_p->phys = dma_map_single(ndev->dev.parent, skb->data, 711 + skb_headlen(skb), DMA_TO_DEVICE); 712 712 cur_p->app4 = (unsigned long)skb; 713 713 714 714 for (ii = 0; ii < num_frag; ii++) {
+2 -7
drivers/net/hyperv/netvsc.c
··· 826 826 u16 q_idx = packet->q_idx; 827 827 u32 pktlen = packet->total_data_buflen, msd_len = 0; 828 828 unsigned int section_index = NETVSC_INVALID_INDEX; 829 - struct sk_buff *skb = NULL; 830 829 unsigned long flag; 831 830 struct multi_send_data *msdp; 832 831 struct hv_netvsc_packet *msd_send = NULL, *cur_send = NULL; ··· 923 924 if (cur_send) 924 925 ret = netvsc_send_pkt(cur_send, net_device); 925 926 926 - if (ret != 0) { 927 - if (section_index != NETVSC_INVALID_INDEX) 928 - netvsc_free_send_slot(net_device, section_index); 929 - } else if (skb) { 930 - dev_kfree_skb_any(skb); 931 - } 927 + if (ret != 0 && section_index != NETVSC_INVALID_INDEX) 928 + netvsc_free_send_slot(net_device, section_index); 932 929 933 930 return ret; 934 931 }
+206 -184
drivers/net/ieee802154/at86rf230.c
··· 85 85 struct ieee802154_hw *hw; 86 86 struct at86rf2xx_chip_data *data; 87 87 struct regmap *regmap; 88 + int slp_tr; 88 89 89 90 struct completion state_complete; 90 91 struct at86rf230_state_change state; ··· 96 95 unsigned long cal_timeout; 97 96 s8 max_frame_retries; 98 97 bool is_tx; 98 + bool is_tx_from_off; 99 99 u8 tx_retry; 100 100 struct sk_buff *tx_skb; 101 101 struct at86rf230_state_change tx; 102 102 }; 103 103 104 - #define RG_TRX_STATUS (0x01) 105 - #define SR_TRX_STATUS 0x01, 0x1f, 0 106 - #define SR_RESERVED_01_3 0x01, 0x20, 5 107 - #define SR_CCA_STATUS 0x01, 0x40, 6 108 - #define SR_CCA_DONE 0x01, 0x80, 7 109 - #define RG_TRX_STATE (0x02) 110 - #define SR_TRX_CMD 0x02, 0x1f, 0 111 - #define SR_TRAC_STATUS 0x02, 0xe0, 5 112 - #define RG_TRX_CTRL_0 (0x03) 113 - #define SR_CLKM_CTRL 0x03, 0x07, 0 114 - #define SR_CLKM_SHA_SEL 0x03, 0x08, 3 115 - #define SR_PAD_IO_CLKM 0x03, 0x30, 4 116 - #define SR_PAD_IO 0x03, 0xc0, 6 117 - #define RG_TRX_CTRL_1 (0x04) 118 - #define SR_IRQ_POLARITY 0x04, 0x01, 0 119 - #define SR_IRQ_MASK_MODE 0x04, 0x02, 1 120 - #define SR_SPI_CMD_MODE 0x04, 0x0c, 2 121 - #define SR_RX_BL_CTRL 0x04, 0x10, 4 122 - #define SR_TX_AUTO_CRC_ON 0x04, 0x20, 5 123 - #define SR_IRQ_2_EXT_EN 0x04, 0x40, 6 124 - #define SR_PA_EXT_EN 0x04, 0x80, 7 125 - #define RG_PHY_TX_PWR (0x05) 126 - #define SR_TX_PWR 0x05, 0x0f, 0 127 - #define SR_PA_LT 0x05, 0x30, 4 128 - #define SR_PA_BUF_LT 0x05, 0xc0, 6 129 - #define RG_PHY_RSSI (0x06) 130 - #define SR_RSSI 0x06, 0x1f, 0 131 - #define SR_RND_VALUE 0x06, 0x60, 5 132 - #define SR_RX_CRC_VALID 0x06, 0x80, 7 133 - #define RG_PHY_ED_LEVEL (0x07) 134 - #define SR_ED_LEVEL 0x07, 0xff, 0 135 - #define RG_PHY_CC_CCA (0x08) 136 - #define SR_CHANNEL 0x08, 0x1f, 0 137 - #define SR_CCA_MODE 0x08, 0x60, 5 138 - #define SR_CCA_REQUEST 0x08, 0x80, 7 139 - #define RG_CCA_THRES (0x09) 140 - #define SR_CCA_ED_THRES 0x09, 0x0f, 0 141 - #define SR_RESERVED_09_1 0x09, 0xf0, 4 142 - #define RG_RX_CTRL (0x0a) 143 - #define SR_PDT_THRES 0x0a, 0x0f, 0 144 - #define SR_RESERVED_0a_1 0x0a, 0xf0, 4 145 - #define RG_SFD_VALUE (0x0b) 146 - #define SR_SFD_VALUE 0x0b, 0xff, 0 147 - #define RG_TRX_CTRL_2 (0x0c) 148 - #define SR_OQPSK_DATA_RATE 0x0c, 0x03, 0 149 - #define SR_SUB_MODE 0x0c, 0x04, 2 150 - #define SR_BPSK_QPSK 0x0c, 0x08, 3 151 - #define SR_OQPSK_SUB1_RC_EN 0x0c, 0x10, 4 152 - #define SR_RESERVED_0c_5 0x0c, 0x60, 5 153 - #define SR_RX_SAFE_MODE 0x0c, 0x80, 7 154 - #define RG_ANT_DIV (0x0d) 155 - #define SR_ANT_CTRL 0x0d, 0x03, 0 156 - #define SR_ANT_EXT_SW_EN 0x0d, 0x04, 2 157 - #define SR_ANT_DIV_EN 0x0d, 0x08, 3 158 - #define SR_RESERVED_0d_2 0x0d, 0x70, 4 159 - #define SR_ANT_SEL 0x0d, 0x80, 7 160 - #define RG_IRQ_MASK (0x0e) 161 - #define SR_IRQ_MASK 0x0e, 0xff, 0 162 - #define RG_IRQ_STATUS (0x0f) 163 - #define SR_IRQ_0_PLL_LOCK 0x0f, 0x01, 0 164 - #define SR_IRQ_1_PLL_UNLOCK 0x0f, 0x02, 1 165 - #define SR_IRQ_2_RX_START 0x0f, 0x04, 2 166 - #define SR_IRQ_3_TRX_END 0x0f, 0x08, 3 167 - #define SR_IRQ_4_CCA_ED_DONE 0x0f, 0x10, 4 168 - #define SR_IRQ_5_AMI 0x0f, 0x20, 5 169 - #define SR_IRQ_6_TRX_UR 0x0f, 0x40, 6 170 - #define SR_IRQ_7_BAT_LOW 0x0f, 0x80, 7 171 - #define RG_VREG_CTRL (0x10) 172 - #define SR_RESERVED_10_6 0x10, 0x03, 0 173 - #define SR_DVDD_OK 0x10, 0x04, 2 174 - #define SR_DVREG_EXT 0x10, 0x08, 3 175 - #define SR_RESERVED_10_3 0x10, 0x30, 4 176 - #define SR_AVDD_OK 0x10, 0x40, 6 177 - #define SR_AVREG_EXT 0x10, 0x80, 7 178 - #define RG_BATMON (0x11) 179 - #define SR_BATMON_VTH 0x11, 0x0f, 0 180 - #define SR_BATMON_HR 0x11, 0x10, 4 181 - #define SR_BATMON_OK 0x11, 0x20, 5 182 - #define SR_RESERVED_11_1 0x11, 0xc0, 6 183 - #define RG_XOSC_CTRL (0x12) 184 - #define SR_XTAL_TRIM 0x12, 0x0f, 0 185 - #define SR_XTAL_MODE 0x12, 0xf0, 4 186 - #define RG_RX_SYN (0x15) 187 - #define SR_RX_PDT_LEVEL 0x15, 0x0f, 0 188 - #define SR_RESERVED_15_2 0x15, 0x70, 4 189 - #define SR_RX_PDT_DIS 0x15, 0x80, 7 190 - #define RG_XAH_CTRL_1 (0x17) 191 - #define SR_RESERVED_17_8 0x17, 0x01, 0 192 - #define SR_AACK_PROM_MODE 0x17, 0x02, 1 193 - #define SR_AACK_ACK_TIME 0x17, 0x04, 2 194 - #define SR_RESERVED_17_5 0x17, 0x08, 3 195 - #define SR_AACK_UPLD_RES_FT 0x17, 0x10, 4 196 - #define SR_AACK_FLTR_RES_FT 0x17, 0x20, 5 197 - #define SR_CSMA_LBT_MODE 0x17, 0x40, 6 198 - #define SR_RESERVED_17_1 0x17, 0x80, 7 199 - #define RG_FTN_CTRL (0x18) 200 - #define SR_RESERVED_18_2 0x18, 0x7f, 0 201 - #define SR_FTN_START 0x18, 0x80, 7 202 - #define RG_PLL_CF (0x1a) 203 - #define SR_RESERVED_1a_2 0x1a, 0x7f, 0 204 - #define SR_PLL_CF_START 0x1a, 0x80, 7 205 - #define RG_PLL_DCU (0x1b) 206 - #define SR_RESERVED_1b_3 0x1b, 0x3f, 0 207 - #define SR_RESERVED_1b_2 0x1b, 0x40, 6 208 - #define SR_PLL_DCU_START 0x1b, 0x80, 7 209 - #define RG_PART_NUM (0x1c) 210 - #define SR_PART_NUM 0x1c, 0xff, 0 211 - #define RG_VERSION_NUM (0x1d) 212 - #define SR_VERSION_NUM 0x1d, 0xff, 0 213 - #define RG_MAN_ID_0 (0x1e) 214 - #define SR_MAN_ID_0 0x1e, 0xff, 0 215 - #define RG_MAN_ID_1 (0x1f) 216 - #define SR_MAN_ID_1 0x1f, 0xff, 0 217 - #define RG_SHORT_ADDR_0 (0x20) 218 - #define SR_SHORT_ADDR_0 0x20, 0xff, 0 219 - #define RG_SHORT_ADDR_1 (0x21) 220 - #define SR_SHORT_ADDR_1 0x21, 0xff, 0 221 - #define RG_PAN_ID_0 (0x22) 222 - #define SR_PAN_ID_0 0x22, 0xff, 0 223 - #define RG_PAN_ID_1 (0x23) 224 - #define SR_PAN_ID_1 0x23, 0xff, 0 225 - #define RG_IEEE_ADDR_0 (0x24) 226 - #define SR_IEEE_ADDR_0 0x24, 0xff, 0 227 - #define RG_IEEE_ADDR_1 (0x25) 228 - #define SR_IEEE_ADDR_1 0x25, 0xff, 0 229 - #define RG_IEEE_ADDR_2 (0x26) 230 - #define SR_IEEE_ADDR_2 0x26, 0xff, 0 231 - #define RG_IEEE_ADDR_3 (0x27) 232 - #define SR_IEEE_ADDR_3 0x27, 0xff, 0 233 - #define RG_IEEE_ADDR_4 (0x28) 234 - #define SR_IEEE_ADDR_4 0x28, 0xff, 0 235 - #define RG_IEEE_ADDR_5 (0x29) 236 - #define SR_IEEE_ADDR_5 0x29, 0xff, 0 237 - #define RG_IEEE_ADDR_6 (0x2a) 238 - #define SR_IEEE_ADDR_6 0x2a, 0xff, 0 239 - #define RG_IEEE_ADDR_7 (0x2b) 240 - #define SR_IEEE_ADDR_7 0x2b, 0xff, 0 241 - #define
RG_XAH_CTRL_0 (0x2c) 242 - #define SR_SLOTTED_OPERATION 0x2c, 0x01, 0 243 - #define SR_MAX_CSMA_RETRIES 0x2c, 0x0e, 1 244 - #define SR_MAX_FRAME_RETRIES 0x2c, 0xf0, 4 245 - #define RG_CSMA_SEED_0 (0x2d) 246 - #define SR_CSMA_SEED_0 0x2d, 0xff, 0 247 - #define RG_CSMA_SEED_1 (0x2e) 248 - #define SR_CSMA_SEED_1 0x2e, 0x07, 0 249 - #define SR_AACK_I_AM_COORD 0x2e, 0x08, 3 250 - #define SR_AACK_DIS_ACK 0x2e, 0x10, 4 251 - #define SR_AACK_SET_PD 0x2e, 0x20, 5 252 - #define SR_AACK_FVN_MODE 0x2e, 0xc0, 6 253 - #define RG_CSMA_BE (0x2f) 254 - #define SR_MIN_BE 0x2f, 0x0f, 0 255 - #define SR_MAX_BE 0x2f, 0xf0, 4 104 + #define RG_TRX_STATUS (0x01) 105 + #define SR_TRX_STATUS 0x01, 0x1f, 0 106 + #define SR_RESERVED_01_3 0x01, 0x20, 5 107 + #define SR_CCA_STATUS 0x01, 0x40, 6 108 + #define SR_CCA_DONE 0x01, 0x80, 7 109 + #define RG_TRX_STATE (0x02) 110 + #define SR_TRX_CMD 0x02, 0x1f, 0 111 + #define SR_TRAC_STATUS 0x02, 0xe0, 5 112 + #define RG_TRX_CTRL_0 (0x03) 113 + #define SR_CLKM_CTRL 0x03, 0x07, 0 114 + #define SR_CLKM_SHA_SEL 0x03, 0x08, 3 115 + #define SR_PAD_IO_CLKM 0x03, 0x30, 4 116 + #define SR_PAD_IO 0x03, 0xc0, 6 117 + #define RG_TRX_CTRL_1 (0x04) 118 + #define SR_IRQ_POLARITY 0x04, 0x01, 0 119 + #define SR_IRQ_MASK_MODE 0x04, 0x02, 1 120 + #define SR_SPI_CMD_MODE 0x04, 0x0c, 2 121 + #define SR_RX_BL_CTRL 0x04, 0x10, 4 122 + #define SR_TX_AUTO_CRC_ON 0x04, 0x20, 5 123 + #define SR_IRQ_2_EXT_EN 0x04, 0x40, 6 124 + #define SR_PA_EXT_EN 0x04, 0x80, 7 125 + #define RG_PHY_TX_PWR (0x05) 126 + #define SR_TX_PWR 0x05, 0x0f, 0 127 + #define SR_PA_LT 0x05, 0x30, 4 128 + #define SR_PA_BUF_LT 0x05, 0xc0, 6 129 + #define RG_PHY_RSSI (0x06) 130 + #define SR_RSSI 0x06, 0x1f, 0 131 + #define SR_RND_VALUE 0x06, 0x60, 5 132 + #define SR_RX_CRC_VALID 0x06, 0x80, 7 133 + #define RG_PHY_ED_LEVEL (0x07) 134 + #define SR_ED_LEVEL 0x07, 0xff, 0 135 + #define RG_PHY_CC_CCA (0x08) 136 + #define SR_CHANNEL 0x08, 0x1f, 0 137 + #define SR_CCA_MODE 0x08, 0x60, 5 138 + #define SR_CCA_REQUEST 
0x08, 0x80, 7 139 + #define RG_CCA_THRES (0x09) 140 + #define SR_CCA_ED_THRES 0x09, 0x0f, 0 141 + #define SR_RESERVED_09_1 0x09, 0xf0, 4 142 + #define RG_RX_CTRL (0x0a) 143 + #define SR_PDT_THRES 0x0a, 0x0f, 0 144 + #define SR_RESERVED_0a_1 0x0a, 0xf0, 4 145 + #define RG_SFD_VALUE (0x0b) 146 + #define SR_SFD_VALUE 0x0b, 0xff, 0 147 + #define RG_TRX_CTRL_2 (0x0c) 148 + #define SR_OQPSK_DATA_RATE 0x0c, 0x03, 0 149 + #define SR_SUB_MODE 0x0c, 0x04, 2 150 + #define SR_BPSK_QPSK 0x0c, 0x08, 3 151 + #define SR_OQPSK_SUB1_RC_EN 0x0c, 0x10, 4 152 + #define SR_RESERVED_0c_5 0x0c, 0x60, 5 153 + #define SR_RX_SAFE_MODE 0x0c, 0x80, 7 154 + #define RG_ANT_DIV (0x0d) 155 + #define SR_ANT_CTRL 0x0d, 0x03, 0 156 + #define SR_ANT_EXT_SW_EN 0x0d, 0x04, 2 157 + #define SR_ANT_DIV_EN 0x0d, 0x08, 3 158 + #define SR_RESERVED_0d_2 0x0d, 0x70, 4 159 + #define SR_ANT_SEL 0x0d, 0x80, 7 160 + #define RG_IRQ_MASK (0x0e) 161 + #define SR_IRQ_MASK 0x0e, 0xff, 0 162 + #define RG_IRQ_STATUS (0x0f) 163 + #define SR_IRQ_0_PLL_LOCK 0x0f, 0x01, 0 164 + #define SR_IRQ_1_PLL_UNLOCK 0x0f, 0x02, 1 165 + #define SR_IRQ_2_RX_START 0x0f, 0x04, 2 166 + #define SR_IRQ_3_TRX_END 0x0f, 0x08, 3 167 + #define SR_IRQ_4_CCA_ED_DONE 0x0f, 0x10, 4 168 + #define SR_IRQ_5_AMI 0x0f, 0x20, 5 169 + #define SR_IRQ_6_TRX_UR 0x0f, 0x40, 6 170 + #define SR_IRQ_7_BAT_LOW 0x0f, 0x80, 7 171 + #define RG_VREG_CTRL (0x10) 172 + #define SR_RESERVED_10_6 0x10, 0x03, 0 173 + #define SR_DVDD_OK 0x10, 0x04, 2 174 + #define SR_DVREG_EXT 0x10, 0x08, 3 175 + #define SR_RESERVED_10_3 0x10, 0x30, 4 176 + #define SR_AVDD_OK 0x10, 0x40, 6 177 + #define SR_AVREG_EXT 0x10, 0x80, 7 178 + #define RG_BATMON (0x11) 179 + #define SR_BATMON_VTH 0x11, 0x0f, 0 180 + #define SR_BATMON_HR 0x11, 0x10, 4 181 + #define SR_BATMON_OK 0x11, 0x20, 5 182 + #define SR_RESERVED_11_1 0x11, 0xc0, 6 183 + #define RG_XOSC_CTRL (0x12) 184 + #define SR_XTAL_TRIM 0x12, 0x0f, 0 185 + #define SR_XTAL_MODE 0x12, 0xf0, 4 186 + #define RG_RX_SYN (0x15) 187 + #define 
SR_RX_PDT_LEVEL 0x15, 0x0f, 0 188 + #define SR_RESERVED_15_2 0x15, 0x70, 4 189 + #define SR_RX_PDT_DIS 0x15, 0x80, 7 190 + #define RG_XAH_CTRL_1 (0x17) 191 + #define SR_RESERVED_17_8 0x17, 0x01, 0 192 + #define SR_AACK_PROM_MODE 0x17, 0x02, 1 193 + #define SR_AACK_ACK_TIME 0x17, 0x04, 2 194 + #define SR_RESERVED_17_5 0x17, 0x08, 3 195 + #define SR_AACK_UPLD_RES_FT 0x17, 0x10, 4 196 + #define SR_AACK_FLTR_RES_FT 0x17, 0x20, 5 197 + #define SR_CSMA_LBT_MODE 0x17, 0x40, 6 198 + #define SR_RESERVED_17_1 0x17, 0x80, 7 199 + #define RG_FTN_CTRL (0x18) 200 + #define SR_RESERVED_18_2 0x18, 0x7f, 0 201 + #define SR_FTN_START 0x18, 0x80, 7 202 + #define RG_PLL_CF (0x1a) 203 + #define SR_RESERVED_1a_2 0x1a, 0x7f, 0 204 + #define SR_PLL_CF_START 0x1a, 0x80, 7 205 + #define RG_PLL_DCU (0x1b) 206 + #define SR_RESERVED_1b_3 0x1b, 0x3f, 0 207 + #define SR_RESERVED_1b_2 0x1b, 0x40, 6 208 + #define SR_PLL_DCU_START 0x1b, 0x80, 7 209 + #define RG_PART_NUM (0x1c) 210 + #define SR_PART_NUM 0x1c, 0xff, 0 211 + #define RG_VERSION_NUM (0x1d) 212 + #define SR_VERSION_NUM 0x1d, 0xff, 0 213 + #define RG_MAN_ID_0 (0x1e) 214 + #define SR_MAN_ID_0 0x1e, 0xff, 0 215 + #define RG_MAN_ID_1 (0x1f) 216 + #define SR_MAN_ID_1 0x1f, 0xff, 0 217 + #define RG_SHORT_ADDR_0 (0x20) 218 + #define SR_SHORT_ADDR_0 0x20, 0xff, 0 219 + #define RG_SHORT_ADDR_1 (0x21) 220 + #define SR_SHORT_ADDR_1 0x21, 0xff, 0 221 + #define RG_PAN_ID_0 (0x22) 222 + #define SR_PAN_ID_0 0x22, 0xff, 0 223 + #define RG_PAN_ID_1 (0x23) 224 + #define SR_PAN_ID_1 0x23, 0xff, 0 225 + #define RG_IEEE_ADDR_0 (0x24) 226 + #define SR_IEEE_ADDR_0 0x24, 0xff, 0 227 + #define RG_IEEE_ADDR_1 (0x25) 228 + #define SR_IEEE_ADDR_1 0x25, 0xff, 0 229 + #define RG_IEEE_ADDR_2 (0x26) 230 + #define SR_IEEE_ADDR_2 0x26, 0xff, 0 231 + #define RG_IEEE_ADDR_3 (0x27) 232 + #define SR_IEEE_ADDR_3 0x27, 0xff, 0 233 + #define RG_IEEE_ADDR_4 (0x28) 234 + #define SR_IEEE_ADDR_4 0x28, 0xff, 0 235 + #define RG_IEEE_ADDR_5 (0x29) 236 + #define SR_IEEE_ADDR_5 0x29, 
0xff, 0 237 + #define RG_IEEE_ADDR_6 (0x2a) 238 + #define SR_IEEE_ADDR_6 0x2a, 0xff, 0 239 + #define RG_IEEE_ADDR_7 (0x2b) 240 + #define SR_IEEE_ADDR_7 0x2b, 0xff, 0 241 + #define RG_XAH_CTRL_0 (0x2c) 242 + #define SR_SLOTTED_OPERATION 0x2c, 0x01, 0 243 + #define SR_MAX_CSMA_RETRIES 0x2c, 0x0e, 1 244 + #define SR_MAX_FRAME_RETRIES 0x2c, 0xf0, 4 245 + #define RG_CSMA_SEED_0 (0x2d) 246 + #define SR_CSMA_SEED_0 0x2d, 0xff, 0 247 + #define RG_CSMA_SEED_1 (0x2e) 248 + #define SR_CSMA_SEED_1 0x2e, 0x07, 0 249 + #define SR_AACK_I_AM_COORD 0x2e, 0x08, 3 250 + #define SR_AACK_DIS_ACK 0x2e, 0x10, 4 251 + #define SR_AACK_SET_PD 0x2e, 0x20, 5 252 + #define SR_AACK_FVN_MODE 0x2e, 0xc0, 6 253 + #define RG_CSMA_BE (0x2f) 254 + #define SR_MIN_BE 0x2f, 0x0f, 0 255 + #define SR_MAX_BE 0x2f, 0xf0, 4 256 256 257 257 #define CMD_REG 0x80 258 258 #define CMD_REG_MASK 0x3f ··· 293 291 #define STATE_RX_AACK_ON_NOCLK 0x1D 294 292 #define STATE_BUSY_RX_AACK_NOCLK 0x1E 295 293 #define STATE_TRANSITION_IN_PROGRESS 0x1F 294 + 295 + #define TRX_STATE_MASK (0x1F) 296 296 297 297 #define AT86RF2XX_NUMREGS 0x3F 298 298 ··· 338 334 unsigned int shift, unsigned int data) 339 335 { 340 336 return regmap_update_bits(lp->regmap, addr, mask, data << shift); 337 + } 338 + 339 + static inline void 340 + at86rf230_slp_tr_rising_edge(struct at86rf230_local *lp) 341 + { 342 + gpio_set_value(lp->slp_tr, 1); 343 + udelay(1); 344 + gpio_set_value(lp->slp_tr, 0); 341 345 } 342 346 343 347 static bool ··· 521 509 struct at86rf230_state_change *ctx = context; 522 510 struct at86rf230_local *lp = ctx->lp; 523 511 const u8 *buf = ctx->buf; 524 - const u8 trx_state = buf[1] & 0x1f; 512 + const u8 trx_state = buf[1] & TRX_STATE_MASK; 525 513 526 514 /* Assert state change */ 527 515 if (trx_state != ctx->to_state) { ··· 621 609 switch (ctx->to_state) { 622 610 case STATE_RX_AACK_ON: 623 611 tim = ktime_set(0, c->t_off_to_aack * NSEC_PER_USEC); 612 + /* state change from TRX_OFF to RX_AACK_ON to do a 613 + * 
calibration, we need to reset the timeout for the 614 + * next one. 615 + */ 616 + lp->cal_timeout = jiffies + AT86RF2XX_CAL_LOOP_TIMEOUT; 624 617 goto change; 618 + case STATE_TX_ARET_ON: 625 619 case STATE_TX_ON: 626 620 tim = ktime_set(0, c->t_off_to_tx_on * NSEC_PER_USEC); 627 - /* state change from TRX_OFF to TX_ON to do a 628 - * calibration, we need to reset the timeout for the 621 + /* state change from TRX_OFF to TX_ON or ARET_ON to do 622 + * a calibration, we need to reset the timeout for the 629 623 * next one. 630 624 */ 631 625 lp->cal_timeout = jiffies + AT86RF2XX_CAL_LOOP_TIMEOUT; ··· 685 667 struct at86rf230_state_change *ctx = context; 686 668 struct at86rf230_local *lp = ctx->lp; 687 669 u8 *buf = ctx->buf; 688 - const u8 trx_state = buf[1] & 0x1f; 670 + const u8 trx_state = buf[1] & TRX_STATE_MASK; 689 671 int rc; 690 672 691 673 /* Check for "possible" STATE_TRANSITION_IN_PROGRESS */ ··· 791 773 } 792 774 793 775 static void 794 - at86rf230_tx_trac_error(void *context) 795 - { 796 - struct at86rf230_state_change *ctx = context; 797 - struct at86rf230_local *lp = ctx->lp; 798 - 799 - at86rf230_async_state_change(lp, ctx, STATE_TX_ON, 800 - at86rf230_tx_on, true); 801 - } 802 - 803 - static void 804 776 at86rf230_tx_trac_check(void *context) 805 777 { 806 778 struct at86rf230_state_change *ctx = context; ··· 799 791 const u8 trac = (buf[1] & 0xe0) >> 5; 800 792 801 793 /* If trac status is different than zero we need to do a state change 802 - * to STATE_FORCE_TRX_OFF then STATE_TX_ON to recover the transceiver 803 - * state to TX_ON. 794 + * to STATE_FORCE_TRX_OFF then STATE_RX_AACK_ON to recover the 795 + * transceiver. 
804 796 */ 805 797 if (trac) 806 798 at86rf230_async_state_change(lp, ctx, STATE_FORCE_TRX_OFF, 807 - at86rf230_tx_trac_error, true); 799 + at86rf230_tx_on, true); 808 800 else 809 801 at86rf230_tx_on(context); 810 802 } ··· 949 941 u8 *buf = ctx->buf; 950 942 int rc; 951 943 952 - buf[0] = (RG_TRX_STATE & CMD_REG_MASK) | CMD_REG | CMD_WRITE; 953 - buf[1] = STATE_BUSY_TX; 954 944 ctx->trx.len = 2; 955 - ctx->msg.complete = NULL; 956 - rc = spi_async(lp->spi, &ctx->msg); 957 - if (rc) 958 - at86rf230_async_error(lp, ctx, rc); 945 + 946 + if (gpio_is_valid(lp->slp_tr)) { 947 + at86rf230_slp_tr_rising_edge(lp); 948 + } else { 949 + buf[0] = (RG_TRX_STATE & CMD_REG_MASK) | CMD_REG | CMD_WRITE; 950 + buf[1] = STATE_BUSY_TX; 951 + ctx->msg.complete = NULL; 952 + rc = spi_async(lp->spi, &ctx->msg); 953 + if (rc) 954 + at86rf230_async_error(lp, ctx, rc); 955 + } 959 956 } 960 957 961 958 static void ··· 1006 993 * are in STATE_TX_ON. The pfad differs here, so we change 1007 994 * the complete handler. 1008 995 */ 1009 - if (lp->tx_aret) 1010 - at86rf230_async_state_change(lp, ctx, STATE_TX_ON, 1011 - at86rf230_xmit_tx_on, false); 1012 - else 996 + if (lp->tx_aret) { 997 + if (lp->is_tx_from_off) { 998 + lp->is_tx_from_off = false; 999 + at86rf230_async_state_change(lp, ctx, STATE_TX_ARET_ON, 1000 + at86rf230_xmit_tx_on, 1001 + false); 1002 + } else { 1003 + at86rf230_async_state_change(lp, ctx, STATE_TX_ON, 1004 + at86rf230_xmit_tx_on, 1005 + false); 1006 + } 1007 + } else { 1013 1008 at86rf230_async_state_change(lp, ctx, STATE_TX_ON, 1014 1009 at86rf230_write_frame, false); 1010 + } 1015 1011 } 1016 1012 1017 1013 static int ··· 1039 1017 * to TX_ON, the lp->cal_timeout should be reinit by state_delay 1040 1018 * function then to start in the next 5 minutes. 
1041 1019 */ 1042 - if (time_is_before_jiffies(lp->cal_timeout)) 1020 + if (time_is_before_jiffies(lp->cal_timeout)) { 1021 + lp->is_tx_from_off = true; 1043 1022 at86rf230_async_state_change(lp, ctx, STATE_TRX_OFF, 1044 1023 at86rf230_xmit_start, false); 1045 - else 1024 + } else { 1046 1025 at86rf230_xmit_start(ctx); 1026 + } 1047 1027 1048 1028 return 0; 1049 1029 } ··· 1061 1037 static int 1062 1038 at86rf230_start(struct ieee802154_hw *hw) 1063 1039 { 1064 - struct at86rf230_local *lp = hw->priv; 1065 - 1066 - lp->cal_timeout = jiffies + AT86RF2XX_CAL_LOOP_TIMEOUT; 1067 1040 return at86rf230_sync_state_change(hw->priv, STATE_RX_AACK_ON); 1068 1041 } 1069 1042 ··· 1694 1673 lp = hw->priv; 1695 1674 lp->hw = hw; 1696 1675 lp->spi = spi; 1676 + lp->slp_tr = slp_tr; 1697 1677 hw->parent = &spi->dev; 1698 1678 hw->vif_data_size = sizeof(*lp); 1699 1679 ieee802154_random_extended_addr(&hw->phy->perm_extended_addr);
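The at86rf230 hunks above replace the bare `buf[1] & 0x1f` with a named `TRX_STATE_MASK`, because the TRX_STATUS register packs a 5-bit state field under the CCA_STATUS/CCA_DONE flag bits. A minimal sketch of why the mask matters for the `trx_state != ctx->to_state` assertion; the helper name is ours, not the driver's, and the constants mirror the register layout shown in the diff:

```c
#include <stdint.h>

/* Bit layout of TRX_STATUS per the defines above: bits 0-4 hold the
 * transceiver state, bit 6 is CCA_STATUS, bit 7 is CCA_DONE. */
#define TRX_STATE_MASK   0x1F
#define CCA_DONE         0x80
#define STATE_RX_AACK_ON 0x16

/* Hypothetical helper: strip the CCA flag bits so a state comparison
 * cannot be fooled by a concurrent CCA completion. */
static uint8_t trx_state(uint8_t trx_status)
{
	return trx_status & TRX_STATE_MASK;
}
```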
+15
drivers/net/macvlan.c
··· 599 599 goto del_unicast; 600 600 } 601 601 602 + if (dev->flags & IFF_PROMISC) { 603 + err = dev_set_promiscuity(lowerdev, 1); 604 + if (err < 0) 605 + goto clear_multi; 606 + } 607 + 602 608 hash_add: 603 609 macvlan_hash_add(vlan); 604 610 return 0; 605 611 612 + clear_multi: 613 + dev_set_allmulti(lowerdev, -1); 606 614 del_unicast: 607 615 dev_uc_del(lowerdev, dev->dev_addr); 608 616 out: ··· 645 637 646 638 if (dev->flags & IFF_ALLMULTI) 647 639 dev_set_allmulti(lowerdev, -1); 640 + 641 + if (dev->flags & IFF_PROMISC) 642 + dev_set_promiscuity(lowerdev, -1); 648 643 649 644 dev_uc_del(lowerdev, dev->dev_addr); 650 645 ··· 707 696 if (dev->flags & IFF_UP) { 708 697 if (change & IFF_ALLMULTI) 709 698 dev_set_allmulti(lowerdev, dev->flags & IFF_ALLMULTI ? 1 : -1); 699 + if (change & IFF_PROMISC) 700 + dev_set_promiscuity(lowerdev, 701 + dev->flags & IFF_PROMISC ? 1 : -1); 702 + 710 703 } 711 704 } 712 705
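The new `clear_multi` label in the macvlan hunk extends the classic goto-unwind chain: each successfully acquired piece of state has a label that undoes it, and a failure jumps to the label just below the last success, falling through the earlier releases in reverse order. A self-contained sketch of that pattern with hypothetical acquire/release counters (not the macvlan API):

```c
#include <stdbool.h>

static int refs[3]; /* balance counters for three fake resources */

static int acquire(int i, bool fail) { if (fail) return -1; refs[i]++; return 0; }
static void release(int i) { refs[i]--; }

/* Acquire resources 0..2 in order; on failure, unwind in reverse
 * order via fall-through labels, as the macvlan patch does. */
static int open_all(bool fail0, bool fail1, bool fail2)
{
	int err;

	err = acquire(0, fail0);
	if (err)
		goto out;
	err = acquire(1, fail1);
	if (err)
		goto put0;
	err = acquire(2, fail2);
	if (err)
		goto put1;
	return 0;

put1:
	release(1);
put0:
	release(0);
out:
	return err;
}
```

After a failed call every counter is back to its prior value, which is exactly the invariant the `clear_multi`/`del_unicast` chain maintains.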
+1
drivers/net/phy/Kconfig
··· 27 27 config AMD_XGBE_PHY 28 28 tristate "Driver for the AMD 10GbE (amd-xgbe) PHYs" 29 29 depends on (OF || ACPI) && HAS_IOMEM 30 + depends on ARM64 || COMPILE_TEST 30 31 ---help--- 31 32 Currently supports the AMD 10GbE PHY 32 33
+4 -1
drivers/net/phy/mdio-gpio.c
··· 168 168 if (!new_bus->irq[i]) 169 169 new_bus->irq[i] = PHY_POLL; 170 170 171 - snprintf(new_bus->id, MII_BUS_ID_SIZE, "gpio-%x", bus_id); 171 + if (bus_id != -1) 172 + snprintf(new_bus->id, MII_BUS_ID_SIZE, "gpio-%x", bus_id); 173 + else 174 + strncpy(new_bus->id, "gpio", MII_BUS_ID_SIZE); 172 175 173 176 if (devm_gpio_request(dev, bitbang->mdc, "mdc")) 174 177 goto out_free_bus;
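The mdio-gpio hunk falls back to a plain "gpio" name when `bus_id` is -1 (the "no id" value used for auto-assigned platform devices), instead of formatting "gpio-ffffffff". A hedged userspace sketch of the naming rule; the buffer size and helper name are assumptions (the patch itself uses `strncpy` for the fallback, while this sketch uses `snprintf` for a guaranteed NUL):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: "gpio-<hex id>" for a real id, bare "gpio"
 * for the -1 (no-id) case, as in the mdio-gpio patch. */
static const char *mdio_gpio_bus_name(int bus_id)
{
	static char buf[32]; /* stand-in for MII_BUS_ID_SIZE */

	if (bus_id != -1)
		snprintf(buf, sizeof(buf), "gpio-%x", bus_id);
	else
		snprintf(buf, sizeof(buf), "gpio");
	return buf;
}
```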
+2 -1
drivers/net/phy/micrel.c
··· 548 548 } 549 549 550 550 clk = devm_clk_get(&phydev->dev, "rmii-ref"); 551 - if (!IS_ERR(clk)) { 551 + /* NOTE: clk may be NULL if building without CONFIG_HAVE_CLK */ 552 + if (!IS_ERR_OR_NULL(clk)) { 552 553 unsigned long rate = clk_get_rate(clk); 553 554 bool rmii_ref_clk_sel_25_mhz; 554 555
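The micrel change from `IS_ERR(clk)` to `IS_ERR_OR_NULL(clk)` matters because a NULL pointer is not an ERR_PTR: the kernel encodes errno values in the top 4095 pointer values, so `IS_ERR(NULL)` is false and a NULL clock (as returned without clk support, per the new comment) would fall through into `clk_get_rate()`. A simplified userspace model of the err.h macros:

```c
#include <stddef.h>

#define MAX_ERRNO 4095

/* Userspace model of the kernel's ERR_PTR encoding: an errno lives in
 * the last page of the address space, so NULL is *not* an error. */
static void *ERR_PTR(long error) { return (void *)error; }

static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || IS_ERR(ptr);
}
```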
+4
drivers/net/ppp/pppoe.c
··· 465 465 struct sock *sk = sk_pppox(po); 466 466 467 467 lock_sock(sk); 468 + if (po->pppoe_dev) { 469 + dev_put(po->pppoe_dev); 470 + po->pppoe_dev = NULL; 471 + } 468 472 pppox_unbind_sock(sk); 469 473 release_sock(sk); 470 474 sock_put(sk);
+1
drivers/net/usb/r8152.c
··· 4116 4116 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153)}, 4117 4117 {REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)}, 4118 4118 {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)}, 4119 + {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x304f)}, 4119 4120 {} 4120 4121 }; 4121 4122
+2 -2
drivers/net/usb/usbnet.c
··· 1285 1285 struct net_device *net) 1286 1286 { 1287 1287 struct usbnet *dev = netdev_priv(net); 1288 - int length; 1288 + unsigned int length; 1289 1289 struct urb *urb = NULL; 1290 1290 struct skb_data *entry; 1291 1291 struct driver_info *info = dev->driver_info; ··· 1413 1413 } 1414 1414 } else 1415 1415 netif_dbg(dev, tx_queued, dev->net, 1416 - "> tx, len %d, type 0x%x\n", length, skb->protocol); 1416 + "> tx, len %u, type 0x%x\n", length, skb->protocol); 1417 1417 #ifdef CONFIG_PM 1418 1418 deferred: 1419 1419 #endif
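The usbnet hunk is a type-consistency cleanup: `length` becomes `unsigned int` and the format specifier `%u` to match, since a USB transfer length can never be negative. As an aside on why mixed signedness around lengths is worth cleaning up, the classic promotion hazard (illustrative only; not a bug the patch fixes):

```c
#include <stddef.h>

/* In `len < sizeof(...)` the signed len is implicitly converted to
 * size_t, so -1 becomes SIZE_MAX and a "too short" check wrongly
 * reports the frame as long enough. */
static int too_short(int len)
{
	return (size_t)len < sizeof(long long); /* what the compiler does */
}

/* Keeping the length unsigned end-to-end (as the usbnet patch does)
 * removes the negative case entirely. */
static int too_short_unsigned(unsigned int len)
{
	return len < sizeof(long long);
}
```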
+25 -27
drivers/net/wireless/ath/ath9k/xmit.c
··· 1103 1103 struct sk_buff *skb; 1104 1104 struct ath_frame_info *fi; 1105 1105 struct ieee80211_tx_info *info; 1106 - struct ieee80211_vif *vif; 1107 1106 struct ath_hw *ah = sc->sc_ah; 1108 1107 1109 1108 if (sc->tx99_state || !ah->tpc_enabled) 1110 1109 return MAX_RATE_POWER; 1111 1110 1112 1111 skb = bf->bf_mpdu; 1113 - info = IEEE80211_SKB_CB(skb); 1114 - vif = info->control.vif; 1115 - 1116 - if (!vif) { 1117 - max_power = sc->cur_chan->cur_txpower; 1118 - goto out; 1119 - } 1120 - 1121 - if (vif->bss_conf.txpower_type != NL80211_TX_POWER_LIMITED) { 1122 - max_power = min_t(u8, sc->cur_chan->cur_txpower, 1123 - 2 * vif->bss_conf.txpower); 1124 - goto out; 1125 - } 1126 - 1127 1112 fi = get_frame_info(skb); 1113 + info = IEEE80211_SKB_CB(skb); 1128 1114 1129 1115 if (!AR_SREV_9300_20_OR_LATER(ah)) { 1130 1116 int txpower = fi->tx_power; ··· 1147 1161 txpower -= 2; 1148 1162 1149 1163 txpower = max(txpower, 0); 1150 - max_power = min_t(u8, ah->tx_power[rateidx], 1151 - 2 * vif->bss_conf.txpower); 1152 - max_power = min_t(u8, max_power, txpower); 1164 + max_power = min_t(u8, ah->tx_power[rateidx], txpower); 1165 + 1166 + /* XXX: clamp minimum TX power at 1 for AR9160 since if 1167 + * max_power is set to 0, frames are transmitted at max 1168 + * TX power 1169 + */ 1170 + if (!max_power && !AR_SREV_9280_20_OR_LATER(ah)) 1171 + max_power = 1; 1153 1172 } else if (!bf->bf_state.bfs_paprd) { 1154 1173 if (rateidx < 8 && (info->flags & IEEE80211_TX_CTL_STBC)) 1155 1174 max_power = min_t(u8, ah->tx_power_stbc[rateidx], 1156 - 2 * vif->bss_conf.txpower); 1175 + fi->tx_power); 1157 1176 else 1158 1177 max_power = min_t(u8, ah->tx_power[rateidx], 1159 - 2 * vif->bss_conf.txpower); 1160 - max_power = min(max_power, fi->tx_power); 1178 + fi->tx_power); 1161 1179 } else { 1162 1180 max_power = ah->paprd_training_power; 1163 1181 } 1164 - out: 1165 - /* XXX: clamp minimum TX power at 1 for AR9160 since if max_power 1166 - * is set to 0, frames are transmitted at max TX 
power 1167 - */ 1168 - return (!max_power && !AR_SREV_9280_20_OR_LATER(ah)) ? 1 : max_power; 1182 + 1183 + return max_power; 1169 1184 } 1170 1185 1171 1186 static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf, ··· 2116 2129 struct ath_node *an = NULL; 2117 2130 enum ath9k_key_type keytype; 2118 2131 bool short_preamble = false; 2132 + u8 txpower; 2119 2133 2120 2134 /* 2121 2135 * We check if Short Preamble is needed for the CTS rate by ··· 2133 2145 if (sta) 2134 2146 an = (struct ath_node *) sta->drv_priv; 2135 2147 2148 + if (tx_info->control.vif) { 2149 + struct ieee80211_vif *vif = tx_info->control.vif; 2150 + 2151 + txpower = 2 * vif->bss_conf.txpower; 2152 + } else { 2153 + struct ath_softc *sc = hw->priv; 2154 + 2155 + txpower = sc->cur_chan->cur_txpower; 2156 + } 2157 + 2136 2158 memset(fi, 0, sizeof(*fi)); 2137 2159 fi->txq = -1; 2138 2160 if (hw_key) ··· 2153 2155 fi->keyix = ATH9K_TXKEYIX_INVALID; 2154 2156 fi->keytype = keytype; 2155 2157 fi->framelen = framelen; 2156 - fi->tx_power = MAX_RATE_POWER; 2158 + fi->tx_power = txpower; 2157 2159 2158 2160 if (!rate) 2159 2161 return;
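The ath9k rework computes the per-frame power limit once at frame setup (`fi->tx_power = txpower`) and then clamps each rate's power with `min_t(u8, ...)`, keeping the AR9160 quirk that a power field of 0 means "maximum". A sketch of that clamp shape; `min_t` is modeled here and the limit values are invented:

```c
#include <stdint.h>

/* Model of the kernel's min_t(u8, a, b): cast both sides first. */
#define min_t_u8(a, b) \
	((uint8_t)(a) < (uint8_t)(b) ? (uint8_t)(a) : (uint8_t)(b))

/* Effective TX power is the tighter of the per-rate hardware limit
 * and the per-frame limit; on pre-AR9280 parts a value of 0 would
 * transmit at full power, so clamp the minimum to 1 there (as the
 * XXX comment in the diff notes). */
static uint8_t effective_power(uint8_t rate_limit, uint8_t frame_limit,
			       int pre_ar9280)
{
	uint8_t power = min_t_u8(rate_limit, frame_limit);

	if (!power && pre_ar9280)
		power = 1;
	return power;
}
```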
+2
drivers/net/wireless/iwlwifi/iwl-fw-file.h
··· 244 244 * longer than the passive one, which is essential for fragmented scan. 245 245 * @IWL_UCODE_TLV_API_WIFI_MCC_UPDATE: ucode supports MCC updates with source. 246 246 * IWL_UCODE_TLV_API_HDC_PHASE_0: ucode supports finer configuration of LTR 247 + * @IWL_UCODE_TLV_API_TX_POWER_DEV: new API for tx power. 247 248 * @IWL_UCODE_TLV_API_BASIC_DWELL: use only basic dwell time in scan command, 248 249 * regardless of the band or the number of the probes. FW will calculate 249 250 * the actual dwell time. ··· 261 260 IWL_UCODE_TLV_API_FRAGMENTED_SCAN = BIT(8), 262 261 IWL_UCODE_TLV_API_WIFI_MCC_UPDATE = BIT(9), 263 262 IWL_UCODE_TLV_API_HDC_PHASE_0 = BIT(10), 263 + IWL_UCODE_TLV_API_TX_POWER_DEV = BIT(11), 264 264 IWL_UCODE_TLV_API_BASIC_DWELL = BIT(13), 265 265 IWL_UCODE_TLV_API_SCD_CFG = BIT(15), 266 266 IWL_UCODE_TLV_API_SINGLE_SCAN_EBS = BIT(16),
+27 -14
drivers/net/wireless/iwlwifi/iwl-trans.h
··· 6 6 * GPL LICENSE SUMMARY 7 7 * 8 8 * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved. 9 - * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 9 + * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of version 2 of the GNU General Public License as ··· 32 32 * BSD LICENSE 33 33 * 34 34 * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. 35 - * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 35 + * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 36 36 * All rights reserved. 37 37 * 38 38 * Redistribution and use in source and binary forms, with or without ··· 421 421 * 422 422 * All the handlers MUST be implemented 423 423 * 424 - * @start_hw: starts the HW- from that point on, the HW can send interrupts 425 - * May sleep 424 + * @start_hw: starts the HW. If low_power is true, the NIC needs to be taken 425 + * out of a low power state. From that point on, the HW can send 426 + * interrupts. May sleep. 426 427 * @op_mode_leave: Turn off the HW RF kill indication if on 427 428 * May sleep 428 429 * @start_fw: allocates and inits all the resources for the transport ··· 433 432 * the SCD base address in SRAM, then provide it here, or 0 otherwise. 434 433 * May sleep 435 434 * @stop_device: stops the whole device (embedded CPU put to reset) and stops 436 - * the HW. From that point on, the HW will be in low power but will still 437 - * issue interrupt if the HW RF kill is triggered. This callback must do 438 - * the right thing and not crash even if start_hw() was called but not 439 - * start_fw(). May sleep 435 + * the HW. If low_power is true, the NIC will be put in low power state. 436 + * From that point on, the HW will be stopped but will still issue an 437 + * interrupt if the HW RF kill switch is triggered. 
438 + * This callback must do the right thing and not crash even if %start_hw() 439 + * was called but not &start_fw(). May sleep. 440 440 * @d3_suspend: put the device into the correct mode for WoWLAN during 441 441 * suspend. This is optional, if not implemented WoWLAN will not be 442 442 * supported. This callback may sleep. ··· 493 491 */ 494 492 struct iwl_trans_ops { 495 493 496 - int (*start_hw)(struct iwl_trans *iwl_trans); 494 + int (*start_hw)(struct iwl_trans *iwl_trans, bool low_power); 497 495 void (*op_mode_leave)(struct iwl_trans *iwl_trans); 498 496 int (*start_fw)(struct iwl_trans *trans, const struct fw_img *fw, 499 497 bool run_in_rfkill); 500 498 int (*update_sf)(struct iwl_trans *trans, 501 499 struct iwl_sf_region *st_fwrd_space); 502 500 void (*fw_alive)(struct iwl_trans *trans, u32 scd_addr); 503 - void (*stop_device)(struct iwl_trans *trans); 501 + void (*stop_device)(struct iwl_trans *trans, bool low_power); 504 502 505 503 void (*d3_suspend)(struct iwl_trans *trans, bool test); 506 504 int (*d3_resume)(struct iwl_trans *trans, enum iwl_d3_status *status, ··· 654 652 trans->ops->configure(trans, trans_cfg); 655 653 } 656 654 657 - static inline int iwl_trans_start_hw(struct iwl_trans *trans) 655 + static inline int _iwl_trans_start_hw(struct iwl_trans *trans, bool low_power) 658 656 { 659 657 might_sleep(); 660 658 661 - return trans->ops->start_hw(trans); 659 + return trans->ops->start_hw(trans, low_power); 660 + } 661 + 662 + static inline int iwl_trans_start_hw(struct iwl_trans *trans) 663 + { 664 + return trans->ops->start_hw(trans, true); 662 665 } 663 666 664 667 static inline void iwl_trans_op_mode_leave(struct iwl_trans *trans) ··· 710 703 return 0; 711 704 } 712 705 713 - static inline void iwl_trans_stop_device(struct iwl_trans *trans) 706 + static inline void _iwl_trans_stop_device(struct iwl_trans *trans, 707 + bool low_power) 714 708 { 715 709 might_sleep(); 716 710 717 - trans->ops->stop_device(trans); 711 + 
trans->ops->stop_device(trans, low_power); 718 712 719 713 trans->state = IWL_TRANS_NO_FW; 714 + } 715 + 716 + static inline void iwl_trans_stop_device(struct iwl_trans *trans) 717 + { 718 + _iwl_trans_stop_device(trans, true); 720 719 } 721 720 722 721 static inline void iwl_trans_d3_suspend(struct iwl_trans *trans, bool test)
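Because C has no default arguments, the iwl-trans change keeps the old entry points as one-line wrappers: `iwl_trans_start_hw(trans)` forwards to the new `_iwl_trans_start_hw(trans, true)`, so existing callers compile unchanged while new code can pass `low_power` explicitly (same pattern for `stop_device`). A sketch of that pattern with a stub ops table; all names here are illustrative, not the iwlwifi API:

```c
#include <stdbool.h>

struct trans_ops {
	int (*start_hw)(bool low_power);
};

static bool last_low_power;

static int stub_start_hw(bool low_power)
{
	last_low_power = low_power;
	return 0;
}

static const struct trans_ops ops = { .start_hw = stub_start_hw };

/* New entry point: callers choose the power state explicitly. */
static int _trans_start_hw(const struct trans_ops *t, bool low_power)
{
	return t->start_hw(low_power);
}

/* Old entry point preserved as a wrapper supplying the default,
 * so the existing call sites need no edits. */
static int trans_start_hw(const struct trans_ops *t)
{
	return _trans_start_hw(t, true);
}
```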
+1 -1
drivers/net/wireless/iwlwifi/mvm/d3.c
··· 1726 1726 results->matched_profiles = le32_to_cpu(query->matched_profiles); 1727 1727 memcpy(results->matches, query->matches, sizeof(results->matches)); 1728 1728 1729 - #ifdef CPTCFG_IWLWIFI_DEBUGFS 1729 + #ifdef CONFIG_IWLWIFI_DEBUGFS 1730 1730 mvm->last_netdetect_scans = le32_to_cpu(query->n_scans_done); 1731 1731 #endif 1732 1732
+34
drivers/net/wireless/iwlwifi/mvm/fw-api-power.h
··· 298 298 } __packed; 299 299 300 300 /** 301 + * struct iwl_reduce_tx_power_cmd - TX power reduction command 302 + * REDUCE_TX_POWER_CMD = 0x9f 303 + * @flags: (reserved for future implementation) 304 + * @mac_context_id: id of the mac ctx for which we are reducing TX power. 305 + * @pwr_restriction: TX power restriction in dBms. 306 + */ 307 + struct iwl_reduce_tx_power_cmd { 308 + u8 flags; 309 + u8 mac_context_id; 310 + __le16 pwr_restriction; 311 + } __packed; /* TX_REDUCED_POWER_API_S_VER_1 */ 312 + 313 + /** 314 + * struct iwl_dev_tx_power_cmd - TX power reduction command 315 + * REDUCE_TX_POWER_CMD = 0x9f 316 + * @set_mode: 0 - MAC tx power, 1 - device tx power 317 + * @mac_context_id: id of the mac ctx for which we are reducing TX power. 318 + * @pwr_restriction: TX power restriction in 1/8 dBms. 319 + * @dev_24: device TX power restriction in 1/8 dBms 320 + * @dev_52_low: device TX power restriction upper band - low 321 + * @dev_52_high: device TX power restriction upper band - high 322 + */ 323 + struct iwl_dev_tx_power_cmd { 324 + __le32 set_mode; 325 + __le32 mac_context_id; 326 + __le16 pwr_restriction; 327 + __le16 dev_24; 328 + __le16 dev_52_low; 329 + __le16 dev_52_high; 330 + } __packed; /* TX_REDUCED_POWER_API_S_VER_2 */ 331 + 332 + #define IWL_DEV_MAX_TX_POWER 0x7FFF 333 + 334 + /** 301 335 * struct iwl_beacon_filter_cmd 302 336 * REPLY_BEACON_FILTERING_CMD = 0xd2 (command) 303 337 * @id_and_color: MAC contex identifier
+2 -42
drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
··· 122 122 SCAN_COMP_STATUS_ERR_ALLOC_TE = 0x0C, 123 123 }; 124 124 125 - /** 126 - * struct iwl_scan_results_notif - scan results for one channel 127 - * ( SCAN_RESULTS_NOTIFICATION = 0x83 ) 128 - * @channel: which channel the results are from 129 - * @band: 0 for 5.2 GHz, 1 for 2.4 GHz 130 - * @probe_status: SCAN_PROBE_STATUS_*, indicates success of probe request 131 - * @num_probe_not_sent: # of request that weren't sent due to not enough time 132 - * @duration: duration spent in channel, in usecs 133 - * @statistics: statistics gathered for this channel 134 - */ 135 - struct iwl_scan_results_notif { 136 - u8 channel; 137 - u8 band; 138 - u8 probe_status; 139 - u8 num_probe_not_sent; 140 - __le32 duration; 141 - __le32 statistics[SCAN_RESULTS_STATISTICS]; 142 - } __packed; /* SCAN_RESULT_NTF_API_S_VER_2 */ 143 - 144 - /** 145 - * struct iwl_scan_complete_notif - notifies end of scanning (all channels) 146 - * ( SCAN_COMPLETE_NOTIFICATION = 0x84 ) 147 - * @scanned_channels: number of channels scanned (and number of valid results) 148 - * @status: one of SCAN_COMP_STATUS_* 149 - * @bt_status: BT on/off status 150 - * @last_channel: last channel that was scanned 151 - * @tsf_low: TSF timer (lower half) in usecs 152 - * @tsf_high: TSF timer (higher half) in usecs 153 - * @results: array of scan results, only "scanned_channels" of them are valid 154 - */ 155 - struct iwl_scan_complete_notif { 156 - u8 scanned_channels; 157 - u8 status; 158 - u8 bt_status; 159 - u8 last_channel; 160 - __le32 tsf_low; 161 - __le32 tsf_high; 162 - struct iwl_scan_results_notif results[]; 163 - } __packed; /* SCAN_COMPLETE_NTF_API_S_VER_2 */ 164 - 165 125 /* scan offload */ 166 126 #define IWL_SCAN_MAX_BLACKLIST_LEN 64 167 127 #define IWL_SCAN_SHORT_BLACKLIST_LEN 16 ··· 514 554 } __packed; 515 555 516 556 /** 517 - * struct iwl_lmac_scan_results_notif - scan results for one channel - 557 + * struct iwl_scan_results_notif - scan results for one channel - 518 558 * 
SCAN_RESULT_NTF_API_S_VER_3 519 559 * @channel: which channel the results are from 520 560 * @band: 0 for 5.2 GHz, 1 for 2.4 GHz ··· 522 562 * @num_probe_not_sent: # of request that weren't sent due to not enough time 523 563 * @duration: duration spent in channel, in usecs 524 564 */ 525 - struct iwl_lmac_scan_results_notif { 565 + struct iwl_scan_results_notif { 526 566 u8 channel; 527 567 u8 band; 528 568 u8 probe_status;
-13
drivers/net/wireless/iwlwifi/mvm/fw-api.h
··· 281 281 __le32 valid; 282 282 } __packed; 283 283 284 - /** 285 - * struct iwl_reduce_tx_power_cmd - TX power reduction command 286 - * REDUCE_TX_POWER_CMD = 0x9f 287 - * @flags: (reserved for future implementation) 288 - * @mac_context_id: id of the mac ctx for which we are reducing TX power. 289 - * @pwr_restriction: TX power restriction in dBms. 290 - */ 291 - struct iwl_reduce_tx_power_cmd { 292 - u8 flags; 293 - u8 mac_context_id; 294 - __le16 pwr_restriction; 295 - } __packed; /* TX_REDUCED_POWER_API_S_VER_1 */ 296 - 297 284 /* 298 285 * Calibration control struct. 299 286 * Sent as part of the phy configuration command.
+21 -33
drivers/net/wireless/iwlwifi/mvm/fw.c
··· 6 6 * GPL LICENSE SUMMARY 7 7 * 8 8 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 9 - * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 9 + * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of version 2 of the GNU General Public License as ··· 32 32 * BSD LICENSE 33 33 * 34 34 * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved. 35 - * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 35 + * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 36 36 * All rights reserved. 37 37 * 38 38 * Redistribution and use in source and binary forms, with or without ··· 322 322 323 323 lockdep_assert_held(&mvm->mutex); 324 324 325 - if (WARN_ON_ONCE(mvm->init_ucode_complete || mvm->calibrating)) 325 + if (WARN_ON_ONCE(mvm->calibrating)) 326 326 return 0; 327 327 328 328 iwl_init_notification_wait(&mvm->notif_wait, ··· 396 396 */ 397 397 ret = iwl_wait_notification(&mvm->notif_wait, &calib_wait, 398 398 MVM_UCODE_CALIB_TIMEOUT); 399 - if (!ret) 400 - mvm->init_ucode_complete = true; 401 399 402 400 if (ret && iwl_mvm_is_radio_killed(mvm)) { 403 401 IWL_DEBUG_RF_KILL(mvm, "RFKILL while calibrating.\n"); ··· 491 493 le32_to_cpu(desc->trig_desc.type)); 492 494 493 495 mvm->fw_dump_desc = desc; 494 - 495 - /* stop recording */ 496 - if (mvm->cfg->device_family == IWL_DEVICE_FAMILY_7000) { 497 - iwl_set_bits_prph(mvm->trans, MON_BUFF_SAMPLE_CTL, 0x100); 498 - } else { 499 - iwl_write_prph(mvm->trans, DBGC_IN_SAMPLE, 0); 500 - /* wait before we collect the data till the DBGC stop */ 501 - udelay(100); 502 - } 503 496 504 497 queue_delayed_work(system_wq, &mvm->fw_dump_wk, delay); 505 498 ··· 647 658 * module loading, load init ucode now 648 659 * (for example, if we were in RFKILL) 649 660 */ 650 - if (!mvm->init_ucode_complete) { 651 - ret = iwl_run_init_mvm_ucode(mvm, false); 652 - if (ret && !iwlmvm_mod_params.init_dbg) { 653 - IWL_ERR(mvm, "Failed to run INIT ucode: %d\n", ret); 654 - /* this can't happen */ 655 - if (WARN_ON(ret > 0)) 656 - ret = -ERFKILL; 657 - goto error; 658 - } 659 - if (!iwlmvm_mod_params.init_dbg) { 660 - /* 661 - * should stop and start HW since that INIT 662 - * image just loaded 663 - */ 664 - iwl_trans_stop_device(mvm->trans); 665 - ret = iwl_trans_start_hw(mvm->trans); 666 - if (ret) 667 - return ret; 668 - } 661 + ret = iwl_run_init_mvm_ucode(mvm, false); 662 + if (ret && !iwlmvm_mod_params.init_dbg) { 663 + IWL_ERR(mvm, "Failed to run INIT ucode: %d\n", ret); 664 + /* this can't happen */ 665 + if (WARN_ON(ret > 0)) 666 + ret = -ERFKILL; 667 + goto error; 668 + } 669 + if (!iwlmvm_mod_params.init_dbg) { 670 + /* 671 + * Stop and start the transport without entering low power 672 + * mode. This will save the state of other components on the 673 + * device that are triggered by the INIT firwmare (MFUART). 674 + */ 675 + _iwl_trans_stop_device(mvm->trans, false); 676 + _iwl_trans_start_hw(mvm->trans, false); 677 + if (ret) 678 + return ret; 669 679 } 670 680 671 681 if (iwlmvm_mod_params.init_dbg)
+23 -3
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 1322 1322 1323 1323 clear_bit(IWL_MVM_STATUS_IN_HW_RESTART, &mvm->status); 1324 1324 iwl_mvm_d0i3_enable_tx(mvm, NULL); 1325 - ret = iwl_mvm_update_quotas(mvm, false, NULL); 1325 + ret = iwl_mvm_update_quotas(mvm, true, NULL); 1326 1326 if (ret) 1327 1327 IWL_ERR(mvm, "Failed to update quotas after restart (%d)\n", 1328 1328 ret); ··· 1471 1471 return NULL; 1472 1472 } 1473 1473 1474 - static int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 1475 - s8 tx_power) 1474 + static int iwl_mvm_set_tx_power_old(struct iwl_mvm *mvm, 1475 + struct ieee80211_vif *vif, s8 tx_power) 1476 1476 { 1477 1477 /* FW is in charge of regulatory enforcement */ 1478 1478 struct iwl_reduce_tx_power_cmd reduce_txpwr_cmd = { ··· 1483 1483 return iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0, 1484 1484 sizeof(reduce_txpwr_cmd), 1485 1485 &reduce_txpwr_cmd); 1486 + } 1487 + 1488 + static int iwl_mvm_set_tx_power(struct iwl_mvm *mvm, struct ieee80211_vif *vif, 1489 + s16 tx_power) 1490 + { 1491 + struct iwl_dev_tx_power_cmd cmd = { 1492 + .set_mode = 0, 1493 + .mac_context_id = 1494 + cpu_to_le32(iwl_mvm_vif_from_mac80211(vif)->id), 1495 + .pwr_restriction = cpu_to_le16(8 * tx_power), 1496 + }; 1497 + 1498 + if (!(mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_TX_POWER_DEV)) 1499 + return iwl_mvm_set_tx_power_old(mvm, vif, tx_power); 1500 + 1501 + if (tx_power == IWL_DEFAULT_MAX_TX_POWER) 1502 + cmd.pwr_restriction = cpu_to_le16(IWL_DEV_MAX_TX_POWER); 1503 + 1504 + return iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0, 1505 + sizeof(cmd), &cmd); 1486 1506 } 1487 1507 1488 1508 static int iwl_mvm_mac_add_interface(struct ieee80211_hw *hw,
-1
drivers/net/wireless/iwlwifi/mvm/mvm.h
··· 603 603 604 604 enum iwl_ucode_type cur_ucode; 605 605 bool ucode_loaded; 606 - bool init_ucode_complete; 607 606 bool calibrating; 608 607 u32 error_event_table; 609 608 u32 log_event_table;
+10
drivers/net/wireless/iwlwifi/mvm/ops.c
··· 865 865 return; 866 866 867 867 mutex_lock(&mvm->mutex); 868 + 869 + /* stop recording */ 870 + if (mvm->cfg->device_family == IWL_DEVICE_FAMILY_7000) { 871 + iwl_set_bits_prph(mvm->trans, MON_BUFF_SAMPLE_CTL, 0x100); 872 + } else { 873 + iwl_write_prph(mvm->trans, DBGC_IN_SAMPLE, 0); 874 + /* wait before we collect the data till the DBGC stop */ 875 + udelay(100); 876 + } 877 + 868 878 iwl_mvm_fw_error_dump(mvm); 869 879 870 880 /* start recording again if the firmware is not crashed */
+5
drivers/net/wireless/iwlwifi/mvm/rx.c
··· 478 478 if (vif->type != NL80211_IFTYPE_STATION) 479 479 return; 480 480 481 + if (sig == 0) { 482 + IWL_DEBUG_RX(mvm, "RSSI is 0 - skip signal based decision\n"); 483 + return; 484 + } 485 + 481 486 mvmvif->bf_data.ave_beacon_signal = sig; 482 487 483 488 /* BT Coex */
+1 -1
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 319 319 struct iwl_device_cmd *cmd) 320 320 { 321 321 struct iwl_rx_packet *pkt = rxb_addr(rxb); 322 - struct iwl_scan_complete_notif *notif = (void *)pkt->data; 322 + struct iwl_lmac_scan_complete_notif *notif = (void *)pkt->data; 323 323 324 324 IWL_DEBUG_SCAN(mvm, 325 325 "Scan offload iteration complete: status=0x%x scanned channels=%d\n",
+9 -8
drivers/net/wireless/iwlwifi/pcie/trans.c
··· 5 5 * 6 6 * GPL LICENSE SUMMARY 7 7 * 8 - * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved. 9 - * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 8 + * Copyright(c) 2007 - 2015 Intel Corporation. All rights reserved. 9 + * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 10 10 * 11 11 * This program is free software; you can redistribute it and/or modify 12 12 * it under the terms of version 2 of the GNU General Public License as ··· 31 31 * 32 32 * BSD LICENSE 33 33 * 34 - * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved. 35 - * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH 34 + * Copyright(c) 2005 - 2015 Intel Corporation. All rights reserved. 35 + * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH 36 36 * All rights reserved. 37 37 * 38 38 * Redistribution and use in source and binary forms, with or without ··· 104 104 static void iwl_pcie_alloc_fw_monitor(struct iwl_trans *trans) 105 105 { 106 106 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 107 - struct page *page; 107 + struct page *page = NULL; 108 108 dma_addr_t phys; 109 109 u32 size; 110 110 u8 power; ··· 131 131 DMA_FROM_DEVICE); 132 132 if (dma_mapping_error(trans->dev, phys)) { 133 133 __free_pages(page, order); 134 + page = NULL; 134 135 continue; 135 136 } 136 137 IWL_INFO(trans, ··· 1021 1020 iwl_pcie_tx_start(trans, scd_addr); 1022 1021 } 1023 1022 1024 - static void iwl_trans_pcie_stop_device(struct iwl_trans *trans) 1023 + static void iwl_trans_pcie_stop_device(struct iwl_trans *trans, bool low_power) 1025 1024 { 1026 1025 struct iwl_trans_pcie *trans_pcie = IWL_TRANS_GET_PCIE_TRANS(trans); 1027 1026 bool hw_rfkill, was_hw_rfkill; ··· 1116 1115 void iwl_trans_pcie_rf_kill(struct iwl_trans *trans, bool state) 1117 1116 { 1118 1117 if (iwl_op_mode_hw_rf_kill(trans->op_mode, state)) 1119 - iwl_trans_pcie_stop_device(trans); 1118 + iwl_trans_pcie_stop_device(trans, true); 1120 1119 } 1121 1120 1122 1121 static void iwl_trans_pcie_d3_suspend(struct iwl_trans *trans, bool test) ··· 1201 1200 return 0; 1202 1201 } 1203 1202 1204 - static int iwl_trans_pcie_start_hw(struct iwl_trans *trans) 1203 + static int iwl_trans_pcie_start_hw(struct iwl_trans *trans, bool low_power) 1205 1204 { 1206 1205 bool hw_rfkill; 1207 1206 int err;
+1 -1
drivers/net/wireless/rtlwifi/usb.c
··· 126 126 127 127 do { 128 128 status = usb_control_msg(udev, pipe, request, reqtype, value, 129 - index, pdata, len, 0); /*max. timeout*/ 129 + index, pdata, len, 1000); 130 130 if (status < 0) { 131 131 /* firmware download is checksumed, don't retry */ 132 132 if ((value >= FW_8192C_START_ADDRESS &&
+1 -1
drivers/parisc/superio.c
··· 348 348 BUG(); 349 349 return -1; 350 350 } 351 - printk("superio_fixup_irq(%s) ven 0x%x dev 0x%x from %pf\n", 351 + printk(KERN_DEBUG "superio_fixup_irq(%s) ven 0x%x dev 0x%x from %ps\n", 352 352 pci_name(pcidev), 353 353 pcidev->vendor, pcidev->device, 354 354 __builtin_return_address(0));
+1
drivers/power/axp288_fuel_gauge.c
··· 1149 1149 1150 1150 module_platform_driver(axp288_fuel_gauge_driver); 1151 1151 1152 + MODULE_AUTHOR("Ramakrishna Pallala <ramakrishna.pallala@intel.com>"); 1152 1153 MODULE_AUTHOR("Todd Brandt <todd.e.brandt@linux.intel.com>"); 1153 1154 MODULE_DESCRIPTION("Xpower AXP288 Fuel Gauge Driver"); 1154 1155 MODULE_LICENSE("GPL");
+8
drivers/power/bq27x00_battery.c
··· 1109 1109 } 1110 1110 module_exit(bq27x00_battery_exit); 1111 1111 1112 + #ifdef CONFIG_BATTERY_BQ27X00_PLATFORM 1113 + MODULE_ALIAS("platform:bq27000-battery"); 1114 + #endif 1115 + 1116 + #ifdef CONFIG_BATTERY_BQ27X00_I2C 1117 + MODULE_ALIAS("i2c:bq27000-battery"); 1118 + #endif 1119 + 1112 1120 MODULE_AUTHOR("Rodolfo Giometti <giometti@linux.it>"); 1113 1121 MODULE_DESCRIPTION("BQ27x00 battery monitor driver"); 1114 1122 MODULE_LICENSE("GPL");
+1 -1
drivers/power/collie_battery.c
··· 347 347 goto err_psy_reg_main; 348 348 } 349 349 350 - psy_main_cfg.drv_data = &collie_bat_bu; 350 + psy_bu_cfg.drv_data = &collie_bat_bu; 351 351 collie_bat_bu.psy = power_supply_register(&dev->ucb->dev, 352 352 &collie_bat_bu_desc, 353 353 &psy_bu_cfg);
+1
drivers/power/reset/Kconfig
··· 41 41 config POWER_RESET_BRCMSTB 42 42 bool "Broadcom STB reset driver" 43 43 depends on ARM || MIPS || COMPILE_TEST 44 + depends on MFD_SYSCON 44 45 default ARCH_BRCMSTB 45 46 help 46 47 This driver provides restart support for Broadcom STB boards.
+2 -2
drivers/power/reset/at91-reset.c
··· 212 212 res = platform_get_resource(pdev, IORESOURCE_MEM, idx + 1 ); 213 213 at91_ramc_base[idx] = devm_ioremap(&pdev->dev, res->start, 214 214 resource_size(res)); 215 - if (IS_ERR(at91_ramc_base[idx])) { 215 + if (!at91_ramc_base[idx]) { 216 216 dev_err(&pdev->dev, "Could not map ram controller address\n"); 217 - return PTR_ERR(at91_ramc_base[idx]); 217 + return -ENOMEM; 218 218 } 219 219 } 220 220
+3 -15
drivers/power/reset/ltc2952-poweroff.c
··· 120 120 121 121 static void ltc2952_poweroff_start_wde(struct ltc2952_poweroff *data) 122 122 { 123 - if (hrtimer_start(&data->timer_wde, data->wde_interval, 124 - HRTIMER_MODE_REL)) { 125 - /* 126 - * The device will not toggle the watchdog reset, 127 - * thus shut down is only safe if the PowerPath controller 128 - * has a long enough time-off before triggering a hardware 129 - * power-off. 130 - * 131 - * Only sending a warning as the system will power-off anyway 132 - */ 133 - dev_err(data->dev, "unable to start the timer\n"); 134 - } 123 + hrtimer_start(&data->timer_wde, data->wde_interval, HRTIMER_MODE_REL); 135 124 } 136 125 137 126 static enum hrtimer_restart ··· 154 165 } 155 166 156 167 if (gpiod_get_value(data->gpio_trigger)) { 157 - if (hrtimer_start(&data->timer_trigger, data->trigger_delay, 158 - HRTIMER_MODE_REL)) 159 - dev_err(data->dev, "unable to start the wait timer\n"); 168 + hrtimer_start(&data->timer_trigger, data->trigger_delay, 169 + HRTIMER_MODE_REL); 160 170 } else { 161 171 hrtimer_cancel(&data->timer_trigger); 162 172 /* omitting return value check, timer should have been valid */
+1 -1
drivers/rtc/rtc-armada38x.c
··· 64 64 static int armada38x_rtc_read_time(struct device *dev, struct rtc_time *tm) 65 65 { 66 66 struct armada38x_rtc *rtc = dev_get_drvdata(dev); 67 - unsigned long time, time_check, flags; 67 + unsigned long time, time_check; 68 68 69 69 mutex_lock(&rtc->mutex_time); 70 70 time = readl(rtc->regs + RTC_TIME);
+2 -1
drivers/spi/Kconfig
··· 78 78 config SPI_BCM2835 79 79 tristate "BCM2835 SPI controller" 80 80 depends on ARCH_BCM2835 || COMPILE_TEST 81 + depends on GPIOLIB 81 82 help 82 83 This selects a driver for the Broadcom BCM2835 SPI master. 83 84 ··· 303 302 config SPI_FSL_DSPI 304 303 tristate "Freescale DSPI controller" 305 304 select REGMAP_MMIO 306 - depends on SOC_VF610 || COMPILE_TEST 305 + depends on SOC_VF610 || SOC_LS1021A || COMPILE_TEST 307 306 help 308 307 This enables support for the Freescale DSPI controller in master 309 308 mode. VF610 platform uses the controller.
+2 -3
drivers/spi/spi-bcm2835.c
··· 164 164 unsigned long xfer_time_us) 165 165 { 166 166 struct bcm2835_spi *bs = spi_master_get_devdata(master); 167 - unsigned long timeout = jiffies + 168 - max(4 * xfer_time_us * HZ / 1000000, 2uL); 167 + /* set timeout to 1 second of maximum polling */ 168 + unsigned long timeout = jiffies + HZ; 169 169 170 170 /* enable HW block without interrupts */ 171 171 bcm2835_wr(bs, BCM2835_SPI_CS, cs | BCM2835_SPI_CS_TA); 172 172 173 - /* set timeout to 4x the expected time, or 2 jiffies */ 174 173 /* loop until finished the transfer */ 175 174 while (bs->rx_len) { 176 175 /* read from fifo as much as possible */
+10 -7
drivers/spi/spi-bitbang.c
··· 180 180 { 181 181 struct spi_bitbang_cs *cs = spi->controller_state; 182 182 struct spi_bitbang *bitbang; 183 - int retval; 184 183 unsigned long flags; 185 184 186 185 bitbang = spi_master_get_devdata(spi->master); ··· 196 197 if (!cs->txrx_word) 197 198 return -EINVAL; 198 199 199 - retval = bitbang->setup_transfer(spi, NULL); 200 - if (retval < 0) 201 - return retval; 200 + if (bitbang->setup_transfer) { 201 + int retval = bitbang->setup_transfer(spi, NULL); 202 + if (retval < 0) 203 + return retval; 204 + } 202 205 203 206 dev_dbg(&spi->dev, "%s, %u nsec/bit\n", __func__, 2 * cs->nsecs); 204 207 ··· 296 295 297 296 /* init (-1) or override (1) transfer params */ 298 297 if (do_setup != 0) { 299 - status = bitbang->setup_transfer(spi, t); 300 - if (status < 0) 301 - break; 298 + if (bitbang->setup_transfer) { 299 + status = bitbang->setup_transfer(spi, t); 300 + if (status < 0) 301 + break; 302 + } 302 303 if (do_setup == -1) 303 304 do_setup = 0; 304 305 }
+23 -17
drivers/spi/spi-fsl-cpm.c
··· 24 24 #include <linux/of_address.h> 25 25 #include <linux/spi/spi.h> 26 26 #include <linux/types.h> 27 + #include <linux/platform_device.h> 27 28 28 29 #include "spi-fsl-cpm.h" 29 30 #include "spi-fsl-lib.h" ··· 270 269 if (mspi->flags & SPI_CPM2) { 271 270 pram_ofs = cpm_muram_alloc(SPI_PRAM_SIZE, 64); 272 271 out_be16(spi_base, pram_ofs); 273 - } else { 274 - struct spi_pram __iomem *pram = spi_base; 275 - u16 rpbase = in_be16(&pram->rpbase); 276 - 277 - /* Microcode relocation patch applied? */ 278 - if (rpbase) { 279 - pram_ofs = rpbase; 280 - } else { 281 - pram_ofs = cpm_muram_alloc(SPI_PRAM_SIZE, 64); 282 - out_be16(spi_base, pram_ofs); 283 - } 284 272 } 285 273 286 274 iounmap(spi_base); ··· 282 292 struct device_node *np = dev->of_node; 283 293 const u32 *iprop; 284 294 int size; 285 - unsigned long pram_ofs; 286 295 unsigned long bds_ofs; 287 296 288 297 if (!(mspi->flags & SPI_CPM_MODE)) ··· 308 319 } 309 320 } 310 321 311 - pram_ofs = fsl_spi_cpm_get_pram(mspi); 312 - if (IS_ERR_VALUE(pram_ofs)) { 322 + if (mspi->flags & SPI_CPM1) { 323 + struct resource *res; 324 + void *pram; 325 + 326 + res = platform_get_resource(to_platform_device(dev), 327 + IORESOURCE_MEM, 1); 328 + pram = devm_ioremap_resource(dev, res); 329 + if (IS_ERR(pram)) 330 + mspi->pram = NULL; 331 + else 332 + mspi->pram = pram; 333 + } else { 334 + unsigned long pram_ofs = fsl_spi_cpm_get_pram(mspi); 335 + 336 + if (IS_ERR_VALUE(pram_ofs)) 337 + mspi->pram = NULL; 338 + else 339 + mspi->pram = cpm_muram_addr(pram_ofs); 340 + } 341 + if (mspi->pram == NULL) { 313 342 dev_err(dev, "can't allocate spi parameter ram\n"); 314 343 goto err_pram; 315 344 } ··· 352 345 dev_err(dev, "unable to map dummy rx buffer\n"); 353 346 goto err_dummy_rx; 354 347 } 355 - 356 - mspi->pram = cpm_muram_addr(pram_ofs); 357 348 358 349 mspi->tx_bd = cpm_muram_addr(bds_ofs); 359 350 mspi->rx_bd = cpm_muram_addr(bds_ofs + sizeof(*mspi->tx_bd)); ··· 380 375 err_dummy_tx: 381 376 cpm_muram_free(bds_ofs); 382 377 err_bds: 383 - cpm_muram_free(pram_ofs); 378 + if (!(mspi->flags & SPI_CPM1)) 379 + cpm_muram_free(cpm_muram_offset(mspi->pram)); 384 380 err_pram: 385 381 fsl_spi_free_dummy_rx(); 386 382 return -ENOMEM;
+31 -14
drivers/spi/spi-fsl-espi.c
··· 359 359 struct fsl_espi_transfer *trans, u8 *rx_buff) 360 360 { 361 361 struct fsl_espi_transfer *espi_trans = trans; 362 - unsigned int n_tx = espi_trans->n_tx; 363 - unsigned int n_rx = espi_trans->n_rx; 362 + unsigned int total_len = espi_trans->len; 364 363 struct spi_transfer *t; 365 364 u8 *local_buf; 366 365 u8 *rx_buf = rx_buff; 367 366 unsigned int trans_len; 368 367 unsigned int addr; 369 - int i, pos, loop; 368 + unsigned int tx_only; 369 + unsigned int rx_pos = 0; 370 + unsigned int pos; 371 + int i, loop; 370 372 371 373 local_buf = kzalloc(SPCOM_TRANLEN_MAX, GFP_KERNEL); 372 374 if (!local_buf) { ··· 376 374 return; 377 375 } 378 376 379 - for (pos = 0, loop = 0; pos < n_rx; pos += trans_len, loop++) { 380 - trans_len = n_rx - pos; 381 - if (trans_len > SPCOM_TRANLEN_MAX - n_tx) 382 - trans_len = SPCOM_TRANLEN_MAX - n_tx; 377 + for (pos = 0, loop = 0; pos < total_len; pos += trans_len, loop++) { 378 + trans_len = total_len - pos; 383 379 384 380 i = 0; 381 + tx_only = 0; 385 382 list_for_each_entry(t, &m->transfers, transfer_list) { 386 383 if (t->tx_buf) { 387 384 memcpy(local_buf + i, t->tx_buf, t->len); 388 385 i += t->len; 386 + if (!t->rx_buf) 387 + tx_only += t->len; 389 388 } 390 389 } 391 390 391 + /* Add additional TX bytes to compensate SPCOM_TRANLEN_MAX */ 392 + if (loop > 0) 393 + trans_len += tx_only; 394 + 395 + if (trans_len > SPCOM_TRANLEN_MAX) 396 + trans_len = SPCOM_TRANLEN_MAX; 397 + 398 + /* Update device offset */ 392 399 if (pos > 0) { 393 400 addr = fsl_espi_cmd2addr(local_buf); 394 - addr += pos; 401 + addr += rx_pos; 395 402 fsl_espi_addr2cmd(addr, local_buf); 396 403 } 397 404 398 - espi_trans->n_tx = n_tx; 399 - espi_trans->n_rx = trans_len; 400 - espi_trans->len = trans_len + n_tx; 405 + espi_trans->len = trans_len; 401 406 espi_trans->tx_buf = local_buf; 402 407 espi_trans->rx_buf = local_buf; 403 408 fsl_espi_do_trans(m, espi_trans); 404 409 405 - memcpy(rx_buf + pos, espi_trans->rx_buf + n_tx, trans_len); 410 + /* If there is at least one RX byte then copy it to rx_buf */ 411 + if (tx_only < SPCOM_TRANLEN_MAX) 412 + memcpy(rx_buf + rx_pos, espi_trans->rx_buf + tx_only, 413 + trans_len - tx_only); 414 + 415 + rx_pos += trans_len - tx_only; 406 416 407 417 if (loop > 0) 408 - espi_trans->actual_length += espi_trans->len - n_tx; 418 + espi_trans->actual_length += espi_trans->len - tx_only; 409 419 else 410 420 espi_trans->actual_length += espi_trans->len; 411 421 } ··· 432 418 u8 *rx_buf = NULL; 433 419 unsigned int n_tx = 0; 434 420 unsigned int n_rx = 0; 421 unsigned int xfer_len = 0; 435 422 struct fsl_espi_transfer espi_trans; 436 423 437 424 list_for_each_entry(t, &m->transfers, transfer_list) { ··· 442 427 n_rx += t->len; 443 428 rx_buf = t->rx_buf; 444 429 } 430 + if ((t->tx_buf) || (t->rx_buf)) 431 + xfer_len += t->len; 445 432 } 446 433 447 434 espi_trans.n_tx = n_tx; 448 435 espi_trans.n_rx = n_rx; 449 - espi_trans.len = n_tx + n_rx; 436 + espi_trans.len = xfer_len; 450 437 espi_trans.actual_length = 0; 451 438 espi_trans.status = 0; 452 439
+12 -4
drivers/spi/spi-omap2-mcspi.c
··· 1210 1210 struct omap2_mcspi *mcspi; 1211 1211 struct omap2_mcspi_dma *mcspi_dma; 1212 1212 struct spi_transfer *t; 1213 + int status; 1213 1214 1214 1215 spi = m->spi; 1215 1216 mcspi = spi_master_get_devdata(master); ··· 1230 1229 tx_buf ? "tx" : "", 1231 1230 rx_buf ? "rx" : "", 1232 1231 t->bits_per_word); 1233 - return -EINVAL; 1232 + status = -EINVAL; 1233 + goto out; 1234 1234 } 1235 1235 1236 1236 if (m->is_dma_mapped || len < DMA_MIN_BYTES) ··· 1243 1241 if (dma_mapping_error(mcspi->dev, t->tx_dma)) { 1244 1242 dev_dbg(mcspi->dev, "dma %cX %d bytes error\n", 1245 1243 'T', len); 1246 - return -EINVAL; 1244 + status = -EINVAL; 1245 + goto out; 1247 1246 } 1248 1247 } 1249 1248 if (mcspi_dma->dma_rx && rx_buf != NULL) { ··· 1256 1253 if (tx_buf != NULL) 1257 1254 dma_unmap_single(mcspi->dev, t->tx_dma, 1258 1255 len, DMA_TO_DEVICE); 1259 - return -EINVAL; 1256 + status = -EINVAL; 1257 + goto out; 1260 1258 } 1261 1259 } 1262 1260 } 1263 1261 1264 1262 omap2_mcspi_work(mcspi, m); 1263 + /* spi_finalize_current_message() changes the status inside the 1264 + * spi_message, save the status here. */ 1265 + status = m->status; 1266 + out: 1265 1267 spi_finalize_current_message(master); 1266 - return 0; 1268 + return status; 1267 1269 } 1268 1270 1269 1271 static int omap2_mcspi_master_setup(struct omap2_mcspi *mcspi)
+9
drivers/spi/spi.c
··· 583 583 rx_dev = master->dma_rx->device->dev; 584 584 585 585 list_for_each_entry(xfer, &msg->transfers, transfer_list) { 586 + /* 587 + * Restore the original value of tx_buf or rx_buf if they are 588 + * NULL. 589 + */ 590 + if (xfer->tx_buf == master->dummy_tx) 591 + xfer->tx_buf = NULL; 592 + if (xfer->rx_buf == master->dummy_rx) 593 + xfer->rx_buf = NULL; 594 + 586 595 if (!master->can_dma(master, msg->spi, xfer)) 587 596 continue; 588 597
+7 -9
drivers/staging/gdm724x/gdm_mux.c
··· 158 158 unsigned int start_flag; 159 159 unsigned int payload_size; 160 160 unsigned short packet_type; 161 - int dummy_cnt; 161 + int total_len; 162 162 u32 packet_size_sum = r->offset; 163 163 int index; 164 164 int ret = TO_HOST_INVALID_PACKET; ··· 176 176 break; 177 177 } 178 178 179 - dummy_cnt = ALIGN(MUX_HEADER_SIZE + payload_size, 4); 179 + total_len = ALIGN(MUX_HEADER_SIZE + payload_size, 4); 180 180 181 181 if (len - packet_size_sum < 182 - MUX_HEADER_SIZE + payload_size + dummy_cnt) { 182 + total_len) { 183 183 pr_err("invalid payload : %d %d %04x\n", 184 184 payload_size, len, packet_type); 185 185 break; ··· 202 202 break; 203 203 } 204 204 205 - packet_size_sum += MUX_HEADER_SIZE + payload_size + dummy_cnt; 205 + packet_size_sum += total_len; 206 206 if (len - packet_size_sum <= MUX_HEADER_SIZE + 2) { 207 207 ret = r->callback(NULL, 208 208 0, ··· 361 361 struct mux_pkt_header *mux_header; 362 362 struct mux_tx *t = NULL; 363 363 static u32 seq_num = 1; 364 - int dummy_cnt; 365 364 int total_len; 366 365 int ret; 367 366 unsigned long flags; ··· 373 374 374 375 spin_lock_irqsave(&mux_dev->write_lock, flags); 375 376 376 - dummy_cnt = ALIGN(MUX_HEADER_SIZE + len, 4); 377 - 378 - total_len = len + MUX_HEADER_SIZE + dummy_cnt; 377 + total_len = ALIGN(MUX_HEADER_SIZE + len, 4); 379 378 380 379 t = alloc_mux_tx(total_len); 381 380 if (!t) { ··· 389 392 mux_header->packet_type = __cpu_to_le16(packet_type[tty_index]); 390 393 391 394 memcpy(t->buf+MUX_HEADER_SIZE, data, len); 392 - memset(t->buf+MUX_HEADER_SIZE+len, 0, dummy_cnt); 395 + memset(t->buf+MUX_HEADER_SIZE+len, 0, total_len - MUX_HEADER_SIZE - 396 + len); 393 397 394 398 t->len = total_len; 395 399 t->callback = cb;
+7 -10
drivers/staging/rtl8712/rtl871x_ioctl_linux.c
··· 1900 1900 struct mp_ioctl_handler *phandler; 1901 1901 struct mp_ioctl_param *poidparam; 1902 1902 unsigned long BytesRead, BytesWritten, BytesNeeded; 1903 - u8 *pparmbuf = NULL, bset; 1903 + u8 *pparmbuf, bset; 1904 1904 u16 len; 1905 1905 uint status; 1906 1906 int ret = 0; 1907 1907 1908 - if ((!p->length) || (!p->pointer)) { 1909 - ret = -EINVAL; 1910 - goto _r871x_mp_ioctl_hdl_exit; 1911 - } 1908 + if ((!p->length) || (!p->pointer)) 1909 + return -EINVAL; 1910 + 1912 1911 bset = (u8)(p->flags & 0xFFFF); 1913 1912 len = p->length; 1914 - pparmbuf = NULL; 1915 1913 pparmbuf = memdup_user(p->pointer, len); 1916 - if (IS_ERR(pparmbuf)) { 1917 - ret = PTR_ERR(pparmbuf); 1918 - goto _r871x_mp_ioctl_hdl_exit; 1919 - } 1914 + if (IS_ERR(pparmbuf)) 1915 + return PTR_ERR(pparmbuf); 1916 + 1920 1917 poidparam = (struct mp_ioctl_param *)pparmbuf; 1921 1918 if (poidparam->subcode >= MAX_MP_IOCTL_SUBCODE) { 1922 1919 ret = -EINVAL;
+1 -1
drivers/staging/sm750fb/sm750.c
··· 1250 1250 return -ENODEV; 1251 1251 } 1252 1252 1253 - static void __exit lynxfb_pci_remove(struct pci_dev *pdev) 1253 + static void lynxfb_pci_remove(struct pci_dev *pdev) 1254 1254 { 1255 1255 struct fb_info *info; 1256 1256 struct lynx_share *share;
+7 -3
drivers/staging/vt6655/card.c
··· 362 362 * Return Value: none 363 363 */ 364 364 bool CARDbUpdateTSF(struct vnt_private *pDevice, unsigned char byRxRate, 365 - u64 qwBSSTimestamp, u64 qwLocalTSF) 365 + u64 qwBSSTimestamp) 366 366 { 367 + u64 local_tsf; 367 368 u64 qwTSFOffset = 0; 368 369 369 - if (qwBSSTimestamp != qwLocalTSF) { 370 - qwTSFOffset = CARDqGetTSFOffset(byRxRate, qwBSSTimestamp, qwLocalTSF); 370 + CARDbGetCurrentTSF(pDevice, &local_tsf); 371 + 372 + if (qwBSSTimestamp != local_tsf) { 373 + qwTSFOffset = CARDqGetTSFOffset(byRxRate, qwBSSTimestamp, 374 + local_tsf); 371 375 /* adjust TSF, HW's TSF add TSF Offset reg */ 372 376 VNSvOutPortD(pDevice->PortOffset + MAC_REG_TSFOFST, (u32)qwTSFOffset); 373 377 VNSvOutPortD(pDevice->PortOffset + MAC_REG_TSFOFST + 4, (u32)(qwTSFOffset >> 32));
+1 -1
drivers/staging/vt6655/card.h
··· 83 83 bool CARDbRadioPowerOn(struct vnt_private *); 84 84 bool CARDbSetPhyParameter(struct vnt_private *, u8); 85 85 bool CARDbUpdateTSF(struct vnt_private *, unsigned char byRxRate, 86 - u64 qwBSSTimestamp, u64 qwLocalTSF); 86 + u64 qwBSSTimestamp); 87 87 bool CARDbSetBeaconPeriod(struct vnt_private *, unsigned short wBeaconInterval); 88 88 89 89 #endif /* __CARD_H__ */
+25 -17
drivers/staging/vt6655/device_main.c
··· 912 912 913 913 if (!(tsr1 & TSR1_TERR)) { 914 914 info->status.rates[0].idx = idx; 915 - info->flags |= IEEE80211_TX_STAT_ACK; 915 + 916 + if (info->flags & IEEE80211_TX_CTL_NO_ACK) 917 + info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED; 918 + else 919 + info->flags |= IEEE80211_TX_STAT_ACK; 916 920 } 917 921 918 922 return 0; ··· 941 937 /* Only the status of first TD in the chain is correct */ 942 938 if (pTD->m_td1TD1.byTCR & TCR_STP) { 943 939 if ((pTD->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB) != 0) { 944 - 945 - vnt_int_report_rate(pDevice, pTD->pTDInfo, byTsr0, byTsr1); 946 - 947 940 if (!(byTsr1 & TSR1_TERR)) { 948 941 if (byTsr0 != 0) { 949 942 pr_debug(" Tx[%d] OK but has error. tsr1[%02X] tsr0[%02X]\n", ··· 959 958 (int)uIdx, byTsr1, byTsr0); 960 959 } 961 960 } 961 + 962 + vnt_int_report_rate(pDevice, pTD->pTDInfo, byTsr0, byTsr1); 963 + 962 964 device_free_tx_buf(pDevice, pTD); 963 965 pDevice->iTDUsed[uIdx]--; 964 966 } ··· 993 989 skb->len, DMA_TO_DEVICE); 994 990 } 995 991 996 - if (pTDInfo->byFlags & TD_FLAGS_NETIF_SKB) 992 + if (skb) 997 993 ieee80211_tx_status_irqsafe(pDevice->hw, skb); 998 - else 999 - dev_kfree_skb_irq(skb); 1000 994 1001 995 pTDInfo->skb_dma = 0; 1002 996 pTDInfo->skb = NULL; ··· 1206 1204 if (dma_idx == TYPE_AC0DMA) 1207 1205 head_td->pTDInfo->byFlags = TD_FLAGS_NETIF_SKB; 1208 1206 1209 - priv->iTDUsed[dma_idx]++; 1210 - 1211 - /* Take ownership */ 1212 - wmb(); 1213 - head_td->m_td0TD0.f1Owner = OWNED_BY_NIC; 1214 - 1215 - /* get Next */ 1216 - wmb(); 1217 1207 priv->apCurrTD[dma_idx] = head_td->next; 1218 1208 1219 1209 spin_unlock_irqrestore(&priv->lock, flags); ··· 1226 1232 1227 1233 head_td->buff_addr = cpu_to_le32(head_td->pTDInfo->skb_dma); 1228 1234 1235 + /* Poll Transmit the adapter */ 1236 + wmb(); 1237 + head_td->m_td0TD0.f1Owner = OWNED_BY_NIC; 1238 + wmb(); /* second memory barrier */ 1239 + 1229 1240 if (head_td->pTDInfo->byFlags & TD_FLAGS_NETIF_SKB) 1230 1241 MACvTransmitAC0(priv->PortOffset); 1231 1242 else 1232 1243 MACvTransmit0(priv->PortOffset); 1244 + 1245 + priv->iTDUsed[dma_idx]++; 1233 1246 1234 1247 spin_unlock_irqrestore(&priv->lock, flags); ··· 1417 1416 1418 1417 priv->current_aid = conf->aid; 1419 1418 1420 - if (changed & BSS_CHANGED_BSSID) 1419 + if (changed & BSS_CHANGED_BSSID) { 1420 + unsigned long flags; 1421 + 1422 + spin_lock_irqsave(&priv->lock, flags); 1423 + 1421 1424 MACvWriteBSSIDAddress(priv->PortOffset, (u8 *)conf->bssid); 1425 + 1426 + spin_unlock_irqrestore(&priv->lock, flags); 1427 + } 1422 1428 1423 1429 if (changed & BSS_CHANGED_BASIC_RATES) { 1424 1430 priv->basic_rates = conf->basic_rates; ··· 1485 1477 if (changed & BSS_CHANGED_ASSOC && priv->op_mode != NL80211_IFTYPE_AP) { 1486 1478 if (conf->assoc) { 1487 1479 CARDbUpdateTSF(priv, conf->beacon_rate->hw_value, 1488 - conf->sync_device_ts, conf->sync_tsf); 1480 + conf->sync_tsf); 1489 1481 1490 1482 CARDbSetBeaconPeriod(priv, conf->beacon_int); 1491 1483
+11 -3
drivers/staging/vt6656/rxtx.c
··· 805 805 vnt_schedule_command(priv, WLAN_CMD_SETPOWER); 806 806 } 807 807 808 - if (current_rate > RATE_11M) 809 - pkt_type = priv->packet_type; 810 - else 808 + if (current_rate > RATE_11M) { 809 + if (info->band == IEEE80211_BAND_5GHZ) { 810 + pkt_type = PK_TYPE_11A; 811 + } else { 812 + if (tx_rate->flags & IEEE80211_TX_RC_USE_CTS_PROTECT) 813 + pkt_type = PK_TYPE_11GB; 814 + else 815 + pkt_type = PK_TYPE_11GA; 816 + } 817 + } else { 811 818 pkt_type = PK_TYPE_11B; 819 + } 812 820 813 821 spin_lock_irqsave(&priv->lock, flags); 814 822
+47 -40
drivers/thermal/intel_powerclamp.c
··· 206 206 207 207 } 208 208 209 + struct pkg_cstate_info { 210 + bool skip; 211 + int msr_index; 212 + int cstate_id; 213 + }; 214 + 215 + #define PKG_CSTATE_INIT(id) { \ 216 + .msr_index = MSR_PKG_C##id##_RESIDENCY, \ 217 + .cstate_id = id \ 218 + } 219 + 220 + static struct pkg_cstate_info pkg_cstates[] = { 221 + PKG_CSTATE_INIT(2), 222 + PKG_CSTATE_INIT(3), 223 + PKG_CSTATE_INIT(6), 224 + PKG_CSTATE_INIT(7), 225 + PKG_CSTATE_INIT(8), 226 + PKG_CSTATE_INIT(9), 227 + PKG_CSTATE_INIT(10), 228 + {NULL}, 229 + }; 230 + 209 231 static bool has_pkg_state_counter(void) 210 232 { 211 - u64 tmp; 212 - return !rdmsrl_safe(MSR_PKG_C2_RESIDENCY, &tmp) || 213 - !rdmsrl_safe(MSR_PKG_C3_RESIDENCY, &tmp) || 214 - !rdmsrl_safe(MSR_PKG_C6_RESIDENCY, &tmp) || 215 - !rdmsrl_safe(MSR_PKG_C7_RESIDENCY, &tmp); 233 + u64 val; 234 + struct pkg_cstate_info *info = pkg_cstates; 235 + 236 + /* check if any one of the counter msrs exists */ 237 + while (info->msr_index) { 238 + if (!rdmsrl_safe(info->msr_index, &val)) 239 + return true; 240 + info++; 241 + } 242 + 243 + return false; 216 244 } 217 245 218 246 static u64 pkg_state_counter(void) 219 247 { 220 248 u64 val; 221 249 u64 count = 0; 250 + struct pkg_cstate_info *info = pkg_cstates; 222 251 223 - static bool skip_c2; 224 - static bool skip_c3; 225 - static bool skip_c6; 226 - static bool skip_c7; 227 - 228 - if (!skip_c2) { 229 - if (!rdmsrl_safe(MSR_PKG_C2_RESIDENCY, &val)) 230 - count += val; 231 - else 232 - skip_c2 = true; 233 - } 234 - 235 - if (!skip_c3) { 236 - if (!rdmsrl_safe(MSR_PKG_C3_RESIDENCY, &val)) 237 - count += val; 238 - else 239 - skip_c3 = true; 240 - } 241 - 242 - if (!skip_c6) { 243 - if (!rdmsrl_safe(MSR_PKG_C6_RESIDENCY, &val)) 244 - count += val; 245 - else 246 - skip_c6 = true; 247 - } 248 - 249 - if (!skip_c7) { 250 - if (!rdmsrl_safe(MSR_PKG_C7_RESIDENCY, &val)) 251 - count += val; 252 - else 253 - skip_c7 = true; 252 + while (info->msr_index) { 253 + if (!info->skip) { 254 + if (!rdmsrl_safe(info->msr_index, &val)) 255 + count += val; 256 + else 257 + info->skip = true; 258 + } 259 + info++; 254 260 } 255 261 256 262 return count; ··· 673 667 }; 674 668 675 669 /* runs on Nehalem and later */ 676 - static const struct x86_cpu_id intel_powerclamp_ids[] = { 670 + static const struct x86_cpu_id intel_powerclamp_ids[] __initconst = { 677 671 { X86_VENDOR_INTEL, 6, 0x1a}, 678 672 { X86_VENDOR_INTEL, 6, 0x1c}, 679 673 { X86_VENDOR_INTEL, 6, 0x1e}, ··· 695 689 { X86_VENDOR_INTEL, 6, 0x46}, 696 690 { X86_VENDOR_INTEL, 6, 0x4c}, 697 691 { X86_VENDOR_INTEL, 6, 0x4d}, 692 + { X86_VENDOR_INTEL, 6, 0x4f}, 698 693 { X86_VENDOR_INTEL, 6, 0x56}, 699 694 {} 700 695 }; 701 696 MODULE_DEVICE_TABLE(x86cpu, intel_powerclamp_ids); 702 697 703 - static int powerclamp_probe(void) 698 + static int __init powerclamp_probe(void) 704 699 { 705 700 if (!x86_match_cpu(intel_powerclamp_ids)) { 706 701 pr_err("Intel powerclamp does not run on family %d model %d\n", ··· 767 760 debugfs_remove_recursive(debug_dir); 768 761 } 769 762 770 - static int powerclamp_init(void) 763 + static int __init powerclamp_init(void) 771 764 { 772 765 int retval; 773 766 int bitmap_size; ··· 816 809 } 817 810 module_init(powerclamp_init); 818 811 819 - static void powerclamp_exit(void) 812 + static void __exit powerclamp_exit(void) 820 813 { 821 814 unregister_hotcpu_notifier(&powerclamp_cpu_notifier); 822 815 end_power_clamp();
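The powerclamp hunk above replaces four copy-pasted `skip_cN` blocks with one sentinel-terminated descriptor table that is walked by both `has_pkg_state_counter()` and `pkg_state_counter()`. A minimal user-space sketch of the same pattern, with a hypothetical `read_counter()` standing in for `rdmsrl_safe()` (the names, indices, and fake values are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct counter_info {
	bool skip;	/* set once a read has failed; never retried */
	int index;	/* 0 terminates the table, like pkg_cstates[] */
};

static struct counter_info counters[] = {
	{ .index = 2 }, { .index = 3 }, { .index = 6 }, { .index = 7 },
	{ 0 },		/* sentinel, mirrors the {NULL} terminator */
};

/* stand-in for rdmsrl_safe(): returns 0 on success, nonzero on failure */
static int read_counter(int index, uint64_t *val)
{
	if (index == 3)		/* pretend this counter does not exist */
		return -1;
	*val = (uint64_t)index * 100;	/* fake residency value */
	return 0;
}

static uint64_t counter_sum(void)
{
	uint64_t val, count = 0;
	struct counter_info *info = counters;

	/* same shape as the new pkg_state_counter() loop */
	while (info->index) {
		if (!info->skip) {
			if (!read_counter(info->index, &val))
				count += val;
			else
				info->skip = true;
		}
		info++;
	}
	return count;
}
```

Adding a counter then means adding one table entry instead of another if-block, which is exactly what the follow-up hunk does for C8/C9/C10 residency MSRs.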
+1 -1
drivers/thermal/rockchip_thermal.c
··· 529 529 530 530 thermal->pclk = devm_clk_get(&pdev->dev, "apb_pclk"); 531 531 if (IS_ERR(thermal->pclk)) { 532 - error = PTR_ERR(thermal->clk); 532 + error = PTR_ERR(thermal->pclk); 533 533 dev_err(&pdev->dev, "failed to get apb_pclk clock: %d\n", 534 534 error); 535 535 return error;
+1 -1
drivers/thermal/thermal_core.h
··· 103 103 static inline bool of_thermal_is_trip_valid(struct thermal_zone_device *tz, 104 104 int trip) 105 105 { 106 - return 0; 106 + return false; 107 107 } 108 108 static inline const struct thermal_trip * 109 109 of_thermal_get_trip_points(struct thermal_zone_device *tz)
+2 -3
drivers/tty/n_gsm.c
··· 3170 3170 return gsmtty_modem_update(dlci, encode); 3171 3171 } 3172 3172 3173 - static void gsmtty_remove(struct tty_driver *driver, struct tty_struct *tty) 3173 + static void gsmtty_cleanup(struct tty_struct *tty) 3174 3174 { 3175 3175 struct gsm_dlci *dlci = tty->driver_data; 3176 3176 struct gsm_mux *gsm = dlci->gsm; ··· 3178 3178 dlci_put(dlci); 3179 3179 dlci_put(gsm->dlci[0]); 3180 3180 mux_put(gsm); 3181 - driver->ttys[tty->index] = NULL; 3182 3181 } 3183 3182 3184 3183 /* Virtual ttys for the demux */ ··· 3198 3199 .tiocmget = gsmtty_tiocmget, 3199 3200 .tiocmset = gsmtty_tiocmset, 3200 3201 .break_ctl = gsmtty_break_ctl, 3201 - .remove = gsmtty_remove, 3202 + .cleanup = gsmtty_cleanup, 3202 3203 }; 3203 3204 3204 3205
+2 -2
drivers/tty/n_hdlc.c
··· 600 600 add_wait_queue(&tty->read_wait, &wait); 601 601 602 602 for (;;) { 603 - if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) { 603 + if (test_bit(TTY_OTHER_DONE, &tty->flags)) { 604 604 ret = -EIO; 605 605 break; 606 606 } ··· 828 828 /* set bits for operations that won't block */ 829 829 if (n_hdlc->rx_buf_list.head) 830 830 mask |= POLLIN | POLLRDNORM; /* readable */ 831 - if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) 831 + if (test_bit(TTY_OTHER_DONE, &tty->flags)) 832 832 mask |= POLLHUP; 833 833 if (tty_hung_up_p(filp)) 834 834 mask |= POLLHUP;
+18 -4
drivers/tty/n_tty.c
··· 1949 1949 return ldata->commit_head - ldata->read_tail >= amt; 1950 1950 } 1951 1951 1952 + static inline int check_other_done(struct tty_struct *tty) 1953 + { 1954 + int done = test_bit(TTY_OTHER_DONE, &tty->flags); 1955 + if (done) { 1956 + /* paired with cmpxchg() in check_other_closed(); ensures 1957 + * read buffer head index is not stale 1958 + */ 1959 + smp_mb__after_atomic(); 1960 + } 1961 + return done; 1962 + } 1963 + 1952 1964 /** 1953 1965 * copy_from_read_buf - copy read data directly 1954 1966 * @tty: terminal device ··· 2179 2167 struct n_tty_data *ldata = tty->disc_data; 2180 2168 unsigned char __user *b = buf; 2181 2169 DEFINE_WAIT_FUNC(wait, woken_wake_function); 2182 - int c; 2170 + int c, done; 2183 2171 int minimum, time; 2184 2172 ssize_t retval = 0; 2185 2173 long timeout; ··· 2247 2235 ((minimum - (b - buf)) >= 1)) 2248 2236 ldata->minimum_to_wake = (minimum - (b - buf)); 2249 2237 2238 + done = check_other_done(tty); 2239 + 2250 2240 if (!input_available_p(tty, 0)) { 2251 - if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) { 2241 + if (done) { 2252 2242 retval = -EIO; 2253 2243 break; 2254 2244 } ··· 2457 2443 2458 2444 poll_wait(file, &tty->read_wait, wait); 2459 2445 poll_wait(file, &tty->write_wait, wait); 2446 + if (check_other_done(tty)) 2447 + mask |= POLLHUP; 2460 2448 if (input_available_p(tty, 1)) 2461 2449 mask |= POLLIN | POLLRDNORM; 2462 2450 if (tty->packet && tty->link->ctrl_status) 2463 2451 mask |= POLLPRI | POLLIN | POLLRDNORM; 2464 - if (test_bit(TTY_OTHER_CLOSED, &tty->flags)) 2465 - mask |= POLLHUP; 2466 2452 if (tty_hung_up_p(file)) 2467 2453 mask |= POLLHUP; 2468 2454 if (!(mask & (POLLHUP | POLLIN | POLLRDNORM))) {
+3 -2
drivers/tty/pty.c
··· 53 53 /* Review - krefs on tty_link ?? */ 54 54 if (!tty->link) 55 55 return; 56 - tty_flush_to_ldisc(tty->link); 57 56 set_bit(TTY_OTHER_CLOSED, &tty->link->flags); 58 - wake_up_interruptible(&tty->link->read_wait); 57 + tty_flip_buffer_push(tty->link->port); 59 58 wake_up_interruptible(&tty->link->write_wait); 60 59 if (tty->driver->subtype == PTY_TYPE_MASTER) { 61 60 set_bit(TTY_OTHER_CLOSED, &tty->flags); ··· 242 243 goto out; 243 244 244 245 clear_bit(TTY_IO_ERROR, &tty->flags); 246 + /* TTY_OTHER_CLOSED must be cleared before TTY_OTHER_DONE */ 245 247 clear_bit(TTY_OTHER_CLOSED, &tty->link->flags); 248 + clear_bit(TTY_OTHER_DONE, &tty->link->flags); 246 249 set_bit(TTY_THROTTLED, &tty->flags); 247 250 return 0; 248 251
+4 -1
drivers/tty/serial/amba-pl011.c
··· 1639 1639 1640 1640 writew(uap->vendor->ifls, uap->port.membase + UART011_IFLS); 1641 1641 1642 + /* Assume that TX IRQ doesn't work until we see one: */ 1643 + uap->tx_irq_seen = 0; 1644 + 1642 1645 spin_lock_irq(&uap->port.lock); 1643 1646 1644 1647 /* restore RTS and DTR */ ··· 1705 1702 spin_lock_irq(&uap->port.lock); 1706 1703 uap->im = 0; 1707 1704 writew(uap->im, uap->port.membase + UART011_IMSC); 1708 - writew(0xffff & ~UART011_TXIS, uap->port.membase + UART011_ICR); 1705 + writew(0xffff, uap->port.membase + UART011_ICR); 1709 1706 spin_unlock_irq(&uap->port.lock); 1710 1707 1711 1708 pl011_dma_shutdown(uap);
+2 -7
drivers/tty/serial/earlycon.c
··· 187 187 return 0; 188 188 189 189 err = setup_earlycon(buf); 190 - if (err == -ENOENT) { 191 - pr_warn("no match for %s\n", buf); 192 - err = 0; 193 - } else if (err == -EALREADY) { 194 - pr_warn("already registered\n"); 195 - err = 0; 196 - } 190 + if (err == -ENOENT || err == -EALREADY) 191 + return 0; 197 192 return err; 198 193 } 199 194 early_param("earlycon", param_setup_earlycon);
+2
drivers/tty/serial/omap-serial.c
··· 1735 1735 err_add_port: 1736 1736 pm_runtime_put(&pdev->dev); 1737 1737 pm_runtime_disable(&pdev->dev); 1738 + pm_qos_remove_request(&up->pm_qos_request); 1739 + device_init_wakeup(up->dev, false); 1738 1740 err_rs485: 1739 1741 err_port_line: 1740 1742 return ret;
+27 -14
drivers/tty/tty_buffer.c
··· 37 37 38 38 #define TTY_BUFFER_PAGE (((PAGE_SIZE - sizeof(struct tty_buffer)) / 2) & ~0xFF) 39 39 40 + /* 41 + * If all tty flip buffers have been processed by flush_to_ldisc() or 42 + * dropped by tty_buffer_flush(), check if the linked pty has been closed. 43 + * If so, wake the reader/poll to process 44 + */ 45 + static inline void check_other_closed(struct tty_struct *tty) 46 + { 47 + unsigned long flags, old; 48 + 49 + /* transition from TTY_OTHER_CLOSED => TTY_OTHER_DONE must be atomic */ 50 + for (flags = ACCESS_ONCE(tty->flags); 51 + test_bit(TTY_OTHER_CLOSED, &flags); 52 + ) { 53 + old = flags; 54 + __set_bit(TTY_OTHER_DONE, &flags); 55 + flags = cmpxchg(&tty->flags, old, flags); 56 + if (old == flags) { 57 + wake_up_interruptible(&tty->read_wait); 58 + break; 59 + } 60 + } 61 + } 40 62 41 63 /** 42 64 * tty_buffer_lock_exclusive - gain exclusive access to buffer ··· 250 228 251 229 if (ld && ld->ops->flush_buffer) 252 230 ld->ops->flush_buffer(tty); 231 + 232 + check_other_closed(tty); 253 233 254 234 atomic_dec(&buf->priority); 255 235 mutex_unlock(&buf->lock); ··· 495 471 smp_rmb(); 496 472 count = head->commit - head->read; 497 473 if (!count) { 498 - if (next == NULL) 474 + if (next == NULL) { 475 + check_other_closed(tty); 499 476 break; 477 + } 500 478 buf->head = next; 501 479 tty_buffer_free(port, head); 502 480 continue; ··· 512 486 mutex_unlock(&buf->lock); 513 487 514 488 tty_ldisc_deref(disc); 515 - } 516 - 517 - /** 518 - * tty_flush_to_ldisc 519 - * @tty: tty to push 520 - * 521 - * Push the terminal flip buffers to the line discipline. 522 - * 523 - * Must not be called from IRQ context. 524 - */ 525 - void tty_flush_to_ldisc(struct tty_struct *tty) 526 - { 527 - flush_work(&tty->port->buf.work); 528 489 } 529 490 530 491 /**
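The `check_other_closed()` helper added above promotes `TTY_OTHER_CLOSED` to `TTY_OTHER_DONE` with a cmpxchg loop, so the transition happens atomically even if the flags word changes underneath. A hedged sketch of the same transition using C11 atomics (the bit values and `demo()` harness are invented for illustration):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define OTHER_CLOSED (1UL << 0)	/* hypothetical bit positions */
#define OTHER_DONE   (1UL << 1)

/* Returns true only for the caller that performed the
 * CLOSED -> CLOSED|DONE transition; retries if the word changed. */
static bool mark_other_done(_Atomic unsigned long *flags)
{
	unsigned long old = atomic_load(flags);

	while (old & OTHER_CLOSED) {
		unsigned long new = old | OTHER_DONE;

		/* on failure, 'old' is reloaded with the current value */
		if (atomic_compare_exchange_weak(flags, &old, new))
			return !(old & OTHER_DONE);
	}
	return false;	/* other side not closed: nothing to do */
}

static bool demo(void)
{
	_Atomic unsigned long flags = OTHER_CLOSED;

	if (!mark_other_done(&flags))		/* first call wins */
		return false;
	if (atomic_load(&flags) != (OTHER_CLOSED | OTHER_DONE))
		return false;
	return !mark_other_done(&flags);	/* second call is a no-op */
}
```

The kernel version additionally wakes `read_wait` on a successful transition; pairing that store with `smp_mb__after_atomic()` on the reader side is what the n_tty hunk's `check_other_done()` comment refers to.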
+5 -1
drivers/usb/chipidea/debug.c
··· 88 88 char buf[32]; 89 89 int ret; 90 90 91 - if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count))) 91 + count = min_t(size_t, sizeof(buf) - 1, count); 92 + if (copy_from_user(buf, ubuf, count)) 92 93 return -EFAULT; 94 + 95 + /* sscanf requires a zero terminated string */ 96 + buf[count] = '\0'; 93 97 94 98 if (sscanf(buf, "%u", &mode) != 1) 95 99 return -EINVAL;
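The chipidea debugfs fix above works because `copy_from_user()` does not NUL-terminate, so the buffer must be clamped and terminated before `sscanf()`. A user-space sketch of the fixed pattern, with `memcpy()` standing in for `copy_from_user()` (the `parse_mode()` name is hypothetical):

```c
#include <stdio.h>
#include <string.h>

/* Parse an unsigned int from a possibly-unterminated input buffer.
 * Returns 0 on success, -1 on bad input. */
static int parse_mode(const char *ubuf, size_t count, unsigned int *mode)
{
	char buf[32];

	/* clamp first, so the terminator always fits */
	count = count < sizeof(buf) - 1 ? count : sizeof(buf) - 1;
	memcpy(buf, ubuf, count);	/* stands in for copy_from_user() */
	buf[count] = '\0';		/* sscanf requires a terminated string */

	if (sscanf(buf, "%u", mode) != 1)
		return -1;
	return 0;
}
```

The original bug was that the clamped length was computed inside the copy call but never used to terminate the buffer, so `sscanf()` could read past the copied bytes.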
+3
drivers/usb/core/quirks.c
··· 106 106 { USB_DEVICE(0x04f3, 0x010c), .driver_info = 107 107 USB_QUIRK_DEVICE_QUALIFIER }, 108 108 109 + { USB_DEVICE(0x04f3, 0x0125), .driver_info = 110 + USB_QUIRK_DEVICE_QUALIFIER }, 111 + 109 112 { USB_DEVICE(0x04f3, 0x016f), .driver_info = 110 113 USB_QUIRK_DEVICE_QUALIFIER }, 111 114
+47 -47
drivers/usb/dwc3/dwc3-omap.c
··· 65 65 #define USBOTGSS_IRQENABLE_SET_MISC 0x003c 66 66 #define USBOTGSS_IRQENABLE_CLR_MISC 0x0040 67 67 #define USBOTGSS_IRQMISC_OFFSET 0x03fc 68 - #define USBOTGSS_UTMI_OTG_CTRL 0x0080 69 - #define USBOTGSS_UTMI_OTG_STATUS 0x0084 68 + #define USBOTGSS_UTMI_OTG_STATUS 0x0080 69 + #define USBOTGSS_UTMI_OTG_CTRL 0x0084 70 70 #define USBOTGSS_UTMI_OTG_OFFSET 0x0480 71 71 #define USBOTGSS_TXFIFO_DEPTH 0x0508 72 72 #define USBOTGSS_RXFIFO_DEPTH 0x050c ··· 98 98 #define USBOTGSS_IRQMISC_DISCHRGVBUS_FALL (1 << 3) 99 99 #define USBOTGSS_IRQMISC_IDPULLUP_FALL (1 << 0) 100 100 101 - /* UTMI_OTG_CTRL REGISTER */ 102 - #define USBOTGSS_UTMI_OTG_CTRL_DRVVBUS (1 << 5) 103 - #define USBOTGSS_UTMI_OTG_CTRL_CHRGVBUS (1 << 4) 104 - #define USBOTGSS_UTMI_OTG_CTRL_DISCHRGVBUS (1 << 3) 105 - #define USBOTGSS_UTMI_OTG_CTRL_IDPULLUP (1 << 0) 106 - 107 101 /* UTMI_OTG_STATUS REGISTER */ 108 - #define USBOTGSS_UTMI_OTG_STATUS_SW_MODE (1 << 31) 109 - #define USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT (1 << 9) 110 - #define USBOTGSS_UTMI_OTG_STATUS_TXBITSTUFFENABLE (1 << 8) 111 - #define USBOTGSS_UTMI_OTG_STATUS_IDDIG (1 << 4) 112 - #define USBOTGSS_UTMI_OTG_STATUS_SESSEND (1 << 3) 113 - #define USBOTGSS_UTMI_OTG_STATUS_SESSVALID (1 << 2) 114 - #define USBOTGSS_UTMI_OTG_STATUS_VBUSVALID (1 << 1) 102 + #define USBOTGSS_UTMI_OTG_STATUS_DRVVBUS (1 << 5) 103 + #define USBOTGSS_UTMI_OTG_STATUS_CHRGVBUS (1 << 4) 104 + #define USBOTGSS_UTMI_OTG_STATUS_DISCHRGVBUS (1 << 3) 105 + #define USBOTGSS_UTMI_OTG_STATUS_IDPULLUP (1 << 0) 106 + 107 + /* UTMI_OTG_CTRL REGISTER */ 108 + #define USBOTGSS_UTMI_OTG_CTRL_SW_MODE (1 << 31) 109 + #define USBOTGSS_UTMI_OTG_CTRL_POWERPRESENT (1 << 9) 110 + #define USBOTGSS_UTMI_OTG_CTRL_TXBITSTUFFENABLE (1 << 8) 111 + #define USBOTGSS_UTMI_OTG_CTRL_IDDIG (1 << 4) 112 + #define USBOTGSS_UTMI_OTG_CTRL_SESSEND (1 << 3) 113 + #define USBOTGSS_UTMI_OTG_CTRL_SESSVALID (1 << 2) 114 + #define USBOTGSS_UTMI_OTG_CTRL_VBUSVALID (1 << 1) 115 115 116 116 struct dwc3_omap { 117 117 struct device *dev; ··· 119 119 int irq; 120 120 void __iomem *base; 121 121 122 - u32 utmi_otg_status; 122 + u32 utmi_otg_ctrl; 123 123 u32 utmi_otg_offset; 124 124 u32 irqmisc_offset; 125 125 u32 irq_eoi_offset; ··· 153 153 writel(value, base + offset); 154 154 } 155 155 156 - static u32 dwc3_omap_read_utmi_status(struct dwc3_omap *omap) 156 + static u32 dwc3_omap_read_utmi_ctrl(struct dwc3_omap *omap) 157 157 { 158 - return dwc3_omap_readl(omap->base, USBOTGSS_UTMI_OTG_STATUS + 158 + return dwc3_omap_readl(omap->base, USBOTGSS_UTMI_OTG_CTRL + 159 159 omap->utmi_otg_offset); 160 160 } 161 161 162 - static void dwc3_omap_write_utmi_status(struct dwc3_omap *omap, u32 value) 162 + static void dwc3_omap_write_utmi_ctrl(struct dwc3_omap *omap, u32 value) 163 163 { 164 - dwc3_omap_writel(omap->base, USBOTGSS_UTMI_OTG_STATUS + 164 + dwc3_omap_writel(omap->base, USBOTGSS_UTMI_OTG_CTRL + 165 165 omap->utmi_otg_offset, value); 166 166 167 167 } ··· 235 235 } 236 236 } 237 237 238 - val = dwc3_omap_read_utmi_status(omap); 239 - val &= ~(USBOTGSS_UTMI_OTG_STATUS_IDDIG 240 - | USBOTGSS_UTMI_OTG_STATUS_VBUSVALID 241 - | USBOTGSS_UTMI_OTG_STATUS_SESSEND); 242 - val |= USBOTGSS_UTMI_OTG_STATUS_SESSVALID 243 - | USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT; 244 - dwc3_omap_write_utmi_status(omap, val); 238 + val = dwc3_omap_read_utmi_ctrl(omap); 239 + val &= ~(USBOTGSS_UTMI_OTG_CTRL_IDDIG 240 + | USBOTGSS_UTMI_OTG_CTRL_VBUSVALID 241 + | USBOTGSS_UTMI_OTG_CTRL_SESSEND); 242 + val |= USBOTGSS_UTMI_OTG_CTRL_SESSVALID 243 + | USBOTGSS_UTMI_OTG_CTRL_POWERPRESENT; 244 + dwc3_omap_write_utmi_ctrl(omap, val); 245 245 break; 246 246 247 247 case OMAP_DWC3_VBUS_VALID: 248 248 dev_dbg(omap->dev, "VBUS Connect\n"); 249 249 250 - val = dwc3_omap_read_utmi_status(omap); 251 - val &= ~USBOTGSS_UTMI_OTG_STATUS_SESSEND; 252 - val |= USBOTGSS_UTMI_OTG_STATUS_IDDIG 253 - | USBOTGSS_UTMI_OTG_STATUS_VBUSVALID 254 - | USBOTGSS_UTMI_OTG_STATUS_SESSVALID 255 - | USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT; 256 - dwc3_omap_write_utmi_status(omap, val); 250 + val = dwc3_omap_read_utmi_ctrl(omap); 251 + val &= ~USBOTGSS_UTMI_OTG_CTRL_SESSEND; 252 + val |= USBOTGSS_UTMI_OTG_CTRL_IDDIG 253 + | USBOTGSS_UTMI_OTG_CTRL_VBUSVALID 254 + | USBOTGSS_UTMI_OTG_CTRL_SESSVALID 255 + | USBOTGSS_UTMI_OTG_CTRL_POWERPRESENT; 256 + dwc3_omap_write_utmi_ctrl(omap, val); 257 257 break; 258 258 259 259 case OMAP_DWC3_ID_FLOAT: ··· 263 263 case OMAP_DWC3_VBUS_OFF: 264 264 dev_dbg(omap->dev, "VBUS Disconnect\n"); 265 265 266 - val = dwc3_omap_read_utmi_status(omap); 267 - val &= ~(USBOTGSS_UTMI_OTG_STATUS_SESSVALID 268 - | USBOTGSS_UTMI_OTG_STATUS_VBUSVALID 269 - | USBOTGSS_UTMI_OTG_STATUS_POWERPRESENT); 270 - val |= USBOTGSS_UTMI_OTG_STATUS_SESSEND 271 - | USBOTGSS_UTMI_OTG_STATUS_IDDIG; 272 - dwc3_omap_write_utmi_status(omap, val); 266 + val = dwc3_omap_read_utmi_ctrl(omap); 267 + val &= ~(USBOTGSS_UTMI_OTG_CTRL_SESSVALID 268 + | USBOTGSS_UTMI_OTG_CTRL_VBUSVALID 269 + | USBOTGSS_UTMI_OTG_CTRL_POWERPRESENT); 270 + val |= USBOTGSS_UTMI_OTG_CTRL_SESSEND 271 + | USBOTGSS_UTMI_OTG_CTRL_IDDIG; 272 + dwc3_omap_write_utmi_ctrl(omap, val); 273 273 break; 274 274 275 275 default: ··· 422 422 struct device_node *node = omap->dev->of_node; 423 423 int utmi_mode = 0; 424 424 425 - reg = dwc3_omap_read_utmi_status(omap); 425 + reg = dwc3_omap_read_utmi_ctrl(omap); 426 426 427 427 of_property_read_u32(node, "utmi-mode", &utmi_mode); 428 428 429 429 switch (utmi_mode) { 430 430 case DWC3_OMAP_UTMI_MODE_SW: 431 - reg |= USBOTGSS_UTMI_OTG_STATUS_SW_MODE; 431 + reg |= USBOTGSS_UTMI_OTG_CTRL_SW_MODE; 432 432 break; 433 433 case DWC3_OMAP_UTMI_MODE_HW: 434 - reg &= ~USBOTGSS_UTMI_OTG_STATUS_SW_MODE; 434 + reg &= ~USBOTGSS_UTMI_OTG_CTRL_SW_MODE; 435 435 break; 436 436 default: 437 437 dev_dbg(omap->dev, "UNKNOWN utmi mode %d\n", utmi_mode); 438 438 } 439 439 440 - dwc3_omap_write_utmi_status(omap, reg); 440 + dwc3_omap_write_utmi_ctrl(omap, reg); 441 441 } 442 442 443 443 static int dwc3_omap_extcon_register(struct dwc3_omap *omap) ··· 614 614 { 615 615 struct dwc3_omap *omap = dev_get_drvdata(dev); 616 616 617 - omap->utmi_otg_status = dwc3_omap_read_utmi_status(omap); 617 + omap->utmi_otg_ctrl = dwc3_omap_read_utmi_ctrl(omap); 618 618 dwc3_omap_disable_irqs(omap); 619 619 620 620 return 0; ··· 624 624 { 625 625 struct dwc3_omap *omap = dev_get_drvdata(dev); 626 626 627 - dwc3_omap_write_utmi_status(omap, omap->utmi_otg_status); 627 + dwc3_omap_write_utmi_ctrl(omap, omap->utmi_otg_ctrl); 628 628 dwc3_omap_enable_irqs(omap); 629 629 630 630 pm_runtime_disable(dev);
+1
drivers/usb/gadget/configfs.c
··· 1295 1295 } 1296 1296 } 1297 1297 c->next_interface_id = 0; 1298 + memset(c->interface, 0, sizeof(c->interface)); 1298 1299 c->superspeed = 0; 1299 1300 c->highspeed = 0; 1300 1301 c->fullspeed = 0;
+14 -2
drivers/usb/gadget/function/f_hid.c
··· 437 437 | USB_REQ_GET_DESCRIPTOR): 438 438 switch (value >> 8) { 439 439 case HID_DT_HID: 440 + { 441 + struct hid_descriptor hidg_desc_copy = hidg_desc; 442 + 440 443 VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: HID\n"); 444 + hidg_desc_copy.desc[0].bDescriptorType = HID_DT_REPORT; 445 + hidg_desc_copy.desc[0].wDescriptorLength = 446 + cpu_to_le16(hidg->report_desc_length); 447 + 441 448 length = min_t(unsigned short, length, 442 - hidg_desc.bLength); 443 - memcpy(req->buf, &hidg_desc, length); 449 + hidg_desc_copy.bLength); 450 + memcpy(req->buf, &hidg_desc_copy, length); 444 451 goto respond; 445 452 break; 453 + } 446 454 case HID_DT_REPORT: 447 455 VDBG(cdev, "USB_REQ_GET_DESCRIPTOR: REPORT\n"); 448 456 length = min_t(unsigned short, length, ··· 640 632 hidg_fs_in_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length); 641 633 hidg_hs_out_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length); 642 634 hidg_fs_out_ep_desc.wMaxPacketSize = cpu_to_le16(hidg->report_length); 635 + /* 636 + * We can use hidg_desc struct here but we should not relay 637 + * that its content won't change after returning from this function. 638 + */ 643 639 hidg_desc.desc[0].bDescriptorType = HID_DT_REPORT; 644 640 hidg_desc.desc[0].wDescriptorLength = 645 641 cpu_to_le16(hidg->report_desc_length);
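The f_hid fix above copies the shared `hidg_desc` template into a stack-local before patching per-request fields, so concurrent GET_DESCRIPTOR requests cannot race on the static. A stripped-down sketch of that copy-then-patch pattern (the `struct desc` layout and `build_response()` name are invented, not the real `hid_descriptor`):

```c
#include <stdint.h>
#include <string.h>

struct desc {			/* stand-in for a shared descriptor template */
	uint8_t length;
	uint8_t type;
	uint16_t report_len;
};

static struct desc shared_desc = { .length = sizeof(struct desc) };

/* Build a response without mutating the shared template: patch a copy. */
static size_t build_response(uint8_t *out, size_t maxlen, uint16_t report_len)
{
	struct desc copy = shared_desc;	/* struct assignment copies it */
	size_t len;

	copy.type = 0x22;		/* per-request field, e.g. HID_DT_REPORT */
	copy.report_len = report_len;

	len = maxlen < copy.length ? maxlen : copy.length;
	memcpy(out, &copy, len);
	return len;
}
```

The comment added in the bind hunk makes the same point from the other direction: writing through the shared `hidg_desc` is only safe where no request can observe it mid-update.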
+4 -1
drivers/usb/gadget/function/u_serial.c
··· 113 113 int write_allocated; 114 114 struct gs_buf port_write_buf; 115 115 wait_queue_head_t drain_wait; /* wait while writes drain */ 116 + bool write_busy; 116 117 117 118 /* REVISIT this state ... */ 118 119 struct usb_cdc_line_coding port_line_coding; /* 8-N-1 etc */ ··· 364 363 int status = 0; 365 364 bool do_tty_wake = false; 366 365 367 - while (!list_empty(pool)) { 366 + while (!port->write_busy && !list_empty(pool)) { 368 367 struct usb_request *req; 369 368 int len; 370 369 ··· 394 393 * NOTE that we may keep sending data for a while after 395 394 * the TTY closed (dev->ioport->port_tty is NULL). 396 395 */ 396 + port->write_busy = true; 397 397 spin_unlock(&port->port_lock); 398 398 status = usb_ep_queue(in, req, GFP_ATOMIC); 399 399 spin_lock(&port->port_lock); 400 + port->write_busy = false; 400 401 401 402 if (status) { 402 403 pr_debug("%s: %s %s err %d\n",
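The u_serial hunk above adds `write_busy` because `gs_start_tx()` drops the port lock around `usb_ep_queue()`, and the completion callback can re-enter `gs_start_tx()` from inside that window. A single-threaded sketch showing how such a guard bounds the re-entry (all names and the counting harness are hypothetical; the real code uses a spinlock rather than plain globals):

```c
#include <stdbool.h>

static bool busy;			/* analogue of port->write_busy */
static int depth, max_depth, queued;

static void start_tx(int pending);

/* stands in for the completion callback that re-enters start_tx() */
static void complete(int pending)
{
	start_tx(pending);
}

static void start_tx(int pending)
{
	depth++;
	if (depth > max_depth)
		max_depth = depth;

	/* the !busy check makes the re-entrant call a no-op */
	while (!busy && pending > 0) {
		busy = true;		/* guard the "unlocked" window */
		queued++;
		complete(--pending);	/* may re-enter start_tx() */
		busy = false;
	}
	depth--;
}
```

Without the guard, each completion would restart the while loop recursively and could walk the request pool from two stack frames at once, which is the list corruption the patch prevents.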
+5 -5
drivers/usb/gadget/legacy/acm_ms.c
··· 121 121 /* 122 122 * We _always_ have both ACM and mass storage functions. 123 123 */ 124 - static int __init acm_ms_do_config(struct usb_configuration *c) 124 + static int acm_ms_do_config(struct usb_configuration *c) 125 125 { 126 126 struct fsg_opts *opts; 127 127 int status; ··· 174 174 175 175 /*-------------------------------------------------------------------------*/ 176 176 177 - static int __init acm_ms_bind(struct usb_composite_dev *cdev) 177 + static int acm_ms_bind(struct usb_composite_dev *cdev) 178 178 { 179 179 struct usb_gadget *gadget = cdev->gadget; 180 180 struct fsg_opts *opts; ··· 249 249 return status; 250 250 } 251 251 252 - static int __exit acm_ms_unbind(struct usb_composite_dev *cdev) 252 + static int acm_ms_unbind(struct usb_composite_dev *cdev) 253 253 { 254 254 usb_put_function(f_msg); 255 255 usb_put_function_instance(fi_msg); ··· 258 258 return 0; 259 259 } 260 260 261 - static __refdata struct usb_composite_driver acm_ms_driver = { 261 + static struct usb_composite_driver acm_ms_driver = { 262 262 .name = "g_acm_ms", 263 263 .dev = &device_desc, 264 264 .max_speed = USB_SPEED_SUPER, 265 265 .strings = dev_strings, 266 266 .bind = acm_ms_bind, 267 - .unbind = __exit_p(acm_ms_unbind), 267 + .unbind = acm_ms_unbind, 268 268 }; 269 269 270 270 module_usb_composite_driver(acm_ms_driver);
+5 -5
drivers/usb/gadget/legacy/audio.c
··· 167 167 168 168 /*-------------------------------------------------------------------------*/ 169 169 170 - static int __init audio_do_config(struct usb_configuration *c) 170 + static int audio_do_config(struct usb_configuration *c) 171 171 { 172 172 int status; 173 173 ··· 216 216 217 217 /*-------------------------------------------------------------------------*/ 218 218 219 - static int __init audio_bind(struct usb_composite_dev *cdev) 219 + static int audio_bind(struct usb_composite_dev *cdev) 220 220 { 221 221 #ifndef CONFIG_GADGET_UAC1 222 222 struct f_uac2_opts *uac2_opts; ··· 276 276 return status; 277 277 } 278 278 279 - static int __exit audio_unbind(struct usb_composite_dev *cdev) 279 + static int audio_unbind(struct usb_composite_dev *cdev) 280 280 { 281 281 #ifdef CONFIG_GADGET_UAC1 282 282 if (!IS_ERR_OR_NULL(f_uac1)) ··· 292 292 return 0; 293 293 } 294 294 295 - static __refdata struct usb_composite_driver audio_driver = { 295 + static struct usb_composite_driver audio_driver = { 296 296 .name = "g_audio", 297 297 .dev = &device_desc, 298 298 .strings = audio_strings, 299 299 .max_speed = USB_SPEED_HIGH, 300 300 .bind = audio_bind, 301 - .unbind = __exit_p(audio_unbind), 301 + .unbind = audio_unbind, 302 302 }; 303 303 304 304 module_usb_composite_driver(audio_driver);
+5 -5
drivers/usb/gadget/legacy/cdc2.c
··· 104 104 /* 105 105 * We _always_ have both CDC ECM and CDC ACM functions. 106 106 */ 107 - static int __init cdc_do_config(struct usb_configuration *c) 107 + static int cdc_do_config(struct usb_configuration *c) 108 108 { 109 109 int status; 110 110 ··· 153 153 154 154 /*-------------------------------------------------------------------------*/ 155 155 156 - static int __init cdc_bind(struct usb_composite_dev *cdev) 156 + static int cdc_bind(struct usb_composite_dev *cdev) 157 157 { 158 158 struct usb_gadget *gadget = cdev->gadget; 159 159 struct f_ecm_opts *ecm_opts; ··· 211 211 return status; 212 212 } 213 213 214 - static int __exit cdc_unbind(struct usb_composite_dev *cdev) 214 + static int cdc_unbind(struct usb_composite_dev *cdev) 215 215 { 216 216 usb_put_function(f_acm); 217 217 usb_put_function_instance(fi_serial); ··· 222 222 return 0; 223 223 } 224 224 225 - static __refdata struct usb_composite_driver cdc_driver = { 225 + static struct usb_composite_driver cdc_driver = { 226 226 .name = "g_cdc", 227 227 .dev = &device_desc, 228 228 .strings = dev_strings, 229 229 .max_speed = USB_SPEED_HIGH, 230 230 .bind = cdc_bind, 231 - .unbind = __exit_p(cdc_unbind), 231 + .unbind = cdc_unbind, 232 232 }; 233 233 234 234 module_usb_composite_driver(cdc_driver);
+2 -2
drivers/usb/gadget/legacy/dbgp.c
··· 284 284 return -ENODEV; 285 285 } 286 286 287 - static int __init dbgp_bind(struct usb_gadget *gadget, 287 + static int dbgp_bind(struct usb_gadget *gadget, 288 288 struct usb_gadget_driver *driver) 289 289 { 290 290 int err, stp; ··· 406 406 return err; 407 407 } 408 408 409 - static __refdata struct usb_gadget_driver dbgp_driver = { 409 + static struct usb_gadget_driver dbgp_driver = { 410 410 .function = "dbgp", 411 411 .max_speed = USB_SPEED_HIGH, 412 412 .bind = dbgp_bind,
+6 -6
drivers/usb/gadget/legacy/ether.c
··· 222 222 * the first one present. That's to make Microsoft's drivers happy, 223 223 * and to follow DOCSIS 1.0 (cable modem standard). 224 224 */ 225 - static int __init rndis_do_config(struct usb_configuration *c) 225 + static int rndis_do_config(struct usb_configuration *c) 226 226 { 227 227 int status; 228 228 ··· 264 264 /* 265 265 * We _always_ have an ECM, CDC Subset, or EEM configuration. 266 266 */ 267 - static int __init eth_do_config(struct usb_configuration *c) 267 + static int eth_do_config(struct usb_configuration *c) 268 268 { 269 269 int status = 0; 270 270 ··· 318 318 319 319 /*-------------------------------------------------------------------------*/ 320 320 321 - static int __init eth_bind(struct usb_composite_dev *cdev) 321 + static int eth_bind(struct usb_composite_dev *cdev) 322 322 { 323 323 struct usb_gadget *gadget = cdev->gadget; 324 324 struct f_eem_opts *eem_opts = NULL; ··· 447 447 return status; 448 448 } 449 449 450 - static int __exit eth_unbind(struct usb_composite_dev *cdev) 450 + static int eth_unbind(struct usb_composite_dev *cdev) 451 451 { 452 452 if (has_rndis()) { 453 453 usb_put_function(f_rndis); ··· 466 466 return 0; 467 467 } 468 468 469 - static __refdata struct usb_composite_driver eth_driver = { 469 + static struct usb_composite_driver eth_driver = { 470 470 .name = "g_ether", 471 471 .dev = &device_desc, 472 472 .strings = dev_strings, 473 473 .max_speed = USB_SPEED_SUPER, 474 474 .bind = eth_bind, 475 - .unbind = __exit_p(eth_unbind), 475 + .unbind = eth_unbind, 476 476 }; 477 477 478 478 module_usb_composite_driver(eth_driver);
+1 -1
drivers/usb/gadget/legacy/g_ffs.c
··· 163 163 static int gfs_do_config(struct usb_configuration *c); 164 164 165 165 166 - static __refdata struct usb_composite_driver gfs_driver = { 166 + static struct usb_composite_driver gfs_driver = { 167 167 .name = DRIVER_NAME, 168 168 .dev = &gfs_dev_desc, 169 169 .strings = gfs_dev_strings,
+5 -5
drivers/usb/gadget/legacy/gmidi.c
··· 118 118 static struct usb_function_instance *fi_midi; 119 119 static struct usb_function *f_midi; 120 120 121 - static int __exit midi_unbind(struct usb_composite_dev *dev) 121 + static int midi_unbind(struct usb_composite_dev *dev) 122 122 { 123 123 usb_put_function(f_midi); 124 124 usb_put_function_instance(fi_midi); ··· 133 133 .MaxPower = CONFIG_USB_GADGET_VBUS_DRAW, 134 134 }; 135 135 136 - static int __init midi_bind_config(struct usb_configuration *c) 136 + static int midi_bind_config(struct usb_configuration *c) 137 137 { 138 138 int status; 139 139 ··· 150 150 return 0; 151 151 } 152 152 153 - static int __init midi_bind(struct usb_composite_dev *cdev) 153 + static int midi_bind(struct usb_composite_dev *cdev) 154 154 { 155 155 struct f_midi_opts *midi_opts; 156 156 int status; ··· 185 185 return status; 186 186 } 187 187 188 - static __refdata struct usb_composite_driver midi_driver = { 188 + static struct usb_composite_driver midi_driver = { 189 189 .name = (char *) longname, 190 190 .dev = &device_desc, 191 191 .strings = dev_strings, 192 192 .max_speed = USB_SPEED_HIGH, 193 193 .bind = midi_bind, 194 - .unbind = __exit_p(midi_unbind), 194 + .unbind = midi_unbind, 195 195 }; 196 196 197 197 module_usb_composite_driver(midi_driver);
+6 -6
drivers/usb/gadget/legacy/hid.c
··· 106 106 107 107 /****************************** Configurations ******************************/ 108 108 109 - static int __init do_config(struct usb_configuration *c) 109 + static int do_config(struct usb_configuration *c) 110 110 { 111 111 struct hidg_func_node *e, *n; 112 112 int status = 0; ··· 147 147 148 148 /****************************** Gadget Bind ******************************/ 149 149 150 - static int __init hid_bind(struct usb_composite_dev *cdev) 150 + static int hid_bind(struct usb_composite_dev *cdev) 151 151 { 152 152 struct usb_gadget *gadget = cdev->gadget; 153 153 struct list_head *tmp; ··· 205 205 return status; 206 206 } 207 207 208 - static int __exit hid_unbind(struct usb_composite_dev *cdev) 208 + static int hid_unbind(struct usb_composite_dev *cdev) 209 209 { 210 210 struct hidg_func_node *n; 211 211 ··· 216 216 return 0; 217 217 } 218 218 219 - static int __init hidg_plat_driver_probe(struct platform_device *pdev) 219 + static int hidg_plat_driver_probe(struct platform_device *pdev) 220 220 { 221 221 struct hidg_func_descriptor *func = dev_get_platdata(&pdev->dev); 222 222 struct hidg_func_node *entry; ··· 252 252 /****************************** Some noise ******************************/ 253 253 254 254 255 - static __refdata struct usb_composite_driver hidg_driver = { 255 + static struct usb_composite_driver hidg_driver = { 256 256 .name = "g_hid", 257 257 .dev = &device_desc, 258 258 .strings = dev_strings, 259 259 .max_speed = USB_SPEED_HIGH, 260 260 .bind = hid_bind, 261 - .unbind = __exit_p(hid_unbind), 261 + .unbind = hid_unbind, 262 262 }; 263 263 264 264 static struct platform_driver hidg_plat_driver = {
+3 -3
drivers/usb/gadget/legacy/mass_storage.c
··· 130 130 return 0; 131 131 } 132 132 133 - static int __init msg_do_config(struct usb_configuration *c) 133 + static int msg_do_config(struct usb_configuration *c) 134 134 { 135 135 struct fsg_opts *opts; 136 136 int ret; ··· 170 170 171 171 /****************************** Gadget Bind ******************************/ 172 172 173 - static int __init msg_bind(struct usb_composite_dev *cdev) 173 + static int msg_bind(struct usb_composite_dev *cdev) 174 174 { 175 175 static const struct fsg_operations ops = { 176 176 .thread_exits = msg_thread_exits, ··· 248 248 249 249 /****************************** Some noise ******************************/ 250 250 251 - static __refdata struct usb_composite_driver msg_driver = { 251 + static struct usb_composite_driver msg_driver = { 252 252 .name = "g_mass_storage", 253 253 .dev = &msg_device_desc, 254 254 .max_speed = USB_SPEED_SUPER,
+5 -5
drivers/usb/gadget/legacy/multi.c
··· 149 149 static struct usb_function *f_rndis; 150 150 static struct usb_function *f_msg_rndis; 151 151 152 - static __init int rndis_do_config(struct usb_configuration *c) 152 + static int rndis_do_config(struct usb_configuration *c) 153 153 { 154 154 struct fsg_opts *fsg_opts; 155 155 int ret; ··· 237 237 static struct usb_function *f_ecm; 238 238 static struct usb_function *f_msg_multi; 239 239 240 - static __init int cdc_do_config(struct usb_configuration *c) 240 + static int cdc_do_config(struct usb_configuration *c) 241 241 { 242 242 struct fsg_opts *fsg_opts; 243 243 int ret; ··· 466 466 return status; 467 467 } 468 468 469 - static int __exit multi_unbind(struct usb_composite_dev *cdev) 469 + static int multi_unbind(struct usb_composite_dev *cdev) 470 470 { 471 471 #ifdef CONFIG_USB_G_MULTI_CDC 472 472 usb_put_function(f_msg_multi); ··· 497 497 /****************************** Some noise ******************************/ 498 498 499 499 500 - static __refdata struct usb_composite_driver multi_driver = { 500 + static struct usb_composite_driver multi_driver = { 501 501 .name = "g_multi", 502 502 .dev = &device_desc, 503 503 .strings = dev_strings, 504 504 .max_speed = USB_SPEED_HIGH, 505 505 .bind = multi_bind, 506 - .unbind = __exit_p(multi_unbind), 506 + .unbind = multi_unbind, 507 507 .needs_serial = 1, 508 508 }; 509 509
+5 -5
drivers/usb/gadget/legacy/ncm.c
··· 107 107 108 108 /*-------------------------------------------------------------------------*/ 109 109 110 - static int __init ncm_do_config(struct usb_configuration *c) 110 + static int ncm_do_config(struct usb_configuration *c) 111 111 { 112 112 int status; 113 113 ··· 143 143 144 144 /*-------------------------------------------------------------------------*/ 145 145 146 - static int __init gncm_bind(struct usb_composite_dev *cdev) 146 + static int gncm_bind(struct usb_composite_dev *cdev) 147 147 { 148 148 struct usb_gadget *gadget = cdev->gadget; 149 149 struct f_ncm_opts *ncm_opts; ··· 186 186 return status; 187 187 } 188 188 189 - static int __exit gncm_unbind(struct usb_composite_dev *cdev) 189 + static int gncm_unbind(struct usb_composite_dev *cdev) 190 190 { 191 191 if (!IS_ERR_OR_NULL(f_ncm)) 192 192 usb_put_function(f_ncm); ··· 195 195 return 0; 196 196 } 197 197 198 - static __refdata struct usb_composite_driver ncm_driver = { 198 + static struct usb_composite_driver ncm_driver = { 199 199 .name = "g_ncm", 200 200 .dev = &device_desc, 201 201 .strings = dev_strings, 202 202 .max_speed = USB_SPEED_HIGH, 203 203 .bind = gncm_bind, 204 - .unbind = __exit_p(gncm_unbind), 204 + .unbind = gncm_unbind, 205 205 }; 206 206 207 207 module_usb_composite_driver(ncm_driver);
+5 -5
drivers/usb/gadget/legacy/nokia.c
··· 118 118 static struct usb_function_instance *fi_obex2; 119 119 static struct usb_function_instance *fi_phonet; 120 120 121 - static int __init nokia_bind_config(struct usb_configuration *c) 121 + static int nokia_bind_config(struct usb_configuration *c) 122 122 { 123 123 struct usb_function *f_acm; 124 124 struct usb_function *f_phonet = NULL; ··· 224 224 return status; 225 225 } 226 226 227 - static int __init nokia_bind(struct usb_composite_dev *cdev) 227 + static int nokia_bind(struct usb_composite_dev *cdev) 228 228 { 229 229 struct usb_gadget *gadget = cdev->gadget; 230 230 int status; ··· 307 307 return status; 308 308 } 309 309 310 - static int __exit nokia_unbind(struct usb_composite_dev *cdev) 310 + static int nokia_unbind(struct usb_composite_dev *cdev) 311 311 { 312 312 if (!IS_ERR_OR_NULL(f_obex1_cfg2)) 313 313 usb_put_function(f_obex1_cfg2); ··· 338 338 return 0; 339 339 } 340 340 341 - static __refdata struct usb_composite_driver nokia_driver = { 341 + static struct usb_composite_driver nokia_driver = { 342 342 .name = "g_nokia", 343 343 .dev = &device_desc, 344 344 .strings = dev_strings, 345 345 .max_speed = USB_SPEED_HIGH, 346 346 .bind = nokia_bind, 347 - .unbind = __exit_p(nokia_unbind), 347 + .unbind = nokia_unbind, 348 348 }; 349 349 350 350 module_usb_composite_driver(nokia_driver);
+4 -4
drivers/usb/gadget/legacy/printer.c
··· 126 126 .bmAttributes = USB_CONFIG_ATT_ONE | USB_CONFIG_ATT_SELFPOWER, 127 127 }; 128 128 129 - static int __init printer_do_config(struct usb_configuration *c) 129 + static int printer_do_config(struct usb_configuration *c) 130 130 { 131 131 struct usb_gadget *gadget = c->cdev->gadget; 132 132 int status = 0; ··· 152 152 return status; 153 153 } 154 154 155 - static int __init printer_bind(struct usb_composite_dev *cdev) 155 + static int printer_bind(struct usb_composite_dev *cdev) 156 156 { 157 157 struct f_printer_opts *opts; 158 158 int ret, len; ··· 191 191 return ret; 192 192 } 193 193 194 - static int __exit printer_unbind(struct usb_composite_dev *cdev) 194 + static int printer_unbind(struct usb_composite_dev *cdev) 195 195 { 196 196 usb_put_function(f_printer); 197 197 usb_put_function_instance(fi_printer); ··· 199 199 return 0; 200 200 } 201 201 202 - static __refdata struct usb_composite_driver printer_driver = { 202 + static struct usb_composite_driver printer_driver = { 203 203 .name = shortname, 204 204 .dev = &device_desc, 205 205 .strings = dev_strings,
+2 -2
drivers/usb/gadget/legacy/serial.c
··· 174 174 return ret; 175 175 } 176 176 177 - static int __init gs_bind(struct usb_composite_dev *cdev) 177 + static int gs_bind(struct usb_composite_dev *cdev) 178 178 { 179 179 int status; 180 180 ··· 230 230 return 0; 231 231 } 232 232 233 - static __refdata struct usb_composite_driver gserial_driver = { 233 + static struct usb_composite_driver gserial_driver = { 234 234 .name = "g_serial", 235 235 .dev = &device_desc, 236 236 .strings = dev_strings,
+1 -1
drivers/usb/gadget/legacy/tcm_usb_gadget.c
··· 2397 2397 return 0; 2398 2398 } 2399 2399 2400 - static __refdata struct usb_composite_driver usbg_driver = { 2400 + static struct usb_composite_driver usbg_driver = { 2401 2401 .name = "g_target", 2402 2402 .dev = &usbg_device_desc, 2403 2403 .strings = usbg_strings,
+4 -4
drivers/usb/gadget/legacy/webcam.c
··· 334 334 * USB configuration 335 335 */ 336 336 337 - static int __init 337 + static int 338 338 webcam_config_bind(struct usb_configuration *c) 339 339 { 340 340 int status = 0; ··· 358 358 .MaxPower = CONFIG_USB_GADGET_VBUS_DRAW, 359 359 }; 360 360 361 - static int /* __init_or_exit */ 361 + static int 362 362 webcam_unbind(struct usb_composite_dev *cdev) 363 363 { 364 364 if (!IS_ERR_OR_NULL(f_uvc)) ··· 368 368 return 0; 369 369 } 370 370 371 - static int __init 371 + static int 372 372 webcam_bind(struct usb_composite_dev *cdev) 373 373 { 374 374 struct f_uvc_opts *uvc_opts; ··· 422 422 * Driver 423 423 */ 424 424 425 - static __refdata struct usb_composite_driver webcam_driver = { 425 + static struct usb_composite_driver webcam_driver = { 426 426 .name = "g_webcam", 427 427 .dev = &webcam_device_descriptor, 428 428 .strings = webcam_device_strings,
+2 -2
drivers/usb/gadget/legacy/zero.c
··· 272 272 module_param_named(qlen, gzero_options.qlen, uint, S_IRUGO|S_IWUSR); 273 273 MODULE_PARM_DESC(qlen, "depth of loopback queue"); 274 274 275 - static int __init zero_bind(struct usb_composite_dev *cdev) 275 + static int zero_bind(struct usb_composite_dev *cdev) 276 276 { 277 277 struct f_ss_opts *ss_opts; 278 278 struct f_lb_opts *lb_opts; ··· 400 400 return 0; 401 401 } 402 402 403 - static __refdata struct usb_composite_driver zero_driver = { 403 + static struct usb_composite_driver zero_driver = { 404 404 .name = "zero", 405 405 .dev = &device_desc, 406 406 .strings = dev_strings,
+2 -2
drivers/usb/gadget/udc/at91_udc.c
··· 1942 1942 return retval; 1943 1943 } 1944 1944 1945 - static int __exit at91udc_remove(struct platform_device *pdev) 1945 + static int at91udc_remove(struct platform_device *pdev) 1946 1946 { 1947 1947 struct at91_udc *udc = platform_get_drvdata(pdev); 1948 1948 unsigned long flags; ··· 2018 2018 #endif 2019 2019 2020 2020 static struct platform_driver at91_udc_driver = { 2021 - .remove = __exit_p(at91udc_remove), 2021 + .remove = at91udc_remove, 2022 2022 .shutdown = at91udc_shutdown, 2023 2023 .suspend = at91udc_suspend, 2024 2024 .resume = at91udc_resume,
+2 -2
drivers/usb/gadget/udc/atmel_usba_udc.c
··· 2186 2186 return 0; 2187 2187 } 2188 2188 2189 - static int __exit usba_udc_remove(struct platform_device *pdev) 2189 + static int usba_udc_remove(struct platform_device *pdev) 2190 2190 { 2191 2191 struct usba_udc *udc; 2192 2192 int i; ··· 2258 2258 static SIMPLE_DEV_PM_OPS(usba_udc_pm_ops, usba_udc_suspend, usba_udc_resume); 2259 2259 2260 2260 static struct platform_driver udc_driver = { 2261 - .remove = __exit_p(usba_udc_remove), 2261 + .remove = usba_udc_remove, 2262 2262 .driver = { 2263 2263 .name = "atmel_usba_udc", 2264 2264 .pm = &usba_udc_pm_ops,
+2 -2
drivers/usb/gadget/udc/fsl_udc_core.c
··· 2525 2525 /* Driver removal function 2526 2526 * Free resources and finish pending transactions 2527 2527 */ 2528 - static int __exit fsl_udc_remove(struct platform_device *pdev) 2528 + static int fsl_udc_remove(struct platform_device *pdev) 2529 2529 { 2530 2530 struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2531 2531 struct fsl_usb2_platform_data *pdata = dev_get_platdata(&pdev->dev); ··· 2663 2663 }; 2664 2664 MODULE_DEVICE_TABLE(platform, fsl_udc_devtype); 2665 2665 static struct platform_driver udc_driver = { 2666 - .remove = __exit_p(fsl_udc_remove), 2666 + .remove = fsl_udc_remove, 2667 2667 /* Just for FSL i.mx SoC currently */ 2668 2668 .id_table = fsl_udc_devtype, 2669 2669 /* these suspend and resume are not usb suspend and resume */
+2 -2
drivers/usb/gadget/udc/fusb300_udc.c
··· 1342 1342 .udc_stop = fusb300_udc_stop, 1343 1343 }; 1344 1344 1345 - static int __exit fusb300_remove(struct platform_device *pdev) 1345 + static int fusb300_remove(struct platform_device *pdev) 1346 1346 { 1347 1347 struct fusb300 *fusb300 = platform_get_drvdata(pdev); 1348 1348 ··· 1492 1492 } 1493 1493 1494 1494 static struct platform_driver fusb300_driver = { 1495 - .remove = __exit_p(fusb300_remove), 1495 + .remove = fusb300_remove, 1496 1496 .driver = { 1497 1497 .name = (char *) udc_name, 1498 1498 },
+2 -2
drivers/usb/gadget/udc/m66592-udc.c
··· 1528 1528 .pullup = m66592_pullup, 1529 1529 }; 1530 1530 1531 - static int __exit m66592_remove(struct platform_device *pdev) 1531 + static int m66592_remove(struct platform_device *pdev) 1532 1532 { 1533 1533 struct m66592 *m66592 = platform_get_drvdata(pdev); 1534 1534 ··· 1695 1695 1696 1696 /*-------------------------------------------------------------------------*/ 1697 1697 static struct platform_driver m66592_driver = { 1698 - .remove = __exit_p(m66592_remove), 1698 + .remove = m66592_remove, 1699 1699 .driver = { 1700 1700 .name = (char *) udc_name, 1701 1701 },
+2 -2
drivers/usb/gadget/udc/r8a66597-udc.c
··· 1820 1820 .set_selfpowered = r8a66597_set_selfpowered, 1821 1821 }; 1822 1822 1823 - static int __exit r8a66597_remove(struct platform_device *pdev) 1823 + static int r8a66597_remove(struct platform_device *pdev) 1824 1824 { 1825 1825 struct r8a66597 *r8a66597 = platform_get_drvdata(pdev); 1826 1826 ··· 1974 1974 1975 1975 /*-------------------------------------------------------------------------*/ 1976 1976 static struct platform_driver r8a66597_driver = { 1977 - .remove = __exit_p(r8a66597_remove), 1977 + .remove = r8a66597_remove, 1978 1978 .driver = { 1979 1979 .name = (char *) udc_name, 1980 1980 },
+2 -2
drivers/usb/gadget/udc/udc-xilinx.c
··· 2071 2071 /* Map the registers */ 2072 2072 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 2073 2073 udc->addr = devm_ioremap_resource(&pdev->dev, res); 2074 - if (!udc->addr) 2075 - return -ENOMEM; 2074 + if (IS_ERR(udc->addr)) 2075 + return PTR_ERR(udc->addr); 2076 2076 2077 2077 irq = platform_get_irq(pdev, 0); 2078 2078 if (irq < 0) {
+6 -1
drivers/usb/host/xhci-ring.c
··· 2026 2026 break; 2027 2027 case COMP_DEV_ERR: 2028 2028 case COMP_STALL: 2029 + frame->status = -EPROTO; 2030 + skip_td = true; 2031 + break; 2029 2032 case COMP_TX_ERR: 2030 2033 frame->status = -EPROTO; 2034 + if (event_trb != td->last_trb) 2035 + return 0; 2031 2036 skip_td = true; 2032 2037 break; 2033 2038 case COMP_STOP: ··· 2645 2640 xhci_halt(xhci); 2646 2641 hw_died: 2647 2642 spin_unlock(&xhci->lock); 2648 - return -ESHUTDOWN; 2643 + return IRQ_HANDLED; 2649 2644 } 2650 2645 2651 2646 /*
+1 -1
drivers/usb/host/xhci.h
··· 1267 1267 * since the command ring is 64-byte aligned. 1268 1268 * It must also be greater than 16. 1269 1269 */ 1270 - #define TRBS_PER_SEGMENT 64 1270 + #define TRBS_PER_SEGMENT 256 1271 1271 /* Allow two commands + a link TRB, along with any reserved command TRBs */ 1272 1272 #define MAX_RSVD_CMD_TRBS (TRBS_PER_SEGMENT - 3) 1273 1273 #define TRB_SEGMENT_SIZE (TRBS_PER_SEGMENT*16)
+1 -1
drivers/usb/phy/phy-isp1301-omap.c
··· 94 94 95 95 #if defined(CONFIG_MACH_OMAP_H2) || defined(CONFIG_MACH_OMAP_H3) 96 96 97 - #if defined(CONFIG_TPS65010) || defined(CONFIG_TPS65010_MODULE) 97 + #if defined(CONFIG_TPS65010) || (defined(CONFIG_TPS65010_MODULE) && defined(MODULE)) 98 98 99 99 #include <linux/i2c/tps65010.h> 100 100
+1
drivers/usb/serial/cp210x.c
··· 127 127 { USB_DEVICE(0x10C4, 0x88A5) }, /* Planet Innovation Ingeni ZigBee USB Device */ 128 128 { USB_DEVICE(0x10C4, 0x8946) }, /* Ketra N1 Wireless Interface */ 129 129 { USB_DEVICE(0x10C4, 0x8977) }, /* CEL MeshWorks DevKit Device */ 130 + { USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */ 130 131 { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */ 131 132 { USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */ 132 133 { USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */
-1
drivers/usb/serial/pl2303.c
··· 61 61 { USB_DEVICE(DCU10_VENDOR_ID, DCU10_PRODUCT_ID) }, 62 62 { USB_DEVICE(SITECOM_VENDOR_ID, SITECOM_PRODUCT_ID) }, 63 63 { USB_DEVICE(ALCATEL_VENDOR_ID, ALCATEL_PRODUCT_ID) }, 64 - { USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_PRODUCT_ID) }, 65 64 { USB_DEVICE(SIEMENS_VENDOR_ID, SIEMENS_PRODUCT_ID_SX1), 66 65 .driver_info = PL2303_QUIRK_UART_STATE_IDX0 }, 67 66 { USB_DEVICE(SIEMENS_VENDOR_ID, SIEMENS_PRODUCT_ID_X65),
-4
drivers/usb/serial/pl2303.h
··· 62 62 #define ALCATEL_VENDOR_ID 0x11f7 63 63 #define ALCATEL_PRODUCT_ID 0x02df 64 64 65 - /* Samsung I330 phone cradle */ 66 - #define SAMSUNG_VENDOR_ID 0x04e8 67 - #define SAMSUNG_PRODUCT_ID 0x8001 68 - 69 65 #define SIEMENS_VENDOR_ID 0x11f5 70 66 #define SIEMENS_PRODUCT_ID_SX1 0x0001 71 67 #define SIEMENS_PRODUCT_ID_X65 0x0003
+1 -1
drivers/usb/serial/visor.c
··· 95 95 .driver_info = (kernel_ulong_t)&palm_os_4_probe }, 96 96 { USB_DEVICE(ACER_VENDOR_ID, ACER_S10_ID), 97 97 .driver_info = (kernel_ulong_t)&palm_os_4_probe }, 98 - { USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_SCH_I330_ID), 98 + { USB_DEVICE_INTERFACE_CLASS(SAMSUNG_VENDOR_ID, SAMSUNG_SCH_I330_ID, 0xff), 99 99 .driver_info = (kernel_ulong_t)&palm_os_4_probe }, 100 100 { USB_DEVICE(SAMSUNG_VENDOR_ID, SAMSUNG_SPH_I500_ID), 101 101 .driver_info = (kernel_ulong_t)&palm_os_4_probe },
+7
drivers/usb/storage/unusual_devs.h
··· 766 766 USB_SC_DEVICE, USB_PR_DEVICE, NULL, 767 767 US_FL_GO_SLOW ), 768 768 769 + /* Reported by Christian Schaller <cschalle@redhat.com> */ 770 + UNUSUAL_DEV( 0x059f, 0x0651, 0x0000, 0x0000, 771 + "LaCie", 772 + "External HDD", 773 + USB_SC_DEVICE, USB_PR_DEVICE, NULL, 774 + US_FL_NO_WP_DETECT ), 775 + 769 776 /* Submitted by Joel Bourquard <numlock@freesurf.ch> 770 777 * Some versions of this device need the SubClass and Protocol overrides 771 778 * while others don't.
+27 -4
fs/btrfs/extent-tree.c
··· 3180 3180 btrfs_mark_buffer_dirty(leaf); 3181 3181 fail: 3182 3182 btrfs_release_path(path); 3183 - if (ret) 3184 - btrfs_abort_transaction(trans, root, ret); 3185 3183 return ret; 3186 3184 3187 3185 } ··· 3485 3487 ret = 0; 3486 3488 } 3487 3489 } 3488 - if (!ret) 3490 + if (!ret) { 3489 3491 ret = write_one_cache_group(trans, root, path, cache); 3492 + /* 3493 + * Our block group might still be attached to the list 3494 + * of new block groups in the transaction handle of some 3495 + * other task (struct btrfs_trans_handle->new_bgs). This 3496 + * means its block group item isn't yet in the extent 3497 + * tree. If this happens ignore the error, as we will 3498 + * try again later in the critical section of the 3499 + * transaction commit. 3500 + */ 3501 + if (ret == -ENOENT) { 3502 + ret = 0; 3503 + spin_lock(&cur_trans->dirty_bgs_lock); 3504 + if (list_empty(&cache->dirty_list)) { 3505 + list_add_tail(&cache->dirty_list, 3506 + &cur_trans->dirty_bgs); 3507 + btrfs_get_block_group(cache); 3508 + } 3509 + spin_unlock(&cur_trans->dirty_bgs_lock); 3510 + } else if (ret) { 3511 + btrfs_abort_transaction(trans, root, ret); 3512 + } 3513 + } 3490 3514 3491 3515 /* if its not on the io list, we need to put the block group */ 3492 3516 if (should_put) ··· 3617 3597 ret = 0; 3618 3598 } 3619 3599 } 3620 - if (!ret) 3600 + if (!ret) { 3621 3601 ret = write_one_cache_group(trans, root, path, cache); 3602 + if (ret) 3603 + btrfs_abort_transaction(trans, root, ret); 3604 + } 3622 3605 3623 3606 /* if its not on the io list, we need to put the block group */ 3624 3607 if (should_put)
+19
fs/btrfs/extent_io.c
··· 4772 4772 start >> PAGE_CACHE_SHIFT); 4773 4773 if (eb && atomic_inc_not_zero(&eb->refs)) { 4774 4774 rcu_read_unlock(); 4775 + /* 4776 + * Lock our eb's refs_lock to avoid races with 4777 + * free_extent_buffer. When we get our eb it might be flagged 4778 + * with EXTENT_BUFFER_STALE and another task running 4779 + * free_extent_buffer might have seen that flag set, 4780 + * eb->refs == 2, that the buffer isn't under IO (dirty and 4781 + * writeback flags not set) and it's still in the tree (flag 4782 + * EXTENT_BUFFER_TREE_REF set), therefore being in the process 4783 + * of decrementing the extent buffer's reference count twice. 4784 + * So here we could race and increment the eb's reference count, 4785 + * clear its stale flag, mark it as dirty and drop our reference 4786 + * before the other task finishes executing free_extent_buffer, 4787 + * which would later result in an attempt to free an extent 4788 + * buffer that is dirty. 4789 + */ 4790 + if (test_bit(EXTENT_BUFFER_STALE, &eb->bflags)) { 4791 + spin_lock(&eb->refs_lock); 4792 + spin_unlock(&eb->refs_lock); 4793 + } 4775 4794 mark_extent_buffer_accessed(eb, NULL); 4776 4795 return eb; 4777 4796 }
+12 -2
fs/btrfs/free-space-cache.c
··· 3466 3466 struct btrfs_free_space_ctl *ctl = root->free_ino_ctl; 3467 3467 int ret; 3468 3468 struct btrfs_io_ctl io_ctl; 3469 + bool release_metadata = true; 3469 3470 3470 3471 if (!btrfs_test_opt(root, INODE_MAP_CACHE)) 3471 3472 return 0; ··· 3474 3473 memset(&io_ctl, 0, sizeof(io_ctl)); 3475 3474 ret = __btrfs_write_out_cache(root, inode, ctl, NULL, &io_ctl, 3476 3475 trans, path, 0); 3477 - if (!ret) 3476 + if (!ret) { 3477 + /* 3478 + * At this point writepages() didn't error out, so our metadata 3479 + * reservation is released when the writeback finishes, at 3480 + * inode.c:btrfs_finish_ordered_io(), regardless of it finishing 3481 + * with or without an error. 3482 + */ 3483 + release_metadata = false; 3478 3484 ret = btrfs_wait_cache_io(root, trans, NULL, &io_ctl, path, 0); 3485 + } 3479 3486 3480 3487 if (ret) { 3481 - btrfs_delalloc_release_metadata(inode, inode->i_size); 3488 + if (release_metadata) 3489 + btrfs_delalloc_release_metadata(inode, inode->i_size); 3482 3490 #ifdef DEBUG 3483 3491 btrfs_err(root->fs_info, 3484 3492 "failed to write free ino cache for root %llu",
+10 -4
fs/btrfs/ordered-data.c
··· 722 722 int btrfs_wait_ordered_range(struct inode *inode, u64 start, u64 len) 723 723 { 724 724 int ret = 0; 725 + int ret_wb = 0; 725 726 u64 end; 726 727 u64 orig_end; 727 728 struct btrfs_ordered_extent *ordered; ··· 742 741 if (ret) 743 742 return ret; 744 743 745 - ret = filemap_fdatawait_range(inode->i_mapping, start, orig_end); 746 - if (ret) 747 - return ret; 744 + /* 745 + * If we have a writeback error don't return immediately. Wait first 746 + * for any ordered extents that haven't completed yet. This is to make 747 + * sure no one can dirty the same page ranges and call writepages() 748 + * before the ordered extents complete - to avoid failures (-EEXIST) 749 + * when adding the new ordered extents to the ordered tree. 750 + */ 751 + ret_wb = filemap_fdatawait_range(inode->i_mapping, start, orig_end); 748 752 749 753 end = orig_end; 750 754 while (1) { ··· 773 767 break; 774 768 end--; 775 769 } 776 - return ret; 770 + return ret_wb ? ret_wb : ret; 777 771 } 778 772 779 773 /*
+3
fs/exec.c
··· 659 659 if (stack_base > STACK_SIZE_MAX) 660 660 stack_base = STACK_SIZE_MAX; 661 661 662 + /* Add space for stack randomization. */ 663 + stack_base += (STACK_RND_MASK << PAGE_SHIFT); 664 + 662 665 /* Make sure we didn't let the argument array grow too large. */ 663 666 if (vma->vm_end - vma->vm_start > stack_base) 664 667 return -ENOMEM;
-1
fs/ext4/ext4.h
··· 2889 2889 struct ext4_map_blocks *map, int flags); 2890 2890 extern int ext4_ext_calc_metadata_amount(struct inode *inode, 2891 2891 ext4_lblk_t lblocks); 2892 - extern int ext4_extent_tree_init(handle_t *, struct inode *); 2893 2892 extern int ext4_ext_calc_credits_for_single_extent(struct inode *inode, 2894 2893 int num, 2895 2894 struct ext4_ext_path *path);
+6
fs/ext4/ext4_jbd2.c
··· 87 87 ext4_put_nojournal(handle); 88 88 return 0; 89 89 } 90 + 91 + if (!handle->h_transaction) { 92 + err = jbd2_journal_stop(handle); 93 + return handle->h_err ? handle->h_err : err; 94 + } 95 + 90 96 sb = handle->h_transaction->t_journal->j_private; 91 97 err = handle->h_err; 92 98 rc = jbd2_journal_stop(handle);
+9 -1
fs/ext4/extents.c
··· 377 377 ext4_lblk_t lblock = le32_to_cpu(ext->ee_block); 378 378 ext4_lblk_t last = lblock + len - 1; 379 379 380 - if (lblock > last) 380 + if (len == 0 || lblock > last) 381 381 return 0; 382 382 return ext4_data_block_valid(EXT4_SB(inode->i_sb), block, len); 383 383 } ··· 5395 5395 unsigned int credits; 5396 5396 loff_t new_size, ioffset; 5397 5397 int ret; 5398 + 5399 + /* 5400 + * We need to test this early because xfstests assumes that a 5401 + * collapse range of (0, 1) will return EOPNOTSUPP if the file 5402 + * system does not support collapse range. 5403 + */ 5404 + if (!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS)) 5405 + return -EOPNOTSUPP; 5398 5406 5399 5407 /* Collapse range works only on fs block size aligned offsets. */ 5400 5408 if (offset & (EXT4_CLUSTER_SIZE(sb) - 1) ||
+1 -1
fs/ext4/inode.c
··· 4345 4345 int inode_size = EXT4_INODE_SIZE(sb); 4346 4346 4347 4347 oi.orig_ino = orig_ino; 4348 - ino = orig_ino & ~(inodes_per_block - 1); 4348 + ino = ((orig_ino - 1) & ~(inodes_per_block - 1)) + 1; 4349 4349 for (i = 0; i < inodes_per_block; i++, ino++, buf += inode_size) { 4350 4350 if (ino == orig_ino) 4351 4351 continue;
+2
fs/ext4/super.c
··· 294 294 struct ext4_super_block *es = EXT4_SB(sb)->s_es; 295 295 296 296 EXT4_SB(sb)->s_mount_state |= EXT4_ERROR_FS; 297 + if (bdev_read_only(sb->s_bdev)) 298 + return; 297 299 es->s_state |= cpu_to_le16(EXT4_ERROR_FS); 298 300 es->s_last_error_time = cpu_to_le32(get_seconds()); 299 301 strncpy(es->s_last_error_func, func, sizeof(es->s_last_error_func));
+1 -1
fs/hostfs/hostfs_kern.c
··· 581 581 if (name == NULL) 582 582 goto out_put; 583 583 584 - fd = file_create(name, mode & S_IFMT); 584 + fd = file_create(name, mode & 0777); 585 585 if (fd < 0) 586 586 error = fd; 587 587 else
+9 -1
fs/jbd2/recovery.c
··· 842 842 { 843 843 jbd2_journal_revoke_header_t *header; 844 844 int offset, max; 845 + int csum_size = 0; 846 + __u32 rcount; 845 847 int record_len = 4; 846 848 847 849 header = (jbd2_journal_revoke_header_t *) bh->b_data; 848 850 offset = sizeof(jbd2_journal_revoke_header_t); 849 - max = be32_to_cpu(header->r_count); 851 + rcount = be32_to_cpu(header->r_count); 850 852 851 853 if (!jbd2_revoke_block_csum_verify(journal, header)) 852 854 return -EINVAL; 855 + 856 + if (jbd2_journal_has_csum_v2or3(journal)) 857 + csum_size = sizeof(struct jbd2_journal_revoke_tail); 858 + if (rcount > journal->j_blocksize - csum_size) 859 + return -EINVAL; 860 + max = rcount; 853 861 854 862 if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT)) 855 863 record_len = 8;
+10 -8
fs/jbd2/revoke.c
··· 577 577 { 578 578 int csum_size = 0; 579 579 struct buffer_head *descriptor; 580 - int offset; 580 + int sz, offset; 581 581 journal_header_t *header; 582 582 583 583 /* If we are already aborting, this all becomes a noop. We ··· 594 594 if (jbd2_journal_has_csum_v2or3(journal)) 595 595 csum_size = sizeof(struct jbd2_journal_revoke_tail); 596 596 597 + if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT)) 598 + sz = 8; 599 + else 600 + sz = 4; 601 + 597 602 /* Make sure we have a descriptor with space left for the record */ 598 603 if (descriptor) { 599 - if (offset >= journal->j_blocksize - csum_size) { 604 + if (offset + sz > journal->j_blocksize - csum_size) { 600 605 flush_descriptor(journal, descriptor, offset, write_op); 601 606 descriptor = NULL; 602 607 } ··· 624 619 *descriptorp = descriptor; 625 620 } 626 621 627 - if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT)) { 622 + if (JBD2_HAS_INCOMPAT_FEATURE(journal, JBD2_FEATURE_INCOMPAT_64BIT)) 628 623 * ((__be64 *)(&descriptor->b_data[offset])) = 629 624 cpu_to_be64(record->blocknr); 630 - offset += 8; 631 - 632 - } else { 625 + else 633 626 * ((__be32 *)(&descriptor->b_data[offset])) = 634 627 cpu_to_be32(record->blocknr); 635 - offset += 4; 636 - } 628 + offset += sz; 637 629 638 630 *offsetp = offset; 639 631 }
+16 -9
fs/jbd2/transaction.c
··· 551 551 int result; 552 552 int wanted; 553 553 554 - WARN_ON(!transaction); 555 554 if (is_handle_aborted(handle)) 556 555 return -EROFS; 557 556 journal = transaction->t_journal; ··· 626 627 tid_t tid; 627 628 int need_to_start, ret; 628 629 629 - WARN_ON(!transaction); 630 630 /* If we've had an abort of any type, don't even think about 631 631 * actually doing the restart! */ 632 632 if (is_handle_aborted(handle)) ··· 783 785 int need_copy = 0; 784 786 unsigned long start_lock, time_lock; 785 787 786 - WARN_ON(!transaction); 787 788 if (is_handle_aborted(handle)) 788 789 return -EROFS; 789 790 journal = transaction->t_journal; ··· 1048 1051 int err; 1049 1052 1050 1053 jbd_debug(5, "journal_head %p\n", jh); 1051 - WARN_ON(!transaction); 1052 1054 err = -EROFS; 1053 1055 if (is_handle_aborted(handle)) 1054 1056 goto out; ··· 1262 1266 struct journal_head *jh; 1263 1267 int ret = 0; 1264 1268 1265 - WARN_ON(!transaction); 1266 1269 if (is_handle_aborted(handle)) 1267 1270 return -EROFS; 1268 1271 journal = transaction->t_journal; ··· 1392 1397 int err = 0; 1393 1398 int was_modified = 0; 1394 1399 1395 - WARN_ON(!transaction); 1396 1400 if (is_handle_aborted(handle)) 1397 1401 return -EROFS; 1398 1402 journal = transaction->t_journal; ··· 1524 1530 tid_t tid; 1525 1531 pid_t pid; 1526 1532 1527 - if (!transaction) 1528 - goto free_and_exit; 1533 + if (!transaction) { 1534 + /* 1535 + * Handle is already detached from the transaction so 1536 + * there is nothing to do other than decrease a refcount, 1537 + * or free the handle if refcount drops to zero 1538 + */ 1539 + if (--handle->h_ref > 0) { 1540 + jbd_debug(4, "h_ref %d -> %d\n", handle->h_ref + 1, 1541 + handle->h_ref); 1542 + return err; 1543 + } else { 1544 + if (handle->h_rsv_handle) 1545 + jbd2_free_handle(handle->h_rsv_handle); 1546 + goto free_and_exit; 1547 + } 1548 + } 1529 1549 journal = transaction->t_journal; 1530 1550 1531 1551 J_ASSERT(journal_current_handle() == handle); ··· 2381 2373 
transaction_t *transaction = handle->h_transaction; 2382 2374 journal_t *journal; 2383 2375 2384 - WARN_ON(!transaction); 2385 2376 if (is_handle_aborted(handle)) 2386 2377 return -EROFS; 2387 2378 journal = transaction->t_journal;
+8 -1
fs/kernfs/dir.c
··· 518 518 if (!kn) 519 519 goto err_out1; 520 520 521 - ret = ida_simple_get(&root->ino_ida, 1, 0, GFP_KERNEL); 521 + /* 522 + * If the ino of the sysfs entry created for a kmem cache gets 523 + * allocated from an ida layer, which is accounted to the memcg that 524 + * owns the cache, the memcg will get pinned forever. So do not account 525 + * ino ida allocations. 526 + */ 527 + ret = ida_simple_get(&root->ino_ida, 1, 0, 528 + GFP_KERNEL | __GFP_NOACCOUNT); 522 529 if (ret < 0) 523 530 goto err_out2; 524 531 kn->ino = ret;
+11
fs/nfsd/blocklayout.c
··· 181 181 } 182 182 183 183 const struct nfsd4_layout_ops bl_layout_ops = { 184 + /* 185 + * Pretend that we send notification to the client. This is a blatant 186 + * lie to force recent Linux clients to cache our device IDs. 187 + * We rarely ever change the device ID, so the harm of leaking deviceids 188 + * for a while isn't too bad. Unfortunately RFC5661 is a complete mess 189 + * in this regard, but I filed errata 4119 for this a while ago, and 190 + * hopefully the Linux client will eventually start caching deviceids 191 + * without this again. 192 + */ 193 + .notify_types = 194 + NOTIFY_DEVICEID4_DELETE | NOTIFY_DEVICEID4_CHANGE, 184 195 .proc_getdeviceinfo = nfsd4_block_proc_getdeviceinfo, 185 196 .encode_getdeviceinfo = nfsd4_block_encode_getdeviceinfo, 186 197 .proc_layoutget = nfsd4_block_proc_layoutget,
+55 -64
fs/nfsd/nfs4callback.c
··· 224 224 } 225 225 226 226 static int decode_cb_op_status(struct xdr_stream *xdr, enum nfs_opnum4 expected, 227 - enum nfsstat4 *status) 227 + int *status) 228 228 { 229 229 __be32 *p; 230 230 u32 op; ··· 235 235 op = be32_to_cpup(p++); 236 236 if (unlikely(op != expected)) 237 237 goto out_unexpected; 238 - *status = be32_to_cpup(p); 238 + *status = nfs_cb_stat_to_errno(be32_to_cpup(p)); 239 239 return 0; 240 240 out_overflow: 241 241 print_overflow_msg(__func__, xdr); ··· 446 446 static int decode_cb_sequence4res(struct xdr_stream *xdr, 447 447 struct nfsd4_callback *cb) 448 448 { 449 - enum nfsstat4 nfserr; 450 449 int status; 451 450 452 451 if (cb->cb_minorversion == 0) 453 452 return 0; 454 453 455 - status = decode_cb_op_status(xdr, OP_CB_SEQUENCE, &nfserr); 456 - if (unlikely(status)) 457 - goto out; 458 - if (unlikely(nfserr != NFS4_OK)) 459 - goto out_default; 460 - status = decode_cb_sequence4resok(xdr, cb); 461 - out: 462 - return status; 463 - out_default: 464 - return nfs_cb_stat_to_errno(nfserr); 454 + status = decode_cb_op_status(xdr, OP_CB_SEQUENCE, &cb->cb_status); 455 + if (unlikely(status || cb->cb_status)) 456 + return status; 457 + 458 + return decode_cb_sequence4resok(xdr, cb); 465 459 } 466 460 467 461 /* ··· 518 524 struct nfsd4_callback *cb) 519 525 { 520 526 struct nfs4_cb_compound_hdr hdr; 521 - enum nfsstat4 nfserr; 522 527 int status; 523 528 524 529 status = decode_cb_compound4res(xdr, &hdr); 525 530 if (unlikely(status)) 526 - goto out; 531 + return status; 527 532 528 533 if (cb != NULL) { 529 534 status = decode_cb_sequence4res(xdr, cb); 530 - if (unlikely(status)) 531 - goto out; 535 + if (unlikely(status || cb->cb_status)) 536 + return status; 532 537 } 533 538 534 - status = decode_cb_op_status(xdr, OP_CB_RECALL, &nfserr); 535 - if (unlikely(status)) 536 - goto out; 537 - if (unlikely(nfserr != NFS4_OK)) 538 - status = nfs_cb_stat_to_errno(nfserr); 539 - out: 540 - return status; 539 + return decode_cb_op_status(xdr, 
OP_CB_RECALL, &cb->cb_status); 541 540 } 542 541 543 542 #ifdef CONFIG_NFSD_PNFS ··· 608 621 struct nfsd4_callback *cb) 609 622 { 610 623 struct nfs4_cb_compound_hdr hdr; 611 - enum nfsstat4 nfserr; 612 624 int status; 613 625 614 626 status = decode_cb_compound4res(xdr, &hdr); 615 627 if (unlikely(status)) 616 - goto out; 628 + return status; 629 + 617 630 if (cb) { 618 631 status = decode_cb_sequence4res(xdr, cb); 619 - if (unlikely(status)) 620 - goto out; 632 + if (unlikely(status || cb->cb_status)) 633 + return status; 621 634 } 622 - status = decode_cb_op_status(xdr, OP_CB_LAYOUTRECALL, &nfserr); 623 - if (unlikely(status)) 624 - goto out; 625 - if (unlikely(nfserr != NFS4_OK)) 626 - status = nfs_cb_stat_to_errno(nfserr); 627 - out: 628 - return status; 635 + return decode_cb_op_status(xdr, OP_CB_LAYOUTRECALL, &cb->cb_status); 629 636 } 630 637 #endif /* CONFIG_NFSD_PNFS */ 631 638 ··· 879 898 if (!nfsd41_cb_get_slot(clp, task)) 880 899 return; 881 900 } 882 - spin_lock(&clp->cl_lock); 883 - if (list_empty(&cb->cb_per_client)) { 884 - /* This is the first call, not a restart */ 885 - cb->cb_done = false; 886 - list_add(&cb->cb_per_client, &clp->cl_callbacks); 887 - } 888 - spin_unlock(&clp->cl_lock); 889 901 rpc_call_start(task); 890 902 } 891 903 ··· 892 918 893 919 if (clp->cl_minorversion) { 894 920 /* No need for lock, access serialized in nfsd4_cb_prepare */ 895 - ++clp->cl_cb_session->se_cb_seq_nr; 921 + if (!task->tk_status) 922 + ++clp->cl_cb_session->se_cb_seq_nr; 896 923 clear_bit(0, &clp->cl_cb_slot_busy); 897 924 rpc_wake_up_next(&clp->cl_cb_waitq); 898 925 dprintk("%s: freed slot, new seqid=%d\n", __func__, 899 926 clp->cl_cb_session->se_cb_seq_nr); 900 927 } 901 928 902 - if (clp->cl_cb_client != task->tk_client) { 903 - /* We're shutting down or changing cl_cb_client; leave 904 - * it to nfsd4_process_cb_update to restart the call if 905 - * necessary. 
*/ 929 + /* 930 + * If the backchannel connection was shut down while this 931 + * task was queued, we need to resubmit it after setting up 932 + * a new backchannel connection. 933 + * 934 + * Note that if we lost our callback connection permanently 935 + * the submission code will error out, so we don't need to 936 + * handle that case here. 937 + */ 938 + if (task->tk_flags & RPC_TASK_KILLED) { 939 + task->tk_status = 0; 940 + cb->cb_need_restart = true; 906 941 return; 907 942 } 908 943 909 - if (cb->cb_done) 910 - return; 944 + if (cb->cb_status) { 945 + WARN_ON_ONCE(task->tk_status); 946 + task->tk_status = cb->cb_status; 947 + } 911 948 912 949 switch (cb->cb_ops->done(cb, task)) { 913 950 case 0: ··· 934 949 default: 935 950 BUG(); 936 951 } 937 - cb->cb_done = true; 938 952 } 939 953 940 954 static void nfsd4_cb_release(void *calldata) 941 955 { 942 956 struct nfsd4_callback *cb = calldata; 943 - struct nfs4_client *clp = cb->cb_clp; 944 957 945 - if (cb->cb_done) { 946 - spin_lock(&clp->cl_lock); 947 - list_del(&cb->cb_per_client); 948 - spin_unlock(&clp->cl_lock); 949 - 958 + if (cb->cb_need_restart) 959 + nfsd4_run_cb(cb); 960 + else 950 961 cb->cb_ops->release(cb); 951 - } 962 + 952 963 } 953 964 954 965 static const struct rpc_call_ops nfsd4_cb_ops = { ··· 1039 1058 nfsd4_mark_cb_down(clp, err); 1040 1059 return; 1041 1060 } 1042 - /* Yay, the callback channel's back! 
Restart any callbacks: */ 1043 - list_for_each_entry(cb, &clp->cl_callbacks, cb_per_client) 1044 - queue_work(callback_wq, &cb->cb_work); 1045 1061 } 1046 1062 1047 1063 static void ··· 1049 1071 struct nfs4_client *clp = cb->cb_clp; 1050 1072 struct rpc_clnt *clnt; 1051 1073 1052 - if (cb->cb_ops && cb->cb_ops->prepare) 1053 - cb->cb_ops->prepare(cb); 1074 + if (cb->cb_need_restart) { 1075 + cb->cb_need_restart = false; 1076 + } else { 1077 + if (cb->cb_ops && cb->cb_ops->prepare) 1078 + cb->cb_ops->prepare(cb); 1079 + } 1054 1080 1055 1081 if (clp->cl_flags & NFSD4_CLIENT_CB_FLAG_MASK) 1056 1082 nfsd4_process_cb_update(cb); ··· 1066 1084 cb->cb_ops->release(cb); 1067 1085 return; 1068 1086 } 1087 + 1088 + /* 1089 + * Don't send probe messages for 4.1 or later. 1090 + */ 1091 + if (!cb->cb_ops && clp->cl_minorversion) { 1092 + clp->cl_cb_state = NFSD4_CB_UP; 1093 + return; 1094 + } 1095 + 1069 1096 cb->cb_msg.rpc_cred = clp->cl_cb_cred; 1070 1097 rpc_call_async(clnt, &cb->cb_msg, RPC_TASK_SOFT | RPC_TASK_SOFTCONN, 1071 1098 cb->cb_ops ? &nfsd4_cb_ops : &nfsd4_cb_probe_ops, cb); ··· 1089 1098 cb->cb_msg.rpc_resp = cb; 1090 1099 cb->cb_ops = ops; 1091 1100 INIT_WORK(&cb->cb_work, nfsd4_run_cb_work); 1092 - INIT_LIST_HEAD(&cb->cb_per_client); 1093 - cb->cb_done = true; 1101 + cb->cb_status = 0; 1102 + cb->cb_need_restart = false; 1094 1103 } 1095 1104 1096 1105 void nfsd4_run_cb(struct nfsd4_callback *cb)
+129 -18
fs/nfsd/nfs4state.c
··· 94 94 static struct kmem_cache *file_slab;
95 95 static struct kmem_cache *stateid_slab;
96 96 static struct kmem_cache *deleg_slab;
97 + static struct kmem_cache *odstate_slab;
97 98 
98 99 static void free_session(struct nfsd4_session *);
99 100 
··· 282 281 	if (atomic_dec_and_lock(&fi->fi_ref, &state_lock)) {
283 282 		hlist_del_rcu(&fi->fi_hash);
284 283 		spin_unlock(&state_lock);
284 + 		WARN_ON_ONCE(!list_empty(&fi->fi_clnt_odstate));
285 285 		WARN_ON_ONCE(!list_empty(&fi->fi_delegations));
286 286 		call_rcu(&fi->fi_rcu, nfsd4_free_file_rcu);
287 287 	}
··· 473 471 		__nfs4_file_put_access(fp, O_RDONLY);
474 472 }
475 473 
474 + /*
475 + * Allocate a new open/delegation state counter. This is needed for
476 + * pNFS for proper return on close semantics.
477 + *
478 + * Note that we only allocate it for pNFS-enabled exports, otherwise
479 + * all pointers to struct nfs4_clnt_odstate are always NULL.
480 + */
481 + static struct nfs4_clnt_odstate *
482 + alloc_clnt_odstate(struct nfs4_client *clp)
483 + {
484 + 	struct nfs4_clnt_odstate *co;
485 + 
486 + 	co = kmem_cache_zalloc(odstate_slab, GFP_KERNEL);
487 + 	if (co) {
488 + 		co->co_client = clp;
489 + 		atomic_set(&co->co_odcount, 1);
490 + 	}
491 + 	return co;
492 + }
493 + 
494 + static void
495 + hash_clnt_odstate_locked(struct nfs4_clnt_odstate *co)
496 + {
497 + 	struct nfs4_file *fp = co->co_file;
498 + 
499 + 	lockdep_assert_held(&fp->fi_lock);
500 + 	list_add(&co->co_perfile, &fp->fi_clnt_odstate);
501 + }
502 + 
503 + static inline void
504 + get_clnt_odstate(struct nfs4_clnt_odstate *co)
505 + {
506 + 	if (co)
507 + 		atomic_inc(&co->co_odcount);
508 + }
509 + 
510 + static void
511 + put_clnt_odstate(struct nfs4_clnt_odstate *co)
512 + {
513 + 	struct nfs4_file *fp;
514 + 
515 + 	if (!co)
516 + 		return;
517 + 
518 + 	fp = co->co_file;
519 + 	if (atomic_dec_and_lock(&co->co_odcount, &fp->fi_lock)) {
520 + 		list_del(&co->co_perfile);
521 + 		spin_unlock(&fp->fi_lock);
522 + 
523 + 		nfsd4_return_all_file_layouts(co->co_client, fp);
524 + 		kmem_cache_free(odstate_slab, co);
525 + 	}
526 + }
527 + 
528 + static struct nfs4_clnt_odstate *
529 + find_or_hash_clnt_odstate(struct nfs4_file *fp, struct nfs4_clnt_odstate *new)
530 + {
531 + 	struct nfs4_clnt_odstate *co;
532 + 	struct nfs4_client *cl;
533 + 
534 + 	if (!new)
535 + 		return NULL;
536 + 
537 + 	cl = new->co_client;
538 + 
539 + 	spin_lock(&fp->fi_lock);
540 + 	list_for_each_entry(co, &fp->fi_clnt_odstate, co_perfile) {
541 + 		if (co->co_client == cl) {
542 + 			get_clnt_odstate(co);
543 + 			goto out;
544 + 		}
545 + 	}
546 + 	co = new;
547 + 	co->co_file = fp;
548 + 	hash_clnt_odstate_locked(new);
549 + out:
550 + 	spin_unlock(&fp->fi_lock);
551 + 	return co;
552 + }
553 + 
476 554 struct nfs4_stid *nfs4_alloc_stid(struct nfs4_client *cl,
477 555 					 struct kmem_cache *slab)
478 556 {
··· 688 606 }
689 607 
690 608 static struct nfs4_delegation *
691 - alloc_init_deleg(struct nfs4_client *clp, struct svc_fh *current_fh)
609 + alloc_init_deleg(struct nfs4_client *clp, struct svc_fh *current_fh,
610 + 		 struct nfs4_clnt_odstate *odstate)
692 611 {
693 612 	struct nfs4_delegation *dp;
694 613 	long n;
··· 714 631 	INIT_LIST_HEAD(&dp->dl_perfile);
715 632 	INIT_LIST_HEAD(&dp->dl_perclnt);
716 633 	INIT_LIST_HEAD(&dp->dl_recall_lru);
634 + 	dp->dl_clnt_odstate = odstate;
635 + 	get_clnt_odstate(odstate);
717 636 	dp->dl_type = NFS4_OPEN_DELEGATE_READ;
718 637 	dp->dl_retries = 1;
719 638 	nfsd4_init_cb(&dp->dl_recall, dp->dl_stid.sc_client,
··· 799 714 	spin_lock(&state_lock);
800 715 	unhash_delegation_locked(dp);
801 716 	spin_unlock(&state_lock);
717 + 	put_clnt_odstate(dp->dl_clnt_odstate);
802 718 	nfs4_put_deleg_lease(dp->dl_stid.sc_file);
803 719 	nfs4_put_stid(&dp->dl_stid);
804 720 }
··· 810 724 
811 725 	WARN_ON(!list_empty(&dp->dl_recall_lru));
812 726 
727 + 	put_clnt_odstate(dp->dl_clnt_odstate);
813 728 	nfs4_put_deleg_lease(dp->dl_stid.sc_file);
814 729 
815 730 	if (clp->cl_minorversion == 0)
··· 1020 933 {
1021 934 	struct nfs4_ol_stateid *stp = openlockstateid(stid);
1022 935 
936 + 	put_clnt_odstate(stp->st_clnt_odstate);
1023 937 	release_all_access(stp);
1024 938 	if (stp->st_stateowner)
1025 939 		nfs4_put_stateowner(stp->st_stateowner);
··· 1626 1538 	INIT_LIST_HEAD(&clp->cl_openowners);
1627 1539 	INIT_LIST_HEAD(&clp->cl_delegations);
1628 1540 	INIT_LIST_HEAD(&clp->cl_lru);
1629 - 	INIT_LIST_HEAD(&clp->cl_callbacks);
1630 1541 	INIT_LIST_HEAD(&clp->cl_revoked);
1631 1542 #ifdef CONFIG_NFSD_PNFS
1632 1543 	INIT_LIST_HEAD(&clp->cl_lo_states);
··· 1721 1634 	while (!list_empty(&reaplist)) {
1722 1635 		dp = list_entry(reaplist.next, struct nfs4_delegation, dl_recall_lru);
1723 1636 		list_del_init(&dp->dl_recall_lru);
1637 + 		put_clnt_odstate(dp->dl_clnt_odstate);
1724 1638 		nfs4_put_deleg_lease(dp->dl_stid.sc_file);
1725 1639 		nfs4_put_stid(&dp->dl_stid);
1726 1640 	}
··· 3145 3057 	spin_lock_init(&fp->fi_lock);
3146 3058 	INIT_LIST_HEAD(&fp->fi_stateids);
3147 3059 	INIT_LIST_HEAD(&fp->fi_delegations);
3060 + 	INIT_LIST_HEAD(&fp->fi_clnt_odstate);
3148 3061 	fh_copy_shallow(&fp->fi_fhandle, fh);
3149 3062 	fp->fi_deleg_file = NULL;
3150 3063 	fp->fi_had_conflict = false;
··· 3162 3073 void
3163 3074 nfsd4_free_slabs(void)
3164 3075 {
3076 + 	kmem_cache_destroy(odstate_slab);
3165 3077 	kmem_cache_destroy(openowner_slab);
3166 3078 	kmem_cache_destroy(lockowner_slab);
3167 3079 	kmem_cache_destroy(file_slab);
··· 3193 3103 			sizeof(struct nfs4_delegation), 0, 0, NULL);
3194 3104 	if (deleg_slab == NULL)
3195 3105 		goto out_free_stateid_slab;
3106 + 	odstate_slab = kmem_cache_create("nfsd4_odstate",
3107 + 			sizeof(struct nfs4_clnt_odstate), 0, 0, NULL);
3108 + 	if (odstate_slab == NULL)
3109 + 		goto out_free_deleg_slab;
3196 3110 	return 0;
3197 3111 
3112 + out_free_deleg_slab:
3113 + 	kmem_cache_destroy(deleg_slab);
3198 3114 out_free_stateid_slab:
3199 3115 	kmem_cache_destroy(stateid_slab);
3200 3116 out_free_file_slab:
··· 3677 3581 	open->op_stp = nfs4_alloc_open_stateid(clp);
3678 3582 	if (!open->op_stp)
3679 3583 		return nfserr_jukebox;
3584 + 
3585 + 	if (nfsd4_has_session(cstate) &&
3586 + 	    (cstate->current_fh.fh_export->ex_flags & NFSEXP_PNFS)) {
3587 + 		open->op_odstate = alloc_clnt_odstate(clp);
3588 + 		if (!open->op_odstate)
3589 + 			return nfserr_jukebox;
3590 + 	}
3591 + 
3680 3592 	return nfs_ok;
3681 3593 }
··· 3973 3869 
3974 3870 static struct nfs4_delegation *
3975 3871 nfs4_set_delegation(struct nfs4_client *clp, struct svc_fh *fh,
3976 - 		    struct nfs4_file *fp)
3872 + 		    struct nfs4_file *fp, struct nfs4_clnt_odstate *odstate)
3977 3873 {
3978 3874 	int status;
3979 3875 	struct nfs4_delegation *dp;
··· 3981 3877 	if (fp->fi_had_conflict)
3982 3878 		return ERR_PTR(-EAGAIN);
3983 3879 
3984 - 	dp = alloc_init_deleg(clp, fh);
3880 + 	dp = alloc_init_deleg(clp, fh, odstate);
3985 3881 	if (!dp)
3986 3882 		return ERR_PTR(-ENOMEM);
3987 3883 
··· 4007 3903 	spin_unlock(&state_lock);
4008 3904 out:
4009 3905 	if (status) {
3906 + 		put_clnt_odstate(dp->dl_clnt_odstate);
4010 3907 		nfs4_put_stid(&dp->dl_stid);
4011 3908 		return ERR_PTR(status);
4012 3909 	}
··· 4085 3980 	default:
4086 3981 		goto out_no_deleg;
4087 3982 	}
4088 - 	dp = nfs4_set_delegation(clp, fh, stp->st_stid.sc_file);
3983 + 	dp = nfs4_set_delegation(clp, fh, stp->st_stid.sc_file, stp->st_clnt_odstate);
4089 3984 	if (IS_ERR(dp))
4090 3985 		goto out_no_deleg;
4091 3986 
··· 4174 4069 		release_open_stateid(stp);
4175 4070 		goto out;
4176 4071 	}
4072 + 
4073 + 		stp->st_clnt_odstate = find_or_hash_clnt_odstate(fp,
4074 + 							open->op_odstate);
4075 + 		if (stp->st_clnt_odstate == open->op_odstate)
4076 + 			open->op_odstate = NULL;
4177 4077 	}
4178 4078 	update_stateid(&stp->st_stid.sc_stateid);
4179 4079 	memcpy(&open->op_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
··· 4239 4129 		kmem_cache_free(file_slab, open->op_file);
4240 4130 	if (open->op_stp)
4241 4131 		nfs4_put_stid(&open->op_stp->st_stid);
4132 + 	if (open->op_odstate)
4133 + 		kmem_cache_free(odstate_slab, open->op_odstate);
4242 4134 }
4243 4135 
4244 4136 __be32
··· 4497 4385 		return nfserr_old_stateid;
4498 4386 	}
4499 4387 
4388 + static __be32 nfsd4_check_openowner_confirmed(struct nfs4_ol_stateid *ols)
4389 + {
4390 + 	if (ols->st_stateowner->so_is_open_owner &&
4391 + 	    !(openowner(ols->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
4392 + 		return nfserr_bad_stateid;
4393 + 	return nfs_ok;
4394 + }
4395 + 
4500 4396 static __be32 nfsd4_validate_stateid(struct nfs4_client *cl, stateid_t *stateid)
4501 4397 {
4502 4398 	struct nfs4_stid *s;
4503 - 	struct nfs4_ol_stateid *ols;
4504 4399 	__be32 status = nfserr_bad_stateid;
4505 4400 
4506 4401 	if (ZERO_STATEID(stateid) || ONE_STATEID(stateid))
··· 4537 4418 		break;
4538 4419 	case NFS4_OPEN_STID:
4539 4420 	case NFS4_LOCK_STID:
4540 - 		ols = openlockstateid(s);
4541 - 		if (ols->st_stateowner->so_is_open_owner
4542 - 		    && !(openowner(ols->st_stateowner)->oo_flags
4543 - 			 & NFS4_OO_CONFIRMED))
4544 - 			status = nfserr_bad_stateid;
4545 - 		else
4546 - 			status = nfs_ok;
4421 + 		status = nfsd4_check_openowner_confirmed(openlockstateid(s));
4547 4422 		break;
4548 4423 	default:
4549 4424 		printk("unknown stateid type %x\n", s->sc_type);
··· 4629 4516 	status = nfs4_check_fh(current_fh, stp);
4630 4517 	if (status)
4631 4518 		goto out;
4632 - 	if (stp->st_stateowner->so_is_open_owner
4633 - 	    && !(openowner(stp->st_stateowner)->oo_flags & NFS4_OO_CONFIRMED))
4519 + 	status = nfsd4_check_openowner_confirmed(stp);
4520 + 	if (status)
4634 4521 		goto out;
4635 4522 	status = nfs4_check_openmode(stp, flags);
4636 4523 	if (status)
··· 4964 4851 		goto out;
4965 4852 	update_stateid(&stp->st_stid.sc_stateid);
4966 4853 	memcpy(&close->cl_stateid, &stp->st_stid.sc_stateid, sizeof(stateid_t));
4967 - 
4968 - 	nfsd4_return_all_file_layouts(stp->st_stateowner->so_client,
4969 - 				      stp->st_stid.sc_file);
4970 4854 
4971 4855 	nfsd4_close_open_stateid(stp);
4972 4856 
··· 6598 6488 	list_for_each_safe(pos, next, &reaplist) {
6599 6489 		dp = list_entry (pos, struct nfs4_delegation, dl_recall_lru);
6600 6490 		list_del_init(&dp->dl_recall_lru);
6491 + 		put_clnt_odstate(dp->dl_clnt_odstate);
6601 6492 		nfs4_put_deleg_lease(dp->dl_stid.sc_file);
6602 6493 		nfs4_put_stid(&dp->dl_stid);
6603 6494 	}
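The new `nfs4_clnt_odstate` objects are found-or-inserted under `fp->fi_lock`: the caller optimistically allocates one, and `find_or_hash_clnt_odstate()` either hashes it or returns an existing entry for the same client with its count bumped. A simplified single-threaded model of that lookup (hypothetical types, locking and layout-return omitted):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical userspace model: one counter object per (client, file)
 * pair, shared by every open from that client on that file. */
struct odstate {
	int client_id;
	int count;
	struct odstate *next;	/* models the fp->fi_clnt_odstate list */
};

struct file_state {
	struct odstate *head;
};

/* Like find_or_hash_clnt_odstate(): reuse an existing entry for this
 * client (bumping its count) or hash the caller-provided new one. */
static struct odstate *
find_or_hash(struct file_state *fp, struct odstate *new)
{
	struct odstate *co;

	if (!new)
		return NULL;
	for (co = fp->head; co; co = co->next) {
		if (co->client_id == new->client_id) {
			co->count++;	/* get_clnt_odstate() */
			return co;
		}
	}
	new->next = fp->head;	/* hash_clnt_odstate_locked() */
	fp->head = new;
	return new;
}
```

As in the `nfs4state.c` hunks, the caller compares the returned pointer with its tentative allocation to decide whether the new object was consumed or must be freed.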
+16 -3
fs/nfsd/state.h
··· 63 63 
64 64 struct nfsd4_callback {
65 65 	struct nfs4_client *cb_clp;
66 - 	struct list_head cb_per_client;
67 66 	u32 cb_minorversion;
68 67 	struct rpc_message cb_msg;
69 68 	struct nfsd4_callback_ops *cb_ops;
70 69 	struct work_struct cb_work;
71 - 	bool cb_done;
70 + 	int cb_status;
71 + 	bool cb_need_restart;
72 72 };
73 73 
74 74 struct nfsd4_callback_ops {
··· 126 126 	struct list_head dl_perfile;
127 127 	struct list_head dl_perclnt;
128 128 	struct list_head dl_recall_lru; /* delegation recalled */
129 + 	struct nfs4_clnt_odstate *dl_clnt_odstate;
129 130 	u32 dl_type;
130 131 	time_t dl_time;
131 132 	/* For recall: */
··· 333 332 	int cl_cb_state;
334 333 	struct nfsd4_callback cl_cb_null;
335 334 	struct nfsd4_session *cl_cb_session;
336 - 	struct list_head cl_callbacks; /* list of in-progress callbacks */
337 335 
338 336 	/* for all client information that callback code might need: */
339 337 	spinlock_t cl_lock;
··· 465 465 }
466 466 
467 467 /*
468 + * Per-client state indicating no. of opens and outstanding delegations
469 + * on a file from a particular client. 'od' stands for 'open & delegation'
470 + */
471 + struct nfs4_clnt_odstate {
472 + 	struct nfs4_client *co_client;
473 + 	struct nfs4_file *co_file;
474 + 	struct list_head co_perfile;
475 + 	atomic_t co_odcount;
476 + };
477 + 
478 + /*
468 479  * nfs4_file: a file opened by some number of (open) nfs4_stateowners.
469 480  *
470 481  * These objects are global. nfsd keeps one instance of a nfs4_file per
··· 496 485 	struct list_head fi_delegations;
497 486 	struct rcu_head fi_rcu;
498 487 	};
488 + 	struct list_head fi_clnt_odstate;
499 489 	/* One each for O_RDONLY, O_WRONLY, O_RDWR: */
500 490 	struct file * fi_fds[3];
501 491 	/*
··· 538 526 	struct list_head st_perstateowner;
539 527 	struct list_head st_locks;
540 528 	struct nfs4_stateowner * st_stateowner;
529 + 	struct nfs4_clnt_odstate * st_clnt_odstate;
541 530 	unsigned char st_access_bmap;
542 531 	unsigned char st_deny_bmap;
543 532 	struct nfs4_ol_stateid * st_openstp;
+1
fs/nfsd/xdr4.h
··· 247 247 	struct nfs4_openowner *op_openowner; /* used during processing */
248 248 	struct nfs4_file *op_file; /* used during processing */
249 249 	struct nfs4_ol_stateid *op_stp; /* used during processing */
250 + 	struct nfs4_clnt_odstate *op_odstate; /* used during processing */
250 251 	struct nfs4_acl *op_acl;
251 252 	struct xdr_netobj op_label;
252 253 };
+1
include/drm/drm_pciids.h
··· 186 186 	{0x1002, 0x6658, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
187 187 	{0x1002, 0x665c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
188 188 	{0x1002, 0x665d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
189 + 	{0x1002, 0x665f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
189 190 	{0x1002, 0x6660, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
190 191 	{0x1002, 0x6663, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
191 192 	{0x1002, 0x6664, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+2
include/linux/gfp.h
··· 30 30 #define ___GFP_HARDWALL 0x20000u
31 31 #define ___GFP_THISNODE 0x40000u
32 32 #define ___GFP_RECLAIMABLE 0x80000u
33 + #define ___GFP_NOACCOUNT 0x100000u
33 34 #define ___GFP_NOTRACK 0x200000u
34 35 #define ___GFP_NO_KSWAPD 0x400000u
35 36 #define ___GFP_OTHER_NODE 0x800000u
··· 88 87 #define __GFP_HARDWALL ((__force gfp_t)___GFP_HARDWALL) /* Enforce hardwall cpuset memory allocs */
89 88 #define __GFP_THISNODE ((__force gfp_t)___GFP_THISNODE)/* No fallback, no policies */
90 89 #define __GFP_RECLAIMABLE ((__force gfp_t)___GFP_RECLAIMABLE) /* Page is reclaimable */
90 + #define __GFP_NOACCOUNT ((__force gfp_t)___GFP_NOACCOUNT) /* Don't account to kmemcg */
91 91 #define __GFP_NOTRACK ((__force gfp_t)___GFP_NOTRACK) /* Don't track with kmemcheck */
92 92 
93 93 #define __GFP_NO_KSWAPD ((__force gfp_t)___GFP_NO_KSWAPD)
+10
include/linux/libata.h
··· 205 205 	ATA_LFLAG_SW_ACTIVITY = (1 << 7), /* keep activity stats */
206 206 	ATA_LFLAG_NO_LPM = (1 << 8), /* disable LPM on this link */
207 207 	ATA_LFLAG_RST_ONCE = (1 << 9), /* limit recovery to one reset */
208 + 	ATA_LFLAG_CHANGED = (1 << 10), /* LPM state changed on this link */
208 209 
209 210 	/* struct ata_port flags */
210 211 	ATA_FLAG_SLAVE_POSS = (1 << 0), /* host supports slave dev */
··· 309 308 	 * doing SRST.
310 309 	 */
311 310 	ATA_TMOUT_PMP_SRST_WAIT = 5000,
311 + 
312 + 	/* When the LPM policy is set to ATA_LPM_MAX_POWER, there might
313 + 	 * be a spurious PHY event, so ignore the first PHY event that
314 + 	 * occurs within 10s after the policy change.
315 + 	 */
316 + 	ATA_TMOUT_SPURIOUS_PHY = 10000,
312 317 
313 318 	/* ATA bus states */
314 319 	BUS_UNKNOWN = 0,
··· 795 788 	struct ata_eh_context eh_context;
796 789 
797 790 	struct ata_device device[ATA_MAX_DEVICES];
791 + 
792 + 	unsigned long last_lpm_change; /* when last LPM change happened */
798 793 };
799 794 #define ATA_LINK_CLEAR_BEGIN offsetof(struct ata_link, active_tag)
800 795 #define ATA_LINK_CLEAR_END offsetof(struct ata_link, device[0])
··· 1210 1201 extern int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev);
1211 1202 extern void ata_scsi_port_error_handler(struct Scsi_Host *host, struct ata_port *ap);
1212 1203 extern void ata_scsi_cmd_error_handler(struct Scsi_Host *host, struct ata_port *ap, struct list_head *eh_q);
1204 + extern bool sata_lpm_ignore_phy_events(struct ata_link *link);
1213 1205 
1214 1206 extern int ata_cable_40wire(struct ata_port *ap);
1215 1207 extern int ata_cable_80wire(struct ata_port *ap);
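The new `ATA_LFLAG_CHANGED` flag, `last_lpm_change` timestamp and `ATA_TMOUT_SPURIOUS_PHY` constant together implement the ignore window described in the comment above. A hypothetical millisecond-based model of what `sata_lpm_ignore_phy_events()` decides (the real helper works in jiffies and also clears the flag once the window expires):

```c
#include <assert.h>
#include <stdbool.h>

/* 10s ignore window after an LPM policy change (value from the hunk
 * above, interpreted here as plain milliseconds). */
#define ATA_TMOUT_SPURIOUS_PHY 10000

struct link_state {
	bool lpm_changed;	/* models ATA_LFLAG_CHANGED */
	long last_lpm_change;	/* models link->last_lpm_change, in ms */
};

/* Drop a PHY event if it arrives too soon after an LPM policy change. */
static bool lpm_ignore_phy_event(const struct link_state *link, long now_ms)
{
	return link->lpm_changed &&
	       now_ms - link->last_lpm_change < ATA_TMOUT_SPURIOUS_PHY;
}
```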
+4
include/linux/memcontrol.h
··· 463 463 	if (!memcg_kmem_enabled())
464 464 		return true;
465 465 
466 + 	if (gfp & __GFP_NOACCOUNT)
467 + 		return true;
466 468 	/*
467 469 	 * __GFP_NOFAIL allocations will move on even if charging is not
468 470 	 * possible. Therefore we don't even try, and have this allocation
··· 523 521 memcg_kmem_get_cache(struct kmem_cache *cachep, gfp_t gfp)
524 522 {
525 523 	if (!memcg_kmem_enabled())
524 		return cachep;
525 + 	if (gfp & __GFP_NOACCOUNT)
526 526 		return cachep;
527 527 	if (gfp & __GFP_NOFAIL)
528 528 		return cachep;
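The memcontrol.h hunks add `__GFP_NOACCOUNT` as another early-out before kmemcg charging. A sketch of the resulting bypass predicate, using the flag values from the gfp.h hunk (simplified: the real checks also consider the current task and per-cgroup limits, which are omitted here):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int gfp_t;

/* Flag values as in include/linux/gfp.h of this kernel version. */
#define ___GFP_NOFAIL		0x800u
#define ___GFP_NOACCOUNT	0x100000u
#define __GFP_NOFAIL		((gfp_t)___GFP_NOFAIL)
#define __GFP_NOACCOUNT		((gfp_t)___GFP_NOACCOUNT)

/* Returns true when the allocation should skip kmemcg accounting. */
static bool kmem_charge_bypassed(gfp_t gfp, bool memcg_kmem_enabled)
{
	if (!memcg_kmem_enabled)
		return true;
	if (gfp & __GFP_NOACCOUNT)	/* the new opt-out */
		return true;
	if (gfp & __GFP_NOFAIL)		/* pre-existing opt-out */
		return true;
	return false;
}
```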
-3
include/linux/netdevice.h
··· 25 25 #ifndef _LINUX_NETDEVICE_H
26 26 #define _LINUX_NETDEVICE_H
27 27 
28 - #include <linux/pm_qos.h>
29 28 #include <linux/timer.h>
30 29 #include <linux/bug.h>
31 30 #include <linux/delay.h>
··· 1497 1498  *	for hardware timestamping
1498 1499  *
1499 1500  *	@qdisc_tx_busylock: XXX: need comments on this one
1500 -  *
1501 -  *	@pm_qos_req:	Power Management QoS object
1502 1501  *
1503 1502  * FIXME: cleanup struct net_device such that network protocol info
1504 1503  * moves out.
+4 -3
include/linux/sched/rt.h
··· 18 18 #ifdef CONFIG_RT_MUTEXES
19 19 extern int rt_mutex_getprio(struct task_struct *p);
20 20 extern void rt_mutex_setprio(struct task_struct *p, int prio);
21 - extern int rt_mutex_check_prio(struct task_struct *task, int newprio);
21 + extern int rt_mutex_get_effective_prio(struct task_struct *task, int newprio);
22 22 extern struct task_struct *rt_mutex_get_top_task(struct task_struct *task);
23 23 extern void rt_mutex_adjust_pi(struct task_struct *p);
24 24 static inline bool tsk_is_pi_blocked(struct task_struct *tsk)
··· 31 31 	return p->normal_prio;
32 32 }
33 33 
34 - static inline int rt_mutex_check_prio(struct task_struct *task, int newprio)
34 + static inline int rt_mutex_get_effective_prio(struct task_struct *task,
35 + 					      int newprio)
35 36 {
36 - 	return 0;
37 + 	return newprio;
37 38 }
38 39 
39 40 static inline struct task_struct *rt_mutex_get_top_task(struct task_struct *task)
+8
include/linux/tcp.h
··· 145 145 	 * read the code and the spec side by side (and laugh ...)
146 146 	 * See RFC793 and RFC1122. The RFC writes these in capitals.
147 147 	 */
148 + 	u64 bytes_received;	/* RFC4898 tcpEStatsAppHCThruOctetsReceived
149 + 				 * sum(delta(rcv_nxt)), or how many bytes
150 + 				 * were acked.
151 + 				 */
148 152 	u32 rcv_nxt;	/* What we want to receive next */
149 153 	u32 copied_seq;	/* Head of yet unread data */
150 154 	u32 rcv_wup;	/* rcv_nxt on last window update sent */
151 155 	u32 snd_nxt;	/* Next sequence we send */
152 156 
157 + 	u64 bytes_acked;	/* RFC4898 tcpEStatsAppHCThruOctetsAcked
158 + 				 * sum(delta(snd_una)), or how many bytes
159 + 				 * were acked.
160 + 				 */
153 161 	u32 snd_una;	/* First byte we want an ack for */
154 162 	u32 snd_sml;	/* Last byte of the most recently transmitted small packet */
155 163 	u32 rcv_tstamp;	/* timestamp of last received ACK (for keepalives) */
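The comments describe the new RFC 4898 counters as `sum(delta(snd_una))` and `sum(delta(rcv_nxt))`. Because sequence numbers are 32-bit and wrap, each update has to add the 32-bit delta to a 64-bit running total. A hypothetical helper (not the kernel's code) showing why the truncating cast makes wraparound safe:

```c
#include <assert.h>
#include <stdint.h>

/* Accumulate a 64-bit byte counter from a wrapping 32-bit TCP
 * sequence number: the unsigned 32-bit subtraction yields the true
 * delta even across a wrap. */
static void tcp_update_bytes_acked(uint64_t *bytes_acked,
				   uint32_t old_snd_una, uint32_t new_snd_una)
{
	*bytes_acked += (uint32_t)(new_snd_una - old_snd_una);
}
```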
+1 -1
include/linux/tty.h
··· 339 339 #define TTY_EXCLUSIVE 3 /* Exclusive open mode */
340 340 #define TTY_DEBUG 4 /* Debugging */
341 341 #define TTY_DO_WRITE_WAKEUP 5 /* Call write_wakeup after queuing new */
342 + #define TTY_OTHER_DONE 6 /* Closed pty has completed input processing */
342 343 #define TTY_LDISC_OPEN 11 /* Line discipline is open */
343 344 #define TTY_PTY_LOCK 16 /* pty private */
344 345 #define TTY_NO_WRITE_SPLIT 17 /* Preserve write boundaries to driver */
··· 463 462 extern void do_SAK(struct tty_struct *tty);
464 463 extern void __do_SAK(struct tty_struct *tty);
465 464 extern void no_tty(void);
466 - extern void tty_flush_to_ldisc(struct tty_struct *tty);
467 465 extern void tty_buffer_free_all(struct tty_port *port);
468 466 extern void tty_buffer_flush(struct tty_struct *tty, struct tty_ldisc *ld);
469 467 extern void tty_buffer_init(struct tty_port *port);
+2 -2
include/linux/uidgid.h
··· 109 109 
110 110 static inline bool uid_valid(kuid_t uid)
111 111 {
112 - 	return !uid_eq(uid, INVALID_UID);
112 + 	return __kuid_val(uid) != (uid_t) -1;
113 113 }
114 114 
115 115 static inline bool gid_valid(kgid_t gid)
116 116 {
117 - 	return !gid_eq(gid, INVALID_GID);
117 + 	return __kgid_val(gid) != (gid_t) -1;
118 118 }
119 119 
120 120 #ifdef CONFIG_USER_NS
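After this change, validity is a direct comparison of the raw value against `(uid_t) -1` instead of going through `uid_eq()`/`INVALID_UID`. A self-contained model of the new form (the `kuid_t` wrapper and macros are re-declared here for illustration):

```c
#include <assert.h>
#include <stdbool.h>

typedef unsigned int uid_t;
typedef struct { uid_t val; } kuid_t;	/* type-safe wrapper, as in uidgid.h */

#define KUIDT_INIT(value) ((kuid_t){ value })
#define INVALID_UID KUIDT_INIT((uid_t) -1)

static inline uid_t __kuid_val(kuid_t uid) { return uid.val; }

/* The post-patch check: compare the raw value, not the wrapper. */
static inline bool uid_valid(kuid_t uid)
{
	return __kuid_val(uid) != (uid_t) -1;
}
```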
+2
include/net/cfg802154.h
··· 30 30 struct cfg802154_ops {
31 31 	struct net_device * (*add_virtual_intf_deprecated)(struct wpan_phy *wpan_phy,
32 32 							   const char *name,
33 + 							   unsigned char name_assign_type,
33 34 							   int type);
34 35 	void (*del_virtual_intf_deprecated)(struct wpan_phy *wpan_phy,
35 36 					    struct net_device *dev);
36 37 	int (*add_virtual_intf)(struct wpan_phy *wpan_phy,
37 38 				const char *name,
39 + 				unsigned char name_assign_type,
38 40 				enum nl802154_iftype type,
39 41 				__le64 extended_addr);
40 42 	int (*del_virtual_intf)(struct wpan_phy *wpan_phy,
+7 -3
include/net/codel.h
··· 120 120  * struct codel_params - contains codel parameters
121 121  * @target:	target queue size (in time units)
122 122  * @interval:	width of moving time window
123 +  * @mtu:	device mtu, or minimal queue backlog in bytes.
123 124  * @ecn:	is Explicit Congestion Notification enabled
124 125  */
125 126 struct codel_params {
126 127 	codel_time_t target;
127 128 	codel_time_t interval;
129 + 	u32 mtu;
128 130 	bool ecn;
129 131 };
··· 168 166 	u32 ecn_mark;
169 167 };
170 168 
171 - static void codel_params_init(struct codel_params *params)
169 + static void codel_params_init(struct codel_params *params,
170 + 			      const struct Qdisc *sch)
172 171 {
173 172 	params->interval = MS2TIME(100);
174 173 	params->target = MS2TIME(5);
174 + 	params->mtu = psched_mtu(qdisc_dev(sch));
175 175 	params->ecn = false;
176 176 }
177 177 
··· 184 180 
185 181 static void codel_stats_init(struct codel_stats *stats)
186 182 {
187 - 	stats->maxpacket = 256;
183 + 	stats->maxpacket = 0;
188 184 }
189 185 
190 186 /*
··· 238 234 		stats->maxpacket = qdisc_pkt_len(skb);
239 235 
240 236 	if (codel_time_before(vars->ldelay, params->target) ||
241 - 	    sch->qstats.backlog <= stats->maxpacket) {
237 + 	    sch->qstats.backlog <= params->mtu) {
242 238 		/* went below - stay below for at least interval */
243 239 		vars->first_above_time = 0;
244 240 		return false;
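The behavioural change in the last hunk is that CoDel's "standing queue" test now compares the backlog against the device MTU rather than the largest packet seen so far. A sketch of that decision in isolation (plain comparison instead of the wrap-safe `codel_time_before()`, values and names hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef int32_t codel_time_t;	/* time in CoDel's internal units */

/* Post-patch "went below" test from codel_should_drop(): the queue is
 * not considered standing while sojourn time is under target, or while
 * less than one MTU worth of bytes is backlogged. */
static bool codel_went_below(codel_time_t ldelay, codel_time_t target,
			     uint32_t backlog_bytes, uint32_t mtu)
{
	return ldelay < target || backlog_bytes <= mtu;
}
```

Keying the second clause to the MTU (instead of a `maxpacket` that started at 256 and only grew) avoids declaring a standing queue when the qdisc merely holds a single full-sized packet.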
+2
include/net/mac80211.h
··· 1666 1666  * @sta: station table entry, %NULL for per-vif queue
1667 1667  * @tid: the TID for this queue (unused for per-vif queue)
1668 1668  * @ac: the AC for this queue
1669 +  * @drv_priv: data area for driver use, will always be aligned to
1670 +  *	sizeof(void *).
1669 1671  *
1670 1672  * The driver can obtain packets from this queue by calling
1671 1673  * ieee80211_tx_dequeue().
+92 -2
include/net/mac802154.h
··· 247 247 	__put_unaligned_memmove64(swab64p(le64_src), be64_dst);
248 248 }
249 249 
250 - /* Basic interface to register ieee802154 device */
250 + /**
251 +  * ieee802154_alloc_hw - Allocate a new hardware device
252 +  *
253 +  * This must be called once for each hardware device. The returned pointer
254 +  * must be used to refer to this device when calling other functions.
255 +  * mac802154 allocates a private data area for the driver pointed to by
256 +  * @priv in &struct ieee802154_hw, the size of this area is given as
257 +  * @priv_data_len.
258 +  *
259 +  * @priv_data_len: length of private data
260 +  * @ops: callbacks for this device
261 +  *
262 +  * Return: A pointer to the new hardware device, or %NULL on error.
263 +  */
251 264 struct ieee802154_hw *
252 265 ieee802154_alloc_hw(size_t priv_data_len, const struct ieee802154_ops *ops);
266 + 
267 + /**
268 +  * ieee802154_free_hw - free hardware descriptor
269 +  *
270 +  * This function frees everything that was allocated, including the
271 +  * private data for the driver. You must call ieee802154_unregister_hw()
272 +  * before calling this function.
273 +  *
274 +  * @hw: the hardware to free
275 +  */
253 276 void ieee802154_free_hw(struct ieee802154_hw *hw);
277 + 
278 + /**
279 +  * ieee802154_register_hw - Register hardware device
280 +  *
281 +  * You must call this function before any other functions in
282 +  * mac802154. Note that before a hardware can be registered, you
283 +  * need to fill the contained wpan_phy's information.
284 +  *
285 +  * @hw: the device to register as returned by ieee802154_alloc_hw()
286 +  *
287 +  * Return: 0 on success. An error code otherwise.
288 +  */
254 289 int ieee802154_register_hw(struct ieee802154_hw *hw);
290 + 
291 + /**
292 +  * ieee802154_unregister_hw - Unregister a hardware device
293 +  *
294 +  * This function instructs mac802154 to free allocated resources
295 +  * and unregister netdevices from the networking subsystem.
296 +  *
297 +  * @hw: the hardware to unregister
298 +  */
255 299 void ieee802154_unregister_hw(struct ieee802154_hw *hw);
256 300 
301 + /**
302 +  * ieee802154_rx - receive frame
303 +  *
304 +  * Use this function to hand received frames to mac802154. The receive
305 +  * buffer in @skb must start with an IEEE 802.15.4 header. In case of a
306 +  * paged @skb is used, the driver is recommended to put the ieee802154
307 +  * header of the frame on the linear part of the @skb to avoid memory
308 +  * allocation and/or memcpy by the stack.
309 +  *
310 +  * This function may not be called in IRQ context. Calls to this function
311 +  * for a single hardware must be synchronized against each other.
312 +  *
313 +  * @hw: the hardware this frame came in on
314 +  * @skb: the buffer to receive, owned by mac802154 after this call
315 +  */
257 316 void ieee802154_rx(struct ieee802154_hw *hw, struct sk_buff *skb);
317 + 
318 + /**
319 +  * ieee802154_rx_irqsafe - receive frame
320 +  *
321 +  * Like ieee802154_rx() but can be called in IRQ context
322 +  * (internally defers to a tasklet.)
323 +  *
324 +  * @hw: the hardware this frame came in on
325 +  * @skb: the buffer to receive, owned by mac802154 after this call
326 +  * @lqi: link quality indicator
327 +  */
258 328 void ieee802154_rx_irqsafe(struct ieee802154_hw *hw, struct sk_buff *skb,
259 329 			   u8 lqi);
260 - 
330 + /**
331 +  * ieee802154_wake_queue - wake ieee802154 queue
332 +  * @hw: pointer as obtained from ieee802154_alloc_hw().
333 +  *
334 +  * Drivers should use this function instead of netif_wake_queue.
335 +  */
261 336 void ieee802154_wake_queue(struct ieee802154_hw *hw);
337 + 
338 + /**
339 +  * ieee802154_stop_queue - stop ieee802154 queue
340 +  * @hw: pointer as obtained from ieee802154_alloc_hw().
341 +  *
342 +  * Drivers should use this function instead of netif_stop_queue.
343 +  */
262 344 void ieee802154_stop_queue(struct ieee802154_hw *hw);
345 + 
346 + /**
347 +  * ieee802154_xmit_complete - frame transmission complete
348 +  *
349 +  * @hw: pointer as obtained from ieee802154_alloc_hw().
350 +  * @skb: buffer for transmission
351 +  * @ifs_handling: indicate interframe space handling
352 +  */
263 353 void ieee802154_xmit_complete(struct ieee802154_hw *hw, struct sk_buff *skb,
264 354 			      bool ifs_handling);
265 355 
+5 -2
include/net/tcp.h
··· 576 576 }
577 577 
578 578 /* tcp.c */
579 - void tcp_get_info(const struct sock *, struct tcp_info *);
579 + void tcp_get_info(struct sock *, struct tcp_info *);
580 580 
581 581 /* Read 'sendfile()'-style from a TCP socket */
582 582 typedef int (*sk_read_actor_t)(read_descriptor_t *, struct sk_buff *,
··· 804 804 /* Requires ECN/ECT set on all packets */
805 805 #define TCP_CONG_NEEDS_ECN 0x2
806 806 
807 + union tcp_cc_info;
808 + 
807 809 struct tcp_congestion_ops {
808 810 	struct list_head list;
809 811 	u32 key;
··· 831 829 	/* hook for packet ack accounting (optional) */
832 830 	void (*pkts_acked)(struct sock *sk, u32 num_acked, s32 rtt_us);
833 831 	/* get info for inet_diag (optional) */
834 - 	int (*get_info)(struct sock *sk, u32 ext, struct sk_buff *skb);
832 + 	size_t (*get_info)(struct sock *sk, u32 ext, int *attr,
833 + 			   union tcp_cc_info *info);
835 834 
836 835 	char name[TCP_CA_NAME_MAX];
837 836 	struct module *owner;
+4
include/uapi/linux/inet_diag.h
··· 143 143 	__u32	dctcp_ab_tot;
144 144 };
145 145 
146 + union tcp_cc_info {
147 + 	struct tcpvegas_info	vegas;
148 + 	struct tcp_dctcp_info	dctcp;
149 + };
146 150 #endif /* _UAPI_INET_DIAG_H_ */
+10
include/uapi/linux/mpls.h
··· 31 31 #define MPLS_LS_TTL_MASK 0x000000FF
32 32 #define MPLS_LS_TTL_SHIFT 0
33 33 
34 + /* Reserved labels */
35 + #define MPLS_LABEL_IPV4NULL 0 /* RFC3032 */
36 + #define MPLS_LABEL_RTALERT 1 /* RFC3032 */
37 + #define MPLS_LABEL_IPV6NULL 2 /* RFC3032 */
38 + #define MPLS_LABEL_IMPLNULL 3 /* RFC3032 */
39 + #define MPLS_LABEL_ENTROPY 7 /* RFC6790 */
40 + #define MPLS_LABEL_GAL 13 /* RFC5586 */
41 + #define MPLS_LABEL_OAMALERT 14 /* RFC3429 */
42 + #define MPLS_LABEL_EXTENSION 15 /* RFC7274 */
43 + 
34 44 #endif /* _UAPI_MPLS_H */
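The new reserved-label constants are meant to be compared against the label field extracted with the header's existing `MPLS_LS_*` mask/shift pairs (the TTL pair is visible in the context lines; the label mask and shift below are the values from uapi/linux/mpls.h). A small decode sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Mask/shift values as in uapi/linux/mpls.h. */
#define MPLS_LS_LABEL_MASK	0xFFFFF000
#define MPLS_LS_LABEL_SHIFT	12
#define MPLS_LS_TTL_MASK	0x000000FF
#define MPLS_LS_TTL_SHIFT	0

#define MPLS_LABEL_IMPLNULL	3 /* RFC3032: implicit NULL */

/* Extract the 20-bit label from a host-order label stack entry. */
static uint32_t mpls_entry_label(uint32_t entry)
{
	return (entry & MPLS_LS_LABEL_MASK) >> MPLS_LS_LABEL_SHIFT;
}

/* Extract the 8-bit TTL. */
static uint32_t mpls_entry_ttl(uint32_t entry)
{
	return (entry & MPLS_LS_TTL_MASK) >> MPLS_LS_TTL_SHIFT;
}
```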
+3
include/uapi/linux/tcp.h
··· 112 112 #define TCP_FASTOPEN 23 /* Enable FastOpen on listeners */
113 113 #define TCP_TIMESTAMP 24
114 114 #define TCP_NOTSENT_LOWAT 25 /* limit number of unsent bytes in write queue */
115 + #define TCP_CC_INFO 26 /* Get Congestion Control (optional) info */
115 116 
116 117 struct tcp_repair_opt {
117 118 	__u32 opt_code;
··· 190 189 
191 190 	__u64	tcpi_pacing_rate;
192 191 	__u64	tcpi_max_pacing_rate;
192 + 	__u64	tcpi_bytes_acked; /* RFC4898 tcpEStatsAppHCThruOctetsAcked */
193 + 	__u64	tcpi_bytes_received; /* RFC4898 tcpEStatsAppHCThruOctetsReceived */
193 194 };
194 195 
195 196 /* for TCP_MD5SIG socket option */
+33 -8
kernel/events/core.c
··· 913 913  * Those places that change perf_event::ctx will hold both
914 914  * perf_event_ctx::mutex of the 'old' and 'new' ctx value.
915 915  *
916 -  * Lock ordering is by mutex address. There is one other site where
917 -  * perf_event_context::mutex nests and that is put_event(). But remember that
918 -  * that is a parent<->child context relation, and migration does not affect
919 -  * children, therefore these two orderings should not interact.
916 +  * Lock ordering is by mutex address. There are two other sites where
917 +  * perf_event_context::mutex nests and those are:
918 +  *
919 +  *  - perf_event_exit_task_context()	[ child , 0 ]
920 +  *      __perf_event_exit_task()
921 +  *        sync_child_event()
922 +  *          put_event()			[ parent, 1 ]
923 +  *
924 +  *  - perf_event_init_context()		[ parent, 0 ]
925 +  *      inherit_task_group()
926 +  *        inherit_group()
927 +  *          inherit_event()
928 +  *            perf_event_alloc()
929 +  *              perf_init_event()
930 +  *                perf_try_init_event()	[ child , 1 ]
931 +  *
932 +  * While it appears there is an obvious deadlock here -- the parent and child
933 +  * nesting levels are inverted between the two. This is in fact safe because
934 +  * life-time rules separate them. That is an exiting task cannot fork, and a
935 +  * spawning task cannot (yet) exit.
936 +  *
937 +  * But remember that that these are parent<->child context relations, and
938 +  * migration does not affect children, therefore these two orderings should not
939 +  * interact.
920 940  *
921 941  * The change in perf_event::ctx does not affect children (as claimed above)
922 942  * because the sys_perf_event_open() case will install a new event and break
··· 3677 3657 	}
3678 3658 }
3679 3659 
3680 - /*
3681 -  * Called when the last reference to the file is gone.
3682 -  */
3683 3660 static void put_event(struct perf_event *event)
3684 3661 {
3685 3662 	struct perf_event_context *ctx;
··· 3714 3697 }
3715 3698 EXPORT_SYMBOL_GPL(perf_event_release_kernel);
3716 3699 
3700 + /*
3701 +  * Called when the last reference to the file is gone.
3702 +  */
3717 3703 static int perf_release(struct inode *inode, struct file *file)
3718 3704 {
3719 3705 	put_event(file->private_data);
··· 7384 7364 		return -ENODEV;
7385 7365 
7386 7366 	if (event->group_leader != event) {
7387 - 		ctx = perf_event_ctx_lock(event->group_leader);
7367 + 		/*
7368 + 		 * This ctx->mutex can nest when we're called through
7369 + 		 * inheritance. See the perf_event_ctx_lock_nested() comment.
7370 + 		 */
7371 + 		ctx = perf_event_ctx_lock_nested(event->group_leader,
7372 + 						 SINGLE_DEPTH_NESTING);
7388 7373 		BUG_ON(!ctx);
7389 7374 	}
7390 7375 
+7 -5
kernel/locking/rtmutex.c
··· 265 265 }
266 266 
267 267 /*
268 -  * Called by sched_setscheduler() to check whether the priority change
269 -  * is overruled by a possible priority boosting.
268 +  * Called by sched_setscheduler() to get the priority which will be
269 +  * effective after the change.
270 270  */
271 - int rt_mutex_check_prio(struct task_struct *task, int newprio)
271 + int rt_mutex_get_effective_prio(struct task_struct *task, int newprio)
272 272 {
273 273 	if (!task_has_pi_waiters(task))
274 - 		return 0;
274 + 		return newprio;
275 275 
276 - 	return task_top_pi_waiter(task)->task->prio <= newprio;
276 + 	if (task_top_pi_waiter(task)->task->prio <= newprio)
277 + 		return task_top_pi_waiter(task)->task->prio;
278 + 	return newprio;
277 279 }
278 280 
279 281 /*
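The interface change here is from a boolean "is the change overruled by boosting?" to "what priority will actually be in effect?". A standalone model of the new semantics (kernel convention: lower numeric value means higher priority; a negative `top_waiter_prio` stands in for "no PI waiters"):

```c
#include <assert.h>

/* Model of rt_mutex_get_effective_prio(): the effective priority is
 * the requested one unless a PI waiter holds a higher (numerically
 * lower or equal) priority, in which case the boost wins. */
static int get_effective_prio(int top_waiter_prio, int newprio)
{
	if (top_waiter_prio < 0)	/* no PI waiters: no boosting */
		return newprio;
	if (top_waiter_prio <= newprio)
		return top_waiter_prio;
	return newprio;
}
```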
+26 -28
kernel/sched/core.c
··· 3300 3300 3301 3301 /* Actually do priority change: must hold pi & rq lock. */ 3302 3302 static void __setscheduler(struct rq *rq, struct task_struct *p, 3303 - const struct sched_attr *attr) 3303 + const struct sched_attr *attr, bool keep_boost) 3304 3304 { 3305 3305 __setscheduler_params(p, attr); 3306 3306 3307 3307 /* 3308 - * If we get here, there was no pi waiters boosting the 3309 - * task. It is safe to use the normal prio. 3308 + * Keep a potential priority boosting if called from 3309 + * sched_setscheduler(). 3310 3310 */ 3311 - p->prio = normal_prio(p); 3311 + if (keep_boost) 3312 + p->prio = rt_mutex_get_effective_prio(p, normal_prio(p)); 3313 + else 3314 + p->prio = normal_prio(p); 3312 3315 3313 3316 if (dl_prio(p->prio)) 3314 3317 p->sched_class = &dl_sched_class; ··· 3411 3408 int newprio = dl_policy(attr->sched_policy) ? MAX_DL_PRIO - 1 : 3412 3409 MAX_RT_PRIO - 1 - attr->sched_priority; 3413 3410 int retval, oldprio, oldpolicy = -1, queued, running; 3414 - int policy = attr->sched_policy; 3411 + int new_effective_prio, policy = attr->sched_policy; 3415 3412 unsigned long flags; 3416 3413 const struct sched_class *prev_class; 3417 3414 struct rq *rq; ··· 3593 3590 oldprio = p->prio; 3594 3591 3595 3592 /* 3596 - * Special case for priority boosted tasks. 3597 - * 3598 - * If the new priority is lower or equal (user space view) 3599 - * than the current (boosted) priority, we just store the new 3593 + * Take priority boosted tasks into account. If the new 3594 + * effective priority is unchanged, we just store the new 3600 3595 * normal parameters and do not touch the scheduler class and 3601 3596 * the runqueue. This will be done when the task deboost 3602 3597 * itself. 
3603 3598 */ 3604 - if (rt_mutex_check_prio(p, newprio)) { 3599 + new_effective_prio = rt_mutex_get_effective_prio(p, newprio); 3600 + if (new_effective_prio == oldprio) { 3605 3601 __setscheduler_params(p, attr); 3606 3602 task_rq_unlock(rq, p, &flags); 3607 3603 return 0; ··· 3614 3612 put_prev_task(rq, p); 3615 3613 3616 3614 prev_class = p->sched_class; 3617 - __setscheduler(rq, p, attr); 3615 + __setscheduler(rq, p, attr, true); 3618 3616 3619 3617 if (running) 3620 3618 p->sched_class->set_curr_task(rq); ··· 6999 6997 unsigned long flags; 7000 6998 long cpu = (long)hcpu; 7001 6999 struct dl_bw *dl_b; 7000 + bool overflow; 7001 + int cpus; 7002 7002 7003 - switch (action & ~CPU_TASKS_FROZEN) { 7003 + switch (action) { 7004 7004 case CPU_DOWN_PREPARE: 7005 - /* explicitly allow suspend */ 7006 - if (!(action & CPU_TASKS_FROZEN)) { 7007 - bool overflow; 7008 - int cpus; 7005 + rcu_read_lock_sched(); 7006 + dl_b = dl_bw_of(cpu); 7009 7007 7010 - rcu_read_lock_sched(); 7011 - dl_b = dl_bw_of(cpu); 7008 + raw_spin_lock_irqsave(&dl_b->lock, flags); 7009 + cpus = dl_bw_cpus(cpu); 7010 + overflow = __dl_overflow(dl_b, cpus, 0, 0); 7011 + raw_spin_unlock_irqrestore(&dl_b->lock, flags); 7012 7012 7013 - raw_spin_lock_irqsave(&dl_b->lock, flags); 7014 - cpus = dl_bw_cpus(cpu); 7015 - overflow = __dl_overflow(dl_b, cpus, 0, 0); 7016 - raw_spin_unlock_irqrestore(&dl_b->lock, flags); 7013 + rcu_read_unlock_sched(); 7017 7014 7018 - rcu_read_unlock_sched(); 7019 - 7020 - if (overflow) 7021 - return notifier_from_errno(-EBUSY); 7022 - } 7015 + if (overflow) 7016 + return notifier_from_errno(-EBUSY); 7023 7017 cpuset_update_active_cpus(false); 7024 7018 break; 7025 7019 case CPU_DOWN_PREPARE_FROZEN: ··· 7344 7346 queued = task_on_rq_queued(p); 7345 7347 if (queued) 7346 7348 dequeue_task(rq, p, 0); 7347 - __setscheduler(rq, p, &attr); 7349 + __setscheduler(rq, p, &attr, false); 7348 7350 if (queued) { 7349 7351 enqueue_task(rq, p, 0); 7350 7352 resched_curr(rq);
+15 -5
kernel/watchdog.c
··· 41 41 #define NMI_WATCHDOG_ENABLED (1 << NMI_WATCHDOG_ENABLED_BIT) 42 42 #define SOFT_WATCHDOG_ENABLED (1 << SOFT_WATCHDOG_ENABLED_BIT) 43 43 44 + static DEFINE_MUTEX(watchdog_proc_mutex); 45 + 44 46 #ifdef CONFIG_HARDLOCKUP_DETECTOR 45 47 static unsigned long __read_mostly watchdog_enabled = SOFT_WATCHDOG_ENABLED|NMI_WATCHDOG_ENABLED; 46 48 #else ··· 610 608 { 611 609 int cpu; 612 610 613 - if (!watchdog_user_enabled) 614 - return; 611 + mutex_lock(&watchdog_proc_mutex); 612 + 613 + if (!(watchdog_enabled & NMI_WATCHDOG_ENABLED)) 614 + goto unlock; 615 615 616 616 get_online_cpus(); 617 617 for_each_online_cpu(cpu) 618 618 watchdog_nmi_enable(cpu); 619 619 put_online_cpus(); 620 + 621 + unlock: 622 + mutex_lock(&watchdog_proc_mutex); 620 623 } 621 624 622 625 void watchdog_nmi_disable_all(void) 623 626 { 624 627 int cpu; 625 628 629 + mutex_lock(&watchdog_proc_mutex); 630 + 626 631 if (!watchdog_running) 627 - return; 632 + goto unlock; 628 633 629 634 get_online_cpus(); 630 635 for_each_online_cpu(cpu) 631 636 watchdog_nmi_disable(cpu); 632 637 put_online_cpus(); 638 + 639 + unlock: 640 + mutex_unlock(&watchdog_proc_mutex); 633 641 } 634 642 #else 635 643 static int watchdog_nmi_enable(unsigned int cpu) { return 0; } ··· 755 743 return err; 756 744 757 745 } 758 - 759 - static DEFINE_MUTEX(watchdog_proc_mutex); 760 746 761 747 /* 762 748 * common function for watchdog, nmi_watchdog and soft_watchdog parameter
+2 -1
mm/kmemleak.c
··· 115 115 #define BYTES_PER_POINTER sizeof(void *) 116 116 117 117 /* GFP bitmask for kmemleak internal allocations */ 118 - #define gfp_kmemleak_mask(gfp) (((gfp) & (GFP_KERNEL | GFP_ATOMIC)) | \ 118 + #define gfp_kmemleak_mask(gfp) (((gfp) & (GFP_KERNEL | GFP_ATOMIC | \ 119 + __GFP_NOACCOUNT)) | \ 119 120 __GFP_NORETRY | __GFP_NOMEMALLOC | \ 120 121 __GFP_NOWARN) 121 122
+1 -1
mm/mempolicy.c
··· 2518 2518 if (numabalancing_override) 2519 2519 set_numabalancing_state(numabalancing_override == 1); 2520 2520 2521 - if (nr_node_ids > 1 && !numabalancing_override) { 2521 + if (num_online_nodes() > 1 && !numabalancing_override) { 2522 2522 pr_info("%s automatic NUMA balancing. " 2523 2523 "Configure with numa_balancing= or the " 2524 2524 "kernel.numa_balancing sysctl",
+2 -1
mm/page_isolation.c
··· 101 101 buddy_idx = __find_buddy_index(page_idx, order); 102 102 buddy = page + (buddy_idx - page_idx); 103 103 104 - if (!is_migrate_isolate_page(buddy)) { 104 + if (pfn_valid_within(page_to_pfn(buddy)) && 105 + !is_migrate_isolate_page(buddy)) { 105 106 __isolate_free_page(page, order); 106 107 kernel_map_pages(page, (1 << order), 1); 107 108 set_page_refcounted(page);
+2 -1
net/bluetooth/hci_core.c
··· 1557 1557 { 1558 1558 BT_DBG("%s %p", hdev->name, hdev); 1559 1559 1560 - if (!hci_dev_test_flag(hdev, HCI_UNREGISTER)) { 1560 + if (!hci_dev_test_flag(hdev, HCI_UNREGISTER) && 1561 + test_bit(HCI_UP, &hdev->flags)) { 1561 1562 /* Execute vendor specific shutdown routine */ 1562 1563 if (hdev->shutdown) 1563 1564 hdev->shutdown(hdev);
+1 -1
net/core/dev.c
··· 5209 5209 if (__netdev_find_adj(upper_dev, dev, &upper_dev->all_adj_list.upper)) 5210 5210 return -EBUSY; 5211 5211 5212 - if (__netdev_find_adj(dev, upper_dev, &dev->all_adj_list.upper)) 5212 + if (__netdev_find_adj(dev, upper_dev, &dev->adj_list.upper)) 5213 5213 return -EEXIST; 5214 5214 5215 5215 if (master && netdev_master_upper_dev_get(dev))
+1 -1
net/core/net_namespace.c
··· 601 601 } 602 602 603 603 err = rtnl_net_fill(msg, NETLINK_CB(skb).portid, nlh->nlmsg_seq, 0, 604 - RTM_GETNSID, net, peer, -1); 604 + RTM_NEWNSID, net, peer, -1); 605 605 if (err < 0) 606 606 goto err_out; 607 607
+1 -1
net/core/sock.c
··· 1474 1474 return; 1475 1475 1476 1476 sock_hold(sk); 1477 - sock_net_set(sk, get_net(&init_net)); 1478 1477 sock_release(sk->sk_socket); 1478 + sock_net_set(sk, get_net(&init_net)); 1479 1479 sock_put(sk); 1480 1480 } 1481 1481 EXPORT_SYMBOL(sk_release_kernel);
+3 -1
net/ieee802154/Makefile
··· 3 3 obj-y += 6lowpan/ 4 4 5 5 ieee802154-y := netlink.o nl-mac.o nl-phy.o nl_policy.o core.o \ 6 - header_ops.o sysfs.o nl802154.o 6 + header_ops.o sysfs.o nl802154.o trace.o 7 7 ieee802154_socket-y := socket.o 8 + 9 + CFLAGS_trace.o := -I$(src) 8 10 9 11 ccflags-y += -D__CHECK_ENDIAN__
+4 -1
net/ieee802154/nl-phy.c
··· 175 175 int rc = -ENOBUFS; 176 176 struct net_device *dev; 177 177 int type = __IEEE802154_DEV_INVALID; 178 + unsigned char name_assign_type; 178 179 179 180 pr_debug("%s\n", __func__); 180 181 ··· 191 190 if (devname[nla_len(info->attrs[IEEE802154_ATTR_DEV_NAME]) - 1] 192 191 != '\0') 193 192 return -EINVAL; /* phy name should be null-terminated */ 193 + name_assign_type = NET_NAME_USER; 194 194 } else { 195 195 devname = "wpan%d"; 196 + name_assign_type = NET_NAME_ENUM; 196 197 } 197 198 198 199 if (strlen(devname) >= IFNAMSIZ) ··· 224 221 } 225 222 226 223 dev = rdev_add_virtual_intf_deprecated(wpan_phy_to_rdev(phy), devname, 227 - type); 224 + name_assign_type, type); 228 225 if (IS_ERR(dev)) { 229 226 rc = PTR_ERR(dev); 230 227 goto nla_put_failure;
+1 -1
net/ieee802154/nl802154.c
··· 589 589 590 590 return rdev_add_virtual_intf(rdev, 591 591 nla_data(info->attrs[NL802154_ATTR_IFNAME]), 592 - type, extended_addr); 592 + NET_NAME_USER, type, extended_addr); 593 593 } 594 594 595 595 static int nl802154_del_interface(struct sk_buff *skb, struct genl_info *info)
+72 -13
net/ieee802154/rdev-ops.h
··· 4 4 #include <net/cfg802154.h> 5 5 6 6 #include "core.h" 7 + #include "trace.h" 7 8 8 9 static inline struct net_device * 9 10 rdev_add_virtual_intf_deprecated(struct cfg802154_registered_device *rdev, 10 - const char *name, int type) 11 + const char *name, 12 + unsigned char name_assign_type, 13 + int type) 11 14 { 12 15 return rdev->ops->add_virtual_intf_deprecated(&rdev->wpan_phy, name, 13 - type); 16 + name_assign_type, type); 14 17 } 15 18 16 19 static inline void ··· 25 22 26 23 static inline int 27 24 rdev_add_virtual_intf(struct cfg802154_registered_device *rdev, char *name, 25 + unsigned char name_assign_type, 28 26 enum nl802154_iftype type, __le64 extended_addr) 29 27 { 30 - return rdev->ops->add_virtual_intf(&rdev->wpan_phy, name, type, 28 + int ret; 29 + 30 + trace_802154_rdev_add_virtual_intf(&rdev->wpan_phy, name, type, 31 31 extended_addr); 32 + ret = rdev->ops->add_virtual_intf(&rdev->wpan_phy, name, 33 + name_assign_type, type, 34 + extended_addr); 35 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 36 + return ret; 32 37 } 33 38 34 39 static inline int 35 40 rdev_del_virtual_intf(struct cfg802154_registered_device *rdev, 36 41 struct wpan_dev *wpan_dev) 37 42 { 38 - return rdev->ops->del_virtual_intf(&rdev->wpan_phy, wpan_dev); 43 + int ret; 44 + 45 + trace_802154_rdev_del_virtual_intf(&rdev->wpan_phy, wpan_dev); 46 + ret = rdev->ops->del_virtual_intf(&rdev->wpan_phy, wpan_dev); 47 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 48 + return ret; 39 49 } 40 50 41 51 static inline int 42 52 rdev_set_channel(struct cfg802154_registered_device *rdev, u8 page, u8 channel) 43 53 { 44 - return rdev->ops->set_channel(&rdev->wpan_phy, page, channel); 54 + int ret; 55 + 56 + trace_802154_rdev_set_channel(&rdev->wpan_phy, page, channel); 57 + ret = rdev->ops->set_channel(&rdev->wpan_phy, page, channel); 58 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 59 + return ret; 45 60 } 46 61 47 62 static inline int 48 63 
rdev_set_cca_mode(struct cfg802154_registered_device *rdev, 49 64 const struct wpan_phy_cca *cca) 50 65 { 51 - return rdev->ops->set_cca_mode(&rdev->wpan_phy, cca); 66 + int ret; 67 + 68 + trace_802154_rdev_set_cca_mode(&rdev->wpan_phy, cca); 69 + ret = rdev->ops->set_cca_mode(&rdev->wpan_phy, cca); 70 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 71 + return ret; 52 72 } 53 73 54 74 static inline int 55 75 rdev_set_pan_id(struct cfg802154_registered_device *rdev, 56 76 struct wpan_dev *wpan_dev, __le16 pan_id) 57 77 { 58 - return rdev->ops->set_pan_id(&rdev->wpan_phy, wpan_dev, pan_id); 78 + int ret; 79 + 80 + trace_802154_rdev_set_pan_id(&rdev->wpan_phy, wpan_dev, pan_id); 81 + ret = rdev->ops->set_pan_id(&rdev->wpan_phy, wpan_dev, pan_id); 82 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 83 + return ret; 59 84 } 60 85 61 86 static inline int 62 87 rdev_set_short_addr(struct cfg802154_registered_device *rdev, 63 88 struct wpan_dev *wpan_dev, __le16 short_addr) 64 89 { 65 - return rdev->ops->set_short_addr(&rdev->wpan_phy, wpan_dev, short_addr); 90 + int ret; 91 + 92 + trace_802154_rdev_set_short_addr(&rdev->wpan_phy, wpan_dev, short_addr); 93 + ret = rdev->ops->set_short_addr(&rdev->wpan_phy, wpan_dev, short_addr); 94 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 95 + return ret; 66 96 } 67 97 68 98 static inline int 69 99 rdev_set_backoff_exponent(struct cfg802154_registered_device *rdev, 70 100 struct wpan_dev *wpan_dev, u8 min_be, u8 max_be) 71 101 { 72 - return rdev->ops->set_backoff_exponent(&rdev->wpan_phy, wpan_dev, 102 + int ret; 103 + 104 + trace_802154_rdev_set_backoff_exponent(&rdev->wpan_phy, wpan_dev, 73 105 min_be, max_be); 106 + ret = rdev->ops->set_backoff_exponent(&rdev->wpan_phy, wpan_dev, 107 + min_be, max_be); 108 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 109 + return ret; 74 110 } 75 111 76 112 static inline int 77 113 rdev_set_max_csma_backoffs(struct cfg802154_registered_device *rdev, 78 114 struct 
wpan_dev *wpan_dev, u8 max_csma_backoffs) 79 115 { 80 - return rdev->ops->set_max_csma_backoffs(&rdev->wpan_phy, wpan_dev, 81 - max_csma_backoffs); 116 + int ret; 117 + 118 + trace_802154_rdev_set_csma_backoffs(&rdev->wpan_phy, wpan_dev, 119 + max_csma_backoffs); 120 + ret = rdev->ops->set_max_csma_backoffs(&rdev->wpan_phy, wpan_dev, 121 + max_csma_backoffs); 122 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 123 + return ret; 82 124 } 83 125 84 126 static inline int 85 127 rdev_set_max_frame_retries(struct cfg802154_registered_device *rdev, 86 128 struct wpan_dev *wpan_dev, s8 max_frame_retries) 87 129 { 88 - return rdev->ops->set_max_frame_retries(&rdev->wpan_phy, wpan_dev, 130 + int ret; 131 + 132 + trace_802154_rdev_set_max_frame_retries(&rdev->wpan_phy, wpan_dev, 89 133 max_frame_retries); 134 + ret = rdev->ops->set_max_frame_retries(&rdev->wpan_phy, wpan_dev, 135 + max_frame_retries); 136 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 137 + return ret; 90 138 } 91 139 92 140 static inline int 93 141 rdev_set_lbt_mode(struct cfg802154_registered_device *rdev, 94 142 struct wpan_dev *wpan_dev, bool mode) 95 143 { 96 - return rdev->ops->set_lbt_mode(&rdev->wpan_phy, wpan_dev, mode); 144 + int ret; 145 + 146 + trace_802154_rdev_set_lbt_mode(&rdev->wpan_phy, wpan_dev, mode); 147 + ret = rdev->ops->set_lbt_mode(&rdev->wpan_phy, wpan_dev, mode); 148 + trace_802154_rdev_return_int(&rdev->wpan_phy, ret); 149 + return ret; 97 150 } 98 151 99 152 #endif /* __CFG802154_RDEV_OPS */
+7
net/ieee802154/trace.c
··· 1 + #include <linux/module.h> 2 + 3 + #ifndef __CHECKER__ 4 + #define CREATE_TRACE_POINTS 5 + #include "trace.h" 6 + 7 + #endif
+247
net/ieee802154/trace.h
··· 1 + /* Based on net/wireless/tracing.h */ 2 + 3 + #undef TRACE_SYSTEM 4 + #define TRACE_SYSTEM cfg802154 5 + 6 + #if !defined(__RDEV_CFG802154_OPS_TRACE) || defined(TRACE_HEADER_MULTI_READ) 7 + #define __RDEV_CFG802154_OPS_TRACE 8 + 9 + #include <linux/tracepoint.h> 10 + 11 + #include <net/cfg802154.h> 12 + 13 + #define MAXNAME 32 14 + #define WPAN_PHY_ENTRY __array(char, wpan_phy_name, MAXNAME) 15 + #define WPAN_PHY_ASSIGN strlcpy(__entry->wpan_phy_name, \ 16 + wpan_phy_name(wpan_phy), \ 17 + MAXNAME) 18 + #define WPAN_PHY_PR_FMT "%s" 19 + #define WPAN_PHY_PR_ARG __entry->wpan_phy_name 20 + 21 + #define WPAN_DEV_ENTRY __field(u32, identifier) 22 + #define WPAN_DEV_ASSIGN (__entry->identifier) = (!IS_ERR_OR_NULL(wpan_dev) \ 23 + ? wpan_dev->identifier : 0) 24 + #define WPAN_DEV_PR_FMT "wpan_dev(%u)" 25 + #define WPAN_DEV_PR_ARG (__entry->identifier) 26 + 27 + #define WPAN_CCA_ENTRY __field(enum nl802154_cca_modes, cca_mode) \ 28 + __field(enum nl802154_cca_opts, cca_opt) 29 + #define WPAN_CCA_ASSIGN \ 30 + do { \ 31 + (__entry->cca_mode) = cca->mode; \ 32 + (__entry->cca_opt) = cca->opt; \ 33 + } while (0) 34 + #define WPAN_CCA_PR_FMT "cca_mode: %d, cca_opt: %d" 35 + #define WPAN_CCA_PR_ARG __entry->cca_mode, __entry->cca_opt 36 + 37 + #define BOOL_TO_STR(bo) (bo) ? "true" : "false" 38 + 39 + /************************************************************* 40 + * rdev->ops traces * 41 + *************************************************************/ 42 + 43 + TRACE_EVENT(802154_rdev_add_virtual_intf, 44 + TP_PROTO(struct wpan_phy *wpan_phy, char *name, 45 + enum nl802154_iftype type, __le64 extended_addr), 46 + TP_ARGS(wpan_phy, name, type, extended_addr), 47 + TP_STRUCT__entry( 48 + WPAN_PHY_ENTRY 49 + __string(vir_intf_name, name ? name : "<noname>") 50 + __field(enum nl802154_iftype, type) 51 + __field(__le64, extended_addr) 52 + ), 53 + TP_fast_assign( 54 + WPAN_PHY_ASSIGN; 55 + __assign_str(vir_intf_name, name ? 
name : "<noname>"); 56 + __entry->type = type; 57 + __entry->extended_addr = extended_addr; 58 + ), 59 + TP_printk(WPAN_PHY_PR_FMT ", virtual intf name: %s, type: %d, ea %llx", 60 + WPAN_PHY_PR_ARG, __get_str(vir_intf_name), __entry->type, 61 + __le64_to_cpu(__entry->extended_addr)) 62 + ); 63 + 64 + TRACE_EVENT(802154_rdev_del_virtual_intf, 65 + TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev), 66 + TP_ARGS(wpan_phy, wpan_dev), 67 + TP_STRUCT__entry( 68 + WPAN_PHY_ENTRY 69 + WPAN_DEV_ENTRY 70 + ), 71 + TP_fast_assign( 72 + WPAN_PHY_ASSIGN; 73 + WPAN_DEV_ASSIGN; 74 + ), 75 + TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT, WPAN_PHY_PR_ARG, 76 + WPAN_DEV_PR_ARG) 77 + ); 78 + 79 + TRACE_EVENT(802154_rdev_set_channel, 80 + TP_PROTO(struct wpan_phy *wpan_phy, u8 page, u8 channel), 81 + TP_ARGS(wpan_phy, page, channel), 82 + TP_STRUCT__entry( 83 + WPAN_PHY_ENTRY 84 + __field(u8, page) 85 + __field(u8, channel) 86 + ), 87 + TP_fast_assign( 88 + WPAN_PHY_ASSIGN; 89 + __entry->page = page; 90 + __entry->channel = channel; 91 + ), 92 + TP_printk(WPAN_PHY_PR_FMT ", page: %d, channel: %d", WPAN_PHY_PR_ARG, 93 + __entry->page, __entry->channel) 94 + ); 95 + 96 + TRACE_EVENT(802154_rdev_set_cca_mode, 97 + TP_PROTO(struct wpan_phy *wpan_phy, const struct wpan_phy_cca *cca), 98 + TP_ARGS(wpan_phy, cca), 99 + TP_STRUCT__entry( 100 + WPAN_PHY_ENTRY 101 + WPAN_CCA_ENTRY 102 + ), 103 + TP_fast_assign( 104 + WPAN_PHY_ASSIGN; 105 + WPAN_CCA_ASSIGN; 106 + ), 107 + TP_printk(WPAN_PHY_PR_FMT ", " WPAN_CCA_PR_FMT, WPAN_PHY_PR_ARG, 108 + WPAN_CCA_PR_ARG) 109 + ); 110 + 111 + DECLARE_EVENT_CLASS(802154_le16_template, 112 + TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev, 113 + __le16 le16arg), 114 + TP_ARGS(wpan_phy, wpan_dev, le16arg), 115 + TP_STRUCT__entry( 116 + WPAN_PHY_ENTRY 117 + WPAN_DEV_ENTRY 118 + __field(__le16, le16arg) 119 + ), 120 + TP_fast_assign( 121 + WPAN_PHY_ASSIGN; 122 + WPAN_DEV_ASSIGN; 123 + __entry->le16arg = le16arg; 124 + ), 125 + 
TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT ", pan id: 0x%04x", 126 + WPAN_PHY_PR_ARG, WPAN_DEV_PR_ARG, 127 + __le16_to_cpu(__entry->le16arg)) 128 + ); 129 + 130 + DEFINE_EVENT(802154_le16_template, 802154_rdev_set_pan_id, 131 + TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev, 132 + __le16 le16arg), 133 + TP_ARGS(wpan_phy, wpan_dev, le16arg) 134 + ); 135 + 136 + DEFINE_EVENT_PRINT(802154_le16_template, 802154_rdev_set_short_addr, 137 + TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev, 138 + __le16 le16arg), 139 + TP_ARGS(wpan_phy, wpan_dev, le16arg), 140 + TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT ", sa: 0x%04x", 141 + WPAN_PHY_PR_ARG, WPAN_DEV_PR_ARG, 142 + __le16_to_cpu(__entry->le16arg)) 143 + ); 144 + 145 + TRACE_EVENT(802154_rdev_set_backoff_exponent, 146 + TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev, 147 + u8 min_be, u8 max_be), 148 + TP_ARGS(wpan_phy, wpan_dev, min_be, max_be), 149 + TP_STRUCT__entry( 150 + WPAN_PHY_ENTRY 151 + WPAN_DEV_ENTRY 152 + __field(u8, min_be) 153 + __field(u8, max_be) 154 + ), 155 + TP_fast_assign( 156 + WPAN_PHY_ASSIGN; 157 + WPAN_DEV_ASSIGN; 158 + __entry->min_be = min_be; 159 + __entry->max_be = max_be; 160 + ), 161 + 162 + TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT 163 + ", min be: %d, max_be: %d", WPAN_PHY_PR_ARG, 164 + WPAN_DEV_PR_ARG, __entry->min_be, __entry->max_be) 165 + ); 166 + 167 + TRACE_EVENT(802154_rdev_set_csma_backoffs, 168 + TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev, 169 + u8 max_csma_backoffs), 170 + TP_ARGS(wpan_phy, wpan_dev, max_csma_backoffs), 171 + TP_STRUCT__entry( 172 + WPAN_PHY_ENTRY 173 + WPAN_DEV_ENTRY 174 + __field(u8, max_csma_backoffs) 175 + ), 176 + TP_fast_assign( 177 + WPAN_PHY_ASSIGN; 178 + WPAN_DEV_ASSIGN; 179 + __entry->max_csma_backoffs = max_csma_backoffs; 180 + ), 181 + 182 + TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT 183 + ", max csma backoffs: %d", WPAN_PHY_PR_ARG, 184 + WPAN_DEV_PR_ARG, 
__entry->max_csma_backoffs) 185 + ); 186 + 187 + TRACE_EVENT(802154_rdev_set_max_frame_retries, 188 + TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev, 189 + s8 max_frame_retries), 190 + TP_ARGS(wpan_phy, wpan_dev, max_frame_retries), 191 + TP_STRUCT__entry( 192 + WPAN_PHY_ENTRY 193 + WPAN_DEV_ENTRY 194 + __field(s8, max_frame_retries) 195 + ), 196 + TP_fast_assign( 197 + WPAN_PHY_ASSIGN; 198 + WPAN_DEV_ASSIGN; 199 + __entry->max_frame_retries = max_frame_retries; 200 + ), 201 + 202 + TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT 203 + ", max frame retries: %d", WPAN_PHY_PR_ARG, 204 + WPAN_DEV_PR_ARG, __entry->max_frame_retries) 205 + ); 206 + 207 + TRACE_EVENT(802154_rdev_set_lbt_mode, 208 + TP_PROTO(struct wpan_phy *wpan_phy, struct wpan_dev *wpan_dev, 209 + bool mode), 210 + TP_ARGS(wpan_phy, wpan_dev, mode), 211 + TP_STRUCT__entry( 212 + WPAN_PHY_ENTRY 213 + WPAN_DEV_ENTRY 214 + __field(bool, mode) 215 + ), 216 + TP_fast_assign( 217 + WPAN_PHY_ASSIGN; 218 + WPAN_DEV_ASSIGN; 219 + __entry->mode = mode; 220 + ), 221 + TP_printk(WPAN_PHY_PR_FMT ", " WPAN_DEV_PR_FMT 222 + ", lbt mode: %s", WPAN_PHY_PR_ARG, 223 + WPAN_DEV_PR_ARG, BOOL_TO_STR(__entry->mode)) 224 + ); 225 + 226 + TRACE_EVENT(802154_rdev_return_int, 227 + TP_PROTO(struct wpan_phy *wpan_phy, int ret), 228 + TP_ARGS(wpan_phy, ret), 229 + TP_STRUCT__entry( 230 + WPAN_PHY_ENTRY 231 + __field(int, ret) 232 + ), 233 + TP_fast_assign( 234 + WPAN_PHY_ASSIGN; 235 + __entry->ret = ret; 236 + ), 237 + TP_printk(WPAN_PHY_PR_FMT ", returned: %d", WPAN_PHY_PR_ARG, 238 + __entry->ret) 239 + ); 240 + 241 + #endif /* !__RDEV_CFG802154_OPS_TRACE || TRACE_HEADER_MULTI_READ */ 242 + 243 + #undef TRACE_INCLUDE_PATH 244 + #define TRACE_INCLUDE_PATH . 245 + #undef TRACE_INCLUDE_FILE 246 + #define TRACE_INCLUDE_FILE trace 247 + #include <trace/define_trace.h>
+5 -3
net/ipv4/inet_diag.c
··· 224 224 handler->idiag_get_info(sk, r, info); 225 225 226 226 if (sk->sk_state < TCP_TIME_WAIT) { 227 - int err = 0; 227 + union tcp_cc_info info; 228 + size_t sz = 0; 229 + int attr; 228 230 229 231 rcu_read_lock(); 230 232 ca_ops = READ_ONCE(icsk->icsk_ca_ops); 231 233 if (ca_ops && ca_ops->get_info) 232 - err = ca_ops->get_info(sk, ext, skb); 234 + sz = ca_ops->get_info(sk, ext, &attr, &info); 233 235 rcu_read_unlock(); 234 - if (err < 0) 236 + if (sz && nla_put(skb, attr, sz, &info) < 0) 235 237 goto errout; 236 238 } 237 239
+27 -1
net/ipv4/tcp.c
··· 252 252 #include <linux/types.h> 253 253 #include <linux/fcntl.h> 254 254 #include <linux/poll.h> 255 + #include <linux/inet_diag.h> 255 256 #include <linux/init.h> 256 257 #include <linux/fs.h> 257 258 #include <linux/skbuff.h> ··· 2593 2592 #endif 2594 2593 2595 2594 /* Return information about state of tcp endpoint in API format. */ 2596 - void tcp_get_info(const struct sock *sk, struct tcp_info *info) 2595 + void tcp_get_info(struct sock *sk, struct tcp_info *info) 2597 2596 { 2598 2597 const struct tcp_sock *tp = tcp_sk(sk); 2599 2598 const struct inet_connection_sock *icsk = inet_csk(sk); ··· 2664 2663 2665 2664 rate = READ_ONCE(sk->sk_max_pacing_rate); 2666 2665 info->tcpi_max_pacing_rate = rate != ~0U ? rate : ~0ULL; 2666 + 2667 + spin_lock_bh(&sk->sk_lock.slock); 2668 + info->tcpi_bytes_acked = tp->bytes_acked; 2669 + info->tcpi_bytes_received = tp->bytes_received; 2670 + spin_unlock_bh(&sk->sk_lock.slock); 2667 2671 } 2668 2672 EXPORT_SYMBOL_GPL(tcp_get_info); 2669 2673 ··· 2734 2728 tcp_get_info(sk, &info); 2735 2729 2736 2730 len = min_t(unsigned int, len, sizeof(info)); 2731 + if (put_user(len, optlen)) 2732 + return -EFAULT; 2733 + if (copy_to_user(optval, &info, len)) 2734 + return -EFAULT; 2735 + return 0; 2736 + } 2737 + case TCP_CC_INFO: { 2738 + const struct tcp_congestion_ops *ca_ops; 2739 + union tcp_cc_info info; 2740 + size_t sz = 0; 2741 + int attr; 2742 + 2743 + if (get_user(len, optlen)) 2744 + return -EFAULT; 2745 + 2746 + ca_ops = icsk->icsk_ca_ops; 2747 + if (ca_ops && ca_ops->get_info) 2748 + sz = ca_ops->get_info(sk, ~0U, &attr, &info); 2749 + 2750 + len = min_t(unsigned int, len, sz); 2737 2751 if (put_user(len, optlen)) 2738 2752 return -EFAULT; 2739 2753 if (copy_to_user(optval, &info, len))
+10 -10
net/ipv4/tcp_dctcp.c
··· 277 277 } 278 278 } 279 279 280 - static int dctcp_get_info(struct sock *sk, u32 ext, struct sk_buff *skb) 280 + static size_t dctcp_get_info(struct sock *sk, u32 ext, int *attr, 281 + union tcp_cc_info *info) 281 282 { 282 283 const struct dctcp *ca = inet_csk_ca(sk); 283 284 ··· 287 286 */ 288 287 if (ext & (1 << (INET_DIAG_DCTCPINFO - 1)) || 289 288 ext & (1 << (INET_DIAG_VEGASINFO - 1))) { 290 - struct tcp_dctcp_info info; 291 - 292 - memset(&info, 0, sizeof(info)); 289 + memset(info, 0, sizeof(struct tcp_dctcp_info)); 293 290 if (inet_csk(sk)->icsk_ca_ops != &dctcp_reno) { 294 - info.dctcp_enabled = 1; 295 - info.dctcp_ce_state = (u16) ca->ce_state; 296 - info.dctcp_alpha = ca->dctcp_alpha; 297 - info.dctcp_ab_ecn = ca->acked_bytes_ecn; 298 - info.dctcp_ab_tot = ca->acked_bytes_total; 291 + info->dctcp.dctcp_enabled = 1; 292 + info->dctcp.dctcp_ce_state = (u16) ca->ce_state; 293 + info->dctcp.dctcp_alpha = ca->dctcp_alpha; 294 + info->dctcp.dctcp_ab_ecn = ca->acked_bytes_ecn; 295 + info->dctcp.dctcp_ab_tot = ca->acked_bytes_total; 299 296 } 300 297 301 - return nla_put(skb, INET_DIAG_DCTCPINFO, sizeof(info), &info); 298 + *attr = INET_DIAG_DCTCPINFO; 299 + return sizeof(*info); 302 300 } 303 301 return 0; 304 302 }
+1
net/ipv4/tcp_fastopen.c
··· 206 206 skb_set_owner_r(skb2, child); 207 207 __skb_queue_tail(&child->sk_receive_queue, skb2); 208 208 tp->syn_data_acked = 1; 209 + tp->bytes_received = end_seq - TCP_SKB_CB(skb)->seq - 1; 209 210 } else { 210 211 end_seq = TCP_SKB_CB(skb)->seq + 1; 211 212 }
+11 -10
net/ipv4/tcp_illinois.c
··· 300 300 } 301 301 302 302 /* Extract info for Tcp socket info provided via netlink. */ 303 - static int tcp_illinois_info(struct sock *sk, u32 ext, struct sk_buff *skb) 303 + static size_t tcp_illinois_info(struct sock *sk, u32 ext, int *attr, 304 + union tcp_cc_info *info) 304 305 { 305 306 const struct illinois *ca = inet_csk_ca(sk); 306 307 307 308 if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) { 308 - struct tcpvegas_info info = { 309 - .tcpv_enabled = 1, 310 - .tcpv_rttcnt = ca->cnt_rtt, 311 - .tcpv_minrtt = ca->base_rtt, 312 - }; 309 + info->vegas.tcpv_enabled = 1; 310 + info->vegas.tcpv_rttcnt = ca->cnt_rtt; 311 + info->vegas.tcpv_minrtt = ca->base_rtt; 312 + info->vegas.tcpv_rtt = 0; 313 313 314 - if (info.tcpv_rttcnt > 0) { 314 + if (info->vegas.tcpv_rttcnt > 0) { 315 315 u64 t = ca->sum_rtt; 316 316 317 - do_div(t, info.tcpv_rttcnt); 318 - info.tcpv_rtt = t; 317 + do_div(t, info->vegas.tcpv_rttcnt); 318 + info->vegas.tcpv_rtt = t; 319 319 } 320 - return nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info); 320 + *attr = INET_DIAG_VEGASINFO; 321 + return sizeof(struct tcpvegas_info); 321 322 } 322 323 return 0; 323 324 }
+26 -10
net/ipv4/tcp_input.c
··· 1820 1820 for (j = 0; j < used_sacks; j++) 1821 1821 tp->recv_sack_cache[i++] = sp[j]; 1822 1822 1823 - tcp_mark_lost_retrans(sk); 1824 - 1825 - tcp_verify_left_out(tp); 1826 - 1827 1823 if ((state.reord < tp->fackets_out) && 1828 1824 ((inet_csk(sk)->icsk_ca_state != TCP_CA_Loss) || tp->undo_marker)) 1829 1825 tcp_update_reordering(sk, tp->fackets_out - state.reord, 0); 1830 1826 1827 + tcp_mark_lost_retrans(sk); 1828 + tcp_verify_left_out(tp); 1831 1829 out: 1832 1830 1833 1831 #if FASTRETRANS_DEBUG > 0 ··· 3278 3280 (ack_seq == tp->snd_wl1 && nwin > tp->snd_wnd); 3279 3281 } 3280 3282 3283 + /* If we update tp->snd_una, also update tp->bytes_acked */ 3284 + static void tcp_snd_una_update(struct tcp_sock *tp, u32 ack) 3285 + { 3286 + u32 delta = ack - tp->snd_una; 3287 + 3288 + tp->bytes_acked += delta; 3289 + tp->snd_una = ack; 3290 + } 3291 + 3292 + /* If we update tp->rcv_nxt, also update tp->bytes_received */ 3293 + static void tcp_rcv_nxt_update(struct tcp_sock *tp, u32 seq) 3294 + { 3295 + u32 delta = seq - tp->rcv_nxt; 3296 + 3297 + tp->bytes_received += delta; 3298 + tp->rcv_nxt = seq; 3299 + } 3300 + 3281 3301 /* Update our send window. 3282 3302 * 3283 3303 * Window update algorithm, described in RFC793/RFC1122 (used in linux-2.2 ··· 3331 3315 } 3332 3316 } 3333 3317 3334 - tp->snd_una = ack; 3318 + tcp_snd_una_update(tp, ack); 3335 3319 3336 3320 return flag; 3337 3321 } ··· 3513 3497 * Note, we use the fact that SND.UNA>=SND.WL2. 
3514 3498 */ 3515 3499 tcp_update_wl(tp, ack_seq); 3516 - tp->snd_una = ack; 3500 + tcp_snd_una_update(tp, ack); 3517 3501 flag |= FLAG_WIN_UPDATE; 3518 3502 3519 3503 tcp_in_ack_event(sk, CA_ACK_WIN_UPDATE); ··· 4252 4236 4253 4237 tail = skb_peek_tail(&sk->sk_receive_queue); 4254 4238 eaten = tail && tcp_try_coalesce(sk, tail, skb, &fragstolen); 4255 - tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq; 4239 + tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq); 4256 4240 if (!eaten) 4257 4241 __skb_queue_tail(&sk->sk_receive_queue, skb); 4258 4242 if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) ··· 4420 4404 __skb_pull(skb, hdrlen); 4421 4405 eaten = (tail && 4422 4406 tcp_try_coalesce(sk, tail, skb, fragstolen)) ? 1 : 0; 4423 - tcp_sk(sk)->rcv_nxt = TCP_SKB_CB(skb)->end_seq; 4407 + tcp_rcv_nxt_update(tcp_sk(sk), TCP_SKB_CB(skb)->end_seq); 4424 4408 if (!eaten) { 4425 4409 __skb_queue_tail(&sk->sk_receive_queue, skb); 4426 4410 skb_set_owner_r(skb, sk); ··· 4513 4497 4514 4498 eaten = tcp_queue_rcv(sk, skb, 0, &fragstolen); 4515 4499 } 4516 - tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq; 4500 + tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq); 4517 4501 if (skb->len) 4518 4502 tcp_event_data_recv(sk, skb); 4519 4503 if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN) ··· 5261 5245 tcp_rcv_rtt_measure_ts(sk, skb); 5262 5246 5263 5247 __skb_pull(skb, tcp_header_len); 5264 - tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq; 5248 + tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq); 5265 5249 NET_INC_STATS_BH(sock_net(sk), LINUX_MIB_TCPHPHITSTOUSER); 5266 5250 eaten = 1; 5267 5251 }
+10 -9
net/ipv4/tcp_vegas.c
··· 286 286 } 287 287 288 288 /* Extract info for Tcp socket info provided via netlink. */ 289 - int tcp_vegas_get_info(struct sock *sk, u32 ext, struct sk_buff *skb) 289 + size_t tcp_vegas_get_info(struct sock *sk, u32 ext, int *attr, 290 + union tcp_cc_info *info) 290 291 { 291 292 const struct vegas *ca = inet_csk_ca(sk); 292 - if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) { 293 - struct tcpvegas_info info = { 294 - .tcpv_enabled = ca->doing_vegas_now, 295 - .tcpv_rttcnt = ca->cntRTT, 296 - .tcpv_rtt = ca->baseRTT, 297 - .tcpv_minrtt = ca->minRTT, 298 - }; 299 293 300 - return nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info); 294 + if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) { 295 + info->vegas.tcpv_enabled = ca->doing_vegas_now, 296 + info->vegas.tcpv_rttcnt = ca->cntRTT, 297 + info->vegas.tcpv_rtt = ca->baseRTT, 298 + info->vegas.tcpv_minrtt = ca->minRTT, 299 + 300 + *attr = INET_DIAG_VEGASINFO; 301 + return sizeof(struct tcpvegas_info); 301 302 } 302 303 return 0; 303 304 }
+2 -1
net/ipv4/tcp_vegas.h
··· 19 19 void tcp_vegas_state(struct sock *sk, u8 ca_state); 20 20 void tcp_vegas_pkts_acked(struct sock *sk, u32 cnt, s32 rtt_us); 21 21 void tcp_vegas_cwnd_event(struct sock *sk, enum tcp_ca_event event); 22 - int tcp_vegas_get_info(struct sock *sk, u32 ext, struct sk_buff *skb); 22 + size_t tcp_vegas_get_info(struct sock *sk, u32 ext, int *attr, 23 + union tcp_cc_info *info); 23 24 24 25 #endif /* __TCP_VEGAS_H */
+8 -7
net/ipv4/tcp_westwood.c
··· 256 256 } 257 257 258 258 /* Extract info for Tcp socket info provided via netlink. */ 259 - static int tcp_westwood_info(struct sock *sk, u32 ext, struct sk_buff *skb) 259 + static size_t tcp_westwood_info(struct sock *sk, u32 ext, int *attr, 260 + union tcp_cc_info *info) 260 261 { 261 262 const struct westwood *ca = inet_csk_ca(sk); 262 263 263 264 if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) { 264 - struct tcpvegas_info info = { 265 - .tcpv_enabled = 1, 266 - .tcpv_rtt = jiffies_to_usecs(ca->rtt), 267 - .tcpv_minrtt = jiffies_to_usecs(ca->rtt_min), 268 - }; 265 + info->vegas.tcpv_enabled = 1; 266 + info->vegas.tcpv_rttcnt = 0; 267 + info->vegas.tcpv_rtt = jiffies_to_usecs(ca->rtt), 268 + info->vegas.tcpv_minrtt = jiffies_to_usecs(ca->rtt_min), 269 269 270 - return nla_put(skb, INET_DIAG_VEGASINFO, sizeof(info), &info); 270 + *attr = INET_DIAG_VEGASINFO; 271 + return sizeof(struct tcpvegas_info); 271 272 } 272 273 return 0; 273 274 }
+32 -9
net/ipv6/ip6_output.c
··· 886 886 #endif 887 887 int err; 888 888 889 + /* The correct way to handle this would be to do 890 + * ip6_route_get_saddr, and then ip6_route_output; however, 891 + * the route-specific preferred source forces the 892 + * ip6_route_output call _before_ ip6_route_get_saddr. 893 + * 894 + * In source specific routing (no src=any default route), 895 + * ip6_route_output will fail given src=any saddr, though, so 896 + * that's why we try it again later. 897 + */ 898 + if (ipv6_addr_any(&fl6->saddr) && (!*dst || !(*dst)->error)) { 899 + struct rt6_info *rt; 900 + bool had_dst = *dst != NULL; 901 + 902 + if (!had_dst) 903 + *dst = ip6_route_output(net, sk, fl6); 904 + rt = (*dst)->error ? NULL : (struct rt6_info *)*dst; 905 + err = ip6_route_get_saddr(net, rt, &fl6->daddr, 906 + sk ? inet6_sk(sk)->srcprefs : 0, 907 + &fl6->saddr); 908 + if (err) 909 + goto out_err_release; 910 + 911 + /* If we had an erroneous initial result, pretend it 912 + * never existed and let the SA-enabled version take 913 + * over. 914 + */ 915 + if (!had_dst && (*dst)->error) { 916 + dst_release(*dst); 917 + *dst = NULL; 918 + } 919 + } 920 + 889 921 if (!*dst) 890 922 *dst = ip6_route_output(net, sk, fl6); 891 923 892 924 err = (*dst)->error; 893 925 if (err) 894 926 goto out_err_release; 895 - 896 - if (ipv6_addr_any(&fl6->saddr)) { 897 - struct rt6_info *rt = (struct rt6_info *) *dst; 898 - err = ip6_route_get_saddr(net, rt, &fl6->daddr, 899 - sk ? inet6_sk(sk)->srcprefs : 0, 900 - &fl6->saddr); 901 - if (err) 902 - goto out_err_release; 903 - } 904 927 905 928 #ifdef CONFIG_IPV6_OPTIMISTIC_DAD 906 929 /*
+3 -2
net/ipv6/route.c
··· 2245 2245 unsigned int prefs, 2246 2246 struct in6_addr *saddr) 2247 2247 { 2248 - struct inet6_dev *idev = ip6_dst_idev((struct dst_entry *)rt); 2248 + struct inet6_dev *idev = 2249 + rt ? ip6_dst_idev((struct dst_entry *)rt) : NULL; 2249 2250 int err = 0; 2250 - if (rt->rt6i_prefsrc.plen) 2251 + if (rt && rt->rt6i_prefsrc.plen) 2251 2252 *saddr = rt->rt6i_prefsrc.addr; 2252 2253 else 2253 2254 err = ipv6_dev_get_saddr(net, idev ? idev->dev : NULL,
+7 -5
net/mac80211/iface.c
··· 819 819 * (because if we remove a STA after ops->remove_interface() 820 820 * the driver will have removed the vif info already!) 821 821 * 822 - * This is relevant only in WDS mode, in all other modes we've 823 - * already removed all stations when disconnecting or similar, 824 - * so warn otherwise. 822 + * In WDS mode a station must exist here and be flushed, for 823 + * AP_VLANs stations may exist since there's nothing else that 824 + * would have removed them, but in other modes there shouldn't 825 + * be any stations. 825 826 */ 826 827 flushed = sta_info_flush(sdata); 827 - WARN_ON_ONCE((sdata->vif.type != NL80211_IFTYPE_WDS && flushed > 0) || 828 - (sdata->vif.type == NL80211_IFTYPE_WDS && flushed != 1)); 828 + WARN_ON_ONCE(sdata->vif.type != NL80211_IFTYPE_AP_VLAN && 829 + ((sdata->vif.type != NL80211_IFTYPE_WDS && flushed > 0) || 830 + (sdata->vif.type == NL80211_IFTYPE_WDS && flushed != 1))); 829 831 830 832 /* don't count this interface for promisc/allmulti while it is down */ 831 833 if (sdata->flags & IEEE80211_SDATA_ALLMULTI)
+18 -1
net/mac80211/sta_info.c
··· 66 66 67 67 static const struct rhashtable_params sta_rht_params = { 68 68 .nelem_hint = 3, /* start small */ 69 + .automatic_shrinking = true, 69 70 .head_offset = offsetof(struct sta_info, hash_node), 70 71 .key_offset = offsetof(struct sta_info, sta.addr), 71 72 .key_len = ETH_ALEN, ··· 158 157 const u8 *addr) 159 158 { 160 159 struct ieee80211_local *local = sdata->local; 160 + struct sta_info *sta; 161 + struct rhash_head *tmp; 162 + const struct bucket_table *tbl; 161 163 162 - return rhashtable_lookup_fast(&local->sta_hash, addr, sta_rht_params); 164 + rcu_read_lock(); 165 + tbl = rht_dereference_rcu(local->sta_hash.tbl, &local->sta_hash); 166 + 167 + for_each_sta_info(local, tbl, addr, sta, tmp) { 168 + if (sta->sdata == sdata) { 169 + rcu_read_unlock(); 170 + /* this is safe as the caller must already hold 171 + * another rcu read section or the mutex 172 + */ 173 + return sta; 174 + } 175 + } 176 + rcu_read_unlock(); 177 + return NULL; 163 178 } 164 179 165 180 /*
+6 -3
net/mac802154/cfg.c
··· 22 22 23 23 static struct net_device * 24 24 ieee802154_add_iface_deprecated(struct wpan_phy *wpan_phy, 25 - const char *name, int type) 25 + const char *name, 26 + unsigned char name_assign_type, int type) 26 27 { 27 28 struct ieee802154_local *local = wpan_phy_priv(wpan_phy); 28 29 struct net_device *dev; 29 30 30 31 rtnl_lock(); 31 - dev = ieee802154_if_add(local, name, type, 32 + dev = ieee802154_if_add(local, name, name_assign_type, type, 32 33 cpu_to_le64(0x0000000000000000ULL)); 33 34 rtnl_unlock(); 34 35 ··· 46 45 47 46 static int 48 47 ieee802154_add_iface(struct wpan_phy *phy, const char *name, 48 + unsigned char name_assign_type, 49 49 enum nl802154_iftype type, __le64 extended_addr) 50 50 { 51 51 struct ieee802154_local *local = wpan_phy_priv(phy); 52 52 struct net_device *err; 53 53 54 - err = ieee802154_if_add(local, name, type, extended_addr); 54 + err = ieee802154_if_add(local, name, name_assign_type, type, 55 + extended_addr); 55 56 return PTR_ERR_OR_ZERO(err); 56 57 } 57 58
+2 -1
net/mac802154/ieee802154_i.h
··· 182 182 void ieee802154_if_remove(struct ieee802154_sub_if_data *sdata); 183 183 struct net_device * 184 184 ieee802154_if_add(struct ieee802154_local *local, const char *name, 185 - enum nl802154_iftype type, __le64 extended_addr); 185 + unsigned char name_assign_type, enum nl802154_iftype type, 186 + __le64 extended_addr); 186 187 void ieee802154_remove_interfaces(struct ieee802154_local *local); 187 188 188 189 #endif /* __IEEE802154_I_H */
+3 -2
net/mac802154/iface.c
··· 522 522 523 523 struct net_device * 524 524 ieee802154_if_add(struct ieee802154_local *local, const char *name, 525 - enum nl802154_iftype type, __le64 extended_addr) 525 + unsigned char name_assign_type, enum nl802154_iftype type, 526 + __le64 extended_addr) 526 527 { 527 528 struct net_device *ndev = NULL; 528 529 struct ieee802154_sub_if_data *sdata = NULL; ··· 532 531 ASSERT_RTNL(); 533 532 534 533 ndev = alloc_netdev(sizeof(*sdata) + local->hw.vif_data_size, name, 535 - NET_NAME_UNKNOWN, ieee802154_if_setup); 534 + name_assign_type, ieee802154_if_setup); 536 535 if (!ndev) 537 536 return ERR_PTR(-ENOMEM); 538 537
+2 -2
net/mac802154/llsec.c
··· 134 134 for (i = 0; i < ARRAY_SIZE(key->tfm); i++) { 135 135 key->tfm[i] = crypto_alloc_aead("ccm(aes)", 0, 136 136 CRYPTO_ALG_ASYNC); 137 - if (!key->tfm[i]) 137 + if (IS_ERR(key->tfm[i])) 138 138 goto err_tfm; 139 139 if (crypto_aead_setkey(key->tfm[i], template->key, 140 140 IEEE802154_LLSEC_KEY_SIZE)) ··· 144 144 } 145 145 146 146 key->tfm0 = crypto_alloc_blkcipher("ctr(aes)", 0, CRYPTO_ALG_ASYNC); 147 - if (!key->tfm0) 147 + if (IS_ERR(key->tfm0)) 148 148 goto err_tfm; 149 149 150 150 if (crypto_blkcipher_setkey(key->tfm0, template->key,
+5 -2
net/mac802154/main.c
··· 161 161 162 162 rtnl_lock(); 163 163 164 - dev = ieee802154_if_add(local, "wpan%d", NL802154_IFTYPE_NODE, 164 + dev = ieee802154_if_add(local, "wpan%d", NET_NAME_ENUM, 165 + NL802154_IFTYPE_NODE, 165 166 cpu_to_le64(0x0000000000000000ULL)); 166 167 if (IS_ERR(dev)) { 167 168 rtnl_unlock(); 168 169 rc = PTR_ERR(dev); 169 - goto out_wq; 170 + goto out_phy; 170 171 } 171 172 172 173 rtnl_unlock(); 173 174 174 175 return 0; 175 176 177 + out_phy: 178 + wpan_phy_unregister(local->phy); 176 179 out_wq: 177 180 destroy_workqueue(local->workqueue); 178 181 out:
+9 -9
net/mpls/af_mpls.c
··· 647 647 return -EINVAL; 648 648 649 649 switch (dec.label) { 650 - case LABEL_IMPLICIT_NULL: 650 + case MPLS_LABEL_IMPLNULL: 651 651 /* RFC3032: This is a label that an LSR may 652 652 * assign and distribute, but which never 653 653 * actually appears in the encapsulation. ··· 935 935 } 936 936 937 937 /* In case the predefined labels need to be populated */ 938 - if (limit > LABEL_IPV4_EXPLICIT_NULL) { 938 + if (limit > MPLS_LABEL_IPV4NULL) { 939 939 struct net_device *lo = net->loopback_dev; 940 940 rt0 = mpls_rt_alloc(lo->addr_len); 941 941 if (!rt0) ··· 945 945 rt0->rt_via_table = NEIGH_LINK_TABLE; 946 946 memcpy(rt0->rt_via, lo->dev_addr, lo->addr_len); 947 947 } 948 - if (limit > LABEL_IPV6_EXPLICIT_NULL) { 948 + if (limit > MPLS_LABEL_IPV6NULL) { 949 949 struct net_device *lo = net->loopback_dev; 950 950 rt2 = mpls_rt_alloc(lo->addr_len); 951 951 if (!rt2) ··· 973 973 memcpy(labels, old, cp_size); 974 974 975 975 /* If needed set the predefined labels */ 976 - if ((old_limit <= LABEL_IPV6_EXPLICIT_NULL) && 977 - (limit > LABEL_IPV6_EXPLICIT_NULL)) { 978 - RCU_INIT_POINTER(labels[LABEL_IPV6_EXPLICIT_NULL], rt2); 976 + if ((old_limit <= MPLS_LABEL_IPV6NULL) && 977 + (limit > MPLS_LABEL_IPV6NULL)) { 978 + RCU_INIT_POINTER(labels[MPLS_LABEL_IPV6NULL], rt2); 979 979 rt2 = NULL; 980 980 } 981 981 982 - if ((old_limit <= LABEL_IPV4_EXPLICIT_NULL) && 983 - (limit > LABEL_IPV4_EXPLICIT_NULL)) { 984 - RCU_INIT_POINTER(labels[LABEL_IPV4_EXPLICIT_NULL], rt0); 982 + if ((old_limit <= MPLS_LABEL_IPV4NULL) && 983 + (limit > MPLS_LABEL_IPV4NULL)) { 984 + RCU_INIT_POINTER(labels[MPLS_LABEL_IPV4NULL], rt0); 985 985 rt0 = NULL; 986 986 } 987 987
-10
net/mpls/internal.h
··· 1 1 #ifndef MPLS_INTERNAL_H 2 2 #define MPLS_INTERNAL_H 3 3 4 - #define LABEL_IPV4_EXPLICIT_NULL 0 /* RFC3032 */ 5 - #define LABEL_ROUTER_ALERT_LABEL 1 /* RFC3032 */ 6 - #define LABEL_IPV6_EXPLICIT_NULL 2 /* RFC3032 */ 7 - #define LABEL_IMPLICIT_NULL 3 /* RFC3032 */ 8 - #define LABEL_ENTROPY_INDICATOR 7 /* RFC6790 */ 9 - #define LABEL_GAL 13 /* RFC5586 */ 10 - #define LABEL_OAM_ALERT 14 /* RFC3429 */ 11 - #define LABEL_EXTENSION 15 /* RFC7274 */ 12 - 13 - 14 4 struct mpls_shim_hdr { 15 5 __be32 label_stack_entry; 16 6 };
-1
net/netlink/af_netlink.c
··· 3139 3139 .key_len = netlink_compare_arg_len, 3140 3140 .obj_hashfn = netlink_hash, 3141 3141 .obj_cmpfn = netlink_compare, 3142 - .max_size = 65536, 3143 3142 .automatic_shrinking = true, 3144 3143 }; 3145 3144
+6 -3
net/packet/af_packet.c
··· 2311 2311 tlen = dev->needed_tailroom; 2312 2312 skb = sock_alloc_send_skb(&po->sk, 2313 2313 hlen + tlen + sizeof(struct sockaddr_ll), 2314 - 0, &err); 2314 + !need_wait, &err); 2315 2315 2316 - if (unlikely(skb == NULL)) 2316 + if (unlikely(skb == NULL)) { 2317 + /* we assume the socket was initially writeable ... */ 2318 + if (likely(len_sum > 0)) 2319 + err = len_sum; 2317 2320 goto out_status; 2318 - 2321 + } 2319 2322 tp_len = tpacket_fill_skb(po, skb, ph, dev, size_max, proto, 2320 2323 addr, hlen); 2321 2324 if (tp_len > dev->mtu + dev->hard_header_len) {
+15 -2
net/rds/connection.c
··· 126 126 struct rds_transport *loop_trans; 127 127 unsigned long flags; 128 128 int ret; 129 + struct rds_transport *otrans = trans; 129 130 131 + if (!is_outgoing && otrans->t_type == RDS_TRANS_TCP) 132 + goto new_conn; 130 133 rcu_read_lock(); 131 134 conn = rds_conn_lookup(head, laddr, faddr, trans); 132 135 if (conn && conn->c_loopback && conn->c_trans != &rds_loop_transport && ··· 145 142 if (conn) 146 143 goto out; 147 144 145 + new_conn: 148 146 conn = kmem_cache_zalloc(rds_conn_slab, gfp); 149 147 if (!conn) { 150 148 conn = ERR_PTR(-ENOMEM); ··· 234 230 /* Creating normal conn */ 235 231 struct rds_connection *found; 236 232 237 - found = rds_conn_lookup(head, laddr, faddr, trans); 233 + if (!is_outgoing && otrans->t_type == RDS_TRANS_TCP) 234 + found = NULL; 235 + else 236 + found = rds_conn_lookup(head, laddr, faddr, trans); 238 237 if (found) { 239 238 trans->conn_free(conn->c_transport_data); 240 239 kmem_cache_free(rds_conn_slab, conn); 241 240 conn = found; 242 241 } else { 243 - hlist_add_head_rcu(&conn->c_hash_node, head); 242 + if ((is_outgoing && otrans->t_type == RDS_TRANS_TCP) || 243 + (otrans->t_type != RDS_TRANS_TCP)) { 244 + /* Only the active side should be added to 245 + * reconnect list for TCP. 246 + */ 247 + hlist_add_head_rcu(&conn->c_hash_node, head); 248 + } 244 249 rds_cong_add_conn(conn); 245 250 rds_conn_count++; 246 251 }
+11 -2
net/rds/ib_cm.c
··· 183 183 184 184 /* If the peer gave us the last packet it saw, process this as if 185 185 * we had received a regular ACK. */ 186 - if (dp && dp->dp_ack_seq) 187 - rds_send_drop_acked(conn, be64_to_cpu(dp->dp_ack_seq), NULL); 186 + if (dp) { 187 + /* dp structure start is not guaranteed to be 8 bytes aligned. 188 + * Since dp_ack_seq is 64-bit extended load operations can be 189 + * used so go through get_unaligned to avoid unaligned errors. 190 + */ 191 + __be64 dp_ack_seq = get_unaligned(&dp->dp_ack_seq); 192 + 193 + if (dp_ack_seq) 194 + rds_send_drop_acked(conn, be64_to_cpu(dp_ack_seq), 195 + NULL); 196 + } 188 197 189 198 rds_connect_complete(conn); 190 199 }
+1
net/rds/tcp_connect.c
··· 62 62 case TCP_ESTABLISHED: 63 63 rds_connect_complete(conn); 64 64 break; 65 + case TCP_CLOSE_WAIT: 65 66 case TCP_CLOSE: 66 67 rds_conn_drop(conn); 67 68 default:
+46
net/rds/tcp_listen.c
··· 45 45 static DECLARE_WORK(rds_tcp_listen_work, rds_tcp_accept_worker); 46 46 static struct socket *rds_tcp_listen_sock; 47 47 48 + static int rds_tcp_keepalive(struct socket *sock) 49 + { 50 + /* values below based on xs_udp_default_timeout */ 51 + int keepidle = 5; /* send a probe 'keepidle' secs after last data */ 52 + int keepcnt = 5; /* number of unack'ed probes before declaring dead */ 53 + int keepalive = 1; 54 + int ret = 0; 55 + 56 + ret = kernel_setsockopt(sock, SOL_SOCKET, SO_KEEPALIVE, 57 + (char *)&keepalive, sizeof(keepalive)); 58 + if (ret < 0) 59 + goto bail; 60 + 61 + ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_KEEPCNT, 62 + (char *)&keepcnt, sizeof(keepcnt)); 63 + if (ret < 0) 64 + goto bail; 65 + 66 + ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_KEEPIDLE, 67 + (char *)&keepidle, sizeof(keepidle)); 68 + if (ret < 0) 69 + goto bail; 70 + 71 + /* KEEPINTVL is the interval between successive probes. We follow 72 + * the model in xs_tcp_finish_connecting() and re-use keepidle. 73 + */ 74 + ret = kernel_setsockopt(sock, IPPROTO_TCP, TCP_KEEPINTVL, 75 + (char *)&keepidle, sizeof(keepidle)); 76 + bail: 77 + return ret; 78 + } 79 + 48 80 static int rds_tcp_accept_one(struct socket *sock) 49 81 { 50 82 struct socket *new_sock = NULL; 51 83 struct rds_connection *conn; 52 84 int ret; 53 85 struct inet_sock *inet; 86 + struct rds_tcp_connection *rs_tcp; 54 87 55 88 ret = sock_create_lite(sock->sk->sk_family, sock->sk->sk_type, 56 89 sock->sk->sk_protocol, &new_sock); ··· 93 60 new_sock->type = sock->type; 94 61 new_sock->ops = sock->ops; 95 62 ret = sock->ops->accept(sock, new_sock, O_NONBLOCK); 63 + if (ret < 0) 64 + goto out; 65 + 66 + ret = rds_tcp_keepalive(new_sock); 96 67 if (ret < 0) 97 68 goto out; 98 69 ··· 114 77 ret = PTR_ERR(conn); 115 78 goto out; 116 79 } 80 + /* An incoming SYN request came in, and TCP just accepted it. 81 + * We always create a new conn for listen side of TCP, and do not 82 + * add it to the c_hash_list.
83 + * 84 + * If the client reboots, this conn will need to be cleaned up. 85 + * rds_tcp_state_change() will do that cleanup 86 + */ 87 + rs_tcp = (struct rds_tcp_connection *)conn->c_transport_data; 88 + WARN_ON(!rs_tcp || rs_tcp->t_sock); 117 89 118 90 /* 119 91 * see the comment above rds_queue_delayed_reconnect()
+3 -4
net/sched/cls_api.c
··· 308 308 case RTM_DELTFILTER: 309 309 err = tp->ops->delete(tp, fh); 310 310 if (err == 0) { 311 - tfilter_notify(net, skb, n, tp, fh, RTM_DELTFILTER); 312 - if (tcf_destroy(tp, false)) { 313 - struct tcf_proto *next = rtnl_dereference(tp->next); 311 + struct tcf_proto *next = rtnl_dereference(tp->next); 314 312 313 + tfilter_notify(net, skb, n, tp, fh, RTM_DELTFILTER); 314 + if (tcf_destroy(tp, false)) 315 315 RCU_INIT_POINTER(*back, next); 316 - } 317 316 } 318 317 goto errout; 319 318 case RTM_GETTFILTER:
+1 -1
net/sched/sch_codel.c
··· 164 164 165 165 sch->limit = DEFAULT_CODEL_LIMIT; 166 166 167 - codel_params_init(&q->params); 167 + codel_params_init(&q->params, sch); 168 168 codel_vars_init(&q->vars); 169 169 codel_stats_init(&q->stats); 170 170
+1 -1
net/sched/sch_fq_codel.c
··· 391 391 q->perturbation = prandom_u32(); 392 392 INIT_LIST_HEAD(&q->new_flows); 393 393 INIT_LIST_HEAD(&q->old_flows); 394 - codel_params_init(&q->cparams); 394 + codel_params_init(&q->cparams, sch); 395 395 codel_stats_init(&q->cstats); 396 396 q->cparams.ecn = true; 397 397
+2 -2
net/sched/sch_gred.c
··· 229 229 break; 230 230 } 231 231 232 - if (q->backlog + qdisc_pkt_len(skb) <= q->limit) { 232 + if (gred_backlog(t, q, sch) + qdisc_pkt_len(skb) <= q->limit) { 233 233 q->backlog += qdisc_pkt_len(skb); 234 234 return qdisc_enqueue_tail(skb, sch); 235 235 } ··· 553 553 554 554 opt.limit = q->limit; 555 555 opt.DP = q->DP; 556 - opt.backlog = q->backlog; 556 + opt.backlog = gred_backlog(table, q, sch); 557 557 opt.prio = q->prio; 558 558 opt.qth_min = q->parms.qth_min >> q->parms.Wlog; 559 559 opt.qth_max = q->parms.qth_max >> q->parms.Wlog;
+16 -7
net/sunrpc/auth_gss/gss_rpc_xdr.c
··· 793 793 { 794 794 u32 value_follows; 795 795 int err; 796 + struct page *scratch; 797 + 798 + scratch = alloc_page(GFP_KERNEL); 799 + if (!scratch) 800 + return -ENOMEM; 801 + xdr_set_scratch_buffer(xdr, page_address(scratch), PAGE_SIZE); 796 802 797 803 /* res->status */ 798 804 err = gssx_dec_status(xdr, &res->status); 799 805 if (err) 800 - return err; 806 + goto out_free; 801 807 802 808 /* res->context_handle */ 803 809 err = gssx_dec_bool(xdr, &value_follows); 804 810 if (err) 805 - return err; 811 + goto out_free; 806 812 if (value_follows) { 807 813 err = gssx_dec_ctx(xdr, res->context_handle); 808 814 if (err) 809 - return err; 815 + goto out_free; 810 816 } else { 811 817 res->context_handle = NULL; 812 818 } ··· 820 814 /* res->output_token */ 821 815 err = gssx_dec_bool(xdr, &value_follows); 822 816 if (err) 823 - return err; 817 + goto out_free; 824 818 if (value_follows) { 825 819 err = gssx_dec_buffer(xdr, res->output_token); 826 820 if (err) 827 - return err; 821 + goto out_free; 828 822 } else { 829 823 res->output_token = NULL; 830 824 } ··· 832 826 /* res->delegated_cred_handle */ 833 827 err = gssx_dec_bool(xdr, &value_follows); 834 828 if (err) 835 - return err; 829 + goto out_free; 836 830 if (value_follows) { 837 831 /* we do not support upcall servers sending this data. */ 838 - return -EINVAL; 832 + err = -EINVAL; 833 + goto out_free; 839 834 } 840 835 841 836 /* res->options */ 842 837 err = gssx_dec_option_array(xdr, &res->options); 843 838 839 + out_free: 840 + __free_page(scratch); 844 841 return err; 845 842 }
+2 -1
tools/lib/lockdep/Makefile
··· 14 14 $(eval $(1) = $(2))) 15 15 endef 16 16 17 - # Allow setting CC and AR, or setting CROSS_COMPILE as a prefix. 17 + # Allow setting CC and AR and LD, or setting CROSS_COMPILE as a prefix. 18 18 $(call allow-override,CC,$(CROSS_COMPILE)gcc) 19 19 $(call allow-override,AR,$(CROSS_COMPILE)ar) 20 + $(call allow-override,LD,$(CROSS_COMPILE)ld) 20 21 21 22 INSTALL = install 22 23
+3
tools/lib/lockdep/uinclude/linux/kernel.h
··· 28 28 #define __init 29 29 #define noinline 30 30 #define list_add_tail_rcu list_add_tail 31 + #define list_for_each_entry_rcu list_for_each_entry 32 + #define barrier() 33 + #define synchronize_sched() 31 34 32 35 #ifndef CALLER_ADDR0 33 36 #define CALLER_ADDR0 ((unsigned long)__builtin_return_address(0))
+1 -1
tools/perf/Makefile
··· 24 24 # (To override it, run 'make JOBS=1' and similar.) 25 25 # 26 26 ifeq ($(JOBS),) 27 - JOBS := $(shell egrep -c '^processor|^CPU' /proc/cpuinfo 2>/dev/null) 27 + JOBS := $(shell (getconf _NPROCESSORS_ONLN || egrep -c '^processor|^CPU[0-9]' /proc/cpuinfo) 2>/dev/null) 28 28 ifeq ($(JOBS),0) 29 29 JOBS := 1 30 30 endif
+33 -24
tools/testing/selftests/x86/Makefile
··· 1 - .PHONY: all all_32 all_64 check_build32 clean run_tests 1 + all: 2 + 3 + include ../lib.mk 4 + 5 + .PHONY: all all_32 all_64 warn_32bit_failure clean 2 6 3 7 TARGETS_C_BOTHBITS := sigreturn single_step_syscall 4 8 ··· 11 7 12 8 CFLAGS := -O2 -g -std=gnu99 -pthread -Wall 13 9 14 - UNAME_P := $(shell uname -p) 10 + UNAME_M := $(shell uname -m) 11 + CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32) 12 + CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c) 15 13 16 - # Always build 32-bit tests 14 + ifeq ($(CAN_BUILD_I386),1) 17 15 all: all_32 18 - 19 - # If we're on a 64-bit host, build 64-bit tests as well 20 - ifeq ($(shell uname -p),x86_64) 21 - all: all_64 16 + TEST_PROGS += $(BINARIES_32) 22 17 endif 23 18 24 - all_32: check_build32 $(BINARIES_32) 19 + ifeq ($(CAN_BUILD_X86_64),1) 20 + all: all_64 21 + TEST_PROGS += $(BINARIES_64) 22 + endif 23 + 24 + all_32: $(BINARIES_32) 25 25 26 26 all_64: $(BINARIES_64) 27 27 28 28 clean: 29 29 $(RM) $(BINARIES_32) $(BINARIES_64) 30 - 31 - run_tests: 32 - ./run_x86_tests.sh 33 30 34 31 $(TARGETS_C_BOTHBITS:%=%_32): %_32: %.c 35 32 $(CC) -m32 -o $@ $(CFLAGS) $(EXTRA_CFLAGS) $^ -lrt -ldl ··· 38 33 $(TARGETS_C_BOTHBITS:%=%_64): %_64: %.c 39 34 $(CC) -m64 -o $@ $(CFLAGS) $(EXTRA_CFLAGS) $^ -lrt -ldl 40 35 41 - check_build32: 42 - @if ! $(CC) -m32 -o /dev/null trivial_32bit_program.c; then \ 43 - echo "Warning: you seem to have a broken 32-bit build" 2>&1; \ 44 - echo "environment.
If you are using a Debian-like"; \ 45 - echo " distribution, try:"; \ 46 - echo ""; \ 47 - echo " apt-get install gcc-multilib libc6-i386 libc6-dev-i386"; \ 48 - echo ""; \ 49 - echo "If you are using a Fedora-like distribution, try:"; \ 50 - echo ""; \ 51 - echo " yum install glibc-devel.*i686"; \ 52 - exit 1; \ 53 - fi 36 + # x86_64 users should be encouraged to install 32-bit libraries 37 + ifeq ($(CAN_BUILD_I386)$(CAN_BUILD_X86_64),01) 38 + all: warn_32bit_failure 39 + 40 + warn_32bit_failure: 41 + @echo "Warning: you seem to have a broken 32-bit build" 2>&1; \ 42 + echo "environment. This will reduce test coverage of 64-bit" 2>&1; \ 43 + echo "kernels. If you are using a Debian-like distribution," 2>&1; \ 44 + echo "try:"; 2>&1; \ 45 + echo ""; \ 46 + echo " apt-get install gcc-multilib libc6-i386 libc6-dev-i386"; \ 47 + echo ""; \ 48 + echo "If you are using a Fedora-like distribution, try:"; \ 49 + echo ""; \ 50 + echo " yum install glibc-devel.*i686"; \ 51 + exit 0; 52 + endif
+16
tools/testing/selftests/x86/check_cc.sh
··· 1 + #!/bin/sh 2 + # check_cc.sh - Helper to test userspace compilation support 3 + # Copyright (c) 2015 Andrew Lutomirski 4 + # GPL v2 5 + 6 + CC="$1" 7 + TESTPROG="$2" 8 + shift 2 9 + 10 + if "$CC" -o /dev/null "$TESTPROG" -O0 "$@" 2>/dev/null; then 11 + echo 1 12 + else 13 + echo 0 14 + fi 15 + 16 + exit 0
-13
tools/testing/selftests/x86/run_x86_tests.sh
··· 1 - #!/bin/bash 2 - 3 - # This is deliberately minimal. IMO kselftests should provide a standard 4 - # script here. 5 - ./sigreturn_32 || exit 1 6 - ./single_step_syscall_32 || exit 1 7 - 8 - if [[ "$uname -p" -eq "x86_64" ]]; then 9 - ./sigreturn_64 || exit 1 10 - ./single_step_syscall_64 || exit 1 11 - fi 12 - 13 - exit 0
+4
tools/testing/selftests/x86/trivial_32bit_program.c
··· 4 4 * GPL v2 5 5 */ 6 6 7 + #ifndef __i386__ 8 + # error wrong architecture 9 + #endif 10 + 7 11 #include <stdio.h> 8 12 9 13 int main()
+18
tools/testing/selftests/x86/trivial_64bit_program.c
··· 1 + /* 2 + * Trivial program to check that we have a valid 32-bit build environment. 3 + * Copyright (c) 2015 Andy Lutomirski 4 + * GPL v2 5 + */ 6 + 7 + #ifndef __x86_64__ 8 + # error wrong architecture 9 + #endif 10 + 11 + #include <stdio.h> 12 + 13 + int main() 14 + { 15 + printf("\n"); 16 + 17 + return 0; 18 + }
-8
tools/thermal/tmon/Makefile
··· 12 12 INSTALL_PROGRAM=install -m 755 -p 13 13 DEL_FILE=rm -f 14 14 15 - INSTALL_CONFIGFILE=install -m 644 -p 16 - CONFIG_FILE= 17 - CONFIG_PATH= 18 - 19 15 # Static builds might require -ltinfo, for instance 20 16 ifneq ($(findstring -static, $(LDFLAGS)),) 21 17 STATIC := --static ··· 34 38 install: 35 39 - mkdir -p $(INSTALL_ROOT)/$(BINDIR) 36 40 - $(INSTALL_PROGRAM) "$(TARGET)" "$(INSTALL_ROOT)/$(BINDIR)/$(TARGET)" 37 - - mkdir -p $(INSTALL_ROOT)/$(CONFIG_PATH) 38 - - $(INSTALL_CONFIGFILE) "$(CONFIG_FILE)" "$(INSTALL_ROOT)/$(CONFIG_PATH)" 39 41 40 42 uninstall: 41 43 $(DEL_FILE) "$(INSTALL_ROOT)/$(BINDIR)/$(TARGET)" 42 - $(CONFIG_FILE) "$(CONFIG_PATH)" 43 - 44 44 45 45 clean: 46 46 find . -name "*.o" | xargs $(DEL_FILE)
+1 -1
tools/vm/Makefile
··· 3 3 TARGETS=page-types slabinfo page_owner_sort 4 4 5 5 LIB_DIR = ../lib/api 6 - LIBS = $(LIB_DIR)/libapikfs.a 6 + LIBS = $(LIB_DIR)/libapi.a 7 7 8 8 CC = $(CROSS_COMPILE)gcc 9 9 CFLAGS = -Wall -Wextra -I../lib/