Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
drivers/net/wireless/ath/ath9k/Kconfig
drivers/net/xen-netback/netback.c
net/batman-adv/bat_iv_ogm.c
net/wireless/nl80211.c

The ath9k Kconfig conflict was a change of a Kconfig option name right
next to the deletion of another option.

The xen-netback conflict was overlapping changes involving the
handling of the notify list in xen_netbk_rx_action().

The batman-adv conflict resolution was provided by Antonio Quartulli;
basically, keep everything from both conflict hunks.

The nl80211 conflict is a little more involved. In 'net' we added a
dynamic memory allocation to nl80211_dump_wiphy() to fix a race that
Linus reported. Meanwhile in 'net-next' the handlers were converted
to use pre and post doit handlers which use a flag to determine
whether to hold the RTNL mutex around the operation.

However, the dump handlers do not use this logic. Instead they have
to explicitly do the locking. There were apparent bugs in the
conversion of nl80211_dump_wiphy() in that we were not dropping the
RTNL mutex in all the return paths, and it seems we very much should
be doing so. So I fixed that whilst handling the overlapping changes.

To simplify the initial returns, I take the RTNL mutex after we try
to allocate 'tb'.

Signed-off-by: David S. Miller <davem@davemloft.net>

+8017 -2196
+9 -3
Documentation/bcache.txt
··· 319 319 Symlink to each of the cache devices comprising this cache set. 320 320 321 321 cache_available_percent 322 - Percentage of cache device free. 322 + Percentage of cache device which doesn't contain dirty data, and could 323 + potentially be used for writeback. This doesn't mean this space isn't used 324 + for clean cached data; the unused statistic (in priority_stats) is typically 325 + much lower. 323 326 324 327 clear_stats 325 328 Clears the statistics associated with this cache ··· 426 423 Total buckets in this cache 427 424 428 425 priority_stats 429 - Statistics about how recently data in the cache has been accessed. This can 430 - reveal your working set size. 426 + Statistics about how recently data in the cache has been accessed. 427 + This can reveal your working set size. Unused is the percentage of 428 + the cache that doesn't contain any data. Metadata is bcache's 429 + metadata overhead. Average is the average priority of cache buckets. 430 + Next is a list of quantiles with the priority threshold of each. 431 431 432 432 written 433 433 Sum of all data that has been written to the cache; comparison with
+2 -6
Documentation/devices.txt
··· 498 498 499 499 Each device type has 5 bits (32 minors). 500 500 501 - 13 block 8-bit MFM/RLL/IDE controller 502 - 0 = /dev/xda First XT disk whole disk 503 - 64 = /dev/xdb Second XT disk whole disk 504 - 505 - Partitions are handled in the same way as IDE disks 506 - (see major number 3). 501 + 13 block Previously used for the XT disk (/dev/xdN) 502 + Deleted in kernel v3.9. 507 503 508 504 14 char Open Sound System (OSS) 509 505 0 = /dev/mixer Mixer control
+1 -1
Documentation/devicetree/bindings/rtc/atmel,at91rm9200-rtc.txt
··· 1 1 Atmel AT91RM9200 Real Time Clock 2 2 3 3 Required properties: 4 - - compatible: should be: "atmel,at91rm9200-rtc" 4 + - compatible: should be: "atmel,at91rm9200-rtc" or "atmel,at91sam9x5-rtc" 5 5 - reg: physical base address of the controller and length of memory mapped 6 6 region. 7 7 - interrupts: rtc alarm/event interrupt
+3 -3
Documentation/dmatest.txt
··· 34 34 After a while you will start to get messages about current status or error like 35 35 in the original code. 36 36 37 - Note that running a new test will stop any in progress test. 37 + Note that running a new test will not stop any in progress test. 38 38 39 39 The following command should return actual state of the test. 40 40 % cat /sys/kernel/debug/dmatest/run ··· 52 52 53 53 The module parameters that is supplied to the kernel command line will be used 54 54 for the first performed test. After user gets a control, the test could be 55 - interrupted or re-run with same or different parameters. For the details see 56 - the above section "Part 2 - When dmatest is built as a module..." 55 + re-run with the same or different parameters. For the details see the above 56 + section "Part 2 - When dmatest is built as a module..." 57 57 58 58 In both cases the module parameters are used as initial values for the test case. 59 59 You always could check them at run-time by running
-3
Documentation/kernel-parameters.txt
··· 3351 3351 plus one apbt timer for broadcast timer. 3352 3352 x86_mrst_timer=apbt_only | lapic_and_apbt 3353 3353 3354 - xd= [HW,XT] Original XT pre-IDE (RLL encoded) disks. 3355 - xd_geo= See header of drivers/block/xd.c. 3356 - 3357 3354 xen_emul_unplug= [HW,X86,XEN] 3358 3355 Unplug Xen emulated devices 3359 3356 Format: [unplug0,][unplug1]
-2
Documentation/m68k/kernel-options.txt
··· 80 80 /dev/sdd: -> 0x0830 (forth SCSI disk) 81 81 /dev/sde: -> 0x0840 (fifth SCSI disk) 82 82 /dev/fd : -> 0x0200 (floppy disk) 83 - /dev/xda: -> 0x0c00 (first XT disk, unused in Linux/m68k) 84 - /dev/xdb: -> 0x0c40 (second XT disk, unused in Linux/m68k) 85 83 86 84 The name must be followed by a decimal number, that stands for the 87 85 partition number. Internally, the value of the number is just
+14 -5
MAINTAINERS
··· 2895 2895 2896 2896 ECRYPT FILE SYSTEM 2897 2897 M: Tyler Hicks <tyhicks@canonical.com> 2898 - M: Dustin Kirkland <dustin.kirkland@gazzang.com> 2899 2898 L: ecryptfs@vger.kernel.org 2899 + W: http://ecryptfs.org 2900 2900 W: https://launchpad.net/ecryptfs 2901 2901 S: Supported 2902 2902 F: Documentation/filesystems/ecryptfs.txt ··· 4453 4453 F: drivers/scsi/*iscsi* 4454 4454 F: include/scsi/*iscsi* 4455 4455 4456 + ISCSI EXTENSIONS FOR RDMA (ISER) INITIATOR 4457 + M: Or Gerlitz <ogerlitz@mellanox.com> 4458 + M: Roi Dayan <roid@mellanox.com> 4459 + L: linux-rdma@vger.kernel.org 4460 + S: Supported 4461 + W: http://www.openfabrics.org 4462 + W: www.open-iscsi.org 4463 + Q: http://patchwork.kernel.org/project/linux-rdma/list/ 4464 + F: drivers/infiniband/ulp/iser 4465 + 4456 4466 ISDN SUBSYSTEM 4457 4467 M: Karsten Keil <isdn@linux-pingi.de> 4458 4468 L: isdn4linux@listserv.isdn4linux.de (subscribers-only) ··· 5771 5761 L: linux-nvme@lists.infradead.org 5772 5762 T: git git://git.infradead.org/users/willy/linux-nvme.git 5773 5763 S: Supported 5774 - F: drivers/block/nvme.c 5764 + F: drivers/block/nvme* 5775 5765 F: include/linux/nvme.h 5776 5766 5777 5767 OMAP SUPPORT ··· 7629 7619 SPI SUBSYSTEM 7630 7620 M: Mark Brown <broonie@kernel.org> 7631 7621 M: Grant Likely <grant.likely@linaro.org> 7632 - L: spi-devel-general@lists.sourceforge.net 7622 + L: linux-spi@vger.kernel.org 7633 7623 T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git 7634 7624 Q: http://patchwork.kernel.org/project/spi-devel-general/list/ 7635 7625 S: Maintained ··· 9009 8999 F: drivers/net/wireless/wl3501* 9010 9000 9011 9001 WM97XX TOUCHSCREEN DRIVERS 9012 - M: Mark Brown <broonie@opensource.wolfsonmicro.com> 9002 + M: Mark Brown <broonie@kernel.org> 9013 9003 M: Liam Girdwood <lrg@slimlogic.co.uk> 9014 9004 L: linux-input@vger.kernel.org 9015 9005 T: git git://opensource.wolfsonmicro.com/linux-2.6-touch ··· 9019 9009 F: include/linux/wm97xx.h 9020 9010 9021 9011 WOLFSON MICROELECTRONICS DRIVERS 9022 - M: Mark Brown <broonie@opensource.wolfsonmicro.com> 9023 9012 L: patches@opensource.wolfsonmicro.com 9024 9013 T: git git://opensource.wolfsonmicro.com/linux-2.6-asoc 9025 9014 T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 10 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc4 4 + EXTRAVERSION = -rc6 5 5 NAME = Unicycling Gorilla 6 6 7 7 # *DOCUMENTATION*
+1 -1
arch/arm/boot/compressed/Makefile
··· 124 124 endif 125 125 126 126 ccflags-y := -fpic -mno-single-pic-base -fno-builtin -I$(obj) 127 - asflags-y := -Wa,-march=all -DZIMAGE 127 + asflags-y := -DZIMAGE 128 128 129 129 # Supply kernel BSS size to the decompressor via a linker symbol. 130 130 KBSS_SZ = $(shell $(CROSS_COMPILE)size $(obj)/../../../../vmlinux | \
+28
arch/arm/boot/compressed/debug.S
··· 1 1 #include <linux/linkage.h> 2 2 #include <asm/assembler.h> 3 3 4 + #ifndef CONFIG_DEBUG_SEMIHOSTING 5 + 4 6 #include CONFIG_DEBUG_LL_INCLUDE 5 7 6 8 ENTRY(putc) ··· 12 10 busyuart r3, r1 13 11 mov pc, lr 14 12 ENDPROC(putc) 13 + 14 + #else 15 + 16 + ENTRY(putc) 17 + adr r1, 1f 18 + ldmia r1, {r2, r3} 19 + add r2, r2, r1 20 + ldr r1, [r2, r3] 21 + strb r0, [r1] 22 + mov r0, #0x03 @ SYS_WRITEC 23 + ARM( svc #0x123456 ) 24 + THUMB( svc #0xab ) 25 + mov pc, lr 26 + .align 2 27 + 1: .word _GLOBAL_OFFSET_TABLE_ - . 28 + .word semi_writec_buf(GOT) 29 + ENDPROC(putc) 30 + 31 + .bss 32 + .global semi_writec_buf 33 + .type semi_writec_buf, %object 34 + semi_writec_buf: 35 + .space 4 36 + .size semi_writec_buf, 4 37 + 38 + #endif
+1
arch/arm/boot/compressed/head-sa1100.S
··· 11 11 #include <asm/mach-types.h> 12 12 13 13 .section ".start", "ax" 14 + .arch armv4 14 15 15 16 __SA1100_start: 16 17
+1
arch/arm/boot/compressed/head-shark.S
··· 18 18 19 19 .section ".start", "ax" 20 20 21 + .arch armv4 21 22 b __beginning 22 23 23 24 __ofw_data: .long 0 @ the number of memory blocks
+3 -2
arch/arm/boot/compressed/head.S
··· 11 11 #include <linux/linkage.h> 12 12 #include <asm/assembler.h> 13 13 14 + .arch armv7-a 14 15 /* 15 16 * Debugging stuff 16 17 * ··· 806 805 .align 2 807 806 .type proc_types,#object 808 807 proc_types: 809 - .word 0x00000000 @ old ARM ID 810 - .word 0x0000f000 808 + .word 0x41000000 @ old ARM ID 809 + .word 0xff00f000 811 810 mov pc, lr 812 811 THUMB( nop ) 813 812 mov pc, lr
+2 -2
arch/arm/boot/dts/am33xx.dtsi
··· 409 409 ti,hwmods = "gpmc"; 410 410 reg = <0x50000000 0x2000>; 411 411 interrupts = <100>; 412 - num-cs = <7>; 413 - num-waitpins = <2>; 412 + gpmc,num-cs = <7>; 413 + gpmc,num-waitpins = <2>; 414 414 #address-cells = <2>; 415 415 #size-cells = <1>; 416 416 status = "disabled";
+3 -2
arch/arm/boot/dts/armada-xp-gp.dts
··· 39 39 }; 40 40 41 41 soc { 42 - ranges = <0 0 0xd0000000 0x100000 43 - 0xf0000000 0 0xf0000000 0x1000000>; 42 + ranges = <0 0 0xd0000000 0x100000 /* Internal registers 1MiB */ 43 + 0xe0000000 0 0xe0000000 0x8100000 /* PCIe */ 44 + 0xf0000000 0 0xf0000000 0x1000000 /* Device Bus, NOR 16MiB */>; 44 45 45 46 internal-regs { 46 47 serial@12000 {
+3 -2
arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts
··· 27 27 }; 28 28 29 29 soc { 30 - ranges = <0 0 0xd0000000 0x100000 31 - 0xf0000000 0 0xf0000000 0x8000000>; 30 + ranges = <0 0 0xd0000000 0x100000 /* Internal registers 1MiB */ 31 + 0xe0000000 0 0xe0000000 0x8100000 /* PCIe */ 32 + 0xf0000000 0 0xf0000000 0x8000000 /* Device Bus, NOR 128MiB */>; 32 33 33 34 internal-regs { 34 35 serial@12000 {
+1
arch/arm/boot/dts/bcm2835.dtsi
··· 44 44 reg = <0x7e201000 0x1000>; 45 45 interrupts = <2 25>; 46 46 clock-frequency = <3000000>; 47 + arm,primecell-periphid = <0x00241011>; 47 48 }; 48 49 49 50 gpio: gpio {
+6 -6
arch/arm/boot/dts/imx25.dtsi
··· 141 141 #size-cells = <0>; 142 142 compatible = "fsl,imx25-cspi", "fsl,imx35-cspi"; 143 143 reg = <0x43fa4000 0x4000>; 144 - clocks = <&clks 62>; 145 - clock-names = "ipg"; 144 + clocks = <&clks 62>, <&clks 62>; 145 + clock-names = "ipg", "per"; 146 146 interrupts = <14>; 147 147 status = "disabled"; 148 148 }; ··· 182 182 compatible = "fsl,imx25-cspi", "fsl,imx35-cspi"; 183 183 reg = <0x50004000 0x4000>; 184 184 interrupts = <0>; 185 - clocks = <&clks 80>; 186 - clock-names = "ipg"; 185 + clocks = <&clks 80>, <&clks 80>; 186 + clock-names = "ipg", "per"; 187 187 status = "disabled"; 188 188 }; 189 189 ··· 210 210 #size-cells = <0>; 211 211 compatible = "fsl,imx25-cspi", "fsl,imx35-cspi"; 212 212 reg = <0x50010000 0x4000>; 213 - clocks = <&clks 79>; 214 - clock-names = "ipg"; 213 + clocks = <&clks 79>, <&clks 79>; 214 + clock-names = "ipg", "per"; 215 215 interrupts = <13>; 216 216 status = "disabled"; 217 217 };
+3 -3
arch/arm/boot/dts/imx27.dtsi
··· 131 131 compatible = "fsl,imx27-cspi"; 132 132 reg = <0x1000e000 0x1000>; 133 133 interrupts = <16>; 134 - clocks = <&clks 53>, <&clks 0>; 134 + clocks = <&clks 53>, <&clks 53>; 135 135 clock-names = "ipg", "per"; 136 136 status = "disabled"; 137 137 }; ··· 142 142 compatible = "fsl,imx27-cspi"; 143 143 reg = <0x1000f000 0x1000>; 144 144 interrupts = <15>; 145 - clocks = <&clks 52>, <&clks 0>; 145 + clocks = <&clks 52>, <&clks 52>; 146 146 clock-names = "ipg", "per"; 147 147 status = "disabled"; 148 148 }; ··· 223 223 compatible = "fsl,imx27-cspi"; 224 224 reg = <0x10017000 0x1000>; 225 225 interrupts = <6>; 226 - clocks = <&clks 51>, <&clks 0>; 226 + clocks = <&clks 51>, <&clks 51>; 227 227 clock-names = "ipg", "per"; 228 228 status = "disabled"; 229 229 };
+1 -1
arch/arm/boot/dts/imx51.dtsi
··· 631 631 compatible = "fsl,imx51-cspi", "fsl,imx35-cspi"; 632 632 reg = <0x83fc0000 0x4000>; 633 633 interrupts = <38>; 634 - clocks = <&clks 55>, <&clks 0>; 634 + clocks = <&clks 55>, <&clks 55>; 635 635 clock-names = "ipg", "per"; 636 636 status = "disabled"; 637 637 };
+1 -1
arch/arm/boot/dts/imx53.dtsi
··· 714 714 compatible = "fsl,imx53-cspi", "fsl,imx35-cspi"; 715 715 reg = <0x63fc0000 0x4000>; 716 716 interrupts = <38>; 717 - clocks = <&clks 55>, <&clks 0>; 717 + clocks = <&clks 55>, <&clks 55>; 718 718 clock-names = "ipg", "per"; 719 719 status = "disabled"; 720 720 };
+20
arch/arm/boot/dts/omap4-panda-common.dtsi
··· 56 56 }; 57 57 }; 58 58 59 + &omap4_pmx_wkup { 60 + pinctrl-names = "default"; 61 + pinctrl-0 = < 62 + &twl6030_wkup_pins 63 + >; 64 + 65 + twl6030_wkup_pins: pinmux_twl6030_wkup_pins { 66 + pinctrl-single,pins = < 67 + 0x14 0x2 /* fref_clk0_out.sys_drm_msecure OUTPUT | MODE2 */ 68 + >; 69 + }; 70 + }; 71 + 59 72 &omap4_pmx_core { 60 73 pinctrl-names = "default"; 61 74 pinctrl-0 = < 75 + &twl6030_pins 62 76 &twl6040_pins 63 77 &mcpdm_pins 64 78 &mcbsp1_pins 65 79 &dss_hdmi_pins 66 80 &tpd12s015_pins 67 81 >; 82 + 83 + twl6030_pins: pinmux_twl6030_pins { 84 + pinctrl-single,pins = < 85 + 0x15e 0x4118 /* sys_nirq1.sys_nirq1 OMAP_WAKEUP_EN | INPUT_PULLUP | MODE0 */ 86 + >; 87 + }; 68 88 69 89 twl6040_pins: pinmux_twl6040_pins { 70 90 pinctrl-single,pins = <
+20
arch/arm/boot/dts/omap4-sdp.dts
··· 142 142 }; 143 143 }; 144 144 145 + &omap4_pmx_wkup { 146 + pinctrl-names = "default"; 147 + pinctrl-0 = < 148 + &twl6030_wkup_pins 149 + >; 150 + 151 + twl6030_wkup_pins: pinmux_twl6030_wkup_pins { 152 + pinctrl-single,pins = < 153 + 0x14 0x2 /* fref_clk0_out.sys_drm_msecure OUTPUT | MODE2 */ 154 + >; 155 + }; 156 + }; 157 + 145 158 &omap4_pmx_core { 146 159 pinctrl-names = "default"; 147 160 pinctrl-0 = < 161 + &twl6030_pins 148 162 &twl6040_pins 149 163 &mcpdm_pins 150 164 &dmic_pins ··· 190 176 pinctrl-single,pins = < 191 177 0x11c 0x100 /* uart4_rx.uart4_rx INPUT | MODE0 */ 192 178 0x11e 0 /* uart4_tx.uart4_tx OUTPUT | MODE0 */ 179 + >; 180 + }; 181 + 182 + twl6030_pins: pinmux_twl6030_pins { 183 + pinctrl-single,pins = < 184 + 0x15e 0x4118 /* sys_nirq1.sys_nirq1 OMAP_WAKEUP_EN | INPUT_PULLUP | MODE0 */ 193 185 >; 194 186 }; 195 187
+3
arch/arm/boot/dts/omap5.dtsi
··· 538 538 interrupts = <0 41 0x4>; 539 539 ti,hwmods = "timer5"; 540 540 ti,timer-dsp; 541 + ti,timer-pwm; 541 542 }; 542 543 543 544 timer6: timer@4013a000 { ··· 575 574 reg = <0x4803e000 0x80>; 576 575 interrupts = <0 45 0x4>; 577 576 ti,hwmods = "timer9"; 577 + ti,timer-pwm; 578 578 }; 579 579 580 580 timer10: timer@48086000 { ··· 583 581 reg = <0x48086000 0x80>; 584 582 interrupts = <0 46 0x4>; 585 583 ti,hwmods = "timer10"; 584 + ti,timer-pwm; 586 585 }; 587 586 588 587 timer11: timer@48088000 {
+9 -2
arch/arm/include/asm/percpu.h
··· 30 30 static inline unsigned long __my_cpu_offset(void) 31 31 { 32 32 unsigned long off; 33 - /* Read TPIDRPRW */ 34 - asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory"); 33 + register unsigned long *sp asm ("sp"); 34 + 35 + /* 36 + * Read TPIDRPRW. 37 + * We want to allow caching the value, so avoid using volatile and 38 + * instead use a fake stack read to hazard against barrier(). 39 + */ 40 + asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp)); 41 + 35 42 return off; 36 43 } 37 44 #define __my_cpu_offset __my_cpu_offset()
+2
arch/arm/kernel/topology.c
··· 13 13 14 14 #include <linux/cpu.h> 15 15 #include <linux/cpumask.h> 16 + #include <linux/export.h> 16 17 #include <linux/init.h> 17 18 #include <linux/percpu.h> 18 19 #include <linux/node.h> ··· 201 200 * cpu topology table 202 201 */ 203 202 struct cputopo_arm cpu_topology[NR_CPUS]; 203 + EXPORT_SYMBOL_GPL(cpu_topology); 204 204 205 205 const struct cpumask *cpu_coregroup_mask(int cpu) 206 206 {
+2
arch/arm/mach-exynos/common.c
··· 386 386 387 387 void __init exynos_init_io(struct map_desc *mach_desc, int size) 388 388 { 389 + debug_ll_io_init(); 390 + 389 391 #ifdef CONFIG_OF 390 392 if (initial_boot_params) 391 393 of_scan_flat_dt(exynos_fdt_map_chipid, NULL);
+2 -2
arch/arm/mach-imx/clk-imx6q.c
··· 181 181 static const char *periph2_clk2_sels[] = { "pll3_usb_otg", "pll2_bus", }; 182 182 static const char *periph_sels[] = { "periph_pre", "periph_clk2", }; 183 183 static const char *periph2_sels[] = { "periph2_pre", "periph2_clk2", }; 184 - static const char *axi_sels[] = { "periph", "pll2_pfd2_396m", "pll3_pfd1_540m", }; 184 + static const char *axi_sels[] = { "periph", "pll2_pfd2_396m", "periph", "pll3_pfd1_540m", }; 185 185 static const char *audio_sels[] = { "pll4_post_div", "pll3_pfd2_508m", "pll3_pfd3_454m", "pll3_usb_otg", }; 186 186 static const char *gpu_axi_sels[] = { "axi", "ahb", }; 187 187 static const char *gpu2d_core_sels[] = { "axi", "pll3_usb_otg", "pll2_pfd0_352m", "pll2_pfd2_396m", }; 188 188 static const char *gpu3d_core_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll2_pfd1_594m", "pll2_pfd2_396m", }; 189 189 static const char *gpu3d_shader_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll2_pfd1_594m", "pll3_pfd0_720m", }; 190 190 static const char *ipu_sels[] = { "mmdc_ch0_axi", "pll2_pfd2_396m", "pll3_120m", "pll3_pfd1_540m", }; 191 - static const char *ldb_di_sels[] = { "pll5_video", "pll2_pfd0_352m", "pll2_pfd2_396m", "mmdc_ch1_axi", "pll3_usb_otg", }; 191 + static const char *ldb_di_sels[] = { "pll5_video_div", "pll2_pfd0_352m", "pll2_pfd2_396m", "mmdc_ch1_axi", "pll3_usb_otg", }; 192 192 static const char *ipu_di_pre_sels[] = { "mmdc_ch0_axi", "pll3_usb_otg", "pll5_video_div", "pll2_pfd0_352m", "pll2_pfd2_396m", "pll3_pfd1_540m", }; 193 193 static const char *ipu1_di0_sels[] = { "ipu1_di0_pre", "dummy", "dummy", "ldb_di0", "ldb_di1", }; 194 194 static const char *ipu1_di1_sels[] = { "ipu1_di1_pre", "dummy", "dummy", "ldb_di0", "ldb_di1", };
-10
arch/arm/mach-kirkwood/board-ts219.c
··· 41 41 42 42 pm_power_off = qnap_tsx1x_power_off; 43 43 } 44 - 45 - /* FIXME: Will not work with DT. Maybe use MPP40_GPIO? */ 46 - static int __init ts219_pci_init(void) 47 - { 48 - if (machine_is_ts219()) 49 - kirkwood_pcie_init(KW_PCIE0); 50 - 51 - return 0; 52 - } 53 - subsys_initcall(ts219_pci_init);
+3 -2
arch/arm/mach-kirkwood/mpp.c
··· 22 22 23 23 kirkwood_pcie_id(&dev, &rev); 24 24 25 - if ((dev == MV88F6281_DEV_ID && rev >= MV88F6281_REV_A0) || 26 - (dev == MV88F6282_DEV_ID)) 25 + if (dev == MV88F6281_DEV_ID && rev >= MV88F6281_REV_A0) 27 26 return MPP_F6281_MASK; 27 + if (dev == MV88F6282_DEV_ID) 28 + return MPP_F6282_MASK; 28 29 if (dev == MV88F6192_DEV_ID && rev >= MV88F6192_REV_A0) 29 30 return MPP_F6192_MASK; 30 31 if (dev == MV88F6180_DEV_ID)
+11 -5
arch/arm/mach-mvebu/coherency_ll.S
··· 32 32 33 33 /* Add CPU to SMP group - Atomic */ 34 34 add r3, r0, #ARMADA_XP_CFB_CTL_REG_OFFSET 35 - ldr r2, [r3] 35 + 1: 36 + ldrex r2, [r3] 36 37 orr r2, r2, r1 37 - str r2, [r3] 38 + strex r0, r2, [r3] 39 + cmp r0, #0 40 + bne 1b 38 41 39 42 /* Enable coherency on CPU - Atomic */ 40 - add r3, r0, #ARMADA_XP_CFB_CFG_REG_OFFSET 41 - ldr r2, [r3] 43 + add r3, r3, #ARMADA_XP_CFB_CFG_REG_OFFSET 44 + 1: 45 + ldrex r2, [r3] 42 46 orr r2, r2, r1 43 - str r2, [r3] 47 + strex r0, r2, [r3] 48 + cmp r0, #0 49 + bne 1b 44 50 45 51 dsb 46 52
+9 -9
arch/arm/mach-omap2/clock36xx.c
··· 20 20 21 21 #include <linux/kernel.h> 22 22 #include <linux/clk.h> 23 + #include <linux/clk-provider.h> 23 24 #include <linux/io.h> 24 25 25 26 #include "clock.h" 26 27 #include "clock36xx.h" 27 - 28 + #define to_clk_divider(_hw) container_of(_hw, struct clk_divider, hw) 28 29 29 30 /** 30 31 * omap36xx_pwrdn_clk_enable_with_hsdiv_restore - enable clocks suffering ··· 40 39 */ 41 40 int omap36xx_pwrdn_clk_enable_with_hsdiv_restore(struct clk_hw *clk) 42 41 { 43 - struct clk_hw_omap *parent; 42 + struct clk_divider *parent; 44 43 struct clk_hw *parent_hw; 45 - u32 dummy_v, orig_v, clksel_shift; 44 + u32 dummy_v, orig_v; 46 45 int ret; 47 46 48 47 /* Clear PWRDN bit of HSDIVIDER */ 49 48 ret = omap2_dflt_clk_enable(clk); 50 49 51 50 parent_hw = __clk_get_hw(__clk_get_parent(clk->clk)); 52 - parent = to_clk_hw_omap(parent_hw); 51 + parent = to_clk_divider(parent_hw); 53 52 54 53 /* Restore the dividers */ 55 54 if (!ret) { 56 - clksel_shift = __ffs(parent->clksel_mask); 57 - orig_v = __raw_readl(parent->clksel_reg); 55 + orig_v = __raw_readl(parent->reg); 58 56 dummy_v = orig_v; 59 57 60 58 /* Write any other value different from the Read value */ 61 - dummy_v ^= (1 << clksel_shift); 62 - __raw_writel(dummy_v, parent->clksel_reg); 59 + dummy_v ^= (1 << parent->shift); 60 + __raw_writel(dummy_v, parent->reg); 63 61 64 62 /* Write the original divider */ 65 - __raw_writel(orig_v, parent->clksel_reg); 63 + __raw_writel(orig_v, parent->reg); 66 64 } 67 65 68 66 return ret;
+8 -1
arch/arm/mach-omap2/omap_hwmod_33xx_data.c
··· 2007 2007 }, 2008 2008 }; 2009 2009 2010 + /* uart2 */ 2011 + static struct omap_hwmod_dma_info uart2_edma_reqs[] = { 2012 + { .name = "tx", .dma_req = 28, }, 2013 + { .name = "rx", .dma_req = 29, }, 2014 + { .dma_req = -1 } 2015 + }; 2016 + 2010 2017 static struct omap_hwmod_irq_info am33xx_uart2_irqs[] = { 2011 2018 { .irq = 73 + OMAP_INTC_START, }, 2012 2019 { .irq = -1 }, ··· 2025 2018 .clkdm_name = "l4ls_clkdm", 2026 2019 .flags = HWMOD_SWSUP_SIDLE_ACT, 2027 2020 .mpu_irqs = am33xx_uart2_irqs, 2028 - .sdma_reqs = uart1_edma_reqs, 2021 + .sdma_reqs = uart2_edma_reqs, 2029 2022 .main_clk = "dpll_per_m2_div4_ck", 2030 2023 .prcm = { 2031 2024 .omap4 = {
+4 -2
arch/arm/mach-omap2/pm34xx.c
··· 546 546 /* Clear any pending PRCM interrupts */ 547 547 omap2_prm_write_mod_reg(0, OCP_MOD, OMAP3_PRM_IRQSTATUS_MPU_OFFSET); 548 548 549 - if (omap3_has_iva()) 550 - omap3_iva_idle(); 549 + /* 550 + * We need to idle iva2_pwrdm even on am3703 with no iva2. 551 + */ 552 + omap3_iva_idle(); 551 553 552 554 omap3_d2d_idle(); 553 555 }
+4 -2
arch/arm/mach-prima2/pm.c
··· 101 101 struct device_node *np; 102 102 103 103 np = of_find_matching_node(NULL, pwrc_ids); 104 - if (!np) 105 - panic("unable to find compatible pwrc node in dtb\n"); 104 + if (!np) { 105 + pr_err("unable to find compatible sirf pwrc node in dtb\n"); 106 + return -ENOENT; 107 + } 106 108 107 109 /* 108 110 * pwrc behind rtciobrg is not located in memory space
+4 -2
arch/arm/mach-prima2/rstc.c
··· 28 28 struct device_node *np; 29 29 30 30 np = of_find_matching_node(NULL, rstc_ids); 31 - if (!np) 32 - panic("unable to find compatible rstc node in dtb\n"); 31 + if (!np) { 32 + pr_err("unable to find compatible sirf rstc node in dtb\n"); 33 + return -ENOENT; 34 + } 33 35 34 36 sirfsoc_rstc_base = of_iomap(np, 0); 35 37 if (!sirfsoc_rstc_base)
+1 -1
arch/arm/mach-shmobile/setup-sh73a0.c
··· 252 252 .name = "CMT10", 253 253 .channel_offset = 0x10, 254 254 .timer_bit = 0, 255 - .clockevent_rating = 125, 255 + .clockevent_rating = 80, 256 256 .clocksource_rating = 125, 257 257 }; 258 258
+3
arch/arm/mach-ux500/board-mop500-regulators.c
··· 374 374 static struct regulator_init_data ab8500_regulators[AB8500_NUM_REGULATORS] = { 375 375 /* supplies to the display/camera */ 376 376 [AB8500_LDO_AUX1] = { 377 + .supply_regulator = "ab8500-ext-supply3", 377 378 .constraints = { 378 379 .name = "V-DISPLAY", 379 380 .min_uV = 2800000, ··· 388 387 }, 389 388 /* supplies to the on-board eMMC */ 390 389 [AB8500_LDO_AUX2] = { 390 + .supply_regulator = "ab8500-ext-supply3", 391 391 .constraints = { 392 392 .name = "V-eMMC1", 393 393 .min_uV = 1100000, ··· 404 402 }, 405 403 /* supply for VAUX3, supplies to SDcard slots */ 406 404 [AB8500_LDO_AUX3] = { 405 + .supply_regulator = "ab8500-ext-supply3", 407 406 .constraints = { 408 407 .name = "V-MMC-SD", 409 408 .min_uV = 1100000,
+4
arch/arm/mach-ux500/cpuidle.c
··· 21 21 #include <asm/proc-fns.h> 22 22 23 23 #include "db8500-regs.h" 24 + #include "id.h" 24 25 25 26 static atomic_t master = ATOMIC_INIT(0); 26 27 static DEFINE_SPINLOCK(master_lock); ··· 115 114 116 115 int __init ux500_idle_init(void) 117 116 { 117 + if (!(cpu_is_u8500_family() || cpu_is_ux540_family())) 118 + return -ENODEV; 119 + 118 120 /* Configure wake up reasons */ 119 121 prcmu_enable_wakeups(PRCMU_WAKEUP(ARM) | PRCMU_WAKEUP(RTC) | 120 122 PRCMU_WAKEUP(ABB));
+9 -1
arch/arm/plat-samsung/include/plat/uncompress.h
··· 66 66 67 67 static void putc(int ch) 68 68 { 69 + if (!config_enabled(CONFIG_DEBUG_LL)) 70 + return; 71 + 69 72 if (uart_rd(S3C2410_UFCON) & S3C2410_UFCON_FIFOMODE) { 70 73 int level; 71 74 ··· 121 118 #ifdef CONFIG_S3C_BOOT_UART_FORCE_FIFO 122 119 static inline void arch_enable_uart_fifo(void) 123 120 { 124 - u32 fifocon = uart_rd(S3C2410_UFCON); 121 + u32 fifocon; 122 + 123 + if (!config_enabled(CONFIG_DEBUG_LL)) 124 + return; 125 + 126 + fifocon = uart_rd(S3C2410_UFCON); 125 127 126 128 if (!(fifocon & S3C2410_UFCON_FIFOMODE)) { 127 129 fifocon |= S3C2410_UFCON_RESETBOTH;
+13 -5
arch/arm/plat-samsung/pm.c
··· 16 16 #include <linux/suspend.h> 17 17 #include <linux/errno.h> 18 18 #include <linux/delay.h> 19 + #include <linux/of.h> 19 20 #include <linux/serial_core.h> 20 21 #include <linux/io.h> 21 22 ··· 262 261 * require a full power-cycle) 263 262 */ 264 263 265 - if (!any_allowed(s3c_irqwake_intmask, s3c_irqwake_intallow) && 264 + if (!of_have_populated_dt() && 265 + !any_allowed(s3c_irqwake_intmask, s3c_irqwake_intallow) && 266 266 !any_allowed(s3c_irqwake_eintmask, s3c_irqwake_eintallow)) { 267 267 printk(KERN_ERR "%s: No wake-up sources!\n", __func__); 268 268 printk(KERN_ERR "%s: Aborting sleep\n", __func__); ··· 272 270 273 271 /* save all necessary core registers not covered by the drivers */ 274 272 275 - samsung_pm_save_gpios(); 276 - samsung_pm_saved_gpios(); 273 + if (!of_have_populated_dt()) { 274 + samsung_pm_save_gpios(); 275 + samsung_pm_saved_gpios(); 276 + } 277 + 277 278 s3c_pm_save_uarts(); 278 279 s3c_pm_save_core(); 279 280 ··· 315 310 316 311 s3c_pm_restore_core(); 317 312 s3c_pm_restore_uarts(); 318 - samsung_pm_restore_gpios(); 319 - s3c_pm_restored_gpios(); 313 + 314 + if (!of_have_populated_dt()) { 315 + samsung_pm_restore_gpios(); 316 + s3c_pm_restored_gpios(); 317 + } 320 318 321 319 s3c_pm_debug_init(); 322 320
+2 -1
arch/m68k/include/asm/gpio.h
··· 86 86 return gpio < MCFGPIO_PIN_MAX ? 0 : __gpio_cansleep(gpio); 87 87 } 88 88 89 + #ifndef CONFIG_GPIOLIB 89 90 static inline int gpio_request_one(unsigned gpio, unsigned long flags, const char *label) 90 91 { 91 92 int err; ··· 106 105 107 106 return err; 108 107 } 109 - 108 + #endif /* !CONFIG_GPIOLIB */ 110 109 #endif
+9 -6
arch/mips/cavium-octeon/setup.c
··· 428 428 */ 429 429 static void octeon_kill_core(void *arg) 430 430 { 431 - mb(); 432 - if (octeon_is_simulation()) { 433 - /* The simulator needs the watchdog to stop for dead cores */ 434 - cvmx_write_csr(CVMX_CIU_WDOGX(cvmx_get_core_num()), 0); 431 + if (octeon_is_simulation()) 435 432 /* A break instruction causes the simulator stop a core */ 436 - asm volatile ("sync\nbreak"); 437 - } 433 + asm volatile ("break" ::: "memory"); 434 + 435 + local_irq_disable(); 436 + /* Disable watchdog on this core. */ 437 + cvmx_write_csr(CVMX_CIU_WDOGX(cvmx_get_core_num()), 0); 438 + /* Spin in a low power mode. */ 439 + while (true) 440 + asm volatile ("wait" ::: "memory"); 438 441 } 439 442 440 443
+1 -1
arch/mips/include/asm/mmu_context.h
··· 117 117 if (! ((asid += ASID_INC) & ASID_MASK) ) { 118 118 if (cpu_has_vtag_icache) 119 119 flush_icache_all(); 120 - #ifdef CONFIG_VIRTUALIZATION 120 + #ifdef CONFIG_KVM 121 121 kvm_local_flush_tlb_all(); /* start new asid cycle */ 122 122 #else 123 123 local_flush_tlb_all(); /* start new asid cycle */
+32
arch/mips/include/asm/ptrace.h
··· 16 16 #include <asm/isadep.h> 17 17 #include <uapi/asm/ptrace.h> 18 18 19 + /* 20 + * This struct defines the way the registers are stored on the stack during a 21 + * system call/exception. As usual the registers k0/k1 aren't being saved. 22 + */ 23 + struct pt_regs { 24 + #ifdef CONFIG_32BIT 25 + /* Pad bytes for argument save space on the stack. */ 26 + unsigned long pad0[6]; 27 + #endif 28 + 29 + /* Saved main processor registers. */ 30 + unsigned long regs[32]; 31 + 32 + /* Saved special registers. */ 33 + unsigned long cp0_status; 34 + unsigned long hi; 35 + unsigned long lo; 36 + #ifdef CONFIG_CPU_HAS_SMARTMIPS 37 + unsigned long acx; 38 + #endif 39 + unsigned long cp0_badvaddr; 40 + unsigned long cp0_cause; 41 + unsigned long cp0_epc; 42 + #ifdef CONFIG_MIPS_MT_SMTC 43 + unsigned long cp0_tcstatus; 44 + #endif /* CONFIG_MIPS_MT_SMTC */ 45 + #ifdef CONFIG_CPU_CAVIUM_OCTEON 46 + unsigned long long mpl[3]; /* MTM{0,1,2} */ 47 + unsigned long long mtp[3]; /* MTP{0,1,2} */ 48 + #endif 49 + } __aligned(8); 50 + 19 51 struct task_struct; 20 52 21 53 extern int ptrace_getregs(struct task_struct *child, __s64 __user *data);
+39 -42
arch/mips/include/uapi/asm/kvm.h
··· 58 58 * bits[2..0] - Register 'sel' index. 59 59 * bits[7..3] - Register 'rd' index. 60 60 * bits[15..8] - Must be zero. 61 - * bits[63..16] - 1 -> CP0 registers. 61 + * bits[31..16] - 1 -> CP0 registers. 62 + * bits[51..32] - Must be zero. 63 + * bits[63..52] - As per linux/kvm.h 62 64 * 63 65 * Other sets registers may be added in the future. Each set would 64 - * have its own identifier in bits[63..16]. 65 - * 66 - * The addr field of struct kvm_one_reg must point to an aligned 67 - * 64-bit wide location. For registers that are narrower than 68 - * 64-bits, the value is stored in the low order bits of the location, 69 - * and sign extended to 64-bits. 66 + * have its own identifier in bits[31..16]. 70 67 * 71 68 * The registers defined in struct kvm_regs are also accessible, the 72 69 * id values for these are below. 73 70 */ 74 71 75 - #define KVM_REG_MIPS_R0 0 76 - #define KVM_REG_MIPS_R1 1 77 - #define KVM_REG_MIPS_R2 2 78 - #define KVM_REG_MIPS_R3 3 79 - #define KVM_REG_MIPS_R4 4 80 - #define KVM_REG_MIPS_R5 5 81 - #define KVM_REG_MIPS_R6 6 82 - #define KVM_REG_MIPS_R7 7 83 - #define KVM_REG_MIPS_R8 8 84 - #define KVM_REG_MIPS_R9 9 85 - #define KVM_REG_MIPS_R10 10 86 - #define KVM_REG_MIPS_R11 11 87 - #define KVM_REG_MIPS_R12 12 88 - #define KVM_REG_MIPS_R13 13 89 - #define KVM_REG_MIPS_R14 14 90 - #define KVM_REG_MIPS_R15 15 91 - #define KVM_REG_MIPS_R16 16 92 - #define KVM_REG_MIPS_R17 17 93 - #define KVM_REG_MIPS_R18 18 94 - #define KVM_REG_MIPS_R19 19 95 - #define KVM_REG_MIPS_R20 20 96 - #define KVM_REG_MIPS_R21 21 97 - #define KVM_REG_MIPS_R22 22 98 - #define KVM_REG_MIPS_R23 23 99 - #define KVM_REG_MIPS_R24 24 100 - #define KVM_REG_MIPS_R25 25 101 - #define KVM_REG_MIPS_R26 26 102 - #define KVM_REG_MIPS_R27 27 103 - #define KVM_REG_MIPS_R28 28 104 - #define KVM_REG_MIPS_R29 29 105 - #define KVM_REG_MIPS_R30 30 106 - #define KVM_REG_MIPS_R31 31 72 + #define KVM_REG_MIPS_R0 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 0) 73 + #define KVM_REG_MIPS_R1 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 1) 74 + #define KVM_REG_MIPS_R2 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 2) 75 + #define KVM_REG_MIPS_R3 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 3) 76 + #define KVM_REG_MIPS_R4 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 4) 77 + #define KVM_REG_MIPS_R5 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 5) 78 + #define KVM_REG_MIPS_R6 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 6) 79 + #define KVM_REG_MIPS_R7 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 7) 80 + #define KVM_REG_MIPS_R8 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 8) 81 + #define KVM_REG_MIPS_R9 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 9) 82 + #define KVM_REG_MIPS_R10 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 10) 83 + #define KVM_REG_MIPS_R11 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 11) 84 + #define KVM_REG_MIPS_R12 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 12) 85 + #define KVM_REG_MIPS_R13 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 13) 86 + #define KVM_REG_MIPS_R14 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 14) 87 + #define KVM_REG_MIPS_R15 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 15) 88 + #define KVM_REG_MIPS_R16 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 16) 89 + #define KVM_REG_MIPS_R17 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 17) 90 + #define KVM_REG_MIPS_R18 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 18) 91 + #define KVM_REG_MIPS_R19 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 19) 92 + #define KVM_REG_MIPS_R20 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 20) 93 + #define KVM_REG_MIPS_R21 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 21) 94 + #define KVM_REG_MIPS_R22 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 22) 95 + #define KVM_REG_MIPS_R23 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 23) 96 + #define KVM_REG_MIPS_R24 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 24) 97 + #define KVM_REG_MIPS_R25 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 25) 98 + #define KVM_REG_MIPS_R26 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 26) 99 + #define KVM_REG_MIPS_R27 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 27) 100 + #define KVM_REG_MIPS_R28 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 28) 101 + #define KVM_REG_MIPS_R29 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 29) 102 + #define KVM_REG_MIPS_R30 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 30) 103 + #define KVM_REG_MIPS_R31 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 31) 107 104 108 - #define KVM_REG_MIPS_HI 32 109 - #define KVM_REG_MIPS_LO 33 110 - #define KVM_REG_MIPS_PC 34 105 + #define KVM_REG_MIPS_HI (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 32) 106 + #define KVM_REG_MIPS_LO (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 33) 107 + #define KVM_REG_MIPS_PC (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 34) 111 108 112 109 /* 113 110 * KVM MIPS specific structures and definitions
+2 -15
arch/mips/include/uapi/asm/ptrace.h
··· 22 22 #define DSP_CONTROL 77 23 23 #define ACX 78 24 24 25 + #ifndef __KERNEL__ 25 26 /* 26 27 * This struct defines the way the registers are stored on the stack during a 27 28 * system call/exception. As usual the registers k0/k1 aren't being saved. 28 29 */ 29 30 struct pt_regs { 30 - #ifdef CONFIG_32BIT 31 - /* Pad bytes for argument save space on the stack. */ 32 - unsigned long pad0[6]; 33 - #endif 34 - 35 31 /* Saved main processor registers. */ 36 32 unsigned long regs[32]; 37 33 ··· 35 39 unsigned long cp0_status; 36 40 unsigned long hi; 37 41 unsigned long lo; 38 - #ifdef CONFIG_CPU_HAS_SMARTMIPS 39 - unsigned long acx; 40 - #endif 41 42 unsigned long cp0_badvaddr; 42 43 unsigned long cp0_cause; 43 44 unsigned long cp0_epc; 44 - #ifdef CONFIG_MIPS_MT_SMTC 45 - unsigned long cp0_tcstatus; 46 - #endif /* CONFIG_MIPS_MT_SMTC */ 47 - #ifdef CONFIG_CPU_CAVIUM_OCTEON 48 - unsigned long long mpl[3]; /* MTM{0,1,2} */ 49 - unsigned long long mtp[3]; /* MTP{0,1,2} */ 50 - #endif 51 45 } __attribute__ ((aligned (8))); 46 + #endif /* __KERNEL__ */ 52 47 53 48 /* Arbitrarily choose the same ptrace numbers as used by the Sparc code. */ 54 49 #define PTRACE_GETREGS 12
+11
arch/mips/kernel/binfmt_elfn32.c
··· 119 119 #undef TASK_SIZE 120 120 #define TASK_SIZE TASK_SIZE32 121 121 122 + #undef cputime_to_timeval 123 + #define cputime_to_timeval cputime_to_compat_timeval 124 + static __inline__ void 125 + cputime_to_compat_timeval(const cputime_t cputime, struct compat_timeval *value) 126 + { 127 + unsigned long jiffies = cputime_to_jiffies(cputime); 128 + 129 + value->tv_usec = (jiffies % HZ) * (1000000L / HZ); 130 + value->tv_sec = jiffies / HZ; 131 + } 132 + 122 133 #include "../../../fs/binfmt_elf.c"
+11
arch/mips/kernel/binfmt_elfo32.c
··· 162 162 #undef TASK_SIZE 163 163 #define TASK_SIZE TASK_SIZE32 164 164 165 + #undef cputime_to_timeval 166 + #define cputime_to_timeval cputime_to_compat_timeval 167 + static __inline__ void 168 + cputime_to_compat_timeval(const cputime_t cputime, struct compat_timeval *value) 169 + { 170 + unsigned long jiffies = cputime_to_jiffies(cputime); 171 + 172 + value->tv_usec = (jiffies % HZ) * (1000000L / HZ); 173 + value->tv_sec = jiffies / HZ; 174 + } 175 + 165 176 #include "../../../fs/binfmt_elf.c"
+4
arch/mips/kernel/ftrace.c
··· 25 25 #define MCOUNT_OFFSET_INSNS 4 26 26 #endif 27 27 28 + #ifdef CONFIG_DYNAMIC_FTRACE 29 + 28 30 /* Arch override because MIPS doesn't need to run this from stop_machine() */ 29 31 void arch_ftrace_update_code(int command) 30 32 { 31 33 ftrace_modify_all_code(command); 32 34 } 35 + 36 + #endif 33 37 34 38 /* 35 39 * Check if the address is in kernel space
+7 -6
arch/mips/kernel/idle.c
··· 93 93 } 94 94 95 95 /* 96 - * The Au1xxx wait is available only if using 32khz counter or 97 - * external timer source, but specifically not CP0 Counter. 98 - * alchemy/common/time.c may override cpu_wait! 96 + * Au1 'wait' is only useful when the 32kHz counter is used as timer, 97 + * since coreclock (and the cp0 counter) stops upon executing it. Only an 98 + * interrupt can wake it, so they must be enabled before entering idle modes. 99 99 */ 100 100 static void au1k_wait(void) 101 101 { 102 + unsigned long c0status = read_c0_status() | 1; /* irqs on */ 103 + 102 104 __asm__( 103 105 " .set mips3 \n" 104 106 " cache 0x14, 0(%0) \n" 105 107 " cache 0x14, 32(%0) \n" 106 108 " sync \n" 107 - " nop \n" 109 + " mtc0 %1, $12 \n" /* wr c0status */ 108 110 " wait \n" 109 111 " nop \n" 110 112 " nop \n" 111 113 " nop \n" 112 114 " nop \n" 113 115 " .set mips0 \n" 114 - : : "r" (au1k_wait)); 115 - local_irq_enable(); 116 + : : "r" (au1k_wait), "r" (c0status)); 116 117 } 117 118 118 119 static int __initdata nowait;
+1
arch/mips/kernel/rtlx.c
··· 40 40 #include <asm/processor.h> 41 41 #include <asm/vpe.h> 42 42 #include <asm/rtlx.h> 43 + #include <asm/setup.h> 43 44 44 45 static struct rtlx_info *rtlx; 45 46 static int major;
+15 -13
arch/mips/kernel/traps.c
··· 897 897 898 898 asmlinkage void do_tr(struct pt_regs *regs) 899 899 { 900 - unsigned int opcode, tcode = 0; 900 + u32 opcode, tcode = 0; 901 901 u16 instr[2]; 902 - unsigned long epc = exception_epc(regs); 902 + unsigned long epc = msk_isa16_mode(exception_epc(regs)); 903 903 904 - if ((__get_user(instr[0], (u16 __user *)msk_isa16_mode(epc))) || 905 - (__get_user(instr[1], (u16 __user *)msk_isa16_mode(epc + 2)))) 904 + if (get_isa16_mode(regs->cp0_epc)) { 905 + if (__get_user(instr[0], (u16 __user *)(epc + 0)) || 906 + __get_user(instr[1], (u16 __user *)(epc + 2))) 906 907 goto out_sigsegv; 907 - opcode = (instr[0] << 16) | instr[1]; 908 - 909 - /* Immediate versions don't provide a code. */ 910 - if (!(opcode & OPCODE)) { 911 - if (get_isa16_mode(regs->cp0_epc)) 912 - /* microMIPS */ 913 - tcode = (opcode >> 12) & 0x1f; 914 - else 915 - tcode = ((opcode >> 6) & ((1 << 10) - 1)); 908 + opcode = (instr[0] << 16) | instr[1]; 909 + /* Immediate versions don't provide a code. */ 910 + if (!(opcode & OPCODE)) 911 + tcode = (opcode >> 12) & ((1 << 4) - 1); 912 + } else { 913 + if (__get_user(opcode, (u32 __user *)epc)) 914 + goto out_sigsegv; 915 + /* Immediate versions don't provide a code. */ 916 + if (!(opcode & OPCODE)) 917 + tcode = (opcode >> 6) & ((1 << 10) - 1); 916 918 } 917 919 918 920 do_trap_or_bp(regs, tcode, "Trap");
+54 -29
arch/mips/kvm/kvm_mips.c
··· 485 485 return -ENOIOCTLCMD; 486 486 } 487 487 488 - #define KVM_REG_MIPS_CP0_INDEX (0x10000 + 8 * 0 + 0) 489 - #define KVM_REG_MIPS_CP0_ENTRYLO0 (0x10000 + 8 * 2 + 0) 490 - #define KVM_REG_MIPS_CP0_ENTRYLO1 (0x10000 + 8 * 3 + 0) 491 - #define KVM_REG_MIPS_CP0_CONTEXT (0x10000 + 8 * 4 + 0) 492 - #define KVM_REG_MIPS_CP0_USERLOCAL (0x10000 + 8 * 4 + 2) 493 - #define KVM_REG_MIPS_CP0_PAGEMASK (0x10000 + 8 * 5 + 0) 494 - #define KVM_REG_MIPS_CP0_PAGEGRAIN (0x10000 + 8 * 5 + 1) 495 - #define KVM_REG_MIPS_CP0_WIRED (0x10000 + 8 * 6 + 0) 496 - #define KVM_REG_MIPS_CP0_HWRENA (0x10000 + 8 * 7 + 0) 497 - #define KVM_REG_MIPS_CP0_BADVADDR (0x10000 + 8 * 8 + 0) 498 - #define KVM_REG_MIPS_CP0_COUNT (0x10000 + 8 * 9 + 0) 499 - #define KVM_REG_MIPS_CP0_ENTRYHI (0x10000 + 8 * 10 + 0) 500 - #define KVM_REG_MIPS_CP0_COMPARE (0x10000 + 8 * 11 + 0) 501 - #define KVM_REG_MIPS_CP0_STATUS (0x10000 + 8 * 12 + 0) 502 - #define KVM_REG_MIPS_CP0_CAUSE (0x10000 + 8 * 13 + 0) 503 - #define KVM_REG_MIPS_CP0_EBASE (0x10000 + 8 * 15 + 1) 504 - #define KVM_REG_MIPS_CP0_CONFIG (0x10000 + 8 * 16 + 0) 505 - #define KVM_REG_MIPS_CP0_CONFIG1 (0x10000 + 8 * 16 + 1) 506 - #define KVM_REG_MIPS_CP0_CONFIG2 (0x10000 + 8 * 16 + 2) 507 - #define KVM_REG_MIPS_CP0_CONFIG3 (0x10000 + 8 * 16 + 3) 508 - #define KVM_REG_MIPS_CP0_CONFIG7 (0x10000 + 8 * 16 + 7) 509 - #define KVM_REG_MIPS_CP0_XCONTEXT (0x10000 + 8 * 20 + 0) 510 - #define KVM_REG_MIPS_CP0_ERROREPC (0x10000 + 8 * 30 + 0) 488 + #define MIPS_CP0_32(_R, _S) \ 489 + (KVM_REG_MIPS | KVM_REG_SIZE_U32 | 0x10000 | (8 * (_R) + (_S))) 490 + 491 + #define MIPS_CP0_64(_R, _S) \ 492 + (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 0x10000 | (8 * (_R) + (_S))) 493 + 494 + #define KVM_REG_MIPS_CP0_INDEX MIPS_CP0_32(0, 0) 495 + #define KVM_REG_MIPS_CP0_ENTRYLO0 MIPS_CP0_64(2, 0) 496 + #define KVM_REG_MIPS_CP0_ENTRYLO1 MIPS_CP0_64(3, 0) 497 + #define KVM_REG_MIPS_CP0_CONTEXT MIPS_CP0_64(4, 0) 498 + #define KVM_REG_MIPS_CP0_USERLOCAL MIPS_CP0_64(4, 2) 499 + #define 
KVM_REG_MIPS_CP0_PAGEMASK MIPS_CP0_32(5, 0) 500 + #define KVM_REG_MIPS_CP0_PAGEGRAIN MIPS_CP0_32(5, 1) 501 + #define KVM_REG_MIPS_CP0_WIRED MIPS_CP0_32(6, 0) 502 + #define KVM_REG_MIPS_CP0_HWRENA MIPS_CP0_32(7, 0) 503 + #define KVM_REG_MIPS_CP0_BADVADDR MIPS_CP0_64(8, 0) 504 + #define KVM_REG_MIPS_CP0_COUNT MIPS_CP0_32(9, 0) 505 + #define KVM_REG_MIPS_CP0_ENTRYHI MIPS_CP0_64(10, 0) 506 + #define KVM_REG_MIPS_CP0_COMPARE MIPS_CP0_32(11, 0) 507 + #define KVM_REG_MIPS_CP0_STATUS MIPS_CP0_32(12, 0) 508 + #define KVM_REG_MIPS_CP0_CAUSE MIPS_CP0_32(13, 0) 509 + #define KVM_REG_MIPS_CP0_EBASE MIPS_CP0_64(15, 1) 510 + #define KVM_REG_MIPS_CP0_CONFIG MIPS_CP0_32(16, 0) 511 + #define KVM_REG_MIPS_CP0_CONFIG1 MIPS_CP0_32(16, 1) 512 + #define KVM_REG_MIPS_CP0_CONFIG2 MIPS_CP0_32(16, 2) 513 + #define KVM_REG_MIPS_CP0_CONFIG3 MIPS_CP0_32(16, 3) 514 + #define KVM_REG_MIPS_CP0_CONFIG7 MIPS_CP0_32(16, 7) 515 + #define KVM_REG_MIPS_CP0_XCONTEXT MIPS_CP0_64(20, 0) 516 + #define KVM_REG_MIPS_CP0_ERROREPC MIPS_CP0_64(30, 0) 511 517 512 518 static u64 kvm_mips_get_one_regs[] = { 513 519 KVM_REG_MIPS_R0, ··· 573 567 static int kvm_mips_get_reg(struct kvm_vcpu *vcpu, 574 568 const struct kvm_one_reg *reg) 575 569 { 576 - u64 __user *uaddr = (u64 __user *)(long)reg->addr; 577 - 578 570 struct mips_coproc *cop0 = vcpu->arch.cop0; 579 571 s64 v; 580 572 ··· 635 631 default: 636 632 return -EINVAL; 637 633 } 638 - return put_user(v, uaddr); 634 + if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) { 635 + u64 __user *uaddr64 = (u64 __user *)(long)reg->addr; 636 + return put_user(v, uaddr64); 637 + } else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) { 638 + u32 __user *uaddr32 = (u32 __user *)(long)reg->addr; 639 + u32 v32 = (u32)v; 640 + return put_user(v32, uaddr32); 641 + } else { 642 + return -EINVAL; 643 + } 639 644 } 640 645 641 646 static int kvm_mips_set_reg(struct kvm_vcpu *vcpu, 642 647 const struct kvm_one_reg *reg) 643 648 { 644 - u64 __user *uaddr = (u64 __user 
*)(long)reg->addr; 645 649 struct mips_coproc *cop0 = vcpu->arch.cop0; 646 650 u64 v; 647 651 648 - if (get_user(v, uaddr) != 0) 649 - return -EFAULT; 652 + if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) { 653 + u64 __user *uaddr64 = (u64 __user *)(long)reg->addr; 654 + 655 + if (get_user(v, uaddr64) != 0) 656 + return -EFAULT; 657 + } else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) { 658 + u32 __user *uaddr32 = (u32 __user *)(long)reg->addr; 659 + s32 v32; 660 + 661 + if (get_user(v32, uaddr32) != 0) 662 + return -EFAULT; 663 + v = (s64)v32; 664 + } else { 665 + return -EINVAL; 666 + } 650 667 651 668 switch (reg->id) { 652 669 case KVM_REG_MIPS_R0:
-4
arch/mips/mm/tlbex.c
··· 301 301 static struct uasm_label labels[128] __cpuinitdata; 302 302 static struct uasm_reloc relocs[128] __cpuinitdata; 303 303 304 - #ifdef CONFIG_64BIT 305 - static int check_for_high_segbits __cpuinitdata; 306 - #endif 307 - 308 304 static int check_for_high_segbits __cpuinitdata; 309 305 310 306 static unsigned int kscratch_used_mask __cpuinitdata;
+1 -1
arch/mips/ralink/of.c
··· 88 88 __dt_setup_arch(&__dtb_start); 89 89 90 90 if (soc_info.mem_size) 91 - add_memory_region(soc_info.mem_base, soc_info.mem_size, 91 + add_memory_region(soc_info.mem_base, soc_info.mem_size * SZ_1M, 92 92 BOOT_MEM_RAM); 93 93 else 94 94 detect_memory_region(soc_info.mem_base,
+10 -7
arch/powerpc/include/asm/cputable.h
··· 176 176 #define CPU_FTR_CFAR LONG_ASM_CONST(0x0100000000000000) 177 177 #define CPU_FTR_HAS_PPR LONG_ASM_CONST(0x0200000000000000) 178 178 #define CPU_FTR_DAWR LONG_ASM_CONST(0x0400000000000000) 179 + #define CPU_FTR_DABRX LONG_ASM_CONST(0x0800000000000000) 179 180 180 181 #ifndef __ASSEMBLY__ 181 182 ··· 395 394 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_201 | \ 396 395 CPU_FTR_ALTIVEC_COMP | CPU_FTR_CAN_NAP | CPU_FTR_MMCRA | \ 397 396 CPU_FTR_CP_USE_DCBTZ | CPU_FTR_STCX_CHECKS_ADDRESS | \ 398 - CPU_FTR_HVMODE) 397 + CPU_FTR_HVMODE | CPU_FTR_DABRX) 399 398 #define CPU_FTRS_POWER5 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 400 399 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 401 400 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 402 401 CPU_FTR_COHERENT_ICACHE | CPU_FTR_PURR | \ 403 - CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB) 402 + CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_DABRX) 404 403 #define CPU_FTRS_POWER6 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 405 404 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 406 405 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 407 406 CPU_FTR_COHERENT_ICACHE | \ 408 407 CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \ 409 408 CPU_FTR_DSCR | CPU_FTR_UNALIGNED_LD_STD | \ 410 - CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_CFAR) 409 + CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_CFAR | \ 410 + CPU_FTR_DABRX) 411 411 #define CPU_FTRS_POWER7 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 412 412 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\ 413 413 CPU_FTR_MMCRA | CPU_FTR_SMT | \ ··· 417 415 CPU_FTR_DSCR | CPU_FTR_SAO | CPU_FTR_ASYM_SMT | \ 418 416 CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \ 419 417 CPU_FTR_ICSWX | CPU_FTR_CFAR | CPU_FTR_HVMODE | \ 420 - CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR) 418 + CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX) 421 419 #define CPU_FTRS_POWER8 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 422 420 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\ 423 421 CPU_FTR_MMCRA | 
CPU_FTR_SMT | \ ··· 432 430 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 433 431 CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \ 434 432 CPU_FTR_PAUSE_ZERO | CPU_FTR_CELL_TB_BUG | CPU_FTR_CP_USE_DCBTZ | \ 435 - CPU_FTR_UNALIGNED_LD_STD) 433 + CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_DABRX) 436 434 #define CPU_FTRS_PA6T (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 437 435 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP | \ 438 - CPU_FTR_PURR | CPU_FTR_REAL_LE) 436 + CPU_FTR_PURR | CPU_FTR_REAL_LE | CPU_FTR_DABRX) 439 437 #define CPU_FTRS_COMPATIBLE (CPU_FTR_USE_TB | CPU_FTR_PPCAS_ARCH_V2) 440 438 441 439 #define CPU_FTRS_A2 (CPU_FTR_USE_TB | CPU_FTR_SMT | CPU_FTR_DBELL | \ 442 - CPU_FTR_NOEXECUTE | CPU_FTR_NODSISRALIGN | CPU_FTR_ICSWX) 440 + CPU_FTR_NOEXECUTE | CPU_FTR_NODSISRALIGN | \ 441 + CPU_FTR_ICSWX | CPU_FTR_DABRX ) 443 442 444 443 #ifdef __powerpc64__ 445 444 #ifdef CONFIG_PPC_BOOK3E
+1 -1
arch/powerpc/include/asm/exception-64s.h
··· 513 513 */ 514 514 #define STD_EXCEPTION_COMMON_ASYNC(trap, label, hdlr) \ 515 515 EXCEPTION_COMMON(trap, label, hdlr, ret_from_except_lite, \ 516 - FINISH_NAP;RUNLATCH_ON;DISABLE_INTS) 516 + FINISH_NAP;DISABLE_INTS;RUNLATCH_ON) 517 517 518 518 /* 519 519 * When the idle code in power4_idle puts the CPU into NAP mode,
+10 -6
arch/powerpc/include/asm/kvm_asm.h
··· 54 54 #define BOOKE_INTERRUPT_DEBUG 15 55 55 56 56 /* E500 */ 57 - #define BOOKE_INTERRUPT_SPE_UNAVAIL 32 58 - #define BOOKE_INTERRUPT_SPE_FP_DATA 33 57 + #define BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL 32 58 + #define BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST 33 59 + /* 60 + * TODO: Unify 32-bit and 64-bit kernel exception handlers to use same defines 61 + */ 62 + #define BOOKE_INTERRUPT_SPE_UNAVAIL BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL 63 + #define BOOKE_INTERRUPT_SPE_FP_DATA BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST 64 + #define BOOKE_INTERRUPT_ALTIVEC_UNAVAIL BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL 65 + #define BOOKE_INTERRUPT_ALTIVEC_ASSIST \ 66 + BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST 59 67 #define BOOKE_INTERRUPT_SPE_FP_ROUND 34 60 68 #define BOOKE_INTERRUPT_PERFORMANCE_MONITOR 35 61 69 #define BOOKE_INTERRUPT_DOORBELL 36 ··· 74 66 #define BOOKE_INTERRUPT_GUEST_DBELL_CRIT 39 75 67 #define BOOKE_INTERRUPT_HV_SYSCALL 40 76 68 #define BOOKE_INTERRUPT_HV_PRIV 41 77 - 78 - /* altivec */ 79 - #define BOOKE_INTERRUPT_ALTIVEC_UNAVAIL 42 80 - #define BOOKE_INTERRUPT_ALTIVEC_ASSIST 43 81 69 82 70 /* book3s */ 83 71
+4 -4
arch/powerpc/kernel/cputable.c
··· 452 452 .mmu_features = MMU_FTRS_POWER8, 453 453 .icache_bsize = 128, 454 454 .dcache_bsize = 128, 455 - .oprofile_type = PPC_OPROFILE_POWER4, 456 - .oprofile_cpu_type = 0, 455 + .oprofile_type = PPC_OPROFILE_INVALID, 456 + .oprofile_cpu_type = "ppc64/ibm-compat-v1", 457 457 .cpu_setup = __setup_cpu_power8, 458 458 .cpu_restore = __restore_cpu_power8, 459 459 .platform = "power8", ··· 506 506 .dcache_bsize = 128, 507 507 .num_pmcs = 6, 508 508 .pmc_type = PPC_PMC_IBM, 509 - .oprofile_cpu_type = 0, 510 - .oprofile_type = PPC_OPROFILE_POWER4, 509 + .oprofile_cpu_type = "ppc64/power8", 510 + .oprofile_type = PPC_OPROFILE_INVALID, 511 511 .cpu_setup = __setup_cpu_power8, 512 512 .cpu_restore = __restore_cpu_power8, 513 513 .platform = "power8",
-28
arch/powerpc/kernel/entry_64.S
··· 465 465 std r0, THREAD_EBBHR(r3) 466 466 mfspr r0, SPRN_EBBRR 467 467 std r0, THREAD_EBBRR(r3) 468 - 469 - /* PMU registers made user read/(write) by EBB */ 470 - mfspr r0, SPRN_SIAR 471 - std r0, THREAD_SIAR(r3) 472 - mfspr r0, SPRN_SDAR 473 - std r0, THREAD_SDAR(r3) 474 - mfspr r0, SPRN_SIER 475 - std r0, THREAD_SIER(r3) 476 - mfspr r0, SPRN_MMCR0 477 - std r0, THREAD_MMCR0(r3) 478 - mfspr r0, SPRN_MMCR2 479 - std r0, THREAD_MMCR2(r3) 480 - mfspr r0, SPRN_MMCRA 481 - std r0, THREAD_MMCRA(r3) 482 468 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 483 469 #endif 484 470 ··· 566 580 mtspr SPRN_EBBHR, r0 567 581 ld r0, THREAD_EBBRR(r4) 568 582 mtspr SPRN_EBBRR, r0 569 - 570 - /* PMU registers made user read/(write) by EBB */ 571 - ld r0, THREAD_SIAR(r4) 572 - mtspr SPRN_SIAR, r0 573 - ld r0, THREAD_SDAR(r4) 574 - mtspr SPRN_SDAR, r0 575 - ld r0, THREAD_SIER(r4) 576 - mtspr SPRN_SIER, r0 577 - ld r0, THREAD_MMCR0(r4) 578 - mtspr SPRN_MMCR0, r0 579 - ld r0, THREAD_MMCR2(r4) 580 - mtspr SPRN_MMCR2, r0 581 - ld r0, THREAD_MMCRA(r4) 582 - mtspr SPRN_MMCRA, r0 583 583 584 584 ld r0,THREAD_TAR(r4) 585 585 mtspr SPRN_TAR,r0
+27 -65
arch/powerpc/kernel/exceptions-64s.S
··· 454 454 xori r10,r10,(MSR_FE0|MSR_FE1) 455 455 mtmsrd r10 456 456 sync 457 - fmr 0,0 458 - fmr 1,1 459 - fmr 2,2 460 - fmr 3,3 461 - fmr 4,4 462 - fmr 5,5 463 - fmr 6,6 464 - fmr 7,7 465 - fmr 8,8 466 - fmr 9,9 467 - fmr 10,10 468 - fmr 11,11 469 - fmr 12,12 470 - fmr 13,13 471 - fmr 14,14 472 - fmr 15,15 473 - fmr 16,16 474 - fmr 17,17 475 - fmr 18,18 476 - fmr 19,19 477 - fmr 20,20 478 - fmr 21,21 479 - fmr 22,22 480 - fmr 23,23 481 - fmr 24,24 482 - fmr 25,25 483 - fmr 26,26 484 - fmr 27,27 485 - fmr 28,28 486 - fmr 29,29 487 - fmr 30,30 488 - fmr 31,31 457 + 458 + #define FMR2(n) fmr (n), (n) ; fmr n+1, n+1 459 + #define FMR4(n) FMR2(n) ; FMR2(n+2) 460 + #define FMR8(n) FMR4(n) ; FMR4(n+4) 461 + #define FMR16(n) FMR8(n) ; FMR8(n+8) 462 + #define FMR32(n) FMR16(n) ; FMR16(n+16) 463 + FMR32(0) 464 + 489 465 FTR_SECTION_ELSE 490 466 /* 491 467 * To denormalise we need to move a copy of the register to itself. ··· 471 495 oris r10,r10,MSR_VSX@h 472 496 mtmsrd r10 473 497 sync 474 - XVCPSGNDP(0,0,0) 475 - XVCPSGNDP(1,1,1) 476 - XVCPSGNDP(2,2,2) 477 - XVCPSGNDP(3,3,3) 478 - XVCPSGNDP(4,4,4) 479 - XVCPSGNDP(5,5,5) 480 - XVCPSGNDP(6,6,6) 481 - XVCPSGNDP(7,7,7) 482 - XVCPSGNDP(8,8,8) 483 - XVCPSGNDP(9,9,9) 484 - XVCPSGNDP(10,10,10) 485 - XVCPSGNDP(11,11,11) 486 - XVCPSGNDP(12,12,12) 487 - XVCPSGNDP(13,13,13) 488 - XVCPSGNDP(14,14,14) 489 - XVCPSGNDP(15,15,15) 490 - XVCPSGNDP(16,16,16) 491 - XVCPSGNDP(17,17,17) 492 - XVCPSGNDP(18,18,18) 493 - XVCPSGNDP(19,19,19) 494 - XVCPSGNDP(20,20,20) 495 - XVCPSGNDP(21,21,21) 496 - XVCPSGNDP(22,22,22) 497 - XVCPSGNDP(23,23,23) 498 - XVCPSGNDP(24,24,24) 499 - XVCPSGNDP(25,25,25) 500 - XVCPSGNDP(26,26,26) 501 - XVCPSGNDP(27,27,27) 502 - XVCPSGNDP(28,28,28) 503 - XVCPSGNDP(29,29,29) 504 - XVCPSGNDP(30,30,30) 505 - XVCPSGNDP(31,31,31) 498 + 499 + #define XVCPSGNDP2(n) XVCPSGNDP(n,n,n) ; XVCPSGNDP(n+1,n+1,n+1) 500 + #define XVCPSGNDP4(n) XVCPSGNDP2(n) ; XVCPSGNDP2(n+2) 501 + #define XVCPSGNDP8(n) XVCPSGNDP4(n) ; XVCPSGNDP4(n+4) 502 + 
#define XVCPSGNDP16(n) XVCPSGNDP8(n) ; XVCPSGNDP8(n+8) 503 + #define XVCPSGNDP32(n) XVCPSGNDP16(n) ; XVCPSGNDP16(n+16) 504 + XVCPSGNDP32(0) 505 + 506 506 ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_206) 507 + 508 + BEGIN_FTR_SECTION 509 + b denorm_done 510 + END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) 511 + /* 512 + * To denormalise we need to move a copy of the register to itself. 513 + * For POWER8 we need to do that for all 64 VSX registers 514 + */ 515 + XVCPSGNDP32(32) 516 + denorm_done: 507 517 mtspr SPRN_HSRR0,r11 508 518 mtcrf 0x80,r9 509 519 ld r9,PACA_EXGEN+EX_R9(r13) ··· 683 721 STD_EXCEPTION_COMMON(0xb00, trap_0b, .unknown_exception) 684 722 STD_EXCEPTION_COMMON(0xd00, single_step, .single_step_exception) 685 723 STD_EXCEPTION_COMMON(0xe00, trap_0e, .unknown_exception) 686 - STD_EXCEPTION_COMMON(0xe40, emulation_assist, .program_check_exception) 724 + STD_EXCEPTION_COMMON(0xe40, emulation_assist, .emulation_assist_interrupt) 687 725 STD_EXCEPTION_COMMON(0xe60, hmi_exception, .unknown_exception) 688 726 #ifdef CONFIG_PPC_DOORBELL 689 727 STD_EXCEPTION_COMMON_ASYNC(0xe80, h_doorbell, .doorbell_exception)
+1 -1
arch/powerpc/kernel/irq.c
··· 162 162 * in case we also had a rollover while hard disabled 163 163 */ 164 164 local_paca->irq_happened &= ~PACA_IRQ_DEC; 165 - if (decrementer_check_overflow()) 165 + if ((happened & PACA_IRQ_DEC) || decrementer_check_overflow()) 166 166 return 0x900; 167 167 168 168 /* Finally check if an external interrupt happened */
+3 -1
arch/powerpc/kernel/pci-common.c
··· 827 827 } 828 828 for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 829 829 struct resource *res = dev->resource + i; 830 + struct pci_bus_region reg; 830 831 if (!res->flags) 831 832 continue; 832 833 ··· 836 835 * at 0 as unset as well, except if PCI_PROBE_ONLY is also set 837 836 * since in that case, we don't want to re-assign anything 838 837 */ 838 + pcibios_resource_to_bus(dev, &reg, res); 839 839 if (pci_has_flag(PCI_REASSIGN_ALL_RSRC) || 840 - (res->start == 0 && !pci_has_flag(PCI_PROBE_ONLY))) { 840 + (reg.start == 0 && !pci_has_flag(PCI_PROBE_ONLY))) { 841 841 /* Only print message if not re-assigning */ 842 842 if (!pci_has_flag(PCI_REASSIGN_ALL_RSRC)) 843 843 pr_debug("PCI:%s Resource %d %016llx-%016llx [%x] "
+4 -3
arch/powerpc/kernel/process.c
··· 399 399 static inline int __set_dabr(unsigned long dabr, unsigned long dabrx) 400 400 { 401 401 mtspr(SPRN_DABR, dabr); 402 - mtspr(SPRN_DABRX, dabrx); 402 + if (cpu_has_feature(CPU_FTR_DABRX)) 403 + mtspr(SPRN_DABRX, dabrx); 403 404 return 0; 404 405 } 405 406 #else ··· 1369 1368 1370 1369 #ifdef CONFIG_PPC64 1371 1370 /* Called with hard IRQs off */ 1372 - void __ppc64_runlatch_on(void) 1371 + void notrace __ppc64_runlatch_on(void) 1373 1372 { 1374 1373 struct thread_info *ti = current_thread_info(); 1375 1374 unsigned long ctrl; ··· 1382 1381 } 1383 1382 1384 1383 /* Called with hard IRQs off */ 1385 - void __ppc64_runlatch_off(void) 1384 + void notrace __ppc64_runlatch_off(void) 1386 1385 { 1387 1386 struct thread_info *ti = current_thread_info(); 1388 1387 unsigned long ctrl;
+10
arch/powerpc/kernel/traps.c
··· 1165 1165 exception_exit(prev_state); 1166 1166 } 1167 1167 1168 + /* 1169 + * This occurs when running in hypervisor mode on POWER6 or later 1170 + * and an illegal instruction is encountered. 1171 + */ 1172 + void __kprobes emulation_assist_interrupt(struct pt_regs *regs) 1173 + { 1174 + regs->msr |= REASON_ILLEGAL; 1175 + program_check_exception(regs); 1176 + } 1177 + 1168 1178 void alignment_exception(struct pt_regs *regs) 1169 1179 { 1170 1180 enum ctx_state prev_state = exception_enter();
+5
arch/powerpc/kvm/44x_tlb.c
··· 441 441 struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu); 442 442 struct kvmppc_44x_tlbe *tlbe; 443 443 unsigned int gtlb_index; 444 + int idx; 444 445 445 446 gtlb_index = kvmppc_get_gpr(vcpu, ra); 446 447 if (gtlb_index >= KVM44x_GUEST_TLB_SIZE) { ··· 474 473 return EMULATE_FAIL; 475 474 } 476 475 476 + idx = srcu_read_lock(&vcpu->kvm->srcu); 477 + 477 478 if (tlbe_is_host_safe(vcpu, tlbe)) { 478 479 gva_t eaddr; 479 480 gpa_t gpaddr; ··· 491 488 492 489 kvmppc_mmu_map(vcpu, eaddr, gpaddr, gtlb_index); 493 490 } 491 + 492 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 494 493 495 494 trace_kvm_gtlb_write(gtlb_index, tlbe->tid, tlbe->word0, tlbe->word1, 496 495 tlbe->word2);
+18
arch/powerpc/kvm/booke.c
··· 832 832 { 833 833 int r = RESUME_HOST; 834 834 int s; 835 + int idx; 836 + 837 + #ifdef CONFIG_PPC64 838 + WARN_ON(local_paca->irq_happened != 0); 839 + #endif 840 + 841 + /* 842 + * We enter with interrupts disabled in hardware, but 843 + * we need to call hard_irq_disable anyway to ensure that 844 + * the software state is kept in sync. 845 + */ 846 + hard_irq_disable(); 835 847 836 848 /* update before a new last_exit_type is rewritten */ 837 849 kvmppc_update_timing_stats(vcpu); ··· 1065 1053 break; 1066 1054 } 1067 1055 1056 + idx = srcu_read_lock(&vcpu->kvm->srcu); 1057 + 1068 1058 gpaddr = kvmppc_mmu_xlate(vcpu, gtlb_index, eaddr); 1069 1059 gfn = gpaddr >> PAGE_SHIFT; 1070 1060 ··· 1089 1075 kvmppc_account_exit(vcpu, MMIO_EXITS); 1090 1076 } 1091 1077 1078 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 1092 1079 break; 1093 1080 } 1094 1081 ··· 1113 1098 1114 1099 kvmppc_account_exit(vcpu, ITLB_VIRT_MISS_EXITS); 1115 1100 1101 + idx = srcu_read_lock(&vcpu->kvm->srcu); 1102 + 1116 1103 gpaddr = kvmppc_mmu_xlate(vcpu, gtlb_index, eaddr); 1117 1104 gfn = gpaddr >> PAGE_SHIFT; 1118 1105 ··· 1131 1114 kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_MACHINE_CHECK); 1132 1115 } 1133 1116 1117 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 1134 1118 break; 1135 1119 } 1136 1120
+5
arch/powerpc/kvm/e500_mmu.c
··· 396 396 struct kvm_book3e_206_tlb_entry *gtlbe; 397 397 int tlbsel, esel; 398 398 int recal = 0; 399 + int idx; 399 400 400 401 tlbsel = get_tlb_tlbsel(vcpu); 401 402 esel = get_tlb_esel(vcpu, tlbsel); ··· 431 430 kvmppc_set_tlb1map_range(vcpu, gtlbe); 432 431 } 433 432 433 + idx = srcu_read_lock(&vcpu->kvm->srcu); 434 + 434 435 /* Invalidate shadow mappings for the about-to-be-clobbered TLBE. */ 435 436 if (tlbe_is_host_safe(vcpu, gtlbe)) { 436 437 u64 eaddr = get_tlb_eaddr(gtlbe); ··· 446 443 /* Premap the faulting page */ 447 444 kvmppc_mmu_map(vcpu, eaddr, raddr, index_of(tlbsel, esel)); 448 445 } 446 + 447 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 449 448 450 449 kvmppc_set_exit_type(vcpu, EMULATED_TLBWE_EXITS); 451 450 return EMULATE_DONE;
-2
arch/powerpc/kvm/e500mc.c
··· 177 177 r = 0; 178 178 else if (strcmp(cur_cpu_spec->cpu_name, "e5500") == 0) 179 179 r = 0; 180 - else if (strcmp(cur_cpu_spec->cpu_name, "e6500") == 0) 181 - r = 0; 182 180 else 183 181 r = -ENOTSUPP; 184 182
+1 -1
arch/powerpc/perf/core-book3s.c
··· 1758 1758 } 1759 1759 } 1760 1760 } 1761 - if ((!found) && printk_ratelimit()) 1761 + if (!found && !nmi && printk_ratelimit()) 1762 1762 printk(KERN_WARNING "Can't find PMC that caused IRQ\n"); 1763 1763 1764 1764 /*
+5 -7
arch/powerpc/platforms/pseries/eeh_pseries.c
··· 83 83 ibm_configure_pe = rtas_token("ibm,configure-pe"); 84 84 ibm_configure_bridge = rtas_token("ibm,configure-bridge"); 85 85 86 - /* necessary sanity check */ 86 + /* 87 + * Necessary sanity check. We needn't check "get-config-addr-info" 88 + * and its variant since the old firmware probably support address 89 + * of domain/bus/slot/function for EEH RTAS operations. 90 + */ 87 91 if (ibm_set_eeh_option == RTAS_UNKNOWN_SERVICE) { 88 92 pr_warning("%s: RTAS service <ibm,set-eeh-option> invalid\n", 89 93 __func__); ··· 104 100 return -EINVAL; 105 101 } else if (ibm_slot_error_detail == RTAS_UNKNOWN_SERVICE) { 106 102 pr_warning("%s: RTAS service <ibm,slot-error-detail> invalid\n", 107 - __func__); 108 - return -EINVAL; 109 - } else if (ibm_get_config_addr_info2 == RTAS_UNKNOWN_SERVICE && 110 - ibm_get_config_addr_info == RTAS_UNKNOWN_SERVICE) { 111 - pr_warning("%s: RTAS service <ibm,get-config-addr-info2> and " 112 - "<ibm,get-config-addr-info> invalid\n", 113 103 __func__); 114 104 return -EINVAL; 115 105 } else if (ibm_configure_pe == RTAS_UNKNOWN_SERVICE &&
+22 -10
arch/s390/include/asm/pgtable.h
··· 623 623 " csg %0,%1,%2\n" 624 624 " jl 0b\n" 625 625 : "=&d" (old), "=&d" (new), "=Q" (ptep[PTRS_PER_PTE]) 626 - : "Q" (ptep[PTRS_PER_PTE]) : "cc"); 626 + : "Q" (ptep[PTRS_PER_PTE]) : "cc", "memory"); 627 627 #endif 628 628 return __pgste(new); 629 629 } ··· 635 635 " nihh %1,0xff7f\n" /* clear RCP_PCL_BIT */ 636 636 " stg %1,%0\n" 637 637 : "=Q" (ptep[PTRS_PER_PTE]) 638 - : "d" (pgste_val(pgste)), "Q" (ptep[PTRS_PER_PTE]) : "cc"); 638 + : "d" (pgste_val(pgste)), "Q" (ptep[PTRS_PER_PTE]) 639 + : "cc", "memory"); 639 640 preempt_enable(); 641 + #endif 642 + } 643 + 644 + static inline void pgste_set(pte_t *ptep, pgste_t pgste) 645 + { 646 + #ifdef CONFIG_PGSTE 647 + *(pgste_t *)(ptep + PTRS_PER_PTE) = pgste; 640 648 #endif 641 649 } 642 650 ··· 712 704 { 713 705 #ifdef CONFIG_PGSTE 714 706 unsigned long address; 715 - unsigned long okey, nkey; 707 + unsigned long nkey; 716 708 717 709 if (pte_val(entry) & _PAGE_INVALID) 718 710 return; 711 + VM_BUG_ON(!(pte_val(*ptep) & _PAGE_INVALID)); 719 712 address = pte_val(entry) & PAGE_MASK; 720 - okey = nkey = page_get_storage_key(address); 721 - nkey &= ~(_PAGE_ACC_BITS | _PAGE_FP_BIT); 722 - /* Set page access key and fetch protection bit from pgste */ 723 - nkey |= (pgste_val(pgste) & (RCP_ACC_BITS | RCP_FP_BIT)) >> 56; 724 - if (okey != nkey) 725 - page_set_storage_key(address, nkey, 0); 713 + /* 714 + * Set page access key and fetch protection bit from pgste. 715 + * The guest C/R information is still in the PGSTE, set real 716 + * key C/R to 0. 717 + */ 718 + nkey = (pgste_val(pgste) & (RCP_ACC_BITS | RCP_FP_BIT)) >> 56; 719 + page_set_storage_key(address, nkey, 0); 726 720 #endif 727 721 } 728 722 ··· 1109 1099 if (!mm_exclusive(mm)) 1110 1100 __ptep_ipte(address, ptep); 1111 1101 1112 - if (mm_has_pgste(mm)) 1102 + if (mm_has_pgste(mm)) { 1113 1103 pgste = pgste_update_all(&pte, pgste); 1104 + pgste_set(ptep, pgste); 1105 + } 1114 1106 return pte; 1115 1107 } 1116 1108
+8 -4
arch/s390/kernel/dumpstack.c
··· 74 74 75 75 static void show_trace(struct task_struct *task, unsigned long *stack) 76 76 { 77 + const unsigned long frame_size = 78 + STACK_FRAME_OVERHEAD + sizeof(struct pt_regs); 77 79 register unsigned long __r15 asm ("15"); 78 80 unsigned long sp; 79 81 ··· 84 82 sp = task ? task->thread.ksp : __r15; 85 83 printk("Call Trace:\n"); 86 84 #ifdef CONFIG_CHECK_STACK 87 - sp = __show_trace(sp, S390_lowcore.panic_stack - 4096, 88 - S390_lowcore.panic_stack); 85 + sp = __show_trace(sp, 86 + S390_lowcore.panic_stack + frame_size - 4096, 87 + S390_lowcore.panic_stack + frame_size); 89 88 #endif 90 - sp = __show_trace(sp, S390_lowcore.async_stack - ASYNC_SIZE, 91 - S390_lowcore.async_stack); 89 + sp = __show_trace(sp, 90 + S390_lowcore.async_stack + frame_size - ASYNC_SIZE, 91 + S390_lowcore.async_stack + frame_size); 92 92 if (task) 93 93 __show_trace(sp, (unsigned long) task_stack_page(task), 94 94 (unsigned long) task_stack_page(task) + THREAD_SIZE);
+64
arch/s390/kernel/irq.c
··· 311 311 spin_unlock(&ma_subclass_lock); 312 312 } 313 313 EXPORT_SYMBOL(measurement_alert_subclass_unregister); 314 + 315 + void synchronize_irq(unsigned int irq) 316 + { 317 + /* 318 + * Not needed, the handler is protected by a lock and IRQs that occur 319 + * after the handler is deleted are just NOPs. 320 + */ 321 + } 322 + EXPORT_SYMBOL_GPL(synchronize_irq); 323 + 324 + #ifndef CONFIG_PCI 325 + 326 + /* Only PCI devices have dynamically-defined IRQ handlers */ 327 + 328 + int request_irq(unsigned int irq, irq_handler_t handler, 329 + unsigned long irqflags, const char *devname, void *dev_id) 330 + { 331 + return -EINVAL; 332 + } 333 + EXPORT_SYMBOL_GPL(request_irq); 334 + 335 + void free_irq(unsigned int irq, void *dev_id) 336 + { 337 + WARN_ON(1); 338 + } 339 + EXPORT_SYMBOL_GPL(free_irq); 340 + 341 + void enable_irq(unsigned int irq) 342 + { 343 + WARN_ON(1); 344 + } 345 + EXPORT_SYMBOL_GPL(enable_irq); 346 + 347 + void disable_irq(unsigned int irq) 348 + { 349 + WARN_ON(1); 350 + } 351 + EXPORT_SYMBOL_GPL(disable_irq); 352 + 353 + #endif /* !CONFIG_PCI */ 354 + 355 + void disable_irq_nosync(unsigned int irq) 356 + { 357 + disable_irq(irq); 358 + } 359 + EXPORT_SYMBOL_GPL(disable_irq_nosync); 360 + 361 + unsigned long probe_irq_on(void) 362 + { 363 + return 0; 364 + } 365 + EXPORT_SYMBOL_GPL(probe_irq_on); 366 + 367 + int probe_irq_off(unsigned long val) 368 + { 369 + return 0; 370 + } 371 + EXPORT_SYMBOL_GPL(probe_irq_off); 372 + 373 + unsigned int probe_irq_mask(unsigned long val) 374 + { 375 + return val; 376 + } 377 + EXPORT_SYMBOL_GPL(probe_irq_mask);
+1 -1
arch/s390/kernel/sclp.S
··· 225 225 ahi %r2,1 226 226 ltr %r0,%r0 # end of string? 227 227 jz .LfinalizemtoS4 228 - chi %r0,0x15 # end of line (NL)? 228 + chi %r0,0x0a # end of line (NL)? 229 229 jz .LfinalizemtoS4 230 230 stc %r0,0(%r6,%r7) # copy to mto 231 231 la %r11,0(%r6,%r7)
-33
arch/s390/pci/pci.c
··· 302 302 return rc; 303 303 } 304 304 305 - void synchronize_irq(unsigned int irq) 306 - { 307 - /* 308 - * Not needed, the handler is protected by a lock and IRQs that occur 309 - * after the handler is deleted are just NOPs. 310 - */ 311 - } 312 - EXPORT_SYMBOL_GPL(synchronize_irq); 313 - 314 305 void enable_irq(unsigned int irq) 315 306 { 316 307 struct msi_desc *msi = irq_get_msi_desc(irq); ··· 317 326 zpci_msi_set_mask_bits(msi, 1, 1); 318 327 } 319 328 EXPORT_SYMBOL_GPL(disable_irq); 320 - 321 - void disable_irq_nosync(unsigned int irq) 322 - { 323 - disable_irq(irq); 324 - } 325 - EXPORT_SYMBOL_GPL(disable_irq_nosync); 326 - 327 - unsigned long probe_irq_on(void) 328 - { 329 - return 0; 330 - } 331 - EXPORT_SYMBOL_GPL(probe_irq_on); 332 - 333 - int probe_irq_off(unsigned long val) 334 - { 335 - return 0; 336 - } 337 - EXPORT_SYMBOL_GPL(probe_irq_off); 338 - 339 - unsigned int probe_irq_mask(unsigned long val) 340 - { 341 - return val; 342 - } 343 - EXPORT_SYMBOL_GPL(probe_irq_mask); 344 329 345 330 void pcibios_fixup_bus(struct pci_bus *bus) 346 331 {
+3 -2
arch/sparc/kernel/prom_common.c
··· 54 54 int of_set_property(struct device_node *dp, const char *name, void *val, int len) 55 55 { 56 56 struct property **prevp; 57 + unsigned long flags; 57 58 void *new_val; 58 59 int err; 59 60 ··· 65 64 err = -ENODEV; 66 65 67 66 mutex_lock(&of_set_property_mutex); 68 - raw_spin_lock(&devtree_lock); 67 + raw_spin_lock_irqsave(&devtree_lock, flags); 69 68 prevp = &dp->properties; 70 69 while (*prevp) { 71 70 struct property *prop = *prevp; ··· 92 91 } 93 92 prevp = &(*prevp)->next; 94 93 } 95 - raw_spin_unlock(&devtree_lock); 94 + raw_spin_unlock_irqrestore(&devtree_lock, flags); 96 95 mutex_unlock(&of_set_property_mutex); 97 96 98 97 /* XXX Upate procfs if necessary... */
-47
arch/x86/boot/compressed/eboot.c
··· 251 251 *size = len; 252 252 } 253 253 254 - static efi_status_t setup_efi_vars(struct boot_params *params) 255 - { 256 - struct setup_data *data; 257 - struct efi_var_bootdata *efidata; 258 - u64 store_size, remaining_size, var_size; 259 - efi_status_t status; 260 - 261 - if (sys_table->runtime->hdr.revision < EFI_2_00_SYSTEM_TABLE_REVISION) 262 - return EFI_UNSUPPORTED; 263 - 264 - data = (struct setup_data *)(unsigned long)params->hdr.setup_data; 265 - 266 - while (data && data->next) 267 - data = (struct setup_data *)(unsigned long)data->next; 268 - 269 - status = efi_call_phys4((void *)sys_table->runtime->query_variable_info, 270 - EFI_VARIABLE_NON_VOLATILE | 271 - EFI_VARIABLE_BOOTSERVICE_ACCESS | 272 - EFI_VARIABLE_RUNTIME_ACCESS, &store_size, 273 - &remaining_size, &var_size); 274 - 275 - if (status != EFI_SUCCESS) 276 - return status; 277 - 278 - status = efi_call_phys3(sys_table->boottime->allocate_pool, 279 - EFI_LOADER_DATA, sizeof(*efidata), &efidata); 280 - 281 - if (status != EFI_SUCCESS) 282 - return status; 283 - 284 - efidata->data.type = SETUP_EFI_VARS; 285 - efidata->data.len = sizeof(struct efi_var_bootdata) - 286 - sizeof(struct setup_data); 287 - efidata->data.next = 0; 288 - efidata->store_size = store_size; 289 - efidata->remaining_size = remaining_size; 290 - efidata->max_var_size = var_size; 291 - 292 - if (data) 293 - data->next = (unsigned long)efidata; 294 - else 295 - params->hdr.setup_data = (unsigned long)efidata; 296 - 297 - } 298 - 299 254 static efi_status_t setup_efi_pci(struct boot_params *params) 300 255 { 301 256 efi_pci_io_protocol *pci; ··· 1156 1201 goto fail; 1157 1202 1158 1203 setup_graphics(boot_params); 1159 - 1160 - setup_efi_vars(boot_params); 1161 1204 1162 1205 setup_efi_pci(boot_params); 1163 1206
-7
arch/x86/include/asm/efi.h
··· 102 102 extern void efi_unmap_memmap(void); 103 103 extern void efi_memory_uc(u64 addr, unsigned long size); 104 104 105 - struct efi_var_bootdata { 106 - struct setup_data data; 107 - u64 store_size; 108 - u64 remaining_size; 109 - u64 max_var_size; 110 - }; 111 - 112 105 #ifdef CONFIG_EFI 113 106 114 107 static inline bool efi_is_native(void)
-1
arch/x86/include/uapi/asm/bootparam.h
··· 6 6 #define SETUP_E820_EXT 1 7 7 #define SETUP_DTB 2 8 8 #define SETUP_PCI 3 9 - #define SETUP_EFI_VARS 4 10 9 11 10 /* ram_size flags */ 12 11 #define RAMDISK_IMAGE_START_MASK 0x07FF
+1 -1
arch/x86/kernel/relocate_kernel_64.S
··· 160 160 xorq %rbp, %rbp 161 161 xorq %r8, %r8 162 162 xorq %r9, %r9 163 - xorq %r10, %r9 163 + xorq %r10, %r10 164 164 xorq %r11, %r11 165 165 xorq %r12, %r12 166 166 xorq %r13, %r13
+3 -3
arch/x86/mm/init.c
··· 277 277 end_pfn = limit_pfn; 278 278 nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0); 279 279 280 + if (!after_bootmem) 281 + adjust_range_page_size_mask(mr, nr_range); 282 + 280 283 /* try to merge same page size and continuous */ 281 284 for (i = 0; nr_range > 1 && i < nr_range - 1; i++) { 282 285 unsigned long old_start; ··· 293 290 mr[i--].start = old_start; 294 291 nr_range--; 295 292 } 296 - 297 - if (!after_bootmem) 298 - adjust_range_page_size_mask(mr, nr_range); 299 293 300 294 for (i = 0; i < nr_range; i++) 301 295 printk(KERN_DEBUG " [mem %#010lx-%#010lx] page %s\n",
+65 -123
arch/x86/platform/efi/efi.c
··· 42 42 #include <linux/io.h> 43 43 #include <linux/reboot.h> 44 44 #include <linux/bcd.h> 45 - #include <linux/ucs2_string.h> 46 45 47 46 #include <asm/setup.h> 48 47 #include <asm/efi.h> ··· 53 54 54 55 #define EFI_DEBUG 1 55 56 56 - /* 57 - * There's some additional metadata associated with each 58 - * variable. Intel's reference implementation is 60 bytes - bump that 59 - * to account for potential alignment constraints 60 - */ 61 - #define VAR_METADATA_SIZE 64 57 + #define EFI_MIN_RESERVE 5120 58 + 59 + #define EFI_DUMMY_GUID \ 60 + EFI_GUID(0x4424ac57, 0xbe4b, 0x47dd, 0x9e, 0x97, 0xed, 0x50, 0xf0, 0x9f, 0x92, 0xa9) 61 + 62 + static efi_char16_t efi_dummy_name[6] = { 'D', 'U', 'M', 'M', 'Y', 0 }; 62 63 63 64 struct efi __read_mostly efi = { 64 65 .mps = EFI_INVALID_TABLE_ADDR, ··· 77 78 78 79 static struct efi efi_phys __initdata; 79 80 static efi_system_table_t efi_systab __initdata; 80 - 81 - static u64 efi_var_store_size; 82 - static u64 efi_var_remaining_size; 83 - static u64 efi_var_max_var_size; 84 - static u64 boot_used_size; 85 - static u64 boot_var_size; 86 - static u64 active_size; 87 81 88 82 unsigned long x86_efi_facility; 89 83 ··· 180 188 efi_char16_t *name, 181 189 efi_guid_t *vendor) 182 190 { 183 - efi_status_t status; 184 - static bool finished = false; 185 - static u64 var_size; 186 - 187 - status = efi_call_virt3(get_next_variable, 188 - name_size, name, vendor); 189 - 190 - if (status == EFI_NOT_FOUND) { 191 - finished = true; 192 - if (var_size < boot_used_size) { 193 - boot_var_size = boot_used_size - var_size; 194 - active_size += boot_var_size; 195 - } else { 196 - printk(KERN_WARNING FW_BUG "efi: Inconsistent initial sizes\n"); 197 - } 198 - } 199 - 200 - if (boot_used_size && !finished) { 201 - unsigned long size = 0; 202 - u32 attr; 203 - efi_status_t s; 204 - void *tmp; 205 - 206 - s = virt_efi_get_variable(name, vendor, &attr, &size, NULL); 207 - 208 - if (s != EFI_BUFFER_TOO_SMALL || !size) 209 - return status; 210 - 211 - tmp = kmalloc(size, GFP_ATOMIC); 212 - 213 - if (!tmp) 214 - return status; 215 - 216 - s = virt_efi_get_variable(name, vendor, &attr, &size, tmp); 217 - 218 - if (s == EFI_SUCCESS && (attr & EFI_VARIABLE_NON_VOLATILE)) { 219 - var_size += size; 220 - var_size += ucs2_strsize(name, 1024); 221 - active_size += size; 222 - active_size += VAR_METADATA_SIZE; 223 - active_size += ucs2_strsize(name, 1024); 224 - } 225 - 226 - kfree(tmp); 227 - } 228 - 229 - return status; 191 + return efi_call_virt3(get_next_variable, 192 + name_size, name, vendor); 230 193 } 231 194 232 195 static efi_status_t virt_efi_set_variable(efi_char16_t *name, ··· 190 243 unsigned long data_size, 191 244 void *data) 192 245 { 193 - efi_status_t status; 194 - u32 orig_attr = 0; 195 - unsigned long orig_size = 0; 196 - 197 - status = virt_efi_get_variable(name, vendor, &orig_attr, &orig_size, 198 - NULL); 199 - 200 - if (status != EFI_BUFFER_TOO_SMALL) 201 - orig_size = 0; 202 - 203 - status = efi_call_virt5(set_variable, 204 - name, vendor, attr, 205 - data_size, data); 206 - 207 - if (status == EFI_SUCCESS) { 208 - if (orig_size) { 209 - active_size -= orig_size; 210 - active_size -= ucs2_strsize(name, 1024); 211 - active_size -= VAR_METADATA_SIZE; 212 - } 213 - if (data_size) { 214 - active_size += data_size; 215 - active_size += ucs2_strsize(name, 1024); 216 - active_size += VAR_METADATA_SIZE; 217 - } 218 - } 219 - 220 - return status; 246 + return efi_call_virt5(set_variable, 247 + name, vendor, attr, 248 + data_size, data); 221 249 } 222 250 223 251 static efi_status_t virt_efi_query_variable_info(u32 attr, ··· 708 786 char vendor[100] = "unknown"; 709 787 int i = 0; 710 788 void *tmp; 711 - struct setup_data *data; 712 - struct efi_var_bootdata *efi_var_data; 713 - u64 pa_data; 714 789 715 790 #ifdef CONFIG_X86_32 716 791 if (boot_params.efi_info.efi_systab_hi || ··· 724 805 725 806 if (efi_systab_init(efi_phys.systab)) 726 807 return; 727 - 728 - pa_data = boot_params.hdr.setup_data;
729 - while (pa_data) { 730 - data = early_ioremap(pa_data, sizeof(*efi_var_data)); 731 - if (data->type == SETUP_EFI_VARS) { 732 - efi_var_data = (struct efi_var_bootdata *)data; 733 - 734 - efi_var_store_size = efi_var_data->store_size; 735 - efi_var_remaining_size = efi_var_data->remaining_size; 736 - efi_var_max_var_size = efi_var_data->max_var_size; 737 - } 738 - pa_data = data->next; 739 - early_iounmap(data, sizeof(*efi_var_data)); 740 - } 741 - 742 - boot_used_size = efi_var_store_size - efi_var_remaining_size; 743 808 744 809 set_bit(EFI_SYSTEM_TABLES, &x86_efi_facility); 745 810 ··· 988 1085 runtime_code_page_mkexec(); 989 1086 990 1087 kfree(new_memmap); 1088 + 1089 + /* clean DUMMY object */ 1090 + efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID, 1091 + EFI_VARIABLE_NON_VOLATILE | 1092 + EFI_VARIABLE_BOOTSERVICE_ACCESS | 1093 + EFI_VARIABLE_RUNTIME_ACCESS, 1094 + 0, NULL); 991 1095 } 992 1096 993 1097 /* ··· 1046 1136 efi_status_t status; 1047 1137 u64 storage_size, remaining_size, max_size; 1048 1138 1139 + if (!(attributes & EFI_VARIABLE_NON_VOLATILE)) 1140 + return 0; 1141 + 1049 1142 status = efi.query_variable_info(attributes, &storage_size, 1050 1143 &remaining_size, &max_size); 1051 1144 if (status != EFI_SUCCESS) 1052 1145 return status; 1053 1146 1054 - if (!max_size && remaining_size > size) 1055 - printk_once(KERN_ERR FW_BUG "Broken EFI implementation" 1056 - " is returning MaxVariableSize=0\n"); 1057 1147 /* 1058 1148 * Some firmware implementations refuse to boot if there's insufficient 1059 1149 * space in the variable store. We account for that by refusing the 1060 1150 * write if permitting it would reduce the available space to under 1061 - * 50%. However, some firmware won't reclaim variable space until 1062 - * after the used (not merely the actively used) space drops below 1063 - * a threshold. We can approximate that case with the value calculated 1064 - * above. If both the firmware and our calculations indicate that the 1065 - * available space would drop below 50%, refuse the write. 1151 + * 5KB. This figure was provided by Samsung, so should be safe. 1066 1152 */ 1153 + if ((remaining_size - size < EFI_MIN_RESERVE) && 1154 + !efi_no_storage_paranoia) { 1067 1155 1068 - if (!storage_size || size > remaining_size || 1069 - (max_size && size > max_size)) 1070 - return EFI_OUT_OF_RESOURCES; 1156 + /* 1157 + * Triggering garbage collection may require that the firmware 1158 + * generate a real EFI_OUT_OF_RESOURCES error. We can force 1159 + * that by attempting to use more space than is available. 1160 + */ 1161 + unsigned long dummy_size = remaining_size + 1024; 1162 + void *dummy = kmalloc(dummy_size, GFP_ATOMIC); 1071 1163 1072 - if (!efi_no_storage_paranoia && 1073 - ((active_size + size + VAR_METADATA_SIZE > storage_size / 2) && 1074 - (remaining_size - size < storage_size / 2))) 1075 - return EFI_OUT_OF_RESOURCES; 1164 + status = efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID, 1165 + EFI_VARIABLE_NON_VOLATILE | 1166 + EFI_VARIABLE_BOOTSERVICE_ACCESS | 1167 + EFI_VARIABLE_RUNTIME_ACCESS, 1168 + dummy_size, dummy); 1169 + 1170 + if (status == EFI_SUCCESS) { 1171 + /* 1172 + * This should have failed, so if it didn't make sure 1173 + * that we delete it... 1174 + */ 1175 + efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID, 1176 + EFI_VARIABLE_NON_VOLATILE | 1177 + EFI_VARIABLE_BOOTSERVICE_ACCESS | 1178 + EFI_VARIABLE_RUNTIME_ACCESS, 1179 + 0, dummy); 1180 + } 1181 + 1182 + /* 1183 + * The runtime code may now have triggered a garbage collection 1184 + * run, so check the variable info again 1185 + */ 1186 + status = efi.query_variable_info(attributes, &storage_size, 1187 + &remaining_size, &max_size); 1188 + 1189 + if (status != EFI_SUCCESS) 1190 + return status; 1191 + 1192 + /* 1193 + * There still isn't enough room, so return an error 1194 + */ 1195 + if (remaining_size - size < EFI_MIN_RESERVE) 1196 + return EFI_OUT_OF_RESOURCES; 1197 + } 1076 1198 1077 1199 return EFI_SUCCESS; 1078 1200 }
+1 -3
arch/x86/tools/relocs.c
··· 42 42 "^(xen_irq_disable_direct_reloc$|" 43 43 "xen_save_fl_direct_reloc$|" 44 44 "VDSO|" 45 - #if ELF_BITS == 64 46 - "__vvar_page|" 47 - #endif 48 45 "__crc_)", 49 46 50 47 /* ··· 69 72 "__per_cpu_load|" 70 73 "init_per_cpu__.*|" 71 74 "__end_rodata_hpage_align|" 75 + "__vvar_page|" 72 76 #endif 73 77 "_end)$" 74 78 };
+8
arch/x86/xen/smp.c
··· 17 17 #include <linux/slab.h> 18 18 #include <linux/smp.h> 19 19 #include <linux/irq_work.h> 20 + #include <linux/tick.h> 20 21 21 22 #include <asm/paravirt.h> 22 23 #include <asm/desc.h> ··· 448 447 play_dead_common(); 449 448 HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL); 450 449 cpu_bringup(); 450 + /* 451 + * commit 4b0c0f294 (tick: Cleanup NOHZ per cpu data on cpu down) 452 + * clears certain data that the cpu_idle loop (which called us 453 + * and that we return from) expects. The only way to get that 454 + * data back is to call: 455 + */ 456 + tick_nohz_idle_enter(); 451 457 } 452 458 453 459 #else /* !CONFIG_HOTPLUG_CPU */
+1 -1
block/blk-core.c
··· 3164 3164 q->rpm_status = RPM_ACTIVE; 3165 3165 __blk_run_queue(q); 3166 3166 pm_runtime_mark_last_busy(q->dev); 3167 - pm_runtime_autosuspend(q->dev); 3167 + pm_request_autosuspend(q->dev); 3168 3168 } else { 3169 3169 q->rpm_status = RPM_SUSPENDED; 3170 3170 }
+2
crypto/Kconfig
··· 823 823 config CRYPTO_BLOWFISH_AVX2_X86_64 824 824 tristate "Blowfish cipher algorithm (x86_64/AVX2)" 825 825 depends on X86 && 64BIT 826 + depends on BROKEN 826 827 select CRYPTO_ALGAPI 827 828 select CRYPTO_CRYPTD 828 829 select CRYPTO_ABLK_HELPER_X86 ··· 1300 1299 config CRYPTO_TWOFISH_AVX2_X86_64 1301 1300 tristate "Twofish cipher algorithm (x86_64/AVX2)" 1302 1301 depends on X86 && 64BIT 1302 + depends on BROKEN 1303 1303 select CRYPTO_ALGAPI 1304 1304 select CRYPTO_CRYPTD 1305 1305 select CRYPTO_ABLK_HELPER_X86
+4 -3
drivers/acpi/apei/ghes.c
··· 919 919 break; 920 920 case ACPI_HEST_NOTIFY_EXTERNAL: 921 921 /* External interrupt vector is GSI */ 922 - if (acpi_gsi_to_irq(generic->notify.vector, &ghes->irq)) { 922 + rc = acpi_gsi_to_irq(generic->notify.vector, &ghes->irq); 923 + if (rc) { 923 924 pr_err(GHES_PFX "Failed to map GSI to IRQ for generic hardware error source: %d\n", 924 925 generic->header.source_id); 925 926 goto err_edac_unreg; 926 927 } 927 - if (request_irq(ghes->irq, ghes_irq_func, 928 - 0, "GHES IRQ", ghes)) { 928 + rc = request_irq(ghes->irq, ghes_irq_func, 0, "GHES IRQ", ghes); 929 + if (rc) { 929 930 pr_err(GHES_PFX "Failed to register IRQ for generic hardware error source: %d\n", 930 931 generic->header.source_id); 931 932 goto err_edac_unreg;
+6 -4
drivers/acpi/device_pm.c
··· 278 278 if (result) 279 279 return result; 280 280 } else if (state == ACPI_STATE_UNKNOWN) { 281 - /* No power resources and missing _PSC? Try to force D0. */ 281 + /* 282 + * No power resources and missing _PSC? Cross fingers and make 283 + * it D0 in hope that this is what the BIOS put the device into. 284 + * [We tried to force D0 here by executing _PS0, but that broke 285 + * Toshiba P870-303 in a nasty way.] 286 + */ 282 287 state = ACPI_STATE_D0; 283 - result = acpi_dev_pm_explicit_set(device, state); 284 - if (result) 285 - return result; 286 288 } 287 289 device->power.state = state; 288 290 return 0;
+1 -4
drivers/acpi/scan.c
··· 1017 1017 return -ENOSYS; 1018 1018 1019 1019 result = driver->ops.add(device); 1020 - if (result) { 1021 - device->driver = NULL; 1022 - device->driver_data = NULL; 1020 + if (result) 1023 1021 return result; 1024 - } 1025 1022 1026 1023 device->driver = driver; 1027 1024
+19
drivers/acpi/video.c
··· 458 458 }, 459 459 { 460 460 .callback = video_ignore_initial_backlight, 461 + .ident = "HP Pavilion g6 Notebook PC", 462 + .matches = { 463 + DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"), 464 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion g6 Notebook PC"), 465 + }, 466 + }, 467 + { 468 + .callback = video_ignore_initial_backlight, 461 469 .ident = "HP 1000 Notebook PC", 462 470 .matches = { 463 471 DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"), 464 472 DMI_MATCH(DMI_PRODUCT_NAME, "HP 1000 Notebook PC"), 473 + }, 474 + }, 475 + { 476 + .callback = video_ignore_initial_backlight, 477 + .ident = "HP Pavilion m4", 478 + .matches = { 479 + DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"), 480 + DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion m4 Notebook PC"), 465 481 }, 466 482 }, 467 483 {} ··· 1721 1705 struct input_dev *input; 1722 1706 int error; 1723 1707 acpi_status status; 1708 + 1709 + if (device->handler) 1710 + return -EINVAL; 1724 1711 1725 1712 status = acpi_walk_namespace(ACPI_TYPE_DEVICE, 1726 1713 device->parent->handle, 1,
+2 -4
drivers/base/regmap/regcache-rbtree.c
··· 143 143 int registers = 0; 144 144 int this_registers, average; 145 145 146 - map->lock(map); 146 + map->lock(map->lock_arg); 147 147 148 148 mem_size = sizeof(*rbtree_ctx); 149 149 mem_size += BITS_TO_LONGS(map->cache_present_nbits) * sizeof(long); ··· 170 170 seq_printf(s, "%d nodes, %d registers, average %d registers, used %zu bytes\n", 171 171 nodes, registers, average, mem_size); 172 172 173 - map->unlock(map); 173 + map->unlock(map->lock_arg); 174 174 175 175 return 0; 176 176 } ··· 391 391 for (node = rb_first(&rbtree_ctx->root); node; node = rb_next(node)) { 392 392 rbnode = rb_entry(node, struct regcache_rbtree_node, node); 393 393 394 - if (rbnode->base_reg < min) 395 - continue; 396 394 if (rbnode->base_reg > max) 397 395 break; 398 396 if (rbnode->base_reg + rbnode->blklen < min)
+10 -10
drivers/base/regmap/regcache.c
··· 270 270 271 271 BUG_ON(!map->cache_ops || !map->cache_ops->sync); 272 272 273 - map->lock(map); 273 + map->lock(map->lock_arg); 274 274 /* Remember the initial bypass state */ 275 275 bypass = map->cache_bypass; 276 276 dev_dbg(map->dev, "Syncing %s cache\n", ··· 306 306 trace_regcache_sync(map->dev, name, "stop"); 307 307 /* Restore the bypass state */ 308 308 map->cache_bypass = bypass; 309 - map->unlock(map); 309 + map->unlock(map->lock_arg); 310 310 311 311 return ret; 312 312 } ··· 333 333 334 334 BUG_ON(!map->cache_ops || !map->cache_ops->sync); 335 335 336 - map->lock(map); 336 + map->lock(map->lock_arg); 337 337 338 338 /* Remember the initial bypass state */ 339 339 bypass = map->cache_bypass; ··· 352 352 trace_regcache_sync(map->dev, name, "stop region"); 353 353 /* Restore the bypass state */ 354 354 map->cache_bypass = bypass; 355 - map->unlock(map); 355 + map->unlock(map->lock_arg); 356 356 357 357 return ret; 358 358 } ··· 372 372 */ 373 373 void regcache_cache_only(struct regmap *map, bool enable) 374 374 { 375 - map->lock(map); 375 + map->lock(map->lock_arg); 376 376 WARN_ON(map->cache_bypass && enable); 377 377 map->cache_only = enable; 378 378 trace_regmap_cache_only(map->dev, enable); 379 - map->unlock(map); 379 + map->unlock(map->lock_arg); 380 380 } 381 381 EXPORT_SYMBOL_GPL(regcache_cache_only); 382 382 ··· 391 391 */ 392 392 void regcache_mark_dirty(struct regmap *map) 393 393 { 394 - map->lock(map); 394 + map->lock(map->lock_arg); 395 395 map->cache_dirty = true; 396 - map->unlock(map); 396 + map->unlock(map->lock_arg); 397 397 } 398 398 EXPORT_SYMBOL_GPL(regcache_mark_dirty); 399 399 ··· 410 410 */ 411 411 void regcache_cache_bypass(struct regmap *map, bool enable) 412 412 { 413 - map->lock(map); 413 + map->lock(map->lock_arg); 414 414 WARN_ON(map->cache_only && enable); 415 415 map->cache_bypass = enable; 416 416 trace_regmap_cache_bypass(map->dev, enable); 417 - map->unlock(map); 417 + map->unlock(map->lock_arg); 418 418 } 419 419 EXPORT_SYMBOL_GPL(regcache_cache_bypass); 420 420
+4 -1
drivers/base/regmap/regmap-debugfs.c
··· 265 265 char *start = buf; 266 266 unsigned long reg, value; 267 267 struct regmap *map = file->private_data; 268 + int ret; 268 269 269 270 buf_size = min(count, (sizeof(buf)-1)); 270 271 if (copy_from_user(buf, user_buf, buf_size)) ··· 283 282 /* Userspace has been fiddling around behind the kernel's back */ 284 283 add_taint(TAINT_USER, LOCKDEP_NOW_UNRELIABLE); 285 284 286 - regmap_write(map, reg, value); 285 + ret = regmap_write(map, reg, value); 286 + if (ret < 0) 287 + return ret; 287 288 return buf_size; 288 289 } 289 290 #else
+16 -16
drivers/block/cciss.c
··· 168 168 static int cciss_open(struct block_device *bdev, fmode_t mode); 169 169 static int cciss_unlocked_open(struct block_device *bdev, fmode_t mode); 170 170 static void cciss_release(struct gendisk *disk, fmode_t mode); 171 - static int do_ioctl(struct block_device *bdev, fmode_t mode, 172 - unsigned int cmd, unsigned long arg); 173 171 static int cciss_ioctl(struct block_device *bdev, fmode_t mode, 174 172 unsigned int cmd, unsigned long arg); 175 173 static int cciss_getgeo(struct block_device *bdev, struct hd_geometry *geo); ··· 233 235 .owner = THIS_MODULE, 234 236 .open = cciss_unlocked_open, 235 237 .release = cciss_release, 236 - .ioctl = do_ioctl, 238 + .ioctl = cciss_ioctl, 237 239 .getgeo = cciss_getgeo, 238 240 #ifdef CONFIG_COMPAT 239 241 .compat_ioctl = cciss_compat_ioctl, ··· 1141 1143 mutex_unlock(&cciss_mutex); 1142 1144 } 1143 1145 1144 - static int do_ioctl(struct block_device *bdev, fmode_t mode, 1145 - unsigned cmd, unsigned long arg) 1146 - { 1147 - int ret; 1148 - mutex_lock(&cciss_mutex); 1149 - ret = cciss_ioctl(bdev, mode, cmd, arg); 1150 - mutex_unlock(&cciss_mutex); 1151 - return ret; 1152 - } 1153 - 1154 1146 #ifdef CONFIG_COMPAT 1155 1147 1156 1148 static int cciss_ioctl32_passthru(struct block_device *bdev, fmode_t mode, ··· 1167 1179 case CCISS_REGNEWD: 1168 1180 case CCISS_RESCANDISK: 1169 1181 case CCISS_GETLUNINFO: 1170 - return do_ioctl(bdev, mode, cmd, arg); 1182 + return cciss_ioctl(bdev, mode, cmd, arg); 1171 1183 1172 1184 case CCISS_PASSTHRU32: 1173 1185 return cciss_ioctl32_passthru(bdev, mode, cmd, arg); ··· 1207 1219 if (err) 1208 1220 return -EFAULT; 1209 1221 1210 - err = do_ioctl(bdev, mode, CCISS_PASSTHRU, (unsigned long)p); 1222 + err = cciss_ioctl(bdev, mode, CCISS_PASSTHRU, (unsigned long)p); 1211 1223 if (err) 1212 1224 return err; 1213 1225 err |= ··· 1249 1261 if (err) 1250 1262 return -EFAULT; 1251 1263 1252 - err = do_ioctl(bdev, mode, CCISS_BIG_PASSTHRU, (unsigned long)p); 1264 - err = cciss_ioctl(bdev, mode, CCISS_BIG_PASSTHRU, (unsigned long)p); 1253 1265 if (err) 1254 1266 return err; 1255 1267 err |= ··· 1299 1311 static int cciss_getintinfo(ctlr_info_t *h, void __user *argp) 1300 1312 { 1301 1313 cciss_coalint_struct intinfo; 1314 + unsigned long flags; 1302 1315 1303 1316 if (!argp) 1304 1317 return -EINVAL; 1318 + spin_lock_irqsave(&h->lock, flags); 1305 1319 intinfo.delay = readl(&h->cfgtable->HostWrite.CoalIntDelay); 1306 1320 intinfo.count = readl(&h->cfgtable->HostWrite.CoalIntCount); 1321 + spin_unlock_irqrestore(&h->lock, flags); 1307 1322 if (copy_to_user 1308 1323 (argp, &intinfo, sizeof(cciss_coalint_struct))) 1309 1324 return -EFAULT; ··· 1347 1356 static int cciss_getnodename(ctlr_info_t *h, void __user *argp) 1348 1357 { 1349 1358 NodeName_type NodeName; 1359 + unsigned long flags; 1350 1360 int i; 1351 1361 1352 1362 if (!argp) 1353 1363 return -EINVAL; 1364 + spin_lock_irqsave(&h->lock, flags); 1354 1365 for (i = 0; i < 16; i++) 1355 1366 NodeName[i] = readb(&h->cfgtable->ServerName[i]); 1367 + spin_unlock_irqrestore(&h->lock, flags); 1356 1368 if (copy_to_user(argp, NodeName, sizeof(NodeName_type))) 1357 1369 return -EFAULT; 1358 1370 return 0; ··· 1392 1398 static int cciss_getheartbeat(ctlr_info_t *h, void __user *argp) 1393 1399 { 1394 1400 Heartbeat_type heartbeat; 1401 + unsigned long flags; 1395 1402 1396 1403 if (!argp) 1397 1404 return -EINVAL; 1405 + spin_lock_irqsave(&h->lock, flags); 1398 1406 heartbeat = readl(&h->cfgtable->HeartBeat); 1407 + spin_unlock_irqrestore(&h->lock, flags); 1399 1408 if (copy_to_user(argp, &heartbeat, sizeof(Heartbeat_type))) 1400 1409 return -EFAULT; 1401 1410 return 0; ··· 1407 1410 static int cciss_getbustypes(ctlr_info_t *h, void __user *argp) 1408 1411 { 1409 1412 BusTypes_type BusTypes; 1413 + unsigned long flags; 1410 1414 1411 1415 if (!argp) 1412 1416 return -EINVAL; 1417 + spin_lock_irqsave(&h->lock, flags); 1413 1418 BusTypes = readl(&h->cfgtable->BusTypes); 1419 + spin_unlock_irqrestore(&h->lock, flags); 1414 1420 if (copy_to_user(argp, &BusTypes, sizeof(BusTypes_type))) 1415 1421 return -EFAULT; 1416 1422 return 0;
+5 -3
drivers/block/mtip32xx/mtip32xx.c
··· 3002 3002 3003 3003 static void mtip_hw_debugfs_exit(struct driver_data *dd) 3004 3004 { 3005 - debugfs_remove_recursive(dd->dfs_node); 3005 + if (dd->dfs_node) 3006 + debugfs_remove_recursive(dd->dfs_node); 3006 3007 } 3007 3008 3008 3009 ··· 3864 3863 struct driver_data *dd = queue->queuedata; 3865 3864 struct scatterlist *sg; 3866 3865 struct bio_vec *bvec; 3867 - int nents = 0; 3866 + int i, nents = 0; 3868 3867 int tag = 0, unaligned = 0; 3869 3868 3870 3869 if (unlikely(dd->dd_flag & MTIP_DDF_STOP_IO)) { ··· 3922 3921 } 3923 3922 3924 3923 /* Create the scatter list for this bio. */ 3925 - bio_for_each_segment(bvec, bio, nents) { 3924 + bio_for_each_segment(bvec, bio, i) { 3926 3925 sg_set_page(&sg[nents], 3927 3926 bvec->bv_page, 3928 3927 bvec->bv_len, 3929 3928 bvec->bv_offset); 3929 + nents++; 3930 3930 } 3931 3931 3932 3932 /* Issue the read/write. */
+48 -14
drivers/block/nvme-core.c
··· 629 629 struct nvme_command *cmnd; 630 630 struct nvme_iod *iod; 631 631 enum dma_data_direction dma_dir; 632 - int cmdid, length, result = -ENOMEM; 632 + int cmdid, length, result; 633 633 u16 control; 634 634 u32 dsmgmt; 635 635 int psegs = bio_phys_segments(ns->queue, bio); ··· 640 640 return result; 641 641 } 642 642 643 + result = -ENOMEM; 643 644 iod = nvme_alloc_iod(psegs, bio->bi_size, GFP_ATOMIC); 644 645 if (!iod) 645 646 goto nomem; ··· 978 977 979 978 if (timeout && !time_after(now, info[cmdid].timeout)) 980 979 continue; 980 + if (info[cmdid].ctx == CMD_CTX_CANCELLED) 981 + continue; 981 982 dev_warn(nvmeq->q_dmadev, "Cancelling I/O %d\n", cmdid); 982 983 ctx = cancel_cmdid(nvmeq, cmdid, &fn); 983 984 fn(nvmeq->dev, ctx, &cqe); ··· 1209 1206 1210 1207 if (addr & 3) 1211 1208 return ERR_PTR(-EINVAL); 1212 - if (!length) 1209 + if (!length || length > INT_MAX - PAGE_SIZE) 1213 1210 return ERR_PTR(-EINVAL); 1214 1211 1215 1212 offset = offset_in_page(addr); ··· 1230 1227 sg_init_table(sg, count); 1231 1228 for (i = 0; i < count; i++) { 1232 1229 sg_set_page(&sg[i], pages[i], 1233 - min_t(int, length, PAGE_SIZE - offset), offset); 1230 + min_t(unsigned, length, PAGE_SIZE - offset), 1231 + offset); 1234 1232 length -= (PAGE_SIZE - offset); 1235 1233 offset = 0; 1236 1234 } ··· 1439 1435 nvme_free_iod(dev, iod); 1440 1436 } 1441 1437 1442 - if (!status && copy_to_user(&ucmd->result, &cmd.result, 1438 + if ((status >= 0) && copy_to_user(&ucmd->result, &cmd.result, 1443 1439 sizeof(cmd.result))) 1444 1440 status = -EFAULT; 1445 1441 ··· 1637 1633 1638 1634 static int nvme_setup_io_queues(struct nvme_dev *dev) 1639 1635 { 1640 - int result, cpu, i, nr_io_queues, db_bar_size, q_depth; 1636 + struct pci_dev *pdev = dev->pci_dev; 1637 + int result, cpu, i, nr_io_queues, db_bar_size, q_depth, q_count; 1641 1638 1642 1639 nr_io_queues = num_online_cpus(); 1643 1640 result = set_queue_count(dev, nr_io_queues); ··· 1647 1642 if (result < nr_io_queues) 1648 1643 nr_io_queues = result; 1649 1644 1645 + q_count = nr_io_queues; 1650 1646 /* Deregister the admin queue's interrupt */ 1651 1647 free_irq(dev->entry[0].vector, dev->queues[0]); 1652 1648 1653 1649 db_bar_size = 4096 + ((nr_io_queues + 1) << (dev->db_stride + 3)); 1654 1650 if (db_bar_size > 8192) { 1655 1651 iounmap(dev->bar); 1656 - dev->bar = ioremap(pci_resource_start(dev->pci_dev, 0), 1657 - db_bar_size); 1652 + dev->bar = ioremap(pci_resource_start(pdev, 0), db_bar_size); 1658 1653 dev->dbs = ((void __iomem *)dev->bar) + 4096; 1659 1654 dev->queues[0]->q_db = dev->dbs; 1660 1655 } ··· 1662 1657 for (i = 0; i < nr_io_queues; i++) 1663 1658 dev->entry[i].entry = i; 1664 1659 for (;;) { 1665 - result = pci_enable_msix(dev->pci_dev, dev->entry, 1666 - nr_io_queues); 1660 + result = pci_enable_msix(pdev, dev->entry, nr_io_queues); 1667 1661 if (result == 0) { 1668 1662 break; 1669 1663 } else if (result > 0) { 1670 1664 nr_io_queues = result; 1671 1665 continue; 1672 1666 } else { 1673 - nr_io_queues = 1; 1667 + nr_io_queues = 0; 1674 1668 break; 1669 + } 1670 + } 1671 + 1672 + if (nr_io_queues == 0) { 1673 + nr_io_queues = q_count; 1674 + for (;;) { 1675 + result = pci_enable_msi_block(pdev, nr_io_queues); 1676 + if (result == 0) { 1677 + for (i = 0; i < nr_io_queues; i++) 1678 + dev->entry[i].vector = i + pdev->irq; 1679 + break; 1680 + } else if (result > 0) { 1681 + nr_io_queues = result; 1682 + continue; 1683 + } else { 1684 + nr_io_queues = 1; 1685 + break; 1686 + } 1675 1687 } 1676 1688 } 1677 1689 ··· 1872 1850 { 1873 1851 struct nvme_dev *dev = container_of(kref, struct nvme_dev, kref); 1874 1852 nvme_dev_remove(dev); 1875 - pci_disable_msix(dev->pci_dev); 1853 + if (dev->pci_dev->msi_enabled) 1854 + pci_disable_msi(dev->pci_dev); 1855 + else if (dev->pci_dev->msix_enabled) 1856 + pci_disable_msix(dev->pci_dev); 1876 1857 iounmap(dev->bar); 1877 1858 nvme_release_instance(dev); 1878 1859 nvme_release_prp_pools(dev); ··· 1948 1923 INIT_LIST_HEAD(&dev->namespaces); 1949 1924 dev->pci_dev = pdev; 1950 1925 pci_set_drvdata(pdev, dev); 1951 - dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 1952 - dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); 1926 + 1927 + if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64))) 1928 + dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); 1929 + else if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) 1930 + dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 1931 + else 1932 + goto disable; 1933 + 1953 1934 result = nvme_set_instance(dev); 1954 1935 if (result) 1955 1936 goto disable; ··· 2008 1977 unmap: 2009 1978 iounmap(dev->bar); 2010 1979 disable_msix: 2011 - pci_disable_msix(pdev); 1980 + if (dev->pci_dev->msi_enabled) 1981 + pci_disable_msi(dev->pci_dev); 1982 + else if (dev->pci_dev->msix_enabled) 1983 + pci_disable_msix(dev->pci_dev); 2012 1984 nvme_release_instance(dev); 2013 1985 nvme_release_prp_pools(dev); 2014 1986 disable:
+1 -2
drivers/block/nvme-scsi.c
··· 44 44 #include <linux/sched.h> 45 45 #include <linux/slab.h> 46 46 #include <linux/types.h> 47 - #include <linux/version.h> 48 47 #include <scsi/sg.h> 49 48 #include <scsi/scsi.h> 50 49 ··· 1653 1654 } 1654 1655 } 1655 1656 1656 - static u16 nvme_trans_modesel_get_mp(struct nvme_ns *ns, struct sg_io_hdr *hdr, 1657 + static int nvme_trans_modesel_get_mp(struct nvme_ns *ns, struct sg_io_hdr *hdr, 1657 1658 u8 *mode_page, u8 page_code) 1658 1659 { 1659 1660 int res = SNTI_TRANSLATION_SUCCESS;
+2 -1
drivers/block/pktcdvd.c
··· 83 83 84 84 #define MAX_SPEED 0xffff 85 85 86 - #define ZONE(sector, pd) (((sector) + (pd)->offset) & ~((pd)->settings.size - 1)) 86 + #define ZONE(sector, pd) (((sector) + (pd)->offset) & \ 87 + ~(sector_t)((pd)->settings.size - 1)) 87 88 88 89 static DEFINE_MUTEX(pktcdvd_mutex); 89 90 static struct pktcdvd_device *pkt_devs[MAX_WRITERS];
+18 -15
drivers/block/rbd.c
··· 519 519 }; 520 520 521 521 /* 522 - * Initialize an rbd client instance. 523 - * We own *ceph_opts. 522 + * Initialize an rbd client instance. Success or not, this function 523 + * consumes ceph_opts. 524 524 */ 525 525 static struct rbd_client *rbd_client_create(struct ceph_options *ceph_opts) 526 526 { ··· 675 675 676 676 /* 677 677 * Get a ceph client with specific addr and configuration, if one does 678 - * not exist create it. 678 + * not exist create it. Either way, ceph_opts is consumed by this 679 + * function. 679 680 */ 680 681 static struct rbd_client *rbd_get_client(struct ceph_options *ceph_opts) 681 682 { ··· 4698 4697 return ret; 4699 4698 } 4700 4699 4701 - /* Undo whatever state changes are made by v1 or v2 image probe */ 4702 - 4700 + /* 4701 + * Undo whatever state changes are made by v1 or v2 header info 4702 + * call. 4703 + */ 4703 4704 static void rbd_dev_unprobe(struct rbd_device *rbd_dev) 4704 4705 { 4705 4706 struct rbd_image_header *header; ··· 4905 4902 int tmp; 4906 4903 4907 4904 /* 4908 - * Get the id from the image id object. If it's not a 4909 - * format 2 image, we'll get ENOENT back, and we'll assume 4910 - * it's a format 1 image. 4905 + * Get the id from the image id object. Unless there's an 4906 + * error, rbd_dev->spec->image_id will be filled in with 4907 + * a dynamically-allocated string, and rbd_dev->image_format 4908 + * will be set to either 1 or 2. 
4911 4909 */ 4912 4910 ret = rbd_dev_image_id(rbd_dev); 4913 4911 if (ret) ··· 4996 4992 rc = PTR_ERR(rbdc); 4997 4993 goto err_out_args; 4998 4994 } 4999 - ceph_opts = NULL; /* rbd_dev client now owns this */ 5000 4995 5001 4996 /* pick the pool */ 5002 4997 osdc = &rbdc->client->osdc; ··· 5030 5027 rbd_dev->mapping.read_only = read_only; 5031 5028 5032 5029 rc = rbd_dev_device_setup(rbd_dev); 5033 - if (!rc) 5034 - return count; 5030 + if (rc) { 5031 + rbd_dev_image_release(rbd_dev); 5032 + goto err_out_module; 5033 + } 5035 5034 5036 - rbd_dev_image_release(rbd_dev); 5035 + return count; 5036 + 5037 5037 err_out_rbd_dev: 5038 5038 rbd_dev_destroy(rbd_dev); 5039 5039 err_out_client: 5040 5040 rbd_put_client(rbdc); 5041 5041 err_out_args: 5042 - if (ceph_opts) 5043 - ceph_destroy_options(ceph_opts); 5044 - kfree(rbd_opts); 5045 5042 rbd_spec_put(spec); 5046 5043 err_out_module: 5047 5044 module_put(THIS_MODULE);
+2 -2
drivers/bluetooth/Kconfig
··· 201 201 The core driver to support Marvell Bluetooth devices. 202 202 203 203 This driver is required if you want to support 204 - Marvell Bluetooth devices, such as 8688/8787/8797. 204 + Marvell Bluetooth devices, such as 8688/8787/8797/8897. 205 205 206 206 Say Y here to compile Marvell Bluetooth driver 207 207 into the kernel or say M to compile it as module. ··· 214 214 The driver for Marvell Bluetooth chipsets with SDIO interface. 215 215 216 216 This driver is required if you want to use Marvell Bluetooth 217 - devices with SDIO interface. Currently SD8688/SD8787/SD8797 217 + devices with SDIO interface. Currently SD8688/SD8787/SD8797/SD8897 218 218 chipsets are supported. 219 219 220 220 Say Y here to compile support for Marvell BT-over-SDIO driver
+4 -5
drivers/bluetooth/btmrvl_main.c
··· 498 498 add_wait_queue(&thread->wait_q, &wait); 499 499 500 500 set_current_state(TASK_INTERRUPTIBLE); 501 + if (kthread_should_stop()) { 502 + BT_DBG("main_thread: break from main thread"); 503 + break; 504 + } 501 505 502 506 if (adapter->wakeup_tries || 503 507 ((!adapter->int_count) && ··· 516 512 remove_wait_queue(&thread->wait_q, &wait); 517 513 518 514 BT_DBG("main_thread woke up"); 519 - 520 - if (kthread_should_stop()) { 521 - BT_DBG("main_thread: break from main thread"); 522 - break; 523 - } 524 515 525 516 spin_lock_irqsave(&priv->driver_lock, flags); 526 517 if (adapter->int_count) {
+28
drivers/bluetooth/btmrvl_sdio.c
··· 82 82 .io_port_2 = 0x7a, 83 83 }; 84 84 85 + static const struct btmrvl_sdio_card_reg btmrvl_reg_88xx = { 86 + .cfg = 0x00, 87 + .host_int_mask = 0x02, 88 + .host_intstatus = 0x03, 89 + .card_status = 0x50, 90 + .sq_read_base_addr_a0 = 0x60, 91 + .sq_read_base_addr_a1 = 0x61, 92 + .card_revision = 0xbc, 93 + .card_fw_status0 = 0xc0, 94 + .card_fw_status1 = 0xc1, 95 + .card_rx_len = 0xc2, 96 + .card_rx_unit = 0xc3, 97 + .io_port_0 = 0xd8, 98 + .io_port_1 = 0xd9, 99 + .io_port_2 = 0xda, 100 + }; 101 + 85 102 static const struct btmrvl_sdio_device btmrvl_sdio_sd8688 = { 86 103 .helper = "mrvl/sd8688_helper.bin", 87 104 .firmware = "mrvl/sd8688.bin", ··· 120 103 .sd_blksz_fw_dl = 256, 121 104 }; 122 105 106 + static const struct btmrvl_sdio_device btmrvl_sdio_sd8897 = { 107 + .helper = NULL, 108 + .firmware = "mrvl/sd8897_uapsta.bin", 109 + .reg = &btmrvl_reg_88xx, 110 + .sd_blksz_fw_dl = 256, 111 + }; 112 + 123 113 static const struct sdio_device_id btmrvl_sdio_ids[] = { 124 114 /* Marvell SD8688 Bluetooth device */ 125 115 { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x9105), ··· 140 116 /* Marvell SD8797 Bluetooth device */ 141 117 { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x912A), 142 118 .driver_data = (unsigned long) &btmrvl_sdio_sd8797 }, 119 + /* Marvell SD8897 Bluetooth device */ 120 + { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x912E), 121 + .driver_data = (unsigned long) &btmrvl_sdio_sd8897 }, 143 122 144 123 { } /* Terminating entry */ 145 124 }; ··· 1221 1194 MODULE_FIRMWARE("mrvl/sd8688.bin"); 1222 1195 MODULE_FIRMWARE("mrvl/sd8787_uapsta.bin"); 1223 1196 MODULE_FIRMWARE("mrvl/sd8797_uapsta.bin"); 1197 + MODULE_FIRMWARE("mrvl/sd8897_uapsta.bin");
+2 -2
drivers/cpufreq/acpi-cpufreq.c
··· 347 347 switch (per_cpu(acfreq_data, cpumask_first(mask))->cpu_feature) { 348 348 case SYSTEM_INTEL_MSR_CAPABLE: 349 349 cmd.type = SYSTEM_INTEL_MSR_CAPABLE; 350 - cmd.addr.msr.reg = MSR_IA32_PERF_STATUS; 350 + cmd.addr.msr.reg = MSR_IA32_PERF_CTL; 351 351 break; 352 352 case SYSTEM_AMD_MSR_CAPABLE: 353 353 cmd.type = SYSTEM_AMD_MSR_CAPABLE; 354 - cmd.addr.msr.reg = MSR_AMD_PERF_STATUS; 354 + cmd.addr.msr.reg = MSR_AMD_PERF_CTL; 355 355 break; 356 356 case SYSTEM_IO_CAPABLE: 357 357 cmd.type = SYSTEM_IO_CAPABLE;
+3 -2
drivers/cpufreq/cpufreq-cpu0.c
··· 45 45 struct cpufreq_freqs freqs; 46 46 struct opp *opp; 47 47 unsigned long volt = 0, volt_old = 0, tol = 0; 48 - long freq_Hz; 48 + long freq_Hz, freq_exact; 49 49 unsigned int index; 50 50 int ret; 51 51 ··· 60 60 freq_Hz = clk_round_rate(cpu_clk, freq_table[index].frequency * 1000); 61 61 if (freq_Hz < 0) 62 62 freq_Hz = freq_table[index].frequency * 1000; 63 + freq_exact = freq_Hz; 63 64 freqs.new = freq_Hz / 1000; 64 65 freqs.old = clk_get_rate(cpu_clk) / 1000; 65 66 ··· 99 98 } 100 99 } 101 100 102 - ret = clk_set_rate(cpu_clk, freqs.new * 1000); 101 + ret = clk_set_rate(cpu_clk, freq_exact); 103 102 if (ret) { 104 103 pr_err("failed to set clock rate: %d\n", ret); 105 104 if (cpu_reg)
+3
drivers/cpufreq/cpufreq_governor.c
··· 26 26 #include <linux/tick.h> 27 27 #include <linux/types.h> 28 28 #include <linux/workqueue.h> 29 + #include <linux/cpu.h> 29 30 30 31 #include "cpufreq_governor.h" 31 32 ··· 181 180 if (!all_cpus) { 182 181 __gov_queue_work(smp_processor_id(), dbs_data, delay); 183 182 } else { 183 + get_online_cpus(); 184 184 for_each_cpu(i, policy->cpus) 185 185 __gov_queue_work(i, dbs_data, delay); 186 + put_online_cpus(); 186 187 } 187 188 } 188 189 EXPORT_SYMBOL_GPL(gov_queue_work);
+1 -1
drivers/crypto/sahara.c
··· 863 863 { .compatible = "fsl,imx27-sahara" }, 864 864 { /* sentinel */ } 865 865 }; 866 - MODULE_DEVICE_TABLE(platform, sahara_dt_ids); 866 + MODULE_DEVICE_TABLE(of, sahara_dt_ids); 867 867 868 868 static int sahara_probe(struct platform_device *pdev) 869 869 {
+23 -22
drivers/dma/dmatest.c
··· 716 716 } 717 717 dma_async_issue_pending(chan); 718 718 719 - wait_event_freezable_timeout(done_wait, 720 - done.done || kthread_should_stop(), 719 + wait_event_freezable_timeout(done_wait, done.done, 721 720 msecs_to_jiffies(params->timeout)); 722 721 723 722 status = dma_async_is_tx_complete(chan, cookie, NULL, NULL); ··· 996 997 static int __restart_threaded_test(struct dmatest_info *info, bool run) 997 998 { 998 999 struct dmatest_params *params = &info->params; 999 - int ret; 1000 1000 1001 1001 /* Stop any running test first */ 1002 1002 __stop_threaded_test(info); ··· 1010 1012 memcpy(params, &info->dbgfs_params, sizeof(*params)); 1011 1013 1012 1014 /* Run test with new parameters */ 1013 - ret = __run_threaded_test(info); 1014 - if (ret) { 1015 - __stop_threaded_test(info); 1016 - pr_err("dmatest: Can't run test\n"); 1015 + return __run_threaded_test(info); 1016 + } 1017 + 1018 + static bool __is_threaded_test_run(struct dmatest_info *info) 1019 + { 1020 + struct dmatest_chan *dtc; 1021 + 1022 + list_for_each_entry(dtc, &info->channels, node) { 1023 + struct dmatest_thread *thread; 1024 + 1025 + list_for_each_entry(thread, &dtc->threads, node) { 1026 + if (!thread->done) 1027 + return true; 1028 + } 1017 1029 } 1018 1030 1019 - return ret; 1031 + return false; 1020 1032 } 1021 1033 1022 1034 static ssize_t dtf_write_string(void *to, size_t available, loff_t *ppos, ··· 1099 1091 { 1100 1092 struct dmatest_info *info = file->private_data; 1101 1093 char buf[3]; 1102 - struct dmatest_chan *dtc; 1103 - bool alive = false; 1104 1094 1105 1095 mutex_lock(&info->lock); 1106 - list_for_each_entry(dtc, &info->channels, node) { 1107 - struct dmatest_thread *thread; 1108 1096 1109 - list_for_each_entry(thread, &dtc->threads, node) { 1110 - if (!thread->done) { 1111 - alive = true; 1112 - break; 1113 - } 1114 - } 1115 - } 1116 - 1117 - if (alive) { 1097 + if (__is_threaded_test_run(info)) { 1118 1098 buf[0] = 'Y'; 1119 1099 } else { 1120 1100 
__stop_threaded_test(info); ··· 1128 1132 1129 1133 if (strtobool(buf, &bv) == 0) { 1130 1134 mutex_lock(&info->lock); 1131 - ret = __restart_threaded_test(info, bv); 1135 + 1136 + if (__is_threaded_test_run(info)) 1137 + ret = -EBUSY; 1138 + else 1139 + ret = __restart_threaded_test(info, bv); 1140 + 1132 1141 mutex_unlock(&info->lock); 1133 1142 } 1134 1143
+5 -3
drivers/dma/ste_dma40.c
··· 1566 1566 return; 1567 1567 } 1568 1568 1569 - if (d40_queue_start(d40c) == NULL) 1569 + if (d40_queue_start(d40c) == NULL) { 1570 1570 d40c->busy = false; 1571 - pm_runtime_mark_last_busy(d40c->base->dev); 1572 - pm_runtime_put_autosuspend(d40c->base->dev); 1571 + 1572 + pm_runtime_mark_last_busy(d40c->base->dev); 1573 + pm_runtime_put_autosuspend(d40c->base->dev); 1574 + } 1573 1575 1574 1576 d40_desc_remove(d40d); 1575 1577 d40_desc_done(d40c, d40d);
+5 -1
drivers/gpu/drm/drm_irq.c
··· 1054 1054 */ 1055 1055 void drm_vblank_pre_modeset(struct drm_device *dev, int crtc) 1056 1056 { 1057 - /* vblank is not initialized (IRQ not installed ?) */ 1057 + /* vblank is not initialized (IRQ not installed ?), or has been freed */ 1058 1058 if (!dev->num_crtcs) 1059 1059 return; 1060 1060 /* ··· 1075 1075 void drm_vblank_post_modeset(struct drm_device *dev, int crtc) 1076 1076 { 1077 1077 unsigned long irqflags; 1078 + 1079 + /* vblank is not initialized (IRQ not installed ?), or has been freed */ 1080 + if (!dev->num_crtcs) 1081 + return; 1078 1082 1079 1083 if (dev->vblank_inmodeset[crtc]) { 1080 1084 spin_lock_irqsave(&dev->vbl_lock, irqflags);
+25 -5
drivers/gpu/drm/gma500/cdv_intel_display.c
··· 1462 1462 size_t addr = 0; 1463 1463 struct gtt_range *gt; 1464 1464 struct drm_gem_object *obj; 1465 - int ret; 1465 + int ret = 0; 1466 1466 1467 1467 /* if we want to turn of the cursor ignore width and height */ 1468 1468 if (!handle) { ··· 1499 1499 1500 1500 if (obj->size < width * height * 4) { 1501 1501 dev_dbg(dev->dev, "buffer is to small\n"); 1502 - return -ENOMEM; 1502 + ret = -ENOMEM; 1503 + goto unref_cursor; 1503 1504 } 1504 1505 1505 1506 gt = container_of(obj, struct gtt_range, gem); ··· 1509 1508 ret = psb_gtt_pin(gt); 1510 1509 if (ret) { 1511 1510 dev_err(dev->dev, "Can not pin down handle 0x%x\n", handle); 1512 - return ret; 1511 + goto unref_cursor; 1513 1512 } 1514 1513 1515 1514 addr = gt->offset; /* Or resource.start ??? */ ··· 1533 1532 struct gtt_range, gem); 1534 1533 psb_gtt_unpin(gt); 1535 1534 drm_gem_object_unreference(psb_intel_crtc->cursor_obj); 1536 - psb_intel_crtc->cursor_obj = obj; 1537 1535 } 1538 - return 0; 1536 + 1537 + psb_intel_crtc->cursor_obj = obj; 1538 + return ret; 1539 + 1540 + unref_cursor: 1541 + drm_gem_object_unreference(obj); 1542 + return ret; 1539 1543 } 1540 1544 1541 1545 static int cdv_intel_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) ··· 1756 1750 kfree(psb_intel_crtc); 1757 1751 } 1758 1752 1753 + static void cdv_intel_crtc_disable(struct drm_crtc *crtc) 1754 + { 1755 + struct gtt_range *gt; 1756 + struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private; 1757 + 1758 + crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF); 1759 + 1760 + if (crtc->fb) { 1761 + gt = to_psb_fb(crtc->fb)->gtt; 1762 + psb_gtt_unpin(gt); 1763 + } 1764 + } 1765 + 1759 1766 const struct drm_crtc_helper_funcs cdv_intel_helper_funcs = { 1760 1767 .dpms = cdv_intel_crtc_dpms, 1761 1768 .mode_fixup = cdv_intel_crtc_mode_fixup, ··· 1776 1757 .mode_set_base = cdv_intel_pipe_set_base, 1777 1758 .prepare = cdv_intel_crtc_prepare, 1778 1759 .commit = cdv_intel_crtc_commit, 1760 + .disable = cdv_intel_crtc_disable, 1779 1761 }; 
1780 1762 1781 1763 const struct drm_crtc_funcs cdv_intel_crtc_funcs = {
+2 -2
drivers/gpu/drm/gma500/framebuffer.c
··· 121 121 unsigned long address; 122 122 int ret; 123 123 unsigned long pfn; 124 - /* FIXME: assumes fb at stolen base which may not be true */ 125 - unsigned long phys_addr = (unsigned long)dev_priv->stolen_base; 124 + unsigned long phys_addr = (unsigned long)dev_priv->stolen_base + 125 + psbfb->gtt->offset; 126 126 127 127 page_num = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; 128 128 address = (unsigned long)vmf->virtual_address - (vmf->pgoff << PAGE_SHIFT);
+27 -6
drivers/gpu/drm/gma500/psb_intel_display.c
··· 843 843 struct gtt_range *cursor_gt = psb_intel_crtc->cursor_gt; 844 844 struct drm_gem_object *obj; 845 845 void *tmp_dst, *tmp_src; 846 - int ret, i, cursor_pages; 846 + int ret = 0, i, cursor_pages; 847 847 848 848 /* if we want to turn of the cursor ignore width and height */ 849 849 if (!handle) { ··· 880 880 881 881 if (obj->size < width * height * 4) { 882 882 dev_dbg(dev->dev, "buffer is to small\n"); 883 - return -ENOMEM; 883 + ret = -ENOMEM; 884 + goto unref_cursor; 884 885 } 885 886 886 887 gt = container_of(obj, struct gtt_range, gem); ··· 890 889 ret = psb_gtt_pin(gt); 891 890 if (ret) { 892 891 dev_err(dev->dev, "Can not pin down handle 0x%x\n", handle); 893 - return ret; 892 + goto unref_cursor; 894 893 } 895 894 896 895 if (dev_priv->ops->cursor_needs_phys) { 897 896 if (cursor_gt == NULL) { 898 897 dev_err(dev->dev, "No hardware cursor mem available"); 899 - return -ENOMEM; 898 + ret = -ENOMEM; 899 + goto unref_cursor; 900 900 } 901 901 902 902 /* Prevent overflow */ ··· 938 936 struct gtt_range, gem); 939 937 psb_gtt_unpin(gt); 940 938 drm_gem_object_unreference(psb_intel_crtc->cursor_obj); 941 - psb_intel_crtc->cursor_obj = obj; 942 939 } 943 - return 0; 940 + 941 + psb_intel_crtc->cursor_obj = obj; 942 + return ret; 943 + 944 + unref_cursor: 945 + drm_gem_object_unreference(obj); 946 + return ret; 944 947 } 945 948 946 949 static int psb_intel_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) ··· 1157 1150 kfree(psb_intel_crtc); 1158 1151 } 1159 1152 1153 + static void psb_intel_crtc_disable(struct drm_crtc *crtc) 1154 + { 1155 + struct gtt_range *gt; 1156 + struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private; 1157 + 1158 + crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF); 1159 + 1160 + if (crtc->fb) { 1161 + gt = to_psb_fb(crtc->fb)->gtt; 1162 + psb_gtt_unpin(gt); 1163 + } 1164 + } 1165 + 1160 1166 const struct drm_crtc_helper_funcs psb_intel_helper_funcs = { 1161 1167 .dpms = psb_intel_crtc_dpms, 1162 1168 .mode_fixup = 
psb_intel_crtc_mode_fixup, ··· 1177 1157 .mode_set_base = psb_intel_pipe_set_base, 1178 1158 .prepare = psb_intel_crtc_prepare, 1179 1159 .commit = psb_intel_crtc_commit, 1160 + .disable = psb_intel_crtc_disable, 1180 1161 }; 1181 1162 1182 1163 const struct drm_crtc_funcs psb_intel_crtc_funcs = {
+2 -5
drivers/gpu/drm/i915/i915_gem.c
··· 91 91 { 92 92 int ret; 93 93 94 - #define EXIT_COND (!i915_reset_in_progress(error)) 94 + #define EXIT_COND (!i915_reset_in_progress(error) || \ 95 + i915_terminally_wedged(error)) 95 96 if (EXIT_COND) 96 97 return 0; 97 - 98 - /* GPU is already declared terminally dead, give up. */ 99 - if (i915_terminally_wedged(error)) 100 - return -EIO; 101 98 102 99 /* 103 100 * Only wait 10 seconds for the gpu reset to complete to avoid hanging
+5
drivers/gpu/drm/i915/intel_display.c
··· 7937 7937 memset(&pipe_config, 0, sizeof(pipe_config)); 7938 7938 active = dev_priv->display.get_pipe_config(crtc, 7939 7939 &pipe_config); 7940 + 7941 + /* hw state is inconsistent with the pipe A quirk */ 7942 + if (crtc->pipe == PIPE_A && dev_priv->quirks & QUIRK_PIPEA_FORCE) 7943 + active = crtc->active; 7944 + 7940 7945 WARN(crtc->active != active, 7941 7946 "crtc active state doesn't match with hw state " 7942 7947 "(expected %i, found %i)\n", crtc->active, active);
+2 -2
drivers/gpu/drm/i915/intel_lvds.c
··· 815 815 }, 816 816 { 817 817 .callback = intel_no_lvds_dmi_callback, 818 - .ident = "Hewlett-Packard HP t5740e Thin Client", 818 + .ident = "Hewlett-Packard HP t5740", 819 819 .matches = { 820 820 DMI_MATCH(DMI_BOARD_VENDOR, "Hewlett-Packard"), 821 - DMI_MATCH(DMI_PRODUCT_NAME, "HP t5740e Thin Client"), 821 + DMI_MATCH(DMI_PRODUCT_NAME, " t5740"), 822 822 }, 823 823 }, 824 824 {
+15 -11
drivers/gpu/drm/i915/intel_sdvo.c
··· 1776 1776 * Assume that the preferred modes are 1777 1777 * arranged in priority order. 1778 1778 */ 1779 - intel_ddc_get_modes(connector, intel_sdvo->i2c); 1780 - if (list_empty(&connector->probed_modes) == false) 1781 - goto end; 1779 + intel_ddc_get_modes(connector, &intel_sdvo->ddc); 1782 1780 1783 - /* Fetch modes from VBT */ 1781 + /* 1782 + * Fetch modes from VBT. For SDVO prefer the VBT mode since some 1783 + * SDVO->LVDS transcoders can't cope with the EDID mode. Since 1784 + * drm_mode_probed_add adds the mode at the head of the list we add it 1785 + * last. 1786 + */ 1784 1787 if (dev_priv->sdvo_lvds_vbt_mode != NULL) { 1785 1788 newmode = drm_mode_duplicate(connector->dev, 1786 1789 dev_priv->sdvo_lvds_vbt_mode); ··· 1795 1792 } 1796 1793 } 1797 1794 1798 - end: 1799 1795 list_for_each_entry(newmode, &connector->probed_modes, head) { 1800 1796 if (newmode->type & DRM_MODE_TYPE_PREFERRED) { 1801 1797 intel_sdvo->sdvo_lvds_fixed_mode = ··· 2792 2790 SDVOB_HOTPLUG_INT_STATUS_I915 : SDVOC_HOTPLUG_INT_STATUS_I915; 2793 2791 } 2794 2792 2795 - /* Only enable the hotplug irq if we need it, to work around noisy 2796 - * hotplug lines. 2797 - */ 2798 - if (intel_sdvo->hotplug_active) 2799 - intel_encoder->hpd_pin = HPD_SDVO_B ? HPD_SDVO_B : HPD_SDVO_C; 2800 - 2801 2793 intel_encoder->compute_config = intel_sdvo_compute_config; 2802 2794 intel_encoder->disable = intel_disable_sdvo; 2803 2795 intel_encoder->mode_set = intel_sdvo_mode_set; ··· 2808 2812 SDVO_NAME(intel_sdvo)); 2809 2813 /* Output_setup can leave behind connectors! */ 2810 2814 goto err_output; 2815 + } 2816 + 2817 + /* Only enable the hotplug irq if we need it, to work around noisy 2818 + * hotplug lines. 2819 + */ 2820 + if (intel_sdvo->hotplug_active) { 2821 + intel_encoder->hpd_pin = 2822 + intel_sdvo->is_sdvob ? HPD_SDVO_B : HPD_SDVO_C; 2811 2823 } 2812 2824 2813 2825 /*
+5 -4
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 1034 1034 else 1035 1035 hi_pri_lvl = 5; 1036 1036 1037 - WREG8(0x1fde, 0x06); 1038 - WREG8(0x1fdf, hi_pri_lvl); 1037 + WREG8(MGAREG_CRTCEXT_INDEX, 0x06); 1038 + WREG8(MGAREG_CRTCEXT_DATA, hi_pri_lvl); 1039 1039 } else { 1040 + WREG8(MGAREG_CRTCEXT_INDEX, 0x06); 1040 1041 if (mdev->reg_1e24 >= 0x01) 1041 - WREG8(0x1fdf, 0x03); 1042 + WREG8(MGAREG_CRTCEXT_DATA, 0x03); 1042 1043 else 1043 - WREG8(0x1fdf, 0x04); 1044 + WREG8(MGAREG_CRTCEXT_DATA, 0x04); 1044 1045 } 1045 1046 } 1046 1047 return 0;
+6 -1
drivers/gpu/drm/nouveau/core/engine/disp/dacnv50.c
··· 50 50 { 51 51 const u32 doff = (or * 0x800); 52 52 int load = -EINVAL; 53 + nv_mask(priv, 0x61a004 + doff, 0x807f0000, 0x80150000); 54 + nv_wait(priv, 0x61a004 + doff, 0x80000000, 0x00000000); 53 55 nv_wr32(priv, 0x61a00c + doff, 0x00100000 | loadval); 54 - udelay(9500); 56 + mdelay(9); 57 + udelay(500); 55 58 nv_wr32(priv, 0x61a00c + doff, 0x80000000); 56 59 load = (nv_rd32(priv, 0x61a00c + doff) & 0x38000000) >> 27; 57 60 nv_wr32(priv, 0x61a00c + doff, 0x00000000); 61 + nv_mask(priv, 0x61a004 + doff, 0x807f0000, 0x80550000); 62 + nv_wait(priv, 0x61a004 + doff, 0x80000000, 0x00000000); 58 63 return load; 59 64 } 60 65
+4
drivers/gpu/drm/nouveau/core/engine/disp/hdminv84.c
··· 55 55 nv_wr32(priv, 0x616510 + hoff, 0x00000000); 56 56 nv_mask(priv, 0x616500 + hoff, 0x00000001, 0x00000001); 57 57 58 + nv_mask(priv, 0x6165d0 + hoff, 0x00070001, 0x00010001); /* SPARE, HW_CTS */ 59 + nv_mask(priv, 0x616568 + hoff, 0x00010101, 0x00000000); /* ACR_CTRL, ?? */ 60 + nv_mask(priv, 0x616578 + hoff, 0x80000000, 0x80000000); /* ACR_0441_ENABLE */ 61 + 58 62 /* ??? */ 59 63 nv_mask(priv, 0x61733c, 0x00100000, 0x00100000); /* RESETF */ 60 64 nv_mask(priv, 0x61733c, 0x10000000, 0x10000000); /* LOOKUP_EN */
+10 -4
drivers/gpu/drm/nouveau/core/engine/fifo/nv50.c
··· 40 40 * FIFO channel objects 41 41 ******************************************************************************/ 42 42 43 - void 44 - nv50_fifo_playlist_update(struct nv50_fifo_priv *priv) 43 + static void 44 + nv50_fifo_playlist_update_locked(struct nv50_fifo_priv *priv) 45 45 { 46 46 struct nouveau_bar *bar = nouveau_bar(priv); 47 47 struct nouveau_gpuobj *cur; 48 48 int i, p; 49 49 50 - mutex_lock(&nv_subdev(priv)->mutex); 51 50 cur = priv->playlist[priv->cur_playlist]; 52 51 priv->cur_playlist = !priv->cur_playlist; 53 52 ··· 60 61 nv_wr32(priv, 0x0032f4, cur->addr >> 12); 61 62 nv_wr32(priv, 0x0032ec, p); 62 63 nv_wr32(priv, 0x002500, 0x00000101); 64 + } 65 + 66 + void 67 + nv50_fifo_playlist_update(struct nv50_fifo_priv *priv) 68 + { 69 + mutex_lock(&nv_subdev(priv)->mutex); 70 + nv50_fifo_playlist_update_locked(priv); 63 71 mutex_unlock(&nv_subdev(priv)->mutex); 64 72 } 65 73 ··· 495 489 496 490 for (i = 0; i < 128; i++) 497 491 nv_wr32(priv, 0x002600 + (i * 4), 0x00000000); 498 - nv50_fifo_playlist_update(priv); 492 + nv50_fifo_playlist_update_locked(priv); 499 493 500 494 nv_wr32(priv, 0x003200, 0x00000001); 501 495 nv_wr32(priv, 0x003250, 0x00000001);
+1 -1
drivers/gpu/drm/nouveau/core/include/core/class.h
··· 218 218 #define NV50_DISP_DAC_PWR_STATE 0x00000040 219 219 #define NV50_DISP_DAC_PWR_STATE_ON 0x00000000 220 220 #define NV50_DISP_DAC_PWR_STATE_OFF 0x00000040 221 - #define NV50_DISP_DAC_LOAD 0x0002000c 221 + #define NV50_DISP_DAC_LOAD 0x00020100 222 222 #define NV50_DISP_DAC_LOAD_VALUE 0x00000007 223 223 224 224 #define NV50_DISP_PIOR_MTHD 0x00030000
+3 -1
drivers/gpu/drm/nouveau/nv50_display.c
··· 1554 1554 { 1555 1555 struct nv50_disp *disp = nv50_disp(encoder->dev); 1556 1556 int ret, or = nouveau_encoder(encoder)->or; 1557 - u32 load = 0; 1557 + u32 load = nouveau_drm(encoder->dev)->vbios.dactestval; 1558 + if (load == 0) 1559 + load = 340; 1558 1560 1559 1561 ret = nv_exec(disp->core, NV50_DISP_DAC_LOAD + or, &load, sizeof(load)); 1560 1562 if (ret || load != 7)
+8 -3
drivers/gpu/drm/radeon/atombios_encoders.c
··· 667 667 int 668 668 atombios_get_encoder_mode(struct drm_encoder *encoder) 669 669 { 670 + struct drm_device *dev = encoder->dev; 671 + struct radeon_device *rdev = dev->dev_private; 670 672 struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder); 671 673 struct drm_connector *connector; 672 674 struct radeon_connector *radeon_connector; ··· 695 693 case DRM_MODE_CONNECTOR_DVII: 696 694 case DRM_MODE_CONNECTOR_HDMIB: /* HDMI-B is basically DL-DVI; analog works fine */ 697 695 if (drm_detect_hdmi_monitor(radeon_connector->edid) && 698 - radeon_audio) 696 + radeon_audio && 697 + !ASIC_IS_DCE6(rdev)) /* remove once we support DCE6 */ 699 698 return ATOM_ENCODER_MODE_HDMI; 700 699 else if (radeon_connector->use_digital) 701 700 return ATOM_ENCODER_MODE_DVI; ··· 707 704 case DRM_MODE_CONNECTOR_HDMIA: 708 705 default: 709 706 if (drm_detect_hdmi_monitor(radeon_connector->edid) && 710 - radeon_audio) 707 + radeon_audio && 708 + !ASIC_IS_DCE6(rdev)) /* remove once we support DCE6 */ 711 709 return ATOM_ENCODER_MODE_HDMI; 712 710 else 713 711 return ATOM_ENCODER_MODE_DVI; ··· 722 718 (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_eDP)) 723 719 return ATOM_ENCODER_MODE_DP; 724 720 else if (drm_detect_hdmi_monitor(radeon_connector->edid) && 725 - radeon_audio) 721 + radeon_audio && 722 + !ASIC_IS_DCE6(rdev)) /* remove once we support DCE6 */ 726 723 return ATOM_ENCODER_MODE_HDMI; 727 724 else 728 725 return ATOM_ENCODER_MODE_DVI;
+6 -4
drivers/gpu/drm/radeon/evergreen.c
··· 4754 4754 rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_size = 0; 4755 4755 4756 4756 /* Enable IRQ */ 4757 + if (!rdev->irq.installed) { 4758 + r = radeon_irq_kms_init(rdev); 4759 + if (r) 4760 + return r; 4761 + } 4762 + 4757 4763 r = r600_irq_init(rdev); 4758 4764 if (r) { 4759 4765 DRM_ERROR("radeon: IH init failed (%d).\n", r); ··· 4926 4920 return r; 4927 4921 /* Memory manager */ 4928 4922 r = radeon_bo_init(rdev); 4929 - if (r) 4930 - return r; 4931 - 4932 - r = radeon_irq_kms_init(rdev); 4933 4923 if (r) 4934 4924 return r; 4935 4925
+6 -4
drivers/gpu/drm/radeon/ni.c
··· 2025 2025 } 2026 2026 2027 2027 /* Enable IRQ */ 2028 + if (!rdev->irq.installed) { 2029 + r = radeon_irq_kms_init(rdev); 2030 + if (r) 2031 + return r; 2032 + } 2033 + 2028 2034 r = r600_irq_init(rdev); 2029 2035 if (r) { 2030 2036 DRM_ERROR("radeon: IH init failed (%d).\n", r); ··· 2193 2187 return r; 2194 2188 /* Memory manager */ 2195 2189 r = radeon_bo_init(rdev); 2196 - if (r) 2197 - return r; 2198 - 2199 - r = radeon_irq_kms_init(rdev); 2200 2190 if (r) 2201 2191 return r; 2202 2192
+6 -3
drivers/gpu/drm/radeon/r100.c
··· 3869 3869 } 3870 3870 3871 3871 /* Enable IRQ */ 3872 + if (!rdev->irq.installed) { 3873 + r = radeon_irq_kms_init(rdev); 3874 + if (r) 3875 + return r; 3876 + } 3877 + 3872 3878 r100_irq_set(rdev); 3873 3879 rdev->config.r100.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL); 3874 3880 /* 1M ring buffer */ ··· 4028 4022 r100_mc_init(rdev); 4029 4023 /* Fence driver */ 4030 4024 r = radeon_fence_driver_init(rdev); 4031 - if (r) 4032 - return r; 4033 - r = radeon_irq_kms_init(rdev); 4034 4025 if (r) 4035 4026 return r; 4036 4027 /* Memory manager */
+6 -3
drivers/gpu/drm/radeon/r300.c
··· 1382 1382 } 1383 1383 1384 1384 /* Enable IRQ */ 1385 + if (!rdev->irq.installed) { 1386 + r = radeon_irq_kms_init(rdev); 1387 + if (r) 1388 + return r; 1389 + } 1390 + 1385 1391 r100_irq_set(rdev); 1386 1392 rdev->config.r300.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL); 1387 1393 /* 1M ring buffer */ ··· 1520 1514 r300_mc_init(rdev); 1521 1515 /* Fence driver */ 1522 1516 r = radeon_fence_driver_init(rdev); 1523 - if (r) 1524 - return r; 1525 - r = radeon_irq_kms_init(rdev); 1526 1517 if (r) 1527 1518 return r; 1528 1519 /* Memory manager */
+6 -4
drivers/gpu/drm/radeon/r420.c
··· 265 265 } 266 266 267 267 /* Enable IRQ */ 268 + if (!rdev->irq.installed) { 269 + r = radeon_irq_kms_init(rdev); 270 + if (r) 271 + return r; 272 + } 273 + 268 274 r100_irq_set(rdev); 269 275 rdev->config.r300.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL); 270 276 /* 1M ring buffer */ ··· 414 408 r420_debugfs(rdev); 415 409 /* Fence driver */ 416 410 r = radeon_fence_driver_init(rdev); 417 - if (r) { 418 - return r; 419 - } 420 - r = radeon_irq_kms_init(rdev); 421 411 if (r) { 422 412 return r; 423 413 }
+6 -3
drivers/gpu/drm/radeon/r520.c
··· 194 194 } 195 195 196 196 /* Enable IRQ */ 197 + if (!rdev->irq.installed) { 198 + r = radeon_irq_kms_init(rdev); 199 + if (r) 200 + return r; 201 + } 202 + 197 203 rs600_irq_set(rdev); 198 204 rdev->config.r300.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL); 199 205 /* 1M ring buffer */ ··· 301 295 rv515_debugfs(rdev); 302 296 /* Fence driver */ 303 297 r = radeon_fence_driver_init(rdev); 304 - if (r) 305 - return r; 306 - r = radeon_irq_kms_init(rdev); 307 298 if (r) 308 299 return r; 309 300 /* Memory manager */
+49 -4
drivers/gpu/drm/radeon/r600.c
··· 1046 1046 return -1; 1047 1047 } 1048 1048 1049 + uint32_t rs780_mc_rreg(struct radeon_device *rdev, uint32_t reg) 1050 + { 1051 + uint32_t r; 1052 + 1053 + WREG32(R_0028F8_MC_INDEX, S_0028F8_MC_IND_ADDR(reg)); 1054 + r = RREG32(R_0028FC_MC_DATA); 1055 + WREG32(R_0028F8_MC_INDEX, ~C_0028F8_MC_IND_ADDR); 1056 + return r; 1057 + } 1058 + 1059 + void rs780_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v) 1060 + { 1061 + WREG32(R_0028F8_MC_INDEX, S_0028F8_MC_IND_ADDR(reg) | 1062 + S_0028F8_MC_IND_WR_EN(1)); 1063 + WREG32(R_0028FC_MC_DATA, v); 1064 + WREG32(R_0028F8_MC_INDEX, 0x7F); 1065 + } 1066 + 1049 1067 static void r600_mc_program(struct radeon_device *rdev) 1050 1068 { 1051 1069 struct rv515_mc_save save; ··· 1199 1181 { 1200 1182 u32 tmp; 1201 1183 int chansize, numchan; 1184 + uint32_t h_addr, l_addr; 1185 + unsigned long long k8_addr; 1202 1186 1203 1187 /* Get VRAM informations */ 1204 1188 rdev->mc.vram_is_ddr = true; ··· 1241 1221 if (rdev->flags & RADEON_IS_IGP) { 1242 1222 rs690_pm_info(rdev); 1243 1223 rdev->mc.igp_sideport_enabled = radeon_atombios_sideport_present(rdev); 1224 + 1225 + if (rdev->family == CHIP_RS780 || rdev->family == CHIP_RS880) { 1226 + /* Use K8 direct mapping for fast fb access. */ 1227 + rdev->fastfb_working = false; 1228 + h_addr = G_000012_K8_ADDR_EXT(RREG32_MC(R_000012_MC_MISC_UMA_CNTL)); 1229 + l_addr = RREG32_MC(R_000011_K8_FB_LOCATION); 1230 + k8_addr = ((unsigned long long)h_addr) << 32 | l_addr; 1231 + #if defined(CONFIG_X86_32) && !defined(CONFIG_X86_PAE) 1232 + if (k8_addr + rdev->mc.visible_vram_size < 0x100000000ULL) 1233 + #endif 1234 + { 1235 + /* FastFB shall be used with UMA memory. Here it is simply disabled when sideport 1236 + * memory is present. 
1237 + */ 1238 + if (rdev->mc.igp_sideport_enabled == false && radeon_fastfb == 1) { 1239 + DRM_INFO("Direct mapping: aper base at 0x%llx, replaced by direct mapping base 0x%llx.\n", 1240 + (unsigned long long)rdev->mc.aper_base, k8_addr); 1241 + rdev->mc.aper_base = (resource_size_t)k8_addr; 1242 + rdev->fastfb_working = true; 1243 + } 1244 + } 1245 + } 1244 1246 } 1247 + 1245 1248 radeon_update_bandwidth_info(rdev); 1246 1249 return 0; 1247 1250 } ··· 3245 3202 } 3246 3203 3247 3204 /* Enable IRQ */ 3205 + if (!rdev->irq.installed) { 3206 + r = radeon_irq_kms_init(rdev); 3207 + if (r) 3208 + return r; 3209 + } 3210 + 3248 3211 r = r600_irq_init(rdev); 3249 3212 if (r) { 3250 3213 DRM_ERROR("radeon: IH init failed (%d).\n", r); ··· 3402 3353 return r; 3403 3354 /* Memory manager */ 3404 3355 r = radeon_bo_init(rdev); 3405 - if (r) 3406 - return r; 3407 - 3408 - r = radeon_irq_kms_init(rdev); 3409 3356 if (r) 3410 3357 return r; 3411 3358
+8
drivers/gpu/drm/radeon/r600d.h
··· 1342 1342 #define PACKET3_STRMOUT_BASE_UPDATE 0x72 /* r7xx */ 1343 1343 #define PACKET3_SURFACE_BASE_UPDATE 0x73 1344 1344 1345 + #define R_000011_K8_FB_LOCATION 0x11 1346 + #define R_000012_MC_MISC_UMA_CNTL 0x12 1347 + #define G_000012_K8_ADDR_EXT(x) (((x) >> 0) & 0xFF) 1348 + #define R_0028F8_MC_INDEX 0x28F8 1349 + #define S_0028F8_MC_IND_ADDR(x) (((x) & 0x1FF) << 0) 1350 + #define C_0028F8_MC_IND_ADDR 0xFFFFFE00 1351 + #define S_0028F8_MC_IND_WR_EN(x) (((x) & 0x1) << 9) 1352 + #define R_0028FC_MC_DATA 0x28FC 1345 1353 1346 1354 #define R_008020_GRBM_SOFT_RESET 0x8020 1347 1355 #define S_008020_SOFT_RESET_CP(x) (((x) & 1) << 0)
+4
drivers/gpu/drm/radeon/radeon_asic.c
··· 122 122 rdev->mc_rreg = &rs600_mc_rreg; 123 123 rdev->mc_wreg = &rs600_mc_wreg; 124 124 } 125 + if (rdev->family == CHIP_RS780 || rdev->family == CHIP_RS880) { 126 + rdev->mc_rreg = &rs780_mc_rreg; 127 + rdev->mc_wreg = &rs780_mc_wreg; 128 + } 125 129 if (rdev->family >= CHIP_R600) { 126 130 rdev->pciep_rreg = &r600_pciep_rreg; 127 131 rdev->pciep_wreg = &r600_pciep_wreg;
+2
drivers/gpu/drm/radeon/radeon_asic.h
··· 347 347 extern void r600_pm_misc(struct radeon_device *rdev); 348 348 extern void r600_pm_init_profile(struct radeon_device *rdev); 349 349 extern void rs780_pm_init_profile(struct radeon_device *rdev); 350 + extern uint32_t rs780_mc_rreg(struct radeon_device *rdev, uint32_t reg); 351 + extern void rs780_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v); 350 352 extern void r600_pm_get_dynpm_state(struct radeon_device *rdev); 351 353 extern void r600_set_pcie_lanes(struct radeon_device *rdev, int lanes); 352 354 extern int r600_get_pcie_lanes(struct radeon_device *rdev);
+6 -3
drivers/gpu/drm/radeon/rs400.c
··· 417 417 } 418 418 419 419 /* Enable IRQ */ 420 + if (!rdev->irq.installed) { 421 + r = radeon_irq_kms_init(rdev); 422 + if (r) 423 + return r; 424 + } 425 + 420 426 r100_irq_set(rdev); 421 427 rdev->config.r300.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL); 422 428 /* 1M ring buffer */ ··· 539 533 rs400_mc_init(rdev); 540 534 /* Fence driver */ 541 535 r = radeon_fence_driver_init(rdev); 542 - if (r) 543 - return r; 544 - r = radeon_irq_kms_init(rdev); 545 536 if (r) 546 537 return r; 547 538 /* Memory manager */
+6 -3
drivers/gpu/drm/radeon/rs600.c
··· 923 923 } 924 924 925 925 /* Enable IRQ */ 926 + if (!rdev->irq.installed) { 927 + r = radeon_irq_kms_init(rdev); 928 + if (r) 929 + return r; 930 + } 931 + 926 932 rs600_irq_set(rdev); 927 933 rdev->config.r300.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL); 928 934 /* 1M ring buffer */ ··· 1051 1045 rs600_debugfs(rdev); 1052 1046 /* Fence driver */ 1053 1047 r = radeon_fence_driver_init(rdev); 1054 - if (r) 1055 - return r; 1056 - r = radeon_irq_kms_init(rdev); 1057 1048 if (r) 1058 1049 return r; 1059 1050 /* Memory manager */
+6 -3
drivers/gpu/drm/radeon/rs690.c
··· 651 651 } 652 652 653 653 /* Enable IRQ */ 654 + if (!rdev->irq.installed) { 655 + r = radeon_irq_kms_init(rdev); 656 + if (r) 657 + return r; 658 + } 659 + 654 660 rs600_irq_set(rdev); 655 661 rdev->config.r300.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL); 656 662 /* 1M ring buffer */ ··· 780 774 rv515_debugfs(rdev); 781 775 /* Fence driver */ 782 776 r = radeon_fence_driver_init(rdev); 783 - if (r) 784 - return r; 785 - r = radeon_irq_kms_init(rdev); 786 777 if (r) 787 778 return r; 788 779 /* Memory manager */
+6 -3
drivers/gpu/drm/radeon/rv515.c
··· 532 532 } 533 533 534 534 /* Enable IRQ */ 535 + if (!rdev->irq.installed) { 536 + r = radeon_irq_kms_init(rdev); 537 + if (r) 538 + return r; 539 + } 540 + 535 541 rs600_irq_set(rdev); 536 542 rdev->config.r300.hdp_cntl = RREG32(RADEON_HOST_PATH_CNTL); 537 543 /* 1M ring buffer */ ··· 666 660 rv515_debugfs(rdev); 667 661 /* Fence driver */ 668 662 r = radeon_fence_driver_init(rdev); 669 - if (r) 670 - return r; 671 - r = radeon_irq_kms_init(rdev); 672 663 if (r) 673 664 return r; 674 665 /* Memory manager */
+6 -4
drivers/gpu/drm/radeon/rv770.c
··· 1887 1887 rdev->ring[R600_RING_TYPE_UVD_INDEX].ring_size = 0; 1888 1888 1889 1889 /* Enable IRQ */ 1890 + if (!rdev->irq.installed) { 1891 + r = radeon_irq_kms_init(rdev); 1892 + if (r) 1893 + return r; 1894 + } 1895 + 1890 1896 r = r600_irq_init(rdev); 1891 1897 if (r) { 1892 1898 DRM_ERROR("radeon: IH init failed (%d).\n", r); ··· 2048 2042 return r; 2049 2043 /* Memory manager */ 2050 2044 r = radeon_bo_init(rdev); 2051 - if (r) 2052 - return r; 2053 - 2054 - r = radeon_irq_kms_init(rdev); 2055 2045 if (r) 2056 2046 return r; 2057 2047
+6 -4
drivers/gpu/drm/radeon/si.c
··· 5350 5350 } 5351 5351 5352 5352 /* Enable IRQ */ 5353 + if (!rdev->irq.installed) { 5354 + r = radeon_irq_kms_init(rdev); 5355 + if (r) 5356 + return r; 5357 + } 5358 + 5353 5359 r = si_irq_init(rdev); 5354 5360 if (r) { 5355 5361 DRM_ERROR("radeon: IH init failed (%d).\n", r); ··· 5536 5530 return r; 5537 5531 /* Memory manager */ 5538 5532 r = radeon_bo_init(rdev); 5539 - if (r) 5540 - return r; 5541 - 5542 - r = radeon_irq_kms_init(rdev); 5543 5533 if (r) 5544 5534 return r; 5545 5535
+1
drivers/gpu/drm/tilcdc/Kconfig
··· 6 6 select DRM_GEM_CMA_HELPER 7 7 select VIDEOMODE_HELPERS 8 8 select BACKLIGHT_CLASS_DEVICE 9 + select BACKLIGHT_LCD_SUPPORT 9 10 help 10 11 Choose this option if you have an TI SoC with LCDC display 11 12 controller, for example AM33xx in beagle-bone, DA8xx, or
+7 -4
drivers/hid/hid-multitouch.c
··· 264 264 static void mt_free_input_name(struct hid_input *hi) 265 265 { 266 266 struct hid_device *hdev = hi->report->device; 267 + const char *name = hi->input->name; 267 268 268 - if (hi->input->name != hdev->name) 269 - kfree(hi->input->name); 269 + if (name != hdev->name) { 270 + hi->input->name = hdev->name; 271 + kfree(name); 272 + } 270 273 } 271 274 272 275 static ssize_t mt_show_quirks(struct device *dev, ··· 1043 1040 struct hid_input *hi; 1044 1041 1045 1042 sysfs_remove_group(&hdev->dev.kobj, &mt_attribute_group); 1046 - hid_hw_stop(hdev); 1047 - 1048 1043 list_for_each_entry(hi, &hdev->inputs, list) 1049 1044 mt_free_input_name(hi); 1045 + 1046 + hid_hw_stop(hdev); 1050 1047 1051 1048 kfree(td); 1052 1049 hid_set_drvdata(hdev, NULL);
+50 -8
drivers/hwmon/adm1021.c
··· 331 331 man_id = i2c_smbus_read_byte_data(client, ADM1021_REG_MAN_ID); 332 332 dev_id = i2c_smbus_read_byte_data(client, ADM1021_REG_DEV_ID); 333 333 334 + if (man_id < 0 || dev_id < 0) 335 + return -ENODEV; 336 + 334 337 if (man_id == 0x4d && dev_id == 0x01) 335 338 type_name = "max1617a"; 336 339 else if (man_id == 0x41) { 337 340 if ((dev_id & 0xF0) == 0x30) 338 341 type_name = "adm1023"; 339 - else 342 + else if ((dev_id & 0xF0) == 0x00) 340 343 type_name = "adm1021"; 344 + else 345 + return -ENODEV; 341 346 } else if (man_id == 0x49) 342 347 type_name = "thmc10"; 343 348 else if (man_id == 0x23) 344 349 type_name = "gl523sm"; 345 350 else if (man_id == 0x54) 346 351 type_name = "mc1066"; 347 - /* LM84 Mfr ID in a different place, and it has more unused bits */ 348 - else if (conv_rate == 0x00 349 - && (config & 0x7F) == 0x00 350 - && (status & 0xAB) == 0x00) 351 - type_name = "lm84"; 352 - else 353 - type_name = "max1617"; 352 + else { 353 + int lte, rte, lhi, rhi, llo, rlo; 354 + 355 + /* extra checks for LM84 and MAX1617 to avoid misdetections */ 356 + 357 + llo = i2c_smbus_read_byte_data(client, ADM1021_REG_THYST_R(0)); 358 + rlo = i2c_smbus_read_byte_data(client, ADM1021_REG_THYST_R(1)); 359 + 360 + /* fail if any of the additional register reads failed */ 361 + if (llo < 0 || rlo < 0) 362 + return -ENODEV; 363 + 364 + lte = i2c_smbus_read_byte_data(client, ADM1021_REG_TEMP(0)); 365 + rte = i2c_smbus_read_byte_data(client, ADM1021_REG_TEMP(1)); 366 + lhi = i2c_smbus_read_byte_data(client, ADM1021_REG_TOS_R(0)); 367 + rhi = i2c_smbus_read_byte_data(client, ADM1021_REG_TOS_R(1)); 368 + 369 + /* 370 + * Fail for negative temperatures and negative high limits. 371 + * This check also catches read errors on the tested registers. 
372 + */ 373 + if ((s8)lte < 0 || (s8)rte < 0 || (s8)lhi < 0 || (s8)rhi < 0) 374 + return -ENODEV; 375 + 376 + /* fail if all registers hold the same value */ 377 + if (lte == rte && lte == lhi && lte == rhi && lte == llo 378 + && lte == rlo) 379 + return -ENODEV; 380 + 381 + /* 382 + * LM84 Mfr ID is in a different place, 383 + * and it has more unused bits. 384 + */ 385 + if (conv_rate == 0x00 386 + && (config & 0x7F) == 0x00 387 + && (status & 0xAB) == 0x00) { 388 + type_name = "lm84"; 389 + } else { 390 + /* fail if low limits are larger than high limits */ 391 + if ((s8)llo > lhi || (s8)rlo > rhi) 392 + return -ENODEV; 393 + type_name = "max1617"; 394 + } 395 + } 354 396 355 397 pr_debug("Detected chip %s at adapter %d, address 0x%02x.\n", 356 398 type_name, i2c_adapter_id(adapter), client->addr);
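The adm1021 hunk above hardens autodetection with plausibility checks rather than a definitive chip ID: bail out on failed SMBus reads, on negative temperatures or high limits, when every sampled register returns the same value (a tell-tale of a non-sensor device answering all addresses), and when a low limit sits above its high limit. A standalone predicate in the spirit of those checks, with illustrative names and the six sampled values passed in directly:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * lte/rte: local/remote temperature, lhi/rhi: high limits,
 * llo/rlo: low (hysteresis) limits. A negative argument means the
 * SMBus read itself failed, mirroring i2c_smbus_read_byte_data().
 */
static bool plausible_max1617(int lte, int rte, int lhi, int rhi,
			      int llo, int rlo)
{
	/* any failed register read disqualifies the chip */
	if (lte < 0 || rte < 0 || lhi < 0 || rhi < 0 || llo < 0 || rlo < 0)
		return false;

	/* negative temperatures or negative high limits are bogus */
	if ((signed char)lte < 0 || (signed char)rte < 0 ||
	    (signed char)lhi < 0 || (signed char)rhi < 0)
		return false;

	/* all registers holding one value: probably not a sensor */
	if (lte == rte && lte == lhi && lte == rhi &&
	    lte == llo && lte == rlo)
		return false;

	/* a low limit above its high limit is inconsistent */
	if ((signed char)llo > lhi || (signed char)rlo > rhi)
		return false;

	return true;
}
```

The (signed char) casts reinterpret the raw register byte as the two's-complement temperature the chip encodes, matching the driver's (s8) casts.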
+1 -1
drivers/infiniband/hw/qib/qib_keys.c
··· 61 61 if (dma_region) { 62 62 struct qib_mregion *tmr; 63 63 64 - tmr = rcu_dereference(dev->dma_mr); 64 + tmr = rcu_access_pointer(dev->dma_mr); 65 65 if (!tmr) { 66 66 qib_get_mr(mr); 67 67 rcu_assign_pointer(dev->dma_mr, mr);
+1
drivers/infiniband/ulp/iser/iscsi_iser.c
··· 5 5 * Copyright (C) 2004 Alex Aizman 6 6 * Copyright (C) 2005 Mike Christie 7 7 * Copyright (c) 2005, 2006 Voltaire, Inc. All rights reserved. 8 + * Copyright (c) 2013 Mellanox Technologies. All rights reserved. 8 9 * maintained by openib-general@openib.org 9 10 * 10 11 * This software is available to you under a choice of one of two
+1
drivers/infiniband/ulp/iser/iscsi_iser.h
··· 8 8 * 9 9 * Copyright (c) 2004, 2005, 2006 Voltaire, Inc. All rights reserved. 10 10 * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved. 11 + * Copyright (c) 2013 Mellanox Technologies. All rights reserved. 11 12 * 12 13 * This software is available to you under a choice of one of two 13 14 * licenses. You may choose to be licensed under the terms of the GNU
+1
drivers/infiniband/ulp/iser/iser_initiator.c
··· 1 1 /* 2 2 * Copyright (c) 2004, 2005, 2006 Voltaire, Inc. All rights reserved. 3 + * Copyright (c) 2013 Mellanox Technologies. All rights reserved. 3 4 * 4 5 * This software is available to you under a choice of one of two 5 6 * licenses. You may choose to be licensed under the terms of the GNU
+1
drivers/infiniband/ulp/iser/iser_memory.c
··· 1 1 /* 2 2 * Copyright (c) 2004, 2005, 2006 Voltaire, Inc. All rights reserved. 3 + * Copyright (c) 2013 Mellanox Technologies. All rights reserved. 3 4 * 4 5 * This software is available to you under a choice of one of two 5 6 * licenses. You may choose to be licensed under the terms of the GNU
+9 -7
drivers/infiniband/ulp/iser/iser_verbs.c
··· 1 1 /* 2 2 * Copyright (c) 2004, 2005, 2006 Voltaire, Inc. All rights reserved. 3 3 * Copyright (c) 2005, 2006 Cisco Systems. All rights reserved. 4 + * Copyright (c) 2013 Mellanox Technologies. All rights reserved. 4 5 * 5 6 * This software is available to you under a choice of one of two 6 7 * licenses. You may choose to be licensed under the terms of the GNU ··· 293 292 } 294 293 295 294 /** 296 - * releases the FMR pool, QP and CMA ID objects, returns 0 on success, 295 + * releases the FMR pool and QP objects, returns 0 on success, 297 296 * -1 on failure 298 297 */ 299 - static int iser_free_ib_conn_res(struct iser_conn *ib_conn, int can_destroy_id) 298 + static int iser_free_ib_conn_res(struct iser_conn *ib_conn) 300 299 { 301 300 int cq_index; 302 301 BUG_ON(ib_conn == NULL); ··· 315 314 316 315 rdma_destroy_qp(ib_conn->cma_id); 317 316 } 318 - /* if cma handler context, the caller acts s.t the cma destroy the id */ 319 - if (ib_conn->cma_id != NULL && can_destroy_id) 320 - rdma_destroy_id(ib_conn->cma_id); 321 317 322 318 ib_conn->fmr_pool = NULL; 323 319 ib_conn->qp = NULL; 324 - ib_conn->cma_id = NULL; 325 320 kfree(ib_conn->page_vec); 326 321 327 322 if (ib_conn->login_buf) { ··· 412 415 list_del(&ib_conn->conn_list); 413 416 mutex_unlock(&ig.connlist_mutex); 414 417 iser_free_rx_descriptors(ib_conn); 415 - iser_free_ib_conn_res(ib_conn, can_destroy_id); 418 + iser_free_ib_conn_res(ib_conn); 416 419 ib_conn->device = NULL; 417 420 /* on EVENT_ADDR_ERROR there's no device yet for this conn */ 418 421 if (device != NULL) 419 422 iser_device_try_release(device); 423 + /* if cma handler context, the caller actually destroy the id */ 424 + if (ib_conn->cma_id != NULL && can_destroy_id) { 425 + rdma_destroy_id(ib_conn->cma_id); 426 + ib_conn->cma_id = NULL; 427 + } 420 428 iscsi_destroy_endpoint(ib_conn->ep); 421 429 } 422 430
+4 -10
drivers/irqchip/irq-mxs.c
··· 76 76 { 77 77 u32 irqnr; 78 78 79 - do { 80 - irqnr = __raw_readl(icoll_base + HW_ICOLL_STAT_OFFSET); 81 - if (irqnr != 0x7f) { 82 - __raw_writel(irqnr, icoll_base + HW_ICOLL_VECTOR); 83 - irqnr = irq_find_mapping(icoll_domain, irqnr); 84 - handle_IRQ(irqnr, regs); 85 - continue; 86 - } 87 - break; 88 - } while (1); 79 + irqnr = __raw_readl(icoll_base + HW_ICOLL_STAT_OFFSET); 80 + __raw_writel(irqnr, icoll_base + HW_ICOLL_VECTOR); 81 + irqnr = irq_find_mapping(icoll_domain, irqnr); 82 + handle_IRQ(irqnr, regs); 89 83 } 90 84 91 85 static int icoll_irq_domain_map(struct irq_domain *d, unsigned int virq,
+1 -1
drivers/irqchip/irq-versatile-fpga.c
··· 119 119 120 120 /* Skip invalid IRQs, only register handlers for the real ones */ 121 121 if (!(f->valid & BIT(hwirq))) 122 - return -ENOTSUPP; 122 + return -EPERM; 123 123 irq_set_chip_data(irq, f); 124 124 irq_set_chip_and_handler(irq, &f->chip, 125 125 handle_level_irq);
+1 -1
drivers/irqchip/irq-vic.c
··· 197 197 198 198 /* Skip invalid IRQs, only register handlers for the real ones */ 199 199 if (!(v->valid_sources & (1 << hwirq))) 200 - return -ENOTSUPP; 200 + return -EPERM; 201 201 irq_set_chip_and_handler(irq, &vic_chip, handle_level_irq); 202 202 irq_set_chip_data(irq, v->base); 203 203 set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
-1
drivers/md/bcache/Kconfig
··· 1 1 2 2 config BCACHE 3 3 tristate "Block device as cache" 4 - select CLOSURES 5 4 ---help--- 6 5 Allows a block device to be used as cache for other devices; uses 7 6 a btree for indexing and the layout is optimized for SSDs.
+1 -1
drivers/md/bcache/bcache.h
··· 1241 1241 struct cache_set *bch_cache_set_alloc(struct cache_sb *); 1242 1242 void bch_btree_cache_free(struct cache_set *); 1243 1243 int bch_btree_cache_alloc(struct cache_set *); 1244 - void bch_writeback_init_cached_dev(struct cached_dev *); 1244 + void bch_cached_dev_writeback_init(struct cached_dev *); 1245 1245 void bch_moving_init_cache_set(struct cache_set *); 1246 1246 1247 1247 void bch_cache_allocator_exit(struct cache *ca);
+16 -18
drivers/md/bcache/stats.c
··· 93 93 }; 94 94 static KTYPE(bch_stats); 95 95 96 - static void scale_accounting(unsigned long data); 97 - 98 - void bch_cache_accounting_init(struct cache_accounting *acc, 99 - struct closure *parent) 100 - { 101 - kobject_init(&acc->total.kobj, &bch_stats_ktype); 102 - kobject_init(&acc->five_minute.kobj, &bch_stats_ktype); 103 - kobject_init(&acc->hour.kobj, &bch_stats_ktype); 104 - kobject_init(&acc->day.kobj, &bch_stats_ktype); 105 - 106 - closure_init(&acc->cl, parent); 107 - init_timer(&acc->timer); 108 - acc->timer.expires = jiffies + accounting_delay; 109 - acc->timer.data = (unsigned long) acc; 110 - acc->timer.function = scale_accounting; 111 - add_timer(&acc->timer); 112 - } 113 - 114 96 int bch_cache_accounting_add_kobjs(struct cache_accounting *acc, 115 97 struct kobject *parent) 116 98 { ··· 225 243 struct cached_dev *dc = container_of(s->d, struct cached_dev, disk); 226 244 atomic_add(sectors, &dc->accounting.collector.sectors_bypassed); 227 245 atomic_add(sectors, &s->op.c->accounting.collector.sectors_bypassed); 246 + } 247 + 248 + void bch_cache_accounting_init(struct cache_accounting *acc, 249 + struct closure *parent) 250 + { 251 + kobject_init(&acc->total.kobj, &bch_stats_ktype); 252 + kobject_init(&acc->five_minute.kobj, &bch_stats_ktype); 253 + kobject_init(&acc->hour.kobj, &bch_stats_ktype); 254 + kobject_init(&acc->day.kobj, &bch_stats_ktype); 255 + 256 + closure_init(&acc->cl, parent); 257 + init_timer(&acc->timer); 258 + acc->timer.expires = jiffies + accounting_delay; 259 + acc->timer.data = (unsigned long) acc; 260 + acc->timer.function = scale_accounting; 261 + add_timer(&acc->timer); 228 262 }
+82 -103
drivers/md/bcache/super.c
··· 634 634 return 0; 635 635 } 636 636 637 - static int release_dev(struct gendisk *b, fmode_t mode) 637 + static void release_dev(struct gendisk *b, fmode_t mode) 638 638 { 639 639 struct bcache_device *d = b->private_data; 640 640 closure_put(&d->cl); 641 - return 0; 642 641 } 643 642 644 643 static int ioctl_dev(struct block_device *b, fmode_t mode, ··· 731 732 732 733 if (d->c) 733 734 bcache_device_detach(d); 734 - 735 - if (d->disk) 735 + if (d->disk && d->disk->flags & GENHD_FL_UP) 736 736 del_gendisk(d->disk); 737 737 if (d->disk && d->disk->queue) 738 738 blk_cleanup_queue(d->disk->queue); ··· 754 756 if (!(d->bio_split = bioset_create(4, offsetof(struct bbio, bio))) || 755 757 !(d->unaligned_bvec = mempool_create_kmalloc_pool(1, 756 758 sizeof(struct bio_vec) * BIO_MAX_PAGES)) || 757 - bio_split_pool_init(&d->bio_split_hook)) 758 - 759 - return -ENOMEM; 760 - 761 - d->disk = alloc_disk(1); 762 - if (!d->disk) 759 + bio_split_pool_init(&d->bio_split_hook) || 760 + !(d->disk = alloc_disk(1)) || 761 + !(q = blk_alloc_queue(GFP_KERNEL))) 763 762 return -ENOMEM; 764 763 765 764 snprintf(d->disk->disk_name, DISK_NAME_LEN, "bcache%i", bcache_minor); ··· 765 770 d->disk->first_minor = bcache_minor++; 766 771 d->disk->fops = &bcache_ops; 767 772 d->disk->private_data = d; 768 - 769 - q = blk_alloc_queue(GFP_KERNEL); 770 - if (!q) 771 - return -ENOMEM; 772 773 773 774 blk_queue_make_request(q, NULL); 774 775 d->disk->queue = q; ··· 990 999 991 1000 mutex_lock(&bch_register_lock); 992 1001 993 - bd_unlink_disk_holder(dc->bdev, dc->disk.disk); 1002 + if (atomic_read(&dc->running)) 1003 + bd_unlink_disk_holder(dc->bdev, dc->disk.disk); 994 1004 bcache_device_free(&dc->disk); 995 1005 list_del(&dc->list); 996 1006 997 1007 mutex_unlock(&bch_register_lock); 998 1008 999 1009 if (!IS_ERR_OR_NULL(dc->bdev)) { 1000 - blk_sync_queue(bdev_get_queue(dc->bdev)); 1010 + if (dc->bdev->bd_disk) 1011 + blk_sync_queue(bdev_get_queue(dc->bdev)); 1012 + 1001 1013 
blkdev_put(dc->bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL); 1002 1014 } 1003 1015 ··· 1022 1028 1023 1029 static int cached_dev_init(struct cached_dev *dc, unsigned block_size) 1024 1030 { 1025 - int err; 1031 + int ret; 1026 1032 struct io *io; 1027 - 1028 - closure_init(&dc->disk.cl, NULL); 1029 - set_closure_fn(&dc->disk.cl, cached_dev_flush, system_wq); 1033 + struct request_queue *q = bdev_get_queue(dc->bdev); 1030 1034 1031 1035 __module_get(THIS_MODULE); 1032 1036 INIT_LIST_HEAD(&dc->list); 1037 + closure_init(&dc->disk.cl, NULL); 1038 + set_closure_fn(&dc->disk.cl, cached_dev_flush, system_wq); 1033 1039 kobject_init(&dc->disk.kobj, &bch_cached_dev_ktype); 1034 - 1035 - bch_cache_accounting_init(&dc->accounting, &dc->disk.cl); 1036 - 1037 - err = bcache_device_init(&dc->disk, block_size); 1038 - if (err) 1039 - goto err; 1040 - 1041 - spin_lock_init(&dc->io_lock); 1042 - closure_init_unlocked(&dc->sb_write); 1043 1040 INIT_WORK(&dc->detach, cached_dev_detach_finish); 1041 + closure_init_unlocked(&dc->sb_write); 1042 + INIT_LIST_HEAD(&dc->io_lru); 1043 + spin_lock_init(&dc->io_lock); 1044 + bch_cache_accounting_init(&dc->accounting, &dc->disk.cl); 1044 1045 1045 1046 dc->sequential_merge = true; 1046 1047 dc->sequential_cutoff = 4 << 20; 1047 - 1048 - INIT_LIST_HEAD(&dc->io_lru); 1049 - dc->sb_bio.bi_max_vecs = 1; 1050 - dc->sb_bio.bi_io_vec = dc->sb_bio.bi_inline_vecs; 1051 1048 1052 1049 for (io = dc->io; io < dc->io + RECENT_IO; io++) { 1053 1050 list_add(&io->lru, &dc->io_lru); 1054 1051 hlist_add_head(&io->hash, dc->io_hash + RECENT_IO); 1055 1052 } 1056 1053 1057 - bch_writeback_init_cached_dev(dc); 1054 + ret = bcache_device_init(&dc->disk, block_size); 1055 + if (ret) 1056 + return ret; 1057 + 1058 + set_capacity(dc->disk.disk, 1059 + dc->bdev->bd_part->nr_sects - dc->sb.data_offset); 1060 + 1061 + dc->disk.disk->queue->backing_dev_info.ra_pages = 1062 + max(dc->disk.disk->queue->backing_dev_info.ra_pages, 1063 + q->backing_dev_info.ra_pages); 1064 + 
1065 + bch_cached_dev_request_init(dc); 1066 + bch_cached_dev_writeback_init(dc); 1058 1067 return 0; 1059 - err: 1060 - bcache_device_stop(&dc->disk); 1061 - return err; 1062 1068 } 1063 1069 1064 1070 /* Cached device - bcache superblock */ 1065 1071 1066 - static const char *register_bdev(struct cache_sb *sb, struct page *sb_page, 1072 + static void register_bdev(struct cache_sb *sb, struct page *sb_page, 1067 1073 struct block_device *bdev, 1068 1074 struct cached_dev *dc) 1069 1075 { 1070 1076 char name[BDEVNAME_SIZE]; 1071 1077 const char *err = "cannot allocate memory"; 1072 - struct gendisk *g; 1073 1078 struct cache_set *c; 1074 1079 1075 - if (!dc || cached_dev_init(dc, sb->block_size << 9) != 0) 1076 - return err; 1077 - 1078 1080 memcpy(&dc->sb, sb, sizeof(struct cache_sb)); 1079 - dc->sb_bio.bi_io_vec[0].bv_page = sb_page; 1080 1081 dc->bdev = bdev; 1081 1082 dc->bdev->bd_holder = dc; 1082 1083 1083 - g = dc->disk.disk; 1084 + bio_init(&dc->sb_bio); 1085 + dc->sb_bio.bi_max_vecs = 1; 1086 + dc->sb_bio.bi_io_vec = dc->sb_bio.bi_inline_vecs; 1087 + dc->sb_bio.bi_io_vec[0].bv_page = sb_page; 1088 + get_page(sb_page); 1084 1089 1085 - set_capacity(g, dc->bdev->bd_part->nr_sects - dc->sb.data_offset); 1086 - 1087 - g->queue->backing_dev_info.ra_pages = 1088 - max(g->queue->backing_dev_info.ra_pages, 1089 - bdev->bd_queue->backing_dev_info.ra_pages); 1090 - 1091 - bch_cached_dev_request_init(dc); 1090 + if (cached_dev_init(dc, sb->block_size << 9)) 1091 + goto err; 1092 1092 1093 1093 err = "error creating kobject"; 1094 1094 if (kobject_add(&dc->disk.kobj, &part_to_dev(bdev->bd_part)->kobj, ··· 1090 1102 goto err; 1091 1103 if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj)) 1092 1104 goto err; 1105 + 1106 + pr_info("registered backing device %s", bdevname(bdev, name)); 1093 1107 1094 1108 list_add(&dc->list, &uncached_devices); 1095 1109 list_for_each_entry(c, &bch_cache_sets, list) ··· 1101 1111 BDEV_STATE(&dc->sb) == BDEV_STATE_STALE) 
1102 1112 bch_cached_dev_run(dc); 1103 1113 1104 - return NULL; 1114 + return; 1105 1115 err: 1106 - kobject_put(&dc->disk.kobj); 1107 1116 pr_notice("error opening %s: %s", bdevname(bdev, name), err); 1108 - /* 1109 - * Return NULL instead of an error because kobject_put() cleans 1110 - * everything up 1111 - */ 1112 - return NULL; 1117 + bcache_device_stop(&dc->disk); 1113 1118 } 1114 1119 1115 1120 /* Flash only volumes */ ··· 1702 1717 size_t free; 1703 1718 struct bucket *b; 1704 1719 1705 - if (!ca) 1706 - return -ENOMEM; 1707 - 1708 1720 __module_get(THIS_MODULE); 1709 1721 kobject_init(&ca->kobj, &bch_cache_ktype); 1710 1722 1711 - memcpy(&ca->sb, sb, sizeof(struct cache_sb)); 1712 - 1713 1723 INIT_LIST_HEAD(&ca->discards); 1714 - 1715 - bio_init(&ca->sb_bio); 1716 - ca->sb_bio.bi_max_vecs = 1; 1717 - ca->sb_bio.bi_io_vec = ca->sb_bio.bi_inline_vecs; 1718 1724 1719 1725 bio_init(&ca->journal.bio); 1720 1726 ca->journal.bio.bi_max_vecs = 8; ··· 1718 1742 !init_fifo(&ca->free_inc, free << 2, GFP_KERNEL) || 1719 1743 !init_fifo(&ca->unused, free << 2, GFP_KERNEL) || 1720 1744 !init_heap(&ca->heap, free << 3, GFP_KERNEL) || 1721 - !(ca->buckets = vmalloc(sizeof(struct bucket) * 1745 + !(ca->buckets = vzalloc(sizeof(struct bucket) * 1722 1746 ca->sb.nbuckets)) || 1723 1747 !(ca->prio_buckets = kzalloc(sizeof(uint64_t) * prio_buckets(ca) * 1724 1748 2, GFP_KERNEL)) || 1725 1749 !(ca->disk_buckets = alloc_bucket_pages(GFP_KERNEL, ca)) || 1726 1750 !(ca->alloc_workqueue = alloc_workqueue("bch_allocator", 0, 1)) || 1727 1751 bio_split_pool_init(&ca->bio_split_hook)) 1728 - goto err; 1752 + return -ENOMEM; 1729 1753 1730 1754 ca->prio_last_buckets = ca->prio_buckets + prio_buckets(ca); 1731 1755 1732 - memset(ca->buckets, 0, ca->sb.nbuckets * sizeof(struct bucket)); 1733 1756 for_each_bucket(b, ca) 1734 1757 atomic_set(&b->pin, 0); 1735 1758 ··· 1741 1766 return -ENOMEM; 1742 1767 } 1743 1768 1744 - static const char *register_cache(struct cache_sb *sb, struct page 
*sb_page, 1769 + static void register_cache(struct cache_sb *sb, struct page *sb_page, 1745 1770 struct block_device *bdev, struct cache *ca) 1746 1771 { 1747 1772 char name[BDEVNAME_SIZE]; 1748 1773 const char *err = "cannot allocate memory"; 1749 1774 1750 - if (cache_alloc(sb, ca) != 0) 1751 - return err; 1752 - 1753 - ca->sb_bio.bi_io_vec[0].bv_page = sb_page; 1775 + memcpy(&ca->sb, sb, sizeof(struct cache_sb)); 1754 1776 ca->bdev = bdev; 1755 1777 ca->bdev->bd_holder = ca; 1756 1778 1779 + bio_init(&ca->sb_bio); 1780 + ca->sb_bio.bi_max_vecs = 1; 1781 + ca->sb_bio.bi_io_vec = ca->sb_bio.bi_inline_vecs; 1782 + ca->sb_bio.bi_io_vec[0].bv_page = sb_page; 1783 + get_page(sb_page); 1784 + 1757 1785 if (blk_queue_discard(bdev_get_queue(ca->bdev))) 1758 1786 ca->discard = CACHE_DISCARD(&ca->sb); 1787 + 1788 + if (cache_alloc(sb, ca) != 0) 1789 + goto err; 1759 1790 1760 1791 err = "error creating kobject"; 1761 1792 if (kobject_add(&ca->kobj, &part_to_dev(bdev->bd_part)->kobj, "bcache")) ··· 1772 1791 goto err; 1773 1792 1774 1793 pr_info("registered cache device %s", bdevname(bdev, name)); 1775 - 1776 - return NULL; 1794 + return; 1777 1795 err: 1796 + pr_notice("error opening %s: %s", bdevname(bdev, name), err); 1778 1797 kobject_put(&ca->kobj); 1779 - pr_info("error opening %s: %s", bdevname(bdev, name), err); 1780 - /* Return NULL instead of an error because kobject_put() cleans 1781 - * everything up 1782 - */ 1783 - return NULL; 1784 1798 } 1785 1799 1786 1800 /* Global interfaces/init */ ··· 1809 1833 bdev = blkdev_get_by_path(strim(path), 1810 1834 FMODE_READ|FMODE_WRITE|FMODE_EXCL, 1811 1835 sb); 1812 - if (bdev == ERR_PTR(-EBUSY)) 1813 - err = "device busy"; 1814 - 1815 - if (IS_ERR(bdev) || 1816 - set_blocksize(bdev, 4096)) 1836 + if (IS_ERR(bdev)) { 1837 + if (bdev == ERR_PTR(-EBUSY)) 1838 + err = "device busy"; 1817 1839 goto err; 1840 + } 1841 + 1842 + err = "failed to set blocksize"; 1843 + if (set_blocksize(bdev, 4096)) 1844 + goto err_close; 1818 
1845 1819 1846 err = read_super(sb, bdev, &sb_page); 1820 1847 if (err) ··· 1825 1846 1826 1847 if (SB_IS_BDEV(sb)) { 1827 1848 struct cached_dev *dc = kzalloc(sizeof(*dc), GFP_KERNEL); 1849 + if (!dc) 1850 + goto err_close; 1828 1851 1829 - err = register_bdev(sb, sb_page, bdev, dc); 1852 + register_bdev(sb, sb_page, bdev, dc); 1830 1853 } else { 1831 1854 struct cache *ca = kzalloc(sizeof(*ca), GFP_KERNEL); 1855 + if (!ca) 1856 + goto err_close; 1832 1857 1833 - err = register_cache(sb, sb_page, bdev, ca); 1858 + register_cache(sb, sb_page, bdev, ca); 1834 1859 } 1835 - 1836 - if (err) { 1837 - /* register_(bdev|cache) will only return an error if they 1838 - * didn't get far enough to create the kobject - if they did, 1839 - * the kobject destructor will do this cleanup. 1840 - */ 1860 + out: 1861 + if (sb_page) 1841 1862 put_page(sb_page); 1842 - err_close: 1843 - blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL); 1844 - err: 1845 - if (attr != &ksysfs_register_quiet) 1846 - pr_info("error opening %s: %s", path, err); 1847 - ret = -EINVAL; 1848 - } 1849 - 1850 1863 kfree(sb); 1851 1864 kfree(path); 1852 1865 mutex_unlock(&bch_register_lock); 1853 1866 module_put(THIS_MODULE); 1854 1867 return ret; 1868 + 1869 + err_close: 1870 + blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL); 1871 + err: 1872 + if (attr != &ksysfs_register_quiet) 1873 + pr_info("error opening %s: %s", path, err); 1874 + ret = -EINVAL; 1875 + goto out; 1855 1876 } 1856 1877 1857 1878 static int bcache_reboot(struct notifier_block *n, unsigned long code, void *x)
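The register_bcache() exit-path rework above is the standard C staged-cleanup idiom the kernel favors: later failures jump to labels that release only what was actually acquired, and success and failure funnel through one `out:` that frees the unconditional allocations. A reduced sketch under hypothetical acquire/release steps (none of these helpers are bcache's):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Counters so the cleanup ordering can be verified. */
static int device_open, device_closed;

static bool open_device(bool fail)
{
	if (fail)
		return false;
	device_open++;
	return true;
}

static void close_device(void)
{
	device_closed++;
}

/* An exit-path layout shaped like the reworked register_bcache(). */
static int do_register(bool fail_open, bool fail_parse)
{
	int ret = 0;
	char *sb = malloc(64);		/* unconditional allocation */

	if (!sb)
		return -1;

	if (!open_device(fail_open))
		goto err;		/* device never opened */

	if (fail_parse)
		goto err_close;		/* opened: must be closed */

	/* ... successful registration would happen here ... */
out:
	free(sb);			/* released on every path */
	return ret;

err_close:
	close_device();
err:
	ret = -1;
	goto out;
}
```

Each label releases exactly the resources live at the failure point, which is why the patch moves `blkdev_put()` under `err_close:` and makes the error reporting fall through to the common `out:`.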
+1 -1
drivers/md/bcache/writeback.c
··· 375 375 refill_dirty(cl); 376 376 } 377 377 378 - void bch_writeback_init_cached_dev(struct cached_dev *dc) 378 + void bch_cached_dev_writeback_init(struct cached_dev *dc) 379 379 { 380 380 closure_init_unlocked(&dc->writeback); 381 381 init_rwsem(&dc->writeback_lock);
+1 -1
drivers/md/md.c
··· 5268 5268 5269 5269 static void __md_stop_writes(struct mddev *mddev) 5270 5270 { 5271 + set_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 5271 5272 if (mddev->sync_thread) { 5272 - set_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 5273 5273 set_bit(MD_RECOVERY_INTR, &mddev->recovery); 5274 5274 md_reap_sync_thread(mddev); 5275 5275 }
+24 -14
drivers/md/raid1.c
··· 417 417 418 418 r1_bio->bios[mirror] = NULL; 419 419 to_put = bio; 420 - set_bit(R1BIO_Uptodate, &r1_bio->state); 420 + /* 421 + * Do not set R1BIO_Uptodate if the current device is 422 + * rebuilding or Faulty. This is because we cannot use 423 + * such device for properly reading the data back (we could 424 + * potentially use it, if the current write would have felt 425 + * before rdev->recovery_offset, but for simplicity we don't 426 + * check this here. 427 + */ 428 + if (test_bit(In_sync, &conf->mirrors[mirror].rdev->flags) && 429 + !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags)) 430 + set_bit(R1BIO_Uptodate, &r1_bio->state); 421 431 422 432 /* Maybe we can clear some bad blocks. */ 423 433 if (is_badblock(conf->mirrors[mirror].rdev, ··· 880 870 wake_up(&conf->wait_barrier); 881 871 } 882 872 883 - static void freeze_array(struct r1conf *conf) 873 + static void freeze_array(struct r1conf *conf, int extra) 884 874 { 885 875 /* stop syncio and normal IO and wait for everything to 886 876 * go quite. 887 877 * We increment barrier and nr_waiting, and then 888 - * wait until nr_pending match nr_queued+1 878 + * wait until nr_pending match nr_queued+extra 889 879 * This is called in the context of one normal IO request 890 880 * that has failed. Thus any sync request that might be pending 891 881 * will be blocked by nr_pending, and we need to wait for 892 882 * pending IO requests to complete or be queued for re-try. 893 - * Thus the number queued (nr_queued) plus this request (1) 883 + * Thus the number queued (nr_queued) plus this request (extra) 894 884 * must match the number of pending IOs (nr_pending) before 895 885 * we continue. 
896 886 */ ··· 898 888 conf->barrier++; 899 889 conf->nr_waiting++; 900 890 wait_event_lock_irq_cmd(conf->wait_barrier, 901 - conf->nr_pending == conf->nr_queued+1, 891 + conf->nr_pending == conf->nr_queued+extra, 902 892 conf->resync_lock, 903 893 flush_pending_writes(conf)); 904 894 spin_unlock_irq(&conf->resync_lock); ··· 1554 1544 * we wait for all outstanding requests to complete. 1555 1545 */ 1556 1546 synchronize_sched(); 1557 - raise_barrier(conf); 1558 - lower_barrier(conf); 1547 + freeze_array(conf, 0); 1548 + unfreeze_array(conf); 1559 1549 clear_bit(Unmerged, &rdev->flags); 1560 1550 } 1561 1551 md_integrity_add_rdev(rdev, mddev); ··· 1605 1595 */ 1606 1596 struct md_rdev *repl = 1607 1597 conf->mirrors[conf->raid_disks + number].rdev; 1608 - raise_barrier(conf); 1598 + freeze_array(conf, 0); 1609 1599 clear_bit(Replacement, &repl->flags); 1610 1600 p->rdev = repl; 1611 1601 conf->mirrors[conf->raid_disks + number].rdev = NULL; 1612 - lower_barrier(conf); 1602 + unfreeze_array(conf); 1613 1603 clear_bit(WantReplacement, &rdev->flags); 1614 1604 } else 1615 1605 clear_bit(WantReplacement, &rdev->flags); ··· 2205 2195 * frozen 2206 2196 */ 2207 2197 if (mddev->ro == 0) { 2208 - freeze_array(conf); 2198 + freeze_array(conf, 1); 2209 2199 fix_read_error(conf, r1_bio->read_disk, 2210 2200 r1_bio->sector, r1_bio->sectors); 2211 2201 unfreeze_array(conf); ··· 2790 2780 return PTR_ERR(conf); 2791 2781 2792 2782 if (mddev->queue) 2793 - blk_queue_max_write_same_sectors(mddev->queue, 2794 - mddev->chunk_sectors); 2783 + blk_queue_max_write_same_sectors(mddev->queue, 0); 2784 + 2795 2785 rdev_for_each(rdev, mddev) { 2796 2786 if (!mddev->gendisk) 2797 2787 continue; ··· 2973 2963 return -ENOMEM; 2974 2964 } 2975 2965 2976 - raise_barrier(conf); 2966 + freeze_array(conf, 0); 2977 2967 2978 2968 /* ok, everything is stopped */ 2979 2969 oldpool = conf->r1bio_pool; ··· 3004 2994 conf->raid_disks = mddev->raid_disks = raid_disks; 3005 2995 mddev->delta_disks = 0; 3006 
2996 3007 - lower_barrier(conf); 2997 + unfreeze_array(conf); 3008 2998 3009 2999 set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); 3010 3000 md_wakeup_thread(mddev->thread);
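The completion-path change above (mirrored for raid10 below) narrows when a successful write may set R1BIO_Uptodate: only a device that is In_sync and not Faulty can later serve the data back, so a write that landed only on a rebuilding mirror must not mark the whole r1_bio uptodate. A small sketch of that decision, with illustrative flag names:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative per-device flags, mirroring In_sync/Faulty above. */
enum { DEV_IN_SYNC = 1 << 0, DEV_FAULTY = 1 << 1 };

/* A completed write counts only if the device can serve reads. */
static bool write_counts_as_uptodate(unsigned dev_flags)
{
	return (dev_flags & DEV_IN_SYNC) && !(dev_flags & DEV_FAULTY);
}

/* Over a set of mirrors: uptodate if any qualifying write succeeded. */
static bool bio_uptodate(const unsigned *dev_flags, const bool *write_ok,
			 int ndevs)
{
	for (int i = 0; i < ndevs; i++)
		if (write_ok[i] && write_counts_as_uptodate(dev_flags[i]))
			return true;
	return false;
}
```

As the patch comment notes, a rebuilding device could in principle count when the write fell before rdev->recovery_offset, but the kernel deliberately skips that refinement for simplicity, and so does this sketch.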
+19 -10
drivers/md/raid10.c
··· 490 490 sector_t first_bad; 491 491 int bad_sectors; 492 492 493 - set_bit(R10BIO_Uptodate, &r10_bio->state); 493 + /* 494 + * Do not set R10BIO_Uptodate if the current device is 495 + * rebuilding or Faulty. This is because we cannot use 496 + * such device for properly reading the data back (we could 497 + * potentially use it, if the current write would have felt 498 + * before rdev->recovery_offset, but for simplicity we don't 499 + * check this here. 500 + */ 501 + if (test_bit(In_sync, &rdev->flags) && 502 + !test_bit(Faulty, &rdev->flags)) 503 + set_bit(R10BIO_Uptodate, &r10_bio->state); 494 504 495 505 /* Maybe we can clear some bad blocks. */ 496 506 if (is_badblock(rdev, ··· 1065 1055 wake_up(&conf->wait_barrier); 1066 1056 } 1067 1057 1068 - static void freeze_array(struct r10conf *conf) 1058 + static void freeze_array(struct r10conf *conf, int extra) 1069 1059 { 1070 1060 /* stop syncio and normal IO and wait for everything to 1071 1061 * go quiet. 1072 1062 * We increment barrier and nr_waiting, and then 1073 - * wait until nr_pending match nr_queued+1 1063 + * wait until nr_pending match nr_queued+extra 1074 1064 * This is called in the context of one normal IO request 1075 1065 * that has failed. Thus any sync request that might be pending 1076 1066 * will be blocked by nr_pending, and we need to wait for 1077 1067 * pending IO requests to complete or be queued for re-try. 1078 - * Thus the number queued (nr_queued) plus this request (1) 1068 + * Thus the number queued (nr_queued) plus this request (extra) 1079 1069 * must match the number of pending IOs (nr_pending) before 1080 1070 * we continue. 
1081 1071 */ ··· 1083 1073 conf->barrier++; 1084 1074 conf->nr_waiting++; 1085 1075 wait_event_lock_irq_cmd(conf->wait_barrier, 1086 - conf->nr_pending == conf->nr_queued+1, 1076 + conf->nr_pending == conf->nr_queued+extra, 1087 1077 conf->resync_lock, 1088 1078 flush_pending_writes(conf)); 1089 1079 ··· 1847 1837 * we wait for all outstanding requests to complete. 1848 1838 */ 1849 1839 synchronize_sched(); 1850 - raise_barrier(conf, 0); 1851 - lower_barrier(conf); 1840 + freeze_array(conf, 0); 1841 + unfreeze_array(conf); 1852 1842 clear_bit(Unmerged, &rdev->flags); 1853 1843 } 1854 1844 md_integrity_add_rdev(rdev, mddev); ··· 2622 2612 r10_bio->devs[slot].bio = NULL; 2623 2613 2624 2614 if (mddev->ro == 0) { 2625 - freeze_array(conf); 2615 + freeze_array(conf, 1); 2626 2616 fix_read_error(conf, mddev, r10_bio); 2627 2617 unfreeze_array(conf); 2628 2618 } else ··· 3619 3609 if (mddev->queue) { 3620 3610 blk_queue_max_discard_sectors(mddev->queue, 3621 3611 mddev->chunk_sectors); 3622 - blk_queue_max_write_same_sectors(mddev->queue, 3623 - mddev->chunk_sectors); 3612 + blk_queue_max_write_same_sectors(mddev->queue, 0); 3624 3613 blk_queue_io_min(mddev->queue, chunk_size); 3625 3614 if (conf->geo.raid_disks % conf->geo.near_copies) 3626 3615 blk_queue_io_opt(mddev->queue, chunk_size * conf->geo.raid_disks);
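The raid10 end-of-write gating added above reduces to a flag check; the bit values below are illustrative, not the kernel's actual rdev flag layout:

```c
#include <stdbool.h>

/* illustrative flag bits; the kernel uses test_bit() on rdev->flags */
#define IN_SYNC	(1ul << 0)
#define FAULTY	(1ul << 1)

/* Only a device that is In_sync and not Faulty may mark the write as
 * up to date, since a rebuilding or failed device cannot be trusted
 * to read the data back. */
static bool may_set_uptodate(unsigned long rdev_flags)
{
	return (rdev_flags & IN_SYNC) && !(rdev_flags & FAULTY);
}
```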
+5 -1
drivers/md/raid5.c
··· 664 664 if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) 665 665 bi->bi_rw |= REQ_FLUSH; 666 666 667 + bi->bi_vcnt = 1; 667 668 bi->bi_io_vec[0].bv_len = STRIPE_SIZE; 668 669 bi->bi_io_vec[0].bv_offset = 0; 669 670 bi->bi_size = STRIPE_SIZE; ··· 702 701 else 703 702 rbi->bi_sector = (sh->sector 704 703 + rrdev->data_offset); 704 + rbi->bi_vcnt = 1; 705 705 rbi->bi_io_vec[0].bv_len = STRIPE_SIZE; 706 706 rbi->bi_io_vec[0].bv_offset = 0; 707 707 rbi->bi_size = STRIPE_SIZE; ··· 5466 5464 if (mddev->major_version == 0 && 5467 5465 mddev->minor_version > 90) 5468 5466 rdev->recovery_offset = reshape_offset; 5469 - 5467 + 5470 5468 if (rdev->recovery_offset < reshape_offset) { 5471 5469 /* We need to check old and new layout */ 5472 5470 if (!only_parity(rdev->raid_disk, ··· 5588 5586 * guarantee discard_zerors_data 5589 5587 */ 5590 5588 mddev->queue->limits.discard_zeroes_data = 0; 5589 + 5590 + blk_queue_max_write_same_sectors(mddev->queue, 0); 5591 5591 5592 5592 rdev_for_each(rdev, mddev) { 5593 5593 disk_stack_limits(mddev->gendisk, rdev->bdev,
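The raid5 fix above sets bi_vcnt = 1 alongside the single io_vec entry it fills in. A toy model (hypothetical struct, not the real struct bio) of why a consumer never sees a vector whose count was left at zero:

```c
#include <stddef.h>

struct toy_vec { size_t len; };

struct toy_bio {
	unsigned short vcnt;		/* number of valid vectors */
	struct toy_vec io_vec[1];
};

/* a consumer walks exactly vcnt vectors */
static size_t toy_bio_len(const struct toy_bio *b)
{
	size_t sum = 0;
	unsigned short i;

	for (i = 0; i < b->vcnt; i++)
		sum += b->io_vec[i].len;
	return sum;
}
```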
+2 -2
drivers/misc/mei/init.c
··· 197 197 { 198 198 dev_dbg(&dev->pdev->dev, "stopping the device.\n"); 199 199 200 + flush_scheduled_work(); 201 + 200 202 mutex_lock(&dev->device_lock); 201 203 202 204 cancel_delayed_work(&dev->timer_work); ··· 211 209 mei_reset(dev, 0); 212 210 213 211 mutex_unlock(&dev->device_lock); 214 - 215 - flush_scheduled_work(); 216 212 217 213 mei_watchdog_unregister(dev); 218 214 }
+2
drivers/misc/mei/nfc.c
··· 142 142 mei_cl_unlink(ndev->cl_info); 143 143 kfree(ndev->cl_info); 144 144 } 145 + 146 + memset(ndev, 0, sizeof(struct mei_nfc_dev)); 145 147 } 146 148 147 149 static int mei_nfc_build_bus_name(struct mei_nfc_dev *ndev)
+1
drivers/misc/mei/pci-me.c
··· 325 325 326 326 mutex_lock(&dev->device_lock); 327 327 dev->dev_state = MEI_DEV_POWER_UP; 328 + mei_clear_interrupts(dev); 328 329 mei_reset(dev, 1); 329 330 mutex_unlock(&dev->device_lock); 330 331
+1
drivers/misc/sgi-gru/grufile.c
··· 172 172 nodesperblade = 2; 173 173 else 174 174 nodesperblade = 1; 175 + memset(&info, 0, sizeof(info)); 175 176 info.cpus = num_online_cpus(); 176 177 info.nodes = num_online_nodes(); 177 178 info.blades = info.nodes / nodesperblade;
+17 -6
drivers/net/bonding/bond_main.c
··· 725 725 struct net_device *bond_dev, *vlan_dev, *upper_dev; 726 726 struct vlan_entry *vlan; 727 727 728 - rcu_read_lock(); 729 728 read_lock(&bond->lock); 729 + rcu_read_lock(); 730 730 731 731 bond_dev = bond->dev; 732 732 ··· 748 748 if (vlan_dev) 749 749 __bond_resend_igmp_join_requests(vlan_dev); 750 750 } 751 - 752 - if (--bond->igmp_retrans > 0) 753 - queue_delayed_work(bond->wq, &bond->mcast_work, HZ/5); 754 - 755 - read_unlock(&bond->lock); 756 751 rcu_read_unlock(); 752 + 753 + /* We use curr_slave_lock to protect against concurrent access to 754 + * igmp_retrans from multiple running instances of this function and 755 + * bond_change_active_slave 756 + */ 757 + write_lock_bh(&bond->curr_slave_lock); 758 + if (bond->igmp_retrans > 1) { 759 + bond->igmp_retrans--; 760 + queue_delayed_work(bond->wq, &bond->mcast_work, HZ/5); 761 + } 762 + write_unlock_bh(&bond->curr_slave_lock); 763 + read_unlock(&bond->lock); 757 764 } 758 765 759 766 static void bond_resend_igmp_join_requests_delayed(struct work_struct *work) ··· 1908 1901 1909 1902 err_undo_flags: 1910 1903 bond_compute_features(bond); 1904 + /* Enslave of first slave has failed and we need to fix master's mac */ 1905 + if (bond->slave_cnt == 0 && 1906 + ether_addr_equal(bond_dev->dev_addr, slave_dev->dev_addr)) 1907 + eth_hw_addr_random(bond_dev); 1911 1908 1912 1909 return res; 1913 1910 }
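The bond_main.c hunk moves the igmp_retrans decrement under curr_slave_lock and only decrements while the counter is above 1. The guard alone, with the locking elided, looks like:

```c
#include <stdbool.h>

/* Decrement-and-requeue guard: with the check before the decrement,
 * serialized callers can never drive the counter below 1, let alone
 * negative. Returns true if another pass should be queued. */
static bool igmp_retrans_step(unsigned char *retrans)
{
	if (*retrans > 1) {
		(*retrans)--;
		return true;
	}
	return false;
}
```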
+1 -1
drivers/net/bonding/bonding.h
··· 225 225 rwlock_t curr_slave_lock; 226 226 u8 send_peer_notif; 227 227 s8 setup_by_slave; 228 - s8 igmp_retrans; 228 + u8 igmp_retrans; 229 229 #ifdef CONFIG_PROC_FS 230 230 struct proc_dir_entry *proc_entry; 231 231 char proc_file_name[IFNAMSIZ];
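The s8 → u8 type change above complements the locking rework: as a signed byte, the old `--bond->igmp_retrans > 0` pattern could leave the counter negative after a racy extra decrement. A two-function demonstration of the type behavior (my reading of the rationale, not taken from the commit text):

```c
static int old_counter_after_extra_decrement(void)
{
	signed char v = 0;	/* old s8 igmp_retrans */
	v--;			/* one unguarded racy decrement */
	return v;		/* negative */
}

static unsigned int new_counter_after_extra_decrement(void)
{
	unsigned char v = 0;	/* new u8 igmp_retrans */
	v--;			/* the same mistake merely wraps... */
	return v;		/* ...and can never compare as negative */
}
```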
+4 -1
drivers/net/can/usb/usb_8dev.c
··· 977 977 err = usb_8dev_cmd_version(priv, &version); 978 978 if (err) { 979 979 netdev_err(netdev, "can't get firmware version\n"); 980 - goto cleanup_cmd_msg_buffer; 980 + goto cleanup_unregister_candev; 981 981 } else { 982 982 netdev_info(netdev, 983 983 "firmware: %d.%d, hardware: %d.%d\n", ··· 988 988 devm_can_led_init(netdev); 989 989 990 990 return 0; 991 + 992 + cleanup_unregister_candev: 993 + unregister_netdev(priv->netdev); 991 994 992 995 cleanup_cmd_msg_buffer: 993 996 kfree(priv->cmd_msg_buffer);
+18
drivers/net/ethernet/atheros/Kconfig
··· 67 67 To compile this driver as a module, choose M here. The module 68 68 will be called atl1c. 69 69 70 + config ALX 71 + tristate "Qualcomm Atheros AR816x/AR817x support" 72 + depends on PCI 73 + select CRC32 74 + select NET_CORE 75 + select MDIO 76 + help 77 + This driver supports the Qualcomm Atheros L1F ethernet adapter, 78 + i.e. the following chipsets: 79 + 80 + 1969:1091 - AR8161 Gigabit Ethernet 81 + 1969:1090 - AR8162 Fast Ethernet 82 + 1969:10A1 - AR8171 Gigabit Ethernet 83 + 1969:10A0 - AR8172 Fast Ethernet 84 + 85 + To compile this driver as a module, choose M here. The module 86 + will be called alx. 87 + 70 88 endif # NET_VENDOR_ATHEROS
+1
drivers/net/ethernet/atheros/Makefile
··· 6 6 obj-$(CONFIG_ATL2) += atlx/ 7 7 obj-$(CONFIG_ATL1E) += atl1e/ 8 8 obj-$(CONFIG_ATL1C) += atl1c/ 9 + obj-$(CONFIG_ALX) += alx/
+3
drivers/net/ethernet/atheros/alx/Makefile
··· 1 + obj-$(CONFIG_ALX) += alx.o 2 + alx-objs := main.o ethtool.o hw.o 3 + ccflags-y += -D__CHECK_ENDIAN__
+114
drivers/net/ethernet/atheros/alx/alx.h
··· 1 + /* 2 + * Copyright (c) 2013 Johannes Berg <johannes@sipsolutions.net> 3 + * 4 + * This file is free software: you may copy, redistribute and/or modify it 5 + * under the terms of the GNU General Public License as published by the 6 + * Free Software Foundation, either version 2 of the License, or (at your 7 + * option) any later version. 8 + * 9 + * This file is distributed in the hope that it will be useful, but 10 + * WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 12 + * General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + * 17 + * This file incorporates work covered by the following copyright and 18 + * permission notice: 19 + * 20 + * Copyright (c) 2012 Qualcomm Atheros, Inc. 21 + * 22 + * Permission to use, copy, modify, and/or distribute this software for any 23 + * purpose with or without fee is hereby granted, provided that the above 24 + * copyright notice and this permission notice appear in all copies. 25 + * 26 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 27 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 28 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 29 + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 30 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 31 + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 32 + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
33 + */ 34 + 35 + #ifndef _ALX_H_ 36 + #define _ALX_H_ 37 + 38 + #include <linux/types.h> 39 + #include <linux/etherdevice.h> 40 + #include <linux/dma-mapping.h> 41 + #include <linux/spinlock.h> 42 + #include "hw.h" 43 + 44 + #define ALX_WATCHDOG_TIME (5 * HZ) 45 + 46 + struct alx_buffer { 47 + struct sk_buff *skb; 48 + DEFINE_DMA_UNMAP_ADDR(dma); 49 + DEFINE_DMA_UNMAP_LEN(size); 50 + }; 51 + 52 + struct alx_rx_queue { 53 + struct alx_rrd *rrd; 54 + dma_addr_t rrd_dma; 55 + 56 + struct alx_rfd *rfd; 57 + dma_addr_t rfd_dma; 58 + 59 + struct alx_buffer *bufs; 60 + 61 + u16 write_idx, read_idx; 62 + u16 rrd_read_idx; 63 + }; 64 + #define ALX_RX_ALLOC_THRESH 32 65 + 66 + struct alx_tx_queue { 67 + struct alx_txd *tpd; 68 + dma_addr_t tpd_dma; 69 + struct alx_buffer *bufs; 70 + u16 write_idx, read_idx; 71 + }; 72 + 73 + #define ALX_DEFAULT_TX_WORK 128 74 + 75 + enum alx_device_quirks { 76 + ALX_DEV_QUIRK_MSI_INTX_DISABLE_BUG = BIT(0), 77 + }; 78 + 79 + struct alx_priv { 80 + struct net_device *dev; 81 + 82 + struct alx_hw hw; 83 + 84 + /* all descriptor memory */ 85 + struct { 86 + dma_addr_t dma; 87 + void *virt; 88 + int size; 89 + } descmem; 90 + 91 + /* protect int_mask updates */ 92 + spinlock_t irq_lock; 93 + u32 int_mask; 94 + 95 + int tx_ringsz; 96 + int rx_ringsz; 97 + int rxbuf_size; 98 + 99 + struct napi_struct napi; 100 + struct alx_tx_queue txq; 101 + struct alx_rx_queue rxq; 102 + 103 + struct work_struct link_check_wk; 104 + struct work_struct reset_wk; 105 + 106 + u16 msg_enable; 107 + 108 + bool msi; 109 + }; 110 + 111 + extern const struct ethtool_ops alx_ethtool_ops; 112 + extern const char alx_drv_name[]; 113 + 114 + #endif
+272
drivers/net/ethernet/atheros/alx/ethtool.c
··· 1 + /* 2 + * Copyright (c) 2013 Johannes Berg <johannes@sipsolutions.net> 3 + * 4 + * This file is free software: you may copy, redistribute and/or modify it 5 + * under the terms of the GNU General Public License as published by the 6 + * Free Software Foundation, either version 2 of the License, or (at your 7 + * option) any later version. 8 + * 9 + * This file is distributed in the hope that it will be useful, but 10 + * WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 12 + * General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + * 17 + * This file incorporates work covered by the following copyright and 18 + * permission notice: 19 + * 20 + * Copyright (c) 2012 Qualcomm Atheros, Inc. 21 + * 22 + * Permission to use, copy, modify, and/or distribute this software for any 23 + * purpose with or without fee is hereby granted, provided that the above 24 + * copyright notice and this permission notice appear in all copies. 25 + * 26 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 27 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 28 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 29 + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 30 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 31 + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 32 + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
33 + */ 34 + 35 + #include <linux/pci.h> 36 + #include <linux/ip.h> 37 + #include <linux/tcp.h> 38 + #include <linux/netdevice.h> 39 + #include <linux/etherdevice.h> 40 + #include <linux/ethtool.h> 41 + #include <linux/mdio.h> 42 + #include <linux/interrupt.h> 43 + #include <asm/byteorder.h> 44 + 45 + #include "alx.h" 46 + #include "reg.h" 47 + #include "hw.h" 48 + 49 + 50 + static int alx_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) 51 + { 52 + struct alx_priv *alx = netdev_priv(netdev); 53 + struct alx_hw *hw = &alx->hw; 54 + 55 + ecmd->supported = SUPPORTED_10baseT_Half | 56 + SUPPORTED_10baseT_Full | 57 + SUPPORTED_100baseT_Half | 58 + SUPPORTED_100baseT_Full | 59 + SUPPORTED_Autoneg | 60 + SUPPORTED_TP | 61 + SUPPORTED_Pause; 62 + if (alx_hw_giga(hw)) 63 + ecmd->supported |= SUPPORTED_1000baseT_Full; 64 + 65 + ecmd->advertising = ADVERTISED_TP; 66 + if (hw->adv_cfg & ADVERTISED_Autoneg) 67 + ecmd->advertising |= hw->adv_cfg; 68 + 69 + ecmd->port = PORT_TP; 70 + ecmd->phy_address = 0; 71 + if (hw->adv_cfg & ADVERTISED_Autoneg) 72 + ecmd->autoneg = AUTONEG_ENABLE; 73 + else 74 + ecmd->autoneg = AUTONEG_DISABLE; 75 + ecmd->transceiver = XCVR_INTERNAL; 76 + 77 + if (hw->flowctrl & ALX_FC_ANEG && hw->adv_cfg & ADVERTISED_Autoneg) { 78 + if (hw->flowctrl & ALX_FC_RX) { 79 + ecmd->advertising |= ADVERTISED_Pause; 80 + 81 + if (!(hw->flowctrl & ALX_FC_TX)) 82 + ecmd->advertising |= ADVERTISED_Asym_Pause; 83 + } else if (hw->flowctrl & ALX_FC_TX) { 84 + ecmd->advertising |= ADVERTISED_Asym_Pause; 85 + } 86 + } 87 + 88 + if (hw->link_speed != SPEED_UNKNOWN) { 89 + ethtool_cmd_speed_set(ecmd, 90 + hw->link_speed - hw->link_speed % 10); 91 + ecmd->duplex = hw->link_speed % 10; 92 + } else { 93 + ethtool_cmd_speed_set(ecmd, SPEED_UNKNOWN); 94 + ecmd->duplex = DUPLEX_UNKNOWN; 95 + } 96 + 97 + return 0; 98 + } 99 + 100 + static int alx_set_settings(struct net_device *netdev, struct ethtool_cmd *ecmd) 101 + { 102 + struct alx_priv *alx = 
netdev_priv(netdev); 103 + struct alx_hw *hw = &alx->hw; 104 + u32 adv_cfg; 105 + 106 + ASSERT_RTNL(); 107 + 108 + if (ecmd->autoneg == AUTONEG_ENABLE) { 109 + if (ecmd->advertising & ADVERTISED_1000baseT_Half) 110 + return -EINVAL; 111 + adv_cfg = ecmd->advertising | ADVERTISED_Autoneg; 112 + } else { 113 + int speed = ethtool_cmd_speed(ecmd); 114 + 115 + switch (speed + ecmd->duplex) { 116 + case SPEED_10 + DUPLEX_HALF: 117 + adv_cfg = ADVERTISED_10baseT_Half; 118 + break; 119 + case SPEED_10 + DUPLEX_FULL: 120 + adv_cfg = ADVERTISED_10baseT_Full; 121 + break; 122 + case SPEED_100 + DUPLEX_HALF: 123 + adv_cfg = ADVERTISED_100baseT_Half; 124 + break; 125 + case SPEED_100 + DUPLEX_FULL: 126 + adv_cfg = ADVERTISED_100baseT_Full; 127 + break; 128 + default: 129 + return -EINVAL; 130 + } 131 + } 132 + 133 + hw->adv_cfg = adv_cfg; 134 + return alx_setup_speed_duplex(hw, adv_cfg, hw->flowctrl); 135 + } 136 + 137 + static void alx_get_pauseparam(struct net_device *netdev, 138 + struct ethtool_pauseparam *pause) 139 + { 140 + struct alx_priv *alx = netdev_priv(netdev); 141 + struct alx_hw *hw = &alx->hw; 142 + 143 + if (hw->flowctrl & ALX_FC_ANEG && 144 + hw->adv_cfg & ADVERTISED_Autoneg) 145 + pause->autoneg = AUTONEG_ENABLE; 146 + else 147 + pause->autoneg = AUTONEG_DISABLE; 148 + 149 + if (hw->flowctrl & ALX_FC_TX) 150 + pause->tx_pause = 1; 151 + else 152 + pause->tx_pause = 0; 153 + 154 + if (hw->flowctrl & ALX_FC_RX) 155 + pause->rx_pause = 1; 156 + else 157 + pause->rx_pause = 0; 158 + } 159 + 160 + 161 + static int alx_set_pauseparam(struct net_device *netdev, 162 + struct ethtool_pauseparam *pause) 163 + { 164 + struct alx_priv *alx = netdev_priv(netdev); 165 + struct alx_hw *hw = &alx->hw; 166 + int err = 0; 167 + bool reconfig_phy = false; 168 + u8 fc = 0; 169 + 170 + if (pause->tx_pause) 171 + fc |= ALX_FC_TX; 172 + if (pause->rx_pause) 173 + fc |= ALX_FC_RX; 174 + if (pause->autoneg) 175 + fc |= ALX_FC_ANEG; 176 + 177 + ASSERT_RTNL(); 178 + 179 + /* restart 
auto-neg for auto-mode */ 180 + if (hw->adv_cfg & ADVERTISED_Autoneg) { 181 + if (!((fc ^ hw->flowctrl) & ALX_FC_ANEG)) 182 + reconfig_phy = true; 183 + if (fc & hw->flowctrl & ALX_FC_ANEG && 184 + (fc ^ hw->flowctrl) & (ALX_FC_RX | ALX_FC_TX)) 185 + reconfig_phy = true; 186 + } 187 + 188 + if (reconfig_phy) { 189 + err = alx_setup_speed_duplex(hw, hw->adv_cfg, fc); 190 + return err; 191 + } 192 + 193 + /* flow control on mac */ 194 + if ((fc ^ hw->flowctrl) & (ALX_FC_RX | ALX_FC_TX)) 195 + alx_cfg_mac_flowcontrol(hw, fc); 196 + 197 + hw->flowctrl = fc; 198 + 199 + return 0; 200 + } 201 + 202 + static u32 alx_get_msglevel(struct net_device *netdev) 203 + { 204 + struct alx_priv *alx = netdev_priv(netdev); 205 + 206 + return alx->msg_enable; 207 + } 208 + 209 + static void alx_set_msglevel(struct net_device *netdev, u32 data) 210 + { 211 + struct alx_priv *alx = netdev_priv(netdev); 212 + 213 + alx->msg_enable = data; 214 + } 215 + 216 + static void alx_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol) 217 + { 218 + struct alx_priv *alx = netdev_priv(netdev); 219 + struct alx_hw *hw = &alx->hw; 220 + 221 + wol->supported = WAKE_MAGIC | WAKE_PHY; 222 + wol->wolopts = 0; 223 + 224 + if (hw->sleep_ctrl & ALX_SLEEP_WOL_MAGIC) 225 + wol->wolopts |= WAKE_MAGIC; 226 + if (hw->sleep_ctrl & ALX_SLEEP_WOL_PHY) 227 + wol->wolopts |= WAKE_PHY; 228 + } 229 + 230 + static int alx_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol) 231 + { 232 + struct alx_priv *alx = netdev_priv(netdev); 233 + struct alx_hw *hw = &alx->hw; 234 + 235 + if (wol->wolopts & (WAKE_ARP | WAKE_MAGICSECURE | 236 + WAKE_UCAST | WAKE_BCAST | WAKE_MCAST)) 237 + return -EOPNOTSUPP; 238 + 239 + hw->sleep_ctrl = 0; 240 + 241 + if (wol->wolopts & WAKE_MAGIC) 242 + hw->sleep_ctrl |= ALX_SLEEP_WOL_MAGIC; 243 + if (wol->wolopts & WAKE_PHY) 244 + hw->sleep_ctrl |= ALX_SLEEP_WOL_PHY; 245 + 246 + device_set_wakeup_enable(&alx->hw.pdev->dev, hw->sleep_ctrl); 247 + 248 + return 0; 249 + } 250 
+ 251 + static void alx_get_drvinfo(struct net_device *netdev, 252 + struct ethtool_drvinfo *drvinfo) 253 + { 254 + struct alx_priv *alx = netdev_priv(netdev); 255 + 256 + strlcpy(drvinfo->driver, alx_drv_name, sizeof(drvinfo->driver)); 257 + strlcpy(drvinfo->bus_info, pci_name(alx->hw.pdev), 258 + sizeof(drvinfo->bus_info)); 259 + } 260 + 261 + const struct ethtool_ops alx_ethtool_ops = { 262 + .get_settings = alx_get_settings, 263 + .set_settings = alx_set_settings, 264 + .get_pauseparam = alx_get_pauseparam, 265 + .set_pauseparam = alx_set_pauseparam, 266 + .get_drvinfo = alx_get_drvinfo, 267 + .get_msglevel = alx_get_msglevel, 268 + .set_msglevel = alx_set_msglevel, 269 + .get_wol = alx_get_wol, 270 + .set_wol = alx_set_wol, 271 + .get_link = ethtool_op_get_link, 272 + };
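alx_set_pauseparam() above folds the three ethtool pause knobs into one flow-control flag byte. The mapping, sketched with illustrative flag values in place of the driver's ALX_FC_* constants:

```c
#include <stdbool.h>

#define FC_TX	(1u << 0)	/* illustrative, not the driver's values */
#define FC_RX	(1u << 1)
#define FC_ANEG	(1u << 2)

/* ethtool pauseparam -> flow-control flag byte */
static unsigned char fc_from_pause(bool tx, bool rx, bool autoneg)
{
	unsigned char fc = 0;

	if (tx)
		fc |= FC_TX;
	if (rx)
		fc |= FC_RX;
	if (autoneg)
		fc |= FC_ANEG;
	return fc;
}

/* reverse direction, as in alx_get_pauseparam() */
static bool fc_tx_enabled(unsigned char fc) { return fc & FC_TX; }
static bool fc_rx_enabled(unsigned char fc) { return fc & FC_RX; }
```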
+1226
drivers/net/ethernet/atheros/alx/hw.c
··· 1 + /* 2 + * Copyright (c) 2013 Johannes Berg <johannes@sipsolutions.net> 3 + * 4 + * This file is free software: you may copy, redistribute and/or modify it 5 + * under the terms of the GNU General Public License as published by the 6 + * Free Software Foundation, either version 2 of the License, or (at your 7 + * option) any later version. 8 + * 9 + * This file is distributed in the hope that it will be useful, but 10 + * WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 12 + * General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + * 17 + * This file incorporates work covered by the following copyright and 18 + * permission notice: 19 + * 20 + * Copyright (c) 2012 Qualcomm Atheros, Inc. 21 + * 22 + * Permission to use, copy, modify, and/or distribute this software for any 23 + * purpose with or without fee is hereby granted, provided that the above 24 + * copyright notice and this permission notice appear in all copies. 25 + * 26 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 27 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 28 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 29 + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 30 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 31 + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 32 + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
33 + */ 34 + #include <linux/etherdevice.h> 35 + #include <linux/delay.h> 36 + #include <linux/pci.h> 37 + #include <linux/mdio.h> 38 + #include "reg.h" 39 + #include "hw.h" 40 + 41 + static inline bool alx_is_rev_a(u8 rev) 42 + { 43 + return rev == ALX_REV_A0 || rev == ALX_REV_A1; 44 + } 45 + 46 + static int alx_wait_mdio_idle(struct alx_hw *hw) 47 + { 48 + u32 val; 49 + int i; 50 + 51 + for (i = 0; i < ALX_MDIO_MAX_AC_TO; i++) { 52 + val = alx_read_mem32(hw, ALX_MDIO); 53 + if (!(val & ALX_MDIO_BUSY)) 54 + return 0; 55 + udelay(10); 56 + } 57 + 58 + return -ETIMEDOUT; 59 + } 60 + 61 + static int alx_read_phy_core(struct alx_hw *hw, bool ext, u8 dev, 62 + u16 reg, u16 *phy_data) 63 + { 64 + u32 val, clk_sel; 65 + int err; 66 + 67 + *phy_data = 0; 68 + 69 + /* use slow clock when it's in hibernation status */ 70 + clk_sel = hw->link_speed != SPEED_UNKNOWN ? 71 + ALX_MDIO_CLK_SEL_25MD4 : 72 + ALX_MDIO_CLK_SEL_25MD128; 73 + 74 + if (ext) { 75 + val = dev << ALX_MDIO_EXTN_DEVAD_SHIFT | 76 + reg << ALX_MDIO_EXTN_REG_SHIFT; 77 + alx_write_mem32(hw, ALX_MDIO_EXTN, val); 78 + 79 + val = ALX_MDIO_SPRES_PRMBL | ALX_MDIO_START | 80 + ALX_MDIO_MODE_EXT | ALX_MDIO_OP_READ | 81 + clk_sel << ALX_MDIO_CLK_SEL_SHIFT; 82 + } else { 83 + val = ALX_MDIO_SPRES_PRMBL | 84 + clk_sel << ALX_MDIO_CLK_SEL_SHIFT | 85 + reg << ALX_MDIO_REG_SHIFT | 86 + ALX_MDIO_START | ALX_MDIO_OP_READ; 87 + } 88 + alx_write_mem32(hw, ALX_MDIO, val); 89 + 90 + err = alx_wait_mdio_idle(hw); 91 + if (err) 92 + return err; 93 + val = alx_read_mem32(hw, ALX_MDIO); 94 + *phy_data = ALX_GET_FIELD(val, ALX_MDIO_DATA); 95 + return 0; 96 + } 97 + 98 + static int alx_write_phy_core(struct alx_hw *hw, bool ext, u8 dev, 99 + u16 reg, u16 phy_data) 100 + { 101 + u32 val, clk_sel; 102 + 103 + /* use slow clock when it's in hibernation status */ 104 + clk_sel = hw->link_speed != SPEED_UNKNOWN ? 
105 + ALX_MDIO_CLK_SEL_25MD4 : 106 + ALX_MDIO_CLK_SEL_25MD128; 107 + 108 + if (ext) { 109 + val = dev << ALX_MDIO_EXTN_DEVAD_SHIFT | 110 + reg << ALX_MDIO_EXTN_REG_SHIFT; 111 + alx_write_mem32(hw, ALX_MDIO_EXTN, val); 112 + 113 + val = ALX_MDIO_SPRES_PRMBL | 114 + clk_sel << ALX_MDIO_CLK_SEL_SHIFT | 115 + phy_data << ALX_MDIO_DATA_SHIFT | 116 + ALX_MDIO_START | ALX_MDIO_MODE_EXT; 117 + } else { 118 + val = ALX_MDIO_SPRES_PRMBL | 119 + clk_sel << ALX_MDIO_CLK_SEL_SHIFT | 120 + reg << ALX_MDIO_REG_SHIFT | 121 + phy_data << ALX_MDIO_DATA_SHIFT | 122 + ALX_MDIO_START; 123 + } 124 + alx_write_mem32(hw, ALX_MDIO, val); 125 + 126 + return alx_wait_mdio_idle(hw); 127 + } 128 + 129 + static int __alx_read_phy_reg(struct alx_hw *hw, u16 reg, u16 *phy_data) 130 + { 131 + return alx_read_phy_core(hw, false, 0, reg, phy_data); 132 + } 133 + 134 + static int __alx_write_phy_reg(struct alx_hw *hw, u16 reg, u16 phy_data) 135 + { 136 + return alx_write_phy_core(hw, false, 0, reg, phy_data); 137 + } 138 + 139 + static int __alx_read_phy_ext(struct alx_hw *hw, u8 dev, u16 reg, u16 *pdata) 140 + { 141 + return alx_read_phy_core(hw, true, dev, reg, pdata); 142 + } 143 + 144 + static int __alx_write_phy_ext(struct alx_hw *hw, u8 dev, u16 reg, u16 data) 145 + { 146 + return alx_write_phy_core(hw, true, dev, reg, data); 147 + } 148 + 149 + static int __alx_read_phy_dbg(struct alx_hw *hw, u16 reg, u16 *pdata) 150 + { 151 + int err; 152 + 153 + err = __alx_write_phy_reg(hw, ALX_MII_DBG_ADDR, reg); 154 + if (err) 155 + return err; 156 + 157 + return __alx_read_phy_reg(hw, ALX_MII_DBG_DATA, pdata); 158 + } 159 + 160 + static int __alx_write_phy_dbg(struct alx_hw *hw, u16 reg, u16 data) 161 + { 162 + int err; 163 + 164 + err = __alx_write_phy_reg(hw, ALX_MII_DBG_ADDR, reg); 165 + if (err) 166 + return err; 167 + 168 + return __alx_write_phy_reg(hw, ALX_MII_DBG_DATA, data); 169 + } 170 + 171 + int alx_read_phy_reg(struct alx_hw *hw, u16 reg, u16 *phy_data) 172 + { 173 + int err; 174 + 175 + 
spin_lock(&hw->mdio_lock); 176 + err = __alx_read_phy_reg(hw, reg, phy_data); 177 + spin_unlock(&hw->mdio_lock); 178 + 179 + return err; 180 + } 181 + 182 + int alx_write_phy_reg(struct alx_hw *hw, u16 reg, u16 phy_data) 183 + { 184 + int err; 185 + 186 + spin_lock(&hw->mdio_lock); 187 + err = __alx_write_phy_reg(hw, reg, phy_data); 188 + spin_unlock(&hw->mdio_lock); 189 + 190 + return err; 191 + } 192 + 193 + int alx_read_phy_ext(struct alx_hw *hw, u8 dev, u16 reg, u16 *pdata) 194 + { 195 + int err; 196 + 197 + spin_lock(&hw->mdio_lock); 198 + err = __alx_read_phy_ext(hw, dev, reg, pdata); 199 + spin_unlock(&hw->mdio_lock); 200 + 201 + return err; 202 + } 203 + 204 + int alx_write_phy_ext(struct alx_hw *hw, u8 dev, u16 reg, u16 data) 205 + { 206 + int err; 207 + 208 + spin_lock(&hw->mdio_lock); 209 + err = __alx_write_phy_ext(hw, dev, reg, data); 210 + spin_unlock(&hw->mdio_lock); 211 + 212 + return err; 213 + } 214 + 215 + static int alx_read_phy_dbg(struct alx_hw *hw, u16 reg, u16 *pdata) 216 + { 217 + int err; 218 + 219 + spin_lock(&hw->mdio_lock); 220 + err = __alx_read_phy_dbg(hw, reg, pdata); 221 + spin_unlock(&hw->mdio_lock); 222 + 223 + return err; 224 + } 225 + 226 + static int alx_write_phy_dbg(struct alx_hw *hw, u16 reg, u16 data) 227 + { 228 + int err; 229 + 230 + spin_lock(&hw->mdio_lock); 231 + err = __alx_write_phy_dbg(hw, reg, data); 232 + spin_unlock(&hw->mdio_lock); 233 + 234 + return err; 235 + } 236 + 237 + static u16 alx_get_phy_config(struct alx_hw *hw) 238 + { 239 + u32 val; 240 + u16 phy_val; 241 + 242 + val = alx_read_mem32(hw, ALX_PHY_CTRL); 243 + /* phy in reset */ 244 + if ((val & ALX_PHY_CTRL_DSPRST_OUT) == 0) 245 + return ALX_DRV_PHY_UNKNOWN; 246 + 247 + val = alx_read_mem32(hw, ALX_DRV); 248 + val = ALX_GET_FIELD(val, ALX_DRV_PHY); 249 + if (ALX_DRV_PHY_UNKNOWN == val) 250 + return ALX_DRV_PHY_UNKNOWN; 251 + 252 + alx_read_phy_reg(hw, ALX_MII_DBG_ADDR, &phy_val); 253 + if (ALX_PHY_INITED == phy_val) 254 + return val; 255 + 256 + 
return ALX_DRV_PHY_UNKNOWN; 257 + } 258 + 259 + static bool alx_wait_reg(struct alx_hw *hw, u32 reg, u32 wait, u32 *val) 260 + { 261 + u32 read; 262 + int i; 263 + 264 + for (i = 0; i < ALX_SLD_MAX_TO; i++) { 265 + read = alx_read_mem32(hw, reg); 266 + if ((read & wait) == 0) { 267 + if (val) 268 + *val = read; 269 + return true; 270 + } 271 + mdelay(1); 272 + } 273 + 274 + return false; 275 + } 276 + 277 + static bool alx_read_macaddr(struct alx_hw *hw, u8 *addr) 278 + { 279 + u32 mac0, mac1; 280 + 281 + mac0 = alx_read_mem32(hw, ALX_STAD0); 282 + mac1 = alx_read_mem32(hw, ALX_STAD1); 283 + 284 + /* addr should be big-endian */ 285 + *(__be32 *)(addr + 2) = cpu_to_be32(mac0); 286 + *(__be16 *)addr = cpu_to_be16(mac1); 287 + 288 + return is_valid_ether_addr(addr); 289 + } 290 + 291 + int alx_get_perm_macaddr(struct alx_hw *hw, u8 *addr) 292 + { 293 + u32 val; 294 + 295 + /* try to get it from register first */ 296 + if (alx_read_macaddr(hw, addr)) 297 + return 0; 298 + 299 + /* try to load from efuse */ 300 + if (!alx_wait_reg(hw, ALX_SLD, ALX_SLD_STAT | ALX_SLD_START, &val)) 301 + return -EIO; 302 + alx_write_mem32(hw, ALX_SLD, val | ALX_SLD_START); 303 + if (!alx_wait_reg(hw, ALX_SLD, ALX_SLD_START, NULL)) 304 + return -EIO; 305 + if (alx_read_macaddr(hw, addr)) 306 + return 0; 307 + 308 + /* try to load from flash/eeprom (if present) */ 309 + val = alx_read_mem32(hw, ALX_EFLD); 310 + if (val & (ALX_EFLD_F_EXIST | ALX_EFLD_E_EXIST)) { 311 + if (!alx_wait_reg(hw, ALX_EFLD, 312 + ALX_EFLD_STAT | ALX_EFLD_START, &val)) 313 + return -EIO; 314 + alx_write_mem32(hw, ALX_EFLD, val | ALX_EFLD_START); 315 + if (!alx_wait_reg(hw, ALX_EFLD, ALX_EFLD_START, NULL)) 316 + return -EIO; 317 + if (alx_read_macaddr(hw, addr)) 318 + return 0; 319 + } 320 + 321 + return -EIO; 322 + } 323 + 324 + void alx_set_macaddr(struct alx_hw *hw, const u8 *addr) 325 + { 326 + u32 val; 327 + 328 + /* for example: 00-0B-6A-F6-00-DC * STAD0=6AF600DC, STAD1=000B */ 329 + val = be32_to_cpu(*(__be32 
*)(addr + 2)); 330 + alx_write_mem32(hw, ALX_STAD0, val); 331 + val = be16_to_cpu(*(__be16 *)addr); 332 + alx_write_mem32(hw, ALX_STAD1, val); 333 + } 334 + 335 + static void alx_enable_osc(struct alx_hw *hw) 336 + { 337 + u32 val; 338 + 339 + /* rising edge */ 340 + val = alx_read_mem32(hw, ALX_MISC); 341 + alx_write_mem32(hw, ALX_MISC, val & ~ALX_MISC_INTNLOSC_OPEN); 342 + alx_write_mem32(hw, ALX_MISC, val | ALX_MISC_INTNLOSC_OPEN); 343 + } 344 + 345 + static void alx_reset_osc(struct alx_hw *hw, u8 rev) 346 + { 347 + u32 val, val2; 348 + 349 + /* clear Internal OSC settings, switching OSC by hw itself */ 350 + val = alx_read_mem32(hw, ALX_MISC3); 351 + alx_write_mem32(hw, ALX_MISC3, 352 + (val & ~ALX_MISC3_25M_BY_SW) | 353 + ALX_MISC3_25M_NOTO_INTNL); 354 + 355 + /* 25M clk from chipset may be unstable 1s after de-assert of 356 + * PERST, driver need re-calibrate before enter Sleep for WoL 357 + */ 358 + val = alx_read_mem32(hw, ALX_MISC); 359 + if (rev >= ALX_REV_B0) { 360 + /* restore over current protection def-val, 361 + * this val could be reset by MAC-RST 362 + */ 363 + ALX_SET_FIELD(val, ALX_MISC_PSW_OCP, ALX_MISC_PSW_OCP_DEF); 364 + /* a 0->1 change will update the internal val of osc */ 365 + val &= ~ALX_MISC_INTNLOSC_OPEN; 366 + alx_write_mem32(hw, ALX_MISC, val); 367 + alx_write_mem32(hw, ALX_MISC, val | ALX_MISC_INTNLOSC_OPEN); 368 + /* hw will automatically dis OSC after cab. 
*/ 369 + val2 = alx_read_mem32(hw, ALX_MSIC2); 370 + val2 &= ~ALX_MSIC2_CALB_START; 371 + alx_write_mem32(hw, ALX_MSIC2, val2); 372 + alx_write_mem32(hw, ALX_MSIC2, val2 | ALX_MSIC2_CALB_START); 373 + } else { 374 + val &= ~ALX_MISC_INTNLOSC_OPEN; 375 + /* disable isolate for rev A devices */ 376 + if (alx_is_rev_a(rev)) 377 + val &= ~ALX_MISC_ISO_EN; 378 + 379 + alx_write_mem32(hw, ALX_MISC, val | ALX_MISC_INTNLOSC_OPEN); 380 + alx_write_mem32(hw, ALX_MISC, val); 381 + } 382 + 383 + udelay(20); 384 + } 385 + 386 + static int alx_stop_mac(struct alx_hw *hw) 387 + { 388 + u32 rxq, txq, val; 389 + u16 i; 390 + 391 + rxq = alx_read_mem32(hw, ALX_RXQ0); 392 + alx_write_mem32(hw, ALX_RXQ0, rxq & ~ALX_RXQ0_EN); 393 + txq = alx_read_mem32(hw, ALX_TXQ0); 394 + alx_write_mem32(hw, ALX_TXQ0, txq & ~ALX_TXQ0_EN); 395 + 396 + udelay(40); 397 + 398 + hw->rx_ctrl &= ~(ALX_MAC_CTRL_RX_EN | ALX_MAC_CTRL_TX_EN); 399 + alx_write_mem32(hw, ALX_MAC_CTRL, hw->rx_ctrl); 400 + 401 + for (i = 0; i < ALX_DMA_MAC_RST_TO; i++) { 402 + val = alx_read_mem32(hw, ALX_MAC_STS); 403 + if (!(val & ALX_MAC_STS_IDLE)) 404 + return 0; 405 + udelay(10); 406 + } 407 + 408 + return -ETIMEDOUT; 409 + } 410 + 411 + int alx_reset_mac(struct alx_hw *hw) 412 + { 413 + u32 val, pmctrl; 414 + int i, ret; 415 + u8 rev; 416 + bool a_cr; 417 + 418 + pmctrl = 0; 419 + rev = alx_hw_revision(hw); 420 + a_cr = alx_is_rev_a(rev) && alx_hw_with_cr(hw); 421 + 422 + /* disable all interrupts, RXQ/TXQ */ 423 + alx_write_mem32(hw, ALX_MSIX_MASK, 0xFFFFFFFF); 424 + alx_write_mem32(hw, ALX_IMR, 0); 425 + alx_write_mem32(hw, ALX_ISR, ALX_ISR_DIS); 426 + 427 + ret = alx_stop_mac(hw); 428 + if (ret) 429 + return ret; 430 + 431 + /* mac reset workaroud */ 432 + alx_write_mem32(hw, ALX_RFD_PIDX, 1); 433 + 434 + /* dis l0s/l1 before mac reset */ 435 + if (a_cr) { 436 + pmctrl = alx_read_mem32(hw, ALX_PMCTRL); 437 + if (pmctrl & (ALX_PMCTRL_L1_EN | ALX_PMCTRL_L0S_EN)) 438 + alx_write_mem32(hw, ALX_PMCTRL, 439 + pmctrl & 
~(ALX_PMCTRL_L1_EN | 440 + ALX_PMCTRL_L0S_EN)); 441 + } 442 + 443 + /* reset whole mac safely */ 444 + val = alx_read_mem32(hw, ALX_MASTER); 445 + alx_write_mem32(hw, ALX_MASTER, 446 + val | ALX_MASTER_DMA_MAC_RST | ALX_MASTER_OOB_DIS); 447 + 448 + /* make sure it's real idle */ 449 + udelay(10); 450 + for (i = 0; i < ALX_DMA_MAC_RST_TO; i++) { 451 + val = alx_read_mem32(hw, ALX_RFD_PIDX); 452 + if (val == 0) 453 + break; 454 + udelay(10); 455 + } 456 + for (; i < ALX_DMA_MAC_RST_TO; i++) { 457 + val = alx_read_mem32(hw, ALX_MASTER); 458 + if ((val & ALX_MASTER_DMA_MAC_RST) == 0) 459 + break; 460 + udelay(10); 461 + } 462 + if (i == ALX_DMA_MAC_RST_TO) 463 + return -EIO; 464 + udelay(10); 465 + 466 + if (a_cr) { 467 + alx_write_mem32(hw, ALX_MASTER, val | ALX_MASTER_PCLKSEL_SRDS); 468 + /* restore l0s / l1 */ 469 + if (pmctrl & (ALX_PMCTRL_L1_EN | ALX_PMCTRL_L0S_EN)) 470 + alx_write_mem32(hw, ALX_PMCTRL, pmctrl); 471 + } 472 + 473 + alx_reset_osc(hw, rev); 474 + 475 + /* clear Internal OSC settings, switching OSC by hw itself, 476 + * disable isolate for rev A devices 477 + */ 478 + val = alx_read_mem32(hw, ALX_MISC3); 479 + alx_write_mem32(hw, ALX_MISC3, 480 + (val & ~ALX_MISC3_25M_BY_SW) | 481 + ALX_MISC3_25M_NOTO_INTNL); 482 + val = alx_read_mem32(hw, ALX_MISC); 483 + val &= ~ALX_MISC_INTNLOSC_OPEN; 484 + if (alx_is_rev_a(rev)) 485 + val &= ~ALX_MISC_ISO_EN; 486 + alx_write_mem32(hw, ALX_MISC, val); 487 + udelay(20); 488 + 489 + /* driver control speed/duplex, hash-alg */ 490 + alx_write_mem32(hw, ALX_MAC_CTRL, hw->rx_ctrl); 491 + 492 + val = alx_read_mem32(hw, ALX_SERDES); 493 + alx_write_mem32(hw, ALX_SERDES, 494 + val | ALX_SERDES_MACCLK_SLWDWN | 495 + ALX_SERDES_PHYCLK_SLWDWN); 496 + 497 + return 0; 498 + } 499 + 500 + void alx_reset_phy(struct alx_hw *hw) 501 + { 502 + int i; 503 + u32 val; 504 + u16 phy_val; 505 + 506 + /* (DSP)reset PHY core */ 507 + val = alx_read_mem32(hw, ALX_PHY_CTRL); 508 + val &= ~(ALX_PHY_CTRL_DSPRST_OUT | ALX_PHY_CTRL_IDDQ | 509 + 
ALX_PHY_CTRL_GATE_25M | ALX_PHY_CTRL_POWER_DOWN | 510 + ALX_PHY_CTRL_CLS); 511 + val |= ALX_PHY_CTRL_RST_ANALOG; 512 + 513 + val |= (ALX_PHY_CTRL_HIB_PULSE | ALX_PHY_CTRL_HIB_EN); 514 + alx_write_mem32(hw, ALX_PHY_CTRL, val); 515 + udelay(10); 516 + alx_write_mem32(hw, ALX_PHY_CTRL, val | ALX_PHY_CTRL_DSPRST_OUT); 517 + 518 + for (i = 0; i < ALX_PHY_CTRL_DSPRST_TO; i++) 519 + udelay(10); 520 + 521 + /* phy power saving & hib */ 522 + alx_write_phy_dbg(hw, ALX_MIIDBG_LEGCYPS, ALX_LEGCYPS_DEF); 523 + alx_write_phy_dbg(hw, ALX_MIIDBG_SYSMODCTRL, 524 + ALX_SYSMODCTRL_IECHOADJ_DEF); 525 + alx_write_phy_ext(hw, ALX_MIIEXT_PCS, ALX_MIIEXT_VDRVBIAS, 526 + ALX_VDRVBIAS_DEF); 527 + 528 + /* EEE advertisement */ 529 + val = alx_read_mem32(hw, ALX_LPI_CTRL); 530 + alx_write_mem32(hw, ALX_LPI_CTRL, val & ~ALX_LPI_CTRL_EN); 531 + alx_write_phy_ext(hw, ALX_MIIEXT_ANEG, ALX_MIIEXT_LOCAL_EEEADV, 0); 532 + 533 + /* phy power saving */ 534 + alx_write_phy_dbg(hw, ALX_MIIDBG_TST10BTCFG, ALX_TST10BTCFG_DEF); 535 + alx_write_phy_dbg(hw, ALX_MIIDBG_SRDSYSMOD, ALX_SRDSYSMOD_DEF); 536 + alx_write_phy_dbg(hw, ALX_MIIDBG_TST100BTCFG, ALX_TST100BTCFG_DEF); 537 + alx_write_phy_dbg(hw, ALX_MIIDBG_ANACTRL, ALX_ANACTRL_DEF); 538 + alx_read_phy_dbg(hw, ALX_MIIDBG_GREENCFG2, &phy_val); 539 + alx_write_phy_dbg(hw, ALX_MIIDBG_GREENCFG2, 540 + phy_val & ~ALX_GREENCFG2_GATE_DFSE_EN); 541 + /* rtl8139c, 120m issue */ 542 + alx_write_phy_ext(hw, ALX_MIIEXT_ANEG, ALX_MIIEXT_NLP78, 543 + ALX_MIIEXT_NLP78_120M_DEF); 544 + alx_write_phy_ext(hw, ALX_MIIEXT_ANEG, ALX_MIIEXT_S3DIG10, 545 + ALX_MIIEXT_S3DIG10_DEF); 546 + 547 + if (hw->lnk_patch) { 548 + /* Turn off half amplitude */ 549 + alx_read_phy_ext(hw, ALX_MIIEXT_PCS, ALX_MIIEXT_CLDCTRL3, 550 + &phy_val); 551 + alx_write_phy_ext(hw, ALX_MIIEXT_PCS, ALX_MIIEXT_CLDCTRL3, 552 + phy_val | ALX_CLDCTRL3_BP_CABLE1TH_DET_GT); 553 + /* Turn off Green feature */ 554 + alx_read_phy_dbg(hw, ALX_MIIDBG_GREENCFG2, &phy_val); 555 + alx_write_phy_dbg(hw, 
ALX_MIIDBG_GREENCFG2, 556 + phy_val | ALX_GREENCFG2_BP_GREEN); 557 + /* Turn off half Bias */ 558 + alx_read_phy_ext(hw, ALX_MIIEXT_PCS, ALX_MIIEXT_CLDCTRL5, 559 + &phy_val); 560 + alx_write_phy_ext(hw, ALX_MIIEXT_PCS, ALX_MIIEXT_CLDCTRL5, 561 + phy_val | ALX_CLDCTRL5_BP_VD_HLFBIAS); 562 + } 563 + 564 + /* set phy interrupt mask */ 565 + alx_write_phy_reg(hw, ALX_MII_IER, ALX_IER_LINK_UP | ALX_IER_LINK_DOWN); 566 + } 567 + 568 + #define ALX_PCI_CMD (PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY | PCI_COMMAND_IO) 569 + 570 + void alx_reset_pcie(struct alx_hw *hw) 571 + { 572 + u8 rev = alx_hw_revision(hw); 573 + u32 val; 574 + u16 val16; 575 + 576 + /* Workaround for PCI problem when BIOS sets MMRBC incorrectly. */ 577 + pci_read_config_word(hw->pdev, PCI_COMMAND, &val16); 578 + if (!(val16 & ALX_PCI_CMD) || (val16 & PCI_COMMAND_INTX_DISABLE)) { 579 + val16 = (val16 | ALX_PCI_CMD) & ~PCI_COMMAND_INTX_DISABLE; 580 + pci_write_config_word(hw->pdev, PCI_COMMAND, val16); 581 + } 582 + 583 + /* clear WoL setting/status */ 584 + val = alx_read_mem32(hw, ALX_WOL0); 585 + alx_write_mem32(hw, ALX_WOL0, 0); 586 + 587 + val = alx_read_mem32(hw, ALX_PDLL_TRNS1); 588 + alx_write_mem32(hw, ALX_PDLL_TRNS1, val & ~ALX_PDLL_TRNS1_D3PLLOFF_EN); 589 + 590 + /* mask some pcie error bits */ 591 + val = alx_read_mem32(hw, ALX_UE_SVRT); 592 + val &= ~(ALX_UE_SVRT_DLPROTERR | ALX_UE_SVRT_FCPROTERR); 593 + alx_write_mem32(hw, ALX_UE_SVRT, val); 594 + 595 + /* wol 25M & pclk */ 596 + val = alx_read_mem32(hw, ALX_MASTER); 597 + if (alx_is_rev_a(rev) && alx_hw_with_cr(hw)) { 598 + if ((val & ALX_MASTER_WAKEN_25M) == 0 || 599 + (val & ALX_MASTER_PCLKSEL_SRDS) == 0) 600 + alx_write_mem32(hw, ALX_MASTER, 601 + val | ALX_MASTER_PCLKSEL_SRDS | 602 + ALX_MASTER_WAKEN_25M); 603 + } else { 604 + if ((val & ALX_MASTER_WAKEN_25M) == 0 || 605 + (val & ALX_MASTER_PCLKSEL_SRDS) != 0) 606 + alx_write_mem32(hw, ALX_MASTER, 607 + (val & ~ALX_MASTER_PCLKSEL_SRDS) | 608 + ALX_MASTER_WAKEN_25M); 609 + } 610 + 611 + /* 
ASPM setting */ 612 + alx_enable_aspm(hw, true, true); 613 + 614 + udelay(10); 615 + } 616 + 617 + void alx_start_mac(struct alx_hw *hw) 618 + { 619 + u32 mac, txq, rxq; 620 + 621 + rxq = alx_read_mem32(hw, ALX_RXQ0); 622 + alx_write_mem32(hw, ALX_RXQ0, rxq | ALX_RXQ0_EN); 623 + txq = alx_read_mem32(hw, ALX_TXQ0); 624 + alx_write_mem32(hw, ALX_TXQ0, txq | ALX_TXQ0_EN); 625 + 626 + mac = hw->rx_ctrl; 627 + if (hw->link_speed % 10 == DUPLEX_FULL) 628 + mac |= ALX_MAC_CTRL_FULLD; 629 + else 630 + mac &= ~ALX_MAC_CTRL_FULLD; 631 + ALX_SET_FIELD(mac, ALX_MAC_CTRL_SPEED, 632 + hw->link_speed >= SPEED_1000 ? ALX_MAC_CTRL_SPEED_1000 : 633 + ALX_MAC_CTRL_SPEED_10_100); 634 + mac |= ALX_MAC_CTRL_TX_EN | ALX_MAC_CTRL_RX_EN; 635 + hw->rx_ctrl = mac; 636 + alx_write_mem32(hw, ALX_MAC_CTRL, mac); 637 + } 638 + 639 + void alx_cfg_mac_flowcontrol(struct alx_hw *hw, u8 fc) 640 + { 641 + if (fc & ALX_FC_RX) 642 + hw->rx_ctrl |= ALX_MAC_CTRL_RXFC_EN; 643 + else 644 + hw->rx_ctrl &= ~ALX_MAC_CTRL_RXFC_EN; 645 + 646 + if (fc & ALX_FC_TX) 647 + hw->rx_ctrl |= ALX_MAC_CTRL_TXFC_EN; 648 + else 649 + hw->rx_ctrl &= ~ALX_MAC_CTRL_TXFC_EN; 650 + 651 + alx_write_mem32(hw, ALX_MAC_CTRL, hw->rx_ctrl); 652 + } 653 + 654 + void alx_enable_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en) 655 + { 656 + u32 pmctrl; 657 + u8 rev = alx_hw_revision(hw); 658 + 659 + pmctrl = alx_read_mem32(hw, ALX_PMCTRL); 660 + 661 + ALX_SET_FIELD(pmctrl, ALX_PMCTRL_LCKDET_TIMER, 662 + ALX_PMCTRL_LCKDET_TIMER_DEF); 663 + pmctrl |= ALX_PMCTRL_RCVR_WT_1US | 664 + ALX_PMCTRL_L1_CLKSW_EN | 665 + ALX_PMCTRL_L1_SRDSRX_PWD; 666 + ALX_SET_FIELD(pmctrl, ALX_PMCTRL_L1REQ_TO, ALX_PMCTRL_L1REG_TO_DEF); 667 + ALX_SET_FIELD(pmctrl, ALX_PMCTRL_L1_TIMER, ALX_PMCTRL_L1_TIMER_16US); 668 + pmctrl &= ~(ALX_PMCTRL_L1_SRDS_EN | 669 + ALX_PMCTRL_L1_SRDSPLL_EN | 670 + ALX_PMCTRL_L1_BUFSRX_EN | 671 + ALX_PMCTRL_SADLY_EN | 672 + ALX_PMCTRL_HOTRST_WTEN| 673 + ALX_PMCTRL_L0S_EN | 674 + ALX_PMCTRL_L1_EN | 675 + ALX_PMCTRL_ASPM_FCEN | 676 + 
ALX_PMCTRL_TXL1_AFTER_L0S | 677 + ALX_PMCTRL_RXL1_AFTER_L0S); 678 + if (alx_is_rev_a(rev) && alx_hw_with_cr(hw)) 679 + pmctrl |= ALX_PMCTRL_L1_SRDS_EN | ALX_PMCTRL_L1_SRDSPLL_EN; 680 + 681 + if (l0s_en) 682 + pmctrl |= (ALX_PMCTRL_L0S_EN | ALX_PMCTRL_ASPM_FCEN); 683 + if (l1_en) 684 + pmctrl |= (ALX_PMCTRL_L1_EN | ALX_PMCTRL_ASPM_FCEN); 685 + 686 + alx_write_mem32(hw, ALX_PMCTRL, pmctrl); 687 + } 688 + 689 + 690 + static u32 ethadv_to_hw_cfg(struct alx_hw *hw, u32 ethadv_cfg) 691 + { 692 + u32 cfg = 0; 693 + 694 + if (ethadv_cfg & ADVERTISED_Autoneg) { 695 + cfg |= ALX_DRV_PHY_AUTO; 696 + if (ethadv_cfg & ADVERTISED_10baseT_Half) 697 + cfg |= ALX_DRV_PHY_10; 698 + if (ethadv_cfg & ADVERTISED_10baseT_Full) 699 + cfg |= ALX_DRV_PHY_10 | ALX_DRV_PHY_DUPLEX; 700 + if (ethadv_cfg & ADVERTISED_100baseT_Half) 701 + cfg |= ALX_DRV_PHY_100; 702 + if (ethadv_cfg & ADVERTISED_100baseT_Full) 703 + cfg |= ALX_DRV_PHY_100 | ALX_DRV_PHY_DUPLEX; 704 + if (ethadv_cfg & ADVERTISED_1000baseT_Half) 705 + cfg |= ALX_DRV_PHY_1000; 706 + if (ethadv_cfg & ADVERTISED_1000baseT_Full) 707 + cfg |= ALX_DRV_PHY_1000 | ALX_DRV_PHY_DUPLEX; 708 + if (ethadv_cfg & ADVERTISED_Pause) 709 + cfg |= ADVERTISE_PAUSE_CAP; 710 + if (ethadv_cfg & ADVERTISED_Asym_Pause) 711 + cfg |= ADVERTISE_PAUSE_ASYM; 712 + } else { 713 + switch (ethadv_cfg) { 714 + case ADVERTISED_10baseT_Half: 715 + cfg |= ALX_DRV_PHY_10; 716 + break; 717 + case ADVERTISED_100baseT_Half: 718 + cfg |= ALX_DRV_PHY_100; 719 + break; 720 + case ADVERTISED_10baseT_Full: 721 + cfg |= ALX_DRV_PHY_10 | ALX_DRV_PHY_DUPLEX; 722 + break; 723 + case ADVERTISED_100baseT_Full: 724 + cfg |= ALX_DRV_PHY_100 | ALX_DRV_PHY_DUPLEX; 725 + break; 726 + } 727 + } 728 + 729 + return cfg; 730 + } 731 + 732 + int alx_setup_speed_duplex(struct alx_hw *hw, u32 ethadv, u8 flowctrl) 733 + { 734 + u16 adv, giga, cr; 735 + u32 val; 736 + int err = 0; 737 + 738 + alx_write_phy_reg(hw, ALX_MII_DBG_ADDR, 0); 739 + val = alx_read_mem32(hw, ALX_DRV); 740 + 
ALX_SET_FIELD(val, ALX_DRV_PHY, 0); 741 + 742 + if (ethadv & ADVERTISED_Autoneg) { 743 + adv = ADVERTISE_CSMA; 744 + adv |= ethtool_adv_to_mii_adv_t(ethadv); 745 + 746 + if (flowctrl & ALX_FC_ANEG) { 747 + if (flowctrl & ALX_FC_RX) { 748 + adv |= ADVERTISED_Pause; 749 + if (!(flowctrl & ALX_FC_TX)) 750 + adv |= ADVERTISED_Asym_Pause; 751 + } else if (flowctrl & ALX_FC_TX) { 752 + adv |= ADVERTISED_Asym_Pause; 753 + } 754 + } 755 + giga = 0; 756 + if (alx_hw_giga(hw)) 757 + giga = ethtool_adv_to_mii_ctrl1000_t(ethadv); 758 + 759 + cr = BMCR_RESET | BMCR_ANENABLE | BMCR_ANRESTART; 760 + 761 + if (alx_write_phy_reg(hw, MII_ADVERTISE, adv) || 762 + alx_write_phy_reg(hw, MII_CTRL1000, giga) || 763 + alx_write_phy_reg(hw, MII_BMCR, cr)) 764 + err = -EBUSY; 765 + } else { 766 + cr = BMCR_RESET; 767 + if (ethadv == ADVERTISED_100baseT_Half || 768 + ethadv == ADVERTISED_100baseT_Full) 769 + cr |= BMCR_SPEED100; 770 + if (ethadv == ADVERTISED_10baseT_Full || 771 + ethadv == ADVERTISED_100baseT_Full) 772 + cr |= BMCR_FULLDPLX; 773 + 774 + err = alx_write_phy_reg(hw, MII_BMCR, cr); 775 + } 776 + 777 + if (!err) { 778 + alx_write_phy_reg(hw, ALX_MII_DBG_ADDR, ALX_PHY_INITED); 779 + val |= ethadv_to_hw_cfg(hw, ethadv); 780 + } 781 + 782 + alx_write_mem32(hw, ALX_DRV, val); 783 + 784 + return err; 785 + } 786 + 787 + 788 + void alx_post_phy_link(struct alx_hw *hw) 789 + { 790 + u16 phy_val, len, agc; 791 + u8 revid = alx_hw_revision(hw); 792 + bool adj_th = revid == ALX_REV_B0; 793 + int speed; 794 + 795 + if (hw->link_speed == SPEED_UNKNOWN) 796 + speed = SPEED_UNKNOWN; 797 + else 798 + speed = hw->link_speed - hw->link_speed % 10; 799 + 800 + if (revid != ALX_REV_B0 && !alx_is_rev_a(revid)) 801 + return; 802 + 803 + /* 1000BT/AZ, wrong cable length */ 804 + if (speed != SPEED_UNKNOWN) { 805 + alx_read_phy_ext(hw, ALX_MIIEXT_PCS, ALX_MIIEXT_CLDCTRL6, 806 + &phy_val); 807 + len = ALX_GET_FIELD(phy_val, ALX_CLDCTRL6_CAB_LEN); 808 + alx_read_phy_dbg(hw, ALX_MIIDBG_AGC, &phy_val); 
809 + agc = ALX_GET_FIELD(phy_val, ALX_AGC_2_VGA); 810 + 811 + if ((speed == SPEED_1000 && 812 + (len > ALX_CLDCTRL6_CAB_LEN_SHORT1G || 813 + (len == 0 && agc > ALX_AGC_LONG1G_LIMT))) || 814 + (speed == SPEED_100 && 815 + (len > ALX_CLDCTRL6_CAB_LEN_SHORT100M || 816 + (len == 0 && agc > ALX_AGC_LONG100M_LIMT)))) { 817 + alx_write_phy_dbg(hw, ALX_MIIDBG_AZ_ANADECT, 818 + ALX_AZ_ANADECT_LONG); 819 + alx_read_phy_ext(hw, ALX_MIIEXT_ANEG, ALX_MIIEXT_AFE, 820 + &phy_val); 821 + alx_write_phy_ext(hw, ALX_MIIEXT_ANEG, ALX_MIIEXT_AFE, 822 + phy_val | ALX_AFE_10BT_100M_TH); 823 + } else { 824 + alx_write_phy_dbg(hw, ALX_MIIDBG_AZ_ANADECT, 825 + ALX_AZ_ANADECT_DEF); 826 + alx_read_phy_ext(hw, ALX_MIIEXT_ANEG, 827 + ALX_MIIEXT_AFE, &phy_val); 828 + alx_write_phy_ext(hw, ALX_MIIEXT_ANEG, ALX_MIIEXT_AFE, 829 + phy_val & ~ALX_AFE_10BT_100M_TH); 830 + } 831 + 832 + /* threshold adjust */ 833 + if (adj_th && hw->lnk_patch) { 834 + if (speed == SPEED_100) { 835 + alx_write_phy_dbg(hw, ALX_MIIDBG_MSE16DB, 836 + ALX_MSE16DB_UP); 837 + } else if (speed == SPEED_1000) { 838 + /* 839 + * Giga link threshold, raise the tolerance of 840 + * noise 50% 841 + */ 842 + alx_read_phy_dbg(hw, ALX_MIIDBG_MSE20DB, 843 + &phy_val); 844 + ALX_SET_FIELD(phy_val, ALX_MSE20DB_TH, 845 + ALX_MSE20DB_TH_HI); 846 + alx_write_phy_dbg(hw, ALX_MIIDBG_MSE20DB, 847 + phy_val); 848 + } 849 + } 850 + } else { 851 + alx_read_phy_ext(hw, ALX_MIIEXT_ANEG, ALX_MIIEXT_AFE, 852 + &phy_val); 853 + alx_write_phy_ext(hw, ALX_MIIEXT_ANEG, ALX_MIIEXT_AFE, 854 + phy_val & ~ALX_AFE_10BT_100M_TH); 855 + 856 + if (adj_th && hw->lnk_patch) { 857 + alx_write_phy_dbg(hw, ALX_MIIDBG_MSE16DB, 858 + ALX_MSE16DB_DOWN); 859 + alx_read_phy_dbg(hw, ALX_MIIDBG_MSE20DB, &phy_val); 860 + ALX_SET_FIELD(phy_val, ALX_MSE20DB_TH, 861 + ALX_MSE20DB_TH_DEF); 862 + alx_write_phy_dbg(hw, ALX_MIIDBG_MSE20DB, phy_val); 863 + } 864 + } 865 + } 866 + 867 + 868 + /* NOTE: 869 + * 1. phy link must be established before calling this function 870 + * 2. 
WoL options (pattern, magic, link, etc.) are configured before calling it. 871 + */ 872 + int alx_pre_suspend(struct alx_hw *hw, int speed) 873 + { 874 + u32 master, mac, phy, val; 875 + int err = 0; 876 + 877 + master = alx_read_mem32(hw, ALX_MASTER); 878 + master &= ~ALX_MASTER_PCLKSEL_SRDS; 879 + mac = hw->rx_ctrl; 880 + /* 10/100 half */ 881 + ALX_SET_FIELD(mac, ALX_MAC_CTRL_SPEED, ALX_MAC_CTRL_SPEED_10_100); 882 + mac &= ~(ALX_MAC_CTRL_FULLD | ALX_MAC_CTRL_RX_EN | ALX_MAC_CTRL_TX_EN); 883 + 884 + phy = alx_read_mem32(hw, ALX_PHY_CTRL); 885 + phy &= ~(ALX_PHY_CTRL_DSPRST_OUT | ALX_PHY_CTRL_CLS); 886 + phy |= ALX_PHY_CTRL_RST_ANALOG | ALX_PHY_CTRL_HIB_PULSE | 887 + ALX_PHY_CTRL_HIB_EN; 888 + 889 + /* without any activity */ 890 + if (!(hw->sleep_ctrl & ALX_SLEEP_ACTIVE)) { 891 + err = alx_write_phy_reg(hw, ALX_MII_IER, 0); 892 + if (err) 893 + return err; 894 + phy |= ALX_PHY_CTRL_IDDQ | ALX_PHY_CTRL_POWER_DOWN; 895 + } else { 896 + if (hw->sleep_ctrl & (ALX_SLEEP_WOL_MAGIC | ALX_SLEEP_CIFS)) 897 + mac |= ALX_MAC_CTRL_RX_EN | ALX_MAC_CTRL_BRD_EN; 898 + if (hw->sleep_ctrl & ALX_SLEEP_CIFS) 899 + mac |= ALX_MAC_CTRL_TX_EN; 900 + if (speed % 10 == DUPLEX_FULL) 901 + mac |= ALX_MAC_CTRL_FULLD; 902 + if (speed >= SPEED_1000) 903 + ALX_SET_FIELD(mac, ALX_MAC_CTRL_SPEED, 904 + ALX_MAC_CTRL_SPEED_1000); 905 + phy |= ALX_PHY_CTRL_DSPRST_OUT; 906 + err = alx_write_phy_ext(hw, ALX_MIIEXT_ANEG, 907 + ALX_MIIEXT_S3DIG10, 908 + ALX_MIIEXT_S3DIG10_SL); 909 + if (err) 910 + return err; 911 + } 912 + 913 + alx_enable_osc(hw); 914 + hw->rx_ctrl = mac; 915 + alx_write_mem32(hw, ALX_MASTER, master); 916 + alx_write_mem32(hw, ALX_MAC_CTRL, mac); 917 + alx_write_mem32(hw, ALX_PHY_CTRL, phy); 918 + 919 + /* set val of PDLL D3PLLOFF */ 920 + val = alx_read_mem32(hw, ALX_PDLL_TRNS1); 921 + val |= ALX_PDLL_TRNS1_D3PLLOFF_EN; 922 + alx_write_mem32(hw, ALX_PDLL_TRNS1, val); 923 + 924 + return 0; 925 + } 926 + 927 + bool alx_phy_configured(struct alx_hw *hw) 928 + { 929 + u32 cfg, hw_cfg; 930 + 931 + 
cfg = ethadv_to_hw_cfg(hw, hw->adv_cfg); 932 + cfg = ALX_GET_FIELD(cfg, ALX_DRV_PHY); 933 + hw_cfg = alx_get_phy_config(hw); 934 + 935 + if (hw_cfg == ALX_DRV_PHY_UNKNOWN) 936 + return false; 937 + 938 + return cfg == hw_cfg; 939 + } 940 + 941 + int alx_get_phy_link(struct alx_hw *hw, int *speed) 942 + { 943 + struct pci_dev *pdev = hw->pdev; 944 + u16 bmsr, giga; 945 + int err; 946 + 947 + err = alx_read_phy_reg(hw, MII_BMSR, &bmsr); 948 + if (err) 949 + return err; 950 + 951 + err = alx_read_phy_reg(hw, MII_BMSR, &bmsr); 952 + if (err) 953 + return err; 954 + 955 + if (!(bmsr & BMSR_LSTATUS)) { 956 + *speed = SPEED_UNKNOWN; 957 + return 0; 958 + } 959 + 960 + /* speed/duplex result is saved in PHY Specific Status Register */ 961 + err = alx_read_phy_reg(hw, ALX_MII_GIGA_PSSR, &giga); 962 + if (err) 963 + return err; 964 + 965 + if (!(giga & ALX_GIGA_PSSR_SPD_DPLX_RESOLVED)) 966 + goto wrong_speed; 967 + 968 + switch (giga & ALX_GIGA_PSSR_SPEED) { 969 + case ALX_GIGA_PSSR_1000MBS: 970 + *speed = SPEED_1000; 971 + break; 972 + case ALX_GIGA_PSSR_100MBS: 973 + *speed = SPEED_100; 974 + break; 975 + case ALX_GIGA_PSSR_10MBS: 976 + *speed = SPEED_10; 977 + break; 978 + default: 979 + goto wrong_speed; 980 + } 981 + 982 + *speed += (giga & ALX_GIGA_PSSR_DPLX) ? 
DUPLEX_FULL : DUPLEX_HALF; 983 + return 1; 984 + 985 + wrong_speed: 986 + dev_err(&pdev->dev, "invalid PHY speed/duplex: 0x%x\n", giga); 987 + return -EINVAL; 988 + } 989 + 990 + int alx_clear_phy_intr(struct alx_hw *hw) 991 + { 992 + u16 isr; 993 + 994 + /* clear interrupt status by reading it */ 995 + return alx_read_phy_reg(hw, ALX_MII_ISR, &isr); 996 + } 997 + 998 + int alx_config_wol(struct alx_hw *hw) 999 + { 1000 + u32 wol = 0; 1001 + int err = 0; 1002 + 1003 + /* turn on magic packet event */ 1004 + if (hw->sleep_ctrl & ALX_SLEEP_WOL_MAGIC) 1005 + wol |= ALX_WOL0_MAGIC_EN | ALX_WOL0_PME_MAGIC_EN; 1006 + 1007 + /* turn on link up event */ 1008 + if (hw->sleep_ctrl & ALX_SLEEP_WOL_PHY) { 1009 + wol |= ALX_WOL0_LINK_EN | ALX_WOL0_PME_LINK; 1010 + /* only link up can wake up */ 1011 + err = alx_write_phy_reg(hw, ALX_MII_IER, ALX_IER_LINK_UP); 1012 + } 1013 + alx_write_mem32(hw, ALX_WOL0, wol); 1014 + 1015 + return err; 1016 + } 1017 + 1018 + void alx_disable_rss(struct alx_hw *hw) 1019 + { 1020 + u32 ctrl = alx_read_mem32(hw, ALX_RXQ0); 1021 + 1022 + ctrl &= ~ALX_RXQ0_RSS_HASH_EN; 1023 + alx_write_mem32(hw, ALX_RXQ0, ctrl); 1024 + } 1025 + 1026 + void alx_configure_basic(struct alx_hw *hw) 1027 + { 1028 + u32 val, raw_mtu, max_payload; 1029 + u16 val16; 1030 + u8 chip_rev = alx_hw_revision(hw); 1031 + 1032 + alx_set_macaddr(hw, hw->mac_addr); 1033 + 1034 + alx_write_mem32(hw, ALX_CLK_GATE, ALX_CLK_GATE_ALL); 1035 + 1036 + /* idle timeout to switch clk_125M */ 1037 + if (chip_rev >= ALX_REV_B0) 1038 + alx_write_mem32(hw, ALX_IDLE_DECISN_TIMER, 1039 + ALX_IDLE_DECISN_TIMER_DEF); 1040 + 1041 + alx_write_mem32(hw, ALX_SMB_TIMER, hw->smb_timer * 500UL); 1042 + 1043 + val = alx_read_mem32(hw, ALX_MASTER); 1044 + val |= ALX_MASTER_IRQMOD2_EN | 1045 + ALX_MASTER_IRQMOD1_EN | 1046 + ALX_MASTER_SYSALVTIMER_EN; 1047 + alx_write_mem32(hw, ALX_MASTER, val); 1048 + alx_write_mem32(hw, ALX_IRQ_MODU_TIMER, 1049 + (hw->imt >> 1) << ALX_IRQ_MODU_TIMER1_SHIFT); 1050 + /* intr 
re-trig timeout */ 1051 + alx_write_mem32(hw, ALX_INT_RETRIG, ALX_INT_RETRIG_TO); 1052 + /* tpd threshold to trig int */ 1053 + alx_write_mem32(hw, ALX_TINT_TPD_THRSHLD, hw->ith_tpd); 1054 + alx_write_mem32(hw, ALX_TINT_TIMER, hw->imt); 1055 + 1056 + raw_mtu = hw->mtu + ETH_HLEN; 1057 + alx_write_mem32(hw, ALX_MTU, raw_mtu + 8); 1058 + if (raw_mtu > ALX_MTU_JUMBO_TH) 1059 + hw->rx_ctrl &= ~ALX_MAC_CTRL_FAST_PAUSE; 1060 + 1061 + if ((raw_mtu + 8) < ALX_TXQ1_JUMBO_TSO_TH) 1062 + val = (raw_mtu + 8 + 7) >> 3; 1063 + else 1064 + val = ALX_TXQ1_JUMBO_TSO_TH >> 3; 1065 + alx_write_mem32(hw, ALX_TXQ1, val | ALX_TXQ1_ERRLGPKT_DROP_EN); 1066 + 1067 + max_payload = pcie_get_readrq(hw->pdev) >> 8; 1068 + /* 1069 + * if BIOS had changed the default dma read max length, 1070 + * restore it to default value 1071 + */ 1072 + if (max_payload < ALX_DEV_CTRL_MAXRRS_MIN) 1073 + pcie_set_readrq(hw->pdev, 128 << ALX_DEV_CTRL_MAXRRS_MIN); 1074 + 1075 + val = ALX_TXQ_TPD_BURSTPREF_DEF << ALX_TXQ0_TPD_BURSTPREF_SHIFT | 1076 + ALX_TXQ0_MODE_ENHANCE | ALX_TXQ0_LSO_8023_EN | 1077 + ALX_TXQ0_SUPT_IPOPT | 1078 + ALX_TXQ_TXF_BURST_PREF_DEF << ALX_TXQ0_TXF_BURST_PREF_SHIFT; 1079 + alx_write_mem32(hw, ALX_TXQ0, val); 1080 + val = ALX_TXQ_TPD_BURSTPREF_DEF << ALX_HQTPD_Q1_NUMPREF_SHIFT | 1081 + ALX_TXQ_TPD_BURSTPREF_DEF << ALX_HQTPD_Q2_NUMPREF_SHIFT | 1082 + ALX_TXQ_TPD_BURSTPREF_DEF << ALX_HQTPD_Q3_NUMPREF_SHIFT | 1083 + ALX_HQTPD_BURST_EN; 1084 + alx_write_mem32(hw, ALX_HQTPD, val); 1085 + 1086 + /* rxq, flow control */ 1087 + val = alx_read_mem32(hw, ALX_SRAM5); 1088 + val = ALX_GET_FIELD(val, ALX_SRAM_RXF_LEN) << 3; 1089 + if (val > ALX_SRAM_RXF_LEN_8K) { 1090 + val16 = ALX_MTU_STD_ALGN >> 3; 1091 + val = (val - ALX_RXQ2_RXF_FLOW_CTRL_RSVD) >> 3; 1092 + } else { 1093 + val16 = ALX_MTU_STD_ALGN >> 3; 1094 + val = (val - ALX_MTU_STD_ALGN) >> 3; 1095 + } 1096 + alx_write_mem32(hw, ALX_RXQ2, 1097 + val16 << ALX_RXQ2_RXF_XOFF_THRESH_SHIFT | 1098 + val << ALX_RXQ2_RXF_XON_THRESH_SHIFT); 1099 + val = 
ALX_RXQ0_NUM_RFD_PREF_DEF << ALX_RXQ0_NUM_RFD_PREF_SHIFT | 1100 + ALX_RXQ0_RSS_MODE_DIS << ALX_RXQ0_RSS_MODE_SHIFT | 1101 + ALX_RXQ0_IDT_TBL_SIZE_DEF << ALX_RXQ0_IDT_TBL_SIZE_SHIFT | 1102 + ALX_RXQ0_RSS_HSTYP_ALL | ALX_RXQ0_RSS_HASH_EN | 1103 + ALX_RXQ0_IPV6_PARSE_EN; 1104 + 1105 + if (alx_hw_giga(hw)) 1106 + ALX_SET_FIELD(val, ALX_RXQ0_ASPM_THRESH, 1107 + ALX_RXQ0_ASPM_THRESH_100M); 1108 + 1109 + alx_write_mem32(hw, ALX_RXQ0, val); 1110 + 1111 + val = alx_read_mem32(hw, ALX_DMA); 1112 + val = ALX_DMA_RORDER_MODE_OUT << ALX_DMA_RORDER_MODE_SHIFT | 1113 + ALX_DMA_RREQ_PRI_DATA | 1114 + max_payload << ALX_DMA_RREQ_BLEN_SHIFT | 1115 + ALX_DMA_WDLY_CNT_DEF << ALX_DMA_WDLY_CNT_SHIFT | 1116 + ALX_DMA_RDLY_CNT_DEF << ALX_DMA_RDLY_CNT_SHIFT | 1117 + (hw->dma_chnl - 1) << ALX_DMA_RCHNL_SEL_SHIFT; 1118 + alx_write_mem32(hw, ALX_DMA, val); 1119 + 1120 + /* default multi-tx-q weights */ 1121 + val = ALX_WRR_PRI_RESTRICT_NONE << ALX_WRR_PRI_SHIFT | 1122 + 4 << ALX_WRR_PRI0_SHIFT | 1123 + 4 << ALX_WRR_PRI1_SHIFT | 1124 + 4 << ALX_WRR_PRI2_SHIFT | 1125 + 4 << ALX_WRR_PRI3_SHIFT; 1126 + alx_write_mem32(hw, ALX_WRR, val); 1127 + } 1128 + 1129 + static inline u32 alx_speed_to_ethadv(int speed) 1130 + { 1131 + switch (speed) { 1132 + case SPEED_1000 + DUPLEX_FULL: 1133 + return ADVERTISED_1000baseT_Full; 1134 + case SPEED_100 + DUPLEX_FULL: 1135 + return ADVERTISED_100baseT_Full; 1136 + case SPEED_100 + DUPLEX_HALF: 1137 + return ADVERTISED_100baseT_Half; 1138 + case SPEED_10 + DUPLEX_FULL: 1139 + return ADVERTISED_10baseT_Full; 1140 + case SPEED_10 + DUPLEX_HALF: 1141 + return ADVERTISED_10baseT_Half; 1142 + default: 1143 + return 0; 1144 + } 1145 + } 1146 + 1147 + int alx_select_powersaving_speed(struct alx_hw *hw, int *speed) 1148 + { 1149 + int i, err, spd; 1150 + u16 lpa; 1151 + 1152 + err = alx_get_phy_link(hw, &spd); 1153 + if (err < 0) 1154 + return err; 1155 + 1156 + if (spd == SPEED_UNKNOWN) 1157 + return 0; 1158 + 1159 + err = alx_read_phy_reg(hw, MII_LPA, &lpa); 1160 + if 
(err) 1161 + return err; 1162 + 1163 + if (!(lpa & LPA_LPACK)) { 1164 + *speed = spd; 1165 + return 0; 1166 + } 1167 + 1168 + if (lpa & LPA_10FULL) 1169 + *speed = SPEED_10 + DUPLEX_FULL; 1170 + else if (lpa & LPA_10HALF) 1171 + *speed = SPEED_10 + DUPLEX_HALF; 1172 + else if (lpa & LPA_100FULL) 1173 + *speed = SPEED_100 + DUPLEX_FULL; 1174 + else 1175 + *speed = SPEED_100 + DUPLEX_HALF; 1176 + 1177 + if (*speed != spd) { 1178 + err = alx_write_phy_reg(hw, ALX_MII_IER, 0); 1179 + if (err) 1180 + return err; 1181 + err = alx_setup_speed_duplex(hw, 1182 + alx_speed_to_ethadv(*speed) | 1183 + ADVERTISED_Autoneg, 1184 + ALX_FC_ANEG | ALX_FC_RX | 1185 + ALX_FC_TX); 1186 + if (err) 1187 + return err; 1188 + 1189 + /* wait for linkup */ 1190 + for (i = 0; i < ALX_MAX_SETUP_LNK_CYCLE; i++) { 1191 + int speed2; 1192 + 1193 + msleep(100); 1194 + 1195 + err = alx_get_phy_link(hw, &speed2); 1196 + if (err < 0) 1197 + return err; 1198 + if (speed2 != SPEED_UNKNOWN) 1199 + break; 1200 + } 1201 + if (i == ALX_MAX_SETUP_LNK_CYCLE) 1202 + return -ETIMEDOUT; 1203 + } 1204 + 1205 + return 0; 1206 + } 1207 + 1208 + bool alx_get_phy_info(struct alx_hw *hw) 1209 + { 1210 + u16 devs1, devs2; 1211 + 1212 + if (alx_read_phy_reg(hw, MII_PHYSID1, &hw->phy_id[0]) || 1213 + alx_read_phy_reg(hw, MII_PHYSID2, &hw->phy_id[1])) 1214 + return false; 1215 + 1216 + /* since there is no PMA/PMD status2 register, we can't 1217 + * use the mdio45_probe function for prtad and mmds; 1218 + * use fixed MMD3 to get the mmds instead. 1219 + */ 1220 + if (alx_read_phy_ext(hw, 3, MDIO_DEVS1, &devs1) || 1221 + alx_read_phy_ext(hw, 3, MDIO_DEVS2, &devs2)) 1222 + return false; 1223 + hw->mdio.mmds = devs1 | devs2 << 16; 1224 + 1225 + return true; 1226 + }
+499
drivers/net/ethernet/atheros/alx/hw.h
··· 1 + /* 2 + * Copyright (c) 2013 Johannes Berg <johannes@sipsolutions.net> 3 + * 4 + * This file is free software: you may copy, redistribute and/or modify it 5 + * under the terms of the GNU General Public License as published by the 6 + * Free Software Foundation, either version 2 of the License, or (at your 7 + * option) any later version. 8 + * 9 + * This file is distributed in the hope that it will be useful, but 10 + * WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 12 + * General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + * 17 + * This file incorporates work covered by the following copyright and 18 + * permission notice: 19 + * 20 + * Copyright (c) 2012 Qualcomm Atheros, Inc. 21 + * 22 + * Permission to use, copy, modify, and/or distribute this software for any 23 + * purpose with or without fee is hereby granted, provided that the above 24 + * copyright notice and this permission notice appear in all copies. 25 + * 26 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 27 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 28 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 29 + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 30 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 31 + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 32 + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 33 + */ 34 + 35 + #ifndef ALX_HW_H_ 36 + #define ALX_HW_H_ 37 + #include <linux/types.h> 38 + #include <linux/mdio.h> 39 + #include <linux/pci.h> 40 + #include "reg.h" 41 + 42 + /* Transmit Packet Descriptor, contains 4 32-bit words. 
43 + * 44 + * 31 16 0 45 + * +----------------+----------------+ 46 + * | vlan-tag | buf length | 47 + * +----------------+----------------+ 48 + * | Word 1 | 49 + * +----------------+----------------+ 50 + * | Word 2: buf addr lo | 51 + * +----------------+----------------+ 52 + * | Word 3: buf addr hi | 53 + * +----------------+----------------+ 54 + * 55 + * Word 2 and 3 combine to form a 64-bit buffer address 56 + * 57 + * Word 1 has three forms, depending on the state of bit 8/12/13: 58 + * if bit8 =='1', the definition is just for custom checksum offload. 59 + * if bit8 == '0' && bit12 == '1' && bit13 == '1', the *FIRST* descriptor 60 + * for the skb is special for LSO V2, Word 2 become total skb length , 61 + * Word 3 is meaningless. 62 + * other condition, the definition is for general skb or ip/tcp/udp 63 + * checksum or LSO(TSO) offload. 64 + * 65 + * Here is the depiction: 66 + * 67 + * 0-+ 0-+ 68 + * 1 | 1 | 69 + * 2 | 2 | 70 + * 3 | Payload offset 3 | L4 header offset 71 + * 4 | (7:0) 4 | (7:0) 72 + * 5 | 5 | 73 + * 6 | 6 | 74 + * 7-+ 7-+ 75 + * 8 Custom csum enable = 1 8 Custom csum enable = 0 76 + * 9 General IPv4 checksum 9 General IPv4 checksum 77 + * 10 General TCP checksum 10 General TCP checksum 78 + * 11 General UDP checksum 11 General UDP checksum 79 + * 12 Large Send Segment enable 12 Large Send Segment enable 80 + * 13 Large Send Segment type 13 Large Send Segment type 81 + * 14 VLAN tagged 14 VLAN tagged 82 + * 15 Insert VLAN tag 15 Insert VLAN tag 83 + * 16 IPv4 packet 16 IPv4 packet 84 + * 17 Ethernet frame type 17 Ethernet frame type 85 + * 18-+ 18-+ 86 + * 19 | 19 | 87 + * 20 | 20 | 88 + * 21 | Custom csum offset 21 | 89 + * 22 | (25:18) 22 | 90 + * 23 | 23 | MSS (30:18) 91 + * 24 | 24 | 92 + * 25-+ 25 | 93 + * 26-+ 26 | 94 + * 27 | 27 | 95 + * 28 | Reserved 28 | 96 + * 29 | 29 | 97 + * 30-+ 30-+ 98 + * 31 End of packet 31 End of packet 99 + */ 100 + struct alx_txd { 101 + __le16 len; 102 + __le16 vlan_tag; 103 + __le32 word1; 104 + 
union { 105 + __le64 addr; 106 + struct { 107 + __le32 pkt_len; 108 + __le32 resvd; 109 + } l; 110 + } adrl; 111 + } __packed; 112 + 113 + /* tpd word 1 */ 114 + #define TPD_CXSUMSTART_MASK 0x00FF 115 + #define TPD_CXSUMSTART_SHIFT 0 116 + #define TPD_L4HDROFFSET_MASK 0x00FF 117 + #define TPD_L4HDROFFSET_SHIFT 0 118 + #define TPD_CXSUM_EN_MASK 0x0001 119 + #define TPD_CXSUM_EN_SHIFT 8 120 + #define TPD_IP_XSUM_MASK 0x0001 121 + #define TPD_IP_XSUM_SHIFT 9 122 + #define TPD_TCP_XSUM_MASK 0x0001 123 + #define TPD_TCP_XSUM_SHIFT 10 124 + #define TPD_UDP_XSUM_MASK 0x0001 125 + #define TPD_UDP_XSUM_SHIFT 11 126 + #define TPD_LSO_EN_MASK 0x0001 127 + #define TPD_LSO_EN_SHIFT 12 128 + #define TPD_LSO_V2_MASK 0x0001 129 + #define TPD_LSO_V2_SHIFT 13 130 + #define TPD_VLTAGGED_MASK 0x0001 131 + #define TPD_VLTAGGED_SHIFT 14 132 + #define TPD_INS_VLTAG_MASK 0x0001 133 + #define TPD_INS_VLTAG_SHIFT 15 134 + #define TPD_IPV4_MASK 0x0001 135 + #define TPD_IPV4_SHIFT 16 136 + #define TPD_ETHTYPE_MASK 0x0001 137 + #define TPD_ETHTYPE_SHIFT 17 138 + #define TPD_CXSUMOFFSET_MASK 0x00FF 139 + #define TPD_CXSUMOFFSET_SHIFT 18 140 + #define TPD_MSS_MASK 0x1FFF 141 + #define TPD_MSS_SHIFT 18 142 + #define TPD_EOP_MASK 0x0001 143 + #define TPD_EOP_SHIFT 31 144 + 145 + #define DESC_GET(_x, _name) ((_x) >> _name##SHIFT & _name##MASK) 146 + 147 + /* Receive Free Descriptor */ 148 + struct alx_rfd { 149 + __le64 addr; /* data buffer address, length is 150 + * declared in register --- every 151 + * buffer has the same size 152 + */ 153 + } __packed; 154 + 155 + /* Receive Return Descriptor, contains 4 32-bit words. 
156 + * 157 + * 31 16 0 158 + * +----------------+----------------+ 159 + * | Word 0 | 160 + * +----------------+----------------+ 161 + * | Word 1: RSS Hash value | 162 + * +----------------+----------------+ 163 + * | Word 2 | 164 + * +----------------+----------------+ 165 + * | Word 3 | 166 + * +----------------+----------------+ 167 + * 168 + * Word 0 depiction & Word 2 depiction: 169 + * 170 + * 0--+ 0--+ 171 + * 1 | 1 | 172 + * 2 | 2 | 173 + * 3 | 3 | 174 + * 4 | 4 | 175 + * 5 | 5 | 176 + * 6 | 6 | 177 + * 7 | IP payload checksum 7 | VLAN tag 178 + * 8 | (15:0) 8 | (15:0) 179 + * 9 | 9 | 180 + * 10 | 10 | 181 + * 11 | 11 | 182 + * 12 | 12 | 183 + * 13 | 13 | 184 + * 14 | 14 | 185 + * 15-+ 15-+ 186 + * 16-+ 16-+ 187 + * 17 | Number of RFDs 17 | 188 + * 18 | (19:16) 18 | 189 + * 19-+ 19 | Protocol ID 190 + * 20-+ 20 | (23:16) 191 + * 21 | 21 | 192 + * 22 | 22 | 193 + * 23 | 23-+ 194 + * 24 | 24 | Reserved 195 + * 25 | Start index of RFD-ring 25-+ 196 + * 26 | (31:20) 26 | RSS Q-num (27:25) 197 + * 27 | 27-+ 198 + * 28 | 28-+ 199 + * 29 | 29 | RSS Hash algorithm 200 + * 30 | 30 | (31:28) 201 + * 31-+ 31-+ 202 + * 203 + * Word 3 depiction: 204 + * 205 + * 0--+ 206 + * 1 | 207 + * 2 | 208 + * 3 | 209 + * 4 | 210 + * 5 | 211 + * 6 | 212 + * 7 | Packet length (include FCS) 213 + * 8 | (13:0) 214 + * 9 | 215 + * 10 | 216 + * 11 | 217 + * 12 | 218 + * 13-+ 219 + * 14 L4 Header checksum error 220 + * 15 IPv4 checksum error 221 + * 16 VLAN tagged 222 + * 17-+ 223 + * 18 | Protocol ID (19:17) 224 + * 19-+ 225 + * 20 Receive error summary 226 + * 21 FCS(CRC) error 227 + * 22 Frame alignment error 228 + * 23 Truncated packet 229 + * 24 Runt packet 230 + * 25 Incomplete packet due to insufficient rx-desc 231 + * 26 Broadcast packet 232 + * 27 Multicast packet 233 + * 28 Ethernet type (EII or 802.3) 234 + * 29 FIFO overflow 235 + * 30 Length error (for 802.3, length field mismatch with actual len) 236 + * 31 Updated, indicate to driver that this RRD is refreshed. 
237 + */ 238 + struct alx_rrd { 239 + __le32 word0; 240 + __le32 rss_hash; 241 + __le32 word2; 242 + __le32 word3; 243 + } __packed; 244 + 245 + /* rrd word 0 */ 246 + #define RRD_XSUM_MASK 0xFFFF 247 + #define RRD_XSUM_SHIFT 0 248 + #define RRD_NOR_MASK 0x000F 249 + #define RRD_NOR_SHIFT 16 250 + #define RRD_SI_MASK 0x0FFF 251 + #define RRD_SI_SHIFT 20 252 + 253 + /* rrd word 2 */ 254 + #define RRD_VLTAG_MASK 0xFFFF 255 + #define RRD_VLTAG_SHIFT 0 256 + #define RRD_PID_MASK 0x00FF 257 + #define RRD_PID_SHIFT 16 258 + /* non-ip packet */ 259 + #define RRD_PID_NONIP 0 260 + /* ipv4(only) */ 261 + #define RRD_PID_IPV4 1 262 + /* tcp/ipv6 */ 263 + #define RRD_PID_IPV6TCP 2 264 + /* tcp/ipv4 */ 265 + #define RRD_PID_IPV4TCP 3 266 + /* udp/ipv6 */ 267 + #define RRD_PID_IPV6UDP 4 268 + /* udp/ipv4 */ 269 + #define RRD_PID_IPV4UDP 5 270 + /* ipv6(only) */ 271 + #define RRD_PID_IPV6 6 272 + /* LLDP packet */ 273 + #define RRD_PID_LLDP 7 274 + /* 1588 packet */ 275 + #define RRD_PID_1588 8 276 + #define RRD_RSSQ_MASK 0x0007 277 + #define RRD_RSSQ_SHIFT 25 278 + #define RRD_RSSALG_MASK 0x000F 279 + #define RRD_RSSALG_SHIFT 28 280 + #define RRD_RSSALG_TCPV6 0x1 281 + #define RRD_RSSALG_IPV6 0x2 282 + #define RRD_RSSALG_TCPV4 0x4 283 + #define RRD_RSSALG_IPV4 0x8 284 + 285 + /* rrd word 3 */ 286 + #define RRD_PKTLEN_MASK 0x3FFF 287 + #define RRD_PKTLEN_SHIFT 0 288 + #define RRD_ERR_L4_MASK 0x0001 289 + #define RRD_ERR_L4_SHIFT 14 290 + #define RRD_ERR_IPV4_MASK 0x0001 291 + #define RRD_ERR_IPV4_SHIFT 15 292 + #define RRD_VLTAGGED_MASK 0x0001 293 + #define RRD_VLTAGGED_SHIFT 16 294 + #define RRD_OLD_PID_MASK 0x0007 295 + #define RRD_OLD_PID_SHIFT 17 296 + #define RRD_ERR_RES_MASK 0x0001 297 + #define RRD_ERR_RES_SHIFT 20 298 + #define RRD_ERR_FCS_MASK 0x0001 299 + #define RRD_ERR_FCS_SHIFT 21 300 + #define RRD_ERR_FAE_MASK 0x0001 301 + #define RRD_ERR_FAE_SHIFT 22 302 + #define RRD_ERR_TRUNC_MASK 0x0001 303 + #define RRD_ERR_TRUNC_SHIFT 23 304 + #define RRD_ERR_RUNT_MASK 0x0001 
305 + #define RRD_ERR_RUNT_SHIFT 24 306 + #define RRD_ERR_ICMP_MASK 0x0001 307 + #define RRD_ERR_ICMP_SHIFT 25 308 + #define RRD_BCAST_MASK 0x0001 309 + #define RRD_BCAST_SHIFT 26 310 + #define RRD_MCAST_MASK 0x0001 311 + #define RRD_MCAST_SHIFT 27 312 + #define RRD_ETHTYPE_MASK 0x0001 313 + #define RRD_ETHTYPE_SHIFT 28 314 + #define RRD_ERR_FIFOV_MASK 0x0001 315 + #define RRD_ERR_FIFOV_SHIFT 29 316 + #define RRD_ERR_LEN_MASK 0x0001 317 + #define RRD_ERR_LEN_SHIFT 30 318 + #define RRD_UPDATED_MASK 0x0001 319 + #define RRD_UPDATED_SHIFT 31 320 + 321 + 322 + #define ALX_MAX_SETUP_LNK_CYCLE 50 323 + 324 + /* for FlowControl */ 325 + #define ALX_FC_RX 0x01 326 + #define ALX_FC_TX 0x02 327 + #define ALX_FC_ANEG 0x04 328 + 329 + /* for sleep control */ 330 + #define ALX_SLEEP_WOL_PHY 0x00000001 331 + #define ALX_SLEEP_WOL_MAGIC 0x00000002 332 + #define ALX_SLEEP_CIFS 0x00000004 333 + #define ALX_SLEEP_ACTIVE (ALX_SLEEP_WOL_PHY | \ 334 + ALX_SLEEP_WOL_MAGIC | \ 335 + ALX_SLEEP_CIFS) 336 + 337 + /* for RSS hash type */ 338 + #define ALX_RSS_HASH_TYPE_IPV4 0x1 339 + #define ALX_RSS_HASH_TYPE_IPV4_TCP 0x2 340 + #define ALX_RSS_HASH_TYPE_IPV6 0x4 341 + #define ALX_RSS_HASH_TYPE_IPV6_TCP 0x8 342 + #define ALX_RSS_HASH_TYPE_ALL (ALX_RSS_HASH_TYPE_IPV4 | \ 343 + ALX_RSS_HASH_TYPE_IPV4_TCP | \ 344 + ALX_RSS_HASH_TYPE_IPV6 | \ 345 + ALX_RSS_HASH_TYPE_IPV6_TCP) 346 + #define ALX_DEF_RXBUF_SIZE 1536 347 + #define ALX_MAX_JUMBO_PKT_SIZE (9*1024) 348 + #define ALX_MAX_TSO_PKT_SIZE (7*1024) 349 + #define ALX_MAX_FRAME_SIZE ALX_MAX_JUMBO_PKT_SIZE 350 + #define ALX_MIN_FRAME_SIZE 68 351 + #define ALX_RAW_MTU(_mtu) (_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN) 352 + 353 + #define ALX_MAX_RX_QUEUES 8 354 + #define ALX_MAX_TX_QUEUES 4 355 + #define ALX_MAX_HANDLED_INTRS 5 356 + 357 + #define ALX_ISR_MISC (ALX_ISR_PCIE_LNKDOWN | \ 358 + ALX_ISR_DMAW | \ 359 + ALX_ISR_DMAR | \ 360 + ALX_ISR_SMB | \ 361 + ALX_ISR_MANU | \ 362 + ALX_ISR_TIMER) 363 + 364 + #define ALX_ISR_FATAL 
(ALX_ISR_PCIE_LNKDOWN | \ 365 + ALX_ISR_DMAW | ALX_ISR_DMAR) 366 + 367 + #define ALX_ISR_ALERT (ALX_ISR_RXF_OV | \ 368 + ALX_ISR_TXF_UR | \ 369 + ALX_ISR_RFD_UR) 370 + 371 + #define ALX_ISR_ALL_QUEUES (ALX_ISR_TX_Q0 | \ 372 + ALX_ISR_TX_Q1 | \ 373 + ALX_ISR_TX_Q2 | \ 374 + ALX_ISR_TX_Q3 | \ 375 + ALX_ISR_RX_Q0 | \ 376 + ALX_ISR_RX_Q1 | \ 377 + ALX_ISR_RX_Q2 | \ 378 + ALX_ISR_RX_Q3 | \ 379 + ALX_ISR_RX_Q4 | \ 380 + ALX_ISR_RX_Q5 | \ 381 + ALX_ISR_RX_Q6 | \ 382 + ALX_ISR_RX_Q7) 383 + 384 + /* maximum interrupt vectors for msix */ 385 + #define ALX_MAX_MSIX_INTRS 16 386 + 387 + #define ALX_GET_FIELD(_data, _field) \ 388 + (((_data) >> _field ## _SHIFT) & _field ## _MASK) 389 + 390 + #define ALX_SET_FIELD(_data, _field, _value) do { \ 391 + (_data) &= ~(_field ## _MASK << _field ## _SHIFT); \ 392 + (_data) |= ((_value) & _field ## _MASK) << _field ## _SHIFT;\ 393 + } while (0) 394 + 395 + struct alx_hw { 396 + struct pci_dev *pdev; 397 + u8 __iomem *hw_addr; 398 + 399 + /* current & permanent mac addr */ 400 + u8 mac_addr[ETH_ALEN]; 401 + u8 perm_addr[ETH_ALEN]; 402 + 403 + u16 mtu; 404 + u16 imt; 405 + u8 dma_chnl; 406 + u8 max_dma_chnl; 407 + /* tpd threshold to trig INT */ 408 + u32 ith_tpd; 409 + u32 rx_ctrl; 410 + u32 mc_hash[2]; 411 + 412 + u32 smb_timer; 413 + /* SPEED_* + DUPLEX_*, SPEED_UNKNOWN if link is down */ 414 + int link_speed; 415 + 416 + /* auto-neg advertisement or force mode config */ 417 + u32 adv_cfg; 418 + u8 flowctrl; 419 + 420 + u32 sleep_ctrl; 421 + 422 + spinlock_t mdio_lock; 423 + struct mdio_if_info mdio; 424 + u16 phy_id[2]; 425 + 426 + /* PHY link patch flag */ 427 + bool lnk_patch; 428 + }; 429 + 430 + static inline int alx_hw_revision(struct alx_hw *hw) 431 + { 432 + return hw->pdev->revision >> ALX_PCI_REVID_SHIFT; 433 + } 434 + 435 + static inline bool alx_hw_with_cr(struct alx_hw *hw) 436 + { 437 + return hw->pdev->revision & 1; 438 + } 439 + 440 + static inline bool alx_hw_giga(struct alx_hw *hw) 441 + { 442 + return 
hw->pdev->device & 1; 443 + } 444 + 445 + static inline void alx_write_mem8(struct alx_hw *hw, u32 reg, u8 val) 446 + { 447 + writeb(val, hw->hw_addr + reg); 448 + } 449 + 450 + static inline void alx_write_mem16(struct alx_hw *hw, u32 reg, u16 val) 451 + { 452 + writew(val, hw->hw_addr + reg); 453 + } 454 + 455 + static inline u16 alx_read_mem16(struct alx_hw *hw, u32 reg) 456 + { 457 + return readw(hw->hw_addr + reg); 458 + } 459 + 460 + static inline void alx_write_mem32(struct alx_hw *hw, u32 reg, u32 val) 461 + { 462 + writel(val, hw->hw_addr + reg); 463 + } 464 + 465 + static inline u32 alx_read_mem32(struct alx_hw *hw, u32 reg) 466 + { 467 + return readl(hw->hw_addr + reg); 468 + } 469 + 470 + static inline void alx_post_write(struct alx_hw *hw) 471 + { 472 + readl(hw->hw_addr); 473 + } 474 + 475 + int alx_get_perm_macaddr(struct alx_hw *hw, u8 *addr); 476 + void alx_reset_phy(struct alx_hw *hw); 477 + void alx_reset_pcie(struct alx_hw *hw); 478 + void alx_enable_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en); 479 + int alx_setup_speed_duplex(struct alx_hw *hw, u32 ethadv, u8 flowctrl); 480 + void alx_post_phy_link(struct alx_hw *hw); 481 + int alx_pre_suspend(struct alx_hw *hw, int speed); 482 + int alx_read_phy_reg(struct alx_hw *hw, u16 reg, u16 *phy_data); 483 + int alx_write_phy_reg(struct alx_hw *hw, u16 reg, u16 phy_data); 484 + int alx_read_phy_ext(struct alx_hw *hw, u8 dev, u16 reg, u16 *pdata); 485 + int alx_write_phy_ext(struct alx_hw *hw, u8 dev, u16 reg, u16 data); 486 + int alx_get_phy_link(struct alx_hw *hw, int *speed); 487 + int alx_clear_phy_intr(struct alx_hw *hw); 488 + int alx_config_wol(struct alx_hw *hw); 489 + void alx_cfg_mac_flowcontrol(struct alx_hw *hw, u8 fc); 490 + void alx_start_mac(struct alx_hw *hw); 491 + int alx_reset_mac(struct alx_hw *hw); 492 + void alx_set_macaddr(struct alx_hw *hw, const u8 *addr); 493 + bool alx_phy_configured(struct alx_hw *hw); 494 + void alx_configure_basic(struct alx_hw *hw); 495 + void 
alx_disable_rss(struct alx_hw *hw);
496 + int alx_select_powersaving_speed(struct alx_hw *hw, int *speed);
497 + bool alx_get_phy_info(struct alx_hw *hw);
498 +
499 + #endif
+1625
drivers/net/ethernet/atheros/alx/main.c
··· 1 + /* 2 + * Copyright (c) 2013 Johannes Berg <johannes@sipsolutions.net> 3 + * 4 + * This file is free software: you may copy, redistribute and/or modify it 5 + * under the terms of the GNU General Public License as published by the 6 + * Free Software Foundation, either version 2 of the License, or (at your 7 + * option) any later version. 8 + * 9 + * This file is distributed in the hope that it will be useful, but 10 + * WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 12 + * General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + * 17 + * This file incorporates work covered by the following copyright and 18 + * permission notice: 19 + * 20 + * Copyright (c) 2012 Qualcomm Atheros, Inc. 21 + * 22 + * Permission to use, copy, modify, and/or distribute this software for any 23 + * purpose with or without fee is hereby granted, provided that the above 24 + * copyright notice and this permission notice appear in all copies. 25 + * 26 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 27 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 28 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 29 + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 30 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 31 + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 32 + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
33 + */ 34 + 35 + #include <linux/module.h> 36 + #include <linux/pci.h> 37 + #include <linux/interrupt.h> 38 + #include <linux/ip.h> 39 + #include <linux/ipv6.h> 40 + #include <linux/if_vlan.h> 41 + #include <linux/mdio.h> 42 + #include <linux/aer.h> 43 + #include <linux/bitops.h> 44 + #include <linux/netdevice.h> 45 + #include <linux/etherdevice.h> 46 + #include <net/ip6_checksum.h> 47 + #include <linux/crc32.h> 48 + #include "alx.h" 49 + #include "hw.h" 50 + #include "reg.h" 51 + 52 + const char alx_drv_name[] = "alx"; 53 + 54 + 55 + static void alx_free_txbuf(struct alx_priv *alx, int entry) 56 + { 57 + struct alx_buffer *txb = &alx->txq.bufs[entry]; 58 + 59 + if (dma_unmap_len(txb, size)) { 60 + dma_unmap_single(&alx->hw.pdev->dev, 61 + dma_unmap_addr(txb, dma), 62 + dma_unmap_len(txb, size), 63 + DMA_TO_DEVICE); 64 + dma_unmap_len_set(txb, size, 0); 65 + } 66 + 67 + if (txb->skb) { 68 + dev_kfree_skb_any(txb->skb); 69 + txb->skb = NULL; 70 + } 71 + } 72 + 73 + static int alx_refill_rx_ring(struct alx_priv *alx, gfp_t gfp) 74 + { 75 + struct alx_rx_queue *rxq = &alx->rxq; 76 + struct sk_buff *skb; 77 + struct alx_buffer *cur_buf; 78 + dma_addr_t dma; 79 + u16 cur, next, count = 0; 80 + 81 + next = cur = rxq->write_idx; 82 + if (++next == alx->rx_ringsz) 83 + next = 0; 84 + cur_buf = &rxq->bufs[cur]; 85 + 86 + while (!cur_buf->skb && next != rxq->read_idx) { 87 + struct alx_rfd *rfd = &rxq->rfd[cur]; 88 + 89 + skb = __netdev_alloc_skb(alx->dev, alx->rxbuf_size, gfp); 90 + if (!skb) 91 + break; 92 + dma = dma_map_single(&alx->hw.pdev->dev, 93 + skb->data, alx->rxbuf_size, 94 + DMA_FROM_DEVICE); 95 + if (dma_mapping_error(&alx->hw.pdev->dev, dma)) { 96 + dev_kfree_skb(skb); 97 + break; 98 + } 99 + 100 + /* Unfortunately, RX descriptor buffers must be 4-byte 101 + * aligned, so we can't use IP alignment. 
102 + */ 103 + if (WARN_ON(dma & 3)) { 104 + dev_kfree_skb(skb); 105 + break; 106 + } 107 + 108 + cur_buf->skb = skb; 109 + dma_unmap_len_set(cur_buf, size, alx->rxbuf_size); 110 + dma_unmap_addr_set(cur_buf, dma, dma); 111 + rfd->addr = cpu_to_le64(dma); 112 + 113 + cur = next; 114 + if (++next == alx->rx_ringsz) 115 + next = 0; 116 + cur_buf = &rxq->bufs[cur]; 117 + count++; 118 + } 119 + 120 + if (count) { 121 + /* flush all updates before updating hardware */ 122 + wmb(); 123 + rxq->write_idx = cur; 124 + alx_write_mem16(&alx->hw, ALX_RFD_PIDX, cur); 125 + } 126 + 127 + return count; 128 + } 129 + 130 + static inline int alx_tpd_avail(struct alx_priv *alx) 131 + { 132 + struct alx_tx_queue *txq = &alx->txq; 133 + 134 + if (txq->write_idx >= txq->read_idx) 135 + return alx->tx_ringsz + txq->read_idx - txq->write_idx - 1; 136 + return txq->read_idx - txq->write_idx - 1; 137 + } 138 + 139 + static bool alx_clean_tx_irq(struct alx_priv *alx) 140 + { 141 + struct alx_tx_queue *txq = &alx->txq; 142 + u16 hw_read_idx, sw_read_idx; 143 + unsigned int total_bytes = 0, total_packets = 0; 144 + int budget = ALX_DEFAULT_TX_WORK; 145 + 146 + sw_read_idx = txq->read_idx; 147 + hw_read_idx = alx_read_mem16(&alx->hw, ALX_TPD_PRI0_CIDX); 148 + 149 + if (sw_read_idx != hw_read_idx) { 150 + while (sw_read_idx != hw_read_idx && budget > 0) { 151 + struct sk_buff *skb; 152 + 153 + skb = txq->bufs[sw_read_idx].skb; 154 + if (skb) { 155 + total_bytes += skb->len; 156 + total_packets++; 157 + budget--; 158 + } 159 + 160 + alx_free_txbuf(alx, sw_read_idx); 161 + 162 + if (++sw_read_idx == alx->tx_ringsz) 163 + sw_read_idx = 0; 164 + } 165 + txq->read_idx = sw_read_idx; 166 + 167 + netdev_completed_queue(alx->dev, total_packets, total_bytes); 168 + } 169 + 170 + if (netif_queue_stopped(alx->dev) && netif_carrier_ok(alx->dev) && 171 + alx_tpd_avail(alx) > alx->tx_ringsz/4) 172 + netif_wake_queue(alx->dev); 173 + 174 + return sw_read_idx == hw_read_idx; 175 + } 176 + 177 + static void 
alx_schedule_link_check(struct alx_priv *alx) 178 + { 179 + schedule_work(&alx->link_check_wk); 180 + } 181 + 182 + static void alx_schedule_reset(struct alx_priv *alx) 183 + { 184 + schedule_work(&alx->reset_wk); 185 + } 186 + 187 + static bool alx_clean_rx_irq(struct alx_priv *alx, int budget) 188 + { 189 + struct alx_rx_queue *rxq = &alx->rxq; 190 + struct alx_rrd *rrd; 191 + struct alx_buffer *rxb; 192 + struct sk_buff *skb; 193 + u16 length, rfd_cleaned = 0; 194 + 195 + while (budget > 0) { 196 + rrd = &rxq->rrd[rxq->rrd_read_idx]; 197 + if (!(rrd->word3 & cpu_to_le32(1 << RRD_UPDATED_SHIFT))) 198 + break; 199 + rrd->word3 &= ~cpu_to_le32(1 << RRD_UPDATED_SHIFT); 200 + 201 + if (ALX_GET_FIELD(le32_to_cpu(rrd->word0), 202 + RRD_SI) != rxq->read_idx || 203 + ALX_GET_FIELD(le32_to_cpu(rrd->word0), 204 + RRD_NOR) != 1) { 205 + alx_schedule_reset(alx); 206 + return 0; 207 + } 208 + 209 + rxb = &rxq->bufs[rxq->read_idx]; 210 + dma_unmap_single(&alx->hw.pdev->dev, 211 + dma_unmap_addr(rxb, dma), 212 + dma_unmap_len(rxb, size), 213 + DMA_FROM_DEVICE); 214 + dma_unmap_len_set(rxb, size, 0); 215 + skb = rxb->skb; 216 + rxb->skb = NULL; 217 + 218 + if (rrd->word3 & cpu_to_le32(1 << RRD_ERR_RES_SHIFT) || 219 + rrd->word3 & cpu_to_le32(1 << RRD_ERR_LEN_SHIFT)) { 220 + rrd->word3 = 0; 221 + dev_kfree_skb_any(skb); 222 + goto next_pkt; 223 + } 224 + 225 + length = ALX_GET_FIELD(le32_to_cpu(rrd->word3), 226 + RRD_PKTLEN) - ETH_FCS_LEN; 227 + skb_put(skb, length); 228 + skb->protocol = eth_type_trans(skb, alx->dev); 229 + 230 + skb_checksum_none_assert(skb); 231 + if (alx->dev->features & NETIF_F_RXCSUM && 232 + !(rrd->word3 & (cpu_to_le32(1 << RRD_ERR_L4_SHIFT) | 233 + cpu_to_le32(1 << RRD_ERR_IPV4_SHIFT)))) { 234 + switch (ALX_GET_FIELD(le32_to_cpu(rrd->word2), 235 + RRD_PID)) { 236 + case RRD_PID_IPV6UDP: 237 + case RRD_PID_IPV4UDP: 238 + case RRD_PID_IPV4TCP: 239 + case RRD_PID_IPV6TCP: 240 + skb->ip_summed = CHECKSUM_UNNECESSARY; 241 + break; 242 + } 243 + } 244 + 245 + 
napi_gro_receive(&alx->napi, skb); 246 + budget--; 247 + 248 + next_pkt: 249 + if (++rxq->read_idx == alx->rx_ringsz) 250 + rxq->read_idx = 0; 251 + if (++rxq->rrd_read_idx == alx->rx_ringsz) 252 + rxq->rrd_read_idx = 0; 253 + 254 + if (++rfd_cleaned > ALX_RX_ALLOC_THRESH) 255 + rfd_cleaned -= alx_refill_rx_ring(alx, GFP_ATOMIC); 256 + } 257 + 258 + if (rfd_cleaned) 259 + alx_refill_rx_ring(alx, GFP_ATOMIC); 260 + 261 + return budget > 0; 262 + } 263 + 264 + static int alx_poll(struct napi_struct *napi, int budget) 265 + { 266 + struct alx_priv *alx = container_of(napi, struct alx_priv, napi); 267 + struct alx_hw *hw = &alx->hw; 268 + bool complete = true; 269 + unsigned long flags; 270 + 271 + complete = alx_clean_tx_irq(alx) && 272 + alx_clean_rx_irq(alx, budget); 273 + 274 + if (!complete) 275 + return 1; 276 + 277 + napi_complete(&alx->napi); 278 + 279 + /* enable interrupt */ 280 + spin_lock_irqsave(&alx->irq_lock, flags); 281 + alx->int_mask |= ALX_ISR_TX_Q0 | ALX_ISR_RX_Q0; 282 + alx_write_mem32(hw, ALX_IMR, alx->int_mask); 283 + spin_unlock_irqrestore(&alx->irq_lock, flags); 284 + 285 + alx_post_write(hw); 286 + 287 + return 0; 288 + } 289 + 290 + static irqreturn_t alx_intr_handle(struct alx_priv *alx, u32 intr) 291 + { 292 + struct alx_hw *hw = &alx->hw; 293 + bool write_int_mask = false; 294 + 295 + spin_lock(&alx->irq_lock); 296 + 297 + /* ACK interrupt */ 298 + alx_write_mem32(hw, ALX_ISR, intr | ALX_ISR_DIS); 299 + intr &= alx->int_mask; 300 + 301 + if (intr & ALX_ISR_FATAL) { 302 + netif_warn(alx, hw, alx->dev, 303 + "fatal interrupt 0x%x, resetting\n", intr); 304 + alx_schedule_reset(alx); 305 + goto out; 306 + } 307 + 308 + if (intr & ALX_ISR_ALERT) 309 + netdev_warn(alx->dev, "alert interrupt: 0x%x\n", intr); 310 + 311 + if (intr & ALX_ISR_PHY) { 312 + /* suppress PHY interrupt, because the source 313 + * is from PHY internal. only the internal status 314 + * is cleared, the interrupt status could be cleared. 
315 + */ 316 + alx->int_mask &= ~ALX_ISR_PHY; 317 + write_int_mask = true; 318 + alx_schedule_link_check(alx); 319 + } 320 + 321 + if (intr & (ALX_ISR_TX_Q0 | ALX_ISR_RX_Q0)) { 322 + napi_schedule(&alx->napi); 323 + /* mask rx/tx interrupt, enable them when napi complete */ 324 + alx->int_mask &= ~ALX_ISR_ALL_QUEUES; 325 + write_int_mask = true; 326 + } 327 + 328 + if (write_int_mask) 329 + alx_write_mem32(hw, ALX_IMR, alx->int_mask); 330 + 331 + alx_write_mem32(hw, ALX_ISR, 0); 332 + 333 + out: 334 + spin_unlock(&alx->irq_lock); 335 + return IRQ_HANDLED; 336 + } 337 + 338 + static irqreturn_t alx_intr_msi(int irq, void *data) 339 + { 340 + struct alx_priv *alx = data; 341 + 342 + return alx_intr_handle(alx, alx_read_mem32(&alx->hw, ALX_ISR)); 343 + } 344 + 345 + static irqreturn_t alx_intr_legacy(int irq, void *data) 346 + { 347 + struct alx_priv *alx = data; 348 + struct alx_hw *hw = &alx->hw; 349 + u32 intr; 350 + 351 + intr = alx_read_mem32(hw, ALX_ISR); 352 + 353 + if (intr & ALX_ISR_DIS || !(intr & alx->int_mask)) 354 + return IRQ_NONE; 355 + 356 + return alx_intr_handle(alx, intr); 357 + } 358 + 359 + static void alx_init_ring_ptrs(struct alx_priv *alx) 360 + { 361 + struct alx_hw *hw = &alx->hw; 362 + u32 addr_hi = ((u64)alx->descmem.dma) >> 32; 363 + 364 + alx->rxq.read_idx = 0; 365 + alx->rxq.write_idx = 0; 366 + alx->rxq.rrd_read_idx = 0; 367 + alx_write_mem32(hw, ALX_RX_BASE_ADDR_HI, addr_hi); 368 + alx_write_mem32(hw, ALX_RRD_ADDR_LO, alx->rxq.rrd_dma); 369 + alx_write_mem32(hw, ALX_RRD_RING_SZ, alx->rx_ringsz); 370 + alx_write_mem32(hw, ALX_RFD_ADDR_LO, alx->rxq.rfd_dma); 371 + alx_write_mem32(hw, ALX_RFD_RING_SZ, alx->rx_ringsz); 372 + alx_write_mem32(hw, ALX_RFD_BUF_SZ, alx->rxbuf_size); 373 + 374 + alx->txq.read_idx = 0; 375 + alx->txq.write_idx = 0; 376 + alx_write_mem32(hw, ALX_TX_BASE_ADDR_HI, addr_hi); 377 + alx_write_mem32(hw, ALX_TPD_PRI0_ADDR_LO, alx->txq.tpd_dma); 378 + alx_write_mem32(hw, ALX_TPD_RING_SZ, alx->tx_ringsz); 379 + 380 + /* 
load these pointers into the chip */ 381 + alx_write_mem32(hw, ALX_SRAM9, ALX_SRAM_LOAD_PTR); 382 + } 383 + 384 + static void alx_free_txring_buf(struct alx_priv *alx) 385 + { 386 + struct alx_tx_queue *txq = &alx->txq; 387 + int i; 388 + 389 + if (!txq->bufs) 390 + return; 391 + 392 + for (i = 0; i < alx->tx_ringsz; i++) 393 + alx_free_txbuf(alx, i); 394 + 395 + memset(txq->bufs, 0, alx->tx_ringsz * sizeof(struct alx_buffer)); 396 + memset(txq->tpd, 0, alx->tx_ringsz * sizeof(struct alx_txd)); 397 + txq->write_idx = 0; 398 + txq->read_idx = 0; 399 + 400 + netdev_reset_queue(alx->dev); 401 + } 402 + 403 + static void alx_free_rxring_buf(struct alx_priv *alx) 404 + { 405 + struct alx_rx_queue *rxq = &alx->rxq; 406 + struct alx_buffer *cur_buf; 407 + u16 i; 408 + 409 + if (rxq == NULL) 410 + return; 411 + 412 + for (i = 0; i < alx->rx_ringsz; i++) { 413 + cur_buf = rxq->bufs + i; 414 + if (cur_buf->skb) { 415 + dma_unmap_single(&alx->hw.pdev->dev, 416 + dma_unmap_addr(cur_buf, dma), 417 + dma_unmap_len(cur_buf, size), 418 + DMA_FROM_DEVICE); 419 + dev_kfree_skb(cur_buf->skb); 420 + cur_buf->skb = NULL; 421 + dma_unmap_len_set(cur_buf, size, 0); 422 + dma_unmap_addr_set(cur_buf, dma, 0); 423 + } 424 + } 425 + 426 + rxq->write_idx = 0; 427 + rxq->read_idx = 0; 428 + rxq->rrd_read_idx = 0; 429 + } 430 + 431 + static void alx_free_buffers(struct alx_priv *alx) 432 + { 433 + alx_free_txring_buf(alx); 434 + alx_free_rxring_buf(alx); 435 + } 436 + 437 + static int alx_reinit_rings(struct alx_priv *alx) 438 + { 439 + alx_free_buffers(alx); 440 + 441 + alx_init_ring_ptrs(alx); 442 + 443 + if (!alx_refill_rx_ring(alx, GFP_KERNEL)) 444 + return -ENOMEM; 445 + 446 + return 0; 447 + } 448 + 449 + static void alx_add_mc_addr(struct alx_hw *hw, const u8 *addr, u32 *mc_hash) 450 + { 451 + u32 crc32, bit, reg; 452 + 453 + crc32 = ether_crc(ETH_ALEN, addr); 454 + reg = (crc32 >> 31) & 0x1; 455 + bit = (crc32 >> 26) & 0x1F; 456 + 457 + mc_hash[reg] |= BIT(bit); 458 + } 459 + 460 + 
static void __alx_set_rx_mode(struct net_device *netdev) 461 + { 462 + struct alx_priv *alx = netdev_priv(netdev); 463 + struct alx_hw *hw = &alx->hw; 464 + struct netdev_hw_addr *ha; 465 + u32 mc_hash[2] = {}; 466 + 467 + if (!(netdev->flags & IFF_ALLMULTI)) { 468 + netdev_for_each_mc_addr(ha, netdev) 469 + alx_add_mc_addr(hw, ha->addr, mc_hash); 470 + 471 + alx_write_mem32(hw, ALX_HASH_TBL0, mc_hash[0]); 472 + alx_write_mem32(hw, ALX_HASH_TBL1, mc_hash[1]); 473 + } 474 + 475 + hw->rx_ctrl &= ~(ALX_MAC_CTRL_MULTIALL_EN | ALX_MAC_CTRL_PROMISC_EN); 476 + if (netdev->flags & IFF_PROMISC) 477 + hw->rx_ctrl |= ALX_MAC_CTRL_PROMISC_EN; 478 + if (netdev->flags & IFF_ALLMULTI) 479 + hw->rx_ctrl |= ALX_MAC_CTRL_MULTIALL_EN; 480 + 481 + alx_write_mem32(hw, ALX_MAC_CTRL, hw->rx_ctrl); 482 + } 483 + 484 + static void alx_set_rx_mode(struct net_device *netdev) 485 + { 486 + __alx_set_rx_mode(netdev); 487 + } 488 + 489 + static int alx_set_mac_address(struct net_device *netdev, void *data) 490 + { 491 + struct alx_priv *alx = netdev_priv(netdev); 492 + struct alx_hw *hw = &alx->hw; 493 + struct sockaddr *addr = data; 494 + 495 + if (!is_valid_ether_addr(addr->sa_data)) 496 + return -EADDRNOTAVAIL; 497 + 498 + if (netdev->addr_assign_type & NET_ADDR_RANDOM) 499 + netdev->addr_assign_type ^= NET_ADDR_RANDOM; 500 + 501 + memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len); 502 + memcpy(hw->mac_addr, addr->sa_data, netdev->addr_len); 503 + alx_set_macaddr(hw, hw->mac_addr); 504 + 505 + return 0; 506 + } 507 + 508 + static int alx_alloc_descriptors(struct alx_priv *alx) 509 + { 510 + alx->txq.bufs = kcalloc(alx->tx_ringsz, 511 + sizeof(struct alx_buffer), 512 + GFP_KERNEL); 513 + if (!alx->txq.bufs) 514 + return -ENOMEM; 515 + 516 + alx->rxq.bufs = kcalloc(alx->rx_ringsz, 517 + sizeof(struct alx_buffer), 518 + GFP_KERNEL); 519 + if (!alx->rxq.bufs) 520 + goto out_free; 521 + 522 + /* physical tx/rx ring descriptors 523 + * 524 + * Allocate them as a single chunk because they 
must not cross a 525 + * 4G boundary (hardware has a single register for high 32 bits 526 + * of addresses only) 527 + */ 528 + alx->descmem.size = sizeof(struct alx_txd) * alx->tx_ringsz + 529 + sizeof(struct alx_rrd) * alx->rx_ringsz + 530 + sizeof(struct alx_rfd) * alx->rx_ringsz; 531 + alx->descmem.virt = dma_zalloc_coherent(&alx->hw.pdev->dev, 532 + alx->descmem.size, 533 + &alx->descmem.dma, 534 + GFP_KERNEL); 535 + if (!alx->descmem.virt) 536 + goto out_free; 537 + 538 + alx->txq.tpd = (void *)alx->descmem.virt; 539 + alx->txq.tpd_dma = alx->descmem.dma; 540 + 541 + /* alignment requirement for next block */ 542 + BUILD_BUG_ON(sizeof(struct alx_txd) % 8); 543 + 544 + alx->rxq.rrd = 545 + (void *)((u8 *)alx->descmem.virt + 546 + sizeof(struct alx_txd) * alx->tx_ringsz); 547 + alx->rxq.rrd_dma = alx->descmem.dma + 548 + sizeof(struct alx_txd) * alx->tx_ringsz; 549 + 550 + /* alignment requirement for next block */ 551 + BUILD_BUG_ON(sizeof(struct alx_rrd) % 8); 552 + 553 + alx->rxq.rfd = 554 + (void *)((u8 *)alx->descmem.virt + 555 + sizeof(struct alx_txd) * alx->tx_ringsz + 556 + sizeof(struct alx_rrd) * alx->rx_ringsz); 557 + alx->rxq.rfd_dma = alx->descmem.dma + 558 + sizeof(struct alx_txd) * alx->tx_ringsz + 559 + sizeof(struct alx_rrd) * alx->rx_ringsz; 560 + 561 + return 0; 562 + out_free: 563 + kfree(alx->txq.bufs); 564 + kfree(alx->rxq.bufs); 565 + return -ENOMEM; 566 + } 567 + 568 + static int alx_alloc_rings(struct alx_priv *alx) 569 + { 570 + int err; 571 + 572 + err = alx_alloc_descriptors(alx); 573 + if (err) 574 + return err; 575 + 576 + alx->int_mask &= ~ALX_ISR_ALL_QUEUES; 577 + alx->int_mask |= ALX_ISR_TX_Q0 | ALX_ISR_RX_Q0; 578 + alx->tx_ringsz = alx->tx_ringsz; 579 + 580 + netif_napi_add(alx->dev, &alx->napi, alx_poll, 64); 581 + 582 + alx_reinit_rings(alx); 583 + return 0; 584 + } 585 + 586 + static void alx_free_rings(struct alx_priv *alx) 587 + { 588 + netif_napi_del(&alx->napi); 589 + alx_free_buffers(alx); 590 + 591 + 
kfree(alx->txq.bufs); 592 + kfree(alx->rxq.bufs); 593 + 594 + dma_free_coherent(&alx->hw.pdev->dev, 595 + alx->descmem.size, 596 + alx->descmem.virt, 597 + alx->descmem.dma); 598 + } 599 + 600 + static void alx_config_vector_mapping(struct alx_priv *alx) 601 + { 602 + struct alx_hw *hw = &alx->hw; 603 + 604 + alx_write_mem32(hw, ALX_MSI_MAP_TBL1, 0); 605 + alx_write_mem32(hw, ALX_MSI_MAP_TBL2, 0); 606 + alx_write_mem32(hw, ALX_MSI_ID_MAP, 0); 607 + } 608 + 609 + static void alx_irq_enable(struct alx_priv *alx) 610 + { 611 + struct alx_hw *hw = &alx->hw; 612 + 613 + /* level-1 interrupt switch */ 614 + alx_write_mem32(hw, ALX_ISR, 0); 615 + alx_write_mem32(hw, ALX_IMR, alx->int_mask); 616 + alx_post_write(hw); 617 + } 618 + 619 + static void alx_irq_disable(struct alx_priv *alx) 620 + { 621 + struct alx_hw *hw = &alx->hw; 622 + 623 + alx_write_mem32(hw, ALX_ISR, ALX_ISR_DIS); 624 + alx_write_mem32(hw, ALX_IMR, 0); 625 + alx_post_write(hw); 626 + 627 + synchronize_irq(alx->hw.pdev->irq); 628 + } 629 + 630 + static int alx_request_irq(struct alx_priv *alx) 631 + { 632 + struct pci_dev *pdev = alx->hw.pdev; 633 + struct alx_hw *hw = &alx->hw; 634 + int err; 635 + u32 msi_ctrl; 636 + 637 + msi_ctrl = (hw->imt >> 1) << ALX_MSI_RETRANS_TM_SHIFT; 638 + 639 + if (!pci_enable_msi(alx->hw.pdev)) { 640 + alx->msi = true; 641 + 642 + alx_write_mem32(hw, ALX_MSI_RETRANS_TIMER, 643 + msi_ctrl | ALX_MSI_MASK_SEL_LINE); 644 + err = request_irq(pdev->irq, alx_intr_msi, 0, 645 + alx->dev->name, alx); 646 + if (!err) 647 + goto out; 648 + /* fall back to legacy interrupt */ 649 + pci_disable_msi(alx->hw.pdev); 650 + } 651 + 652 + alx_write_mem32(hw, ALX_MSI_RETRANS_TIMER, 0); 653 + err = request_irq(pdev->irq, alx_intr_legacy, IRQF_SHARED, 654 + alx->dev->name, alx); 655 + out: 656 + if (!err) 657 + alx_config_vector_mapping(alx); 658 + return err; 659 + } 660 + 661 + static void alx_free_irq(struct alx_priv *alx) 662 + { 663 + struct pci_dev *pdev = alx->hw.pdev; 664 + 665 + 
free_irq(pdev->irq, alx); 666 + 667 + if (alx->msi) { 668 + pci_disable_msi(alx->hw.pdev); 669 + alx->msi = false; 670 + } 671 + } 672 + 673 + static int alx_identify_hw(struct alx_priv *alx) 674 + { 675 + struct alx_hw *hw = &alx->hw; 676 + int rev = alx_hw_revision(hw); 677 + 678 + if (rev > ALX_REV_C0) 679 + return -EINVAL; 680 + 681 + hw->max_dma_chnl = rev >= ALX_REV_B0 ? 4 : 2; 682 + 683 + return 0; 684 + } 685 + 686 + static int alx_init_sw(struct alx_priv *alx) 687 + { 688 + struct pci_dev *pdev = alx->hw.pdev; 689 + struct alx_hw *hw = &alx->hw; 690 + int err; 691 + 692 + err = alx_identify_hw(alx); 693 + if (err) { 694 + dev_err(&pdev->dev, "unrecognized chip, aborting\n"); 695 + return err; 696 + } 697 + 698 + alx->hw.lnk_patch = 699 + pdev->device == ALX_DEV_ID_AR8161 && 700 + pdev->subsystem_vendor == PCI_VENDOR_ID_ATTANSIC && 701 + pdev->subsystem_device == 0x0091 && 702 + pdev->revision == 0; 703 + 704 + hw->smb_timer = 400; 705 + hw->mtu = alx->dev->mtu; 706 + alx->rxbuf_size = ALIGN(ALX_RAW_MTU(hw->mtu), 8); 707 + alx->tx_ringsz = 256; 708 + alx->rx_ringsz = 512; 709 + hw->sleep_ctrl = ALX_SLEEP_WOL_MAGIC | ALX_SLEEP_WOL_PHY; 710 + hw->imt = 200; 711 + alx->int_mask = ALX_ISR_MISC; 712 + hw->dma_chnl = hw->max_dma_chnl; 713 + hw->ith_tpd = alx->tx_ringsz / 3; 714 + hw->link_speed = SPEED_UNKNOWN; 715 + hw->adv_cfg = ADVERTISED_Autoneg | 716 + ADVERTISED_10baseT_Half | 717 + ADVERTISED_10baseT_Full | 718 + ADVERTISED_100baseT_Full | 719 + ADVERTISED_100baseT_Half | 720 + ADVERTISED_1000baseT_Full; 721 + hw->flowctrl = ALX_FC_ANEG | ALX_FC_RX | ALX_FC_TX; 722 + 723 + hw->rx_ctrl = ALX_MAC_CTRL_WOLSPED_SWEN | 724 + ALX_MAC_CTRL_MHASH_ALG_HI5B | 725 + ALX_MAC_CTRL_BRD_EN | 726 + ALX_MAC_CTRL_PCRCE | 727 + ALX_MAC_CTRL_CRCE | 728 + ALX_MAC_CTRL_RXFC_EN | 729 + ALX_MAC_CTRL_TXFC_EN | 730 + 7 << ALX_MAC_CTRL_PRMBLEN_SHIFT; 731 + 732 + return err; 733 + } 734 + 735 + 736 + static netdev_features_t alx_fix_features(struct net_device *netdev, 737 + 
netdev_features_t features) 738 + { 739 + if (netdev->mtu > ALX_MAX_TSO_PKT_SIZE) 740 + features &= ~(NETIF_F_TSO | NETIF_F_TSO6); 741 + 742 + return features; 743 + } 744 + 745 + static void alx_netif_stop(struct alx_priv *alx) 746 + { 747 + alx->dev->trans_start = jiffies; 748 + if (netif_carrier_ok(alx->dev)) { 749 + netif_carrier_off(alx->dev); 750 + netif_tx_disable(alx->dev); 751 + napi_disable(&alx->napi); 752 + } 753 + } 754 + 755 + static void alx_halt(struct alx_priv *alx) 756 + { 757 + struct alx_hw *hw = &alx->hw; 758 + 759 + alx_netif_stop(alx); 760 + hw->link_speed = SPEED_UNKNOWN; 761 + 762 + alx_reset_mac(hw); 763 + 764 + /* disable l0s/l1 */ 765 + alx_enable_aspm(hw, false, false); 766 + alx_irq_disable(alx); 767 + alx_free_buffers(alx); 768 + } 769 + 770 + static void alx_configure(struct alx_priv *alx) 771 + { 772 + struct alx_hw *hw = &alx->hw; 773 + 774 + alx_configure_basic(hw); 775 + alx_disable_rss(hw); 776 + __alx_set_rx_mode(alx->dev); 777 + 778 + alx_write_mem32(hw, ALX_MAC_CTRL, hw->rx_ctrl); 779 + } 780 + 781 + static void alx_activate(struct alx_priv *alx) 782 + { 783 + /* hardware setting lost, restore it */ 784 + alx_reinit_rings(alx); 785 + alx_configure(alx); 786 + 787 + /* clear old interrupts */ 788 + alx_write_mem32(&alx->hw, ALX_ISR, ~(u32)ALX_ISR_DIS); 789 + 790 + alx_irq_enable(alx); 791 + 792 + alx_schedule_link_check(alx); 793 + } 794 + 795 + static void alx_reinit(struct alx_priv *alx) 796 + { 797 + ASSERT_RTNL(); 798 + 799 + alx_halt(alx); 800 + alx_activate(alx); 801 + } 802 + 803 + static int alx_change_mtu(struct net_device *netdev, int mtu) 804 + { 805 + struct alx_priv *alx = netdev_priv(netdev); 806 + int max_frame = mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN; 807 + 808 + if ((max_frame < ALX_MIN_FRAME_SIZE) || 809 + (max_frame > ALX_MAX_FRAME_SIZE)) 810 + return -EINVAL; 811 + 812 + if (netdev->mtu == mtu) 813 + return 0; 814 + 815 + netdev->mtu = mtu; 816 + alx->hw.mtu = mtu; 817 + alx->rxbuf_size = mtu > 
ALX_DEF_RXBUF_SIZE ? 818 + ALIGN(max_frame, 8) : ALX_DEF_RXBUF_SIZE; 819 + netdev_update_features(netdev); 820 + if (netif_running(netdev)) 821 + alx_reinit(alx); 822 + return 0; 823 + } 824 + 825 + static void alx_netif_start(struct alx_priv *alx) 826 + { 827 + netif_tx_wake_all_queues(alx->dev); 828 + napi_enable(&alx->napi); 829 + netif_carrier_on(alx->dev); 830 + } 831 + 832 + static int __alx_open(struct alx_priv *alx, bool resume) 833 + { 834 + int err; 835 + 836 + if (!resume) 837 + netif_carrier_off(alx->dev); 838 + 839 + err = alx_alloc_rings(alx); 840 + if (err) 841 + return err; 842 + 843 + alx_configure(alx); 844 + 845 + err = alx_request_irq(alx); 846 + if (err) 847 + goto out_free_rings; 848 + 849 + /* clear old interrupts */ 850 + alx_write_mem32(&alx->hw, ALX_ISR, ~(u32)ALX_ISR_DIS); 851 + 852 + alx_irq_enable(alx); 853 + 854 + if (!resume) 855 + netif_tx_start_all_queues(alx->dev); 856 + 857 + alx_schedule_link_check(alx); 858 + return 0; 859 + 860 + out_free_rings: 861 + alx_free_rings(alx); 862 + return err; 863 + } 864 + 865 + static void __alx_stop(struct alx_priv *alx) 866 + { 867 + alx_halt(alx); 868 + alx_free_irq(alx); 869 + alx_free_rings(alx); 870 + } 871 + 872 + static const char *alx_speed_desc(u16 speed) 873 + { 874 + switch (speed) { 875 + case SPEED_1000 + DUPLEX_FULL: 876 + return "1 Gbps Full"; 877 + case SPEED_100 + DUPLEX_FULL: 878 + return "100 Mbps Full"; 879 + case SPEED_100 + DUPLEX_HALF: 880 + return "100 Mbps Half"; 881 + case SPEED_10 + DUPLEX_FULL: 882 + return "10 Mbps Full"; 883 + case SPEED_10 + DUPLEX_HALF: 884 + return "10 Mbps Half"; 885 + default: 886 + return "Unknown speed"; 887 + } 888 + } 889 + 890 + static void alx_check_link(struct alx_priv *alx) 891 + { 892 + struct alx_hw *hw = &alx->hw; 893 + unsigned long flags; 894 + int speed, old_speed; 895 + int err; 896 + 897 + /* clear PHY internal interrupt status, otherwise the main 898 + * interrupt status will be asserted forever 899 + */ 900 + 
alx_clear_phy_intr(hw); 901 + 902 + err = alx_get_phy_link(hw, &speed); 903 + if (err < 0) 904 + goto reset; 905 + 906 + spin_lock_irqsave(&alx->irq_lock, flags); 907 + alx->int_mask |= ALX_ISR_PHY; 908 + alx_write_mem32(hw, ALX_IMR, alx->int_mask); 909 + spin_unlock_irqrestore(&alx->irq_lock, flags); 910 + 911 + old_speed = hw->link_speed; 912 + 913 + if (old_speed == speed) 914 + return; 915 + hw->link_speed = speed; 916 + 917 + if (speed != SPEED_UNKNOWN) { 918 + netif_info(alx, link, alx->dev, 919 + "NIC Up: %s\n", alx_speed_desc(speed)); 920 + alx_post_phy_link(hw); 921 + alx_enable_aspm(hw, true, true); 922 + alx_start_mac(hw); 923 + 924 + if (old_speed == SPEED_UNKNOWN) 925 + alx_netif_start(alx); 926 + } else { 927 + /* link is now down */ 928 + alx_netif_stop(alx); 929 + netif_info(alx, link, alx->dev, "Link Down\n"); 930 + err = alx_reset_mac(hw); 931 + if (err) 932 + goto reset; 933 + alx_irq_disable(alx); 934 + 935 + /* MAC reset causes all HW settings to be lost, restore all */ 936 + err = alx_reinit_rings(alx); 937 + if (err) 938 + goto reset; 939 + alx_configure(alx); 940 + alx_enable_aspm(hw, false, true); 941 + alx_post_phy_link(hw); 942 + alx_irq_enable(alx); 943 + } 944 + 945 + return; 946 + 947 + reset: 948 + alx_schedule_reset(alx); 949 + } 950 + 951 + static int alx_open(struct net_device *netdev) 952 + { 953 + return __alx_open(netdev_priv(netdev), false); 954 + } 955 + 956 + static int alx_stop(struct net_device *netdev) 957 + { 958 + __alx_stop(netdev_priv(netdev)); 959 + return 0; 960 + } 961 + 962 + static int __alx_shutdown(struct pci_dev *pdev, bool *wol_en) 963 + { 964 + struct alx_priv *alx = pci_get_drvdata(pdev); 965 + struct net_device *netdev = alx->dev; 966 + struct alx_hw *hw = &alx->hw; 967 + int err, speed; 968 + 969 + netif_device_detach(netdev); 970 + 971 + if (netif_running(netdev)) 972 + __alx_stop(alx); 973 + 974 + #ifdef CONFIG_PM_SLEEP 975 + err = pci_save_state(pdev); 976 + if (err) 977 + return err; 978 + #endif 979 + 
980 + err = alx_select_powersaving_speed(hw, &speed); 981 + if (err) 982 + return err; 983 + err = alx_clear_phy_intr(hw); 984 + if (err) 985 + return err; 986 + err = alx_pre_suspend(hw, speed); 987 + if (err) 988 + return err; 989 + err = alx_config_wol(hw); 990 + if (err) 991 + return err; 992 + 993 + *wol_en = false; 994 + if (hw->sleep_ctrl & ALX_SLEEP_ACTIVE) { 995 + netif_info(alx, wol, netdev, 996 + "wol: ctrl=%X, speed=%X\n", 997 + hw->sleep_ctrl, speed); 998 + device_set_wakeup_enable(&pdev->dev, true); 999 + *wol_en = true; 1000 + } 1001 + 1002 + pci_disable_device(pdev); 1003 + 1004 + return 0; 1005 + } 1006 + 1007 + static void alx_shutdown(struct pci_dev *pdev) 1008 + { 1009 + int err; 1010 + bool wol_en; 1011 + 1012 + err = __alx_shutdown(pdev, &wol_en); 1013 + if (!err) { 1014 + pci_wake_from_d3(pdev, wol_en); 1015 + pci_set_power_state(pdev, PCI_D3hot); 1016 + } else { 1017 + dev_err(&pdev->dev, "shutdown fail %d\n", err); 1018 + } 1019 + } 1020 + 1021 + static void alx_link_check(struct work_struct *work) 1022 + { 1023 + struct alx_priv *alx; 1024 + 1025 + alx = container_of(work, struct alx_priv, link_check_wk); 1026 + 1027 + rtnl_lock(); 1028 + alx_check_link(alx); 1029 + rtnl_unlock(); 1030 + } 1031 + 1032 + static void alx_reset(struct work_struct *work) 1033 + { 1034 + struct alx_priv *alx = container_of(work, struct alx_priv, reset_wk); 1035 + 1036 + rtnl_lock(); 1037 + alx_reinit(alx); 1038 + rtnl_unlock(); 1039 + } 1040 + 1041 + static int alx_tx_csum(struct sk_buff *skb, struct alx_txd *first) 1042 + { 1043 + u8 cso, css; 1044 + 1045 + if (skb->ip_summed != CHECKSUM_PARTIAL) 1046 + return 0; 1047 + 1048 + cso = skb_checksum_start_offset(skb); 1049 + if (cso & 1) 1050 + return -EINVAL; 1051 + 1052 + css = cso + skb->csum_offset; 1053 + first->word1 |= cpu_to_le32((cso >> 1) << TPD_CXSUMSTART_SHIFT); 1054 + first->word1 |= cpu_to_le32((css >> 1) << TPD_CXSUMOFFSET_SHIFT); 1055 + first->word1 |= cpu_to_le32(1 << TPD_CXSUM_EN_SHIFT); 1056 + 
1057 + return 0; 1058 + } 1059 + 1060 + static int alx_map_tx_skb(struct alx_priv *alx, struct sk_buff *skb) 1061 + { 1062 + struct alx_tx_queue *txq = &alx->txq; 1063 + struct alx_txd *tpd, *first_tpd; 1064 + dma_addr_t dma; 1065 + int maplen, f, first_idx = txq->write_idx; 1066 + 1067 + first_tpd = &txq->tpd[txq->write_idx]; 1068 + tpd = first_tpd; 1069 + 1070 + maplen = skb_headlen(skb); 1071 + dma = dma_map_single(&alx->hw.pdev->dev, skb->data, maplen, 1072 + DMA_TO_DEVICE); 1073 + if (dma_mapping_error(&alx->hw.pdev->dev, dma)) 1074 + goto err_dma; 1075 + 1076 + dma_unmap_len_set(&txq->bufs[txq->write_idx], size, maplen); 1077 + dma_unmap_addr_set(&txq->bufs[txq->write_idx], dma, dma); 1078 + 1079 + tpd->adrl.addr = cpu_to_le64(dma); 1080 + tpd->len = cpu_to_le16(maplen); 1081 + 1082 + for (f = 0; f < skb_shinfo(skb)->nr_frags; f++) { 1083 + struct skb_frag_struct *frag; 1084 + 1085 + frag = &skb_shinfo(skb)->frags[f]; 1086 + 1087 + if (++txq->write_idx == alx->tx_ringsz) 1088 + txq->write_idx = 0; 1089 + tpd = &txq->tpd[txq->write_idx]; 1090 + 1091 + tpd->word1 = first_tpd->word1; 1092 + 1093 + maplen = skb_frag_size(frag); 1094 + dma = skb_frag_dma_map(&alx->hw.pdev->dev, frag, 0, 1095 + maplen, DMA_TO_DEVICE); 1096 + if (dma_mapping_error(&alx->hw.pdev->dev, dma)) 1097 + goto err_dma; 1098 + dma_unmap_len_set(&txq->bufs[txq->write_idx], size, maplen); 1099 + dma_unmap_addr_set(&txq->bufs[txq->write_idx], dma, dma); 1100 + 1101 + tpd->adrl.addr = cpu_to_le64(dma); 1102 + tpd->len = cpu_to_le16(maplen); 1103 + } 1104 + 1105 + /* last TPD, set EOP flag and store skb */ 1106 + tpd->word1 |= cpu_to_le32(1 << TPD_EOP_SHIFT); 1107 + txq->bufs[txq->write_idx].skb = skb; 1108 + 1109 + if (++txq->write_idx == alx->tx_ringsz) 1110 + txq->write_idx = 0; 1111 + 1112 + return 0; 1113 + 1114 + err_dma: 1115 + f = first_idx; 1116 + while (f != txq->write_idx) { 1117 + alx_free_txbuf(alx, f); 1118 + if (++f == alx->tx_ringsz) 1119 + f = 0; 1120 + } 1121 + return -ENOMEM; 
1122 + } 1123 + 1124 + static netdev_tx_t alx_start_xmit(struct sk_buff *skb, 1125 + struct net_device *netdev) 1126 + { 1127 + struct alx_priv *alx = netdev_priv(netdev); 1128 + struct alx_tx_queue *txq = &alx->txq; 1129 + struct alx_txd *first; 1130 + int tpdreq = skb_shinfo(skb)->nr_frags + 1; 1131 + 1132 + if (alx_tpd_avail(alx) < tpdreq) { 1133 + netif_stop_queue(alx->dev); 1134 + goto drop; 1135 + } 1136 + 1137 + first = &txq->tpd[txq->write_idx]; 1138 + memset(first, 0, sizeof(*first)); 1139 + 1140 + if (alx_tx_csum(skb, first)) 1141 + goto drop; 1142 + 1143 + if (alx_map_tx_skb(alx, skb) < 0) 1144 + goto drop; 1145 + 1146 + netdev_sent_queue(alx->dev, skb->len); 1147 + 1148 + /* flush updates before updating hardware */ 1149 + wmb(); 1150 + alx_write_mem16(&alx->hw, ALX_TPD_PRI0_PIDX, txq->write_idx); 1151 + 1152 + if (alx_tpd_avail(alx) < alx->tx_ringsz/8) 1153 + netif_stop_queue(alx->dev); 1154 + 1155 + return NETDEV_TX_OK; 1156 + 1157 + drop: 1158 + dev_kfree_skb(skb); 1159 + return NETDEV_TX_OK; 1160 + } 1161 + 1162 + static void alx_tx_timeout(struct net_device *dev) 1163 + { 1164 + struct alx_priv *alx = netdev_priv(dev); 1165 + 1166 + alx_schedule_reset(alx); 1167 + } 1168 + 1169 + static int alx_mdio_read(struct net_device *netdev, 1170 + int prtad, int devad, u16 addr) 1171 + { 1172 + struct alx_priv *alx = netdev_priv(netdev); 1173 + struct alx_hw *hw = &alx->hw; 1174 + u16 val; 1175 + int err; 1176 + 1177 + if (prtad != hw->mdio.prtad) 1178 + return -EINVAL; 1179 + 1180 + if (devad == MDIO_DEVAD_NONE) 1181 + err = alx_read_phy_reg(hw, addr, &val); 1182 + else 1183 + err = alx_read_phy_ext(hw, devad, addr, &val); 1184 + 1185 + if (err) 1186 + return err; 1187 + return val; 1188 + } 1189 + 1190 + static int alx_mdio_write(struct net_device *netdev, 1191 + int prtad, int devad, u16 addr, u16 val) 1192 + { 1193 + struct alx_priv *alx = netdev_priv(netdev); 1194 + struct alx_hw *hw = &alx->hw; 1195 + 1196 + if (prtad != hw->mdio.prtad) 1197 + return 
-EINVAL; 1198 + 1199 + if (devad == MDIO_DEVAD_NONE) 1200 + return alx_write_phy_reg(hw, addr, val); 1201 + 1202 + return alx_write_phy_ext(hw, devad, addr, val); 1203 + } 1204 + 1205 + static int alx_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd) 1206 + { 1207 + struct alx_priv *alx = netdev_priv(netdev); 1208 + 1209 + if (!netif_running(netdev)) 1210 + return -EAGAIN; 1211 + 1212 + return mdio_mii_ioctl(&alx->hw.mdio, if_mii(ifr), cmd); 1213 + } 1214 + 1215 + #ifdef CONFIG_NET_POLL_CONTROLLER 1216 + static void alx_poll_controller(struct net_device *netdev) 1217 + { 1218 + struct alx_priv *alx = netdev_priv(netdev); 1219 + 1220 + if (alx->msi) 1221 + alx_intr_msi(0, alx); 1222 + else 1223 + alx_intr_legacy(0, alx); 1224 + } 1225 + #endif 1226 + 1227 + static const struct net_device_ops alx_netdev_ops = { 1228 + .ndo_open = alx_open, 1229 + .ndo_stop = alx_stop, 1230 + .ndo_start_xmit = alx_start_xmit, 1231 + .ndo_set_rx_mode = alx_set_rx_mode, 1232 + .ndo_validate_addr = eth_validate_addr, 1233 + .ndo_set_mac_address = alx_set_mac_address, 1234 + .ndo_change_mtu = alx_change_mtu, 1235 + .ndo_do_ioctl = alx_ioctl, 1236 + .ndo_tx_timeout = alx_tx_timeout, 1237 + .ndo_fix_features = alx_fix_features, 1238 + #ifdef CONFIG_NET_POLL_CONTROLLER 1239 + .ndo_poll_controller = alx_poll_controller, 1240 + #endif 1241 + }; 1242 + 1243 + static int alx_probe(struct pci_dev *pdev, const struct pci_device_id *ent) 1244 + { 1245 + struct net_device *netdev; 1246 + struct alx_priv *alx; 1247 + struct alx_hw *hw; 1248 + bool phy_configured; 1249 + int bars, pm_cap, err; 1250 + 1251 + err = pci_enable_device_mem(pdev); 1252 + if (err) 1253 + return err; 1254 + 1255 + /* The alx chip can DMA to 64-bit addresses, but it uses a single 1256 + * shared register for the high 32 bits, so only a single, aligned, 1257 + * 4 GB physical address range can be used for descriptors. 
1258 + */ 1259 + if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) && 1260 + !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) { 1261 + dev_dbg(&pdev->dev, "DMA to 64-BIT addresses\n"); 1262 + } else { 1263 + err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); 1264 + if (err) { 1265 + err = dma_set_coherent_mask(&pdev->dev, 1266 + DMA_BIT_MASK(32)); 1267 + if (err) { 1268 + dev_err(&pdev->dev, 1269 + "No usable DMA config, aborting\n"); 1270 + goto out_pci_disable; 1271 + } 1272 + } 1273 + } 1274 + 1275 + bars = pci_select_bars(pdev, IORESOURCE_MEM); 1276 + err = pci_request_selected_regions(pdev, bars, alx_drv_name); 1277 + if (err) { 1278 + dev_err(&pdev->dev, 1279 + "pci_request_selected_regions failed(bars:%d)\n", bars); 1280 + goto out_pci_disable; 1281 + } 1282 + 1283 + pci_enable_pcie_error_reporting(pdev); 1284 + pci_set_master(pdev); 1285 + 1286 + pm_cap = pci_find_capability(pdev, PCI_CAP_ID_PM); 1287 + if (pm_cap == 0) { 1288 + dev_err(&pdev->dev, 1289 + "Can't find power management capability, aborting\n"); 1290 + err = -EIO; 1291 + goto out_pci_release; 1292 + } 1293 + 1294 + err = pci_set_power_state(pdev, PCI_D0); 1295 + if (err) 1296 + goto out_pci_release; 1297 + 1298 + netdev = alloc_etherdev(sizeof(*alx)); 1299 + if (!netdev) { 1300 + err = -ENOMEM; 1301 + goto out_pci_release; 1302 + } 1303 + 1304 + SET_NETDEV_DEV(netdev, &pdev->dev); 1305 + alx = netdev_priv(netdev); 1306 + alx->dev = netdev; 1307 + alx->hw.pdev = pdev; 1308 + alx->msg_enable = NETIF_MSG_LINK | NETIF_MSG_HW | NETIF_MSG_IFUP | 1309 + NETIF_MSG_TX_ERR | NETIF_MSG_RX_ERR | NETIF_MSG_WOL; 1310 + hw = &alx->hw; 1311 + pci_set_drvdata(pdev, alx); 1312 + 1313 + hw->hw_addr = pci_ioremap_bar(pdev, 0); 1314 + if (!hw->hw_addr) { 1315 + dev_err(&pdev->dev, "cannot map device registers\n"); 1316 + err = -EIO; 1317 + goto out_free_netdev; 1318 + } 1319 + 1320 + netdev->netdev_ops = &alx_netdev_ops; 1321 + SET_ETHTOOL_OPS(netdev, &alx_ethtool_ops); 1322 + netdev->irq = pdev->irq; 1323 + 
netdev->watchdog_timeo = ALX_WATCHDOG_TIME; 1324 + 1325 + if (ent->driver_data & ALX_DEV_QUIRK_MSI_INTX_DISABLE_BUG) 1326 + pdev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG; 1327 + 1328 + err = alx_init_sw(alx); 1329 + if (err) { 1330 + dev_err(&pdev->dev, "net device private data init failed\n"); 1331 + goto out_unmap; 1332 + } 1333 + 1334 + alx_reset_pcie(hw); 1335 + 1336 + phy_configured = alx_phy_configured(hw); 1337 + 1338 + if (!phy_configured) 1339 + alx_reset_phy(hw); 1340 + 1341 + err = alx_reset_mac(hw); 1342 + if (err) { 1343 + dev_err(&pdev->dev, "MAC Reset failed, error = %d\n", err); 1344 + goto out_unmap; 1345 + } 1346 + 1347 + /* setup link to put it in a known good starting state */ 1348 + if (!phy_configured) { 1349 + err = alx_setup_speed_duplex(hw, hw->adv_cfg, hw->flowctrl); 1350 + if (err) { 1351 + dev_err(&pdev->dev, 1352 + "failed to configure PHY speed/duplex (err=%d)\n", 1353 + err); 1354 + goto out_unmap; 1355 + } 1356 + } 1357 + 1358 + netdev->hw_features = NETIF_F_SG | NETIF_F_HW_CSUM; 1359 + 1360 + if (alx_get_perm_macaddr(hw, hw->perm_addr)) { 1361 + dev_warn(&pdev->dev, 1362 + "Invalid permanent address programmed, using random one\n"); 1363 + eth_hw_addr_random(netdev); 1364 + memcpy(hw->perm_addr, netdev->dev_addr, netdev->addr_len); 1365 + } 1366 + 1367 + memcpy(hw->mac_addr, hw->perm_addr, ETH_ALEN); 1368 + memcpy(netdev->dev_addr, hw->mac_addr, ETH_ALEN); 1369 + memcpy(netdev->perm_addr, hw->perm_addr, ETH_ALEN); 1370 + 1371 + hw->mdio.prtad = 0; 1372 + hw->mdio.mmds = 0; 1373 + hw->mdio.dev = netdev; 1374 + hw->mdio.mode_support = MDIO_SUPPORTS_C45 | 1375 + MDIO_SUPPORTS_C22 | 1376 + MDIO_EMULATE_C22; 1377 + hw->mdio.mdio_read = alx_mdio_read; 1378 + hw->mdio.mdio_write = alx_mdio_write; 1379 + 1380 + if (!alx_get_phy_info(hw)) { 1381 + dev_err(&pdev->dev, "failed to identify PHY\n"); 1382 + err = -EIO; 1383 + goto out_unmap; 1384 + } 1385 + 1386 + INIT_WORK(&alx->link_check_wk, alx_link_check); 1387 + 
INIT_WORK(&alx->reset_wk, alx_reset); 1388 + spin_lock_init(&alx->hw.mdio_lock); 1389 + spin_lock_init(&alx->irq_lock); 1390 + 1391 + netif_carrier_off(netdev); 1392 + 1393 + err = register_netdev(netdev); 1394 + if (err) { 1395 + dev_err(&pdev->dev, "register netdevice failed\n"); 1396 + goto out_unmap; 1397 + } 1398 + 1399 + device_set_wakeup_enable(&pdev->dev, hw->sleep_ctrl); 1400 + 1401 + netdev_info(netdev, 1402 + "Qualcomm Atheros AR816x/AR817x Ethernet [%pM]\n", 1403 + netdev->dev_addr); 1404 + 1405 + return 0; 1406 + 1407 + out_unmap: 1408 + iounmap(hw->hw_addr); 1409 + out_free_netdev: 1410 + free_netdev(netdev); 1411 + out_pci_release: 1412 + pci_release_selected_regions(pdev, bars); 1413 + out_pci_disable: 1414 + pci_disable_device(pdev); 1415 + return err; 1416 + } 1417 + 1418 + static void alx_remove(struct pci_dev *pdev) 1419 + { 1420 + struct alx_priv *alx = pci_get_drvdata(pdev); 1421 + struct alx_hw *hw = &alx->hw; 1422 + 1423 + cancel_work_sync(&alx->link_check_wk); 1424 + cancel_work_sync(&alx->reset_wk); 1425 + 1426 + /* restore permanent mac address */ 1427 + alx_set_macaddr(hw, hw->perm_addr); 1428 + 1429 + unregister_netdev(alx->dev); 1430 + iounmap(hw->hw_addr); 1431 + pci_release_selected_regions(pdev, 1432 + pci_select_bars(pdev, IORESOURCE_MEM)); 1433 + 1434 + pci_disable_pcie_error_reporting(pdev); 1435 + pci_disable_device(pdev); 1436 + pci_set_drvdata(pdev, NULL); 1437 + 1438 + free_netdev(alx->dev); 1439 + } 1440 + 1441 + #ifdef CONFIG_PM_SLEEP 1442 + static int alx_suspend(struct device *dev) 1443 + { 1444 + struct pci_dev *pdev = to_pci_dev(dev); 1445 + int err; 1446 + bool wol_en; 1447 + 1448 + err = __alx_shutdown(pdev, &wol_en); 1449 + if (err) { 1450 + dev_err(&pdev->dev, "shutdown fail in suspend %d\n", err); 1451 + return err; 1452 + } 1453 + 1454 + if (wol_en) { 1455 + pci_prepare_to_sleep(pdev); 1456 + } else { 1457 + pci_wake_from_d3(pdev, false); 1458 + pci_set_power_state(pdev, PCI_D3hot); 1459 + } 1460 + 1461 + return 
0; 1462 + } 1463 + 1464 + static int alx_resume(struct device *dev) 1465 + { 1466 + struct pci_dev *pdev = to_pci_dev(dev); 1467 + struct alx_priv *alx = pci_get_drvdata(pdev); 1468 + struct net_device *netdev = alx->dev; 1469 + struct alx_hw *hw = &alx->hw; 1470 + int err; 1471 + 1472 + pci_set_power_state(pdev, PCI_D0); 1473 + pci_restore_state(pdev); 1474 + pci_save_state(pdev); 1475 + 1476 + pci_enable_wake(pdev, PCI_D3hot, 0); 1477 + pci_enable_wake(pdev, PCI_D3cold, 0); 1478 + 1479 + hw->link_speed = SPEED_UNKNOWN; 1480 + alx->int_mask = ALX_ISR_MISC; 1481 + 1482 + alx_reset_pcie(hw); 1483 + alx_reset_phy(hw); 1484 + 1485 + err = alx_reset_mac(hw); 1486 + if (err) { 1487 + netif_err(alx, hw, alx->dev, 1488 + "resume:reset_mac fail %d\n", err); 1489 + return -EIO; 1490 + } 1491 + 1492 + err = alx_setup_speed_duplex(hw, hw->adv_cfg, hw->flowctrl); 1493 + if (err) { 1494 + netif_err(alx, hw, alx->dev, 1495 + "resume:setup_speed_duplex fail %d\n", err); 1496 + return -EIO; 1497 + } 1498 + 1499 + if (netif_running(netdev)) { 1500 + err = __alx_open(alx, true); 1501 + if (err) 1502 + return err; 1503 + } 1504 + 1505 + netif_device_attach(netdev); 1506 + 1507 + return err; 1508 + } 1509 + #endif 1510 + 1511 + static pci_ers_result_t alx_pci_error_detected(struct pci_dev *pdev, 1512 + pci_channel_state_t state) 1513 + { 1514 + struct alx_priv *alx = pci_get_drvdata(pdev); 1515 + struct net_device *netdev = alx->dev; 1516 + pci_ers_result_t rc = PCI_ERS_RESULT_NEED_RESET; 1517 + 1518 + dev_info(&pdev->dev, "pci error detected\n"); 1519 + 1520 + rtnl_lock(); 1521 + 1522 + if (netif_running(netdev)) { 1523 + netif_device_detach(netdev); 1524 + alx_halt(alx); 1525 + } 1526 + 1527 + if (state == pci_channel_io_perm_failure) 1528 + rc = PCI_ERS_RESULT_DISCONNECT; 1529 + else 1530 + pci_disable_device(pdev); 1531 + 1532 + rtnl_unlock(); 1533 + 1534 + return rc; 1535 + } 1536 + 1537 + static pci_ers_result_t alx_pci_error_slot_reset(struct pci_dev *pdev) 1538 + { 1539 + 
struct alx_priv *alx = pci_get_drvdata(pdev); 1540 + struct alx_hw *hw = &alx->hw; 1541 + pci_ers_result_t rc = PCI_ERS_RESULT_DISCONNECT; 1542 + 1543 + dev_info(&pdev->dev, "pci error slot reset\n"); 1544 + 1545 + rtnl_lock(); 1546 + 1547 + if (pci_enable_device(pdev)) { 1548 + dev_err(&pdev->dev, "Failed to re-enable PCI device after reset\n"); 1549 + goto out; 1550 + } 1551 + 1552 + pci_set_master(pdev); 1553 + pci_enable_wake(pdev, PCI_D3hot, 0); 1554 + pci_enable_wake(pdev, PCI_D3cold, 0); 1555 + 1556 + alx_reset_pcie(hw); 1557 + if (!alx_reset_mac(hw)) 1558 + rc = PCI_ERS_RESULT_RECOVERED; 1559 + out: 1560 + pci_cleanup_aer_uncorrect_error_status(pdev); 1561 + 1562 + rtnl_unlock(); 1563 + 1564 + return rc; 1565 + } 1566 + 1567 + static void alx_pci_error_resume(struct pci_dev *pdev) 1568 + { 1569 + struct alx_priv *alx = pci_get_drvdata(pdev); 1570 + struct net_device *netdev = alx->dev; 1571 + 1572 + dev_info(&pdev->dev, "pci error resume\n"); 1573 + 1574 + rtnl_lock(); 1575 + 1576 + if (netif_running(netdev)) { 1577 + alx_activate(alx); 1578 + netif_device_attach(netdev); 1579 + } 1580 + 1581 + rtnl_unlock(); 1582 + } 1583 + 1584 + static const struct pci_error_handlers alx_err_handlers = { 1585 + .error_detected = alx_pci_error_detected, 1586 + .slot_reset = alx_pci_error_slot_reset, 1587 + .resume = alx_pci_error_resume, 1588 + }; 1589 + 1590 + #ifdef CONFIG_PM_SLEEP 1591 + static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume); 1592 + #define ALX_PM_OPS (&alx_pm_ops) 1593 + #else 1594 + #define ALX_PM_OPS NULL 1595 + #endif 1596 + 1597 + static DEFINE_PCI_DEVICE_TABLE(alx_pci_tbl) = { 1598 + { PCI_VDEVICE(ATTANSIC, ALX_DEV_ID_AR8161), 1599 + .driver_data = ALX_DEV_QUIRK_MSI_INTX_DISABLE_BUG }, 1600 + { PCI_VDEVICE(ATTANSIC, ALX_DEV_ID_E2200), 1601 + .driver_data = ALX_DEV_QUIRK_MSI_INTX_DISABLE_BUG }, 1602 + { PCI_VDEVICE(ATTANSIC, ALX_DEV_ID_AR8162), 1603 + .driver_data = ALX_DEV_QUIRK_MSI_INTX_DISABLE_BUG }, 1604 + { PCI_VDEVICE(ATTANSIC, 
ALX_DEV_ID_AR8171) }, 1605 + { PCI_VDEVICE(ATTANSIC, ALX_DEV_ID_AR8172) }, 1606 + {} 1607 + }; 1608 + 1609 + static struct pci_driver alx_driver = { 1610 + .name = alx_drv_name, 1611 + .id_table = alx_pci_tbl, 1612 + .probe = alx_probe, 1613 + .remove = alx_remove, 1614 + .shutdown = alx_shutdown, 1615 + .err_handler = &alx_err_handlers, 1616 + .driver.pm = ALX_PM_OPS, 1617 + }; 1618 + 1619 + module_pci_driver(alx_driver); 1620 + MODULE_DEVICE_TABLE(pci, alx_pci_tbl); 1621 + MODULE_AUTHOR("Johannes Berg <johannes@sipsolutions.net>"); 1622 + MODULE_AUTHOR("Qualcomm Corporation, <nic-devel@qualcomm.com>"); 1623 + MODULE_DESCRIPTION( 1624 + "Qualcomm Atheros(R) AR816x/AR817x PCI-E Ethernet Network Driver"); 1625 + MODULE_LICENSE("GPL");
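One pattern worth calling out from main.c above: alx_map_tx_skb() unwinds a partially mapped packet by walking the ring from the first used slot back to the failing one, wrapping at the ring size (the `err_dma` loop). A self-contained sketch of that circular rollback; the helper name and fixed ring size are invented for illustration:

```c
#include <assert.h>

#define RING_SZ 8

/* Count how many slots a rollback from first_idx up to (not including)
 * write_idx would free, wrapping at RING_SZ -- this mirrors the err_dma
 * loop, where each visited slot gets alx_free_txbuf() in the driver. */
static int rollback_count(int first_idx, int write_idx)
{
	int f = first_idx, n = 0;

	while (f != write_idx) {
		n++;
		if (++f == RING_SZ)
			f = 0;
	}
	return n;
}
```

The wrap-on-increment style (`if (++f == RING_SZ) f = 0;`) matches how the driver advances `write_idx` everywhere else, so the rollback traverses exactly the slots the forward path consumed.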
+810
drivers/net/ethernet/atheros/alx/reg.h
··· 1 + /* 2 + * Copyright (c) 2013 Johannes Berg <johannes@sipsolutions.net> 3 + * 4 + * This file is free software: you may copy, redistribute and/or modify it 5 + * under the terms of the GNU General Public License as published by the 6 + * Free Software Foundation, either version 2 of the License, or (at your 7 + * option) any later version. 8 + * 9 + * This file is distributed in the hope that it will be useful, but 10 + * WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU 12 + * General Public License for more details. 13 + * 14 + * You should have received a copy of the GNU General Public License 15 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 16 + * 17 + * This file incorporates work covered by the following copyright and 18 + * permission notice: 19 + * 20 + * Copyright (c) 2012 Qualcomm Atheros, Inc. 21 + * 22 + * Permission to use, copy, modify, and/or distribute this software for any 23 + * purpose with or without fee is hereby granted, provided that the above 24 + * copyright notice and this permission notice appear in all copies. 25 + * 26 + * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 27 + * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 28 + * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 29 + * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 30 + * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 31 + * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 32 + * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
33 + */ 34 + 35 + #ifndef ALX_REG_H 36 + #define ALX_REG_H 37 + 38 + #define ALX_DEV_ID_AR8161 0x1091 39 + #define ALX_DEV_ID_E2200 0xe091 40 + #define ALX_DEV_ID_AR8162 0x1090 41 + #define ALX_DEV_ID_AR8171 0x10A1 42 + #define ALX_DEV_ID_AR8172 0x10A0 43 + 44 + /* rev definition, 45 + * bit(0): with xD support 46 + * bit(1): with Card Reader function 47 + * bit(7:2): real revision 48 + */ 49 + #define ALX_PCI_REVID_SHIFT 3 50 + #define ALX_REV_A0 0 51 + #define ALX_REV_A1 1 52 + #define ALX_REV_B0 2 53 + #define ALX_REV_C0 3 54 + 55 + #define ALX_DEV_CTRL 0x0060 56 + #define ALX_DEV_CTRL_MAXRRS_MIN 2 57 + 58 + #define ALX_MSIX_MASK 0x0090 59 + 60 + #define ALX_UE_SVRT 0x010C 61 + #define ALX_UE_SVRT_FCPROTERR BIT(13) 62 + #define ALX_UE_SVRT_DLPROTERR BIT(4) 63 + 64 + /* eeprom & flash load register */ 65 + #define ALX_EFLD 0x0204 66 + #define ALX_EFLD_F_EXIST BIT(10) 67 + #define ALX_EFLD_E_EXIST BIT(9) 68 + #define ALX_EFLD_STAT BIT(5) 69 + #define ALX_EFLD_START BIT(0) 70 + 71 + /* eFuse load register */ 72 + #define ALX_SLD 0x0218 73 + #define ALX_SLD_STAT BIT(12) 74 + #define ALX_SLD_START BIT(11) 75 + #define ALX_SLD_MAX_TO 100 76 + 77 + #define ALX_PDLL_TRNS1 0x1104 78 + #define ALX_PDLL_TRNS1_D3PLLOFF_EN BIT(11) 79 + 80 + #define ALX_PMCTRL 0x12F8 81 + #define ALX_PMCTRL_HOTRST_WTEN BIT(31) 82 + /* bit30: L0s/L1 controlled by MAC based on throughput(setting in 15A0) */ 83 + #define ALX_PMCTRL_ASPM_FCEN BIT(30) 84 + #define ALX_PMCTRL_SADLY_EN BIT(29) 85 + #define ALX_PMCTRL_LCKDET_TIMER_MASK 0xF 86 + #define ALX_PMCTRL_LCKDET_TIMER_SHIFT 24 87 + #define ALX_PMCTRL_LCKDET_TIMER_DEF 0xC 88 + /* bit[23:20] if pm_request_l1 time > @, then enter L0s not L1 */ 89 + #define ALX_PMCTRL_L1REQ_TO_MASK 0xF 90 + #define ALX_PMCTRL_L1REQ_TO_SHIFT 20 91 + #define ALX_PMCTRL_L1REG_TO_DEF 0xF 92 + #define ALX_PMCTRL_TXL1_AFTER_L0S BIT(19) 93 + #define ALX_PMCTRL_L1_TIMER_MASK 0x7 94 + #define ALX_PMCTRL_L1_TIMER_SHIFT 16 95 + #define ALX_PMCTRL_L1_TIMER_16US 4 96 + 
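reg.h encodes multi-bit register fields as MASK/SHIFT pairs, where the mask is the *pre-shift* field width (e.g. ALX_PMCTRL_L1_TIMER_MASK 0x7 with shift 16 just above). A hedged sketch of how such pairs are typically composed and decomposed; these helper macros are illustrative and not part of the driver, which open-codes the operations:

```c
#include <assert.h>

/* Illustrative helpers for pre-shift MASK/SHIFT register fields. */
#define FIELD_PREP_ILL(mask, shift, val)  (((val) & (mask)) << (shift))
#define FIELD_GET_ILL(mask, shift, reg)   (((reg) >> (shift)) & (mask))

/* Example layout from the hunk above: L1 timer field, mask 0x7,
 * shift 16, value 4 (ALX_PMCTRL_L1_TIMER_16US). */
```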
#define ALX_PMCTRL_RCVR_WT_1US BIT(15) 97 + /* bit13: enable pcie clk switch in L1 state */ 98 + #define ALX_PMCTRL_L1_CLKSW_EN BIT(13) 99 + #define ALX_PMCTRL_L0S_EN BIT(12) 100 + #define ALX_PMCTRL_RXL1_AFTER_L0S BIT(11) 101 + #define ALX_PMCTRL_L1_BUFSRX_EN BIT(7) 102 + /* bit6: power down serdes RX */ 103 + #define ALX_PMCTRL_L1_SRDSRX_PWD BIT(6) 104 + #define ALX_PMCTRL_L1_SRDSPLL_EN BIT(5) 105 + #define ALX_PMCTRL_L1_SRDS_EN BIT(4) 106 + #define ALX_PMCTRL_L1_EN BIT(3) 107 + 108 + /*******************************************************/ 109 + /* following registers are mapped only to memory space */ 110 + /*******************************************************/ 111 + 112 + #define ALX_MASTER 0x1400 113 + /* bit12: 1:always select pclk from serdes, not sw to 25M */ 114 + #define ALX_MASTER_PCLKSEL_SRDS BIT(12) 115 + /* bit11: irq modulation for rx */ 116 + #define ALX_MASTER_IRQMOD2_EN BIT(11) 117 + /* bit10: irq modulation for tx/rx */ 118 + #define ALX_MASTER_IRQMOD1_EN BIT(10) 119 + #define ALX_MASTER_SYSALVTIMER_EN BIT(7) 120 + #define ALX_MASTER_OOB_DIS BIT(6) 121 + /* bit5: wakeup without pcie clk */ 122 + #define ALX_MASTER_WAKEN_25M BIT(5) 123 + /* bit0: MAC & DMA reset */ 124 + #define ALX_MASTER_DMA_MAC_RST BIT(0) 125 + #define ALX_DMA_MAC_RST_TO 50 126 + 127 + #define ALX_IRQ_MODU_TIMER 0x1408 128 + #define ALX_IRQ_MODU_TIMER1_MASK 0xFFFF 129 + #define ALX_IRQ_MODU_TIMER1_SHIFT 0 130 + 131 + #define ALX_PHY_CTRL 0x140C 132 + #define ALX_PHY_CTRL_100AB_EN BIT(17) 133 + /* bit14: affect MAC & PHY, go to low power state */ 134 + #define ALX_PHY_CTRL_POWER_DOWN BIT(14) 135 + /* bit13: 1:pll always ON, 0:can switch in low power */ 136 + #define ALX_PHY_CTRL_PLL_ON BIT(13) 137 + #define ALX_PHY_CTRL_RST_ANALOG BIT(12) 138 + #define ALX_PHY_CTRL_HIB_PULSE BIT(11) 139 + #define ALX_PHY_CTRL_HIB_EN BIT(10) 140 + #define ALX_PHY_CTRL_IDDQ BIT(7) 141 + #define ALX_PHY_CTRL_GATE_25M BIT(5) 142 + #define ALX_PHY_CTRL_LED_MODE BIT(2) 143 + /* bit0: out of dsp RST state
*/ 144 + #define ALX_PHY_CTRL_DSPRST_OUT BIT(0) 145 + #define ALX_PHY_CTRL_DSPRST_TO 80 146 + #define ALX_PHY_CTRL_CLS (ALX_PHY_CTRL_LED_MODE | \ 147 + ALX_PHY_CTRL_100AB_EN | \ 148 + ALX_PHY_CTRL_PLL_ON) 149 + 150 + #define ALX_MAC_STS 0x1410 151 + #define ALX_MAC_STS_TXQ_BUSY BIT(3) 152 + #define ALX_MAC_STS_RXQ_BUSY BIT(2) 153 + #define ALX_MAC_STS_TXMAC_BUSY BIT(1) 154 + #define ALX_MAC_STS_RXMAC_BUSY BIT(0) 155 + #define ALX_MAC_STS_IDLE (ALX_MAC_STS_TXQ_BUSY | \ 156 + ALX_MAC_STS_RXQ_BUSY | \ 157 + ALX_MAC_STS_TXMAC_BUSY | \ 158 + ALX_MAC_STS_RXMAC_BUSY) 159 + 160 + #define ALX_MDIO 0x1414 161 + #define ALX_MDIO_MODE_EXT BIT(30) 162 + #define ALX_MDIO_BUSY BIT(27) 163 + #define ALX_MDIO_CLK_SEL_MASK 0x7 164 + #define ALX_MDIO_CLK_SEL_SHIFT 24 165 + #define ALX_MDIO_CLK_SEL_25MD4 0 166 + #define ALX_MDIO_CLK_SEL_25MD128 7 167 + #define ALX_MDIO_START BIT(23) 168 + #define ALX_MDIO_SPRES_PRMBL BIT(22) 169 + /* bit21: 1:read,0:write */ 170 + #define ALX_MDIO_OP_READ BIT(21) 171 + #define ALX_MDIO_REG_MASK 0x1F 172 + #define ALX_MDIO_REG_SHIFT 16 173 + #define ALX_MDIO_DATA_MASK 0xFFFF 174 + #define ALX_MDIO_DATA_SHIFT 0 175 + #define ALX_MDIO_MAX_AC_TO 120 176 + 177 + #define ALX_MDIO_EXTN 0x1448 178 + #define ALX_MDIO_EXTN_DEVAD_MASK 0x1F 179 + #define ALX_MDIO_EXTN_DEVAD_SHIFT 16 180 + #define ALX_MDIO_EXTN_REG_MASK 0xFFFF 181 + #define ALX_MDIO_EXTN_REG_SHIFT 0 182 + 183 + #define ALX_SERDES 0x1424 184 + #define ALX_SERDES_PHYCLK_SLWDWN BIT(18) 185 + #define ALX_SERDES_MACCLK_SLWDWN BIT(17) 186 + 187 + #define ALX_LPI_CTRL 0x1440 188 + #define ALX_LPI_CTRL_EN BIT(0) 189 + 190 + /* for B0+, bit[13..] 
for C0+ */ 191 + #define ALX_HRTBT_EXT_CTRL 0x1AD0 192 + #define L1F_HRTBT_EXT_CTRL_PERIOD_HIGH_MASK 0x3F 193 + #define L1F_HRTBT_EXT_CTRL_PERIOD_HIGH_SHIFT 24 194 + #define L1F_HRTBT_EXT_CTRL_SWOI_STARTUP_PKT_EN BIT(23) 195 + #define L1F_HRTBT_EXT_CTRL_IOAC_2_FRAGMENTED BIT(22) 196 + #define L1F_HRTBT_EXT_CTRL_IOAC_1_FRAGMENTED BIT(21) 197 + #define L1F_HRTBT_EXT_CTRL_IOAC_1_KEEPALIVE_EN BIT(20) 198 + #define L1F_HRTBT_EXT_CTRL_IOAC_1_HAS_VLAN BIT(19) 199 + #define L1F_HRTBT_EXT_CTRL_IOAC_1_IS_8023 BIT(18) 200 + #define L1F_HRTBT_EXT_CTRL_IOAC_1_IS_IPV6 BIT(17) 201 + #define L1F_HRTBT_EXT_CTRL_IOAC_2_KEEPALIVE_EN BIT(16) 202 + #define L1F_HRTBT_EXT_CTRL_IOAC_2_HAS_VLAN BIT(15) 203 + #define L1F_HRTBT_EXT_CTRL_IOAC_2_IS_8023 BIT(14) 204 + #define L1F_HRTBT_EXT_CTRL_IOAC_2_IS_IPV6 BIT(13) 205 + #define ALX_HRTBT_EXT_CTRL_NS_EN BIT(12) 206 + #define ALX_HRTBT_EXT_CTRL_FRAG_LEN_MASK 0xFF 207 + #define ALX_HRTBT_EXT_CTRL_FRAG_LEN_SHIFT 4 208 + #define ALX_HRTBT_EXT_CTRL_IS_8023 BIT(3) 209 + #define ALX_HRTBT_EXT_CTRL_IS_IPV6 BIT(2) 210 + #define ALX_HRTBT_EXT_CTRL_WAKEUP_EN BIT(1) 211 + #define ALX_HRTBT_EXT_CTRL_ARP_EN BIT(0) 212 + 213 + #define ALX_HRTBT_REM_IPV4_ADDR 0x1AD4 214 + #define ALX_HRTBT_HOST_IPV4_ADDR 0x1478 215 + #define ALX_HRTBT_REM_IPV6_ADDR3 0x1AD8 216 + #define ALX_HRTBT_REM_IPV6_ADDR2 0x1ADC 217 + #define ALX_HRTBT_REM_IPV6_ADDR1 0x1AE0 218 + #define ALX_HRTBT_REM_IPV6_ADDR0 0x1AE4 219 + 220 + /* 1B8C ~ 1B94 for C0+ */ 221 + #define ALX_SWOI_ACER_CTRL 0x1B8C 222 + #define ALX_SWOI_ORIG_ACK_NAK_EN BIT(20) 223 + #define ALX_SWOI_ORIG_ACK_NAK_PKT_LEN_MASK 0XFF 224 + #define ALX_SWOI_ORIG_ACK_NAK_PKT_LEN_SHIFT 12 225 + #define ALX_SWOI_ORIG_ACK_ADDR_MASK 0XFFF 226 + #define ALX_SWOI_ORIG_ACK_ADDR_SHIFT 0 227 + 228 + #define ALX_SWOI_IOAC_CTRL_2 0x1B90 229 + #define ALX_SWOI_IOAC_CTRL_2_SWOI_1_FRAG_LEN_MASK 0xFF 230 + #define ALX_SWOI_IOAC_CTRL_2_SWOI_1_FRAG_LEN_SHIFT 24 231 + #define ALX_SWOI_IOAC_CTRL_2_SWOI_1_PKT_LEN_MASK 0xFFF 232 + #define 
ALX_SWOI_IOAC_CTRL_2_SWOI_1_PKT_LEN_SHIFT 12 233 + #define ALX_SWOI_IOAC_CTRL_2_SWOI_1_HDR_ADDR_MASK 0xFFF 234 + #define ALX_SWOI_IOAC_CTRL_2_SWOI_1_HDR_ADDR_SHIFT 0 235 + 236 + #define ALX_SWOI_IOAC_CTRL_3 0x1B94 237 + #define ALX_SWOI_IOAC_CTRL_3_SWOI_2_FRAG_LEN_MASK 0xFF 238 + #define ALX_SWOI_IOAC_CTRL_3_SWOI_2_FRAG_LEN_SHIFT 24 239 + #define ALX_SWOI_IOAC_CTRL_3_SWOI_2_PKT_LEN_MASK 0xFFF 240 + #define ALX_SWOI_IOAC_CTRL_3_SWOI_2_PKT_LEN_SHIFT 12 241 + #define ALX_SWOI_IOAC_CTRL_3_SWOI_2_HDR_ADDR_MASK 0xFFF 242 + #define ALX_SWOI_IOAC_CTRL_3_SWOI_2_HDR_ADDR_SHIFT 0 243 + 244 + /* for B0 */ 245 + #define ALX_IDLE_DECISN_TIMER 0x1474 246 + /* 1ms */ 247 + #define ALX_IDLE_DECISN_TIMER_DEF 0x400 248 + 249 + #define ALX_MAC_CTRL 0x1480 250 + #define ALX_MAC_CTRL_FAST_PAUSE BIT(31) 251 + #define ALX_MAC_CTRL_WOLSPED_SWEN BIT(30) 252 + /* bit29: 1:legacy(hi5b), 0:marvl(lo5b)*/ 253 + #define ALX_MAC_CTRL_MHASH_ALG_HI5B BIT(29) 254 + #define ALX_MAC_CTRL_BRD_EN BIT(26) 255 + #define ALX_MAC_CTRL_MULTIALL_EN BIT(25) 256 + #define ALX_MAC_CTRL_SPEED_MASK 0x3 257 + #define ALX_MAC_CTRL_SPEED_SHIFT 20 258 + #define ALX_MAC_CTRL_SPEED_10_100 1 259 + #define ALX_MAC_CTRL_SPEED_1000 2 260 + #define ALX_MAC_CTRL_PROMISC_EN BIT(15) 261 + #define ALX_MAC_CTRL_VLANSTRIP BIT(14) 262 + #define ALX_MAC_CTRL_PRMBLEN_MASK 0xF 263 + #define ALX_MAC_CTRL_PRMBLEN_SHIFT 10 264 + #define ALX_MAC_CTRL_PCRCE BIT(7) 265 + #define ALX_MAC_CTRL_CRCE BIT(6) 266 + #define ALX_MAC_CTRL_FULLD BIT(5) 267 + #define ALX_MAC_CTRL_RXFC_EN BIT(3) 268 + #define ALX_MAC_CTRL_TXFC_EN BIT(2) 269 + #define ALX_MAC_CTRL_RX_EN BIT(1) 270 + #define ALX_MAC_CTRL_TX_EN BIT(0) 271 + 272 + #define ALX_STAD0 0x1488 273 + #define ALX_STAD1 0x148C 274 + 275 + #define ALX_HASH_TBL0 0x1490 276 + #define ALX_HASH_TBL1 0x1494 277 + 278 + #define ALX_MTU 0x149C 279 + #define ALX_MTU_JUMBO_TH 1514 280 + #define ALX_MTU_STD_ALGN 1536 281 + 282 + #define ALX_SRAM5 0x1524 283 + #define ALX_SRAM_RXF_LEN_MASK 0xFFF 284 + #define 
ALX_SRAM_RXF_LEN_SHIFT 0 285 + #define ALX_SRAM_RXF_LEN_8K (8*1024) 286 + 287 + #define ALX_SRAM9 0x1534 288 + #define ALX_SRAM_LOAD_PTR BIT(0) 289 + 290 + #define ALX_RX_BASE_ADDR_HI 0x1540 291 + 292 + #define ALX_TX_BASE_ADDR_HI 0x1544 293 + 294 + #define ALX_RFD_ADDR_LO 0x1550 295 + #define ALX_RFD_RING_SZ 0x1560 296 + #define ALX_RFD_BUF_SZ 0x1564 297 + 298 + #define ALX_RRD_ADDR_LO 0x1568 299 + #define ALX_RRD_RING_SZ 0x1578 300 + 301 + /* pri3: highest, pri0: lowest */ 302 + #define ALX_TPD_PRI3_ADDR_LO 0x14E4 303 + #define ALX_TPD_PRI2_ADDR_LO 0x14E0 304 + #define ALX_TPD_PRI1_ADDR_LO 0x157C 305 + #define ALX_TPD_PRI0_ADDR_LO 0x1580 306 + 307 + /* producer index is 16bit */ 308 + #define ALX_TPD_PRI3_PIDX 0x1618 309 + #define ALX_TPD_PRI2_PIDX 0x161A 310 + #define ALX_TPD_PRI1_PIDX 0x15F0 311 + #define ALX_TPD_PRI0_PIDX 0x15F2 312 + 313 + /* consumer index is 16bit */ 314 + #define ALX_TPD_PRI3_CIDX 0x161C 315 + #define ALX_TPD_PRI2_CIDX 0x161E 316 + #define ALX_TPD_PRI1_CIDX 0x15F4 317 + #define ALX_TPD_PRI0_CIDX 0x15F6 318 + 319 + #define ALX_TPD_RING_SZ 0x1584 320 + 321 + #define ALX_TXQ0 0x1590 322 + #define ALX_TXQ0_TXF_BURST_PREF_MASK 0xFFFF 323 + #define ALX_TXQ0_TXF_BURST_PREF_SHIFT 16 324 + #define ALX_TXQ_TXF_BURST_PREF_DEF 0x200 325 + #define ALX_TXQ0_LSO_8023_EN BIT(7) 326 + #define ALX_TXQ0_MODE_ENHANCE BIT(6) 327 + #define ALX_TXQ0_EN BIT(5) 328 + #define ALX_TXQ0_SUPT_IPOPT BIT(4) 329 + #define ALX_TXQ0_TPD_BURSTPREF_MASK 0xF 330 + #define ALX_TXQ0_TPD_BURSTPREF_SHIFT 0 331 + #define ALX_TXQ_TPD_BURSTPREF_DEF 5 332 + 333 + #define ALX_TXQ1 0x1594 334 + /* bit11: drop large packet, len > (rfd buf) */ 335 + #define ALX_TXQ1_ERRLGPKT_DROP_EN BIT(11) 336 + #define ALX_TXQ1_JUMBO_TSO_TH (7*1024) 337 + 338 + #define ALX_RXQ0 0x15A0 339 + #define ALX_RXQ0_EN BIT(31) 340 + #define ALX_RXQ0_RSS_HASH_EN BIT(29) 341 + #define ALX_RXQ0_RSS_MODE_MASK 0x3 342 + #define ALX_RXQ0_RSS_MODE_SHIFT 26 343 + #define ALX_RXQ0_RSS_MODE_DIS 0 344 + #define 
ALX_RXQ0_RSS_MODE_MQMI 3 345 + #define ALX_RXQ0_NUM_RFD_PREF_MASK 0x3F 346 + #define ALX_RXQ0_NUM_RFD_PREF_SHIFT 20 347 + #define ALX_RXQ0_NUM_RFD_PREF_DEF 8 348 + #define ALX_RXQ0_IDT_TBL_SIZE_MASK 0x1FF 349 + #define ALX_RXQ0_IDT_TBL_SIZE_SHIFT 8 350 + #define ALX_RXQ0_IDT_TBL_SIZE_DEF 0x100 351 + #define ALX_RXQ0_IDT_TBL_SIZE_NORMAL 128 352 + #define ALX_RXQ0_IPV6_PARSE_EN BIT(7) 353 + #define ALX_RXQ0_RSS_HSTYP_MASK 0xF 354 + #define ALX_RXQ0_RSS_HSTYP_SHIFT 2 355 + #define ALX_RXQ0_RSS_HSTYP_IPV6_TCP_EN BIT(5) 356 + #define ALX_RXQ0_RSS_HSTYP_IPV6_EN BIT(4) 357 + #define ALX_RXQ0_RSS_HSTYP_IPV4_TCP_EN BIT(3) 358 + #define ALX_RXQ0_RSS_HSTYP_IPV4_EN BIT(2) 359 + #define ALX_RXQ0_RSS_HSTYP_ALL (ALX_RXQ0_RSS_HSTYP_IPV6_TCP_EN | \ 360 + ALX_RXQ0_RSS_HSTYP_IPV4_TCP_EN | \ 361 + ALX_RXQ0_RSS_HSTYP_IPV6_EN | \ 362 + ALX_RXQ0_RSS_HSTYP_IPV4_EN) 363 + #define ALX_RXQ0_ASPM_THRESH_MASK 0x3 364 + #define ALX_RXQ0_ASPM_THRESH_SHIFT 0 365 + #define ALX_RXQ0_ASPM_THRESH_100M 3 366 + 367 + #define ALX_RXQ2 0x15A8 368 + #define ALX_RXQ2_RXF_XOFF_THRESH_MASK 0xFFF 369 + #define ALX_RXQ2_RXF_XOFF_THRESH_SHIFT 16 370 + #define ALX_RXQ2_RXF_XON_THRESH_MASK 0xFFF 371 + #define ALX_RXQ2_RXF_XON_THRESH_SHIFT 0 372 + /* Size = tx-packet(1522) + IPG(12) + SOF(8) + 64(Pause) + IPG(12) + SOF(8) + 373 + * rx-packet(1522) + delay-of-link(64) 374 + * = 3212. 
375 + */ 376 + #define ALX_RXQ2_RXF_FLOW_CTRL_RSVD 3212 377 + 378 + #define ALX_DMA 0x15C0 379 + #define ALX_DMA_RCHNL_SEL_MASK 0x3 380 + #define ALX_DMA_RCHNL_SEL_SHIFT 26 381 + #define ALX_DMA_WDLY_CNT_MASK 0xF 382 + #define ALX_DMA_WDLY_CNT_SHIFT 16 383 + #define ALX_DMA_WDLY_CNT_DEF 4 384 + #define ALX_DMA_RDLY_CNT_MASK 0x1F 385 + #define ALX_DMA_RDLY_CNT_SHIFT 11 386 + #define ALX_DMA_RDLY_CNT_DEF 15 387 + /* bit10: 0:tpd with pri, 1: data */ 388 + #define ALX_DMA_RREQ_PRI_DATA BIT(10) 389 + #define ALX_DMA_RREQ_BLEN_MASK 0x7 390 + #define ALX_DMA_RREQ_BLEN_SHIFT 4 391 + #define ALX_DMA_RORDER_MODE_MASK 0x7 392 + #define ALX_DMA_RORDER_MODE_SHIFT 0 393 + #define ALX_DMA_RORDER_MODE_OUT 4 394 + 395 + #define ALX_WOL0 0x14A0 396 + #define ALX_WOL0_PME_LINK BIT(5) 397 + #define ALX_WOL0_LINK_EN BIT(4) 398 + #define ALX_WOL0_PME_MAGIC_EN BIT(3) 399 + #define ALX_WOL0_MAGIC_EN BIT(2) 400 + 401 + #define ALX_RFD_PIDX 0x15E0 402 + 403 + #define ALX_RFD_CIDX 0x15F8 404 + 405 + /* MIB */ 406 + #define ALX_MIB_BASE 0x1700 407 + #define ALX_MIB_RX_OK (ALX_MIB_BASE + 0) 408 + #define ALX_MIB_RX_ERRADDR (ALX_MIB_BASE + 92) 409 + #define ALX_MIB_TX_OK (ALX_MIB_BASE + 96) 410 + #define ALX_MIB_TX_MCCNT (ALX_MIB_BASE + 192) 411 + 412 + #define ALX_RX_STATS_BIN ALX_MIB_RX_OK 413 + #define ALX_RX_STATS_END ALX_MIB_RX_ERRADDR 414 + #define ALX_TX_STATS_BIN ALX_MIB_TX_OK 415 + #define ALX_TX_STATS_END ALX_MIB_TX_MCCNT 416 + 417 + #define ALX_ISR 0x1600 418 + #define ALX_ISR_DIS BIT(31) 419 + #define ALX_ISR_RX_Q7 BIT(30) 420 + #define ALX_ISR_RX_Q6 BIT(29) 421 + #define ALX_ISR_RX_Q5 BIT(28) 422 + #define ALX_ISR_RX_Q4 BIT(27) 423 + #define ALX_ISR_PCIE_LNKDOWN BIT(26) 424 + #define ALX_ISR_RX_Q3 BIT(19) 425 + #define ALX_ISR_RX_Q2 BIT(18) 426 + #define ALX_ISR_RX_Q1 BIT(17) 427 + #define ALX_ISR_RX_Q0 BIT(16) 428 + #define ALX_ISR_TX_Q0 BIT(15) 429 + #define ALX_ISR_PHY BIT(12) 430 + #define ALX_ISR_DMAW BIT(10) 431 + #define ALX_ISR_DMAR BIT(9) 432 + #define ALX_ISR_TXF_UR 
BIT(8) 433 + #define ALX_ISR_TX_Q3 BIT(7) 434 + #define ALX_ISR_TX_Q2 BIT(6) 435 + #define ALX_ISR_TX_Q1 BIT(5) 436 + #define ALX_ISR_RFD_UR BIT(4) 437 + #define ALX_ISR_RXF_OV BIT(3) 438 + #define ALX_ISR_MANU BIT(2) 439 + #define ALX_ISR_TIMER BIT(1) 440 + #define ALX_ISR_SMB BIT(0) 441 + 442 + #define ALX_IMR 0x1604 443 + 444 + /* re-send assert msg if SW no response */ 445 + #define ALX_INT_RETRIG 0x1608 446 + /* 40ms */ 447 + #define ALX_INT_RETRIG_TO 20000 448 + 449 + #define ALX_SMB_TIMER 0x15C4 450 + 451 + #define ALX_TINT_TPD_THRSHLD 0x15C8 452 + 453 + #define ALX_TINT_TIMER 0x15CC 454 + 455 + #define ALX_CLK_GATE 0x1814 456 + #define ALX_CLK_GATE_RXMAC BIT(5) 457 + #define ALX_CLK_GATE_TXMAC BIT(4) 458 + #define ALX_CLK_GATE_RXQ BIT(3) 459 + #define ALX_CLK_GATE_TXQ BIT(2) 460 + #define ALX_CLK_GATE_DMAR BIT(1) 461 + #define ALX_CLK_GATE_DMAW BIT(0) 462 + #define ALX_CLK_GATE_ALL (ALX_CLK_GATE_RXMAC | \ 463 + ALX_CLK_GATE_TXMAC | \ 464 + ALX_CLK_GATE_RXQ | \ 465 + ALX_CLK_GATE_TXQ | \ 466 + ALX_CLK_GATE_DMAR | \ 467 + ALX_CLK_GATE_DMAW) 468 + 469 + /* interop between drivers */ 470 + #define ALX_DRV 0x1804 471 + #define ALX_DRV_PHY_AUTO BIT(28) 472 + #define ALX_DRV_PHY_1000 BIT(27) 473 + #define ALX_DRV_PHY_100 BIT(26) 474 + #define ALX_DRV_PHY_10 BIT(25) 475 + #define ALX_DRV_PHY_DUPLEX BIT(24) 476 + /* bit23: adv Pause */ 477 + #define ALX_DRV_PHY_PAUSE BIT(23) 478 + /* bit22: adv Asym Pause */ 479 + #define ALX_DRV_PHY_MASK 0xFF 480 + #define ALX_DRV_PHY_SHIFT 21 481 + #define ALX_DRV_PHY_UNKNOWN 0 482 + 483 + /* flag of phy inited */ 484 + #define ALX_PHY_INITED 0x003F 485 + 486 + /* reg 1830 ~ 186C for C0+, 16 bit map patterns and wake packet detection */ 487 + #define ALX_WOL_CTRL2 0x1830 488 + #define ALX_WOL_CTRL2_DATA_STORE BIT(3) 489 + #define ALX_WOL_CTRL2_PTRN_EVT BIT(2) 490 + #define ALX_WOL_CTRL2_PME_PTRN_EN BIT(1) 491 + #define ALX_WOL_CTRL2_PTRN_EN BIT(0) 492 + 493 + #define ALX_WOL_CTRL3 0x1834 494 + #define ALX_WOL_CTRL3_PTRN_ADDR_MASK 
0xFFFFF 495 + #define ALX_WOL_CTRL3_PTRN_ADDR_SHIFT 0 496 + 497 + #define ALX_WOL_CTRL4 0x1838 498 + #define ALX_WOL_CTRL4_PT15_MATCH BIT(31) 499 + #define ALX_WOL_CTRL4_PT14_MATCH BIT(30) 500 + #define ALX_WOL_CTRL4_PT13_MATCH BIT(29) 501 + #define ALX_WOL_CTRL4_PT12_MATCH BIT(28) 502 + #define ALX_WOL_CTRL4_PT11_MATCH BIT(27) 503 + #define ALX_WOL_CTRL4_PT10_MATCH BIT(26) 504 + #define ALX_WOL_CTRL4_PT9_MATCH BIT(25) 505 + #define ALX_WOL_CTRL4_PT8_MATCH BIT(24) 506 + #define ALX_WOL_CTRL4_PT7_MATCH BIT(23) 507 + #define ALX_WOL_CTRL4_PT6_MATCH BIT(22) 508 + #define ALX_WOL_CTRL4_PT5_MATCH BIT(21) 509 + #define ALX_WOL_CTRL4_PT4_MATCH BIT(20) 510 + #define ALX_WOL_CTRL4_PT3_MATCH BIT(19) 511 + #define ALX_WOL_CTRL4_PT2_MATCH BIT(18) 512 + #define ALX_WOL_CTRL4_PT1_MATCH BIT(17) 513 + #define ALX_WOL_CTRL4_PT0_MATCH BIT(16) 514 + #define ALX_WOL_CTRL4_PT15_EN BIT(15) 515 + #define ALX_WOL_CTRL4_PT14_EN BIT(14) 516 + #define ALX_WOL_CTRL4_PT13_EN BIT(13) 517 + #define ALX_WOL_CTRL4_PT12_EN BIT(12) 518 + #define ALX_WOL_CTRL4_PT11_EN BIT(11) 519 + #define ALX_WOL_CTRL4_PT10_EN BIT(10) 520 + #define ALX_WOL_CTRL4_PT9_EN BIT(9) 521 + #define ALX_WOL_CTRL4_PT8_EN BIT(8) 522 + #define ALX_WOL_CTRL4_PT7_EN BIT(7) 523 + #define ALX_WOL_CTRL4_PT6_EN BIT(6) 524 + #define ALX_WOL_CTRL4_PT5_EN BIT(5) 525 + #define ALX_WOL_CTRL4_PT4_EN BIT(4) 526 + #define ALX_WOL_CTRL4_PT3_EN BIT(3) 527 + #define ALX_WOL_CTRL4_PT2_EN BIT(2) 528 + #define ALX_WOL_CTRL4_PT1_EN BIT(1) 529 + #define ALX_WOL_CTRL4_PT0_EN BIT(0) 530 + 531 + #define ALX_WOL_CTRL5 0x183C 532 + #define ALX_WOL_CTRL5_PT3_LEN_MASK 0xFF 533 + #define ALX_WOL_CTRL5_PT3_LEN_SHIFT 24 534 + #define ALX_WOL_CTRL5_PT2_LEN_MASK 0xFF 535 + #define ALX_WOL_CTRL5_PT2_LEN_SHIFT 16 536 + #define ALX_WOL_CTRL5_PT1_LEN_MASK 0xFF 537 + #define ALX_WOL_CTRL5_PT1_LEN_SHIFT 8 538 + #define ALX_WOL_CTRL5_PT0_LEN_MASK 0xFF 539 + #define ALX_WOL_CTRL5_PT0_LEN_SHIFT 0 540 + 541 + #define ALX_WOL_CTRL6 0x1840 542 + #define 
ALX_WOL_CTRL5_PT7_LEN_MASK 0xFF 543 + #define ALX_WOL_CTRL5_PT7_LEN_SHIFT 24 544 + #define ALX_WOL_CTRL5_PT6_LEN_MASK 0xFF 545 + #define ALX_WOL_CTRL5_PT6_LEN_SHIFT 16 546 + #define ALX_WOL_CTRL5_PT5_LEN_MASK 0xFF 547 + #define ALX_WOL_CTRL5_PT5_LEN_SHIFT 8 548 + #define ALX_WOL_CTRL5_PT4_LEN_MASK 0xFF 549 + #define ALX_WOL_CTRL5_PT4_LEN_SHIFT 0 550 + 551 + #define ALX_WOL_CTRL7 0x1844 552 + #define ALX_WOL_CTRL5_PT11_LEN_MASK 0xFF 553 + #define ALX_WOL_CTRL5_PT11_LEN_SHIFT 24 554 + #define ALX_WOL_CTRL5_PT10_LEN_MASK 0xFF 555 + #define ALX_WOL_CTRL5_PT10_LEN_SHIFT 16 556 + #define ALX_WOL_CTRL5_PT9_LEN_MASK 0xFF 557 + #define ALX_WOL_CTRL5_PT9_LEN_SHIFT 8 558 + #define ALX_WOL_CTRL5_PT8_LEN_MASK 0xFF 559 + #define ALX_WOL_CTRL5_PT8_LEN_SHIFT 0 560 + 561 + #define ALX_WOL_CTRL8 0x1848 562 + #define ALX_WOL_CTRL5_PT15_LEN_MASK 0xFF 563 + #define ALX_WOL_CTRL5_PT15_LEN_SHIFT 24 564 + #define ALX_WOL_CTRL5_PT14_LEN_MASK 0xFF 565 + #define ALX_WOL_CTRL5_PT14_LEN_SHIFT 16 566 + #define ALX_WOL_CTRL5_PT13_LEN_MASK 0xFF 567 + #define ALX_WOL_CTRL5_PT13_LEN_SHIFT 8 568 + #define ALX_WOL_CTRL5_PT12_LEN_MASK 0xFF 569 + #define ALX_WOL_CTRL5_PT12_LEN_SHIFT 0 570 + 571 + #define ALX_ACER_FIXED_PTN0 0x1850 572 + #define ALX_ACER_FIXED_PTN0_MASK 0xFFFFFFFF 573 + #define ALX_ACER_FIXED_PTN0_SHIFT 0 574 + 575 + #define ALX_ACER_FIXED_PTN1 0x1854 576 + #define ALX_ACER_FIXED_PTN1_MASK 0xFFFF 577 + #define ALX_ACER_FIXED_PTN1_SHIFT 0 578 + 579 + #define ALX_ACER_RANDOM_NUM0 0x1858 580 + #define ALX_ACER_RANDOM_NUM0_MASK 0xFFFFFFFF 581 + #define ALX_ACER_RANDOM_NUM0_SHIFT 0 582 + 583 + #define ALX_ACER_RANDOM_NUM1 0x185C 584 + #define ALX_ACER_RANDOM_NUM1_MASK 0xFFFFFFFF 585 + #define ALX_ACER_RANDOM_NUM1_SHIFT 0 586 + 587 + #define ALX_ACER_RANDOM_NUM2 0x1860 588 + #define ALX_ACER_RANDOM_NUM2_MASK 0xFFFFFFFF 589 + #define ALX_ACER_RANDOM_NUM2_SHIFT 0 590 + 591 + #define ALX_ACER_RANDOM_NUM3 0x1864 592 + #define ALX_ACER_RANDOM_NUM3_MASK 0xFFFFFFFF 593 + #define 
ALX_ACER_RANDOM_NUM3_SHIFT 0 594 + 595 + #define ALX_ACER_MAGIC 0x1868 596 + #define ALX_ACER_MAGIC_EN BIT(31) 597 + #define ALX_ACER_MAGIC_PME_EN BIT(30) 598 + #define ALX_ACER_MAGIC_MATCH BIT(29) 599 + #define ALX_ACER_MAGIC_FF_CHECK BIT(10) 600 + #define ALX_ACER_MAGIC_RAN_LEN_MASK 0x1F 601 + #define ALX_ACER_MAGIC_RAN_LEN_SHIFT 5 602 + #define ALX_ACER_MAGIC_FIX_LEN_MASK 0x1F 603 + #define ALX_ACER_MAGIC_FIX_LEN_SHIFT 0 604 + 605 + #define ALX_ACER_TIMER 0x186C 606 + #define ALX_ACER_TIMER_EN BIT(31) 607 + #define ALX_ACER_TIMER_PME_EN BIT(30) 608 + #define ALX_ACER_TIMER_MATCH BIT(29) 609 + #define ALX_ACER_TIMER_THRES_MASK 0x1FFFF 610 + #define ALX_ACER_TIMER_THRES_SHIFT 0 611 + #define ALX_ACER_TIMER_THRES_DEF 1 612 + 613 + /* RSS definitions */ 614 + #define ALX_RSS_KEY0 0x14B0 615 + #define ALX_RSS_KEY1 0x14B4 616 + #define ALX_RSS_KEY2 0x14B8 617 + #define ALX_RSS_KEY3 0x14BC 618 + #define ALX_RSS_KEY4 0x14C0 619 + #define ALX_RSS_KEY5 0x14C4 620 + #define ALX_RSS_KEY6 0x14C8 621 + #define ALX_RSS_KEY7 0x14CC 622 + #define ALX_RSS_KEY8 0x14D0 623 + #define ALX_RSS_KEY9 0x14D4 624 + 625 + #define ALX_RSS_IDT_TBL0 0x1B00 626 + 627 + #define ALX_MSI_MAP_TBL1 0x15D0 628 + #define ALX_MSI_MAP_TBL1_TXQ1_SHIFT 20 629 + #define ALX_MSI_MAP_TBL1_TXQ0_SHIFT 16 630 + #define ALX_MSI_MAP_TBL1_RXQ3_SHIFT 12 631 + #define ALX_MSI_MAP_TBL1_RXQ2_SHIFT 8 632 + #define ALX_MSI_MAP_TBL1_RXQ1_SHIFT 4 633 + #define ALX_MSI_MAP_TBL1_RXQ0_SHIFT 0 634 + 635 + #define ALX_MSI_MAP_TBL2 0x15D8 636 + #define ALX_MSI_MAP_TBL2_TXQ3_SHIFT 20 637 + #define ALX_MSI_MAP_TBL2_TXQ2_SHIFT 16 638 + #define ALX_MSI_MAP_TBL2_RXQ7_SHIFT 12 639 + #define ALX_MSI_MAP_TBL2_RXQ6_SHIFT 8 640 + #define ALX_MSI_MAP_TBL2_RXQ5_SHIFT 4 641 + #define ALX_MSI_MAP_TBL2_RXQ4_SHIFT 0 642 + 643 + #define ALX_MSI_ID_MAP 0x15D4 644 + 645 + #define ALX_MSI_RETRANS_TIMER 0x1920 646 + /* bit16: 1:line,0:standard */ 647 + #define ALX_MSI_MASK_SEL_LINE BIT(16) 648 + #define ALX_MSI_RETRANS_TM_MASK 0xFFFF 649 + #define 
ALX_MSI_RETRANS_TM_SHIFT 0 650 + 651 + /* CR DMA ctrl */ 652 + 653 + /* TX QoS */ 654 + #define ALX_WRR 0x1938 655 + #define ALX_WRR_PRI_MASK 0x3 656 + #define ALX_WRR_PRI_SHIFT 29 657 + #define ALX_WRR_PRI_RESTRICT_NONE 3 658 + #define ALX_WRR_PRI3_MASK 0x1F 659 + #define ALX_WRR_PRI3_SHIFT 24 660 + #define ALX_WRR_PRI2_MASK 0x1F 661 + #define ALX_WRR_PRI2_SHIFT 16 662 + #define ALX_WRR_PRI1_MASK 0x1F 663 + #define ALX_WRR_PRI1_SHIFT 8 664 + #define ALX_WRR_PRI0_MASK 0x1F 665 + #define ALX_WRR_PRI0_SHIFT 0 666 + 667 + #define ALX_HQTPD 0x193C 668 + #define ALX_HQTPD_BURST_EN BIT(31) 669 + #define ALX_HQTPD_Q3_NUMPREF_MASK 0xF 670 + #define ALX_HQTPD_Q3_NUMPREF_SHIFT 8 671 + #define ALX_HQTPD_Q2_NUMPREF_MASK 0xF 672 + #define ALX_HQTPD_Q2_NUMPREF_SHIFT 4 673 + #define ALX_HQTPD_Q1_NUMPREF_MASK 0xF 674 + #define ALX_HQTPD_Q1_NUMPREF_SHIFT 0 675 + 676 + #define ALX_MISC 0x19C0 677 + #define ALX_MISC_PSW_OCP_MASK 0x7 678 + #define ALX_MISC_PSW_OCP_SHIFT 21 679 + #define ALX_MISC_PSW_OCP_DEF 0x7 680 + #define ALX_MISC_ISO_EN BIT(12) 681 + #define ALX_MISC_INTNLOSC_OPEN BIT(3) 682 + 683 + #define ALX_MSIC2 0x19C8 684 + #define ALX_MSIC2_CALB_START BIT(0) 685 + 686 + #define ALX_MISC3 0x19CC 687 + /* bit1: 1:Software control 25M */ 688 + #define ALX_MISC3_25M_BY_SW BIT(1) 689 + /* bit0: 25M switch to intnl OSC */ 690 + #define ALX_MISC3_25M_NOTO_INTNL BIT(0) 691 + 692 + /* MSIX tbl in memory space */ 693 + #define ALX_MSIX_ENTRY_BASE 0x2000 694 + 695 + /********************* PHY regs definition ***************************/ 696 + 697 + /* PHY Specific Status Register */ 698 + #define ALX_MII_GIGA_PSSR 0x11 699 + #define ALX_GIGA_PSSR_SPD_DPLX_RESOLVED 0x0800 700 + #define ALX_GIGA_PSSR_DPLX 0x2000 701 + #define ALX_GIGA_PSSR_SPEED 0xC000 702 + #define ALX_GIGA_PSSR_10MBS 0x0000 703 + #define ALX_GIGA_PSSR_100MBS 0x4000 704 + #define ALX_GIGA_PSSR_1000MBS 0x8000 705 + 706 + /* PHY Interrupt Enable Register */ 707 + #define ALX_MII_IER 0x12 708 + #define ALX_IER_LINK_UP 
0x0400 709 + #define ALX_IER_LINK_DOWN 0x0800 710 + 711 + /* PHY Interrupt Status Register */ 712 + #define ALX_MII_ISR 0x13 713 + 714 + #define ALX_MII_DBG_ADDR 0x1D 715 + #define ALX_MII_DBG_DATA 0x1E 716 + 717 + /***************************** debug port *************************************/ 718 + 719 + #define ALX_MIIDBG_ANACTRL 0x00 720 + #define ALX_ANACTRL_DEF 0x02EF 721 + 722 + #define ALX_MIIDBG_SYSMODCTRL 0x04 723 + /* en half bias */ 724 + #define ALX_SYSMODCTRL_IECHOADJ_DEF 0xBB8B 725 + 726 + #define ALX_MIIDBG_SRDSYSMOD 0x05 727 + #define ALX_SRDSYSMOD_DEEMP_EN 0x0040 728 + #define ALX_SRDSYSMOD_DEF 0x2C46 729 + 730 + #define ALX_MIIDBG_HIBNEG 0x0B 731 + #define ALX_HIBNEG_PSHIB_EN 0x8000 732 + #define ALX_HIBNEG_HIB_PSE 0x1000 733 + #define ALX_HIBNEG_DEF 0xBC40 734 + #define ALX_HIBNEG_NOHIB (ALX_HIBNEG_DEF & \ 735 + ~(ALX_HIBNEG_PSHIB_EN | ALX_HIBNEG_HIB_PSE)) 736 + 737 + #define ALX_MIIDBG_TST10BTCFG 0x12 738 + #define ALX_TST10BTCFG_DEF 0x4C04 739 + 740 + #define ALX_MIIDBG_AZ_ANADECT 0x15 741 + #define ALX_AZ_ANADECT_DEF 0x3220 742 + #define ALX_AZ_ANADECT_LONG 0x3210 743 + 744 + #define ALX_MIIDBG_MSE16DB 0x18 745 + #define ALX_MSE16DB_UP 0x05EA 746 + #define ALX_MSE16DB_DOWN 0x02EA 747 + 748 + #define ALX_MIIDBG_MSE20DB 0x1C 749 + #define ALX_MSE20DB_TH_MASK 0x7F 750 + #define ALX_MSE20DB_TH_SHIFT 2 751 + #define ALX_MSE20DB_TH_DEF 0x2E 752 + #define ALX_MSE20DB_TH_HI 0x54 753 + 754 + #define ALX_MIIDBG_AGC 0x23 755 + #define ALX_AGC_2_VGA_MASK 0x3FU 756 + #define ALX_AGC_2_VGA_SHIFT 8 757 + #define ALX_AGC_LONG1G_LIMT 40 758 + #define ALX_AGC_LONG100M_LIMT 44 759 + 760 + #define ALX_MIIDBG_LEGCYPS 0x29 761 + #define ALX_LEGCYPS_EN 0x8000 762 + #define ALX_LEGCYPS_DEF 0x129D 763 + 764 + #define ALX_MIIDBG_TST100BTCFG 0x36 765 + #define ALX_TST100BTCFG_DEF 0xE12C 766 + 767 + #define ALX_MIIDBG_GREENCFG 0x3B 768 + #define ALX_GREENCFG_DEF 0x7078 769 + 770 + #define ALX_MIIDBG_GREENCFG2 0x3D 771 + #define ALX_GREENCFG2_BP_GREEN 0x8000 772 + 
#define ALX_GREENCFG2_GATE_DFSE_EN 0x0080 773 + 774 + /******* dev 3 *********/ 775 + #define ALX_MIIEXT_PCS 3 776 + 777 + #define ALX_MIIEXT_CLDCTRL3 0x8003 778 + #define ALX_CLDCTRL3_BP_CABLE1TH_DET_GT 0x8000 779 + 780 + #define ALX_MIIEXT_CLDCTRL5 0x8005 781 + #define ALX_CLDCTRL5_BP_VD_HLFBIAS 0x4000 782 + 783 + #define ALX_MIIEXT_CLDCTRL6 0x8006 784 + #define ALX_CLDCTRL6_CAB_LEN_MASK 0xFF 785 + #define ALX_CLDCTRL6_CAB_LEN_SHIFT 0 786 + #define ALX_CLDCTRL6_CAB_LEN_SHORT1G 116 787 + #define ALX_CLDCTRL6_CAB_LEN_SHORT100M 152 788 + 789 + #define ALX_MIIEXT_VDRVBIAS 0x8062 790 + #define ALX_VDRVBIAS_DEF 0x3 791 + 792 + /********* dev 7 **********/ 793 + #define ALX_MIIEXT_ANEG 7 794 + 795 + #define ALX_MIIEXT_LOCAL_EEEADV 0x3C 796 + #define ALX_LOCAL_EEEADV_1000BT 0x0004 797 + #define ALX_LOCAL_EEEADV_100BT 0x0002 798 + 799 + #define ALX_MIIEXT_AFE 0x801A 800 + #define ALX_AFE_10BT_100M_TH 0x0040 801 + 802 + #define ALX_MIIEXT_S3DIG10 0x8023 803 + /* bit0: 1:bypass 10BT rx fifo, 0:original 10BT rx */ 804 + #define ALX_MIIEXT_S3DIG10_SL 0x0001 805 + #define ALX_MIIEXT_S3DIG10_DEF 0 806 + 807 + #define ALX_MIIEXT_NLP78 0x8027 808 + #define ALX_MIIEXT_NLP78_120M_DEF 0x8A05 809 + 810 + #endif
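The `ALX_RXQ2_RXF_FLOW_CTRL_RSVD` comment in the hunk above spells out where 3212 comes from. A quick userspace check of that arithmetic — the constants and their meanings are copied from the comment, and the helper name is illustrative, not from the driver:

```c
#include <assert.h>

/* Byte budget the RX FIFO must reserve so that a pause frame emitted at
 * the XOFF threshold still takes effect before the FIFO overflows; each
 * term is taken from the comment above ALX_RXQ2_RXF_FLOW_CTRL_RSVD. */
static int alx_rxf_flow_ctrl_rsvd(void)
{
    int tx_packet  = 1522;  /* max frame already in flight on TX */
    int ipg        = 12;    /* inter-packet gap */
    int sof        = 8;     /* start-of-frame / preamble */
    int pause      = 64;    /* minimum-size pause frame */
    int rx_packet  = 1522;  /* max frame arriving on RX */
    int link_delay = 64;    /* delay of link */

    return tx_packet + ipg + sof + pause + ipg + sof + rx_packet + link_delay;
}
```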
+10
drivers/net/ethernet/broadcom/tg3.c
··· 1790 1790 int i; 1791 1791 u32 val; 1792 1792 1793 + if (tg3_flag(tp, NO_FWARE_REPORTED)) 1794 + return 0; 1795 + 1793 1796 if (tg3_flag(tp, IS_SSB_CORE)) { 1794 1797 /* We don't use firmware. */ 1795 1798 return 0; ··· 10478 10475 */ 10479 10476 static int tg3_init_hw(struct tg3 *tp, bool reset_phy) 10480 10477 { 10478 + /* Chip may have been just powered on. If so, the boot code may still 10479 + * be running initialization. Wait for it to finish to avoid races in 10480 + * accessing the hardware. 10481 + */ 10482 + tg3_enable_register_access(tp); 10483 + tg3_poll_fw(tp); 10484 + 10481 10485 tg3_switch_clocks(tp); 10482 10486 10483 10487 tw32(TG3PCI_MEM_WIN_BASE_ADDR, 0);
+1 -1
drivers/net/ethernet/brocade/bna/bnad_debugfs.c
··· 244 244 file->f_pos += offset; 245 245 break; 246 246 case 2: 247 - file->f_pos = debug->buffer_len - offset; 247 + file->f_pos = debug->buffer_len + offset; 248 248 break; 249 249 default: 250 250 return -EINVAL;
+6
drivers/net/ethernet/dec/tulip/interrupt.c
··· 76 76 77 77 mapping = pci_map_single(tp->pdev, skb->data, PKT_BUF_SZ, 78 78 PCI_DMA_FROMDEVICE); 79 + if (dma_mapping_error(&tp->pdev->dev, mapping)) { 80 + dev_kfree_skb(skb); 81 + tp->rx_buffers[entry].skb = NULL; 82 + break; 83 + } 84 + 79 85 tp->rx_buffers[entry].mapping = mapping; 80 86 81 87 tp->rx_ring[entry].buffer1 = cpu_to_le32(mapping);
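The added `dma_mapping_error()` check closes a hole where a failed `pci_map_single()` handed the device a garbage bus address. The refill-loop shape — free what you allocated, leave the slot empty, stop refilling rather than abort — modeled in userspace (the mapping function here merely simulates failure; names and types are stand-ins, not the tulip driver's):

```c
#include <stdlib.h>

#define RING_SIZE 4
#define MAP_FAILED_ADDR 0UL     /* stand-in for a dma_mapping_error() result */

struct rx_slot { void *skb; unsigned long mapping; };

static unsigned long fake_map(void *buf, int *fail_after)
{
    if ((*fail_after)-- <= 0)
        return MAP_FAILED_ADDR; /* simulated mapping failure */
    return (unsigned long)buf;  /* pretend this is a bus address */
}

/* Refill the RX ring until the first mapping failure; on failure the
 * buffer is freed and the slot stays empty for a later retry, matching
 * the break-out-of-the-loop shape of the fix above. */
static int refill_rx(struct rx_slot *ring, int fail_after)
{
    int filled = 0;

    for (int i = 0; i < RING_SIZE; i++) {
        void *buf = malloc(64);
        unsigned long map = fake_map(buf, &fail_after);

        if (map == MAP_FAILED_ADDR) {
            free(buf);
            ring[i].skb = NULL;
            break;              /* try again on the next pass */
        }
        ring[i].skb = buf;
        ring[i].mapping = map;
        filled++;
    }
    return filled;
}
```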
+3
drivers/net/ethernet/emulex/benet/be_main.c
··· 4259 4259 netdev->features |= NETIF_F_HIGHDMA; 4260 4260 } else { 4261 4261 status = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); 4262 + if (!status) 4263 + status = dma_set_coherent_mask(&pdev->dev, 4264 + DMA_BIT_MASK(32)); 4262 4265 if (status) { 4263 4266 dev_err(&pdev->dev, "Could not set PCI DMA Mask\n"); 4264 4267 goto free_netdev;
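The benet fix makes the 32-bit fallback set the coherent mask alongside the streaming mask; otherwise the coherent mask keeps its default and coherent allocations can still land above 4 GB. A userspace sketch of that fallback shape — the `fake_dev` struct and mask fields stand in for the kernel DMA API, they are not real types:

```c
#include <stdint.h>
#include <stdbool.h>

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

struct fake_dev {
    bool supports_64bit;
    uint64_t streaming_mask;  /* stand-in for dev->dma_mask */
    uint64_t coherent_mask;   /* stand-in for dev->coherent_dma_mask */
};

static int set_mask(struct fake_dev *d, uint64_t mask)
{
    if (mask > DMA_BIT_MASK(32) && !d->supports_64bit)
        return -1;            /* device cannot address that range */
    d->streaming_mask = mask;
    return 0;
}

static int set_coherent_mask(struct fake_dev *d, uint64_t mask)
{
    if (mask > DMA_BIT_MASK(32) && !d->supports_64bit)
        return -1;
    d->coherent_mask = mask;
    return 0;
}

/* Mirrors the fixed probe logic: try 64-bit for both masks, else fall
 * back to 32-bit for both -- never leave the two masks disagreeing. */
static int probe_dma(struct fake_dev *d)
{
    if (!set_mask(d, DMA_BIT_MASK(64)) &&
        !set_coherent_mask(d, DMA_BIT_MASK(64)))
        return 0;
    if (!set_mask(d, DMA_BIT_MASK(32)) &&
        !set_coherent_mask(d, DMA_BIT_MASK(32)))
        return 0;
    return -1;
}
```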
+13 -6
drivers/net/ethernet/renesas/sh_eth.c
··· 722 722 mdelay(1); 723 723 cnt--; 724 724 } 725 - if (cnt < 0) { 726 - pr_err("Device reset fail\n"); 725 + if (cnt <= 0) { 726 + pr_err("Device reset failed\n"); 727 727 ret = -ETIMEDOUT; 728 728 } 729 729 return ret; ··· 1260 1260 desc_status = edmac_to_cpu(mdp, rxdesc->status); 1261 1261 pkt_len = rxdesc->frame_length; 1262 1262 1263 - #if defined(CONFIG_ARCH_R8A7740) 1264 - desc_status >>= 16; 1265 - #endif 1266 - 1267 1263 if (--boguscnt < 0) 1268 1264 break; 1269 1265 1270 1266 if (!(desc_status & RDFEND)) 1271 1267 ndev->stats.rx_length_errors++; 1268 + 1269 + #if defined(CONFIG_ARCH_R8A7740) 1270 + /* 1271 + * In case of almost all GETHER/ETHERs, the Receive Frame State 1272 + * (RFS) bits in the Receive Descriptor 0 are from bit 9 to 1273 + * bit 0. However, in case of the R8A7740's GETHER, the RFS 1274 + * bits are from bit 25 to bit 16. So, the driver needs right 1275 + * shifting by 16. 1276 + */ 1277 + desc_status >>= 16; 1278 + #endif 1272 1279 1273 1280 if (desc_status & (RD_RFS1 | RD_RFS2 | RD_RFS3 | RD_RFS4 | 1274 1281 RD_RFS5 | RD_RFS6 | RD_RFS10)) {
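The relocated comment above is the whole story: same Receive Frame State field, different bit position on the R8A7740. As a standalone sketch — the helper name and the ten-bit field mask are assumptions drawn from the comment, not code from the driver:

```c
#include <stdint.h>
#include <stdbool.h>

/* RFS bits live in bits 9:0 of the descriptor status word on most
 * GETHER/ETHER variants, but in bits 25:16 on the R8A7740, so that
 * variant needs a right shift by 16 before testing the RD_RFS* flags. */
static uint32_t rfs_bits(uint32_t desc_status, bool is_r8a7740)
{
    if (is_r8a7740)
        desc_status >>= 16;
    return desc_status & 0x3ff;   /* the ten RFS bits */
}
```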
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1899 1899 1900 1900 #ifdef STMMAC_XMIT_DEBUG 1901 1901 if (netif_msg_pktdata(priv)) { 1902 - pr_info("%s: curr %d dirty=%d entry=%d, first=%p, nfrags=%d" 1902 + pr_info("%s: curr %d dirty=%d entry=%d, first=%p, nfrags=%d", 1903 1903 __func__, (priv->cur_tx % txsize), 1904 1904 (priv->dirty_tx % txsize), entry, first, nfrags); 1905 1905 if (priv->extend_desc)
+1 -1
drivers/net/ethernet/ti/cpsw.c
··· 1681 1681 priv->rx_packet_max = max(rx_packet_max, 128); 1682 1682 priv->cpts = devm_kzalloc(&pdev->dev, sizeof(struct cpts), GFP_KERNEL); 1683 1683 priv->irq_enabled = true; 1684 - if (!ndev) { 1684 + if (!priv->cpts) { 1685 1685 pr_err("error allocating cpts\n"); 1686 1686 goto clean_ndev_ret; 1687 1687 }
+5 -9
drivers/net/ethernet/ti/davinci_mdio.c
··· 449 449 __raw_writel(ctrl, &data->regs->control); 450 450 wait_for_idle(data); 451 451 452 - pm_runtime_put_sync(data->dev); 453 - 454 452 data->suspended = true; 455 453 spin_unlock(&data->lock); 454 + pm_runtime_put_sync(data->dev); 456 455 457 456 return 0; 458 457 } ··· 459 460 static int davinci_mdio_resume(struct device *dev) 460 461 { 461 462 struct davinci_mdio_data *data = dev_get_drvdata(dev); 462 - u32 ctrl; 463 463 464 - spin_lock(&data->lock); 465 464 pm_runtime_get_sync(data->dev); 466 465 466 + spin_lock(&data->lock); 467 467 /* restart the scan state machine */ 468 - ctrl = __raw_readl(&data->regs->control); 469 - ctrl |= CONTROL_ENABLE; 470 - __raw_writel(ctrl, &data->regs->control); 468 + __davinci_mdio_reset(data); 471 469 472 470 data->suspended = false; 473 471 spin_unlock(&data->lock); ··· 473 477 } 474 478 475 479 static const struct dev_pm_ops davinci_mdio_pm_ops = { 476 - .suspend = davinci_mdio_suspend, 477 - .resume = davinci_mdio_resume, 480 + .suspend_late = davinci_mdio_suspend, 481 + .resume_early = davinci_mdio_resume, 478 482 }; 479 483 480 484 static const struct of_device_id davinci_mdio_of_mtable[] = {
+3 -1
drivers/net/hyperv/netvsc_drv.c
··· 285 285 286 286 skb->protocol = eth_type_trans(skb, net); 287 287 skb->ip_summed = CHECKSUM_NONE; 288 - __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), packet->vlan_tci); 288 + if (packet->vlan_tci & VLAN_TAG_PRESENT) 289 + __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), 290 + packet->vlan_tci); 289 291 290 292 net->stats.rx_packets++; 291 293 net->stats.rx_bytes += packet->total_data_buflen;
+12 -6
drivers/net/macvlan.c
··· 853 853 struct nlattr *tb[], struct nlattr *data[]) 854 854 { 855 855 struct macvlan_dev *vlan = netdev_priv(dev); 856 - if (data && data[IFLA_MACVLAN_MODE]) 857 - vlan->mode = nla_get_u32(data[IFLA_MACVLAN_MODE]); 856 + 858 857 if (data && data[IFLA_MACVLAN_FLAGS]) { 859 858 __u16 flags = nla_get_u16(data[IFLA_MACVLAN_FLAGS]); 860 859 bool promisc = (flags ^ vlan->flags) & MACVLAN_FLAG_NOPROMISC; 860 + if (vlan->port->passthru && promisc) { 861 + int err; 861 862 862 - if (promisc && (flags & MACVLAN_FLAG_NOPROMISC)) 863 - dev_set_promiscuity(vlan->lowerdev, -1); 864 - else if (promisc && !(flags & MACVLAN_FLAG_NOPROMISC)) 865 - dev_set_promiscuity(vlan->lowerdev, 1); 863 + if (flags & MACVLAN_FLAG_NOPROMISC) 864 + err = dev_set_promiscuity(vlan->lowerdev, -1); 865 + else 866 + err = dev_set_promiscuity(vlan->lowerdev, 1); 867 + if (err < 0) 868 + return err; 869 + } 866 870 vlan->flags = flags; 867 871 } 872 + if (data && data[IFLA_MACVLAN_MODE]) 873 + vlan->mode = nla_get_u32(data[IFLA_MACVLAN_MODE]); 868 874 return 0; 869 875 } 870 876
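Two things change in the macvlan hunk: the promiscuity toggle on the lower device is now attempted before `vlan->flags` is committed, so a failure can be reported without leaving the flags and the device out of sync, and it is only attempted in passthru mode. The error-before-commit shape in userspace — all types and names here are illustrative:

```c
#include <stdbool.h>

#define FLAG_NOPROMISC 0x1

struct port   { bool passthru; int promisc_refs; bool fail_next; };
struct vlanif { struct port *port; unsigned flags; };

static int set_promiscuity(struct port *p, int delta)
{
    if (p->fail_next)
        return -1;                 /* simulate a lower-device failure */
    p->promisc_refs += delta;
    return 0;
}

/* Commit the new flags only after every side effect has succeeded, so
 * an error leaves the interface exactly as it was. */
static int change_flags(struct vlanif *v, unsigned flags)
{
    bool promisc_toggled = (flags ^ v->flags) & FLAG_NOPROMISC;

    if (v->port->passthru && promisc_toggled) {
        int err = set_promiscuity(v->port,
                                  (flags & FLAG_NOPROMISC) ? -1 : 1);
        if (err < 0)
            return err;            /* flags stay untouched */
    }
    v->flags = flags;
    return 0;
}
```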
+1 -1
drivers/net/team/team.c
··· 1111 1111 } 1112 1112 1113 1113 port->index = -1; 1114 - team_port_enable(team, port); 1115 1114 list_add_tail_rcu(&port->list, &team->port_list); 1115 + team_port_enable(team, port); 1116 1116 __team_compute_features(team); 1117 1117 __team_port_change_port_added(port, !!netif_carrier_ok(port_dev)); 1118 1118 __team_options_change_check(team);
+2
drivers/net/team/team_mode_random.c
··· 28 28 29 29 port_index = random_N(team->en_port_count); 30 30 port = team_get_port_by_index_rcu(team, port_index); 31 + if (unlikely(!port)) 32 + goto drop; 31 33 port = team_get_first_port_txable_rcu(team, port); 32 34 if (unlikely(!port)) 33 35 goto drop;
+2
drivers/net/team/team_mode_roundrobin.c
··· 33 33 port_index = team_num_to_port_index(team, 34 34 rr_priv(team)->sent_packets++); 35 35 port = team_get_port_by_index_rcu(team, port_index); 36 + if (unlikely(!port)) 37 + goto drop; 36 38 port = team_get_first_port_txable_rcu(team, port); 37 39 if (unlikely(!port)) 38 40 goto drop;
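Both team modes get the same fix: under RCU, `en_port_count` can be read a moment before a port disappears, so the index lookup itself can come back NULL and must be checked before the txable walk dereferences it. A userspace model of that lookup — an array of pointers stands in for the RCU-protected port list:

```c
#include <stddef.h>

struct team_model_port { int txable; };

struct team_model {
    struct team_model_port **ports; /* stand-in for the RCU port array */
    int en_port_count;              /* may lag behind the array contents */
};

/* The index can race with port removal, so a NULL slot must translate
 * to "drop the packet", never to a dereference. */
static struct team_model_port *
pick_port(struct team_model *t, unsigned int hash)
{
    struct team_model_port *p;

    if (t->en_port_count == 0)
        return NULL;
    p = t->ports[hash % t->en_port_count];
    if (!p)
        return NULL;                /* slot emptied under us: drop */
    return p->txable ? p : NULL;
}
```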
+3 -1
drivers/net/tun.c
··· 352 352 u32 numqueues = 0; 353 353 354 354 rcu_read_lock(); 355 - numqueues = tun->numqueues; 355 + numqueues = ACCESS_ONCE(tun->numqueues); 356 356 357 357 txq = skb_get_rxhash(skb); 358 358 if (txq) { ··· 2156 2156 file->private_data = tfile; 2157 2157 set_bit(SOCK_EXTERNALLY_ALLOCATED, &tfile->socket.flags); 2158 2158 INIT_LIST_HEAD(&tfile->next); 2159 + 2160 + sock_set_flag(&tfile->sk, SOCK_ZEROCOPY); 2159 2161 2160 2162 return 0; 2161 2163 }
+6
drivers/net/usb/cdc_ether.c
··· 627 627 .driver_info = 0, 628 628 }, 629 629 630 + /* Huawei E1820 - handled by qmi_wwan */ 631 + { 632 + USB_DEVICE_INTERFACE_NUMBER(HUAWEI_VENDOR_ID, 0x14ac, 1), 633 + .driver_info = 0, 634 + }, 635 + 630 636 /* Realtek RTL8152 Based USB 2.0 Ethernet Adapters */ 631 637 #if defined(CONFIG_USB_RTL8152) || defined(CONFIG_USB_RTL8152_MODULE) 632 638 {
+1
drivers/net/usb/qmi_wwan.c
··· 519 519 /* 3. Combined interface devices matching on interface number */ 520 520 {QMI_FIXED_INTF(0x0408, 0xea42, 4)}, /* Yota / Megafon M100-1 */ 521 521 {QMI_FIXED_INTF(0x12d1, 0x140c, 1)}, /* Huawei E173 */ 522 + {QMI_FIXED_INTF(0x12d1, 0x14ac, 1)}, /* Huawei E1820 */ 522 523 {QMI_FIXED_INTF(0x19d2, 0x0002, 1)}, 523 524 {QMI_FIXED_INTF(0x19d2, 0x0012, 1)}, 524 525 {QMI_FIXED_INTF(0x19d2, 0x0017, 3)},
+26 -14
drivers/net/vxlan.c
··· 603 603 604 604 /* Watch incoming packets to learn mapping between Ethernet address 605 605 * and Tunnel endpoint. 606 + * Return true if packet is bogus and should be droppped. 606 607 */ 607 - static void vxlan_snoop(struct net_device *dev, 608 + static bool vxlan_snoop(struct net_device *dev, 608 609 __be32 src_ip, const u8 *src_mac) 609 610 { 610 611 struct vxlan_dev *vxlan = netdev_priv(dev); 611 612 struct vxlan_fdb *f; 612 - int err; 613 613 614 614 f = vxlan_find_mac(vxlan, src_mac); 615 615 if (likely(f)) { 616 616 if (likely(f->remote.remote_ip == src_ip)) 617 - return; 617 + return false; 618 + 619 + /* Don't migrate static entries, drop packets */ 620 + if (f->state & NUD_NOARP) 621 + return true; 618 622 619 623 if (net_ratelimit()) 620 624 netdev_info(dev, ··· 630 626 } else { 631 627 /* learned new entry */ 632 628 spin_lock(&vxlan->hash_lock); 633 - err = vxlan_fdb_create(vxlan, src_mac, src_ip, 634 - NUD_REACHABLE, 635 - NLM_F_EXCL|NLM_F_CREATE, 636 - vxlan->dst_port, 637 - vxlan->default_dst.remote_vni, 638 - 0, NTF_SELF); 629 + 630 + /* close off race between vxlan_flush and incoming packets */ 631 + if (netif_running(dev)) 632 + vxlan_fdb_create(vxlan, src_mac, src_ip, 633 + NUD_REACHABLE, 634 + NLM_F_EXCL|NLM_F_CREATE, 635 + vxlan->dst_port, 636 + vxlan->default_dst.remote_vni, 637 + 0, NTF_SELF); 639 638 spin_unlock(&vxlan->hash_lock); 640 639 } 640 + 641 + return false; 641 642 } 642 643 643 644 ··· 775 766 vxlan->dev->dev_addr) == 0) 776 767 goto drop; 777 768 778 - if (vxlan->flags & VXLAN_F_LEARN) 779 - vxlan_snoop(skb->dev, oip->saddr, eth_hdr(skb)->h_source); 769 + if ((vxlan->flags & VXLAN_F_LEARN) && 770 + vxlan_snoop(skb->dev, oip->saddr, eth_hdr(skb)->h_source)) 771 + goto drop; 780 772 781 773 __skb_tunnel_rx(skb, vxlan->dev); 782 774 skb_reset_network_header(skb); ··· 1200 1190 struct sk_buff *skb1; 1201 1191 1202 1192 skb1 = skb_clone(skb, GFP_ATOMIC); 1203 - rc1 = vxlan_xmit_one(skb1, dev, rdst, did_rsc); 1204 - if (rc == 
NETDEV_TX_OK) 1205 - rc = rc1; 1193 + if (skb1) { 1194 + rc1 = vxlan_xmit_one(skb1, dev, rdst, did_rsc); 1195 + if (rc == NETDEV_TX_OK) 1196 + rc = rc1; 1197 + } 1206 1198 } 1207 1199 1208 1200 rc1 = vxlan_xmit_one(skb, dev, rdst0, did_rsc);
+7 -3
drivers/net/wireless/ath/ath9k/Kconfig
··· 84 84 developed. At this point enabling this option won't do anything 85 85 except increase code size. 86 86 87 - config ATH9K_RATE_CONTROL 87 + config ATH9K_LEGACY_RATE_CONTROL 88 88 bool "Atheros ath9k rate control" 89 89 depends on ATH9K 90 - default y 90 + default n 91 91 ---help--- 92 92 Say Y, if you want to use the ath9k specific rate control 93 - module instead of minstrel_ht. 93 + module instead of minstrel_ht. Be warned that there are various 94 + issues with the ath9k RC and minstrel is a more robust algorithm. 95 + Note that even if this option is selected, "ath9k_rate_control" 96 + has to be passed to mac80211 using the module parameter, 97 + ieee80211_default_rc_algo. 94 98 95 99 config ATH9K_HTC 96 100 tristate "Atheros HTC based wireless cards support"
+1 -1
drivers/net/wireless/ath/ath9k/Makefile
··· 8 8 antenna.o 9 9 10 10 ath9k-$(CONFIG_ATH9K_BTCOEX_SUPPORT) += mci.o 11 - ath9k-$(CONFIG_ATH9K_RATE_CONTROL) += rc.o 11 + ath9k-$(CONFIG_ATH9K_LEGACY_RATE_CONTROL) += rc.o 12 12 ath9k-$(CONFIG_ATH9K_PCI) += pci.o 13 13 ath9k-$(CONFIG_ATH9K_AHB) += ahb.o 14 14 ath9k-$(CONFIG_ATH9K_DEBUGFS) += debug.o
+5 -5
drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
··· 958 958 {0x0000a074, 0x00000000}, 959 959 {0x0000a078, 0x00000000}, 960 960 {0x0000a07c, 0x00000000}, 961 - {0x0000a080, 0x1a1a1a1a}, 962 - {0x0000a084, 0x1a1a1a1a}, 963 - {0x0000a088, 0x1a1a1a1a}, 964 - {0x0000a08c, 0x1a1a1a1a}, 965 - {0x0000a090, 0x171a1a1a}, 961 + {0x0000a080, 0x22222229}, 962 + {0x0000a084, 0x1d1d1d1d}, 963 + {0x0000a088, 0x1d1d1d1d}, 964 + {0x0000a08c, 0x1d1d1d1d}, 965 + {0x0000a090, 0x171d1d1d}, 966 966 {0x0000a094, 0x11111717}, 967 967 {0x0000a098, 0x00030311}, 968 968 {0x0000a09c, 0x00000000},
+1 -6
drivers/net/wireless/ath/ath9k/init.c
··· 792 792 hw->wiphy->iface_combinations = if_comb; 793 793 hw->wiphy->n_iface_combinations = ARRAY_SIZE(if_comb); 794 794 795 - if (AR_SREV_5416(sc->sc_ah)) 796 - hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 795 + hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 797 796 798 797 hw->wiphy->flags |= WIPHY_FLAG_IBSS_RSN; 799 798 hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_TDLS; ··· 829 830 830 831 sc->ant_rx = hw->wiphy->available_antennas_rx; 831 832 sc->ant_tx = hw->wiphy->available_antennas_tx; 832 - 833 - #ifdef CONFIG_ATH9K_RATE_CONTROL 834 - hw->rate_control_algorithm = "ath9k_rate_control"; 835 - #endif 836 833 837 834 if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_2GHZ) 838 835 hw->wiphy->bands[IEEE80211_BAND_2GHZ] =
+1 -1
drivers/net/wireless/ath/ath9k/rc.h
··· 231 231 } 232 232 #endif 233 233 234 - #ifdef CONFIG_ATH9K_RATE_CONTROL 234 + #ifdef CONFIG_ATH9K_LEGACY_RATE_CONTROL 235 235 int ath_rate_control_register(void); 236 236 void ath_rate_control_unregister(void); 237 237 #else
+1 -1
drivers/net/wireless/b43/main.c
··· 2458 2458 for (i = 0; i < B43_NR_FWTYPES; i++) { 2459 2459 errmsg = ctx->errors[i]; 2460 2460 if (strlen(errmsg)) 2461 - b43err(dev->wl, errmsg); 2461 + b43err(dev->wl, "%s", errmsg); 2462 2462 } 2463 2463 b43_print_fw_helptext(dev->wl, 1); 2464 2464 goto out;
+4
drivers/net/wireless/brcm80211/brcmfmac/dhd_linux.c
··· 930 930 brcmf_fws_del_interface(ifp); 931 931 brcmf_fws_deinit(drvr); 932 932 } 933 + if (drvr->iflist[0]) { 934 + free_netdev(ifp->ndev); 935 + drvr->iflist[0] = NULL; 936 + } 933 937 if (p2p_ifp) { 934 938 free_netdev(p2p_ifp->ndev); 935 939 drvr->iflist[1] = NULL;
+2 -15
drivers/net/wireless/brcm80211/brcmsmac/main.c
··· 3074 3074 */ 3075 3075 static bool brcms_c_ps_allowed(struct brcms_c_info *wlc) 3076 3076 { 3077 - /* disallow PS when one of the following global conditions meets */ 3078 - if (!wlc->pub->associated) 3079 - return false; 3080 - 3081 - /* disallow PS when one of these meets when not scanning */ 3082 - if (wlc->filter_flags & FIF_PROMISC_IN_BSS) 3083 - return false; 3084 - 3085 - if (wlc->bsscfg->type == BRCMS_TYPE_AP) 3086 - return false; 3087 - 3088 - if (wlc->bsscfg->type == BRCMS_TYPE_ADHOC) 3089 - return false; 3090 - 3091 - return true; 3077 + /* not supporting PS so always return false for now */ 3078 + return false; 3092 3079 } 3093 3080 3094 3081 static void brcms_c_statsupd(struct brcms_c_info *wlc)
+1
drivers/net/wireless/iwlegacy/3945-rs.c
··· 816 816 rs_sta->last_txrate_idx = idx; 817 817 info->control.rates[0].idx = rs_sta->last_txrate_idx; 818 818 } 819 + info->control.rates[0].count = 1; 819 820 820 821 D_RATE("leave: %d\n", idx); 821 822 }
+1 -1
drivers/net/wireless/iwlegacy/4965-rs.c
··· 2268 2268 info->control.rates[0].flags = 0; 2269 2269 } 2270 2270 info->control.rates[0].idx = rate_idx; 2271 - 2271 + info->control.rates[0].count = 1; 2272 2272 } 2273 2273 2274 2274 static void *
+3 -3
drivers/net/wireless/iwlegacy/common.h
··· 1832 1832 __le32 il_add_beacon_time(struct il_priv *il, u32 base, u32 addon, 1833 1833 u32 beacon_interval); 1834 1834 1835 - #ifdef CONFIG_PM 1835 + #ifdef CONFIG_PM_SLEEP 1836 1836 extern const struct dev_pm_ops il_pm_ops; 1837 1837 1838 1838 #define IL_LEGACY_PM_OPS (&il_pm_ops) 1839 1839 1840 - #else /* !CONFIG_PM */ 1840 + #else /* !CONFIG_PM_SLEEP */ 1841 1841 1842 1842 #define IL_LEGACY_PM_OPS NULL 1843 1843 1844 - #endif /* !CONFIG_PM */ 1844 + #endif /* !CONFIG_PM_SLEEP */ 1845 1845 1846 1846 /***************************************************** 1847 1847 * Error Handling Debugging
+1 -1
drivers/net/wireless/iwlwifi/dvm/rs.c
··· 2799 2799 info->control.rates[0].flags = 0; 2800 2800 } 2801 2801 info->control.rates[0].idx = rate_idx; 2802 - 2802 + info->control.rates[0].count = 1; 2803 2803 } 2804 2804 2805 2805 static void *rs_alloc_sta(void *priv_rate, struct ieee80211_sta *sta,
+1 -1
drivers/net/wireless/iwlwifi/dvm/rxon.c
··· 1378 1378 struct iwl_chain_noise_data *data = &priv->chain_noise_data; 1379 1379 int ret; 1380 1380 1381 - if (!(priv->calib_disabled & IWL_CHAIN_NOISE_CALIB_DISABLED)) 1381 + if (priv->calib_disabled & IWL_CHAIN_NOISE_CALIB_DISABLED) 1382 1382 return; 1383 1383 1384 1384 if ((data->state == IWL_CHAIN_NOISE_ALIVE) &&
+2
drivers/net/wireless/iwlwifi/iwl-drv.c
··· 1000 1000 */ 1001 1001 if (load_module) { 1002 1002 err = request_module("%s", op->name); 1003 + #ifdef CONFIG_IWLWIFI_OPMODE_MODULAR 1003 1004 if (err) 1004 1005 IWL_ERR(drv, 1005 1006 "failed to load module %s (error %d), is dynamic loading enabled?\n", 1006 1007 op->name, err); 1008 + #endif 1007 1009 } 1008 1010 return; 1009 1011
+1
drivers/net/wireless/iwlwifi/mvm/rs.c
··· 2652 2652 info->control.rates[0].flags = 0; 2653 2653 } 2654 2654 info->control.rates[0].idx = rate_idx; 2655 + info->control.rates[0].count = 1; 2655 2656 } 2656 2657 2657 2658 static void *rs_alloc_sta(void *mvm_rate, struct ieee80211_sta *sta,
+2 -1
drivers/net/wireless/iwlwifi/mvm/tx.c
··· 180 180 tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_STA_RATE); 181 181 return; 182 182 } else if (ieee80211_is_back_req(fc)) { 183 - tx_cmd->tx_flags |= cpu_to_le32(TX_CMD_FLG_STA_RATE); 183 + tx_cmd->tx_flags |= 184 + cpu_to_le32(TX_CMD_FLG_ACK | TX_CMD_FLG_BAR); 184 185 } 185 186 186 187 /* HT rate doesn't make sense for a non data frame */
+17 -5
drivers/net/wireless/mwifiex/debugfs.c
··· 26 26 static struct dentry *mwifiex_dfs_dir; 27 27 28 28 static char *bss_modes[] = { 29 - "Unknown", 30 - "Ad-hoc", 31 - "Managed", 32 - "Auto" 29 + "UNSPECIFIED", 30 + "ADHOC", 31 + "STATION", 32 + "AP", 33 + "AP_VLAN", 34 + "WDS", 35 + "MONITOR", 36 + "MESH_POINT", 37 + "P2P_CLIENT", 38 + "P2P_GO", 39 + "P2P_DEVICE", 33 40 }; 34 41 35 42 /* size/addr for mwifiex_debug_info */ ··· 207 200 p += sprintf(p, "driver_version = %s", fmt); 208 201 p += sprintf(p, "\nverext = %s", priv->version_str); 209 202 p += sprintf(p, "\ninterface_name=\"%s\"\n", netdev->name); 210 - p += sprintf(p, "bss_mode=\"%s\"\n", bss_modes[info.bss_mode]); 203 + 204 + if (info.bss_mode >= ARRAY_SIZE(bss_modes)) 205 + p += sprintf(p, "bss_mode=\"%d\"\n", info.bss_mode); 206 + else 207 + p += sprintf(p, "bss_mode=\"%s\"\n", bss_modes[info.bss_mode]); 208 + 211 209 p += sprintf(p, "media_state=\"%s\"\n", 212 210 (!priv->media_connected ? "Disconnected" : "Connected")); 213 211 p += sprintf(p, "mac_address=\"%pM\"\n", netdev->dev_addr);
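The mwifiex change above both extends the `bss_modes` table and adds a bounds check before indexing it, falling back to the raw numeric mode for values the table does not cover. A standalone sketch of that guarded lookup (table contents trimmed for illustration):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const char *bss_modes[] = {
	"UNSPECIFIED", "ADHOC", "STATION", "AP",
};

/* Format a mode name, falling back to the numeric value when the
 * firmware reports a mode the table does not know about. */
static void format_bss_mode(char *buf, size_t len, unsigned int mode)
{
	if (mode >= ARRAY_SIZE(bss_modes))
		snprintf(buf, len, "bss_mode=\"%u\"", mode);
	else
		snprintf(buf, len, "bss_mode=\"%s\"", bss_modes[mode]);
}
```

Without the `ARRAY_SIZE` guard, a firmware-supplied mode past the end of the table reads out of bounds, which is exactly what the debugfs fix prevents.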
+18 -11
drivers/net/wireless/rt2x00/rt2800lib.c
··· 3027 3027 * TODO: we do not use +6 dBm option to do not increase power beyond 3028 3028 * regulatory limit, however this could be utilized for devices with 3029 3029 * CAPABILITY_POWER_LIMIT. 3030 + * 3031 + * TODO: add different temperature compensation code for RT3290 & RT5390 3032 + * to allow to use BBP_R1 for those chips. 3030 3033 */ 3031 - rt2800_bbp_read(rt2x00dev, 1, &r1); 3032 - if (delta <= -12) { 3033 - power_ctrl = 2; 3034 - delta += 12; 3035 - } else if (delta <= -6) { 3036 - power_ctrl = 1; 3037 - delta += 6; 3038 - } else { 3039 - power_ctrl = 0; 3034 + if (!rt2x00_rt(rt2x00dev, RT3290) && 3035 + !rt2x00_rt(rt2x00dev, RT5390)) { 3036 + rt2800_bbp_read(rt2x00dev, 1, &r1); 3037 + if (delta <= -12) { 3038 + power_ctrl = 2; 3039 + delta += 12; 3040 + } else if (delta <= -6) { 3041 + power_ctrl = 1; 3042 + delta += 6; 3043 + } else { 3044 + power_ctrl = 0; 3045 + } 3046 + rt2x00_set_field8(&r1, BBP1_TX_POWER_CTRL, power_ctrl); 3047 + rt2800_bbp_write(rt2x00dev, 1, r1); 3040 3048 } 3041 - rt2x00_set_field8(&r1, BBP1_TX_POWER_CTRL, power_ctrl); 3042 - rt2800_bbp_write(rt2x00dev, 1, r1); 3049 + 3043 3050 offset = TX_PWR_CFG_0; 3044 3051 3045 3052 for (i = 0; i < EEPROM_TXPOWER_BYRATE_SIZE; i += 2) {
+1
drivers/net/wireless/rtlwifi/pci.c
··· 764 764 "can't alloc skb for rx\n"); 765 765 goto done; 766 766 } 767 + kmemleak_not_leak(new_skb); 767 768 768 769 pci_unmap_single(rtlpci->pdev, 769 770 *((dma_addr_t *) skb->cb),
+99 -33
drivers/net/wireless/rtlwifi/rtl8192cu/hw.c
··· 1973 1973 } 1974 1974 } 1975 1975 1976 - void rtl92cu_update_hal_rate_table(struct ieee80211_hw *hw, 1977 - struct ieee80211_sta *sta, 1978 - u8 rssi_level) 1976 + static void rtl92cu_update_hal_rate_table(struct ieee80211_hw *hw, 1977 + struct ieee80211_sta *sta) 1979 1978 { 1980 1979 struct rtl_priv *rtlpriv = rtl_priv(hw); 1981 1980 struct rtl_phy *rtlphy = &(rtlpriv->phy); 1982 1981 struct rtl_mac *mac = rtl_mac(rtl_priv(hw)); 1983 - u32 ratr_value = (u32) mac->basic_rates; 1984 - u8 *mcsrate = mac->mcs; 1982 + struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); 1983 + u32 ratr_value; 1985 1984 u8 ratr_index = 0; 1986 1985 u8 nmode = mac->ht_enable; 1987 - u8 mimo_ps = 1; 1988 - u16 shortgi_rate = 0; 1989 - u32 tmp_ratr_value = 0; 1986 + u8 mimo_ps = IEEE80211_SMPS_OFF; 1987 + u16 shortgi_rate; 1988 + u32 tmp_ratr_value; 1990 1989 u8 curtxbw_40mhz = mac->bw_40; 1991 - u8 curshortgi_40mhz = mac->sgi_40; 1992 - u8 curshortgi_20mhz = mac->sgi_20; 1990 + u8 curshortgi_40mhz = (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40) ? 1991 + 1 : 0; 1992 + u8 curshortgi_20mhz = (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20) ? 
1993 + 1 : 0; 1993 1994 enum wireless_mode wirelessmode = mac->mode; 1994 1995 1995 - ratr_value |= ((*(u16 *) (mcsrate))) << 12; 1996 + if (rtlhal->current_bandtype == BAND_ON_5G) 1997 + ratr_value = sta->supp_rates[1] << 4; 1998 + else 1999 + ratr_value = sta->supp_rates[0]; 2000 + if (mac->opmode == NL80211_IFTYPE_ADHOC) 2001 + ratr_value = 0xfff; 2002 + 2003 + ratr_value |= (sta->ht_cap.mcs.rx_mask[1] << 20 | 2004 + sta->ht_cap.mcs.rx_mask[0] << 12); 1996 2005 switch (wirelessmode) { 1997 2006 case WIRELESS_MODE_B: 1998 2007 if (ratr_value & 0x0000000c) ··· 2015 2006 case WIRELESS_MODE_N_24G: 2016 2007 case WIRELESS_MODE_N_5G: 2017 2008 nmode = 1; 2018 - if (mimo_ps == 0) { 2009 + if (mimo_ps == IEEE80211_SMPS_STATIC) { 2019 2010 ratr_value &= 0x0007F005; 2020 2011 } else { 2021 2012 u32 ratr_mask; ··· 2025 2016 ratr_mask = 0x000ff005; 2026 2017 else 2027 2018 ratr_mask = 0x0f0ff005; 2028 - if (curtxbw_40mhz) 2029 - ratr_mask |= 0x00000010; 2019 + 2030 2020 ratr_value &= ratr_mask; 2031 2021 } 2032 2022 break; ··· 2034 2026 ratr_value &= 0x000ff0ff; 2035 2027 else 2036 2028 ratr_value &= 0x0f0ff0ff; 2029 + 2037 2030 break; 2038 2031 } 2032 + 2039 2033 ratr_value &= 0x0FFFFFFF; 2040 - if (nmode && ((curtxbw_40mhz && curshortgi_40mhz) || 2041 - (!curtxbw_40mhz && curshortgi_20mhz))) { 2034 + 2035 + if (nmode && ((curtxbw_40mhz && 2036 + curshortgi_40mhz) || (!curtxbw_40mhz && 2037 + curshortgi_20mhz))) { 2038 + 2042 2039 ratr_value |= 0x10000000; 2043 2040 tmp_ratr_value = (ratr_value >> 12); 2041 + 2044 2042 for (shortgi_rate = 15; shortgi_rate > 0; shortgi_rate--) { 2045 2043 if ((1 << shortgi_rate) & tmp_ratr_value) 2046 2044 break; 2047 2045 } 2046 + 2048 2047 shortgi_rate = (shortgi_rate << 12) | (shortgi_rate << 8) | 2049 - (shortgi_rate << 4) | (shortgi_rate); 2048 + (shortgi_rate << 4) | (shortgi_rate); 2050 2049 } 2050 + 2051 2051 rtl_write_dword(rtlpriv, REG_ARFR0 + ratr_index * 4, ratr_value); 2052 + 2053 + RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, 
"%x\n", 2054 + rtl_read_dword(rtlpriv, REG_ARFR0)); 2052 2055 } 2053 2056 2054 - void rtl92cu_update_hal_rate_mask(struct ieee80211_hw *hw, u8 rssi_level) 2057 + static void rtl92cu_update_hal_rate_mask(struct ieee80211_hw *hw, 2058 + struct ieee80211_sta *sta, 2059 + u8 rssi_level) 2055 2060 { 2056 2061 struct rtl_priv *rtlpriv = rtl_priv(hw); 2057 2062 struct rtl_phy *rtlphy = &(rtlpriv->phy); 2058 2063 struct rtl_mac *mac = rtl_mac(rtl_priv(hw)); 2059 - u32 ratr_bitmap = (u32) mac->basic_rates; 2060 - u8 *p_mcsrate = mac->mcs; 2061 - u8 ratr_index = 0; 2062 - u8 curtxbw_40mhz = mac->bw_40; 2063 - u8 curshortgi_40mhz = mac->sgi_40; 2064 - u8 curshortgi_20mhz = mac->sgi_20; 2065 - enum wireless_mode wirelessmode = mac->mode; 2064 + struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); 2065 + struct rtl_sta_info *sta_entry = NULL; 2066 + u32 ratr_bitmap; 2067 + u8 ratr_index; 2068 + u8 curtxbw_40mhz = (sta->bandwidth >= IEEE80211_STA_RX_BW_40) ? 1 : 0; 2069 + u8 curshortgi_40mhz = curtxbw_40mhz && 2070 + (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40) ? 2071 + 1 : 0; 2072 + u8 curshortgi_20mhz = (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20) ? 
2073 + 1 : 0; 2074 + enum wireless_mode wirelessmode = 0; 2066 2075 bool shortgi = false; 2067 2076 u8 rate_mask[5]; 2068 2077 u8 macid = 0; 2069 - u8 mimops = 1; 2078 + u8 mimo_ps = IEEE80211_SMPS_OFF; 2070 2079 2071 - ratr_bitmap |= (p_mcsrate[1] << 20) | (p_mcsrate[0] << 12); 2080 + sta_entry = (struct rtl_sta_info *) sta->drv_priv; 2081 + wirelessmode = sta_entry->wireless_mode; 2082 + if (mac->opmode == NL80211_IFTYPE_STATION || 2083 + mac->opmode == NL80211_IFTYPE_MESH_POINT) 2084 + curtxbw_40mhz = mac->bw_40; 2085 + else if (mac->opmode == NL80211_IFTYPE_AP || 2086 + mac->opmode == NL80211_IFTYPE_ADHOC) 2087 + macid = sta->aid + 1; 2088 + 2089 + if (rtlhal->current_bandtype == BAND_ON_5G) 2090 + ratr_bitmap = sta->supp_rates[1] << 4; 2091 + else 2092 + ratr_bitmap = sta->supp_rates[0]; 2093 + if (mac->opmode == NL80211_IFTYPE_ADHOC) 2094 + ratr_bitmap = 0xfff; 2095 + ratr_bitmap |= (sta->ht_cap.mcs.rx_mask[1] << 20 | 2096 + sta->ht_cap.mcs.rx_mask[0] << 12); 2072 2097 switch (wirelessmode) { 2073 2098 case WIRELESS_MODE_B: 2074 2099 ratr_index = RATR_INX_WIRELESS_B; ··· 2112 2071 break; 2113 2072 case WIRELESS_MODE_G: 2114 2073 ratr_index = RATR_INX_WIRELESS_GB; 2074 + 2115 2075 if (rssi_level == 1) 2116 2076 ratr_bitmap &= 0x00000f00; 2117 2077 else if (rssi_level == 2) ··· 2127 2085 case WIRELESS_MODE_N_24G: 2128 2086 case WIRELESS_MODE_N_5G: 2129 2087 ratr_index = RATR_INX_WIRELESS_NGB; 2130 - if (mimops == 0) { 2088 + 2089 + if (mimo_ps == IEEE80211_SMPS_STATIC) { 2131 2090 if (rssi_level == 1) 2132 2091 ratr_bitmap &= 0x00070000; 2133 2092 else if (rssi_level == 2) ··· 2171 2128 } 2172 2129 } 2173 2130 } 2131 + 2174 2132 if ((curtxbw_40mhz && curshortgi_40mhz) || 2175 2133 (!curtxbw_40mhz && curshortgi_20mhz)) { 2134 + 2176 2135 if (macid == 0) 2177 2136 shortgi = true; 2178 2137 else if (macid == 1) ··· 2183 2138 break; 2184 2139 default: 2185 2140 ratr_index = RATR_INX_WIRELESS_NGB; 2141 + 2186 2142 if (rtlphy->rf_type == RF_1T2R) 2187 2143 
ratr_bitmap &= 0x000ff0ff; 2188 2144 else 2189 2145 ratr_bitmap &= 0x0f0ff0ff; 2190 2146 break; 2191 2147 } 2192 - RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, "ratr_bitmap :%x\n", 2193 - ratr_bitmap); 2194 - *(u32 *)&rate_mask = ((ratr_bitmap & 0x0fffffff) | 2195 - ratr_index << 28); 2148 + sta_entry->ratr_index = ratr_index; 2149 + 2150 + RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, 2151 + "ratr_bitmap :%x\n", ratr_bitmap); 2152 + *(u32 *)&rate_mask = (ratr_bitmap & 0x0fffffff) | 2153 + (ratr_index << 28); 2196 2154 rate_mask[4] = macid | (shortgi ? 0x20 : 0x00) | 0x80; 2197 2155 RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, 2198 2156 "Rate_index:%x, ratr_val:%x, %5phC\n", 2199 2157 ratr_index, ratr_bitmap, rate_mask); 2200 - rtl92c_fill_h2c_cmd(hw, H2C_RA_MASK, 5, rate_mask); 2158 + memcpy(rtlpriv->rate_mask, rate_mask, 5); 2159 + /* rtl92c_fill_h2c_cmd() does USB I/O and will result in a 2160 + * "scheduled while atomic" if called directly */ 2161 + schedule_work(&rtlpriv->works.fill_h2c_cmd); 2162 + 2163 + if (macid != 0) 2164 + sta_entry->ratr_index = ratr_index; 2165 + } 2166 + 2167 + void rtl92cu_update_hal_rate_tbl(struct ieee80211_hw *hw, 2168 + struct ieee80211_sta *sta, 2169 + u8 rssi_level) 2170 + { 2171 + struct rtl_priv *rtlpriv = rtl_priv(hw); 2172 + 2173 + if (rtlpriv->dm.useramask) 2174 + rtl92cu_update_hal_rate_mask(hw, sta, rssi_level); 2175 + else 2176 + rtl92cu_update_hal_rate_table(hw, sta); 2201 2177 } 2202 2178 2203 2179 void rtl92cu_update_channel_access_setting(struct ieee80211_hw *hw)
-4
drivers/net/wireless/rtlwifi/rtl8192cu/hw.h
··· 98 98 u32 add_msr, u32 rm_msr); 99 99 void rtl92cu_get_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val); 100 100 void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val); 101 - void rtl92cu_update_hal_rate_table(struct ieee80211_hw *hw, 102 - struct ieee80211_sta *sta, 103 - u8 rssi_level); 104 - void rtl92cu_update_hal_rate_mask(struct ieee80211_hw *hw, u8 rssi_level); 105 101 106 102 void rtl92cu_update_channel_access_setting(struct ieee80211_hw *hw); 107 103 bool rtl92cu_gpio_radio_on_off_checking(struct ieee80211_hw *hw, u8 * valid);
+17 -1
drivers/net/wireless/rtlwifi/rtl8192cu/mac.c
··· 289 289 macaddr = cam_const_broad; 290 290 entry_id = key_index; 291 291 } else { 292 + if (mac->opmode == NL80211_IFTYPE_AP || 293 + mac->opmode == NL80211_IFTYPE_MESH_POINT) { 294 + entry_id = rtl_cam_get_free_entry(hw, 295 + p_macaddr); 296 + if (entry_id >= TOTAL_CAM_ENTRY) { 297 + RT_TRACE(rtlpriv, COMP_SEC, 298 + DBG_EMERG, 299 + "Can not find free hw security cam entry\n"); 300 + return; 301 + } 302 + } else { 303 + entry_id = CAM_PAIRWISE_KEY_POSITION; 304 + } 305 + 292 306 key_index = PAIRWISE_KEYIDX; 293 - entry_id = CAM_PAIRWISE_KEY_POSITION; 294 307 is_pairwise = true; 295 308 } 296 309 } 297 310 if (rtlpriv->sec.key_len[key_index] == 0) { 298 311 RT_TRACE(rtlpriv, COMP_SEC, DBG_DMESG, 299 312 "delete one entry\n"); 313 + if (mac->opmode == NL80211_IFTYPE_AP || 314 + mac->opmode == NL80211_IFTYPE_MESH_POINT) 315 + rtl_cam_del_entry(hw, p_macaddr); 300 316 rtl_cam_delete_one_entry(hw, p_macaddr, entry_id); 301 317 } else { 302 318 RT_TRACE(rtlpriv, COMP_SEC, DBG_LOUD,
+2 -2
drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
··· 106 106 .update_interrupt_mask = rtl92cu_update_interrupt_mask, 107 107 .get_hw_reg = rtl92cu_get_hw_reg, 108 108 .set_hw_reg = rtl92cu_set_hw_reg, 109 - .update_rate_tbl = rtl92cu_update_hal_rate_table, 110 - .update_rate_mask = rtl92cu_update_hal_rate_mask, 109 + .update_rate_tbl = rtl92cu_update_hal_rate_tbl, 111 110 .fill_tx_desc = rtl92cu_tx_fill_desc, 112 111 .fill_fake_txdesc = rtl92cu_fill_fake_txdesc, 113 112 .fill_tx_cmddesc = rtl92cu_tx_fill_cmddesc, ··· 136 137 .phy_lc_calibrate = _rtl92cu_phy_lc_calibrate, 137 138 .phy_set_bw_mode_callback = rtl92cu_phy_set_bw_mode_callback, 138 139 .dm_dynamic_txpower = rtl92cu_dm_dynamic_txpower, 140 + .fill_h2c_cmd = rtl92c_fill_h2c_cmd, 139 141 }; 140 142 141 143 static struct rtl_mod_params rtl92cu_mod_params = {
+3
drivers/net/wireless/rtlwifi/rtl8192cu/sw.h
··· 49 49 u32 rtl92cu_phy_query_rf_reg(struct ieee80211_hw *hw, 50 50 enum radio_path rfpath, u32 regaddr, u32 bitmask); 51 51 void rtl92cu_phy_set_bw_mode_callback(struct ieee80211_hw *hw); 52 + void rtl92cu_update_hal_rate_tbl(struct ieee80211_hw *hw, 53 + struct ieee80211_sta *sta, 54 + u8 rssi_level); 52 55 53 56 #endif
+13
drivers/net/wireless/rtlwifi/usb.c
··· 824 824 825 825 /* should after adapter start and interrupt enable. */ 826 826 set_hal_stop(rtlhal); 827 + cancel_work_sync(&rtlpriv->works.fill_h2c_cmd); 827 828 /* Enable software */ 828 829 SET_USB_STOP(rtlusb); 829 830 rtl_usb_deinit(hw); ··· 1027 1026 return false; 1028 1027 } 1029 1028 1029 + static void rtl_fill_h2c_cmd_work_callback(struct work_struct *work) 1030 + { 1031 + struct rtl_works *rtlworks = 1032 + container_of(work, struct rtl_works, fill_h2c_cmd); 1033 + struct ieee80211_hw *hw = rtlworks->hw; 1034 + struct rtl_priv *rtlpriv = rtl_priv(hw); 1035 + 1036 + rtlpriv->cfg->ops->fill_h2c_cmd(hw, H2C_RA_MASK, 5, rtlpriv->rate_mask); 1037 + } 1038 + 1030 1039 static struct rtl_intf_ops rtl_usb_ops = { 1031 1040 .adapter_start = rtl_usb_start, 1032 1041 .adapter_stop = rtl_usb_stop, ··· 1068 1057 1069 1058 /* this spin lock must be initialized early */ 1070 1059 spin_lock_init(&rtlpriv->locks.usb_lock); 1060 + INIT_WORK(&rtlpriv->works.fill_h2c_cmd, 1061 + rtl_fill_h2c_cmd_work_callback); 1071 1062 1072 1063 rtlpriv->usb_data_index = 0; 1073 1064 init_completion(&rtlpriv->firmware_loading_complete);
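The rtlwifi/usb.c change defers the H2C firmware command to a work item because the USB I/O can sleep and must not run from atomic context, and it copies the rate mask into driver state first so the worker sees a stable snapshot after the caller's stack frame is gone. A threadless toy model of that copy-then-defer shape (all names invented, no kernel API involved):

```c
#include <assert.h>
#include <string.h>

/* Toy model: "scheduling" a work item just records it in a one-slot
 * queue; a later "worker" pass performs the sleepable I/O.  The caller
 * snapshots the payload into long-lived state before deferring, because
 * the buffer it built may be reused before the worker runs. */
struct toy_priv {
	unsigned char rate_mask[5];  /* long-lived snapshot */
	int work_pending;
	unsigned char sent[5];       /* what the "firmware" received */
	int sent_count;
};

static void toy_schedule_fill_h2c(struct toy_priv *p,
				  const unsigned char *mask, size_t n)
{
	memcpy(p->rate_mask, mask, n);  /* snapshot before deferring */
	p->work_pending = 1;
}

static void toy_run_pending_work(struct toy_priv *p)
{
	if (!p->work_pending)
		return;
	/* A real driver would do the sleepable USB I/O here. */
	memcpy(p->sent, p->rate_mask, sizeof(p->sent));
	p->sent_count++;
	p->work_pending = 0;
}
```

The real driver additionally calls `cancel_work_sync()` on teardown (visible in the `rtl_usb_stop` hunk) so the deferred I/O cannot run against a stopped device; the toy model omits all concurrency.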
+4
drivers/net/wireless/rtlwifi/wifi.h
··· 1736 1736 void (*bt_wifi_media_status_notify) (struct ieee80211_hw *hw, 1737 1737 bool mstate); 1738 1738 void (*bt_coex_off_before_lps) (struct ieee80211_hw *hw); 1739 + void (*fill_h2c_cmd) (struct ieee80211_hw *hw, u8 element_id, 1740 + u32 cmd_len, u8 *p_cmdbuffer); 1739 1741 }; 1740 1742 1741 1743 struct rtl_intf_ops { ··· 1871 1869 struct delayed_work fwevt_wq; 1872 1870 1873 1871 struct work_struct lps_change_work; 1872 + struct work_struct fill_h2c_cmd; 1874 1873 }; 1875 1874 1876 1875 struct rtl_debug { ··· 2051 2048 }; 2052 2049 }; 2053 2050 bool enter_ps; /* true when entering PS */ 2051 + u8 rate_mask[5]; 2054 2052 2055 2053 /*This must be the last item so 2056 2054 that it points to the data allocated
+1 -1
drivers/net/wireless/ti/wl12xx/scan.c
··· 310 310 memcpy(cmd->channels_2, cmd_channels->channels_2, 311 311 sizeof(cmd->channels_2)); 312 312 memcpy(cmd->channels_5, cmd_channels->channels_5, 313 - sizeof(cmd->channels_2)); 313 + sizeof(cmd->channels_5)); 314 314 /* channels_4 are not supported, so no need to copy them */ 315 315 } 316 316
+3 -3
drivers/net/wireless/ti/wl12xx/wl12xx.h
··· 36 36 #define WL127X_IFTYPE_SR_VER 3 37 37 #define WL127X_MAJOR_SR_VER 10 38 38 #define WL127X_SUBTYPE_SR_VER WLCORE_FW_VER_IGNORE 39 - #define WL127X_MINOR_SR_VER 115 39 + #define WL127X_MINOR_SR_VER 133 40 40 /* minimum multi-role FW version for wl127x */ 41 41 #define WL127X_IFTYPE_MR_VER 5 42 42 #define WL127X_MAJOR_MR_VER 7 43 43 #define WL127X_SUBTYPE_MR_VER WLCORE_FW_VER_IGNORE 44 - #define WL127X_MINOR_MR_VER 115 44 + #define WL127X_MINOR_MR_VER 42 45 45 46 46 /* FW chip version for wl128x */ 47 47 #define WL128X_CHIP_VER 7 ··· 49 49 #define WL128X_IFTYPE_SR_VER 3 50 50 #define WL128X_MAJOR_SR_VER 10 51 51 #define WL128X_SUBTYPE_SR_VER WLCORE_FW_VER_IGNORE 52 - #define WL128X_MINOR_SR_VER 115 52 + #define WL128X_MINOR_SR_VER 133 53 53 /* minimum multi-role FW version for wl128x */ 54 54 #define WL128X_IFTYPE_MR_VER 5 55 55 #define WL128X_MAJOR_MR_VER 7
+1 -1
drivers/net/wireless/ti/wl18xx/scan.c
··· 34 34 memcpy(cmd->channels_2, cmd_channels->channels_2, 35 35 sizeof(cmd->channels_2)); 36 36 memcpy(cmd->channels_5, cmd_channels->channels_5, 37 - sizeof(cmd->channels_2)); 37 + sizeof(cmd->channels_5)); 38 38 /* channels_4 are not supported, so no need to copy them */ 39 39 } 40 40
+5 -3
drivers/net/xen-netback/netback.c
··· 778 778 sco->meta_slots_used); 779 779 780 780 RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret); 781 - if (ret && list_empty(&vif->notify_list)) 782 - list_add_tail(&vif->notify_list, &notify); 783 781 784 782 xenvif_notify_tx_completion(vif); 785 783 786 - xenvif_put(vif); 784 + if (ret && list_empty(&vif->notify_list)) 785 + list_add_tail(&vif->notify_list, &notify); 786 + else 787 + xenvif_put(vif); 787 788 npo.meta_cons += sco->meta_slots_used; 788 789 dev_kfree_skb(skb); 789 790 } ··· 792 791 list_for_each_entry_safe(vif, tmp, &notify, notify_list) { 793 792 notify_remote_via_irq(vif->rx_irq); 794 793 list_del_init(&vif->notify_list); 794 + xenvif_put(vif); 795 795 } 796 796 797 797 /* More work to do? */
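In the resolved `xen_netbk_rx_action()` above, a vif that gets queued on the notify list keeps its reference until after `notify_remote_via_irq()` has fired; only vifs not queued are released immediately. A minimal refcount model of that hand-off (illustrative names, not the xen-netback API):

```c
#include <assert.h>

struct toy_vif {
	int refcnt;
	int on_list;   /* queued for a deferred notification? */
};

static void vif_put(struct toy_vif *v) { v->refcnt--; }

/* Per-skb completion step: either drop our reference now, or keep the
 * vif alive by transferring the reference to the notify list. */
static void complete_skb(struct toy_vif *v, int needs_notify)
{
	if (needs_notify && !v->on_list)
		v->on_list = 1;          /* list now owns our reference */
	else
		vif_put(v);
}

/* Deferred notification pass: fire the event, then drop the
 * reference the list was holding. */
static void run_notify_list(struct toy_vif *v)
{
	if (v->on_list) {
		/* notify_remote_via_irq() would go here */
		v->on_list = 0;
		vif_put(v);
	}
}
```

The pre-merge code dropped the reference before the notify loop ran, so the vif could be freed while still on the list; moving the `xenvif_put()` into the notify pass is what closes that window.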
+9 -6
drivers/of/base.c
··· 192 192 struct device_node *of_find_all_nodes(struct device_node *prev) 193 193 { 194 194 struct device_node *np; 195 + unsigned long flags; 195 196 196 - raw_spin_lock(&devtree_lock); 197 + raw_spin_lock_irqsave(&devtree_lock, flags); 197 198 np = prev ? prev->allnext : of_allnodes; 198 199 for (; np != NULL; np = np->allnext) 199 200 if (of_node_get(np)) 200 201 break; 201 202 of_node_put(prev); 202 - raw_spin_unlock(&devtree_lock); 203 + raw_spin_unlock_irqrestore(&devtree_lock, flags); 203 204 return np; 204 205 } 205 206 EXPORT_SYMBOL(of_find_all_nodes); ··· 422 421 struct device_node *prev) 423 422 { 424 423 struct device_node *next; 424 + unsigned long flags; 425 425 426 - raw_spin_lock(&devtree_lock); 426 + raw_spin_lock_irqsave(&devtree_lock, flags); 427 427 next = prev ? prev->sibling : node->child; 428 428 for (; next; next = next->sibling) { 429 429 if (!__of_device_is_available(next)) ··· 433 431 break; 434 432 } 435 433 of_node_put(prev); 436 - raw_spin_unlock(&devtree_lock); 434 + raw_spin_unlock_irqrestore(&devtree_lock, flags); 437 435 return next; 438 436 } 439 437 EXPORT_SYMBOL(of_get_next_available_child); ··· 737 735 struct device_node *of_find_node_by_phandle(phandle handle) 738 736 { 739 737 struct device_node *np; 738 + unsigned long flags; 740 739 741 - raw_spin_lock(&devtree_lock); 740 + raw_spin_lock_irqsave(&devtree_lock, flags); 742 741 for (np = of_allnodes; np; np = np->allnext) 743 742 if (np->phandle == handle) 744 743 break; 745 744 of_node_get(np); 746 - raw_spin_unlock(&devtree_lock); 745 + raw_spin_unlock_irqrestore(&devtree_lock, flags); 747 746 return np; 748 747 } 749 748 EXPORT_SYMBOL(of_find_node_by_phandle);
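The drivers/of change converts `raw_spin_lock` to the `_irqsave` variant, which saves the caller's local interrupt state and restores exactly that state afterwards, so the helpers are safe whether the caller runs with interrupts on or already off. A toy model of the save/restore semantics with a single flag standing in for the CPU's interrupt-enable state (the lock itself is omitted):

```c
#include <assert.h>

/* Toy model of local interrupt state: one flag per "CPU". */
static int irqs_enabled = 1;

typedef unsigned long irqflags_t;

/* Save the current interrupt state and disable interrupts, as the
 * irqsave side of raw_spin_lock_irqsave() does on a real CPU. */
static void toy_irq_save(irqflags_t *flags)
{
	*flags = (irqflags_t)irqs_enabled;
	irqs_enabled = 0;
}

/* Restore whatever state the matching save captured.  Unlike a plain
 * unconditional enable, this leaves interrupts off if they were off to
 * begin with, which is what makes the helpers callable from any
 * context without corrupting the outer section's state. */
static void toy_irq_restore(irqflags_t flags)
{
	irqs_enabled = (int)flags;
}
```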
+36 -9
drivers/pinctrl/sh-pfc/pfc-r8a7779.c
··· 2357 2357 }; 2358 2358 /* - USB0 ------------------------------------------------------------------- */ 2359 2359 static const unsigned int usb0_pins[] = { 2360 - /* OVC */ 2361 - 150, 154, 2360 + /* PENC */ 2361 + 154, 2362 2362 }; 2363 2363 static const unsigned int usb0_mux[] = { 2364 - USB_OVC0_MARK, USB_PENC0_MARK, 2364 + USB_PENC0_MARK, 2365 + }; 2366 + static const unsigned int usb0_ovc_pins[] = { 2367 + /* USB_OVC */ 2368 + 150 2369 + }; 2370 + static const unsigned int usb0_ovc_mux[] = { 2371 + USB_OVC0_MARK, 2365 2372 }; 2366 2373 /* - USB1 ------------------------------------------------------------------- */ 2367 2374 static const unsigned int usb1_pins[] = { 2368 - /* OVC */ 2369 - 152, 155, 2375 + /* PENC */ 2376 + 155, 2370 2377 }; 2371 2378 static const unsigned int usb1_mux[] = { 2372 - USB_OVC1_MARK, USB_PENC1_MARK, 2379 + USB_PENC1_MARK, 2380 + }; 2381 + static const unsigned int usb1_ovc_pins[] = { 2382 + /* USB_OVC */ 2383 + 152, 2384 + }; 2385 + static const unsigned int usb1_ovc_mux[] = { 2386 + USB_OVC1_MARK, 2373 2387 }; 2374 2388 /* - USB2 ------------------------------------------------------------------- */ 2375 2389 static const unsigned int usb2_pins[] = { 2376 - /* OVC, PENC */ 2377 - 125, 156, 2390 + /* PENC */ 2391 + 156, 2378 2392 }; 2379 2393 static const unsigned int usb2_mux[] = { 2380 - USB_OVC2_MARK, USB_PENC2_MARK, 2394 + USB_PENC2_MARK, 2395 + }; 2396 + static const unsigned int usb2_ovc_pins[] = { 2397 + /* USB_OVC */ 2398 + 125, 2399 + }; 2400 + static const unsigned int usb2_ovc_mux[] = { 2401 + USB_OVC2_MARK, 2381 2402 }; 2382 2403 2383 2404 static const struct sh_pfc_pin_group pinmux_groups[] = { ··· 2522 2501 SH_PFC_PIN_GROUP(sdhi3_cd), 2523 2502 SH_PFC_PIN_GROUP(sdhi3_wp), 2524 2503 SH_PFC_PIN_GROUP(usb0), 2504 + SH_PFC_PIN_GROUP(usb0_ovc), 2525 2505 SH_PFC_PIN_GROUP(usb1), 2506 + SH_PFC_PIN_GROUP(usb1_ovc), 2526 2507 SH_PFC_PIN_GROUP(usb2), 2508 + SH_PFC_PIN_GROUP(usb2_ovc), 2527 2509 }; 2528 2510 2529 2511 
static const char * const du0_groups[] = { ··· 2707 2683 2708 2684 static const char * const usb0_groups[] = { 2709 2685 "usb0", 2686 + "usb0_ovc", 2710 2687 }; 2711 2688 2712 2689 static const char * const usb1_groups[] = { 2713 2690 "usb1", 2691 + "usb1_ovc", 2714 2692 }; 2715 2693 2716 2694 static const char * const usb2_groups[] = { 2717 2695 "usb2", 2696 + "usb2_ovc", 2718 2697 }; 2719 2698 2720 2699 static const struct sh_pfc_function pinmux_functions[] = {
+1 -1
drivers/platform/x86/hp-wmi.c
··· 703 703 } 704 704 rfkill_init_sw_state(gps_rfkill, 705 705 hp_wmi_get_sw_state(HPWMI_GPS)); 706 - rfkill_set_hw_state(bluetooth_rfkill, 706 + rfkill_set_hw_state(gps_rfkill, 707 707 hp_wmi_get_hw_state(HPWMI_GPS)); 708 708 err = rfkill_register(gps_rfkill); 709 709 if (err)
+111 -20
drivers/rtc/rtc-at91rm9200.c
··· 25 25 #include <linux/rtc.h> 26 26 #include <linux/bcd.h> 27 27 #include <linux/interrupt.h> 28 + #include <linux/spinlock.h> 28 29 #include <linux/ioctl.h> 29 30 #include <linux/completion.h> 30 31 #include <linux/io.h> ··· 43 42 44 43 #define AT91_RTC_EPOCH 1900UL /* just like arch/arm/common/rtctime.c */ 45 44 45 + struct at91_rtc_config { 46 + bool use_shadow_imr; 47 + }; 48 + 49 + static const struct at91_rtc_config *at91_rtc_config; 46 50 static DECLARE_COMPLETION(at91_rtc_updated); 47 51 static unsigned int at91_alarm_year = AT91_RTC_EPOCH; 48 52 static void __iomem *at91_rtc_regs; 49 53 static int irq; 54 + static DEFINE_SPINLOCK(at91_rtc_lock); 55 + static u32 at91_rtc_shadow_imr; 56 + 57 + static void at91_rtc_write_ier(u32 mask) 58 + { 59 + unsigned long flags; 60 + 61 + spin_lock_irqsave(&at91_rtc_lock, flags); 62 + at91_rtc_shadow_imr |= mask; 63 + at91_rtc_write(AT91_RTC_IER, mask); 64 + spin_unlock_irqrestore(&at91_rtc_lock, flags); 65 + } 66 + 67 + static void at91_rtc_write_idr(u32 mask) 68 + { 69 + unsigned long flags; 70 + 71 + spin_lock_irqsave(&at91_rtc_lock, flags); 72 + at91_rtc_write(AT91_RTC_IDR, mask); 73 + /* 74 + * Register read back (of any RTC-register) needed to make sure 75 + * IDR-register write has reached the peripheral before updating 76 + * shadow mask. 77 + * 78 + * Note that there is still a possibility that the mask is updated 79 + * before interrupts have actually been disabled in hardware. The only 80 + * way to be certain would be to poll the IMR-register, which is is 81 + * the very register we are trying to emulate. The register read back 82 + * is a reasonable heuristic. 
83 + */ 84 + at91_rtc_read(AT91_RTC_SR); 85 + at91_rtc_shadow_imr &= ~mask; 86 + spin_unlock_irqrestore(&at91_rtc_lock, flags); 87 + } 88 + 89 + static u32 at91_rtc_read_imr(void) 90 + { 91 + unsigned long flags; 92 + u32 mask; 93 + 94 + if (at91_rtc_config->use_shadow_imr) { 95 + spin_lock_irqsave(&at91_rtc_lock, flags); 96 + mask = at91_rtc_shadow_imr; 97 + spin_unlock_irqrestore(&at91_rtc_lock, flags); 98 + } else { 99 + mask = at91_rtc_read(AT91_RTC_IMR); 100 + } 101 + 102 + return mask; 103 + } 50 104 51 105 /* 52 106 * Decode time/date into rtc_time structure ··· 166 110 cr = at91_rtc_read(AT91_RTC_CR); 167 111 at91_rtc_write(AT91_RTC_CR, cr | AT91_RTC_UPDCAL | AT91_RTC_UPDTIM); 168 112 169 - at91_rtc_write(AT91_RTC_IER, AT91_RTC_ACKUPD); 113 + at91_rtc_write_ier(AT91_RTC_ACKUPD); 170 114 wait_for_completion(&at91_rtc_updated); /* wait for ACKUPD interrupt */ 171 - at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD); 115 + at91_rtc_write_idr(AT91_RTC_ACKUPD); 172 116 173 117 at91_rtc_write(AT91_RTC_TIMR, 174 118 bin2bcd(tm->tm_sec) << 0 ··· 200 144 tm->tm_yday = rtc_year_days(tm->tm_mday, tm->tm_mon, tm->tm_year); 201 145 tm->tm_year = at91_alarm_year - 1900; 202 146 203 - alrm->enabled = (at91_rtc_read(AT91_RTC_IMR) & AT91_RTC_ALARM) 147 + alrm->enabled = (at91_rtc_read_imr() & AT91_RTC_ALARM) 204 148 ? 
1 : 0; 205 149 206 150 dev_dbg(dev, "%s(): %4d-%02d-%02d %02d:%02d:%02d\n", __func__, ··· 225 169 tm.tm_min = alrm->time.tm_min; 226 170 tm.tm_sec = alrm->time.tm_sec; 227 171 228 - at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ALARM); 172 + at91_rtc_write_idr(AT91_RTC_ALARM); 229 173 at91_rtc_write(AT91_RTC_TIMALR, 230 174 bin2bcd(tm.tm_sec) << 0 231 175 | bin2bcd(tm.tm_min) << 8 ··· 238 182 239 183 if (alrm->enabled) { 240 184 at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_ALARM); 241 - at91_rtc_write(AT91_RTC_IER, AT91_RTC_ALARM); 185 + at91_rtc_write_ier(AT91_RTC_ALARM); 242 186 } 243 187 244 188 dev_dbg(dev, "%s(): %4d-%02d-%02d %02d:%02d:%02d\n", __func__, ··· 254 198 255 199 if (enabled) { 256 200 at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_ALARM); 257 - at91_rtc_write(AT91_RTC_IER, AT91_RTC_ALARM); 201 + at91_rtc_write_ier(AT91_RTC_ALARM); 258 202 } else 259 - at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ALARM); 203 + at91_rtc_write_idr(AT91_RTC_ALARM); 260 204 261 205 return 0; 262 206 } ··· 265 209 */ 266 210 static int at91_rtc_proc(struct device *dev, struct seq_file *seq) 267 211 { 268 - unsigned long imr = at91_rtc_read(AT91_RTC_IMR); 212 + unsigned long imr = at91_rtc_read_imr(); 269 213 270 214 seq_printf(seq, "update_IRQ\t: %s\n", 271 215 (imr & AT91_RTC_ACKUPD) ? "yes" : "no"); ··· 285 229 unsigned int rtsr; 286 230 unsigned long events = 0; 287 231 288 - rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read(AT91_RTC_IMR); 232 + rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read_imr(); 289 233 if (rtsr) { /* this interrupt is shared! Is it ours? 
*/ 290 234 if (rtsr & AT91_RTC_ALARM) 291 235 events |= (RTC_AF | RTC_IRQF); ··· 306 250 return IRQ_NONE; /* not handled */ 307 251 } 308 252 253 + static const struct at91_rtc_config at91rm9200_config = { 254 + }; 255 + 256 + static const struct at91_rtc_config at91sam9x5_config = { 257 + .use_shadow_imr = true, 258 + }; 259 + 260 + #ifdef CONFIG_OF 261 + static const struct of_device_id at91_rtc_dt_ids[] = { 262 + { 263 + .compatible = "atmel,at91rm9200-rtc", 264 + .data = &at91rm9200_config, 265 + }, { 266 + .compatible = "atmel,at91sam9x5-rtc", 267 + .data = &at91sam9x5_config, 268 + }, { 269 + /* sentinel */ 270 + } 271 + }; 272 + MODULE_DEVICE_TABLE(of, at91_rtc_dt_ids); 273 + #endif 274 + 275 + static const struct at91_rtc_config * 276 + at91_rtc_get_config(struct platform_device *pdev) 277 + { 278 + const struct of_device_id *match; 279 + 280 + if (pdev->dev.of_node) { 281 + match = of_match_node(at91_rtc_dt_ids, pdev->dev.of_node); 282 + if (!match) 283 + return NULL; 284 + return (const struct at91_rtc_config *)match->data; 285 + } 286 + 287 + return &at91rm9200_config; 288 + } 289 + 309 290 static const struct rtc_class_ops at91_rtc_ops = { 310 291 .read_time = at91_rtc_readtime, 311 292 .set_time = at91_rtc_settime, ··· 360 267 struct rtc_device *rtc; 361 268 struct resource *regs; 362 269 int ret = 0; 270 + 271 + at91_rtc_config = at91_rtc_get_config(pdev); 272 + if (!at91_rtc_config) 273 + return -ENODEV; 363 274 364 275 regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 365 276 if (!regs) { ··· 387 290 at91_rtc_write(AT91_RTC_MR, 0); /* 24 hour mode */ 388 291 389 292 /* Disable all interrupts */ 390 - at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD | AT91_RTC_ALARM | 293 + at91_rtc_write_idr(AT91_RTC_ACKUPD | AT91_RTC_ALARM | 391 294 AT91_RTC_SECEV | AT91_RTC_TIMEV | 392 295 AT91_RTC_CALEV); 393 296 ··· 432 335 struct rtc_device *rtc = platform_get_drvdata(pdev); 433 336 434 337 /* Disable all interrupts */ 435 - at91_rtc_write(AT91_RTC_IDR, 
AT91_RTC_ACKUPD | AT91_RTC_ALARM | 338 + at91_rtc_write_idr(AT91_RTC_ACKUPD | AT91_RTC_ALARM | 436 339 AT91_RTC_SECEV | AT91_RTC_TIMEV | 437 340 AT91_RTC_CALEV); 438 341 free_irq(irq, pdev); ··· 455 358 /* this IRQ is shared with DBGU and other hardware which isn't 456 359 * necessarily doing PM like we are... 457 360 */ 458 - at91_rtc_imr = at91_rtc_read(AT91_RTC_IMR) 361 + at91_rtc_imr = at91_rtc_read_imr() 459 362 & (AT91_RTC_ALARM|AT91_RTC_SECEV); 460 363 if (at91_rtc_imr) { 461 364 if (device_may_wakeup(dev)) 462 365 enable_irq_wake(irq); 463 366 else 464 - at91_rtc_write(AT91_RTC_IDR, at91_rtc_imr); 367 + at91_rtc_write_idr(at91_rtc_imr); 465 368 } 466 369 return 0; 467 370 } ··· 472 375 if (device_may_wakeup(dev)) 473 376 disable_irq_wake(irq); 474 377 else 475 - at91_rtc_write(AT91_RTC_IER, at91_rtc_imr); 378 + at91_rtc_write_ier(at91_rtc_imr); 476 379 } 477 380 return 0; 478 381 } 479 382 #endif 480 383 481 384 static SIMPLE_DEV_PM_OPS(at91_rtc_pm_ops, at91_rtc_suspend, at91_rtc_resume); 482 - 483 - static const struct of_device_id at91_rtc_dt_ids[] = { 484 - { .compatible = "atmel,at91rm9200-rtc" }, 485 - { /* sentinel */ } 486 - }; 487 - MODULE_DEVICE_TABLE(of, at91_rtc_dt_ids); 488 385 489 386 static struct platform_driver at91_rtc_driver = { 490 387 .remove = __exit_p(at91_rtc_remove),
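The at91sam9x5 RTC exposes write-only IER/IDR enable/disable registers with no readable IMR, so the driver above emulates the mask in software behind `at91_rtc_read_imr()`. The core bookkeeping can be modelled in a few lines (registers replaced by comments; the real driver also takes a spinlock and does a read-back barrier in the IDR path, as its comment explains):

```c
#include <assert.h>

#define RTC_ACKUPD 0x01u
#define RTC_ALARM  0x02u

static unsigned int shadow_imr;   /* software copy of the enabled mask */

static void rtc_write_ier(unsigned int mask)
{
	shadow_imr |= mask;       /* remember what we enabled */
	/* hardware IER write would go here */
}

static void rtc_write_idr(unsigned int mask)
{
	/* hardware IDR write plus a register read-back would go here */
	shadow_imr &= ~mask;
}

static unsigned int rtc_read_imr(void)
{
	return shadow_imr;        /* IMR is not readable on at91sam9x5 */
}
```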
+3 -1
drivers/rtc/rtc-cmos.c
··· 854 854 } 855 855 856 856 spin_lock_irq(&rtc_lock); 857 + if (device_may_wakeup(dev)) 858 + hpet_rtc_timer_init(); 859 + 857 860 do { 858 861 CMOS_WRITE(tmp, RTC_CONTROL); 859 862 hpet_set_rtc_irq_bit(tmp & RTC_IRQMASK); ··· 872 869 rtc_update_irq(cmos->rtc, 1, mask); 873 870 tmp &= ~RTC_AIE; 874 871 hpet_mask_rtc_irq_bit(RTC_AIE); 875 - hpet_rtc_timer_init(); 876 872 } while (mask & RTC_AIE); 877 873 spin_unlock_irq(&rtc_lock); 878 874 }
+2 -1
drivers/rtc/rtc-tps6586x.c
··· 273 273 return ret; 274 274 } 275 275 276 + device_init_wakeup(&pdev->dev, 1); 277 + 276 278 platform_set_drvdata(pdev, rtc); 277 279 rtc->rtc = devm_rtc_device_register(&pdev->dev, dev_name(&pdev->dev), 278 280 &tps6586x_rtc_ops, THIS_MODULE); ··· 294 292 goto fail_rtc_register; 295 293 } 296 294 disable_irq(rtc->irq); 297 - device_set_wakeup_capable(&pdev->dev, 1); 298 295 return 0; 299 296 300 297 fail_rtc_register:
+1
drivers/rtc/rtc-twl.c
··· 524 524 } 525 525 526 526 platform_set_drvdata(pdev, rtc); 527 + device_init_wakeup(&pdev->dev, 1); 527 528 return 0; 528 529 529 530 out2:
+5 -1
drivers/s390/net/netiucv.c
··· 2040 2040 netiucv_setup_netdevice); 2041 2041 if (!dev) 2042 2042 return NULL; 2043 + rtnl_lock(); 2043 2044 if (dev_alloc_name(dev, dev->name) < 0) 2044 2045 goto out_netdev; 2045 2046 ··· 2062 2061 out_fsm: 2063 2062 kfree_fsm(privptr->fsm); 2064 2063 out_netdev: 2064 + rtnl_unlock(); 2065 2065 free_netdev(dev); 2066 2066 return NULL; 2067 2067 } ··· 2102 2100 2103 2101 rc = netiucv_register_device(dev); 2104 2102 if (rc) { 2103 + rtnl_unlock(); 2105 2104 IUCV_DBF_TEXT_(setup, 2, 2106 2105 "ret %d from netiucv_register_device\n", rc); 2107 2106 goto out_free_ndev; ··· 2112 2109 priv = netdev_priv(dev); 2113 2110 SET_NETDEV_DEV(dev, priv->dev); 2114 2111 2115 - rc = register_netdev(dev); 2112 + rc = register_netdevice(dev); 2113 + rtnl_unlock(); 2116 2114 if (rc) 2117 2115 goto out_unreg; 2118 2116
+1 -1
drivers/scsi/bfa/bfad_debugfs.c
··· 186 186 file->f_pos += offset; 187 187 break; 188 188 case 2: 189 - file->f_pos = debug->buffer_len - offset; 189 + file->f_pos = debug->buffer_len + offset; 190 190 break; 191 191 default: 192 192 return -EINVAL;
+1 -1
drivers/scsi/fnic/fnic_debugfs.c
··· 174 174 pos = file->f_pos + offset; 175 175 break; 176 176 case 2: 177 - pos = fnic_dbg_prt->buffer_len - offset; 177 + pos = fnic_dbg_prt->buffer_len + offset; 178 178 } 179 179 return (pos < 0 || pos > fnic_dbg_prt->buffer_len) ? 180 180 -EINVAL : (file->f_pos = pos);
+1 -1
drivers/scsi/lpfc/lpfc_debugfs.c
··· 1178 1178 pos = file->f_pos + off; 1179 1179 break; 1180 1180 case 2: 1181 - pos = debug->len - off; 1181 + pos = debug->len + off; 1182 1182 } 1183 1183 return (pos < 0 || pos > debug->len) ? -EINVAL : (file->f_pos = pos); 1184 1184 }
+1 -1
drivers/spi/spi-sh-hspi.c
··· 89 89 if ((mask & hspi_read(hspi, SPSR)) == val) 90 90 return 0; 91 91 92 - msleep(20); 92 + udelay(10); 93 93 } 94 94 95 95 dev_err(hspi->dev, "timeout\n");
+2 -1
drivers/spi/spi-topcliff-pch.c
··· 1487 1487 return 0; 1488 1488 1489 1489 err_spi_register_master: 1490 - free_irq(board_dat->pdev->irq, board_dat); 1490 + free_irq(board_dat->pdev->irq, data); 1491 1491 err_request_irq: 1492 1492 pch_spi_free_resources(board_dat, data); 1493 1493 err_spi_get_resources: ··· 1667 1667 pd_dev = platform_device_alloc("pch-spi", i); 1668 1668 if (!pd_dev) { 1669 1669 dev_err(&pdev->dev, "platform_device_alloc failed\n"); 1670 + retval = -ENOMEM; 1670 1671 goto err_platform_device; 1671 1672 } 1672 1673 pd_dev_save->pd_save[i] = pd_dev;
+35 -39
drivers/spi/spi-xilinx.c
··· 267 267 { 268 268 struct xilinx_spi *xspi = spi_master_get_devdata(spi->master); 269 269 u32 ipif_ier; 270 - u16 cr; 271 270 272 271 /* We get here with transmitter inhibited */ 273 272 ··· 275 276 xspi->remaining_bytes = t->len; 276 277 INIT_COMPLETION(xspi->done); 277 278 278 - xilinx_spi_fill_tx_fifo(xspi); 279 279 280 280 /* Enable the transmit empty interrupt, which we use to determine 281 281 * progress on the transmission. ··· 283 285 xspi->write_fn(ipif_ier | XSPI_INTR_TX_EMPTY, 284 286 xspi->regs + XIPIF_V123B_IIER_OFFSET); 285 287 286 - /* Start the transfer by not inhibiting the transmitter any longer */ 287 - cr = xspi->read_fn(xspi->regs + XSPI_CR_OFFSET) & 288 - ~XSPI_CR_TRANS_INHIBIT; 289 - xspi->write_fn(cr, xspi->regs + XSPI_CR_OFFSET); 288 + for (;;) { 289 + u16 cr; 290 + u8 sr; 290 291 291 - wait_for_completion(&xspi->done); 292 + xilinx_spi_fill_tx_fifo(xspi); 293 + 294 + /* Start the transfer by not inhibiting the transmitter any 295 + * longer 296 + */ 297 + cr = xspi->read_fn(xspi->regs + XSPI_CR_OFFSET) & 298 + ~XSPI_CR_TRANS_INHIBIT; 299 + xspi->write_fn(cr, xspi->regs + XSPI_CR_OFFSET); 300 + 301 + wait_for_completion(&xspi->done); 302 + 303 + /* A transmit has just completed. Process received data and 304 + * check for more data to transmit. Always inhibit the 305 + * transmitter while the Isr refills the transmit register/FIFO, 306 + * or make sure it is stopped if we're done. 
307 + */ 308 + cr = xspi->read_fn(xspi->regs + XSPI_CR_OFFSET); 309 + xspi->write_fn(cr | XSPI_CR_TRANS_INHIBIT, 310 + xspi->regs + XSPI_CR_OFFSET); 311 + 312 + /* Read out all the data from the Rx FIFO */ 313 + sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET); 314 + while ((sr & XSPI_SR_RX_EMPTY_MASK) == 0) { 315 + xspi->rx_fn(xspi); 316 + sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET); 317 + } 318 + 319 + /* See if there is more data to send */ 320 + if (!xspi->remaining_bytes > 0) 321 + break; 322 + } 292 323 293 324 /* Disable the transmit empty interrupt */ 294 325 xspi->write_fn(ipif_ier, xspi->regs + XIPIF_V123B_IIER_OFFSET); ··· 341 314 xspi->write_fn(ipif_isr, xspi->regs + XIPIF_V123B_IISR_OFFSET); 342 315 343 316 if (ipif_isr & XSPI_INTR_TX_EMPTY) { /* Transmission completed */ 344 - u16 cr; 345 - u8 sr; 346 - 347 - /* A transmit has just completed. Process received data and 348 - * check for more data to transmit. Always inhibit the 349 - * transmitter while the Isr refills the transmit register/FIFO, 350 - * or make sure it is stopped if we're done. 351 - */ 352 - cr = xspi->read_fn(xspi->regs + XSPI_CR_OFFSET); 353 - xspi->write_fn(cr | XSPI_CR_TRANS_INHIBIT, 354 - xspi->regs + XSPI_CR_OFFSET); 355 - 356 - /* Read out all the data from the Rx FIFO */ 357 - sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET); 358 - while ((sr & XSPI_SR_RX_EMPTY_MASK) == 0) { 359 - xspi->rx_fn(xspi); 360 - sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET); 361 - } 362 - 363 - /* See if there is more data to send */ 364 - if (xspi->remaining_bytes > 0) { 365 - xilinx_spi_fill_tx_fifo(xspi); 366 - /* Start the transfer by not inhibiting the 367 - * transmitter any longer 368 - */ 369 - xspi->write_fn(cr, xspi->regs + XSPI_CR_OFFSET); 370 - } else { 371 - /* No more data to send. 372 - * Indicate the transfer is completed. 373 - */ 374 - complete(&xspi->done); 375 - } 317 + complete(&xspi->done); 376 318 } 377 319 378 320 return IRQ_HANDLED;
+2 -1
drivers/usb/chipidea/core.c
··· 276 276 277 277 ci_role_stop(ci); 278 278 ci_role_start(ci, role); 279 - enable_irq(ci->irq); 280 279 } 280 + 281 + enable_irq(ci->irq); 281 282 } 282 283 283 284 static irqreturn_t ci_irq(int irq, void *data)
+8 -5
drivers/usb/chipidea/udc.c
··· 1678 1678 1679 1679 ci->gadget.ep0 = &ci->ep0in->ep; 1680 1680 1681 - if (ci->global_phy) 1681 + if (ci->global_phy) { 1682 1682 ci->transceiver = usb_get_phy(USB_PHY_TYPE_USB2); 1683 + if (IS_ERR(ci->transceiver)) 1684 + ci->transceiver = NULL; 1685 + } 1683 1686 1684 1687 if (ci->platdata->flags & CI13XXX_REQUIRE_TRANSCEIVER) { 1685 1688 if (ci->transceiver == NULL) { ··· 1697 1694 goto put_transceiver; 1698 1695 } 1699 1696 1700 - if (!IS_ERR_OR_NULL(ci->transceiver)) { 1697 + if (ci->transceiver) { 1701 1698 retval = otg_set_peripheral(ci->transceiver->otg, 1702 1699 &ci->gadget); 1703 1700 if (retval) ··· 1714 1711 return retval; 1715 1712 1716 1713 remove_trans: 1717 - if (!IS_ERR_OR_NULL(ci->transceiver)) { 1714 + if (ci->transceiver) { 1718 1715 otg_set_peripheral(ci->transceiver->otg, NULL); 1719 1716 if (ci->global_phy) 1720 1717 usb_put_phy(ci->transceiver); ··· 1722 1719 1723 1720 dev_err(dev, "error = %i\n", retval); 1724 1721 put_transceiver: 1725 - if (!IS_ERR_OR_NULL(ci->transceiver) && ci->global_phy) 1722 + if (ci->transceiver && ci->global_phy) 1726 1723 usb_put_phy(ci->transceiver); 1727 1724 destroy_eps: 1728 1725 destroy_eps(ci); ··· 1750 1747 dma_pool_destroy(ci->td_pool); 1751 1748 dma_pool_destroy(ci->qh_pool); 1752 1749 1753 - if (!IS_ERR_OR_NULL(ci->transceiver)) { 1750 + if (ci->transceiver) { 1754 1751 otg_set_peripheral(ci->transceiver->otg, NULL); 1755 1752 if (ci->global_phy) 1756 1753 usb_put_phy(ci->transceiver);
+4 -4
drivers/usb/serial/f81232.c
··· 165 165 /* FIXME - Stubbed out for now */ 166 166 167 167 /* Don't change anything if nothing has changed */ 168 - if (!tty_termios_hw_change(&tty->termios, old_termios)) 168 + if (old_termios && !tty_termios_hw_change(&tty->termios, old_termios)) 169 169 return; 170 170 171 171 /* Do the real work here... */ 172 - tty_termios_copy_hw(&tty->termios, old_termios); 172 + if (old_termios) 173 + tty_termios_copy_hw(&tty->termios, old_termios); 173 174 } 174 175 175 176 static int f81232_tiocmget(struct tty_struct *tty) ··· 188 187 189 188 static int f81232_open(struct tty_struct *tty, struct usb_serial_port *port) 190 189 { 191 - struct ktermios tmp_termios; 192 190 int result; 193 191 194 192 /* Setup termios */ 195 193 if (tty) 196 - f81232_set_termios(tty, port, &tmp_termios); 194 + f81232_set_termios(tty, port, NULL); 197 195 198 196 result = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL); 199 197 if (result) {
+5 -5
drivers/usb/serial/pl2303.c
··· 284 284 serial settings even to the same values as before. Thus 285 285 we actually need to filter in this specific case */ 286 286 287 - if (!tty_termios_hw_change(&tty->termios, old_termios)) 287 + if (old_termios && !tty_termios_hw_change(&tty->termios, old_termios)) 288 288 return; 289 289 290 290 cflag = tty->termios.c_cflag; ··· 293 293 if (!buf) { 294 294 dev_err(&port->dev, "%s - out of memory.\n", __func__); 295 295 /* Report back no change occurred */ 296 - tty->termios = *old_termios; 296 + if (old_termios) 297 + tty->termios = *old_termios; 297 298 return; 298 299 } 299 300 ··· 434 433 control = priv->line_control; 435 434 if ((cflag & CBAUD) == B0) 436 435 priv->line_control &= ~(CONTROL_DTR | CONTROL_RTS); 437 - else if ((old_termios->c_cflag & CBAUD) == B0) 436 + else if (old_termios && (old_termios->c_cflag & CBAUD) == B0) 438 437 priv->line_control |= (CONTROL_DTR | CONTROL_RTS); 439 438 if (control != priv->line_control) { 440 439 control = priv->line_control; ··· 493 492 494 493 static int pl2303_open(struct tty_struct *tty, struct usb_serial_port *port) 495 494 { 496 - struct ktermios tmp_termios; 497 495 struct usb_serial *serial = port->serial; 498 496 struct pl2303_serial_private *spriv = usb_get_serial_data(serial); 499 497 int result; ··· 508 508 509 509 /* Setup termios */ 510 510 if (tty) 511 - pl2303_set_termios(tty, port, &tmp_termios); 511 + pl2303_set_termios(tty, port, NULL); 512 512 513 513 result = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL); 514 514 if (result) {
+4 -6
drivers/usb/serial/spcp8x5.c
··· 291 291 struct spcp8x5_private *priv = usb_get_serial_port_data(port); 292 292 unsigned long flags; 293 293 unsigned int cflag = tty->termios.c_cflag; 294 - unsigned int old_cflag = old_termios->c_cflag; 295 294 unsigned short uartdata; 296 295 unsigned char buf[2] = {0, 0}; 297 296 int baud; ··· 298 299 u8 control; 299 300 300 301 /* check that they really want us to change something */ 301 - if (!tty_termios_hw_change(&tty->termios, old_termios)) 302 + if (old_termios && !tty_termios_hw_change(&tty->termios, old_termios)) 302 303 return; 303 304 304 305 /* set DTR/RTS active */ 305 306 spin_lock_irqsave(&priv->lock, flags); 306 307 control = priv->line_control; 307 - if ((old_cflag & CBAUD) == B0) { 308 + if (old_termios && (old_termios->c_cflag & CBAUD) == B0) { 308 309 priv->line_control |= MCR_DTR; 309 - if (!(old_cflag & CRTSCTS)) 310 + if (!(old_termios->c_cflag & CRTSCTS)) 310 311 priv->line_control |= MCR_RTS; 311 312 } 312 313 if (control != priv->line_control) { ··· 393 394 394 395 static int spcp8x5_open(struct tty_struct *tty, struct usb_serial_port *port) 395 396 { 396 - struct ktermios tmp_termios; 397 397 struct usb_serial *serial = port->serial; 398 398 struct spcp8x5_private *priv = usb_get_serial_port_data(port); 399 399 int ret; ··· 409 411 spcp8x5_set_ctrl_line(port, priv->line_control); 410 412 411 413 if (tty) 412 - spcp8x5_set_termios(tty, port, &tmp_termios); 414 + spcp8x5_set_termios(tty, port, NULL); 413 415 414 416 port->port.drain_delay = 256; 415 417
+1 -1
drivers/vfio/vfio.c
··· 1360 1360 */ 1361 1361 static char *vfio_devnode(struct device *dev, umode_t *mode) 1362 1362 { 1363 - if (MINOR(dev->devt) == 0) 1363 + if (mode && (MINOR(dev->devt) == 0)) 1364 1364 *mode = S_IRUGO | S_IWUGO; 1365 1365 1366 1366 return kasprintf(GFP_KERNEL, "vfio/%s", dev_name(dev));
+13 -16
drivers/vhost/net.c
··· 155 155 156 156 static void vhost_net_clear_ubuf_info(struct vhost_net *n) 157 157 { 158 - 159 - bool zcopy; 160 158 int i; 161 159 162 - for (i = 0; i < n->dev.nvqs; ++i) { 163 - zcopy = vhost_net_zcopy_mask & (0x1 << i); 164 - if (zcopy) 165 - kfree(n->vqs[i].ubuf_info); 160 + for (i = 0; i < VHOST_NET_VQ_MAX; ++i) { 161 + kfree(n->vqs[i].ubuf_info); 162 + n->vqs[i].ubuf_info = NULL; 166 163 } 167 164 } 168 165 ··· 168 171 bool zcopy; 169 172 int i; 170 173 171 - for (i = 0; i < n->dev.nvqs; ++i) { 174 + for (i = 0; i < VHOST_NET_VQ_MAX; ++i) { 172 175 zcopy = vhost_net_zcopy_mask & (0x1 << i); 173 176 if (!zcopy) 174 177 continue; ··· 180 183 return 0; 181 184 182 185 err: 183 - while (i--) { 184 - zcopy = vhost_net_zcopy_mask & (0x1 << i); 185 - if (!zcopy) 186 - continue; 187 - kfree(n->vqs[i].ubuf_info); 188 - } 186 + vhost_net_clear_ubuf_info(n); 189 187 return -ENOMEM; 190 188 } 191 189 ··· 188 196 { 189 197 int i; 190 198 199 + vhost_net_clear_ubuf_info(n); 200 + 191 201 for (i = 0; i < VHOST_NET_VQ_MAX; i++) { 192 202 n->vqs[i].done_idx = 0; 193 203 n->vqs[i].upend_idx = 0; 194 204 n->vqs[i].ubufs = NULL; 195 - kfree(n->vqs[i].ubuf_info); 196 - n->vqs[i].ubuf_info = NULL; 197 205 n->vqs[i].vhost_hlen = 0; 198 206 n->vqs[i].sock_hlen = 0; 199 207 } ··· 428 436 kref_get(&ubufs->kref); 429 437 } 430 438 nvq->upend_idx = (nvq->upend_idx + 1) % UIO_MAXIOV; 431 - } 439 + } else 440 + msg.msg_control = NULL; 432 441 /* TODO: Check specific error and bomb out unless ENOBUFS? */ 433 442 err = sock->ops->sendmsg(NULL, sock, &msg, len); 434 443 if (unlikely(err < 0)) { ··· 1046 1053 int r; 1047 1054 1048 1055 mutex_lock(&n->dev.mutex); 1056 + if (vhost_dev_has_owner(&n->dev)) { 1057 + r = -EBUSY; 1058 + goto out; 1059 + } 1049 1060 r = vhost_net_set_ubuf_info(n); 1050 1061 if (r) 1051 1062 goto out;
+7 -1
drivers/vhost/vhost.c
··· 344 344 } 345 345 346 346 /* Caller should have device mutex */ 347 + bool vhost_dev_has_owner(struct vhost_dev *dev) 348 + { 349 + return dev->mm; 350 + } 351 + 352 + /* Caller should have device mutex */ 347 353 long vhost_dev_set_owner(struct vhost_dev *dev) 348 354 { 349 355 struct task_struct *worker; 350 356 int err; 351 357 352 358 /* Is there an owner already? */ 353 - if (dev->mm) { 359 + if (vhost_dev_has_owner(dev)) { 354 360 err = -EBUSY; 355 361 goto err_mm; 356 362 }
+1
drivers/vhost/vhost.h
··· 133 133 134 134 long vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs, int nvqs); 135 135 long vhost_dev_set_owner(struct vhost_dev *dev); 136 + bool vhost_dev_has_owner(struct vhost_dev *dev); 136 137 long vhost_dev_check_owner(struct vhost_dev *); 137 138 struct vhost_memory *vhost_dev_reset_owner_prepare(void); 138 139 void vhost_dev_reset_owner(struct vhost_dev *, struct vhost_memory *);
+2 -2
drivers/xen/tmem.c
··· 379 379 #ifdef CONFIG_FRONTSWAP 380 380 if (tmem_enabled && frontswap) { 381 381 char *s = ""; 382 - struct frontswap_ops *old_ops = 383 - frontswap_register_ops(&tmem_frontswap_ops); 382 + struct frontswap_ops *old_ops; 384 383 385 384 tmem_frontswap_poolid = -1; 385 + old_ops = frontswap_register_ops(&tmem_frontswap_ops); 386 386 if (IS_ERR(old_ops) || old_ops) { 387 387 if (IS_ERR(old_ops)) 388 388 return PTR_ERR(old_ops);
+16 -20
fs/aio.c
··· 141 141 for (i = 0; i < ctx->nr_pages; i++) 142 142 put_page(ctx->ring_pages[i]); 143 143 144 - if (ctx->mmap_size) 145 - vm_munmap(ctx->mmap_base, ctx->mmap_size); 146 - 147 144 if (ctx->ring_pages && ctx->ring_pages != ctx->internal_pages) 148 145 kfree(ctx->ring_pages); 149 146 } ··· 319 322 320 323 aio_free_ring(ctx); 321 324 322 - spin_lock(&aio_nr_lock); 323 - BUG_ON(aio_nr - ctx->max_reqs > aio_nr); 324 - aio_nr -= ctx->max_reqs; 325 - spin_unlock(&aio_nr_lock); 326 - 327 325 pr_debug("freeing %p\n", ctx); 328 326 329 327 /* ··· 427 435 { 428 436 if (!atomic_xchg(&ctx->dead, 1)) { 429 437 hlist_del_rcu(&ctx->list); 430 - /* Between hlist_del_rcu() and dropping the initial ref */ 431 - synchronize_rcu(); 432 438 433 439 /* 434 - * We can't punt to workqueue here because put_ioctx() -> 435 - * free_ioctx() will unmap the ringbuffer, and that has to be 436 - * done in the original process's context. kill_ioctx_rcu/work() 437 - * exist for exit_aio(), as in that path free_ioctx() won't do 438 - * the unmap. 440 + * It'd be more correct to do this in free_ioctx(), after all 441 + * the outstanding kiocbs have finished - but by then io_destroy 442 + * has already returned, so io_setup() could potentially return 443 + * -EAGAIN with no ioctxs actually in use (as far as userspace 444 + * could tell). 439 445 */ 440 - kill_ioctx_work(&ctx->rcu_work); 446 + spin_lock(&aio_nr_lock); 447 + BUG_ON(aio_nr - ctx->max_reqs > aio_nr); 448 + aio_nr -= ctx->max_reqs; 449 + spin_unlock(&aio_nr_lock); 450 + 451 + if (ctx->mmap_size) 452 + vm_munmap(ctx->mmap_base, ctx->mmap_size); 453 + 454 + /* Between hlist_del_rcu() and dropping the initial ref */ 455 + call_rcu(&ctx->rcu_head, kill_ioctx_rcu); 441 456 } 442 457 } 443 458 ··· 494 495 */ 495 496 ctx->mmap_size = 0; 496 497 497 - if (!atomic_xchg(&ctx->dead, 1)) { 498 - hlist_del_rcu(&ctx->list); 499 - call_rcu(&ctx->rcu_head, kill_ioctx_rcu); 500 - } 498 + kill_ioctx(ctx); 501 499 } 502 500 } 503 501
+5 -5
fs/btrfs/disk-io.c
··· 2859 2859 btrfs_free_qgroup_config(fs_info); 2860 2860 fail_trans_kthread: 2861 2861 kthread_stop(fs_info->transaction_kthread); 2862 - del_fs_roots(fs_info); 2863 2862 btrfs_cleanup_transaction(fs_info->tree_root); 2863 + del_fs_roots(fs_info); 2864 2864 fail_cleaner: 2865 2865 kthread_stop(fs_info->cleaner_kthread); 2866 2866 ··· 3512 3512 percpu_counter_sum(&fs_info->delalloc_bytes)); 3513 3513 } 3514 3514 3515 - free_root_pointers(fs_info, 1); 3516 - 3517 3515 btrfs_free_block_groups(fs_info); 3516 + 3517 + btrfs_stop_all_workers(fs_info); 3518 3518 3519 3519 del_fs_roots(fs_info); 3520 3520 3521 - iput(fs_info->btree_inode); 3521 + free_root_pointers(fs_info, 1); 3522 3522 3523 - btrfs_stop_all_workers(fs_info); 3523 + iput(fs_info->btree_inode); 3524 3524 3525 3525 #ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY 3526 3526 if (btrfs_test_opt(root, CHECK_INTEGRITY))
+3
fs/btrfs/inode.c
··· 8012 8012 { 8013 8013 struct btrfs_root *root = BTRFS_I(inode)->root; 8014 8014 8015 + if (root == NULL) 8016 + return 1; 8017 + 8015 8018 /* the snap/subvol tree is on deleting */ 8016 8019 if (btrfs_root_refs(&root->root_item) == 0 && 8017 8020 root != root->fs_info->tree_root)
+5 -4
fs/btrfs/relocation.c
··· 4082 4082 return inode; 4083 4083 } 4084 4084 4085 - static struct reloc_control *alloc_reloc_control(void) 4085 + static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info) 4086 4086 { 4087 4087 struct reloc_control *rc; 4088 4088 ··· 4093 4093 INIT_LIST_HEAD(&rc->reloc_roots); 4094 4094 backref_cache_init(&rc->backref_cache); 4095 4095 mapping_tree_init(&rc->reloc_root_tree); 4096 - extent_io_tree_init(&rc->processed_blocks, NULL); 4096 + extent_io_tree_init(&rc->processed_blocks, 4097 + fs_info->btree_inode->i_mapping); 4097 4098 return rc; 4098 4099 } 4099 4100 ··· 4111 4110 int rw = 0; 4112 4111 int err = 0; 4113 4112 4114 - rc = alloc_reloc_control(); 4113 + rc = alloc_reloc_control(fs_info); 4115 4114 if (!rc) 4116 4115 return -ENOMEM; 4117 4116 ··· 4312 4311 if (list_empty(&reloc_roots)) 4313 4312 goto out; 4314 4313 4315 - rc = alloc_reloc_control(); 4314 + rc = alloc_reloc_control(root->fs_info); 4316 4315 if (!rc) { 4317 4316 err = -ENOMEM; 4318 4317 goto out;
+47 -26
fs/ceph/locks.c
··· 191 191 } 192 192 193 193 /** 194 - * Encode the flock and fcntl locks for the given inode into the pagelist. 195 - * Format is: #fcntl locks, sequential fcntl locks, #flock locks, 196 - * sequential flock locks. 197 - * Must be called with lock_flocks() already held. 198 - * If we encounter more of a specific lock type than expected, 199 - * we return the value 1. 194 + * Encode the flock and fcntl locks for the given inode into the ceph_filelock 195 + * array. Must be called with lock_flocks() already held. 196 + * If we encounter more of a specific lock type than expected, return -ENOSPC. 200 197 */ 201 - int ceph_encode_locks(struct inode *inode, struct ceph_pagelist *pagelist, 202 - int num_fcntl_locks, int num_flock_locks) 198 + int ceph_encode_locks_to_buffer(struct inode *inode, 199 + struct ceph_filelock *flocks, 200 + int num_fcntl_locks, int num_flock_locks) 203 201 { 204 202 struct file_lock *lock; 205 - struct ceph_filelock cephlock; 206 203 int err = 0; 207 204 int seen_fcntl = 0; 208 205 int seen_flock = 0; 206 + int l = 0; 209 207 210 208 dout("encoding %d flock and %d fcntl locks", num_flock_locks, 211 209 num_fcntl_locks); 212 - err = ceph_pagelist_append(pagelist, &num_fcntl_locks, sizeof(u32)); 213 - if (err) 214 - goto fail; 210 + 215 211 for (lock = inode->i_flock; lock != NULL; lock = lock->fl_next) { 216 212 if (lock->fl_flags & FL_POSIX) { 217 213 ++seen_fcntl; ··· 215 219 err = -ENOSPC; 216 220 goto fail; 217 221 } 218 - err = lock_to_ceph_filelock(lock, &cephlock); 222 + err = lock_to_ceph_filelock(lock, &flocks[l]); 219 223 if (err) 220 224 goto fail; 221 - err = ceph_pagelist_append(pagelist, &cephlock, 222 - sizeof(struct ceph_filelock)); 225 + ++l; 223 226 } 224 - if (err) 225 - goto fail; 226 227 } 227 - 228 - err = ceph_pagelist_append(pagelist, &num_flock_locks, sizeof(u32)); 229 - if (err) 230 - goto fail; 231 228 for (lock = inode->i_flock; lock != NULL; lock = lock->fl_next) { 232 229 if (lock->fl_flags & FL_FLOCK) { 233 230 
++seen_flock; ··· 228 239 err = -ENOSPC; 229 240 goto fail; 230 241 } 231 - err = lock_to_ceph_filelock(lock, &cephlock); 242 + err = lock_to_ceph_filelock(lock, &flocks[l]); 232 243 if (err) 233 244 goto fail; 234 - err = ceph_pagelist_append(pagelist, &cephlock, 235 - sizeof(struct ceph_filelock)); 245 + ++l; 236 246 } 237 - if (err) 238 - goto fail; 239 247 } 240 248 fail: 249 + return err; 250 + } 251 + 252 + /** 253 + * Copy the encoded flock and fcntl locks into the pagelist. 254 + * Format is: #fcntl locks, sequential fcntl locks, #flock locks, 255 + * sequential flock locks. 256 + * Returns zero on success. 257 + */ 258 + int ceph_locks_to_pagelist(struct ceph_filelock *flocks, 259 + struct ceph_pagelist *pagelist, 260 + int num_fcntl_locks, int num_flock_locks) 261 + { 262 + int err = 0; 263 + __le32 nlocks; 264 + 265 + nlocks = cpu_to_le32(num_fcntl_locks); 266 + err = ceph_pagelist_append(pagelist, &nlocks, sizeof(nlocks)); 267 + if (err) 268 + goto out_fail; 269 + 270 + err = ceph_pagelist_append(pagelist, flocks, 271 + num_fcntl_locks * sizeof(*flocks)); 272 + if (err) 273 + goto out_fail; 274 + 275 + nlocks = cpu_to_le32(num_flock_locks); 276 + err = ceph_pagelist_append(pagelist, &nlocks, sizeof(nlocks)); 277 + if (err) 278 + goto out_fail; 279 + 280 + err = ceph_pagelist_append(pagelist, 281 + &flocks[num_fcntl_locks], 282 + num_flock_locks * sizeof(*flocks)); 283 + out_fail: 241 284 return err; 242 285 } 243 286
+34 -29
fs/ceph/mds_client.c
··· 2478 2478 2479 2479 if (recon_state->flock) { 2480 2480 int num_fcntl_locks, num_flock_locks; 2481 - struct ceph_pagelist_cursor trunc_point; 2481 + struct ceph_filelock *flocks; 2482 2482 2483 - ceph_pagelist_set_cursor(pagelist, &trunc_point); 2484 - do { 2485 - lock_flocks(); 2486 - ceph_count_locks(inode, &num_fcntl_locks, 2487 - &num_flock_locks); 2488 - rec.v2.flock_len = (2*sizeof(u32) + 2489 - (num_fcntl_locks+num_flock_locks) * 2490 - sizeof(struct ceph_filelock)); 2491 - unlock_flocks(); 2492 - 2493 - /* pre-alloc pagelist */ 2494 - ceph_pagelist_truncate(pagelist, &trunc_point); 2495 - err = ceph_pagelist_append(pagelist, &rec, reclen); 2496 - if (!err) 2497 - err = ceph_pagelist_reserve(pagelist, 2498 - rec.v2.flock_len); 2499 - 2500 - /* encode locks */ 2501 - if (!err) { 2502 - lock_flocks(); 2503 - err = ceph_encode_locks(inode, 2504 - pagelist, 2505 - num_fcntl_locks, 2506 - num_flock_locks); 2507 - unlock_flocks(); 2508 - } 2509 - } while (err == -ENOSPC); 2483 + encode_again: 2484 + lock_flocks(); 2485 + ceph_count_locks(inode, &num_fcntl_locks, &num_flock_locks); 2486 + unlock_flocks(); 2487 + flocks = kmalloc((num_fcntl_locks+num_flock_locks) * 2488 + sizeof(struct ceph_filelock), GFP_NOFS); 2489 + if (!flocks) { 2490 + err = -ENOMEM; 2491 + goto out_free; 2492 + } 2493 + lock_flocks(); 2494 + err = ceph_encode_locks_to_buffer(inode, flocks, 2495 + num_fcntl_locks, 2496 + num_flock_locks); 2497 + unlock_flocks(); 2498 + if (err) { 2499 + kfree(flocks); 2500 + if (err == -ENOSPC) 2501 + goto encode_again; 2502 + goto out_free; 2503 + } 2504 + /* 2505 + * number of encoded locks is stable, so copy to pagelist 2506 + */ 2507 + rec.v2.flock_len = cpu_to_le32(2*sizeof(u32) + 2508 + (num_fcntl_locks+num_flock_locks) * 2509 + sizeof(struct ceph_filelock)); 2510 + err = ceph_pagelist_append(pagelist, &rec, reclen); 2511 + if (!err) 2512 + err = ceph_locks_to_pagelist(flocks, pagelist, 2513 + num_fcntl_locks, 2514 + num_flock_locks); 2515 + 
kfree(flocks); 2510 2516 } else { 2511 2517 err = ceph_pagelist_append(pagelist, &rec, reclen); 2512 2518 } 2513 - 2514 2519 out_free: 2515 2520 kfree(path); 2516 2521 out_dput:
+7 -2
fs/ceph/super.h
··· 822 822 extern int ceph_lock(struct file *file, int cmd, struct file_lock *fl); 823 823 extern int ceph_flock(struct file *file, int cmd, struct file_lock *fl); 824 824 extern void ceph_count_locks(struct inode *inode, int *p_num, int *f_num); 825 - extern int ceph_encode_locks(struct inode *i, struct ceph_pagelist *p, 826 - int p_locks, int f_locks); 825 + extern int ceph_encode_locks_to_buffer(struct inode *inode, 826 + struct ceph_filelock *flocks, 827 + int num_fcntl_locks, 828 + int num_flock_locks); 829 + extern int ceph_locks_to_pagelist(struct ceph_filelock *flocks, 830 + struct ceph_pagelist *pagelist, 831 + int num_fcntl_locks, int num_flock_locks); 827 832 extern int lock_to_ceph_filelock(struct file_lock *fl, struct ceph_filelock *c); 828 833 829 834 /* debugfs.c */
+2 -2
fs/cifs/connect.c
··· 3279 3279 pos = full_path + unc_len; 3280 3280 3281 3281 if (pplen) { 3282 - *pos++ = CIFS_DIR_SEP(cifs_sb); 3283 - strncpy(pos, vol->prepath, pplen); 3282 + *pos = CIFS_DIR_SEP(cifs_sb); 3283 + strncpy(pos + 1, vol->prepath, pplen); 3284 3284 pos += pplen; 3285 3285 } 3286 3286
+6
fs/ecryptfs/file.c
··· 295 295 static int 296 296 ecryptfs_fsync(struct file *file, loff_t start, loff_t end, int datasync) 297 297 { 298 + int rc; 299 + 300 + rc = filemap_write_and_wait(file->f_mapping); 301 + if (rc) 302 + return rc; 303 + 298 304 return vfs_fsync(ecryptfs_file_to_lower(file), datasync); 299 305 } 300 306
+10 -9
fs/file_table.c
··· 306 306 { 307 307 if (atomic_long_dec_and_test(&file->f_count)) { 308 308 struct task_struct *task = current; 309 + unsigned long flags; 310 + 309 311 file_sb_list_del(file); 310 - if (unlikely(in_interrupt() || task->flags & PF_KTHREAD)) { 311 - unsigned long flags; 312 - spin_lock_irqsave(&delayed_fput_lock, flags); 313 - list_add(&file->f_u.fu_list, &delayed_fput_list); 314 - schedule_work(&delayed_fput_work); 315 - spin_unlock_irqrestore(&delayed_fput_lock, flags); 316 - return; 312 + if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) { 313 + init_task_work(&file->f_u.fu_rcuhead, ____fput); 314 + if (!task_work_add(task, &file->f_u.fu_rcuhead, true)) 315 + return; 317 316 } 318 - init_task_work(&file->f_u.fu_rcuhead, ____fput); 319 - task_work_add(task, &file->f_u.fu_rcuhead, true); 317 + spin_lock_irqsave(&delayed_fput_lock, flags); 318 + list_add(&file->f_u.fu_list, &delayed_fput_list); 319 + schedule_work(&delayed_fput_work); 320 + spin_unlock_irqrestore(&delayed_fput_lock, flags); 320 321 } 321 322 } 322 323
+4
fs/hpfs/file.c
··· 109 109 { 110 110 struct inode *inode = mapping->host; 111 111 112 + hpfs_lock(inode->i_sb); 113 + 112 114 if (to > inode->i_size) { 113 115 truncate_pagecache(inode, to, inode->i_size); 114 116 hpfs_truncate(inode); 115 117 } 118 + 119 + hpfs_unlock(inode->i_sb); 116 120 } 117 121 118 122 static int hpfs_write_begin(struct file *file, struct address_space *mapping,
+2 -2
fs/namei.c
··· 1976 1976 err = complete_walk(nd); 1977 1977 1978 1978 if (!err && nd->flags & LOOKUP_DIRECTORY) { 1979 - if (!nd->inode->i_op->lookup) { 1979 + if (!can_lookup(nd->inode)) { 1980 1980 path_put(&nd->path); 1981 1981 err = -ENOTDIR; 1982 1982 } ··· 2850 2850 if ((open_flag & O_CREAT) && S_ISDIR(nd->inode->i_mode)) 2851 2851 goto out; 2852 2852 error = -ENOTDIR; 2853 - if ((nd->flags & LOOKUP_DIRECTORY) && !nd->inode->i_op->lookup) 2853 + if ((nd->flags & LOOKUP_DIRECTORY) && !can_lookup(nd->inode)) 2854 2854 goto out; 2855 2855 audit_inode(name, nd->path.dentry, 0); 2856 2856 finish_open:
-9
fs/ncpfs/dir.c
··· 1029 1029 DPRINTK("ncp_rmdir: removing %s/%s\n", 1030 1030 dentry->d_parent->d_name.name, dentry->d_name.name); 1031 1031 1032 - /* 1033 - * fail with EBUSY if there are still references to this 1034 - * directory. 1035 - */ 1036 - dentry_unhash(dentry); 1037 - error = -EBUSY; 1038 - if (!d_unhashed(dentry)) 1039 - goto out; 1040 - 1041 1032 len = sizeof(__name); 1042 1033 error = ncp_io2vol(server, __name, &len, dentry->d_name.name, 1043 1034 dentry->d_name.len, !ncp_preserve_case(dir));
+1
fs/ocfs2/dlm/dlmrecovery.c
··· 1408 1408 mres->lockname_len, mres->lockname); 1409 1409 ret = -EFAULT; 1410 1410 spin_unlock(&res->spinlock); 1411 + dlm_lockres_put(res); 1411 1412 goto leave; 1412 1413 } 1413 1414 res->state |= DLM_LOCK_RES_MIGRATING;
+2 -2
fs/ocfs2/namei.c
··· 947 947 ocfs2_free_dir_lookup_result(&orphan_insert); 948 948 ocfs2_free_dir_lookup_result(&lookup); 949 949 950 - if (status) 950 + if (status && (status != -ENOTEMPTY)) 951 951 mlog_errno(status); 952 952 953 953 return status; ··· 2216 2216 2217 2217 brelse(orphan_dir_bh); 2218 2218 2219 - return 0; 2219 + return ret; 2220 2220 } 2221 2221 2222 2222 int ocfs2_create_inode_in_orphan(struct inode *dir,
+1
fs/proc/base.c
··· 2118 2118 nstr[notify & ~SIGEV_THREAD_ID], 2119 2119 (notify & SIGEV_THREAD_ID) ? "tid" : "pid", 2120 2120 pid_nr_ns(timer->it_pid, tp->ns)); 2121 + seq_printf(m, "ClockID: %d\n", timer->it_clock); 2121 2122 2122 2123 return 0; 2123 2124 }
+5 -5
fs/proc/kmsg.c
··· 21 21 22 22 static int kmsg_open(struct inode * inode, struct file * file) 23 23 { 24 - return do_syslog(SYSLOG_ACTION_OPEN, NULL, 0, SYSLOG_FROM_FILE); 24 + return do_syslog(SYSLOG_ACTION_OPEN, NULL, 0, SYSLOG_FROM_PROC); 25 25 } 26 26 27 27 static int kmsg_release(struct inode * inode, struct file * file) 28 28 { 29 - (void) do_syslog(SYSLOG_ACTION_CLOSE, NULL, 0, SYSLOG_FROM_FILE); 29 + (void) do_syslog(SYSLOG_ACTION_CLOSE, NULL, 0, SYSLOG_FROM_PROC); 30 30 return 0; 31 31 } 32 32 ··· 34 34 size_t count, loff_t *ppos) 35 35 { 36 36 if ((file->f_flags & O_NONBLOCK) && 37 - !do_syslog(SYSLOG_ACTION_SIZE_UNREAD, NULL, 0, SYSLOG_FROM_FILE)) 37 + !do_syslog(SYSLOG_ACTION_SIZE_UNREAD, NULL, 0, SYSLOG_FROM_PROC)) 38 38 return -EAGAIN; 39 - return do_syslog(SYSLOG_ACTION_READ, buf, count, SYSLOG_FROM_FILE); 39 + return do_syslog(SYSLOG_ACTION_READ, buf, count, SYSLOG_FROM_PROC); 40 40 } 41 41 42 42 static unsigned int kmsg_poll(struct file *file, poll_table *wait) 43 43 { 44 44 poll_wait(file, &log_wait, wait); 45 - if (do_syslog(SYSLOG_ACTION_SIZE_UNREAD, NULL, 0, SYSLOG_FROM_FILE)) 45 + if (do_syslog(SYSLOG_ACTION_SIZE_UNREAD, NULL, 0, SYSLOG_FROM_PROC)) 46 46 return POLLIN | POLLRDNORM; 47 47 return 0; 48 48 }
+1
fs/xfs/xfs_attr_leaf.h
··· 128 128 __u8 holes; 129 129 __u8 pad1; 130 130 struct xfs_attr_leaf_map freemap[XFS_ATTR_LEAF_MAPSIZE]; 131 + __be32 pad2; /* 64 bit alignment */ 131 132 }; 132 133 133 134 #define XFS_ATTR3_LEAF_CRC_OFF (offsetof(struct xfs_attr3_leaf_hdr, info.crc))
+10
fs/xfs/xfs_btree.c
··· 2544 2544 if (error) 2545 2545 goto error0; 2546 2546 2547 + /* 2548 + * we can't just memcpy() the root in for CRC enabled btree blocks. 2549 + * In that case have to also ensure the blkno remains correct 2550 + */ 2547 2551 memcpy(cblock, block, xfs_btree_block_len(cur)); 2552 + if (cur->bc_flags & XFS_BTREE_CRC_BLOCKS) { 2553 + if (cur->bc_flags & XFS_BTREE_LONG_PTRS) 2554 + cblock->bb_u.l.bb_blkno = cpu_to_be64(cbp->b_bn); 2555 + else 2556 + cblock->bb_u.s.bb_blkno = cpu_to_be64(cbp->b_bn); 2557 + } 2548 2558 2549 2559 be16_add_cpu(&block->bb_level, 1); 2550 2560 xfs_btree_set_numrecs(block, 1);
+3 -2
fs/xfs/xfs_dir2_format.h
··· 266 266 struct xfs_dir3_data_hdr { 267 267 struct xfs_dir3_blk_hdr hdr; 268 268 xfs_dir2_data_free_t best_free[XFS_DIR2_DATA_FD_COUNT]; 269 + __be32 pad; /* 64 bit alignment */ 269 270 }; 270 271 271 272 #define XFS_DIR3_DATA_CRC_OFF offsetof(struct xfs_dir3_data_hdr, hdr.crc) ··· 478 477 struct xfs_da3_blkinfo info; /* header for da routines */ 479 478 __be16 count; /* count of entries */ 480 479 __be16 stale; /* count of stale entries */ 481 - __be32 pad; 480 + __be32 pad; /* 64 bit alignment */ 482 481 }; 483 482 484 483 struct xfs_dir3_icleaf_hdr { ··· 716 715 __be32 firstdb; /* db of first entry */ 717 716 __be32 nvalid; /* count of valid entries */ 718 717 __be32 nused; /* count of used entries */ 719 - __be32 pad; /* 64 bit alignment. */ 718 + __be32 pad; /* 64 bit alignment */ 720 719 }; 721 720 722 721 struct xfs_dir3_free {
+17 -2
fs/xfs/xfs_log_recover.c
··· 1845 1845 xfs_agino_t *buffer_nextp; 1846 1846 1847 1847 trace_xfs_log_recover_buf_inode_buf(mp->m_log, buf_f); 1848 - bp->b_ops = &xfs_inode_buf_ops; 1848 + 1849 + /* 1850 + * Post recovery validation only works properly on CRC enabled 1851 + * filesystems. 1852 + */ 1853 + if (xfs_sb_version_hascrc(&mp->m_sb)) 1854 + bp->b_ops = &xfs_inode_buf_ops; 1849 1855 1850 1856 inodes_per_buf = BBTOB(bp->b_io_length) >> mp->m_sb.sb_inodelog; 1851 1857 for (i = 0; i < inodes_per_buf; i++) { ··· 2211 2205 /* Shouldn't be any more regions */ 2212 2206 ASSERT(i == item->ri_total); 2213 2207 2214 - xlog_recovery_validate_buf_type(mp, bp, buf_f); 2208 + /* 2209 + * We can only do post recovery validation on items on CRC enabled 2210 + * filesystems as we need to know when the buffer was written to be able 2211 + * to determine if we should have replayed the item. If we replay old 2212 + * metadata over a newer buffer, then it will enter a temporarily 2213 + * inconsistent state resulting in verification failures. Hence for now 2214 + * just avoid the verification stage for non-crc filesystems. 2215 + */ 2216 + if (xfs_sb_version_hascrc(&mp->m_sb)) 2217 + xlog_recovery_validate_buf_type(mp, bp, buf_f); 2215 2218 } 2216 2219 2217 2220 /*
+11 -7
fs/xfs/xfs_mount.c
··· 314 314 xfs_mount_validate_sb( 315 315 xfs_mount_t *mp, 316 316 xfs_sb_t *sbp, 317 - bool check_inprogress) 317 + bool check_inprogress, 318 + bool check_version) 318 319 { 319 320 320 321 /* ··· 338 337 339 338 /* 340 339 * Version 5 superblock feature mask validation. Reject combinations the 341 - * kernel cannot support up front before checking anything else. 340 + * kernel cannot support up front before checking anything else. For 341 + * write validation, we don't need to check feature masks. 342 342 */ 343 - if (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) { 343 + if (check_version && XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) { 344 344 xfs_alert(mp, 345 345 "Version 5 superblock detected. This kernel has EXPERIMENTAL support enabled!\n" 346 346 "Use of these features in this kernel is at your own risk!"); ··· 677 675 678 676 static int 679 677 xfs_sb_verify( 680 - struct xfs_buf *bp) 678 + struct xfs_buf *bp, 679 + bool check_version) 681 680 { 682 681 struct xfs_mount *mp = bp->b_target->bt_mount; 683 682 struct xfs_sb sb; ··· 689 686 * Only check the in progress field for the primary superblock as 690 687 * mkfs.xfs doesn't clear it from secondary superblocks. 691 688 */ 692 - return xfs_mount_validate_sb(mp, &sb, bp->b_bn == XFS_SB_DADDR); 689 + return xfs_mount_validate_sb(mp, &sb, bp->b_bn == XFS_SB_DADDR, 690 + check_version); 693 691 } 694 692 695 693 /* ··· 723 719 goto out_error; 724 720 } 725 721 } 726 - error = xfs_sb_verify(bp); 722 + error = xfs_sb_verify(bp, true); 727 723 728 724 out_error: 729 725 if (error) { ··· 762 758 struct xfs_buf_log_item *bip = bp->b_fspriv; 763 759 int error; 764 760 765 - error = xfs_sb_verify(bp); 761 + error = xfs_sb_verify(bp, false); 766 762 if (error) { 767 763 XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp, bp->b_addr); 768 764 xfs_buf_ioerror(bp, error);
+5
include/asm-generic/kvm_para.h
··· 18 18 return 0; 19 19 } 20 20 21 + static inline bool kvm_para_available(void) 22 + { 23 + return false; 24 + } 25 + 21 26 #endif
+4
include/linux/cpu.h
··· 175 175 176 176 extern void get_online_cpus(void); 177 177 extern void put_online_cpus(void); 178 + extern void cpu_hotplug_disable(void); 179 + extern void cpu_hotplug_enable(void); 178 180 #define hotcpu_notifier(fn, pri) cpu_notifier(fn, pri) 179 181 #define register_hotcpu_notifier(nb) register_cpu_notifier(nb) 180 182 #define unregister_hotcpu_notifier(nb) unregister_cpu_notifier(nb) ··· 200 198 201 199 #define get_online_cpus() do { } while (0) 202 200 #define put_online_cpus() do { } while (0) 201 + #define cpu_hotplug_disable() do { } while (0) 202 + #define cpu_hotplug_enable() do { } while (0) 203 203 #define hotcpu_notifier(fn, pri) do { (void)(fn); } while (0) 204 204 /* These aren't inline functions due to a GCC bug. */ 205 205 #define register_hotcpu_notifier(nb) ({ (void)(nb); 0; })
+1
include/linux/filter.h
··· 46 46 extern int sk_detach_filter(struct sock *sk); 47 47 extern int sk_chk_filter(struct sock_filter *filter, unsigned int flen); 48 48 extern int sk_get_filter(struct sock *sk, struct sock_filter __user *filter, unsigned len); 49 + extern void sk_decode_filter(struct sock_filter *filt, struct sock_filter *to); 49 50 50 51 #ifdef CONFIG_BPF_JIT 51 52 #include <stdarg.h>
+2 -2
include/linux/if_team.h
··· 260 260 return port; 261 261 cur = port; 262 262 list_for_each_entry_continue_rcu(cur, &team->port_list, list) 263 - if (team_port_txable(port)) 263 + if (team_port_txable(cur)) 264 264 return cur; 265 265 list_for_each_entry_rcu(cur, &team->port_list, list) { 266 266 if (cur == port) 267 267 break; 268 - if (team_port_txable(port)) 268 + if (team_port_txable(cur)) 269 269 return cur; 270 270 } 271 271 return NULL;
+4 -2
include/linux/math64.h
··· 6 6 7 7 #if BITS_PER_LONG == 64 8 8 9 - #define div64_long(x,y) div64_s64((x),(y)) 9 + #define div64_long(x, y) div64_s64((x), (y)) 10 + #define div64_ul(x, y) div64_u64((x), (y)) 10 11 11 12 /** 12 13 * div_u64_rem - unsigned 64bit divide with 32bit divisor with remainder ··· 48 47 49 48 #elif BITS_PER_LONG == 32 50 49 51 - #define div64_long(x,y) div_s64((x),(y)) 50 + #define div64_long(x, y) div_s64((x), (y)) 51 + #define div64_ul(x, y) div_u64((x), (y)) 52 52 53 53 #ifndef div_u64_rem 54 54 static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder)
+20
include/linux/rculist.h
··· 461 461 &(pos)->member)), typeof(*(pos)), member)) 462 462 463 463 /** 464 + * hlist_for_each_entry_rcu_notrace - iterate over rcu list of given type (for tracing) 465 + * @pos: the type * to use as a loop cursor. 466 + * @head: the head for your list. 467 + * @member: the name of the hlist_node within the struct. 468 + * 469 + * This list-traversal primitive may safely run concurrently with 470 + * the _rcu list-mutation primitives such as hlist_add_head_rcu() 471 + * as long as the traversal is guarded by rcu_read_lock(). 472 + * 473 + * This is the same as hlist_for_each_entry_rcu() except that it does 474 + * not do any RCU debugging or tracing. 475 + */ 476 + #define hlist_for_each_entry_rcu_notrace(pos, head, member) \ 477 + for (pos = hlist_entry_safe (rcu_dereference_raw_notrace(hlist_first_rcu(head)),\ 478 + typeof(*(pos)), member); \ 479 + pos; \ 480 + pos = hlist_entry_safe(rcu_dereference_raw_notrace(hlist_next_rcu(\ 481 + &(pos)->member)), typeof(*(pos)), member)) 482 + 483 + /** 464 484 * hlist_for_each_entry_rcu_bh - iterate over rcu list of given type 465 485 * @pos: the type * to use as a loop cursor. 466 486 * @head: the head for your list.
+9
include/linux/rcupdate.h
··· 640 640 641 641 #define rcu_dereference_raw(p) rcu_dereference_check(p, 1) /*@@@ needed? @@@*/ 642 642 643 + /* 644 + * The tracing infrastructure traces RCU (we want that), but unfortunately 645 + * some of the RCU checks cause tracing to lock up the system. 646 + * 647 + * The tracing version of rcu_dereference_raw() must not call 648 + * rcu_read_lock_held(). 649 + */ 650 + #define rcu_dereference_raw_notrace(p) __rcu_dereference_check((p), 1, __rcu) 651 + 643 652 /** 644 653 * rcu_access_index() - fetch RCU index with no dereferencing 645 654 * @p: The index to read
+3
include/linux/scatterlist.h
··· 111 111 static inline void sg_set_buf(struct scatterlist *sg, const void *buf, 112 112 unsigned int buflen) 113 113 { 114 + #ifdef CONFIG_DEBUG_SG 115 + BUG_ON(!virt_addr_valid(buf)); 116 + #endif 114 117 sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf)); 115 118 } 116 119
+12 -7
include/linux/smp.h
··· 11 11 #include <linux/list.h> 12 12 #include <linux/cpumask.h> 13 13 #include <linux/init.h> 14 + #include <linux/irqflags.h> 14 15 15 16 extern void cpu_idle(void); 16 17 ··· 140 139 } 141 140 #define smp_call_function(func, info, wait) \ 142 141 (up_smp_call_function(func, info)) 143 - #define on_each_cpu(func,info,wait) \ 144 - ({ \ 145 - local_irq_disable(); \ 146 - func(info); \ 147 - local_irq_enable(); \ 148 - 0; \ 149 - }) 142 + 143 + static inline int on_each_cpu(smp_call_func_t func, void *info, int wait) 144 + { 145 + unsigned long flags; 146 + 147 + local_irq_save(flags); 148 + func(info); 149 + local_irq_restore(flags); 150 + return 0; 151 + } 152 + 150 153 /* 151 154 * Note we still need to test the mask even for UP 152 155 * because we actually can get an empty mask from
+3
include/linux/swapops.h
··· 137 137 138 138 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, 139 139 unsigned long address); 140 + extern void migration_entry_wait_huge(struct mm_struct *mm, pte_t *pte); 140 141 #else 141 142 142 143 #define make_migration_entry(page, write) swp_entry(0, 0) ··· 149 148 static inline void make_migration_entry_read(swp_entry_t *entryp) { } 150 149 static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, 151 150 unsigned long address) { } 151 + static inline void migration_entry_wait_huge(struct mm_struct *mm, 152 + pte_t *pte) { } 152 153 static inline int is_write_migration_entry(swp_entry_t entry) 153 154 { 154 155 return 0;
+2 -2
include/linux/syslog.h
··· 44 44 /* Return size of the log buffer */ 45 45 #define SYSLOG_ACTION_SIZE_BUFFER 10 46 46 47 - #define SYSLOG_FROM_CALL 0 48 - #define SYSLOG_FROM_FILE 1 47 + #define SYSLOG_FROM_READER 0 48 + #define SYSLOG_FROM_PROC 1 49 49 50 50 int do_syslog(int type, char __user *buf, int count, bool from_file); 51 51
+2 -2
include/linux/tracepoint.h
··· 145 145 TP_PROTO(data_proto), \ 146 146 TP_ARGS(data_args), \ 147 147 TP_CONDITION(cond), \ 148 - rcu_idle_exit(), \ 149 - rcu_idle_enter()); \ 148 + rcu_irq_enter(), \ 149 + rcu_irq_exit()); \ 150 150 } 151 151 #else 152 152 #define __DECLARE_TRACE_RCU(name, proto, args, cond, data_proto, data_args)
+1
include/net/bluetooth/hci_core.h
··· 1117 1117 int mgmt_control(struct sock *sk, struct msghdr *msg, size_t len); 1118 1118 int mgmt_index_added(struct hci_dev *hdev); 1119 1119 int mgmt_index_removed(struct hci_dev *hdev); 1120 + int mgmt_set_powered_failed(struct hci_dev *hdev, int err); 1120 1121 int mgmt_powered(struct hci_dev *hdev, u8 powered); 1121 1122 int mgmt_discoverable(struct hci_dev *hdev, u8 discoverable); 1122 1123 int mgmt_connectable(struct hci_dev *hdev, u8 connectable);
+1
include/net/bluetooth/mgmt.h
··· 42 42 #define MGMT_STATUS_NOT_POWERED 0x0f 43 43 #define MGMT_STATUS_CANCELLED 0x10 44 44 #define MGMT_STATUS_INVALID_INDEX 0x11 45 + #define MGMT_STATUS_RFKILLED 0x12 45 46 46 47 struct mgmt_hdr { 47 48 __le16 opcode;
+3 -3
include/net/ip_tunnels.h
··· 95 95 int ip_tunnel_init(struct net_device *dev); 96 96 void ip_tunnel_uninit(struct net_device *dev); 97 97 void ip_tunnel_dellink(struct net_device *dev, struct list_head *head); 98 - int __net_init ip_tunnel_init_net(struct net *net, int ip_tnl_net_id, 99 - struct rtnl_link_ops *ops, char *devname); 98 + int ip_tunnel_init_net(struct net *net, int ip_tnl_net_id, 99 + struct rtnl_link_ops *ops, char *devname); 100 100 101 - void __net_exit ip_tunnel_delete_net(struct ip_tunnel_net *itn); 101 + void ip_tunnel_delete_net(struct ip_tunnel_net *itn); 102 102 103 103 void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev, 104 104 const struct iphdr *tnl_params, const u8 protocol);
+2 -1
include/sound/soc-dapm.h
··· 450 450 snd_soc_dapm_aif_in, /* audio interface input */ 451 451 snd_soc_dapm_aif_out, /* audio interface output */ 452 452 snd_soc_dapm_siggen, /* signal generator */ 453 - snd_soc_dapm_dai, /* link to DAI structure */ 453 + snd_soc_dapm_dai_in, /* link to DAI structure */ 454 + snd_soc_dapm_dai_out, 454 455 snd_soc_dapm_dai_link, /* link between two DAI structures */ 455 456 }; 456 457
+1
include/uapi/linux/kvm.h
··· 783 783 #define KVM_REG_IA64 0x3000000000000000ULL 784 784 #define KVM_REG_ARM 0x4000000000000000ULL 785 785 #define KVM_REG_S390 0x5000000000000000ULL 786 + #define KVM_REG_MIPS 0x7000000000000000ULL 786 787 787 788 #define KVM_REG_SIZE_SHIFT 52 788 789 #define KVM_REG_SIZE_MASK 0x00f0000000000000ULL
+1
init/Kconfig
··· 431 431 config TREE_RCU 432 432 bool "Tree-based hierarchical RCU" 433 433 depends on !PREEMPT && SMP 434 + select IRQ_WORK 434 435 help 435 436 This option selects the RCU implementation that is 436 437 designed for very large SMP system with hundreds or
+1 -1
kernel/audit.c
··· 1056 1056 static void wait_for_auditd(unsigned long sleep_time) 1057 1057 { 1058 1058 DECLARE_WAITQUEUE(wait, current); 1059 - set_current_state(TASK_INTERRUPTIBLE); 1059 + set_current_state(TASK_UNINTERRUPTIBLE); 1060 1060 add_wait_queue(&audit_backlog_wait, &wait); 1061 1061 1062 1062 if (audit_backlog_limit &&
+1
kernel/audit_tree.c
··· 658 658 struct vfsmount *mnt; 659 659 int err; 660 660 661 + rule->tree = NULL; 661 662 list_for_each_entry(tree, &tree_list, list) { 662 663 if (!strcmp(seed->pathname, tree->pathname)) { 663 664 put_tree(seed);
+23 -32
kernel/cpu.c
··· 133 133 mutex_unlock(&cpu_hotplug.lock); 134 134 } 135 135 136 + /* 137 + * Wait for currently running CPU hotplug operations to complete (if any) and 138 + * disable future CPU hotplug (from sysfs). The 'cpu_add_remove_lock' protects 139 + * the 'cpu_hotplug_disabled' flag. The same lock is also acquired by the 140 + * hotplug path before performing hotplug operations. So acquiring that lock 141 + * guarantees mutual exclusion from any currently running hotplug operations. 142 + */ 143 + void cpu_hotplug_disable(void) 144 + { 145 + cpu_maps_update_begin(); 146 + cpu_hotplug_disabled = 1; 147 + cpu_maps_update_done(); 148 + } 149 + 150 + void cpu_hotplug_enable(void) 151 + { 152 + cpu_maps_update_begin(); 153 + cpu_hotplug_disabled = 0; 154 + cpu_maps_update_done(); 155 + } 156 + 136 157 #else /* #if CONFIG_HOTPLUG_CPU */ 137 158 static void cpu_hotplug_begin(void) {} 138 159 static void cpu_hotplug_done(void) {} ··· 562 541 core_initcall(alloc_frozen_cpus); 563 542 564 543 /* 565 - * Prevent regular CPU hotplug from racing with the freezer, by disabling CPU 566 - * hotplug when tasks are about to be frozen. Also, don't allow the freezer 567 - * to continue until any currently running CPU hotplug operation gets 568 - * completed. 569 - * To modify the 'cpu_hotplug_disabled' flag, we need to acquire the 570 - * 'cpu_add_remove_lock'. And this same lock is also taken by the regular 571 - * CPU hotplug path and released only after it is complete. Thus, we 572 - * (and hence the freezer) will block here until any currently running CPU 573 - * hotplug operation gets completed. 574 - */ 575 - void cpu_hotplug_disable_before_freeze(void) 576 - { 577 - cpu_maps_update_begin(); 578 - cpu_hotplug_disabled = 1; 579 - cpu_maps_update_done(); 580 - } 581 - 582 - 583 - /* 584 - * When tasks have been thawed, re-enable regular CPU hotplug (which had been 585 - * disabled while beginning to freeze tasks). 586 - */ 587 - void cpu_hotplug_enable_after_thaw(void) 588 - { 589 - cpu_maps_update_begin(); 590 - cpu_hotplug_disabled = 0; 591 - cpu_maps_update_done(); 592 - } 593 - 594 - /* 595 544 * When callbacks for CPU hotplug notifications are being executed, we must 596 545 * ensure that the state of the system with respect to the tasks being frozen 597 546 * or not, as reported by the notification, remains unchanged *throughout the ··· 580 589 581 590 case PM_SUSPEND_PREPARE: 582 591 case PM_HIBERNATION_PREPARE: 583 - cpu_hotplug_disable_before_freeze(); 592 + cpu_hotplug_disable(); 584 593 break; 585 594 586 595 case PM_POST_SUSPEND: 587 596 case PM_POST_HIBERNATION: 588 - cpu_hotplug_enable_after_thaw(); 597 + cpu_hotplug_enable(); 589 598 break; 590 599 591 600 default:
+1 -1
kernel/exit.c
··· 649 649 * jobs, send them a SIGHUP and then a SIGCONT. (POSIX 3.2.2.2) 650 650 */ 651 651 forget_original_parent(tsk); 652 - exit_task_namespaces(tsk); 653 652 654 653 write_lock_irq(&tasklist_lock); 655 654 if (group_dead) ··· 794 795 exit_shm(tsk); 795 796 exit_files(tsk); 796 797 exit_fs(tsk); 798 + exit_task_namespaces(tsk); 797 799 exit_task_work(tsk); 798 800 check_stack_usage(); 799 801 exit_thread();
+7 -2
kernel/irq/irqdomain.c
··· 143 143 * irq_domain_add_simple() - Allocate and register a simple irq_domain. 144 144 * @of_node: pointer to interrupt controller's device tree node. 145 145 * @size: total number of irqs in mapping 146 - * @first_irq: first number of irq block assigned to the domain 146 + * @first_irq: first number of irq block assigned to the domain, 147 + * pass zero to assign irqs on-the-fly. This will result in a 148 + * linear IRQ domain so it is important to use irq_create_mapping() 149 + * for each used IRQ, especially when SPARSE_IRQ is enabled. 147 150 * @ops: map/unmap domain callbacks 148 151 * @host_data: Controller private data pointer 149 152 * ··· 194 191 /* A linear domain is the default */ 195 192 return irq_domain_add_linear(of_node, size, ops, host_data); 196 193 } 194 + EXPORT_SYMBOL_GPL(irq_domain_add_simple); 197 195 198 196 /** 199 197 * irq_domain_add_legacy() - Allocate and register a legacy revmap irq_domain. ··· 401 397 while (count--) { 402 398 int irq = irq_base + count; 403 399 struct irq_data *irq_data = irq_get_irq_data(irq); 404 - irq_hw_number_t hwirq = irq_data->hwirq; 400 + irq_hw_number_t hwirq; 405 401 406 402 if (WARN_ON(!irq_data || irq_data->domain != domain)) 407 403 continue; 408 404 405 + hwirq = irq_data->hwirq; 409 406 irq_set_status_flags(irq, IRQ_NOREQUEST); 410 407 411 408 /* remove chip and handler */
+50 -41
kernel/printk.c
··· 363 363 log_next_seq++; 364 364 } 365 365 366 + #ifdef CONFIG_SECURITY_DMESG_RESTRICT 367 + int dmesg_restrict = 1; 368 + #else 369 + int dmesg_restrict; 370 + #endif 371 + 372 + static int syslog_action_restricted(int type) 373 + { 374 + if (dmesg_restrict) 375 + return 1; 376 + /* 377 + * Unless restricted, we allow "read all" and "get buffer size" 378 + * for everybody. 379 + */ 380 + return type != SYSLOG_ACTION_READ_ALL && 381 + type != SYSLOG_ACTION_SIZE_BUFFER; 382 + } 383 + 384 + static int check_syslog_permissions(int type, bool from_file) 385 + { 386 + /* 387 + * If this is from /proc/kmsg and we've already opened it, then we've 388 + * already done the capabilities checks at open time. 389 + */ 390 + if (from_file && type != SYSLOG_ACTION_OPEN) 391 + return 0; 392 + 393 + if (syslog_action_restricted(type)) { 394 + if (capable(CAP_SYSLOG)) 395 + return 0; 396 + /* 397 + * For historical reasons, accept CAP_SYS_ADMIN too, with 398 + * a warning. 399 + */ 400 + if (capable(CAP_SYS_ADMIN)) { 401 + pr_warn_once("%s (%d): Attempt to access syslog with " 402 + "CAP_SYS_ADMIN but no CAP_SYSLOG " 403 + "(deprecated).\n", 404 + current->comm, task_pid_nr(current)); 405 + return 0; 406 + } 407 + return -EPERM; 408 + } 409 + return security_syslog(type); 410 + } 411 + 412 + 366 413 /* /dev/kmsg - userspace message inject/listen interface */ 367 414 struct devkmsg_user { 368 415 u64 seq; ··· 667 620 if ((file->f_flags & O_ACCMODE) == O_WRONLY) 668 621 return 0; 669 622 670 - err = security_syslog(SYSLOG_ACTION_READ_ALL); 623 + err = check_syslog_permissions(SYSLOG_ACTION_READ_ALL, 624 + SYSLOG_FROM_READER); 671 625 if (err) 672 626 return err; 673 627 ··· 860 812 { 861 813 } 862 814 #endif 863 - 864 - #ifdef CONFIG_SECURITY_DMESG_RESTRICT 865 - int dmesg_restrict = 1; 866 - #else 867 - int dmesg_restrict; 868 - #endif 869 - 870 - static int syslog_action_restricted(int type) 871 - { 872 - if (dmesg_restrict) 873 - return 1; 874 - /* Unless restricted, we allow "read all" and "get buffer size" for everybody */ 875 - return type != SYSLOG_ACTION_READ_ALL && type != SYSLOG_ACTION_SIZE_BUFFER; 876 - } 877 - 878 - static int check_syslog_permissions(int type, bool from_file) 879 - { 880 - /* 881 - * If this is from /proc/kmsg and we've already opened it, then we've 882 - * already done the capabilities checks at open time. 883 - */ 884 - if (from_file && type != SYSLOG_ACTION_OPEN) 885 - return 0; 886 - 887 - if (syslog_action_restricted(type)) { 888 - if (capable(CAP_SYSLOG)) 889 - return 0; 890 - /* For historical reasons, accept CAP_SYS_ADMIN too, with a warning */ 891 - if (capable(CAP_SYS_ADMIN)) { 892 - printk_once(KERN_WARNING "%s (%d): " 893 - "Attempt to access syslog with CAP_SYS_ADMIN " 894 - "but no CAP_SYSLOG (deprecated).\n", 895 - current->comm, task_pid_nr(current)); 896 - return 0; 897 - } 898 - return -EPERM; 899 - } 900 - return 0; 901 - } 902 815 903 816 #if defined(CONFIG_PRINTK_TIME) 904 817 static bool printk_time = 1; ··· 1258 1249 1259 1250 SYSCALL_DEFINE3(syslog, int, type, char __user *, buf, int, len) 1260 1251 { 1261 - return do_syslog(type, buf, len, SYSLOG_FROM_CALL); 1252 + return do_syslog(type, buf, len, SYSLOG_FROM_READER); 1262 1253 } 1263 1254 1264 1255 /*
+17 -4
kernel/rcutree.c
··· 1451 1451 rnp->grphi, rnp->qsmask); 1452 1452 raw_spin_unlock_irq(&rnp->lock); 1453 1453 #ifdef CONFIG_PROVE_RCU_DELAY 1454 - if ((prandom_u32() % (rcu_num_nodes * 8)) == 0 && 1454 + if ((prandom_u32() % (rcu_num_nodes + 1)) == 0 && 1455 1455 system_state == SYSTEM_RUNNING) 1456 - schedule_timeout_uninterruptible(2); 1456 + udelay(200); 1457 1457 #endif /* #ifdef CONFIG_PROVE_RCU_DELAY */ 1458 1458 cond_resched(); 1459 1459 } ··· 1613 1613 } 1614 1614 } 1615 1615 1616 + static void rsp_wakeup(struct irq_work *work) 1617 + { 1618 + struct rcu_state *rsp = container_of(work, struct rcu_state, wakeup_work); 1619 + 1620 + /* Wake up rcu_gp_kthread() to start the grace period. */ 1621 + wake_up(&rsp->gp_wq); 1622 + } 1623 + 1616 1624 /* 1617 1625 * Start a new RCU grace period if warranted, re-initializing the hierarchy 1618 1626 * in preparation for detecting the next grace period. The caller must hold ··· 1645 1637 } 1646 1638 rsp->gp_flags = RCU_GP_FLAG_INIT; 1647 1639 1648 - /* Wake up rcu_gp_kthread() to start the grace period. */ 1649 - wake_up(&rsp->gp_wq); 1640 + /* 1641 + * We can't do wakeups while holding the rnp->lock, as that 1642 + * could cause possible deadlocks with the rq->lock. Defer 1643 + * the wakeup to interrupt context. 1644 + */ 1645 + irq_work_queue(&rsp->wakeup_work); 1650 1646 } 1651 1647 1652 1648 /* ··· 3247 3235 3248 3236 rsp->rda = rda; 3249 3237 init_waitqueue_head(&rsp->gp_wq); 3238 + init_irq_work(&rsp->wakeup_work, rsp_wakeup); 3250 3239 rnp = rsp->level[rcu_num_lvls - 1]; 3251 3240 for_each_possible_cpu(i) { 3252 3241 while (i > rnp->grphi)
+2
kernel/rcutree.h
··· 27 27 #include <linux/threads.h> 28 28 #include <linux/cpumask.h> 29 29 #include <linux/seqlock.h> 30 + #include <linux/irq_work.h> 30 31 31 32 /* 32 33 * Define shape of hierarchy based on NR_CPUS, CONFIG_RCU_FANOUT, and ··· 443 442 char *name; /* Name of structure. */ 444 443 char abbr; /* Abbreviated name. */ 445 444 struct list_head flavors; /* List of RCU flavors. */ 445 + struct irq_work wakeup_work; /* Postponed wakeups */ 446 446 }; 447 447 448 448 /* Values for rcu_state structure's gp_flags field. */
+10 -3
kernel/softirq.c
··· 195 195 EXPORT_SYMBOL(local_bh_enable_ip); 196 196 197 197 /* 198 - * We restart softirq processing for at most 2 ms, 199 - * and if need_resched() is not set. 198 + * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times, 199 + * but break the loop if need_resched() is set or after 2 ms. 200 + * The MAX_SOFTIRQ_TIME provides a nice upper bound in most cases, but in 201 + * certain cases, such as stop_machine(), jiffies may cease to 202 + * increment and so we need the MAX_SOFTIRQ_RESTART limit as 203 + * well to make sure we eventually return from this method. 200 204 * 201 205 * These limits have been established via experimentation. 202 206 * The two things to balance are latency against fairness - ··· 208 204 * should not be able to lock up the box. 209 205 */ 210 206 #define MAX_SOFTIRQ_TIME msecs_to_jiffies(2) 207 + #define MAX_SOFTIRQ_RESTART 10 211 208 212 209 asmlinkage void __do_softirq(void) 213 210 { ··· 217 212 unsigned long end = jiffies + MAX_SOFTIRQ_TIME; 218 213 int cpu; 219 214 unsigned long old_flags = current->flags; 215 + int max_restart = MAX_SOFTIRQ_RESTART; 220 216 221 217 /* 222 218 * Mask out PF_MEMALLOC as current task context is borrowed for the ··· 271 265 272 266 pending = local_softirq_pending(); 273 267 if (pending) { 274 - if (time_before(jiffies, end) && !need_resched()) 268 + if (time_before(jiffies, end) && !need_resched() && 269 + --max_restart) 275 270 goto restart; 276 271 277 272 wakeup_softirqd();
+26 -3
kernel/sys.c
··· 362 362 } 363 363 EXPORT_SYMBOL(unregister_reboot_notifier); 364 364 365 + /* Add backwards compatibility for stable trees. */ 366 + #ifndef PF_NO_SETAFFINITY 367 + #define PF_NO_SETAFFINITY PF_THREAD_BOUND 368 + #endif 369 + 370 + static void migrate_to_reboot_cpu(void) 371 + { 372 + /* The boot cpu is always logical cpu 0 */ 373 + int cpu = 0; 374 + 375 + cpu_hotplug_disable(); 376 + 377 + /* Make certain the cpu I'm about to reboot on is online */ 378 + if (!cpu_online(cpu)) 379 + cpu = cpumask_first(cpu_online_mask); 380 + 381 + /* Prevent races with other tasks migrating this task */ 382 + current->flags |= PF_NO_SETAFFINITY; 383 + 384 + /* Make certain I only run on the appropriate processor */ 385 + set_cpus_allowed_ptr(current, cpumask_of(cpu)); 386 + } 387 + 365 388 /** 366 389 * kernel_restart - reboot the system 367 390 * @cmd: pointer to buffer containing command to execute for restart ··· 396 373 void kernel_restart(char *cmd) 397 374 { 398 375 kernel_restart_prepare(cmd); 399 - disable_nonboot_cpus(); 376 + migrate_to_reboot_cpu(); 400 377 syscore_shutdown(); 401 378 if (!cmd) 402 379 printk(KERN_EMERG "Restarting system.\n"); ··· 423 400 void kernel_halt(void) 424 401 { 425 402 kernel_shutdown_prepare(SYSTEM_HALT); 426 - disable_nonboot_cpus(); 403 + migrate_to_reboot_cpu(); 427 404 syscore_shutdown(); 428 405 printk(KERN_EMERG "System halted.\n"); 429 406 kmsg_dump(KMSG_DUMP_HALT); ··· 442 419 kernel_shutdown_prepare(SYSTEM_POWER_OFF); 443 420 if (pm_power_off_prepare) 444 421 pm_power_off_prepare(); 445 - disable_nonboot_cpus(); 422 + migrate_to_reboot_cpu(); 446 423 syscore_shutdown(); 447 424 printk(KERN_EMERG "Power down.\n"); 448 425 kmsg_dump(KMSG_DUMP_POWEROFF);
-1
kernel/time/ntp.c
··· 874 874 void __hardpps(const struct timespec *phase_ts, const struct timespec *raw_ts) 875 875 { 876 876 struct pps_normtime pts_norm, freq_norm; 877 - unsigned long flags; 878 877 879 878 pts_norm = pps_normalize_ts(*phase_ts); 880 879
+7 -1
kernel/time/tick-broadcast.c
··· 511 511 } 512 512 } 513 513 514 + /* 515 + * Remove the current cpu from the pending mask. The event is 516 + * delivered immediately in tick_do_broadcast() ! 517 + */ 518 + cpumask_clear_cpu(smp_processor_id(), tick_broadcast_pending_mask); 519 + 514 520 /* Take care of enforced broadcast requests */ 515 521 cpumask_or(tmpmask, tmpmask, tick_broadcast_force_mask); 516 522 cpumask_clear(tick_broadcast_force_mask); ··· 581 575 582 576 raw_spin_lock_irqsave(&tick_broadcast_lock, flags); 583 577 if (reason == CLOCK_EVT_NOTIFY_BROADCAST_ENTER) { 584 - WARN_ON_ONCE(cpumask_test_cpu(cpu, tick_broadcast_pending_mask)); 585 578 if (!cpumask_test_and_set_cpu(cpu, tick_broadcast_oneshot_mask)) { 579 + WARN_ON_ONCE(cpumask_test_cpu(cpu, tick_broadcast_pending_mask)); 586 580 clockevents_set_mode(dev, CLOCK_EVT_MODE_SHUTDOWN); 587 581 /* 588 582 * We only reprogram the broadcast timer if we
+8
kernel/time/timekeeping.c
··· 975 975 976 976 read_persistent_clock(&timekeeping_suspend_time); 977 977 978 + /* 979 + * On some systems the persistent_clock can not be detected at 980 + * timekeeping_init by its return value, so if we see a valid 981 + * value returned, update the persistent_clock_exists flag. 982 + */ 983 + if (timekeeping_suspend_time.tv_sec || timekeeping_suspend_time.tv_nsec) 984 + persistent_clock_exist = true; 985 + 978 986 raw_spin_lock_irqsave(&timekeeper_lock, flags); 979 987 write_seqcount_begin(&timekeeper_seq); 980 988 timekeeping_forward_now(tk);
+9 -9
kernel/trace/ftrace.c
··· 120 120 121 121 /* 122 122 * Traverse the ftrace_global_list, invoking all entries. The reason that we 123 - * can use rcu_dereference_raw() is that elements removed from this list 123 + * can use rcu_dereference_raw_notrace() is that elements removed from this list 124 124 * are simply leaked, so there is no need to interact with a grace-period 125 - * mechanism. The rcu_dereference_raw() calls are needed to handle 125 + * mechanism. The rcu_dereference_raw_notrace() calls are needed to handle 126 126 * concurrent insertions into the ftrace_global_list. 127 127 * 128 128 * Silly Alpha and silly pointer-speculation compiler optimizations! 129 129 */ 130 130 #define do_for_each_ftrace_op(op, list) \ 131 - op = rcu_dereference_raw(list); \ 131 + op = rcu_dereference_raw_notrace(list); \ 132 132 do 133 133 134 134 /* 135 135 * Optimized for just a single item in the list (as that is the normal case). 136 136 */ 137 137 #define while_for_each_ftrace_op(op) \ 138 - while (likely(op = rcu_dereference_raw((op)->next)) && \ 138 + while (likely(op = rcu_dereference_raw_notrace((op)->next)) && \ 139 139 unlikely((op) != &ftrace_list_end)) 140 140 141 141 static inline void ftrace_ops_init(struct ftrace_ops *ops) ··· 779 779 if (hlist_empty(hhd)) 780 780 return NULL; 781 781 782 - hlist_for_each_entry_rcu(rec, hhd, node) { 782 + hlist_for_each_entry_rcu_notrace(rec, hhd, node) { 783 783 if (rec->ip == ip) 784 784 return rec; 785 785 } ··· 1165 1165 1166 1166 hhd = &hash->buckets[key]; 1167 1167 1168 - hlist_for_each_entry_rcu(entry, hhd, hlist) { 1168 + hlist_for_each_entry_rcu_notrace(entry, hhd, hlist) { 1169 1169 if (entry->ip == ip) 1170 1170 return entry; 1171 1171 } ··· 1422 1422 struct ftrace_hash *notrace_hash; 1423 1423 int ret; 1424 1424 1425 - filter_hash = rcu_dereference_raw(ops->filter_hash); 1426 - notrace_hash = rcu_dereference_raw(ops->notrace_hash); 1425 + filter_hash = rcu_dereference_raw_notrace(ops->filter_hash); 1426 + notrace_hash = rcu_dereference_raw_notrace(ops->notrace_hash); 1427 1427 1428 1428 if ((ftrace_hash_empty(filter_hash) || 1429 1429 ftrace_lookup_ip(filter_hash, ip)) && ··· 2920 2920 * on the hash. rcu_read_lock is too dangerous here. 2921 2921 */ 2922 2922 preempt_disable_notrace(); 2923 - hlist_for_each_entry_rcu(entry, hhd, node) { 2923 + hlist_for_each_entry_rcu_notrace(entry, hhd, node) { 2924 2924 if (entry->ip == ip) 2925 2925 entry->ops->func(ip, parent_ip, &entry->data); 2926 2926 }
+12 -6
kernel/trace/trace.c
··· 652 652 ARCH_TRACE_CLOCKS 653 653 }; 654 654 655 - int trace_clock_id; 656 - 657 655 /* 658 656 * trace_parser_get_init - gets the buffer for trace parser 659 657 */ ··· 841 843 842 844 memcpy(max_data->comm, tsk->comm, TASK_COMM_LEN); 843 845 max_data->pid = tsk->pid; 844 - max_data->uid = task_uid(tsk); 846 + /* 847 + * If tsk == current, then use current_uid(), as that does not use 848 + * RCU. The irq tracer can be called out of RCU scope. 849 + */ 850 + if (tsk == current) 851 + max_data->uid = current_uid(); 852 + else 853 + max_data->uid = task_uid(tsk); 854 + 845 855 max_data->nice = tsk->static_prio - 20 - MAX_RT_PRIO; 846 856 max_data->policy = tsk->policy; 847 857 max_data->rt_priority = tsk->rt_priority; ··· 2824 2818 iter->iter_flags |= TRACE_FILE_ANNOTATE; 2825 2819 2826 2820 /* Output in nanoseconds only if we are using a clock in nanoseconds. */ 2827 - if (trace_clocks[trace_clock_id].in_ns) 2821 + if (trace_clocks[tr->clock_id].in_ns) 2828 2822 iter->iter_flags |= TRACE_FILE_TIME_IN_NS; 2829 2823 2830 2824 /* stop the trace while dumping if we are not opening "snapshot" */ ··· 3823 3817 iter->iter_flags |= TRACE_FILE_LAT_FMT; 3824 3818 3825 3819 /* Output in nanoseconds only if we are using a clock in nanoseconds. */ 3826 - if (trace_clocks[trace_clock_id].in_ns) 3820 + if (trace_clocks[tr->clock_id].in_ns) 3827 3821 iter->iter_flags |= TRACE_FILE_TIME_IN_NS; 3828 3822 3829 3823 iter->cpu_file = tc->cpu; ··· 5093 5087 cnt = ring_buffer_bytes_cpu(trace_buf->buffer, cpu); 5094 5088 trace_seq_printf(s, "bytes: %ld\n", cnt); 5095 5089 5096 - if (trace_clocks[trace_clock_id].in_ns) { 5090 + if (trace_clocks[tr->clock_id].in_ns) { 5097 5091 /* local or global for trace_clock */ 5098 5092 t = ns2usecs(ring_buffer_oldest_event_ts(trace_buf->buffer, cpu)); 5099 5093 usec_rem = do_div(t, USEC_PER_SEC);
-2
kernel/trace/trace.h
··· 700 700 701 701 extern unsigned long trace_flags; 702 702 703 - extern int trace_clock_id; 704 - 705 703 /* Standard output formatting function used for function return traces */ 706 704 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 707 705
+1 -1
kernel/trace/trace_selftest.c
··· 1159 1159 /* stop the tracing. */ 1160 1160 tracing_stop(); 1161 1161 /* check the trace buffer */ 1162 - ret = trace_test_buffer(tr, &count); 1162 + ret = trace_test_buffer(&tr->trace_buffer, &count); 1163 1163 trace->reset(tr); 1164 1164 tracing_start(); 1165 1165
+1 -1
lib/mpi/mpicoder.c
··· 37 37 mpi_limb_t a; 38 38 MPI val = NULL; 39 39 40 - while (nbytes >= 0 && buffer[0] == 0) { 40 + while (nbytes > 0 && buffer[0] == 0) { 41 41 buffer++; 42 42 nbytes--; 43 43 }
+1 -1
mm/frontswap.c
··· 319 319 return; 320 320 frontswap_ops->invalidate_area(type); 321 321 atomic_set(&sis->frontswap_pages, 0); 322 - memset(sis->frontswap_map, 0, sis->max / sizeof(long)); 322 + bitmap_zero(sis->frontswap_map, sis->max); 323 323 } 324 324 clear_bit(type, need_init); 325 325 }
+1 -1
mm/hugetlb.c
··· 2839 2839 if (ptep) { 2840 2840 entry = huge_ptep_get(ptep); 2841 2841 if (unlikely(is_hugetlb_entry_migration(entry))) { 2842 - migration_entry_wait(mm, (pmd_t *)ptep, address); 2842 + migration_entry_wait_huge(mm, ptep); 2843 2843 return 0; 2844 2844 } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) 2845 2845 return VM_FAULT_HWPOISON_LARGE |
+5 -9
mm/memcontrol.c
··· 1199 1199 1200 1200 mz = mem_cgroup_zoneinfo(root, nid, zid); 1201 1201 iter = &mz->reclaim_iter[reclaim->priority]; 1202 - last_visited = iter->last_visited; 1203 1202 if (prev && reclaim->generation != iter->generation) { 1204 1203 iter->last_visited = NULL; 1205 1204 goto out_unlock; ··· 1217 1218 * is alive. 1218 1219 */ 1219 1220 dead_count = atomic_read(&root->dead_count); 1220 - smp_rmb(); 1221 - last_visited = iter->last_visited; 1222 - if (last_visited) { 1223 - if ((dead_count != iter->last_dead_count) || 1224 - !css_tryget(&last_visited->css)) { 1221 + if (dead_count == iter->last_dead_count) { 1222 + smp_rmb(); 1223 + last_visited = iter->last_visited; 1224 + if (last_visited && 1225 + !css_tryget(&last_visited->css)) 1225 1226 last_visited = NULL; 1226 - } 1227 1227 } 1228 1228 } 1229 1229 ··· 3139 3141 return -ENOMEM; 3140 3142 } 3141 3143 3142 - INIT_WORK(&s->memcg_params->destroy, 3143 - kmem_cache_destroy_work_func); 3144 3144 s->memcg_params->is_root_cache = true; 3145 3145 3146 3146 /*
+18 -5
mm/migrate.c
··· 200 200 * get to the page and wait until migration is finished. 201 201 * When we return from this function the fault will be retried. 202 202 */ 203 - void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, 204 - unsigned long address) 203 + static void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep, 204 + spinlock_t *ptl) 205 205 { 206 - pte_t *ptep, pte; 207 - spinlock_t *ptl; 206 + pte_t pte; 208 207 swp_entry_t entry; 209 208 struct page *page; 210 209 211 - ptep = pte_offset_map_lock(mm, pmd, address, &ptl); 210 + spin_lock(ptl); 212 211 pte = *ptep; 213 212 if (!is_swap_pte(pte)) 214 213 goto out; ··· 233 234 return; 234 235 out: 235 236 pte_unmap_unlock(ptep, ptl); 237 + } 238 + 239 + void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, 240 + unsigned long address) 241 + { 242 + spinlock_t *ptl = pte_lockptr(mm, pmd); 243 + pte_t *ptep = pte_offset_map(pmd, address); 244 + __migration_entry_wait(mm, ptep, ptl); 245 + } 246 + 247 + void migration_entry_wait_huge(struct mm_struct *mm, pte_t *pte) 248 + { 249 + spinlock_t *ptl = &(mm)->page_table_lock; 250 + __migration_entry_wait(mm, pte, ptl); 236 251 } 237 252 238 253 #ifdef CONFIG_BLOCK
+4 -2
mm/page_alloc.c
··· 1628 1628 long min = mark; 1629 1629 long lowmem_reserve = z->lowmem_reserve[classzone_idx]; 1630 1630 int o; 1631 + long free_cma = 0; 1631 1632 1632 1633 free_pages -= (1 << order) - 1; 1633 1634 if (alloc_flags & ALLOC_HIGH) ··· 1638 1637 #ifdef CONFIG_CMA 1639 1638 /* If allocation can't use CMA areas don't use free CMA pages */ 1640 1639 if (!(alloc_flags & ALLOC_CMA)) 1641 - free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES); 1640 + free_cma = zone_page_state(z, NR_FREE_CMA_PAGES); 1642 1641 #endif 1643 - if (free_pages <= min + lowmem_reserve) 1642 + 1643 + if (free_pages - free_cma <= min + lowmem_reserve) 1644 1644 return false; 1645 1645 for (o = 0; o < order; o++) { 1646 1646 /* At the next order, this order's pages become unavailable */
+17 -1
mm/swap_state.c
··· 336 336 * Swap entry may have been freed since our caller observed it. 337 337 */ 338 338 err = swapcache_prepare(entry); 339 - if (err == -EEXIST) { /* seems racy */ 339 + if (err == -EEXIST) { 340 340 radix_tree_preload_end(); 341 + /* 342 + * We might race against get_swap_page() and stumble 343 + * across a SWAP_HAS_CACHE swap_map entry whose page 344 + * has not been brought into the swapcache yet, while 345 + * the other end is scheduled away waiting on discard 346 + * I/O completion at scan_swap_map(). 347 + * 348 + * In order to avoid turning this transitory state 349 + * into a permanent loop around this -EEXIST case 350 + * if !CONFIG_PREEMPT and the I/O completion happens 351 + * to be waiting on the CPU waitqueue where we are now 352 + * busy looping, we just conditionally invoke the 353 + * scheduler here, if there are some more important 354 + * tasks to run. 355 + */ 356 + cond_resched(); 341 357 continue; 342 358 } 343 359 if (err) { /* swp entry is obsolete ? */
+1 -1
mm/swapfile.c
··· 2116 2116 } 2117 2117 /* frontswap enabled? set up bit-per-page map for frontswap */ 2118 2118 if (frontswap_enabled) 2119 - frontswap_map = vzalloc(maxpages / sizeof(long)); 2119 + frontswap_map = vzalloc(BITS_TO_LONGS(maxpages) * sizeof(long)); 2120 2120 2121 2121 if (p->bdev) { 2122 2122 if (blk_queue_nonrot(bdev_get_queue(p->bdev))) {
+18 -37
net/9p/client.c
··· 562 562 563 563 if (!p9_is_proto_dotl(c)) { 564 564 /* Error is reported in string format */ 565 - uint16_t len; 566 - /* 7 = header size for RERROR, 2 is the size of string len; */ 567 - int inline_len = in_hdrlen - (7 + 2); 565 + int len; 566 + /* 7 = header size for RERROR; */ 567 + int inline_len = in_hdrlen - 7; 568 568 569 - /* Read the size of error string */ 570 - err = p9pdu_readf(req->rc, c->proto_version, "w", &len); 571 - if (err) 572 - goto out_err; 573 - 574 - ename = kmalloc(len + 1, GFP_NOFS); 575 - if (!ename) { 576 - err = -ENOMEM; 569 + len = req->rc->size - req->rc->offset; 570 + if (len > (P9_ZC_HDR_SZ - 7)) { 571 + err = -EFAULT; 577 572 goto out_err; 578 573 } 579 - if (len <= inline_len) { 580 - /* We have error in protocol buffer itself */ 581 - if (pdu_read(req->rc, ename, len)) { 582 - err = -EFAULT; 583 - goto out_free; 584 574 585 - } 586 - } else { 587 - /* 588 - * Part of the data is in user space buffer. 589 - */ 590 - if (pdu_read(req->rc, ename, inline_len)) { 591 - err = -EFAULT; 592 - goto out_free; 593 - 594 - } 575 + ename = &req->rc->sdata[req->rc->offset]; 576 + if (len > inline_len) { 577 + /* We have error in external buffer */ 595 578 if (kern_buf) { 596 579 memcpy(ename + inline_len, uidata, 597 580 len - inline_len); ··· 583 600 uidata, len - inline_len); 584 601 if (err) { 585 602 err = -EFAULT; 586 - goto out_free; 603 + goto out_err; 587 604 } 588 605 } 589 606 } 590 - ename[len] = 0; 591 - if (p9_is_proto_dotu(c)) { 592 - /* For dotu we also have error code */ 593 - err = p9pdu_readf(req->rc, 594 - c->proto_version, "d", &ecode); 595 - if (err) 596 - goto out_free; 607 + ename = NULL; 608 + err = p9pdu_readf(req->rc, c->proto_version, "s?d", 609 + &ename, &ecode); 610 + if (err) 611 + goto out_err; 612 + 613 + if (p9_is_proto_dotu(c)) 597 614 err = -ecode; 598 - } 615 + 599 616 if (!err || !IS_ERR_VALUE(err)) { 600 617 err = p9_errstr2errno(ename, strlen(ename)); 601 618 ··· 611 628 } 612 629 return err; 613 630 
614 - out_free: 615 - kfree(ename); 616 631 out_err: 617 632 p9_debug(P9_DEBUG_ERROR, "couldn't parse error%d\n", err); 618 633 return err;
+56 -31
net/batman-adv/bat_iv_ogm.c
··· 70 70 71 71 return (uint8_t)(sum / count); 72 72 } 73 + 74 + /* 75 + * batadv_dup_status - duplicate status 76 + * @BATADV_NO_DUP: the packet is a duplicate 77 + * @BATADV_ORIG_DUP: OGM is a duplicate in the originator (but not for the 78 + * neighbor) 79 + * @BATADV_NEIGH_DUP: OGM is a duplicate for the neighbor 80 + * @BATADV_PROTECTED: originator is currently protected (after reboot) 81 + */ 82 + enum batadv_dup_status { 83 + BATADV_NO_DUP = 0, 84 + BATADV_ORIG_DUP, 85 + BATADV_NEIGH_DUP, 86 + BATADV_PROTECTED, 87 + }; 88 + 73 89 static struct batadv_neigh_node * 74 90 batadv_iv_ogm_neigh_new(struct batadv_hard_iface *hard_iface, 75 91 const uint8_t *neigh_addr, ··· 739 723 const struct batadv_ogm_packet *batadv_ogm_packet, 740 724 struct batadv_hard_iface *if_incoming, 741 725 const unsigned char *tt_buff, 742 - int is_duplicate) 726 + enum batadv_dup_status dup_status) 743 727 { 744 728 struct batadv_neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; 745 729 struct batadv_neigh_node *router = NULL; ··· 765 749 continue; 766 750 } 767 751 768 - if (is_duplicate) 752 + if (dup_status != BATADV_NO_DUP) 769 753 continue; 770 754 771 755 spin_lock_bh(&tmp_neigh_node->lq_update_lock); ··· 806 790 neigh_node->tq_avg = batadv_ring_buffer_avg(neigh_node->tq_recv); 807 791 spin_unlock_bh(&neigh_node->lq_update_lock); 808 792 809 - if (!is_duplicate) { 793 + if (dup_status == BATADV_NO_DUP) { 810 794 orig_node->last_ttl = batadv_ogm_packet->header.ttl; 811 795 neigh_node->last_ttl = batadv_ogm_packet->header.ttl; 812 796 } ··· 989 973 return ret; 990 974 } 991 975 992 - /* processes a batman packet for all interfaces, adjusts the sequence number and 993 - * finds out whether it is a duplicate. 994 - * returns: 995 - * 1 the packet is a duplicate 996 - * 0 the packet has not yet been received 997 - * -1 the packet is old and has been received while the seqno window 998 - * was protected. Caller should drop it. 
976 + /** 977 + * batadv_iv_ogm_update_seqnos - process a batman packet for all interfaces, 978 + * adjust the sequence number and find out whether it is a duplicate 979 + * @ethhdr: ethernet header of the packet 980 + * @batadv_ogm_packet: OGM packet to be considered 981 + * @if_incoming: interface on which the OGM packet was received 982 + * 983 + * Returns duplicate status as enum batadv_dup_status 999 984 */ 1000 - static int 985 + static enum batadv_dup_status 1001 986 batadv_iv_ogm_update_seqnos(const struct ethhdr *ethhdr, 1002 987 const struct batadv_ogm_packet *batadv_ogm_packet, 1003 988 const struct batadv_hard_iface *if_incoming) ··· 1006 989 struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); 1007 990 struct batadv_orig_node *orig_node; 1008 991 struct batadv_neigh_node *tmp_neigh_node; 1009 - int is_duplicate = 0; 992 + int is_dup; 1010 993 int32_t seq_diff; 1011 994 int need_update = 0; 1012 - int set_mark, ret = -1; 995 + int set_mark; 996 + enum batadv_dup_status ret = BATADV_NO_DUP; 1013 997 uint32_t seqno = ntohl(batadv_ogm_packet->seqno); 1014 998 uint8_t *neigh_addr; 1015 999 uint8_t packet_count; 1016 1000 1017 1001 orig_node = batadv_get_orig_node(bat_priv, batadv_ogm_packet->orig); 1018 1002 if (!orig_node) 1019 - return 0; 1003 + return BATADV_NO_DUP; 1020 1004 1021 1005 spin_lock_bh(&orig_node->ogm_cnt_lock); 1022 1006 seq_diff = seqno - orig_node->last_real_seqno; ··· 1025 1007 /* signalize caller that the packet is to be dropped. */
1026 1008 if (!hlist_empty(&orig_node->neigh_list) && 1027 1009 batadv_window_protected(bat_priv, seq_diff, 1028 - &orig_node->batman_seqno_reset)) 1010 + &orig_node->batman_seqno_reset)) { 1011 + ret = BATADV_PROTECTED; 1029 1012 goto out; 1013 + } 1030 1014 1031 1015 rcu_read_lock(); 1032 1016 hlist_for_each_entry_rcu(tmp_neigh_node, 1033 1017 &orig_node->neigh_list, list) { 1034 - is_duplicate |= batadv_test_bit(tmp_neigh_node->real_bits, 1035 - orig_node->last_real_seqno, 1036 - seqno); 1037 - 1038 1018 neigh_addr = tmp_neigh_node->addr; 1019 + is_dup = batadv_test_bit(tmp_neigh_node->real_bits, 1020 + orig_node->last_real_seqno, 1021 + seqno); 1022 + 1039 1023 if (batadv_compare_eth(neigh_addr, ethhdr->h_source) && 1040 - tmp_neigh_node->if_incoming == if_incoming) 1024 + tmp_neigh_node->if_incoming == if_incoming) { 1041 1025 set_mark = 1; 1042 - else 1026 + if (is_dup) 1027 + ret = BATADV_NEIGH_DUP; 1028 + } else { 1043 1029 set_mark = 0; 1030 + if (is_dup && (ret != BATADV_NEIGH_DUP)) 1031 + ret = BATADV_ORIG_DUP; 1032 + } 1044 1033 1045 1034 /* if the window moved, set the update flag. */
1046 1035 need_update |= batadv_bit_get_packet(bat_priv, ··· 1066 1041 orig_node->last_real_seqno, seqno); 1067 1042 orig_node->last_real_seqno = seqno; 1068 1043 } 1069 - 1070 - ret = is_duplicate; 1071 1044 1072 1045 out: 1073 1046 spin_unlock_bh(&orig_node->ogm_cnt_lock); ··· 1088 1065 int is_bidirect; 1089 1066 bool is_single_hop_neigh = false; 1090 1067 bool is_from_best_next_hop = false; 1091 - int is_duplicate, sameseq, simlar_ttl; 1068 + int sameseq, similar_ttl; 1069 + enum batadv_dup_status dup_status; 1092 1070 uint32_t if_incoming_seqno; 1093 1071 uint8_t *prev_sender; 1094 1072 ··· 1216 1192 if (!orig_node) 1217 1193 return; 1218 1194 1219 - is_duplicate = batadv_iv_ogm_update_seqnos(ethhdr, batadv_ogm_packet, 1220 - if_incoming); 1195 + dup_status = batadv_iv_ogm_update_seqnos(ethhdr, batadv_ogm_packet, 1196 + if_incoming); 1221 1197 1222 - if (is_duplicate == -1) { 1198 + if (dup_status == BATADV_PROTECTED) { 1223 1199 batadv_dbg(BATADV_DBG_BATMAN, bat_priv, 1224 1200 "Drop packet: packet within seqno protection time (sender: %pM)\n", 1225 1201 ethhdr->h_source); ··· 1289 1265 * seqno and similar ttl as the non-duplicate 1290 1266 */ 1291 1267 sameseq = orig_node->last_real_seqno == ntohl(batadv_ogm_packet->seqno); 1292 - simlar_ttl = orig_node->last_ttl - 3 <= batadv_ogm_packet->header.ttl; 1293 - if (is_bidirect && (!is_duplicate || (sameseq && simlar_ttl))) 1268 + similar_ttl = orig_node->last_ttl - 3 <= batadv_ogm_packet->header.ttl; 1269 + if (is_bidirect && ((dup_status == BATADV_NO_DUP) || 1270 + (sameseq && similar_ttl))) 1294 1271 batadv_iv_ogm_orig_update(bat_priv, orig_node, ethhdr, 1295 1272 batadv_ogm_packet, if_incoming, 1296 - tt_buff, is_duplicate); 1273 + tt_buff, dup_status); 1297 1274 1298 1275 /* is single hop (direct) neighbor */ 1299 1276 if (is_single_hop_neigh) { ··· 1315 1290 goto out_neigh; 1316 1291 } 1317 1292 1318 - if (is_duplicate) { 1293 + if (dup_status == BATADV_NEIGH_DUP) { 1319 1294 batadv_dbg(BATADV_DBG_BATMAN, bat_priv,
1320 1295 "Drop packet: duplicate packet received\n"); 1321 1296 goto out_neigh;
+4
net/batman-adv/bridge_loop_avoidance.c
··· 1073 1073 group = htons(crc16(0, primary_if->net_dev->dev_addr, ETH_ALEN)); 1074 1074 bat_priv->bla.claim_dest.group = group; 1075 1075 1076 + /* purge everything when bridge loop avoidance is turned off */ 1077 + if (!atomic_read(&bat_priv->bridge_loop_avoidance)) 1078 + oldif = NULL; 1079 + 1076 1080 if (!oldif) { 1077 1081 batadv_bla_purge_claims(bat_priv, NULL, 1); 1078 1082 batadv_bla_purge_backbone_gw(bat_priv, 1);
+1 -4
net/batman-adv/sysfs.c
··· 582 582 (strncmp(hard_iface->soft_iface->name, buff, IFNAMSIZ) == 0)) 583 583 goto out; 584 584 585 - if (!rtnl_trylock()) { 586 - ret = -ERESTARTSYS; 587 - goto out; 588 - } 585 + rtnl_lock(); 589 586 590 587 if (status_tmp == BATADV_IF_NOT_IN_USE) { 591 588 batadv_hardif_disable_interface(hard_iface,
+15 -6
net/bluetooth/hci_core.c
··· 341 341 342 342 static void bredr_setup(struct hci_request *req) 343 343 { 344 - struct hci_cp_delete_stored_link_key cp; 345 344 __le16 param; 346 345 __u8 flt_type; 347 346 ··· 363 364 /* Connection accept timeout ~20 secs */ 364 365 param = __constant_cpu_to_le16(0x7d00); 365 366 hci_req_add(req, HCI_OP_WRITE_CA_TIMEOUT, 2, &param); 366 - 367 - bacpy(&cp.bdaddr, BDADDR_ANY); 368 - cp.delete_all = 0x01; 369 - hci_req_add(req, HCI_OP_DELETE_STORED_LINK_KEY, sizeof(cp), &cp); 370 367 371 368 /* Read page scan parameters */ 372 369 if (req->hdev->hci_ver > BLUETOOTH_VER_1_1) { ··· 596 601 { 597 602 struct hci_dev *hdev = req->hdev; 598 603 u8 p; 604 + 605 + /* Only send HCI_Delete_Stored_Link_Key if it is supported */ 606 + if (hdev->commands[6] & 0x80) { 607 + struct hci_cp_delete_stored_link_key cp; 608 + 609 + bacpy(&cp.bdaddr, BDADDR_ANY); 610 + cp.delete_all = 0x01; 611 + hci_req_add(req, HCI_OP_DELETE_STORED_LINK_KEY, 612 + sizeof(cp), &cp); 613 + } 599 614 600 615 if (hdev->commands[5] & 0x10) 601 616 hci_setup_link_policy(req); ··· 1560 1555 static void hci_power_on(struct work_struct *work) 1561 1556 { 1562 1557 struct hci_dev *hdev = container_of(work, struct hci_dev, power_on); 1558 + int err; 1563 1559 1564 1560 BT_DBG("%s", hdev->name); 1565 1561 1566 - if (hci_dev_open(hdev->id) < 0) 1562 + err = hci_dev_open(hdev->id); 1563 + if (err < 0) { 1564 + mgmt_set_powered_failed(hdev, err); 1567 1565 return; 1566 + } 1568 1567 1569 1568 if (test_bit(HCI_AUTO_OFF, &hdev->dev_flags)) 1570 1569 queue_delayed_work(hdev->req_workqueue, &hdev->power_off,
+55 -18
net/bluetooth/l2cap_core.c
··· 2852 2852 BT_DBG("conn %p, code 0x%2.2x, ident 0x%2.2x, len %u", 2853 2853 conn, code, ident, dlen); 2854 2854 2855 + if (conn->mtu < L2CAP_HDR_SIZE + L2CAP_CMD_HDR_SIZE) 2856 + return NULL; 2857 + 2855 2858 len = L2CAP_HDR_SIZE + L2CAP_CMD_HDR_SIZE + dlen; 2856 2859 count = min_t(unsigned int, conn->mtu, len); 2857 2860 ··· 3680 3677 } 3681 3678 3682 3679 static inline int l2cap_command_rej(struct l2cap_conn *conn, 3683 - struct l2cap_cmd_hdr *cmd, u8 *data) 3680 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 3681 + u8 *data) 3684 3682 { 3685 3683 struct l2cap_cmd_rej_unk *rej = (struct l2cap_cmd_rej_unk *) data; 3684 + 3685 + if (cmd_len < sizeof(*rej)) 3686 + return -EPROTO; 3686 3687 3687 3688 if (rej->reason != L2CAP_REJ_NOT_UNDERSTOOD) 3688 3689 return 0; ··· 3836 3829 } 3837 3830 3838 3831 static int l2cap_connect_req(struct l2cap_conn *conn, 3839 - struct l2cap_cmd_hdr *cmd, u8 *data) 3832 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, u8 *data) 3840 3833 { 3841 3834 struct hci_dev *hdev = conn->hcon->hdev; 3842 3835 struct hci_conn *hcon = conn->hcon; 3836 + 3837 + if (cmd_len < sizeof(struct l2cap_conn_req)) 3838 + return -EPROTO; 3843 3839 3844 3840 hci_dev_lock(hdev); 3845 3841 if (test_bit(HCI_MGMT, &hdev->dev_flags) && ··· 3857 3847 } 3858 3848 3859 3849 static int l2cap_connect_create_rsp(struct l2cap_conn *conn, 3860 - struct l2cap_cmd_hdr *cmd, u8 *data) 3850 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 3851 + u8 *data) 3861 3852 { 3862 3853 struct l2cap_conn_rsp *rsp = (struct l2cap_conn_rsp *) data; 3863 3854 u16 scid, dcid, result, status; 3864 3855 struct l2cap_chan *chan; 3865 3856 u8 req[128]; 3866 3857 int err; 3858 + 3859 + if (cmd_len < sizeof(*rsp)) 3860 + return -EPROTO; 3867 3861 3868 3862 scid = __le16_to_cpu(rsp->scid); 3869 3863 dcid = __le16_to_cpu(rsp->dcid); ··· 3966 3952 struct l2cap_chan *chan; 3967 3953 int len, err = 0; 3968 3954 3955 + if (cmd_len < sizeof(*req)) 3956 + return -EPROTO; 3957 + 3969 3958 dcid = __le16_to_cpu(req->dcid); 
3970 3959 flags = __le16_to_cpu(req->flags); ··· 3992 3975 3993 3976 /* Reject if config buffer is too small. */ 3994 3977 len = cmd_len - sizeof(*req); 3995 - if (len < 0 || chan->conf_len + len > sizeof(chan->conf_req)) { 3978 + if (chan->conf_len + len > sizeof(chan->conf_req)) { 3996 3979 l2cap_send_cmd(conn, cmd->ident, L2CAP_CONF_RSP, 3997 3980 l2cap_build_conf_rsp(chan, rsp, 3998 3981 L2CAP_CONF_REJECT, flags), rsp); ··· 4070 4053 } 4071 4054 4072 4055 static inline int l2cap_config_rsp(struct l2cap_conn *conn, 4073 - struct l2cap_cmd_hdr *cmd, u8 *data) 4056 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4057 + u8 *data) 4074 4058 { 4075 4059 struct l2cap_conf_rsp *rsp = (struct l2cap_conf_rsp *)data; 4076 4060 u16 scid, flags, result; 4077 4061 struct l2cap_chan *chan; 4078 - int len = le16_to_cpu(cmd->len) - sizeof(*rsp); 4062 + int len = cmd_len - sizeof(*rsp); 4079 4063 int err = 0; 4064 + 4065 + if (cmd_len < sizeof(*rsp)) 4066 + return -EPROTO; 4080 4067 4081 4068 scid = __le16_to_cpu(rsp->scid); 4082 4069 flags = __le16_to_cpu(rsp->flags); ··· 4182 4161 } 4183 4162 4184 4163 static inline int l2cap_disconnect_req(struct l2cap_conn *conn, 4185 - struct l2cap_cmd_hdr *cmd, u8 *data) 4164 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4165 + u8 *data) 4186 4166 { 4187 4167 struct l2cap_disconn_req *req = (struct l2cap_disconn_req *) data; 4188 4168 struct l2cap_disconn_rsp rsp; 4189 4169 u16 dcid, scid; 4190 4170 struct l2cap_chan *chan; 4191 4171 struct sock *sk; 4172 + 4173 + if (cmd_len != sizeof(*req)) 4174 + return -EPROTO; 4192 4175 4193 4176 scid = __le16_to_cpu(req->scid); 4194 4177 dcid = __le16_to_cpu(req->dcid); ··· 4233 4208 } 4234 4209 4235 4210 static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn, 4236 - struct l2cap_cmd_hdr *cmd, u8 *data) 4211 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4212 + u8 *data) 4237 4213 { 4238 4214 struct l2cap_disconn_rsp *rsp = (struct l2cap_disconn_rsp *) data; 4239 4215 u16 dcid, scid; 4240 4216 struct l2cap_chan *chan;
4217 + 4218 + if (cmd_len != sizeof(*rsp)) 4219 + return -EPROTO; 4241 4220 4242 4221 scid = __le16_to_cpu(rsp->scid); 4243 4222 dcid = __le16_to_cpu(rsp->dcid); ··· 4272 4243 } 4273 4244 4274 4245 static inline int l2cap_information_req(struct l2cap_conn *conn, 4275 - struct l2cap_cmd_hdr *cmd, u8 *data) 4246 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4247 + u8 *data) 4276 4248 { 4277 4249 struct l2cap_info_req *req = (struct l2cap_info_req *) data; 4278 4250 u16 type; 4251 + 4252 + if (cmd_len != sizeof(*req)) 4253 + return -EPROTO; 4279 4254 4280 4255 type = __le16_to_cpu(req->type); 4281 4256 ··· 4327 4294 } 4328 4295 4329 4296 static inline int l2cap_information_rsp(struct l2cap_conn *conn, 4330 - struct l2cap_cmd_hdr *cmd, u8 *data) 4297 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4298 + u8 *data) 4331 4299 { 4332 4300 struct l2cap_info_rsp *rsp = (struct l2cap_info_rsp *) data; 4333 4301 u16 type, result; 4302 + 4303 + if (cmd_len != sizeof(*rsp)) 4304 + return -EPROTO; 4334 4305 4335 4306 type = __le16_to_cpu(rsp->type); 4336 4307 result = __le16_to_cpu(rsp->result); ··· 5201 5164 5202 5165 switch (cmd->code) { 5203 5166 case L2CAP_COMMAND_REJ: 5204 - l2cap_command_rej(conn, cmd, data); 5167 + l2cap_command_rej(conn, cmd, cmd_len, data); 5205 5168 break; 5206 5169 5207 5170 case L2CAP_CONN_REQ: 5208 - err = l2cap_connect_req(conn, cmd, data); 5171 + err = l2cap_connect_req(conn, cmd, cmd_len, data); 5209 5172 break; 5210 5173 5211 5174 case L2CAP_CONN_RSP: 5212 5175 case L2CAP_CREATE_CHAN_RSP: 5213 - err = l2cap_connect_create_rsp(conn, cmd, data); 5176 + err = l2cap_connect_create_rsp(conn, cmd, cmd_len, data); 5214 5177 break; 5215 5178 5216 5179 case L2CAP_CONF_REQ: ··· 5218 5181 break; 5219 5182 5220 5183 case L2CAP_CONF_RSP: 5221 - err = l2cap_config_rsp(conn, cmd, data); 5184 + err = l2cap_config_rsp(conn, cmd, cmd_len, data); 5222 5185 break; 5223 5186 5224 5187 case L2CAP_DISCONN_REQ: 5225 - err = l2cap_disconnect_req(conn, cmd, data); 5188 + err = l2cap_disconnect_req(conn, cmd, cmd_len, data);
5226 5189 break; 5227 5190 5228 5191 case L2CAP_DISCONN_RSP: 5229 - err = l2cap_disconnect_rsp(conn, cmd, data); 5192 + err = l2cap_disconnect_rsp(conn, cmd, cmd_len, data); 5230 5193 break; 5231 5194 5232 5195 case L2CAP_ECHO_REQ: ··· 5237 5200 break; 5238 5201 5239 5202 case L2CAP_INFO_REQ: 5240 - err = l2cap_information_req(conn, cmd, data); 5203 + err = l2cap_information_req(conn, cmd, cmd_len, data); 5241 5204 break; 5242 5205 5243 5206 case L2CAP_INFO_RSP: 5244 - err = l2cap_information_rsp(conn, cmd, data); 5207 + err = l2cap_information_rsp(conn, cmd, cmd_len, data); 5245 5208 break; 5246 5209 5247 5210 case L2CAP_CREATE_CHAN_REQ:
+22 -1
net/bluetooth/mgmt.c
··· 2700 2700 break; 2701 2701 2702 2702 case DISCOV_TYPE_LE: 2703 - if (!lmp_host_le_capable(hdev)) { 2703 + if (!test_bit(HCI_LE_ENABLED, &hdev->dev_flags)) { 2704 2704 err = cmd_status(sk, hdev->id, MGMT_OP_START_DISCOVERY, 2705 2705 MGMT_STATUS_NOT_SUPPORTED); 2706 2706 mgmt_pending_remove(cmd); ··· 3414 3414 3415 3415 if (match.sk) 3416 3416 sock_put(match.sk); 3417 + 3418 + return err; 3419 + } 3420 + 3421 + int mgmt_set_powered_failed(struct hci_dev *hdev, int err) 3422 + { 3423 + struct pending_cmd *cmd; 3424 + u8 status; 3425 + 3426 + cmd = mgmt_pending_find(MGMT_OP_SET_POWERED, hdev); 3427 + if (!cmd) 3428 + return -ENOENT; 3429 + 3430 + if (err == -ERFKILL) 3431 + status = MGMT_STATUS_RFKILLED; 3432 + else 3433 + status = MGMT_STATUS_FAILED; 3434 + 3435 + err = cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_POWERED, status); 3436 + 3437 + mgmt_pending_remove(cmd); 3417 3438 3418 3439 return err; 3419 3440 }
+2 -2
net/bluetooth/smp.c
··· 770 770 771 771 BT_DBG("conn %p hcon %p level 0x%2.2x", conn, hcon, sec_level); 772 772 773 - if (!lmp_host_le_capable(hcon->hdev)) 773 + if (!test_bit(HCI_LE_ENABLED, &hcon->hdev->dev_flags)) 774 774 return 1; 775 775 776 776 if (sec_level == BT_SECURITY_LOW) ··· 851 851 __u8 reason; 852 852 int err = 0; 853 853 854 - if (!lmp_host_le_capable(conn->hcon->hdev)) { 854 + if (!test_bit(HCI_LE_ENABLED, &conn->hcon->hdev->dev_flags)) { 855 855 err = -ENOTSUPP; 856 856 reason = SMP_PAIRING_NOTSUPP; 857 857 goto done;
+3 -2
net/bridge/br_multicast.c
··· 467 467 skb_set_transport_header(skb, skb->len); 468 468 mldq = (struct mld_msg *) icmp6_hdr(skb); 469 469 470 - interval = ipv6_addr_any(group) ? br->multicast_last_member_interval : 471 - br->multicast_query_response_interval; 470 + interval = ipv6_addr_any(group) ? 471 + br->multicast_query_response_interval : 472 + br->multicast_last_member_interval; 472 473 473 474 mldq->mld_type = ICMPV6_MGM_QUERY; 474 475 mldq->mld_code = 0;
+1 -1
net/ceph/osd_client.c
··· 1675 1675 __register_request(osdc, req); 1676 1676 __unregister_linger_request(osdc, req); 1677 1677 } 1678 + reset_changed_osds(osdc); 1678 1679 mutex_unlock(&osdc->request_mutex); 1679 1680 1680 1681 if (needmap) { 1681 1682 dout("%d requests for down osds, need new map\n", needmap); 1682 1683 ceph_monc_request_next_osdmap(&osdc->client->monc); 1683 1684 } 1684 - reset_changed_osds(osdc); 1685 1685 } 1686 1686 1687 1687
+3 -3
net/core/ethtool.c
··· 60 60 [NETIF_F_IPV6_CSUM_BIT] = "tx-checksum-ipv6", 61 61 [NETIF_F_HIGHDMA_BIT] = "highdma", 62 62 [NETIF_F_FRAGLIST_BIT] = "tx-scatter-gather-fraglist", 63 - [NETIF_F_HW_VLAN_CTAG_TX_BIT] = "tx-vlan-ctag-hw-insert", 63 + [NETIF_F_HW_VLAN_CTAG_TX_BIT] = "tx-vlan-hw-insert", 64 64 65 - [NETIF_F_HW_VLAN_CTAG_RX_BIT] = "rx-vlan-ctag-hw-parse", 66 - [NETIF_F_HW_VLAN_CTAG_FILTER_BIT] = "rx-vlan-ctag-filter", 65 + [NETIF_F_HW_VLAN_CTAG_RX_BIT] = "rx-vlan-hw-parse", 66 + [NETIF_F_HW_VLAN_CTAG_FILTER_BIT] = "rx-vlan-filter", 67 67 [NETIF_F_HW_VLAN_STAG_TX_BIT] = "tx-vlan-stag-hw-insert", 68 68 [NETIF_F_HW_VLAN_STAG_RX_BIT] = "rx-vlan-stag-hw-parse", 69 69 [NETIF_F_HW_VLAN_STAG_FILTER_BIT] = "rx-vlan-stag-filter",
+1 -1
net/core/filter.c
··· 778 778 } 779 779 EXPORT_SYMBOL_GPL(sk_detach_filter); 780 780 781 - static void sk_decode_filter(struct sock_filter *filt, struct sock_filter *to) 781 + void sk_decode_filter(struct sock_filter *filt, struct sock_filter *to) 782 782 { 783 783 static const u16 decodes[] = { 784 784 [BPF_S_ALU_ADD_K] = BPF_ALU|BPF_ADD|BPF_K,
+7 -2
net/core/sock_diag.c
··· 73 73 goto out; 74 74 } 75 75 76 - if (filter) 77 - memcpy(nla_data(attr), filter->insns, len); 76 + if (filter) { 77 + struct sock_filter *fb = (struct sock_filter *)nla_data(attr); 78 + int i; 79 + 80 + for (i = 0; i < filter->len; i++, fb++) 81 + sk_decode_filter(&filter->insns[i], fb); 82 + } 78 83 79 84 out: 80 85 rcu_read_unlock();
+2 -2
net/ipv4/ip_tunnel.c
··· 853 853 } 854 854 EXPORT_SYMBOL_GPL(ip_tunnel_dellink); 855 855 856 - int __net_init ip_tunnel_init_net(struct net *net, int ip_tnl_net_id, 856 + int ip_tunnel_init_net(struct net *net, int ip_tnl_net_id, 857 857 struct rtnl_link_ops *ops, char *devname) 858 858 { 859 859 struct ip_tunnel_net *itn = net_generic(net, ip_tnl_net_id); ··· 899 899 unregister_netdevice_queue(itn->fb_tunnel_dev, head); 900 900 } 901 901 902 - void __net_exit ip_tunnel_delete_net(struct ip_tunnel_net *itn) 902 + void ip_tunnel_delete_net(struct ip_tunnel_net *itn) 903 903 { 904 904 LIST_HEAD(list); 905 905
+1 -2
net/ipv4/ip_vti.c
··· 361 361 tunnel->err_count = 0; 362 362 } 363 363 364 - IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED | 365 - IPSKB_REROUTED); 364 + memset(IPCB(skb), 0, sizeof(*IPCB(skb))); 366 365 skb_dst_drop(skb); 367 366 skb_dst_set(skb, &rt->dst); 368 367 nf_reset(skb);
+1 -1
net/ipv6/ndisc.c
··· 1494 1494 */ 1495 1495 1496 1496 if (ha) 1497 - ndisc_fill_addr_option(skb, ND_OPT_TARGET_LL_ADDR, ha); 1497 + ndisc_fill_addr_option(buff, ND_OPT_TARGET_LL_ADDR, ha); 1498 1498 1499 1499 /* 1500 1500 * build redirect option and copy skb over to the new packet.
+3 -3
net/l2tp/l2tp_ppp.c
··· 346 346 skb_put(skb, 2); 347 347 348 348 /* Copy user data into skb */ 349 - error = memcpy_fromiovec(skb->data, m->msg_iov, total_len); 349 + error = memcpy_fromiovec(skb_put(skb, total_len), m->msg_iov, 350 + total_len); 350 351 if (error < 0) { 351 352 kfree_skb(skb); 352 353 goto error_put_sess_tun; 353 354 } 354 - skb_put(skb, total_len); 355 355 356 356 l2tp_xmit_skb(session, skb, session->hdr_len); 357 357 358 358 sock_put(ps->tunnel_sock); 359 359 sock_put(sk); 360 360 361 - return error; 361 + return total_len; 362 362 363 363 error_put_sess_tun: 364 364 sock_put(ps->tunnel_sock);
+6
net/mac80211/cfg.c
··· 1071 1071 clear_bit(SDATA_STATE_OFFCHANNEL_BEACON_STOPPED, &sdata->state); 1072 1072 ieee80211_bss_info_change_notify(sdata, BSS_CHANGED_BEACON_ENABLED); 1073 1073 1074 + if (sdata->wdev.cac_started) { 1075 + cancel_delayed_work_sync(&sdata->dfs_cac_timer_work); 1076 + cfg80211_cac_event(sdata->dev, NL80211_RADAR_CAC_ABORTED, 1077 + GFP_KERNEL); 1078 + } 1079 + 1074 1080 drv_stop_ap(sdata->local, sdata); 1075 1081 1076 1082 /* free all potentially still buffered bcast frames */
+3 -2
net/mac80211/ieee80211_i.h
··· 1512 1512 ieee80211_tx_skb_tid(sdata, skb, 7); 1513 1513 } 1514 1514 1515 - u32 ieee802_11_parse_elems_crc(u8 *start, size_t len, bool action, 1515 + u32 ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action, 1516 1516 struct ieee802_11_elems *elems, 1517 1517 u64 filter, u32 crc); 1518 - static inline void ieee802_11_parse_elems(u8 *start, size_t len, bool action, 1518 + static inline void ieee802_11_parse_elems(const u8 *start, size_t len, 1519 + bool action, 1519 1520 struct ieee802_11_elems *elems) 1520 1521 { 1521 1522 ieee802_11_parse_elems_crc(start, len, action, elems, 0, 0);
+80 -7
net/mac80211/mlme.c
··· 2486 2486 u16 capab_info, aid; 2487 2487 struct ieee802_11_elems elems; 2488 2488 struct ieee80211_bss_conf *bss_conf = &sdata->vif.bss_conf; 2489 + const struct cfg80211_bss_ies *bss_ies = NULL; 2490 + struct ieee80211_mgd_assoc_data *assoc_data = ifmgd->assoc_data; 2489 2491 u32 changed = 0; 2490 2492 int err; 2493 + bool ret; 2491 2494 2492 2495 /* AssocResp and ReassocResp have identical structure */ 2493 2496 ··· 2522 2519 ifmgd->aid = aid; 2523 2520 2524 2521 /* 2522 + * Some APs are erroneously not including some information in their 2523 + * (re)association response frames. Try to recover by using the data 2524 + * from the beacon or probe response. This seems to afflict mobile 2525 + * 2G/3G/4G wifi routers, reported models include the "Onda PN51T", 2526 + * "Vodafone PocketWiFi 2", "ZTE MF60" and a similar T-Mobile device. 2527 + */ 2528 + if ((assoc_data->wmm && !elems.wmm_param) || 2529 + (!(ifmgd->flags & IEEE80211_STA_DISABLE_HT) && 2530 + (!elems.ht_cap_elem || !elems.ht_operation)) || 2531 + (!(ifmgd->flags & IEEE80211_STA_DISABLE_VHT) && 2532 + (!elems.vht_cap_elem || !elems.vht_operation))) { 2533 + const struct cfg80211_bss_ies *ies; 2534 + struct ieee802_11_elems bss_elems; 2535 + 2536 + rcu_read_lock(); 2537 + ies = rcu_dereference(cbss->ies); 2538 + if (ies) 2539 + bss_ies = kmemdup(ies, sizeof(*ies) + ies->len, 2540 + GFP_ATOMIC); 2541 + rcu_read_unlock(); 2542 + if (!bss_ies) 2543 + return false; 2544 + 2545 + ieee802_11_parse_elems(bss_ies->data, bss_ies->len, 2546 + false, &bss_elems); 2547 + if (assoc_data->wmm && 2548 + !elems.wmm_param && bss_elems.wmm_param) { 2549 + elems.wmm_param = bss_elems.wmm_param; 2550 + sdata_info(sdata, 2551 + "AP bug: WMM param missing from AssocResp\n"); 2552 + } 2553 + 2554 + /* 2555 + * Also check if we requested HT/VHT, otherwise the AP doesn't 2556 + * have to include the IEs in the (re)association response. 
2557 + */ 2558 + if (!elems.ht_cap_elem && bss_elems.ht_cap_elem && 2559 + !(ifmgd->flags & IEEE80211_STA_DISABLE_HT)) { 2560 + elems.ht_cap_elem = bss_elems.ht_cap_elem; 2561 + sdata_info(sdata, 2562 + "AP bug: HT capability missing from AssocResp\n"); 2563 + } 2564 + if (!elems.ht_operation && bss_elems.ht_operation && 2565 + !(ifmgd->flags & IEEE80211_STA_DISABLE_HT)) { 2566 + elems.ht_operation = bss_elems.ht_operation; 2567 + sdata_info(sdata, 2568 + "AP bug: HT operation missing from AssocResp\n"); 2569 + } 2570 + if (!elems.vht_cap_elem && bss_elems.vht_cap_elem && 2571 + !(ifmgd->flags & IEEE80211_STA_DISABLE_VHT)) { 2572 + elems.vht_cap_elem = bss_elems.vht_cap_elem; 2573 + sdata_info(sdata, 2574 + "AP bug: VHT capa missing from AssocResp\n"); 2575 + } 2576 + if (!elems.vht_operation && bss_elems.vht_operation && 2577 + !(ifmgd->flags & IEEE80211_STA_DISABLE_VHT)) { 2578 + elems.vht_operation = bss_elems.vht_operation; 2579 + sdata_info(sdata, 2580 + "AP bug: VHT operation missing from AssocResp\n"); 2581 + } 2582 + } 2583 + 2584 + /* 2525 2585 * We previously checked these in the beacon/probe response, so 2526 2586 * they should be present here. This is just a safety net. 
2527 2587 */ 2528 2588 if (!(ifmgd->flags & IEEE80211_STA_DISABLE_HT) && 2529 2589 (!elems.wmm_param || !elems.ht_cap_elem || !elems.ht_operation)) { 2530 2590 sdata_info(sdata, 2531 - "HT AP is missing WMM params or HT capability/operation in AssocResp\n"); 2532 - return false; 2591 + "HT AP is missing WMM params or HT capability/operation\n"); 2592 + ret = false; 2593 + goto out; 2533 2594 } 2534 2595 2535 2596 if (!(ifmgd->flags & IEEE80211_STA_DISABLE_VHT) && 2536 2597 (!elems.vht_cap_elem || !elems.vht_operation)) { 2537 2598 sdata_info(sdata, 2538 - "VHT AP is missing VHT capability/operation in AssocResp\n"); 2539 - return false; 2599 + "VHT AP is missing VHT capability/operation\n"); 2600 + ret = false; 2601 + goto out; 2540 2602 } 2541 2603 2542 2604 mutex_lock(&sdata->local->sta_mtx); ··· 2612 2544 sta = sta_info_get(sdata, cbss->bssid); 2613 2545 if (WARN_ON(!sta)) { 2614 2546 mutex_unlock(&sdata->local->sta_mtx); 2615 - return false; 2547 + ret = false; 2548 + goto out; 2616 2549 } 2617 2550 2618 2551 sband = local->hw.wiphy->bands[ieee80211_get_sdata_band(sdata)]; ··· 2666 2597 sta->sta.addr); 2667 2598 WARN_ON(__sta_info_destroy(sta)); 2668 2599 mutex_unlock(&sdata->local->sta_mtx); 2669 - return false; 2600 + ret = false; 2601 + goto out; 2670 2602 } 2671 2603 2672 2604 mutex_unlock(&sdata->local->sta_mtx); ··· 2707 2637 ieee80211_sta_rx_notify(sdata, (struct ieee80211_hdr *)mgmt); 2708 2638 ieee80211_sta_reset_beacon_monitor(sdata); 2709 2639 2710 - return true; 2640 + ret = true; 2641 + out: 2642 + kfree(bss_ies); 2643 + return ret; 2711 2644 } 2712 2645 2713 2646 static void ieee80211_rx_mgmt_assoc_resp(struct ieee80211_sub_if_data *sdata,
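The mlme.c hunk above recovers IEs that buggy APs omit from the (re)association response by falling back to the copy parsed from the beacon/probe response. A minimal userspace sketch of that merge rule (struct and field names here are illustrative, not the kernel's):

```c
/* Sketch (not kernel code) of the AssocResp workaround above: any element
 * the association response omitted is filled in from the elements parsed
 * out of the beacon/probe response, and each recovery is counted (the
 * kernel logs an "AP bug: ..." message instead). */
#include <assert.h>
#include <stddef.h>

struct elems {
    const char *wmm_param;
    const char *ht_cap_elem;
    const char *ht_operation;
};

/* Returns how many fields were recovered from the beacon copy. */
static int fill_from_beacon(struct elems *resp, const struct elems *beacon)
{
    int fixed = 0;

    if (!resp->wmm_param && beacon->wmm_param) {
        resp->wmm_param = beacon->wmm_param;
        fixed++;
    }
    if (!resp->ht_cap_elem && beacon->ht_cap_elem) {
        resp->ht_cap_elem = beacon->ht_cap_elem;
        fixed++;
    }
    if (!resp->ht_operation && beacon->ht_operation) {
        resp->ht_operation = beacon->ht_operation;
        fixed++;
    }
    return fixed;
}

/* AssocResp carried WMM but dropped both HT elements. */
static int demo_fill(void)
{
    struct elems resp = { "wmm", NULL, NULL };
    const struct elems beacon = { "wmm", "ht_cap", "ht_op" };

    return fill_from_beacon(&resp, &beacon);
}
```

As in the patch, present fields are never overwritten; only genuinely missing ones are borrowed.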
+1 -1
net/mac80211/rate.c
··· 615 615 if (rates[i].idx < 0) 616 616 break; 617 617 618 - rate_idx_match_mask(&rates[i], sband, mask, chan_width, 618 + rate_idx_match_mask(&rates[i], sband, chan_width, mask, 619 619 mcs_mask); 620 620 } 621 621 }
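The rate.c fix above swaps two same-typed arguments back into prototype order — a bug the compiler cannot catch, since both parameters have compatible types. One defensive technique (not what the kernel does here, just an illustration) is wrapping each in a distinct single-member struct so a swap becomes a compile error:

```c
/* Sketch: distinct wrapper types make swapped arguments a type error.
 * apply_mask() keeps only mask bits below the channel width; calling it
 * with the arguments reversed would fail to compile. */
#include <assert.h>

struct chan_width { unsigned v; };
struct rate_mask  { unsigned v; };

static unsigned apply_mask(struct chan_width w, struct rate_mask m)
{
    return m.v & ((1u << w.v) - 1);   /* purely illustrative combination */
}
```

With bare `u32` parameters, as in `rate_idx_match_mask()`, only careful review catches the transposition.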
+2 -2
net/mac80211/util.c
··· 667 667 } 668 668 EXPORT_SYMBOL(ieee80211_queue_delayed_work); 669 669 670 - u32 ieee802_11_parse_elems_crc(u8 *start, size_t len, bool action, 670 + u32 ieee802_11_parse_elems_crc(const u8 *start, size_t len, bool action, 671 671 struct ieee802_11_elems *elems, 672 672 u64 filter, u32 crc) 673 673 { 674 674 size_t left = len; 675 - u8 *pos = start; 675 + const u8 *pos = start; 676 676 bool calc_crc = filter != 0; 677 677 DECLARE_BITMAP(seen_elems, 256); 678 678 const u8 *ie;
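The util.c and ieee80211_i.h hunks const-qualify the element parser's input. Once `start` is `const u8 *`, the internal cursor must be const as well — exactly the `const u8 *pos = start` change above. A self-contained sketch of such a const-correct TLV walker (the layout mirrors 802.11 IEs: 1-byte id, 1-byte length, payload):

```c
/* Sketch (not kernel code): a const-correct walker over id/len/payload
 * elements.  The cursor is `const u8 *` because it aliases const input;
 * dropping the qualifier would be rejected by the compiler. */
#include <assert.h>
#include <stddef.h>

typedef unsigned char u8;

static int count_elems(const u8 *start, size_t len)
{
    const u8 *pos = start;   /* must be const: initialized from const input */
    size_t left = len;
    int n = 0;

    while (left >= 2) {
        u8 elen = pos[1];

        if (elen + 2u > left)
            break;           /* truncated element: stop, as the parser does */
        n++;
        pos += 2u + elen;
        left -= 2u + elen;
    }
    return n;
}
```

Const-qualifying the prototype lets callers pass buffers such as `bss_ies->data` (itself const) without casts.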
+1
net/netfilter/ipvs/ip_vs_ctl.c
··· 2542 2542 struct ip_vs_dest *dest; 2543 2543 struct ip_vs_dest_entry entry; 2544 2544 2545 + memset(&entry, 0, sizeof(entry)); 2545 2546 list_for_each_entry(dest, &svc->destinations, n_list) { 2546 2547 if (count >= get->num_dests) 2547 2548 break;
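The ip_vs_ctl.c one-liner plugs an information leak: a stack struct copied out to userspace field by field still exposes old stack bytes through padding and through members no branch wrote. Zeroing it once up front closes both holes. A sketch (struct layout is illustrative):

```c
/* Sketch of the fix above: memset() a stack struct before filling it,
 * so padding bytes and never-written fields cannot leak stack contents. */
#include <assert.h>
#include <string.h>

struct dest_entry {
    unsigned char addr[4];
    unsigned short port;      /* padding typically follows this member */
    unsigned int conn_flags;
};

/* Returns nonzero iff every byte of the struct, padding included, is zero. */
static int all_zero(const struct dest_entry *e)
{
    const unsigned char *p = (const unsigned char *)e;

    for (size_t i = 0; i < sizeof(*e); i++)
        if (p[i])
            return 0;
    return 1;
}

static int demo_memset(void)
{
    struct dest_entry entry;

    memset(&entry, 0, sizeof(entry));   /* the line the patch adds */
    return all_zero(&entry);
}
```

Member-wise initialization (`entry.port = 0;` etc.) would not have been enough, because it leaves padding untouched.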
+20 -3
net/netfilter/xt_TCPMSS.c
··· 45 45 46 46 static int 47 47 tcpmss_mangle_packet(struct sk_buff *skb, 48 - const struct xt_tcpmss_info *info, 48 + const struct xt_action_param *par, 49 49 unsigned int in_mtu, 50 50 unsigned int tcphoff, 51 51 unsigned int minlen) 52 52 { 53 + const struct xt_tcpmss_info *info = par->targinfo; 53 54 struct tcphdr *tcph; 54 55 unsigned int tcplen, i; 55 56 __be16 oldval; 56 57 u16 newmss; 57 58 u8 *opt; 59 + 60 + /* This is a fragment, no TCP header is available */ 61 + if (par->fragoff != 0) 62 + return XT_CONTINUE; 58 63 59 64 if (!skb_make_writable(skb, skb->len)) 60 65 return -1; ··· 130 125 131 126 skb_put(skb, TCPOLEN_MSS); 132 127 128 + /* 129 + * IPv4: RFC 1122 states "If an MSS option is not received at 130 + * connection setup, TCP MUST assume a default send MSS of 536". 131 + * IPv6: RFC 2460 states IPv6 has a minimum MTU of 1280 and a minimum 132 + * length IPv6 header of 60, ergo the default MSS value is 1220 133 + * Since no MSS was provided, we must use the default values 134 + */ 135 + if (par->family == NFPROTO_IPV4) 136 + newmss = min(newmss, (u16)536); 137 + else 138 + newmss = min(newmss, (u16)1220); 139 + 133 140 opt = (u_int8_t *)tcph + sizeof(struct tcphdr); 134 141 memmove(opt + TCPOLEN_MSS, opt, tcplen - sizeof(struct tcphdr)); 135 142 ··· 199 182 __be16 newlen; 200 183 int ret; 201 184 202 - ret = tcpmss_mangle_packet(skb, par->targinfo, 185 + ret = tcpmss_mangle_packet(skb, par, 203 186 tcpmss_reverse_mtu(skb, PF_INET), 204 187 iph->ihl * 4, 205 188 sizeof(*iph) + sizeof(struct tcphdr)); ··· 228 211 tcphoff = ipv6_skip_exthdr(skb, sizeof(*ipv6h), &nexthdr, &frag_off); 229 212 if (tcphoff < 0) 230 213 return NF_DROP; 231 - ret = tcpmss_mangle_packet(skb, par->targinfo, 214 + ret = tcpmss_mangle_packet(skb, par, 232 215 tcpmss_reverse_mtu(skb, PF_INET6), 233 216 tcphoff, 234 217 sizeof(*ipv6h) + sizeof(struct tcphdr));
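The new clamp in xt_TCPMSS falls back to the protocol defaults when no MSS option was negotiated: 536 for IPv4 (576 minimum datagram − 20 IP − 20 TCP, per RFC 1122) and 1220 for IPv6 (1280 minimum MTU − 40 IPv6 header − 20 TCP). A sketch of the added logic:

```c
/* Sketch of the default-MSS clamp added above.  The constants derive from
 * minimum datagram sizes: IPv4 536 = 576 - 20 - 20, IPv6 1220 = 1280 - 40 - 20. */
#include <assert.h>

typedef unsigned short u16;

enum { FAM_IPV4, FAM_IPV6 };   /* stand-ins for NFPROTO_IPV4/IPV6 */

static u16 clamp_default_mss(u16 newmss, int family)
{
    if (family == FAM_IPV4)
        return newmss < 536 ? newmss : 536;
    return newmss < 1220 ? newmss : 1220;
}
```

A smaller advertised MSS is kept as-is; only values above the per-family default are clamped down, matching the `min(newmss, ...)` calls in the patch.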
+4 -2
net/netfilter/xt_TCPOPTSTRIP.c
··· 48 48 return NF_DROP; 49 49 50 50 len = skb->len - tcphoff; 51 - if (len < (int)sizeof(struct tcphdr) || 52 - tcp_hdr(skb)->doff * 4 > len) 51 + if (len < (int)sizeof(struct tcphdr)) 53 52 return NF_DROP; 54 53 55 54 tcph = (struct tcphdr *)(skb_network_header(skb) + tcphoff); 55 + if (tcph->doff * 4 > len) 56 + return NF_DROP; 57 + 56 58 opt = (u_int8_t *)tcph; 57 59 58 60 /*
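The xt_TCPOPTSTRIP reordering matters because the original code read `tcp_hdr(skb)->doff` in the same condition that was still deciding whether a full TCP header exists. The fix checks the fixed-header length first and only then trusts `doff`. The pattern in miniature:

```c
/* Sketch of the check ordering above: verify the fixed-size header is
 * fully present before dereferencing its data-offset field, then use
 * that field for the second length check. */
#include <assert.h>
#include <stddef.h>

#define TCPHDR_LEN 20u   /* stands in for sizeof(struct tcphdr) */

/* buf[12] carries the data offset in its high nibble, as in a real TCP header. */
static int header_ok(const unsigned char *buf, size_t len)
{
    unsigned doff;

    if (len < TCPHDR_LEN)         /* first: enough bytes for the fixed header? */
        return 0;
    doff = (buf[12] >> 4) * 4u;   /* safe to read only after the length check */
    return doff >= TCPHDR_LEN && doff <= len;
}
```

Folding both tests into one `||` expression, as before the patch, evaluates the field access even when the length test alone should have rejected the packet.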
+1 -1
net/netlink/af_netlink.c
··· 371 371 err = 0; 372 372 out: 373 373 mutex_unlock(&nlk->pg_vec_lock); 374 - return 0; 374 + return err; 375 375 } 376 376 377 377 static void netlink_frame_flush_dcache(const struct nl_mmap_hdr *hdr)
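The af_netlink.c change is a one-line but classic bug: a shared `out:` unlock path ended in `return 0`, discarding any error recorded in `err` on the way there. A stripped-down sketch (the lock calls are elided as comments; `-1` stands in for the real errno value):

```c
/* Sketch of the fix above: a common exit label must return the status
 * variable, not a literal 0, or every error path silently succeeds. */
#include <assert.h>

static int do_op(int fail)
{
    int err = 0;

    /* mutex_lock(&nlk->pg_vec_lock); */
    if (fail) {
        err = -1;
        goto out;
    }
    /* ... the actual work ... */
out:
    /* mutex_unlock(&nlk->pg_vec_lock); */
    return err;          /* was `return 0`, masking err */
}
```

The caller of `do_op(1)` now sees the failure instead of a spurious success.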
+2 -3
net/packet/af_packet.c
··· 2851 2851 return -EOPNOTSUPP; 2852 2852 2853 2853 uaddr->sa_family = AF_PACKET; 2854 + memset(uaddr->sa_data, 0, sizeof(uaddr->sa_data)); 2854 2855 rcu_read_lock(); 2855 2856 dev = dev_get_by_index_rcu(sock_net(sk), pkt_sk(sk)->ifindex); 2856 2857 if (dev) 2857 - strncpy(uaddr->sa_data, dev->name, 14); 2858 - else 2859 - memset(uaddr->sa_data, 0, 14); 2858 + strlcpy(uaddr->sa_data, dev->name, sizeof(uaddr->sa_data)); 2860 2859 rcu_read_unlock(); 2861 2860 *uaddr_len = sizeof(*uaddr); 2862 2861
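The af_packet.c hunk replaces `strncpy()` with `strlcpy()` and zeroes `sa_data` up front: `strncpy()` leaves the destination unterminated when the source fills it, and without the memset a short interface name would leave stale stack bytes in the tail of the buffer. Since `strlcpy()` is a kernel/BSD function, not standard C, the sketch below reimplements a minimal variant:

```c
/* Sketch of the change above.  my_strlcpy() is a local reimplementation
 * (strlcpy is not in ISO C): it truncates to fit and, unlike strncpy,
 * always NUL-terminates a non-empty destination. */
#include <assert.h>
#include <string.h>

static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);

    if (size) {
        size_t n = len < size - 1 ? len : size - 1;

        memcpy(dst, src, n);
        dst[n] = '\0';       /* the guarantee strncpy does not give */
    }
    return len;              /* full source length, so callers can detect truncation */
}

static int demo_terminated(void)
{
    char buf[4];

    my_strlcpy(buf, "eth0-long", sizeof(buf));
    return buf[3] == '\0' && strcmp(buf, "eth") == 0;
}
```

With `strncpy(buf, "eth0-long", 4)` the buffer would hold `"eth0"` with no terminator, and a later read of it as a C string would run off the end.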
+6 -5
net/sched/sch_api.c
··· 291 291 { 292 292 struct qdisc_rate_table *rtab; 293 293 294 + if (tab == NULL || r->rate == 0 || r->cell_log == 0 || 295 + nla_len(tab) != TC_RTAB_SIZE) 296 + return NULL; 297 + 294 298 for (rtab = qdisc_rtab_list; rtab; rtab = rtab->next) { 295 - if (memcmp(&rtab->rate, r, sizeof(struct tc_ratespec)) == 0) { 299 + if (!memcmp(&rtab->rate, r, sizeof(struct tc_ratespec)) && 300 + !memcmp(&rtab->data, nla_data(tab), 1024)) { 296 301 rtab->refcnt++; 297 302 return rtab; 298 303 } 299 304 } 300 - 301 - if (tab == NULL || r->rate == 0 || r->cell_log == 0 || 302 - nla_len(tab) != TC_RTAB_SIZE) 303 - return NULL; 304 305 305 306 rtab = kmalloc(sizeof(*rtab), GFP_KERNEL); 306 307 if (rtab) {
+2 -4
net/sctp/outqueue.c
··· 206 206 */ 207 207 void sctp_outq_init(struct sctp_association *asoc, struct sctp_outq *q) 208 208 { 209 + memset(q, 0, sizeof(struct sctp_outq)); 210 + 209 211 q->asoc = asoc; 210 212 INIT_LIST_HEAD(&q->out_chunk_list); 211 213 INIT_LIST_HEAD(&q->control_chunk_list); ··· 215 213 INIT_LIST_HEAD(&q->sacked); 216 214 INIT_LIST_HEAD(&q->abandoned); 217 215 218 - q->fast_rtx = 0; 219 - q->outstanding_bytes = 0; 220 216 q->empty = 1; 221 - q->cork = 0; 222 - q->out_qlen = 0; 223 217 } 224 218 225 219 /* Free the outqueue structure and any related pending chunks.
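The sctp_outq_init() hunk swaps member-by-member zeroing for one `memset()`, which also covers any fields added to the struct later. Only the non-zero scalar and the pointer then need explicit assignment. A sketch with an illustrative struct:

```c
/* Sketch of the init pattern above: memset() zeroes every member
 * (future ones included), then only the non-default fields are set. */
#include <assert.h>
#include <string.h>

struct outq {
    void *asoc;
    int fast_rtx;
    unsigned outstanding_bytes;
    int empty;
    int cork;
    unsigned out_qlen;
};

static void outq_init(struct outq *q, void *asoc)
{
    memset(q, 0, sizeof(*q));   /* replaces the q->fast_rtx = 0; ... lines */
    q->asoc = asoc;
    q->empty = 1;               /* the only non-zero scalar */
}

static int demo_init(void)
{
    struct outq q;

    memset(&q, 0xff, sizeof(q));   /* poison first, to prove init clears it */
    outq_init(&q, NULL);
    return q.empty == 1 && q.cork == 0 && q.out_qlen == 0 &&
           q.fast_rtx == 0 && q.outstanding_bytes == 0;
}
```

In the real function, the `INIT_LIST_HEAD()` calls still follow the memset, since list heads must point at themselves rather than be zero.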
+6
net/sctp/socket.c
··· 3996 3996 3997 3997 /* Release our hold on the endpoint. */ 3998 3998 sp = sctp_sk(sk); 3999 + /* This could happen during socket init, thus we bail out 4000 + * early, since the rest of the below is not setup either. 4001 + */ 4002 + if (sp->ep == NULL) 4003 + return; 4004 + 3999 4005 if (sp->do_auto_asconf) { 4000 4006 sp->do_auto_asconf = 0; 4001 4007 list_del(&sp->auto_asconf_list);
+14 -3
net/wireless/nl80211.c
··· 1527 1527 struct cfg80211_registered_device *dev; 1528 1528 s64 filter_wiphy = -1; 1529 1529 bool split = false; 1530 - struct nlattr **tb = nl80211_fam.attrbuf; 1530 + struct nlattr **tb; 1531 1531 int res; 1532 1532 1533 + /* will be zeroed in nlmsg_parse() */ 1534 + tb = kmalloc(sizeof(*tb) * (NL80211_ATTR_MAX + 1), GFP_KERNEL); 1535 + if (!tb) 1536 + return -ENOMEM; 1537 + 1533 1538 rtnl_lock(); 1539 + 1534 1540 res = nlmsg_parse(cb->nlh, GENL_HDRLEN + nl80211_fam.hdrsize, 1535 - tb, nl80211_fam.maxattr, nl80211_policy); 1541 + tb, NL80211_ATTR_MAX, nl80211_policy); 1536 1542 if (res == 0) { 1537 1543 split = tb[NL80211_ATTR_SPLIT_WIPHY_DUMP]; 1538 1544 if (tb[NL80211_ATTR_WIPHY]) ··· 1550 1544 int ifidx = nla_get_u32(tb[NL80211_ATTR_IFINDEX]); 1551 1545 1552 1546 netdev = dev_get_by_index(sock_net(skb->sk), ifidx); 1553 - if (!netdev) 1547 + if (!netdev) { 1548 + rtnl_unlock(); 1549 + kfree(tb); 1554 1550 return -ENODEV; 1551 + } 1555 1552 if (netdev->ieee80211_ptr) { 1556 1553 dev = wiphy_to_dev( 1557 1554 netdev->ieee80211_ptr->wiphy); ··· 1563 1554 dev_put(netdev); 1564 1555 } 1565 1556 } 1557 + kfree(tb); 1566 1558 1567 1559 list_for_each_entry(dev, &cfg80211_rdev_list, list) { 1568 1560 if (!net_eq(wiphy_net(&dev->wiphy), sock_net(skb->sk))) ··· 1599 1589 !skb->len && 1600 1590 cb->min_dump_alloc < 4096) { 1601 1591 cb->min_dump_alloc = 4096; 1592 + rtnl_unlock(); 1602 1593 return 1; 1603 1594 } 1604 1595 idx--;
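This nl80211.c hunk implements the resolution described in the merge message: allocate `tb` before taking the RTNL mutex (so the `-ENOMEM` return needs no unlock), then make every later exit drop the lock and free the buffer. A userspace sketch of that discipline, with a counter standing in for the mutex so balance can be asserted:

```c
/* Sketch of the locking rule above: acquire after the allocation, funnel
 * later failures through one unlocking exit, and never return locked.
 * lock_depth stands in for the RTNL mutex; names here are illustrative. */
#include <assert.h>
#include <stdlib.h>

static int lock_depth;
static void take_lock(void) { lock_depth++; }
static void drop_lock(void) { lock_depth--; }

static int dump_op(int fail_alloc, int fail_lookup)
{
    int err = 0;
    int *tb = fail_alloc ? NULL : calloc(16, sizeof(*tb));

    if (!tb)
        return -1;               /* lock not taken yet: plain return is safe */

    take_lock();
    if (fail_lookup) {
        err = -2;
        goto out;                /* must not return with the lock held */
    }
    /* ... dump work under the lock ... */
out:
    drop_lock();
    free(tb);
    return err;
}

/* Run one call and confirm the lock ended balanced. */
static int balanced(int fail_alloc, int fail_lookup)
{
    int err = dump_op(fail_alloc, fail_lookup);

    return lock_depth == 0 ? err : -99;
}
```

The pre-patch bug was exactly the missing `drop_lock()` on the `-ENODEV` and `min_dump_alloc` retry paths, which the hunk adds as explicit `rtnl_unlock()` calls.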
+4 -4
scripts/Makefile.lib
··· 149 149 150 150 ld_flags = $(LDFLAGS) $(ldflags-y) 151 151 152 - dtc_cpp_flags = -Wp,-MD,$(depfile).pre -nostdinc \ 152 + dtc_cpp_flags = -Wp,-MD,$(depfile).pre.tmp -nostdinc \ 153 153 -I$(srctree)/arch/$(SRCARCH)/boot/dts \ 154 154 -I$(srctree)/arch/$(SRCARCH)/boot/dts/include \ 155 155 -undef -D__DTS__ ··· 265 265 cmd_dtc = $(CPP) $(dtc_cpp_flags) -x assembler-with-cpp -o $(dtc-tmp) $< ; \ 266 266 $(objtree)/scripts/dtc/dtc -O dtb -o $@ -b 0 \ 267 267 -i $(dir $<) $(DTC_FLAGS) \ 268 - -d $(depfile).dtc $(dtc-tmp) ; \ 269 - cat $(depfile).pre $(depfile).dtc > $(depfile) 268 + -d $(depfile).dtc.tmp $(dtc-tmp) ; \ 269 + cat $(depfile).pre.tmp $(depfile).dtc.tmp > $(depfile) 270 270 271 271 $(obj)/%.dtb: $(src)/%.dts FORCE 272 272 $(call if_changed_dep,dtc) 273 273 274 - dtc-tmp = $(subst $(comma),_,$(dot-target).dts) 274 + dtc-tmp = $(subst $(comma),_,$(dot-target).dts.tmp) 275 275 276 276 # Bzip2 277 277 # ---------------------------------------------------------------------------
+1 -1
scripts/dtc/dtc-lexer.l
··· 71 71 push_input_file(name); 72 72 } 73 73 74 - <*>^"#"(line)?{WS}+[0-9]+{WS}+{STRING}({WS}+[0-9]+)? { 74 + <*>^"#"(line)?[ \t]+[0-9]+[ \t]+{STRING}([ \t]+[0-9]+)? { 75 75 char *line, *tmp, *fn; 76 76 /* skip text before line # */ 77 77 line = yytext;
+110 -110
scripts/dtc/dtc-lexer.lex.c_shipped
··· 405 405 static yyconst flex_int32_t yy_ec[256] = 406 406 { 0, 407 407 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 408 - 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 408 + 4, 4, 4, 1, 1, 1, 1, 1, 1, 1, 409 409 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 410 - 1, 2, 4, 5, 6, 1, 1, 7, 8, 1, 411 - 1, 9, 10, 10, 11, 10, 12, 13, 14, 15, 412 - 15, 15, 15, 15, 15, 15, 15, 16, 1, 17, 413 - 18, 19, 10, 10, 20, 20, 20, 20, 20, 20, 414 - 21, 21, 21, 21, 21, 22, 21, 21, 21, 21, 415 - 21, 21, 21, 21, 23, 21, 21, 24, 21, 21, 416 - 1, 25, 26, 1, 21, 1, 20, 27, 28, 29, 410 + 1, 2, 5, 6, 7, 1, 1, 8, 9, 1, 411 + 1, 10, 11, 11, 12, 11, 13, 14, 15, 16, 412 + 16, 16, 16, 16, 16, 16, 16, 17, 1, 18, 413 + 19, 20, 11, 11, 21, 21, 21, 21, 21, 21, 414 + 22, 22, 22, 22, 22, 23, 22, 22, 22, 22, 415 + 22, 22, 22, 22, 24, 22, 22, 25, 22, 22, 416 + 1, 26, 27, 1, 22, 1, 21, 28, 29, 30, 417 417 418 - 30, 20, 21, 21, 31, 21, 21, 32, 33, 34, 419 - 35, 36, 21, 37, 38, 39, 40, 41, 21, 24, 420 - 42, 21, 43, 44, 45, 1, 1, 1, 1, 1, 418 + 31, 21, 22, 22, 32, 22, 22, 33, 34, 35, 419 + 36, 37, 22, 38, 39, 40, 41, 42, 22, 25, 420 + 43, 22, 44, 45, 46, 1, 1, 1, 1, 1, 421 421 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 422 422 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 423 423 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ··· 434 434 1, 1, 1, 1, 1 435 435 } ; 436 436 437 - static yyconst flex_int32_t yy_meta[46] = 437 + static yyconst flex_int32_t yy_meta[47] = 438 438 { 0, 439 - 1, 1, 1, 1, 1, 2, 3, 1, 2, 2, 440 - 2, 4, 5, 5, 5, 6, 1, 1, 1, 7, 441 - 8, 8, 8, 8, 1, 1, 7, 7, 7, 7, 442 - 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 443 - 8, 8, 3, 1, 1 439 + 1, 1, 1, 1, 1, 1, 2, 3, 1, 2, 440 + 2, 2, 4, 5, 5, 5, 6, 1, 1, 1, 441 + 7, 8, 8, 8, 8, 1, 1, 7, 7, 7, 442 + 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 443 + 8, 8, 8, 3, 1, 1 444 444 } ; 445 445 446 446 static yyconst flex_int16_t yy_base[175] = 447 447 { 0, 448 - 0, 388, 381, 40, 41, 386, 71, 385, 34, 44, 449 - 390, 395, 60, 62, 371, 112, 111, 111, 111, 104, 450 - 370, 106, 371, 342, 124, 119, 0, 144, 395, 0, 451 - 123, 0, 159, 153, 165, 167, 395, 130, 395, 382, 452 - 
395, 0, 372, 122, 395, 157, 374, 379, 350, 21, 453 - 346, 349, 395, 395, 395, 395, 395, 362, 395, 395, 454 - 181, 346, 342, 395, 359, 0, 191, 343, 190, 351, 455 - 350, 0, 0, 0, 173, 362, 177, 367, 357, 329, 456 - 335, 328, 337, 331, 206, 329, 334, 327, 395, 338, 457 - 170, 314, 346, 345, 318, 325, 343, 158, 316, 212, 448 + 0, 385, 378, 40, 41, 383, 72, 382, 34, 44, 449 + 388, 393, 61, 117, 368, 116, 115, 115, 115, 48, 450 + 367, 107, 368, 339, 127, 120, 0, 147, 393, 0, 451 + 127, 0, 133, 156, 168, 153, 393, 125, 393, 380, 452 + 393, 0, 369, 127, 393, 160, 371, 377, 347, 21, 453 + 343, 346, 393, 393, 393, 393, 393, 359, 393, 393, 454 + 183, 343, 339, 393, 356, 0, 183, 340, 187, 348, 455 + 347, 0, 0, 0, 178, 359, 195, 365, 354, 326, 456 + 332, 325, 334, 328, 204, 326, 331, 324, 393, 335, 457 + 150, 311, 343, 342, 315, 322, 340, 179, 313, 207, 458 458 459 - 322, 319, 320, 395, 340, 336, 308, 305, 314, 304, 460 - 295, 138, 208, 220, 395, 292, 305, 265, 264, 254, 461 - 201, 222, 285, 275, 273, 270, 236, 235, 225, 115, 462 - 395, 395, 252, 216, 216, 217, 214, 230, 209, 220, 463 - 213, 239, 211, 217, 216, 209, 229, 395, 240, 225, 464 - 206, 169, 395, 395, 116, 106, 99, 54, 395, 395, 465 - 254, 260, 268, 272, 276, 282, 289, 293, 301, 309, 466 - 313, 319, 327, 335 459 + 319, 316, 317, 393, 337, 333, 305, 302, 311, 301, 460 + 310, 190, 338, 337, 393, 307, 322, 301, 305, 277, 461 + 208, 311, 307, 278, 271, 270, 248, 246, 213, 130, 462 + 393, 393, 263, 235, 207, 221, 218, 229, 213, 213, 463 + 206, 234, 218, 210, 208, 193, 219, 393, 223, 204, 464 + 176, 157, 393, 393, 120, 106, 97, 119, 393, 393, 465 + 245, 251, 259, 263, 267, 273, 280, 284, 292, 300, 466 + 304, 310, 318, 326 467 467 } ; 468 468 469 469 static yyconst flex_int16_t yy_def[175] = ··· 489 489 160, 160, 160, 160 490 490 } ; 491 491 492 - static yyconst flex_int16_t yy_nxt[441] = 492 + static yyconst flex_int16_t yy_nxt[440] = 493 493 { 0, 494 - 12, 13, 14, 15, 16, 12, 17, 18, 12, 12, 495 - 12, 19, 12, 12, 12, 12, 
20, 21, 22, 23, 496 - 23, 23, 23, 23, 12, 12, 23, 23, 23, 23, 494 + 12, 13, 14, 13, 15, 16, 12, 17, 18, 12, 495 + 12, 12, 19, 12, 12, 12, 12, 20, 21, 22, 496 + 23, 23, 23, 23, 23, 12, 12, 23, 23, 23, 497 497 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 498 - 23, 23, 12, 24, 12, 25, 34, 35, 35, 25, 499 - 81, 26, 26, 27, 27, 27, 34, 35, 35, 82, 500 - 28, 36, 36, 36, 36, 159, 29, 28, 28, 28, 501 - 28, 12, 13, 14, 15, 16, 30, 17, 18, 30, 502 - 30, 30, 26, 30, 30, 30, 12, 20, 21, 22, 503 - 31, 31, 31, 31, 31, 32, 12, 31, 31, 31, 498 + 23, 23, 23, 12, 24, 12, 25, 34, 35, 35, 499 + 25, 81, 26, 26, 27, 27, 27, 34, 35, 35, 500 + 82, 28, 36, 36, 36, 53, 54, 29, 28, 28, 501 + 28, 28, 12, 13, 14, 13, 15, 16, 30, 17, 502 + 18, 30, 30, 30, 26, 30, 30, 30, 12, 20, 503 + 21, 22, 31, 31, 31, 31, 31, 32, 12, 31, 504 504 505 505 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 506 - 31, 31, 31, 12, 24, 12, 39, 41, 45, 47, 507 - 53, 54, 48, 56, 57, 61, 61, 47, 66, 45, 508 - 48, 66, 66, 66, 39, 46, 40, 49, 59, 50, 509 - 158, 51, 122, 52, 157, 49, 46, 50, 136, 63, 510 - 137, 52, 156, 43, 40, 62, 65, 65, 65, 59, 511 - 61, 61, 123, 65, 75, 69, 69, 69, 36, 36, 512 - 65, 65, 65, 65, 70, 71, 72, 69, 69, 69, 513 - 45, 46, 61, 61, 109, 77, 70, 71, 93, 110, 514 - 68, 70, 71, 85, 85, 85, 66, 46, 155, 66, 506 + 31, 31, 31, 31, 31, 12, 24, 12, 36, 36, 507 + 36, 39, 41, 45, 47, 56, 57, 48, 61, 47, 508 + 39, 159, 48, 66, 61, 45, 66, 66, 66, 158, 509 + 46, 40, 49, 59, 50, 157, 51, 49, 52, 50, 510 + 40, 63, 46, 52, 36, 36, 36, 156, 43, 62, 511 + 65, 65, 65, 59, 136, 68, 137, 65, 75, 69, 512 + 69, 69, 70, 71, 65, 65, 65, 65, 70, 71, 513 + 72, 69, 69, 69, 61, 46, 45, 155, 154, 66, 514 + 70, 71, 66, 66, 66, 122, 85, 85, 85, 59, 515 515 516 - 66, 66, 69, 69, 69, 122, 59, 100, 100, 61, 517 - 61, 70, 71, 100, 100, 148, 112, 154, 85, 85, 518 - 85, 61, 61, 129, 129, 123, 129, 129, 135, 135, 519 - 135, 142, 142, 148, 143, 149, 153, 135, 135, 135, 520 - 142, 142, 160, 143, 152, 151, 150, 146, 145, 144, 521 - 141, 140, 139, 
149, 38, 38, 38, 38, 38, 38, 522 - 38, 38, 42, 138, 134, 133, 42, 42, 44, 44, 523 - 44, 44, 44, 44, 44, 44, 58, 58, 58, 58, 524 - 64, 132, 64, 66, 131, 130, 66, 160, 66, 66, 525 - 67, 128, 127, 67, 67, 67, 67, 73, 126, 73, 516 + 69, 69, 69, 46, 77, 100, 109, 93, 100, 70, 517 + 71, 110, 112, 122, 129, 123, 153, 85, 85, 85, 518 + 135, 135, 135, 148, 148, 160, 135, 135, 135, 152, 519 + 142, 142, 142, 123, 143, 142, 142, 142, 151, 143, 520 + 150, 146, 145, 149, 149, 38, 38, 38, 38, 38, 521 + 38, 38, 38, 42, 144, 141, 140, 42, 42, 44, 522 + 44, 44, 44, 44, 44, 44, 44, 58, 58, 58, 523 + 58, 64, 139, 64, 66, 138, 134, 66, 133, 66, 524 + 66, 67, 132, 131, 67, 67, 67, 67, 73, 130, 525 + 73, 73, 76, 76, 76, 76, 76, 76, 76, 76, 526 526 527 - 73, 76, 76, 76, 76, 76, 76, 76, 76, 78, 528 - 78, 78, 78, 78, 78, 78, 78, 91, 125, 91, 529 - 92, 124, 92, 92, 120, 92, 92, 121, 121, 121, 530 - 121, 121, 121, 121, 121, 147, 147, 147, 147, 147, 531 - 147, 147, 147, 119, 118, 117, 116, 115, 47, 114, 532 - 110, 113, 111, 108, 107, 106, 48, 105, 104, 89, 533 - 103, 102, 101, 99, 98, 97, 96, 95, 94, 79, 534 - 77, 90, 89, 88, 59, 87, 86, 59, 84, 83, 535 - 80, 79, 77, 74, 160, 60, 59, 55, 37, 160, 536 - 33, 25, 26, 25, 11, 160, 160, 160, 160, 160, 527 + 78, 78, 78, 78, 78, 78, 78, 78, 91, 160, 528 + 91, 92, 129, 92, 92, 128, 92, 92, 121, 121, 529 + 121, 121, 121, 121, 121, 121, 147, 147, 147, 147, 530 + 147, 147, 147, 147, 127, 126, 125, 124, 61, 61, 531 + 120, 119, 118, 117, 116, 115, 47, 114, 110, 113, 532 + 111, 108, 107, 106, 48, 105, 104, 89, 103, 102, 533 + 101, 99, 98, 97, 96, 95, 94, 79, 77, 90, 534 + 89, 88, 59, 87, 86, 59, 84, 83, 80, 79, 535 + 77, 74, 160, 60, 59, 55, 37, 160, 33, 25, 536 + 26, 25, 11, 160, 160, 160, 160, 160, 160, 160, 537 537 538 538 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 539 539 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 540 540 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 541 - 160, 160, 160, 160, 160, 160, 160, 160, 160, 160 541 + 160, 
160, 160, 160, 160, 160, 160, 160, 160 542 542 } ; 543 543 544 - static yyconst flex_int16_t yy_chk[441] = 544 + static yyconst flex_int16_t yy_chk[440] = 545 545 { 0, 546 546 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 547 547 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 548 548 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 549 549 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 550 - 1, 1, 1, 1, 1, 4, 9, 9, 9, 10, 551 - 50, 4, 5, 5, 5, 5, 10, 10, 10, 50, 552 - 5, 13, 13, 14, 14, 158, 5, 5, 5, 5, 553 - 5, 7, 7, 7, 7, 7, 7, 7, 7, 7, 550 + 1, 1, 1, 1, 1, 1, 4, 9, 9, 9, 551 + 10, 50, 4, 5, 5, 5, 5, 10, 10, 10, 552 + 50, 5, 13, 13, 13, 20, 20, 5, 5, 5, 553 + 5, 5, 7, 7, 7, 7, 7, 7, 7, 7, 554 554 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 555 555 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 556 556 557 557 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 558 - 7, 7, 7, 7, 7, 7, 16, 17, 18, 19, 559 - 20, 20, 19, 22, 22, 25, 25, 26, 31, 44, 560 - 26, 31, 31, 31, 38, 18, 16, 19, 31, 19, 561 - 157, 19, 112, 19, 156, 26, 44, 26, 130, 26, 562 - 130, 26, 155, 17, 38, 25, 28, 28, 28, 28, 563 - 33, 33, 112, 28, 46, 34, 34, 34, 36, 36, 564 - 28, 28, 28, 28, 34, 34, 34, 35, 35, 35, 565 - 75, 46, 61, 61, 98, 77, 35, 35, 77, 98, 566 - 33, 91, 91, 61, 61, 61, 67, 75, 152, 67, 558 + 7, 7, 7, 7, 7, 7, 7, 7, 14, 14, 559 + 14, 16, 17, 18, 19, 22, 22, 19, 25, 26, 560 + 38, 158, 26, 31, 33, 44, 31, 31, 31, 157, 561 + 18, 16, 19, 31, 19, 156, 19, 26, 19, 26, 562 + 38, 26, 44, 26, 36, 36, 36, 155, 17, 25, 563 + 28, 28, 28, 28, 130, 33, 130, 28, 46, 34, 564 + 34, 34, 91, 91, 28, 28, 28, 28, 34, 34, 565 + 34, 35, 35, 35, 61, 46, 75, 152, 151, 67, 566 + 35, 35, 67, 67, 67, 112, 61, 61, 61, 67, 567 567 568 - 67, 67, 69, 69, 69, 121, 67, 85, 85, 113, 569 - 113, 69, 69, 100, 100, 143, 100, 151, 85, 85, 570 - 85, 114, 114, 122, 122, 121, 129, 129, 135, 135, 571 - 135, 138, 138, 147, 138, 143, 150, 129, 129, 129, 572 - 142, 142, 149, 142, 146, 145, 144, 141, 140, 139, 573 - 137, 136, 134, 147, 161, 161, 161, 161, 161, 161, 574 - 161, 161, 162, 133, 128, 127, 162, 162, 163, 163, 575 - 163, 163, 163, 163, 
163, 163, 164, 164, 164, 164, 576 - 165, 126, 165, 166, 125, 124, 166, 123, 166, 166, 577 - 167, 120, 119, 167, 167, 167, 167, 168, 118, 168, 568 + 69, 69, 69, 75, 77, 85, 98, 77, 100, 69, 569 + 69, 98, 100, 121, 129, 112, 150, 85, 85, 85, 570 + 135, 135, 135, 143, 147, 149, 129, 129, 129, 146, 571 + 138, 138, 138, 121, 138, 142, 142, 142, 145, 142, 572 + 144, 141, 140, 143, 147, 161, 161, 161, 161, 161, 573 + 161, 161, 161, 162, 139, 137, 136, 162, 162, 163, 574 + 163, 163, 163, 163, 163, 163, 163, 164, 164, 164, 575 + 164, 165, 134, 165, 166, 133, 128, 166, 127, 166, 576 + 166, 167, 126, 125, 167, 167, 167, 167, 168, 124, 577 + 168, 168, 169, 169, 169, 169, 169, 169, 169, 169, 578 578 579 - 168, 169, 169, 169, 169, 169, 169, 169, 169, 170, 580 - 170, 170, 170, 170, 170, 170, 170, 171, 117, 171, 581 - 172, 116, 172, 172, 111, 172, 172, 173, 173, 173, 582 - 173, 173, 173, 173, 173, 174, 174, 174, 174, 174, 583 - 174, 174, 174, 110, 109, 108, 107, 106, 105, 103, 584 - 102, 101, 99, 97, 96, 95, 94, 93, 92, 90, 585 - 88, 87, 86, 84, 83, 82, 81, 80, 79, 78, 586 - 76, 71, 70, 68, 65, 63, 62, 58, 52, 51, 587 - 49, 48, 47, 43, 40, 24, 23, 21, 15, 11, 588 - 8, 6, 3, 2, 160, 160, 160, 160, 160, 160, 579 + 170, 170, 170, 170, 170, 170, 170, 170, 171, 123, 580 + 171, 172, 122, 172, 172, 120, 172, 172, 173, 173, 581 + 173, 173, 173, 173, 173, 173, 174, 174, 174, 174, 582 + 174, 174, 174, 174, 119, 118, 117, 116, 114, 113, 583 + 111, 110, 109, 108, 107, 106, 105, 103, 102, 101, 584 + 99, 97, 96, 95, 94, 93, 92, 90, 88, 87, 585 + 86, 84, 83, 82, 81, 80, 79, 78, 76, 71, 586 + 70, 68, 65, 63, 62, 58, 52, 51, 49, 48, 587 + 47, 43, 40, 24, 23, 21, 15, 11, 8, 6, 588 + 3, 2, 160, 160, 160, 160, 160, 160, 160, 160, 589 589 590 590 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 591 591 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 592 592 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 593 - 160, 160, 160, 160, 160, 160, 160, 160, 160, 160 593 + 160, 160, 160, 160, 160, 160, 
160, 160, 160 594 594 } ; 595 595 596 596 static yy_state_type yy_last_accepting_state;
+383 -334
scripts/dtc/dtc-parser.tab.c_shipped
··· 1 + /* A Bison parser, made by GNU Bison 2.5. */ 1 2 2 - /* A Bison parser, made by GNU Bison 2.4.1. */ 3 - 4 - /* Skeleton implementation for Bison's Yacc-like parsers in C 3 + /* Bison implementation for Yacc-like parsers in C 5 4 6 - Copyright (C) 1984, 1989, 1990, 2000, 2001, 2002, 2003, 2004, 2005, 2006 7 - Free Software Foundation, Inc. 5 + Copyright (C) 1984, 1989-1990, 2000-2011 Free Software Foundation, Inc. 8 6 9 7 This program is free software: you can redistribute it and/or modify 10 8 it under the terms of the GNU General Public License as published by ··· 44 46 #define YYBISON 1 45 47 46 48 /* Bison version. */ 47 - #define YYBISON_VERSION "2.4.1" 49 + #define YYBISON_VERSION "2.5" 48 50 49 51 /* Skeleton name. */ 50 52 #define YYSKELETON_NAME "yacc.c" ··· 65 67 66 68 /* Copy the first part of user declarations. */ 67 69 68 - /* Line 189 of yacc.c */ 70 + /* Line 268 of yacc.c */ 69 71 #line 21 "dtc-parser.y" 70 72 71 73 #include <stdio.h> ··· 86 88 static unsigned char eval_char_literal(const char *s); 87 89 88 90 89 - /* Line 189 of yacc.c */ 90 - #line 93 "dtc-parser.tab.c" 91 + /* Line 268 of yacc.c */ 92 + #line 91 "dtc-parser.tab.c" 91 93 92 94 /* Enabling traces. */ 93 95 #ifndef YYDEBUG ··· 145 147 typedef union YYSTYPE 146 148 { 147 149 148 - /* Line 214 of yacc.c */ 150 + /* Line 293 of yacc.c */ 149 151 #line 40 "dtc-parser.y" 150 152 151 153 char *propnodename; ··· 169 171 170 172 171 173 172 - /* Line 214 of yacc.c */ 173 - #line 176 "dtc-parser.tab.c" 174 + /* Line 293 of yacc.c */ 175 + #line 174 "dtc-parser.tab.c" 174 176 } YYSTYPE; 175 177 # define YYSTYPE_IS_TRIVIAL 1 176 178 # define yystype YYSTYPE /* obsolescent; will be withdrawn */ ··· 181 183 /* Copy the second part of user declarations. 
*/ 182 184 183 185 184 - /* Line 264 of yacc.c */ 185 - #line 188 "dtc-parser.tab.c" 186 + /* Line 343 of yacc.c */ 187 + #line 186 "dtc-parser.tab.c" 186 188 187 189 #ifdef short 188 190 # undef short ··· 232 234 #define YYSIZE_MAXIMUM ((YYSIZE_T) -1) 233 235 234 236 #ifndef YY_ 235 - # if YYENABLE_NLS 237 + # if defined YYENABLE_NLS && YYENABLE_NLS 236 238 # if ENABLE_NLS 237 239 # include <libintl.h> /* INFRINGES ON USER NAME SPACE */ 238 240 # define YY_(msgid) dgettext ("bison-runtime", msgid) ··· 285 287 # define alloca _alloca 286 288 # else 287 289 # define YYSTACK_ALLOC alloca 288 - # if ! defined _ALLOCA_H && ! defined _STDLIB_H && (defined __STDC__ || defined __C99__FUNC__ \ 290 + # if ! defined _ALLOCA_H && ! defined EXIT_SUCCESS && (defined __STDC__ || defined __C99__FUNC__ \ 289 291 || defined __cplusplus || defined _MSC_VER) 290 292 # include <stdlib.h> /* INFRINGES ON USER NAME SPACE */ 291 - # ifndef _STDLIB_H 292 - # define _STDLIB_H 1 293 + # ifndef EXIT_SUCCESS 294 + # define EXIT_SUCCESS 0 293 295 # endif 294 296 # endif 295 297 # endif ··· 312 314 # ifndef YYSTACK_ALLOC_MAXIMUM 313 315 # define YYSTACK_ALLOC_MAXIMUM YYSIZE_MAXIMUM 314 316 # endif 315 - # if (defined __cplusplus && ! defined _STDLIB_H \ 317 + # if (defined __cplusplus && ! defined EXIT_SUCCESS \ 316 318 && ! ((defined YYMALLOC || defined malloc) \ 317 319 && (defined YYFREE || defined free))) 318 320 # include <stdlib.h> /* INFRINGES ON USER NAME SPACE */ 319 - # ifndef _STDLIB_H 320 - # define _STDLIB_H 1 321 + # ifndef EXIT_SUCCESS 322 + # define EXIT_SUCCESS 0 321 323 # endif 322 324 # endif 323 325 # ifndef YYMALLOC 324 326 # define YYMALLOC malloc 325 - # if ! defined malloc && ! defined _STDLIB_H && (defined __STDC__ || defined __C99__FUNC__ \ 327 + # if ! defined malloc && ! 
defined EXIT_SUCCESS && (defined __STDC__ || defined __C99__FUNC__ \ 326 328 || defined __cplusplus || defined _MSC_VER) 327 329 void *malloc (YYSIZE_T); /* INFRINGES ON USER NAME SPACE */ 328 330 # endif 329 331 # endif 330 332 # ifndef YYFREE 331 333 # define YYFREE free 332 - # if ! defined free && ! defined _STDLIB_H && (defined __STDC__ || defined __C99__FUNC__ \ 334 + # if ! defined free && ! defined EXIT_SUCCESS && (defined __STDC__ || defined __C99__FUNC__ \ 333 335 || defined __cplusplus || defined _MSC_VER) 334 336 void free (void *); /* INFRINGES ON USER NAME SPACE */ 335 337 # endif ··· 358 360 ((N) * (sizeof (yytype_int16) + sizeof (YYSTYPE)) \ 359 361 + YYSTACK_GAP_MAXIMUM) 360 362 361 - /* Copy COUNT objects from FROM to TO. The source and destination do 362 - not overlap. */ 363 - # ifndef YYCOPY 364 - # if defined __GNUC__ && 1 < __GNUC__ 365 - # define YYCOPY(To, From, Count) \ 366 - __builtin_memcpy (To, From, (Count) * sizeof (*(From))) 367 - # else 368 - # define YYCOPY(To, From, Count) \ 369 - do \ 370 - { \ 371 - YYSIZE_T yyi; \ 372 - for (yyi = 0; yyi < (Count); yyi++) \ 373 - (To)[yyi] = (From)[yyi]; \ 374 - } \ 375 - while (YYID (0)) 376 - # endif 377 - # endif 363 + # define YYCOPY_NEEDED 1 378 364 379 365 /* Relocate STACK from its old location to the new one. The 380 366 local variables YYSIZE and YYSTACKSIZE give the old and new number of ··· 377 395 while (YYID (0)) 378 396 379 397 #endif 398 + 399 + #if defined YYCOPY_NEEDED && YYCOPY_NEEDED 400 + /* Copy COUNT objects from FROM to TO. The source and destination do 401 + not overlap. 
*/ 402 + # ifndef YYCOPY 403 + # if defined __GNUC__ && 1 < __GNUC__ 404 + # define YYCOPY(To, From, Count) \ 405 + __builtin_memcpy (To, From, (Count) * sizeof (*(From))) 406 + # else 407 + # define YYCOPY(To, From, Count) \ 408 + do \ 409 + { \ 410 + YYSIZE_T yyi; \ 411 + for (yyi = 0; yyi < (Count); yyi++) \ 412 + (To)[yyi] = (From)[yyi]; \ 413 + } \ 414 + while (YYID (0)) 415 + # endif 416 + # endif 417 + #endif /* !YYCOPY_NEEDED */ 380 418 381 419 /* YYFINAL -- State number of the termination state. */ 382 420 #define YYFINAL 4 ··· 573 571 2, 0, 2, 2, 0, 2, 2, 2, 3, 2 574 572 }; 575 573 576 - /* YYDEFACT[STATE-NAME] -- Default rule to reduce with in state 577 - STATE-NUM when YYTABLE doesn't specify something else to do. Zero 574 + /* YYDEFACT[STATE-NAME] -- Default reduction number in state STATE-NUM. 575 + Performed when YYTABLE doesn't specify something else to do. Zero 578 576 means the default is an error. */ 579 577 static const yytype_uint8 yydefact[] = 580 578 { ··· 635 633 636 634 /* YYTABLE[YYPACT[STATE-NUM]]. What to do in state STATE-NUM. If 637 635 positive, shift that token. If negative, reduce the rule which 638 - number is the opposite. If zero, do what YYDEFACT says. 639 - If YYTABLE_NINF, syntax error. */ 636 + number is the opposite. If YYTABLE_NINF, syntax error. */ 640 637 #define YYTABLE_NINF -1 641 638 static const yytype_uint8 yytable[] = 642 639 { ··· 654 653 68, 0, 0, 70, 0, 0, 0, 0, 72, 0, 655 654 137, 0, 73, 139 656 655 }; 656 + 657 + #define yypact_value_is_default(yystate) \ 658 + ((yystate) == (-78)) 659 + 660 + #define yytable_value_is_error(yytable_value) \ 661 + YYID (0) 657 662 658 663 static const yytype_int16 yycheck[] = 659 664 { ··· 712 705 713 706 /* Like YYERROR except do call yyerror. This remains here temporarily 714 707 to ease the transition to the new meaning of YYERROR, for GCC. 715 - Once GCC version 2 has supplanted version 1, this can go. */ 708 + Once GCC version 2 has supplanted version 1, this can go. 
However, 709 + YYFAIL appears to be in use. Nevertheless, it is formally deprecated 710 + in Bison 2.4.2's NEWS entry, where a plan to phase it out is 711 + discussed. */ 716 712 717 713 #define YYFAIL goto yyerrlab 714 + #if defined YYFAIL 715 + /* This is here to suppress warnings from the GCC cpp's 716 + -Wunused-macros. Normally we don't worry about that warning, but 717 + some users do, and we want to make it easy for users to remove 718 + YYFAIL uses, which will produce warnings from Bison 2.5. */ 719 + #endif 718 720 719 721 #define YYRECOVERING() (!!yyerrstatus) 720 722 ··· 733 717 { \ 734 718 yychar = (Token); \ 735 719 yylval = (Value); \ 736 - yytoken = YYTRANSLATE (yychar); \ 737 720 YYPOPSTACK (1); \ 738 721 goto yybackup; \ 739 722 } \ ··· 774 759 #endif 775 760 776 761 777 - /* YY_LOCATION_PRINT -- Print the location on the stream. 778 - This macro was not mandated originally: define only if we know 779 - we won't break user code: when these are the locations we know. */ 762 + /* This macro is provided for backward compatibility. */ 780 763 781 764 #ifndef YY_LOCATION_PRINT 782 - # if YYLTYPE_IS_TRIVIAL 783 - # define YY_LOCATION_PRINT(File, Loc) \ 784 - fprintf (File, "%d.%d-%d.%d", \ 785 - (Loc).first_line, (Loc).first_column, \ 786 - (Loc).last_line, (Loc).last_column) 787 - # else 788 - # define YY_LOCATION_PRINT(File, Loc) ((void) 0) 789 - # endif 765 + # define YY_LOCATION_PRINT(File, Loc) ((void) 0) 790 766 #endif 791 767 792 768 ··· 969 963 # define YYMAXDEPTH 10000 970 964 #endif 971 965 972 - 973 966 974 967 #if YYERROR_VERBOSE 975 968 ··· 1071 1066 } 1072 1067 # endif 1073 1068 1074 - /* Copy into YYRESULT an error message about the unexpected token 1075 - YYCHAR while in state YYSTATE. Return the number of bytes copied, 1076 - including the terminating null byte. If YYRESULT is null, do not 1077 - copy anything; just return the number of bytes that would be 1078 - copied. 
As a special case, return 0 if an ordinary "syntax error" 1079 - message will do. Return YYSIZE_MAXIMUM if overflow occurs during 1080 - size calculation. */ 1081 - static YYSIZE_T 1082 - yysyntax_error (char *yyresult, int yystate, int yychar) 1069 + /* Copy into *YYMSG, which is of size *YYMSG_ALLOC, an error message 1070 + about the unexpected token YYTOKEN for the state stack whose top is 1071 + YYSSP. 1072 + 1073 + Return 0 if *YYMSG was successfully written. Return 1 if *YYMSG is 1074 + not large enough to hold the message. In that case, also set 1075 + *YYMSG_ALLOC to the required number of bytes. Return 2 if the 1076 + required number of bytes is too large to store. */ 1077 + static int 1078 + yysyntax_error (YYSIZE_T *yymsg_alloc, char **yymsg, 1079 + yytype_int16 *yyssp, int yytoken) 1083 1080 { 1084 - int yyn = yypact[yystate]; 1081 + YYSIZE_T yysize0 = yytnamerr (0, yytname[yytoken]); 1082 + YYSIZE_T yysize = yysize0; 1083 + YYSIZE_T yysize1; 1084 + enum { YYERROR_VERBOSE_ARGS_MAXIMUM = 5 }; 1085 + /* Internationalized format string. */ 1086 + const char *yyformat = 0; 1087 + /* Arguments of yyformat. */ 1088 + char const *yyarg[YYERROR_VERBOSE_ARGS_MAXIMUM]; 1089 + /* Number of reported tokens (one for the "unexpected", one per 1090 + "expected"). */ 1091 + int yycount = 0; 1085 1092 1086 - if (! (YYPACT_NINF < yyn && yyn <= YYLAST)) 1087 - return 0; 1088 - else 1093 + /* There are many possibilities here to consider: 1094 + - Assume YYFAIL is not used. It's too flawed to consider. See 1095 + <http://lists.gnu.org/archive/html/bison-patches/2009-12/msg00024.html> 1096 + for details. YYERROR is fine as it does not invoke this 1097 + function. 1098 + - If this state is a consistent state with a default action, then 1099 + the only way this function was invoked is if the default action 1100 + is an error action. In that case, don't check for expected 1101 + tokens because there are none. 
1102 + - The only way there can be no lookahead present (in yychar) is if 1103 + this state is a consistent state with a default action. Thus, 1104 + detecting the absence of a lookahead is sufficient to determine 1105 + that there is no unexpected or expected token to report. In that 1106 + case, just report a simple "syntax error". 1107 + - Don't assume there isn't a lookahead just because this state is a 1108 + consistent state with a default action. There might have been a 1109 + previous inconsistent state, consistent state with a non-default 1110 + action, or user semantic action that manipulated yychar. 1111 + - Of course, the expected token list depends on states to have 1112 + correct lookahead information, and it depends on the parser not 1113 + to perform extra reductions after fetching a lookahead from the 1114 + scanner and before detecting a syntax error. Thus, state merging 1115 + (from LALR or IELR) and default reductions corrupt the expected 1116 + token list. However, the list is correct for canonical LR with 1117 + one exception: it will still contain any token that will not be 1118 + accepted due to an error action in a later state. 1119 + */ 1120 + if (yytoken != YYEMPTY) 1089 1121 { 1090 - int yytype = YYTRANSLATE (yychar); 1091 - YYSIZE_T yysize0 = yytnamerr (0, yytname[yytype]); 1092 - YYSIZE_T yysize = yysize0; 1093 - YYSIZE_T yysize1; 1094 - int yysize_overflow = 0; 1095 - enum { YYERROR_VERBOSE_ARGS_MAXIMUM = 5 }; 1096 - char const *yyarg[YYERROR_VERBOSE_ARGS_MAXIMUM]; 1097 - int yyx; 1122 + int yyn = yypact[*yyssp]; 1123 + yyarg[yycount++] = yytname[yytoken]; 1124 + if (!yypact_value_is_default (yyn)) 1125 + { 1126 + /* Start YYX at -YYN if negative to avoid negative indexes in 1127 + YYCHECK. In other words, skip the first -YYN actions for 1128 + this state because they are default actions. */ 1129 + int yyxbegin = yyn < 0 ? -yyn : 0; 1130 + /* Stay within bounds of both yycheck and yytname. 
*/ 1131 + int yychecklim = YYLAST - yyn + 1; 1132 + int yyxend = yychecklim < YYNTOKENS ? yychecklim : YYNTOKENS; 1133 + int yyx; 1098 1134 1099 - # if 0 1100 - /* This is so xgettext sees the translatable formats that are 1101 - constructed on the fly. */ 1102 - YY_("syntax error, unexpected %s"); 1103 - YY_("syntax error, unexpected %s, expecting %s"); 1104 - YY_("syntax error, unexpected %s, expecting %s or %s"); 1105 - YY_("syntax error, unexpected %s, expecting %s or %s or %s"); 1106 - YY_("syntax error, unexpected %s, expecting %s or %s or %s or %s"); 1107 - # endif 1108 - char *yyfmt; 1109 - char const *yyf; 1110 - static char const yyunexpected[] = "syntax error, unexpected %s"; 1111 - static char const yyexpecting[] = ", expecting %s"; 1112 - static char const yyor[] = " or %s"; 1113 - char yyformat[sizeof yyunexpected 1114 - + sizeof yyexpecting - 1 1115 - + ((YYERROR_VERBOSE_ARGS_MAXIMUM - 2) 1116 - * (sizeof yyor - 1))]; 1117 - char const *yyprefix = yyexpecting; 1118 - 1119 - /* Start YYX at -YYN if negative to avoid negative indexes in 1120 - YYCHECK. */ 1121 - int yyxbegin = yyn < 0 ? -yyn : 0; 1122 - 1123 - /* Stay within bounds of both yycheck and yytname. */ 1124 - int yychecklim = YYLAST - yyn + 1; 1125 - int yyxend = yychecklim < YYNTOKENS ? 
yychecklim : YYNTOKENS; 1126 - int yycount = 1; 1127 - 1128 - yyarg[0] = yytname[yytype]; 1129 - yyfmt = yystpcpy (yyformat, yyunexpected); 1130 - 1131 - for (yyx = yyxbegin; yyx < yyxend; ++yyx) 1132 - if (yycheck[yyx + yyn] == yyx && yyx != YYTERROR) 1133 - { 1134 - if (yycount == YYERROR_VERBOSE_ARGS_MAXIMUM) 1135 - { 1136 - yycount = 1; 1137 - yysize = yysize0; 1138 - yyformat[sizeof yyunexpected - 1] = '\0'; 1139 - break; 1140 - } 1141 - yyarg[yycount++] = yytname[yyx]; 1142 - yysize1 = yysize + yytnamerr (0, yytname[yyx]); 1143 - yysize_overflow |= (yysize1 < yysize); 1144 - yysize = yysize1; 1145 - yyfmt = yystpcpy (yyfmt, yyprefix); 1146 - yyprefix = yyor; 1147 - } 1148 - 1149 - yyf = YY_(yyformat); 1150 - yysize1 = yysize + yystrlen (yyf); 1151 - yysize_overflow |= (yysize1 < yysize); 1152 - yysize = yysize1; 1153 - 1154 - if (yysize_overflow) 1155 - return YYSIZE_MAXIMUM; 1156 - 1157 - if (yyresult) 1158 - { 1159 - /* Avoid sprintf, as that infringes on the user's name space. 1160 - Don't have undefined behavior even if the translation 1161 - produced a string with the wrong number of "%s"s. */ 1162 - char *yyp = yyresult; 1163 - int yyi = 0; 1164 - while ((*yyp = *yyf) != '\0') 1165 - { 1166 - if (*yyp == '%' && yyf[1] == 's' && yyi < yycount) 1167 - { 1168 - yyp += yytnamerr (yyp, yyarg[yyi++]); 1169 - yyf += 2; 1170 - } 1171 - else 1172 - { 1173 - yyp++; 1174 - yyf++; 1175 - } 1176 - } 1177 - } 1178 - return yysize; 1135 + for (yyx = yyxbegin; yyx < yyxend; ++yyx) 1136 + if (yycheck[yyx + yyn] == yyx && yyx != YYTERROR 1137 + && !yytable_value_is_error (yytable[yyx + yyn])) 1138 + { 1139 + if (yycount == YYERROR_VERBOSE_ARGS_MAXIMUM) 1140 + { 1141 + yycount = 1; 1142 + yysize = yysize0; 1143 + break; 1144 + } 1145 + yyarg[yycount++] = yytname[yyx]; 1146 + yysize1 = yysize + yytnamerr (0, yytname[yyx]); 1147 + if (! 
(yysize <= yysize1 1148 + && yysize1 <= YYSTACK_ALLOC_MAXIMUM)) 1149 + return 2; 1150 + yysize = yysize1; 1151 + } 1152 + } 1179 1153 } 1154 + 1155 + switch (yycount) 1156 + { 1157 + # define YYCASE_(N, S) \ 1158 + case N: \ 1159 + yyformat = S; \ 1160 + break 1161 + YYCASE_(0, YY_("syntax error")); 1162 + YYCASE_(1, YY_("syntax error, unexpected %s")); 1163 + YYCASE_(2, YY_("syntax error, unexpected %s, expecting %s")); 1164 + YYCASE_(3, YY_("syntax error, unexpected %s, expecting %s or %s")); 1165 + YYCASE_(4, YY_("syntax error, unexpected %s, expecting %s or %s or %s")); 1166 + YYCASE_(5, YY_("syntax error, unexpected %s, expecting %s or %s or %s or %s")); 1167 + # undef YYCASE_ 1168 + } 1169 + 1170 + yysize1 = yysize + yystrlen (yyformat); 1171 + if (! (yysize <= yysize1 && yysize1 <= YYSTACK_ALLOC_MAXIMUM)) 1172 + return 2; 1173 + yysize = yysize1; 1174 + 1175 + if (*yymsg_alloc < yysize) 1176 + { 1177 + *yymsg_alloc = 2 * yysize; 1178 + if (! (yysize <= *yymsg_alloc 1179 + && *yymsg_alloc <= YYSTACK_ALLOC_MAXIMUM)) 1180 + *yymsg_alloc = YYSTACK_ALLOC_MAXIMUM; 1181 + return 1; 1182 + } 1183 + 1184 + /* Avoid sprintf, as that infringes on the user's name space. 1185 + Don't have undefined behavior even if the translation 1186 + produced a string with the wrong number of "%s"s. */ 1187 + { 1188 + char *yyp = *yymsg; 1189 + int yyi = 0; 1190 + while ((*yyp = *yyformat) != '\0') 1191 + if (*yyp == '%' && yyformat[1] == 's' && yyi < yycount) 1192 + { 1193 + yyp += yytnamerr (yyp, yyarg[yyi++]); 1194 + yyformat += 2; 1195 + } 1196 + else 1197 + { 1198 + yyp++; 1199 + yyformat++; 1200 + } 1201 + } 1202 + return 0; 1180 1203 } 1181 1204 #endif /* YYERROR_VERBOSE */ 1182 - 1183 1205 1184 1206 /*-----------------------------------------------. 1185 1207 | Release the memory associated to this symbol. | ··· 1239 1207 } 1240 1208 } 1241 1209 1210 + 1242 1211 /* Prevent warnings from -Wmissing-prototypes. 
*/ 1243 1212 #ifdef YYPARSE_PARAM 1244 1213 #if defined __STDC__ || defined __cplusplus ··· 1266 1233 int yynerrs; 1267 1234 1268 1235 1269 - 1270 - /*-------------------------. 1271 - | yyparse or yypush_parse. | 1272 - `-------------------------*/ 1236 + /*----------. 1237 + | yyparse. | 1238 + `----------*/ 1273 1239 1274 1240 #ifdef YYPARSE_PARAM 1275 1241 #if (defined __STDC__ || defined __C99__FUNC__ \ ··· 1292 1260 #endif 1293 1261 #endif 1294 1262 { 1295 - 1296 - 1297 1263 int yystate; 1298 1264 /* Number of tokens to shift before error messages enabled. */ 1299 1265 int yyerrstatus; ··· 1446 1416 1447 1417 /* First try to decide what to do without reference to lookahead token. */ 1448 1418 yyn = yypact[yystate]; 1449 - if (yyn == YYPACT_NINF) 1419 + if (yypact_value_is_default (yyn)) 1450 1420 goto yydefault; 1451 1421 1452 1422 /* Not known => get a lookahead token if don't already have one. */ ··· 1477 1447 yyn = yytable[yyn]; 1478 1448 if (yyn <= 0) 1479 1449 { 1480 - if (yyn == 0 || yyn == YYTABLE_NINF) 1481 - goto yyerrlab; 1450 + if (yytable_value_is_error (yyn)) 1451 + goto yyerrlab; 1482 1452 yyn = -yyn; 1483 1453 goto yyreduce; 1484 1454 } ··· 1533 1503 { 1534 1504 case 2: 1535 1505 1536 - /* Line 1455 of yacc.c */ 1506 + /* Line 1806 of yacc.c */ 1537 1507 #line 110 "dtc-parser.y" 1538 1508 { 1539 1509 the_boot_info = build_boot_info((yyvsp[(3) - (4)].re), (yyvsp[(4) - (4)].node), 1540 1510 guess_boot_cpuid((yyvsp[(4) - (4)].node))); 1541 - ;} 1511 + } 1542 1512 break; 1543 1513 1544 1514 case 3: 1545 1515 1546 - /* Line 1455 of yacc.c */ 1516 + /* Line 1806 of yacc.c */ 1547 1517 #line 118 "dtc-parser.y" 1548 1518 { 1549 1519 (yyval.re) = NULL; 1550 - ;} 1520 + } 1551 1521 break; 1552 1522 1553 1523 case 4: 1554 1524 1555 - /* Line 1455 of yacc.c */ 1525 + /* Line 1806 of yacc.c */ 1556 1526 #line 122 "dtc-parser.y" 1557 1527 { 1558 1528 (yyval.re) = chain_reserve_entry((yyvsp[(1) - (2)].re), (yyvsp[(2) - (2)].re)); 1559 - ;} 1529 + } 1560 1530 
break; 1561 1531 1562 1532 case 5: 1563 1533 1564 - /* Line 1455 of yacc.c */ 1534 + /* Line 1806 of yacc.c */ 1565 1535 #line 129 "dtc-parser.y" 1566 1536 { 1567 1537 (yyval.re) = build_reserve_entry((yyvsp[(2) - (4)].integer), (yyvsp[(3) - (4)].integer)); 1568 - ;} 1538 + } 1569 1539 break; 1570 1540 1571 1541 case 6: 1572 1542 1573 - /* Line 1455 of yacc.c */ 1543 + /* Line 1806 of yacc.c */ 1574 1544 #line 133 "dtc-parser.y" 1575 1545 { 1576 1546 add_label(&(yyvsp[(2) - (2)].re)->labels, (yyvsp[(1) - (2)].labelref)); 1577 1547 (yyval.re) = (yyvsp[(2) - (2)].re); 1578 - ;} 1548 + } 1579 1549 break; 1580 1550 1581 1551 case 7: 1582 1552 1583 - /* Line 1455 of yacc.c */ 1553 + /* Line 1806 of yacc.c */ 1584 1554 #line 141 "dtc-parser.y" 1585 1555 { 1586 1556 (yyval.node) = name_node((yyvsp[(2) - (2)].node), ""); 1587 - ;} 1557 + } 1588 1558 break; 1589 1559 1590 1560 case 8: 1591 1561 1592 - /* Line 1455 of yacc.c */ 1562 + /* Line 1806 of yacc.c */ 1593 1563 #line 145 "dtc-parser.y" 1594 1564 { 1595 1565 (yyval.node) = merge_nodes((yyvsp[(1) - (3)].node), (yyvsp[(3) - (3)].node)); 1596 - ;} 1566 + } 1597 1567 break; 1598 1568 1599 1569 case 9: 1600 1570 1601 - /* Line 1455 of yacc.c */ 1571 + /* Line 1806 of yacc.c */ 1602 1572 #line 149 "dtc-parser.y" 1603 1573 { 1604 1574 struct node *target = get_node_by_ref((yyvsp[(1) - (3)].node), (yyvsp[(2) - (3)].labelref)); ··· 1608 1578 else 1609 1579 print_error("label or path, '%s', not found", (yyvsp[(2) - (3)].labelref)); 1610 1580 (yyval.node) = (yyvsp[(1) - (3)].node); 1611 - ;} 1581 + } 1612 1582 break; 1613 1583 1614 1584 case 10: 1615 1585 1616 - /* Line 1455 of yacc.c */ 1586 + /* Line 1806 of yacc.c */ 1617 1587 #line 159 "dtc-parser.y" 1618 1588 { 1619 1589 struct node *target = get_node_by_ref((yyvsp[(1) - (4)].node), (yyvsp[(3) - (4)].labelref)); ··· 1624 1594 delete_node(target); 1625 1595 1626 1596 (yyval.node) = (yyvsp[(1) - (4)].node); 1627 - ;} 1597 + } 1628 1598 break; 1629 1599 1630 1600 case 11: 
1631 1601 1632 - /* Line 1455 of yacc.c */ 1602 + /* Line 1806 of yacc.c */ 1633 1603 #line 173 "dtc-parser.y" 1634 1604 { 1635 1605 (yyval.node) = build_node((yyvsp[(2) - (5)].proplist), (yyvsp[(3) - (5)].nodelist)); 1636 - ;} 1606 + } 1637 1607 break; 1638 1608 1639 1609 case 12: 1640 1610 1641 - /* Line 1455 of yacc.c */ 1611 + /* Line 1806 of yacc.c */ 1642 1612 #line 180 "dtc-parser.y" 1643 1613 { 1644 1614 (yyval.proplist) = NULL; 1645 - ;} 1615 + } 1646 1616 break; 1647 1617 1648 1618 case 13: 1649 1619 1650 - /* Line 1455 of yacc.c */ 1620 + /* Line 1806 of yacc.c */ 1651 1621 #line 184 "dtc-parser.y" 1652 1622 { 1653 1623 (yyval.proplist) = chain_property((yyvsp[(2) - (2)].prop), (yyvsp[(1) - (2)].proplist)); 1654 - ;} 1624 + } 1655 1625 break; 1656 1626 1657 1627 case 14: 1658 1628 1659 - /* Line 1455 of yacc.c */ 1629 + /* Line 1806 of yacc.c */ 1660 1630 #line 191 "dtc-parser.y" 1661 1631 { 1662 1632 (yyval.prop) = build_property((yyvsp[(1) - (4)].propnodename), (yyvsp[(3) - (4)].data)); 1663 - ;} 1633 + } 1664 1634 break; 1665 1635 1666 1636 case 15: 1667 1637 1668 - /* Line 1455 of yacc.c */ 1638 + /* Line 1806 of yacc.c */ 1669 1639 #line 195 "dtc-parser.y" 1670 1640 { 1671 1641 (yyval.prop) = build_property((yyvsp[(1) - (2)].propnodename), empty_data); 1672 - ;} 1642 + } 1673 1643 break; 1674 1644 1675 1645 case 16: 1676 1646 1677 - /* Line 1455 of yacc.c */ 1647 + /* Line 1806 of yacc.c */ 1678 1648 #line 199 "dtc-parser.y" 1679 1649 { 1680 1650 (yyval.prop) = build_property_delete((yyvsp[(2) - (3)].propnodename)); 1681 - ;} 1651 + } 1682 1652 break; 1683 1653 1684 1654 case 17: 1685 1655 1686 - /* Line 1455 of yacc.c */ 1656 + /* Line 1806 of yacc.c */ 1687 1657 #line 203 "dtc-parser.y" 1688 1658 { 1689 1659 add_label(&(yyvsp[(2) - (2)].prop)->labels, (yyvsp[(1) - (2)].labelref)); 1690 1660 (yyval.prop) = (yyvsp[(2) - (2)].prop); 1691 - ;} 1661 + } 1692 1662 break; 1693 1663 1694 1664 case 18: 1695 1665 1696 - /* Line 1455 of yacc.c */ 1666 + /* 
Line 1806 of yacc.c */ 1697 1667 #line 211 "dtc-parser.y" 1698 1668 { 1699 1669 (yyval.data) = data_merge((yyvsp[(1) - (2)].data), (yyvsp[(2) - (2)].data)); 1700 - ;} 1670 + } 1701 1671 break; 1702 1672 1703 1673 case 19: 1704 1674 1705 - /* Line 1455 of yacc.c */ 1675 + /* Line 1806 of yacc.c */ 1706 1676 #line 215 "dtc-parser.y" 1707 1677 { 1708 1678 (yyval.data) = data_merge((yyvsp[(1) - (3)].data), (yyvsp[(2) - (3)].array).data); 1709 - ;} 1679 + } 1710 1680 break; 1711 1681 1712 1682 case 20: 1713 1683 1714 - /* Line 1455 of yacc.c */ 1684 + /* Line 1806 of yacc.c */ 1715 1685 #line 219 "dtc-parser.y" 1716 1686 { 1717 1687 (yyval.data) = data_merge((yyvsp[(1) - (4)].data), (yyvsp[(3) - (4)].data)); 1718 - ;} 1688 + } 1719 1689 break; 1720 1690 1721 1691 case 21: 1722 1692 1723 - /* Line 1455 of yacc.c */ 1693 + /* Line 1806 of yacc.c */ 1724 1694 #line 223 "dtc-parser.y" 1725 1695 { 1726 1696 (yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), REF_PATH, (yyvsp[(2) - (2)].labelref)); 1727 - ;} 1697 + } 1728 1698 break; 1729 1699 1730 1700 case 22: 1731 1701 1732 - /* Line 1455 of yacc.c */ 1702 + /* Line 1806 of yacc.c */ 1733 1703 #line 227 "dtc-parser.y" 1734 1704 { 1735 1705 FILE *f = srcfile_relative_open((yyvsp[(4) - (9)].data).val, NULL); ··· 1746 1716 1747 1717 (yyval.data) = data_merge((yyvsp[(1) - (9)].data), d); 1748 1718 fclose(f); 1749 - ;} 1719 + } 1750 1720 break; 1751 1721 1752 1722 case 23: 1753 1723 1754 - /* Line 1455 of yacc.c */ 1724 + /* Line 1806 of yacc.c */ 1755 1725 #line 244 "dtc-parser.y" 1756 1726 { 1757 1727 FILE *f = srcfile_relative_open((yyvsp[(4) - (5)].data).val, NULL); ··· 1761 1731 1762 1732 (yyval.data) = data_merge((yyvsp[(1) - (5)].data), d); 1763 1733 fclose(f); 1764 - ;} 1734 + } 1765 1735 break; 1766 1736 1767 1737 case 24: 1768 1738 1769 - /* Line 1455 of yacc.c */ 1739 + /* Line 1806 of yacc.c */ 1770 1740 #line 254 "dtc-parser.y" 1771 1741 { 1772 1742 (yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), LABEL, 
(yyvsp[(2) - (2)].labelref)); 1773 - ;} 1743 + } 1774 1744 break; 1775 1745 1776 1746 case 25: 1777 1747 1778 - /* Line 1455 of yacc.c */ 1748 + /* Line 1806 of yacc.c */ 1779 1749 #line 261 "dtc-parser.y" 1780 1750 { 1781 1751 (yyval.data) = empty_data; 1782 - ;} 1752 + } 1783 1753 break; 1784 1754 1785 1755 case 26: 1786 1756 1787 - /* Line 1455 of yacc.c */ 1757 + /* Line 1806 of yacc.c */ 1788 1758 #line 265 "dtc-parser.y" 1789 1759 { 1790 1760 (yyval.data) = (yyvsp[(1) - (2)].data); 1791 - ;} 1761 + } 1792 1762 break; 1793 1763 1794 1764 case 27: 1795 1765 1796 - /* Line 1455 of yacc.c */ 1766 + /* Line 1806 of yacc.c */ 1797 1767 #line 269 "dtc-parser.y" 1798 1768 { 1799 1769 (yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), LABEL, (yyvsp[(2) - (2)].labelref)); 1800 - ;} 1770 + } 1801 1771 break; 1802 1772 1803 1773 case 28: 1804 1774 1805 - /* Line 1455 of yacc.c */ 1775 + /* Line 1806 of yacc.c */ 1806 1776 #line 276 "dtc-parser.y" 1807 1777 { 1808 1778 (yyval.array).data = empty_data; ··· 1817 1787 " are currently supported"); 1818 1788 (yyval.array).bits = 32; 1819 1789 } 1820 - ;} 1790 + } 1821 1791 break; 1822 1792 1823 1793 case 29: 1824 1794 1825 - /* Line 1455 of yacc.c */ 1795 + /* Line 1806 of yacc.c */ 1826 1796 #line 291 "dtc-parser.y" 1827 1797 { 1828 1798 (yyval.array).data = empty_data; 1829 1799 (yyval.array).bits = 32; 1830 - ;} 1800 + } 1831 1801 break; 1832 1802 1833 1803 case 30: 1834 1804 1835 - /* Line 1455 of yacc.c */ 1805 + /* Line 1806 of yacc.c */ 1836 1806 #line 296 "dtc-parser.y" 1837 1807 { 1838 1808 if ((yyvsp[(1) - (2)].array).bits < 64) { ··· 1852 1822 } 1853 1823 1854 1824 (yyval.array).data = data_append_integer((yyvsp[(1) - (2)].array).data, (yyvsp[(2) - (2)].integer), (yyvsp[(1) - (2)].array).bits); 1855 - ;} 1825 + } 1856 1826 break; 1857 1827 1858 1828 case 31: 1859 1829 1860 - /* Line 1455 of yacc.c */ 1830 + /* Line 1806 of yacc.c */ 1861 1831 #line 316 "dtc-parser.y" 1862 1832 { 1863 1833 uint64_t val = ~0ULL 
>> (64 - (yyvsp[(1) - (2)].array).bits); ··· 1871 1841 "arrays with 32-bit elements."); 1872 1842 1873 1843 (yyval.array).data = data_append_integer((yyvsp[(1) - (2)].array).data, val, (yyvsp[(1) - (2)].array).bits); 1874 - ;} 1844 + } 1875 1845 break; 1876 1846 1877 1847 case 32: 1878 1848 1879 - /* Line 1455 of yacc.c */ 1849 + /* Line 1806 of yacc.c */ 1880 1850 #line 330 "dtc-parser.y" 1881 1851 { 1882 1852 (yyval.array).data = data_add_marker((yyvsp[(1) - (2)].array).data, LABEL, (yyvsp[(2) - (2)].labelref)); 1883 - ;} 1853 + } 1884 1854 break; 1885 1855 1886 1856 case 33: 1887 1857 1888 - /* Line 1455 of yacc.c */ 1858 + /* Line 1806 of yacc.c */ 1889 1859 #line 337 "dtc-parser.y" 1890 1860 { 1891 1861 (yyval.integer) = eval_literal((yyvsp[(1) - (1)].literal), 0, 64); 1892 - ;} 1862 + } 1893 1863 break; 1894 1864 1895 1865 case 34: 1896 1866 1897 - /* Line 1455 of yacc.c */ 1867 + /* Line 1806 of yacc.c */ 1898 1868 #line 341 "dtc-parser.y" 1899 1869 { 1900 1870 (yyval.integer) = eval_char_literal((yyvsp[(1) - (1)].literal)); 1901 - ;} 1871 + } 1902 1872 break; 1903 1873 1904 1874 case 35: 1905 1875 1906 - /* Line 1455 of yacc.c */ 1876 + /* Line 1806 of yacc.c */ 1907 1877 #line 345 "dtc-parser.y" 1908 1878 { 1909 1879 (yyval.integer) = (yyvsp[(2) - (3)].integer); 1910 - ;} 1880 + } 1911 1881 break; 1912 1882 1913 1883 case 38: 1914 1884 1915 - /* Line 1455 of yacc.c */ 1885 + /* Line 1806 of yacc.c */ 1916 1886 #line 356 "dtc-parser.y" 1917 - { (yyval.integer) = (yyvsp[(1) - (5)].integer) ? (yyvsp[(3) - (5)].integer) : (yyvsp[(5) - (5)].integer); ;} 1887 + { (yyval.integer) = (yyvsp[(1) - (5)].integer) ? 
(yyvsp[(3) - (5)].integer) : (yyvsp[(5) - (5)].integer); } 1918 1888 break; 1919 1889 1920 1890 case 40: 1921 1891 1922 - /* Line 1455 of yacc.c */ 1892 + /* Line 1806 of yacc.c */ 1923 1893 #line 361 "dtc-parser.y" 1924 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) || (yyvsp[(3) - (3)].integer); ;} 1894 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) || (yyvsp[(3) - (3)].integer); } 1925 1895 break; 1926 1896 1927 1897 case 42: 1928 1898 1929 - /* Line 1455 of yacc.c */ 1899 + /* Line 1806 of yacc.c */ 1930 1900 #line 366 "dtc-parser.y" 1931 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) && (yyvsp[(3) - (3)].integer); ;} 1901 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) && (yyvsp[(3) - (3)].integer); } 1932 1902 break; 1933 1903 1934 1904 case 44: 1935 1905 1936 - /* Line 1455 of yacc.c */ 1906 + /* Line 1806 of yacc.c */ 1937 1907 #line 371 "dtc-parser.y" 1938 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) | (yyvsp[(3) - (3)].integer); ;} 1908 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) | (yyvsp[(3) - (3)].integer); } 1939 1909 break; 1940 1910 1941 1911 case 46: 1942 1912 1943 - /* Line 1455 of yacc.c */ 1913 + /* Line 1806 of yacc.c */ 1944 1914 #line 376 "dtc-parser.y" 1945 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) ^ (yyvsp[(3) - (3)].integer); ;} 1915 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) ^ (yyvsp[(3) - (3)].integer); } 1946 1916 break; 1947 1917 1948 1918 case 48: 1949 1919 1950 - /* Line 1455 of yacc.c */ 1920 + /* Line 1806 of yacc.c */ 1951 1921 #line 381 "dtc-parser.y" 1952 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) & (yyvsp[(3) - (3)].integer); ;} 1922 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) & (yyvsp[(3) - (3)].integer); } 1953 1923 break; 1954 1924 1955 1925 case 50: 1956 1926 1957 - /* Line 1455 of yacc.c */ 1927 + /* Line 1806 of yacc.c */ 1958 1928 #line 386 "dtc-parser.y" 1959 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) == (yyvsp[(3) - (3)].integer); ;} 1929 + { (yyval.integer) = (yyvsp[(1) - 
(3)].integer) == (yyvsp[(3) - (3)].integer); } 1960 1930 break; 1961 1931 1962 1932 case 51: 1963 1933 1964 - /* Line 1455 of yacc.c */ 1934 + /* Line 1806 of yacc.c */ 1965 1935 #line 387 "dtc-parser.y" 1966 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) != (yyvsp[(3) - (3)].integer); ;} 1936 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) != (yyvsp[(3) - (3)].integer); } 1967 1937 break; 1968 1938 1969 1939 case 53: 1970 1940 1971 - /* Line 1455 of yacc.c */ 1941 + /* Line 1806 of yacc.c */ 1972 1942 #line 392 "dtc-parser.y" 1973 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) < (yyvsp[(3) - (3)].integer); ;} 1943 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) < (yyvsp[(3) - (3)].integer); } 1974 1944 break; 1975 1945 1976 1946 case 54: 1977 1947 1978 - /* Line 1455 of yacc.c */ 1948 + /* Line 1806 of yacc.c */ 1979 1949 #line 393 "dtc-parser.y" 1980 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) > (yyvsp[(3) - (3)].integer); ;} 1950 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) > (yyvsp[(3) - (3)].integer); } 1981 1951 break; 1982 1952 1983 1953 case 55: 1984 1954 1985 - /* Line 1455 of yacc.c */ 1955 + /* Line 1806 of yacc.c */ 1986 1956 #line 394 "dtc-parser.y" 1987 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) <= (yyvsp[(3) - (3)].integer); ;} 1957 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) <= (yyvsp[(3) - (3)].integer); } 1988 1958 break; 1989 1959 1990 1960 case 56: 1991 1961 1992 - /* Line 1455 of yacc.c */ 1962 + /* Line 1806 of yacc.c */ 1993 1963 #line 395 "dtc-parser.y" 1994 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) >= (yyvsp[(3) - (3)].integer); ;} 1964 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) >= (yyvsp[(3) - (3)].integer); } 1995 1965 break; 1996 1966 1997 1967 case 57: 1998 1968 1999 - /* Line 1455 of yacc.c */ 1969 + /* Line 1806 of yacc.c */ 2000 1970 #line 399 "dtc-parser.y" 2001 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) << (yyvsp[(3) - (3)].integer); ;} 1971 + { (yyval.integer) = (yyvsp[(1) - 
(3)].integer) << (yyvsp[(3) - (3)].integer); } 2002 1972 break; 2003 1973 2004 1974 case 58: 2005 1975 2006 - /* Line 1455 of yacc.c */ 1976 + /* Line 1806 of yacc.c */ 2007 1977 #line 400 "dtc-parser.y" 2008 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) >> (yyvsp[(3) - (3)].integer); ;} 1978 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) >> (yyvsp[(3) - (3)].integer); } 2009 1979 break; 2010 1980 2011 1981 case 60: 2012 1982 2013 - /* Line 1455 of yacc.c */ 1983 + /* Line 1806 of yacc.c */ 2014 1984 #line 405 "dtc-parser.y" 2015 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) + (yyvsp[(3) - (3)].integer); ;} 1985 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) + (yyvsp[(3) - (3)].integer); } 2016 1986 break; 2017 1987 2018 1988 case 61: 2019 1989 2020 - /* Line 1455 of yacc.c */ 1990 + /* Line 1806 of yacc.c */ 2021 1991 #line 406 "dtc-parser.y" 2022 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) - (yyvsp[(3) - (3)].integer); ;} 1992 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) - (yyvsp[(3) - (3)].integer); } 2023 1993 break; 2024 1994 2025 1995 case 63: 2026 1996 2027 - /* Line 1455 of yacc.c */ 1997 + /* Line 1806 of yacc.c */ 2028 1998 #line 411 "dtc-parser.y" 2029 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) * (yyvsp[(3) - (3)].integer); ;} 1999 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) * (yyvsp[(3) - (3)].integer); } 2030 2000 break; 2031 2001 2032 2002 case 64: 2033 2003 2034 - /* Line 1455 of yacc.c */ 2004 + /* Line 1806 of yacc.c */ 2035 2005 #line 412 "dtc-parser.y" 2036 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) / (yyvsp[(3) - (3)].integer); ;} 2006 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) / (yyvsp[(3) - (3)].integer); } 2037 2007 break; 2038 2008 2039 2009 case 65: 2040 2010 2041 - /* Line 1455 of yacc.c */ 2011 + /* Line 1806 of yacc.c */ 2042 2012 #line 413 "dtc-parser.y" 2043 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) % (yyvsp[(3) - (3)].integer); ;} 2013 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) % 
(yyvsp[(3) - (3)].integer); } 2044 2014 break; 2045 2015 2046 2016 case 68: 2047 2017 2048 - /* Line 1455 of yacc.c */ 2018 + /* Line 1806 of yacc.c */ 2049 2019 #line 419 "dtc-parser.y" 2050 - { (yyval.integer) = -(yyvsp[(2) - (2)].integer); ;} 2020 + { (yyval.integer) = -(yyvsp[(2) - (2)].integer); } 2051 2021 break; 2052 2022 2053 2023 case 69: 2054 2024 2055 - /* Line 1455 of yacc.c */ 2025 + /* Line 1806 of yacc.c */ 2056 2026 #line 420 "dtc-parser.y" 2057 - { (yyval.integer) = ~(yyvsp[(2) - (2)].integer); ;} 2027 + { (yyval.integer) = ~(yyvsp[(2) - (2)].integer); } 2058 2028 break; 2059 2029 2060 2030 case 70: 2061 2031 2062 - /* Line 1455 of yacc.c */ 2032 + /* Line 1806 of yacc.c */ 2063 2033 #line 421 "dtc-parser.y" 2064 - { (yyval.integer) = !(yyvsp[(2) - (2)].integer); ;} 2034 + { (yyval.integer) = !(yyvsp[(2) - (2)].integer); } 2065 2035 break; 2066 2036 2067 2037 case 71: 2068 2038 2069 - /* Line 1455 of yacc.c */ 2039 + /* Line 1806 of yacc.c */ 2070 2040 #line 426 "dtc-parser.y" 2071 2041 { 2072 2042 (yyval.data) = empty_data; 2073 - ;} 2043 + } 2074 2044 break; 2075 2045 2076 2046 case 72: 2077 2047 2078 - /* Line 1455 of yacc.c */ 2048 + /* Line 1806 of yacc.c */ 2079 2049 #line 430 "dtc-parser.y" 2080 2050 { 2081 2051 (yyval.data) = data_append_byte((yyvsp[(1) - (2)].data), (yyvsp[(2) - (2)].byte)); 2082 - ;} 2052 + } 2083 2053 break; 2084 2054 2085 2055 case 73: 2086 2056 2087 - /* Line 1455 of yacc.c */ 2057 + /* Line 1806 of yacc.c */ 2088 2058 #line 434 "dtc-parser.y" 2089 2059 { 2090 2060 (yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), LABEL, (yyvsp[(2) - (2)].labelref)); 2091 - ;} 2061 + } 2092 2062 break; 2093 2063 2094 2064 case 74: 2095 2065 2096 - /* Line 1455 of yacc.c */ 2066 + /* Line 1806 of yacc.c */ 2097 2067 #line 441 "dtc-parser.y" 2098 2068 { 2099 2069 (yyval.nodelist) = NULL; 2100 - ;} 2070 + } 2101 2071 break; 2102 2072 2103 2073 case 75: 2104 2074 2105 - /* Line 1455 of yacc.c */ 2075 + /* Line 1806 of yacc.c */ 2106 
2076 #line 445 "dtc-parser.y" 2107 2077 { 2108 2078 (yyval.nodelist) = chain_node((yyvsp[(1) - (2)].node), (yyvsp[(2) - (2)].nodelist)); 2109 - ;} 2079 + } 2110 2080 break; 2111 2081 2112 2082 case 76: 2113 2083 2114 - /* Line 1455 of yacc.c */ 2084 + /* Line 1806 of yacc.c */ 2115 2085 #line 449 "dtc-parser.y" 2116 2086 { 2117 2087 print_error("syntax error: properties must precede subnodes"); 2118 2088 YYERROR; 2119 - ;} 2089 + } 2120 2090 break; 2121 2091 2122 2092 case 77: 2123 2093 2124 - /* Line 1455 of yacc.c */ 2094 + /* Line 1806 of yacc.c */ 2125 2095 #line 457 "dtc-parser.y" 2126 2096 { 2127 2097 (yyval.node) = name_node((yyvsp[(2) - (2)].node), (yyvsp[(1) - (2)].propnodename)); 2128 - ;} 2098 + } 2129 2099 break; 2130 2100 2131 2101 case 78: 2132 2102 2133 - /* Line 1455 of yacc.c */ 2103 + /* Line 1806 of yacc.c */ 2134 2104 #line 461 "dtc-parser.y" 2135 2105 { 2136 2106 (yyval.node) = name_node(build_node_delete(), (yyvsp[(2) - (3)].propnodename)); 2137 - ;} 2107 + } 2138 2108 break; 2139 2109 2140 2110 case 79: 2141 2111 2142 - /* Line 1455 of yacc.c */ 2112 + /* Line 1806 of yacc.c */ 2143 2113 #line 465 "dtc-parser.y" 2144 2114 { 2145 2115 add_label(&(yyvsp[(2) - (2)].node)->labels, (yyvsp[(1) - (2)].labelref)); 2146 2116 (yyval.node) = (yyvsp[(2) - (2)].node); 2147 - ;} 2117 + } 2148 2118 break; 2149 2119 2150 2120 2151 2121 2152 - /* Line 1455 of yacc.c */ 2153 - #line 2124 "dtc-parser.tab.c" 2122 + /* Line 1806 of yacc.c */ 2123 + #line 2154 "dtc-parser.tab.c" 2154 2124 default: break; 2155 2125 } 2126 + /* User semantic actions sometimes alter yychar, and that requires 2127 + that yytoken be updated with the new translation. We take the 2128 + approach of translating immediately before every use of yytoken. 
2129 + One alternative is translating here after every semantic action, 2130 + but that translation would be missed if the semantic action invokes 2131 + YYABORT, YYACCEPT, or YYERROR immediately after altering yychar or 2132 + if it invokes YYBACKUP. In the case of YYABORT or YYACCEPT, an 2133 + incorrect destructor might then be invoked immediately. In the 2134 + case of YYERROR or YYBACKUP, subsequent parser actions might lead 2135 + to an incorrect destructor call or verbose syntax error message 2136 + before the lookahead is translated. */ 2156 2137 YY_SYMBOL_PRINT ("-> $$ =", yyr1[yyn], &yyval, &yyloc); 2157 2138 2158 2139 YYPOPSTACK (yylen); ··· 2191 2150 | yyerrlab -- here on detecting error | 2192 2151 `------------------------------------*/ 2193 2152 yyerrlab: 2153 + /* Make sure we have latest lookahead translation. See comments at 2154 + user semantic actions for why this is necessary. */ 2155 + yytoken = yychar == YYEMPTY ? YYEMPTY : YYTRANSLATE (yychar); 2156 + 2194 2157 /* If not already recovering from an error, report this error. */ 2195 2158 if (!yyerrstatus) 2196 2159 { ··· 2202 2157 #if ! YYERROR_VERBOSE 2203 2158 yyerror (YY_("syntax error")); 2204 2159 #else 2160 + # define YYSYNTAX_ERROR yysyntax_error (&yymsg_alloc, &yymsg, \ 2161 + yyssp, yytoken) 2205 2162 { 2206 - YYSIZE_T yysize = yysyntax_error (0, yystate, yychar); 2207 - if (yymsg_alloc < yysize && yymsg_alloc < YYSTACK_ALLOC_MAXIMUM) 2208 - { 2209 - YYSIZE_T yyalloc = 2 * yysize; 2210 - if (! 
(yysize <= yyalloc && yyalloc <= YYSTACK_ALLOC_MAXIMUM)) 2211 - yyalloc = YYSTACK_ALLOC_MAXIMUM; 2212 - if (yymsg != yymsgbuf) 2213 - YYSTACK_FREE (yymsg); 2214 - yymsg = (char *) YYSTACK_ALLOC (yyalloc); 2215 - if (yymsg) 2216 - yymsg_alloc = yyalloc; 2217 - else 2218 - { 2219 - yymsg = yymsgbuf; 2220 - yymsg_alloc = sizeof yymsgbuf; 2221 - } 2222 - } 2223 - 2224 - if (0 < yysize && yysize <= yymsg_alloc) 2225 - { 2226 - (void) yysyntax_error (yymsg, yystate, yychar); 2227 - yyerror (yymsg); 2228 - } 2229 - else 2230 - { 2231 - yyerror (YY_("syntax error")); 2232 - if (yysize != 0) 2233 - goto yyexhaustedlab; 2234 - } 2163 + char const *yymsgp = YY_("syntax error"); 2164 + int yysyntax_error_status; 2165 + yysyntax_error_status = YYSYNTAX_ERROR; 2166 + if (yysyntax_error_status == 0) 2167 + yymsgp = yymsg; 2168 + else if (yysyntax_error_status == 1) 2169 + { 2170 + if (yymsg != yymsgbuf) 2171 + YYSTACK_FREE (yymsg); 2172 + yymsg = (char *) YYSTACK_ALLOC (yymsg_alloc); 2173 + if (!yymsg) 2174 + { 2175 + yymsg = yymsgbuf; 2176 + yymsg_alloc = sizeof yymsgbuf; 2177 + yysyntax_error_status = 2; 2178 + } 2179 + else 2180 + { 2181 + yysyntax_error_status = YYSYNTAX_ERROR; 2182 + yymsgp = yymsg; 2183 + } 2184 + } 2185 + yyerror (yymsgp); 2186 + if (yysyntax_error_status == 2) 2187 + goto yyexhaustedlab; 2235 2188 } 2189 + # undef YYSYNTAX_ERROR 2236 2190 #endif 2237 2191 } 2238 2192 ··· 2290 2246 for (;;) 2291 2247 { 2292 2248 yyn = yypact[yystate]; 2293 - if (yyn != YYPACT_NINF) 2249 + if (!yypact_value_is_default (yyn)) 2294 2250 { 2295 2251 yyn += YYTERROR; 2296 2252 if (0 <= yyn && yyn <= YYLAST && yycheck[yyn] == YYTERROR) ··· 2349 2305 2350 2306 yyreturn: 2351 2307 if (yychar != YYEMPTY) 2352 - yydestruct ("Cleanup: discarding lookahead", 2353 - yytoken, &yylval); 2308 + { 2309 + /* Make sure we have latest lookahead translation. See comments at 2310 + user semantic actions for why this is necessary. 
*/ 2311 + yytoken = YYTRANSLATE (yychar); 2312 + yydestruct ("Cleanup: discarding lookahead", 2313 + yytoken, &yylval); 2314 + } 2354 2315 /* Do not reclaim the symbols of the rule which action triggered 2355 2316 this YYABORT or YYACCEPT. */ 2356 2317 YYPOPSTACK (yylen); ··· 2380 2331 2381 2332 2382 2333 2383 - /* Line 1675 of yacc.c */ 2334 + /* Line 2067 of yacc.c */ 2384 2335 #line 471 "dtc-parser.y" 2385 2336 2386 2337
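The Bison skeleton change above replaces the hand-rolled message-buffer growth with a status protocol: yysyntax_error returns 0 when the message fits, 1 when the buffer must grow (updating the required size), and 2 on an unrecoverable failure, and the caller retries exactly once after reallocating. A minimal userspace sketch of that retry protocol, with hypothetical names (this is not the actual skeleton code):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Status protocol like yysyntax_error's: 0 = message written,
 * 1 = buffer too small (*cap updated to the required size),
 * 2 = unrecoverable (not produced by this sketch). */
static int format_msg(char *buf, size_t *cap, const char *tok)
{
    size_t want = strlen("syntax error, unexpected ") + strlen(tok) + 1;
    if (want > *cap) {
        *cap = want;            /* tell the caller how much to allocate */
        return 1;
    }
    snprintf(buf, *cap, "syntax error, unexpected %s", tok);
    return 0;
}

/* Try the small static buffer first, grow and retry exactly once,
 * and fall back to the fixed string if allocation fails -- the same
 * shape as the yyerrlab hunk above. */
static const char *report(char *small, size_t small_len, const char *tok,
                          char **heap)
{
    size_t cap = small_len;
    if (format_msg(small, &cap, tok) == 0)
        return small;
    *heap = malloc(cap);
    if (!*heap)
        return "syntax error";
    if (format_msg(*heap, &cap, tok) == 0)
        return *heap;
    return "syntax error";
}
```

The one-retry shape mirrors snprintf-style sizing: the first call either succeeds or reports the exact size needed, so a second failure can only mean something fatal.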
+6 -8
scripts/dtc/dtc-parser.tab.h_shipped
··· 1 + /* A Bison parser, made by GNU Bison 2.5. */ 1 2 2 - /* A Bison parser, made by GNU Bison 2.4.1. */ 3 - 4 - /* Skeleton interface for Bison's Yacc-like parsers in C 3 + /* Bison interface for Yacc-like parsers in C 5 4 6 - Copyright (C) 1984, 1989, 1990, 2000, 2001, 2002, 2003, 2004, 2005, 2006 7 - Free Software Foundation, Inc. 5 + Copyright (C) 1984, 1989-1990, 2000-2011 Free Software Foundation, Inc. 8 6 9 7 This program is free software: you can redistribute it and/or modify 10 8 it under the terms of the GNU General Public License as published by ··· 68 70 typedef union YYSTYPE 69 71 { 70 72 71 - /* Line 1676 of yacc.c */ 73 + /* Line 2068 of yacc.c */ 72 74 #line 40 "dtc-parser.y" 73 75 74 76 char *propnodename; ··· 92 94 93 95 94 96 95 - /* Line 1676 of yacc.c */ 96 - #line 99 "dtc-parser.tab.h" 97 + /* Line 2068 of yacc.c */ 98 + #line 97 "dtc-parser.tab.h" 97 99 } YYSTYPE; 98 100 # define YYSTYPE_IS_TRIVIAL 1 99 101 # define yystype YYSTYPE /* obsolescent; will be withdrawn */
+2 -2
sound/core/pcm_native.c
··· 1649 1649 } 1650 1650 if (!snd_pcm_stream_linked(substream)) { 1651 1651 substream->group = group; 1652 + group = NULL; 1652 1653 spin_lock_init(&substream->group->lock); 1653 1654 INIT_LIST_HEAD(&substream->group->substreams); 1654 1655 list_add_tail(&substream->link_list, &substream->group->substreams); ··· 1664 1663 _nolock: 1665 1664 snd_card_unref(substream1->pcm->card); 1666 1665 fput_light(file, fput_needed); 1667 - if (res < 0) 1668 - kfree(group); 1666 + kfree(group); 1669 1667 return res; 1670 1668 } 1671 1669
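The pcm_native.c hunk above simplifies the error path by relying on kfree(NULL) being a no-op: when the group is linked, ownership transfers to the substream and the local pointer is NULLed, so a single unconditional kfree at the end frees the allocation only when it went unused. A minimal userspace sketch of the same ownership-transfer idiom, using free() (which, like kfree(), accepts NULL); the struct and function names are hypothetical:

```c
#include <assert.h>
#include <stdlib.h>

struct group  { int refs; };
struct stream { struct group *group; };

/* Attach a freshly allocated group unless the stream already has one.
 * On success, ownership moves and the caller's pointer is NULLed, so
 * the caller's single unconditional free() is safe: free(NULL), like
 * kfree(NULL), is defined to do nothing. */
static void link_stream(struct stream *s, struct group **gp)
{
    if (!s->group) {
        s->group = *gp;
        *gp = NULL;
    }
}

static int attach(struct stream *s)
{
    struct group *g = calloc(1, sizeof(*g));
    if (!g)
        return -1;
    link_stream(s, &g);
    free(g);        /* unconditional, mirroring the patched kfree(group) */
    return 0;
}
```

This removes the `if (res < 0)` condition from the cleanup path entirely: the pointer's NULL-ness now encodes whether the allocation was consumed.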
+42 -34
sound/pci/hda/hda_generic.c
··· 788 788 return; 789 789 if (codec->inv_eapd) 790 790 enable = !enable; 791 + if (spec->keep_eapd_on && !enable) 792 + return; 791 793 snd_hda_codec_update_cache(codec, pin, 0, 792 794 AC_VERB_SET_EAPD_BTLENABLE, 793 795 enable ? 0x02 : 0x00); ··· 1940 1938 * independent HP controls 1941 1939 */ 1942 1940 1943 - /* update HP auto-mute state too */ 1944 - static void update_hp_automute_hook(struct hda_codec *codec) 1945 - { 1946 - struct hda_gen_spec *spec = codec->spec; 1947 - 1948 - if (spec->hp_automute_hook) 1949 - spec->hp_automute_hook(codec, NULL); 1950 - else 1951 - snd_hda_gen_hp_automute(codec, NULL); 1952 - } 1953 - 1941 + static void call_hp_automute(struct hda_codec *codec, struct hda_jack_tbl *jack); 1954 1942 static int indep_hp_info(struct snd_kcontrol *kcontrol, 1955 1943 struct snd_ctl_elem_info *uinfo) 1956 1944 { ··· 2001 2009 else 2002 2010 *dacp = spec->alt_dac_nid; 2003 2011 2004 - update_hp_automute_hook(codec); 2012 + call_hp_automute(codec, NULL); 2005 2013 ret = 1; 2006 2014 } 2007 2015 unlock: ··· 2297 2305 else 2298 2306 val = PIN_HP; 2299 2307 set_pin_target(codec, pin, val, true); 2300 - update_hp_automute_hook(codec); 2308 + call_hp_automute(codec, NULL); 2301 2309 } 2302 2310 } 2303 2311 ··· 2706 2714 val = snd_hda_get_default_vref(codec, nid); 2707 2715 } 2708 2716 snd_hda_set_pin_ctl_cache(codec, nid, val); 2709 - update_hp_automute_hook(codec); 2717 + call_hp_automute(codec, NULL); 2710 2718 2711 2719 return 1; 2712 2720 } ··· 3851 3859 } 3852 3860 EXPORT_SYMBOL_HDA(snd_hda_gen_mic_autoswitch); 3853 3861 3862 + /* call appropriate hooks */ 3863 + static void call_hp_automute(struct hda_codec *codec, struct hda_jack_tbl *jack) 3864 + { 3865 + struct hda_gen_spec *spec = codec->spec; 3866 + if (spec->hp_automute_hook) 3867 + spec->hp_automute_hook(codec, jack); 3868 + else 3869 + snd_hda_gen_hp_automute(codec, jack); 3870 + } 3871 + 3872 + static void call_line_automute(struct hda_codec *codec, 3873 + struct hda_jack_tbl *jack) 
3874 + { 3875 + struct hda_gen_spec *spec = codec->spec; 3876 + if (spec->line_automute_hook) 3877 + spec->line_automute_hook(codec, jack); 3878 + else 3879 + snd_hda_gen_line_automute(codec, jack); 3880 + } 3881 + 3882 + static void call_mic_autoswitch(struct hda_codec *codec, 3883 + struct hda_jack_tbl *jack) 3884 + { 3885 + struct hda_gen_spec *spec = codec->spec; 3886 + if (spec->mic_autoswitch_hook) 3887 + spec->mic_autoswitch_hook(codec, jack); 3888 + else 3889 + snd_hda_gen_mic_autoswitch(codec, jack); 3890 + } 3891 + 3854 3892 /* update jack retasking */ 3855 3893 static void update_automute_all(struct hda_codec *codec) 3856 3894 { 3857 - struct hda_gen_spec *spec = codec->spec; 3858 - 3859 - update_hp_automute_hook(codec); 3860 - if (spec->line_automute_hook) 3861 - spec->line_automute_hook(codec, NULL); 3862 - else 3863 - snd_hda_gen_line_automute(codec, NULL); 3864 - if (spec->mic_autoswitch_hook) 3865 - spec->mic_autoswitch_hook(codec, NULL); 3866 - else 3867 - snd_hda_gen_mic_autoswitch(codec, NULL); 3895 + call_hp_automute(codec, NULL); 3896 + call_line_automute(codec, NULL); 3897 + call_mic_autoswitch(codec, NULL); 3868 3898 } 3869 3899 3870 3900 /* ··· 4023 4009 snd_printdd("hda-codec: Enable HP auto-muting on NID 0x%x\n", 4024 4010 nid); 4025 4011 snd_hda_jack_detect_enable_callback(codec, nid, HDA_GEN_HP_EVENT, 4026 - spec->hp_automute_hook ? 4027 - spec->hp_automute_hook : 4028 - snd_hda_gen_hp_automute); 4012 + call_hp_automute); 4029 4013 spec->detect_hp = 1; 4030 4014 } 4031 4015 ··· 4036 4024 snd_printdd("hda-codec: Enable Line-Out auto-muting on NID 0x%x\n", nid); 4037 4025 snd_hda_jack_detect_enable_callback(codec, nid, 4038 4026 HDA_GEN_FRONT_EVENT, 4039 - spec->line_automute_hook ? 
4040 - spec->line_automute_hook : 4041 - snd_hda_gen_line_automute); 4027 + call_line_automute); 4042 4028 spec->detect_lo = 1; 4043 4029 } 4044 4030 spec->automute_lo_possible = spec->detect_hp; ··· 4078 4068 snd_hda_jack_detect_enable_callback(codec, 4079 4069 spec->am_entry[i].pin, 4080 4070 HDA_GEN_MIC_EVENT, 4081 - spec->mic_autoswitch_hook ? 4082 - spec->mic_autoswitch_hook : 4083 - snd_hda_gen_mic_autoswitch); 4071 + call_mic_autoswitch); 4084 4072 return true; 4085 4073 } 4086 4074
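The hda_generic.c hunks above replace the `hook ? hook : default` ternary, previously repeated at every call and registration site, with single call_* dispatcher functions. A dispatcher has a fixed address, so it can also be passed directly as the jack-detect callback. A minimal sketch of that dispatch pattern, with hypothetical names and a trivial stand-in for the mute logic:

```c
#include <assert.h>
#include <stddef.h>

struct jack;    /* opaque event payload, as in the jack-detect callbacks */

struct codec_spec {
    /* optional driver override; NULL selects the generic handler */
    void (*hp_automute_hook)(struct codec_spec *spec, struct jack *jack);
    int hp_state;
};

static void generic_hp_automute(struct codec_spec *spec, struct jack *jack)
{
    (void)jack;
    spec->hp_state = 1;         /* stand-in for the real mute logic */
}

/* One dispatcher replaces the ternary at every call site, and is
 * itself a valid callback to register with the jack-detect code. */
static void call_hp_automute(struct codec_spec *spec, struct jack *jack)
{
    if (spec->hp_automute_hook)
        spec->hp_automute_hook(spec, jack);
    else
        generic_hp_automute(spec, jack);
}

static void driver_override(struct codec_spec *spec, struct jack *jack)
{
    (void)jack;
    spec->hp_state = 2;         /* driver-specific behaviour */
}
```

A side benefit visible in the diff: the old update_hp_automute_hook() always passed a NULL jack, whereas the dispatcher forwards the real jack argument, so hook and default see the same event.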
+1
sound/pci/hda/hda_generic.h
··· 222 222 unsigned int multi_cap_vol:1; /* allow multiple capture xxx volumes */ 223 223 unsigned int inv_dmic_split:1; /* inverted dmic w/a for conexant */ 224 224 unsigned int own_eapd_ctl:1; /* set EAPD by own function */ 225 + unsigned int keep_eapd_on:1; /* don't turn off EAPD automatically */ 225 226 unsigned int vmaster_mute_enum:1; /* add vmaster mute mode enum */ 226 227 unsigned int indep_hp:1; /* independent HP supported */ 227 228 unsigned int prefer_hp_amp:1; /* enable HP amp for speaker if any */
+3
sound/pci/hda/patch_realtek.c
··· 3493 3493 SND_PCI_QUIRK(0x1028, 0x05f4, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3494 3494 SND_PCI_QUIRK(0x1028, 0x05f5, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3495 3495 SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3496 + SND_PCI_QUIRK(0x1028, 0x05f8, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3497 + SND_PCI_QUIRK(0x1028, 0x0609, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE), 3496 3498 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2), 3497 3499 SND_PCI_QUIRK(0x103c, 0x18e6, "HP", ALC269_FIXUP_HP_GPIO_LED), 3498 3500 SND_PCI_QUIRK(0x103c, 0x1973, "HP Pavilion", ALC269_FIXUP_HP_MUTE_LED_MIC1), ··· 3532 3530 SND_PCI_QUIRK(0x17aa, 0x21fa, "Thinkpad X230", ALC269_FIXUP_LENOVO_DOCK), 3533 3531 SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK), 3534 3532 SND_PCI_QUIRK(0x17aa, 0x21fb, "Thinkpad T430s", ALC269_FIXUP_LENOVO_DOCK), 3533 + SND_PCI_QUIRK(0x17aa, 0x2208, "Thinkpad T431s", ALC269_FIXUP_LENOVO_DOCK), 3535 3534 SND_PCI_QUIRK(0x17aa, 0x2203, "Thinkpad X230 Tablet", ALC269_FIXUP_LENOVO_DOCK), 3536 3535 SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K), 3537 3536 SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+9 -1
sound/pci/hda/patch_via.c
··· 136 136 spec->codec_type = VT1708S; 137 137 spec->no_pin_power_ctl = 1; 138 138 spec->gen.indep_hp = 1; 139 + spec->gen.keep_eapd_on = 1; 139 140 spec->gen.pcm_playback_hook = via_playback_pcm_hook; 140 141 return spec; 141 142 } ··· 232 231 233 232 static void set_widgets_power_state(struct hda_codec *codec) 234 233 { 234 + #if 0 /* FIXME: the assumed connections don't match always with the 235 + * actual routes by the generic parser, so better to disable 236 + * the control for safety. 237 + */ 235 238 struct via_spec *spec = codec->spec; 236 239 if (spec->set_widgets_power_state) 237 240 spec->set_widgets_power_state(codec); 241 + #endif 238 242 } 239 243 240 244 static void update_power_state(struct hda_codec *codec, hda_nid_t nid, ··· 484 478 /* Fix pop noise on headphones */ 485 479 int i; 486 480 for (i = 0; i < spec->gen.autocfg.hp_outs; i++) 487 - snd_hda_set_pin_ctl(codec, spec->gen.autocfg.hp_pins[i], 0); 481 + snd_hda_codec_write(codec, spec->gen.autocfg.hp_pins[i], 482 + 0, AC_VERB_SET_PIN_WIDGET_CONTROL, 483 + 0x00); 488 484 } 489 485 490 486 return 0;
+2 -1
sound/pci/sis7019.c
··· 1341 1341 if (rc) 1342 1342 goto error_out; 1343 1343 1344 - if (pci_set_dma_mask(pci, DMA_BIT_MASK(30)) < 0) { 1344 + rc = pci_set_dma_mask(pci, DMA_BIT_MASK(30)); 1345 + if (rc < 0) { 1345 1346 dev_err(&pci->dev, "architecture does not support 30-bit PCI busmaster DMA"); 1346 1347 goto error_out_enabled; 1347 1348 }
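The sis7019.c hunk above captures the return value of pci_set_dma_mask() into `rc` before testing it. Previously the branch tested the call's result directly and jumped to the error label with `rc` still holding 0 from the earlier (successful) call, so the probe function reported success on a failed DMA setup. A sketch of the pattern with a hypothetical stand-in for the PCI call:

```c
#include <assert.h>

/* Hypothetical stand-in for pci_set_dma_mask(): this pretend platform
 * rejects masks narrower than 32 bits and returns a negative errno. */
static int set_dma_mask(unsigned long long mask)
{
    return mask < 0xffffffffULL ? -5 /* -EIO */ : 0;
}

/* Capturing rc first means the error label returns the real failure
 * code instead of the stale 0 left over from an earlier call. */
static int probe(unsigned long long dma_mask)
{
    int rc = 0;                 /* earlier setup steps succeeded */
    rc = set_dma_mask(dma_mask);
    if (rc < 0)
        goto error_out;
    return 0;
error_out:
    return rc;
}
```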
+4 -2
sound/soc/codecs/cs42l52.c
··· 193 193 194 194 static DECLARE_TLV_DB_SCALE(pga_tlv, -600, 50, 0); 195 195 196 + static DECLARE_TLV_DB_SCALE(mix_tlv, -50, 50, 0); 197 + 196 198 static const unsigned int limiter_tlv[] = { 197 199 TLV_DB_RANGE_HEAD(2), 198 200 0, 2, TLV_DB_SCALE_ITEM(-3000, 600, 0), ··· 262 260 }; 263 261 264 262 static const struct soc_enum hp_gain_enum = 265 - SOC_ENUM_SINGLE(CS42L52_PB_CTL1, 4, 263 + SOC_ENUM_SINGLE(CS42L52_PB_CTL1, 5, 266 264 ARRAY_SIZE(hp_gain_num_text), hp_gain_num_text); 267 265 268 266 static const char * const beep_pitch_text[] = { ··· 443 441 444 442 SOC_DOUBLE_R_SX_TLV("PCM Mixer Volume", 445 443 CS42L52_PCMA_MIXER_VOL, CS42L52_PCMB_MIXER_VOL, 446 - 0, 0x7f, 0x19, hl_tlv), 444 + 0, 0x7f, 0x19, mix_tlv), 447 445 SOC_DOUBLE_R("PCM Mixer Switch", 448 446 CS42L52_PCMA_MIXER_VOL, CS42L52_PCMB_MIXER_VOL, 7, 1, 1), 449 447
+5 -5
sound/soc/codecs/tlv320aic3x.c
··· 187 187 188 188 break; 189 189 } 190 - 191 - if (found) 192 - snd_soc_dapm_sync(widget->dapm); 193 190 } 194 191 195 - ret = snd_soc_update_bits(widget->codec, reg, val_mask, val); 196 - 197 192 mutex_unlock(&widget->codec->mutex); 193 + 194 + if (found) 195 + snd_soc_dapm_sync(widget->dapm); 196 + 197 + ret = snd_soc_update_bits_locked(widget->codec, reg, val_mask, val); 198 198 return ret; 199 199 } 200 200
+2 -1
sound/soc/codecs/wm5102.c
··· 1120 1120 ARIZONA_DSP_WIDGETS(DSP1, "DSP1"), 1121 1121 1122 1122 SND_SOC_DAPM_VALUE_MUX("AEC Loopback", ARIZONA_DAC_AEC_CONTROL_1, 1123 - ARIZONA_AEC_LOOPBACK_ENA, 0, &wm5102_aec_loopback_mux), 1123 + ARIZONA_AEC_LOOPBACK_ENA_SHIFT, 0, 1124 + &wm5102_aec_loopback_mux), 1124 1125 1125 1126 SND_SOC_DAPM_PGA_E("OUT1L", SND_SOC_NOPM, 1126 1127 ARIZONA_OUT1L_ENA_SHIFT, 0, NULL, 0, arizona_hp_ev,
+2 -1
sound/soc/codecs/wm5110.c
··· 503 503 NULL, 0), 504 504 505 505 SND_SOC_DAPM_VALUE_MUX("AEC Loopback", ARIZONA_DAC_AEC_CONTROL_1, 506 - ARIZONA_AEC_LOOPBACK_ENA, 0, &wm5110_aec_loopback_mux), 506 + ARIZONA_AEC_LOOPBACK_ENA_SHIFT, 0, 507 + &wm5110_aec_loopback_mux), 507 508 508 509 SND_SOC_DAPM_AIF_OUT("AIF1TX1", NULL, 0, 509 510 ARIZONA_AIF1_TX_ENABLES, ARIZONA_AIF1TX1_ENA_SHIFT, 0),
+2 -1
sound/soc/codecs/wm8994.c
··· 3836 3836 ret); 3837 3837 } else if (!(ret & WM1811_JACKDET_LVL)) { 3838 3838 dev_dbg(codec->dev, "Ignoring removed jack\n"); 3839 - return IRQ_HANDLED; 3839 + goto out; 3840 3840 } 3841 3841 } else if (!(reg & WM8958_MICD_STS)) { 3842 3842 snd_soc_jack_report(wm8994->micdet[0].jack, 0, 3843 3843 SND_JACK_MECHANICAL | SND_JACK_HEADSET | 3844 3844 wm8994->btn_mask); 3845 + wm8994->mic_detecting = true; 3845 3846 goto out; 3846 3847 } 3847 3848
+26 -23
sound/soc/soc-dapm.c
··· 55 55 [snd_soc_dapm_clock_supply] = 1, 56 56 [snd_soc_dapm_micbias] = 2, 57 57 [snd_soc_dapm_dai_link] = 2, 58 - [snd_soc_dapm_dai] = 3, 58 + [snd_soc_dapm_dai_in] = 3, 59 + [snd_soc_dapm_dai_out] = 3, 59 60 [snd_soc_dapm_aif_in] = 3, 60 61 [snd_soc_dapm_aif_out] = 3, 61 62 [snd_soc_dapm_mic] = 4, ··· 93 92 [snd_soc_dapm_value_mux] = 9, 94 93 [snd_soc_dapm_aif_in] = 10, 95 94 [snd_soc_dapm_aif_out] = 10, 96 - [snd_soc_dapm_dai] = 10, 95 + [snd_soc_dapm_dai_in] = 10, 96 + [snd_soc_dapm_dai_out] = 10, 97 97 [snd_soc_dapm_dai_link] = 11, 98 98 [snd_soc_dapm_clock_supply] = 12, 99 99 [snd_soc_dapm_regulator_supply] = 12, ··· 421 419 case snd_soc_dapm_clock_supply: 422 420 case snd_soc_dapm_aif_in: 423 421 case snd_soc_dapm_aif_out: 424 - case snd_soc_dapm_dai: 422 + case snd_soc_dapm_dai_in: 423 + case snd_soc_dapm_dai_out: 425 424 case snd_soc_dapm_hp: 426 425 case snd_soc_dapm_mic: 427 426 case snd_soc_dapm_spk: ··· 823 820 switch (widget->id) { 824 821 case snd_soc_dapm_adc: 825 822 case snd_soc_dapm_aif_out: 826 - case snd_soc_dapm_dai: 823 + case snd_soc_dapm_dai_out: 827 824 if (widget->active) { 828 825 widget->outputs = snd_soc_dapm_suspend_check(widget); 829 826 return widget->outputs; ··· 919 916 switch (widget->id) { 920 917 case snd_soc_dapm_dac: 921 918 case snd_soc_dapm_aif_in: 922 - case snd_soc_dapm_dai: 919 + case snd_soc_dapm_dai_in: 923 920 if (widget->active) { 924 921 widget->inputs = snd_soc_dapm_suspend_check(widget); 925 922 return widget->inputs; ··· 1136 1133 out = is_connected_output_ep(w, NULL); 1137 1134 dapm_clear_walk_output(w->dapm, &w->sinks); 1138 1135 return out != 0 && in != 0; 1139 - } 1140 - 1141 - static int dapm_dai_check_power(struct snd_soc_dapm_widget *w) 1142 - { 1143 - DAPM_UPDATE_STAT(w, power_checks); 1144 - 1145 - if (w->active) 1146 - return w->active; 1147 - 1148 - return dapm_generic_check_power(w); 1149 1136 } 1150 1137 1151 1138 /* Check to see if an ADC has power */ ··· 2311 2318 case snd_soc_dapm_clock_supply: 
2312 2319 case snd_soc_dapm_aif_in: 2313 2320 case snd_soc_dapm_aif_out: 2314 - case snd_soc_dapm_dai: 2321 + case snd_soc_dapm_dai_in: 2322 + case snd_soc_dapm_dai_out: 2315 2323 case snd_soc_dapm_dai_link: 2316 2324 list_add(&path->list, &dapm->card->paths); 2317 2325 list_add(&path->list_sink, &wsink->sources); ··· 3123 3129 break; 3124 3130 case snd_soc_dapm_adc: 3125 3131 case snd_soc_dapm_aif_out: 3132 + case snd_soc_dapm_dai_out: 3126 3133 w->power_check = dapm_adc_check_power; 3127 3134 break; 3128 3135 case snd_soc_dapm_dac: 3129 3136 case snd_soc_dapm_aif_in: 3137 + case snd_soc_dapm_dai_in: 3130 3138 w->power_check = dapm_dac_check_power; 3131 3139 break; 3132 3140 case snd_soc_dapm_pga: ··· 3147 3151 case snd_soc_dapm_regulator_supply: 3148 3152 case snd_soc_dapm_clock_supply: 3149 3153 w->power_check = dapm_supply_check_power; 3150 - break; 3151 - case snd_soc_dapm_dai: 3152 - w->power_check = dapm_dai_check_power; 3153 3154 break; 3154 3155 default: 3155 3156 w->power_check = dapm_always_on_check_power; ··· 3368 3375 template.reg = SND_SOC_NOPM; 3369 3376 3370 3377 if (dai->driver->playback.stream_name) { 3371 - template.id = snd_soc_dapm_dai; 3378 + template.id = snd_soc_dapm_dai_in; 3372 3379 template.name = dai->driver->playback.stream_name; 3373 3380 template.sname = dai->driver->playback.stream_name; 3374 3381 ··· 3386 3393 } 3387 3394 3388 3395 if (dai->driver->capture.stream_name) { 3389 - template.id = snd_soc_dapm_dai; 3396 + template.id = snd_soc_dapm_dai_out; 3390 3397 template.name = dai->driver->capture.stream_name; 3391 3398 template.sname = dai->driver->capture.stream_name; 3392 3399 ··· 3416 3423 3417 3424 /* For each DAI widget... 
*/ 3418 3425 list_for_each_entry(dai_w, &card->widgets, list) { 3419 - if (dai_w->id != snd_soc_dapm_dai) 3426 + switch (dai_w->id) { 3427 + case snd_soc_dapm_dai_in: 3428 + case snd_soc_dapm_dai_out: 3429 + break; 3430 + default: 3420 3431 continue; 3432 + } 3421 3433 3422 3434 dai = dai_w->priv; 3423 3435 ··· 3431 3433 if (w->dapm != dai_w->dapm) 3432 3434 continue; 3433 3435 3434 - if (w->id == snd_soc_dapm_dai) 3436 + switch (w->id) { 3437 + case snd_soc_dapm_dai_in: 3438 + case snd_soc_dapm_dai_out: 3435 3439 continue; 3440 + default: 3441 + break; 3442 + } 3436 3443 3437 3444 if (!w->sname) 3438 3445 continue;
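The soc-dapm.c diff above splits the single `snd_soc_dapm_dai` widget id into `snd_soc_dapm_dai_in` and `snd_soc_dapm_dai_out`. With the direction encoded in the id, widget creation can assign the matching power check directly (dai_out takes the ADC-side check, dai_in the DAC-side one), which lets the shared dapm_dai_check_power() be deleted; code that means "a DAI of either direction" becomes a two-label switch. A minimal sketch of that id-split pattern, with hypothetical names and trivial stand-in checks:

```c
#include <assert.h>

/* After the split, a widget's direction is part of its id. */
enum widget_id { W_DAI_IN, W_DAI_OUT, W_MIXER };

static int adc_check(void)     { return 1; }  /* stand-in power checks */
static int dac_check(void)     { return 2; }
static int generic_check(void) { return 3; }

/* Direction-specific ids let creation pick the matching check once,
 * replacing a shared check that had to inspect state at runtime. */
static int (*pick_power_check(enum widget_id id))(void)
{
    switch (id) {
    case W_DAI_IN:  return dac_check;   /* playback: DAC-side semantics */
    case W_DAI_OUT: return adc_check;   /* capture: ADC-side semantics  */
    default:        return generic_check;
    }
}

/* "Is this a DAI of either direction?" -- the two-label switch the
 * diff introduces in place of a single equality test. */
static int is_dai(enum widget_id id)
{
    switch (id) {
    case W_DAI_IN:
    case W_DAI_OUT:
        return 1;
    default:
        return 0;
    }
}
```

The matching soc-pcm.c hunk below applies the same two-label switch when scanning widget lists for backend connections.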
+10 -3
sound/soc/soc-pcm.c
··· 928 928 /* Create any new FE <--> BE connections */ 929 929 for (i = 0; i < list->num_widgets; i++) { 930 930 931 - if (list->widgets[i]->id != snd_soc_dapm_dai) 931 + switch (list->widgets[i]->id) { 932 + case snd_soc_dapm_dai_in: 933 + case snd_soc_dapm_dai_out: 934 + break; 935 + default: 932 936 continue; 937 + } 933 938 934 939 /* is there a valid BE rtd for this widget */ 935 940 be = dpcm_get_be(card, list->widgets[i], stream); ··· 2016 2011 if (cpu_dai->driver->capture.channels_min) 2017 2012 capture = 1; 2018 2013 } else { 2019 - if (codec_dai->driver->playback.channels_min) 2014 + if (codec_dai->driver->playback.channels_min && 2015 + cpu_dai->driver->playback.channels_min) 2020 2016 playback = 1; 2021 - if (codec_dai->driver->capture.channels_min) 2017 + if (codec_dai->driver->capture.channels_min && 2018 + cpu_dai->driver->capture.channels_min) 2022 2019 capture = 1; 2023 2020 } 2024 2021
+1
sound/usb/mixer.c
··· 886 886 case USB_ID(0x046d, 0x0808): 887 887 case USB_ID(0x046d, 0x0809): 888 888 case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */ 889 + case USB_ID(0x046d, 0x0825): /* HD Webcam c270 */ 889 890 case USB_ID(0x046d, 0x0991): 890 891 /* Most audio usb devices lie about volume resolution. 891 892 * Most Logitech webcams have res = 384.
+12 -2
sound/usb/quirks-table.h
··· 215 215 .bInterfaceSubClass = USB_SUBCLASS_AUDIOCONTROL 216 216 }, 217 217 { 218 - USB_DEVICE(0x046d, 0x0990), 218 + .match_flags = USB_DEVICE_ID_MATCH_DEVICE | 219 + USB_DEVICE_ID_MATCH_INT_CLASS | 220 + USB_DEVICE_ID_MATCH_INT_SUBCLASS, 221 + .idVendor = 0x046d, 222 + .idProduct = 0x0990, 223 + .bInterfaceClass = USB_CLASS_AUDIO, 224 + .bInterfaceSubClass = USB_SUBCLASS_AUDIOCONTROL, 219 225 .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { 220 226 .vendor_name = "Logitech, Inc.", 221 227 .product_name = "QuickCam Pro 9000", ··· 1798 1792 USB_DEVICE_VENDOR_SPEC(0x0582, 0x0108), 1799 1793 .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { 1800 1794 .ifnum = 0, 1801 - .type = QUIRK_MIDI_STANDARD_INTERFACE 1795 + .type = QUIRK_MIDI_FIXED_ENDPOINT, 1796 + .data = & (const struct snd_usb_midi_endpoint_info) { 1797 + .out_cables = 0x0007, 1798 + .in_cables = 0x0007 1799 + } 1802 1800 } 1803 1801 }, 1804 1802 {
+1 -1
tools/power/x86/turbostat/turbostat.c
··· 2191 2191 2192 2192 void allocate_output_buffer() 2193 2193 { 2194 - output_buffer = calloc(1, (1 + topo.num_cpus) * 128); 2194 + output_buffer = calloc(1, (1 + topo.num_cpus) * 256); 2195 2195 outp = output_buffer; 2196 2196 if (outp == NULL) { 2197 2197 perror("calloc");