Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'imx-soc-3.11' of git://git.linaro.org/people/shawnguo/linux-2.6 into next/soc

From Shawn Guo:

imx soc changes for 3.11:

* New SoCs i.MX6 SoloLite and Vybrid VF610 support
* imx5 and imx6 clock fixes and additions
* Update clock driver to use of_clk_init() function
* Refactor restart routine mxc_restart() to get it to work for DT boot as well
* Clean up mxc specific ulpi access ops
* imx defconfig updates

* tag 'imx-soc-3.11' of git://git.linaro.org/people/shawnguo/linux-2.6: (29 commits)
ARM: imx_v6_v7_defconfig: Enable Vybrid VF610
ARM: imx_v6_v7_defconfig: Enable imx-wm8962 by default
ARM: clk-imx6qdl: Add clko1 configuration for imx6qdl-sabresd
ARM: imx_v6_v7_defconfig: Enable PWM and backlight options
ARM: imx: Remove mxc specific ulpi access ops
ARM: imx: add initial support for VF610
ARM: imx: add VF610 clock support
ARM: imx_v6_v7_defconfig: enable parallel display
ARM: imx: clk: No need to initialize phandle struct
ARM: imx: irq-common: Include header to avoid sparse warning
ARM: imx: Enable mx6 solo-lite support
ARM: imx6: use common of_clk_init() call to initialize clocks
ARM: imx6q: call of_clk_init() to register fixed rate clocks
ARM: imx: imx_v6_v7_defconfig: Select CONFIG_DRM_IMX_TVE
ARM: i.MX6: clk: add different DualLite MLB clock config
ARM i.MX5: Add S/PDIF clocks
ARM i.MX53: Add SATA clock
ARM: imx6q: clk: add the eim_slow clock
ARM: imx: remove MLB PLL from pllv3
ARM: imx: disable pll8_mlb in mx6q_clks
...

Conflicts:
arch/arm/Kconfig.debug (simple add/add conflict)

Includes an update to 3.10-rc6

Signed-off-by: Arnd Bergmann <arnd@arndb.de>

+4001 -2081
+9 -3
Documentation/bcache.txt
··· 319 319 Symlink to each of the cache devices comprising this cache set. 320 320 321 321 cache_available_percent 322 - Percentage of cache device free. 322 + Percentage of cache device which doesn't contain dirty data, and could 323 + potentially be used for writeback. This doesn't mean this space isn't used 324 + for clean cached data; the unused statistic (in priority_stats) is typically 325 + much lower. 323 326 324 327 clear_stats 325 328 Clears the statistics associated with this cache ··· 426 423 Total buckets in this cache 427 424 428 425 priority_stats 429 - Statistics about how recently data in the cache has been accessed. This can 430 - reveal your working set size. 426 + Statistics about how recently data in the cache has been accessed. 427 + This can reveal your working set size. Unused is the percentage of 428 + the cache that doesn't contain any data. Metadata is bcache's 429 + metadata overhead. Average is the average priority of cache buckets. 430 + Next is a list of quantiles with the priority threshold of each. 431 431 432 432 written 433 433 Sum of all data that has been written to the cache; comparison with
+2 -6
Documentation/devices.txt
··· 498 498 499 499 Each device type has 5 bits (32 minors). 500 500 501 - 13 block 8-bit MFM/RLL/IDE controller 502 - 0 = /dev/xda First XT disk whole disk 503 - 64 = /dev/xdb Second XT disk whole disk 504 - 505 - Partitions are handled in the same way as IDE disks 506 - (see major number 3). 501 + 13 block Previously used for the XT disk (/dev/xdN) 502 + Deleted in kernel v3.9. 507 503 508 504 14 char Open Sound System (OSS) 509 505 0 = /dev/mixer Mixer control
+13
Documentation/devicetree/bindings/clock/imx5-clock.txt
··· 184 184 cko2 170 185 185 srtc_gate 171 186 186 pata_gate 172 187 + sata_gate 173 188 + spdif_xtal_sel 174 189 + spdif0_sel 175 190 + spdif1_sel 176 191 + spdif0_pred 177 192 + spdif0_podf 178 193 + spdif1_pred 179 194 + spdif1_podf 180 195 + spdif0_com_sel 181 196 + spdif1_com_sel 182 197 + spdif0_gate 183 198 + spdif1_gate 184 199 + spdif_ipg_gate 185 187 200 188 201 Examples (for mx53): 189 202
+1
Documentation/devicetree/bindings/clock/imx6q-clock.txt
··· 208 208 pll4_post_div 193 209 209 pll5_post_div 194 210 210 pll5_video_div 195 211 + eim_slow 196 211 212 212 213 Examples: 213 214
+10
Documentation/devicetree/bindings/clock/imx6sl-clock.txt
··· 1 + * Clock bindings for Freescale i.MX6 SoloLite 2 + 3 + Required properties: 4 + - compatible: Should be "fsl,imx6sl-ccm" 5 + - reg: Address and length of the register set 6 + - #clock-cells: Should be <1> 7 + 8 + The clock consumer should specify the desired clock by having the clock 9 + ID in its "clocks" phandle cell. See include/dt-bindings/clock/imx6sl-clock.h 10 + for the full list of i.MX6 SoloLite clock IDs.
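The VF610 binding added in the same series includes a consumer example; for parity, a SoloLite sketch might look like the following. The register addresses, sizes, and clock IDs here are illustrative assumptions (the UART1 base matches the IMX6SL_UART1_BASE_ADDR debug macro added elsewhere in this merge), not part of this patch:

```dts
clks: ccm@20c4000 {
	compatible = "fsl,imx6sl-ccm";
	reg = <0x020c4000 0x4000>;
	#clock-cells = <1>;
};

uart1: serial@2020000 {
	compatible = "fsl,imx6sl-uart";
	reg = <0x02020000 0x4000>;
	clocks = <&clks IMX6SL_CLK_UART>, <&clks IMX6SL_CLK_UART_SERIAL>;
	clock-names = "ipg", "per";
};
```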
+26
Documentation/devicetree/bindings/clock/vf610-clock.txt
··· 1 + * Clock bindings for Freescale Vybrid VF610 SOC 2 + 3 + Required properties: 4 + - compatible: Should be "fsl,vf610-ccm" 5 + - reg: Address and length of the register set 6 + - #clock-cells: Should be <1> 7 + 8 + The clock consumer should specify the desired clock by having the clock 9 + ID in its "clocks" phandle cell. See include/dt-bindings/clock/vf610-clock.h 10 + for the full list of VF610 clock IDs. 11 + 12 + Examples: 13 + 14 + clks: ccm@4006b000 { 15 + compatible = "fsl,vf610-ccm"; 16 + reg = <0x4006b000 0x1000>; 17 + #clock-cells = <1>; 18 + }; 19 + 20 + uart1: serial@40028000 { 21 + compatible = "fsl,vf610-uart"; 22 + reg = <0x40028000 0x1000>; 23 + interrupts = <0 62 0x04>; 24 + clocks = <&clks VF610_CLK_UART1>; 25 + clock-names = "ipg"; 26 + };
+1 -1
Documentation/devicetree/bindings/rtc/atmel,at91rm9200-rtc.txt
··· 1 1 Atmel AT91RM9200 Real Time Clock 2 2 3 3 Required properties: 4 - - compatible: should be: "atmel,at91rm9200-rtc" 4 + - compatible: should be: "atmel,at91rm9200-rtc" or "atmel,at91sam9x5-rtc" 5 5 - reg: physical base address of the controller and length of memory mapped 6 6 region. 7 7 - interrupts: rtc alarm/event interrupt
-3
Documentation/kernel-parameters.txt
··· 3351 3351 plus one apbt timer for broadcast timer. 3352 3352 x86_mrst_timer=apbt_only | lapic_and_apbt 3353 3353 3354 - xd= [HW,XT] Original XT pre-IDE (RLL encoded) disks. 3355 - xd_geo= See header of drivers/block/xd.c. 3356 - 3357 3354 xen_emul_unplug= [HW,X86,XEN] 3358 3355 Unplug Xen emulated devices 3359 3356 Format: [unplug0,][unplug1]
-2
Documentation/m68k/kernel-options.txt
··· 80 80 /dev/sdd: -> 0x0830 (forth SCSI disk) 81 81 /dev/sde: -> 0x0840 (fifth SCSI disk) 82 82 /dev/fd : -> 0x0200 (floppy disk) 83 - /dev/xda: -> 0x0c00 (first XT disk, unused in Linux/m68k) 84 - /dev/xdb: -> 0x0c40 (second XT disk, unused in Linux/m68k) 85 83 86 84 The name must be followed by a decimal number, that stands for the 87 85 partition number. Internally, the value of the number is just
+3 -4
MAINTAINERS
··· 5766 5766 L: linux-nvme@lists.infradead.org 5767 5767 T: git git://git.infradead.org/users/willy/linux-nvme.git 5768 5768 S: Supported 5769 - F: drivers/block/nvme.c 5769 + F: drivers/block/nvme* 5770 5770 F: include/linux/nvme.h 5771 5771 5772 5772 OMAP SUPPORT ··· 7624 7624 SPI SUBSYSTEM 7625 7625 M: Mark Brown <broonie@kernel.org> 7626 7626 M: Grant Likely <grant.likely@linaro.org> 7627 - L: spi-devel-general@lists.sourceforge.net 7627 + L: linux-spi@vger.kernel.org 7628 7628 T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git 7629 7629 Q: http://patchwork.kernel.org/project/spi-devel-general/list/ 7630 7630 S: Maintained ··· 9004 9004 F: drivers/net/wireless/wl3501* 9005 9005 9006 9006 WM97XX TOUCHSCREEN DRIVERS 9007 - M: Mark Brown <broonie@opensource.wolfsonmicro.com> 9007 + M: Mark Brown <broonie@kernel.org> 9008 9008 M: Liam Girdwood <lrg@slimlogic.co.uk> 9009 9009 L: linux-input@vger.kernel.org 9010 9010 T: git git://opensource.wolfsonmicro.com/linux-2.6-touch ··· 9014 9014 F: include/linux/wm97xx.h 9015 9015 9016 9016 WOLFSON MICROELECTRONICS DRIVERS 9017 - M: Mark Brown <broonie@opensource.wolfsonmicro.com> 9018 9017 L: patches@opensource.wolfsonmicro.com 9019 9018 T: git git://opensource.wolfsonmicro.com/linux-2.6-asoc 9020 9019 T: git git://opensource.wolfsonmicro.com/linux-2.6-audioplus
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 10 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc5 4 + EXTRAVERSION = -rc6 5 5 NAME = Unicycling Gorilla 6 6 7 7 # *DOCUMENTATION*
+11 -2
arch/arm/Kconfig.debug
··· 251 251 Say Y here if you want kernel low-level debugging support 252 252 on i.MX6Q/DL. 253 253 254 + config DEBUG_IMX6SL_UART 255 + bool "i.MX6SL Debug UART" 256 + depends on SOC_IMX6SL 257 + help 258 + Say Y here if you want kernel low-level debugging support 259 + on i.MX6SL. 260 + 254 261 config DEBUG_KEYSTONE_UART0 255 262 bool "Kernel low-level debugging on KEYSTONE2 using UART0" 256 263 depends on ARCH_KEYSTONE ··· 585 578 DEBUG_IMX35_UART || \ 586 579 DEBUG_IMX51_UART || \ 587 580 DEBUG_IMX53_UART || \ 588 - DEBUG_IMX6Q_UART 581 + DEBUG_IMX6Q_UART || \ 582 + DEBUG_IMX6SL_UART 589 583 default 1 590 584 depends on ARCH_MXC 591 585 help ··· 685 677 DEBUG_IMX35_UART || \ 686 678 DEBUG_IMX51_UART || \ 687 679 DEBUG_IMX53_UART ||\ 688 - DEBUG_IMX6Q_UART 680 + DEBUG_IMX6Q_UART || \ 681 + DEBUG_IMX6SL_UART 689 682 default "debug/keystone.S" if DEBUG_KEYSTONE_UART0 || \ 690 683 DEBUG_KEYSTONE_UART1 691 684 default "debug/mvebu.S" if DEBUG_MVEBU_UART || \
+1 -1
arch/arm/boot/compressed/Makefile
··· 124 124 endif 125 125 126 126 ccflags-y := -fpic -mno-single-pic-base -fno-builtin -I$(obj) 127 - asflags-y := -Wa,-march=all -DZIMAGE 127 + asflags-y := -DZIMAGE 128 128 129 129 # Supply kernel BSS size to the decompressor via a linker symbol. 130 130 KBSS_SZ = $(shell $(CROSS_COMPILE)size $(obj)/../../../../vmlinux | \
+28
arch/arm/boot/compressed/debug.S
··· 1 1 #include <linux/linkage.h> 2 2 #include <asm/assembler.h> 3 3 4 + #ifndef CONFIG_DEBUG_SEMIHOSTING 5 + 4 6 #include CONFIG_DEBUG_LL_INCLUDE 5 7 6 8 ENTRY(putc) ··· 12 10 busyuart r3, r1 13 11 mov pc, lr 14 12 ENDPROC(putc) 13 + 14 + #else 15 + 16 + ENTRY(putc) 17 + adr r1, 1f 18 + ldmia r1, {r2, r3} 19 + add r2, r2, r1 20 + ldr r1, [r2, r3] 21 + strb r0, [r1] 22 + mov r0, #0x03 @ SYS_WRITEC 23 + ARM( svc #0x123456 ) 24 + THUMB( svc #0xab ) 25 + mov pc, lr 26 + .align 2 27 + 1: .word _GLOBAL_OFFSET_TABLE_ - . 28 + .word semi_writec_buf(GOT) 29 + ENDPROC(putc) 30 + 31 + .bss 32 + .global semi_writec_buf 33 + .type semi_writec_buf, %object 34 + semi_writec_buf: 35 + .space 4 36 + .size semi_writec_buf, 4 37 + 38 + #endif
+1
arch/arm/boot/compressed/head-sa1100.S
··· 11 11 #include <asm/mach-types.h> 12 12 13 13 .section ".start", "ax" 14 + .arch armv4 14 15 15 16 __SA1100_start: 16 17
+1
arch/arm/boot/compressed/head-shark.S
··· 18 18 19 19 .section ".start", "ax" 20 20 21 + .arch armv4 21 22 b __beginning 22 23 23 24 __ofw_data: .long 0 @ the number of memory blocks
+3 -2
arch/arm/boot/compressed/head.S
··· 11 11 #include <linux/linkage.h> 12 12 #include <asm/assembler.h> 13 13 14 + .arch armv7-a 14 15 /* 15 16 * Debugging stuff 16 17 * ··· 806 805 .align 2 807 806 .type proc_types,#object 808 807 proc_types: 809 - .word 0x00000000 @ old ARM ID 810 - .word 0x0000f000 808 + .word 0x41000000 @ old ARM ID 809 + .word 0xff00f000 811 810 mov pc, lr 812 811 THUMB( nop ) 813 812 mov pc, lr
+2 -2
arch/arm/boot/dts/am33xx.dtsi
··· 409 409 ti,hwmods = "gpmc"; 410 410 reg = <0x50000000 0x2000>; 411 411 interrupts = <100>; 412 - num-cs = <7>; 413 - num-waitpins = <2>; 412 + gpmc,num-cs = <7>; 413 + gpmc,num-waitpins = <2>; 414 414 #address-cells = <2>; 415 415 #size-cells = <1>; 416 416 status = "disabled";
+3 -2
arch/arm/boot/dts/armada-xp-gp.dts
··· 39 39 }; 40 40 41 41 soc { 42 - ranges = <0 0 0xd0000000 0x100000 43 - 0xf0000000 0 0xf0000000 0x1000000>; 42 + ranges = <0 0 0xd0000000 0x100000 /* Internal registers 1MiB */ 43 + 0xe0000000 0 0xe0000000 0x8100000 /* PCIe */ 44 + 0xf0000000 0 0xf0000000 0x1000000 /* Device Bus, NOR 16MiB */>; 44 45 45 46 internal-regs { 46 47 serial@12000 {
+3 -2
arch/arm/boot/dts/armada-xp-openblocks-ax3-4.dts
··· 27 27 }; 28 28 29 29 soc { 30 - ranges = <0 0 0xd0000000 0x100000 31 - 0xf0000000 0 0xf0000000 0x8000000>; 30 + ranges = <0 0 0xd0000000 0x100000 /* Internal registers 1MiB */ 31 + 0xe0000000 0 0xe0000000 0x8100000 /* PCIe */ 32 + 0xf0000000 0 0xf0000000 0x8000000 /* Device Bus, NOR 128MiB */>; 32 33 33 34 internal-regs { 34 35 serial@12000 {
+20
arch/arm/boot/dts/omap4-panda-common.dtsi
··· 56 56 }; 57 57 }; 58 58 59 + &omap4_pmx_wkup { 60 + pinctrl-names = "default"; 61 + pinctrl-0 = < 62 + &twl6030_wkup_pins 63 + >; 64 + 65 + twl6030_wkup_pins: pinmux_twl6030_wkup_pins { 66 + pinctrl-single,pins = < 67 + 0x14 0x2 /* fref_clk0_out.sys_drm_msecure OUTPUT | MODE2 */ 68 + >; 69 + }; 70 + }; 71 + 59 72 &omap4_pmx_core { 60 73 pinctrl-names = "default"; 61 74 pinctrl-0 = < 75 + &twl6030_pins 62 76 &twl6040_pins 63 77 &mcpdm_pins 64 78 &mcbsp1_pins 65 79 &dss_hdmi_pins 66 80 &tpd12s015_pins 67 81 >; 82 + 83 + twl6030_pins: pinmux_twl6030_pins { 84 + pinctrl-single,pins = < 85 + 0x15e 0x4118 /* sys_nirq1.sys_nirq1 OMAP_WAKEUP_EN | INPUT_PULLUP | MODE0 */ 86 + >; 87 + }; 68 88 69 89 twl6040_pins: pinmux_twl6040_pins { 70 90 pinctrl-single,pins = <
+20
arch/arm/boot/dts/omap4-sdp.dts
··· 142 142 }; 143 143 }; 144 144 145 + &omap4_pmx_wkup { 146 + pinctrl-names = "default"; 147 + pinctrl-0 = < 148 + &twl6030_wkup_pins 149 + >; 150 + 151 + twl6030_wkup_pins: pinmux_twl6030_wkup_pins { 152 + pinctrl-single,pins = < 153 + 0x14 0x2 /* fref_clk0_out.sys_drm_msecure OUTPUT | MODE2 */ 154 + >; 155 + }; 156 + }; 157 + 145 158 &omap4_pmx_core { 146 159 pinctrl-names = "default"; 147 160 pinctrl-0 = < 161 + &twl6030_pins 148 162 &twl6040_pins 149 163 &mcpdm_pins 150 164 &dmic_pins ··· 190 176 pinctrl-single,pins = < 191 177 0x11c 0x100 /* uart4_rx.uart4_rx INPUT | MODE0 */ 192 178 0x11e 0 /* uart4_tx.uart4_tx OUTPUT | MODE0 */ 179 + >; 180 + }; 181 + 182 + twl6030_pins: pinmux_twl6030_pins { 183 + pinctrl-single,pins = < 184 + 0x15e 0x4118 /* sys_nirq1.sys_nirq1 OMAP_WAKEUP_EN | INPUT_PULLUP | MODE0 */ 193 185 >; 194 186 }; 195 187
+3
arch/arm/boot/dts/omap5.dtsi
··· 538 538 interrupts = <0 41 0x4>; 539 539 ti,hwmods = "timer5"; 540 540 ti,timer-dsp; 541 + ti,timer-pwm; 541 542 }; 542 543 543 544 timer6: timer@4013a000 { ··· 575 574 reg = <0x4803e000 0x80>; 576 575 interrupts = <0 45 0x4>; 577 576 ti,hwmods = "timer9"; 577 + ti,timer-pwm; 578 578 }; 579 579 580 580 timer10: timer@48086000 { ··· 583 581 reg = <0x48086000 0x80>; 584 582 interrupts = <0 46 0x4>; 585 583 ti,hwmods = "timer10"; 584 + ti,timer-pwm; 586 585 }; 587 586 588 587 timer11: timer@48088000 {
+10
arch/arm/configs/imx_v6_v7_defconfig
··· 37 37 CONFIG_MACH_EUKREA_CPUIMX51SD=y 38 38 CONFIG_SOC_IMX53=y 39 39 CONFIG_SOC_IMX6Q=y 40 + CONFIG_SOC_IMX6SL=y 41 + CONFIG_SOC_VF610=y 40 42 CONFIG_MXC_PWM=y 41 43 CONFIG_SMP=y 42 44 CONFIG_VMSPLIT_2G=y ··· 49 47 CONFIG_VFP=y 50 48 CONFIG_NEON=y 51 49 CONFIG_BINFMT_MISC=m 50 + CONFIG_PM_RUNTIME=y 52 51 CONFIG_PM_DEBUG=y 53 52 CONFIG_PM_TEST_SUSPEND=y 54 53 CONFIG_NET=y ··· 173 170 CONFIG_LCD_CLASS_DEVICE=y 174 171 CONFIG_LCD_L4F00242T03=y 175 172 CONFIG_BACKLIGHT_CLASS_DEVICE=y 173 + CONFIG_BACKLIGHT_PWM=y 176 174 CONFIG_FRAMEBUFFER_CONSOLE=y 177 175 CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y 178 176 CONFIG_FONTS=y ··· 186 182 CONFIG_SND_IMX_SOC=y 187 183 CONFIG_SND_SOC_PHYCORE_AC97=y 188 184 CONFIG_SND_SOC_EUKREA_TLV320=y 185 + CONFIG_SND_SOC_IMX_WM8962=y 189 186 CONFIG_SND_SOC_IMX_SGTL5000=y 190 187 CONFIG_SND_SOC_IMX_MC13783=y 191 188 CONFIG_USB=y ··· 213 208 CONFIG_MXS_DMA=y 214 209 CONFIG_STAGING=y 215 210 CONFIG_DRM_IMX=y 211 + CONFIG_DRM_IMX_TVE=y 212 + CONFIG_DRM_IMX_FB_HELPER=y 213 + CONFIG_DRM_IMX_PARALLEL_DISPLAY=y 216 214 CONFIG_DRM_IMX_IPUV3_CORE=y 217 215 CONFIG_DRM_IMX_IPUV3=y 218 216 CONFIG_COMMON_CLK_DEBUG=y 219 217 # CONFIG_IOMMU_SUPPORT is not set 218 + CONFIG_PWM=y 219 + CONFIG_PWM_IMX=y 220 220 CONFIG_EXT2_FS=y 221 221 CONFIG_EXT2_FS_XATTR=y 222 222 CONFIG_EXT2_FS_POSIX_ACL=y
+9 -2
arch/arm/include/asm/percpu.h
··· 30 30 static inline unsigned long __my_cpu_offset(void) 31 31 { 32 32 unsigned long off; 33 - /* Read TPIDRPRW */ 34 - asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : : "memory"); 33 + register unsigned long *sp asm ("sp"); 34 + 35 + /* 36 + * Read TPIDRPRW. 37 + * We want to allow caching the value, so avoid using volatile and 38 + * instead use a fake stack read to hazard against barrier(). 39 + */ 40 + asm("mrc p15, 0, %0, c13, c0, 4" : "=r" (off) : "Q" (*sp)); 41 + 35 42 return off; 36 43 } 37 44 #define __my_cpu_offset __my_cpu_offset()
+10
arch/arm/include/debug/imx-uart.h
··· 65 65 #define IMX6Q_UART_BASE_ADDR(n) IMX6Q_UART##n##_BASE_ADDR 66 66 #define IMX6Q_UART_BASE(n) IMX6Q_UART_BASE_ADDR(n) 67 67 68 + #define IMX6SL_UART1_BASE_ADDR 0x02020000 69 + #define IMX6SL_UART2_BASE_ADDR 0x02024000 70 + #define IMX6SL_UART3_BASE_ADDR 0x02034000 71 + #define IMX6SL_UART4_BASE_ADDR 0x02038000 72 + #define IMX6SL_UART5_BASE_ADDR 0x02018000 73 + #define IMX6SL_UART_BASE_ADDR(n) IMX6SL_UART##n##_BASE_ADDR 74 + #define IMX6SL_UART_BASE(n) IMX6SL_UART_BASE_ADDR(n) 75 + 68 76 #define IMX_DEBUG_UART_BASE(soc) soc##_UART_BASE(CONFIG_DEBUG_IMX_UART_PORT) 69 77 70 78 #ifdef CONFIG_DEBUG_IMX1_UART ··· 91 83 #define UART_PADDR IMX_DEBUG_UART_BASE(IMX53) 92 84 #elif defined(CONFIG_DEBUG_IMX6Q_UART) 93 85 #define UART_PADDR IMX_DEBUG_UART_BASE(IMX6Q) 86 + #elif defined(CONFIG_DEBUG_IMX6SL_UART) 87 + #define UART_PADDR IMX_DEBUG_UART_BASE(IMX6SL) 94 88 #endif 95 89 96 90 #endif /* __DEBUG_IMX_UART_H */
+2
arch/arm/kernel/topology.c
··· 13 13 14 14 #include <linux/cpu.h> 15 15 #include <linux/cpumask.h> 16 + #include <linux/export.h> 16 17 #include <linux/init.h> 17 18 #include <linux/percpu.h> 18 19 #include <linux/node.h> ··· 201 200 * cpu topology table 202 201 */ 203 202 struct cputopo_arm cpu_topology[NR_CPUS]; 203 + EXPORT_SYMBOL_GPL(cpu_topology); 204 204 205 205 const struct cpumask *cpu_coregroup_mask(int cpu) 206 206 {
+47 -16
arch/arm/mach-imx/Kconfig
··· 56 56 uses the same clocks as the GPT. Anyway, on some systems the GPT 57 57 may be in use for other purposes. 58 58 59 - config MXC_ULPI 60 - bool 61 - 62 59 config ARCH_HAS_RNGA 63 60 bool 64 61 ··· 230 233 select IMX_HAVE_PLATFORM_MXC_EHCI 231 234 select IMX_HAVE_PLATFORM_MXC_NAND 232 235 select IMX_HAVE_PLATFORM_SDHCI_ESDHC_IMX 233 - select MXC_ULPI if USB_ULPI 236 + select USB_ULPI_VIEWPORT if USB_ULPI 234 237 select SOC_IMX25 235 238 236 239 choice ··· 281 284 select IMX_HAVE_PLATFORM_MXC_NAND 282 285 select IMX_HAVE_PLATFORM_MXC_W1 283 286 select IMX_HAVE_PLATFORM_SPI_IMX 284 - select MXC_ULPI if USB_ULPI 287 + select USB_ULPI_VIEWPORT if USB_ULPI 285 288 select SOC_IMX27 286 289 help 287 290 Include support for phyCORE-i.MX27 (aka pcm038) platform. This ··· 311 314 select IMX_HAVE_PLATFORM_MXC_EHCI 312 315 select IMX_HAVE_PLATFORM_MXC_NAND 313 316 select IMX_HAVE_PLATFORM_MXC_W1 314 - select MXC_ULPI if USB_ULPI 317 + select USB_ULPI_VIEWPORT if USB_ULPI 315 318 select SOC_IMX27 316 319 help 317 320 Include support for Eukrea CPUIMX27 platform. This includes ··· 366 369 select IMX_HAVE_PLATFORM_MXC_MMC 367 370 select IMX_HAVE_PLATFORM_SPI_IMX 368 371 select MXC_DEBUG_BOARD 369 - select MXC_ULPI if USB_ULPI 372 + select USB_ULPI_VIEWPORT if USB_ULPI 370 373 select SOC_IMX27 371 374 help 372 375 Include support for MX27PDK platform. This includes specific ··· 411 414 select IMX_HAVE_PLATFORM_MXC_NAND 412 415 select IMX_HAVE_PLATFORM_MXC_W1 413 416 select IMX_HAVE_PLATFORM_SPI_IMX 414 - select MXC_ULPI if USB_ULPI 417 + select USB_ULPI_VIEWPORT if USB_ULPI 415 418 select SOC_IMX27 416 419 help 417 420 Include support for phyCARD-s (aka pca100) platform. This ··· 478 481 select IMX_HAVE_PLATFORM_MXC_EHCI 479 482 select IMX_HAVE_PLATFORM_MXC_MMC 480 483 select IMX_HAVE_PLATFORM_SPI_IMX 481 - select MXC_ULPI if USB_ULPI 484 + select USB_ULPI_VIEWPORT if USB_ULPI 482 485 select SOC_IMX31 483 486 help 484 487 Include support for mx31 based LILLY1131 modules. 
This includes ··· 494 497 select IMX_HAVE_PLATFORM_MXC_RTC 495 498 select IMX_HAVE_PLATFORM_SPI_IMX 496 499 select LEDS_GPIO_REGISTER 497 - select MXC_ULPI if USB_ULPI 500 + select USB_ULPI_VIEWPORT if USB_ULPI 498 501 select SOC_IMX31 499 502 help 500 503 Include support for MX31 LITEKIT platform. This includes specific ··· 511 514 select IMX_HAVE_PLATFORM_MXC_MMC 512 515 select IMX_HAVE_PLATFORM_MXC_NAND 513 516 select IMX_HAVE_PLATFORM_MXC_W1 514 - select MXC_ULPI if USB_ULPI 517 + select USB_ULPI_VIEWPORT if USB_ULPI 515 518 select SOC_IMX31 516 519 help 517 520 Include support for Phytec pcm037 platform. This includes ··· 541 544 select IMX_HAVE_PLATFORM_MXC_NAND 542 545 select IMX_HAVE_PLATFORM_SPI_IMX 543 546 select MXC_DEBUG_BOARD 544 - select MXC_ULPI if USB_ULPI 547 + select USB_ULPI_VIEWPORT if USB_ULPI 545 548 select SOC_IMX31 546 549 help 547 550 Include support for MX31PDK (3DS) platform. This includes specific ··· 568 571 select IMX_HAVE_PLATFORM_MXC_MMC 569 572 select IMX_HAVE_PLATFORM_SPI_IMX 570 573 select LEDS_GPIO_REGISTER 571 - select MXC_ULPI if USB_ULPI 574 + select USB_ULPI_VIEWPORT if USB_ULPI 572 575 select SOC_IMX31 573 576 help 574 577 Include support for mx31moboard platform. This includes specific ··· 592 595 select IMX_HAVE_PLATFORM_MXC_EHCI 593 596 select IMX_HAVE_PLATFORM_MXC_MMC 594 597 select IMX_HAVE_PLATFORM_MXC_NAND 595 - select MXC_ULPI if USB_ULPI 598 + select USB_ULPI_VIEWPORT if USB_ULPI 596 599 select SOC_IMX31 597 600 help 598 601 Include support for Atmark Armadillo-500 platform. This includes ··· 636 639 select IMX_HAVE_PLATFORM_MXC_EHCI 637 640 select IMX_HAVE_PLATFORM_MXC_NAND 638 641 select IMX_HAVE_PLATFORM_SDHCI_ESDHC_IMX 639 - select MXC_ULPI if USB_ULPI 642 + select USB_ULPI_VIEWPORT if USB_ULPI 640 643 select SOC_IMX35 641 644 help 642 645 Include support for Phytec pcm043 platform. 
This includes ··· 670 673 select IMX_HAVE_PLATFORM_MXC_EHCI 671 674 select IMX_HAVE_PLATFORM_MXC_NAND 672 675 select IMX_HAVE_PLATFORM_SDHCI_ESDHC_IMX 673 - select MXC_ULPI if USB_ULPI 676 + select USB_ULPI_VIEWPORT if USB_ULPI 674 677 select SOC_IMX35 675 678 help 676 679 Include support for Eukrea CPUIMX35 platform. This includes ··· 812 815 813 816 help 814 817 This enables support for Freescale i.MX6 Quad processor. 818 + 819 + config SOC_IMX6SL 820 + bool "i.MX6 SoloLite support" 821 + select ARM_ERRATA_754322 822 + select ARM_ERRATA_775420 823 + select ARM_GIC 824 + select CPU_V7 825 + select HAVE_IMX_ANATOP 826 + select HAVE_IMX_GPC 827 + select HAVE_IMX_MMDC 828 + select HAVE_IMX_SRC 829 + select PINCTRL 830 + select PINCTRL_IMX6SL 831 + select PL310_ERRATA_588369 if CACHE_PL310 832 + select PL310_ERRATA_727915 if CACHE_PL310 833 + select PL310_ERRATA_769419 if CACHE_PL310 834 + 835 + help 836 + This enables support for Freescale i.MX6 SoloLite processor. 837 + 838 + config SOC_VF610 839 + bool "Vybrid Family VF610 support" 840 + select CPU_V7 841 + select ARM_GIC 842 + select CLKSRC_OF 843 + select PINCTRL 844 + select PINCTRL_VF610 845 + select VF_PIT_TIMER 846 + select PL310_ERRATA_588369 if CACHE_PL310 847 + select PL310_ERRATA_727915 if CACHE_PL310 848 + select PL310_ERRATA_769419 if CACHE_PL310 849 + 850 + help 851 + This enable support for Freescale Vybrid VF610 processor. 815 852 816 853 endif 817 854
+3 -1
arch/arm/mach-imx/Makefile
··· 23 23 obj-$(CONFIG_MXC_TZIC) += tzic.o 24 24 obj-$(CONFIG_MXC_AVIC) += avic.o 25 25 26 - obj-$(CONFIG_MXC_ULPI) += ulpi.o 27 26 obj-$(CONFIG_MXC_USE_EPIT) += epit.o 28 27 obj-$(CONFIG_MXC_DEBUG_BOARD) += 3ds_debugboard.o 29 28 ··· 97 98 obj-$(CONFIG_SMP) += headsmp.o platsmp.o 98 99 obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o 99 100 obj-$(CONFIG_SOC_IMX6Q) += clk-imx6q.o mach-imx6q.o 101 + obj-$(CONFIG_SOC_IMX6SL) += clk-imx6sl.o mach-imx6sl.o 100 102 101 103 ifeq ($(CONFIG_PM),y) 102 104 obj-$(CONFIG_SOC_IMX6Q) += pm-imx6q.o headsmp.o ··· 110 110 111 111 obj-$(CONFIG_MACH_IMX51_DT) += imx51-dt.o 112 112 obj-$(CONFIG_SOC_IMX53) += mach-imx53.o 113 + 114 + obj-$(CONFIG_SOC_VF610) += clk-vf610.o mach-vf610.o 113 115 114 116 obj-y += devices/
+36 -37
arch/arm/mach-imx/clk-imx51-imx53.c
··· 73 73 "tve_sel", "lp_apm", 74 74 "uart_root", "dummy"/* spdif0_clk_root */, 75 75 "dummy", "dummy", }; 76 + static const char *mx51_spdif_xtal_sel[] = { "osc", "ckih", "ckih2", }; 77 + static const char *mx53_spdif_xtal_sel[] = { "osc", "ckih", "ckih2", "pll4_sw", }; 78 + static const char *spdif_sel[] = { "pll1_sw", "pll2_sw", "pll3_sw", "spdif_xtal_sel", }; 79 + static const char *spdif0_com_sel[] = { "spdif0_podf", "ssi1_root_gate", }; 80 + static const char *mx51_spdif1_com_sel[] = { "spdif1_podf", "ssi2_root_gate", }; 81 + 76 82 77 83 enum imx5_clks { 78 84 dummy, ckil, osc, ckih1, ckih2, ahb, ipg, axi_a, axi_b, uart_pred, ··· 116 110 owire_gate, gpu3d_s, gpu2d_s, gpu3d_gate, gpu2d_gate, garb_gate, 117 111 cko1_sel, cko1_podf, cko1, 118 112 cko2_sel, cko2_podf, cko2, 119 - srtc_gate, pata_gate, 113 + srtc_gate, pata_gate, sata_gate, spdif_xtal_sel, spdif0_sel, 114 + spdif1_sel, spdif0_pred, spdif0_podf, spdif1_pred, spdif1_podf, 115 + spdif0_com_s, spdif1_com_sel, spdif0_gate, spdif1_gate, spdif_ipg_gate, 120 116 clk_max 121 117 }; 122 118 ··· 131 123 { 132 124 int i; 133 125 126 + of_clk_init(NULL); 127 + 134 128 clk[dummy] = imx_clk_fixed("dummy", 0); 135 - clk[ckil] = imx_clk_fixed("ckil", rate_ckil); 136 - clk[osc] = imx_clk_fixed("osc", rate_osc); 137 - clk[ckih1] = imx_clk_fixed("ckih1", rate_ckih1); 138 - clk[ckih2] = imx_clk_fixed("ckih2", rate_ckih2); 129 + clk[ckil] = imx_obtain_fixed_clock("ckil", rate_ckil); 130 + clk[osc] = imx_obtain_fixed_clock("osc", rate_osc); 131 + clk[ckih1] = imx_obtain_fixed_clock("ckih1", rate_ckih1); 132 + clk[ckih2] = imx_obtain_fixed_clock("ckih2", rate_ckih2); 139 133 140 134 clk[lp_apm] = imx_clk_mux("lp_apm", MXC_CCM_CCSR, 9, 1, 141 135 lp_apm_sel, ARRAY_SIZE(lp_apm_sel)); ··· 277 267 clk[owire_gate] = imx_clk_gate2("owire_gate", "per_root", MXC_CCM_CCGR2, 22); 278 268 clk[srtc_gate] = imx_clk_gate2("srtc_gate", "per_root", MXC_CCM_CCGR4, 28); 279 269 clk[pata_gate] = imx_clk_gate2("pata_gate", "ipg", 
MXC_CCM_CCGR4, 0); 270 + clk[spdif0_sel] = imx_clk_mux("spdif0_sel", MXC_CCM_CSCMR2, 0, 2, spdif_sel, ARRAY_SIZE(spdif_sel)); 271 + clk[spdif0_pred] = imx_clk_divider("spdif0_pred", "spdif0_sel", MXC_CCM_CDCDR, 25, 3); 272 + clk[spdif0_podf] = imx_clk_divider("spdif0_podf", "spdif0_pred", MXC_CCM_CDCDR, 19, 6); 273 + clk[spdif0_com_s] = imx_clk_mux_flags("spdif0_com_sel", MXC_CCM_CSCMR2, 4, 1, 274 + spdif0_com_sel, ARRAY_SIZE(spdif0_com_sel), CLK_SET_RATE_PARENT); 275 + clk[spdif0_gate] = imx_clk_gate2("spdif0_gate", "spdif0_com_sel", MXC_CCM_CCGR5, 26); 276 + clk[spdif_ipg_gate] = imx_clk_gate2("spdif_ipg_gate", "ipg", MXC_CCM_CCGR5, 30); 280 277 281 278 for (i = 0; i < ARRAY_SIZE(clk); i++) 282 279 if (IS_ERR(clk[i])) ··· 395 378 clk[mipi_hsc2_gate] = imx_clk_gate2("mipi_hsc2_gate", "ipg", MXC_CCM_CCGR4, 8); 396 379 clk[mipi_esc_gate] = imx_clk_gate2("mipi_esc_gate", "ipg", MXC_CCM_CCGR4, 10); 397 380 clk[mipi_hsp_gate] = imx_clk_gate2("mipi_hsp_gate", "ipg", MXC_CCM_CCGR4, 12); 381 + clk[spdif_xtal_sel] = imx_clk_mux("spdif_xtal_sel", MXC_CCM_CSCMR1, 2, 2, 382 + mx51_spdif_xtal_sel, ARRAY_SIZE(mx51_spdif_xtal_sel)); 383 + clk[spdif1_sel] = imx_clk_mux("spdif1_sel", MXC_CCM_CSCMR2, 2, 2, 384 + spdif_sel, ARRAY_SIZE(spdif_sel)); 385 + clk[spdif1_pred] = imx_clk_divider("spdif1_podf", "spdif1_sel", MXC_CCM_CDCDR, 16, 3); 386 + clk[spdif1_podf] = imx_clk_divider("spdif1_podf", "spdif1_pred", MXC_CCM_CDCDR, 9, 6); 387 + clk[spdif1_com_sel] = imx_clk_mux("spdif1_com_sel", MXC_CCM_CSCMR2, 5, 1, 388 + mx51_spdif1_com_sel, ARRAY_SIZE(mx51_spdif1_com_sel)); 389 + clk[spdif1_gate] = imx_clk_gate2("spdif1_gate", "spdif1_com_sel", MXC_CCM_CCGR5, 28); 398 390 399 391 for (i = 0; i < ARRAY_SIZE(clk); i++) 400 392 if (IS_ERR(clk[i])) ··· 511 485 clk[can2_serial_gate] = imx_clk_gate2("can2_serial_gate", "can_sel", MXC_CCM_CCGR4, 8); 512 486 clk[can2_ipg_gate] = imx_clk_gate2("can2_ipg_gate", "ipg", MXC_CCM_CCGR4, 6); 513 487 clk[i2c3_gate] = imx_clk_gate2("i2c3_gate", 
"per_root", MXC_CCM_CCGR1, 22); 488 + clk[sata_gate] = imx_clk_gate2("sata_gate", "ipg", MXC_CCM_CCGR4, 2); 514 489 515 490 clk[cko1_sel] = imx_clk_mux("cko1_sel", MXC_CCM_CCOSR, 0, 4, 516 491 mx53_cko1_sel, ARRAY_SIZE(mx53_cko1_sel)); ··· 522 495 mx53_cko2_sel, ARRAY_SIZE(mx53_cko2_sel)); 523 496 clk[cko2_podf] = imx_clk_divider("cko2_podf", "cko2_sel", MXC_CCM_CCOSR, 21, 3); 524 497 clk[cko2] = imx_clk_gate2("cko2", "cko2_podf", MXC_CCM_CCOSR, 24); 498 + clk[spdif_xtal_sel] = imx_clk_mux("spdif_xtal_sel", MXC_CCM_CSCMR1, 2, 2, 499 + mx53_spdif_xtal_sel, ARRAY_SIZE(mx53_spdif_xtal_sel)); 525 500 526 501 for (i = 0; i < ARRAY_SIZE(clk); i++) 527 502 if (IS_ERR(clk[i])) ··· 571 542 return 0; 572 543 } 573 544 574 - #ifdef CONFIG_OF 575 - static void __init clk_get_freq_dt(unsigned long *ckil, unsigned long *osc, 576 - unsigned long *ckih1, unsigned long *ckih2) 577 - { 578 - struct device_node *np; 579 - 580 - /* retrieve the freqency of fixed clocks from device tree */ 581 - for_each_compatible_node(np, NULL, "fixed-clock") { 582 - u32 rate; 583 - if (of_property_read_u32(np, "clock-frequency", &rate)) 584 - continue; 585 - 586 - if (of_device_is_compatible(np, "fsl,imx-ckil")) 587 - *ckil = rate; 588 - else if (of_device_is_compatible(np, "fsl,imx-osc")) 589 - *osc = rate; 590 - else if (of_device_is_compatible(np, "fsl,imx-ckih1")) 591 - *ckih1 = rate; 592 - else if (of_device_is_compatible(np, "fsl,imx-ckih2")) 593 - *ckih2 = rate; 594 - } 595 - } 596 - 597 545 int __init mx51_clocks_init_dt(void) 598 546 { 599 - unsigned long ckil, osc, ckih1, ckih2; 600 - 601 - clk_get_freq_dt(&ckil, &osc, &ckih1, &ckih2); 602 - return mx51_clocks_init(ckil, osc, ckih1, ckih2); 547 + return mx51_clocks_init(0, 0, 0, 0); 603 548 } 604 549 605 550 int __init mx53_clocks_init_dt(void) 606 551 { 607 - unsigned long ckil, osc, ckih1, ckih2; 608 - 609 - clk_get_freq_dt(&ckil, &osc, &ckih1, &ckih2); 610 - return mx53_clocks_init(ckil, osc, ckih1, ckih2); 552 + return 
mx53_clocks_init(0, 0, 0, 0); 611 553 } 612 - #endif
+26 -22
arch/arm/mach-imx/clk-imx6q.c
··· 238 238 pll4_audio, pll5_video, pll8_mlb, pll7_usb_host, pll6_enet, ssi1_ipg, 239 239 ssi2_ipg, ssi3_ipg, rom, usbphy1, usbphy2, ldb_di0_div_3_5, ldb_di1_div_3_5, 240 240 sata_ref, sata_ref_100m, pcie_ref, pcie_ref_125m, enet_ref, usbphy1_gate, 241 - usbphy2_gate, pll4_post_div, pll5_post_div, pll5_video_div, clk_max 241 + usbphy2_gate, pll4_post_div, pll5_post_div, pll5_video_div, eim_slow, clk_max 242 242 }; 243 243 244 244 static struct clk *clk[clk_max]; ··· 270 270 { } 271 271 }; 272 272 273 - int __init mx6q_clocks_init(void) 273 + static void __init imx6q_clocks_init(struct device_node *ccm_node) 274 274 { 275 275 struct device_node *np; 276 276 void __iomem *base; 277 277 int i, irq; 278 278 279 279 clk[dummy] = imx_clk_fixed("dummy", 0); 280 - 281 - /* retrieve the freqency of fixed clocks from device tree */ 282 - for_each_compatible_node(np, NULL, "fixed-clock") { 283 - u32 rate; 284 - if (of_property_read_u32(np, "clock-frequency", &rate)) 285 - continue; 286 - 287 - if (of_device_is_compatible(np, "fsl,imx-ckil")) 288 - clk[ckil] = imx_clk_fixed("ckil", rate); 289 - else if (of_device_is_compatible(np, "fsl,imx-ckih1")) 290 - clk[ckih] = imx_clk_fixed("ckih", rate); 291 - else if (of_device_is_compatible(np, "fsl,imx-osc")) 292 - clk[osc] = imx_clk_fixed("osc", rate); 293 - } 280 + clk[ckil] = imx_obtain_fixed_clock("ckil", 0); 281 + clk[ckih] = imx_obtain_fixed_clock("ckih1", 0); 282 + clk[osc] = imx_obtain_fixed_clock("osc", 0); 294 283 295 284 np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-anatop"); 296 285 base = of_iomap(np, 0); ··· 301 312 clk[pll5_video] = imx_clk_pllv3(IMX_PLLV3_AV, "pll5_video", "osc", base + 0xa0, 0x7f); 302 313 clk[pll6_enet] = imx_clk_pllv3(IMX_PLLV3_ENET, "pll6_enet", "osc", base + 0xe0, 0x3); 303 314 clk[pll7_usb_host] = imx_clk_pllv3(IMX_PLLV3_USB, "pll7_usb_host","osc", base + 0x20, 0x3); 304 - clk[pll8_mlb] = imx_clk_pllv3(IMX_PLLV3_MLB, "pll8_mlb", "osc", base + 0xd0, 0x0); 305 315 306 316 /* 307 317 * Bit 20 
is the reserved and read-only bit, we do this only for: ··· 348 360 clk[pll5_post_div] = clk_register_divider_table(NULL, "pll5_post_div", "pll5_video", CLK_SET_RATE_PARENT, base + 0xa0, 19, 2, 0, post_div_table, &imx_ccm_lock); 349 361 clk[pll5_video_div] = clk_register_divider_table(NULL, "pll5_video_div", "pll5_post_div", CLK_SET_RATE_PARENT, base + 0x170, 30, 2, 0, video_div_table, &imx_ccm_lock); 350 362 351 - np = of_find_compatible_node(NULL, NULL, "fsl,imx6q-ccm"); 363 + np = ccm_node; 352 364 base = of_iomap(np, 0); 353 365 WARN_ON(!base); 354 366 ccm_base = base; ··· 469 481 clk[esai] = imx_clk_gate2("esai", "esai_podf", base + 0x6c, 16); 470 482 clk[gpt_ipg] = imx_clk_gate2("gpt_ipg", "ipg", base + 0x6c, 20); 471 483 clk[gpt_ipg_per] = imx_clk_gate2("gpt_ipg_per", "ipg_per", base + 0x6c, 22); 472 - clk[gpu2d_core] = imx_clk_gate2("gpu2d_core", "gpu2d_core_podf", base + 0x6c, 24); 484 + if (cpu_is_imx6dl()) 485 + /* 486 + * The multiplexer and divider of imx6q clock gpu3d_shader get 487 + * redefined/reused as gpu2d_core_sel and gpu2d_core_podf on imx6dl. 
488 + */ 489 + clk[gpu2d_core] = imx_clk_gate2("gpu2d_core", "gpu3d_shader", base + 0x6c, 24); 490 + else 491 + clk[gpu2d_core] = imx_clk_gate2("gpu2d_core", "gpu2d_core_podf", base + 0x6c, 24); 473 492 clk[gpu3d_core] = imx_clk_gate2("gpu3d_core", "gpu3d_core_podf", base + 0x6c, 26); 474 493 clk[hdmi_iahb] = imx_clk_gate2("hdmi_iahb", "ahb", base + 0x70, 0); 475 494 clk[hdmi_isfr] = imx_clk_gate2("hdmi_isfr", "pll3_pfd1_540m", base + 0x70, 4); ··· 494 499 clk[ldb_di1] = imx_clk_gate2("ldb_di1", "ldb_di1_podf", base + 0x74, 14); 495 500 clk[ipu2_di1] = imx_clk_gate2("ipu2_di1", "ipu2_di1_sel", base + 0x74, 10); 496 501 clk[hsi_tx] = imx_clk_gate2("hsi_tx", "hsi_tx_podf", base + 0x74, 16); 497 - clk[mlb] = imx_clk_gate2("mlb", "axi", base + 0x74, 18); 502 + if (cpu_is_imx6dl()) 503 + /* 504 + * The multiplexer and divider of the imx6q clock gpu2d get 505 + * redefined/reused as mlb_sys_sel and mlb_sys_clk_podf on imx6dl. 506 + */ 507 + clk[mlb] = imx_clk_gate2("mlb", "gpu2d_core_podf", base + 0x74, 18); 508 + else 509 + clk[mlb] = imx_clk_gate2("mlb", "axi", base + 0x74, 18); 498 510 clk[mmdc_ch0_axi] = imx_clk_gate2("mmdc_ch0_axi", "mmdc_ch0_axi_podf", base + 0x74, 20); 499 511 clk[mmdc_ch1_axi] = imx_clk_gate2("mmdc_ch1_axi", "mmdc_ch1_axi_podf", base + 0x74, 22); 500 512 clk[ocram] = imx_clk_gate2("ocram", "ahb", base + 0x74, 28); ··· 530 528 clk[usdhc2] = imx_clk_gate2("usdhc2", "usdhc2_podf", base + 0x80, 4); 531 529 clk[usdhc3] = imx_clk_gate2("usdhc3", "usdhc3_podf", base + 0x80, 6); 532 530 clk[usdhc4] = imx_clk_gate2("usdhc4", "usdhc4_podf", base + 0x80, 8); 531 + clk[eim_slow] = imx_clk_gate2("eim_slow", "emi_slow_podf", base + 0x80, 10); 533 532 clk[vdo_axi] = imx_clk_gate2("vdo_axi", "vdo_axi_sel", base + 0x80, 12); 534 533 clk[vpu_axi] = imx_clk_gate2("vpu_axi", "vpu_axi_podf", base + 0x80, 14); 535 534 clk[cko1] = imx_clk_gate("cko1", "cko1_podf", base + 0x60, 7); ··· 550 547 clk_register_clkdev(clk[ahb], "ahb", NULL); 551 548 
clk_register_clkdev(clk[cko1], "cko1", NULL); 552 549 clk_register_clkdev(clk[arm], NULL, "cpu0"); 550 + clk_register_clkdev(clk[pll4_post_div], "pll4_post_div", NULL); 551 + clk_register_clkdev(clk[pll4_audio], "pll4_audio", NULL); 553 552 554 553 if (imx6q_revision() != IMX_CHIP_REVISION_1_0) { 555 554 clk_set_parent(clk[ldb_di0_sel], clk[pll5_video_div]); ··· 581 576 WARN_ON(!base); 582 577 irq = irq_of_parse_and_map(np, 0); 583 578 mxc_timer_init(base, irq); 584 - 585 - return 0; 586 579 } 580 + CLK_OF_DECLARE(imx6q, "fsl,imx6q-ccm", imx6q_clocks_init);
+267
arch/arm/mach-imx/clk-imx6sl.c
··· 1 + /* 2 + * Copyright 2013 Freescale Semiconductor, Inc. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + */ 9 + 10 + #include <linux/clk.h> 11 + #include <linux/clkdev.h> 12 + #include <linux/err.h> 13 + #include <linux/of.h> 14 + #include <linux/of_address.h> 15 + #include <linux/of_irq.h> 16 + #include <dt-bindings/clock/imx6sl-clock.h> 17 + 18 + #include "clk.h" 19 + #include "common.h" 20 + 21 + static const char *const step_sels[] = { "osc", "pll2_pfd2", }; 22 + static const char *const pll1_sw_sels[] = { "pll1_sys", "step", }; 23 + static const char *const ocram_alt_sels[] = { "pll2_pfd2", "pll3_pfd1", }; 24 + static const char *const ocram_sels[] = { "periph", "ocram_alt_sels", }; 25 + static const char *const pre_periph_sels[] = { "pll2_bus", "pll2_pfd2", "pll2_pfd0", "pll2_198m", }; 26 + static const char *const periph_clk2_sels[] = { "pll3_usb_otg", "osc", "osc", "dummy", }; 27 + static const char *const periph2_clk2_sels[] = { "pll3_usb_otg", "pll2_bus", }; 28 + static const char *const periph_sels[] = { "pre_periph_sel", "periph_clk2_podf", }; 29 + static const char *const periph2_sels[] = { "pre_periph2_sel", "periph2_clk2_podf", }; 30 + static const char *const csi_lcdif_sels[] = { "mmdc", "pll2_pfd2", "pll3_120m", "pll3_pfd1", }; 31 + static const char *const usdhc_sels[] = { "pll2_pfd2", "pll2_pfd0", }; 32 + static const char *const ssi_sels[] = { "pll3_pfd2", "pll3_pfd3", "pll4_post_div", "dummy", }; 33 + static const char *const perclk_sels[] = { "ipg", "osc", }; 34 + static const char *const epdc_pxp_sels[] = { "mmdc", "pll3_usb_otg", "pll5_video_div", "pll2_pfd0", "pll2_pfd2", "pll3_pfd1", }; 35 + static const char *const gpu2d_ovg_sels[] = { "pll3_pfd1", "pll3_usb_otg", "pll2_bus", "pll2_pfd2", }; 36 + static const char *const gpu2d_sels[] = { "pll2_pfd2", "pll3_usb_otg",
"pll3_pfd1", "pll2_bus", }; 37 + static const char *const lcdif_pix_sels[] = { "pll2_bus", "pll3_usb_otg", "pll5_video_div", "pll2_pfd0", "pll3_pfd0", "pll3_pfd1", }; 38 + static const char *const epdc_pix_sels[] = { "pll2_bus", "pll3_usb_otg", "pll5_video_div", "pll2_pfd0", "pll2_pfd1", "pll3_pfd1", }; 39 + static const char *const audio_sels[] = { "pll4_post_div", "pll3_pfd2", "pll3_pfd3", "pll3_usb_otg", }; 40 + static const char *const ecspi_sels[] = { "pll3_60m", "osc", }; 41 + static const char *const uart_sels[] = { "pll3_80m", "osc", }; 42 + 43 + static struct clk_div_table clk_enet_ref_table[] = { 44 + { .val = 0, .div = 20, }, 45 + { .val = 1, .div = 10, }, 46 + { .val = 2, .div = 5, }, 47 + { .val = 3, .div = 4, }, 48 + { } 49 + }; 50 + 51 + static struct clk_div_table post_div_table[] = { 52 + { .val = 2, .div = 1, }, 53 + { .val = 1, .div = 2, }, 54 + { .val = 0, .div = 4, }, 55 + { } 56 + }; 57 + 58 + static struct clk_div_table video_div_table[] = { 59 + { .val = 0, .div = 1, }, 60 + { .val = 1, .div = 2, }, 61 + { .val = 2, .div = 1, }, 62 + { .val = 3, .div = 4, }, 63 + { } 64 + }; 65 + 66 + static struct clk *clks[IMX6SL_CLK_CLK_END]; 67 + static struct clk_onecell_data clk_data; 68 + 69 + static void __init imx6sl_clocks_init(struct device_node *ccm_node) 70 + { 71 + struct device_node *np; 72 + void __iomem *base; 73 + int irq; 74 + int i; 75 + 76 + clks[IMX6SL_CLK_DUMMY] = imx_clk_fixed("dummy", 0); 77 + clks[IMX6SL_CLK_CKIL] = imx_obtain_fixed_clock("ckil", 0); 78 + clks[IMX6SL_CLK_OSC] = imx_obtain_fixed_clock("osc", 0); 79 + 80 + np = of_find_compatible_node(NULL, NULL, "fsl,imx6sl-anatop"); 81 + base = of_iomap(np, 0); 82 + WARN_ON(!base); 83 + 84 + /* type name parent base div_mask */ 85 + clks[IMX6SL_CLK_PLL1_SYS] = imx_clk_pllv3(IMX_PLLV3_SYS, "pll1_sys", "osc", base, 0x7f); 86 + clks[IMX6SL_CLK_PLL2_BUS] = imx_clk_pllv3(IMX_PLLV3_GENERIC, "pll2_bus", "osc", base + 0x30, 0x1); 87 + clks[IMX6SL_CLK_PLL3_USB_OTG] =
imx_clk_pllv3(IMX_PLLV3_USB, "pll3_usb_otg", "osc", base + 0x10, 0x3); 88 + clks[IMX6SL_CLK_PLL4_AUDIO] = imx_clk_pllv3(IMX_PLLV3_AV, "pll4_audio", "osc", base + 0x70, 0x7f); 89 + clks[IMX6SL_CLK_PLL5_VIDEO] = imx_clk_pllv3(IMX_PLLV3_AV, "pll5_video", "osc", base + 0xa0, 0x7f); 90 + clks[IMX6SL_CLK_PLL6_ENET] = imx_clk_pllv3(IMX_PLLV3_ENET, "pll6_enet", "osc", base + 0xe0, 0x3); 91 + clks[IMX6SL_CLK_PLL7_USB_HOST] = imx_clk_pllv3(IMX_PLLV3_USB, "pll7_usb_host", "osc", base + 0x20, 0x3); 92 + 93 + /* 94 + * usbphy1 and usbphy2 are implemented as dummy gates using reserved 95 + * bit 20. They are used by the phy driver to keep the refcount of 96 + * the parent PLL correct. usbphy1_gate and usbphy2_gate only need to be 97 + * turned on during boot, and software will not need to control them 98 + * anymore after that. 99 + */ 100 + clks[IMX6SL_CLK_USBPHY1] = imx_clk_gate("usbphy1", "pll3_usb_otg", base + 0x10, 20); 101 + clks[IMX6SL_CLK_USBPHY2] = imx_clk_gate("usbphy2", "pll7_usb_host", base + 0x20, 20); 102 + clks[IMX6SL_CLK_USBPHY1_GATE] = imx_clk_gate("usbphy1_gate", "dummy", base + 0x10, 6); 103 + clks[IMX6SL_CLK_USBPHY2_GATE] = imx_clk_gate("usbphy2_gate", "dummy", base + 0x20, 6); 104 + 105 + /* dev name parent_name flags reg shift width div: flags, div_table lock */ 106 + clks[IMX6SL_CLK_PLL4_POST_DIV] = clk_register_divider_table(NULL, "pll4_post_div", "pll4_audio", CLK_SET_RATE_PARENT, base + 0x70, 19, 2, 0, post_div_table, &imx_ccm_lock); 107 + clks[IMX6SL_CLK_PLL5_POST_DIV] = clk_register_divider_table(NULL, "pll5_post_div", "pll5_video", CLK_SET_RATE_PARENT, base + 0xa0, 19, 2, 0, post_div_table, &imx_ccm_lock); 108 + clks[IMX6SL_CLK_PLL5_VIDEO_DIV] = clk_register_divider_table(NULL, "pll5_video_div", "pll5_post_div", CLK_SET_RATE_PARENT, base + 0x170, 30, 2, 0, video_div_table, &imx_ccm_lock); 109 + clks[IMX6SL_CLK_ENET_REF] = clk_register_divider_table(NULL, "enet_ref", "pll6_enet", 0, base + 0xe0, 0, 2, 0, clk_enet_ref_table, &imx_ccm_lock); 110 + 111 + /* name
parent_name reg idx */ 112 + clks[IMX6SL_CLK_PLL2_PFD0] = imx_clk_pfd("pll2_pfd0", "pll2_bus", base + 0x100, 0); 113 + clks[IMX6SL_CLK_PLL2_PFD1] = imx_clk_pfd("pll2_pfd1", "pll2_bus", base + 0x100, 1); 114 + clks[IMX6SL_CLK_PLL2_PFD2] = imx_clk_pfd("pll2_pfd2", "pll2_bus", base + 0x100, 2); 115 + clks[IMX6SL_CLK_PLL3_PFD0] = imx_clk_pfd("pll3_pfd0", "pll3_usb_otg", base + 0xf0, 0); 116 + clks[IMX6SL_CLK_PLL3_PFD1] = imx_clk_pfd("pll3_pfd1", "pll3_usb_otg", base + 0xf0, 1); 117 + clks[IMX6SL_CLK_PLL3_PFD2] = imx_clk_pfd("pll3_pfd2", "pll3_usb_otg", base + 0xf0, 2); 118 + clks[IMX6SL_CLK_PLL3_PFD3] = imx_clk_pfd("pll3_pfd3", "pll3_usb_otg", base + 0xf0, 3); 119 + 120 + /* name parent_name mult div */ 121 + clks[IMX6SL_CLK_PLL2_198M] = imx_clk_fixed_factor("pll2_198m", "pll2_pfd2", 1, 2); 122 + clks[IMX6SL_CLK_PLL3_120M] = imx_clk_fixed_factor("pll3_120m", "pll3_usb_otg", 1, 4); 123 + clks[IMX6SL_CLK_PLL3_80M] = imx_clk_fixed_factor("pll3_80m", "pll3_usb_otg", 1, 6); 124 + clks[IMX6SL_CLK_PLL3_60M] = imx_clk_fixed_factor("pll3_60m", "pll3_usb_otg", 1, 8); 125 + 126 + np = ccm_node; 127 + base = of_iomap(np, 0); 128 + WARN_ON(!base); 129 + 130 + /* name reg shift width parent_names num_parents */ 131 + clks[IMX6SL_CLK_STEP] = imx_clk_mux("step", base + 0xc, 8, 1, step_sels, ARRAY_SIZE(step_sels)); 132 + clks[IMX6SL_CLK_PLL1_SW] = imx_clk_mux("pll1_sw", base + 0xc, 2, 1, pll1_sw_sels, ARRAY_SIZE(pll1_sw_sels)); 133 + clks[IMX6SL_CLK_OCRAM_ALT_SEL] = imx_clk_mux("ocram_alt_sel", base + 0x14, 7, 1, ocram_alt_sels, ARRAY_SIZE(ocram_alt_sels)); 134 + clks[IMX6SL_CLK_OCRAM_SEL] = imx_clk_mux("ocram_sel", base + 0x14, 6, 1, ocram_sels, ARRAY_SIZE(ocram_sels)); 135 + clks[IMX6SL_CLK_PRE_PERIPH2_SEL] = imx_clk_mux("pre_periph2_sel", base + 0x18, 21, 2, pre_periph_sels, ARRAY_SIZE(pre_periph_sels)); 136 + clks[IMX6SL_CLK_PRE_PERIPH_SEL] = imx_clk_mux("pre_periph_sel", base + 0x18, 18, 2, pre_periph_sels, ARRAY_SIZE(pre_periph_sels)); 137 + clks[IMX6SL_CLK_PERIPH2_CLK2_SEL] = 
imx_clk_mux("periph2_clk2_sel", base + 0x18, 20, 1, periph2_clk2_sels, ARRAY_SIZE(periph2_clk2_sels)); 138 + clks[IMX6SL_CLK_PERIPH_CLK2_SEL] = imx_clk_mux("periph_clk2_sel", base + 0x18, 12, 2, periph_clk2_sels, ARRAY_SIZE(periph_clk2_sels)); 139 + clks[IMX6SL_CLK_CSI_SEL] = imx_clk_mux("csi_sel", base + 0x3c, 9, 2, csi_lcdif_sels, ARRAY_SIZE(csi_lcdif_sels)); 140 + clks[IMX6SL_CLK_LCDIF_AXI_SEL] = imx_clk_mux("lcdif_axi_sel", base + 0x3c, 14, 2, csi_lcdif_sels, ARRAY_SIZE(csi_lcdif_sels)); 141 + clks[IMX6SL_CLK_USDHC1_SEL] = imx_clk_mux("usdhc1_sel", base + 0x1c, 16, 1, usdhc_sels, ARRAY_SIZE(usdhc_sels)); 142 + clks[IMX6SL_CLK_USDHC2_SEL] = imx_clk_mux("usdhc2_sel", base + 0x1c, 17, 1, usdhc_sels, ARRAY_SIZE(usdhc_sels)); 143 + clks[IMX6SL_CLK_USDHC3_SEL] = imx_clk_mux("usdhc3_sel", base + 0x1c, 18, 1, usdhc_sels, ARRAY_SIZE(usdhc_sels)); 144 + clks[IMX6SL_CLK_USDHC4_SEL] = imx_clk_mux("usdhc4_sel", base + 0x1c, 19, 1, usdhc_sels, ARRAY_SIZE(usdhc_sels)); 145 + clks[IMX6SL_CLK_SSI1_SEL] = imx_clk_mux("ssi1_sel", base + 0x1c, 10, 2, ssi_sels, ARRAY_SIZE(ssi_sels)); 146 + clks[IMX6SL_CLK_SSI2_SEL] = imx_clk_mux("ssi2_sel", base + 0x1c, 12, 2, ssi_sels, ARRAY_SIZE(ssi_sels)); 147 + clks[IMX6SL_CLK_SSI3_SEL] = imx_clk_mux("ssi3_sel", base + 0x1c, 14, 2, ssi_sels, ARRAY_SIZE(ssi_sels)); 148 + clks[IMX6SL_CLK_PERCLK_SEL] = imx_clk_mux("perclk_sel", base + 0x1c, 6, 1, perclk_sels, ARRAY_SIZE(perclk_sels)); 149 + clks[IMX6SL_CLK_PXP_AXI_SEL] = imx_clk_mux("pxp_axi_sel", base + 0x34, 6, 3, epdc_pxp_sels, ARRAY_SIZE(epdc_pxp_sels)); 150 + clks[IMX6SL_CLK_EPDC_AXI_SEL] = imx_clk_mux("epdc_axi_sel", base + 0x34, 15, 3, epdc_pxp_sels, ARRAY_SIZE(epdc_pxp_sels)); 151 + clks[IMX6SL_CLK_GPU2D_OVG_SEL] = imx_clk_mux("gpu2d_ovg_sel", base + 0x18, 4, 2, gpu2d_ovg_sels, ARRAY_SIZE(gpu2d_ovg_sels)); 152 + clks[IMX6SL_CLK_GPU2D_SEL] = imx_clk_mux("gpu2d_sel", base + 0x18, 8, 2, gpu2d_sels, ARRAY_SIZE(gpu2d_sels)); 153 + clks[IMX6SL_CLK_LCDIF_PIX_SEL] = imx_clk_mux("lcdif_pix_sel", 
base + 0x38, 6, 3, lcdif_pix_sels, ARRAY_SIZE(lcdif_pix_sels)); 154 + clks[IMX6SL_CLK_EPDC_PIX_SEL] = imx_clk_mux("epdc_pix_sel", base + 0x38, 15, 3, epdc_pix_sels, ARRAY_SIZE(epdc_pix_sels)); 155 + clks[IMX6SL_CLK_SPDIF0_SEL] = imx_clk_mux("spdif0_sel", base + 0x30, 20, 2, audio_sels, ARRAY_SIZE(audio_sels)); 156 + clks[IMX6SL_CLK_SPDIF1_SEL] = imx_clk_mux("spdif1_sel", base + 0x30, 7, 2, audio_sels, ARRAY_SIZE(audio_sels)); 157 + clks[IMX6SL_CLK_EXTERN_AUDIO_SEL] = imx_clk_mux("extern_audio_sel", base + 0x20, 19, 2, audio_sels, ARRAY_SIZE(audio_sels)); 158 + clks[IMX6SL_CLK_ECSPI_SEL] = imx_clk_mux("ecspi_sel", base + 0x38, 18, 1, ecspi_sels, ARRAY_SIZE(ecspi_sels)); 159 + clks[IMX6SL_CLK_UART_SEL] = imx_clk_mux("uart_sel", base + 0x24, 6, 1, uart_sels, ARRAY_SIZE(uart_sels)); 160 + 161 + /* name reg shift width busy: reg, shift parent_names num_parents */ 162 + clks[IMX6SL_CLK_PERIPH] = imx_clk_busy_mux("periph", base + 0x14, 25, 1, base + 0x48, 5, periph_sels, ARRAY_SIZE(periph_sels)); 163 + clks[IMX6SL_CLK_PERIPH2] = imx_clk_busy_mux("periph2", base + 0x14, 26, 1, base + 0x48, 3, periph2_sels, ARRAY_SIZE(periph2_sels)); 164 + 165 + /* name parent_name reg shift width */ 166 + clks[IMX6SL_CLK_OCRAM_PODF] = imx_clk_divider("ocram_podf", "ocram_sel", base + 0x14, 16, 3); 167 + clks[IMX6SL_CLK_PERIPH_CLK2_PODF] = imx_clk_divider("periph_clk2_podf", "periph_clk2_sel", base + 0x14, 27, 3); 168 + clks[IMX6SL_CLK_PERIPH2_CLK2_PODF] = imx_clk_divider("periph2_clk2_podf", "periph2_clk2_sel", base + 0x14, 0, 3); 169 + clks[IMX6SL_CLK_IPG] = imx_clk_divider("ipg", "ahb", base + 0x14, 8, 2); 170 + clks[IMX6SL_CLK_CSI_PODF] = imx_clk_divider("csi_podf", "csi_sel", base + 0x3c, 11, 3); 171 + clks[IMX6SL_CLK_LCDIF_AXI_PODF] = imx_clk_divider("lcdif_axi_podf", "lcdif_axi_sel", base + 0x3c, 16, 3); 172 + clks[IMX6SL_CLK_USDHC1_PODF] = imx_clk_divider("usdhc1_podf", "usdhc1_sel", base + 0x24, 11, 3); 173 + clks[IMX6SL_CLK_USDHC2_PODF] = imx_clk_divider("usdhc2_podf", 
"usdhc2_sel", base + 0x24, 16, 3); 174 + clks[IMX6SL_CLK_USDHC3_PODF] = imx_clk_divider("usdhc3_podf", "usdhc3_sel", base + 0x24, 19, 3); 175 + clks[IMX6SL_CLK_USDHC4_PODF] = imx_clk_divider("usdhc4_podf", "usdhc4_sel", base + 0x24, 22, 3); 176 + clks[IMX6SL_CLK_SSI1_PRED] = imx_clk_divider("ssi1_pred", "ssi1_sel", base + 0x28, 6, 3); 177 + clks[IMX6SL_CLK_SSI1_PODF] = imx_clk_divider("ssi1_podf", "ssi1_pred", base + 0x28, 0, 6); 178 + clks[IMX6SL_CLK_SSI2_PRED] = imx_clk_divider("ssi2_pred", "ssi2_sel", base + 0x2c, 6, 3); 179 + clks[IMX6SL_CLK_SSI2_PODF] = imx_clk_divider("ssi2_podf", "ssi2_pred", base + 0x2c, 0, 6); 180 + clks[IMX6SL_CLK_SSI3_PRED] = imx_clk_divider("ssi3_pred", "ssi3_sel", base + 0x28, 22, 3); 181 + clks[IMX6SL_CLK_SSI3_PODF] = imx_clk_divider("ssi3_podf", "ssi3_pred", base + 0x28, 16, 6); 182 + clks[IMX6SL_CLK_PERCLK] = imx_clk_divider("perclk", "perclk_sel", base + 0x1c, 0, 6); 183 + clks[IMX6SL_CLK_PXP_AXI_PODF] = imx_clk_divider("pxp_axi_podf", "pxp_axi_sel", base + 0x34, 3, 3); 184 + clks[IMX6SL_CLK_EPDC_AXI_PODF] = imx_clk_divider("epdc_axi_podf", "epdc_axi_sel", base + 0x34, 12, 3); 185 + clks[IMX6SL_CLK_GPU2D_OVG_PODF] = imx_clk_divider("gpu2d_ovg_podf", "gpu2d_ovg_sel", base + 0x18, 26, 3); 186 + clks[IMX6SL_CLK_GPU2D_PODF] = imx_clk_divider("gpu2d_podf", "gpu2d_sel", base + 0x18, 29, 3); 187 + clks[IMX6SL_CLK_LCDIF_PIX_PRED] = imx_clk_divider("lcdif_pix_pred", "lcdif_pix_sel", base + 0x38, 3, 3); 188 + clks[IMX6SL_CLK_EPDC_PIX_PRED] = imx_clk_divider("epdc_pix_pred", "epdc_pix_sel", base + 0x38, 12, 3); 189 + clks[IMX6SL_CLK_LCDIF_PIX_PODF] = imx_clk_divider("lcdif_pix_podf", "lcdif_pix_pred", base + 0x1c, 20, 3); 190 + clks[IMX6SL_CLK_EPDC_PIX_PODF] = imx_clk_divider("epdc_pix_podf", "epdc_pix_pred", base + 0x18, 23, 3); 191 + clks[IMX6SL_CLK_SPDIF0_PRED] = imx_clk_divider("spdif0_pred", "spdif0_sel", base + 0x30, 25, 3); 192 + clks[IMX6SL_CLK_SPDIF0_PODF] = imx_clk_divider("spdif0_podf", "spdif0_pred", base + 0x30, 22, 3); 193 + 
clks[IMX6SL_CLK_SPDIF1_PRED] = imx_clk_divider("spdif1_pred", "spdif1_sel", base + 0x30, 12, 3); 194 + clks[IMX6SL_CLK_SPDIF1_PODF] = imx_clk_divider("spdif1_podf", "spdif1_pred", base + 0x30, 9, 3); 195 + clks[IMX6SL_CLK_EXTERN_AUDIO_PRED] = imx_clk_divider("extern_audio_pred", "extern_audio_sel", base + 0x28, 9, 3); 196 + clks[IMX6SL_CLK_EXTERN_AUDIO_PODF] = imx_clk_divider("extern_audio_podf", "extern_audio_pred", base + 0x28, 25, 3); 197 + clks[IMX6SL_CLK_ECSPI_ROOT] = imx_clk_divider("ecspi_root", "ecspi_sel", base + 0x38, 19, 6); 198 + clks[IMX6SL_CLK_UART_ROOT] = imx_clk_divider("uart_root", "uart_sel", base + 0x24, 0, 6); 199 + 200 + /* name parent_name reg shift width busy: reg, shift */ 201 + clks[IMX6SL_CLK_AHB] = imx_clk_busy_divider("ahb", "periph", base + 0x14, 10, 3, base + 0x48, 1); 202 + clks[IMX6SL_CLK_MMDC_ROOT] = imx_clk_busy_divider("mmdc", "periph2", base + 0x14, 3, 3, base + 0x48, 2); 203 + clks[IMX6SL_CLK_ARM] = imx_clk_busy_divider("arm", "pll1_sw", base + 0x10, 0, 3, base + 0x48, 16); 204 + 205 + /* name parent_name reg shift */ 206 + clks[IMX6SL_CLK_ECSPI1] = imx_clk_gate2("ecspi1", "ecspi_root", base + 0x6c, 0); 207 + clks[IMX6SL_CLK_ECSPI2] = imx_clk_gate2("ecspi2", "ecspi_root", base + 0x6c, 2); 208 + clks[IMX6SL_CLK_ECSPI3] = imx_clk_gate2("ecspi3", "ecspi_root", base + 0x6c, 4); 209 + clks[IMX6SL_CLK_ECSPI4] = imx_clk_gate2("ecspi4", "ecspi_root", base + 0x6c, 6); 210 + clks[IMX6SL_CLK_EPIT1] = imx_clk_gate2("epit1", "perclk", base + 0x6c, 12); 211 + clks[IMX6SL_CLK_EPIT2] = imx_clk_gate2("epit2", "perclk", base + 0x6c, 14); 212 + clks[IMX6SL_CLK_EXTERN_AUDIO] = imx_clk_gate2("extern_audio", "extern_audio_podf", base + 0x6c, 16); 213 + clks[IMX6SL_CLK_GPT] = imx_clk_gate2("gpt", "perclk", base + 0x6c, 20); 214 + clks[IMX6SL_CLK_GPT_SERIAL] = imx_clk_gate2("gpt_serial", "perclk", base + 0x6c, 22); 215 + clks[IMX6SL_CLK_GPU2D_OVG] = imx_clk_gate2("gpu2d_ovg", "gpu2d_ovg_podf", base + 0x6c, 26); 216 + clks[IMX6SL_CLK_I2C1] = 
imx_clk_gate2("i2c1", "perclk", base + 0x70, 6); 217 + clks[IMX6SL_CLK_I2C2] = imx_clk_gate2("i2c2", "perclk", base + 0x70, 8); 218 + clks[IMX6SL_CLK_I2C3] = imx_clk_gate2("i2c3", "perclk", base + 0x70, 10); 219 + clks[IMX6SL_CLK_OCOTP] = imx_clk_gate2("ocotp", "ipg", base + 0x70, 12); 220 + clks[IMX6SL_CLK_CSI] = imx_clk_gate2("csi", "csi_podf", base + 0x74, 0); 221 + clks[IMX6SL_CLK_PXP_AXI] = imx_clk_gate2("pxp_axi", "pxp_axi_podf", base + 0x74, 2); 222 + clks[IMX6SL_CLK_EPDC_AXI] = imx_clk_gate2("epdc_axi", "epdc_axi_podf", base + 0x74, 4); 223 + clks[IMX6SL_CLK_LCDIF_AXI] = imx_clk_gate2("lcdif_axi", "lcdif_axi_podf", base + 0x74, 6); 224 + clks[IMX6SL_CLK_LCDIF_PIX] = imx_clk_gate2("lcdif_pix", "lcdif_pix_podf", base + 0x74, 8); 225 + clks[IMX6SL_CLK_EPDC_PIX] = imx_clk_gate2("epdc_pix", "epdc_pix_podf", base + 0x74, 10); 226 + clks[IMX6SL_CLK_OCRAM] = imx_clk_gate2("ocram", "ocram_podf", base + 0x74, 28); 227 + clks[IMX6SL_CLK_PWM1] = imx_clk_gate2("pwm1", "perclk", base + 0x78, 16); 228 + clks[IMX6SL_CLK_PWM2] = imx_clk_gate2("pwm2", "perclk", base + 0x78, 18); 229 + clks[IMX6SL_CLK_PWM3] = imx_clk_gate2("pwm3", "perclk", base + 0x78, 20); 230 + clks[IMX6SL_CLK_PWM4] = imx_clk_gate2("pwm4", "perclk", base + 0x78, 22); 231 + clks[IMX6SL_CLK_SDMA] = imx_clk_gate2("sdma", "ipg", base + 0x7c, 6); 232 + clks[IMX6SL_CLK_SPDIF] = imx_clk_gate2("spdif", "spdif0_podf", base + 0x7c, 14); 233 + clks[IMX6SL_CLK_SSI1] = imx_clk_gate2("ssi1", "ssi1_podf", base + 0x7c, 18); 234 + clks[IMX6SL_CLK_SSI2] = imx_clk_gate2("ssi2", "ssi2_podf", base + 0x7c, 20); 235 + clks[IMX6SL_CLK_SSI3] = imx_clk_gate2("ssi3", "ssi3_podf", base + 0x7c, 22); 236 + clks[IMX6SL_CLK_UART] = imx_clk_gate2("uart", "ipg", base + 0x7c, 24); 237 + clks[IMX6SL_CLK_UART_SERIAL] = imx_clk_gate2("uart_serial", "uart_root", base + 0x7c, 26); 238 + clks[IMX6SL_CLK_USBOH3] = imx_clk_gate2("usboh3", "ipg", base + 0x80, 0); 239 + clks[IMX6SL_CLK_USDHC1] = imx_clk_gate2("usdhc1", "usdhc1_podf", base + 0x80, 2); 
240 + clks[IMX6SL_CLK_USDHC2] = imx_clk_gate2("usdhc2", "usdhc2_podf", base + 0x80, 4); 241 + clks[IMX6SL_CLK_USDHC3] = imx_clk_gate2("usdhc3", "usdhc3_podf", base + 0x80, 6); 242 + clks[IMX6SL_CLK_USDHC4] = imx_clk_gate2("usdhc4", "usdhc4_podf", base + 0x80, 8); 243 + 244 + for (i = 0; i < ARRAY_SIZE(clks); i++) 245 + if (IS_ERR(clks[i])) 246 + pr_err("i.MX6SL clk %d: register failed with %ld\n", 247 + i, PTR_ERR(clks[i])); 248 + 249 + clk_data.clks = clks; 250 + clk_data.clk_num = ARRAY_SIZE(clks); 251 + of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data); 252 + 253 + clk_register_clkdev(clks[IMX6SL_CLK_GPT], "ipg", "imx-gpt.0"); 254 + clk_register_clkdev(clks[IMX6SL_CLK_GPT_SERIAL], "per", "imx-gpt.0"); 255 + 256 + if (IS_ENABLED(CONFIG_USB_MXS_PHY)) { 257 + clk_prepare_enable(clks[IMX6SL_CLK_USBPHY1_GATE]); 258 + clk_prepare_enable(clks[IMX6SL_CLK_USBPHY2_GATE]); 259 + } 260 + 261 + np = of_find_compatible_node(NULL, NULL, "fsl,imx6sl-gpt"); 262 + base = of_iomap(np, 0); 263 + WARN_ON(!base); 264 + irq = irq_of_parse_and_map(np, 0); 265 + mxc_timer_init(base, irq); 266 + } 267 + CLK_OF_DECLARE(imx6sl, "fsl,imx6sl-ccm", imx6sl_clocks_init);
-10
arch/arm/mach-imx/clk-pllv3.c
··· 296 296 .recalc_rate = clk_pllv3_enet_recalc_rate, 297 297 }; 298 298 299 - static const struct clk_ops clk_pllv3_mlb_ops = { 300 - .prepare = clk_pllv3_prepare, 301 - .unprepare = clk_pllv3_unprepare, 302 - .enable = clk_pllv3_enable, 303 - .disable = clk_pllv3_disable, 304 - }; 305 - 306 299 struct clk *imx_clk_pllv3(enum imx_pllv3_type type, const char *name, 307 300 const char *parent_name, void __iomem *base, 308 301 u32 div_mask) ··· 322 329 break; 323 330 case IMX_PLLV3_ENET: 324 331 ops = &clk_pllv3_enet_ops; 325 - break; 326 - case IMX_PLLV3_MLB: 327 - ops = &clk_pllv3_mlb_ops; 328 332 break; 329 333 default: 330 334 ops = &clk_pllv3_ops;
+319
arch/arm/mach-imx/clk-vf610.c
··· 1 + /* 2 + * Copyright 2012-2013 Freescale Semiconductor, Inc. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + * 9 + */ 10 + 11 + #include <linux/of_address.h> 12 + #include <linux/clk.h> 13 + #include <dt-bindings/clock/vf610-clock.h> 14 + 15 + #include "clk.h" 16 + 17 + #define CCM_CCR (ccm_base + 0x00) 18 + #define CCM_CSR (ccm_base + 0x04) 19 + #define CCM_CCSR (ccm_base + 0x08) 20 + #define CCM_CACRR (ccm_base + 0x0c) 21 + #define CCM_CSCMR1 (ccm_base + 0x10) 22 + #define CCM_CSCDR1 (ccm_base + 0x14) 23 + #define CCM_CSCDR2 (ccm_base + 0x18) 24 + #define CCM_CSCDR3 (ccm_base + 0x1c) 25 + #define CCM_CSCMR2 (ccm_base + 0x20) 26 + #define CCM_CSCDR4 (ccm_base + 0x24) 27 + #define CCM_CLPCR (ccm_base + 0x2c) 28 + #define CCM_CISR (ccm_base + 0x30) 29 + #define CCM_CIMR (ccm_base + 0x34) 30 + #define CCM_CGPR (ccm_base + 0x3c) 31 + #define CCM_CCGR0 (ccm_base + 0x40) 32 + #define CCM_CCGR1 (ccm_base + 0x44) 33 + #define CCM_CCGR2 (ccm_base + 0x48) 34 + #define CCM_CCGR3 (ccm_base + 0x4c) 35 + #define CCM_CCGR4 (ccm_base + 0x50) 36 + #define CCM_CCGR5 (ccm_base + 0x54) 37 + #define CCM_CCGR6 (ccm_base + 0x58) 38 + #define CCM_CCGR7 (ccm_base + 0x5c) 39 + #define CCM_CCGR8 (ccm_base + 0x60) 40 + #define CCM_CCGR9 (ccm_base + 0x64) 41 + #define CCM_CCGR10 (ccm_base + 0x68) 42 + #define CCM_CCGR11 (ccm_base + 0x6c) 43 + #define CCM_CMEOR0 (ccm_base + 0x70) 44 + #define CCM_CMEOR1 (ccm_base + 0x74) 45 + #define CCM_CMEOR2 (ccm_base + 0x78) 46 + #define CCM_CMEOR3 (ccm_base + 0x7c) 47 + #define CCM_CMEOR4 (ccm_base + 0x80) 48 + #define CCM_CMEOR5 (ccm_base + 0x84) 49 + #define CCM_CPPDSR (ccm_base + 0x88) 50 + #define CCM_CCOWR (ccm_base + 0x8c) 51 + #define CCM_CCPGR0 (ccm_base + 0x90) 52 + #define CCM_CCPGR1 (ccm_base + 0x94) 53 + #define 
CCM_CCPGR2 (ccm_base + 0x98) 54 + #define CCM_CCPGR3 (ccm_base + 0x9c) 55 + 56 + #define CCM_CCGRx_CGn(n) ((n) * 2) 57 + 58 + #define PFD_PLL1_BASE (anatop_base + 0x2b0) 59 + #define PFD_PLL2_BASE (anatop_base + 0x100) 60 + #define PFD_PLL3_BASE (anatop_base + 0xf0) 61 + 62 + static void __iomem *anatop_base; 63 + static void __iomem *ccm_base; 64 + 65 + /* sources for multiplexer clocks; this is used multiple times */ 66 + static const char *const fast_sels[] = { "firc", "fxosc", }; 67 + static const char *const slow_sels[] = { "sirc_32k", "sxosc", }; 68 + static const char *const pll1_sels[] = { "pll1_main", "pll1_pfd1", "pll1_pfd2", "pll1_pfd3", "pll1_pfd4", }; 69 + static const char *const pll2_sels[] = { "pll2_main", "pll2_pfd1", "pll2_pfd2", "pll2_pfd3", "pll2_pfd4", }; 70 + static const char *const sys_sels[] = { "fast_clk_sel", "slow_clk_sel", "pll2_pfd_sel", "pll2_main", "pll1_pfd_sel", "pll3_main", }; 71 + static const char *const ddr_sels[] = { "pll2_pfd2", "sys_sel", }; 72 + static const char *const rmii_sels[] = { "enet_ext", "audio_ext", "enet_50m", "enet_25m", }; 73 + static const char *const enet_ts_sels[] = { "enet_ext", "fxosc", "audio_ext", "usb", "enet_ts", "enet_25m", "enet_50m", }; 74 + static const char *const esai_sels[] = { "audio_ext", "mlb", "spdif_rx", "pll4_main_div", }; 75 + static const char *const sai_sels[] = { "audio_ext", "mlb", "spdif_rx", "pll4_main_div", }; 76 + static const char *const nfc_sels[] = { "platform_bus", "pll1_pfd1", "pll3_pfd1", "pll3_pfd3", }; 77 + static const char *const qspi_sels[] = { "pll3_main", "pll3_pfd4", "pll2_pfd4", "pll1_pfd4", }; 78 + static const char *const esdhc_sels[] = { "pll3_main", "pll3_pfd3", "pll1_pfd3", "platform_bus", }; 79 + static const char *const dcu_sels[] = { "pll1_pfd2", "pll3_main", }; 80 + static const char *const gpu_sels[] = { "pll2_pfd2", "pll3_pfd2", }; 81 + static const char *const vadc_sels[] = { "pll6_main_div", "pll3_main_div", "pll3_main", }; 82 + /* FTM counter clock
source, not module clock */ 83 + static const char *const ftm_ext_sels[] = { "sirc_128k", "sxosc", "fxosc_half", "audio_ext", }; 84 + static const char *const ftm_fix_sels[] = { "sxosc", "ipg_bus", }; 85 + 86 + static struct clk_div_table pll4_main_div_table[] = { 87 + { .val = 0, .div = 1 }, 88 + { .val = 1, .div = 2 }, 89 + { .val = 2, .div = 6 }, 90 + { .val = 3, .div = 8 }, 91 + { .val = 4, .div = 10 }, 92 + { .val = 5, .div = 12 }, 93 + { .val = 6, .div = 14 }, 94 + { .val = 7, .div = 16 }, 95 + { } 96 + }; 97 + 98 + static struct clk *clk[VF610_CLK_END]; 99 + static struct clk_onecell_data clk_data; 100 + 101 + static void __init vf610_clocks_init(struct device_node *ccm_node) 102 + { 103 + struct device_node *np; 104 + 105 + clk[VF610_CLK_DUMMY] = imx_clk_fixed("dummy", 0); 106 + clk[VF610_CLK_SIRC_128K] = imx_clk_fixed("sirc_128k", 128000); 107 + clk[VF610_CLK_SIRC_32K] = imx_clk_fixed("sirc_32k", 32000); 108 + clk[VF610_CLK_FIRC] = imx_clk_fixed("firc", 24000000); 109 + 110 + clk[VF610_CLK_SXOSC] = imx_obtain_fixed_clock("sxosc", 0); 111 + clk[VF610_CLK_FXOSC] = imx_obtain_fixed_clock("fxosc", 0); 112 + clk[VF610_CLK_AUDIO_EXT] = imx_obtain_fixed_clock("audio_ext", 0); 113 + clk[VF610_CLK_ENET_EXT] = imx_obtain_fixed_clock("enet_ext", 0); 114 + 115 + clk[VF610_CLK_FXOSC_HALF] = imx_clk_fixed_factor("fxosc_half", "fxosc", 1, 2); 116 + 117 + np = of_find_compatible_node(NULL, NULL, "fsl,vf610-anatop"); 118 + anatop_base = of_iomap(np, 0); 119 + BUG_ON(!anatop_base); 120 + 121 + np = ccm_node; 122 + ccm_base = of_iomap(np, 0); 123 + BUG_ON(!ccm_base); 124 + 125 + clk[VF610_CLK_SLOW_CLK_SEL] = imx_clk_mux("slow_clk_sel", CCM_CCSR, 4, 1, slow_sels, ARRAY_SIZE(slow_sels)); 126 + clk[VF610_CLK_FASK_CLK_SEL] = imx_clk_mux("fast_clk_sel", CCM_CCSR, 5, 1, fast_sels, ARRAY_SIZE(fast_sels)); 127 + 128 + clk[VF610_CLK_PLL1_MAIN] = imx_clk_fixed_factor("pll1_main", "fast_clk_sel", 22, 1); 129 + clk[VF610_CLK_PLL1_PFD1] = imx_clk_pfd("pll1_pfd1", "pll1_main",
PFD_PLL1_BASE, 0); 130 + clk[VF610_CLK_PLL1_PFD2] = imx_clk_pfd("pll1_pfd2", "pll1_main", PFD_PLL1_BASE, 1); 131 + clk[VF610_CLK_PLL1_PFD3] = imx_clk_pfd("pll1_pfd3", "pll1_main", PFD_PLL1_BASE, 2); 132 + clk[VF610_CLK_PLL1_PFD4] = imx_clk_pfd("pll1_pfd4", "pll1_main", PFD_PLL1_BASE, 3); 133 + 134 + clk[VF610_CLK_PLL2_MAIN] = imx_clk_fixed_factor("pll2_main", "fast_clk_sel", 22, 1); 135 + clk[VF610_CLK_PLL2_PFD1] = imx_clk_pfd("pll2_pfd1", "pll2_main", PFD_PLL2_BASE, 0); 136 + clk[VF610_CLK_PLL2_PFD2] = imx_clk_pfd("pll2_pfd2", "pll2_main", PFD_PLL2_BASE, 1); 137 + clk[VF610_CLK_PLL2_PFD3] = imx_clk_pfd("pll2_pfd3", "pll2_main", PFD_PLL2_BASE, 2); 138 + clk[VF610_CLK_PLL2_PFD4] = imx_clk_pfd("pll2_pfd4", "pll2_main", PFD_PLL2_BASE, 3); 139 + 140 + clk[VF610_CLK_PLL3_MAIN] = imx_clk_fixed_factor("pll3_main", "fast_clk_sel", 20, 1); 141 + clk[VF610_CLK_PLL3_PFD1] = imx_clk_pfd("pll3_pfd1", "pll3_main", PFD_PLL3_BASE, 0); 142 + clk[VF610_CLK_PLL3_PFD2] = imx_clk_pfd("pll3_pfd2", "pll3_main", PFD_PLL3_BASE, 1); 143 + clk[VF610_CLK_PLL3_PFD3] = imx_clk_pfd("pll3_pfd3", "pll3_main", PFD_PLL3_BASE, 2); 144 + clk[VF610_CLK_PLL3_PFD4] = imx_clk_pfd("pll3_pfd4", "pll3_main", PFD_PLL3_BASE, 3); 145 + 146 + clk[VF610_CLK_PLL4_MAIN] = imx_clk_fixed_factor("pll4_main", "fast_clk_sel", 25, 1); 147 + /* Enet pll: fixed 50Mhz */ 148 + clk[VF610_CLK_PLL5_MAIN] = imx_clk_fixed_factor("pll5_main", "fast_clk_sel", 125, 6); 149 + /* pll6: default 960Mhz */ 150 + clk[VF610_CLK_PLL6_MAIN] = imx_clk_fixed_factor("pll6_main", "fast_clk_sel", 40, 1); 151 + clk[VF610_CLK_PLL1_PFD_SEL] = imx_clk_mux("pll1_pfd_sel", CCM_CCSR, 16, 3, pll1_sels, 5); 152 + clk[VF610_CLK_PLL2_PFD_SEL] = imx_clk_mux("pll2_pfd_sel", CCM_CCSR, 19, 3, pll2_sels, 5); 153 + clk[VF610_CLK_SYS_SEL] = imx_clk_mux("sys_sel", CCM_CCSR, 0, 3, sys_sels, ARRAY_SIZE(sys_sels)); 154 + clk[VF610_CLK_DDR_SEL] = imx_clk_mux("ddr_sel", CCM_CCSR, 6, 1, ddr_sels, ARRAY_SIZE(ddr_sels)); 155 + clk[VF610_CLK_SYS_BUS] = 
imx_clk_divider("sys_bus", "sys_sel", CCM_CACRR, 0, 3); 156 + clk[VF610_CLK_PLATFORM_BUS] = imx_clk_divider("platform_bus", "sys_bus", CCM_CACRR, 3, 3); 157 + clk[VF610_CLK_IPG_BUS] = imx_clk_divider("ipg_bus", "platform_bus", CCM_CACRR, 11, 2); 158 + 159 + clk[VF610_CLK_PLL3_MAIN_DIV] = imx_clk_divider("pll3_main_div", "pll3_main", CCM_CACRR, 20, 1); 160 + clk[VF610_CLK_PLL4_MAIN_DIV] = clk_register_divider_table(NULL, "pll4_main_div", "pll4_main", 0, CCM_CACRR, 6, 3, 0, pll4_main_div_table, &imx_ccm_lock); 161 + clk[VF610_CLK_PLL6_MAIN_DIV] = imx_clk_divider("pll6_main_div", "pll6_main", CCM_CACRR, 21, 1); 162 + 163 + clk[VF610_CLK_USBC0] = imx_clk_gate2("usbc0", "pll3_main", CCM_CCGR1, CCM_CCGRx_CGn(4)); 164 + clk[VF610_CLK_USBC1] = imx_clk_gate2("usbc1", "pll3_main", CCM_CCGR7, CCM_CCGRx_CGn(4)); 165 + 166 + clk[VF610_CLK_QSPI0_SEL] = imx_clk_mux("qspi0_sel", CCM_CSCMR1, 22, 2, qspi_sels, 4); 167 + clk[VF610_CLK_QSPI0_EN] = imx_clk_gate("qspi0_en", "qspi0_sel", CCM_CSCDR3, 4); 168 + clk[VF610_CLK_QSPI0_X4_DIV] = imx_clk_divider("qspi0_x4", "qspi0_en", CCM_CSCDR3, 0, 2); 169 + clk[VF610_CLK_QSPI0_X2_DIV] = imx_clk_divider("qspi0_x2", "qspi0_x4", CCM_CSCDR3, 2, 1); 170 + clk[VF610_CLK_QSPI0_X1_DIV] = imx_clk_divider("qspi0_x1", "qspi0_x2", CCM_CSCDR3, 3, 1); 171 + clk[VF610_CLK_QSPI0] = imx_clk_gate2("qspi0", "qspi0_x1", CCM_CCGR2, CCM_CCGRx_CGn(4)); 172 + 173 + clk[VF610_CLK_QSPI1_SEL] = imx_clk_mux("qspi1_sel", CCM_CSCMR1, 24, 2, qspi_sels, 4); 174 + clk[VF610_CLK_QSPI1_EN] = imx_clk_gate("qspi1_en", "qspi1_sel", CCM_CSCDR3, 12); 175 + clk[VF610_CLK_QSPI1_X4_DIV] = imx_clk_divider("qspi1_x4", "qspi1_en", CCM_CSCDR3, 8, 2); 176 + clk[VF610_CLK_QSPI1_X2_DIV] = imx_clk_divider("qspi1_x2", "qspi1_x4", CCM_CSCDR3, 10, 1); 177 + clk[VF610_CLK_QSPI1_X1_DIV] = imx_clk_divider("qspi1_x1", "qspi1_x2", CCM_CSCDR3, 11, 1); 178 + clk[VF610_CLK_QSPI1] = imx_clk_gate2("qspi1", "qspi1_x1", CCM_CCGR8, CCM_CCGRx_CGn(4)); 179 + 180 + clk[VF610_CLK_ENET_50M] = 
imx_clk_fixed_factor("enet_50m", "pll5_main", 1, 10); 181 + clk[VF610_CLK_ENET_25M] = imx_clk_fixed_factor("enet_25m", "pll5_main", 1, 20); 182 + clk[VF610_CLK_ENET_SEL] = imx_clk_mux("enet_sel", CCM_CSCMR2, 4, 2, rmii_sels, 4); 183 + clk[VF610_CLK_ENET_TS_SEL] = imx_clk_mux("enet_ts_sel", CCM_CSCMR2, 0, 3, enet_ts_sels, 7); 184 + clk[VF610_CLK_ENET] = imx_clk_gate("enet", "enet_sel", CCM_CSCDR1, 24); 185 + clk[VF610_CLK_ENET_TS] = imx_clk_gate("enet_ts", "enet_ts_sel", CCM_CSCDR1, 23); 186 + 187 + clk[VF610_CLK_PIT] = imx_clk_gate2("pit", "ipg_bus", CCM_CCGR1, CCM_CCGRx_CGn(7)); 188 + 189 + clk[VF610_CLK_UART0] = imx_clk_gate2("uart0", "ipg_bus", CCM_CCGR0, CCM_CCGRx_CGn(7)); 190 + clk[VF610_CLK_UART1] = imx_clk_gate2("uart1", "ipg_bus", CCM_CCGR0, CCM_CCGRx_CGn(8)); 191 + clk[VF610_CLK_UART2] = imx_clk_gate2("uart2", "ipg_bus", CCM_CCGR0, CCM_CCGRx_CGn(9)); 192 + clk[VF610_CLK_UART3] = imx_clk_gate2("uart3", "ipg_bus", CCM_CCGR0, CCM_CCGRx_CGn(10)); 193 + 194 + clk[VF610_CLK_I2C0] = imx_clk_gate2("i2c0", "ipg_bus", CCM_CCGR4, CCM_CCGRx_CGn(6)); 195 + clk[VF610_CLK_I2C1] = imx_clk_gate2("i2c1", "ipg_bus", CCM_CCGR4, CCM_CCGRx_CGn(7)); 196 + 197 + clk[VF610_CLK_DSPI0] = imx_clk_gate2("dspi0", "ipg_bus", CCM_CCGR0, CCM_CCGRx_CGn(12)); 198 + clk[VF610_CLK_DSPI1] = imx_clk_gate2("dspi1", "ipg_bus", CCM_CCGR0, CCM_CCGRx_CGn(13)); 199 + clk[VF610_CLK_DSPI2] = imx_clk_gate2("dspi2", "ipg_bus", CCM_CCGR6, CCM_CCGRx_CGn(12)); 200 + clk[VF610_CLK_DSPI3] = imx_clk_gate2("dspi3", "ipg_bus", CCM_CCGR6, CCM_CCGRx_CGn(13)); 201 + 202 + clk[VF610_CLK_WDT] = imx_clk_gate2("wdt", "ipg_bus", CCM_CCGR1, CCM_CCGRx_CGn(14)); 203 + 204 + clk[VF610_CLK_ESDHC0_SEL] = imx_clk_mux("esdhc0_sel", CCM_CSCMR1, 16, 2, esdhc_sels, 4); 205 + clk[VF610_CLK_ESDHC0_EN] = imx_clk_gate("esdhc0_en", "esdhc0_sel", CCM_CSCDR2, 28); 206 + clk[VF610_CLK_ESDHC0_DIV] = imx_clk_divider("esdhc0_div", "esdhc0_en", CCM_CSCDR2, 16, 4); 207 + clk[VF610_CLK_ESDHC0] = imx_clk_gate2("esdhc0", "esdhc0_div", CCM_CCGR7, 
CCM_CCGRx_CGn(1)); 208 + 209 + clk[VF610_CLK_ESDHC1_SEL] = imx_clk_mux("esdhc1_sel", CCM_CSCMR1, 18, 2, esdhc_sels, 4); 210 + clk[VF610_CLK_ESDHC1_EN] = imx_clk_gate("esdhc1_en", "esdhc1_sel", CCM_CSCDR2, 29); 211 + clk[VF610_CLK_ESDHC1_DIV] = imx_clk_divider("esdhc1_div", "esdhc1_en", CCM_CSCDR2, 20, 4); 212 + clk[VF610_CLK_ESDHC1] = imx_clk_gate2("esdhc1", "esdhc1_div", CCM_CCGR7, CCM_CCGRx_CGn(2)); 213 + 214 + /* 215 + * ftm_ext_clk and ftm_fix_clk are the FTM timer counter's 216 + * selectable clock sources; both share a common enable bit 217 + * in CCM_CSCDR1. Selecting the "dummy" clock as parent of 218 + * "ftm0_ext_fix" makes it serve only for enable/disable. 219 + */ 220 + clk[VF610_CLK_FTM0_EXT_SEL] = imx_clk_mux("ftm0_ext_sel", CCM_CSCMR2, 6, 2, ftm_ext_sels, 4); 221 + clk[VF610_CLK_FTM0_FIX_SEL] = imx_clk_mux("ftm0_fix_sel", CCM_CSCMR2, 14, 1, ftm_fix_sels, 2); 222 + clk[VF610_CLK_FTM0_EXT_FIX_EN] = imx_clk_gate("ftm0_ext_fix_en", "dummy", CCM_CSCDR1, 25); 223 + clk[VF610_CLK_FTM1_EXT_SEL] = imx_clk_mux("ftm1_ext_sel", CCM_CSCMR2, 8, 2, ftm_ext_sels, 4); 224 + clk[VF610_CLK_FTM1_FIX_SEL] = imx_clk_mux("ftm1_fix_sel", CCM_CSCMR2, 15, 1, ftm_fix_sels, 2); 225 + clk[VF610_CLK_FTM1_EXT_FIX_EN] = imx_clk_gate("ftm1_ext_fix_en", "dummy", CCM_CSCDR1, 26); 226 + clk[VF610_CLK_FTM2_EXT_SEL] = imx_clk_mux("ftm2_ext_sel", CCM_CSCMR2, 10, 2, ftm_ext_sels, 4); 227 + clk[VF610_CLK_FTM2_FIX_SEL] = imx_clk_mux("ftm2_fix_sel", CCM_CSCMR2, 16, 1, ftm_fix_sels, 2); 228 + clk[VF610_CLK_FTM2_EXT_FIX_EN] = imx_clk_gate("ftm2_ext_fix_en", "dummy", CCM_CSCDR1, 27); 229 + clk[VF610_CLK_FTM3_EXT_SEL] = imx_clk_mux("ftm3_ext_sel", CCM_CSCMR2, 12, 2, ftm_ext_sels, 4); 230 + clk[VF610_CLK_FTM3_FIX_SEL] = imx_clk_mux("ftm3_fix_sel", CCM_CSCMR2, 17, 1, ftm_fix_sels, 2); 231 + clk[VF610_CLK_FTM3_EXT_FIX_EN] = imx_clk_gate("ftm3_ext_fix_en", "dummy", CCM_CSCDR1, 28); 232 + 233 + /* ftm(n)_clk are the FTM module operation clocks */ 234 + clk[VF610_CLK_FTM0] = imx_clk_gate2("ftm0", "ipg_bus", CCM_CCGR1, 
CCM_CCGRx_CGn(8)); 235 + clk[VF610_CLK_FTM1] = imx_clk_gate2("ftm1", "ipg_bus", CCM_CCGR1, CCM_CCGRx_CGn(9)); 236 + clk[VF610_CLK_FTM2] = imx_clk_gate2("ftm2", "ipg_bus", CCM_CCGR7, CCM_CCGRx_CGn(8)); 237 + clk[VF610_CLK_FTM3] = imx_clk_gate2("ftm3", "ipg_bus", CCM_CCGR7, CCM_CCGRx_CGn(9)); 238 + 239 + clk[VF610_CLK_DCU0_SEL] = imx_clk_mux("dcu0_sel", CCM_CSCMR1, 28, 1, dcu_sels, 2); 240 + clk[VF610_CLK_DCU0_EN] = imx_clk_gate("dcu0_en", "dcu0_sel", CCM_CSCDR3, 19); 241 + clk[VF610_CLK_DCU0_DIV] = imx_clk_divider("dcu0_div", "dcu0_en", CCM_CSCDR3, 16, 3); 242 + clk[VF610_CLK_DCU0] = imx_clk_gate2("dcu0", "dcu0_div", CCM_CCGR3, CCM_CCGRx_CGn(8)); 243 + clk[VF610_CLK_DCU1_SEL] = imx_clk_mux("dcu1_sel", CCM_CSCMR1, 29, 1, dcu_sels, 2); 244 + clk[VF610_CLK_DCU1_EN] = imx_clk_gate("dcu1_en", "dcu1_sel", CCM_CSCDR3, 23); 245 + clk[VF610_CLK_DCU1_DIV] = imx_clk_divider("dcu1_div", "dcu1_en", CCM_CSCDR3, 20, 3); 246 + clk[VF610_CLK_DCU1] = imx_clk_gate2("dcu1", "dcu1_div", CCM_CCGR9, CCM_CCGRx_CGn(8)); 247 + 248 + clk[VF610_CLK_ESAI_SEL] = imx_clk_mux("esai_sel", CCM_CSCMR1, 20, 2, esai_sels, 4); 249 + clk[VF610_CLK_ESAI_EN] = imx_clk_gate("esai_en", "esai_sel", CCM_CSCDR2, 30); 250 + clk[VF610_CLK_ESAI_DIV] = imx_clk_divider("esai_div", "esai_en", CCM_CSCDR2, 24, 4); 251 + clk[VF610_CLK_ESAI] = imx_clk_gate2("esai", "esai_div", CCM_CCGR4, CCM_CCGRx_CGn(2)); 252 + 253 + clk[VF610_CLK_SAI0_SEL] = imx_clk_mux("sai0_sel", CCM_CSCMR1, 0, 2, sai_sels, 4); 254 + clk[VF610_CLK_SAI0_EN] = imx_clk_gate("sai0_en", "sai0_sel", CCM_CSCDR1, 16); 255 + clk[VF610_CLK_SAI0_DIV] = imx_clk_divider("sai0_div", "sai0_en", CCM_CSCDR1, 0, 4); 256 + clk[VF610_CLK_SAI0] = imx_clk_gate2("sai0", "sai0_div", CCM_CCGR0, CCM_CCGRx_CGn(15)); 257 + 258 + clk[VF610_CLK_SAI1_SEL] = imx_clk_mux("sai1_sel", CCM_CSCMR1, 2, 2, sai_sels, 4); 259 + clk[VF610_CLK_SAI1_EN] = imx_clk_gate("sai1_en", "sai1_sel", CCM_CSCDR1, 17); 260 + clk[VF610_CLK_SAI1_DIV] = imx_clk_divider("sai1_div", "sai1_en", CCM_CSCDR1, 4, 
4); 261 + clk[VF610_CLK_SAI1] = imx_clk_gate2("sai1", "sai1_div", CCM_CCGR1, CCM_CCGRx_CGn(0)); 262 + 263 + clk[VF610_CLK_SAI2_SEL] = imx_clk_mux("sai2_sel", CCM_CSCMR1, 4, 2, sai_sels, 4); 264 + clk[VF610_CLK_SAI2_EN] = imx_clk_gate("sai2_en", "sai2_sel", CCM_CSCDR1, 18); 265 + clk[VF610_CLK_SAI2_DIV] = imx_clk_divider("sai2_div", "sai2_en", CCM_CSCDR1, 8, 4); 266 + clk[VF610_CLK_SAI2] = imx_clk_gate2("sai2", "sai2_div", CCM_CCGR1, CCM_CCGRx_CGn(1)); 267 + 268 + clk[VF610_CLK_SAI3_SEL] = imx_clk_mux("sai3_sel", CCM_CSCMR1, 6, 2, sai_sels, 4); 269 + clk[VF610_CLK_SAI3_EN] = imx_clk_gate("sai3_en", "sai3_sel", CCM_CSCDR1, 19); 270 + clk[VF610_CLK_SAI3_DIV] = imx_clk_divider("sai3_div", "sai3_en", CCM_CSCDR1, 12, 4); 271 + clk[VF610_CLK_SAI3] = imx_clk_gate2("sai3", "sai3_div", CCM_CCGR1, CCM_CCGRx_CGn(2)); 272 + 273 + clk[VF610_CLK_NFC_SEL] = imx_clk_mux("nfc_sel", CCM_CSCMR1, 12, 2, nfc_sels, 4); 274 + clk[VF610_CLK_NFC_EN] = imx_clk_gate("nfc_en", "nfc_sel", CCM_CSCDR2, 9); 275 + clk[VF610_CLK_NFC_PRE_DIV] = imx_clk_divider("nfc_pre_div", "nfc_en", CCM_CSCDR3, 13, 3); 276 + clk[VF610_CLK_NFC_FRAC_DIV] = imx_clk_divider("nfc_frac_div", "nfc_pre_div", CCM_CSCDR2, 4, 4); 277 + clk[VF610_CLK_NFC] = imx_clk_gate2("nfc", "nfc_frac_div", CCM_CCGR10, CCM_CCGRx_CGn(0)); 278 + 279 + clk[VF610_CLK_GPU_SEL] = imx_clk_mux("gpu_sel", CCM_CSCMR1, 14, 1, gpu_sels, 2); 280 + clk[VF610_CLK_GPU_EN] = imx_clk_gate("gpu_en", "gpu_sel", CCM_CSCDR2, 10); 281 + clk[VF610_CLK_GPU2D] = imx_clk_gate2("gpu", "gpu_en", CCM_CCGR8, CCM_CCGRx_CGn(15)); 282 + 283 + clk[VF610_CLK_VADC_SEL] = imx_clk_mux("vadc_sel", CCM_CSCMR1, 8, 2, vadc_sels, 3); 284 + clk[VF610_CLK_VADC_EN] = imx_clk_gate("vadc_en", "vadc_sel", CCM_CSCDR1, 22); 285 + clk[VF610_CLK_VADC_DIV] = imx_clk_divider("vadc_div", "vadc_en", CCM_CSCDR1, 20, 2); 286 + clk[VF610_CLK_VADC_DIV_HALF] = imx_clk_fixed_factor("vadc_div_half", "vadc_div", 1, 2); 287 + clk[VF610_CLK_VADC] = imx_clk_gate2("vadc", "vadc_div", CCM_CCGR8, 
CCM_CCGRx_CGn(7)); 288 + 289 + clk[VF610_CLK_ADC0] = imx_clk_gate2("adc0", "ipg_bus", CCM_CCGR1, CCM_CCGRx_CGn(11)); 290 + clk[VF610_CLK_ADC1] = imx_clk_gate2("adc1", "ipg_bus", CCM_CCGR7, CCM_CCGRx_CGn(11)); 291 + clk[VF610_CLK_DAC0] = imx_clk_gate2("dac0", "ipg_bus", CCM_CCGR8, CCM_CCGRx_CGn(12)); 292 + clk[VF610_CLK_DAC1] = imx_clk_gate2("dac1", "ipg_bus", CCM_CCGR8, CCM_CCGRx_CGn(13)); 293 + 294 + clk[VF610_CLK_ASRC] = imx_clk_gate2("asrc", "ipg_bus", CCM_CCGR4, CCM_CCGRx_CGn(1)); 295 + 296 + clk[VF610_CLK_FLEXCAN0] = imx_clk_gate2("flexcan0", "ipg_bus", CCM_CCGR0, CCM_CCGRx_CGn(0)); 297 + clk[VF610_CLK_FLEXCAN1] = imx_clk_gate2("flexcan1", "ipg_bus", CCM_CCGR9, CCM_CCGRx_CGn(4)); 298 + 299 + clk_set_parent(clk[VF610_CLK_QSPI0_SEL], clk[VF610_CLK_PLL1_PFD4]); 300 + clk_set_rate(clk[VF610_CLK_QSPI0_X4_DIV], clk_get_rate(clk[VF610_CLK_QSPI0_SEL]) / 2); 301 + clk_set_rate(clk[VF610_CLK_QSPI0_X2_DIV], clk_get_rate(clk[VF610_CLK_QSPI0_X4_DIV]) / 2); 302 + clk_set_rate(clk[VF610_CLK_QSPI0_X1_DIV], clk_get_rate(clk[VF610_CLK_QSPI0_X2_DIV]) / 2); 303 + 304 + clk_set_parent(clk[VF610_CLK_QSPI1_SEL], clk[VF610_CLK_PLL1_PFD4]); 305 + clk_set_rate(clk[VF610_CLK_QSPI1_X4_DIV], clk_get_rate(clk[VF610_CLK_QSPI1_SEL]) / 2); 306 + clk_set_rate(clk[VF610_CLK_QSPI1_X2_DIV], clk_get_rate(clk[VF610_CLK_QSPI1_X4_DIV]) / 2); 307 + clk_set_rate(clk[VF610_CLK_QSPI1_X1_DIV], clk_get_rate(clk[VF610_CLK_QSPI1_X2_DIV]) / 2); 308 + 309 + clk_set_parent(clk[VF610_CLK_SAI0_SEL], clk[VF610_CLK_AUDIO_EXT]); 310 + clk_set_parent(clk[VF610_CLK_SAI1_SEL], clk[VF610_CLK_AUDIO_EXT]); 311 + clk_set_parent(clk[VF610_CLK_SAI2_SEL], clk[VF610_CLK_AUDIO_EXT]); 312 + clk_set_parent(clk[VF610_CLK_SAI3_SEL], clk[VF610_CLK_AUDIO_EXT]); 313 + 314 + /* Add the clocks to provider list */ 315 + clk_data.clks = clk; 316 + clk_data.clk_num = ARRAY_SIZE(clk); 317 + of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data); 318 + } 319 + CLK_OF_DECLARE(vf610, "fsl,vf610-ccm", vf610_clocks_init);
+35
arch/arm/mach-imx/clk.c
··· 1 + #include <linux/clk.h> 2 + #include <linux/err.h> 3 + #include <linux/of.h> 4 + #include <linux/slab.h> 1 5 #include <linux/spinlock.h> 2 6 #include "clk.h" 3 7 4 8 DEFINE_SPINLOCK(imx_ccm_lock); 9 + 10 + static struct clk * __init imx_obtain_fixed_clock_from_dt(const char *name) 11 + { 12 + struct of_phandle_args phandle; 13 + struct clk *clk = ERR_PTR(-ENODEV); 14 + char *path; 15 + 16 + path = kasprintf(GFP_KERNEL, "/clocks/%s", name); 17 + if (!path) 18 + return ERR_PTR(-ENOMEM); 19 + 20 + phandle.np = of_find_node_by_path(path); 21 + kfree(path); 22 + 23 + if (phandle.np) { 24 + clk = of_clk_get_from_provider(&phandle); 25 + of_node_put(phandle.np); 26 + } 27 + return clk; 28 + } 29 + 30 + struct clk * __init imx_obtain_fixed_clock( 31 + const char *name, unsigned long rate) 32 + { 33 + struct clk *clk; 34 + 35 + clk = imx_obtain_fixed_clock_from_dt(name); 36 + if (IS_ERR(clk)) 37 + clk = imx_clk_fixed(name, rate); 38 + return clk; 39 + }
+3 -1
arch/arm/mach-imx/clk.h
··· 18 18 IMX_PLLV3_USB, 19 19 IMX_PLLV3_AV, 20 20 IMX_PLLV3_ENET, 21 - IMX_PLLV3_MLB, 22 21 }; 23 22 24 23 struct clk *imx_clk_pllv3(enum imx_pllv3_type type, const char *name, ··· 27 28 const char *parent_name, unsigned long flags, 28 29 void __iomem *reg, u8 bit_idx, 29 30 u8 clk_gate_flags, spinlock_t *lock); 31 + 32 + struct clk * imx_obtain_fixed_clock( 33 + const char *name, unsigned long rate); 30 34 31 35 static inline struct clk *imx_clk_gate2(const char *name, const char *parent, 32 36 void __iomem *reg, u8 shift)
+1 -1
arch/arm/mach-imx/common.h
··· 68 68 extern int mx31_clocks_init_dt(void); 69 69 extern int mx51_clocks_init_dt(void); 70 70 extern int mx53_clocks_init_dt(void); 71 - extern int mx6q_clocks_init(void); 72 71 extern struct platform_device *mxc_register_gpio(char *name, int id, 73 72 resource_size_t iobase, resource_size_t iosize, int irq, int irq_high); 74 73 extern void mxc_set_cpu_type(unsigned int type); 75 74 extern void mxc_restart(char, const char *); 76 75 extern void mxc_arch_reset_init(void __iomem *); 76 + extern void mxc_arch_reset_init_dt(void); 77 77 extern int mx53_revision(void); 78 78 extern int imx6q_revision(void); 79 79 extern int mx53_display_revision(void);
+1
arch/arm/mach-imx/hardware.h
··· 20 20 #ifndef __ASM_ARCH_MXC_HARDWARE_H__ 21 21 #define __ASM_ARCH_MXC_HARDWARE_H__ 22 22 23 + #include <asm/io.h> 23 24 #include <asm/sizes.h> 24 25 25 26 #define addr_in_module(addr, mod) \
+2
arch/arm/mach-imx/imx25-dt.c
··· 19 19 20 20 static void __init imx25_dt_init(void) 21 21 { 22 + mxc_arch_reset_init_dt(); 23 + 22 24 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 23 25 } 24 26
+2
arch/arm/mach-imx/imx27-dt.c
··· 22 22 { 23 23 struct platform_device_info devinfo = { .name = "cpufreq-cpu0", }; 24 24 25 + mxc_arch_reset_init_dt(); 26 + 25 27 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 26 28 27 29 platform_device_register_full(&devinfo);
+2
arch/arm/mach-imx/imx31-dt.c
··· 20 20 21 21 static void __init imx31_dt_init(void) 22 22 { 23 + mxc_arch_reset_init_dt(); 24 + 23 25 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 24 26 } 25 27
+2
arch/arm/mach-imx/imx51-dt.c
··· 23 23 { 24 24 struct platform_device_info devinfo = { .name = "cpufreq-cpu0", }; 25 25 26 + mxc_arch_reset_init_dt(); 27 + 26 28 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 27 29 platform_device_register_full(&devinfo); 28 30 }
+1
arch/arm/mach-imx/irq-common.c
··· 18 18 19 19 #include <linux/module.h> 20 20 #include <linux/irq.h> 21 + #include <linux/platform_data/asoc-imx-ssi.h> 21 22 22 23 #include "irq-common.h" 23 24
+3
arch/arm/mach-imx/mach-imx53.c
··· 21 21 #include <asm/mach/time.h> 22 22 23 23 #include "common.h" 24 + #include "hardware.h" 24 25 #include "mx53.h" 25 26 26 27 static void __init imx53_qsb_init(void) ··· 39 38 40 39 static void __init imx53_dt_init(void) 41 40 { 41 + mxc_arch_reset_init_dt(); 42 + 42 43 if (of_machine_is_compatible("fsl,imx53-qsb")) 43 44 imx53_qsb_init(); 44 45
+79 -2
arch/arm/mach-imx/mach-imx6q.c
··· 11 11 */ 12 12 13 13 #include <linux/clk.h> 14 + #include <linux/clk-provider.h> 14 15 #include <linux/clkdev.h> 15 16 #include <linux/clocksource.h> 16 17 #include <linux/cpu.h> ··· 146 145 imx6q_sabrelite_cko1_setup(); 147 146 } 148 147 148 + static void __init imx6q_sabresd_cko1_setup(void) 149 + { 150 + struct clk *cko1_sel, *pll4, *pll4_post, *cko1; 151 + unsigned long rate; 152 + 153 + cko1_sel = clk_get_sys(NULL, "cko1_sel"); 154 + pll4 = clk_get_sys(NULL, "pll4_audio"); 155 + pll4_post = clk_get_sys(NULL, "pll4_post_div"); 156 + cko1 = clk_get_sys(NULL, "cko1"); 157 + if (IS_ERR(cko1_sel) || IS_ERR(pll4) 158 + || IS_ERR(pll4_post) || IS_ERR(cko1)) { 159 + pr_err("cko1 setup failed!\n"); 160 + goto put_clk; 161 + } 162 + /* 163 + * Setting pll4 at 768MHz (24MHz * 32) 164 + * So its child clock can get 24MHz easily 165 + */ 166 + clk_set_rate(pll4, 768000000); 167 + 168 + clk_set_parent(cko1_sel, pll4_post); 169 + rate = clk_round_rate(cko1, 24000000); 170 + clk_set_rate(cko1, rate); 171 + put_clk: 172 + if (!IS_ERR(cko1_sel)) 173 + clk_put(cko1_sel); 174 + if (!IS_ERR(pll4_post)) 175 + clk_put(pll4_post); 176 + if (!IS_ERR(pll4)) 177 + clk_put(pll4); 178 + if (!IS_ERR(cko1)) 179 + clk_put(cko1); 180 + } 181 + 182 + static void __init imx6q_sabresd_init(void) 183 + { 184 + imx6q_sabresd_cko1_setup(); 185 + } 186 + 149 187 static void __init imx6q_1588_init(void) 150 188 { 151 189 struct regmap *gpr; ··· 205 165 { 206 166 if (of_machine_is_compatible("fsl,imx6q-sabrelite")) 207 167 imx6q_sabrelite_init(); 168 + else if (of_machine_is_compatible("fsl,imx6q-sabresd") || 169 + of_machine_is_compatible("fsl,imx6dl-sabresd")) 170 + imx6q_sabresd_init(); 208 171 209 172 of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 210 173 ··· 296 253 imx_scu_map_io(); 297 254 } 298 255 256 + #ifdef CONFIG_CACHE_L2X0 257 + static void __init imx6q_init_l2cache(void) 258 + { 259 + void __iomem *l2x0_base; 260 + struct device_node *np; 261 + unsigned int val; 
262 + 263 + np = of_find_compatible_node(NULL, NULL, "arm,pl310-cache"); 264 + if (!np) 265 + goto out; 266 + 267 + l2x0_base = of_iomap(np, 0); 268 + if (!l2x0_base) { 269 + of_node_put(np); 270 + goto out; 271 + } 272 + 273 + /* Configure the L2 PREFETCH and POWER registers */ 274 + val = readl_relaxed(l2x0_base + L2X0_PREFETCH_CTRL); 275 + val |= 0x70800000; 276 + writel_relaxed(val, l2x0_base + L2X0_PREFETCH_CTRL); 277 + val = L2X0_DYNAMIC_CLK_GATING_EN | L2X0_STNDBY_MODE_EN; 278 + writel_relaxed(val, l2x0_base + L2X0_POWER_CTRL); 279 + 280 + iounmap(l2x0_base); 281 + of_node_put(np); 282 + 283 + out: 284 + l2x0_of_init(0, ~0UL); 285 + } 286 + #else 287 + static inline void imx6q_init_l2cache(void) {} 288 + #endif 289 + 299 290 static void __init imx6q_init_irq(void) 300 291 { 301 292 imx6q_init_revision(); 302 - l2x0_of_init(0, ~0UL); 293 + imx6q_init_l2cache(); 303 294 imx_src_init(); 304 295 imx_gpc_init(); 305 296 irqchip_init(); ··· 341 264 342 265 static void __init imx6q_timer_init(void) 343 266 { 344 - mx6q_clocks_init(); 267 + of_clk_init(NULL); 345 268 clocksource_of_init(); 346 269 imx_print_silicon_rev(cpu_is_imx6dl() ? "i.MX6DL" : "i.MX6Q", 347 270 imx6q_revision());
+52
arch/arm/mach-imx/mach-imx6sl.c
··· 1 + /* 2 + * Copyright 2013 Freescale Semiconductor, Inc. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + */ 9 + 10 + #include <linux/clk-provider.h> 11 + #include <linux/irqchip.h> 12 + #include <linux/of.h> 13 + #include <linux/of_platform.h> 14 + #include <asm/hardware/cache-l2x0.h> 15 + #include <asm/mach/arch.h> 16 + #include <asm/mach/map.h> 17 + 18 + #include "common.h" 19 + 20 + static void __init imx6sl_init_machine(void) 21 + { 22 + mxc_arch_reset_init_dt(); 23 + 24 + of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 25 + } 26 + 27 + static void __init imx6sl_init_irq(void) 28 + { 29 + l2x0_of_init(0, ~0UL); 30 + imx_src_init(); 31 + imx_gpc_init(); 32 + irqchip_init(); 33 + } 34 + 35 + static void __init imx6sl_timer_init(void) 36 + { 37 + of_clk_init(NULL); 38 + } 39 + 40 + static const char *imx6sl_dt_compat[] __initdata = { 41 + "fsl,imx6sl", 42 + NULL, 43 + }; 44 + 45 + DT_MACHINE_START(IMX6SL, "Freescale i.MX6 SoloLite (Device Tree)") 46 + .map_io = debug_ll_io_init, 47 + .init_irq = imx6sl_init_irq, 48 + .init_time = imx6sl_timer_init, 49 + .init_machine = imx6sl_init_machine, 50 + .dt_compat = imx6sl_dt_compat, 51 + .restart = mxc_restart, 52 + MACHINE_END
+2 -2
arch/arm/mach-imx/mach-pca100.c
··· 398 398 imx27_add_fsl_usb2_udc(&otg_device_pdata); 399 399 } 400 400 401 - usbh2_pdata.otg = otg_ulpi_create(&mxc_ulpi_access_ops, 402 - ULPI_OTG_DRVVBUS | ULPI_OTG_DRVVBUS_EXT); 401 + usbh2_pdata.otg = imx_otg_ulpi_create( 402 + ULPI_OTG_DRVVBUS | ULPI_OTG_DRVVBUS_EXT); 403 403 404 404 if (usbh2_pdata.otg) 405 405 imx27_add_mxc_ehci_hs(2, &usbh2_pdata);
+48
arch/arm/mach-imx/mach-vf610.c
··· 1 + /* 2 + * Copyright 2012-2013 Freescale Semiconductor, Inc. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + */ 9 + 10 + #include <linux/of_platform.h> 11 + #include <linux/clocksource.h> 12 + #include <linux/irqchip.h> 13 + #include <linux/clk-provider.h> 14 + #include <asm/mach/arch.h> 15 + #include <asm/hardware/cache-l2x0.h> 16 + 17 + #include "common.h" 18 + 19 + static void __init vf610_init_machine(void) 20 + { 21 + mxc_arch_reset_init_dt(); 22 + of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL); 23 + } 24 + 25 + static void __init vf610_init_irq(void) 26 + { 27 + l2x0_of_init(0, ~0UL); 28 + irqchip_init(); 29 + } 30 + 31 + static void __init vf610_init_time(void) 32 + { 33 + of_clk_init(NULL); 34 + clocksource_of_init(); 35 + } 36 + 37 + static const char *vf610_dt_compat[] __initdata = { 38 + "fsl,vf610", 39 + NULL, 40 + }; 41 + 42 + DT_MACHINE_START(VYBRID_VF610, "Freescale Vybrid VF610 (Device Tree)") 43 + .init_irq = vf610_init_irq, 44 + .init_time = vf610_init_time, 45 + .init_machine = vf610_init_machine, 46 + .dt_compat = vf610_dt_compat, 47 + .restart = mxc_restart, 48 + MACHINE_END
+1 -1
arch/arm/mach-imx/mm-imx1.c
··· 39 39 void __init imx1_init_early(void) 40 40 { 41 41 mxc_set_cpu_type(MXC_CPU_MX1); 42 - mxc_arch_reset_init(MX1_IO_ADDRESS(MX1_WDT_BASE_ADDR)); 43 42 imx_iomuxv1_init(MX1_IO_ADDRESS(MX1_GPIO_BASE_ADDR), 44 43 MX1_NUM_GPIO_PORT); 45 44 } ··· 50 51 51 52 void __init imx1_soc_init(void) 52 53 { 54 + mxc_arch_reset_init(MX1_IO_ADDRESS(MX1_WDT_BASE_ADDR)); 53 55 mxc_device_init(); 54 56 55 57 mxc_register_gpio("imx1-gpio", 0, MX1_GPIO1_BASE_ADDR, SZ_256,
+1 -1
arch/arm/mach-imx/mm-imx21.c
··· 66 66 void __init imx21_init_early(void) 67 67 { 68 68 mxc_set_cpu_type(MXC_CPU_MX21); 69 - mxc_arch_reset_init(MX21_IO_ADDRESS(MX21_WDOG_BASE_ADDR)); 70 69 imx_iomuxv1_init(MX21_IO_ADDRESS(MX21_GPIO_BASE_ADDR), 71 70 MX21_NUM_GPIO_PORT); 72 71 } ··· 81 82 82 83 void __init imx21_soc_init(void) 83 84 { 85 + mxc_arch_reset_init(MX21_IO_ADDRESS(MX21_WDOG_BASE_ADDR)); 84 86 mxc_device_init(); 85 87 86 88 mxc_register_gpio("imx21-gpio", 0, MX21_GPIO1_BASE_ADDR, SZ_256, MX21_INT_GPIO, 0);
+1 -1
arch/arm/mach-imx/mm-imx25.c
··· 54 54 { 55 55 mxc_set_cpu_type(MXC_CPU_MX25); 56 56 mxc_iomux_v3_init(MX25_IO_ADDRESS(MX25_IOMUXC_BASE_ADDR)); 57 - mxc_arch_reset_init(MX25_IO_ADDRESS(MX25_WDOG_BASE_ADDR)); 58 57 } 59 58 60 59 void __init mx25_init_irq(void) ··· 88 89 89 90 void __init imx25_soc_init(void) 90 91 { 92 + mxc_arch_reset_init(MX25_IO_ADDRESS(MX25_WDOG_BASE_ADDR)); 91 93 mxc_device_init(); 92 94 93 95 /* i.mx25 has the i.mx35 type gpio */
+1 -1
arch/arm/mach-imx/mm-imx27.c
··· 66 66 void __init imx27_init_early(void) 67 67 { 68 68 mxc_set_cpu_type(MXC_CPU_MX27); 69 - mxc_arch_reset_init(MX27_IO_ADDRESS(MX27_WDOG_BASE_ADDR)); 70 69 imx_iomuxv1_init(MX27_IO_ADDRESS(MX27_GPIO_BASE_ADDR), 71 70 MX27_NUM_GPIO_PORT); 72 71 } ··· 81 82 82 83 void __init imx27_soc_init(void) 83 84 { 85 + mxc_arch_reset_init(MX27_IO_ADDRESS(MX27_WDOG_BASE_ADDR)); 84 86 mxc_device_init(); 85 87 86 88 /* i.mx27 has the i.mx21 type gpio */
+2 -2
arch/arm/mach-imx/mm-imx3.c
··· 138 138 void __init imx31_init_early(void) 139 139 { 140 140 mxc_set_cpu_type(MXC_CPU_MX31); 141 - mxc_arch_reset_init(MX31_IO_ADDRESS(MX31_WDOG_BASE_ADDR)); 142 141 arch_ioremap_caller = imx3_ioremap_caller; 143 142 arm_pm_idle = imx3_idle; 144 143 mx3_ccm_base = MX31_IO_ADDRESS(MX31_CCM_BASE_ADDR); ··· 173 174 174 175 imx3_init_l2x0(); 175 176 177 + mxc_arch_reset_init(MX31_IO_ADDRESS(MX31_WDOG_BASE_ADDR)); 176 178 mxc_device_init(); 177 179 178 180 mxc_register_gpio("imx31-gpio", 0, MX31_GPIO1_BASE_ADDR, SZ_16K, MX31_INT_GPIO1, 0); ··· 216 216 { 217 217 mxc_set_cpu_type(MXC_CPU_MX35); 218 218 mxc_iomux_v3_init(MX35_IO_ADDRESS(MX35_IOMUXC_BASE_ADDR)); 219 - mxc_arch_reset_init(MX35_IO_ADDRESS(MX35_WDOG_BASE_ADDR)); 220 219 arm_pm_idle = imx3_idle; 221 220 arch_ioremap_caller = imx3_ioremap_caller; 222 221 mx3_ccm_base = MX35_IO_ADDRESS(MX35_CCM_BASE_ADDR); ··· 271 272 272 273 imx3_init_l2x0(); 273 274 275 + mxc_arch_reset_init(MX35_IO_ADDRESS(MX35_WDOG_BASE_ADDR)); 274 276 mxc_device_init(); 275 277 276 278 mxc_register_gpio("imx35-gpio", 0, MX35_GPIO1_BASE_ADDR, SZ_16K, MX35_INT_GPIO1, 0);
+1 -2
arch/arm/mach-imx/mm-imx5.c
··· 83 83 imx51_ipu_mipi_setup(); 84 84 mxc_set_cpu_type(MXC_CPU_MX51); 85 85 mxc_iomux_v3_init(MX51_IO_ADDRESS(MX51_IOMUXC_BASE_ADDR)); 86 - mxc_arch_reset_init(MX51_IO_ADDRESS(MX51_WDOG1_BASE_ADDR)); 87 86 imx_src_init(); 88 87 } 89 88 ··· 90 91 { 91 92 mxc_set_cpu_type(MXC_CPU_MX53); 92 93 mxc_iomux_v3_init(MX53_IO_ADDRESS(MX53_IOMUXC_BASE_ADDR)); 93 - mxc_arch_reset_init(MX53_IO_ADDRESS(MX53_WDOG1_BASE_ADDR)); 94 94 imx_src_init(); 95 95 } 96 96 ··· 127 129 128 130 void __init imx51_soc_init(void) 129 131 { 132 + mxc_arch_reset_init(MX51_IO_ADDRESS(MX51_WDOG1_BASE_ADDR)); 130 133 mxc_device_init(); 131 134 132 135 /* i.mx51 has the i.mx35 type gpio */
+37 -10
arch/arm/mach-imx/system.c
··· 21 21 #include <linux/io.h> 22 22 #include <linux/err.h> 23 23 #include <linux/delay.h> 24 + #include <linux/of.h> 25 + #include <linux/of_address.h> 24 26 25 27 #include <asm/system_misc.h> 26 28 #include <asm/proc-fns.h> ··· 32 30 #include "hardware.h" 33 31 34 32 static void __iomem *wdog_base; 33 + static struct clk *wdog_clk; 35 34 36 35 /* 37 36 * Reset the system. It is called by machine_restart(). ··· 41 38 { 42 39 unsigned int wcr_enable; 43 40 44 - if (cpu_is_mx1()) { 45 - wcr_enable = (1 << 0); 46 - } else { 47 - struct clk *clk; 41 + if (wdog_clk) 42 + clk_enable(wdog_clk); 48 43 49 - clk = clk_get_sys("imx2-wdt.0", NULL); 50 - if (!IS_ERR(clk)) 51 - clk_prepare_enable(clk); 44 + if (cpu_is_mx1()) 45 + wcr_enable = (1 << 0); 46 + else 52 47 wcr_enable = (1 << 2); 53 - } 54 48 55 49 /* Assert SRS signal */ 56 50 __raw_writew(wcr_enable, wdog_base); ··· 55 55 /* wait for reset to assert... */ 56 56 mdelay(500); 57 57 58 - printk(KERN_ERR "Watchdog reset failed to assert reset\n"); 58 + pr_err("%s: Watchdog reset failed to assert reset\n", __func__); 59 59 60 60 /* delay to allow the serial port to show the message */ 61 61 mdelay(50); ··· 64 64 soft_restart(0); 65 65 } 66 66 67 - void mxc_arch_reset_init(void __iomem *base) 67 + void __init mxc_arch_reset_init(void __iomem *base) 68 68 { 69 69 wdog_base = base; 70 + 71 + wdog_clk = clk_get_sys("imx2-wdt.0", NULL); 72 + if (IS_ERR(wdog_clk)) { 73 + pr_warn("%s: failed to get wdog clock\n", __func__); 74 + wdog_clk = NULL; 75 + return; 76 + } 77 + 78 + clk_prepare(wdog_clk); 79 + } 80 + 81 + void __init mxc_arch_reset_init_dt(void) 82 + { 83 + struct device_node *np; 84 + 85 + np = of_find_compatible_node(NULL, NULL, "fsl,imx21-wdt"); 86 + wdog_base = of_iomap(np, 0); 87 + WARN_ON(!wdog_base); 88 + 89 + wdog_clk = of_clk_get(np, 0); 90 + if (IS_ERR(wdog_clk)) { 91 + pr_warn("%s: failed to get wdog clock\n", __func__); 92 + wdog_clk = NULL; 93 + return; 94 + } 95 + 96 + clk_prepare(wdog_clk); 70 97 }
-118
arch/arm/mach-imx/ulpi.c
··· 1 - /* 2 - * Copyright 2008 Sascha Hauer, Pengutronix <s.hauer@pengutronix.de> 3 - * Copyright 2009 Daniel Mack <daniel@caiaq.de> 4 - * 5 - * This program is free software; you can redistribute it and/or 6 - * modify it under the terms of the GNU General Public License 7 - * as published by the Free Software Foundation; either version 2 8 - * of the License, or (at your option) any later version. 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 - * 14 - * You should have received a copy of the GNU General Public License 15 - * along with this program; if not, write to the Free Software 16 - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, 17 - * MA 02110-1301, USA. 18 - */ 19 - 20 - #include <linux/module.h> 21 - #include <linux/kernel.h> 22 - #include <linux/io.h> 23 - #include <linux/delay.h> 24 - #include <linux/usb/otg.h> 25 - #include <linux/usb/ulpi.h> 26 - 27 - #include "ulpi.h" 28 - 29 - /* ULPIVIEW register bits */ 30 - #define ULPIVW_WU (1 << 31) /* Wakeup */ 31 - #define ULPIVW_RUN (1 << 30) /* read/write run */ 32 - #define ULPIVW_WRITE (1 << 29) /* 0 = read 1 = write */ 33 - #define ULPIVW_SS (1 << 27) /* SyncState */ 34 - #define ULPIVW_PORT_MASK 0x07 /* Port field */ 35 - #define ULPIVW_PORT_SHIFT 24 36 - #define ULPIVW_ADDR_MASK 0xff /* data address field */ 37 - #define ULPIVW_ADDR_SHIFT 16 38 - #define ULPIVW_RDATA_MASK 0xff /* read data field */ 39 - #define ULPIVW_RDATA_SHIFT 8 40 - #define ULPIVW_WDATA_MASK 0xff /* write data field */ 41 - #define ULPIVW_WDATA_SHIFT 0 42 - 43 - static int ulpi_poll(void __iomem *view, u32 bit) 44 - { 45 - int timeout = 10000; 46 - 47 - while (timeout--) { 48 - u32 data = __raw_readl(view); 49 - 50 - if (!(data & bit)) 51 - return 0; 52 - 53 - cpu_relax(); 54 - }; 55 - 56 - 
printk(KERN_WARNING "timeout polling for ULPI device\n"); 57 - 58 - return -ETIMEDOUT; 59 - } 60 - 61 - static int ulpi_read(struct usb_phy *otg, u32 reg) 62 - { 63 - int ret; 64 - void __iomem *view = otg->io_priv; 65 - 66 - /* make sure interface is running */ 67 - if (!(__raw_readl(view) & ULPIVW_SS)) { 68 - __raw_writel(ULPIVW_WU, view); 69 - 70 - /* wait for wakeup */ 71 - ret = ulpi_poll(view, ULPIVW_WU); 72 - if (ret) 73 - return ret; 74 - } 75 - 76 - /* read the register */ 77 - __raw_writel((ULPIVW_RUN | (reg << ULPIVW_ADDR_SHIFT)), view); 78 - 79 - /* wait for completion */ 80 - ret = ulpi_poll(view, ULPIVW_RUN); 81 - if (ret) 82 - return ret; 83 - 84 - return (__raw_readl(view) >> ULPIVW_RDATA_SHIFT) & ULPIVW_RDATA_MASK; 85 - } 86 - 87 - static int ulpi_write(struct usb_phy *otg, u32 val, u32 reg) 88 - { 89 - int ret; 90 - void __iomem *view = otg->io_priv; 91 - 92 - /* make sure the interface is running */ 93 - if (!(__raw_readl(view) & ULPIVW_SS)) { 94 - __raw_writel(ULPIVW_WU, view); 95 - /* wait for wakeup */ 96 - ret = ulpi_poll(view, ULPIVW_WU); 97 - if (ret) 98 - return ret; 99 - } 100 - 101 - __raw_writel((ULPIVW_RUN | ULPIVW_WRITE | 102 - (reg << ULPIVW_ADDR_SHIFT) | 103 - ((val & ULPIVW_WDATA_MASK) << ULPIVW_WDATA_SHIFT)), view); 104 - 105 - /* wait for completion */ 106 - return ulpi_poll(view, ULPIVW_RUN); 107 - } 108 - 109 - struct usb_phy_io_ops mxc_ulpi_access_ops = { 110 - .read = ulpi_read, 111 - .write = ulpi_write, 112 - }; 113 - EXPORT_SYMBOL_GPL(mxc_ulpi_access_ops); 114 - 115 - struct usb_phy *imx_otg_ulpi_create(unsigned int flags) 116 - { 117 - return otg_ulpi_create(&mxc_ulpi_access_ops, flags); 118 - }
+7 -4
arch/arm/mach-imx/ulpi.h
··· 1 1 #ifndef __MACH_ULPI_H 2 2 #define __MACH_ULPI_H 3 3 4 - #ifdef CONFIG_USB_ULPI 5 - struct usb_phy *imx_otg_ulpi_create(unsigned int flags); 4 + #include <linux/usb/ulpi.h> 5 + 6 + #ifdef CONFIG_USB_ULPI_VIEWPORT 7 + static inline struct usb_phy *imx_otg_ulpi_create(unsigned int flags) 8 + { 9 + return otg_ulpi_create(&ulpi_viewport_access_ops, flags); 10 + } 6 11 #else 7 12 static inline struct usb_phy *imx_otg_ulpi_create(unsigned int flags) 8 13 { 9 14 return NULL; 10 15 } 11 16 #endif 12 - 13 - extern struct usb_phy_io_ops mxc_ulpi_access_ops; 14 17 15 18 #endif /* __MACH_ULPI_H */ 16 19
+3 -2
arch/arm/mach-kirkwood/mpp.c
··· 22 22 23 23 kirkwood_pcie_id(&dev, &rev); 24 24 25 - if ((dev == MV88F6281_DEV_ID && rev >= MV88F6281_REV_A0) || 26 - (dev == MV88F6282_DEV_ID)) 25 + if (dev == MV88F6281_DEV_ID && rev >= MV88F6281_REV_A0) 27 26 return MPP_F6281_MASK; 27 + if (dev == MV88F6282_DEV_ID) 28 + return MPP_F6282_MASK; 28 29 if (dev == MV88F6192_DEV_ID && rev >= MV88F6192_REV_A0) 29 30 return MPP_F6192_MASK; 30 31 if (dev == MV88F6180_DEV_ID)
+9 -9
arch/arm/mach-omap2/clock36xx.c
··· 20 20 21 21 #include <linux/kernel.h> 22 22 #include <linux/clk.h> 23 + #include <linux/clk-provider.h> 23 24 #include <linux/io.h> 24 25 25 26 #include "clock.h" 26 27 #include "clock36xx.h" 27 - 28 + #define to_clk_divider(_hw) container_of(_hw, struct clk_divider, hw) 28 29 29 30 /** 30 31 * omap36xx_pwrdn_clk_enable_with_hsdiv_restore - enable clocks suffering ··· 40 39 */ 41 40 int omap36xx_pwrdn_clk_enable_with_hsdiv_restore(struct clk_hw *clk) 42 41 { 43 - struct clk_hw_omap *parent; 42 + struct clk_divider *parent; 44 43 struct clk_hw *parent_hw; 45 - u32 dummy_v, orig_v, clksel_shift; 44 + u32 dummy_v, orig_v; 46 45 int ret; 47 46 48 47 /* Clear PWRDN bit of HSDIVIDER */ 49 48 ret = omap2_dflt_clk_enable(clk); 50 49 51 50 parent_hw = __clk_get_hw(__clk_get_parent(clk->clk)); 52 - parent = to_clk_hw_omap(parent_hw); 51 + parent = to_clk_divider(parent_hw); 53 52 54 53 /* Restore the dividers */ 55 54 if (!ret) { 56 - clksel_shift = __ffs(parent->clksel_mask); 57 - orig_v = __raw_readl(parent->clksel_reg); 55 + orig_v = __raw_readl(parent->reg); 58 56 dummy_v = orig_v; 59 57 60 58 /* Write any other value different from the Read value */ 61 - dummy_v ^= (1 << clksel_shift); 62 - __raw_writel(dummy_v, parent->clksel_reg); 59 + dummy_v ^= (1 << parent->shift); 60 + __raw_writel(dummy_v, parent->reg); 63 61 64 62 /* Write the original divider */ 65 - __raw_writel(orig_v, parent->clksel_reg); 63 + __raw_writel(orig_v, parent->reg); 66 64 } 67 65 68 66 return ret;
+8 -1
arch/arm/mach-omap2/omap_hwmod_33xx_data.c
··· 2008 2008 }, 2009 2009 }; 2010 2010 2011 + /* uart2 */ 2012 + static struct omap_hwmod_dma_info uart2_edma_reqs[] = { 2013 + { .name = "tx", .dma_req = 28, }, 2014 + { .name = "rx", .dma_req = 29, }, 2015 + { .dma_req = -1 } 2016 + }; 2017 + 2011 2018 static struct omap_hwmod_irq_info am33xx_uart2_irqs[] = { 2012 2019 { .irq = 73 + OMAP_INTC_START, }, 2013 2020 { .irq = -1 }, ··· 2026 2019 .clkdm_name = "l4ls_clkdm", 2027 2020 .flags = HWMOD_SWSUP_SIDLE_ACT, 2028 2021 .mpu_irqs = am33xx_uart2_irqs, 2029 - .sdma_reqs = uart1_edma_reqs, 2022 + .sdma_reqs = uart2_edma_reqs, 2030 2023 .main_clk = "dpll_per_m2_div4_ck", 2031 2024 .prcm = { 2032 2025 .omap4 = {
+4 -2
arch/arm/mach-omap2/pm34xx.c
··· 546 546 /* Clear any pending PRCM interrupts */ 547 547 omap2_prm_write_mod_reg(0, OCP_MOD, OMAP3_PRM_IRQSTATUS_MPU_OFFSET); 548 548 549 - if (omap3_has_iva()) 550 - omap3_iva_idle(); 549 + /* 550 + * We need to idle iva2_pwrdm even on am3703 with no iva2. 551 + */ 552 + omap3_iva_idle(); 551 553 552 554 omap3_d2d_idle(); 553 555 }
+4 -2
arch/arm/mach-prima2/pm.c
··· 101 101 struct device_node *np; 102 102 103 103 np = of_find_matching_node(NULL, pwrc_ids); 104 - if (!np) 105 - panic("unable to find compatible pwrc node in dtb\n"); 104 + if (!np) { 105 + pr_err("unable to find compatible sirf pwrc node in dtb\n"); 106 + return -ENOENT; 107 + } 106 108 107 109 /* 108 110 * pwrc behind rtciobrg is not located in memory space
+4 -2
arch/arm/mach-prima2/rstc.c
··· 28 28 struct device_node *np; 29 29 30 30 np = of_find_matching_node(NULL, rstc_ids); 31 - if (!np) 32 - panic("unable to find compatible rstc node in dtb\n"); 31 + if (!np) { 32 + pr_err("unable to find compatible sirf rstc node in dtb\n"); 33 + return -ENOENT; 34 + } 33 35 34 36 sirfsoc_rstc_base = of_iomap(np, 0); 35 37 if (!sirfsoc_rstc_base)
+13 -5
arch/arm/plat-samsung/pm.c
··· 16 16 #include <linux/suspend.h> 17 17 #include <linux/errno.h> 18 18 #include <linux/delay.h> 19 + #include <linux/of.h> 19 20 #include <linux/serial_core.h> 20 21 #include <linux/io.h> 21 22 ··· 262 261 * require a full power-cycle) 263 262 */ 264 263 265 - if (!any_allowed(s3c_irqwake_intmask, s3c_irqwake_intallow) && 264 + if (!of_have_populated_dt() && 265 + !any_allowed(s3c_irqwake_intmask, s3c_irqwake_intallow) && 266 266 !any_allowed(s3c_irqwake_eintmask, s3c_irqwake_eintallow)) { 267 267 printk(KERN_ERR "%s: No wake-up sources!\n", __func__); 268 268 printk(KERN_ERR "%s: Aborting sleep\n", __func__); ··· 272 270 273 271 /* save all necessary core registers not covered by the drivers */ 274 272 275 - samsung_pm_save_gpios(); 276 - samsung_pm_saved_gpios(); 273 + if (!of_have_populated_dt()) { 274 + samsung_pm_save_gpios(); 275 + samsung_pm_saved_gpios(); 276 + } 277 + 277 278 s3c_pm_save_uarts(); 278 279 s3c_pm_save_core(); 279 280 ··· 315 310 316 311 s3c_pm_restore_core(); 317 312 s3c_pm_restore_uarts(); 318 - samsung_pm_restore_gpios(); 319 - s3c_pm_restored_gpios(); 313 + 314 + if (!of_have_populated_dt()) { 315 + samsung_pm_restore_gpios(); 316 + s3c_pm_restored_gpios(); 317 + } 320 318 321 319 s3c_pm_debug_init(); 322 320
+1 -1
arch/mips/include/asm/mmu_context.h
··· 117 117 if (! ((asid += ASID_INC) & ASID_MASK) ) { 118 118 if (cpu_has_vtag_icache) 119 119 flush_icache_all(); 120 - #ifdef CONFIG_VIRTUALIZATION 120 + #ifdef CONFIG_KVM 121 121 kvm_local_flush_tlb_all(); /* start new asid cycle */ 122 122 #else 123 123 local_flush_tlb_all(); /* start new asid cycle */
+39 -42
arch/mips/include/uapi/asm/kvm.h
··· 58 58 * bits[2..0] - Register 'sel' index. 59 59 * bits[7..3] - Register 'rd' index. 60 60 * bits[15..8] - Must be zero. 61 - * bits[63..16] - 1 -> CP0 registers. 61 + * bits[31..16] - 1 -> CP0 registers. 62 + * bits[51..32] - Must be zero. 63 + * bits[63..52] - As per linux/kvm.h 62 64 * 63 65 * Other sets registers may be added in the future. Each set would 64 - * have its own identifier in bits[63..16]. 65 - * 66 - * The addr field of struct kvm_one_reg must point to an aligned 67 - * 64-bit wide location. For registers that are narrower than 68 - * 64-bits, the value is stored in the low order bits of the location, 69 - * and sign extended to 64-bits. 66 + * have its own identifier in bits[31..16]. 70 67 * 71 68 * The registers defined in struct kvm_regs are also accessible, the 72 69 * id values for these are below. 73 70 */ 74 71 75 - #define KVM_REG_MIPS_R0 0 76 - #define KVM_REG_MIPS_R1 1 77 - #define KVM_REG_MIPS_R2 2 78 - #define KVM_REG_MIPS_R3 3 79 - #define KVM_REG_MIPS_R4 4 80 - #define KVM_REG_MIPS_R5 5 81 - #define KVM_REG_MIPS_R6 6 82 - #define KVM_REG_MIPS_R7 7 83 - #define KVM_REG_MIPS_R8 8 84 - #define KVM_REG_MIPS_R9 9 85 - #define KVM_REG_MIPS_R10 10 86 - #define KVM_REG_MIPS_R11 11 87 - #define KVM_REG_MIPS_R12 12 88 - #define KVM_REG_MIPS_R13 13 89 - #define KVM_REG_MIPS_R14 14 90 - #define KVM_REG_MIPS_R15 15 91 - #define KVM_REG_MIPS_R16 16 92 - #define KVM_REG_MIPS_R17 17 93 - #define KVM_REG_MIPS_R18 18 94 - #define KVM_REG_MIPS_R19 19 95 - #define KVM_REG_MIPS_R20 20 96 - #define KVM_REG_MIPS_R21 21 97 - #define KVM_REG_MIPS_R22 22 98 - #define KVM_REG_MIPS_R23 23 99 - #define KVM_REG_MIPS_R24 24 100 - #define KVM_REG_MIPS_R25 25 101 - #define KVM_REG_MIPS_R26 26 102 - #define KVM_REG_MIPS_R27 27 103 - #define KVM_REG_MIPS_R28 28 104 - #define KVM_REG_MIPS_R29 29 105 - #define KVM_REG_MIPS_R30 30 106 - #define KVM_REG_MIPS_R31 31 72 + #define KVM_REG_MIPS_R0 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 0) 73 + #define KVM_REG_MIPS_R1 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 1) 74 + #define KVM_REG_MIPS_R2 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 2) 75 + #define KVM_REG_MIPS_R3 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 3) 76 + #define KVM_REG_MIPS_R4 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 4) 77 + #define KVM_REG_MIPS_R5 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 5) 78 + #define KVM_REG_MIPS_R6 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 6) 79 + #define KVM_REG_MIPS_R7 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 7) 80 + #define KVM_REG_MIPS_R8 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 8) 81 + #define KVM_REG_MIPS_R9 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 9) 82 + #define KVM_REG_MIPS_R10 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 10) 83 + #define KVM_REG_MIPS_R11 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 11) 84 + #define KVM_REG_MIPS_R12 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 12) 85 + #define KVM_REG_MIPS_R13 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 13) 86 + #define KVM_REG_MIPS_R14 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 14) 87 + #define KVM_REG_MIPS_R15 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 15) 88 + #define KVM_REG_MIPS_R16 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 16) 89 + #define KVM_REG_MIPS_R17 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 17) 90 + #define KVM_REG_MIPS_R18 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 18) 91 + #define KVM_REG_MIPS_R19 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 19) 92 + #define KVM_REG_MIPS_R20 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 20) 93 + #define KVM_REG_MIPS_R21 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 21) 94 + #define KVM_REG_MIPS_R22 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 22) 95 + #define KVM_REG_MIPS_R23 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 23) 96 + #define KVM_REG_MIPS_R24 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 24) 97 + #define KVM_REG_MIPS_R25 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 25) 98 + #define KVM_REG_MIPS_R26 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 26) 99 + #define KVM_REG_MIPS_R27 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 27) 100 + #define KVM_REG_MIPS_R28 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 28) 101 + #define KVM_REG_MIPS_R29 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 29) 102 + #define KVM_REG_MIPS_R30 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 30) 103 + #define KVM_REG_MIPS_R31 (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 31) 107 104 108 - #define KVM_REG_MIPS_HI 32 109 - #define KVM_REG_MIPS_LO 33 110 - #define KVM_REG_MIPS_PC 34 105 + #define KVM_REG_MIPS_HI (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 32) 106 + #define KVM_REG_MIPS_LO (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 33) 107 + #define KVM_REG_MIPS_PC (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 34) 111 108 112 109 /* 113 110 * KVM MIPS specific structures and definitions
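The new register IDs pack an architecture tag, a size field, and the register index into a single 64-bit one-reg identifier. A minimal sketch of that composition; the mask constants below are restated from the uapi linux/kvm.h conventions rather than taken from this diff, so treat their exact values as assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Field layout assumed from the uapi linux/kvm.h one-reg encoding:
 * bits[63..56] arch, bits[55..52] size, low bits register index. */
#define KVM_REG_MIPS      0x7000000000000000ULL
#define KVM_REG_SIZE_U32  0x0020000000000000ULL
#define KVM_REG_SIZE_U64  0x0030000000000000ULL
#define KVM_REG_SIZE_MASK 0x00f0000000000000ULL

static uint64_t mips_gpr_id(unsigned n)
{
    /* GPRs are always exposed as 64-bit registers in this scheme. */
    return KVM_REG_MIPS | KVM_REG_SIZE_U64 | n;
}

static int reg_is_64bit(uint64_t id)
{
    return (id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64;
}
```

Userspace can then recover both the width and the index of a register from the id alone, which is what makes the size-dispatched copy in kvm_mips_get_reg/set_reg possible.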
+4
arch/mips/kernel/ftrace.c
··· 25 25 #define MCOUNT_OFFSET_INSNS 4 26 26 #endif 27 27 28 + #ifdef CONFIG_DYNAMIC_FTRACE 29 + 28 30 /* Arch override because MIPS doesn't need to run this from stop_machine() */ 29 31 void arch_ftrace_update_code(int command) 30 32 { 31 33 ftrace_modify_all_code(command); 32 34 } 35 + 36 + #endif 33 37 34 38 /* 35 39 * Check if the address is in kernel space
+7 -6
arch/mips/kernel/idle.c
··· 93 93 } 94 94 95 95 /* 96 - * The Au1xxx wait is available only if using 32khz counter or 97 - * external timer source, but specifically not CP0 Counter. 98 - * alchemy/common/time.c may override cpu_wait! 96 + * Au1 'wait' is only useful when the 32kHz counter is used as timer, 97 + * since coreclock (and the cp0 counter) stops upon executing it. Only an 98 + * interrupt can wake it, so they must be enabled before entering idle modes. 99 99 */ 100 100 static void au1k_wait(void) 101 101 { 102 + unsigned long c0status = read_c0_status() | 1; /* irqs on */ 103 + 102 104 __asm__( 103 105 " .set mips3 \n" 104 106 " cache 0x14, 0(%0) \n" 105 107 " cache 0x14, 32(%0) \n" 106 108 " sync \n" 107 - " nop \n" 109 + " mtc0 %1, $12 \n" /* wr c0status */ 108 110 " wait \n" 109 111 " nop \n" 110 112 " nop \n" 111 113 " nop \n" 112 114 " nop \n" 113 115 " .set mips0 \n" 114 - : : "r" (au1k_wait)); 115 - local_irq_enable(); 116 + : : "r" (au1k_wait), "r" (c0status)); 116 117 } 117 118 118 119 static int __initdata nowait;
+54 -29
arch/mips/kvm/kvm_mips.c
··· 485 485 return -ENOIOCTLCMD; 486 486 } 487 487 488 - #define KVM_REG_MIPS_CP0_INDEX (0x10000 + 8 * 0 + 0) 489 - #define KVM_REG_MIPS_CP0_ENTRYLO0 (0x10000 + 8 * 2 + 0) 490 - #define KVM_REG_MIPS_CP0_ENTRYLO1 (0x10000 + 8 * 3 + 0) 491 - #define KVM_REG_MIPS_CP0_CONTEXT (0x10000 + 8 * 4 + 0) 492 - #define KVM_REG_MIPS_CP0_USERLOCAL (0x10000 + 8 * 4 + 2) 493 - #define KVM_REG_MIPS_CP0_PAGEMASK (0x10000 + 8 * 5 + 0) 494 - #define KVM_REG_MIPS_CP0_PAGEGRAIN (0x10000 + 8 * 5 + 1) 495 - #define KVM_REG_MIPS_CP0_WIRED (0x10000 + 8 * 6 + 0) 496 - #define KVM_REG_MIPS_CP0_HWRENA (0x10000 + 8 * 7 + 0) 497 - #define KVM_REG_MIPS_CP0_BADVADDR (0x10000 + 8 * 8 + 0) 498 - #define KVM_REG_MIPS_CP0_COUNT (0x10000 + 8 * 9 + 0) 499 - #define KVM_REG_MIPS_CP0_ENTRYHI (0x10000 + 8 * 10 + 0) 500 - #define KVM_REG_MIPS_CP0_COMPARE (0x10000 + 8 * 11 + 0) 501 - #define KVM_REG_MIPS_CP0_STATUS (0x10000 + 8 * 12 + 0) 502 - #define KVM_REG_MIPS_CP0_CAUSE (0x10000 + 8 * 13 + 0) 503 - #define KVM_REG_MIPS_CP0_EBASE (0x10000 + 8 * 15 + 1) 504 - #define KVM_REG_MIPS_CP0_CONFIG (0x10000 + 8 * 16 + 0) 505 - #define KVM_REG_MIPS_CP0_CONFIG1 (0x10000 + 8 * 16 + 1) 506 - #define KVM_REG_MIPS_CP0_CONFIG2 (0x10000 + 8 * 16 + 2) 507 - #define KVM_REG_MIPS_CP0_CONFIG3 (0x10000 + 8 * 16 + 3) 508 - #define KVM_REG_MIPS_CP0_CONFIG7 (0x10000 + 8 * 16 + 7) 509 - #define KVM_REG_MIPS_CP0_XCONTEXT (0x10000 + 8 * 20 + 0) 510 - #define KVM_REG_MIPS_CP0_ERROREPC (0x10000 + 8 * 30 + 0) 488 + #define MIPS_CP0_32(_R, _S) \ 489 + (KVM_REG_MIPS | KVM_REG_SIZE_U32 | 0x10000 | (8 * (_R) + (_S))) 490 + 491 + #define MIPS_CP0_64(_R, _S) \ 492 + (KVM_REG_MIPS | KVM_REG_SIZE_U64 | 0x10000 | (8 * (_R) + (_S))) 493 + 494 + #define KVM_REG_MIPS_CP0_INDEX MIPS_CP0_32(0, 0) 495 + #define KVM_REG_MIPS_CP0_ENTRYLO0 MIPS_CP0_64(2, 0) 496 + #define KVM_REG_MIPS_CP0_ENTRYLO1 MIPS_CP0_64(3, 0) 497 + #define KVM_REG_MIPS_CP0_CONTEXT MIPS_CP0_64(4, 0) 498 + #define KVM_REG_MIPS_CP0_USERLOCAL MIPS_CP0_64(4, 2) 499 + #define KVM_REG_MIPS_CP0_PAGEMASK MIPS_CP0_32(5, 0) 500 + #define KVM_REG_MIPS_CP0_PAGEGRAIN MIPS_CP0_32(5, 1) 501 + #define KVM_REG_MIPS_CP0_WIRED MIPS_CP0_32(6, 0) 502 + #define KVM_REG_MIPS_CP0_HWRENA MIPS_CP0_32(7, 0) 503 + #define KVM_REG_MIPS_CP0_BADVADDR MIPS_CP0_64(8, 0) 504 + #define KVM_REG_MIPS_CP0_COUNT MIPS_CP0_32(9, 0) 505 + #define KVM_REG_MIPS_CP0_ENTRYHI MIPS_CP0_64(10, 0) 506 + #define KVM_REG_MIPS_CP0_COMPARE MIPS_CP0_32(11, 0) 507 + #define KVM_REG_MIPS_CP0_STATUS MIPS_CP0_32(12, 0) 508 + #define KVM_REG_MIPS_CP0_CAUSE MIPS_CP0_32(13, 0) 509 + #define KVM_REG_MIPS_CP0_EBASE MIPS_CP0_64(15, 1) 510 + #define KVM_REG_MIPS_CP0_CONFIG MIPS_CP0_32(16, 0) 511 + #define KVM_REG_MIPS_CP0_CONFIG1 MIPS_CP0_32(16, 1) 512 + #define KVM_REG_MIPS_CP0_CONFIG2 MIPS_CP0_32(16, 2) 513 + #define KVM_REG_MIPS_CP0_CONFIG3 MIPS_CP0_32(16, 3) 514 + #define KVM_REG_MIPS_CP0_CONFIG7 MIPS_CP0_32(16, 7) 515 + #define KVM_REG_MIPS_CP0_XCONTEXT MIPS_CP0_64(20, 0) 516 + #define KVM_REG_MIPS_CP0_ERROREPC MIPS_CP0_64(30, 0) 511 517 512 518 static u64 kvm_mips_get_one_regs[] = { 513 519 KVM_REG_MIPS_R0, ··· 573 567 static int kvm_mips_get_reg(struct kvm_vcpu *vcpu, 574 568 const struct kvm_one_reg *reg) 575 569 { 576 - u64 __user *uaddr = (u64 __user *)(long)reg->addr; 577 - 578 570 struct mips_coproc *cop0 = vcpu->arch.cop0; 579 571 s64 v; 580 572 ··· 635 631 default: 636 632 return -EINVAL; 637 633 } 638 - return put_user(v, uaddr); 634 + if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) { 635 + u64 __user *uaddr64 = (u64 __user *)(long)reg->addr; 636 + return put_user(v, uaddr64); 637 + } else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) { 638 + u32 __user *uaddr32 = (u32 __user *)(long)reg->addr; 639 + u32 v32 = (u32)v; 640 + return put_user(v32, uaddr32); 641 + } else { 642 + return -EINVAL; 643 + } 639 644 } 640 645 641 646 static int kvm_mips_set_reg(struct kvm_vcpu *vcpu, 642 647 const struct kvm_one_reg *reg) 643 648 { 644 - u64 __user *uaddr = (u64 __user *)(long)reg->addr; 645 649 struct mips_coproc *cop0 = vcpu->arch.cop0; 646 650 u64 v; 647 651 648 - if (get_user(v, uaddr) != 0) 649 - return -EFAULT; 652 + if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U64) { 653 + u64 __user *uaddr64 = (u64 __user *)(long)reg->addr; 654 + 655 + if (get_user(v, uaddr64) != 0) 656 + return -EFAULT; 657 + } else if ((reg->id & KVM_REG_SIZE_MASK) == KVM_REG_SIZE_U32) { 658 + u32 __user *uaddr32 = (u32 __user *)(long)reg->addr; 659 + s32 v32; 660 + 661 + if (get_user(v32, uaddr32) != 0) 662 + return -EFAULT; 663 + v = (s64)v32; 664 + } else { 665 + return -EINVAL; 666 + } 650 667 651 668 switch (reg->id) { 652 669 case KVM_REG_MIPS_R0:
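In the set_reg path above, a 32-bit user value is sign-extended into the vcpu's 64-bit slot (`v = (s64)v32`), matching the one-reg convention that narrower registers travel sign-extended. The widening step in isolation; the function name `widen_u32_reg` is hypothetical, not from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of the widening done for U32 registers in kvm_mips_set_reg:
 * reinterpret the 32-bit user value as signed, then extend to 64 bits. */
static int64_t widen_u32_reg(uint32_t v32)
{
    return (int64_t)(int32_t)v32;
}
```

The double cast is the whole trick: casting uint32_t straight to int64_t would zero-extend, so negative register values such as a sign-extended CP0_BADVADDR would be corrupted.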
+10 -7
arch/powerpc/include/asm/cputable.h
··· 176 176 #define CPU_FTR_CFAR LONG_ASM_CONST(0x0100000000000000) 177 177 #define CPU_FTR_HAS_PPR LONG_ASM_CONST(0x0200000000000000) 178 178 #define CPU_FTR_DAWR LONG_ASM_CONST(0x0400000000000000) 179 + #define CPU_FTR_DABRX LONG_ASM_CONST(0x0800000000000000) 179 180 180 181 #ifndef __ASSEMBLY__ 181 182 ··· 395 394 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_201 | \ 396 395 CPU_FTR_ALTIVEC_COMP | CPU_FTR_CAN_NAP | CPU_FTR_MMCRA | \ 397 396 CPU_FTR_CP_USE_DCBTZ | CPU_FTR_STCX_CHECKS_ADDRESS | \ 398 - CPU_FTR_HVMODE) 397 + CPU_FTR_HVMODE | CPU_FTR_DABRX) 399 398 #define CPU_FTRS_POWER5 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 400 399 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 401 400 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 402 401 CPU_FTR_COHERENT_ICACHE | CPU_FTR_PURR | \ 403 - CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB) 402 + CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_DABRX) 404 403 #define CPU_FTRS_POWER6 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 405 404 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 406 405 CPU_FTR_MMCRA | CPU_FTR_SMT | \ 407 406 CPU_FTR_COHERENT_ICACHE | \ 408 407 CPU_FTR_PURR | CPU_FTR_SPURR | CPU_FTR_REAL_LE | \ 409 408 CPU_FTR_DSCR | CPU_FTR_UNALIGNED_LD_STD | \ 410 - CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_CFAR) 409 + CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_CFAR | \ 410 + CPU_FTR_DABRX) 411 411 #define CPU_FTRS_POWER7 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 412 412 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\ 413 413 CPU_FTR_MMCRA | CPU_FTR_SMT | \ ··· 417 415 CPU_FTR_DSCR | CPU_FTR_SAO | CPU_FTR_ASYM_SMT | \ 418 416 CPU_FTR_STCX_CHECKS_ADDRESS | CPU_FTR_POPCNTB | CPU_FTR_POPCNTD | \ 419 417 CPU_FTR_ICSWX | CPU_FTR_CFAR | CPU_FTR_HVMODE | \ 420 - CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR) 418 + CPU_FTR_VMX_COPY | CPU_FTR_HAS_PPR | CPU_FTR_DABRX) 421 419 #define CPU_FTRS_POWER8 (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 422 420 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | CPU_FTR_ARCH_206 |\ 423 421 CPU_FTR_MMCRA | CPU_FTR_SMT | \ ··· 432 430 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_CTRL | \ 433 431 CPU_FTR_ALTIVEC_COMP | CPU_FTR_MMCRA | CPU_FTR_SMT | \ 434 432 CPU_FTR_PAUSE_ZERO | CPU_FTR_CELL_TB_BUG | CPU_FTR_CP_USE_DCBTZ | \ 435 - CPU_FTR_UNALIGNED_LD_STD) 433 + CPU_FTR_UNALIGNED_LD_STD | CPU_FTR_DABRX) 436 434 #define CPU_FTRS_PA6T (CPU_FTR_USE_TB | CPU_FTR_LWSYNC | \ 437 435 CPU_FTR_PPCAS_ARCH_V2 | CPU_FTR_ALTIVEC_COMP | \ 438 436 CPU_FTR_PURR | CPU_FTR_REAL_LE | CPU_FTR_DABRX) 439 437 #define CPU_FTRS_COMPATIBLE (CPU_FTR_USE_TB | CPU_FTR_PPCAS_ARCH_V2) 440 438 441 439 #define CPU_FTRS_A2 (CPU_FTR_USE_TB | CPU_FTR_SMT | CPU_FTR_DBELL | \ 442 440 CPU_FTR_NOEXECUTE | CPU_FTR_NODSISRALIGN | \ 441 + CPU_FTR_ICSWX | CPU_FTR_DABRX ) 443 442 444 443 #ifdef __powerpc64__ 445 444 #ifdef CONFIG_PPC_BOOK3E
+1 -1
arch/powerpc/include/asm/exception-64s.h
··· 513 513 */ 514 514 #define STD_EXCEPTION_COMMON_ASYNC(trap, label, hdlr) \ 515 515 EXCEPTION_COMMON(trap, label, hdlr, ret_from_except_lite, \ 516 - FINISH_NAP;RUNLATCH_ON;DISABLE_INTS) 516 + FINISH_NAP;DISABLE_INTS;RUNLATCH_ON) 517 517 518 518 /* 519 519 * When the idle code in power4_idle puts the CPU into NAP mode,
+10 -6
arch/powerpc/include/asm/kvm_asm.h
··· 54 54 #define BOOKE_INTERRUPT_DEBUG 15 55 55 56 56 /* E500 */ 57 - #define BOOKE_INTERRUPT_SPE_UNAVAIL 32 58 - #define BOOKE_INTERRUPT_SPE_FP_DATA 33 57 + #define BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL 32 58 + #define BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST 33 59 + /* 60 + * TODO: Unify 32-bit and 64-bit kernel exception handlers to use same defines 61 + */ 62 + #define BOOKE_INTERRUPT_SPE_UNAVAIL BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL 63 + #define BOOKE_INTERRUPT_SPE_FP_DATA BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST 64 + #define BOOKE_INTERRUPT_ALTIVEC_UNAVAIL BOOKE_INTERRUPT_SPE_ALTIVEC_UNAVAIL 65 + #define BOOKE_INTERRUPT_ALTIVEC_ASSIST \ 66 + BOOKE_INTERRUPT_SPE_FP_DATA_ALTIVEC_ASSIST 59 67 #define BOOKE_INTERRUPT_SPE_FP_ROUND 34 60 68 #define BOOKE_INTERRUPT_PERFORMANCE_MONITOR 35 61 69 #define BOOKE_INTERRUPT_DOORBELL 36 ··· 74 66 #define BOOKE_INTERRUPT_GUEST_DBELL_CRIT 39 75 67 #define BOOKE_INTERRUPT_HV_SYSCALL 40 76 68 #define BOOKE_INTERRUPT_HV_PRIV 41 77 - 78 - /* altivec */ 79 - #define BOOKE_INTERRUPT_ALTIVEC_UNAVAIL 42 80 - #define BOOKE_INTERRUPT_ALTIVEC_ASSIST 43 81 69 82 70 /* book3s */ 83 71
+4 -4
arch/powerpc/kernel/cputable.c
··· 452 452 .mmu_features = MMU_FTRS_POWER8, 453 453 .icache_bsize = 128, 454 454 .dcache_bsize = 128, 455 - .oprofile_type = PPC_OPROFILE_POWER4, 456 - .oprofile_cpu_type = 0, 455 + .oprofile_type = PPC_OPROFILE_INVALID, 456 + .oprofile_cpu_type = "ppc64/ibm-compat-v1", 457 457 .cpu_setup = __setup_cpu_power8, 458 458 .cpu_restore = __restore_cpu_power8, 459 459 .platform = "power8", ··· 506 506 .dcache_bsize = 128, 507 507 .num_pmcs = 6, 508 508 .pmc_type = PPC_PMC_IBM, 509 - .oprofile_cpu_type = 0, 510 - .oprofile_type = PPC_OPROFILE_POWER4, 509 + .oprofile_cpu_type = "ppc64/power8", 510 + .oprofile_type = PPC_OPROFILE_INVALID, 511 511 .cpu_setup = __setup_cpu_power8, 512 512 .cpu_restore = __restore_cpu_power8, 513 513 .platform = "power8",
-28
arch/powerpc/kernel/entry_64.S
··· 465 465 std r0, THREAD_EBBHR(r3) 466 466 mfspr r0, SPRN_EBBRR 467 467 std r0, THREAD_EBBRR(r3) 468 - 469 - /* PMU registers made user read/(write) by EBB */ 470 - mfspr r0, SPRN_SIAR 471 - std r0, THREAD_SIAR(r3) 472 - mfspr r0, SPRN_SDAR 473 - std r0, THREAD_SDAR(r3) 474 - mfspr r0, SPRN_SIER 475 - std r0, THREAD_SIER(r3) 476 - mfspr r0, SPRN_MMCR0 477 - std r0, THREAD_MMCR0(r3) 478 - mfspr r0, SPRN_MMCR2 479 - std r0, THREAD_MMCR2(r3) 480 - mfspr r0, SPRN_MMCRA 481 - std r0, THREAD_MMCRA(r3) 482 468 END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) 483 469 #endif 484 470 ··· 566 580 mtspr SPRN_EBBHR, r0 567 581 ld r0, THREAD_EBBRR(r4) 568 582 mtspr SPRN_EBBRR, r0 569 - 570 - /* PMU registers made user read/(write) by EBB */ 571 - ld r0, THREAD_SIAR(r4) 572 - mtspr SPRN_SIAR, r0 573 - ld r0, THREAD_SDAR(r4) 574 - mtspr SPRN_SDAR, r0 575 - ld r0, THREAD_SIER(r4) 576 - mtspr SPRN_SIER, r0 577 - ld r0, THREAD_MMCR0(r4) 578 - mtspr SPRN_MMCR0, r0 579 - ld r0, THREAD_MMCR2(r4) 580 - mtspr SPRN_MMCR2, r0 581 - ld r0, THREAD_MMCRA(r4) 582 - mtspr SPRN_MMCRA, r0 583 583 584 584 ld r0,THREAD_TAR(r4) 585 585 mtspr SPRN_TAR,r0
+27 -65
arch/powerpc/kernel/exceptions-64s.S
··· 454 454 xori r10,r10,(MSR_FE0|MSR_FE1) 455 455 mtmsrd r10 456 456 sync 457 - fmr 0,0 458 - fmr 1,1 459 - fmr 2,2 460 - fmr 3,3 461 - fmr 4,4 462 - fmr 5,5 463 - fmr 6,6 464 - fmr 7,7 465 - fmr 8,8 466 - fmr 9,9 467 - fmr 10,10 468 - fmr 11,11 469 - fmr 12,12 470 - fmr 13,13 471 - fmr 14,14 472 - fmr 15,15 473 - fmr 16,16 474 - fmr 17,17 475 - fmr 18,18 476 - fmr 19,19 477 - fmr 20,20 478 - fmr 21,21 479 - fmr 22,22 480 - fmr 23,23 481 - fmr 24,24 482 - fmr 25,25 483 - fmr 26,26 484 - fmr 27,27 485 - fmr 28,28 486 - fmr 29,29 487 - fmr 30,30 488 - fmr 31,31 457 + 458 + #define FMR2(n) fmr (n), (n) ; fmr n+1, n+1 459 + #define FMR4(n) FMR2(n) ; FMR2(n+2) 460 + #define FMR8(n) FMR4(n) ; FMR4(n+4) 461 + #define FMR16(n) FMR8(n) ; FMR8(n+8) 462 + #define FMR32(n) FMR16(n) ; FMR16(n+16) 463 + FMR32(0) 464 + 489 465 FTR_SECTION_ELSE 490 466 /* 491 467 * To denormalise we need to move a copy of the register to itself. ··· 471 495 oris r10,r10,MSR_VSX@h 472 496 mtmsrd r10 473 497 sync 474 - XVCPSGNDP(0,0,0) 475 - XVCPSGNDP(1,1,1) 476 - XVCPSGNDP(2,2,2) 477 - XVCPSGNDP(3,3,3) 478 - XVCPSGNDP(4,4,4) 479 - XVCPSGNDP(5,5,5) 480 - XVCPSGNDP(6,6,6) 481 - XVCPSGNDP(7,7,7) 482 - XVCPSGNDP(8,8,8) 483 - XVCPSGNDP(9,9,9) 484 - XVCPSGNDP(10,10,10) 485 - XVCPSGNDP(11,11,11) 486 - XVCPSGNDP(12,12,12) 487 - XVCPSGNDP(13,13,13) 488 - XVCPSGNDP(14,14,14) 489 - XVCPSGNDP(15,15,15) 490 - XVCPSGNDP(16,16,16) 491 - XVCPSGNDP(17,17,17) 492 - XVCPSGNDP(18,18,18) 493 - XVCPSGNDP(19,19,19) 494 - XVCPSGNDP(20,20,20) 495 - XVCPSGNDP(21,21,21) 496 - XVCPSGNDP(22,22,22) 497 - XVCPSGNDP(23,23,23) 498 - XVCPSGNDP(24,24,24) 499 - XVCPSGNDP(25,25,25) 500 - XVCPSGNDP(26,26,26) 501 - XVCPSGNDP(27,27,27) 502 - XVCPSGNDP(28,28,28) 503 - XVCPSGNDP(29,29,29) 504 - XVCPSGNDP(30,30,30) 505 - XVCPSGNDP(31,31,31) 498 + 499 + #define XVCPSGNDP2(n) XVCPSGNDP(n,n,n) ; XVCPSGNDP(n+1,n+1,n+1) 500 + #define XVCPSGNDP4(n) XVCPSGNDP2(n) ; XVCPSGNDP2(n+2) 501 + #define XVCPSGNDP8(n) XVCPSGNDP4(n) ; XVCPSGNDP4(n+4) 502 + #define XVCPSGNDP16(n) XVCPSGNDP8(n) ; XVCPSGNDP8(n+8) 503 + #define XVCPSGNDP32(n) XVCPSGNDP16(n) ; XVCPSGNDP16(n+16) 504 + XVCPSGNDP32(0) 505 + 506 506 ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_206) 507 + 508 + BEGIN_FTR_SECTION 509 + b denorm_done 510 + END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) 511 + /* 512 + * To denormalise we need to move a copy of the register to itself. 513 + * For POWER8 we need to do that for all 64 VSX registers 514 + */ 515 + XVCPSGNDP32(32) 516 + denorm_done: 507 517 mtspr SPRN_HSRR0,r11 508 518 mtcrf 0x80,r9 509 519 ld r9,PACA_EXGEN+EX_R9(r13) ··· 683 721 STD_EXCEPTION_COMMON(0xb00, trap_0b, .unknown_exception) 684 722 STD_EXCEPTION_COMMON(0xd00, single_step, .single_step_exception) 685 723 STD_EXCEPTION_COMMON(0xe00, trap_0e, .unknown_exception) 686 - STD_EXCEPTION_COMMON(0xe40, emulation_assist, .program_check_exception) 724 + STD_EXCEPTION_COMMON(0xe40, emulation_assist, .emulation_assist_interrupt) 687 725 STD_EXCEPTION_COMMON(0xe60, hmi_exception, .unknown_exception) 688 726 #ifdef CONFIG_PPC_DOORBELL 689 727 STD_EXCEPTION_COMMON_ASYNC(0xe80, h_doorbell, .doorbell_exception)
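Instead of 32 hand-written fmr/XVCPSGNDP lines, the patch builds the sequence with doubling macros: each level expands to two copies of the level below, so five levels cover 32 registers. The same preprocessor trick in C, counting expansions instead of emitting instructions (`touch` and the `T*` macro names are illustrative, not from the patch):

```c
#include <assert.h>

static int calls;

static void touch(int n)
{
    (void)n;
    calls += 1;  /* stands in for one emitted instruction */
}

/* Same doubling pattern as FMR32/XVCPSGNDP32 in the patch:
 * each level expands to two copies of the level below. */
#define T2(n)  touch(n);  touch((n) + 1)
#define T4(n)  T2(n);  T2((n) + 2)
#define T8(n)  T4(n);  T4((n) + 4)
#define T16(n) T8(n);  T8((n) + 8)
#define T32(n) T16(n); T16((n) + 16)

static int expand_all(void)
{
    calls = 0;
    T32(0);  /* expands to touch(0) .. touch(31), 32 calls in total */
    return calls;
}
```

This is also why the POWER8 hunk can reuse `XVCPSGNDP32(32)` for the upper half of the VSX file: the starting index is just the macro argument.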
+1 -1
arch/powerpc/kernel/irq.c
··· 162 162 * in case we also had a rollover while hard disabled 163 163 */ 164 164 local_paca->irq_happened &= ~PACA_IRQ_DEC; 165 - if (decrementer_check_overflow()) 165 + if ((happened & PACA_IRQ_DEC) || decrementer_check_overflow()) 166 166 return 0x900; 167 167 168 168 /* Finally check if an external interrupt happened */
+3 -1
arch/powerpc/kernel/pci-common.c
··· 827 827 } 828 828 for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) { 829 829 struct resource *res = dev->resource + i; 830 + struct pci_bus_region reg; 830 831 if (!res->flags) 831 832 continue; 832 833 ··· 836 835 * at 0 as unset as well, except if PCI_PROBE_ONLY is also set 837 836 * since in that case, we don't want to re-assign anything 838 837 */ 838 + pcibios_resource_to_bus(dev, &reg, res); 839 839 if (pci_has_flag(PCI_REASSIGN_ALL_RSRC) || 840 - (res->start == 0 && !pci_has_flag(PCI_PROBE_ONLY))) { 840 + (reg.start == 0 && !pci_has_flag(PCI_PROBE_ONLY))) { 841 841 /* Only print message if not re-assigning */ 842 842 if (!pci_has_flag(PCI_REASSIGN_ALL_RSRC)) 843 843 pr_debug("PCI:%s Resource %d %016llx-%016llx [%x] "
+4 -3
arch/powerpc/kernel/process.c
··· 399 399 static inline int __set_dabr(unsigned long dabr, unsigned long dabrx) 400 400 { 401 401 mtspr(SPRN_DABR, dabr); 402 - mtspr(SPRN_DABRX, dabrx); 402 + if (cpu_has_feature(CPU_FTR_DABRX)) 403 + mtspr(SPRN_DABRX, dabrx); 403 404 return 0; 404 405 } 405 406 #else ··· 1369 1368 1370 1369 #ifdef CONFIG_PPC64 1371 1370 /* Called with hard IRQs off */ 1372 - void __ppc64_runlatch_on(void) 1371 + void notrace __ppc64_runlatch_on(void) 1373 1372 { 1374 1373 struct thread_info *ti = current_thread_info(); 1375 1374 unsigned long ctrl; ··· 1382 1381 } 1383 1382 1384 1383 /* Called with hard IRQs off */ 1385 - void __ppc64_runlatch_off(void) 1384 + void notrace __ppc64_runlatch_off(void) 1386 1385 { 1387 1386 struct thread_info *ti = current_thread_info(); 1388 1387 unsigned long ctrl;
+10
arch/powerpc/kernel/traps.c
··· 1165 1165 exception_exit(prev_state); 1166 1166 } 1167 1167 1168 + /* 1169 + * This occurs when running in hypervisor mode on POWER6 or later 1170 + * and an illegal instruction is encountered. 1171 + */ 1172 + void __kprobes emulation_assist_interrupt(struct pt_regs *regs) 1173 + { 1174 + regs->msr |= REASON_ILLEGAL; 1175 + program_check_exception(regs); 1176 + } 1177 + 1168 1178 void alignment_exception(struct pt_regs *regs) 1169 1179 { 1170 1180 enum ctx_state prev_state = exception_enter();
+5
arch/powerpc/kvm/44x_tlb.c
··· 441 441 struct kvmppc_vcpu_44x *vcpu_44x = to_44x(vcpu); 442 442 struct kvmppc_44x_tlbe *tlbe; 443 443 unsigned int gtlb_index; 444 + int idx; 444 445 445 446 gtlb_index = kvmppc_get_gpr(vcpu, ra); 446 447 if (gtlb_index >= KVM44x_GUEST_TLB_SIZE) { ··· 474 473 return EMULATE_FAIL; 475 474 } 476 475 476 + idx = srcu_read_lock(&vcpu->kvm->srcu); 477 + 477 478 if (tlbe_is_host_safe(vcpu, tlbe)) { 478 479 gva_t eaddr; 479 480 gpa_t gpaddr; ··· 491 488 492 489 kvmppc_mmu_map(vcpu, eaddr, gpaddr, gtlb_index); 493 490 } 491 + 492 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 494 493 495 494 trace_kvm_gtlb_write(gtlb_index, tlbe->tid, tlbe->word0, tlbe->word1, 496 495 tlbe->word2);
+18
arch/powerpc/kvm/booke.c
··· 832 832 { 833 833 int r = RESUME_HOST; 834 834 int s; 835 + int idx; 836 + 837 + #ifdef CONFIG_PPC64 838 + WARN_ON(local_paca->irq_happened != 0); 839 + #endif 840 + 841 + /* 842 + * We enter with interrupts disabled in hardware, but 843 + * we need to call hard_irq_disable anyway to ensure that 844 + * the software state is kept in sync. 845 + */ 846 + hard_irq_disable(); 835 847 836 848 /* update before a new last_exit_type is rewritten */ 837 849 kvmppc_update_timing_stats(vcpu); ··· 1065 1053 break; 1066 1054 } 1067 1055 1056 + idx = srcu_read_lock(&vcpu->kvm->srcu); 1057 + 1068 1058 gpaddr = kvmppc_mmu_xlate(vcpu, gtlb_index, eaddr); 1069 1059 gfn = gpaddr >> PAGE_SHIFT; 1070 1060 ··· 1089 1075 kvmppc_account_exit(vcpu, MMIO_EXITS); 1090 1076 } 1091 1077 1078 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 1092 1079 break; 1093 1080 } 1094 1081 ··· 1113 1098 1114 1099 kvmppc_account_exit(vcpu, ITLB_VIRT_MISS_EXITS); 1115 1100 1101 + idx = srcu_read_lock(&vcpu->kvm->srcu); 1102 + 1116 1103 gpaddr = kvmppc_mmu_xlate(vcpu, gtlb_index, eaddr); 1117 1104 gfn = gpaddr >> PAGE_SHIFT; 1118 1105 ··· 1131 1114 kvmppc_booke_queue_irqprio(vcpu, BOOKE_IRQPRIO_MACHINE_CHECK); 1132 1115 } 1133 1116 1117 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 1134 1118 break; 1135 1119 } 1136 1120
+5
arch/powerpc/kvm/e500_mmu.c
··· 396 396 struct kvm_book3e_206_tlb_entry *gtlbe; 397 397 int tlbsel, esel; 398 398 int recal = 0; 399 + int idx; 399 400 400 401 tlbsel = get_tlb_tlbsel(vcpu); 401 402 esel = get_tlb_esel(vcpu, tlbsel); ··· 431 430 kvmppc_set_tlb1map_range(vcpu, gtlbe); 432 431 } 433 432 433 + idx = srcu_read_lock(&vcpu->kvm->srcu); 434 + 434 435 /* Invalidate shadow mappings for the about-to-be-clobbered TLBE. */ 435 436 if (tlbe_is_host_safe(vcpu, gtlbe)) { 436 437 u64 eaddr = get_tlb_eaddr(gtlbe); ··· 446 443 /* Premap the faulting page */ 447 444 kvmppc_mmu_map(vcpu, eaddr, raddr, index_of(tlbsel, esel)); 448 445 } 446 + 447 + srcu_read_unlock(&vcpu->kvm->srcu, idx); 449 448 450 449 kvmppc_set_exit_type(vcpu, EMULATED_TLBWE_EXITS); 451 450 return EMULATE_DONE;
-2
arch/powerpc/kvm/e500mc.c
··· 177 177 r = 0; 178 178 else if (strcmp(cur_cpu_spec->cpu_name, "e5500") == 0) 179 179 r = 0; 180 - else if (strcmp(cur_cpu_spec->cpu_name, "e6500") == 0) 181 - r = 0; 182 180 else 183 181 r = -ENOTSUPP; 184 182
+1 -1
arch/powerpc/perf/core-book3s.c
··· 1758 1758 } 1759 1759 } 1760 1760 } 1761 - if ((!found) && printk_ratelimit()) 1761 + if (!found && !nmi && printk_ratelimit()) 1762 1762 printk(KERN_WARNING "Can't find PMC that caused IRQ\n"); 1763 1763 1764 1764 /*
+5 -7
arch/powerpc/platforms/pseries/eeh_pseries.c
··· 83 83 ibm_configure_pe = rtas_token("ibm,configure-pe"); 84 84 ibm_configure_bridge = rtas_token("ibm,configure-bridge"); 85 85 86 - /* necessary sanity check */ 86 + /* 87 + * Necessary sanity check. We needn't check "get-config-addr-info" 88 + * and its variant since the old firmware probably support address 89 + * of domain/bus/slot/function for EEH RTAS operations. 90 + */ 87 91 if (ibm_set_eeh_option == RTAS_UNKNOWN_SERVICE) { 88 92 pr_warning("%s: RTAS service <ibm,set-eeh-option> invalid\n", 89 93 __func__); ··· 104 100 return -EINVAL; 105 101 } else if (ibm_slot_error_detail == RTAS_UNKNOWN_SERVICE) { 106 102 pr_warning("%s: RTAS service <ibm,slot-error-detail> invalid\n", 107 - __func__); 108 - return -EINVAL; 109 - } else if (ibm_get_config_addr_info2 == RTAS_UNKNOWN_SERVICE && 110 - ibm_get_config_addr_info == RTAS_UNKNOWN_SERVICE) { 111 - pr_warning("%s: RTAS service <ibm,get-config-addr-info2> and " 112 - "<ibm,get-config-addr-info> invalid\n", 113 103 __func__); 114 104 return -EINVAL; 115 105 } else if (ibm_configure_pe == RTAS_UNKNOWN_SERVICE &&
+22 -10
arch/s390/include/asm/pgtable.h
··· 623 623 " csg %0,%1,%2\n" 624 624 " jl 0b\n" 625 625 : "=&d" (old), "=&d" (new), "=Q" (ptep[PTRS_PER_PTE]) 626 - : "Q" (ptep[PTRS_PER_PTE]) : "cc"); 626 + : "Q" (ptep[PTRS_PER_PTE]) : "cc", "memory"); 627 627 #endif 628 628 return __pgste(new); 629 629 } ··· 635 635 " nihh %1,0xff7f\n" /* clear RCP_PCL_BIT */ 636 636 " stg %1,%0\n" 637 637 : "=Q" (ptep[PTRS_PER_PTE]) 638 - : "d" (pgste_val(pgste)), "Q" (ptep[PTRS_PER_PTE]) : "cc"); 638 + : "d" (pgste_val(pgste)), "Q" (ptep[PTRS_PER_PTE]) 639 + : "cc", "memory"); 639 640 preempt_enable(); 641 + #endif 642 + } 643 + 644 + static inline void pgste_set(pte_t *ptep, pgste_t pgste) 645 + { 646 + #ifdef CONFIG_PGSTE 647 + *(pgste_t *)(ptep + PTRS_PER_PTE) = pgste; 640 648 #endif 641 649 } 642 650 ··· 712 704 { 713 705 #ifdef CONFIG_PGSTE 714 706 unsigned long address; 715 - unsigned long okey, nkey; 707 + unsigned long nkey; 716 708 717 709 if (pte_val(entry) & _PAGE_INVALID) 718 710 return; 711 + VM_BUG_ON(!(pte_val(*ptep) & _PAGE_INVALID)); 719 712 address = pte_val(entry) & PAGE_MASK; 720 - okey = nkey = page_get_storage_key(address); 721 - nkey &= ~(_PAGE_ACC_BITS | _PAGE_FP_BIT); 722 - /* Set page access key and fetch protection bit from pgste */ 723 - nkey |= (pgste_val(pgste) & (RCP_ACC_BITS | RCP_FP_BIT)) >> 56; 724 - if (okey != nkey) 725 - page_set_storage_key(address, nkey, 0); 713 + /* 714 + * Set page access key and fetch protection bit from pgste. 715 + * The guest C/R information is still in the PGSTE, set real 716 + * key C/R to 0. 717 + */ 718 + nkey = (pgste_val(pgste) & (RCP_ACC_BITS | RCP_FP_BIT)) >> 56; 719 + page_set_storage_key(address, nkey, 0); 726 720 #endif 727 721 } 728 722 ··· 1109 1099 if (!mm_exclusive(mm)) 1110 1100 __ptep_ipte(address, ptep); 1111 1101 1112 - if (mm_has_pgste(mm)) 1102 + if (mm_has_pgste(mm)) { 1113 1103 pgste = pgste_update_all(&pte, pgste); 1104 + pgste_set(ptep, pgste); 1105 + } 1114 1106 return pte; 1115 1107 } 1116 1108
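With the change above, the storage key written to hardware is derived solely from the PGSTE's access-key and fetch-protection bits, with the real key's change/reference bits left zero. A sketch of just that bit extraction; the `RCP_*` mask values below are assumptions patterned on the kernel's definitions (top byte of the PGSTE), not confirmed by this diff:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed PGSTE layout: access key (4 bits) and fetch-protection bit
 * live in the top byte, patterned on RCP_ACC_BITS / RCP_FP_BIT. */
#define RCP_ACC_BITS 0xf000000000000000ULL
#define RCP_FP_BIT   0x0800000000000000ULL

/* Mirror of the patched key derivation in ptep_set_pte_at: take ACC
 * and FP from the PGSTE; everything else in the real key stays 0. */
static unsigned int pgste_to_skey(uint64_t pgste)
{
    return (unsigned int)((pgste & (RCP_ACC_BITS | RCP_FP_BIT)) >> 56);
}
```

Dropping the old read-modify-write (`okey`/`nkey` comparison) is safe here because the new `VM_BUG_ON` guarantees the PTE was invalid, so the unconditional `page_set_storage_key(..., nkey, 0)` cannot clobber live C/R state.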
+8 -4
arch/s390/kernel/dumpstack.c
··· 74 74 75 75 static void show_trace(struct task_struct *task, unsigned long *stack) 76 76 { 77 + const unsigned long frame_size = 78 + STACK_FRAME_OVERHEAD + sizeof(struct pt_regs); 77 79 register unsigned long __r15 asm ("15"); 78 80 unsigned long sp; 79 81 ··· 84 82 sp = task ? task->thread.ksp : __r15; 85 83 printk("Call Trace:\n"); 86 84 #ifdef CONFIG_CHECK_STACK 87 - sp = __show_trace(sp, S390_lowcore.panic_stack - 4096, 88 - S390_lowcore.panic_stack); 85 + sp = __show_trace(sp, 86 + S390_lowcore.panic_stack + frame_size - 4096, 87 + S390_lowcore.panic_stack + frame_size); 89 88 #endif 90 - sp = __show_trace(sp, S390_lowcore.async_stack - ASYNC_SIZE, 91 - S390_lowcore.async_stack); 89 + sp = __show_trace(sp, 90 + S390_lowcore.async_stack + frame_size - ASYNC_SIZE, 91 + S390_lowcore.async_stack + frame_size); 92 92 if (task) 93 93 __show_trace(sp, (unsigned long) task_stack_page(task), 94 94 (unsigned long) task_stack_page(task) + THREAD_SIZE);
+64
arch/s390/kernel/irq.c
··· 311 311 spin_unlock(&ma_subclass_lock); 312 312 } 313 313 EXPORT_SYMBOL(measurement_alert_subclass_unregister); 314 + 315 + void synchronize_irq(unsigned int irq) 316 + { 317 + /* 318 + * Not needed, the handler is protected by a lock and IRQs that occur 319 + * after the handler is deleted are just NOPs. 320 + */ 321 + } 322 + EXPORT_SYMBOL_GPL(synchronize_irq); 323 + 324 + #ifndef CONFIG_PCI 325 + 326 + /* Only PCI devices have dynamically-defined IRQ handlers */ 327 + 328 + int request_irq(unsigned int irq, irq_handler_t handler, 329 + unsigned long irqflags, const char *devname, void *dev_id) 330 + { 331 + return -EINVAL; 332 + } 333 + EXPORT_SYMBOL_GPL(request_irq); 334 + 335 + void free_irq(unsigned int irq, void *dev_id) 336 + { 337 + WARN_ON(1); 338 + } 339 + EXPORT_SYMBOL_GPL(free_irq); 340 + 341 + void enable_irq(unsigned int irq) 342 + { 343 + WARN_ON(1); 344 + } 345 + EXPORT_SYMBOL_GPL(enable_irq); 346 + 347 + void disable_irq(unsigned int irq) 348 + { 349 + WARN_ON(1); 350 + } 351 + EXPORT_SYMBOL_GPL(disable_irq); 352 + 353 + #endif /* !CONFIG_PCI */ 354 + 355 + void disable_irq_nosync(unsigned int irq) 356 + { 357 + disable_irq(irq); 358 + } 359 + EXPORT_SYMBOL_GPL(disable_irq_nosync); 360 + 361 + unsigned long probe_irq_on(void) 362 + { 363 + return 0; 364 + } 365 + EXPORT_SYMBOL_GPL(probe_irq_on); 366 + 367 + int probe_irq_off(unsigned long val) 368 + { 369 + return 0; 370 + } 371 + EXPORT_SYMBOL_GPL(probe_irq_off); 372 + 373 + unsigned int probe_irq_mask(unsigned long val) 374 + { 375 + return val; 376 + } 377 + EXPORT_SYMBOL_GPL(probe_irq_mask);
+1 -1
arch/s390/kernel/sclp.S
··· 225 225 ahi %r2,1 226 226 ltr %r0,%r0 # end of string? 227 227 jz .LfinalizemtoS4 228 - chi %r0,0x15 # end of line (NL)? 228 + chi %r0,0x0a # end of line (NL)? 229 229 jz .LfinalizemtoS4 230 230 stc %r0,0(%r6,%r7) # copy to mto 231 231 la %r11,0(%r6,%r7)
-33
arch/s390/pci/pci.c
··· 302 302 return rc; 303 303 } 304 304 305 - void synchronize_irq(unsigned int irq) 306 - { 307 - /* 308 - * Not needed, the handler is protected by a lock and IRQs that occur 309 - * after the handler is deleted are just NOPs. 310 - */ 311 - } 312 - EXPORT_SYMBOL_GPL(synchronize_irq); 313 - 314 305 void enable_irq(unsigned int irq) 315 306 { 316 307 struct msi_desc *msi = irq_get_msi_desc(irq); ··· 317 326 zpci_msi_set_mask_bits(msi, 1, 1); 318 327 } 319 328 EXPORT_SYMBOL_GPL(disable_irq); 320 - 321 - void disable_irq_nosync(unsigned int irq) 322 - { 323 - disable_irq(irq); 324 - } 325 - EXPORT_SYMBOL_GPL(disable_irq_nosync); 326 - 327 - unsigned long probe_irq_on(void) 328 - { 329 - return 0; 330 - } 331 - EXPORT_SYMBOL_GPL(probe_irq_on); 332 - 333 - int probe_irq_off(unsigned long val) 334 - { 335 - return 0; 336 - } 337 - EXPORT_SYMBOL_GPL(probe_irq_off); 338 - 339 - unsigned int probe_irq_mask(unsigned long val) 340 - { 341 - return val; 342 - } 343 - EXPORT_SYMBOL_GPL(probe_irq_mask); 344 329 345 330 void pcibios_fixup_bus(struct pci_bus *bus) 346 331 {
+3 -2
arch/sparc/kernel/prom_common.c
··· 54 54 int of_set_property(struct device_node *dp, const char *name, void *val, int len) 55 55 { 56 56 struct property **prevp; 57 + unsigned long flags; 57 58 void *new_val; 58 59 int err; 59 60 ··· 65 64 err = -ENODEV; 66 65 67 66 mutex_lock(&of_set_property_mutex); 68 - raw_spin_lock(&devtree_lock); 67 + raw_spin_lock_irqsave(&devtree_lock, flags); 69 68 prevp = &dp->properties; 70 69 while (*prevp) { 71 70 struct property *prop = *prevp; ··· 92 91 } 93 92 prevp = &(*prevp)->next; 94 93 } 95 - raw_spin_unlock(&devtree_lock); 94 + raw_spin_unlock_irqrestore(&devtree_lock, flags); 96 95 mutex_unlock(&of_set_property_mutex); 97 96 98 97 /* XXX Upate procfs if necessary... */
-47
arch/x86/boot/compressed/eboot.c
··· 251 251 *size = len; 252 252 } 253 253 254 - static efi_status_t setup_efi_vars(struct boot_params *params) 255 - { 256 - struct setup_data *data; 257 - struct efi_var_bootdata *efidata; 258 - u64 store_size, remaining_size, var_size; 259 - efi_status_t status; 260 - 261 - if (sys_table->runtime->hdr.revision < EFI_2_00_SYSTEM_TABLE_REVISION) 262 - return EFI_UNSUPPORTED; 263 - 264 - data = (struct setup_data *)(unsigned long)params->hdr.setup_data; 265 - 266 - while (data && data->next) 267 - data = (struct setup_data *)(unsigned long)data->next; 268 - 269 - status = efi_call_phys4((void *)sys_table->runtime->query_variable_info, 270 - EFI_VARIABLE_NON_VOLATILE | 271 - EFI_VARIABLE_BOOTSERVICE_ACCESS | 272 - EFI_VARIABLE_RUNTIME_ACCESS, &store_size, 273 - &remaining_size, &var_size); 274 - 275 - if (status != EFI_SUCCESS) 276 - return status; 277 - 278 - status = efi_call_phys3(sys_table->boottime->allocate_pool, 279 - EFI_LOADER_DATA, sizeof(*efidata), &efidata); 280 - 281 - if (status != EFI_SUCCESS) 282 - return status; 283 - 284 - efidata->data.type = SETUP_EFI_VARS; 285 - efidata->data.len = sizeof(struct efi_var_bootdata) - 286 - sizeof(struct setup_data); 287 - efidata->data.next = 0; 288 - efidata->store_size = store_size; 289 - efidata->remaining_size = remaining_size; 290 - efidata->max_var_size = var_size; 291 - 292 - if (data) 293 - data->next = (unsigned long)efidata; 294 - else 295 - params->hdr.setup_data = (unsigned long)efidata; 296 - 297 - } 298 - 299 254 static efi_status_t setup_efi_pci(struct boot_params *params) 300 255 { 301 256 efi_pci_io_protocol *pci; ··· 1156 1201 goto fail; 1157 1202 1158 1203 setup_graphics(boot_params); 1159 - 1160 - setup_efi_vars(boot_params); 1161 1204 1162 1205 setup_efi_pci(boot_params); 1163 1206
-7
arch/x86/include/asm/efi.h
··· 102 102 extern void efi_unmap_memmap(void); 103 103 extern void efi_memory_uc(u64 addr, unsigned long size); 104 104 105 - struct efi_var_bootdata { 106 - struct setup_data data; 107 - u64 store_size; 108 - u64 remaining_size; 109 - u64 max_var_size; 110 - }; 111 - 112 105 #ifdef CONFIG_EFI 113 106 114 107 static inline bool efi_is_native(void)
-1
arch/x86/include/uapi/asm/bootparam.h
··· 6 6 #define SETUP_E820_EXT 1 7 7 #define SETUP_DTB 2 8 8 #define SETUP_PCI 3 9 - #define SETUP_EFI_VARS 4 10 9 11 10 /* ram_size flags */ 12 11 #define RAMDISK_IMAGE_START_MASK 0x07FF
+1 -1
arch/x86/kernel/relocate_kernel_64.S
··· 160 160 xorq %rbp, %rbp 161 161 xorq %r8, %r8 162 162 xorq %r9, %r9 163 - xorq %r10, %r9 163 + xorq %r10, %r10 164 164 xorq %r11, %r11 165 165 xorq %r12, %r12 166 166 xorq %r13, %r13
+3 -3
arch/x86/mm/init.c
··· 277 277 end_pfn = limit_pfn; 278 278 nr_range = save_mr(mr, nr_range, start_pfn, end_pfn, 0); 279 279 280 + if (!after_bootmem) 281 + adjust_range_page_size_mask(mr, nr_range); 282 + 280 283 /* try to merge same page size and continuous */ 281 284 for (i = 0; nr_range > 1 && i < nr_range - 1; i++) { 282 285 unsigned long old_start; ··· 293 290 mr[i--].start = old_start; 294 291 nr_range--; 295 292 } 296 - 297 - if (!after_bootmem) 298 - adjust_range_page_size_mask(mr, nr_range); 299 293 300 294 for (i = 0; i < nr_range; i++) 301 295 printk(KERN_DEBUG " [mem %#010lx-%#010lx] page %s\n",
+65 -123
arch/x86/platform/efi/efi.c
··· 42 42 #include <linux/io.h> 43 43 #include <linux/reboot.h> 44 44 #include <linux/bcd.h> 45 - #include <linux/ucs2_string.h> 46 45 47 46 #include <asm/setup.h> 48 47 #include <asm/efi.h> ··· 53 54 54 55 #define EFI_DEBUG 1 55 56 56 - /* 57 - * There's some additional metadata associated with each 58 - * variable. Intel's reference implementation is 60 bytes - bump that 59 - * to account for potential alignment constraints 60 - */ 61 - #define VAR_METADATA_SIZE 64 57 + #define EFI_MIN_RESERVE 5120 58 + 59 + #define EFI_DUMMY_GUID \ 60 + EFI_GUID(0x4424ac57, 0xbe4b, 0x47dd, 0x9e, 0x97, 0xed, 0x50, 0xf0, 0x9f, 0x92, 0xa9) 61 + 62 + static efi_char16_t efi_dummy_name[6] = { 'D', 'U', 'M', 'M', 'Y', 0 }; 62 63 63 64 struct efi __read_mostly efi = { 64 65 .mps = EFI_INVALID_TABLE_ADDR, ··· 77 78 78 79 static struct efi efi_phys __initdata; 79 80 static efi_system_table_t efi_systab __initdata; 80 - 81 - static u64 efi_var_store_size; 82 - static u64 efi_var_remaining_size; 83 - static u64 efi_var_max_var_size; 84 - static u64 boot_used_size; 85 - static u64 boot_var_size; 86 - static u64 active_size; 87 81 88 82 unsigned long x86_efi_facility; 89 83 ··· 180 188 efi_char16_t *name, 181 189 efi_guid_t *vendor) 182 190 { 183 - efi_status_t status; 184 - static bool finished = false; 185 - static u64 var_size; 186 - 187 - status = efi_call_virt3(get_next_variable, 188 - name_size, name, vendor); 189 - 190 - if (status == EFI_NOT_FOUND) { 191 - finished = true; 192 - if (var_size < boot_used_size) { 193 - boot_var_size = boot_used_size - var_size; 194 - active_size += boot_var_size; 195 - } else { 196 - printk(KERN_WARNING FW_BUG "efi: Inconsistent initial sizes\n"); 197 - } 198 - } 199 - 200 - if (boot_used_size && !finished) { 201 - unsigned long size = 0; 202 - u32 attr; 203 - efi_status_t s; 204 - void *tmp; 205 - 206 - s = virt_efi_get_variable(name, vendor, &attr, &size, NULL); 207 - 208 - if (s != EFI_BUFFER_TOO_SMALL || !size) 209 - return status; 210 - 211 - tmp = kmalloc(size, GFP_ATOMIC); 212 - 213 - if (!tmp) 214 - return status; 215 - 216 - s = virt_efi_get_variable(name, vendor, &attr, &size, tmp); 217 - 218 - if (s == EFI_SUCCESS && (attr & EFI_VARIABLE_NON_VOLATILE)) { 219 - var_size += size; 220 - var_size += ucs2_strsize(name, 1024); 221 - active_size += size; 222 - active_size += VAR_METADATA_SIZE; 223 - active_size += ucs2_strsize(name, 1024); 224 - } 225 - 226 - kfree(tmp); 227 - 228 - return status; 191 + return efi_call_virt3(get_next_variable, 192 + name_size, name, vendor); 229 193 230 194 } 231 195 232 196 static efi_status_t virt_efi_set_variable(efi_char16_t *name, ··· 190 243 unsigned long data_size, 191 244 void *data) 192 245 { 193 - efi_status_t status; 194 - u32 orig_attr = 0; 195 - unsigned long orig_size = 0; 196 - 197 - status = virt_efi_get_variable(name, vendor, &orig_attr, &orig_size, 198 - NULL); 199 - 200 - if (status != EFI_BUFFER_TOO_SMALL) 201 - orig_size = 0; 202 - 203 - status = efi_call_virt5(set_variable, 204 - name, vendor, attr, 205 - data_size, data); 206 - 207 - if (status == EFI_SUCCESS) { 208 - if (orig_size) { 209 - active_size -= orig_size; 210 - active_size -= ucs2_strsize(name, 1024); 211 - active_size -= VAR_METADATA_SIZE; 212 - } 213 - if (data_size) { 214 - active_size += data_size; 215 - active_size += ucs2_strsize(name, 1024); 216 - active_size += VAR_METADATA_SIZE; 217 - } 218 - } 219 - 220 - return status; 246 + return efi_call_virt5(set_variable, 247 + name, vendor, attr, 248 + data_size, data); 221 249 } 222 250 223 251 static efi_status_t virt_efi_query_variable_info(u32 attr, ··· 708 786 char vendor[100] = "unknown"; 709 787 int i = 0; 710 788 void *tmp; 711 - struct setup_data *data; 712 - struct efi_var_bootdata *efi_var_data; 713 - u64 pa_data; 714 789 715 790 #ifdef CONFIG_X86_32 716 791 if (boot_params.efi_info.efi_systab_hi || ··· 724 805 725 806 if (efi_systab_init(efi_phys.systab)) 726 807 return; 727 - 728 - pa_data = boot_params.hdr.setup_data; 729 - while (pa_data) { 730 - data = early_ioremap(pa_data, sizeof(*efi_var_data)); 731 - if (data->type == SETUP_EFI_VARS) { 732 - efi_var_data = (struct efi_var_bootdata *)data; 733 - 734 - efi_var_store_size = efi_var_data->store_size; 735 - efi_var_remaining_size = efi_var_data->remaining_size; 736 - efi_var_max_var_size = efi_var_data->max_var_size; 737 - } 738 - pa_data = data->next; 739 - early_iounmap(data, sizeof(*efi_var_data)); 740 - } 741 - 742 - boot_used_size = efi_var_store_size - efi_var_remaining_size; 743 808 744 809 set_bit(EFI_SYSTEM_TABLES, &x86_efi_facility); 745 810 ··· 988 1085 runtime_code_page_mkexec(); 989 1086 990 1087 kfree(new_memmap); 1088 + 1089 + /* clean DUMMY object */ 1090 + efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID, 1091 + EFI_VARIABLE_NON_VOLATILE | 1092 + EFI_VARIABLE_BOOTSERVICE_ACCESS | 1093 + EFI_VARIABLE_RUNTIME_ACCESS, 1094 + 0, NULL); 991 1095 } 992 1096 993 1097 /* ··· 1046 1136 efi_status_t status; 1047 1137 u64 storage_size, remaining_size, max_size; 1048 1138 1139 + if (!(attributes & EFI_VARIABLE_NON_VOLATILE)) 1140 + return 0; 1141 + 1049 1142 status = efi.query_variable_info(attributes, &storage_size, 1050 1143 &remaining_size, &max_size); 1051 1144 if (status != EFI_SUCCESS) 1052 1145 return status; 1053 1146 1054 - if (!max_size && remaining_size > size) 1055 - printk_once(KERN_ERR FW_BUG "Broken EFI implementation" 1056 - " is returning MaxVariableSize=0\n"); 1057 1147 /* 1058 1148 * Some firmware implementations refuse to boot if there's insufficient 1059 1149 * space in the variable store. We account for that by refusing the 1060 1150 * write if permitting it would reduce the available space to under 1061 - * 50%. However, some firmware won't reclaim variable space until 1062 - * after the used (not merely the actively used) space drops below 1063 - * a threshold. We can approximate that case with the value calculated 1064 - * above. If both the firmware and our calculations indicate that the 1065 - * available space would drop below 50%, refuse the write. 1151 + * 5KB. This figure was provided by Samsung, so should be safe. 1066 1152 */ 1153 + if ((remaining_size - size < EFI_MIN_RESERVE) && 1154 + !efi_no_storage_paranoia) { 1067 1155 1068 - if (!storage_size || size > remaining_size || 1069 - (max_size && size > max_size)) 1070 - return EFI_OUT_OF_RESOURCES; 1156 + /* 1157 + * Triggering garbage collection may require that the firmware 1158 + * generate a real EFI_OUT_OF_RESOURCES error. We can force 1159 + * that by attempting to use more space than is available. 1160 + */ 1161 + unsigned long dummy_size = remaining_size + 1024; 1162 + void *dummy = kmalloc(dummy_size, GFP_ATOMIC); 1071 1163 1072 - if (!efi_no_storage_paranoia && 1073 - ((active_size + size + VAR_METADATA_SIZE > storage_size / 2) && 1074 - (remaining_size - size < storage_size / 2))) 1075 - return EFI_OUT_OF_RESOURCES; 1164 + status = efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID, 1165 + EFI_VARIABLE_NON_VOLATILE | 1166 + EFI_VARIABLE_BOOTSERVICE_ACCESS | 1167 + EFI_VARIABLE_RUNTIME_ACCESS, 1168 + dummy_size, dummy); 1169 + 1170 + if (status == EFI_SUCCESS) { 1171 + /* 1172 + * This should have failed, so if it didn't make sure 1173 + * that we delete it... 1174 + */ 1175 + efi.set_variable(efi_dummy_name, &EFI_DUMMY_GUID, 1176 + EFI_VARIABLE_NON_VOLATILE | 1177 + EFI_VARIABLE_BOOTSERVICE_ACCESS | 1178 + EFI_VARIABLE_RUNTIME_ACCESS, 1179 + 0, dummy); 1180 + } 1181 + 1182 + /* 1183 + * The runtime code may now have triggered a garbage collection 1184 + * run, so check the variable info again 1185 + */ 1186 + status = efi.query_variable_info(attributes, &storage_size, 1187 + &remaining_size, &max_size); 1188 + 1189 + if (status != EFI_SUCCESS) 1190 + return status; 1191 + 1192 + /* 1193 + * There still isn't enough room, so return an error 1194 + */ 1195 + if (remaining_size - size < EFI_MIN_RESERVE) 1196 + return EFI_OUT_OF_RESOURCES; 1197 + } 1076 1198 1077 1199 return EFI_SUCCESS; 1078 1200 }
+1 -3
arch/x86/tools/relocs.c
··· 42 42 "^(xen_irq_disable_direct_reloc$|" 43 43 "xen_save_fl_direct_reloc$|" 44 44 "VDSO|" 45 - #if ELF_BITS == 64 46 - "__vvar_page|" 47 - #endif 48 45 "__crc_)", 49 46 50 47 /* ··· 69 72 "__per_cpu_load|" 70 73 "init_per_cpu__.*|" 71 74 "__end_rodata_hpage_align|" 75 + "__vvar_page|" 72 76 #endif 73 77 "_end)$" 74 78 };
+8
arch/x86/xen/smp.c
··· 17 17 #include <linux/slab.h> 18 18 #include <linux/smp.h> 19 19 #include <linux/irq_work.h> 20 + #include <linux/tick.h> 20 21 21 22 #include <asm/paravirt.h> 22 23 #include <asm/desc.h> ··· 448 447 play_dead_common(); 449 448 HYPERVISOR_vcpu_op(VCPUOP_down, smp_processor_id(), NULL); 450 449 cpu_bringup(); 450 + /* 451 + * commit 4b0c0f294 (tick: Cleanup NOHZ per cpu data on cpu down) 452 + * clears certain data that the cpu_idle loop (which called us 453 + * and that we return from) expects. The only way to get that 454 + * data back is to call: 455 + */ 456 + tick_nohz_idle_enter(); 451 457 } 452 458 453 459 #else /* !CONFIG_HOTPLUG_CPU */
+1 -1
block/blk-core.c
··· 3164 3164 q->rpm_status = RPM_ACTIVE; 3165 3165 __blk_run_queue(q); 3166 3166 pm_runtime_mark_last_busy(q->dev); 3167 - pm_runtime_autosuspend(q->dev); 3167 + pm_request_autosuspend(q->dev); 3168 3168 } else { 3169 3169 q->rpm_status = RPM_SUSPENDED; 3170 3170 }
+2
crypto/Kconfig
··· 823 823 config CRYPTO_BLOWFISH_AVX2_X86_64 824 824 tristate "Blowfish cipher algorithm (x86_64/AVX2)" 825 825 depends on X86 && 64BIT 826 + depends on BROKEN 826 827 select CRYPTO_ALGAPI 827 828 select CRYPTO_CRYPTD 828 829 select CRYPTO_ABLK_HELPER_X86 ··· 1300 1299 config CRYPTO_TWOFISH_AVX2_X86_64 1301 1300 tristate "Twofish cipher algorithm (x86_64/AVX2)" 1302 1301 depends on X86 && 64BIT 1302 + depends on BROKEN 1303 1303 select CRYPTO_ALGAPI 1304 1304 select CRYPTO_CRYPTD 1305 1305 select CRYPTO_ABLK_HELPER_X86
+1 -4
drivers/acpi/scan.c
··· 1017 1017 return -ENOSYS; 1018 1018 1019 1019 result = driver->ops.add(device); 1020 - if (result) { 1021 - device->driver = NULL; 1022 - device->driver_data = NULL; 1020 + if (result) 1023 1021 return result; 1024 - } 1025 1022 1026 1023 device->driver = driver; 1027 1024
+3
drivers/acpi/video.c
··· 1722 1722 int error; 1723 1723 acpi_status status; 1724 1724 1725 + if (device->handler) 1726 + return -EINVAL; 1727 + 1725 1728 status = acpi_walk_namespace(ACPI_TYPE_DEVICE, 1726 1729 device->parent->handle, 1, 1727 1730 acpi_video_bus_match, NULL,
+2 -4
drivers/base/regmap/regcache-rbtree.c
··· 143 143 int registers = 0; 144 144 int this_registers, average; 145 145 146 - map->lock(map); 146 + map->lock(map->lock_arg); 147 147 148 148 mem_size = sizeof(*rbtree_ctx); 149 149 mem_size += BITS_TO_LONGS(map->cache_present_nbits) * sizeof(long); ··· 170 170 seq_printf(s, "%d nodes, %d registers, average %d registers, used %zu bytes\n", 171 171 nodes, registers, average, mem_size); 172 172 173 - map->unlock(map); 173 + map->unlock(map->lock_arg); 174 174 175 175 return 0; 176 176 } ··· 391 391 for (node = rb_first(&rbtree_ctx->root); node; node = rb_next(node)) { 392 392 rbnode = rb_entry(node, struct regcache_rbtree_node, node); 393 393 394 - if (rbnode->base_reg < min) 395 - continue; 396 394 if (rbnode->base_reg > max) 397 395 break; 398 396 if (rbnode->base_reg + rbnode->blklen < min)
+10 -10
drivers/base/regmap/regcache.c
··· 270 270 271 271 BUG_ON(!map->cache_ops || !map->cache_ops->sync); 272 272 273 - map->lock(map); 273 + map->lock(map->lock_arg); 274 274 /* Remember the initial bypass state */ 275 275 bypass = map->cache_bypass; 276 276 dev_dbg(map->dev, "Syncing %s cache\n", ··· 306 306 trace_regcache_sync(map->dev, name, "stop"); 307 307 /* Restore the bypass state */ 308 308 map->cache_bypass = bypass; 309 - map->unlock(map); 309 + map->unlock(map->lock_arg); 310 310 311 311 return ret; 312 312 } ··· 333 333 334 334 BUG_ON(!map->cache_ops || !map->cache_ops->sync); 335 335 336 - map->lock(map); 336 + map->lock(map->lock_arg); 337 337 338 338 /* Remember the initial bypass state */ 339 339 bypass = map->cache_bypass; ··· 352 352 trace_regcache_sync(map->dev, name, "stop region"); 353 353 /* Restore the bypass state */ 354 354 map->cache_bypass = bypass; 355 - map->unlock(map); 355 + map->unlock(map->lock_arg); 356 356 357 357 return ret; 358 358 } ··· 372 372 */ 373 373 void regcache_cache_only(struct regmap *map, bool enable) 374 374 { 375 - map->lock(map); 375 + map->lock(map->lock_arg); 376 376 WARN_ON(map->cache_bypass && enable); 377 377 map->cache_only = enable; 378 378 trace_regmap_cache_only(map->dev, enable); 379 - map->unlock(map); 379 + map->unlock(map->lock_arg); 380 380 } 381 381 EXPORT_SYMBOL_GPL(regcache_cache_only); 382 382 ··· 391 391 */ 392 392 void regcache_mark_dirty(struct regmap *map) 393 393 { 394 - map->lock(map); 394 + map->lock(map->lock_arg); 395 395 map->cache_dirty = true; 396 - map->unlock(map); 396 + map->unlock(map->lock_arg); 397 397 } 398 398 EXPORT_SYMBOL_GPL(regcache_mark_dirty); 399 399 ··· 410 410 */ 411 411 void regcache_cache_bypass(struct regmap *map, bool enable) 412 412 { 413 - map->lock(map); 413 + map->lock(map->lock_arg); 414 414 WARN_ON(map->cache_only && enable); 415 415 map->cache_bypass = enable; 416 416 trace_regmap_cache_bypass(map->dev, enable); 417 - map->unlock(map); 417 + map->unlock(map->lock_arg); 418 418 } 419 419 EXPORT_SYMBOL_GPL(regcache_cache_bypass); 420 420
+4 -1
drivers/base/regmap/regmap-debugfs.c
··· 265 265 char *start = buf; 266 266 unsigned long reg, value; 267 267 struct regmap *map = file->private_data; 268 + int ret; 268 269 269 270 buf_size = min(count, (sizeof(buf)-1)); 270 271 if (copy_from_user(buf, user_buf, buf_size)) ··· 283 282 /* Userspace has been fiddling around behind the kernel's back */ 284 283 add_taint(TAINT_USER, LOCKDEP_NOW_UNRELIABLE); 285 284 286 - regmap_write(map, reg, value); 285 + ret = regmap_write(map, reg, value); 286 + if (ret < 0) 287 + return ret; 287 288 return buf_size; 288 289 } 289 290 #else
+16 -16
drivers/block/cciss.c
··· 168 168 static int cciss_open(struct block_device *bdev, fmode_t mode); 169 169 static int cciss_unlocked_open(struct block_device *bdev, fmode_t mode); 170 170 static void cciss_release(struct gendisk *disk, fmode_t mode); 171 - static int do_ioctl(struct block_device *bdev, fmode_t mode, 172 - unsigned int cmd, unsigned long arg); 173 171 static int cciss_ioctl(struct block_device *bdev, fmode_t mode, 174 172 unsigned int cmd, unsigned long arg); 175 173 static int cciss_getgeo(struct block_device *bdev, struct hd_geometry *geo); ··· 233 235 .owner = THIS_MODULE, 234 236 .open = cciss_unlocked_open, 235 237 .release = cciss_release, 236 - .ioctl = do_ioctl, 238 + .ioctl = cciss_ioctl, 237 239 .getgeo = cciss_getgeo, 238 240 #ifdef CONFIG_COMPAT 239 241 .compat_ioctl = cciss_compat_ioctl, ··· 1141 1143 mutex_unlock(&cciss_mutex); 1142 1144 } 1143 1145 1144 - static int do_ioctl(struct block_device *bdev, fmode_t mode, 1145 - unsigned cmd, unsigned long arg) 1146 - { 1147 - int ret; 1148 - mutex_lock(&cciss_mutex); 1149 - ret = cciss_ioctl(bdev, mode, cmd, arg); 1150 - mutex_unlock(&cciss_mutex); 1151 - return ret; 1152 - } 1153 - 1154 1146 #ifdef CONFIG_COMPAT 1155 1147 1156 1148 static int cciss_ioctl32_passthru(struct block_device *bdev, fmode_t mode, ··· 1167 1179 case CCISS_REGNEWD: 1168 1180 case CCISS_RESCANDISK: 1169 1181 case CCISS_GETLUNINFO: 1170 - return do_ioctl(bdev, mode, cmd, arg); 1182 + return cciss_ioctl(bdev, mode, cmd, arg); 1171 1183 1172 1184 case CCISS_PASSTHRU32: 1173 1185 return cciss_ioctl32_passthru(bdev, mode, cmd, arg); ··· 1207 1219 if (err) 1208 1220 return -EFAULT; 1209 1221 1210 - err = do_ioctl(bdev, mode, CCISS_PASSTHRU, (unsigned long)p); 1222 + err = cciss_ioctl(bdev, mode, CCISS_PASSTHRU, (unsigned long)p); 1211 1223 if (err) 1212 1224 return err; 1213 1225 err |= ··· 1249 1261 if (err) 1250 1262 return -EFAULT; 1251 1263 1252 - err = do_ioctl(bdev, mode, CCISS_BIG_PASSTHRU, (unsigned long)p); 1264 + err = cciss_ioctl(bdev, mode, CCISS_BIG_PASSTHRU, (unsigned long)p); 1253 1265 if (err) 1254 1266 return err; 1255 1267 err |= ··· 1299 1311 static int cciss_getintinfo(ctlr_info_t *h, void __user *argp) 1300 1312 { 1301 1313 cciss_coalint_struct intinfo; 1314 + unsigned long flags; 1302 1315 1303 1316 if (!argp) 1304 1317 return -EINVAL; 1318 + spin_lock_irqsave(&h->lock, flags); 1305 1319 intinfo.delay = readl(&h->cfgtable->HostWrite.CoalIntDelay); 1306 1320 intinfo.count = readl(&h->cfgtable->HostWrite.CoalIntCount); 1321 + spin_unlock_irqrestore(&h->lock, flags); 1307 1322 if (copy_to_user (argp, &intinfo, sizeof(cciss_coalint_struct))) 1308 1323 return -EFAULT; ··· 1347 1356 static int cciss_getnodename(ctlr_info_t *h, void __user *argp) 1348 1357 { 1349 1358 NodeName_type NodeName; 1359 + unsigned long flags; 1350 1360 int i; 1351 1361 1352 1362 if (!argp) 1353 1363 return -EINVAL; 1364 + spin_lock_irqsave(&h->lock, flags); 1354 1365 for (i = 0; i < 16; i++) 1355 1366 NodeName[i] = readb(&h->cfgtable->ServerName[i]); 1367 + spin_unlock_irqrestore(&h->lock, flags); 1356 1368 if (copy_to_user(argp, NodeName, sizeof(NodeName_type))) 1357 1369 return -EFAULT; 1358 1370 return 0; ··· 1392 1398 static int cciss_getheartbeat(ctlr_info_t *h, void __user *argp) 1393 1399 { 1394 1400 Heartbeat_type heartbeat; 1401 + unsigned long flags; 1395 1402 1396 1403 if (!argp) 1397 1404 return -EINVAL; 1405 + spin_lock_irqsave(&h->lock, flags); 1398 1406 heartbeat = readl(&h->cfgtable->HeartBeat); 1407 + spin_unlock_irqrestore(&h->lock, flags); 1399 1408 if (copy_to_user(argp, &heartbeat, sizeof(Heartbeat_type))) 1400 1409 return -EFAULT; 1401 1410 return 0; ··· 1407 1410 static int cciss_getbustypes(ctlr_info_t *h, void __user *argp) 1408 1411 { 1409 1412 BusTypes_type BusTypes; 1413 + unsigned long flags; 1410 1414 1411 1415 if (!argp) 1412 1416 return -EINVAL; 1417 + spin_lock_irqsave(&h->lock, flags); 1413 1418 BusTypes = readl(&h->cfgtable->BusTypes); 1419 + spin_unlock_irqrestore(&h->lock, flags); 1414 1420 if (copy_to_user(argp, &BusTypes, sizeof(BusTypes_type))) 1415 1421 return -EFAULT; 1416 1422 return 0;
+5 -3
drivers/block/mtip32xx/mtip32xx.c
··· 3002 3002 3003 3003 static void mtip_hw_debugfs_exit(struct driver_data *dd) 3004 3004 { 3005 - debugfs_remove_recursive(dd->dfs_node); 3005 + if (dd->dfs_node) 3006 + debugfs_remove_recursive(dd->dfs_node); 3006 3007 } 3007 3008 3008 3009 ··· 3864 3863 struct driver_data *dd = queue->queuedata; 3865 3864 struct scatterlist *sg; 3866 3865 struct bio_vec *bvec; 3867 - int nents = 0; 3866 + int i, nents = 0; 3868 3867 int tag = 0, unaligned = 0; 3869 3868 3870 3869 if (unlikely(dd->dd_flag & MTIP_DDF_STOP_IO)) { ··· 3922 3921 } 3923 3922 3924 3923 /* Create the scatter list for this bio. */ 3925 - bio_for_each_segment(bvec, bio, nents) { 3924 + bio_for_each_segment(bvec, bio, i) { 3926 3925 sg_set_page(&sg[nents], 3927 3926 bvec->bv_page, 3928 3927 bvec->bv_len, 3929 3928 bvec->bv_offset); 3929 + nents++; 3930 3930 } 3931 3931 3932 3932 /* Issue the read/write. */
+48 -14
drivers/block/nvme-core.c
··· 629 629 struct nvme_command *cmnd; 630 630 struct nvme_iod *iod; 631 631 enum dma_data_direction dma_dir; 632 - int cmdid, length, result = -ENOMEM; 632 + int cmdid, length, result; 633 633 u16 control; 634 634 u32 dsmgmt; 635 635 int psegs = bio_phys_segments(ns->queue, bio); ··· 640 640 return result; 641 641 } 642 642 643 + result = -ENOMEM; 643 644 iod = nvme_alloc_iod(psegs, bio->bi_size, GFP_ATOMIC); 644 645 if (!iod) 645 646 goto nomem; ··· 978 977 979 978 if (timeout && !time_after(now, info[cmdid].timeout)) 980 979 continue; 980 + if (info[cmdid].ctx == CMD_CTX_CANCELLED) 981 + continue; 981 982 dev_warn(nvmeq->q_dmadev, "Cancelling I/O %d\n", cmdid); 982 983 ctx = cancel_cmdid(nvmeq, cmdid, &fn); 983 984 fn(nvmeq->dev, ctx, &cqe); ··· 1209 1206 1210 1207 if (addr & 3) 1211 1208 return ERR_PTR(-EINVAL); 1212 - if (!length) 1209 + if (!length || length > INT_MAX - PAGE_SIZE) 1213 1210 return ERR_PTR(-EINVAL); 1214 1211 1215 1212 offset = offset_in_page(addr); ··· 1230 1227 sg_init_table(sg, count); 1231 1228 for (i = 0; i < count; i++) { 1232 1229 sg_set_page(&sg[i], pages[i], 1233 - min_t(int, length, PAGE_SIZE - offset), offset); 1230 + min_t(unsigned, length, PAGE_SIZE - offset), 1231 + offset); 1234 1232 length -= (PAGE_SIZE - offset); 1235 1233 offset = 0; 1236 1234 } ··· 1439 1435 nvme_free_iod(dev, iod); 1440 1436 } 1441 1437 1442 - if (!status && copy_to_user(&ucmd->result, &cmd.result, 1438 + if ((status >= 0) && copy_to_user(&ucmd->result, &cmd.result, 1443 1439 sizeof(cmd.result))) 1444 1440 status = -EFAULT; 1445 1441 ··· 1637 1633 1638 1634 static int nvme_setup_io_queues(struct nvme_dev *dev) 1639 1635 { 1640 - int result, cpu, i, nr_io_queues, db_bar_size, q_depth; 1636 + struct pci_dev *pdev = dev->pci_dev; 1637 + int result, cpu, i, nr_io_queues, db_bar_size, q_depth, q_count; 1641 1638 1642 1639 nr_io_queues = num_online_cpus(); 1643 1640 result = set_queue_count(dev, nr_io_queues); ··· 1647 1642 if (result < nr_io_queues) 1648 1643 nr_io_queues = result; 1649 1644 1645 + q_count = nr_io_queues; 1650 1646 /* Deregister the admin queue's interrupt */ 1651 1647 free_irq(dev->entry[0].vector, dev->queues[0]); 1652 1648 1653 1649 db_bar_size = 4096 + ((nr_io_queues + 1) << (dev->db_stride + 3)); 1654 1650 if (db_bar_size > 8192) { 1655 1651 iounmap(dev->bar); 1656 - dev->bar = ioremap(pci_resource_start(dev->pci_dev, 0), 1657 - db_bar_size); 1652 + dev->bar = ioremap(pci_resource_start(pdev, 0), db_bar_size); 1658 1653 dev->dbs = ((void __iomem *)dev->bar) + 4096; 1659 1654 dev->queues[0]->q_db = dev->dbs; 1660 1655 } ··· 1662 1657 for (i = 0; i < nr_io_queues; i++) 1663 1658 dev->entry[i].entry = i; 1664 1659 for (;;) { 1665 - result = pci_enable_msix(dev->pci_dev, dev->entry, 1666 - nr_io_queues); 1660 + result = pci_enable_msix(pdev, dev->entry, nr_io_queues); 1667 1661 if (result == 0) { 1668 1662 break; 1669 1663 } else if (result > 0) { 1670 1664 nr_io_queues = result; 1671 1665 continue; 1672 1666 } else { 1673 - nr_io_queues = 1; 1667 + nr_io_queues = 0; 1674 1668 break; 1669 + } 1670 + } 1671 + 1672 + if (nr_io_queues == 0) { 1673 + nr_io_queues = q_count; 1674 + for (;;) { 1675 + result = pci_enable_msi_block(pdev, nr_io_queues); 1676 + if (result == 0) { 1677 + for (i = 0; i < nr_io_queues; i++) 1678 + dev->entry[i].vector = i + pdev->irq; 1679 + break; 1680 + } else if (result > 0) { 1681 + nr_io_queues = result; 1682 + continue; 1683 + } else { 1684 + nr_io_queues = 1; 1685 + break; 1686 + } 1675 1687 } 1676 1688 } ··· 1872 1850 { 1873 1851 struct nvme_dev *dev = container_of(kref, struct nvme_dev, kref); 1874 1852 nvme_dev_remove(dev); 1875 - pci_disable_msix(dev->pci_dev); 1853 + if (dev->pci_dev->msi_enabled) 1854 + pci_disable_msi(dev->pci_dev); 1855 + else if (dev->pci_dev->msix_enabled) 1856 + pci_disable_msix(dev->pci_dev); 1876 1857 iounmap(dev->bar); 1877 1858 nvme_release_instance(dev); 1878 1859 nvme_release_prp_pools(dev); ··· 1948 1923 INIT_LIST_HEAD(&dev->namespaces); 1949 1924 dev->pci_dev = pdev; 1950 1925 pci_set_drvdata(pdev, dev); 1951 - dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)); 1952 - dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); 1926 + 1927 + if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64))) 1928 + dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64)); 1929 + else if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(32))) 1930 + dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 1931 + else 1932 + goto disable; 1933 + 1953 1934 result = nvme_set_instance(dev); 1954 1935 if (result) 1955 1936 goto disable; ··· 2008 1977 unmap: 2009 1978 iounmap(dev->bar); 2010 1979 disable_msix: 2011 - pci_disable_msix(pdev); 1980 + if (dev->pci_dev->msi_enabled) 1981 + pci_disable_msi(dev->pci_dev); 1982 + else if (dev->pci_dev->msix_enabled) 1983 + pci_disable_msix(dev->pci_dev); 2012 1984 nvme_release_instance(dev); 2013 1985 nvme_release_prp_pools(dev); 2014 1986 disable:
+1 -2
drivers/block/nvme-scsi.c
··· 44 44 #include <linux/sched.h> 45 45 #include <linux/slab.h> 46 46 #include <linux/types.h> 47 - #include <linux/version.h> 48 47 #include <scsi/sg.h> 49 48 #include <scsi/scsi.h> 50 49 ··· 1653 1654 } 1654 1655 } 1655 1656 1656 - static u16 nvme_trans_modesel_get_mp(struct nvme_ns *ns, struct sg_io_hdr *hdr, 1657 + static int nvme_trans_modesel_get_mp(struct nvme_ns *ns, struct sg_io_hdr *hdr, 1657 1658 u8 *mode_page, u8 page_code) 1658 1659 { 1659 1660 int res = SNTI_TRANSLATION_SUCCESS;
+2 -1
drivers/block/pktcdvd.c
··· 83 83 84 84 #define MAX_SPEED 0xffff 85 85 86 - #define ZONE(sector, pd) (((sector) + (pd)->offset) & ~((pd)->settings.size - 1)) 86 + #define ZONE(sector, pd) (((sector) + (pd)->offset) & \ 87 + ~(sector_t)((pd)->settings.size - 1)) 87 88 88 89 static DEFINE_MUTEX(pktcdvd_mutex); 89 90 static struct pktcdvd_device *pkt_devs[MAX_WRITERS];
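The ZONE() fix widens the mask to sector_t before the bitwise NOT: with a 32-bit settings.size, `~((pd)->settings.size - 1)` is computed in 32 bits and silently clears the upper half of a 64-bit sector. A minimal sketch of the difference, using uint64_t and uint32_t as stand-ins for sector_t and the zone size:

```c
#include <stdint.h>

/* Zone size is a 32-bit power of two; the sector is 64-bit.  A mask
 * computed as ~(size - 1) in 32 bits zero-extends when promoted, so
 * the AND wipes the high 32 bits of the sector. */
static uint64_t zone_buggy(uint64_t sector, uint32_t size)
{
    return sector & ~(size - 1);          /* 32-bit mask, high bits lost */
}

/* Casting to the wide type first, as the pktcdvd fix does, keeps the
 * high bits of the mask set. */
static uint64_t zone_fixed(uint64_t sector, uint32_t size)
{
    return sector & ~(uint64_t)(size - 1);
}
```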
+18 -15
drivers/block/rbd.c
··· 519 519 }; 520 520 521 521 /* 522 - * Initialize an rbd client instance. 523 - * We own *ceph_opts. 522 + * Initialize an rbd client instance. Success or not, this function 523 + * consumes ceph_opts. 524 524 */ 525 525 static struct rbd_client *rbd_client_create(struct ceph_options *ceph_opts) 526 526 { ··· 675 675 676 676 /* 677 677 * Get a ceph client with specific addr and configuration, if one does 678 - * not exist create it. 678 + * not exist create it. Either way, ceph_opts is consumed by this 679 + * function. 679 680 */ 680 681 static struct rbd_client *rbd_get_client(struct ceph_options *ceph_opts) 681 682 { ··· 4698 4697 return ret; 4699 4698 } 4700 4699 4701 - /* Undo whatever state changes are made by v1 or v2 image probe */ 4702 - 4700 + /* 4701 + * Undo whatever state changes are made by v1 or v2 header info 4702 + * call. 4703 + */ 4703 4704 static void rbd_dev_unprobe(struct rbd_device *rbd_dev) 4704 4705 { 4705 4706 struct rbd_image_header *header; ··· 4905 4902 int tmp; 4906 4903 4907 4904 /* 4908 - * Get the id from the image id object. If it's not a 4909 - * format 2 image, we'll get ENOENT back, and we'll assume 4910 - * it's a format 1 image. 4905 + * Get the id from the image id object. Unless there's an 4906 + * error, rbd_dev->spec->image_id will be filled in with 4907 + * a dynamically-allocated string, and rbd_dev->image_format 4908 + * will be set to either 1 or 2. 
4911 4909 */ 4912 4910 ret = rbd_dev_image_id(rbd_dev); 4913 4911 if (ret) ··· 4996 4992 rc = PTR_ERR(rbdc); 4997 4993 goto err_out_args; 4998 4994 } 4999 - ceph_opts = NULL; /* rbd_dev client now owns this */ 5000 4995 5001 4996 /* pick the pool */ 5002 4997 osdc = &rbdc->client->osdc; ··· 5030 5027 rbd_dev->mapping.read_only = read_only; 5031 5028 5032 5029 rc = rbd_dev_device_setup(rbd_dev); 5033 - if (!rc) 5034 - return count; 5030 + if (rc) { 5031 + rbd_dev_image_release(rbd_dev); 5032 + goto err_out_module; 5033 + } 5035 5034 5036 - rbd_dev_image_release(rbd_dev); 5035 + return count; 5036 + 5037 5037 err_out_rbd_dev: 5038 5038 rbd_dev_destroy(rbd_dev); 5039 5039 err_out_client: 5040 5040 rbd_put_client(rbdc); 5041 5041 err_out_args: 5042 - if (ceph_opts) 5043 - ceph_destroy_options(ceph_opts); 5044 - kfree(rbd_opts); 5045 5042 rbd_spec_put(spec); 5046 5043 err_out_module: 5047 5044 module_put(THIS_MODULE);
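The rbd comment changes pin down an ownership rule: rbd_client_create() and rbd_get_client() consume ceph_opts whether they succeed or fail, which is why the caller's `ceph_opts = NULL` marker and the err_out_args cleanup disappear. The convention can be sketched like this; `client_create` and the toy structs are hypothetical, not the rbd types:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct options { char *name; };
struct client  { struct options *opts; };

/* Like rbd_client_create() after this change: the function consumes
 * opts on success *and* on failure, so the caller never frees opts
 * itself and needs no conditional cleanup on the error path. */
static struct client *client_create(struct options *opts)
{
    struct client *c = malloc(sizeof(*c));
    if (!c) {
        free(opts->name);
        free(opts);          /* consumed even on failure */
        return NULL;
    }
    c->opts = opts;          /* consumed on success: ownership moves */
    return c;
}

static void client_destroy(struct client *c)
{
    free(c->opts->name);
    free(c->opts);
    free(c);
}
```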
+2 -2
drivers/bluetooth/Kconfig
··· 201 201 The core driver to support Marvell Bluetooth devices. 202 202 203 203 This driver is required if you want to support 204 - Marvell Bluetooth devices, such as 8688/8787/8797. 204 + Marvell Bluetooth devices, such as 8688/8787/8797/8897. 205 205 206 206 Say Y here to compile Marvell Bluetooth driver 207 207 into the kernel or say M to compile it as module. ··· 214 214 The driver for Marvell Bluetooth chipsets with SDIO interface. 215 215 216 216 This driver is required if you want to use Marvell Bluetooth 217 - devices with SDIO interface. Currently SD8688/SD8787/SD8797 217 + devices with SDIO interface. Currently SD8688/SD8787/SD8797/SD8897 218 218 chipsets are supported. 219 219 220 220 Say Y here to compile support for Marvell BT-over-SDIO driver
+28
drivers/bluetooth/btmrvl_sdio.c
··· 82 82 .io_port_2 = 0x7a, 83 83 }; 84 84 85 + static const struct btmrvl_sdio_card_reg btmrvl_reg_88xx = { 86 + .cfg = 0x00, 87 + .host_int_mask = 0x02, 88 + .host_intstatus = 0x03, 89 + .card_status = 0x50, 90 + .sq_read_base_addr_a0 = 0x60, 91 + .sq_read_base_addr_a1 = 0x61, 92 + .card_revision = 0xbc, 93 + .card_fw_status0 = 0xc0, 94 + .card_fw_status1 = 0xc1, 95 + .card_rx_len = 0xc2, 96 + .card_rx_unit = 0xc3, 97 + .io_port_0 = 0xd8, 98 + .io_port_1 = 0xd9, 99 + .io_port_2 = 0xda, 100 + }; 101 + 85 102 static const struct btmrvl_sdio_device btmrvl_sdio_sd8688 = { 86 103 .helper = "mrvl/sd8688_helper.bin", 87 104 .firmware = "mrvl/sd8688.bin", ··· 120 103 .sd_blksz_fw_dl = 256, 121 104 }; 122 105 106 + static const struct btmrvl_sdio_device btmrvl_sdio_sd8897 = { 107 + .helper = NULL, 108 + .firmware = "mrvl/sd8897_uapsta.bin", 109 + .reg = &btmrvl_reg_88xx, 110 + .sd_blksz_fw_dl = 256, 111 + }; 112 + 123 113 static const struct sdio_device_id btmrvl_sdio_ids[] = { 124 114 /* Marvell SD8688 Bluetooth device */ 125 115 { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x9105), ··· 140 116 /* Marvell SD8797 Bluetooth device */ 141 117 { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x912A), 142 118 .driver_data = (unsigned long) &btmrvl_sdio_sd8797 }, 119 + /* Marvell SD8897 Bluetooth device */ 120 + { SDIO_DEVICE(SDIO_VENDOR_ID_MARVELL, 0x912E), 121 + .driver_data = (unsigned long) &btmrvl_sdio_sd8897 }, 143 122 144 123 { } /* Terminating entry */ 145 124 }; ··· 1221 1194 MODULE_FIRMWARE("mrvl/sd8688.bin"); 1222 1195 MODULE_FIRMWARE("mrvl/sd8787_uapsta.bin"); 1223 1196 MODULE_FIRMWARE("mrvl/sd8797_uapsta.bin"); 1197 + MODULE_FIRMWARE("mrvl/sd8897_uapsta.bin");
+1 -1
drivers/crypto/sahara.c
··· 863 863 { .compatible = "fsl,imx27-sahara" }, 864 864 { /* sentinel */ } 865 865 }; 866 - MODULE_DEVICE_TABLE(platform, sahara_dt_ids); 866 + MODULE_DEVICE_TABLE(of, sahara_dt_ids); 867 867 868 868 static int sahara_probe(struct platform_device *pdev) 869 869 {
+25 -5
drivers/gpu/drm/gma500/cdv_intel_display.c
··· 1462 1462 size_t addr = 0; 1463 1463 struct gtt_range *gt; 1464 1464 struct drm_gem_object *obj; 1465 - int ret; 1465 + int ret = 0; 1466 1466 1467 1467 /* if we want to turn of the cursor ignore width and height */ 1468 1468 if (!handle) { ··· 1499 1499 1500 1500 if (obj->size < width * height * 4) { 1501 1501 dev_dbg(dev->dev, "buffer is to small\n"); 1502 - return -ENOMEM; 1502 + ret = -ENOMEM; 1503 + goto unref_cursor; 1503 1504 } 1504 1505 1505 1506 gt = container_of(obj, struct gtt_range, gem); ··· 1509 1508 ret = psb_gtt_pin(gt); 1510 1509 if (ret) { 1511 1510 dev_err(dev->dev, "Can not pin down handle 0x%x\n", handle); 1512 - return ret; 1511 + goto unref_cursor; 1513 1512 } 1514 1513 1515 1514 addr = gt->offset; /* Or resource.start ??? */ ··· 1533 1532 struct gtt_range, gem); 1534 1533 psb_gtt_unpin(gt); 1535 1534 drm_gem_object_unreference(psb_intel_crtc->cursor_obj); 1536 - psb_intel_crtc->cursor_obj = obj; 1537 1535 } 1538 - return 0; 1536 + 1537 + psb_intel_crtc->cursor_obj = obj; 1538 + return ret; 1539 + 1540 + unref_cursor: 1541 + drm_gem_object_unreference(obj); 1542 + return ret; 1539 1543 } 1540 1544 1541 1545 static int cdv_intel_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) ··· 1756 1750 kfree(psb_intel_crtc); 1757 1751 } 1758 1752 1753 + static void cdv_intel_crtc_disable(struct drm_crtc *crtc) 1754 + { 1755 + struct gtt_range *gt; 1756 + struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private; 1757 + 1758 + crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF); 1759 + 1760 + if (crtc->fb) { 1761 + gt = to_psb_fb(crtc->fb)->gtt; 1762 + psb_gtt_unpin(gt); 1763 + } 1764 + } 1765 + 1759 1766 const struct drm_crtc_helper_funcs cdv_intel_helper_funcs = { 1760 1767 .dpms = cdv_intel_crtc_dpms, 1761 1768 .mode_fixup = cdv_intel_crtc_mode_fixup, ··· 1776 1757 .mode_set_base = cdv_intel_pipe_set_base, 1777 1758 .prepare = cdv_intel_crtc_prepare, 1778 1759 .commit = cdv_intel_crtc_commit, 1760 + .disable = cdv_intel_crtc_disable, 1779 1761 }; 
1780 1762 1781 1763 const struct drm_crtc_funcs cdv_intel_crtc_funcs = {
+2 -2
drivers/gpu/drm/gma500/framebuffer.c
··· 121 121 unsigned long address; 122 122 int ret; 123 123 unsigned long pfn; 124 - /* FIXME: assumes fb at stolen base which may not be true */ 125 - unsigned long phys_addr = (unsigned long)dev_priv->stolen_base; 124 + unsigned long phys_addr = (unsigned long)dev_priv->stolen_base + 125 + psbfb->gtt->offset; 126 126 127 127 page_num = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT; 128 128 address = (unsigned long)vmf->virtual_address - (vmf->pgoff << PAGE_SHIFT);
+27 -6
drivers/gpu/drm/gma500/psb_intel_display.c
··· 843 843 struct gtt_range *cursor_gt = psb_intel_crtc->cursor_gt; 844 844 struct drm_gem_object *obj; 845 845 void *tmp_dst, *tmp_src; 846 - int ret, i, cursor_pages; 846 + int ret = 0, i, cursor_pages; 847 847 848 848 /* if we want to turn of the cursor ignore width and height */ 849 849 if (!handle) { ··· 880 880 881 881 if (obj->size < width * height * 4) { 882 882 dev_dbg(dev->dev, "buffer is to small\n"); 883 - return -ENOMEM; 883 + ret = -ENOMEM; 884 + goto unref_cursor; 884 885 } 885 886 886 887 gt = container_of(obj, struct gtt_range, gem); ··· 890 889 ret = psb_gtt_pin(gt); 891 890 if (ret) { 892 891 dev_err(dev->dev, "Can not pin down handle 0x%x\n", handle); 893 - return ret; 892 + goto unref_cursor; 894 893 } 895 894 896 895 if (dev_priv->ops->cursor_needs_phys) { 897 896 if (cursor_gt == NULL) { 898 897 dev_err(dev->dev, "No hardware cursor mem available"); 899 - return -ENOMEM; 898 + ret = -ENOMEM; 899 + goto unref_cursor; 900 900 } 901 901 902 902 /* Prevent overflow */ ··· 938 936 struct gtt_range, gem); 939 937 psb_gtt_unpin(gt); 940 938 drm_gem_object_unreference(psb_intel_crtc->cursor_obj); 941 - psb_intel_crtc->cursor_obj = obj; 942 939 } 943 - return 0; 940 + 941 + psb_intel_crtc->cursor_obj = obj; 942 + return ret; 943 + 944 + unref_cursor: 945 + drm_gem_object_unreference(obj); 946 + return ret; 944 947 } 945 948 946 949 static int psb_intel_crtc_cursor_move(struct drm_crtc *crtc, int x, int y) ··· 1157 1150 kfree(psb_intel_crtc); 1158 1151 } 1159 1152 1153 + static void psb_intel_crtc_disable(struct drm_crtc *crtc) 1154 + { 1155 + struct gtt_range *gt; 1156 + struct drm_crtc_helper_funcs *crtc_funcs = crtc->helper_private; 1157 + 1158 + crtc_funcs->dpms(crtc, DRM_MODE_DPMS_OFF); 1159 + 1160 + if (crtc->fb) { 1161 + gt = to_psb_fb(crtc->fb)->gtt; 1162 + psb_gtt_unpin(gt); 1163 + } 1164 + } 1165 + 1160 1166 const struct drm_crtc_helper_funcs psb_intel_helper_funcs = { 1161 1167 .dpms = psb_intel_crtc_dpms, 1162 1168 .mode_fixup = 
psb_intel_crtc_mode_fixup, ··· 1177 1157 .mode_set_base = psb_intel_pipe_set_base, 1178 1158 .prepare = psb_intel_crtc_prepare, 1179 1159 .commit = psb_intel_crtc_commit, 1160 + .disable = psb_intel_crtc_disable, 1180 1161 }; 1181 1162 1182 1163 const struct drm_crtc_funcs psb_intel_crtc_funcs = {
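Both gma500 cursor_set hunks replace early returns, which leaked the reference taken when the GEM object was looked up, with a single unref_cursor cleanup label. A minimal userspace sketch of that pattern, with a toy refcounted object standing in for the GEM object:

```c
#include <assert.h>
#include <stddef.h>

struct object { int refs; size_t size; };

static void object_ref(struct object *o)   { o->refs++; }
static void object_unref(struct object *o) { o->refs--; }

/* Mirrors the cursor_set fix: once the lookup reference is taken,
 * every failure path must drop it via one cleanup label instead of
 * returning directly (which is what leaked before the patch). */
static int set_cursor(struct object *o, size_t need, int pin_ok)
{
    int ret = 0;

    object_ref(o);                 /* lookup takes a reference */

    if (o->size < need) {
        ret = -1;                  /* buffer too small */
        goto unref_cursor;
    }
    if (!pin_ok) {
        ret = -2;                  /* pin failed */
        goto unref_cursor;
    }
    return 0;                      /* success: keep the reference */

unref_cursor:
    object_unref(o);
    return ret;
}
```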
+14 -10
drivers/gpu/drm/i915/intel_sdvo.c
··· 1777 1777 * arranged in priority order. 1778 1778 */ 1779 1779 intel_ddc_get_modes(connector, &intel_sdvo->ddc); 1780 - if (list_empty(&connector->probed_modes) == false) 1781 - goto end; 1782 1780 1783 - /* Fetch modes from VBT */ 1781 + /* 1782 + * Fetch modes from VBT. For SDVO prefer the VBT mode since some 1783 + * SDVO->LVDS transcoders can't cope with the EDID mode. Since 1784 + * drm_mode_probed_add adds the mode at the head of the list we add it 1785 + * last. 1786 + */ 1784 1787 if (dev_priv->sdvo_lvds_vbt_mode != NULL) { 1785 1788 newmode = drm_mode_duplicate(connector->dev, 1786 1789 dev_priv->sdvo_lvds_vbt_mode); ··· 1795 1792 } 1796 1793 } 1797 1794 1798 - end: 1799 1795 list_for_each_entry(newmode, &connector->probed_modes, head) { 1800 1796 if (newmode->type & DRM_MODE_TYPE_PREFERRED) { 1801 1797 intel_sdvo->sdvo_lvds_fixed_mode = ··· 2792 2790 SDVOB_HOTPLUG_INT_STATUS_I915 : SDVOC_HOTPLUG_INT_STATUS_I915; 2793 2791 } 2794 2792 2795 - /* Only enable the hotplug irq if we need it, to work around noisy 2796 - * hotplug lines. 2797 - */ 2798 - if (intel_sdvo->hotplug_active) 2799 - intel_encoder->hpd_pin = HPD_SDVO_B ? HPD_SDVO_B : HPD_SDVO_C; 2800 - 2801 2793 intel_encoder->compute_config = intel_sdvo_compute_config; 2802 2794 intel_encoder->disable = intel_disable_sdvo; 2803 2795 intel_encoder->mode_set = intel_sdvo_mode_set; ··· 2808 2812 SDVO_NAME(intel_sdvo)); 2809 2813 /* Output_setup can leave behind connectors! */ 2810 2814 goto err_output; 2815 + } 2816 + 2817 + /* Only enable the hotplug irq if we need it, to work around noisy 2818 + * hotplug lines. 2819 + */ 2820 + if (intel_sdvo->hotplug_active) { 2821 + intel_encoder->hpd_pin = 2822 + intel_sdvo->is_sdvob ? HPD_SDVO_B : HPD_SDVO_C; 2811 2823 } 2812 2824 2813 2825 /*
+7 -4
drivers/hid/hid-multitouch.c
··· 264 264 static void mt_free_input_name(struct hid_input *hi) 265 265 { 266 266 struct hid_device *hdev = hi->report->device; 267 + const char *name = hi->input->name; 267 268 268 - if (hi->input->name != hdev->name) 269 - kfree(hi->input->name); 269 + if (name != hdev->name) { 270 + hi->input->name = hdev->name; 271 + kfree(name); 272 + } 270 273 } 271 274 272 275 static ssize_t mt_show_quirks(struct device *dev, ··· 1043 1040 struct hid_input *hi; 1044 1041 1045 1042 sysfs_remove_group(&hdev->dev.kobj, &mt_attribute_group); 1046 - hid_hw_stop(hdev); 1047 - 1048 1043 list_for_each_entry(hi, &hdev->inputs, list) 1049 1044 mt_free_input_name(hi); 1045 + 1046 + hid_hw_stop(hdev); 1050 1047 1051 1048 kfree(td); 1052 1049 hid_set_drvdata(hdev, NULL);
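The hid-multitouch fix saves the old name, repoints hi->input->name at hdev->name, and only then frees the allocation, so the input device never holds a dangling pointer (and the freeing now happens before hid_hw_stop() tears the inputs down). The repoint-before-free idea in isolation, with toy structs in place of the HID types:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct input  { const char *name; };
struct device { const char *name; struct input *input; };

/* Like mt_free_input_name() after the fix: repoint input->name at the
 * device's own (static) name *before* freeing the old allocation, so
 * nothing can observe a freed pointer; a second call is a no-op. */
static void free_input_name(struct device *dev)
{
    const char *name = dev->input->name;

    if (name != dev->name) {
        dev->input->name = dev->name;   /* repoint first */
        free((void *)name);             /* then free the old string */
    }
}
```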
+50 -8
drivers/hwmon/adm1021.c
··· 331 331 man_id = i2c_smbus_read_byte_data(client, ADM1021_REG_MAN_ID); 332 332 dev_id = i2c_smbus_read_byte_data(client, ADM1021_REG_DEV_ID); 333 333 334 + if (man_id < 0 || dev_id < 0) 335 + return -ENODEV; 336 + 334 337 if (man_id == 0x4d && dev_id == 0x01) 335 338 type_name = "max1617a"; 336 339 else if (man_id == 0x41) { 337 340 if ((dev_id & 0xF0) == 0x30) 338 341 type_name = "adm1023"; 339 - else 342 + else if ((dev_id & 0xF0) == 0x00) 340 343 type_name = "adm1021"; 344 + else 345 + return -ENODEV; 341 346 } else if (man_id == 0x49) 342 347 type_name = "thmc10"; 343 348 else if (man_id == 0x23) 344 349 type_name = "gl523sm"; 345 350 else if (man_id == 0x54) 346 351 type_name = "mc1066"; 347 - /* LM84 Mfr ID in a different place, and it has more unused bits */ 348 - else if (conv_rate == 0x00 349 - && (config & 0x7F) == 0x00 350 - && (status & 0xAB) == 0x00) 351 - type_name = "lm84"; 352 - else 353 - type_name = "max1617"; 352 + else { 353 + int lte, rte, lhi, rhi, llo, rlo; 354 + 355 + /* extra checks for LM84 and MAX1617 to avoid misdetections */ 356 + 357 + llo = i2c_smbus_read_byte_data(client, ADM1021_REG_THYST_R(0)); 358 + rlo = i2c_smbus_read_byte_data(client, ADM1021_REG_THYST_R(1)); 359 + 360 + /* fail if any of the additional register reads failed */ 361 + if (llo < 0 || rlo < 0) 362 + return -ENODEV; 363 + 364 + lte = i2c_smbus_read_byte_data(client, ADM1021_REG_TEMP(0)); 365 + rte = i2c_smbus_read_byte_data(client, ADM1021_REG_TEMP(1)); 366 + lhi = i2c_smbus_read_byte_data(client, ADM1021_REG_TOS_R(0)); 367 + rhi = i2c_smbus_read_byte_data(client, ADM1021_REG_TOS_R(1)); 368 + 369 + /* 370 + * Fail for negative temperatures and negative high limits. 371 + * This check also catches read errors on the tested registers. 
372 + */ 373 + if ((s8)lte < 0 || (s8)rte < 0 || (s8)lhi < 0 || (s8)rhi < 0) 374 + return -ENODEV; 375 + 376 + /* fail if all registers hold the same value */ 377 + if (lte == rte && lte == lhi && lte == rhi && lte == llo 378 + && lte == rlo) 379 + return -ENODEV; 380 + 381 + /* 382 + * LM84 Mfr ID is in a different place, 383 + * and it has more unused bits. 384 + */ 385 + if (conv_rate == 0x00 386 + && (config & 0x7F) == 0x00 387 + && (status & 0xAB) == 0x00) { 388 + type_name = "lm84"; 389 + } else { 390 + /* fail if low limits are larger than high limits */ 391 + if ((s8)llo > lhi || (s8)rlo > rhi) 392 + return -ENODEV; 393 + type_name = "max1617"; 394 + } 395 + } 354 396 355 397 pr_debug("Detected chip %s at adapter %d, address 0x%02x.\n", 356 398 type_name, i2c_adapter_id(adapter), client->addr);
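The extra adm1021 checks rely on casting raw register bytes to (s8): plausible temperatures and high limits have bit 7 clear, so a byte that comes back "negative" (including an SMBus error folded into the value) disqualifies the chip. A tiny sketch of that sign test; `plausible_temp` is a hypothetical helper, and the cast assumes the usual two's-complement behavior:

```c
#include <assert.h>
#include <stdint.h>

/* A raw 8-bit register read is plausible as a temperature or high
 * limit only if bit 7 is clear, i.e. the value is non-negative when
 * reinterpreted as a signed byte. */
static int plausible_temp(uint8_t raw)
{
    return (int8_t)raw >= 0;
}
```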
-1
drivers/md/bcache/Kconfig
··· 1 1 2 2 config BCACHE 3 3 tristate "Block device as cache" 4 - select CLOSURES 5 4 ---help--- 6 5 Allows a block device to be used as cache for other devices; uses 7 6 a btree for indexing and the layout is optimized for SSDs.
+1 -1
drivers/md/bcache/bcache.h
··· 1241 1241 struct cache_set *bch_cache_set_alloc(struct cache_sb *); 1242 1242 void bch_btree_cache_free(struct cache_set *); 1243 1243 int bch_btree_cache_alloc(struct cache_set *); 1244 - void bch_writeback_init_cached_dev(struct cached_dev *); 1244 + void bch_cached_dev_writeback_init(struct cached_dev *); 1245 1245 void bch_moving_init_cache_set(struct cache_set *); 1246 1246 1247 1247 void bch_cache_allocator_exit(struct cache *ca);
+16 -18
drivers/md/bcache/stats.c
··· 93 93 }; 94 94 static KTYPE(bch_stats); 95 95 96 - static void scale_accounting(unsigned long data); 97 - 98 - void bch_cache_accounting_init(struct cache_accounting *acc, 99 - struct closure *parent) 100 - { 101 - kobject_init(&acc->total.kobj, &bch_stats_ktype); 102 - kobject_init(&acc->five_minute.kobj, &bch_stats_ktype); 103 - kobject_init(&acc->hour.kobj, &bch_stats_ktype); 104 - kobject_init(&acc->day.kobj, &bch_stats_ktype); 105 - 106 - closure_init(&acc->cl, parent); 107 - init_timer(&acc->timer); 108 - acc->timer.expires = jiffies + accounting_delay; 109 - acc->timer.data = (unsigned long) acc; 110 - acc->timer.function = scale_accounting; 111 - add_timer(&acc->timer); 112 - } 113 - 114 96 int bch_cache_accounting_add_kobjs(struct cache_accounting *acc, 115 97 struct kobject *parent) 116 98 { ··· 225 243 struct cached_dev *dc = container_of(s->d, struct cached_dev, disk); 226 244 atomic_add(sectors, &dc->accounting.collector.sectors_bypassed); 227 245 atomic_add(sectors, &s->op.c->accounting.collector.sectors_bypassed); 246 + } 247 + 248 + void bch_cache_accounting_init(struct cache_accounting *acc, 249 + struct closure *parent) 250 + { 251 + kobject_init(&acc->total.kobj, &bch_stats_ktype); 252 + kobject_init(&acc->five_minute.kobj, &bch_stats_ktype); 253 + kobject_init(&acc->hour.kobj, &bch_stats_ktype); 254 + kobject_init(&acc->day.kobj, &bch_stats_ktype); 255 + 256 + closure_init(&acc->cl, parent); 257 + init_timer(&acc->timer); 258 + acc->timer.expires = jiffies + accounting_delay; 259 + acc->timer.data = (unsigned long) acc; 260 + acc->timer.function = scale_accounting; 261 + add_timer(&acc->timer); 228 262 }
+82 -103
drivers/md/bcache/super.c
··· 634 634 return 0; 635 635 } 636 636 637 - static int release_dev(struct gendisk *b, fmode_t mode) 637 + static void release_dev(struct gendisk *b, fmode_t mode) 638 638 { 639 639 struct bcache_device *d = b->private_data; 640 640 closure_put(&d->cl); 641 - return 0; 642 641 } 643 642 644 643 static int ioctl_dev(struct block_device *b, fmode_t mode, ··· 731 732 732 733 if (d->c) 733 734 bcache_device_detach(d); 734 - 735 - if (d->disk) 735 + if (d->disk && d->disk->flags & GENHD_FL_UP) 736 736 del_gendisk(d->disk); 737 737 if (d->disk && d->disk->queue) 738 738 blk_cleanup_queue(d->disk->queue); ··· 754 756 if (!(d->bio_split = bioset_create(4, offsetof(struct bbio, bio))) || 755 757 !(d->unaligned_bvec = mempool_create_kmalloc_pool(1, 756 758 sizeof(struct bio_vec) * BIO_MAX_PAGES)) || 757 - bio_split_pool_init(&d->bio_split_hook)) 758 - 759 - return -ENOMEM; 760 - 761 - d->disk = alloc_disk(1); 762 - if (!d->disk) 759 + bio_split_pool_init(&d->bio_split_hook) || 760 + !(d->disk = alloc_disk(1)) || 761 + !(q = blk_alloc_queue(GFP_KERNEL))) 763 762 return -ENOMEM; 764 763 765 764 snprintf(d->disk->disk_name, DISK_NAME_LEN, "bcache%i", bcache_minor); ··· 765 770 d->disk->first_minor = bcache_minor++; 766 771 d->disk->fops = &bcache_ops; 767 772 d->disk->private_data = d; 768 - 769 - q = blk_alloc_queue(GFP_KERNEL); 770 - if (!q) 771 - return -ENOMEM; 772 773 773 774 blk_queue_make_request(q, NULL); 774 775 d->disk->queue = q; ··· 990 999 991 1000 mutex_lock(&bch_register_lock); 992 1001 993 - bd_unlink_disk_holder(dc->bdev, dc->disk.disk); 1002 + if (atomic_read(&dc->running)) 1003 + bd_unlink_disk_holder(dc->bdev, dc->disk.disk); 994 1004 bcache_device_free(&dc->disk); 995 1005 list_del(&dc->list); 996 1006 997 1007 mutex_unlock(&bch_register_lock); 998 1008 999 1009 if (!IS_ERR_OR_NULL(dc->bdev)) { 1000 - blk_sync_queue(bdev_get_queue(dc->bdev)); 1010 + if (dc->bdev->bd_disk) 1011 + blk_sync_queue(bdev_get_queue(dc->bdev)); 1012 + 1001 1013 
blkdev_put(dc->bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL); 1002 1014 } 1003 1015 ··· 1022 1028 1023 1029 static int cached_dev_init(struct cached_dev *dc, unsigned block_size) 1024 1030 { 1025 - int err; 1031 + int ret; 1026 1032 struct io *io; 1027 - 1028 - closure_init(&dc->disk.cl, NULL); 1029 - set_closure_fn(&dc->disk.cl, cached_dev_flush, system_wq); 1033 + struct request_queue *q = bdev_get_queue(dc->bdev); 1030 1034 1031 1035 __module_get(THIS_MODULE); 1032 1036 INIT_LIST_HEAD(&dc->list); 1037 + closure_init(&dc->disk.cl, NULL); 1038 + set_closure_fn(&dc->disk.cl, cached_dev_flush, system_wq); 1033 1039 kobject_init(&dc->disk.kobj, &bch_cached_dev_ktype); 1034 - 1035 - bch_cache_accounting_init(&dc->accounting, &dc->disk.cl); 1036 - 1037 - err = bcache_device_init(&dc->disk, block_size); 1038 - if (err) 1039 - goto err; 1040 - 1041 - spin_lock_init(&dc->io_lock); 1042 - closure_init_unlocked(&dc->sb_write); 1043 1040 INIT_WORK(&dc->detach, cached_dev_detach_finish); 1041 + closure_init_unlocked(&dc->sb_write); 1042 + INIT_LIST_HEAD(&dc->io_lru); 1043 + spin_lock_init(&dc->io_lock); 1044 + bch_cache_accounting_init(&dc->accounting, &dc->disk.cl); 1044 1045 1045 1046 dc->sequential_merge = true; 1046 1047 dc->sequential_cutoff = 4 << 20; 1047 - 1048 - INIT_LIST_HEAD(&dc->io_lru); 1049 - dc->sb_bio.bi_max_vecs = 1; 1050 - dc->sb_bio.bi_io_vec = dc->sb_bio.bi_inline_vecs; 1051 1048 1052 1049 for (io = dc->io; io < dc->io + RECENT_IO; io++) { 1053 1050 list_add(&io->lru, &dc->io_lru); 1054 1051 hlist_add_head(&io->hash, dc->io_hash + RECENT_IO); 1055 1052 } 1056 1053 1057 - bch_writeback_init_cached_dev(dc); 1054 + ret = bcache_device_init(&dc->disk, block_size); 1055 + if (ret) 1056 + return ret; 1057 + 1058 + set_capacity(dc->disk.disk, 1059 + dc->bdev->bd_part->nr_sects - dc->sb.data_offset); 1060 + 1061 + dc->disk.disk->queue->backing_dev_info.ra_pages = 1062 + max(dc->disk.disk->queue->backing_dev_info.ra_pages, 1063 + q->backing_dev_info.ra_pages); 1064 + 
1065 + bch_cached_dev_request_init(dc); 1066 + bch_cached_dev_writeback_init(dc); 1058 1067 return 0; 1059 - err: 1060 - bcache_device_stop(&dc->disk); 1061 - return err; 1062 1068 } 1063 1069 1064 1070 /* Cached device - bcache superblock */ 1065 1071 1066 - static const char *register_bdev(struct cache_sb *sb, struct page *sb_page, 1072 + static void register_bdev(struct cache_sb *sb, struct page *sb_page, 1067 1073 struct block_device *bdev, 1068 1074 struct cached_dev *dc) 1069 1075 { 1070 1076 char name[BDEVNAME_SIZE]; 1071 1077 const char *err = "cannot allocate memory"; 1072 - struct gendisk *g; 1073 1078 struct cache_set *c; 1074 1079 1075 - if (!dc || cached_dev_init(dc, sb->block_size << 9) != 0) 1076 - return err; 1077 - 1078 1080 memcpy(&dc->sb, sb, sizeof(struct cache_sb)); 1079 - dc->sb_bio.bi_io_vec[0].bv_page = sb_page; 1080 1081 dc->bdev = bdev; 1081 1082 dc->bdev->bd_holder = dc; 1082 1083 1083 - g = dc->disk.disk; 1084 + bio_init(&dc->sb_bio); 1085 + dc->sb_bio.bi_max_vecs = 1; 1086 + dc->sb_bio.bi_io_vec = dc->sb_bio.bi_inline_vecs; 1087 + dc->sb_bio.bi_io_vec[0].bv_page = sb_page; 1088 + get_page(sb_page); 1084 1089 1085 - set_capacity(g, dc->bdev->bd_part->nr_sects - dc->sb.data_offset); 1086 - 1087 - g->queue->backing_dev_info.ra_pages = 1088 - max(g->queue->backing_dev_info.ra_pages, 1089 - bdev->bd_queue->backing_dev_info.ra_pages); 1090 - 1091 - bch_cached_dev_request_init(dc); 1090 + if (cached_dev_init(dc, sb->block_size << 9)) 1091 + goto err; 1092 1092 1093 1093 err = "error creating kobject"; 1094 1094 if (kobject_add(&dc->disk.kobj, &part_to_dev(bdev->bd_part)->kobj, ··· 1090 1102 goto err; 1091 1103 if (bch_cache_accounting_add_kobjs(&dc->accounting, &dc->disk.kobj)) 1092 1104 goto err; 1105 + 1106 + pr_info("registered backing device %s", bdevname(bdev, name)); 1093 1107 1094 1108 list_add(&dc->list, &uncached_devices); 1095 1109 list_for_each_entry(c, &bch_cache_sets, list) ··· 1101 1111 BDEV_STATE(&dc->sb) == BDEV_STATE_STALE) 
1102 1112 bch_cached_dev_run(dc); 1103 1113 1104 - return NULL; 1114 + return; 1105 1115 err: 1106 - kobject_put(&dc->disk.kobj); 1107 1116 pr_notice("error opening %s: %s", bdevname(bdev, name), err); 1108 - /* 1109 - * Return NULL instead of an error because kobject_put() cleans 1110 - * everything up 1111 - */ 1112 - return NULL; 1117 + bcache_device_stop(&dc->disk); 1113 1118 } 1114 1119 1115 1120 /* Flash only volumes */ ··· 1702 1717 size_t free; 1703 1718 struct bucket *b; 1704 1719 1705 - if (!ca) 1706 - return -ENOMEM; 1707 - 1708 1720 __module_get(THIS_MODULE); 1709 1721 kobject_init(&ca->kobj, &bch_cache_ktype); 1710 1722 1711 - memcpy(&ca->sb, sb, sizeof(struct cache_sb)); 1712 - 1713 1723 INIT_LIST_HEAD(&ca->discards); 1714 - 1715 - bio_init(&ca->sb_bio); 1716 - ca->sb_bio.bi_max_vecs = 1; 1717 - ca->sb_bio.bi_io_vec = ca->sb_bio.bi_inline_vecs; 1718 1724 1719 1725 bio_init(&ca->journal.bio); 1720 1726 ca->journal.bio.bi_max_vecs = 8; ··· 1718 1742 !init_fifo(&ca->free_inc, free << 2, GFP_KERNEL) || 1719 1743 !init_fifo(&ca->unused, free << 2, GFP_KERNEL) || 1720 1744 !init_heap(&ca->heap, free << 3, GFP_KERNEL) || 1721 - !(ca->buckets = vmalloc(sizeof(struct bucket) * 1745 + !(ca->buckets = vzalloc(sizeof(struct bucket) * 1722 1746 ca->sb.nbuckets)) || 1723 1747 !(ca->prio_buckets = kzalloc(sizeof(uint64_t) * prio_buckets(ca) * 1724 1748 2, GFP_KERNEL)) || 1725 1749 !(ca->disk_buckets = alloc_bucket_pages(GFP_KERNEL, ca)) || 1726 1750 !(ca->alloc_workqueue = alloc_workqueue("bch_allocator", 0, 1)) || 1727 1751 bio_split_pool_init(&ca->bio_split_hook)) 1728 - goto err; 1752 + return -ENOMEM; 1729 1753 1730 1754 ca->prio_last_buckets = ca->prio_buckets + prio_buckets(ca); 1731 1755 1732 - memset(ca->buckets, 0, ca->sb.nbuckets * sizeof(struct bucket)); 1733 1756 for_each_bucket(b, ca) 1734 1757 atomic_set(&b->pin, 0); 1735 1758 ··· 1741 1766 return -ENOMEM; 1742 1767 } 1743 1768 1744 - static const char *register_cache(struct cache_sb *sb, struct page 
*sb_page, 1769 + static void register_cache(struct cache_sb *sb, struct page *sb_page, 1745 1770 struct block_device *bdev, struct cache *ca) 1746 1771 { 1747 1772 char name[BDEVNAME_SIZE]; 1748 1773 const char *err = "cannot allocate memory"; 1749 1774 1750 - if (cache_alloc(sb, ca) != 0) 1751 - return err; 1752 - 1753 - ca->sb_bio.bi_io_vec[0].bv_page = sb_page; 1775 + memcpy(&ca->sb, sb, sizeof(struct cache_sb)); 1754 1776 ca->bdev = bdev; 1755 1777 ca->bdev->bd_holder = ca; 1756 1778 1779 + bio_init(&ca->sb_bio); 1780 + ca->sb_bio.bi_max_vecs = 1; 1781 + ca->sb_bio.bi_io_vec = ca->sb_bio.bi_inline_vecs; 1782 + ca->sb_bio.bi_io_vec[0].bv_page = sb_page; 1783 + get_page(sb_page); 1784 + 1757 1785 if (blk_queue_discard(bdev_get_queue(ca->bdev))) 1758 1786 ca->discard = CACHE_DISCARD(&ca->sb); 1787 + 1788 + if (cache_alloc(sb, ca) != 0) 1789 + goto err; 1759 1790 1760 1791 err = "error creating kobject"; 1761 1792 if (kobject_add(&ca->kobj, &part_to_dev(bdev->bd_part)->kobj, "bcache")) ··· 1772 1791 goto err; 1773 1792 1774 1793 pr_info("registered cache device %s", bdevname(bdev, name)); 1775 - 1776 - return NULL; 1794 + return; 1777 1795 err: 1796 + pr_notice("error opening %s: %s", bdevname(bdev, name), err); 1778 1797 kobject_put(&ca->kobj); 1779 - pr_info("error opening %s: %s", bdevname(bdev, name), err); 1780 - /* Return NULL instead of an error because kobject_put() cleans 1781 - * everything up 1782 - */ 1783 - return NULL; 1784 1798 } 1785 1799 1786 1800 /* Global interfaces/init */ ··· 1809 1833 bdev = blkdev_get_by_path(strim(path), 1810 1834 FMODE_READ|FMODE_WRITE|FMODE_EXCL, 1811 1835 sb); 1812 - if (bdev == ERR_PTR(-EBUSY)) 1813 - err = "device busy"; 1814 - 1815 - if (IS_ERR(bdev) || 1816 - set_blocksize(bdev, 4096)) 1836 + if (IS_ERR(bdev)) { 1837 + if (bdev == ERR_PTR(-EBUSY)) 1838 + err = "device busy"; 1817 1839 goto err; 1840 + } 1841 + 1842 + err = "failed to set blocksize"; 1843 + if (set_blocksize(bdev, 4096)) 1844 + goto err_close; 1818 
1845 1819 1846 err = read_super(sb, bdev, &sb_page); 1820 1847 if (err) ··· 1825 1846 1826 1847 if (SB_IS_BDEV(sb)) { 1827 1848 struct cached_dev *dc = kzalloc(sizeof(*dc), GFP_KERNEL); 1849 + if (!dc) 1850 + goto err_close; 1828 1851 1829 - err = register_bdev(sb, sb_page, bdev, dc); 1852 + register_bdev(sb, sb_page, bdev, dc); 1830 1853 } else { 1831 1854 struct cache *ca = kzalloc(sizeof(*ca), GFP_KERNEL); 1855 + if (!ca) 1856 + goto err_close; 1832 1857 1833 - err = register_cache(sb, sb_page, bdev, ca); 1858 + register_cache(sb, sb_page, bdev, ca); 1834 1859 } 1835 - 1836 - if (err) { 1837 - /* register_(bdev|cache) will only return an error if they 1838 - * didn't get far enough to create the kobject - if they did, 1839 - * the kobject destructor will do this cleanup. 1840 - */ 1860 + out: 1861 + if (sb_page) 1841 1862 put_page(sb_page); 1842 - err_close: 1843 - blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL); 1844 - err: 1845 - if (attr != &ksysfs_register_quiet) 1846 - pr_info("error opening %s: %s", path, err); 1847 - ret = -EINVAL; 1848 - } 1849 - 1850 1863 kfree(sb); 1851 1864 kfree(path); 1852 1865 mutex_unlock(&bch_register_lock); 1853 1866 module_put(THIS_MODULE); 1854 1867 return ret; 1868 + 1869 + err_close: 1870 + blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL); 1871 + err: 1872 + if (attr != &ksysfs_register_quiet) 1873 + pr_info("error opening %s: %s", path, err); 1874 + ret = -EINVAL; 1875 + goto out; 1855 1876 } 1856 1877 1857 1878 static int bcache_reboot(struct notifier_block *n, unsigned long code, void *x)
+1 -1
drivers/md/bcache/writeback.c
··· 375 375 refill_dirty(cl); 376 376 } 377 377 378 - void bch_writeback_init_cached_dev(struct cached_dev *dc) 378 + void bch_cached_dev_writeback_init(struct cached_dev *dc) 379 379 { 380 380 closure_init_unlocked(&dc->writeback); 381 381 init_rwsem(&dc->writeback_lock);
+1 -1
drivers/md/md.c
··· 5268 5268 5269 5269 static void __md_stop_writes(struct mddev *mddev) 5270 5270 { 5271 + set_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 5271 5272 if (mddev->sync_thread) { 5272 - set_bit(MD_RECOVERY_FROZEN, &mddev->recovery); 5273 5273 set_bit(MD_RECOVERY_INTR, &mddev->recovery); 5274 5274 md_reap_sync_thread(mddev); 5275 5275 }
+24 -14
drivers/md/raid1.c
··· 417 417 418 418 r1_bio->bios[mirror] = NULL; 419 419 to_put = bio; 420 - set_bit(R1BIO_Uptodate, &r1_bio->state); 420 + /* 421 + * Do not set R1BIO_Uptodate if the current device is 422 + * rebuilding or Faulty. This is because we cannot use 423 + * such device for properly reading the data back (we could 424 + * potentially use it, if the current write would have felt 425 + * before rdev->recovery_offset, but for simplicity we don't 426 + * check this here. 427 + */ 428 + if (test_bit(In_sync, &conf->mirrors[mirror].rdev->flags) && 429 + !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags)) 430 + set_bit(R1BIO_Uptodate, &r1_bio->state); 421 431 422 432 /* Maybe we can clear some bad blocks. */ 423 433 if (is_badblock(conf->mirrors[mirror].rdev, ··· 880 870 wake_up(&conf->wait_barrier); 881 871 } 882 872 883 - static void freeze_array(struct r1conf *conf) 873 + static void freeze_array(struct r1conf *conf, int extra) 884 874 { 885 875 /* stop syncio and normal IO and wait for everything to 886 876 * go quite. 887 877 * We increment barrier and nr_waiting, and then 888 - * wait until nr_pending match nr_queued+1 878 + * wait until nr_pending match nr_queued+extra 889 879 * This is called in the context of one normal IO request 890 880 * that has failed. Thus any sync request that might be pending 891 881 * will be blocked by nr_pending, and we need to wait for 892 882 * pending IO requests to complete or be queued for re-try. 893 - * Thus the number queued (nr_queued) plus this request (1) 883 + * Thus the number queued (nr_queued) plus this request (extra) 894 884 * must match the number of pending IOs (nr_pending) before 895 885 * we continue. 
896 886 */ ··· 898 888 conf->barrier++; 899 889 conf->nr_waiting++; 900 890 wait_event_lock_irq_cmd(conf->wait_barrier, 901 - conf->nr_pending == conf->nr_queued+1, 891 + conf->nr_pending == conf->nr_queued+extra, 902 892 conf->resync_lock, 903 893 flush_pending_writes(conf)); 904 894 spin_unlock_irq(&conf->resync_lock); ··· 1554 1544 * we wait for all outstanding requests to complete. 1555 1545 */ 1556 1546 synchronize_sched(); 1557 - raise_barrier(conf); 1558 - lower_barrier(conf); 1547 + freeze_array(conf, 0); 1548 + unfreeze_array(conf); 1559 1549 clear_bit(Unmerged, &rdev->flags); 1560 1550 } 1561 1551 md_integrity_add_rdev(rdev, mddev); ··· 1605 1595 */ 1606 1596 struct md_rdev *repl = 1607 1597 conf->mirrors[conf->raid_disks + number].rdev; 1608 - raise_barrier(conf); 1598 + freeze_array(conf, 0); 1609 1599 clear_bit(Replacement, &repl->flags); 1610 1600 p->rdev = repl; 1611 1601 conf->mirrors[conf->raid_disks + number].rdev = NULL; 1612 - lower_barrier(conf); 1602 + unfreeze_array(conf); 1613 1603 clear_bit(WantReplacement, &rdev->flags); 1614 1604 } else 1615 1605 clear_bit(WantReplacement, &rdev->flags); ··· 2205 2195 * frozen 2206 2196 */ 2207 2197 if (mddev->ro == 0) { 2208 - freeze_array(conf); 2198 + freeze_array(conf, 1); 2209 2199 fix_read_error(conf, r1_bio->read_disk, 2210 2200 r1_bio->sector, r1_bio->sectors); 2211 2201 unfreeze_array(conf); ··· 2790 2780 return PTR_ERR(conf); 2791 2781 2792 2782 if (mddev->queue) 2793 - blk_queue_max_write_same_sectors(mddev->queue, 2794 - mddev->chunk_sectors); 2783 + blk_queue_max_write_same_sectors(mddev->queue, 0); 2784 + 2795 2785 rdev_for_each(rdev, mddev) { 2796 2786 if (!mddev->gendisk) 2797 2787 continue; ··· 2973 2963 return -ENOMEM; 2974 2964 } 2975 2965 2976 - raise_barrier(conf); 2966 + freeze_array(conf, 0); 2977 2967 2978 2968 /* ok, everything is stopped */ 2979 2969 oldpool = conf->r1bio_pool; ··· 3004 2994 conf->raid_disks = mddev->raid_disks = raid_disks; 3005 2995 mddev->delta_disks = 0; 3006 
2996 3007 - lower_barrier(conf); 2997 + unfreeze_array(conf); 3008 2998 3009 2999 set_bit(MD_RECOVERY_NEEDED, &mddev->recovery); 3010 3000 md_wakeup_thread(mddev->thread);
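The raid1 hunks above generalize freeze_array() to take an `extra` argument: a caller freezing the array from the context of its own failed request accounts for itself (`extra = 1`), while callers outside any request context (which previously used raise_barrier()/lower_barrier() pairs) pass `extra = 0`. A minimal single-threaded model of the wait condition, with hypothetical names standing in for the r1conf fields:

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the r1conf counters involved in freeze_array().
 * This is an illustrative sketch of the accounting, not the kernel
 * implementation (which waits on these counters under resync_lock). */
struct conf_model {
    int nr_pending;  /* requests submitted and not yet completed */
    int nr_queued;   /* failed requests parked for retry */
};

/* freeze_array(conf, extra) may proceed only when every pending request
 * is accounted for: either queued for retry, or one of the `extra`
 * in-flight requests on whose behalf we are freezing (1 when called
 * from a failed request's own context, 0 otherwise). */
static bool frozen(const struct conf_model *c, int extra)
{
    return c->nr_pending == c->nr_queued + extra;
}
```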
+19 -10
drivers/md/raid10.c
··· 490 490 sector_t first_bad; 491 491 int bad_sectors; 492 492 493 - set_bit(R10BIO_Uptodate, &r10_bio->state); 493 + /* 494 + * Do not set R10BIO_Uptodate if the current device is 495 + * rebuilding or Faulty. This is because we cannot use 496 + * such device for properly reading the data back (we could 497 + * potentially use it, if the current write would have felt 498 + * before rdev->recovery_offset, but for simplicity we don't 499 + * check this here. 500 + */ 501 + if (test_bit(In_sync, &rdev->flags) && 502 + !test_bit(Faulty, &rdev->flags)) 503 + set_bit(R10BIO_Uptodate, &r10_bio->state); 494 504 495 505 /* Maybe we can clear some bad blocks. */ 496 506 if (is_badblock(rdev, ··· 1065 1055 wake_up(&conf->wait_barrier); 1066 1056 } 1067 1057 1068 - static void freeze_array(struct r10conf *conf) 1058 + static void freeze_array(struct r10conf *conf, int extra) 1069 1059 { 1070 1060 /* stop syncio and normal IO and wait for everything to 1071 1061 * go quiet. 1072 1062 * We increment barrier and nr_waiting, and then 1073 - * wait until nr_pending match nr_queued+1 1063 + * wait until nr_pending match nr_queued+extra 1074 1064 * This is called in the context of one normal IO request 1075 1065 * that has failed. Thus any sync request that might be pending 1076 1066 * will be blocked by nr_pending, and we need to wait for 1077 1067 * pending IO requests to complete or be queued for re-try. 1078 - * Thus the number queued (nr_queued) plus this request (1) 1068 + * Thus the number queued (nr_queued) plus this request (extra) 1079 1069 * must match the number of pending IOs (nr_pending) before 1080 1070 * we continue. 
1081 1071 */ ··· 1083 1073 conf->barrier++; 1084 1074 conf->nr_waiting++; 1085 1075 wait_event_lock_irq_cmd(conf->wait_barrier, 1086 - conf->nr_pending == conf->nr_queued+1, 1076 + conf->nr_pending == conf->nr_queued+extra, 1087 1077 conf->resync_lock, 1088 1078 flush_pending_writes(conf)); 1089 1079 ··· 1847 1837 * we wait for all outstanding requests to complete. 1848 1838 */ 1849 1839 synchronize_sched(); 1850 - raise_barrier(conf, 0); 1851 - lower_barrier(conf); 1840 + freeze_array(conf, 0); 1841 + unfreeze_array(conf); 1852 1842 clear_bit(Unmerged, &rdev->flags); 1853 1843 } 1854 1844 md_integrity_add_rdev(rdev, mddev); ··· 2622 2612 r10_bio->devs[slot].bio = NULL; 2623 2613 2624 2614 if (mddev->ro == 0) { 2625 - freeze_array(conf); 2615 + freeze_array(conf, 1); 2626 2616 fix_read_error(conf, mddev, r10_bio); 2627 2617 unfreeze_array(conf); 2628 2618 } else ··· 3619 3609 if (mddev->queue) { 3620 3610 blk_queue_max_discard_sectors(mddev->queue, 3621 3611 mddev->chunk_sectors); 3622 - blk_queue_max_write_same_sectors(mddev->queue, 3623 - mddev->chunk_sectors); 3612 + blk_queue_max_write_same_sectors(mddev->queue, 0); 3624 3613 blk_queue_io_min(mddev->queue, chunk_size); 3625 3614 if (conf->geo.raid_disks % conf->geo.near_copies) 3626 3615 blk_queue_io_opt(mddev->queue, chunk_size * conf->geo.raid_disks);
+5 -1
drivers/md/raid5.c
··· 664 664 if (test_bit(R5_ReadNoMerge, &sh->dev[i].flags)) 665 665 bi->bi_rw |= REQ_FLUSH; 666 666 667 + bi->bi_vcnt = 1; 667 668 bi->bi_io_vec[0].bv_len = STRIPE_SIZE; 668 669 bi->bi_io_vec[0].bv_offset = 0; 669 670 bi->bi_size = STRIPE_SIZE; ··· 702 701 else 703 702 rbi->bi_sector = (sh->sector 704 703 + rrdev->data_offset); 704 + rbi->bi_vcnt = 1; 705 705 rbi->bi_io_vec[0].bv_len = STRIPE_SIZE; 706 706 rbi->bi_io_vec[0].bv_offset = 0; 707 707 rbi->bi_size = STRIPE_SIZE; ··· 5466 5464 if (mddev->major_version == 0 && 5467 5465 mddev->minor_version > 90) 5468 5466 rdev->recovery_offset = reshape_offset; 5469 - 5467 + 5470 5468 if (rdev->recovery_offset < reshape_offset) { 5471 5469 /* We need to check old and new layout */ 5472 5470 if (!only_parity(rdev->raid_disk, ··· 5588 5586 * guarantee discard_zerors_data 5589 5587 */ 5590 5588 mddev->queue->limits.discard_zeroes_data = 0; 5589 + 5590 + blk_queue_max_write_same_sectors(mddev->queue, 0); 5591 5591 5592 5592 rdev_for_each(rdev, mddev) { 5593 5593 disk_stack_limits(mddev->gendisk, rdev->bdev,
+2 -2
drivers/misc/mei/init.c
··· 197 197 { 198 198 dev_dbg(&dev->pdev->dev, "stopping the device.\n"); 199 199 200 + flush_scheduled_work(); 201 + 200 202 mutex_lock(&dev->device_lock); 201 203 202 204 cancel_delayed_work(&dev->timer_work); ··· 211 209 mei_reset(dev, 0); 212 210 213 211 mutex_unlock(&dev->device_lock); 214 - 215 - flush_scheduled_work(); 216 212 217 213 mei_watchdog_unregister(dev); 218 214 }
+2
drivers/misc/mei/nfc.c
··· 142 142 mei_cl_unlink(ndev->cl_info); 143 143 kfree(ndev->cl_info); 144 144 } 145 + 146 + memset(ndev, 0, sizeof(struct mei_nfc_dev)); 145 147 } 146 148 147 149 static int mei_nfc_build_bus_name(struct mei_nfc_dev *ndev)
+1
drivers/misc/mei/pci-me.c
··· 325 325 326 326 mutex_lock(&dev->device_lock); 327 327 dev->dev_state = MEI_DEV_POWER_UP; 328 + mei_clear_interrupts(dev); 328 329 mei_reset(dev, 1); 329 330 mutex_unlock(&dev->device_lock); 330 331
+1
drivers/misc/sgi-gru/grufile.c
··· 172 172 nodesperblade = 2; 173 173 else 174 174 nodesperblade = 1; 175 + memset(&info, 0, sizeof(info)); 175 176 info.cpus = num_online_cpus(); 176 177 info.nodes = num_online_nodes(); 177 178 info.blades = info.nodes / nodesperblade;
+17 -6
drivers/net/bonding/bond_main.c
··· 764 764 struct net_device *bond_dev, *vlan_dev, *upper_dev; 765 765 struct vlan_entry *vlan; 766 766 767 - rcu_read_lock(); 768 767 read_lock(&bond->lock); 768 + rcu_read_lock(); 769 769 770 770 bond_dev = bond->dev; 771 771 ··· 787 787 if (vlan_dev) 788 788 __bond_resend_igmp_join_requests(vlan_dev); 789 789 } 790 - 791 - if (--bond->igmp_retrans > 0) 792 - queue_delayed_work(bond->wq, &bond->mcast_work, HZ/5); 793 - 794 - read_unlock(&bond->lock); 795 790 rcu_read_unlock(); 791 + 792 + /* We use curr_slave_lock to protect against concurrent access to 793 + * igmp_retrans from multiple running instances of this function and 794 + * bond_change_active_slave 795 + */ 796 + write_lock_bh(&bond->curr_slave_lock); 797 + if (bond->igmp_retrans > 1) { 798 + bond->igmp_retrans--; 799 + queue_delayed_work(bond->wq, &bond->mcast_work, HZ/5); 800 + } 801 + write_unlock_bh(&bond->curr_slave_lock); 802 + read_unlock(&bond->lock); 796 803 } 797 804 798 805 static void bond_resend_igmp_join_requests_delayed(struct work_struct *work) ··· 1964 1957 1965 1958 err_undo_flags: 1966 1959 bond_compute_features(bond); 1960 + /* Enslave of first slave has failed and we need to fix master's mac */ 1961 + if (bond->slave_cnt == 0 && 1962 + ether_addr_equal(bond_dev->dev_addr, slave_dev->dev_addr)) 1963 + eth_hw_addr_random(bond_dev); 1967 1964 1968 1965 return res; 1969 1966 }
+1 -1
drivers/net/bonding/bonding.h
··· 225 225 rwlock_t curr_slave_lock; 226 226 u8 send_peer_notif; 227 227 s8 setup_by_slave; 228 - s8 igmp_retrans; 228 + u8 igmp_retrans; 229 229 #ifdef CONFIG_PROC_FS 230 230 struct proc_dir_entry *proc_entry; 231 231 char proc_file_name[IFNAMSIZ];
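The two bonding changes above go together: igmp_retrans becomes unsigned, and the resend path only decrements while the count is still above 1, under curr_slave_lock. A single-threaded sketch of the fixed counter logic (the lock is elided here and noted in comments; struct and function names are hypothetical stand-ins):

```c
#include <assert.h>

/* Sketch of the fixed igmp_retrans handling. The field is u8 after the
 * fix (s8 before), and the real code does the test-and-decrement under
 * curr_slave_lock. */
struct bond_model {
    unsigned char igmp_retrans; /* u8 after the fix, s8 before */
    int rearmed;                /* times the delayed work was re-queued */
};

/* One pass of the resend work: re-arm only while more than one
 * retransmission remains, so the counter is never decremented past 1
 * and, being unsigned, can never go negative. */
static void resend_pass(struct bond_model *b)
{
    /* write_lock_bh(&bond->curr_slave_lock) in the kernel */
    if (b->igmp_retrans > 1) {
        b->igmp_retrans--;
        b->rearmed++;
    }
    /* write_unlock_bh(&bond->curr_slave_lock) */
}
```

The old unlocked `if (--bond->igmp_retrans > 0)` let two concurrent passes both decrement, and with a signed s8 a configured value above 127 was already negative before the first decrement.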
+10
drivers/net/ethernet/broadcom/tg3.c
··· 1800 1800 int i; 1801 1801 u32 val; 1802 1802 1803 + if (tg3_flag(tp, NO_FWARE_REPORTED)) 1804 + return 0; 1805 + 1803 1806 if (tg3_flag(tp, IS_SSB_CORE)) { 1804 1807 /* We don't use firmware. */ 1805 1808 return 0; ··· 10407 10404 */ 10408 10405 static int tg3_init_hw(struct tg3 *tp, bool reset_phy) 10409 10406 { 10407 + /* Chip may have been just powered on. If so, the boot code may still 10408 + * be running initialization. Wait for it to finish to avoid races in 10409 + * accessing the hardware. 10410 + */ 10411 + tg3_enable_register_access(tp); 10412 + tg3_poll_fw(tp); 10413 + 10410 10414 tg3_switch_clocks(tp); 10411 10415 10412 10416 tw32(TG3PCI_MEM_WIN_BASE_ADDR, 0);
+6
drivers/net/ethernet/dec/tulip/interrupt.c
··· 76 76 77 77 mapping = pci_map_single(tp->pdev, skb->data, PKT_BUF_SZ, 78 78 PCI_DMA_FROMDEVICE); 79 + if (dma_mapping_error(&tp->pdev->dev, mapping)) { 80 + dev_kfree_skb(skb); 81 + tp->rx_buffers[entry].skb = NULL; 82 + break; 83 + } 84 + 79 85 tp->rx_buffers[entry].mapping = mapping; 80 86 81 87 tp->rx_ring[entry].buffer1 = cpu_to_le32(mapping);
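The tulip refill path above now checks the result of pci_map_single() with dma_mapping_error() and, on failure, frees the skb, clears the ring slot, and stops refilling rather than publishing a bogus bus address to the hardware. A sketch of that refill pattern, with a stub standing in for the mapping call:

```c
#include <assert.h>
#include <stdint.h>

#define DMA_ERROR ((uint64_t)-1)

/* Stub standing in for pci_map_single(); fails on demand so the error
 * path can be exercised. */
static uint64_t toy_map(const void *buf, int fail)
{
    return fail ? DMA_ERROR : (uint64_t)(uintptr_t)buf;
}

/* Ring-refill sketch mirroring the tulip fix: on mapping failure,
 * release the buffer, leave the slot empty, and stop refilling instead
 * of handing the device an invalid bus address. `fail_at` is a
 * hypothetical knob selecting which mapping fails (-1 for none). */
static int refill(uint64_t ring[], void *bufs[], int n, int fail_at)
{
    int filled = 0;

    for (int i = 0; i < n; i++) {
        uint64_t mapping = toy_map(bufs[i], i == fail_at);
        if (mapping == DMA_ERROR) {
            bufs[i] = 0;     /* dev_kfree_skb() + NULL slot in the driver */
            break;
        }
        ring[i] = mapping;
        filled++;
    }
    return filled;
}
```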
+3
drivers/net/ethernet/emulex/benet/be_main.c
··· 4262 4262 netdev->features |= NETIF_F_HIGHDMA; 4263 4263 } else { 4264 4264 status = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)); 4265 + if (!status) 4266 + status = dma_set_coherent_mask(&pdev->dev, 4267 + DMA_BIT_MASK(32)); 4265 4268 if (status) { 4266 4269 dev_err(&pdev->dev, "Could not set PCI DMA Mask\n"); 4267 4270 goto free_netdev;
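The be2net fix above is about keeping the streaming and coherent DMA masks in step: after falling back to a 32-bit dma_set_mask(), the driver must also call dma_set_coherent_mask(), or coherent allocations keep a mask mismatched with the streaming one (later kernels provide dma_set_mask_and_coherent() for exactly this). A toy model of the fallback sequence; the `toy_*` stubs are hypothetical stand-ins for the DMA API:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

struct toy_dev { uint64_t dma_mask, coherent_mask; int supports_64; };

/* Stubs standing in for dma_set_mask() / dma_set_coherent_mask(). */
static int toy_set_mask(struct toy_dev *d, uint64_t m)
{
    if (m > DMA_BIT_MASK(32) && !d->supports_64)
        return -EIO;
    d->dma_mask = m;
    return 0;
}

static int toy_set_coherent_mask(struct toy_dev *d, uint64_t m)
{
    if (m > DMA_BIT_MASK(32) && !d->supports_64)
        return -EIO;
    d->coherent_mask = m;
    return 0;
}

/* Fallback sequence mirroring the fixed be_probe(): try 64-bit for both
 * masks, fall back to 32-bit for both on failure. */
static int toy_probe(struct toy_dev *d)
{
    if (!toy_set_mask(d, DMA_BIT_MASK(64)) &&
        !toy_set_coherent_mask(d, DMA_BIT_MASK(64)))
        return 0;
    if (!toy_set_mask(d, DMA_BIT_MASK(32)))
        return toy_set_coherent_mask(d, DMA_BIT_MASK(32));
    return -EIO;
}
```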
+13 -6
drivers/net/ethernet/renesas/sh_eth.c
··· 897 897 mdelay(1); 898 898 cnt--; 899 899 } 900 - if (cnt < 0) { 901 - pr_err("Device reset fail\n"); 900 + if (cnt <= 0) { 901 + pr_err("Device reset failed\n"); 902 902 ret = -ETIMEDOUT; 903 903 } 904 904 return ret; ··· 1401 1401 desc_status = edmac_to_cpu(mdp, rxdesc->status); 1402 1402 pkt_len = rxdesc->frame_length; 1403 1403 1404 - #if defined(CONFIG_ARCH_R8A7740) 1405 - desc_status >>= 16; 1406 - #endif 1407 - 1408 1404 if (--boguscnt < 0) 1409 1405 break; 1410 1406 1411 1407 if (!(desc_status & RDFEND)) 1412 1408 ndev->stats.rx_length_errors++; 1409 + 1410 + #if defined(CONFIG_ARCH_R8A7740) 1411 + /* 1412 + * In case of almost all GETHER/ETHERs, the Receive Frame State 1413 + * (RFS) bits in the Receive Descriptor 0 are from bit 9 to 1414 + * bit 0. However, in case of the R8A7740's GETHER, the RFS 1415 + * bits are from bit 25 to bit 16. So, the driver needs right 1416 + * shifting by 16. 1417 + */ 1418 + desc_status >>= 16; 1419 + #endif 1413 1420 1414 1421 if (desc_status & (RD_RFS1 | RD_RFS2 | RD_RFS3 | RD_RFS4 | 1415 1422 RD_RFS5 | RD_RFS6 | RD_RFS10)) {
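The sh_eth reset fix above is the classic countdown off-by-one: the loop decrements `cnt` and exits when it reaches exactly 0, so the failure test must be `cnt <= 0`, not `cnt < 0`, which could never fire. A minimal reproduction; `ready_after` is a hypothetical stand-in for how many 1 ms polls the hardware needs before the reset bit clears:

```c
#include <assert.h>
#include <errno.h>

/* Countdown-poll sketch of the sh_eth reset wait. `budget` models the
 * driver's initial cnt (100 polls of 1 ms). */
static int wait_for_reset(int budget, int ready_after)
{
    int cnt = budget;

    while (cnt > 0) {
        if (--ready_after <= 0)
            return 0;        /* reset bit cleared: success */
        /* mdelay(1) in the driver */
        cnt--;
    }
    /* cnt is exactly 0 when the budget runs out, so the original
     * `if (cnt < 0)` never reported the timeout; the fix tests <= 0. */
    if (cnt <= 0)
        return -ETIMEDOUT;
    return 0;
}
```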
+1 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 1899 1899 1900 1900 #ifdef STMMAC_XMIT_DEBUG 1901 1901 if (netif_msg_pktdata(priv)) { 1902 - pr_info("%s: curr %d dirty=%d entry=%d, first=%p, nfrags=%d" 1902 + pr_info("%s: curr %d dirty=%d entry=%d, first=%p, nfrags=%d", 1903 1903 __func__, (priv->cur_tx % txsize), 1904 1904 (priv->dirty_tx % txsize), entry, first, nfrags); 1905 1905 if (priv->extend_desc)
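The stmmac one-character fix above restores the comma between the format string and `__func__`. Adjacent string *literals* are concatenated at compile time, which is why a dropped comma before a literal argument compiles silently and shifts every later argument one conversion to the left; here `__func__` is a predefined identifier array rather than a literal, so the missing comma surfaced only as a build failure once STMMAC_XMIT_DEBUG was defined. A sketch of the concatenation rule:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Adjacent string literals are one literal: "curr %d dirty=%d" "\n" is
 * the same token as "curr %d dirty=%d\n". This is what makes a missing
 * comma between a format literal and a following *literal* compile
 * without complaint while silently consuming it into the format. */
static int demo(char *buf, size_t len)
{
    return snprintf(buf, len, "curr %d dirty=%d" "\n", 4, 2);
}
```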
+5 -9
drivers/net/ethernet/ti/davinci_mdio.c
··· 449 449 __raw_writel(ctrl, &data->regs->control); 450 450 wait_for_idle(data); 451 451 452 - pm_runtime_put_sync(data->dev); 453 - 454 452 data->suspended = true; 455 453 spin_unlock(&data->lock); 454 + pm_runtime_put_sync(data->dev); 456 455 457 456 return 0; 458 457 } ··· 459 460 static int davinci_mdio_resume(struct device *dev) 460 461 { 461 462 struct davinci_mdio_data *data = dev_get_drvdata(dev); 462 - u32 ctrl; 463 463 464 - spin_lock(&data->lock); 465 464 pm_runtime_get_sync(data->dev); 466 465 466 + spin_lock(&data->lock); 467 467 /* restart the scan state machine */ 468 - ctrl = __raw_readl(&data->regs->control); 469 - ctrl |= CONTROL_ENABLE; 470 - __raw_writel(ctrl, &data->regs->control); 468 + __davinci_mdio_reset(data); 471 469 472 470 data->suspended = false; 473 471 spin_unlock(&data->lock); ··· 473 477 } 474 478 475 479 static const struct dev_pm_ops davinci_mdio_pm_ops = { 476 - .suspend = davinci_mdio_suspend, 477 - .resume = davinci_mdio_resume, 480 + .suspend_late = davinci_mdio_suspend, 481 + .resume_early = davinci_mdio_resume, 478 482 }; 479 483 480 484 static const struct of_device_id davinci_mdio_of_mtable[] = {
+12 -6
drivers/net/macvlan.c
··· 853 853 struct nlattr *tb[], struct nlattr *data[]) 854 854 { 855 855 struct macvlan_dev *vlan = netdev_priv(dev); 856 - if (data && data[IFLA_MACVLAN_MODE]) 857 - vlan->mode = nla_get_u32(data[IFLA_MACVLAN_MODE]); 856 + 858 857 if (data && data[IFLA_MACVLAN_FLAGS]) { 859 858 __u16 flags = nla_get_u16(data[IFLA_MACVLAN_FLAGS]); 860 859 bool promisc = (flags ^ vlan->flags) & MACVLAN_FLAG_NOPROMISC; 860 + if (vlan->port->passthru && promisc) { 861 + int err; 861 862 862 - if (promisc && (flags & MACVLAN_FLAG_NOPROMISC)) 863 - dev_set_promiscuity(vlan->lowerdev, -1); 864 - else if (promisc && !(flags & MACVLAN_FLAG_NOPROMISC)) 865 - dev_set_promiscuity(vlan->lowerdev, 1); 863 + if (flags & MACVLAN_FLAG_NOPROMISC) 864 + err = dev_set_promiscuity(vlan->lowerdev, -1); 865 + else 866 + err = dev_set_promiscuity(vlan->lowerdev, 1); 867 + if (err < 0) 868 + return err; 869 + } 866 870 vlan->flags = flags; 867 871 } 872 + if (data && data[IFLA_MACVLAN_MODE]) 873 + vlan->mode = nla_get_u32(data[IFLA_MACVLAN_MODE]); 868 874 return 0; 869 875 } 870 876
+1 -1
drivers/net/team/team.c
··· 1092 1092 } 1093 1093 1094 1094 port->index = -1; 1095 - team_port_enable(team, port); 1096 1095 list_add_tail_rcu(&port->list, &team->port_list); 1096 + team_port_enable(team, port); 1097 1097 __team_compute_features(team); 1098 1098 __team_port_change_port_added(port, !!netif_carrier_ok(port_dev)); 1099 1099 __team_options_change_check(team);
+2
drivers/net/team/team_mode_random.c
··· 28 28 29 29 port_index = random_N(team->en_port_count); 30 30 port = team_get_port_by_index_rcu(team, port_index); 31 + if (unlikely(!port)) 32 + goto drop; 31 33 port = team_get_first_port_txable_rcu(team, port); 32 34 if (unlikely(!port)) 33 35 goto drop;
+2
drivers/net/team/team_mode_roundrobin.c
··· 32 32 33 33 port_index = rr_priv(team)->sent_packets++ % team->en_port_count; 34 34 port = team_get_port_by_index_rcu(team, port_index); 35 + if (unlikely(!port)) 36 + goto drop; 35 37 port = team_get_first_port_txable_rcu(team, port); 36 38 if (unlikely(!port)) 37 39 goto drop;
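Both team mode fixes above add the same guard: `en_port_count` is read without a lock, so by the time the modulo index is used the port set may have shrunk and team_get_port_by_index_rcu() can return NULL; every lookup step needs a NULL check before the packet is committed. A sketch with a plain array standing in for the RCU-protected port list (all names hypothetical):

```c
#include <assert.h>
#include <stddef.h>

struct port { int txable; };

/* Index lookup that, like team_get_port_by_index_rcu(), returns NULL
 * when the index is out of range for the *current* table. */
static struct port *get_port_by_index(struct port **tbl, unsigned count,
                                      unsigned index)
{
    return index < count ? tbl[index] : NULL;
}

/* Transmit-path sketch: `stale_count` models an en_port_count read
 * before the table shrank. Returns 0 on success, -1 when the packet
 * must be dropped (the `goto drop` in the team code). */
static int pick_and_send(struct port **tbl, unsigned cur_count,
                         unsigned stale_count, unsigned seq)
{
    struct port *port = get_port_by_index(tbl, cur_count,
                                          seq % stale_count);
    if (!port)
        return -1;   /* the new check the fixes add */
    return port->txable ? 0 : -1;
}
```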
+3 -1
drivers/net/tun.c
··· 352 352 u32 numqueues = 0; 353 353 354 354 rcu_read_lock(); 355 - numqueues = tun->numqueues; 355 + numqueues = ACCESS_ONCE(tun->numqueues); 356 356 357 357 txq = skb_get_rxhash(skb); 358 358 if (txq) { ··· 2158 2158 file->private_data = tfile; 2159 2159 set_bit(SOCK_EXTERNALLY_ALLOCATED, &tfile->socket.flags); 2160 2160 INIT_LIST_HEAD(&tfile->next); 2161 + 2162 + sock_set_flag(&tfile->sk, SOCK_ZEROCOPY); 2161 2163 2162 2164 return 0; 2163 2165 }
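The ACCESS_ONCE() in the tun hunk above forces a single load of `tun->numqueues`, so the emptiness check and the later modulo use the same value; without it the compiler is free to reload the variable, and a queue detaching between the two reads yields an out-of-range index. A sketch using a volatile-cast macro of the same shape (the kernel's ACCESS_ONCE uses `typeof`; a fixed-type variant is used here to keep the sketch plain C):

```c
#include <assert.h>

/* Forces exactly one load of a shared unsigned int, in the spirit of
 * ACCESS_ONCE(): the snapshot cannot change between check and use. */
#define READ_ONCE_UINT(x) (*(volatile unsigned int *)&(x))

static unsigned int numqueues_shared = 4; /* models tun->numqueues */

/* Queue-selection sketch: take one snapshot and use it for both the
 * check and the modulo, as the tun fix does. Returns -1 for "drop". */
static int select_queue(unsigned int rxhash)
{
    unsigned int numqueues = READ_ONCE_UINT(numqueues_shared);

    if (numqueues == 0)
        return -1;
    return (int)(rxhash % numqueues);
}
```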
+6
drivers/net/usb/cdc_ether.c
··· 627 627 .driver_info = 0, 628 628 }, 629 629 630 + /* Huawei E1820 - handled by qmi_wwan */ 631 + { 632 + USB_DEVICE_INTERFACE_NUMBER(HUAWEI_VENDOR_ID, 0x14ac, 1), 633 + .driver_info = 0, 634 + }, 635 + 630 636 /* Realtek RTL8152 Based USB 2.0 Ethernet Adapters */ 631 637 #if defined(CONFIG_USB_RTL8152) || defined(CONFIG_USB_RTL8152_MODULE) 632 638 {
+1
drivers/net/usb/qmi_wwan.c
··· 519 519 /* 3. Combined interface devices matching on interface number */ 520 520 {QMI_FIXED_INTF(0x0408, 0xea42, 4)}, /* Yota / Megafon M100-1 */ 521 521 {QMI_FIXED_INTF(0x12d1, 0x140c, 1)}, /* Huawei E173 */ 522 + {QMI_FIXED_INTF(0x12d1, 0x14ac, 1)}, /* Huawei E1820 */ 522 523 {QMI_FIXED_INTF(0x19d2, 0x0002, 1)}, 523 524 {QMI_FIXED_INTF(0x19d2, 0x0012, 1)}, 524 525 {QMI_FIXED_INTF(0x19d2, 0x0017, 3)},
+7 -3
drivers/net/wireless/ath/ath9k/Kconfig
··· 92 92 This option enables collection of statistics for Rx/Tx status 93 93 data and some other MAC related statistics 94 94 95 - config ATH9K_RATE_CONTROL 95 + config ATH9K_LEGACY_RATE_CONTROL 96 96 bool "Atheros ath9k rate control" 97 97 depends on ATH9K 98 - default y 98 + default n 99 99 ---help--- 100 100 Say Y, if you want to use the ath9k specific rate control 101 - module instead of minstrel_ht. 101 + module instead of minstrel_ht. Be warned that there are various 102 + issues with the ath9k RC and minstrel is a more robust algorithm. 103 + Note that even if this option is selected, "ath9k_rate_control" 104 + has to be passed to mac80211 using the module parameter, 105 + ieee80211_default_rc_algo. 102 106 103 107 config ATH9K_HTC 104 108 tristate "Atheros HTC based wireless cards support"
+1 -1
drivers/net/wireless/ath/ath9k/Makefile
··· 8 8 antenna.o 9 9 10 10 ath9k-$(CONFIG_ATH9K_BTCOEX_SUPPORT) += mci.o 11 - ath9k-$(CONFIG_ATH9K_RATE_CONTROL) += rc.o 11 + ath9k-$(CONFIG_ATH9K_LEGACY_RATE_CONTROL) += rc.o 12 12 ath9k-$(CONFIG_ATH9K_PCI) += pci.o 13 13 ath9k-$(CONFIG_ATH9K_AHB) += ahb.o 14 14 ath9k-$(CONFIG_ATH9K_DEBUGFS) += debug.o
+5 -5
drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h
··· 958 958 {0x0000a074, 0x00000000}, 959 959 {0x0000a078, 0x00000000}, 960 960 {0x0000a07c, 0x00000000}, 961 - {0x0000a080, 0x1a1a1a1a}, 962 - {0x0000a084, 0x1a1a1a1a}, 963 - {0x0000a088, 0x1a1a1a1a}, 964 - {0x0000a08c, 0x1a1a1a1a}, 965 - {0x0000a090, 0x171a1a1a}, 961 + {0x0000a080, 0x22222229}, 962 + {0x0000a084, 0x1d1d1d1d}, 963 + {0x0000a088, 0x1d1d1d1d}, 964 + {0x0000a08c, 0x1d1d1d1d}, 965 + {0x0000a090, 0x171d1d1d}, 966 966 {0x0000a094, 0x11111717}, 967 967 {0x0000a098, 0x00030311}, 968 968 {0x0000a09c, 0x00000000},
+1 -6
drivers/net/wireless/ath/ath9k/init.c
··· 787 787 hw->wiphy->iface_combinations = if_comb; 788 788 hw->wiphy->n_iface_combinations = ARRAY_SIZE(if_comb); 789 789 790 - if (AR_SREV_5416(sc->sc_ah)) 791 - hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 790 + hw->wiphy->flags &= ~WIPHY_FLAG_PS_ON_BY_DEFAULT; 792 791 793 792 hw->wiphy->flags |= WIPHY_FLAG_IBSS_RSN; 794 793 hw->wiphy->flags |= WIPHY_FLAG_SUPPORTS_TDLS; ··· 828 829 829 830 sc->ant_rx = hw->wiphy->available_antennas_rx; 830 831 sc->ant_tx = hw->wiphy->available_antennas_tx; 831 - 832 - #ifdef CONFIG_ATH9K_RATE_CONTROL 833 - hw->rate_control_algorithm = "ath9k_rate_control"; 834 - #endif 835 832 836 833 if (sc->sc_ah->caps.hw_caps & ATH9K_HW_CAP_2GHZ) 837 834 hw->wiphy->bands[IEEE80211_BAND_2GHZ] =
+1 -1
drivers/net/wireless/ath/ath9k/rc.h
··· 231 231 } 232 232 #endif 233 233 234 - #ifdef CONFIG_ATH9K_RATE_CONTROL 234 + #ifdef CONFIG_ATH9K_LEGACY_RATE_CONTROL 235 235 int ath_rate_control_register(void); 236 236 void ath_rate_control_unregister(void); 237 237 #else
+1 -1
drivers/net/wireless/b43/main.c
··· 2458 2458 for (i = 0; i < B43_NR_FWTYPES; i++) { 2459 2459 errmsg = ctx->errors[i]; 2460 2460 if (strlen(errmsg)) 2461 - b43err(dev->wl, errmsg); 2461 + b43err(dev->wl, "%s", errmsg); 2462 2462 } 2463 2463 b43_print_fw_helptext(dev->wl, 1); 2464 2464 goto out;
+3 -3
drivers/net/wireless/iwlegacy/common.h
··· 1832 1832 __le32 il_add_beacon_time(struct il_priv *il, u32 base, u32 addon, 1833 1833 u32 beacon_interval); 1834 1834 1835 - #ifdef CONFIG_PM 1835 + #ifdef CONFIG_PM_SLEEP 1836 1836 extern const struct dev_pm_ops il_pm_ops; 1837 1837 1838 1838 #define IL_LEGACY_PM_OPS (&il_pm_ops) 1839 1839 1840 - #else /* !CONFIG_PM */ 1840 + #else /* !CONFIG_PM_SLEEP */ 1841 1841 1842 1842 #define IL_LEGACY_PM_OPS NULL 1843 1843 1844 - #endif /* !CONFIG_PM */ 1844 + #endif /* !CONFIG_PM_SLEEP */ 1845 1845 1846 1846 /***************************************************** 1847 1847 * Error Handling Debugging
+17 -5
drivers/net/wireless/mwifiex/debugfs.c
··· 26 26 static struct dentry *mwifiex_dfs_dir; 27 27 28 28 static char *bss_modes[] = { 29 - "Unknown", 30 - "Ad-hoc", 31 - "Managed", 32 - "Auto" 29 + "UNSPECIFIED", 30 + "ADHOC", 31 + "STATION", 32 + "AP", 33 + "AP_VLAN", 34 + "WDS", 35 + "MONITOR", 36 + "MESH_POINT", 37 + "P2P_CLIENT", 38 + "P2P_GO", 39 + "P2P_DEVICE", 33 40 }; 34 41 35 42 /* size/addr for mwifiex_debug_info */ ··· 207 200 p += sprintf(p, "driver_version = %s", fmt); 208 201 p += sprintf(p, "\nverext = %s", priv->version_str); 209 202 p += sprintf(p, "\ninterface_name=\"%s\"\n", netdev->name); 210 - p += sprintf(p, "bss_mode=\"%s\"\n", bss_modes[info.bss_mode]); 203 + 204 + if (info.bss_mode >= ARRAY_SIZE(bss_modes)) 205 + p += sprintf(p, "bss_mode=\"%d\"\n", info.bss_mode); 206 + else 207 + p += sprintf(p, "bss_mode=\"%s\"\n", bss_modes[info.bss_mode]); 208 + 211 209 p += sprintf(p, "media_state=\"%s\"\n", 212 210 (!priv->media_connected ? "Disconnected" : "Connected")); 213 211 p += sprintf(p, "mac_address=\"%pM\"\n", netdev->dev_addr);
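The mwifiex debugfs change above does two things: it extends bss_modes[] to cover every NL80211 interface type, and it range-checks `info.bss_mode` before indexing, since a firmware-reported value outside the table previously read past the array. The bounds-check idiom, as a self-contained sketch (the fix prints the raw number on overflow; "UNKNOWN" here keeps the sketch allocation-free):

```c
#include <assert.h>
#include <string.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const char *bss_modes[] = {
    "UNSPECIFIED", "ADHOC", "STATION", "AP", "AP_VLAN", "WDS",
    "MONITOR", "MESH_POINT", "P2P_CLIENT", "P2P_GO", "P2P_DEVICE",
};

/* Never trust an index that came from hardware or firmware: check it
 * against the table size before dereferencing. */
static const char *bss_mode_name(unsigned int mode)
{
    return mode < ARRAY_SIZE(bss_modes) ? bss_modes[mode] : "UNKNOWN";
}
```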
+1
drivers/net/wireless/rtlwifi/pci.c
··· 764 764 "can't alloc skb for rx\n"); 765 765 goto done; 766 766 } 767 + kmemleak_not_leak(new_skb); 767 768 768 769 pci_unmap_single(rtlpci->pdev, 769 770 *((dma_addr_t *) skb->cb),
+99 -33
drivers/net/wireless/rtlwifi/rtl8192cu/hw.c
··· 1973 1973 } 1974 1974 } 1975 1975 1976 - void rtl92cu_update_hal_rate_table(struct ieee80211_hw *hw, 1977 - struct ieee80211_sta *sta, 1978 - u8 rssi_level) 1976 + static void rtl92cu_update_hal_rate_table(struct ieee80211_hw *hw, 1977 + struct ieee80211_sta *sta) 1979 1978 { 1980 1979 struct rtl_priv *rtlpriv = rtl_priv(hw); 1981 1980 struct rtl_phy *rtlphy = &(rtlpriv->phy); 1982 1981 struct rtl_mac *mac = rtl_mac(rtl_priv(hw)); 1983 - u32 ratr_value = (u32) mac->basic_rates; 1984 - u8 *mcsrate = mac->mcs; 1982 + struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); 1983 + u32 ratr_value; 1985 1984 u8 ratr_index = 0; 1986 1985 u8 nmode = mac->ht_enable; 1987 - u8 mimo_ps = 1; 1988 - u16 shortgi_rate = 0; 1989 - u32 tmp_ratr_value = 0; 1986 + u8 mimo_ps = IEEE80211_SMPS_OFF; 1987 + u16 shortgi_rate; 1988 + u32 tmp_ratr_value; 1990 1989 u8 curtxbw_40mhz = mac->bw_40; 1991 - u8 curshortgi_40mhz = mac->sgi_40; 1992 - u8 curshortgi_20mhz = mac->sgi_20; 1990 + u8 curshortgi_40mhz = (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40) ? 1991 + 1 : 0; 1992 + u8 curshortgi_20mhz = (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20) ? 
1993 + 1 : 0; 1993 1994 enum wireless_mode wirelessmode = mac->mode; 1994 1995 1995 - ratr_value |= ((*(u16 *) (mcsrate))) << 12; 1996 + if (rtlhal->current_bandtype == BAND_ON_5G) 1997 + ratr_value = sta->supp_rates[1] << 4; 1998 + else 1999 + ratr_value = sta->supp_rates[0]; 2000 + if (mac->opmode == NL80211_IFTYPE_ADHOC) 2001 + ratr_value = 0xfff; 2002 + 2003 + ratr_value |= (sta->ht_cap.mcs.rx_mask[1] << 20 | 2004 + sta->ht_cap.mcs.rx_mask[0] << 12); 1996 2005 switch (wirelessmode) { 1997 2006 case WIRELESS_MODE_B: 1998 2007 if (ratr_value & 0x0000000c) ··· 2015 2006 case WIRELESS_MODE_N_24G: 2016 2007 case WIRELESS_MODE_N_5G: 2017 2008 nmode = 1; 2018 - if (mimo_ps == 0) { 2009 + if (mimo_ps == IEEE80211_SMPS_STATIC) { 2019 2010 ratr_value &= 0x0007F005; 2020 2011 } else { 2021 2012 u32 ratr_mask; ··· 2025 2016 ratr_mask = 0x000ff005; 2026 2017 else 2027 2018 ratr_mask = 0x0f0ff005; 2028 - if (curtxbw_40mhz) 2029 - ratr_mask |= 0x00000010; 2019 + 2030 2020 ratr_value &= ratr_mask; 2031 2021 } 2032 2022 break; ··· 2034 2026 ratr_value &= 0x000ff0ff; 2035 2027 else 2036 2028 ratr_value &= 0x0f0ff0ff; 2029 + 2037 2030 break; 2038 2031 } 2032 + 2039 2033 ratr_value &= 0x0FFFFFFF; 2040 - if (nmode && ((curtxbw_40mhz && curshortgi_40mhz) || 2041 - (!curtxbw_40mhz && curshortgi_20mhz))) { 2034 + 2035 + if (nmode && ((curtxbw_40mhz && 2036 + curshortgi_40mhz) || (!curtxbw_40mhz && 2037 + curshortgi_20mhz))) { 2038 + 2042 2039 ratr_value |= 0x10000000; 2043 2040 tmp_ratr_value = (ratr_value >> 12); 2041 + 2044 2042 for (shortgi_rate = 15; shortgi_rate > 0; shortgi_rate--) { 2045 2043 if ((1 << shortgi_rate) & tmp_ratr_value) 2046 2044 break; 2047 2045 } 2046 + 2048 2047 shortgi_rate = (shortgi_rate << 12) | (shortgi_rate << 8) | 2049 - (shortgi_rate << 4) | (shortgi_rate); 2048 + (shortgi_rate << 4) | (shortgi_rate); 2050 2049 } 2050 + 2051 2051 rtl_write_dword(rtlpriv, REG_ARFR0 + ratr_index * 4, ratr_value); 2052 + 2053 + RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, 
"%x\n", 2054 + rtl_read_dword(rtlpriv, REG_ARFR0)); 2052 2055 } 2053 2056 2054 - void rtl92cu_update_hal_rate_mask(struct ieee80211_hw *hw, u8 rssi_level) 2057 + static void rtl92cu_update_hal_rate_mask(struct ieee80211_hw *hw, 2058 + struct ieee80211_sta *sta, 2059 + u8 rssi_level) 2055 2060 { 2056 2061 struct rtl_priv *rtlpriv = rtl_priv(hw); 2057 2062 struct rtl_phy *rtlphy = &(rtlpriv->phy); 2058 2063 struct rtl_mac *mac = rtl_mac(rtl_priv(hw)); 2059 - u32 ratr_bitmap = (u32) mac->basic_rates; 2060 - u8 *p_mcsrate = mac->mcs; 2061 - u8 ratr_index = 0; 2062 - u8 curtxbw_40mhz = mac->bw_40; 2063 - u8 curshortgi_40mhz = mac->sgi_40; 2064 - u8 curshortgi_20mhz = mac->sgi_20; 2065 - enum wireless_mode wirelessmode = mac->mode; 2064 + struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw)); 2065 + struct rtl_sta_info *sta_entry = NULL; 2066 + u32 ratr_bitmap; 2067 + u8 ratr_index; 2068 + u8 curtxbw_40mhz = (sta->bandwidth >= IEEE80211_STA_RX_BW_40) ? 1 : 0; 2069 + u8 curshortgi_40mhz = curtxbw_40mhz && 2070 + (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40) ? 2071 + 1 : 0; 2072 + u8 curshortgi_20mhz = (sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20) ? 
2073 + 1 : 0; 2074 + enum wireless_mode wirelessmode = 0; 2066 2075 bool shortgi = false; 2067 2076 u8 rate_mask[5]; 2068 2077 u8 macid = 0; 2069 - u8 mimops = 1; 2078 + u8 mimo_ps = IEEE80211_SMPS_OFF; 2070 2079 2071 - ratr_bitmap |= (p_mcsrate[1] << 20) | (p_mcsrate[0] << 12); 2080 + sta_entry = (struct rtl_sta_info *) sta->drv_priv; 2081 + wirelessmode = sta_entry->wireless_mode; 2082 + if (mac->opmode == NL80211_IFTYPE_STATION || 2083 + mac->opmode == NL80211_IFTYPE_MESH_POINT) 2084 + curtxbw_40mhz = mac->bw_40; 2085 + else if (mac->opmode == NL80211_IFTYPE_AP || 2086 + mac->opmode == NL80211_IFTYPE_ADHOC) 2087 + macid = sta->aid + 1; 2088 + 2089 + if (rtlhal->current_bandtype == BAND_ON_5G) 2090 + ratr_bitmap = sta->supp_rates[1] << 4; 2091 + else 2092 + ratr_bitmap = sta->supp_rates[0]; 2093 + if (mac->opmode == NL80211_IFTYPE_ADHOC) 2094 + ratr_bitmap = 0xfff; 2095 + ratr_bitmap |= (sta->ht_cap.mcs.rx_mask[1] << 20 | 2096 + sta->ht_cap.mcs.rx_mask[0] << 12); 2072 2097 switch (wirelessmode) { 2073 2098 case WIRELESS_MODE_B: 2074 2099 ratr_index = RATR_INX_WIRELESS_B; ··· 2112 2071 break; 2113 2072 case WIRELESS_MODE_G: 2114 2073 ratr_index = RATR_INX_WIRELESS_GB; 2074 + 2115 2075 if (rssi_level == 1) 2116 2076 ratr_bitmap &= 0x00000f00; 2117 2077 else if (rssi_level == 2) ··· 2127 2085 case WIRELESS_MODE_N_24G: 2128 2086 case WIRELESS_MODE_N_5G: 2129 2087 ratr_index = RATR_INX_WIRELESS_NGB; 2130 - if (mimops == 0) { 2088 + 2089 + if (mimo_ps == IEEE80211_SMPS_STATIC) { 2131 2090 if (rssi_level == 1) 2132 2091 ratr_bitmap &= 0x00070000; 2133 2092 else if (rssi_level == 2) ··· 2171 2128 } 2172 2129 } 2173 2130 } 2131 + 2174 2132 if ((curtxbw_40mhz && curshortgi_40mhz) || 2175 2133 (!curtxbw_40mhz && curshortgi_20mhz)) { 2134 + 2176 2135 if (macid == 0) 2177 2136 shortgi = true; 2178 2137 else if (macid == 1) ··· 2183 2138 break; 2184 2139 default: 2185 2140 ratr_index = RATR_INX_WIRELESS_NGB; 2141 + 2186 2142 if (rtlphy->rf_type == RF_1T2R) 2187 2143 
ratr_bitmap &= 0x000ff0ff; 2188 2144 else 2189 2145 ratr_bitmap &= 0x0f0ff0ff; 2190 2146 break; 2191 2147 } 2192 - RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, "ratr_bitmap :%x\n", 2193 - ratr_bitmap); 2194 - *(u32 *)&rate_mask = ((ratr_bitmap & 0x0fffffff) | 2195 - ratr_index << 28); 2148 + sta_entry->ratr_index = ratr_index; 2149 + 2150 + RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, 2151 + "ratr_bitmap :%x\n", ratr_bitmap); 2152 + *(u32 *)&rate_mask = (ratr_bitmap & 0x0fffffff) | 2153 + (ratr_index << 28); 2196 2154 rate_mask[4] = macid | (shortgi ? 0x20 : 0x00) | 0x80; 2197 2155 RT_TRACE(rtlpriv, COMP_RATR, DBG_DMESG, 2198 2156 "Rate_index:%x, ratr_val:%x, %5phC\n", 2199 2157 ratr_index, ratr_bitmap, rate_mask); 2200 - rtl92c_fill_h2c_cmd(hw, H2C_RA_MASK, 5, rate_mask); 2158 + memcpy(rtlpriv->rate_mask, rate_mask, 5); 2159 + /* rtl92c_fill_h2c_cmd() does USB I/O and will result in a 2160 + * "scheduled while atomic" if called directly */ 2161 + schedule_work(&rtlpriv->works.fill_h2c_cmd); 2162 + 2163 + if (macid != 0) 2164 + sta_entry->ratr_index = ratr_index; 2165 + } 2166 + 2167 + void rtl92cu_update_hal_rate_tbl(struct ieee80211_hw *hw, 2168 + struct ieee80211_sta *sta, 2169 + u8 rssi_level) 2170 + { 2171 + struct rtl_priv *rtlpriv = rtl_priv(hw); 2172 + 2173 + if (rtlpriv->dm.useramask) 2174 + rtl92cu_update_hal_rate_mask(hw, sta, rssi_level); 2175 + else 2176 + rtl92cu_update_hal_rate_table(hw, sta); 2201 2177 } 2202 2178 2203 2179 void rtl92cu_update_channel_access_setting(struct ieee80211_hw *hw)
-4
drivers/net/wireless/rtlwifi/rtl8192cu/hw.h
··· 98 98 u32 add_msr, u32 rm_msr); 99 99 void rtl92cu_get_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val); 100 100 void rtl92cu_set_hw_reg(struct ieee80211_hw *hw, u8 variable, u8 *val); 101 - void rtl92cu_update_hal_rate_table(struct ieee80211_hw *hw, 102 - struct ieee80211_sta *sta, 103 - u8 rssi_level); 104 - void rtl92cu_update_hal_rate_mask(struct ieee80211_hw *hw, u8 rssi_level); 105 101 106 102 void rtl92cu_update_channel_access_setting(struct ieee80211_hw *hw); 107 103 bool rtl92cu_gpio_radio_on_off_checking(struct ieee80211_hw *hw, u8 * valid);
+17 -1
drivers/net/wireless/rtlwifi/rtl8192cu/mac.c
··· 289 289 macaddr = cam_const_broad; 290 290 entry_id = key_index; 291 291 } else { 292 + if (mac->opmode == NL80211_IFTYPE_AP || 293 + mac->opmode == NL80211_IFTYPE_MESH_POINT) { 294 + entry_id = rtl_cam_get_free_entry(hw, 295 + p_macaddr); 296 + if (entry_id >= TOTAL_CAM_ENTRY) { 297 + RT_TRACE(rtlpriv, COMP_SEC, 298 + DBG_EMERG, 299 + "Can not find free hw security cam entry\n"); 300 + return; 301 + } 302 + } else { 303 + entry_id = CAM_PAIRWISE_KEY_POSITION; 304 + } 305 + 292 306 key_index = PAIRWISE_KEYIDX; 293 - entry_id = CAM_PAIRWISE_KEY_POSITION; 294 307 is_pairwise = true; 295 308 } 296 309 } 297 310 if (rtlpriv->sec.key_len[key_index] == 0) { 298 311 RT_TRACE(rtlpriv, COMP_SEC, DBG_DMESG, 299 312 "delete one entry\n"); 313 + if (mac->opmode == NL80211_IFTYPE_AP || 314 + mac->opmode == NL80211_IFTYPE_MESH_POINT) 315 + rtl_cam_del_entry(hw, p_macaddr); 300 316 rtl_cam_delete_one_entry(hw, p_macaddr, entry_id); 301 317 } else { 302 318 RT_TRACE(rtlpriv, COMP_SEC, DBG_LOUD,
+2 -2
drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
··· 106 106 .update_interrupt_mask = rtl92cu_update_interrupt_mask, 107 107 .get_hw_reg = rtl92cu_get_hw_reg, 108 108 .set_hw_reg = rtl92cu_set_hw_reg, 109 - .update_rate_tbl = rtl92cu_update_hal_rate_table, 110 - .update_rate_mask = rtl92cu_update_hal_rate_mask, 109 + .update_rate_tbl = rtl92cu_update_hal_rate_tbl, 111 110 .fill_tx_desc = rtl92cu_tx_fill_desc, 112 111 .fill_fake_txdesc = rtl92cu_fill_fake_txdesc, 113 112 .fill_tx_cmddesc = rtl92cu_tx_fill_cmddesc, ··· 136 137 .phy_lc_calibrate = _rtl92cu_phy_lc_calibrate, 137 138 .phy_set_bw_mode_callback = rtl92cu_phy_set_bw_mode_callback, 138 139 .dm_dynamic_txpower = rtl92cu_dm_dynamic_txpower, 140 + .fill_h2c_cmd = rtl92c_fill_h2c_cmd, 139 141 }; 140 142 141 143 static struct rtl_mod_params rtl92cu_mod_params = {
+3
drivers/net/wireless/rtlwifi/rtl8192cu/sw.h
··· 49 49 u32 rtl92cu_phy_query_rf_reg(struct ieee80211_hw *hw, 50 50 enum radio_path rfpath, u32 regaddr, u32 bitmask); 51 51 void rtl92cu_phy_set_bw_mode_callback(struct ieee80211_hw *hw); 52 + void rtl92cu_update_hal_rate_tbl(struct ieee80211_hw *hw, 53 + struct ieee80211_sta *sta, 54 + u8 rssi_level); 52 55 53 56 #endif
+13
drivers/net/wireless/rtlwifi/usb.c
··· 824 824 825 825 /* should after adapter start and interrupt enable. */ 826 826 set_hal_stop(rtlhal); 827 + cancel_work_sync(&rtlpriv->works.fill_h2c_cmd); 827 828 /* Enable software */ 828 829 SET_USB_STOP(rtlusb); 829 830 rtl_usb_deinit(hw); ··· 1027 1026 return false; 1028 1027 } 1029 1028 1029 + static void rtl_fill_h2c_cmd_work_callback(struct work_struct *work) 1030 + { 1031 + struct rtl_works *rtlworks = 1032 + container_of(work, struct rtl_works, fill_h2c_cmd); 1033 + struct ieee80211_hw *hw = rtlworks->hw; 1034 + struct rtl_priv *rtlpriv = rtl_priv(hw); 1035 + 1036 + rtlpriv->cfg->ops->fill_h2c_cmd(hw, H2C_RA_MASK, 5, rtlpriv->rate_mask); 1037 + } 1038 + 1030 1039 static struct rtl_intf_ops rtl_usb_ops = { 1031 1040 .adapter_start = rtl_usb_start, 1032 1041 .adapter_stop = rtl_usb_stop, ··· 1068 1057 1069 1058 /* this spin lock must be initialized early */ 1070 1059 spin_lock_init(&rtlpriv->locks.usb_lock); 1060 + INIT_WORK(&rtlpriv->works.fill_h2c_cmd, 1061 + rtl_fill_h2c_cmd_work_callback); 1071 1062 1072 1063 rtlpriv->usb_data_index = 0; 1073 1064 init_completion(&rtlpriv->firmware_loading_complete);
+4
drivers/net/wireless/rtlwifi/wifi.h
··· 1736 1736 void (*bt_wifi_media_status_notify) (struct ieee80211_hw *hw, 1737 1737 bool mstate); 1738 1738 void (*bt_coex_off_before_lps) (struct ieee80211_hw *hw); 1739 + void (*fill_h2c_cmd) (struct ieee80211_hw *hw, u8 element_id, 1740 + u32 cmd_len, u8 *p_cmdbuffer); 1739 1741 }; 1740 1742 1741 1743 struct rtl_intf_ops { ··· 1871 1869 struct delayed_work fwevt_wq; 1872 1870 1873 1871 struct work_struct lps_change_work; 1872 + struct work_struct fill_h2c_cmd; 1874 1873 }; 1875 1874 1876 1875 struct rtl_debug { ··· 2051 2048 }; 2052 2049 }; 2053 2050 bool enter_ps; /* true when entering PS */ 2051 + u8 rate_mask[5]; 2054 2052 2055 2053 /*This must be the last item so 2056 2054 that it points to the data allocated
+1 -1
drivers/net/wireless/ti/wl12xx/scan.c
··· 310 310 memcpy(cmd->channels_2, cmd_channels->channels_2, 311 311 sizeof(cmd->channels_2)); 312 312 memcpy(cmd->channels_5, cmd_channels->channels_5, 313 - sizeof(cmd->channels_2)); 313 + sizeof(cmd->channels_5)); 314 314 /* channels_4 are not supported, so no need to copy them */ 315 315 } 316 316
+3 -3
drivers/net/wireless/ti/wl12xx/wl12xx.h
··· 36 36 #define WL127X_IFTYPE_SR_VER 3 37 37 #define WL127X_MAJOR_SR_VER 10 38 38 #define WL127X_SUBTYPE_SR_VER WLCORE_FW_VER_IGNORE 39 - #define WL127X_MINOR_SR_VER 115 39 + #define WL127X_MINOR_SR_VER 133 40 40 /* minimum multi-role FW version for wl127x */ 41 41 #define WL127X_IFTYPE_MR_VER 5 42 42 #define WL127X_MAJOR_MR_VER 7 43 43 #define WL127X_SUBTYPE_MR_VER WLCORE_FW_VER_IGNORE 44 - #define WL127X_MINOR_MR_VER 115 44 + #define WL127X_MINOR_MR_VER 42 45 45 46 46 /* FW chip version for wl128x */ 47 47 #define WL128X_CHIP_VER 7 ··· 49 49 #define WL128X_IFTYPE_SR_VER 3 50 50 #define WL128X_MAJOR_SR_VER 10 51 51 #define WL128X_SUBTYPE_SR_VER WLCORE_FW_VER_IGNORE 52 - #define WL128X_MINOR_SR_VER 115 52 + #define WL128X_MINOR_SR_VER 133 53 53 /* minimum multi-role FW version for wl128x */ 54 54 #define WL128X_IFTYPE_MR_VER 5 55 55 #define WL128X_MAJOR_MR_VER 7
+1 -1
drivers/net/wireless/ti/wl18xx/scan.c
··· 34 34 memcpy(cmd->channels_2, cmd_channels->channels_2, 35 35 sizeof(cmd->channels_2)); 36 36 memcpy(cmd->channels_5, cmd_channels->channels_5, 37 - sizeof(cmd->channels_2)); 37 + sizeof(cmd->channels_5)); 38 38 /* channels_4 are not supported, so no need to copy them */ 39 39 } 40 40
+6 -5
drivers/net/xen-netback/netback.c
··· 662 662 { 663 663 struct xenvif *vif = NULL, *tmp; 664 664 s8 status; 665 - u16 irq, flags; 665 + u16 flags; 666 666 struct xen_netif_rx_response *resp; 667 667 struct sk_buff_head rxq; 668 668 struct sk_buff *skb; ··· 771 771 sco->meta_slots_used); 772 772 773 773 RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&vif->rx, ret); 774 - irq = vif->irq; 775 - if (ret && list_empty(&vif->notify_list)) 776 - list_add_tail(&vif->notify_list, &notify); 777 774 778 775 xenvif_notify_tx_completion(vif); 779 776 780 - xenvif_put(vif); 777 + if (ret && list_empty(&vif->notify_list)) 778 + list_add_tail(&vif->notify_list, &notify); 779 + else 780 + xenvif_put(vif); 781 781 npo.meta_cons += sco->meta_slots_used; 782 782 dev_kfree_skb(skb); 783 783 } ··· 785 785 list_for_each_entry_safe(vif, tmp, &notify, notify_list) { 786 786 notify_remote_via_irq(vif->irq); 787 787 list_del_init(&vif->notify_list); 788 + xenvif_put(vif); 788 789 } 789 790 790 791 /* More work to do? */
+9 -6
drivers/of/base.c
··· 192 192 struct device_node *of_find_all_nodes(struct device_node *prev) 193 193 { 194 194 struct device_node *np; 195 + unsigned long flags; 195 196 196 - raw_spin_lock(&devtree_lock); 197 + raw_spin_lock_irqsave(&devtree_lock, flags); 197 198 np = prev ? prev->allnext : of_allnodes; 198 199 for (; np != NULL; np = np->allnext) 199 200 if (of_node_get(np)) 200 201 break; 201 202 of_node_put(prev); 202 - raw_spin_unlock(&devtree_lock); 203 + raw_spin_unlock_irqrestore(&devtree_lock, flags); 203 204 return np; 204 205 } 205 206 EXPORT_SYMBOL(of_find_all_nodes); ··· 422 421 struct device_node *prev) 423 422 { 424 423 struct device_node *next; 424 + unsigned long flags; 425 425 426 - raw_spin_lock(&devtree_lock); 426 + raw_spin_lock_irqsave(&devtree_lock, flags); 427 427 next = prev ? prev->sibling : node->child; 428 428 for (; next; next = next->sibling) { 429 429 if (!__of_device_is_available(next)) ··· 433 431 break; 434 432 } 435 433 of_node_put(prev); 436 - raw_spin_unlock(&devtree_lock); 434 + raw_spin_unlock_irqrestore(&devtree_lock, flags); 437 435 return next; 438 436 } 439 437 EXPORT_SYMBOL(of_get_next_available_child); ··· 737 735 struct device_node *of_find_node_by_phandle(phandle handle) 738 736 { 739 737 struct device_node *np; 738 + unsigned long flags; 740 739 741 - raw_spin_lock(&devtree_lock); 740 + raw_spin_lock_irqsave(&devtree_lock, flags); 742 741 for (np = of_allnodes; np; np = np->allnext) 743 742 if (np->phandle == handle) 744 743 break; 745 744 of_node_get(np); 746 - raw_spin_unlock(&devtree_lock); 745 + raw_spin_unlock_irqrestore(&devtree_lock, flags); 747 746 return np; 748 747 } 749 748 EXPORT_SYMBOL(of_find_node_by_phandle);
+111 -20
drivers/rtc/rtc-at91rm9200.c
··· 25 25 #include <linux/rtc.h> 26 26 #include <linux/bcd.h> 27 27 #include <linux/interrupt.h> 28 + #include <linux/spinlock.h> 28 29 #include <linux/ioctl.h> 29 30 #include <linux/completion.h> 30 31 #include <linux/io.h> ··· 43 42 44 43 #define AT91_RTC_EPOCH 1900UL /* just like arch/arm/common/rtctime.c */ 45 44 45 + struct at91_rtc_config { 46 + bool use_shadow_imr; 47 + }; 48 + 49 + static const struct at91_rtc_config *at91_rtc_config; 46 50 static DECLARE_COMPLETION(at91_rtc_updated); 47 51 static unsigned int at91_alarm_year = AT91_RTC_EPOCH; 48 52 static void __iomem *at91_rtc_regs; 49 53 static int irq; 54 + static DEFINE_SPINLOCK(at91_rtc_lock); 55 + static u32 at91_rtc_shadow_imr; 56 + 57 + static void at91_rtc_write_ier(u32 mask) 58 + { 59 + unsigned long flags; 60 + 61 + spin_lock_irqsave(&at91_rtc_lock, flags); 62 + at91_rtc_shadow_imr |= mask; 63 + at91_rtc_write(AT91_RTC_IER, mask); 64 + spin_unlock_irqrestore(&at91_rtc_lock, flags); 65 + } 66 + 67 + static void at91_rtc_write_idr(u32 mask) 68 + { 69 + unsigned long flags; 70 + 71 + spin_lock_irqsave(&at91_rtc_lock, flags); 72 + at91_rtc_write(AT91_RTC_IDR, mask); 73 + /* 74 + * Register read back (of any RTC-register) needed to make sure 75 + * IDR-register write has reached the peripheral before updating 76 + * shadow mask. 77 + * 78 + * Note that there is still a possibility that the mask is updated 79 + * before interrupts have actually been disabled in hardware. The only 80 + * way to be certain would be to poll the IMR-register, which is is 81 + * the very register we are trying to emulate. The register read back 82 + * is a reasonable heuristic. 
83 + */ 84 + at91_rtc_read(AT91_RTC_SR); 85 + at91_rtc_shadow_imr &= ~mask; 86 + spin_unlock_irqrestore(&at91_rtc_lock, flags); 87 + } 88 + 89 + static u32 at91_rtc_read_imr(void) 90 + { 91 + unsigned long flags; 92 + u32 mask; 93 + 94 + if (at91_rtc_config->use_shadow_imr) { 95 + spin_lock_irqsave(&at91_rtc_lock, flags); 96 + mask = at91_rtc_shadow_imr; 97 + spin_unlock_irqrestore(&at91_rtc_lock, flags); 98 + } else { 99 + mask = at91_rtc_read(AT91_RTC_IMR); 100 + } 101 + 102 + return mask; 103 + } 50 104 51 105 /* 52 106 * Decode time/date into rtc_time structure ··· 166 110 cr = at91_rtc_read(AT91_RTC_CR); 167 111 at91_rtc_write(AT91_RTC_CR, cr | AT91_RTC_UPDCAL | AT91_RTC_UPDTIM); 168 112 169 - at91_rtc_write(AT91_RTC_IER, AT91_RTC_ACKUPD); 113 + at91_rtc_write_ier(AT91_RTC_ACKUPD); 170 114 wait_for_completion(&at91_rtc_updated); /* wait for ACKUPD interrupt */ 171 - at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD); 115 + at91_rtc_write_idr(AT91_RTC_ACKUPD); 172 116 173 117 at91_rtc_write(AT91_RTC_TIMR, 174 118 bin2bcd(tm->tm_sec) << 0 ··· 200 144 tm->tm_yday = rtc_year_days(tm->tm_mday, tm->tm_mon, tm->tm_year); 201 145 tm->tm_year = at91_alarm_year - 1900; 202 146 203 - alrm->enabled = (at91_rtc_read(AT91_RTC_IMR) & AT91_RTC_ALARM) 147 + alrm->enabled = (at91_rtc_read_imr() & AT91_RTC_ALARM) 204 148 ? 
1 : 0; 205 149 206 150 dev_dbg(dev, "%s(): %4d-%02d-%02d %02d:%02d:%02d\n", __func__, ··· 225 169 tm.tm_min = alrm->time.tm_min; 226 170 tm.tm_sec = alrm->time.tm_sec; 227 171 228 - at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ALARM); 172 + at91_rtc_write_idr(AT91_RTC_ALARM); 229 173 at91_rtc_write(AT91_RTC_TIMALR, 230 174 bin2bcd(tm.tm_sec) << 0 231 175 | bin2bcd(tm.tm_min) << 8 ··· 238 182 239 183 if (alrm->enabled) { 240 184 at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_ALARM); 241 - at91_rtc_write(AT91_RTC_IER, AT91_RTC_ALARM); 185 + at91_rtc_write_ier(AT91_RTC_ALARM); 242 186 } 243 187 244 188 dev_dbg(dev, "%s(): %4d-%02d-%02d %02d:%02d:%02d\n", __func__, ··· 254 198 255 199 if (enabled) { 256 200 at91_rtc_write(AT91_RTC_SCCR, AT91_RTC_ALARM); 257 - at91_rtc_write(AT91_RTC_IER, AT91_RTC_ALARM); 201 + at91_rtc_write_ier(AT91_RTC_ALARM); 258 202 } else 259 - at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ALARM); 203 + at91_rtc_write_idr(AT91_RTC_ALARM); 260 204 261 205 return 0; 262 206 } ··· 265 209 */ 266 210 static int at91_rtc_proc(struct device *dev, struct seq_file *seq) 267 211 { 268 - unsigned long imr = at91_rtc_read(AT91_RTC_IMR); 212 + unsigned long imr = at91_rtc_read_imr(); 269 213 270 214 seq_printf(seq, "update_IRQ\t: %s\n", 271 215 (imr & AT91_RTC_ACKUPD) ? "yes" : "no"); ··· 285 229 unsigned int rtsr; 286 230 unsigned long events = 0; 287 231 288 - rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read(AT91_RTC_IMR); 232 + rtsr = at91_rtc_read(AT91_RTC_SR) & at91_rtc_read_imr(); 289 233 if (rtsr) { /* this interrupt is shared! Is it ours? 
*/ 290 234 if (rtsr & AT91_RTC_ALARM) 291 235 events |= (RTC_AF | RTC_IRQF); ··· 306 250 return IRQ_NONE; /* not handled */ 307 251 } 308 252 253 + static const struct at91_rtc_config at91rm9200_config = { 254 + }; 255 + 256 + static const struct at91_rtc_config at91sam9x5_config = { 257 + .use_shadow_imr = true, 258 + }; 259 + 260 + #ifdef CONFIG_OF 261 + static const struct of_device_id at91_rtc_dt_ids[] = { 262 + { 263 + .compatible = "atmel,at91rm9200-rtc", 264 + .data = &at91rm9200_config, 265 + }, { 266 + .compatible = "atmel,at91sam9x5-rtc", 267 + .data = &at91sam9x5_config, 268 + }, { 269 + /* sentinel */ 270 + } 271 + }; 272 + MODULE_DEVICE_TABLE(of, at91_rtc_dt_ids); 273 + #endif 274 + 275 + static const struct at91_rtc_config * 276 + at91_rtc_get_config(struct platform_device *pdev) 277 + { 278 + const struct of_device_id *match; 279 + 280 + if (pdev->dev.of_node) { 281 + match = of_match_node(at91_rtc_dt_ids, pdev->dev.of_node); 282 + if (!match) 283 + return NULL; 284 + return (const struct at91_rtc_config *)match->data; 285 + } 286 + 287 + return &at91rm9200_config; 288 + } 289 + 309 290 static const struct rtc_class_ops at91_rtc_ops = { 310 291 .read_time = at91_rtc_readtime, 311 292 .set_time = at91_rtc_settime, ··· 360 267 struct rtc_device *rtc; 361 268 struct resource *regs; 362 269 int ret = 0; 270 + 271 + at91_rtc_config = at91_rtc_get_config(pdev); 272 + if (!at91_rtc_config) 273 + return -ENODEV; 363 274 364 275 regs = platform_get_resource(pdev, IORESOURCE_MEM, 0); 365 276 if (!regs) { ··· 387 290 at91_rtc_write(AT91_RTC_MR, 0); /* 24 hour mode */ 388 291 389 292 /* Disable all interrupts */ 390 - at91_rtc_write(AT91_RTC_IDR, AT91_RTC_ACKUPD | AT91_RTC_ALARM | 293 + at91_rtc_write_idr(AT91_RTC_ACKUPD | AT91_RTC_ALARM | 391 294 AT91_RTC_SECEV | AT91_RTC_TIMEV | 392 295 AT91_RTC_CALEV); 393 296 ··· 432 335 struct rtc_device *rtc = platform_get_drvdata(pdev); 433 336 434 337 /* Disable all interrupts */ 435 - at91_rtc_write(AT91_RTC_IDR, 
AT91_RTC_ACKUPD | AT91_RTC_ALARM | 338 + at91_rtc_write_idr(AT91_RTC_ACKUPD | AT91_RTC_ALARM | 436 339 AT91_RTC_SECEV | AT91_RTC_TIMEV | 437 340 AT91_RTC_CALEV); 438 341 free_irq(irq, pdev); ··· 455 358 /* this IRQ is shared with DBGU and other hardware which isn't 456 359 * necessarily doing PM like we are... 457 360 */ 458 - at91_rtc_imr = at91_rtc_read(AT91_RTC_IMR) 361 + at91_rtc_imr = at91_rtc_read_imr() 459 362 & (AT91_RTC_ALARM|AT91_RTC_SECEV); 460 363 if (at91_rtc_imr) { 461 364 if (device_may_wakeup(dev)) 462 365 enable_irq_wake(irq); 463 366 else 464 - at91_rtc_write(AT91_RTC_IDR, at91_rtc_imr); 367 + at91_rtc_write_idr(at91_rtc_imr); 465 368 } 466 369 return 0; 467 370 } ··· 472 375 if (device_may_wakeup(dev)) 473 376 disable_irq_wake(irq); 474 377 else 475 - at91_rtc_write(AT91_RTC_IER, at91_rtc_imr); 378 + at91_rtc_write_ier(at91_rtc_imr); 476 379 } 477 380 return 0; 478 381 } 479 382 #endif 480 383 481 384 static SIMPLE_DEV_PM_OPS(at91_rtc_pm_ops, at91_rtc_suspend, at91_rtc_resume); 482 - 483 - static const struct of_device_id at91_rtc_dt_ids[] = { 484 - { .compatible = "atmel,at91rm9200-rtc" }, 485 - { /* sentinel */ } 486 - }; 487 - MODULE_DEVICE_TABLE(of, at91_rtc_dt_ids); 488 385 489 386 static struct platform_driver at91_rtc_driver = { 490 387 .remove = __exit_p(at91_rtc_remove),
+3 -1
drivers/rtc/rtc-cmos.c
··· 854 854 } 855 855 856 856 spin_lock_irq(&rtc_lock); 857 + if (device_may_wakeup(dev)) 858 + hpet_rtc_timer_init(); 859 + 857 860 do { 858 861 CMOS_WRITE(tmp, RTC_CONTROL); 859 862 hpet_set_rtc_irq_bit(tmp & RTC_IRQMASK); ··· 872 869 rtc_update_irq(cmos->rtc, 1, mask); 873 870 tmp &= ~RTC_AIE; 874 871 hpet_mask_rtc_irq_bit(RTC_AIE); 875 - hpet_rtc_timer_init(); 876 872 } while (mask & RTC_AIE); 877 873 spin_unlock_irq(&rtc_lock); 878 874 }
+2 -1
drivers/rtc/rtc-tps6586x.c
··· 273 273 return ret; 274 274 } 275 275 276 + device_init_wakeup(&pdev->dev, 1); 277 + 276 278 platform_set_drvdata(pdev, rtc); 277 279 rtc->rtc = devm_rtc_device_register(&pdev->dev, dev_name(&pdev->dev), 278 280 &tps6586x_rtc_ops, THIS_MODULE); ··· 294 292 goto fail_rtc_register; 295 293 } 296 294 disable_irq(rtc->irq); 297 - device_set_wakeup_capable(&pdev->dev, 1); 298 295 return 0; 299 296 300 297 fail_rtc_register:
+1
drivers/rtc/rtc-twl.c
··· 524 524 } 525 525 526 526 platform_set_drvdata(pdev, rtc); 527 + device_init_wakeup(&pdev->dev, 1); 527 528 return 0; 528 529 529 530 out2:
+5 -1
drivers/s390/net/netiucv.c
··· 2040 2040 netiucv_setup_netdevice); 2041 2041 if (!dev) 2042 2042 return NULL; 2043 + rtnl_lock(); 2043 2044 if (dev_alloc_name(dev, dev->name) < 0) 2044 2045 goto out_netdev; 2045 2046 ··· 2062 2061 out_fsm: 2063 2062 kfree_fsm(privptr->fsm); 2064 2063 out_netdev: 2064 + rtnl_unlock(); 2065 2065 free_netdev(dev); 2066 2066 return NULL; 2067 2067 } ··· 2102 2100 2103 2101 rc = netiucv_register_device(dev); 2104 2102 if (rc) { 2103 + rtnl_unlock(); 2105 2104 IUCV_DBF_TEXT_(setup, 2, 2106 2105 "ret %d from netiucv_register_device\n", rc); 2107 2106 goto out_free_ndev; ··· 2112 2109 priv = netdev_priv(dev); 2113 2110 SET_NETDEV_DEV(dev, priv->dev); 2114 2111 2115 - rc = register_netdev(dev); 2112 + rc = register_netdevice(dev); 2113 + rtnl_unlock(); 2116 2114 if (rc) 2117 2115 goto out_unreg; 2118 2116
+1 -1
drivers/spi/spi-sh-hspi.c
··· 89 89 if ((mask & hspi_read(hspi, SPSR)) == val) 90 90 return 0; 91 91 92 - msleep(20); 92 + udelay(10); 93 93 } 94 94 95 95 dev_err(hspi->dev, "timeout\n");
+2 -1
drivers/spi/spi-topcliff-pch.c
··· 1487 1487 return 0; 1488 1488 1489 1489 err_spi_register_master: 1490 - free_irq(board_dat->pdev->irq, board_dat); 1490 + free_irq(board_dat->pdev->irq, data); 1491 1491 err_request_irq: 1492 1492 pch_spi_free_resources(board_dat, data); 1493 1493 err_spi_get_resources: ··· 1667 1667 pd_dev = platform_device_alloc("pch-spi", i); 1668 1668 if (!pd_dev) { 1669 1669 dev_err(&pdev->dev, "platform_device_alloc failed\n"); 1670 + retval = -ENOMEM; 1670 1671 goto err_platform_device; 1671 1672 } 1672 1673 pd_dev_save->pd_save[i] = pd_dev;
+35 -39
drivers/spi/spi-xilinx.c
··· 267 267 { 268 268 struct xilinx_spi *xspi = spi_master_get_devdata(spi->master); 269 269 u32 ipif_ier; 270 - u16 cr; 271 270 272 271 /* We get here with transmitter inhibited */ 273 272 ··· 275 276 xspi->remaining_bytes = t->len; 276 277 INIT_COMPLETION(xspi->done); 277 278 278 - xilinx_spi_fill_tx_fifo(xspi); 279 279 280 280 /* Enable the transmit empty interrupt, which we use to determine 281 281 * progress on the transmission. ··· 283 285 xspi->write_fn(ipif_ier | XSPI_INTR_TX_EMPTY, 284 286 xspi->regs + XIPIF_V123B_IIER_OFFSET); 285 287 286 - /* Start the transfer by not inhibiting the transmitter any longer */ 287 - cr = xspi->read_fn(xspi->regs + XSPI_CR_OFFSET) & 288 - ~XSPI_CR_TRANS_INHIBIT; 289 - xspi->write_fn(cr, xspi->regs + XSPI_CR_OFFSET); 288 + for (;;) { 289 + u16 cr; 290 + u8 sr; 290 291 291 - wait_for_completion(&xspi->done); 292 + xilinx_spi_fill_tx_fifo(xspi); 293 + 294 + /* Start the transfer by not inhibiting the transmitter any 295 + * longer 296 + */ 297 + cr = xspi->read_fn(xspi->regs + XSPI_CR_OFFSET) & 298 + ~XSPI_CR_TRANS_INHIBIT; 299 + xspi->write_fn(cr, xspi->regs + XSPI_CR_OFFSET); 300 + 301 + wait_for_completion(&xspi->done); 302 + 303 + /* A transmit has just completed. Process received data and 304 + * check for more data to transmit. Always inhibit the 305 + * transmitter while the Isr refills the transmit register/FIFO, 306 + * or make sure it is stopped if we're done. 
307 + */ 308 + cr = xspi->read_fn(xspi->regs + XSPI_CR_OFFSET); 309 + xspi->write_fn(cr | XSPI_CR_TRANS_INHIBIT, 310 + xspi->regs + XSPI_CR_OFFSET); 311 + 312 + /* Read out all the data from the Rx FIFO */ 313 + sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET); 314 + while ((sr & XSPI_SR_RX_EMPTY_MASK) == 0) { 315 + xspi->rx_fn(xspi); 316 + sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET); 317 + } 318 + 319 + /* See if there is more data to send */ 320 + if (!xspi->remaining_bytes > 0) 321 + break; 322 + } 292 323 293 324 /* Disable the transmit empty interrupt */ 294 325 xspi->write_fn(ipif_ier, xspi->regs + XIPIF_V123B_IIER_OFFSET); ··· 341 314 xspi->write_fn(ipif_isr, xspi->regs + XIPIF_V123B_IISR_OFFSET); 342 315 343 316 if (ipif_isr & XSPI_INTR_TX_EMPTY) { /* Transmission completed */ 344 - u16 cr; 345 - u8 sr; 346 - 347 - /* A transmit has just completed. Process received data and 348 - * check for more data to transmit. Always inhibit the 349 - * transmitter while the Isr refills the transmit register/FIFO, 350 - * or make sure it is stopped if we're done. 351 - */ 352 - cr = xspi->read_fn(xspi->regs + XSPI_CR_OFFSET); 353 - xspi->write_fn(cr | XSPI_CR_TRANS_INHIBIT, 354 - xspi->regs + XSPI_CR_OFFSET); 355 - 356 - /* Read out all the data from the Rx FIFO */ 357 - sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET); 358 - while ((sr & XSPI_SR_RX_EMPTY_MASK) == 0) { 359 - xspi->rx_fn(xspi); 360 - sr = xspi->read_fn(xspi->regs + XSPI_SR_OFFSET); 361 - } 362 - 363 - /* See if there is more data to send */ 364 - if (xspi->remaining_bytes > 0) { 365 - xilinx_spi_fill_tx_fifo(xspi); 366 - /* Start the transfer by not inhibiting the 367 - * transmitter any longer 368 - */ 369 - xspi->write_fn(cr, xspi->regs + XSPI_CR_OFFSET); 370 - } else { 371 - /* No more data to send. 372 - * Indicate the transfer is completed. 373 - */ 374 - complete(&xspi->done); 375 - } 317 + complete(&xspi->done); 376 318 } 377 319 378 320 return IRQ_HANDLED;
+2 -1
drivers/usb/chipidea/core.c
··· 276 276 277 277 ci_role_stop(ci); 278 278 ci_role_start(ci, role); 279 - enable_irq(ci->irq); 280 279 } 280 + 281 + enable_irq(ci->irq); 281 282 } 282 283 283 284 static irqreturn_t ci_irq(int irq, void *data)
+8 -5
drivers/usb/chipidea/udc.c
··· 1678 1678 1679 1679 ci->gadget.ep0 = &ci->ep0in->ep; 1680 1680 1681 - if (ci->global_phy) 1681 + if (ci->global_phy) { 1682 1682 ci->transceiver = usb_get_phy(USB_PHY_TYPE_USB2); 1683 + if (IS_ERR(ci->transceiver)) 1684 + ci->transceiver = NULL; 1685 + } 1683 1686 1684 1687 if (ci->platdata->flags & CI13XXX_REQUIRE_TRANSCEIVER) { 1685 1688 if (ci->transceiver == NULL) { ··· 1697 1694 goto put_transceiver; 1698 1695 } 1699 1696 1700 - if (!IS_ERR_OR_NULL(ci->transceiver)) { 1697 + if (ci->transceiver) { 1701 1698 retval = otg_set_peripheral(ci->transceiver->otg, 1702 1699 &ci->gadget); 1703 1700 if (retval) ··· 1714 1711 return retval; 1715 1712 1716 1713 remove_trans: 1717 - if (!IS_ERR_OR_NULL(ci->transceiver)) { 1714 + if (ci->transceiver) { 1718 1715 otg_set_peripheral(ci->transceiver->otg, NULL); 1719 1716 if (ci->global_phy) 1720 1717 usb_put_phy(ci->transceiver); ··· 1722 1719 1723 1720 dev_err(dev, "error = %i\n", retval); 1724 1721 put_transceiver: 1725 - if (!IS_ERR_OR_NULL(ci->transceiver) && ci->global_phy) 1722 + if (ci->transceiver && ci->global_phy) 1726 1723 usb_put_phy(ci->transceiver); 1727 1724 destroy_eps: 1728 1725 destroy_eps(ci); ··· 1750 1747 dma_pool_destroy(ci->td_pool); 1751 1748 dma_pool_destroy(ci->qh_pool); 1752 1749 1753 - if (!IS_ERR_OR_NULL(ci->transceiver)) { 1750 + if (ci->transceiver) { 1754 1751 otg_set_peripheral(ci->transceiver->otg, NULL); 1755 1752 if (ci->global_phy) 1756 1753 usb_put_phy(ci->transceiver);
+4 -4
drivers/usb/serial/f81232.c
··· 165 165 /* FIXME - Stubbed out for now */ 166 166 167 167 /* Don't change anything if nothing has changed */ 168 - if (!tty_termios_hw_change(&tty->termios, old_termios)) 168 + if (old_termios && !tty_termios_hw_change(&tty->termios, old_termios)) 169 169 return; 170 170 171 171 /* Do the real work here... */ 172 - tty_termios_copy_hw(&tty->termios, old_termios); 172 + if (old_termios) 173 + tty_termios_copy_hw(&tty->termios, old_termios); 173 174 } 174 175 175 176 static int f81232_tiocmget(struct tty_struct *tty) ··· 188 187 189 188 static int f81232_open(struct tty_struct *tty, struct usb_serial_port *port) 190 189 { 191 - struct ktermios tmp_termios; 192 190 int result; 193 191 194 192 /* Setup termios */ 195 193 if (tty) 196 - f81232_set_termios(tty, port, &tmp_termios); 194 + f81232_set_termios(tty, port, NULL); 197 195 198 196 result = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL); 199 197 if (result) {
+5 -5
drivers/usb/serial/pl2303.c
··· 284 284 serial settings even to the same values as before. Thus 285 285 we actually need to filter in this specific case */ 286 286 287 - if (!tty_termios_hw_change(&tty->termios, old_termios)) 287 + if (old_termios && !tty_termios_hw_change(&tty->termios, old_termios)) 288 288 return; 289 289 290 290 cflag = tty->termios.c_cflag; ··· 293 293 if (!buf) { 294 294 dev_err(&port->dev, "%s - out of memory.\n", __func__); 295 295 /* Report back no change occurred */ 296 - tty->termios = *old_termios; 296 + if (old_termios) 297 + tty->termios = *old_termios; 297 298 return; 298 299 } 299 300 ··· 434 433 control = priv->line_control; 435 434 if ((cflag & CBAUD) == B0) 436 435 priv->line_control &= ~(CONTROL_DTR | CONTROL_RTS); 437 - else if ((old_termios->c_cflag & CBAUD) == B0) 436 + else if (old_termios && (old_termios->c_cflag & CBAUD) == B0) 438 437 priv->line_control |= (CONTROL_DTR | CONTROL_RTS); 439 438 if (control != priv->line_control) { 440 439 control = priv->line_control; ··· 493 492 494 493 static int pl2303_open(struct tty_struct *tty, struct usb_serial_port *port) 495 494 { 496 - struct ktermios tmp_termios; 497 495 struct usb_serial *serial = port->serial; 498 496 struct pl2303_serial_private *spriv = usb_get_serial_data(serial); 499 497 int result; ··· 508 508 509 509 /* Setup termios */ 510 510 if (tty) 511 - pl2303_set_termios(tty, port, &tmp_termios); 511 + pl2303_set_termios(tty, port, NULL); 512 512 513 513 result = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL); 514 514 if (result) {
+4 -6
drivers/usb/serial/spcp8x5.c
··· 291 291 struct spcp8x5_private *priv = usb_get_serial_port_data(port); 292 292 unsigned long flags; 293 293 unsigned int cflag = tty->termios.c_cflag; 294 - unsigned int old_cflag = old_termios->c_cflag; 295 294 unsigned short uartdata; 296 295 unsigned char buf[2] = {0, 0}; 297 296 int baud; ··· 298 299 u8 control; 299 300 300 301 /* check that they really want us to change something */ 301 - if (!tty_termios_hw_change(&tty->termios, old_termios)) 302 + if (old_termios && !tty_termios_hw_change(&tty->termios, old_termios)) 302 303 return; 303 304 304 305 /* set DTR/RTS active */ 305 306 spin_lock_irqsave(&priv->lock, flags); 306 307 control = priv->line_control; 307 - if ((old_cflag & CBAUD) == B0) { 308 + if (old_termios && (old_termios->c_cflag & CBAUD) == B0) { 308 309 priv->line_control |= MCR_DTR; 309 - if (!(old_cflag & CRTSCTS)) 310 + if (!(old_termios->c_cflag & CRTSCTS)) 310 311 priv->line_control |= MCR_RTS; 311 312 } 312 313 if (control != priv->line_control) { ··· 393 394 394 395 static int spcp8x5_open(struct tty_struct *tty, struct usb_serial_port *port) 395 396 { 396 - struct ktermios tmp_termios; 397 397 struct usb_serial *serial = port->serial; 398 398 struct spcp8x5_private *priv = usb_get_serial_port_data(port); 399 399 int ret; ··· 409 411 spcp8x5_set_ctrl_line(port, priv->line_control); 410 412 411 413 if (tty) 412 - spcp8x5_set_termios(tty, port, &tmp_termios); 414 + spcp8x5_set_termios(tty, port, NULL); 413 415 414 416 port->port.drain_delay = 256; 415 417
+13 -16
drivers/vhost/net.c
··· 155 155 156 156 static void vhost_net_clear_ubuf_info(struct vhost_net *n) 157 157 { 158 - 159 - bool zcopy; 160 158 int i; 161 159 162 - for (i = 0; i < n->dev.nvqs; ++i) { 163 - zcopy = vhost_net_zcopy_mask & (0x1 << i); 164 - if (zcopy) 165 - kfree(n->vqs[i].ubuf_info); 160 + for (i = 0; i < VHOST_NET_VQ_MAX; ++i) { 161 + kfree(n->vqs[i].ubuf_info); 162 + n->vqs[i].ubuf_info = NULL; 166 163 } 167 164 } 168 165 ··· 168 171 bool zcopy; 169 172 int i; 170 173 171 - for (i = 0; i < n->dev.nvqs; ++i) { 174 + for (i = 0; i < VHOST_NET_VQ_MAX; ++i) { 172 175 zcopy = vhost_net_zcopy_mask & (0x1 << i); 173 176 if (!zcopy) 174 177 continue; ··· 180 183 return 0; 181 184 182 185 err: 183 - while (i--) { 184 - zcopy = vhost_net_zcopy_mask & (0x1 << i); 185 - if (!zcopy) 186 - continue; 187 - kfree(n->vqs[i].ubuf_info); 188 - } 186 + vhost_net_clear_ubuf_info(n); 189 187 return -ENOMEM; 190 188 } 191 189 ··· 188 196 { 189 197 int i; 190 198 199 + vhost_net_clear_ubuf_info(n); 200 + 191 201 for (i = 0; i < VHOST_NET_VQ_MAX; i++) { 192 202 n->vqs[i].done_idx = 0; 193 203 n->vqs[i].upend_idx = 0; 194 204 n->vqs[i].ubufs = NULL; 195 - kfree(n->vqs[i].ubuf_info); 196 - n->vqs[i].ubuf_info = NULL; 197 205 n->vqs[i].vhost_hlen = 0; 198 206 n->vqs[i].sock_hlen = 0; 199 207 } ··· 428 436 kref_get(&ubufs->kref); 429 437 } 430 438 nvq->upend_idx = (nvq->upend_idx + 1) % UIO_MAXIOV; 431 - } 439 + } else 440 + msg.msg_control = NULL; 432 441 /* TODO: Check specific error and bomb out unless ENOBUFS? */ 433 442 err = sock->ops->sendmsg(NULL, sock, &msg, len); 434 443 if (unlikely(err < 0)) { ··· 1046 1053 int r; 1047 1054 1048 1055 mutex_lock(&n->dev.mutex); 1056 + if (vhost_dev_has_owner(&n->dev)) { 1057 + r = -EBUSY; 1058 + goto out; 1059 + } 1049 1060 r = vhost_net_set_ubuf_info(n); 1050 1061 if (r) 1051 1062 goto out;
+7 -1
drivers/vhost/vhost.c
··· 344 344 } 345 345 346 346 /* Caller should have device mutex */ 347 + bool vhost_dev_has_owner(struct vhost_dev *dev) 348 + { 349 + return dev->mm; 350 + } 351 + 352 + /* Caller should have device mutex */ 347 353 long vhost_dev_set_owner(struct vhost_dev *dev) 348 354 { 349 355 struct task_struct *worker; 350 356 int err; 351 357 352 358 /* Is there an owner already? */ 353 - if (dev->mm) { 359 + if (vhost_dev_has_owner(dev)) { 354 360 err = -EBUSY; 355 361 goto err_mm; 356 362 }
+1
drivers/vhost/vhost.h
··· 133 133 134 134 long vhost_dev_init(struct vhost_dev *, struct vhost_virtqueue **vqs, int nvqs); 135 135 long vhost_dev_set_owner(struct vhost_dev *dev); 136 + bool vhost_dev_has_owner(struct vhost_dev *dev); 136 137 long vhost_dev_check_owner(struct vhost_dev *); 137 138 struct vhost_memory *vhost_dev_reset_owner_prepare(void); 138 139 void vhost_dev_reset_owner(struct vhost_dev *, struct vhost_memory *);
+2 -2
drivers/xen/tmem.c
··· 379 379 #ifdef CONFIG_FRONTSWAP 380 380 if (tmem_enabled && frontswap) { 381 381 char *s = ""; 382 - struct frontswap_ops *old_ops = 383 - frontswap_register_ops(&tmem_frontswap_ops); 382 + struct frontswap_ops *old_ops; 384 383 385 384 tmem_frontswap_poolid = -1; 385 + old_ops = frontswap_register_ops(&tmem_frontswap_ops); 386 386 if (IS_ERR(old_ops) || old_ops) { 387 387 if (IS_ERR(old_ops)) 388 388 return PTR_ERR(old_ops);
+16 -20
fs/aio.c
··· 141 141 for (i = 0; i < ctx->nr_pages; i++) 142 142 put_page(ctx->ring_pages[i]); 143 143 144 - if (ctx->mmap_size) 145 - vm_munmap(ctx->mmap_base, ctx->mmap_size); 146 - 147 144 if (ctx->ring_pages && ctx->ring_pages != ctx->internal_pages) 148 145 kfree(ctx->ring_pages); 149 146 } ··· 319 322 320 323 aio_free_ring(ctx); 321 324 322 - spin_lock(&aio_nr_lock); 323 - BUG_ON(aio_nr - ctx->max_reqs > aio_nr); 324 - aio_nr -= ctx->max_reqs; 325 - spin_unlock(&aio_nr_lock); 326 - 327 325 pr_debug("freeing %p\n", ctx); 328 326 329 327 /* ··· 427 435 { 428 436 if (!atomic_xchg(&ctx->dead, 1)) { 429 437 hlist_del_rcu(&ctx->list); 430 - /* Between hlist_del_rcu() and dropping the initial ref */ 431 - synchronize_rcu(); 432 438 433 439 /* 434 - * We can't punt to workqueue here because put_ioctx() -> 435 - * free_ioctx() will unmap the ringbuffer, and that has to be 436 - * done in the original process's context. kill_ioctx_rcu/work() 437 - * exist for exit_aio(), as in that path free_ioctx() won't do 438 - * the unmap. 440 + * It'd be more correct to do this in free_ioctx(), after all 441 + * the outstanding kiocbs have finished - but by then io_destroy 442 + * has already returned, so io_setup() could potentially return 443 + * -EAGAIN with no ioctxs actually in use (as far as userspace 444 + * could tell). 439 445 */ 440 - kill_ioctx_work(&ctx->rcu_work); 446 + spin_lock(&aio_nr_lock); 447 + BUG_ON(aio_nr - ctx->max_reqs > aio_nr); 448 + aio_nr -= ctx->max_reqs; 449 + spin_unlock(&aio_nr_lock); 450 + 451 + if (ctx->mmap_size) 452 + vm_munmap(ctx->mmap_base, ctx->mmap_size); 453 + 454 + /* Between hlist_del_rcu() and dropping the initial ref */ 455 + call_rcu(&ctx->rcu_head, kill_ioctx_rcu); 441 456 } 442 457 } 443 458 ··· 494 495 */ 495 496 ctx->mmap_size = 0; 496 497 497 - if (!atomic_xchg(&ctx->dead, 1)) { 498 - hlist_del_rcu(&ctx->list); 499 - call_rcu(&ctx->rcu_head, kill_ioctx_rcu); 500 - } 498 + kill_ioctx(ctx); 501 499 } 502 500 } 503 501
+5 -5
fs/btrfs/disk-io.c
··· 2859 2859 btrfs_free_qgroup_config(fs_info); 2860 2860 fail_trans_kthread: 2861 2861 kthread_stop(fs_info->transaction_kthread); 2862 - del_fs_roots(fs_info); 2863 2862 btrfs_cleanup_transaction(fs_info->tree_root); 2863 + del_fs_roots(fs_info); 2864 2864 fail_cleaner: 2865 2865 kthread_stop(fs_info->cleaner_kthread); 2866 2866 ··· 3512 3512 percpu_counter_sum(&fs_info->delalloc_bytes)); 3513 3513 } 3514 3514 3515 - free_root_pointers(fs_info, 1); 3516 - 3517 3515 btrfs_free_block_groups(fs_info); 3516 + 3517 + btrfs_stop_all_workers(fs_info); 3518 3518 3519 3519 del_fs_roots(fs_info); 3520 3520 3521 - iput(fs_info->btree_inode); 3521 + free_root_pointers(fs_info, 1); 3522 3522 3523 - btrfs_stop_all_workers(fs_info); 3523 + iput(fs_info->btree_inode); 3524 3524 3525 3525 #ifdef CONFIG_BTRFS_FS_CHECK_INTEGRITY 3526 3526 if (btrfs_test_opt(root, CHECK_INTEGRITY))
+3
fs/btrfs/inode.c
··· 8012 8012 { 8013 8013 struct btrfs_root *root = BTRFS_I(inode)->root; 8014 8014 8015 + if (root == NULL) 8016 + return 1; 8017 + 8015 8018 /* the snap/subvol tree is on deleting */ 8016 8019 if (btrfs_root_refs(&root->root_item) == 0 && 8017 8020 root != root->fs_info->tree_root)
+5 -4
fs/btrfs/relocation.c
··· 4082 4082 return inode; 4083 4083 } 4084 4084 4085 - static struct reloc_control *alloc_reloc_control(void) 4085 + static struct reloc_control *alloc_reloc_control(struct btrfs_fs_info *fs_info) 4086 4086 { 4087 4087 struct reloc_control *rc; 4088 4088 ··· 4093 4093 INIT_LIST_HEAD(&rc->reloc_roots); 4094 4094 backref_cache_init(&rc->backref_cache); 4095 4095 mapping_tree_init(&rc->reloc_root_tree); 4096 - extent_io_tree_init(&rc->processed_blocks, NULL); 4096 + extent_io_tree_init(&rc->processed_blocks, 4097 + fs_info->btree_inode->i_mapping); 4097 4098 return rc; 4098 4099 } 4099 4100 ··· 4111 4110 int rw = 0; 4112 4111 int err = 0; 4113 4112 4114 - rc = alloc_reloc_control(); 4113 + rc = alloc_reloc_control(fs_info); 4115 4114 if (!rc) 4116 4115 return -ENOMEM; 4117 4116 ··· 4312 4311 if (list_empty(&reloc_roots)) 4313 4312 goto out; 4314 4313 4315 - rc = alloc_reloc_control(); 4314 + rc = alloc_reloc_control(root->fs_info); 4316 4315 if (!rc) { 4317 4316 err = -ENOMEM; 4318 4317 goto out;
+47 -26
fs/ceph/locks.c
··· 191 191 } 192 192 193 193 /** 194 - * Encode the flock and fcntl locks for the given inode into the pagelist. 195 - * Format is: #fcntl locks, sequential fcntl locks, #flock locks, 196 - * sequential flock locks. 197 - * Must be called with lock_flocks() already held. 198 - * If we encounter more of a specific lock type than expected, 199 - * we return the value 1. 194 + * Encode the flock and fcntl locks for the given inode into the ceph_filelock 195 + * array. Must be called with lock_flocks() already held. 196 + * If we encounter more of a specific lock type than expected, return -ENOSPC. 200 197 */ 201 - int ceph_encode_locks(struct inode *inode, struct ceph_pagelist *pagelist, 202 - int num_fcntl_locks, int num_flock_locks) 198 + int ceph_encode_locks_to_buffer(struct inode *inode, 199 + struct ceph_filelock *flocks, 200 + int num_fcntl_locks, int num_flock_locks) 203 201 { 204 202 struct file_lock *lock; 205 - struct ceph_filelock cephlock; 206 203 int err = 0; 207 204 int seen_fcntl = 0; 208 205 int seen_flock = 0; 206 + int l = 0; 209 207 210 208 dout("encoding %d flock and %d fcntl locks", num_flock_locks, 211 209 num_fcntl_locks); 212 - err = ceph_pagelist_append(pagelist, &num_fcntl_locks, sizeof(u32)); 213 - if (err) 214 - goto fail; 210 + 215 211 for (lock = inode->i_flock; lock != NULL; lock = lock->fl_next) { 216 212 if (lock->fl_flags & FL_POSIX) { 217 213 ++seen_fcntl; ··· 215 219 err = -ENOSPC; 216 220 goto fail; 217 221 } 218 - err = lock_to_ceph_filelock(lock, &cephlock); 222 + err = lock_to_ceph_filelock(lock, &flocks[l]); 219 223 if (err) 220 224 goto fail; 221 - err = ceph_pagelist_append(pagelist, &cephlock, 222 - sizeof(struct ceph_filelock)); 225 + ++l; 223 226 } 224 - if (err) 225 - goto fail; 226 227 } 227 - 228 - err = ceph_pagelist_append(pagelist, &num_flock_locks, sizeof(u32)); 229 - if (err) 230 - goto fail; 231 228 for (lock = inode->i_flock; lock != NULL; lock = lock->fl_next) { 232 229 if (lock->fl_flags & FL_FLOCK) { ++seen_flock; ··· 228 239 err = -ENOSPC; 229 240 goto fail; 230 241 } 231 - err = lock_to_ceph_filelock(lock, &cephlock); 242 + err = lock_to_ceph_filelock(lock, &flocks[l]); 232 243 if (err) 233 244 goto fail; 234 - err = ceph_pagelist_append(pagelist, &cephlock, 235 - sizeof(struct ceph_filelock)); 245 + ++l; 236 246 } 237 - if (err) 238 - goto fail; 239 247 } 240 248 fail: 249 + return err; 250 + } 251 + 252 + /** 253 + * Copy the encoded flock and fcntl locks into the pagelist. 254 + * Format is: #fcntl locks, sequential fcntl locks, #flock locks, 255 + * sequential flock locks. 256 + * Returns zero on success. 257 + */ 258 + int ceph_locks_to_pagelist(struct ceph_filelock *flocks, 259 + struct ceph_pagelist *pagelist, 260 + int num_fcntl_locks, int num_flock_locks) 261 + { 262 + int err = 0; 263 + __le32 nlocks; 264 + 265 + nlocks = cpu_to_le32(num_fcntl_locks); 266 + err = ceph_pagelist_append(pagelist, &nlocks, sizeof(nlocks)); 267 + if (err) 268 + goto out_fail; 269 + 270 + err = ceph_pagelist_append(pagelist, flocks, 271 + num_fcntl_locks * sizeof(*flocks)); 272 + if (err) 273 + goto out_fail; 274 + 275 + nlocks = cpu_to_le32(num_flock_locks); 276 + err = ceph_pagelist_append(pagelist, &nlocks, sizeof(nlocks)); 277 + if (err) 278 + goto out_fail; 279 + 280 + err = ceph_pagelist_append(pagelist, 281 + &flocks[num_fcntl_locks], 282 + num_flock_locks * sizeof(*flocks)); 283 + out_fail: 241 284 return err; 242 285 } 243 286
+34 -29
fs/ceph/mds_client.c
··· 2478 2478 2479 2479 if (recon_state->flock) { 2480 2480 int num_fcntl_locks, num_flock_locks; 2481 - struct ceph_pagelist_cursor trunc_point; 2481 + struct ceph_filelock *flocks; 2482 2482 2483 - ceph_pagelist_set_cursor(pagelist, &trunc_point); 2484 - do { 2485 - lock_flocks(); 2486 - ceph_count_locks(inode, &num_fcntl_locks, 2487 - &num_flock_locks); 2488 - rec.v2.flock_len = (2*sizeof(u32) + 2489 - (num_fcntl_locks+num_flock_locks) * 2490 - sizeof(struct ceph_filelock)); 2491 - unlock_flocks(); 2492 - 2493 - /* pre-alloc pagelist */ 2494 - ceph_pagelist_truncate(pagelist, &trunc_point); 2495 - err = ceph_pagelist_append(pagelist, &rec, reclen); 2496 - if (!err) 2497 - err = ceph_pagelist_reserve(pagelist, 2498 - rec.v2.flock_len); 2499 - 2500 - /* encode locks */ 2501 - if (!err) { 2502 - lock_flocks(); 2503 - err = ceph_encode_locks(inode, 2504 - pagelist, 2505 - num_fcntl_locks, 2506 - num_flock_locks); 2507 - unlock_flocks(); 2508 - } 2509 - } while (err == -ENOSPC); 2483 + encode_again: 2484 + lock_flocks(); 2485 + ceph_count_locks(inode, &num_fcntl_locks, &num_flock_locks); 2486 + unlock_flocks(); 2487 + flocks = kmalloc((num_fcntl_locks+num_flock_locks) * 2488 + sizeof(struct ceph_filelock), GFP_NOFS); 2489 + if (!flocks) { 2490 + err = -ENOMEM; 2491 + goto out_free; 2492 + } 2493 + lock_flocks(); 2494 + err = ceph_encode_locks_to_buffer(inode, flocks, 2495 + num_fcntl_locks, 2496 + num_flock_locks); 2497 + unlock_flocks(); 2498 + if (err) { 2499 + kfree(flocks); 2500 + if (err == -ENOSPC) 2501 + goto encode_again; 2502 + goto out_free; 2503 + } 2504 + /* 2505 + * number of encoded locks is stable, so copy to pagelist 2506 + */ 2507 + rec.v2.flock_len = cpu_to_le32(2*sizeof(u32) + 2508 + (num_fcntl_locks+num_flock_locks) * 2509 + sizeof(struct ceph_filelock)); 2510 + err = ceph_pagelist_append(pagelist, &rec, reclen); 2511 + if (!err) 2512 + err = ceph_locks_to_pagelist(flocks, pagelist, 2513 + num_fcntl_locks, 2514 + num_flock_locks); 2515 + kfree(flocks); 2510 2516 } else { 2511 2517 err = ceph_pagelist_append(pagelist, &rec, reclen); 2512 2518 } 2513 - 2514 2519 out_free: 2515 2520 kfree(path); 2516 2521 out_dput:
+7 -2
fs/ceph/super.h
··· 822 822 extern int ceph_lock(struct file *file, int cmd, struct file_lock *fl); 823 823 extern int ceph_flock(struct file *file, int cmd, struct file_lock *fl); 824 824 extern void ceph_count_locks(struct inode *inode, int *p_num, int *f_num); 825 - extern int ceph_encode_locks(struct inode *i, struct ceph_pagelist *p, 826 - int p_locks, int f_locks); 825 + extern int ceph_encode_locks_to_buffer(struct inode *inode, 826 + struct ceph_filelock *flocks, 827 + int num_fcntl_locks, 828 + int num_flock_locks); 829 + extern int ceph_locks_to_pagelist(struct ceph_filelock *flocks, 830 + struct ceph_pagelist *pagelist, 831 + int num_fcntl_locks, int num_flock_locks); 827 832 extern int lock_to_ceph_filelock(struct file_lock *fl, struct ceph_filelock *c); 828 833 829 834 /* debugfs.c */
+10 -9
fs/file_table.c
··· 306 306 { 307 307 if (atomic_long_dec_and_test(&file->f_count)) { 308 308 struct task_struct *task = current; 309 + unsigned long flags; 310 + 309 311 file_sb_list_del(file); 310 - if (unlikely(in_interrupt() || task->flags & PF_KTHREAD)) { 311 - unsigned long flags; 312 - spin_lock_irqsave(&delayed_fput_lock, flags); 313 - list_add(&file->f_u.fu_list, &delayed_fput_list); 314 - schedule_work(&delayed_fput_work); 315 - spin_unlock_irqrestore(&delayed_fput_lock, flags); 316 - return; 312 + if (likely(!in_interrupt() && !(task->flags & PF_KTHREAD))) { 313 + init_task_work(&file->f_u.fu_rcuhead, ____fput); 314 + if (!task_work_add(task, &file->f_u.fu_rcuhead, true)) 315 + return; 317 316 } 318 - init_task_work(&file->f_u.fu_rcuhead, ____fput); 319 - task_work_add(task, &file->f_u.fu_rcuhead, true); 317 + spin_lock_irqsave(&delayed_fput_lock, flags); 318 + list_add(&file->f_u.fu_list, &delayed_fput_list); 319 + schedule_work(&delayed_fput_work); 320 + spin_unlock_irqrestore(&delayed_fput_lock, flags); 320 321 } 321 322 } 322 323
+2 -2
fs/namei.c
··· 1976 1976 err = complete_walk(nd); 1977 1977 1978 1978 if (!err && nd->flags & LOOKUP_DIRECTORY) { 1979 - if (!nd->inode->i_op->lookup) { 1979 + if (!can_lookup(nd->inode)) { 1980 1980 path_put(&nd->path); 1981 1981 err = -ENOTDIR; 1982 1982 } ··· 2850 2850 if ((open_flag & O_CREAT) && S_ISDIR(nd->inode->i_mode)) 2851 2851 goto out; 2852 2852 error = -ENOTDIR; 2853 - if ((nd->flags & LOOKUP_DIRECTORY) && !nd->inode->i_op->lookup) 2853 + if ((nd->flags & LOOKUP_DIRECTORY) && !can_lookup(nd->inode)) 2854 2854 goto out; 2855 2855 audit_inode(name, nd->path.dentry, 0); 2856 2856 finish_open:
-9
fs/ncpfs/dir.c
··· 1029 1029 DPRINTK("ncp_rmdir: removing %s/%s\n", 1030 1030 dentry->d_parent->d_name.name, dentry->d_name.name); 1031 1031 1032 - /* 1033 - * fail with EBUSY if there are still references to this 1034 - * directory. 1035 - */ 1036 - dentry_unhash(dentry); 1037 - error = -EBUSY; 1038 - if (!d_unhashed(dentry)) 1039 - goto out; 1040 - 1041 1032 len = sizeof(__name); 1042 1033 error = ncp_io2vol(server, __name, &len, dentry->d_name.name, 1043 1034 dentry->d_name.len, !ncp_preserve_case(dir));
+1
fs/ocfs2/dlm/dlmrecovery.c
··· 1408 1408 mres->lockname_len, mres->lockname); 1409 1409 ret = -EFAULT; 1410 1410 spin_unlock(&res->spinlock); 1411 + dlm_lockres_put(res); 1411 1412 goto leave; 1412 1413 } 1413 1414 res->state |= DLM_LOCK_RES_MIGRATING;
+2 -2
fs/ocfs2/namei.c
··· 947 947 ocfs2_free_dir_lookup_result(&orphan_insert); 948 948 ocfs2_free_dir_lookup_result(&lookup); 949 949 950 - if (status) 950 + if (status && (status != -ENOTEMPTY)) 951 951 mlog_errno(status); 952 952 953 953 return status; ··· 2216 2216 2217 2217 brelse(orphan_dir_bh); 2218 2218 2219 - return 0; 2219 + return ret; 2220 2220 } 2221 2221 2222 2222 int ocfs2_create_inode_in_orphan(struct inode *dir,
+5 -5
fs/proc/kmsg.c
··· 21 21 22 22 static int kmsg_open(struct inode * inode, struct file * file) 23 23 { 24 - return do_syslog(SYSLOG_ACTION_OPEN, NULL, 0, SYSLOG_FROM_FILE); 24 + return do_syslog(SYSLOG_ACTION_OPEN, NULL, 0, SYSLOG_FROM_PROC); 25 25 } 26 26 27 27 static int kmsg_release(struct inode * inode, struct file * file) 28 28 { 29 - (void) do_syslog(SYSLOG_ACTION_CLOSE, NULL, 0, SYSLOG_FROM_FILE); 29 + (void) do_syslog(SYSLOG_ACTION_CLOSE, NULL, 0, SYSLOG_FROM_PROC); 30 30 return 0; 31 31 } 32 32 ··· 34 34 size_t count, loff_t *ppos) 35 35 { 36 36 if ((file->f_flags & O_NONBLOCK) && 37 - !do_syslog(SYSLOG_ACTION_SIZE_UNREAD, NULL, 0, SYSLOG_FROM_FILE)) 37 + !do_syslog(SYSLOG_ACTION_SIZE_UNREAD, NULL, 0, SYSLOG_FROM_PROC)) 38 38 return -EAGAIN; 39 - return do_syslog(SYSLOG_ACTION_READ, buf, count, SYSLOG_FROM_FILE); 39 + return do_syslog(SYSLOG_ACTION_READ, buf, count, SYSLOG_FROM_PROC); 40 40 } 41 41 42 42 static unsigned int kmsg_poll(struct file *file, poll_table *wait) 43 43 { 44 44 poll_wait(file, &log_wait, wait); 45 - if (do_syslog(SYSLOG_ACTION_SIZE_UNREAD, NULL, 0, SYSLOG_FROM_FILE)) 45 + if (do_syslog(SYSLOG_ACTION_SIZE_UNREAD, NULL, 0, SYSLOG_FROM_PROC)) 46 46 return POLLIN | POLLRDNORM; 47 47 return 0; 48 48 }
+1
fs/xfs/xfs_attr_leaf.h
··· 128 128 __u8 holes; 129 129 __u8 pad1; 130 130 struct xfs_attr_leaf_map freemap[XFS_ATTR_LEAF_MAPSIZE]; 131 + __be32 pad2; /* 64 bit alignment */ 131 132 }; 132 133 133 134 #define XFS_ATTR3_LEAF_CRC_OFF (offsetof(struct xfs_attr3_leaf_hdr, info.crc))
+10
fs/xfs/xfs_btree.c
··· 2544 2544 if (error) 2545 2545 goto error0; 2546 2546 2547 + /* 2548 + * we can't just memcpy() the root in for CRC enabled btree blocks. 2549 + * In that case have to also ensure the blkno remains correct 2550 + */ 2547 2551 memcpy(cblock, block, xfs_btree_block_len(cur)); 2552 + if (cur->bc_flags & XFS_BTREE_CRC_BLOCKS) { 2553 + if (cur->bc_flags & XFS_BTREE_LONG_PTRS) 2554 + cblock->bb_u.l.bb_blkno = cpu_to_be64(cbp->b_bn); 2555 + else 2556 + cblock->bb_u.s.bb_blkno = cpu_to_be64(cbp->b_bn); 2557 + } 2548 2558 2549 2559 be16_add_cpu(&block->bb_level, 1); 2550 2560 xfs_btree_set_numrecs(block, 1);
+3 -2
fs/xfs/xfs_dir2_format.h
··· 266 266 struct xfs_dir3_data_hdr { 267 267 struct xfs_dir3_blk_hdr hdr; 268 268 xfs_dir2_data_free_t best_free[XFS_DIR2_DATA_FD_COUNT]; 269 + __be32 pad; /* 64 bit alignment */ 269 270 }; 270 271 271 272 #define XFS_DIR3_DATA_CRC_OFF offsetof(struct xfs_dir3_data_hdr, hdr.crc) ··· 478 477 struct xfs_da3_blkinfo info; /* header for da routines */ 479 478 __be16 count; /* count of entries */ 480 479 __be16 stale; /* count of stale entries */ 481 - __be32 pad; 480 + __be32 pad; /* 64 bit alignment */ 482 481 }; 483 482 484 483 struct xfs_dir3_icleaf_hdr { ··· 716 715 __be32 firstdb; /* db of first entry */ 717 716 __be32 nvalid; /* count of valid entries */ 718 717 __be32 nused; /* count of used entries */ 719 - __be32 pad; /* 64 bit alignment. */ 718 + __be32 pad; /* 64 bit alignment */ 720 719 }; 721 720 722 721 struct xfs_dir3_free {
+17 -2
fs/xfs/xfs_log_recover.c
··· 1845 1845 xfs_agino_t *buffer_nextp; 1846 1846 1847 1847 trace_xfs_log_recover_buf_inode_buf(mp->m_log, buf_f); 1848 - bp->b_ops = &xfs_inode_buf_ops; 1848 + 1849 + /* 1850 + * Post recovery validation only works properly on CRC enabled 1851 + * filesystems. 1852 + */ 1853 + if (xfs_sb_version_hascrc(&mp->m_sb)) 1854 + bp->b_ops = &xfs_inode_buf_ops; 1849 1855 1850 1856 inodes_per_buf = BBTOB(bp->b_io_length) >> mp->m_sb.sb_inodelog; 1851 1857 for (i = 0; i < inodes_per_buf; i++) { ··· 2211 2205 /* Shouldn't be any more regions */ 2212 2206 ASSERT(i == item->ri_total); 2213 2207 2214 - xlog_recovery_validate_buf_type(mp, bp, buf_f); 2208 + /* 2209 + * We can only do post recovery validation on items on CRC enabled 2210 + * filesystems as we need to know when the buffer was written to be able 2211 + * to determine if we should have replayed the item. If we replay old 2212 + * metadata over a newer buffer, then it will enter a temporarily 2213 + * inconsistent state resulting in verification failures. Hence for now 2214 + * just avoid the verification stage for non-crc filesystems 2215 + */ 2216 + if (xfs_sb_version_hascrc(&mp->m_sb)) 2217 + xlog_recovery_validate_buf_type(mp, bp, buf_f); 2215 2218 }
+11 -7
fs/xfs/xfs_mount.c
··· 314 314 xfs_mount_validate_sb( 315 315 xfs_mount_t *mp, 316 316 xfs_sb_t *sbp, 317 - bool check_inprogress) 317 + bool check_inprogress, 318 + bool check_version) 318 319 { 319 320 320 321 /* ··· 338 337 339 338 /* 340 339 * Version 5 superblock feature mask validation. Reject combinations the 341 - * kernel cannot support up front before checking anything else. 340 + * kernel cannot support up front before checking anything else. For 341 + * write validation, we don't need to check feature masks. 342 342 */ 343 - if (XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) { 343 + if (check_version && XFS_SB_VERSION_NUM(sbp) == XFS_SB_VERSION_5) { 344 344 xfs_alert(mp, 345 345 "Version 5 superblock detected. This kernel has EXPERIMENTAL support enabled!\n" 346 346 "Use of these features in this kernel is at your own risk!"); ··· 677 675 678 676 static int 679 677 xfs_sb_verify( 680 - struct xfs_buf *bp) 678 + struct xfs_buf *bp, 679 + bool check_version) 681 680 { 682 681 struct xfs_mount *mp = bp->b_target->bt_mount; 683 682 struct xfs_sb sb; ··· 689 686 * Only check the in progress field for the primary superblock as 690 687 * mkfs.xfs doesn't clear it from secondary superblocks. 691 688 */ 692 - return xfs_mount_validate_sb(mp, &sb, bp->b_bn == XFS_SB_DADDR); 689 + return xfs_mount_validate_sb(mp, &sb, bp->b_bn == XFS_SB_DADDR, 690 + check_version); 693 691 } 694 692 695 693 /* ··· 723 719 goto out_error; 724 720 } 725 721 } 726 - error = xfs_sb_verify(bp); 722 + error = xfs_sb_verify(bp, true); 727 723 728 724 out_error: 729 725 if (error) { ··· 762 758 struct xfs_buf_log_item *bip = bp->b_fspriv; 763 759 int error; 764 760 765 - error = xfs_sb_verify(bp); 761 + error = xfs_sb_verify(bp, false); 766 762 if (error) { 767 763 XFS_CORRUPTION_ERROR(__func__, XFS_ERRLEVEL_LOW, mp, bp->b_addr); 768 764 xfs_buf_ioerror(bp, error);
+5
include/asm-generic/kvm_para.h
··· 18 18 return 0; 19 19 } 20 20 21 + static inline bool kvm_para_available(void) 22 + { 23 + return false; 24 + } 25 + 21 26 #endif
+148
include/dt-bindings/clock/imx6sl-clock.h
··· 1 + /* 2 + * Copyright 2013 Freescale Semiconductor, Inc. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License version 2 as 6 + * published by the Free Software Foundation. 7 + * 8 + */ 9 + 10 + #ifndef __DT_BINDINGS_CLOCK_IMX6SL_H 11 + #define __DT_BINDINGS_CLOCK_IMX6SL_H 12 + 13 + #define IMX6SL_CLK_DUMMY 0 14 + #define IMX6SL_CLK_CKIL 1 15 + #define IMX6SL_CLK_OSC 2 16 + #define IMX6SL_CLK_PLL1_SYS 3 17 + #define IMX6SL_CLK_PLL2_BUS 4 18 + #define IMX6SL_CLK_PLL3_USB_OTG 5 19 + #define IMX6SL_CLK_PLL4_AUDIO 6 20 + #define IMX6SL_CLK_PLL5_VIDEO 7 21 + #define IMX6SL_CLK_PLL6_ENET 8 22 + #define IMX6SL_CLK_PLL7_USB_HOST 9 23 + #define IMX6SL_CLK_USBPHY1 10 24 + #define IMX6SL_CLK_USBPHY2 11 25 + #define IMX6SL_CLK_USBPHY1_GATE 12 26 + #define IMX6SL_CLK_USBPHY2_GATE 13 27 + #define IMX6SL_CLK_PLL4_POST_DIV 14 28 + #define IMX6SL_CLK_PLL5_POST_DIV 15 29 + #define IMX6SL_CLK_PLL5_VIDEO_DIV 16 30 + #define IMX6SL_CLK_ENET_REF 17 31 + #define IMX6SL_CLK_PLL2_PFD0 18 32 + #define IMX6SL_CLK_PLL2_PFD1 19 33 + #define IMX6SL_CLK_PLL2_PFD2 20 34 + #define IMX6SL_CLK_PLL3_PFD0 21 35 + #define IMX6SL_CLK_PLL3_PFD1 22 36 + #define IMX6SL_CLK_PLL3_PFD2 23 37 + #define IMX6SL_CLK_PLL3_PFD3 24 38 + #define IMX6SL_CLK_PLL2_198M 25 39 + #define IMX6SL_CLK_PLL3_120M 26 40 + #define IMX6SL_CLK_PLL3_80M 27 41 + #define IMX6SL_CLK_PLL3_60M 28 42 + #define IMX6SL_CLK_STEP 29 43 + #define IMX6SL_CLK_PLL1_SW 30 44 + #define IMX6SL_CLK_OCRAM_ALT_SEL 31 45 + #define IMX6SL_CLK_OCRAM_SEL 32 46 + #define IMX6SL_CLK_PRE_PERIPH2_SEL 33 47 + #define IMX6SL_CLK_PRE_PERIPH_SEL 34 48 + #define IMX6SL_CLK_PERIPH2_CLK2_SEL 35 49 + #define IMX6SL_CLK_PERIPH_CLK2_SEL 36 50 + #define IMX6SL_CLK_CSI_SEL 37 51 + #define IMX6SL_CLK_LCDIF_AXI_SEL 38 52 + #define IMX6SL_CLK_USDHC1_SEL 39 53 + #define IMX6SL_CLK_USDHC2_SEL 40 54 + #define IMX6SL_CLK_USDHC3_SEL 41 55 + #define IMX6SL_CLK_USDHC4_SEL 42 56 + #define IMX6SL_CLK_SSI1_SEL 43 57 + #define IMX6SL_CLK_SSI2_SEL 44 58 + #define IMX6SL_CLK_SSI3_SEL 45 59 + #define IMX6SL_CLK_PERCLK_SEL 46 60 + #define IMX6SL_CLK_PXP_AXI_SEL 47 61 + #define IMX6SL_CLK_EPDC_AXI_SEL 48 62 + #define IMX6SL_CLK_GPU2D_OVG_SEL 49 63 + #define IMX6SL_CLK_GPU2D_SEL 50 64 + #define IMX6SL_CLK_LCDIF_PIX_SEL 51 65 + #define IMX6SL_CLK_EPDC_PIX_SEL 52 66 + #define IMX6SL_CLK_SPDIF0_SEL 53 67 + #define IMX6SL_CLK_SPDIF1_SEL 54 68 + #define IMX6SL_CLK_EXTERN_AUDIO_SEL 55 69 + #define IMX6SL_CLK_ECSPI_SEL 56 70 + #define IMX6SL_CLK_UART_SEL 57 71 + #define IMX6SL_CLK_PERIPH 58 72 + #define IMX6SL_CLK_PERIPH2 59 73 + #define IMX6SL_CLK_OCRAM_PODF 60 74 + #define IMX6SL_CLK_PERIPH_CLK2_PODF 61 75 + #define IMX6SL_CLK_PERIPH2_CLK2_PODF 62 76 + #define IMX6SL_CLK_IPG 63 77 + #define IMX6SL_CLK_CSI_PODF 64 78 + #define IMX6SL_CLK_LCDIF_AXI_PODF 65 79 + #define IMX6SL_CLK_USDHC1_PODF 66 80 + #define IMX6SL_CLK_USDHC2_PODF 67 81 + #define IMX6SL_CLK_USDHC3_PODF 68 82 + #define IMX6SL_CLK_USDHC4_PODF 69 83 + #define IMX6SL_CLK_SSI1_PRED 70 84 + #define IMX6SL_CLK_SSI1_PODF 71 85 + #define IMX6SL_CLK_SSI2_PRED 72 86 + #define IMX6SL_CLK_SSI2_PODF 73 87 + #define IMX6SL_CLK_SSI3_PRED 74 88 + #define IMX6SL_CLK_SSI3_PODF 75 89 + #define IMX6SL_CLK_PERCLK 76 90 + #define IMX6SL_CLK_PXP_AXI_PODF 77 91 + #define IMX6SL_CLK_EPDC_AXI_PODF 78 92 + #define IMX6SL_CLK_GPU2D_OVG_PODF 79 93 + #define IMX6SL_CLK_GPU2D_PODF 80 94 + #define IMX6SL_CLK_LCDIF_PIX_PRED 81 95 + #define IMX6SL_CLK_EPDC_PIX_PRED 82 96 + #define IMX6SL_CLK_LCDIF_PIX_PODF 83 97 + #define IMX6SL_CLK_EPDC_PIX_PODF 84 98 + #define IMX6SL_CLK_SPDIF0_PRED 85 99 + #define IMX6SL_CLK_SPDIF0_PODF 86 100 + #define IMX6SL_CLK_SPDIF1_PRED 87 101 + #define IMX6SL_CLK_SPDIF1_PODF 88 102 + #define IMX6SL_CLK_EXTERN_AUDIO_PRED 89 103 + #define IMX6SL_CLK_EXTERN_AUDIO_PODF 90 104 + #define IMX6SL_CLK_ECSPI_ROOT 91 105 + #define IMX6SL_CLK_UART_ROOT 92 106 + #define IMX6SL_CLK_AHB 93 107 + #define IMX6SL_CLK_MMDC_ROOT 94 108 + #define IMX6SL_CLK_ARM 95 109 + #define IMX6SL_CLK_ECSPI1 96 110 + #define IMX6SL_CLK_ECSPI2 97 111 + #define IMX6SL_CLK_ECSPI3 98 112 + #define IMX6SL_CLK_ECSPI4 99 113 + #define IMX6SL_CLK_EPIT1 100 114 + #define IMX6SL_CLK_EPIT2 101 115 + #define IMX6SL_CLK_EXTERN_AUDIO 102 116 + #define IMX6SL_CLK_GPT 103 117 + #define IMX6SL_CLK_GPT_SERIAL 104 118 + #define IMX6SL_CLK_GPU2D_OVG 105 119 + #define IMX6SL_CLK_I2C1 106 120 + #define IMX6SL_CLK_I2C2 107 121 + #define IMX6SL_CLK_I2C3 108 122 + #define IMX6SL_CLK_OCOTP 109 123 + #define IMX6SL_CLK_CSI 110 124 + #define IMX6SL_CLK_PXP_AXI 111 125 + #define IMX6SL_CLK_EPDC_AXI 112 126 + #define IMX6SL_CLK_LCDIF_AXI 113 127 + #define IMX6SL_CLK_LCDIF_PIX 114 128 + #define IMX6SL_CLK_EPDC_PIX 115 129 + #define IMX6SL_CLK_OCRAM 116 130 + #define IMX6SL_CLK_PWM1 117 131 + #define IMX6SL_CLK_PWM2 118 132 + #define IMX6SL_CLK_PWM3 119 133 + #define IMX6SL_CLK_PWM4 120 134 + #define IMX6SL_CLK_SDMA 121 135 + #define IMX6SL_CLK_SPDIF 122 136 + #define IMX6SL_CLK_SSI1 123 137 + #define IMX6SL_CLK_SSI2 124 138 + #define IMX6SL_CLK_SSI3 125 139 + #define IMX6SL_CLK_UART 126 140 + #define IMX6SL_CLK_UART_SERIAL 127 141 + #define IMX6SL_CLK_USBOH3 128 142 + #define IMX6SL_CLK_USDHC1 129 143 + #define IMX6SL_CLK_USDHC2 130 144 + #define IMX6SL_CLK_USDHC3 131 145 + #define IMX6SL_CLK_USDHC4 132 146 + #define IMX6SL_CLK_CLK_END 133 147 + 148 + #endif /* __DT_BINDINGS_CLOCK_IMX6SL_H */
+163
include/dt-bindings/clock/vf610-clock.h
··· 1 + /* 2 + * Copyright 2013 Freescale Semiconductor, Inc. 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + */ 9 + 10 + #ifndef __DT_BINDINGS_CLOCK_VF610_H 11 + #define __DT_BINDINGS_CLOCK_VF610_H 12 + 13 + #define VF610_CLK_DUMMY 0 14 + #define VF610_CLK_SIRC_128K 1 15 + #define VF610_CLK_SIRC_32K 2 16 + #define VF610_CLK_FIRC 3 17 + #define VF610_CLK_SXOSC 4 18 + #define VF610_CLK_FXOSC 5 19 + #define VF610_CLK_FXOSC_HALF 6 20 + #define VF610_CLK_SLOW_CLK_SEL 7 21 + #define VF610_CLK_FASK_CLK_SEL 8 22 + #define VF610_CLK_AUDIO_EXT 9 23 + #define VF610_CLK_ENET_EXT 10 24 + #define VF610_CLK_PLL1_MAIN 11 25 + #define VF610_CLK_PLL1_PFD1 12 26 + #define VF610_CLK_PLL1_PFD2 13 27 + #define VF610_CLK_PLL1_PFD3 14 28 + #define VF610_CLK_PLL1_PFD4 15 29 + #define VF610_CLK_PLL2_MAIN 16 30 + #define VF610_CLK_PLL2_PFD1 17 31 + #define VF610_CLK_PLL2_PFD2 18 32 + #define VF610_CLK_PLL2_PFD3 19 33 + #define VF610_CLK_PLL2_PFD4 20 34 + #define VF610_CLK_PLL3_MAIN 21 35 + #define VF610_CLK_PLL3_PFD1 22 36 + #define VF610_CLK_PLL3_PFD2 23 37 + #define VF610_CLK_PLL3_PFD3 24 38 + #define VF610_CLK_PLL3_PFD4 25 39 + #define VF610_CLK_PLL4_MAIN 26 40 + #define VF610_CLK_PLL5_MAIN 27 41 + #define VF610_CLK_PLL6_MAIN 28 42 + #define VF610_CLK_PLL3_MAIN_DIV 29 43 + #define VF610_CLK_PLL4_MAIN_DIV 30 44 + #define VF610_CLK_PLL6_MAIN_DIV 31 45 + #define VF610_CLK_PLL1_PFD_SEL 32 46 + #define VF610_CLK_PLL2_PFD_SEL 33 47 + #define VF610_CLK_SYS_SEL 34 48 + #define VF610_CLK_DDR_SEL 35 49 + #define VF610_CLK_SYS_BUS 36 50 + #define VF610_CLK_PLATFORM_BUS 37 51 + #define VF610_CLK_IPG_BUS 38 52 + #define VF610_CLK_UART0 39 53 + #define VF610_CLK_UART1 40 54 + #define VF610_CLK_UART2 41 55 + #define VF610_CLK_UART3 42 56 + #define VF610_CLK_UART4 43 57 + #define VF610_CLK_UART5 44 58 + #define VF610_CLK_PIT 45 59 + #define VF610_CLK_I2C0 46 60 + #define VF610_CLK_I2C1 47 61 + #define VF610_CLK_I2C2 48 62 + #define VF610_CLK_I2C3 49 63 + #define VF610_CLK_FTM0_EXT_SEL 50 64 + #define VF610_CLK_FTM0_FIX_SEL 51 65 + #define VF610_CLK_FTM0_EXT_FIX_EN 52 66 + #define VF610_CLK_FTM1_EXT_SEL 53 67 + #define VF610_CLK_FTM1_FIX_SEL 54 68 + #define VF610_CLK_FTM1_EXT_FIX_EN 55 69 + #define VF610_CLK_FTM2_EXT_SEL 56 70 + #define VF610_CLK_FTM2_FIX_SEL 57 71 + #define VF610_CLK_FTM2_EXT_FIX_EN 58 72 + #define VF610_CLK_FTM3_EXT_SEL 59 73 + #define VF610_CLK_FTM3_FIX_SEL 60 74 + #define VF610_CLK_FTM3_EXT_FIX_EN 61 75 + #define VF610_CLK_FTM0 62 76 + #define VF610_CLK_FTM1 63 77 + #define VF610_CLK_FTM2 64 78 + #define VF610_CLK_FTM3 65 79 + #define VF610_CLK_ENET_50M 66 80 + #define VF610_CLK_ENET_25M 67 81 + #define VF610_CLK_ENET_SEL 68 82 + #define VF610_CLK_ENET 69 83 + #define VF610_CLK_ENET_TS_SEL 70 84 + #define VF610_CLK_ENET_TS 71 85 + #define VF610_CLK_DSPI0 72 86 + #define VF610_CLK_DSPI1 73 87 + #define VF610_CLK_DSPI2 74 88 + #define VF610_CLK_DSPI3 75 89 + #define VF610_CLK_WDT 76 90 + #define VF610_CLK_ESDHC0_SEL 77 91 + #define VF610_CLK_ESDHC0_EN 78 92 + #define VF610_CLK_ESDHC0_DIV 79 93 + #define VF610_CLK_ESDHC0 80 94 + #define VF610_CLK_ESDHC1_SEL 81 95 + #define VF610_CLK_ESDHC1_EN 82 96 + #define VF610_CLK_ESDHC1_DIV 83 97 + #define VF610_CLK_ESDHC1 84 98 + #define VF610_CLK_DCU0_SEL 85 99 + #define VF610_CLK_DCU0_EN 86 100 + #define VF610_CLK_DCU0_DIV 87 101 + #define VF610_CLK_DCU0 88 102 + #define VF610_CLK_DCU1_SEL 89 103 + #define VF610_CLK_DCU1_EN 90 104 + #define VF610_CLK_DCU1_DIV 91 105 + #define VF610_CLK_DCU1 92 106 + #define VF610_CLK_ESAI_SEL 93 107 + #define VF610_CLK_ESAI_EN 94 108 + #define VF610_CLK_ESAI_DIV 95 109 + #define VF610_CLK_ESAI 96 110 + #define VF610_CLK_SAI0_SEL 97 111 + #define VF610_CLK_SAI0_EN 98 112 + #define VF610_CLK_SAI0_DIV 99 113 + #define VF610_CLK_SAI0 100 114 + #define VF610_CLK_SAI1_SEL 101 115 + #define VF610_CLK_SAI1_EN 102 116 + #define VF610_CLK_SAI1_DIV 103 117 + #define VF610_CLK_SAI1 104 118 + #define VF610_CLK_SAI2_SEL 105 119 + #define VF610_CLK_SAI2_EN 106 120 + #define VF610_CLK_SAI2_DIV 107 121 + #define VF610_CLK_SAI2 108 122 + #define VF610_CLK_SAI3_SEL 109 123 + #define VF610_CLK_SAI3_EN 110 124 + #define VF610_CLK_SAI3_DIV 111 125 + #define VF610_CLK_SAI3 112 126 + #define VF610_CLK_USBC0 113 127 + #define VF610_CLK_USBC1 114 128 + #define VF610_CLK_QSPI0_SEL 115 129 + #define VF610_CLK_QSPI0_EN 116 130 + #define VF610_CLK_QSPI0_X4_DIV 117 131 + #define VF610_CLK_QSPI0_X2_DIV 118 132 + #define VF610_CLK_QSPI0_X1_DIV 119 133 + #define VF610_CLK_QSPI1_SEL 120 134 + #define VF610_CLK_QSPI1_EN 121 135 + #define VF610_CLK_QSPI1_X4_DIV 122 136 + #define VF610_CLK_QSPI1_X2_DIV 123 137 + #define VF610_CLK_QSPI1_X1_DIV 124 138 + #define VF610_CLK_QSPI0 125 139 + #define VF610_CLK_QSPI1 126 140 + #define VF610_CLK_NFC_SEL 127 141 + #define VF610_CLK_NFC_EN 128 142 + #define VF610_CLK_NFC_PRE_DIV 129 143 + #define VF610_CLK_NFC_FRAC_DIV 130 144 + #define VF610_CLK_NFC_INV 131 145 + #define VF610_CLK_NFC 132 146 + #define VF610_CLK_VADC_SEL 133 147 + #define VF610_CLK_VADC_EN 134 148 + #define VF610_CLK_VADC_DIV 135 149 + #define VF610_CLK_VADC_DIV_HALF 136 150 + #define VF610_CLK_VADC 137 151 + #define VF610_CLK_ADC0 138 152 + #define VF610_CLK_ADC1 139 153 + #define VF610_CLK_DAC0 140 154 + #define VF610_CLK_DAC1 141 155 + #define VF610_CLK_FLEXCAN0 142 156 + #define VF610_CLK_FLEXCAN1 143 157 + #define VF610_CLK_ASRC 144 158 + #define VF610_CLK_GPU_SEL 145 159 + #define VF610_CLK_GPU_EN 146 160 + #define VF610_CLK_GPU2D 147 161 + #define VF610_CLK_END 148 162 + 163 + #endif /* __DT_BINDINGS_CLOCK_VF610_H */
+4
include/linux/cpu.h
··· 175 175 176 176 extern void get_online_cpus(void); 177 177 extern void put_online_cpus(void); 178 + extern void cpu_hotplug_disable(void); 179 + extern void cpu_hotplug_enable(void); 178 180 #define hotcpu_notifier(fn, pri) cpu_notifier(fn, pri) 179 181 #define register_hotcpu_notifier(nb) register_cpu_notifier(nb) 180 182 #define unregister_hotcpu_notifier(nb) unregister_cpu_notifier(nb) ··· 200 198 201 199 #define get_online_cpus() do { } while (0) 202 200 #define put_online_cpus() do { } while (0) 201 + #define cpu_hotplug_disable() do { } while (0) 202 + #define cpu_hotplug_enable() do { } while (0) 203 203 #define hotcpu_notifier(fn, pri) do { (void)(fn); } while (0) 204 204 /* These aren't inline functions due to a GCC bug. */ 205 205 #define register_hotcpu_notifier(nb) ({ (void)(nb); 0; })
+1
include/linux/filter.h
··· 46 46 extern int sk_detach_filter(struct sock *sk); 47 47 extern int sk_chk_filter(struct sock_filter *filter, unsigned int flen); 48 48 extern int sk_get_filter(struct sock *sk, struct sock_filter __user *filter, unsigned len); 49 + extern void sk_decode_filter(struct sock_filter *filt, struct sock_filter *to); 49 50 50 51 #ifdef CONFIG_BPF_JIT 51 52 #include <stdarg.h>
+2 -2
include/linux/if_team.h
··· 249 249 return port; 250 250 cur = port; 251 251 list_for_each_entry_continue_rcu(cur, &team->port_list, list) 252 - if (team_port_txable(port)) 252 + if (team_port_txable(cur)) 253 253 return cur; 254 254 list_for_each_entry_rcu(cur, &team->port_list, list) { 255 255 if (cur == port) 256 256 break; 257 - if (team_port_txable(port)) 257 + if (team_port_txable(cur)) 258 258 return cur; 259 259 } 260 260 return NULL;
+4 -2
include/linux/math64.h
··· 6 6 7 7 #if BITS_PER_LONG == 64 8 8 9 - #define div64_long(x,y) div64_s64((x),(y)) 9 + #define div64_long(x, y) div64_s64((x), (y)) 10 + #define div64_ul(x, y) div64_u64((x), (y)) 10 11 11 12 /** 12 13 * div_u64_rem - unsigned 64bit divide with 32bit divisor with remainder ··· 48 47 49 48 #elif BITS_PER_LONG == 32 50 49 51 - #define div64_long(x,y) div_s64((x),(y)) 50 + #define div64_long(x, y) div_s64((x), (y)) 51 + #define div64_ul(x, y) div_u64((x), (y)) 52 52 53 53 #ifndef div_u64_rem 54 54 static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder)
+3
include/linux/scatterlist.h
··· 111 111 static inline void sg_set_buf(struct scatterlist *sg, const void *buf, 112 112 unsigned int buflen) 113 113 { 114 + #ifdef CONFIG_DEBUG_SG 115 + BUG_ON(!virt_addr_valid(buf)); 116 + #endif 114 117 sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf)); 115 118 } 116 119
+12 -7
include/linux/smp.h
··· 11 11 #include <linux/list.h> 12 12 #include <linux/cpumask.h> 13 13 #include <linux/init.h> 14 + #include <linux/irqflags.h> 14 15 15 16 extern void cpu_idle(void); 16 17 ··· 140 139 } 141 140 #define smp_call_function(func, info, wait) \ 142 141 (up_smp_call_function(func, info)) 143 - #define on_each_cpu(func,info,wait) \ 144 - ({ \ 145 - local_irq_disable(); \ 146 - func(info); \ 147 - local_irq_enable(); \ 148 - 0; \ 149 - }) 142 + 143 + static inline int on_each_cpu(smp_call_func_t func, void *info, int wait) 144 + { 145 + unsigned long flags; 146 + 147 + local_irq_save(flags); 148 + func(info); 149 + local_irq_restore(flags); 150 + return 0; 151 + } 152 + 150 153 /* 151 154 * Note we still need to test the mask even for UP 152 155 * because we actually can get an empty mask from
+3
include/linux/swapops.h
··· 137 137 138 138 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, 139 139 unsigned long address); 140 + extern void migration_entry_wait_huge(struct mm_struct *mm, pte_t *pte); 140 141 #else 141 142 142 143 #define make_migration_entry(page, write) swp_entry(0, 0) ··· 149 148 static inline void make_migration_entry_read(swp_entry_t *entryp) { } 150 149 static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, 151 150 unsigned long address) { } 151 + static inline void migration_entry_wait_huge(struct mm_struct *mm, 152 + pte_t *pte) { } 152 153 static inline int is_write_migration_entry(swp_entry_t entry) 153 154 { 154 155 return 0;
+2 -2
include/linux/syslog.h
··· 44 44 /* Return size of the log buffer */ 45 45 #define SYSLOG_ACTION_SIZE_BUFFER 10 46 46 47 - #define SYSLOG_FROM_CALL 0 48 - #define SYSLOG_FROM_FILE 1 47 + #define SYSLOG_FROM_READER 0 48 + #define SYSLOG_FROM_PROC 1 49 49 50 50 int do_syslog(int type, char __user *buf, int count, bool from_file); 51 51
+2 -2
include/linux/tracepoint.h
··· 145 145 TP_PROTO(data_proto), \ 146 146 TP_ARGS(data_args), \ 147 147 TP_CONDITION(cond), \ 148 - rcu_idle_exit(), \ 149 - rcu_idle_enter()); \ 148 + rcu_irq_enter(), \ 149 + rcu_irq_exit()); \ 150 150 } 151 151 #else 152 152 #define __DECLARE_TRACE_RCU(name, proto, args, cond, data_proto, data_args)
+1
include/net/bluetooth/hci_core.h
··· 1117 1117 int mgmt_control(struct sock *sk, struct msghdr *msg, size_t len); 1118 1118 int mgmt_index_added(struct hci_dev *hdev); 1119 1119 int mgmt_index_removed(struct hci_dev *hdev); 1120 + int mgmt_set_powered_failed(struct hci_dev *hdev, int err); 1120 1121 int mgmt_powered(struct hci_dev *hdev, u8 powered); 1121 1122 int mgmt_discoverable(struct hci_dev *hdev, u8 discoverable); 1122 1123 int mgmt_connectable(struct hci_dev *hdev, u8 connectable);
+1
include/net/bluetooth/mgmt.h
··· 42 42 #define MGMT_STATUS_NOT_POWERED 0x0f 43 43 #define MGMT_STATUS_CANCELLED 0x10 44 44 #define MGMT_STATUS_INVALID_INDEX 0x11 45 + #define MGMT_STATUS_RFKILLED 0x12 45 46 46 47 struct mgmt_hdr { 47 48 __le16 opcode;
+3 -3
include/net/ip_tunnels.h
··· 95 95 int ip_tunnel_init(struct net_device *dev); 96 96 void ip_tunnel_uninit(struct net_device *dev); 97 97 void ip_tunnel_dellink(struct net_device *dev, struct list_head *head); 98 - int __net_init ip_tunnel_init_net(struct net *net, int ip_tnl_net_id, 99 - struct rtnl_link_ops *ops, char *devname); 98 + int ip_tunnel_init_net(struct net *net, int ip_tnl_net_id, 99 + struct rtnl_link_ops *ops, char *devname); 100 100 101 - void __net_exit ip_tunnel_delete_net(struct ip_tunnel_net *itn); 101 + void ip_tunnel_delete_net(struct ip_tunnel_net *itn); 102 102 103 103 void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev, 104 104 const struct iphdr *tnl_params);
+2 -1
include/sound/soc-dapm.h
··· 450 450 snd_soc_dapm_aif_in, /* audio interface input */ 451 451 snd_soc_dapm_aif_out, /* audio interface output */ 452 452 snd_soc_dapm_siggen, /* signal generator */ 453 - snd_soc_dapm_dai, /* link to DAI structure */ 453 + snd_soc_dapm_dai_in, /* link to DAI structure */ 454 + snd_soc_dapm_dai_out, 454 455 snd_soc_dapm_dai_link, /* link between two DAI structures */ 455 456 }; 456 457
+1
include/uapi/linux/kvm.h
··· 783 783 #define KVM_REG_IA64 0x3000000000000000ULL 784 784 #define KVM_REG_ARM 0x4000000000000000ULL 785 785 #define KVM_REG_S390 0x5000000000000000ULL 786 + #define KVM_REG_MIPS 0x7000000000000000ULL 786 787 787 788 #define KVM_REG_SIZE_SHIFT 52 788 789 #define KVM_REG_SIZE_MASK 0x00f0000000000000ULL
+1
init/Kconfig
··· 431 431 config TREE_RCU 432 432 bool "Tree-based hierarchical RCU" 433 433 depends on !PREEMPT && SMP 434 + select IRQ_WORK 434 435 help 435 436 This option selects the RCU implementation that is 436 437 designed for very large SMP system with hundreds or
+1 -1
kernel/audit.c
··· 1056 1056 static void wait_for_auditd(unsigned long sleep_time) 1057 1057 { 1058 1058 DECLARE_WAITQUEUE(wait, current); 1059 - set_current_state(TASK_INTERRUPTIBLE); 1059 + set_current_state(TASK_UNINTERRUPTIBLE); 1060 1060 add_wait_queue(&audit_backlog_wait, &wait); 1061 1061 1062 1062 if (audit_backlog_limit &&
+1
kernel/audit_tree.c
··· 658 658 struct vfsmount *mnt; 659 659 int err; 660 660 661 + rule->tree = NULL; 661 662 list_for_each_entry(tree, &tree_list, list) { 662 663 if (!strcmp(seed->pathname, tree->pathname)) { 663 664 put_tree(seed);
+23 -32
kernel/cpu.c
··· 133 133 mutex_unlock(&cpu_hotplug.lock); 134 134 } 135 135 136 + /* 137 + * Wait for currently running CPU hotplug operations to complete (if any) and 138 + * disable future CPU hotplug (from sysfs). The 'cpu_add_remove_lock' protects 139 + * the 'cpu_hotplug_disabled' flag. The same lock is also acquired by the 140 + * hotplug path before performing hotplug operations. So acquiring that lock 141 + * guarantees mutual exclusion from any currently running hotplug operations. 142 + */ 143 + void cpu_hotplug_disable(void) 144 + { 145 + cpu_maps_update_begin(); 146 + cpu_hotplug_disabled = 1; 147 + cpu_maps_update_done(); 148 + } 149 + 150 + void cpu_hotplug_enable(void) 151 + { 152 + cpu_maps_update_begin(); 153 + cpu_hotplug_disabled = 0; 154 + cpu_maps_update_done(); 155 + } 156 + 136 157 #else /* #if CONFIG_HOTPLUG_CPU */ 137 158 static void cpu_hotplug_begin(void) {} 138 159 static void cpu_hotplug_done(void) {} ··· 562 541 core_initcall(alloc_frozen_cpus); 563 542 564 543 /* 565 - * Prevent regular CPU hotplug from racing with the freezer, by disabling CPU 566 - * hotplug when tasks are about to be frozen. Also, don't allow the freezer 567 - * to continue until any currently running CPU hotplug operation gets 568 - * completed. 569 - * To modify the 'cpu_hotplug_disabled' flag, we need to acquire the 570 - * 'cpu_add_remove_lock'. And this same lock is also taken by the regular 571 - * CPU hotplug path and released only after it is complete. Thus, we 572 - * (and hence the freezer) will block here until any currently running CPU 573 - * hotplug operation gets completed. 574 - */ 575 - void cpu_hotplug_disable_before_freeze(void) 576 - { 577 - cpu_maps_update_begin(); 578 - cpu_hotplug_disabled = 1; 579 - cpu_maps_update_done(); 580 - } 581 - 582 - 583 - /* 584 - * When tasks have been thawed, re-enable regular CPU hotplug (which had been 585 - * disabled while beginning to freeze tasks). 
586 - */ 587 - void cpu_hotplug_enable_after_thaw(void) 588 - { 589 - cpu_maps_update_begin(); 590 - cpu_hotplug_disabled = 0; 591 - cpu_maps_update_done(); 592 - } 593 - 594 - /* 595 544 * When callbacks for CPU hotplug notifications are being executed, we must 596 545 * ensure that the state of the system with respect to the tasks being frozen 597 546 * or not, as reported by the notification, remains unchanged *throughout the ··· 580 589 581 590 case PM_SUSPEND_PREPARE: 582 591 case PM_HIBERNATION_PREPARE: 583 - cpu_hotplug_disable_before_freeze(); 592 + cpu_hotplug_disable(); 584 593 break; 585 594 586 595 case PM_POST_SUSPEND: 587 596 case PM_POST_HIBERNATION: 588 - cpu_hotplug_enable_after_thaw(); 597 + cpu_hotplug_enable(); 589 598 break; 590 599 591 600 default:
+1 -1
kernel/exit.c
··· 649 649 * jobs, send them a SIGHUP and then a SIGCONT. (POSIX 3.2.2.2) 650 650 */ 651 651 forget_original_parent(tsk); 652 - exit_task_namespaces(tsk); 653 652 654 653 write_lock_irq(&tasklist_lock); 655 654 if (group_dead) ··· 794 795 exit_shm(tsk); 795 796 exit_files(tsk); 796 797 exit_fs(tsk); 798 + exit_task_namespaces(tsk); 797 799 exit_task_work(tsk); 798 800 check_stack_usage(); 799 801 exit_thread();
+50 -41
kernel/printk.c
··· 363 363 log_next_seq++; 364 364 } 365 365 366 + #ifdef CONFIG_SECURITY_DMESG_RESTRICT 367 + int dmesg_restrict = 1; 368 + #else 369 + int dmesg_restrict; 370 + #endif 371 + 372 + static int syslog_action_restricted(int type) 373 + { 374 + if (dmesg_restrict) 375 + return 1; 376 + /* 377 + * Unless restricted, we allow "read all" and "get buffer size" 378 + * for everybody. 379 + */ 380 + return type != SYSLOG_ACTION_READ_ALL && 381 + type != SYSLOG_ACTION_SIZE_BUFFER; 382 + } 383 + 384 + static int check_syslog_permissions(int type, bool from_file) 385 + { 386 + /* 387 + * If this is from /proc/kmsg and we've already opened it, then we've 388 + * already done the capabilities checks at open time. 389 + */ 390 + if (from_file && type != SYSLOG_ACTION_OPEN) 391 + return 0; 392 + 393 + if (syslog_action_restricted(type)) { 394 + if (capable(CAP_SYSLOG)) 395 + return 0; 396 + /* 397 + * For historical reasons, accept CAP_SYS_ADMIN too, with 398 + * a warning. 399 + */ 400 + if (capable(CAP_SYS_ADMIN)) { 401 + pr_warn_once("%s (%d): Attempt to access syslog with " 402 + "CAP_SYS_ADMIN but no CAP_SYSLOG " 403 + "(deprecated).\n", 404 + current->comm, task_pid_nr(current)); 405 + return 0; 406 + } 407 + return -EPERM; 408 + } 409 + return security_syslog(type); 410 + } 411 + 412 + 366 413 /* /dev/kmsg - userspace message inject/listen interface */ 367 414 struct devkmsg_user { 368 415 u64 seq; ··· 667 620 if ((file->f_flags & O_ACCMODE) == O_WRONLY) 668 621 return 0; 669 622 670 - err = security_syslog(SYSLOG_ACTION_READ_ALL); 623 + err = check_syslog_permissions(SYSLOG_ACTION_READ_ALL, 624 + SYSLOG_FROM_READER); 671 625 if (err) 672 626 return err; 673 627 ··· 860 812 { 861 813 } 862 814 #endif 863 - 864 - #ifdef CONFIG_SECURITY_DMESG_RESTRICT 865 - int dmesg_restrict = 1; 866 - #else 867 - int dmesg_restrict; 868 - #endif 869 - 870 - static int syslog_action_restricted(int type) 871 - { 872 - if (dmesg_restrict) 873 - return 1; 874 - /* Unless restricted, we allow 
"read all" and "get buffer size" for everybody */ 875 - return type != SYSLOG_ACTION_READ_ALL && type != SYSLOG_ACTION_SIZE_BUFFER; 876 - } 877 - 878 - static int check_syslog_permissions(int type, bool from_file) 879 - { 880 - /* 881 - * If this is from /proc/kmsg and we've already opened it, then we've 882 - * already done the capabilities checks at open time. 883 - */ 884 - if (from_file && type != SYSLOG_ACTION_OPEN) 885 - return 0; 886 - 887 - if (syslog_action_restricted(type)) { 888 - if (capable(CAP_SYSLOG)) 889 - return 0; 890 - /* For historical reasons, accept CAP_SYS_ADMIN too, with a warning */ 891 - if (capable(CAP_SYS_ADMIN)) { 892 - printk_once(KERN_WARNING "%s (%d): " 893 - "Attempt to access syslog with CAP_SYS_ADMIN " 894 - "but no CAP_SYSLOG (deprecated).\n", 895 - current->comm, task_pid_nr(current)); 896 - return 0; 897 - } 898 - return -EPERM; 899 - } 900 - return 0; 901 - } 902 815 903 816 #if defined(CONFIG_PRINTK_TIME) 904 817 static bool printk_time = 1; ··· 1258 1249 1259 1250 SYSCALL_DEFINE3(syslog, int, type, char __user *, buf, int, len) 1260 1251 { 1261 - return do_syslog(type, buf, len, SYSLOG_FROM_CALL); 1252 + return do_syslog(type, buf, len, SYSLOG_FROM_READER); 1262 1253 } 1263 1254 1264 1255 /*
+17 -4
kernel/rcutree.c
··· 1451 1451 rnp->grphi, rnp->qsmask); 1452 1452 raw_spin_unlock_irq(&rnp->lock); 1453 1453 #ifdef CONFIG_PROVE_RCU_DELAY 1454 - if ((prandom_u32() % (rcu_num_nodes * 8)) == 0 && 1454 + if ((prandom_u32() % (rcu_num_nodes + 1)) == 0 && 1455 1455 system_state == SYSTEM_RUNNING) 1456 - schedule_timeout_uninterruptible(2); 1456 + udelay(200); 1457 1457 #endif /* #ifdef CONFIG_PROVE_RCU_DELAY */ 1458 1458 cond_resched(); 1459 1459 } ··· 1613 1613 } 1614 1614 } 1615 1615 1616 + static void rsp_wakeup(struct irq_work *work) 1617 + { 1618 + struct rcu_state *rsp = container_of(work, struct rcu_state, wakeup_work); 1619 + 1620 + /* Wake up rcu_gp_kthread() to start the grace period. */ 1621 + wake_up(&rsp->gp_wq); 1622 + } 1623 + 1616 1624 /* 1617 1625 * Start a new RCU grace period if warranted, re-initializing the hierarchy 1618 1626 * in preparation for detecting the next grace period. The caller must hold ··· 1645 1637 } 1646 1638 rsp->gp_flags = RCU_GP_FLAG_INIT; 1647 1639 1648 - /* Wake up rcu_gp_kthread() to start the grace period. */ 1649 - wake_up(&rsp->gp_wq); 1640 + /* 1641 + * We can't do wakeups while holding the rnp->lock, as that 1642 + * could cause possible deadlocks with the rq->lock. Defer 1643 + * the wakeup to interrupt context. 1644 + */ 1645 + irq_work_queue(&rsp->wakeup_work); 1650 1646 } 1651 1647 1652 1648 /* ··· 3247 3235 3248 3236 rsp->rda = rda; 3249 3237 init_waitqueue_head(&rsp->gp_wq); 3238 + init_irq_work(&rsp->wakeup_work, rsp_wakeup); 3250 3239 rnp = rsp->level[rcu_num_lvls - 1]; 3251 3240 for_each_possible_cpu(i) { 3252 3241 while (i > rnp->grphi)
+2
kernel/rcutree.h
··· 27 27 #include <linux/threads.h> 28 28 #include <linux/cpumask.h> 29 29 #include <linux/seqlock.h> 30 + #include <linux/irq_work.h> 30 31 31 32 /* 32 33 * Define shape of hierarchy based on NR_CPUS, CONFIG_RCU_FANOUT, and ··· 443 442 char *name; /* Name of structure. */ 444 443 char abbr; /* Abbreviated name. */ 445 444 struct list_head flavors; /* List of RCU flavors. */ 445 + struct irq_work wakeup_work; /* Postponed wakeups */ 446 446 }; 447 447 448 448 /* Values for rcu_state structure's gp_flags field. */
+10 -3
kernel/softirq.c
··· 195 195 EXPORT_SYMBOL(local_bh_enable_ip); 196 196 197 197 /* 198 - * We restart softirq processing for at most 2 ms, 199 - * and if need_resched() is not set. 198 + * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times, 199 + * but break the loop if need_resched() is set or after 2 ms. 200 + * The MAX_SOFTIRQ_TIME provides a nice upper bound in most cases, but in 201 + * certain cases, such as stop_machine(), jiffies may cease to 202 + * increment and so we need the MAX_SOFTIRQ_RESTART limit as 203 + * well to make sure we eventually return from this method. 200 204 * 201 205 * These limits have been established via experimentation. 202 206 * The two things to balance are latency against fairness - ··· 208 204 * should not be able to lock up the box. 209 205 */ 210 206 #define MAX_SOFTIRQ_TIME msecs_to_jiffies(2) 207 + #define MAX_SOFTIRQ_RESTART 10 211 208 212 209 asmlinkage void __do_softirq(void) 213 210 { ··· 217 212 unsigned long end = jiffies + MAX_SOFTIRQ_TIME; 218 213 int cpu; 219 214 unsigned long old_flags = current->flags; 215 + int max_restart = MAX_SOFTIRQ_RESTART; 220 216 221 217 /* 222 218 * Mask out PF_MEMALLOC as current task context is borrowed for the ··· 271 265 272 266 pending = local_softirq_pending(); 273 267 if (pending) { 274 - if (time_before(jiffies, end) && !need_resched()) 268 + if (time_before(jiffies, end) && !need_resched() && 269 + --max_restart) 275 270 goto restart; 276 271 277 272 wakeup_softirqd();
+26 -3
kernel/sys.c
··· 362 362 } 363 363 EXPORT_SYMBOL(unregister_reboot_notifier); 364 364 365 + /* Add backwards compatibility for stable trees. */ 366 + #ifndef PF_NO_SETAFFINITY 367 + #define PF_NO_SETAFFINITY PF_THREAD_BOUND 368 + #endif 369 + 370 + static void migrate_to_reboot_cpu(void) 371 + { 372 + /* The boot cpu is always logical cpu 0 */ 373 + int cpu = 0; 374 + 375 + cpu_hotplug_disable(); 376 + 377 + /* Make certain the cpu I'm about to reboot on is online */ 378 + if (!cpu_online(cpu)) 379 + cpu = cpumask_first(cpu_online_mask); 380 + 381 + /* Prevent races with other tasks migrating this task */ 382 + current->flags |= PF_NO_SETAFFINITY; 383 + 384 + /* Make certain I only run on the appropriate processor */ 385 + set_cpus_allowed_ptr(current, cpumask_of(cpu)); 386 + } 387 + 365 388 /** 366 389 * kernel_restart - reboot the system 367 390 * @cmd: pointer to buffer containing command to execute for restart ··· 396 373 void kernel_restart(char *cmd) 397 374 { 398 375 kernel_restart_prepare(cmd); 399 - disable_nonboot_cpus(); 376 + migrate_to_reboot_cpu(); 400 377 syscore_shutdown(); 401 378 if (!cmd) 402 379 printk(KERN_EMERG "Restarting system.\n"); ··· 423 400 void kernel_halt(void) 424 401 { 425 402 kernel_shutdown_prepare(SYSTEM_HALT); 426 - disable_nonboot_cpus(); 403 + migrate_to_reboot_cpu(); 427 404 syscore_shutdown(); 428 405 printk(KERN_EMERG "System halted.\n"); 429 406 kmsg_dump(KMSG_DUMP_HALT); ··· 442 419 kernel_shutdown_prepare(SYSTEM_POWER_OFF); 443 420 if (pm_power_off_prepare) 444 421 pm_power_off_prepare(); 445 - disable_nonboot_cpus(); 422 + migrate_to_reboot_cpu(); 446 423 syscore_shutdown(); 447 424 printk(KERN_EMERG "Power down.\n"); 448 425 kmsg_dump(KMSG_DUMP_POWEROFF);
+3 -5
kernel/trace/trace.c
··· 652 652 ARCH_TRACE_CLOCKS 653 653 }; 654 654 655 - int trace_clock_id; 656 - 657 655 /* 658 656 * trace_parser_get_init - gets the buffer for trace parser 659 657 */ ··· 2824 2826 iter->iter_flags |= TRACE_FILE_ANNOTATE; 2825 2827 2826 2828 /* Output in nanoseconds only if we are using a clock in nanoseconds. */ 2827 - if (trace_clocks[trace_clock_id].in_ns) 2829 + if (trace_clocks[tr->clock_id].in_ns) 2828 2830 iter->iter_flags |= TRACE_FILE_TIME_IN_NS; 2829 2831 2830 2832 /* stop the trace while dumping if we are not opening "snapshot" */ ··· 3823 3825 iter->iter_flags |= TRACE_FILE_LAT_FMT; 3824 3826 3825 3827 /* Output in nanoseconds only if we are using a clock in nanoseconds. */ 3826 - if (trace_clocks[trace_clock_id].in_ns) 3828 + if (trace_clocks[tr->clock_id].in_ns) 3827 3829 iter->iter_flags |= TRACE_FILE_TIME_IN_NS; 3828 3830 3829 3831 iter->cpu_file = tc->cpu; ··· 5093 5095 cnt = ring_buffer_bytes_cpu(trace_buf->buffer, cpu); 5094 5096 trace_seq_printf(s, "bytes: %ld\n", cnt); 5095 5097 5096 - if (trace_clocks[trace_clock_id].in_ns) { 5098 + if (trace_clocks[tr->clock_id].in_ns) { 5097 5099 /* local or global for trace_clock */ 5098 5100 t = ns2usecs(ring_buffer_oldest_event_ts(trace_buf->buffer, cpu)); 5099 5101 usec_rem = do_div(t, USEC_PER_SEC);
-2
kernel/trace/trace.h
··· 700 700 701 701 extern unsigned long trace_flags; 702 702 703 - extern int trace_clock_id; 704 - 705 703 /* Standard output formatting function used for function return traces */ 706 704 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 707 705
+1 -1
lib/mpi/mpicoder.c
··· 37 37 mpi_limb_t a; 38 38 MPI val = NULL; 39 39 40 - while (nbytes >= 0 && buffer[0] == 0) { 40 + while (nbytes > 0 && buffer[0] == 0) { 41 41 buffer++; 42 42 nbytes--; 43 43 }
+1 -1
mm/frontswap.c
··· 319 319 return; 320 320 frontswap_ops->invalidate_area(type); 321 321 atomic_set(&sis->frontswap_pages, 0); 322 - memset(sis->frontswap_map, 0, sis->max / sizeof(long)); 322 + bitmap_zero(sis->frontswap_map, sis->max); 323 323 } 324 324 clear_bit(type, need_init); 325 325 }
+1 -1
mm/hugetlb.c
··· 2839 2839 if (ptep) { 2840 2840 entry = huge_ptep_get(ptep); 2841 2841 if (unlikely(is_hugetlb_entry_migration(entry))) { 2842 - migration_entry_wait(mm, (pmd_t *)ptep, address); 2842 + migration_entry_wait_huge(mm, ptep); 2843 2843 return 0; 2844 2844 } else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) 2845 2845 return VM_FAULT_HWPOISON_LARGE |
+5 -9
mm/memcontrol.c
··· 1199 1199 1200 1200 mz = mem_cgroup_zoneinfo(root, nid, zid); 1201 1201 iter = &mz->reclaim_iter[reclaim->priority]; 1202 - last_visited = iter->last_visited; 1203 1202 if (prev && reclaim->generation != iter->generation) { 1204 1203 iter->last_visited = NULL; 1205 1204 goto out_unlock; ··· 1217 1218 * is alive. 1218 1219 */ 1219 1220 dead_count = atomic_read(&root->dead_count); 1220 - smp_rmb(); 1221 - last_visited = iter->last_visited; 1222 - if (last_visited) { 1223 - if ((dead_count != iter->last_dead_count) || 1224 - !css_tryget(&last_visited->css)) { 1221 + if (dead_count == iter->last_dead_count) { 1222 + smp_rmb(); 1223 + last_visited = iter->last_visited; 1224 + if (last_visited && 1225 + !css_tryget(&last_visited->css)) 1225 1226 last_visited = NULL; 1226 - } 1227 1227 } 1228 1228 } 1229 1229 ··· 3139 3141 return -ENOMEM; 3140 3142 } 3141 3143 3142 - INIT_WORK(&s->memcg_params->destroy, 3143 - kmem_cache_destroy_work_func); 3144 3144 s->memcg_params->is_root_cache = true; 3145 3145 3146 3146 /*
+18 -5
mm/migrate.c
··· 200 200 * get to the page and wait until migration is finished. 201 201 * When we return from this function the fault will be retried. 202 202 */ 203 - void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, 204 - unsigned long address) 203 + static void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep, 204 + spinlock_t *ptl) 205 205 { 206 - pte_t *ptep, pte; 207 - spinlock_t *ptl; 206 + pte_t pte; 208 207 swp_entry_t entry; 209 208 struct page *page; 210 209 211 - ptep = pte_offset_map_lock(mm, pmd, address, &ptl); 210 + spin_lock(ptl); 212 211 pte = *ptep; 213 212 if (!is_swap_pte(pte)) 214 213 goto out; ··· 233 234 return; 234 235 out: 235 236 pte_unmap_unlock(ptep, ptl); 237 + } 238 + 239 + void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd, 240 + unsigned long address) 241 + { 242 + spinlock_t *ptl = pte_lockptr(mm, pmd); 243 + pte_t *ptep = pte_offset_map(pmd, address); 244 + __migration_entry_wait(mm, ptep, ptl); 245 + } 246 + 247 + void migration_entry_wait_huge(struct mm_struct *mm, pte_t *pte) 248 + { 249 + spinlock_t *ptl = &(mm)->page_table_lock; 250 + __migration_entry_wait(mm, pte, ptl); 236 251 } 237 252 238 253 #ifdef CONFIG_BLOCK
+4 -2
mm/page_alloc.c
··· 1628 1628 long min = mark; 1629 1629 long lowmem_reserve = z->lowmem_reserve[classzone_idx]; 1630 1630 int o; 1631 + long free_cma = 0; 1631 1632 1632 1633 free_pages -= (1 << order) - 1; 1633 1634 if (alloc_flags & ALLOC_HIGH) ··· 1638 1637 #ifdef CONFIG_CMA 1639 1638 /* If allocation can't use CMA areas don't use free CMA pages */ 1640 1639 if (!(alloc_flags & ALLOC_CMA)) 1641 - free_pages -= zone_page_state(z, NR_FREE_CMA_PAGES); 1640 + free_cma = zone_page_state(z, NR_FREE_CMA_PAGES); 1642 1641 #endif 1643 - if (free_pages <= min + lowmem_reserve) 1642 + 1643 + if (free_pages - free_cma <= min + lowmem_reserve) 1644 1644 return false; 1645 1645 for (o = 0; o < order; o++) { 1646 1646 /* At the next order, this order's pages become unavailable */
+17 -1
mm/swap_state.c
··· 336 336 * Swap entry may have been freed since our caller observed it. 337 337 */ 338 338 err = swapcache_prepare(entry); 339 - if (err == -EEXIST) { /* seems racy */ 339 + if (err == -EEXIST) { 340 340 radix_tree_preload_end(); 341 + /* 342 + * We might race against get_swap_page() and stumble 343 + * across a SWAP_HAS_CACHE swap_map entry whose page 344 + * has not been brought into the swapcache yet, while 345 + * the other end is scheduled away waiting on discard 346 + * I/O completion at scan_swap_map(). 347 + * 348 + * In order to avoid turning this transitory state 349 + * into a permanent loop around this -EEXIST case 350 + * if !CONFIG_PREEMPT and the I/O completion happens 351 + * to be waiting on the CPU waitqueue where we are now 352 + * busy looping, we just conditionally invoke the 353 + * scheduler here, if there are some more important 354 + * tasks to run. 355 + */ 356 + cond_resched(); 341 357 continue; 342 358 } 343 359 if (err) { /* swp entry is obsolete ? */
+1 -1
mm/swapfile.c
··· 2116 2116 } 2117 2117 /* frontswap enabled? set up bit-per-page map for frontswap */ 2118 2118 if (frontswap_enabled) 2119 - frontswap_map = vzalloc(maxpages / sizeof(long)); 2119 + frontswap_map = vzalloc(BITS_TO_LONGS(maxpages) * sizeof(long)); 2120 2120 2121 2121 if (p->bdev) { 2122 2122 if (blk_queue_nonrot(bdev_get_queue(p->bdev))) {
+18 -37
net/9p/client.c
··· 562 562 563 563 if (!p9_is_proto_dotl(c)) { 564 564 /* Error is reported in string format */ 565 - uint16_t len; 566 - /* 7 = header size for RERROR, 2 is the size of string len; */ 567 - int inline_len = in_hdrlen - (7 + 2); 565 + int len; 566 + /* 7 = header size for RERROR; */ 567 + int inline_len = in_hdrlen - 7; 568 568 569 - /* Read the size of error string */ 570 - err = p9pdu_readf(req->rc, c->proto_version, "w", &len); 571 - if (err) 572 - goto out_err; 573 - 574 - ename = kmalloc(len + 1, GFP_NOFS); 575 - if (!ename) { 576 - err = -ENOMEM; 569 + len = req->rc->size - req->rc->offset; 570 + if (len > (P9_ZC_HDR_SZ - 7)) { 571 + err = -EFAULT; 577 572 goto out_err; 578 573 } 579 - if (len <= inline_len) { 580 - /* We have error in protocol buffer itself */ 581 - if (pdu_read(req->rc, ename, len)) { 582 - err = -EFAULT; 583 - goto out_free; 584 574 585 - } 586 - } else { 587 - /* 588 - * Part of the data is in user space buffer. 589 - */ 590 - if (pdu_read(req->rc, ename, inline_len)) { 591 - err = -EFAULT; 592 - goto out_free; 593 - 594 - } 575 + ename = &req->rc->sdata[req->rc->offset]; 576 + if (len > inline_len) { 577 + /* We have error in external buffer */ 595 578 if (kern_buf) { 596 579 memcpy(ename + inline_len, uidata, 597 580 len - inline_len); ··· 583 600 uidata, len - inline_len); 584 601 if (err) { 585 602 err = -EFAULT; 586 - goto out_free; 603 + goto out_err; 587 604 } 588 605 } 589 606 } 590 - ename[len] = 0; 591 - if (p9_is_proto_dotu(c)) { 592 - /* For dotu we also have error code */ 593 - err = p9pdu_readf(req->rc, 594 - c->proto_version, "d", &ecode); 595 - if (err) 596 - goto out_free; 607 + ename = NULL; 608 + err = p9pdu_readf(req->rc, c->proto_version, "s?d", 609 + &ename, &ecode); 610 + if (err) 611 + goto out_err; 612 + 613 + if (p9_is_proto_dotu(c)) 597 614 err = -ecode; 598 - } 615 + 599 616 if (!err || !IS_ERR_VALUE(err)) { 600 617 err = p9_errstr2errno(ename, strlen(ename)); 601 618 ··· 611 628 } 612 629 return err; 613 630 
614 - out_free: 615 - kfree(ename); 616 631 out_err: 617 632 p9_debug(P9_DEBUG_ERROR, "couldn't parse error%d\n", err); 618 633 return err;
+55 -31
net/batman-adv/bat_iv_ogm.c
··· 29 29 #include "bat_algo.h" 30 30 #include "network-coding.h" 31 31 32 + /** 33 + * batadv_dup_status - duplicate status 34 + * @BATADV_NO_DUP: the packet is a duplicate 35 + * @BATADV_ORIG_DUP: OGM is a duplicate in the originator (but not for the 36 + * neighbor) 37 + * @BATADV_NEIGH_DUP: OGM is a duplicate for the neighbor 38 + * @BATADV_PROTECTED: originator is currently protected (after reboot) 39 + */ 40 + enum batadv_dup_status { 41 + BATADV_NO_DUP = 0, 42 + BATADV_ORIG_DUP, 43 + BATADV_NEIGH_DUP, 44 + BATADV_PROTECTED, 45 + }; 46 + 32 47 static struct batadv_neigh_node * 33 48 batadv_iv_ogm_neigh_new(struct batadv_hard_iface *hard_iface, 34 49 const uint8_t *neigh_addr, ··· 665 650 const struct batadv_ogm_packet *batadv_ogm_packet, 666 651 struct batadv_hard_iface *if_incoming, 667 652 const unsigned char *tt_buff, 668 - int is_duplicate) 653 + enum batadv_dup_status dup_status) 669 654 { 670 655 struct batadv_neigh_node *neigh_node = NULL, *tmp_neigh_node = NULL; 671 656 struct batadv_neigh_node *router = NULL; ··· 691 676 continue; 692 677 } 693 678 694 - if (is_duplicate) 679 + if (dup_status != BATADV_NO_DUP) 695 680 continue; 696 681 697 682 spin_lock_bh(&tmp_neigh_node->lq_update_lock); ··· 733 718 neigh_node->tq_avg = batadv_ring_buffer_avg(neigh_node->tq_recv); 734 719 spin_unlock_bh(&neigh_node->lq_update_lock); 735 720 736 - if (!is_duplicate) { 721 + if (dup_status == BATADV_NO_DUP) { 737 722 orig_node->last_ttl = batadv_ogm_packet->header.ttl; 738 723 neigh_node->last_ttl = batadv_ogm_packet->header.ttl; 739 724 } ··· 917 902 return ret; 918 903 } 919 904 920 - /* processes a batman packet for all interfaces, adjusts the sequence number and 921 - * finds out whether it is a duplicate. 922 - * returns: 923 - * 1 the packet is a duplicate 924 - * 0 the packet has not yet been received 925 - * -1 the packet is old and has been received while the seqno window 926 - * was protected. Caller should drop it. 
905 + /** 906 + * batadv_iv_ogm_update_seqnos - process a batman packet for all interfaces, 907 + * adjust the sequence number and find out whether it is a duplicate 908 + * @ethhdr: ethernet header of the packet 909 + * @batadv_ogm_packet: OGM packet to be considered 910 + * @if_incoming: interface on which the OGM packet was received 911 + * 912 + * Returns duplicate status as enum batadv_dup_status 927 913 */ 928 - static int 914 + static enum batadv_dup_status 929 915 batadv_iv_ogm_update_seqnos(const struct ethhdr *ethhdr, 930 916 const struct batadv_ogm_packet *batadv_ogm_packet, 931 917 const struct batadv_hard_iface *if_incoming) ··· 934 918 struct batadv_priv *bat_priv = netdev_priv(if_incoming->soft_iface); 935 919 struct batadv_orig_node *orig_node; 936 920 struct batadv_neigh_node *tmp_neigh_node; 937 - int is_duplicate = 0; 921 + int is_dup; 938 922 int32_t seq_diff; 939 923 int need_update = 0; 940 - int set_mark, ret = -1; 924 + int set_mark; 925 + enum batadv_dup_status ret = BATADV_NO_DUP; 941 926 uint32_t seqno = ntohl(batadv_ogm_packet->seqno); 942 927 uint8_t *neigh_addr; 943 928 uint8_t packet_count; 944 929 945 930 orig_node = batadv_get_orig_node(bat_priv, batadv_ogm_packet->orig); 946 931 if (!orig_node) 947 - return 0; 932 + return BATADV_NO_DUP; 948 933 949 934 spin_lock_bh(&orig_node->ogm_cnt_lock); 950 935 seq_diff = seqno - orig_node->last_real_seqno; ··· 953 936 /* signalize caller that the packet is to be dropped. 
*/ 954 937 if (!hlist_empty(&orig_node->neigh_list) && 955 938 batadv_window_protected(bat_priv, seq_diff, 956 - &orig_node->batman_seqno_reset)) 939 + &orig_node->batman_seqno_reset)) { 940 + ret = BATADV_PROTECTED; 957 941 goto out; 942 + } 958 943 959 944 rcu_read_lock(); 960 945 hlist_for_each_entry_rcu(tmp_neigh_node, 961 946 &orig_node->neigh_list, list) { 962 - is_duplicate |= batadv_test_bit(tmp_neigh_node->real_bits, 963 - orig_node->last_real_seqno, 964 - seqno); 965 - 966 947 neigh_addr = tmp_neigh_node->addr; 948 + is_dup = batadv_test_bit(tmp_neigh_node->real_bits, 949 + orig_node->last_real_seqno, 950 + seqno); 951 + 967 952 if (batadv_compare_eth(neigh_addr, ethhdr->h_source) && 968 - tmp_neigh_node->if_incoming == if_incoming) 953 + tmp_neigh_node->if_incoming == if_incoming) { 969 954 set_mark = 1; 970 - else 955 + if (is_dup) 956 + ret = BATADV_NEIGH_DUP; 957 + } else { 971 958 set_mark = 0; 959 + if (is_dup && (ret != BATADV_NEIGH_DUP)) 960 + ret = BATADV_ORIG_DUP; 961 + } 972 962 973 963 /* if the window moved, set the update flag. 
*/ 974 964 need_update |= batadv_bit_get_packet(bat_priv, ··· 994 970 orig_node->last_real_seqno, seqno); 995 971 orig_node->last_real_seqno = seqno; 996 972 } 997 - 998 - ret = is_duplicate; 999 973 1000 974 out: 1001 975 spin_unlock_bh(&orig_node->ogm_cnt_lock); ··· 1016 994 int is_broadcast = 0, is_bidirect; 1017 995 bool is_single_hop_neigh = false; 1018 996 bool is_from_best_next_hop = false; 1019 - int is_duplicate, sameseq, simlar_ttl; 997 + int sameseq, similar_ttl; 998 + enum batadv_dup_status dup_status; 1020 999 uint32_t if_incoming_seqno; 1021 1000 uint8_t *prev_sender; 1022 1001 ··· 1161 1138 if (!orig_node) 1162 1139 return; 1163 1140 1164 - is_duplicate = batadv_iv_ogm_update_seqnos(ethhdr, batadv_ogm_packet, 1165 - if_incoming); 1141 + dup_status = batadv_iv_ogm_update_seqnos(ethhdr, batadv_ogm_packet, 1142 + if_incoming); 1166 1143 1167 - if (is_duplicate == -1) { 1144 + if (dup_status == BATADV_PROTECTED) { 1168 1145 batadv_dbg(BATADV_DBG_BATMAN, bat_priv, 1169 1146 "Drop packet: packet within seqno protection time (sender: %pM)\n", 1170 1147 ethhdr->h_source); ··· 1234 1211 * seqno and similar ttl as the non-duplicate 1235 1212 */ 1236 1213 sameseq = orig_node->last_real_seqno == ntohl(batadv_ogm_packet->seqno); 1237 - simlar_ttl = orig_node->last_ttl - 3 <= batadv_ogm_packet->header.ttl; 1238 - if (is_bidirect && (!is_duplicate || (sameseq && simlar_ttl))) 1214 + similar_ttl = orig_node->last_ttl - 3 <= batadv_ogm_packet->header.ttl; 1215 + if (is_bidirect && ((dup_status == BATADV_NO_DUP) || 1216 + (sameseq && similar_ttl))) 1239 1217 batadv_iv_ogm_orig_update(bat_priv, orig_node, ethhdr, 1240 1218 batadv_ogm_packet, if_incoming, 1241 - tt_buff, is_duplicate); 1219 + tt_buff, dup_status); 1242 1220 1243 1221 /* is single hop (direct) neighbor */ 1244 1222 if (is_single_hop_neigh) { ··· 1260 1236 goto out_neigh; 1261 1237 } 1262 1238 1263 - if (is_duplicate) { 1239 + if (dup_status == BATADV_NEIGH_DUP) { 1264 1240 batadv_dbg(BATADV_DBG_BATMAN, 
bat_priv, 1265 1241 "Drop packet: duplicate packet received\n"); 1266 1242 goto out_neigh;
+4
net/batman-adv/bridge_loop_avoidance.c
··· 1067 1067 group = htons(crc16(0, primary_if->net_dev->dev_addr, ETH_ALEN)); 1068 1068 bat_priv->bla.claim_dest.group = group; 1069 1069 1070 + /* purge everything when bridge loop avoidance is turned off */ 1071 + if (!atomic_read(&bat_priv->bridge_loop_avoidance)) 1072 + oldif = NULL; 1073 + 1070 1074 if (!oldif) { 1071 1075 batadv_bla_purge_claims(bat_priv, NULL, 1); 1072 1076 batadv_bla_purge_backbone_gw(bat_priv, 1);
+1 -4
net/batman-adv/sysfs.c
··· 582 582 (strncmp(hard_iface->soft_iface->name, buff, IFNAMSIZ) == 0)) 583 583 goto out; 584 584 585 - if (!rtnl_trylock()) { 586 - ret = -ERESTARTSYS; 587 - goto out; 588 - } 585 + rtnl_lock(); 589 586 590 587 if (status_tmp == BATADV_IF_NOT_IN_USE) { 591 588 batadv_hardif_disable_interface(hard_iface,
+5 -1
net/bluetooth/hci_core.c
··· 1555 1555 static void hci_power_on(struct work_struct *work) 1556 1556 { 1557 1557 struct hci_dev *hdev = container_of(work, struct hci_dev, power_on); 1558 + int err; 1558 1559 1559 1560 BT_DBG("%s", hdev->name); 1560 1561 1561 - if (hci_dev_open(hdev->id) < 0) 1562 + err = hci_dev_open(hdev->id); 1563 + if (err < 0) { 1564 + mgmt_set_powered_failed(hdev, err); 1562 1565 return; 1566 + } 1563 1567 1564 1568 if (test_bit(HCI_AUTO_OFF, &hdev->dev_flags)) 1565 1569 queue_delayed_work(hdev->req_workqueue, &hdev->power_off,
+52 -18
net/bluetooth/l2cap_core.c
··· 3677 3677 } 3678 3678 3679 3679 static inline int l2cap_command_rej(struct l2cap_conn *conn, 3680 - struct l2cap_cmd_hdr *cmd, u8 *data) 3680 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 3681 + u8 *data) 3681 3682 { 3682 3683 struct l2cap_cmd_rej_unk *rej = (struct l2cap_cmd_rej_unk *) data; 3684 + 3685 + if (cmd_len < sizeof(*rej)) 3686 + return -EPROTO; 3683 3687 3684 3688 if (rej->reason != L2CAP_REJ_NOT_UNDERSTOOD) 3685 3689 return 0; ··· 3833 3829 } 3834 3830 3835 3831 static int l2cap_connect_req(struct l2cap_conn *conn, 3836 - struct l2cap_cmd_hdr *cmd, u8 *data) 3832 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, u8 *data) 3837 3833 { 3838 3834 struct hci_dev *hdev = conn->hcon->hdev; 3839 3835 struct hci_conn *hcon = conn->hcon; 3836 + 3837 + if (cmd_len < sizeof(struct l2cap_conn_req)) 3838 + return -EPROTO; 3840 3839 3841 3840 hci_dev_lock(hdev); 3842 3841 if (test_bit(HCI_MGMT, &hdev->dev_flags) && ··· 3854 3847 } 3855 3848 3856 3849 static int l2cap_connect_create_rsp(struct l2cap_conn *conn, 3857 - struct l2cap_cmd_hdr *cmd, u8 *data) 3850 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 3851 + u8 *data) 3858 3852 { 3859 3853 struct l2cap_conn_rsp *rsp = (struct l2cap_conn_rsp *) data; 3860 3854 u16 scid, dcid, result, status; 3861 3855 struct l2cap_chan *chan; 3862 3856 u8 req[128]; 3863 3857 int err; 3858 + 3859 + if (cmd_len < sizeof(*rsp)) 3860 + return -EPROTO; 3864 3861 3865 3862 scid = __le16_to_cpu(rsp->scid); 3866 3863 dcid = __le16_to_cpu(rsp->dcid); ··· 3963 3952 struct l2cap_chan *chan; 3964 3953 int len, err = 0; 3965 3954 3955 + if (cmd_len < sizeof(*req)) 3956 + return -EPROTO; 3957 + 3966 3958 dcid = __le16_to_cpu(req->dcid); 3967 3959 flags = __le16_to_cpu(req->flags); 3968 3960 ··· 3989 3975 3990 3976 /* Reject if config buffer is too small. 
*/ 3991 3977 len = cmd_len - sizeof(*req); 3992 - if (len < 0 || chan->conf_len + len > sizeof(chan->conf_req)) { 3978 + if (chan->conf_len + len > sizeof(chan->conf_req)) { 3993 3979 l2cap_send_cmd(conn, cmd->ident, L2CAP_CONF_RSP, 3994 3980 l2cap_build_conf_rsp(chan, rsp, 3995 3981 L2CAP_CONF_REJECT, flags), rsp); ··· 4067 4053 } 4068 4054 4069 4055 static inline int l2cap_config_rsp(struct l2cap_conn *conn, 4070 - struct l2cap_cmd_hdr *cmd, u8 *data) 4056 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4057 + u8 *data) 4071 4058 { 4072 4059 struct l2cap_conf_rsp *rsp = (struct l2cap_conf_rsp *)data; 4073 4060 u16 scid, flags, result; 4074 4061 struct l2cap_chan *chan; 4075 - int len = le16_to_cpu(cmd->len) - sizeof(*rsp); 4062 + int len = cmd_len - sizeof(*rsp); 4076 4063 int err = 0; 4064 + 4065 + if (cmd_len < sizeof(*rsp)) 4066 + return -EPROTO; 4077 4067 4078 4068 scid = __le16_to_cpu(rsp->scid); 4079 4069 flags = __le16_to_cpu(rsp->flags); ··· 4179 4161 } 4180 4162 4181 4163 static inline int l2cap_disconnect_req(struct l2cap_conn *conn, 4182 - struct l2cap_cmd_hdr *cmd, u8 *data) 4164 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4165 + u8 *data) 4183 4166 { 4184 4167 struct l2cap_disconn_req *req = (struct l2cap_disconn_req *) data; 4185 4168 struct l2cap_disconn_rsp rsp; 4186 4169 u16 dcid, scid; 4187 4170 struct l2cap_chan *chan; 4188 4171 struct sock *sk; 4172 + 4173 + if (cmd_len != sizeof(*req)) 4174 + return -EPROTO; 4189 4175 4190 4176 scid = __le16_to_cpu(req->scid); 4191 4177 dcid = __le16_to_cpu(req->dcid); ··· 4230 4208 } 4231 4209 4232 4210 static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn, 4233 - struct l2cap_cmd_hdr *cmd, u8 *data) 4211 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4212 + u8 *data) 4234 4213 { 4235 4214 struct l2cap_disconn_rsp *rsp = (struct l2cap_disconn_rsp *) data; 4236 4215 u16 dcid, scid; 4237 4216 struct l2cap_chan *chan; 4217 + 4218 + if (cmd_len != sizeof(*rsp)) 4219 + return -EPROTO; 4238 4220 4239 4221 scid = 
__le16_to_cpu(rsp->scid); 4240 4222 dcid = __le16_to_cpu(rsp->dcid); ··· 4269 4243 } 4270 4244 4271 4245 static inline int l2cap_information_req(struct l2cap_conn *conn, 4272 - struct l2cap_cmd_hdr *cmd, u8 *data) 4246 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4247 + u8 *data) 4273 4248 { 4274 4249 struct l2cap_info_req *req = (struct l2cap_info_req *) data; 4275 4250 u16 type; 4251 + 4252 + if (cmd_len != sizeof(*req)) 4253 + return -EPROTO; 4276 4254 4277 4255 type = __le16_to_cpu(req->type); 4278 4256 ··· 4324 4294 } 4325 4295 4326 4296 static inline int l2cap_information_rsp(struct l2cap_conn *conn, 4327 - struct l2cap_cmd_hdr *cmd, u8 *data) 4297 + struct l2cap_cmd_hdr *cmd, u16 cmd_len, 4298 + u8 *data) 4328 4299 { 4329 4300 struct l2cap_info_rsp *rsp = (struct l2cap_info_rsp *) data; 4330 4301 u16 type, result; 4302 + 4303 + if (cmd_len != sizeof(*rsp)) 4304 + return -EPROTO; 4331 4305 4332 4306 type = __le16_to_cpu(rsp->type); 4333 4307 result = __le16_to_cpu(rsp->result); ··· 5198 5164 5199 5165 switch (cmd->code) { 5200 5166 case L2CAP_COMMAND_REJ: 5201 - l2cap_command_rej(conn, cmd, data); 5167 + l2cap_command_rej(conn, cmd, cmd_len, data); 5202 5168 break; 5203 5169 5204 5170 case L2CAP_CONN_REQ: 5205 - err = l2cap_connect_req(conn, cmd, data); 5171 + err = l2cap_connect_req(conn, cmd, cmd_len, data); 5206 5172 break; 5207 5173 5208 5174 case L2CAP_CONN_RSP: 5209 5175 case L2CAP_CREATE_CHAN_RSP: 5210 - err = l2cap_connect_create_rsp(conn, cmd, data); 5176 + err = l2cap_connect_create_rsp(conn, cmd, cmd_len, data); 5211 5177 break; 5212 5178 5213 5179 case L2CAP_CONF_REQ: ··· 5215 5181 break; 5216 5182 5217 5183 case L2CAP_CONF_RSP: 5218 - err = l2cap_config_rsp(conn, cmd, data); 5184 + err = l2cap_config_rsp(conn, cmd, cmd_len, data); 5219 5185 break; 5220 5186 5221 5187 case L2CAP_DISCONN_REQ: 5222 - err = l2cap_disconnect_req(conn, cmd, data); 5188 + err = l2cap_disconnect_req(conn, cmd, cmd_len, data); 5223 5189 break; 5224 5190 5225 5191 case 
L2CAP_DISCONN_RSP: 5226 - err = l2cap_disconnect_rsp(conn, cmd, data); 5192 + err = l2cap_disconnect_rsp(conn, cmd, cmd_len, data); 5227 5193 break; 5228 5194 5229 5195 case L2CAP_ECHO_REQ: ··· 5234 5200 break; 5235 5201 5236 5202 case L2CAP_INFO_REQ: 5237 - err = l2cap_information_req(conn, cmd, data); 5203 + err = l2cap_information_req(conn, cmd, cmd_len, data); 5238 5204 break; 5239 5205 5240 5206 case L2CAP_INFO_RSP: 5241 - err = l2cap_information_rsp(conn, cmd, data); 5207 + err = l2cap_information_rsp(conn, cmd, cmd_len, data); 5242 5208 break; 5243 5209 5244 5210 case L2CAP_CREATE_CHAN_REQ:
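The l2cap_core.c hunks above all add the same guard: `cmd_len` is threaded into every signalling handler, which rejects truncated PDUs with `-EPROTO` before the payload is cast to a fixed-size struct. A minimal user-space sketch of that validate-before-cast pattern (the struct and names are hypothetical, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical fixed-size command payload, mirroring the shape of the
 * l2cap_disconnect_req fix: check the advertised length first. */
struct disconn_req {
    uint16_t dcid;
    uint16_t scid;
};

/* Return 0 on success, -1 (standing in for -EPROTO) when the payload
 * is not exactly the expected size. */
static int parse_disconn_req(const uint8_t *data, size_t cmd_len,
                             struct disconn_req *out)
{
    if (cmd_len != sizeof(*out))    /* reject before reading the bytes */
        return -1;
    memcpy(out, data, sizeof(*out));
    return 0;
}
```

Without the length check, a peer sending a short command would cause reads past the end of the received buffer.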
+22 -1
net/bluetooth/mgmt.c
··· 2700 2700 break; 2701 2701 2702 2702 case DISCOV_TYPE_LE: 2703 - if (!lmp_host_le_capable(hdev)) { 2703 + if (!test_bit(HCI_LE_ENABLED, &hdev->dev_flags)) { 2704 2704 err = cmd_status(sk, hdev->id, MGMT_OP_START_DISCOVERY, 2705 2705 MGMT_STATUS_NOT_SUPPORTED); 2706 2706 mgmt_pending_remove(cmd); ··· 3414 3414 3415 3415 if (match.sk) 3416 3416 sock_put(match.sk); 3417 + 3418 + return err; 3419 + } 3420 + 3421 + int mgmt_set_powered_failed(struct hci_dev *hdev, int err) 3422 + { 3423 + struct pending_cmd *cmd; 3424 + u8 status; 3425 + 3426 + cmd = mgmt_pending_find(MGMT_OP_SET_POWERED, hdev); 3427 + if (!cmd) 3428 + return -ENOENT; 3429 + 3430 + if (err == -ERFKILL) 3431 + status = MGMT_STATUS_RFKILLED; 3432 + else 3433 + status = MGMT_STATUS_FAILED; 3434 + 3435 + err = cmd_status(cmd->sk, hdev->id, MGMT_OP_SET_POWERED, status); 3436 + 3437 + mgmt_pending_remove(cmd); 3417 3438 3418 3439 return err; 3419 3440 }
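The new `mgmt_set_powered_failed()` above distinguishes rfkill from generic failure when reporting a failed power-on. The mapping itself is a tiny pure function; a sketch follows, with the status values treated as illustrative stand-ins rather than authoritative `mgmt.h` constants:

```c
#include <assert.h>
#include <errno.h>

#ifndef ERFKILL
#define ERFKILL 132              /* Linux errno value; fallback elsewhere */
#endif

/* Illustrative stand-ins for MGMT_STATUS_FAILED / MGMT_STATUS_RFKILLED. */
enum { STATUS_FAILED = 0x03, STATUS_RFKILLED = 0x12 };

/* Map the error from hci_dev_open() to a management status code,
 * the same decision mgmt_set_powered_failed() makes. */
static int powered_failed_status(int err)
{
    return (err == -ERFKILL) ? STATUS_RFKILLED : STATUS_FAILED;
}
```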
+2 -2
net/bluetooth/smp.c
··· 770 770 771 771 BT_DBG("conn %p hcon %p level 0x%2.2x", conn, hcon, sec_level); 772 772 773 - if (!lmp_host_le_capable(hcon->hdev)) 773 + if (!test_bit(HCI_LE_ENABLED, &hcon->hdev->dev_flags)) 774 774 return 1; 775 775 776 776 if (sec_level == BT_SECURITY_LOW) ··· 851 851 __u8 reason; 852 852 int err = 0; 853 853 854 - if (!lmp_host_le_capable(conn->hcon->hdev)) { 854 + if (!test_bit(HCI_LE_ENABLED, &conn->hcon->hdev->dev_flags)) { 855 855 err = -ENOTSUPP; 856 856 reason = SMP_PAIRING_NOTSUPP; 857 857 goto done;
+1 -1
net/ceph/osd_client.c
··· 1675 1675 __register_request(osdc, req);
1676 1676 __unregister_linger_request(osdc, req);
1677 1677 }
1678 + reset_changed_osds(osdc);
1678 1679 mutex_unlock(&osdc->request_mutex);
1679 1680 
1680 1681 if (needmap) {
1681 1682 dout("%d requests for down osds, need new map\n", needmap);
1682 1683 ceph_monc_request_next_osdmap(&osdc->client->monc);
1683 1684 }
1684 - reset_changed_osds(osdc);
1685 1685 }
1686 1686 
1687 1687 
+1 -1
net/core/filter.c
··· 778 778 }
779 779 EXPORT_SYMBOL_GPL(sk_detach_filter);
780 780 
781 - static void sk_decode_filter(struct sock_filter *filt, struct sock_filter *to)
781 + void sk_decode_filter(struct sock_filter *filt, struct sock_filter *to)
782 782 {
783 783 static const u16 decodes[] = {
784 784 [BPF_S_ALU_ADD_K] = BPF_ALU|BPF_ADD|BPF_K,
+7 -2
net/core/sock_diag.c
··· 73 73 goto out; 74 74 } 75 75 76 - if (filter) 77 - memcpy(nla_data(attr), filter->insns, len); 76 + if (filter) { 77 + struct sock_filter *fb = (struct sock_filter *)nla_data(attr); 78 + int i; 79 + 80 + for (i = 0; i < filter->len; i++, fb++) 81 + sk_decode_filter(&filter->insns[i], fb); 82 + } 78 83 79 84 out: 80 85 rcu_read_unlock();
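The sock_diag.c hunk replaces a raw `memcpy()` of the kernel-internal filter program with a per-instruction `sk_decode_filter()` pass, so userspace sees the public BPF encoding rather than internal opcodes. The shape of the fix, reduced to a toy (the opcode masking here is invented, not the real BPF decode table):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy instruction; stands in for struct sock_filter. */
struct insn {
    uint16_t code;
    uint32_t k;
};

/* Hypothetical decode of an internal opcode to its public form. */
static void decode_one(const struct insn *in, struct insn *out)
{
    out->code = in->code & 0x0fff;   /* strip internal-only flag bits */
    out->k = in->k;
}

/* Export element by element instead of memcpy()ing the raw array. */
static void export_filter(const struct insn *src, struct insn *dst, size_t n)
{
    for (size_t i = 0; i < n; i++)
        decode_one(&src[i], &dst[i]);
}
```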
+2 -2
net/ipv4/ip_tunnel.c
··· 853 853 } 854 854 EXPORT_SYMBOL_GPL(ip_tunnel_dellink); 855 855 856 - int __net_init ip_tunnel_init_net(struct net *net, int ip_tnl_net_id, 856 + int ip_tunnel_init_net(struct net *net, int ip_tnl_net_id, 857 857 struct rtnl_link_ops *ops, char *devname) 858 858 { 859 859 struct ip_tunnel_net *itn = net_generic(net, ip_tnl_net_id); ··· 899 899 unregister_netdevice_queue(itn->fb_tunnel_dev, head); 900 900 } 901 901 902 - void __net_exit ip_tunnel_delete_net(struct ip_tunnel_net *itn) 902 + void ip_tunnel_delete_net(struct ip_tunnel_net *itn) 903 903 { 904 904 LIST_HEAD(list); 905 905
+1 -2
net/ipv4/ip_vti.c
··· 361 361 tunnel->err_count = 0;
362 362 }
363 363 
364 - IPCB(skb)->flags &= ~(IPSKB_XFRM_TUNNEL_SIZE | IPSKB_XFRM_TRANSFORMED |
365 - IPSKB_REROUTED);
364 + memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
366 365 skb_dst_drop(skb);
367 366 skb_dst_set(skb, &rt->dst);
368 367 nf_reset(skb);
+3 -3
net/l2tp/l2tp_ppp.c
··· 346 346 skb_put(skb, 2); 347 347 348 348 /* Copy user data into skb */ 349 - error = memcpy_fromiovec(skb->data, m->msg_iov, total_len); 349 + error = memcpy_fromiovec(skb_put(skb, total_len), m->msg_iov, 350 + total_len); 350 351 if (error < 0) { 351 352 kfree_skb(skb); 352 353 goto error_put_sess_tun; 353 354 } 354 - skb_put(skb, total_len); 355 355 356 356 l2tp_xmit_skb(session, skb, session->hdr_len); 357 357 358 358 sock_put(ps->tunnel_sock); 359 359 sock_put(sk); 360 360 361 - return error; 361 + return total_len; 362 362 363 363 error_put_sess_tun: 364 364 sock_put(ps->tunnel_sock);
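Two things change in the l2tp_ppp.c hunk: the user data is copied into the region returned by `skb_put()` (rather than copying first and reserving afterwards), and the function returns `total_len` on success, matching the sendmsg() convention of returning the number of bytes sent. A toy model of that return convention:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the sendmsg() contract the fix restores: on success return
 * the byte count, on failure a negative errno -- never 0 for a
 * successful non-empty send. */
static long toy_send(char *dst, size_t dst_len, const char *src, size_t len)
{
    if (len > dst_len)
        return -90;              /* stand-in for -EMSGSIZE */
    memcpy(dst, src, len);       /* copy into space already reserved */
    return (long)len;
}
```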
+1
net/netfilter/ipvs/ip_vs_ctl.c
··· 2542 2542 struct ip_vs_dest *dest;
2543 2543 struct ip_vs_dest_entry entry;
2544 2544 
2545 + memset(&entry, 0, sizeof(entry));
2545 2546 list_for_each_entry(dest, &svc->destinations, n_list) {
2546 2547 if (count >= get->num_dests)
2547 2548 break;
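The `memset(&entry, 0, sizeof(entry))` before the loop matters because `entry` is later copied out to userspace wholesale: without it, compiler-inserted padding bytes carry stale kernel stack contents. A user-space illustration of the padding issue (field names are made up; the padding assert is guarded because layout is ABI-dependent):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* A struct with implicit padding after the first member on common ABIs. */
struct dest_entry {
    uint8_t  proto;     /* typically followed by 3 padding bytes */
    uint32_t addr;
    uint16_t port;      /* typically followed by 2 trailing pad bytes */
};

/* Zero the whole object first, as the ip_vs_ctl.c fix does, so padding
 * never leaks whatever was previously on the stack. */
static void fill_entry(struct dest_entry *e)
{
    memset(e, 0, sizeof(*e));
    e->proto = 6;
    e->addr = 0x7f000001u;
    e->port = 80;
}
```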
+6
net/netfilter/xt_TCPMSS.c
··· 125 125 
126 126 skb_put(skb, TCPOLEN_MSS);
127 127 
128 + /* RFC 879 states that the default MSS is 536 without specific
129 + * knowledge that the destination host is prepared to accept larger.
130 + * Since no MSS was provided, we MUST NOT set a value > 536.
131 + */
132 + newmss = min(newmss, (u16)536);
133 + 
128 134 opt = (u_int8_t *)tcph + sizeof(struct tcphdr);
129 135 memmove(opt + TCPOLEN_MSS, opt, tcplen - sizeof(struct tcphdr));
130 136 
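The xt_TCPMSS hunk encodes RFC 879's rule directly: absent an MSS option from the peer, the sender must assume it accepts at most 536 bytes, so a rewritten option must not advertise more. As a pure function:

```c
#include <assert.h>
#include <stdint.h>

/* RFC 879: with no MSS option received, the destination may accept at
 * most 536 bytes, so never set a larger value -- the same clamp as
 * newmss = min(newmss, (u16)536) in the patch. */
static uint16_t clamp_default_mss(uint16_t newmss)
{
    const uint16_t rfc879_default = 536;
    return newmss < rfc879_default ? newmss : rfc879_default;
}
```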
+1 -1
net/netlink/af_netlink.c
··· 371 371 err = 0;
372 372 out:
373 373 mutex_unlock(&nlk->pg_vec_lock);
374 - return 0;
374 + return err;
375 375 }
376 376 
377 377 static void netlink_frame_flush_dcache(const struct nl_mmap_hdr *hdr)
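The af_netlink.c change is the classic swallowed-error bug: the function carefully computes `err`, then returns a hard-coded 0 after the unlock. Reduced to its essentials:

```c
#include <assert.h>

/* The corrected shape: propagate the computed error after cleanup
 * instead of unconditionally returning 0 (fail is a toy trigger). */
static int do_setup(int fail)
{
    int err = 0;
    if (fail)
        err = -22;               /* stand-in for -EINVAL */
    /* ... mutex_unlock() would go here ... */
    return err;                  /* the bug was: return 0; */
}
```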
+2 -3
net/packet/af_packet.c
··· 2851 2851 return -EOPNOTSUPP; 2852 2852 2853 2853 uaddr->sa_family = AF_PACKET; 2854 + memset(uaddr->sa_data, 0, sizeof(uaddr->sa_data)); 2854 2855 rcu_read_lock(); 2855 2856 dev = dev_get_by_index_rcu(sock_net(sk), pkt_sk(sk)->ifindex); 2856 2857 if (dev) 2857 - strncpy(uaddr->sa_data, dev->name, 14); 2858 - else 2859 - memset(uaddr->sa_data, 0, 14); 2858 + strlcpy(uaddr->sa_data, dev->name, sizeof(uaddr->sa_data)); 2860 2859 rcu_read_unlock(); 2861 2860 *uaddr_len = sizeof(*uaddr); 2862 2861
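The af_packet.c hunk zeroes the whole `sa_data` field up front and then uses `strlcpy()` with the field's real size, so the copy is bounded, NUL-terminated, and free of stale bytes regardless of which branch runs. `strlcpy()` is a kernel/BSD helper, not ISO C, so this sketch carries a minimal local version:

```c
#include <assert.h>
#include <string.h>

/* Minimal strlcpy(): bounded copy that always NUL-terminates when
 * size > 0 and returns the source length (BSD semantics). */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);
    if (size) {
        size_t n = len < size - 1 ? len : size - 1;
        memcpy(dst, src, n);
        dst[n] = '\0';
    }
    return len;
}

/* The af_packet pattern: zero the fixed-size field, then do a bounded,
 * terminated copy -- no uninitialized bytes can escape to userspace. */
static void fill_sa_data(char sa_data[14], const char *ifname)
{
    memset(sa_data, 0, 14);
    my_strlcpy(sa_data, ifname, 14);
}
```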
+6 -5
net/sched/sch_api.c
··· 291 291 { 292 292 struct qdisc_rate_table *rtab; 293 293 294 + if (tab == NULL || r->rate == 0 || r->cell_log == 0 || 295 + nla_len(tab) != TC_RTAB_SIZE) 296 + return NULL; 297 + 294 298 for (rtab = qdisc_rtab_list; rtab; rtab = rtab->next) { 295 - if (memcmp(&rtab->rate, r, sizeof(struct tc_ratespec)) == 0) { 299 + if (!memcmp(&rtab->rate, r, sizeof(struct tc_ratespec)) && 300 + !memcmp(&rtab->data, nla_data(tab), 1024)) { 296 301 rtab->refcnt++; 297 302 return rtab; 298 303 } 299 304 } 300 - 301 - if (tab == NULL || r->rate == 0 || r->cell_log == 0 || 302 - nla_len(tab) != TC_RTAB_SIZE) 303 - return NULL; 304 305 305 306 rtab = kmalloc(sizeof(*rtab), GFP_KERNEL); 306 307 if (rtab) {
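The sch_api.c reorder does two things: it rejects invalid input before consulting the shared rtab cache (previously a bad request could still hit and refcount a cached entry), and it matches cache entries on the full 1024-byte data table as well as the rate spec. In miniature, with a 4-byte table and an invented struct:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Toy rate-table cache entry; the real one caches a 1024-byte table. */
struct rtab {
    int rate;
    char data[4];
    int refcnt;
};

/* Validate first, then match on the full contents (rate AND data),
 * the same ordering and comparison the patch introduces. */
static struct rtab *get_rtab(struct rtab *cache, size_t n,
                             int rate, const char data[4])
{
    if (rate == 0)                       /* reject bad input up front */
        return NULL;
    for (size_t i = 0; i < n; i++) {
        if (cache[i].rate == rate &&
            memcmp(cache[i].data, data, 4) == 0) {
            cache[i].refcnt++;
            return &cache[i];
        }
    }
    return NULL;
}
```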
+2 -4
net/sctp/outqueue.c
··· 206 206 */ 207 207 void sctp_outq_init(struct sctp_association *asoc, struct sctp_outq *q) 208 208 { 209 + memset(q, 0, sizeof(struct sctp_outq)); 210 + 209 211 q->asoc = asoc; 210 212 INIT_LIST_HEAD(&q->out_chunk_list); 211 213 INIT_LIST_HEAD(&q->control_chunk_list); ··· 215 213 INIT_LIST_HEAD(&q->sacked); 216 214 INIT_LIST_HEAD(&q->abandoned); 217 215 218 - q->fast_rtx = 0; 219 - q->outstanding_bytes = 0; 220 216 q->empty = 1; 221 - q->cork = 0; 222 - q->out_qlen = 0; 223 217 } 224 218 225 219 /* Free the outqueue structure and any related pending chunks.
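The sctp_outq_init() cleanup swaps field-by-field zeroing for one `memset()` followed by only the non-zero assignments, so any member added to the struct later starts out zeroed instead of depending on someone remembering to clear it. The idiom:

```c
#include <assert.h>
#include <string.h>

/* Toy version of the outqueue state. */
struct outq {
    int fast_rtx;
    int outstanding_bytes;
    int empty;
    int cork;
    int out_qlen;
};

/* Zero everything, then set only what is non-zero -- the shape
 * sctp_outq_init() takes after the patch. */
static void outq_init(struct outq *q)
{
    memset(q, 0, sizeof(*q));
    q->empty = 1;
}
```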
+6
net/sctp/socket.c
··· 4003 4003 4004 4004 /* Release our hold on the endpoint. */ 4005 4005 sp = sctp_sk(sk); 4006 + /* This could happen during socket init, thus we bail out 4007 + * early, since the rest of the below is not setup either. 4008 + */ 4009 + if (sp->ep == NULL) 4010 + return; 4011 + 4006 4012 if (sp->do_auto_asconf) { 4007 4013 sp->do_auto_asconf = 0; 4008 4014 list_del(&sp->auto_asconf_list);
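The socket.c guard handles teardown running before initialization finished: if `sp->ep` was never set up, the destroy path bails out before touching state that does not exist. The defensive shape, with toy types:

```c
#include <assert.h>
#include <stddef.h>

struct ep { int refs; };
struct sock_state { struct ep *ep; };

/* Bail out early when initialization never completed, as the sctp
 * destroy path now does; the return value reports which path ran. */
static int destroy(struct sock_state *sp)
{
    if (sp->ep == NULL)
        return 0;            /* nothing was set up; nothing to tear down */
    sp->ep->refs--;          /* normal teardown */
    return 1;
}
```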
+4 -4
scripts/Makefile.lib
··· 149 149 150 150 ld_flags = $(LDFLAGS) $(ldflags-y) 151 151 152 - dtc_cpp_flags = -Wp,-MD,$(depfile).pre -nostdinc \ 152 + dtc_cpp_flags = -Wp,-MD,$(depfile).pre.tmp -nostdinc \ 153 153 -I$(srctree)/arch/$(SRCARCH)/boot/dts \ 154 154 -I$(srctree)/arch/$(SRCARCH)/boot/dts/include \ 155 155 -undef -D__DTS__ ··· 265 265 cmd_dtc = $(CPP) $(dtc_cpp_flags) -x assembler-with-cpp -o $(dtc-tmp) $< ; \ 266 266 $(objtree)/scripts/dtc/dtc -O dtb -o $@ -b 0 \ 267 267 -i $(dir $<) $(DTC_FLAGS) \ 268 - -d $(depfile).dtc $(dtc-tmp) ; \ 269 - cat $(depfile).pre $(depfile).dtc > $(depfile) 268 + -d $(depfile).dtc.tmp $(dtc-tmp) ; \ 269 + cat $(depfile).pre.tmp $(depfile).dtc.tmp > $(depfile) 270 270 271 271 $(obj)/%.dtb: $(src)/%.dts FORCE 272 272 $(call if_changed_dep,dtc) 273 273 274 - dtc-tmp = $(subst $(comma),_,$(dot-target).dts) 274 + dtc-tmp = $(subst $(comma),_,$(dot-target).dts.tmp) 275 275 276 276 # Bzip2 277 277 # ---------------------------------------------------------------------------
+1 -1
scripts/dtc/dtc-lexer.l
··· 71 71 push_input_file(name); 72 72 } 73 73 74 - <*>^"#"(line)?{WS}+[0-9]+{WS}+{STRING}({WS}+[0-9]+)? { 74 + <*>^"#"(line)?[ \t]+[0-9]+[ \t]+{STRING}([ \t]+[0-9]+)? { 75 75 char *line, *tmp, *fn; 76 76 /* skip text before line # */ 77 77 line = yytext;
+110 -110
scripts/dtc/dtc-lexer.lex.c_shipped
··· 405 405 static yyconst flex_int32_t yy_ec[256] = 406 406 { 0, 407 407 1, 1, 1, 1, 1, 1, 1, 1, 2, 3, 408 - 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 408 + 4, 4, 4, 1, 1, 1, 1, 1, 1, 1, 409 409 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 410 - 1, 2, 4, 5, 6, 1, 1, 7, 8, 1, 411 - 1, 9, 10, 10, 11, 10, 12, 13, 14, 15, 412 - 15, 15, 15, 15, 15, 15, 15, 16, 1, 17, 413 - 18, 19, 10, 10, 20, 20, 20, 20, 20, 20, 414 - 21, 21, 21, 21, 21, 22, 21, 21, 21, 21, 415 - 21, 21, 21, 21, 23, 21, 21, 24, 21, 21, 416 - 1, 25, 26, 1, 21, 1, 20, 27, 28, 29, 410 + 1, 2, 5, 6, 7, 1, 1, 8, 9, 1, 411 + 1, 10, 11, 11, 12, 11, 13, 14, 15, 16, 412 + 16, 16, 16, 16, 16, 16, 16, 17, 1, 18, 413 + 19, 20, 11, 11, 21, 21, 21, 21, 21, 21, 414 + 22, 22, 22, 22, 22, 23, 22, 22, 22, 22, 415 + 22, 22, 22, 22, 24, 22, 22, 25, 22, 22, 416 + 1, 26, 27, 1, 22, 1, 21, 28, 29, 30, 417 417 418 - 30, 20, 21, 21, 31, 21, 21, 32, 33, 34, 419 - 35, 36, 21, 37, 38, 39, 40, 41, 21, 24, 420 - 42, 21, 43, 44, 45, 1, 1, 1, 1, 1, 418 + 31, 21, 22, 22, 32, 22, 22, 33, 34, 35, 419 + 36, 37, 22, 38, 39, 40, 41, 42, 22, 25, 420 + 43, 22, 44, 45, 46, 1, 1, 1, 1, 1, 421 421 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 422 422 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 423 423 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ··· 434 434 1, 1, 1, 1, 1 435 435 } ; 436 436 437 - static yyconst flex_int32_t yy_meta[46] = 437 + static yyconst flex_int32_t yy_meta[47] = 438 438 { 0, 439 - 1, 1, 1, 1, 1, 2, 3, 1, 2, 2, 440 - 2, 4, 5, 5, 5, 6, 1, 1, 1, 7, 441 - 8, 8, 8, 8, 1, 1, 7, 7, 7, 7, 442 - 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 443 - 8, 8, 3, 1, 1 439 + 1, 1, 1, 1, 1, 1, 2, 3, 1, 2, 440 + 2, 2, 4, 5, 5, 5, 6, 1, 1, 1, 441 + 7, 8, 8, 8, 8, 1, 1, 7, 7, 7, 442 + 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 443 + 8, 8, 8, 3, 1, 1 444 444 } ; 445 445 446 446 static yyconst flex_int16_t yy_base[175] = 447 447 { 0, 448 - 0, 388, 381, 40, 41, 386, 71, 385, 34, 44, 449 - 390, 395, 60, 62, 371, 112, 111, 111, 111, 104, 450 - 370, 106, 371, 342, 124, 119, 0, 144, 395, 0, 451 - 123, 0, 159, 153, 165, 167, 395, 130, 395, 382, 452 - 
395, 0, 372, 122, 395, 157, 374, 379, 350, 21, 453 - 346, 349, 395, 395, 395, 395, 395, 362, 395, 395, 454 - 181, 346, 342, 395, 359, 0, 191, 343, 190, 351, 455 - 350, 0, 0, 0, 173, 362, 177, 367, 357, 329, 456 - 335, 328, 337, 331, 206, 329, 334, 327, 395, 338, 457 - 170, 314, 346, 345, 318, 325, 343, 158, 316, 212, 448 + 0, 385, 378, 40, 41, 383, 72, 382, 34, 44, 449 + 388, 393, 61, 117, 368, 116, 115, 115, 115, 48, 450 + 367, 107, 368, 339, 127, 120, 0, 147, 393, 0, 451 + 127, 0, 133, 156, 168, 153, 393, 125, 393, 380, 452 + 393, 0, 369, 127, 393, 160, 371, 377, 347, 21, 453 + 343, 346, 393, 393, 393, 393, 393, 359, 393, 393, 454 + 183, 343, 339, 393, 356, 0, 183, 340, 187, 348, 455 + 347, 0, 0, 0, 178, 359, 195, 365, 354, 326, 456 + 332, 325, 334, 328, 204, 326, 331, 324, 393, 335, 457 + 150, 311, 343, 342, 315, 322, 340, 179, 313, 207, 458 458 459 - 322, 319, 320, 395, 340, 336, 308, 305, 314, 304, 460 - 295, 138, 208, 220, 395, 292, 305, 265, 264, 254, 461 - 201, 222, 285, 275, 273, 270, 236, 235, 225, 115, 462 - 395, 395, 252, 216, 216, 217, 214, 230, 209, 220, 463 - 213, 239, 211, 217, 216, 209, 229, 395, 240, 225, 464 - 206, 169, 395, 395, 116, 106, 99, 54, 395, 395, 465 - 254, 260, 268, 272, 276, 282, 289, 293, 301, 309, 466 - 313, 319, 327, 335 459 + 319, 316, 317, 393, 337, 333, 305, 302, 311, 301, 460 + 310, 190, 338, 337, 393, 307, 322, 301, 305, 277, 461 + 208, 311, 307, 278, 271, 270, 248, 246, 213, 130, 462 + 393, 393, 263, 235, 207, 221, 218, 229, 213, 213, 463 + 206, 234, 218, 210, 208, 193, 219, 393, 223, 204, 464 + 176, 157, 393, 393, 120, 106, 97, 119, 393, 393, 465 + 245, 251, 259, 263, 267, 273, 280, 284, 292, 300, 466 + 304, 310, 318, 326 467 467 } ; 468 468 469 469 static yyconst flex_int16_t yy_def[175] = ··· 489 489 160, 160, 160, 160 490 490 } ; 491 491 492 - static yyconst flex_int16_t yy_nxt[441] = 492 + static yyconst flex_int16_t yy_nxt[440] = 493 493 { 0, 494 - 12, 13, 14, 15, 16, 12, 17, 18, 12, 12, 495 - 12, 19, 12, 12, 12, 12, 
20, 21, 22, 23, 496 - 23, 23, 23, 23, 12, 12, 23, 23, 23, 23, 494 + 12, 13, 14, 13, 15, 16, 12, 17, 18, 12, 495 + 12, 12, 19, 12, 12, 12, 12, 20, 21, 22, 496 + 23, 23, 23, 23, 23, 12, 12, 23, 23, 23, 497 497 23, 23, 23, 23, 23, 23, 23, 23, 23, 23, 498 - 23, 23, 12, 24, 12, 25, 34, 35, 35, 25, 499 - 81, 26, 26, 27, 27, 27, 34, 35, 35, 82, 500 - 28, 36, 36, 36, 36, 159, 29, 28, 28, 28, 501 - 28, 12, 13, 14, 15, 16, 30, 17, 18, 30, 502 - 30, 30, 26, 30, 30, 30, 12, 20, 21, 22, 503 - 31, 31, 31, 31, 31, 32, 12, 31, 31, 31, 498 + 23, 23, 23, 12, 24, 12, 25, 34, 35, 35, 499 + 25, 81, 26, 26, 27, 27, 27, 34, 35, 35, 500 + 82, 28, 36, 36, 36, 53, 54, 29, 28, 28, 501 + 28, 28, 12, 13, 14, 13, 15, 16, 30, 17, 502 + 18, 30, 30, 30, 26, 30, 30, 30, 12, 20, 503 + 21, 22, 31, 31, 31, 31, 31, 32, 12, 31, 504 504 505 505 31, 31, 31, 31, 31, 31, 31, 31, 31, 31, 506 - 31, 31, 31, 12, 24, 12, 39, 41, 45, 47, 507 - 53, 54, 48, 56, 57, 61, 61, 47, 66, 45, 508 - 48, 66, 66, 66, 39, 46, 40, 49, 59, 50, 509 - 158, 51, 122, 52, 157, 49, 46, 50, 136, 63, 510 - 137, 52, 156, 43, 40, 62, 65, 65, 65, 59, 511 - 61, 61, 123, 65, 75, 69, 69, 69, 36, 36, 512 - 65, 65, 65, 65, 70, 71, 72, 69, 69, 69, 513 - 45, 46, 61, 61, 109, 77, 70, 71, 93, 110, 514 - 68, 70, 71, 85, 85, 85, 66, 46, 155, 66, 506 + 31, 31, 31, 31, 31, 12, 24, 12, 36, 36, 507 + 36, 39, 41, 45, 47, 56, 57, 48, 61, 47, 508 + 39, 159, 48, 66, 61, 45, 66, 66, 66, 158, 509 + 46, 40, 49, 59, 50, 157, 51, 49, 52, 50, 510 + 40, 63, 46, 52, 36, 36, 36, 156, 43, 62, 511 + 65, 65, 65, 59, 136, 68, 137, 65, 75, 69, 512 + 69, 69, 70, 71, 65, 65, 65, 65, 70, 71, 513 + 72, 69, 69, 69, 61, 46, 45, 155, 154, 66, 514 + 70, 71, 66, 66, 66, 122, 85, 85, 85, 59, 515 515 516 - 66, 66, 69, 69, 69, 122, 59, 100, 100, 61, 517 - 61, 70, 71, 100, 100, 148, 112, 154, 85, 85, 518 - 85, 61, 61, 129, 129, 123, 129, 129, 135, 135, 519 - 135, 142, 142, 148, 143, 149, 153, 135, 135, 135, 520 - 142, 142, 160, 143, 152, 151, 150, 146, 145, 144, 521 - 141, 140, 139, 
149, 38, 38, 38, 38, 38, 38, 522 - 38, 38, 42, 138, 134, 133, 42, 42, 44, 44, 523 - 44, 44, 44, 44, 44, 44, 58, 58, 58, 58, 524 - 64, 132, 64, 66, 131, 130, 66, 160, 66, 66, 525 - 67, 128, 127, 67, 67, 67, 67, 73, 126, 73, 516 + 69, 69, 69, 46, 77, 100, 109, 93, 100, 70, 517 + 71, 110, 112, 122, 129, 123, 153, 85, 85, 85, 518 + 135, 135, 135, 148, 148, 160, 135, 135, 135, 152, 519 + 142, 142, 142, 123, 143, 142, 142, 142, 151, 143, 520 + 150, 146, 145, 149, 149, 38, 38, 38, 38, 38, 521 + 38, 38, 38, 42, 144, 141, 140, 42, 42, 44, 522 + 44, 44, 44, 44, 44, 44, 44, 58, 58, 58, 523 + 58, 64, 139, 64, 66, 138, 134, 66, 133, 66, 524 + 66, 67, 132, 131, 67, 67, 67, 67, 73, 130, 525 + 73, 73, 76, 76, 76, 76, 76, 76, 76, 76, 526 526 527 - 73, 76, 76, 76, 76, 76, 76, 76, 76, 78, 528 - 78, 78, 78, 78, 78, 78, 78, 91, 125, 91, 529 - 92, 124, 92, 92, 120, 92, 92, 121, 121, 121, 530 - 121, 121, 121, 121, 121, 147, 147, 147, 147, 147, 531 - 147, 147, 147, 119, 118, 117, 116, 115, 47, 114, 532 - 110, 113, 111, 108, 107, 106, 48, 105, 104, 89, 533 - 103, 102, 101, 99, 98, 97, 96, 95, 94, 79, 534 - 77, 90, 89, 88, 59, 87, 86, 59, 84, 83, 535 - 80, 79, 77, 74, 160, 60, 59, 55, 37, 160, 536 - 33, 25, 26, 25, 11, 160, 160, 160, 160, 160, 527 + 78, 78, 78, 78, 78, 78, 78, 78, 91, 160, 528 + 91, 92, 129, 92, 92, 128, 92, 92, 121, 121, 529 + 121, 121, 121, 121, 121, 121, 147, 147, 147, 147, 530 + 147, 147, 147, 147, 127, 126, 125, 124, 61, 61, 531 + 120, 119, 118, 117, 116, 115, 47, 114, 110, 113, 532 + 111, 108, 107, 106, 48, 105, 104, 89, 103, 102, 533 + 101, 99, 98, 97, 96, 95, 94, 79, 77, 90, 534 + 89, 88, 59, 87, 86, 59, 84, 83, 80, 79, 535 + 77, 74, 160, 60, 59, 55, 37, 160, 33, 25, 536 + 26, 25, 11, 160, 160, 160, 160, 160, 160, 160, 537 537 538 538 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 539 539 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 540 540 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 541 - 160, 160, 160, 160, 160, 160, 160, 160, 160, 160 541 + 160, 
160, 160, 160, 160, 160, 160, 160, 160 542 542 } ; 543 543 544 - static yyconst flex_int16_t yy_chk[441] = 544 + static yyconst flex_int16_t yy_chk[440] = 545 545 { 0, 546 546 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 547 547 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 548 548 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 549 549 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 550 - 1, 1, 1, 1, 1, 4, 9, 9, 9, 10, 551 - 50, 4, 5, 5, 5, 5, 10, 10, 10, 50, 552 - 5, 13, 13, 14, 14, 158, 5, 5, 5, 5, 553 - 5, 7, 7, 7, 7, 7, 7, 7, 7, 7, 550 + 1, 1, 1, 1, 1, 1, 4, 9, 9, 9, 551 + 10, 50, 4, 5, 5, 5, 5, 10, 10, 10, 552 + 50, 5, 13, 13, 13, 20, 20, 5, 5, 5, 553 + 5, 5, 7, 7, 7, 7, 7, 7, 7, 7, 554 554 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 555 555 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 556 556 557 557 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 558 - 7, 7, 7, 7, 7, 7, 16, 17, 18, 19, 559 - 20, 20, 19, 22, 22, 25, 25, 26, 31, 44, 560 - 26, 31, 31, 31, 38, 18, 16, 19, 31, 19, 561 - 157, 19, 112, 19, 156, 26, 44, 26, 130, 26, 562 - 130, 26, 155, 17, 38, 25, 28, 28, 28, 28, 563 - 33, 33, 112, 28, 46, 34, 34, 34, 36, 36, 564 - 28, 28, 28, 28, 34, 34, 34, 35, 35, 35, 565 - 75, 46, 61, 61, 98, 77, 35, 35, 77, 98, 566 - 33, 91, 91, 61, 61, 61, 67, 75, 152, 67, 558 + 7, 7, 7, 7, 7, 7, 7, 7, 14, 14, 559 + 14, 16, 17, 18, 19, 22, 22, 19, 25, 26, 560 + 38, 158, 26, 31, 33, 44, 31, 31, 31, 157, 561 + 18, 16, 19, 31, 19, 156, 19, 26, 19, 26, 562 + 38, 26, 44, 26, 36, 36, 36, 155, 17, 25, 563 + 28, 28, 28, 28, 130, 33, 130, 28, 46, 34, 564 + 34, 34, 91, 91, 28, 28, 28, 28, 34, 34, 565 + 34, 35, 35, 35, 61, 46, 75, 152, 151, 67, 566 + 35, 35, 67, 67, 67, 112, 61, 61, 61, 67, 567 567 568 - 67, 67, 69, 69, 69, 121, 67, 85, 85, 113, 569 - 113, 69, 69, 100, 100, 143, 100, 151, 85, 85, 570 - 85, 114, 114, 122, 122, 121, 129, 129, 135, 135, 571 - 135, 138, 138, 147, 138, 143, 150, 129, 129, 129, 572 - 142, 142, 149, 142, 146, 145, 144, 141, 140, 139, 573 - 137, 136, 134, 147, 161, 161, 161, 161, 161, 161, 574 - 161, 161, 162, 133, 128, 127, 162, 162, 163, 163, 575 - 163, 163, 163, 163, 
163, 163, 164, 164, 164, 164, 576 - 165, 126, 165, 166, 125, 124, 166, 123, 166, 166, 577 - 167, 120, 119, 167, 167, 167, 167, 168, 118, 168, 568 + 69, 69, 69, 75, 77, 85, 98, 77, 100, 69, 569 + 69, 98, 100, 121, 129, 112, 150, 85, 85, 85, 570 + 135, 135, 135, 143, 147, 149, 129, 129, 129, 146, 571 + 138, 138, 138, 121, 138, 142, 142, 142, 145, 142, 572 + 144, 141, 140, 143, 147, 161, 161, 161, 161, 161, 573 + 161, 161, 161, 162, 139, 137, 136, 162, 162, 163, 574 + 163, 163, 163, 163, 163, 163, 163, 164, 164, 164, 575 + 164, 165, 134, 165, 166, 133, 128, 166, 127, 166, 576 + 166, 167, 126, 125, 167, 167, 167, 167, 168, 124, 577 + 168, 168, 169, 169, 169, 169, 169, 169, 169, 169, 578 578 579 - 168, 169, 169, 169, 169, 169, 169, 169, 169, 170, 580 - 170, 170, 170, 170, 170, 170, 170, 171, 117, 171, 581 - 172, 116, 172, 172, 111, 172, 172, 173, 173, 173, 582 - 173, 173, 173, 173, 173, 174, 174, 174, 174, 174, 583 - 174, 174, 174, 110, 109, 108, 107, 106, 105, 103, 584 - 102, 101, 99, 97, 96, 95, 94, 93, 92, 90, 585 - 88, 87, 86, 84, 83, 82, 81, 80, 79, 78, 586 - 76, 71, 70, 68, 65, 63, 62, 58, 52, 51, 587 - 49, 48, 47, 43, 40, 24, 23, 21, 15, 11, 588 - 8, 6, 3, 2, 160, 160, 160, 160, 160, 160, 579 + 170, 170, 170, 170, 170, 170, 170, 170, 171, 123, 580 + 171, 172, 122, 172, 172, 120, 172, 172, 173, 173, 581 + 173, 173, 173, 173, 173, 173, 174, 174, 174, 174, 582 + 174, 174, 174, 174, 119, 118, 117, 116, 114, 113, 583 + 111, 110, 109, 108, 107, 106, 105, 103, 102, 101, 584 + 99, 97, 96, 95, 94, 93, 92, 90, 88, 87, 585 + 86, 84, 83, 82, 81, 80, 79, 78, 76, 71, 586 + 70, 68, 65, 63, 62, 58, 52, 51, 49, 48, 587 + 47, 43, 40, 24, 23, 21, 15, 11, 8, 6, 588 + 3, 2, 160, 160, 160, 160, 160, 160, 160, 160, 589 589 590 590 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 591 591 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 592 592 160, 160, 160, 160, 160, 160, 160, 160, 160, 160, 593 - 160, 160, 160, 160, 160, 160, 160, 160, 160, 160 593 + 160, 160, 160, 160, 160, 160, 
160, 160, 160 594 594 } ; 595 595 596 596 static yy_state_type yy_last_accepting_state;
+383 -334
scripts/dtc/dtc-parser.tab.c_shipped
··· 1 + /* A Bison parser, made by GNU Bison 2.5. */ 1 2 2 - /* A Bison parser, made by GNU Bison 2.4.1. */ 3 - 4 - /* Skeleton implementation for Bison's Yacc-like parsers in C 3 + /* Bison implementation for Yacc-like parsers in C 5 4 6 - Copyright (C) 1984, 1989, 1990, 2000, 2001, 2002, 2003, 2004, 2005, 2006 7 - Free Software Foundation, Inc. 5 + Copyright (C) 1984, 1989-1990, 2000-2011 Free Software Foundation, Inc. 8 6 9 7 This program is free software: you can redistribute it and/or modify 10 8 it under the terms of the GNU General Public License as published by ··· 44 46 #define YYBISON 1 45 47 46 48 /* Bison version. */ 47 - #define YYBISON_VERSION "2.4.1" 49 + #define YYBISON_VERSION "2.5" 48 50 49 51 /* Skeleton name. */ 50 52 #define YYSKELETON_NAME "yacc.c" ··· 65 67 66 68 /* Copy the first part of user declarations. */ 67 69 68 - /* Line 189 of yacc.c */ 70 + /* Line 268 of yacc.c */ 69 71 #line 21 "dtc-parser.y" 70 72 71 73 #include <stdio.h> ··· 86 88 static unsigned char eval_char_literal(const char *s); 87 89 88 90 89 - /* Line 189 of yacc.c */ 90 - #line 93 "dtc-parser.tab.c" 91 + /* Line 268 of yacc.c */ 92 + #line 91 "dtc-parser.tab.c" 91 93 92 94 /* Enabling traces. */ 93 95 #ifndef YYDEBUG ··· 145 147 typedef union YYSTYPE 146 148 { 147 149 148 - /* Line 214 of yacc.c */ 150 + /* Line 293 of yacc.c */ 149 151 #line 40 "dtc-parser.y" 150 152 151 153 char *propnodename; ··· 169 171 170 172 171 173 172 - /* Line 214 of yacc.c */ 173 - #line 176 "dtc-parser.tab.c" 174 + /* Line 293 of yacc.c */ 175 + #line 174 "dtc-parser.tab.c" 174 176 } YYSTYPE; 175 177 # define YYSTYPE_IS_TRIVIAL 1 176 178 # define yystype YYSTYPE /* obsolescent; will be withdrawn */ ··· 181 183 /* Copy the second part of user declarations. 
*/ 182 184 183 185 184 - /* Line 264 of yacc.c */ 185 - #line 188 "dtc-parser.tab.c" 186 + /* Line 343 of yacc.c */ 187 + #line 186 "dtc-parser.tab.c" 186 188 187 189 #ifdef short 188 190 # undef short ··· 232 234 #define YYSIZE_MAXIMUM ((YYSIZE_T) -1) 233 235 234 236 #ifndef YY_ 235 - # if YYENABLE_NLS 237 + # if defined YYENABLE_NLS && YYENABLE_NLS 236 238 # if ENABLE_NLS 237 239 # include <libintl.h> /* INFRINGES ON USER NAME SPACE */ 238 240 # define YY_(msgid) dgettext ("bison-runtime", msgid) ··· 285 287 # define alloca _alloca 286 288 # else 287 289 # define YYSTACK_ALLOC alloca 288 - # if ! defined _ALLOCA_H && ! defined _STDLIB_H && (defined __STDC__ || defined __C99__FUNC__ \ 290 + # if ! defined _ALLOCA_H && ! defined EXIT_SUCCESS && (defined __STDC__ || defined __C99__FUNC__ \ 289 291 || defined __cplusplus || defined _MSC_VER) 290 292 # include <stdlib.h> /* INFRINGES ON USER NAME SPACE */ 291 - # ifndef _STDLIB_H 292 - # define _STDLIB_H 1 293 + # ifndef EXIT_SUCCESS 294 + # define EXIT_SUCCESS 0 293 295 # endif 294 296 # endif 295 297 # endif ··· 312 314 # ifndef YYSTACK_ALLOC_MAXIMUM 313 315 # define YYSTACK_ALLOC_MAXIMUM YYSIZE_MAXIMUM 314 316 # endif 315 - # if (defined __cplusplus && ! defined _STDLIB_H \ 317 + # if (defined __cplusplus && ! defined EXIT_SUCCESS \ 316 318 && ! ((defined YYMALLOC || defined malloc) \ 317 319 && (defined YYFREE || defined free))) 318 320 # include <stdlib.h> /* INFRINGES ON USER NAME SPACE */ 319 - # ifndef _STDLIB_H 320 - # define _STDLIB_H 1 321 + # ifndef EXIT_SUCCESS 322 + # define EXIT_SUCCESS 0 321 323 # endif 322 324 # endif 323 325 # ifndef YYMALLOC 324 326 # define YYMALLOC malloc 325 - # if ! defined malloc && ! defined _STDLIB_H && (defined __STDC__ || defined __C99__FUNC__ \ 327 + # if ! defined malloc && ! 
defined EXIT_SUCCESS && (defined __STDC__ || defined __C99__FUNC__ \ 326 328 || defined __cplusplus || defined _MSC_VER) 327 329 void *malloc (YYSIZE_T); /* INFRINGES ON USER NAME SPACE */ 328 330 # endif 329 331 # endif 330 332 # ifndef YYFREE 331 333 # define YYFREE free 332 - # if ! defined free && ! defined _STDLIB_H && (defined __STDC__ || defined __C99__FUNC__ \ 334 + # if ! defined free && ! defined EXIT_SUCCESS && (defined __STDC__ || defined __C99__FUNC__ \ 333 335 || defined __cplusplus || defined _MSC_VER) 334 336 void free (void *); /* INFRINGES ON USER NAME SPACE */ 335 337 # endif ··· 358 360 ((N) * (sizeof (yytype_int16) + sizeof (YYSTYPE)) \ 359 361 + YYSTACK_GAP_MAXIMUM) 360 362 361 - /* Copy COUNT objects from FROM to TO. The source and destination do 362 - not overlap. */ 363 - # ifndef YYCOPY 364 - # if defined __GNUC__ && 1 < __GNUC__ 365 - # define YYCOPY(To, From, Count) \ 366 - __builtin_memcpy (To, From, (Count) * sizeof (*(From))) 367 - # else 368 - # define YYCOPY(To, From, Count) \ 369 - do \ 370 - { \ 371 - YYSIZE_T yyi; \ 372 - for (yyi = 0; yyi < (Count); yyi++) \ 373 - (To)[yyi] = (From)[yyi]; \ 374 - } \ 375 - while (YYID (0)) 376 - # endif 377 - # endif 363 + # define YYCOPY_NEEDED 1 378 364 379 365 /* Relocate STACK from its old location to the new one. The 380 366 local variables YYSIZE and YYSTACKSIZE give the old and new number of ··· 377 395 while (YYID (0)) 378 396 379 397 #endif 398 + 399 + #if defined YYCOPY_NEEDED && YYCOPY_NEEDED 400 + /* Copy COUNT objects from FROM to TO. The source and destination do 401 + not overlap. 
*/ 402 + # ifndef YYCOPY 403 + # if defined __GNUC__ && 1 < __GNUC__ 404 + # define YYCOPY(To, From, Count) \ 405 + __builtin_memcpy (To, From, (Count) * sizeof (*(From))) 406 + # else 407 + # define YYCOPY(To, From, Count) \ 408 + do \ 409 + { \ 410 + YYSIZE_T yyi; \ 411 + for (yyi = 0; yyi < (Count); yyi++) \ 412 + (To)[yyi] = (From)[yyi]; \ 413 + } \ 414 + while (YYID (0)) 415 + # endif 416 + # endif 417 + #endif /* !YYCOPY_NEEDED */ 380 418 381 419 /* YYFINAL -- State number of the termination state. */ 382 420 #define YYFINAL 4 ··· 573 571 2, 0, 2, 2, 0, 2, 2, 2, 3, 2 574 572 }; 575 573 576 - /* YYDEFACT[STATE-NAME] -- Default rule to reduce with in state 577 - STATE-NUM when YYTABLE doesn't specify something else to do. Zero 574 + /* YYDEFACT[STATE-NAME] -- Default reduction number in state STATE-NUM. 575 + Performed when YYTABLE doesn't specify something else to do. Zero 578 576 means the default is an error. */ 579 577 static const yytype_uint8 yydefact[] = 580 578 { ··· 635 633 636 634 /* YYTABLE[YYPACT[STATE-NUM]]. What to do in state STATE-NUM. If 637 635 positive, shift that token. If negative, reduce the rule which 638 - number is the opposite. If zero, do what YYDEFACT says. 639 - If YYTABLE_NINF, syntax error. */ 636 + number is the opposite. If YYTABLE_NINF, syntax error. */ 640 637 #define YYTABLE_NINF -1 641 638 static const yytype_uint8 yytable[] = 642 639 { ··· 654 653 68, 0, 0, 70, 0, 0, 0, 0, 72, 0, 655 654 137, 0, 73, 139 656 655 }; 656 + 657 + #define yypact_value_is_default(yystate) \ 658 + ((yystate) == (-78)) 659 + 660 + #define yytable_value_is_error(yytable_value) \ 661 + YYID (0) 657 662 658 663 static const yytype_int16 yycheck[] = 659 664 { ··· 712 705 713 706 /* Like YYERROR except do call yyerror. This remains here temporarily 714 707 to ease the transition to the new meaning of YYERROR, for GCC. 715 - Once GCC version 2 has supplanted version 1, this can go. */ 708 + Once GCC version 2 has supplanted version 1, this can go. 
However, 709 + YYFAIL appears to be in use. Nevertheless, it is formally deprecated 710 + in Bison 2.4.2's NEWS entry, where a plan to phase it out is 711 + discussed. */ 716 712 717 713 #define YYFAIL goto yyerrlab 714 + #if defined YYFAIL 715 + /* This is here to suppress warnings from the GCC cpp's 716 + -Wunused-macros. Normally we don't worry about that warning, but 717 + some users do, and we want to make it easy for users to remove 718 + YYFAIL uses, which will produce warnings from Bison 2.5. */ 719 + #endif 718 720 719 721 #define YYRECOVERING() (!!yyerrstatus) 720 722 ··· 733 717 { \ 734 718 yychar = (Token); \ 735 719 yylval = (Value); \ 736 - yytoken = YYTRANSLATE (yychar); \ 737 720 YYPOPSTACK (1); \ 738 721 goto yybackup; \ 739 722 } \ ··· 774 759 #endif 775 760 776 761 777 - /* YY_LOCATION_PRINT -- Print the location on the stream. 778 - This macro was not mandated originally: define only if we know 779 - we won't break user code: when these are the locations we know. */ 762 + /* This macro is provided for backward compatibility. */ 780 763 781 764 #ifndef YY_LOCATION_PRINT 782 - # if YYLTYPE_IS_TRIVIAL 783 - # define YY_LOCATION_PRINT(File, Loc) \ 784 - fprintf (File, "%d.%d-%d.%d", \ 785 - (Loc).first_line, (Loc).first_column, \ 786 - (Loc).last_line, (Loc).last_column) 787 - # else 788 - # define YY_LOCATION_PRINT(File, Loc) ((void) 0) 789 - # endif 765 + # define YY_LOCATION_PRINT(File, Loc) ((void) 0) 790 766 #endif 791 767 792 768 ··· 969 963 # define YYMAXDEPTH 10000 970 964 #endif 971 965 972 - 973 966 974 967 #if YYERROR_VERBOSE 975 968 ··· 1071 1066 } 1072 1067 # endif 1073 1068 1074 - /* Copy into YYRESULT an error message about the unexpected token 1075 - YYCHAR while in state YYSTATE. Return the number of bytes copied, 1076 - including the terminating null byte. If YYRESULT is null, do not 1077 - copy anything; just return the number of bytes that would be 1078 - copied. 
As a special case, return 0 if an ordinary "syntax error" 1079 - message will do. Return YYSIZE_MAXIMUM if overflow occurs during 1080 - size calculation. */ 1081 - static YYSIZE_T 1082 - yysyntax_error (char *yyresult, int yystate, int yychar) 1069 + /* Copy into *YYMSG, which is of size *YYMSG_ALLOC, an error message 1070 + about the unexpected token YYTOKEN for the state stack whose top is 1071 + YYSSP. 1072 + 1073 + Return 0 if *YYMSG was successfully written. Return 1 if *YYMSG is 1074 + not large enough to hold the message. In that case, also set 1075 + *YYMSG_ALLOC to the required number of bytes. Return 2 if the 1076 + required number of bytes is too large to store. */ 1077 + static int 1078 + yysyntax_error (YYSIZE_T *yymsg_alloc, char **yymsg, 1079 + yytype_int16 *yyssp, int yytoken) 1083 1080 { 1084 - int yyn = yypact[yystate]; 1081 + YYSIZE_T yysize0 = yytnamerr (0, yytname[yytoken]); 1082 + YYSIZE_T yysize = yysize0; 1083 + YYSIZE_T yysize1; 1084 + enum { YYERROR_VERBOSE_ARGS_MAXIMUM = 5 }; 1085 + /* Internationalized format string. */ 1086 + const char *yyformat = 0; 1087 + /* Arguments of yyformat. */ 1088 + char const *yyarg[YYERROR_VERBOSE_ARGS_MAXIMUM]; 1089 + /* Number of reported tokens (one for the "unexpected", one per 1090 + "expected"). */ 1091 + int yycount = 0; 1085 1092 1086 - if (! (YYPACT_NINF < yyn && yyn <= YYLAST)) 1087 - return 0; 1088 - else 1093 + /* There are many possibilities here to consider: 1094 + - Assume YYFAIL is not used. It's too flawed to consider. See 1095 + <http://lists.gnu.org/archive/html/bison-patches/2009-12/msg00024.html> 1096 + for details. YYERROR is fine as it does not invoke this 1097 + function. 1098 + - If this state is a consistent state with a default action, then 1099 + the only way this function was invoked is if the default action 1100 + is an error action. In that case, don't check for expected 1101 + tokens because there are none. 
1102 + - The only way there can be no lookahead present (in yychar) is if 1103 + this state is a consistent state with a default action. Thus, 1104 + detecting the absence of a lookahead is sufficient to determine 1105 + that there is no unexpected or expected token to report. In that 1106 + case, just report a simple "syntax error". 1107 + - Don't assume there isn't a lookahead just because this state is a 1108 + consistent state with a default action. There might have been a 1109 + previous inconsistent state, consistent state with a non-default 1110 + action, or user semantic action that manipulated yychar. 1111 + - Of course, the expected token list depends on states to have 1112 + correct lookahead information, and it depends on the parser not 1113 + to perform extra reductions after fetching a lookahead from the 1114 + scanner and before detecting a syntax error. Thus, state merging 1115 + (from LALR or IELR) and default reductions corrupt the expected 1116 + token list. However, the list is correct for canonical LR with 1117 + one exception: it will still contain any token that will not be 1118 + accepted due to an error action in a later state. 1119 + */ 1120 + if (yytoken != YYEMPTY) 1089 1121 { 1090 - int yytype = YYTRANSLATE (yychar); 1091 - YYSIZE_T yysize0 = yytnamerr (0, yytname[yytype]); 1092 - YYSIZE_T yysize = yysize0; 1093 - YYSIZE_T yysize1; 1094 - int yysize_overflow = 0; 1095 - enum { YYERROR_VERBOSE_ARGS_MAXIMUM = 5 }; 1096 - char const *yyarg[YYERROR_VERBOSE_ARGS_MAXIMUM]; 1097 - int yyx; 1122 + int yyn = yypact[*yyssp]; 1123 + yyarg[yycount++] = yytname[yytoken]; 1124 + if (!yypact_value_is_default (yyn)) 1125 + { 1126 + /* Start YYX at -YYN if negative to avoid negative indexes in 1127 + YYCHECK. In other words, skip the first -YYN actions for 1128 + this state because they are default actions. */ 1129 + int yyxbegin = yyn < 0 ? -yyn : 0; 1130 + /* Stay within bounds of both yycheck and yytname. 
*/ 1131 + int yychecklim = YYLAST - yyn + 1; 1132 + int yyxend = yychecklim < YYNTOKENS ? yychecklim : YYNTOKENS; 1133 + int yyx; 1098 1134 1099 - # if 0 1100 - /* This is so xgettext sees the translatable formats that are 1101 - constructed on the fly. */ 1102 - YY_("syntax error, unexpected %s"); 1103 - YY_("syntax error, unexpected %s, expecting %s"); 1104 - YY_("syntax error, unexpected %s, expecting %s or %s"); 1105 - YY_("syntax error, unexpected %s, expecting %s or %s or %s"); 1106 - YY_("syntax error, unexpected %s, expecting %s or %s or %s or %s"); 1107 - # endif 1108 - char *yyfmt; 1109 - char const *yyf; 1110 - static char const yyunexpected[] = "syntax error, unexpected %s"; 1111 - static char const yyexpecting[] = ", expecting %s"; 1112 - static char const yyor[] = " or %s"; 1113 - char yyformat[sizeof yyunexpected 1114 - + sizeof yyexpecting - 1 1115 - + ((YYERROR_VERBOSE_ARGS_MAXIMUM - 2) 1116 - * (sizeof yyor - 1))]; 1117 - char const *yyprefix = yyexpecting; 1118 - 1119 - /* Start YYX at -YYN if negative to avoid negative indexes in 1120 - YYCHECK. */ 1121 - int yyxbegin = yyn < 0 ? -yyn : 0; 1122 - 1123 - /* Stay within bounds of both yycheck and yytname. */ 1124 - int yychecklim = YYLAST - yyn + 1; 1125 - int yyxend = yychecklim < YYNTOKENS ? 
yychecklim : YYNTOKENS; 1126 - int yycount = 1; 1127 - 1128 - yyarg[0] = yytname[yytype]; 1129 - yyfmt = yystpcpy (yyformat, yyunexpected); 1130 - 1131 - for (yyx = yyxbegin; yyx < yyxend; ++yyx) 1132 - if (yycheck[yyx + yyn] == yyx && yyx != YYTERROR) 1133 - { 1134 - if (yycount == YYERROR_VERBOSE_ARGS_MAXIMUM) 1135 - { 1136 - yycount = 1; 1137 - yysize = yysize0; 1138 - yyformat[sizeof yyunexpected - 1] = '\0'; 1139 - break; 1140 - } 1141 - yyarg[yycount++] = yytname[yyx]; 1142 - yysize1 = yysize + yytnamerr (0, yytname[yyx]); 1143 - yysize_overflow |= (yysize1 < yysize); 1144 - yysize = yysize1; 1145 - yyfmt = yystpcpy (yyfmt, yyprefix); 1146 - yyprefix = yyor; 1147 - } 1148 - 1149 - yyf = YY_(yyformat); 1150 - yysize1 = yysize + yystrlen (yyf); 1151 - yysize_overflow |= (yysize1 < yysize); 1152 - yysize = yysize1; 1153 - 1154 - if (yysize_overflow) 1155 - return YYSIZE_MAXIMUM; 1156 - 1157 - if (yyresult) 1158 - { 1159 - /* Avoid sprintf, as that infringes on the user's name space. 1160 - Don't have undefined behavior even if the translation 1161 - produced a string with the wrong number of "%s"s. */ 1162 - char *yyp = yyresult; 1163 - int yyi = 0; 1164 - while ((*yyp = *yyf) != '\0') 1165 - { 1166 - if (*yyp == '%' && yyf[1] == 's' && yyi < yycount) 1167 - { 1168 - yyp += yytnamerr (yyp, yyarg[yyi++]); 1169 - yyf += 2; 1170 - } 1171 - else 1172 - { 1173 - yyp++; 1174 - yyf++; 1175 - } 1176 - } 1177 - } 1178 - return yysize; 1135 + for (yyx = yyxbegin; yyx < yyxend; ++yyx) 1136 + if (yycheck[yyx + yyn] == yyx && yyx != YYTERROR 1137 + && !yytable_value_is_error (yytable[yyx + yyn])) 1138 + { 1139 + if (yycount == YYERROR_VERBOSE_ARGS_MAXIMUM) 1140 + { 1141 + yycount = 1; 1142 + yysize = yysize0; 1143 + break; 1144 + } 1145 + yyarg[yycount++] = yytname[yyx]; 1146 + yysize1 = yysize + yytnamerr (0, yytname[yyx]); 1147 + if (! 
(yysize <= yysize1 1148 + && yysize1 <= YYSTACK_ALLOC_MAXIMUM)) 1149 + return 2; 1150 + yysize = yysize1; 1151 + } 1152 + } 1179 1153 } 1154 + 1155 + switch (yycount) 1156 + { 1157 + # define YYCASE_(N, S) \ 1158 + case N: \ 1159 + yyformat = S; \ 1160 + break 1161 + YYCASE_(0, YY_("syntax error")); 1162 + YYCASE_(1, YY_("syntax error, unexpected %s")); 1163 + YYCASE_(2, YY_("syntax error, unexpected %s, expecting %s")); 1164 + YYCASE_(3, YY_("syntax error, unexpected %s, expecting %s or %s")); 1165 + YYCASE_(4, YY_("syntax error, unexpected %s, expecting %s or %s or %s")); 1166 + YYCASE_(5, YY_("syntax error, unexpected %s, expecting %s or %s or %s or %s")); 1167 + # undef YYCASE_ 1168 + } 1169 + 1170 + yysize1 = yysize + yystrlen (yyformat); 1171 + if (! (yysize <= yysize1 && yysize1 <= YYSTACK_ALLOC_MAXIMUM)) 1172 + return 2; 1173 + yysize = yysize1; 1174 + 1175 + if (*yymsg_alloc < yysize) 1176 + { 1177 + *yymsg_alloc = 2 * yysize; 1178 + if (! (yysize <= *yymsg_alloc 1179 + && *yymsg_alloc <= YYSTACK_ALLOC_MAXIMUM)) 1180 + *yymsg_alloc = YYSTACK_ALLOC_MAXIMUM; 1181 + return 1; 1182 + } 1183 + 1184 + /* Avoid sprintf, as that infringes on the user's name space. 1185 + Don't have undefined behavior even if the translation 1186 + produced a string with the wrong number of "%s"s. */ 1187 + { 1188 + char *yyp = *yymsg; 1189 + int yyi = 0; 1190 + while ((*yyp = *yyformat) != '\0') 1191 + if (*yyp == '%' && yyformat[1] == 's' && yyi < yycount) 1192 + { 1193 + yyp += yytnamerr (yyp, yyarg[yyi++]); 1194 + yyformat += 2; 1195 + } 1196 + else 1197 + { 1198 + yyp++; 1199 + yyformat++; 1200 + } 1201 + } 1202 + return 0; 1180 1203 } 1181 1204 #endif /* YYERROR_VERBOSE */ 1182 - 1183 1205 1184 1206 /*-----------------------------------------------. 1185 1207 | Release the memory associated to this symbol. | ··· 1239 1207 } 1240 1208 } 1241 1209 1210 + 1242 1211 /* Prevent warnings from -Wmissing-prototypes. 
*/ 1243 1212 #ifdef YYPARSE_PARAM 1244 1213 #if defined __STDC__ || defined __cplusplus ··· 1266 1233 int yynerrs; 1267 1234 1268 1235 1269 - 1270 - /*-------------------------. 1271 - | yyparse or yypush_parse. | 1272 - `-------------------------*/ 1236 + /*----------. 1237 + | yyparse. | 1238 + `----------*/ 1273 1239 1274 1240 #ifdef YYPARSE_PARAM 1275 1241 #if (defined __STDC__ || defined __C99__FUNC__ \ ··· 1292 1260 #endif 1293 1261 #endif 1294 1262 { 1295 - 1296 - 1297 1263 int yystate; 1298 1264 /* Number of tokens to shift before error messages enabled. */ 1299 1265 int yyerrstatus; ··· 1446 1416 1447 1417 /* First try to decide what to do without reference to lookahead token. */ 1448 1418 yyn = yypact[yystate]; 1449 - if (yyn == YYPACT_NINF) 1419 + if (yypact_value_is_default (yyn)) 1450 1420 goto yydefault; 1451 1421 1452 1422 /* Not known => get a lookahead token if don't already have one. */ ··· 1477 1447 yyn = yytable[yyn]; 1478 1448 if (yyn <= 0) 1479 1449 { 1480 - if (yyn == 0 || yyn == YYTABLE_NINF) 1481 - goto yyerrlab; 1450 + if (yytable_value_is_error (yyn)) 1451 + goto yyerrlab; 1482 1452 yyn = -yyn; 1483 1453 goto yyreduce; 1484 1454 } ··· 1533 1503 { 1534 1504 case 2: 1535 1505 1536 - /* Line 1455 of yacc.c */ 1506 + /* Line 1806 of yacc.c */ 1537 1507 #line 110 "dtc-parser.y" 1538 1508 { 1539 1509 the_boot_info = build_boot_info((yyvsp[(3) - (4)].re), (yyvsp[(4) - (4)].node), 1540 1510 guess_boot_cpuid((yyvsp[(4) - (4)].node))); 1541 - ;} 1511 + } 1542 1512 break; 1543 1513 1544 1514 case 3: 1545 1515 1546 - /* Line 1455 of yacc.c */ 1516 + /* Line 1806 of yacc.c */ 1547 1517 #line 118 "dtc-parser.y" 1548 1518 { 1549 1519 (yyval.re) = NULL; 1550 - ;} 1520 + } 1551 1521 break; 1552 1522 1553 1523 case 4: 1554 1524 1555 - /* Line 1455 of yacc.c */ 1525 + /* Line 1806 of yacc.c */ 1556 1526 #line 122 "dtc-parser.y" 1557 1527 { 1558 1528 (yyval.re) = chain_reserve_entry((yyvsp[(1) - (2)].re), (yyvsp[(2) - (2)].re)); 1559 - ;} 1529 + } 1560 1530 
break; 1561 1531 1562 1532 case 5: 1563 1533 1564 - /* Line 1455 of yacc.c */ 1534 + /* Line 1806 of yacc.c */ 1565 1535 #line 129 "dtc-parser.y" 1566 1536 { 1567 1537 (yyval.re) = build_reserve_entry((yyvsp[(2) - (4)].integer), (yyvsp[(3) - (4)].integer)); 1568 - ;} 1538 + } 1569 1539 break; 1570 1540 1571 1541 case 6: 1572 1542 1573 - /* Line 1455 of yacc.c */ 1543 + /* Line 1806 of yacc.c */ 1574 1544 #line 133 "dtc-parser.y" 1575 1545 { 1576 1546 add_label(&(yyvsp[(2) - (2)].re)->labels, (yyvsp[(1) - (2)].labelref)); 1577 1547 (yyval.re) = (yyvsp[(2) - (2)].re); 1578 - ;} 1548 + } 1579 1549 break; 1580 1550 1581 1551 case 7: 1582 1552 1583 - /* Line 1455 of yacc.c */ 1553 + /* Line 1806 of yacc.c */ 1584 1554 #line 141 "dtc-parser.y" 1585 1555 { 1586 1556 (yyval.node) = name_node((yyvsp[(2) - (2)].node), ""); 1587 - ;} 1557 + } 1588 1558 break; 1589 1559 1590 1560 case 8: 1591 1561 1592 - /* Line 1455 of yacc.c */ 1562 + /* Line 1806 of yacc.c */ 1593 1563 #line 145 "dtc-parser.y" 1594 1564 { 1595 1565 (yyval.node) = merge_nodes((yyvsp[(1) - (3)].node), (yyvsp[(3) - (3)].node)); 1596 - ;} 1566 + } 1597 1567 break; 1598 1568 1599 1569 case 9: 1600 1570 1601 - /* Line 1455 of yacc.c */ 1571 + /* Line 1806 of yacc.c */ 1602 1572 #line 149 "dtc-parser.y" 1603 1573 { 1604 1574 struct node *target = get_node_by_ref((yyvsp[(1) - (3)].node), (yyvsp[(2) - (3)].labelref)); ··· 1608 1578 else 1609 1579 print_error("label or path, '%s', not found", (yyvsp[(2) - (3)].labelref)); 1610 1580 (yyval.node) = (yyvsp[(1) - (3)].node); 1611 - ;} 1581 + } 1612 1582 break; 1613 1583 1614 1584 case 10: 1615 1585 1616 - /* Line 1455 of yacc.c */ 1586 + /* Line 1806 of yacc.c */ 1617 1587 #line 159 "dtc-parser.y" 1618 1588 { 1619 1589 struct node *target = get_node_by_ref((yyvsp[(1) - (4)].node), (yyvsp[(3) - (4)].labelref)); ··· 1624 1594 delete_node(target); 1625 1595 1626 1596 (yyval.node) = (yyvsp[(1) - (4)].node); 1627 - ;} 1597 + } 1628 1598 break; 1629 1599 1630 1600 case 11: 
1631 1601 1632 - /* Line 1455 of yacc.c */ 1602 + /* Line 1806 of yacc.c */ 1633 1603 #line 173 "dtc-parser.y" 1634 1604 { 1635 1605 (yyval.node) = build_node((yyvsp[(2) - (5)].proplist), (yyvsp[(3) - (5)].nodelist)); 1636 - ;} 1606 + } 1637 1607 break; 1638 1608 1639 1609 case 12: 1640 1610 1641 - /* Line 1455 of yacc.c */ 1611 + /* Line 1806 of yacc.c */ 1642 1612 #line 180 "dtc-parser.y" 1643 1613 { 1644 1614 (yyval.proplist) = NULL; 1645 - ;} 1615 + } 1646 1616 break; 1647 1617 1648 1618 case 13: 1649 1619 1650 - /* Line 1455 of yacc.c */ 1620 + /* Line 1806 of yacc.c */ 1651 1621 #line 184 "dtc-parser.y" 1652 1622 { 1653 1623 (yyval.proplist) = chain_property((yyvsp[(2) - (2)].prop), (yyvsp[(1) - (2)].proplist)); 1654 - ;} 1624 + } 1655 1625 break; 1656 1626 1657 1627 case 14: 1658 1628 1659 - /* Line 1455 of yacc.c */ 1629 + /* Line 1806 of yacc.c */ 1660 1630 #line 191 "dtc-parser.y" 1661 1631 { 1662 1632 (yyval.prop) = build_property((yyvsp[(1) - (4)].propnodename), (yyvsp[(3) - (4)].data)); 1663 - ;} 1633 + } 1664 1634 break; 1665 1635 1666 1636 case 15: 1667 1637 1668 - /* Line 1455 of yacc.c */ 1638 + /* Line 1806 of yacc.c */ 1669 1639 #line 195 "dtc-parser.y" 1670 1640 { 1671 1641 (yyval.prop) = build_property((yyvsp[(1) - (2)].propnodename), empty_data); 1672 - ;} 1642 + } 1673 1643 break; 1674 1644 1675 1645 case 16: 1676 1646 1677 - /* Line 1455 of yacc.c */ 1647 + /* Line 1806 of yacc.c */ 1678 1648 #line 199 "dtc-parser.y" 1679 1649 { 1680 1650 (yyval.prop) = build_property_delete((yyvsp[(2) - (3)].propnodename)); 1681 - ;} 1651 + } 1682 1652 break; 1683 1653 1684 1654 case 17: 1685 1655 1686 - /* Line 1455 of yacc.c */ 1656 + /* Line 1806 of yacc.c */ 1687 1657 #line 203 "dtc-parser.y" 1688 1658 { 1689 1659 add_label(&(yyvsp[(2) - (2)].prop)->labels, (yyvsp[(1) - (2)].labelref)); 1690 1660 (yyval.prop) = (yyvsp[(2) - (2)].prop); 1691 - ;} 1661 + } 1692 1662 break; 1693 1663 1694 1664 case 18: 1695 1665 1696 - /* Line 1455 of yacc.c */ 1666 + /* 
Line 1806 of yacc.c */ 1697 1667 #line 211 "dtc-parser.y" 1698 1668 { 1699 1669 (yyval.data) = data_merge((yyvsp[(1) - (2)].data), (yyvsp[(2) - (2)].data)); 1700 - ;} 1670 + } 1701 1671 break; 1702 1672 1703 1673 case 19: 1704 1674 1705 - /* Line 1455 of yacc.c */ 1675 + /* Line 1806 of yacc.c */ 1706 1676 #line 215 "dtc-parser.y" 1707 1677 { 1708 1678 (yyval.data) = data_merge((yyvsp[(1) - (3)].data), (yyvsp[(2) - (3)].array).data); 1709 - ;} 1679 + } 1710 1680 break; 1711 1681 1712 1682 case 20: 1713 1683 1714 - /* Line 1455 of yacc.c */ 1684 + /* Line 1806 of yacc.c */ 1715 1685 #line 219 "dtc-parser.y" 1716 1686 { 1717 1687 (yyval.data) = data_merge((yyvsp[(1) - (4)].data), (yyvsp[(3) - (4)].data)); 1718 - ;} 1688 + } 1719 1689 break; 1720 1690 1721 1691 case 21: 1722 1692 1723 - /* Line 1455 of yacc.c */ 1693 + /* Line 1806 of yacc.c */ 1724 1694 #line 223 "dtc-parser.y" 1725 1695 { 1726 1696 (yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), REF_PATH, (yyvsp[(2) - (2)].labelref)); 1727 - ;} 1697 + } 1728 1698 break; 1729 1699 1730 1700 case 22: 1731 1701 1732 - /* Line 1455 of yacc.c */ 1702 + /* Line 1806 of yacc.c */ 1733 1703 #line 227 "dtc-parser.y" 1734 1704 { 1735 1705 FILE *f = srcfile_relative_open((yyvsp[(4) - (9)].data).val, NULL); ··· 1746 1716 1747 1717 (yyval.data) = data_merge((yyvsp[(1) - (9)].data), d); 1748 1718 fclose(f); 1749 - ;} 1719 + } 1750 1720 break; 1751 1721 1752 1722 case 23: 1753 1723 1754 - /* Line 1455 of yacc.c */ 1724 + /* Line 1806 of yacc.c */ 1755 1725 #line 244 "dtc-parser.y" 1756 1726 { 1757 1727 FILE *f = srcfile_relative_open((yyvsp[(4) - (5)].data).val, NULL); ··· 1761 1731 1762 1732 (yyval.data) = data_merge((yyvsp[(1) - (5)].data), d); 1763 1733 fclose(f); 1764 - ;} 1734 + } 1765 1735 break; 1766 1736 1767 1737 case 24: 1768 1738 1769 - /* Line 1455 of yacc.c */ 1739 + /* Line 1806 of yacc.c */ 1770 1740 #line 254 "dtc-parser.y" 1771 1741 { 1772 1742 (yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), LABEL, 
(yyvsp[(2) - (2)].labelref)); 1773 - ;} 1743 + } 1774 1744 break; 1775 1745 1776 1746 case 25: 1777 1747 1778 - /* Line 1455 of yacc.c */ 1748 + /* Line 1806 of yacc.c */ 1779 1749 #line 261 "dtc-parser.y" 1780 1750 { 1781 1751 (yyval.data) = empty_data; 1782 - ;} 1752 + } 1783 1753 break; 1784 1754 1785 1755 case 26: 1786 1756 1787 - /* Line 1455 of yacc.c */ 1757 + /* Line 1806 of yacc.c */ 1788 1758 #line 265 "dtc-parser.y" 1789 1759 { 1790 1760 (yyval.data) = (yyvsp[(1) - (2)].data); 1791 - ;} 1761 + } 1792 1762 break; 1793 1763 1794 1764 case 27: 1795 1765 1796 - /* Line 1455 of yacc.c */ 1766 + /* Line 1806 of yacc.c */ 1797 1767 #line 269 "dtc-parser.y" 1798 1768 { 1799 1769 (yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), LABEL, (yyvsp[(2) - (2)].labelref)); 1800 - ;} 1770 + } 1801 1771 break; 1802 1772 1803 1773 case 28: 1804 1774 1805 - /* Line 1455 of yacc.c */ 1775 + /* Line 1806 of yacc.c */ 1806 1776 #line 276 "dtc-parser.y" 1807 1777 { 1808 1778 (yyval.array).data = empty_data; ··· 1817 1787 " are currently supported"); 1818 1788 (yyval.array).bits = 32; 1819 1789 } 1820 - ;} 1790 + } 1821 1791 break; 1822 1792 1823 1793 case 29: 1824 1794 1825 - /* Line 1455 of yacc.c */ 1795 + /* Line 1806 of yacc.c */ 1826 1796 #line 291 "dtc-parser.y" 1827 1797 { 1828 1798 (yyval.array).data = empty_data; 1829 1799 (yyval.array).bits = 32; 1830 - ;} 1800 + } 1831 1801 break; 1832 1802 1833 1803 case 30: 1834 1804 1835 - /* Line 1455 of yacc.c */ 1805 + /* Line 1806 of yacc.c */ 1836 1806 #line 296 "dtc-parser.y" 1837 1807 { 1838 1808 if ((yyvsp[(1) - (2)].array).bits < 64) { ··· 1852 1822 } 1853 1823 1854 1824 (yyval.array).data = data_append_integer((yyvsp[(1) - (2)].array).data, (yyvsp[(2) - (2)].integer), (yyvsp[(1) - (2)].array).bits); 1855 - ;} 1825 + } 1856 1826 break; 1857 1827 1858 1828 case 31: 1859 1829 1860 - /* Line 1455 of yacc.c */ 1830 + /* Line 1806 of yacc.c */ 1861 1831 #line 316 "dtc-parser.y" 1862 1832 { 1863 1833 uint64_t val = ~0ULL 
>> (64 - (yyvsp[(1) - (2)].array).bits); ··· 1871 1841 "arrays with 32-bit elements."); 1872 1842 1873 1843 (yyval.array).data = data_append_integer((yyvsp[(1) - (2)].array).data, val, (yyvsp[(1) - (2)].array).bits); 1874 - ;} 1844 + } 1875 1845 break; 1876 1846 1877 1847 case 32: 1878 1848 1879 - /* Line 1455 of yacc.c */ 1849 + /* Line 1806 of yacc.c */ 1880 1850 #line 330 "dtc-parser.y" 1881 1851 { 1882 1852 (yyval.array).data = data_add_marker((yyvsp[(1) - (2)].array).data, LABEL, (yyvsp[(2) - (2)].labelref)); 1883 - ;} 1853 + } 1884 1854 break; 1885 1855 1886 1856 case 33: 1887 1857 1888 - /* Line 1455 of yacc.c */ 1858 + /* Line 1806 of yacc.c */ 1889 1859 #line 337 "dtc-parser.y" 1890 1860 { 1891 1861 (yyval.integer) = eval_literal((yyvsp[(1) - (1)].literal), 0, 64); 1892 - ;} 1862 + } 1893 1863 break; 1894 1864 1895 1865 case 34: 1896 1866 1897 - /* Line 1455 of yacc.c */ 1867 + /* Line 1806 of yacc.c */ 1898 1868 #line 341 "dtc-parser.y" 1899 1869 { 1900 1870 (yyval.integer) = eval_char_literal((yyvsp[(1) - (1)].literal)); 1901 - ;} 1871 + } 1902 1872 break; 1903 1873 1904 1874 case 35: 1905 1875 1906 - /* Line 1455 of yacc.c */ 1876 + /* Line 1806 of yacc.c */ 1907 1877 #line 345 "dtc-parser.y" 1908 1878 { 1909 1879 (yyval.integer) = (yyvsp[(2) - (3)].integer); 1910 - ;} 1880 + } 1911 1881 break; 1912 1882 1913 1883 case 38: 1914 1884 1915 - /* Line 1455 of yacc.c */ 1885 + /* Line 1806 of yacc.c */ 1916 1886 #line 356 "dtc-parser.y" 1917 - { (yyval.integer) = (yyvsp[(1) - (5)].integer) ? (yyvsp[(3) - (5)].integer) : (yyvsp[(5) - (5)].integer); ;} 1887 + { (yyval.integer) = (yyvsp[(1) - (5)].integer) ? 
(yyvsp[(3) - (5)].integer) : (yyvsp[(5) - (5)].integer); } 1918 1888 break; 1919 1889 1920 1890 case 40: 1921 1891 1922 - /* Line 1455 of yacc.c */ 1892 + /* Line 1806 of yacc.c */ 1923 1893 #line 361 "dtc-parser.y" 1924 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) || (yyvsp[(3) - (3)].integer); ;} 1894 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) || (yyvsp[(3) - (3)].integer); } 1925 1895 break; 1926 1896 1927 1897 case 42: 1928 1898 1929 - /* Line 1455 of yacc.c */ 1899 + /* Line 1806 of yacc.c */ 1930 1900 #line 366 "dtc-parser.y" 1931 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) && (yyvsp[(3) - (3)].integer); ;} 1901 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) && (yyvsp[(3) - (3)].integer); } 1932 1902 break; 1933 1903 1934 1904 case 44: 1935 1905 1936 - /* Line 1455 of yacc.c */ 1906 + /* Line 1806 of yacc.c */ 1937 1907 #line 371 "dtc-parser.y" 1938 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) | (yyvsp[(3) - (3)].integer); ;} 1908 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) | (yyvsp[(3) - (3)].integer); } 1939 1909 break; 1940 1910 1941 1911 case 46: 1942 1912 1943 - /* Line 1455 of yacc.c */ 1913 + /* Line 1806 of yacc.c */ 1944 1914 #line 376 "dtc-parser.y" 1945 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) ^ (yyvsp[(3) - (3)].integer); ;} 1915 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) ^ (yyvsp[(3) - (3)].integer); } 1946 1916 break; 1947 1917 1948 1918 case 48: 1949 1919 1950 - /* Line 1455 of yacc.c */ 1920 + /* Line 1806 of yacc.c */ 1951 1921 #line 381 "dtc-parser.y" 1952 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) & (yyvsp[(3) - (3)].integer); ;} 1922 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) & (yyvsp[(3) - (3)].integer); } 1953 1923 break; 1954 1924 1955 1925 case 50: 1956 1926 1957 - /* Line 1455 of yacc.c */ 1927 + /* Line 1806 of yacc.c */ 1958 1928 #line 386 "dtc-parser.y" 1959 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) == (yyvsp[(3) - (3)].integer); ;} 1929 + { (yyval.integer) = (yyvsp[(1) - 
(3)].integer) == (yyvsp[(3) - (3)].integer); } 1960 1930 break; 1961 1931 1962 1932 case 51: 1963 1933 1964 - /* Line 1455 of yacc.c */ 1934 + /* Line 1806 of yacc.c */ 1965 1935 #line 387 "dtc-parser.y" 1966 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) != (yyvsp[(3) - (3)].integer); ;} 1936 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) != (yyvsp[(3) - (3)].integer); } 1967 1937 break; 1968 1938 1969 1939 case 53: 1970 1940 1971 - /* Line 1455 of yacc.c */ 1941 + /* Line 1806 of yacc.c */ 1972 1942 #line 392 "dtc-parser.y" 1973 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) < (yyvsp[(3) - (3)].integer); ;} 1943 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) < (yyvsp[(3) - (3)].integer); } 1974 1944 break; 1975 1945 1976 1946 case 54: 1977 1947 1978 - /* Line 1455 of yacc.c */ 1948 + /* Line 1806 of yacc.c */ 1979 1949 #line 393 "dtc-parser.y" 1980 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) > (yyvsp[(3) - (3)].integer); ;} 1950 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) > (yyvsp[(3) - (3)].integer); } 1981 1951 break; 1982 1952 1983 1953 case 55: 1984 1954 1985 - /* Line 1455 of yacc.c */ 1955 + /* Line 1806 of yacc.c */ 1986 1956 #line 394 "dtc-parser.y" 1987 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) <= (yyvsp[(3) - (3)].integer); ;} 1957 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) <= (yyvsp[(3) - (3)].integer); } 1988 1958 break; 1989 1959 1990 1960 case 56: 1991 1961 1992 - /* Line 1455 of yacc.c */ 1962 + /* Line 1806 of yacc.c */ 1993 1963 #line 395 "dtc-parser.y" 1994 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) >= (yyvsp[(3) - (3)].integer); ;} 1964 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) >= (yyvsp[(3) - (3)].integer); } 1995 1965 break; 1996 1966 1997 1967 case 57: 1998 1968 1999 - /* Line 1455 of yacc.c */ 1969 + /* Line 1806 of yacc.c */ 2000 1970 #line 399 "dtc-parser.y" 2001 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) << (yyvsp[(3) - (3)].integer); ;} 1971 + { (yyval.integer) = (yyvsp[(1) - 
(3)].integer) << (yyvsp[(3) - (3)].integer); } 2002 1972 break; 2003 1973 2004 1974 case 58: 2005 1975 2006 - /* Line 1455 of yacc.c */ 1976 + /* Line 1806 of yacc.c */ 2007 1977 #line 400 "dtc-parser.y" 2008 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) >> (yyvsp[(3) - (3)].integer); ;} 1978 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) >> (yyvsp[(3) - (3)].integer); } 2009 1979 break; 2010 1980 2011 1981 case 60: 2012 1982 2013 - /* Line 1455 of yacc.c */ 1983 + /* Line 1806 of yacc.c */ 2014 1984 #line 405 "dtc-parser.y" 2015 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) + (yyvsp[(3) - (3)].integer); ;} 1985 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) + (yyvsp[(3) - (3)].integer); } 2016 1986 break; 2017 1987 2018 1988 case 61: 2019 1989 2020 - /* Line 1455 of yacc.c */ 1990 + /* Line 1806 of yacc.c */ 2021 1991 #line 406 "dtc-parser.y" 2022 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) - (yyvsp[(3) - (3)].integer); ;} 1992 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) - (yyvsp[(3) - (3)].integer); } 2023 1993 break; 2024 1994 2025 1995 case 63: 2026 1996 2027 - /* Line 1455 of yacc.c */ 1997 + /* Line 1806 of yacc.c */ 2028 1998 #line 411 "dtc-parser.y" 2029 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) * (yyvsp[(3) - (3)].integer); ;} 1999 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) * (yyvsp[(3) - (3)].integer); } 2030 2000 break; 2031 2001 2032 2002 case 64: 2033 2003 2034 - /* Line 1455 of yacc.c */ 2004 + /* Line 1806 of yacc.c */ 2035 2005 #line 412 "dtc-parser.y" 2036 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) / (yyvsp[(3) - (3)].integer); ;} 2006 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) / (yyvsp[(3) - (3)].integer); } 2037 2007 break; 2038 2008 2039 2009 case 65: 2040 2010 2041 - /* Line 1455 of yacc.c */ 2011 + /* Line 1806 of yacc.c */ 2042 2012 #line 413 "dtc-parser.y" 2043 - { (yyval.integer) = (yyvsp[(1) - (3)].integer) % (yyvsp[(3) - (3)].integer); ;} 2013 + { (yyval.integer) = (yyvsp[(1) - (3)].integer) % 
(yyvsp[(3) - (3)].integer); } 2044 2014 break; 2045 2015 2046 2016 case 68: 2047 2017 2048 - /* Line 1455 of yacc.c */ 2018 + /* Line 1806 of yacc.c */ 2049 2019 #line 419 "dtc-parser.y" 2050 - { (yyval.integer) = -(yyvsp[(2) - (2)].integer); ;} 2020 + { (yyval.integer) = -(yyvsp[(2) - (2)].integer); } 2051 2021 break; 2052 2022 2053 2023 case 69: 2054 2024 2055 - /* Line 1455 of yacc.c */ 2025 + /* Line 1806 of yacc.c */ 2056 2026 #line 420 "dtc-parser.y" 2057 - { (yyval.integer) = ~(yyvsp[(2) - (2)].integer); ;} 2027 + { (yyval.integer) = ~(yyvsp[(2) - (2)].integer); } 2058 2028 break; 2059 2029 2060 2030 case 70: 2061 2031 2062 - /* Line 1455 of yacc.c */ 2032 + /* Line 1806 of yacc.c */ 2063 2033 #line 421 "dtc-parser.y" 2064 - { (yyval.integer) = !(yyvsp[(2) - (2)].integer); ;} 2034 + { (yyval.integer) = !(yyvsp[(2) - (2)].integer); } 2065 2035 break; 2066 2036 2067 2037 case 71: 2068 2038 2069 - /* Line 1455 of yacc.c */ 2039 + /* Line 1806 of yacc.c */ 2070 2040 #line 426 "dtc-parser.y" 2071 2041 { 2072 2042 (yyval.data) = empty_data; 2073 - ;} 2043 + } 2074 2044 break; 2075 2045 2076 2046 case 72: 2077 2047 2078 - /* Line 1455 of yacc.c */ 2048 + /* Line 1806 of yacc.c */ 2079 2049 #line 430 "dtc-parser.y" 2080 2050 { 2081 2051 (yyval.data) = data_append_byte((yyvsp[(1) - (2)].data), (yyvsp[(2) - (2)].byte)); 2082 - ;} 2052 + } 2083 2053 break; 2084 2054 2085 2055 case 73: 2086 2056 2087 - /* Line 1455 of yacc.c */ 2057 + /* Line 1806 of yacc.c */ 2088 2058 #line 434 "dtc-parser.y" 2089 2059 { 2090 2060 (yyval.data) = data_add_marker((yyvsp[(1) - (2)].data), LABEL, (yyvsp[(2) - (2)].labelref)); 2091 - ;} 2061 + } 2092 2062 break; 2093 2063 2094 2064 case 74: 2095 2065 2096 - /* Line 1455 of yacc.c */ 2066 + /* Line 1806 of yacc.c */ 2097 2067 #line 441 "dtc-parser.y" 2098 2068 { 2099 2069 (yyval.nodelist) = NULL; 2100 - ;} 2070 + } 2101 2071 break; 2102 2072 2103 2073 case 75: 2104 2074 2105 - /* Line 1455 of yacc.c */ 2075 + /* Line 1806 of yacc.c */ 2106 
2076 #line 445 "dtc-parser.y" 2107 2077 { 2108 2078 (yyval.nodelist) = chain_node((yyvsp[(1) - (2)].node), (yyvsp[(2) - (2)].nodelist)); 2109 - ;} 2079 + } 2110 2080 break; 2111 2081 2112 2082 case 76: 2113 2083 2114 - /* Line 1455 of yacc.c */ 2084 + /* Line 1806 of yacc.c */ 2115 2085 #line 449 "dtc-parser.y" 2116 2086 { 2117 2087 print_error("syntax error: properties must precede subnodes"); 2118 2088 YYERROR; 2119 - ;} 2089 + } 2120 2090 break; 2121 2091 2122 2092 case 77: 2123 2093 2124 - /* Line 1455 of yacc.c */ 2094 + /* Line 1806 of yacc.c */ 2125 2095 #line 457 "dtc-parser.y" 2126 2096 { 2127 2097 (yyval.node) = name_node((yyvsp[(2) - (2)].node), (yyvsp[(1) - (2)].propnodename)); 2128 - ;} 2098 + } 2129 2099 break; 2130 2100 2131 2101 case 78: 2132 2102 2133 - /* Line 1455 of yacc.c */ 2103 + /* Line 1806 of yacc.c */ 2134 2104 #line 461 "dtc-parser.y" 2135 2105 { 2136 2106 (yyval.node) = name_node(build_node_delete(), (yyvsp[(2) - (3)].propnodename)); 2137 - ;} 2107 + } 2138 2108 break; 2139 2109 2140 2110 case 79: 2141 2111 2142 - /* Line 1455 of yacc.c */ 2112 + /* Line 1806 of yacc.c */ 2143 2113 #line 465 "dtc-parser.y" 2144 2114 { 2145 2115 add_label(&(yyvsp[(2) - (2)].node)->labels, (yyvsp[(1) - (2)].labelref)); 2146 2116 (yyval.node) = (yyvsp[(2) - (2)].node); 2147 - ;} 2117 + } 2148 2118 break; 2149 2119 2150 2120 2151 2121 2152 - /* Line 1455 of yacc.c */ 2153 - #line 2124 "dtc-parser.tab.c" 2122 + /* Line 1806 of yacc.c */ 2123 + #line 2154 "dtc-parser.tab.c" 2154 2124 default: break; 2155 2125 } 2126 + /* User semantic actions sometimes alter yychar, and that requires 2127 + that yytoken be updated with the new translation. We take the 2128 + approach of translating immediately before every use of yytoken. 
2129 + One alternative is translating here after every semantic action, 2130 + but that translation would be missed if the semantic action invokes 2131 + YYABORT, YYACCEPT, or YYERROR immediately after altering yychar or 2132 + if it invokes YYBACKUP. In the case of YYABORT or YYACCEPT, an 2133 + incorrect destructor might then be invoked immediately. In the 2134 + case of YYERROR or YYBACKUP, subsequent parser actions might lead 2135 + to an incorrect destructor call or verbose syntax error message 2136 + before the lookahead is translated. */ 2156 2137 YY_SYMBOL_PRINT ("-> $$ =", yyr1[yyn], &yyval, &yyloc); 2157 2138 2158 2139 YYPOPSTACK (yylen); ··· 2191 2150 | yyerrlab -- here on detecting error | 2192 2151 `------------------------------------*/ 2193 2152 yyerrlab: 2153 + /* Make sure we have latest lookahead translation. See comments at 2154 + user semantic actions for why this is necessary. */ 2155 + yytoken = yychar == YYEMPTY ? YYEMPTY : YYTRANSLATE (yychar); 2156 + 2194 2157 /* If not already recovering from an error, report this error. */ 2195 2158 if (!yyerrstatus) 2196 2159 { ··· 2202 2157 #if ! YYERROR_VERBOSE 2203 2158 yyerror (YY_("syntax error")); 2204 2159 #else 2160 + # define YYSYNTAX_ERROR yysyntax_error (&yymsg_alloc, &yymsg, \ 2161 + yyssp, yytoken) 2205 2162 { 2206 - YYSIZE_T yysize = yysyntax_error (0, yystate, yychar); 2207 - if (yymsg_alloc < yysize && yymsg_alloc < YYSTACK_ALLOC_MAXIMUM) 2208 - { 2209 - YYSIZE_T yyalloc = 2 * yysize; 2210 - if (! 
(yysize <= yyalloc && yyalloc <= YYSTACK_ALLOC_MAXIMUM)) 2211 - yyalloc = YYSTACK_ALLOC_MAXIMUM; 2212 - if (yymsg != yymsgbuf) 2213 - YYSTACK_FREE (yymsg); 2214 - yymsg = (char *) YYSTACK_ALLOC (yyalloc); 2215 - if (yymsg) 2216 - yymsg_alloc = yyalloc; 2217 - else 2218 - { 2219 - yymsg = yymsgbuf; 2220 - yymsg_alloc = sizeof yymsgbuf; 2221 - } 2222 - } 2223 - 2224 - if (0 < yysize && yysize <= yymsg_alloc) 2225 - { 2226 - (void) yysyntax_error (yymsg, yystate, yychar); 2227 - yyerror (yymsg); 2228 - } 2229 - else 2230 - { 2231 - yyerror (YY_("syntax error")); 2232 - if (yysize != 0) 2233 - goto yyexhaustedlab; 2234 - } 2163 + char const *yymsgp = YY_("syntax error"); 2164 + int yysyntax_error_status; 2165 + yysyntax_error_status = YYSYNTAX_ERROR; 2166 + if (yysyntax_error_status == 0) 2167 + yymsgp = yymsg; 2168 + else if (yysyntax_error_status == 1) 2169 + { 2170 + if (yymsg != yymsgbuf) 2171 + YYSTACK_FREE (yymsg); 2172 + yymsg = (char *) YYSTACK_ALLOC (yymsg_alloc); 2173 + if (!yymsg) 2174 + { 2175 + yymsg = yymsgbuf; 2176 + yymsg_alloc = sizeof yymsgbuf; 2177 + yysyntax_error_status = 2; 2178 + } 2179 + else 2180 + { 2181 + yysyntax_error_status = YYSYNTAX_ERROR; 2182 + yymsgp = yymsg; 2183 + } 2184 + } 2185 + yyerror (yymsgp); 2186 + if (yysyntax_error_status == 2) 2187 + goto yyexhaustedlab; 2235 2188 } 2189 + # undef YYSYNTAX_ERROR 2236 2190 #endif 2237 2191 } 2238 2192 ··· 2290 2246 for (;;) 2291 2247 { 2292 2248 yyn = yypact[yystate]; 2293 - if (yyn != YYPACT_NINF) 2249 + if (!yypact_value_is_default (yyn)) 2294 2250 { 2295 2251 yyn += YYTERROR; 2296 2252 if (0 <= yyn && yyn <= YYLAST && yycheck[yyn] == YYTERROR) ··· 2349 2305 2350 2306 yyreturn: 2351 2307 if (yychar != YYEMPTY) 2352 - yydestruct ("Cleanup: discarding lookahead", 2353 - yytoken, &yylval); 2308 + { 2309 + /* Make sure we have latest lookahead translation. See comments at 2310 + user semantic actions for why this is necessary. 
*/ 2311 + yytoken = YYTRANSLATE (yychar); 2312 + yydestruct ("Cleanup: discarding lookahead", 2313 + yytoken, &yylval); 2314 + } 2354 2315 /* Do not reclaim the symbols of the rule which action triggered 2355 2316 this YYABORT or YYACCEPT. */ 2356 2317 YYPOPSTACK (yylen); ··· 2380 2331 2381 2332 2382 2333 2383 - /* Line 1675 of yacc.c */ 2334 + /* Line 2067 of yacc.c */ 2384 2335 #line 471 "dtc-parser.y" 2385 2336 2386 2337
+6 -8
scripts/dtc/dtc-parser.tab.h_shipped
··· 1 + /* A Bison parser, made by GNU Bison 2.5. */ 1 2 2 - /* A Bison parser, made by GNU Bison 2.4.1. */ 3 - 4 - /* Skeleton interface for Bison's Yacc-like parsers in C 3 + /* Bison interface for Yacc-like parsers in C 5 4 6 - Copyright (C) 1984, 1989, 1990, 2000, 2001, 2002, 2003, 2004, 2005, 2006 7 - Free Software Foundation, Inc. 5 + Copyright (C) 1984, 1989-1990, 2000-2011 Free Software Foundation, Inc. 8 6 9 7 This program is free software: you can redistribute it and/or modify 10 8 it under the terms of the GNU General Public License as published by ··· 68 70 typedef union YYSTYPE 69 71 { 70 72 71 - /* Line 1676 of yacc.c */ 73 + /* Line 2068 of yacc.c */ 72 74 #line 40 "dtc-parser.y" 73 75 74 76 char *propnodename; ··· 92 94 93 95 94 96 95 - /* Line 1676 of yacc.c */ 96 - #line 99 "dtc-parser.tab.h" 97 + /* Line 2068 of yacc.c */ 98 + #line 97 "dtc-parser.tab.h" 97 99 } YYSTYPE; 98 100 # define YYSTYPE_IS_TRIVIAL 1 99 101 # define yystype YYSTYPE /* obsolescent; will be withdrawn */
+2 -2
sound/core/pcm_native.c
··· 1649 1649 } 1650 1650 if (!snd_pcm_stream_linked(substream)) { 1651 1651 substream->group = group; 1652 + group = NULL; 1652 1653 spin_lock_init(&substream->group->lock); 1653 1654 INIT_LIST_HEAD(&substream->group->substreams); 1654 1655 list_add_tail(&substream->link_list, &substream->group->substreams); ··· 1664 1663 _nolock: 1665 1664 snd_card_unref(substream1->pcm->card); 1666 1665 fput_light(file, fput_needed); 1667 - if (res < 0) 1668 - kfree(group); 1666 + kfree(group); 1669 1667 return res; 1670 1668 } 1671 1669
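The pcm_native.c hunk above applies a common kernel ownership-transfer idiom: allocate `group` up front, NULL the local pointer once the substream takes ownership, then `kfree()` unconditionally at the exit path. Because `kfree(NULL)` is a no-op, this replaces the old `if (res < 0) kfree(group)` test, which leaked the allocation when the call succeeded but the substream was already linked. A minimal userspace sketch (illustrative types, not the ALSA ones; `free(NULL)` is likewise a no-op):

```c
#include <stdlib.h>

struct group { int dummy; };
struct stream { struct group *group; };

/* Allocate up front; NULL the local pointer when ownership transfers to
 * the stream; free unconditionally on exit.  Only the allocation that
 * was NOT consumed gets freed -- free(NULL) does nothing. */
int link_stream(struct stream *s, int already_linked)
{
	struct group *group = malloc(sizeof(*group));

	if (!group)
		return -1;

	if (!already_linked) {
		s->group = group;
		group = NULL;	/* the stream now owns the allocation */
	}

	free(group);		/* frees only when ownership was not taken */
	return 0;
}
```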
+4 -2
sound/soc/codecs/cs42l52.c
··· 193 193 194 194 static DECLARE_TLV_DB_SCALE(pga_tlv, -600, 50, 0); 195 195 196 + static DECLARE_TLV_DB_SCALE(mix_tlv, -50, 50, 0); 197 + 196 198 static const unsigned int limiter_tlv[] = { 197 199 TLV_DB_RANGE_HEAD(2), 198 200 0, 2, TLV_DB_SCALE_ITEM(-3000, 600, 0), ··· 262 260 }; 263 261 264 262 static const struct soc_enum hp_gain_enum = 265 - SOC_ENUM_SINGLE(CS42L52_PB_CTL1, 4, 263 + SOC_ENUM_SINGLE(CS42L52_PB_CTL1, 5, 266 264 ARRAY_SIZE(hp_gain_num_text), hp_gain_num_text); 267 265 268 266 static const char * const beep_pitch_text[] = { ··· 443 441 444 442 SOC_DOUBLE_R_SX_TLV("PCM Mixer Volume", 445 443 CS42L52_PCMA_MIXER_VOL, CS42L52_PCMB_MIXER_VOL, 446 - 0, 0x7f, 0x19, hl_tlv), 444 + 0, 0x7f, 0x19, mix_tlv), 447 445 SOC_DOUBLE_R("PCM Mixer Switch", 448 446 CS42L52_PCMA_MIXER_VOL, CS42L52_PCMB_MIXER_VOL, 7, 1, 1), 449 447
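The cs42l52.c hunk moves the headphone-gain enum from bit shift 4 to bit shift 5: a multi-bit field written at the wrong shift lands in neighbouring bits of the control register. A generic read-modify-write of a shifted field, purely illustrative (this is not the driver's API):

```c
#include <stdint.h>

/* Replace the `mask`-wide field at `shift` in an 8-bit register image.
 * If `shift` is off by one, the write corrupts adjacent control bits,
 * which is the class of bug the CS42L52_PB_CTL1 fix corrects. */
uint8_t field_write(uint8_t reg, uint8_t val, int shift, uint8_t mask)
{
	return (uint8_t)((reg & ~(mask << shift)) | ((val & mask) << shift));
}
```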
+5 -5
sound/soc/codecs/tlv320aic3x.c
··· 187 187 188 188 break; 189 189 } 190 - 191 - if (found) 192 - snd_soc_dapm_sync(widget->dapm); 193 190 } 194 191 195 - ret = snd_soc_update_bits(widget->codec, reg, val_mask, val); 196 - 197 192 mutex_unlock(&widget->codec->mutex); 193 + 194 + if (found) 195 + snd_soc_dapm_sync(widget->dapm); 196 + 197 + ret = snd_soc_update_bits_locked(widget->codec, reg, val_mask, val); 198 198 return ret; 199 199 } 200 200
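The tlv320aic3x.c hunk is a lock-ordering fix: `snd_soc_dapm_sync()` and the register update are moved outside the codec mutex (the update switching to `snd_soc_update_bits_locked()`), so the handler never calls into another subsystem while holding its own lock. The shape of the fix, sketched with a pthread mutex standing in for the codec mutex (all names here are illustrative):

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t codec_lock = PTHREAD_MUTEX_INITIALIZER;
static int reg_val;

static void notify_subsystem(void)
{
	/* may take other locks of its own, e.g. the DAPM mutex */
}

/* Decide what changed under the lock, but defer the cross-subsystem
 * callback until after the lock is dropped, avoiding an inversion
 * against locks that callback may take. */
int update_widget(int new_val)
{
	bool changed;

	pthread_mutex_lock(&codec_lock);
	changed = (reg_val != new_val);
	reg_val = new_val;
	pthread_mutex_unlock(&codec_lock);

	if (changed)
		notify_subsystem();	/* safe: codec_lock is not held */

	return changed;
}
```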
+2 -1
sound/soc/codecs/wm5102.c
··· 1120 1120 ARIZONA_DSP_WIDGETS(DSP1, "DSP1"), 1121 1121 1122 1122 SND_SOC_DAPM_VALUE_MUX("AEC Loopback", ARIZONA_DAC_AEC_CONTROL_1, 1123 - ARIZONA_AEC_LOOPBACK_ENA, 0, &wm5102_aec_loopback_mux), 1123 + ARIZONA_AEC_LOOPBACK_ENA_SHIFT, 0, 1124 + &wm5102_aec_loopback_mux), 1124 1125 1125 1126 SND_SOC_DAPM_PGA_E("OUT1L", SND_SOC_NOPM, 1126 1127 ARIZONA_OUT1L_ENA_SHIFT, 0, NULL, 0, arizona_hp_ev,
+2 -1
sound/soc/codecs/wm5110.c
··· 503 503 NULL, 0), 504 504 505 505 SND_SOC_DAPM_VALUE_MUX("AEC Loopback", ARIZONA_DAC_AEC_CONTROL_1, 506 - ARIZONA_AEC_LOOPBACK_ENA, 0, &wm5110_aec_loopback_mux), 506 + ARIZONA_AEC_LOOPBACK_ENA_SHIFT, 0, 507 + &wm5110_aec_loopback_mux), 507 508 508 509 SND_SOC_DAPM_AIF_OUT("AIF1TX1", NULL, 0, 509 510 ARIZONA_AIF1_TX_ENABLES, ARIZONA_AIF1TX1_ENA_SHIFT, 0),
+2 -1
sound/soc/codecs/wm8994.c
··· 3836 3836 ret); 3837 3837 } else if (!(ret & WM1811_JACKDET_LVL)) { 3838 3838 dev_dbg(codec->dev, "Ignoring removed jack\n"); 3839 - return IRQ_HANDLED; 3839 + goto out; 3840 3840 } 3841 3841 } else if (!(reg & WM8958_MICD_STS)) { 3842 3842 snd_soc_jack_report(wm8994->micdet[0].jack, 0, 3843 3843 SND_JACK_MECHANICAL | SND_JACK_HEADSET | 3844 3844 wm8994->btn_mask); 3845 + wm8994->mic_detecting = true; 3845 3846 goto out; 3846 3847 } 3847 3848
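The wm8994.c hunk replaces an early `return IRQ_HANDLED` with `goto out` so the handler's single cleanup path always runs (and re-arms `mic_detecting` on the removal path). The single-exit idiom, sketched with a counter standing in for whatever state the `out:` label must undo (names are illustrative, not the driver's):

```c
static int acquired;	/* must be back to 0 when the handler returns */

/* Interrupt-style handler with one cleanup label.  Every early exit
 * goes through `out:`; a bare `return ret;` in the middle would skip
 * the cleanup, which was exactly the jack-removal bug. */
int handle_jack_irq(int jack_removed)
{
	int ret = 1;	/* IRQ_HANDLED */

	acquired++;	/* e.g. a lock or runtime-PM reference */

	if (jack_removed)
		goto out;	/* NOT `return ret;` */

	/* ... normal microphone-detection work ... */

out:
	acquired--;	/* cleanup runs on every exit path */
	return ret;
}
```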
+26 -23
sound/soc/soc-dapm.c
··· 55 55 [snd_soc_dapm_clock_supply] = 1, 56 56 [snd_soc_dapm_micbias] = 2, 57 57 [snd_soc_dapm_dai_link] = 2, 58 - [snd_soc_dapm_dai] = 3, 58 + [snd_soc_dapm_dai_in] = 3, 59 + [snd_soc_dapm_dai_out] = 3, 59 60 [snd_soc_dapm_aif_in] = 3, 60 61 [snd_soc_dapm_aif_out] = 3, 61 62 [snd_soc_dapm_mic] = 4, ··· 93 92 [snd_soc_dapm_value_mux] = 9, 94 93 [snd_soc_dapm_aif_in] = 10, 95 94 [snd_soc_dapm_aif_out] = 10, 96 - [snd_soc_dapm_dai] = 10, 95 + [snd_soc_dapm_dai_in] = 10, 96 + [snd_soc_dapm_dai_out] = 10, 97 97 [snd_soc_dapm_dai_link] = 11, 98 98 [snd_soc_dapm_clock_supply] = 12, 99 99 [snd_soc_dapm_regulator_supply] = 12, ··· 421 419 case snd_soc_dapm_clock_supply: 422 420 case snd_soc_dapm_aif_in: 423 421 case snd_soc_dapm_aif_out: 424 - case snd_soc_dapm_dai: 422 + case snd_soc_dapm_dai_in: 423 + case snd_soc_dapm_dai_out: 425 424 case snd_soc_dapm_hp: 426 425 case snd_soc_dapm_mic: 427 426 case snd_soc_dapm_spk: ··· 823 820 switch (widget->id) { 824 821 case snd_soc_dapm_adc: 825 822 case snd_soc_dapm_aif_out: 826 - case snd_soc_dapm_dai: 823 + case snd_soc_dapm_dai_out: 827 824 if (widget->active) { 828 825 widget->outputs = snd_soc_dapm_suspend_check(widget); 829 826 return widget->outputs; ··· 919 916 switch (widget->id) { 920 917 case snd_soc_dapm_dac: 921 918 case snd_soc_dapm_aif_in: 922 - case snd_soc_dapm_dai: 919 + case snd_soc_dapm_dai_in: 923 920 if (widget->active) { 924 921 widget->inputs = snd_soc_dapm_suspend_check(widget); 925 922 return widget->inputs; ··· 1136 1133 out = is_connected_output_ep(w, NULL); 1137 1134 dapm_clear_walk_output(w->dapm, &w->sinks); 1138 1135 return out != 0 && in != 0; 1139 - } 1140 - 1141 - static int dapm_dai_check_power(struct snd_soc_dapm_widget *w) 1142 - { 1143 - DAPM_UPDATE_STAT(w, power_checks); 1144 - 1145 - if (w->active) 1146 - return w->active; 1147 - 1148 - return dapm_generic_check_power(w); 1149 1136 } 1150 1137 1151 1138 /* Check to see if an ADC has power */ ··· 2311 2318 case snd_soc_dapm_clock_supply: 
2312 2319 case snd_soc_dapm_aif_in: 2313 2320 case snd_soc_dapm_aif_out: 2314 - case snd_soc_dapm_dai: 2321 + case snd_soc_dapm_dai_in: 2322 + case snd_soc_dapm_dai_out: 2315 2323 case snd_soc_dapm_dai_link: 2316 2324 list_add(&path->list, &dapm->card->paths); 2317 2325 list_add(&path->list_sink, &wsink->sources); ··· 3123 3129 break; 3124 3130 case snd_soc_dapm_adc: 3125 3131 case snd_soc_dapm_aif_out: 3132 + case snd_soc_dapm_dai_out: 3126 3133 w->power_check = dapm_adc_check_power; 3127 3134 break; 3128 3135 case snd_soc_dapm_dac: 3129 3136 case snd_soc_dapm_aif_in: 3137 + case snd_soc_dapm_dai_in: 3130 3138 w->power_check = dapm_dac_check_power; 3131 3139 break; 3132 3140 case snd_soc_dapm_pga: ··· 3147 3151 case snd_soc_dapm_regulator_supply: 3148 3152 case snd_soc_dapm_clock_supply: 3149 3153 w->power_check = dapm_supply_check_power; 3150 - break; 3151 - case snd_soc_dapm_dai: 3152 - w->power_check = dapm_dai_check_power; 3153 3154 break; 3154 3155 default: 3155 3156 w->power_check = dapm_always_on_check_power; ··· 3368 3375 template.reg = SND_SOC_NOPM; 3369 3376 3370 3377 if (dai->driver->playback.stream_name) { 3371 - template.id = snd_soc_dapm_dai; 3378 + template.id = snd_soc_dapm_dai_in; 3372 3379 template.name = dai->driver->playback.stream_name; 3373 3380 template.sname = dai->driver->playback.stream_name; 3374 3381 ··· 3386 3393 } 3387 3394 3388 3395 if (dai->driver->capture.stream_name) { 3389 - template.id = snd_soc_dapm_dai; 3396 + template.id = snd_soc_dapm_dai_out; 3390 3397 template.name = dai->driver->capture.stream_name; 3391 3398 template.sname = dai->driver->capture.stream_name; 3392 3399 ··· 3416 3423 3417 3424 /* For each DAI widget... 
*/ 3418 3425 list_for_each_entry(dai_w, &card->widgets, list) { 3419 - if (dai_w->id != snd_soc_dapm_dai) 3426 + switch (dai_w->id) { 3427 + case snd_soc_dapm_dai_in: 3428 + case snd_soc_dapm_dai_out: 3429 + break; 3430 + default: 3420 3431 continue; 3432 + } 3421 3433 3422 3434 dai = dai_w->priv; 3423 3435 ··· 3431 3433 if (w->dapm != dai_w->dapm) 3432 3434 continue; 3433 3435 3434 - if (w->id == snd_soc_dapm_dai) 3436 + switch (w->id) { 3437 + case snd_soc_dapm_dai_in: 3438 + case snd_soc_dapm_dai_out: 3435 3439 continue; 3440 + default: 3441 + break; 3442 + } 3436 3443 3437 3444 if (!w->sname) 3438 3445 continue;
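With `snd_soc_dapm_dai` split into `snd_soc_dapm_dai_in` and `snd_soc_dapm_dai_out`, the single `!=` tests in the loops above become switch statements whose `default` branch skips non-matching widgets. That filtering pattern, reduced to a standalone sketch (the enum here is illustrative, not the kernel's):

```c
enum widget_id { W_DAI_IN, W_DAI_OUT, W_MUX, W_PGA };

/* Count DAI widgets in an array.  With one DAI id a plain `!=` test
 * sufficed; with two, the loop matches both via case fallthrough and
 * `continue`s past everything else from the default branch. */
int count_dai_widgets(const enum widget_id *w, int n)
{
	int count = 0, i;

	for (i = 0; i < n; i++) {
		switch (w[i]) {
		case W_DAI_IN:
		case W_DAI_OUT:
			break;		/* fall through to the work below */
		default:
			continue;	/* not a DAI widget, skip it */
		}
		count++;
	}
	return count;
}
```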
+10 -3
sound/soc/soc-pcm.c
··· 928 928 /* Create any new FE <--> BE connections */ 929 929 for (i = 0; i < list->num_widgets; i++) { 930 930 931 - if (list->widgets[i]->id != snd_soc_dapm_dai) 931 + switch (list->widgets[i]->id) { 932 + case snd_soc_dapm_dai_in: 933 + case snd_soc_dapm_dai_out: 934 + break; 935 + default: 932 936 continue; 937 + } 933 938 934 939 /* is there a valid BE rtd for this widget */ 935 940 be = dpcm_get_be(card, list->widgets[i], stream); ··· 2016 2011 if (cpu_dai->driver->capture.channels_min) 2017 2012 capture = 1; 2018 2013 } else { 2019 - if (codec_dai->driver->playback.channels_min) 2014 + if (codec_dai->driver->playback.channels_min && 2015 + cpu_dai->driver->playback.channels_min) 2020 2016 playback = 1; 2021 - if (codec_dai->driver->capture.channels_min) 2017 + if (codec_dai->driver->capture.channels_min && 2018 + cpu_dai->driver->capture.channels_min) 2022 2019 capture = 1; 2023 2020 } 2024 2021
+1 -1
tools/power/x86/turbostat/turbostat.c
··· 2191 2191 2192 2192 void allocate_output_buffer() 2193 2193 { 2194 - output_buffer = calloc(1, (1 + topo.num_cpus) * 128); 2194 + output_buffer = calloc(1, (1 + topo.num_cpus) * 256); 2195 2195 outp = output_buffer; 2196 2196 if (outp == NULL) { 2197 2197 perror("calloc");
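The turbostat hunk doubles the per-row budget of the shared output buffer from 128 to 256 bytes: the buffer holds one header row plus one row per CPU, and a fully populated counter row no longer fit in 128 bytes. The sizing rule, sketched as a standalone function (the constant name is illustrative):

```c
#include <stdlib.h>

#define BYTES_PER_ROW 256	/* was 128; too small once rows grew */

/* One header row plus one row per CPU, each capped at BYTES_PER_ROW.
 * calloc() zero-fills, so the buffer starts out as an empty C string. */
char *allocate_output_buffer(int num_cpus)
{
	return calloc(1, (size_t)(1 + num_cpus) * BYTES_PER_ROW);
}
```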