Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 's3c24xx-dma' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung into next/drivers

From Kukjin Kim, this branch adds device-tree support to the DMA controller
on the older Samsung SoCs. It also adds support for one of the missing SoCs
in the family (2410).

The driver has been acked by Vinod Koul, but is merged through here due
to dependencies on platform code.

* tag 's3c24xx-dma' of git://git.kernel.org/pub/scm/linux/kernel/git/kgene/linux-samsung:
ARM: S3C24XX: add dma pdata for s3c2410, s3c2440 and s3c2442
dmaengine: s3c24xx-dma: add support for the s3c2410 type of controller
ARM: S3C24XX: Fix possible dma selection warning
ARM: SAMSUNG: set s3c24xx_dma_filter for s3c64xx-spi0 device
ARM: S3C24XX: add platform-devices for new dma driver for s3c2412 and s3c2443
dmaengine: add driver for Samsung s3c24xx SoCs
ARM: S3C24XX: number the dma clocks
+ Linux 3.12-rc3

Signed-off-by: Olof Johansson <olof@lixom.net>

+3994 -1263
+1 -2
CREDITS
··· 2808 2808 S: Canada K2P 0X8 2809 2809 2810 2810 N: Mikael Pettersson 2811 - E: mikpe@it.uu.se 2812 - W: http://user.it.uu.se/~mikpe/linux/ 2811 + E: mikpelinux@gmail.com 2813 2812 D: Miscellaneous fixes 2814 2813 2815 2814 N: Reed H. Petty
+5 -5
Documentation/devicetree/bindings/mmc/exynos-dw-mshc.txt
··· 1 - * Samsung Exynos specific extensions to the Synopsis Designware Mobile 1 + * Samsung Exynos specific extensions to the Synopsys Designware Mobile 2 2 Storage Host Controller 3 3 4 - The Synopsis designware mobile storage host controller is used to interface 4 + The Synopsys designware mobile storage host controller is used to interface 5 5 a SoC with storage medium such as eMMC or SD/MMC cards. This file documents 6 - differences between the core Synopsis dw mshc controller properties described 7 - by synopsis-dw-mshc.txt and the properties used by the Samsung Exynos specific 8 - extensions to the Synopsis Designware Mobile Storage Host Controller. 6 + differences between the core Synopsys dw mshc controller properties described 7 + by synopsys-dw-mshc.txt and the properties used by the Samsung Exynos specific 8 + extensions to the Synopsys Designware Mobile Storage Host Controller. 9 9 10 10 Required Properties: 11 11
+5 -5
Documentation/devicetree/bindings/mmc/rockchip-dw-mshc.txt
··· 1 - * Rockchip specific extensions to the Synopsis Designware Mobile 1 + * Rockchip specific extensions to the Synopsys Designware Mobile 2 2 Storage Host Controller 3 3 4 - The Synopsis designware mobile storage host controller is used to interface 4 + The Synopsys designware mobile storage host controller is used to interface 5 5 a SoC with storage medium such as eMMC or SD/MMC cards. This file documents 6 - differences between the core Synopsis dw mshc controller properties described 7 - by synopsis-dw-mshc.txt and the properties used by the Rockchip specific 8 - extensions to the Synopsis Designware Mobile Storage Host Controller. 6 + differences between the core Synopsys dw mshc controller properties described 7 + by synopsys-dw-mshc.txt and the properties used by the Rockchip specific 8 + extensions to the Synopsys Designware Mobile Storage Host Controller. 9 9 10 10 Required Properties: 11 11
+4 -4
Documentation/devicetree/bindings/mmc/synopsis-dw-mshc.txt → Documentation/devicetree/bindings/mmc/synopsys-dw-mshc.txt
··· 1 - * Synopsis Designware Mobile Storage Host Controller 1 + * Synopsys Designware Mobile Storage Host Controller 2 2 3 - The Synopsis designware mobile storage host controller is used to interface 3 + The Synopsys designware mobile storage host controller is used to interface 4 4 a SoC with storage medium such as eMMC or SD/MMC cards. This file documents 5 5 differences between the core mmc properties described by mmc.txt and the 6 - properties used by the Synopsis Designware Mobile Storage Host Controller. 6 + properties used by the Synopsys Designware Mobile Storage Host Controller. 7 7 8 8 Required Properties: 9 9 10 10 * compatible: should be 11 - - snps,dw-mshc: for controllers compliant with synopsis dw-mshc. 11 + - snps,dw-mshc: for controllers compliant with synopsys dw-mshc. 12 12 * #address-cells: should be 1. 13 13 * #size-cells: should be 0. 14 14
+1 -1
Documentation/devicetree/bindings/pci/designware-pcie.txt
··· 1 - * Synopsis Designware PCIe interface 1 + * Synopsys Designware PCIe interface 2 2 3 3 Required properties: 4 4 - compatible: should contain "snps,dw-pcie" to identify the
Documentation/devicetree/bindings/tty/serial/qca,ar9330-uart.txt → Documentation/devicetree/bindings/serial/qca,ar9330-uart.txt
+4
Documentation/kernel-parameters.txt
··· 3485 3485 the unplug protocol 3486 3486 never -- do not unplug even if version check succeeds 3487 3487 3488 + xen_nopvspin [X86,XEN] 3489 + Disables the ticketlock slowpath using Xen PV 3490 + optimizations. 3491 + 3488 3492 xirc2ps_cs= [NET,PCMCIA] 3489 3493 Format: 3490 3494 <irq>,<irq_mask>,<io>,<full_duplex>,<do_sound>,<lockup_hack>[,<irq2>[,<irq3>[,<irq4>]]]
+6
Documentation/sound/alsa/HD-Audio-Models.txt
··· 296 296 imac27 IMac 27 Inch 297 297 auto BIOS setup (default) 298 298 299 + Cirrus Logic CS4208 300 + =================== 301 + mba6 MacBook Air 6,1 and 6,2 302 + gpio0 Enable GPIO 0 amp 303 + auto BIOS setup (default) 304 + 299 305 VIA VT17xx/VT18xx/VT20xx 300 306 ======================== 301 307 auto BIOS setup (default)
+16 -4
MAINTAINERS
··· 1812 1812 F: drivers/net/ethernet/broadcom/bnx2x/ 1813 1813 1814 1814 BROADCOM BCM281XX/BCM11XXX ARM ARCHITECTURE 1815 - M: Christian Daudt <csd@broadcom.com> 1815 + M: Christian Daudt <bcm@fixthebug.org> 1816 + L: bcm-kernel-feedback-list@broadcom.com 1816 1817 T: git git://git.github.com/broadcom/bcm11351 1817 1818 S: Maintained 1818 1819 F: arch/arm/mach-bcm/ ··· 2639 2638 F: include/linux/device-mapper.h 2640 2639 F: include/linux/dm-*.h 2641 2640 F: include/uapi/linux/dm-*.h 2641 + 2642 + DIGI NEO AND CLASSIC PCI PRODUCTS 2643 + M: Lidza Louina <lidza.louina@gmail.com> 2644 + L: driverdev-devel@linuxdriverproject.org 2645 + S: Maintained 2646 + F: drivers/staging/dgnc/ 2647 + 2648 + DIGI EPCA PCI PRODUCTS 2649 + M: Lidza Louina <lidza.louina@gmail.com> 2650 + L: driverdev-devel@linuxdriverproject.org 2651 + S: Maintained 2652 + F: drivers/staging/dgap/ 2642 2653 2643 2654 DIOLAN U2C-12 I2C DRIVER 2644 2655 M: Guenter Roeck <linux@roeck-us.net> ··· 6608 6595 F: drivers/net/wireless/prism54/ 6609 6596 6610 6597 PROMISE SATA TX2/TX4 CONTROLLER LIBATA DRIVER 6611 - M: Mikael Pettersson <mikpe@it.uu.se> 6598 + M: Mikael Pettersson <mikpelinux@gmail.com> 6612 6599 L: linux-ide@vger.kernel.org 6613 6600 S: Maintained 6614 6601 F: drivers/ata/sata_promise.* ··· 8737 8724 F: drivers/hid/usbhid/ 8738 8725 8739 8726 USB/IP DRIVERS 8740 - M: Matt Mooney <mfm@muteddisk.com> 8741 8727 L: linux-usb@vger.kernel.org 8742 - S: Maintained 8728 + S: Orphan 8743 8729 F: drivers/staging/usbip/ 8744 8730 8745 8731 USB ISP116X DRIVER
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 12 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc2 4 + EXTRAVERSION = -rc3 5 5 NAME = One Giant Leap for Frogkind 6 6 7 7 # *DOCUMENTATION*
-3
arch/Kconfig
··· 286 286 config HAVE_ARCH_JUMP_LABEL 287 287 bool 288 288 289 - config HAVE_ARCH_MUTEX_CPU_RELAX 290 - bool 291 - 292 289 config HAVE_RCU_TABLE_FREE 293 290 bool 294 291
+1 -2
arch/arm/Kconfig
··· 2216 2216 2217 2217 config KERNEL_MODE_NEON 2218 2218 bool "Support for NEON in kernel mode" 2219 - default n 2220 - depends on NEON 2219 + depends on NEON && AEABI 2221 2220 help 2222 2221 Say Y to include support for NEON in kernel mode. 2223 2222
+3 -3
arch/arm/crypto/aes-armv4.S
··· 148 148 @ const AES_KEY *key) { 149 149 .align 5 150 150 ENTRY(AES_encrypt) 151 - sub r3,pc,#8 @ AES_encrypt 151 + adr r3,AES_encrypt 152 152 stmdb sp!,{r1,r4-r12,lr} 153 153 mov r12,r0 @ inp 154 154 mov r11,r2 ··· 381 381 .align 5 382 382 ENTRY(private_AES_set_encrypt_key) 383 383 _armv4_AES_set_encrypt_key: 384 - sub r3,pc,#8 @ AES_set_encrypt_key 384 + adr r3,_armv4_AES_set_encrypt_key 385 385 teq r0,#0 386 386 moveq r0,#-1 387 387 beq .Labrt ··· 843 843 @ const AES_KEY *key) { 844 844 .align 5 845 845 ENTRY(AES_decrypt) 846 - sub r3,pc,#8 @ AES_decrypt 846 + adr r3,AES_decrypt 847 847 stmdb sp!,{r1,r4-r12,lr} 848 848 mov r12,r0 @ inp 849 849 mov r11,r2
+7
arch/arm/include/asm/uaccess.h
··· 19 19 #include <asm/unified.h> 20 20 #include <asm/compiler.h> 21 21 22 + #if __LINUX_ARM_ARCH__ < 6 23 + #include <asm-generic/uaccess-unaligned.h> 24 + #else 25 + #define __get_user_unaligned __get_user 26 + #define __put_user_unaligned __put_user 27 + #endif 28 + 22 29 #define VERIFY_READ 0 23 30 #define VERIFY_WRITE 1 24 31
+2 -2
arch/arm/kernel/entry-common.S
··· 442 442 ldrcc pc, [tbl, scno, lsl #2] @ call sys_* routine 443 443 444 444 add r1, sp, #S_OFF 445 - cmp scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE) 445 + 2: cmp scno, #(__ARM_NR_BASE - __NR_SYSCALL_BASE) 446 446 eor r0, scno, #__NR_SYSCALL_BASE @ put OS number back 447 447 bcs arm_syscall 448 - 2: mov why, #0 @ no longer a real syscall 448 + mov why, #0 @ no longer a real syscall 449 449 b sys_ni_syscall @ not private func 450 450 451 451 #if defined(CONFIG_OABI_COMPAT) || !defined(CONFIG_AEABI)
+4 -4
arch/arm/kernel/entry-header.S
··· 329 329 #ifdef CONFIG_CONTEXT_TRACKING 330 330 .if \save 331 331 stmdb sp!, {r0-r3, ip, lr} 332 - bl user_exit 332 + bl context_tracking_user_exit 333 333 ldmia sp!, {r0-r3, ip, lr} 334 334 .else 335 - bl user_exit 335 + bl context_tracking_user_exit 336 336 .endif 337 337 #endif 338 338 .endm ··· 341 341 #ifdef CONFIG_CONTEXT_TRACKING 342 342 .if \save 343 343 stmdb sp!, {r0-r3, ip, lr} 344 - bl user_enter 344 + bl context_tracking_user_enter 345 345 ldmia sp!, {r0-r3, ip, lr} 346 346 .else 347 - bl user_enter 347 + bl context_tracking_user_enter 348 348 .endif 349 349 #endif 350 350 .endm
+2 -1
arch/arm/mach-s3c24xx/Kconfig
··· 28 28 select CPU_ARM920T 29 29 select CPU_LLSERIAL_S3C2410 30 30 select S3C2410_CLOCK 31 + select S3C2410_DMA if S3C24XX_DMA 31 32 select ARM_S3C2410_CPUFREQ if ARM_S3C24XX_CPUFREQ 32 33 select S3C2410_PM if PM 33 34 select SAMSUNG_WDT_RESET ··· 71 70 select CPU_ARM920T 72 71 select CPU_LLSERIAL_S3C2440 73 72 select S3C2410_CLOCK 73 + select S3C2410_DMA if S3C24XX_DMA 74 74 select S3C2410_PM if PM 75 75 help 76 76 Support for S3C2442 Samsung Mobile CPU based systems. ··· 150 148 config S3C2410_DMA 151 149 bool 152 150 depends on S3C24XX_DMA && (CPU_S3C2410 || CPU_S3C2442) 153 - default y if CPU_S3C2410 || CPU_S3C2442 154 151 help 155 152 DMA device selection for S3C2410 and compatible CPUs 156 153
+4 -4
arch/arm/mach-s3c24xx/clock-s3c2412.c
··· 484 484 485 485 static struct clk init_clocks[] = { 486 486 { 487 - .name = "dma", 487 + .name = "dma.0", 488 488 .parent = &clk_h, 489 489 .enable = s3c2412_clkcon_enable, 490 490 .ctrlbit = S3C2412_CLKCON_DMA0, 491 491 }, { 492 - .name = "dma", 492 + .name = "dma.1", 493 493 .parent = &clk_h, 494 494 .enable = s3c2412_clkcon_enable, 495 495 .ctrlbit = S3C2412_CLKCON_DMA1, 496 496 }, { 497 - .name = "dma", 497 + .name = "dma.2", 498 498 .parent = &clk_h, 499 499 .enable = s3c2412_clkcon_enable, 500 500 .ctrlbit = S3C2412_CLKCON_DMA2, 501 501 }, { 502 - .name = "dma", 502 + .name = "dma.3", 503 503 .parent = &clk_h, 504 504 .enable = s3c2412_clkcon_enable, 505 505 .ctrlbit = S3C2412_CLKCON_DMA3,
+6 -6
arch/arm/mach-s3c24xx/common-s3c2443.c
··· 438 438 439 439 static struct clk init_clocks[] = { 440 440 { 441 - .name = "dma", 441 + .name = "dma.0", 442 442 .parent = &clk_h, 443 443 .enable = s3c2443_clkcon_enable_h, 444 444 .ctrlbit = S3C2443_HCLKCON_DMA0, 445 445 }, { 446 - .name = "dma", 446 + .name = "dma.1", 447 447 .parent = &clk_h, 448 448 .enable = s3c2443_clkcon_enable_h, 449 449 .ctrlbit = S3C2443_HCLKCON_DMA1, 450 450 }, { 451 - .name = "dma", 451 + .name = "dma.2", 452 452 .parent = &clk_h, 453 453 .enable = s3c2443_clkcon_enable_h, 454 454 .ctrlbit = S3C2443_HCLKCON_DMA2, 455 455 }, { 456 - .name = "dma", 456 + .name = "dma.3", 457 457 .parent = &clk_h, 458 458 .enable = s3c2443_clkcon_enable_h, 459 459 .ctrlbit = S3C2443_HCLKCON_DMA3, 460 460 }, { 461 - .name = "dma", 461 + .name = "dma.4", 462 462 .parent = &clk_h, 463 463 .enable = s3c2443_clkcon_enable_h, 464 464 .ctrlbit = S3C2443_HCLKCON_DMA4, 465 465 }, { 466 - .name = "dma", 466 + .name = "dma.5", 467 467 .parent = &clk_h, 468 468 .enable = s3c2443_clkcon_enable_h, 469 469 .ctrlbit = S3C2443_HCLKCON_DMA5,
+206
arch/arm/mach-s3c24xx/common.c
··· 31 31 #include <linux/platform_device.h> 32 32 #include <linux/delay.h> 33 33 #include <linux/io.h> 34 + #include <linux/platform_data/dma-s3c24xx.h> 34 35 35 36 #include <mach/hardware.h> 36 37 #include <mach/regs-clock.h> ··· 45 44 46 45 #include <mach/regs-gpio.h> 47 46 #include <plat/regs-serial.h> 47 + #include <mach/dma.h> 48 48 49 49 #include <plat/cpu.h> 50 50 #include <plat/devs.h> ··· 331 329 clk_p.rate = pclk; 332 330 clk_f.rate = fclk; 333 331 } 332 + 333 + #if defined(CONFIG_CPU_S3C2410) || defined(CONFIG_CPU_S3C2412) || \ 334 + defined(CONFIG_CPU_S3C2440) || defined(CONFIG_CPU_S3C2442) 335 + static struct resource s3c2410_dma_resource[] = { 336 + [0] = DEFINE_RES_MEM(S3C24XX_PA_DMA, S3C24XX_SZ_DMA), 337 + [1] = DEFINE_RES_IRQ(IRQ_DMA0), 338 + [2] = DEFINE_RES_IRQ(IRQ_DMA1), 339 + [3] = DEFINE_RES_IRQ(IRQ_DMA2), 340 + [4] = DEFINE_RES_IRQ(IRQ_DMA3), 341 + }; 342 + #endif 343 + 344 + #if defined(CONFIG_CPU_S3C2410) || defined(CONFIG_CPU_S3C2442) 345 + static struct s3c24xx_dma_channel s3c2410_dma_channels[DMACH_MAX] = { 346 + [DMACH_XD0] = { S3C24XX_DMA_AHB, true, S3C24XX_DMA_CHANREQ(0, 0), }, 347 + [DMACH_XD1] = { S3C24XX_DMA_AHB, true, S3C24XX_DMA_CHANREQ(0, 1), }, 348 + [DMACH_SDI] = { S3C24XX_DMA_APB, false, S3C24XX_DMA_CHANREQ(2, 0) | 349 + S3C24XX_DMA_CHANREQ(2, 2) | 350 + S3C24XX_DMA_CHANREQ(1, 3), 351 + }, 352 + [DMACH_SPI0] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(3, 1), }, 353 + [DMACH_SPI1] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(2, 3), }, 354 + [DMACH_UART0] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(1, 0), }, 355 + [DMACH_UART1] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(1, 1), }, 356 + [DMACH_UART2] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(0, 3), }, 357 + [DMACH_TIMER] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(3, 0) | 358 + S3C24XX_DMA_CHANREQ(3, 2) | 359 + S3C24XX_DMA_CHANREQ(3, 3), 360 + }, 361 + [DMACH_I2S_IN] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(2, 1) | 362 + S3C24XX_DMA_CHANREQ(1, 2), 363 + }, 364 + [DMACH_I2S_OUT] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(0, 2), }, 365 + [DMACH_USB_EP1] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 0), }, 366 + [DMACH_USB_EP2] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 1), }, 367 + [DMACH_USB_EP3] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 2), }, 368 + [DMACH_USB_EP4] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 3), }, 369 + }; 370 + 371 + static struct s3c24xx_dma_platdata s3c2410_dma_platdata = { 372 + .num_phy_channels = 4, 373 + .channels = s3c2410_dma_channels, 374 + .num_channels = DMACH_MAX, 375 + }; 376 + 377 + struct platform_device s3c2410_device_dma = { 378 + .name = "s3c2410-dma", 379 + .id = 0, 380 + .num_resources = ARRAY_SIZE(s3c2410_dma_resource), 381 + .resource = s3c2410_dma_resource, 382 + .dev = { 383 + .platform_data = &s3c2410_dma_platdata, 384 + }, 385 + }; 386 + #endif 387 + 388 + #ifdef CONFIG_CPU_S3C2412 389 + static struct s3c24xx_dma_channel s3c2412_dma_channels[DMACH_MAX] = { 390 + [DMACH_XD0] = { S3C24XX_DMA_AHB, true, 17 }, 391 + [DMACH_XD1] = { S3C24XX_DMA_AHB, true, 18 }, 392 + [DMACH_SDI] = { S3C24XX_DMA_APB, false, 10 }, 393 + [DMACH_SPI0_RX] = { S3C24XX_DMA_APB, true, 1 }, 394 + [DMACH_SPI0_TX] = { S3C24XX_DMA_APB, true, 0 }, 395 + [DMACH_SPI1_RX] = { S3C24XX_DMA_APB, true, 3 }, 396 + [DMACH_SPI1_TX] = { S3C24XX_DMA_APB, true, 2 }, 397 + [DMACH_UART0] = { S3C24XX_DMA_APB, true, 19 }, 398 + [DMACH_UART1] = { S3C24XX_DMA_APB, true, 21 }, 399 + [DMACH_UART2] = { S3C24XX_DMA_APB, true, 23 }, 400 + [DMACH_UART0_SRC2] = { S3C24XX_DMA_APB, true, 20 }, 401 + [DMACH_UART1_SRC2] = { S3C24XX_DMA_APB, true, 22 }, 402 + [DMACH_UART2_SRC2] = { S3C24XX_DMA_APB, true, 24 }, 403 + [DMACH_TIMER] = { S3C24XX_DMA_APB, true, 9 }, 404 + [DMACH_I2S_IN] = { S3C24XX_DMA_APB, true, 5 }, 405 + [DMACH_I2S_OUT] = { S3C24XX_DMA_APB, true, 4 }, 406 + [DMACH_USB_EP1] = { S3C24XX_DMA_APB, true, 13 }, 407 + [DMACH_USB_EP2] = { S3C24XX_DMA_APB, true, 14 }, 408 + [DMACH_USB_EP3] = { S3C24XX_DMA_APB, true, 15 }, 409 + [DMACH_USB_EP4] = { S3C24XX_DMA_APB, true, 16 }, 410 + }; 411 + 412 + static struct s3c24xx_dma_platdata s3c2412_dma_platdata = { 413 + .num_phy_channels = 4, 414 + .channels = s3c2412_dma_channels, 415 + .num_channels = DMACH_MAX, 416 + }; 417 + 418 + struct platform_device s3c2412_device_dma = { 419 + .name = "s3c2412-dma", 420 + .id = 0, 421 + .num_resources = ARRAY_SIZE(s3c2410_dma_resource), 422 + .resource = s3c2410_dma_resource, 423 + .dev = { 424 + .platform_data = &s3c2412_dma_platdata, 425 + }, 426 + }; 427 + #endif 428 + 429 + #if defined(CONFIG_CPU_S3C2440) 430 + static struct s3c24xx_dma_channel s3c2440_dma_channels[DMACH_MAX] = { 431 + [DMACH_XD0] = { S3C24XX_DMA_AHB, true, S3C24XX_DMA_CHANREQ(0, 0), }, 432 + [DMACH_XD1] = { S3C24XX_DMA_AHB, true, S3C24XX_DMA_CHANREQ(0, 1), }, 433 + [DMACH_SDI] = { S3C24XX_DMA_APB, false, S3C24XX_DMA_CHANREQ(2, 0) | 434 + S3C24XX_DMA_CHANREQ(6, 1) | 435 + S3C24XX_DMA_CHANREQ(2, 2) | 436 + S3C24XX_DMA_CHANREQ(1, 3), 437 + }, 438 + [DMACH_SPI0] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(3, 1), }, 439 + [DMACH_SPI1] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(2, 3), }, 440 + [DMACH_UART0] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(1, 0), }, 441 + [DMACH_UART1] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(1, 1), }, 442 + [DMACH_UART2] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(0, 3), }, 443 + [DMACH_TIMER] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(3, 0) | 444 + S3C24XX_DMA_CHANREQ(3, 2) | 445 + S3C24XX_DMA_CHANREQ(3, 3), 446 + }, 447 + [DMACH_I2S_IN] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(2, 1) | 448 + S3C24XX_DMA_CHANREQ(1, 2), 449 + }, 450 + [DMACH_I2S_OUT] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(5, 0) | 451 + S3C24XX_DMA_CHANREQ(0, 2), 452 + }, 453 + [DMACH_PCM_IN] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(6, 0) | 454 + S3C24XX_DMA_CHANREQ(5, 2), 455 + }, 456 + [DMACH_PCM_OUT] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(5, 1) | 457 + S3C24XX_DMA_CHANREQ(6, 3), 458 + }, 459 + [DMACH_MIC_IN] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(6, 2) | 460 + S3C24XX_DMA_CHANREQ(5, 3), 461 + }, 462 + [DMACH_USB_EP1] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 0), }, 463 + [DMACH_USB_EP2] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 1), }, 464 + [DMACH_USB_EP3] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 2), }, 465 + [DMACH_USB_EP4] = { S3C24XX_DMA_APB, true, S3C24XX_DMA_CHANREQ(4, 3), }, 466 + }; 467 + 468 + static struct s3c24xx_dma_platdata s3c2440_dma_platdata = { 469 + .num_phy_channels = 4, 470 + .channels = s3c2440_dma_channels, 471 + .num_channels = DMACH_MAX, 472 + }; 473 + 474 + struct platform_device s3c2440_device_dma = { 475 + .name = "s3c2410-dma", 476 + .id = 0, 477 + .num_resources = ARRAY_SIZE(s3c2410_dma_resource), 478 + .resource = s3c2410_dma_resource, 479 + .dev = { 480 + .platform_data = &s3c2440_dma_platdata, 481 + }, 482 + }; 483 + #endif 484 + 485 + #if defined(CONFIG_CPUS_3C2443) || defined(CONFIG_CPU_S3C2416) 486 + static struct resource s3c2443_dma_resource[] = { 487 + [0] = DEFINE_RES_MEM(S3C24XX_PA_DMA, S3C24XX_SZ_DMA), 488 + [1] = DEFINE_RES_IRQ(IRQ_S3C2443_DMA0), 489 + [2] = DEFINE_RES_IRQ(IRQ_S3C2443_DMA1), 490 + [3] = DEFINE_RES_IRQ(IRQ_S3C2443_DMA2), 491 + [4] = DEFINE_RES_IRQ(IRQ_S3C2443_DMA3), 492 + [5] = DEFINE_RES_IRQ(IRQ_S3C2443_DMA4), 493 + [6] = DEFINE_RES_IRQ(IRQ_S3C2443_DMA5), 494 + }; 495 + 496 + static struct s3c24xx_dma_channel s3c2443_dma_channels[DMACH_MAX] = { 497 + [DMACH_XD0] = { S3C24XX_DMA_AHB, true, 17 }, 498 + [DMACH_XD1] = { S3C24XX_DMA_AHB, true, 18 }, 499 + [DMACH_SDI] = { S3C24XX_DMA_APB, false, 10 }, 500 + [DMACH_SPI0_RX] = { S3C24XX_DMA_APB, true, 1 }, 501 + [DMACH_SPI0_TX] = { S3C24XX_DMA_APB, true, 0 }, 502 + [DMACH_SPI1_RX] = { S3C24XX_DMA_APB, true, 3 }, 503 + [DMACH_SPI1_TX] = { S3C24XX_DMA_APB, true, 2 }, 504 + [DMACH_UART0] = { S3C24XX_DMA_APB, true, 19 }, 505 + [DMACH_UART1] = { S3C24XX_DMA_APB, true, 21 }, 506 + [DMACH_UART2] = { S3C24XX_DMA_APB, true, 23 }, 507 + [DMACH_UART3] = { S3C24XX_DMA_APB, true, 25 }, 508 + [DMACH_UART0_SRC2] = { S3C24XX_DMA_APB, true, 20 }, 509 + [DMACH_UART1_SRC2] = { S3C24XX_DMA_APB, true, 22 }, 510 + [DMACH_UART2_SRC2] = { S3C24XX_DMA_APB, true, 24 }, 511 + [DMACH_UART3_SRC2] = { S3C24XX_DMA_APB, true, 26 }, 512 + [DMACH_TIMER] = { S3C24XX_DMA_APB, true, 9 }, 513 + [DMACH_I2S_IN] = { S3C24XX_DMA_APB, true, 5 }, 514 + [DMACH_I2S_OUT] = { S3C24XX_DMA_APB, true, 4 }, 515 + [DMACH_PCM_IN] = { S3C24XX_DMA_APB, true, 28 }, 516 + [DMACH_PCM_OUT] = { S3C24XX_DMA_APB, true, 27 }, 517 + [DMACH_MIC_IN] = { S3C24XX_DMA_APB, true, 29 }, 518 + }; 519 + 520 + static struct s3c24xx_dma_platdata s3c2443_dma_platdata = { 521 + .num_phy_channels = 6, 522 + .channels = s3c2443_dma_channels, 523 + .num_channels = DMACH_MAX, 524 + }; 525 + 526 + struct platform_device s3c2443_device_dma = { 527 + .name = "s3c2443-dma", 528 + .id = 0, 529 + .num_resources = ARRAY_SIZE(s3c2443_dma_resource), 530 + .resource = s3c2443_dma_resource, 531 + .dev = { 532 + .platform_data = &s3c2443_dma_platdata, 533 + }, 534 + }; 535 + #endif
+5
arch/arm/mach-s3c24xx/common.h
··· 109 109 110 110 extern struct syscore_ops s3c24xx_irq_syscore_ops; 111 111 112 + extern struct platform_device s3c2410_device_dma; 113 + extern struct platform_device s3c2412_device_dma; 114 + extern struct platform_device s3c2440_device_dma; 115 + extern struct platform_device s3c2443_device_dma; 116 + 112 117 #endif /* __ARCH_ARM_MACH_S3C24XX_COMMON_H */
+1
arch/arm/mach-s3c24xx/mach-jive.c
··· 466 466 &jive_device_wm8750, 467 467 &s3c_device_nand, 468 468 &s3c_device_usbgadget, 469 + &s3c2412_device_dma, 469 470 }; 470 471 471 472 static struct s3c2410_udc_mach_info jive_udc_cfg __initdata = {
+1
arch/arm/mach-s3c24xx/mach-smdk2413.c
··· 89 89 &s3c_device_i2c0, 90 90 &s3c_device_iis, 91 91 &s3c_device_usbgadget, 92 + &s3c2412_device_dma, 92 93 }; 93 94 94 95 static void __init smdk2413_fixup(struct tag *tags, char **cmdline,
+1
arch/arm/mach-s3c24xx/mach-smdk2416.c
··· 215 215 &s3c_device_hsmmc0, 216 216 &s3c_device_hsmmc1, 217 217 &s3c_device_usb_hsudc, 218 + &s3c2443_device_dma, 218 219 }; 219 220 220 221 static void __init smdk2416_map_io(void)
+1
arch/arm/mach-s3c24xx/mach-smdk2443.c
··· 115 115 #ifdef CONFIG_SND_SOC_SMDK2443_WM9710 116 116 &s3c_device_ac97, 117 117 #endif 118 + &s3c2443_device_dma, 118 119 }; 119 120 120 121 static void __init smdk2443_map_io(void)
+1
arch/arm/mach-s3c24xx/mach-vstms.c
··· 126 126 &s3c_device_iis, 127 127 &s3c_device_rtc, 128 128 &s3c_device_nand, 129 + &s3c2412_device_dma, 129 130 }; 130 131 131 132 static void __init vstms_fixup(struct tag *tags, char **cmdline,
+4 -1
arch/arm/plat-samsung/devs.c
··· 32 32 #include <linux/ioport.h> 33 33 #include <linux/platform_data/s3c-hsudc.h> 34 34 #include <linux/platform_data/s3c-hsotg.h> 35 + #include <linux/platform_data/dma-s3c24xx.h> 35 36 36 37 #include <media/s5p_hdmi.h> 37 38 ··· 1500 1499 pd.num_cs = num_cs; 1501 1500 pd.src_clk_nr = src_clk_nr; 1502 1501 pd.cfg_gpio = (cfg_gpio) ? cfg_gpio : s3c64xx_spi0_cfg_gpio; 1503 - #ifdef CONFIG_PL330_DMA 1502 + #if defined(CONFIG_PL330_DMA) 1504 1503 pd.filter = pl330_filter; 1504 + #elif defined(CONFIG_S3C24XX_DMAC) 1505 + pd.filter = s3c24xx_dma_filter; 1505 1506 #endif 1506 1507 1507 1508 s3c_set_platdata(&pd, sizeof(pd), &s3c64xx_device_spi0);
+1 -1
arch/mips/include/asm/cpu-features.h
··· 187 187 188 188 /* 189 189 * MIPS32, MIPS64, VR5500, IDT32332, IDT32334 and maybe a few other 190 - * pre-MIPS32/MIPS53 processors have CLO, CLZ. The IDT RC64574 is 64-bit and 190 + * pre-MIPS32/MIPS64 processors have CLO, CLZ. The IDT RC64574 is 64-bit and 191 191 * has CLO and CLZ but not DCLO nor DCLZ. For 64-bit kernels 192 192 * cpu_has_clo_clz also indicates the availability of DCLO and DCLZ. 193 193 */
+4 -8
arch/mips/mm/dma-default.c
··· 308 308 { 309 309 int i; 310 310 311 - /* Make sure that gcc doesn't leave the empty loop body. */ 312 - for (i = 0; i < nelems; i++, sg++) { 313 - if (cpu_needs_post_dma_flush(dev)) 311 + if (cpu_needs_post_dma_flush(dev)) 312 + for (i = 0; i < nelems; i++, sg++) 314 313 __dma_sync(sg_page(sg), sg->offset, sg->length, 315 314 direction); 316 - } 317 315 } 318 316 319 317 static void mips_dma_sync_sg_for_device(struct device *dev, ··· 319 321 { 320 322 int i; 321 323 322 - /* Make sure that gcc doesn't leave the empty loop body. */ 323 - for (i = 0; i < nelems; i++, sg++) { 324 - if (!plat_device_is_coherent(dev)) 324 + if (!plat_device_is_coherent(dev)) 325 + for (i = 0; i < nelems; i++, sg++) 325 326 __dma_sync(sg_page(sg), sg->offset, sg->length, 326 327 direction); 327 - } 328 328 } 329 329 330 330 int mips_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
-44
arch/openrisc/include/asm/prom.h
··· 14 14 * the Free Software Foundation; either version 2 of the License, or 15 15 * (at your option) any later version. 16 16 */ 17 - 18 - #include <linux/of.h> /* linux/of.h gets to determine #include ordering */ 19 - 20 17 #ifndef _ASM_OPENRISC_PROM_H 21 18 #define _ASM_OPENRISC_PROM_H 22 - #ifdef __KERNEL__ 23 - #ifndef __ASSEMBLY__ 24 19 25 - #include <linux/types.h> 26 - #include <asm/irq.h> 27 - #include <linux/irqdomain.h> 28 - #include <linux/atomic.h> 29 - #include <linux/of_irq.h> 30 - #include <linux/of_fdt.h> 31 - #include <linux/of_address.h> 32 - #include <linux/proc_fs.h> 33 - #include <linux/platform_device.h> 34 20 #define HAVE_ARCH_DEVTREE_FIXUPS 35 21 36 - /* Other Prototypes */ 37 - extern int early_uartlite_console(void); 38 - 39 - /* Parse the ibm,dma-window property of an OF node into the busno, phys and 40 - * size parameters. 41 - */ 42 - void of_parse_dma_window(struct device_node *dn, const void *dma_window_prop, 43 - unsigned long *busno, unsigned long *phys, unsigned long *size); 44 - 45 - extern void kdump_move_device_tree(void); 46 - 47 - /* Get the MAC address */ 48 - extern const void *of_get_mac_address(struct device_node *np); 49 - 50 - /** 51 - * of_irq_map_pci - Resolve the interrupt for a PCI device 52 - * @pdev: the device whose interrupt is to be resolved 53 - * @out_irq: structure of_irq filled by this function 54 - * 55 - * This function resolves the PCI interrupt for a given PCI device. If a 56 - * device-node exists for a given pci_dev, it will use normal OF tree 57 - * walking. If not, it will implement standard swizzling and walk up the 58 - * PCI tree until an device-node is found, at which point it will finish 59 - * resolving using the OF tree walking. 60 - */ 61 - struct pci_dev; 62 - extern int of_irq_map_pci(struct pci_dev *pdev, struct of_irq *out_irq); 63 - 64 - #endif /* __ASSEMBLY__ */ 65 - #endif /* __KERNEL__ */ 66 22 #endif /* _ASM_OPENRISC_PROM_H */
+2 -2
arch/powerpc/boot/Makefile
··· 74 74 src-wlib-$(CONFIG_PPC_82xx) += pq2.c fsl-soc.c planetcore.c 75 75 src-wlib-$(CONFIG_EMBEDDED6xx) += mv64x60.c mv64x60_i2c.c ugecon.c 76 76 77 - src-plat-y := of.c 77 + src-plat-y := of.c epapr.c 78 78 src-plat-$(CONFIG_40x) += fixed-head.S ep405.c cuboot-hotfoot.c \ 79 79 treeboot-walnut.c cuboot-acadia.c \ 80 80 cuboot-kilauea.c simpleboot.c \ ··· 97 97 prpmc2800.c 98 98 src-plat-$(CONFIG_AMIGAONE) += cuboot-amigaone.c 99 99 src-plat-$(CONFIG_PPC_PS3) += ps3-head.S ps3-hvcall.S ps3.c 100 - src-plat-$(CONFIG_EPAPR_BOOT) += epapr.c 100 + src-plat-$(CONFIG_EPAPR_BOOT) += epapr.c epapr-wrapper.c 101 101 102 102 src-wlib := $(sort $(src-wlib-y)) 103 103 src-plat := $(sort $(src-plat-y))
+9
arch/powerpc/boot/epapr-wrapper.c
··· 1 + extern void epapr_platform_init(unsigned long r3, unsigned long r4, 2 + unsigned long r5, unsigned long r6, 3 + unsigned long r7); 4 + 5 + void platform_init(unsigned long r3, unsigned long r4, unsigned long r5, 6 + unsigned long r6, unsigned long r7) 7 + { 8 + epapr_platform_init(r3, r4, r5, r6, r7); 9 + }
+2 -2
arch/powerpc/boot/epapr.c
··· 48 48 fdt_addr, fdt_totalsize((void *)fdt_addr), ima_size); 49 49 } 50 50 51 - void platform_init(unsigned long r3, unsigned long r4, unsigned long r5, 52 - unsigned long r6, unsigned long r7) 51 + void epapr_platform_init(unsigned long r3, unsigned long r4, unsigned long r5, 52 + unsigned long r6, unsigned long r7) 53 53 { 54 54 epapr_magic = r6; 55 55 ima_size = r7;
+15 -1
arch/powerpc/boot/of.c
··· 26 26 27 27 static unsigned long claim_base; 28 28 29 + void epapr_platform_init(unsigned long r3, unsigned long r4, unsigned long r5, 30 + unsigned long r6, unsigned long r7); 31 + 29 32 static void *of_try_claim(unsigned long size) 30 33 { 31 34 unsigned long addr = 0; ··· 64 61 } 65 62 } 66 63 67 - void platform_init(unsigned long a1, unsigned long a2, void *promptr) 64 + static void of_platform_init(unsigned long a1, unsigned long a2, void *promptr) 68 65 { 69 66 platform_ops.image_hdr = of_image_hdr; 70 67 platform_ops.malloc = of_try_claim; ··· 84 81 loader_info.initrd_size = a2; 85 82 } 86 83 } 84 + 85 + void platform_init(unsigned long r3, unsigned long r4, unsigned long r5, 86 + unsigned long r6, unsigned long r7) 87 + { 88 + /* Detect OF vs. ePAPR boot */ 89 + if (r5) 90 + of_platform_init(r3, r4, (void *)r5); 91 + else 92 + epapr_platform_init(r3, r4, r5, r6, r7); 93 + } 94 +
+5 -4
arch/powerpc/boot/wrapper
··· 148 148 149 149 case "$platform" in 150 150 pseries) 151 - platformo=$object/of.o 151 + platformo="$object/of.o $object/epapr.o" 152 152 link_address='0x4000000' 153 153 ;; 154 154 maple) 155 - platformo=$object/of.o 155 + platformo="$object/of.o $object/epapr.o" 156 156 link_address='0x400000' 157 157 ;; 158 158 pmac|chrp) 159 - platformo=$object/of.o 159 + platformo="$object/of.o $object/epapr.o" 160 160 ;; 161 161 coff) 162 - platformo="$object/crt0.o $object/of.o" 162 + platformo="$object/crt0.o $object/of.o $object/epapr.o" 163 163 lds=$object/zImage.coff.lds 164 164 link_address='0x500000' 165 165 pie= ··· 253 253 platformo="$object/treeboot-iss4xx.o" 254 254 ;; 255 255 epapr) 256 + platformo="$object/epapr.o $object/epapr-wrapper.o" 256 257 link_address='0x20000000' 257 258 pie=-pie 258 259 ;;
+2 -2
arch/powerpc/include/asm/irq.h
··· 69 69 70 70 extern void irq_ctx_init(void); 71 71 extern void call_do_softirq(struct thread_info *tp); 72 - extern int call_handle_irq(int irq, void *p1, 73 - struct thread_info *tp, void *func); 72 + extern void call_do_irq(struct pt_regs *regs, struct thread_info *tp); 74 73 extern void do_IRQ(struct pt_regs *regs); 74 + extern void __do_irq(struct pt_regs *regs); 75 75 76 76 int irq_choose_cpu(const struct cpumask *mask); 77 77
+1 -3
arch/powerpc/include/asm/processor.h
··· 149 149 150 150 struct thread_struct { 151 151 unsigned long ksp; /* Kernel stack pointer */ 152 - unsigned long ksp_limit; /* if ksp <= ksp_limit stack overflow */ 153 - 154 152 #ifdef CONFIG_PPC64 155 153 unsigned long ksp_vsid; 156 154 #endif ··· 160 162 #endif 161 163 #ifdef CONFIG_PPC32 162 164 void *pgdir; /* root of page-table tree */ 165 + unsigned long ksp_limit; /* if ksp <= ksp_limit stack overflow */ 163 166 #endif 164 167 #ifdef CONFIG_PPC_ADV_DEBUG_REGS 165 168 /* ··· 320 321 #else 321 322 #define INIT_THREAD { \ 322 323 .ksp = INIT_SP, \ 323 - .ksp_limit = INIT_SP_LIMIT, \ 324 324 .regs = (struct pt_regs *)INIT_SP - 1, /* XXX bogus, I think */ \ 325 325 .fs = KERNEL_DS, \ 326 326 .fpr = {{0}}, \
+2 -1
arch/powerpc/kernel/asm-offsets.c
··· 80 80 DEFINE(TASKTHREADPPR, offsetof(struct task_struct, thread.ppr)); 81 81 #else 82 82 DEFINE(THREAD_INFO, offsetof(struct task_struct, stack)); 83 + DEFINE(THREAD_INFO_GAP, _ALIGN_UP(sizeof(struct thread_info), 16)); 84 + DEFINE(KSP_LIMIT, offsetof(struct thread_struct, ksp_limit)); 83 85 #endif /* CONFIG_PPC64 */ 84 86 85 87 DEFINE(KSP, offsetof(struct thread_struct, ksp)); 86 - DEFINE(KSP_LIMIT, offsetof(struct thread_struct, ksp_limit)); 87 88 DEFINE(PT_REGS, offsetof(struct thread_struct, regs)); 88 89 #ifdef CONFIG_BOOKE 89 90 DEFINE(THREAD_NORMSAVES, offsetof(struct thread_struct, normsave[0]));
+44 -56
arch/powerpc/kernel/irq.c
··· 441 441 } 442 442 #endif 443 443 444 - static inline void handle_one_irq(unsigned int irq) 445 - { 446 - struct thread_info *curtp, *irqtp; 447 - unsigned long saved_sp_limit; 448 - struct irq_desc *desc; 449 - 450 - desc = irq_to_desc(irq); 451 - if (!desc) 452 - return; 453 - 454 - /* Switch to the irq stack to handle this */ 455 - curtp = current_thread_info(); 456 - irqtp = hardirq_ctx[smp_processor_id()]; 457 - 458 - if (curtp == irqtp) { 459 - /* We're already on the irq stack, just handle it */ 460 - desc->handle_irq(irq, desc); 461 - return; 462 - } 463 - 464 - saved_sp_limit = current->thread.ksp_limit; 465 - 466 - irqtp->task = curtp->task; 467 - irqtp->flags = 0; 468 - 469 - /* Copy the softirq bits in preempt_count so that the 470 - * softirq checks work in the hardirq context. */ 471 - irqtp->preempt_count = (irqtp->preempt_count & ~SOFTIRQ_MASK) | 472 - (curtp->preempt_count & SOFTIRQ_MASK); 473 - 474 - current->thread.ksp_limit = (unsigned long)irqtp + 475 - _ALIGN_UP(sizeof(struct thread_info), 16); 476 - 477 - call_handle_irq(irq, desc, irqtp, desc->handle_irq); 478 - current->thread.ksp_limit = saved_sp_limit; 479 - irqtp->task = NULL; 480 - 481 - /* Set any flag that may have been set on the 482 - * alternate stack 483 - */ 484 - if (irqtp->flags) 485 - set_bits(irqtp->flags, &curtp->flags); 486 - } 487 - 488 444 static inline void check_stack_overflow(void) 489 445 { 490 446 #ifdef CONFIG_DEBUG_STACKOVERFLOW ··· 457 501 #endif 458 502 } 459 503 460 - void do_IRQ(struct pt_regs *regs) 504 + void __do_irq(struct pt_regs *regs) 461 505 { 462 - struct pt_regs *old_regs = set_irq_regs(regs); 506 + struct irq_desc *desc; 463 507 unsigned int irq; 464 508 465 509 irq_enter(); ··· 475 519 */ 476 520 irq = ppc_md.get_irq(); 477 521 478 - /* We can hard enable interrupts now */ 522 + /* We can hard enable interrupts now to allow perf interrupts */ 479 523 may_hard_irq_enable(); 480 524 481 525 /* And finally process it */ 482 - if (irq != NO_IRQ) 483 
- handle_one_irq(irq); 484 - else 526 + if (unlikely(irq == NO_IRQ)) 485 527 __get_cpu_var(irq_stat).spurious_irqs++; 528 + else { 529 + desc = irq_to_desc(irq); 530 + if (likely(desc)) 531 + desc->handle_irq(irq, desc); 532 + } 486 533 487 534 trace_irq_exit(regs); 488 535 489 536 irq_exit(); 537 + } 538 + 539 + void do_IRQ(struct pt_regs *regs) 540 + { 541 + struct pt_regs *old_regs = set_irq_regs(regs); 542 + struct thread_info *curtp, *irqtp; 543 + 544 + /* Switch to the irq stack to handle this */ 545 + curtp = current_thread_info(); 546 + irqtp = hardirq_ctx[raw_smp_processor_id()]; 547 + 548 + /* Already there ? */ 549 + if (unlikely(curtp == irqtp)) { 550 + __do_irq(regs); 551 + set_irq_regs(old_regs); 552 + return; 553 + } 554 + 555 + /* Prepare the thread_info in the irq stack */ 556 + irqtp->task = curtp->task; 557 + irqtp->flags = 0; 558 + 559 + /* Copy the preempt_count so that the [soft]irq checks work. */ 560 + irqtp->preempt_count = curtp->preempt_count; 561 + 562 + /* Switch stack and call */ 563 + call_do_irq(regs, irqtp); 564 + 565 + /* Restore stack limit */ 566 + irqtp->task = NULL; 567 + 568 + /* Copy back updates to the thread_info */ 569 + if (irqtp->flags) 570 + set_bits(irqtp->flags, &curtp->flags); 571 + 490 572 set_irq_regs(old_regs); 491 573 } 492 574 ··· 586 592 memset((void *)softirq_ctx[i], 0, THREAD_SIZE); 587 593 tp = softirq_ctx[i]; 588 594 tp->cpu = i; 589 - tp->preempt_count = 0; 590 595 591 596 memset((void *)hardirq_ctx[i], 0, THREAD_SIZE); 592 597 tp = hardirq_ctx[i]; 593 598 tp->cpu = i; 594 - tp->preempt_count = HARDIRQ_OFFSET; 595 599 } 596 600 } 597 601 598 602 static inline void do_softirq_onstack(void) 599 603 { 600 604 struct thread_info *curtp, *irqtp; 601 - unsigned long saved_sp_limit = current->thread.ksp_limit; 602 605 603 606 curtp = current_thread_info(); 604 607 irqtp = softirq_ctx[smp_processor_id()]; 605 608 irqtp->task = curtp->task; 606 609 irqtp->flags = 0; 607 - current->thread.ksp_limit = (unsigned 
long)irqtp + 608 - _ALIGN_UP(sizeof(struct thread_info), 16); 609 610 call_do_softirq(irqtp); 610 - current->thread.ksp_limit = saved_sp_limit; 611 611 irqtp->task = NULL; 612 612 613 613 /* Set any flag that may have been set on the
+20 -5
arch/powerpc/kernel/misc_32.S
··· 36 36 37 37 .text 38 38 39 + /* 40 + * We store the saved ksp_limit in the unused part 41 + * of the STACK_FRAME_OVERHEAD 42 + */ 39 43 _GLOBAL(call_do_softirq) 40 44 mflr r0 41 45 stw r0,4(r1) 46 + lwz r10,THREAD+KSP_LIMIT(r2) 47 + addi r11,r3,THREAD_INFO_GAP 42 48 stwu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r3) 43 49 mr r1,r3 50 + stw r10,8(r1) 51 + stw r11,THREAD+KSP_LIMIT(r2) 44 52 bl __do_softirq 53 + lwz r10,8(r1) 45 54 lwz r1,0(r1) 46 55 lwz r0,4(r1) 56 + stw r10,THREAD+KSP_LIMIT(r2) 47 57 mtlr r0 48 58 blr 49 59 50 - _GLOBAL(call_handle_irq) 60 + _GLOBAL(call_do_irq) 51 61 mflr r0 52 62 stw r0,4(r1) 53 - mtctr r6 54 - stwu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r5) 55 - mr r1,r5 56 - bctrl 63 + lwz r10,THREAD+KSP_LIMIT(r2) 64 + addi r11,r3,THREAD_INFO_GAP 65 + stwu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r4) 66 + mr r1,r4 67 + stw r10,8(r1) 68 + stw r11,THREAD+KSP_LIMIT(r2) 69 + bl __do_irq 70 + lwz r10,8(r1) 57 71 lwz r1,0(r1) 58 72 lwz r0,4(r1) 73 + stw r10,THREAD+KSP_LIMIT(r2) 59 74 mtlr r0 60 75 blr 61 76
+4 -6
arch/powerpc/kernel/misc_64.S
··· 40 40 mtlr r0 41 41 blr 42 42 43 - _GLOBAL(call_handle_irq) 44 - ld r8,0(r6) 43 + _GLOBAL(call_do_irq) 45 44 mflr r0 46 45 std r0,16(r1) 47 - mtctr r8 48 - stdu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r5) 49 - mr r1,r5 50 - bctrl 46 + stdu r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r4) 47 + mr r1,r4 48 + bl .__do_irq 51 49 ld r1,0(r1) 52 50 ld r0,16(r1) 53 51 mtlr r0
+2 -1
arch/powerpc/kernel/process.c
··· 1000 1000 kregs = (struct pt_regs *) sp; 1001 1001 sp -= STACK_FRAME_OVERHEAD; 1002 1002 p->thread.ksp = sp; 1003 + #ifdef CONFIG_PPC32 1003 1004 p->thread.ksp_limit = (unsigned long)task_stack_page(p) + 1004 1005 _ALIGN_UP(sizeof(struct thread_info), 16); 1005 - 1006 + #endif 1006 1007 #ifdef CONFIG_HAVE_HW_BREAKPOINT 1007 1008 p->thread.ptrace_bps[0] = NULL; 1008 1009 #endif
+21
arch/powerpc/kernel/prom_init.c
··· 196 196 197 197 static cell_t __initdata regbuf[1024]; 198 198 199 + static bool rtas_has_query_cpu_stopped; 200 + 199 201 200 202 /* 201 203 * Error results ... some OF calls will return "-1" on error, some ··· 1576 1574 prom_setprop(rtas_node, "/rtas", "linux,rtas-entry", 1577 1575 &val, sizeof(val)); 1578 1576 1577 + /* Check if it supports "query-cpu-stopped-state" */ 1578 + if (prom_getprop(rtas_node, "query-cpu-stopped-state", 1579 + &val, sizeof(val)) != PROM_ERROR) 1580 + rtas_has_query_cpu_stopped = true; 1581 + 1579 1582 #if defined(CONFIG_PPC_POWERNV) && defined(__BIG_ENDIAN__) 1580 1583 /* PowerVN takeover hack */ 1581 1584 prom_rtas_data = base; ··· 1821 1814 unsigned long *acknowledge 1822 1815 = (void *) LOW_ADDR(__secondary_hold_acknowledge); 1823 1816 unsigned long secondary_hold = LOW_ADDR(__secondary_hold); 1817 + 1818 + /* 1819 + * On pseries, if RTAS supports "query-cpu-stopped-state", 1820 + * we skip this stage, the CPUs will be started by the 1821 + * kernel using RTAS. 1822 + */ 1823 + if ((of_platform == PLATFORM_PSERIES || 1824 + of_platform == PLATFORM_PSERIES_LPAR) && 1825 + rtas_has_query_cpu_stopped) { 1826 + prom_printf("prom_hold_cpus: skipped\n"); 1827 + return; 1828 + } 1824 1829 1825 1830 prom_debug("prom_hold_cpus: start...\n"); 1826 1831 prom_debug(" 1) spinloop = 0x%x\n", (unsigned long)spinloop); ··· 3030 3011 * On non-powermacs, put all CPUs in spin-loops. 3031 3012 * 3032 3013 * PowerMacs use a different mechanism to spin CPUs 3014 + * 3015 + * (This must be done after instanciating RTAS) 3033 3016 */ 3034 3017 if (of_platform != PLATFORM_POWERMAC && 3035 3018 of_platform != PLATFORM_OPAL)
+2 -1
arch/powerpc/lib/sstep.c
··· 1505 1505 */ 1506 1506 if ((ra == 1) && !(regs->msr & MSR_PR) \ 1507 1507 && (val3 >= (regs->gpr[1] - STACK_INT_FRAME_SIZE))) { 1508 + #ifdef CONFIG_PPC32 1508 1509 /* 1509 1510 * Check if we will touch kernel stack overflow 1510 1511 */ ··· 1514 1513 err = -EINVAL; 1515 1514 break; 1516 1515 } 1517 - 1516 + #endif /* CONFIG_PPC32 */ 1518 1517 /* 1519 1518 * Check if we already set since that means we'll 1520 1519 * lose the previous value.
+16 -10
arch/powerpc/platforms/pseries/smp.c
··· 233 233 234 234 alloc_bootmem_cpumask_var(&of_spin_mask); 235 235 236 - /* Mark threads which are still spinning in hold loops. */ 237 - if (cpu_has_feature(CPU_FTR_SMT)) { 238 - for_each_present_cpu(i) { 239 - if (cpu_thread_in_core(i) == 0) 240 - cpumask_set_cpu(i, of_spin_mask); 241 - } 242 - } else { 243 - cpumask_copy(of_spin_mask, cpu_present_mask); 244 - } 236 + /* 237 + * Mark threads which are still spinning in hold loops 238 + * 239 + * We know prom_init will not have started them if RTAS supports 240 + * query-cpu-stopped-state. 241 + */ 242 + if (rtas_token("query-cpu-stopped-state") == RTAS_UNKNOWN_SERVICE) { 243 + if (cpu_has_feature(CPU_FTR_SMT)) { 244 + for_each_present_cpu(i) { 245 + if (cpu_thread_in_core(i) == 0) 246 + cpumask_set_cpu(i, of_spin_mask); 247 + } 248 + } else 249 + cpumask_copy(of_spin_mask, cpu_present_mask); 245 250 246 - cpumask_clear_cpu(boot_cpuid, of_spin_mask); 251 + cpumask_clear_cpu(boot_cpuid, of_spin_mask); 252 + } 247 253 248 254 /* Non-lpar has additional take/give timebase */ 249 255 if (rtas_token("freeze-time-base") != RTAS_UNKNOWN_SERVICE) {
+1 -1
arch/s390/Kconfig
··· 93 93 select ARCH_INLINE_WRITE_UNLOCK_IRQ 94 94 select ARCH_INLINE_WRITE_UNLOCK_IRQRESTORE 95 95 select ARCH_SAVE_PAGE_KEYS if HIBERNATION 96 + select ARCH_USE_CMPXCHG_LOCKREF 96 97 select ARCH_WANT_IPC_PARSE_VERSION 97 98 select BUILDTIME_EXTABLE_SORT 98 99 select CLONE_BACKWARDS2 ··· 103 102 select GENERIC_TIME_VSYSCALL_OLD 104 103 select HAVE_ALIGNED_STRUCT_PAGE if SLUB 105 104 select HAVE_ARCH_JUMP_LABEL if !MARCH_G5 106 - select HAVE_ARCH_MUTEX_CPU_RELAX 107 105 select HAVE_ARCH_SECCOMP_FILTER 108 106 select HAVE_ARCH_TRACEHOOK 109 107 select HAVE_ARCH_TRANSPARENT_HUGEPAGE if 64BIT
-2
arch/s390/include/asm/mutex.h
··· 7 7 */ 8 8 9 9 #include <asm-generic/mutex-dec.h> 10 - 11 - #define arch_mutex_cpu_relax() barrier()
+2
arch/s390/include/asm/processor.h
··· 198 198 barrier(); 199 199 } 200 200 201 + #define arch_mutex_cpu_relax() barrier() 202 + 201 203 static inline void psw_set_key(unsigned int key) 202 204 { 203 205 asm volatile("spka 0(%0)" : : "d" (key));
+5
arch/s390/include/asm/spinlock.h
··· 44 44 extern int arch_spin_trylock_retry(arch_spinlock_t *); 45 45 extern void arch_spin_relax(arch_spinlock_t *lock); 46 46 47 + static inline int arch_spin_value_unlocked(arch_spinlock_t lock) 48 + { 49 + return lock.owner_cpu == 0; 50 + } 51 + 47 52 static inline void arch_spin_lock(arch_spinlock_t *lp) 48 53 { 49 54 int old;
+20 -11
arch/x86/include/asm/xen/page.h
··· 79 79 return get_phys_to_machine(pfn) != INVALID_P2M_ENTRY; 80 80 } 81 81 82 - static inline unsigned long mfn_to_pfn(unsigned long mfn) 82 + static inline unsigned long mfn_to_pfn_no_overrides(unsigned long mfn) 83 83 { 84 84 unsigned long pfn; 85 - int ret = 0; 85 + int ret; 86 86 87 87 if (xen_feature(XENFEAT_auto_translated_physmap)) 88 88 return mfn; 89 89 90 - if (unlikely(mfn >= machine_to_phys_nr)) { 91 - pfn = ~0; 92 - goto try_override; 93 - } 94 - pfn = 0; 90 + if (unlikely(mfn >= machine_to_phys_nr)) 91 + return ~0; 92 + 95 93 /* 96 94 * The array access can fail (e.g., device space beyond end of RAM). 97 95 * In such cases it doesn't matter what we return (we return garbage), 98 96 * but we must handle the fault without crashing! 99 97 */ 100 98 ret = __get_user(pfn, &machine_to_phys_mapping[mfn]); 101 - try_override: 102 - /* ret might be < 0 if there are no entries in the m2p for mfn */ 103 99 if (ret < 0) 104 - pfn = ~0; 105 - else if (get_phys_to_machine(pfn) != mfn) 100 + return ~0; 101 + 102 + return pfn; 103 + } 104 + 105 + static inline unsigned long mfn_to_pfn(unsigned long mfn) 106 + { 107 + unsigned long pfn; 108 + 109 + if (xen_feature(XENFEAT_auto_translated_physmap)) 110 + return mfn; 111 + 112 + pfn = mfn_to_pfn_no_overrides(mfn); 113 + if (get_phys_to_machine(pfn) != mfn) { 106 114 /* 107 115 * If this appears to be a foreign mfn (because the pfn 108 116 * doesn't map back to the mfn), then check the local override ··· 119 111 * m2p_find_override_pfn returns ~0 if it doesn't find anything. 120 112 */ 121 113 pfn = m2p_find_override_pfn(mfn, ~0); 114 + } 122 115 123 116 /* 124 117 * pfn is ~0 if there are no entries in the m2p for mfn or if the
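The hunk above factors the raw M2P table lookup into `mfn_to_pfn_no_overrides()` so that callers (the p2m changes later in this merge use it) can do the round-trip consistency check themselves before consulting the override table. A toy model of that split with plain arrays, values made up for illustration:

```c
#include <stdint.h>

#define TOY_INVALID (~0UL)
#define TOY_NR_MFN 4

/* Toy machine-to-phys and phys-to-machine tables standing in for the
 * real Xen m2p/p2m structures. mfn 2 is deliberately stale: its m2p
 * entry (20) does not map back to it. */
static unsigned long toy_m2p[TOY_NR_MFN] = { 10, 11, 20, 13 };
static unsigned long toy_p2m[32] = { [10] = 0, [11] = 1, [13] = 3, [20] = 7 };

/* Raw table lookup only, like mfn_to_pfn_no_overrides(): bounds
 * handling, but no consistency check. */
static unsigned long toy_mfn_to_pfn_no_overrides(unsigned long mfn)
{
	if (mfn >= TOY_NR_MFN)
		return TOY_INVALID;
	return toy_m2p[mfn];
}

/* Full lookup: reject entries that do not map back to the same mfn --
 * the same round-trip test the patched mfn_to_pfn() performs before
 * falling back to the override table (omitted here). */
static unsigned long toy_mfn_to_pfn(unsigned long mfn)
{
	unsigned long pfn = toy_mfn_to_pfn_no_overrides(mfn);

	if (pfn == TOY_INVALID || toy_p2m[pfn] != mfn)
		return TOY_INVALID; /* foreign or stale mapping */
	return pfn;
}
```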
+6 -6
arch/x86/kernel/cpu/perf_event.c
··· 1506 1506 err = amd_pmu_init(); 1507 1507 break; 1508 1508 default: 1509 - return 0; 1509 + err = -ENOTSUPP; 1510 1510 } 1511 1511 if (err != 0) { 1512 1512 pr_cont("no PMU driver, software events only.\n"); ··· 1883 1883 1884 1884 void arch_perf_update_userpage(struct perf_event_mmap_page *userpg, u64 now) 1885 1885 { 1886 - userpg->cap_usr_time = 0; 1887 - userpg->cap_usr_time_zero = 0; 1888 - userpg->cap_usr_rdpmc = x86_pmu.attr_rdpmc; 1886 + userpg->cap_user_time = 0; 1887 + userpg->cap_user_time_zero = 0; 1888 + userpg->cap_user_rdpmc = x86_pmu.attr_rdpmc; 1889 1889 userpg->pmc_width = x86_pmu.cntval_bits; 1890 1890 1891 1891 if (!boot_cpu_has(X86_FEATURE_CONSTANT_TSC)) ··· 1894 1894 if (!boot_cpu_has(X86_FEATURE_NONSTOP_TSC)) 1895 1895 return; 1896 1896 1897 - userpg->cap_usr_time = 1; 1897 + userpg->cap_user_time = 1; 1898 1898 userpg->time_mult = this_cpu_read(cyc2ns); 1899 1899 userpg->time_shift = CYC2NS_SCALE_FACTOR; 1900 1900 userpg->time_offset = this_cpu_read(cyc2ns_offset) - now; 1901 1901 1902 1902 if (sched_clock_stable && !check_tsc_disabled()) { 1903 - userpg->cap_usr_time_zero = 1; 1903 + userpg->cap_user_time_zero = 1; 1904 1904 userpg->time_zero = this_cpu_read(cyc2ns_offset); 1905 1905 } 1906 1906 }
+1
arch/x86/kernel/cpu/perf_event_intel.c
··· 2325 2325 break; 2326 2326 2327 2327 case 55: /* Atom 22nm "Silvermont" */ 2328 + case 77: /* Avoton "Silvermont" */ 2328 2329 memcpy(hw_cache_event_ids, slm_hw_cache_event_ids, 2329 2330 sizeof(hw_cache_event_ids)); 2330 2331 memcpy(hw_cache_extra_regs, slm_hw_cache_extra_regs,
+5 -5
arch/x86/kernel/cpu/perf_event_intel_uncore.c
··· 2706 2706 box->hrtimer.function = uncore_pmu_hrtimer; 2707 2707 } 2708 2708 2709 - struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type, int cpu) 2709 + static struct intel_uncore_box *uncore_alloc_box(struct intel_uncore_type *type, int node) 2710 2710 { 2711 2711 struct intel_uncore_box *box; 2712 2712 int i, size; 2713 2713 2714 2714 size = sizeof(*box) + type->num_shared_regs * sizeof(struct intel_uncore_extra_reg); 2715 2715 2716 - box = kzalloc_node(size, GFP_KERNEL, cpu_to_node(cpu)); 2716 + box = kzalloc_node(size, GFP_KERNEL, node); 2717 2717 if (!box) 2718 2718 return NULL; 2719 2719 ··· 3031 3031 struct intel_uncore_box *fake_box; 3032 3032 int ret = -EINVAL, n; 3033 3033 3034 - fake_box = uncore_alloc_box(pmu->type, smp_processor_id()); 3034 + fake_box = uncore_alloc_box(pmu->type, NUMA_NO_NODE); 3035 3035 if (!fake_box) 3036 3036 return -ENOMEM; 3037 3037 ··· 3294 3294 } 3295 3295 3296 3296 type = pci_uncores[UNCORE_PCI_DEV_TYPE(id->driver_data)]; 3297 - box = uncore_alloc_box(type, 0); 3297 + box = uncore_alloc_box(type, NUMA_NO_NODE); 3298 3298 if (!box) 3299 3299 return -ENOMEM; 3300 3300 ··· 3499 3499 if (pmu->func_id < 0) 3500 3500 pmu->func_id = j; 3501 3501 3502 - box = uncore_alloc_box(type, cpu); 3502 + box = uncore_alloc_box(type, cpu_to_node(cpu)); 3503 3503 if (!box) 3504 3504 return -ENOMEM; 3505 3505
+1
arch/x86/kernel/microcode_amd.c
··· 216 216 /* need to apply patch? */ 217 217 if (rev >= mc_amd->hdr.patch_id) { 218 218 c->microcode = rev; 219 + uci->cpu_sig.rev = rev; 219 220 return 0; 220 221 } 221 222
+17 -1
arch/x86/kernel/reboot.c
··· 352 352 }, 353 353 { /* Handle problems with rebooting on the Precision M6600. */ 354 354 .callback = set_pci_reboot, 355 - .ident = "Dell OptiPlex 990", 355 + .ident = "Dell Precision M6600", 356 356 .matches = { 357 357 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 358 358 DMI_MATCH(DMI_PRODUCT_NAME, "Precision M6600"), 359 + }, 360 + }, 361 + { /* Handle problems with rebooting on the Dell PowerEdge C6100. */ 362 + .callback = set_pci_reboot, 363 + .ident = "Dell PowerEdge C6100", 364 + .matches = { 365 + DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 366 + DMI_MATCH(DMI_PRODUCT_NAME, "C6100"), 367 + }, 368 + }, 369 + { /* Some C6100 machines were shipped with vendor being 'Dell'. */ 370 + .callback = set_pci_reboot, 371 + .ident = "Dell PowerEdge C6100", 372 + .matches = { 373 + DMI_MATCH(DMI_SYS_VENDOR, "Dell"), 374 + DMI_MATCH(DMI_PRODUCT_NAME, "C6100"), 359 375 }, 360 376 }, 361 377 { }
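The two PowerEdge C6100 entries exist because some units report the vendor as "Dell Inc." and others as just "Dell", and each quirk entry must list the strings it expects. A small hedged sketch of the table-driven matching pattern (the `toy_*` names and the substring semantics mirror how I understand `dmi_check_system()` to behave, they are not the kernel API):

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical reboot methods, standing in for set_pci_reboot() etc. */
enum toy_reboot { TOY_DEFAULT, TOY_PCI };

struct toy_quirk {
	const char *vendor;  /* substring to find in the DMI vendor string */
	const char *product; /* substring to find in the DMI product name */
	enum toy_reboot method;
};

/* Like the dmi_system_id table in reboot.c: first matching entry wins.
 * Matching is substring-based, so a bare "Dell" entry also covers
 * machines reporting "Dell Inc.". */
static const struct toy_quirk toy_quirks[] = {
	{ "Dell Inc.", "Precision M6600", TOY_PCI },
	{ "Dell Inc.", "C6100", TOY_PCI },
	{ "Dell", "C6100", TOY_PCI },
	{ NULL, NULL, TOY_DEFAULT }, /* terminator */
};

static enum toy_reboot toy_pick_reboot(const char *vendor, const char *product)
{
	const struct toy_quirk *q;

	for (q = toy_quirks; q->vendor; q++)
		if (strstr(vendor, q->vendor) && strstr(product, q->product))
			return q->method;
	return TOY_DEFAULT;
}
```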
+7 -4
arch/x86/platform/efi/efi.c
··· 912 912 913 913 for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) { 914 914 md = p; 915 - if (!(md->attribute & EFI_MEMORY_RUNTIME) && 916 - md->type != EFI_BOOT_SERVICES_CODE && 917 - md->type != EFI_BOOT_SERVICES_DATA) 918 - continue; 915 + if (!(md->attribute & EFI_MEMORY_RUNTIME)) { 916 + #ifdef CONFIG_X86_64 917 + if (md->type != EFI_BOOT_SERVICES_CODE && 918 + md->type != EFI_BOOT_SERVICES_DATA) 919 + #endif 920 + continue; 921 + } 919 922 920 923 size = md->num_pages << EFI_PAGE_SHIFT; 921 924 end = md->phys_addr + size;
+4 -6
arch/x86/xen/p2m.c
··· 879 879 unsigned long uninitialized_var(address); 880 880 unsigned level; 881 881 pte_t *ptep = NULL; 882 - int ret = 0; 883 882 884 883 pfn = page_to_pfn(page); 885 884 if (!PageHighMem(page)) { ··· 925 926 * frontend pages while they are being shared with the backend, 926 927 * because mfn_to_pfn (that ends up being called by GUPF) will 927 928 * return the backend pfn rather than the frontend pfn. */ 928 - ret = __get_user(pfn, &machine_to_phys_mapping[mfn]); 929 - if (ret == 0 && get_phys_to_machine(pfn) == mfn) 929 + pfn = mfn_to_pfn_no_overrides(mfn); 930 + if (get_phys_to_machine(pfn) == mfn) 930 931 set_phys_to_machine(pfn, FOREIGN_FRAME(mfn)); 931 932 932 933 return 0; ··· 941 942 unsigned long uninitialized_var(address); 942 943 unsigned level; 943 944 pte_t *ptep = NULL; 944 - int ret = 0; 945 945 946 946 pfn = page_to_pfn(page); 947 947 mfn = get_phys_to_machine(pfn); ··· 1027 1029 * the original pfn causes mfn_to_pfn(mfn) to return the frontend 1028 1030 * pfn again. */ 1029 1031 mfn &= ~FOREIGN_FRAME_BIT; 1030 - ret = __get_user(pfn, &machine_to_phys_mapping[mfn]); 1031 - if (ret == 0 && get_phys_to_machine(pfn) == FOREIGN_FRAME(mfn) && 1032 + pfn = mfn_to_pfn_no_overrides(mfn); 1033 + if (get_phys_to_machine(pfn) == FOREIGN_FRAME(mfn) && 1032 1034 m2p_find_override(mfn) == NULL) 1033 1035 set_phys_to_machine(pfn, mfn); 1034 1036
+24 -2
arch/x86/xen/spinlock.c
··· 259 259 } 260 260 261 261 262 + /* 263 + * Our init of PV spinlocks is split in two init functions due to us 264 + * using paravirt patching and jump labels patching and having to do 265 + * all of this before SMP code is invoked. 266 + * 267 + * The paravirt patching needs to be done _before_ the alternative asm code 268 + * is started, otherwise we would not patch the core kernel code. 269 + */ 262 270 void __init xen_init_spinlocks(void) 263 271 { 264 272 ··· 275 267 return; 276 268 } 277 269 278 - static_key_slow_inc(&paravirt_ticketlocks_enabled); 279 - 280 270 pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning); 281 271 pv_lock_ops.unlock_kick = xen_unlock_kick; 282 272 } 273 + 274 + /* 275 + * While the jump_label init code needs to happend _after_ the jump labels are 276 + * enabled and before SMP is started. Hence we use pre-SMP initcall level 277 + * init. We cannot do it in xen_init_spinlocks as that is done before 278 + * jump labels are activated. 279 + */ 280 + static __init int xen_init_spinlocks_jump(void) 281 + { 282 + if (!xen_pvspin) 283 + return 0; 284 + 285 + static_key_slow_inc(&paravirt_ticketlocks_enabled); 286 + return 0; 287 + } 288 + early_initcall(xen_init_spinlocks_jump); 283 289 284 290 static __init int xen_parse_nopvspin(char *arg) 285 291 {
+14 -10
drivers/acpi/acpi_ipmi.c
··· 39 39 #include <linux/ipmi.h> 40 40 #include <linux/device.h> 41 41 #include <linux/pnp.h> 42 + #include <linux/spinlock.h> 42 43 43 44 MODULE_AUTHOR("Zhao Yakui"); 44 45 MODULE_DESCRIPTION("ACPI IPMI Opregion driver"); ··· 58 57 struct list_head head; 59 58 /* the IPMI request message list */ 60 59 struct list_head tx_msg_list; 61 - struct mutex tx_msg_lock; 60 + spinlock_t tx_msg_lock; 62 61 acpi_handle handle; 63 62 struct pnp_dev *pnp_dev; 64 63 ipmi_user_t user_interface; ··· 148 147 struct kernel_ipmi_msg *msg; 149 148 struct acpi_ipmi_buffer *buffer; 150 149 struct acpi_ipmi_device *device; 150 + unsigned long flags; 151 151 152 152 msg = &tx_msg->tx_message; 153 153 /* ··· 179 177 180 178 /* Get the msgid */ 181 179 device = tx_msg->device; 182 - mutex_lock(&device->tx_msg_lock); 180 + spin_lock_irqsave(&device->tx_msg_lock, flags); 183 181 device->curr_msgid++; 184 182 tx_msg->tx_msgid = device->curr_msgid; 185 - mutex_unlock(&device->tx_msg_lock); 183 + spin_unlock_irqrestore(&device->tx_msg_lock, flags); 186 184 } 187 185 188 186 static void acpi_format_ipmi_response(struct acpi_ipmi_msg *msg, ··· 244 242 int msg_found = 0; 245 243 struct acpi_ipmi_msg *tx_msg; 246 244 struct pnp_dev *pnp_dev = ipmi_device->pnp_dev; 245 + unsigned long flags; 247 246 248 247 if (msg->user != ipmi_device->user_interface) { 249 248 dev_warn(&pnp_dev->dev, "Unexpected response is returned. 
" ··· 253 250 ipmi_free_recv_msg(msg); 254 251 return; 255 252 } 256 - mutex_lock(&ipmi_device->tx_msg_lock); 253 + spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags); 257 254 list_for_each_entry(tx_msg, &ipmi_device->tx_msg_list, head) { 258 255 if (msg->msgid == tx_msg->tx_msgid) { 259 256 msg_found = 1; ··· 261 258 } 262 259 } 263 260 264 - mutex_unlock(&ipmi_device->tx_msg_lock); 261 + spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags); 265 262 if (!msg_found) { 266 263 dev_warn(&pnp_dev->dev, "Unexpected response (msg id %ld) is " 267 264 "returned.\n", msg->msgid); ··· 381 378 struct acpi_ipmi_device *ipmi_device = handler_context; 382 379 int err, rem_time; 383 380 acpi_status status; 381 + unsigned long flags; 384 382 /* 385 383 * IPMI opregion message. 386 384 * IPMI message is firstly written to the BMC and system software ··· 399 395 return AE_NO_MEMORY; 400 396 401 397 acpi_format_ipmi_msg(tx_msg, address, value); 402 - mutex_lock(&ipmi_device->tx_msg_lock); 398 + spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags); 403 399 list_add_tail(&tx_msg->head, &ipmi_device->tx_msg_list); 404 - mutex_unlock(&ipmi_device->tx_msg_lock); 400 + spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags); 405 401 err = ipmi_request_settime(ipmi_device->user_interface, 406 402 &tx_msg->addr, 407 403 tx_msg->tx_msgid, ··· 417 413 status = AE_OK; 418 414 419 415 end_label: 420 - mutex_lock(&ipmi_device->tx_msg_lock); 416 + spin_lock_irqsave(&ipmi_device->tx_msg_lock, flags); 421 417 list_del(&tx_msg->head); 422 - mutex_unlock(&ipmi_device->tx_msg_lock); 418 + spin_unlock_irqrestore(&ipmi_device->tx_msg_lock, flags); 423 419 kfree(tx_msg); 424 420 return status; 425 421 } ··· 461 457 462 458 INIT_LIST_HEAD(&ipmi_device->head); 463 459 464 - mutex_init(&ipmi_device->tx_msg_lock); 460 + spin_lock_init(&ipmi_device->tx_msg_lock); 465 461 INIT_LIST_HEAD(&ipmi_device->tx_msg_list); 466 462 ipmi_install_space_handler(ipmi_device); 467 463
+1 -1
drivers/acpi/scan.c
··· 1121 1121 EXPORT_SYMBOL(acpi_bus_register_driver); 1122 1122 1123 1123 /** 1124 - * acpi_bus_unregister_driver - unregisters a driver with the APIC bus 1124 + * acpi_bus_unregister_driver - unregisters a driver with the ACPI bus 1125 1125 * @driver: driver to unregister 1126 1126 * 1127 1127 * Unregisters a driver with the ACPI bus. Searches the namespace for all
+1 -1
drivers/ata/sata_promise.c
··· 2 2 * sata_promise.c - Promise SATA 3 3 * 4 4 * Maintained by: Tejun Heo <tj@kernel.org> 5 - * Mikael Pettersson <mikpe@it.uu.se> 5 + * Mikael Pettersson 6 6 * Please ALWAYS copy linux-ide@vger.kernel.org 7 7 * on emails. 8 8 *
+7 -7
drivers/base/core.c
··· 2017 2017 */ 2018 2018 void device_shutdown(void) 2019 2019 { 2020 - struct device *dev; 2020 + struct device *dev, *parent; 2021 2021 2022 2022 spin_lock(&devices_kset->list_lock); 2023 2023 /* ··· 2034 2034 * prevent it from being freed because parent's 2035 2035 * lock is to be held 2036 2036 */ 2037 - get_device(dev->parent); 2037 + parent = get_device(dev->parent); 2038 2038 get_device(dev); 2039 2039 /* 2040 2040 * Make sure the device is off the kset list, in the ··· 2044 2044 spin_unlock(&devices_kset->list_lock); 2045 2045 2046 2046 /* hold lock to avoid race with probe/release */ 2047 - if (dev->parent) 2048 - device_lock(dev->parent); 2047 + if (parent) 2048 + device_lock(parent); 2049 2049 device_lock(dev); 2050 2050 2051 2051 /* Don't allow any more runtime suspends */ ··· 2063 2063 } 2064 2064 2065 2065 device_unlock(dev); 2066 - if (dev->parent) 2067 - device_unlock(dev->parent); 2066 + if (parent) 2067 + device_unlock(parent); 2068 2068 2069 2069 put_device(dev); 2070 - put_device(dev->parent); 2070 + put_device(parent); 2071 2071 2072 2072 spin_lock(&devices_kset->list_lock); 2073 2073 }
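The `device_shutdown()` change caches `parent = get_device(dev->parent)` once and uses that pointer for the later lock/unlock/put calls: the old code re-read `dev->parent` each time, so if the parent pointer changed or was cleared in between, the reference taken at the top was dropped against a different (or NULL) pointer. A toy refcount model of the pattern, with invented names:

```c
#include <stddef.h>

struct toy_dev {
	int refcount;
	struct toy_dev *parent;
};

static struct toy_dev *toy_get(struct toy_dev *d)
{
	if (d)
		d->refcount++;
	return d;
}

static void toy_put(struct toy_dev *d)
{
	if (d)
		d->refcount--;
}

/* Patched pattern: take the reference through a cached pointer and
 * drop it through the same pointer, so a concurrent change to
 * dev->parent cannot unbalance the get/put pair. */
static void toy_shutdown_one(struct toy_dev *dev)
{
	struct toy_dev *parent = toy_get(dev->parent);

	toy_get(dev);
	/* ... shutdown work; meanwhile dev->parent may be cleared ... */
	dev->parent = NULL;
	toy_put(dev);
	toy_put(parent); /* NOT toy_put(dev->parent), which is now NULL */
}
```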
+1
drivers/block/cciss.c
··· 1189 1189 int err; 1190 1190 u32 cp; 1191 1191 1192 + memset(&arg64, 0, sizeof(arg64)); 1192 1193 err = 0; 1193 1194 err |= 1194 1195 copy_from_user(&arg64.LUN_info, &arg32->LUN_info,
+1
drivers/block/cpqarray.c
··· 1193 1193 ida_pci_info_struct pciinfo; 1194 1194 1195 1195 if (!arg) return -EINVAL; 1196 + memset(&pciinfo, 0, sizeof(pciinfo)); 1196 1197 pciinfo.bus = host->pci_dev->bus->number; 1197 1198 pciinfo.dev_fn = host->pci_dev->devfn; 1198 1199 pciinfo.board_id = host->board_id;
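Both block-driver fixes above (`cciss` and `cpqarray`) zero a stack struct before filling it and copying it out to user space; without the `memset()`, compiler-inserted padding and any unset fields carry stale kernel stack bytes across the ioctl boundary. A userspace sketch of the same hygiene; the struct and field names are invented:

```c
#include <stddef.h>
#include <string.h>

/* Invented ioctl-reply struct with a padding hole between the
 * char and the int on typical ABIs. */
struct toy_info {
	unsigned char bus;
	/* usually 3 padding bytes live here */
	unsigned int board_id;
};

/* Fill the reply the way the patched drivers now do: zero everything
 * first so padding and forgotten fields cannot leak stale stack
 * contents to user space. */
static void toy_fill_info(struct toy_info *out)
{
	memset(out, 0, sizeof(*out));
	out->bus = 2;
	out->board_id = 0x40400e11;
}

/* Check helper: every byte between the end of `bus` and the start of
 * `board_id` (i.e. the padding) must be zero. */
static int toy_padding_is_zero(const struct toy_info *p)
{
	const unsigned char *b = (const unsigned char *)p;
	size_t i;

	for (i = 1; i < offsetof(struct toy_info, board_id); i++)
		if (b[i] != 0)
			return 0;
	return 1;
}
```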
-36
drivers/char/tpm/xen-tpmfront.c
··· 142 142 return length; 143 143 } 144 144 145 - ssize_t tpm_show_locality(struct device *dev, struct device_attribute *attr, 146 - char *buf) 147 - { 148 - struct tpm_chip *chip = dev_get_drvdata(dev); 149 - struct tpm_private *priv = TPM_VPRIV(chip); 150 - u8 locality = priv->shr->locality; 151 - 152 - return sprintf(buf, "%d\n", locality); 153 - } 154 - 155 - ssize_t tpm_store_locality(struct device *dev, struct device_attribute *attr, 156 - const char *buf, size_t len) 157 - { 158 - struct tpm_chip *chip = dev_get_drvdata(dev); 159 - struct tpm_private *priv = TPM_VPRIV(chip); 160 - u8 val; 161 - 162 - int rv = kstrtou8(buf, 0, &val); 163 - if (rv) 164 - return rv; 165 - 166 - priv->shr->locality = val; 167 - 168 - return len; 169 - } 170 - 171 145 static const struct file_operations vtpm_ops = { 172 146 .owner = THIS_MODULE, 173 147 .llseek = no_llseek, ··· 162 188 static DEVICE_ATTR(cancel, S_IWUSR | S_IWGRP, NULL, tpm_store_cancel); 163 189 static DEVICE_ATTR(durations, S_IRUGO, tpm_show_durations, NULL); 164 190 static DEVICE_ATTR(timeouts, S_IRUGO, tpm_show_timeouts, NULL); 165 - static DEVICE_ATTR(locality, S_IRUGO | S_IWUSR, tpm_show_locality, 166 - tpm_store_locality); 167 191 168 192 static struct attribute *vtpm_attrs[] = { 169 193 &dev_attr_pubek.attr, ··· 174 202 &dev_attr_cancel.attr, 175 203 &dev_attr_durations.attr, 176 204 &dev_attr_timeouts.attr, 177 - &dev_attr_locality.attr, 178 205 NULL, 179 206 }; 180 207 181 208 static struct attribute_group vtpm_attr_grp = { 182 209 .attrs = vtpm_attrs, 183 210 }; 184 - 185 - #define TPM_LONG_TIMEOUT (10 * 60 * HZ) 186 211 187 212 static const struct tpm_vendor_specific tpm_vtpm = { 188 213 .status = vtpm_status, ··· 192 223 .attr_group = &vtpm_attr_grp, 193 224 .miscdev = { 194 225 .fops = &vtpm_ops, 195 - }, 196 - .duration = { 197 - TPM_LONG_TIMEOUT, 198 - TPM_LONG_TIMEOUT, 199 - TPM_LONG_TIMEOUT, 200 226 }, 201 227 }; 202 228
+1
drivers/clocksource/Kconfig
··· 26 26 27 27 config ARMADA_370_XP_TIMER 28 28 bool 29 + select CLKSRC_OF 29 30 30 31 config ORION_TIMER 31 32 select CLKSRC_OF
+3
drivers/clocksource/clksrc-of.c
··· 30 30 clocksource_of_init_fn init_func; 31 31 32 32 for_each_matching_node_and_match(np, __clksrc_of_table, &match) { 33 + if (!of_device_is_available(np)) 34 + continue; 35 + 33 36 init_func = match->data; 34 37 init_func(np); 35 38 }
+1 -1
drivers/clocksource/em_sti.c
··· 301 301 ced->name = dev_name(&p->pdev->dev); 302 302 ced->features = CLOCK_EVT_FEAT_ONESHOT; 303 303 ced->rating = 200; 304 - ced->cpumask = cpumask_of(0); 304 + ced->cpumask = cpu_possible_mask; 305 305 ced->set_next_event = em_sti_clock_event_next; 306 306 ced->set_mode = em_sti_clock_event_mode; 307 307
+9 -1
drivers/clocksource/exynos_mct.c
··· 428 428 evt->irq); 429 429 return -EIO; 430 430 } 431 - irq_set_affinity(evt->irq, cpumask_of(cpu)); 432 431 } else { 433 432 enable_percpu_irq(mct_irqs[MCT_L0_IRQ], 0); 434 433 } ··· 448 449 unsigned long action, void *hcpu) 449 450 { 450 451 struct mct_clock_event_device *mevt; 452 + unsigned int cpu; 451 453 452 454 /* 453 455 * Grab cpu pointer in each case to avoid spurious ··· 458 458 case CPU_STARTING: 459 459 mevt = this_cpu_ptr(&percpu_mct_tick); 460 460 exynos4_local_timer_setup(&mevt->evt); 461 + break; 462 + case CPU_ONLINE: 463 + cpu = (unsigned long)hcpu; 464 + if (mct_int_type == MCT_INT_SPI) 465 + irq_set_affinity(mct_irqs[MCT_L0_IRQ + cpu], 466 + cpumask_of(cpu)); 461 467 break; 462 468 case CPU_DYING: 463 469 mevt = this_cpu_ptr(&percpu_mct_tick); ··· 506 500 &percpu_mct_tick); 507 501 WARN(err, "MCT: can't request IRQ %d (%d)\n", 508 502 mct_irqs[MCT_L0_IRQ], err); 503 + } else { 504 + irq_set_affinity(mct_irqs[MCT_L0_IRQ], cpumask_of(0)); 509 505 } 510 506 511 507 err = register_cpu_notifier(&exynos4_mct_cpu_nb);
+4
drivers/cpufreq/acpi-cpufreq.c
··· 986 986 { 987 987 int ret; 988 988 989 + /* don't keep reloading if cpufreq_driver exists */ 990 + if (cpufreq_get_current_driver()) 991 + return 0; 992 + 989 993 if (acpi_disabled) 990 994 return 0; 991 995
+3
drivers/cpufreq/cpufreq.c
··· 1460 1460 { 1461 1461 unsigned int ret_freq = 0; 1462 1462 1463 + if (cpufreq_disabled() || !cpufreq_driver) 1464 + return -ENOENT; 1465 + 1463 1466 if (!down_read_trylock(&cpufreq_rwsem)) 1464 1467 return 0; 1465 1468
+1 -1
drivers/cpufreq/exynos5440-cpufreq.c
··· 457 457 opp_free_cpufreq_table(dvfs_info->dev, &dvfs_info->freq_table); 458 458 err_put_node: 459 459 of_node_put(np); 460 - dev_err(dvfs_info->dev, "%s: failed initialization\n", __func__); 460 + dev_err(&pdev->dev, "%s: failed initialization\n", __func__); 461 461 return ret; 462 462 } 463 463
+12
drivers/dma/Kconfig
··· 154 154 This DMA controller transfers data from memory to peripheral fifo 155 155 or vice versa. It does not support memory to memory data transfer. 156 156 157 + config S3C24XX_DMAC 158 + tristate "Samsung S3C24XX DMA support" 159 + depends on ARCH_S3C24XX && !S3C24XX_DMA 160 + select DMA_ENGINE 161 + select DMA_VIRTUAL_CHANNELS 162 + help 163 + Support for the Samsung S3C24XX DMA controller driver. The 164 + DMA controller has multiple DMA channels which can be 165 + configured for different peripherals like audio, UART, SPI. 166 + The DMA controller can transfer data from memory to peripheral, 167 + peripheral to memory, peripheral to peripheral and memory to memory. 168 + 157 169 source "drivers/dma/sh/Kconfig" 158 170 159 171 config COH901318
+1
drivers/dma/Makefile
··· 30 30 obj-$(CONFIG_TI_EDMA) += edma.o 31 31 obj-$(CONFIG_STE_DMA40) += ste_dma40.o ste_dma40_ll.o 32 32 obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o 33 + obj-$(CONFIG_S3C24XX_DMAC) += s3c24xx-dma.o 33 34 obj-$(CONFIG_PL330_DMA) += pl330.o 34 35 obj-$(CONFIG_PCH_DMA) += pch_dma.o 35 36 obj-$(CONFIG_AMBA_PL08X) += amba-pl08x.o
+1350
drivers/dma/s3c24xx-dma.c
··· 1 + /* 2 + * S3C24XX DMA handling 3 + * 4 + * Copyright (c) 2013 Heiko Stuebner <heiko@sntech.de> 5 + * 6 + * based on amba-pl08x.c 7 + * 8 + * Copyright (c) 2006 ARM Ltd. 9 + * Copyright (c) 2010 ST-Ericsson SA 10 + * 11 + * Author: Peter Pearse <peter.pearse@arm.com> 12 + * Author: Linus Walleij <linus.walleij@stericsson.com> 13 + * 14 + * This program is free software; you can redistribute it and/or modify it 15 + * under the terms of the GNU General Public License as published by the Free 16 + * Software Foundation; either version 2 of the License, or (at your option) 17 + * any later version. 18 + * 19 + * The DMA controllers in S3C24XX SoCs have a varying number of DMA signals 20 + * that can be routed to any of the 4 to 8 hardware-channels. 21 + * 22 + * Therefore on these DMA controllers the number of channels 23 + * and the number of incoming DMA signals are two totally different things. 24 + * It is usually not possible to theoretically handle all physical signals, 25 + * so a multiplexing scheme with possible denial of use is necessary. 
26 + * 27 + * Open items: 28 + * - bursts 29 + */ 30 + 31 + #include <linux/platform_device.h> 32 + #include <linux/types.h> 33 + #include <linux/dmaengine.h> 34 + #include <linux/dma-mapping.h> 35 + #include <linux/interrupt.h> 36 + #include <linux/clk.h> 37 + #include <linux/module.h> 38 + #include <linux/slab.h> 39 + #include <linux/platform_data/dma-s3c24xx.h> 40 + 41 + #include "dmaengine.h" 42 + #include "virt-dma.h" 43 + 44 + #define MAX_DMA_CHANNELS 8 45 + 46 + #define S3C24XX_DISRC 0x00 47 + #define S3C24XX_DISRCC 0x04 48 + #define S3C24XX_DISRCC_INC_INCREMENT 0 49 + #define S3C24XX_DISRCC_INC_FIXED BIT(0) 50 + #define S3C24XX_DISRCC_LOC_AHB 0 51 + #define S3C24XX_DISRCC_LOC_APB BIT(1) 52 + 53 + #define S3C24XX_DIDST 0x08 54 + #define S3C24XX_DIDSTC 0x0c 55 + #define S3C24XX_DIDSTC_INC_INCREMENT 0 56 + #define S3C24XX_DIDSTC_INC_FIXED BIT(0) 57 + #define S3C24XX_DIDSTC_LOC_AHB 0 58 + #define S3C24XX_DIDSTC_LOC_APB BIT(1) 59 + #define S3C24XX_DIDSTC_INT_TC0 0 60 + #define S3C24XX_DIDSTC_INT_RELOAD BIT(2) 61 + 62 + #define S3C24XX_DCON 0x10 63 + 64 + #define S3C24XX_DCON_TC_MASK 0xfffff 65 + #define S3C24XX_DCON_DSZ_BYTE (0 << 20) 66 + #define S3C24XX_DCON_DSZ_HALFWORD (1 << 20) 67 + #define S3C24XX_DCON_DSZ_WORD (2 << 20) 68 + #define S3C24XX_DCON_DSZ_MASK (3 << 20) 69 + #define S3C24XX_DCON_DSZ_SHIFT 20 70 + #define S3C24XX_DCON_AUTORELOAD 0 71 + #define S3C24XX_DCON_NORELOAD BIT(22) 72 + #define S3C24XX_DCON_HWTRIG BIT(23) 73 + #define S3C24XX_DCON_HWSRC_SHIFT 24 74 + #define S3C24XX_DCON_SERV_SINGLE 0 75 + #define S3C24XX_DCON_SERV_WHOLE BIT(27) 76 + #define S3C24XX_DCON_TSZ_UNIT 0 77 + #define S3C24XX_DCON_TSZ_BURST4 BIT(28) 78 + #define S3C24XX_DCON_INT BIT(29) 79 + #define S3C24XX_DCON_SYNC_PCLK 0 80 + #define S3C24XX_DCON_SYNC_HCLK BIT(30) 81 + #define S3C24XX_DCON_DEMAND 0 82 + #define S3C24XX_DCON_HANDSHAKE BIT(31) 83 + 84 + #define S3C24XX_DSTAT 0x14 85 + #define S3C24XX_DSTAT_STAT_BUSY BIT(20) 86 + #define S3C24XX_DSTAT_CURRTC_MASK 0xfffff 87 + 
88 + #define S3C24XX_DMASKTRIG 0x20 89 + #define S3C24XX_DMASKTRIG_SWTRIG BIT(0) 90 + #define S3C24XX_DMASKTRIG_ON BIT(1) 91 + #define S3C24XX_DMASKTRIG_STOP BIT(2) 92 + 93 + #define S3C24XX_DMAREQSEL 0x24 94 + #define S3C24XX_DMAREQSEL_HW BIT(0) 95 + 96 + /* 97 + * S3C2410, S3C2440 and S3C2442 SoCs cannot select any physical channel 98 + * for a DMA source. Instead only specific channels are valid. 99 + * All of these SoCs have 4 physical channels and the number of request 100 + * source bits is 3. Additionally we also need 1 bit to mark the channel 101 + * as valid. 102 + * Therefore we separate the chansel element of the channel data into 4 103 + * parts of 4 bits each, to hold the information whether the channel is valid 104 + * and the hw request source to use. 105 + * 106 + * Example: 107 + * SDI is valid on channels 0, 2 and 3 - with varying hw request sources. 108 + * For it the chansel field would look like 109 + * 110 + * ((BIT(3) | 1) << 3 * 4) | // channel 3, with request source 1 111 + * ((BIT(3) | 2) << 2 * 4) | // channel 2, with request source 2 112 + * ((BIT(3) | 2) << 0 * 4) // channel 0, with request source 2 113 + */ 114 + #define S3C24XX_CHANSEL_WIDTH 4 115 + #define S3C24XX_CHANSEL_VALID BIT(3) 116 + #define S3C24XX_CHANSEL_REQ_MASK 7 117 + 118 + /* 119 + * struct soc_data - vendor-specific config parameters for individual SoCs 120 + * @stride: spacing between the registers of each channel 121 + * @has_reqsel: does the controller use the newer request-selection mechanism 122 + * @has_clocks: are controllable dma-clocks present 123 + */ 124 + struct soc_data { 125 + int stride; 126 + bool has_reqsel; 127 + bool has_clocks; 128 + }; 129 + 130 + /* 131 + * enum s3c24xx_dma_chan_state - holds the virtual channel states 132 + * @S3C24XX_DMA_CHAN_IDLE: the channel is idle 133 + * @S3C24XX_DMA_CHAN_RUNNING: the channel has allocated a physical transport 134 + * channel and is running a transfer on it 135 + * @S3C24XX_DMA_CHAN_WAITING: the channel is 
waiting for a physical transport 136 + * channel to become available (only pertains to memcpy channels) 137 + */ 138 + enum s3c24xx_dma_chan_state { 139 + S3C24XX_DMA_CHAN_IDLE, 140 + S3C24XX_DMA_CHAN_RUNNING, 141 + S3C24XX_DMA_CHAN_WAITING, 142 + }; 143 + 144 + /* 145 + * struct s3c24xx_sg - structure containing data per sg 146 + * @src_addr: src address of sg 147 + * @dst_addr: dst address of sg 148 + * @len: transfer len in bytes 149 + * @node: node for txd's dsg_list 150 + */ 151 + struct s3c24xx_sg { 152 + dma_addr_t src_addr; 153 + dma_addr_t dst_addr; 154 + size_t len; 155 + struct list_head node; 156 + }; 157 + 158 + /* 159 + * struct s3c24xx_txd - wrapper for struct dma_async_tx_descriptor 160 + * @vd: virtual DMA descriptor 161 + * @dsg_list: list of children sg's 162 + * @at: sg currently being transferred 163 + * @width: transfer width 164 + * @disrcc: value for source control register 165 + * @didstc: value for destination control register 166 + * @dcon: base value for dcon register 167 + */ 168 + struct s3c24xx_txd { 169 + struct virt_dma_desc vd; 170 + struct list_head dsg_list; 171 + struct list_head *at; 172 + u8 width; 173 + u32 disrcc; 174 + u32 didstc; 175 + u32 dcon; 176 + }; 177 + 178 + struct s3c24xx_dma_chan; 179 + 180 + /* 181 + * struct s3c24xx_dma_phy - holder for the physical channels 182 + * @id: physical index to this channel 183 + * @valid: does the channel have all required elements 184 + * @base: virtual memory base (remapped) for this channel 185 + * @irq: interrupt for this channel 186 + * @clk: clock for this channel 187 + * @lock: a lock to use when altering an instance of this struct 188 + * @serving: virtual channel currently being served by this physical channel 189 + * @host: a pointer to the host (internal use) 190 + */ 191 + struct s3c24xx_dma_phy { 192 + unsigned int id; 193 + bool valid; 194 + void __iomem *base; 195 + unsigned int irq; 196 + struct clk *clk; 197 + spinlock_t lock; 198 + struct s3c24xx_dma_chan 
*serving; 199 + struct s3c24xx_dma_engine *host; 200 + }; 201 + 202 + /* 203 + * struct s3c24xx_dma_chan - this structure wraps a DMA ENGINE channel 204 + * @id: the id of the channel 205 + * @name: name of the channel 206 + * @vc: wrapped virtual channel 207 + * @phy: the physical channel utilized by this channel, if there is one 208 + * @cfg: slave configuration, including the RX/TX addresses set at runtime 209 + * @at: active transaction on this channel 210 + * @lock: a lock for this channel data 211 + * @host: a pointer to the host (internal use) 212 + * @state: whether the channel is idle, running etc 213 + * @slave: whether this channel is a device (slave) or for memcpy 214 + */ 215 + struct s3c24xx_dma_chan { 216 + int id; 217 + const char *name; 218 + struct virt_dma_chan vc; 219 + struct s3c24xx_dma_phy *phy; 220 + struct dma_slave_config cfg; 221 + struct s3c24xx_txd *at; 222 + struct s3c24xx_dma_engine *host; 223 + enum s3c24xx_dma_chan_state state; 224 + bool slave; 225 + }; 226 + 227 + /* 228 + * struct s3c24xx_dma_engine - the local state holder for the S3C24XX 229 + * @pdev: the corresponding platform device 230 + * @pdata: platform data passed in from the platform/machine 231 + * @base: virtual memory base (remapped) 232 + * @slave: slave engine for this instance 233 + * @memcpy: memcpy engine for this instance 234 + * @phy_chans: array of data for the physical channels 235 + */ 236 + struct s3c24xx_dma_engine { 237 + struct platform_device *pdev; 238 + const struct s3c24xx_dma_platdata *pdata; 239 + struct soc_data *sdata; 240 + void __iomem *base; 241 + struct dma_device slave; 242 + struct dma_device memcpy; 243 + struct s3c24xx_dma_phy *phy_chans; 244 + }; 245 + 246 + /* 247 + * Physical channel handling 248 + */ 249 + 250 + /* 251 + * Check whether a certain channel is busy or not. 
252 + */ 253 + static int s3c24xx_dma_phy_busy(struct s3c24xx_dma_phy *phy) 254 + { 255 + unsigned int val = readl(phy->base + S3C24XX_DSTAT); 256 + return val & S3C24XX_DSTAT_STAT_BUSY; 257 + } 258 + 259 + static bool s3c24xx_dma_phy_valid(struct s3c24xx_dma_chan *s3cchan, 260 + struct s3c24xx_dma_phy *phy) 261 + { 262 + struct s3c24xx_dma_engine *s3cdma = s3cchan->host; 263 + const struct s3c24xx_dma_platdata *pdata = s3cdma->pdata; 264 + struct s3c24xx_dma_channel *cdata = &pdata->channels[s3cchan->id]; 265 + int phyvalid; 266 + 267 + /* every phy is valid for memcpy channels */ 268 + if (!s3cchan->slave) 269 + return true; 270 + 271 + /* On newer variants all phys can be used for all virtual channels */ 272 + if (s3cdma->sdata->has_reqsel) 273 + return true; 274 + 275 + phyvalid = (cdata->chansel >> (phy->id * S3C24XX_CHANSEL_WIDTH)); 276 + return (phyvalid & S3C24XX_CHANSEL_VALID) ? true : false; 277 + } 278 + 279 + /* 280 + * Allocate a physical channel for a virtual channel 281 + * 282 + * Try to locate a physical channel to be used for this transfer. If all 283 + * are taken return NULL and the requester will have to cope by using 284 + * some fallback PIO mode or retrying later. 
285 + */ 286 + static 287 + struct s3c24xx_dma_phy *s3c24xx_dma_get_phy(struct s3c24xx_dma_chan *s3cchan) 288 + { 289 + struct s3c24xx_dma_engine *s3cdma = s3cchan->host; 290 + const struct s3c24xx_dma_platdata *pdata = s3cdma->pdata; 291 + struct s3c24xx_dma_channel *cdata; 292 + struct s3c24xx_dma_phy *phy = NULL; 293 + unsigned long flags; 294 + int i; 295 + int ret; 296 + 297 + if (s3cchan->slave) 298 + cdata = &pdata->channels[s3cchan->id]; 299 + 300 + for (i = 0; i < s3cdma->pdata->num_phy_channels; i++) { 301 + phy = &s3cdma->phy_chans[i]; 302 + 303 + if (!phy->valid) 304 + continue; 305 + 306 + if (!s3c24xx_dma_phy_valid(s3cchan, phy)) 307 + continue; 308 + 309 + spin_lock_irqsave(&phy->lock, flags); 310 + 311 + if (!phy->serving) { 312 + phy->serving = s3cchan; 313 + spin_unlock_irqrestore(&phy->lock, flags); 314 + break; 315 + } 316 + 317 + spin_unlock_irqrestore(&phy->lock, flags); 318 + } 319 + 320 + /* No physical channel available, cope with it */ 321 + if (i == s3cdma->pdata->num_phy_channels) { 322 + dev_warn(&s3cdma->pdev->dev, "no phy channel available\n"); 323 + return NULL; 324 + } 325 + 326 + /* start the phy clock */ 327 + if (s3cdma->sdata->has_clocks) { 328 + ret = clk_enable(phy->clk); 329 + if (ret) { 330 + dev_err(&s3cdma->pdev->dev, "could not enable clock for channel %d, err %d\n", 331 + phy->id, ret); 332 + phy->serving = NULL; 333 + return NULL; 334 + } 335 + } 336 + 337 + return phy; 338 + } 339 + 340 + /* 341 + * Mark the physical channel as free. 342 + * 343 + * This drops the link between the physical and virtual channel. 344 + */ 345 + static inline void s3c24xx_dma_put_phy(struct s3c24xx_dma_phy *phy) 346 + { 347 + struct s3c24xx_dma_engine *s3cdma = phy->host; 348 + 349 + if (s3cdma->sdata->has_clocks) 350 + clk_disable(phy->clk); 351 + 352 + phy->serving = NULL; 353 + } 354 + 355 + /* 356 + * Stops the channel by writing the stop bit. 
357 + * This should not be used for an on-going transfer, but as a method of 358 + * shutting down a channel (eg, when it's no longer used) or terminating a 359 + * transfer. 360 + */ 361 + static void s3c24xx_dma_terminate_phy(struct s3c24xx_dma_phy *phy) 362 + { 363 + writel(S3C24XX_DMASKTRIG_STOP, phy->base + S3C24XX_DMASKTRIG); 364 + } 365 + 366 + /* 367 + * Virtual channel handling 368 + */ 369 + 370 + static inline 371 + struct s3c24xx_dma_chan *to_s3c24xx_dma_chan(struct dma_chan *chan) 372 + { 373 + return container_of(chan, struct s3c24xx_dma_chan, vc.chan); 374 + } 375 + 376 + static u32 s3c24xx_dma_getbytes_chan(struct s3c24xx_dma_chan *s3cchan) 377 + { 378 + struct s3c24xx_dma_phy *phy = s3cchan->phy; 379 + struct s3c24xx_txd *txd = s3cchan->at; 380 + u32 tc = readl(phy->base + S3C24XX_DSTAT) & S3C24XX_DSTAT_CURRTC_MASK; 381 + 382 + return tc * txd->width; 383 + } 384 + 385 + static int s3c24xx_dma_set_runtime_config(struct s3c24xx_dma_chan *s3cchan, 386 + struct dma_slave_config *config) 387 + { 388 + if (!s3cchan->slave) 389 + return -EINVAL; 390 + 391 + /* Reject definitely invalid configurations */ 392 + if (config->src_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES || 393 + config->dst_addr_width == DMA_SLAVE_BUSWIDTH_8_BYTES) 394 + return -EINVAL; 395 + 396 + s3cchan->cfg = *config; 397 + 398 + return 0; 399 + } 400 + 401 + /* 402 + * Transfer handling 403 + */ 404 + 405 + static inline 406 + struct s3c24xx_txd *to_s3c24xx_txd(struct dma_async_tx_descriptor *tx) 407 + { 408 + return container_of(tx, struct s3c24xx_txd, vd.tx); 409 + } 410 + 411 + static struct s3c24xx_txd *s3c24xx_dma_get_txd(void) 412 + { 413 + struct s3c24xx_txd *txd = kzalloc(sizeof(*txd), GFP_NOWAIT); 414 + 415 + if (txd) { 416 + INIT_LIST_HEAD(&txd->dsg_list); 417 + txd->dcon = S3C24XX_DCON_INT | S3C24XX_DCON_NORELOAD; 418 + } 419 + 420 + return txd; 421 + } 422 + 423 + static void s3c24xx_dma_free_txd(struct s3c24xx_txd *txd) 424 + { 425 + struct s3c24xx_sg *dsg, *_dsg; 426 + 427 
+ list_for_each_entry_safe(dsg, _dsg, &txd->dsg_list, node) { 428 + list_del(&dsg->node); 429 + kfree(dsg); 430 + } 431 + 432 + kfree(txd); 433 + } 434 + 435 + static void s3c24xx_dma_start_next_sg(struct s3c24xx_dma_chan *s3cchan, 436 + struct s3c24xx_txd *txd) 437 + { 438 + struct s3c24xx_dma_engine *s3cdma = s3cchan->host; 439 + struct s3c24xx_dma_phy *phy = s3cchan->phy; 440 + const struct s3c24xx_dma_platdata *pdata = s3cdma->pdata; 441 + struct s3c24xx_sg *dsg = list_entry(txd->at, struct s3c24xx_sg, node); 442 + u32 dcon = txd->dcon; 443 + u32 val; 444 + 445 + /* transfer-size and -count from len and width */ 446 + switch (txd->width) { 447 + case 1: 448 + dcon |= S3C24XX_DCON_DSZ_BYTE | dsg->len; 449 + break; 450 + case 2: 451 + dcon |= S3C24XX_DCON_DSZ_HALFWORD | (dsg->len / 2); 452 + break; 453 + case 4: 454 + dcon |= S3C24XX_DCON_DSZ_WORD | (dsg->len / 4); 455 + break; 456 + } 457 + 458 + if (s3cchan->slave) { 459 + struct s3c24xx_dma_channel *cdata = 460 + &pdata->channels[s3cchan->id]; 461 + 462 + if (s3cdma->sdata->has_reqsel) { 463 + writel_relaxed((cdata->chansel << 1) | 464 + S3C24XX_DMAREQSEL_HW, 465 + phy->base + S3C24XX_DMAREQSEL); 466 + } else { 467 + int csel = cdata->chansel >> (phy->id * 468 + S3C24XX_CHANSEL_WIDTH); 469 + 470 + csel &= S3C24XX_CHANSEL_REQ_MASK; 471 + dcon |= csel << S3C24XX_DCON_HWSRC_SHIFT; 472 + dcon |= S3C24XX_DCON_HWTRIG; 473 + } 474 + } else { 475 + if (s3cdma->sdata->has_reqsel) 476 + writel_relaxed(0, phy->base + S3C24XX_DMAREQSEL); 477 + } 478 + 479 + writel_relaxed(dsg->src_addr, phy->base + S3C24XX_DISRC); 480 + writel_relaxed(txd->disrcc, phy->base + S3C24XX_DISRCC); 481 + writel_relaxed(dsg->dst_addr, phy->base + S3C24XX_DIDST); 482 + writel_relaxed(txd->didstc, phy->base + S3C24XX_DIDSTC); 483 + writel_relaxed(dcon, phy->base + S3C24XX_DCON); 484 + 485 + val = readl_relaxed(phy->base + S3C24XX_DMASKTRIG); 486 + val &= ~S3C24XX_DMASKTRIG_STOP; 487 + val |= S3C24XX_DMASKTRIG_ON; 488 + 489 + /* trigger the dma 
operation for memcpy transfers */ 490 + if (!s3cchan->slave) 491 + val |= S3C24XX_DMASKTRIG_SWTRIG; 492 + 493 + writel(val, phy->base + S3C24XX_DMASKTRIG); 494 + } 495 + 496 + /* 497 + * Set the initial DMA register values and start first sg. 498 + */ 499 + static void s3c24xx_dma_start_next_txd(struct s3c24xx_dma_chan *s3cchan) 500 + { 501 + struct s3c24xx_dma_phy *phy = s3cchan->phy; 502 + struct virt_dma_desc *vd = vchan_next_desc(&s3cchan->vc); 503 + struct s3c24xx_txd *txd = to_s3c24xx_txd(&vd->tx); 504 + 505 + list_del(&txd->vd.node); 506 + 507 + s3cchan->at = txd; 508 + 509 + /* Wait for channel inactive */ 510 + while (s3c24xx_dma_phy_busy(phy)) 511 + cpu_relax(); 512 + 513 + /* point to the first element of the sg list */ 514 + txd->at = txd->dsg_list.next; 515 + s3c24xx_dma_start_next_sg(s3cchan, txd); 516 + } 517 + 518 + static void s3c24xx_dma_free_txd_list(struct s3c24xx_dma_engine *s3cdma, 519 + struct s3c24xx_dma_chan *s3cchan) 520 + { 521 + LIST_HEAD(head); 522 + 523 + vchan_get_all_descriptors(&s3cchan->vc, &head); 524 + vchan_dma_desc_free_list(&s3cchan->vc, &head); 525 + } 526 + 527 + /* 528 + * Try to allocate a physical channel. When successful, assign it to 529 + * this virtual channel, and initiate the next descriptor. The 530 + * virtual channel lock must be held at this point. 
531 + */ 532 + static void s3c24xx_dma_phy_alloc_and_start(struct s3c24xx_dma_chan *s3cchan) 533 + { 534 + struct s3c24xx_dma_engine *s3cdma = s3cchan->host; 535 + struct s3c24xx_dma_phy *phy; 536 + 537 + phy = s3c24xx_dma_get_phy(s3cchan); 538 + if (!phy) { 539 + dev_dbg(&s3cdma->pdev->dev, "no physical channel available for xfer on %s\n", 540 + s3cchan->name); 541 + s3cchan->state = S3C24XX_DMA_CHAN_WAITING; 542 + return; 543 + } 544 + 545 + dev_dbg(&s3cdma->pdev->dev, "allocated physical channel %d for xfer on %s\n", 546 + phy->id, s3cchan->name); 547 + 548 + s3cchan->phy = phy; 549 + s3cchan->state = S3C24XX_DMA_CHAN_RUNNING; 550 + 551 + s3c24xx_dma_start_next_txd(s3cchan); 552 + } 553 + 554 + static void s3c24xx_dma_phy_reassign_start(struct s3c24xx_dma_phy *phy, 555 + struct s3c24xx_dma_chan *s3cchan) 556 + { 557 + struct s3c24xx_dma_engine *s3cdma = s3cchan->host; 558 + 559 + dev_dbg(&s3cdma->pdev->dev, "reassigned physical channel %d for xfer on %s\n", 560 + phy->id, s3cchan->name); 561 + 562 + /* 563 + * We do this without taking the lock; we're really only concerned 564 + * about whether this pointer is NULL or not, and we're guaranteed 565 + * that this will only be called when it _already_ is non-NULL. 566 + */ 567 + phy->serving = s3cchan; 568 + s3cchan->phy = phy; 569 + s3cchan->state = S3C24XX_DMA_CHAN_RUNNING; 570 + s3c24xx_dma_start_next_txd(s3cchan); 571 + } 572 + 573 + /* 574 + * Free a physical DMA channel, potentially reallocating it to another 575 + * virtual channel if we have any pending. 576 + */ 577 + static void s3c24xx_dma_phy_free(struct s3c24xx_dma_chan *s3cchan) 578 + { 579 + struct s3c24xx_dma_engine *s3cdma = s3cchan->host; 580 + struct s3c24xx_dma_chan *p, *next; 581 + 582 + retry: 583 + next = NULL; 584 + 585 + /* Find a waiting virtual channel for the next transfer. 
*/ 586 + list_for_each_entry(p, &s3cdma->memcpy.channels, vc.chan.device_node) 587 + if (p->state == S3C24XX_DMA_CHAN_WAITING) { 588 + next = p; 589 + break; 590 + } 591 + 592 + if (!next) { 593 + list_for_each_entry(p, &s3cdma->slave.channels, 594 + vc.chan.device_node) 595 + if (p->state == S3C24XX_DMA_CHAN_WAITING && 596 + s3c24xx_dma_phy_valid(p, s3cchan->phy)) { 597 + next = p; 598 + break; 599 + } 600 + } 601 + 602 + /* Ensure that the physical channel is stopped */ 603 + s3c24xx_dma_terminate_phy(s3cchan->phy); 604 + 605 + if (next) { 606 + bool success; 607 + 608 + /* 609 + * Eww. We know this isn't going to deadlock 610 + * but lockdep probably doesn't. 611 + */ 612 + spin_lock(&next->vc.lock); 613 + /* Re-check the state now that we have the lock */ 614 + success = next->state == S3C24XX_DMA_CHAN_WAITING; 615 + if (success) 616 + s3c24xx_dma_phy_reassign_start(s3cchan->phy, next); 617 + spin_unlock(&next->vc.lock); 618 + 619 + /* If the state changed, try to find another channel */ 620 + if (!success) 621 + goto retry; 622 + } else { 623 + /* No more jobs, so free up the physical channel */ 624 + s3c24xx_dma_put_phy(s3cchan->phy); 625 + } 626 + 627 + s3cchan->phy = NULL; 628 + s3cchan->state = S3C24XX_DMA_CHAN_IDLE; 629 + } 630 + 631 + static void s3c24xx_dma_unmap_buffers(struct s3c24xx_txd *txd) 632 + { 633 + struct device *dev = txd->vd.tx.chan->device->dev; 634 + struct s3c24xx_sg *dsg; 635 + 636 + if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_SRC_UNMAP)) { 637 + if (txd->vd.tx.flags & DMA_COMPL_SRC_UNMAP_SINGLE) 638 + list_for_each_entry(dsg, &txd->dsg_list, node) 639 + dma_unmap_single(dev, dsg->src_addr, dsg->len, 640 + DMA_TO_DEVICE); 641 + else { 642 + list_for_each_entry(dsg, &txd->dsg_list, node) 643 + dma_unmap_page(dev, dsg->src_addr, dsg->len, 644 + DMA_TO_DEVICE); 645 + } 646 + } 647 + 648 + if (!(txd->vd.tx.flags & DMA_COMPL_SKIP_DEST_UNMAP)) { 649 + if (txd->vd.tx.flags & DMA_COMPL_DEST_UNMAP_SINGLE) 650 + list_for_each_entry(dsg, 
&txd->dsg_list, node) 651 + dma_unmap_single(dev, dsg->dst_addr, dsg->len, 652 + DMA_FROM_DEVICE); 653 + else 654 + list_for_each_entry(dsg, &txd->dsg_list, node) 655 + dma_unmap_page(dev, dsg->dst_addr, dsg->len, 656 + DMA_FROM_DEVICE); 657 + } 658 + } 659 + 660 + static void s3c24xx_dma_desc_free(struct virt_dma_desc *vd) 661 + { 662 + struct s3c24xx_txd *txd = to_s3c24xx_txd(&vd->tx); 663 + struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(vd->tx.chan); 664 + 665 + if (!s3cchan->slave) 666 + s3c24xx_dma_unmap_buffers(txd); 667 + 668 + s3c24xx_dma_free_txd(txd); 669 + } 670 + 671 + static irqreturn_t s3c24xx_dma_irq(int irq, void *data) 672 + { 673 + struct s3c24xx_dma_phy *phy = data; 674 + struct s3c24xx_dma_chan *s3cchan = phy->serving; 675 + struct s3c24xx_txd *txd; 676 + 677 + dev_dbg(&phy->host->pdev->dev, "interrupt on channel %d\n", phy->id); 678 + 679 + /* 680 + * Interrupts happen to notify the completion of a transfer and the 681 + * channel should have moved into its stop state already on its own. 682 + * Therefore interrupts on channels not bound to a virtual channel 683 + * should never happen. Nevertheless send a terminate command to the 684 + * channel if the unlikely case happens. 685 + */ 686 + if (unlikely(!s3cchan)) { 687 + dev_err(&phy->host->pdev->dev, "interrupt on unused channel %d\n", 688 + phy->id); 689 + 690 + s3c24xx_dma_terminate_phy(phy); 691 + 692 + return IRQ_HANDLED; 693 + } 694 + 695 + spin_lock(&s3cchan->vc.lock); 696 + txd = s3cchan->at; 697 + if (txd) { 698 + /* when more sg's are in this txd, start the next one */ 699 + if (!list_is_last(txd->at, &txd->dsg_list)) { 700 + txd->at = txd->at->next; 701 + s3c24xx_dma_start_next_sg(s3cchan, txd); 702 + } else { 703 + s3cchan->at = NULL; 704 + vchan_cookie_complete(&txd->vd); 705 + 706 + /* 707 + * And start the next descriptor (if any), 708 + * otherwise free this channel. 
709 + */ 710 + if (vchan_next_desc(&s3cchan->vc)) 711 + s3c24xx_dma_start_next_txd(s3cchan); 712 + else 713 + s3c24xx_dma_phy_free(s3cchan); 714 + } 715 + } 716 + spin_unlock(&s3cchan->vc.lock); 717 + 718 + return IRQ_HANDLED; 719 + } 720 + 721 + /* 722 + * The DMA ENGINE API 723 + */ 724 + 725 + static int s3c24xx_dma_control(struct dma_chan *chan, enum dma_ctrl_cmd cmd, 726 + unsigned long arg) 727 + { 728 + struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan); 729 + struct s3c24xx_dma_engine *s3cdma = s3cchan->host; 730 + unsigned long flags; 731 + int ret = 0; 732 + 733 + spin_lock_irqsave(&s3cchan->vc.lock, flags); 734 + 735 + switch (cmd) { 736 + case DMA_SLAVE_CONFIG: 737 + ret = s3c24xx_dma_set_runtime_config(s3cchan, 738 + (struct dma_slave_config *)arg); 739 + break; 740 + case DMA_TERMINATE_ALL: 741 + if (!s3cchan->phy && !s3cchan->at) { 742 + dev_err(&s3cdma->pdev->dev, "trying to terminate already stopped channel %d\n", 743 + s3cchan->id); 744 + ret = -EINVAL; 745 + break; 746 + } 747 + 748 + s3cchan->state = S3C24XX_DMA_CHAN_IDLE; 749 + 750 + /* Mark physical channel as free */ 751 + if (s3cchan->phy) 752 + s3c24xx_dma_phy_free(s3cchan); 753 + 754 + /* Dequeue current job */ 755 + if (s3cchan->at) { 756 + s3c24xx_dma_desc_free(&s3cchan->at->vd); 757 + s3cchan->at = NULL; 758 + } 759 + 760 + /* Dequeue jobs not yet fired as well */ 761 + s3c24xx_dma_free_txd_list(s3cdma, s3cchan); 762 + break; 763 + default: 764 + /* Unknown command */ 765 + ret = -ENXIO; 766 + break; 767 + } 768 + 769 + spin_unlock_irqrestore(&s3cchan->vc.lock, flags); 770 + 771 + return ret; 772 + } 773 + 774 + static int s3c24xx_dma_alloc_chan_resources(struct dma_chan *chan) 775 + { 776 + return 0; 777 + } 778 + 779 + static void s3c24xx_dma_free_chan_resources(struct dma_chan *chan) 780 + { 781 + /* Ensure all queued descriptors are freed */ 782 + vchan_free_chan_resources(to_virt_chan(chan)); 783 + } 784 + 785 + static enum dma_status s3c24xx_dma_tx_status(struct dma_chan 
*chan, 786 + dma_cookie_t cookie, struct dma_tx_state *txstate) 787 + { 788 + struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan); 789 + struct s3c24xx_txd *txd; 790 + struct s3c24xx_sg *dsg; 791 + struct virt_dma_desc *vd; 792 + unsigned long flags; 793 + enum dma_status ret; 794 + size_t bytes = 0; 795 + 796 + spin_lock_irqsave(&s3cchan->vc.lock, flags); 797 + ret = dma_cookie_status(chan, cookie, txstate); 798 + if (ret == DMA_SUCCESS) { 799 + spin_unlock_irqrestore(&s3cchan->vc.lock, flags); 800 + return ret; 801 + } 802 + 803 + /* 804 + * There's no point calculating the residue if there's 805 + * no txstate to store the value. 806 + */ 807 + if (!txstate) { 808 + spin_unlock_irqrestore(&s3cchan->vc.lock, flags); 809 + return ret; 810 + } 811 + 812 + vd = vchan_find_desc(&s3cchan->vc, cookie); 813 + if (vd) { 814 + /* On the issued list, so hasn't been processed yet */ 815 + txd = to_s3c24xx_txd(&vd->tx); 816 + 817 + list_for_each_entry(dsg, &txd->dsg_list, node) 818 + bytes += dsg->len; 819 + } else { 820 + /* 821 + * Currently running, so sum over the pending sg's and 822 + * the currently active one. 
823 + */ 824 + txd = s3cchan->at; 825 + 826 + dsg = list_entry(txd->at, struct s3c24xx_sg, node); 827 + list_for_each_entry_from(dsg, &txd->dsg_list, node) 828 + bytes += dsg->len; 829 + 830 + bytes += s3c24xx_dma_getbytes_chan(s3cchan); 831 + } 832 + spin_unlock_irqrestore(&s3cchan->vc.lock, flags); 833 + 834 + /* 835 + * This cookie is not complete yet. 836 + * Get the number of bytes left in the active transactions and queue. 837 + */ 838 + dma_set_residue(txstate, bytes); 839 + 840 + /* Whether waiting or running, we're in progress */ 841 + return ret; 842 + } 843 + 844 + /* 845 + * Initialize a descriptor to be used by memcpy submit 846 + */ 847 + static struct dma_async_tx_descriptor *s3c24xx_dma_prep_memcpy( 848 + struct dma_chan *chan, dma_addr_t dest, dma_addr_t src, 849 + size_t len, unsigned long flags) 850 + { 851 + struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan); 852 + struct s3c24xx_dma_engine *s3cdma = s3cchan->host; 853 + struct s3c24xx_txd *txd; 854 + struct s3c24xx_sg *dsg; 855 + int src_mod, dest_mod; 856 + 857 + dev_dbg(&s3cdma->pdev->dev, "prepare memcpy of %zu bytes from %s\n", 858 + len, s3cchan->name); 859 + 860 + if ((len & S3C24XX_DCON_TC_MASK) != len) { 861 + dev_err(&s3cdma->pdev->dev, "memcpy size %zu too large\n", len); 862 + return NULL; 863 + } 864 + 865 + txd = s3c24xx_dma_get_txd(); 866 + if (!txd) 867 + return NULL; 868 + 869 + dsg = kzalloc(sizeof(*dsg), GFP_NOWAIT); 870 + if (!dsg) { 871 + s3c24xx_dma_free_txd(txd); 872 + return NULL; 873 + } 874 + list_add_tail(&dsg->node, &txd->dsg_list); 875 + 876 + dsg->src_addr = src; 877 + dsg->dst_addr = dest; 878 + dsg->len = len; 879 + 880 + /* 881 + * Determine a suitable transfer width. 882 + * The DMA controller cannot fetch/store information which is not 883 + * naturally aligned on the bus, i.e., a 4 byte fetch must start at 884 + * an address divisible by 4 - more generally addr % width must be 0. 
885 + */ 886 + src_mod = src % 4; 887 + dest_mod = dest % 4; 888 + switch (len % 4) { 889 + case 0: 890 + txd->width = (src_mod == 0 && dest_mod == 0) ? 4 : 1; 891 + break; 892 + case 2: 893 + txd->width = ((src_mod == 2 || src_mod == 0) && 894 + (dest_mod == 2 || dest_mod == 0)) ? 2 : 1; 895 + break; 896 + default: 897 + txd->width = 1; 898 + break; 899 + } 900 + 901 + txd->disrcc = S3C24XX_DISRCC_LOC_AHB | S3C24XX_DISRCC_INC_INCREMENT; 902 + txd->didstc = S3C24XX_DIDSTC_LOC_AHB | S3C24XX_DIDSTC_INC_INCREMENT; 903 + txd->dcon |= S3C24XX_DCON_DEMAND | S3C24XX_DCON_SYNC_HCLK | 904 + S3C24XX_DCON_SERV_WHOLE; 905 + 906 + return vchan_tx_prep(&s3cchan->vc, &txd->vd, flags); 907 + } 908 + 909 + static struct dma_async_tx_descriptor *s3c24xx_dma_prep_slave_sg( 910 + struct dma_chan *chan, struct scatterlist *sgl, 911 + unsigned int sg_len, enum dma_transfer_direction direction, 912 + unsigned long flags, void *context) 913 + { 914 + struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan); 915 + struct s3c24xx_dma_engine *s3cdma = s3cchan->host; 916 + const struct s3c24xx_dma_platdata *pdata = s3cdma->pdata; 917 + struct s3c24xx_dma_channel *cdata = &pdata->channels[s3cchan->id]; 918 + struct s3c24xx_txd *txd; 919 + struct s3c24xx_sg *dsg; 920 + struct scatterlist *sg; 921 + dma_addr_t slave_addr; 922 + u32 hwcfg = 0; 923 + int tmp; 924 + 925 + dev_dbg(&s3cdma->pdev->dev, "prepare transaction of %d bytes from %s\n", 926 + sg_dma_len(sgl), s3cchan->name); 927 + 928 + txd = s3c24xx_dma_get_txd(); 929 + if (!txd) 930 + return NULL; 931 + 932 + if (cdata->handshake) 933 + txd->dcon |= S3C24XX_DCON_HANDSHAKE; 934 + 935 + switch (cdata->bus) { 936 + case S3C24XX_DMA_APB: 937 + txd->dcon |= S3C24XX_DCON_SYNC_PCLK; 938 + hwcfg |= S3C24XX_DISRCC_LOC_APB; 939 + break; 940 + case S3C24XX_DMA_AHB: 941 + txd->dcon |= S3C24XX_DCON_SYNC_HCLK; 942 + hwcfg |= S3C24XX_DISRCC_LOC_AHB; 943 + break; 944 + } 945 + 946 + /* 947 + * Always assume our peripheral destination is a fixed 948 + 
* address in memory. 949 + */ 950 + hwcfg |= S3C24XX_DISRCC_INC_FIXED; 951 + 952 + /* 953 + * Individual dma operations are requested by the slave, 954 + * so serve only single atomic operations (S3C24XX_DCON_SERV_SINGLE). 955 + */ 956 + txd->dcon |= S3C24XX_DCON_SERV_SINGLE; 957 + 958 + if (direction == DMA_MEM_TO_DEV) { 959 + txd->disrcc = S3C24XX_DISRCC_LOC_AHB | 960 + S3C24XX_DISRCC_INC_INCREMENT; 961 + txd->didstc = hwcfg; 962 + slave_addr = s3cchan->cfg.dst_addr; 963 + txd->width = s3cchan->cfg.dst_addr_width; 964 + } else if (direction == DMA_DEV_TO_MEM) { 965 + txd->disrcc = hwcfg; 966 + txd->didstc = S3C24XX_DIDSTC_LOC_AHB | 967 + S3C24XX_DIDSTC_INC_INCREMENT; 968 + slave_addr = s3cchan->cfg.src_addr; 969 + txd->width = s3cchan->cfg.src_addr_width; 970 + } else { 971 + s3c24xx_dma_free_txd(txd); 972 + dev_err(&s3cdma->pdev->dev, 973 + "direction %d unsupported\n", direction); 974 + return NULL; 975 + } 976 + 977 + for_each_sg(sgl, sg, sg_len, tmp) { 978 + dsg = kzalloc(sizeof(*dsg), GFP_NOWAIT); 979 + if (!dsg) { 980 + s3c24xx_dma_free_txd(txd); 981 + return NULL; 982 + } 983 + list_add_tail(&dsg->node, &txd->dsg_list); 984 + 985 + dsg->len = sg_dma_len(sg); 986 + if (direction == DMA_MEM_TO_DEV) { 987 + dsg->src_addr = sg_dma_address(sg); 988 + dsg->dst_addr = slave_addr; 989 + } else { /* DMA_DEV_TO_MEM */ 990 + dsg->src_addr = slave_addr; 991 + dsg->dst_addr = sg_dma_address(sg); 992 + } 994 + } 995 + 996 + return vchan_tx_prep(&s3cchan->vc, &txd->vd, flags); 997 + } 998 + 999 + /* 1000 + * Slave transactions callback to the slave device to allow 1001 + * synchronization of slave DMA signals with the DMAC enable 1002 + */ 1003 + static void s3c24xx_dma_issue_pending(struct dma_chan *chan) 1004 + { 1005 + struct s3c24xx_dma_chan *s3cchan = to_s3c24xx_dma_chan(chan); 1006 + unsigned long flags; 1007 + 1008 + spin_lock_irqsave(&s3cchan->vc.lock, flags); 1009 + if (vchan_issue_pending(&s3cchan->vc)) { 1010 + if (!s3cchan->phy && s3cchan->state 
!= S3C24XX_DMA_CHAN_WAITING) 1011 + s3c24xx_dma_phy_alloc_and_start(s3cchan); 1012 + } 1013 + spin_unlock_irqrestore(&s3cchan->vc.lock, flags); 1014 + } 1015 + 1016 + /* 1017 + * Bringup and teardown 1018 + */ 1019 + 1020 + /* 1021 + * Initialise the DMAC memcpy/slave channels. 1022 + * Make a local wrapper to hold required data 1023 + */ 1024 + static int s3c24xx_dma_init_virtual_channels(struct s3c24xx_dma_engine *s3cdma, 1025 + struct dma_device *dmadev, unsigned int channels, bool slave) 1026 + { 1027 + struct s3c24xx_dma_chan *chan; 1028 + int i; 1029 + 1030 + INIT_LIST_HEAD(&dmadev->channels); 1031 + 1032 + /* 1033 + * Register as many memcpy channels as we have physical channels; 1034 + * we won't always be able to use all but the code will have 1035 + * to cope with that situation. 1036 + */ 1037 + for (i = 0; i < channels; i++) { 1038 + chan = devm_kzalloc(dmadev->dev, sizeof(*chan), GFP_KERNEL); 1039 + if (!chan) { 1040 + dev_err(dmadev->dev, 1041 + "%s no memory for channel\n", __func__); 1042 + return -ENOMEM; 1043 + } 1044 + 1045 + chan->id = i; 1046 + chan->host = s3cdma; 1047 + chan->state = S3C24XX_DMA_CHAN_IDLE; 1048 + 1049 + if (slave) { 1050 + chan->slave = true; 1051 + chan->name = kasprintf(GFP_KERNEL, "slave%d", i); 1052 + if (!chan->name) 1053 + return -ENOMEM; 1054 + } else { 1055 + chan->name = kasprintf(GFP_KERNEL, "memcpy%d", i); 1056 + if (!chan->name) 1057 + return -ENOMEM; 1058 + } 1059 + dev_dbg(dmadev->dev, 1060 + "initialize virtual channel \"%s\"\n", 1061 + chan->name); 1062 + 1063 + chan->vc.desc_free = s3c24xx_dma_desc_free; 1064 + vchan_init(&chan->vc, dmadev); 1065 + } 1066 + dev_info(dmadev->dev, "initialized %d virtual %s channels\n", 1067 + i, slave ? 
"slave" : "memcpy"); 1068 + return i; 1069 + } 1070 + 1071 + static void s3c24xx_dma_free_virtual_channels(struct dma_device *dmadev) 1072 + { 1073 + struct s3c24xx_dma_chan *chan = NULL; 1074 + struct s3c24xx_dma_chan *next; 1075 + 1076 + list_for_each_entry_safe(chan, 1077 + next, &dmadev->channels, vc.chan.device_node) 1078 + list_del(&chan->vc.chan.device_node); 1079 + } 1080 + 1081 + /* s3c2410, s3c2440 and s3c2442 have a 0x40 stride without separate clocks */ 1082 + static struct soc_data soc_s3c2410 = { 1083 + .stride = 0x40, 1084 + .has_reqsel = false, 1085 + .has_clocks = false, 1086 + }; 1087 + 1088 + /* s3c2412 and s3c2413 have a 0x40 stride and dmareqsel mechanism */ 1089 + static struct soc_data soc_s3c2412 = { 1090 + .stride = 0x40, 1091 + .has_reqsel = true, 1092 + .has_clocks = true, 1093 + }; 1094 + 1095 + /* s3c2443 and following have a 0x100 stride and dmareqsel mechanism */ 1096 + static struct soc_data soc_s3c2443 = { 1097 + .stride = 0x100, 1098 + .has_reqsel = true, 1099 + .has_clocks = true, 1100 + }; 1101 + 1102 + static struct platform_device_id s3c24xx_dma_driver_ids[] = { 1103 + { 1104 + .name = "s3c2410-dma", 1105 + .driver_data = (kernel_ulong_t)&soc_s3c2410, 1106 + }, { 1107 + .name = "s3c2412-dma", 1108 + .driver_data = (kernel_ulong_t)&soc_s3c2412, 1109 + }, { 1110 + .name = "s3c2443-dma", 1111 + .driver_data = (kernel_ulong_t)&soc_s3c2443, 1112 + }, 1113 + { }, 1114 + }; 1115 + 1116 + static struct soc_data *s3c24xx_dma_get_soc_data(struct platform_device *pdev) 1117 + { 1118 + return (struct soc_data *) 1119 + platform_get_device_id(pdev)->driver_data; 1120 + } 1121 + 1122 + static int s3c24xx_dma_probe(struct platform_device *pdev) 1123 + { 1124 + const struct s3c24xx_dma_platdata *pdata = dev_get_platdata(&pdev->dev); 1125 + struct s3c24xx_dma_engine *s3cdma; 1126 + struct soc_data *sdata; 1127 + struct resource *res; 1128 + int ret; 1129 + int i; 1130 + 1131 + if (!pdata) { 1132 + dev_err(&pdev->dev, "platform data missing\n"); 
1133 + return -ENODEV;
1134 + }
1135 +
1136 + /* Basic sanity check */
1137 + if (pdata->num_phy_channels > MAX_DMA_CHANNELS) {
1138 + dev_err(&pdev->dev, "too many dma channels %d, max %d\n",
1139 + pdata->num_phy_channels, MAX_DMA_CHANNELS);
1140 + return -EINVAL;
1141 + }
1142 +
1143 + sdata = s3c24xx_dma_get_soc_data(pdev);
1144 + if (!sdata)
1145 + return -EINVAL;
1146 +
1147 + s3cdma = devm_kzalloc(&pdev->dev, sizeof(*s3cdma), GFP_KERNEL);
1148 + if (!s3cdma)
1149 + return -ENOMEM;
1150 +
1151 + s3cdma->pdev = pdev;
1152 + s3cdma->pdata = pdata;
1153 + s3cdma->sdata = sdata;
1154 +
1155 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
1156 + s3cdma->base = devm_ioremap_resource(&pdev->dev, res);
1157 + if (IS_ERR(s3cdma->base))
1158 + return PTR_ERR(s3cdma->base);
1159 +
1160 + s3cdma->phy_chans = devm_kzalloc(&pdev->dev,
1161 + sizeof(struct s3c24xx_dma_phy) *
1162 + pdata->num_phy_channels,
1163 + GFP_KERNEL);
1164 + if (!s3cdma->phy_chans)
1165 + return -ENOMEM;
1166 +
1167 + /* acquire irqs and clocks for all physical channels */
1168 + for (i = 0; i < pdata->num_phy_channels; i++) {
1169 + struct s3c24xx_dma_phy *phy = &s3cdma->phy_chans[i];
1170 + char clk_name[6];
1171 +
1172 + phy->id = i;
1173 + phy->base = s3cdma->base + (i * sdata->stride);
1174 + phy->host = s3cdma;
1175 +
1176 + phy->irq = platform_get_irq(pdev, i);
1177 + if (phy->irq < 0) {
1178 + dev_err(&pdev->dev, "failed to get irq %d, err %d\n",
1179 + i, phy->irq);
1180 + continue;
1181 + }
1182 +
1183 + ret = devm_request_irq(&pdev->dev, phy->irq, s3c24xx_dma_irq,
1184 + 0, pdev->name, phy);
1185 + if (ret) {
1186 + dev_err(&pdev->dev, "Unable to request irq for channel %d, error %d\n",
1187 + i, ret);
1188 + continue;
1189 + }
1190 +
1191 + if (sdata->has_clocks) {
1192 + sprintf(clk_name, "dma.%d", i);
1193 + phy->clk = devm_clk_get(&pdev->dev, clk_name);
1194 + if (IS_ERR(phy->clk) && sdata->has_clocks) {
1195 + dev_err(&pdev->dev, "unable to acquire clock for channel %d, error %lu",
1196 + i, PTR_ERR(phy->clk));
1197 + continue;
1198 + }
1199 +
1200 + ret = clk_prepare(phy->clk);
1201 + if (ret) {
1202 + dev_err(&pdev->dev, "clock for phy %d failed, error %d\n",
1203 + i, ret);
1204 + continue;
1205 + }
1206 + }
1207 +
1208 + spin_lock_init(&phy->lock);
1209 + phy->valid = true;
1210 +
1211 + dev_dbg(&pdev->dev, "physical channel %d is %s\n",
1212 + i, s3c24xx_dma_phy_busy(phy) ? "BUSY" : "FREE");
1213 + }
1214 +
1215 + /* Initialize memcpy engine */
1216 + dma_cap_set(DMA_MEMCPY, s3cdma->memcpy.cap_mask);
1217 + dma_cap_set(DMA_PRIVATE, s3cdma->memcpy.cap_mask);
1218 + s3cdma->memcpy.dev = &pdev->dev;
1219 + s3cdma->memcpy.device_alloc_chan_resources =
1220 + s3c24xx_dma_alloc_chan_resources;
1221 + s3cdma->memcpy.device_free_chan_resources =
1222 + s3c24xx_dma_free_chan_resources;
1223 + s3cdma->memcpy.device_prep_dma_memcpy = s3c24xx_dma_prep_memcpy;
1224 + s3cdma->memcpy.device_tx_status = s3c24xx_dma_tx_status;
1225 + s3cdma->memcpy.device_issue_pending = s3c24xx_dma_issue_pending;
1226 + s3cdma->memcpy.device_control = s3c24xx_dma_control;
1227 +
1228 + /* Initialize slave engine for SoC internal dedicated peripherals */
1229 + dma_cap_set(DMA_SLAVE, s3cdma->slave.cap_mask);
1230 + dma_cap_set(DMA_PRIVATE, s3cdma->slave.cap_mask);
1231 + s3cdma->slave.dev = &pdev->dev;
1232 + s3cdma->slave.device_alloc_chan_resources =
1233 + s3c24xx_dma_alloc_chan_resources;
1234 + s3cdma->slave.device_free_chan_resources =
1235 + s3c24xx_dma_free_chan_resources;
1236 + s3cdma->slave.device_tx_status = s3c24xx_dma_tx_status;
1237 + s3cdma->slave.device_issue_pending = s3c24xx_dma_issue_pending;
1238 + s3cdma->slave.device_prep_slave_sg = s3c24xx_dma_prep_slave_sg;
1239 + s3cdma->slave.device_control = s3c24xx_dma_control;
1240 +
1241 + /* Register as many memcpy channels as there are physical channels */
1242 + ret = s3c24xx_dma_init_virtual_channels(s3cdma, &s3cdma->memcpy,
1243 + pdata->num_phy_channels, false);
1244 + if (ret <= 0) {
1245 +
dev_warn(&pdev->dev,
1246 + "%s failed to enumerate memcpy channels - %d\n",
1247 + __func__, ret);
1248 + goto err_memcpy;
1249 + }
1250 +
1251 + /* Register slave channels */
1252 + ret = s3c24xx_dma_init_virtual_channels(s3cdma, &s3cdma->slave,
1253 + pdata->num_channels, true);
1254 + if (ret <= 0) {
1255 + dev_warn(&pdev->dev,
1256 + "%s failed to enumerate slave channels - %d\n",
1257 + __func__, ret);
1258 + goto err_slave;
1259 + }
1260 +
1261 + ret = dma_async_device_register(&s3cdma->memcpy);
1262 + if (ret) {
1263 + dev_warn(&pdev->dev,
1264 + "%s failed to register memcpy as an async device - %d\n",
1265 + __func__, ret);
1266 + goto err_memcpy_reg;
1267 + }
1268 +
1269 + ret = dma_async_device_register(&s3cdma->slave);
1270 + if (ret) {
1271 + dev_warn(&pdev->dev,
1272 + "%s failed to register slave as an async device - %d\n",
1273 + __func__, ret);
1274 + goto err_slave_reg;
1275 + }
1276 +
1277 + platform_set_drvdata(pdev, s3cdma);
1278 + dev_info(&pdev->dev, "Loaded dma driver with %d physical channels\n",
1279 + pdata->num_phy_channels);
1280 +
1281 + return 0;
1282 +
1283 + err_slave_reg:
1284 + dma_async_device_unregister(&s3cdma->memcpy);
1285 + err_memcpy_reg:
1286 + s3c24xx_dma_free_virtual_channels(&s3cdma->slave);
1287 + err_slave:
1288 + s3c24xx_dma_free_virtual_channels(&s3cdma->memcpy);
1289 + err_memcpy:
1290 + if (sdata->has_clocks)
1291 + for (i = 0; i < pdata->num_phy_channels; i++) {
1292 + struct s3c24xx_dma_phy *phy = &s3cdma->phy_chans[i];
1293 + if (phy->valid)
1294 + clk_unprepare(phy->clk);
1295 + }
1296 +
1297 + return ret;
1298 + }
1299 +
1300 + static int s3c24xx_dma_remove(struct platform_device *pdev)
1301 + {
1302 + const struct s3c24xx_dma_platdata *pdata = dev_get_platdata(&pdev->dev);
1303 + struct s3c24xx_dma_engine *s3cdma = platform_get_drvdata(pdev);
1304 + struct soc_data *sdata = s3c24xx_dma_get_soc_data(pdev);
1305 + int i;
1306 +
1307 + dma_async_device_unregister(&s3cdma->slave);
1308 +
dma_async_device_unregister(&s3cdma->memcpy);
1309 +
1310 + s3c24xx_dma_free_virtual_channels(&s3cdma->slave);
1311 + s3c24xx_dma_free_virtual_channels(&s3cdma->memcpy);
1312 +
1313 + if (sdata->has_clocks)
1314 + for (i = 0; i < pdata->num_phy_channels; i++) {
1315 + struct s3c24xx_dma_phy *phy = &s3cdma->phy_chans[i];
1316 + if (phy->valid)
1317 + clk_unprepare(phy->clk);
1318 + }
1319 +
1320 + return 0;
1321 + }
1322 +
1323 + static struct platform_driver s3c24xx_dma_driver = {
1324 + .driver = {
1325 + .name = "s3c24xx-dma",
1326 + .owner = THIS_MODULE,
1327 + },
1328 + .id_table = s3c24xx_dma_driver_ids,
1329 + .probe = s3c24xx_dma_probe,
1330 + .remove = s3c24xx_dma_remove,
1331 + };
1332 +
1333 + module_platform_driver(s3c24xx_dma_driver);
1334 +
1335 + bool s3c24xx_dma_filter(struct dma_chan *chan, void *param)
1336 + {
1337 + struct s3c24xx_dma_chan *s3cchan;
1338 +
1339 + if (chan->device->dev->driver != &s3c24xx_dma_driver.driver)
1340 + return false;
1341 +
1342 + s3cchan = to_s3c24xx_dma_chan(chan);
1343 +
1344 + return s3cchan->id == (int)param;
1345 + }
1346 + EXPORT_SYMBOL(s3c24xx_dma_filter);
1347 +
1348 + MODULE_DESCRIPTION("S3C24XX DMA Driver");
1349 + MODULE_AUTHOR("Heiko Stuebner");
1350 + MODULE_LICENSE("GPL v2");
+1 -2
drivers/gpu/drm/i2c/tda998x_drv.c
···
707 707 reg_write(encoder, REG_VIP_CNTRL_2, priv->vip_cntrl_2);
708 708 break;
709 709 case DRM_MODE_DPMS_OFF:
710 - /* disable audio and video ports */
711 - reg_write(encoder, REG_ENA_AP, 0x00);
710 + /* disable video ports */
712 711 reg_write(encoder, REG_ENA_VP_0, 0x00);
713 712 reg_write(encoder, REG_ENA_VP_1, 0x00);
714 713 reg_write(encoder, REG_ENA_VP_2, 0x00);
+4 -4
drivers/gpu/drm/i915/i915_gem.c
···
4800 4800
4801 4801 if (!mutex_trylock(&dev->struct_mutex)) {
4802 4802 if (!mutex_is_locked_by(&dev->struct_mutex, current))
4803 - return SHRINK_STOP;
4803 + return 0;
4804 4804
4805 4805 if (dev_priv->mm.shrinker_no_lock_stealing)
4806 - return SHRINK_STOP;
4806 + return 0;
4807 4807
4808 4808 unlock = false;
4809 4809 }
···
4901 4901
4902 4902 if (!mutex_trylock(&dev->struct_mutex)) {
4903 4903 if (!mutex_is_locked_by(&dev->struct_mutex, current))
4904 - return 0;
4904 + return SHRINK_STOP;
4905 4905
4906 4906 if (dev_priv->mm.shrinker_no_lock_stealing)
4907 - return 0;
4907 + return SHRINK_STOP;
4908 4908
4909 4909 unlock = false;
4910 4910 }
+4 -2
drivers/gpu/drm/i915/i915_gpu_error.c
···
143 143
144 144 /* Seek the first printf which is hits start position */
145 145 if (e->pos < e->start) {
146 - len = vsnprintf(NULL, 0, f, args);
147 - if (!__i915_error_seek(e, len))
146 + va_list tmp;
147 +
148 + va_copy(tmp, args);
149 + if (!__i915_error_seek(e, vsnprintf(NULL, 0, f, tmp)))
148 150 return;
149 151 }
150 152
+4
drivers/gpu/drm/i915/intel_display.c
···
4775 4775
4776 4776 pipeconf = 0;
4777 4777
4778 + if (dev_priv->quirks & QUIRK_PIPEA_FORCE &&
4779 + I915_READ(PIPECONF(intel_crtc->pipe)) & PIPECONF_ENABLE)
4780 + pipeconf |= PIPECONF_ENABLE;
4781 +
4778 4782 if (intel_crtc->pipe == 0 && INTEL_INFO(dev)->gen < 4) {
4779 4783 /* Enable pixel doubling when the dot clock is > 90% of the (display)
4780 4784 * core speed.
+12 -1
drivers/gpu/drm/i915/intel_dp.c
···
588 588 DRM_DEBUG_KMS("aux_ch native nack\n");
589 589 return -EREMOTEIO;
590 590 case AUX_NATIVE_REPLY_DEFER:
591 - udelay(100);
591 + /*
592 + * For now, just give more slack to branch devices. We
593 + * could check the DPCD for I2C bit rate capabilities,
594 + * and if available, adjust the interval. We could also
595 + * be more careful with DP-to-Legacy adapters where a
596 + * long legacy cable may force very low I2C bit rates.
597 + */
598 + if (intel_dp->dpcd[DP_DOWNSTREAMPORT_PRESENT] &
599 + DP_DWN_STRM_PORT_PRESENT)
600 + usleep_range(500, 600);
601 + else
602 + usleep_range(300, 400);
592 603 continue;
593 604 default:
594 605 DRM_ERROR("aux_ch invalid native reply 0x%02x\n",
+8
drivers/gpu/drm/i915/intel_tv.c
···
916 916 DRM_DEBUG_KMS("forcing bpc to 8 for TV\n");
917 917 pipe_config->pipe_bpp = 8*3;
918 918
919 + /* TV has its own notion of sync and other mode flags, so clear them. */
920 + pipe_config->adjusted_mode.flags = 0;
921 +
922 + /*
923 + * FIXME: We don't check whether the input mode is actually what we want
924 + * or whether userspace is doing something stupid.
925 + */
926 +
919 927 return true;
920 928 }
921 929
-2
drivers/gpu/drm/msm/mdp4/mdp4_kms.c
···
19 19 #include "msm_drv.h"
20 20 #include "mdp4_kms.h"
21 21
22 - #include <mach/iommu.h>
23 -
24 22 static struct mdp4_platform_config *mdp4_get_config(struct platform_device *dev);
25 23
26 24 static int mdp4_hw_init(struct msm_kms *kms)
+4 -4
drivers/gpu/drm/msm/msm_drv.c
···
18 18 #include "msm_drv.h"
19 19 #include "msm_gpu.h"
20 20
21 - #include <mach/iommu.h>
22 -
23 21 static void msm_fb_output_poll_changed(struct drm_device *dev)
24 22 {
25 23 struct msm_drm_private *priv = dev->dev_private;
···
60 62 int i, ret;
61 63
62 64 for (i = 0; i < cnt; i++) {
65 + /* TODO maybe some day msm iommu won't require this hack: */
66 + struct device *msm_iommu_get_ctx(const char *ctx_name);
63 67 struct device *ctx = msm_iommu_get_ctx(names[i]);
64 68 if (!ctx)
65 69 continue;
···
199 199 * imx drm driver on iMX5
200 200 */
201 201 dev_err(dev->dev, "failed to load kms\n");
202 - ret = PTR_ERR(priv->kms);
202 + ret = PTR_ERR(kms);
203 203 goto fail;
204 204 }
205 205
···
697 697 .gem_vm_ops = &vm_ops,
698 698 .dumb_create = msm_gem_dumb_create,
699 699 .dumb_map_offset = msm_gem_dumb_map_offset,
700 - .dumb_destroy = msm_gem_dumb_destroy,
700 + .dumb_destroy = drm_gem_dumb_destroy,
701 701 #ifdef CONFIG_DEBUG_FS
702 702 .debugfs_init = msm_debugfs_init,
703 703 .debugfs_cleanup = msm_debugfs_cleanup,
-7
drivers/gpu/drm/msm/msm_gem.c
···
319 319 MSM_BO_SCANOUT | MSM_BO_WC, &args->handle);
320 320 }
321 321
322 - int msm_gem_dumb_destroy(struct drm_file *file, struct drm_device *dev,
323 - uint32_t handle)
324 - {
325 - /* No special work needed, drop the reference and see what falls out */
326 - return drm_gem_handle_delete(file, handle);
327 - }
328 -
329 322 int msm_gem_dumb_map_offset(struct drm_file *file, struct drm_device *dev,
330 323 uint32_t handle, uint64_t *offset)
331 324 {
+51
drivers/gpu/drm/radeon/btc_dpm.c
···
1168 1168 { 25000, 30000, RADEON_SCLK_UP }
1169 1169 };
1170 1170
1171 + void btc_get_max_clock_from_voltage_dependency_table(struct radeon_clock_voltage_dependency_table *table,
1172 + u32 *max_clock)
1173 + {
1174 + u32 i, clock = 0;
1175 +
1176 + if ((table == NULL) || (table->count == 0)) {
1177 + *max_clock = clock;
1178 + return;
1179 + }
1180 +
1181 + for (i = 0; i < table->count; i++) {
1182 + if (clock < table->entries[i].clk)
1183 + clock = table->entries[i].clk;
1184 + }
1185 + *max_clock = clock;
1186 + }
1187 +
1171 1188 void btc_apply_voltage_dependency_rules(struct radeon_clock_voltage_dependency_table *table,
1172 1189 u32 clock, u16 max_voltage, u16 *voltage)
1173 1190 {
···
2097 2080 bool disable_mclk_switching;
2098 2081 u32 mclk, sclk;
2099 2082 u16 vddc, vddci;
2083 + u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
2100 2084
2101 2085 if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
2102 2086 btc_dpm_vblank_too_short(rdev))
···
2137 2119 ps->low.vddc = max_limits->vddc;
2138 2120 if (ps->low.vddci > max_limits->vddci)
2139 2121 ps->low.vddci = max_limits->vddci;
2122 + }
2123 +
2124 + /* limit clocks to max supported clocks based on voltage dependency tables */
2125 + btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
2126 + &max_sclk_vddc);
2127 + btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
2128 + &max_mclk_vddci);
2129 + btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
2130 + &max_mclk_vddc);
2131 +
2132 + if (max_sclk_vddc) {
2133 + if (ps->low.sclk > max_sclk_vddc)
2134 + ps->low.sclk = max_sclk_vddc;
2135 + if (ps->medium.sclk > max_sclk_vddc)
2136 + ps->medium.sclk = max_sclk_vddc;
2137 + if (ps->high.sclk > max_sclk_vddc)
2138 + ps->high.sclk = max_sclk_vddc;
2139 + }
2140 + if (max_mclk_vddci) {
2141 + if (ps->low.mclk > max_mclk_vddci)
2142 + ps->low.mclk = max_mclk_vddci;
2143 + if (ps->medium.mclk > max_mclk_vddci)
2144 + ps->medium.mclk = max_mclk_vddci;
2145 + if (ps->high.mclk > max_mclk_vddci)
2146 + ps->high.mclk = max_mclk_vddci;
2147 + }
2148 + if (max_mclk_vddc) {
2149 + if (ps->low.mclk > max_mclk_vddc)
2150 + ps->low.mclk = max_mclk_vddc;
2151 + if (ps->medium.mclk > max_mclk_vddc)
2152 + ps->medium.mclk = max_mclk_vddc;
2153 + if (ps->high.mclk > max_mclk_vddc)
2154 + ps->high.mclk = max_mclk_vddc;
2140 2155 }
2141 2156
2142 2157 /* XXX validate the min clocks required for display */
+2
drivers/gpu/drm/radeon/btc_dpm.h
···
46 46 struct rv7xx_pl *pl);
47 47 void btc_apply_voltage_dependency_rules(struct radeon_clock_voltage_dependency_table *table,
48 48 u32 clock, u16 max_voltage, u16 *voltage);
49 + void btc_get_max_clock_from_voltage_dependency_table(struct radeon_clock_voltage_dependency_table *table,
50 + u32 *max_clock);
49 51 void btc_apply_voltage_delta_rules(struct radeon_device *rdev,
50 52 u16 max_vddc, u16 max_vddci,
51 53 u16 *vddc, u16 *vddci);
+26
drivers/gpu/drm/radeon/ci_dpm.c
···
146 146 };
147 147
148 148 extern u8 rv770_get_memory_module_index(struct radeon_device *rdev);
149 + extern void btc_get_max_clock_from_voltage_dependency_table(struct radeon_clock_voltage_dependency_table *table,
150 + u32 *max_clock);
149 151 extern int ni_copy_and_switch_arb_sets(struct radeon_device *rdev,
150 152 u32 arb_freq_src, u32 arb_freq_dest);
151 153 extern u8 si_get_ddr3_mclk_frequency_ratio(u32 memory_clock);
···
714 712 struct radeon_clock_and_voltage_limits *max_limits;
715 713 bool disable_mclk_switching;
716 714 u32 sclk, mclk;
715 + u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
717 716 int i;
718 717
719 718 if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
···
739 736 ps->performance_levels[i].mclk = max_limits->mclk;
740 737 if (ps->performance_levels[i].sclk > max_limits->sclk)
741 738 ps->performance_levels[i].sclk = max_limits->sclk;
739 + }
740 + }
741 +
742 + /* limit clocks to max supported clocks based on voltage dependency tables */
743 + btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
744 + &max_sclk_vddc);
745 + btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
746 + &max_mclk_vddci);
747 + btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
748 + &max_mclk_vddc);
749 +
750 + for (i = 0; i < ps->performance_level_count; i++) {
751 + if (max_sclk_vddc) {
752 + if (ps->performance_levels[i].sclk > max_sclk_vddc)
753 + ps->performance_levels[i].sclk = max_sclk_vddc;
754 + }
755 + if (max_mclk_vddci) {
756 + if (ps->performance_levels[i].mclk > max_mclk_vddci)
757 + ps->performance_levels[i].mclk = max_mclk_vddci;
758 + }
759 + if (max_mclk_vddc) {
760 + if (ps->performance_levels[i].mclk > max_mclk_vddc)
761 + ps->performance_levels[i].mclk = max_mclk_vddc;
742 762 }
743 763 }
744 764
+8 -9
drivers/gpu/drm/radeon/cik.c
···
2845 2845 rdev->config.cik.tile_config |= (3 << 0);
2846 2846 break;
2847 2847 }
2848 - if ((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT)
2849 - rdev->config.cik.tile_config |= 1 << 4;
2850 - else
2851 - rdev->config.cik.tile_config |= 0 << 4;
2848 + rdev->config.cik.tile_config |=
2849 + ((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT) << 4;
2852 2850 rdev->config.cik.tile_config |=
2853 2851 ((gb_addr_config & PIPE_INTERLEAVE_SIZE_MASK) >> PIPE_INTERLEAVE_SIZE_SHIFT) << 8;
2854 2852 rdev->config.cik.tile_config |=
···
4454 4456 rdev->mc.aper_base = pci_resource_start(rdev->pdev, 0);
4455 4457 rdev->mc.aper_size = pci_resource_len(rdev->pdev, 0);
4456 4458 /* size in MB on si */
4457 - rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024;
4458 - rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024 * 1024;
4459 + rdev->mc.mc_vram_size = RREG32(CONFIG_MEMSIZE) * 1024ULL * 1024ULL;
4460 + rdev->mc.real_vram_size = RREG32(CONFIG_MEMSIZE) * 1024ULL * 1024ULL;
4459 4461 rdev->mc.visible_vram_size = rdev->mc.aper_size;
4460 4462 si_vram_gtt_location(rdev, &rdev->mc);
4461 4463 radeon_update_bandwidth_info(rdev);
···
4733 4735 u32 mc_id = (status & MEMORY_CLIENT_ID_MASK) >> MEMORY_CLIENT_ID_SHIFT;
4734 4736 u32 vmid = (status & FAULT_VMID_MASK) >> FAULT_VMID_SHIFT;
4735 4737 u32 protections = (status & PROTECTIONS_MASK) >> PROTECTIONS_SHIFT;
4736 - char *block = (char *)&mc_client;
4738 + char block[5] = { mc_client >> 24, (mc_client >> 16) & 0xff,
4739 + (mc_client >> 8) & 0xff, mc_client & 0xff, 0 };
4737 4740
4738 - printk("VM fault (0x%02x, vmid %d) at page %u, %s from %s (%d)\n",
4741 + printk("VM fault (0x%02x, vmid %d) at page %u, %s from '%s' (0x%08x) (%d)\n",
4739 4742 protections, vmid, addr,
4740 4743 (status & MEMORY_CLIENT_RW_MASK) ? "write" : "read",
4741 4744 block, mc_client, mc_id);
4742 4745 }
4743 4746
4744 4747 /**
+24
drivers/gpu/drm/radeon/ni_dpm.c
···
787 787 bool disable_mclk_switching;
788 788 u32 mclk, sclk;
789 789 u16 vddc, vddci;
790 + u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
790 791 int i;
791 792
792 793 if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
···
811 810 ps->performance_levels[i].vddc = max_limits->vddc;
812 811 if (ps->performance_levels[i].vddci > max_limits->vddci)
813 812 ps->performance_levels[i].vddci = max_limits->vddci;
813 + }
814 + }
815 +
816 + /* limit clocks to max supported clocks based on voltage dependency tables */
817 + btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
818 + &max_sclk_vddc);
819 + btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
820 + &max_mclk_vddci);
821 + btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
822 + &max_mclk_vddc);
823 +
824 + for (i = 0; i < ps->performance_level_count; i++) {
825 + if (max_sclk_vddc) {
826 + if (ps->performance_levels[i].sclk > max_sclk_vddc)
827 + ps->performance_levels[i].sclk = max_sclk_vddc;
828 + }
829 + if (max_mclk_vddci) {
830 + if (ps->performance_levels[i].mclk > max_mclk_vddci)
831 + ps->performance_levels[i].mclk = max_mclk_vddci;
832 + }
833 + if (max_mclk_vddc) {
834 + if (ps->performance_levels[i].mclk > max_mclk_vddc)
835 + ps->performance_levels[i].mclk = max_mclk_vddc;
814 836 }
815 837 }
816 838
+5 -3
drivers/gpu/drm/radeon/r100.c
···
2933 2933 seq_printf(m, "CP_RB_RPTR 0x%08x\n", rdp);
2934 2934 seq_printf(m, "%u free dwords in ring\n", ring->ring_free_dw);
2935 2935 seq_printf(m, "%u dwords in ring\n", count);
2936 - for (j = 0; j <= count; j++) {
2937 - i = (rdp + j) & ring->ptr_mask;
2938 - seq_printf(m, "r[%04d]=0x%08x\n", i, ring->ring[i]);
2936 + if (ring->ready) {
2937 + for (j = 0; j <= count; j++) {
2938 + i = (rdp + j) & ring->ptr_mask;
2939 + seq_printf(m, "r[%04d]=0x%08x\n", i, ring->ring[i]);
2940 + }
2939 2941 }
2940 2942 return 0;
2941 2943 }
+1 -1
drivers/gpu/drm/radeon/r600_dpm.c
···
1084 1084 rdev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].dclk =
1085 1085 le16_to_cpu(uvd_clk->usDClkLow) | (uvd_clk->ucDClkHigh << 16);
1086 1086 rdev->pm.dpm.dyn_state.uvd_clock_voltage_dependency_table.entries[i].v =
1087 - le16_to_cpu(limits->entries[i].usVoltage);
1087 + le16_to_cpu(entry->usVoltage);
1088 1088 entry = (ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record *)
1089 1089 ((u8 *)entry + sizeof(ATOM_PPLIB_UVD_Clock_Voltage_Limit_Record));
1090 1090 }
+15 -5
drivers/gpu/drm/radeon/r600_hdmi.c
···
257 257 * number (coefficient of two integer numbers. DCCG_AUDIO_DTOx_PHASE
258 258 * is the numerator, DCCG_AUDIO_DTOx_MODULE is the denominator
259 259 */
260 - if (ASIC_IS_DCE3(rdev)) {
261 - /* according to the reg specs, this should DCE3.2 only, but in
262 - * practice it seems to cover DCE3.0 as well.
263 - */
260 + if (ASIC_IS_DCE32(rdev)) {
264 261 if (dig->dig_encoder == 0) {
265 262 dto_cntl = RREG32(DCCG_AUDIO_DTO0_CNTL) & ~DCCG_AUDIO_DTO_WALLCLOCK_RATIO_MASK;
266 263 dto_cntl |= DCCG_AUDIO_DTO_WALLCLOCK_RATIO(wallclock_ratio);
···
273 276 WREG32(DCCG_AUDIO_DTO1_MODULE, dto_modulo);
274 277 WREG32(DCCG_AUDIO_DTO_SELECT, 1); /* select DTO1 */
275 278 }
279 + } else if (ASIC_IS_DCE3(rdev)) {
280 + /* according to the reg specs, this should be DCE3.2 only, but in
281 + * practice it seems to cover DCE3.0/3.1 as well.
282 + */
283 + if (dig->dig_encoder == 0) {
284 + WREG32(DCCG_AUDIO_DTO0_PHASE, base_rate * 100);
285 + WREG32(DCCG_AUDIO_DTO0_MODULE, clock * 100);
286 + WREG32(DCCG_AUDIO_DTO_SELECT, 0); /* select DTO0 */
287 + } else {
288 + WREG32(DCCG_AUDIO_DTO1_PHASE, base_rate * 100);
289 + WREG32(DCCG_AUDIO_DTO1_MODULE, clock * 100);
290 + WREG32(DCCG_AUDIO_DTO_SELECT, 1); /* select DTO1 */
291 + }
276 292 } else {
277 - /* according to the reg specs, this should be DCE2.0 and DCE3.0 */
293 + /* according to the reg specs, this should be DCE2.0 and DCE3.0/3.1 */
278 294 WREG32(AUDIO_DTO, AUDIO_DTO_PHASE(base_rate / 10) |
279 295 AUDIO_DTO_MODULE(clock / 10));
280 296 }
+2
drivers/gpu/drm/radeon/radeon_asic.c
···
1004 1004 .wait_for_vblank = &avivo_wait_for_vblank,
1005 1005 .set_backlight_level = &atombios_set_backlight_level,
1006 1006 .get_backlight_level = &atombios_get_backlight_level,
1007 + .hdmi_enable = &r600_hdmi_enable,
1008 + .hdmi_setmode = &r600_hdmi_setmode,
1007 1009 },
1008 1010 .copy = {
1009 1011 .blit = &r600_copy_cpdma,
+43 -23
drivers/gpu/drm/radeon/radeon_atombios.c
···
1367 1367 int index = GetIndexIntoMasterTable(DATA, PPLL_SS_Info);
1368 1368 uint16_t data_offset, size;
1369 1369 struct _ATOM_SPREAD_SPECTRUM_INFO *ss_info;
1370 + struct _ATOM_SPREAD_SPECTRUM_ASSIGNMENT *ss_assign;
1370 1371 uint8_t frev, crev;
1371 1372 int i, num_indices;
1372 1373
···
1379 1378
1380 1379 num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) /
1381 1380 sizeof(ATOM_SPREAD_SPECTRUM_ASSIGNMENT);
1382 -
1381 + ss_assign = (struct _ATOM_SPREAD_SPECTRUM_ASSIGNMENT*)
1382 + ((u8 *)&ss_info->asSS_Info[0]);
1383 1383 for (i = 0; i < num_indices; i++) {
1384 - if (ss_info->asSS_Info[i].ucSS_Id == id) {
1384 + if (ss_assign->ucSS_Id == id) {
1385 1385 ss->percentage =
1386 - le16_to_cpu(ss_info->asSS_Info[i].usSpreadSpectrumPercentage);
1387 - ss->type = ss_info->asSS_Info[i].ucSpreadSpectrumType;
1388 - ss->step = ss_info->asSS_Info[i].ucSS_Step;
1389 - ss->delay = ss_info->asSS_Info[i].ucSS_Delay;
1390 - ss->range = ss_info->asSS_Info[i].ucSS_Range;
1391 - ss->refdiv = ss_info->asSS_Info[i].ucRecommendedRef_Div;
1386 + le16_to_cpu(ss_assign->usSpreadSpectrumPercentage);
1387 + ss->type = ss_assign->ucSpreadSpectrumType;
1388 + ss->step = ss_assign->ucSS_Step;
1389 + ss->delay = ss_assign->ucSS_Delay;
1390 + ss->range = ss_assign->ucSS_Range;
1391 + ss->refdiv = ss_assign->ucRecommendedRef_Div;
1392 1392 return true;
1393 1393 }
1394 + ss_assign = (struct _ATOM_SPREAD_SPECTRUM_ASSIGNMENT*)
1395 + ((u8 *)ss_assign + sizeof(struct _ATOM_SPREAD_SPECTRUM_ASSIGNMENT));
1394 1396 }
1395 1397 }
1396 1398 return false;
···
1481 1477 struct _ATOM_ASIC_INTERNAL_SS_INFO_V3 info_3;
1482 1478 };
1483 1479
1480 + union asic_ss_assignment {
1481 + struct _ATOM_ASIC_SS_ASSIGNMENT v1;
1482 + struct _ATOM_ASIC_SS_ASSIGNMENT_V2 v2;
1483 + struct _ATOM_ASIC_SS_ASSIGNMENT_V3 v3;
1484 + };
1485 +
1484 1486 bool radeon_atombios_get_asic_ss_info(struct radeon_device *rdev,
1485 1487 struct radeon_atom_ss *ss,
1486 1488 int id, u32 clock)
···
1495 1485 int index = GetIndexIntoMasterTable(DATA, ASIC_InternalSS_Info);
1496 1486 uint16_t data_offset, size;
1497 1487 union asic_ss_info *ss_info;
1488 + union asic_ss_assignment *ss_assign;
1498 1489 uint8_t frev, crev;
1499 1490 int i, num_indices;
1500 1491
···
1520 1509 num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) /
1521 1510 sizeof(ATOM_ASIC_SS_ASSIGNMENT);
1522 1511
1512 + ss_assign = (union asic_ss_assignment *)((u8 *)&ss_info->info.asSpreadSpectrum[0]);
1523 1513 for (i = 0; i < num_indices; i++) {
1524 - if ((ss_info->info.asSpreadSpectrum[i].ucClockIndication == id) &&
1525 - (clock <= le32_to_cpu(ss_info->info.asSpreadSpectrum[i].ulTargetClockRange))) {
1514 + if ((ss_assign->v1.ucClockIndication == id) &&
1515 + (clock <= le32_to_cpu(ss_assign->v1.ulTargetClockRange))) {
1526 1516 ss->percentage =
1527 - le16_to_cpu(ss_info->info.asSpreadSpectrum[i].usSpreadSpectrumPercentage);
1528 - ss->type = ss_info->info.asSpreadSpectrum[i].ucSpreadSpectrumMode;
1529 - ss->rate = le16_to_cpu(ss_info->info.asSpreadSpectrum[i].usSpreadRateInKhz);
1517 + le16_to_cpu(ss_assign->v1.usSpreadSpectrumPercentage);
1518 + ss->type = ss_assign->v1.ucSpreadSpectrumMode;
1519 + ss->rate = le16_to_cpu(ss_assign->v1.usSpreadRateInKhz);
1530 1520 return true;
1531 1521 }
1522 + ss_assign = (union asic_ss_assignment *)
1523 + ((u8 *)ss_assign + sizeof(ATOM_ASIC_SS_ASSIGNMENT));
1532 1524 }
1533 1525 break;
1534 1526 case 2:
1535 1527 num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) /
1536 1528 sizeof(ATOM_ASIC_SS_ASSIGNMENT_V2);
1529 + ss_assign = (union asic_ss_assignment *)((u8 *)&ss_info->info_2.asSpreadSpectrum[0]);
1537 1530 for (i = 0; i < num_indices; i++) {
1538 - if ((ss_info->info_2.asSpreadSpectrum[i].ucClockIndication == id) &&
1539 - (clock <= le32_to_cpu(ss_info->info_2.asSpreadSpectrum[i].ulTargetClockRange))) {
1531 + if ((ss_assign->v2.ucClockIndication == id) &&
1532 + (clock <= le32_to_cpu(ss_assign->v2.ulTargetClockRange))) {
1540 1533 ss->percentage =
1541 - le16_to_cpu(ss_info->info_2.asSpreadSpectrum[i].usSpreadSpectrumPercentage);
1542 - ss->type = ss_info->info_2.asSpreadSpectrum[i].ucSpreadSpectrumMode;
1543 - ss->rate = le16_to_cpu(ss_info->info_2.asSpreadSpectrum[i].usSpreadRateIn10Hz);
1534 + le16_to_cpu(ss_assign->v2.usSpreadSpectrumPercentage);
1535 + ss->type = ss_assign->v2.ucSpreadSpectrumMode;
1536 + ss->rate = le16_to_cpu(ss_assign->v2.usSpreadRateIn10Hz);
1544 1537 if ((crev == 2) &&
1545 1538 ((id == ASIC_INTERNAL_ENGINE_SS) ||
1546 1539 (id == ASIC_INTERNAL_MEMORY_SS)))
1547 1540 ss->rate /= 100;
1548 1541 return true;
1549 1542 }
1543 + ss_assign = (union asic_ss_assignment *)
1544 + ((u8 *)ss_assign + sizeof(ATOM_ASIC_SS_ASSIGNMENT_V2));
1550 1545 }
1551 1546 break;
1552 1547 case 3:
1553 1548 num_indices = (size - sizeof(ATOM_COMMON_TABLE_HEADER)) /
1554 1549 sizeof(ATOM_ASIC_SS_ASSIGNMENT_V3);
1550 + ss_assign = (union asic_ss_assignment *)((u8 *)&ss_info->info_3.asSpreadSpectrum[0]);
1555 1551 for (i = 0; i < num_indices; i++) {
1556 - if ((ss_info->info_3.asSpreadSpectrum[i].ucClockIndication == id) &&
1557 - (clock <= le32_to_cpu(ss_info->info_3.asSpreadSpectrum[i].ulTargetClockRange))) {
1552 + if ((ss_assign->v3.ucClockIndication == id) &&
1553 + (clock <= le32_to_cpu(ss_assign->v3.ulTargetClockRange))) {
1558 1554 ss->percentage =
1559 - le16_to_cpu(ss_info->info_3.asSpreadSpectrum[i].usSpreadSpectrumPercentage);
1560 - ss->type = ss_info->info_3.asSpreadSpectrum[i].ucSpreadSpectrumMode;
1561 - ss->rate = le16_to_cpu(ss_info->info_3.asSpreadSpectrum[i].usSpreadRateIn10Hz);
1555 + le16_to_cpu(ss_assign->v3.usSpreadSpectrumPercentage);
1556 + ss->type = ss_assign->v3.ucSpreadSpectrumMode;
1557 + ss->rate = le16_to_cpu(ss_assign->v3.usSpreadRateIn10Hz);
1562 1558 if ((id == ASIC_INTERNAL_ENGINE_SS) ||
1563 1559 (id == ASIC_INTERNAL_MEMORY_SS))
1564 1560 ss->rate /= 100;
···
1573 1555 radeon_atombios_get_igp_ss_overrides(rdev, ss, id);
1574 1556 return true;
1575 1557 }
1558 + ss_assign = (union asic_ss_assignment *)
1559 + ((u8 *)ss_assign + sizeof(ATOM_ASIC_SS_ASSIGNMENT_V3));
1576 1560 }
1577 1561 break;
1578 1562 default:
+3 -2
drivers/gpu/drm/radeon/radeon_cs.c
···
85 85 VRAM, also but everything into VRAM on AGP cards to avoid
86 86 image corruptions */
87 87 if (p->ring == R600_RING_TYPE_UVD_INDEX &&
88 - (i == 0 || p->rdev->flags & RADEON_IS_AGP)) {
89 - /* TODO: is this still needed for NI+ ? */
88 + p->rdev->family < CHIP_PALM &&
89 + (i == 0 || drm_pci_device_is_agp(p->rdev->ddev))) {
90 +
90 91 p->relocs[i].lobj.domain =
91 92 RADEON_GEM_DOMAIN_VRAM;
92 93
+12 -3
drivers/gpu/drm/radeon/radeon_device.c
··· 1320 1320 		return r;
1321 1321 	}
1322 1322 	if ((radeon_testing & 1)) {
1323 -		radeon_test_moves(rdev);
1323 +		if (rdev->accel_working)
1324 +			radeon_test_moves(rdev);
1325 +		else
1326 +			DRM_INFO("radeon: acceleration disabled, skipping move tests\n");
1324 1327 	}
1325 1328 	if ((radeon_testing & 2)) {
1326 -		radeon_test_syncing(rdev);
1329 +		if (rdev->accel_working)
1330 +			radeon_test_syncing(rdev);
1331 +		else
1332 +			DRM_INFO("radeon: acceleration disabled, skipping sync tests\n");
1327 1333 	}
1328 1334 	if (radeon_benchmarking) {
1329 -		radeon_benchmark(rdev, radeon_benchmarking);
1335 +		if (rdev->accel_working)
1336 +			radeon_benchmark(rdev, radeon_benchmarking);
1337 +		else
1338 +			DRM_INFO("radeon: acceleration disabled, skipping benchmarks\n");
1330 1339 	}
1331 1340 	return 0;
1332 1341 }
+4 -4
drivers/gpu/drm/radeon/radeon_pm.c
··· 1002 1002 {
1003 1003 	/* set up the default clocks if the MC ucode is loaded */
1004 1004 	if ((rdev->family >= CHIP_BARTS) &&
1005 -	    (rdev->family <= CHIP_HAINAN) &&
1005 +	    (rdev->family <= CHIP_CAYMAN) &&
1006 1006 	    rdev->mc_fw) {
1007 1007 		if (rdev->pm.default_vddc)
1008 1008 			radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
···
1046 1046 		if (ret) {
1047 1047 			DRM_ERROR("radeon: dpm resume failed\n");
1048 1048 			if ((rdev->family >= CHIP_BARTS) &&
1049 -			    (rdev->family <= CHIP_HAINAN) &&
1049 +			    (rdev->family <= CHIP_CAYMAN) &&
1050 1050 			    rdev->mc_fw) {
1051 1051 				if (rdev->pm.default_vddc)
1052 1052 					radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
···
1097 1097 		radeon_pm_init_profile(rdev);
1098 1098 		/* set up the default clocks if the MC ucode is loaded */
1099 1099 		if ((rdev->family >= CHIP_BARTS) &&
1100 -		    (rdev->family <= CHIP_HAINAN) &&
1100 +		    (rdev->family <= CHIP_CAYMAN) &&
1101 1101 		    rdev->mc_fw) {
1102 1102 			if (rdev->pm.default_vddc)
1103 1103 				radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
···
1183 1183 		if (ret) {
1184 1184 			rdev->pm.dpm_enabled = false;
1185 1185 			if ((rdev->family >= CHIP_BARTS) &&
1186 -			    (rdev->family <= CHIP_HAINAN) &&
1186 +			    (rdev->family <= CHIP_CAYMAN) &&
1187 1187 			    rdev->mc_fw) {
1188 1188 				if (rdev->pm.default_vddc)
1189 1189 					radeon_atom_set_voltage(rdev, rdev->pm.default_vddc,
+5 -3
drivers/gpu/drm/radeon/radeon_ring.c
··· 839 839 	 * packet that is the root issue
840 840 	 */
841 841 	i = (ring->rptr + ring->ptr_mask + 1 - 32) & ring->ptr_mask;
842 -	for (j = 0; j <= (count + 32); j++) {
843 -		seq_printf(m, "r[%5d]=0x%08x\n", i, ring->ring[i]);
844 -		i = (i + 1) & ring->ptr_mask;
842 +	if (ring->ready) {
843 +		for (j = 0; j <= (count + 32); j++) {
844 +			seq_printf(m, "r[%5d]=0x%08x\n", i, ring->ring[i]);
845 +			i = (i + 1) & ring->ptr_mask;
846 +		}
845 847 	}
846 848 	return 0;
847 849 }
+1 -2
drivers/gpu/drm/radeon/radeon_uvd.c
··· 476 476 		return -EINVAL;
477 477 	}
478 478 
479 -	/* TODO: is this still necessary on NI+ ? */
480 -	if ((cmd == 0 || cmd == 0x3) &&
479 +	if (p->rdev->family < CHIP_PALM && (cmd == 0 || cmd == 0x3) &&
481 480 	    (start >> 28) != (p->rdev->uvd.gpu_addr >> 28)) {
482 481 		DRM_ERROR("msg/fb buffer %LX-%LX out of 256MB segment!\n",
483 482 			  start, end);
+24
drivers/gpu/drm/radeon/si_dpm.c
··· 2910 2910 	bool disable_sclk_switching = false;
2911 2911 	u32 mclk, sclk;
2912 2912 	u16 vddc, vddci;
2913 +	u32 max_sclk_vddc, max_mclk_vddci, max_mclk_vddc;
2913 2914 	int i;
2914 2915 
2915 2916 	if ((rdev->pm.dpm.new_active_crtc_count > 1) ||
···
2941 2940 			ps->performance_levels[i].vddc = max_limits->vddc;
2942 2941 		if (ps->performance_levels[i].vddci > max_limits->vddci)
2943 2942 			ps->performance_levels[i].vddci = max_limits->vddci;
2943 +		}
2944 +	}
2945 +
2946 +	/* limit clocks to max supported clocks based on voltage dependency tables */
2947 +	btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_sclk,
2948 +							&max_sclk_vddc);
2949 +	btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddci_dependency_on_mclk,
2950 +							&max_mclk_vddci);
2951 +	btc_get_max_clock_from_voltage_dependency_table(&rdev->pm.dpm.dyn_state.vddc_dependency_on_mclk,
2952 +							&max_mclk_vddc);
2953 +
2954 +	for (i = 0; i < ps->performance_level_count; i++) {
2955 +		if (max_sclk_vddc) {
2956 +			if (ps->performance_levels[i].sclk > max_sclk_vddc)
2957 +				ps->performance_levels[i].sclk = max_sclk_vddc;
2958 +		}
2959 +		if (max_mclk_vddci) {
2960 +			if (ps->performance_levels[i].mclk > max_mclk_vddci)
2961 +				ps->performance_levels[i].mclk = max_mclk_vddci;
2962 +		}
2963 +		if (max_mclk_vddc) {
2964 +			if (ps->performance_levels[i].mclk > max_mclk_vddc)
2965 +				ps->performance_levels[i].mclk = max_mclk_vddc;
2944 2966 		}
2945 2967 	}
2946 2968 
+2 -2
drivers/gpu/drm/radeon/uvd_v1_0.c
··· 212 212 	/* enable VCPU clock */
213 213 	WREG32(UVD_VCPU_CNTL, 1 << 9);
214 214 
215 -	/* enable UMC */
216 -	WREG32_P(UVD_LMI_CTRL2, 0, ~(1 << 8));
215 +	/* enable UMC and NC0 */
216 +	WREG32_P(UVD_LMI_CTRL2, 1 << 13, ~((1 << 8) | (1 << 13)));
217 217 
218 218 	/* boot up the VCPU */
219 219 	WREG32(UVD_SOFT_RESET, 0);
+1 -1
drivers/hv/connection.c
··· 195 195 
196 196 	do {
197 197 		ret = vmbus_negotiate_version(msginfo, version);
198 -		if (ret)
198 +		if (ret == -ETIMEDOUT)
199 199 			goto cleanup;
200 200 
201 201 		if (vmbus_connection.conn_state == CONNECTED)
+26 -12
drivers/hv/hv_kvp.c
··· 32 32 /*
33 33  * Pre win8 version numbers used in ws2008 and ws 2008 r2 (win7)
34 34  */
35 +#define WS2008_SRV_MAJOR	1
36 +#define WS2008_SRV_MINOR	0
37 +#define WS2008_SRV_VERSION	(WS2008_SRV_MAJOR << 16 | WS2008_SRV_MINOR)
38 +
35 39 #define WIN7_SRV_MAJOR	3
36 40 #define WIN7_SRV_MINOR	0
37 -#define WIN7_SRV_MAJOR_MINOR	(WIN7_SRV_MAJOR << 16 | WIN7_SRV_MINOR)
41 +#define WIN7_SRV_VERSION	(WIN7_SRV_MAJOR << 16 | WIN7_SRV_MINOR)
38 42 
39 43 #define WIN8_SRV_MAJOR	4
40 44 #define WIN8_SRV_MINOR	0
41 -#define WIN8_SRV_MAJOR_MINOR	(WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR)
45 +#define WIN8_SRV_VERSION	(WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR)
42 46 
43 47 /*
44 48  * Global state maintained for transaction that is being processed.
···
591 587 
592 588 	struct icmsg_hdr *icmsghdrp;
593 589 	struct icmsg_negotiate *negop = NULL;
590 +	int util_fw_version;
591 +	int kvp_srv_version;
594 592 
595 593 	if (kvp_transaction.active) {
596 594 		/*
···
612 606 
613 607 	if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
614 608 		/*
615 -		 * We start with win8 version and if the host cannot
616 -		 * support that we use the previous version.
609 +		 * Based on the host, select appropriate
610 +		 * framework and service versions we will
611 +		 * negotiate.
617 612 		 */
618 -		if (vmbus_prep_negotiate_resp(icmsghdrp, negop,
619 -			 recv_buffer, UTIL_FW_MAJOR_MINOR,
620 -			 WIN8_SRV_MAJOR_MINOR))
621 -			goto done;
622 -
613 +		switch (vmbus_proto_version) {
614 +		case (VERSION_WS2008):
615 +			util_fw_version = UTIL_WS2K8_FW_VERSION;
616 +			kvp_srv_version = WS2008_SRV_VERSION;
617 +			break;
618 +		case (VERSION_WIN7):
619 +			util_fw_version = UTIL_FW_VERSION;
620 +			kvp_srv_version = WIN7_SRV_VERSION;
621 +			break;
622 +		default:
623 +			util_fw_version = UTIL_FW_VERSION;
624 +			kvp_srv_version = WIN8_SRV_VERSION;
625 +		}
623 626 		vmbus_prep_negotiate_resp(icmsghdrp, negop,
624 -			 recv_buffer, UTIL_FW_MAJOR_MINOR,
625 -			 WIN7_SRV_MAJOR_MINOR);
627 +			 recv_buffer, util_fw_version,
628 +			 kvp_srv_version);
626 629 
627 630 	} else {
628 631 		kvp_msg = (struct hv_kvp_msg *)&recv_buffer[
···
664 649 		return;
665 650 
666 651 	}
667 -done:
668 652 
669 653 	icmsghdrp->icflags = ICMSGHDRFLAG_TRANSACTION
670 654 		| ICMSGHDRFLAG_RESPONSE;
+3 -3
drivers/hv/hv_snapshot.c
··· 26 26 
27 27 #define VSS_MAJOR	5
28 28 #define VSS_MINOR	0
29 -#define VSS_MAJOR_MINOR	(VSS_MAJOR << 16 | VSS_MINOR)
29 +#define VSS_VERSION	(VSS_MAJOR << 16 | VSS_MINOR)
30 30 
31 31 
32 32 
···
190 190 
191 191 	if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
192 192 		vmbus_prep_negotiate_resp(icmsghdrp, negop,
193 -			 recv_buffer, UTIL_FW_MAJOR_MINOR,
194 -			 VSS_MAJOR_MINOR);
193 +			 recv_buffer, UTIL_FW_VERSION,
194 +			 VSS_VERSION);
195 195 	} else {
196 196 		vss_msg = (struct hv_vss_msg *)&recv_buffer[
197 197 			sizeof(struct vmbuspipe_hdr) +
+54 -17
drivers/hv/hv_util.c
··· 28 28 #include <linux/reboot.h>
29 29 #include <linux/hyperv.h>
30 30 
31 -#define SHUTDOWN_MAJOR	3
32 -#define SHUTDOWN_MINOR	0
33 -#define SHUTDOWN_MAJOR_MINOR	(SHUTDOWN_MAJOR << 16 | SHUTDOWN_MINOR)
34 31 
35 -#define TIMESYNCH_MAJOR	3
36 -#define TIMESYNCH_MINOR	0
37 -#define TIMESYNCH_MAJOR_MINOR	(TIMESYNCH_MAJOR << 16 | TIMESYNCH_MINOR)
32 +#define SD_MAJOR	3
33 +#define SD_MINOR	0
34 +#define SD_VERSION	(SD_MAJOR << 16 | SD_MINOR)
38 35 
39 -#define HEARTBEAT_MAJOR	3
40 -#define HEARTBEAT_MINOR	0
41 -#define HEARTBEAT_MAJOR_MINOR	(HEARTBEAT_MAJOR << 16 | HEARTBEAT_MINOR)
36 +#define SD_WS2008_MAJOR	1
37 +#define SD_WS2008_VERSION	(SD_WS2008_MAJOR << 16 | SD_MINOR)
38 +
39 +#define TS_MAJOR	3
40 +#define TS_MINOR	0
41 +#define TS_VERSION	(TS_MAJOR << 16 | TS_MINOR)
42 +
43 +#define TS_WS2008_MAJOR	1
44 +#define TS_WS2008_VERSION	(TS_WS2008_MAJOR << 16 | TS_MINOR)
45 +
46 +#define HB_MAJOR	3
47 +#define HB_MINOR	0
48 +#define HB_VERSION	(HB_MAJOR << 16 | HB_MINOR)
49 +
50 +#define HB_WS2008_MAJOR	1
51 +#define HB_WS2008_VERSION	(HB_WS2008_MAJOR << 16 | HB_MINOR)
52 +
53 +static int sd_srv_version;
54 +static int ts_srv_version;
55 +static int hb_srv_version;
56 +static int util_fw_version;
42 57 
43 58 static void shutdown_onchannelcallback(void *context);
44 59 static struct hv_util_service util_shutdown = {
···
114 99 
115 100 	if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
116 101 		vmbus_prep_negotiate_resp(icmsghdrp, negop,
117 -			 shut_txf_buf, UTIL_FW_MAJOR_MINOR,
118 -			 SHUTDOWN_MAJOR_MINOR);
102 +			 shut_txf_buf, util_fw_version,
103 +			 sd_srv_version);
119 104 	} else {
120 105 		shutdown_msg =
121 106 			(struct shutdown_msg_data *)&shut_txf_buf[
···
231 216 	struct icmsg_hdr *icmsghdrp;
232 217 	struct ictimesync_data *timedatap;
233 218 	u8 *time_txf_buf = util_timesynch.recv_buffer;
219 +	struct icmsg_negotiate *negop = NULL;
234 220 
235 221 	vmbus_recvpacket(channel, time_txf_buf,
236 222 			 PAGE_SIZE, &recvlen, &requestid);
···
241 225 			sizeof(struct vmbuspipe_hdr)];
242 226 
243 227 	if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
244 -		vmbus_prep_negotiate_resp(icmsghdrp, NULL, time_txf_buf,
245 -				UTIL_FW_MAJOR_MINOR,
246 -				TIMESYNCH_MAJOR_MINOR);
228 +		vmbus_prep_negotiate_resp(icmsghdrp, negop,
229 +				time_txf_buf,
230 +				util_fw_version,
231 +				ts_srv_version);
247 232 	} else {
248 233 		timedatap = (struct ictimesync_data *)&time_txf_buf[
249 234 			sizeof(struct vmbuspipe_hdr) +
···
274 257 	struct icmsg_hdr *icmsghdrp;
275 258 	struct heartbeat_msg_data *heartbeat_msg;
276 259 	u8 *hbeat_txf_buf = util_heartbeat.recv_buffer;
260 +	struct icmsg_negotiate *negop = NULL;
277 261 
278 262 	vmbus_recvpacket(channel, hbeat_txf_buf,
279 263 			 PAGE_SIZE, &recvlen, &requestid);
···
284 266 			sizeof(struct vmbuspipe_hdr)];
285 267 
286 268 	if (icmsghdrp->icmsgtype == ICMSGTYPE_NEGOTIATE) {
287 -		vmbus_prep_negotiate_resp(icmsghdrp, NULL,
288 -			hbeat_txf_buf, UTIL_FW_MAJOR_MINOR,
289 -			HEARTBEAT_MAJOR_MINOR);
269 +		vmbus_prep_negotiate_resp(icmsghdrp, negop,
270 +			hbeat_txf_buf, util_fw_version,
271 +			hb_srv_version);
290 272 	} else {
291 273 		heartbeat_msg =
292 274 			(struct heartbeat_msg_data *)&hbeat_txf_buf[
···
339 321 		goto error;
340 322 
341 323 	hv_set_drvdata(dev, srv);
324 +	/*
325 +	 * Based on the host; initialize the framework and
326 +	 * service version numbers we will negotiate.
327 +	 */
328 +	switch (vmbus_proto_version) {
329 +	case (VERSION_WS2008):
330 +		util_fw_version = UTIL_WS2K8_FW_VERSION;
331 +		sd_srv_version = SD_WS2008_VERSION;
332 +		ts_srv_version = TS_WS2008_VERSION;
333 +		hb_srv_version = HB_WS2008_VERSION;
334 +		break;
335 +
336 +	default:
337 +		util_fw_version = UTIL_FW_VERSION;
338 +		sd_srv_version = SD_VERSION;
339 +		ts_srv_version = TS_VERSION;
340 +		hb_srv_version = HB_VERSION;
341 +	}
342 +
342 343 	return 0;
343 344 
344 345 error:
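The shutdown, timesync, heartbeat, and KVP changes above all encode a negotiated service version as `MAJOR << 16 | MINOR` (the `SD_VERSION`, `TS_VERSION`, `HB_VERSION`, and `*_WS2008_VERSION` macros). The packing itself can be sketched in isolation; the helper names below are illustrative, not part of the driver:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical helpers mirroring the MAJOR << 16 | MINOR packing used by
 * the version macros in the hv_util.c diff above. */
static inline uint32_t pack_srv_version(uint16_t major, uint16_t minor)
{
	/* major in the high 16 bits, minor in the low 16 bits */
	return ((uint32_t)major << 16) | minor;
}

static inline uint16_t srv_version_major(uint32_t version)
{
	return version >> 16;
}

static inline uint16_t srv_version_minor(uint32_t version)
{
	return version & 0xffff;
}
```

With this scheme, the WS2008 variants above (e.g. `SD_WS2008_VERSION`) simply swap in major 1 while reusing the same minor.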
+10 -1
drivers/hwmon/applesmc.c
··· 525 525 {
526 526 	struct applesmc_registers *s = &smcreg;
527 527 	bool left_light_sensor, right_light_sensor;
528 +	unsigned int count;
528 529 	u8 tmp[1];
529 530 	int ret;
530 531 
531 532 	if (s->init_complete)
532 533 		return 0;
533 534 
534 -	ret = read_register_count(&s->key_count);
535 +	ret = read_register_count(&count);
535 536 	if (ret)
536 537 		return ret;
538 +
539 +	if (s->cache && s->key_count != count) {
540 +		pr_warn("key count changed from %d to %d\n",
541 +			s->key_count, count);
542 +		kfree(s->cache);
543 +		s->cache = NULL;
544 +	}
545 +	s->key_count = count;
537 546 
538 547 	if (!s->cache)
539 548 		s->cache = kcalloc(s->key_count, sizeof(*s->cache), GFP_KERNEL);
+20 -6
drivers/i2c/busses/i2c-designware-core.c
··· 98 98 
99 99 #define DW_IC_ERR_TX_ABRT	0x1
100 100 
101 +#define DW_IC_TAR_10BITADDR_MASTER BIT(12)
102 +
101 103 /*
102 104  * status codes
103 105  */
···
390 388 static void i2c_dw_xfer_init(struct dw_i2c_dev *dev)
391 389 {
392 390 	struct i2c_msg *msgs = dev->msgs;
393 -	u32 ic_con;
391 +	u32 ic_con, ic_tar = 0;
394 392 
395 393 	/* Disable the adapter */
396 394 	__i2c_dw_enable(dev, false);
397 395 
398 -	/* set the slave (target) address */
399 -	dw_writel(dev, msgs[dev->msg_write_idx].addr, DW_IC_TAR);
400 -
401 396 	/* if the slave address is ten bit address, enable 10BITADDR */
402 397 	ic_con = dw_readl(dev, DW_IC_CON);
403 -	if (msgs[dev->msg_write_idx].flags & I2C_M_TEN)
398 +	if (msgs[dev->msg_write_idx].flags & I2C_M_TEN) {
404 399 		ic_con |= DW_IC_CON_10BITADDR_MASTER;
405 -	else
400 +		/*
401 +		 * If I2C_DYNAMIC_TAR_UPDATE is set, the 10-bit addressing
402 +		 * mode has to be enabled via bit 12 of IC_TAR register.
403 +		 * We set it always as I2C_DYNAMIC_TAR_UPDATE can't be
404 +		 * detected from registers.
405 +		 */
406 +		ic_tar = DW_IC_TAR_10BITADDR_MASTER;
407 +	} else {
406 408 		ic_con &= ~DW_IC_CON_10BITADDR_MASTER;
409 +	}
410 +
407 411 	dw_writel(dev, ic_con, DW_IC_CON);
412 +
413 +	/*
414 +	 * Set the slave (target) address and enable 10-bit addressing mode
415 +	 * if applicable.
416 +	 */
417 +	dw_writel(dev, msgs[dev->msg_write_idx].addr | ic_tar, DW_IC_TAR);
408 418 
409 419 	/* Enable the adapter */
410 420 	__i2c_dw_enable(dev, true);
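The fix above ORs `DW_IC_TAR_10BITADDR_MASTER` (bit 12) into the IC_TAR value alongside the slave address, because with `I2C_DYNAMIC_TAR_UPDATE` the addressing mode lives in IC_TAR rather than IC_CON. The value composition can be sketched in userspace; `compose_ic_tar` is an illustrative helper, not a driver function:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for the kernel's BIT(12) flag from the diff above. */
#define DW_IC_TAR_10BITADDR_MASTER (1u << 12)

/* Hypothetical sketch of how the fix composes the IC_TAR register value:
 * the 7- or 10-bit slave address in the low bits, with bit 12 set when the
 * transfer uses 10-bit addressing. */
static uint32_t compose_ic_tar(uint16_t addr, int ten_bit)
{
	uint32_t ic_tar = addr;

	if (ten_bit)
		ic_tar |= DW_IC_TAR_10BITADDR_MASTER;
	return ic_tar;
}
```

A 10-bit address such as 0x123 thus becomes 0x1123 in IC_TAR, while a 7-bit address is written unchanged.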
+3
drivers/i2c/busses/i2c-ismt.c
··· 393 393 
394 394 	desc = &priv->hw[priv->head];
395 395 
396 +	/* Initialize the DMA buffer */
397 +	memset(priv->dma_buffer, 0, sizeof(priv->dma_buffer));
398 +
396 399 	/* Initialize the descriptor */
397 400 	memset(desc, 0, sizeof(struct ismt_desc));
398 401 	desc->tgtaddr_rw = ISMT_DESC_ADDR_RW(addr, read_write);
+9 -7
drivers/i2c/busses/i2c-mv64xxx.c
··· 234 234 		ctrl_reg |= MV64XXX_I2C_BRIDGE_CONTROL_WR |
235 235 		    (msg->len - 1) << MV64XXX_I2C_BRIDGE_CONTROL_TX_SIZE_SHIFT;
236 236 
237 -		writel_relaxed(data_reg_lo,
237 +		writel(data_reg_lo,
238 238 			drv_data->reg_base + MV64XXX_I2C_REG_TX_DATA_LO);
239 -		writel_relaxed(data_reg_hi,
239 +		writel(data_reg_hi,
240 240 			drv_data->reg_base + MV64XXX_I2C_REG_TX_DATA_HI);
241 241 
242 242 	} else {
···
697 697 MODULE_DEVICE_TABLE(of, mv64xxx_i2c_of_match_table);
698 698 
699 699 #ifdef CONFIG_OF
700 +#ifdef CONFIG_HAVE_CLK
700 701 static int
701 702 mv64xxx_calc_freq(const int tclk, const int n, const int m)
702 703 {
···
727 726 		return false;
728 727 	return true;
729 728 }
729 +#endif /* CONFIG_HAVE_CLK */
730 730 
731 731 static int
732 732 mv64xxx_of_config(struct mv64xxx_i2c_data *drv_data,
733 733 		  struct device *dev)
734 734 {
735 -	const struct of_device_id *device;
736 -	struct device_node *np = dev->of_node;
737 -	u32 bus_freq, tclk;
738 -	int rc = 0;
739 -
740 735 	/* CLK is mandatory when using DT to describe the i2c bus. We
741 736 	 * need to know tclk in order to calculate bus clock
742 737 	 * factors.
···
741 744 		/* Have OF but no CLK */
742 745 		return -ENODEV;
743 746 #else
747 +	const struct of_device_id *device;
748 +	struct device_node *np = dev->of_node;
749 +	u32 bus_freq, tclk;
750 +	int rc = 0;
751 +
744 752 	if (IS_ERR(drv_data->clk)) {
745 753 		rc = -ENODEV;
746 754 		goto out;
-2
drivers/i2c/busses/i2c-s3c2410.c
··· 1178 1178 
1179 1179 	i2c_del_adapter(&i2c->adap);
1180 1180 
1181 -	clk_disable_unprepare(i2c->clk);
1182 -
1183 1181 	if (pdev->dev.of_node && IS_ERR(i2c->pctrl))
1184 1182 		s3c24xx_i2c_dt_gpio_free(i2c);
1185 1183 
+3 -4
drivers/md/bcache/bcache.h
··· 498 498 	 */
499 499 	atomic_t		has_dirty;
500 500 
501 -	struct ratelimit	writeback_rate;
501 +	struct bch_ratelimit	writeback_rate;
502 502 	struct delayed_work	writeback_rate_update;
503 503 
504 504 	/*
···
507 507 	 */
508 508 	sector_t		last_read;
509 509 
510 -	/* Number of writeback bios in flight */
511 -	atomic_t		in_flight;
510 +	/* Limit number of writeback bios in flight */
511 +	struct semaphore	in_flight;
512 512 	struct closure_with_timer writeback;
513 -	struct closure_waitlist writeback_wait;
514 513 
515 514 	struct keybuf		writeback_keys;
516 515 
+28 -11
drivers/md/bcache/bset.c
··· 926 926 
927 927 /* Mergesort */
928 928 
929 +static void sort_key_next(struct btree_iter *iter,
930 +			  struct btree_iter_set *i)
931 +{
932 +	i->k = bkey_next(i->k);
933 +
934 +	if (i->k == i->end)
935 +		*i = iter->data[--iter->used];
936 +}
937 +
929 938 static void btree_sort_fixup(struct btree_iter *iter)
930 939 {
931 940 	while (iter->used > 1) {
932 941 		struct btree_iter_set *top = iter->data, *i = top + 1;
933 -		struct bkey *k;
934 942 
935 943 		if (iter->used > 2 &&
936 944 		    btree_iter_cmp(i[0], i[1]))
937 945 			i++;
938 946 
939 -		for (k = i->k;
940 -		     k != i->end && bkey_cmp(top->k, &START_KEY(k)) > 0;
941 -		     k = bkey_next(k))
942 -			if (top->k > i->k)
943 -				__bch_cut_front(top->k, k);
944 -			else if (KEY_SIZE(k))
945 -				bch_cut_back(&START_KEY(k), top->k);
946 -
947 -		if (top->k < i->k || k == i->k)
947 +		if (bkey_cmp(top->k, &START_KEY(i->k)) <= 0)
948 948 			break;
949 949 
950 -		heap_sift(iter, i - top, btree_iter_cmp);
950 +		if (!KEY_SIZE(i->k)) {
951 +			sort_key_next(iter, i);
952 +			heap_sift(iter, i - top, btree_iter_cmp);
953 +			continue;
954 +		}
955 +
956 +		if (top->k > i->k) {
957 +			if (bkey_cmp(top->k, i->k) >= 0)
958 +				sort_key_next(iter, i);
959 +			else
960 +				bch_cut_front(top->k, i->k);
961 +
962 +			heap_sift(iter, i - top, btree_iter_cmp);
963 +		} else {
964 +			/* can't happen because of comparison func */
965 +			BUG_ON(!bkey_cmp(&START_KEY(top->k), &START_KEY(i->k)));
966 +			bch_cut_back(&START_KEY(i->k), top->k);
967 +		}
951 968 	}
952 969 }
953 970 
+2 -2
drivers/md/bcache/btree.c
··· 255 255 
256 256 	return;
257 257 err:
258 -	bch_cache_set_error(b->c, "io error reading bucket %lu",
258 +	bch_cache_set_error(b->c, "io error reading bucket %zu",
259 259 			    PTR_BUCKET_NR(b->c, &b->key, 0));
260 260 }
261 261 
···
612 612 		return SHRINK_STOP;
613 613 
614 614 	/* Return -1 if we can't do anything right now */
615 -	if (sc->gfp_mask & __GFP_WAIT)
615 +	if (sc->gfp_mask & __GFP_IO)
616 616 		mutex_lock(&c->bucket_lock);
617 617 	else if (!mutex_trylock(&c->bucket_lock))
618 618 		return -1;
+20 -13
drivers/md/bcache/journal.c
··· 153 153 	bitmap_zero(bitmap, SB_JOURNAL_BUCKETS);
154 154 	pr_debug("%u journal buckets", ca->sb.njournal_buckets);
155 155 
156 -	/* Read journal buckets ordered by golden ratio hash to quickly
156 +	/*
157 +	 * Read journal buckets ordered by golden ratio hash to quickly
157 158 	 * find a sequence of buckets with valid journal entries
158 159 	 */
159 160 	for (i = 0; i < ca->sb.njournal_buckets; i++) {
···
167 166 			goto bsearch;
168 167 	}
169 168 
170 -	/* If that fails, check all the buckets we haven't checked
169 +	/*
170 +	 * If that fails, check all the buckets we haven't checked
171 171 	 * already
172 172 	 */
173 173 	pr_debug("falling back to linear search");
174 174 
175 -	for (l = 0; l < ca->sb.njournal_buckets; l++) {
176 -		if (test_bit(l, bitmap))
177 -			continue;
178 -
175 +	for (l = find_first_zero_bit(bitmap, ca->sb.njournal_buckets);
176 +	     l < ca->sb.njournal_buckets;
177 +	     l = find_next_zero_bit(bitmap, ca->sb.njournal_buckets, l + 1))
179 178 		if (read_bucket(l))
180 179 			goto bsearch;
181 -	}
180 +
181 +	if (list_empty(list))
182 +		continue;
182 183 bsearch:
183 184 	/* Binary search */
184 185 	m = r = find_next_bit(bitmap, ca->sb.njournal_buckets, l + 1);
···
200 197 			r = m;
201 198 	}
202 199 
203 -	/* Read buckets in reverse order until we stop finding more
200 +	/*
201 +	 * Read buckets in reverse order until we stop finding more
204 202 	 * journal entries
205 203 	 */
206 -	pr_debug("finishing up");
204 +	pr_debug("finishing up: m %u njournal_buckets %u",
205 +		 m, ca->sb.njournal_buckets);
207 206 	l = m;
208 207 
209 208 	while (1) {
···
233 228 		}
234 229 	}
235 230 
236 -	c->journal.seq = list_entry(list->prev,
237 -				    struct journal_replay,
238 -				    list)->j.seq;
231 +	if (!list_empty(list))
232 +		c->journal.seq = list_entry(list->prev,
233 +					    struct journal_replay,
234 +					    list)->j.seq;
239 235 
240 236 	return 0;
241 237 #undef read_bucket
···
434 428 		return;
435 429 	}
436 430 
437 -	switch (atomic_read(&ja->discard_in_flight) == DISCARD_IN_FLIGHT) {
431 +	switch (atomic_read(&ja->discard_in_flight)) {
438 432 	case DISCARD_IN_FLIGHT:
439 433 		return;
···
695 689 	if (cl)
696 690 		BUG_ON(!closure_wait(&w->wait, cl));
697 691 
692 +	closure_flush(&c->journal.io);
698 693 	__journal_try_write(c, true);
699 694 }
700 695 }
+9 -6
drivers/md/bcache/request.c
··· 997 997 	} else {
998 998 		bch_writeback_add(dc);
999 999 
1000 -		if (s->op.flush_journal) {
1000 +		if (bio->bi_rw & REQ_FLUSH) {
1001 1001 			/* Also need to send a flush to the backing device */
1002 -			s->op.cache_bio = bio_clone_bioset(bio, GFP_NOIO,
1003 -							   dc->disk.bio_split);
1002 +			struct bio *flush = bio_alloc_bioset(0, GFP_NOIO,
1003 +							     dc->disk.bio_split);
1004 1004 
1005 -			bio->bi_size = 0;
1006 -			bio->bi_vcnt = 0;
1007 -			closure_bio_submit(bio, cl, s->d);
1005 +			flush->bi_rw	= WRITE_FLUSH;
1006 +			flush->bi_bdev	= bio->bi_bdev;
1007 +			flush->bi_end_io = request_endio;
1008 +			flush->bi_private = cl;
1009 +
1010 +			closure_bio_submit(flush, cl, s->d);
1008 1011 		} else {
1009 1012 			s->op.cache_bio = bio;
1010 1013 		}
+7 -2
drivers/md/bcache/sysfs.c
··· 223 223 	}
224 224 
225 225 	if (attr == &sysfs_label) {
226 -		/* note: endlines are preserved */
227 -		memcpy(dc->sb.label, buf, SB_LABEL_SIZE);
226 +		if (size > SB_LABEL_SIZE)
227 +			return -EINVAL;
228 +		memcpy(dc->sb.label, buf, size);
229 +		if (size < SB_LABEL_SIZE)
230 +			dc->sb.label[size] = '\0';
231 +		if (size && dc->sb.label[size - 1] == '\n')
232 +			dc->sb.label[size - 1] = '\0';
228 233 		bch_write_bdev_super(dc, NULL);
229 234 		if (dc->disk.c) {
230 235 			memcpy(dc->disk.c->uuids[dc->disk.id].label,
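The sysfs fix above replaces an unconditional fixed-size `memcpy` with a bounds-checked copy that NUL-terminates the label and drops one trailing newline. The same logic can be exercised in isolation; `copy_label` is an illustrative userspace sketch of the pattern, not the driver function:

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in for the kernel constant used in the diff above. */
#define SB_LABEL_SIZE 32

/* Hypothetical sketch of the bounds-checked label copy: reject oversized
 * input, NUL-terminate when there is room, and strip one trailing '\n'
 * (sysfs writes typically end with a newline). */
static int copy_label(char *label, const char *buf, size_t size)
{
	if (size > SB_LABEL_SIZE)
		return -1;	/* the driver returns -EINVAL here */

	memcpy(label, buf, size);
	if (size < SB_LABEL_SIZE)
		label[size] = '\0';
	if (size && label[size - 1] == '\n')
		label[size - 1] = '\0';
	return 0;
}
```

Note the original bug: copying a full `SB_LABEL_SIZE` regardless of `size` could read past the caller's buffer and store an unterminated label.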
+10 -1
drivers/md/bcache/util.c
··· 190 190 	stats->last = now ?: 1;
191 191 }
192 192 
193 -unsigned bch_next_delay(struct ratelimit *d, uint64_t done)
193 +/**
194 + * bch_next_delay() - increment @d by the amount of work done, and return how
195 + * long to delay until the next time to do some work.
196 + *
197 + * @d - the struct bch_ratelimit to update
198 + * @done - the amount of work done, in arbitrary units
199 + *
200 + * Returns the amount of time to delay by, in jiffies
201 + */
202 +uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done)
194 203 {
195 204 	uint64_t now = local_clock();
196 205 
+9 -3
drivers/md/bcache/util.h
··· 450 450 	(ewma) >> factor;						\
451 451 })
452 452 
453 -struct ratelimit {
453 +struct bch_ratelimit {
454 +	/* Next time we want to do some work, in nanoseconds */
454 455 	uint64_t		next;
456 +
457 +	/*
458 +	 * Rate at which we want to do work, in units per nanosecond
459 +	 * The units here correspond to the units passed to bch_next_delay()
460 +	 */
455 461 	unsigned		rate;
456 462 };
457 463 
458 -static inline void ratelimit_reset(struct ratelimit *d)
464 +static inline void bch_ratelimit_reset(struct bch_ratelimit *d)
459 465 {
460 466 	d->next = local_clock();
461 467 }
462 468 
463 -unsigned bch_next_delay(struct ratelimit *d, uint64_t done);
469 +uint64_t bch_next_delay(struct bch_ratelimit *d, uint64_t done);
464 470 
465 471 #define __DIV_SAFE(n, d, zero)						\
466 472 ({									\
+20 -22
drivers/md/bcache/writeback.c
··· 94 94 
95 95 static unsigned writeback_delay(struct cached_dev *dc, unsigned sectors)
96 96 {
97 +	uint64_t ret;
98 +
97 99 	if (atomic_read(&dc->disk.detaching) ||
98 100 	    !dc->writeback_percent)
99 101 		return 0;
100 102 
101 -	return bch_next_delay(&dc->writeback_rate, sectors * 10000000ULL);
103 +	ret = bch_next_delay(&dc->writeback_rate, sectors * 10000000ULL);
104 +
105 +	return min_t(uint64_t, ret, HZ);
102 106 }
103 107 
104 108 /* Background writeback */
···
212 208 
213 209 	up_write(&dc->writeback_lock);
214 210 
215 -	ratelimit_reset(&dc->writeback_rate);
211 +	bch_ratelimit_reset(&dc->writeback_rate);
216 212 
217 213 	/* Punt to workqueue only so we don't recurse and blow the stack */
218 214 	continue_at(cl, read_dirty, dirty_wq);
···
322 318 	}
323 319 
324 320 	bch_keybuf_del(&dc->writeback_keys, w);
325 -	atomic_dec_bug(&dc->in_flight);
326 -
327 -	closure_wake_up(&dc->writeback_wait);
321 +	up(&dc->in_flight);
328 322 
329 323 	closure_return_with_destructor(cl, dirty_io_destructor);
330 324 }
···
351 349 
352 350 	closure_bio_submit(&io->bio, cl, &io->dc->disk);
353 351 
354 -	continue_at(cl, write_dirty_finish, dirty_wq);
352 +	continue_at(cl, write_dirty_finish, system_wq);
355 353 }
356 354 
357 355 static void read_dirty_endio(struct bio *bio, int error)
···
371 369 
372 370 	closure_bio_submit(&io->bio, cl, &io->dc->disk);
373 371 
374 -	continue_at(cl, write_dirty, dirty_wq);
372 +	continue_at(cl, write_dirty, system_wq);
375 373 }
376 374 
377 375 static void read_dirty(struct closure *cl)
···
396 394 
397 395 		if (delay > 0 &&
398 396 		    (KEY_START(&w->key) != dc->last_read ||
399 -		     jiffies_to_msecs(delay) > 50)) {
400 -			w->private = NULL;
401 -
402 -			closure_delay(&dc->writeback, delay);
403 -			continue_at(cl, read_dirty, dirty_wq);
404 -		}
397 +		     jiffies_to_msecs(delay) > 50))
398 +			delay = schedule_timeout_uninterruptible(delay);
405 399 
406 400 		dc->last_read	= KEY_OFFSET(&w->key);
407 401 
···
422 424 
423 425 		trace_bcache_writeback(&w->key);
424 426 
425 -		closure_call(&io->cl, read_dirty_submit, NULL, &dc->disk.cl);
427 +		down(&dc->in_flight);
428 +		closure_call(&io->cl, read_dirty_submit, NULL, cl);
426 429 
427 430 		delay = writeback_delay(dc, KEY_SIZE(&w->key));
428 -
429 -		atomic_inc(&dc->in_flight);
430 -
431 -		if (!closure_wait_event(&dc->writeback_wait, cl,
432 -					atomic_read(&dc->in_flight) < 64))
433 -			continue_at(cl, read_dirty, dirty_wq);
434 431 	}
435 432 
436 433 	if (0) {
···
435 442 		bch_keybuf_del(&dc->writeback_keys, w);
436 443 	}
437 444 
438 -	refill_dirty(cl);
445 +	/*
446 +	 * Wait for outstanding writeback IOs to finish (and keybuf slots to be
447 +	 * freed) before refilling again
448 +	 */
449 +	continue_at(cl, refill_dirty, dirty_wq);
439 450 }
440 451 
441 452 /* Init */
···
481 484 
482 485 void bch_cached_dev_writeback_init(struct cached_dev *dc)
483 486 {
487 +	sema_init(&dc->in_flight, 64);
484 488 	closure_init_unlocked(&dc->writeback);
485 489 	init_rwsem(&dc->writeback_lock);
486 490 
···
511 513 
512 514 int __init bch_writeback_init(void)
513 515 {
514 -	dirty_wq = create_singlethread_workqueue("bcache_writeback");
516 +	dirty_wq = create_workqueue("bcache_writeback");
515 517 	if (!dirty_wq)
516 518 		return -ENOMEM;
517 519 
+3 -4
drivers/md/dm-io.c
··· 19 19 #define DM_MSG_PREFIX "io"
20 20 
21 21 #define DM_IO_MAX_REGIONS	BITS_PER_LONG
22 -#define MIN_IOS		16
23 -#define MIN_BIOS	16
24 22 
25 23 struct dm_io_client {
26 24 	mempool_t *pool;
···
48 50 struct dm_io_client *dm_io_client_create(void)
49 51 {
50 52 	struct dm_io_client *client;
53 +	unsigned min_ios = dm_get_reserved_bio_based_ios();
51 54 
52 55 	client = kmalloc(sizeof(*client), GFP_KERNEL);
53 56 	if (!client)
54 57 		return ERR_PTR(-ENOMEM);
55 58 
56 -	client->pool = mempool_create_slab_pool(MIN_IOS, _dm_io_cache);
59 +	client->pool = mempool_create_slab_pool(min_ios, _dm_io_cache);
57 60 	if (!client->pool)
58 61 		goto bad;
59 62 
60 -	client->bios = bioset_create(MIN_BIOS, 0);
63 +	client->bios = bioset_create(min_ios, 0);
61 64 	if (!client->bios)
62 65 		goto bad;
63 66 
+14 -4
drivers/md/dm-mpath.c
··· 7 7 
8 8 #include <linux/device-mapper.h>
9 9 
10 +#include "dm.h"
10 11 #include "dm-path-selector.h"
11 12 #include "dm-uevent.h"
12 13 
···
117 116 
118 117 typedef int (*action_fn) (struct pgpath *pgpath);
119 118 
120 -#define MIN_IOS 256	/* Mempool size */
121 -
122 119 static struct kmem_cache *_mpio_cache;
123 120 
124 121 static struct workqueue_struct *kmultipathd, *kmpath_handlerd;
···
189 190 static struct multipath *alloc_multipath(struct dm_target *ti)
190 191 {
191 192 	struct multipath *m;
193 +	unsigned min_ios = dm_get_reserved_rq_based_ios();
192 194 
193 195 	m = kzalloc(sizeof(*m), GFP_KERNEL);
194 196 	if (m) {
···
202 202 		INIT_WORK(&m->trigger_event, trigger_event);
203 203 		init_waitqueue_head(&m->pg_init_wait);
204 204 		mutex_init(&m->work_mutex);
205 -		m->mpio_pool = mempool_create_slab_pool(MIN_IOS, _mpio_cache);
205 +		m->mpio_pool = mempool_create_slab_pool(min_ios, _mpio_cache);
206 206 		if (!m->mpio_pool) {
207 207 			kfree(m);
208 208 			return NULL;
···
1268 1268 	case -EREMOTEIO:
1269 1269 	case -EILSEQ:
1270 1270 	case -ENODATA:
1271 +	case -ENOSPC:
1271 1272 		return 1;
1272 1273 	}
1273 1274 
···
1299 1298 	if (!error && !clone->errors)
1300 1299 		return 0;	/* I/O complete */
1301 1300 
1302 -	if (noretry_error(error))
1301 +	if (noretry_error(error)) {
1302 +		if ((clone->cmd_flags & REQ_WRITE_SAME) &&
1303 +		    !clone->q->limits.max_write_same_sectors) {
1304 +			struct queue_limits *limits;
1305 +
1306 +			/* device doesn't really support WRITE SAME, disable it */
1307 +			limits = dm_get_queue_limits(dm_table_get_md(m->ti->table));
1308 +			limits->max_write_same_sectors = 0;
1309 +		}
1303 1310 		return error;
1311 +	}
1304 1312 
1305 1313 	if (mpio->pgpath)
1306 1314 		fail_path(mpio->pgpath);
+1 -1
drivers/md/dm-snap-persistent.c
··· 256 256 	 */
257 257 	INIT_WORK_ONSTACK(&req.work, do_metadata);
258 258 	queue_work(ps->metadata_wq, &req.work);
259 -	flush_work(&req.work);
259 +	flush_workqueue(ps->metadata_wq);
260 260 
261 261 	return req.result;
262 262 }
+2 -3
drivers/md/dm-snap.c
··· 725 725  */
726 726 static int init_hash_tables(struct dm_snapshot *s)
727 727 {
728 -	sector_t hash_size, cow_dev_size, origin_dev_size, max_buckets;
728 +	sector_t hash_size, cow_dev_size, max_buckets;
729 729 
730 730 	/*
731 731 	 * Calculate based on the size of the original volume or
732 732 	 * the COW volume...
733 733 	 */
734 734 	cow_dev_size = get_dev_size(s->cow->bdev);
735 -	origin_dev_size = get_dev_size(s->origin->bdev);
736 735 	max_buckets = calc_max_buckets();
737 736 
738 -	hash_size = min(origin_dev_size, cow_dev_size) >> s->store->chunk_shift;
737 +	hash_size = cow_dev_size >> s->store->chunk_shift;
739 738 	hash_size = min(hash_size, max_buckets);
740 739 
741 740 	if (hash_size < 64)
+17 -6
drivers/md/dm-stats.c
··· 451 451 	struct dm_stat_percpu *p;
452 452 
453 453 	/*
454 -	 * For strict correctness we should use local_irq_disable/enable
454 +	 * For strict correctness we should use local_irq_save/restore
455 455 	 * instead of preempt_disable/enable.
456 456 	 *
457 -	 * This is racy if the driver finishes bios from non-interrupt
458 -	 * context as well as from interrupt context or from more different
459 -	 * interrupts.
457 +	 * preempt_disable/enable is racy if the driver finishes bios
458 +	 * from non-interrupt context as well as from interrupt context
459 +	 * or from more different interrupts.
460 460 	 *
461 -	 * However, the race only results in not counting some events,
462 -	 * so it is acceptable.
461 +	 * On 64-bit architectures the race only results in not counting some
462 +	 * events, so it is acceptable. On 32-bit architectures the race could
463 +	 * cause the counter going off by 2^32, so we need to do proper locking
464 +	 * there.
463 465 	 *
464 466 	 * part_stat_lock()/part_stat_unlock() have this race too.
465 467 	 */
468 +#if BITS_PER_LONG == 32
469 +	unsigned long flags;
470 +	local_irq_save(flags);
471 +#else
466 472 	preempt_disable();
473 +#endif
467 474 	p = &s->stat_percpu[smp_processor_id()][entry];
468 475 
469 476 	if (!end) {
···
485 478 		p->ticks[idx] += duration;
486 479 	}
487 480 
481 +#if BITS_PER_LONG == 32
482 +	local_irq_restore(flags);
483 +#else
488 484 	preempt_enable();
485 +#endif
489 486 }
490 487 
491 488 static void __dm_stat_bio(struct dm_stat *s, unsigned long bi_rw,
+11 -3
drivers/md/dm-thin.c
··· 2095 2095 * them down to the data device. The thin device's discard 2096 2096 * processing will cause mappings to be removed from the btree. 2097 2097 */ 2098 + ti->discard_zeroes_data_unsupported = true; 2098 2099 if (pf.discard_enabled && pf.discard_passdown) { 2099 2100 ti->num_discard_bios = 1; 2100 2101 ··· 2105 2104 * thin devices' discard limits consistent). 2106 2105 */ 2107 2106 ti->discards_supported = true; 2108 - ti->discard_zeroes_data_unsupported = true; 2109 2107 } 2110 2108 ti->private = pt; 2111 2109 ··· 2689 2689 * They get transferred to the live pool in bind_control_target() 2690 2690 * called from pool_preresume(). 2691 2691 */ 2692 - if (!pt->adjusted_pf.discard_enabled) 2692 + if (!pt->adjusted_pf.discard_enabled) { 2693 + /* 2694 + * Must explicitly disallow stacking discard limits otherwise the 2695 + * block layer will stack them if pool's data device has support. 2696 + * QUEUE_FLAG_DISCARD wouldn't be set but there is no way for the 2697 + * user to see that, so make sure to set all discard limits to 0. 2698 + */ 2699 + limits->discard_granularity = 0; 2693 2700 return; 2701 + } 2694 2702 2695 2703 disable_passdown_if_not_supported(pt); 2696 2704 ··· 2834 2826 ti->per_bio_data_size = sizeof(struct dm_thin_endio_hook); 2835 2827 2836 2828 /* In case the pool supports discards, pass them on. */ 2829 + ti->discard_zeroes_data_unsupported = true; 2837 2830 if (tc->pool->pf.discard_enabled) { 2838 2831 ti->discards_supported = true; 2839 2832 ti->num_discard_bios = 1; 2840 - ti->discard_zeroes_data_unsupported = true; 2841 2833 /* Discard bios must be split on a block boundary */ 2842 2834 ti->split_discard_bios = true; 2843 2835 }
+67 -4
drivers/md/dm.c
··· 211 211 struct bio_set *bs; 212 212 }; 213 213 214 - #define MIN_IOS 256 214 + #define RESERVED_BIO_BASED_IOS 16 215 + #define RESERVED_REQUEST_BASED_IOS 256 216 + #define RESERVED_MAX_IOS 1024 215 217 static struct kmem_cache *_io_cache; 216 218 static struct kmem_cache *_rq_tio_cache; 219 + 220 + /* 221 + * Bio-based DM's mempools' reserved IOs set by the user. 222 + */ 223 + static unsigned reserved_bio_based_ios = RESERVED_BIO_BASED_IOS; 224 + 225 + /* 226 + * Request-based DM's mempools' reserved IOs set by the user. 227 + */ 228 + static unsigned reserved_rq_based_ios = RESERVED_REQUEST_BASED_IOS; 229 + 230 + static unsigned __dm_get_reserved_ios(unsigned *reserved_ios, 231 + unsigned def, unsigned max) 232 + { 233 + unsigned ios = ACCESS_ONCE(*reserved_ios); 234 + unsigned modified_ios = 0; 235 + 236 + if (!ios) 237 + modified_ios = def; 238 + else if (ios > max) 239 + modified_ios = max; 240 + 241 + if (modified_ios) { 242 + (void)cmpxchg(reserved_ios, ios, modified_ios); 243 + ios = modified_ios; 244 + } 245 + 246 + return ios; 247 + } 248 + 249 + unsigned dm_get_reserved_bio_based_ios(void) 250 + { 251 + return __dm_get_reserved_ios(&reserved_bio_based_ios, 252 + RESERVED_BIO_BASED_IOS, RESERVED_MAX_IOS); 253 + } 254 + EXPORT_SYMBOL_GPL(dm_get_reserved_bio_based_ios); 255 + 256 + unsigned dm_get_reserved_rq_based_ios(void) 257 + { 258 + return __dm_get_reserved_ios(&reserved_rq_based_ios, 259 + RESERVED_REQUEST_BASED_IOS, RESERVED_MAX_IOS); 260 + } 261 + EXPORT_SYMBOL_GPL(dm_get_reserved_rq_based_ios); 217 262 218 263 static int __init local_init(void) 219 264 { ··· 2323 2278 } 2324 2279 2325 2280 /* 2281 + * The queue_limits are only valid as long as you have a reference 2282 + * count on 'md'. 
2283 + */ 2284 + struct queue_limits *dm_get_queue_limits(struct mapped_device *md) 2285 + { 2286 + BUG_ON(!atomic_read(&md->holders)); 2287 + return &md->queue->limits; 2288 + } 2289 + EXPORT_SYMBOL_GPL(dm_get_queue_limits); 2290 + 2291 + /* 2326 2292 * Fully initialize a request-based queue (->elevator, ->request_fn, etc). 2327 2293 */ 2328 2294 static int dm_init_request_based_queue(struct mapped_device *md) ··· 2918 2862 2919 2863 if (type == DM_TYPE_BIO_BASED) { 2920 2864 cachep = _io_cache; 2921 - pool_size = 16; 2865 + pool_size = dm_get_reserved_bio_based_ios(); 2922 2866 front_pad = roundup(per_bio_data_size, __alignof__(struct dm_target_io)) + offsetof(struct dm_target_io, clone); 2923 2867 } else if (type == DM_TYPE_REQUEST_BASED) { 2924 2868 cachep = _rq_tio_cache; 2925 - pool_size = MIN_IOS; 2869 + pool_size = dm_get_reserved_rq_based_ios(); 2926 2870 front_pad = offsetof(struct dm_rq_clone_bio_info, clone); 2927 2871 /* per_bio_data_size is not used. See __bind_mempools(). */ 2928 2872 WARN_ON(per_bio_data_size != 0); 2929 2873 } else 2930 2874 goto out; 2931 2875 2932 - pools->io_pool = mempool_create_slab_pool(MIN_IOS, cachep); 2876 + pools->io_pool = mempool_create_slab_pool(pool_size, cachep); 2933 2877 if (!pools->io_pool) 2934 2878 goto out; 2935 2879 ··· 2980 2924 2981 2925 module_param(major, uint, 0); 2982 2926 MODULE_PARM_DESC(major, "The major number of the device mapper"); 2927 + 2928 + module_param(reserved_bio_based_ios, uint, S_IRUGO | S_IWUSR); 2929 + MODULE_PARM_DESC(reserved_bio_based_ios, "Reserved IOs in bio-based mempools"); 2930 + 2931 + module_param(reserved_rq_based_ios, uint, S_IRUGO | S_IWUSR); 2932 + MODULE_PARM_DESC(reserved_rq_based_ios, "Reserved IOs in request-based mempools"); 2933 + 2983 2934 MODULE_DESCRIPTION(DM_NAME " driver"); 2984 2935 MODULE_AUTHOR("Joe Thornber <dm-devel@redhat.com>"); 2985 2936 MODULE_LICENSE("GPL");
+3
drivers/md/dm.h
··· 184 184 /* 185 185 * Helpers that are used by DM core 186 186 */ 187 + unsigned dm_get_reserved_bio_based_ios(void); 188 + unsigned dm_get_reserved_rq_based_ios(void); 189 + 187 190 static inline bool dm_message_test_buffer_overflow(char *result, unsigned maxlen) 188 191 { 189 192 return !maxlen || strlen(result) + 1 >= maxlen;
+1
drivers/misc/mei/amthif.c
··· 57 57 dev->iamthif_ioctl = false; 58 58 dev->iamthif_state = MEI_IAMTHIF_IDLE; 59 59 dev->iamthif_timer = 0; 60 + dev->iamthif_stall_timer = 0; 60 61 } 61 62 62 63 /**
+4 -1
drivers/misc/mei/bus.c
··· 297 297 298 298 if (cl->reading_state != MEI_READ_COMPLETE && 299 299 !waitqueue_active(&cl->rx_wait)) { 300 + 300 301 mutex_unlock(&dev->device_lock); 301 302 302 303 if (wait_event_interruptible(cl->rx_wait, 303 - (MEI_READ_COMPLETE == cl->reading_state))) { 304 + cl->reading_state == MEI_READ_COMPLETE || 305 + mei_cl_is_transitioning(cl))) { 306 + 304 307 if (signal_pending(current)) 305 308 return -EINTR; 306 309 return -ERESTARTSYS;
+6
drivers/misc/mei/client.h
··· 90 90 cl->dev->dev_state == MEI_DEV_ENABLED && 91 91 cl->state == MEI_FILE_CONNECTED); 92 92 } 93 + static inline bool mei_cl_is_transitioning(struct mei_cl *cl) 94 + { 95 + return (MEI_FILE_INITIALIZING == cl->state || 96 + MEI_FILE_DISCONNECTED == cl->state || 97 + MEI_FILE_DISCONNECTING == cl->state); 98 + } 93 99 94 100 bool mei_cl_is_other_connecting(struct mei_cl *cl); 95 101 int mei_cl_disconnect(struct mei_cl *cl);
+6 -4
drivers/misc/mei/hbm.c
··· 35 35 struct mei_me_client *clients; 36 36 int b; 37 37 38 + dev->me_clients_num = 0; 39 + dev->me_client_presentation_num = 0; 40 + dev->me_client_index = 0; 41 + 38 42 /* count how many ME clients we have */ 39 43 for_each_set_bit(b, dev->me_clients_map, MEI_CLIENTS_MAX) 40 44 dev->me_clients_num++; 41 45 42 - if (dev->me_clients_num <= 0) 46 + if (dev->me_clients_num == 0) 43 47 return; 44 48 45 49 kfree(dev->me_clients); ··· 225 221 struct hbm_props_request *prop_req; 226 222 const size_t len = sizeof(struct hbm_props_request); 227 223 unsigned long next_client_index; 228 - u8 client_num; 224 + unsigned long client_num; 229 225 230 226 231 227 client_num = dev->me_client_presentation_num; ··· 681 677 if (dev->dev_state == MEI_DEV_INIT_CLIENTS && 682 678 dev->hbm_state == MEI_HBM_ENUM_CLIENTS) { 683 679 dev->init_clients_timer = 0; 684 - dev->me_client_presentation_num = 0; 685 - dev->me_client_index = 0; 686 680 mei_hbm_me_cl_allocate(dev); 687 681 dev->hbm_state = MEI_HBM_CLIENT_PROPERTIES; 688 682
+3
drivers/misc/mei/init.c
··· 175 175 memset(&dev->wr_ext_msg, 0, sizeof(dev->wr_ext_msg)); 176 176 } 177 177 178 + /* we're already in reset, cancel the init timer */ 179 + dev->init_clients_timer = 0; 180 + 178 181 dev->me_clients_num = 0; 179 182 dev->rd_msg_hdr = 0; 180 183 dev->wd_pending = false;
+4 -7
drivers/misc/mei/main.c
··· 249 249 mutex_unlock(&dev->device_lock); 250 250 251 251 if (wait_event_interruptible(cl->rx_wait, 252 - (MEI_READ_COMPLETE == cl->reading_state || 253 - MEI_FILE_INITIALIZING == cl->state || 254 - MEI_FILE_DISCONNECTED == cl->state || 255 - MEI_FILE_DISCONNECTING == cl->state))) { 252 + MEI_READ_COMPLETE == cl->reading_state || 253 + mei_cl_is_transitioning(cl))) { 254 + 256 255 if (signal_pending(current)) 257 256 return -EINTR; 258 257 return -ERESTARTSYS; 259 258 } 260 259 261 260 mutex_lock(&dev->device_lock); 262 - if (MEI_FILE_INITIALIZING == cl->state || 263 - MEI_FILE_DISCONNECTED == cl->state || 264 - MEI_FILE_DISCONNECTING == cl->state) { 261 + if (mei_cl_is_transitioning(cl)) { 265 262 rets = -EBUSY; 266 263 goto out; 267 264 }
+3 -3
drivers/misc/mei/mei_dev.h
··· 396 396 struct mei_me_client *me_clients; /* Note: memory has to be allocated */ 397 397 DECLARE_BITMAP(me_clients_map, MEI_CLIENTS_MAX); 398 398 DECLARE_BITMAP(host_clients_map, MEI_CLIENTS_MAX); 399 - u8 me_clients_num; 400 - u8 me_client_presentation_num; 401 - u8 me_client_index; 399 + unsigned long me_clients_num; 400 + unsigned long me_client_presentation_num; 401 + unsigned long me_client_index; 402 402 403 403 struct mei_cl wd_cl; 404 404 enum mei_wd_states wd_state;
+7 -1
drivers/pci/pci.c
··· 1155 1155 1156 1156 pci_enable_bridge(dev->bus->self); 1157 1157 1158 - if (pci_is_enabled(dev)) 1158 + if (pci_is_enabled(dev)) { 1159 + if (!dev->is_busmaster) { 1160 + dev_warn(&dev->dev, "driver skip pci_set_master, fix it!\n"); 1161 + pci_set_master(dev); 1162 + } 1159 1163 return; 1164 + } 1165 + 1160 1166 retval = pci_enable_device(dev); 1161 1167 if (retval) 1162 1168 dev_err(&dev->dev, "Error enabling bridge (%d), continuing\n",
+3 -8
drivers/staging/imx-drm/imx-drm-core.c
··· 41 41 struct list_head encoder_list; 42 42 struct list_head connector_list; 43 43 struct mutex mutex; 44 - int references; 45 44 int pipes; 46 45 struct drm_fbdev_cma *fbhelper; 47 46 }; ··· 240 241 } 241 242 } 242 243 243 - imxdrm->references++; 244 - 245 244 return imxdrm->drm; 246 245 247 246 unwind_crtc: ··· 276 279 277 280 list_for_each_entry(enc, &imxdrm->encoder_list, list) 278 281 module_put(enc->owner); 279 - 280 - imxdrm->references--; 281 282 282 283 mutex_unlock(&imxdrm->mutex); 283 284 } ··· 480 485 481 486 mutex_lock(&imxdrm->mutex); 482 487 483 - if (imxdrm->references) { 488 + if (imxdrm->drm->open_count) { 484 489 ret = -EBUSY; 485 490 goto err_busy; 486 491 } ··· 559 564 560 565 mutex_lock(&imxdrm->mutex); 561 566 562 - if (imxdrm->references) { 567 + if (imxdrm->drm->open_count) { 563 568 ret = -EBUSY; 564 569 goto err_busy; 565 570 } ··· 704 709 705 710 mutex_lock(&imxdrm->mutex); 706 711 707 - if (imxdrm->references) { 712 + if (imxdrm->drm->open_count) { 708 713 ret = -EBUSY; 709 714 goto err_busy; 710 715 }
+1 -1
drivers/staging/lustre/lustre/obdecho/echo_client.c
··· 1387 1387 if (nob > ulsm_nob) 1388 1388 return (-EINVAL); 1389 1389 1390 - if (copy_to_user (ulsm, lsm, sizeof(ulsm))) 1390 + if (copy_to_user (ulsm, lsm, sizeof(*ulsm))) 1391 1391 return (-EFAULT); 1392 1392 1393 1393 for (i = 0; i < lsm->lsm_stripe_count; i++) {
+1 -1
drivers/staging/octeon-usb/cvmx-usb.c
··· 604 604 } 605 605 } 606 606 607 - memset(usb, 0, sizeof(usb)); 607 + memset(usb, 0, sizeof(*usb)); 608 608 usb->init_flags = flags; 609 609 610 610 /* Initialize the USB state structure */
+1 -1
drivers/staging/rtl8188eu/core/rtw_mp.c
··· 907 907 sscanf(data, "pts =%d, start =%d, stop =%d", &psd_pts, &psd_start, &psd_stop); 908 908 } 909 909 910 - _rtw_memset(data, '\0', sizeof(data)); 910 + _rtw_memset(data, '\0', sizeof(*data)); 911 911 912 912 i = psd_start; 913 913 while (i < psd_stop) {
+1 -1
drivers/staging/rtl8188eu/hal/rtl8188e_dm.c
··· 57 57 u8 cut_ver, fab_ver; 58 58 59 59 /* Init Value */ 60 - _rtw_memset(dm_odm, 0, sizeof(dm_odm)); 60 + _rtw_memset(dm_odm, 0, sizeof(*dm_odm)); 61 61 62 62 dm_odm->Adapter = Adapter; 63 63
+1 -1
drivers/staging/rtl8188eu/os_dep/ioctl_linux.c
··· 6973 6973 stop = strncmp(extra, "stop", 4); 6974 6974 sscanf(extra, "count =%d, pkt", &count); 6975 6975 6976 - _rtw_memset(extra, '\0', sizeof(extra)); 6976 + _rtw_memset(extra, '\0', sizeof(*extra)); 6977 6977 6978 6978 if (stop == 0) { 6979 6979 bStartTest = 0; /* To set Stop */
+1
drivers/staging/rtl8188eu/os_dep/usb_intf.c
··· 54 54 /*=== Customer ID ===*/ 55 55 /****** 8188EUS ********/ 56 56 {USB_DEVICE(0x8179, 0x07B8)}, /* Abocom - Abocom */ 57 + {USB_DEVICE(0x2001, 0x330F)}, /* DLink DWA-125 REV D1 */ 57 58 {} /* Terminating entry */ 58 59 }; 59 60
+2
drivers/staging/rtl8192u/r819xU_cmdpkt.c
··· 37 37 /* Get TCB and local buffer from common pool. 38 38 (It is shared by CmdQ, MgntQ, and USB coalesce DataQ) */ 39 39 skb = dev_alloc_skb(USB_HWDESC_HEADER_LEN + DataLen + 4); 40 + if (!skb) 41 + return RT_STATUS_FAILURE; 40 42 memcpy((unsigned char *)(skb->cb), &dev, sizeof(dev)); 41 43 tcb_desc = (cb_desc *)(skb->cb + MAX_DEV_ADDR_SIZE); 42 44 tcb_desc->queue_index = TXCMD_QUEUE;
+3
drivers/staging/vt6656/iwctl.c
··· 1634 1634 if (pMgmt == NULL) 1635 1635 return -EFAULT; 1636 1636 1637 + if (!(pDevice->flags & DEVICE_FLAGS_OPENED)) 1638 + return -ENODEV; 1639 + 1637 1640 buf = kzalloc(sizeof(struct viawget_wpa_param), GFP_KERNEL); 1638 1641 if (buf == NULL) 1639 1642 return -ENOMEM;
+2 -1
drivers/staging/vt6656/main_usb.c
··· 1098 1098 memset(pMgmt->abyCurrBSSID, 0, 6); 1099 1099 pMgmt->eCurrState = WMAC_STATE_IDLE; 1100 1100 1101 + pDevice->flags &= ~DEVICE_FLAGS_OPENED; 1102 + 1101 1103 device_free_tx_bufs(pDevice); 1102 1104 device_free_rx_bufs(pDevice); 1103 1105 device_free_int_bufs(pDevice); ··· 1111 1109 usb_free_urb(pDevice->pInterruptURB); 1112 1110 1113 1111 BSSvClearNodeDBTable(pDevice, 0); 1114 - pDevice->flags &=(~DEVICE_FLAGS_OPENED); 1115 1112 1116 1113 DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO "device_close2 \n"); 1117 1114
+2
drivers/staging/vt6656/rxtx.c
··· 148 148 DBG_PRT(MSG_LEVEL_DEBUG, KERN_INFO"GetFreeContext()\n"); 149 149 150 150 for (ii = 0; ii < pDevice->cbTD; ii++) { 151 + if (!pDevice->apTD[ii]) 152 + return NULL; 151 153 pContext = pDevice->apTD[ii]; 152 154 if (pContext->bBoolInUse == false) { 153 155 pContext->bBoolInUse = true;
+1 -2
drivers/tty/n_tty.c
··· 1758 1758 canon_change = (old->c_lflag ^ tty->termios.c_lflag) & ICANON; 1759 1759 if (canon_change) { 1760 1760 bitmap_zero(ldata->read_flags, N_TTY_BUF_SIZE); 1761 - ldata->line_start = 0; 1762 - ldata->canon_head = ldata->read_tail; 1761 + ldata->line_start = ldata->canon_head = ldata->read_tail; 1763 1762 ldata->erasing = 0; 1764 1763 ldata->lnext = 0; 1765 1764 }
+3 -10
drivers/tty/serial/pch_uart.c
··· 667 667 668 668 static int dma_push_rx(struct eg20t_port *priv, int size) 669 669 { 670 - struct tty_struct *tty; 671 670 int room; 672 671 struct uart_port *port = &priv->port; 673 672 struct tty_port *tport = &port->state->port; 674 - 675 - port = &priv->port; 676 - tty = tty_port_tty_get(tport); 677 - if (!tty) { 678 - dev_dbg(priv->port.dev, "%s:tty is busy now", __func__); 679 - return 0; 680 - } 681 673 682 674 room = tty_buffer_request_room(tport, size); 683 675 ··· 677 685 dev_warn(port->dev, "Rx overrun: dropping %u bytes\n", 678 686 size - room); 679 687 if (!room) 680 - return room; 688 + return 0; 681 689 682 690 tty_insert_flip_string(tport, sg_virt(&priv->sg_rx), size); 683 691 684 692 port->icount.rx += room; 685 - tty_kref_put(tty); 686 693 687 694 return room; 688 695 } ··· 1089 1098 if (tty == NULL) { 1090 1099 for (i = 0; error_msg[i] != NULL; i++) 1091 1100 dev_err(&priv->pdev->dev, error_msg[i]); 1101 + } else { 1102 + tty_kref_put(tty); 1092 1103 } 1093 1104 } 1094 1105
+3 -1
drivers/tty/serial/serial-tegra.c
··· 732 732 static void tegra_uart_stop_rx(struct uart_port *u) 733 733 { 734 734 struct tegra_uart_port *tup = to_tegra_uport(u); 735 - struct tty_struct *tty = tty_port_tty_get(&tup->uport.state->port); 735 + struct tty_struct *tty; 736 736 struct tty_port *port = &u->state->port; 737 737 struct dma_tx_state state; 738 738 unsigned long ier; ··· 743 743 744 744 if (!tup->rx_in_progress) 745 745 return; 746 + 747 + tty = tty_port_tty_get(&tup->uport.state->port); 746 748 747 749 tegra_uart_wait_sym_time(tup, 1); /* wait a character interval */ 748 750
+3
drivers/tty/tty_ioctl.c
··· 1201 1201 } 1202 1202 return 0; 1203 1203 case TCFLSH: 1204 + retval = tty_check_change(tty); 1205 + if (retval) 1206 + return retval; 1204 1207 return __tty_perform_flush(tty, arg); 1205 1208 default: 1206 1209 /* Try the mode commands */
+1 -1
drivers/usb/chipidea/Kconfig
··· 1 1 config USB_CHIPIDEA 2 2 tristate "ChipIdea Highspeed Dual Role Controller" 3 - depends on (USB_EHCI_HCD && USB_GADGET) || (USB_EHCI_HCD && !USB_GADGET) || (!USB_EHCI_HCD && USB_GADGET) 3 + depends on ((USB_EHCI_HCD && USB_GADGET) || (USB_EHCI_HCD && !USB_GADGET) || (!USB_EHCI_HCD && USB_GADGET)) && HAS_DMA 4 4 help 5 5 Say Y here if your system has a dual role high speed USB 6 6 controller based on ChipIdea silicon IP. Currently, only the
+5 -2
drivers/usb/chipidea/ci_hdrc_imx.c
··· 131 131 if (ret) { 132 132 dev_err(&pdev->dev, "usbmisc init failed, ret=%d\n", 133 133 ret); 134 - goto err_clk; 134 + goto err_phy; 135 135 } 136 136 } 137 137 ··· 143 143 dev_err(&pdev->dev, 144 144 "Can't register ci_hdrc platform device, err=%d\n", 145 145 ret); 146 - goto err_clk; 146 + goto err_phy; 147 147 } 148 148 149 149 if (data->usbmisc_data) { ··· 164 164 165 165 disable_device: 166 166 ci_hdrc_remove_device(data->ci_pdev); 167 + err_phy: 168 + if (data->phy) 169 + usb_phy_shutdown(data->phy); 167 170 err_clk: 168 171 clk_disable_unprepare(data->clk); 169 172 return ret;
+1
drivers/usb/chipidea/core.c
··· 605 605 dbg_remove_files(ci); 606 606 free_irq(ci->irq, ci); 607 607 ci_role_destroy(ci); 608 + kfree(ci->hw_bank.regmap); 608 609 609 610 return 0; 610 611 }
+3 -1
drivers/usb/chipidea/udc.c
··· 1600 1600 for (i = 0; i < ci->hw_ep_max; i++) { 1601 1601 struct ci_hw_ep *hwep = &ci->ci_hw_ep[i]; 1602 1602 1603 + if (hwep->pending_td) 1604 + free_pending_td(hwep); 1603 1605 dma_pool_free(ci->qh_pool, hwep->qh.ptr, hwep->qh.dma); 1604 1606 } 1605 1607 } ··· 1669 1667 if (ci->platdata->notify_event) 1670 1668 ci->platdata->notify_event(ci, 1671 1669 CI_HDRC_CONTROLLER_STOPPED_EVENT); 1672 - ci->driver = NULL; 1673 1670 spin_unlock_irqrestore(&ci->lock, flags); 1674 1671 _gadget_stop_activity(&ci->gadget); 1675 1672 spin_lock_irqsave(&ci->lock, flags); 1676 1673 pm_runtime_put(&ci->gadget.dev); 1677 1674 } 1678 1675 1676 + ci->driver = NULL; 1679 1677 spin_unlock_irqrestore(&ci->lock, flags); 1680 1678 1681 1679 return 0;
+16
drivers/usb/core/devio.c
··· 742 742 if ((index & ~USB_DIR_IN) == 0) 743 743 return 0; 744 744 ret = findintfep(ps->dev, index); 745 + if (ret < 0) { 746 + /* 747 + * Some not fully compliant Win apps seem to get 748 + * index wrong and have the endpoint number here 749 + * rather than the endpoint address (with the 750 + * correct direction). Win does let this through, 751 + * so we'll not reject it here but leave it to 752 + * the device to not break KVM. But we warn. 753 + */ 754 + ret = findintfep(ps->dev, index ^ 0x80); 755 + if (ret >= 0) 756 + dev_info(&ps->dev->dev, 757 + "%s: process %i (%s) requesting ep %02x but needs %02x\n", 758 + __func__, task_pid_nr(current), 759 + current->comm, index, index ^ 0x80); 760 + } 745 761 if (ret >= 0) 746 762 ret = checkintf(ps, ret); 747 763 break;
+3
drivers/usb/core/hub.c
··· 3426 3426 unsigned long long u2_pel; 3427 3427 int ret; 3428 3428 3429 + if (udev->state != USB_STATE_CONFIGURED) 3430 + return 0; 3431 + 3429 3432 /* Convert SEL and PEL stored in ns to us */ 3430 3433 u1_sel = DIV_ROUND_UP(udev->u1_params.sel, 1000); 3431 3434 u1_pel = DIV_ROUND_UP(udev->u1_params.pel, 1000);
+2
drivers/usb/dwc3/dwc3-pci.c
··· 29 29 #define PCI_VENDOR_ID_SYNOPSYS 0x16c3 30 30 #define PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3 0xabcd 31 31 #define PCI_DEVICE_ID_INTEL_BYT 0x0f37 32 + #define PCI_DEVICE_ID_INTEL_MRFLD 0x119e 32 33 33 34 struct dwc3_pci { 34 35 struct device *dev; ··· 190 189 PCI_DEVICE_ID_SYNOPSYS_HAPSUSB3), 191 190 }, 192 191 { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BYT), }, 192 + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_MRFLD), }, 193 193 { } /* Terminating Entry */ 194 194 }; 195 195 MODULE_DEVICE_TABLE(pci, dwc3_pci_id_table);
+26 -34
drivers/usb/gadget/f_fs.c
··· 1034 1034 struct ffs_file_perms perms; 1035 1035 umode_t root_mode; 1036 1036 const char *dev_name; 1037 - union { 1038 - /* set by ffs_fs_mount(), read by ffs_sb_fill() */ 1039 - void *private_data; 1040 - /* set by ffs_sb_fill(), read by ffs_fs_mount */ 1041 - struct ffs_data *ffs_data; 1042 - }; 1037 + struct ffs_data *ffs_data; 1043 1038 }; 1044 1039 1045 1040 static int ffs_sb_fill(struct super_block *sb, void *_data, int silent) 1046 1041 { 1047 1042 struct ffs_sb_fill_data *data = _data; 1048 1043 struct inode *inode; 1049 - struct ffs_data *ffs; 1044 + struct ffs_data *ffs = data->ffs_data; 1050 1045 1051 1046 ENTER(); 1052 1047 1053 - /* Initialise data */ 1054 - ffs = ffs_data_new(); 1055 - if (unlikely(!ffs)) 1056 - goto Enomem; 1057 - 1058 1048 ffs->sb = sb; 1059 - ffs->dev_name = kstrdup(data->dev_name, GFP_KERNEL); 1060 - if (unlikely(!ffs->dev_name)) 1061 - goto Enomem; 1062 - ffs->file_perms = data->perms; 1063 - ffs->private_data = data->private_data; 1064 - 1065 - /* used by the caller of this function */ 1066 - data->ffs_data = ffs; 1067 - 1049 + data->ffs_data = NULL; 1068 1050 sb->s_fs_info = ffs; 1069 1051 sb->s_blocksize = PAGE_CACHE_SIZE; 1070 1052 sb->s_blocksize_bits = PAGE_CACHE_SHIFT; ··· 1062 1080 &data->perms); 1063 1081 sb->s_root = d_make_root(inode); 1064 1082 if (unlikely(!sb->s_root)) 1065 - goto Enomem; 1083 + return -ENOMEM; 1066 1084 1067 1085 /* EP0 file */ 1068 1086 if (unlikely(!ffs_sb_create_file(sb, "ep0", ffs, 1069 1087 &ffs_ep0_operations, NULL))) 1070 - goto Enomem; 1088 + return -ENOMEM; 1071 1089 1072 1090 return 0; 1073 - 1074 - Enomem: 1075 - return -ENOMEM; 1076 1091 } 1077 1092 1078 1093 static int ffs_fs_parse_opts(struct ffs_sb_fill_data *data, char *opts) ··· 1172 1193 struct dentry *rv; 1173 1194 int ret; 1174 1195 void *ffs_dev; 1196 + struct ffs_data *ffs; 1175 1197 1176 1198 ENTER(); 1177 1199 ··· 1180 1200 if (unlikely(ret < 0)) 1181 1201 return ERR_PTR(ret); 1182 1202 1203 + ffs = ffs_data_new(); 1204 
+ if (unlikely(!ffs)) 1205 + return ERR_PTR(-ENOMEM); 1206 + ffs->file_perms = data.perms; 1207 + 1208 + ffs->dev_name = kstrdup(dev_name, GFP_KERNEL); 1209 + if (unlikely(!ffs->dev_name)) { 1210 + ffs_data_put(ffs); 1211 + return ERR_PTR(-ENOMEM); 1212 + } 1213 + 1183 1214 ffs_dev = functionfs_acquire_dev_callback(dev_name); 1184 - if (IS_ERR(ffs_dev)) 1185 - return ffs_dev; 1215 + if (IS_ERR(ffs_dev)) { 1216 + ffs_data_put(ffs); 1217 + return ERR_CAST(ffs_dev); 1218 + } 1219 + ffs->private_data = ffs_dev; 1220 + data.ffs_data = ffs; 1186 1221 1187 - data.dev_name = dev_name; 1188 - data.private_data = ffs_dev; 1189 1222 rv = mount_nodev(t, flags, &data, ffs_sb_fill); 1190 - 1191 - /* data.ffs_data is set by ffs_sb_fill */ 1192 - if (IS_ERR(rv)) 1223 + if (IS_ERR(rv) && data.ffs_data) { 1193 1224 functionfs_release_dev_callback(data.ffs_data); 1194 - 1225 + ffs_data_put(data.ffs_data); 1226 + } 1195 1227 return rv; 1196 1228 } 1197 1229
+6 -11
drivers/usb/host/ehci-fsl.c
··· 130 130 } 131 131 132 132 /* Enable USB controller, 83xx or 8536 */ 133 - if (pdata->have_sysif_regs) 133 + if (pdata->have_sysif_regs && pdata->controller_ver < FSL_USB_VER_1_6) 134 134 setbits32(hcd->regs + FSL_SOC_USB_CTRL, 0x4); 135 135 136 136 /* Don't need to set host mode here. It will be done by tdi_reset() */ ··· 232 232 case FSL_USB2_PHY_ULPI: 233 233 if (pdata->have_sysif_regs && pdata->controller_ver) { 234 234 /* controller version 1.6 or above */ 235 + clrbits32(non_ehci + FSL_SOC_USB_CTRL, UTMI_PHY_EN); 235 236 setbits32(non_ehci + FSL_SOC_USB_CTRL, 236 - ULPI_PHY_CLK_SEL); 237 - /* 238 - * Due to controller issue of PHY_CLK_VALID in ULPI 239 - * mode, we set USB_CTRL_USB_EN before checking 240 - * PHY_CLK_VALID, otherwise PHY_CLK_VALID doesn't work. 241 - */ 242 - clrsetbits_be32(non_ehci + FSL_SOC_USB_CTRL, 243 - UTMI_PHY_EN, USB_CTRL_USB_EN); 237 + ULPI_PHY_CLK_SEL | USB_CTRL_USB_EN); 244 238 } 245 239 portsc |= PORT_PTS_ULPI; 246 240 break; ··· 264 270 if (pdata->have_sysif_regs && pdata->controller_ver && 265 271 (phy_mode == FSL_USB2_PHY_ULPI)) { 266 272 /* check PHY_CLK_VALID to get phy clk valid */ 267 - if (!spin_event_timeout(in_be32(non_ehci + FSL_SOC_USB_CTRL) & 268 - PHY_CLK_VALID, FSL_USB_PHY_CLK_TIMEOUT, 0)) { 273 + if (!(spin_event_timeout(in_be32(non_ehci + FSL_SOC_USB_CTRL) & 274 + PHY_CLK_VALID, FSL_USB_PHY_CLK_TIMEOUT, 0) || 275 + in_be32(non_ehci + FSL_SOC_USB_PRICTRL))) { 269 276 printk(KERN_WARNING "fsl-ehci: USB PHY clock invalid\n"); 270 277 return -EINVAL; 271 278 }
+1 -1
drivers/usb/host/ehci-pci.c
··· 361 361 .remove = usb_hcd_pci_remove, 362 362 .shutdown = usb_hcd_pci_shutdown, 363 363 364 - #ifdef CONFIG_PM_SLEEP 364 + #ifdef CONFIG_PM 365 365 .driver = { 366 366 .pm = &usb_hcd_pci_pm_ops 367 367 },
+4 -4
drivers/usb/host/imx21-hcd.c
··· 824 824 i = DIV_ROUND_UP(wrap_frame( 825 825 cur_frame - urb->start_frame), 826 826 urb->interval); 827 - if (urb->transfer_flags & URB_ISO_ASAP) { 827 + 828 + /* Treat underruns as if URB_ISO_ASAP was set */ 829 + if ((urb->transfer_flags & URB_ISO_ASAP) || 830 + i >= urb->number_of_packets) { 828 831 urb->start_frame = wrap_frame(urb->start_frame 829 832 + i * urb->interval); 830 833 i = 0; 831 - } else if (i >= urb->number_of_packets) { 832 - ret = -EXDEV; 833 - goto alloc_dmem_failed; 834 834 } 835 835 } 836 836 }
+12 -10
drivers/usb/host/ohci-hcd.c
··· 216 216 frame &= ~(ed->interval - 1); 217 217 frame |= ed->branch; 218 218 urb->start_frame = frame; 219 + ed->last_iso = frame + ed->interval * (size - 1); 219 220 } 220 221 } else if (ed->type == PIPE_ISOCHRONOUS) { 221 222 u16 next = ohci_frame_no(ohci) + 1; 222 223 u16 frame = ed->last_iso + ed->interval; 224 + u16 length = ed->interval * (size - 1); 223 225 224 226 /* Behind the scheduling threshold? */ 225 227 if (unlikely(tick_before(frame, next))) { 226 228 227 - /* USB_ISO_ASAP: Round up to the first available slot */ 229 + /* URB_ISO_ASAP: Round up to the first available slot */ 228 230 if (urb->transfer_flags & URB_ISO_ASAP) { 229 231 frame += (next - frame + ed->interval - 1) & 230 232 -ed->interval; 231 233 232 234 /* 233 - * Not ASAP: Use the next slot in the stream. If 234 - * the entire URB falls before the threshold, fail. 235 + * Not ASAP: Use the next slot in the stream, 236 + * no matter what. 235 237 */ 236 238 } else { 237 - if (tick_before(frame + ed->interval * 238 - (urb->number_of_packets - 1), next)) { 239 - retval = -EXDEV; 240 - usb_hcd_unlink_urb_from_ep(hcd, urb); 241 - goto fail; 242 - } 243 - 244 239 /* 245 240 * Some OHCI hardware doesn't handle late TDs 246 241 * correctly. After retiring them it proceeds ··· 246 251 urb_priv->td_cnt = DIV_ROUND_UP( 247 252 (u16) (next - frame), 248 253 ed->interval); 254 + if (urb_priv->td_cnt >= urb_priv->length) { 255 + ++urb_priv->td_cnt; /* Mark it */ 256 + ohci_dbg(ohci, "iso underrun %p (%u+%u < %u)\n", 257 + urb, frame, length, 258 + next); 259 + } 249 260 } 250 261 } 251 262 urb->start_frame = frame; 263 + ed->last_iso = frame + length; 252 264 } 253 265 254 266 /* fill the TDs and link them to the ed; and
+22 -4
drivers/usb/host/ohci-q.c
··· 41 41 __releases(ohci->lock) 42 42 __acquires(ohci->lock) 43 43 { 44 - struct device *dev = ohci_to_hcd(ohci)->self.controller; 44 + struct device *dev = ohci_to_hcd(ohci)->self.controller; 45 + struct usb_host_endpoint *ep = urb->ep; 46 + struct urb_priv *urb_priv; 47 + 45 48 // ASSERT (urb->hcpriv != 0); 46 50 + restart: 47 51 urb_free_priv (ohci, urb->hcpriv); 48 52 urb->hcpriv = NULL; 49 53 if (likely(status == -EINPROGRESS)) ··· 83 79 && ohci_to_hcd(ohci)->self.bandwidth_int_reqs == 0) { 84 80 ohci->hc_control &= ~(OHCI_CTRL_PLE|OHCI_CTRL_IE); 85 81 ohci_writel (ohci, ohci->hc_control, &ohci->regs->control); 82 + } 83 + 84 + /* 85 + * An isochronous URB that is submitted too late won't have any TDs 86 + * (marked by the fact that the td_cnt value is larger than the 87 + * actual number of TDs). If the next URB on this endpoint is like 88 + * that, give it back now. 89 + */ 90 + if (!list_empty(&ep->urb_list)) { 91 + urb = list_first_entry(&ep->urb_list, struct urb, urb_list); 92 + urb_priv = urb->hcpriv; 93 + if (urb_priv->td_cnt > urb_priv->length) { 94 + status = 0; 95 + goto restart; 96 + } 86 97 } 87 98 } ··· 565 546 td->hwCBP = cpu_to_hc32 (ohci, data & 0xFFFFF000); 566 547 *ohci_hwPSWp(ohci, td, 0) = cpu_to_hc16 (ohci, 567 548 (data & 0x0FFF) | 0xE000); 568 - td->ed->last_iso = info & 0xffff; 569 549 } else { 570 550 td->hwCBP = cpu_to_hc32 (ohci, data); 571 551 } ··· 1014 996 urb_priv->td_cnt++; 1015 997 1016 998 /* if URB is done, clean up */ 1017 - if (urb_priv->td_cnt == urb_priv->length) { 999 + if (urb_priv->td_cnt >= urb_priv->length) { 1018 1000 modified = completed = 1; 1019 1001 finish_urb(ohci, urb, 0); 1020 1002 } ··· 1104 1086 urb_priv->td_cnt++; 1105 1087 1106 1088 /* If all this urb's TDs are done, call complete() */ 1107 - if (urb_priv->td_cnt == urb_priv->length) 1089 + if (urb_priv->td_cnt >= urb_priv->length) 1108 1090 finish_urb(ohci, urb, status); 1109 1091 1110 1092 /* clean schedule: unlink EDs that are no longer busy */
+1 -1
drivers/usb/host/uhci-pci.c
··· 293 293 .remove = usb_hcd_pci_remove, 294 294 .shutdown = uhci_shutdown, 295 295 296 - #ifdef CONFIG_PM_SLEEP 296 + #ifdef CONFIG_PM 297 297 .driver = { 298 298 .pm = &usb_hcd_pci_pm_ops 299 299 },
+8 -4
drivers/usb/host/uhci-q.c
··· 1303 1303 } 1304 1304 1305 1305 /* Fell behind? */ 1306 - if (uhci_frame_before_eq(frame, next)) { 1306 + if (!uhci_frame_before_eq(next, frame)) { 1307 1307 1308 1308 /* USB_ISO_ASAP: Round up to the first available slot */ 1309 1309 if (urb->transfer_flags & URB_ISO_ASAP) ··· 1311 1311 -qh->period; 1312 1312 1313 1313 /* 1314 - * Not ASAP: Use the next slot in the stream. If 1315 - * the entire URB falls before the threshold, fail. 1314 + * Not ASAP: Use the next slot in the stream, 1315 + * no matter what. 1316 1316 */ 1317 1317 else if (!uhci_frame_before_eq(next, 1318 1318 frame + (urb->number_of_packets - 1) * 1319 1319 qh->period)) 1320 - return -EXDEV; 1320 + dev_dbg(uhci_dev(uhci), "iso underrun %p (%u+%u < %u)\n", 1321 + urb, frame, 1322 + (urb->number_of_packets - 1) * 1323 + qh->period, 1324 + next); 1321 1325 } 1322 1326 } 1323 1327
+36 -11
drivers/usb/host/xhci-hub.c
··· 287 287 if (virt_dev->eps[i].ring && virt_dev->eps[i].ring->dequeue) 288 288 xhci_queue_stop_endpoint(xhci, slot_id, i, suspend); 289 289 } 290 - cmd->command_trb = xhci->cmd_ring->enqueue; 290 + cmd->command_trb = xhci_find_next_enqueue(xhci->cmd_ring); 291 291 list_add_tail(&cmd->cmd_list, &virt_dev->cmd_list); 292 292 xhci_queue_stop_endpoint(xhci, slot_id, 0, suspend); 293 293 xhci_ring_cmd_db(xhci); ··· 552 552 * - Mark a port as being done with device resume, 553 553 * and ring the endpoint doorbells. 554 554 * - Stop the Synopsys redriver Compliance Mode polling. 555 + * - Drop and reacquire the xHCI lock, in order to wait for port resume. 555 556 */ 556 557 static u32 xhci_get_port_status(struct usb_hcd *hcd, 557 558 struct xhci_bus_state *bus_state, 558 559 __le32 __iomem **port_array, 559 - u16 wIndex, u32 raw_port_status) 560 + u16 wIndex, u32 raw_port_status, 561 + unsigned long flags) 562 + __releases(&xhci->lock) 563 + __acquires(&xhci->lock) 560 564 { 561 565 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 562 566 u32 status = 0; ··· 595 591 return 0xffffffff; 596 592 if (time_after_eq(jiffies, 597 593 bus_state->resume_done[wIndex])) { 594 + int time_left; 595 + 598 596 xhci_dbg(xhci, "Resume USB2 port %d\n", 599 597 wIndex + 1); 600 598 bus_state->resume_done[wIndex] = 0; 601 599 clear_bit(wIndex, &bus_state->resuming_ports); 600 + 601 + set_bit(wIndex, &bus_state->rexit_ports); 602 602 xhci_set_link_state(xhci, port_array, wIndex, 603 603 XDEV_U0); 604 - xhci_dbg(xhci, "set port %d resume\n", 605 - wIndex + 1); 606 - slot_id = xhci_find_slot_id_by_port(hcd, xhci, 607 - wIndex + 1); 608 - if (!slot_id) { 609 - xhci_dbg(xhci, "slot_id is zero\n"); 610 - return 0xffffffff; 604 + 605 + spin_unlock_irqrestore(&xhci->lock, flags); 606 + time_left = wait_for_completion_timeout( 607 + &bus_state->rexit_done[wIndex], 608 + msecs_to_jiffies( 609 + XHCI_MAX_REXIT_TIMEOUT)); 610 + spin_lock_irqsave(&xhci->lock, flags); 611 + 612 + if (time_left) { 613 + slot_id 
= xhci_find_slot_id_by_port(hcd, 614 + xhci, wIndex + 1); 615 + if (!slot_id) { 616 + xhci_dbg(xhci, "slot_id is zero\n"); 617 + return 0xffffffff; 618 + } 619 + xhci_ring_device(xhci, slot_id); 620 + } else { 621 + int port_status = xhci_readl(xhci, 622 + port_array[wIndex]); 623 + xhci_warn(xhci, "Port resume took longer than %i msec, port status = 0x%x\n", 624 + XHCI_MAX_REXIT_TIMEOUT, 625 + port_status); 626 + status |= USB_PORT_STAT_SUSPEND; 627 + clear_bit(wIndex, &bus_state->rexit_ports); 611 628 } 612 - xhci_ring_device(xhci, slot_id); 629 + 613 630 bus_state->port_c_suspend |= 1 << wIndex; 614 631 bus_state->suspended_ports &= ~(1 << wIndex); 615 632 } else { ··· 753 728 break; 754 729 } 755 730 status = xhci_get_port_status(hcd, bus_state, port_array, 756 - wIndex, temp); 731 + wIndex, temp, flags); 757 732 if (status == 0xffffffff) 758 733 goto error; 759 734
+2
drivers/usb/host/xhci-mem.c
··· 2428 2428 for (i = 0; i < USB_MAXCHILDREN; ++i) { 2429 2429 xhci->bus_state[0].resume_done[i] = 0; 2430 2430 xhci->bus_state[1].resume_done[i] = 0; 2431 + /* Only the USB 2.0 completions will ever be used. */ 2432 + init_completion(&xhci->bus_state[1].rexit_done[i]); 2431 2433 } 2432 2434 2433 2435 if (scratchpad_alloc(xhci, flags))
+1 -1
drivers/usb/host/xhci-pci.c
··· 351 351 /* suspend and resume implemented later */ 352 352 353 353 .shutdown = usb_hcd_pci_shutdown, 354 - #ifdef CONFIG_PM_SLEEP 354 + #ifdef CONFIG_PM 355 355 .driver = { 356 356 .pm = &usb_hcd_pci_pm_ops 357 357 },
+35 -2
drivers/usb/host/xhci-ring.c
··· 123 123 return TRB_TYPE_LINK_LE32(link->control); 124 124 } 125 125 126 + union xhci_trb *xhci_find_next_enqueue(struct xhci_ring *ring) 127 + { 128 + /* Enqueue pointer can be left pointing to the link TRB, 129 + * we must handle that 130 + */ 131 + if (TRB_TYPE_LINK_LE32(ring->enqueue->link.control)) 132 + return ring->enq_seg->next->trbs; 133 + return ring->enqueue; 134 + } 135 + 126 136 /* Updates trb to point to the next TRB in the ring, and updates seg if the next 127 137 * TRB is in a new segment. This does not skip over link TRBs, and it does not 128 138 * effect the ring dequeue or enqueue pointers. ··· 869 859 /* Otherwise ring the doorbell(s) to restart queued transfers */ 870 860 ring_doorbell_for_active_rings(xhci, slot_id, ep_index); 871 861 } 872 - ep->stopped_td = NULL; 873 - ep->stopped_trb = NULL; 862 + 863 + /* Clear stopped_td and stopped_trb if endpoint is not halted */ 864 + if (!(ep->ep_state & EP_HALTED)) { 865 + ep->stopped_td = NULL; 866 + ep->stopped_trb = NULL; 867 + } 874 868 875 869 /* 876 870 * Drop the lock and complete the URBs in the cancelled TD list. ··· 1428 1414 inc_deq(xhci, xhci->cmd_ring); 1429 1415 return; 1430 1416 } 1417 + /* There is no command to handle if we get a stop event when the 1418 + * command ring is empty, event->cmd_trb points to the next 1419 + * unset command 1420 + */ 1421 + if (xhci->cmd_ring->dequeue == xhci->cmd_ring->enqueue) 1422 + return; 1431 1423 } 1432 1424 1433 1425 switch (le32_to_cpu(xhci->cmd_ring->dequeue->generic.field[3]) ··· 1761 1741 bogus_port_status = true; 1762 1742 goto cleanup; 1763 1743 } 1744 + } 1745 + 1746 + /* 1747 + * Check to see if xhci-hub.c is waiting on RExit to U0 transition (or 1748 + * RExit to a disconnect state). If so, let the the driver know it's 1749 + * out of the RExit state. 
1750 + */ 1751 + if (!DEV_SUPERSPEED(temp) && 1752 + test_and_clear_bit(faked_port_index, 1753 + &bus_state->rexit_ports)) { 1754 + complete(&bus_state->rexit_done[faked_port_index]); 1755 + bogus_port_status = true; 1756 + goto cleanup; 1764 1757 } 1765 1758 1766 1759 if (hcd->speed != HCD_USB3)
+5 -20
drivers/usb/host/xhci.c
··· 2598 2598 if (command) { 2599 2599 cmd_completion = command->completion; 2600 2600 cmd_status = &command->status; 2601 - command->command_trb = xhci->cmd_ring->enqueue; 2602 - 2603 - /* Enqueue pointer can be left pointing to the link TRB, 2604 - * we must handle that 2605 - */ 2606 - if (TRB_TYPE_LINK_LE32(command->command_trb->link.control)) 2607 - command->command_trb = 2608 - xhci->cmd_ring->enq_seg->next->trbs; 2609 - 2601 + command->command_trb = xhci_find_next_enqueue(xhci->cmd_ring); 2610 2602 list_add_tail(&command->cmd_list, &virt_dev->cmd_list); 2611 2603 } else { 2612 2604 cmd_completion = &virt_dev->cmd_completion; ··· 2606 2614 } 2607 2615 init_completion(cmd_completion); 2608 2616 2609 - cmd_trb = xhci->cmd_ring->dequeue; 2617 + cmd_trb = xhci_find_next_enqueue(xhci->cmd_ring); 2610 2618 if (!ctx_change) 2611 2619 ret = xhci_queue_configure_endpoint(xhci, in_ctx->dma, 2612 2620 udev->slot_id, must_succeed); ··· 3431 3439 3432 3440 /* Attempt to submit the Reset Device command to the command ring */ 3433 3441 spin_lock_irqsave(&xhci->lock, flags); 3434 - reset_device_cmd->command_trb = xhci->cmd_ring->enqueue; 3435 - 3436 - /* Enqueue pointer can be left pointing to the link TRB, 3437 - * we must handle that 3438 - */ 3439 - if (TRB_TYPE_LINK_LE32(reset_device_cmd->command_trb->link.control)) 3440 - reset_device_cmd->command_trb = 3441 - xhci->cmd_ring->enq_seg->next->trbs; 3442 + reset_device_cmd->command_trb = xhci_find_next_enqueue(xhci->cmd_ring); 3442 3443 3443 3444 list_add_tail(&reset_device_cmd->cmd_list, &virt_dev->cmd_list); 3444 3445 ret = xhci_queue_reset_device(xhci, slot_id); ··· 3635 3650 union xhci_trb *cmd_trb; 3636 3651 3637 3652 spin_lock_irqsave(&xhci->lock, flags); 3638 - cmd_trb = xhci->cmd_ring->dequeue; 3653 + cmd_trb = xhci_find_next_enqueue(xhci->cmd_ring); 3639 3654 ret = xhci_queue_slot_control(xhci, TRB_ENABLE_SLOT, 0); 3640 3655 if (ret) { 3641 3656 spin_unlock_irqrestore(&xhci->lock, flags); ··· 3770 3785 
slot_ctx->dev_info >> 27); 3771 3786 3772 3787 spin_lock_irqsave(&xhci->lock, flags); 3773 - cmd_trb = xhci->cmd_ring->dequeue; 3788 + cmd_trb = xhci_find_next_enqueue(xhci->cmd_ring); 3774 3789 ret = xhci_queue_address_device(xhci, virt_dev->in_ctx->dma, 3775 3790 udev->slot_id); 3776 3791 if (ret) {
+11
drivers/usb/host/xhci.h
··· 1412 1412 unsigned long resume_done[USB_MAXCHILDREN]; 1413 1413 /* which ports have started to resume */ 1414 1414 unsigned long resuming_ports; 1415 + /* Which ports are waiting on RExit to U0 transition. */ 1416 + unsigned long rexit_ports; 1417 + struct completion rexit_done[USB_MAXCHILDREN]; 1415 1418 }; 1419 + 1420 + 1421 + /* 1422 + * It can take up to 20 ms to transition from RExit to U0 on the 1423 + * Intel Lynx Point LP xHCI host. 1424 + */ 1425 + #define XHCI_MAX_REXIT_TIMEOUT (20 * 1000) 1416 1426 1417 1427 static inline unsigned int hcd_index(struct usb_hcd *hcd) 1418 1428 { ··· 1850 1840 union xhci_trb *cmd_trb); 1851 1841 void xhci_ring_ep_doorbell(struct xhci_hcd *xhci, unsigned int slot_id, 1852 1842 unsigned int ep_index, unsigned int stream_id); 1843 + union xhci_trb *xhci_find_next_enqueue(struct xhci_ring *ring); 1853 1844 1854 1845 /* xHCI roothub code */ 1855 1846 void xhci_set_link_state(struct xhci_hcd *xhci, __le32 __iomem **port_array,
+2 -15
drivers/video/mmp/hw/mmp_ctrl.c
··· 514 514 if (IS_ERR(ctrl->clk)) { 515 515 dev_err(ctrl->dev, "unable to get clk %s\n", mi->clk_name); 516 516 ret = -ENOENT; 517 - goto failed_get_clk; 517 + goto failed; 518 518 } 519 519 clk_prepare_enable(ctrl->clk); 520 520 ··· 551 551 path_deinit(path_plat); 552 552 } 553 553 554 - if (ctrl->clk) { 555 - devm_clk_put(ctrl->dev, ctrl->clk); 556 - clk_disable_unprepare(ctrl->clk); 557 - } 558 - failed_get_clk: 559 - devm_free_irq(ctrl->dev, ctrl->irq, ctrl); 554 + clk_disable_unprepare(ctrl->clk); 560 555 failed: 561 - if (ctrl) { 562 - if (ctrl->reg_base) 563 - devm_iounmap(ctrl->dev, ctrl->reg_base); 564 - devm_release_mem_region(ctrl->dev, res->start, 565 - resource_size(res)); 566 - devm_kfree(ctrl->dev, ctrl); 567 - } 568 - 569 556 dev_err(&pdev->dev, "device init failed\n"); 570 557 571 558 return ret;
+1
drivers/video/mxsfb.c
··· 620 620 break; 621 621 case 3: 622 622 bits_per_pixel = 32; 623 + break; 623 624 case 1: 624 625 default: 625 626 return -EINVAL;
+3 -1
drivers/video/neofb.c
··· 2075 2075 if (!fb_find_mode(&info->var, info, mode_option, NULL, 0, 2076 2076 info->monspecs.modedb, 16)) { 2077 2077 printk(KERN_ERR "neofb: Unable to find usable video mode.\n"); 2078 + err = -EINVAL; 2078 2079 goto err_map_video; 2079 2080 } 2080 2081 ··· 2098 2097 info->fix.smem_len >> 10, info->var.xres, 2099 2098 info->var.yres, h_sync / 1000, h_sync % 1000, v_sync); 2100 2099 2101 - if (fb_alloc_cmap(&info->cmap, 256, 0) < 0) 2100 + err = fb_alloc_cmap(&info->cmap, 256, 0); 2101 + if (err < 0) 2102 2102 goto err_map_video; 2103 2103 2104 2104 err = register_framebuffer(info);
+3 -3
drivers/video/of_display_timing.c
··· 120 120 return -EINVAL; 121 121 } 122 122 123 - timing_np = of_find_node_by_name(np, name); 123 + timing_np = of_get_child_by_name(np, name); 124 124 if (!timing_np) { 125 125 pr_err("%s: could not find node '%s'\n", 126 126 of_node_full_name(np), name); ··· 143 143 struct display_timings *disp; 144 144 145 145 if (!np) { 146 - pr_err("%s: no devicenode given\n", of_node_full_name(np)); 146 + pr_err("%s: no device node given\n", of_node_full_name(np)); 147 147 return NULL; 148 148 } 149 149 150 - timings_np = of_find_node_by_name(np, "display-timings"); 150 + timings_np = of_get_child_by_name(np, "display-timings"); 151 151 if (!timings_np) { 152 152 pr_err("%s: could not find display-timings node\n", 153 153 of_node_full_name(np));
+1
drivers/video/omap2/displays-new/Kconfig
··· 35 35 36 36 config DISPLAY_PANEL_DSI_CM 37 37 tristate "Generic DSI Command Mode Panel" 38 + depends on BACKLIGHT_CLASS_DEVICE 38 39 help 39 40 Driver for generic DSI command mode panels. 40 41
+1 -1
drivers/video/omap2/displays-new/connector-analog-tv.c
··· 191 191 in = omap_dss_find_output(pdata->source); 192 192 if (in == NULL) { 193 193 dev_err(&pdev->dev, "Failed to find video source\n"); 194 - return -ENODEV; 194 + return -EPROBE_DEFER; 195 195 } 196 196 197 197 ddata->in = in;
+1 -1
drivers/video/omap2/displays-new/connector-dvi.c
··· 263 263 in = omap_dss_find_output(pdata->source); 264 264 if (in == NULL) { 265 265 dev_err(&pdev->dev, "Failed to find video source\n"); 266 - return -ENODEV; 266 + return -EPROBE_DEFER; 267 267 } 268 268 269 269 ddata->in = in;
+1 -1
drivers/video/omap2/displays-new/connector-hdmi.c
··· 290 290 in = omap_dss_find_output(pdata->source); 291 291 if (in == NULL) { 292 292 dev_err(&pdev->dev, "Failed to find video source\n"); 293 - return -ENODEV; 293 + return -EPROBE_DEFER; 294 294 } 295 295 296 296 ddata->in = in;
+1
drivers/video/omap2/dss/dispc.c
··· 3691 3691 } 3692 3692 3693 3693 pm_runtime_enable(&pdev->dev); 3694 + pm_runtime_irq_safe(&pdev->dev); 3694 3695 3695 3696 r = dispc_runtime_get(); 3696 3697 if (r)
+1 -8
drivers/video/s3fb.c
··· 1336 1336 (info->var.bits_per_pixel * info->var.xres_virtual); 1337 1337 if (info->var.yres_virtual < info->var.yres) { 1338 1338 dev_err(info->device, "virtual vertical size smaller than real\n"); 1339 - goto err_find_mode; 1340 - } 1341 - 1342 - /* maximize virtual vertical size for fast scrolling */ 1343 - info->var.yres_virtual = info->fix.smem_len * 8 / 1344 - (info->var.bits_per_pixel * info->var.xres_virtual); 1345 - if (info->var.yres_virtual < info->var.yres) { 1346 - dev_err(info->device, "virtual vertical size smaller than real\n"); 1339 + rc = -EINVAL; 1347 1340 goto err_find_mode; 1348 1341 } 1349 1342
+11 -12
drivers/xen/balloon.c
··· 398 398 if (nr_pages > ARRAY_SIZE(frame_list)) 399 399 nr_pages = ARRAY_SIZE(frame_list); 400 400 401 - scratch_page = get_balloon_scratch_page(); 402 - 403 401 for (i = 0; i < nr_pages; i++) { 404 402 page = alloc_page(gfp); 405 403 if (page == NULL) { ··· 411 413 412 414 scrub_page(page); 413 415 416 + /* 417 + * Ballooned out frames are effectively replaced with 418 + * a scratch frame. Ensure direct mappings and the 419 + * p2m are consistent. 420 + */ 421 + scratch_page = get_balloon_scratch_page(); 414 422 #ifdef CONFIG_XEN_HAVE_PVMMU 415 423 if (xen_pv_domain() && !PageHighMem(page)) { 416 424 ret = HYPERVISOR_update_va_mapping( ··· 426 422 BUG_ON(ret); 427 423 } 428 424 #endif 429 - } 430 - 431 - /* Ensure that ballooned highmem pages don't have kmaps. */ 432 - kmap_flush_unused(); 433 - flush_tlb_all(); 434 - 435 - /* No more mappings: invalidate P2M and add to balloon. */ 436 - for (i = 0; i < nr_pages; i++) { 437 - pfn = mfn_to_pfn(frame_list[i]); 438 425 if (!xen_feature(XENFEAT_auto_translated_physmap)) { 439 426 unsigned long p; 440 427 p = page_to_pfn(scratch_page); 441 428 __set_phys_to_machine(pfn, pfn_to_mfn(p)); 442 429 } 430 + put_balloon_scratch_page(); 431 + 443 432 balloon_append(pfn_to_page(pfn)); 444 433 } 445 434 446 - put_balloon_scratch_page(); 435 + /* Ensure that ballooned highmem pages don't have kmaps. */ 436 + kmap_flush_unused(); 437 + flush_tlb_all(); 447 438 448 439 set_xen_guest_handle(reservation.extent_start, frame_list); 449 440 reservation.nr_extents = nr_pages;
+2 -2
fs/bio.c
··· 917 917 src_p = kmap_atomic(src_bv->bv_page); 918 918 dst_p = kmap_atomic(dst_bv->bv_page); 919 919 920 - memcpy(dst_p + dst_bv->bv_offset, 921 - src_p + src_bv->bv_offset, 920 + memcpy(dst_p + dst_offset, 921 + src_p + src_offset, 922 922 bytes); 923 923 924 924 kunmap_atomic(dst_p);
+1 -1
fs/ocfs2/super.c
··· 1924 1924 { 1925 1925 int tmp, hangup_needed = 0; 1926 1926 struct ocfs2_super *osb = NULL; 1927 - char nodestr[8]; 1927 + char nodestr[12]; 1928 1928 1929 1929 trace_ocfs2_dismount_volume(sb); 1930 1930
+4 -63
fs/reiserfs/journal.c
··· 1163 1163 return NULL; 1164 1164 } 1165 1165 1166 - static int newer_jl_done(struct reiserfs_journal_cnode *cn) 1167 - { 1168 - struct super_block *sb = cn->sb; 1169 - b_blocknr_t blocknr = cn->blocknr; 1170 - 1171 - cn = cn->hprev; 1172 - while (cn) { 1173 - if (cn->sb == sb && cn->blocknr == blocknr && cn->jlist && 1174 - atomic_read(&cn->jlist->j_commit_left) != 0) 1175 - return 0; 1176 - cn = cn->hprev; 1177 - } 1178 - return 1; 1179 - } 1180 - 1181 1166 static void remove_journal_hash(struct super_block *, 1182 1167 struct reiserfs_journal_cnode **, 1183 1168 struct reiserfs_journal_list *, unsigned long, ··· 1338 1353 reiserfs_warning(s, "clm-2048", "called with wcount %d", 1339 1354 atomic_read(&journal->j_wcount)); 1340 1355 } 1341 - BUG_ON(jl->j_trans_id == 0); 1342 1356 1343 1357 /* if flushall == 0, the lock is already held */ 1344 1358 if (flushall) { ··· 1577 1593 return err; 1578 1594 } 1579 1595 1580 - static int test_transaction(struct super_block *s, 1581 - struct reiserfs_journal_list *jl) 1582 - { 1583 - struct reiserfs_journal_cnode *cn; 1584 - 1585 - if (jl->j_len == 0 || atomic_read(&jl->j_nonzerolen) == 0) 1586 - return 1; 1587 - 1588 - cn = jl->j_realblock; 1589 - while (cn) { 1590 - /* if the blocknr == 0, this has been cleared from the hash, 1591 - ** skip it 1592 - */ 1593 - if (cn->blocknr == 0) { 1594 - goto next; 1595 - } 1596 - if (cn->bh && !newer_jl_done(cn)) 1597 - return 0; 1598 - next: 1599 - cn = cn->next; 1600 - cond_resched(); 1601 - } 1602 - return 0; 1603 - } 1604 - 1605 1596 static int write_one_transaction(struct super_block *s, 1606 1597 struct reiserfs_journal_list *jl, 1607 1598 struct buffer_chunk *chunk) ··· 1764 1805 break; 1765 1806 tjl = JOURNAL_LIST_ENTRY(tjl->j_list.next); 1766 1807 } 1808 + get_journal_list(jl); 1809 + get_journal_list(flush_jl); 1767 1810 /* try to find a group of blocks we can flush across all the 1768 1811 ** transactions, but only bother if we've actually spanned 1769 1812 ** across 
multiple lists ··· 1774 1813 ret = kupdate_transactions(s, jl, &tjl, &trans_id, len, i); 1775 1814 } 1776 1815 flush_journal_list(s, flush_jl, 1); 1816 + put_journal_list(s, flush_jl); 1817 + put_journal_list(s, jl); 1777 1818 return 0; 1778 1819 } 1779 1820 ··· 3831 3868 return 1; 3832 3869 } 3833 3870 3834 - static void flush_old_journal_lists(struct super_block *s) 3835 - { 3836 - struct reiserfs_journal *journal = SB_JOURNAL(s); 3837 - struct reiserfs_journal_list *jl; 3838 - struct list_head *entry; 3839 - time_t now = get_seconds(); 3840 - 3841 - while (!list_empty(&journal->j_journal_list)) { 3842 - entry = journal->j_journal_list.next; 3843 - jl = JOURNAL_LIST_ENTRY(entry); 3844 - /* this check should always be run, to send old lists to disk */ 3845 - if (jl->j_timestamp < (now - (JOURNAL_MAX_TRANS_AGE * 4)) && 3846 - atomic_read(&jl->j_commit_left) == 0 && 3847 - test_transaction(s, jl)) { 3848 - flush_used_journal_lists(s, jl); 3849 - } else { 3850 - break; 3851 - } 3852 - } 3853 - } 3854 - 3855 3871 /* 3856 3872 ** long and ugly. If flush, will not return until all commit 3857 3873 ** blocks and all real buffers in the trans are on disk. ··· 4174 4232 } 4175 4233 } 4176 4234 } 4177 - flush_old_journal_lists(sb); 4178 4235 4179 4236 journal->j_current_jl->j_list_bitmap = 4180 4237 get_list_bitmap(sb, journal->j_current_jl);
+7 -9
fs/udf/ialloc.c
··· 30 30 { 31 31 struct super_block *sb = inode->i_sb; 32 32 struct udf_sb_info *sbi = UDF_SB(sb); 33 + struct logicalVolIntegrityDescImpUse *lvidiu = udf_sb_lvidiu(sb); 33 34 34 - mutex_lock(&sbi->s_alloc_mutex); 35 - if (sbi->s_lvid_bh) { 36 - struct logicalVolIntegrityDescImpUse *lvidiu = 37 - udf_sb_lvidiu(sbi); 35 + if (lvidiu) { 36 + mutex_lock(&sbi->s_alloc_mutex); 38 37 if (S_ISDIR(inode->i_mode)) 39 38 le32_add_cpu(&lvidiu->numDirs, -1); 40 39 else 41 40 le32_add_cpu(&lvidiu->numFiles, -1); 42 41 udf_updated_lvid(sb); 42 + mutex_unlock(&sbi->s_alloc_mutex); 43 43 } 44 - mutex_unlock(&sbi->s_alloc_mutex); 45 44 46 45 udf_free_blocks(sb, NULL, &UDF_I(inode)->i_location, 0, 1); 47 46 } ··· 54 55 uint32_t start = UDF_I(dir)->i_location.logicalBlockNum; 55 56 struct udf_inode_info *iinfo; 56 57 struct udf_inode_info *dinfo = UDF_I(dir); 58 + struct logicalVolIntegrityDescImpUse *lvidiu; 57 59 58 60 inode = new_inode(sb); 59 61 ··· 92 92 return NULL; 93 93 } 94 94 95 - if (sbi->s_lvid_bh) { 96 - struct logicalVolIntegrityDescImpUse *lvidiu; 97 - 95 + lvidiu = udf_sb_lvidiu(sb); 96 + if (lvidiu) { 98 97 iinfo->i_unique = lvid_get_unique_id(sb); 99 98 mutex_lock(&sbi->s_alloc_mutex); 100 - lvidiu = udf_sb_lvidiu(sbi); 101 99 if (S_ISDIR(mode)) 102 100 le32_add_cpu(&lvidiu->numDirs, 1); 103 101 else
+40 -24
fs/udf/super.c
··· 94 94 static int udf_statfs(struct dentry *, struct kstatfs *); 95 95 static int udf_show_options(struct seq_file *, struct dentry *); 96 96 97 - struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct udf_sb_info *sbi) 97 + struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct super_block *sb) 98 98 { 99 - struct logicalVolIntegrityDesc *lvid = 100 - (struct logicalVolIntegrityDesc *)sbi->s_lvid_bh->b_data; 101 - __u32 number_of_partitions = le32_to_cpu(lvid->numOfPartitions); 102 - __u32 offset = number_of_partitions * 2 * 103 - sizeof(uint32_t)/sizeof(uint8_t); 99 + struct logicalVolIntegrityDesc *lvid; 100 + unsigned int partnum; 101 + unsigned int offset; 102 + 103 + if (!UDF_SB(sb)->s_lvid_bh) 104 + return NULL; 105 + lvid = (struct logicalVolIntegrityDesc *)UDF_SB(sb)->s_lvid_bh->b_data; 106 + partnum = le32_to_cpu(lvid->numOfPartitions); 107 + if ((sb->s_blocksize - sizeof(struct logicalVolIntegrityDescImpUse) - 108 + offsetof(struct logicalVolIntegrityDesc, impUse)) / 109 + (2 * sizeof(uint32_t)) < partnum) { 110 + udf_err(sb, "Logical volume integrity descriptor corrupted " 111 + "(numOfPartitions = %u)!\n", partnum); 112 + return NULL; 113 + } 114 + /* The offset is to skip freeSpaceTable and sizeTable arrays */ 115 + offset = partnum * 2 * sizeof(uint32_t); 104 116 return (struct logicalVolIntegrityDescImpUse *)&(lvid->impUse[offset]); 105 117 } 106 118 ··· 641 629 struct udf_options uopt; 642 630 struct udf_sb_info *sbi = UDF_SB(sb); 643 631 int error = 0; 632 + struct logicalVolIntegrityDescImpUse *lvidiu = udf_sb_lvidiu(sb); 644 633 645 - if (sbi->s_lvid_bh) { 646 - int write_rev = le16_to_cpu(udf_sb_lvidiu(sbi)->minUDFWriteRev); 634 + if (lvidiu) { 635 + int write_rev = le16_to_cpu(lvidiu->minUDFWriteRev); 647 636 if (write_rev > UDF_MAX_WRITE_VERSION && !(*flags & MS_RDONLY)) 648 637 return -EACCES; 649 638 } ··· 1918 1905 1919 1906 if (!bh) 1920 1907 return; 1908 + lvid = (struct logicalVolIntegrityDesc *)bh->b_data; 1909 + lvidiu = 
udf_sb_lvidiu(sb); 1910 + if (!lvidiu) 1911 + return; 1921 1912 1922 1913 mutex_lock(&sbi->s_alloc_mutex); 1923 - lvid = (struct logicalVolIntegrityDesc *)bh->b_data; 1924 - lvidiu = udf_sb_lvidiu(sbi); 1925 - 1926 1914 lvidiu->impIdent.identSuffix[0] = UDF_OS_CLASS_UNIX; 1927 1915 lvidiu->impIdent.identSuffix[1] = UDF_OS_ID_LINUX; 1928 1916 udf_time_to_disk_stamp(&lvid->recordingDateAndTime, ··· 1951 1937 1952 1938 if (!bh) 1953 1939 return; 1940 + lvid = (struct logicalVolIntegrityDesc *)bh->b_data; 1941 + lvidiu = udf_sb_lvidiu(sb); 1942 + if (!lvidiu) 1943 + return; 1954 1944 1955 1945 mutex_lock(&sbi->s_alloc_mutex); 1956 - lvid = (struct logicalVolIntegrityDesc *)bh->b_data; 1957 - lvidiu = udf_sb_lvidiu(sbi); 1958 1946 lvidiu->impIdent.identSuffix[0] = UDF_OS_CLASS_UNIX; 1959 1947 lvidiu->impIdent.identSuffix[1] = UDF_OS_ID_LINUX; 1960 1948 udf_time_to_disk_stamp(&lvid->recordingDateAndTime, CURRENT_TIME); ··· 2109 2093 2110 2094 if (sbi->s_lvid_bh) { 2111 2095 struct logicalVolIntegrityDescImpUse *lvidiu = 2112 - udf_sb_lvidiu(sbi); 2113 - uint16_t minUDFReadRev = le16_to_cpu(lvidiu->minUDFReadRev); 2114 - uint16_t minUDFWriteRev = le16_to_cpu(lvidiu->minUDFWriteRev); 2115 - /* uint16_t maxUDFWriteRev = 2116 - le16_to_cpu(lvidiu->maxUDFWriteRev); */ 2096 + udf_sb_lvidiu(sb); 2097 + uint16_t minUDFReadRev; 2098 + uint16_t minUDFWriteRev; 2117 2099 2100 + if (!lvidiu) { 2101 + ret = -EINVAL; 2102 + goto error_out; 2103 + } 2104 + minUDFReadRev = le16_to_cpu(lvidiu->minUDFReadRev); 2105 + minUDFWriteRev = le16_to_cpu(lvidiu->minUDFWriteRev); 2118 2106 if (minUDFReadRev > UDF_MAX_READ_VERSION) { 2119 2107 udf_err(sb, "minUDFReadRev=%x (max is %x)\n", 2120 - le16_to_cpu(lvidiu->minUDFReadRev), 2108 + minUDFReadRev, 2121 2109 UDF_MAX_READ_VERSION); 2122 2110 ret = -EINVAL; 2123 2111 goto error_out; ··· 2285 2265 struct logicalVolIntegrityDescImpUse *lvidiu; 2286 2266 u64 id = huge_encode_dev(sb->s_bdev->bd_dev); 2287 2267 2288 - if (sbi->s_lvid_bh != NULL) 2289 - 
lvidiu = udf_sb_lvidiu(sbi); 2290 - else 2291 - lvidiu = NULL; 2292 - 2268 + lvidiu = udf_sb_lvidiu(sb); 2293 2269 buf->f_type = UDF_SUPER_MAGIC; 2294 2270 buf->f_bsize = sb->s_blocksize; 2295 2271 buf->f_blocks = sbi->s_partmaps[sbi->s_partition].s_partition_len;
+1 -1
fs/udf/udf_sb.h
··· 162 162 return sb->s_fs_info; 163 163 } 164 164 165 - struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct udf_sb_info *sbi); 165 + struct logicalVolIntegrityDescImpUse *udf_sb_lvidiu(struct super_block *sb); 166 166 167 167 int udf_compute_nr_groups(struct super_block *sb, u32 partition); 168 168
+1
fs/xfs/xfs_buf_item.c
··· 628 628 else if (aborted) { 629 629 ASSERT(XFS_FORCED_SHUTDOWN(lip->li_mountp)); 630 630 if (lip->li_flags & XFS_LI_IN_AIL) { 631 + spin_lock(&lip->li_ailp->xa_lock); 631 632 xfs_trans_ail_delete(lip->li_ailp, lip, 632 633 SHUTDOWN_LOG_IO_ERROR); 633 634 }
+3 -2
fs/xfs/xfs_da_btree.c
··· 1224 1224 /* start with smaller blk num */ 1225 1225 forward = nodehdr.forw < nodehdr.back; 1226 1226 for (i = 0; i < 2; forward = !forward, i++) { 1227 + struct xfs_da3_icnode_hdr thdr; 1227 1228 if (forward) 1228 1229 blkno = nodehdr.forw; 1229 1230 else ··· 1237 1236 return(error); 1238 1237 1239 1238 node = bp->b_addr; 1240 - xfs_da3_node_hdr_from_disk(&nodehdr, node); 1239 + xfs_da3_node_hdr_from_disk(&thdr, node); 1241 1240 xfs_trans_brelse(state->args->trans, bp); 1242 1241 1243 - if (count - nodehdr.count >= 0) 1242 + if (count - thdr.count >= 0) 1244 1243 break; /* fits with at least 25% to spare */ 1245 1244 } 1246 1245 if (i >= 2) {
+1 -1
fs/xfs/xfs_fs.h
··· 515 515 /* XFS_IOC_GETBIOSIZE ---- deprecated 47 */ 516 516 #define XFS_IOC_GETBMAPX _IOWR('X', 56, struct getbmap) 517 517 #define XFS_IOC_ZERO_RANGE _IOW ('X', 57, struct xfs_flock64) 518 - #define XFS_IOC_FREE_EOFBLOCKS _IOR ('X', 58, struct xfs_eofblocks) 518 + #define XFS_IOC_FREE_EOFBLOCKS _IOR ('X', 58, struct xfs_fs_eofblocks) 519 519 520 520 /* 521 521 * ioctl commands that replace IRIX syssgi()'s
+4 -5
fs/xfs/xfs_icache.c
··· 119 119 ip->i_itemp = NULL; 120 120 } 121 121 122 - /* asserts to verify all state is correct here */ 123 - ASSERT(atomic_read(&ip->i_pincount) == 0); 124 - ASSERT(!spin_is_locked(&ip->i_flags_lock)); 125 - ASSERT(!xfs_isiflocked(ip)); 126 - 127 122 /* 128 123 * Because we use RCU freeing we need to ensure the inode always 129 124 * appears to be reclaimed with an invalid inode number when in the ··· 129 134 ip->i_flags = XFS_IRECLAIM; 130 135 ip->i_ino = 0; 131 136 spin_unlock(&ip->i_flags_lock); 137 + 138 + /* asserts to verify all state is correct here */ 139 + ASSERT(atomic_read(&ip->i_pincount) == 0); 140 + ASSERT(!xfs_isiflocked(ip)); 132 141 133 142 call_rcu(&VFS_I(ip)->i_rcu, xfs_inode_free_callback); 134 143 }
+59 -14
fs/xfs/xfs_log_recover.c
··· 1970 1970 * magic number. If we don't recognise the magic number in the buffer, then 1971 1971 * return a LSN of -1 so that the caller knows it was an unrecognised block and 1972 1972 * so can recover the buffer. 1973 + * 1974 + * Note: we cannot rely solely on magic number matches to determine that the 1975 + * buffer has a valid LSN - we also need to verify that it belongs to this 1976 + * filesystem, so we need to extract the object's LSN and compare it to that 1977 + * which we read from the superblock. If the UUIDs don't match, then we've got a 1978 + * stale metadata block from an old filesystem instance that we need to recover 1979 + * over the top of. 1973 1980 */ 1974 1981 static xfs_lsn_t 1975 1982 xlog_recover_get_buf_lsn( ··· 1987 1980 __uint16_t magic16; 1988 1981 __uint16_t magicda; 1989 1982 void *blk = bp->b_addr; 1983 + uuid_t *uuid; 1984 + xfs_lsn_t lsn = -1; 1990 1985 1991 1986 /* v4 filesystems always recover immediately */ 1992 1987 if (!xfs_sb_version_hascrc(&mp->m_sb)) ··· 2001 1992 case XFS_ABTB_MAGIC: 2002 1993 case XFS_ABTC_MAGIC: 2003 1994 case XFS_IBT_CRC_MAGIC: 2004 - case XFS_IBT_MAGIC: 2005 - return be64_to_cpu( 2006 - ((struct xfs_btree_block *)blk)->bb_u.s.bb_lsn); 1995 + case XFS_IBT_MAGIC: { 1996 + struct xfs_btree_block *btb = blk; 1997 + 1998 + lsn = be64_to_cpu(btb->bb_u.s.bb_lsn); 1999 + uuid = &btb->bb_u.s.bb_uuid; 2000 + break; 2001 + } 2007 2002 case XFS_BMAP_CRC_MAGIC: 2008 - case XFS_BMAP_MAGIC: 2009 - return be64_to_cpu( 2010 - ((struct xfs_btree_block *)blk)->bb_u.l.bb_lsn); 2003 + case XFS_BMAP_MAGIC: { 2004 + struct xfs_btree_block *btb = blk; 2005 + 2006 + lsn = be64_to_cpu(btb->bb_u.l.bb_lsn); 2007 + uuid = &btb->bb_u.l.bb_uuid; 2008 + break; 2009 + } 2011 2010 case XFS_AGF_MAGIC: 2012 - return be64_to_cpu(((struct xfs_agf *)blk)->agf_lsn); 2011 + lsn = be64_to_cpu(((struct xfs_agf *)blk)->agf_lsn); 2012 + uuid = &((struct xfs_agf *)blk)->agf_uuid; 2013 + break; 2013 2014 case XFS_AGFL_MAGIC: 2014 - return 
be64_to_cpu(((struct xfs_agfl *)blk)->agfl_lsn); 2015 + lsn = be64_to_cpu(((struct xfs_agfl *)blk)->agfl_lsn); 2016 + uuid = &((struct xfs_agfl *)blk)->agfl_uuid; 2017 + break; 2015 2018 case XFS_AGI_MAGIC: 2016 - return be64_to_cpu(((struct xfs_agi *)blk)->agi_lsn); 2019 + lsn = be64_to_cpu(((struct xfs_agi *)blk)->agi_lsn); 2020 + uuid = &((struct xfs_agi *)blk)->agi_uuid; 2021 + break; 2017 2022 case XFS_SYMLINK_MAGIC: 2018 - return be64_to_cpu(((struct xfs_dsymlink_hdr *)blk)->sl_lsn); 2023 + lsn = be64_to_cpu(((struct xfs_dsymlink_hdr *)blk)->sl_lsn); 2024 + uuid = &((struct xfs_dsymlink_hdr *)blk)->sl_uuid; 2025 + break; 2019 2026 case XFS_DIR3_BLOCK_MAGIC: 2020 2027 case XFS_DIR3_DATA_MAGIC: 2021 2028 case XFS_DIR3_FREE_MAGIC: 2022 - return be64_to_cpu(((struct xfs_dir3_blk_hdr *)blk)->lsn); 2029 + lsn = be64_to_cpu(((struct xfs_dir3_blk_hdr *)blk)->lsn); 2030 + uuid = &((struct xfs_dir3_blk_hdr *)blk)->uuid; 2031 + break; 2023 2032 case XFS_ATTR3_RMT_MAGIC: 2024 - return be64_to_cpu(((struct xfs_attr3_rmt_hdr *)blk)->rm_lsn); 2033 + lsn = be64_to_cpu(((struct xfs_attr3_rmt_hdr *)blk)->rm_lsn); 2034 + uuid = &((struct xfs_attr3_rmt_hdr *)blk)->rm_uuid; 2035 + break; 2025 2036 case XFS_SB_MAGIC: 2026 - return be64_to_cpu(((struct xfs_dsb *)blk)->sb_lsn); 2037 + lsn = be64_to_cpu(((struct xfs_dsb *)blk)->sb_lsn); 2038 + uuid = &((struct xfs_dsb *)blk)->sb_uuid; 2039 + break; 2027 2040 default: 2028 2041 break; 2042 + } 2043 + 2044 + if (lsn != (xfs_lsn_t)-1) { 2045 + if (!uuid_equal(&mp->m_sb.sb_uuid, uuid)) 2046 + goto recover_immediately; 2047 + return lsn; 2029 2048 } 2030 2049 2031 2050 magicda = be16_to_cpu(((struct xfs_da_blkinfo *)blk)->magic); ··· 2061 2024 case XFS_DIR3_LEAF1_MAGIC: 2062 2025 case XFS_DIR3_LEAFN_MAGIC: 2063 2026 case XFS_DA3_NODE_MAGIC: 2064 - return be64_to_cpu(((struct xfs_da3_blkinfo *)blk)->lsn); 2027 + lsn = be64_to_cpu(((struct xfs_da3_blkinfo *)blk)->lsn); 2028 + uuid = &((struct xfs_da3_blkinfo *)blk)->uuid; 2029 + break; 2065 
2030 default: 2066 2031 break; 2032 + } 2033 + 2034 + if (lsn != (xfs_lsn_t)-1) { 2035 + if (!uuid_equal(&mp->m_sb.sb_uuid, uuid)) 2036 + goto recover_immediately; 2037 + return lsn; 2067 2038 } 2068 2039 2069 2040 /*
+2 -1
include/linux/device-mapper.h
··· 406 406 union map_info *dm_get_mapinfo(struct bio *bio); 407 407 union map_info *dm_get_rq_mapinfo(struct request *rq); 408 408 409 + struct queue_limits *dm_get_queue_limits(struct mapped_device *md); 410 + 409 411 /* 410 412 * Geometry functions. 411 413 */ 412 414 int dm_get_geometry(struct mapped_device *md, struct hd_geometry *geo); 413 415 int dm_set_geometry(struct mapped_device *md, struct hd_geometry *geo); 414 - 415 416 416 417 /*----------------------------------------------------------------- 417 418 * Functions for manipulating device-mapper tables.
+5 -2
include/linux/hyperv.h
··· 30 30 /* 31 31 * Framework version for util services. 32 32 */ 33 + #define UTIL_FW_MINOR 0 34 + 35 + #define UTIL_WS2K8_FW_MAJOR 1 36 + #define UTIL_WS2K8_FW_VERSION (UTIL_WS2K8_FW_MAJOR << 16 | UTIL_FW_MINOR) 33 37 34 38 #define UTIL_FW_MAJOR 3 35 - #define UTIL_FW_MINOR 0 36 - #define UTIL_FW_MAJOR_MINOR (UTIL_FW_MAJOR << 16 | UTIL_FW_MINOR) 39 + #define UTIL_FW_VERSION (UTIL_FW_MAJOR << 16 | UTIL_FW_MINOR) 37 40 38 41 39 42 /*
+10 -45
include/linux/memcontrol.h
··· 53 53 unsigned int generation; 54 54 }; 55 55 56 - enum mem_cgroup_filter_t { 57 - VISIT, /* visit current node */ 58 - SKIP, /* skip the current node and continue traversal */ 59 - SKIP_TREE, /* skip the whole subtree and continue traversal */ 60 - }; 61 - 62 - /* 63 - * mem_cgroup_filter_t predicate might instruct mem_cgroup_iter_cond how to 64 - * iterate through the hierarchy tree. Each tree element is checked by the 65 - * predicate before it is returned by the iterator. If a filter returns 66 - * SKIP or SKIP_TREE then the iterator code continues traversal (with the 67 - * next node down the hierarchy or the next node that doesn't belong under the 68 - * memcg's subtree). 69 - */ 70 - typedef enum mem_cgroup_filter_t 71 - (*mem_cgroup_iter_filter)(struct mem_cgroup *memcg, struct mem_cgroup *root); 72 - 73 56 #ifdef CONFIG_MEMCG 74 57 /* 75 58 * All "charge" functions with gfp_mask should use GFP_KERNEL or ··· 120 137 extern void mem_cgroup_end_migration(struct mem_cgroup *memcg, 121 138 struct page *oldpage, struct page *newpage, bool migration_ok); 122 139 123 - struct mem_cgroup *mem_cgroup_iter_cond(struct mem_cgroup *root, 124 - struct mem_cgroup *prev, 125 - struct mem_cgroup_reclaim_cookie *reclaim, 126 - mem_cgroup_iter_filter cond); 127 - 128 - static inline struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root, 129 - struct mem_cgroup *prev, 130 - struct mem_cgroup_reclaim_cookie *reclaim) 131 - { 132 - return mem_cgroup_iter_cond(root, prev, reclaim, NULL); 133 - } 134 - 140 + struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *, 141 + struct mem_cgroup *, 142 + struct mem_cgroup_reclaim_cookie *); 135 143 void mem_cgroup_iter_break(struct mem_cgroup *, struct mem_cgroup *); 136 144 137 145 /* ··· 234 260 mem_cgroup_update_page_stat(page, idx, -1); 235 261 } 236 262 237 - enum mem_cgroup_filter_t 238 - mem_cgroup_soft_reclaim_eligible(struct mem_cgroup *memcg, 239 - struct mem_cgroup *root); 263 + unsigned long 
mem_cgroup_soft_limit_reclaim(struct zone *zone, int order, 264 + gfp_t gfp_mask, 265 + unsigned long *total_scanned); 240 266 241 267 void __mem_cgroup_count_vm_event(struct mm_struct *mm, enum vm_event_item idx); 242 268 static inline void mem_cgroup_count_vm_event(struct mm_struct *mm, ··· 350 376 struct page *oldpage, struct page *newpage, bool migration_ok) 351 377 { 352 378 } 353 - static inline struct mem_cgroup * 354 - mem_cgroup_iter_cond(struct mem_cgroup *root, 355 - struct mem_cgroup *prev, 356 - struct mem_cgroup_reclaim_cookie *reclaim, 357 - mem_cgroup_iter_filter cond) 358 - { 359 - /* first call must return non-NULL, second return NULL */ 360 - return (struct mem_cgroup *)(unsigned long)!prev; 361 - } 362 379 363 380 static inline struct mem_cgroup * 364 381 mem_cgroup_iter(struct mem_cgroup *root, ··· 436 471 } 437 472 438 473 static inline 439 - enum mem_cgroup_filter_t 440 - mem_cgroup_soft_reclaim_eligible(struct mem_cgroup *memcg, 441 - struct mem_cgroup *root) 474 + unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order, 475 + gfp_t gfp_mask, 476 + unsigned long *total_scanned) 442 477 { 443 - return VISIT; 478 + return 0; 444 479 } 445 480 446 481 static inline void mem_cgroup_split_huge_fixup(struct page *head)
+3 -3
include/linux/mutex.h
··· 15 15 #include <linux/spinlock_types.h> 16 16 #include <linux/linkage.h> 17 17 #include <linux/lockdep.h> 18 - 19 18 #include <linux/atomic.h> 19 + #include <asm/processor.h> 20 20 21 21 /* 22 22 * Simple, straightforward mutexes with strict semantics: ··· 175 175 176 176 extern int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); 177 177 178 - #ifndef CONFIG_HAVE_ARCH_MUTEX_CPU_RELAX 179 - #define arch_mutex_cpu_relax() cpu_relax() 178 + #ifndef arch_mutex_cpu_relax 179 + # define arch_mutex_cpu_relax() cpu_relax() 180 180 #endif 181 181 182 182 #endif
+8 -12
include/linux/of_irq.h
··· 1 1 #ifndef __OF_IRQ_H 2 2 #define __OF_IRQ_H 3 3 4 - #if defined(CONFIG_OF) 5 - struct of_irq; 6 4 #include <linux/types.h> 7 5 #include <linux/errno.h> 8 6 #include <linux/irq.h> ··· 8 10 #include <linux/ioport.h> 9 11 #include <linux/of.h> 10 12 11 - /* 12 - * irq_of_parse_and_map() is used by all OF enabled platforms; but SPARC 13 - * implements it differently. However, the prototype is the same for all, 14 - * so declare it here regardless of the CONFIG_OF_IRQ setting. 15 - */ 16 - extern unsigned int irq_of_parse_and_map(struct device_node *node, int index); 17 - 18 - #if defined(CONFIG_OF_IRQ) 19 13 /** 20 14 * of_irq - container for device_node/irq_specifier pair for an irq controller 21 15 * @controller: pointer to interrupt controller device tree node ··· 61 71 extern int of_irq_count(struct device_node *dev); 62 72 extern int of_irq_to_resource_table(struct device_node *dev, 63 73 struct resource *res, int nr_irqs); 64 - extern struct device_node *of_irq_find_parent(struct device_node *child); 65 74 66 75 extern void of_irq_init(const struct of_device_id *matches); 67 76 68 - #endif /* CONFIG_OF_IRQ */ 77 + #if defined(CONFIG_OF) 78 + /* 79 + * irq_of_parse_and_map() is used by all OF enabled platforms; but SPARC 80 + * implements it differently. However, the prototype is the same for all, 81 + * so declare it here regardless of the CONFIG_OF_IRQ setting. 82 + */ 83 + extern unsigned int irq_of_parse_and_map(struct device_node *node, int index); 84 + extern struct device_node *of_irq_find_parent(struct device_node *child); 69 85 70 86 #else /* !CONFIG_OF */ 71 87 static inline unsigned int irq_of_parse_and_map(struct device_node *dev,
+46
include/linux/platform_data/dma-s3c24xx.h
··· 1 + /* 2 + * S3C24XX DMA handling 3 + * 4 + * Copyright (c) 2013 Heiko Stuebner <heiko@sntech.de> 5 + * 6 + * This program is free software; you can redistribute it and/or modify it 7 + * under the terms of the GNU General Public License as published by the Free 8 + * Software Foundation; either version 2 of the License, or (at your option) 9 + * any later version. 10 + */ 11 + 12 + /* Helper to encode the source selection constraints for early s3c socs. */ 13 + #define S3C24XX_DMA_CHANREQ(src, chan) ((BIT(3) | src) << chan * 4) 14 + 15 + enum s3c24xx_dma_bus { 16 + S3C24XX_DMA_APB, 17 + S3C24XX_DMA_AHB, 18 + }; 19 + 20 + /** 21 + * @bus: on which bus does the peripheral reside - AHB or APB. 22 + * @handshake: is a handshake with the peripheral necessary 23 + * @chansel: channel selection information, depending on variant; reqsel for 24 + * s3c2443 and later and channel-selection map for earlier SoCs 25 + * see CHANSEL doc in s3c2443-dma.c 26 + */ 27 + struct s3c24xx_dma_channel { 28 + enum s3c24xx_dma_bus bus; 29 + bool handshake; 30 + u16 chansel; 31 + }; 32 + 33 + /** 34 + * struct s3c24xx_dma_platdata - platform specific settings 35 + * @num_phy_channels: number of physical channels 36 + * @channels: array of virtual channel descriptions 37 + * @num_channels: number of virtual channels 38 + */ 39 + struct s3c24xx_dma_platdata { 40 + int num_phy_channels; 41 + struct s3c24xx_dma_channel *channels; 42 + int num_channels; 43 + }; 44 + 45 + struct dma_chan; 46 + bool s3c24xx_dma_filter(struct dma_chan *chan, void *param);
+6
include/linux/smp.h
··· 155 155 156 156 static inline void kick_all_cpus_sync(void) { } 157 157 158 + static inline void __smp_call_function_single(int cpuid, 159 + struct call_single_data *data, int wait) 160 + { 161 + on_each_cpu(data->func, data->info, wait); 162 + } 163 + 158 164 #endif /* !SMP */ 159 165 160 166 /*
+2
include/uapi/drm/radeon_drm.h
··· 1007 1007 #define SI_TILE_MODE_DEPTH_STENCIL_2D_4AA 3 1008 1008 #define SI_TILE_MODE_DEPTH_STENCIL_2D_8AA 2 1009 1009 1010 + #define CIK_TILE_MODE_DEPTH_STENCIL_1D 5 1011 + 1010 1012 #endif
+10 -5
include/uapi/linux/perf_event.h
··· 380 380 union { 381 381 __u64 capabilities; 382 382 struct { 383 - __u64 cap_usr_time : 1, 384 - cap_usr_rdpmc : 1, 385 - cap_usr_time_zero : 1, 386 - cap_____res : 61; 383 + __u64 cap_bit0 : 1, /* Always 0, deprecated, see commit 860f085b74e9 */ 384 + cap_bit0_is_deprecated : 1, /* Always 1, signals that bit 0 is zero */ 385 + 386 + cap_user_rdpmc : 1, /* The RDPMC instruction can be used to read counts */ 387 + cap_user_time : 1, /* The time_* fields are used */ 388 + cap_user_time_zero : 1, /* The time_zero field is used */ 389 + cap_____res : 59; 387 390 }; 388 391 }; 389 392 ··· 445 442 * ((rem * time_mult) >> time_shift); 446 443 */ 447 444 __u64 time_zero; 445 + __u32 size; /* Header size up to __reserved[] fields. */ 448 446 449 447 /* 450 448 * Hole for extension of the self monitor capabilities 451 449 */ 452 450 453 - __u64 __reserved[119]; /* align to 1k */ 451 + __u8 __reserved[118*8+4]; /* align to 1k. */ 454 452 455 453 /* 456 454 * Control data for the mmap() data buffer. ··· 532 528 * u64 len; 533 529 * u64 pgoff; 534 530 * char filename[]; 531 + * struct sample_id sample_id; 535 532 * }; 536 533 */ 537 534 PERF_RECORD_MMAP = 1,
+13 -6
ipc/msg.c
··· 165 165 ipc_rmid(&msg_ids(ns), &s->q_perm); 166 166 } 167 167 168 + static void msg_rcu_free(struct rcu_head *head) 169 + { 170 + struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu); 171 + struct msg_queue *msq = ipc_rcu_to_struct(p); 172 + 173 + security_msg_queue_free(msq); 174 + ipc_rcu_free(head); 175 + } 176 + 168 177 /** 169 178 * newque - Create a new msg queue 170 179 * @ns: namespace ··· 198 189 msq->q_perm.security = NULL; 199 190 retval = security_msg_queue_alloc(msq); 200 191 if (retval) { 201 - ipc_rcu_putref(msq); 192 + ipc_rcu_putref(msq, ipc_rcu_free); 202 193 return retval; 203 194 } 204 195 205 196 /* ipc_addid() locks msq upon success. */ 206 197 id = ipc_addid(&msg_ids(ns), &msq->q_perm, ns->msg_ctlmni); 207 198 if (id < 0) { 208 - security_msg_queue_free(msq); 209 - ipc_rcu_putref(msq); 199 + ipc_rcu_putref(msq, msg_rcu_free); 210 200 return id; 211 201 } 212 202 ··· 284 276 free_msg(msg); 285 277 } 286 278 atomic_sub(msq->q_cbytes, &ns->msg_bytes); 287 - security_msg_queue_free(msq); 288 - ipc_rcu_putref(msq); 279 + ipc_rcu_putref(msq, msg_rcu_free); 289 280 } 290 281 291 282 /* ··· 724 717 rcu_read_lock(); 725 718 ipc_lock_object(&msq->q_perm); 726 719 727 - ipc_rcu_putref(msq); 720 + ipc_rcu_putref(msq, ipc_rcu_free); 728 721 if (msq->q_perm.deleted) { 729 722 err = -EIDRM; 730 723 goto out_unlock0;
+18 -16
ipc/sem.c
··· 243 243 } 244 244 } 245 245 246 + static void sem_rcu_free(struct rcu_head *head) 247 + { 248 + struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu); 249 + struct sem_array *sma = ipc_rcu_to_struct(p); 250 + 251 + security_sem_free(sma); 252 + ipc_rcu_free(head); 253 + } 254 + 246 255 /* 247 256 * If the request contains only one semaphore operation, and there are 248 257 * no complex transactions pending, lock only the semaphore involved. ··· 383 374 static inline void sem_lock_and_putref(struct sem_array *sma) 384 375 { 385 376 sem_lock(sma, NULL, -1); 386 - ipc_rcu_putref(sma); 387 - } 388 - 389 - static inline void sem_putref(struct sem_array *sma) 390 - { 391 - ipc_rcu_putref(sma); 377 + ipc_rcu_putref(sma, ipc_rcu_free); 392 378 } 393 379 394 380 static inline void sem_rmid(struct ipc_namespace *ns, struct sem_array *s) ··· 462 458 sma->sem_perm.security = NULL; 463 459 retval = security_sem_alloc(sma); 464 460 if (retval) { 465 - ipc_rcu_putref(sma); 461 + ipc_rcu_putref(sma, ipc_rcu_free); 466 462 return retval; 467 463 } 468 464 469 465 id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni); 470 466 if (id < 0) { 471 - security_sem_free(sma); 472 - ipc_rcu_putref(sma); 467 + ipc_rcu_putref(sma, sem_rcu_free); 473 468 return id; 474 469 } 475 470 ns->used_sems += nsems; ··· 1050 1047 1051 1048 wake_up_sem_queue_do(&tasks); 1052 1049 ns->used_sems -= sma->sem_nsems; 1053 - security_sem_free(sma); 1054 - ipc_rcu_putref(sma); 1050 + ipc_rcu_putref(sma, sem_rcu_free); 1055 1051 } 1056 1052 1057 1053 static unsigned long copy_semid_to_user(void __user *buf, struct semid64_ds *in, int version) ··· 1294 1292 rcu_read_unlock(); 1295 1293 sem_io = ipc_alloc(sizeof(ushort)*nsems); 1296 1294 if(sem_io == NULL) { 1297 - sem_putref(sma); 1295 + ipc_rcu_putref(sma, ipc_rcu_free); 1298 1296 return -ENOMEM; 1299 1297 } 1300 1298 ··· 1330 1328 if(nsems > SEMMSL_FAST) { 1331 1329 sem_io = ipc_alloc(sizeof(ushort)*nsems); 1332 1330 if(sem_io == NULL) { 1333 - 
sem_putref(sma); 1331 + ipc_rcu_putref(sma, ipc_rcu_free); 1334 1332 return -ENOMEM; 1335 1333 } 1336 1334 } 1337 1335 1338 1336 if (copy_from_user (sem_io, p, nsems*sizeof(ushort))) { 1339 - sem_putref(sma); 1337 + ipc_rcu_putref(sma, ipc_rcu_free); 1340 1338 err = -EFAULT; 1341 1339 goto out_free; 1342 1340 } 1343 1341 1344 1342 for (i = 0; i < nsems; i++) { 1345 1343 if (sem_io[i] > SEMVMX) { 1346 - sem_putref(sma); 1344 + ipc_rcu_putref(sma, ipc_rcu_free); 1347 1345 err = -ERANGE; 1348 1346 goto out_free; 1349 1347 } ··· 1631 1629 /* step 2: allocate new undo structure */ 1632 1630 new = kzalloc(sizeof(struct sem_undo) + sizeof(short)*nsems, GFP_KERNEL); 1633 1631 if (!new) { 1634 - sem_putref(sma); 1632 + ipc_rcu_putref(sma, ipc_rcu_free); 1635 1633 return ERR_PTR(-ENOMEM); 1636 1634 } 1637 1635
+12 -5
ipc/shm.c
··· 167 167 ipc_lock_object(&ipcp->shm_perm); 168 168 } 169 169 170 + static void shm_rcu_free(struct rcu_head *head) 171 + { 172 + struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu); 173 + struct shmid_kernel *shp = ipc_rcu_to_struct(p); 174 + 175 + security_shm_free(shp); 176 + ipc_rcu_free(head); 177 + } 178 + 170 179 static inline void shm_rmid(struct ipc_namespace *ns, struct shmid_kernel *s) 171 180 { 172 181 ipc_rmid(&shm_ids(ns), &s->shm_perm); ··· 217 208 user_shm_unlock(file_inode(shp->shm_file)->i_size, 218 209 shp->mlock_user); 219 210 fput (shp->shm_file); 220 - security_shm_free(shp); 221 - ipc_rcu_putref(shp); 211 + ipc_rcu_putref(shp, shm_rcu_free); 222 212 } 223 213 224 214 /* ··· 505 497 shp->shm_perm.security = NULL; 506 498 error = security_shm_alloc(shp); 507 499 if (error) { 508 - ipc_rcu_putref(shp); 500 + ipc_rcu_putref(shp, ipc_rcu_free); 509 501 return error; 510 502 } 511 503 ··· 574 566 user_shm_unlock(size, shp->mlock_user); 575 567 fput(file); 576 568 no_file: 577 - security_shm_free(shp); 578 - ipc_rcu_putref(shp); 569 + ipc_rcu_putref(shp, shm_rcu_free); 579 570 return error; 580 571 } 581 572
+12 -20
ipc/util.c
··· 474 474 kfree(ptr); 475 475 } 476 476 477 - struct ipc_rcu { 478 - struct rcu_head rcu; 479 - atomic_t refcount; 480 - } ____cacheline_aligned_in_smp; 481 - 482 477 /** 483 478 * ipc_rcu_alloc - allocate ipc and rcu space 484 479 * @size: size desired ··· 500 505 return atomic_inc_not_zero(&p->refcount); 501 506 } 502 507 503 - /** 504 - * ipc_schedule_free - free ipc + rcu space 505 - * @head: RCU callback structure for queued work 506 - */ 507 - static void ipc_schedule_free(struct rcu_head *head) 508 - { 509 - vfree(container_of(head, struct ipc_rcu, rcu)); 510 - } 511 - 512 - void ipc_rcu_putref(void *ptr) 508 + void ipc_rcu_putref(void *ptr, void (*func)(struct rcu_head *head)) 513 509 { 514 510 struct ipc_rcu *p = ((struct ipc_rcu *)ptr) - 1; 515 511 516 512 if (!atomic_dec_and_test(&p->refcount)) 517 513 return; 518 514 519 - if (is_vmalloc_addr(ptr)) { 520 - call_rcu(&p->rcu, ipc_schedule_free); 521 - } else { 522 - kfree_rcu(p, rcu); 523 - } 515 + call_rcu(&p->rcu, func); 516 + } 517 + 518 + void ipc_rcu_free(struct rcu_head *head) 519 + { 520 + struct ipc_rcu *p = container_of(head, struct ipc_rcu, rcu); 521 + 522 + if (is_vmalloc_addr(p)) 523 + vfree(p); 524 + else 525 + kfree(p); 524 526 } 525 527 526 528 /**
+9 -1
ipc/util.h
··· 47 47 static inline void shm_exit_ns(struct ipc_namespace *ns) { } 48 48 #endif 49 49 50 + struct ipc_rcu { 51 + struct rcu_head rcu; 52 + atomic_t refcount; 53 + } ____cacheline_aligned_in_smp; 54 + 55 + #define ipc_rcu_to_struct(p) ((void *)(p+1)) 56 + 50 57 /* 51 58 * Structure that holds the parameters needed by the ipc operations 52 59 * (see after) ··· 127 120 */ 128 121 void* ipc_rcu_alloc(int size); 129 122 int ipc_rcu_getref(void *ptr); 130 - void ipc_rcu_putref(void *ptr); 123 + void ipc_rcu_putref(void *ptr, void (*func)(struct rcu_head *head)); 124 + void ipc_rcu_free(struct rcu_head *head); 131 125 132 126 struct kern_ipc_perm *ipc_lock(struct ipc_ids *, int); 133 127 struct kern_ipc_perm *ipc_obtain_object(struct ipc_ids *ids, int id);
+3 -2
kernel/audit.c
··· 1117 1117 1118 1118 sleep_time = timeout_start + audit_backlog_wait_time - 1119 1119 jiffies; 1120 - if ((long)sleep_time > 0) 1120 + if ((long)sleep_time > 0) { 1121 1121 wait_for_auditd(sleep_time); 1122 - continue; 1122 + continue; 1123 + } 1123 1124 } 1124 1125 if (audit_rate_check() && printk_ratelimit()) 1125 1126 printk(KERN_WARNING
+12
kernel/context_tracking.c
··· 51 51 unsigned long flags; 52 52 53 53 /* 54 + * Repeat the user_enter() check here because some archs may be calling 55 + * this from asm and if no CPU needs context tracking, they shouldn't 56 + * go further. Repeat the check here until they support the static key 57 + * check. 58 + */ 59 + if (!static_key_false(&context_tracking_enabled)) 60 + return; 61 + 62 + /* 54 63 * Some contexts may involve an exception occuring in an irq, 55 64 * leading to that nesting: 56 65 * rcu_irq_enter() rcu_user_exit() rcu_user_exit() rcu_irq_exit() ··· 159 150 void context_tracking_user_exit(void) 160 151 { 161 152 unsigned long flags; 153 + 154 + if (!static_key_false(&context_tracking_enabled)) 155 + return; 162 156 163 157 if (in_interrupt()) 164 158 return;
+21
kernel/events/core.c
··· 3660 3660 *running = ctx_time - event->tstamp_running; 3661 3661 } 3662 3662 3663 + static void perf_event_init_userpage(struct perf_event *event) 3664 + { 3665 + struct perf_event_mmap_page *userpg; 3666 + struct ring_buffer *rb; 3667 + 3668 + rcu_read_lock(); 3669 + rb = rcu_dereference(event->rb); 3670 + if (!rb) 3671 + goto unlock; 3672 + 3673 + userpg = rb->user_page; 3674 + 3675 + /* Allow new userspace to detect that bit 0 is deprecated */ 3676 + userpg->cap_bit0_is_deprecated = 1; 3677 + userpg->size = offsetof(struct perf_event_mmap_page, __reserved); 3678 + 3679 + unlock: 3680 + rcu_read_unlock(); 3681 + } 3682 + 3663 3683 void __weak arch_perf_update_userpage(struct perf_event_mmap_page *userpg, u64 now) 3664 3684 { 3665 3685 } ··· 4064 4044 ring_buffer_attach(event, rb); 4065 4045 rcu_assign_pointer(event->rb, rb); 4066 4046 4047 + perf_event_init_userpage(event); 4067 4048 perf_event_update_userpage(event); 4068 4049 4069 4050 unlock:
+3 -3
kernel/params.c
··· 254 254 255 255 256 256 STANDARD_PARAM_DEF(byte, unsigned char, "%hhu", unsigned long, kstrtoul); 257 - STANDARD_PARAM_DEF(short, short, "%hi", long, kstrtoul); 257 + STANDARD_PARAM_DEF(short, short, "%hi", long, kstrtol); 258 258 STANDARD_PARAM_DEF(ushort, unsigned short, "%hu", unsigned long, kstrtoul); 259 - STANDARD_PARAM_DEF(int, int, "%i", long, kstrtoul); 259 + STANDARD_PARAM_DEF(int, int, "%i", long, kstrtol); 260 260 STANDARD_PARAM_DEF(uint, unsigned int, "%u", unsigned long, kstrtoul); 261 - STANDARD_PARAM_DEF(long, long, "%li", long, kstrtoul); 261 + STANDARD_PARAM_DEF(long, long, "%li", long, kstrtol); 262 262 STANDARD_PARAM_DEF(ulong, unsigned long, "%lu", unsigned long, kstrtoul); 263 263 264 264 int param_set_charp(const char *val, const struct kernel_param *kp)
+8 -1
kernel/reboot.c
··· 32 32 #endif 33 33 enum reboot_mode reboot_mode DEFAULT_REBOOT_MODE; 34 34 35 - int reboot_default; 35 + /* 36 + * This variable is used privately to keep track of whether or not 37 + * reboot_type is still set to its default value (i.e., reboot= hasn't 38 + * been set on the command line). This is needed so that we can 39 + * suppress DMI scanning for reboot quirks. Without it, it's 40 + * impossible to override a faulty reboot quirk without recompiling. 41 + */ 42 + int reboot_default = 1; 36 43 int reboot_cpu; 37 44 enum reboot_type reboot_type = BOOT_ACPI; 38 45 int reboot_force;
+5 -4
kernel/sched/fair.c
··· 4242 4242 } 4243 4243 4244 4244 if (!se) { 4245 - cfs_rq->h_load = rq->avg.load_avg_contrib; 4245 + cfs_rq->h_load = cfs_rq->runnable_load_avg; 4246 4246 cfs_rq->last_h_load_update = now; 4247 4247 } 4248 4248 ··· 4823 4823 (busiest->load_per_task * SCHED_POWER_SCALE) / 4824 4824 busiest->group_power; 4825 4825 4826 - if (busiest->avg_load - local->avg_load + scaled_busy_load_per_task >= 4827 - (scaled_busy_load_per_task * imbn)) { 4826 + if (busiest->avg_load + scaled_busy_load_per_task >= 4827 + local->avg_load + (scaled_busy_load_per_task * imbn)) { 4828 4828 env->imbalance = busiest->load_per_task; 4829 4829 return; 4830 4830 } ··· 4896 4896 * max load less than avg load(as we skip the groups at or below 4897 4897 * its cpu_power, while calculating max_load..) 4898 4898 */ 4899 - if (busiest->avg_load < sds->avg_load) { 4899 + if (busiest->avg_load <= sds->avg_load || 4900 + local->avg_load >= sds->avg_load) { 4900 4901 env->imbalance = 0; 4901 4902 return fix_small_imbalance(env, sds); 4902 4903 }
+55 -5
kernel/watchdog.c
··· 486 486 .unpark = watchdog_enable, 487 487 }; 488 488 489 - static int watchdog_enable_all_cpus(void) 489 + static void restart_watchdog_hrtimer(void *info) 490 + { 491 + struct hrtimer *hrtimer = &__raw_get_cpu_var(watchdog_hrtimer); 492 + int ret; 493 + 494 + /* 495 + * No need to cancel and restart hrtimer if it is currently executing 496 + * because it will reprogram itself with the new period now. 497 + * We should never see it unqueued here because we are running per-cpu 498 + * with interrupts disabled. 499 + */ 500 + ret = hrtimer_try_to_cancel(hrtimer); 501 + if (ret == 1) 502 + hrtimer_start(hrtimer, ns_to_ktime(sample_period), 503 + HRTIMER_MODE_REL_PINNED); 504 + } 505 + 506 + static void update_timers(int cpu) 507 + { 508 + struct call_single_data data = {.func = restart_watchdog_hrtimer}; 509 + /* 510 + * Make sure that perf event counter will adopt to a new 511 + * sampling period. Updating the sampling period directly would 512 + * be much nicer but we do not have an API for that now so 513 + * let's use a big hammer. 514 + * Hrtimer will adopt the new period on the next tick but this 515 + * might be late already so we have to restart the timer as well. 
516 + */ 517 + watchdog_nmi_disable(cpu); 518 + __smp_call_function_single(cpu, &data, 1); 519 + watchdog_nmi_enable(cpu); 520 + } 521 + 522 + static void update_timers_all_cpus(void) 523 + { 524 + int cpu; 525 + 526 + get_online_cpus(); 527 + preempt_disable(); 528 + for_each_online_cpu(cpu) 529 + update_timers(cpu); 530 + preempt_enable(); 531 + put_online_cpus(); 532 + } 533 + 534 + static int watchdog_enable_all_cpus(bool sample_period_changed) 490 535 { 491 536 int err = 0; 492 537 ··· 541 496 pr_err("Failed to create watchdog threads, disabled\n"); 542 497 else 543 498 watchdog_running = 1; 499 + } else if (sample_period_changed) { 500 + update_timers_all_cpus(); 544 501 } 545 502 546 503 return err; ··· 567 520 void __user *buffer, size_t *lenp, loff_t *ppos) 568 521 { 569 522 int err, old_thresh, old_enabled; 523 + static DEFINE_MUTEX(watchdog_proc_mutex); 570 524 525 + mutex_lock(&watchdog_proc_mutex); 571 526 old_thresh = ACCESS_ONCE(watchdog_thresh); 572 527 old_enabled = ACCESS_ONCE(watchdog_user_enabled); 573 528 574 529 err = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 575 530 if (err || !write) 576 - return err; 531 + goto out; 577 532 578 533 set_sample_period(); 579 534 /* ··· 584 535 * watchdog_*_all_cpus() function takes care of this. 585 536 */ 586 537 if (watchdog_user_enabled && watchdog_thresh) 587 - err = watchdog_enable_all_cpus(); 538 + err = watchdog_enable_all_cpus(old_thresh != watchdog_thresh); 588 539 else 589 540 watchdog_disable_all_cpus(); 590 541 ··· 593 544 watchdog_thresh = old_thresh; 594 545 watchdog_user_enabled = old_enabled; 595 546 } 596 - 547 + out: 548 + mutex_unlock(&watchdog_proc_mutex); 597 549 return err; 598 550 } 599 551 #endif /* CONFIG_SYSCTL */ ··· 604 554 set_sample_period(); 605 555 606 556 if (watchdog_user_enabled) 607 - watchdog_enable_all_cpus(); 557 + watchdog_enable_all_cpus(false); 608 558 }
+1 -4
lib/kobject.c
··· 933 933 934 934 bool kobj_ns_current_may_mount(enum kobj_ns_type type) 935 935 { 936 - bool may_mount = false; 937 - 938 - if (type == KOBJ_NS_TYPE_NONE) 939 - return true; 936 + bool may_mount = true; 940 937 941 938 spin_lock(&kobj_ns_type_lock); 942 939 if ((type > KOBJ_NS_TYPE_NONE) && (type < KOBJ_NS_TYPES) &&
+20 -3
lib/lockref.c
··· 4 4 #ifdef CONFIG_CMPXCHG_LOCKREF 5 5 6 6 /* 7 + * Allow weakly-ordered memory architectures to provide barrier-less 8 + * cmpxchg semantics for lockref updates. 9 + */ 10 + #ifndef cmpxchg64_relaxed 11 + # define cmpxchg64_relaxed cmpxchg64 12 + #endif 13 + 14 + /* 15 + * Allow architectures to override the default cpu_relax() within CMPXCHG_LOOP. 16 + * This is useful for architectures with an expensive cpu_relax(). 17 + */ 18 + #ifndef arch_mutex_cpu_relax 19 + # define arch_mutex_cpu_relax() cpu_relax() 20 + #endif 21 + 22 + /* 7 23 * Note that the "cmpxchg()" reloads the "old" value for the 8 24 * failure case. 9 25 */ ··· 30 14 while (likely(arch_spin_value_unlocked(old.lock.rlock.raw_lock))) { \ 31 15 struct lockref new = old, prev = old; \ 32 16 CODE \ 33 - old.lock_count = cmpxchg64(&lockref->lock_count, \ 34 - old.lock_count, new.lock_count); \ 17 + old.lock_count = cmpxchg64_relaxed(&lockref->lock_count, \ 18 + old.lock_count, \ 19 + new.lock_count); \ 35 20 if (likely(old.lock_count == prev.lock_count)) { \ 36 21 SUCCESS; \ 37 22 } \ 38 - cpu_relax(); \ 23 + arch_mutex_cpu_relax(); \ 39 24 } \ 40 25 } while (0) 41 26
+404 -150
mm/memcontrol.c
··· 39 39 #include <linux/limits.h> 40 40 #include <linux/export.h> 41 41 #include <linux/mutex.h> 42 + #include <linux/rbtree.h> 42 43 #include <linux/slab.h> 43 44 #include <linux/swap.h> 44 45 #include <linux/swapops.h> ··· 161 160 162 161 struct mem_cgroup_reclaim_iter reclaim_iter[DEF_PRIORITY + 1]; 163 162 163 + struct rb_node tree_node; /* RB tree node */ 164 + unsigned long long usage_in_excess;/* Set to the value by which */ 165 + /* the soft limit is exceeded*/ 166 + bool on_tree; 164 167 struct mem_cgroup *memcg; /* Back pointer, we cannot */ 165 168 /* use container_of */ 166 169 }; ··· 172 167 struct mem_cgroup_per_node { 173 168 struct mem_cgroup_per_zone zoneinfo[MAX_NR_ZONES]; 174 169 }; 170 + 171 + /* 172 + * Cgroups above their limits are maintained in a RB-Tree, independent of 173 + * their hierarchy representation 174 + */ 175 + 176 + struct mem_cgroup_tree_per_zone { 177 + struct rb_root rb_root; 178 + spinlock_t lock; 179 + }; 180 + 181 + struct mem_cgroup_tree_per_node { 182 + struct mem_cgroup_tree_per_zone rb_tree_per_zone[MAX_NR_ZONES]; 183 + }; 184 + 185 + struct mem_cgroup_tree { 186 + struct mem_cgroup_tree_per_node *rb_tree_per_node[MAX_NUMNODES]; 187 + }; 188 + 189 + static struct mem_cgroup_tree soft_limit_tree __read_mostly; 175 190 176 191 struct mem_cgroup_threshold { 177 192 struct eventfd_ctx *eventfd; ··· 328 303 atomic_t numainfo_events; 329 304 atomic_t numainfo_updating; 330 305 #endif 331 - /* 332 - * Protects soft_contributed transitions. 333 - * See mem_cgroup_update_soft_limit 334 - */ 335 - spinlock_t soft_lock; 336 - 337 - /* 338 - * If true then this group has increased parents' children_in_excess 339 - * when it got over the soft limit. 340 - * When a group falls bellow the soft limit, parents' children_in_excess 341 - * is decreased and soft_contributed changed to false. 
342 - */ 343 - bool soft_contributed; 344 - 345 - /* Number of children that are in soft limit excess */ 346 - atomic_t children_in_excess; 347 306 348 307 struct mem_cgroup_per_node *nodeinfo[0]; 349 308 /* WARNING: nodeinfo must be the last member here */ ··· 431 422 * limit reclaim to prevent infinite loops, if they ever occur. 432 423 */ 433 424 #define MEM_CGROUP_MAX_RECLAIM_LOOPS 100 425 + #define MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS 2 434 426 435 427 enum charge_type { 436 428 MEM_CGROUP_CHARGE_TYPE_CACHE = 0, ··· 658 648 return mem_cgroup_zoneinfo(memcg, nid, zid); 659 649 } 660 650 651 + static struct mem_cgroup_tree_per_zone * 652 + soft_limit_tree_node_zone(int nid, int zid) 653 + { 654 + return &soft_limit_tree.rb_tree_per_node[nid]->rb_tree_per_zone[zid]; 655 + } 656 + 657 + static struct mem_cgroup_tree_per_zone * 658 + soft_limit_tree_from_page(struct page *page) 659 + { 660 + int nid = page_to_nid(page); 661 + int zid = page_zonenum(page); 662 + 663 + return &soft_limit_tree.rb_tree_per_node[nid]->rb_tree_per_zone[zid]; 664 + } 665 + 666 + static void 667 + __mem_cgroup_insert_exceeded(struct mem_cgroup *memcg, 668 + struct mem_cgroup_per_zone *mz, 669 + struct mem_cgroup_tree_per_zone *mctz, 670 + unsigned long long new_usage_in_excess) 671 + { 672 + struct rb_node **p = &mctz->rb_root.rb_node; 673 + struct rb_node *parent = NULL; 674 + struct mem_cgroup_per_zone *mz_node; 675 + 676 + if (mz->on_tree) 677 + return; 678 + 679 + mz->usage_in_excess = new_usage_in_excess; 680 + if (!mz->usage_in_excess) 681 + return; 682 + while (*p) { 683 + parent = *p; 684 + mz_node = rb_entry(parent, struct mem_cgroup_per_zone, 685 + tree_node); 686 + if (mz->usage_in_excess < mz_node->usage_in_excess) 687 + p = &(*p)->rb_left; 688 + /* 689 + * We can't avoid mem cgroups that are over their soft 690 + * limit by the same amount 691 + */ 692 + else if (mz->usage_in_excess >= mz_node->usage_in_excess) 693 + p = &(*p)->rb_right; 694 + } 695 + 
rb_link_node(&mz->tree_node, parent, p); 696 + rb_insert_color(&mz->tree_node, &mctz->rb_root); 697 + mz->on_tree = true; 698 + } 699 + 700 + static void 701 + __mem_cgroup_remove_exceeded(struct mem_cgroup *memcg, 702 + struct mem_cgroup_per_zone *mz, 703 + struct mem_cgroup_tree_per_zone *mctz) 704 + { 705 + if (!mz->on_tree) 706 + return; 707 + rb_erase(&mz->tree_node, &mctz->rb_root); 708 + mz->on_tree = false; 709 + } 710 + 711 + static void 712 + mem_cgroup_remove_exceeded(struct mem_cgroup *memcg, 713 + struct mem_cgroup_per_zone *mz, 714 + struct mem_cgroup_tree_per_zone *mctz) 715 + { 716 + spin_lock(&mctz->lock); 717 + __mem_cgroup_remove_exceeded(memcg, mz, mctz); 718 + spin_unlock(&mctz->lock); 719 + } 720 + 721 + 722 + static void mem_cgroup_update_tree(struct mem_cgroup *memcg, struct page *page) 723 + { 724 + unsigned long long excess; 725 + struct mem_cgroup_per_zone *mz; 726 + struct mem_cgroup_tree_per_zone *mctz; 727 + int nid = page_to_nid(page); 728 + int zid = page_zonenum(page); 729 + mctz = soft_limit_tree_from_page(page); 730 + 731 + /* 732 + * Necessary to update all ancestors when hierarchy is used. 733 + * because their event counter is not touched. 734 + */ 735 + for (; memcg; memcg = parent_mem_cgroup(memcg)) { 736 + mz = mem_cgroup_zoneinfo(memcg, nid, zid); 737 + excess = res_counter_soft_limit_excess(&memcg->res); 738 + /* 739 + * We have to update the tree if mz is on RB-tree or 740 + * mem is over its softlimit. 741 + */ 742 + if (excess || mz->on_tree) { 743 + spin_lock(&mctz->lock); 744 + /* if on-tree, remove it */ 745 + if (mz->on_tree) 746 + __mem_cgroup_remove_exceeded(memcg, mz, mctz); 747 + /* 748 + * Insert again. mz->usage_in_excess will be updated. 749 + * If excess is 0, no tree ops. 
750 + */ 751 + __mem_cgroup_insert_exceeded(memcg, mz, mctz, excess); 752 + spin_unlock(&mctz->lock); 753 + } 754 + } 755 + } 756 + 757 + static void mem_cgroup_remove_from_trees(struct mem_cgroup *memcg) 758 + { 759 + int node, zone; 760 + struct mem_cgroup_per_zone *mz; 761 + struct mem_cgroup_tree_per_zone *mctz; 762 + 763 + for_each_node(node) { 764 + for (zone = 0; zone < MAX_NR_ZONES; zone++) { 765 + mz = mem_cgroup_zoneinfo(memcg, node, zone); 766 + mctz = soft_limit_tree_node_zone(node, zone); 767 + mem_cgroup_remove_exceeded(memcg, mz, mctz); 768 + } 769 + } 770 + } 771 + 772 + static struct mem_cgroup_per_zone * 773 + __mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz) 774 + { 775 + struct rb_node *rightmost = NULL; 776 + struct mem_cgroup_per_zone *mz; 777 + 778 + retry: 779 + mz = NULL; 780 + rightmost = rb_last(&mctz->rb_root); 781 + if (!rightmost) 782 + goto done; /* Nothing to reclaim from */ 783 + 784 + mz = rb_entry(rightmost, struct mem_cgroup_per_zone, tree_node); 785 + /* 786 + * Remove the node now but someone else can add it back, 787 + * we will to add it back at the end of reclaim to its correct 788 + * position in the tree. 789 + */ 790 + __mem_cgroup_remove_exceeded(mz->memcg, mz, mctz); 791 + if (!res_counter_soft_limit_excess(&mz->memcg->res) || 792 + !css_tryget(&mz->memcg->css)) 793 + goto retry; 794 + done: 795 + return mz; 796 + } 797 + 798 + static struct mem_cgroup_per_zone * 799 + mem_cgroup_largest_soft_limit_node(struct mem_cgroup_tree_per_zone *mctz) 800 + { 801 + struct mem_cgroup_per_zone *mz; 802 + 803 + spin_lock(&mctz->lock); 804 + mz = __mem_cgroup_largest_soft_limit_node(mctz); 805 + spin_unlock(&mctz->lock); 806 + return mz; 807 + } 808 + 661 809 /* 662 810 * Implementation Note: reading percpu statistics for memcg. 
663 811 * ··· 990 822 } 991 823 992 824 /* 993 - * Called from rate-limited memcg_check_events when enough 994 - * MEM_CGROUP_TARGET_SOFTLIMIT events are accumulated and it makes sure 995 - * that all the parents up the hierarchy will be notified that this group 996 - * is in excess or that it is not in excess anymore. mmecg->soft_contributed 997 - * makes the transition a single action whenever the state flips from one to 998 - * the other. 999 - */ 1000 - static void mem_cgroup_update_soft_limit(struct mem_cgroup *memcg) 1001 - { 1002 - unsigned long long excess = res_counter_soft_limit_excess(&memcg->res); 1003 - struct mem_cgroup *parent = memcg; 1004 - int delta = 0; 1005 - 1006 - spin_lock(&memcg->soft_lock); 1007 - if (excess) { 1008 - if (!memcg->soft_contributed) { 1009 - delta = 1; 1010 - memcg->soft_contributed = true; 1011 - } 1012 - } else { 1013 - if (memcg->soft_contributed) { 1014 - delta = -1; 1015 - memcg->soft_contributed = false; 1016 - } 1017 - } 1018 - 1019 - /* 1020 - * Necessary to update all ancestors when hierarchy is used 1021 - * because their event counter is not touched. 1022 - * We track children even outside the hierarchy for the root 1023 - * cgroup because tree walk starting at root should visit 1024 - * all cgroups and we want to prevent from pointless tree 1025 - * walk if no children is below the limit. 1026 - */ 1027 - while (delta && (parent = parent_mem_cgroup(parent))) 1028 - atomic_add(delta, &parent->children_in_excess); 1029 - if (memcg != root_mem_cgroup && !root_mem_cgroup->use_hierarchy) 1030 - atomic_add(delta, &root_mem_cgroup->children_in_excess); 1031 - spin_unlock(&memcg->soft_lock); 1032 - } 1033 - 1034 - /* 1035 825 * Check events in order. 
1036 826 * 1037 827 */ ··· 1012 886 1013 887 mem_cgroup_threshold(memcg); 1014 888 if (unlikely(do_softlimit)) 1015 - mem_cgroup_update_soft_limit(memcg); 889 + mem_cgroup_update_tree(memcg, page); 1016 890 #if MAX_NUMNODES > 1 1017 891 if (unlikely(do_numainfo)) 1018 892 atomic_inc(&memcg->numainfo_events); ··· 1055 929 return memcg; 1056 930 } 1057 931 1058 - static enum mem_cgroup_filter_t 1059 - mem_cgroup_filter(struct mem_cgroup *memcg, struct mem_cgroup *root, 1060 - mem_cgroup_iter_filter cond) 1061 - { 1062 - if (!cond) 1063 - return VISIT; 1064 - return cond(memcg, root); 1065 - } 1066 - 1067 932 /* 1068 933 * Returns a next (in a pre-order walk) alive memcg (with elevated css 1069 934 * ref. count) or NULL if the whole root's subtree has been visited. ··· 1062 945 * helper function to be used by mem_cgroup_iter 1063 946 */ 1064 947 static struct mem_cgroup *__mem_cgroup_iter_next(struct mem_cgroup *root, 1065 - struct mem_cgroup *last_visited, mem_cgroup_iter_filter cond) 948 + struct mem_cgroup *last_visited) 1066 949 { 1067 950 struct cgroup_subsys_state *prev_css, *next_css; 1068 951 ··· 1080 963 if (next_css) { 1081 964 struct mem_cgroup *mem = mem_cgroup_from_css(next_css); 1082 965 1083 - switch (mem_cgroup_filter(mem, root, cond)) { 1084 - case SKIP: 966 + if (css_tryget(&mem->css)) 967 + return mem; 968 + else { 1085 969 prev_css = next_css; 1086 970 goto skip_node; 1087 - case SKIP_TREE: 1088 - if (mem == root) 1089 - return NULL; 1090 - /* 1091 - * css_rightmost_descendant is not an optimal way to 1092 - * skip through a subtree (especially for imbalanced 1093 - * trees leaning to right) but that's what we have right 1094 - * now. More effective solution would be traversing 1095 - * right-up for first non-NULL without calling 1096 - * css_next_descendant_pre afterwards. 
1097 - */ 1098 - prev_css = css_rightmost_descendant(next_css); 1099 - goto skip_node; 1100 - case VISIT: 1101 - if (css_tryget(&mem->css)) 1102 - return mem; 1103 - else { 1104 - prev_css = next_css; 1105 - goto skip_node; 1106 - } 1107 - break; 1108 971 } 1109 972 } 1110 973 ··· 1148 1051 * @root: hierarchy root 1149 1052 * @prev: previously returned memcg, NULL on first invocation 1150 1053 * @reclaim: cookie for shared reclaim walks, NULL for full walks 1151 - * @cond: filter for visited nodes, NULL for no filter 1152 1054 * 1153 1055 * Returns references to children of the hierarchy below @root, or 1154 1056 * @root itself, or %NULL after a full round-trip. ··· 1160 1064 * divide up the memcgs in the hierarchy among all concurrent 1161 1065 * reclaimers operating on the same zone and priority. 1162 1066 */ 1163 - struct mem_cgroup *mem_cgroup_iter_cond(struct mem_cgroup *root, 1067 + struct mem_cgroup *mem_cgroup_iter(struct mem_cgroup *root, 1164 1068 struct mem_cgroup *prev, 1165 - struct mem_cgroup_reclaim_cookie *reclaim, 1166 - mem_cgroup_iter_filter cond) 1069 + struct mem_cgroup_reclaim_cookie *reclaim) 1167 1070 { 1168 1071 struct mem_cgroup *memcg = NULL; 1169 1072 struct mem_cgroup *last_visited = NULL; 1170 1073 1171 - if (mem_cgroup_disabled()) { 1172 - /* first call must return non-NULL, second return NULL */ 1173 - return (struct mem_cgroup *)(unsigned long)!prev; 1174 - } 1074 + if (mem_cgroup_disabled()) 1075 + return NULL; 1175 1076 1176 1077 if (!root) 1177 1078 root = root_mem_cgroup; ··· 1179 1086 if (!root->use_hierarchy && root != root_mem_cgroup) { 1180 1087 if (prev) 1181 1088 goto out_css_put; 1182 - if (mem_cgroup_filter(root, root, cond) == VISIT) 1183 - return root; 1184 - return NULL; 1089 + return root; 1185 1090 } 1186 1091 1187 1092 rcu_read_lock(); ··· 1202 1111 last_visited = mem_cgroup_iter_load(iter, root, &seq); 1203 1112 } 1204 1113 1205 - memcg = __mem_cgroup_iter_next(root, last_visited, cond); 1114 + memcg = 
__mem_cgroup_iter_next(root, last_visited); 1206 1115 1207 1116 if (reclaim) { 1208 1117 mem_cgroup_iter_update(iter, last_visited, memcg, seq); ··· 1213 1122 reclaim->generation = iter->generation; 1214 1123 } 1215 1124 1216 - /* 1217 - * We have finished the whole tree walk or no group has been 1218 - * visited because filter told us to skip the root node. 1219 - */ 1220 - if (!memcg && (prev || (cond && !last_visited))) 1125 + if (prev && !memcg) 1221 1126 goto out_unlock; 1222 1127 } 1223 1128 out_unlock: ··· 1854 1767 return total; 1855 1768 } 1856 1769 1857 - #if MAX_NUMNODES > 1 1858 1770 /** 1859 1771 * test_mem_cgroup_node_reclaimable 1860 1772 * @memcg: the target memcg ··· 1876 1790 return false; 1877 1791 1878 1792 } 1793 + #if MAX_NUMNODES > 1 1879 1794 1880 1795 /* 1881 1796 * Always updating the nodemask is not very good - even if we have an empty ··· 1944 1857 return node; 1945 1858 } 1946 1859 1860 + /* 1861 + * Check all nodes whether it contains reclaimable pages or not. 1862 + * For quick scan, we make use of scan_nodes. This will allow us to skip 1863 + * unused nodes. But scan_nodes is lazily updated and may not cotain 1864 + * enough new information. We need to do double check. 1865 + */ 1866 + static bool mem_cgroup_reclaimable(struct mem_cgroup *memcg, bool noswap) 1867 + { 1868 + int nid; 1869 + 1870 + /* 1871 + * quick check...making use of scan_node. 1872 + * We can skip unused nodes. 1873 + */ 1874 + if (!nodes_empty(memcg->scan_nodes)) { 1875 + for (nid = first_node(memcg->scan_nodes); 1876 + nid < MAX_NUMNODES; 1877 + nid = next_node(nid, memcg->scan_nodes)) { 1878 + 1879 + if (test_mem_cgroup_node_reclaimable(memcg, nid, noswap)) 1880 + return true; 1881 + } 1882 + } 1883 + /* 1884 + * Check rest of nodes. 
1885 + */ 1886 + for_each_node_state(nid, N_MEMORY) { 1887 + if (node_isset(nid, memcg->scan_nodes)) 1888 + continue; 1889 + if (test_mem_cgroup_node_reclaimable(memcg, nid, noswap)) 1890 + return true; 1891 + } 1892 + return false; 1893 + } 1894 + 1947 1895 #else 1948 1896 int mem_cgroup_select_victim_node(struct mem_cgroup *memcg) 1949 1897 { 1950 1898 return 0; 1951 1899 } 1952 1900 1901 + static bool mem_cgroup_reclaimable(struct mem_cgroup *memcg, bool noswap) 1902 + { 1903 + return test_mem_cgroup_node_reclaimable(memcg, 0, noswap); 1904 + } 1953 1905 #endif 1954 1906 1955 - /* 1956 - * A group is eligible for the soft limit reclaim under the given root 1957 - * hierarchy if 1958 - * a) it is over its soft limit 1959 - * b) any parent up the hierarchy is over its soft limit 1960 - * 1961 - * If the given group doesn't have any children over the limit then it 1962 - * doesn't make any sense to iterate its subtree. 1963 - */ 1964 - enum mem_cgroup_filter_t 1965 - mem_cgroup_soft_reclaim_eligible(struct mem_cgroup *memcg, 1966 - struct mem_cgroup *root) 1907 + static int mem_cgroup_soft_reclaim(struct mem_cgroup *root_memcg, 1908 + struct zone *zone, 1909 + gfp_t gfp_mask, 1910 + unsigned long *total_scanned) 1967 1911 { 1968 - struct mem_cgroup *parent; 1912 + struct mem_cgroup *victim = NULL; 1913 + int total = 0; 1914 + int loop = 0; 1915 + unsigned long excess; 1916 + unsigned long nr_scanned; 1917 + struct mem_cgroup_reclaim_cookie reclaim = { 1918 + .zone = zone, 1919 + .priority = 0, 1920 + }; 1969 1921 1970 - if (!memcg) 1971 - memcg = root_mem_cgroup; 1972 - parent = memcg; 1922 + excess = res_counter_soft_limit_excess(&root_memcg->res) >> PAGE_SHIFT; 1973 1923 1974 - if (res_counter_soft_limit_excess(&memcg->res)) 1975 - return VISIT; 1976 - 1977 - /* 1978 - * If any parent up to the root in the hierarchy is over its soft limit 1979 - * then we have to obey and reclaim from this group as well. 
1980 - */ 1981 - while ((parent = parent_mem_cgroup(parent))) { 1982 - if (res_counter_soft_limit_excess(&parent->res)) 1983 - return VISIT; 1984 - if (parent == root) 1924 + while (1) { 1925 + victim = mem_cgroup_iter(root_memcg, victim, &reclaim); 1926 + if (!victim) { 1927 + loop++; 1928 + if (loop >= 2) { 1929 + /* 1930 + * If we have not been able to reclaim 1931 + * anything, it might because there are 1932 + * no reclaimable pages under this hierarchy 1933 + */ 1934 + if (!total) 1935 + break; 1936 + /* 1937 + * We want to do more targeted reclaim. 1938 + * excess >> 2 is not to excessive so as to 1939 + * reclaim too much, nor too less that we keep 1940 + * coming back to reclaim from this cgroup 1941 + */ 1942 + if (total >= (excess >> 2) || 1943 + (loop > MEM_CGROUP_MAX_RECLAIM_LOOPS)) 1944 + break; 1945 + } 1946 + continue; 1947 + } 1948 + if (!mem_cgroup_reclaimable(victim, false)) 1949 + continue; 1950 + total += mem_cgroup_shrink_node_zone(victim, gfp_mask, false, 1951 + zone, &nr_scanned); 1952 + *total_scanned += nr_scanned; 1953 + if (!res_counter_soft_limit_excess(&root_memcg->res)) 1985 1954 break; 1986 1955 } 1987 - 1988 - if (!atomic_read(&memcg->children_in_excess)) 1989 - return SKIP_TREE; 1990 - return SKIP; 1956 + mem_cgroup_iter_break(root_memcg, victim); 1957 + return total; 1991 1958 } 1992 1959 1993 1960 static DEFINE_SPINLOCK(memcg_oom_lock); ··· 2953 2812 unlock_page_cgroup(pc); 2954 2813 2955 2814 /* 2956 - * "charge_statistics" updated event counter. 2815 + * "charge_statistics" updated event counter. Then, check it. 2816 + * Insert ancestor (and ancestor's ancestors), to softlimit RB-tree. 2817 + * if they exceeds softlimit. 
2957 2818 */ 2958 2819 memcg_check_events(memcg, page); 2959 2820 } ··· 4790 4647 return ret; 4791 4648 } 4792 4649 4650 + unsigned long mem_cgroup_soft_limit_reclaim(struct zone *zone, int order, 4651 + gfp_t gfp_mask, 4652 + unsigned long *total_scanned) 4653 + { 4654 + unsigned long nr_reclaimed = 0; 4655 + struct mem_cgroup_per_zone *mz, *next_mz = NULL; 4656 + unsigned long reclaimed; 4657 + int loop = 0; 4658 + struct mem_cgroup_tree_per_zone *mctz; 4659 + unsigned long long excess; 4660 + unsigned long nr_scanned; 4661 + 4662 + if (order > 0) 4663 + return 0; 4664 + 4665 + mctz = soft_limit_tree_node_zone(zone_to_nid(zone), zone_idx(zone)); 4666 + /* 4667 + * This loop can run a while, specially if mem_cgroup's continuously 4668 + * keep exceeding their soft limit and putting the system under 4669 + * pressure 4670 + */ 4671 + do { 4672 + if (next_mz) 4673 + mz = next_mz; 4674 + else 4675 + mz = mem_cgroup_largest_soft_limit_node(mctz); 4676 + if (!mz) 4677 + break; 4678 + 4679 + nr_scanned = 0; 4680 + reclaimed = mem_cgroup_soft_reclaim(mz->memcg, zone, 4681 + gfp_mask, &nr_scanned); 4682 + nr_reclaimed += reclaimed; 4683 + *total_scanned += nr_scanned; 4684 + spin_lock(&mctz->lock); 4685 + 4686 + /* 4687 + * If we failed to reclaim anything from this memory cgroup 4688 + * it is time to move on to the next cgroup 4689 + */ 4690 + next_mz = NULL; 4691 + if (!reclaimed) { 4692 + do { 4693 + /* 4694 + * Loop until we find yet another one. 4695 + * 4696 + * By the time we get the soft_limit lock 4697 + * again, someone might have aded the 4698 + * group back on the RB tree. Iterate to 4699 + * make sure we get a different mem. 
4700 + * mem_cgroup_largest_soft_limit_node returns 4701 + * NULL if no other cgroup is present on 4702 + * the tree 4703 + */ 4704 + next_mz = 4705 + __mem_cgroup_largest_soft_limit_node(mctz); 4706 + if (next_mz == mz) 4707 + css_put(&next_mz->memcg->css); 4708 + else /* next_mz == NULL or other memcg */ 4709 + break; 4710 + } while (1); 4711 + } 4712 + __mem_cgroup_remove_exceeded(mz->memcg, mz, mctz); 4713 + excess = res_counter_soft_limit_excess(&mz->memcg->res); 4714 + /* 4715 + * One school of thought says that we should not add 4716 + * back the node to the tree if reclaim returns 0. 4717 + * But our reclaim could return 0, simply because due 4718 + * to priority we are exposing a smaller subset of 4719 + * memory to reclaim from. Consider this as a longer 4720 + * term TODO. 4721 + */ 4722 + /* If excess == 0, no tree ops */ 4723 + __mem_cgroup_insert_exceeded(mz->memcg, mz, mctz, excess); 4724 + spin_unlock(&mctz->lock); 4725 + css_put(&mz->memcg->css); 4726 + loop++; 4727 + /* 4728 + * Could not reclaim anything and there are no more 4729 + * mem cgroups to try or we seem to be looping without 4730 + * reclaiming anything. 
4731 + */ 4732 + if (!nr_reclaimed && 4733 + (next_mz == NULL || 4734 + loop > MEM_CGROUP_MAX_SOFT_LIMIT_RECLAIM_LOOPS)) 4735 + break; 4736 + } while (!nr_reclaimed); 4737 + if (next_mz) 4738 + css_put(&next_mz->memcg->css); 4739 + return nr_reclaimed; 4740 + } 4741 + 4793 4742 /** 4794 4743 * mem_cgroup_force_empty_list - clears LRU of a group 4795 4744 * @memcg: group to clear ··· 6146 5911 for (zone = 0; zone < MAX_NR_ZONES; zone++) { 6147 5912 mz = &pn->zoneinfo[zone]; 6148 5913 lruvec_init(&mz->lruvec); 5914 + mz->usage_in_excess = 0; 5915 + mz->on_tree = false; 6149 5916 mz->memcg = memcg; 6150 5917 } 6151 5918 memcg->nodeinfo[node] = pn; ··· 6203 5966 int node; 6204 5967 size_t size = memcg_size(); 6205 5968 5969 + mem_cgroup_remove_from_trees(memcg); 6206 5970 free_css_id(&mem_cgroup_subsys, &memcg->css); 6207 5971 6208 5972 for_each_node(node) ··· 6240 6002 } 6241 6003 EXPORT_SYMBOL(parent_mem_cgroup); 6242 6004 6005 + static void __init mem_cgroup_soft_limit_tree_init(void) 6006 + { 6007 + struct mem_cgroup_tree_per_node *rtpn; 6008 + struct mem_cgroup_tree_per_zone *rtpz; 6009 + int tmp, node, zone; 6010 + 6011 + for_each_node(node) { 6012 + tmp = node; 6013 + if (!node_state(node, N_NORMAL_MEMORY)) 6014 + tmp = -1; 6015 + rtpn = kzalloc_node(sizeof(*rtpn), GFP_KERNEL, tmp); 6016 + BUG_ON(!rtpn); 6017 + 6018 + soft_limit_tree.rb_tree_per_node[node] = rtpn; 6019 + 6020 + for (zone = 0; zone < MAX_NR_ZONES; zone++) { 6021 + rtpz = &rtpn->rb_tree_per_zone[zone]; 6022 + rtpz->rb_root = RB_ROOT; 6023 + spin_lock_init(&rtpz->lock); 6024 + } 6025 + } 6026 + } 6027 + 6243 6028 static struct cgroup_subsys_state * __ref 6244 6029 mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) 6245 6030 { ··· 6292 6031 mutex_init(&memcg->thresholds_lock); 6293 6032 spin_lock_init(&memcg->move_lock); 6294 6033 vmpressure_init(&memcg->vmpressure); 6295 - spin_lock_init(&memcg->soft_lock); 6296 6034 6297 6035 return &memcg->css; 6298 6036 ··· 6369 6109 6370 6110 
mem_cgroup_invalidate_reclaim_iterators(memcg); 6371 6111 mem_cgroup_reparent_charges(memcg); 6372 - if (memcg->soft_contributed) { 6373 - while ((memcg = parent_mem_cgroup(memcg))) 6374 - atomic_dec(&memcg->children_in_excess); 6375 - 6376 - if (memcg != root_mem_cgroup && !root_mem_cgroup->use_hierarchy) 6377 - atomic_dec(&root_mem_cgroup->children_in_excess); 6378 - } 6379 6112 mem_cgroup_destroy_all_caches(memcg); 6380 6113 vmpressure_cleanup(&memcg->vmpressure); 6381 6114 } ··· 7043 6790 { 7044 6791 hotcpu_notifier(memcg_cpu_hotplug_callback, 0); 7045 6792 enable_swap_cgroup(); 6793 + mem_cgroup_soft_limit_tree_init(); 7046 6794 memcg_stock_init(); 7047 6795 return 0; 7048 6796 }
+1
mm/mlock.c
··· 736 736 737 737 /* Ignore errors */ 738 738 mlock_fixup(vma, &prev, vma->vm_start, vma->vm_end, newflags); 739 + cond_resched(); 739 740 } 740 741 out: 741 742 return 0;
+31 -52
mm/vmscan.c
··· 139 139 { 140 140 return !sc->target_mem_cgroup; 141 141 } 142 - 143 - static bool mem_cgroup_should_soft_reclaim(struct scan_control *sc) 144 - { 145 - struct mem_cgroup *root = sc->target_mem_cgroup; 146 - return !mem_cgroup_disabled() && 147 - mem_cgroup_soft_reclaim_eligible(root, root) != SKIP_TREE; 148 - } 149 142 #else 150 143 static bool global_reclaim(struct scan_control *sc) 151 144 { 152 145 return true; 153 - } 154 - 155 - static bool mem_cgroup_should_soft_reclaim(struct scan_control *sc) 156 - { 157 - return false; 158 146 } 159 147 #endif 160 148 ··· 2164 2176 } 2165 2177 } 2166 2178 2167 - static int 2168 - __shrink_zone(struct zone *zone, struct scan_control *sc, bool soft_reclaim) 2179 + static void shrink_zone(struct zone *zone, struct scan_control *sc) 2169 2180 { 2170 2181 unsigned long nr_reclaimed, nr_scanned; 2171 - int groups_scanned = 0; 2172 2182 2173 2183 do { 2174 2184 struct mem_cgroup *root = sc->target_mem_cgroup; ··· 2174 2188 .zone = zone, 2175 2189 .priority = sc->priority, 2176 2190 }; 2177 - struct mem_cgroup *memcg = NULL; 2178 - mem_cgroup_iter_filter filter = (soft_reclaim) ? 
2179 - mem_cgroup_soft_reclaim_eligible : NULL; 2191 + struct mem_cgroup *memcg; 2180 2192 2181 2193 nr_reclaimed = sc->nr_reclaimed; 2182 2194 nr_scanned = sc->nr_scanned; 2183 2195 2184 - while ((memcg = mem_cgroup_iter_cond(root, memcg, &reclaim, filter))) { 2196 + memcg = mem_cgroup_iter(root, NULL, &reclaim); 2197 + do { 2185 2198 struct lruvec *lruvec; 2186 2199 2187 - groups_scanned++; 2188 2200 lruvec = mem_cgroup_zone_lruvec(zone, memcg); 2189 2201 2190 2202 shrink_lruvec(lruvec, sc); ··· 2202 2218 mem_cgroup_iter_break(root, memcg); 2203 2219 break; 2204 2220 } 2205 - } 2221 + memcg = mem_cgroup_iter(root, memcg, &reclaim); 2222 + } while (memcg); 2206 2223 2207 2224 vmpressure(sc->gfp_mask, sc->target_mem_cgroup, 2208 2225 sc->nr_scanned - nr_scanned, ··· 2211 2226 2212 2227 } while (should_continue_reclaim(zone, sc->nr_reclaimed - nr_reclaimed, 2213 2228 sc->nr_scanned - nr_scanned, sc)); 2214 - 2215 - return groups_scanned; 2216 - } 2217 - 2218 - 2219 - static void shrink_zone(struct zone *zone, struct scan_control *sc) 2220 - { 2221 - bool do_soft_reclaim = mem_cgroup_should_soft_reclaim(sc); 2222 - unsigned long nr_scanned = sc->nr_scanned; 2223 - int scanned_groups; 2224 - 2225 - scanned_groups = __shrink_zone(zone, sc, do_soft_reclaim); 2226 - /* 2227 - * memcg iterator might race with other reclaimer or start from 2228 - * a incomplete tree walk so the tree walk in __shrink_zone 2229 - * might have missed groups that are above the soft limit. Try 2230 - * another loop to catch up with others. Do it just once to 2231 - * prevent from reclaim latencies when other reclaimers always 2232 - * preempt this one. 
2233 - */ 2234 - if (do_soft_reclaim && !scanned_groups) 2235 - __shrink_zone(zone, sc, do_soft_reclaim); 2236 - 2237 - /* 2238 - * No group is over the soft limit or those that are do not have 2239 - * pages in the zone we are reclaiming so we have to reclaim everybody 2240 - */ 2241 - if (do_soft_reclaim && (sc->nr_scanned == nr_scanned)) { 2242 - __shrink_zone(zone, sc, false); 2243 - return; 2244 - } 2245 2229 } 2246 2230 2247 2231 /* Returns true if compaction should go ahead for a high-order request */ ··· 2274 2320 { 2275 2321 struct zoneref *z; 2276 2322 struct zone *zone; 2323 + unsigned long nr_soft_reclaimed; 2324 + unsigned long nr_soft_scanned; 2277 2325 bool aborted_reclaim = false; 2278 2326 2279 2327 /* ··· 2315 2359 continue; 2316 2360 } 2317 2361 } 2362 + /* 2363 + * This steals pages from memory cgroups over softlimit 2364 + * and returns the number of reclaimed pages and 2365 + * scanned pages. This works for global memory pressure 2366 + * and balancing, not for a memcg's limit. 2367 + */ 2368 + nr_soft_scanned = 0; 2369 + nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone, 2370 + sc->order, sc->gfp_mask, 2371 + &nr_soft_scanned); 2372 + sc->nr_reclaimed += nr_soft_reclaimed; 2373 + sc->nr_scanned += nr_soft_scanned; 2318 2374 /* need some check for avoid more shrink_zone() */ 2319 2375 } 2320 2376 ··· 2920 2952 { 2921 2953 int i; 2922 2954 int end_zone = 0; /* Inclusive. 0 = ZONE_DMA */ 2955 + unsigned long nr_soft_reclaimed; 2956 + unsigned long nr_soft_scanned; 2923 2957 struct scan_control sc = { 2924 2958 .gfp_mask = GFP_KERNEL, 2925 2959 .priority = DEF_PRIORITY, ··· 3035 3065 continue; 3036 3066 3037 3067 sc.nr_scanned = 0; 3068 + 3069 + nr_soft_scanned = 0; 3070 + /* 3071 + * Call soft limit reclaim before calling shrink_zone. 
3072 + */ 3073 + nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone, 3074 + order, sc.gfp_mask, 3075 + &nr_soft_scanned); 3076 + sc.nr_reclaimed += nr_soft_reclaimed; 3038 3077 3039 3078 /* 3040 3079 * There should be no need to raise the scanning
+2 -2
scripts/checkpatch.pl
··· 3975 3975 # check for new externs in .h files. 3976 3976 if ($realfile =~ /\.h$/ && 3977 3977 $line =~ /^\+\s*(extern\s+)$Type\s*$Ident\s*\(/s) { 3978 - if (WARN("AVOID_EXTERNS", 3979 - "extern prototypes should be avoided in .h files\n" . $herecurr) && 3978 + if (CHK("AVOID_EXTERNS", 3979 + "extern prototypes should be avoided in .h files\n" . $herecurr) && 3980 3980 $fix) { 3981 3981 $fixed[$linenr - 1] =~ s/(.*)\bextern\b\s*(.*)/$1$2/; 3982 3982 }
+14 -1
sound/core/compress_offload.c
··· 139 139 static int snd_compr_free(struct inode *inode, struct file *f) 140 140 { 141 141 struct snd_compr_file *data = f->private_data; 142 + struct snd_compr_runtime *runtime = data->stream.runtime; 143 + 144 + switch (runtime->state) { 145 + case SNDRV_PCM_STATE_RUNNING: 146 + case SNDRV_PCM_STATE_DRAINING: 147 + case SNDRV_PCM_STATE_PAUSED: 148 + data->stream.ops->trigger(&data->stream, SNDRV_PCM_TRIGGER_STOP); 149 + break; 150 + default: 151 + break; 152 + } 153 + 142 154 data->stream.ops->free(&data->stream); 143 155 kfree(data->stream.runtime->buffer); 144 156 kfree(data->stream.runtime); ··· 849 837 struct snd_compr *compr; 850 838 851 839 compr = device->device_data; 852 - snd_unregister_device(compr->direction, compr->card, compr->device); 840 + snd_unregister_device(SNDRV_DEVICE_TYPE_COMPRESS, compr->card, 841 + compr->device); 853 842 return 0; 854 843 } 855 844
+67 -5
sound/pci/hda/patch_cirrus.c
··· 111 111 /* 0x0009 - 0x0014 -> 12 test regs */ 112 112 /* 0x0015 - visibility reg */ 113 113 114 + /* Cirrus Logic CS4208 */ 115 + #define CS4208_VENDOR_NID 0x24 116 + 114 117 /* 115 118 * Cirrus Logic CS4210 116 119 * ··· 226 223 {} /* terminator */ 227 224 }; 228 225 226 + static const struct hda_verb cs4208_coef_init_verbs[] = { 227 + {0x01, AC_VERB_SET_POWER_STATE, 0x00}, /* AFG: D0 */ 228 + {0x24, AC_VERB_SET_PROC_STATE, 0x01}, /* VPW: processing on */ 229 + {0x24, AC_VERB_SET_COEF_INDEX, 0x0033}, 230 + {0x24, AC_VERB_SET_PROC_COEF, 0x0001}, /* A1 ICS */ 231 + {0x24, AC_VERB_SET_COEF_INDEX, 0x0034}, 232 + {0x24, AC_VERB_SET_PROC_COEF, 0x1C01}, /* A1 Enable, A Thresh = 300mV */ 233 + {} /* terminator */ 234 + }; 235 + 229 236 /* Errata: CS4207 rev C0/C1/C2 Silicon 230 237 * 231 238 * http://www.cirrus.com/en/pubs/errata/ER880C3.pdf ··· 308 295 /* init_verb sequence for C0/C1/C2 errata*/ 309 296 snd_hda_sequence_write(codec, cs_errata_init_verbs); 310 297 snd_hda_sequence_write(codec, cs_coef_init_verbs); 298 + } else if (spec->vendor_nid == CS4208_VENDOR_NID) { 299 + snd_hda_sequence_write(codec, cs4208_coef_init_verbs); 311 300 } 312 301 313 302 snd_hda_gen_init(codec); ··· 449 434 {} /* terminator */ 450 435 }; 451 436 437 + static const struct hda_pintbl mba6_pincfgs[] = { 438 + { 0x10, 0x032120f0 }, /* HP */ 439 + { 0x11, 0x500000f0 }, 440 + { 0x12, 0x90100010 }, /* Speaker */ 441 + { 0x13, 0x500000f0 }, 442 + { 0x14, 0x500000f0 }, 443 + { 0x15, 0x770000f0 }, 444 + { 0x16, 0x770000f0 }, 445 + { 0x17, 0x430000f0 }, 446 + { 0x18, 0x43ab9030 }, /* Mic */ 447 + { 0x19, 0x770000f0 }, 448 + { 0x1a, 0x770000f0 }, 449 + { 0x1b, 0x770000f0 }, 450 + { 0x1c, 0x90a00090 }, 451 + { 0x1d, 0x500000f0 }, 452 + { 0x1e, 0x500000f0 }, 453 + { 0x1f, 0x500000f0 }, 454 + { 0x20, 0x500000f0 }, 455 + { 0x21, 0x430000f0 }, 456 + { 0x22, 0x430000f0 }, 457 + {} /* terminator */ 458 + }; 459 + 452 460 static void cs420x_fixup_gpio_13(struct hda_codec *codec, 453 461 const struct 
hda_fixup *fix, int action) 454 462 { ··· 594 556 595 557 /* 596 558 * CS4208 support: 597 - * Its layout is no longer compatible with CS4206/CS4207, and the generic 598 - * parser seems working fairly well, except for trivial fixups. 559 + * Its layout is no longer compatible with CS4206/CS4207 599 560 */ 600 561 enum { 562 + CS4208_MBA6, 601 563 CS4208_GPIO0, 602 564 }; 603 565 604 566 static const struct hda_model_fixup cs4208_models[] = { 605 567 { .id = CS4208_GPIO0, .name = "gpio0" }, 568 + { .id = CS4208_MBA6, .name = "mba6" }, 606 569 {} 607 570 }; 608 571 609 572 static const struct snd_pci_quirk cs4208_fixup_tbl[] = { 610 573 /* codec SSID */ 611 - SND_PCI_QUIRK(0x106b, 0x7100, "MacBookPro 6,1", CS4208_GPIO0), 612 - SND_PCI_QUIRK(0x106b, 0x7200, "MacBookPro 6,2", CS4208_GPIO0), 574 + SND_PCI_QUIRK(0x106b, 0x7100, "MacBookAir 6,1", CS4208_MBA6), 575 + SND_PCI_QUIRK(0x106b, 0x7200, "MacBookAir 6,2", CS4208_MBA6), 613 576 {} /* terminator */ 614 577 }; 615 578 ··· 627 588 } 628 589 629 590 static const struct hda_fixup cs4208_fixups[] = { 591 + [CS4208_MBA6] = { 592 + .type = HDA_FIXUP_PINS, 593 + .v.pins = mba6_pincfgs, 594 + .chained = true, 595 + .chain_id = CS4208_GPIO0, 596 + }, 630 597 [CS4208_GPIO0] = { 631 598 .type = HDA_FIXUP_FUNC, 632 599 .v.func = cs4208_fixup_gpio0, 633 600 }, 634 601 }; 635 602 603 + /* correct the 0dB offset of input pins */ 604 + static void cs4208_fix_amp_caps(struct hda_codec *codec, hda_nid_t adc) 605 + { 606 + unsigned int caps; 607 + 608 + caps = query_amp_caps(codec, adc, HDA_INPUT); 609 + caps &= ~(AC_AMPCAP_OFFSET); 610 + caps |= 0x02; 611 + snd_hda_override_amp_caps(codec, adc, HDA_INPUT, caps); 612 + } 613 + 636 614 static int patch_cs4208(struct hda_codec *codec) 637 615 { 638 616 struct cs_spec *spec; 639 617 int err; 640 618 641 - spec = cs_alloc_spec(codec, 0); /* no specific w/a */ 619 + spec = cs_alloc_spec(codec, CS4208_VENDOR_NID); 642 620 if (!spec) 643 621 return -ENOMEM; 644 622 ··· 664 608 
snd_hda_pick_fixup(codec, cs4208_models, cs4208_fixup_tbl, 665 609 cs4208_fixups); 666 610 snd_hda_apply_fixup(codec, HDA_FIXUP_ACT_PRE_PROBE); 611 + 612 + snd_hda_override_wcaps(codec, 0x18, 613 + get_wcaps(codec, 0x18) | AC_WCAP_STEREO); 614 + cs4208_fix_amp_caps(codec, 0x18); 615 + cs4208_fix_amp_caps(codec, 0x1b); 616 + cs4208_fix_amp_caps(codec, 0x1c); 667 617 668 618 err = cs_parse_auto_config(codec); 669 619 if (err < 0)
+30 -19
sound/pci/hda/patch_hdmi.c
··· 1149 1149 } 1150 1150 1151 1151 static void haswell_config_cvts(struct hda_codec *codec, 1152 - int pin_id, int mux_id) 1152 + hda_nid_t pin_nid, int mux_idx) 1153 1153 { 1154 1154 struct hdmi_spec *spec = codec->spec; 1155 - struct hdmi_spec_per_pin *per_pin; 1156 - int pin_idx, mux_idx; 1157 - int curr; 1158 - int err; 1155 + hda_nid_t nid, end_nid; 1156 + int cvt_idx, curr; 1157 + struct hdmi_spec_per_cvt *per_cvt; 1159 1158 1160 - for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) { 1161 - per_pin = get_pin(spec, pin_idx); 1159 + /* configure all pins, including "no physical connection" ones */ 1160 + end_nid = codec->start_nid + codec->num_nodes; 1161 + for (nid = codec->start_nid; nid < end_nid; nid++) { 1162 + unsigned int wid_caps = get_wcaps(codec, nid); 1163 + unsigned int wid_type = get_wcaps_type(wid_caps); 1162 1164 1163 - if (pin_idx == pin_id) 1165 + if (wid_type != AC_WID_PIN) 1164 1166 continue; 1165 1167 1166 - curr = snd_hda_codec_read(codec, per_pin->pin_nid, 0, 1167 - AC_VERB_GET_CONNECT_SEL, 0); 1168 + if (nid == pin_nid) 1169 + continue; 1168 1170 1169 - /* Choose another unused converter */ 1170 - if (curr == mux_id) { 1171 - err = hdmi_choose_cvt(codec, pin_idx, NULL, &mux_idx); 1172 - if (err < 0) 1173 - return; 1174 - snd_printdd("HDMI: choose converter %d for pin %d\n", mux_idx, pin_idx); 1175 - snd_hda_codec_write_cache(codec, per_pin->pin_nid, 0, 1171 + curr = snd_hda_codec_read(codec, nid, 0, 1172 + AC_VERB_GET_CONNECT_SEL, 0); 1173 + if (curr != mux_idx) 1174 + continue; 1175 + 1176 + /* choose an unassigned converter. The conveters in the 1177 + * connection list are in the same order as in the codec. 
1178 + */ 1179 + for (cvt_idx = 0; cvt_idx < spec->num_cvts; cvt_idx++) { 1180 + per_cvt = get_cvt(spec, cvt_idx); 1181 + if (!per_cvt->assigned) { 1182 + snd_printdd("choose cvt %d for pin nid %d\n", 1183 + cvt_idx, nid); 1184 + snd_hda_codec_write_cache(codec, nid, 0, 1176 1185 AC_VERB_SET_CONNECT_SEL, 1177 - mux_idx); 1186 + cvt_idx); 1187 + break; 1188 + } 1178 1189 } 1179 1190 } 1180 1191 } ··· 1227 1216 1228 1217 /* configure unused pins to choose other converters */ 1229 1218 if (is_haswell(codec)) 1230 - haswell_config_cvts(codec, pin_idx, mux_idx); 1219 + haswell_config_cvts(codec, per_pin->pin_nid, mux_idx); 1231 1220 1232 1221 snd_hda_spdif_ctls_assign(codec, pin_idx, per_cvt->cvt_nid); 1233 1222
+15 -1
sound/pci/hda/patch_realtek.c
··· 3439 3439 /* Set to manual mode */ 3440 3440 val = alc_read_coef_idx(codec, 0x06); 3441 3441 alc_write_coef_idx(codec, 0x06, val & ~0x000c); 3442 + /* Enable Line1 input control by verb */ 3443 + val = alc_read_coef_idx(codec, 0x1a); 3444 + alc_write_coef_idx(codec, 0x1a, val | (1 << 4)); 3442 3445 break; 3443 3446 } 3444 3447 } ··· 3534 3531 ALC269VB_FIXUP_ORDISSIMO_EVE2, 3535 3532 ALC283_FIXUP_CHROME_BOOK, 3536 3533 ALC282_FIXUP_ASUS_TX300, 3534 + ALC283_FIXUP_INT_MIC, 3537 3535 }; 3538 3536 3539 3537 static const struct hda_fixup alc269_fixups[] = { ··· 3794 3790 .type = HDA_FIXUP_FUNC, 3795 3791 .v.func = alc282_fixup_asus_tx300, 3796 3792 }, 3793 + [ALC283_FIXUP_INT_MIC] = { 3794 + .type = HDA_FIXUP_VERBS, 3795 + .v.verbs = (const struct hda_verb[]) { 3796 + {0x20, AC_VERB_SET_COEF_INDEX, 0x1a}, 3797 + {0x20, AC_VERB_SET_PROC_COEF, 0x0011}, 3798 + { } 3799 + }, 3800 + .chained = true, 3801 + .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST 3802 + }, 3797 3803 }; 3798 3804 3799 3805 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 3888 3874 SND_PCI_QUIRK(0x17aa, 0x2214, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 3889 3875 SND_PCI_QUIRK(0x17aa, 0x2215, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 3890 3876 SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 3891 - SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 3877 + SND_PCI_QUIRK(0x17aa, 0x501a, "Thinkpad", ALC283_FIXUP_INT_MIC), 3892 3878 SND_PCI_QUIRK(0x17aa, 0x5026, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 3893 3879 SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), 3894 3880 SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+1 -1
sound/soc/samsung/Kconfig
··· 2 2 tristate "ASoC support for Samsung" 3 3 depends on PLAT_SAMSUNG 4 4 select S3C64XX_DMA if ARCH_S3C64XX 5 - select S3C2410_DMA if ARCH_S3C24XX 5 + select S3C24XX_DMA if ARCH_S3C24XX 6 6 help 7 7 Say Y or M if you want to add support for codecs attached to 8 8 the Samsung SoCs' Audio interfaces. You will also need to
-1
tools/lib/lk/debugfs.c
··· 5 5 #include <stdbool.h> 6 6 #include <sys/vfs.h> 7 7 #include <sys/mount.h> 8 - #include <linux/magic.h> 9 8 #include <linux/kernel.h> 10 9 11 10 #include "debugfs.h"
+3 -3
tools/perf/arch/x86/util/tsc.c
··· 32 32 int perf_read_tsc_conversion(const struct perf_event_mmap_page *pc, 33 33 struct perf_tsc_conversion *tc) 34 34 { 35 - bool cap_usr_time_zero; 35 + bool cap_user_time_zero; 36 36 u32 seq; 37 37 int i = 0; 38 38 ··· 42 42 tc->time_mult = pc->time_mult; 43 43 tc->time_shift = pc->time_shift; 44 44 tc->time_zero = pc->time_zero; 45 - cap_usr_time_zero = pc->cap_usr_time_zero; 45 + cap_user_time_zero = pc->cap_user_time_zero; 46 46 rmb(); 47 47 if (pc->lock == seq && !(seq & 1)) 48 48 break; ··· 52 52 } 53 53 } 54 54 55 - if (!cap_usr_time_zero) 55 + if (!cap_user_time_zero) 56 56 return -EOPNOTSUPP; 57 57 58 58 return 0;
-2
tools/perf/builtin-inject.c
··· 321 321 return perf_event__repipe(tool, event_sw, &sample_sw, machine); 322 322 } 323 323 324 - extern volatile int session_done; 325 - 326 324 static void sig_handler(int sig __maybe_unused) 327 325 { 328 326 session_done = 1;
+1 -1
tools/perf/builtin-kmem.c
··· 101 101 102 102 dir1 = opendir(PATH_SYS_NODE); 103 103 if (!dir1) 104 - return -1; 104 + return 0; 105 105 106 106 while ((dent1 = readdir(dir1)) != NULL) { 107 107 if (dent1->d_type != DT_DIR ||
+3 -2
tools/perf/builtin-report.c
··· 401 401 return 0; 402 402 } 403 403 404 - extern volatile int session_done; 405 - 406 404 static void sig_handler(int sig __maybe_unused) 407 405 { 408 406 session_done = 1; ··· 565 567 hists__link(leader_hists, hists); 566 568 } 567 569 } 570 + 571 + if (session_done()) 572 + return 0; 568 573 569 574 if (nr_samples == 0) { 570 575 ui__error("The %s file has no samples!\n", session->filename);
-2
tools/perf/builtin-script.c
··· 553 553 .ordering_requires_timestamps = true, 554 554 }; 555 555 556 - extern volatile int session_done; 557 - 558 556 static void sig_handler(int sig __maybe_unused) 559 557 { 560 558 session_done = 1;
+18
tools/perf/builtin-trace.c
··· 16 16 #include <sys/mman.h> 17 17 #include <linux/futex.h> 18 18 19 + /* For older distros: */ 20 + #ifndef MAP_STACK 21 + # define MAP_STACK 0x20000 22 + #endif 23 + 24 + #ifndef MADV_HWPOISON 25 + # define MADV_HWPOISON 100 26 + #endif 27 + 28 + #ifndef MADV_MERGEABLE 29 + # define MADV_MERGEABLE 12 30 + #endif 31 + 32 + #ifndef MADV_UNMERGEABLE 33 + # define MADV_UNMERGEABLE 13 34 + #endif 35 + 19 36 static size_t syscall_arg__scnprintf_hex(char *bf, size_t size, 20 37 unsigned long arg, 21 38 u8 arg_idx __maybe_unused, ··· 1055 1038 1056 1039 trace->tool.sample = trace__process_sample; 1057 1040 trace->tool.mmap = perf_event__process_mmap; 1041 + trace->tool.mmap2 = perf_event__process_mmap2; 1058 1042 trace->tool.comm = perf_event__process_comm; 1059 1043 trace->tool.exit = perf_event__process_exit; 1060 1044 trace->tool.fork = perf_event__process_fork;
+4 -1
tools/perf/config/Makefile
··· 87 87 CFLAGS += -Wextra 88 88 CFLAGS += -std=gnu99 89 89 90 - EXTLIBS = -lelf -lpthread -lrt -lm 90 + EXTLIBS = -lelf -lpthread -lrt -lm -ldl 91 91 92 92 ifeq ($(call try-cc,$(SOURCE_HELLO),$(CFLAGS) -Werror -fstack-protector-all,-fstack-protector-all),y) 93 93 CFLAGS += -fstack-protector-all ··· 179 179 FLAGS_LIBELF=$(CFLAGS) $(LDFLAGS) $(EXTLIBS) 180 180 ifeq ($(call try-cc,$(SOURCE_ELF_MMAP),$(FLAGS_LIBELF),-DLIBELF_MMAP),y) 181 181 CFLAGS += -DLIBELF_MMAP 182 + endif 183 + ifeq ($(call try-cc,$(SOURCE_ELF_GETPHDRNUM),$(FLAGS_LIBELF),-DHAVE_ELF_GETPHDRNUM),y) 184 + CFLAGS += -DHAVE_ELF_GETPHDRNUM 182 185 endif 183 186 184 187 # include ARCH specific config
+10
tools/perf/config/feature-tests.mak
··· 61 61 } 62 62 endef 63 63 64 + define SOURCE_ELF_GETPHDRNUM 65 + #include <libelf.h> 66 + int main(void) 67 + { 68 + size_t dst; 69 + return elf_getphdrnum(0, &dst); 70 + } 71 + endef 72 + 64 73 ifndef NO_SLANG 65 74 define SOURCE_SLANG 66 75 #include <slang.h> ··· 219 210 220 211 int main(void) 221 212 { 213 + printf(\"error message: %s\n\", audit_errno_to_name(0)); 222 214 return audit_open(); 223 215 } 224 216 endef
+1 -1
tools/perf/util/annotate.c
··· 809 809 end = map__rip_2objdump(map, sym->end); 810 810 811 811 offset = line_ip - start; 812 - if (offset < 0 || (u64)line_ip > end) 812 + if ((u64)line_ip < start || (u64)line_ip > end) 813 813 offset = -1; 814 814 else 815 815 parsed_line = tmp2 + 1;
+19
tools/perf/util/dwarf-aux.c
··· 263 263 } 264 264 265 265 /** 266 + * die_is_func_def - Ensure that this DIE is a subprogram and definition 267 + * @dw_die: a DIE 268 + * 269 + * Ensure that this DIE is a subprogram and NOT a declaration. This 270 + * returns true if @dw_die is a function definition. 271 + **/ 272 + bool die_is_func_def(Dwarf_Die *dw_die) 273 + { 274 + Dwarf_Attribute attr; 275 + 276 + return (dwarf_tag(dw_die) == DW_TAG_subprogram && 277 + dwarf_attr(dw_die, DW_AT_declaration, &attr) == NULL); 278 + } 279 + 280 + /** 266 281 * die_get_data_member_location - Get the data-member offset 267 282 * @mb_die: a DIE of a member of a data structure 268 283 * @offs: The offset of the member in the data structure ··· 407 392 { 408 393 struct __addr_die_search_param *ad = data; 409 394 395 + /* 396 + * Since a declaration entry doesn't has given pc, this always returns 397 + * function definition entry. 398 + */ 410 399 if (dwarf_tag(fn_die) == DW_TAG_subprogram && 411 400 dwarf_haspc(fn_die, ad->addr)) { 412 401 memcpy(ad->die_mem, fn_die, sizeof(Dwarf_Die));
+3
tools/perf/util/dwarf-aux.h
··· 38 38 extern int cu_walk_functions_at(Dwarf_Die *cu_die, Dwarf_Addr addr, 39 39 int (*callback)(Dwarf_Die *, void *), void *data); 40 40 41 + /* Ensure that this DIE is a subprogram and definition (not declaration) */ 42 + extern bool die_is_func_def(Dwarf_Die *dw_die); 43 + 41 44 /* Compare diename and tname */ 42 45 extern bool die_compare_name(Dwarf_Die *dw_die, const char *tname); 43 46
+28 -13
tools/perf/util/header.c
··· 199 199 return write_padded(fd, name, name_len + 1, len); 200 200 } 201 201 202 - static int __dsos__write_buildid_table(struct list_head *head, pid_t pid, 203 - u16 misc, int fd) 202 + static int __dsos__write_buildid_table(struct list_head *head, 203 + struct machine *machine, 204 + pid_t pid, u16 misc, int fd) 204 205 { 206 + char nm[PATH_MAX]; 205 207 struct dso *pos; 206 208 207 209 dsos__for_each_with_build_id(pos, head) { ··· 217 215 if (is_vdso_map(pos->short_name)) { 218 216 name = (char *) VDSO__MAP_NAME; 219 217 name_len = sizeof(VDSO__MAP_NAME) + 1; 218 + } else if (dso__is_kcore(pos)) { 219 + machine__mmap_name(machine, nm, sizeof(nm)); 220 + name = nm; 221 + name_len = strlen(nm) + 1; 220 222 } else { 221 223 name = pos->long_name; 222 224 name_len = pos->long_name_len + 1; ··· 246 240 umisc = PERF_RECORD_MISC_GUEST_USER; 247 241 } 248 242 249 - err = __dsos__write_buildid_table(&machine->kernel_dsos, machine->pid, 250 - kmisc, fd); 243 + err = __dsos__write_buildid_table(&machine->kernel_dsos, machine, 244 + machine->pid, kmisc, fd); 251 245 if (err == 0) 252 - err = __dsos__write_buildid_table(&machine->user_dsos, 246 + err = __dsos__write_buildid_table(&machine->user_dsos, machine, 253 247 machine->pid, umisc, fd); 254 248 return err; 255 249 } ··· 381 375 return err; 382 376 } 383 377 384 - static int dso__cache_build_id(struct dso *dso, const char *debugdir) 378 + static int dso__cache_build_id(struct dso *dso, struct machine *machine, 379 + const char *debugdir) 385 380 { 386 381 bool is_kallsyms = dso->kernel && dso->long_name[0] != '/'; 387 382 bool is_vdso = is_vdso_map(dso->short_name); 383 + char *name = dso->long_name; 384 + char nm[PATH_MAX]; 388 385 389 - return build_id_cache__add_b(dso->build_id, sizeof(dso->build_id), 390 - dso->long_name, debugdir, 391 - is_kallsyms, is_vdso); 386 + if (dso__is_kcore(dso)) { 387 + is_kallsyms = true; 388 + machine__mmap_name(machine, nm, sizeof(nm)); 389 + name = nm; 390 + } 391 + return build_id_cache__add_b(dso->build_id, sizeof(dso->build_id), name, 392 + debugdir, is_kallsyms, is_vdso); 392 393 } 393 394 394 - static int __dsos__cache_build_ids(struct list_head *head, const char *debugdir) 395 + static int __dsos__cache_build_ids(struct list_head *head, 396 + struct machine *machine, const char *debugdir) 395 397 { 396 398 struct dso *pos; 397 399 int err = 0; 398 400 399 401 dsos__for_each_with_build_id(pos, head) 400 - if (dso__cache_build_id(pos, debugdir)) 402 + if (dso__cache_build_id(pos, machine, debugdir)) 401 403 err = -1; 402 404 403 405 return err; ··· 413 399 414 400 static int machine__cache_build_ids(struct machine *machine, const char *debugdir) 415 401 { 416 - int ret = __dsos__cache_build_ids(&machine->kernel_dsos, debugdir); 417 - ret |= __dsos__cache_build_ids(&machine->user_dsos, debugdir); 402 + int ret = __dsos__cache_build_ids(&machine->kernel_dsos, machine, 403 + debugdir); 404 + ret |= __dsos__cache_build_ids(&machine->user_dsos, machine, debugdir); 418 405 return ret; 419 406 } 420 407
+2
tools/perf/util/hist.c
··· 611 611 next = rb_first(root); 612 612 613 613 while (next) { 614 + if (session_done()) 615 + break; 614 616 n = rb_entry(next, struct hist_entry, rb_node_in); 615 617 next = rb_next(&n->rb_node_in); 616 618
+1 -1
tools/perf/util/machine.c
··· 792 792 modules = path; 793 793 } 794 794 795 - if (symbol__restricted_filename(path, "/proc/modules")) 795 + if (symbol__restricted_filename(modules, "/proc/modules")) 796 796 return -1; 797 797 798 798 file = fopen(modules, "r");
+45 -42
tools/perf/util/probe-finder.c
··· 118 118 static int debuginfo__init_offline_dwarf(struct debuginfo *self, 119 119 const char *path) 120 120 { 121 - Dwfl_Module *mod; 122 121 int fd; 123 122 124 123 fd = open(path, O_RDONLY); ··· 128 129 if (!self->dwfl) 129 130 goto error; 130 131 131 - mod = dwfl_report_offline(self->dwfl, "", "", fd); 132 - if (!mod) 132 + self->mod = dwfl_report_offline(self->dwfl, "", "", fd); 133 + if (!self->mod) 133 134 goto error; 134 135 135 - self->dbg = dwfl_module_getdwarf(mod, &self->bias); 136 + self->dbg = dwfl_module_getdwarf(self->mod, &self->bias); 136 137 if (!self->dbg) 137 138 goto error; 138 139 ··· 675 676 } 676 677 677 678 /* Convert subprogram DIE to trace point */ 678 - static int convert_to_trace_point(Dwarf_Die *sp_die, Dwarf_Addr paddr, 679 - bool retprobe, struct probe_trace_point *tp) 679 + static int convert_to_trace_point(Dwarf_Die *sp_die, Dwfl_Module *mod, 680 + Dwarf_Addr paddr, bool retprobe, 681 + struct probe_trace_point *tp) 680 682 { 681 683 Dwarf_Addr eaddr, highaddr; 682 - const char *name; 684 + GElf_Sym sym; 685 + const char *symbol; 683 686 684 - /* Copy the name of probe point */ 685 - name = dwarf_diename(sp_die); 686 - if (name) { 687 - if (dwarf_entrypc(sp_die, &eaddr) != 0) { 688 - pr_warning("Failed to get entry address of %s\n", 689 - dwarf_diename(sp_die)); 690 - return -ENOENT; 691 - } 692 - if (dwarf_highpc(sp_die, &highaddr) != 0) { 693 - pr_warning("Failed to get end address of %s\n", 694 - dwarf_diename(sp_die)); 695 - return -ENOENT; 696 - } 697 - if (paddr > highaddr) { 698 - pr_warning("Offset specified is greater than size of %s\n", 699 - dwarf_diename(sp_die)); 700 - return -EINVAL; 701 - } 702 - tp->symbol = strdup(name); 703 - if (tp->symbol == NULL) 704 - return -ENOMEM; 705 - tp->offset = (unsigned long)(paddr - eaddr); 706 - } else 707 - /* This function has no name. */ 708 - tp->offset = (unsigned long)paddr; 687 + /* Verify the address is correct */ 688 + if (dwarf_entrypc(sp_die, &eaddr) != 0) { 689 + pr_warning("Failed to get entry address of %s\n", 690 + dwarf_diename(sp_die)); 691 + return -ENOENT; 692 + } 693 + if (dwarf_highpc(sp_die, &highaddr) != 0) { 694 + pr_warning("Failed to get end address of %s\n", 695 + dwarf_diename(sp_die)); 696 + return -ENOENT; 697 + } 698 + if (paddr > highaddr) { 699 + pr_warning("Offset specified is greater than size of %s\n", 700 + dwarf_diename(sp_die)); 701 + return -EINVAL; 702 + } 703 + 704 + /* Get an appropriate symbol from symtab */ 705 + symbol = dwfl_module_addrsym(mod, paddr, &sym, NULL); 706 + if (!symbol) { 707 + pr_warning("Failed to find symbol at 0x%lx\n", 708 + (unsigned long)paddr); 709 + return -ENOENT; 710 + } 711 + tp->offset = (unsigned long)(paddr - sym.st_value); 712 + tp->symbol = strdup(symbol); 713 + if (!tp->symbol) 714 + return -ENOMEM; 709 715 710 716 /* Return probe must be on the head of a subprogram */ 711 717 if (retprobe) { ··· 738 734 } 739 735 740 736 /* If not a real subprogram, find a real one */ 741 - if (dwarf_tag(sc_die) != DW_TAG_subprogram) { 737 + if (!die_is_func_def(sc_die)) { 742 738 if (!die_find_realfunc(&pf->cu_die, pf->addr, &pf->sp_die)) { 743 739 pr_warning("Failed to find probe point in any " 744 740 "functions.\n"); ··· 984 980 struct dwarf_callback_param *param = data; 985 981 struct probe_finder *pf = param->data; 986 982 struct perf_probe_point *pp = &pf->pev->point; 987 - Dwarf_Attribute attr; 988 983 989 984 /* Check tag and diename */ 990 - if (dwarf_tag(sp_die) != DW_TAG_subprogram || 991 - !die_compare_name(sp_die, pp->function) || 992 - dwarf_attr(sp_die, DW_AT_declaration, &attr)) 985 + if (!die_is_func_def(sp_die) || 986 + !die_compare_name(sp_die, pp->function)) 993 987 return DWARF_CB_OK; 994 988 995 989 /* Check declared file */ ··· 1153 1151 tev = &tf->tevs[tf->ntevs++]; 1154 1152 1155 1153 /* Trace point should be converted from subprogram DIE */ 1156 - ret = convert_to_trace_point(&pf->sp_die, pf->addr, 1154 + ret = convert_to_trace_point(&pf->sp_die, tf->mod, pf->addr, 1157 1155 pf->pev->point.retprobe, &tev->point); 1158 1156 if (ret < 0) 1159 1157 return ret; ··· 1185 1183 { 1186 1184 struct trace_event_finder tf = { 1187 1185 .pf = {.pev = pev, .callback = add_probe_trace_event}, 1188 - .max_tevs = max_tevs}; 1186 + .mod = self->mod, .max_tevs = max_tevs}; 1189 1187 int ret; 1190 1188 1191 1189 /* Allocate result tevs array */ ··· 1254 1252 vl = &af->vls[af->nvls++]; 1255 1253 1256 1254 /* Trace point should be converted from subprogram DIE */ 1257 - ret = convert_to_trace_point(&pf->sp_die, pf->addr, 1255 + ret = convert_to_trace_point(&pf->sp_die, af->mod, pf->addr, 1258 1256 pf->pev->point.retprobe, &vl->point); 1259 1257 if (ret < 0) 1260 1258 return ret; ··· 1293 1291 { 1294 1292 struct available_var_finder af = { 1295 1293 .pf = {.pev = pev, .callback = add_available_vars}, 1294 + .mod = self->mod, 1296 1295 .max_vls = max_vls, .externs = externs}; 1297 1296 int ret; 1298 1297 ··· 1477 1474 return 0; 1478 1475 } 1479 1476 1480 - /* Search function from function name */ 1477 + /* Search function definition from function name */ 1481 1478 static int line_range_search_cb(Dwarf_Die *sp_die, void *data) 1482 1479 { 1483 1480 struct dwarf_callback_param *param = data; ··· 1488 1485 if (lr->file && strtailcmp(lr->file, dwarf_decl_file(sp_die))) 1489 1486 return DWARF_CB_OK; 1490 1487 1491 - if (dwarf_tag(sp_die) == DW_TAG_subprogram && 1488 + if (die_is_func_def(sp_die) && 1492 1489 die_compare_name(sp_die, lr->function)) { 1493 1490 lf->fname = dwarf_decl_file(sp_die); 1494 1491 dwarf_decl_line(sp_die, &lr->offset);
+3
tools/perf/util/probe-finder.h
··· 23 23 /* debug information structure */ 24 24 struct debuginfo { 25 25 Dwarf *dbg; 26 + Dwfl_Module *mod; 26 27 Dwfl *dwfl; 27 28 Dwarf_Addr bias; 28 29 }; ··· 78 77 79 78 struct trace_event_finder { 80 79 struct probe_finder pf; 80 + Dwfl_Module *mod; /* For solving symbols */ 81 81 struct probe_trace_event *tevs; /* Found trace events */ 82 82 int ntevs; /* Number of trace events */ 83 83 int max_tevs; /* Max number of trace events */ ··· 86 84 87 85 struct available_var_finder { 88 86 struct probe_finder pf; 87 + Dwfl_Module *mod; /* For solving symbols */ 89 88 struct variable_list *vls; /* Found variable lists */ 90 89 int nvls; /* Number of variable lists */ 91 90 int max_vls; /* Max no. of variable lists */
+7 -2
tools/perf/util/session.c
··· 531 531 return 0; 532 532 533 533 list_for_each_entry_safe(iter, tmp, head, list) { 534 + if (session_done()) 535 + return 0; 536 + 534 537 if (iter->timestamp > limit) 535 538 break; 536 539 ··· 1163 1160 } 1164 1161 } 1165 1162 1166 - #define session_done() (*(volatile int *)(&session_done)) 1167 1163 volatile int session_done; 1168 1164 1169 1165 static int __perf_session__process_pipe_events(struct perf_session *self, ··· 1374 1372 "Processing events..."); 1375 1373 } 1376 1374 1375 + err = 0; 1376 + if (session_done()) 1377 + goto out_err; 1378 + 1377 1379 if (file_pos < file_size) 1378 1380 goto more; 1379 1381 1380 - err = 0; 1381 1382 /* do the final flush for ordered samples */ 1382 1383 session->ordered_samples.next_flush = ULLONG_MAX; 1383 1384 err = flush_sample_queue(session, tool);
+4
tools/perf/util/session.h
··· 124 124 125 125 #define perf_session__set_tracepoints_handlers(session, array) \ 126 126 __perf_session__set_tracepoints_handlers(session, array, ARRAY_SIZE(array)) 127 + 128 + extern volatile int session_done; 129 + 130 + #define session_done() (*(volatile int *)(&session_done)) 127 131 #endif /* __PERF_SESSION_H */
+16
tools/perf/util/symbol-elf.c
··· 8 8 #include "symbol.h" 9 9 #include "debug.h" 10 10 11 + #ifndef HAVE_ELF_GETPHDRNUM 12 + static int elf_getphdrnum(Elf *elf, size_t *dst) 13 + { 14 + GElf_Ehdr gehdr; 15 + GElf_Ehdr *ehdr; 16 + 17 + ehdr = gelf_getehdr(elf, &gehdr); 18 + if (!ehdr) 19 + return -1; 20 + 21 + *dst = ehdr->e_phnum; 22 + 23 + return 0; 24 + } 25 + #endif 26 + 11 27 #ifndef NT_GNU_BUILD_ID 12 28 #define NT_GNU_BUILD_ID 3 13 29 #endif
+1 -1
tools/perf/util/trace-event-parse.c
··· 186 186 char *next = NULL; 187 187 char *addr_str; 188 188 char *mod; 189 - char *fmt; 189 + char *fmt = NULL; 190 190 191 191 line = strtok_r(file, "\n", &next); 192 192 while (line) {