Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'master' of master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6

+2767 -1693
+13 -52
Documentation/power/devices.txt
··· 520 520 device. This field is a pointer to an object of type struct dev_power_domain, 521 521 defined in include/linux/pm.h, providing a set of power management callbacks 522 522 analogous to the subsystem-level and device driver callbacks that are executed 523 - for the given device during all power transitions, in addition to the respective 524 - subsystem-level callbacks. Specifically, the power domain "suspend" callbacks 525 - (i.e. ->runtime_suspend(), ->suspend(), ->freeze(), ->poweroff(), etc.) are 526 - executed after the analogous subsystem-level callbacks, while the power domain 527 - "resume" callbacks (i.e. ->runtime_resume(), ->resume(), ->thaw(), ->restore, 528 - etc.) are executed before the analogous subsystem-level callbacks. Error codes 529 - returned by the "suspend" and "resume" power domain callbacks are ignored. 523 + for the given device during all power transitions, instead of the respective 524 + subsystem-level callbacks. Specifically, if a device's pm_domain pointer is 525 + not NULL, the ->suspend() callback from the object pointed to by it will be 526 + executed instead of its subsystem's (e.g. bus type's) ->suspend() callback and 527 + analogously for all of the remaining callbacks. In other words, power management 528 + domain callbacks, if defined for the given device, always take precedence over 529 + the callbacks provided by the device's subsystem (e.g. bus type). 530 530 531 - Power domain ->runtime_idle() callback is executed before the subsystem-level 532 - ->runtime_idle() callback and the result returned by it is not ignored. Namely, 533 - if it returns error code, the subsystem-level ->runtime_idle() callback will not 534 - be called and the helper function rpm_idle() executing it will return error 535 - code. 
This mechanism is intended to help platforms where saving device state 536 - is a time consuming operation and should only be carried out if all devices 537 - in the power domain are idle, before turning off the shared power resource(s). 538 - Namely, the power domain ->runtime_idle() callback may return error code until 539 - the pm_runtime_idle() helper (or its asychronous version) has been called for 540 - all devices in the power domain (it is recommended that the returned error code 541 - be -EBUSY in those cases), preventing the subsystem-level ->runtime_idle() 542 - callback from being run prematurely. 543 - 544 - The support for device power domains is only relevant to platforms needing to 545 - use the same subsystem-level (e.g. platform bus type) and device driver power 546 - management callbacks in many different power domain configurations and wanting 547 - to avoid incorporating the support for power domains into the subsystem-level 548 - callbacks. The other platforms need not implement it or take it into account 549 - in any way. 550 - 551 - 552 - System Devices 553 - -------------- 554 - System devices (sysdevs) follow a slightly different API, which can be found in 555 - 556 - include/linux/sysdev.h 557 - drivers/base/sys.c 558 - 559 - System devices will be suspended with interrupts disabled, and after all other 560 - devices have been suspended. On resume, they will be resumed before any other 561 - devices, and also with interrupts disabled. These things occur in special 562 - "sysdev_driver" phases, which affect only system devices. 563 - 564 - Thus, after the suspend_noirq (or freeze_noirq or poweroff_noirq) phase, when 565 - the non-boot CPUs are all offline and IRQs are disabled on the remaining online 566 - CPU, then a sysdev_driver.suspend phase is carried out, and the system enters a 567 - sleep state (or a system image is created). 
During resume (or after the image 568 - has been created or loaded) a sysdev_driver.resume phase is carried out, IRQs 569 - are enabled on the only online CPU, the non-boot CPUs are enabled, and the 570 - resume_noirq (or thaw_noirq or restore_noirq) phase begins. 571 - 572 - Code to actually enter and exit the system-wide low power state sometimes 573 - involves hardware details that are only known to the boot firmware, and 574 - may leave a CPU running software (from SRAM or flash memory) that monitors 575 - the system and manages its wakeup sequence. 531 + The support for device power management domains is only relevant to platforms 532 + needing to use the same device driver power management callbacks in many 533 + different power domain configurations and wanting to avoid incorporating the 534 + support for power domains into subsystem-level callbacks, for example by 535 + modifying the platform bus type. Other platforms need not implement it or take 536 + it into account in any way. 576 537 577 538 578 539 Device Low Power (suspend) States
-5
Documentation/power/runtime_pm.txt
··· 566 566 pm_runtime_set_active(dev); 567 567 pm_runtime_enable(dev); 568 568 569 - The PM core always increments the run-time usage counter before calling the 570 - ->prepare() callback and decrements it after calling the ->complete() callback. 571 - Hence disabling run-time PM temporarily like this will not cause any run-time 572 - suspend callbacks to be lost. 573 - 574 569 7. Generic subsystem callbacks 575 570 576 571 Subsystems may wish to conserve code space by using the set of generic power
+8 -1
Documentation/usb/error-codes.txt
··· 76 76 reported. That's because transfers often involve several packets, so that 77 77 one or more packets could finish before an error stops further endpoint I/O. 78 78 79 + For isochronous URBs, the urb status value is non-zero only if the URB is 80 + unlinked, the device is removed, the host controller is disabled, or the total 81 + transferred length is less than the requested length and the URB_SHORT_NOT_OK 82 + flag is set. Completion handlers for isochronous URBs should only see 83 + urb->status set to zero, -ENOENT, -ECONNRESET, -ESHUTDOWN, or -EREMOTEIO. 84 + Individual frame descriptor status fields may report more status codes. 85 + 79 86 80 87 0 Transfer completed successfully 81 88 ··· 139 132 device removal events immediately. 140 133 141 134 -EXDEV ISO transfer only partially completed 142 - look at individual frame status for details 135 + (only set in iso_frame_desc[n].status, not urb->status) 143 136 144 137 -EINVAL ISO madness, if this happens: Log off and go home 145 138
+19 -10
MAINTAINERS
··· 1345 1345 F: include/linux/cfag12864b.h 1346 1346 1347 1347 AVR32 ARCHITECTURE 1348 - M: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com> 1348 + M: Haavard Skinnemoen <hskinnemoen@gmail.com> 1349 + M: Hans-Christian Egtvedt <egtvedt@samfundet.no> 1349 1350 W: http://www.atmel.com/products/AVR32/ 1350 1351 W: http://avr32linux.org/ 1351 1352 W: http://avrfreaks.net/ 1352 - S: Supported 1353 + S: Maintained 1353 1354 F: arch/avr32/ 1354 1355 1355 1356 AVR32/AT32AP MACHINE SUPPORT 1356 - M: Hans-Christian Egtvedt <hans-christian.egtvedt@atmel.com> 1357 - S: Supported 1357 + M: Haavard Skinnemoen <hskinnemoen@gmail.com> 1358 + M: Hans-Christian Egtvedt <egtvedt@samfundet.no> 1359 + S: Maintained 1358 1360 F: arch/avr32/mach-at32ap/ 1359 1361 1360 1362 AX.25 NETWORK LAYER ··· 1392 1390 BATMAN ADVANCED 1393 1391 M: Marek Lindner <lindner_marek@yahoo.de> 1394 1392 M: Simon Wunderlich <siwu@hrz.tu-chemnitz.de> 1395 - M: Sven Eckelmann <sven@narfation.org> 1396 1393 L: b.a.t.m.a.n@lists.open-mesh.org 1397 1394 W: http://www.open-mesh.org/ 1398 1395 S: Maintained ··· 1424 1423 F: arch/blackfin/ 1425 1424 1426 1425 BLACKFIN EMAC DRIVER 1427 - M: Michael Hennerich <michael.hennerich@analog.com> 1428 1426 L: uclinux-dist-devel@blackfin.uclinux.org 1429 1427 W: http://blackfin.uclinux.org 1430 1428 S: Supported ··· 1639 1639 M: Oliver Hartkopp <socketcan@hartkopp.net> 1640 1640 M: Oliver Hartkopp <oliver.hartkopp@volkswagen.de> 1641 1641 M: Urs Thuermann <urs.thuermann@volkswagen.de> 1642 - L: socketcan-core@lists.berlios.de 1642 + L: socketcan-core@lists.berlios.de (subscribers-only) 1643 1643 L: netdev@vger.kernel.org 1644 1644 W: http://developer.berlios.de/projects/socketcan/ 1645 1645 S: Maintained ··· 1651 1651 1652 1652 CAN NETWORK DRIVERS 1653 1653 M: Wolfgang Grandegger <wg@grandegger.com> 1654 - L: socketcan-core@lists.berlios.de 1654 + L: socketcan-core@lists.berlios.de (subscribers-only) 1655 1655 L: netdev@vger.kernel.org 1656 1656 W: 
http://developer.berlios.de/projects/socketcan/ 1657 1657 S: Maintained ··· 5181 5181 F: drivers/net/qlcnic/ 5182 5182 5183 5183 QLOGIC QLGE 10Gb ETHERNET DRIVER 5184 + M: Jitendra Kalsaria <jitendra.kalsaria@qlogic.com> 5184 5185 M: Ron Mercer <ron.mercer@qlogic.com> 5185 5186 M: linux-driver@qlogic.com 5186 5187 L: netdev@vger.kernel.org ··· 6435 6434 F: drivers/usb/misc/rio500* 6436 6435 6437 6436 USB EHCI DRIVER 6437 + M: Alan Stern <stern@rowland.harvard.edu> 6438 6438 L: linux-usb@vger.kernel.org 6439 - S: Orphan 6439 + S: Maintained 6440 6440 F: Documentation/usb/ehci.txt 6441 6441 F: drivers/usb/host/ehci* 6442 6442 ··· 6466 6464 S: Maintained 6467 6465 F: Documentation/hid/hiddev.txt 6468 6466 F: drivers/hid/usbhid/ 6467 + 6468 + USB/IP DRIVERS 6469 + M: Matt Mooney <mfm@muteddisk.com> 6470 + L: linux-usb@vger.kernel.org 6471 + S: Maintained 6472 + F: drivers/staging/usbip/ 6469 6473 6470 6474 USB ISP116X DRIVER 6471 6475 M: Olav Kongas <ok@artecdesign.ee> ··· 6502 6494 F: sound/usb/midi.* 6503 6495 6504 6496 USB OHCI DRIVER 6497 + M: Alan Stern <stern@rowland.harvard.edu> 6505 6498 L: linux-usb@vger.kernel.org 6506 - S: Orphan 6499 + S: Maintained 6507 6500 F: Documentation/usb/ohci.txt 6508 6501 F: drivers/usb/host/ohci* 6509 6502
+1 -1
Makefile
··· 1 1 VERSION = 3 2 2 PATCHLEVEL = 0 3 3 SUBLEVEL = 0 4 - EXTRAVERSION = -rc4 4 + EXTRAVERSION = -rc5 5 5 NAME = Sneaky Weasel 6 6 7 7 # *DOCUMENTATION*
-1
arch/alpha/include/asm/mmzone.h
··· 56 56 * Given a kernel address, find the home node of the underlying memory. 57 57 */ 58 58 #define kvaddr_to_nid(kaddr) pa_to_nid(__pa(kaddr)) 59 - #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 60 59 61 60 /* 62 61 * Given a kaddr, LOCAL_BASE_ADDR finds the owning node of the memory
+13 -1
arch/arm/boot/compressed/head.S
··· 597 597 sub pc, lr, r0, lsr #32 @ properly flush pipeline 598 598 #endif 599 599 600 + #define PROC_ENTRY_SIZE (4*5) 601 + 600 602 /* 601 603 * Here follow the relocatable cache support functions for the 602 604 * various processors. This is a generic hook for locating an ··· 626 624 ARM( addeq pc, r12, r3 ) @ call cache function 627 625 THUMB( addeq r12, r3 ) 628 626 THUMB( moveq pc, r12 ) @ call cache function 629 - add r12, r12, #4*5 627 + add r12, r12, #PROC_ENTRY_SIZE 630 628 b 1b 631 629 632 630 /* ··· 795 793 THUMB( nop ) 796 794 797 795 .size proc_types, . - proc_types 796 + 797 + /* 798 + * If you get a "non-constant expression in ".if" statement" 799 + * error from the assembler on this line, check that you have 800 + * not accidentally written a "b" instruction where you should 801 + * have written W(b). 802 + */ 803 + .if (. - proc_types) % PROC_ENTRY_SIZE != 0 804 + .error "The size of one or more proc_types entries is wrong." 805 + .endif 798 806 799 807 /* 800 808 * Turn off the Cache and MMU. ARMv3 does not support
+4
arch/arm/include/asm/assembler.h
··· 13 13 * Do not include any C declarations in this file - it is included by 14 14 * assembler source. 15 15 */ 16 + #ifndef __ASM_ASSEMBLER_H__ 17 + #define __ASM_ASSEMBLER_H__ 18 + 16 19 #ifndef __ASSEMBLY__ 17 20 #error "Only include this from assembly code" 18 21 #endif ··· 293 290 .macro ldrusr, reg, ptr, inc, cond=al, rept=1, abort=9001f 294 291 usracc ldr, \reg, \ptr, \inc, \cond, \rept, \abort 295 292 .endm 293 + #endif /* __ASM_ASSEMBLER_H__ */
+2
arch/arm/include/asm/entry-macro-multi.S
··· 1 + #include <asm/assembler.h> 2 + 1 3 /* 2 4 * Interrupt handling. Preserves r7, r8, r9 3 5 */
+11 -2
arch/arm/kernel/module.c
··· 193 193 offset -= 0x02000000; 194 194 offset += sym->st_value - loc; 195 195 196 - /* only Thumb addresses allowed (no interworking) */ 197 - if (!(offset & 1) || 196 + /* 197 + * For function symbols, only Thumb addresses are 198 + * allowed (no interworking). 199 + * 200 + * For non-function symbols, the destination 201 + * has no specific ARM/Thumb disposition, so 202 + * the branch is resolved under the assumption 203 + * that interworking is not required. 204 + */ 205 + if ((ELF32_ST_TYPE(sym->st_info) == STT_FUNC && 206 + !(offset & 1)) || 198 207 offset <= (s32)0xff000000 || 199 208 offset >= (s32)0x01000000) { 200 209 pr_err("%s: section %u reloc %u sym '%s': relocation %u out of range (%#lx -> %#x)\n",
+5 -1
arch/arm/kernel/smp.c
··· 318 318 smp_store_cpu_info(cpu); 319 319 320 320 /* 321 - * OK, now it's safe to let the boot CPU continue 321 + * OK, now it's safe to let the boot CPU continue. Wait for 322 + * the CPU migration code to notice that the CPU is online 323 + * before we continue. 322 324 */ 323 325 set_cpu_online(cpu, true); 326 + while (!cpu_active(cpu)) 327 + cpu_relax(); 324 328 325 329 /* 326 330 * OK, it's off to the idle thread for us
+1
arch/arm/mach-exynos4/init.c
··· 35 35 tcfg->clocks = exynos4_serial_clocks; 36 36 tcfg->clocks_size = ARRAY_SIZE(exynos4_serial_clocks); 37 37 } 38 + tcfg->flags |= NO_NEED_CHECK_CLKSRC; 38 39 } 39 40 40 41 s3c24xx_init_uartdevs("s5pv210-uart", s5p_uart_resources, cfg, no);
+2
arch/arm/mach-h720x/Kconfig
··· 6 6 bool "gms30c7201" 7 7 depends on ARCH_H720X 8 8 select CPU_H7201 9 + select ZONE_DMA 9 10 help 10 11 Say Y here if you are using the Hynix GMS30C7201 Reference Board 11 12 12 13 config ARCH_H7202 13 14 bool "hms30c7202" 14 15 select CPU_H7202 16 + select ZONE_DMA 15 17 depends on ARCH_H720X 16 18 help 17 19 Say Y here if you are using the Hynix HMS30C7202 Reference Board
+2 -2
arch/arm/mach-shmobile/board-ag5evm.c
··· 381 381 gpio_set_value(GPIO_PORT114, state); 382 382 } 383 383 384 - static struct sh_mobile_sdhi_info sh_sdhi1_platdata = { 384 + static struct sh_mobile_sdhi_info sh_sdhi1_info = { 385 385 .tmio_flags = TMIO_MMC_WRPROTECT_DISABLE, 386 386 .tmio_caps = MMC_CAP_NONREMOVABLE | MMC_CAP_SDIO_IRQ, 387 387 .tmio_ocr_mask = MMC_VDD_32_33 | MMC_VDD_33_34, ··· 413 413 .name = "sh_mobile_sdhi", 414 414 .id = 1, 415 415 .dev = { 416 - .platform_data = &sh_sdhi1_platdata, 416 + .platform_data = &sh_sdhi1_info, 417 417 }, 418 418 .num_resources = ARRAY_SIZE(sdhi1_resources), 419 419 .resource = sdhi1_resources,
+1 -1
arch/arm/mach-shmobile/board-ap4evb.c
··· 913 913 I2C_BOARD_INFO("imx074", 0x1a), 914 914 }; 915 915 916 - struct soc_camera_link imx074_link = { 916 + static struct soc_camera_link imx074_link = { 917 917 .bus_id = 0, 918 918 .board_info = &imx074_info, 919 919 .i2c_adapter_id = 0,
+1 -1
arch/arm/mach-shmobile/board-mackerel.c
··· 1287 1287 &nor_flash_device, 1288 1288 &smc911x_device, 1289 1289 &lcdc_device, 1290 - &usbhs0_device, 1291 1290 &usb1_host_device, 1292 1291 &usbhs1_device, 1292 + &usbhs0_device, 1293 1293 &leds_device, 1294 1294 &fsi_device, 1295 1295 &fsi_ak4643_device,
+12 -4
arch/arm/mach-ux500/board-mop500-pins.c
··· 110 110 GPIO168_KP_O0, 111 111 112 112 /* UART */ 113 - GPIO0_U0_CTSn | PIN_INPUT_PULLUP, 114 - GPIO1_U0_RTSn | PIN_OUTPUT_HIGH, 115 - GPIO2_U0_RXD | PIN_INPUT_PULLUP, 116 - GPIO3_U0_TXD | PIN_OUTPUT_HIGH, 113 + /* The gpio configuration of the uart-0 pins should 114 + * be kept intact to prevent a glitch on the tx line 115 + * when the tty device is opened. The pins are later 116 + * configured for uart via mop500_pins_uart0. 117 + * 118 + * This will be replaced with a uart configuration 119 + * once the issue is solved. 120 + */ 121 + GPIO0_GPIO | PIN_INPUT_PULLUP, 122 + GPIO1_GPIO | PIN_OUTPUT_HIGH, 123 + GPIO2_GPIO | PIN_INPUT_PULLUP, 124 + GPIO3_GPIO | PIN_OUTPUT_HIGH, 117 125 118 126 GPIO29_U2_RXD | PIN_INPUT_PULLUP, 119 127 GPIO30_U2_TXD | PIN_OUTPUT_HIGH,
+54
arch/arm/mach-ux500/board-mop500.c
··· 27 27 #include <linux/leds-lp5521.h> 28 28 #include <linux/input.h> 29 29 #include <linux/gpio_keys.h> 30 + #include <linux/delay.h> 30 31 31 32 #include <asm/mach-types.h> 32 33 #include <asm/mach/arch.h> 33 34 34 35 #include <plat/i2c.h> 35 36 #include <plat/ste_dma40.h> 37 + #include <plat/pincfg.h> 36 38 37 39 #include <mach/hardware.h> 38 40 #include <mach/setup.h> 39 41 #include <mach/devices.h> 40 42 #include <mach/irqs.h> 41 43 44 + #include "pins-db8500.h" 42 45 #include "ste-dma40-db8500.h" 43 46 #include "devices-db8500.h" 44 47 #include "board-mop500.h" ··· 396 393 }; 397 394 #endif 398 395 396 + 397 + static pin_cfg_t mop500_pins_uart0[] = { 398 + GPIO0_U0_CTSn | PIN_INPUT_PULLUP, 399 + GPIO1_U0_RTSn | PIN_OUTPUT_HIGH, 400 + GPIO2_U0_RXD | PIN_INPUT_PULLUP, 401 + GPIO3_U0_TXD | PIN_OUTPUT_HIGH, 402 + }; 403 + 404 + #define PRCC_K_SOFTRST_SET 0x18 405 + #define PRCC_K_SOFTRST_CLEAR 0x1C 406 + static void ux500_uart0_reset(void) 407 + { 408 + void __iomem *prcc_rst_set, *prcc_rst_clr; 409 + 410 + prcc_rst_set = (void __iomem *)IO_ADDRESS(U8500_CLKRST1_BASE + 411 + PRCC_K_SOFTRST_SET); 412 + prcc_rst_clr = (void __iomem *)IO_ADDRESS(U8500_CLKRST1_BASE + 413 + PRCC_K_SOFTRST_CLEAR); 414 + 415 + /* Activate soft reset PRCC_K_SOFTRST_CLEAR */ 416 + writel((readl(prcc_rst_clr) | 0x1), prcc_rst_clr); 417 + udelay(1); 418 + 419 + /* Release soft reset PRCC_K_SOFTRST_SET */ 420 + writel((readl(prcc_rst_set) | 0x1), prcc_rst_set); 421 + udelay(1); 422 + } 423 + 424 + static void ux500_uart0_init(void) 425 + { 426 + int ret; 427 + 428 + ret = nmk_config_pins(mop500_pins_uart0, 429 + ARRAY_SIZE(mop500_pins_uart0)); 430 + if (ret < 0) 431 + pr_err("pl011: uart pins_enable failed\n"); 432 + } 433 + 434 + static void ux500_uart0_exit(void) 435 + { 436 + int ret; 437 + 438 + ret = nmk_config_pins_sleep(mop500_pins_uart0, 439 + ARRAY_SIZE(mop500_pins_uart0)); 440 + if (ret < 0) 441 + pr_err("pl011: uart pins_disable failed\n"); 442 + } 443 + 399 444 static struct 
amba_pl011_data uart0_plat = { 400 445 #ifdef CONFIG_STE_DMA40 401 446 .dma_filter = stedma40_filter, 402 447 .dma_rx_param = &uart0_dma_cfg_rx, 403 448 .dma_tx_param = &uart0_dma_cfg_tx, 404 449 #endif 450 + .init = ux500_uart0_init, 451 + .exit = ux500_uart0_exit, 452 + .reset = ux500_uart0_reset, 405 453 }; 406 454 407 455 static struct amba_pl011_data uart1_plat = {
+10 -6
arch/arm/mm/proc-v7.S
··· 210 210 211 211 /* Suspend/resume support: derived from arch/arm/mach-s5pv210/sleep.S */ 212 212 .globl cpu_v7_suspend_size 213 - .equ cpu_v7_suspend_size, 4 * 8 213 + .equ cpu_v7_suspend_size, 4 * 9 214 214 #ifdef CONFIG_PM_SLEEP 215 215 ENTRY(cpu_v7_do_suspend) 216 216 stmfd sp!, {r4 - r11, lr} 217 217 mrc p15, 0, r4, c13, c0, 0 @ FCSE/PID 218 218 mrc p15, 0, r5, c13, c0, 1 @ Context ID 219 + mrc p15, 0, r6, c13, c0, 3 @ User r/o thread ID 220 + stmia r0!, {r4 - r6} 219 221 mrc p15, 0, r6, c3, c0, 0 @ Domain ID 220 222 mrc p15, 0, r7, c2, c0, 0 @ TTB 0 221 223 mrc p15, 0, r8, c2, c0, 1 @ TTB 1 222 224 mrc p15, 0, r9, c1, c0, 0 @ Control register 223 225 mrc p15, 0, r10, c1, c0, 1 @ Auxiliary control register 224 226 mrc p15, 0, r11, c1, c0, 2 @ Co-processor access control 225 - stmia r0, {r4 - r11} 227 + stmia r0, {r6 - r11} 226 228 ldmfd sp!, {r4 - r11, pc} 227 229 ENDPROC(cpu_v7_do_suspend) 228 230 ··· 232 230 mov ip, #0 233 231 mcr p15, 0, ip, c8, c7, 0 @ invalidate TLBs 234 232 mcr p15, 0, ip, c7, c5, 0 @ invalidate I cache 235 - ldmia r0, {r4 - r11} 233 + ldmia r0!, {r4 - r6} 236 234 mcr p15, 0, r4, c13, c0, 0 @ FCSE/PID 237 235 mcr p15, 0, r5, c13, c0, 1 @ Context ID 236 + mcr p15, 0, r6, c13, c0, 3 @ User r/o thread ID 237 + ldmia r0, {r6 - r11} 238 238 mcr p15, 0, r6, c3, c0, 0 @ Domain ID 239 239 mcr p15, 0, r7, c2, c0, 0 @ TTB 0 240 240 mcr p15, 0, r8, c2, c0, 1 @ TTB 1 ··· 422 418 .word cpu_v7_dcache_clean_area 423 419 .word cpu_v7_switch_mm 424 420 .word cpu_v7_set_pte_ext 425 - .word 0 426 - .word 0 427 - .word 0 421 + .word cpu_v7_suspend_size 422 + .word cpu_v7_do_suspend 423 + .word cpu_v7_do_resume 428 424 .size v7_processor_functions, . - v7_processor_functions 429 425 430 426 .section ".rodata"
+1
arch/arm/plat-iop/cp6.c
··· 18 18 */ 19 19 #include <linux/init.h> 20 20 #include <asm/traps.h> 21 + #include <asm/ptrace.h> 21 22 22 23 static int cp6_trap(struct pt_regs *regs, unsigned int instr) 23 24 {
+2
arch/arm/plat-samsung/include/plat/regs-serial.h
··· 224 224 #define S5PV210_UFSTAT_RXMASK (255<<0) 225 225 #define S5PV210_UFSTAT_RXSHIFT (0) 226 226 227 + #define NO_NEED_CHECK_CLKSRC 1 228 + 227 229 #ifndef __ASSEMBLY__ 228 230 229 231 /* struct s3c24xx_uart_clksrc
+1 -7
arch/m32r/include/asm/mmzone.h
··· 14 14 #define NODE_DATA(nid) (node_data[nid]) 15 15 16 16 #define node_localnr(pfn, nid) ((pfn) - NODE_DATA(nid)->node_start_pfn) 17 - #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 18 - #define node_end_pfn(nid) \ 19 - ({ \ 20 - pg_data_t *__pgdat = NODE_DATA(nid); \ 21 - __pgdat->node_start_pfn + __pgdat->node_spanned_pages - 1; \ 22 - }) 23 17 24 18 #define pmd_page(pmd) (pfn_to_page(pmd_val(pmd) >> PAGE_SHIFT)) 25 19 /* ··· 38 44 int node; 39 45 40 46 for (node = 0 ; node < MAX_NUMNODES ; node++) 41 - if (pfn >= node_start_pfn(node) && pfn <= node_end_pfn(node)) 47 + if (pfn >= node_start_pfn(node) && pfn < node_end_pfn(node)) 42 48 break; 43 49 44 50 return node;
+1
arch/mn10300/include/asm/uaccess.h
··· 15 15 * User space memory access functions 16 16 */ 17 17 #include <linux/thread_info.h> 18 + #include <linux/kernel.h> 18 19 #include <asm/page.h> 19 20 #include <asm/errno.h> 20 21
-7
arch/parisc/include/asm/mmzone.h
··· 14 14 15 15 #define NODE_DATA(nid) (&node_data[nid].pg_data) 16 16 17 - #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 18 - #define node_end_pfn(nid) \ 19 - ({ \ 20 - pg_data_t *__pgdat = NODE_DATA(nid); \ 21 - __pgdat->node_start_pfn + __pgdat->node_spanned_pages; \ 22 - }) 23 - 24 17 /* We have these possible memory map layouts: 25 18 * Astro: 0-3.75, 67.75-68, 4-64 26 19 * zx1: 0-1, 257-260, 4-256
+6 -3
arch/powerpc/boot/dts/p1022ds.dts
··· 209 209 wm8776:codec@1a { 210 210 compatible = "wlf,wm8776"; 211 211 reg = <0x1a>; 212 - /* MCLK source is a stand-alone oscillator */ 213 - clock-frequency = <12288000>; 212 + /* 213 + * clock-frequency will be set by U-Boot if 214 + * the clock is enabled. 215 + */ 214 216 }; 215 217 }; 216 218 ··· 282 280 codec-handle = <&wm8776>; 283 281 fsl,playback-dma = <&dma00>; 284 282 fsl,capture-dma = <&dma01>; 285 - fsl,fifo-depth = <16>; 283 + fsl,fifo-depth = <15>; 284 + fsl,ssi-asynchronous; 286 285 }; 287 286 288 287 dma@c300 {
-1
arch/powerpc/configs/pseries_defconfig
··· 148 148 CONFIG_SCSI_CXGB3_ISCSI=m 149 149 CONFIG_SCSI_CXGB4_ISCSI=m 150 150 CONFIG_SCSI_BNX2_ISCSI=m 151 - CONFIG_SCSI_BNX2_ISCSI=m 152 151 CONFIG_BE2ISCSI=m 153 152 CONFIG_SCSI_IBMVSCSI=y 154 153 CONFIG_SCSI_IBMVFC=m
-7
arch/powerpc/include/asm/mmzone.h
··· 38 38 #define memory_hotplug_max() memblock_end_of_DRAM() 39 39 #endif 40 40 41 - /* 42 - * Following are macros that each numa implmentation must define. 43 - */ 44 - 45 - #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 46 - #define node_end_pfn(nid) (NODE_DATA(nid)->node_end_pfn) 47 - 48 41 #else 49 42 #define memory_hotplug_max() memblock_end_of_DRAM() 50 43 #endif /* CONFIG_NEED_MULTIPLE_NODES */
+17 -12
arch/powerpc/kernel/rtas-rtc.c
··· 4 4 #include <linux/init.h> 5 5 #include <linux/rtc.h> 6 6 #include <linux/delay.h> 7 + #include <linux/ratelimit.h> 7 8 #include <asm/prom.h> 8 9 #include <asm/rtas.h> 9 10 #include <asm/time.h> ··· 30 29 } 31 30 } while (wait_time && (get_tb() < max_wait_tb)); 32 31 33 - if (error != 0 && printk_ratelimit()) { 34 - printk(KERN_WARNING "error: reading the clock failed (%d)\n", 35 - error); 32 + if (error != 0) { 33 + printk_ratelimited(KERN_WARNING 34 + "error: reading the clock failed (%d)\n", 35 + error); 36 36 return 0; 37 37 } 38 38 ··· 57 55 58 56 wait_time = rtas_busy_delay_time(error); 59 57 if (wait_time) { 60 - if (in_interrupt() && printk_ratelimit()) { 58 + if (in_interrupt()) { 61 59 memset(rtc_tm, 0, sizeof(struct rtc_time)); 62 - printk(KERN_WARNING "error: reading clock" 63 - " would delay interrupt\n"); 60 + printk_ratelimited(KERN_WARNING 61 + "error: reading clock " 62 + "would delay interrupt\n"); 64 63 return; /* delay not allowed */ 65 64 } 66 65 msleep(wait_time); 67 66 } 68 67 } while (wait_time && (get_tb() < max_wait_tb)); 69 68 70 - if (error != 0 && printk_ratelimit()) { 71 - printk(KERN_WARNING "error: reading the clock failed (%d)\n", 72 - error); 69 + if (error != 0) { 70 + printk_ratelimited(KERN_WARNING 71 + "error: reading the clock failed (%d)\n", 72 + error); 73 73 return; 74 74 } 75 75 ··· 103 99 } 104 100 } while (wait_time && (get_tb() < max_wait_tb)); 105 101 106 - if (error != 0 && printk_ratelimit()) 107 - printk(KERN_WARNING "error: setting the clock failed (%d)\n", 108 - error); 102 + if (error != 0) 103 + printk_ratelimited(KERN_WARNING 104 + "error: setting the clock failed (%d)\n", 105 + error); 109 106 110 107 return 0; 111 108 }
+31 -26
arch/powerpc/kernel/signal_32.c
··· 25 25 #include <linux/errno.h> 26 26 #include <linux/elf.h> 27 27 #include <linux/ptrace.h> 28 + #include <linux/ratelimit.h> 28 29 #ifdef CONFIG_PPC64 29 30 #include <linux/syscalls.h> 30 31 #include <linux/compat.h> ··· 893 892 printk("badframe in handle_rt_signal, regs=%p frame=%p newsp=%lx\n", 894 893 regs, frame, newsp); 895 894 #endif 896 - if (show_unhandled_signals && printk_ratelimit()) 897 - printk(KERN_INFO "%s[%d]: bad frame in handle_rt_signal32: " 898 - "%p nip %08lx lr %08lx\n", 899 - current->comm, current->pid, 900 - addr, regs->nip, regs->link); 895 + if (show_unhandled_signals) 896 + printk_ratelimited(KERN_INFO 897 + "%s[%d]: bad frame in handle_rt_signal32: " 898 + "%p nip %08lx lr %08lx\n", 899 + current->comm, current->pid, 900 + addr, regs->nip, regs->link); 901 901 902 902 force_sigsegv(sig, current); 903 903 return 0; ··· 1060 1058 return 0; 1061 1059 1062 1060 bad: 1063 - if (show_unhandled_signals && printk_ratelimit()) 1064 - printk(KERN_INFO "%s[%d]: bad frame in sys_rt_sigreturn: " 1065 - "%p nip %08lx lr %08lx\n", 1066 - current->comm, current->pid, 1067 - rt_sf, regs->nip, regs->link); 1061 + if (show_unhandled_signals) 1062 + printk_ratelimited(KERN_INFO 1063 + "%s[%d]: bad frame in sys_rt_sigreturn: " 1064 + "%p nip %08lx lr %08lx\n", 1065 + current->comm, current->pid, 1066 + rt_sf, regs->nip, regs->link); 1068 1067 1069 1068 force_sig(SIGSEGV, current); 1070 1069 return 0; ··· 1152 1149 * We kill the task with a SIGSEGV in this situation. 
1153 1150 */ 1154 1151 if (do_setcontext(ctx, regs, 1)) { 1155 - if (show_unhandled_signals && printk_ratelimit()) 1156 - printk(KERN_INFO "%s[%d]: bad frame in " 1157 - "sys_debug_setcontext: %p nip %08lx " 1158 - "lr %08lx\n", 1159 - current->comm, current->pid, 1160 - ctx, regs->nip, regs->link); 1152 + if (show_unhandled_signals) 1153 + printk_ratelimited(KERN_INFO "%s[%d]: bad frame in " 1154 + "sys_debug_setcontext: %p nip %08lx " 1155 + "lr %08lx\n", 1156 + current->comm, current->pid, 1157 + ctx, regs->nip, regs->link); 1161 1158 1162 1159 force_sig(SIGSEGV, current); 1163 1160 goto out; ··· 1239 1236 printk("badframe in handle_signal, regs=%p frame=%p newsp=%lx\n", 1240 1237 regs, frame, newsp); 1241 1238 #endif 1242 - if (show_unhandled_signals && printk_ratelimit()) 1243 - printk(KERN_INFO "%s[%d]: bad frame in handle_signal32: " 1244 - "%p nip %08lx lr %08lx\n", 1245 - current->comm, current->pid, 1246 - frame, regs->nip, regs->link); 1239 + if (show_unhandled_signals) 1240 + printk_ratelimited(KERN_INFO 1241 + "%s[%d]: bad frame in handle_signal32: " 1242 + "%p nip %08lx lr %08lx\n", 1243 + current->comm, current->pid, 1244 + frame, regs->nip, regs->link); 1247 1245 1248 1246 force_sigsegv(sig, current); 1249 1247 return 0; ··· 1292 1288 return 0; 1293 1289 1294 1290 badframe: 1295 - if (show_unhandled_signals && printk_ratelimit()) 1296 - printk(KERN_INFO "%s[%d]: bad frame in sys_sigreturn: " 1297 - "%p nip %08lx lr %08lx\n", 1298 - current->comm, current->pid, 1299 - addr, regs->nip, regs->link); 1291 + if (show_unhandled_signals) 1292 + printk_ratelimited(KERN_INFO 1293 + "%s[%d]: bad frame in sys_sigreturn: " 1294 + "%p nip %08lx lr %08lx\n", 1295 + current->comm, current->pid, 1296 + addr, regs->nip, regs->link); 1300 1297 1301 1298 force_sig(SIGSEGV, current); 1302 1299 return 0;
+9 -8
arch/powerpc/kernel/signal_64.c
··· 24 24 #include <linux/elf.h> 25 25 #include <linux/ptrace.h> 26 26 #include <linux/module.h> 27 + #include <linux/ratelimit.h> 27 28 28 29 #include <asm/sigcontext.h> 29 30 #include <asm/ucontext.h> ··· 381 380 printk("badframe in sys_rt_sigreturn, regs=%p uc=%p &uc->uc_mcontext=%p\n", 382 381 regs, uc, &uc->uc_mcontext); 383 382 #endif 384 - if (show_unhandled_signals && printk_ratelimit()) 385 - printk(regs->msr & MSR_64BIT ? fmt64 : fmt32, 386 - current->comm, current->pid, "rt_sigreturn", 387 - (long)uc, regs->nip, regs->link); 383 + if (show_unhandled_signals) 384 + printk_ratelimited(regs->msr & MSR_64BIT ? fmt64 : fmt32, 385 + current->comm, current->pid, "rt_sigreturn", 386 + (long)uc, regs->nip, regs->link); 388 387 389 388 force_sig(SIGSEGV, current); 390 389 return 0; ··· 469 468 printk("badframe in setup_rt_frame, regs=%p frame=%p newsp=%lx\n", 470 469 regs, frame, newsp); 471 470 #endif 472 - if (show_unhandled_signals && printk_ratelimit()) 473 - printk(regs->msr & MSR_64BIT ? fmt64 : fmt32, 474 - current->comm, current->pid, "setup_rt_frame", 475 - (long)frame, regs->nip, regs->link); 471 + if (show_unhandled_signals) 472 + printk_ratelimited(regs->msr & MSR_64BIT ? fmt64 : fmt32, 473 + current->comm, current->pid, "setup_rt_frame", 474 + (long)frame, regs->nip, regs->link); 476 475 477 476 force_sigsegv(signr, current); 478 477 return 0;
+11 -13
arch/powerpc/kernel/traps.c
··· 34 34 #include <linux/bug.h> 35 35 #include <linux/kdebug.h> 36 36 #include <linux/debugfs.h> 37 + #include <linux/ratelimit.h> 37 38 38 39 #include <asm/emulated_ops.h> 39 40 #include <asm/pgtable.h> ··· 198 197 if (die("Exception in kernel mode", regs, signr)) 199 198 return; 200 199 } else if (show_unhandled_signals && 201 - unhandled_signal(current, signr) && 202 - printk_ratelimit()) { 203 - printk(regs->msr & MSR_64BIT ? fmt64 : fmt32, 204 - current->comm, current->pid, signr, 205 - addr, regs->nip, regs->link, code); 206 - } 200 + unhandled_signal(current, signr)) { 201 + printk_ratelimited(regs->msr & MSR_64BIT ? fmt64 : fmt32, 202 + current->comm, current->pid, signr, 203 + addr, regs->nip, regs->link, code); 204 + } 207 205 208 206 memset(&info, 0, sizeof(info)); 209 207 info.si_signo = signr; ··· 425 425 unsigned long reason = mcsr; 426 426 int recoverable = 1; 427 427 428 - if (reason & MCSR_BUS_RBERR) { 428 + if (reason & MCSR_LD) { 429 429 recoverable = fsl_rio_mcheck_exception(regs); 430 430 if (recoverable == 1) 431 431 goto silent_out; ··· 1342 1342 } else { 1343 1343 /* didn't recognize the instruction */ 1344 1344 /* XXX quick hack for now: set the non-Java bit in the VSCR */ 1345 - if (printk_ratelimit()) 1346 - printk(KERN_ERR "Unrecognized altivec instruction " 1347 - "in %s at %lx\n", current->comm, regs->nip); 1345 + printk_ratelimited(KERN_ERR "Unrecognized altivec instruction " 1346 + "in %s at %lx\n", current->comm, regs->nip); 1348 1347 current->thread.vscr.u[3] |= 0x10000; 1349 1348 } 1350 1349 } ··· 1547 1548 1548 1549 void ppc_warn_emulated_print(const char *type) 1549 1550 { 1550 - if (printk_ratelimit()) 1551 - pr_warning("%s used emulated %s instruction\n", current->comm, 1552 - type); 1551 + pr_warn_ratelimited("%s used emulated %s instruction\n", current->comm, 1552 + type); 1553 1553 } 1554 1554 1555 1555 static int __init ppc_warn_emulated_init(void)
+5 -5
arch/powerpc/mm/fault.c
··· 31 31 #include <linux/kdebug.h> 32 32 #include <linux/perf_event.h> 33 33 #include <linux/magic.h> 34 + #include <linux/ratelimit.h> 34 35 35 36 #include <asm/firmware.h> 36 37 #include <asm/page.h> ··· 347 346 return 0; 348 347 } 349 348 350 - if (is_exec && (error_code & DSISR_PROTFAULT) 351 - && printk_ratelimit()) 352 - printk(KERN_CRIT "kernel tried to execute NX-protected" 353 - " page (%lx) - exploit attempt? (uid: %d)\n", 354 - address, current_uid()); 349 + if (is_exec && (error_code & DSISR_PROTFAULT)) 350 + printk_ratelimited(KERN_CRIT "kernel tried to execute NX-protected" 351 + " page (%lx) - exploit attempt? (uid: %d)\n", 352 + address, current_uid()); 355 353 356 354 return SIGSEGV; 357 355
+17 -16
arch/powerpc/sysdev/fsl_rio.c
··· 283 283 #ifdef CONFIG_E500 284 284 int fsl_rio_mcheck_exception(struct pt_regs *regs) 285 285 { 286 - const struct exception_table_entry *entry = NULL; 287 - unsigned long reason = mfspr(SPRN_MCSR); 286 + const struct exception_table_entry *entry; 287 + unsigned long reason; 288 288 289 - if (reason & MCSR_BUS_RBERR) { 290 - reason = in_be32((u32 *)(rio_regs_win + RIO_LTLEDCSR)); 291 - if (reason & (RIO_LTLEDCSR_IER | RIO_LTLEDCSR_PRT)) { 292 - /* Check if we are prepared to handle this fault */ 293 - entry = search_exception_tables(regs->nip); 294 - if (entry) { 295 - pr_debug("RIO: %s - MC Exception handled\n", 296 - __func__); 297 - out_be32((u32 *)(rio_regs_win + RIO_LTLEDCSR), 298 - 0); 299 - regs->msr |= MSR_RI; 300 - regs->nip = entry->fixup; 301 - return 1; 302 - } 289 + if (!rio_regs_win) 290 + return 0; 291 + 292 + reason = in_be32((u32 *)(rio_regs_win + RIO_LTLEDCSR)); 293 + if (reason & (RIO_LTLEDCSR_IER | RIO_LTLEDCSR_PRT)) { 294 + /* Check if we are prepared to handle this fault */ 295 + entry = search_exception_tables(regs->nip); 296 + if (entry) { 297 + pr_debug("RIO: %s - MC Exception handled\n", 298 + __func__); 299 + out_be32((u32 *)(rio_regs_win + RIO_LTLEDCSR), 300 + 0); 301 + regs->msr |= MSR_RI; 302 + regs->nip = entry->fixup; 303 + return 1; 303 304 } 304 305 } 305 306
+5 -6
arch/powerpc/sysdev/mpic.c
··· 29 29 #include <linux/pci.h> 30 30 #include <linux/slab.h> 31 31 #include <linux/syscore_ops.h> 32 + #include <linux/ratelimit.h> 32 33 33 34 #include <asm/ptrace.h> 34 35 #include <asm/signal.h> ··· 1649 1648 return NO_IRQ; 1650 1649 } 1651 1650 if (unlikely(mpic->protected && test_bit(src, mpic->protected))) { 1652 - if (printk_ratelimit()) 1653 - printk(KERN_WARNING "%s: Got protected source %d !\n", 1654 - mpic->name, (int)src); 1651 + printk_ratelimited(KERN_WARNING "%s: Got protected source %d !\n", 1652 + mpic->name, (int)src); 1655 1653 mpic_eoi(mpic); 1656 1654 return NO_IRQ; 1657 1655 } ··· 1688 1688 return NO_IRQ; 1689 1689 } 1690 1690 if (unlikely(mpic->protected && test_bit(src, mpic->protected))) { 1691 - if (printk_ratelimit()) 1692 - printk(KERN_WARNING "%s: Got protected source %d !\n", 1693 - mpic->name, (int)src); 1691 + printk_ratelimited(KERN_WARNING "%s: Got protected source %d !\n", 1692 + mpic->name, (int)src); 1694 1693 return NO_IRQ; 1695 1694 } 1696 1695
+1
arch/s390/Kconfig
··· 579 579 def_bool y 580 580 prompt "s390 guest support for KVM (EXPERIMENTAL)" 581 581 depends on 64BIT && EXPERIMENTAL 582 + select VIRTUALIZATION 582 583 select VIRTIO 583 584 select VIRTIO_RING 584 585 select VIRTIO_CONSOLE
+2 -2
arch/s390/kernel/smp.c
··· 262 262 263 263 memset(&parms.orvals, 0, sizeof(parms.orvals)); 264 264 memset(&parms.andvals, 0xff, sizeof(parms.andvals)); 265 - parms.orvals[cr] = 1 << bit; 265 + parms.orvals[cr] = 1UL << bit; 266 266 on_each_cpu(smp_ctl_bit_callback, &parms, 1); 267 267 } 268 268 EXPORT_SYMBOL(smp_ctl_set_bit); ··· 276 276 277 277 memset(&parms.orvals, 0, sizeof(parms.orvals)); 278 278 memset(&parms.andvals, 0xff, sizeof(parms.andvals)); 279 - parms.andvals[cr] = ~(1L << bit); 279 + parms.andvals[cr] = ~(1UL << bit); 280 280 on_each_cpu(smp_ctl_bit_callback, &parms, 1); 281 281 } 282 282 EXPORT_SYMBOL(smp_ctl_clear_bit);
+7 -1
arch/s390/oprofile/init.c
··· 25 25 26 26 #include "hwsampler.h" 27 27 28 - #define DEFAULT_INTERVAL 4096 28 + #define DEFAULT_INTERVAL 4127518 29 29 30 30 #define DEFAULT_SDBT_BLOCKS 1 31 31 #define DEFAULT_SDB_BLOCKS 511 ··· 150 150 oprofile_max_interval = hwsampler_query_max_interval(); 151 151 if (oprofile_max_interval == 0) 152 152 return -ENODEV; 153 + 154 + /* The initial value should be sane */ 155 + if (oprofile_hw_interval < oprofile_min_interval) 156 + oprofile_hw_interval = oprofile_min_interval; 157 + if (oprofile_hw_interval > oprofile_max_interval) 158 + oprofile_hw_interval = oprofile_max_interval; 153 159 154 160 if (oprofile_timer_init(ops)) 155 161 return -ENODEV;
+5
arch/sh/Kconfig
··· 348 348 select SYS_SUPPORTS_CMT 349 349 select ARCH_WANT_OPTIONAL_GPIOLIB 350 350 select USB_ARCH_HAS_OHCI 351 + select USB_OHCI_SH if USB_OHCI_HCD 351 352 help 352 353 Select SH7720 if you have a SH3-DSP SH7720 CPU. 353 354 ··· 358 357 select CPU_HAS_DSP 359 358 select SYS_SUPPORTS_CMT 360 359 select USB_ARCH_HAS_OHCI 360 + select USB_OHCI_SH if USB_OHCI_HCD 361 361 help 362 362 Select SH7721 if you have a SH3-DSP SH7721 CPU. 363 363 ··· 442 440 bool "Support SH7763 processor" 443 441 select CPU_SH4A 444 442 select USB_ARCH_HAS_OHCI 443 + select USB_OHCI_SH if USB_OHCI_HCD 445 444 help 446 445 Select SH7763 if you have a SH4A SH7763(R5S77631) CPU. 447 446 ··· 470 467 select GENERIC_CLOCKEVENTS_BROADCAST if SMP 471 468 select ARCH_WANT_OPTIONAL_GPIOLIB 472 469 select USB_ARCH_HAS_OHCI 470 + select USB_OHCI_SH if USB_OHCI_HCD 473 471 select USB_ARCH_HAS_EHCI 472 + select USB_EHCI_SH if USB_EHCI_HCD 474 473 475 474 config CPU_SUBTYPE_SHX3 476 475 bool "Support SH-X3 processor"
+3 -5
arch/sh/configs/sh7757lcr_defconfig
··· 9 9 CONFIG_TASK_IO_ACCOUNTING=y 10 10 CONFIG_LOG_BUF_SHIFT=14 11 11 CONFIG_BLK_DEV_INITRD=y 12 - # CONFIG_CC_OPTIMIZE_FOR_SIZE is not set 13 12 # CONFIG_SYSCTL_SYSCALL is not set 14 13 CONFIG_KALLSYMS_ALL=y 15 14 CONFIG_SLAB=y ··· 38 39 CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug" 39 40 # CONFIG_FW_LOADER is not set 40 41 CONFIG_MTD=y 41 - CONFIG_MTD_CONCAT=y 42 - CONFIG_MTD_PARTITIONS=y 43 42 CONFIG_MTD_CHAR=y 44 43 CONFIG_MTD_BLOCK=y 45 44 CONFIG_MTD_M25P80=y ··· 53 56 # CONFIG_KEYBOARD_ATKBD is not set 54 57 # CONFIG_MOUSE_PS2 is not set 55 58 # CONFIG_SERIO is not set 59 + # CONFIG_LEGACY_PTYS is not set 56 60 CONFIG_SERIAL_SH_SCI=y 57 61 CONFIG_SERIAL_SH_SCI_NR_UARTS=3 58 62 CONFIG_SERIAL_SH_SCI_CONSOLE=y 59 - # CONFIG_LEGACY_PTYS is not set 60 63 # CONFIG_HW_RANDOM is not set 61 64 CONFIG_SPI=y 62 65 CONFIG_SPI_SH=y 63 66 # CONFIG_HWMON is not set 64 - CONFIG_MFD_SH_MOBILE_SDHI=y 65 67 CONFIG_USB=y 66 68 CONFIG_USB_EHCI_HCD=y 69 + CONFIG_USB_EHCI_SH=y 67 70 CONFIG_USB_OHCI_HCD=y 71 + CONFIG_USB_OHCI_SH=y 68 72 CONFIG_USB_STORAGE=y 69 73 CONFIG_MMC=y 70 74 CONFIG_MMC_SDHI=y
-4
arch/sh/include/asm/mmzone.h
··· 9 9 extern struct pglist_data *node_data[]; 10 10 #define NODE_DATA(nid) (node_data[nid]) 11 11 12 - #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 13 - #define node_end_pfn(nid) (NODE_DATA(nid)->node_start_pfn + \ 14 - NODE_DATA(nid)->node_spanned_pages) 15 - 16 12 static inline int pfn_to_nid(unsigned long pfn) 17 13 { 18 14 int nid;
+78 -28
arch/sh/kernel/cpu/sh4a/setup-sh7757.c
··· 183 183 { 184 184 .slave_id = SHDMA_SLAVE_SCIF2_RX, 185 185 .addr = 0x1f4b0014, 186 - .chcr = SM_INC | 0x800 | 0x40000000 | 186 + .chcr = DM_INC | 0x800 | 0x40000000 | 187 187 TS_INDEX2VAL(XMIT_SZ_8BIT), 188 188 .mid_rid = 0x22, 189 189 }, ··· 197 197 { 198 198 .slave_id = SHDMA_SLAVE_SCIF3_RX, 199 199 .addr = 0x1f4c0014, 200 - .chcr = SM_INC | 0x800 | 0x40000000 | 200 + .chcr = DM_INC | 0x800 | 0x40000000 | 201 201 TS_INDEX2VAL(XMIT_SZ_8BIT), 202 202 .mid_rid = 0x2a, 203 203 }, ··· 211 211 { 212 212 .slave_id = SHDMA_SLAVE_SCIF4_RX, 213 213 .addr = 0x1f4d0014, 214 - .chcr = SM_INC | 0x800 | 0x40000000 | 214 + .chcr = DM_INC | 0x800 | 0x40000000 | 215 215 TS_INDEX2VAL(XMIT_SZ_8BIT), 216 216 .mid_rid = 0x42, 217 217 }, ··· 228 228 { 229 229 .slave_id = SHDMA_SLAVE_RIIC0_RX, 230 230 .addr = 0x1e500013, 231 - .chcr = SM_INC | 0x800 | 0x40000000 | 231 + .chcr = DM_INC | 0x800 | 0x40000000 | 232 232 TS_INDEX2VAL(XMIT_SZ_8BIT), 233 233 .mid_rid = 0x22, 234 234 }, ··· 242 242 { 243 243 .slave_id = SHDMA_SLAVE_RIIC1_RX, 244 244 .addr = 0x1e510013, 245 - .chcr = SM_INC | 0x800 | 0x40000000 | 245 + .chcr = DM_INC | 0x800 | 0x40000000 | 246 246 TS_INDEX2VAL(XMIT_SZ_8BIT), 247 247 .mid_rid = 0x2a, 248 248 }, ··· 256 256 { 257 257 .slave_id = SHDMA_SLAVE_RIIC2_RX, 258 258 .addr = 0x1e520013, 259 - .chcr = SM_INC | 0x800 | 0x40000000 | 259 + .chcr = DM_INC | 0x800 | 0x40000000 | 260 260 TS_INDEX2VAL(XMIT_SZ_8BIT), 261 261 .mid_rid = 0xa2, 262 262 }, ··· 265 265 .addr = 0x1e530012, 266 266 .chcr = SM_INC | 0x800 | 0x40000000 | 267 267 TS_INDEX2VAL(XMIT_SZ_8BIT), 268 - .mid_rid = 0xab, 268 + .mid_rid = 0xa9, 269 269 }, 270 270 { 271 271 .slave_id = SHDMA_SLAVE_RIIC3_RX, 272 272 .addr = 0x1e530013, 273 - .chcr = SM_INC | 0x800 | 0x40000000 | 273 + .chcr = DM_INC | 0x800 | 0x40000000 | 274 274 TS_INDEX2VAL(XMIT_SZ_8BIT), 275 275 .mid_rid = 0xaf, 276 276 }, ··· 279 279 .addr = 0x1e540012, 280 280 .chcr = SM_INC | 0x800 | 0x40000000 | 281 281 TS_INDEX2VAL(XMIT_SZ_8BIT), 282 - .mid_rid = 0xc1, 282 + .mid_rid = 0xc5, 283 283 }, 284 284 { 285 285 .slave_id = SHDMA_SLAVE_RIIC4_RX, 286 286 .addr = 0x1e540013, 287 - .chcr = SM_INC | 0x800 | 0x40000000 | 287 + .chcr = DM_INC | 0x800 | 0x40000000 | 288 288 TS_INDEX2VAL(XMIT_SZ_8BIT), 289 - .mid_rid = 0xc2, 289 + .mid_rid = 0xc6, 290 290 }, 291 291 }; 292 292 ··· 301 301 { 302 302 .slave_id = SHDMA_SLAVE_RIIC5_RX, 303 303 .addr = 0x1e550013, 304 - .chcr = SM_INC | 0x800 | 0x40000000 | 304 + .chcr = DM_INC | 0x800 | 0x40000000 | 305 305 TS_INDEX2VAL(XMIT_SZ_8BIT), 306 306 .mid_rid = 0x22, 307 307 }, ··· 315 315 { 316 316 .slave_id = SHDMA_SLAVE_RIIC6_RX, 317 317 .addr = 0x1e560013, 318 - .chcr = SM_INC | 0x800 | 0x40000000 | 318 + .chcr = DM_INC | 0x800 | 0x40000000 | 319 319 TS_INDEX2VAL(XMIT_SZ_8BIT), 320 320 .mid_rid = 0x2a, 321 321 }, ··· 329 329 { 330 330 .slave_id = SHDMA_SLAVE_RIIC7_RX, 331 331 .addr = 0x1e570013, 332 - .chcr = SM_INC | 0x800 | 0x40000000 | 332 + .chcr = DM_INC | 0x800 | 0x40000000 | 333 333 TS_INDEX2VAL(XMIT_SZ_8BIT), 334 334 .mid_rid = 0x42, 335 335 }, ··· 343 343 { 344 344 .slave_id = SHDMA_SLAVE_RIIC8_RX, 345 345 .addr = 0x1e580013, 346 - .chcr = SM_INC | 0x800 | 0x40000000 | 346 + .chcr = DM_INC | 0x800 | 0x40000000 | 347 347 TS_INDEX2VAL(XMIT_SZ_8BIT), 348 348 .mid_rid = 0x46, 349 349 }, ··· 357 357 { 358 358 .slave_id = SHDMA_SLAVE_RIIC9_RX, 359 359 .addr = 0x1e590013, 360 - .chcr = SM_INC | 0x800 | 0x40000000 | 360 + .chcr = DM_INC | 0x800 | 0x40000000 | 361 361 TS_INDEX2VAL(XMIT_SZ_8BIT), 362 362 .mid_rid = 0x52, 363 363 }, ··· 659 659 .resource = spi0_resources, 660 660 }; 661 661 662 + static struct resource usb_ehci_resources[] = { 663 + [0] = { 664 + .start = 0xfe4f1000, 665 + .end = 0xfe4f10ff, 666 + .flags = IORESOURCE_MEM, 667 + }, 668 + [1] = { 669 + .start = 57, 670 + .end = 57, 671 + .flags = IORESOURCE_IRQ, 672 + }, 673 + }; 674 + 675 + static struct platform_device usb_ehci_device = { 676 + .name = "sh_ehci", 677 + .id = -1, 678 + .dev = { 679 + .dma_mask = &usb_ehci_device.dev.coherent_dma_mask, 680 + .coherent_dma_mask = DMA_BIT_MASK(32), 681 + }, 682 + .num_resources = ARRAY_SIZE(usb_ehci_resources), 683 + .resource = usb_ehci_resources, 684 + }; 685 + 686 + static struct resource usb_ohci_resources[] = { 687 + [0] = { 688 + .start = 0xfe4f1800, 689 + .end = 0xfe4f18ff, 690 + .flags = IORESOURCE_MEM, 691 + }, 692 + [1] = { 693 + .start = 57, 694 + .end = 57, 695 + .flags = IORESOURCE_IRQ, 696 + }, 697 + }; 698 + 699 + static struct platform_device usb_ohci_device = { 700 + .name = "sh_ohci", 701 + .id = -1, 702 + .dev = { 703 + .dma_mask = &usb_ohci_device.dev.coherent_dma_mask, 704 + .coherent_dma_mask = DMA_BIT_MASK(32), 705 + }, 706 + .num_resources = ARRAY_SIZE(usb_ohci_resources), 707 + .resource = usb_ohci_resources, 708 + }; 709 + 662 710 static struct platform_device *sh7757_devices[] __initdata = { 663 711 &scif2_device, 664 712 &scif3_device, ··· 718 670 &dma2_device, 719 671 &dma3_device, 720 672 &spi0_device, 673 + &usb_ehci_device, 674 + &usb_ohci_device, 721 675 }; 722 676 723 677 static int __init sh7757_devices_setup(void) ··· 1089 1039 1090 1040 /* Support for external interrupt pins in IRQ mode */ 1091 1041 static struct intc_vect vectors_irq0123[] __initdata = { 1092 - INTC_VECT(IRQ0, 0x240), INTC_VECT(IRQ1, 0x280), 1093 - INTC_VECT(IRQ2, 0x2c0), INTC_VECT(IRQ3, 0x300), 1042 + INTC_VECT(IRQ0, 0x200), INTC_VECT(IRQ1, 0x240), 1043 + INTC_VECT(IRQ2, 0x280), INTC_VECT(IRQ3, 0x2c0), 1094 1044 }; 1095 1045 1096 1046 static struct intc_vect vectors_irq4567[] __initdata = { 1097 - INTC_VECT(IRQ4, 0x340), INTC_VECT(IRQ5, 0x380), 1098 - INTC_VECT(IRQ6, 0x3c0), INTC_VECT(IRQ7, 0x200), 1047 + INTC_VECT(IRQ4, 0x300), INTC_VECT(IRQ5, 0x340), 1048 + INTC_VECT(IRQ6, 0x380), INTC_VECT(IRQ7, 0x3c0), 1099 1049 }; 1100 1050 1101 1051 static struct intc_sense_reg sense_registers[] __initdata = { ··· 1129 1079 }; 1130 1080 1131 1081 static struct intc_vect vectors_irl4567[] __initdata = { 1132 - INTC_VECT(IRL4_LLLL, 0xb00), INTC_VECT(IRL4_LLLH, 0xb20), 1133 - INTC_VECT(IRL4_LLHL, 0xb40), INTC_VECT(IRL4_LLHH, 0xb60), 1134 - INTC_VECT(IRL4_LHLL, 0xb80), INTC_VECT(IRL4_LHLH, 0xba0), 1135 - INTC_VECT(IRL4_LHHL, 0xbc0), INTC_VECT(IRL4_LHHH, 0xbe0), 1136 - INTC_VECT(IRL4_HLLL, 0xc00), INTC_VECT(IRL4_HLLH, 0xc20), 1137 - INTC_VECT(IRL4_HLHL, 0xc40), INTC_VECT(IRL4_HLHH, 0xc60), 1138 - INTC_VECT(IRL4_HHLL, 0xc80), INTC_VECT(IRL4_HHLH, 0xca0), 1139 - INTC_VECT(IRL4_HHHL, 0xcc0), 1082 + INTC_VECT(IRL4_LLLL, 0x200), INTC_VECT(IRL4_LLLH, 0x220), 1083 + INTC_VECT(IRL4_LLHL, 0x240), INTC_VECT(IRL4_LLHH, 0x260), 1084 + INTC_VECT(IRL4_LHLL, 0x280), INTC_VECT(IRL4_LHLH, 0x2a0), 1085 + INTC_VECT(IRL4_LHHL, 0x2c0), INTC_VECT(IRL4_LHHH, 0x2e0), 1086 + INTC_VECT(IRL4_HLLL, 0x300), INTC_VECT(IRL4_HLLH, 0x320), 1087 + INTC_VECT(IRL4_HLHL, 0x340), INTC_VECT(IRL4_HLHH, 0x360), 1088 + INTC_VECT(IRL4_HHLL, 0x380), INTC_VECT(IRL4_HHLH, 0x3a0), 1089 + INTC_VECT(IRL4_HHHL, 0x3c0), 1140 1090 }; 1141 1091 1142 1092 static DECLARE_INTC_DESC(intc_desc_irl0123, "sh7757-irl0123", vectors_irl0123,
+3 -3
arch/sh/kernel/irq.c
··· 13 13 #include <linux/seq_file.h> 14 14 #include <linux/ftrace.h> 15 15 #include <linux/delay.h> 16 + #include <linux/ratelimit.h> 16 17 #include <asm/processor.h> 17 18 #include <asm/machvec.h> 18 19 #include <asm/uaccess.h> ··· 269 268 unsigned int newcpu = cpumask_any_and(data->affinity, 270 269 cpu_online_mask); 271 270 if (newcpu >= nr_cpu_ids) { 272 - if (printk_ratelimit()) 273 - printk(KERN_INFO "IRQ%u no longer affine to CPU%u\n", 274 - irq, cpu); 271 + pr_info_ratelimited("IRQ%u no longer affine to CPU%u\n", 272 + irq, cpu); 275 273 276 274 cpumask_setall(data->affinity); 277 275 newcpu = cpumask_any_and(data->affinity,
+5 -4
arch/sh/mm/alignment.c
··· 13 13 #include <linux/seq_file.h> 14 14 #include <linux/proc_fs.h> 15 15 #include <linux/uaccess.h> 16 + #include <linux/ratelimit.h> 16 17 #include <asm/alignment.h> 17 18 #include <asm/processor.h> 18 19 ··· 96 95 void unaligned_fixups_notify(struct task_struct *tsk, insn_size_t insn, 97 96 struct pt_regs *regs) 98 97 { 99 - if (user_mode(regs) && (se_usermode & UM_WARN) && printk_ratelimit()) 100 - pr_notice("Fixing up unaligned userspace access " 98 + if (user_mode(regs) && (se_usermode & UM_WARN)) 99 + pr_notice_ratelimited("Fixing up unaligned userspace access " 101 100 "in \"%s\" pid=%d pc=0x%p ins=0x%04hx\n", 102 101 tsk->comm, task_pid_nr(tsk), 103 102 (void *)instruction_pointer(regs), insn); 104 - else if (se_kernmode_warn && printk_ratelimit()) 105 - pr_notice("Fixing up unaligned kernel access " 103 + else if (se_kernmode_warn) 104 + pr_notice_ratelimited("Fixing up unaligned kernel access " 106 105 "in \"%s\" pid=%d pc=0x%p ins=0x%04hx\n", 107 106 tsk->comm, task_pid_nr(tsk), 108 107 (void *)instruction_pointer(regs), insn);
-2
arch/sparc/include/asm/mmzone.h
··· 8 8 extern struct pglist_data *node_data[]; 9 9 10 10 #define NODE_DATA(nid) (node_data[nid]) 11 - #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 12 - #define node_end_pfn(nid) (NODE_DATA(nid)->node_end_pfn) 13 11 14 12 extern int numa_cpu_lookup_table[]; 15 13 extern cpumask_t numa_cpumask_lookup_table[];
-11
arch/tile/include/asm/mmzone.h
··· 40 40 return highbits_to_node[__pfn_to_highbits(pfn)]; 41 41 } 42 42 43 - /* 44 - * Following are macros that each numa implmentation must define. 45 - */ 46 - 47 - #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 48 - #define node_end_pfn(nid) \ 49 - ({ \ 50 - pg_data_t *__pgdat = NODE_DATA(nid); \ 51 - __pgdat->node_start_pfn + __pgdat->node_spanned_pages; \ 52 - }) 53 - 54 43 #define kern_addr_valid(kaddr) virt_addr_valid((void *)kaddr) 55 44 56 45 static inline int pfn_valid(int pfn)
+6
arch/um/include/asm/percpu.h
··· 1 + #ifndef __UM_PERCPU_H 2 + #define __UM_PERCPU_H 3 + 4 + #include <asm-generic/percpu.h> 5 + 6 + #endif /* __UM_PERCPU_H */
+1 -1
arch/x86/include/asm/apb_timer.h
··· 62 62 #else /* CONFIG_APB_TIMER */ 63 63 64 64 static inline unsigned long apbt_quick_calibrate(void) {return 0; } 65 - static inline void apbt_time_init(void) {return 0; } 65 + static inline void apbt_time_init(void) { } 66 66 67 67 #endif 68 68 #endif /* ASM_X86_APBT_H */
-11
arch/x86/include/asm/mmzone_32.h
··· 48 48 #endif 49 49 } 50 50 51 - /* 52 - * Following are macros that each numa implmentation must define. 53 - */ 54 - 55 - #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 56 - #define node_end_pfn(nid) \ 57 - ({ \ 58 - pg_data_t *__pgdat = NODE_DATA(nid); \ 59 - __pgdat->node_start_pfn + __pgdat->node_spanned_pages; \ 60 - }) 61 - 62 51 static inline int pfn_valid(int pfn) 63 52 { 64 53 int nid = pfn_to_nid(pfn);
-3
arch/x86/include/asm/mmzone_64.h
··· 13 13 14 14 #define NODE_DATA(nid) (node_data[nid]) 15 15 16 - #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 17 - #define node_end_pfn(nid) (NODE_DATA(nid)->node_start_pfn + \ 18 - NODE_DATA(nid)->node_spanned_pages) 19 16 #endif 20 17 #endif /* _ASM_X86_MMZONE_64_H */
+7 -5
arch/x86/kvm/emulate.c
··· 3372 3372 int def_op_bytes, def_ad_bytes, goffset, simd_prefix; 3373 3373 bool op_prefix = false; 3374 3374 struct opcode opcode; 3375 - struct operand memop = { .type = OP_NONE }; 3375 + struct operand memop = { .type = OP_NONE }, *memopp = NULL; 3376 3376 3377 3377 c->eip = ctxt->eip; 3378 3378 c->fetch.start = c->eip; ··· 3547 3547 if (memop.type == OP_MEM && c->ad_bytes != 8) 3548 3548 memop.addr.mem.ea = (u32)memop.addr.mem.ea; 3549 3549 3550 - if (memop.type == OP_MEM && c->rip_relative) 3551 - memop.addr.mem.ea += c->eip; 3552 - 3553 3550 /* 3554 3551 * Decode and fetch the source operand: register, memory 3555 3552 * or immediate. ··· 3568 3571 c->op_bytes; 3569 3572 srcmem_common: 3570 3573 c->src = memop; 3574 + memopp = &c->src; 3571 3575 break; 3572 3576 case SrcImmU16: 3573 3577 rc = decode_imm(ctxt, &c->src, 2, false); ··· 3665 3667 case DstMem: 3666 3668 case DstMem64: 3667 3669 c->dst = memop; 3670 + memopp = &c->dst; 3668 3671 if ((c->d & DstMask) == DstMem64) 3669 3672 c->dst.bytes = 8; 3670 3673 else ··· 3699 3700 /* Special instructions do their own operand decoding. */ 3700 3701 default: 3701 3702 c->dst.type = OP_NONE; /* Disable writeback. */ 3702 - return 0; 3703 + break; 3703 3704 } 3704 3705 3705 3706 done: 3707 + if (memopp && memopp->type == OP_MEM && c->rip_relative) 3708 + memopp->addr.mem.ea += c->eip; 3709 + 3706 3710 return (rc == X86EMUL_UNHANDLEABLE) ? EMULATION_FAILED : EMULATION_OK; 3707 3711 } 3708 3712
+1 -1
arch/x86/pci/acpi.c
··· 188 188 return false; 189 189 } 190 190 191 - static void coalesce_windows(struct pci_root_info *info, int type) 191 + static void coalesce_windows(struct pci_root_info *info, unsigned long type) 192 192 { 193 193 int i, j; 194 194 struct resource *res1, *res2;
+2 -2
block/blk-throttle.c
··· 927 927 928 928 bio_list_init(&bio_list_on_stack); 929 929 930 - throtl_log(td, "dispatch nr_queued=%lu read=%u write=%u", 930 + throtl_log(td, "dispatch nr_queued=%d read=%u write=%u", 931 931 total_nr_queued(td), td->nr_queued[READ], 932 932 td->nr_queued[WRITE]); 933 933 ··· 1204 1204 } 1205 1205 1206 1206 queue_bio: 1207 - throtl_log_tg(td, tg, "[%c] bio. bdisp=%u sz=%u bps=%llu" 1207 + throtl_log_tg(td, tg, "[%c] bio. bdisp=%llu sz=%u bps=%llu" 1208 1208 " iodisp=%u iops=%u queued=%d/%d", 1209 1209 rw == READ ? 'R' : 'W', 1210 1210 tg->bytes_disp[rw], bio->bi_size, tg->bps[rw],
+10 -6
block/cfq-iosched.c
··· 988 988 989 989 cfq_log_cfqg(cfqd, cfqg, "served: vt=%llu min_vt=%llu", cfqg->vdisktime, 990 990 st->min_vdisktime); 991 - cfq_log_cfqq(cfqq->cfqd, cfqq, "sl_used=%u disp=%u charge=%u iops=%u" 992 - " sect=%u", used_sl, cfqq->slice_dispatch, charge, 993 - iops_mode(cfqd), cfqq->nr_sectors); 991 + cfq_log_cfqq(cfqq->cfqd, cfqq, 992 + "sl_used=%u disp=%u charge=%u iops=%u sect=%lu", 993 + used_sl, cfqq->slice_dispatch, charge, 994 + iops_mode(cfqd), cfqq->nr_sectors); 994 995 cfq_blkiocg_update_timeslice_used(&cfqg->blkg, used_sl, 995 996 unaccounted_sl); 996 997 cfq_blkiocg_set_start_empty_time(&cfqg->blkg); ··· 2024 2023 */ 2025 2024 if (sample_valid(cic->ttime_samples) && 2026 2025 (cfqq->slice_end - jiffies < cic->ttime_mean)) { 2027 - cfq_log_cfqq(cfqd, cfqq, "Not idling. think_time:%d", 2028 - cic->ttime_mean); 2026 + cfq_log_cfqq(cfqd, cfqq, "Not idling. think_time:%lu", 2027 + cic->ttime_mean); 2029 2028 return; 2030 2029 } 2031 2030 ··· 2773 2772 smp_wmb(); 2774 2773 cic->key = cfqd_dead_key(cfqd); 2775 2774 2776 - if (ioc->ioc_data == cic) 2775 + if (rcu_dereference(ioc->ioc_data) == cic) { 2776 + spin_lock(&ioc->lock); 2777 2777 rcu_assign_pointer(ioc->ioc_data, NULL); 2778 + spin_unlock(&ioc->lock); 2779 + } 2778 2780 2779 2781 if (cic->cfqq[BLK_RW_ASYNC]) { 2780 2782 cfq_exit_cfqq(cfqd, cic->cfqq[BLK_RW_ASYNC]);
+45 -34
block/genhd.c
··· 1371 1371 struct gendisk *disk; /* the associated disk */ 1372 1372 spinlock_t lock; 1373 1373 1374 + struct mutex block_mutex; /* protects blocking */ 1374 1375 int block; /* event blocking depth */ 1375 1376 unsigned int pending; /* events already sent out */ 1376 1377 unsigned int clearing; /* events being cleared */ ··· 1415 1414 return msecs_to_jiffies(intv_msecs); 1416 1415 } 1417 1416 1418 - static void __disk_block_events(struct gendisk *disk, bool sync) 1417 + /** 1418 + * disk_block_events - block and flush disk event checking 1419 + * @disk: disk to block events for 1420 + * 1421 + * On return from this function, it is guaranteed that event checking 1422 + * isn't in progress and won't happen until unblocked by 1423 + * disk_unblock_events(). Events blocking is counted and the actual 1424 + * unblocking happens after the matching number of unblocks are done. 1425 + * 1426 + * Note that this intentionally does not block event checking from 1427 + * disk_clear_events(). 1428 + * 1429 + * CONTEXT: 1430 + * Might sleep. 1431 + */ 1432 + void disk_block_events(struct gendisk *disk) 1419 1433 { 1420 1434 struct disk_events *ev = disk->ev; 1421 1435 unsigned long flags; 1422 1436 bool cancel; 1423 1437 1438 + if (!ev) 1439 + return; 1440 + 1441 + /* 1442 + * Outer mutex ensures that the first blocker completes canceling 1443 + * the event work before further blockers are allowed to finish. 1444 + */ 1445 + mutex_lock(&ev->block_mutex); 1446 + 1424 1447 spin_lock_irqsave(&ev->lock, flags); 1425 1448 cancel = !ev->block++; 1426 1449 spin_unlock_irqrestore(&ev->lock, flags); 1427 1450 1428 - if (cancel) { 1429 - if (sync) 1430 - cancel_delayed_work_sync(&disk->ev->dwork); 1431 - else 1432 - cancel_delayed_work(&disk->ev->dwork); 1433 - } 1451 + if (cancel) 1452 + cancel_delayed_work_sync(&disk->ev->dwork); 1453 + 1454 + mutex_unlock(&ev->block_mutex); 1434 1455 } 1435 1456 1436 1457 static void __disk_unblock_events(struct gendisk *disk, bool check_now) ··· 1484 1461 } 1485 1462 1486 1463 /** 1487 - * disk_block_events - block and flush disk event checking 1488 - * @disk: disk to block events for 1489 - * 1490 - * On return from this function, it is guaranteed that event checking 1491 - * isn't in progress and won't happen until unblocked by 1492 - * disk_unblock_events(). Events blocking is counted and the actual 1493 - * unblocking happens after the matching number of unblocks are done. 1494 - * 1495 - * Note that this intentionally does not block event checking from 1496 - * disk_clear_events(). 1497 - * 1498 - * CONTEXT: 1499 - * Might sleep. 1500 - */ 1501 - void disk_block_events(struct gendisk *disk) 1502 - { 1503 - if (disk->ev) 1504 - __disk_block_events(disk, true); 1505 - } 1506 - 1507 - /** 1508 1464 * disk_unblock_events - unblock disk event checking 1509 1465 * @disk: disk to unblock events for 1510 1466 * ··· 1510 1508 */ 1511 1509 void disk_check_events(struct gendisk *disk) 1512 1510 { 1513 - if (disk->ev) { 1514 - __disk_block_events(disk, false); 1515 - __disk_unblock_events(disk, true); 1511 + struct disk_events *ev = disk->ev; 1512 + unsigned long flags; 1513 + 1514 + if (!ev) 1515 + return; 1516 + 1517 + spin_lock_irqsave(&ev->lock, flags); 1518 + if (!ev->block) { 1519 + cancel_delayed_work(&ev->dwork); 1520 + queue_delayed_work(system_nrt_wq, &ev->dwork, 0); 1516 1521 } 1522 + spin_unlock_irqrestore(&ev->lock, flags); 1517 1523 } 1518 1524 EXPORT_SYMBOL_GPL(disk_check_events); 1519 1525 ··· 1556 1546 spin_unlock_irq(&ev->lock); 1557 1547 1558 1548 /* uncondtionally schedule event check and wait for it to finish */ 1559 - __disk_block_events(disk, true); 1549 + disk_block_events(disk); 1560 1550 queue_delayed_work(system_nrt_wq, &ev->dwork, 0); 1561 1551 flush_delayed_work(&ev->dwork); 1562 1552 __disk_unblock_events(disk, false); ··· 1674 1664 if (intv < 0 && intv != -1) 1675 1665 return -EINVAL; 1676 1666 1677 - __disk_block_events(disk, true); 1667 + disk_block_events(disk); 1678 1668 disk->ev->poll_msecs = intv; 1679 1669 __disk_unblock_events(disk, true); 1680 1670 ··· 1760 1750 INIT_LIST_HEAD(&ev->node); 1761 1751 ev->disk = disk; 1762 1752 spin_lock_init(&ev->lock); 1753 + mutex_init(&ev->block_mutex); 1763 1754 ev->block = 1; 1764 1755 ev->poll_msecs = -1; 1765 1756 INIT_DELAYED_WORK(&ev->dwork, disk_events_workfn); ··· 1781 1770 if (!disk->ev) 1782 1771 return; 1783 1772 1784 - __disk_block_events(disk, true); 1773 + disk_block_events(disk); 1785 1774 1786 1775 mutex_lock(&disk_events_mutex); 1787 1776 list_del_init(&disk->ev->node);
+3 -4
crypto/deflate.c
··· 32 32 #include <linux/interrupt.h> 33 33 #include <linux/mm.h> 34 34 #include <linux/net.h> 35 - #include <linux/slab.h> 36 35 37 36 #define DEFLATE_DEF_LEVEL Z_DEFAULT_COMPRESSION 38 37 #define DEFLATE_DEF_WINBITS 11 ··· 72 73 int ret = 0; 73 74 struct z_stream_s *stream = &ctx->decomp_stream; 74 75 75 - stream->workspace = kzalloc(zlib_inflate_workspacesize(), GFP_KERNEL); 76 + stream->workspace = vzalloc(zlib_inflate_workspacesize()); 76 77 if (!stream->workspace) { 77 78 ret = -ENOMEM; 78 79 goto out; ··· 85 86 out: 86 87 return ret; 87 88 out_free: 88 - kfree(stream->workspace); 89 + vfree(stream->workspace); 89 90 goto out; 90 91 } 91 92 ··· 98 99 static void deflate_decomp_exit(struct deflate_ctx *ctx) 99 100 { 100 101 zlib_inflateEnd(&ctx->decomp_stream); 101 - kfree(ctx->decomp_stream.workspace); 102 + vfree(ctx->decomp_stream.workspace); 102 103 } 103 104 104 105 static int deflate_init(struct crypto_tfm *tfm)
+3 -4
crypto/zlib.c
··· 29 29 #include <linux/interrupt.h> 30 30 #include <linux/mm.h> 31 31 #include <linux/net.h> 32 - #include <linux/slab.h> 33 32 34 33 #include <crypto/internal/compress.h> 35 34 ··· 59 60 60 61 if (stream->workspace) { 61 62 zlib_inflateEnd(stream); 62 - kfree(stream->workspace); 63 + vfree(stream->workspace); 63 64 stream->workspace = NULL; 64 65 } 65 66 } ··· 227 228 ? nla_get_u32(tb[ZLIB_DECOMP_WINDOWBITS]) 228 229 : DEF_WBITS; 229 230 230 - stream->workspace = kzalloc(zlib_inflate_workspacesize(), GFP_KERNEL); 231 + stream->workspace = vzalloc(zlib_inflate_workspacesize()); 231 232 if (!stream->workspace) 232 233 return -ENOMEM; 233 234 234 235 ret = zlib_inflateInit2(stream, ctx->decomp_windowBits); 235 236 if (ret != Z_OK) { 236 - kfree(stream->workspace); 237 + vfree(stream->workspace); 237 238 stream->workspace = NULL; 238 239 return -EINVAL; 239 240 }
+1 -1
drivers/ata/libahci.c
··· 452 452 } 453 453 454 454 if (mask_port_map) { 455 - dev_printk(KERN_ERR, dev, "masking port_map 0x%x -> 0x%x\n", 455 + dev_printk(KERN_WARNING, dev, "masking port_map 0x%x -> 0x%x\n", 456 456 port_map, 457 457 port_map & mask_port_map); 458 458 port_map &= mask_port_map;
+3 -3
drivers/ata/libata-core.c
··· 4143 4143 * Devices which choke on SETXFER. Applies only if both the 4144 4144 * device and controller are SATA. 4145 4145 */ 4146 - { "PIONEER DVD-RW DVRTD08", "1.00", ATA_HORKAGE_NOSETXFER }, 4147 - { "PIONEER DVD-RW DVR-212D", "1.28", ATA_HORKAGE_NOSETXFER }, 4148 - { "PIONEER DVD-RW DVR-216D", "1.08", ATA_HORKAGE_NOSETXFER }, 4146 + { "PIONEER DVD-RW DVRTD08", NULL, ATA_HORKAGE_NOSETXFER }, 4147 + { "PIONEER DVD-RW DVR-212D", NULL, ATA_HORKAGE_NOSETXFER }, 4148 + { "PIONEER DVD-RW DVR-216D", NULL, ATA_HORKAGE_NOSETXFER }, 4149 4149 4150 4150 /* End Marker */ 4151 4151 { }
+6
drivers/ata/libata-scsi.c
··· 3797 3797 */ 3798 3798 int ata_sas_port_start(struct ata_port *ap) 3799 3799 { 3800 + /* 3801 + * the port is marked as frozen at allocation time, but if we don't 3802 + * have new eh, we won't thaw it 3803 + */ 3804 + if (!ap->ops->error_handler) 3805 + ap->pflags &= ~ATA_PFLAG_FROZEN; 3800 3806 return 0; 3801 3807 } 3802 3808 EXPORT_SYMBOL_GPL(ata_sas_port_start);
+3
drivers/ata/pata_marvell.c
··· 161 161 { PCI_DEVICE(0x11AB, 0x6121), }, 162 162 { PCI_DEVICE(0x11AB, 0x6123), }, 163 163 { PCI_DEVICE(0x11AB, 0x6145), }, 164 + { PCI_DEVICE(0x1B4B, 0x91A0), }, 165 + { PCI_DEVICE(0x1B4B, 0x91A4), }, 166 + 164 167 { } /* terminate list */ 165 168 }; 166 169
+1 -1
drivers/ata/sata_dwc_460ex.c
··· 389 389 /* 390 390 * Function: get_burst_length_encode 391 391 * arguments: datalength: length in bytes of data 392 - * returns value to be programmed in register corrresponding to data length 392 + * returns value to be programmed in register corresponding to data length 393 393 * This value is effectively the log(base 2) of the length 394 394 */ 395 395 static int get_burst_length_encode(int datalength)
+1 -1
drivers/base/platform.c
··· 367 367 * 368 368 * Returns &struct platform_device pointer on success, or ERR_PTR() on error. 369 369 */ 370 - struct platform_device *__init_or_module platform_device_register_resndata( 370 + struct platform_device *platform_device_register_resndata( 371 371 struct device *parent, 372 372 const char *name, int id, 373 373 const struct resource *res, unsigned int num,
+2 -2
drivers/base/power/clock_ops.c
··· 387 387 clknb = container_of(nb, struct pm_clk_notifier_block, nb); 388 388 389 389 switch (action) { 390 - case BUS_NOTIFY_ADD_DEVICE: 390 + case BUS_NOTIFY_BIND_DRIVER: 391 391 if (clknb->con_ids[0]) { 392 392 for (con_id = clknb->con_ids; *con_id; con_id++) 393 393 enable_clock(dev, *con_id); ··· 395 395 enable_clock(dev, NULL); 396 396 } 397 397 break; 398 - case BUS_NOTIFY_DEL_DEVICE: 398 + case BUS_NOTIFY_UNBOUND_DRIVER: 399 399 if (clknb->con_ids[0]) { 400 400 for (con_id = clknb->con_ids; *con_id; con_id++) 401 401 disable_clock(dev, *con_id);
+21 -7
drivers/base/power/main.c
··· 57 57 */ 58 58 void device_pm_init(struct device *dev) 59 59 { 60 - dev->power.in_suspend = false; 60 + dev->power.is_prepared = false; 61 + dev->power.is_suspended = false; 61 62 init_completion(&dev->power.completion); 62 63 complete_all(&dev->power.completion); 63 64 dev->power.wakeup = NULL; ··· 92 91 pr_debug("PM: Adding info for %s:%s\n", 93 92 dev->bus ? dev->bus->name : "No Bus", dev_name(dev)); 94 93 mutex_lock(&dpm_list_mtx); 95 - if (dev->parent && dev->parent->power.in_suspend) 94 + if (dev->parent && dev->parent->power.is_prepared) 96 95 dev_warn(dev, "parent %s should not be sleeping\n", 97 96 dev_name(dev->parent)); 98 97 list_add_tail(&dev->power.entry, &dpm_list); ··· 512 511 dpm_wait(dev->parent, async); 513 512 device_lock(dev); 514 513 515 - dev->power.in_suspend = false; 514 + /* 515 + * This is a fib. But we'll allow new children to be added below 516 + * a resumed device, even if the device hasn't been completed yet. 517 + */ 518 + dev->power.is_prepared = false; 519 + 520 + if (!dev->power.is_suspended) 521 + goto Unlock; 516 522 517 523 if (dev->pwr_domain) { 518 524 pm_dev_dbg(dev, state, "power domain "); ··· 556 548 } 557 549 558 550 End: 551 + dev->power.is_suspended = false; 552 + 553 + Unlock: 559 554 device_unlock(dev); 560 555 complete_all(&dev->power.completion); 561 556 ··· 681 670 struct device *dev = to_device(dpm_prepared_list.prev); 682 671 683 672 get_device(dev); 684 - dev->power.in_suspend = false; 673 + dev->power.is_prepared = false; 685 674 list_move(&dev->power.entry, &list); 686 675 mutex_unlock(&dpm_list_mtx); 687 676 ··· 846 835 device_lock(dev); 847 836 848 837 if (async_error) 849 - goto End; 838 + goto Unlock; 850 839 851 840 if (pm_wakeup_pending()) { 852 841 async_error = -EBUSY; 853 - goto End; 842 + goto Unlock; 854 843 } 855 844 856 845 if (dev->pwr_domain) { ··· 888 877 } 889 878 890 879 End: 880 + dev->power.is_suspended = !error; 881 + 882 + Unlock: 891 883 device_unlock(dev); 892 884 complete_all(&dev->power.completion); 893 885 ··· 1056 1042 put_device(dev); 1057 1043 break; 1058 1044 } 1059 - dev->power.in_suspend = true; 1045 + dev->power.is_prepared = true; 1060 1046 if (!list_empty(&dev->power.entry)) 1061 1047 list_move_tail(&dev->power.entry, &dpm_prepared_list); 1062 1048 put_device(dev);
+1
drivers/connector/connector.c
··· 139 139 spin_unlock_bh(&dev->cbdev->queue_lock); 140 140 141 141 if (cbq != NULL) { 142 + err = 0; 142 143 cbq->callback(msg, nsp); 143 144 kfree_skb(skb); 144 145 cn_queue_release_callback(cbq);
+3 -3
drivers/crypto/caam/caamalg.c
··· 238 238 239 239 /* build shared descriptor for this session */ 240 240 sh_desc = kmalloc(CAAM_CMD_SZ * DESC_AEAD_SHARED_TEXT_LEN + 241 - keys_fit_inline ? 242 - ctx->split_key_pad_len + ctx->enckeylen : 243 - CAAM_PTR_SZ * 2, GFP_DMA | GFP_KERNEL); 241 + (keys_fit_inline ? 242 + ctx->split_key_pad_len + ctx->enckeylen : 243 + CAAM_PTR_SZ * 2), GFP_DMA | GFP_KERNEL); 244 244 if (!sh_desc) { 245 245 dev_err(jrdev, "could not allocate shared descriptor\n"); 246 246 return -ENOMEM;
+1
drivers/firmware/google/Kconfig
··· 13 13 config GOOGLE_SMI 14 14 tristate "SMI interface for Google platforms" 15 15 depends on ACPI && DMI 16 + select EFI 16 17 select EFI_VARS 17 18 help 18 19 Say Y here if you want to enable SMI callbacks for Google
+1
drivers/gpu/drm/drm_gem.c
··· 34 34 #include <linux/module.h> 35 35 #include <linux/mman.h> 36 36 #include <linux/pagemap.h> 37 + #include <linux/shmem_fs.h> 37 38 #include "drmP.h" 38 39 39 40 /** @file drm_gem.c
+1 -2
drivers/gpu/drm/i915/i915_dma.c
··· 2182 2182 /* Flush any outstanding unpin_work. */ 2183 2183 flush_workqueue(dev_priv->wq); 2184 2184 2185 - i915_gem_free_all_phys_object(dev); 2186 - 2187 2185 mutex_lock(&dev->struct_mutex); 2186 + i915_gem_free_all_phys_object(dev); 2188 2187 i915_gem_cleanup_ringbuffer(dev); 2189 2188 mutex_unlock(&dev->struct_mutex); 2190 2189 if (I915_HAS_FBC(dev) && i915_powersave)
+3
drivers/gpu/drm/i915/i915_drv.c
··· 579 579 } else switch (INTEL_INFO(dev)->gen) { 580 580 case 6: 581 581 ret = gen6_do_reset(dev, flags); 582 + /* If reset with a user forcewake, try to restore */ 583 + if (atomic_read(&dev_priv->forcewake_count)) 584 + __gen6_gt_force_wake_get(dev_priv); 582 585 break; 583 586 case 5: 584 587 ret = ironlake_do_reset(dev, flags);
+3
drivers/gpu/drm/i915/i915_drv.h
··· 211 211 void (*fdi_link_train)(struct drm_crtc *crtc); 212 212 void (*init_clock_gating)(struct drm_device *dev); 213 213 void (*init_pch_clock_gating)(struct drm_device *dev); 214 + int (*queue_flip)(struct drm_device *dev, struct drm_crtc *crtc, 215 + struct drm_framebuffer *fb, 216 + struct drm_i915_gem_object *obj); 214 217 /* clock updates for mode set */ 215 218 /* cursor updates */ 216 219 /* render clock increase/decrease */
+22 -30
drivers/gpu/drm/i915/i915_gem.c
··· 31 31 #include "i915_drv.h" 32 32 #include "i915_trace.h" 33 33 #include "intel_drv.h" 34 + #include <linux/shmem_fs.h> 34 35 #include <linux/slab.h> 35 36 #include <linux/swap.h> 36 37 #include <linux/pci.h> ··· 360 359 if ((page_offset + remain) > PAGE_SIZE) 361 360 page_length = PAGE_SIZE - page_offset; 362 361 363 - page = read_cache_page_gfp(mapping, offset >> PAGE_SHIFT, 364 - GFP_HIGHUSER | __GFP_RECLAIMABLE); 362 + page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT); 365 363 if (IS_ERR(page)) 366 364 return PTR_ERR(page); 367 365 ··· 463 463 if ((data_page_offset + page_length) > PAGE_SIZE) 464 464 page_length = PAGE_SIZE - data_page_offset; 465 465 466 - page = read_cache_page_gfp(mapping, offset >> PAGE_SHIFT, 467 - GFP_HIGHUSER | __GFP_RECLAIMABLE); 466 + page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT); 468 467 if (IS_ERR(page)) { 469 468 ret = PTR_ERR(page); 470 469 goto out; ··· 796 797 if ((page_offset + remain) > PAGE_SIZE) 797 798 page_length = PAGE_SIZE - page_offset; 798 799 799 - page = read_cache_page_gfp(mapping, offset >> PAGE_SHIFT, 800 - GFP_HIGHUSER | __GFP_RECLAIMABLE); 800 + page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT); 801 801 if (IS_ERR(page)) 802 802 return PTR_ERR(page); 803 803 ··· 905 907 if ((data_page_offset + page_length) > PAGE_SIZE) 906 908 page_length = PAGE_SIZE - data_page_offset; 907 909 908 - page = read_cache_page_gfp(mapping, offset >> PAGE_SHIFT, 909 - GFP_HIGHUSER | __GFP_RECLAIMABLE); 910 + page = shmem_read_mapping_page(mapping, offset >> PAGE_SHIFT); 910 911 if (IS_ERR(page)) { 911 912 ret = PTR_ERR(page); 912 913 goto out; ··· 1216 1219 ret = i915_gem_object_bind_to_gtt(obj, 0, true); 1217 1220 if (ret) 1218 1221 goto unlock; 1219 - } 1220 1222 1221 - ret = i915_gem_object_set_to_gtt_domain(obj, write); 1222 - if (ret) 1223 - goto unlock; 1223 + ret = i915_gem_object_set_to_gtt_domain(obj, write); 1224 + if (ret) 1225 + goto unlock; 1226 + } 1224 1227 1225 1228 if 
(obj->tiling_mode == I915_TILING_NONE) 1226 1229 ret = i915_gem_object_put_fence(obj); ··· 1555 1558 1556 1559 inode = obj->base.filp->f_path.dentry->d_inode; 1557 1560 mapping = inode->i_mapping; 1561 + gfpmask |= mapping_gfp_mask(mapping); 1562 + 1558 1563 for (i = 0; i < page_count; i++) { 1559 - page = read_cache_page_gfp(mapping, i, 1560 - GFP_HIGHUSER | 1561 - __GFP_COLD | 1562 - __GFP_RECLAIMABLE | 1563 - gfpmask); 1564 + page = shmem_read_mapping_page_gfp(mapping, i, gfpmask); 1564 1565 if (IS_ERR(page)) 1565 1566 goto err_pages; 1566 1567 ··· 1696 1701 /* Our goal here is to return as much of the memory as 1697 1702 * is possible back to the system as we are called from OOM. 1698 1703 * To do this we must instruct the shmfs to drop all of its 1699 - * backing pages, *now*. Here we mirror the actions taken 1700 - * when by shmem_delete_inode() to release the backing store. 1704 + * backing pages, *now*. 1701 1705 */ 1702 1706 inode = obj->base.filp->f_path.dentry->d_inode; 1703 - truncate_inode_pages(inode->i_mapping, 0); 1704 - if (inode->i_op->truncate_range) 1705 - inode->i_op->truncate_range(inode, 0, (loff_t)-1); 1707 + shmem_truncate_range(inode, 0, (loff_t)-1); 1706 1708 1707 1709 obj->madv = __I915_MADV_PURGED; 1708 1710 } ··· 2072 2080 if (!ier) { 2073 2081 DRM_ERROR("something (likely vbetool) disabled " 2074 2082 "interrupts, re-enabling\n"); 2075 - i915_driver_irq_preinstall(ring->dev); 2076 - i915_driver_irq_postinstall(ring->dev); 2083 + ring->dev->driver->irq_preinstall(ring->dev); 2084 + ring->dev->driver->irq_postinstall(ring->dev); 2077 2085 } 2078 2086 2079 2087 trace_i915_gem_request_wait_begin(ring, seqno); ··· 2918 2926 */ 2919 2927 wmb(); 2920 2928 2921 - i915_gem_release_mmap(obj); 2922 - 2923 2929 old_write_domain = obj->base.write_domain; 2924 2930 obj->base.write_domain = 0; 2925 2931 ··· 3557 3567 { 3558 3568 struct drm_i915_private *dev_priv = dev->dev_private; 3559 3569 struct drm_i915_gem_object *obj; 3570 + struct 
address_space *mapping; 3560 3571 3561 3572 obj = kzalloc(sizeof(*obj), GFP_KERNEL); 3562 3573 if (obj == NULL) ··· 3567 3576 kfree(obj); 3568 3577 return NULL; 3569 3578 } 3579 + 3580 + mapping = obj->base.filp->f_path.dentry->d_inode->i_mapping; 3581 + mapping_set_gfp_mask(mapping, GFP_HIGHUSER | __GFP_RECLAIMABLE); 3570 3582 3571 3583 i915_gem_info_add_obj(dev_priv, size); 3572 3584 ··· 3946 3952 3947 3953 page_count = obj->base.size / PAGE_SIZE; 3948 3954 for (i = 0; i < page_count; i++) { 3949 - struct page *page = read_cache_page_gfp(mapping, i, 3950 - GFP_HIGHUSER | __GFP_RECLAIMABLE); 3955 + struct page *page = shmem_read_mapping_page(mapping, i); 3951 3956 if (!IS_ERR(page)) { 3952 3957 char *dst = kmap_atomic(page); 3953 3958 memcpy(dst, vaddr + i*PAGE_SIZE, PAGE_SIZE); ··· 4007 4014 struct page *page; 4008 4015 char *dst, *src; 4009 4016 4010 - page = read_cache_page_gfp(mapping, i, 4011 - GFP_HIGHUSER | __GFP_RECLAIMABLE); 4017 + page = shmem_read_mapping_page(mapping, i); 4012 4018 if (IS_ERR(page)) 4013 4019 return PTR_ERR(page); 4014 4020
-4
drivers/gpu/drm/i915/i915_gem_execbuffer.c
··· 187 187 if ((flush_domains | invalidate_domains) & I915_GEM_DOMAIN_CPU) 188 188 i915_gem_clflush_object(obj); 189 189 190 - /* blow away mappings if mapped through GTT */ 191 - if ((flush_domains | invalidate_domains) & I915_GEM_DOMAIN_GTT) 192 - i915_gem_release_mmap(obj); 193 - 194 190 if (obj->base.pending_write_domain) 195 191 cd->flips |= atomic_read(&obj->pending_flip); 196 192
+1
drivers/gpu/drm/i915/i915_irq.c
··· 1749 1749 * happens. 1750 1750 */ 1751 1751 I915_WRITE(GEN6_BLITTER_HWSTAM, ~GEN6_BLITTER_USER_INTERRUPT); 1752 + I915_WRITE(GEN6_BSD_HWSTAM, ~GEN6_BSD_USER_INTERRUPT); 1752 1753 } 1753 1754 1754 1755 /* XXX hotplug from PCH */
+1
drivers/gpu/drm/i915/i915_reg.h
··· 531 531 #define GEN6_BSD_SLEEP_PSMI_CONTROL_RC_ILDL_MESSAGE_ENABLE 0 532 532 #define GEN6_BSD_SLEEP_PSMI_CONTROL_IDLE_INDICATOR (1 << 3) 533 533 534 + #define GEN6_BSD_HWSTAM 0x12098 534 535 #define GEN6_BSD_IMR 0x120a8 535 536 #define GEN6_BSD_USER_INTERRUPT (1 << 12) 536 537
+5
drivers/gpu/drm/i915/i915_suspend.c
··· 678 678 } 679 679 680 680 /* VGA state */ 681 + mutex_lock(&dev->struct_mutex); 681 682 dev_priv->saveVGA0 = I915_READ(VGA0); 682 683 dev_priv->saveVGA1 = I915_READ(VGA1); 683 684 dev_priv->saveVGA_PD = I915_READ(VGA_PD); ··· 688 687 dev_priv->saveVGACNTRL = I915_READ(VGACNTRL); 689 688 690 689 i915_save_vga(dev); 690 + mutex_unlock(&dev->struct_mutex); 691 691 } 692 692 693 693 void i915_restore_display(struct drm_device *dev) ··· 782 780 I915_WRITE(CPU_VGACNTRL, dev_priv->saveVGACNTRL); 783 781 else 784 782 I915_WRITE(VGACNTRL, dev_priv->saveVGACNTRL); 783 + 784 + mutex_lock(&dev->struct_mutex); 785 785 I915_WRITE(VGA0, dev_priv->saveVGA0); 786 786 I915_WRITE(VGA1, dev_priv->saveVGA1); 787 787 I915_WRITE(VGA_PD, dev_priv->saveVGA_PD); ··· 791 787 udelay(150); 792 788 793 789 i915_restore_vga(dev); 790 + mutex_unlock(&dev->struct_mutex); 794 791 } 795 792 796 793 int i915_save_state(struct drm_device *dev)
+223 -85
drivers/gpu/drm/i915/intel_display.c
··· 4687 4687 4688 4688 I915_WRITE(DSPCNTR(plane), dspcntr); 4689 4689 POSTING_READ(DSPCNTR(plane)); 4690 + intel_enable_plane(dev_priv, plane, pipe); 4690 4691 4691 4692 ret = intel_pipe_set_base(crtc, x, y, old_fb); 4692 4693 ··· 5218 5217 5219 5218 I915_WRITE(DSPCNTR(plane), dspcntr); 5220 5219 POSTING_READ(DSPCNTR(plane)); 5221 - if (!HAS_PCH_SPLIT(dev)) 5222 - intel_enable_plane(dev_priv, plane, pipe); 5223 5220 5224 5221 ret = intel_pipe_set_base(crtc, x, y, old_fb); 5225 5222 ··· 6261 6262 spin_unlock_irqrestore(&dev->event_lock, flags); 6262 6263 } 6263 6264 6265 + static int intel_gen2_queue_flip(struct drm_device *dev, 6266 + struct drm_crtc *crtc, 6267 + struct drm_framebuffer *fb, 6268 + struct drm_i915_gem_object *obj) 6269 + { 6270 + struct drm_i915_private *dev_priv = dev->dev_private; 6271 + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 6272 + unsigned long offset; 6273 + u32 flip_mask; 6274 + int ret; 6275 + 6276 + ret = intel_pin_and_fence_fb_obj(dev, obj, LP_RING(dev_priv)); 6277 + if (ret) 6278 + goto out; 6279 + 6280 + /* Offset into the new buffer for cases of shared fbs between CRTCs */ 6281 + offset = crtc->y * fb->pitch + crtc->x * fb->bits_per_pixel/8; 6282 + 6283 + ret = BEGIN_LP_RING(6); 6284 + if (ret) 6285 + goto out; 6286 + 6287 + /* Can't queue multiple flips, so wait for the previous 6288 + * one to finish before executing the next. 
6289 + */ 6290 + if (intel_crtc->plane) 6291 + flip_mask = MI_WAIT_FOR_PLANE_B_FLIP; 6292 + else 6293 + flip_mask = MI_WAIT_FOR_PLANE_A_FLIP; 6294 + OUT_RING(MI_WAIT_FOR_EVENT | flip_mask); 6295 + OUT_RING(MI_NOOP); 6296 + OUT_RING(MI_DISPLAY_FLIP | 6297 + MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); 6298 + OUT_RING(fb->pitch); 6299 + OUT_RING(obj->gtt_offset + offset); 6300 + OUT_RING(MI_NOOP); 6301 + ADVANCE_LP_RING(); 6302 + out: 6303 + return ret; 6304 + } 6305 + 6306 + static int intel_gen3_queue_flip(struct drm_device *dev, 6307 + struct drm_crtc *crtc, 6308 + struct drm_framebuffer *fb, 6309 + struct drm_i915_gem_object *obj) 6310 + { 6311 + struct drm_i915_private *dev_priv = dev->dev_private; 6312 + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 6313 + unsigned long offset; 6314 + u32 flip_mask; 6315 + int ret; 6316 + 6317 + ret = intel_pin_and_fence_fb_obj(dev, obj, LP_RING(dev_priv)); 6318 + if (ret) 6319 + goto out; 6320 + 6321 + /* Offset into the new buffer for cases of shared fbs between CRTCs */ 6322 + offset = crtc->y * fb->pitch + crtc->x * fb->bits_per_pixel/8; 6323 + 6324 + ret = BEGIN_LP_RING(6); 6325 + if (ret) 6326 + goto out; 6327 + 6328 + if (intel_crtc->plane) 6329 + flip_mask = MI_WAIT_FOR_PLANE_B_FLIP; 6330 + else 6331 + flip_mask = MI_WAIT_FOR_PLANE_A_FLIP; 6332 + OUT_RING(MI_WAIT_FOR_EVENT | flip_mask); 6333 + OUT_RING(MI_NOOP); 6334 + OUT_RING(MI_DISPLAY_FLIP_I915 | 6335 + MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); 6336 + OUT_RING(fb->pitch); 6337 + OUT_RING(obj->gtt_offset + offset); 6338 + OUT_RING(MI_NOOP); 6339 + 6340 + ADVANCE_LP_RING(); 6341 + out: 6342 + return ret; 6343 + } 6344 + 6345 + static int intel_gen4_queue_flip(struct drm_device *dev, 6346 + struct drm_crtc *crtc, 6347 + struct drm_framebuffer *fb, 6348 + struct drm_i915_gem_object *obj) 6349 + { 6350 + struct drm_i915_private *dev_priv = dev->dev_private; 6351 + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 6352 + uint32_t pf, pipesrc; 6353 + int ret; 6354 
+ 6355 + ret = intel_pin_and_fence_fb_obj(dev, obj, LP_RING(dev_priv)); 6356 + if (ret) 6357 + goto out; 6358 + 6359 + ret = BEGIN_LP_RING(4); 6360 + if (ret) 6361 + goto out; 6362 + 6363 + /* i965+ uses the linear or tiled offsets from the 6364 + * Display Registers (which do not change across a page-flip) 6365 + * so we need only reprogram the base address. 6366 + */ 6367 + OUT_RING(MI_DISPLAY_FLIP | 6368 + MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); 6369 + OUT_RING(fb->pitch); 6370 + OUT_RING(obj->gtt_offset | obj->tiling_mode); 6371 + 6372 + /* XXX Enabling the panel-fitter across page-flip is so far 6373 + * untested on non-native modes, so ignore it for now. 6374 + * pf = I915_READ(pipe == 0 ? PFA_CTL_1 : PFB_CTL_1) & PF_ENABLE; 6375 + */ 6376 + pf = 0; 6377 + pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff; 6378 + OUT_RING(pf | pipesrc); 6379 + ADVANCE_LP_RING(); 6380 + out: 6381 + return ret; 6382 + } 6383 + 6384 + static int intel_gen6_queue_flip(struct drm_device *dev, 6385 + struct drm_crtc *crtc, 6386 + struct drm_framebuffer *fb, 6387 + struct drm_i915_gem_object *obj) 6388 + { 6389 + struct drm_i915_private *dev_priv = dev->dev_private; 6390 + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 6391 + uint32_t pf, pipesrc; 6392 + int ret; 6393 + 6394 + ret = intel_pin_and_fence_fb_obj(dev, obj, LP_RING(dev_priv)); 6395 + if (ret) 6396 + goto out; 6397 + 6398 + ret = BEGIN_LP_RING(4); 6399 + if (ret) 6400 + goto out; 6401 + 6402 + OUT_RING(MI_DISPLAY_FLIP | 6403 + MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); 6404 + OUT_RING(fb->pitch | obj->tiling_mode); 6405 + OUT_RING(obj->gtt_offset); 6406 + 6407 + pf = I915_READ(PF_CTL(intel_crtc->pipe)) & PF_ENABLE; 6408 + pipesrc = I915_READ(PIPESRC(intel_crtc->pipe)) & 0x0fff0fff; 6409 + OUT_RING(pf | pipesrc); 6410 + ADVANCE_LP_RING(); 6411 + out: 6412 + return ret; 6413 + } 6414 + 6415 + /* 6416 + * On gen7 we currently use the blit ring because (in early silicon at least) 6417 + * the render ring 
doesn't give us interrupts for page flip completion, which 6418 + * means clients will hang after the first flip is queued. Fortunately the 6419 + * blit ring generates interrupts properly, so use it instead. 6420 + */ 6421 + static int intel_gen7_queue_flip(struct drm_device *dev, 6422 + struct drm_crtc *crtc, 6423 + struct drm_framebuffer *fb, 6424 + struct drm_i915_gem_object *obj) 6425 + { 6426 + struct drm_i915_private *dev_priv = dev->dev_private; 6427 + struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 6428 + struct intel_ring_buffer *ring = &dev_priv->ring[BCS]; 6429 + int ret; 6430 + 6431 + ret = intel_pin_and_fence_fb_obj(dev, obj, ring); 6432 + if (ret) 6433 + goto out; 6434 + 6435 + ret = intel_ring_begin(ring, 4); 6436 + if (ret) 6437 + goto out; 6438 + 6439 + intel_ring_emit(ring, MI_DISPLAY_FLIP_I915 | (intel_crtc->plane << 19)); 6440 + intel_ring_emit(ring, (fb->pitch | obj->tiling_mode)); 6441 + intel_ring_emit(ring, (obj->gtt_offset)); 6442 + intel_ring_emit(ring, (MI_NOOP)); 6443 + intel_ring_advance(ring); 6444 + out: 6445 + return ret; 6446 + } 6447 + 6448 + static int intel_default_queue_flip(struct drm_device *dev, 6449 + struct drm_crtc *crtc, 6450 + struct drm_framebuffer *fb, 6451 + struct drm_i915_gem_object *obj) 6452 + { 6453 + return -ENODEV; 6454 + } 6455 + 6264 6456 static int intel_crtc_page_flip(struct drm_crtc *crtc, 6265 6457 struct drm_framebuffer *fb, 6266 6458 struct drm_pending_vblank_event *event) ··· 6462 6272 struct drm_i915_gem_object *obj; 6463 6273 struct intel_crtc *intel_crtc = to_intel_crtc(crtc); 6464 6274 struct intel_unpin_work *work; 6465 - unsigned long flags, offset; 6466 - int pipe = intel_crtc->pipe; 6467 - u32 pf, pipesrc; 6275 + unsigned long flags; 6468 6276 int ret; 6469 6277 6470 6278 work = kzalloc(sizeof *work, GFP_KERNEL); ··· 6491 6303 obj = intel_fb->obj; 6492 6304 6493 6305 mutex_lock(&dev->struct_mutex); 6494 - ret = intel_pin_and_fence_fb_obj(dev, obj, LP_RING(dev_priv)); 6495 - if (ret) 6496 - 
goto cleanup_work; 6497 6306 6498 6307 /* Reference the objects for the scheduled work. */ 6499 6308 drm_gem_object_reference(&work->old_fb_obj->base); ··· 6502 6317 if (ret) 6503 6318 goto cleanup_objs; 6504 6319 6505 - if (IS_GEN3(dev) || IS_GEN2(dev)) { 6506 - u32 flip_mask; 6507 - 6508 - /* Can't queue multiple flips, so wait for the previous 6509 - * one to finish before executing the next. 6510 - */ 6511 - ret = BEGIN_LP_RING(2); 6512 - if (ret) 6513 - goto cleanup_objs; 6514 - 6515 - if (intel_crtc->plane) 6516 - flip_mask = MI_WAIT_FOR_PLANE_B_FLIP; 6517 - else 6518 - flip_mask = MI_WAIT_FOR_PLANE_A_FLIP; 6519 - OUT_RING(MI_WAIT_FOR_EVENT | flip_mask); 6520 - OUT_RING(MI_NOOP); 6521 - ADVANCE_LP_RING(); 6522 - } 6523 - 6524 6320 work->pending_flip_obj = obj; 6525 6321 6526 6322 work->enable_stall_check = true; 6527 - 6528 - /* Offset into the new buffer for cases of shared fbs between CRTCs */ 6529 - offset = crtc->y * fb->pitch + crtc->x * fb->bits_per_pixel/8; 6530 - 6531 - ret = BEGIN_LP_RING(4); 6532 - if (ret) 6533 - goto cleanup_objs; 6534 6323 6535 6324 /* Block clients from rendering to the new back buffer until 6536 6325 * the flip occurs and the object is no longer visible. 6537 6326 */ 6538 6327 atomic_add(1 << intel_crtc->plane, &work->old_fb_obj->pending_flip); 6539 6328 6540 - switch (INTEL_INFO(dev)->gen) { 6541 - case 2: 6542 - OUT_RING(MI_DISPLAY_FLIP | 6543 - MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); 6544 - OUT_RING(fb->pitch); 6545 - OUT_RING(obj->gtt_offset + offset); 6546 - OUT_RING(MI_NOOP); 6547 - break; 6548 - 6549 - case 3: 6550 - OUT_RING(MI_DISPLAY_FLIP_I915 | 6551 - MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); 6552 - OUT_RING(fb->pitch); 6553 - OUT_RING(obj->gtt_offset + offset); 6554 - OUT_RING(MI_NOOP); 6555 - break; 6556 - 6557 - case 4: 6558 - case 5: 6559 - /* i965+ uses the linear or tiled offsets from the 6560 - * Display Registers (which do not change across a page-flip) 6561 - * so we need only reprogram the base address. 
6562 - */ 6563 - OUT_RING(MI_DISPLAY_FLIP | 6564 - MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); 6565 - OUT_RING(fb->pitch); 6566 - OUT_RING(obj->gtt_offset | obj->tiling_mode); 6567 - 6568 - /* XXX Enabling the panel-fitter across page-flip is so far 6569 - * untested on non-native modes, so ignore it for now. 6570 - * pf = I915_READ(pipe == 0 ? PFA_CTL_1 : PFB_CTL_1) & PF_ENABLE; 6571 - */ 6572 - pf = 0; 6573 - pipesrc = I915_READ(PIPESRC(pipe)) & 0x0fff0fff; 6574 - OUT_RING(pf | pipesrc); 6575 - break; 6576 - 6577 - case 6: 6578 - case 7: 6579 - OUT_RING(MI_DISPLAY_FLIP | 6580 - MI_DISPLAY_FLIP_PLANE(intel_crtc->plane)); 6581 - OUT_RING(fb->pitch | obj->tiling_mode); 6582 - OUT_RING(obj->gtt_offset); 6583 - 6584 - pf = I915_READ(PF_CTL(pipe)) & PF_ENABLE; 6585 - pipesrc = I915_READ(PIPESRC(pipe)) & 0x0fff0fff; 6586 - OUT_RING(pf | pipesrc); 6587 - break; 6588 - } 6589 - ADVANCE_LP_RING(); 6329 + ret = dev_priv->display.queue_flip(dev, crtc, fb, obj); 6330 + if (ret) 6331 + goto cleanup_pending; 6590 6332 6591 6333 mutex_unlock(&dev->struct_mutex); 6592 6334 ··· 6521 6409 6522 6410 return 0; 6523 6411 6412 + cleanup_pending: 6413 + atomic_sub(1 << intel_crtc->plane, &work->old_fb_obj->pending_flip); 6524 6414 cleanup_objs: 6525 6415 drm_gem_object_unreference(&work->old_fb_obj->base); 6526 6416 drm_gem_object_unreference(&obj->base); 6527 - cleanup_work: 6528 6417 mutex_unlock(&dev->struct_mutex); 6529 6418 6530 6419 spin_lock_irqsave(&dev->event_lock, flags); ··· 7769 7656 dev_priv->display.get_fifo_size = i845_get_fifo_size; 7770 7657 else 7771 7658 dev_priv->display.get_fifo_size = i830_get_fifo_size; 7659 + } 7660 + 7661 + /* Default just returns -ENODEV to indicate unsupported */ 7662 + dev_priv->display.queue_flip = intel_default_queue_flip; 7663 + 7664 + switch (INTEL_INFO(dev)->gen) { 7665 + case 2: 7666 + dev_priv->display.queue_flip = intel_gen2_queue_flip; 7667 + break; 7668 + 7669 + case 3: 7670 + dev_priv->display.queue_flip = intel_gen3_queue_flip; 
7671 + break; 7672 + 7673 + case 4: 7674 + case 5: 7675 + dev_priv->display.queue_flip = intel_gen4_queue_flip; 7676 + break; 7677 + 7678 + case 6: 7679 + dev_priv->display.queue_flip = intel_gen6_queue_flip; 7680 + break; 7681 + case 7: 7682 + dev_priv->display.queue_flip = intel_gen7_queue_flip; 7683 + break; 7772 7684 } 7773 7685 } 7774 7686
+5
drivers/gpu/drm/i915/intel_overlay.c
··· 1416 1416 goto out_free; 1417 1417 overlay->reg_bo = reg_bo; 1418 1418 1419 + mutex_lock(&dev->struct_mutex); 1420 + 1419 1421 if (OVERLAY_NEEDS_PHYSICAL(dev)) { 1420 1422 ret = i915_gem_attach_phys_object(dev, reg_bo, 1421 1423 I915_GEM_PHYS_OVERLAY_REGS, ··· 1441 1439 goto out_unpin_bo; 1442 1440 } 1443 1441 } 1442 + 1443 + mutex_unlock(&dev->struct_mutex); 1444 1444 1445 1445 /* init all values */ 1446 1446 overlay->color_key = 0x0101fe; ··· 1468 1464 i915_gem_object_unpin(reg_bo); 1469 1465 out_free_bo: 1470 1466 drm_gem_object_unreference(&reg_bo->base); 1467 + mutex_unlock(&dev->struct_mutex); 1471 1468 out_free: 1472 1469 kfree(overlay); 1473 1470 return;
+2 -2
drivers/gpu/drm/radeon/evergreen.c
··· 2013 2013 rdev->config.evergreen.tile_config |= (3 << 0); 2014 2014 break; 2015 2015 } 2016 - /* num banks is 8 on all fusion asics */ 2016 + /* num banks is 8 on all fusion asics. 0 = 4, 1 = 8, 2 = 16 */ 2017 2017 if (rdev->flags & RADEON_IS_IGP) 2018 - rdev->config.evergreen.tile_config |= 8 << 4; 2018 + rdev->config.evergreen.tile_config |= 1 << 4; 2019 2019 else 2020 2020 rdev->config.evergreen.tile_config |= 2021 2021 ((mc_arb_ramcfg & NOOFBANK_MASK) >> NOOFBANK_SHIFT) << 4;
+1
drivers/gpu/drm/radeon/radeon.h
··· 179 179 void radeon_combios_get_power_modes(struct radeon_device *rdev); 180 180 void radeon_atombios_get_power_modes(struct radeon_device *rdev); 181 181 void radeon_atom_set_voltage(struct radeon_device *rdev, u16 voltage_level, u8 voltage_type); 182 + int radeon_atom_get_max_vddc(struct radeon_device *rdev, u16 *voltage); 182 183 void rs690_pm_info(struct radeon_device *rdev); 183 184 extern int rv6xx_get_temp(struct radeon_device *rdev); 184 185 extern int rv770_get_temp(struct radeon_device *rdev);
+36
drivers/gpu/drm/radeon/radeon_atombios.c
··· 2320 2320 le16_to_cpu(clock_info->r600.usVDDC); 2321 2321 } 2322 2322 2323 + /* patch up vddc if necessary */ 2324 + if (rdev->pm.power_state[state_index].clock_info[mode_index].voltage.voltage == 0xff01) { 2325 + u16 vddc; 2326 + 2327 + if (radeon_atom_get_max_vddc(rdev, &vddc) == 0) 2328 + rdev->pm.power_state[state_index].clock_info[mode_index].voltage.voltage = vddc; 2329 + } 2330 + 2323 2331 if (rdev->flags & RADEON_IS_IGP) { 2324 2332 /* skip invalid modes */ 2325 2333 if (rdev->pm.power_state[state_index].clock_info[mode_index].sclk == 0) ··· 2638 2630 atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 2639 2631 } 2640 2632 2633 + int radeon_atom_get_max_vddc(struct radeon_device *rdev, 2634 + u16 *voltage) 2635 + { 2636 + union set_voltage args; 2637 + int index = GetIndexIntoMasterTable(COMMAND, SetVoltage); 2638 + u8 frev, crev; 2641 2639 2640 + if (!atom_parse_cmd_header(rdev->mode_info.atom_context, index, &frev, &crev)) 2641 + return -EINVAL; 2642 + 2643 + switch (crev) { 2644 + case 1: 2645 + return -EINVAL; 2646 + case 2: 2647 + args.v2.ucVoltageType = SET_VOLTAGE_GET_MAX_VOLTAGE; 2648 + args.v2.ucVoltageMode = 0; 2649 + args.v2.usVoltageLevel = 0; 2650 + 2651 + atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args); 2652 + 2653 + *voltage = le16_to_cpu(args.v2.usVoltageLevel); 2654 + break; 2655 + default: 2656 + DRM_ERROR("Unknown table version %d, %d\n", frev, crev); 2657 + return -EINVAL; 2658 + } 2659 + 2660 + return 0; 2661 + } 2642 2662 2643 2663 void radeon_atom_initialize_bios_scratch_regs(struct drm_device *dev) 2644 2664 {
+1 -1
drivers/gpu/drm/radeon/radeon_bios.c
··· 104 104 static bool radeon_atrm_get_bios(struct radeon_device *rdev) 105 105 { 106 106 int ret; 107 - int size = 64 * 1024; 107 + int size = 256 * 1024; 108 108 int i; 109 109 110 110 if (!radeon_atrm_supported(rdev->pdev))
+3 -2
drivers/gpu/drm/ttm/ttm_tt.c
··· 31 31 #include <linux/sched.h> 32 32 #include <linux/highmem.h> 33 33 #include <linux/pagemap.h> 34 + #include <linux/shmem_fs.h> 34 35 #include <linux/file.h> 35 36 #include <linux/swap.h> 36 37 #include <linux/slab.h> ··· 485 484 swap_space = swap_storage->f_path.dentry->d_inode->i_mapping; 486 485 487 486 for (i = 0; i < ttm->num_pages; ++i) { 488 - from_page = read_mapping_page(swap_space, i, NULL); 487 + from_page = shmem_read_mapping_page(swap_space, i); 489 488 if (IS_ERR(from_page)) { 490 489 ret = PTR_ERR(from_page); 491 490 goto out_err; ··· 558 557 from_page = ttm->pages[i]; 559 558 if (unlikely(from_page == NULL)) 560 559 continue; 561 - to_page = read_mapping_page(swap_space, i, NULL); 560 + to_page = shmem_read_mapping_page(swap_space, i); 562 561 if (unlikely(IS_ERR(to_page))) { 563 562 ret = PTR_ERR(to_page); 564 563 goto out_err;
+1
drivers/hid/hid-core.c
··· 1423 1423 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_SPACETRAVELLER) }, 1424 1424 { HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_SPACENAVIGATOR) }, 1425 1425 { HID_USB_DEVICE(USB_VENDOR_ID_LUMIO, USB_DEVICE_ID_CRYSTALTOUCH) }, 1426 + { HID_USB_DEVICE(USB_VENDOR_ID_LUMIO, USB_DEVICE_ID_CRYSTALTOUCH_DUAL) }, 1426 1427 { HID_USB_DEVICE(USB_VENDOR_ID_MICROCHIP, USB_DEVICE_ID_PICOLCD) }, 1427 1428 { HID_USB_DEVICE(USB_VENDOR_ID_MICROCHIP, USB_DEVICE_ID_PICOLCD_BOOTLOADER) }, 1428 1429 { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_SIDEWINDER_GV) },
+1
drivers/hid/hid-ids.h
··· 449 449 450 450 #define USB_VENDOR_ID_LUMIO 0x202e 451 451 #define USB_DEVICE_ID_CRYSTALTOUCH 0x0006 452 + #define USB_DEVICE_ID_CRYSTALTOUCH_DUAL 0x0007 452 453 453 454 #define USB_VENDOR_ID_MCC 0x09db 454 455 #define USB_DEVICE_ID_MCC_PMD1024LS 0x0076
+7 -5
drivers/hid/hid-multitouch.c
··· 271 271 } 272 272 return 1; 273 273 case HID_DG_CONTACTID: 274 + if (!td->maxcontacts) 275 + td->maxcontacts = MT_DEFAULT_MAXCONTACT; 274 276 input_mt_init_slots(hi->input, td->maxcontacts); 275 277 td->last_slot_field = usage->hid; 276 278 td->last_field_index = field->index; ··· 549 547 if (ret) 550 548 goto fail; 551 549 552 - if (!td->maxcontacts) 553 - td->maxcontacts = MT_DEFAULT_MAXCONTACT; 554 - 555 550 td->slots = kzalloc(td->maxcontacts * sizeof(struct mt_slot), 556 551 GFP_KERNEL); 557 552 if (!td->slots) { ··· 676 677 { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE, 677 678 HID_USB_DEVICE(USB_VENDOR_ID_LUMIO, 678 679 USB_DEVICE_ID_CRYSTALTOUCH) }, 680 + { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE, 681 + HID_USB_DEVICE(USB_VENDOR_ID_LUMIO, 682 + USB_DEVICE_ID_CRYSTALTOUCH_DUAL) }, 679 683 680 684 /* MosArt panels */ 681 685 { .driver_data = MT_CLS_CONFIDENCE_MINUS_ONE, ··· 709 707 HID_USB_DEVICE(USB_VENDOR_ID_STANTUM, 710 708 USB_DEVICE_ID_MTP)}, 711 709 { .driver_data = MT_CLS_CONFIDENCE, 712 - HID_USB_DEVICE(USB_VENDOR_ID_STANTUM, 710 + HID_USB_DEVICE(USB_VENDOR_ID_STANTUM_STM, 713 711 USB_DEVICE_ID_MTP_STM)}, 714 712 { .driver_data = MT_CLS_CONFIDENCE, 715 - HID_USB_DEVICE(USB_VENDOR_ID_STANTUM, 713 + HID_USB_DEVICE(USB_VENDOR_ID_STANTUM_SITRONIX, 716 714 USB_DEVICE_ID_MTP_SITRONIX)}, 717 715 718 716 /* Touch International panels */
+4 -4
drivers/i2c/busses/i2c-taos-evm.c
··· 234 234 235 235 if (taos->state != TAOS_STATE_IDLE) { 236 236 err = -ENODEV; 237 - dev_dbg(&serio->dev, "TAOS EVM reset failed (state=%d, " 237 + dev_err(&serio->dev, "TAOS EVM reset failed (state=%d, " 238 238 "pos=%d)\n", taos->state, taos->pos); 239 239 goto exit_close; 240 240 } ··· 255 255 msecs_to_jiffies(250)); 256 256 if (taos->state != TAOS_STATE_IDLE) { 257 257 err = -ENODEV; 258 - dev_err(&adapter->dev, "Echo off failed " 258 + dev_err(&serio->dev, "TAOS EVM echo off failed " 259 259 "(state=%d)\n", taos->state); 260 260 goto exit_close; 261 261 } ··· 263 263 err = i2c_add_adapter(adapter); 264 264 if (err) 265 265 goto exit_close; 266 - dev_dbg(&serio->dev, "Connected to TAOS EVM\n"); 266 + dev_info(&serio->dev, "Connected to TAOS EVM\n"); 267 267 268 268 taos->client = taos_instantiate_device(adapter); 269 269 return 0; ··· 288 288 serio_set_drvdata(serio, NULL); 289 289 kfree(taos); 290 290 291 - dev_dbg(&serio->dev, "Disconnected from TAOS EVM\n"); 291 + dev_info(&serio->dev, "Disconnected from TAOS EVM\n"); 292 292 } 293 293 294 294 static struct serio_device_id taos_serio_ids[] = {
+4 -3
drivers/i2c/muxes/pca954x.c
··· 201 201 202 202 i2c_set_clientdata(client, data); 203 203 204 - /* Read the mux register at addr to verify 205 - * that the mux is in fact present. 204 + /* Write the mux register at addr to verify 205 + * that the mux is in fact present. This also 206 + * initializes the mux to disconnected state. 206 207 */ 207 - if (i2c_smbus_read_byte(client) < 0) { 208 + if (i2c_smbus_write_byte(client, 0) < 0) { 208 209 dev_warn(&client->dev, "probe failed\n"); 209 210 goto exit_free; 210 211 }
+37 -9
drivers/infiniband/hw/cxgb4/cm.c
··· 1463 1463 struct c4iw_qp_attributes attrs; 1464 1464 int disconnect = 1; 1465 1465 int release = 0; 1466 - int abort = 0; 1467 1466 struct tid_info *t = dev->rdev.lldi.tids; 1468 1467 unsigned int tid = GET_TID(hdr); 1468 + int ret; 1469 1469 1470 1470 ep = lookup_tid(t, tid); 1471 1471 PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid); ··· 1501 1501 start_ep_timer(ep); 1502 1502 __state_set(&ep->com, CLOSING); 1503 1503 attrs.next_state = C4IW_QP_STATE_CLOSING; 1504 - abort = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp, 1504 + ret = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp, 1505 1505 C4IW_QP_ATTR_NEXT_STATE, &attrs, 1); 1506 - peer_close_upcall(ep); 1507 - disconnect = 1; 1506 + if (ret != -ECONNRESET) { 1507 + peer_close_upcall(ep); 1508 + disconnect = 1; 1509 + } 1508 1510 break; 1509 1511 case ABORTING: 1510 1512 disconnect = 0; ··· 2111 2109 break; 2112 2110 } 2113 2111 2114 - mutex_unlock(&ep->com.mutex); 2115 2112 if (close) { 2116 - if (abrupt) 2117 - ret = abort_connection(ep, NULL, gfp); 2118 - else 2113 + if (abrupt) { 2114 + close_complete_upcall(ep); 2115 + ret = send_abort(ep, NULL, gfp); 2116 + } else 2119 2117 ret = send_halfclose(ep, gfp); 2120 2118 if (ret) 2121 2119 fatal = 1; 2122 2120 } 2121 + mutex_unlock(&ep->com.mutex); 2123 2122 if (fatal) 2124 2123 release_ep_resources(ep); 2125 2124 return ret; ··· 2304 2301 return 0; 2305 2302 } 2306 2303 2304 + static int peer_abort_intr(struct c4iw_dev *dev, struct sk_buff *skb) 2305 + { 2306 + struct cpl_abort_req_rss *req = cplhdr(skb); 2307 + struct c4iw_ep *ep; 2308 + struct tid_info *t = dev->rdev.lldi.tids; 2309 + unsigned int tid = GET_TID(req); 2310 + 2311 + ep = lookup_tid(t, tid); 2312 + if (is_neg_adv_abort(req->status)) { 2313 + PDBG("%s neg_adv_abort ep %p tid %u\n", __func__, ep, 2314 + ep->hwtid); 2315 + kfree_skb(skb); 2316 + return 0; 2317 + } 2318 + PDBG("%s ep %p tid %u state %u\n", __func__, ep, ep->hwtid, 2319 + ep->com.state); 2320 + 2321 + /* 2322 + * Wake up any threads in rdma_init() or rdma_fini(). 2323 + */ 2324 + c4iw_wake_up(&ep->com.wr_wait, -ECONNRESET); 2325 + sched(dev, skb); 2326 + return 0; 2327 + } 2328 + 2307 2329 /* 2308 2330 * Most upcalls from the T4 Core go to sched() to 2309 2331 * schedule the processing on a work queue. ··· 2345 2317 [CPL_PASS_ESTABLISH] = sched, 2346 2318 [CPL_PEER_CLOSE] = sched, 2347 2319 [CPL_CLOSE_CON_RPL] = sched, 2348 - [CPL_ABORT_REQ_RSS] = sched, 2320 + [CPL_ABORT_REQ_RSS] = peer_abort_intr, 2349 2321 [CPL_RDMA_TERMINATE] = sched, 2350 2322 [CPL_FW4_ACK] = sched, 2351 2323 [CPL_SET_TCB_RPL] = set_tcb_rpl,
+4
drivers/infiniband/hw/cxgb4/cq.c
··· 801 801 if (ucontext) { 802 802 memsize = roundup(memsize, PAGE_SIZE); 803 803 hwentries = memsize / sizeof *chp->cq.queue; 804 + while (hwentries > T4_MAX_IQ_SIZE) { 805 + memsize -= PAGE_SIZE; 806 + hwentries = memsize / sizeof *chp->cq.queue; 807 + } 804 808 } 805 809 chp->cq.size = hwentries; 806 810 chp->cq.memsize = memsize;
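The cq.c hunk above caps a user-mapped CQ by shaving whole pages off `memsize` until the entry count fits under `T4_MAX_IQ_SIZE`. A minimal userspace sketch of that clamping loop (function name, parameters, and constants here are illustrative, not the driver's):

```c
#include <assert.h>

/* Shrink a page-granular buffer until it holds no more than max_entries
 * fixed-size entries; returns the resulting entry count. Mirrors the
 * while-loop added to the cq.c allocation path above. */
unsigned int clamp_entries(unsigned int memsize, unsigned int entry_size,
                           unsigned int max_entries, unsigned int page_size)
{
    unsigned int entries = memsize / entry_size;

    while (entries > max_entries) {
        memsize -= page_size;          /* give back one whole page */
        entries = memsize / entry_size;
    }
    return entries;
}
```

With 64-byte entries and 4 KiB pages, a 3-page request against a limit of 100 entries collapses page by page down to a single page (64 entries).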
+1 -1
drivers/infiniband/hw/cxgb4/mem.c
··· 625 625 mhp->attr.perms = c4iw_ib_to_tpt_access(acc); 626 626 mhp->attr.va_fbo = virt; 627 627 mhp->attr.page_size = shift - 12; 628 - mhp->attr.len = (u32) length; 628 + mhp->attr.len = length; 629 629 630 630 err = register_mem(rhp, php, mhp, shift); 631 631 if (err)
+1 -4
drivers/infiniband/hw/cxgb4/qp.c
··· 1207 1207 c4iw_get_ep(&qhp->ep->com); 1208 1208 } 1209 1209 ret = rdma_fini(rhp, qhp, ep); 1210 - if (ret) { 1211 - if (internal) 1212 - c4iw_get_ep(&qhp->ep->com); 1210 + if (ret) 1213 1211 goto err; 1214 - } 1215 1212 break; 1216 1213 case C4IW_QP_STATE_TERMINATE: 1217 1214 set_state(qhp, C4IW_QP_STATE_TERMINATE);
+18 -7
drivers/infiniband/hw/qib/qib_iba7322.c
··· 469 469 #define IB_7322_LT_STATE_RECOVERIDLE 0x0f 470 470 #define IB_7322_LT_STATE_CFGENH 0x10 471 471 #define IB_7322_LT_STATE_CFGTEST 0x11 472 + #define IB_7322_LT_STATE_CFGWAITRMTTEST 0x12 473 + #define IB_7322_LT_STATE_CFGWAITENH 0x13 472 474 473 475 /* link state machine states from IBC */ 474 476 #define IB_7322_L_STATE_DOWN 0x0 ··· 500 498 IB_PHYSPORTSTATE_LINK_ERR_RECOVER, 501 499 [IB_7322_LT_STATE_CFGENH] = IB_PHYSPORTSTATE_CFG_ENH, 502 500 [IB_7322_LT_STATE_CFGTEST] = IB_PHYSPORTSTATE_CFG_TRAIN, 503 - [0x12] = IB_PHYSPORTSTATE_CFG_TRAIN, 504 - [0x13] = IB_PHYSPORTSTATE_CFG_WAIT_ENH, 501 + [IB_7322_LT_STATE_CFGWAITRMTTEST] = 502 + IB_PHYSPORTSTATE_CFG_TRAIN, 503 + [IB_7322_LT_STATE_CFGWAITENH] = 504 + IB_PHYSPORTSTATE_CFG_WAIT_ENH, 505 505 [0x14] = IB_PHYSPORTSTATE_CFG_TRAIN, 506 506 [0x15] = IB_PHYSPORTSTATE_CFG_TRAIN, 507 507 [0x16] = IB_PHYSPORTSTATE_CFG_TRAIN, ··· 1696 1692 break; 1697 1693 } 1698 1694 1699 - if (ibclt == IB_7322_LT_STATE_CFGTEST && 1695 + if (((ibclt >= IB_7322_LT_STATE_CFGTEST && 1696 + ibclt <= IB_7322_LT_STATE_CFGWAITENH) || 1697 + ibclt == IB_7322_LT_STATE_LINKUP) && 1700 1698 (ibcst & SYM_MASK(IBCStatusA_0, LinkSpeedQDR))) { 1701 1699 force_h1(ppd); 1702 1700 ppd->cpspec->qdr_reforce = 1; ··· 7307 7301 static void serdes_7322_los_enable(struct qib_pportdata *ppd, int enable) 7308 7302 { 7309 7303 u64 data = qib_read_kreg_port(ppd, krp_serdesctrl); 7310 - printk(KERN_INFO QIB_DRV_NAME " IB%u:%u Turning LOS %s\n", 7311 - ppd->dd->unit, ppd->port, (enable ? "on" : "off")); 7312 - if (enable) 7304 + u8 state = SYM_FIELD(data, IBSerdesCtrl_0, RXLOSEN); 7305 + 7306 + if (enable && !state) { 7307 + printk(KERN_INFO QIB_DRV_NAME " IB%u:%u Turning LOS on\n", 7308 + ppd->dd->unit, ppd->port); 7313 7309 data |= SYM_MASK(IBSerdesCtrl_0, RXLOSEN); 7314 - else 7310 + } else if (!enable && state) { 7311 + printk(KERN_INFO QIB_DRV_NAME " IB%u:%u Turning LOS off\n", 7312 + ppd->dd->unit, ppd->port); 7315 7313 data &= ~SYM_MASK(IBSerdesCtrl_0, RXLOSEN); 7314 + } 7316 7315 qib_write_kreg_port(ppd, krp_serdesctrl, data); 7317 7316 } 7318 7317
+5 -1
drivers/infiniband/hw/qib/qib_intr.c
··· 96 96 * states, or if it transitions from any of the up (INIT or better) 97 97 * states into any of the down states (except link recovery), then 98 98 * call the chip-specific code to take appropriate actions. 99 + * 100 + * ppd->lflags could be 0 if this is the first time the interrupt 101 + * handlers has been called but the link is already up. 99 102 */ 100 - if (lstate >= IB_PORT_INIT && (ppd->lflags & QIBL_LINKDOWN) && 103 + if (lstate >= IB_PORT_INIT && 104 + (!ppd->lflags || (ppd->lflags & QIBL_LINKDOWN)) && 101 105 ltstate == IB_PHYSPORTSTATE_LINKUP) { 102 106 /* transitioned to UP */ 103 107 if (dd->f_ib_updown(ppd, 1, ibcs))
+2 -2
drivers/leds/leds-lp5521.c
··· 593 593 &lp5521_led_attribute_group); 594 594 } 595 595 596 - static int __init lp5521_init_led(struct lp5521_led *led, 596 + static int __devinit lp5521_init_led(struct lp5521_led *led, 597 597 struct i2c_client *client, 598 598 int chan, struct lp5521_platform_data *pdata) 599 599 { ··· 637 637 return 0; 638 638 } 639 639 640 - static int lp5521_probe(struct i2c_client *client, 640 + static int __devinit lp5521_probe(struct i2c_client *client, 641 641 const struct i2c_device_id *id) 642 642 { 643 643 struct lp5521_chip *chip;
+2 -2
drivers/leds/leds-lp5523.c
··· 826 826 return 0; 827 827 } 828 828 829 - static int __init lp5523_init_led(struct lp5523_led *led, struct device *dev, 829 + static int __devinit lp5523_init_led(struct lp5523_led *led, struct device *dev, 830 830 int chan, struct lp5523_platform_data *pdata) 831 831 { 832 832 char name[32]; ··· 872 872 873 873 static struct i2c_driver lp5523_driver; 874 874 875 - static int lp5523_probe(struct i2c_client *client, 875 + static int __devinit lp5523_probe(struct i2c_client *client, 876 876 const struct i2c_device_id *id) 877 877 { 878 878 struct lp5523_chip *chip;
+1
drivers/md/md.c
··· 7088 7088 list_for_each_entry(rdev, &mddev->disks, same_set) { 7089 7089 if (rdev->raid_disk >= 0 && 7090 7090 !test_bit(In_sync, &rdev->flags) && 7091 + !test_bit(Faulty, &rdev->flags) && 7091 7092 !test_bit(Blocked, &rdev->flags)) 7092 7093 spares++; 7093 7094 if (rdev->raid_disk < 0
+1 -1
drivers/misc/cb710/sgbuf2.c
··· 47 47 48 48 static inline bool needs_unaligned_copy(const void *ptr) 49 49 { 50 - #ifdef HAVE_EFFICIENT_UNALIGNED_ACCESS 50 + #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS 51 51 return false; 52 52 #else 53 53 return ((ptr - NULL) & 3) != 0;
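The cb710 fix matters because a bare `HAVE_EFFICIENT_UNALIGNED_ACCESS` is never defined; Kconfig symbols only exist in C with the `CONFIG_` prefix, so the slow fallback branch was always compiled in. That fallback is just a 4-byte alignment test; a hedged userspace equivalent, using `uintptr_t` instead of the kernel's `(ptr - NULL)` idiom:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True when ptr is not 4-byte aligned, i.e. the byte-copy path is needed. */
bool needs_unaligned_copy(const void *ptr)
{
    return ((uintptr_t)ptr & 3) != 0;
}
```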
+1 -1
drivers/misc/ioc4.c
··· 270 270 return IOC4_VARIANT_PCI_RT; 271 271 } 272 272 273 - static void __devinit 273 + static void 274 274 ioc4_load_modules(struct work_struct *work) 275 275 { 276 276 request_module("sgiioc4");
+8
drivers/misc/lkdtm.c
··· 120 120 static enum cname cpoint = CN_INVALID; 121 121 static enum ctype cptype = CT_NONE; 122 122 static int count = DEFAULT_COUNT; 123 + static DEFINE_SPINLOCK(count_lock); 123 124 124 125 module_param(recur_count, int, 0644); 125 126 MODULE_PARM_DESC(recur_count, " Recursion level for the stack overflow test, "\ ··· 231 230 static int lkdtm_parse_commandline(void) 232 231 { 233 232 int i; 233 + unsigned long flags; 234 234 235 235 if (cpoint_count < 1 || recur_count < 1) 236 236 return -EINVAL; 237 237 238 + spin_lock_irqsave(&count_lock, flags); 238 239 count = cpoint_count; 240 + spin_unlock_irqrestore(&count_lock, flags); 239 241 240 242 /* No special parameters */ 241 243 if (!cpoint_type && !cpoint_name) ··· 353 349 354 350 static void lkdtm_handler(void) 355 351 { 352 + unsigned long flags; 353 + 354 + spin_lock_irqsave(&count_lock, flags); 356 355 count--; 357 356 printk(KERN_INFO "lkdtm: Crash point %s of type %s hit, trigger in %d rounds\n", 358 357 cp_name_to_str(cpoint), cp_type_to_str(cptype), count); ··· 364 357 lkdtm_do_action(cptype); 365 358 count = cpoint_count; 366 359 } 360 + spin_unlock_irqrestore(&count_lock, flags); 367 361 } 368 362 369 363 static int lkdtm_register_cpoint(enum cname which)
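The lkdtm hunk closes a race: `count` was read-modified-written from `lkdtm_handler()` (which can run in interrupt context) without any locking. A userspace sketch of the same pattern, with a pthread mutex standing in for `spin_lock_irqsave()` and made-up helper names:

```c
#include <assert.h>
#include <pthread.h>

/* Userspace analogue of the lkdtm fix: every read-modify-write of the
 * shared counter happens under one lock, so concurrent triggers cannot
 * interleave the decrement with the reset. */
static pthread_mutex_t count_lock = PTHREAD_MUTEX_INITIALIZER;
static int count;

void counter_set(int v)
{
    pthread_mutex_lock(&count_lock);
    count = v;
    pthread_mutex_unlock(&count_lock);
}

/* Decrement under the lock; when the counter reaches zero, reset it to
 * reset_to and report 1 (the "crash point fires" case). */
int counter_hit(int reset_to)
{
    int fired = 0;

    pthread_mutex_lock(&count_lock);
    if (--count == 0) {
        count = reset_to;
        fired = 1;
    }
    pthread_mutex_unlock(&count_lock);
    return fired;
}
```

In the kernel the same critical section needs `spin_lock_irqsave()` rather than a mutex because the handler may run with interrupts involved and cannot sleep.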
+7 -4
drivers/misc/pti.c
··· 317 317 * a master, channel ID address 318 318 * used to write to PTI HW. 319 319 * 320 - * @mc: master, channel apeture ID address to be released. 320 + * @mc: master, channel apeture ID address to be released. This 321 + * will de-allocate the structure via kfree(). 321 322 */ 322 323 void pti_release_masterchannel(struct pti_masterchannel *mc) 323 324 { ··· 476 475 else 477 476 pti_tty_data->mc = pti_request_masterchannel(2); 478 477 479 - if (pti_tty_data->mc == NULL) 478 + if (pti_tty_data->mc == NULL) { 479 + kfree(pti_tty_data); 480 480 return -ENXIO; 481 + } 481 482 tty->driver_data = pti_tty_data; 482 483 } 483 484 ··· 498 495 if (pti_tty_data == NULL) 499 496 return; 500 497 pti_release_masterchannel(pti_tty_data->mc); 501 - kfree(tty->driver_data); 498 + kfree(pti_tty_data); 502 499 tty->driver_data = NULL; 503 500 } 504 501 ··· 584 581 static int pti_char_release(struct inode *inode, struct file *filp) 585 582 { 586 583 pti_release_masterchannel(filp->private_data); 587 - kfree(filp->private_data); 584 + filp->private_data = NULL; 588 585 return 0; 589 586 } 590 587
+1 -1
drivers/misc/ti-st/st_core.c
··· 605 605 pr_debug("%s: %d ", __func__, proto->chnl_id); 606 606 607 607 st_kim_ref(&st_gdata, 0); 608 - if (proto->chnl_id >= ST_MAX_CHANNELS) { 608 + if (!st_gdata || proto->chnl_id >= ST_MAX_CHANNELS) { 609 609 pr_err(" chnl_id %d not supported", proto->chnl_id); 610 610 return -EPROTONOSUPPORT; 611 611 }
+6 -2
drivers/misc/ti-st/st_kim.c
··· 245 245 pr_err("invalid action after change remote baud command"); 246 246 } else { 247 247 *ptr = *ptr + sizeof(struct bts_action) + 248 - ((struct bts_action *)nxt_action)->size; 248 + ((struct bts_action *)cur_action)->size; 249 249 *len = *len - (sizeof(struct bts_action) + 250 - ((struct bts_action *)nxt_action)->size); 250 + ((struct bts_action *)cur_action)->size); 251 251 /* warn user on not commenting these in firmware */ 252 252 pr_warn("skipping the wait event of change remote baud"); 253 253 } ··· 604 604 struct kim_data_s *kim_gdata; 605 605 /* get kim_gdata reference from platform device */ 606 606 pdev = st_get_plat_device(id); 607 + if (!pdev) { 608 + *core_data = NULL; 609 + return; 610 + } 607 611 kim_gdata = dev_get_drvdata(&pdev->dev); 608 612 *core_data = kim_gdata->core_data; 609 613 }
+4 -1
drivers/mmc/card/block.c
··· 1024 1024 INIT_LIST_HEAD(&md->part); 1025 1025 md->usage = 1; 1026 1026 1027 - ret = mmc_init_queue(&md->queue, card, &md->lock); 1027 + ret = mmc_init_queue(&md->queue, card, &md->lock, subname); 1028 1028 if (ret) 1029 1029 goto err_putdisk; 1030 1030 ··· 1297 1297 struct mmc_blk_data *md = mmc_get_drvdata(card); 1298 1298 1299 1299 mmc_blk_remove_parts(card, md); 1300 + mmc_claim_host(card->host); 1301 + mmc_blk_part_switch(card, md); 1302 + mmc_release_host(card->host); 1300 1303 mmc_blk_remove_req(md); 1301 1304 mmc_set_drvdata(card, NULL); 1302 1305 }
+6 -9
drivers/mmc/card/queue.c
··· 106 106 * @mq: mmc queue 107 107 * @card: mmc card to attach this queue 108 108 * @lock: queue lock 109 + * @subname: partition subname 109 110 * 110 111 * Initialise a MMC card request queue. 111 112 */ 112 - int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, spinlock_t *lock) 113 + int mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card, 114 + spinlock_t *lock, const char *subname) 113 115 { 114 116 struct mmc_host *host = card->host; 115 117 u64 limit = BLK_BOUNCE_HIGH; ··· 135 133 mq->queue->limits.max_discard_sectors = UINT_MAX; 136 134 if (card->erased_byte == 0) 137 135 mq->queue->limits.discard_zeroes_data = 1; 138 - if (!mmc_can_trim(card) && is_power_of_2(card->erase_size)) { 139 - mq->queue->limits.discard_granularity = 140 - card->erase_size << 9; 141 - mq->queue->limits.discard_alignment = 142 - card->erase_size << 9; 143 - } 136 + mq->queue->limits.discard_granularity = card->pref_erase << 9; 144 137 if (mmc_can_secure_erase_trim(card)) 145 138 queue_flag_set_unlocked(QUEUE_FLAG_SECDISCARD, 146 139 mq->queue); ··· 206 209 207 210 sema_init(&mq->thread_sem, 1); 208 211 209 - mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d", 210 - host->index); 212 + mq->thread = kthread_run(mmc_queue_thread, mq, "mmcqd/%d%s", 213 + host->index, subname ? subname : ""); 211 214 212 215 if (IS_ERR(mq->thread)) { 213 216 ret = PTR_ERR(mq->thread);
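The `subname` parameter threaded through `mmc_init_queue()` exists so each partition's queue thread gets a distinguishable name ("mmcqd/0" vs "mmcqd/0boot0", say). The scheme reduces to a guarded `snprintf`; `queue_thread_name` below is an illustrative helper, not a kernel function:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build "mmcqd/<index>" for the main queue thread, or
 * "mmcqd/<index><subname>" for a partition queue thread. */
void queue_thread_name(char *buf, size_t len, int index, const char *subname)
{
    snprintf(buf, len, "mmcqd/%d%s", index, subname ? subname : "");
}
```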
+2 -1
drivers/mmc/card/queue.h
··· 19 19 unsigned int bounce_sg_len; 20 20 }; 21 21 22 - extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *); 22 + extern int mmc_init_queue(struct mmc_queue *, struct mmc_card *, spinlock_t *, 23 + const char *); 23 24 extern void mmc_cleanup_queue(struct mmc_queue *); 24 25 extern void mmc_queue_suspend(struct mmc_queue *); 25 26 extern void mmc_queue_resume(struct mmc_queue *);
+1 -1
drivers/mmc/core/core.c
··· 1245 1245 */ 1246 1246 timeout_clks <<= 1; 1247 1247 timeout_us += (timeout_clks * 1000) / 1248 - (card->host->ios.clock / 1000); 1248 + (mmc_host_clk_rate(card->host) / 1000); 1249 1249 1250 1250 erase_timeout = timeout_us / 1000; 1251 1251
+39
drivers/mmc/core/sdio.c
··· 691 691 static int mmc_sdio_power_restore(struct mmc_host *host) 692 692 { 693 693 int ret; 694 + u32 ocr; 694 695 695 696 BUG_ON(!host); 696 697 BUG_ON(!host->card); 697 698 698 699 mmc_claim_host(host); 700 + 701 + /* 702 + * Reset the card by performing the same steps that are taken by 703 + * mmc_rescan_try_freq() and mmc_attach_sdio() during a "normal" probe. 704 + * 705 + * sdio_reset() is technically not needed. Having just powered up the 706 + * hardware, it should already be in reset state. However, some 707 + * platforms (such as SD8686 on OLPC) do not instantly cut power, 708 + * meaning that a reset is required when restoring power soon after 709 + * powering off. It is harmless in other cases. 710 + * 711 + * The CMD5 reset (mmc_send_io_op_cond()), according to the SDIO spec, 712 + * is not necessary for non-removable cards. However, it is required 713 + * for OLPC SD8686 (which expects a [CMD5,5,3,7] init sequence), and 714 + * harmless in other situations. 715 + * 716 + * With these steps taken, mmc_select_voltage() is also required to 717 + * restore the correct voltage setting of the card. 718 + */ 719 + sdio_reset(host); 720 + mmc_go_idle(host); 721 + mmc_send_if_cond(host, host->ocr_avail); 722 + 723 + ret = mmc_send_io_op_cond(host, 0, &ocr); 724 + if (ret) 725 + goto out; 726 + 727 + if (host->ocr_avail_sdio) 728 + host->ocr_avail = host->ocr_avail_sdio; 729 + 730 + host->ocr = mmc_select_voltage(host, ocr & ~0x7F); 731 + if (!host->ocr) { 732 + ret = -EINVAL; 733 + goto out; 734 + } 735 + 699 736 ret = mmc_sdio_init_card(host, host->ocr, host->card, 700 737 mmc_card_keep_power(host)); 701 738 if (!ret && host->sdio_irqs) 702 739 mmc_signal_sdio_irq(host); 740 + 741 + out: 703 742 mmc_release_host(host); 704 743 705 744 return ret;
+1 -1
drivers/mmc/core/sdio_bus.c
··· 189 189 190 190 /* Then undo the runtime PM settings in sdio_bus_probe() */ 191 191 if (func->card->host->caps & MMC_CAP_POWER_OFF_CARD) 192 - pm_runtime_put_noidle(dev); 192 + pm_runtime_put_sync(dev); 193 193 194 194 out: 195 195 return ret;
+5
drivers/mmc/host/of_mmc_spi.c
··· 25 25 #include <linux/mmc/core.h> 26 26 #include <linux/mmc/host.h> 27 27 28 + /* For archs that don't support NO_IRQ (such as mips), provide a dummy value */ 29 + #ifndef NO_IRQ 30 + #define NO_IRQ 0 31 + #endif 32 + 28 33 MODULE_LICENSE("GPL"); 29 34 30 35 enum {
+3 -3
drivers/mmc/host/omap_hsmmc.c
··· 429 429 return -EINVAL; 430 430 } 431 431 } 432 - mmc_slot(host).ocr_mask = mmc_regulator_get_ocrmask(reg); 433 432 434 433 /* Allow an aux regulator */ 435 434 reg = regulator_get(host->dev, "vmmc_aux"); ··· 961 962 spin_unlock(&host->irq_lock); 962 963 963 964 if (host->use_dma && dma_ch != -1) { 964 - dma_unmap_sg(mmc_dev(host->mmc), host->data->sg, host->dma_len, 965 + dma_unmap_sg(mmc_dev(host->mmc), host->data->sg, 966 + host->data->sg_len, 965 967 omap_hsmmc_get_dma_dir(host, host->data)); 966 968 omap_free_dma(dma_ch); 967 969 } ··· 1346 1346 return; 1347 1347 } 1348 1348 1349 - dma_unmap_sg(mmc_dev(host->mmc), data->sg, host->dma_len, 1349 + dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len, 1350 1350 omap_hsmmc_get_dma_dir(host, data)); 1351 1351 1352 1352 req_in_progress = host->req_in_progress;
+3 -2
drivers/mmc/host/sh_mobile_sdhi.c
··· 92 92 mmc_data->ocr_mask = p->tmio_ocr_mask; 93 93 mmc_data->capabilities |= p->tmio_caps; 94 94 95 - if (p->dma_slave_tx >= 0 && p->dma_slave_rx >= 0) { 95 + if (p->dma_slave_tx > 0 && p->dma_slave_rx > 0) { 96 96 priv->param_tx.slave_id = p->dma_slave_tx; 97 97 priv->param_rx.slave_id = p->dma_slave_rx; 98 98 priv->dma_priv.chan_priv_tx = &priv->param_tx; ··· 165 165 166 166 p->pdata = NULL; 167 167 168 + tmio_mmc_host_remove(host); 169 + 168 170 for (i = 0; i < 3; i++) { 169 171 irq = platform_get_irq(pdev, i); 170 172 if (irq >= 0) 171 173 free_irq(irq, host); 172 174 } 173 175 174 - tmio_mmc_host_remove(host); 175 176 clk_disable(priv->clk); 176 177 clk_put(priv->clk); 177 178 kfree(priv);
+2 -2
drivers/mmc/host/tmio_mmc_pio.c
··· 824 824 struct tmio_mmc_host *host = mmc_priv(mmc); 825 825 struct tmio_mmc_data *pdata = host->pdata; 826 826 827 - return ((pdata->flags & TMIO_MMC_WRPROTECT_DISABLE) || 828 - !(sd_ctrl_read32(host, CTL_STATUS) & TMIO_STAT_WRPROTECT)); 827 + return !((pdata->flags & TMIO_MMC_WRPROTECT_DISABLE) || 828 + (sd_ctrl_read32(host, CTL_STATUS) & TMIO_STAT_WRPROTECT)); 829 829 } 830 830 831 831 static int tmio_mmc_get_cd(struct mmc_host *mmc)
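The tmio `get_ro` fix is a pure polarity bug: the old expression returned true when the card was writable, while the MMC core expects a `get_ro` hook to return true for read-only. Negating the whole disjunction fixes both cases at once. A table-checkable sketch with made-up flag values:

```c
#include <assert.h>
#include <stdbool.h>

#define WRPROTECT_DISABLE 0x1   /* hypothetical stand-ins for the */
#define STAT_WRPROTECT    0x2   /* driver's flag and status bits  */

/* Corrected polarity: report read-only only when write protect is not
 * disabled by platform data and the hardware status bit is clear. */
bool get_ro(unsigned int flags, unsigned int status)
{
    return !((flags & WRPROTECT_DISABLE) || (status & STAT_WRPROTECT));
}
```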
+4 -7
drivers/mmc/host/vub300.c
··· 2096 2096 static int vub300_probe(struct usb_interface *interface, 2097 2097 const struct usb_device_id *id) 2098 2098 { /* NOT irq */ 2099 - struct vub300_mmc_host *vub300 = NULL; 2099 + struct vub300_mmc_host *vub300; 2100 2100 struct usb_host_interface *iface_desc; 2101 2101 struct usb_device *udev = usb_get_dev(interface_to_usbdev(interface)); 2102 2102 int i; ··· 2118 2118 command_out_urb = usb_alloc_urb(0, GFP_KERNEL); 2119 2119 if (!command_out_urb) { 2120 2120 retval = -ENOMEM; 2121 - dev_err(&vub300->udev->dev, 2122 - "not enough memory for the command_out_urb\n"); 2121 + dev_err(&udev->dev, "not enough memory for command_out_urb\n"); 2123 2122 goto error0; 2124 2123 } 2125 2124 command_res_urb = usb_alloc_urb(0, GFP_KERNEL); 2126 2125 if (!command_res_urb) { 2127 2126 retval = -ENOMEM; 2128 - dev_err(&vub300->udev->dev, 2129 - "not enough memory for the command_res_urb\n"); 2127 + dev_err(&udev->dev, "not enough memory for command_res_urb\n"); 2130 2128 goto error1; 2131 2129 } 2132 2130 /* this also allocates memory for our VUB300 mmc host device */ 2133 2131 mmc = mmc_alloc_host(sizeof(struct vub300_mmc_host), &udev->dev); 2134 2132 if (!mmc) { 2135 2133 retval = -ENOMEM; 2136 - dev_err(&vub300->udev->dev, 2137 - "not enough memory for the mmc_host\n"); 2134 + dev_err(&udev->dev, "not enough memory for the mmc_host\n"); 2138 2135 goto error4; 2139 2136 } 2140 2137 /* MMC core transfer sizes tunable parameters */
+3 -3
drivers/mtd/nand/fsl_elbc_nand.c
··· 339 339 (FIR_OP_UA << FIR_OP1_SHIFT) | 340 340 (FIR_OP_RBW << FIR_OP2_SHIFT)); 341 341 out_be32(&lbc->fcr, NAND_CMD_READID << FCR_CMD0_SHIFT); 342 - /* 5 bytes for manuf, device and exts */ 343 - out_be32(&lbc->fbcr, 5); 344 - elbc_fcm_ctrl->read_bytes = 5; 342 + /* nand_get_flash_type() reads 8 bytes of entire ID string */ 343 + out_be32(&lbc->fbcr, 8); 344 + elbc_fcm_ctrl->read_bytes = 8; 345 345 elbc_fcm_ctrl->use_mdr = 1; 346 346 elbc_fcm_ctrl->mdr = 0; 347 347
+1
drivers/net/8139too.c
··· 993 993 * features 994 994 */ 995 995 dev->features |= NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_HIGHDMA; 996 + dev->vlan_features = dev->features; 996 997 997 998 dev->irq = pdev->irq; 998 999
+2 -1
drivers/net/Kconfig
··· 3415 3415 3416 3416 config NETCONSOLE_DYNAMIC 3417 3417 bool "Dynamic reconfiguration of logging targets" 3418 - depends on NETCONSOLE && SYSFS && CONFIGFS_FS 3418 + depends on NETCONSOLE && SYSFS && CONFIGFS_FS && \ 3419 + !(NETCONSOLE=y && CONFIGFS_FS=m) 3419 3420 help 3420 3421 This option enables the ability to dynamically reconfigure target 3421 3422 parameters (interface, IP addresses, port numbers, MAC addresses)
+3 -4
drivers/net/bna/bnad.c
··· 1111 1111 struct bna_intr_info *intr_info) 1112 1112 { 1113 1113 int err = 0; 1114 - unsigned long flags; 1114 + unsigned long irq_flags = 0, flags; 1115 1115 u32 irq; 1116 1116 irq_handler_t irq_handler; 1117 1117 ··· 1125 1125 if (bnad->cfg_flags & BNAD_CF_MSIX) { 1126 1126 irq_handler = (irq_handler_t)bnad_msix_mbox_handler; 1127 1127 irq = bnad->msix_table[bnad->msix_num - 1].vector; 1128 - flags = 0; 1129 1128 intr_info->intr_type = BNA_INTR_T_MSIX; 1130 1129 intr_info->idl[0].vector = bnad->msix_num - 1; 1131 1130 } else { 1132 1131 irq_handler = (irq_handler_t)bnad_isr; 1133 1132 irq = bnad->pcidev->irq; 1134 - flags = IRQF_SHARED; 1133 + irq_flags = IRQF_SHARED; 1135 1134 intr_info->intr_type = BNA_INTR_T_INTX; 1136 1135 /* intr_info->idl.vector = 0 ? */ 1137 1136 } 1138 1137 spin_unlock_irqrestore(&bnad->bna_lock, flags); 1139 - 1138 + flags = irq_flags; 1140 1139 sprintf(bnad->mbox_irq_name, "%s", BNAD_NAME); 1141 1140 1142 1141 /*
+3 -3
drivers/net/bnx2x/bnx2x_main.c
··· 50 50 #include <linux/zlib.h> 51 51 #include <linux/io.h> 52 52 #include <linux/stringify.h> 53 + #include <linux/vmalloc.h> 53 54 54 55 #include "bnx2x.h" 55 56 #include "bnx2x_init.h" ··· 5153 5152 if (bp->strm == NULL) 5154 5153 goto gunzip_nomem2; 5155 5154 5156 - bp->strm->workspace = kmalloc(zlib_inflate_workspacesize(), 5157 - GFP_KERNEL); 5155 + bp->strm->workspace = vmalloc(zlib_inflate_workspacesize()); 5158 5156 if (bp->strm->workspace == NULL) 5159 5157 goto gunzip_nomem3; 5160 5158 ··· 5177 5177 static void bnx2x_gunzip_end(struct bnx2x *bp) 5178 5178 { 5179 5179 if (bp->strm) { 5180 - kfree(bp->strm->workspace); 5180 + vfree(bp->strm->workspace); 5181 5181 kfree(bp->strm); 5182 5182 bp->strm = NULL; 5183 5183 }
+2 -2
drivers/net/can/Kconfig
··· 34 34 config CAN_DEV 35 35 tristate "Platform CAN drivers with Netlink support" 36 36 depends on CAN 37 - default Y 37 + default y 38 38 ---help--- 39 39 Enables the common framework for platform CAN drivers with Netlink 40 40 support. This is the standard library for CAN drivers. ··· 43 43 config CAN_CALC_BITTIMING 44 44 bool "CAN bit-timing calculation" 45 45 depends on CAN_DEV 46 - default Y 46 + default y 47 47 ---help--- 48 48 If enabled, CAN bit-timing parameters will be calculated for the 49 49 bit-rate specified via Netlink argument "bitrate" when the device
+2 -2
drivers/net/cxgb3/sge.c
··· 2026 2026 skb->ip_summed = CHECKSUM_UNNECESSARY; 2027 2027 } else 2028 2028 skb_checksum_none_assert(skb); 2029 - skb_record_rx_queue(skb, qs - &adap->sge.qs[0]); 2029 + skb_record_rx_queue(skb, qs - &adap->sge.qs[pi->first_qset]); 2030 2030 2031 2031 if (unlikely(p->vlan_valid)) { 2032 2032 struct vlan_group *grp = pi->vlan_grp; ··· 2145 2145 if (!complete) 2146 2146 return; 2147 2147 2148 - skb_record_rx_queue(skb, qs - &adap->sge.qs[0]); 2148 + skb_record_rx_queue(skb, qs - &adap->sge.qs[pi->first_qset]); 2149 2149 2150 2150 if (unlikely(cpl->vlan_valid)) { 2151 2151 struct vlan_group *grp = pi->vlan_grp;
+3 -4
drivers/net/greth.c
··· 1017 1017 return -EINVAL; 1018 1018 1019 1019 memcpy(dev->dev_addr, addr->sa_data, dev->addr_len); 1020 + GRETH_REGSAVE(regs->esa_msb, dev->dev_addr[0] << 8 | dev->dev_addr[1]); 1021 + GRETH_REGSAVE(regs->esa_lsb, dev->dev_addr[2] << 24 | dev->dev_addr[3] << 16 | 1022 + dev->dev_addr[4] << 8 | dev->dev_addr[5]); 1020 1023 1021 - GRETH_REGSAVE(regs->esa_msb, addr->sa_data[0] << 8 | addr->sa_data[1]); 1022 - GRETH_REGSAVE(regs->esa_lsb, 1023 - addr->sa_data[2] << 24 | addr-> 1024 - sa_data[3] << 16 | addr->sa_data[4] << 8 | addr->sa_data[5]); 1025 1024 return 0; 1026 1025 } 1027 1026
+2 -2
drivers/net/hamradio/6pack.c
··· 692 692 { 693 693 struct sixpack *sp; 694 694 695 - write_lock(&disc_data_lock); 695 + write_lock_bh(&disc_data_lock); 696 696 sp = tty->disc_data; 697 697 tty->disc_data = NULL; 698 - write_unlock(&disc_data_lock); 698 + write_unlock_bh(&disc_data_lock); 699 699 if (!sp) 700 700 return; 701 701
+2 -2
drivers/net/hamradio/mkiss.c
··· 813 813 { 814 814 struct mkiss *ax; 815 815 816 - write_lock(&disc_data_lock); 816 + write_lock_bh(&disc_data_lock); 817 817 ax = tty->disc_data; 818 818 tty->disc_data = NULL; 819 - write_unlock(&disc_data_lock); 819 + write_unlock_bh(&disc_data_lock); 820 820 821 821 if (!ax) 822 822 return;
+2 -1
drivers/net/natsemi.c
··· 2360 2360 PCI_DMA_FROMDEVICE); 2361 2361 } else { 2362 2362 pci_unmap_single(np->pci_dev, np->rx_dma[entry], 2363 - buflen, PCI_DMA_FROMDEVICE); 2363 + buflen + NATSEMI_PADDING, 2364 + PCI_DMA_FROMDEVICE); 2364 2365 skb_put(skb = np->rx_skbuff[entry], pkt_len); 2365 2366 np->rx_skbuff[entry] = NULL; 2366 2367 }
+2 -3
drivers/net/ppp_deflate.c
··· 305 305 306 306 if (state) { 307 307 zlib_inflateEnd(&state->strm); 308 - kfree(state->strm.workspace); 308 + vfree(state->strm.workspace); 309 309 kfree(state); 310 310 } 311 311 } ··· 345 345 346 346 state->w_size = w_size; 347 347 state->strm.next_out = NULL; 348 - state->strm.workspace = kmalloc(zlib_inflate_workspacesize(), 349 - GFP_KERNEL|__GFP_REPEAT); 348 + state->strm.workspace = vmalloc(zlib_inflate_workspacesize()); 350 349 if (state->strm.workspace == NULL) 351 350 goto out_free; 352 351
+2 -1
drivers/net/qlge/qlge.h
··· 17 17 */ 18 18 #define DRV_NAME "qlge" 19 19 #define DRV_STRING "QLogic 10 Gigabit PCI-E Ethernet Driver " 20 - #define DRV_VERSION "v1.00.00.27.00.00-01" 20 + #define DRV_VERSION "v1.00.00.29.00.00-01" 21 21 22 22 #define WQ_ADDR_ALIGN 0x3 /* 4 byte alignment */ 23 23 ··· 1997 1997 QL_LB_LINK_UP = 10, 1998 1998 QL_FRC_COREDUMP = 11, 1999 1999 QL_EEH_FATAL = 12, 2000 + QL_ASIC_RECOVERY = 14, /* We are in asic recovery. */ 2000 2001 }; 2001 2002 2002 2003 /* link_status bit definitions */
+23 -17
drivers/net/qlge/qlge_main.c
··· 2152 2152 * thread 2153 2153 */ 2154 2154 clear_bit(QL_ADAPTER_UP, &qdev->flags); 2155 + /* Set asic recovery bit to indicate reset process that we are 2156 + * in fatal error recovery process rather than normal close 2157 + */ 2158 + set_bit(QL_ASIC_RECOVERY, &qdev->flags); 2155 2159 queue_delayed_work(qdev->workqueue, &qdev->asic_reset_work, 0); 2156 2160 } 2157 2161 ··· 2170 2166 return; 2171 2167 2172 2168 case CAM_LOOKUP_ERR_EVENT: 2173 - netif_err(qdev, link, qdev->ndev, 2174 - "Multiple CAM hits lookup occurred.\n"); 2175 - netif_err(qdev, drv, qdev->ndev, 2176 - "This event shouldn't occur.\n"); 2169 + netdev_err(qdev->ndev, "Multiple CAM hits lookup occurred.\n"); 2170 + netdev_err(qdev->ndev, "This event shouldn't occur.\n"); 2177 2171 ql_queue_asic_error(qdev); 2178 2172 return; 2179 2173 2180 2174 case SOFT_ECC_ERROR_EVENT: 2181 - netif_err(qdev, rx_err, qdev->ndev, 2182 - "Soft ECC error detected.\n"); 2175 + netdev_err(qdev->ndev, "Soft ECC error detected.\n"); 2183 2176 ql_queue_asic_error(qdev); 2184 2177 break; 2185 2178 2186 2179 case PCI_ERR_ANON_BUF_RD: 2187 - netif_err(qdev, rx_err, qdev->ndev, 2188 - "PCI error occurred when reading anonymous buffers from rx_ring %d.\n", 2189 - ib_ae_rsp->q_id); 2180 + netdev_err(qdev->ndev, "PCI error occurred when reading " 2181 + "anonymous buffers from rx_ring %d.\n", 2182 + ib_ae_rsp->q_id); 2190 2183 ql_queue_asic_error(qdev); 2191 2184 break; 2192 2185 ··· 2438 2437 */ 2439 2438 if (var & STS_FE) { 2440 2439 ql_queue_asic_error(qdev); 2441 - netif_err(qdev, intr, qdev->ndev, 2442 - "Got fatal error, STS = %x.\n", var); 2440 + netdev_err(qdev->ndev, "Got fatal error, STS = %x.\n", var); 2443 2441 var = ql_read32(qdev, ERR_STS); 2444 - netif_err(qdev, intr, qdev->ndev, 2445 - "Resetting chip. Error Status Register = 0x%x\n", var); 2442 + netdev_err(qdev->ndev, "Resetting chip. " 2443 + "Error Status Register = 0x%x\n", var); 2446 2444 return IRQ_HANDLED; 2447 2445 } 2448 2446 ··· 3818 3818 end_jiffies = jiffies + 3819 3819 max((unsigned long)1, usecs_to_jiffies(30)); 3820 3820 3821 - /* Stop management traffic. */ 3822 - ql_mb_set_mgmnt_traffic_ctl(qdev, MB_SET_MPI_TFK_STOP); 3821 + /* Check if bit is set then skip the mailbox command and 3822 + * clear the bit, else we are in normal reset process. 3823 + */ 3824 + if (!test_bit(QL_ASIC_RECOVERY, &qdev->flags)) { 3825 + /* Stop management traffic. */ 3826 + ql_mb_set_mgmnt_traffic_ctl(qdev, MB_SET_MPI_TFK_STOP); 3823 3827 3824 - /* Wait for the NIC and MGMNT FIFOs to empty. */ 3825 - ql_wait_fifo_empty(qdev); 3828 + /* Wait for the NIC and MGMNT FIFOs to empty. */ 3829 + ql_wait_fifo_empty(qdev); 3830 + } else 3831 + clear_bit(QL_ASIC_RECOVERY, &qdev->flags); 3826 3832 3827 3833 ql_write32(qdev, RST_FO, (RST_FO_FR << 16) | RST_FO_FR); 3828 3834
+1 -1
drivers/net/r8169.c
··· 753 753 msleep(2); 754 754 for (i = 0; i < 5; i++) { 755 755 udelay(100); 756 - if (!(RTL_R32(ERIDR) & ERIAR_FLAG)) 756 + if (!(RTL_R32(ERIAR) & ERIAR_FLAG)) 757 757 break; 758 758 } 759 759
+15 -13
drivers/net/rionet.c
··· 378 378 379 379 static void rionet_remove(struct rio_dev *rdev) 380 380 { 381 - struct net_device *ndev = NULL; 381 + struct net_device *ndev = rio_get_drvdata(rdev); 382 382 struct rionet_peer *peer, *tmp; 383 383 384 384 free_pages((unsigned long)rionet_active, rdev->net->hport->sys_size ? ··· 433 433 .ndo_set_mac_address = eth_mac_addr, 434 434 }; 435 435 436 - static int rionet_setup_netdev(struct rio_mport *mport) 436 + static int rionet_setup_netdev(struct rio_mport *mport, struct net_device *ndev) 437 437 { 438 438 int rc = 0; 439 - struct net_device *ndev = NULL; 440 439 struct rionet_private *rnet; 441 440 u16 device_id; 442 - 443 - /* Allocate our net_device structure */ 444 - ndev = alloc_etherdev(sizeof(struct rionet_private)); 445 - if (ndev == NULL) { 446 - printk(KERN_INFO "%s: could not allocate ethernet device.\n", 447 - DRV_NAME); 448 - rc = -ENOMEM; 449 - goto out; 450 - } 451 441 452 442 rionet_active = (struct rio_dev **)__get_free_pages(GFP_KERNEL, 453 443 mport->sys_size ? __fls(sizeof(void *)) + 4 : 0); ··· 494 504 int rc = -ENODEV; 495 505 u32 lpef, lsrc_ops, ldst_ops; 496 506 struct rionet_peer *peer; 507 + struct net_device *ndev = NULL; 497 508 498 509 /* If local device is not rionet capable, give up quickly */ 499 510 if (!rionet_capable) 500 511 goto out; 512 + 513 + /* Allocate our net_device structure */ 514 + ndev = alloc_etherdev(sizeof(struct rionet_private)); 515 + if (ndev == NULL) { 516 + printk(KERN_INFO "%s: could not allocate ethernet device.\n", 517 + DRV_NAME); 518 + rc = -ENOMEM; 519 + goto out; 520 + } 501 521 502 522 /* 503 523 * First time through, make sure local device is rionet ··· 529 529 goto out; 530 530 } 531 531 532 - rc = rionet_setup_netdev(rdev->net->hport); 532 + rc = rionet_setup_netdev(rdev->net->hport, ndev); 533 533 rionet_check = 1; 534 534 } 535 535 ··· 545 545 peer->rdev = rdev; 546 546 list_add_tail(&peer->node, &rionet_peers); 547 547 } 548 + 549 + rio_set_drvdata(rdev, ndev); 548 550 549 551 out: 550 552 return rc;
+25 -17
drivers/net/usb/kalmia.c
··· 100 100 static int 101 101 kalmia_init_and_get_ethernet_addr(struct usbnet *dev, u8 *ethernet_addr) 102 102 { 103 - char init_msg_1[] = 103 + const static char init_msg_1[] = 104 104 { 0x57, 0x50, 0x04, 0x00, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 105 105 0x00, 0x00 }; 106 - char init_msg_2[] = 106 + const static char init_msg_2[] = 107 107 { 0x57, 0x50, 0x04, 0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0xf4, 108 108 0x00, 0x00 }; 109 - char receive_buf[28]; 109 + const static int buflen = 28; 110 + char *usb_buf; 110 111 int status; 111 112 112 - status = kalmia_send_init_packet(dev, init_msg_1, sizeof(init_msg_1) 113 - / sizeof(init_msg_1[0]), receive_buf, 24); 113 + usb_buf = kmalloc(buflen, GFP_DMA | GFP_KERNEL); 114 + if (!usb_buf) 115 + return -ENOMEM; 116 + 117 + memcpy(usb_buf, init_msg_1, 12); 118 + status = kalmia_send_init_packet(dev, usb_buf, sizeof(init_msg_1) 119 + / sizeof(init_msg_1[0]), usb_buf, 24); 114 120 if (status != 0) 115 121 return status; 116 122 117 - status = kalmia_send_init_packet(dev, init_msg_2, sizeof(init_msg_2) 118 - / sizeof(init_msg_2[0]), receive_buf, 28); 123 + memcpy(usb_buf, init_msg_2, 12); 124 + status = kalmia_send_init_packet(dev, usb_buf, sizeof(init_msg_2) 125 + / sizeof(init_msg_2[0]), usb_buf, 28); 119 126 if (status != 0) 120 127 return status; 121 128 122 - memcpy(ethernet_addr, receive_buf + 10, ETH_ALEN); 129 + memcpy(ethernet_addr, usb_buf + 10, ETH_ALEN); 123 130 131 + kfree(usb_buf); 124 132 return status; 125 133 } 126 134 127 135 static int 128 136 kalmia_bind(struct usbnet *dev, struct usb_interface *intf) 129 137 { 130 - u8 status; 138 + int status; 131 139 u8 ethernet_addr[ETH_ALEN]; 132 140 133 141 /* Don't bind to AT command interface */ ··· 198 190 dev_kfree_skb_any(skb); 199 191 skb = skb2; 200 192 201 - done: header_start = skb_push(skb, KALMIA_HEADER_LENGTH); 193 + done: 194 + header_start = skb_push(skb, KALMIA_HEADER_LENGTH); 202 195 ether_type_1 = header_start[KALMIA_HEADER_LENGTH + 12]; 203 196 ether_type_2 = header_start[KALMIA_HEADER_LENGTH + 13]; 204 197 ··· 210 201 header_start[0] = 0x57; 211 202 header_start[1] = 0x44; 212 203 content_len = skb->len - KALMIA_HEADER_LENGTH; 213 - header_start[2] = (content_len & 0xff); /* low byte */ 214 - header_start[3] = (content_len >> 8); /* high byte */ 215 204 205 + put_unaligned_le16(content_len, &header_start[2]); 216 206 header_start[4] = ether_type_1; 217 207 header_start[5] = ether_type_2; 218 208 ··· 239 231 * Our task here is to strip off framing, leaving skb with one 240 232 * data frame for the usbnet framework code to process. 241 233 */ 242 - const u8 HEADER_END_OF_USB_PACKET[] = 234 + const static u8 HEADER_END_OF_USB_PACKET[] = 243 235 { 0x57, 0x5a, 0x00, 0x00, 0x08, 0x00 }; 244 - const u8 EXPECTED_UNKNOWN_HEADER_1[] = 236 + const static u8 EXPECTED_UNKNOWN_HEADER_1[] = 245 237 { 0x57, 0x43, 0x1e, 0x00, 0x15, 0x02 }; 246 - const u8 EXPECTED_UNKNOWN_HEADER_2[] = 238 + const static u8 EXPECTED_UNKNOWN_HEADER_2[] = 247 239 { 0x57, 0x50, 0x0e, 0x00, 0x00, 0x00 }; 248 - u8 i = 0; 240 + int i = 0; 249 241 250 242 /* incomplete header? */ 251 243 if (skb->len < KALMIA_HEADER_LENGTH) ··· 293 285 294 286 /* subtract start header and end header */ 295 287 usb_packet_length = skb->len - (2 * KALMIA_HEADER_LENGTH); 296 - ether_packet_length = header_start[2] + (header_start[3] << 8); 288 + ether_packet_length = get_unaligned_le16(&header_start[2]); 297 289 skb_pull(skb, KALMIA_HEADER_LENGTH); 298 290 299 291 /* Some small packets misses end marker */
-10
drivers/net/usb/zaurus.c
···
 	ZAURUS_MASTER_INTERFACE,
 	.driver_info = ZAURUS_PXA_INFO,
 },
-
-
-/* At least some of the newest PXA units have very different lies about
- * their standards support:  they claim to be cell phones offering
- * direct access to their radios!  (No, they don't conform to CDC MDLM.)
- */
 {
-	USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_MDLM,
-			USB_CDC_PROTO_NONE),
-	.driver_info = (unsigned long) &bogus_mdlm_info,
-}, {
 	/* Motorola MOTOMAGX phones */
 	USB_DEVICE_AND_INTERFACE_INFO(0x22b8, 0x6425, USB_CLASS_COMM,
 			USB_CDC_SUBCLASS_MDLM, USB_CDC_PROTO_NONE),
+92 -39
drivers/net/vmxnet3/vmxnet3_drv.c
···
 	struct vmxnet3_cmd_ring *ring = &rq->rx_ring[ring_idx];
 	u32 val;

-	while (num_allocated < num_to_alloc) {
+	while (num_allocated <= num_to_alloc) {
 		struct vmxnet3_rx_buf_info *rbi;
 		union Vmxnet3_GenericDesc *gd;
···
 		BUG_ON(rbi->dma_addr == 0);
 		gd->rxd.addr = cpu_to_le64(rbi->dma_addr);
-		gd->dword[2] = cpu_to_le32((ring->gen << VMXNET3_RXD_GEN_SHIFT)
+		gd->dword[2] = cpu_to_le32((!ring->gen << VMXNET3_RXD_GEN_SHIFT)
 				| val | rbi->len);

+		/* Fill the last buffer but dont mark it ready, or else the
+		 * device will think that the queue is full */
+		if (num_allocated == num_to_alloc)
+			break;
+
+		gd->dword[2] |= cpu_to_le32(ring->gen << VMXNET3_RXD_GEN_SHIFT);
 		num_allocated++;
 		vmxnet3_cmd_ring_adv_next2fill(ring);
 	}
···
 		VMXNET3_REG_RXPROD, VMXNET3_REG_RXPROD2
 	};
 	u32 num_rxd = 0;
+	bool skip_page_frags = false;
 	struct Vmxnet3_RxCompDesc *rcd;
 	struct vmxnet3_rx_ctx *ctx = &rq->rx_ctx;
 #ifdef __BIG_ENDIAN_BITFIELD
···
 			&rxComp);
 	while (rcd->gen == rq->comp_ring.gen) {
 		struct vmxnet3_rx_buf_info *rbi;
-		struct sk_buff *skb;
+		struct sk_buff *skb, *new_skb = NULL;
+		struct page *new_page = NULL;
 		int num_to_alloc;
 		struct Vmxnet3_RxDesc *rxd;
 		u32 idx, ring_idx;
-
+		struct vmxnet3_cmd_ring	*ring = NULL;
 		if (num_rxd >= quota) {
 			/* we may stop even before we see the EOP desc of
 			 * the current pkt
···
 		BUG_ON(rcd->rqID != rq->qid && rcd->rqID != rq->qid2);
 		idx = rcd->rxdIdx;
 		ring_idx = rcd->rqID < adapter->num_rx_queues ? 0 : 1;
+		ring = rq->rx_ring + ring_idx;
 		vmxnet3_getRxDesc(rxd, &rq->rx_ring[ring_idx].base[idx].rxd,
 				  &rxCmdDesc);
 		rbi = rq->buf_info[ring_idx] + idx;
···
 				goto rcd_done;
 			}

+			skip_page_frags = false;
 			ctx->skb = rbi->skb;
-			rbi->skb = NULL;
+			new_skb = dev_alloc_skb(rbi->len + NET_IP_ALIGN);
+			if (new_skb == NULL) {
+				/* Skb allocation failed, do not handover this
+				 * skb to stack. Reuse it. Drop the existing pkt
+				 */
+				rq->stats.rx_buf_alloc_failure++;
+				ctx->skb = NULL;
+				rq->stats.drop_total++;
+				skip_page_frags = true;
+				goto rcd_done;
+			}

 			pci_unmap_single(adapter->pdev, rbi->dma_addr, rbi->len,
 					 PCI_DMA_FROMDEVICE);

 			skb_put(ctx->skb, rcd->len);
+
+			/* Immediate refill */
+			new_skb->dev = adapter->netdev;
+			skb_reserve(new_skb, NET_IP_ALIGN);
+			rbi->skb = new_skb;
+			rbi->dma_addr = pci_map_single(adapter->pdev,
+					rbi->skb->data, rbi->len,
+					PCI_DMA_FROMDEVICE);
+			rxd->addr = cpu_to_le64(rbi->dma_addr);
+			rxd->len = rbi->len;
+
 		} else {
-			BUG_ON(ctx->skb == NULL);
+			BUG_ON(ctx->skb == NULL && !skip_page_frags);
+
 			/* non SOP buffer must be type 1 in most cases */
-			if (rbi->buf_type == VMXNET3_RX_BUF_PAGE) {
-				BUG_ON(rxd->btype != VMXNET3_RXD_BTYPE_BODY);
+			BUG_ON(rbi->buf_type != VMXNET3_RX_BUF_PAGE);
+			BUG_ON(rxd->btype != VMXNET3_RXD_BTYPE_BODY);

-				if (rcd->len) {
-					pci_unmap_page(adapter->pdev,
-						       rbi->dma_addr, rbi->len,
-						       PCI_DMA_FROMDEVICE);
+			/* If an sop buffer was dropped, skip all
+			 * following non-sop fragments. They will be reused.
+			 */
+			if (skip_page_frags)
+				goto rcd_done;

-					vmxnet3_append_frag(ctx->skb, rcd, rbi);
-					rbi->page = NULL;
-				}
-			} else {
-				/*
-				 * The only time a non-SOP buffer is type 0 is
-				 * when it's EOP and error flag is raised, which
-				 * has already been handled.
+			new_page = alloc_page(GFP_ATOMIC);
+			if (unlikely(new_page == NULL)) {
+				/* Replacement page frag could not be allocated.
+				 * Reuse this page. Drop the pkt and free the
+				 * skb which contained this page as a frag. Skip
+				 * processing all the following non-sop frags.
 				 */
-				BUG_ON(true);
+				rq->stats.rx_buf_alloc_failure++;
+				dev_kfree_skb(ctx->skb);
+				ctx->skb = NULL;
+				skip_page_frags = true;
+				goto rcd_done;
 			}
+
+			if (rcd->len) {
+				pci_unmap_page(adapter->pdev,
+					       rbi->dma_addr, rbi->len,
+					       PCI_DMA_FROMDEVICE);
+
+				vmxnet3_append_frag(ctx->skb, rcd, rbi);
+			}
+
+			/* Immediate refill */
+			rbi->page = new_page;
+			rbi->dma_addr = pci_map_page(adapter->pdev, rbi->page,
+						     0, PAGE_SIZE,
+						     PCI_DMA_FROMDEVICE);
+			rxd->addr = cpu_to_le64(rbi->dma_addr);
+			rxd->len = rbi->len;
 		}
+

 		skb = ctx->skb;
 		if (rcd->eop) {
···
 		}

rcd_done:
-		/* device may skip some rx descs */
-		rq->rx_ring[ring_idx].next2comp = idx;
-		VMXNET3_INC_RING_IDX_ONLY(rq->rx_ring[ring_idx].next2comp,
-					  rq->rx_ring[ring_idx].size);
+		/* device may have skipped some rx descs */
+		ring->next2comp = idx;
+		num_to_alloc = vmxnet3_cmd_ring_desc_avail(ring);
+		ring = rq->rx_ring + ring_idx;
+		while (num_to_alloc) {
+			vmxnet3_getRxDesc(rxd, &ring->base[ring->next2fill].rxd,
+					  &rxCmdDesc);
+			BUG_ON(!rxd->addr);

-		/* refill rx buffers frequently to avoid starving the h/w */
-		num_to_alloc = vmxnet3_cmd_ring_desc_avail(rq->rx_ring +
-							   ring_idx);
-		if (unlikely(num_to_alloc > VMXNET3_RX_ALLOC_THRESHOLD(rq,
-							ring_idx, adapter))) {
-			vmxnet3_rq_alloc_rx_buf(rq, ring_idx, num_to_alloc,
-						adapter);
+			/* Recv desc is ready to be used by the device */
+			rxd->gen = ring->gen;
+			vmxnet3_cmd_ring_adv_next2fill(ring);
+			num_to_alloc--;
+		}

-			/* if needed, update the register */
-			if (unlikely(rq->shared->updateRxProd)) {
-				VMXNET3_WRITE_BAR0_REG(adapter,
-					rxprod_reg[ring_idx] + rq->qid * 8,
-					rq->rx_ring[ring_idx].next2fill);
-				rq->uncommitted[ring_idx] = 0;
-			}
+		/* if needed, update the register */
+		if (unlikely(rq->shared->updateRxProd)) {
+			VMXNET3_WRITE_BAR0_REG(adapter,
+				rxprod_reg[ring_idx] + rq->qid * 8,
+				ring->next2fill);
+			rq->uncommitted[ring_idx] = 0;
 		}

 		vmxnet3_comp_ring_adv_next2proc(&rq->comp_ring);
+2 -2
drivers/net/vmxnet3/vmxnet3_int.h
···
 /*
  * Version numbers
  */
-#define VMXNET3_DRIVER_VERSION_STRING   "1.1.9.0-k"
+#define VMXNET3_DRIVER_VERSION_STRING   "1.1.14.0-k"

 /* a 32-bit int, each byte encode a verion number in VMXNET3_DRIVER_VERSION */
-#define VMXNET3_DRIVER_VERSION_NUM      0x01010900
+#define VMXNET3_DRIVER_VERSION_NUM      0x01010E00

 #if defined(CONFIG_PCI_MSI)
 /* RSS only makes sense if MSI-X is supported. */
+3 -5
drivers/net/wireless/ath/ath5k/eeprom.c
···
 		if (!chinfo[pier].pd_curves)
 			continue;

-		for (pdg = 0; pdg < ee->ee_pd_gains[mode]; pdg++) {
+		for (pdg = 0; pdg < AR5K_EEPROM_N_PD_CURVES; pdg++) {
 			struct ath5k_pdgain_info *pd =
 					&chinfo[pier].pd_curves[pdg];

-			if (pd != NULL) {
-				kfree(pd->pd_step);
-				kfree(pd->pd_pwr);
-			}
+			kfree(pd->pd_step);
+			kfree(pd->pd_pwr);
 		}

 		kfree(chinfo[pier].pd_curves);
+6
drivers/net/wireless/ath/ath9k/pci.c
···
 	ath9k_hw_set_gpio(sc->sc_ah, sc->sc_ah->led_pin, 1);

+	/* The device has to be moved to FULLSLEEP forcibly.
+	 * Otherwise the chip never moved to full sleep,
+	 * when no interface is up.
+	 */
+	ath9k_hw_setpower(sc->sc_ah, ATH9K_PM_FULL_SLEEP);
+
 	return 0;
 }
+3 -1
drivers/pci/pci-driver.c
···
 	 * system from the sleep state, we'll have to prevent it from signaling
 	 * wake-up.
 	 */
-	pm_runtime_resume(dev);
+	pm_runtime_get_sync(dev);

 	if (drv && drv->pm && drv->pm->prepare)
 		error = drv->pm->prepare(dev);
···
 	if (drv && drv->pm && drv->pm->complete)
 		drv->pm->complete(dev);
+
+	pm_runtime_put_sync(dev);
 }

 #else /* !CONFIG_PM_SLEEP */
+1 -1
drivers/pci/pci.c
···
  * @dev: the PCI device
  * @decode: true = enable decoding, false = disable decoding
  * @command_bits: PCI_COMMAND_IO and/or PCI_COMMAND_MEMORY
- * @change_bridge_flags: traverse ancestors and change bridges
+ * @flags: traverse ancestors and change bridges
  * CHANGE_BRIDGE_ONLY / CHANGE_BRIDGE
  */
 int pci_set_vga_state(struct pci_dev *dev, bool decode,
+1 -1
drivers/pci/probe.c
···
 	res->flags |= pci_calc_resource_flags(l) | IORESOURCE_SIZEALIGN;
 	if (type == pci_bar_io) {
 		l &= PCI_BASE_ADDRESS_IO_MASK;
-		mask = PCI_BASE_ADDRESS_IO_MASK & IO_SPACE_LIMIT;
+		mask = PCI_BASE_ADDRESS_IO_MASK & (u32) IO_SPACE_LIMIT;
 	} else {
 		l &= PCI_BASE_ADDRESS_MEM_MASK;
 		mask = (u32)PCI_BASE_ADDRESS_MEM_MASK;
+2
drivers/pci/quirks.c
···
 }
 DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_RICOH, PCI_DEVICE_ID_RICOH_R5C832, ricoh_mmc_fixup_r5c832);
 DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_RICOH, PCI_DEVICE_ID_RICOH_R5C832, ricoh_mmc_fixup_r5c832);
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_RICOH, PCI_DEVICE_ID_RICOH_R5CE823, ricoh_mmc_fixup_r5c832);
+DECLARE_PCI_FIXUP_RESUME_EARLY(PCI_VENDOR_ID_RICOH, PCI_DEVICE_ID_RICOH_R5CE823, ricoh_mmc_fixup_r5c832);
 #endif /*CONFIG_MMC_RICOH_MMC*/

 #if defined(CONFIG_DMAR) || defined(CONFIG_INTR_REMAP)
+1
drivers/rtc/rtc-ds1307.c
···
 	{ "ds1340", ds_1340 },
 	{ "ds3231", ds_3231 },
 	{ "m41t00", m41t00 },
+	{ "pt7c4338", ds_1307 },
 	{ "rx8025", rx_8025 },
 	{ }
 };
+3 -42
drivers/rtc/rtc-vt8500.c
···
 	void __iomem		*regbase;
 	struct resource		*res;
 	int			irq_alarm;
-	int			irq_hz;
 	struct rtc_device	*rtc;
 	spinlock_t		lock;		/* Protects this structure */
 };
···
 	if (isr & 1)
 		events |= RTC_AF | RTC_IRQF;
-
-	/* Only second/minute interrupts are supported */
-	if (isr & 2)
-		events |= RTC_UF | RTC_IRQF;

 	rtc_update_irq(vt8500_rtc->rtc, 1, events);
···
 	return 0;
 }

-static int vt8500_update_irq_enable(struct device *dev, unsigned int enabled)
-{
-	struct vt8500_rtc *vt8500_rtc = dev_get_drvdata(dev);
-	unsigned long tmp = readl(vt8500_rtc->regbase + VT8500_RTC_CR);
-
-	if (enabled)
-		tmp |= VT8500_RTC_CR_SM_SEC | VT8500_RTC_CR_SM_ENABLE;
-	else
-		tmp &= ~VT8500_RTC_CR_SM_ENABLE;
-
-	writel(tmp, vt8500_rtc->regbase + VT8500_RTC_CR);
-	return 0;
-}
-
 static const struct rtc_class_ops vt8500_rtc_ops = {
 	.read_time = vt8500_rtc_read_time,
 	.set_time = vt8500_rtc_set_time,
 	.read_alarm = vt8500_rtc_read_alarm,
 	.set_alarm = vt8500_rtc_set_alarm,
 	.alarm_irq_enable = vt8500_alarm_irq_enable,
-	.update_irq_enable = vt8500_update_irq_enable,
 };

 static int __devinit vt8500_rtc_probe(struct platform_device *pdev)
···
 		goto err_free;
 	}

-	vt8500_rtc->irq_hz = platform_get_irq(pdev, 1);
-	if (vt8500_rtc->irq_hz < 0) {
-		dev_err(&pdev->dev, "No 1Hz IRQ resource defined\n");
-		ret = -ENXIO;
-		goto err_free;
-	}
-
 	vt8500_rtc->res = request_mem_region(vt8500_rtc->res->start,
 					     resource_size(vt8500_rtc->res),
 					     "vt8500-rtc");
···
 		goto err_release;
 	}

-	/* Enable the second/minute interrupt generation and enable RTC */
-	writel(VT8500_RTC_CR_ENABLE | VT8500_RTC_CR_24H
-		| VT8500_RTC_CR_SM_ENABLE | VT8500_RTC_CR_SM_SEC,
+	/* Enable RTC and set it to 24-hour mode */
+	writel(VT8500_RTC_CR_ENABLE | VT8500_RTC_CR_24H,
 	       vt8500_rtc->regbase + VT8500_RTC_CR);

 	vt8500_rtc->rtc = rtc_device_register("vt8500-rtc", &pdev->dev,
···
 		goto err_unmap;
 	}

-	ret = request_irq(vt8500_rtc->irq_hz, vt8500_rtc_irq, 0,
-			  "rtc 1Hz", vt8500_rtc);
-	if (ret < 0) {
-		dev_err(&pdev->dev, "can't get irq %i, err %d\n",
-			vt8500_rtc->irq_hz, ret);
-		goto err_unreg;
-	}
-
 	ret = request_irq(vt8500_rtc->irq_alarm, vt8500_rtc_irq, 0,
 			  "rtc alarm", vt8500_rtc);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "can't get irq %i, err %d\n",
 			vt8500_rtc->irq_alarm, ret);
-		goto err_free_hz;
+		goto err_unreg;
 	}

 	return 0;

-err_free_hz:
-	free_irq(vt8500_rtc->irq_hz, vt8500_rtc);
 err_unreg:
 	rtc_device_unregister(vt8500_rtc->rtc);
 err_unmap:
···
 	struct vt8500_rtc *vt8500_rtc = platform_get_drvdata(pdev);

 	free_irq(vt8500_rtc->irq_alarm, vt8500_rtc);
-	free_irq(vt8500_rtc->irq_hz, vt8500_rtc);

 	rtc_device_unregister(vt8500_rtc->rtc);
+2
drivers/staging/brcm80211/Kconfig
···
 	default n
 	depends on PCI
 	depends on WLAN && MAC80211
+	depends on X86 || MIPS
 	select BRCMUTIL
 	select FW_LOADER
 	select CRC_CCITT
···
 	default n
 	depends on MMC
 	depends on WLAN && CFG80211
+	depends on X86 || MIPS
 	select BRCMUTIL
 	select FW_LOADER
 	select WIRELESS_EXT
+22
drivers/staging/comedi/Kconfig
···
 	tristate "Data acquisition support (comedi)"
 	default N
 	depends on m
+	depends on BROKEN || FRV || M32R || MN10300 || SUPERH || TILE || X86
 	---help---
 	  Enable support a wide range of data acquisition devices
 	  for Linux.
···
 config COMEDI_PCL812
 	tristate "Advantech PCL-812/813 and ADlink ACL-8112/8113/8113/8216"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for Advantech PCL-812/PG, PCL-813/B, ADLink
···
 config COMEDI_PCL816
 	tristate "Advantech PCL-814 and PCL-816 ISA card support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for Advantech PCL-814 and PCL-816 ISA cards
···
 config COMEDI_PCL818
 	tristate "Advantech PCL-718 and PCL-818 ISA card support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for Advantech PCL-818 ISA cards
···
 config COMEDI_DAS1800
 	tristate "DAS1800 and compatible ISA card support"
+	depends on VIRT_TO_BUS
 	select COMEDI_FC
 	default N
 	---help---
···
 config COMEDI_DT282X
 	tristate "Data Translation DT2821 series and DT-EZ ISA card support"
 	select COMEDI_FC
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for Data Translation DT2821 series including DT-EZ
···
 config COMEDI_NI_AT_A2150
 	tristate "NI AT-A2150 ISA card support"
 	depends on COMEDI_NI_COMMON
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for National Instruments AT-A2150 cards
···
 config COMEDI_ADDI_APCI_035
 	tristate "ADDI-DATA APCI_035 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_035 cards
···
 config COMEDI_ADDI_APCI_1032
 	tristate "ADDI-DATA APCI_1032 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_1032 cards
···
 config COMEDI_ADDI_APCI_1500
 	tristate "ADDI-DATA APCI_1500 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_1500 cards
···
 config COMEDI_ADDI_APCI_1516
 	tristate "ADDI-DATA APCI_1516 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_1516 cards
···
 config COMEDI_ADDI_APCI_1564
 	tristate "ADDI-DATA APCI_1564 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_1564 cards
···
 config COMEDI_ADDI_APCI_16XX
 	tristate "ADDI-DATA APCI_16xx support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_16xx cards
···
 config COMEDI_ADDI_APCI_2016
 	tristate "ADDI-DATA APCI_2016 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_2016 cards
···
 config COMEDI_ADDI_APCI_2032
 	tristate "ADDI-DATA APCI_2032 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_2032 cards
···
 config COMEDI_ADDI_APCI_2200
 	tristate "ADDI-DATA APCI_2200 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_2200 cards
···
 config COMEDI_ADDI_APCI_3001
 	tristate "ADDI-DATA APCI_3001 support"
+	depends on VIRT_TO_BUS
 	select COMEDI_FC
 	default N
 	---help---
···
 config COMEDI_ADDI_APCI_3120
 	tristate "ADDI-DATA APCI_3520 support"
+	depends on VIRT_TO_BUS
 	select COMEDI_FC
 	default N
 	---help---
···
 config COMEDI_ADDI_APCI_3501
 	tristate "ADDI-DATA APCI_3501 support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_3501 cards
···
 config COMEDI_ADDI_APCI_3XXX
 	tristate "ADDI-DATA APCI_3xxx support"
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADDI-DATA APCI_3xxx cards
···
 config COMEDI_ADL_PCI9118
 	tristate "ADLink PCI-9118DG, PCI-9118HG, PCI-9118HR support"
 	select COMEDI_FC
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for ADlink PCI-9118DG, PCI-9118HG, PCI-9118HR cards
···
 	depends on COMEDI_MITE
 	select COMEDI_8255
 	select COMEDI_FC
+	depends on VIRT_TO_BUS
 	default N
 	---help---
 	  Enable support for National Instruments Lab-PC and compatibles
+1 -1
drivers/staging/iio/Kconfig
···
 menuconfig IIO
 	tristate "Industrial I/O support"
-	depends on !S390
+	depends on GENERIC_HARDIRQS
 	help
 	  The industrial I/O subsystem provides a unified framework for
 	  drivers for many different types of embedded sensors using a
+1 -1
drivers/staging/iio/accel/adis16204.h
···
 int adis16204_set_irq(struct iio_dev *indio_dev, bool enable);

-#ifdef CONFIG_IIO_RING_BUFFER
 enum adis16204_scan {
 	ADIS16204_SCAN_SUPPLY,
 	ADIS16204_SCAN_ACC_X,
···
 	ADIS16204_SCAN_TEMP,
 };

+#ifdef CONFIG_IIO_RING_BUFFER
 void adis16204_remove_trigger(struct iio_dev *indio_dev);
 int adis16204_probe_trigger(struct iio_dev *indio_dev);
+2 -2
drivers/staging/iio/accel/adis16209.h
···
 int adis16209_set_irq(struct iio_dev *indio_dev, bool enable);

-#ifdef CONFIG_IIO_RING_BUFFER
-
 #define ADIS16209_SCAN_SUPPLY	0
 #define ADIS16209_SCAN_ACC_X	1
 #define ADIS16209_SCAN_ACC_Y	2
···
 #define ADIS16209_SCAN_INCLI_X	5
 #define ADIS16209_SCAN_INCLI_Y	6
 #define ADIS16209_SCAN_ROT	7
+
+#ifdef CONFIG_IIO_RING_BUFFER

 void adis16209_remove_trigger(struct iio_dev *indio_dev);
 int adis16209_probe_trigger(struct iio_dev *indio_dev);
+1 -1
drivers/staging/iio/gyro/adis16260.h
···
 int adis16260_set_irq(struct iio_dev *indio_dev, bool enable);

-#ifdef CONFIG_IIO_RING_BUFFER
 /* At the moment triggers are only used for ring buffer
  * filling. This may change!
  */
···
 #define ADIS16260_SCAN_TEMP	3
 #define ADIS16260_SCAN_ANGL	4

+#ifdef CONFIG_IIO_RING_BUFFER
 void adis16260_remove_trigger(struct iio_dev *indio_dev);
 int adis16260_probe_trigger(struct iio_dev *indio_dev);
+1 -1
drivers/staging/iio/imu/adis16400.h
···
 int adis16400_set_irq(struct iio_dev *indio_dev, bool enable);

-#ifdef CONFIG_IIO_RING_BUFFER
 /* At the moment triggers are only used for ring buffer
  * filling. This may change!
  */
···
 #define ADIS16300_SCAN_INCLI_X	12
 #define ADIS16300_SCAN_INCLI_Y	13

+#ifdef CONFIG_IIO_RING_BUFFER
 void adis16400_remove_trigger(struct iio_dev *indio_dev);
 int adis16400_probe_trigger(struct iio_dev *indio_dev);
+1 -1
drivers/staging/mei/init.c
···
 		mutex_lock(&dev->device_lock);
 	}

-	if (!err && !dev->recvd_msg) {
+	if (err <= 0 && !dev->recvd_msg) {
 		dev->mei_state = MEI_DISABLED;
 		dev_dbg(&dev->pdev->dev,
 			"wait_event_interruptible_timeout failed"
+9 -4
drivers/staging/mei/wd.c
···
 	ret = wait_event_interruptible_timeout(dev->wait_stop_wd,
 					dev->wd_stopped, 10 * HZ);
 	mutex_lock(&dev->device_lock);
-	if (!dev->wd_stopped)
-		dev_dbg(&dev->pdev->dev, "stop wd failed to complete.\n");
-	else
-		dev_dbg(&dev->pdev->dev, "stop wd complete.\n");
+	if (dev->wd_stopped) {
+		dev_dbg(&dev->pdev->dev, "stop wd complete ret=%d.\n", ret);
+		ret = 0;
+	} else {
+		if (!ret)
+			ret = -ETIMEDOUT;
+		dev_warn(&dev->pdev->dev,
+			"stop wd failed to complete ret=%d.\n", ret);
+	}

 	if (preserve)
 		dev->wd_timeout = wd_timeout;
+8 -5
drivers/target/loopback/tcm_loop.c
···
 	 */
 	se_cmd->se_tmr_req = core_tmr_alloc_req(se_cmd, (void *)tl_tmr,
 				TMR_LUN_RESET);
-	if (!se_cmd->se_tmr_req)
+	if (IS_ERR(se_cmd->se_tmr_req))
 		goto release;
 	/*
 	 * Locate the underlying TCM struct se_lun from sc->device->lun
···
 	struct se_portal_group *se_tpg;
 	struct tcm_loop_hba *tl_hba = tl_tpg->tl_hba;
 	struct tcm_loop_nexus *tl_nexus;
+	int ret = -ENOMEM;

 	if (tl_tpg->tl_hba->tl_nexus) {
 		printk(KERN_INFO "tl_tpg->tl_hba->tl_nexus already exists\n");
···
 	 * Initialize the struct se_session pointer
 	 */
 	tl_nexus->se_sess = transport_init_session();
-	if (!tl_nexus->se_sess)
+	if (IS_ERR(tl_nexus->se_sess)) {
+		ret = PTR_ERR(tl_nexus->se_sess);
 		goto out;
+	}
 	/*
 	 * Since we are running in 'demo mode' this call with generate a
 	 * struct se_node_acl for the tcm_loop struct se_portal_group with the SCSI
···

 out:
 	kfree(tl_nexus);
-	return -ENOMEM;
+	return ret;
 }

 static int tcm_loop_drop_nexus(
···
 	 * the fabric protocol_id set in tcm_loop_make_scsi_hba(), and call
 	 * tcm_loop_make_nexus()
 	 */
-	if (strlen(page) > TL_WWN_ADDR_LEN) {
+	if (strlen(page) >= TL_WWN_ADDR_LEN) {
 		printk(KERN_ERR "Emulated NAA Sas Address: %s, exceeds"
 			" max: %d\n", page, TL_WWN_ADDR_LEN);
 		return -EINVAL;
···
 		return ERR_PTR(-EINVAL);

 check_len:
-	if (strlen(name) > TL_WWN_ADDR_LEN) {
+	if (strlen(name) >= TL_WWN_ADDR_LEN) {
 		printk(KERN_ERR "Emulated NAA %s Address: %s, exceeds"
 			" max: %d\n", name, tcm_loop_dump_proto_id(tl_hba),
 			TL_WWN_ADDR_LEN);
+12 -12
drivers/target/target_core_configfs.c
···
 		printk(KERN_ERR "Unable to locate passed fabric name\n");
 		return NULL;
 	}
-	if (strlen(name) > TARGET_FABRIC_NAME_SIZE) {
+	if (strlen(name) >= TARGET_FABRIC_NAME_SIZE) {
 		printk(KERN_ERR "Passed name: %s exceeds TARGET_FABRIC"
 			"_NAME_SIZE\n", name);
 		return NULL;
···

 	tf = kzalloc(sizeof(struct target_fabric_configfs), GFP_KERNEL);
 	if (!(tf))
-		return ERR_PTR(-ENOMEM);
+		return NULL;

 	INIT_LIST_HEAD(&tf->tf_list);
 	atomic_set(&tf->tf_access_cnt, 0);
···
 		return -EOPNOTSUPP;
 	}

-	if ((strlen(page) + 1) > INQUIRY_VPD_SERIAL_LEN) {
+	if (strlen(page) >= INQUIRY_VPD_SERIAL_LEN) {
 		printk(KERN_ERR "Emulated VPD Unit Serial exceeds"
 			" INQUIRY_VPD_SERIAL_LEN: %d\n", INQUIRY_VPD_SERIAL_LEN);
 		return -EOVERFLOW;
···

 		transport_dump_vpd_proto_id(vpd, buf, VPD_TMP_BUF_SIZE);

-		if ((len + strlen(buf) > PAGE_SIZE))
+		if ((len + strlen(buf) >= PAGE_SIZE))
 			break;

 		len += sprintf(page+len, "%s", buf);
···
 									\
 		memset(buf, 0, VPD_TMP_BUF_SIZE);			\
 		transport_dump_vpd_assoc(vpd, buf, VPD_TMP_BUF_SIZE);	\
-		if ((len + strlen(buf) > PAGE_SIZE))			\
+		if ((len + strlen(buf) >= PAGE_SIZE))			\
 			break;						\
 		len += sprintf(page+len, "%s", buf);			\
 									\
 		memset(buf, 0, VPD_TMP_BUF_SIZE);			\
 		transport_dump_vpd_ident_type(vpd, buf, VPD_TMP_BUF_SIZE); \
-		if ((len + strlen(buf) > PAGE_SIZE))			\
+		if ((len + strlen(buf) >= PAGE_SIZE))			\
 			break;						\
 		len += sprintf(page+len, "%s", buf);			\
 									\
 		memset(buf, 0, VPD_TMP_BUF_SIZE);			\
 		transport_dump_vpd_ident(vpd, buf, VPD_TMP_BUF_SIZE);	\
-		if ((len + strlen(buf) > PAGE_SIZE))			\
+		if ((len + strlen(buf) >= PAGE_SIZE))			\
 			break;						\
 		len += sprintf(page+len, "%s", buf);			\
 	}								\
···
 			&i_buf[0] : "", pr_reg->pr_res_key,
 			pr_reg->pr_res_generation);

-		if ((len + strlen(buf) > PAGE_SIZE))
+		if ((len + strlen(buf) >= PAGE_SIZE))
 			break;

 		len += sprintf(page+len, "%s", buf);
···
 			ret = -ENOMEM;
 			goto out;
 		}
-		if (strlen(i_port) > PR_APTPL_MAX_IPORT_LEN) {
+		if (strlen(i_port) >= PR_APTPL_MAX_IPORT_LEN) {
 			printk(KERN_ERR "APTPL metadata initiator_node="
 				" exceeds PR_APTPL_MAX_IPORT_LEN: %d\n",
 				PR_APTPL_MAX_IPORT_LEN);
···
 			ret = -ENOMEM;
 			goto out;
 		}
-		if (strlen(isid) > PR_REG_ISID_LEN) {
+		if (strlen(isid) >= PR_REG_ISID_LEN) {
 			printk(KERN_ERR "APTPL metadata initiator_isid"
 				"= exceeds PR_REG_ISID_LEN: %d\n",
 				PR_REG_ISID_LEN);
···
 			ret = -ENOMEM;
 			goto out;
 		}
-		if (strlen(t_port) > PR_APTPL_MAX_TPORT_LEN) {
+		if (strlen(t_port) >= PR_APTPL_MAX_TPORT_LEN) {
 			printk(KERN_ERR "APTPL metadata target_node="
 				" exceeds PR_APTPL_MAX_TPORT_LEN: %d\n",
 				PR_APTPL_MAX_TPORT_LEN);
···
 	int ret;

 	memset(buf, 0, TARGET_CORE_NAME_MAX_LEN);
-	if (strlen(name) > TARGET_CORE_NAME_MAX_LEN) {
+	if (strlen(name) >= TARGET_CORE_NAME_MAX_LEN) {
 		printk(KERN_ERR "Passed *name strlen(): %d exceeds"
 			" TARGET_CORE_NAME_MAX_LEN: %d\n", (int)strlen(name),
 			TARGET_CORE_NAME_MAX_LEN);
+3 -2
drivers/target/target_core_device.c
···
 		&SE_NODE_ACL(se_sess)->device_list[unpacked_lun];
 	if (deve->lun_flags & TRANSPORT_LUNFLAGS_INITIATOR_ACCESS) {
 		se_lun = se_cmd->se_lun = se_tmr->tmr_lun = deve->se_lun;
-		dev = se_tmr->tmr_dev = se_lun->lun_se_dev;
+		dev = se_lun->lun_se_dev;
 		se_cmd->pr_res_key = deve->pr_res_key;
 		se_cmd->orig_fe_lun = unpacked_lun;
 		se_cmd->se_orig_obj_ptr = SE_LUN(se_cmd)->lun_se_dev;
···
 		se_cmd->se_cmd_flags |= SCF_SCSI_CDB_EXCEPTION;
 		return -1;
 	}
+	se_tmr->tmr_dev = dev;

 	spin_lock(&dev->se_tmr_lock);
 	list_add_tail(&se_tmr->tmr_list, &dev->dev_tmr_list);
···
 	struct se_lun_acl *lacl;
 	struct se_node_acl *nacl;

-	if (strlen(initiatorname) > TRANSPORT_IQN_LEN) {
+	if (strlen(initiatorname) >= TRANSPORT_IQN_LEN) {
 		printk(KERN_ERR "%s InitiatorName exceeds maximum size.\n",
 			TPG_TFO(tpg)->get_fabric_name());
 		*ret = -EOVERFLOW;
+3 -3
drivers/target/target_core_pr.c
···
 				pr_reg->pr_res_mapped_lun);
 		}

-		if ((len + strlen(tmp) > pr_aptpl_buf_len)) {
+		if ((len + strlen(tmp) >= pr_aptpl_buf_len)) {
 			printk(KERN_ERR "Unable to update renaming"
 				" APTPL metadata\n");
 			spin_unlock(&T10_RES(su_dev)->registration_lock);
···
 			TPG_TFO(tpg)->tpg_get_tag(tpg),
 			lun->lun_sep->sep_rtpi, lun->unpacked_lun, reg_count);

-		if ((len + strlen(tmp) > pr_aptpl_buf_len)) {
+		if ((len + strlen(tmp) >= pr_aptpl_buf_len)) {
 			printk(KERN_ERR "Unable to update renaming"
 				" APTPL metadata\n");
 			spin_unlock(&T10_RES(su_dev)->registration_lock);
···
 	memset(iov, 0, sizeof(struct iovec));
 	memset(path, 0, 512);

-	if (strlen(&wwn->unit_serial[0]) > 512) {
+	if (strlen(&wwn->unit_serial[0]) >= 512) {
 		printk(KERN_ERR "WWN value for struct se_device does not fit"
 			" into path buffer\n");
 		return -1;
+7 -1
drivers/target/target_core_tmr.c
···
 {
 	struct se_device *dev = tmr->tmr_dev;

+	if (!dev) {
+		kmem_cache_free(se_tmr_req_cache, tmr);
+		return;
+	}
+
 	spin_lock(&dev->se_tmr_lock);
 	list_del(&tmr->tmr_list);
-	kmem_cache_free(se_tmr_req_cache, tmr);
 	spin_unlock(&dev->se_tmr_lock);
+
+	kmem_cache_free(se_tmr_req_cache, tmr);
 }

 static void core_tmr_handle_tas_abort(
+3 -3
drivers/target/target_core_transport.c
··· 536 536 void transport_deregister_session_configfs(struct se_session *se_sess) 537 537 { 538 538 struct se_node_acl *se_nacl; 539 - 539 + unsigned long flags; 540 540 /* 541 541 * Used by struct se_node_acl's under ConfigFS to locate active struct se_session 542 542 */ 543 543 se_nacl = se_sess->se_node_acl; 544 544 if ((se_nacl)) { 545 - spin_lock_irq(&se_nacl->nacl_sess_lock); 545 + spin_lock_irqsave(&se_nacl->nacl_sess_lock, flags); 546 546 list_del(&se_sess->sess_acl_list); 547 547 /* 548 548 * If the session list is empty, then clear the pointer. ··· 556 556 se_nacl->acl_sess_list.prev, 557 557 struct se_session, sess_acl_list); 558 558 } 559 - spin_unlock_irq(&se_nacl->nacl_sess_lock); 559 + spin_unlock_irqrestore(&se_nacl->nacl_sess_lock, flags); 560 560 } 561 561 } 562 562 EXPORT_SYMBOL(transport_deregister_session_configfs);
+1 -1
drivers/target/tcm_fc/tcm_fc.h
··· 144 144 */ 145 145 struct ft_cmd { 146 146 enum ft_cmd_state state; 147 - u16 lun; /* LUN from request */ 147 + u32 lun; /* LUN from request */ 148 148 struct ft_sess *sess; /* session held for cmd */ 149 149 struct fc_seq *seq; /* sequence in exchange mgr */ 150 150 struct se_cmd se_cmd; /* Local TCM I/O descriptor */
+33 -31
drivers/target/tcm_fc/tfc_cmd.c
··· 94 94 16, 4, cmd->cdb, MAX_COMMAND_SIZE, 0); 95 95 } 96 96 97 - /* 98 - * Get LUN from CDB. 99 - */ 100 - static int ft_get_lun_for_cmd(struct ft_cmd *cmd, u8 *lunp) 101 - { 102 - u64 lun; 103 - 104 - lun = lunp[1]; 105 - switch (lunp[0] >> 6) { 106 - case 0: 107 - break; 108 - case 1: 109 - lun |= (lunp[0] & 0x3f) << 8; 110 - break; 111 - default: 112 - return -1; 113 - } 114 - if (lun >= TRANSPORT_MAX_LUNS_PER_TPG) 115 - return -1; 116 - cmd->lun = lun; 117 - return transport_get_lun_for_cmd(&cmd->se_cmd, NULL, lun); 118 - } 119 - 120 97 static void ft_queue_cmd(struct ft_sess *sess, struct ft_cmd *cmd) 121 98 { 122 99 struct se_queue_obj *qobj; ··· 395 418 { 396 419 struct se_tmr_req *tmr; 397 420 struct fcp_cmnd *fcp; 421 + struct ft_sess *sess; 398 422 u8 tm_func; 399 423 400 424 fcp = fc_frame_payload_get(cmd->req_frame, sizeof(*fcp)); ··· 403 425 switch (fcp->fc_tm_flags) { 404 426 case FCP_TMF_LUN_RESET: 405 427 tm_func = TMR_LUN_RESET; 406 - if (ft_get_lun_for_cmd(cmd, fcp->fc_lun) < 0) { 407 - ft_dump_cmd(cmd, __func__); 408 - transport_send_check_condition_and_sense(&cmd->se_cmd, 409 - cmd->se_cmd.scsi_sense_reason, 0); 410 - ft_sess_put(cmd->sess); 411 - return; 412 - } 413 428 break; 414 429 case FCP_TMF_TGT_RESET: 415 430 tm_func = TMR_TARGET_WARM_RESET; ··· 434 463 return; 435 464 } 436 465 cmd->se_cmd.se_tmr_req = tmr; 466 + 467 + switch (fcp->fc_tm_flags) { 468 + case FCP_TMF_LUN_RESET: 469 + cmd->lun = scsilun_to_int((struct scsi_lun *)fcp->fc_lun); 470 + if (transport_get_lun_for_tmr(&cmd->se_cmd, cmd->lun) < 0) { 471 + /* 472 + * Make sure to clean up newly allocated TMR request 473 + * since "unable to handle TMR request because failed 474 + * to get to LUN" 475 + */ 476 + FT_TM_DBG("Failed to get LUN for TMR func %d, " 477 + "se_cmd %p, unpacked_lun %d\n", 478 + tm_func, &cmd->se_cmd, cmd->lun); 479 + ft_dump_cmd(cmd, __func__); 480 + sess = cmd->sess; 481 + transport_send_check_condition_and_sense(&cmd->se_cmd, 482 + 
cmd->se_cmd.scsi_sense_reason, 0); 483 + transport_generic_free_cmd(&cmd->se_cmd, 0, 1, 0); 484 + ft_sess_put(sess); 485 + return; 486 + } 487 + break; 488 + case FCP_TMF_TGT_RESET: 489 + case FCP_TMF_CLR_TASK_SET: 490 + case FCP_TMF_ABT_TASK_SET: 491 + case FCP_TMF_CLR_ACA: 492 + break; 493 + default: 494 + return; 495 + } 437 496 transport_generic_handle_tmr(&cmd->se_cmd); 438 497 } 439 498 ··· 636 635 637 636 fc_seq_exch(cmd->seq)->lp->tt.seq_set_resp(cmd->seq, ft_recv_seq, cmd); 638 637 639 - ret = ft_get_lun_for_cmd(cmd, fcp->fc_lun); 638 + cmd->lun = scsilun_to_int((struct scsi_lun *)fcp->fc_lun); 639 + ret = transport_get_lun_for_cmd(&cmd->se_cmd, NULL, cmd->lun); 640 640 if (ret < 0) { 641 641 ft_dump_cmd(cmd, __func__); 642 642 transport_send_check_condition_and_sense(&cmd->se_cmd,
+1 -1
drivers/target/tcm_fc/tfc_io.c
··· 203 203 /* XXX For now, initiator will retry */ 204 204 if (printk_ratelimit()) 205 205 printk(KERN_ERR "%s: Failed to send frame %p, " 206 - "xid <0x%x>, remaining <0x%x>, " 206 + "xid <0x%x>, remaining %zu, " 207 207 "lso_max <0x%x>\n", 208 208 __func__, fp, ep->xid, 209 209 remaining, lport->lso_max);
+2 -2
drivers/target/tcm_fc/tfc_sess.c
··· 229 229 return NULL; 230 230 231 231 sess->se_sess = transport_init_session(); 232 - if (!sess->se_sess) { 232 + if (IS_ERR(sess->se_sess)) { 233 233 kfree(sess); 234 234 return NULL; 235 235 } ··· 332 332 lport = sess->tport->lport; 333 333 port_id = sess->port_id; 334 334 if (port_id == -1) { 335 - mutex_lock(&ft_lport_lock); 335 + mutex_unlock(&ft_lport_lock); 336 336 return; 337 337 } 338 338 FT_SESS_DBG("port_id %x\n", port_id);
+20 -6
drivers/tty/n_gsm.c
··· 875 875 *dp++ = last << 7 | first << 6 | 1; /* EA */ 876 876 len--; 877 877 } 878 - memcpy(dp, skb_pull(dlci->skb, len), len); 878 + memcpy(dp, dlci->skb->data, len); 879 + skb_pull(dlci->skb, len); 879 880 __gsm_data_queue(dlci, msg); 880 881 if (last) 881 882 dlci->skb = NULL; ··· 985 984 */ 986 985 987 986 static void gsm_process_modem(struct tty_struct *tty, struct gsm_dlci *dlci, 988 - u32 modem) 987 + u32 modem, int clen) 989 988 { 990 989 int mlines = 0; 991 - u8 brk = modem >> 6; 990 + u8 brk = 0; 991 + 992 + /* The modem status command can either contain one octet (v.24 signals) 993 + or two octets (v.24 signals + break signals). The length field will 994 + either be 2 or 3 respectively. This is specified in section 995 + 5.4.6.3.7 of the 27.010 mux spec. */ 996 + 997 + if (clen == 2) 998 + modem = modem & 0x7f; 999 + else { 1000 + brk = modem & 0x7f; 1001 + modem = (modem >> 7) & 0x7f; 1002 + }; 992 1003 993 1004 /* Flow control/ready to communicate */ 994 1005 if (modem & MDM_FC) { ··· 1074 1061 return; 1075 1062 } 1076 1063 tty = tty_port_tty_get(&dlci->port); 1077 - gsm_process_modem(tty, dlci, modem); 1064 + gsm_process_modem(tty, dlci, modem, clen); 1078 1065 if (tty) { 1079 1066 tty_wakeup(tty); 1080 1067 tty_kref_put(tty); ··· 1495 1482 * open we shovel the bits down it, if not we drop them. 1496 1483 */ 1497 1484 1498 - static void gsm_dlci_data(struct gsm_dlci *dlci, u8 *data, int len) 1485 + static void gsm_dlci_data(struct gsm_dlci *dlci, u8 *data, int clen) 1499 1486 { 1500 1487 /* krefs .. 
*/ 1501 1488 struct tty_port *port = &dlci->port; 1502 1489 struct tty_struct *tty = tty_port_tty_get(port); 1503 1490 unsigned int modem = 0; 1491 + int len = clen; 1504 1492 1505 1493 if (debug & 16) 1506 1494 pr_debug("%d bytes for tty %p\n", len, tty); ··· 1521 1507 if (len == 0) 1522 1508 return; 1523 1509 } 1524 - gsm_process_modem(tty, dlci, modem); 1510 + gsm_process_modem(tty, dlci, modem, clen); 1525 1511 /* Line state will go via DLCI 0 controls only */ 1526 1512 case 1: 1527 1513 default:
+1
drivers/tty/n_tty.c
··· 1815 1815 /* FIXME: does n_tty_set_room need locking ? */ 1816 1816 n_tty_set_room(tty); 1817 1817 timeout = schedule_timeout(timeout); 1818 + BUG_ON(!tty->read_buf); 1818 1819 continue; 1819 1820 } 1820 1821 __set_current_state(TASK_RUNNING);
+1
drivers/tty/serial/8250.c
··· 3318 3318 uart->port.flags &= ~UPF_BOOT_AUTOCONF; 3319 3319 uart->port.type = PORT_UNKNOWN; 3320 3320 uart->port.dev = &serial8250_isa_devs->dev; 3321 + uart->capabilities = uart_config[uart->port.type].flags; 3321 3322 uart_add_one_port(&serial8250_reg, &uart->port); 3322 3323 } else { 3323 3324 uart->port.dev = NULL;
+60 -1
drivers/tty/serial/8250_pci.c
··· 973 973 974 974 static int 975 975 pci_omegapci_setup(struct serial_private *priv, 976 - struct pciserial_board *board, 976 + const struct pciserial_board *board, 977 977 struct uart_port *port, int idx) 978 978 { 979 979 return setup_port(priv, port, 2, idx * 8, 0); ··· 992 992 priv->dev->subsystem_device); 993 993 994 994 return pci_default_setup(priv, board, port, idx); 995 + } 996 + 997 + static int pci_eg20t_init(struct pci_dev *dev) 998 + { 999 + #if defined(CONFIG_SERIAL_PCH_UART) || defined(CONFIG_SERIAL_PCH_UART_MODULE) 1000 + return -ENODEV; 1001 + #else 1002 + return 0; 1003 + #endif 995 1004 } 996 1005 997 1006 /* This should be in linux/pci_ids.h */ ··· 1454 1445 .subdevice = PCI_ANY_ID, 1455 1446 .init = pci_oxsemi_tornado_init, 1456 1447 .setup = pci_default_setup, 1448 + }, 1449 + { 1450 + .vendor = PCI_VENDOR_ID_INTEL, 1451 + .device = 0x8811, 1452 + .init = pci_eg20t_init, 1453 + }, 1454 + { 1455 + .vendor = PCI_VENDOR_ID_INTEL, 1456 + .device = 0x8812, 1457 + .init = pci_eg20t_init, 1458 + }, 1459 + { 1460 + .vendor = PCI_VENDOR_ID_INTEL, 1461 + .device = 0x8813, 1462 + .init = pci_eg20t_init, 1463 + }, 1464 + { 1465 + .vendor = PCI_VENDOR_ID_INTEL, 1466 + .device = 0x8814, 1467 + .init = pci_eg20t_init, 1468 + }, 1469 + { 1470 + .vendor = 0x10DB, 1471 + .device = 0x8027, 1472 + .init = pci_eg20t_init, 1473 + }, 1474 + { 1475 + .vendor = 0x10DB, 1476 + .device = 0x8028, 1477 + .init = pci_eg20t_init, 1478 + }, 1479 + { 1480 + .vendor = 0x10DB, 1481 + .device = 0x8029, 1482 + .init = pci_eg20t_init, 1483 + }, 1484 + { 1485 + .vendor = 0x10DB, 1486 + .device = 0x800C, 1487 + .init = pci_eg20t_init, 1488 + }, 1489 + { 1490 + .vendor = 0x10DB, 1491 + .device = 0x800D, 1492 + .init = pci_eg20t_init, 1493 + }, 1494 + { 1495 + .vendor = 0x10DB, 1496 + .device = 0x800D, 1497 + .init = pci_eg20t_init, 1457 1498 }, 1458 1499 /* 1459 1500 * Cronyx Omega PCI (PLX-chip based)
+122 -1
drivers/tty/serial/amba-pl011.c
··· 50 50 #include <linux/dmaengine.h> 51 51 #include <linux/dma-mapping.h> 52 52 #include <linux/scatterlist.h> 53 + #include <linux/delay.h> 53 54 54 55 #include <asm/io.h> 55 56 #include <asm/sizes.h> ··· 66 65 #define UART_DR_ERROR (UART011_DR_OE|UART011_DR_BE|UART011_DR_PE|UART011_DR_FE) 67 66 #define UART_DUMMY_DR_RX (1 << 16) 68 67 68 + 69 + #define UART_WA_SAVE_NR 14 70 + 71 + static void pl011_lockup_wa(unsigned long data); 72 + static const u32 uart_wa_reg[UART_WA_SAVE_NR] = { 73 + ST_UART011_DMAWM, 74 + ST_UART011_TIMEOUT, 75 + ST_UART011_LCRH_RX, 76 + UART011_IBRD, 77 + UART011_FBRD, 78 + ST_UART011_LCRH_TX, 79 + UART011_IFLS, 80 + ST_UART011_XFCR, 81 + ST_UART011_XON1, 82 + ST_UART011_XON2, 83 + ST_UART011_XOFF1, 84 + ST_UART011_XOFF2, 85 + UART011_CR, 86 + UART011_IMSC 87 + }; 88 + 89 + static u32 uart_wa_regdata[UART_WA_SAVE_NR]; 90 + static DECLARE_TASKLET(pl011_lockup_tlet, pl011_lockup_wa, 0); 91 + 69 92 /* There is by now at least one vendor with differing details, so handle it */ 70 93 struct vendor_data { 71 94 unsigned int ifls; ··· 97 72 unsigned int lcrh_tx; 98 73 unsigned int lcrh_rx; 99 74 bool oversampling; 75 + bool interrupt_may_hang; /* vendor-specific */ 100 76 bool dma_threshold; 101 77 }; 102 78 ··· 116 90 .lcrh_tx = ST_UART011_LCRH_TX, 117 91 .lcrh_rx = ST_UART011_LCRH_RX, 118 92 .oversampling = true, 93 + .interrupt_may_hang = true, 119 94 .dma_threshold = true, 120 95 }; 96 + 97 + static struct uart_amba_port *amba_ports[UART_NR]; 121 98 122 99 /* Deals with DMA transactions */ 123 100 ··· 161 132 unsigned int lcrh_rx; /* vendor-specific */ 162 133 bool autorts; 163 134 char type[12]; 135 + bool interrupt_may_hang; /* vendor-specific */ 164 136 #ifdef CONFIG_DMA_ENGINE 165 137 /* DMA stuff */ 166 138 bool using_tx_dma; ··· 1038 1008 #endif 1039 1009 1040 1010 1011 + /* 1012 + * pl011_lockup_wa 1013 + * This workaround aims to break the deadlock situation 1014 + * when after long transfer over uart in hardware flow 1015 + * 
control, uart interrupt registers cannot be cleared. 1016 + * Hence uart transfer gets blocked. 1017 + * 1018 + * It is seen that during such deadlock condition ICR 1019 + * don't get cleared even on multiple write. This leads 1020 + * pass_counter to decrease and finally reach zero. This 1021 + * can be taken as trigger point to run this UART_BT_WA. 1022 + * 1023 + */ 1024 + static void pl011_lockup_wa(unsigned long data) 1025 + { 1026 + struct uart_amba_port *uap = amba_ports[0]; 1027 + void __iomem *base = uap->port.membase; 1028 + struct circ_buf *xmit = &uap->port.state->xmit; 1029 + struct tty_struct *tty = uap->port.state->port.tty; 1030 + int buf_empty_retries = 200; 1031 + int loop; 1032 + 1033 + /* Stop HCI layer from submitting data for tx */ 1034 + tty->hw_stopped = 1; 1035 + while (!uart_circ_empty(xmit)) { 1036 + if (buf_empty_retries-- == 0) 1037 + break; 1038 + udelay(100); 1039 + } 1040 + 1041 + /* Backup registers */ 1042 + for (loop = 0; loop < UART_WA_SAVE_NR; loop++) 1043 + uart_wa_regdata[loop] = readl(base + uart_wa_reg[loop]); 1044 + 1045 + /* Disable UART so that FIFO data is flushed out */ 1046 + writew(0x00, uap->port.membase + UART011_CR); 1047 + 1048 + /* Soft reset UART module */ 1049 + if (uap->port.dev->platform_data) { 1050 + struct amba_pl011_data *plat; 1051 + 1052 + plat = uap->port.dev->platform_data; 1053 + if (plat->reset) 1054 + plat->reset(); 1055 + } 1056 + 1057 + /* Restore registers */ 1058 + for (loop = 0; loop < UART_WA_SAVE_NR; loop++) 1059 + writew(uart_wa_regdata[loop] , 1060 + uap->port.membase + uart_wa_reg[loop]); 1061 + 1062 + /* Initialise the old status of the modem signals */ 1063 + uap->old_status = readw(uap->port.membase + UART01x_FR) & 1064 + UART01x_FR_MODEM_ANY; 1065 + 1066 + if (readl(base + UART011_MIS) & 0x2) 1067 + printk(KERN_EMERG "UART_BT_WA: ***FAILED***\n"); 1068 + 1069 + /* Start Tx/Rx */ 1070 + tty->hw_stopped = 0; 1071 + } 1072 + 1041 1073 static void pl011_stop_tx(struct uart_port *port) 
1042 1074 { 1043 1075 struct uart_amba_port *uap = (struct uart_amba_port *)port; ··· 1250 1158 if (status & UART011_TXIS) 1251 1159 pl011_tx_chars(uap); 1252 1160 1253 - if (pass_counter-- == 0) 1161 + if (pass_counter-- == 0) { 1162 + if (uap->interrupt_may_hang) 1163 + tasklet_schedule(&pl011_lockup_tlet); 1254 1164 break; 1165 + } 1255 1166 1256 1167 status = readw(uap->port.membase + UART011_MIS); 1257 1168 } while (status != 0); ··· 1434 1339 writew(uap->im, uap->port.membase + UART011_IMSC); 1435 1340 spin_unlock_irq(&uap->port.lock); 1436 1341 1342 + if (uap->port.dev->platform_data) { 1343 + struct amba_pl011_data *plat; 1344 + 1345 + plat = uap->port.dev->platform_data; 1346 + if (plat->init) 1347 + plat->init(); 1348 + } 1349 + 1437 1350 return 0; 1438 1351 1439 1352 clk_dis: ··· 1497 1394 * Shut down the clock producer 1498 1395 */ 1499 1396 clk_disable(uap->clk); 1397 + 1398 + if (uap->port.dev->platform_data) { 1399 + struct amba_pl011_data *plat; 1400 + 1401 + plat = uap->port.dev->platform_data; 1402 + if (plat->exit) 1403 + plat->exit(); 1404 + } 1405 + 1500 1406 } 1501 1407 1502 1408 static void ··· 1812 1700 if (!uap) 1813 1701 return -ENODEV; 1814 1702 1703 + if (uap->port.dev->platform_data) { 1704 + struct amba_pl011_data *plat; 1705 + 1706 + plat = uap->port.dev->platform_data; 1707 + if (plat->init) 1708 + plat->init(); 1709 + } 1710 + 1815 1711 uap->port.uartclk = clk_get_rate(uap->clk); 1816 1712 1817 1713 if (options) ··· 1894 1774 uap->lcrh_rx = vendor->lcrh_rx; 1895 1775 uap->lcrh_tx = vendor->lcrh_tx; 1896 1776 uap->fifosize = vendor->fifosize; 1777 + uap->interrupt_may_hang = vendor->interrupt_may_hang; 1897 1778 uap->port.dev = &dev->dev; 1898 1779 uap->port.mapbase = dev->res.start; 1899 1780 uap->port.membase = base;
+14 -4
drivers/tty/serial/bcm63xx_uart.c
··· 250 250 /* get overrun/fifo empty information from ier 251 251 * register */ 252 252 iestat = bcm_uart_readl(port, UART_IR_REG); 253 + 254 + if (unlikely(iestat & UART_IR_STAT(UART_IR_RXOVER))) { 255 + unsigned int val; 256 + 257 + /* fifo reset is required to clear 258 + * interrupt */ 259 + val = bcm_uart_readl(port, UART_CTL_REG); 260 + val |= UART_CTL_RSTRXFIFO_MASK; 261 + bcm_uart_writel(port, val, UART_CTL_REG); 262 + 263 + port->icount.overrun++; 264 + tty_insert_flip_char(tty, 0, TTY_OVERRUN); 265 + } 266 + 253 267 if (!(iestat & UART_IR_STAT(UART_IR_RXNOTEMPTY))) 254 268 break; 255 269 ··· 298 284 if (uart_handle_sysrq_char(port, c)) 299 285 continue; 300 286 301 - if (unlikely(iestat & UART_IR_STAT(UART_IR_RXOVER))) { 302 - port->icount.overrun++; 303 - tty_insert_flip_char(tty, 0, TTY_OVERRUN); 304 - } 305 287 306 288 if ((cstat & port->ignore_status_mask) == 0) 307 289 tty_insert_flip_char(tty, c, flag);
+1 -1
drivers/tty/serial/jsm/jsm_driver.c
··· 125 125 brd->bd_uart_offset = 0x200; 126 126 brd->bd_dividend = 921600; 127 127 128 - brd->re_map_membase = ioremap(brd->membase, 0x1000); 128 + brd->re_map_membase = ioremap(brd->membase, pci_resource_len(pdev, 0)); 129 129 if (!brd->re_map_membase) { 130 130 dev_err(&pdev->dev, 131 131 "card has no PCI Memory resources, "
+3 -2
drivers/tty/serial/mrst_max3110.c
··· 421 421 int ret = 0; 422 422 struct circ_buf *xmit = &max->con_xmit; 423 423 424 - init_waitqueue_head(wq); 425 424 pr_info(PR_FMT "start main thread\n"); 426 425 427 426 do { ··· 822 823 res = RC_TAG; 823 824 ret = max3110_write_then_read(max, (u8 *)&res, (u8 *)&res, 2, 0); 824 825 if (ret < 0 || res == 0 || res == 0xffff) { 825 - printk(KERN_ERR "MAX3111 deemed not present (conf reg %04x)", 826 + dev_dbg(&spi->dev, "MAX3111 deemed not present (conf reg %04x)", 826 827 res); 827 828 ret = -ENODEV; 828 829 goto err_get_page; ··· 836 837 max->con_xmit.buf = buffer; 837 838 max->con_xmit.head = 0; 838 839 max->con_xmit.tail = 0; 840 + 841 + init_waitqueue_head(&max->wq); 839 842 840 843 max->main_thread = kthread_run(max3110_main_thread, 841 844 max, "max3110_main");
+2 -2
drivers/tty/serial/s5pv210.c
··· 30 30 struct s3c2410_uartcfg *cfg = port->dev->platform_data; 31 31 unsigned long ucon = rd_regl(port, S3C2410_UCON); 32 32 33 - if ((cfg->clocks_size) == 1) 33 + if (cfg->flags & NO_NEED_CHECK_CLKSRC) 34 34 return 0; 35 35 36 36 if (strcmp(clk->name, "pclk") == 0) ··· 55 55 56 56 clk->divisor = 1; 57 57 58 - if ((cfg->clocks_size) == 1) 58 + if (cfg->flags & NO_NEED_CHECK_CLKSRC) 59 59 return 0; 60 60 61 61 switch (ucon & S5PV210_UCON_CLKMASK) {
+3 -1
drivers/tty/tty_ldisc.c
··· 555 555 static int tty_ldisc_wait_idle(struct tty_struct *tty) 556 556 { 557 557 int ret; 558 - ret = wait_event_interruptible_timeout(tty_ldisc_idle, 558 + ret = wait_event_timeout(tty_ldisc_idle, 559 559 atomic_read(&tty->ldisc->users) == 1, 5 * HZ); 560 560 if (ret < 0) 561 561 return ret; ··· 762 762 763 763 if (IS_ERR(ld)) 764 764 return -1; 765 + 766 + WARN_ON_ONCE(tty_ldisc_wait_idle(tty)); 765 767 766 768 tty_ldisc_close(tty, tty->ldisc); 767 769 tty_ldisc_put(tty->ldisc);
+13 -4
drivers/usb/core/driver.c
··· 375 375 * Just re-enable it without affecting the endpoint toggles. 376 376 */ 377 377 usb_enable_interface(udev, intf, false); 378 - } else if (!error && !intf->dev.power.in_suspend) { 378 + } else if (!error && !intf->dev.power.is_prepared) { 379 379 r = usb_set_interface(udev, intf->altsetting[0]. 380 380 desc.bInterfaceNumber, 0); 381 381 if (r < 0) ··· 960 960 } 961 961 962 962 /* Try to rebind the interface */ 963 - if (!intf->dev.power.in_suspend) { 963 + if (!intf->dev.power.is_prepared) { 964 964 intf->needs_binding = 0; 965 965 rc = device_attach(&intf->dev); 966 966 if (rc < 0) ··· 1107 1107 if (intf->condition == USB_INTERFACE_UNBOUND) { 1108 1108 1109 1109 /* Carry out a deferred switch to altsetting 0 */ 1110 - if (intf->needs_altsetting0 && !intf->dev.power.in_suspend) { 1110 + if (intf->needs_altsetting0 && !intf->dev.power.is_prepared) { 1111 1111 usb_set_interface(udev, intf->altsetting[0]. 1112 1112 desc.bInterfaceNumber, 0); 1113 1113 intf->needs_altsetting0 = 0; ··· 1187 1187 for (i = n - 1; i >= 0; --i) { 1188 1188 intf = udev->actconfig->interface[i]; 1189 1189 status = usb_suspend_interface(udev, intf, msg); 1190 + 1191 + /* Ignore errors during system sleep transitions */ 1192 + if (!(msg.event & PM_EVENT_AUTO)) 1193 + status = 0; 1190 1194 if (status != 0) 1191 1195 break; 1192 1196 } 1193 1197 } 1194 - if (status == 0) 1198 + if (status == 0) { 1195 1199 status = usb_suspend_device(udev, msg); 1200 + 1201 + /* Again, ignore errors during system sleep transitions */ 1202 + if (!(msg.event & PM_EVENT_AUTO)) 1203 + status = 0; 1204 + } 1196 1205 1197 1206 /* If the suspend failed, resume interfaces that did get suspended */ 1198 1207 if (status != 0) {
+11 -5
drivers/usb/core/hub.c
··· 1634 1634 { 1635 1635 struct usb_device *udev = *pdev; 1636 1636 int i; 1637 + struct usb_hcd *hcd = bus_to_hcd(udev->bus); 1637 1638 1638 1639 if (!udev) { 1639 1640 pr_debug ("%s nodev\n", __func__); ··· 1662 1661 * so that the hardware is now fully quiesced. 1663 1662 */ 1664 1663 dev_dbg (&udev->dev, "unregistering device\n"); 1664 + mutex_lock(hcd->bandwidth_mutex); 1665 1665 usb_disable_device(udev, 0); 1666 + mutex_unlock(hcd->bandwidth_mutex); 1666 1667 usb_hcd_synchronize_unlinks(udev); 1667 1668 1668 1669 usb_remove_ep_devs(&udev->ep0); ··· 2365 2362 USB_DEVICE_REMOTE_WAKEUP, 0, 2366 2363 NULL, 0, 2367 2364 USB_CTRL_SET_TIMEOUT); 2365 + 2366 + /* System sleep transitions should never fail */ 2367 + if (!(msg.event & PM_EVENT_AUTO)) 2368 + status = 0; 2368 2369 } else { 2369 2370 /* device has up to 10 msec to fully suspend */ 2370 2371 dev_dbg(&udev->dev, "usb %ssuspend\n", ··· 2618 2611 struct usb_device *hdev = hub->hdev; 2619 2612 unsigned port1; 2620 2613 2621 - /* fail if children aren't already suspended */ 2614 + /* Warn if children aren't already suspended */ 2622 2615 for (port1 = 1; port1 <= hdev->maxchild; port1++) { 2623 2616 struct usb_device *udev; 2624 2617 2625 2618 udev = hdev->children [port1-1]; 2626 2619 if (udev && udev->can_submit) { 2627 - if (!(msg.event & PM_EVENT_AUTO)) 2628 - dev_dbg(&intf->dev, "port %d nyet suspended\n", 2629 - port1); 2630 - return -EBUSY; 2620 + dev_warn(&intf->dev, "port %d nyet suspended\n", port1); 2621 + if (msg.event & PM_EVENT_AUTO) 2622 + return -EBUSY; 2631 2623 } 2632 2624 } 2633 2625
+14 -1
drivers/usb/core/message.c
··· 1135 1135 * Deallocates hcd/hardware state for the endpoints (nuking all or most 1136 1136 * pending urbs) and usbcore state for the interfaces, so that usbcore 1137 1137 * must usb_set_configuration() before any interfaces could be used. 1138 + * 1139 + * Must be called with hcd->bandwidth_mutex held. 1138 1140 */ 1139 1141 void usb_disable_device(struct usb_device *dev, int skip_ep0) 1140 1142 { 1141 1143 int i; 1144 + struct usb_hcd *hcd = bus_to_hcd(dev->bus); 1142 1145 1143 1146 /* getting rid of interfaces will disconnect 1144 1147 * any drivers bound to them (a key side effect) ··· 1175 1172 1176 1173 dev_dbg(&dev->dev, "%s nuking %s URBs\n", __func__, 1177 1174 skip_ep0 ? "non-ep0" : "all"); 1175 + if (hcd->driver->check_bandwidth) { 1176 + /* First pass: Cancel URBs, leave endpoint pointers intact. */ 1177 + for (i = skip_ep0; i < 16; ++i) { 1178 + usb_disable_endpoint(dev, i, false); 1179 + usb_disable_endpoint(dev, i + USB_DIR_IN, false); 1180 + } 1181 + /* Remove endpoints from the host controller internal state */ 1182 + usb_hcd_alloc_bandwidth(dev, NULL, NULL, NULL); 1183 + /* Second pass: remove endpoint pointers */ 1184 + } 1178 1185 for (i = skip_ep0; i < 16; ++i) { 1179 1186 usb_disable_endpoint(dev, i, true); 1180 1187 usb_disable_endpoint(dev, i + USB_DIR_IN, true); ··· 1740 1727 /* if it's already configured, clear out old state first. 1741 1728 * getting rid of old interfaces means unbinding their drivers. 1742 1729 */ 1730 + mutex_lock(hcd->bandwidth_mutex); 1743 1731 if (dev->state != USB_STATE_ADDRESS) 1744 1732 usb_disable_device(dev, 1); /* Skip ep0 */ 1745 1733 ··· 1753 1739 * host controller will not allow submissions to dropped endpoints. If 1754 1740 * this call fails, the device state is unchanged. 1755 1741 */ 1756 - mutex_lock(hcd->bandwidth_mutex); 1757 1742 ret = usb_hcd_alloc_bandwidth(dev, cp, NULL, NULL); 1758 1743 if (ret < 0) { 1759 1744 mutex_unlock(hcd->bandwidth_mutex);
+6 -4
drivers/usb/host/ehci-ath79.c
··· 44 44 struct ehci_hcd *ehci = hcd_to_ehci(hcd); 45 45 struct platform_device *pdev = to_platform_device(hcd->self.controller); 46 46 const struct platform_device_id *id; 47 - int hclength; 48 47 int ret; 49 48 50 49 id = platform_get_device_id(pdev); ··· 52 53 return -EINVAL; 53 54 } 54 55 55 - hclength = HC_LENGTH(ehci, ehci_readl(ehci, &ehci->caps->hc_capbase)); 56 56 switch (id->driver_data) { 57 57 case EHCI_ATH79_IP_V1: 58 58 ehci->has_synopsys_hc_bug = 1; 59 59 60 60 ehci->caps = hcd->regs; 61 - ehci->regs = hcd->regs + hclength; 61 + ehci->regs = hcd->regs + 62 + HC_LENGTH(ehci, 63 + ehci_readl(ehci, &ehci->caps->hc_capbase)); 62 64 break; 63 65 64 66 case EHCI_ATH79_IP_V2: 65 67 hcd->has_tt = 1; 66 68 67 69 ehci->caps = hcd->regs + 0x100; 68 - ehci->regs = hcd->regs + 0x100 + hclength; 70 + ehci->regs = hcd->regs + 0x100 + 71 + HC_LENGTH(ehci, 72 + ehci_readl(ehci, &ehci->caps->hc_capbase)); 69 73 break; 70 74 71 75 default:
+4
drivers/usb/host/ehci-hcd.c
··· 1 1 /* 2 + * Enhanced Host Controller Interface (EHCI) driver for USB. 3 + * 4 + * Maintainer: Alan Stern <stern@rowland.harvard.edu> 5 + * 2 6 * Copyright (c) 2000-2004 by David Brownell 3 7 * 4 8 * This program is free software; you can redistribute it and/or modify it
+1 -1
drivers/usb/host/isp1760-hcd.c
··· 1555 1555 1556 1556 /* We need to forcefully reclaim the slot since some transfers never 1557 1557 return, e.g. interrupt transfers and NAKed bulk transfers. */ 1558 - if (usb_pipebulk(urb->pipe)) { 1558 + if (usb_pipecontrol(urb->pipe) || usb_pipebulk(urb->pipe)) { 1559 1559 skip_map = reg_read32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG); 1560 1560 skip_map |= (1 << qh->slot); 1561 1561 reg_write32(hcd->regs, HC_ATL_PTD_SKIPMAP_REG, skip_map);
+3 -1
drivers/usb/host/ohci-hcd.c
··· 1 1 /* 2 - * OHCI HCD (Host Controller Driver) for USB. 2 + * Open Host Controller Interface (OHCI) driver for USB. 3 + * 4 + * Maintainer: Alan Stern <stern@rowland.harvard.edu> 3 5 * 4 6 * (C) Copyright 1999 Roman Weissgaerber <weissg@vienna.at> 5 7 * (C) Copyright 2000-2004 David Brownell <dbrownell@users.sourceforge.net>
+1
drivers/usb/host/r8a66597-hcd.c
··· 2517 2517 INIT_LIST_HEAD(&r8a66597->child_device); 2518 2518 2519 2519 hcd->rsrc_start = res->start; 2520 + hcd->has_tt = 1; 2520 2521 2521 2522 ret = usb_add_hcd(hcd, irq, IRQF_DISABLED | irq_trigger); 2522 2523 if (ret != 0) {
-2
drivers/usb/host/xhci-mem.c
··· 1215 1215 ep_ctx->ep_info2 |= cpu_to_le32(MAX_PACKET(max_packet)); 1216 1216 /* dig out max burst from ep companion desc */ 1217 1217 max_packet = ep->ss_ep_comp.bMaxBurst; 1218 - if (!max_packet) 1219 - xhci_warn(xhci, "WARN no SS endpoint bMaxBurst\n"); 1220 1218 ep_ctx->ep_info2 |= cpu_to_le32(MAX_BURST(max_packet)); 1221 1219 break; 1222 1220 case USB_SPEED_HIGH:
+8
drivers/usb/host/xhci-pci.c
··· 29 29 #define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73 30 30 #define PCI_DEVICE_ID_FRESCO_LOGIC_PDK 0x1000 31 31 32 + #define PCI_VENDOR_ID_ETRON 0x1b6f 33 + #define PCI_DEVICE_ID_ASROCK_P67 0x7023 34 + 32 35 static const char hcd_name[] = "xhci_hcd"; 33 36 34 37 /* called after powerup, by probe or system-pm "wakeup" */ ··· 136 133 xhci->quirks |= XHCI_SPURIOUS_SUCCESS; 137 134 xhci->quirks |= XHCI_EP_LIMIT_QUIRK; 138 135 xhci->limit_active_eps = 64; 136 + } 137 + if (pdev->vendor == PCI_VENDOR_ID_ETRON && 138 + pdev->device == PCI_DEVICE_ID_ASROCK_P67) { 139 + xhci->quirks |= XHCI_RESET_ON_RESUME; 140 + xhci_dbg(xhci, "QUIRK: Resetting on resume\n"); 139 141 } 140 142 141 143 /* Make sure the HC is halted. */
+25 -5
drivers/usb/host/xhci-ring.c
··· 1733 1733 frame->status = -EOVERFLOW; 1734 1734 skip_td = true; 1735 1735 break; 1736 + case COMP_DEV_ERR: 1736 1737 case COMP_STALL: 1737 1738 frame->status = -EPROTO; 1738 1739 skip_td = true; ··· 1768 1767 } 1769 1768 } 1770 1769 1771 - if ((idx == urb_priv->length - 1) && *status == -EINPROGRESS) 1772 - *status = 0; 1773 - 1774 1770 return finish_td(xhci, td, event_trb, event, ep, status, false); 1775 1771 } 1776 1772 ··· 1785 1787 idx = urb_priv->td_cnt; 1786 1788 frame = &td->urb->iso_frame_desc[idx]; 1787 1789 1788 - /* The transfer is partly done */ 1789 - *status = -EXDEV; 1790 + /* The transfer is partly done. */ 1790 1791 frame->status = -EXDEV; 1791 1792 1792 1793 /* calc actual length */ ··· 2013 2016 TRB_TO_SLOT_ID(le32_to_cpu(event->flags)), 2014 2017 ep_index); 2015 2018 goto cleanup; 2019 + case COMP_DEV_ERR: 2020 + xhci_warn(xhci, "WARN: detect an incompatible device"); 2021 + status = -EPROTO; 2022 + break; 2016 2023 case COMP_MISSED_INT: 2017 2024 /* 2018 2025 * When encounter missed service error, one or more isoc tds ··· 2064 2063 /* Is this a TRB in the currently executing TD? */ 2065 2064 event_seg = trb_in_td(ep_ring->deq_seg, ep_ring->dequeue, 2066 2065 td->last_trb, event_dma); 2066 + 2067 + /* 2068 + * Skip the Force Stopped Event. The event_trb(event_dma) of FSE 2069 + * is not in the current TD pointed by ep_ring->dequeue because 2070 + * that the hardware dequeue pointer still at the previous TRB 2071 + * of the current TD. The previous TRB maybe a Link TD or the 2072 + * last TRB of the previous TD. The command completion handle 2073 + * will take care the rest. 
2074 + */ 2075 + if (!event_seg && trb_comp_code == COMP_STOP_INVAL) { 2076 + ret = 0; 2077 + goto cleanup; 2078 + } 2079 + 2067 2080 if (!event_seg) { 2068 2081 if (!ep->skip || 2069 2082 !usb_endpoint_xfer_isoc(&td->urb->ep->desc)) { ··· 2173 2158 urb->transfer_buffer_length, 2174 2159 status); 2175 2160 spin_unlock(&xhci->lock); 2161 + /* EHCI, UHCI, and OHCI always unconditionally set the 2162 + * urb->status of an isochronous endpoint to 0. 2163 + */ 2164 + if (usb_pipetype(urb->pipe) == PIPE_ISOCHRONOUS) 2165 + status = 0; 2176 2166 usb_hcd_giveback_urb(bus_to_hcd(urb->dev->bus), urb, status); 2177 2167 spin_lock(&xhci->lock); 2178 2168 }
+35 -4
drivers/usb/host/xhci.c
··· 759 759 msleep(100); 760 760 761 761 spin_lock_irq(&xhci->lock); 762 + if (xhci->quirks & XHCI_RESET_ON_RESUME) 763 + hibernated = true; 762 764 763 765 if (!hibernated) { 764 766 /* step 1: restore register */ ··· 1403 1401 u32 added_ctxs; 1404 1402 unsigned int last_ctx; 1405 1403 u32 new_add_flags, new_drop_flags, new_slot_info; 1404 + struct xhci_virt_device *virt_dev; 1406 1405 int ret = 0; 1407 1406 1408 1407 ret = xhci_check_args(hcd, udev, ep, 1, true, __func__); ··· 1428 1425 return 0; 1429 1426 } 1430 1427 1431 - in_ctx = xhci->devs[udev->slot_id]->in_ctx; 1432 - out_ctx = xhci->devs[udev->slot_id]->out_ctx; 1428 + virt_dev = xhci->devs[udev->slot_id]; 1429 + in_ctx = virt_dev->in_ctx; 1430 + out_ctx = virt_dev->out_ctx; 1433 1431 ctrl_ctx = xhci_get_input_control_ctx(xhci, in_ctx); 1434 1432 ep_index = xhci_get_endpoint_index(&ep->desc); 1435 1433 ep_ctx = xhci_get_ep_ctx(xhci, out_ctx, ep_index); 1434 + 1435 + /* If this endpoint is already in use, and the upper layers are trying 1436 + * to add it again without dropping it, reject the addition. 1437 + */ 1438 + if (virt_dev->eps[ep_index].ring && 1439 + !(le32_to_cpu(ctrl_ctx->drop_flags) & 1440 + xhci_get_endpoint_flag(&ep->desc))) { 1441 + xhci_warn(xhci, "Trying to add endpoint 0x%x " 1442 + "without dropping it.\n", 1443 + (unsigned int) ep->desc.bEndpointAddress); 1444 + return -EINVAL; 1445 + } 1446 + 1436 1447 /* If the HCD has already noted the endpoint is enabled, 1437 1448 * ignore this request. 1438 1449 */ ··· 1462 1445 * process context, not interrupt context (or so documenation 1463 1446 * for usb_set_interface() and usb_set_configuration() claim). 
1464 1447 */ 1465 - if (xhci_endpoint_init(xhci, xhci->devs[udev->slot_id], 1466 - udev, ep, GFP_NOIO) < 0) { 1448 + if (xhci_endpoint_init(xhci, virt_dev, udev, ep, GFP_NOIO) < 0) { 1467 1449 dev_dbg(&udev->dev, "%s - could not initialize ep %#x\n", 1468 1450 __func__, ep->desc.bEndpointAddress); 1469 1451 return -ENOMEM; ··· 1553 1537 "and endpoint is not disabled.\n"); 1554 1538 ret = -EINVAL; 1555 1539 break; 1540 + case COMP_DEV_ERR: 1541 + dev_warn(&udev->dev, "ERROR: Incompatible device for endpoint " 1542 + "configure command.\n"); 1543 + ret = -ENODEV; 1544 + break; 1556 1545 case COMP_SUCCESS: 1557 1546 dev_dbg(&udev->dev, "Successful Endpoint Configure command\n"); 1558 1547 ret = 0; ··· 1591 1570 "evaluate context command.\n"); 1592 1571 xhci_dbg_ctx(xhci, virt_dev->out_ctx, 1); 1593 1572 ret = -EINVAL; 1573 + break; 1574 + case COMP_DEV_ERR: 1575 + dev_warn(&udev->dev, "ERROR: Incompatible device for evaluate " 1576 + "context command.\n"); 1577 + ret = -ENODEV; 1594 1578 break; 1595 1579 case COMP_MEL_ERR: 1596 1580 /* Max Exit Latency too large error */ ··· 2878 2852 case COMP_TX_ERR: 2879 2853 dev_warn(&udev->dev, "Device not responding to set address.\n"); 2880 2854 ret = -EPROTO; 2855 + break; 2856 + case COMP_DEV_ERR: 2857 + dev_warn(&udev->dev, "ERROR: Incompatible device for address " 2858 + "device command.\n"); 2859 + ret = -ENODEV; 2881 2860 break; 2882 2861 case COMP_SUCCESS: 2883 2862 xhci_dbg(xhci, "Successful Address Device command\n");
+3
drivers/usb/host/xhci.h
··· 874 874 #define COMP_PING_ERR 20 875 875 /* Event Ring is full */ 876 876 #define COMP_ER_FULL 21 877 + /* Incompatible Device Error */ 878 + #define COMP_DEV_ERR 22 877 879 /* Missed Service Error - HC couldn't service an isoc ep within interval */ 878 880 #define COMP_MISSED_INT 23 879 881 /* Successfully stopped command ring */ ··· 1310 1308 */ 1311 1309 #define XHCI_EP_LIMIT_QUIRK (1 << 5) 1312 1310 #define XHCI_BROKEN_MSI (1 << 6) 1311 + #define XHCI_RESET_ON_RESUME (1 << 7) 1313 1312 unsigned int num_active_eps; 1314 1313 unsigned int limit_active_eps; 1315 1314 /* There are two roothubs to keep track of bus suspend info for */
+6
drivers/usb/musb/musb_gadget.c
··· 1524 1524 csr = musb_readw(epio, MUSB_TXCSR); 1525 1525 if (csr & MUSB_TXCSR_FIFONOTEMPTY) { 1526 1526 csr |= MUSB_TXCSR_FLUSHFIFO | MUSB_TXCSR_P_WZC_BITS; 1527 + /* 1528 + * Setting both TXPKTRDY and FLUSHFIFO makes the 1529 + * controller interrupt the current FIFO load, but it 1530 + * does not flush packets that are already loaded. 1531 + */ 1532 + csr &= ~MUSB_TXCSR_TXPKTRDY; 1527 1533 musb_writew(epio, MUSB_TXCSR, csr); 1528 1534 /* REVISIT may be inappropriate w/o FIFONOTEMPTY ... */ 1529 1535 musb_writew(epio, MUSB_TXCSR, csr);
+1 -1
drivers/usb/musb/musb_host.c
··· 1575 1575 /* even if there was an error, we did the dma 1576 1576 * for iso_frame_desc->length 1577 1577 */ 1578 - if (d->status != EILSEQ && d->status != -EOVERFLOW) 1578 + if (d->status != -EILSEQ && d->status != -EOVERFLOW) 1579 1579 d->status = 0; 1580 1580 1581 1581 if (++qh->iso_idx >= urb->number_of_packets)
+14 -5
drivers/usb/serial/ftdi_sio.c
··· 179 179 { USB_DEVICE(FTDI_VID, FTDI_232RL_PID) }, 180 180 { USB_DEVICE(FTDI_VID, FTDI_8U2232C_PID) }, 181 181 { USB_DEVICE(FTDI_VID, FTDI_4232H_PID) }, 182 + { USB_DEVICE(FTDI_VID, FTDI_232H_PID) }, 182 183 { USB_DEVICE(FTDI_VID, FTDI_MICRO_CHAMELEON_PID) }, 183 184 { USB_DEVICE(FTDI_VID, FTDI_RELAIS_PID) }, 184 185 { USB_DEVICE(FTDI_VID, FTDI_OPENDCC_PID) }, ··· 849 848 [FT2232C] = "FT2232C", 850 849 [FT232RL] = "FT232RL", 851 850 [FT2232H] = "FT2232H", 852 - [FT4232H] = "FT4232H" 851 + [FT4232H] = "FT4232H", 852 + [FT232H] = "FT232H" 853 853 }; 854 854 855 855 ··· 1170 1168 break; 1171 1169 case FT2232H: /* FT2232H chip */ 1172 1170 case FT4232H: /* FT4232H chip */ 1171 + case FT232H: /* FT232H chip */ 1173 1172 if ((baud <= 12000000) & (baud >= 1200)) { 1174 1173 div_value = ftdi_2232h_baud_to_divisor(baud); 1175 1174 } else if (baud < 1200) { ··· 1432 1429 } else if (version < 0x600) { 1433 1430 /* Assume it's an FT232BM (or FT245BM) */ 1434 1431 priv->chip_type = FT232BM; 1435 - } else { 1436 - /* Assume it's an FT232R */ 1432 + } else if (version < 0x900) { 1433 + /* Assume it's an FT232RL */ 1437 1434 priv->chip_type = FT232RL; 1435 + } else { 1436 + /* Assume it's an FT232H */ 1437 + priv->chip_type = FT232H; 1438 1438 } 1439 1439 dev_info(&udev->dev, "Detected %s\n", ftdi_chip_name[priv->chip_type]); 1440 1440 } ··· 1565 1559 priv->chip_type == FT2232C || 1566 1560 priv->chip_type == FT232RL || 1567 1561 priv->chip_type == FT2232H || 1568 - priv->chip_type == FT4232H)) { 1562 + priv->chip_type == FT4232H || 1563 + priv->chip_type == FT232H)) { 1569 1564 retval = device_create_file(&port->dev, 1570 1565 &dev_attr_latency_timer); 1571 1566 } ··· 1587 1580 priv->chip_type == FT2232C || 1588 1581 priv->chip_type == FT232RL || 1589 1582 priv->chip_type == FT2232H || 1590 - priv->chip_type == FT4232H) { 1583 + priv->chip_type == FT4232H || 1584 + priv->chip_type == FT232H) { 1591 1585 device_remove_file(&port->dev, &dev_attr_latency_timer); 1592 1586 } 1593 
1587 } ··· 2220 2212 case FT232RL: 2221 2213 case FT2232H: 2222 2214 case FT4232H: 2215 + case FT232H: 2223 2216 len = 2; 2224 2217 break; 2225 2218 default:
+2 -1
drivers/usb/serial/ftdi_sio.h
··· 156 156 FT2232C = 4, 157 157 FT232RL = 5, 158 158 FT2232H = 6, 159 - FT4232H = 7 159 + FT4232H = 7, 160 + FT232H = 8 160 161 }; 161 162 162 163 enum ftdi_sio_baudrate {
+1
drivers/usb/serial/ftdi_sio_ids.h
··· 22 22 #define FTDI_8U232AM_ALT_PID 0x6006 /* FTDI's alternate PID for above */ 23 23 #define FTDI_8U2232C_PID 0x6010 /* Dual channel device */ 24 24 #define FTDI_4232H_PID 0x6011 /* Quad channel hi-speed device */ 25 + #define FTDI_232H_PID 0x6014 /* Single channel hi-speed device */ 25 26 #define FTDI_SIO_PID 0x8372 /* Product Id SIO application of 8U100AX */ 26 27 #define FTDI_232RL_PID 0xFBFA /* Product ID for FT232RL */ 27 28
+1
drivers/usb/serial/ti_usb_3410_5052.c
··· 1745 1745 } 1746 1746 if (fw_p->size > TI_FIRMWARE_BUF_SIZE) { 1747 1747 dev_err(&dev->dev, "%s - firmware too large %zu\n", __func__, fw_p->size); 1748 + release_firmware(fw_p); 1748 1749 return -ENOENT; 1749 1750 } 1750 1751
+1 -2
drivers/watchdog/Kconfig
··· 535 535 536 536 config INTEL_SCU_WATCHDOG 537 537 bool "Intel SCU Watchdog for Mobile Platforms" 538 - depends on WATCHDOG 539 - depends on INTEL_SCU_IPC 538 + depends on X86_MRST 540 539 ---help--- 541 540 Hardware driver for the watchdog time built into the Intel SCU 542 541 for Intel Mobile Platforms.
+1 -1
drivers/watchdog/at32ap700x_wdt.c
··· 448 448 } 449 449 module_exit(at32_wdt_exit); 450 450 451 - MODULE_AUTHOR("Hans-Christian Egtvedt <hcegtvedt@atmel.com>"); 451 + MODULE_AUTHOR("Hans-Christian Egtvedt <egtvedt@samfundet.no>"); 452 452 MODULE_DESCRIPTION("Watchdog driver for Atmel AT32AP700X"); 453 453 MODULE_LICENSE("GPL"); 454 454 MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
+1 -1
drivers/watchdog/gef_wdt.c
··· 329 329 MODULE_DESCRIPTION("GE watchdog driver"); 330 330 MODULE_LICENSE("GPL"); 331 331 MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR); 332 - MODULE_ALIAS("platform: gef_wdt"); 332 + MODULE_ALIAS("platform:gef_wdt");
-1
drivers/watchdog/intel_scu_watchdog.c
··· 42 42 #include <linux/sched.h> 43 43 #include <linux/signal.h> 44 44 #include <linux/sfi.h> 45 - #include <linux/types.h> 46 45 #include <asm/irq.h> 47 46 #include <asm/atomic.h> 48 47 #include <asm/intel_scu_ipc.h>
+16 -13
drivers/watchdog/mtx-1_wdt.c
··· 66 66 int default_ticks; 67 67 unsigned long inuse; 68 68 unsigned gpio; 69 - int gstate; 69 + unsigned int gstate; 70 70 } mtx1_wdt_device; 71 71 72 72 static void mtx1_wdt_trigger(unsigned long unused) 73 73 { 74 - u32 tmp; 75 - 76 74 spin_lock(&mtx1_wdt_device.lock); 77 75 if (mtx1_wdt_device.running) 78 76 ticks--; 79 77 80 78 /* toggle wdt gpio */ 81 - mtx1_wdt_device.gstate = ~mtx1_wdt_device.gstate; 82 - if (mtx1_wdt_device.gstate) 83 - gpio_direction_output(mtx1_wdt_device.gpio, 1); 84 - else 85 - gpio_direction_input(mtx1_wdt_device.gpio); 79 + mtx1_wdt_device.gstate = !mtx1_wdt_device.gstate; 80 + gpio_set_value(mtx1_wdt_device.gpio, mtx1_wdt_device.gstate); 86 81 87 82 if (mtx1_wdt_device.queue && ticks) 88 83 mod_timer(&mtx1_wdt_device.timer, jiffies + MTX1_WDT_INTERVAL); ··· 100 105 if (!mtx1_wdt_device.queue) { 101 106 mtx1_wdt_device.queue = 1; 102 107 mtx1_wdt_device.gstate = 1; 103 - gpio_direction_output(mtx1_wdt_device.gpio, 1); 108 + gpio_set_value(mtx1_wdt_device.gpio, 1); 104 109 mod_timer(&mtx1_wdt_device.timer, jiffies + MTX1_WDT_INTERVAL); 105 110 } 106 111 mtx1_wdt_device.running++; ··· 115 120 if (mtx1_wdt_device.queue) { 116 121 mtx1_wdt_device.queue = 0; 117 122 mtx1_wdt_device.gstate = 0; 118 - gpio_direction_output(mtx1_wdt_device.gpio, 0); 123 + gpio_set_value(mtx1_wdt_device.gpio, 0); 119 124 } 120 125 ticks = mtx1_wdt_device.default_ticks; 121 126 spin_unlock_irqrestore(&mtx1_wdt_device.lock, flags); ··· 209 214 int ret; 210 215 211 216 mtx1_wdt_device.gpio = pdev->resource[0].start; 217 + ret = gpio_request_one(mtx1_wdt_device.gpio, 218 + GPIOF_OUT_INIT_HIGH, "mtx1-wdt"); 219 + if (ret < 0) { 220 + dev_err(&pdev->dev, "failed to request gpio"); 221 + return ret; 222 + } 212 223 213 224 spin_lock_init(&mtx1_wdt_device.lock); 214 225 init_completion(&mtx1_wdt_device.stop); ··· 240 239 mtx1_wdt_device.queue = 0; 241 240 wait_for_completion(&mtx1_wdt_device.stop); 242 241 } 242 + 243 + gpio_free(mtx1_wdt_device.gpio); 243 244 
misc_deregister(&mtx1_wdt_misc); 244 245 return 0; 245 246 } 246 247 247 - static struct platform_driver mtx1_wdt = { 248 + static struct platform_driver mtx1_wdt_driver = { 248 249 .probe = mtx1_wdt_probe, 249 250 .remove = __devexit_p(mtx1_wdt_remove), 250 251 .driver.name = "mtx1-wdt", ··· 255 252 256 253 static int __init mtx1_wdt_init(void) 257 254 { 258 - return platform_driver_register(&mtx1_wdt); 255 + return platform_driver_register(&mtx1_wdt_driver); 259 256 } 260 257 261 258 static void __exit mtx1_wdt_exit(void) 262 259 { 263 - platform_driver_unregister(&mtx1_wdt); 260 + platform_driver_unregister(&mtx1_wdt_driver); 264 261 } 265 262 266 263 module_init(mtx1_wdt_init);
+5
drivers/watchdog/wm831x_wdt.c
··· 320 320 struct wm831x_watchdog_pdata *pdata; 321 321 int reg, ret; 322 322 323 + if (wm831x) { 324 + dev_err(&pdev->dev, "wm831x watchdog already registered\n"); 325 + return -EBUSY; 326 + } 327 + 323 328 wm831x = dev_get_drvdata(pdev->dev.parent); 324 329 325 330 ret = wm831x_reg_read(wm831x, WM831X_WATCHDOG);
+13 -1
fs/block_dev.c
··· 762 762 if (!disk) 763 763 return ERR_PTR(-ENXIO); 764 764 765 - whole = bdget_disk(disk, 0); 765 + /* 766 + * Normally, @bdev should equal what's returned from bdget_disk() 767 + * if partno is 0; however, some drivers (floppy) use multiple 768 + * bdev's for the same physical device and @bdev may be one of the 769 + * aliases. Keep @bdev if partno is 0. This means claimer 770 + * tracking is broken for those devices but it has always been that 771 + * way. 772 + */ 773 + if (partno) 774 + whole = bdget_disk(disk, 0); 775 + else 776 + whole = bdgrab(bdev); 777 + 766 778 module_put(disk->fops->owner); 767 779 put_disk(disk); 768 780 if (!whole)
-1
fs/btrfs/ctree.h
··· 19 19 #ifndef __BTRFS_CTREE__ 20 20 #define __BTRFS_CTREE__ 21 21 22 - #include <linux/version.h> 23 22 #include <linux/mm.h> 24 23 #include <linux/highmem.h> 25 24 #include <linux/fs.h>
+92 -36
fs/btrfs/delayed-inode.c
··· 82 82 return root->fs_info->delayed_root; 83 83 } 84 84 85 + static struct btrfs_delayed_node *btrfs_get_delayed_node(struct inode *inode) 86 + { 87 + struct btrfs_inode *btrfs_inode = BTRFS_I(inode); 88 + struct btrfs_root *root = btrfs_inode->root; 89 + u64 ino = btrfs_ino(inode); 90 + struct btrfs_delayed_node *node; 91 + 92 + node = ACCESS_ONCE(btrfs_inode->delayed_node); 93 + if (node) { 94 + atomic_inc(&node->refs); 95 + return node; 96 + } 97 + 98 + spin_lock(&root->inode_lock); 99 + node = radix_tree_lookup(&root->delayed_nodes_tree, ino); 100 + if (node) { 101 + if (btrfs_inode->delayed_node) { 102 + atomic_inc(&node->refs); /* can be accessed */ 103 + BUG_ON(btrfs_inode->delayed_node != node); 104 + spin_unlock(&root->inode_lock); 105 + return node; 106 + } 107 + btrfs_inode->delayed_node = node; 108 + atomic_inc(&node->refs); /* can be accessed */ 109 + atomic_inc(&node->refs); /* cached in the inode */ 110 + spin_unlock(&root->inode_lock); 111 + return node; 112 + } 113 + spin_unlock(&root->inode_lock); 114 + 115 + return NULL; 116 + } 117 + 85 118 static struct btrfs_delayed_node *btrfs_get_or_create_delayed_node( 86 119 struct inode *inode) 87 120 { ··· 125 92 int ret; 126 93 127 94 again: 128 - node = ACCESS_ONCE(btrfs_inode->delayed_node); 129 - if (node) { 130 - atomic_inc(&node->refs); /* can be accessed */ 95 + node = btrfs_get_delayed_node(inode); 96 + if (node) 131 97 return node; 132 - } 133 - 134 - spin_lock(&root->inode_lock); 135 - node = radix_tree_lookup(&root->delayed_nodes_tree, ino); 136 - if (node) { 137 - if (btrfs_inode->delayed_node) { 138 - spin_unlock(&root->inode_lock); 139 - goto again; 140 - } 141 - btrfs_inode->delayed_node = node; 142 - atomic_inc(&node->refs); /* can be accessed */ 143 - atomic_inc(&node->refs); /* cached in the inode */ 144 - spin_unlock(&root->inode_lock); 145 - return node; 146 - } 147 - spin_unlock(&root->inode_lock); 148 98 149 99 node = kmem_cache_alloc(delayed_node_cache, GFP_NOFS); 150 100 if 
(!node) ··· 562 546 next = rb_entry(p, struct btrfs_delayed_item, rb_node); 563 547 564 548 return next; 565 - } 566 - 567 - static inline struct btrfs_delayed_node *btrfs_get_delayed_node( 568 - struct inode *inode) 569 - { 570 - struct btrfs_inode *btrfs_inode = BTRFS_I(inode); 571 - struct btrfs_delayed_node *delayed_node; 572 - 573 - delayed_node = btrfs_inode->delayed_node; 574 - if (delayed_node) 575 - atomic_inc(&delayed_node->refs); 576 - 577 - return delayed_node; 578 549 } 579 550 580 551 static inline struct btrfs_root *btrfs_get_fs_root(struct btrfs_root *root, ··· 1407 1404 1408 1405 int btrfs_inode_delayed_dir_index_count(struct inode *inode) 1409 1406 { 1410 - struct btrfs_delayed_node *delayed_node = BTRFS_I(inode)->delayed_node; 1411 - int ret = 0; 1407 + struct btrfs_delayed_node *delayed_node = btrfs_get_delayed_node(inode); 1412 1408 1413 1409 if (!delayed_node) 1414 1410 return -ENOENT; ··· 1417 1415 * a new directory index is added into the delayed node and index_cnt 1418 1416 * is updated now. So we needn't lock the delayed node. 
1419 1417 */ 1420 - if (!delayed_node->index_cnt) 1418 + if (!delayed_node->index_cnt) { 1419 + btrfs_release_delayed_node(delayed_node); 1421 1420 return -EINVAL; 1421 + } 1422 1422 1423 1423 BTRFS_I(inode)->index_cnt = delayed_node->index_cnt; 1424 - return ret; 1424 + btrfs_release_delayed_node(delayed_node); 1425 + return 0; 1425 1426 } 1426 1427 1427 1428 void btrfs_get_delayed_items(struct inode *inode, struct list_head *ins_list, ··· 1616 1611 inode->i_ctime.tv_sec); 1617 1612 btrfs_set_stack_timespec_nsec(btrfs_inode_ctime(inode_item), 1618 1613 inode->i_ctime.tv_nsec); 1614 + } 1615 + 1616 + int btrfs_fill_inode(struct inode *inode, u32 *rdev) 1617 + { 1618 + struct btrfs_delayed_node *delayed_node; 1619 + struct btrfs_inode_item *inode_item; 1620 + struct btrfs_timespec *tspec; 1621 + 1622 + delayed_node = btrfs_get_delayed_node(inode); 1623 + if (!delayed_node) 1624 + return -ENOENT; 1625 + 1626 + mutex_lock(&delayed_node->mutex); 1627 + if (!delayed_node->inode_dirty) { 1628 + mutex_unlock(&delayed_node->mutex); 1629 + btrfs_release_delayed_node(delayed_node); 1630 + return -ENOENT; 1631 + } 1632 + 1633 + inode_item = &delayed_node->inode_item; 1634 + 1635 + inode->i_uid = btrfs_stack_inode_uid(inode_item); 1636 + inode->i_gid = btrfs_stack_inode_gid(inode_item); 1637 + btrfs_i_size_write(inode, btrfs_stack_inode_size(inode_item)); 1638 + inode->i_mode = btrfs_stack_inode_mode(inode_item); 1639 + inode->i_nlink = btrfs_stack_inode_nlink(inode_item); 1640 + inode_set_bytes(inode, btrfs_stack_inode_nbytes(inode_item)); 1641 + BTRFS_I(inode)->generation = btrfs_stack_inode_generation(inode_item); 1642 + BTRFS_I(inode)->sequence = btrfs_stack_inode_sequence(inode_item); 1643 + inode->i_rdev = 0; 1644 + *rdev = btrfs_stack_inode_rdev(inode_item); 1645 + BTRFS_I(inode)->flags = btrfs_stack_inode_flags(inode_item); 1646 + 1647 + tspec = btrfs_inode_atime(inode_item); 1648 + inode->i_atime.tv_sec = btrfs_stack_timespec_sec(tspec); 1649 + inode->i_atime.tv_nsec 
= btrfs_stack_timespec_nsec(tspec); 1650 + 1651 + tspec = btrfs_inode_mtime(inode_item); 1652 + inode->i_mtime.tv_sec = btrfs_stack_timespec_sec(tspec); 1653 + inode->i_mtime.tv_nsec = btrfs_stack_timespec_nsec(tspec); 1654 + 1655 + tspec = btrfs_inode_ctime(inode_item); 1656 + inode->i_ctime.tv_sec = btrfs_stack_timespec_sec(tspec); 1657 + inode->i_ctime.tv_nsec = btrfs_stack_timespec_nsec(tspec); 1658 + 1659 + inode->i_generation = BTRFS_I(inode)->generation; 1660 + BTRFS_I(inode)->index_cnt = (u64)-1; 1661 + 1662 + mutex_unlock(&delayed_node->mutex); 1663 + btrfs_release_delayed_node(delayed_node); 1664 + return 0; 1619 1665 } 1620 1666 1621 1667 int btrfs_delayed_update_inode(struct btrfs_trans_handle *trans,
+1
fs/btrfs/delayed-inode.h
··· 119 119 120 120 int btrfs_delayed_update_inode(struct btrfs_trans_handle *trans, 121 121 struct btrfs_root *root, struct inode *inode); 122 + int btrfs_fill_inode(struct inode *inode, u32 *rdev); 122 123 123 124 /* Used for drop dead root */ 124 125 void btrfs_kill_all_delayed_nodes(struct btrfs_root *root);
+2 -2
fs/btrfs/extent-tree.c
··· 4842 4842 u64 num_bytes, u64 empty_size, 4843 4843 u64 search_start, u64 search_end, 4844 4844 u64 hint_byte, struct btrfs_key *ins, 4845 - int data) 4845 + u64 data) 4846 4846 { 4847 4847 int ret = 0; 4848 4848 struct btrfs_root *root = orig_root->fs_info->extent_root; ··· 4869 4869 4870 4870 space_info = __find_space_info(root->fs_info, data); 4871 4871 if (!space_info) { 4872 - printk(KERN_ERR "No space info for %d\n", data); 4872 + printk(KERN_ERR "No space info for %llu\n", data); 4873 4873 return -ENOSPC; 4874 4874 } 4875 4875
+6 -3
fs/btrfs/free-space-cache.c
··· 1893 1893 1894 1894 while ((node = rb_last(&ctl->free_space_offset)) != NULL) { 1895 1895 info = rb_entry(node, struct btrfs_free_space, offset_index); 1896 - unlink_free_space(ctl, info); 1897 - kfree(info->bitmap); 1898 - kmem_cache_free(btrfs_free_space_cachep, info); 1896 + if (!info->bitmap) { 1897 + unlink_free_space(ctl, info); 1898 + kmem_cache_free(btrfs_free_space_cachep, info); 1899 + } else { 1900 + free_bitmap(ctl, info); 1901 + } 1899 1902 if (need_resched()) { 1900 1903 spin_unlock(&ctl->tree_lock); 1901 1904 cond_resched();
+11 -2
fs/btrfs/inode.c
··· 2509 2509 int maybe_acls; 2510 2510 u32 rdev; 2511 2511 int ret; 2512 + bool filled = false; 2513 + 2514 + ret = btrfs_fill_inode(inode, &rdev); 2515 + if (!ret) 2516 + filled = true; 2512 2517 2513 2518 path = btrfs_alloc_path(); 2514 2519 BUG_ON(!path); ··· 2525 2520 goto make_bad; 2526 2521 2527 2522 leaf = path->nodes[0]; 2523 + 2524 + if (filled) 2525 + goto cache_acl; 2526 + 2528 2527 inode_item = btrfs_item_ptr(leaf, path->slots[0], 2529 2528 struct btrfs_inode_item); 2530 2529 if (!leaf->map_token) ··· 2565 2556 2566 2557 BTRFS_I(inode)->index_cnt = (u64)-1; 2567 2558 BTRFS_I(inode)->flags = btrfs_inode_flags(leaf, inode_item); 2568 - 2559 + cache_acl: 2569 2560 /* 2570 2561 * try to precache a NULL acl entry for files that don't have 2571 2562 * any xattrs or acls ··· 2581 2572 } 2582 2573 2583 2574 btrfs_free_path(path); 2584 - inode_item = NULL; 2585 2575 2586 2576 switch (inode->i_mode & S_IFMT) { 2587 2577 case S_IFREG: ··· 4528 4520 inode_tree_add(inode); 4529 4521 4530 4522 trace_btrfs_inode_new(inode); 4523 + btrfs_set_inode_last_trans(trans, inode); 4531 4524 4532 4525 return inode; 4533 4526 fail:
+1 -1
fs/cifs/Kconfig
··· 156 156 157 157 config CIFS_NFSD_EXPORT 158 158 bool "Allow nfsd to export CIFS file system (EXPERIMENTAL)" 159 - depends on CIFS && EXPERIMENTAL 159 + depends on CIFS && EXPERIMENTAL && BROKEN 160 160 help 161 161 Allows NFS server to export a CIFS mounted share (nfsd over cifs)
+1
fs/cifs/cifs_fs_sb.h
··· 42 42 #define CIFS_MOUNT_MULTIUSER 0x20000 /* multiuser mount */ 43 43 #define CIFS_MOUNT_STRICT_IO 0x40000 /* strict cache mode */ 44 44 #define CIFS_MOUNT_RWPIDFORWARD 0x80000 /* use pid forwarding for rw */ 45 + #define CIFS_MOUNT_POSIXACL 0x100000 /* mirror of MS_POSIXACL in mnt_cifs_flags */ 45 46 46 47 struct cifs_sb_info { 47 48 struct rb_root tlink_tree;
+66 -92
fs/cifs/cifsfs.c
··· 104 104 } 105 105 106 106 static int 107 - cifs_read_super(struct super_block *sb, struct smb_vol *volume_info, 108 - const char *devname, int silent) 107 + cifs_read_super(struct super_block *sb) 109 108 { 110 109 struct inode *inode; 111 110 struct cifs_sb_info *cifs_sb; ··· 112 113 113 114 cifs_sb = CIFS_SB(sb); 114 115 115 - spin_lock_init(&cifs_sb->tlink_tree_lock); 116 - cifs_sb->tlink_tree = RB_ROOT; 116 + if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIXACL) 117 + sb->s_flags |= MS_POSIXACL; 117 118 118 - rc = bdi_setup_and_register(&cifs_sb->bdi, "cifs", BDI_CAP_MAP_COPY); 119 - if (rc) 120 - return rc; 119 + if (cifs_sb_master_tcon(cifs_sb)->ses->capabilities & CAP_LARGE_FILES) 120 + sb->s_maxbytes = MAX_LFS_FILESIZE; 121 + else 122 + sb->s_maxbytes = MAX_NON_LFS; 121 123 122 - cifs_sb->bdi.ra_pages = default_backing_dev_info.ra_pages; 123 - 124 - rc = cifs_mount(sb, cifs_sb, volume_info, devname); 125 - 126 - if (rc) { 127 - if (!silent) 128 - cERROR(1, "cifs_mount failed w/return code = %d", rc); 129 - goto out_mount_failed; 130 - } 124 + /* BB FIXME fix time_gran to be larger for LANMAN sessions */ 125 + sb->s_time_gran = 100; 131 126 132 127 sb->s_magic = CIFS_MAGIC_NUMBER; 133 128 sb->s_op = &cifs_super_ops; ··· 163 170 if (inode) 164 171 iput(inode); 165 172 166 - cifs_umount(sb, cifs_sb); 167 - 168 - out_mount_failed: 169 - bdi_destroy(&cifs_sb->bdi); 170 173 return rc; 171 174 } 172 175 173 - static void 174 - cifs_put_super(struct super_block *sb) 176 + static void cifs_kill_sb(struct super_block *sb) 175 177 { 176 - int rc = 0; 177 - struct cifs_sb_info *cifs_sb; 178 - 179 - cFYI(1, "In cifs_put_super"); 180 - cifs_sb = CIFS_SB(sb); 181 - if (cifs_sb == NULL) { 182 - cFYI(1, "Empty cifs superblock info passed to unmount"); 183 - return; 184 - } 185 - 186 - rc = cifs_umount(sb, cifs_sb); 187 - if (rc) 188 - cERROR(1, "cifs_umount failed with return code %d", rc); 189 - if (cifs_sb->mountdata) { 190 - kfree(cifs_sb->mountdata); 191 - 
cifs_sb->mountdata = NULL; 192 - } 193 - 194 - unload_nls(cifs_sb->local_nls); 195 - bdi_destroy(&cifs_sb->bdi); 196 - kfree(cifs_sb); 178 + struct cifs_sb_info *cifs_sb = CIFS_SB(sb); 179 + kill_anon_super(sb); 180 + cifs_umount(cifs_sb); 197 181 } 198 182 199 183 static int ··· 518 548 } 519 549 520 550 static const struct super_operations cifs_super_ops = { 521 - .put_super = cifs_put_super, 522 551 .statfs = cifs_statfs, 523 552 .alloc_inode = cifs_alloc_inode, 524 553 .destroy_inode = cifs_destroy_inode, ··· 554 585 full_path = cifs_build_path_to_root(vol, cifs_sb, 555 586 cifs_sb_master_tcon(cifs_sb)); 556 587 if (full_path == NULL) 557 - return NULL; 588 + return ERR_PTR(-ENOMEM); 558 589 559 590 cFYI(1, "Get root dentry for %s", full_path); 560 591 ··· 583 614 dchild = d_alloc(dparent, &name); 584 615 if (dchild == NULL) { 585 616 dput(dparent); 586 - dparent = NULL; 617 + dparent = ERR_PTR(-ENOMEM); 587 618 goto out; 588 619 } 589 620 } ··· 601 632 if (rc) { 602 633 dput(dchild); 603 634 dput(dparent); 604 - dparent = NULL; 635 + dparent = ERR_PTR(rc); 605 636 goto out; 606 637 } 607 638 alias = d_materialise_unique(dchild, inode); ··· 609 640 dput(dchild); 610 641 if (IS_ERR(alias)) { 611 642 dput(dparent); 612 - dparent = NULL; 643 + dparent = ERR_PTR(-EINVAL); /* XXX */ 613 644 goto out; 614 645 } 615 646 dchild = alias; ··· 627 658 _FreeXid(xid); 628 659 kfree(full_path); 629 660 return dparent; 661 + } 662 + 663 + static int cifs_set_super(struct super_block *sb, void *data) 664 + { 665 + struct cifs_mnt_data *mnt_data = data; 666 + sb->s_fs_info = mnt_data->cifs_sb; 667 + return set_anon_super(sb, NULL); 630 668 } 631 669 632 670 static struct dentry * ··· 656 680 cifs_sb = kzalloc(sizeof(struct cifs_sb_info), GFP_KERNEL); 657 681 if (cifs_sb == NULL) { 658 682 root = ERR_PTR(-ENOMEM); 659 - goto out; 683 + goto out_nls; 684 + } 685 + 686 + cifs_sb->mountdata = kstrndup(data, PAGE_SIZE, GFP_KERNEL); 687 + if (cifs_sb->mountdata == NULL) { 688 + root 
= ERR_PTR(-ENOMEM); 689 + goto out_cifs_sb; 660 690 } 661 691 662 692 cifs_setup_cifs_sb(volume_info, cifs_sb); 693 + 694 + rc = cifs_mount(cifs_sb, volume_info); 695 + if (rc) { 696 + if (!(flags & MS_SILENT)) 697 + cERROR(1, "cifs_mount failed w/return code = %d", rc); 698 + root = ERR_PTR(rc); 699 + goto out_mountdata; 700 + } 663 701 664 702 mnt_data.vol = volume_info; 665 703 mnt_data.cifs_sb = cifs_sb; 666 704 mnt_data.flags = flags; 667 705 668 - sb = sget(fs_type, cifs_match_super, set_anon_super, &mnt_data); 706 + sb = sget(fs_type, cifs_match_super, cifs_set_super, &mnt_data); 669 707 if (IS_ERR(sb)) { 670 708 root = ERR_CAST(sb); 671 - goto out_cifs_sb; 709 + cifs_umount(cifs_sb); 710 + goto out; 672 711 } 673 712 674 - if (sb->s_fs_info) { 713 + if (sb->s_root) { 675 714 cFYI(1, "Use existing superblock"); 676 - goto out_shared; 715 + cifs_umount(cifs_sb); 716 + } else { 717 + sb->s_flags = flags; 718 + /* BB should we make this contingent on mount parm? */ 719 + sb->s_flags |= MS_NODIRATIME | MS_NOATIME; 720 + 721 + rc = cifs_read_super(sb); 722 + if (rc) { 723 + root = ERR_PTR(rc); 724 + goto out_super; 725 + } 726 + 727 + sb->s_flags |= MS_ACTIVE; 677 728 } 678 - 679 - /* 680 - * Copy mount params for use in submounts. Better to do 681 - * the copy here and deal with the error before cleanup gets 682 - * complicated post-mount. 683 - */ 684 - cifs_sb->mountdata = kstrndup(data, PAGE_SIZE, GFP_KERNEL); 685 - if (cifs_sb->mountdata == NULL) { 686 - root = ERR_PTR(-ENOMEM); 687 - goto out_super; 688 - } 689 - 690 - sb->s_flags = flags; 691 - /* BB should we make this contingent on mount parm? */ 692 - sb->s_flags |= MS_NODIRATIME | MS_NOATIME; 693 - sb->s_fs_info = cifs_sb; 694 - 695 - rc = cifs_read_super(sb, volume_info, dev_name, 696 - flags & MS_SILENT ? 
1 : 0); 697 - if (rc) { 698 - root = ERR_PTR(rc); 699 - goto out_super; 700 - } 701 - 702 - sb->s_flags |= MS_ACTIVE; 703 729 704 730 root = cifs_get_root(volume_info, sb); 705 - if (root == NULL) 731 + if (IS_ERR(root)) 706 732 goto out_super; 707 733 708 734 cFYI(1, "dentry root is: %p", root); 709 735 goto out; 710 736 711 - out_shared: 712 - root = cifs_get_root(volume_info, sb); 713 - if (root) 714 - cFYI(1, "dentry root is: %p", root); 715 - goto out; 716 - 717 737 out_super: 718 - kfree(cifs_sb->mountdata); 719 738 deactivate_locked_super(sb); 720 - 721 - out_cifs_sb: 722 - unload_nls(cifs_sb->local_nls); 723 - kfree(cifs_sb); 724 - 725 739 out: 726 740 cifs_cleanup_volume_info(&volume_info); 727 741 return root; 742 + 743 + out_mountdata: 744 + kfree(cifs_sb->mountdata); 745 + out_cifs_sb: 746 + kfree(cifs_sb); 747 + out_nls: 748 + unload_nls(volume_info->local_nls); 749 + goto out; 728 750 } 729 751 730 752 static ssize_t cifs_file_aio_write(struct kiocb *iocb, const struct iovec *iov, ··· 811 837 .owner = THIS_MODULE, 812 838 .name = "cifs", 813 839 .mount = cifs_do_mount, 814 - .kill_sb = kill_anon_super, 840 + .kill_sb = cifs_kill_sb, 815 841 /* .fs_flags */ 816 842 }; 817 843 const struct inode_operations cifs_dir_inode_ops = {
+4 -4
fs/cifs/cifsproto.h
··· 157 157 extern void cifs_cleanup_volume_info(struct smb_vol **pvolume_info); 158 158 extern int cifs_setup_volume_info(struct smb_vol **pvolume_info, 159 159 char *mount_data, const char *devname); 160 - extern int cifs_mount(struct super_block *, struct cifs_sb_info *, 161 - struct smb_vol *, const char *); 162 - extern int cifs_umount(struct super_block *, struct cifs_sb_info *); 160 + extern int cifs_mount(struct cifs_sb_info *, struct smb_vol *); 161 + extern void cifs_umount(struct cifs_sb_info *); 163 162 extern void cifs_dfs_release_automount_timer(void); 164 163 void cifs_proc_init(void); 165 164 void cifs_proc_clean(void); ··· 217 218 struct dfs_info3_param **preferrals, 218 219 int remap); 219 220 extern void reset_cifs_unix_caps(int xid, struct cifs_tcon *tcon, 220 - struct super_block *sb, struct smb_vol *vol); 221 + struct cifs_sb_info *cifs_sb, 222 + struct smb_vol *vol); 221 223 extern int CIFSSMBQFSInfo(const int xid, struct cifs_tcon *tcon, 222 224 struct kstatfs *FSData); 223 225 extern int SMBOldQFSInfo(const int xid, struct cifs_tcon *tcon,
+53 -35
fs/cifs/connect.c
··· 2546 2546 } 2547 2547 2548 2548 void reset_cifs_unix_caps(int xid, struct cifs_tcon *tcon, 2549 - struct super_block *sb, struct smb_vol *vol_info) 2549 + struct cifs_sb_info *cifs_sb, struct smb_vol *vol_info) 2550 2550 { 2551 2551 /* if we are reconnecting then should we check to see if 2552 2552 * any requested capabilities changed locally e.g. via ··· 2600 2600 cap &= ~CIFS_UNIX_POSIX_ACL_CAP; 2601 2601 else if (CIFS_UNIX_POSIX_ACL_CAP & cap) { 2602 2602 cFYI(1, "negotiated posix acl support"); 2603 - if (sb) 2604 - sb->s_flags |= MS_POSIXACL; 2603 + if (cifs_sb) 2604 + cifs_sb->mnt_cifs_flags |= 2605 + CIFS_MOUNT_POSIXACL; 2605 2606 } 2606 2607 2607 2608 if (vol_info && vol_info->posix_paths == 0) 2608 2609 cap &= ~CIFS_UNIX_POSIX_PATHNAMES_CAP; 2609 2610 else if (cap & CIFS_UNIX_POSIX_PATHNAMES_CAP) { 2610 2611 cFYI(1, "negotiate posix pathnames"); 2611 - if (sb) 2612 - CIFS_SB(sb)->mnt_cifs_flags |= 2612 + if (cifs_sb) 2613 + cifs_sb->mnt_cifs_flags |= 2613 2614 CIFS_MOUNT_POSIX_PATHS; 2614 2615 } 2615 2616 2616 - if (sb && (CIFS_SB(sb)->rsize > 127 * 1024)) { 2617 + if (cifs_sb && (cifs_sb->rsize > 127 * 1024)) { 2617 2618 if ((cap & CIFS_UNIX_LARGE_READ_CAP) == 0) { 2618 - CIFS_SB(sb)->rsize = 127 * 1024; 2619 + cifs_sb->rsize = 127 * 1024; 2619 2620 cFYI(DBG2, "larger reads not supported by srv"); 2620 2621 } 2621 2622 } ··· 2662 2661 struct cifs_sb_info *cifs_sb) 2663 2662 { 2664 2663 INIT_DELAYED_WORK(&cifs_sb->prune_tlinks, cifs_prune_tlinks); 2664 + 2665 + spin_lock_init(&cifs_sb->tlink_tree_lock); 2666 + cifs_sb->tlink_tree = RB_ROOT; 2665 2667 2666 2668 if (pvolume_info->rsize > CIFSMaxBufSize) { 2667 2669 cERROR(1, "rsize %d too large, using MaxBufSize", ··· 2754 2750 2755 2751 /* 2756 2752 * When the server supports very large writes via POSIX extensions, we can 2757 - * allow up to 2^24 - PAGE_CACHE_SIZE. 2753 + * allow up to 2^24-1, minus the size of a WRITE_AND_X header, not including 2754 + * the RFC1001 length. 
2758 2755 *
2759 2756 * Note that this might make for "interesting" allocation problems during
2760 - * writeback however (as we have to allocate an array of pointers for the
2761 - * pages). A 16M write means ~32kb page array with PAGE_CACHE_SIZE == 4096.
2757 + * writeback however as we have to allocate an array of pointers for the
2758 + * pages. A 16M write means ~32kb page array with PAGE_CACHE_SIZE == 4096.
2762 2759 */
2763 - #define CIFS_MAX_WSIZE ((1<<24) - PAGE_CACHE_SIZE)
2760 + #define CIFS_MAX_WSIZE ((1<<24) - 1 - sizeof(WRITE_REQ) + 4)
2764 2761
2765 2762 /*
2766 - * When the server doesn't allow large posix writes, default to a wsize of
2767 - * 128k - PAGE_CACHE_SIZE -- one page less than the largest frame size
2768 - * described in RFC1001. This allows space for the header without going over
2769 - * that by default.
2763 + * When the server doesn't allow large posix writes, only allow a wsize of
2764 + * 128k minus the size of the WRITE_AND_X header. That allows for a write up
2765 + * to the maximum size described by RFC1002.
2770 2766 */
2771 - #define CIFS_MAX_RFC1001_WSIZE (128 * 1024 - PAGE_CACHE_SIZE)
2767 + #define CIFS_MAX_RFC1002_WSIZE (128 * 1024 - sizeof(WRITE_REQ) + 4)
2772 2768
2773 2769 /*
2774 2770 * The default wsize is 1M. find_get_pages seems to return a maximum of 256
···
2787 2783
2788 2784 /* can server support 24-bit write sizes? (via UNIX extensions) */
2789 2785 if (!tcon->unix_ext || !(unix_cap & CIFS_UNIX_LARGE_WRITE_CAP))
2790 - wsize = min_t(unsigned int, wsize, CIFS_MAX_RFC1001_WSIZE);
2786 + wsize = min_t(unsigned int, wsize, CIFS_MAX_RFC1002_WSIZE);
2791 2787
2792 - /* no CAP_LARGE_WRITE_X? Limit it to 16 bits */
2793 - if (!(server->capabilities & CAP_LARGE_WRITE_X))
2794 - wsize = min_t(unsigned int, wsize, USHRT_MAX);
2788 + /*
2789 + * no CAP_LARGE_WRITE_X or is signing enabled without CAP_UNIX set?
2790 + * Limit it to max buffer offered by the server, minus the size of the
2791 + * WRITEX header, not including the 4 byte RFC1001 length.
2792 + */
2793 + if (!(server->capabilities & CAP_LARGE_WRITE_X) ||
2794 + (!(server->capabilities & CAP_UNIX) &&
2795 + (server->sec_mode & (SECMODE_SIGN_ENABLED|SECMODE_SIGN_REQUIRED))))
2796 + wsize = min_t(unsigned int, wsize,
2797 + server->maxBuf - sizeof(WRITE_REQ) + 4);
2795 2798
2796 2799 /* hard limit of CIFS_MAX_WSIZE */
2797 2800 wsize = min_t(unsigned int, wsize, CIFS_MAX_WSIZE);
···
2948 2937
2949 2938 if (volume_info->nullauth) {
2950 2939 cFYI(1, "null user");
2951 - volume_info->username = "";
2940 + volume_info->username = kzalloc(1, GFP_KERNEL);
2941 + if (volume_info->username == NULL) {
2942 + rc = -ENOMEM;
2943 + goto out;
2944 + }
2952 2945 } else if (volume_info->username) {
2953 2946 /* BB fixme parse for domain name here */
2954 2947 cFYI(1, "Username: %s", volume_info->username);
···
2986 2971 }
2987 2972
2988 2973 int
2989 - cifs_mount(struct super_block *sb, struct cifs_sb_info *cifs_sb,
2990 - struct smb_vol *volume_info, const char *devname)
2974 + cifs_mount(struct cifs_sb_info *cifs_sb, struct smb_vol *volume_info)
2991 2975 {
2992 2976 int rc = 0;
2993 2977 int xid;
···
2997 2983 struct tcon_link *tlink;
2998 2984 #ifdef CONFIG_CIFS_DFS_UPCALL
2999 2985 int referral_walks_count = 0;
2986 +
2987 + rc = bdi_setup_and_register(&cifs_sb->bdi, "cifs", BDI_CAP_MAP_COPY);
2988 + if (rc)
2989 + return rc;
2990 +
2991 + cifs_sb->bdi.ra_pages = default_backing_dev_info.ra_pages;
2992 +
3000 2993 try_mount_again:
3001 2994 /* cleanup activities if we're chasing a referral */
3002 2995 if (referral_walks_count) {
···
3028 3007 srvTcp = cifs_get_tcp_session(volume_info);
3029 3008 if (IS_ERR(srvTcp)) {
3030 3009 rc = PTR_ERR(srvTcp);
3010 + bdi_destroy(&cifs_sb->bdi);
3031 3011 goto out;
3032 3012 }
3033 3013
···
3039 3017 pSesInfo = NULL;
3040 3018 goto mount_fail_check;
3041 3019 }
3042 -
3043 - if (pSesInfo->capabilities & CAP_LARGE_FILES)
3044 - sb->s_maxbytes = MAX_LFS_FILESIZE;
3045 - else
3046 - sb->s_maxbytes = MAX_NON_LFS;
3047 -
3048 - /* BB FIXME fix time_gran to be larger for LANMAN sessions */
3049 - sb->s_time_gran = 100;
3050 3020
3051 3021 /* search for existing tcon to this server share */
3052 3022 tcon = cifs_get_tcon(pSesInfo, volume_info);
···
3052 3038 if (tcon->ses->capabilities & CAP_UNIX) {
3053 3039 /* reset of caps checks mount to see if unix extensions
3054 3040 disabled for just this mount */
3055 - reset_cifs_unix_caps(xid, tcon, sb, volume_info);
3041 + reset_cifs_unix_caps(xid, tcon, cifs_sb, volume_info);
3056 3042 if ((tcon->ses->server->tcpStatus == CifsNeedReconnect) &&
3057 3043 (le64_to_cpu(tcon->fsUnixInfo.Capability) &
3058 3044 CIFS_UNIX_TRANSPORT_ENCRYPTION_MANDATORY_CAP)) {
···
3175 3161 cifs_put_smb_ses(pSesInfo);
3176 3162 else
3177 3163 cifs_put_tcp_session(srvTcp);
3164 + bdi_destroy(&cifs_sb->bdi);
3178 3165 goto out;
3179 3166 }
3180 3167
···
3350 3335 return rc;
3351 3336 }
3352 3337
3353 - int
3354 - cifs_umount(struct super_block *sb, struct cifs_sb_info *cifs_sb)
3338 + void
3339 + cifs_umount(struct cifs_sb_info *cifs_sb)
3355 3340 {
3356 3341 struct rb_root *root = &cifs_sb->tlink_tree;
3357 3342 struct rb_node *node;
···
3372 3357 }
3373 3358 spin_unlock(&cifs_sb->tlink_tree_lock);
3374 3359
3375 - return 0;
3360 + bdi_destroy(&cifs_sb->bdi);
3361 + kfree(cifs_sb->mountdata);
3362 + unload_nls(cifs_sb->local_nls);
3363 + kfree(cifs_sb);
3376 3364 }
3377 3365
3378 3366 int cifs_negotiate_protocol(unsigned int xid, struct cifs_ses *ses)
+2 -4
fs/cifs/smbencrypt.c
···
90 90 sg_init_one(&sgout, out, 8);
91 91
92 92 rc = crypto_blkcipher_encrypt(&desc, &sgout, &sgin, 8);
93 - if (rc) {
93 + if (rc)
94 94 cERROR(1, "could not encrypt crypt key rc: %d\n", rc);
95 - crypto_free_blkcipher(tfm_des);
96 - goto smbhash_err;
97 - }
98 95
96 + crypto_free_blkcipher(tfm_des);
99 97 smbhash_err:
100 98 return rc;
101 99 }
+6 -3
fs/ext4/ext4_extents.h
···
125 125 * positive retcode - signal for ext4_ext_walk_space(), see below
126 126 * callback must return valid extent (passed or newly created)
127 127 */
128 - typedef int (*ext_prepare_callback)(struct inode *, struct ext4_ext_path *,
128 + typedef int (*ext_prepare_callback)(struct inode *, ext4_lblk_t,
129 129 struct ext4_ext_cache *,
130 130 struct ext4_extent *, void *);
131 131
···
133 133 #define EXT_BREAK 1
134 134 #define EXT_REPEAT 2
135 135
136 - /* Maximum logical block in a file; ext4_extent's ee_block is __le32 */
137 - #define EXT_MAX_BLOCK 0xffffffff
136 + /*
137 + * Maximum number of logical blocks in a file; ext4_extent's ee_block is
138 + * __le32.
139 + */
140 + #define EXT_MAX_BLOCKS 0xffffffff
138 141
139 142 /*
140 143 * EXT_INIT_MAX_LEN is the maximum number of blocks we can have in an
+20 -22
fs/ext4/extents.c
···
1408 1408
1409 1409 /*
1410 1410 * ext4_ext_next_allocated_block:
1411 - * returns allocated block in subsequent extent or EXT_MAX_BLOCK.
1411 + * returns allocated block in subsequent extent or EXT_MAX_BLOCKS.
1412 1412 * NOTE: it considers block number from index entry as
1413 1413 * allocated block. Thus, index entries have to be consistent
1414 1414 * with leaves.
···
1422 1422 depth = path->p_depth;
1423 1423
1424 1424 if (depth == 0 && path->p_ext == NULL)
1425 - return EXT_MAX_BLOCK;
1425 + return EXT_MAX_BLOCKS;
1426 1426
1427 1427 while (depth >= 0) {
1428 1428 if (depth == path->p_depth) {
···
1439 1439 depth--;
1440 1440 }
1441 1441
1442 - return EXT_MAX_BLOCK;
1442 + return EXT_MAX_BLOCKS;
1443 1443 }
1444 1444
1445 1445 /*
1446 1446 * ext4_ext_next_leaf_block:
1447 - * returns first allocated block from next leaf or EXT_MAX_BLOCK
1447 + * returns first allocated block from next leaf or EXT_MAX_BLOCKS
1448 1448 */
1449 1449 static ext4_lblk_t ext4_ext_next_leaf_block(struct inode *inode,
1450 1450 struct ext4_ext_path *path)
···
1456 1456
1457 1457 /* zero-tree has no leaf blocks at all */
1458 1458 if (depth == 0)
1459 - return EXT_MAX_BLOCK;
1459 + return EXT_MAX_BLOCKS;
1460 1460
1461 1461 /* go to index block */
1462 1462 depth--;
···
1469 1469 depth--;
1470 1470 }
1471 1471
1472 - return EXT_MAX_BLOCK;
1472 + return EXT_MAX_BLOCKS;
1473 1473 }
1474 1474
1475 1475 /*
···
1677 1677 */
1678 1678 if (b2 < b1) {
1679 1679 b2 = ext4_ext_next_allocated_block(path);
1680 - if (b2 == EXT_MAX_BLOCK)
1680 + if (b2 == EXT_MAX_BLOCKS)
1681 1681 goto out;
1682 1682 }
1683 1683
1684 1684 /* check for wrap through zero on extent logical start block*/
1685 1685 if (b1 + len1 < b1) {
1686 - len1 = EXT_MAX_BLOCK - b1;
1686 + len1 = EXT_MAX_BLOCKS - b1;
1687 1687 newext->ee_len = cpu_to_le16(len1);
1688 1688 ret = 1;
1689 1689 }
···
1767 1767 fex = EXT_LAST_EXTENT(eh);
1768 1768 next = ext4_ext_next_leaf_block(inode, path);
1769 1769 if (le32_to_cpu(newext->ee_block) > le32_to_cpu(fex->ee_block)
1770 - && next != EXT_MAX_BLOCK) {
1770 + && next != EXT_MAX_BLOCKS) {
1771 1771 ext_debug("next leaf block - %d\n", next);
1772 1772 BUG_ON(npath != NULL);
1773 1773 npath = ext4_ext_find_extent(inode, next, NULL);
···
1887 1887 BUG_ON(func == NULL);
1888 1888 BUG_ON(inode == NULL);
1889 1889
1890 - while (block < last && block != EXT_MAX_BLOCK) {
1890 + while (block < last && block != EXT_MAX_BLOCKS) {
1891 1891 num = last - block;
1892 1892 /* find extent for this block */
1893 1893 down_read(&EXT4_I(inode)->i_data_sem);
···
1958 1958 err = -EIO;
1959 1959 break;
1960 1960 }
1961 - err = func(inode, path, &cbex, ex, cbdata);
1961 + err = func(inode, next, &cbex, ex, cbdata);
1962 1962 ext4_ext_drop_refs(path);
1963 1963
1964 1964 if (err < 0)
···
2020 2020 if (ex == NULL) {
2021 2021 /* there is no extent yet, so gap is [0;-] */
2022 2022 lblock = 0;
2023 - len = EXT_MAX_BLOCK;
2023 + len = EXT_MAX_BLOCKS;
2024 2024 ext_debug("cache gap(whole file):");
2025 2025 } else if (block < le32_to_cpu(ex->ee_block)) {
2026 2026 lblock = block;
···
2350 2350 * never happen because at least one of the end points
2351 2351 * needs to be on the edge of the extent.
2352 2352 */
2353 - if (end == EXT_MAX_BLOCK) {
2353 + if (end == EXT_MAX_BLOCKS - 1) {
2354 2354 ext_debug(" bad truncate %u:%u\n",
2355 2355 start, end);
2356 2356 block = 0;
···
2398 2398 * If this is a truncate, this condition
2399 2399 * should never happen
2400 2400 */
2401 - if (end == EXT_MAX_BLOCK) {
2401 + if (end == EXT_MAX_BLOCKS - 1) {
2402 2402 ext_debug(" bad truncate %u:%u\n",
2403 2403 start, end);
2404 2404 err = -EIO;
···
2478 2478 * we need to remove it from the leaf
2479 2479 */
2480 2480 if (num == 0) {
2481 - if (end != EXT_MAX_BLOCK) {
2481 + if (end != EXT_MAX_BLOCKS - 1) {
2482 2482 /*
2483 2483 * For hole punching, we need to scoot all the
2484 2484 * extents up when an extent is removed so that
···
3699 3699
3700 3700 last_block = (inode->i_size + sb->s_blocksize - 1)
3701 3701 >> EXT4_BLOCK_SIZE_BITS(sb);
3702 - err = ext4_ext_remove_space(inode, last_block, EXT_MAX_BLOCK);
3702 + err = ext4_ext_remove_space(inode, last_block, EXT_MAX_BLOCKS - 1);
3703 3703
3704 3704 /* In a multi-transaction truncate, we only make the final
3705 3705 * transaction synchronous.
···
3914 3914 /*
3915 3915 * Callback function called for each extent to gather FIEMAP information.
3916 3916 */
3917 - static int ext4_ext_fiemap_cb(struct inode *inode, struct ext4_ext_path *path,
3917 + static int ext4_ext_fiemap_cb(struct inode *inode, ext4_lblk_t next,
3918 3918 struct ext4_ext_cache *newex, struct ext4_extent *ex,
3919 3919 void *data)
3920 3920 {
3921 3921 __u64 logical;
3922 3922 __u64 physical;
3923 3923 __u64 length;
3924 - loff_t size;
3925 3924 __u32 flags = 0;
3926 3925 int ret = 0;
3927 3926 struct fiemap_extent_info *fieinfo = data;
···
4102 4103 if (ex && ext4_ext_is_uninitialized(ex))
4103 4104 flags |= FIEMAP_EXTENT_UNWRITTEN;
4104 4105
4105 - size = i_size_read(inode);
4106 - if (logical + length >= size)
4106 + if (next == EXT_MAX_BLOCKS)
4107 4107 flags |= FIEMAP_EXTENT_LAST;
4108 4108
4109 4109 ret = fiemap_fill_next_extent(fieinfo, logical, physical,
···
4345 4347
4346 4348 start_blk = start >> inode->i_sb->s_blocksize_bits;
4347 4349 last_blk = (start + len - 1) >> inode->i_sb->s_blocksize_bits;
4348 - if (last_blk >= EXT_MAX_BLOCK)
4349 - last_blk = EXT_MAX_BLOCK-1;
4350 + if (last_blk >= EXT_MAX_BLOCKS)
4351 + last_blk = EXT_MAX_BLOCKS-1;
4350 4352 len_blks = ((ext4_lblk_t) last_blk) - start_blk + 1;
4351 4353
4352 4354 /*
+1 -1
fs/ext4/inode.c
···
2634 2634 struct buffer_head *page_bufs = NULL;
2635 2635 struct inode *inode = page->mapping->host;
2636 2636
2637 - trace_ext4_writepage(inode, page);
2637 + trace_ext4_writepage(page);
2638 2638 size = i_size_read(inode);
2639 2639 if (page->index == size >> PAGE_CACHE_SHIFT)
2640 2640 len = size & ~PAGE_CACHE_MASK;
+4 -4
fs/ext4/mballoc.c
···
3578 3578 free += next - bit;
3579 3579
3580 3580 trace_ext4_mballoc_discard(sb, NULL, group, bit, next - bit);
3581 - trace_ext4_mb_release_inode_pa(sb, pa->pa_inode, pa,
3582 - grp_blk_start + bit, next - bit);
3581 + trace_ext4_mb_release_inode_pa(pa, grp_blk_start + bit,
3582 + next - bit);
3583 3583 mb_free_blocks(pa->pa_inode, e4b, bit, next - bit);
3584 3584 bit = next + 1;
3585 3585 }
···
3608 3608 ext4_group_t group;
3609 3609 ext4_grpblk_t bit;
3610 3610
3611 - trace_ext4_mb_release_group_pa(sb, pa);
3611 + trace_ext4_mb_release_group_pa(pa);
3612 3612 BUG_ON(pa->pa_deleted == 0);
3613 3613 ext4_get_group_no_and_offset(sb, pa->pa_pstart, &group, &bit);
3614 3614 BUG_ON(group != e4b->bd_group && pa->pa_len != 0);
···
4448 4448 * @inode: inode
4449 4449 * @block: start physical block to free
4450 4450 * @count: number of blocks to count
4451 - * @metadata: Are these metadata blocks
4451 + * @flags: flags used by ext4_free_blocks
4452 4452 */
4453 4453 void ext4_free_blocks(handle_t *handle, struct inode *inode,
4454 4454 struct buffer_head *bh, ext4_fsblk_t block,
+5 -5
fs/ext4/move_extent.c
···
1002 1002 return -EINVAL;
1003 1003 }
1004 1004
1005 - if ((orig_start > EXT_MAX_BLOCK) ||
1006 - (donor_start > EXT_MAX_BLOCK) ||
1007 - (*len > EXT_MAX_BLOCK) ||
1008 - (orig_start + *len > EXT_MAX_BLOCK)) {
1005 + if ((orig_start >= EXT_MAX_BLOCKS) ||
1006 + (donor_start >= EXT_MAX_BLOCKS) ||
1007 + (*len > EXT_MAX_BLOCKS) ||
1008 + (orig_start + *len >= EXT_MAX_BLOCKS)) {
1009 1009 ext4_debug("ext4 move extent: Can't handle over [%u] blocks "
1010 - "[ino:orig %lu, donor %lu]\n", EXT_MAX_BLOCK,
1010 + "[ino:orig %lu, donor %lu]\n", EXT_MAX_BLOCKS,
1011 1011 orig_inode->i_ino, donor_inode->i_ino);
1012 1012 return -EINVAL;
1013 1013 }
+12 -3
fs/ext4/super.c
···
2243 2243 * in the vfs. ext4 inode has 48 bits of i_block in fsblock units,
2244 2244 * so that won't be a limiting factor.
2245 2245 *
2246 + * However there is other limiting factor. We do store extents in the form
2247 + * of starting block and length, hence the resulting length of the extent
2248 + * covering maximum file size must fit into on-disk format containers as
2249 + * well. Given that length is always by 1 unit bigger than max unit (because
2250 + * we count 0 as well) we have to lower the s_maxbytes by one fs block.
2251 + *
2246 2252 * Note, this does *not* consider any metadata overhead for vfs i_blocks.
2247 2253 */
2248 2254 static loff_t ext4_max_size(int blkbits, int has_huge_files)
···
2270 2264 upper_limit <<= blkbits;
2271 2265 }
2272 2266
2273 - /* 32-bit extent-start container, ee_block */
2274 - res = 1LL << 32;
2267 + /*
2268 + * 32-bit extent-start container, ee_block. We lower the maxbytes
2269 + * by one fs block, so ee_len can cover the extent of maximum file
2270 + * size
2271 + */
2272 + res = (1LL << 32) - 1;
2275 2273 res <<= blkbits;
2276 - res -= 1;
2277 2274
2278 2275 /* Sanity check against vm- & vfs- imposed limits */
2279 2276 if (res > upper_limit)
+7
fs/inode.c
···
423 423 void end_writeback(struct inode *inode)
424 424 {
425 425 might_sleep();
426 + /*
427 + * We have to cycle tree_lock here because reclaim can be still in the
428 + * process of removing the last page (in __delete_from_page_cache())
429 + * and we must not free mapping under it.
430 + */
431 + spin_lock_irq(&inode->i_data.tree_lock);
426 432 BUG_ON(inode->i_data.nrpages);
433 + spin_unlock_irq(&inode->i_data.tree_lock);
427 434 BUG_ON(!list_empty(&inode->i_data.private_list));
428 435 BUG_ON(!(inode->i_state & I_FREEING));
429 436 BUG_ON(inode->i_state & I_CLEAR);
+16 -12
fs/jbd2/checkpoint.c
···
97 97
98 98 if (jh->b_jlist == BJ_None && !buffer_locked(bh) &&
99 99 !buffer_dirty(bh) && !buffer_write_io_error(bh)) {
100 + /*
101 + * Get our reference so that bh cannot be freed before
102 + * we unlock it
103 + */
104 + get_bh(bh);
100 105 JBUFFER_TRACE(jh, "remove from checkpoint list");
101 106 ret = __jbd2_journal_remove_checkpoint(jh) + 1;
102 107 jbd_unlock_bh_state(bh);
103 - jbd2_journal_remove_journal_head(bh);
104 108 BUFFER_TRACE(bh, "release");
105 109 __brelse(bh);
106 110 } else {
···
227 223 spin_lock(&journal->j_list_lock);
228 224 goto restart;
229 225 }
226 + get_bh(bh);
230 227 if (buffer_locked(bh)) {
231 - atomic_inc(&bh->b_count);
232 228 spin_unlock(&journal->j_list_lock);
233 229 jbd_unlock_bh_state(bh);
234 230 wait_on_buffer(bh);
···
247 243 */
248 244 released = __jbd2_journal_remove_checkpoint(jh);
249 245 jbd_unlock_bh_state(bh);
250 - jbd2_journal_remove_journal_head(bh);
251 246 __brelse(bh);
252 247 }
253 248
···
287 284 int ret = 0;
288 285
289 286 if (buffer_locked(bh)) {
290 - atomic_inc(&bh->b_count);
287 + get_bh(bh);
291 288 spin_unlock(&journal->j_list_lock);
292 289 jbd_unlock_bh_state(bh);
293 290 wait_on_buffer(bh);
···
319 316 ret = 1;
320 317 if (unlikely(buffer_write_io_error(bh)))
321 318 ret = -EIO;
319 + get_bh(bh);
322 320 J_ASSERT_JH(jh, !buffer_jbddirty(bh));
323 321 BUFFER_TRACE(bh, "remove from checkpoint");
324 322 __jbd2_journal_remove_checkpoint(jh);
325 323 spin_unlock(&journal->j_list_lock);
326 324 jbd_unlock_bh_state(bh);
327 - jbd2_journal_remove_journal_head(bh);
328 325 __brelse(bh);
329 326 } else {
330 327 /*
···
557 554 /*
558 555 * journal_clean_one_cp_list
559 556 *
560 - * Find all the written-back checkpoint buffers in the given list and release them.
557 + * Find all the written-back checkpoint buffers in the given list and
558 + * release them.
561 559 *
562 560 * Called with the journal locked.
563 561 * Called with j_list_lock held.
···
667 663 * checkpoint lists.
668 664 *
669 665 * The function returns 1 if it frees the transaction, 0 otherwise.
666 + * The function can free jh and bh.
670 667 *
671 - * This function is called with the journal locked.
672 668 * This function is called with j_list_lock held.
673 669 * This function is called with jbd_lock_bh_state(jh2bh(jh))
674 670 */
···
688 684 }
689 685 journal = transaction->t_journal;
690 686
687 + JBUFFER_TRACE(jh, "removing from transaction");
691 688 __buffer_unlink(jh);
692 689 jh->b_cp_transaction = NULL;
690 + jbd2_journal_put_journal_head(jh);
693 691
694 692 if (transaction->t_checkpoint_list != NULL ||
695 693 transaction->t_checkpoint_io_list != NULL)
696 694 goto out;
697 - JBUFFER_TRACE(jh, "transaction has no more buffers");
698 695
699 696 /*
700 697 * There is one special case to worry about: if we have just pulled the
···
706 701 * The locking here around t_state is a bit sleazy.
707 702 * See the comment at the end of jbd2_journal_commit_transaction().
708 703 */
709 - if (transaction->t_state != T_FINISHED) {
710 - JBUFFER_TRACE(jh, "belongs to running/committing transaction");
704 + if (transaction->t_state != T_FINISHED)
711 705 goto out;
712 - }
713 706
714 707 /* OK, that was the last buffer for the transaction: we can now
715 708 safely remove this transaction from the log */
···
726 723 wake_up(&journal->j_wait_logspace);
727 724 ret = 1;
728 725 out:
729 - JBUFFER_TRACE(jh, "exit");
730 726 return ret;
731 727 }
732 728
···
744 742 J_ASSERT_JH(jh, buffer_dirty(jh2bh(jh)) || buffer_jbddirty(jh2bh(jh)));
745 743 J_ASSERT_JH(jh, jh->b_cp_transaction == NULL);
746 744
745 + /* Get reference for checkpointing transaction */
746 + jbd2_journal_grab_journal_head(jh2bh(jh));
747 747 jh->b_cp_transaction = transaction;
748 748
749 749 if (!transaction->t_checkpoint_list) {
+19 -14
fs/jbd2/commit.c
···
848 848 while (commit_transaction->t_forget) {
849 849 transaction_t *cp_transaction;
850 850 struct buffer_head *bh;
851 + int try_to_free = 0;
851 852
852 853 jh = commit_transaction->t_forget;
853 854 spin_unlock(&journal->j_list_lock);
854 855 bh = jh2bh(jh);
856 + /*
857 + * Get a reference so that bh cannot be freed before we are
858 + * done with it.
859 + */
860 + get_bh(bh);
855 861 jbd_lock_bh_state(bh);
856 862 J_ASSERT_JH(jh, jh->b_transaction == commit_transaction);
857 863
···
920 914 __jbd2_journal_insert_checkpoint(jh, commit_transaction);
921 915 if (is_journal_aborted(journal))
922 916 clear_buffer_jbddirty(bh);
923 - JBUFFER_TRACE(jh, "refile for checkpoint writeback");
924 - __jbd2_journal_refile_buffer(jh);
925 - jbd_unlock_bh_state(bh);
926 917 } else {
927 918 J_ASSERT_BH(bh, !buffer_dirty(bh));
928 - /* The buffer on BJ_Forget list and not jbddirty means
919 + /*
920 + * The buffer on BJ_Forget list and not jbddirty means
929 921 * it has been freed by this transaction and hence it
930 922 * could not have been reallocated until this
931 923 * transaction has committed. *BUT* it could be
932 924 * reallocated once we have written all the data to
933 925 * disk and before we process the buffer on BJ_Forget
934 - * list. */
935 - JBUFFER_TRACE(jh, "refile or unfile freed buffer");
936 - __jbd2_journal_refile_buffer(jh);
937 - if (!jh->b_transaction) {
938 - jbd_unlock_bh_state(bh);
939 - /* needs a brelse */
940 - jbd2_journal_remove_journal_head(bh);
941 - release_buffer_page(bh);
942 - } else
943 - jbd_unlock_bh_state(bh);
926 + * list.
927 + */
928 + if (!jh->b_next_transaction)
929 + try_to_free = 1;
944 930 }
931 + JBUFFER_TRACE(jh, "refile or unfile buffer");
932 + __jbd2_journal_refile_buffer(jh);
933 + jbd_unlock_bh_state(bh);
934 + if (try_to_free)
935 + release_buffer_page(bh); /* Drops bh reference */
936 + else
937 + __brelse(bh);
945 938 cond_resched_lock(&journal->j_list_lock);
946 939 }
947 940 spin_unlock(&journal->j_list_lock);
+29 -62
fs/jbd2/journal.c
···
2078 2078 * When a buffer has its BH_JBD bit set it is immune from being released by
2079 2079 * core kernel code, mainly via ->b_count.
2080 2080 *
2081 - * A journal_head may be detached from its buffer_head when the journal_head's
2082 - * b_transaction, b_cp_transaction and b_next_transaction pointers are NULL.
2083 - * Various places in JBD call jbd2_journal_remove_journal_head() to indicate that the
2084 - * journal_head can be dropped if needed.
2081 + * A journal_head is detached from its buffer_head when the journal_head's
2082 + * b_jcount reaches zero. Running transaction (b_transaction) and checkpoint
2083 + * transaction (b_cp_transaction) hold their references to b_jcount.
2085 2084 *
2086 2085 * Various places in the kernel want to attach a journal_head to a buffer_head
2087 2086 * _before_ attaching the journal_head to a transaction. To protect the
···
2093 2094 * (Attach a journal_head if needed. Increments b_jcount)
2094 2095 * struct journal_head *jh = jbd2_journal_add_journal_head(bh);
2095 2096 * ...
2097 + * (Get another reference for transaction)
2098 + * jbd2_journal_grab_journal_head(bh);
2096 2099 * jh->b_transaction = xxx;
2100 + * (Put original reference)
2097 2101 * jbd2_journal_put_journal_head(jh);
2098 - *
2099 - * Now, the journal_head's b_jcount is zero, but it is safe from being released
2100 - * because it has a non-zero b_transaction.
2101 2102 */
2102 2103
2103 2104 /*
2104 2105 * Give a buffer_head a journal_head.
2105 2106 *
2106 - * Doesn't need the journal lock.
2107 2107 * May sleep.
2108 2108 */
2109 2109 struct journal_head *jbd2_journal_add_journal_head(struct buffer_head *bh)
···
2166 2168 struct journal_head *jh = bh2jh(bh);
2167 2169
2168 2170 J_ASSERT_JH(jh, jh->b_jcount >= 0);
2169 -
2170 - get_bh(bh);
2171 - if (jh->b_jcount == 0) {
2172 - if (jh->b_transaction == NULL &&
2173 - jh->b_next_transaction == NULL &&
2174 - jh->b_cp_transaction == NULL) {
2175 - J_ASSERT_JH(jh, jh->b_jlist == BJ_None);
2176 - J_ASSERT_BH(bh, buffer_jbd(bh));
2177 - J_ASSERT_BH(bh, jh2bh(jh) == bh);
2178 - BUFFER_TRACE(bh, "remove journal_head");
2179 - if (jh->b_frozen_data) {
2180 - printk(KERN_WARNING "%s: freeing "
2181 - "b_frozen_data\n",
2182 - __func__);
2183 - jbd2_free(jh->b_frozen_data, bh->b_size);
2184 - }
2185 - if (jh->b_committed_data) {
2186 - printk(KERN_WARNING "%s: freeing "
2187 - "b_committed_data\n",
2188 - __func__);
2189 - jbd2_free(jh->b_committed_data, bh->b_size);
2190 - }
2191 - bh->b_private = NULL;
2192 - jh->b_bh = NULL; /* debug, really */
2193 - clear_buffer_jbd(bh);
2194 - __brelse(bh);
2195 - journal_free_journal_head(jh);
2196 - } else {
2197 - BUFFER_TRACE(bh, "journal_head was locked");
2198 - }
2171 + J_ASSERT_JH(jh, jh->b_transaction == NULL);
2172 + J_ASSERT_JH(jh, jh->b_next_transaction == NULL);
2173 + J_ASSERT_JH(jh, jh->b_cp_transaction == NULL);
2174 + J_ASSERT_JH(jh, jh->b_jlist == BJ_None);
2175 + J_ASSERT_BH(bh, buffer_jbd(bh));
2176 + J_ASSERT_BH(bh, jh2bh(jh) == bh);
2177 + BUFFER_TRACE(bh, "remove journal_head");
2178 + if (jh->b_frozen_data) {
2179 + printk(KERN_WARNING "%s: freeing b_frozen_data\n", __func__);
2180 + jbd2_free(jh->b_frozen_data, bh->b_size);
2199 2181 }
2182 + if (jh->b_committed_data) {
2183 + printk(KERN_WARNING "%s: freeing b_committed_data\n", __func__);
2184 + jbd2_free(jh->b_committed_data, bh->b_size);
2185 + }
2186 + bh->b_private = NULL;
2187 + jh->b_bh = NULL; /* debug, really */
2188 + clear_buffer_jbd(bh);
2189 + journal_free_journal_head(jh);
2200 2190 }
2201 2191
2202 2192 /*
2203 - * jbd2_journal_remove_journal_head(): if the buffer isn't attached to a transaction
2204 - * and has a zero b_jcount then remove and release its journal_head. If we did
2205 - * see that the buffer is not used by any transaction we also "logically"
2206 - * decrement ->b_count.
2207 - *
2208 - * We in fact take an additional increment on ->b_count as a convenience,
2209 - * because the caller usually wants to do additional things with the bh
2210 - * after calling here.
2211 - * The caller of jbd2_journal_remove_journal_head() *must* run __brelse(bh) at some
2212 - * time. Once the caller has run __brelse(), the buffer is eligible for
2213 - * reaping by try_to_free_buffers().
2214 - */
2215 - void jbd2_journal_remove_journal_head(struct buffer_head *bh)
2216 - {
2217 - jbd_lock_bh_journal_head(bh);
2218 - __journal_remove_journal_head(bh);
2219 - jbd_unlock_bh_journal_head(bh);
2220 - }
2221 -
2222 - /*
2223 - * Drop a reference on the passed journal_head. If it fell to zero then try to
2193 + * Drop a reference on the passed journal_head. If it fell to zero then
2224 2194 * release the journal_head from the buffer_head.
2225 2195 */
2226 2196 void jbd2_journal_put_journal_head(struct journal_head *jh)
···
2198 2232 jbd_lock_bh_journal_head(bh);
2199 2233 J_ASSERT_JH(jh, jh->b_jcount > 0);
2200 2234 --jh->b_jcount;
2201 - if (!jh->b_jcount && !jh->b_transaction) {
2235 + if (!jh->b_jcount) {
2202 2236 __journal_remove_journal_head(bh);
2237 + jbd_unlock_bh_journal_head(bh);
2203 2238 __brelse(bh);
2204 - }
2205 - jbd_unlock_bh_journal_head(bh);
2239 + } else
2240 + jbd_unlock_bh_journal_head(bh);
2206 2241 }
2207 2242
2208 2243 /*
+34 -33
fs/jbd2/transaction.c
···
30 30 #include <linux/module.h>
31 31
32 32 static void __jbd2_journal_temp_unlink_buffer(struct journal_head *jh);
33 + static void __jbd2_journal_unfile_buffer(struct journal_head *jh);
33 34
34 35 /*
35 36 * jbd2_get_transaction: obtain a new transaction_t object.
···
765 764 if (!jh->b_transaction) {
766 765 JBUFFER_TRACE(jh, "no transaction");
767 766 J_ASSERT_JH(jh, !jh->b_next_transaction);
768 - jh->b_transaction = transaction;
769 767 JBUFFER_TRACE(jh, "file as BJ_Reserved");
770 768 spin_lock(&journal->j_list_lock);
771 769 __jbd2_journal_file_buffer(jh, transaction, BJ_Reserved);
···
814 814 * int jbd2_journal_get_write_access() - notify intent to modify a buffer for metadata (not data) update.
815 815 * @handle: transaction to add buffer modifications to
816 816 * @bh: bh to be used for metadata writes
817 - * @credits: variable that will receive credits for the buffer
818 817 *
819 818 * Returns an error code or 0 on success.
820 819 *
···
895 896 * committed and so it's safe to clear the dirty bit.
896 897 */
897 898 clear_buffer_dirty(jh2bh(jh));
898 - jh->b_transaction = transaction;
899 -
900 899 /* first access by this transaction */
901 900 jh->b_modified = 0;
902 901
···
929 932 * non-rewindable consequences
930 933 * @handle: transaction
931 934 * @bh: buffer to undo
932 - * @credits: store the number of taken credits here (if not NULL)
933 935 *
934 936 * Sometimes there is a need to distinguish between metadata which has
935 937 * been committed to disk and that which has not. The ext3fs code uses
···
1228 1232 __jbd2_journal_file_buffer(jh, transaction, BJ_Forget);
1229 1233 } else {
1230 1234 __jbd2_journal_unfile_buffer(jh);
1231 - jbd2_journal_remove_journal_head(bh);
1232 - __brelse(bh);
1233 1235 if (!buffer_jbd(bh)) {
1234 1236 spin_unlock(&journal->j_list_lock);
1235 1237 jbd_unlock_bh_state(bh);
···
1550 1556 mark_buffer_dirty(bh); /* Expose it to the VM */
1551 1557 }
1552 1558
1553 - void __jbd2_journal_unfile_buffer(struct journal_head *jh)
1559 + /*
1560 + * Remove buffer from all transactions.
1561 + *
1562 + * Called with bh_state lock and j_list_lock
1563 + *
1564 + * jh and bh may be already freed when this function returns.
1565 + */
1566 + static void __jbd2_journal_unfile_buffer(struct journal_head *jh)
1554 1567 {
1555 1568 __jbd2_journal_temp_unlink_buffer(jh);
1556 1569 jh->b_transaction = NULL;
1570 + jbd2_journal_put_journal_head(jh);
1557 1571 }
1558 1572
1559 1573 void jbd2_journal_unfile_buffer(journal_t *journal, struct journal_head *jh)
1560 1574 {
1561 - jbd_lock_bh_state(jh2bh(jh));
1575 + struct buffer_head *bh = jh2bh(jh);
1576 +
1577 + /* Get reference so that buffer cannot be freed before we unlock it */
1578 + get_bh(bh);
1579 + jbd_lock_bh_state(bh);
1562 1580 spin_lock(&journal->j_list_lock);
1563 1581 __jbd2_journal_unfile_buffer(jh);
1564 1582 spin_unlock(&journal->j_list_lock);
1565 - jbd_unlock_bh_state(jh2bh(jh));
1583 + jbd_unlock_bh_state(bh);
1584 + __brelse(bh);
1566 1585 }
1567 1586
1568 1587 /*
···
1602 1595 if (jh->b_jlist == BJ_None) {
1603 1596 JBUFFER_TRACE(jh, "remove from checkpoint list");
1604 1597 __jbd2_journal_remove_checkpoint(jh);
1605 - jbd2_journal_remove_journal_head(bh);
1606 - __brelse(bh);
1607 1598 }
1608 1599 }
1609 1600 spin_unlock(&journal->j_list_lock);
···
1664 1659 /*
1665 1660 * We take our own ref against the journal_head here to avoid
1666 1661 * having to add tons of locking around each instance of
1667 - * jbd2_journal_remove_journal_head() and
1668 1662 * jbd2_journal_put_journal_head().
1669 1663 */
1670 1664 jh = jbd2_journal_grab_journal_head(bh);
···
1701 1697 int may_free = 1;
1702 1698 struct buffer_head *bh = jh2bh(jh);
1703 1699
1704 - __jbd2_journal_unfile_buffer(jh);
1705 -
1706 1700 if (jh->b_cp_transaction) {
1707 1701 JBUFFER_TRACE(jh, "on running+cp transaction");
1702 + __jbd2_journal_temp_unlink_buffer(jh);
1708 1703 /*
1709 1704 * We don't want to write the buffer anymore, clear the
1710 1705 * bit so that we don't confuse checks in
···
1714 1711 may_free = 0;
1715 1712 } else {
1716 1713 JBUFFER_TRACE(jh, "on running transaction");
1717 - jbd2_journal_remove_journal_head(bh);
1718 - __brelse(bh);
1714 + __jbd2_journal_unfile_buffer(jh);
1719 1715 }
1720 1716 return may_free;
1721 1717 }
···
1992 1990
1993 1991 if (jh->b_transaction)
1994 1992 __jbd2_journal_temp_unlink_buffer(jh);
1993 + else
1994 + jbd2_journal_grab_journal_head(bh);
1995 1995 jh->b_transaction = transaction;
1996 1996
1997 1997 switch (jlist) {
···
2045 2041 * already started to be used by a subsequent transaction, refile the
2046 2042 * buffer on that transaction's metadata list.
2047 2043 *
2048 - * Called under journal->j_list_lock
2049 - *
2044 + * Called under j_list_lock
2050 2045 * Called under jbd_lock_bh_state(jh2bh(jh))
2046 + *
2047 + * jh and bh may be already free when this function returns
2051 2048 */
2052 2049 void __jbd2_journal_refile_buffer(struct journal_head *jh)
2053 2050 {
···
2072 2067
2073 2068 was_dirty = test_clear_buffer_jbddirty(bh);
2074 2069 __jbd2_journal_temp_unlink_buffer(jh);
2070 + /*
2071 + * We set b_transaction here because b_next_transaction will inherit
2072 + * our jh reference and thus __jbd2_journal_file_buffer() must not
2073 + * take a new one.
2074 + */
2075 2075 jh->b_transaction = jh->b_next_transaction;
2076 2076 jh->b_next_transaction = NULL;
2077 2077 if (buffer_freed(bh))
···
2093 2083 }
2094 2084
2095 2085 /*
2096 - * For the unlocked version of this call, also make sure that any
2097 - * hanging journal_head is cleaned up if necessary.
2086 + * __jbd2_journal_refile_buffer() with necessary locking added. We take our
2087 + * bh reference so that we can safely unlock bh.
2098 2088 *
2099 - * __jbd2_journal_refile_buffer is usually called as part of a single locked
2100 - * operation on a buffer_head, in which the caller is probably going to
2101 - * be hooking the journal_head onto other lists. In that case it is up
2102 - * to the caller to remove the journal_head if necessary. For the
2103 - * unlocked jbd2_journal_refile_buffer call, the caller isn't going to be
2104 - * doing anything else to the buffer so we need to do the cleanup
2105 - * ourselves to avoid a jh leak.
2106 - *
2107 - * *** The journal_head may be freed by this call! ***
2089 + * The jh and bh may be freed by this call.
2108 2090 */
2109 2091 void jbd2_journal_refile_buffer(journal_t *journal, struct journal_head *jh)
2110 2092 {
2111 2093 struct buffer_head *bh = jh2bh(jh);
2112 2094
2095 + /* Get reference so that buffer cannot be freed before we unlock it */
2096 + get_bh(bh);
2113 2097 jbd_lock_bh_state(bh);
2114 2098 spin_lock(&journal->j_list_lock);
2115 -
2116 2099 __jbd2_journal_refile_buffer(jh);
2117 2100 jbd_unlock_bh_state(bh);
2118 - jbd2_journal_remove_journal_head(bh);
2119 -
2120 2101 spin_unlock(&journal->j_list_lock);
2121 2102 __brelse(bh);
2122 2103 }
+3 -3
fs/jfs/file.c
···
66 66 struct jfs_inode_info *ji = JFS_IP(inode);
67 67 spin_lock_irq(&ji->ag_lock);
68 68 if (ji->active_ag == -1) {
69 - ji->active_ag = ji->agno;
70 - atomic_inc(
71 - &JFS_SBI(inode->i_sb)->bmap->db_active[ji->agno]);
69 + struct jfs_sb_info *jfs_sb = JFS_SBI(inode->i_sb);
70 + ji->active_ag = BLKTOAG(addressPXD(&ji->ixpxd), jfs_sb);
71 + atomic_inc( &jfs_sb->bmap->db_active[ji->active_ag]);
72 72 }
73 73 spin_unlock_irq(&ji->ag_lock);
74 74 }
+5 -7
fs/jfs/jfs_imap.c
···
397 397 release_metapage(mp);
398 398
399 399 /* set the ag for the inode */
400 - JFS_IP(ip)->agno = BLKTOAG(agstart, sbi);
400 + JFS_IP(ip)->agstart = agstart;
401 401 JFS_IP(ip)->active_ag = -1;
402 402
403 403 return (rc);
···
901 901
902 902 /* get the allocation group for this ino.
903 903 */
904 - agno = JFS_IP(ip)->agno;
904 + agno = BLKTOAG(JFS_IP(ip)->agstart, JFS_SBI(ip->i_sb));
905 905
906 906 /* Lock the AG specific inode map information
907 907 */
···
1315 1315 static inline void
1316 1316 diInitInode(struct inode *ip, int iagno, int ino, int extno, struct iag * iagp)
1317 1317 {
1318 - struct jfs_sb_info *sbi = JFS_SBI(ip->i_sb);
1319 1318 struct jfs_inode_info *jfs_ip = JFS_IP(ip);
1320 1319
1321 1320 ip->i_ino = (iagno << L2INOSPERIAG) + ino;
1322 1321 jfs_ip->ixpxd = iagp->inoext[extno];
1323 - jfs_ip->agno = BLKTOAG(le64_to_cpu(iagp->agstart), sbi);
1322 + jfs_ip->agstart = le64_to_cpu(iagp->agstart);
1324 1323 jfs_ip->active_ag = -1;
1325 1324 }
1326 1325
···
1378 1379 */
1379 1380
1380 1381 /* get the ag number of this iag */
1381 - agno = JFS_IP(pip)->agno;
1382 + agno = BLKTOAG(JFS_IP(pip)->agstart, JFS_SBI(pip->i_sb));
1382 1383
1383 1384 if (atomic_read(&JFS_SBI(pip->i_sb)->bmap->db_active[agno])) {
1384 1385 /*
···
2920 2921 continue;
2921 2922 }
2922 2923
2923 - /* agstart that computes to the same ag is treated as same; */
2924 2924 agstart = le64_to_cpu(iagp->agstart);
2925 - /* iagp->agstart = agstart & ~(mp->db_agsize - 1); */
2926 2925 n = agstart >> mp->db_agl2size;
2926 + iagp->agstart = cpu_to_le64((s64)n << mp->db_agl2size);
2927 2927
2928 2928 /* compute backed inodes */
2929 2929 numinos = (EXTSPERIAG - le32_to_cpu(iagp->nfreeexts))
+2 -1
fs/jfs/jfs_incore.h
··· 50 50 short btindex; /* btpage entry index*/ 51 51 struct inode *ipimap; /* inode map */ 52 52 unsigned long cflag; /* commit flags */ 53 + u64 agstart; /* agstart of the containing IAG */ 53 54 u16 bxflag; /* xflag of pseudo buffer? */ 54 - unchar agno; /* ag number */ 55 + unchar pad; 55 56 signed char active_ag; /* ag currently allocating from */ 56 57 lid_t blid; /* lid of pseudo buffer? */ 57 58 lid_t atlhead; /* anonymous tlock list head */
+1 -1
fs/jfs/resize.c
··· 80 80 int log_formatted = 0; 81 81 struct inode *iplist[1]; 82 82 struct jfs_superblock *j_sb, *j_sb2; 83 - uint old_agsize; 83 + s64 old_agsize; 84 84 int agsizechanged = 0; 85 85 struct buffer_head *bh, *bh2; 86 86
+7 -1
fs/lockd/clntproc.c
··· 708 708 709 709 if (task->tk_status < 0) { 710 710 dprintk("lockd: unlock failed (err = %d)\n", -task->tk_status); 711 - goto retry_rebind; 711 + switch (task->tk_status) { 712 + case -EACCES: 713 + case -EIO: 714 + goto die; 715 + default: 716 + goto retry_rebind; 717 + } 712 718 } 713 719 if (status == NLM_LCK_DENIED_GRACE_PERIOD) { 714 720 rpc_delay(task, NLMCLNT_GRACE_WAIT);
+4 -2
fs/nfs/inode.c
··· 256 256 257 257 nfs_attr_check_mountpoint(sb, fattr); 258 258 259 - if ((fattr->valid & NFS_ATTR_FATTR_FILEID) == 0 && (fattr->valid & NFS_ATTR_FATTR_MOUNTPOINT) == 0) 259 + if (((fattr->valid & NFS_ATTR_FATTR_FILEID) == 0) && 260 + !nfs_attr_use_mounted_on_fileid(fattr)) 260 261 goto out_no_inode; 261 262 if ((fattr->valid & NFS_ATTR_FATTR_TYPE) == 0) 262 263 goto out_no_inode; ··· 1295 1294 if (new_isize != cur_isize) { 1296 1295 /* Do we perhaps have any outstanding writes, or has 1297 1296 * the file grown beyond our last write? */ 1298 - if (nfsi->npages == 0 || new_isize > cur_isize) { 1297 + if ((nfsi->npages == 0 && !test_bit(NFS_INO_LAYOUTCOMMIT, &nfsi->flags)) || 1298 + new_isize > cur_isize) { 1299 1299 i_size_write(inode, new_isize); 1300 1300 invalid |= NFS_INO_INVALID_ATTR|NFS_INO_INVALID_DATA; 1301 1301 }
+11
fs/nfs/internal.h
··· 45 45 fattr->valid |= NFS_ATTR_FATTR_MOUNTPOINT; 46 46 } 47 47 48 + static inline int nfs_attr_use_mounted_on_fileid(struct nfs_fattr *fattr) 49 + { 50 + if (((fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) == 0) || 51 + (((fattr->valid & NFS_ATTR_FATTR_MOUNTPOINT) == 0) && 52 + ((fattr->valid & NFS_ATTR_FATTR_V4_REFERRAL) == 0))) 53 + return 0; 54 + 55 + fattr->fileid = fattr->mounted_on_fileid; 56 + return 1; 57 + } 58 + 48 59 struct nfs_clone_mount { 49 60 const struct super_block *sb; 50 61 const struct dentry *dentry;
+14 -7
fs/nfs/nfs4filelayout.c
··· 30 30 */ 31 31 32 32 #include <linux/nfs_fs.h> 33 + #include <linux/nfs_page.h> 33 34 34 35 #include "internal.h" 35 36 #include "nfs4filelayout.h" ··· 553 552 __func__, nfl_util, fl->num_fh, fl->first_stripe_index, 554 553 fl->pattern_offset); 555 554 556 - if (!fl->num_fh) 555 + /* Note that a zero value for num_fh is legal for STRIPE_SPARSE. 556 + * Further checking is done in filelayout_check_layout */ 557 + if (fl->num_fh < 0 || fl->num_fh > 558 + max(NFS4_PNFS_MAX_STRIPE_CNT, NFS4_PNFS_MAX_MULTI_CNT)) 557 559 goto out_err; 558 560 559 - fl->fh_array = kzalloc(fl->num_fh * sizeof(struct nfs_fh *), 560 - gfp_flags); 561 - if (!fl->fh_array) 562 - goto out_err; 561 + if (fl->num_fh > 0) { 562 + fl->fh_array = kzalloc(fl->num_fh * sizeof(struct nfs_fh *), 563 + gfp_flags); 564 + if (!fl->fh_array) 565 + goto out_err; 566 + } 563 567 564 568 for (i = 0; i < fl->num_fh; i++) { 565 569 /* Do we want to use a mempool here? */ ··· 667 661 u64 p_stripe, r_stripe; 668 662 u32 stripe_unit; 669 663 670 - if (!pnfs_generic_pg_test(pgio, prev, req)) 671 - return 0; 664 + if (!pnfs_generic_pg_test(pgio, prev, req) || 665 + !nfs_generic_pg_test(pgio, prev, req)) 666 + return false; 672 667 673 668 if (!pgio->pg_lseg) 674 669 return 1;
+27 -18
fs/nfs/nfs4proc.c
··· 2265 2265 return nfs4_map_errors(status); 2266 2266 } 2267 2267 2268 + static void nfs_fixup_referral_attributes(struct nfs_fattr *fattr); 2268 2269 /* 2269 2270 * Get locations and (maybe) other attributes of a referral. 2270 2271 * Note that we'll actually follow the referral later when 2271 2272 * we detect fsid mismatch in inode revalidation 2272 2273 */ 2273 - static int nfs4_get_referral(struct inode *dir, const struct qstr *name, struct nfs_fattr *fattr, struct nfs_fh *fhandle) 2274 + static int nfs4_get_referral(struct inode *dir, const struct qstr *name, 2275 + struct nfs_fattr *fattr, struct nfs_fh *fhandle) 2274 2276 { 2275 2277 int status = -ENOMEM; 2276 2278 struct page *page = NULL; ··· 2290 2288 goto out; 2291 2289 /* Make sure server returned a different fsid for the referral */ 2292 2290 if (nfs_fsid_equal(&NFS_SERVER(dir)->fsid, &locations->fattr.fsid)) { 2293 - dprintk("%s: server did not return a different fsid for a referral at %s\n", __func__, name->name); 2291 + dprintk("%s: server did not return a different fsid for" 2292 + " a referral at %s\n", __func__, name->name); 2294 2293 status = -EIO; 2295 2294 goto out; 2296 2295 } 2296 + /* Fixup attributes for the nfs_lookup() call to nfs_fhget() */ 2297 + nfs_fixup_referral_attributes(&locations->fattr); 2297 2298 2299 + /* replace the lookup nfs_fattr with the locations nfs_fattr */ 2298 2300 memcpy(fattr, &locations->fattr, sizeof(struct nfs_fattr)); 2299 - fattr->valid |= NFS_ATTR_FATTR_V4_REFERRAL; 2300 - if (!fattr->mode) 2301 - fattr->mode = S_IFDIR; 2302 2301 memset(fhandle, 0, sizeof(struct nfs_fh)); 2303 2302 out: 2304 2303 if (page) ··· 4670 4667 return len; 4671 4668 } 4672 4669 4670 + /* 4671 + * nfs_fhget will use either the mounted_on_fileid or the fileid 4672 + */ 4673 4673 static void nfs_fixup_referral_attributes(struct nfs_fattr *fattr) 4674 4674 { 4675 - if (!((fattr->valid & NFS_ATTR_FATTR_FILEID) && 4676 - (fattr->valid & NFS_ATTR_FATTR_FSID) && 4677 - (fattr->valid & 
NFS_ATTR_FATTR_V4_REFERRAL))) 4675 + if (!(((fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) || 4676 + (fattr->valid & NFS_ATTR_FATTR_FILEID)) && 4677 + (fattr->valid & NFS_ATTR_FATTR_FSID) && 4678 + (fattr->valid & NFS_ATTR_FATTR_V4_REFERRAL))) 4678 4679 return; 4679 4680 4680 4681 fattr->valid |= NFS_ATTR_FATTR_TYPE | NFS_ATTR_FATTR_MODE | ··· 4693 4686 struct nfs_server *server = NFS_SERVER(dir); 4694 4687 u32 bitmask[2] = { 4695 4688 [0] = FATTR4_WORD0_FSID | FATTR4_WORD0_FS_LOCATIONS, 4696 - [1] = FATTR4_WORD1_MOUNTED_ON_FILEID, 4697 4689 }; 4698 4690 struct nfs4_fs_locations_arg args = { 4699 4691 .dir_fh = NFS_FH(dir), ··· 4711 4705 int status; 4712 4706 4713 4707 dprintk("%s: start\n", __func__); 4708 + 4709 + /* Ask for the fileid of the absent filesystem if mounted_on_fileid 4710 + * is not supported */ 4711 + if (NFS_SERVER(dir)->attr_bitmask[1] & FATTR4_WORD1_MOUNTED_ON_FILEID) 4712 + bitmask[1] |= FATTR4_WORD1_MOUNTED_ON_FILEID; 4713 + else 4714 + bitmask[0] |= FATTR4_WORD0_FILEID; 4715 + 4714 4716 nfs_fattr_init(&fs_locations->fattr); 4715 4717 fs_locations->server = server; 4716 4718 fs_locations->nlocations = 0; 4717 4719 status = nfs4_call_sync(server->client, server, &msg, &args.seq_args, &res.seq_res, 0); 4718 - nfs_fixup_referral_attributes(&fs_locations->fattr); 4719 4720 dprintk("%s: returned status = %d\n", __func__, status); 4720 4721 return status; 4721 4722 } ··· 5111 5098 if (mxresp_sz == 0) 5112 5099 mxresp_sz = NFS_MAX_FILE_IO_SIZE; 5113 5100 /* Fore channel attributes */ 5114 - args->fc_attrs.headerpadsz = 0; 5115 5101 args->fc_attrs.max_rqst_sz = mxrqst_sz; 5116 5102 args->fc_attrs.max_resp_sz = mxresp_sz; 5117 5103 args->fc_attrs.max_ops = NFS4_MAX_OPS; ··· 5123 5111 args->fc_attrs.max_ops, args->fc_attrs.max_reqs); 5124 5112 5125 5113 /* Back channel attributes */ 5126 - args->bc_attrs.headerpadsz = 0; 5127 5114 args->bc_attrs.max_rqst_sz = PAGE_SIZE; 5128 5115 args->bc_attrs.max_resp_sz = PAGE_SIZE; 5129 5116 
args->bc_attrs.max_resp_sz_cached = 0; ··· 5142 5131 struct nfs4_channel_attrs *sent = &args->fc_attrs; 5143 5132 struct nfs4_channel_attrs *rcvd = &session->fc_attrs; 5144 5133 5145 - if (rcvd->headerpadsz > sent->headerpadsz) 5146 - return -EINVAL; 5147 5134 if (rcvd->max_resp_sz > sent->max_resp_sz) 5148 5135 return -EINVAL; 5149 5136 /* ··· 5706 5697 { 5707 5698 struct nfs4_layoutreturn *lrp = calldata; 5708 5699 struct nfs_server *server; 5700 + struct pnfs_layout_hdr *lo = NFS_I(lrp->args.inode)->layout; 5709 5701 5710 5702 dprintk("--> %s\n", __func__); 5711 5703 ··· 5718 5708 nfs_restart_rpc(task, lrp->clp); 5719 5709 return; 5720 5710 } 5711 + spin_lock(&lo->plh_inode->i_lock); 5721 5712 if (task->tk_status == 0) { 5722 - struct pnfs_layout_hdr *lo = NFS_I(lrp->args.inode)->layout; 5723 - 5724 5713 if (lrp->res.lrs_present) { 5725 - spin_lock(&lo->plh_inode->i_lock); 5726 5714 pnfs_set_layout_stateid(lo, &lrp->res.stateid, true); 5727 - spin_unlock(&lo->plh_inode->i_lock); 5728 5715 } else 5729 5716 BUG_ON(!list_empty(&lo->plh_segs)); 5730 5717 } 5718 + lo->plh_block_lgets--; 5719 + spin_unlock(&lo->plh_inode->i_lock); 5731 5720 dprintk("<-- %s\n", __func__); 5732 5721 } 5733 5722
+14 -12
fs/nfs/nfs4xdr.c
··· 255 255 #define decode_fs_locations_maxsz \ 256 256 (0) 257 257 #define encode_secinfo_maxsz (op_encode_hdr_maxsz + nfs4_name_maxsz) 258 - #define decode_secinfo_maxsz (op_decode_hdr_maxsz + 4 + (NFS_MAX_SECFLAVORS * (16 + GSS_OID_MAX_LEN))) 258 + #define decode_secinfo_maxsz (op_decode_hdr_maxsz + 1 + ((NFS_MAX_SECFLAVORS * (16 + GSS_OID_MAX_LEN)) / 4)) 259 259 260 260 #if defined(CONFIG_NFS_V4_1) 261 261 #define NFS4_MAX_MACHINE_NAME_LEN (64) ··· 1725 1725 *p++ = cpu_to_be32(args->flags); /*flags */ 1726 1726 1727 1727 /* Fore Channel */ 1728 - *p++ = cpu_to_be32(args->fc_attrs.headerpadsz); /* header padding size */ 1728 + *p++ = cpu_to_be32(0); /* header padding size */ 1729 1729 *p++ = cpu_to_be32(args->fc_attrs.max_rqst_sz); /* max req size */ 1730 1730 *p++ = cpu_to_be32(args->fc_attrs.max_resp_sz); /* max resp size */ 1731 1731 *p++ = cpu_to_be32(max_resp_sz_cached); /* Max resp sz cached */ ··· 1734 1734 *p++ = cpu_to_be32(0); /* rdmachannel_attrs */ 1735 1735 1736 1736 /* Back Channel */ 1737 - *p++ = cpu_to_be32(args->fc_attrs.headerpadsz); /* header padding size */ 1737 + *p++ = cpu_to_be32(0); /* header padding size */ 1738 1738 *p++ = cpu_to_be32(args->bc_attrs.max_rqst_sz); /* max req size */ 1739 1739 *p++ = cpu_to_be32(args->bc_attrs.max_resp_sz); /* max resp size */ 1740 1740 *p++ = cpu_to_be32(args->bc_attrs.max_resp_sz_cached); /* Max resp sz cached */ ··· 3098 3098 return -EIO; 3099 3099 } 3100 3100 3101 - static int decode_attr_error(struct xdr_stream *xdr, uint32_t *bitmap) 3101 + static int decode_attr_error(struct xdr_stream *xdr, uint32_t *bitmap, int32_t *res) 3102 3102 { 3103 3103 __be32 *p; 3104 3104 ··· 3109 3109 if (unlikely(!p)) 3110 3110 goto out_overflow; 3111 3111 bitmap[0] &= ~FATTR4_WORD0_RDATTR_ERROR; 3112 - return -be32_to_cpup(p); 3112 + *res = -be32_to_cpup(p); 3113 3113 } 3114 3114 return 0; 3115 3115 out_overflow: ··· 4070 4070 int status; 4071 4071 umode_t fmode = 0; 4072 4072 uint32_t type; 4073 + int32_t err; 4073 
4074 4074 4075 status = decode_attr_type(xdr, bitmap, &type); 4075 4076 if (status < 0) ··· 4096 4095 goto xdr_error; 4097 4096 fattr->valid |= status; 4098 4097 4099 - status = decode_attr_error(xdr, bitmap); 4100 - if (status == -NFS4ERR_WRONGSEC) { 4101 - nfs_fixup_secinfo_attributes(fattr, fh); 4102 - status = 0; 4103 - } 4098 + err = 0; 4099 + status = decode_attr_error(xdr, bitmap, &err); 4104 4100 if (status < 0) 4105 4101 goto xdr_error; 4102 + if (err == -NFS4ERR_WRONGSEC) 4103 + nfs_fixup_secinfo_attributes(fattr, fh); 4106 4104 4107 4105 status = decode_attr_filehandle(xdr, bitmap, fh); 4108 4106 if (status < 0) ··· 4997 4997 struct nfs4_channel_attrs *attrs) 4998 4998 { 4999 4999 __be32 *p; 5000 - u32 nr_attrs; 5000 + u32 nr_attrs, val; 5001 5001 5002 5002 p = xdr_inline_decode(xdr, 28); 5003 5003 if (unlikely(!p)) 5004 5004 goto out_overflow; 5005 - attrs->headerpadsz = be32_to_cpup(p++); 5005 + val = be32_to_cpup(p++); /* headerpadsz */ 5006 + if (val) 5007 + return -EINVAL; /* no support for header padding yet */ 5006 5008 attrs->max_rqst_sz = be32_to_cpup(p++); 5007 5009 attrs->max_resp_sz = be32_to_cpup(p++); 5008 5010 attrs->max_resp_sz_cached = be32_to_cpup(p++);
+3 -1
fs/nfs/objlayout/objio_osd.c
··· 108 108 de = n; 109 109 } 110 110 111 - atomic_inc(&de->id_node.ref); 112 111 return de; 113 112 } 114 113 ··· 999 1000 { 1000 1001 if (!pnfs_generic_pg_test(pgio, prev, req)) 1001 1002 return false; 1003 + 1004 + if (pgio->pg_lseg == NULL) 1005 + return true; 1002 1006 1003 1007 return pgio->pg_count + req->wb_bytes <= 1004 1008 OBJIO_LSEG(pgio->pg_lseg)->max_io_size;
+1 -1
fs/nfs/objlayout/objlayout.c
··· 291 291 struct nfs_read_data *rdata; 292 292 293 293 state->status = status; 294 - dprintk("%s: Begin status=%ld eof=%d\n", __func__, status, eof); 294 + dprintk("%s: Begin status=%zd eof=%d\n", __func__, status, eof); 295 295 rdata = state->rpcdata; 296 296 rdata->task.tk_status = status; 297 297 if (status >= 0) {
+2 -1
fs/nfs/pagelist.c
··· 204 204 TASK_UNINTERRUPTIBLE); 205 205 } 206 206 207 - static bool nfs_generic_pg_test(struct nfs_pageio_descriptor *desc, struct nfs_page *prev, struct nfs_page *req) 207 + bool nfs_generic_pg_test(struct nfs_pageio_descriptor *desc, struct nfs_page *prev, struct nfs_page *req) 208 208 { 209 209 /* 210 210 * FIXME: ideally we should be able to coalesce all requests ··· 218 218 219 219 return desc->pg_count + req->wb_bytes <= desc->pg_bsize; 220 220 } 221 + EXPORT_SYMBOL_GPL(nfs_generic_pg_test); 221 222 222 223 /** 223 224 * nfs_pageio_init - initialise a page io descriptor
+31 -13
fs/nfs/pnfs.c
··· 634 634 635 635 spin_lock(&ino->i_lock); 636 636 lo = nfsi->layout; 637 - if (!lo || !mark_matching_lsegs_invalid(lo, &tmp_list, NULL)) { 637 + if (!lo) { 638 638 spin_unlock(&ino->i_lock); 639 - dprintk("%s: no layout segments to return\n", __func__); 640 - goto out; 639 + dprintk("%s: no layout to return\n", __func__); 640 + return status; 641 641 } 642 642 stateid = nfsi->layout->plh_stateid; 643 643 /* Reference matched in nfs4_layoutreturn_release */ 644 644 get_layout_hdr(lo); 645 + mark_matching_lsegs_invalid(lo, &tmp_list, NULL); 646 + lo->plh_block_lgets++; 645 647 spin_unlock(&ino->i_lock); 646 648 pnfs_free_lseg_list(&tmp_list); 647 649 ··· 652 650 lrp = kzalloc(sizeof(*lrp), GFP_KERNEL); 653 651 if (unlikely(lrp == NULL)) { 654 652 status = -ENOMEM; 653 + set_bit(NFS_LAYOUT_RW_FAILED, &lo->plh_flags); 654 + set_bit(NFS_LAYOUT_RO_FAILED, &lo->plh_flags); 655 + put_layout_hdr(lo); 655 656 goto out; 656 657 } 657 658 ··· 892 887 ret = get_lseg(lseg); 893 888 break; 894 889 } 895 - if (cmp_layout(range, &lseg->pls_range) > 0) 890 + if (lseg->pls_range.offset > range->offset) 896 891 break; 897 892 } 898 893 ··· 1064 1059 gfp_flags = GFP_NOFS; 1065 1060 } 1066 1061 1067 - if (pgio->pg_count == prev->wb_bytes) { 1062 + if (pgio->pg_lseg == NULL) { 1063 + if (pgio->pg_count != prev->wb_bytes) 1064 + return true; 1068 1065 /* This is first coelesce call for a series of nfs_pages */ 1069 1066 pgio->pg_lseg = pnfs_update_layout(pgio->pg_inode, 1070 1067 prev->wb_context, 1071 - req_offset(req), 1068 + req_offset(prev), 1072 1069 pgio->pg_count, 1073 1070 access_type, 1074 1071 gfp_flags); 1075 - return true; 1072 + if (pgio->pg_lseg == NULL) 1073 + return true; 1076 1074 } 1077 1075 1078 - if (pgio->pg_lseg && 1079 - req_offset(req) > end_offset(pgio->pg_lseg->pls_range.offset, 1080 - pgio->pg_lseg->pls_range.length)) 1081 - return false; 1082 - 1083 - return true; 1076 + /* 1077 + * Test if a nfs_page is fully contained in the pnfs_layout_range. 
1078 + * Note that this test makes several assumptions: 1079 + * - that the previous nfs_page in the struct nfs_pageio_descriptor 1080 + * is known to lie within the range. 1081 + * - that the nfs_page being tested is known to be contiguous with the 1082 + * previous nfs_page. 1083 + * - Layout ranges are page aligned, so we only have to test the 1084 + * start offset of the request. 1085 + * 1086 + * Please also note that 'end_offset' is actually the offset of the 1087 + * first byte that lies outside the pnfs_layout_range. FIXME? 1088 + * 1089 + */ 1090 + return req_offset(req) < end_offset(pgio->pg_lseg->pls_range.offset, 1091 + pgio->pg_lseg->pls_range.length); 1084 1092 } 1085 1093 EXPORT_SYMBOL_GPL(pnfs_generic_pg_test); 1086 1094
+1
fs/nfs/pnfs.h
··· 186 186 /* pnfs_dev.c */ 187 187 struct nfs4_deviceid_node { 188 188 struct hlist_node node; 189 + struct hlist_node tmpnode; 189 190 const struct pnfs_layoutdriver_type *ld; 190 191 const struct nfs_client *nfs_client; 191 192 struct nfs4_deviceid deviceid;
+12 -5
fs/nfs/pnfs_dev.c
··· 174 174 const struct nfs4_deviceid *id) 175 175 { 176 176 INIT_HLIST_NODE(&d->node); 177 + INIT_HLIST_NODE(&d->tmpnode); 177 178 d->ld = ld; 178 179 d->nfs_client = nfs_client; 179 180 d->deviceid = *id; ··· 209 208 210 209 hlist_add_head_rcu(&new->node, &nfs4_deviceid_cache[hash]); 211 210 spin_unlock(&nfs4_deviceid_lock); 211 + atomic_inc(&new->ref); 212 212 213 213 return new; 214 214 } ··· 240 238 _deviceid_purge_client(const struct nfs_client *clp, long hash) 241 239 { 242 240 struct nfs4_deviceid_node *d; 243 - struct hlist_node *n, *next; 241 + struct hlist_node *n; 244 242 HLIST_HEAD(tmp); 245 243 244 + spin_lock(&nfs4_deviceid_lock); 246 245 rcu_read_lock(); 247 246 hlist_for_each_entry_rcu(d, n, &nfs4_deviceid_cache[hash], node) 248 247 if (d->nfs_client == clp && atomic_read(&d->ref)) { 249 248 hlist_del_init_rcu(&d->node); 250 - hlist_add_head(&d->node, &tmp); 249 + hlist_add_head(&d->tmpnode, &tmp); 251 250 } 252 251 rcu_read_unlock(); 252 + spin_unlock(&nfs4_deviceid_lock); 253 253 254 254 if (hlist_empty(&tmp)) 255 255 return; 256 256 257 257 synchronize_rcu(); 258 - hlist_for_each_entry_safe(d, n, next, &tmp, node) 258 + while (!hlist_empty(&tmp)) { 259 + d = hlist_entry(tmp.first, struct nfs4_deviceid_node, tmpnode); 260 + hlist_del(&d->tmpnode); 259 261 if (atomic_dec_and_test(&d->ref)) 260 262 d->ld->free_deviceid_node(d); 263 + } 261 264 } 262 265 263 266 void ··· 270 263 { 271 264 long h; 272 265 273 - spin_lock(&nfs4_deviceid_lock); 266 + if (!(clp->cl_exchange_flags & EXCHGID4_FLAG_USE_PNFS_MDS)) 267 + return; 274 268 for (h = 0; h < NFS4_DEVICE_ID_HASH_SIZE; h++) 275 269 _deviceid_purge_client(clp, h); 276 - spin_unlock(&nfs4_deviceid_lock); 277 270 }
-1
fs/omfs/file.c
··· 4 4 * Released under GPL v2. 5 5 */ 6 6 7 - #include <linux/version.h> 8 7 #include <linux/module.h> 9 8 #include <linux/fs.h> 10 9 #include <linux/buffer_head.h>
+5 -2
fs/proc/base.c
··· 2708 2708 struct task_io_accounting acct = task->ioac; 2709 2709 unsigned long flags; 2710 2710 2711 + if (!ptrace_may_access(task, PTRACE_MODE_READ)) 2712 + return -EACCES; 2713 + 2711 2714 if (whole && lock_task_sighand(task, &flags)) { 2712 2715 struct task_struct *t = task; 2713 2716 ··· 2842 2839 REG("coredump_filter", S_IRUGO|S_IWUSR, proc_coredump_filter_operations), 2843 2840 #endif 2844 2841 #ifdef CONFIG_TASK_IO_ACCOUNTING 2845 - INF("io", S_IRUGO, proc_tgid_io_accounting), 2842 + INF("io", S_IRUSR, proc_tgid_io_accounting), 2846 2843 #endif 2847 2844 #ifdef CONFIG_HARDWALL 2848 2845 INF("hardwall", S_IRUGO, proc_pid_hardwall), ··· 3184 3181 REG("make-it-fail", S_IRUGO|S_IWUSR, proc_fault_inject_operations), 3185 3182 #endif 3186 3183 #ifdef CONFIG_TASK_IO_ACCOUNTING 3187 - INF("io", S_IRUGO, proc_tid_io_accounting), 3184 + INF("io", S_IRUSR, proc_tid_io_accounting), 3188 3185 #endif 3189 3186 #ifdef CONFIG_HARDWALL 3190 3187 INF("hardwall", S_IRUGO, proc_pid_hardwall),
+6 -2
fs/romfs/mmap-nommu.c
··· 27 27 { 28 28 struct inode *inode = file->f_mapping->host; 29 29 struct mtd_info *mtd = inode->i_sb->s_mtd; 30 - unsigned long isize, offset; 30 + unsigned long isize, offset, maxpages, lpages; 31 31 32 32 if (!mtd) 33 33 goto cant_map_directly; 34 34 35 + /* the mapping mustn't extend beyond the EOF */ 36 + lpages = (len + PAGE_SIZE - 1) >> PAGE_SHIFT; 35 37 isize = i_size_read(inode); 36 38 offset = pgoff << PAGE_SHIFT; 37 - if (offset > isize || len > isize || offset > isize - len) 39 + 40 + maxpages = (isize + PAGE_SIZE - 1) >> PAGE_SHIFT; 41 + if ((pgoff >= maxpages) || (maxpages - pgoff < lpages)) 38 42 return (unsigned long) -EINVAL; 39 43 40 44 /* we need to call down to the MTD layer to do the actual mapping */
+7
fs/xfs/xfs_attr.c
··· 490 490 args.whichfork = XFS_ATTR_FORK; 491 491 492 492 /* 493 + * we have no control over the attribute names that userspace passes us 494 + * to remove, so we have to allow the name lookup prior to attribute 495 + * removal to fail. 496 + */ 497 + args.op_flags = XFS_DA_OP_OKNOENT; 498 + 499 + /* 493 500 * Attach the dquots to the inode. 494 501 */ 495 502 error = xfs_qm_dqattach(dp, 0);
+9 -4
fs/xfs/xfs_iget.c
··· 253 253 rcu_read_lock(); 254 254 spin_lock(&ip->i_flags_lock); 255 255 256 - ip->i_flags &= ~XFS_INEW; 257 - ip->i_flags |= XFS_IRECLAIMABLE; 258 - __xfs_inode_set_reclaim_tag(pag, ip); 256 + ip->i_flags &= ~(XFS_INEW | XFS_IRECLAIM); 257 + ASSERT(ip->i_flags & XFS_IRECLAIMABLE); 259 258 trace_xfs_iget_reclaim_fail(ip); 260 259 goto out_error; 261 260 } 262 261 263 262 spin_lock(&pag->pag_ici_lock); 264 263 spin_lock(&ip->i_flags_lock); 265 - ip->i_flags &= ~(XFS_IRECLAIMABLE | XFS_IRECLAIM); 264 + 265 + /* 266 + * Clear the per-lifetime state in the inode as we are now 267 + * effectively a new inode and need to return to the initial 268 + * state before reuse occurs. 269 + */ 270 + ip->i_flags &= ~XFS_IRECLAIM_RESET_FLAGS; 266 271 ip->i_flags |= XFS_INEW; 267 272 __xfs_inode_clear_reclaim_tag(mp, pag, ip); 268 273 inode->i_state = I_NEW;
+10
fs/xfs/xfs_inode.h
··· 384 384 #define XFS_IDIRTY_RELEASE 0x0040 /* dirty release already seen */ 385 385 386 386 /* 387 + * Per-lifetime flags need to be reset when re-using a reclaimable inode during 388 + * inode lookup. This prevents unintended behaviour on the new inode from 389 + * occurring. 390 + */ 391 + #define XFS_IRECLAIM_RESET_FLAGS \ 392 + (XFS_IRECLAIMABLE | XFS_IRECLAIM | \ 393 + XFS_IDIRTY_RELEASE | XFS_ITRUNCATED | \ 394 + XFS_IFILESTREAM) 395 + 396 + /* 387 397 * Flags for inode locking. 388 398 * Bit ranges: 1<<1 - 1<<16-1 -- iolock/ilock modes (bitfield) 389 399 * 1<<16 - 1<<32-1 -- lockdep annotation (integers)
+5 -2
fs/xfs/xfs_vnodeops.c
··· 960 960 * be exposed to that problem. 961 961 */ 962 962 truncated = xfs_iflags_test_and_clear(ip, XFS_ITRUNCATED); 963 - if (truncated && VN_DIRTY(VFS_I(ip)) && ip->i_delayed_blks > 0) 964 - xfs_flush_pages(ip, 0, -1, XBF_ASYNC, FI_NONE); 963 + if (truncated) { 964 + xfs_iflags_clear(ip, XFS_IDIRTY_RELEASE); 965 + if (VN_DIRTY(VFS_I(ip)) && ip->i_delayed_blks > 0) 966 + xfs_flush_pages(ip, 0, -1, XBF_ASYNC, FI_NONE); 967 + } 965 968 } 966 969 967 970 if (ip->i_d.di_nlink == 0)
+3
include/linux/amba/serial.h
··· 201 201 bool (*dma_filter)(struct dma_chan *chan, void *filter_param); 202 202 void *dma_rx_param; 203 203 void *dma_tx_param; 204 + void (*init) (void); 205 + void (*exit) (void); 206 + void (*reset) (void); 204 207 }; 205 208 #endif 206 209
+1 -1
include/linux/blk_types.h
··· 167 167 (REQ_FAILFAST_DEV | REQ_FAILFAST_TRANSPORT | REQ_FAILFAST_DRIVER) 168 168 #define REQ_COMMON_MASK \ 169 169 (REQ_WRITE | REQ_FAILFAST_MASK | REQ_SYNC | REQ_META | REQ_DISCARD | \ 170 - REQ_NOIDLE | REQ_FLUSH | REQ_FUA) 170 + REQ_NOIDLE | REQ_FLUSH | REQ_FUA | REQ_SECURE) 171 171 #define REQ_CLONE_MASK REQ_COMMON_MASK 172 172 173 173 #define REQ_RAHEAD (1 << __REQ_RAHEAD)
+2 -1
include/linux/blktrace_api.h
··· 169 169 extern int do_blk_trace_setup(struct request_queue *q, char *name, 170 170 dev_t dev, struct block_device *bdev, 171 171 struct blk_user_trace_setup *buts); 172 - extern void __trace_note_message(struct blk_trace *, const char *fmt, ...); 172 + extern __attribute__((format(printf, 2, 3))) 173 + void __trace_note_message(struct blk_trace *, const char *fmt, ...); 173 174 174 175 /** 175 176 * blk_add_trace_msg - Add a (simple) message to the blktrace stream
+2
include/linux/compat.h
··· 467 467 char __user *optval, unsigned int optlen); 468 468 asmlinkage long compat_sys_sendmsg(int fd, struct compat_msghdr __user *msg, 469 469 unsigned flags); 470 + asmlinkage long compat_sys_sendmmsg(int fd, struct compat_mmsghdr __user *mmsg, 471 + unsigned vlen, unsigned int flags); 470 472 asmlinkage long compat_sys_recvmsg(int fd, struct compat_msghdr __user *msg, 471 473 unsigned int flags); 472 474 asmlinkage long compat_sys_recv(int fd, void __user *buf, size_t len,
+1 -1
include/linux/connector.h
··· 44 44 #define CN_VAL_DRBD 0x1 45 45 #define CN_KVP_IDX 0x9 /* HyperV KVP */ 46 46 47 - #define CN_NETLINK_USERS 9 47 + #define CN_NETLINK_USERS 10 /* Highest index + 1 */ 48 48 49 49 /* 50 50 * Maximum connector's message size.
+2 -3
include/linux/device.h
··· 530 530 * @dma_mem: Internal for coherent mem override. 531 531 * @archdata: For arch-specific additions. 532 532 * @of_node: Associated device tree node. 533 - * @of_match: Matching of_device_id from driver. 534 533 * @devt: For creating the sysfs "dev". 535 534 * @devres_lock: Spinlock to protect the resource of the device. 536 535 * @devres_head: The resources list of the device. ··· 653 654 654 655 static inline void device_enable_async_suspend(struct device *dev) 655 656 { 656 - if (!dev->power.in_suspend) 657 + if (!dev->power.is_prepared) 657 658 dev->power.async_suspend = true; 658 659 } 659 660 660 661 static inline void device_disable_async_suspend(struct device *dev) 661 662 { 662 - if (!dev->power.in_suspend) 663 + if (!dev->power.is_prepared) 663 664 dev->power.async_suspend = false; 664 665 } 665 666
+1
include/linux/fs.h
··· 639 639 struct prio_tree_root i_mmap; /* tree of private and shared mappings */ 640 640 struct list_head i_mmap_nonlinear;/*list VM_NONLINEAR mappings */ 641 641 struct mutex i_mmap_mutex; /* protect tree, count, list */ 642 + /* Protected by tree_lock together with the radix tree */ 642 643 unsigned long nrpages; /* number of total pages */ 643 644 pgoff_t writeback_index;/* writeback starts here */ 644 645 const struct address_space_operations *a_ops; /* methods */
+1
include/linux/hrtimer.h
··· 135 135 * @cpu_base: per cpu clock base 136 136 * @index: clock type index for per_cpu support when moving a 137 137 * timer to a base on another cpu. 138 + * @clockid: clock id for per_cpu support 138 139 * @active: red black tree root node for the active timers 139 140 * @resolution: the resolution of the clock, in nanoseconds 140 141 * @get_time: function to retrieve the current time of the clock
-2
include/linux/jbd2.h
··· 1024 1024 1025 1025 /* Filing buffers */ 1026 1026 extern void jbd2_journal_unfile_buffer(journal_t *, struct journal_head *); 1027 - extern void __jbd2_journal_unfile_buffer(struct journal_head *); 1028 1027 extern void __jbd2_journal_refile_buffer(struct journal_head *); 1029 1028 extern void jbd2_journal_refile_buffer(journal_t *, struct journal_head *); 1030 1029 extern void __jbd2_journal_file_buffer(struct journal_head *, transaction_t *, int); ··· 1164 1165 */ 1165 1166 struct journal_head *jbd2_journal_add_journal_head(struct buffer_head *bh); 1166 1167 struct journal_head *jbd2_journal_grab_journal_head(struct buffer_head *bh); 1167 - void jbd2_journal_remove_journal_head(struct buffer_head *bh); 1168 1168 void jbd2_journal_put_journal_head(struct journal_head *jh); 1169 1169 1170 1170 /*
+7
include/linux/mmzone.h
··· 647 647 #endif 648 648 #define nid_page_nr(nid, pagenr) pgdat_page_nr(NODE_DATA(nid),(pagenr)) 649 649 650 + #define node_start_pfn(nid) (NODE_DATA(nid)->node_start_pfn) 651 + 652 + #define node_end_pfn(nid) ({\ 653 + pg_data_t *__pgdat = NODE_DATA(nid);\ 654 + __pgdat->node_start_pfn + __pgdat->node_spanned_pages;\ 655 + }) 656 + 650 657 #include <linux/memory_hotplug.h> 651 658 652 659 extern struct mutex zonelists_mutex;
+3
include/linux/nfs_page.h
··· 92 92 struct nfs_page *); 93 93 extern void nfs_pageio_complete(struct nfs_pageio_descriptor *desc); 94 94 extern void nfs_pageio_cond_complete(struct nfs_pageio_descriptor *, pgoff_t); 95 + extern bool nfs_generic_pg_test(struct nfs_pageio_descriptor *desc, 96 + struct nfs_page *prev, 97 + struct nfs_page *req); 95 98 extern int nfs_wait_on_request(struct nfs_page *); 96 99 extern void nfs_unlock_request(struct nfs_page *req); 97 100 extern int nfs_set_page_tag_locked(struct nfs_page *req);
-1
include/linux/nfs_xdr.h
··· 158 158 159 159 /* nfs41 sessions channel attributes */ 160 160 struct nfs4_channel_attrs { 161 - u32 headerpadsz; 162 161 u32 max_rqst_sz; 163 162 u32 max_resp_sz; 164 163 u32 max_resp_sz_cached;
+1
include/linux/pci_ids.h
··· 1537 1537 #define PCI_DEVICE_ID_RICOH_RL5C476 0x0476 1538 1538 #define PCI_DEVICE_ID_RICOH_RL5C478 0x0478 1539 1539 #define PCI_DEVICE_ID_RICOH_R5C822 0x0822 1540 + #define PCI_DEVICE_ID_RICOH_R5CE823 0xe823 1540 1541 #define PCI_DEVICE_ID_RICOH_R5C832 0x0832 1541 1542 #define PCI_DEVICE_ID_RICOH_R5C843 0x0843 1542 1543
+2 -1
include/linux/pm.h
··· 425 425 pm_message_t power_state; 426 426 unsigned int can_wakeup:1; 427 427 unsigned int async_suspend:1; 428 - unsigned int in_suspend:1; /* Owned by the PM core */ 428 + bool is_prepared:1; /* Owned by the PM core */ 429 + bool is_suspended:1; /* Ditto */ 429 430 spinlock_t lock; 430 431 #ifdef CONFIG_PM_SLEEP 431 432 struct list_head entry;
+21
include/linux/shmem_fs.h
··· 3 3 4 4 #include <linux/swap.h> 5 5 #include <linux/mempolicy.h> 6 + #include <linux/pagemap.h> 6 7 #include <linux/percpu_counter.h> 7 8 8 9 /* inode in-kernel data */ ··· 46 45 return container_of(inode, struct shmem_inode_info, vfs_inode); 47 46 } 48 47 48 + /* 49 + * Functions in mm/shmem.c called directly from elsewhere: 50 + */ 49 51 extern int init_tmpfs(void); 50 52 extern int shmem_fill_super(struct super_block *sb, void *data, int silent); 53 + extern struct file *shmem_file_setup(const char *name, 54 + loff_t size, unsigned long flags); 55 + extern int shmem_zero_setup(struct vm_area_struct *); 56 + extern int shmem_lock(struct file *file, int lock, struct user_struct *user); 57 + extern struct page *shmem_read_mapping_page_gfp(struct address_space *mapping, 58 + pgoff_t index, gfp_t gfp_mask); 59 + extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end); 60 + extern int shmem_unuse(swp_entry_t entry, struct page *page); 61 + extern void mem_cgroup_get_shmem_target(struct inode *inode, pgoff_t pgoff, 62 + struct page **pagep, swp_entry_t *ent); 63 + 64 + static inline struct page *shmem_read_mapping_page( 65 + struct address_space *mapping, pgoff_t index) 66 + { 67 + return shmem_read_mapping_page_gfp(mapping, index, 68 + mapping_gfp_mask(mapping)); 69 + } 51 70 52 71 #endif
+2 -1
include/linux/sunrpc/sched.h
··· 84 84 #endif 85 85 unsigned char tk_priority : 2,/* Task priority */ 86 86 tk_garb_retry : 2, 87 - tk_cred_retry : 2; 87 + tk_cred_retry : 2, 88 + tk_rebind_retry : 2; 88 89 }; 89 90 #define tk_xprt tk_client->cl_xprt 90 91
-10
include/linux/swap.h
··· 300 300 extern int kswapd_run(int nid); 301 301 extern void kswapd_stop(int nid); 302 302 303 - #ifdef CONFIG_MMU 304 - /* linux/mm/shmem.c */ 305 - extern int shmem_unuse(swp_entry_t entry, struct page *page); 306 - #endif /* CONFIG_MMU */ 307 - 308 - #ifdef CONFIG_CGROUP_MEM_RES_CTLR 309 - extern void mem_cgroup_get_shmem_target(struct inode *inode, pgoff_t pgoff, 310 - struct page **pagep, swp_entry_t *ent); 311 - #endif 312 - 313 303 #ifdef CONFIG_SWAP 314 304 /* linux/mm/page_io.c */ 315 305 extern int swap_readpage(struct page *);
+1
include/net/dst.h
··· 77 77 #define DST_NOPOLICY 0x0004 78 78 #define DST_NOHASH 0x0008 79 79 #define DST_NOCACHE 0x0010 80 + #define DST_NOCOUNT 0x0020 80 81 union { 81 82 struct dst_entry *next; 82 83 struct rtable __rcu *rt_next;
-1
include/net/sock.h
··· 179 179 * @sk_dst_cache: destination cache 180 180 * @sk_dst_lock: destination cache lock 181 181 * @sk_policy: flow policy 182 - * @sk_rmem_alloc: receive queue bytes committed 183 182 * @sk_receive_queue: incoming packets 184 183 * @sk_wmem_alloc: transmit queue bytes committed 185 184 * @sk_write_queue: Packet sending queue
+1 -2
include/sound/soc.h
··· 248 248 extern struct snd_ac97_bus_ops soc_ac97_ops; 249 249 250 250 enum snd_soc_control_type { 251 - SND_SOC_CUSTOM = 1, 252 - SND_SOC_I2C, 251 + SND_SOC_I2C = 1, 253 252 SND_SOC_SPI, 254 253 }; 255 254
+76 -103
include/trace/events/ext4.h
··· 26 26 __field( umode_t, mode ) 27 27 __field( uid_t, uid ) 28 28 __field( gid_t, gid ) 29 - __field( blkcnt_t, blocks ) 29 + __field( __u64, blocks ) 30 30 ), 31 31 32 32 TP_fast_assign( ··· 40 40 41 41 TP_printk("dev %d,%d ino %lu mode 0%o uid %u gid %u blocks %llu", 42 42 MAJOR(__entry->dev), MINOR(__entry->dev), 43 - (unsigned long) __entry->ino, 44 - __entry->mode, __entry->uid, __entry->gid, 45 - (unsigned long long) __entry->blocks) 43 + (unsigned long) __entry->ino, __entry->mode, 44 + __entry->uid, __entry->gid, __entry->blocks) 46 45 ); 47 46 48 47 TRACE_EVENT(ext4_request_inode, ··· 177 178 TP_printk("dev %d,%d ino %lu new_size %lld", 178 179 MAJOR(__entry->dev), MINOR(__entry->dev), 179 180 (unsigned long) __entry->ino, 180 - (long long) __entry->new_size) 181 + __entry->new_size) 181 182 ); 182 183 183 184 DECLARE_EVENT_CLASS(ext4__write_begin, ··· 203 204 __entry->flags = flags; 204 205 ), 205 206 206 - TP_printk("dev %d,%d ino %lu pos %llu len %u flags %u", 207 + TP_printk("dev %d,%d ino %lu pos %lld len %u flags %u", 207 208 MAJOR(__entry->dev), MINOR(__entry->dev), 208 209 (unsigned long) __entry->ino, 209 210 __entry->pos, __entry->len, __entry->flags) ··· 247 248 __entry->copied = copied; 248 249 ), 249 250 250 - TP_printk("dev %d,%d ino %lu pos %llu len %u copied %u", 251 + TP_printk("dev %d,%d ino %lu pos %lld len %u copied %u", 251 252 MAJOR(__entry->dev), MINOR(__entry->dev), 252 253 (unsigned long) __entry->ino, 253 254 __entry->pos, __entry->len, __entry->copied) ··· 285 286 TP_ARGS(inode, pos, len, copied) 286 287 ); 287 288 288 - TRACE_EVENT(ext4_writepage, 289 - TP_PROTO(struct inode *inode, struct page *page), 290 - 291 - TP_ARGS(inode, page), 292 - 293 - TP_STRUCT__entry( 294 - __field( dev_t, dev ) 295 - __field( ino_t, ino ) 296 - __field( pgoff_t, index ) 297 - 298 - ), 299 - 300 - TP_fast_assign( 301 - __entry->dev = inode->i_sb->s_dev; 302 - __entry->ino = inode->i_ino; 303 - __entry->index = page->index; 304 - ), 305 - 306 - 
TP_printk("dev %d,%d ino %lu page_index %lu", 307 - MAJOR(__entry->dev), MINOR(__entry->dev), 308 - (unsigned long) __entry->ino, __entry->index) 309 - ); 310 - 311 289 TRACE_EVENT(ext4_da_writepages, 312 290 TP_PROTO(struct inode *inode, struct writeback_control *wbc), 313 291 ··· 317 341 ), 318 342 319 343 TP_printk("dev %d,%d ino %lu nr_to_write %ld pages_skipped %ld " 320 - "range_start %llu range_end %llu sync_mode %d" 344 + "range_start %lld range_end %lld sync_mode %d" 321 345 "for_kupdate %d range_cyclic %d writeback_index %lu", 322 346 MAJOR(__entry->dev), MINOR(__entry->dev), 323 347 (unsigned long) __entry->ino, __entry->nr_to_write, ··· 425 449 TP_printk("dev %d,%d ino %lu page_index %lu", 426 450 MAJOR(__entry->dev), MINOR(__entry->dev), 427 451 (unsigned long) __entry->ino, 428 - __entry->index) 452 + (unsigned long) __entry->index) 453 + ); 454 + 455 + DEFINE_EVENT(ext4__page_op, ext4_writepage, 456 + 457 + TP_PROTO(struct page *page), 458 + 459 + TP_ARGS(page) 429 460 ); 430 461 431 462 DEFINE_EVENT(ext4__page_op, ext4_readpage, ··· 472 489 TP_printk("dev %d,%d ino %lu page_index %lu offset %lu", 473 490 MAJOR(__entry->dev), MINOR(__entry->dev), 474 491 (unsigned long) __entry->ino, 475 - __entry->index, __entry->offset) 492 + (unsigned long) __entry->index, __entry->offset) 476 493 ); 477 494 478 495 TRACE_EVENT(ext4_discard_blocks, ··· 545 562 ); 546 563 547 564 TRACE_EVENT(ext4_mb_release_inode_pa, 548 - TP_PROTO(struct super_block *sb, 549 - struct inode *inode, 550 - struct ext4_prealloc_space *pa, 565 + TP_PROTO(struct ext4_prealloc_space *pa, 551 566 unsigned long long block, unsigned int count), 552 567 553 - TP_ARGS(sb, inode, pa, block, count), 568 + TP_ARGS(pa, block, count), 554 569 555 570 TP_STRUCT__entry( 556 571 __field( dev_t, dev ) ··· 559 578 ), 560 579 561 580 TP_fast_assign( 562 - __entry->dev = sb->s_dev; 563 - __entry->ino = inode->i_ino; 581 + __entry->dev = pa->pa_inode->i_sb->s_dev; 582 + __entry->ino = pa->pa_inode->i_ino; 
564 583 __entry->block = block; 565 584 __entry->count = count; 566 585 ), ··· 572 591 ); 573 592 574 593 TRACE_EVENT(ext4_mb_release_group_pa, 575 - TP_PROTO(struct super_block *sb, 576 - struct ext4_prealloc_space *pa), 594 + TP_PROTO(struct ext4_prealloc_space *pa), 577 595 578 - TP_ARGS(sb, pa), 596 + TP_ARGS(pa), 579 597 580 598 TP_STRUCT__entry( 581 599 __field( dev_t, dev ) ··· 584 604 ), 585 605 586 606 TP_fast_assign( 587 - __entry->dev = sb->s_dev; 607 + __entry->dev = pa->pa_inode->i_sb->s_dev; 588 608 __entry->pa_pstart = pa->pa_pstart; 589 609 __entry->pa_len = pa->pa_len; 590 610 ), ··· 646 666 __field( ino_t, ino ) 647 667 __field( unsigned int, flags ) 648 668 __field( unsigned int, len ) 649 - __field( __u64, logical ) 669 + __field( __u32, logical ) 670 + __field( __u32, lleft ) 671 + __field( __u32, lright ) 650 672 __field( __u64, goal ) 651 - __field( __u64, lleft ) 652 - __field( __u64, lright ) 653 673 __field( __u64, pleft ) 654 674 __field( __u64, pright ) 655 675 ), ··· 667 687 __entry->pright = ar->pright; 668 688 ), 669 689 670 - TP_printk("dev %d,%d ino %lu flags %u len %u lblk %llu goal %llu " 671 - "lleft %llu lright %llu pleft %llu pright %llu ", 690 + TP_printk("dev %d,%d ino %lu flags %u len %u lblk %u goal %llu " 691 + "lleft %u lright %u pleft %llu pright %llu ", 672 692 MAJOR(__entry->dev), MINOR(__entry->dev), 673 - (unsigned long) __entry->ino, 674 - __entry->flags, __entry->len, 675 - (unsigned long long) __entry->logical, 676 - (unsigned long long) __entry->goal, 677 - (unsigned long long) __entry->lleft, 678 - (unsigned long long) __entry->lright, 679 - (unsigned long long) __entry->pleft, 680 - (unsigned long long) __entry->pright) 693 + (unsigned long) __entry->ino, __entry->flags, 694 + __entry->len, __entry->logical, __entry->goal, 695 + __entry->lleft, __entry->lright, __entry->pleft, 696 + __entry->pright) 681 697 ); 682 698 683 699 TRACE_EVENT(ext4_allocate_blocks, ··· 687 711 __field( __u64, block ) 688 712 __field( 
unsigned int, flags ) 689 713 __field( unsigned int, len ) 690 - __field( __u64, logical ) 714 + __field( __u32, logical ) 715 + __field( __u32, lleft ) 716 + __field( __u32, lright ) 691 717 __field( __u64, goal ) 692 - __field( __u64, lleft ) 693 - __field( __u64, lright ) 694 718 __field( __u64, pleft ) 695 719 __field( __u64, pright ) 696 720 ), ··· 709 733 __entry->pright = ar->pright; 710 734 ), 711 735 712 - TP_printk("dev %d,%d ino %lu flags %u len %u block %llu lblk %llu " 713 - "goal %llu lleft %llu lright %llu pleft %llu pright %llu", 736 + TP_printk("dev %d,%d ino %lu flags %u len %u block %llu lblk %u " 737 + "goal %llu lleft %u lright %u pleft %llu pright %llu", 714 738 MAJOR(__entry->dev), MINOR(__entry->dev), 715 - (unsigned long) __entry->ino, 716 - __entry->flags, __entry->len, __entry->block, 717 - (unsigned long long) __entry->logical, 718 - (unsigned long long) __entry->goal, 719 - (unsigned long long) __entry->lleft, 720 - (unsigned long long) __entry->lright, 721 - (unsigned long long) __entry->pleft, 722 - (unsigned long long) __entry->pright) 739 + (unsigned long) __entry->ino, __entry->flags, 740 + __entry->len, __entry->block, __entry->logical, 741 + __entry->goal, __entry->lleft, __entry->lright, 742 + __entry->pleft, __entry->pright) 723 743 ); 724 744 725 745 TRACE_EVENT(ext4_free_blocks, ··· 727 755 TP_STRUCT__entry( 728 756 __field( dev_t, dev ) 729 757 __field( ino_t, ino ) 730 - __field( umode_t, mode ) 758 + __field( umode_t, mode ) 731 759 __field( __u64, block ) 732 760 __field( unsigned long, count ) 733 - __field( int, flags ) 761 + __field( int, flags ) 734 762 ), 735 763 736 764 TP_fast_assign( ··· 770 798 __entry->parent = dentry->d_parent->d_inode->i_ino; 771 799 ), 772 800 773 - TP_printk("dev %d,%d ino %ld parent %ld datasync %d ", 801 + TP_printk("dev %d,%d ino %lu parent %lu datasync %d ", 774 802 MAJOR(__entry->dev), MINOR(__entry->dev), 775 803 (unsigned long) __entry->ino, 776 804 (unsigned long) __entry->parent, 
__entry->datasync) ··· 793 821 __entry->dev = inode->i_sb->s_dev; 794 822 ), 795 823 796 - TP_printk("dev %d,%d ino %ld ret %d", 824 + TP_printk("dev %d,%d ino %lu ret %d", 797 825 MAJOR(__entry->dev), MINOR(__entry->dev), 798 826 (unsigned long) __entry->ino, 799 827 __entry->ret) ··· 977 1005 __entry->result_len = len; 978 1006 ), 979 1007 980 - TP_printk("dev %d,%d inode %lu extent %u/%d/%u ", 1008 + TP_printk("dev %d,%d inode %lu extent %u/%d/%d ", 981 1009 MAJOR(__entry->dev), MINOR(__entry->dev), 982 1010 (unsigned long) __entry->ino, 983 1011 __entry->result_group, __entry->result_start, ··· 1065 1093 "allocated_meta_blocks %d", 1066 1094 MAJOR(__entry->dev), MINOR(__entry->dev), 1067 1095 (unsigned long) __entry->ino, 1068 - __entry->mode, (unsigned long long) __entry->i_blocks, 1096 + __entry->mode, __entry->i_blocks, 1069 1097 __entry->used_blocks, __entry->reserved_data_blocks, 1070 1098 __entry->reserved_meta_blocks, __entry->allocated_meta_blocks) 1071 1099 ); ··· 1099 1127 "reserved_data_blocks %d reserved_meta_blocks %d", 1100 1128 MAJOR(__entry->dev), MINOR(__entry->dev), 1101 1129 (unsigned long) __entry->ino, 1102 - __entry->mode, (unsigned long long) __entry->i_blocks, 1130 + __entry->mode, __entry->i_blocks, 1103 1131 __entry->md_needed, __entry->reserved_data_blocks, 1104 1132 __entry->reserved_meta_blocks) 1105 1133 ); ··· 1136 1164 "allocated_meta_blocks %d", 1137 1165 MAJOR(__entry->dev), MINOR(__entry->dev), 1138 1166 (unsigned long) __entry->ino, 1139 - __entry->mode, (unsigned long long) __entry->i_blocks, 1167 + __entry->mode, __entry->i_blocks, 1140 1168 __entry->freed_blocks, __entry->reserved_data_blocks, 1141 1169 __entry->reserved_meta_blocks, __entry->allocated_meta_blocks) 1142 1170 ); ··· 1211 1239 __entry->rw = rw; 1212 1240 ), 1213 1241 1214 - TP_printk("dev %d,%d ino %lu pos %llu len %lu rw %d", 1242 + TP_printk("dev %d,%d ino %lu pos %lld len %lu rw %d", 1215 1243 MAJOR(__entry->dev), MINOR(__entry->dev), 1216 1244 (unsigned 
long) __entry->ino, 1217 - (unsigned long long) __entry->pos, __entry->len, __entry->rw) 1245 + __entry->pos, __entry->len, __entry->rw) 1218 1246 ); 1219 1247 1220 1248 TRACE_EVENT(ext4_direct_IO_exit, 1221 - TP_PROTO(struct inode *inode, loff_t offset, unsigned long len, int rw, int ret), 1249 + TP_PROTO(struct inode *inode, loff_t offset, unsigned long len, 1250 + int rw, int ret), 1222 1251 1223 1252 TP_ARGS(inode, offset, len, rw, ret), 1224 1253 ··· 1241 1268 __entry->ret = ret; 1242 1269 ), 1243 1270 1244 - TP_printk("dev %d,%d ino %lu pos %llu len %lu rw %d ret %d", 1271 + TP_printk("dev %d,%d ino %lu pos %lld len %lu rw %d ret %d", 1245 1272 MAJOR(__entry->dev), MINOR(__entry->dev), 1246 1273 (unsigned long) __entry->ino, 1247 - (unsigned long long) __entry->pos, __entry->len, 1274 + __entry->pos, __entry->len, 1248 1275 __entry->rw, __entry->ret) 1249 1276 ); 1250 1277 ··· 1269 1296 __entry->mode = mode; 1270 1297 ), 1271 1298 1272 - TP_printk("dev %d,%d ino %ld pos %llu len %llu mode %d", 1299 + TP_printk("dev %d,%d ino %lu pos %lld len %lld mode %d", 1273 1300 MAJOR(__entry->dev), MINOR(__entry->dev), 1274 - (unsigned long) __entry->ino, 1275 - (unsigned long long) __entry->pos, 1276 - (unsigned long long) __entry->len, __entry->mode) 1301 + (unsigned long) __entry->ino, __entry->pos, 1302 + __entry->len, __entry->mode) 1277 1303 ); 1278 1304 1279 1305 TRACE_EVENT(ext4_fallocate_exit, 1280 - TP_PROTO(struct inode *inode, loff_t offset, unsigned int max_blocks, int ret), 1306 + TP_PROTO(struct inode *inode, loff_t offset, 1307 + unsigned int max_blocks, int ret), 1281 1308 1282 1309 TP_ARGS(inode, offset, max_blocks, ret), 1283 1310 ··· 1285 1312 __field( ino_t, ino ) 1286 1313 __field( dev_t, dev ) 1287 1314 __field( loff_t, pos ) 1288 - __field( unsigned, blocks ) 1315 + __field( unsigned int, blocks ) 1289 1316 __field( int, ret ) 1290 1317 ), 1291 1318 ··· 1297 1324 __entry->ret = ret; 1298 1325 ), 1299 1326 1300 - TP_printk("dev %d,%d ino %ld pos 
%llu blocks %d ret %d", 1327 + TP_printk("dev %d,%d ino %lu pos %lld blocks %u ret %d", 1301 1328 MAJOR(__entry->dev), MINOR(__entry->dev), 1302 1329 (unsigned long) __entry->ino, 1303 - (unsigned long long) __entry->pos, __entry->blocks, 1330 + __entry->pos, __entry->blocks, 1304 1331 __entry->ret) 1305 1332 ); 1306 1333 ··· 1323 1350 __entry->dev = dentry->d_inode->i_sb->s_dev; 1324 1351 ), 1325 1352 1326 - TP_printk("dev %d,%d ino %ld size %lld parent %ld", 1353 + TP_printk("dev %d,%d ino %lu size %lld parent %lu", 1327 1354 MAJOR(__entry->dev), MINOR(__entry->dev), 1328 1355 (unsigned long) __entry->ino, __entry->size, 1329 1356 (unsigned long) __entry->parent) ··· 1346 1373 __entry->ret = ret; 1347 1374 ), 1348 1375 1349 - TP_printk("dev %d,%d ino %ld ret %d", 1376 + TP_printk("dev %d,%d ino %lu ret %d", 1350 1377 MAJOR(__entry->dev), MINOR(__entry->dev), 1351 1378 (unsigned long) __entry->ino, 1352 1379 __entry->ret) ··· 1360 1387 TP_STRUCT__entry( 1361 1388 __field( ino_t, ino ) 1362 1389 __field( dev_t, dev ) 1363 - __field( blkcnt_t, blocks ) 1390 + __field( __u64, blocks ) 1364 1391 ), 1365 1392 1366 1393 TP_fast_assign( ··· 1369 1396 __entry->blocks = inode->i_blocks; 1370 1397 ), 1371 1398 1372 - TP_printk("dev %d,%d ino %lu blocks %lu", 1399 + TP_printk("dev %d,%d ino %lu blocks %llu", 1373 1400 MAJOR(__entry->dev), MINOR(__entry->dev), 1374 - (unsigned long) __entry->ino, (unsigned long) __entry->blocks) 1401 + (unsigned long) __entry->ino, __entry->blocks) 1375 1402 ); 1376 1403 1377 1404 DEFINE_EVENT(ext4__truncate, ext4_truncate_enter, ··· 1390 1417 1391 1418 DECLARE_EVENT_CLASS(ext4__map_blocks_enter, 1392 1419 TP_PROTO(struct inode *inode, ext4_lblk_t lblk, 1393 - unsigned len, unsigned flags), 1420 + unsigned int len, unsigned int flags), 1394 1421 1395 1422 TP_ARGS(inode, lblk, len, flags), 1396 1423 ··· 1398 1425 __field( ino_t, ino ) 1399 1426 __field( dev_t, dev ) 1400 1427 __field( ext4_lblk_t, lblk ) 1401 - __field( unsigned, len ) 1402 - 
__field( unsigned, flags ) 1428 + __field( unsigned int, len ) 1429 + __field( unsigned int, flags ) 1403 1430 ), 1404 1431 1405 1432 TP_fast_assign( ··· 1413 1440 TP_printk("dev %d,%d ino %lu lblk %u len %u flags %u", 1414 1441 MAJOR(__entry->dev), MINOR(__entry->dev), 1415 1442 (unsigned long) __entry->ino, 1416 - (unsigned) __entry->lblk, __entry->len, __entry->flags) 1443 + __entry->lblk, __entry->len, __entry->flags) 1417 1444 ); 1418 1445 1419 1446 DEFINE_EVENT(ext4__map_blocks_enter, ext4_ext_map_blocks_enter, ··· 1432 1459 1433 1460 DECLARE_EVENT_CLASS(ext4__map_blocks_exit, 1434 1461 TP_PROTO(struct inode *inode, ext4_lblk_t lblk, 1435 - ext4_fsblk_t pblk, unsigned len, int ret), 1462 + ext4_fsblk_t pblk, unsigned int len, int ret), 1436 1463 1437 1464 TP_ARGS(inode, lblk, pblk, len, ret), 1438 1465 ··· 1441 1468 __field( dev_t, dev ) 1442 1469 __field( ext4_lblk_t, lblk ) 1443 1470 __field( ext4_fsblk_t, pblk ) 1444 - __field( unsigned, len ) 1471 + __field( unsigned int, len ) 1445 1472 __field( int, ret ) 1446 1473 ), 1447 1474 ··· 1457 1484 TP_printk("dev %d,%d ino %lu lblk %u pblk %llu len %u ret %d", 1458 1485 MAJOR(__entry->dev), MINOR(__entry->dev), 1459 1486 (unsigned long) __entry->ino, 1460 - (unsigned) __entry->lblk, (unsigned long long) __entry->pblk, 1487 + __entry->lblk, __entry->pblk, 1461 1488 __entry->len, __entry->ret) 1462 1489 ); 1463 1490 ··· 1497 1524 TP_printk("dev %d,%d ino %lu lblk %u pblk %llu", 1498 1525 MAJOR(__entry->dev), MINOR(__entry->dev), 1499 1526 (unsigned long) __entry->ino, 1500 - (unsigned) __entry->lblk, (unsigned long long) __entry->pblk) 1527 + __entry->lblk, __entry->pblk) 1501 1528 ); 1502 1529 1503 1530 TRACE_EVENT(ext4_load_inode,
+8 -6
init/calibrate.c
··· 245 245 246 246 void __cpuinit calibrate_delay(void) 247 247 { 248 + unsigned long lpj; 248 249 static bool printed; 249 250 250 251 if (preset_lpj) { 251 - loops_per_jiffy = preset_lpj; 252 + lpj = preset_lpj; 252 253 if (!printed) 253 254 pr_info("Calibrating delay loop (skipped) " 254 255 "preset value.. "); 255 256 } else if ((!printed) && lpj_fine) { 256 - loops_per_jiffy = lpj_fine; 257 + lpj = lpj_fine; 257 258 pr_info("Calibrating delay loop (skipped), " 258 259 "value calculated using timer frequency.. "); 259 - } else if ((loops_per_jiffy = calibrate_delay_direct()) != 0) { 260 + } else if ((lpj = calibrate_delay_direct()) != 0) { 260 261 if (!printed) 261 262 pr_info("Calibrating delay using timer " 262 263 "specific routine.. "); 263 264 } else { 264 265 if (!printed) 265 266 pr_info("Calibrating delay loop... "); 266 - loops_per_jiffy = calibrate_delay_converge(); 267 + lpj = calibrate_delay_converge(); 267 268 } 268 269 if (!printed) 269 270 pr_cont("%lu.%02lu BogoMIPS (lpj=%lu)\n", 270 - loops_per_jiffy/(500000/HZ), 271 - (loops_per_jiffy/(5000/HZ)) % 100, loops_per_jiffy); 271 + lpj/(500000/HZ), 272 + (lpj/(5000/HZ)) % 100, lpj); 272 273 274 + loops_per_jiffy = lpj; 273 275 printed = true; 274 276 }
+3 -1
kernel/power/user.c
··· 113 113 if (error) 114 114 pm_notifier_call_chain(PM_POST_RESTORE); 115 115 } 116 - if (error) 116 + if (error) { 117 + free_basic_memory_bitmaps(); 117 118 atomic_inc(&snapshot_device_available); 119 + } 118 120 data->frozen = 0; 119 121 data->ready = 0; 120 122 data->platform_support = 0;
+12 -3
kernel/taskstats.c
··· 285 285 static int add_del_listener(pid_t pid, const struct cpumask *mask, int isadd) 286 286 { 287 287 struct listener_list *listeners; 288 - struct listener *s, *tmp; 288 + struct listener *s, *tmp, *s2; 289 289 unsigned int cpu; 290 290 291 291 if (!cpumask_subset(mask, cpu_possible_mask)) 292 292 return -EINVAL; 293 293 294 + s = NULL; 294 295 if (isadd == REGISTER) { 295 296 for_each_cpu(cpu, mask) { 296 - s = kmalloc_node(sizeof(struct listener), GFP_KERNEL, 297 - cpu_to_node(cpu)); 297 + if (!s) 298 + s = kmalloc_node(sizeof(struct listener), 299 + GFP_KERNEL, cpu_to_node(cpu)); 298 300 if (!s) 299 301 goto cleanup; 300 302 s->pid = pid; ··· 305 303 306 304 listeners = &per_cpu(listener_array, cpu); 307 305 down_write(&listeners->sem); 306 + list_for_each_entry_safe(s2, tmp, &listeners->list, list) { 307 + if (s2->pid == pid) 308 + goto next_cpu; 309 + } 308 310 list_add(&s->list, &listeners->list); 311 + s = NULL; 312 + next_cpu: 309 313 up_write(&listeners->sem); 310 314 } 315 + kfree(s); 311 316 return 0; 312 317 } 313 318
+88 -70
kernel/time/alarmtimer.c
··· 42 42 clockid_t base_clockid; 43 43 } alarm_bases[ALARM_NUMTYPE]; 44 44 45 + /* freezer delta & lock used to handle clock_nanosleep triggered wakeups */ 46 + static ktime_t freezer_delta; 47 + static DEFINE_SPINLOCK(freezer_delta_lock); 48 + 45 49 #ifdef CONFIG_RTC_CLASS 46 50 /* rtc timer and device for setting alarm wakeups at suspend */ 47 51 static struct rtc_timer rtctimer; 48 52 static struct rtc_device *rtcdev; 49 - #endif 53 + static DEFINE_SPINLOCK(rtcdev_lock); 50 54 51 - /* freezer delta & lock used to handle clock_nanosleep triggered wakeups */ 52 - static ktime_t freezer_delta; 53 - static DEFINE_SPINLOCK(freezer_delta_lock); 55 + /** 56 + * has_wakealarm - check rtc device has wakealarm ability 57 + * @dev: current device 58 + * @name_ptr: name to be returned 59 + * 60 + * This helper function checks to see if the rtc device can wake 61 + * from suspend. 62 + */ 63 + static int has_wakealarm(struct device *dev, void *name_ptr) 64 + { 65 + struct rtc_device *candidate = to_rtc_device(dev); 66 + 67 + if (!candidate->ops->set_alarm) 68 + return 0; 69 + if (!device_may_wakeup(candidate->dev.parent)) 70 + return 0; 71 + 72 + *(const char **)name_ptr = dev_name(dev); 73 + return 1; 74 + } 75 + 76 + /** 77 + * alarmtimer_get_rtcdev - Return selected rtcdevice 78 + * 79 + * This function returns the rtc device to use for wakealarms. 80 + * If one has not already been chosen, it checks to see if a 81 + * functional rtc device is available. 82 + */ 83 + static struct rtc_device *alarmtimer_get_rtcdev(void) 84 + { 85 + struct device *dev; 86 + char *str; 87 + unsigned long flags; 88 + struct rtc_device *ret; 89 + 90 + spin_lock_irqsave(&rtcdev_lock, flags); 91 + if (!rtcdev) { 92 + /* Find an rtc device and init the rtc_timer */ 93 + dev = class_find_device(rtc_class, NULL, &str, has_wakealarm); 94 + /* If we have a device then str is valid. 
See has_wakealarm() */ 95 + if (dev) { 96 + rtcdev = rtc_class_open(str); 97 + /* 98 + * Drop the reference we got in class_find_device, 99 + * rtc_open takes its own. 100 + */ 101 + put_device(dev); 102 + rtc_timer_init(&rtctimer, NULL, NULL); 103 + } 104 + } 105 + ret = rtcdev; 106 + spin_unlock_irqrestore(&rtcdev_lock, flags); 107 + 108 + return ret; 109 + } 110 + #else 111 + #define alarmtimer_get_rtcdev() (0) 112 + #define rtcdev (0) 113 + #endif 54 114 55 115 56 116 /** ··· 226 166 struct rtc_time tm; 227 167 ktime_t min, now; 228 168 unsigned long flags; 169 + struct rtc_device *rtc; 229 170 int i; 230 171 231 172 spin_lock_irqsave(&freezer_delta_lock, flags); ··· 234 173 freezer_delta = ktime_set(0, 0); 235 174 spin_unlock_irqrestore(&freezer_delta_lock, flags); 236 175 176 + rtc = rtcdev; 237 177 /* If we have no rtcdev, just return */ 238 - if (!rtcdev) 178 + if (!rtc) 239 179 return 0; 240 180 241 181 /* Find the soonest timer to expire*/ ··· 261 199 WARN_ON(min.tv64 < NSEC_PER_SEC); 262 200 263 201 /* Setup an rtc timer to fire that far in the future */ 264 - rtc_timer_cancel(rtcdev, &rtctimer); 265 - rtc_read_time(rtcdev, &tm); 202 + rtc_timer_cancel(rtc, &rtctimer); 203 + rtc_read_time(rtc, &tm); 266 204 now = rtc_tm_to_ktime(tm); 267 205 now = ktime_add(now, min); 268 206 269 - rtc_timer_start(rtcdev, &rtctimer, now, ktime_set(0, 0)); 207 + rtc_timer_start(rtc, &rtctimer, now, ktime_set(0, 0)); 270 208 271 209 return 0; 272 210 } ··· 384 322 { 385 323 clockid_t baseid = alarm_bases[clock2alarm(which_clock)].base_clockid; 386 324 325 + if (!alarmtimer_get_rtcdev()) 326 + return -ENOTSUPP; 327 + 387 328 return hrtimer_get_res(baseid, tp); 388 329 } 389 330 ··· 400 335 static int alarm_clock_get(clockid_t which_clock, struct timespec *tp) 401 336 { 402 337 struct alarm_base *base = &alarm_bases[clock2alarm(which_clock)]; 338 + 339 + if (!alarmtimer_get_rtcdev()) 340 + return -ENOTSUPP; 403 341 404 342 *tp = ktime_to_timespec(base->gettime()); 405 343 
return 0; ··· 418 350 { 419 351 enum alarmtimer_type type; 420 352 struct alarm_base *base; 353 + 354 + if (!alarmtimer_get_rtcdev()) 355 + return -ENOTSUPP; 421 356 422 357 if (!capable(CAP_WAKE_ALARM)) 423 358 return -EPERM; ··· 456 385 */ 457 386 static int alarm_timer_del(struct k_itimer *timr) 458 387 { 388 + if (!rtcdev) 389 + return -ENOTSUPP; 390 + 459 391 alarm_cancel(&timr->it.alarmtimer); 460 392 return 0; 461 393 } ··· 476 402 struct itimerspec *new_setting, 477 403 struct itimerspec *old_setting) 478 404 { 405 + if (!rtcdev) 406 + return -ENOTSUPP; 407 + 479 408 /* Save old values */ 480 409 old_setting->it_interval = 481 410 ktime_to_timespec(timr->it.alarmtimer.period); ··· 618 541 int ret = 0; 619 542 struct restart_block *restart; 620 543 544 + if (!alarmtimer_get_rtcdev()) 545 + return -ENOTSUPP; 546 + 621 547 if (!capable(CAP_WAKE_ALARM)) 622 548 return -EPERM; 623 549 ··· 718 638 } 719 639 device_initcall(alarmtimer_init); 720 640 721 - #ifdef CONFIG_RTC_CLASS 722 - /** 723 - * has_wakealarm - check rtc device has wakealarm ability 724 - * @dev: current device 725 - * @name_ptr: name to be returned 726 - * 727 - * This helper function checks to see if the rtc device can wake 728 - * from suspend. 729 - */ 730 - static int __init has_wakealarm(struct device *dev, void *name_ptr) 731 - { 732 - struct rtc_device *candidate = to_rtc_device(dev); 733 - 734 - if (!candidate->ops->set_alarm) 735 - return 0; 736 - if (!device_may_wakeup(candidate->dev.parent)) 737 - return 0; 738 - 739 - *(const char **)name_ptr = dev_name(dev); 740 - return 1; 741 - } 742 - 743 - /** 744 - * alarmtimer_init_late - Late initializing of alarmtimer code 745 - * 746 - * This function locates a rtc device to use for wakealarms. 747 - * Run as late_initcall to make sure rtc devices have been 748 - * registered. 
749 - */ 750 - static int __init alarmtimer_init_late(void) 751 - { 752 - struct device *dev; 753 - char *str; 754 - 755 - /* Find an rtc device and init the rtc_timer */ 756 - dev = class_find_device(rtc_class, NULL, &str, has_wakealarm); 757 - /* If we have a device then str is valid. See has_wakealarm() */ 758 - if (dev) { 759 - rtcdev = rtc_class_open(str); 760 - /* 761 - * Drop the reference we got in class_find_device, 762 - * rtc_open takes its own. 763 - */ 764 - put_device(dev); 765 - } 766 - if (!rtcdev) { 767 - printk(KERN_WARNING "No RTC device found, ALARM timers will" 768 - " not wake from suspend"); 769 - } 770 - rtc_timer_init(&rtctimer, NULL, NULL); 771 - 772 - return 0; 773 - } 774 - #else 775 - static int __init alarmtimer_init_late(void) 776 - { 777 - printk(KERN_WARNING "Kernel not built with RTC support, ALARM timers" 778 - " will not wake from suspend"); 779 - return 0; 780 - } 781 - #endif 782 - late_initcall(alarmtimer_init_late);
+1
mm/memcontrol.c
··· 35 35 #include <linux/limits.h> 36 36 #include <linux/mutex.h> 37 37 #include <linux/rbtree.h> 38 + #include <linux/shmem_fs.h> 38 39 #include <linux/slab.h> 39 40 #include <linux/swap.h> 40 41 #include <linux/swapops.h>
+6 -15
mm/memory-failure.c
··· 391 391 struct task_struct *tsk; 392 392 struct anon_vma *av; 393 393 394 - read_lock(&tasklist_lock); 395 394 av = page_lock_anon_vma(page); 396 395 if (av == NULL) /* Not actually mapped anymore */ 397 - goto out; 396 + return; 397 + 398 + read_lock(&tasklist_lock); 398 399 for_each_process (tsk) { 399 400 struct anon_vma_chain *vmac; 400 401 ··· 409 408 add_to_kill(tsk, page, vma, to_kill, tkc); 410 409 } 411 410 } 412 - page_unlock_anon_vma(av); 413 - out: 414 411 read_unlock(&tasklist_lock); 412 + page_unlock_anon_vma(av); 415 413 } 416 414 417 415 /* ··· 424 424 struct prio_tree_iter iter; 425 425 struct address_space *mapping = page->mapping; 426 426 427 - /* 428 - * A note on the locking order between the two locks. 429 - * We don't rely on this particular order. 430 - * If you have some other code that needs a different order 431 - * feel free to switch them around. Or add a reverse link 432 - * from mm_struct to task_struct, then this could be all 433 - * done without taking tasklist_lock and looping over all tasks. 434 - */ 435 - 436 - read_lock(&tasklist_lock); 437 427 mutex_lock(&mapping->i_mmap_mutex); 428 + read_lock(&tasklist_lock); 438 429 for_each_process(tsk) { 439 430 pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT); 440 431 ··· 445 454 add_to_kill(tsk, page, vma, to_kill, tkc); 446 455 } 447 456 } 448 - mutex_unlock(&mapping->i_mmap_mutex); 449 457 read_unlock(&tasklist_lock); 458 + mutex_unlock(&mapping->i_mmap_mutex); 450 459 } 451 460 452 461 /*
-24
mm/memory.c
··· 2798 2798 } 2799 2799 EXPORT_SYMBOL(unmap_mapping_range); 2800 2800 2801 - int vmtruncate_range(struct inode *inode, loff_t offset, loff_t end) 2802 - { 2803 - struct address_space *mapping = inode->i_mapping; 2804 - 2805 - /* 2806 - * If the underlying filesystem is not going to provide 2807 - * a way to truncate a range of blocks (punch a hole) - 2808 - * we should return failure right now. 2809 - */ 2810 - if (!inode->i_op->truncate_range) 2811 - return -ENOSYS; 2812 - 2813 - mutex_lock(&inode->i_mutex); 2814 - down_write(&inode->i_alloc_sem); 2815 - unmap_mapping_range(mapping, offset, (end - offset), 1); 2816 - truncate_inode_pages_range(mapping, offset, end); 2817 - unmap_mapping_range(mapping, offset, (end - offset), 1); 2818 - inode->i_op->truncate_range(inode, offset, end); 2819 - up_write(&inode->i_alloc_sem); 2820 - mutex_unlock(&inode->i_mutex); 2821 - 2822 - return 0; 2823 - } 2824 - 2825 2801 /* 2826 2802 * We enter with non-exclusive mmap_sem (to exclude vma changes, 2827 2803 * but allow concurrent faults), and pte mapped but not yet locked.
+3 -1
mm/memory_hotplug.c
··· 498 498 * The node we allocated has no zone fallback lists. For avoiding 499 499 * to access not-initialized zonelist, build here. 500 500 */ 501 + mutex_lock(&zonelists_mutex); 501 502 build_all_zonelists(NULL); 503 + mutex_unlock(&zonelists_mutex); 502 504 503 505 return pgdat; 504 506 } ··· 523 521 524 522 lock_memory_hotplug(); 525 523 pgdat = hotadd_new_pgdat(nid, 0); 526 - if (pgdat) { 524 + if (!pgdat) { 527 525 ret = -ENOMEM; 528 526 goto out; 529 527 }
+2 -3
mm/rmap.c
··· 38 38 * in arch-dependent flush_dcache_mmap_lock, 39 39 * within inode_wb_list_lock in __sync_single_inode) 40 40 * 41 - * (code doesn't rely on that order so it could be switched around) 42 - * ->tasklist_lock 43 - * anon_vma->mutex (memory_failure, collect_procs_anon) 41 + * anon_vma->mutex,mapping->i_mutex (memory_failure, collect_procs_anon) 42 + * ->tasklist_lock 44 43 * pte map lock 45 44 */ 46 45
+52 -22
mm/shmem.c
··· 539 539 } while (next); 540 540 } 541 541 542 - static void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end) 542 + void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end) 543 543 { 544 544 struct shmem_inode_info *info = SHMEM_I(inode); 545 545 unsigned long idx; ··· 561 561 spinlock_t *needs_lock; 562 562 spinlock_t *punch_lock; 563 563 unsigned long upper_limit; 564 + 565 + truncate_inode_pages_range(inode->i_mapping, start, end); 564 566 565 567 inode->i_ctime = inode->i_mtime = CURRENT_TIME; 566 568 idx = (start + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT; ··· 740 738 * lowered next_index. Also, though shmem_getpage checks 741 739 * i_size before adding to cache, no recheck after: so fix the 742 740 * narrow window there too. 743 - * 744 - * Recalling truncate_inode_pages_range and unmap_mapping_range 745 - * every time for punch_hole (which never got a chance to clear 746 - * SHMEM_PAGEIN at the start of vmtruncate_range) is expensive, 747 - * yet hardly ever necessary: try to optimize them out later. 
748 741 */ 749 742 truncate_inode_pages_range(inode->i_mapping, start, end); 750 - if (punch_hole) 751 - unmap_mapping_range(inode->i_mapping, start, 752 - end - start, 1); 753 743 } 754 744 755 745 spin_lock(&info->lock); ··· 760 766 shmem_free_pages(pages_to_free.next); 761 767 } 762 768 } 769 + EXPORT_SYMBOL_GPL(shmem_truncate_range); 763 770 764 - static int shmem_notify_change(struct dentry *dentry, struct iattr *attr) 771 + static int shmem_setattr(struct dentry *dentry, struct iattr *attr) 765 772 { 766 773 struct inode *inode = dentry->d_inode; 767 - loff_t newsize = attr->ia_size; 768 774 int error; 769 775 770 776 error = inode_change_ok(inode, attr); 771 777 if (error) 772 778 return error; 773 779 774 - if (S_ISREG(inode->i_mode) && (attr->ia_valid & ATTR_SIZE) 775 - && newsize != inode->i_size) { 780 + if (S_ISREG(inode->i_mode) && (attr->ia_valid & ATTR_SIZE)) { 781 + loff_t oldsize = inode->i_size; 782 + loff_t newsize = attr->ia_size; 776 783 struct page *page = NULL; 777 784 778 - if (newsize < inode->i_size) { 785 + if (newsize < oldsize) { 779 786 /* 780 787 * If truncating down to a partial page, then 781 788 * if that page is already allocated, hold it ··· 805 810 spin_unlock(&info->lock); 806 811 } 807 812 } 808 - 809 - /* XXX(truncate): truncate_setsize should be called last */ 810 - truncate_setsize(inode, newsize); 813 + if (newsize != oldsize) { 814 + i_size_write(inode, newsize); 815 + inode->i_ctime = inode->i_mtime = CURRENT_TIME; 816 + } 817 + if (newsize < oldsize) { 818 + loff_t holebegin = round_up(newsize, PAGE_SIZE); 819 + unmap_mapping_range(inode->i_mapping, holebegin, 0, 1); 820 + shmem_truncate_range(inode, newsize, (loff_t)-1); 821 + /* unmap again to remove racily COWed private pages */ 822 + unmap_mapping_range(inode->i_mapping, holebegin, 0, 1); 823 + } 811 824 if (page) 812 825 page_cache_release(page); 813 - shmem_truncate_range(inode, newsize, (loff_t)-1); 814 826 } 815 827 816 828 setattr_copy(inode, attr); ··· 834 832 
struct shmem_xattr *xattr, *nxattr; 835 833 836 834 if (inode->i_mapping->a_ops == &shmem_aops) { 837 - truncate_inode_pages(inode->i_mapping, 0); 838 835 shmem_unacct_size(info->flags, inode->i_size); 839 836 inode->i_size = 0; 840 837 shmem_truncate_range(inode, 0, (loff_t)-1); ··· 2707 2706 }; 2708 2707 2709 2708 static const struct inode_operations shmem_inode_operations = { 2710 - .setattr = shmem_notify_change, 2709 + .setattr = shmem_setattr, 2711 2710 .truncate_range = shmem_truncate_range, 2712 2711 #ifdef CONFIG_TMPFS_XATTR 2713 2712 .setxattr = shmem_setxattr, ··· 2740 2739 .removexattr = shmem_removexattr, 2741 2740 #endif 2742 2741 #ifdef CONFIG_TMPFS_POSIX_ACL 2743 - .setattr = shmem_notify_change, 2742 + .setattr = shmem_setattr, 2744 2743 .check_acl = generic_check_acl, 2745 2744 #endif 2746 2745 }; ··· 2753 2752 .removexattr = shmem_removexattr, 2754 2753 #endif 2755 2754 #ifdef CONFIG_TMPFS_POSIX_ACL 2756 - .setattr = shmem_notify_change, 2755 + .setattr = shmem_setattr, 2757 2756 .check_acl = generic_check_acl, 2758 2757 #endif 2759 2758 }; ··· 2909 2908 return 0; 2910 2909 } 2911 2910 2911 + void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end) 2912 + { 2913 + truncate_inode_pages_range(inode->i_mapping, start, end); 2914 + } 2915 + EXPORT_SYMBOL_GPL(shmem_truncate_range); 2916 + 2912 2917 #ifdef CONFIG_CGROUP_MEM_RES_CTLR 2913 2918 /** 2914 2919 * mem_cgroup_get_shmem_target - find a page or entry assigned to the shmem file ··· 3035 3028 vma->vm_flags |= VM_CAN_NONLINEAR; 3036 3029 return 0; 3037 3030 } 3031 + 3032 + /** 3033 + * shmem_read_mapping_page_gfp - read into page cache, using specified page allocation flags. 3034 + * @mapping: the page's address_space 3035 + * @index: the page index 3036 + * @gfp: the page allocator flags to use if allocating 3037 + * 3038 + * This behaves as a tmpfs "read_cache_page_gfp(mapping, index, gfp)", 3039 + * with any new page allocations done using the specified allocation flags. 
3040 + * But read_cache_page_gfp() uses the ->readpage() method, which does not 3041 + * suit tmpfs, since it may have pages in swapcache, and needs to find those 3042 + * for itself; although drivers/gpu/drm i915 and ttm rely upon this support. 3043 + * 3044 + * Provide a stub for those callers to start using now, then later 3045 + * flesh it out to call shmem_getpage() with additional gfp mask, when 3046 + * shmem_file_splice_read() is added and shmem_readpage() is removed. 3047 + */ 3048 + struct page *shmem_read_mapping_page_gfp(struct address_space *mapping, 3049 + pgoff_t index, gfp_t gfp) 3050 + { 3051 + return read_cache_page_gfp(mapping, index, gfp); 3052 + } 3053 + EXPORT_SYMBOL_GPL(shmem_read_mapping_page_gfp);
+1 -1
mm/swapfile.c
··· 14 14 #include <linux/vmalloc.h> 15 15 #include <linux/pagemap.h> 16 16 #include <linux/namei.h> 17 - #include <linux/shm.h> 17 + #include <linux/shmem_fs.h> 18 18 #include <linux/blkdev.h> 19 19 #include <linux/random.h> 20 20 #include <linux/writeback.h>
+29
mm/truncate.c
··· 304 304 * @lstart: offset from which to truncate 305 305 * 306 306 * Called under (and serialised by) inode->i_mutex. 307 + * 308 + * Note: When this function returns, there can be a page in the process of 309 + * deletion (inside __delete_from_page_cache()) in the specified range. Thus 310 + * mapping->nrpages can be non-zero when this function returns even after 311 + * truncation of the whole mapping. 307 312 */ 308 313 void truncate_inode_pages(struct address_space *mapping, loff_t lstart) 309 314 { ··· 608 603 return 0; 609 604 } 610 605 EXPORT_SYMBOL(vmtruncate); 606 + 607 + int vmtruncate_range(struct inode *inode, loff_t offset, loff_t end) 608 + { 609 + struct address_space *mapping = inode->i_mapping; 610 + 611 + /* 612 + * If the underlying filesystem is not going to provide 613 + * a way to truncate a range of blocks (punch a hole) - 614 + * we should return failure right now. 615 + */ 616 + if (!inode->i_op->truncate_range) 617 + return -ENOSYS; 618 + 619 + mutex_lock(&inode->i_mutex); 620 + down_write(&inode->i_alloc_sem); 621 + unmap_mapping_range(mapping, offset, (end - offset), 1); 622 + inode->i_op->truncate_range(inode, offset, end); 623 + /* unmap again to remove racily COWed private pages */ 624 + unmap_mapping_range(mapping, offset, (end - offset), 1); 625 + up_write(&inode->i_alloc_sem); 626 + mutex_unlock(&inode->i_mutex); 627 + 628 + return 0; 629 + }
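The new vmtruncate_range() above unmaps the range, punches the hole through the filesystem's ->truncate_range(), then unmaps again to catch racily COWed private pages. On tmpfs this path is reachable from userspace through madvise(MADV_REMOVE), which in kernels of this era calls vmtruncate_range() from madvise_remove(). A minimal sketch of the observable semantics (the helper name and file path are illustrative, and it assumes /dev/shm is tmpfs):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Illustrative hole punch on tmpfs: madvise(MADV_REMOVE) frees the
 * backing pages; later reads fault in zeros.  Path/name are made up. */
int punch_and_check(void)
{
    const size_t pg = (size_t)sysconf(_SC_PAGESIZE);
    char path[] = "/dev/shm/hole_demo_XXXXXX";
    int fd = mkstemp(path);
    int ok;

    if (fd < 0)
        return -1;
    unlink(path);                       /* file lives on only via fd */
    if (ftruncate(fd, 4 * pg) != 0)
        return -1;

    char *map = mmap(NULL, 4 * pg, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (map == MAP_FAILED)
        return -1;
    memset(map, 'x', 4 * pg);           /* populate all four pages */

    /* Punch out the second page only. */
    if (madvise(map + pg, pg, MADV_REMOVE) != 0)
        return -1;

    ok = map[pg] == 0 && map[2 * pg - 1] == 0     /* hole reads zero */
         && map[0] == 'x' && map[3 * pg] == 'x';  /* neighbours intact */

    munmap(map, 4 * pg);
    close(fd);
    return ok;
}
```

After the punch, reads from the removed page fault in fresh zero pages while the neighbouring pages keep their data, matching the unmap/truncate/unmap sequence above.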
+15 -12
mm/vmscan.c
··· 1995 1995 * If a zone is deemed to be full of pinned pages then just give it a light 1996 1996 * scan then give up on it. 1997 1997 */ 1998 - static unsigned long shrink_zones(int priority, struct zonelist *zonelist, 1998 + static void shrink_zones(int priority, struct zonelist *zonelist, 1999 1999 struct scan_control *sc) 2000 2000 { 2001 2001 struct zoneref *z; 2002 2002 struct zone *zone; 2003 2003 unsigned long nr_soft_reclaimed; 2004 2004 unsigned long nr_soft_scanned; 2005 - unsigned long total_scanned = 0; 2006 2005 2007 2006 for_each_zone_zonelist_nodemask(zone, z, zonelist, 2008 2007 gfp_zone(sc->gfp_mask), sc->nodemask) { ··· 2016 2017 continue; 2017 2018 if (zone->all_unreclaimable && priority != DEF_PRIORITY) 2018 2019 continue; /* Let kswapd poll it */ 2020 + /* 2021 + * This steals pages from memory cgroups over softlimit 2022 + * and returns the number of reclaimed pages and 2023 + * scanned pages. This works for global memory pressure 2024 + * and balancing, not for a memcg's limit. 
2025 + */ 2026 + nr_soft_scanned = 0; 2027 + nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone, 2028 + sc->order, sc->gfp_mask, 2029 + &nr_soft_scanned); 2030 + sc->nr_reclaimed += nr_soft_reclaimed; 2031 + sc->nr_scanned += nr_soft_scanned; 2032 + /* need some check to avoid more shrink_zone() calls */ 2019 2033 } 2020 - 2021 - nr_soft_scanned = 0; 2022 - nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(zone, 2023 - sc->order, sc->gfp_mask, 2024 - &nr_soft_scanned); 2025 - sc->nr_reclaimed += nr_soft_reclaimed; 2026 - total_scanned += nr_soft_scanned; 2027 2034 2028 2035 shrink_zone(priority, zone, sc); 2029 2036 } 2030 - 2031 - return total_scanned; 2032 2037 } 2033 2038 2034 2039 static bool zone_reclaimable(struct zone *zone) ··· 2097 2094 sc->nr_scanned = 0; 2098 2095 if (!priority) 2099 2096 disable_swap_token(sc->mem_cgroup); 2100 - total_scanned += shrink_zones(priority, zonelist, sc); 2097 + shrink_zones(priority, zonelist, sc); 2101 2098 /* 2102 2099 * Don't shrink slabs when reclaiming memory from 2103 2100 * over limit cgroups
+5
net/8021q/vlan_dev.c
··· 588 588 static u32 vlan_dev_fix_features(struct net_device *dev, u32 features) 589 589 { 590 590 struct net_device *real_dev = vlan_dev_info(dev)->real_dev; 591 + u32 old_features = features; 591 592 592 593 features &= real_dev->features; 593 594 features &= real_dev->vlan_features; 595 + 596 + if (old_features & NETIF_F_SOFT_FEATURES) 597 + features |= old_features & NETIF_F_SOFT_FEATURES; 598 + 594 599 if (dev_ethtool_get_rx_csum(real_dev)) 595 600 features |= NETIF_F_RXCSUM; 596 601 features |= NETIF_F_LLTX;
+3 -1
net/bridge/br_device.c
··· 49 49 skb_pull(skb, ETH_HLEN); 50 50 51 51 rcu_read_lock(); 52 - if (is_multicast_ether_addr(dest)) { 52 + if (is_broadcast_ether_addr(dest)) 53 + br_flood_deliver(br, skb); 54 + else if (is_multicast_ether_addr(dest)) { 53 55 if (unlikely(netpoll_tx_running(dev))) { 54 56 br_flood_deliver(br, skb); 55 57 goto out;
+4 -2
net/bridge/br_input.c
··· 60 60 br = p->br; 61 61 br_fdb_update(br, p, eth_hdr(skb)->h_source); 62 62 63 - if (is_multicast_ether_addr(dest) && 63 + if (!is_broadcast_ether_addr(dest) && is_multicast_ether_addr(dest) && 64 64 br_multicast_rcv(br, p, skb)) 65 65 goto drop; 66 66 ··· 77 77 78 78 dst = NULL; 79 79 80 - if (is_multicast_ether_addr(dest)) { 80 + if (is_broadcast_ether_addr(dest)) 81 + skb2 = skb; 82 + else if (is_multicast_ether_addr(dest)) { 81 83 mdst = br_mdb_get(br, skb); 82 84 if (mdst || BR_INPUT_SKB_CB_MROUTERS_ONLY(skb)) { 83 85 if ((mdst && mdst->mglist) ||
+4 -1
net/bridge/br_multicast.c
··· 1379 1379 if (unlikely(ip_fast_csum((u8 *)iph, iph->ihl))) 1380 1380 return -EINVAL; 1381 1381 1382 - if (iph->protocol != IPPROTO_IGMP) 1382 + if (iph->protocol != IPPROTO_IGMP) { 1383 + if ((iph->daddr & IGMP_LOCAL_GROUP_MASK) != IGMP_LOCAL_GROUP) 1384 + BR_INPUT_SKB_CB(skb)->mrouters_only = 1; 1383 1385 return 0; 1386 + } 1384 1387 1385 1388 len = ntohs(iph->tot_len); 1386 1389 if (skb->len < len || len < ip_hdrlen(skb))
+4 -2
net/core/dst.c
··· 190 190 dst->lastuse = jiffies; 191 191 dst->flags = flags; 192 192 dst->next = NULL; 193 - dst_entries_add(ops, 1); 193 + if (!(flags & DST_NOCOUNT)) 194 + dst_entries_add(ops, 1); 194 195 return dst; 195 196 } 196 197 EXPORT_SYMBOL(dst_alloc); ··· 244 243 neigh_release(neigh); 245 244 } 246 245 247 - dst_entries_add(dst->ops, -1); 246 + if (!(dst->flags & DST_NOCOUNT)) 247 + dst_entries_add(dst->ops, -1); 248 248 249 249 if (dst->ops->destroy) 250 250 dst->ops->destroy(dst);
+3 -1
net/ipv4/af_inet.c
··· 465 465 if (addr_len < sizeof(struct sockaddr_in)) 466 466 goto out; 467 467 468 - if (addr->sin_family != AF_INET) 468 + if (addr->sin_family != AF_INET) { 469 + err = -EAFNOSUPPORT; 469 470 goto out; 471 + } 470 472 471 473 chk_addr_ret = inet_addr_type(sock_net(sk), addr->sin_addr.s_addr); 472 474
+8 -11
net/ipv4/ip_output.c
··· 802 802 skb = skb_peek_tail(queue); 803 803 804 804 exthdrlen = !skb ? rt->dst.header_len : 0; 805 - length += exthdrlen; 806 - transhdrlen += exthdrlen; 807 805 mtu = cork->fragsize; 808 806 809 807 hh_len = LL_RESERVED_SPACE(rt->dst.dev); ··· 828 830 cork->length += length; 829 831 if (((length > mtu) || (skb && skb_is_gso(skb))) && 830 832 (sk->sk_protocol == IPPROTO_UDP) && 831 - (rt->dst.dev->features & NETIF_F_UFO)) { 833 + (rt->dst.dev->features & NETIF_F_UFO) && !rt->dst.header_len) { 832 834 err = ip_ufo_append_data(sk, queue, getfrag, from, length, 833 835 hh_len, fragheaderlen, transhdrlen, 834 836 mtu, flags); ··· 881 883 else 882 884 alloclen = fraglen; 883 885 886 + alloclen += exthdrlen; 887 + 884 888 /* The last fragment gets additional space at tail. 885 889 * Note, with MSG_MORE we overallocate on fragments, 886 890 * because we have no idea what fragment will be 887 891 * the last. 888 892 */ 889 - if (datalen == length + fraggap) { 893 + if (datalen == length + fraggap) 890 894 alloclen += rt->dst.trailer_len; 891 - /* make sure mtu is not reached */ 892 - if (datalen > mtu - fragheaderlen - rt->dst.trailer_len) 893 - datalen -= ALIGN(rt->dst.trailer_len, 8); 894 - } 895 + 895 896 if (transhdrlen) { 896 897 skb = sock_alloc_send_skb(sk, 897 898 alloclen + hh_len + 15, ··· 923 926 /* 924 927 * Find where to start putting bytes. 925 928 */ 926 - data = skb_put(skb, fraglen); 929 + data = skb_put(skb, fraglen + exthdrlen); 927 930 skb_set_network_header(skb, exthdrlen); 928 931 skb->transport_header = (skb->network_header + 929 932 fragheaderlen); 930 - data += fragheaderlen; 933 + data += fragheaderlen + exthdrlen; 931 934 932 935 if (fraggap) { 933 936 skb->csum = skb_copy_and_csum_bits( ··· 1061 1064 */ 1062 1065 *rtp = NULL; 1063 1066 cork->fragsize = inet->pmtudisc == IP_PMTUDISC_PROBE ? 
1064 - rt->dst.dev->mtu : dst_mtu(rt->dst.path); 1067 + rt->dst.dev->mtu : dst_mtu(&rt->dst); 1065 1068 cork->dst = &rt->dst; 1066 1069 cork->length = 0; 1067 1070 cork->tx_flags = ipc->tx_flags;
+22 -38
net/ipv4/netfilter.c
··· 17 17 const struct iphdr *iph = ip_hdr(skb); 18 18 struct rtable *rt; 19 19 struct flowi4 fl4 = {}; 20 - unsigned long orefdst; 20 + __be32 saddr = iph->saddr; 21 + __u8 flags = 0; 21 22 unsigned int hh_len; 22 - unsigned int type; 23 23 24 - type = inet_addr_type(net, iph->saddr); 25 - if (skb->sk && inet_sk(skb->sk)->transparent) 26 - type = RTN_LOCAL; 27 - if (addr_type == RTN_UNSPEC) 28 - addr_type = type; 24 + if (!skb->sk && addr_type != RTN_LOCAL) { 25 + if (addr_type == RTN_UNSPEC) 26 + addr_type = inet_addr_type(net, saddr); 27 + if (addr_type == RTN_LOCAL || addr_type == RTN_UNICAST) 28 + flags |= FLOWI_FLAG_ANYSRC; 29 + else 30 + saddr = 0; 31 + } 29 32 30 33 /* some non-standard hacks like ipt_REJECT.c:send_reset() can cause 31 34 * packets with foreign saddr to appear on the NF_INET_LOCAL_OUT hook. 32 35 */ 33 - if (addr_type == RTN_LOCAL) { 34 - fl4.daddr = iph->daddr; 35 - if (type == RTN_LOCAL) 36 - fl4.saddr = iph->saddr; 37 - fl4.flowi4_tos = RT_TOS(iph->tos); 38 - fl4.flowi4_oif = skb->sk ? skb->sk->sk_bound_dev_if : 0; 39 - fl4.flowi4_mark = skb->mark; 40 - fl4.flowi4_flags = skb->sk ? inet_sk_flowi_flags(skb->sk) : 0; 41 - rt = ip_route_output_key(net, &fl4); 42 - if (IS_ERR(rt)) 43 - return -1; 36 + fl4.daddr = iph->daddr; 37 + fl4.saddr = saddr; 38 + fl4.flowi4_tos = RT_TOS(iph->tos); 39 + fl4.flowi4_oif = skb->sk ? skb->sk->sk_bound_dev_if : 0; 40 + fl4.flowi4_mark = skb->mark; 41 + fl4.flowi4_flags = skb->sk ? inet_sk_flowi_flags(skb->sk) : flags; 42 + rt = ip_route_output_key(net, &fl4); 43 + if (IS_ERR(rt)) 44 + return -1; 44 45 45 - /* Drop old route. */ 46 - skb_dst_drop(skb); 47 - skb_dst_set(skb, &rt->dst); 48 - } else { 49 - /* non-local src, find valid iif to satisfy 50 - * rp-filter when calling ip_route_input. 
*/ 51 - fl4.daddr = iph->saddr; 52 - rt = ip_route_output_key(net, &fl4); 53 - if (IS_ERR(rt)) 54 - return -1; 55 - 56 - orefdst = skb->_skb_refdst; 57 - if (ip_route_input(skb, iph->daddr, iph->saddr, 58 - RT_TOS(iph->tos), rt->dst.dev) != 0) { 59 - dst_release(&rt->dst); 60 - return -1; 61 - } 62 - dst_release(&rt->dst); 63 - refdst_drop(orefdst); 64 - } 46 + /* Drop old route. */ 47 + skb_dst_drop(skb); 48 + skb_dst_set(skb, &rt->dst); 65 49 66 50 if (skb_dst(skb)->error) 67 51 return -1;
+4 -10
net/ipv4/netfilter/ipt_REJECT.c
··· 40 40 struct iphdr *niph; 41 41 const struct tcphdr *oth; 42 42 struct tcphdr _otcph, *tcph; 43 - unsigned int addr_type; 44 43 45 44 /* IP header checks: fragment. */ 46 45 if (ip_hdr(oldskb)->frag_off & htons(IP_OFFSET)) ··· 52 53 53 54 /* No RST for RST. */ 54 55 if (oth->rst) 56 + return; 57 + 58 + if (skb_rtable(oldskb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST)) 55 59 return; 56 60 57 61 /* Check checksum */ ··· 103 101 nskb->csum_start = (unsigned char *)tcph - nskb->head; 104 102 nskb->csum_offset = offsetof(struct tcphdr, check); 105 103 106 - addr_type = RTN_UNSPEC; 107 - if (hook != NF_INET_FORWARD 108 - #ifdef CONFIG_BRIDGE_NETFILTER 109 - || (nskb->nf_bridge && nskb->nf_bridge->mask & BRNF_BRIDGED) 110 - #endif 111 - ) 112 - addr_type = RTN_LOCAL; 113 - 114 104 /* ip_route_me_harder expects skb->dst to be set */ 115 105 skb_dst_set_noref(nskb, skb_dst(oldskb)); 116 106 117 107 nskb->protocol = htons(ETH_P_IP); 118 - if (ip_route_me_harder(nskb, addr_type)) 108 + if (ip_route_me_harder(nskb, RTN_UNSPEC)) 119 109 goto free_nskb; 120 110 121 111 niph->ttl = ip4_dst_hoplimit(skb_dst(nskb));
+3
net/ipv4/udp.c
··· 1250 1250 1251 1251 if (noblock) 1252 1252 return -EAGAIN; 1253 + 1254 + /* starting over for a new packet */ 1255 + msg->msg_flags &= ~MSG_TRUNC; 1253 1256 goto try_again; 1254 1257 } 1255 1258
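The udp.c hunk above clears MSG_TRUNC from msg->msg_flags before looping back to try_again, so a truncation flag left over from a discarded datagram is not falsely reported against the next one. The flag itself is standard recvmsg() behaviour whenever the buffer is smaller than the datagram, as this userspace sketch shows (names, ports and sizes are illustrative):

```c
#define _GNU_SOURCE
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <unistd.h>

/* Illustrative demo of MSG_TRUNC: receive a 512-byte datagram into a
 * 16-byte buffer and check that recvmsg() flags the truncation. */
int msg_trunc_demo(void)
{
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    socklen_t alen = sizeof(addr);
    struct timeval tv = { .tv_sec = 2, .tv_usec = 0 };

    if (rx < 0 || tx < 0)
        return -1;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    if (bind(rx, (struct sockaddr *)&addr, sizeof(addr)) ||
        getsockname(rx, (struct sockaddr *)&addr, &alen))
        return -1;
    setsockopt(rx, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));

    char big[512];
    memset(big, 'x', sizeof(big));
    if (sendto(tx, big, sizeof(big), 0,
               (struct sockaddr *)&addr, sizeof(addr)) != sizeof(big))
        return -1;

    char small[16];
    struct iovec iov = { .iov_base = small, .iov_len = sizeof(small) };
    struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };
    if (recvmsg(rx, &msg, 0) < 0)
        return -1;

    close(rx);
    close(tx);
    return (msg.msg_flags & MSG_TRUNC) ? 1 : 0;
}
```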
+6 -1
net/ipv4/xfrm4_output.c
··· 32 32 dst = skb_dst(skb); 33 33 mtu = dst_mtu(dst); 34 34 if (skb->len > mtu) { 35 - icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu)); 35 + if (skb->sk) 36 + ip_local_error(skb->sk, EMSGSIZE, ip_hdr(skb)->daddr, 37 + inet_sk(skb->sk)->inet_dport, mtu); 38 + else 39 + icmp_send(skb, ICMP_DEST_UNREACH, 40 + ICMP_FRAG_NEEDED, htonl(mtu)); 36 41 ret = -EMSGSIZE; 37 42 } 38 43 out:
+1 -1
net/ipv6/af_inet6.c
··· 274 274 return -EINVAL; 275 275 276 276 if (addr->sin6_family != AF_INET6) 277 - return -EINVAL; 277 + return -EAFNOSUPPORT; 278 278 279 279 addr_type = ipv6_addr_type(&addr->sin6_addr); 280 280 if ((addr_type & IPV6_ADDR_MULTICAST) && sock->type == SOCK_STREAM)
+9 -16
net/ipv6/route.c
··· 228 228 229 229 /* allocate dst with ip6_dst_ops */ 230 230 static inline struct rt6_info *ip6_dst_alloc(struct dst_ops *ops, 231 - struct net_device *dev) 231 + struct net_device *dev, 232 + int flags) 232 233 { 233 - struct rt6_info *rt = dst_alloc(ops, dev, 0, 0, 0); 234 + struct rt6_info *rt = dst_alloc(ops, dev, 0, 0, flags); 234 235 235 236 memset(&rt->rt6i_table, 0, sizeof(*rt) - sizeof(struct dst_entry)); 236 237 ··· 1043 1042 if (unlikely(idev == NULL)) 1044 1043 return NULL; 1045 1044 1046 - rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, dev); 1045 + rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, dev, 0); 1047 1046 if (unlikely(rt == NULL)) { 1048 1047 in6_dev_put(idev); 1049 1048 goto out; ··· 1062 1061 atomic_set(&rt->dst.__refcnt, 1); 1063 1062 dst_metric_set(&rt->dst, RTAX_HOPLIMIT, 255); 1064 1063 rt->dst.output = ip6_output; 1065 - 1066 - #if 0 /* there's no chance to use these for ndisc */ 1067 - rt->dst.flags = ipv6_addr_type(addr) & IPV6_ADDR_UNICAST 1068 - ? DST_HOST 1069 - : 0; 1070 - ipv6_addr_copy(&rt->rt6i_dst.addr, addr); 1071 - rt->rt6i_dst.plen = 128; 1072 - #endif 1073 1064 1074 1065 spin_lock_bh(&icmp6_dst_lock); 1075 1066 rt->dst.next = icmp6_dst_gc_list; ··· 1207 1214 goto out; 1208 1215 } 1209 1216 1210 - rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, NULL); 1217 + rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, NULL, DST_NOCOUNT); 1211 1218 1212 1219 if (rt == NULL) { 1213 1220 err = -ENOMEM; ··· 1237 1244 ipv6_addr_prefix(&rt->rt6i_dst.addr, &cfg->fc_dst, cfg->fc_dst_len); 1238 1245 rt->rt6i_dst.plen = cfg->fc_dst_len; 1239 1246 if (rt->rt6i_dst.plen == 128) 1240 - rt->dst.flags = DST_HOST; 1247 + rt->dst.flags |= DST_HOST; 1241 1248 1242 1249 #ifdef CONFIG_IPV6_SUBTREES 1243 1250 ipv6_addr_prefix(&rt->rt6i_src.addr, &cfg->fc_src, cfg->fc_src_len); ··· 1727 1734 { 1728 1735 struct net *net = dev_net(ort->rt6i_dev); 1729 1736 struct rt6_info *rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, 1730 - ort->dst.dev); 1737 + ort->dst.dev, 0); 1731 1738 1732 
1739 if (rt) { 1733 1740 rt->dst.input = ort->dst.input; ··· 2006 2013 { 2007 2014 struct net *net = dev_net(idev->dev); 2008 2015 struct rt6_info *rt = ip6_dst_alloc(&net->ipv6.ip6_dst_ops, 2009 - net->loopback_dev); 2016 + net->loopback_dev, 0); 2010 2017 struct neighbour *neigh; 2011 2018 2012 2019 if (rt == NULL) { ··· 2018 2025 2019 2026 in6_dev_hold(idev); 2020 2027 2021 - rt->dst.flags = DST_HOST; 2028 + rt->dst.flags |= DST_HOST; 2022 2029 rt->dst.input = ip6_input; 2023 2030 rt->dst.output = ip6_output; 2024 2031 rt->rt6i_idev = idev;
+4 -1
net/ipv6/udp.c
··· 453 453 } 454 454 unlock_sock_fast(sk, slow); 455 455 456 - if (flags & MSG_DONTWAIT) 456 + if (noblock) 457 457 return -EAGAIN; 458 + 459 + /* starting over for a new packet */ 460 + msg->msg_flags &= ~MSG_TRUNC; 458 461 goto try_again; 459 462 } 460 463
+2 -2
net/sunrpc/auth_gss/auth_gss.c
··· 577 577 } 578 578 inode = &gss_msg->inode->vfs_inode; 579 579 for (;;) { 580 - prepare_to_wait(&gss_msg->waitqueue, &wait, TASK_INTERRUPTIBLE); 580 + prepare_to_wait(&gss_msg->waitqueue, &wait, TASK_KILLABLE); 581 581 spin_lock(&inode->i_lock); 582 582 if (gss_msg->ctx != NULL || gss_msg->msg.errno < 0) { 583 583 break; 584 584 } 585 585 spin_unlock(&inode->i_lock); 586 - if (signalled()) { 586 + if (fatal_signal_pending(current)) { 587 587 err = -ERESTARTSYS; 588 588 goto out_intr; 589 589 }
+4 -1
net/sunrpc/clnt.c
··· 1061 1061 1062 1062 dprintk("RPC: %5u rpc_buffer allocation failed\n", task->tk_pid); 1063 1063 1064 - if (RPC_IS_ASYNC(task) || !signalled()) { 1064 + if (RPC_IS_ASYNC(task) || !fatal_signal_pending(current)) { 1065 1065 task->tk_action = call_allocate; 1066 1066 rpc_delay(task, HZ>>4); 1067 1067 return; ··· 1175 1175 status = -EOPNOTSUPP; 1176 1176 break; 1177 1177 } 1178 + if (task->tk_rebind_retry == 0) 1179 + break; 1180 + task->tk_rebind_retry--; 1178 1181 rpc_delay(task, 3*HZ); 1179 1182 goto retry_timeout; 1180 1183 case -ETIMEDOUT:
+1
net/sunrpc/sched.c
··· 792 792 /* Initialize retry counters */ 793 793 task->tk_garb_retry = 2; 794 794 task->tk_cred_retry = 2; 795 + task->tk_rebind_retry = 2; 795 796 796 797 task->tk_priority = task_setup_data->priority - RPC_PRIORITY_LOW; 797 798 task->tk_owner = current->tgid;
+3 -3
net/xfrm/xfrm_policy.c
··· 50 50 static void xfrm_policy_put_afinfo(struct xfrm_policy_afinfo *afinfo); 51 51 static void xfrm_init_pmtu(struct dst_entry *dst); 52 52 static int stale_bundle(struct dst_entry *dst); 53 - static int xfrm_bundle_ok(struct xfrm_dst *xdst, int family); 53 + static int xfrm_bundle_ok(struct xfrm_dst *xdst); 54 54 55 55 56 56 static struct xfrm_policy *__xfrm_policy_unlink(struct xfrm_policy *pol, ··· 2241 2241 2242 2242 static int stale_bundle(struct dst_entry *dst) 2243 2243 { 2244 - return !xfrm_bundle_ok((struct xfrm_dst *)dst, AF_UNSPEC); 2244 + return !xfrm_bundle_ok((struct xfrm_dst *)dst); 2245 2245 } 2246 2246 2247 2247 void xfrm_dst_ifdown(struct dst_entry *dst, struct net_device *dev) ··· 2313 2313 * still valid. 2314 2314 */ 2315 2315 2316 - static int xfrm_bundle_ok(struct xfrm_dst *first, int family) 2316 + static int xfrm_bundle_ok(struct xfrm_dst *first) 2317 2317 { 2318 2318 struct dst_entry *dst = &first->u.dst; 2319 2319 struct xfrm_dst *last;
+2 -1
security/keys/request_key.c
··· 469 469 } else if (ret == -EINPROGRESS) { 470 470 ret = 0; 471 471 } else { 472 - key = ERR_PTR(ret); 472 + goto couldnt_alloc_key; 473 473 } 474 474 475 475 key_put(dest_keyring); ··· 479 479 construction_failed: 480 480 key_negate_and_link(key, key_negative_timeout, NULL, NULL); 481 481 key_put(key); 482 + couldnt_alloc_key: 482 483 key_put(dest_keyring); 483 484 kleave(" = %d", ret); 484 485 return ERR_PTR(ret);
-1
sound/pci/asihpi/asihpi.c
··· 27 27 #include "hpioctl.h" 28 28 29 29 #include <linux/pci.h> 30 - #include <linux/version.h> 31 30 #include <linux/init.h> 32 31 #include <linux/jiffies.h> 33 32 #include <linux/slab.h>
+11 -2
sound/pci/hda/patch_realtek.c
··· 4883 4883 SND_PCI_QUIRK(0x1025, 0xe309, "ULI", ALC880_3ST_DIG), 4884 4884 SND_PCI_QUIRK(0x1025, 0xe310, "ULI", ALC880_3ST), 4885 4885 SND_PCI_QUIRK(0x1039, 0x1234, NULL, ALC880_6ST_DIG), 4886 - SND_PCI_QUIRK(0x103c, 0x2a09, "HP", ALC880_5ST), 4887 4886 SND_PCI_QUIRK(0x1043, 0x10b3, "ASUS W1V", ALC880_ASUS_W1V), 4888 4887 SND_PCI_QUIRK(0x1043, 0x10c2, "ASUS W6A", ALC880_ASUS_DIG), 4889 4888 SND_PCI_QUIRK(0x1043, 0x10c3, "ASUS Wxx", ALC880_ASUS_DIG), ··· 12599 12600 */ 12600 12601 enum { 12601 12602 PINFIX_FSC_H270, 12603 + PINFIX_HP_Z200, 12602 12604 }; 12603 12605 12604 12606 static const struct alc_fixup alc262_fixups[] = { ··· 12612 12612 { } 12613 12613 } 12614 12614 }, 12615 + [PINFIX_HP_Z200] = { 12616 + .type = ALC_FIXUP_PINS, 12617 + .v.pins = (const struct alc_pincfg[]) { 12618 + { 0x16, 0x99130120 }, /* internal speaker */ 12619 + { } 12620 + } 12621 + }, 12615 12622 }; 12616 12623 12617 12624 static const struct snd_pci_quirk alc262_fixup_tbl[] = { 12625 + SND_PCI_QUIRK(0x103c, 0x170b, "HP Z200", PINFIX_HP_Z200), 12618 12626 SND_PCI_QUIRK(0x1734, 0x1147, "FSC Celsius H270", PINFIX_FSC_H270), 12619 12627 {} 12620 12628 }; ··· 12739 12731 ALC262_HP_BPC), 12740 12732 SND_PCI_QUIRK_MASK(0x103c, 0xff00, 0x1500, "HP z series", 12741 12733 ALC262_HP_BPC), 12734 + SND_PCI_QUIRK(0x103c, 0x170b, "HP Z200", 12735 + ALC262_AUTO), 12742 12736 SND_PCI_QUIRK_MASK(0x103c, 0xff00, 0x1700, "HP xw series", 12743 12737 ALC262_HP_BPC), 12744 12738 SND_PCI_QUIRK(0x103c, 0x2800, "HP D7000", ALC262_HP_BPC_D7000_WL), ··· 13882 13872 SND_PCI_QUIRK(0x1043, 0x1205, "ASUS W7J", ALC268_3ST), 13883 13873 SND_PCI_QUIRK(0x1170, 0x0040, "ZEPTO", ALC268_ZEPTO), 13884 13874 SND_PCI_QUIRK(0x14c0, 0x0025, "COMPAL IFL90/JFL-92", ALC268_TOSHIBA), 13885 - SND_PCI_QUIRK(0x152d, 0x0763, "Diverse (CPR2000)", ALC268_ACER), 13886 13875 SND_PCI_QUIRK(0x152d, 0x0771, "Quanta IL1", ALC267_QUANTA_IL1), 13887 13876 {} 13888 13877 };
+27 -8
sound/pci/hda/patch_via.c
··· 745 745 struct via_spec *spec = codec->spec; 746 746 hda_nid_t nid = kcontrol->private_value; 747 747 unsigned int pinsel = ucontrol->value.enumerated.item[0]; 748 + unsigned int parm0, parm1; 748 749 /* Get Independent Mode index of headphone pin widget */ 749 750 spec->hp_independent_mode = spec->hp_independent_mode_index == pinsel 750 751 ? 1 : 0; 751 - if (spec->codec_type == VT1718S) 752 + if (spec->codec_type == VT1718S) { 752 753 snd_hda_codec_write(codec, nid, 0, 753 754 AC_VERB_SET_CONNECT_SEL, pinsel ? 2 : 0); 755 + /* Set correct mute switch for MW3 */ 756 + parm0 = spec->hp_independent_mode ? 757 + AMP_IN_UNMUTE(0) : AMP_IN_MUTE(0); 758 + parm1 = spec->hp_independent_mode ? 759 + AMP_IN_MUTE(1) : AMP_IN_UNMUTE(1); 760 + snd_hda_codec_write(codec, 0x1b, 0, 761 + AC_VERB_SET_AMP_GAIN_MUTE, parm0); 762 + snd_hda_codec_write(codec, 0x1b, 0, 763 + AC_VERB_SET_AMP_GAIN_MUTE, parm1); 764 + } 754 765 else 755 766 snd_hda_codec_write(codec, nid, 0, 756 767 AC_VERB_SET_CONNECT_SEL, pinsel); ··· 4294 4283 {0x21, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(2)}, 4295 4284 {0x21, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(3)}, 4296 4285 {0x21, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(5)}, 4297 - 4298 - /* Setup default input of Front HP to MW9 */ 4299 - {0x28, AC_VERB_SET_CONNECT_SEL, 0x1}, 4300 4286 /* PW9 PW10 Output enable */ 4301 4287 {0x2d, AC_VERB_SET_PIN_WIDGET_CONTROL, AC_PINCTL_OUT_EN}, 4302 4288 {0x2e, AC_VERB_SET_PIN_WIDGET_CONTROL, AC_PINCTL_OUT_EN}, ··· 4302 4294 /* Enable Boost Volume backdoor */ 4303 4295 {0x1, 0xf88, 0x8}, 4304 4296 /* MW0/1/2/3/4: un-mute index 0 (AOWx), mute index 1 (MW9) */ 4305 - {0x18, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(0)}, 4297 + {0x18, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)}, 4306 4298 {0x19, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(0)}, 4307 4299 {0x1a, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(0)}, 4308 - {0x1b, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(0)}, 4300 + {0x1b, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(0)}, 4309 
4301 {0x1c, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(0)}, 4310 4302 {0x18, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(1)}, 4311 4303 {0x19, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_MUTE(1)}, ··· 4315 4307 /* set MUX1 = 2 (AOW4), MUX2 = 1 (AOW3) */ 4316 4308 {0x34, AC_VERB_SET_CONNECT_SEL, 0x2}, 4317 4309 {0x35, AC_VERB_SET_CONNECT_SEL, 0x1}, 4318 - /* Unmute MW4's index 0 */ 4319 - {0x1c, AC_VERB_SET_AMP_GAIN_MUTE, AMP_IN_UNMUTE(0)}, 4320 4310 { } 4321 4311 }; 4322 4312 ··· 4462 4456 if (err < 0) 4463 4457 return err; 4464 4458 } else if (i == AUTO_SEQ_FRONT) { 4459 + /* add control to mixer index 0 */ 4460 + err = via_add_control(spec, VIA_CTL_WIDGET_VOL, 4461 + "Master Front Playback Volume", 4462 + HDA_COMPOSE_AMP_VAL(0x21, 3, 5, 4463 + HDA_INPUT)); 4464 + if (err < 0) 4465 + return err; 4466 + err = via_add_control(spec, VIA_CTL_WIDGET_MUTE, 4467 + "Master Front Playback Switch", 4468 + HDA_COMPOSE_AMP_VAL(0x21, 3, 5, 4469 + HDA_INPUT)); 4470 + if (err < 0) 4471 + return err; 4465 4472 /* Front */ 4466 4473 sprintf(name, "%s Playback Volume", chname[i]); 4467 4474 err = via_add_control(
-1
sound/soc/codecs/wm8991.c
··· 13 13 14 14 #include <linux/module.h> 15 15 #include <linux/moduleparam.h> 16 - #include <linux/version.h> 17 16 #include <linux/kernel.h> 18 17 #include <linux/init.h> 19 18 #include <linux/delay.h>
-7
sound/soc/imx/Kconfig
··· 11 11 12 12 if SND_IMX_SOC 13 13 14 - config SND_MXC_SOC_SSI 15 - tristate 16 - 17 14 config SND_MXC_SOC_FIQ 18 15 tristate 19 16 ··· 21 24 tristate "Audio on the i.MX31ADS with WM1133-EV1 fitted" 22 25 depends on MACH_MX31ADS_WM1133_EV1 && EXPERIMENTAL 23 26 select SND_SOC_WM8350 24 - select SND_MXC_SOC_SSI 25 27 select SND_MXC_SOC_FIQ 26 28 help 27 29 Enable support for audio on the i.MX31ADS with the WM1133-EV1 ··· 30 34 tristate "SoC audio support for Visstrim M10 boards" 31 35 depends on MACH_IMX27_VISSTRIM_M10 32 36 select SND_SOC_TVL320AIC32X4 33 - select SND_MXC_SOC_SSI 34 37 select SND_MXC_SOC_MX2 35 38 help 36 39 Say Y if you want to add support for SoC audio on Visstrim SM10 ··· 39 44 tristate "SoC Audio support for Phytec phyCORE (and phyCARD) boards" 40 45 depends on MACH_PCM043 || MACH_PCA100 41 46 select SND_SOC_WM9712 42 - select SND_MXC_SOC_SSI 43 47 select SND_MXC_SOC_FIQ 44 48 help 45 49 Say Y if you want to add support for SoC audio on Phytec phyCORE ··· 51 57 || MACH_EUKREA_MBIMXSD35_BASEBOARD \ 52 58 || MACH_EUKREA_MBIMXSD51_BASEBOARD 53 59 select SND_SOC_TLV320AIC23 54 - select SND_MXC_SOC_SSI 55 60 select SND_MXC_SOC_FIQ 56 61 help 57 62 Enable I2S based access to the TLV320AIC23B codec attached
+2
sound/soc/imx/imx-pcm-dma-mx2.c
··· 337 337 platform_driver_unregister(&imx_pcm_driver); 338 338 } 339 339 module_exit(snd_imx_pcm_exit); 340 + MODULE_LICENSE("GPL"); 341 + MODULE_ALIAS("platform:imx-pcm-audio");
+1 -1
sound/soc/imx/imx-ssi.c
··· 774 774 MODULE_AUTHOR("Sascha Hauer, <s.hauer@pengutronix.de>"); 775 775 MODULE_DESCRIPTION("i.MX I2S/ac97 SoC Interface"); 776 776 MODULE_LICENSE("GPL"); 777 - 777 + MODULE_ALIAS("platform:imx-ssi");
+2 -2
sound/soc/pxa/pxa2xx-pcm.c
··· 95 95 if (!card->dev->coherent_dma_mask) 96 96 card->dev->coherent_dma_mask = DMA_BIT_MASK(32); 97 97 98 - if (dai->driver->playback.channels_min) { 98 + if (pcm->streams[SNDRV_PCM_STREAM_PLAYBACK].substream) { 99 99 ret = pxa2xx_pcm_preallocate_dma_buffer(pcm, 100 100 SNDRV_PCM_STREAM_PLAYBACK); 101 101 if (ret) 102 102 goto out; 103 103 } 104 104 105 - if (dai->driver->capture.channels_min) { 105 + if (pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream) { 106 106 ret = pxa2xx_pcm_preallocate_dma_buffer(pcm, 107 107 SNDRV_PCM_STREAM_CAPTURE); 108 108 if (ret)
-3
sound/soc/soc-cache.c
··· 409 409 codec->bulk_write_raw = snd_soc_hw_bulk_write_raw; 410 410 411 411 switch (control) { 412 - case SND_SOC_CUSTOM: 413 - break; 414 - 415 412 case SND_SOC_I2C: 416 413 #if defined(CONFIG_I2C) || (defined(CONFIG_I2C_MODULE) && defined(MODULE)) 417 414 codec->hw_write = (hw_write_t)i2c_master_send;