Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux into drm-next

This backmerges drm-fixes into drm-next, mainly for the amdkfd
stuff. I'm not 100% confident in it, but it builds, and the amdkfd
folks can fix anything up.

Signed-off-by: Dave Airlie <airlied@redhat.com>
Conflicts:
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h

+2574 -2243
-60
Documentation/ABI/testing/sysfs-platform-dell-laptop
···
- What:		/sys/class/leds/dell::kbd_backlight/als_setting
- Date:		December 2014
- KernelVersion:	3.19
- Contact:	Gabriele Mazzotta <gabriele.mzt@gmail.com>,
-		Pali Rohár <pali.rohar@gmail.com>
- Description:
-		This file allows to control the automatic keyboard
-		illumination mode on some systems that have an ambient
-		light sensor. Write 1 to this file to enable the auto
-		mode, 0 to disable it.
-
- What:		/sys/class/leds/dell::kbd_backlight/start_triggers
- Date:		December 2014
- KernelVersion:	3.19
- Contact:	Gabriele Mazzotta <gabriele.mzt@gmail.com>,
-		Pali Rohár <pali.rohar@gmail.com>
- Description:
-		This file allows to control the input triggers that
-		turn on the keyboard backlight illumination that is
-		disabled because of inactivity.
-		Read the file to see the triggers available. The ones
-		enabled are preceded by '+', those disabled by '-'.
-
-		To enable a trigger, write its name preceded by '+' to
-		this file. To disable a trigger, write its name preceded
-		by '-' instead.
-
-		For example, to enable the keyboard as trigger run:
-		echo +keyboard > /sys/class/leds/dell::kbd_backlight/start_triggers
-		To disable it:
-		echo -keyboard > /sys/class/leds/dell::kbd_backlight/start_triggers
-
-		Note that not all the available triggers can be configured.
-
- What:		/sys/class/leds/dell::kbd_backlight/stop_timeout
- Date:		December 2014
- KernelVersion:	3.19
- Contact:	Gabriele Mazzotta <gabriele.mzt@gmail.com>,
-		Pali Rohár <pali.rohar@gmail.com>
- Description:
-		This file allows to specify the interval after which the
-		keyboard illumination is disabled because of inactivity.
-		The timeouts are expressed in seconds, minutes, hours and
-		days, for which the symbols are 's', 'm', 'h' and 'd'
-		respectively.
-
-		To configure the timeout, write to this file a value along
-		with any the above units. If no unit is specified, the value
-		is assumed to be expressed in seconds.
-
-		For example, to set the timeout to 10 minutes run:
-		echo 10m > /sys/class/leds/dell::kbd_backlight/stop_timeout
-
-		Note that when this file is read, the returned value might be
-		expressed in a different unit than the one used when the timeout
-		was set.
-
-		Also note that only some timeouts are supported and that
-		some systems might fall back to a specific timeout in case
-		an invalid timeout is written to this file.
+1 -1
Documentation/devicetree/bindings/arm/arm-boards
···
  range of 0x200 bytes.

  - syscon: the root node of the Integrator platforms must have a
-   system controller node pointong to the control registers,
+   system controller node pointing to the control registers,
    with the compatible string
    "arm,integrator-ap-syscon"
    "arm,integrator-cp-syscon"
+72
Documentation/devicetree/bindings/arm/fw-cfg.txt
···
+ * QEMU Firmware Configuration bindings for ARM
+
+ QEMU's arm-softmmu and aarch64-softmmu emulation / virtualization targets
+ provide the following Firmware Configuration interface on the "virt" machine
+ type:
+
+ - A write-only, 16-bit wide selector (or control) register,
+ - a read-write, 64-bit wide data register.
+
+ QEMU exposes the control and data register to ARM guests as memory mapped
+ registers; their location is communicated to the guest's UEFI firmware in the
+ DTB that QEMU places at the bottom of the guest's DRAM.
+
+ The guest writes a selector value (a key) to the selector register, and then
+ can read the corresponding data (produced by QEMU) via the data register. If
+ the selected entry is writable, the guest can rewrite it through the data
+ register.
+
+ The selector register takes keys in big endian byte order.
+
+ The data register allows accesses with 8, 16, 32 and 64-bit width (only at
+ offset 0 of the register). Accesses larger than a byte are interpreted as
+ arrays, bundled together only for better performance. The bytes constituting
+ such a word, in increasing address order, correspond to the bytes that would
+ have been transferred by byte-wide accesses in chronological order.
+
+ The interface allows guest firmware to download various parameters and blobs
+ that affect how the firmware works and what tables it installs for the guest
+ OS. For example, boot order of devices, ACPI tables, SMBIOS tables, kernel and
+ initrd images for direct kernel booting, virtual machine UUID, SMP information,
+ virtual NUMA topology, and so on.
+
+ The authoritative registry of the valid selector values and their meanings is
+ the QEMU source code; the structure of the data blobs corresponding to the
+ individual key values is also defined in the QEMU source code.
+
+ The presence of the registers can be verified by selecting the "signature" blob
+ with key 0x0000, and reading four bytes from the data register. The returned
+ signature is "QEMU".
+
+ The outermost protocol (involving the write / read sequences of the control and
+ data registers) is expected to be versioned, and/or described by feature bits.
+ The interface revision / feature bitmap can be retrieved with key 0x0001. The
+ blob to be read from the data register has size 4, and it is to be interpreted
+ as a uint32_t value in little endian byte order. The current value
+ (corresponding to the above outer protocol) is zero.
+
+ The guest kernel is not expected to use these registers (although it is
+ certainly allowed to); the device tree bindings are documented here because
+ this is where device tree bindings reside in general.
+
+ Required properties:
+
+ - compatible: "qemu,fw-cfg-mmio".
+
+ - reg: the MMIO region used by the device.
+   * Bytes 0x0 to 0x7 cover the data register.
+   * Bytes 0x8 to 0x9 cover the selector register.
+   * Further registers may be appended to the region in case of future interface
+     revisions / feature bits.
+
+ Example:
+
+ / {
+ 	#size-cells = <0x2>;
+ 	#address-cells = <0x2>;
+
+ 	fw-cfg@9020000 {
+ 		compatible = "qemu,fw-cfg-mmio";
+ 		reg = <0x0 0x9020000 0x0 0xa>;
+ 	};
+ };
+1 -1
Documentation/devicetree/bindings/graph.txt
···
  may be described by specialized bindings depending on the type of connection.

  To see how this binding applies to video pipelines, for example, see
- Documentation/device-tree/bindings/media/video-interfaces.txt.
+ Documentation/devicetree/bindings/media/video-interfaces.txt.
  Here the ports describe data interfaces, and the links between them are
  the connecting data buses. A single port with multiple connections can
  correspond to multiple devices being connected to the same physical bus.
+3 -1
Documentation/devicetree/bindings/vendor-prefixes.txt
···
  adapteva	Adapteva, Inc.
  adi		Analog Devices, Inc.
  aeroflexgaisler	Aeroflex Gaisler AB
- ak		Asahi Kasei Corp.
  allwinner	Allwinner Technology Co., Ltd.
  altr		Altera Corp.
  amcc		Applied Micro Circuits Corporation (APM, formally AMCC)
···
  apm		Applied Micro Circuits Corporation (APM)
  arm		ARM Ltd.
  armadeus	ARMadeus Systems SARL
+ asahi-kasei	Asahi Kasei Corp.
  atmel		Atmel Corporation
  auo		AU Optronics Corporation
  avago		Avago Technologies
···
  powervr	PowerVR (deprecated, use img)
  qca		Qualcomm Atheros, Inc.
  qcom		Qualcomm Technologies, Inc
+ qemu		QEMU, a generic and open source machine emulator and virtualizer
  qnap		QNAP Systems, Inc.
  radxa		Radxa
  raidsonic	RaidSonic Technology GmbH
···
  v3		V3 Semiconductor
  variscite	Variscite Ltd.
  via		VIA Technologies, Inc.
+ virtio		Virtual I/O Device Specification, developed by the OASIS consortium
  voipac		Voipac Technologies s.r.o.
  winbond	Winbond Electronics corp.
  wlf		Wolfson Microelectronics
+6 -9
MAINTAINERS
···
  W:	http://blackfin.uclinux.org/
  S:	Supported
  F:	sound/soc/blackfin/*
- 
+
  ANALOG DEVICES INC IIO DRIVERS
  M:	Lars-Peter Clausen <lars@metafoo.de>
  M:	Michael Hennerich <Michael.Hennerich@analog.com>
···
  F:	drivers/net/ethernet/ibm/ibmveth.*

  IBM Power Virtual SCSI Device Drivers
- M:	Nathan Fontenot <nfont@linux.vnet.ibm.com>
+ M:	Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
  L:	linux-scsi@vger.kernel.org
  S:	Supported
  F:	drivers/scsi/ibmvscsi/ibmvscsi*
  F:	drivers/scsi/ibmvscsi/viosrp.h

  IBM Power Virtual FC Device Drivers
- M:	Brian King <brking@linux.vnet.ibm.com>
+ M:	Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
  L:	linux-scsi@vger.kernel.org
  S:	Supported
  F:	drivers/scsi/ibmvscsi/ibmvfc*
···
  INTEL C600 SERIES SAS CONTROLLER DRIVER
  M:	Intel SCU Linux support <intel-linux-scu@intel.com>
  M:	Artur Paszkiewicz <artur.paszkiewicz@intel.com>
- M:	Dave Jiang <dave.jiang@intel.com>
  L:	linux-scsi@vger.kernel.org
  T:	git git://git.code.sf.net/p/intel-sas/isci
  S:	Supported
···
  M:	Grant Likely <grant.likely@linaro.org>
  M:	Rob Herring <robh+dt@kernel.org>
  L:	devicetree@vger.kernel.org
- W:	http://fdt.secretlab.ca
- T:	git git://git.secretlab.ca/git/linux-2.6.git
+ W:	http://www.devicetree.org/
+ T:	git git://git.kernel.org/pub/scm/linux/kernel/git/glikely/linux.git
  S:	Maintained
  F:	drivers/of/
  F:	include/linux/of*.h
  F:	scripts/dtc/
- K:	of_get_property
- K:	of_match_table

  OPEN FIRMWARE AND FLATTENED DEVICE TREE BINDINGS
  M:	Rob Herring <robh+dt@kernel.org>
···
  F:	drivers/pci/host/*layerscape*

  PCI DRIVER FOR IMX6
- M:	Richard Zhu <r65037@freescale.com>
+ M:	Richard Zhu <Richard.Zhu@freescale.com>
  M:	Lucas Stach <l.stach@pengutronix.de>
  L:	linux-pci@vger.kernel.org
  L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+1 -1
Makefile
···
  VERSION = 3
  PATCHLEVEL = 19
  SUBLEVEL = 0
- EXTRAVERSION = -rc5
+ EXTRAVERSION = -rc6
  NAME = Diseased Newt

  # *DOCUMENTATION*
+6 -2
arch/alpha/kernel/pci.c
···
  		if (r->parent || !r->start || !r->flags)
  			continue;
  		if (pci_has_flag(PCI_PROBE_ONLY) ||
- 		    (r->flags & IORESOURCE_PCI_FIXED))
- 			pci_claim_resource(dev, i);
+ 		    (r->flags & IORESOURCE_PCI_FIXED)) {
+ 			if (pci_claim_resource(dev, i) == 0)
+ 				continue;
+
+ 			pci_claim_bridge_resource(dev, i);
+ 		}
  	}
  }
+6
arch/arm/boot/dts/dra7.dtsi
···
  	tx-fifo-resize;
  	maximum-speed = "super-speed";
  	dr_mode = "otg";
+ 	snps,dis_u3_susphy_quirk;
+ 	snps,dis_u2_susphy_quirk;
  };
  };
···
  	tx-fifo-resize;
  	maximum-speed = "high-speed";
  	dr_mode = "otg";
+ 	snps,dis_u3_susphy_quirk;
+ 	snps,dis_u2_susphy_quirk;
  };
  };
···
  	tx-fifo-resize;
  	maximum-speed = "high-speed";
  	dr_mode = "otg";
+ 	snps,dis_u3_susphy_quirk;
+ 	snps,dis_u2_susphy_quirk;
  };
  };
+4 -4
arch/arm/boot/dts/imx25.dtsi
···
  	compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
  	#pwm-cells = <2>;
  	reg = <0x53fa0000 0x4000>;
- 	clocks = <&clks 106>, <&clks 36>;
+ 	clocks = <&clks 106>, <&clks 52>;
  	clock-names = "ipg", "per";
  	interrupts = <36>;
  };
···
  	compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
  	#pwm-cells = <2>;
  	reg = <0x53fa8000 0x4000>;
- 	clocks = <&clks 107>, <&clks 36>;
+ 	clocks = <&clks 107>, <&clks 52>;
  	clock-names = "ipg", "per";
  	interrupts = <41>;
  };
···
  pwm4: pwm@53fc8000 {
  	compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
  	reg = <0x53fc8000 0x4000>;
- 	clocks = <&clks 108>, <&clks 36>;
+ 	clocks = <&clks 108>, <&clks 52>;
  	clock-names = "ipg", "per";
  	interrupts = <42>;
  };
···
  	compatible = "fsl,imx25-pwm", "fsl,imx27-pwm";
  	#pwm-cells = <2>;
  	reg = <0x53fe0000 0x4000>;
- 	clocks = <&clks 105>, <&clks 36>;
+ 	clocks = <&clks 105>, <&clks 52>;
  	clock-names = "ipg", "per";
  	interrupts = <26>;
  };
+4 -4
arch/arm/boot/dts/imx6sx-sdb.dts
···
  #address-cells = <1>;
  #size-cells = <0>;

- ethphy1: ethernet-phy@0 {
- 	reg = <0>;
+ ethphy1: ethernet-phy@1 {
+ 	reg = <1>;
  };

- ethphy2: ethernet-phy@1 {
- 	reg = <1>;
+ ethphy2: ethernet-phy@2 {
+ 	reg = <2>;
  };
  };
  };
+1 -1
arch/arm/boot/dts/tegra20-seaboard.dts
···
  clock-frequency = <400000>;

  magnetometer@c {
- 	compatible = "ak,ak8975";
+ 	compatible = "asahi-kasei,ak8975";
  	reg = <0xc>;
  	interrupt-parent = <&gpio>;
  	interrupts = <TEGRA_GPIO(N, 5) IRQ_TYPE_LEVEL_HIGH>;
+7 -6
arch/arm/kernel/entry-header.S
···
  .endm

  .macro	restore_user_regs, fast = 0, offset = 0
- 	ldr	r1, [sp, #\offset + S_PSR]	@ get calling cpsr
- 	ldr	lr, [sp, #\offset + S_PC]!	@ get pc
+ 	mov	r2, sp
+ 	ldr	r1, [r2, #\offset + S_PSR]	@ get calling cpsr
+ 	ldr	lr, [r2, #\offset + S_PC]!	@ get pc
  	msr	spsr_cxsf, r1			@ save in spsr_svc
  #if defined(CONFIG_CPU_V6) || defined(CONFIG_CPU_32v6K)
  	@ We must avoid clrex due to Cortex-A15 erratum #830321
- 	strex	r1, r2, [sp]			@ clear the exclusive monitor
+ 	strex	r1, r2, [r2]			@ clear the exclusive monitor
  #endif
  	.if	\fast
- 	ldmdb	sp, {r1 - lr}^			@ get calling r1 - lr
+ 	ldmdb	r2, {r1 - lr}^			@ get calling r1 - lr
  	.else
- 	ldmdb	sp, {r0 - lr}^			@ get calling r0 - lr
+ 	ldmdb	r2, {r0 - lr}^			@ get calling r0 - lr
  	.endif
  	mov	r0, r0				@ ARMv5T and earlier require a nop
  						@ after ldm {}^
- 	add	sp, sp, #S_FRAME_SIZE - S_PC
+ 	add	sp, sp, #\offset + S_FRAME_SIZE
  	movs	pc, lr				@ return & move spsr_svc into cpsr
  .endm
+8 -2
arch/arm/kernel/perf_event.c
···
  	ret = 1;
  }

- if (left > (s64)armpmu->max_period)
- 	left = armpmu->max_period;
+ /*
+  * Limit the maximum period to prevent the counter value
+  * from overtaking the one we are about to program. In
+  * effect we are reducing max_period to account for
+  * interrupt latency (and we are being very conservative).
+  */
+ if (left > (armpmu->max_period >> 1))
+ 	left = armpmu->max_period >> 1;

  local64_set(&hwc->prev_count, (u64)-left);
+5 -2
arch/arm/kernel/setup.c
···
  /*
   * Ensure that start/size are aligned to a page boundary.
-  * Size is appropriately rounded down, start is rounded up.
+  * Size is rounded down, start is rounded up.
   */
- size -= start & ~PAGE_MASK;
  aligned_start = PAGE_ALIGN(start);
+ if (aligned_start > start + size)
+ 	size = 0;
+ else
+ 	size -= aligned_start - start;

  #ifndef CONFIG_ARCH_PHYS_ADDR_T_64BIT
  if (aligned_start > ULONG_MAX) {
+6 -1
arch/arm/mach-mvebu/coherency.c
···
  	return type;
  }

+ /*
+  * As a precaution, we currently completely disable hardware I/O
+  * coherency, until enough testing is done with automatic I/O
+  * synchronization barriers to validate that it is a proper solution.
+  */
  int coherency_available(void)
  {
- 	return coherency_type() != COHERENCY_FABRIC_TYPE_NONE;
+ 	return false;
  }

  int __init coherency_init(void)
+1
arch/arm/mach-omap2/common.h
···
  extern struct device *omap2_get_l3_device(void);
  extern struct device *omap4_get_dsp_device(void);

+ unsigned int omap4_xlate_irq(unsigned int hwirq);
  void omap_gic_of_init(void);

  #ifdef CONFIG_CACHE_L2X0
+32
arch/arm/mach-omap2/omap4-common.c
···
  }
  omap_early_initcall(omap4_sar_ram_init);

+ static struct of_device_id gic_match[] = {
+ 	{ .compatible = "arm,cortex-a9-gic", },
+ 	{ .compatible = "arm,cortex-a15-gic", },
+ 	{ },
+ };
+
+ static struct device_node *gic_node;
+
+ unsigned int omap4_xlate_irq(unsigned int hwirq)
+ {
+ 	struct of_phandle_args irq_data;
+ 	unsigned int irq;
+
+ 	if (!gic_node)
+ 		gic_node = of_find_matching_node(NULL, gic_match);
+
+ 	if (WARN_ON(!gic_node))
+ 		return hwirq;
+
+ 	irq_data.np = gic_node;
+ 	irq_data.args_count = 3;
+ 	irq_data.args[0] = 0;
+ 	irq_data.args[1] = hwirq - OMAP44XX_IRQ_GIC_START;
+ 	irq_data.args[2] = IRQ_TYPE_LEVEL_HIGH;
+
+ 	irq = irq_create_of_mapping(&irq_data);
+ 	if (WARN_ON(!irq))
+ 		irq = hwirq;
+
+ 	return irq;
+ }
+
  void __init omap_gic_of_init(void)
  {
  	struct device_node *np;
+8 -2
arch/arm/mach-omap2/omap_hwmod.c
···
  mpu_irqs_cnt = _count_mpu_irqs(oh);
  for (i = 0; i < mpu_irqs_cnt; i++) {
+ 	unsigned int irq;
+
+ 	if (oh->xlate_irq)
+ 		irq = oh->xlate_irq((oh->mpu_irqs + i)->irq);
+ 	else
+ 		irq = (oh->mpu_irqs + i)->irq;
  	(res + r)->name = (oh->mpu_irqs + i)->name;
- 	(res + r)->start = (oh->mpu_irqs + i)->irq;
- 	(res + r)->end = (oh->mpu_irqs + i)->irq;
+ 	(res + r)->start = irq;
+ 	(res + r)->end = irq;
  	(res + r)->flags = IORESOURCE_IRQ;
  	r++;
  }
+1
arch/arm/mach-omap2/omap_hwmod.h
···
  spinlock_t		_lock;
  struct list_head	node;
  struct omap_hwmod_ocp_if	*_mpu_port;
+ unsigned int		(*xlate_irq)(unsigned int);
  u16			flags;
  u8			mpu_rt_idx;
  u8			response_lat;
+5
arch/arm/mach-omap2/omap_hwmod_44xx_data.c
···
  .class		= &omap44xx_dma_hwmod_class,
  .clkdm_name	= "l3_dma_clkdm",
  .mpu_irqs	= omap44xx_dma_system_irqs,
+ .xlate_irq	= omap4_xlate_irq,
  .main_clk	= "l3_div_ck",
  .prcm = {
  	.omap4 = {
···
  .class		= &omap44xx_dispc_hwmod_class,
  .clkdm_name	= "l3_dss_clkdm",
  .mpu_irqs	= omap44xx_dss_dispc_irqs,
+ .xlate_irq	= omap4_xlate_irq,
  .sdma_reqs	= omap44xx_dss_dispc_sdma_reqs,
  .main_clk	= "dss_dss_clk",
  .prcm = {
···
  .class		= &omap44xx_dsi_hwmod_class,
  .clkdm_name	= "l3_dss_clkdm",
  .mpu_irqs	= omap44xx_dss_dsi1_irqs,
+ .xlate_irq	= omap4_xlate_irq,
  .sdma_reqs	= omap44xx_dss_dsi1_sdma_reqs,
  .main_clk	= "dss_dss_clk",
  .prcm = {
···
  .class		= &omap44xx_dsi_hwmod_class,
  .clkdm_name	= "l3_dss_clkdm",
  .mpu_irqs	= omap44xx_dss_dsi2_irqs,
+ .xlate_irq	= omap4_xlate_irq,
  .sdma_reqs	= omap44xx_dss_dsi2_sdma_reqs,
  .main_clk	= "dss_dss_clk",
  .prcm = {
···
   */
  .flags		= HWMOD_SWSUP_SIDLE,
  .mpu_irqs	= omap44xx_dss_hdmi_irqs,
+ .xlate_irq	= omap4_xlate_irq,
  .sdma_reqs	= omap44xx_dss_hdmi_sdma_reqs,
  .main_clk	= "dss_48mhz_clk",
  .prcm = {
+1
arch/arm/mach-omap2/omap_hwmod_54xx_data.c
···
  .class		= &omap54xx_dma_hwmod_class,
  .clkdm_name	= "dma_clkdm",
  .mpu_irqs	= omap54xx_dma_system_irqs,
+ .xlate_irq	= omap4_xlate_irq,
  .main_clk	= "l3_iclk_div",
  .prcm = {
  	.omap4 = {
+1
arch/arm/mach-omap2/prcm-common.h
···
  u8 nr_irqs;
  const struct omap_prcm_irq *irqs;
  int irq;
+ unsigned int (*xlate_irq)(unsigned int);
  void (*read_pending_irqs)(unsigned long *events);
  void (*ocp_barrier)(void);
  void (*save_and_clear_irqen)(u32 *saved_mask);
+4 -1
arch/arm/mach-omap2/prm44xx.c
···
  .irqs			= omap4_prcm_irqs,
  .nr_irqs		= ARRAY_SIZE(omap4_prcm_irqs),
  .irq			= 11 + OMAP44XX_IRQ_GIC_START,
+ .xlate_irq		= omap4_xlate_irq,
  .read_pending_irqs	= &omap44xx_prm_read_pending_irqs,
  .ocp_barrier		= &omap44xx_prm_ocp_barrier,
  .save_and_clear_irqen	= &omap44xx_prm_save_and_clear_irqen,
···
  }

  /* Once OMAP4 DT is filled as well */
- if (irq_num >= 0)
+ if (irq_num >= 0) {
  	omap4_prcm_irq_setup.irq = irq_num;
+ 	omap4_prcm_irq_setup.xlate_irq = NULL;
+ }
  }

  omap44xx_prm_enable_io_wakeup();
+12 -2
arch/arm/mach-omap2/prm_common.c
···
   */
  void omap_prcm_irq_cleanup(void)
  {
+ 	unsigned int irq;
  	int i;

  	if (!prcm_irq_setup) {
···
  	kfree(prcm_irq_setup->priority_mask);
  	prcm_irq_setup->priority_mask = NULL;

- 	irq_set_chained_handler(prcm_irq_setup->irq, NULL);
+ 	if (prcm_irq_setup->xlate_irq)
+ 		irq = prcm_irq_setup->xlate_irq(prcm_irq_setup->irq);
+ 	else
+ 		irq = prcm_irq_setup->irq;
+ 	irq_set_chained_handler(irq, NULL);

  	if (prcm_irq_setup->base_irq > 0)
  		irq_free_descs(prcm_irq_setup->base_irq,
···
  	int offset, i;
  	struct irq_chip_generic *gc;
  	struct irq_chip_type *ct;
+ 	unsigned int irq;

  	if (!irq_setup)
  		return -EINVAL;
···
  			1 << (offset & 0x1f);
  	}

- 	irq_set_chained_handler(irq_setup->irq, omap_prcm_irq_handler);
+ 	if (irq_setup->xlate_irq)
+ 		irq = irq_setup->xlate_irq(irq_setup->irq);
+ 	else
+ 		irq = irq_setup->irq;
+ 	irq_set_chained_handler(irq, omap_prcm_irq_handler);

  	irq_setup->base_irq = irq_alloc_descs(-1, 0, irq_setup->nr_regs * 32,
  					      0);
+6 -1
arch/arm/mach-omap2/twl-common.c
···
  	omap_register_i2c_bus(bus, clkrate, &pmic_i2c_board_info, 1);
  }

+ #ifdef CONFIG_ARCH_OMAP4
  void __init omap4_pmic_init(const char *pmic_type,
  		    struct twl4030_platform_data *pmic_data,
  		    struct i2c_board_info *devices, int nr_devices)
  {
  	/* PMIC part*/
+ 	unsigned int irq;
+
  	omap_mux_init_signal("sys_nirq1", OMAP_PIN_INPUT_PULLUP | OMAP_PIN_OFF_WAKEUPENABLE);
  	omap_mux_init_signal("fref_clk0_out.sys_drm_msecure", OMAP_PIN_OUTPUT);
- 	omap_pmic_init(1, 400, pmic_type, 7 + OMAP44XX_IRQ_GIC_START, pmic_data);
+ 	irq = omap4_xlate_irq(7 + OMAP44XX_IRQ_GIC_START);
+ 	omap_pmic_init(1, 400, pmic_type, irq, pmic_data);

  	/* Register additional devices on i2c1 bus if needed */
  	if (devices)
  		i2c_register_board_info(1, devices, nr_devices);
  }
+ #endif

  void __init omap_pmic_late_init(void)
  {
+8 -1
arch/arm/mach-shmobile/setup-r8a7778.c
···
  void __init r8a7778_init_irq_dt(void)
  {
  	void __iomem *base = ioremap_nocache(0xfe700000, 0x00100000);
+ #ifdef CONFIG_ARCH_SHMOBILE_LEGACY
+ 	void __iomem *gic_dist_base = ioremap_nocache(0xfe438000, 0x1000);
+ 	void __iomem *gic_cpu_base = ioremap_nocache(0xfe430000, 0x1000);
+ #endif

  	BUG_ON(!base);

+ #ifdef CONFIG_ARCH_SHMOBILE_LEGACY
+ 	gic_init(0, 29, gic_dist_base, gic_cpu_base);
+ #else
  	irqchip_init();
- 
+ #endif
  	/* route all interrupts to ARM */
  	__raw_writel(0x73ffffff, base + INT2NTSR0);
  	__raw_writel(0xffffffff, base + INT2NTSR1);
+8 -1
arch/arm/mach-shmobile/setup-r8a7779.c
···
  void __init r8a7779_init_irq_dt(void)
  {
+ #ifdef CONFIG_ARCH_SHMOBILE_LEGACY
+ 	void __iomem *gic_dist_base = ioremap_nocache(0xf0001000, 0x1000);
+ 	void __iomem *gic_cpu_base = ioremap_nocache(0xf0000100, 0x1000);
+ #endif
  	gic_arch_extn.irq_set_wake = r8a7779_set_wake;

+ #ifdef CONFIG_ARCH_SHMOBILE_LEGACY
+ 	gic_init(0, 29, gic_dist_base, gic_cpu_base);
+ #else
  	irqchip_init();
- 
+ #endif
  	/* route all interrupts to ARM */
  	__raw_writel(0xffffffff, INT2NTSR0);
  	__raw_writel(0x3fffffff, INT2NTSR1);
+1
arch/arm64/Makefile
···
  # We use MRPROPER_FILES and CLEAN_FILES now
  archclean:
  	$(Q)$(MAKE) $(clean)=$(boot)
+ 	$(Q)$(MAKE) $(clean)=$(boot)/dts

  define archhelp
    echo '* Image.gz      - Compressed kernel image (arch/$(ARCH)/boot/Image.gz)'
-2
arch/arm64/boot/dts/Makefile
···
  dts-dirs += arm
  dts-dirs += cavium

- always		:= $(dtb-y)
  subdir-y	:= $(dts-dirs)
- clean-files	:= *.dtb
+1 -1
arch/arm64/boot/dts/arm/juno.dts
···
  };

  chosen {
- 	stdout-path = &soc_uart0;
+ 	stdout-path = "serial0:115200n8";
  };

  psci {
+1
arch/arm64/mm/dump.c
···
   */
  #include <linux/debugfs.h>
  #include <linux/fs.h>
+ #include <linux/io.h>
  #include <linux/mm.h>
  #include <linux/sched.h>
  #include <linux/seq_file.h>
+1 -12
arch/avr32/kernel/module.c
···
  #include <linux/moduleloader.h>
  #include <linux/vmalloc.h>

- void module_free(struct module *mod, void *module_region)
+ void module_arch_freeing_init(struct module *mod)
  {
  	vfree(mod->arch.syminfo);
  	mod->arch.syminfo = NULL;
- 
- 	vfree(module_region);
  }

  static inline int check_rela(Elf32_Rela *rela, struct module *module,
···
  	}

  	return ret;
- }
- 
- int module_finalize(const Elf_Ehdr *hdr, const Elf_Shdr *sechdrs,
- 		    struct module *module)
- {
- 	vfree(module->arch.syminfo);
- 	module->arch.syminfo = NULL;
- 
- 	return 0;
  }
+1 -1
arch/cris/arch-v32/drivers/sync_serial.c
···
  			 struct timespec *ts)
  {
  	unsigned long flags;
- 	int dev = MINOR(file->f_dentry->d_inode->i_rdev);
+ 	int dev = MINOR(file_inode(file)->i_rdev);
  	int avail;
  	struct sync_port *port;
  	unsigned char *start;
+1 -1
arch/cris/kernel/module.c
···
  }

  /* Free memory returned from module_alloc */
- void module_free(struct module *mod, void *module_region)
+ void module_memfree(void *module_region)
  {
  	kfree(module_region);
  }
+1 -1
arch/frv/mb93090-mb00/pci-frv.c
···
  r = &dev->resource[idx];
  if (!r->start)
  	continue;
- pci_claim_resource(dev, idx);
+ pci_claim_bridge_resource(dev, idx);
  }
  }
  pcibios_allocate_bus_resources(&bus->children);
+2 -4
arch/ia64/kernel/module.c
···
  #endif /* !USE_BRL */

  void
- module_free (struct module *mod, void *module_region)
+ module_arch_freeing_init (struct module *mod)
  {
- 	if (mod && mod->arch.init_unw_table &&
- 	    module_region == mod->module_init) {
+ 	if (mod->arch.init_unw_table) {
  		unw_remove_unwind_table(mod->arch.init_unw_table);
  		mod->arch.init_unw_table = NULL;
  	}
- 	vfree(module_region);
  }

  /* Have we already seen one of these relocations? */
+26 -32
arch/ia64/pci/pci.c
···
  	return 0;
  }

- static int is_valid_resource(struct pci_dev *dev, int idx)
- {
- 	unsigned int i, type_mask = IORESOURCE_IO | IORESOURCE_MEM;
- 	struct resource *devr = &dev->resource[idx], *busr;
-
- 	if (!dev->bus)
- 		return 0;
-
- 	pci_bus_for_each_resource(dev->bus, busr, i) {
- 		if (!busr || ((busr->flags ^ devr->flags) & type_mask))
- 			continue;
- 		if ((devr->start) && (devr->start >= busr->start) &&
- 		    (devr->end <= busr->end))
- 			return 1;
- 	}
- 	return 0;
- }
-
- static void pcibios_fixup_resources(struct pci_dev *dev, int start, int limit)
- {
- 	int i;
-
- 	for (i = start; i < limit; i++) {
- 		if (!dev->resource[i].flags)
- 			continue;
- 		if ((is_valid_resource(dev, i)))
- 			pci_claim_resource(dev, i);
- 	}
- }
-
  void pcibios_fixup_device_resources(struct pci_dev *dev)
  {
- 	pcibios_fixup_resources(dev, 0, PCI_BRIDGE_RESOURCES);
+ 	int idx;
+
+ 	if (!dev->bus)
+ 		return;
+
+ 	for (idx = 0; idx < PCI_BRIDGE_RESOURCES; idx++) {
+ 		struct resource *r = &dev->resource[idx];
+
+ 		if (!r->flags || r->parent || !r->start)
+ 			continue;
+
+ 		pci_claim_resource(dev, idx);
+ 	}
  }
  EXPORT_SYMBOL_GPL(pcibios_fixup_device_resources);

  static void pcibios_fixup_bridge_resources(struct pci_dev *dev)
  {
- 	pcibios_fixup_resources(dev, PCI_BRIDGE_RESOURCES, PCI_NUM_RESOURCES);
+ 	int idx;
+
+ 	if (!dev->bus)
+ 		return;
+
+ 	for (idx = PCI_BRIDGE_RESOURCES; idx < PCI_NUM_RESOURCES; idx++) {
+ 		struct resource *r = &dev->resource[idx];
+
+ 		if (!r->flags || r->parent || !r->start)
+ 			continue;
+
+ 		pci_claim_bridge_resource(dev, idx);
+ 	}
  }

  /*
+12 -1
arch/microblaze/pci/pci-common.c
···
  	pr, (pr && pr->name) ? pr->name : "nil");

  if (pr && !(pr->flags & IORESOURCE_UNSET)) {
+ 	struct pci_dev *dev = bus->self;
+
  	if (request_resource(pr, res) == 0)
  		continue;
  	/*
···
  	 */
  	if (reparent_resources(pr, res) == 0)
  		continue;
+
+ 	if (dev && i < PCI_BRIDGE_RESOURCE_NUM &&
+ 	    pci_claim_bridge_resource(dev,
+ 				      i + PCI_BRIDGE_RESOURCES) == 0)
+ 		continue;
+
  }
  pr_warn("PCI: Cannot allocate resource region ");
  pr_cont("%d of PCI bridge %d, will remap\n", i, bus->number);
···
  	(unsigned long long)r->end,
  	(unsigned int)r->flags);

- pci_claim_resource(dev, i);
+ if (pci_claim_resource(dev, i) == 0)
+ 	continue;
+
+ pci_claim_bridge_resource(dev, i);
  }
  }
+1 -1
arch/mips/net/bpf_jit.c
···
  void bpf_jit_free(struct bpf_prog *fp)
  {
  	if (fp->jited)
- 		module_free(NULL, fp->bpf_func);
+ 		module_memfree(fp->bpf_func);

  	bpf_prog_unlock_free(fp);
  }
+1 -1
arch/mn10300/unit-asb2305/pci-asb2305.c
···
  if (!r->flags)
  	continue;
  if (!r->start ||
-     pci_claim_resource(dev, idx) < 0) {
+     pci_claim_bridge_resource(dev, idx) < 0) {
  	printk(KERN_ERR "PCI:"
  	       " Cannot allocate resource"
  	       " region %d of bridge %s\n",
+24 -29
arch/mn10300/unit-asb2305/pci.c
···
  	return -ENODEV;
  }

- static int is_valid_resource(struct pci_dev *dev, int idx)
- {
- 	unsigned int i, type_mask = IORESOURCE_IO | IORESOURCE_MEM;
- 	struct resource *devr = &dev->resource[idx], *busr;
-
- 	if (dev->bus) {
- 		pci_bus_for_each_resource(dev->bus, busr, i) {
- 			if (!busr || (busr->flags ^ devr->flags) & type_mask)
- 				continue;
-
- 			if (devr->start &&
- 			    devr->start >= busr->start &&
- 			    devr->end <= busr->end)
- 				return 1;
- 		}
- 	}
-
- 	return 0;
- }
-
  static void pcibios_fixup_device_resources(struct pci_dev *dev)
  {
- 	int limit, i;
+ 	int idx;

- 	if (dev->bus->number != 0)
+ 	if (!dev->bus)
  		return;

- 	limit = (dev->hdr_type == PCI_HEADER_TYPE_NORMAL) ?
- 		PCI_BRIDGE_RESOURCES : PCI_NUM_RESOURCES;
+ 	for (idx = 0; idx < PCI_BRIDGE_RESOURCES; idx++) {
+ 		struct resource *r = &dev->resource[idx];

- 	for (i = 0; i < limit; i++) {
- 		if (!dev->resource[i].flags)
+ 		if (!r->flags || r->parent || !r->start)
  			continue;

- 		if (is_valid_resource(dev, i))
- 			pci_claim_resource(dev, i);
+ 		pci_claim_resource(dev, idx);
+ 	}
+ }
+
+ static void pcibios_fixup_bridge_resources(struct pci_dev *dev)
+ {
+ 	int idx;
+
+ 	if (!dev->bus)
+ 		return;
+
+ 	for (idx = PCI_BRIDGE_RESOURCES; idx < PCI_NUM_RESOURCES; idx++) {
+ 		struct resource *r = &dev->resource[idx];
+
+ 		if (!r->flags || r->parent || !r->start)
+ 			continue;
+
+ 		pci_claim_bridge_resource(dev, idx);
  	}
  }
···
  	if (bus->self) {
  		pci_read_bridge_bases(bus);
- 		pcibios_fixup_device_resources(bus->self);
+ 		pcibios_fixup_bridge_resources(bus->self);
  	}

  	list_for_each_entry(dev, &bus->devices, bus_list)
+1 -1
arch/nios2/kernel/module.c
··· 36 36 } 37 37 38 38 /* Free memory returned from module_alloc */ 39 - void module_free(struct module *mod, void *module_region) 39 + void module_memfree(void *module_region) 40 40 { 41 41 kfree(module_region); 42 42 }
+1 -1
arch/nios2/kernel/signal.c
··· 200 200 201 201 /* Set up to return from userspace; jump to fixed address sigreturn 202 202 trampoline on kuser page. */ 203 - regs->ra = (unsigned long) (0x1040); 203 + regs->ra = (unsigned long) (0x1044); 204 204 205 205 /* Set up registers for signal handler */ 206 206 regs->sp = (unsigned long) frame;
+1 -5
arch/parisc/kernel/module.c
··· 298 298 } 299 299 #endif 300 300 301 - 302 - /* Free memory returned from module_alloc */ 303 - void module_free(struct module *mod, void *module_region) 301 + void module_arch_freeing_init(struct module *mod) 304 302 { 305 303 kfree(mod->arch.section); 306 304 mod->arch.section = NULL; 307 - 308 - vfree(module_region); 309 305 } 310 306 311 307 /* Additional bytes needed in front of individual sections */
+11 -1
arch/powerpc/kernel/pci-common.c
··· 1184 1184 pr, (pr && pr->name) ? pr->name : "nil"); 1185 1185 1186 1186 if (pr && !(pr->flags & IORESOURCE_UNSET)) { 1187 + struct pci_dev *dev = bus->self; 1188 + 1187 1189 if (request_resource(pr, res) == 0) 1188 1190 continue; 1189 1191 /* ··· 1194 1192 * bridge resource and try again. 1195 1193 */ 1196 1194 if (reparent_resources(pr, res) == 0) 1195 + continue; 1196 + 1197 + if (dev && i < PCI_BRIDGE_RESOURCE_NUM && 1198 + pci_claim_bridge_resource(dev, 1199 + i + PCI_BRIDGE_RESOURCES) == 0) 1197 1200 continue; 1198 1201 } 1199 1202 pr_warning("PCI: Cannot allocate resource region " ··· 1408 1401 (unsigned long long)r->end, 1409 1402 (unsigned int)r->flags); 1410 1403 1411 - pci_claim_resource(dev, i); 1404 + if (pci_claim_resource(dev, i) == 0) 1405 + continue; 1406 + 1407 + pci_claim_bridge_resource(dev, i); 1412 1408 } 1413 1409 } 1414 1410
+1 -1
arch/powerpc/net/bpf_jit_comp.c
··· 699 699 void bpf_jit_free(struct bpf_prog *fp) 700 700 { 701 701 if (fp->jited) 702 - module_free(NULL, fp->bpf_func); 702 + module_memfree(fp->bpf_func); 703 703 704 704 bpf_prog_unlock_free(fp); 705 705 }
+1 -1
arch/powerpc/platforms/powernv/setup.c
··· 304 304 * all cpus at boot. Get these reg values of current cpu and use the 305 305 * same accross all cpus. 306 306 */ 307 - uint64_t lpcr_val = mfspr(SPRN_LPCR); 307 + uint64_t lpcr_val = mfspr(SPRN_LPCR) & ~(u64)LPCR_PECE1; 308 308 uint64_t hid0_val = mfspr(SPRN_HID0); 309 309 uint64_t hid1_val = mfspr(SPRN_HID1); 310 310 uint64_t hid4_val = mfspr(SPRN_HID4);
+1
arch/powerpc/xmon/xmon.c
··· 337 337 args.token = rtas_token("set-indicator"); 338 338 if (args.token == RTAS_UNKNOWN_SERVICE) 339 339 return; 340 + args.token = cpu_to_be32(args.token); 340 341 args.nargs = cpu_to_be32(3); 341 342 args.nret = cpu_to_be32(1); 342 343 args.rets = &args.args[3];
+3 -7
arch/s390/kernel/module.c
··· 55 55 } 56 56 #endif 57 57 58 - /* Free memory returned from module_alloc */ 59 - void module_free(struct module *mod, void *module_region) 58 + void module_arch_freeing_init(struct module *mod) 60 59 { 61 - if (mod) { 62 - vfree(mod->arch.syminfo); 63 - mod->arch.syminfo = NULL; 64 - } 65 - vfree(module_region); 60 + vfree(mod->arch.syminfo); 61 + mod->arch.syminfo = NULL; 66 62 } 67 63 68 64 static void check_rela(Elf_Rela *rela, struct module *me)
+16 -12
arch/s390/net/bpf_jit.S
··· 22 22 * skb_copy_bits takes 4 parameters: 23 23 * %r2 = skb pointer 24 24 * %r3 = offset into skb data 25 - * %r4 = length to copy 26 - * %r5 = pointer to temp buffer 25 + * %r4 = pointer to temp buffer 26 + * %r5 = length to copy 27 27 */ 28 28 #define SKBDATA %r8 29 29 ··· 44 44 45 45 sk_load_word_slow: 46 46 lgr %r9,%r2 # save %r2 47 - lhi %r4,4 # 4 bytes 48 - la %r5,160(%r15) # pointer to temp buffer 47 + lgr %r3,%r1 # offset 48 + la %r4,160(%r15) # pointer to temp buffer 49 + lghi %r5,4 # 4 bytes 49 50 brasl %r14,skb_copy_bits # get data from skb 50 51 l %r5,160(%r15) # load result from temp buffer 51 52 ltgr %r2,%r2 # set cc to (%r2 != 0) ··· 70 69 71 70 sk_load_half_slow: 72 71 lgr %r9,%r2 # save %r2 73 - lhi %r4,2 # 2 bytes 74 - la %r5,162(%r15) # pointer to temp buffer 72 + lgr %r3,%r1 # offset 73 + la %r4,162(%r15) # pointer to temp buffer 74 + lghi %r5,2 # 2 bytes 75 75 brasl %r14,skb_copy_bits # get data from skb 76 76 xc 160(2,%r15),160(%r15) 77 77 l %r5,160(%r15) # load result from temp buffer ··· 97 95 98 96 sk_load_byte_slow: 99 97 lgr %r9,%r2 # save %r2 100 - lhi %r4,1 # 1 bytes 101 - la %r5,163(%r15) # pointer to temp buffer 98 + lgr %r3,%r1 # offset 99 + la %r4,163(%r15) # pointer to temp buffer 100 + lghi %r5,1 # 1 byte 102 101 brasl %r14,skb_copy_bits # get data from skb 103 102 xc 160(3,%r15),160(%r15) 104 103 l %r5,160(%r15) # load result from temp buffer ··· 107 104 lgr %r2,%r9 # restore %r2 108 105 br %r8 109 106 110 - /* A = (*(u8 *)(skb->data+K) & 0xf) << 2 */ 107 + /* X = (*(u8 *)(skb->data+K) & 0xf) << 2 */ 111 108 ENTRY(sk_load_byte_msh) 112 109 llgfr %r1,%r3 # extend offset 113 110 clr %r11,%r3 # hlen < offset ? 
114 - jle sk_load_byte_slow 111 + jle sk_load_byte_msh_slow 115 112 lhi %r12,0 116 113 ic %r12,0(%r1,%r10) # get byte from skb 117 114 nill %r12,0x0f ··· 121 118 122 119 sk_load_byte_msh_slow: 123 120 lgr %r9,%r2 # save %r2 124 - lhi %r4,2 # 2 bytes 125 - la %r5,162(%r15) # pointer to temp buffer 121 + lgr %r3,%r1 # offset 122 + la %r4,163(%r15) # pointer to temp buffer 123 + lghi %r5,1 # 1 byte 126 124 brasl %r14,skb_copy_bits # get data from skb 127 125 xc 160(3,%r15),160(%r15) 128 126 l %r12,160(%r15) # load result from temp buffer
+3 -6
arch/s390/net/bpf_jit_comp.c
··· 448 448 mask = 0x800000; /* je */ 449 449 kbranch: /* Emit compare if the branch targets are different */ 450 450 if (filter->jt != filter->jf) { 451 - if (K <= 16383) 452 - /* chi %r5,<K> */ 453 - EMIT4_IMM(0xa75e0000, K); 454 - else if (test_facility(21)) 451 + if (test_facility(21)) 455 452 /* clfi %r5,<K> */ 456 453 EMIT6_IMM(0xc25f0000, K); 457 454 else 458 - /* c %r5,<d(K)>(%r13) */ 459 - EMIT4_DISP(0x5950d000, EMIT_CONST(K)); 455 + /* cl %r5,<d(K)>(%r13) */ 456 + EMIT4_DISP(0x5550d000, EMIT_CONST(K)); 460 457 } 461 458 branch: if (filter->jt == filter->jf) { 462 459 if (filter->jt == 0)
+4 -1
arch/sparc/kernel/pci.c
··· 639 639 (unsigned long long)r->end, 640 640 (unsigned int)r->flags); 641 641 642 - pci_claim_resource(dev, i); 642 + if (pci_claim_resource(dev, i) == 0) 643 + continue; 644 + 645 + pci_claim_bridge_resource(dev, i); 643 646 } 644 647 } 645 648
+2 -2
arch/sparc/net/bpf_jit_comp.c
··· 776 776 if (unlikely(proglen + ilen > oldproglen)) { 777 777 pr_err("bpb_jit_compile fatal error\n"); 778 778 kfree(addrs); 779 - module_free(NULL, image); 779 + module_memfree(image); 780 780 return; 781 781 } 782 782 memcpy(image + proglen, temp, ilen); ··· 822 822 void bpf_jit_free(struct bpf_prog *fp) 823 823 { 824 824 if (fp->jited) 825 - module_free(NULL, fp->bpf_func); 825 + module_memfree(fp->bpf_func); 826 826 827 827 bpf_prog_unlock_free(fp); 828 828 }
+2 -2
arch/tile/kernel/module.c
··· 74 74 75 75 76 76 /* Free memory returned from module_alloc */ 77 - void module_free(struct module *mod, void *module_region) 77 + void module_memfree(void *module_region) 78 78 { 79 79 vfree(module_region); 80 80 ··· 83 83 0, 0, 0, NULL, NULL, 0); 84 84 85 85 /* 86 - * FIXME: If module_region == mod->module_init, trim exception 86 + * FIXME: Add module_arch_freeing_init to trim exception 87 87 * table entries. 88 88 */ 89 89 }
+5 -1
arch/x86/Kconfig
··· 857 857 858 858 config X86_UP_APIC 859 859 bool "Local APIC support on uniprocessors" 860 - depends on X86_32 && !SMP && !X86_32_NON_STANDARD && !PCI_MSI 860 + depends on X86_32 && !SMP && !X86_32_NON_STANDARD 861 861 ---help--- 862 862 A local APIC (Advanced Programmable Interrupt Controller) is an 863 863 integrated interrupt controller in the CPU. If you have a single-CPU ··· 867 867 all. The local APIC supports CPU-generated self-interrupts (timer, 868 868 performance counters), and the NMI watchdog which detects hard 869 869 lockups. 870 + 871 + config X86_UP_APIC_MSI 872 + def_bool y 873 + select X86_UP_APIC if X86_32 && !SMP && !X86_32_NON_STANDARD && PCI_MSI 870 874 871 875 config X86_UP_IOAPIC 872 876 bool "IO-APIC support on uniprocessors"
+1 -1
arch/x86/boot/compressed/Makefile
··· 90 90 suffix-$(CONFIG_KERNEL_LZ4) := lz4 91 91 92 92 RUN_SIZE = $(shell $(OBJDUMP) -h vmlinux | \ 93 - perl $(srctree)/arch/x86/tools/calc_run_size.pl) 93 + $(CONFIG_SHELL) $(srctree)/arch/x86/tools/calc_run_size.sh) 94 94 quiet_cmd_mkpiggy = MKPIGGY $@ 95 95 cmd_mkpiggy = $(obj)/mkpiggy $< $(RUN_SIZE) > $@ || ( rm -f $@ ; false ) 96 96
+8 -1
arch/x86/boot/compressed/misc.c
··· 373 373 unsigned long output_len, 374 374 unsigned long run_size) 375 375 { 376 + unsigned char *output_orig = output; 377 + 376 378 real_mode = rmode; 377 379 378 380 sanitize_boot_params(real_mode); ··· 423 421 debug_putstr("\nDecompressing Linux... "); 424 422 decompress(input_data, input_len, NULL, NULL, output, NULL, error); 425 423 parse_elf(output); 426 - handle_relocations(output, output_len); 424 + /* 425 + * 32-bit always performs relocations. 64-bit relocations are only 426 + * needed if kASLR has chosen a different load address. 427 + */ 428 + if (!IS_ENABLED(CONFIG_X86_64) || output != output_orig) 429 + handle_relocations(output, output_len); 427 430 debug_putstr("done.\nBooting the kernel.\n"); 428 431 return output; 429 432 }
+1
arch/x86/include/asm/acpi.h
··· 50 50 51 51 extern int (*__acpi_register_gsi)(struct device *dev, u32 gsi, 52 52 int trigger, int polarity); 53 + extern void (*__acpi_unregister_gsi)(u32 gsi); 53 54 54 55 static inline void disable_acpi(void) 55 56 {
+14 -6
arch/x86/include/asm/desc.h
··· 251 251 gdt[GDT_ENTRY_TLS_MIN + i] = t->tls_array[i]; 252 252 } 253 253 254 - #define _LDT_empty(info) \ 254 + /* This intentionally ignores lm, since 32-bit apps don't have that field. */ 255 + #define LDT_empty(info) \ 255 256 ((info)->base_addr == 0 && \ 256 257 (info)->limit == 0 && \ 257 258 (info)->contents == 0 && \ ··· 262 261 (info)->seg_not_present == 1 && \ 263 262 (info)->useable == 0) 264 263 265 - #ifdef CONFIG_X86_64 266 - #define LDT_empty(info) (_LDT_empty(info) && ((info)->lm == 0)) 267 - #else 268 - #define LDT_empty(info) (_LDT_empty(info)) 269 - #endif 264 + /* Lots of programs expect an all-zero user_desc to mean "no segment at all". */ 265 + static inline bool LDT_zero(const struct user_desc *info) 266 + { 267 + return (info->base_addr == 0 && 268 + info->limit == 0 && 269 + info->contents == 0 && 270 + info->read_exec_only == 0 && 271 + info->seg_32bit == 0 && 272 + info->limit_in_pages == 0 && 273 + info->seg_not_present == 0 && 274 + info->useable == 0); 275 + } 270 276 271 277 static inline void clear_LDT(void) 272 278 {
+19 -1
arch/x86/include/asm/mmu_context.h
··· 130 130 static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma, 131 131 unsigned long start, unsigned long end) 132 132 { 133 - mpx_notify_unmap(mm, vma, start, end); 133 + /* 134 + * mpx_notify_unmap() goes and reads a rarely-hot 135 + * cacheline in the mm_struct. That can be expensive 136 + * enough to be seen in profiles. 137 + * 138 + * The mpx_notify_unmap() call and its contents have been 139 + * observed to affect munmap() performance on hardware 140 + * where MPX is not present. 141 + * 142 + * The unlikely() optimizes for the fast case: no MPX 143 + * in the CPU, or no MPX use in the process. Even if 144 + * we get this wrong (in the unlikely event that MPX 145 + * is widely enabled on some system) the overhead of 146 + * MPX itself (reading bounds tables) is expected to 147 + * overwhelm the overhead of getting this unlikely() 148 + * consistently wrong. 149 + */ 150 + if (unlikely(cpu_feature_enabled(X86_FEATURE_MPX))) 151 + mpx_notify_unmap(mm, vma, start, end); 134 152 } 135 153 136 154 #endif /* _ASM_X86_MMU_CONTEXT_H */
+12 -12
arch/x86/kernel/acpi/boot.c
··· 611 611 612 612 int acpi_gsi_to_irq(u32 gsi, unsigned int *irqp) 613 613 { 614 - int irq; 614 + int rc, irq, trigger, polarity; 615 615 616 - if (acpi_irq_model == ACPI_IRQ_MODEL_PIC) { 617 - *irqp = gsi; 618 - } else { 619 - mutex_lock(&acpi_ioapic_lock); 620 - irq = mp_map_gsi_to_irq(gsi, 621 - IOAPIC_MAP_ALLOC | IOAPIC_MAP_CHECK); 622 - mutex_unlock(&acpi_ioapic_lock); 623 - if (irq < 0) 624 - return -1; 625 - *irqp = irq; 616 + rc = acpi_get_override_irq(gsi, &trigger, &polarity); 617 + if (rc == 0) { 618 + trigger = trigger ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE; 619 + polarity = polarity ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH; 620 + irq = acpi_register_gsi(NULL, gsi, trigger, polarity); 621 + if (irq >= 0) { 622 + *irqp = irq; 623 + return 0; 624 + } 626 625 } 627 - return 0; 626 + 627 + return -1; 628 628 } 629 629 EXPORT_SYMBOL_GPL(acpi_gsi_to_irq); 630 630
+1
arch/x86/kernel/cpu/mshyperv.c
··· 107 107 .rating = 400, /* use this when running on Hyperv*/ 108 108 .read = read_hv_clock, 109 109 .mask = CLOCKSOURCE_MASK(64), 110 + .flags = CLOCK_SOURCE_IS_CONTINUOUS, 110 111 }; 111 112 112 113 static void __init ms_hyperv_init_platform(void)
+1 -1
arch/x86/kernel/ftrace.c
··· 674 674 } 675 675 static inline void tramp_free(void *tramp) 676 676 { 677 - module_free(NULL, tramp); 677 + module_memfree(tramp); 678 678 } 679 679 #else 680 680 /* Trampolines can only be created if modules are supported */
+1 -1
arch/x86/kernel/irq.c
··· 127 127 seq_puts(p, " Machine check polls\n"); 128 128 #endif 129 129 #if IS_ENABLED(CONFIG_HYPERV) || defined(CONFIG_XEN) 130 - seq_printf(p, "%*s: ", prec, "THR"); 130 + seq_printf(p, "%*s: ", prec, "HYP"); 131 131 for_each_online_cpu(j) 132 132 seq_printf(p, "%10u ", irq_stats(j)->irq_hv_callback_count); 133 133 seq_puts(p, " Hypervisor callback interrupts\n");
+23 -2
arch/x86/kernel/tls.c
··· 29 29 30 30 static bool tls_desc_okay(const struct user_desc *info) 31 31 { 32 - if (LDT_empty(info)) 32 + /* 33 + * For historical reasons (i.e. no one ever documented how any 34 + * of the segmentation APIs work), user programs can and do 35 + * assume that a struct user_desc that's all zeros except for 36 + * entry_number means "no segment at all". This never actually 37 + * worked. In fact, up to Linux 3.19, a struct user_desc like 38 + * this would create a 16-bit read-write segment with base and 39 + * limit both equal to zero. 40 + * 41 + * That was close enough to "no segment at all" until we 42 + * hardened this function to disallow 16-bit TLS segments. Fix 43 + * it up by interpreting these zeroed segments the way that they 44 + * were almost certainly intended to be interpreted. 45 + * 46 + * The correct way to ask for "no segment at all" is to specify 47 + * a user_desc that satisfies LDT_empty. To keep everything 48 + * working, we accept both. 49 + * 50 + * Note that there's a similar kludge in modify_ldt -- look at 51 + * the distinction between modes 1 and 0x11. 52 + */ 53 + if (LDT_empty(info) || LDT_zero(info)) 33 54 return true; 34 55 35 56 /* ··· 92 71 cpu = get_cpu(); 93 72 94 73 while (n-- > 0) { 95 - if (LDT_empty(info)) 74 + if (LDT_empty(info) || LDT_zero(info)) 96 75 desc->a = desc->b = 0; 97 76 else 98 77 fill_ldt(desc, info);
+1 -1
arch/x86/kernel/tsc.c
··· 617 617 goto success; 618 618 } 619 619 } 620 - pr_err("Fast TSC calibration failed\n"); 620 + pr_info("Fast TSC calibration failed\n"); 621 621 return 0; 622 622 623 623 success:
+10 -21
arch/x86/kvm/emulate.c
··· 2348 2348 * Not recognized on AMD in compat mode (but is recognized in legacy 2349 2349 * mode). 2350 2350 */ 2351 - if ((ctxt->mode == X86EMUL_MODE_PROT32) && (efer & EFER_LMA) 2351 + if ((ctxt->mode != X86EMUL_MODE_PROT64) && (efer & EFER_LMA) 2352 2352 && !vendor_intel(ctxt)) 2353 2353 return emulate_ud(ctxt); 2354 2354 ··· 2359 2359 setup_syscalls_segments(ctxt, &cs, &ss); 2360 2360 2361 2361 ops->get_msr(ctxt, MSR_IA32_SYSENTER_CS, &msr_data); 2362 - switch (ctxt->mode) { 2363 - case X86EMUL_MODE_PROT32: 2364 - if ((msr_data & 0xfffc) == 0x0) 2365 - return emulate_gp(ctxt, 0); 2366 - break; 2367 - case X86EMUL_MODE_PROT64: 2368 - if (msr_data == 0x0) 2369 - return emulate_gp(ctxt, 0); 2370 - break; 2371 - default: 2372 - break; 2373 - } 2362 + if ((msr_data & 0xfffc) == 0x0) 2363 + return emulate_gp(ctxt, 0); 2374 2364 2375 2365 ctxt->eflags &= ~(EFLG_VM | EFLG_IF); 2376 - cs_sel = (u16)msr_data; 2377 - cs_sel &= ~SELECTOR_RPL_MASK; 2366 + cs_sel = (u16)msr_data & ~SELECTOR_RPL_MASK; 2378 2367 ss_sel = cs_sel + 8; 2379 - ss_sel &= ~SELECTOR_RPL_MASK; 2380 - if (ctxt->mode == X86EMUL_MODE_PROT64 || (efer & EFER_LMA)) { 2368 + if (efer & EFER_LMA) { 2381 2369 cs.d = 0; 2382 2370 cs.l = 1; 2383 2371 } ··· 2374 2386 ops->set_segment(ctxt, ss_sel, &ss, 0, VCPU_SREG_SS); 2375 2387 2376 2388 ops->get_msr(ctxt, MSR_IA32_SYSENTER_EIP, &msr_data); 2377 - ctxt->_eip = msr_data; 2389 + ctxt->_eip = (efer & EFER_LMA) ? msr_data : (u32)msr_data; 2378 2390 2379 2391 ops->get_msr(ctxt, MSR_IA32_SYSENTER_ESP, &msr_data); 2380 - *reg_write(ctxt, VCPU_REGS_RSP) = msr_data; 2392 + *reg_write(ctxt, VCPU_REGS_RSP) = (efer & EFER_LMA) ? 
msr_data : 2393 + (u32)msr_data; 2381 2394 2382 2395 return X86EMUL_CONTINUE; 2383 2396 } ··· 3780 3791 }; 3781 3792 3782 3793 static const struct opcode group6[] = { 3783 - DI(Prot, sldt), 3784 - DI(Prot, str), 3794 + DI(Prot | DstMem, sldt), 3795 + DI(Prot | DstMem, str), 3785 3796 II(Prot | Priv | SrcMem16, em_lldt, lldt), 3786 3797 II(Prot | Priv | SrcMem16, em_ltr, ltr), 3787 3798 N, N, N, N,
+2 -2
arch/x86/mm/init.c
··· 43 43 [_PAGE_CACHE_MODE_WT] = _PAGE_PCD, 44 44 [_PAGE_CACHE_MODE_WP] = _PAGE_PCD, 45 45 }; 46 - EXPORT_SYMBOL_GPL(__cachemode2pte_tbl); 46 + EXPORT_SYMBOL(__cachemode2pte_tbl); 47 47 uint8_t __pte2cachemode_tbl[8] = { 48 48 [__pte2cm_idx(0)] = _PAGE_CACHE_MODE_WB, 49 49 [__pte2cm_idx(_PAGE_PWT)] = _PAGE_CACHE_MODE_WC, ··· 54 54 [__pte2cm_idx(_PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC_MINUS, 55 55 [__pte2cm_idx(_PAGE_PWT | _PAGE_PCD | _PAGE_PAT)] = _PAGE_CACHE_MODE_UC, 56 56 }; 57 - EXPORT_SYMBOL_GPL(__pte2cachemode_tbl); 57 + EXPORT_SYMBOL(__pte2cachemode_tbl); 58 58 59 59 static unsigned long __initdata pgt_buf_start; 60 60 static unsigned long __initdata pgt_buf_end;
+6
arch/x86/mm/mpx.c
··· 349 349 return MPX_INVALID_BOUNDS_DIR; 350 350 351 351 /* 352 + * 32-bit binaries on 64-bit kernels are currently 353 + * unsupported. 354 + */ 355 + if (IS_ENABLED(CONFIG_X86_64) && test_thread_flag(TIF_IA32)) 356 + return MPX_INVALID_BOUNDS_DIR; 357 + /* 352 358 * The bounds directory pointer is stored in a register 353 359 * only accessible if we first do an xsave. 354 360 */
+6 -1
arch/x86/mm/pat.c
··· 234 234 PAT(4, WB) | PAT(5, WC) | PAT(6, UC_MINUS) | PAT(7, UC); 235 235 236 236 /* Boot CPU check */ 237 - if (!boot_pat_state) 237 + if (!boot_pat_state) { 238 238 rdmsrl(MSR_IA32_CR_PAT, boot_pat_state); 239 + if (!boot_pat_state) { 240 + pat_disable("PAT read returns always zero, disabled."); 241 + return; 242 + } 243 + } 239 244 240 245 wrmsrl(MSR_IA32_CR_PAT, pat); 241 246
+1 -1
arch/x86/pci/i386.c
··· 216 216 continue; 217 217 if (r->parent) /* Already allocated */ 218 218 continue; 219 - if (!r->start || pci_claim_resource(dev, idx) < 0) { 219 + if (!r->start || pci_claim_bridge_resource(dev, idx) < 0) { 220 220 /* 221 221 * Something is wrong with the region. 222 222 * Invalidate the resource to prevent
+2 -47
arch/x86/pci/xen.c
··· 458 458 * just how GSIs get registered. 459 459 */ 460 460 __acpi_register_gsi = acpi_register_gsi_xen_hvm; 461 + __acpi_unregister_gsi = NULL; 461 462 #endif 462 463 463 464 #ifdef CONFIG_PCI_MSI ··· 472 471 } 473 472 474 473 #ifdef CONFIG_XEN_DOM0 475 - static __init void xen_setup_acpi_sci(void) 476 - { 477 - int rc; 478 - int trigger, polarity; 479 - int gsi = acpi_sci_override_gsi; 480 - int irq = -1; 481 - int gsi_override = -1; 482 - 483 - if (!gsi) 484 - return; 485 - 486 - rc = acpi_get_override_irq(gsi, &trigger, &polarity); 487 - if (rc) { 488 - printk(KERN_WARNING "xen: acpi_get_override_irq failed for acpi" 489 - " sci, rc=%d\n", rc); 490 - return; 491 - } 492 - trigger = trigger ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE; 493 - polarity = polarity ? ACPI_ACTIVE_LOW : ACPI_ACTIVE_HIGH; 494 - 495 - printk(KERN_INFO "xen: sci override: global_irq=%d trigger=%d " 496 - "polarity=%d\n", gsi, trigger, polarity); 497 - 498 - /* Before we bind the GSI to a Linux IRQ, check whether 499 - * we need to override it with bus_irq (IRQ) value. Usually for 500 - * IRQs below IRQ_LEGACY_IRQ this holds IRQ == GSI, as so: 501 - * ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 low level) 502 - * but there are oddballs where the IRQ != GSI: 503 - * ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 20 low level) 504 - * which ends up being: gsi_to_irq[9] == 20 505 - * (which is what acpi_gsi_to_irq ends up calling when starting the 506 - * the ACPI interpreter and keels over since IRQ 9 has not been 507 - * setup as we had setup IRQ 20 for it). 508 - */ 509 - if (acpi_gsi_to_irq(gsi, &irq) == 0) { 510 - /* Use the provided value if it's valid. 
*/ 511 - if (irq >= 0) 512 - gsi_override = irq; 513 - } 514 - 515 - gsi = xen_register_gsi(gsi, gsi_override, trigger, polarity); 516 - printk(KERN_INFO "xen: acpi sci %d\n", gsi); 517 - 518 - return; 519 - } 520 - 521 474 int __init pci_xen_initial_domain(void) 522 475 { 523 476 int irq; ··· 482 527 x86_msi.restore_msi_irqs = xen_initdom_restore_msi_irqs; 483 528 pci_msi_ignore_mask = 1; 484 529 #endif 485 - xen_setup_acpi_sci(); 486 530 __acpi_register_gsi = acpi_register_gsi_xen; 531 + __acpi_unregister_gsi = NULL; 487 532 /* Pre-allocate legacy irqs */ 488 533 for (irq = 0; irq < nr_legacy_irqs(); irq++) { 489 534 int trigger, polarity;
-39
arch/x86/tools/calc_run_size.pl
··· 1 - #!/usr/bin/perl 2 - # 3 - # Calculate the amount of space needed to run the kernel, including room for 4 - # the .bss and .brk sections. 5 - # 6 - # Usage: 7 - # objdump -h a.out | perl calc_run_size.pl 8 - use strict; 9 - 10 - my $mem_size = 0; 11 - my $file_offset = 0; 12 - 13 - my $sections=" *[0-9]+ \.(?:bss|brk) +"; 14 - while (<>) { 15 - if (/^$sections([0-9a-f]+) +(?:[0-9a-f]+ +){2}([0-9a-f]+)/) { 16 - my $size = hex($1); 17 - my $offset = hex($2); 18 - $mem_size += $size; 19 - if ($file_offset == 0) { 20 - $file_offset = $offset; 21 - } elsif ($file_offset != $offset) { 22 - # BFD linker shows the same file offset in ELF. 23 - # Gold linker shows them as consecutive. 24 - next if ($file_offset + $mem_size == $offset + $size); 25 - 26 - printf STDERR "file_offset: 0x%lx\n", $file_offset; 27 - printf STDERR "mem_size: 0x%lx\n", $mem_size; 28 - printf STDERR "offset: 0x%lx\n", $offset; 29 - printf STDERR "size: 0x%lx\n", $size; 30 - 31 - die ".bss and .brk are non-contiguous\n"; 32 - } 33 - } 34 - } 35 - 36 - if ($file_offset == 0) { 37 - die "Never found .bss or .brk file offset\n"; 38 - } 39 - printf("%d\n", $mem_size + $file_offset);
+42
arch/x86/tools/calc_run_size.sh
··· 1 + #!/bin/sh 2 + # 3 + # Calculate the amount of space needed to run the kernel, including room for 4 + # the .bss and .brk sections. 5 + # 6 + # Usage: 7 + # objdump -h a.out | sh calc_run_size.sh 8 + 9 + NUM='\([0-9a-fA-F]*[ \t]*\)' 10 + OUT=$(sed -n 's/^[ \t0-9]*.b[sr][sk][ \t]*'"$NUM$NUM$NUM$NUM"'.*/\1\4/p') 11 + if [ -z "$OUT" ] ; then 12 + echo "Never found .bss or .brk file offset" >&2 13 + exit 1 14 + fi 15 + 16 + OUT=$(echo ${OUT# }) 17 + sizeA=$(printf "%d" 0x${OUT%% *}) 18 + OUT=${OUT#* } 19 + offsetA=$(printf "%d" 0x${OUT%% *}) 20 + OUT=${OUT#* } 21 + sizeB=$(printf "%d" 0x${OUT%% *}) 22 + OUT=${OUT#* } 23 + offsetB=$(printf "%d" 0x${OUT%% *}) 24 + 25 + run_size=$(( $offsetA + $sizeA + $sizeB )) 26 + 27 + # BFD linker shows the same file offset in ELF. 28 + if [ "$offsetA" -ne "$offsetB" ] ; then 29 + # Gold linker shows them as consecutive. 30 + endB=$(( $offsetB + $sizeB )) 31 + if [ "$endB" != "$run_size" ] ; then 32 + printf "sizeA: 0x%x\n" $sizeA >&2 33 + printf "offsetA: 0x%x\n" $offsetA >&2 34 + printf "sizeB: 0x%x\n" $sizeB >&2 35 + printf "offsetB: 0x%x\n" $offsetB >&2 36 + echo ".bss and .brk are non-contiguous" >&2 37 + exit 1 38 + fi 39 + fi 40 + 41 + printf "%d\n" $run_size 42 + exit 0
+23 -2
block/blk-mq-sysfs.c
··· 15 15 16 16 static void blk_mq_sysfs_release(struct kobject *kobj) 17 17 { 18 + struct request_queue *q; 19 + 20 + q = container_of(kobj, struct request_queue, mq_kobj); 21 + free_percpu(q->queue_ctx); 22 + } 23 + 24 + static void blk_mq_ctx_release(struct kobject *kobj) 25 + { 26 + struct blk_mq_ctx *ctx; 27 + 28 + ctx = container_of(kobj, struct blk_mq_ctx, kobj); 29 + kobject_put(&ctx->queue->mq_kobj); 30 + } 31 + 32 + static void blk_mq_hctx_release(struct kobject *kobj) 33 + { 34 + struct blk_mq_hw_ctx *hctx; 35 + 36 + hctx = container_of(kobj, struct blk_mq_hw_ctx, kobj); 37 + kfree(hctx); 18 38 } 19 39 20 40 struct blk_mq_ctx_sysfs_entry { ··· 338 318 static struct kobj_type blk_mq_ctx_ktype = { 339 319 .sysfs_ops = &blk_mq_sysfs_ops, 340 320 .default_attrs = default_ctx_attrs, 341 - .release = blk_mq_sysfs_release, 321 + .release = blk_mq_ctx_release, 342 322 }; 343 323 344 324 static struct kobj_type blk_mq_hw_ktype = { 345 325 .sysfs_ops = &blk_mq_hw_sysfs_ops, 346 326 .default_attrs = default_hw_ctx_attrs, 347 - .release = blk_mq_sysfs_release, 327 + .release = blk_mq_hctx_release, 348 328 }; 349 329 350 330 static void blk_mq_unregister_hctx(struct blk_mq_hw_ctx *hctx) ··· 375 355 return ret; 376 356 377 357 hctx_for_each_ctx(hctx, ctx, i) { 358 + kobject_get(&q->mq_kobj); 378 359 ret = kobject_add(&ctx->kobj, &hctx->kobj, "cpu%u", ctx->cpu); 379 360 if (ret) 380 361 break;
+1 -5
block/blk-mq.c
··· 1641 1641 struct blk_mq_hw_ctx *hctx; 1642 1642 unsigned int i; 1643 1643 1644 - queue_for_each_hw_ctx(q, hctx, i) { 1644 + queue_for_each_hw_ctx(q, hctx, i) 1645 1645 free_cpumask_var(hctx->cpumask); 1646 - kfree(hctx); 1647 - } 1648 1646 } 1649 1647 1650 1648 static int blk_mq_init_hctx(struct request_queue *q, ··· 2000 2002 2001 2003 percpu_ref_exit(&q->mq_usage_counter); 2002 2004 2003 - free_percpu(q->queue_ctx); 2004 2005 kfree(q->queue_hw_ctx); 2005 2006 kfree(q->mq_map); 2006 2007 2007 - q->queue_ctx = NULL; 2008 2008 q->queue_hw_ctx = NULL; 2009 2009 q->mq_map = NULL; 2010 2010
-1
drivers/acpi/pci_irq.c
··· 512 512 dev_dbg(&dev->dev, "PCI INT %c disabled\n", pin_name(pin)); 513 513 if (gsi >= 0) { 514 514 acpi_unregister_gsi(gsi); 515 - dev->irq = 0; 516 515 dev->irq_managed = 0; 517 516 } 518 517 }
+1 -1
drivers/block/nvme-core.c
··· 106 106 dma_addr_t cq_dma_addr; 107 107 u32 __iomem *q_db; 108 108 u16 q_depth; 109 - u16 cq_vector; 109 + s16 cq_vector; 110 110 u16 sq_head; 111 111 u16 sq_tail; 112 112 u16 cq_head;
+13
drivers/bus/mvebu-mbus.c
··· 210 210 } 211 211 212 212 /* Checks whether the given window number is available */ 213 + 214 + /* On Armada XP, 375 and 38x the MBus window 13 has the remap 215 + * capability, like windows 0 to 7. However, the mvebu-mbus driver 216 + * isn't currently taking into account this special case, which means 217 + * that when window 13 is actually used, the remap registers are left 218 + * to 0, making the device using this MBus window unavailable. The 219 + * quick fix for stable is to not use window 13. A follow up patch 220 + * will correctly handle this window. 221 + */ 213 222 static int mvebu_mbus_window_is_free(struct mvebu_mbus_state *mbus, 214 223 const int win) 215 224 { 216 225 void __iomem *addr = mbus->mbuswins_base + 217 226 mbus->soc->win_cfg_offset(win); 218 227 u32 ctrl = readl(addr + WIN_CTRL_OFF); 228 + 229 + if (win == 13) 230 + return false; 231 + 219 232 return !(ctrl & WIN_CTRL_ENABLE); 220 233 } 221 234
+4 -5
drivers/clocksource/bcm_kona_timer.c
··· 68 68 } 69 69 70 70 static void 71 - kona_timer_get_counter(void *timer_base, uint32_t *msw, uint32_t *lsw) 71 + kona_timer_get_counter(void __iomem *timer_base, uint32_t *msw, uint32_t *lsw) 72 72 { 73 - void __iomem *base = IOMEM(timer_base); 74 73 int loop_limit = 4; 75 74 76 75 /* ··· 85 86 */ 86 87 87 88 while (--loop_limit) { 88 - *msw = readl(base + KONA_GPTIMER_STCHI_OFFSET); 89 - *lsw = readl(base + KONA_GPTIMER_STCLO_OFFSET); 90 - if (*msw == readl(base + KONA_GPTIMER_STCHI_OFFSET)) 89 + *msw = readl(timer_base + KONA_GPTIMER_STCHI_OFFSET); 90 + *lsw = readl(timer_base + KONA_GPTIMER_STCLO_OFFSET); 91 + if (*msw == readl(timer_base + KONA_GPTIMER_STCHI_OFFSET)) 91 92 break; 92 93 } 93 94 if (!loop_limit) {
+2 -2
drivers/clocksource/exynos_mct.c
··· 97 97 writel_relaxed(value, reg_base + offset); 98 98 99 99 if (likely(offset >= EXYNOS4_MCT_L_BASE(0))) { 100 - stat_addr = (offset & ~EXYNOS4_MCT_L_MASK) + MCT_L_WSTAT_OFFSET; 101 - switch (offset & EXYNOS4_MCT_L_MASK) { 100 + stat_addr = (offset & EXYNOS4_MCT_L_MASK) + MCT_L_WSTAT_OFFSET; 101 + switch (offset & ~EXYNOS4_MCT_L_MASK) { 102 102 case MCT_L_TCON_OFFSET: 103 103 mask = 1 << 3; /* L_TCON write status */ 104 104 break;
+1 -1
drivers/clocksource/sh_tmu.c
··· 428 428 ced->features = CLOCK_EVT_FEAT_PERIODIC; 429 429 ced->features |= CLOCK_EVT_FEAT_ONESHOT; 430 430 ced->rating = 200; 431 - ced->cpumask = cpumask_of(0); 431 + ced->cpumask = cpu_possible_mask; 432 432 ced->set_next_event = sh_tmu_clock_event_next; 433 433 ced->set_mode = sh_tmu_clock_event_mode; 434 434 ced->suspend = sh_tmu_clock_event_suspend;
+4 -5
drivers/gpu/drm/amd/amdkfd/kfd_device.c
··· 183 183 kfd->shared_resources = *gpu_resources; 184 184 185 185 /* calculate max size of mqds needed for queues */ 186 - size = max_num_of_processes * 187 - max_num_of_queues_per_process * 188 - kfd->device_info->mqd_size_aligned; 186 + size = max_num_of_queues_per_device * 187 + kfd->device_info->mqd_size_aligned; 189 188 190 189 /* 191 190 * calculate max size of runlist packet. 192 191 * There can be only 2 packets at once 193 192 */ 194 - size += (max_num_of_processes * sizeof(struct pm4_map_process) + 195 - max_num_of_processes * max_num_of_queues_per_process * 193 + size += (KFD_MAX_NUM_OF_PROCESSES * sizeof(struct pm4_map_process) + 194 + max_num_of_queues_per_device * 196 195 sizeof(struct pm4_map_queues) + sizeof(struct pm4_runlist)) * 2; 197 196 198 197 /* Add size of HIQ & DIQ */
+77 -3
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
···
 
 	mutex_lock(&dqm->lock);
 
+	if (dqm->total_queue_count >= max_num_of_queues_per_device) {
+		pr_warn("amdkfd: Can't create new usermode queue because %d queues were already created\n",
+				dqm->total_queue_count);
+		mutex_unlock(&dqm->lock);
+		return -EPERM;
+	}
+
 	if (list_empty(&qpd->queues_list)) {
 		retval = allocate_vmid(dqm, qpd, q);
 		if (retval != 0) {
···
 	list_add(&q->list, &qpd->queues_list);
 	dqm->queue_count++;
+
 	if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
 		dqm->sdma_queue_count++;
+
+	/*
+	 * Unconditionally increment this counter, regardless of the queue's
+	 * type or whether the queue is active.
+	 */
+	dqm->total_queue_count++;
+	pr_debug("Total of %d queues are accountable so far\n",
+			dqm->total_queue_count);
+
 	mutex_unlock(&dqm->lock);
 	return 0;
 }
···
 	if (list_empty(&qpd->queues_list))
 		deallocate_vmid(dqm, qpd, q);
 	dqm->queue_count--;
+
+	/*
+	 * Unconditionally decrement this counter, regardless of the queue's
+	 * type
+	 */
+	dqm->total_queue_count--;
+	pr_debug("Total of %d queues are accountable so far\n",
+			dqm->total_queue_count);
+
 out:
 	mutex_unlock(&dqm->lock);
 	return retval;
···
 
 	for (i = 0; i < pipes_num; i++) {
 		inx = i + first_pipe;
+		/*
+		 * HPD buffer on GTT is allocated by amdkfd, no need to waste
+		 * space in GTT for pipelines we don't initialize
+		 */
 		pipe_hpd_addr = dqm->pipelines_addr + i * CIK_HPD_EOP_BYTES;
 		pr_debug("kfd: pipeline address %llX\n", pipe_hpd_addr);
 		/* = log2(bytes/4)-1 */
-		kfd2kgd->init_pipeline(dqm->dev->kgd, i,
+		kfd2kgd->init_pipeline(dqm->dev->kgd, inx,
 				CIK_HPD_EOP_BYTES_LOG2 - 3, pipe_hpd_addr);
 	}
 
···
 
 	pr_debug("kfd: In %s\n", __func__);
 
-	retval = init_pipelines(dqm, get_pipes_num(dqm), KFD_DQM_FIRST_PIPE);
-
+	retval = init_pipelines(dqm, get_pipes_num(dqm), get_first_pipe(dqm));
 	return retval;
 }
···
 	pr_debug("kfd: In func %s\n", __func__);
 
 	mutex_lock(&dqm->lock);
+	if (dqm->total_queue_count >= max_num_of_queues_per_device) {
+		pr_warn("amdkfd: Can't create new kernel queue because %d queues were already created\n",
+				dqm->total_queue_count);
+		mutex_unlock(&dqm->lock);
+		return -EPERM;
+	}
+
+	/*
+	 * Unconditionally increment this counter, regardless of the queue's
+	 * type or whether the queue is active.
+	 */
+	dqm->total_queue_count++;
+	pr_debug("Total of %d queues are accountable so far\n",
+			dqm->total_queue_count);
+
 	list_add(&kq->list, &qpd->priv_queue_list);
 	dqm->queue_count++;
 	qpd->is_debug = true;
···
 	dqm->queue_count--;
 	qpd->is_debug = false;
 	execute_queues_cpsch(dqm, false);
+	/*
+	 * Unconditionally decrement this counter, regardless of the queue's
+	 * type.
+	 */
+	dqm->total_queue_count--;
+	pr_debug("Total of %d queues are accountable so far\n",
+			dqm->total_queue_count);
 	mutex_unlock(&dqm->lock);
 }
···
 
 	mutex_lock(&dqm->lock);
 
+	if (dqm->total_queue_count >= max_num_of_queues_per_device) {
+		pr_warn("amdkfd: Can't create new usermode queue because %d queues were already created\n",
+				dqm->total_queue_count);
+		retval = -EPERM;
+		goto out;
+	}
+
 	if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
 		select_sdma_engine_id(q);
 
···
 
 	if (q->properties.type == KFD_QUEUE_TYPE_SDMA)
 		dqm->sdma_queue_count++;
+	/*
+	 * Unconditionally increment this counter, regardless of the queue's
+	 * type or whether the queue is active.
+	 */
+	dqm->total_queue_count++;
+
+	pr_debug("Total of %d queues are accountable so far\n",
+			dqm->total_queue_count);
 
 out:
 	mutex_unlock(&dqm->lock);
···
 	execute_queues_cpsch(dqm, false);
 
 	mqd->uninit_mqd(mqd, q->mqd, q->mqd_mem_obj);
+
+	/*
+	 * Unconditionally decrement this counter, regardless of the queue's
+	 * type
+	 */
+	dqm->total_queue_count--;
+	pr_debug("Total of %d queues are accountable so far\n",
+			dqm->total_queue_count);
 
 	mutex_unlock(&dqm->lock);
 
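The hunks above gate every queue-creation path on a per-device counter that is bumped and dropped unconditionally, whatever the queue's type or active state. A minimal userspace sketch of the same accounting pattern, with a pthread mutex standing in for `dqm->lock` (all names here are invented stand-ins, not driver code):

```c
#include <assert.h>
#include <pthread.h>

/* Stand-in for the max_num_of_queues_per_device module parameter. */
#define MAX_QUEUES_PER_DEVICE 4096

struct device_queue_manager {
	pthread_mutex_t lock;
	int total_queue_count;	/* counts every queue type, active or not */
};

/* Returns 0 on success, -1 (the driver returns -EPERM) at the cap. */
static int create_queue(struct device_queue_manager *dqm)
{
	int ret = 0;

	pthread_mutex_lock(&dqm->lock);
	if (dqm->total_queue_count >= MAX_QUEUES_PER_DEVICE)
		ret = -1;			/* refuse before touching any state */
	else
		dqm->total_queue_count++;	/* unconditional on every create path */
	pthread_mutex_unlock(&dqm->lock);
	return ret;
}

static void destroy_queue(struct device_queue_manager *dqm)
{
	pthread_mutex_lock(&dqm->lock);
	dqm->total_queue_count--;		/* unconditional on every destroy path */
	pthread_mutex_unlock(&dqm->lock);
}
```

Because both kernel-queue and usermode-queue paths touch the same counter under the same lock, the cap holds across queue types; the matching decrement in every destroy path is what keeps the counter from leaking.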
+1
drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.h
···
 	unsigned int		processes_count;
 	unsigned int		queue_count;
 	unsigned int		sdma_queue_count;
+	unsigned int		total_queue_count;
 	unsigned int		next_pipe_to_allocate;
 	unsigned int		*allocated_queues;
 	unsigned int		sdma_bitmap;
+8 -19
drivers/gpu/drm/amd/amdkfd/kfd_module.c
···
 MODULE_PARM_DESC(sched_policy,
 	"Scheduling policy (0 = HWS (Default), 1 = HWS without over-subscription, 2 = Non-HWS (Used for debugging only)");
 
-int max_num_of_processes = KFD_MAX_NUM_OF_PROCESSES_DEFAULT;
-module_param(max_num_of_processes, int, 0444);
-MODULE_PARM_DESC(max_num_of_processes,
-	"Kernel cmdline parameter that defines the amdkfd maximum number of supported processes");
-
-int max_num_of_queues_per_process = KFD_MAX_NUM_OF_QUEUES_PER_PROCESS_DEFAULT;
-module_param(max_num_of_queues_per_process, int, 0444);
-MODULE_PARM_DESC(max_num_of_queues_per_process,
-	"Kernel cmdline parameter that defines the amdkfd maximum number of supported queues per process");
+int max_num_of_queues_per_device = KFD_MAX_NUM_OF_QUEUES_PER_DEVICE_DEFAULT;
+module_param(max_num_of_queues_per_device, int, 0444);
+MODULE_PARM_DESC(max_num_of_queues_per_device,
+	"Maximum number of supported queues per device (1 = Minimum, 4096 = default)");
 
 bool kgd2kfd_init(unsigned interface_version,
 		  const struct kfd2kgd_calls *f2g,
···
 	}
 
 	/* Verify module parameters */
-	if ((max_num_of_processes < 0) ||
-	    (max_num_of_processes > KFD_MAX_NUM_OF_PROCESSES)) {
-		pr_err("kfd: max_num_of_processes must be between 0 to KFD_MAX_NUM_OF_PROCESSES\n");
-		return -1;
-	}
-
-	if ((max_num_of_queues_per_process < 0) ||
-	    (max_num_of_queues_per_process >
-		KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)) {
-		pr_err("kfd: max_num_of_queues_per_process must be between 0 to KFD_MAX_NUM_OF_QUEUES_PER_PROCESS\n");
+	if ((max_num_of_queues_per_device < 0) ||
+	    (max_num_of_queues_per_device >
+		KFD_MAX_NUM_OF_QUEUES_PER_DEVICE)) {
+		pr_err("kfd: max_num_of_queues_per_device must be between 0 to KFD_MAX_NUM_OF_QUEUES_PER_DEVICE\n");
 		return -1;
 	}
 
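The kfd_module.c hunk collapses two per-process parameters into one per-device parameter and range-checks it at module init, rejecting the load rather than silently clamping. A standalone sketch of that check (the `KFD_MAX_*` values are copied from the kfd_priv.h hunk below; the function name is invented):

```c
#include <assert.h>

#define KFD_MAX_NUM_OF_PROCESSES 512
#define KFD_MAX_NUM_OF_QUEUES_PER_PROCESS 1024
#define KFD_MAX_NUM_OF_QUEUES_PER_DEVICE \
	(KFD_MAX_NUM_OF_PROCESSES * KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)

/* Mirrors the init-time range check: out-of-range values make the
 * driver's init fail (it returns -1) instead of being clamped. */
static int validate_queues_per_device(int value)
{
	if ((value < 0) || (value > KFD_MAX_NUM_OF_QUEUES_PER_DEVICE))
		return -1;
	return 0;
}
```

The upper bound is the worst case of every process opening its full quota: 512 * 1024 = 524288 queues.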
+1 -1
drivers/gpu/drm/amd/amdkfd/kfd_pasid.c
···
 
 int kfd_pasid_init(void)
 {
-	pasid_limit = max_num_of_processes;
+	pasid_limit = KFD_MAX_NUM_OF_PROCESSES;
 
 	pasid_bitmap = kcalloc(BITS_TO_LONGS(pasid_limit), sizeof(long), GFP_KERNEL);
 	if (!pasid_bitmap)
+8 -9
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
···
 #define kfd_alloc_struct(ptr_to_struct)	\
 	((typeof(ptr_to_struct)) kzalloc(sizeof(*ptr_to_struct), GFP_KERNEL))
 
-/* Kernel module parameter to specify maximum number of supported processes */
-extern int max_num_of_processes;
-
-#define KFD_MAX_NUM_OF_PROCESSES_DEFAULT 32
 #define KFD_MAX_NUM_OF_PROCESSES 512
+#define KFD_MAX_NUM_OF_QUEUES_PER_PROCESS 1024
 
 /*
- * Kernel module parameter to specify maximum number of supported queues
- * per process
+ * Kernel module parameter to specify maximum number of supported queues per
+ * device
  */
-extern int max_num_of_queues_per_process;
+extern int max_num_of_queues_per_device;
 
-#define KFD_MAX_NUM_OF_QUEUES_PER_PROCESS_DEFAULT 128
-#define KFD_MAX_NUM_OF_QUEUES_PER_PROCESS 1024
+#define KFD_MAX_NUM_OF_QUEUES_PER_DEVICE_DEFAULT 4096
+#define KFD_MAX_NUM_OF_QUEUES_PER_DEVICE		\
+	(KFD_MAX_NUM_OF_PROCESSES *			\
+			KFD_MAX_NUM_OF_QUEUES_PER_PROCESS)
 
 #define KFD_KERNEL_QUEUE_SIZE 2048
 
+8 -4
drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
···
 	pr_debug("kfd: in %s\n", __func__);
 
 	found = find_first_zero_bit(pqm->queue_slot_bitmap,
-			max_num_of_queues_per_process);
+			KFD_MAX_NUM_OF_QUEUES_PER_PROCESS);
 
 	pr_debug("kfd: the new slot id %lu\n", found);
 
-	if (found >= max_num_of_queues_per_process) {
+	if (found >= KFD_MAX_NUM_OF_QUEUES_PER_PROCESS) {
 		pr_info("amdkfd: Can not open more queues for process with pasid %d\n",
 				pqm->process->pasid);
 		return -ENOMEM;
···
 
 	INIT_LIST_HEAD(&pqm->queues);
 	pqm->queue_slot_bitmap =
-			kzalloc(DIV_ROUND_UP(max_num_of_queues_per_process,
+			kzalloc(DIV_ROUND_UP(KFD_MAX_NUM_OF_QUEUES_PER_PROCESS,
 					BITS_PER_BYTE), GFP_KERNEL);
 	if (pqm->queue_slot_bitmap == NULL)
 		return -ENOMEM;
···
 		pqn->kq = NULL;
 		retval = dev->dqm->ops.create_queue(dev->dqm, q, &pdd->qpd,
 						&q->properties.vmid);
+		pr_debug("DQM returned %d for create_queue\n", retval);
 		print_queue(q);
 		break;
 	case KFD_QUEUE_TYPE_DIQ:
···
 	}
 
 	if (retval != 0) {
-		pr_err("kfd: error dqm create queue\n");
+		pr_debug("Error dqm create queue\n");
 		goto err_create_queue;
 	}
 
···
 err_create_queue:
 	kfree(pqn);
 err_allocate_pqn:
+	/* check if queues list is empty unregister process from device */
 	clear_bit(*qid, pqm->queue_slot_bitmap);
+	if (list_empty(&pqm->queues))
+		dev->dqm->ops.unregister_process(dev->dqm, &pdd->qpd);
 	return retval;
 }
 
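The per-process slot ids above come from `find_first_zero_bit()` over `queue_slot_bitmap`; the fix only swaps the variable limit for the compile-time maximum. A userspace analogue of that bitmap slot allocator (sizes and names invented for the sketch; the driver returns -ENOMEM where this returns -1):

```c
#include <assert.h>
#include <limits.h>

#define MAX_QUEUES_PER_PROCESS 1024
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_WORDS \
	((MAX_QUEUES_PER_PROCESS + BITS_PER_WORD - 1) / BITS_PER_WORD)

/* find_first_zero_bit() + set_bit() rolled into one call: returns the
 * first free slot id, or -1 when every slot is taken. */
static int acquire_slot(unsigned long *bitmap)
{
	for (unsigned int i = 0; i < MAX_QUEUES_PER_PROCESS; i++) {
		unsigned long *word = &bitmap[i / BITS_PER_WORD];
		unsigned long mask = 1UL << (i % BITS_PER_WORD);

		if (!(*word & mask)) {
			*word |= mask;	/* claim the slot */
			return (int)i;
		}
	}
	return -1;
}

static void release_slot(unsigned long *bitmap, int slot)
{
	bitmap[slot / BITS_PER_WORD] &= ~(1UL << (slot % BITS_PER_WORD));
}
```

Freed slot ids are reused lowest-first, which is exactly why the driver can print "the new slot id" right after the search.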
+42 -10
drivers/gpu/drm/i2c/tda998x_drv.c
···
 struct tda998x_priv {
 	struct i2c_client *cec;
 	struct i2c_client *hdmi;
+	struct mutex mutex;
+	struct delayed_work dwork;
 	uint16_t rev;
 	uint8_t current_page;
 	int dpms;
···
 	uint8_t addr = REG2ADDR(reg);
 	int ret;
 
+	mutex_lock(&priv->mutex);
 	ret = set_page(priv, reg);
 	if (ret < 0)
-		return ret;
+		goto out;
 
 	ret = i2c_master_send(client, &addr, sizeof(addr));
 	if (ret < 0)
···
 	if (ret < 0)
 		goto fail;
 
-	return ret;
+	goto out;
 
 fail:
 	dev_err(&client->dev, "Error %d reading from 0x%x\n", ret, reg);
+out:
+	mutex_unlock(&priv->mutex);
 	return ret;
 }
 
···
 	buf[0] = REG2ADDR(reg);
 	memcpy(&buf[1], p, cnt);
 
+	mutex_lock(&priv->mutex);
 	ret = set_page(priv, reg);
 	if (ret < 0)
-		return;
+		goto out;
 
 	ret = i2c_master_send(client, buf, cnt + 1);
 	if (ret < 0)
 		dev_err(&client->dev, "Error %d writing to 0x%x\n", ret, reg);
+out:
+	mutex_unlock(&priv->mutex);
 }
 
 static int
···
 	uint8_t buf[] = {REG2ADDR(reg), val};
 	int ret;
 
+	mutex_lock(&priv->mutex);
 	ret = set_page(priv, reg);
 	if (ret < 0)
-		return;
+		goto out;
 
 	ret = i2c_master_send(client, buf, sizeof(buf));
 	if (ret < 0)
 		dev_err(&client->dev, "Error %d writing to 0x%x\n", ret, reg);
+out:
+	mutex_unlock(&priv->mutex);
 }
 
 static void
···
 	uint8_t buf[] = {REG2ADDR(reg), val >> 8, val};
 	int ret;
 
+	mutex_lock(&priv->mutex);
 	ret = set_page(priv, reg);
 	if (ret < 0)
-		return;
+		goto out;
 
 	ret = i2c_master_send(client, buf, sizeof(buf));
 	if (ret < 0)
 		dev_err(&client->dev, "Error %d writing to 0x%x\n", ret, reg);
+out:
+	mutex_unlock(&priv->mutex);
 }
 
 static void
···
 		reg_write(priv, REG_MUX_VP_VIP_OUT, 0x24);
 }
 
+/* handle HDMI connect/disconnect */
+static void tda998x_hpd(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct tda998x_priv *priv =
+			container_of(dwork, struct tda998x_priv, dwork);
+
+	if (priv->encoder && priv->encoder->dev)
+		drm_kms_helper_hotplug_event(priv->encoder->dev);
+}
+
 /*
  * only 2 interrupts may occur: screen plug/unplug and EDID read
  */
···
 		priv->wq_edid_wait = 0;
 		wake_up(&priv->wq_edid);
 	} else if (cec != 0) {			/* HPD change */
-		if (priv->encoder && priv->encoder->dev)
-			drm_helper_hpd_irq_event(priv->encoder->dev);
+		schedule_delayed_work(&priv->dwork, HZ/10);
 	}
 	return IRQ_HANDLED;
 }
···
 	/* disable all IRQs and free the IRQ handler */
 	cec_write(priv, REG_CEC_RXSHPDINTENA, 0);
 	reg_clear(priv, REG_INT_FLAGS_2, INT_FLAGS_2_EDID_BLK_RD);
-	if (priv->hdmi->irq)
+	if (priv->hdmi->irq) {
 		free_irq(priv->hdmi->irq, priv);
+		cancel_delayed_work_sync(&priv->dwork);
+	}
 
 	i2c_unregister_device(priv->cec);
 }
···
 	struct device_node *np = client->dev.of_node;
 	u32 video;
 	int rev_lo, rev_hi, ret;
+	unsigned short cec_addr;
 
 	priv->vip_cntrl_0 = VIP_CNTRL_0_SWAP_A(2) | VIP_CNTRL_0_SWAP_B(3);
 	priv->vip_cntrl_1 = VIP_CNTRL_1_SWAP_C(0) | VIP_CNTRL_1_SWAP_D(1);
···
 
 	priv->current_page = 0xff;
 	priv->hdmi = client;
-	priv->cec = i2c_new_dummy(client->adapter, 0x34);
+	/* CEC I2C address bound to TDA998x I2C addr by configuration pins */
+	cec_addr = 0x34 + (client->addr & 0x03);
+	priv->cec = i2c_new_dummy(client->adapter, cec_addr);
 	if (!priv->cec)
 		return -ENODEV;
 
 	priv->dpms = DRM_MODE_DPMS_OFF;
+
+	mutex_init(&priv->mutex);	/* protect the page access */
 
 	/* wake up the device: */
 	cec_write(priv, REG_CEC_ENAMODS,
···
 	if (client->irq) {
 		int irqf_trigger;
 
-		/* init read EDID waitqueue */
+		/* init read EDID waitqueue and HDP work */
 		init_waitqueue_head(&priv->wq_edid);
+		INIT_DELAYED_WORK(&priv->dwork, tda998x_hpd);
 
 		/* clear pending interrupts */
 		reg_read(priv, REG_INT_FLAGS_0);
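The core of the tda998x locking change is that the chip's register space is paged: every access is really a `set_page` plus a transfer, and two contexts interleaving those steps would read or write the wrong page. A toy model of why the pair must be serialized under `priv->mutex` (all names and sizes invented; a pthread mutex stands in for the kernel mutex):

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define NUM_PAGES 4
#define REGS_PER_PAGE 256

struct tda_model {
	pthread_mutex_t mutex;	/* serializes set_page + transfer, as in the patch */
	int current_page;
	uint8_t regs[NUM_PAGES][REGS_PER_PAGE];
};

static void set_page(struct tda_model *p, int page)
{
	p->current_page = page;
}

static void reg_write(struct tda_model *p, int page, int addr, uint8_t val)
{
	pthread_mutex_lock(&p->mutex);
	set_page(p, page);			/* step 1: select the page */
	p->regs[p->current_page][addr] = val;	/* step 2: the transfer */
	pthread_mutex_unlock(&p->mutex);
}

static uint8_t reg_read(struct tda_model *p, int page, int addr)
{
	uint8_t v;

	pthread_mutex_lock(&p->mutex);
	set_page(p, page);
	v = p->regs[p->current_page][addr];
	pthread_mutex_unlock(&p->mutex);
	return v;
}
```

The same reasoning drives the `goto out` rewrites in the diff: every early return now has to pass through the unlock.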
-1
drivers/gpu/drm/radeon/cik_sdma.c
···
 	for (; ndw > 0; ndw -= 2, --count, pe += 8) {
 		if (flags & R600_PTE_SYSTEM) {
 			value = radeon_vm_map_gart(rdev, addr);
-			value &= 0xFFFFFFFFFFFFF000ULL;
 		} else if (flags & R600_PTE_VALID) {
 			value = addr;
 		} else {
-1
drivers/gpu/drm/radeon/ni_dma.c
···
 	for (; ndw > 0; ndw -= 2, --count, pe += 8) {
 		if (flags & R600_PTE_SYSTEM) {
 			value = radeon_vm_map_gart(rdev, addr);
-			value &= 0xFFFFFFFFFFFFF000ULL;
 		} else if (flags & R600_PTE_VALID) {
 			value = addr;
 		} else {
+8 -2
drivers/gpu/drm/radeon/r100.c
···
 		return r;
 	rdev->gart.table_size = rdev->gart.num_gpu_pages * 4;
 	rdev->asic->gart.tlb_flush = &r100_pci_gart_tlb_flush;
+	rdev->asic->gart.get_page_entry = &r100_pci_gart_get_page_entry;
 	rdev->asic->gart.set_page = &r100_pci_gart_set_page;
 	return radeon_gart_table_ram_alloc(rdev);
 }
···
 	WREG32(RADEON_AIC_HI_ADDR, 0);
 }
 
+uint64_t r100_pci_gart_get_page_entry(uint64_t addr, uint32_t flags)
+{
+	return addr;
+}
+
 void r100_pci_gart_set_page(struct radeon_device *rdev, unsigned i,
-			    uint64_t addr, uint32_t flags)
+			    uint64_t entry)
 {
 	u32 *gtt = rdev->gart.ptr;
-	gtt[i] = cpu_to_le32(lower_32_bits(addr));
+	gtt[i] = cpu_to_le32(lower_32_bits(entry));
 }
 
 void r100_pci_gart_fini(struct radeon_device *rdev)
+11 -5
drivers/gpu/drm/radeon/r300.c
···
 #define R300_PTE_WRITEABLE (1 << 2)
 #define R300_PTE_READABLE (1 << 3)
 
-void rv370_pcie_gart_set_page(struct radeon_device *rdev, unsigned i,
-			      uint64_t addr, uint32_t flags)
+uint64_t rv370_pcie_gart_get_page_entry(uint64_t addr, uint32_t flags)
 {
-	void __iomem *ptr = rdev->gart.ptr;
-
 	addr = (lower_32_bits(addr) >> 8) |
 		((upper_32_bits(addr) & 0xff) << 24);
 	if (flags & RADEON_GART_PAGE_READ)
···
 		addr |= R300_PTE_WRITEABLE;
 	if (!(flags & RADEON_GART_PAGE_SNOOP))
 		addr |= R300_PTE_UNSNOOPED;
+	return addr;
+}
+
+void rv370_pcie_gart_set_page(struct radeon_device *rdev, unsigned i,
+			      uint64_t entry)
+{
+	void __iomem *ptr = rdev->gart.ptr;
+
 	/* on x86 we want this to be CPU endian, on powerpc
 	 * on powerpc without HW swappers, it'll get swapped on way
 	 * into VRAM - so no need for cpu_to_le32 on VRAM tables */
-	writel(addr, ((void __iomem *)ptr) + (i * 4));
+	writel(entry, ((void __iomem *)ptr) + (i * 4));
 }
 
 int rv370_pcie_gart_init(struct radeon_device *rdev)
···
 		DRM_ERROR("Failed to register debugfs file for PCIE gart !\n");
 	rdev->gart.table_size = rdev->gart.num_gpu_pages * 4;
 	rdev->asic->gart.tlb_flush = &rv370_pcie_gart_tlb_flush;
+	rdev->asic->gart.get_page_entry = &rv370_pcie_gart_get_page_entry;
 	rdev->asic->gart.set_page = &rv370_pcie_gart_set_page;
 	return radeon_gart_table_vram_alloc(rdev);
 }
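The r300.c hunk splits the old `set_page` into a pure encoding step and a pure write step: `get_page_entry` packs the 40-bit DMA address as addr[31:8] with the top byte rotated into bits 31:24, then ORs in the R300 permission bits. A standalone version of that encoding, compilable outside the kernel (`lower_32_bits`/`upper_32_bits` expanded by hand; flag values mirror the diff):

```c
#include <assert.h>
#include <stdint.h>

#define RADEON_GART_PAGE_READ  (1 << 0)
#define RADEON_GART_PAGE_WRITE (1 << 1)
#define RADEON_GART_PAGE_SNOOP (1 << 2)

#define R300_PTE_UNSNOOPED (1 << 0)
#define R300_PTE_WRITEABLE (1 << 2)
#define R300_PTE_READABLE  (1 << 3)

static uint32_t pcie_gart_page_entry(uint64_t addr, uint32_t flags)
{
	/* addr[31:8] in the low bits, addr[39:32] shifted into 31:24 */
	uint32_t entry = (uint32_t)((addr & 0xffffffffULL) >> 8) |
			 (uint32_t)(((addr >> 32) & 0xff) << 24);

	if (flags & RADEON_GART_PAGE_READ)
		entry |= R300_PTE_READABLE;
	if (flags & RADEON_GART_PAGE_WRITE)
		entry |= R300_PTE_WRITEABLE;
	if (!(flags & RADEON_GART_PAGE_SNOOP))	/* unsnooped is the default */
		entry |= R300_PTE_UNSNOOPED;
	return entry;
}
```

Since pages are 4 KiB-aligned, the low 12 address bits are zero, so the low 8 bits of the shifted address never collide with the three PTE flag bits.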
+6 -3
drivers/gpu/drm/radeon/radeon.h
···
  * Dummy page
  */
 struct radeon_dummy_page {
+	uint64_t	entry;
 	struct page	*page;
 	dma_addr_t	addr;
 };
···
 	unsigned			num_cpu_pages;
 	unsigned			table_size;
 	struct page			**pages;
-	dma_addr_t			*pages_addr;
+	uint64_t			*pages_entry;
 	bool				ready;
 };
···
 	/* gart */
 	struct {
 		void (*tlb_flush)(struct radeon_device *rdev);
+		uint64_t (*get_page_entry)(uint64_t addr, uint32_t flags);
 		void (*set_page)(struct radeon_device *rdev, unsigned i,
-				 uint64_t addr, uint32_t flags);
+				 uint64_t entry);
 	} gart;
 	struct {
 		int (*init)(struct radeon_device *rdev);
···
 #define radeon_vga_set_state(rdev, state) (rdev)->asic->vga_set_state((rdev), (state))
 #define radeon_asic_reset(rdev) (rdev)->asic->asic_reset((rdev))
 #define radeon_gart_tlb_flush(rdev) (rdev)->asic->gart.tlb_flush((rdev))
-#define radeon_gart_set_page(rdev, i, p, f) (rdev)->asic->gart.set_page((rdev), (i), (p), (f))
+#define radeon_gart_get_page_entry(a, f) (rdev)->asic->gart.get_page_entry((a), (f))
+#define radeon_gart_set_page(rdev, i, e) (rdev)->asic->gart.set_page((rdev), (i), (e))
 #define radeon_asic_vm_init(rdev) (rdev)->asic->vm.init((rdev))
 #define radeon_asic_vm_fini(rdev) (rdev)->asic->vm.fini((rdev))
 #define radeon_asic_vm_copy_pages(rdev, ib, pe, src, count) ((rdev)->asic->vm.copy_pages((rdev), (ib), (pe), (src), (count)))
+24
drivers/gpu/drm/radeon/radeon_asic.c
···
 		DRM_INFO("Forcing AGP to PCIE mode\n");
 		rdev->flags |= RADEON_IS_PCIE;
 		rdev->asic->gart.tlb_flush = &rv370_pcie_gart_tlb_flush;
+		rdev->asic->gart.get_page_entry = &rv370_pcie_gart_get_page_entry;
 		rdev->asic->gart.set_page = &rv370_pcie_gart_set_page;
 	} else {
 		DRM_INFO("Forcing AGP to PCI mode\n");
 		rdev->flags |= RADEON_IS_PCI;
 		rdev->asic->gart.tlb_flush = &r100_pci_gart_tlb_flush;
+		rdev->asic->gart.get_page_entry = &r100_pci_gart_get_page_entry;
 		rdev->asic->gart.set_page = &r100_pci_gart_set_page;
 	}
 	rdev->mc.gtt_size = radeon_gart_size * 1024 * 1024;
···
 	.mc_wait_for_idle = &r100_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &r100_pci_gart_tlb_flush,
+		.get_page_entry = &r100_pci_gart_get_page_entry,
 		.set_page = &r100_pci_gart_set_page,
 	},
 	.ring = {
···
 	.mc_wait_for_idle = &r100_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &r100_pci_gart_tlb_flush,
+		.get_page_entry = &r100_pci_gart_get_page_entry,
 		.set_page = &r100_pci_gart_set_page,
 	},
 	.ring = {
···
 	.mc_wait_for_idle = &r300_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &r100_pci_gart_tlb_flush,
+		.get_page_entry = &r100_pci_gart_get_page_entry,
 		.set_page = &r100_pci_gart_set_page,
 	},
 	.ring = {
···
 	.mc_wait_for_idle = &r300_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &rv370_pcie_gart_tlb_flush,
+		.get_page_entry = &rv370_pcie_gart_get_page_entry,
 		.set_page = &rv370_pcie_gart_set_page,
 	},
 	.ring = {
···
 	.mc_wait_for_idle = &r300_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &rv370_pcie_gart_tlb_flush,
+		.get_page_entry = &rv370_pcie_gart_get_page_entry,
 		.set_page = &rv370_pcie_gart_set_page,
 	},
 	.ring = {
···
 	.mc_wait_for_idle = &rs400_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &rs400_gart_tlb_flush,
+		.get_page_entry = &rs400_gart_get_page_entry,
 		.set_page = &rs400_gart_set_page,
 	},
 	.ring = {
···
 	.mc_wait_for_idle = &rs600_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &rs600_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.ring = {
···
 	.mc_wait_for_idle = &rs690_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &rs400_gart_tlb_flush,
+		.get_page_entry = &rs400_gart_get_page_entry,
 		.set_page = &rs400_gart_set_page,
 	},
 	.ring = {
···
 	.mc_wait_for_idle = &rv515_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &rv370_pcie_gart_tlb_flush,
+		.get_page_entry = &rv370_pcie_gart_get_page_entry,
 		.set_page = &rv370_pcie_gart_set_page,
 	},
 	.ring = {
···
 	.mc_wait_for_idle = &r520_mc_wait_for_idle,
 	.gart = {
 		.tlb_flush = &rv370_pcie_gart_tlb_flush,
+		.get_page_entry = &rv370_pcie_gart_get_page_entry,
 		.set_page = &rv370_pcie_gart_set_page,
 	},
 	.ring = {
···
 	.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &r600_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.ring = {
···
 	.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &r600_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.ring = {
···
 	.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &r600_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.ring = {
···
 	.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &r600_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.ring = {
···
 	.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &evergreen_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.ring = {
···
 	.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &evergreen_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.ring = {
···
 	.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &evergreen_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.ring = {
···
 	.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &cayman_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.vm = {
···
 	.get_gpu_clock_counter = &r600_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &cayman_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.vm = {
···
 	.get_gpu_clock_counter = &si_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &si_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.vm = {
···
 	.get_gpu_clock_counter = &cik_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &cik_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.vm = {
···
 	.get_gpu_clock_counter = &cik_get_gpu_clock_counter,
 	.gart = {
 		.tlb_flush = &cik_pcie_gart_tlb_flush,
+		.get_page_entry = &rs600_gart_get_page_entry,
 		.set_page = &rs600_gart_set_page,
 	},
 	.vm = {
+8 -4
drivers/gpu/drm/radeon/radeon_asic.h
···
 int r100_asic_reset(struct radeon_device *rdev);
 u32 r100_get_vblank_counter(struct radeon_device *rdev, int crtc);
 void r100_pci_gart_tlb_flush(struct radeon_device *rdev);
+uint64_t r100_pci_gart_get_page_entry(uint64_t addr, uint32_t flags);
 void r100_pci_gart_set_page(struct radeon_device *rdev, unsigned i,
-			    uint64_t addr, uint32_t flags);
+			    uint64_t entry);
 void r100_ring_start(struct radeon_device *rdev, struct radeon_ring *ring);
 int r100_irq_set(struct radeon_device *rdev);
 int r100_irq_process(struct radeon_device *rdev);
···
 			  struct radeon_fence *fence);
 extern int r300_cs_parse(struct radeon_cs_parser *p);
 extern void rv370_pcie_gart_tlb_flush(struct radeon_device *rdev);
+extern uint64_t rv370_pcie_gart_get_page_entry(uint64_t addr, uint32_t flags);
 extern void rv370_pcie_gart_set_page(struct radeon_device *rdev, unsigned i,
-				     uint64_t addr, uint32_t flags);
+				     uint64_t entry);
 extern void rv370_set_pcie_lanes(struct radeon_device *rdev, int lanes);
 extern int rv370_get_pcie_lanes(struct radeon_device *rdev);
 extern void r300_set_reg_safe(struct radeon_device *rdev);
···
 extern int rs400_suspend(struct radeon_device *rdev);
 extern int rs400_resume(struct radeon_device *rdev);
 void rs400_gart_tlb_flush(struct radeon_device *rdev);
+uint64_t rs400_gart_get_page_entry(uint64_t addr, uint32_t flags);
 void rs400_gart_set_page(struct radeon_device *rdev, unsigned i,
-			 uint64_t addr, uint32_t flags);
+			 uint64_t entry);
 uint32_t rs400_mc_rreg(struct radeon_device *rdev, uint32_t reg);
 void rs400_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v);
 int rs400_gart_init(struct radeon_device *rdev);
···
 void rs600_irq_disable(struct radeon_device *rdev);
 u32 rs600_get_vblank_counter(struct radeon_device *rdev, int crtc);
 void rs600_gart_tlb_flush(struct radeon_device *rdev);
+uint64_t rs600_gart_get_page_entry(uint64_t addr, uint32_t flags);
 void rs600_gart_set_page(struct radeon_device *rdev, unsigned i,
-			 uint64_t addr, uint32_t flags);
+			 uint64_t entry);
 uint32_t rs600_mc_rreg(struct radeon_device *rdev, uint32_t reg);
 void rs600_mc_wreg(struct radeon_device *rdev, uint32_t reg, uint32_t v);
 void rs600_bandwidth_update(struct radeon_device *rdev);
+2
drivers/gpu/drm/radeon/radeon_device.c
···
 		rdev->dummy_page.page = NULL;
 		return -ENOMEM;
 	}
+	rdev->dummy_page.entry = radeon_gart_get_page_entry(rdev->dummy_page.addr,
+							    RADEON_GART_PAGE_DUMMY);
 	return 0;
 }
 
+32 -22
drivers/gpu/drm/radeon/radeon_gart.c
···
 	radeon_bo_unpin(rdev->gart.robj);
 	radeon_bo_unreserve(rdev->gart.robj);
 	rdev->gart.table_addr = gpu_addr;
+
+	if (!r) {
+		int i;
+
+		/* We might have dropped some GART table updates while it wasn't
+		 * mapped, restore all entries
+		 */
+		for (i = 0; i < rdev->gart.num_gpu_pages; i++)
+			radeon_gart_set_page(rdev, i, rdev->gart.pages_entry[i]);
+		mb();
+		radeon_gart_tlb_flush(rdev);
+	}
+
 	return r;
 }
 
···
 	unsigned t;
 	unsigned p;
 	int i, j;
-	u64 page_base;
 
 	if (!rdev->gart.ready) {
 		WARN(1, "trying to unbind memory from uninitialized GART !\n");
···
 	for (i = 0; i < pages; i++, p++) {
 		if (rdev->gart.pages[p]) {
 			rdev->gart.pages[p] = NULL;
-			rdev->gart.pages_addr[p] = rdev->dummy_page.addr;
-			page_base = rdev->gart.pages_addr[p];
 			for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
+				rdev->gart.pages_entry[t] = rdev->dummy_page.entry;
 				if (rdev->gart.ptr) {
-					radeon_gart_set_page(rdev, t, page_base,
-							     RADEON_GART_PAGE_DUMMY);
+					radeon_gart_set_page(rdev, t,
+							     rdev->dummy_page.entry);
 				}
-				page_base += RADEON_GPU_PAGE_SIZE;
 			}
 		}
 	}
···
 {
 	unsigned t;
 	unsigned p;
-	uint64_t page_base;
+	uint64_t page_base, page_entry;
 	int i, j;
 
 	if (!rdev->gart.ready) {
···
 	p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
 
 	for (i = 0; i < pages; i++, p++) {
-		rdev->gart.pages_addr[p] = dma_addr[i];
 		rdev->gart.pages[p] = pagelist[i];
-		if (rdev->gart.ptr) {
-			page_base = rdev->gart.pages_addr[p];
-			for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
-				radeon_gart_set_page(rdev, t, page_base, flags);
-				page_base += RADEON_GPU_PAGE_SIZE;
+		page_base = dma_addr[i];
+		for (j = 0; j < (PAGE_SIZE / RADEON_GPU_PAGE_SIZE); j++, t++) {
+			page_entry = radeon_gart_get_page_entry(page_base, flags);
+			rdev->gart.pages_entry[t] = page_entry;
+			if (rdev->gart.ptr) {
+				radeon_gart_set_page(rdev, t, page_entry);
 			}
+			page_base += RADEON_GPU_PAGE_SIZE;
 		}
 	}
 	mb();
···
 		radeon_gart_fini(rdev);
 		return -ENOMEM;
 	}
-	rdev->gart.pages_addr = vzalloc(sizeof(dma_addr_t) *
-					rdev->gart.num_cpu_pages);
-	if (rdev->gart.pages_addr == NULL) {
+	rdev->gart.pages_entry = vmalloc(sizeof(uint64_t) *
+					 rdev->gart.num_gpu_pages);
+	if (rdev->gart.pages_entry == NULL) {
 		radeon_gart_fini(rdev);
 		return -ENOMEM;
 	}
 	/* set GART entry to point to the dummy page by default */
-	for (i = 0; i < rdev->gart.num_cpu_pages; i++) {
-		rdev->gart.pages_addr[i] = rdev->dummy_page.addr;
-	}
+	for (i = 0; i < rdev->gart.num_gpu_pages; i++)
+		rdev->gart.pages_entry[i] = rdev->dummy_page.entry;
 	return 0;
 }
 
···
  */
 void radeon_gart_fini(struct radeon_device *rdev)
 {
-	if (rdev->gart.pages && rdev->gart.pages_addr && rdev->gart.ready) {
+	if (rdev->gart.ready) {
 		/* unbind pages */
 		radeon_gart_unbind(rdev, 0, rdev->gart.num_cpu_pages);
 	}
 	rdev->gart.ready = false;
 	vfree(rdev->gart.pages);
-	vfree(rdev->gart.pages_addr);
+	vfree(rdev->gart.pages_entry);
 	rdev->gart.pages = NULL;
-	rdev->gart.pages_addr = NULL;
+	rdev->gart.pages_entry = NULL;
 
 	radeon_dummy_page_fini(rdev);
 }
+1 -1
drivers/gpu/drm/radeon/radeon_kfd.c
···
 static int kgd_init_pipeline(struct kgd_dev *kgd, uint32_t pipe_id,
 				uint32_t hpd_size, uint64_t hpd_gpu_addr)
 {
-	uint32_t mec = (++pipe_id / CIK_PIPE_PER_MEC) + 1;
+	uint32_t mec = (pipe_id / CIK_PIPE_PER_MEC) + 1;
 	uint32_t pipe = (pipe_id % CIK_PIPE_PER_MEC);
 
 	lock_srbm(kgd, mec, pipe, 0, 0);
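The radeon_kfd.c one-liner removes a stray pre-increment: with `++pipe_id` the division *and* the following modulo both saw a shifted id, so the wrong MEC/pipe pair was selected. The intended mapping (assuming `CIK_PIPE_PER_MEC` is 4, as in the CIK code; the `+ 1` exists because MEC numbering here is 1-based):

```c
#include <assert.h>
#include <stdint.h>

#define CIK_PIPE_PER_MEC 4	/* assumed value, matching the CIK code */

static void pipe_to_mec(uint32_t pipe_id, uint32_t *mec, uint32_t *pipe)
{
	*mec = (pipe_id / CIK_PIPE_PER_MEC) + 1;	/* was (++pipe_id / ...) + 1 */
	*pipe = pipe_id % CIK_PIPE_PER_MEC;
}
```

With the buggy `++pipe_id`, pipe_id 3 would have computed mec 2, pipe 0 instead of mec 1, pipe 3, a classic hazard of hiding a side effect inside an initializer expression.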
+2 -4
drivers/gpu/drm/radeon/radeon_vm.c
···
 	uint64_t result;
 
 	/* page table offset */
-	result = rdev->gart.pages_addr[addr >> PAGE_SHIFT];
-
-	/* in case cpu page size != gpu page size*/
-	result |= addr & (~PAGE_MASK);
+	result = rdev->gart.pages_entry[addr >> RADEON_GPU_PAGE_SHIFT];
+	result &= ~RADEON_GPU_PAGE_MASK;
 
 	return result;
 }
+9 -5
drivers/gpu/drm/radeon/rs400.c
···
 #define RS400_PTE_WRITEABLE (1 << 2)
 #define RS400_PTE_READABLE (1 << 3)
 
-void rs400_gart_set_page(struct radeon_device *rdev, unsigned i,
-			 uint64_t addr, uint32_t flags)
+uint64_t rs400_gart_get_page_entry(uint64_t addr, uint32_t flags)
 {
 	uint32_t entry;
-	u32 *gtt = rdev->gart.ptr;
 
 	entry = (lower_32_bits(addr) & PAGE_MASK) |
 		((upper_32_bits(addr) & 0xff) << 4);
···
 		entry |= RS400_PTE_WRITEABLE;
 	if (!(flags & RADEON_GART_PAGE_SNOOP))
 		entry |= RS400_PTE_UNSNOOPED;
-	entry = cpu_to_le32(entry);
-	gtt[i] = entry;
+	return entry;
+}
+
+void rs400_gart_set_page(struct radeon_device *rdev, unsigned i,
+			 uint64_t entry)
+{
+	u32 *gtt = rdev->gart.ptr;
+	gtt[i] = cpu_to_le32(lower_32_bits(entry));
 }
 
 int rs400_mc_wait_for_idle(struct radeon_device *rdev)
+9 -5
drivers/gpu/drm/radeon/rs600.c
···
 	radeon_gart_table_vram_free(rdev);
 }
 
-void rs600_gart_set_page(struct radeon_device *rdev, unsigned i,
-			 uint64_t addr, uint32_t flags)
+uint64_t rs600_gart_get_page_entry(uint64_t addr, uint32_t flags)
 {
-	void __iomem *ptr = (void *)rdev->gart.ptr;
-
 	addr = addr & 0xFFFFFFFFFFFFF000ULL;
 	addr |= R600_PTE_SYSTEM;
 	if (flags & RADEON_GART_PAGE_VALID)
···
 		addr |= R600_PTE_WRITEABLE;
 	if (flags & RADEON_GART_PAGE_SNOOP)
 		addr |= R600_PTE_SNOOPED;
-	writeq(addr, ptr + (i * 8));
+	return addr;
+}
+
+void rs600_gart_set_page(struct radeon_device *rdev, unsigned i,
+			 uint64_t entry)
+{
+	void __iomem *ptr = (void *)rdev->gart.ptr;
+
+	writeq(entry, ptr + (i * 8));
 }
 
 int rs600_irq_set(struct radeon_device *rdev)
-1
drivers/gpu/drm/radeon/si_dma.c
···
	for (; ndw > 0; ndw -= 2, --count, pe += 8) {
		if (flags & R600_PTE_SYSTEM) {
			value = radeon_vm_map_gart(rdev, addr);
-			value &= 0xFFFFFFFFFFFFF000ULL;
		} else if (flags & R600_PTE_VALID) {
			value = addr;
		} else {
+5 -23
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
···
		if (unlikely(ret != 0))
			--dev_priv->num_3d_resources;
	} else if (unhide_svga) {
-		mutex_lock(&dev_priv->hw_mutex);
		vmw_write(dev_priv, SVGA_REG_ENABLE,
			  vmw_read(dev_priv, SVGA_REG_ENABLE) &
			  ~SVGA_REG_ENABLE_HIDE);
-		mutex_unlock(&dev_priv->hw_mutex);
	}

	mutex_unlock(&dev_priv->release_mutex);
···
	mutex_lock(&dev_priv->release_mutex);
	if (unlikely(--dev_priv->num_3d_resources == 0))
		vmw_release_device(dev_priv);
-	else if (hide_svga) {
-		mutex_lock(&dev_priv->hw_mutex);
+	else if (hide_svga)
		vmw_write(dev_priv, SVGA_REG_ENABLE,
			  vmw_read(dev_priv, SVGA_REG_ENABLE) |
			  SVGA_REG_ENABLE_HIDE);
-		mutex_unlock(&dev_priv->hw_mutex);
-	}

	n3d = (int32_t) dev_priv->num_3d_resources;
	mutex_unlock(&dev_priv->release_mutex);
···
	dev_priv->dev = dev;
	dev_priv->vmw_chipset = chipset;
	dev_priv->last_read_seqno = (uint32_t) -100;
-	mutex_init(&dev_priv->hw_mutex);
	mutex_init(&dev_priv->cmdbuf_mutex);
	mutex_init(&dev_priv->release_mutex);
	mutex_init(&dev_priv->binding_mutex);
	rwlock_init(&dev_priv->resource_lock);
	ttm_lock_init(&dev_priv->reservation_sem);
+	spin_lock_init(&dev_priv->hw_lock);
+	spin_lock_init(&dev_priv->waiter_lock);
+	spin_lock_init(&dev_priv->cap_lock);

	for (i = vmw_res_context; i < vmw_res_max; ++i) {
		idr_init(&dev_priv->res_idr[i]);
···

	dev_priv->enable_fb = enable_fbdev;

-	mutex_lock(&dev_priv->hw_mutex);
-
	vmw_write(dev_priv, SVGA_REG_ID, SVGA_ID_2);
	svga_id = vmw_read(dev_priv, SVGA_REG_ID);
	if (svga_id != SVGA_ID_2) {
		ret = -ENOSYS;
		DRM_ERROR("Unsupported SVGA ID 0x%x\n", svga_id);
-		mutex_unlock(&dev_priv->hw_mutex);
		goto out_err0;
	}
···
		dev_priv->prim_bb_mem = dev_priv->vram_size;

	ret = vmw_dma_masks(dev_priv);
-	if (unlikely(ret != 0)) {
-		mutex_unlock(&dev_priv->hw_mutex);
+	if (unlikely(ret != 0))
		goto out_err0;
-	}

	/*
	 * Limit back buffer size to VRAM size.  Remove this once
···
	 */
	if (dev_priv->prim_bb_mem > dev_priv->vram_size)
		dev_priv->prim_bb_mem = dev_priv->vram_size;
-
-	mutex_unlock(&dev_priv->hw_mutex);

	vmw_print_capabilities(dev_priv->capabilities);
···
	if (unlikely(ret != 0))
		return ret;
	vmw_kms_save_vga(dev_priv);
-	mutex_lock(&dev_priv->hw_mutex);
	vmw_write(dev_priv, SVGA_REG_TRACES, 0);
-	mutex_unlock(&dev_priv->hw_mutex);
	}

	if (active) {
···
	if (!dev_priv->enable_fb) {
		vmw_kms_restore_vga(dev_priv);
		vmw_3d_resource_dec(dev_priv, true);
-		mutex_lock(&dev_priv->hw_mutex);
		vmw_write(dev_priv, SVGA_REG_TRACES, 1);
-		mutex_unlock(&dev_priv->hw_mutex);
	}
	return ret;
}
···
		DRM_ERROR("Unable to clean VRAM on master drop.\n");
	vmw_kms_restore_vga(dev_priv);
	vmw_3d_resource_dec(dev_priv, true);
-	mutex_lock(&dev_priv->hw_mutex);
	vmw_write(dev_priv, SVGA_REG_TRACES, 1);
-	mutex_unlock(&dev_priv->hw_mutex);
	}

	dev_priv->active_master = &dev_priv->fbdev_master;
···
	struct drm_device *dev = pci_get_drvdata(pdev);
	struct vmw_private *dev_priv = vmw_priv(dev);

-	mutex_lock(&dev_priv->hw_mutex);
	vmw_write(dev_priv, SVGA_REG_ID, SVGA_ID_2);
	(void) vmw_read(dev_priv, SVGA_REG_ID);
-	mutex_unlock(&dev_priv->hw_mutex);

	/**
	 * Reclaim 3d reference held by fbdev and potentially
+21 -4
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
···
	uint32_t memory_size;
	bool has_gmr;
	bool has_mob;
-	struct mutex hw_mutex;
+	spinlock_t hw_lock;
+	spinlock_t cap_lock;

	/*
	 * VGA registers.
···
	atomic_t marker_seq;
	wait_queue_head_t fence_queue;
	wait_queue_head_t fifo_queue;
-	int fence_queue_waiters; /* Protected by hw_mutex */
-	int goal_queue_waiters; /* Protected by hw_mutex */
+	spinlock_t waiter_lock;
+	int fence_queue_waiters; /* Protected by waiter_lock */
+	int goal_queue_waiters; /* Protected by waiter_lock */
	atomic_t fifo_queue_waiters;
	uint32_t last_read_seqno;
	spinlock_t irq_lock;
···
	return (struct vmw_master *) master->driver_priv;
}

+/*
+ * The locking here is fine-grained, so that it is performed once
+ * for every read- and write operation. This is of course costly, but we
+ * don't perform much register access in the timing critical paths anyway.
+ * Instead we have the extra benefit of being sure that we don't forget
+ * the hw lock around register accesses.
+ */
static inline void vmw_write(struct vmw_private *dev_priv,
			     unsigned int offset, uint32_t value)
{
+	unsigned long irq_flags;
+
+	spin_lock_irqsave(&dev_priv->hw_lock, irq_flags);
	outl(offset, dev_priv->io_start + VMWGFX_INDEX_PORT);
	outl(value, dev_priv->io_start + VMWGFX_VALUE_PORT);
+	spin_unlock_irqrestore(&dev_priv->hw_lock, irq_flags);
}

static inline uint32_t vmw_read(struct vmw_private *dev_priv,
				unsigned int offset)
{
-	uint32_t val;
+	unsigned long irq_flags;
+	u32 val;

+	spin_lock_irqsave(&dev_priv->hw_lock, irq_flags);
	outl(offset, dev_priv->io_start + VMWGFX_INDEX_PORT);
	val = inl(dev_priv->io_start + VMWGFX_VALUE_PORT);
+	spin_unlock_irqrestore(&dev_priv->hw_lock, irq_flags);
+
	return val;
}
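vmw_read()/vmw_write() go through an index/value port pair, so the two port accesses must be atomic with respect to each other; the new hw_lock guards exactly that window. A userspace model of the pattern (a pthread mutex and a plain array stand in for the spinlock and the real I/O ports):

```c
#include <assert.h>
#include <stdint.h>
#include <pthread.h>

static pthread_mutex_t hw_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t index_port;		/* stands in for VMWGFX_INDEX_PORT */
static uint32_t regs[16];		/* stands in for the value port's backing regs */

static void vmw_write_sketch(unsigned offset, uint32_t value)
{
	pthread_mutex_lock(&hw_lock);
	index_port = offset;		/* outl(offset, INDEX_PORT) */
	regs[index_port] = value;	/* outl(value, VALUE_PORT) */
	pthread_mutex_unlock(&hw_lock);
}

static uint32_t vmw_read_sketch(unsigned offset)
{
	uint32_t val;

	pthread_mutex_lock(&hw_lock);
	index_port = offset;		/* outl(offset, INDEX_PORT) */
	val = regs[index_port];		/* inl(VALUE_PORT) */
	pthread_mutex_unlock(&hw_lock);
	return val;
}
```

Without the lock, a second thread could rewrite index_port between the two accesses and the read/write would hit the wrong register; that is the race the old hw_mutex covered and the new spinlock now covers more cheaply.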
+2 -16
drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
···
	struct vmw_private *dev_priv;
	spinlock_t lock;
	struct list_head fence_list;
-	struct work_struct work, ping_work;
+	struct work_struct work;
	u32 user_fence_size;
	u32 fence_size;
	u32 event_fence_action_size;
···
	return "svga";
}

-static void vmw_fence_ping_func(struct work_struct *work)
-{
-	struct vmw_fence_manager *fman =
-		container_of(work, struct vmw_fence_manager, ping_work);
-
-	vmw_fifo_ping_host(fman->dev_priv, SVGA_SYNC_GENERIC);
-}
-
static bool vmw_fence_enable_signaling(struct fence *f)
{
	struct vmw_fence_obj *fence =
···
	if (seqno - fence->base.seqno < VMW_FENCE_WRAP)
		return false;

-	if (mutex_trylock(&dev_priv->hw_mutex)) {
-		vmw_fifo_ping_host_locked(dev_priv, SVGA_SYNC_GENERIC);
-		mutex_unlock(&dev_priv->hw_mutex);
-	} else
-		schedule_work(&fman->ping_work);
+	vmw_fifo_ping_host(dev_priv, SVGA_SYNC_GENERIC);

	return true;
}
···
	INIT_LIST_HEAD(&fman->fence_list);
	INIT_LIST_HEAD(&fman->cleanup_list);
	INIT_WORK(&fman->work, &vmw_fence_work_func);
-	INIT_WORK(&fman->ping_work, &vmw_fence_ping_func);
	fman->fifo_down = true;
	fman->user_fence_size = ttm_round_pot(sizeof(struct vmw_user_fence));
	fman->fence_size = ttm_round_pot(sizeof(struct vmw_fence_obj));
···
	bool lists_empty;

	(void) cancel_work_sync(&fman->work);
-	(void) cancel_work_sync(&fman->ping_work);

	spin_lock_irqsave(&fman->lock, irq_flags);
	lists_empty = list_empty(&fman->fence_list) &&
+15 -21
drivers/gpu/drm/vmwgfx/vmwgfx_fifo.c
···
	if (!dev_priv->has_mob)
		return false;

-	mutex_lock(&dev_priv->hw_mutex);
+	spin_lock(&dev_priv->cap_lock);
	vmw_write(dev_priv, SVGA_REG_DEV_CAP, SVGA3D_DEVCAP_3D);
	result = vmw_read(dev_priv, SVGA_REG_DEV_CAP);
-	mutex_unlock(&dev_priv->hw_mutex);
+	spin_unlock(&dev_priv->cap_lock);

	return (result != 0);
}
···
	DRM_INFO("height %d\n", vmw_read(dev_priv, SVGA_REG_HEIGHT));
	DRM_INFO("bpp %d\n", vmw_read(dev_priv, SVGA_REG_BITS_PER_PIXEL));

-	mutex_lock(&dev_priv->hw_mutex);
	dev_priv->enable_state = vmw_read(dev_priv, SVGA_REG_ENABLE);
	dev_priv->config_done_state = vmw_read(dev_priv, SVGA_REG_CONFIG_DONE);
	dev_priv->traces_state = vmw_read(dev_priv, SVGA_REG_TRACES);
···
	mb();

	vmw_write(dev_priv, SVGA_REG_CONFIG_DONE, 1);
-	mutex_unlock(&dev_priv->hw_mutex);

	max = ioread32(fifo_mem + SVGA_FIFO_MAX);
	min = ioread32(fifo_mem + SVGA_FIFO_MIN);
···
	return vmw_fifo_send_fence(dev_priv, &dummy);
}

-void vmw_fifo_ping_host_locked(struct vmw_private *dev_priv, uint32_t reason)
+void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason)
{
	__le32 __iomem *fifo_mem = dev_priv->mmio_virt;
+	static DEFINE_SPINLOCK(ping_lock);
+	unsigned long irq_flags;

+	/*
+	 * The ping_lock is needed because we don't have an atomic
+	 * test-and-set of the SVGA_FIFO_BUSY register.
+	 */
+	spin_lock_irqsave(&ping_lock, irq_flags);
	if (unlikely(ioread32(fifo_mem + SVGA_FIFO_BUSY) == 0)) {
		iowrite32(1, fifo_mem + SVGA_FIFO_BUSY);
		vmw_write(dev_priv, SVGA_REG_SYNC, reason);
	}
-}
-
-void vmw_fifo_ping_host(struct vmw_private *dev_priv, uint32_t reason)
-{
-	mutex_lock(&dev_priv->hw_mutex);
-
-	vmw_fifo_ping_host_locked(dev_priv, reason);
-
-	mutex_unlock(&dev_priv->hw_mutex);
+	spin_unlock_irqrestore(&ping_lock, irq_flags);
}

void vmw_fifo_release(struct vmw_private *dev_priv, struct vmw_fifo_state *fifo)
{
	__le32 __iomem *fifo_mem = dev_priv->mmio_virt;
-
-	mutex_lock(&dev_priv->hw_mutex);

	vmw_write(dev_priv, SVGA_REG_SYNC, SVGA_SYNC_GENERIC);
	while (vmw_read(dev_priv, SVGA_REG_BUSY) != 0)
···
		vmw_write(dev_priv, SVGA_REG_TRACES,
			  dev_priv->traces_state);

-	mutex_unlock(&dev_priv->hw_mutex);
	vmw_marker_queue_takedown(&fifo->marker_queue);

	if (likely(fifo->static_buffer != NULL)) {
···
		return vmw_fifo_wait_noirq(dev_priv, bytes,
					   interruptible, timeout);

-	mutex_lock(&dev_priv->hw_mutex);
+	spin_lock(&dev_priv->waiter_lock);
	if (atomic_add_return(1, &dev_priv->fifo_queue_waiters) > 0) {
		spin_lock_irqsave(&dev_priv->irq_lock, irq_flags);
		outl(SVGA_IRQFLAG_FIFO_PROGRESS,
···
		vmw_write(dev_priv, SVGA_REG_IRQMASK, dev_priv->irq_mask);
		spin_unlock_irqrestore(&dev_priv->irq_lock, irq_flags);
	}
-	mutex_unlock(&dev_priv->hw_mutex);
+	spin_unlock(&dev_priv->waiter_lock);

	if (interruptible)
		ret = wait_event_interruptible_timeout
···
	else if (likely(ret > 0))
		ret = 0;

-	mutex_lock(&dev_priv->hw_mutex);
+	spin_lock(&dev_priv->waiter_lock);
	if (atomic_dec_and_test(&dev_priv->fifo_queue_waiters)) {
		spin_lock_irqsave(&dev_priv->irq_lock, irq_flags);
		dev_priv->irq_mask &= ~SVGA_IRQFLAG_FIFO_PROGRESS;
		vmw_write(dev_priv, SVGA_REG_IRQMASK, dev_priv->irq_mask);
		spin_unlock_irqrestore(&dev_priv->irq_lock, irq_flags);
	}
-	mutex_unlock(&dev_priv->hw_mutex);
+	spin_unlock(&dev_priv->waiter_lock);

	return ret;
}
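vmw_fifo_ping_host() needs the read-test-write of SVGA_FIFO_BUSY to happen atomically, which is exactly what the new static ping_lock provides. A sketch of why the lock matters (plain variables stand in for the FIFO register and the SYNC write; the counter is added here only to observe the behaviour):

```c
#include <assert.h>
#include <stdint.h>
#include <pthread.h>

static pthread_mutex_t ping_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t fifo_busy;		/* stands in for SVGA_FIFO_BUSY */
static unsigned pings_sent;		/* observation aid, not in the driver */

/* Only the caller that observes BUSY == 0 actually pings the host;
 * the lock makes the test-and-set atomic. */
static void fifo_ping_host(void)
{
	pthread_mutex_lock(&ping_lock);
	if (fifo_busy == 0) {
		fifo_busy = 1;		/* iowrite32(1, SVGA_FIFO_BUSY) */
		pings_sent++;		/* vmw_write(SVGA_REG_SYNC, reason) */
	}
	pthread_mutex_unlock(&ping_lock);
}
```

This also removes the old mutex_trylock/worker fallback in the fence code: because the lock is a spinlock, the ping can now be issued from atomic context directly.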
+4 -4
drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
···
		(pair_offset + max_size * sizeof(SVGA3dCapPair)) / sizeof(u32);
	compat_cap->header.type = SVGA3DCAPS_RECORD_DEVCAPS;

-	mutex_lock(&dev_priv->hw_mutex);
+	spin_lock(&dev_priv->cap_lock);
	for (i = 0; i < max_size; ++i) {
		vmw_write(dev_priv, SVGA_REG_DEV_CAP, i);
		compat_cap->pairs[i][0] = i;
		compat_cap->pairs[i][1] = vmw_read(dev_priv, SVGA_REG_DEV_CAP);
	}
-	mutex_unlock(&dev_priv->hw_mutex);
+	spin_unlock(&dev_priv->cap_lock);

	return 0;
}
···
		if (num > SVGA3D_DEVCAP_MAX)
			num = SVGA3D_DEVCAP_MAX;

-		mutex_lock(&dev_priv->hw_mutex);
+		spin_lock(&dev_priv->cap_lock);
		for (i = 0; i < num; ++i) {
			vmw_write(dev_priv, SVGA_REG_DEV_CAP, i);
			*bounce32++ = vmw_read(dev_priv, SVGA_REG_DEV_CAP);
		}
-		mutex_unlock(&dev_priv->hw_mutex);
+		spin_unlock(&dev_priv->cap_lock);
	} else if (gb_objects) {
		ret = vmw_fill_compat_cap(dev_priv, bounce, size);
		if (unlikely(ret != 0))
+9 -16
drivers/gpu/drm/vmwgfx/vmwgfx_irq.c
···

static bool vmw_fifo_idle(struct vmw_private *dev_priv, uint32_t seqno)
{
-	uint32_t busy;

-	mutex_lock(&dev_priv->hw_mutex);
-	busy = vmw_read(dev_priv, SVGA_REG_BUSY);
-	mutex_unlock(&dev_priv->hw_mutex);
-
-	return (busy == 0);
+	return (vmw_read(dev_priv, SVGA_REG_BUSY) == 0);
}

void vmw_update_seqno(struct vmw_private *dev_priv,
···

void vmw_seqno_waiter_add(struct vmw_private *dev_priv)
{
-	mutex_lock(&dev_priv->hw_mutex);
+	spin_lock(&dev_priv->waiter_lock);
	if (dev_priv->fence_queue_waiters++ == 0) {
		unsigned long irq_flags;
···
		vmw_write(dev_priv, SVGA_REG_IRQMASK, dev_priv->irq_mask);
		spin_unlock_irqrestore(&dev_priv->irq_lock, irq_flags);
	}
-	mutex_unlock(&dev_priv->hw_mutex);
+	spin_unlock(&dev_priv->waiter_lock);
}

void vmw_seqno_waiter_remove(struct vmw_private *dev_priv)
{
-	mutex_lock(&dev_priv->hw_mutex);
+	spin_lock(&dev_priv->waiter_lock);
	if (--dev_priv->fence_queue_waiters == 0) {
		unsigned long irq_flags;
···
		vmw_write(dev_priv, SVGA_REG_IRQMASK, dev_priv->irq_mask);
		spin_unlock_irqrestore(&dev_priv->irq_lock, irq_flags);
	}
-	mutex_unlock(&dev_priv->hw_mutex);
+	spin_unlock(&dev_priv->waiter_lock);
}


void vmw_goal_waiter_add(struct vmw_private *dev_priv)
{
-	mutex_lock(&dev_priv->hw_mutex);
+	spin_lock(&dev_priv->waiter_lock);
	if (dev_priv->goal_queue_waiters++ == 0) {
		unsigned long irq_flags;
···
		vmw_write(dev_priv, SVGA_REG_IRQMASK, dev_priv->irq_mask);
		spin_unlock_irqrestore(&dev_priv->irq_lock, irq_flags);
	}
-	mutex_unlock(&dev_priv->hw_mutex);
+	spin_unlock(&dev_priv->waiter_lock);
}

void vmw_goal_waiter_remove(struct vmw_private *dev_priv)
{
-	mutex_lock(&dev_priv->hw_mutex);
+	spin_lock(&dev_priv->waiter_lock);
	if (--dev_priv->goal_queue_waiters == 0) {
		unsigned long irq_flags;
···
		vmw_write(dev_priv, SVGA_REG_IRQMASK, dev_priv->irq_mask);
		spin_unlock_irqrestore(&dev_priv->irq_lock, irq_flags);
	}
-	mutex_unlock(&dev_priv->hw_mutex);
+	spin_unlock(&dev_priv->waiter_lock);
}

int vmw_wait_seqno(struct vmw_private *dev_priv,
···
	if (!(dev_priv->capabilities & SVGA_CAP_IRQMASK))
		return;

-	mutex_lock(&dev_priv->hw_mutex);
	vmw_write(dev_priv, SVGA_REG_IRQMASK, 0);
-	mutex_unlock(&dev_priv->hw_mutex);

	status = inl(dev_priv->io_start + VMWGFX_IRQSTATUS_PORT);
	outl(status, dev_priv->io_start + VMWGFX_IRQSTATUS_PORT);
-2
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
···
	struct vmw_private *dev_priv = vmw_priv(dev);
	struct vmw_display_unit *du = vmw_connector_to_du(connector);

-	mutex_lock(&dev_priv->hw_mutex);
	num_displays = vmw_read(dev_priv, SVGA_REG_NUM_DISPLAYS);
-	mutex_unlock(&dev_priv->hw_mutex);

	return ((vmw_connector_to_du(connector)->unit < num_displays &&
		 du->pref_active) ?
+10
drivers/hwmon/Kconfig
···
	  for those channels specified in the map. This map can be provided
	  either via platform data or the device tree bindings.

+config SENSORS_I5500
+	tristate "Intel 5500/5520/X58 temperature sensor"
+	depends on X86 && PCI
+	help
+	  If you say yes here you get support for the temperature
+	  sensor inside the Intel 5500, 5520 and X58 chipsets.
+
+	  This driver can also be built as a module. If so, the module
+	  will be called i5500_temp.
+
config SENSORS_CORETEMP
	tristate "Intel Core/Core2/Atom temperature sensor"
	depends on X86
+1
drivers/hwmon/Makefile
···
obj-$(CONFIG_SENSORS_HIH6130)	+= hih6130.o
obj-$(CONFIG_SENSORS_HTU21)	+= htu21.o
obj-$(CONFIG_SENSORS_ULTRA45)	+= ultra45_env.o
+obj-$(CONFIG_SENSORS_I5500)	+= i5500_temp.o
obj-$(CONFIG_SENSORS_I5K_AMB)	+= i5k_amb.o
obj-$(CONFIG_SENSORS_IBMAEM)	+= ibmaem.o
obj-$(CONFIG_SENSORS_IBMPEX)	+= ibmpex.o
+149
drivers/hwmon/i5500_temp.c
···
+/*
+ * i5500_temp - Driver for Intel 5500/5520/X58 chipset thermal sensor
+ *
+ * Copyright (C) 2012, 2014 Jean Delvare <jdelvare@suse.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/jiffies.h>
+#include <linux/device.h>
+#include <linux/pci.h>
+#include <linux/hwmon.h>
+#include <linux/hwmon-sysfs.h>
+#include <linux/err.h>
+#include <linux/mutex.h>
+
+/* Register definitions from datasheet */
+#define REG_TSTHRCATA	0xE2
+#define REG_TSCTRL	0xE8
+#define REG_TSTHRRPEX	0xEB
+#define REG_TSTHRLO	0xEC
+#define REG_TSTHRHI	0xEE
+#define REG_CTHINT	0xF0
+#define REG_TSFSC	0xF3
+#define REG_CTSTS	0xF4
+#define REG_TSTHRRQPI	0xF5
+#define REG_CTCTRL	0xF7
+#define REG_TSTIMER	0xF8
+
+/*
+ * Sysfs stuff
+ */
+
+/* Sensor resolution : 0.5 degree C */
+static ssize_t show_temp(struct device *dev,
+			 struct device_attribute *devattr, char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev->parent);
+	long temp;
+	u16 tsthrhi;
+	s8 tsfsc;
+
+	pci_read_config_word(pdev, REG_TSTHRHI, &tsthrhi);
+	pci_read_config_byte(pdev, REG_TSFSC, &tsfsc);
+	temp = ((long)tsthrhi - tsfsc) * 500;
+
+	return sprintf(buf, "%ld\n", temp);
+}
+
+static ssize_t show_thresh(struct device *dev,
+			   struct device_attribute *devattr, char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev->parent);
+	int reg = to_sensor_dev_attr(devattr)->index;
+	long temp;
+	u16 tsthr;
+
+	pci_read_config_word(pdev, reg, &tsthr);
+	temp = tsthr * 500;
+
+	return sprintf(buf, "%ld\n", temp);
+}
+
+static ssize_t show_alarm(struct device *dev,
+			  struct device_attribute *devattr, char *buf)
+{
+	struct pci_dev *pdev = to_pci_dev(dev->parent);
+	int nr = to_sensor_dev_attr(devattr)->index;
+	u8 ctsts;
+
+	pci_read_config_byte(pdev, REG_CTSTS, &ctsts);
+	return sprintf(buf, "%u\n", (unsigned int)ctsts & (1 << nr));
+}
+
+static DEVICE_ATTR(temp1_input, S_IRUGO, show_temp, NULL);
+static SENSOR_DEVICE_ATTR(temp1_crit, S_IRUGO, show_thresh, NULL, 0xE2);
+static SENSOR_DEVICE_ATTR(temp1_max_hyst, S_IRUGO, show_thresh, NULL, 0xEC);
+static SENSOR_DEVICE_ATTR(temp1_max, S_IRUGO, show_thresh, NULL, 0xEE);
+static SENSOR_DEVICE_ATTR(temp1_crit_alarm, S_IRUGO, show_alarm, NULL, 0);
+static SENSOR_DEVICE_ATTR(temp1_max_alarm, S_IRUGO, show_alarm, NULL, 1);
+
+static struct attribute *i5500_temp_attrs[] = {
+	&dev_attr_temp1_input.attr,
+	&sensor_dev_attr_temp1_crit.dev_attr.attr,
+	&sensor_dev_attr_temp1_max_hyst.dev_attr.attr,
+	&sensor_dev_attr_temp1_max.dev_attr.attr,
+	&sensor_dev_attr_temp1_crit_alarm.dev_attr.attr,
+	&sensor_dev_attr_temp1_max_alarm.dev_attr.attr,
+	NULL
+};
+
+ATTRIBUTE_GROUPS(i5500_temp);
+
+static const struct pci_device_id i5500_temp_ids[] = {
+	{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x3438) },
+	{ 0 },
+};
+
+MODULE_DEVICE_TABLE(pci, i5500_temp_ids);
+
+static int i5500_temp_probe(struct pci_dev *pdev,
+			    const struct pci_device_id *id)
+{
+	int err;
+	struct device *hwmon_dev;
+	u32 tstimer;
+	s8 tsfsc;
+
+	err = pci_enable_device(pdev);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to enable device\n");
+		return err;
+	}
+
+	pci_read_config_byte(pdev, REG_TSFSC, &tsfsc);
+	pci_read_config_dword(pdev, REG_TSTIMER, &tstimer);
+	if (tsfsc == 0x7F && tstimer == 0x07D30D40) {
+		dev_notice(&pdev->dev, "Sensor seems to be disabled\n");
+		return -ENODEV;
+	}
+
+	hwmon_dev = devm_hwmon_device_register_with_groups(&pdev->dev,
+							   "intel5500", NULL,
+							   i5500_temp_groups);
+	return PTR_ERR_OR_ZERO(hwmon_dev);
+}
+
+static struct pci_driver i5500_temp_driver = {
+	.name = "i5500_temp",
+	.id_table = i5500_temp_ids,
+	.probe = i5500_temp_probe,
+};
+
+module_pci_driver(i5500_temp_driver);
+
+MODULE_AUTHOR("Jean Delvare <jdelvare@suse.de>");
+MODULE_DESCRIPTION("Intel 5500/5520/X58 chipset thermal sensor driver");
+MODULE_LICENSE("GPL");
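The new driver reports temp1_input as ((long)tsthrhi - tsfsc) * 500, i.e. a sensor reading in 0.5 °C steps converted to the millidegree units hwmon expects. The arithmetic in isolation (register values below are made up for the example):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the driver's show_temp() computation: the sensor has
 * 0.5 degree C resolution, so each unit is 500 millidegrees. */
static long i5500_temp_mdeg(uint16_t tsthrhi, int8_t tsfsc)
{
	return ((long)tsthrhi - tsfsc) * 500;
}
```

Note that TSFSC is read as a signed byte, so a negative calibration offset raises the reported temperature.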
+2 -2
drivers/irqchip/irq-atmel-aic-common.c
···
#define AT91_AIC_IRQ_MIN_PRIORITY	0
#define AT91_AIC_IRQ_MAX_PRIORITY	7

-#define AT91_AIC_SRCTYPE	GENMASK(7, 6)
+#define AT91_AIC_SRCTYPE	GENMASK(6, 5)
#define AT91_AIC_SRCTYPE_LOW		(0 << 5)
#define AT91_AIC_SRCTYPE_FALLING	(1 << 5)
#define AT91_AIC_SRCTYPE_HIGH		(2 << 5)
···
		return -EINVAL;
	}

-	*val &= AT91_AIC_SRCTYPE;
+	*val &= ~AT91_AIC_SRCTYPE;
	*val |= aic_type;

	return 0;
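The first fix matters because the AIC source-type field is a 2-bit value shifted left by 5, so it occupies bits 6:5, not 7:6; the second ensures the field is cleared (`&= ~mask`) before the new type is OR-ed in. A self-contained check (GENMASK reproduced here from its kernel definition so the sketch stands alone):

```c
#include <assert.h>

/* 32-bit GENMASK(h, l), as in the kernel's include/linux/bits.h. */
#define GENMASK(h, l) (((~0u) << (l)) & (~0u >> (31 - (h))))

/* The source-type encodings from the hunk above: 2-bit values << 5. */
#define SRCTYPE_LOW	(0u << 5)
#define SRCTYPE_FALLING	(1u << 5)
#define SRCTYPE_HIGH	(2u << 5)
#define SRCTYPE_RISING	(3u << 5)
```

GENMASK(6, 5) is 0x60 and covers all four encodings; the old GENMASK(7, 6) (0xC0) clipped bit 5, so SRCTYPE_RISING and SRCTYPE_FALLING were corrupted when masked.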
+1 -1
drivers/irqchip/irq-gic-v3-its.c
···
	 * of two entries. No, the architecture doesn't let you
	 * express an ITT with a single entry.
	 */
-	nr_ites = max(2, roundup_pow_of_two(nvecs));
+	nr_ites = max(2UL, roundup_pow_of_two(nvecs));
	sz = nr_ites * its->ite_size;
	sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1;
	itt = kmalloc(sz, GFP_KERNEL);
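The 2UL is needed because roundup_pow_of_two() returns unsigned long and the kernel's max() macro refuses mixed-type arguments at compile time. The resulting clamp, with a local stand-in for roundup_pow_of_two() so the sketch is self-contained:

```c
#include <assert.h>

/* Stand-in for the kernel's roundup_pow_of_two(); same return type
 * (unsigned long), naive implementation for illustration only. */
static unsigned long roundup_pow_of_two_sketch(unsigned long n)
{
	unsigned long p = 1;
	while (p < n)
		p <<= 1;
	return p;
}
```

With both operands unsigned long, max(2UL, roundup_pow_of_two(nvecs)) guarantees at least the two-entry ITT the architecture requires.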
+1 -1
drivers/irqchip/irq-hip04.c
···
	 * It will be refined as each CPU probes its ID.
	 */
	for (i = 0; i < NR_HIP04_CPU_IF; i++)
-		hip04_cpu_map[i] = 0xff;
+		hip04_cpu_map[i] = 0xffff;

	/*
	 * Find out how many interrupts are supported.
+2 -2
drivers/irqchip/irq-mtk-sysirq.c
···
		return -ENOMEM;

	chip_data->intpol_base = of_io_request_and_map(node, 0, "intpol");
-	if (!chip_data->intpol_base) {
+	if (IS_ERR(chip_data->intpol_base)) {
		pr_err("mtk_sysirq: unable to map sysirq register\n");
-		ret = -ENOMEM;
+		ret = PTR_ERR(chip_data->intpol_base);
		goto out_free;
	}
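of_io_request_and_map() reports failure with an ERR_PTR-encoded pointer, never NULL, so the old NULL test could not fire. A userspace rendition of the ERR_PTR helpers showing why (the helpers are modelled on include/linux/err.h, simplified for the sketch):

```c
#include <assert.h>
#include <errno.h>

/* Errors are encoded in the top 4095 addresses of the pointer space,
 * which a NULL check never matches. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

The fix also propagates the real error with PTR_ERR() instead of hardcoding -ENOMEM.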
+21 -5
drivers/irqchip/irq-omap-intc.c
···
	return ret;
}

-static int __init omap_init_irq_legacy(u32 base)
+static int __init omap_init_irq_legacy(u32 base, struct device_node *node)
{
	int j, irq_base;
···
		irq_base = 0;
	}

-	domain = irq_domain_add_legacy(NULL, omap_nr_irqs, irq_base, 0,
+	domain = irq_domain_add_legacy(node, omap_nr_irqs, irq_base, 0,
			&irq_domain_simple_ops, NULL);

	omap_irq_soft_reset();
···
{
	int ret;

-	if (node)
+	/*
+	 * FIXME legacy OMAP DMA driver sitting under arch/arm/plat-omap/dma.c
+	 * depends is still not ready for linear IRQ domains; because of that
+	 * we need to temporarily "blacklist" OMAP2 and OMAP3 devices from using
+	 * linear IRQ Domain until that driver is finally fixed.
+	 */
+	if (of_device_is_compatible(node, "ti,omap2-intc") ||
+	    of_device_is_compatible(node, "ti,omap3-intc")) {
+		struct resource res;
+
+		if (of_address_to_resource(node, 0, &res))
+			return -ENOMEM;
+
+		base = res.start;
+		ret = omap_init_irq_legacy(base, node);
+	} else if (node) {
		ret = omap_init_irq_of(node);
-	else
-		ret = omap_init_irq_legacy(base);
+	} else {
+		ret = omap_init_irq_legacy(base, NULL);
+	}

	if (ret == 0)
		omap_irq_enable_protection();
+95 -6
drivers/md/dm-cache-metadata.c
···
} __packed;

struct dm_cache_metadata {
+	atomic_t ref_count;
+	struct list_head list;
+
	struct block_device *bdev;
	struct dm_block_manager *bm;
	struct dm_space_map *metadata_sm;
···

/*----------------------------------------------------------------*/

-struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev,
-						 sector_t data_block_size,
-						 bool may_format_device,
-						 size_t policy_hint_size)
+static struct dm_cache_metadata *metadata_open(struct block_device *bdev,
+					       sector_t data_block_size,
+					       bool may_format_device,
+					       size_t policy_hint_size)
{
	int r;
	struct dm_cache_metadata *cmd;
···
		return NULL;
	}

+	atomic_set(&cmd->ref_count, 1);
	init_rwsem(&cmd->root_lock);
	cmd->bdev = bdev;
	cmd->data_block_size = data_block_size;
···
	return cmd;
}

+/*
+ * We keep a little list of ref counted metadata objects to prevent two
+ * different target instances creating separate bufio instances.  This is
+ * an issue if a table is reloaded before the suspend.
+ */
+static DEFINE_MUTEX(table_lock);
+static LIST_HEAD(table);
+
+static struct dm_cache_metadata *lookup(struct block_device *bdev)
+{
+	struct dm_cache_metadata *cmd;
+
+	list_for_each_entry(cmd, &table, list)
+		if (cmd->bdev == bdev) {
+			atomic_inc(&cmd->ref_count);
+			return cmd;
+		}
+
+	return NULL;
+}
+
+static struct dm_cache_metadata *lookup_or_open(struct block_device *bdev,
+						sector_t data_block_size,
+						bool may_format_device,
+						size_t policy_hint_size)
+{
+	struct dm_cache_metadata *cmd, *cmd2;
+
+	mutex_lock(&table_lock);
+	cmd = lookup(bdev);
+	mutex_unlock(&table_lock);
+
+	if (cmd)
+		return cmd;
+
+	cmd = metadata_open(bdev, data_block_size, may_format_device, policy_hint_size);
+	if (cmd) {
+		mutex_lock(&table_lock);
+		cmd2 = lookup(bdev);
+		if (cmd2) {
+			mutex_unlock(&table_lock);
+			__destroy_persistent_data_objects(cmd);
+			kfree(cmd);
+			return cmd2;
+		}
+		list_add(&cmd->list, &table);
+		mutex_unlock(&table_lock);
+	}
+
+	return cmd;
+}
+
+static bool same_params(struct dm_cache_metadata *cmd, sector_t data_block_size)
+{
+	if (cmd->data_block_size != data_block_size) {
+		DMERR("data_block_size (%llu) different from that in metadata (%llu)\n",
+		      (unsigned long long) data_block_size,
+		      (unsigned long long) cmd->data_block_size);
+		return false;
+	}
+
+	return true;
+}
+
+struct dm_cache_metadata *dm_cache_metadata_open(struct block_device *bdev,
+						 sector_t data_block_size,
+						 bool may_format_device,
+						 size_t policy_hint_size)
+{
+	struct dm_cache_metadata *cmd = lookup_or_open(bdev, data_block_size,
+						       may_format_device, policy_hint_size);
+	if (cmd && !same_params(cmd, data_block_size)) {
+		dm_cache_metadata_close(cmd);
+		return NULL;
+	}
+
+	return cmd;
+}
+
void dm_cache_metadata_close(struct dm_cache_metadata *cmd)
{
-	__destroy_persistent_data_objects(cmd);
-	kfree(cmd);
+	if (atomic_dec_and_test(&cmd->ref_count)) {
+		mutex_lock(&table_lock);
+		list_del(&cmd->list);
+		mutex_unlock(&table_lock);
+
+		__destroy_persistent_data_objects(cmd);
+		kfree(cmd);
+	}
}

/*
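dm_cache_metadata_open() now funnels through a ref-counted lookup: search under the lock, drop it for the slow open, then re-check for a racing opener before inserting. The same pattern in miniature (an int key and malloc stand in for the bdev and metadata_open(); single-threaded here, but the lock placement matches the hunk):

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>

struct cmd { int key; int refs; struct cmd *next; };

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct cmd *table_head;

/* Must be called with table_lock held; bumps the refcount on a hit. */
static struct cmd *lookup(int key)
{
	struct cmd *c;

	for (c = table_head; c; c = c->next)
		if (c->key == key) {
			c->refs++;
			return c;
		}
	return NULL;
}

static struct cmd *lookup_or_open(int key)
{
	struct cmd *c, *c2;

	pthread_mutex_lock(&table_lock);
	c = lookup(key);
	pthread_mutex_unlock(&table_lock);
	if (c)
		return c;

	c = calloc(1, sizeof(*c));	/* the expensive "open" step, unlocked */
	c->key = key;
	c->refs = 1;

	pthread_mutex_lock(&table_lock);
	c2 = lookup(key);		/* did a racing opener beat us? */
	if (c2) {
		pthread_mutex_unlock(&table_lock);
		free(c);		/* discard our duplicate */
		return c2;
	}
	c->next = table_head;
	table_head = c;
	pthread_mutex_unlock(&table_lock);
	return c;
}
```

The re-check after reacquiring the lock is the crux: it is what prevents two table reloads on the same bdev from ending up with two metadata (and bufio) instances.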
+50 -39
drivers/md/dm-cache-target.c
···
	struct list_head need_commit_migrations;
	sector_t migration_threshold;
	wait_queue_head_t migration_wait;
-	atomic_t nr_migrations;
+	atomic_t nr_allocated_migrations;
+
+	/*
+	 * The number of in flight migrations that are performing
+	 * background io. eg, promotion, writeback.
+	 */
+	atomic_t nr_io_migrations;

	wait_queue_head_t quiescing_wait;
	atomic_t quiescing;
···
	struct dm_deferred_set *all_io_ds;

	mempool_t *migration_pool;
-	struct dm_cache_migration *next_migration;

	struct dm_cache_policy *policy;
	unsigned policy_nr_args;
···
	dm_bio_prison_free_cell(cache->prison, cell);
}

+static struct dm_cache_migration *alloc_migration(struct cache *cache)
+{
+	struct dm_cache_migration *mg;
+
+	mg = mempool_alloc(cache->migration_pool, GFP_NOWAIT);
+	if (mg) {
+		mg->cache = cache;
+		atomic_inc(&mg->cache->nr_allocated_migrations);
+	}
+
+	return mg;
+}
+
+static void free_migration(struct dm_cache_migration *mg)
+{
+	if (atomic_dec_and_test(&mg->cache->nr_allocated_migrations))
+		wake_up(&mg->cache->migration_wait);
+
+	mempool_free(mg, mg->cache->migration_pool);
+}
+
static int prealloc_data_structs(struct cache *cache, struct prealloc *p)
{
	if (!p->mg) {
-		p->mg = mempool_alloc(cache->migration_pool, GFP_NOWAIT);
+		p->mg = alloc_migration(cache);
		if (!p->mg)
			return -ENOMEM;
	}
···
		free_prison_cell(cache, p->cell1);

	if (p->mg)
-		mempool_free(p->mg, cache->migration_pool);
+		free_migration(p->mg);
}

static struct dm_cache_migration *prealloc_get_migration(struct prealloc *p)
···
 * Migration covers moving data from the origin device to the cache, or
 * vice versa.
 *--------------------------------------------------------------*/
-static void free_migration(struct dm_cache_migration *mg)
+static void inc_io_migrations(struct cache *cache)
{
-	mempool_free(mg, mg->cache->migration_pool);
+	atomic_inc(&cache->nr_io_migrations);
}

-static void inc_nr_migrations(struct cache *cache)
+static void dec_io_migrations(struct cache *cache)
{
-	atomic_inc(&cache->nr_migrations);
-}
-
-static void dec_nr_migrations(struct cache *cache)
-{
-	atomic_dec(&cache->nr_migrations);
-
-	/*
-	 * Wake the worker in case we're suspending the target.
-	 */
-	wake_up(&cache->migration_wait);
+	atomic_dec(&cache->nr_io_migrations);
}

static void __cell_defer(struct cache *cache, struct dm_bio_prison_cell *cell,
···
	wake_worker(cache);
}

-static void cleanup_migration(struct dm_cache_migration *mg)
+static void free_io_migration(struct dm_cache_migration *mg)
{
-	struct cache *cache = mg->cache;
+	dec_io_migrations(mg->cache);
	free_migration(mg);
-	dec_nr_migrations(cache);
}

static void migration_failure(struct dm_cache_migration *mg)
···
		cell_defer(cache, mg->new_ocell, true);
	}

-	cleanup_migration(mg);
+	free_io_migration(mg);
}

static void migration_success_pre_commit(struct dm_cache_migration *mg)
···
	if (mg->writeback) {
		clear_dirty(cache, mg->old_oblock, mg->cblock);
		cell_defer(cache, mg->old_ocell, false);
-		cleanup_migration(mg);
+		free_io_migration(mg);
		return;

	} else if (mg->demote) {
···
					     mg->old_oblock);
			if (mg->promote)
				cell_defer(cache, mg->new_ocell, true);
-			cleanup_migration(mg);
+			free_io_migration(mg);
			return;
		}
	} else {
		if (dm_cache_insert_mapping(cache->cmd, mg->cblock, mg->new_oblock)) {
			DMWARN_LIMIT("promotion failed; couldn't update on disk metadata");
			policy_remove_mapping(cache->policy, mg->new_oblock);
-			cleanup_migration(mg);
+			free_io_migration(mg);
			return;
		}
	}
···
		} else {
			if (mg->invalidate)
				policy_remove_mapping(cache->policy, mg->old_oblock);
-			cleanup_migration(mg);
+			free_io_migration(mg);
		}

	} else {
···
			bio_endio(mg->new_ocell->holder, 0);
			cell_defer(cache, mg->new_ocell, false);
		}
-		cleanup_migration(mg);
+		free_io_migration(mg);
	}
}
···
	mg->new_ocell = cell;
	mg->start_jiffies = jiffies;

-	inc_nr_migrations(cache);
+	inc_io_migrations(cache);
	quiesce_migration(mg);
}
···
	mg->new_ocell = NULL;
	mg->start_jiffies = jiffies;

-	inc_nr_migrations(cache);
+	inc_io_migrations(cache);
	quiesce_migration(mg);
}
···
	mg->new_ocell = new_ocell;
	mg->start_jiffies = jiffies;

-	inc_nr_migrations(cache);
+	inc_io_migrations(cache);
	quiesce_migration(mg);
}
···
	mg->new_ocell = NULL;
	mg->start_jiffies = jiffies;

-	inc_nr_migrations(cache);
+	inc_io_migrations(cache);
	quiesce_migration(mg);
}
···

static bool spare_migration_bandwidth(struct cache *cache)
{
-	sector_t current_volume = (atomic_read(&cache->nr_migrations) + 1) *
+	sector_t current_volume = (atomic_read(&cache->nr_io_migrations) + 1) *
		cache->sectors_per_block;
	return current_volume < cache->migration_threshold;
}
···

static void wait_for_migrations(struct cache *cache)
- wait_event(cache->migration_wait, !atomic_read(&cache->nr_migrations)); 1767 + wait_event(cache->migration_wait, !atomic_read(&cache->nr_allocated_migrations)); 1783 1768 } 1784 1769 1785 1770 static void stop_worker(struct cache *cache) ··· 1890 1875 static void destroy(struct cache *cache) 1891 1876 { 1892 1877 unsigned i; 1893 - 1894 - if (cache->next_migration) 1895 - mempool_free(cache->next_migration, cache->migration_pool); 1896 1878 1897 1879 if (cache->migration_pool) 1898 1880 mempool_destroy(cache->migration_pool); ··· 2436 2424 INIT_LIST_HEAD(&cache->quiesced_migrations); 2437 2425 INIT_LIST_HEAD(&cache->completed_migrations); 2438 2426 INIT_LIST_HEAD(&cache->need_commit_migrations); 2439 - atomic_set(&cache->nr_migrations, 0); 2427 + atomic_set(&cache->nr_allocated_migrations, 0); 2428 + atomic_set(&cache->nr_io_migrations, 0); 2440 2429 init_waitqueue_head(&cache->migration_wait); 2441 2430 2442 2431 init_waitqueue_head(&cache->quiescing_wait); ··· 2499 2486 *error = "Error creating cache's migration mempool"; 2500 2487 goto bad; 2501 2488 } 2502 - 2503 - cache->next_migration = NULL; 2504 2489 2505 2490 cache->need_tick_bio = true; 2506 2491 cache->sized = false;
+7 -2
drivers/md/dm.c
··· 206 206 /* zero-length flush that will be cloned and submitted to targets */ 207 207 struct bio flush_bio; 208 208 209 + /* the number of internal suspends */ 210 + unsigned internal_suspend_count; 211 + 209 212 struct dm_stats stats; 210 213 }; 211 214 ··· 2931 2928 { 2932 2929 struct dm_table *map = NULL; 2933 2930 2934 - if (dm_suspended_internally_md(md)) 2931 + if (md->internal_suspend_count++) 2935 2932 return; /* nested internal suspend */ 2936 2933 2937 2934 if (dm_suspended_md(md)) { ··· 2956 2953 2957 2954 static void __dm_internal_resume(struct mapped_device *md) 2958 2955 { 2959 - if (!dm_suspended_internally_md(md)) 2956 + BUG_ON(!md->internal_suspend_count); 2957 + 2958 + if (--md->internal_suspend_count) 2960 2959 return; /* resume from nested internal suspend */ 2961 2960 2962 2961 if (dm_suspended_md(md))
+17 -6
drivers/media/pci/cx23885/cx23885-cards.c
··· 614 614 .portb = CX23885_MPEG_DVB, 615 615 }, 616 616 [CX23885_BOARD_HAUPPAUGE_HVR4400] = { 617 - .name = "Hauppauge WinTV-HVR4400", 617 + .name = "Hauppauge WinTV-HVR4400/HVR5500", 618 618 .porta = CX23885_ANALOG_VIDEO, 619 619 .portb = CX23885_MPEG_DVB, 620 620 .portc = CX23885_MPEG_DVB, 621 621 .tuner_type = TUNER_NXP_TDA18271, 622 622 .tuner_addr = 0x60, /* 0xc0 >> 1 */ 623 623 .tuner_bus = 1, 624 + }, 625 + [CX23885_BOARD_HAUPPAUGE_STARBURST] = { 626 + .name = "Hauppauge WinTV Starburst", 627 + .portb = CX23885_MPEG_DVB, 624 628 }, 625 629 [CX23885_BOARD_AVERMEDIA_HC81R] = { 626 630 .name = "AVerTV Hybrid Express Slim HC81R", ··· 940 936 }, { 941 937 .subvendor = 0x0070, 942 938 .subdevice = 0xc108, 943 - .card = CX23885_BOARD_HAUPPAUGE_HVR4400, 939 + .card = CX23885_BOARD_HAUPPAUGE_HVR4400, /* Hauppauge WinTV HVR-4400 (Model 121xxx, Hybrid DVB-T/S2, IR) */ 944 940 }, { 945 941 .subvendor = 0x0070, 946 942 .subdevice = 0xc138, 947 - .card = CX23885_BOARD_HAUPPAUGE_HVR4400, 943 + .card = CX23885_BOARD_HAUPPAUGE_HVR4400, /* Hauppauge WinTV HVR-5500 (Model 121xxx, Hybrid DVB-T/C/S2, IR) */ 948 944 }, { 949 945 .subvendor = 0x0070, 950 946 .subdevice = 0xc12a, 951 - .card = CX23885_BOARD_HAUPPAUGE_HVR4400, 947 + .card = CX23885_BOARD_HAUPPAUGE_STARBURST, /* Hauppauge WinTV Starburst (Model 121x00, DVB-S2, IR) */ 952 948 }, { 953 949 .subvendor = 0x0070, 954 950 .subdevice = 0xc1f8, 955 - .card = CX23885_BOARD_HAUPPAUGE_HVR4400, 951 + .card = CX23885_BOARD_HAUPPAUGE_HVR4400, /* Hauppauge WinTV HVR-5500 (Model 121xxx, Hybrid DVB-T/C/S2, IR) */ 956 952 }, { 957 953 .subvendor = 0x1461, 958 954 .subdevice = 0xd939, ··· 1549 1545 cx_write(GPIO_ISM, 0x00000000);/* INTERRUPTS active low*/ 1550 1546 break; 1551 1547 case CX23885_BOARD_HAUPPAUGE_HVR4400: 1548 + case CX23885_BOARD_HAUPPAUGE_STARBURST: 1552 1549 /* GPIO-8 tda10071 demod reset */ 1553 - /* GPIO-9 si2165 demod reset */ 1550 + /* GPIO-9 si2165 demod reset (only HVR4400/HVR5500)*/ 1554 1551 1555 1552 /* Put 
the parts into reset and back */ 1556 1553 cx23885_gpio_enable(dev, GPIO_8 | GPIO_9, 1); ··· 1877 1872 case CX23885_BOARD_HAUPPAUGE_HVR1850: 1878 1873 case CX23885_BOARD_HAUPPAUGE_HVR1290: 1879 1874 case CX23885_BOARD_HAUPPAUGE_HVR4400: 1875 + case CX23885_BOARD_HAUPPAUGE_STARBURST: 1880 1876 case CX23885_BOARD_HAUPPAUGE_IMPACTVCBE: 1881 1877 if (dev->i2c_bus[0].i2c_rc == 0) 1882 1878 hauppauge_eeprom(dev, eeprom+0xc0); ··· 1985 1979 ts2->gen_ctrl_val = 0xc; /* Serial bus + punctured clock */ 1986 1980 ts2->ts_clk_en_val = 0x1; /* Enable TS_CLK */ 1987 1981 ts2->src_sel_val = CX23885_SRC_SEL_PARALLEL_MPEG_VIDEO; 1982 + break; 1983 + case CX23885_BOARD_HAUPPAUGE_STARBURST: 1984 + ts1->gen_ctrl_val = 0xc; /* Serial bus + punctured clock */ 1985 + ts1->ts_clk_en_val = 0x1; /* Enable TS_CLK */ 1986 + ts1->src_sel_val = CX23885_SRC_SEL_PARALLEL_MPEG_VIDEO; 1988 1987 break; 1989 1988 case CX23885_BOARD_DVBSKY_T9580: 1990 1989 case CX23885_BOARD_DVBSKY_T982:
+2 -2
drivers/media/pci/cx23885/cx23885-core.c
··· 2049 2049 2050 2050 cx23885_shutdown(dev); 2051 2051 2052 - pci_disable_device(pci_dev); 2053 - 2054 2052 /* unregister stuff */ 2055 2053 free_irq(pci_dev->irq, dev); 2054 + 2055 + pci_disable_device(pci_dev); 2056 2056 2057 2057 cx23885_dev_unregister(dev); 2058 2058 vb2_dma_sg_cleanup_ctx(dev->alloc_ctx);
+11
drivers/media/pci/cx23885/cx23885-dvb.c
··· 1710 1710 break; 1711 1711 } 1712 1712 break; 1713 + case CX23885_BOARD_HAUPPAUGE_STARBURST: 1714 + i2c_bus = &dev->i2c_bus[0]; 1715 + fe0->dvb.frontend = dvb_attach(tda10071_attach, 1716 + &hauppauge_tda10071_config, 1717 + &i2c_bus->i2c_adap); 1718 + if (fe0->dvb.frontend != NULL) { 1719 + dvb_attach(a8293_attach, fe0->dvb.frontend, 1720 + &i2c_bus->i2c_adap, 1721 + &hauppauge_a8293_config); 1722 + } 1723 + break; 1713 1724 case CX23885_BOARD_DVBSKY_T9580: 1714 1725 case CX23885_BOARD_DVBSKY_S950: 1715 1726 i2c_bus = &dev->i2c_bus[0];
+1
drivers/media/pci/cx23885/cx23885.h
··· 99 99 #define CX23885_BOARD_DVBSKY_S950 49 100 100 #define CX23885_BOARD_DVBSKY_S952 50 101 101 #define CX23885_BOARD_DVBSKY_T982 51 102 + #define CX23885_BOARD_HAUPPAUGE_STARBURST 52 102 103 103 104 #define GPIO_0 0x00000001 104 105 #define GPIO_1 0x00000002
+5 -2
drivers/media/platform/omap3isp/ispvideo.c
··· 602 602 strlcpy(cap->card, video->video.name, sizeof(cap->card)); 603 603 strlcpy(cap->bus_info, "media", sizeof(cap->bus_info)); 604 604 605 + cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT 606 + | V4L2_CAP_STREAMING | V4L2_CAP_DEVICE_CAPS; 607 + 605 608 if (video->type == V4L2_BUF_TYPE_VIDEO_CAPTURE) 606 - cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 609 + cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 607 610 else 608 - cap->capabilities = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING; 611 + cap->device_caps = V4L2_CAP_VIDEO_OUTPUT | V4L2_CAP_STREAMING; 609 612 610 613 return 0; 611 614 }
+3 -2
drivers/media/platform/soc_camera/atmel-isi.c
··· 760 760 { 761 761 strcpy(cap->driver, "atmel-isi"); 762 762 strcpy(cap->card, "Atmel Image Sensor Interface"); 763 - cap->capabilities = (V4L2_CAP_VIDEO_CAPTURE | 764 - V4L2_CAP_STREAMING); 763 + cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 764 + cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; 765 + 765 766 return 0; 766 767 } 767 768
+2 -1
drivers/media/platform/soc_camera/mx2_camera.c
··· 1256 1256 { 1257 1257 /* cap->name is set by the friendly caller:-> */ 1258 1258 strlcpy(cap->card, MX2_CAM_DRIVER_DESCRIPTION, sizeof(cap->card)); 1259 - cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1259 + cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1260 + cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; 1260 1261 1261 1262 return 0; 1262 1263 }
+2 -1
drivers/media/platform/soc_camera/mx3_camera.c
··· 967 967 { 968 968 /* cap->name is set by the firendly caller:-> */ 969 969 strlcpy(cap->card, "i.MX3x Camera", sizeof(cap->card)); 970 - cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 970 + cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 971 + cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; 971 972 972 973 return 0; 973 974 }
+2 -1
drivers/media/platform/soc_camera/omap1_camera.c
··· 1427 1427 { 1428 1428 /* cap->name is set by the friendly caller:-> */ 1429 1429 strlcpy(cap->card, "OMAP1 Camera", sizeof(cap->card)); 1430 - cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1430 + cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1431 + cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; 1431 1432 1432 1433 return 0; 1433 1434 }
+2 -1
drivers/media/platform/soc_camera/pxa_camera.c
··· 1576 1576 { 1577 1577 /* cap->name is set by the firendly caller:-> */ 1578 1578 strlcpy(cap->card, pxa_cam_driver_description, sizeof(cap->card)); 1579 - cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1579 + cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1580 + cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; 1580 1581 1581 1582 return 0; 1582 1583 }
+3 -1
drivers/media/platform/soc_camera/rcar_vin.c
··· 1799 1799 struct v4l2_capability *cap) 1800 1800 { 1801 1801 strlcpy(cap->card, "R_Car_VIN", sizeof(cap->card)); 1802 - cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1802 + cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1803 + cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; 1804 + 1803 1805 return 0; 1804 1806 } 1805 1807
+3 -1
drivers/media/platform/soc_camera/sh_mobile_ceu_camera.c
··· 1652 1652 struct v4l2_capability *cap) 1653 1653 { 1654 1654 strlcpy(cap->card, "SuperH_Mobile_CEU", sizeof(cap->card)); 1655 - cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1655 + cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING; 1656 + cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS; 1657 + 1656 1658 return 0; 1657 1659 } 1658 1660
+1 -1
drivers/media/usb/dvb-usb/cxusb.c
··· 2232 2232 { 2233 2233 "Mygica T230 DVB-T/T2/C", 2234 2234 { NULL }, 2235 - { &cxusb_table[22], NULL }, 2235 + { &cxusb_table[20], NULL }, 2236 2236 }, 2237 2237 } 2238 2238 };
+13 -11
drivers/media/usb/pvrusb2/pvrusb2-v4l2.c
··· 89 89 module_param_array(vbi_nr, int, NULL, 0444); 90 90 MODULE_PARM_DESC(vbi_nr, "Offset for device's vbi dev minor"); 91 91 92 - static struct v4l2_capability pvr_capability ={ 93 - .driver = "pvrusb2", 94 - .card = "Hauppauge WinTV pvr-usb2", 95 - .bus_info = "usb", 96 - .version = LINUX_VERSION_CODE, 97 - .capabilities = (V4L2_CAP_VIDEO_CAPTURE | 98 - V4L2_CAP_TUNER | V4L2_CAP_AUDIO | V4L2_CAP_RADIO | 99 - V4L2_CAP_READWRITE), 100 - }; 101 - 102 92 static struct v4l2_fmtdesc pvr_fmtdesc [] = { 103 93 { 104 94 .index = 0, ··· 150 160 struct pvr2_v4l2_fh *fh = file->private_data; 151 161 struct pvr2_hdw *hdw = fh->channel.mc_head->hdw; 152 162 153 - memcpy(cap, &pvr_capability, sizeof(struct v4l2_capability)); 163 + strlcpy(cap->driver, "pvrusb2", sizeof(cap->driver)); 154 164 strlcpy(cap->bus_info, pvr2_hdw_get_bus_info(hdw), 155 165 sizeof(cap->bus_info)); 156 166 strlcpy(cap->card, pvr2_hdw_get_desc(hdw), sizeof(cap->card)); 167 + cap->capabilities = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_TUNER | 168 + V4L2_CAP_AUDIO | V4L2_CAP_RADIO | 169 + V4L2_CAP_READWRITE | V4L2_CAP_DEVICE_CAPS; 170 + switch (fh->pdi->devbase.vfl_type) { 171 + case VFL_TYPE_GRABBER: 172 + cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_AUDIO; 173 + break; 174 + case VFL_TYPE_RADIO: 175 + cap->device_caps = V4L2_CAP_RADIO; 176 + break; 177 + } 178 + cap->device_caps |= V4L2_CAP_TUNER | V4L2_CAP_READWRITE; 157 179 return 0; 158 180 } 159 181
+9 -10
drivers/media/v4l2-core/videobuf2-core.c
··· 3146 3146 prequeue--; 3147 3147 } else { 3148 3148 call_void_qop(q, wait_finish, q); 3149 - ret = vb2_internal_dqbuf(q, &fileio->b, 0); 3149 + if (!threadio->stop) 3150 + ret = vb2_internal_dqbuf(q, &fileio->b, 0); 3150 3151 call_void_qop(q, wait_prepare, q); 3151 3152 dprintk(5, "file io: vb2_dqbuf result: %d\n", ret); 3152 3153 } 3153 - if (threadio->stop) 3154 - break; 3155 - if (ret) 3154 + if (ret || threadio->stop) 3156 3155 break; 3157 3156 try_to_freeze(); 3158 3157 3159 3158 vb = q->bufs[fileio->b.index]; 3160 3159 if (!(fileio->b.flags & V4L2_BUF_FLAG_ERROR)) 3161 - ret = threadio->fnc(vb, threadio->priv); 3162 - if (ret) 3163 - break; 3160 + if (threadio->fnc(vb, threadio->priv)) 3161 + break; 3164 3162 call_void_qop(q, wait_finish, q); 3165 3163 if (set_timestamp) 3166 3164 v4l2_get_timestamp(&fileio->b.timestamp); 3167 - ret = vb2_internal_qbuf(q, &fileio->b); 3165 + if (!threadio->stop) 3166 + ret = vb2_internal_qbuf(q, &fileio->b); 3168 3167 call_void_qop(q, wait_prepare, q); 3169 - if (ret) 3168 + if (ret || threadio->stop) 3170 3169 break; 3171 3170 } 3172 3171 ··· 3234 3235 threadio->stop = true; 3235 3236 vb2_internal_streamoff(q, q->type); 3236 3237 call_void_qop(q, wait_prepare, q); 3238 + err = kthread_stop(threadio->thread); 3237 3239 q->fileio = NULL; 3238 3240 fileio->req.count = 0; 3239 3241 vb2_reqbufs(q, &fileio->req); 3240 3242 kfree(fileio); 3241 - err = kthread_stop(threadio->thread); 3242 3243 threadio->thread = NULL; 3243 3244 kfree(threadio); 3244 3245 q->fileio = NULL;
+3
drivers/net/can/c_can/c_can.c
··· 615 615 616 616 c_can_irq_control(priv, false); 617 617 618 + /* put ctrl to init on stop to end ongoing transmission */ 619 + priv->write_reg(priv, C_CAN_CTRL_REG, CONTROL_INIT); 620 + 618 621 /* deactivate pins */ 619 622 pinctrl_pm_select_sleep_state(dev->dev.parent); 620 623 priv->can.state = CAN_STATE_STOPPED;
+15 -13
drivers/net/can/usb/kvaser_usb.c
··· 587 587 usb_sndbulkpipe(dev->udev, 588 588 dev->bulk_out->bEndpointAddress), 589 589 buf, msg->len, 590 - kvaser_usb_simple_msg_callback, priv); 590 + kvaser_usb_simple_msg_callback, netdev); 591 591 usb_anchor_urb(urb, &priv->tx_submitted); 592 592 593 593 err = usb_submit_urb(urb, GFP_ATOMIC); ··· 662 662 priv = dev->nets[channel]; 663 663 stats = &priv->netdev->stats; 664 664 665 - if (status & M16C_STATE_BUS_RESET) { 666 - kvaser_usb_unlink_tx_urbs(priv); 667 - return; 668 - } 669 - 670 665 skb = alloc_can_err_skb(priv->netdev, &cf); 671 666 if (!skb) { 672 667 stats->rx_dropped++; ··· 672 677 673 678 netdev_dbg(priv->netdev, "Error status: 0x%02x\n", status); 674 679 675 - if (status & M16C_STATE_BUS_OFF) { 680 + if (status & (M16C_STATE_BUS_OFF | M16C_STATE_BUS_RESET)) { 676 681 cf->can_id |= CAN_ERR_BUSOFF; 677 682 678 683 priv->can.can_stats.bus_off++; ··· 698 703 } 699 704 700 705 new_state = CAN_STATE_ERROR_PASSIVE; 701 - } 702 - 703 - if (status == M16C_STATE_BUS_ERROR) { 706 + } else if (status & M16C_STATE_BUS_ERROR) { 704 707 if ((priv->can.state < CAN_STATE_ERROR_WARNING) && 705 708 ((txerr >= 96) || (rxerr >= 96))) { 706 709 cf->can_id |= CAN_ERR_CRTL; ··· 708 715 709 716 priv->can.can_stats.error_warning++; 710 717 new_state = CAN_STATE_ERROR_WARNING; 711 - } else if (priv->can.state > CAN_STATE_ERROR_ACTIVE) { 718 + } else if ((priv->can.state > CAN_STATE_ERROR_ACTIVE) && 719 + ((txerr < 96) && (rxerr < 96))) { 712 720 cf->can_id |= CAN_ERR_PROT; 713 721 cf->data[2] = CAN_ERR_PROT_ACTIVE; 714 722 ··· 1584 1590 { 1585 1591 struct kvaser_usb *dev; 1586 1592 int err = -ENOMEM; 1587 - int i; 1593 + int i, retry = 3; 1588 1594 1589 1595 dev = devm_kzalloc(&intf->dev, sizeof(*dev), GFP_KERNEL); 1590 1596 if (!dev) ··· 1602 1608 1603 1609 usb_set_intfdata(intf, dev); 1604 1610 1605 - err = kvaser_usb_get_software_info(dev); 1611 + /* On some x86 laptops, plugging a Kvaser device again after 1612 + * an unplug makes the firmware always ignore the very 
first 1613 + * command. For such a case, provide some room for retries 1614 + * instead of completely exiting the driver. 1615 + */ 1616 + do { 1617 + err = kvaser_usb_get_software_info(dev); 1618 + } while (--retry && err == -ETIMEDOUT); 1619 + 1606 1620 if (err) { 1607 1621 dev_err(&intf->dev, 1608 1622 "Cannot get software infos, error %d\n", err);
+5 -4
drivers/net/ethernet/amd/xgbe/xgbe-common.h
··· 767 767 #define MTL_Q_RQOMR 0x40 768 768 #define MTL_Q_RQMPOCR 0x44 769 769 #define MTL_Q_RQDR 0x4c 770 + #define MTL_Q_RQFCR 0x50 770 771 #define MTL_Q_IER 0x70 771 772 #define MTL_Q_ISR 0x74 772 773 773 774 /* MTL queue register entry bit positions and sizes */ 775 + #define MTL_Q_RQFCR_RFA_INDEX 1 776 + #define MTL_Q_RQFCR_RFA_WIDTH 6 777 + #define MTL_Q_RQFCR_RFD_INDEX 17 778 + #define MTL_Q_RQFCR_RFD_WIDTH 6 774 779 #define MTL_Q_RQOMR_EHFC_INDEX 7 775 780 #define MTL_Q_RQOMR_EHFC_WIDTH 1 776 - #define MTL_Q_RQOMR_RFA_INDEX 8 777 - #define MTL_Q_RQOMR_RFA_WIDTH 3 778 - #define MTL_Q_RQOMR_RFD_INDEX 13 779 - #define MTL_Q_RQOMR_RFD_WIDTH 3 780 781 #define MTL_Q_RQOMR_RQS_INDEX 16 781 782 #define MTL_Q_RQOMR_RQS_WIDTH 9 782 783 #define MTL_Q_RQOMR_RSF_INDEX 5
+2 -2
drivers/net/ethernet/amd/xgbe/xgbe-dev.c
··· 2079 2079 2080 2080 for (i = 0; i < pdata->rx_q_count; i++) { 2081 2081 /* Activate flow control when less than 4k left in fifo */ 2082 - XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, RFA, 2); 2082 + XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQFCR, RFA, 2); 2083 2083 2084 2084 /* De-activate flow control when more than 6k left in fifo */ 2085 - XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQOMR, RFD, 4); 2085 + XGMAC_MTL_IOWRITE_BITS(pdata, i, MTL_Q_RQFCR, RFD, 4); 2086 2086 } 2087 2087 } 2088 2088
+1 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
··· 3175 3175 } 3176 3176 #endif 3177 3177 if (!bnx2x_fp_lock_napi(fp)) 3178 - return work_done; 3178 + return budget; 3179 3179 3180 3180 for_each_cos_in_tx_queue(fp, cos) 3181 3181 if (bnx2x_tx_queue_has_work(fp->txdata_ptr[cos]))
+1 -1
drivers/net/ethernet/cisco/enic/enic_main.c
··· 1335 1335 int err; 1336 1336 1337 1337 if (!enic_poll_lock_napi(&enic->rq[rq])) 1338 - return work_done; 1338 + return budget; 1339 1339 /* Service RQ 1340 1340 */ 1341 1341
+49 -10
drivers/net/ethernet/marvell/mv643xx_eth.c
··· 192 192 #define IS_TSO_HEADER(txq, addr) \ 193 193 ((addr >= txq->tso_hdrs_dma) && \ 194 194 (addr < txq->tso_hdrs_dma + txq->tx_ring_size * TSO_HEADER_SIZE)) 195 + 196 + #define DESC_DMA_MAP_SINGLE 0 197 + #define DESC_DMA_MAP_PAGE 1 198 + 195 199 /* 196 200 * RX/TX descriptors. 197 201 */ ··· 366 362 dma_addr_t tso_hdrs_dma; 367 363 368 364 struct tx_desc *tx_desc_area; 365 + char *tx_desc_mapping; /* array to track the type of the dma mapping */ 369 366 dma_addr_t tx_desc_dma; 370 367 int tx_desc_area_size; 371 368 ··· 755 750 if (txq->tx_curr_desc == txq->tx_ring_size) 756 751 txq->tx_curr_desc = 0; 757 752 desc = &txq->tx_desc_area[tx_index]; 753 + txq->tx_desc_mapping[tx_index] = DESC_DMA_MAP_SINGLE; 758 754 759 755 desc->l4i_chk = 0; 760 756 desc->byte_cnt = length; ··· 885 879 skb_frag_t *this_frag; 886 880 int tx_index; 887 881 struct tx_desc *desc; 888 - void *addr; 889 882 890 883 this_frag = &skb_shinfo(skb)->frags[frag]; 891 - addr = page_address(this_frag->page.p) + this_frag->page_offset; 892 884 tx_index = txq->tx_curr_desc++; 893 885 if (txq->tx_curr_desc == txq->tx_ring_size) 894 886 txq->tx_curr_desc = 0; 895 887 desc = &txq->tx_desc_area[tx_index]; 888 + txq->tx_desc_mapping[tx_index] = DESC_DMA_MAP_PAGE; 896 889 897 890 /* 898 891 * The last fragment will generate an interrupt ··· 907 902 908 903 desc->l4i_chk = 0; 909 904 desc->byte_cnt = skb_frag_size(this_frag); 910 - desc->buf_ptr = dma_map_single(mp->dev->dev.parent, addr, 911 - desc->byte_cnt, DMA_TO_DEVICE); 905 + desc->buf_ptr = skb_frag_dma_map(mp->dev->dev.parent, 906 + this_frag, 0, desc->byte_cnt, 907 + DMA_TO_DEVICE); 912 908 } 913 909 } 914 910 ··· 942 936 if (txq->tx_curr_desc == txq->tx_ring_size) 943 937 txq->tx_curr_desc = 0; 944 938 desc = &txq->tx_desc_area[tx_index]; 939 + txq->tx_desc_mapping[tx_index] = DESC_DMA_MAP_SINGLE; 945 940 946 941 if (nr_frags) { 947 942 txq_submit_frag_skb(txq, skb); ··· 1054 1047 int tx_index; 1055 1048 struct tx_desc *desc; 1056 1049 u32 
cmd_sts; 1050 + char desc_dma_map; 1057 1051 1058 1052 tx_index = txq->tx_used_desc; 1059 1053 desc = &txq->tx_desc_area[tx_index]; 1054 + desc_dma_map = txq->tx_desc_mapping[tx_index]; 1055 + 1060 1056 cmd_sts = desc->cmd_sts; 1061 1057 1062 1058 if (cmd_sts & BUFFER_OWNED_BY_DMA) { ··· 1075 1065 reclaimed++; 1076 1066 txq->tx_desc_count--; 1077 1067 1078 - if (!IS_TSO_HEADER(txq, desc->buf_ptr)) 1079 - dma_unmap_single(mp->dev->dev.parent, desc->buf_ptr, 1080 - desc->byte_cnt, DMA_TO_DEVICE); 1068 + if (!IS_TSO_HEADER(txq, desc->buf_ptr)) { 1069 + 1070 + if (desc_dma_map == DESC_DMA_MAP_PAGE) 1071 + dma_unmap_page(mp->dev->dev.parent, 1072 + desc->buf_ptr, 1073 + desc->byte_cnt, 1074 + DMA_TO_DEVICE); 1075 + else 1076 + dma_unmap_single(mp->dev->dev.parent, 1077 + desc->buf_ptr, 1078 + desc->byte_cnt, 1079 + DMA_TO_DEVICE); 1080 + } 1081 1081 1082 1082 if (cmd_sts & TX_ENABLE_INTERRUPT) { 1083 1083 struct sk_buff *skb = __skb_dequeue(&txq->tx_skb); ··· 2016 1996 struct tx_queue *txq = mp->txq + index; 2017 1997 struct tx_desc *tx_desc; 2018 1998 int size; 1999 + int ret; 2019 2000 int i; 2020 2001 2021 2002 txq->index = index; ··· 2069 2048 nexti * sizeof(struct tx_desc); 2070 2049 } 2071 2050 2051 + txq->tx_desc_mapping = kcalloc(txq->tx_ring_size, sizeof(char), 2052 + GFP_KERNEL); 2053 + if (!txq->tx_desc_mapping) { 2054 + ret = -ENOMEM; 2055 + goto err_free_desc_area; 2056 + } 2057 + 2072 2058 /* Allocate DMA buffers for TSO MAC/IP/TCP headers */ 2073 2059 txq->tso_hdrs = dma_alloc_coherent(mp->dev->dev.parent, 2074 2060 txq->tx_ring_size * TSO_HEADER_SIZE, 2075 2061 &txq->tso_hdrs_dma, GFP_KERNEL); 2076 2062 if (txq->tso_hdrs == NULL) { 2077 - dma_free_coherent(mp->dev->dev.parent, txq->tx_desc_area_size, 2078 - txq->tx_desc_area, txq->tx_desc_dma); 2079 - return -ENOMEM; 2063 + ret = -ENOMEM; 2064 + goto err_free_desc_mapping; 2080 2065 } 2081 2066 skb_queue_head_init(&txq->tx_skb); 2082 2067 2083 2068 return 0; 2069 + 2070 + err_free_desc_mapping: 2071 + 
kfree(txq->tx_desc_mapping); 2072 + err_free_desc_area: 2073 + if (index == 0 && size <= mp->tx_desc_sram_size) 2074 + iounmap(txq->tx_desc_area); 2075 + else 2076 + dma_free_coherent(mp->dev->dev.parent, txq->tx_desc_area_size, 2077 + txq->tx_desc_area, txq->tx_desc_dma); 2078 + return ret; 2084 2079 } 2085 2080 2086 2081 static void txq_deinit(struct tx_queue *txq) ··· 2114 2077 else 2115 2078 dma_free_coherent(mp->dev->dev.parent, txq->tx_desc_area_size, 2116 2079 txq->tx_desc_area, txq->tx_desc_dma); 2080 + kfree(txq->tx_desc_mapping); 2081 + 2117 2082 if (txq->tso_hdrs) 2118 2083 dma_free_coherent(mp->dev->dev.parent, 2119 2084 txq->tx_ring_size * TSO_HEADER_SIZE,
+4 -1
drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
··· 2388 2388 2389 2389 work_done = netxen_process_rcv_ring(sds_ring, budget); 2390 2390 2391 - if ((work_done < budget) && tx_complete) { 2391 + if (!tx_complete) 2392 + work_done = budget; 2393 + 2394 + if (work_done < budget) { 2392 2395 napi_complete(&sds_ring->napi); 2393 2396 if (test_bit(__NX_DEV_UP, &adapter->state)) 2394 2397 netxen_nic_enable_int(sds_ring);
+114 -52
drivers/net/ethernet/renesas/sh_eth.c
··· 396 396 [TSU_ADRL31] = 0x01fc, 397 397 }; 398 398 399 + static void sh_eth_rcv_snd_disable(struct net_device *ndev); 400 + static struct net_device_stats *sh_eth_get_stats(struct net_device *ndev); 401 + 399 402 static bool sh_eth_is_gether(struct sh_eth_private *mdp) 400 403 { 401 404 return mdp->reg_offset == sh_eth_offset_gigabit; ··· 1123 1120 int rx_ringsize = sizeof(*rxdesc) * mdp->num_rx_ring; 1124 1121 int tx_ringsize = sizeof(*txdesc) * mdp->num_tx_ring; 1125 1122 int skbuff_size = mdp->rx_buf_sz + SH_ETH_RX_ALIGN - 1; 1123 + dma_addr_t dma_addr; 1126 1124 1127 1125 mdp->cur_rx = 0; 1128 1126 mdp->cur_tx = 0; ··· 1137 1133 /* skb */ 1138 1134 mdp->rx_skbuff[i] = NULL; 1139 1135 skb = netdev_alloc_skb(ndev, skbuff_size); 1140 - mdp->rx_skbuff[i] = skb; 1141 1136 if (skb == NULL) 1142 1137 break; 1143 1138 sh_eth_set_receive_align(skb); ··· 1145 1142 rxdesc = &mdp->rx_ring[i]; 1146 1143 /* The size of the buffer is a multiple of 16 bytes. */ 1147 1144 rxdesc->buffer_length = ALIGN(mdp->rx_buf_sz, 16); 1148 - dma_map_single(&ndev->dev, skb->data, rxdesc->buffer_length, 1149 - DMA_FROM_DEVICE); 1150 - rxdesc->addr = virt_to_phys(skb->data); 1145 + dma_addr = dma_map_single(&ndev->dev, skb->data, 1146 + rxdesc->buffer_length, 1147 + DMA_FROM_DEVICE); 1148 + if (dma_mapping_error(&ndev->dev, dma_addr)) { 1149 + kfree_skb(skb); 1150 + break; 1151 + } 1152 + mdp->rx_skbuff[i] = skb; 1153 + rxdesc->addr = dma_addr; 1151 1154 rxdesc->status = cpu_to_edmac(mdp, RD_RACT | RD_RFP); 1152 1155 1153 1156 /* Rx descriptor address set */ ··· 1325 1316 RFLR); 1326 1317 1327 1318 sh_eth_write(ndev, sh_eth_read(ndev, EESR), EESR); 1328 - if (start) 1319 + if (start) { 1320 + mdp->irq_enabled = true; 1329 1321 sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR); 1322 + } 1330 1323 1331 1324 /* PAUSE Prohibition */ 1332 1325 val = (sh_eth_read(ndev, ECMR) & ECMR_DM) | ··· 1365 1354 } 1366 1355 1367 1356 return ret; 1357 + } 1358 + 1359 + static void sh_eth_dev_exit(struct 
net_device *ndev) 1360 + { 1361 + struct sh_eth_private *mdp = netdev_priv(ndev); 1362 + int i; 1363 + 1364 + /* Deactivate all TX descriptors, so DMA should stop at next 1365 + * packet boundary if it's currently running 1366 + */ 1367 + for (i = 0; i < mdp->num_tx_ring; i++) 1368 + mdp->tx_ring[i].status &= ~cpu_to_edmac(mdp, TD_TACT); 1369 + 1370 + /* Disable TX FIFO egress to MAC */ 1371 + sh_eth_rcv_snd_disable(ndev); 1372 + 1373 + /* Stop RX DMA at next packet boundary */ 1374 + sh_eth_write(ndev, 0, EDRRR); 1375 + 1376 + /* Aside from TX DMA, we can't tell when the hardware is 1377 + * really stopped, so we need to reset to make sure. 1378 + * Before doing that, wait for long enough to *probably* 1379 + * finish transmitting the last packet and poll stats. 1380 + */ 1381 + msleep(2); /* max frame time at 10 Mbps < 1250 us */ 1382 + sh_eth_get_stats(ndev); 1383 + sh_eth_reset(ndev); 1368 1384 } 1369 1385 1370 1386 /* free Tx skb function */ ··· 1438 1400 u16 pkt_len = 0; 1439 1401 u32 desc_status; 1440 1402 int skbuff_size = mdp->rx_buf_sz + SH_ETH_RX_ALIGN - 1; 1403 + dma_addr_t dma_addr; 1441 1404 1442 1405 boguscnt = min(boguscnt, *quota); 1443 1406 limit = boguscnt; ··· 1486 1447 mdp->rx_skbuff[entry] = NULL; 1487 1448 if (mdp->cd->rpadir) 1488 1449 skb_reserve(skb, NET_IP_ALIGN); 1489 - dma_sync_single_for_cpu(&ndev->dev, rxdesc->addr, 1490 - ALIGN(mdp->rx_buf_sz, 16), 1491 - DMA_FROM_DEVICE); 1450 + dma_unmap_single(&ndev->dev, rxdesc->addr, 1451 + ALIGN(mdp->rx_buf_sz, 16), 1452 + DMA_FROM_DEVICE); 1492 1453 skb_put(skb, pkt_len); 1493 1454 skb->protocol = eth_type_trans(skb, ndev); 1494 1455 netif_receive_skb(skb); ··· 1508 1469 1509 1470 if (mdp->rx_skbuff[entry] == NULL) { 1510 1471 skb = netdev_alloc_skb(ndev, skbuff_size); 1511 - mdp->rx_skbuff[entry] = skb; 1512 1472 if (skb == NULL) 1513 1473 break; /* Better luck next round. 
*/ 1514 1474 sh_eth_set_receive_align(skb); 1515 - dma_map_single(&ndev->dev, skb->data, 1516 - rxdesc->buffer_length, DMA_FROM_DEVICE); 1475 + dma_addr = dma_map_single(&ndev->dev, skb->data, 1476 + rxdesc->buffer_length, 1477 + DMA_FROM_DEVICE); 1478 + if (dma_mapping_error(&ndev->dev, dma_addr)) { 1479 + kfree_skb(skb); 1480 + break; 1481 + } 1482 + mdp->rx_skbuff[entry] = skb; 1517 1483 1518 1484 skb_checksum_none_assert(skb); 1519 - rxdesc->addr = virt_to_phys(skb->data); 1485 + rxdesc->addr = dma_addr; 1520 1486 } 1521 1487 if (entry >= mdp->num_rx_ring - 1) 1522 1488 rxdesc->status |= ··· 1617 1573 if (intr_status & EESR_RFRMER) { 1618 1574 /* Receive Frame Overflow int */ 1619 1575 ndev->stats.rx_frame_errors++; 1620 - netif_err(mdp, rx_err, ndev, "Receive Abort\n"); 1621 1576 } 1622 1577 } 1623 1578 ··· 1635 1592 if (intr_status & EESR_RDE) { 1636 1593 /* Receive Descriptor Empty int */ 1637 1594 ndev->stats.rx_over_errors++; 1638 - netif_err(mdp, rx_err, ndev, "Receive Descriptor Empty\n"); 1639 1595 } 1640 1596 1641 1597 if (intr_status & EESR_RFE) { 1642 1598 /* Receive FIFO Overflow int */ 1643 1599 ndev->stats.rx_fifo_errors++; 1644 - netif_err(mdp, rx_err, ndev, "Receive FIFO Overflow\n"); 1645 1600 } 1646 1601 1647 1602 if (!mdp->cd->no_ade && (intr_status & EESR_ADE)) { ··· 1694 1653 if (intr_status & (EESR_RX_CHECK | cd->tx_check | cd->eesr_err_check)) 1695 1654 ret = IRQ_HANDLED; 1696 1655 else 1697 - goto other_irq; 1656 + goto out; 1657 + 1658 + if (!likely(mdp->irq_enabled)) { 1659 + sh_eth_write(ndev, 0, EESIPR); 1660 + goto out; 1661 + } 1698 1662 1699 1663 if (intr_status & EESR_RX_CHECK) { 1700 1664 if (napi_schedule_prep(&mdp->napi)) { ··· 1730 1684 sh_eth_error(ndev, intr_status); 1731 1685 } 1732 1686 1733 - other_irq: 1687 + out: 1734 1688 spin_unlock(&mdp->lock); 1735 1689 1736 1690 return ret; ··· 1758 1712 napi_complete(napi); 1759 1713 1760 1714 /* Reenable Rx interrupts */ 1761 - sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR); 
1715 + if (mdp->irq_enabled) 1716 + sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR); 1762 1717 out: 1763 1718 return budget - quota; 1764 1719 } ··· 2015 1968 return -EINVAL; 2016 1969 2017 1970 if (netif_running(ndev)) { 1971 + netif_device_detach(ndev); 2018 1972 netif_tx_disable(ndev); 2019 - /* Disable interrupts by clearing the interrupt mask. */ 2020 - sh_eth_write(ndev, 0x0000, EESIPR); 2021 - /* Stop the chip's Tx and Rx processes. */ 2022 - sh_eth_write(ndev, 0, EDTRR); 2023 - sh_eth_write(ndev, 0, EDRRR); 2024 - synchronize_irq(ndev->irq); 2025 - } 2026 1973 2027 - /* Free all the skbuffs in the Rx queue. */ 2028 - sh_eth_ring_free(ndev); 2029 - /* Free DMA buffer */ 2030 - sh_eth_free_dma_buffer(mdp); 1974 + /* Serialise with the interrupt handler and NAPI, then 1975 + * disable interrupts. We have to clear the 1976 + * irq_enabled flag first to ensure that interrupts 1977 + * won't be re-enabled. 1978 + */ 1979 + mdp->irq_enabled = false; 1980 + synchronize_irq(ndev->irq); 1981 + napi_synchronize(&mdp->napi); 1982 + sh_eth_write(ndev, 0x0000, EESIPR); 1983 + 1984 + sh_eth_dev_exit(ndev); 1985 + 1986 + /* Free all the skbuffs in the Rx queue. 
*/ 1987 + sh_eth_ring_free(ndev); 1988 + /* Free DMA buffer */ 1989 + sh_eth_free_dma_buffer(mdp); 1990 + } 2031 1991 2032 1992 /* Set new parameters */ 2033 1993 mdp->num_rx_ring = ring->rx_pending; 2034 1994 mdp->num_tx_ring = ring->tx_pending; 2035 1995 2036 - ret = sh_eth_ring_init(ndev); 2037 - if (ret < 0) { 2038 - netdev_err(ndev, "%s: sh_eth_ring_init failed.\n", __func__); 2039 - return ret; 2040 - } 2041 - ret = sh_eth_dev_init(ndev, false); 2042 - if (ret < 0) { 2043 - netdev_err(ndev, "%s: sh_eth_dev_init failed.\n", __func__); 2044 - return ret; 2045 - } 2046 - 2047 1996 if (netif_running(ndev)) { 1997 + ret = sh_eth_ring_init(ndev); 1998 + if (ret < 0) { 1999 + netdev_err(ndev, "%s: sh_eth_ring_init failed.\n", 2000 + __func__); 2001 + return ret; 2002 + } 2003 + ret = sh_eth_dev_init(ndev, false); 2004 + if (ret < 0) { 2005 + netdev_err(ndev, "%s: sh_eth_dev_init failed.\n", 2006 + __func__); 2007 + return ret; 2008 + } 2009 + 2010 + mdp->irq_enabled = true; 2048 2011 sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR); 2049 2012 /* Setting the Rx mode will start the Rx process. 
*/ 2050 2013 sh_eth_write(ndev, EDRRR_R, EDRRR); 2051 - netif_wake_queue(ndev); 2014 + netif_device_attach(ndev); 2052 2015 } 2053 2016 2054 2017 return 0; ··· 2174 2117 } 2175 2118 spin_unlock_irqrestore(&mdp->lock, flags); 2176 2119 2120 + if (skb_padto(skb, ETH_ZLEN)) 2121 + return NETDEV_TX_OK; 2122 + 2177 2123 entry = mdp->cur_tx % mdp->num_tx_ring; 2178 2124 mdp->tx_skbuff[entry] = skb; 2179 2125 txdesc = &mdp->tx_ring[entry]; ··· 2186 2126 skb->len + 2); 2187 2127 txdesc->addr = dma_map_single(&ndev->dev, skb->data, skb->len, 2188 2128 DMA_TO_DEVICE); 2189 - if (skb->len < ETH_ZLEN) 2190 - txdesc->buffer_length = ETH_ZLEN; 2191 - else 2192 - txdesc->buffer_length = skb->len; 2129 + if (dma_mapping_error(&ndev->dev, txdesc->addr)) { 2130 + kfree_skb(skb); 2131 + return NETDEV_TX_OK; 2132 + } 2133 + txdesc->buffer_length = skb->len; 2193 2134 2194 2135 if (entry >= mdp->num_tx_ring - 1) 2195 2136 txdesc->status |= cpu_to_edmac(mdp, TD_TACT | TD_TDLE); ··· 2242 2181 2243 2182 netif_stop_queue(ndev); 2244 2183 2245 - /* Disable interrupts by clearing the interrupt mask. */ 2184 + /* Serialise with the interrupt handler and NAPI, then disable 2185 + * interrupts. We have to clear the irq_enabled flag first to 2186 + * ensure that interrupts won't be re-enabled. 2187 + */ 2188 + mdp->irq_enabled = false; 2189 + synchronize_irq(ndev->irq); 2190 + napi_disable(&mdp->napi); 2246 2191 sh_eth_write(ndev, 0x0000, EESIPR); 2247 2192 2248 - /* Stop the chip's Tx and Rx processes. */ 2249 - sh_eth_write(ndev, 0, EDTRR); 2250 - sh_eth_write(ndev, 0, EDRRR); 2193 + sh_eth_dev_exit(ndev); 2251 2194 2252 - sh_eth_get_stats(ndev); 2253 2195 /* PHY Disconnect */ 2254 2196 if (mdp->phydev) { 2255 2197 phy_stop(mdp->phydev); ··· 2261 2197 } 2262 2198 2263 2199 free_irq(ndev->irq, ndev); 2264 - 2265 - napi_disable(&mdp->napi); 2266 2200 2267 2201 /* Free all the skbuffs in the Rx queue. */ 2268 2202 sh_eth_ring_free(ndev);
+1
drivers/net/ethernet/renesas/sh_eth.h
···
513 513 	u32 rx_buf_sz;		/* Based on MTU+slack. */
514 514 	int edmac_endian;
515 515 	struct napi_struct napi;
    516 +	bool irq_enabled;
516 517 	/* MII transceiver section. */
517 518 	u32 phy_id;		/* PHY ID */
518 519 	struct mii_bus *mii_bus;	/* MDIO bus control */
+4 -1
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 2778 2778 * @addr: iobase memory address 2779 2779 * Description: this is the main probe function used to 2780 2780 * call the alloc_etherdev, allocate the priv structure. 2781 + * Return: 2782 + * on success the new private structure is returned, otherwise the error 2783 + * pointer. 2781 2784 */ 2782 2785 struct stmmac_priv *stmmac_dvr_probe(struct device *device, 2783 2786 struct plat_stmmacenet_data *plat_dat, ··· 2792 2789 2793 2790 ndev = alloc_etherdev(sizeof(struct stmmac_priv)); 2794 2791 if (!ndev) 2795 - return NULL; 2792 + return ERR_PTR(-ENOMEM); 2796 2793 2797 2794 SET_NETDEV_DEV(ndev, device); 2798 2795
+22
drivers/net/ethernet/ti/cpsw.c
··· 1683 1683 if (vid == priv->data.default_vlan) 1684 1684 return 0; 1685 1685 1686 + if (priv->data.dual_emac) { 1687 + /* In dual EMAC, reserved VLAN id should not be used for 1688 + * creating VLAN interfaces as this can break the dual 1689 + * EMAC port separation 1690 + */ 1691 + int i; 1692 + 1693 + for (i = 0; i < priv->data.slaves; i++) { 1694 + if (vid == priv->slaves[i].port_vlan) 1695 + return -EINVAL; 1696 + } 1697 + } 1698 + 1686 1699 dev_info(priv->dev, "Adding vlanid %d to vlan filter\n", vid); 1687 1700 return cpsw_add_vlan_ale_entry(priv, vid); 1688 1701 } ··· 1708 1695 1709 1696 if (vid == priv->data.default_vlan) 1710 1697 return 0; 1698 + 1699 + if (priv->data.dual_emac) { 1700 + int i; 1701 + 1702 + for (i = 0; i < priv->data.slaves; i++) { 1703 + if (vid == priv->slaves[i].port_vlan) 1704 + return -EINVAL; 1705 + } 1706 + } 1711 1707 1712 1708 dev_info(priv->dev, "removing vlanid %d from vlan filter\n", vid); 1713 1709 ret = cpsw_ale_del_vlan(priv->ale, vid, 0);
+4 -2
drivers/net/ipvlan/ipvlan_core.c
···
377 377 	};
378 378
379 379 	dst = ip6_route_output(dev_net(dev), NULL, &fl6);
380     -	if (IS_ERR(dst))
    380 +	if (dst->error) {
    381 +		ret = dst->error;
    382 +		dst_release(dst);
381 383 		goto err;
382     -
    384 +	}
383 385 	skb_dst_drop(skb);
384 386 	skb_dst_set(skb, dst);
385 387 	err = ip6_local_out(skb);
+3 -4
drivers/net/wireless/ath/ath9k/main.c
··· 285 285 286 286 __ath_cancel_work(sc); 287 287 288 + disable_irq(sc->irq); 288 289 tasklet_disable(&sc->intr_tq); 289 290 tasklet_disable(&sc->bcon_tasklet); 290 291 spin_lock_bh(&sc->sc_pcu_lock); ··· 332 331 r = -EIO; 333 332 334 333 out: 334 + enable_irq(sc->irq); 335 335 spin_unlock_bh(&sc->sc_pcu_lock); 336 336 tasklet_enable(&sc->bcon_tasklet); 337 337 tasklet_enable(&sc->intr_tq); ··· 514 512 if (!ah || test_bit(ATH_OP_INVALID, &common->op_flags)) 515 513 return IRQ_NONE; 516 514 517 - if (!AR_SREV_9100(ah) && test_bit(ATH_OP_HW_RESET, &common->op_flags)) 518 - return IRQ_NONE; 519 - 520 515 /* shared irq, not for us */ 521 516 if (!ath9k_hw_intrpend(ah)) 522 517 return IRQ_NONE; ··· 528 529 ath9k_debug_sync_cause(sc, sync_cause); 529 530 status &= ah->imask; /* discard unasked-for bits */ 530 531 531 - if (AR_SREV_9100(ah) && test_bit(ATH_OP_HW_RESET, &common->op_flags)) 532 + if (test_bit(ATH_OP_HW_RESET, &common->op_flags)) 532 533 return IRQ_HANDLED; 533 534 534 535 /*
+2
drivers/net/wireless/iwlwifi/iwl-fw-file.h
···
246 246  * @IWL_UCODE_TLV_API_BASIC_DWELL: use only basic dwell time in scan command,
247 247  *	regardless of the band or the number of the probes. FW will calculate
248 248  *	the actual dwell time.
    249 + * @IWL_UCODE_TLV_API_SINGLE_SCAN_EBS: EBS is supported for single scans too.
249 250  */
250 251 enum iwl_ucode_tlv_api {
251 252 	IWL_UCODE_TLV_API_WOWLAN_CONFIG_TID = BIT(0),
···
258 257 	IWL_UCODE_TLV_API_SF_NO_DUMMY_NOTIF = BIT(7),
259 258 	IWL_UCODE_TLV_API_FRAGMENTED_SCAN = BIT(8),
260 259 	IWL_UCODE_TLV_API_BASIC_DWELL = BIT(13),
    260 +	IWL_UCODE_TLV_API_SINGLE_SCAN_EBS = BIT(16),
261 261 };
262 262
263 263 /**
+5 -2
drivers/net/wireless/iwlwifi/mvm/fw-api-scan.h
···
653 653 };
654 654
655 655 /* iwl_scan_channel_opt - CHANNEL_OPTIMIZATION_API_S
656     -  * @flags: enum iwl_scan_channel_flgs
657     -  * @non_ebs_ratio: how many regular scan iteration before EBS
    656 +  * @flags: enum iwl_scan_channel_flags
    657 +  * @non_ebs_ratio: defines the ratio of number of scan iterations where EBS is
    658 +  *	involved.
    659 +  *	1 - EBS is disabled.
    660 +  *	2 - every second scan will be full scan(and so on).
658 661  */
659 662 struct iwl_scan_channel_opt {
660 663 	__le16 flags;
+9 -11
drivers/net/wireless/iwlwifi/mvm/mac80211.c
··· 3343 3343 msk |= mvmsta->tfd_queue_msk; 3344 3344 } 3345 3345 3346 - if (drop) { 3347 - if (iwl_mvm_flush_tx_path(mvm, msk, true)) 3348 - IWL_ERR(mvm, "flush request fail\n"); 3349 - mutex_unlock(&mvm->mutex); 3350 - } else { 3351 - mutex_unlock(&mvm->mutex); 3346 + msk &= ~BIT(vif->hw_queue[IEEE80211_AC_VO]); 3352 3347 3353 - /* this can take a while, and we may need/want other operations 3354 - * to succeed while doing this, so do it without the mutex held 3355 - */ 3356 - iwl_trans_wait_tx_queue_empty(mvm->trans, msk); 3357 - } 3348 + if (iwl_mvm_flush_tx_path(mvm, msk, true)) 3349 + IWL_ERR(mvm, "flush request fail\n"); 3350 + mutex_unlock(&mvm->mutex); 3351 + 3352 + /* this can take a while, and we may need/want other operations 3353 + * to succeed while doing this, so do it without the mutex held 3354 + */ 3355 + iwl_trans_wait_tx_queue_empty(mvm->trans, msk); 3358 3356 } 3359 3357 3360 3358 const struct ieee80211_ops iwl_mvm_hw_ops = {
+41 -12
drivers/net/wireless/iwlwifi/mvm/scan.c
··· 72 72 73 73 #define IWL_PLCP_QUIET_THRESH 1 74 74 #define IWL_ACTIVE_QUIET_TIME 10 75 + #define IWL_DENSE_EBS_SCAN_RATIO 5 76 + #define IWL_SPARSE_EBS_SCAN_RATIO 1 75 77 76 78 struct iwl_mvm_scan_params { 77 79 u32 max_out_time; ··· 1107 1105 return iwl_umac_scan_stop(mvm, IWL_UMAC_SCAN_UID_SCHED_SCAN, 1108 1106 notify); 1109 1107 1108 + if (mvm->scan_status == IWL_MVM_SCAN_NONE) 1109 + return 0; 1110 + 1111 + if (iwl_mvm_is_radio_killed(mvm)) 1112 + goto out; 1113 + 1110 1114 if (mvm->scan_status != IWL_MVM_SCAN_SCHED && 1111 1115 (!(mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_LMAC_SCAN) || 1112 1116 mvm->scan_status != IWL_MVM_SCAN_OS)) { ··· 1149 1141 if (mvm->scan_status == IWL_MVM_SCAN_OS) 1150 1142 iwl_mvm_unref(mvm, IWL_MVM_REF_SCAN); 1151 1143 1144 + out: 1152 1145 mvm->scan_status = IWL_MVM_SCAN_NONE; 1153 1146 1154 1147 if (notify) { ··· 1306 1297 cmd->scan_prio = cpu_to_le32(IWL_SCAN_PRIORITY_HIGH); 1307 1298 cmd->iter_num = cpu_to_le32(1); 1308 1299 1309 - if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_EBS_SUPPORT && 1310 - mvm->last_ebs_successful) { 1311 - cmd->channel_opt[0].flags = 1312 - cpu_to_le16(IWL_SCAN_CHANNEL_FLAG_EBS | 1313 - IWL_SCAN_CHANNEL_FLAG_EBS_ACCURATE | 1314 - IWL_SCAN_CHANNEL_FLAG_CACHE_ADD); 1315 - cmd->channel_opt[1].flags = 1316 - cpu_to_le16(IWL_SCAN_CHANNEL_FLAG_EBS | 1317 - IWL_SCAN_CHANNEL_FLAG_EBS_ACCURATE | 1318 - IWL_SCAN_CHANNEL_FLAG_CACHE_ADD); 1319 - } 1320 - 1321 1300 if (iwl_mvm_rrm_scan_needed(mvm)) 1322 1301 cmd->scan_flags |= 1323 1302 cpu_to_le32(IWL_MVM_LMAC_SCAN_FLAGS_RRM_ENABLED); ··· 1379 1382 cmd->schedule[1].delay = 0; 1380 1383 cmd->schedule[1].iterations = 0; 1381 1384 cmd->schedule[1].full_scan_mul = 0; 1385 + 1386 + if (mvm->fw->ucode_capa.api[0] & IWL_UCODE_TLV_API_SINGLE_SCAN_EBS && 1387 + mvm->last_ebs_successful) { 1388 + cmd->channel_opt[0].flags = 1389 + cpu_to_le16(IWL_SCAN_CHANNEL_FLAG_EBS | 1390 + IWL_SCAN_CHANNEL_FLAG_EBS_ACCURATE | 1391 + IWL_SCAN_CHANNEL_FLAG_CACHE_ADD); 1392 + 
cmd->channel_opt[0].non_ebs_ratio = 1393 + cpu_to_le16(IWL_DENSE_EBS_SCAN_RATIO); 1394 + cmd->channel_opt[1].flags = 1395 + cpu_to_le16(IWL_SCAN_CHANNEL_FLAG_EBS | 1396 + IWL_SCAN_CHANNEL_FLAG_EBS_ACCURATE | 1397 + IWL_SCAN_CHANNEL_FLAG_CACHE_ADD); 1398 + cmd->channel_opt[1].non_ebs_ratio = 1399 + cpu_to_le16(IWL_SPARSE_EBS_SCAN_RATIO); 1400 + } 1382 1401 1383 1402 for (i = 1; i <= req->req.n_ssids; i++) 1384 1403 ssid_bitmap |= BIT(i); ··· 1495 1482 cmd->schedule[1].delay = cpu_to_le16(req->interval / MSEC_PER_SEC); 1496 1483 cmd->schedule[1].iterations = 0xff; 1497 1484 cmd->schedule[1].full_scan_mul = IWL_FULL_SCAN_MULTIPLIER; 1485 + 1486 + if (mvm->fw->ucode_capa.flags & IWL_UCODE_TLV_FLAGS_EBS_SUPPORT && 1487 + mvm->last_ebs_successful) { 1488 + cmd->channel_opt[0].flags = 1489 + cpu_to_le16(IWL_SCAN_CHANNEL_FLAG_EBS | 1490 + IWL_SCAN_CHANNEL_FLAG_EBS_ACCURATE | 1491 + IWL_SCAN_CHANNEL_FLAG_CACHE_ADD); 1492 + cmd->channel_opt[0].non_ebs_ratio = 1493 + cpu_to_le16(IWL_DENSE_EBS_SCAN_RATIO); 1494 + cmd->channel_opt[1].flags = 1495 + cpu_to_le16(IWL_SCAN_CHANNEL_FLAG_EBS | 1496 + IWL_SCAN_CHANNEL_FLAG_EBS_ACCURATE | 1497 + IWL_SCAN_CHANNEL_FLAG_CACHE_ADD); 1498 + cmd->channel_opt[1].non_ebs_ratio = 1499 + cpu_to_le16(IWL_SPARSE_EBS_SCAN_RATIO); 1500 + } 1498 1501 1499 1502 iwl_mvm_lmac_scan_cfg_channels(mvm, req->channels, req->n_channels, 1500 1503 ssid_bitmap, cmd);
+9 -2
drivers/net/wireless/iwlwifi/mvm/tx.c
··· 90 90 91 91 if (ieee80211_is_probe_resp(fc)) 92 92 tx_flags |= TX_CMD_FLG_TSF; 93 - else if (ieee80211_is_back_req(fc)) 94 - tx_flags |= TX_CMD_FLG_ACK | TX_CMD_FLG_BAR; 95 93 96 94 if (ieee80211_has_morefrags(fc)) 97 95 tx_flags |= TX_CMD_FLG_MORE_FRAG; ··· 98 100 u8 *qc = ieee80211_get_qos_ctl(hdr); 99 101 tx_cmd->tid_tspec = qc[0] & 0xf; 100 102 tx_flags &= ~TX_CMD_FLG_SEQ_CTL; 103 + } else if (ieee80211_is_back_req(fc)) { 104 + struct ieee80211_bar *bar = (void *)skb->data; 105 + u16 control = le16_to_cpu(bar->control); 106 + 107 + tx_flags |= TX_CMD_FLG_ACK | TX_CMD_FLG_BAR; 108 + tx_cmd->tid_tspec = (control & 109 + IEEE80211_BAR_CTRL_TID_INFO_MASK) >> 110 + IEEE80211_BAR_CTRL_TID_INFO_SHIFT; 111 + WARN_ON_ONCE(tx_cmd->tid_tspec >= IWL_MAX_TID_COUNT); 101 112 } else { 102 113 tx_cmd->tid_tspec = IWL_TID_NON_QOS; 103 114 if (info->flags & IEEE80211_TX_CTL_ASSIGN_SEQ)
-11
drivers/of/overlay.c
··· 114 114 ret = of_overlay_apply_one(ov, tchild, child); 115 115 if (ret) 116 116 return ret; 117 - 118 - /* The properties are already copied, now do the child nodes */ 119 - for_each_child_of_node(child, grandchild) { 120 - ret = of_overlay_apply_single_device_node(ov, tchild, grandchild); 121 - if (ret) { 122 - pr_err("%s: Failed to apply single node @%s/%s\n", 123 - __func__, tchild->full_name, 124 - grandchild->name); 125 - return ret; 126 - } 127 - } 128 117 } 129 118 130 119 return ret;
+10 -1
drivers/of/platform.c
··· 188 188 size = dev->coherent_dma_mask; 189 189 } else { 190 190 offset = PFN_DOWN(paddr - dma_addr); 191 - dev_dbg(dev, "dma_pfn_offset(%#08lx)\n", dev->dma_pfn_offset); 191 + dev_dbg(dev, "dma_pfn_offset(%#08lx)\n", offset); 192 192 } 193 193 dev->dma_pfn_offset = offset; 194 194 ··· 566 566 if (!of_node_check_flag(rd->dn->parent, OF_POPULATED_BUS)) 567 567 return NOTIFY_OK; /* not for us */ 568 568 569 + /* already populated? (driver using of_populate manually) */ 570 + if (of_node_check_flag(rd->dn, OF_POPULATED)) 571 + return NOTIFY_OK; 572 + 569 573 /* pdev_parent may be NULL when no bus platform device */ 570 574 pdev_parent = of_find_device_by_node(rd->dn->parent); 571 575 pdev = of_platform_device_create(rd->dn, NULL, ··· 585 581 break; 586 582 587 583 case OF_RECONFIG_CHANGE_REMOVE: 584 + 585 + /* already depopulated? */ 586 + if (!of_node_check_flag(rd->dn, OF_POPULATED)) 587 + return NOTIFY_OK; 588 + 588 589 /* find our device by node */ 589 590 pdev = of_find_device_by_node(rd->dn); 590 591 if (pdev == NULL)
+55
drivers/of/unittest-data/tests-overlay.dtsi
··· 176 176 }; 177 177 }; 178 178 179 + overlay10 { 180 + fragment@0 { 181 + target-path = "/testcase-data/overlay-node/test-bus"; 182 + __overlay__ { 183 + 184 + /* suppress DTC warning */ 185 + #address-cells = <1>; 186 + #size-cells = <0>; 187 + 188 + test-selftest10 { 189 + compatible = "selftest"; 190 + status = "okay"; 191 + reg = <10>; 192 + 193 + #address-cells = <1>; 194 + #size-cells = <0>; 195 + 196 + test-selftest101 { 197 + compatible = "selftest"; 198 + status = "okay"; 199 + reg = <1>; 200 + }; 201 + 202 + }; 203 + }; 204 + }; 205 + }; 206 + 207 + overlay11 { 208 + fragment@0 { 209 + target-path = "/testcase-data/overlay-node/test-bus"; 210 + __overlay__ { 211 + 212 + /* suppress DTC warning */ 213 + #address-cells = <1>; 214 + #size-cells = <0>; 215 + 216 + test-selftest11 { 217 + compatible = "selftest"; 218 + status = "okay"; 219 + reg = <11>; 220 + 221 + #address-cells = <1>; 222 + #size-cells = <0>; 223 + 224 + test-selftest111 { 225 + compatible = "selftest"; 226 + status = "okay"; 227 + reg = <1>; 228 + }; 229 + 230 + }; 231 + }; 232 + }; 233 + }; 179 234 }; 180 235 };
+39
drivers/of/unittest.c
··· 978 978 } 979 979 980 980 dev_dbg(dev, "%s for node @%s\n", __func__, np->full_name); 981 + 982 + of_platform_populate(np, NULL, NULL, &pdev->dev); 983 + 981 984 return 0; 982 985 } 983 986 ··· 1388 1385 selftest(1, "overlay test %d passed\n", 8); 1389 1386 } 1390 1387 1388 + /* test insertion of a bus with parent devices */ 1389 + static void of_selftest_overlay_10(void) 1390 + { 1391 + int ret; 1392 + char *child_path; 1393 + 1394 + /* device should disable */ 1395 + ret = of_selftest_apply_overlay_check(10, 10, 0, 1); 1396 + if (selftest(ret == 0, "overlay test %d failed; overlay application\n", 10)) 1397 + return; 1398 + 1399 + child_path = kasprintf(GFP_KERNEL, "%s/test-selftest101", 1400 + selftest_path(10)); 1401 + if (selftest(child_path, "overlay test %d failed; kasprintf\n", 10)) 1402 + return; 1403 + 1404 + ret = of_path_platform_device_exists(child_path); 1405 + kfree(child_path); 1406 + if (selftest(ret, "overlay test %d failed; no child device\n", 10)) 1407 + return; 1408 + } 1409 + 1410 + /* test insertion of a bus with parent devices (and revert) */ 1411 + static void of_selftest_overlay_11(void) 1412 + { 1413 + int ret; 1414 + 1415 + /* device should disable */ 1416 + ret = of_selftest_apply_revert_overlay_check(11, 11, 0, 1); 1417 + if (selftest(ret == 0, "overlay test %d failed; overlay application\n", 11)) 1418 + return; 1419 + } 1420 + 1391 1421 static void __init of_selftest_overlay(void) 1392 1422 { 1393 1423 struct device_node *bus_np = NULL; ··· 1468 1432 of_selftest_overlay_5(); 1469 1433 of_selftest_overlay_6(); 1470 1434 of_selftest_overlay_8(); 1435 + 1436 + of_selftest_overlay_10(); 1437 + of_selftest_overlay_11(); 1471 1438 1472 1439 out: 1473 1440 of_node_put(bus_np);
+2 -3
drivers/parisc/lba_pci.c
···
694 694 		int i;
695 695 		/* PCI-PCI Bridge */
696 696 		pci_read_bridge_bases(bus);
697     -		for (i = PCI_BRIDGE_RESOURCES; i < PCI_NUM_RESOURCES; i++) {
698     -			pci_claim_resource(bus->self, i);
699     -		}
    697 +		for (i = PCI_BRIDGE_RESOURCES; i < PCI_NUM_RESOURCES; i++)
    698 +			pci_claim_bridge_resource(bus->self, i);
700 699 	} else {
701 700 		/* Host-PCI Bridge */
702 701 		int err;
+43
drivers/pci/bus.c
··· 228 228 } 229 229 EXPORT_SYMBOL(pci_bus_alloc_resource); 230 230 231 + /* 232 + * The @idx resource of @dev should be a PCI-PCI bridge window. If this 233 + * resource fits inside a window of an upstream bridge, do nothing. If it 234 + * overlaps an upstream window but extends outside it, clip the resource so 235 + * it fits completely inside. 236 + */ 237 + bool pci_bus_clip_resource(struct pci_dev *dev, int idx) 238 + { 239 + struct pci_bus *bus = dev->bus; 240 + struct resource *res = &dev->resource[idx]; 241 + struct resource orig_res = *res; 242 + struct resource *r; 243 + int i; 244 + 245 + pci_bus_for_each_resource(bus, r, i) { 246 + resource_size_t start, end; 247 + 248 + if (!r) 249 + continue; 250 + 251 + if (resource_type(res) != resource_type(r)) 252 + continue; 253 + 254 + start = max(r->start, res->start); 255 + end = min(r->end, res->end); 256 + 257 + if (start > end) 258 + continue; /* no overlap */ 259 + 260 + if (res->start == start && res->end == end) 261 + return false; /* no change */ 262 + 263 + res->start = start; 264 + res->end = end; 265 + dev_printk(KERN_DEBUG, &dev->dev, "%pR clipped to %pR\n", 266 + &orig_res, res); 267 + 268 + return true; 269 + } 270 + 271 + return false; 272 + } 273 + 231 274 void __weak pcibios_resource_survey_bus(struct pci_bus *bus) { } 232 275 233 276 /**
+36 -4
drivers/pci/pci.c
··· 3271 3271 { 3272 3272 struct pci_dev *pdev; 3273 3273 3274 - if (pci_is_root_bus(dev->bus) || dev->subordinate || !dev->bus->self) 3274 + if (pci_is_root_bus(dev->bus) || dev->subordinate || 3275 + !dev->bus->self || dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET) 3275 3276 return -ENOTTY; 3276 3277 3277 3278 list_for_each_entry(pdev, &dev->bus->devices, bus_list) ··· 3306 3305 { 3307 3306 struct pci_dev *pdev; 3308 3307 3309 - if (dev->subordinate || !dev->slot) 3308 + if (dev->subordinate || !dev->slot || 3309 + dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET) 3310 3310 return -ENOTTY; 3311 3311 3312 3312 list_for_each_entry(pdev, &dev->bus->devices, bus_list) ··· 3559 3557 } 3560 3558 EXPORT_SYMBOL_GPL(pci_try_reset_function); 3561 3559 3560 + /* Do any devices on or below this bus prevent a bus reset? */ 3561 + static bool pci_bus_resetable(struct pci_bus *bus) 3562 + { 3563 + struct pci_dev *dev; 3564 + 3565 + list_for_each_entry(dev, &bus->devices, bus_list) { 3566 + if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET || 3567 + (dev->subordinate && !pci_bus_resetable(dev->subordinate))) 3568 + return false; 3569 + } 3570 + 3571 + return true; 3572 + } 3573 + 3562 3574 /* Lock devices from the top of the tree down */ 3563 3575 static void pci_bus_lock(struct pci_bus *bus) 3564 3576 { ··· 3621 3605 pci_dev_unlock(dev); 3622 3606 } 3623 3607 return 0; 3608 + } 3609 + 3610 + /* Do any devices on or below this slot prevent a bus reset? 
*/ 3611 + static bool pci_slot_resetable(struct pci_slot *slot) 3612 + { 3613 + struct pci_dev *dev; 3614 + 3615 + list_for_each_entry(dev, &slot->bus->devices, bus_list) { 3616 + if (!dev->slot || dev->slot != slot) 3617 + continue; 3618 + if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET || 3619 + (dev->subordinate && !pci_bus_resetable(dev->subordinate))) 3620 + return false; 3621 + } 3622 + 3623 + return true; 3624 3624 } 3625 3625 3626 3626 /* Lock devices from the top of the tree down */ ··· 3760 3728 { 3761 3729 int rc; 3762 3730 3763 - if (!slot) 3731 + if (!slot || !pci_slot_resetable(slot)) 3764 3732 return -ENOTTY; 3765 3733 3766 3734 if (!probe) ··· 3852 3820 3853 3821 static int pci_bus_reset(struct pci_bus *bus, int probe) 3854 3822 { 3855 - if (!bus->self) 3823 + if (!bus->self || !pci_bus_resetable(bus)) 3856 3824 return -ENOTTY; 3857 3825 3858 3826 if (probe)
+1
drivers/pci/pci.h
···
208 208 void __pci_bus_assign_resources(const struct pci_bus *bus,
209 209 				struct list_head *realloc_head,
210 210 				struct list_head *fail_head);
    211 + bool pci_bus_clip_resource(struct pci_dev *dev, int idx);
211 212
212 213 /**
213 214  * pci_ari_enabled - query ARI forwarding status
+14
drivers/pci/quirks.c
··· 3028 3028 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, PCI_ANY_ID, 3029 3029 quirk_broken_intx_masking); 3030 3030 3031 + static void quirk_no_bus_reset(struct pci_dev *dev) 3032 + { 3033 + dev->dev_flags |= PCI_DEV_FLAGS_NO_BUS_RESET; 3034 + } 3035 + 3036 + /* 3037 + * Atheros AR93xx chips do not behave after a bus reset. The device will 3038 + * throw a Link Down error on AER-capable systems and regardless of AER, 3039 + * config space of the device is never accessible again and typically 3040 + * causes the system to hang or reset when access is attempted. 3041 + * http://www.spinics.net/lists/linux-pci/msg34797.html 3042 + */ 3043 + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_ATHEROS, 0x0030, quirk_no_bus_reset); 3044 + 3031 3045 #ifdef CONFIG_ACPI 3032 3046 /* 3033 3047 * Apple: Shutdown Cactus Ridge Thunderbolt controller.
+44 -12
drivers/pci/setup-bus.c
··· 530 530 config space writes, so it's quite possible that an I/O window of 531 531 the bridge will have some undesirable address (e.g. 0) after the 532 532 first write. Ditto 64-bit prefetchable MMIO. */ 533 - static void pci_setup_bridge_io(struct pci_bus *bus) 533 + static void pci_setup_bridge_io(struct pci_dev *bridge) 534 534 { 535 - struct pci_dev *bridge = bus->self; 536 535 struct resource *res; 537 536 struct pci_bus_region region; 538 537 unsigned long io_mask; ··· 544 545 io_mask = PCI_IO_1K_RANGE_MASK; 545 546 546 547 /* Set up the top and bottom of the PCI I/O segment for this bus. */ 547 - res = bus->resource[0]; 548 + res = &bridge->resource[PCI_BRIDGE_RESOURCES + 0]; 548 549 pcibios_resource_to_bus(bridge->bus, &region, res); 549 550 if (res->flags & IORESOURCE_IO) { 550 551 pci_read_config_word(bridge, PCI_IO_BASE, &l); ··· 567 568 pci_write_config_dword(bridge, PCI_IO_BASE_UPPER16, io_upper16); 568 569 } 569 570 570 - static void pci_setup_bridge_mmio(struct pci_bus *bus) 571 + static void pci_setup_bridge_mmio(struct pci_dev *bridge) 571 572 { 572 - struct pci_dev *bridge = bus->self; 573 573 struct resource *res; 574 574 struct pci_bus_region region; 575 575 u32 l; 576 576 577 577 /* Set up the top and bottom of the PCI Memory segment for this bus. */ 578 - res = bus->resource[1]; 578 + res = &bridge->resource[PCI_BRIDGE_RESOURCES + 1]; 579 579 pcibios_resource_to_bus(bridge->bus, &region, res); 580 580 if (res->flags & IORESOURCE_MEM) { 581 581 l = (region.start >> 16) & 0xfff0; ··· 586 588 pci_write_config_dword(bridge, PCI_MEMORY_BASE, l); 587 589 } 588 590 589 - static void pci_setup_bridge_mmio_pref(struct pci_bus *bus) 591 + static void pci_setup_bridge_mmio_pref(struct pci_dev *bridge) 590 592 { 591 - struct pci_dev *bridge = bus->self; 592 593 struct resource *res; 593 594 struct pci_bus_region region; 594 595 u32 l, bu, lu; ··· 599 602 600 603 /* Set up PREF base/limit. 
*/ 601 604 bu = lu = 0; 602 - res = bus->resource[2]; 605 + res = &bridge->resource[PCI_BRIDGE_RESOURCES + 2]; 603 606 pcibios_resource_to_bus(bridge->bus, &region, res); 604 607 if (res->flags & IORESOURCE_PREFETCH) { 605 608 l = (region.start >> 16) & 0xfff0; ··· 627 630 &bus->busn_res); 628 631 629 632 if (type & IORESOURCE_IO) 630 - pci_setup_bridge_io(bus); 633 + pci_setup_bridge_io(bridge); 631 634 632 635 if (type & IORESOURCE_MEM) 633 - pci_setup_bridge_mmio(bus); 636 + pci_setup_bridge_mmio(bridge); 634 637 635 638 if (type & IORESOURCE_PREFETCH) 636 - pci_setup_bridge_mmio_pref(bus); 639 + pci_setup_bridge_mmio_pref(bridge); 637 640 638 641 pci_write_config_word(bridge, PCI_BRIDGE_CONTROL, bus->bridge_ctl); 639 642 } ··· 644 647 IORESOURCE_PREFETCH; 645 648 646 649 __pci_setup_bridge(bus, type); 650 + } 651 + 652 + 653 + int pci_claim_bridge_resource(struct pci_dev *bridge, int i) 654 + { 655 + if (i < PCI_BRIDGE_RESOURCES || i > PCI_BRIDGE_RESOURCE_END) 656 + return 0; 657 + 658 + if (pci_claim_resource(bridge, i) == 0) 659 + return 0; /* claimed the window */ 660 + 661 + if ((bridge->class >> 8) != PCI_CLASS_BRIDGE_PCI) 662 + return 0; 663 + 664 + if (!pci_bus_clip_resource(bridge, i)) 665 + return -EINVAL; /* clipping didn't change anything */ 666 + 667 + switch (i - PCI_BRIDGE_RESOURCES) { 668 + case 0: 669 + pci_setup_bridge_io(bridge); 670 + break; 671 + case 1: 672 + pci_setup_bridge_mmio(bridge); 673 + break; 674 + case 2: 675 + pci_setup_bridge_mmio_pref(bridge); 676 + break; 677 + default: 678 + return -EINVAL; 679 + } 680 + 681 + if (pci_claim_resource(bridge, i) == 0) 682 + return 0; /* claimed a smaller window */ 683 + 684 + return -EINVAL; 647 685 } 648 686 649 687 /* Check whether the bridge supports optional I/O and
+6 -1049
drivers/platform/x86/dell-laptop.c
··· 2 2 * Driver for Dell laptop extras 3 3 * 4 4 * Copyright (c) Red Hat <mjg@redhat.com> 5 - * Copyright (c) 2014 Gabriele Mazzotta <gabriele.mzt@gmail.com> 6 - * Copyright (c) 2014 Pali Rohár <pali.rohar@gmail.com> 7 5 * 8 - * Based on documentation in the libsmbios package: 9 - * Copyright (C) 2005-2014 Dell Inc. 6 + * Based on documentation in the libsmbios package, Copyright (C) 2005 Dell 7 + * Inc. 10 8 * 11 9 * This program is free software; you can redistribute it and/or modify 12 10 * it under the terms of the GNU General Public License version 2 as ··· 32 34 #include "../../firmware/dcdbas.h" 33 35 34 36 #define BRIGHTNESS_TOKEN 0x7d 35 - #define KBD_LED_OFF_TOKEN 0x01E1 36 - #define KBD_LED_ON_TOKEN 0x01E2 37 - #define KBD_LED_AUTO_TOKEN 0x01E3 38 - #define KBD_LED_AUTO_25_TOKEN 0x02EA 39 - #define KBD_LED_AUTO_50_TOKEN 0x02EB 40 - #define KBD_LED_AUTO_75_TOKEN 0x02EC 41 - #define KBD_LED_AUTO_100_TOKEN 0x02F6 42 37 43 38 /* This structure will be modified by the firmware when we enter 44 39 * system management mode, hence the volatiles */ ··· 62 71 63 72 struct quirk_entry { 64 73 u8 touchpad_led; 65 - 66 - int needs_kbd_timeouts; 67 - /* 68 - * Ordered list of timeouts expressed in seconds. 69 - * The list must end with -1 70 - */ 71 - int kbd_timeouts[]; 72 74 }; 73 75 74 76 static struct quirk_entry *quirks; ··· 75 91 quirks = dmi->driver_data; 76 92 return 1; 77 93 } 78 - 79 - /* 80 - * These values come from Windows utility provided by Dell. If any other value 81 - * is used then BIOS silently set timeout to 0 without any error message. 
82 - */ 83 - static struct quirk_entry quirk_dell_xps13_9333 = { 84 - .needs_kbd_timeouts = 1, 85 - .kbd_timeouts = { 0, 5, 15, 60, 5 * 60, 15 * 60, -1 }, 86 - }; 87 94 88 95 static int da_command_address; 89 96 static int da_command_code; ··· 267 292 }, 268 293 .driver_data = &quirk_dell_vostro_v130, 269 294 }, 270 - { 271 - .callback = dmi_matched, 272 - .ident = "Dell XPS13 9333", 273 - .matches = { 274 - DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."), 275 - DMI_MATCH(DMI_PRODUCT_NAME, "XPS13 9333"), 276 - }, 277 - .driver_data = &quirk_dell_xps13_9333, 278 - }, 279 295 { } 280 296 }; 281 297 ··· 331 365 } 332 366 } 333 367 334 - static int find_token_id(int tokenid) 368 + static int find_token_location(int tokenid) 335 369 { 336 370 int i; 337 - 338 371 for (i = 0; i < da_num_tokens; i++) { 339 372 if (da_tokens[i].tokenID == tokenid) 340 - return i; 373 + return da_tokens[i].location; 341 374 } 342 375 343 376 return -1; 344 - } 345 - 346 - static int find_token_location(int tokenid) 347 - { 348 - int id; 349 - 350 - id = find_token_id(tokenid); 351 - if (id == -1) 352 - return -1; 353 - 354 - return da_tokens[id].location; 355 377 } 356 378 357 379 static struct calling_interface_buffer * ··· 360 406 dcdbas_smi_request(&command); 361 407 362 408 return buffer; 363 - } 364 - 365 - static inline int dell_smi_error(int value) 366 - { 367 - switch (value) { 368 - case 0: /* Completed successfully */ 369 - return 0; 370 - case -1: /* Completed with error */ 371 - return -EIO; 372 - case -2: /* Function not supported */ 373 - return -ENXIO; 374 - default: /* Unknown error */ 375 - return -EINVAL; 376 - } 377 409 } 378 410 379 411 /* Derived from information in DellWirelessCtl.cpp: ··· 716 776 else 717 777 dell_send_request(buffer, 1, 1); 718 778 719 - out: 779 + out: 720 780 release_buffer(); 721 781 return ret; 722 782 } ··· 740 800 741 801 ret = buffer->output[1]; 742 802 743 - out: 803 + out: 744 804 release_buffer(); 745 805 return ret; 746 806 } ··· 787 847 static 
void touchpad_led_exit(void) 788 848 { 789 849 led_classdev_unregister(&touchpad_led); 790 - } 791 - 792 - /* 793 - * Derived from information in smbios-keyboard-ctl: 794 - * 795 - * cbClass 4 796 - * cbSelect 11 797 - * Keyboard illumination 798 - * cbArg1 determines the function to be performed 799 - * 800 - * cbArg1 0x0 = Get Feature Information 801 - * cbRES1 Standard return codes (0, -1, -2) 802 - * cbRES2, word0 Bitmap of user-selectable modes 803 - * bit 0 Always off (All systems) 804 - * bit 1 Always on (Travis ATG, Siberia) 805 - * bit 2 Auto: ALS-based On; ALS-based Off (Travis ATG) 806 - * bit 3 Auto: ALS- and input-activity-based On; input-activity based Off 807 - * bit 4 Auto: Input-activity-based On; input-activity based Off 808 - * bit 5 Auto: Input-activity-based On (illumination level 25%); input-activity based Off 809 - * bit 6 Auto: Input-activity-based On (illumination level 50%); input-activity based Off 810 - * bit 7 Auto: Input-activity-based On (illumination level 75%); input-activity based Off 811 - * bit 8 Auto: Input-activity-based On (illumination level 100%); input-activity based Off 812 - * bits 9-15 Reserved for future use 813 - * cbRES2, byte2 Reserved for future use 814 - * cbRES2, byte3 Keyboard illumination type 815 - * 0 Reserved 816 - * 1 Tasklight 817 - * 2 Backlight 818 - * 3-255 Reserved for future use 819 - * cbRES3, byte0 Supported auto keyboard illumination trigger bitmap. 820 - * bit 0 Any keystroke 821 - * bit 1 Touchpad activity 822 - * bit 2 Pointing stick 823 - * bit 3 Any mouse 824 - * bits 4-7 Reserved for future use 825 - * cbRES3, byte1 Supported timeout unit bitmap 826 - * bit 0 Seconds 827 - * bit 1 Minutes 828 - * bit 2 Hours 829 - * bit 3 Days 830 - * bits 4-7 Reserved for future use 831 - * cbRES3, byte2 Number of keyboard light brightness levels 832 - * cbRES4, byte0 Maximum acceptable seconds value (0 if seconds not supported). 
833 - * cbRES4, byte1 Maximum acceptable minutes value (0 if minutes not supported). 834 - * cbRES4, byte2 Maximum acceptable hours value (0 if hours not supported). 835 - * cbRES4, byte3 Maximum acceptable days value (0 if days not supported) 836 - * 837 - * cbArg1 0x1 = Get Current State 838 - * cbRES1 Standard return codes (0, -1, -2) 839 - * cbRES2, word0 Bitmap of current mode state 840 - * bit 0 Always off (All systems) 841 - * bit 1 Always on (Travis ATG, Siberia) 842 - * bit 2 Auto: ALS-based On; ALS-based Off (Travis ATG) 843 - * bit 3 Auto: ALS- and input-activity-based On; input-activity based Off 844 - * bit 4 Auto: Input-activity-based On; input-activity based Off 845 - * bit 5 Auto: Input-activity-based On (illumination level 25%); input-activity based Off 846 - * bit 6 Auto: Input-activity-based On (illumination level 50%); input-activity based Off 847 - * bit 7 Auto: Input-activity-based On (illumination level 75%); input-activity based Off 848 - * bit 8 Auto: Input-activity-based On (illumination level 100%); input-activity based Off 849 - * bits 9-15 Reserved for future use 850 - * Note: Only One bit can be set 851 - * cbRES2, byte2 Currently active auto keyboard illumination triggers. 852 - * bit 0 Any keystroke 853 - * bit 1 Touchpad activity 854 - * bit 2 Pointing stick 855 - * bit 3 Any mouse 856 - * bits 4-7 Reserved for future use 857 - * cbRES2, byte3 Current Timeout 858 - * bits 7:6 Timeout units indicator: 859 - * 00b Seconds 860 - * 01b Minutes 861 - * 10b Hours 862 - * 11b Days 863 - * bits 5:0 Timeout value (0-63) in sec/min/hr/day 864 - * NOTE: A value of 0 means always on (no timeout) if any bits of RES3 byte 865 - * are set upon return from the [Get feature information] call. 866 - * cbRES3, byte0 Current setting of ALS value that turns the light on or off. 867 - * cbRES3, byte1 Current ALS reading 868 - * cbRES3, byte2 Current keyboard light level. 
869 - * 870 - * cbArg1 0x2 = Set New State 871 - * cbRES1 Standard return codes (0, -1, -2) 872 - * cbArg2, word0 Bitmap of current mode state 873 - * bit 0 Always off (All systems) 874 - * bit 1 Always on (Travis ATG, Siberia) 875 - * bit 2 Auto: ALS-based On; ALS-based Off (Travis ATG) 876 - * bit 3 Auto: ALS- and input-activity-based On; input-activity based Off 877 - * bit 4 Auto: Input-activity-based On; input-activity based Off 878 - * bit 5 Auto: Input-activity-based On (illumination level 25%); input-activity based Off 879 - * bit 6 Auto: Input-activity-based On (illumination level 50%); input-activity based Off 880 - * bit 7 Auto: Input-activity-based On (illumination level 75%); input-activity based Off 881 - * bit 8 Auto: Input-activity-based On (illumination level 100%); input-activity based Off 882 - * bits 9-15 Reserved for future use 883 - * Note: Only One bit can be set 884 - * cbArg2, byte2 Desired auto keyboard illumination triggers. Must remain inactive to allow 885 - * keyboard to turn off automatically. 886 - * bit 0 Any keystroke 887 - * bit 1 Touchpad activity 888 - * bit 2 Pointing stick 889 - * bit 3 Any mouse 890 - * bits 4-7 Reserved for future use 891 - * cbArg2, byte3 Desired Timeout 892 - * bits 7:6 Timeout units indicator: 893 - * 00b Seconds 894 - * 01b Minutes 895 - * 10b Hours 896 - * 11b Days 897 - * bits 5:0 Timeout value (0-63) in sec/min/hr/day 898 - * cbArg3, byte0 Desired setting of ALS value that turns the light on or off. 899 - * cbArg3, byte2 Desired keyboard light level. 
900 - */ 901 - 902 - 903 - enum kbd_timeout_unit { 904 - KBD_TIMEOUT_SECONDS = 0, 905 - KBD_TIMEOUT_MINUTES, 906 - KBD_TIMEOUT_HOURS, 907 - KBD_TIMEOUT_DAYS, 908 - }; 909 - 910 - enum kbd_mode_bit { 911 - KBD_MODE_BIT_OFF = 0, 912 - KBD_MODE_BIT_ON, 913 - KBD_MODE_BIT_ALS, 914 - KBD_MODE_BIT_TRIGGER_ALS, 915 - KBD_MODE_BIT_TRIGGER, 916 - KBD_MODE_BIT_TRIGGER_25, 917 - KBD_MODE_BIT_TRIGGER_50, 918 - KBD_MODE_BIT_TRIGGER_75, 919 - KBD_MODE_BIT_TRIGGER_100, 920 - }; 921 - 922 - #define kbd_is_als_mode_bit(bit) \ 923 - ((bit) == KBD_MODE_BIT_ALS || (bit) == KBD_MODE_BIT_TRIGGER_ALS) 924 - #define kbd_is_trigger_mode_bit(bit) \ 925 - ((bit) >= KBD_MODE_BIT_TRIGGER_ALS && (bit) <= KBD_MODE_BIT_TRIGGER_100) 926 - #define kbd_is_level_mode_bit(bit) \ 927 - ((bit) >= KBD_MODE_BIT_TRIGGER_25 && (bit) <= KBD_MODE_BIT_TRIGGER_100) 928 - 929 - struct kbd_info { 930 - u16 modes; 931 - u8 type; 932 - u8 triggers; 933 - u8 levels; 934 - u8 seconds; 935 - u8 minutes; 936 - u8 hours; 937 - u8 days; 938 - }; 939 - 940 - struct kbd_state { 941 - u8 mode_bit; 942 - u8 triggers; 943 - u8 timeout_value; 944 - u8 timeout_unit; 945 - u8 als_setting; 946 - u8 als_value; 947 - u8 level; 948 - }; 949 - 950 - static const int kbd_tokens[] = { 951 - KBD_LED_OFF_TOKEN, 952 - KBD_LED_AUTO_25_TOKEN, 953 - KBD_LED_AUTO_50_TOKEN, 954 - KBD_LED_AUTO_75_TOKEN, 955 - KBD_LED_AUTO_100_TOKEN, 956 - KBD_LED_ON_TOKEN, 957 - }; 958 - 959 - static u16 kbd_token_bits; 960 - 961 - static struct kbd_info kbd_info; 962 - static bool kbd_als_supported; 963 - static bool kbd_triggers_supported; 964 - 965 - static u8 kbd_mode_levels[16]; 966 - static int kbd_mode_levels_count; 967 - 968 - static u8 kbd_previous_level; 969 - static u8 kbd_previous_mode_bit; 970 - 971 - static bool kbd_led_present; 972 - 973 - /* 974 - * NOTE: there are three ways to set the keyboard backlight level. 975 - * First, via kbd_state.mode_bit (assigning KBD_MODE_BIT_TRIGGER_* value). 
976 - * Second, via kbd_state.level (assigning numerical value <= kbd_info.levels). 977 - * Third, via SMBIOS tokens (KBD_LED_* in kbd_tokens) 978 - * 979 - * There are laptops which support only one of these methods. If we want to 980 - * support as many machines as possible we need to implement all three methods. 981 - * The first two methods use the kbd_state structure. The third uses SMBIOS 982 - * tokens. If kbd_info.levels == 0, the machine does not support setting the 983 - * keyboard backlight level via kbd_state.level. 984 - */ 985 - 986 - static int kbd_get_info(struct kbd_info *info) 987 - { 988 - u8 units; 989 - int ret; 990 - 991 - get_buffer(); 992 - 993 - buffer->input[0] = 0x0; 994 - dell_send_request(buffer, 4, 11); 995 - ret = buffer->output[0]; 996 - 997 - if (ret) { 998 - ret = dell_smi_error(ret); 999 - goto out; 1000 - } 1001 - 1002 - info->modes = buffer->output[1] & 0xFFFF; 1003 - info->type = (buffer->output[1] >> 24) & 0xFF; 1004 - info->triggers = buffer->output[2] & 0xFF; 1005 - units = (buffer->output[2] >> 8) & 0xFF; 1006 - info->levels = (buffer->output[2] >> 16) & 0xFF; 1007 - 1008 - if (units & BIT(0)) 1009 - info->seconds = (buffer->output[3] >> 0) & 0xFF; 1010 - if (units & BIT(1)) 1011 - info->minutes = (buffer->output[3] >> 8) & 0xFF; 1012 - if (units & BIT(2)) 1013 - info->hours = (buffer->output[3] >> 16) & 0xFF; 1014 - if (units & BIT(3)) 1015 - info->days = (buffer->output[3] >> 24) & 0xFF; 1016 - 1017 - out: 1018 - release_buffer(); 1019 - return ret; 1020 - } 1021 - 1022 - static unsigned int kbd_get_max_level(void) 1023 - { 1024 - if (kbd_info.levels != 0) 1025 - return kbd_info.levels; 1026 - if (kbd_mode_levels_count > 0) 1027 - return kbd_mode_levels_count - 1; 1028 - return 0; 1029 - } 1030 - 1031 - static int kbd_get_level(struct kbd_state *state) 1032 - { 1033 - int i; 1034 - 1035 - if (kbd_info.levels != 0) 1036 - return state->level; 1037 - 1038 - if (kbd_mode_levels_count > 0) { 1039 - for (i = 0; i < 
kbd_mode_levels_count; ++i) 1040 - if (kbd_mode_levels[i] == state->mode_bit) 1041 - return i; 1042 - return 0; 1043 - } 1044 - 1045 - return -EINVAL; 1046 - } 1047 - 1048 - static int kbd_set_level(struct kbd_state *state, u8 level) 1049 - { 1050 - if (kbd_info.levels != 0) { 1051 - if (level != 0) 1052 - kbd_previous_level = level; 1053 - if (state->level == level) 1054 - return 0; 1055 - state->level = level; 1056 - if (level != 0 && state->mode_bit == KBD_MODE_BIT_OFF) 1057 - state->mode_bit = kbd_previous_mode_bit; 1058 - else if (level == 0 && state->mode_bit != KBD_MODE_BIT_OFF) { 1059 - kbd_previous_mode_bit = state->mode_bit; 1060 - state->mode_bit = KBD_MODE_BIT_OFF; 1061 - } 1062 - return 0; 1063 - } 1064 - 1065 - if (kbd_mode_levels_count > 0 && level < kbd_mode_levels_count) { 1066 - if (level != 0) 1067 - kbd_previous_level = level; 1068 - state->mode_bit = kbd_mode_levels[level]; 1069 - return 0; 1070 - } 1071 - 1072 - return -EINVAL; 1073 - } 1074 - 1075 - static int kbd_get_state(struct kbd_state *state) 1076 - { 1077 - int ret; 1078 - 1079 - get_buffer(); 1080 - 1081 - buffer->input[0] = 0x1; 1082 - dell_send_request(buffer, 4, 11); 1083 - ret = buffer->output[0]; 1084 - 1085 - if (ret) { 1086 - ret = dell_smi_error(ret); 1087 - goto out; 1088 - } 1089 - 1090 - state->mode_bit = ffs(buffer->output[1] & 0xFFFF); 1091 - if (state->mode_bit != 0) 1092 - state->mode_bit--; 1093 - 1094 - state->triggers = (buffer->output[1] >> 16) & 0xFF; 1095 - state->timeout_value = (buffer->output[1] >> 24) & 0x3F; 1096 - state->timeout_unit = (buffer->output[1] >> 30) & 0x3; 1097 - state->als_setting = buffer->output[2] & 0xFF; 1098 - state->als_value = (buffer->output[2] >> 8) & 0xFF; 1099 - state->level = (buffer->output[2] >> 16) & 0xFF; 1100 - 1101 - out: 1102 - release_buffer(); 1103 - return ret; 1104 - } 1105 - 1106 - static int kbd_set_state(struct kbd_state *state) 1107 - { 1108 - int ret; 1109 - 1110 - get_buffer(); 1111 - buffer->input[0] = 0x2; 1112 - 
buffer->input[1] = BIT(state->mode_bit) & 0xFFFF; 1113 - buffer->input[1] |= (state->triggers & 0xFF) << 16; 1114 - buffer->input[1] |= (state->timeout_value & 0x3F) << 24; 1115 - buffer->input[1] |= (state->timeout_unit & 0x3) << 30; 1116 - buffer->input[2] = state->als_setting & 0xFF; 1117 - buffer->input[2] |= (state->level & 0xFF) << 16; 1118 - dell_send_request(buffer, 4, 11); 1119 - ret = buffer->output[0]; 1120 - release_buffer(); 1121 - 1122 - return dell_smi_error(ret); 1123 - } 1124 - 1125 - static int kbd_set_state_safe(struct kbd_state *state, struct kbd_state *old) 1126 - { 1127 - int ret; 1128 - 1129 - ret = kbd_set_state(state); 1130 - if (ret == 0) 1131 - return 0; 1132 - 1133 - /* 1134 - * When setting the new state fails, try to restore the previous one. 1135 - * This is needed on some machines where BIOS sets a default state when 1136 - * setting a new state fails. This default state could be all off. 1137 - */ 1138 - 1139 - if (kbd_set_state(old)) 1140 - pr_err("Setting previous keyboard state failed\n"); 1141 - 1142 - return ret; 1143 - } 1144 - 1145 - static int kbd_set_token_bit(u8 bit) 1146 - { 1147 - int id; 1148 - int ret; 1149 - 1150 - if (bit >= ARRAY_SIZE(kbd_tokens)) 1151 - return -EINVAL; 1152 - 1153 - id = find_token_id(kbd_tokens[bit]); 1154 - if (id == -1) 1155 - return -EINVAL; 1156 - 1157 - get_buffer(); 1158 - buffer->input[0] = da_tokens[id].location; 1159 - buffer->input[1] = da_tokens[id].value; 1160 - dell_send_request(buffer, 1, 0); 1161 - ret = buffer->output[0]; 1162 - release_buffer(); 1163 - 1164 - return dell_smi_error(ret); 1165 - } 1166 - 1167 - static int kbd_get_token_bit(u8 bit) 1168 - { 1169 - int id; 1170 - int ret; 1171 - int val; 1172 - 1173 - if (bit >= ARRAY_SIZE(kbd_tokens)) 1174 - return -EINVAL; 1175 - 1176 - id = find_token_id(kbd_tokens[bit]); 1177 - if (id == -1) 1178 - return -EINVAL; 1179 - 1180 - get_buffer(); 1181 - buffer->input[0] = da_tokens[id].location; 1182 - dell_send_request(buffer, 0,
0); 1183 - ret = buffer->output[0]; 1184 - val = buffer->output[1]; 1185 - release_buffer(); 1186 - 1187 - if (ret) 1188 - return dell_smi_error(ret); 1189 - 1190 - return (val == da_tokens[id].value); 1191 - } 1192 - 1193 - static int kbd_get_first_active_token_bit(void) 1194 - { 1195 - int i; 1196 - int ret; 1197 - 1198 - for (i = 0; i < ARRAY_SIZE(kbd_tokens); ++i) { 1199 - ret = kbd_get_token_bit(i); 1200 - if (ret == 1) 1201 - return i; 1202 - } 1203 - 1204 - return ret; 1205 - } 1206 - 1207 - static int kbd_get_valid_token_counts(void) 1208 - { 1209 - return hweight16(kbd_token_bits); 1210 - } 1211 - 1212 - static inline int kbd_init_info(void) 1213 - { 1214 - struct kbd_state state; 1215 - int ret; 1216 - int i; 1217 - 1218 - ret = kbd_get_info(&kbd_info); 1219 - if (ret) 1220 - return ret; 1221 - 1222 - kbd_get_state(&state); 1223 - 1224 - /* NOTE: timeout value is stored in 6 bits so max value is 63 */ 1225 - if (kbd_info.seconds > 63) 1226 - kbd_info.seconds = 63; 1227 - if (kbd_info.minutes > 63) 1228 - kbd_info.minutes = 63; 1229 - if (kbd_info.hours > 63) 1230 - kbd_info.hours = 63; 1231 - if (kbd_info.days > 63) 1232 - kbd_info.days = 63; 1233 - 1234 - /* NOTE: On tested machines ON mode did not work and caused 1235 - * problems (turned backlight off) so do not use it 1236 - */ 1237 - kbd_info.modes &= ~BIT(KBD_MODE_BIT_ON); 1238 - 1239 - kbd_previous_level = kbd_get_level(&state); 1240 - kbd_previous_mode_bit = state.mode_bit; 1241 - 1242 - if (kbd_previous_level == 0 && kbd_get_max_level() != 0) 1243 - kbd_previous_level = 1; 1244 - 1245 - if (kbd_previous_mode_bit == KBD_MODE_BIT_OFF) { 1246 - kbd_previous_mode_bit = 1247 - ffs(kbd_info.modes & ~BIT(KBD_MODE_BIT_OFF)); 1248 - if (kbd_previous_mode_bit != 0) 1249 - kbd_previous_mode_bit--; 1250 - } 1251 - 1252 - if (kbd_info.modes & (BIT(KBD_MODE_BIT_ALS) | 1253 - BIT(KBD_MODE_BIT_TRIGGER_ALS))) 1254 - kbd_als_supported = true; 1255 - 1256 - if (kbd_info.modes & ( 1257 - 
BIT(KBD_MODE_BIT_TRIGGER_ALS) | BIT(KBD_MODE_BIT_TRIGGER) | 1258 - BIT(KBD_MODE_BIT_TRIGGER_25) | BIT(KBD_MODE_BIT_TRIGGER_50) | 1259 - BIT(KBD_MODE_BIT_TRIGGER_75) | BIT(KBD_MODE_BIT_TRIGGER_100) 1260 - )) 1261 - kbd_triggers_supported = true; 1262 - 1263 - /* kbd_mode_levels[0] is reserved, see below */ 1264 - for (i = 0; i < 16; ++i) 1265 - if (kbd_is_level_mode_bit(i) && (BIT(i) & kbd_info.modes)) 1266 - kbd_mode_levels[1 + kbd_mode_levels_count++] = i; 1267 - 1268 - /* 1269 - * Find the first supported mode and assign to kbd_mode_levels[0]. 1270 - * This should be 0 (off), but we cannot depend on the BIOS to 1271 - * support 0. 1272 - */ 1273 - if (kbd_mode_levels_count > 0) { 1274 - for (i = 0; i < 16; ++i) { 1275 - if (BIT(i) & kbd_info.modes) { 1276 - kbd_mode_levels[0] = i; 1277 - break; 1278 - } 1279 - } 1280 - kbd_mode_levels_count++; 1281 - } 1282 - 1283 - return 0; 1284 - 1285 - } 1286 - 1287 - static inline void kbd_init_tokens(void) 1288 - { 1289 - int i; 1290 - 1291 - for (i = 0; i < ARRAY_SIZE(kbd_tokens); ++i) 1292 - if (find_token_id(kbd_tokens[i]) != -1) 1293 - kbd_token_bits |= BIT(i); 1294 - } 1295 - 1296 - static void kbd_init(void) 1297 - { 1298 - int ret; 1299 - 1300 - ret = kbd_init_info(); 1301 - kbd_init_tokens(); 1302 - 1303 - if (kbd_token_bits != 0 || ret == 0) 1304 - kbd_led_present = true; 1305 - } 1306 - 1307 - static ssize_t kbd_led_timeout_store(struct device *dev, 1308 - struct device_attribute *attr, 1309 - const char *buf, size_t count) 1310 - { 1311 - struct kbd_state new_state; 1312 - struct kbd_state state; 1313 - bool convert; 1314 - int value; 1315 - int ret; 1316 - char ch; 1317 - u8 unit; 1318 - int i; 1319 - 1320 - ret = sscanf(buf, "%d %c", &value, &ch); 1321 - if (ret < 1) 1322 - return -EINVAL; 1323 - else if (ret == 1) 1324 - ch = 's'; 1325 - 1326 - if (value < 0) 1327 - return -EINVAL; 1328 - 1329 - convert = false; 1330 - 1331 - switch (ch) { 1332 - case 's': 1333 - if (value > kbd_info.seconds) 1334 - convert = 
true; 1335 - unit = KBD_TIMEOUT_SECONDS; 1336 - break; 1337 - case 'm': 1338 - if (value > kbd_info.minutes) 1339 - convert = true; 1340 - unit = KBD_TIMEOUT_MINUTES; 1341 - break; 1342 - case 'h': 1343 - if (value > kbd_info.hours) 1344 - convert = true; 1345 - unit = KBD_TIMEOUT_HOURS; 1346 - break; 1347 - case 'd': 1348 - if (value > kbd_info.days) 1349 - convert = true; 1350 - unit = KBD_TIMEOUT_DAYS; 1351 - break; 1352 - default: 1353 - return -EINVAL; 1354 - } 1355 - 1356 - if (quirks && quirks->needs_kbd_timeouts) 1357 - convert = true; 1358 - 1359 - if (convert) { 1360 - /* Convert value from current units to seconds */ 1361 - switch (unit) { 1362 - case KBD_TIMEOUT_DAYS: 1363 - value *= 24; 1364 - case KBD_TIMEOUT_HOURS: 1365 - value *= 60; 1366 - case KBD_TIMEOUT_MINUTES: 1367 - value *= 60; 1368 - unit = KBD_TIMEOUT_SECONDS; 1369 - } 1370 - 1371 - if (quirks && quirks->needs_kbd_timeouts) { 1372 - for (i = 0; quirks->kbd_timeouts[i] != -1; i++) { 1373 - if (value <= quirks->kbd_timeouts[i]) { 1374 - value = quirks->kbd_timeouts[i]; 1375 - break; 1376 - } 1377 - } 1378 - } 1379 - 1380 - if (value <= kbd_info.seconds && kbd_info.seconds) { 1381 - unit = KBD_TIMEOUT_SECONDS; 1382 - } else if (value / 60 <= kbd_info.minutes && kbd_info.minutes) { 1383 - value /= 60; 1384 - unit = KBD_TIMEOUT_MINUTES; 1385 - } else if (value / (60 * 60) <= kbd_info.hours && kbd_info.hours) { 1386 - value /= (60 * 60); 1387 - unit = KBD_TIMEOUT_HOURS; 1388 - } else if (value / (60 * 60 * 24) <= kbd_info.days && kbd_info.days) { 1389 - value /= (60 * 60 * 24); 1390 - unit = KBD_TIMEOUT_DAYS; 1391 - } else { 1392 - return -EINVAL; 1393 - } 1394 - } 1395 - 1396 - ret = kbd_get_state(&state); 1397 - if (ret) 1398 - return ret; 1399 - 1400 - new_state = state; 1401 - new_state.timeout_value = value; 1402 - new_state.timeout_unit = unit; 1403 - 1404 - ret = kbd_set_state_safe(&new_state, &state); 1405 - if (ret) 1406 - return ret; 1407 - 1408 - return count; 1409 - } 1410 - 1411 - 
static ssize_t kbd_led_timeout_show(struct device *dev, 1412 - struct device_attribute *attr, char *buf) 1413 - { 1414 - struct kbd_state state; 1415 - int ret; 1416 - int len; 1417 - 1418 - ret = kbd_get_state(&state); 1419 - if (ret) 1420 - return ret; 1421 - 1422 - len = sprintf(buf, "%d", state.timeout_value); 1423 - 1424 - switch (state.timeout_unit) { 1425 - case KBD_TIMEOUT_SECONDS: 1426 - return len + sprintf(buf+len, "s\n"); 1427 - case KBD_TIMEOUT_MINUTES: 1428 - return len + sprintf(buf+len, "m\n"); 1429 - case KBD_TIMEOUT_HOURS: 1430 - return len + sprintf(buf+len, "h\n"); 1431 - case KBD_TIMEOUT_DAYS: 1432 - return len + sprintf(buf+len, "d\n"); 1433 - default: 1434 - return -EINVAL; 1435 - } 1436 - 1437 - return len; 1438 - } 1439 - 1440 - static DEVICE_ATTR(stop_timeout, S_IRUGO | S_IWUSR, 1441 - kbd_led_timeout_show, kbd_led_timeout_store); 1442 - 1443 - static const char * const kbd_led_triggers[] = { 1444 - "keyboard", 1445 - "touchpad", 1446 - /*"trackstick"*/ NULL, /* NOTE: trackstick is just alias for touchpad */ 1447 - "mouse", 1448 - }; 1449 - 1450 - static ssize_t kbd_led_triggers_store(struct device *dev, 1451 - struct device_attribute *attr, 1452 - const char *buf, size_t count) 1453 - { 1454 - struct kbd_state new_state; 1455 - struct kbd_state state; 1456 - bool triggers_enabled = false; 1457 - bool als_enabled = false; 1458 - bool disable_als = false; 1459 - bool enable_als = false; 1460 - int trigger_bit = -1; 1461 - char trigger[21]; 1462 - int i, ret; 1463 - 1464 - ret = sscanf(buf, "%20s", trigger); 1465 - if (ret != 1) 1466 - return -EINVAL; 1467 - 1468 - if (trigger[0] != '+' && trigger[0] != '-') 1469 - return -EINVAL; 1470 - 1471 - ret = kbd_get_state(&state); 1472 - if (ret) 1473 - return ret; 1474 - 1475 - if (kbd_als_supported) 1476 - als_enabled = kbd_is_als_mode_bit(state.mode_bit); 1477 - 1478 - if (kbd_triggers_supported) 1479 - triggers_enabled = kbd_is_trigger_mode_bit(state.mode_bit); 1480 - 1481 - if 
(kbd_als_supported) { 1482 - if (strcmp(trigger, "+als") == 0) { 1483 - if (als_enabled) 1484 - return count; 1485 - enable_als = true; 1486 - } else if (strcmp(trigger, "-als") == 0) { 1487 - if (!als_enabled) 1488 - return count; 1489 - disable_als = true; 1490 - } 1491 - } 1492 - 1493 - if (enable_als || disable_als) { 1494 - new_state = state; 1495 - if (enable_als) { 1496 - if (triggers_enabled) 1497 - new_state.mode_bit = KBD_MODE_BIT_TRIGGER_ALS; 1498 - else 1499 - new_state.mode_bit = KBD_MODE_BIT_ALS; 1500 - } else { 1501 - if (triggers_enabled) { 1502 - new_state.mode_bit = KBD_MODE_BIT_TRIGGER; 1503 - kbd_set_level(&new_state, kbd_previous_level); 1504 - } else { 1505 - new_state.mode_bit = KBD_MODE_BIT_ON; 1506 - } 1507 - } 1508 - if (!(kbd_info.modes & BIT(new_state.mode_bit))) 1509 - return -EINVAL; 1510 - ret = kbd_set_state_safe(&new_state, &state); 1511 - if (ret) 1512 - return ret; 1513 - kbd_previous_mode_bit = new_state.mode_bit; 1514 - return count; 1515 - } 1516 - 1517 - if (kbd_triggers_supported) { 1518 - for (i = 0; i < ARRAY_SIZE(kbd_led_triggers); ++i) { 1519 - if (!(kbd_info.triggers & BIT(i))) 1520 - continue; 1521 - if (!kbd_led_triggers[i]) 1522 - continue; 1523 - if (strcmp(trigger+1, kbd_led_triggers[i]) != 0) 1524 - continue; 1525 - if (trigger[0] == '+' && 1526 - triggers_enabled && (state.triggers & BIT(i))) 1527 - return count; 1528 - if (trigger[0] == '-' && 1529 - (!triggers_enabled || !(state.triggers & BIT(i)))) 1530 - return count; 1531 - trigger_bit = i; 1532 - break; 1533 - } 1534 - } 1535 - 1536 - if (trigger_bit != -1) { 1537 - new_state = state; 1538 - if (trigger[0] == '+') 1539 - new_state.triggers |= BIT(trigger_bit); 1540 - else { 1541 - new_state.triggers &= ~BIT(trigger_bit); 1542 - /* NOTE: trackstick bit (2) must be disabled when 1543 - * disabling touchpad bit (1), otherwise touchpad 1544 - * bit (1) will not be disabled */ 1545 - if (trigger_bit == 1) 1546 - new_state.triggers &= ~BIT(2); 1547 - } 1548 - if 
((kbd_info.triggers & new_state.triggers) != 1549 - new_state.triggers) 1550 - return -EINVAL; 1551 - if (new_state.triggers && !triggers_enabled) { 1552 - if (als_enabled) 1553 - new_state.mode_bit = KBD_MODE_BIT_TRIGGER_ALS; 1554 - else { 1555 - new_state.mode_bit = KBD_MODE_BIT_TRIGGER; 1556 - kbd_set_level(&new_state, kbd_previous_level); 1557 - } 1558 - } else if (new_state.triggers == 0) { 1559 - if (als_enabled) 1560 - new_state.mode_bit = KBD_MODE_BIT_ALS; 1561 - else 1562 - kbd_set_level(&new_state, 0); 1563 - } 1564 - if (!(kbd_info.modes & BIT(new_state.mode_bit))) 1565 - return -EINVAL; 1566 - ret = kbd_set_state_safe(&new_state, &state); 1567 - if (ret) 1568 - return ret; 1569 - if (new_state.mode_bit != KBD_MODE_BIT_OFF) 1570 - kbd_previous_mode_bit = new_state.mode_bit; 1571 - return count; 1572 - } 1573 - 1574 - return -EINVAL; 1575 - } 1576 - 1577 - static ssize_t kbd_led_triggers_show(struct device *dev, 1578 - struct device_attribute *attr, char *buf) 1579 - { 1580 - struct kbd_state state; 1581 - bool triggers_enabled; 1582 - int level, i, ret; 1583 - int len = 0; 1584 - 1585 - ret = kbd_get_state(&state); 1586 - if (ret) 1587 - return ret; 1588 - 1589 - len = 0; 1590 - 1591 - if (kbd_triggers_supported) { 1592 - triggers_enabled = kbd_is_trigger_mode_bit(state.mode_bit); 1593 - level = kbd_get_level(&state); 1594 - for (i = 0; i < ARRAY_SIZE(kbd_led_triggers); ++i) { 1595 - if (!(kbd_info.triggers & BIT(i))) 1596 - continue; 1597 - if (!kbd_led_triggers[i]) 1598 - continue; 1599 - if ((triggers_enabled || level <= 0) && 1600 - (state.triggers & BIT(i))) 1601 - buf[len++] = '+'; 1602 - else 1603 - buf[len++] = '-'; 1604 - len += sprintf(buf+len, "%s ", kbd_led_triggers[i]); 1605 - } 1606 - } 1607 - 1608 - if (kbd_als_supported) { 1609 - if (kbd_is_als_mode_bit(state.mode_bit)) 1610 - len += sprintf(buf+len, "+als "); 1611 - else 1612 - len += sprintf(buf+len, "-als "); 1613 - } 1614 - 1615 - if (len) 1616 - buf[len - 1] = '\n'; 1617 - 1618 - 
return len; 1619 - } 1620 - 1621 - static DEVICE_ATTR(start_triggers, S_IRUGO | S_IWUSR, 1622 - kbd_led_triggers_show, kbd_led_triggers_store); 1623 - 1624 - static ssize_t kbd_led_als_store(struct device *dev, 1625 - struct device_attribute *attr, 1626 - const char *buf, size_t count) 1627 - { 1628 - struct kbd_state state; 1629 - struct kbd_state new_state; 1630 - u8 setting; 1631 - int ret; 1632 - 1633 - ret = kstrtou8(buf, 10, &setting); 1634 - if (ret) 1635 - return ret; 1636 - 1637 - ret = kbd_get_state(&state); 1638 - if (ret) 1639 - return ret; 1640 - 1641 - new_state = state; 1642 - new_state.als_setting = setting; 1643 - 1644 - ret = kbd_set_state_safe(&new_state, &state); 1645 - if (ret) 1646 - return ret; 1647 - 1648 - return count; 1649 - } 1650 - 1651 - static ssize_t kbd_led_als_show(struct device *dev, 1652 - struct device_attribute *attr, char *buf) 1653 - { 1654 - struct kbd_state state; 1655 - int ret; 1656 - 1657 - ret = kbd_get_state(&state); 1658 - if (ret) 1659 - return ret; 1660 - 1661 - return sprintf(buf, "%d\n", state.als_setting); 1662 - } 1663 - 1664 - static DEVICE_ATTR(als_setting, S_IRUGO | S_IWUSR, 1665 - kbd_led_als_show, kbd_led_als_store); 1666 - 1667 - static struct attribute *kbd_led_attrs[] = { 1668 - &dev_attr_stop_timeout.attr, 1669 - &dev_attr_start_triggers.attr, 1670 - &dev_attr_als_setting.attr, 1671 - NULL, 1672 - }; 1673 - ATTRIBUTE_GROUPS(kbd_led); 1674 - 1675 - static enum led_brightness kbd_led_level_get(struct led_classdev *led_cdev) 1676 - { 1677 - int ret; 1678 - u16 num; 1679 - struct kbd_state state; 1680 - 1681 - if (kbd_get_max_level()) { 1682 - ret = kbd_get_state(&state); 1683 - if (ret) 1684 - return 0; 1685 - ret = kbd_get_level(&state); 1686 - if (ret < 0) 1687 - return 0; 1688 - return ret; 1689 - } 1690 - 1691 - if (kbd_get_valid_token_counts()) { 1692 - ret = kbd_get_first_active_token_bit(); 1693 - if (ret < 0) 1694 - return 0; 1695 - for (num = kbd_token_bits; num != 0 && ret > 0; --ret) 1696 - num 
&= num - 1; /* clear the first bit set */ 1697 - if (num == 0) 1698 - return 0; 1699 - return ffs(num) - 1; 1700 - } 1701 - 1702 - pr_warn("Keyboard brightness level control not supported\n"); 1703 - return 0; 1704 - } 1705 - 1706 - static void kbd_led_level_set(struct led_classdev *led_cdev, 1707 - enum led_brightness value) 1708 - { 1709 - struct kbd_state state; 1710 - struct kbd_state new_state; 1711 - u16 num; 1712 - 1713 - if (kbd_get_max_level()) { 1714 - if (kbd_get_state(&state)) 1715 - return; 1716 - new_state = state; 1717 - if (kbd_set_level(&new_state, value)) 1718 - return; 1719 - kbd_set_state_safe(&new_state, &state); 1720 - return; 1721 - } 1722 - 1723 - if (kbd_get_valid_token_counts()) { 1724 - for (num = kbd_token_bits; num != 0 && value > 0; --value) 1725 - num &= num - 1; /* clear the first bit set */ 1726 - if (num == 0) 1727 - return; 1728 - kbd_set_token_bit(ffs(num) - 1); 1729 - return; 1730 - } 1731 - 1732 - pr_warn("Keyboard brightness level control not supported\n"); 1733 - } 1734 - 1735 - static struct led_classdev kbd_led = { 1736 - .name = "dell::kbd_backlight", 1737 - .brightness_set = kbd_led_level_set, 1738 - .brightness_get = kbd_led_level_get, 1739 - .groups = kbd_led_groups, 1740 - }; 1741 - 1742 - static int __init kbd_led_init(struct device *dev) 1743 - { 1744 - kbd_init(); 1745 - if (!kbd_led_present) 1746 - return -ENODEV; 1747 - kbd_led.max_brightness = kbd_get_max_level(); 1748 - if (!kbd_led.max_brightness) { 1749 - kbd_led.max_brightness = kbd_get_valid_token_counts(); 1750 - if (kbd_led.max_brightness) 1751 - kbd_led.max_brightness--; 1752 - } 1753 - return led_classdev_register(dev, &kbd_led); 1754 - } 1755 - 1756 - static void brightness_set_exit(struct led_classdev *led_cdev, 1757 - enum led_brightness value) 1758 - { 1759 - /* Don't change backlight level on exit */ 1760 - }; 1761 - 1762 - static void kbd_led_exit(void) 1763 - { 1764 - if (!kbd_led_present) 1765 - return; 1766 - kbd_led.brightness_set = 
brightness_set_exit; 1767 - led_classdev_unregister(&kbd_led); 1768 850 } 1769 851 1770 852 static int __init dell_init(void) ··· 840 1878 841 1879 if (quirks && quirks->touchpad_led) 842 1880 touchpad_led_init(&platform_device->dev); 843 - 844 - kbd_led_init(&platform_device->dev); 845 1881 846 1882 dell_laptop_dir = debugfs_create_dir("dell_laptop", NULL); 847 1883 if (dell_laptop_dir != NULL) ··· 908 1948 debugfs_remove_recursive(dell_laptop_dir); 909 1949 if (quirks && quirks->touchpad_led) 910 1950 touchpad_led_exit(); 911 - kbd_led_exit(); 912 1951 i8042_remove_filter(dell_laptop_i8042_filter); 913 1952 cancel_delayed_work_sync(&dell_rfkill_work); 914 1953 backlight_device_unregister(dell_backlight_device); ··· 924 1965 module_exit(dell_exit); 925 1966 926 1967 MODULE_AUTHOR("Matthew Garrett <mjg@redhat.com>"); 927 - MODULE_AUTHOR("Gabriele Mazzotta <gabriele.mzt@gmail.com>"); 928 - MODULE_AUTHOR("Pali Rohár <pali.rohar@gmail.com>"); 929 1968 MODULE_DESCRIPTION("Dell laptop driver"); 930 1969 MODULE_LICENSE("GPL");
+3 -1
drivers/regulator/core.c
··· 1488 1488 } 1489 1489 EXPORT_SYMBOL_GPL(regulator_get_optional); 1490 1490 1491 - /* Locks held by regulator_put() */ 1491 + /* regulator_list_mutex lock held by regulator_put() */ 1492 1492 static void _regulator_put(struct regulator *regulator) 1493 1493 { 1494 1494 struct regulator_dev *rdev; ··· 1503 1503 /* remove any sysfs entries */ 1504 1504 if (regulator->dev) 1505 1505 sysfs_remove_link(&rdev->dev.kobj, regulator->supply_name); 1506 + mutex_lock(&rdev->mutex); 1506 1507 kfree(regulator->supply_name); 1507 1508 list_del(&regulator->list); 1508 1509 kfree(regulator); 1509 1510 1510 1511 rdev->open_count--; 1511 1512 rdev->exclusive = 0; 1513 + mutex_unlock(&rdev->mutex); 1512 1514 1513 1515 module_put(rdev->owner); 1514 1516 }
+38 -4
drivers/regulator/s2mps11.c
··· 405 405 .enable_mask = S2MPS14_ENABLE_MASK \ 406 406 } 407 407 408 + #define regulator_desc_s2mps13_buck7(num, min, step, min_sel) { \ 409 + .name = "BUCK"#num, \ 410 + .id = S2MPS13_BUCK##num, \ 411 + .ops = &s2mps14_reg_ops, \ 412 + .type = REGULATOR_VOLTAGE, \ 413 + .owner = THIS_MODULE, \ 414 + .min_uV = min, \ 415 + .uV_step = step, \ 416 + .linear_min_sel = min_sel, \ 417 + .n_voltages = S2MPS14_BUCK_N_VOLTAGES, \ 418 + .ramp_delay = S2MPS13_BUCK_RAMP_DELAY, \ 419 + .vsel_reg = S2MPS13_REG_B1OUT + (num) * 2 - 1, \ 420 + .vsel_mask = S2MPS14_BUCK_VSEL_MASK, \ 421 + .enable_reg = S2MPS13_REG_B1CTRL + (num - 1) * 2, \ 422 + .enable_mask = S2MPS14_ENABLE_MASK \ 423 + } 424 + 425 + #define regulator_desc_s2mps13_buck8_10(num, min, step, min_sel) { \ 426 + .name = "BUCK"#num, \ 427 + .id = S2MPS13_BUCK##num, \ 428 + .ops = &s2mps14_reg_ops, \ 429 + .type = REGULATOR_VOLTAGE, \ 430 + .owner = THIS_MODULE, \ 431 + .min_uV = min, \ 432 + .uV_step = step, \ 433 + .linear_min_sel = min_sel, \ 434 + .n_voltages = S2MPS14_BUCK_N_VOLTAGES, \ 435 + .ramp_delay = S2MPS13_BUCK_RAMP_DELAY, \ 436 + .vsel_reg = S2MPS13_REG_B1OUT + (num) * 2 - 1, \ 437 + .vsel_mask = S2MPS14_BUCK_VSEL_MASK, \ 438 + .enable_reg = S2MPS13_REG_B1CTRL + (num) * 2 - 1, \ 439 + .enable_mask = S2MPS14_ENABLE_MASK \ 440 + } 441 + 408 442 static const struct regulator_desc s2mps13_regulators[] = { 409 443 regulator_desc_s2mps13_ldo(1, MIN_800_MV, STEP_12_5_MV, 0x00), 410 444 regulator_desc_s2mps13_ldo(2, MIN_1400_MV, STEP_50_MV, 0x0C), ··· 486 452 regulator_desc_s2mps13_buck(4, MIN_500_MV, STEP_6_25_MV, 0x10), 487 453 regulator_desc_s2mps13_buck(5, MIN_500_MV, STEP_6_25_MV, 0x10), 488 454 regulator_desc_s2mps13_buck(6, MIN_500_MV, STEP_6_25_MV, 0x10), 489 - regulator_desc_s2mps13_buck(7, MIN_500_MV, STEP_6_25_MV, 0x10), 490 - regulator_desc_s2mps13_buck(8, MIN_1000_MV, STEP_12_5_MV, 0x20), 491 - regulator_desc_s2mps13_buck(9, MIN_1000_MV, STEP_12_5_MV, 0x20), 492 - regulator_desc_s2mps13_buck(10, 
MIN_500_MV, STEP_6_25_MV, 0x10), 455 + regulator_desc_s2mps13_buck7(7, MIN_500_MV, STEP_6_25_MV, 0x10), 456 + regulator_desc_s2mps13_buck8_10(8, MIN_1000_MV, STEP_12_5_MV, 0x20), 457 + regulator_desc_s2mps13_buck8_10(9, MIN_1000_MV, STEP_12_5_MV, 0x20), 458 + regulator_desc_s2mps13_buck8_10(10, MIN_500_MV, STEP_6_25_MV, 0x10), 493 459 }; 494 460 495 461 static int s2mps14_regulator_enable(struct regulator_dev *rdev)
+1
drivers/rtc/rtc-s5m.c
··· 832 832 static const struct platform_device_id s5m_rtc_id[] = { 833 833 { "s5m-rtc", S5M8767X }, 834 834 { "s2mps14-rtc", S2MPS14X }, 835 + { }, 835 836 }; 836 837 837 838 static struct platform_driver s5m_rtc_driver = {
+101 -16
drivers/s390/net/qeth_core_main.c
··· 1784 1784 QETH_DBF_TEXT(SETUP, 2, "idxanswr"); 1785 1785 card = CARD_FROM_CDEV(channel->ccwdev); 1786 1786 iob = qeth_get_buffer(channel); 1787 + if (!iob) 1788 + return -ENOMEM; 1787 1789 iob->callback = idx_reply_cb; 1788 1790 memcpy(&channel->ccw, READ_CCW, sizeof(struct ccw1)); 1789 1791 channel->ccw.count = QETH_BUFSIZE; ··· 1836 1834 QETH_DBF_TEXT(SETUP, 2, "idxactch"); 1837 1835 1838 1836 iob = qeth_get_buffer(channel); 1837 + if (!iob) 1838 + return -ENOMEM; 1839 1839 iob->callback = idx_reply_cb; 1840 1840 memcpy(&channel->ccw, WRITE_CCW, sizeof(struct ccw1)); 1841 1841 channel->ccw.count = IDX_ACTIVATE_SIZE; ··· 2025 2021 } 2026 2022 EXPORT_SYMBOL_GPL(qeth_prepare_control_data); 2027 2023 2024 + /** 2025 + * qeth_send_control_data() - send control command to the card 2026 + * @card: qeth_card structure pointer 2027 + * @len: size of the command buffer 2028 + * @iob: qeth_cmd_buffer pointer 2029 + * @reply_cb: callback function pointer 2030 + * @cb_card: pointer to the qeth_card structure 2031 + * @cb_reply: pointer to the qeth_reply structure 2032 + * @cb_cmd: pointer to the original iob for non-IPA 2033 + * commands, or to the qeth_ipa_cmd structure 2034 + * for the IPA commands. 2035 + * @reply_param: private pointer passed to the callback 2036 + * 2037 + * Returns the value of the `return_code' field of the response 2038 + * block returned from the hardware, or other error indication. 2039 + * Value of zero indicates successful execution of the command. 2040 + * 2041 + * Callback function gets called one or more times, with cb_cmd 2042 + * pointing to the response returned by the hardware. Callback 2043 + * function must return non-zero if more reply blocks are expected, 2044 + * and zero if the last or only reply block is received. Callback 2045 + * function can get the value of the reply_param pointer from the 2046 + * field 'param' of the structure qeth_reply. 
2047 + */ 2048 + 2028 2049 int qeth_send_control_data(struct qeth_card *card, int len, 2029 2050 struct qeth_cmd_buffer *iob, 2030 - int (*reply_cb)(struct qeth_card *, struct qeth_reply *, 2031 - unsigned long), 2051 + int (*reply_cb)(struct qeth_card *cb_card, 2052 + struct qeth_reply *cb_reply, 2053 + unsigned long cb_cmd), 2032 2054 void *reply_param) 2033 2055 { 2034 2056 int rc; ··· 2944 2914 struct qeth_cmd_buffer *iob; 2945 2915 struct qeth_ipa_cmd *cmd; 2946 2916 2947 - iob = qeth_wait_for_buffer(&card->write); 2948 - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 2949 - qeth_fill_ipacmd_header(card, cmd, ipacmd, prot); 2917 + iob = qeth_get_buffer(&card->write); 2918 + if (iob) { 2919 + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 2920 + qeth_fill_ipacmd_header(card, cmd, ipacmd, prot); 2921 + } else { 2922 + dev_warn(&card->gdev->dev, 2923 + "The qeth driver ran out of channel command buffers\n"); 2924 + QETH_DBF_MESSAGE(1, "%s The qeth driver ran out of channel command buffers", 2925 + dev_name(&card->gdev->dev)); 2926 + } 2950 2927 2951 2928 return iob; 2952 2929 } ··· 2968 2931 &card->token.ulp_connection_r, QETH_MPC_TOKEN_LENGTH); 2969 2932 } 2970 2933 EXPORT_SYMBOL_GPL(qeth_prepare_ipa_cmd); 2934 + 2935 + /** 2936 + * qeth_send_ipa_cmd() - send an IPA command 2937 + * 2938 + * See qeth_send_control_data() for explanation of the arguments. 
2939 + */ 2971 2940 2972 2941 int qeth_send_ipa_cmd(struct qeth_card *card, struct qeth_cmd_buffer *iob, 2973 2942 int (*reply_cb)(struct qeth_card *, struct qeth_reply*, ··· 3011 2968 QETH_DBF_TEXT(SETUP, 2, "strtlan"); 3012 2969 3013 2970 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_STARTLAN, 0); 2971 + if (!iob) 2972 + return -ENOMEM; 3014 2973 rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); 3015 2974 return rc; 3016 2975 } ··· 3058 3013 3059 3014 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETADAPTERPARMS, 3060 3015 QETH_PROT_IPV4); 3061 - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 3062 - cmd->data.setadapterparms.hdr.cmdlength = cmdlen; 3063 - cmd->data.setadapterparms.hdr.command_code = command; 3064 - cmd->data.setadapterparms.hdr.used_total = 1; 3065 - cmd->data.setadapterparms.hdr.seq_no = 1; 3016 + if (iob) { 3017 + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 3018 + cmd->data.setadapterparms.hdr.cmdlength = cmdlen; 3019 + cmd->data.setadapterparms.hdr.command_code = command; 3020 + cmd->data.setadapterparms.hdr.used_total = 1; 3021 + cmd->data.setadapterparms.hdr.seq_no = 1; 3022 + } 3066 3023 3067 3024 return iob; 3068 3025 } ··· 3077 3030 QETH_CARD_TEXT(card, 3, "queryadp"); 3078 3031 iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_COMMANDS_SUPPORTED, 3079 3032 sizeof(struct qeth_ipacmd_setadpparms)); 3033 + if (!iob) 3034 + return -ENOMEM; 3080 3035 rc = qeth_send_ipa_cmd(card, iob, qeth_query_setadapterparms_cb, NULL); 3081 3036 return rc; 3082 3037 } ··· 3129 3080 3130 3081 QETH_DBF_TEXT_(SETUP, 2, "qipassi%i", prot); 3131 3082 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_QIPASSIST, prot); 3083 + if (!iob) 3084 + return -ENOMEM; 3132 3085 rc = qeth_send_ipa_cmd(card, iob, qeth_query_ipassists_cb, NULL); 3133 3086 return rc; 3134 3087 } ··· 3170 3119 return -ENOMEDIUM; 3171 3120 iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_SWITCH_ATTRIBUTES, 3172 3121 sizeof(struct qeth_ipacmd_setadpparms_hdr)); 3122 + if (!iob) 
3123 + return -ENOMEM; 3173 3124 return qeth_send_ipa_cmd(card, iob, 3174 3125 qeth_query_switch_attributes_cb, sw_info); 3175 3126 } ··· 3199 3146 3200 3147 QETH_DBF_TEXT(SETUP, 2, "qdiagass"); 3201 3148 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SET_DIAG_ASS, 0); 3149 + if (!iob) 3150 + return -ENOMEM; 3202 3151 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 3203 3152 cmd->data.diagass.subcmd_len = 16; 3204 3153 cmd->data.diagass.subcmd = QETH_DIAGS_CMD_QUERY; ··· 3252 3197 3253 3198 QETH_DBF_TEXT(SETUP, 2, "diagtrap"); 3254 3199 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SET_DIAG_ASS, 0); 3200 + if (!iob) 3201 + return -ENOMEM; 3255 3202 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 3256 3203 cmd->data.diagass.subcmd_len = 80; 3257 3204 cmd->data.diagass.subcmd = QETH_DIAGS_CMD_TRAP; ··· 4219 4162 4220 4163 iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_PROMISC_MODE, 4221 4164 sizeof(struct qeth_ipacmd_setadpparms)); 4165 + if (!iob) 4166 + return; 4222 4167 cmd = (struct qeth_ipa_cmd *)(iob->data + IPA_PDU_HEADER_SIZE); 4223 4168 cmd->data.setadapterparms.data.mode = mode; 4224 4169 qeth_send_ipa_cmd(card, iob, qeth_setadp_promisc_mode_cb, NULL); ··· 4291 4232 4292 4233 iob = qeth_get_adapter_cmd(card, IPA_SETADP_ALTER_MAC_ADDRESS, 4293 4234 sizeof(struct qeth_ipacmd_setadpparms)); 4235 + if (!iob) 4236 + return -ENOMEM; 4294 4237 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 4295 4238 cmd->data.setadapterparms.data.change_addr.cmd = CHANGE_ADDR_READ_MAC; 4296 4239 cmd->data.setadapterparms.data.change_addr.addr_size = OSA_ADDR_LEN; ··· 4406 4345 iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_ACCESS_CONTROL, 4407 4346 sizeof(struct qeth_ipacmd_setadpparms_hdr) + 4408 4347 sizeof(struct qeth_set_access_ctrl)); 4348 + if (!iob) 4349 + return -ENOMEM; 4409 4350 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 4410 4351 access_ctrl_req = &cmd->data.setadapterparms.data.set_access_ctrl; 4411 4352 
access_ctrl_req->subcmd_code = isolation; ··· 4651 4588 4652 4589 iob = qeth_get_adapter_cmd(card, IPA_SETADP_SET_SNMP_CONTROL, 4653 4590 QETH_SNMP_SETADP_CMDLENGTH + req_len); 4591 + if (!iob) { 4592 + rc = -ENOMEM; 4593 + goto out; 4594 + } 4654 4595 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 4655 4596 memcpy(&cmd->data.setadapterparms.data.snmp, &ureq->cmd, req_len); 4656 4597 rc = qeth_send_ipa_snmp_cmd(card, iob, QETH_SETADP_BASE_LEN + req_len, ··· 4666 4599 if (copy_to_user(udata, qinfo.udata, qinfo.udata_len)) 4667 4600 rc = -EFAULT; 4668 4601 } 4669 - 4602 + out: 4670 4603 kfree(ureq); 4671 4604 kfree(qinfo.udata); 4672 4605 return rc; ··· 4737 4670 iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_OAT, 4738 4671 sizeof(struct qeth_ipacmd_setadpparms_hdr) + 4739 4672 sizeof(struct qeth_query_oat)); 4673 + if (!iob) { 4674 + rc = -ENOMEM; 4675 + goto out_free; 4676 + } 4740 4677 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 4741 4678 oat_req = &cmd->data.setadapterparms.data.query_oat; 4742 4679 oat_req->subcmd_code = oat_data.command; ··· 4806 4735 return -EOPNOTSUPP; 4807 4736 iob = qeth_get_adapter_cmd(card, IPA_SETADP_QUERY_CARD_INFO, 4808 4737 sizeof(struct qeth_ipacmd_setadpparms_hdr)); 4738 + if (!iob) 4739 + return -ENOMEM; 4809 4740 return qeth_send_ipa_cmd(card, iob, qeth_query_card_info_cb, 4810 4741 (void *)carrier_info); 4811 4742 } ··· 5133 5060 card->options.adp.supported_funcs = 0; 5134 5061 card->options.sbp.supported_funcs = 0; 5135 5062 card->info.diagass_support = 0; 5136 - qeth_query_ipassists(card, QETH_PROT_IPV4); 5137 - if (qeth_is_supported(card, IPA_SETADAPTERPARMS)) 5138 - qeth_query_setadapterparms(card); 5139 - if (qeth_adp_supported(card, IPA_SETADP_SET_DIAG_ASSIST)) 5140 - qeth_query_setdiagass(card); 5063 + rc = qeth_query_ipassists(card, QETH_PROT_IPV4); 5064 + if (rc == -ENOMEM) 5065 + goto out; 5066 + if (qeth_is_supported(card, IPA_SETADAPTERPARMS)) { 5067 + rc = 
qeth_query_setadapterparms(card); 5068 + if (rc < 0) { 5069 + QETH_DBF_TEXT_(SETUP, 2, "6err%d", rc); 5070 + goto out; 5071 + } 5072 + } 5073 + if (qeth_adp_supported(card, IPA_SETADP_SET_DIAG_ASSIST)) { 5074 + rc = qeth_query_setdiagass(card); 5075 + if (rc < 0) { 5076 + QETH_DBF_TEXT_(SETUP, 2, "7err%d", rc); 5077 + goto out; 5078 + } 5079 + } 5141 5080 return 0; 5142 5081 out: 5143 5082 dev_warn(&card->gdev->dev, "The qeth device driver failed to recover "
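The recurring change through this file replaces the blocking qeth_wait_for_buffer() with the non-blocking qeth_get_buffer() and makes every caller check for NULL and propagate -ENOMEM. A minimal sketch of that pattern (the allocator and command function here are stand-ins, not the real qeth API):

```c
#include <errno.h>
#include <stdlib.h>

struct cmd_buffer { char data[64]; };

/* Hypothetical allocator standing in for qeth_get_buffer(): may fail. */
static struct cmd_buffer *get_buffer(int simulate_oom)
{
    return simulate_oom ? NULL : calloc(1, sizeof(struct cmd_buffer));
}

/* The pattern the patch introduces: check the allocation and return
 * -ENOMEM instead of dereferencing a NULL buffer. */
static int send_startlan(int simulate_oom)
{
    struct cmd_buffer *iob = get_buffer(simulate_oom);

    if (!iob)
        return -ENOMEM;
    /* ... fill in the command header and transmit it ... */
    free(iob);
    return 0;
}
```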
+113 -117
drivers/s390/net/qeth_l2_main.c
··· 27 27 static int qeth_l2_stop(struct net_device *); 28 28 static int qeth_l2_send_delmac(struct qeth_card *, __u8 *); 29 29 static int qeth_l2_send_setdelmac(struct qeth_card *, __u8 *, 30 - enum qeth_ipa_cmds, 31 - int (*reply_cb) (struct qeth_card *, 32 - struct qeth_reply*, 33 - unsigned long)); 30 + enum qeth_ipa_cmds); 34 31 static void qeth_l2_set_multicast_list(struct net_device *); 35 32 static int qeth_l2_recover(void *); 36 33 static void qeth_bridgeport_query_support(struct qeth_card *card); ··· 127 130 return ndev; 128 131 } 129 132 130 - static int qeth_l2_send_setgroupmac_cb(struct qeth_card *card, 131 - struct qeth_reply *reply, 132 - unsigned long data) 133 + static int qeth_setdel_makerc(struct qeth_card *card, int retcode) 133 134 { 134 - struct qeth_ipa_cmd *cmd; 135 - __u8 *mac; 135 + int rc; 136 136 137 - QETH_CARD_TEXT(card, 2, "L2Sgmacb"); 138 - cmd = (struct qeth_ipa_cmd *) data; 139 - mac = &cmd->data.setdelmac.mac[0]; 140 - /* MAC already registered, needed in couple/uncouple case */ 141 - if (cmd->hdr.return_code == IPA_RC_L2_DUP_MAC) { 142 - QETH_DBF_MESSAGE(2, "Group MAC %pM already existing on %s \n", 143 - mac, QETH_CARD_IFNAME(card)); 144 - cmd->hdr.return_code = 0; 137 + if (retcode) 138 + QETH_CARD_TEXT_(card, 2, "err%04x", retcode); 139 + switch (retcode) { 140 + case IPA_RC_SUCCESS: 141 + rc = 0; 142 + break; 143 + case IPA_RC_L2_UNSUPPORTED_CMD: 144 + rc = -ENOSYS; 145 + break; 146 + case IPA_RC_L2_ADDR_TABLE_FULL: 147 + rc = -ENOSPC; 148 + break; 149 + case IPA_RC_L2_DUP_MAC: 150 + case IPA_RC_L2_DUP_LAYER3_MAC: 151 + rc = -EEXIST; 152 + break; 153 + case IPA_RC_L2_MAC_NOT_AUTH_BY_HYP: 154 + case IPA_RC_L2_MAC_NOT_AUTH_BY_ADP: 155 + rc = -EPERM; 156 + break; 157 + case IPA_RC_L2_MAC_NOT_FOUND: 158 + rc = -ENOENT; 159 + break; 160 + case -ENOMEM: 161 + rc = -ENOMEM; 162 + break; 163 + default: 164 + rc = -EIO; 165 + break; 145 166 } 146 - if (cmd->hdr.return_code) 147 - QETH_DBF_MESSAGE(2, "Could not set group MAC %pM on %s: 
%x\n", 148 - mac, QETH_CARD_IFNAME(card), cmd->hdr.return_code); 149 - return 0; 167 + return rc; 150 168 } 151 169 152 170 static int qeth_l2_send_setgroupmac(struct qeth_card *card, __u8 *mac) 153 171 { 172 + int rc; 173 + 154 174 QETH_CARD_TEXT(card, 2, "L2Sgmac"); 155 - return qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETGMAC, 156 - qeth_l2_send_setgroupmac_cb); 157 - } 158 - 159 - static int qeth_l2_send_delgroupmac_cb(struct qeth_card *card, 160 - struct qeth_reply *reply, 161 - unsigned long data) 162 - { 163 - struct qeth_ipa_cmd *cmd; 164 - __u8 *mac; 165 - 166 - QETH_CARD_TEXT(card, 2, "L2Dgmacb"); 167 - cmd = (struct qeth_ipa_cmd *) data; 168 - mac = &cmd->data.setdelmac.mac[0]; 169 - if (cmd->hdr.return_code) 170 - QETH_DBF_MESSAGE(2, "Could not delete group MAC %pM on %s: %x\n", 171 - mac, QETH_CARD_IFNAME(card), cmd->hdr.return_code); 172 - return 0; 175 + rc = qeth_setdel_makerc(card, qeth_l2_send_setdelmac(card, mac, 176 + IPA_CMD_SETGMAC)); 177 + if (rc == -EEXIST) 178 + QETH_DBF_MESSAGE(2, "Group MAC %pM already existing on %s\n", 179 + mac, QETH_CARD_IFNAME(card)); 180 + else if (rc) 181 + QETH_DBF_MESSAGE(2, "Could not set group MAC %pM on %s: %d\n", 182 + mac, QETH_CARD_IFNAME(card), rc); 183 + return rc; 173 184 } 174 185 175 186 static int qeth_l2_send_delgroupmac(struct qeth_card *card, __u8 *mac) 176 187 { 188 + int rc; 189 + 177 190 QETH_CARD_TEXT(card, 2, "L2Dgmac"); 178 - return qeth_l2_send_setdelmac(card, mac, IPA_CMD_DELGMAC, 179 - qeth_l2_send_delgroupmac_cb); 191 + rc = qeth_setdel_makerc(card, qeth_l2_send_setdelmac(card, mac, 192 + IPA_CMD_DELGMAC)); 193 + if (rc) 194 + QETH_DBF_MESSAGE(2, 195 + "Could not delete group MAC %pM on %s: %d\n", 196 + mac, QETH_CARD_IFNAME(card), rc); 197 + return rc; 180 198 } 181 199 182 200 static void qeth_l2_add_mc(struct qeth_card *card, __u8 *mac, int vmac) ··· 209 197 mc->is_vmac = vmac; 210 198 211 199 if (vmac) { 212 - rc = qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETVMAC, 213 - NULL); 200 + 
rc = qeth_setdel_makerc(card, 201 + qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETVMAC)); 214 202 } else { 215 - rc = qeth_l2_send_setgroupmac(card, mac); 203 + rc = qeth_setdel_makerc(card, 204 + qeth_l2_send_setgroupmac(card, mac)); 216 205 } 217 206 218 207 if (!rc) ··· 231 218 if (del) { 232 219 if (mc->is_vmac) 233 220 qeth_l2_send_setdelmac(card, mc->mc_addr, 234 - IPA_CMD_DELVMAC, NULL); 221 + IPA_CMD_DELVMAC); 235 222 else 236 223 qeth_l2_send_delgroupmac(card, mc->mc_addr); 237 224 } ··· 304 291 305 292 QETH_CARD_TEXT_(card, 4, "L2sdv%x", ipacmd); 306 293 iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4); 294 + if (!iob) 295 + return -ENOMEM; 307 296 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 308 297 cmd->data.setdelvlan.vlan_id = i; 309 298 return qeth_send_ipa_cmd(card, iob, ··· 328 313 { 329 314 struct qeth_card *card = dev->ml_priv; 330 315 struct qeth_vlan_vid *id; 316 + int rc; 331 317 332 318 QETH_CARD_TEXT_(card, 4, "aid:%d", vid); 333 319 if (!vid) ··· 344 328 id = kmalloc(sizeof(struct qeth_vlan_vid), GFP_ATOMIC); 345 329 if (id) { 346 330 id->vid = vid; 347 - qeth_l2_send_setdelvlan(card, vid, IPA_CMD_SETVLAN); 331 + rc = qeth_l2_send_setdelvlan(card, vid, IPA_CMD_SETVLAN); 332 + if (rc) { 333 + kfree(id); 334 + return rc; 335 + } 348 336 spin_lock_bh(&card->vlanlock); 349 337 list_add_tail(&id->list, &card->vid_list); 350 338 spin_unlock_bh(&card->vlanlock); ··· 363 343 { 364 344 struct qeth_vlan_vid *id, *tmpid = NULL; 365 345 struct qeth_card *card = dev->ml_priv; 346 + int rc = 0; 366 347 367 348 QETH_CARD_TEXT_(card, 4, "kid:%d", vid); 368 349 if (card->info.type == QETH_CARD_TYPE_OSM) { ··· 384 363 } 385 364 spin_unlock_bh(&card->vlanlock); 386 365 if (tmpid) { 387 - qeth_l2_send_setdelvlan(card, vid, IPA_CMD_DELVLAN); 366 + rc = qeth_l2_send_setdelvlan(card, vid, IPA_CMD_DELVLAN); 388 367 kfree(tmpid); 389 368 } 390 369 qeth_l2_set_multicast_list(card->dev); 391 - return 0; 370 + return rc; 392 371 } 393 372 394 
373 static int qeth_l2_stop_card(struct qeth_card *card, int recovery_mode) ··· 560 539 } 561 540 562 541 static int qeth_l2_send_setdelmac(struct qeth_card *card, __u8 *mac, 563 - enum qeth_ipa_cmds ipacmd, 564 - int (*reply_cb) (struct qeth_card *, 565 - struct qeth_reply*, 566 - unsigned long)) 542 + enum qeth_ipa_cmds ipacmd) 567 543 { 568 544 struct qeth_ipa_cmd *cmd; 569 545 struct qeth_cmd_buffer *iob; 570 546 571 547 QETH_CARD_TEXT(card, 2, "L2sdmac"); 572 548 iob = qeth_get_ipacmd_buffer(card, ipacmd, QETH_PROT_IPV4); 549 + if (!iob) 550 + return -ENOMEM; 573 551 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 574 552 cmd->data.setdelmac.mac_length = OSA_ADDR_LEN; 575 553 memcpy(&cmd->data.setdelmac.mac, mac, OSA_ADDR_LEN); 576 - return qeth_send_ipa_cmd(card, iob, reply_cb, NULL); 577 - } 578 - 579 - static int qeth_l2_send_setmac_cb(struct qeth_card *card, 580 - struct qeth_reply *reply, 581 - unsigned long data) 582 - { 583 - struct qeth_ipa_cmd *cmd; 584 - 585 - QETH_CARD_TEXT(card, 2, "L2Smaccb"); 586 - cmd = (struct qeth_ipa_cmd *) data; 587 - if (cmd->hdr.return_code) { 588 - QETH_CARD_TEXT_(card, 2, "L2er%x", cmd->hdr.return_code); 589 - card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; 590 - switch (cmd->hdr.return_code) { 591 - case IPA_RC_L2_DUP_MAC: 592 - case IPA_RC_L2_DUP_LAYER3_MAC: 593 - dev_warn(&card->gdev->dev, 594 - "MAC address %pM already exists\n", 595 - cmd->data.setdelmac.mac); 596 - break; 597 - case IPA_RC_L2_MAC_NOT_AUTH_BY_HYP: 598 - case IPA_RC_L2_MAC_NOT_AUTH_BY_ADP: 599 - dev_warn(&card->gdev->dev, 600 - "MAC address %pM is not authorized\n", 601 - cmd->data.setdelmac.mac); 602 - break; 603 - default: 604 - break; 605 - } 606 - } else { 607 - card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED; 608 - memcpy(card->dev->dev_addr, cmd->data.setdelmac.mac, 609 - OSA_ADDR_LEN); 610 - dev_info(&card->gdev->dev, 611 - "MAC address %pM successfully registered on device %s\n", 612 - card->dev->dev_addr, 
card->dev->name); 613 - } 614 - return 0; 554 + return qeth_send_ipa_cmd(card, iob, NULL, NULL); 615 555 } 616 556 617 557 static int qeth_l2_send_setmac(struct qeth_card *card, __u8 *mac) 618 558 { 559 + int rc; 560 + 619 561 QETH_CARD_TEXT(card, 2, "L2Setmac"); 620 - return qeth_l2_send_setdelmac(card, mac, IPA_CMD_SETVMAC, 621 - qeth_l2_send_setmac_cb); 622 - } 623 - 624 - static int qeth_l2_send_delmac_cb(struct qeth_card *card, 625 - struct qeth_reply *reply, 626 - unsigned long data) 627 - { 628 - struct qeth_ipa_cmd *cmd; 629 - 630 - QETH_CARD_TEXT(card, 2, "L2Dmaccb"); 631 - cmd = (struct qeth_ipa_cmd *) data; 632 - if (cmd->hdr.return_code) { 633 - QETH_CARD_TEXT_(card, 2, "err%d", cmd->hdr.return_code); 634 - return 0; 562 + rc = qeth_setdel_makerc(card, qeth_l2_send_setdelmac(card, mac, 563 + IPA_CMD_SETVMAC)); 564 + if (rc == 0) { 565 + card->info.mac_bits |= QETH_LAYER2_MAC_REGISTERED; 566 + memcpy(card->dev->dev_addr, mac, OSA_ADDR_LEN); 567 + dev_info(&card->gdev->dev, 568 + "MAC address %pM successfully registered on device %s\n", 569 + card->dev->dev_addr, card->dev->name); 570 + } else { 571 + card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; 572 + switch (rc) { 573 + case -EEXIST: 574 + dev_warn(&card->gdev->dev, 575 + "MAC address %pM already exists\n", mac); 576 + break; 577 + case -EPERM: 578 + dev_warn(&card->gdev->dev, 579 + "MAC address %pM is not authorized\n", mac); 580 + break; 581 + } 635 582 } 636 - card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; 637 - 638 - return 0; 583 + return rc; 639 584 } 640 585 641 586 static int qeth_l2_send_delmac(struct qeth_card *card, __u8 *mac) 642 587 { 588 + int rc; 589 + 643 590 QETH_CARD_TEXT(card, 2, "L2Delmac"); 644 591 if (!(card->info.mac_bits & QETH_LAYER2_MAC_REGISTERED)) 645 592 return 0; 646 - return qeth_l2_send_setdelmac(card, mac, IPA_CMD_DELVMAC, 647 - qeth_l2_send_delmac_cb); 593 + rc = qeth_setdel_makerc(card, qeth_l2_send_setdelmac(card, mac, 594 + IPA_CMD_DELVMAC)); 595 + if (rc 
== 0) 596 + card->info.mac_bits &= ~QETH_LAYER2_MAC_REGISTERED; 597 + return rc; 648 598 } 649 599 650 600 static int qeth_l2_request_initial_mac(struct qeth_card *card) ··· 643 651 if (rc) { 644 652 QETH_DBF_MESSAGE(2, "couldn't get MAC address on " 645 653 "device %s: x%x\n", CARD_BUS_ID(card), rc); 646 - QETH_DBF_TEXT_(SETUP, 2, "1err%d", rc); 654 + QETH_DBF_TEXT_(SETUP, 2, "1err%04x", rc); 647 655 return rc; 648 656 } 649 657 QETH_DBF_HEX(SETUP, 2, card->dev->dev_addr, OSA_ADDR_LEN); ··· 679 687 return -ERESTARTSYS; 680 688 } 681 689 rc = qeth_l2_send_delmac(card, &card->dev->dev_addr[0]); 682 - if (!rc || (rc == IPA_RC_L2_MAC_NOT_FOUND)) 690 + if (!rc || (rc == -ENOENT)) 683 691 rc = qeth_l2_send_setmac(card, addr->sa_data); 684 692 return rc ? -EINVAL : 0; 685 693 } ··· 988 996 recover_flag = card->state; 989 997 rc = qeth_core_hardsetup_card(card); 990 998 if (rc) { 991 - QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); 999 + QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc); 992 1000 rc = -ENODEV; 993 1001 goto out_remove; 994 1002 } ··· 1722 1730 1723 1731 QETH_CARD_TEXT(card, 2, "brqsuppo"); 1724 1732 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETBRIDGEPORT, 0); 1733 + if (!iob) 1734 + return; 1725 1735 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 1726 1736 cmd->data.sbp.hdr.cmdlength = 1727 1737 sizeof(struct qeth_ipacmd_sbp_hdr) + ··· 1799 1805 if (!(card->options.sbp.supported_funcs & IPA_SBP_QUERY_BRIDGE_PORTS)) 1800 1806 return -EOPNOTSUPP; 1801 1807 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETBRIDGEPORT, 0); 1808 + if (!iob) 1809 + return -ENOMEM; 1802 1810 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 1803 1811 cmd->data.sbp.hdr.cmdlength = 1804 1812 sizeof(struct qeth_ipacmd_sbp_hdr); ··· 1813 1817 if (rc) 1814 1818 return rc; 1815 1819 rc = qeth_bridgeport_makerc(card, &cbctl, IPA_SBP_QUERY_BRIDGE_PORTS); 1816 - if (rc) 1817 - return rc; 1818 - return 0; 1820 + return rc; 1819 1821 } 1820 1822 
EXPORT_SYMBOL_GPL(qeth_bridgeport_query_ports); 1821 1823 ··· 1867 1873 if (!(card->options.sbp.supported_funcs & setcmd)) 1868 1874 return -EOPNOTSUPP; 1869 1875 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETBRIDGEPORT, 0); 1876 + if (!iob) 1877 + return -ENOMEM; 1870 1878 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 1871 1879 cmd->data.sbp.hdr.cmdlength = cmdlength; 1872 1880 cmd->data.sbp.hdr.command_code = setcmd;
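The new qeth_setdel_makerc() above centralizes translation of the adapter's IPA return codes into standard negative errnos, replacing the per-command reply callbacks that each decoded the codes themselves. A sketch of the idea with made-up code values (the real IPA_RC_* constants differ):

```c
#include <errno.h>

/* Hypothetical hardware return codes for illustration only. */
enum ipa_rc {
    RC_SUCCESS       = 0x0000,
    RC_DUP_MAC       = 0x2005,
    RC_MAC_NOT_FOUND = 0x2010,
    RC_NOT_AUTH      = 0x2020,
};

/* One place maps device status to errno, so callers can test for
 * -EEXIST or -ENOENT instead of device-specific constants. */
static int makerc(int retcode)
{
    switch (retcode) {
    case RC_SUCCESS:       return 0;
    case RC_DUP_MAC:       return -EEXIST;
    case RC_MAC_NOT_FOUND: return -ENOENT;
    case RC_NOT_AUTH:      return -EPERM;
    case -ENOMEM:          return -ENOMEM; /* pass through allocator failure */
    default:               return -EIO;
    }
}
```

With the mapping in one helper, the "MAC already registered" and "MAC not found" special cases in the callers shrink to plain `rc == -EEXIST` / `rc == -ENOENT` tests.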
+39 -11
drivers/s390/net/qeth_l3_main.c
··· 549 549 QETH_CARD_TEXT(card, 4, "setdelmc"); 550 550 551 551 iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto); 552 + if (!iob) 553 + return -ENOMEM; 552 554 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 553 555 memcpy(&cmd->data.setdelipm.mac, addr->mac, OSA_ADDR_LEN); 554 556 if (addr->proto == QETH_PROT_IPV6) ··· 590 588 QETH_CARD_TEXT_(card, 4, "flags%02X", flags); 591 589 592 590 iob = qeth_get_ipacmd_buffer(card, ipacmd, addr->proto); 591 + if (!iob) 592 + return -ENOMEM; 593 593 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 594 594 if (addr->proto == QETH_PROT_IPV6) { 595 595 memcpy(cmd->data.setdelip6.ip_addr, &addr->u.a6.addr, ··· 620 616 621 617 QETH_CARD_TEXT(card, 4, "setroutg"); 622 618 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETRTG, prot); 619 + if (!iob) 620 + return -ENOMEM; 623 621 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 624 622 cmd->data.setrtg.type = (type); 625 623 rc = qeth_send_ipa_cmd(card, iob, NULL, NULL); ··· 1055 1049 QETH_CARD_TEXT(card, 4, "getasscm"); 1056 1050 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SETASSPARMS, prot); 1057 1051 1058 - cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 1059 - cmd->data.setassparms.hdr.assist_no = ipa_func; 1060 - cmd->data.setassparms.hdr.length = 8 + len; 1061 - cmd->data.setassparms.hdr.command_code = cmd_code; 1062 - cmd->data.setassparms.hdr.return_code = 0; 1063 - cmd->data.setassparms.hdr.seq_no = 0; 1052 + if (iob) { 1053 + cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 1054 + cmd->data.setassparms.hdr.assist_no = ipa_func; 1055 + cmd->data.setassparms.hdr.length = 8 + len; 1056 + cmd->data.setassparms.hdr.command_code = cmd_code; 1057 + cmd->data.setassparms.hdr.return_code = 0; 1058 + cmd->data.setassparms.hdr.seq_no = 0; 1059 + } 1064 1060 1065 1061 return iob; 1066 1062 } ··· 1098 1090 QETH_CARD_TEXT(card, 4, "simassp6"); 1099 1091 iob = qeth_l3_get_setassparms_cmd(card, ipa_func, cmd_code, 1100 
1092 0, QETH_PROT_IPV6); 1093 + if (!iob) 1094 + return -ENOMEM; 1101 1095 rc = qeth_l3_send_setassparms(card, iob, 0, 0, 1102 1096 qeth_l3_default_setassparms_cb, NULL); 1103 1097 return rc; ··· 1118 1108 length = sizeof(__u32); 1119 1109 iob = qeth_l3_get_setassparms_cmd(card, ipa_func, cmd_code, 1120 1110 length, QETH_PROT_IPV4); 1111 + if (!iob) 1112 + return -ENOMEM; 1121 1113 rc = qeth_l3_send_setassparms(card, iob, length, data, 1122 1114 qeth_l3_default_setassparms_cb, NULL); 1123 1115 return rc; ··· 1506 1494 1507 1495 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_CREATE_ADDR, 1508 1496 QETH_PROT_IPV6); 1497 + if (!iob) 1498 + return -ENOMEM; 1509 1499 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 1510 1500 *((__u16 *) &cmd->data.create_destroy_addr.unique_id[6]) = 1511 1501 card->info.unique_id; ··· 1551 1537 1552 1538 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_CREATE_ADDR, 1553 1539 QETH_PROT_IPV6); 1540 + if (!iob) 1541 + return -ENOMEM; 1554 1542 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 1555 1543 *((__u16 *) &cmd->data.create_destroy_addr.unique_id[6]) = 1556 1544 card->info.unique_id; ··· 1627 1611 QETH_DBF_TEXT(SETUP, 2, "diagtrac"); 1628 1612 1629 1613 iob = qeth_get_ipacmd_buffer(card, IPA_CMD_SET_DIAG_ASS, 0); 1614 + if (!iob) 1615 + return -ENOMEM; 1630 1616 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 1631 1617 cmd->data.diagass.subcmd_len = 16; 1632 1618 cmd->data.diagass.subcmd = QETH_DIAGS_CMD_TRACE; ··· 2460 2442 IPA_CMD_ASS_ARP_QUERY_INFO, 2461 2443 sizeof(struct qeth_arp_query_data) - sizeof(char), 2462 2444 prot); 2445 + if (!iob) 2446 + return -ENOMEM; 2463 2447 cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE); 2464 2448 cmd->data.setassparms.data.query_arp.request_bits = 0x000F; 2465 2449 cmd->data.setassparms.data.query_arp.reply_bits = 0; ··· 2555 2535 IPA_CMD_ASS_ARP_ADD_ENTRY, 2556 2536 sizeof(struct qeth_arp_cache_entry), 2557 2537 QETH_PROT_IPV4); 2538 + if (!iob) 2539 
+ return -ENOMEM; 2558 2540 rc = qeth_l3_send_setassparms(card, iob, 2559 2541 sizeof(struct qeth_arp_cache_entry), 2560 2542 (unsigned long) entry, ··· 2596 2574 IPA_CMD_ASS_ARP_REMOVE_ENTRY, 2597 2575 12, 2598 2576 QETH_PROT_IPV4); 2577 + if (!iob) 2578 + return -ENOMEM; 2599 2579 rc = qeth_l3_send_setassparms(card, iob, 2600 2580 12, (unsigned long)buf, 2601 2581 qeth_l3_default_setassparms_cb, NULL); ··· 3286 3262 3287 3263 static int qeth_l3_setup_netdev(struct qeth_card *card) 3288 3264 { 3265 + int rc; 3266 + 3289 3267 if (card->info.type == QETH_CARD_TYPE_OSD || 3290 3268 card->info.type == QETH_CARD_TYPE_OSX) { 3291 3269 if ((card->info.link_type == QETH_LINK_TYPE_LANE_TR) || ··· 3319 3293 return -ENODEV; 3320 3294 card->dev->flags |= IFF_NOARP; 3321 3295 card->dev->netdev_ops = &qeth_l3_netdev_ops; 3322 - qeth_l3_iqd_read_initial_mac(card); 3296 + rc = qeth_l3_iqd_read_initial_mac(card); 3297 + if (rc) 3298 + return rc; 3323 3299 if (card->options.hsuid[0]) 3324 3300 memcpy(card->dev->perm_addr, card->options.hsuid, 9); 3325 3301 } else ··· 3388 3360 recover_flag = card->state; 3389 3361 rc = qeth_core_hardsetup_card(card); 3390 3362 if (rc) { 3391 - QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); 3363 + QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc); 3392 3364 rc = -ENODEV; 3393 3365 goto out_remove; 3394 3366 } ··· 3429 3401 contin: 3430 3402 rc = qeth_l3_setadapter_parms(card); 3431 3403 if (rc) 3432 - QETH_DBF_TEXT_(SETUP, 2, "2err%d", rc); 3404 + QETH_DBF_TEXT_(SETUP, 2, "2err%04x", rc); 3433 3405 if (!card->options.sniffer) { 3434 3406 rc = qeth_l3_start_ipassists(card); 3435 3407 if (rc) { ··· 3438 3410 } 3439 3411 rc = qeth_l3_setrouting_v4(card); 3440 3412 if (rc) 3441 - QETH_DBF_TEXT_(SETUP, 2, "4err%d", rc); 3413 + QETH_DBF_TEXT_(SETUP, 2, "4err%04x", rc); 3442 3414 rc = qeth_l3_setrouting_v6(card); 3443 3415 if (rc) 3444 - QETH_DBF_TEXT_(SETUP, 2, "5err%d", rc); 3416 + QETH_DBF_TEXT_(SETUP, 2, "5err%04x", rc); 3445 3417 } 3446 3418 
netif_tx_disable(card->dev); 3447 3419
+92
drivers/scsi/ipr.c
··· 683 683 ipr_reinit_ipr_cmnd(ipr_cmd); 684 684 ipr_cmd->u.scratch = 0; 685 685 ipr_cmd->sibling = NULL; 686 + ipr_cmd->eh_comp = NULL; 686 687 ipr_cmd->fast_done = fast_done; 687 688 init_timer(&ipr_cmd->timer); 688 689 } ··· 849 848 850 849 scsi_dma_unmap(ipr_cmd->scsi_cmd); 851 850 scsi_cmd->scsi_done(scsi_cmd); 851 + if (ipr_cmd->eh_comp) 852 + complete(ipr_cmd->eh_comp); 852 853 list_add_tail(&ipr_cmd->queue, &ipr_cmd->hrrq->hrrq_free_q); 853 854 } 854 855 ··· 4814 4811 return rc; 4815 4812 } 4816 4813 4814 + /** 4815 + * ipr_match_lun - Match function for specified LUN 4816 + * @ipr_cmd: ipr command struct 4817 + * @device: device to match (sdev) 4818 + * 4819 + * Returns: 4820 + * 1 if command matches sdev / 0 if command does not match sdev 4821 + **/ 4822 + static int ipr_match_lun(struct ipr_cmnd *ipr_cmd, void *device) 4823 + { 4824 + if (ipr_cmd->scsi_cmd && ipr_cmd->scsi_cmd->device == device) 4825 + return 1; 4826 + return 0; 4827 + } 4828 + 4829 + /** 4830 + * ipr_wait_for_ops - Wait for matching commands to complete 4831 + * @ipr_cmd: ipr command struct 4832 + * @device: device to match (sdev) 4833 + * @match: match function to use 4834 + * 4835 + * Returns: 4836 + * SUCCESS / FAILED 4837 + **/ 4838 + static int ipr_wait_for_ops(struct ipr_ioa_cfg *ioa_cfg, void *device, 4839 + int (*match)(struct ipr_cmnd *, void *)) 4840 + { 4841 + struct ipr_cmnd *ipr_cmd; 4842 + int wait; 4843 + unsigned long flags; 4844 + struct ipr_hrr_queue *hrrq; 4845 + signed long timeout = IPR_ABORT_TASK_TIMEOUT; 4846 + DECLARE_COMPLETION_ONSTACK(comp); 4847 + 4848 + ENTER; 4849 + do { 4850 + wait = 0; 4851 + 4852 + for_each_hrrq(hrrq, ioa_cfg) { 4853 + spin_lock_irqsave(hrrq->lock, flags); 4854 + list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) { 4855 + if (match(ipr_cmd, device)) { 4856 + ipr_cmd->eh_comp = &comp; 4857 + wait++; 4858 + } 4859 + } 4860 + spin_unlock_irqrestore(hrrq->lock, flags); 4861 + } 4862 + 4863 + if (wait) { 4864 + timeout = 
wait_for_completion_timeout(&comp, timeout); 4865 + 4866 + if (!timeout) { 4867 + wait = 0; 4868 + 4869 + for_each_hrrq(hrrq, ioa_cfg) { 4870 + spin_lock_irqsave(hrrq->lock, flags); 4871 + list_for_each_entry(ipr_cmd, &hrrq->hrrq_pending_q, queue) { 4872 + if (match(ipr_cmd, device)) { 4873 + ipr_cmd->eh_comp = NULL; 4874 + wait++; 4875 + } 4876 + } 4877 + spin_unlock_irqrestore(hrrq->lock, flags); 4878 + } 4879 + 4880 + if (wait) 4881 + dev_err(&ioa_cfg->pdev->dev, "Timed out waiting for aborted commands\n"); 4882 + LEAVE; 4883 + return wait ? FAILED : SUCCESS; 4884 + } 4885 + } 4886 + } while (wait); 4887 + 4888 + LEAVE; 4889 + return SUCCESS; 4890 + } 4891 + 4817 4892 static int ipr_eh_host_reset(struct scsi_cmnd *cmd) 4818 4893 { 4819 4894 struct ipr_ioa_cfg *ioa_cfg; ··· 5111 5030 static int ipr_eh_dev_reset(struct scsi_cmnd *cmd) 5112 5031 { 5113 5032 int rc; 5033 + struct ipr_ioa_cfg *ioa_cfg; 5034 + 5035 + ioa_cfg = (struct ipr_ioa_cfg *) cmd->device->host->hostdata; 5114 5036 5115 5037 spin_lock_irq(cmd->device->host->host_lock); 5116 5038 rc = __ipr_eh_dev_reset(cmd); 5117 5039 spin_unlock_irq(cmd->device->host->host_lock); 5040 + 5041 + if (rc == SUCCESS) 5042 + rc = ipr_wait_for_ops(ioa_cfg, cmd->device, ipr_match_lun); 5118 5043 5119 5044 return rc; 5120 5045 } ··· 5321 5234 { 5322 5235 unsigned long flags; 5323 5236 int rc; 5237 + struct ipr_ioa_cfg *ioa_cfg; 5324 5238 5325 5239 ENTER; 5240 + 5241 + ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata; 5326 5242 5327 5243 spin_lock_irqsave(scsi_cmd->device->host->host_lock, flags); 5328 5244 rc = ipr_cancel_op(scsi_cmd); 5329 5245 spin_unlock_irqrestore(scsi_cmd->device->host->host_lock, flags); 5330 5246 5247 + if (rc == SUCCESS) 5248 + rc = ipr_wait_for_ops(ioa_cfg, scsi_cmd->device, ipr_match_lun); 5331 5249 LEAVE; 5332 5250 return rc; 5333 5251 }
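ipr_wait_for_ops() above scans the pending queues for commands matching the device, attaches a completion to each match, and sleeps until they finish. A userspace model of the matching pass (the real code uses struct completion and wait_for_completion_timeout() instead of counting):

```c
#include <stddef.h>

struct cmd { void *device; int done; };

/* Mirrors ipr_match_lun(): does this command target the given device? */
static int match_lun(struct cmd *c, void *device)
{
    return c->device == device;
}

/* Walk the pending commands and count those that match and are still
 * outstanding; the kernel version marks each with a completion and
 * waits rather than polling. */
static int count_outstanding(struct cmd *cmds, size_t n, void *device,
                             int (*match)(struct cmd *, void *))
{
    int wait = 0;
    size_t i;

    for (i = 0; i < n; i++)
        if (!cmds[i].done && match(&cmds[i], device))
            wait++;
    return wait;
}

/* Two commands pending on device A, one already completed. */
static int demo(void)
{
    static int dev_a, dev_b;
    struct cmd cmds[] = {
        { &dev_a, 0 }, { &dev_b, 0 }, { &dev_a, 1 },
    };

    return count_outstanding(cmds, 3, &dev_a, match_lun);
}
```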
+1
drivers/scsi/ipr.h
··· 1606 1606 struct scsi_device *sdev; 1607 1607 } u; 1608 1608 1609 + struct completion *eh_comp; 1609 1610 struct ipr_hrr_queue *hrrq; 1610 1611 struct ipr_ioa_cfg *ioa_cfg; 1611 1612 };
+3 -10
drivers/scsi/scsi.c
··· 986 986 return -ENXIO; 987 987 if (!get_device(&sdev->sdev_gendev)) 988 988 return -ENXIO; 989 - /* We can fail this if we're doing SCSI operations 989 + /* We can fail try_module_get if we're doing SCSI operations 990 990 * from module exit (like cache flush) */ 991 - try_module_get(sdev->host->hostt->module); 991 + __module_get(sdev->host->hostt->module); 992 992 993 993 return 0; 994 994 } ··· 1004 1004 */ 1005 1005 void scsi_device_put(struct scsi_device *sdev) 1006 1006 { 1007 - #ifdef CONFIG_MODULE_UNLOAD 1008 - struct module *module = sdev->host->hostt->module; 1009 - 1010 - /* The module refcount will be zero if scsi_device_get() 1011 - * was called from a module removal routine */ 1012 - if (module && module_refcount(module) != 0) 1013 - module_put(module); 1014 - #endif 1007 + module_put(sdev->host->hostt->module); 1015 1008 put_device(&sdev->sdev_gendev); 1016 1009 } 1017 1010 EXPORT_SYMBOL(scsi_device_put);
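The scsi.c change replaces a fragile module_refcount() test in the put path with a symmetric pair: scsi_device_get() takes an unconditional reference via __module_get(), and scsi_device_put() always drops one via module_put(). A toy model of the invariant (a plain counter stands in for the module refcount):

```c
static int refcount;

static int device_get(void)
{
    refcount++;   /* __module_get(): take a reference unconditionally */
    return 0;
}

static void device_put(void)
{
    refcount--;   /* module_put(): always drop exactly what was taken */
}

/* Every get is balanced by a put, with no conditional "is the count
 * already zero?" guessing on release. */
static int demo(void)
{
    device_get();
    device_get();
    device_put();
    device_put();
    return refcount;
}
```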
+2 -2
drivers/scsi/scsi_debug.c
··· 1623 1623 req_opcode = cmd[3]; 1624 1624 req_sa = get_unaligned_be16(cmd + 4); 1625 1625 alloc_len = get_unaligned_be32(cmd + 6); 1626 - if (alloc_len < 4 && alloc_len > 0xffff) { 1626 + if (alloc_len < 4 || alloc_len > 0xffff) { 1627 1627 mk_sense_invalid_fld(scp, SDEB_IN_CDB, 6, -1); 1628 1628 return check_condition_result; 1629 1629 } ··· 1631 1631 a_len = 8192; 1632 1632 else 1633 1633 a_len = alloc_len; 1634 - arr = kzalloc((a_len < 256) ? 320 : a_len + 64, GFP_KERNEL); 1634 + arr = kzalloc((a_len < 256) ? 320 : a_len + 64, GFP_ATOMIC); 1635 1635 if (NULL == arr) { 1636 1636 mk_sense_buffer(scp, ILLEGAL_REQUEST, INSUFF_RES_ASC, 1637 1637 INSUFF_RES_ASCQ);
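The one-character scsi_debug fix above repairs a dead condition: `alloc_len < 4 && alloc_len > 0xffff` can never be true, so out-of-range allocation lengths slipped through unchecked; `||` rejects both ends of the range. A sketch of the corrected bounds check:

```c
/* Validate a CDB allocation length against [4, 0xffff]. The original
 * code joined the two comparisons with &&, a condition no value can
 * satisfy, so the check never fired; || rejects either violation. */
static int alloc_len_valid(unsigned int alloc_len)
{
    if (alloc_len < 4 || alloc_len > 0xffff)
        return 0;
    return 1;
}
```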
+11 -1
drivers/scsi/scsi_lib.c
··· 1143 1143 struct scsi_data_buffer *prot_sdb = cmd->prot_sdb; 1144 1144 int ivecs, count; 1145 1145 1146 - BUG_ON(prot_sdb == NULL); 1146 + if (prot_sdb == NULL) { 1147 + /* 1148 + * This can happen if someone (e.g. multipath) 1149 + * queues a command to a device on an adapter 1150 + * that does not support DIX. 1151 + */ 1152 + WARN_ON_ONCE(1); 1153 + error = BLKPREP_KILL; 1154 + goto err_exit; 1155 + } 1156 + 1147 1157 ivecs = blk_rq_count_integrity_sg(rq->q, rq->bio); 1148 1158 1149 1159 if (scsi_alloc_sgtable(prot_sdb, ivecs, is_mq)) {
-1
drivers/spi/spi-dw-mid.c
··· 271 271 iounmap(clk_reg); 272 272 273 273 dws->num_cs = 16; 274 - dws->fifo_len = 40; /* FIFO has 40 words buffer */ 275 274 276 275 #ifdef CONFIG_SPI_DW_MID_DMA 277 276 dws->dma_priv = kzalloc(sizeof(struct mid_dma), GFP_KERNEL);
+3 -3
drivers/spi/spi-dw.c
··· 621 621 if (!dws->fifo_len) { 622 622 u32 fifo; 623 623 624 - for (fifo = 2; fifo <= 257; fifo++) { 624 + for (fifo = 2; fifo <= 256; fifo++) { 625 625 dw_writew(dws, DW_SPI_TXFLTR, fifo); 626 626 if (fifo != dw_readw(dws, DW_SPI_TXFLTR)) 627 627 break; 628 628 } 629 629 630 - dws->fifo_len = (fifo == 257) ? 0 : fifo; 630 + dws->fifo_len = (fifo == 2) ? 0 : fifo - 1; 631 631 dw_writew(dws, DW_SPI_TXFLTR, 0); 632 632 } 633 633 } ··· 673 673 if (dws->dma_ops && dws->dma_ops->dma_init) { 674 674 ret = dws->dma_ops->dma_init(dws); 675 675 if (ret) { 676 - dev_warn(&master->dev, "DMA init failed\n"); 676 + dev_warn(dev, "DMA init failed\n"); 677 677 dws->dma_inited = 0; 678 678 } 679 679 }
+1 -1
drivers/spi/spi-pxa2xx.c
··· 546 546 cs_deassert(drv_data); 547 547 } 548 548 549 - spi_finalize_current_message(drv_data->master); 550 549 drv_data->cur_chip = NULL; 550 + spi_finalize_current_message(drv_data->master); 551 551 } 552 552 553 553 static void reset_sccr1(struct driver_data *drv_data)
+1 -1
drivers/spi/spi-sh-msiof.c
··· 82 82 #define MDR1_SYNCMD_LR 0x30000000 /* L/R mode */ 83 83 #define MDR1_SYNCAC_SHIFT 25 /* Sync Polarity (1 = Active-low) */ 84 84 #define MDR1_BITLSB_SHIFT 24 /* MSB/LSB First (1 = LSB first) */ 85 - #define MDR1_FLD_MASK 0x000000c0 /* Frame Sync Signal Interval (0-3) */ 85 + #define MDR1_FLD_MASK 0x0000000c /* Frame Sync Signal Interval (0-3) */ 86 86 #define MDR1_FLD_SHIFT 2 87 87 #define MDR1_XXSTP 0x00000001 /* Transmission/Reception Stop on FIFO */ 88 88 /* TMDR1 */
+1
drivers/staging/media/tlg2300/Kconfig
··· 1 1 config VIDEO_TLG2300 2 2 tristate "Telegent TLG2300 USB video capture support (Deprecated)" 3 3 depends on VIDEO_DEV && I2C && SND && DVB_CORE 4 + depends on MEDIA_USB_SUPPORT 4 5 select VIDEO_TUNER 5 6 select VIDEO_TVEEPROM 6 7 depends on RC_CORE
-1
drivers/watchdog/cadence_wdt.c
··· 503 503 .shutdown = cdns_wdt_shutdown, 504 504 .driver = { 505 505 .name = "cdns-wdt", 506 - .owner = THIS_MODULE, 507 506 .of_match_table = cdns_wdt_of_match, 508 507 .pm = &cdns_wdt_pm_ops, 509 508 },
+31 -9
drivers/watchdog/imx2_wdt.c
··· 52 52 #define IMX2_WDT_WRSR 0x04 /* Reset Status Register */ 53 53 #define IMX2_WDT_WRSR_TOUT (1 << 1) /* -> Reset due to Timeout */ 54 54 55 + #define IMX2_WDT_WMCR 0x08 /* Misc Register */ 56 + 55 57 #define IMX2_WDT_MAX_TIME 128 56 58 #define IMX2_WDT_DEFAULT_TIME 60 /* in seconds */ 57 59 ··· 276 274 277 275 imx2_wdt_ping_if_active(wdog); 278 276 277 + /* 278 + * Disable the watchdog power down counter at boot. Otherwise the power 279 + * down counter will pull down the #WDOG interrupt line for one clock 280 + * cycle. 281 + */ 282 + regmap_write(wdev->regmap, IMX2_WDT_WMCR, 0); 283 + 279 284 ret = watchdog_register_device(wdog); 280 285 if (ret) { 281 286 dev_err(&pdev->dev, "cannot register watchdog device\n"); ··· 336 327 } 337 328 338 329 #ifdef CONFIG_PM_SLEEP 339 - /* Disable watchdog if it is active during suspend */ 330 + /* Disable watchdog if it is active or non-active but still running */ 340 331 static int imx2_wdt_suspend(struct device *dev) 341 332 { 342 333 struct watchdog_device *wdog = dev_get_drvdata(dev); 343 334 struct imx2_wdt_device *wdev = watchdog_get_drvdata(wdog); 344 335 345 - imx2_wdt_set_timeout(wdog, IMX2_WDT_MAX_TIME); 346 - imx2_wdt_ping(wdog); 336 + /* The watchdog IP block is running */ 337 + if (imx2_wdt_is_running(wdev)) { 338 + imx2_wdt_set_timeout(wdog, IMX2_WDT_MAX_TIME); 339 + imx2_wdt_ping(wdog); 347 340 348 - /* Watchdog has been stopped but IP block is still running */ 349 - if (!watchdog_active(wdog) && imx2_wdt_is_running(wdev)) 350 - del_timer_sync(&wdev->timer); 341 + /* The watchdog is not active */ 342 + if (!watchdog_active(wdog)) 343 + del_timer_sync(&wdev->timer); 344 + } 351 345 352 346 clk_disable_unprepare(wdev->clk); 353 347 ··· 366 354 clk_prepare_enable(wdev->clk); 367 355 368 356 if (watchdog_active(wdog) && !imx2_wdt_is_running(wdev)) { 369 - /* Resumes from deep sleep we need restart 370 - * the watchdog again. 
357 + /* 358 + * If the watchdog is still active and resumes 359 + * from deep sleep state, need to restart the 360 + * watchdog again. 371 361 */ 372 362 imx2_wdt_setup(wdog); 373 363 imx2_wdt_set_timeout(wdog, wdog->timeout); 374 364 imx2_wdt_ping(wdog); 375 365 } else if (imx2_wdt_is_running(wdev)) { 366 + /* Resuming from non-deep sleep state. */ 367 + imx2_wdt_set_timeout(wdog, wdog->timeout); 376 368 imx2_wdt_ping(wdog); 377 - mod_timer(&wdev->timer, jiffies + wdog->timeout * HZ / 2); 369 + /* 370 + * But the watchdog is not active, then start 371 + * the timer again. 372 + */ 373 + if (!watchdog_active(wdog)) 374 + mod_timer(&wdev->timer, 375 + jiffies + wdog->timeout * HZ / 2); 378 376 } 379 377 380 378 return 0;
-1
drivers/watchdog/meson_wdt.c
··· 215 215 .remove = meson_wdt_remove, 216 216 .shutdown = meson_wdt_shutdown, 217 217 .driver = { 218 - .owner = THIS_MODULE, 219 218 .name = DRV_NAME, 220 219 .of_match_table = meson_wdt_dt_ids, 221 220 },
+1
fs/btrfs/ctree.h
··· 1171 1171 struct percpu_counter total_bytes_pinned; 1172 1172 1173 1173 struct list_head list; 1174 + /* Protected by the spinlock 'lock'. */ 1174 1175 struct list_head ro_bgs; 1175 1176 1176 1177 struct rw_semaphore groups_sem;
+1 -1
fs/btrfs/extent-tree.c
··· 9422 9422 * are still on the list after taking the semaphore 9423 9423 */ 9424 9424 list_del_init(&block_group->list); 9425 - list_del_init(&block_group->ro_list); 9426 9425 if (list_empty(&block_group->space_info->block_groups[index])) { 9427 9426 kobj = block_group->space_info->block_group_kobjs[index]; 9428 9427 block_group->space_info->block_group_kobjs[index] = NULL; ··· 9463 9464 btrfs_remove_free_space_cache(block_group); 9464 9465 9465 9466 spin_lock(&block_group->space_info->lock); 9467 + list_del_init(&block_group->ro_list); 9466 9468 block_group->space_info->total_bytes -= block_group->key.offset; 9467 9469 block_group->space_info->bytes_readonly -= block_group->key.offset; 9468 9470 block_group->space_info->disk_total -= block_group->key.offset * factor;
+1 -1
fs/btrfs/extent_io.c
··· 2190 2190 2191 2191 next = next_state(state); 2192 2192 2193 - failrec = (struct io_failure_record *)state->private; 2193 + failrec = (struct io_failure_record *)(unsigned long)state->private; 2194 2194 free_extent_state(state); 2195 2195 kfree(failrec); 2196 2196
+1 -1
fs/btrfs/scrub.c
··· 3053 3053 3054 3054 ppath = btrfs_alloc_path(); 3055 3055 if (!ppath) { 3056 - btrfs_free_path(ppath); 3056 + btrfs_free_path(path); 3057 3057 return -ENOMEM; 3058 3058 } 3059 3059
+12 -2
fs/btrfs/super.c
··· 1000 1000 */ 1001 1001 if (fs_info->pending_changes == 0) 1002 1002 return 0; 1003 + /* 1004 + * A non-blocking test if the fs is frozen. We must not 1005 + * start a new transaction here otherwise a deadlock 1006 + * happens. The pending operations are delayed to the 1007 + * next commit after thawing. 1008 + */ 1009 + if (__sb_start_write(sb, SB_FREEZE_WRITE, false)) 1010 + __sb_end_write(sb, SB_FREEZE_WRITE); 1011 + else 1012 + return 0; 1003 1013 trans = btrfs_start_transaction(root, 0); 1004 - } else { 1005 - return PTR_ERR(trans); 1006 1014 } 1015 + if (IS_ERR(trans)) 1016 + return PTR_ERR(trans); 1007 1017 } 1008 1018 return btrfs_commit_transaction(trans, root); 1009 1019 }
+1 -1
fs/btrfs/transaction.c
··· 2118 2118 unsigned long prev; 2119 2119 unsigned long bit; 2120 2120 2121 - prev = cmpxchg(&fs_info->pending_changes, 0, 0); 2121 + prev = xchg(&fs_info->pending_changes, 0); 2122 2122 if (!prev) 2123 2123 return; 2124 2124
+5 -16
fs/cifs/ioctl.c
··· 86 86 } 87 87 88 88 src_inode = file_inode(src_file.file); 89 + rc = -EINVAL; 90 + if (S_ISDIR(src_inode->i_mode)) 91 + goto out_fput; 89 92 90 93 /* 91 94 * Note: cifs case is easier than btrfs since server responsible for 92 95 * checks for proper open modes and file type and if it wants 93 96 * server could even support copy of range where source = target 94 97 */ 95 - 96 - /* so we do not deadlock racing two ioctls on same files */ 97 - if (target_inode < src_inode) { 98 - mutex_lock_nested(&target_inode->i_mutex, I_MUTEX_PARENT); 99 - mutex_lock_nested(&src_inode->i_mutex, I_MUTEX_CHILD); 100 - } else { 101 - mutex_lock_nested(&src_inode->i_mutex, I_MUTEX_PARENT); 102 - mutex_lock_nested(&target_inode->i_mutex, I_MUTEX_CHILD); 103 - } 98 + lock_two_nondirectories(target_inode, src_inode); 104 99 105 100 /* determine range to clone */ 106 101 rc = -EINVAL; ··· 119 124 out_unlock: 120 125 /* although unlocking in the reverse order from locking is not 121 126 strictly necessary here it is a little cleaner to be consistent */ 122 - if (target_inode < src_inode) { 123 - mutex_unlock(&src_inode->i_mutex); 124 - mutex_unlock(&target_inode->i_mutex); 125 - } else { 126 - mutex_unlock(&target_inode->i_mutex); 127 - mutex_unlock(&src_inode->i_mutex); 128 - } 127 + unlock_two_nondirectories(src_inode, target_inode); 129 128 out_fput: 130 129 fdput(src_file); 131 130 out_drop_write:
+2 -2
include/dt-bindings/interrupt-controller/arm-gic.h
··· 7 7 8 8 #include <dt-bindings/interrupt-controller/irq.h> 9 9 10 - /* interrupt specific cell 0 */ 10 + /* interrupt specifier cell 0 */ 11 11 12 12 #define GIC_SPI 0 13 13 #define GIC_PPI 1 14 14 15 15 /* 16 16 * Interrupt specifier cell 2. 17 - * The flaggs in irq.h are valid, plus those below. 17 + * The flags in irq.h are valid, plus those below. 18 18 */ 19 19 #define GIC_CPU_MASK_RAW(x) ((x) << 8) 20 20 #define GIC_CPU_MASK_SIMPLE(num) GIC_CPU_MASK_RAW((1 << (num)) - 1)
+2
include/linux/mfd/samsung/s2mps13.h
··· 59 59 S2MPS13_REG_B6CTRL, 60 60 S2MPS13_REG_B6OUT, 61 61 S2MPS13_REG_B7CTRL, 62 + S2MPS13_REG_B7SW, 62 63 S2MPS13_REG_B7OUT, 63 64 S2MPS13_REG_B8CTRL, 64 65 S2MPS13_REG_B8OUT, ··· 103 102 S2MPS13_REG_L26CTRL, 104 103 S2MPS13_REG_L27CTRL, 105 104 S2MPS13_REG_L28CTRL, 105 + S2MPS13_REG_L29CTRL, 106 106 S2MPS13_REG_L30CTRL, 107 107 S2MPS13_REG_L31CTRL, 108 108 S2MPS13_REG_L32CTRL,
+1 -1
include/linux/module.h
··· 444 444 #define module_put_and_exit(code) __module_put_and_exit(THIS_MODULE, code) 445 445 446 446 #ifdef CONFIG_MODULE_UNLOAD 447 - unsigned long module_refcount(struct module *mod); 447 + int module_refcount(struct module *mod); 448 448 void __symbol_put(const char *symbol); 449 449 #define symbol_put(x) __symbol_put(VMLINUX_SYMBOL_STR(x)) 450 450 void symbol_put_addr(void *addr);
+3 -1
include/linux/moduleloader.h
··· 26 26 void *module_alloc(unsigned long size); 27 27 28 28 /* Free memory returned from module_alloc. */ 29 - void module_free(struct module *mod, void *module_region); 29 + void module_memfree(void *module_region); 30 30 31 31 /* 32 32 * Apply the given relocation to the (simplified) ELF. Return -error ··· 82 82 /* Any cleanup needed when module leaves. */ 83 83 void module_arch_cleanup(struct module *mod); 84 84 85 + /* Any cleanup before freeing mod->module_init */ 86 + void module_arch_freeing_init(struct module *mod); 85 87 #endif
-5
include/linux/oom.h
··· 85 85 oom_killer_disabled = false; 86 86 } 87 87 88 - static inline bool oom_gfp_allowed(gfp_t gfp_mask) 89 - { 90 - return (gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY); 91 - } 92 - 93 88 extern struct task_struct *find_lock_task_mm(struct task_struct *p); 94 89 95 90 static inline bool task_will_free_mem(struct task_struct *task)
+3
include/linux/pci.h
··· 175 175 PCI_DEV_FLAGS_DMA_ALIAS_DEVFN = (__force pci_dev_flags_t) (1 << 4), 176 176 /* Use a PCIe-to-PCI bridge alias even if !pci_is_pcie */ 177 177 PCI_DEV_FLAG_PCIE_BRIDGE_ALIAS = (__force pci_dev_flags_t) (1 << 5), 178 + /* Do not use bus resets for device */ 179 + PCI_DEV_FLAGS_NO_BUS_RESET = (__force pci_dev_flags_t) (1 << 6), 178 180 }; 179 181 180 182 enum pci_irq_reroute_variant { ··· 1067 1065 void pci_bus_assign_resources(const struct pci_bus *bus); 1068 1066 void pci_bus_size_bridges(struct pci_bus *bus); 1069 1067 int pci_claim_resource(struct pci_dev *, int); 1068 + int pci_claim_bridge_resource(struct pci_dev *bridge, int i); 1070 1069 void pci_assign_unassigned_resources(void); 1071 1070 void pci_assign_unassigned_bridge_resources(struct pci_dev *bridge); 1072 1071 void pci_assign_unassigned_bus_resources(struct pci_bus *bus);
+12 -3
include/linux/printk.h
··· 10 10 extern const char linux_banner[]; 11 11 extern const char linux_proc_banner[]; 12 12 13 - extern char *log_buf_addr_get(void); 14 - extern u32 log_buf_len_get(void); 15 - 16 13 static inline int printk_get_level(const char *buffer) 17 14 { 18 15 if (buffer[0] == KERN_SOH_ASCII && buffer[1]) { ··· 160 163 161 164 extern void wake_up_klogd(void); 162 165 166 + char *log_buf_addr_get(void); 167 + u32 log_buf_len_get(void); 163 168 void log_buf_kexec_setup(void); 164 169 void __init setup_log_buf(int early); 165 170 void dump_stack_set_arch_desc(const char *fmt, ...); ··· 195 196 196 197 static inline void wake_up_klogd(void) 197 198 { 199 + } 200 + 201 + static inline char *log_buf_addr_get(void) 202 + { 203 + return NULL; 204 + } 205 + 206 + static inline u32 log_buf_len_get(void) 207 + { 208 + return 0; 198 209 } 199 210 200 211 static inline void log_buf_kexec_setup(void)
+13
include/linux/time.h
··· 110 110 return true; 111 111 } 112 112 113 + static inline bool timeval_valid(const struct timeval *tv) 114 + { 115 + /* Dates before 1970 are bogus */ 116 + if (tv->tv_sec < 0) 117 + return false; 118 + 119 + /* Can't have more microseconds than a second */ 120 + if (tv->tv_usec < 0 || tv->tv_usec >= USEC_PER_SEC) 121 + return false; 122 + 123 + return true; 124 + } 125 + 113 126 extern struct timespec timespec_trunc(struct timespec t, unsigned gran); 114 127 115 128 #define CURRENT_TIME (current_kernel_time())
+6 -5
include/net/ip.h
··· 39 39 struct ip_options opt; /* Compiled IP options */ 40 40 unsigned char flags; 41 41 42 - #define IPSKB_FORWARDED 1 43 - #define IPSKB_XFRM_TUNNEL_SIZE 2 44 - #define IPSKB_XFRM_TRANSFORMED 4 45 - #define IPSKB_FRAG_COMPLETE 8 46 - #define IPSKB_REROUTED 16 42 + #define IPSKB_FORWARDED BIT(0) 43 + #define IPSKB_XFRM_TUNNEL_SIZE BIT(1) 44 + #define IPSKB_XFRM_TRANSFORMED BIT(2) 45 + #define IPSKB_FRAG_COMPLETE BIT(3) 46 + #define IPSKB_REROUTED BIT(4) 47 + #define IPSKB_DOREDIRECT BIT(5) 47 48 48 49 u16 frag_max_size; 49 50 };
+9 -7
include/trace/events/kvm.h
··· 146 146 147 147 #if defined(CONFIG_HAVE_KVM_IRQFD) 148 148 149 + #ifdef kvm_irqchips 150 + #define kvm_ack_irq_string "irqchip %s pin %u" 151 + #define kvm_ack_irq_parm __print_symbolic(__entry->irqchip, kvm_irqchips), __entry->pin 152 + #else 153 + #define kvm_ack_irq_string "irqchip %d pin %u" 154 + #define kvm_ack_irq_parm __entry->irqchip, __entry->pin 155 + #endif 156 + 149 157 TRACE_EVENT(kvm_ack_irq, 150 158 TP_PROTO(unsigned int irqchip, unsigned int pin), 151 159 TP_ARGS(irqchip, pin), ··· 168 160 __entry->pin = pin; 169 161 ), 170 162 171 - #ifdef kvm_irqchips 172 - TP_printk("irqchip %s pin %u", 173 - __print_symbolic(__entry->irqchip, kvm_irqchips), 174 - __entry->pin) 175 - #else 176 - TP_printk("irqchip %d pin %u", __entry->irqchip, __entry->pin) 177 - #endif 163 + TP_printk(kvm_ack_irq_string, kvm_ack_irq_parm) 178 164 ); 179 165 180 166 #endif /* defined(CONFIG_HAVE_KVM_IRQFD) */
+1 -1
kernel/bpf/core.c
··· 163 163 164 164 void bpf_jit_binary_free(struct bpf_binary_header *hdr) 165 165 { 166 - module_free(NULL, hdr); 166 + module_memfree(hdr); 167 167 } 168 168 #endif /* CONFIG_BPF_JIT */ 169 169
+17 -8
kernel/bpf/syscall.c
··· 150 150 int ufd = attr->map_fd; 151 151 struct fd f = fdget(ufd); 152 152 struct bpf_map *map; 153 - void *key, *value; 153 + void *key, *value, *ptr; 154 154 int err; 155 155 156 156 if (CHECK_ATTR(BPF_MAP_LOOKUP_ELEM)) ··· 169 169 if (copy_from_user(key, ukey, map->key_size) != 0) 170 170 goto free_key; 171 171 172 - err = -ENOENT; 173 - rcu_read_lock(); 174 - value = map->ops->map_lookup_elem(map, key); 172 + err = -ENOMEM; 173 + value = kmalloc(map->value_size, GFP_USER); 175 174 if (!value) 176 - goto err_unlock; 175 + goto free_key; 176 + 177 + rcu_read_lock(); 178 + ptr = map->ops->map_lookup_elem(map, key); 179 + if (ptr) 180 + memcpy(value, ptr, map->value_size); 181 + rcu_read_unlock(); 182 + 183 + err = -ENOENT; 184 + if (!ptr) 185 + goto free_value; 177 186 178 187 err = -EFAULT; 179 188 if (copy_to_user(uvalue, value, map->value_size) != 0) 180 - goto err_unlock; 189 + goto free_value; 181 190 182 191 err = 0; 183 192 184 - err_unlock: 185 - rcu_read_unlock(); 193 + free_value: 194 + kfree(value); 186 195 free_key: 187 196 kfree(key); 188 197 err_put:
+1 -1
kernel/cgroup.c
··· 1909 1909 * 1910 1910 * And don't kill the default root. 1911 1911 */ 1912 - if (css_has_online_children(&root->cgrp.self) || 1912 + if (!list_empty(&root->cgrp.self.children) || 1913 1913 root == &cgrp_dfl_root) 1914 1914 cgroup_put(&root->cgrp); 1915 1915 else
+1 -1
kernel/debug/kdb/kdb_main.c
··· 2023 2023 kdb_printf("%-20s%8u 0x%p ", mod->name, 2024 2024 mod->core_size, (void *)mod); 2025 2025 #ifdef CONFIG_MODULE_UNLOAD 2026 - kdb_printf("%4ld ", module_refcount(mod)); 2026 + kdb_printf("%4d ", module_refcount(mod)); 2027 2027 #endif 2028 2028 if (mod->state == MODULE_STATE_GOING) 2029 2029 kdb_printf(" (Unloading)");
+1 -1
kernel/kprobes.c
··· 127 127 128 128 static void free_insn_page(void *page) 129 129 { 130 - module_free(NULL, page); 130 + module_memfree(page); 131 131 } 132 132 133 133 struct kprobe_insn_cache kprobe_insn_slots = {
+68 -23
kernel/module.c
··· 772 772 return 0; 773 773 } 774 774 775 - unsigned long module_refcount(struct module *mod) 775 + /** 776 + * module_refcount - return the refcount or -1 if unloading 777 + * 778 + * @mod: the module we're checking 779 + * 780 + * Returns: 781 + * -1 if the module is in the process of unloading 782 + * otherwise the number of references in the kernel to the module 783 + */ 784 + int module_refcount(struct module *mod) 776 785 { 777 - return (unsigned long)atomic_read(&mod->refcnt) - MODULE_REF_BASE; 786 + return atomic_read(&mod->refcnt) - MODULE_REF_BASE; 778 787 } 779 788 EXPORT_SYMBOL(module_refcount); 780 789 ··· 865 856 struct module_use *use; 866 857 int printed_something = 0; 867 858 868 - seq_printf(m, " %lu ", module_refcount(mod)); 859 + seq_printf(m, " %i ", module_refcount(mod)); 869 860 870 861 /* 871 862 * Always include a trailing , so userspace can differentiate ··· 917 908 static ssize_t show_refcnt(struct module_attribute *mattr, 918 909 struct module_kobject *mk, char *buffer) 919 910 { 920 - return sprintf(buffer, "%lu\n", module_refcount(mk->mod)); 911 + return sprintf(buffer, "%i\n", module_refcount(mk->mod)); 921 912 } 922 913 923 914 static struct module_attribute modinfo_refcnt = ··· 1804 1795 static void unset_module_init_ro_nx(struct module *mod) { } 1805 1796 #endif 1806 1797 1807 - void __weak module_free(struct module *mod, void *module_region) 1798 + void __weak module_memfree(void *module_region) 1808 1799 { 1809 1800 vfree(module_region); 1810 1801 } 1811 1802 1812 1803 void __weak module_arch_cleanup(struct module *mod) 1804 + { 1805 + } 1806 + 1807 + void __weak module_arch_freeing_init(struct module *mod) 1813 1808 { 1814 1809 } 1815 1810 ··· 1854 1841 1855 1842 /* This may be NULL, but that's OK */ 1856 1843 unset_module_init_ro_nx(mod); 1857 - module_free(mod, mod->module_init); 1844 + module_arch_freeing_init(mod); 1845 + module_memfree(mod->module_init); 1858 1846 kfree(mod->args); 1859 1847 percpu_modfree(mod); 1860 1848 
··· 1864 1850 1865 1851 /* Finally, free the core (containing the module structure) */ 1866 1852 unset_module_core_ro_nx(mod); 1867 - module_free(mod, mod->module_core); 1853 + module_memfree(mod->module_core); 1868 1854 1869 1855 #ifdef CONFIG_MPU 1870 1856 update_protections(current->mm); ··· 2799 2785 */ 2800 2786 kmemleak_ignore(ptr); 2801 2787 if (!ptr) { 2802 - module_free(mod, mod->module_core); 2788 + module_memfree(mod->module_core); 2803 2789 return -ENOMEM; 2804 2790 } 2805 2791 memset(ptr, 0, mod->init_size); ··· 2944 2930 static void module_deallocate(struct module *mod, struct load_info *info) 2945 2931 { 2946 2932 percpu_modfree(mod); 2947 - module_free(mod, mod->module_init); 2948 - module_free(mod, mod->module_core); 2933 + module_arch_freeing_init(mod); 2934 + module_memfree(mod->module_init); 2935 + module_memfree(mod->module_core); 2949 2936 } 2950 2937 2951 2938 int __weak module_finalize(const Elf_Ehdr *hdr, ··· 2998 2983 #endif 2999 2984 } 3000 2985 2986 + /* For freeing module_init on success, in case kallsyms traversing */ 2987 + struct mod_initfree { 2988 + struct rcu_head rcu; 2989 + void *module_init; 2990 + }; 2991 + 2992 + static void do_free_init(struct rcu_head *head) 2993 + { 2994 + struct mod_initfree *m = container_of(head, struct mod_initfree, rcu); 2995 + module_memfree(m->module_init); 2996 + kfree(m); 2997 + } 2998 + 3001 2999 /* This is where the real work happens */ 3002 3000 static int do_init_module(struct module *mod) 3003 3001 { 3004 3002 int ret = 0; 3003 + struct mod_initfree *freeinit; 3004 + 3005 + freeinit = kmalloc(sizeof(*freeinit), GFP_KERNEL); 3006 + if (!freeinit) { 3007 + ret = -ENOMEM; 3008 + goto fail; 3009 + } 3010 + freeinit->module_init = mod->module_init; 3005 3011 3006 3012 /* 3007 3013 * We want to find out whether @mod uses async during init. Clear ··· 3035 2999 if (mod->init != NULL) 3036 3000 ret = do_one_initcall(mod->init); 3037 3001 if (ret < 0) { 3038 - /* 3039 - * Init routine failed: abort. 
Try to protect us from 3040 - * buggy refcounters. 3041 - */ 3042 - mod->state = MODULE_STATE_GOING; 3043 - synchronize_sched(); 3044 - module_put(mod); 3045 - blocking_notifier_call_chain(&module_notify_list, 3046 - MODULE_STATE_GOING, mod); 3047 - free_module(mod); 3048 - wake_up_all(&module_wq); 3049 - return ret; 3002 + goto fail_free_freeinit; 3050 3003 } 3051 3004 if (ret > 0) { 3052 3005 pr_warn("%s: '%s'->init suspiciously returned %d, it should " ··· 3080 3055 mod->strtab = mod->core_strtab; 3081 3056 #endif 3082 3057 unset_module_init_ro_nx(mod); 3083 - module_free(mod, mod->module_init); 3058 + module_arch_freeing_init(mod); 3084 3059 mod->module_init = NULL; 3085 3060 mod->init_size = 0; 3086 3061 mod->init_ro_size = 0; 3087 3062 mod->init_text_size = 0; 3063 + /* 3064 + * We want to free module_init, but be aware that kallsyms may be 3065 + * walking this with preempt disabled. In all the failure paths, 3066 + * we call synchronize_rcu/synchronize_sched, but we don't want 3067 + * to slow down the success path, so use actual RCU here. 3068 + */ 3069 + call_rcu(&freeinit->rcu, do_free_init); 3088 3070 mutex_unlock(&module_mutex); 3089 3071 wake_up_all(&module_wq); 3090 3072 3091 3073 return 0; 3074 + 3075 + fail_free_freeinit: 3076 + kfree(freeinit); 3077 + fail: 3078 + /* Try to protect us from buggy refcounters. */ 3079 + mod->state = MODULE_STATE_GOING; 3080 + synchronize_sched(); 3081 + module_put(mod); 3082 + blocking_notifier_call_chain(&module_notify_list, 3083 + MODULE_STATE_GOING, mod); 3084 + free_module(mod); 3085 + wake_up_all(&module_wq); 3086 + return ret; 3092 3087 } 3093 3088 3094 3089 static int may_init_module(void)
+3
kernel/params.c
··· 642 642 mk->mp->grp.attrs = new_attrs; 643 643 644 644 /* Tack new one on the end. */ 645 + memset(&mk->mp->attrs[mk->mp->num], 0, sizeof(mk->mp->attrs[0])); 645 646 sysfs_attr_init(&mk->mp->attrs[mk->mp->num].mattr.attr); 646 647 mk->mp->attrs[mk->mp->num].param = kp; 647 648 mk->mp->attrs[mk->mp->num].mattr.show = param_attr_show; 648 649 /* Do not allow runtime DAC changes to make param writable. */ 649 650 if ((kp->perm & (S_IWUSR | S_IWGRP | S_IWOTH)) != 0) 650 651 mk->mp->attrs[mk->mp->num].mattr.store = param_attr_store; 652 + else 653 + mk->mp->attrs[mk->mp->num].mattr.store = NULL; 651 654 mk->mp->attrs[mk->mp->num].mattr.attr.name = (char *)name; 652 655 mk->mp->attrs[mk->mp->num].mattr.attr.mode = kp->perm; 653 656 mk->mp->num++;
+4
kernel/sys.c
··· 2210 2210 up_write(&me->mm->mmap_sem); 2211 2211 break; 2212 2212 case PR_MPX_ENABLE_MANAGEMENT: 2213 + if (arg2 || arg3 || arg4 || arg5) 2214 + return -EINVAL; 2213 2215 error = MPX_ENABLE_MANAGEMENT(me); 2214 2216 break; 2215 2217 case PR_MPX_DISABLE_MANAGEMENT: 2218 + if (arg2 || arg3 || arg4 || arg5) 2219 + return -EINVAL; 2216 2220 error = MPX_DISABLE_MANAGEMENT(me); 2217 2221 break; 2218 2222 default:
+7
kernel/time/ntp.c
··· 633 633 if ((txc->modes & ADJ_SETOFFSET) && (!capable(CAP_SYS_TIME))) 634 634 return -EPERM; 635 635 636 + if (txc->modes & ADJ_FREQUENCY) { 637 + if (LONG_MIN / PPM_SCALE > txc->freq) 638 + return -EINVAL; 639 + if (LONG_MAX / PPM_SCALE < txc->freq) 640 + return -EINVAL; 641 + } 642 + 636 643 return 0; 637 644 } 638 645
+4
kernel/time/time.c
··· 196 196 if (tv) { 197 197 if (copy_from_user(&user_tv, tv, sizeof(*tv))) 198 198 return -EFAULT; 199 + 200 + if (!timeval_valid(&user_tv)) 201 + return -EINVAL; 202 + 199 203 new_ts.tv_sec = user_tv.tv_sec; 200 204 new_ts.tv_nsec = user_tv.tv_usec * NSEC_PER_USEC; 201 205 }
+2 -2
mm/memcontrol.c
··· 1477 1477 1478 1478 pr_info("Task in "); 1479 1479 pr_cont_cgroup_path(task_cgroup(p, memory_cgrp_id)); 1480 - pr_info(" killed as a result of limit of "); 1480 + pr_cont(" killed as a result of limit of "); 1481 1481 pr_cont_cgroup_path(memcg->css.cgroup); 1482 - pr_info("\n"); 1482 + pr_cont("\n"); 1483 1483 1484 1484 rcu_read_unlock(); 1485 1485
+35 -47
mm/page_alloc.c
··· 2332 2332 __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order, 2333 2333 struct zonelist *zonelist, enum zone_type high_zoneidx, 2334 2334 nodemask_t *nodemask, struct zone *preferred_zone, 2335 - int classzone_idx, int migratetype) 2335 + int classzone_idx, int migratetype, unsigned long *did_some_progress) 2336 2336 { 2337 2337 struct page *page; 2338 2338 2339 - /* Acquire the per-zone oom lock for each zone */ 2339 + *did_some_progress = 0; 2340 + 2341 + if (oom_killer_disabled) 2342 + return NULL; 2343 + 2344 + /* 2345 + * Acquire the per-zone oom lock for each zone. If that 2346 + * fails, somebody else is making progress for us. 2347 + */ 2340 2348 if (!oom_zonelist_trylock(zonelist, gfp_mask)) { 2349 + *did_some_progress = 1; 2341 2350 schedule_timeout_uninterruptible(1); 2342 2351 return NULL; 2343 2352 } ··· 2372 2363 goto out; 2373 2364 2374 2365 if (!(gfp_mask & __GFP_NOFAIL)) { 2366 + /* Coredumps can quickly deplete all memory reserves */ 2367 + if (current->flags & PF_DUMPCORE) 2368 + goto out; 2375 2369 /* The OOM killer will not help higher order allocs */ 2376 2370 if (order > PAGE_ALLOC_COSTLY_ORDER) 2377 2371 goto out; 2378 2372 /* The OOM killer does not needlessly kill tasks for lowmem */ 2379 2373 if (high_zoneidx < ZONE_NORMAL) 2374 + goto out; 2375 + /* The OOM killer does not compensate for light reclaim */ 2376 + if (!(gfp_mask & __GFP_FS)) 2380 2377 goto out; 2381 2378 /* 2382 2379 * GFP_THISNODE contains __GFP_NORETRY and we never hit this. 
··· 2396 2381 } 2397 2382 /* Exhausted what can be done so it's blamo time */ 2398 2383 out_of_memory(zonelist, gfp_mask, order, nodemask, false); 2399 - 2384 + *did_some_progress = 1; 2400 2385 out: 2401 2386 oom_zonelist_unlock(zonelist, gfp_mask); 2402 2387 return page; ··· 2673 2658 (gfp_mask & GFP_THISNODE) == GFP_THISNODE) 2674 2659 goto nopage; 2675 2660 2676 - restart: 2661 + retry: 2677 2662 if (!(gfp_mask & __GFP_NO_KSWAPD)) 2678 2663 wake_all_kswapds(order, zonelist, high_zoneidx, 2679 2664 preferred_zone, nodemask); ··· 2696 2681 classzone_idx = zonelist_zone_idx(preferred_zoneref); 2697 2682 } 2698 2683 2699 - rebalance: 2700 2684 /* This is the last chance, in general, before the goto nopage. */ 2701 2685 page = get_page_from_freelist(gfp_mask, nodemask, order, zonelist, 2702 2686 high_zoneidx, alloc_flags & ~ALLOC_NO_WATERMARKS, ··· 2802 2788 if (page) 2803 2789 goto got_pg; 2804 2790 2805 - /* 2806 - * If we failed to make any progress reclaiming, then we are 2807 - * running out of options and have to consider going OOM 2808 - */ 2809 - if (!did_some_progress) { 2810 - if (oom_gfp_allowed(gfp_mask)) { 2811 - if (oom_killer_disabled) 2812 - goto nopage; 2813 - /* Coredumps can quickly deplete all memory reserves */ 2814 - if ((current->flags & PF_DUMPCORE) && 2815 - !(gfp_mask & __GFP_NOFAIL)) 2816 - goto nopage; 2817 - page = __alloc_pages_may_oom(gfp_mask, order, 2818 - zonelist, high_zoneidx, 2819 - nodemask, preferred_zone, 2820 - classzone_idx, migratetype); 2821 - if (page) 2822 - goto got_pg; 2823 - 2824 - if (!(gfp_mask & __GFP_NOFAIL)) { 2825 - /* 2826 - * The oom killer is not called for high-order 2827 - * allocations that may fail, so if no progress 2828 - * is being made, there are no other options and 2829 - * retrying is unlikely to help. 
2830 - */ 2831 - if (order > PAGE_ALLOC_COSTLY_ORDER) 2832 - goto nopage; 2833 - /* 2834 - * The oom killer is not called for lowmem 2835 - * allocations to prevent needlessly killing 2836 - * innocent tasks. 2837 - */ 2838 - if (high_zoneidx < ZONE_NORMAL) 2839 - goto nopage; 2840 - } 2841 - 2842 - goto restart; 2843 - } 2844 - } 2845 - 2846 2791 /* Check if we should retry the allocation */ 2847 2792 pages_reclaimed += did_some_progress; 2848 2793 if (should_alloc_retry(gfp_mask, order, did_some_progress, 2849 2794 pages_reclaimed)) { 2795 + /* 2796 + * If we fail to make progress by freeing individual 2797 + * pages, but the allocation wants us to keep going, 2798 + * start OOM killing tasks. 2799 + */ 2800 + if (!did_some_progress) { 2801 + page = __alloc_pages_may_oom(gfp_mask, order, zonelist, 2802 + high_zoneidx, nodemask, 2803 + preferred_zone, classzone_idx, 2804 + migratetype,&did_some_progress); 2805 + if (page) 2806 + goto got_pg; 2807 + if (!did_some_progress) 2808 + goto nopage; 2809 + } 2850 2810 /* Wait for some write requests to complete then retry */ 2851 2811 wait_iff_congested(preferred_zone, BLK_RW_ASYNC, HZ/50); 2852 - goto rebalance; 2812 + goto retry; 2853 2813 } else { 2854 2814 /* 2855 2815 * High-order allocations do not necessarily loop after
+1 -1
mm/vmscan.c
··· 2656 2656 * should make reasonable progress. 2657 2657 */ 2658 2658 for_each_zone_zonelist_nodemask(zone, z, zonelist, 2659 - gfp_mask, nodemask) { 2659 + gfp_zone(gfp_mask), nodemask) { 2660 2660 if (zone_idx(zone) > ZONE_NORMAL) 2661 2661 continue; 2662 2662
+1
net/dsa/slave.c
··· 46 46 snprintf(ds->slave_mii_bus->id, MII_BUS_ID_SIZE, "dsa-%d:%.2x", 47 47 ds->index, ds->pd->sw_addr); 48 48 ds->slave_mii_bus->parent = ds->master_dev; 49 + ds->slave_mii_bus->phy_mask = ~ds->phys_mii_mask; 49 50 } 50 51 51 52
+2 -1
net/ipv4/ip_forward.c
··· 129 129 * We now generate an ICMP HOST REDIRECT giving the route 130 130 * we calculated. 131 131 */ 132 - if (rt->rt_flags&RTCF_DOREDIRECT && !opt->srr && !skb_sec_path(skb)) 132 + if (IPCB(skb)->flags & IPSKB_DOREDIRECT && !opt->srr && 133 + !skb_sec_path(skb)) 133 134 ip_rt_send_redirect(skb); 134 135 135 136 skb->priority = rt_tos2priority(iph->tos);
+4 -1
net/ipv4/ping.c
··· 966 966 967 967 sk = ping_lookup(net, skb, ntohs(icmph->un.echo.id)); 968 968 if (sk != NULL) { 969 + struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC); 970 + 969 971 pr_debug("rcv on socket %p\n", sk); 970 - ping_queue_rcv_skb(sk, skb_get(skb)); 972 + if (skb2) 973 + ping_queue_rcv_skb(sk, skb2); 971 974 sock_put(sk); 972 975 return true; 973 976 }
+5 -4
net/ipv4/route.c
···
1554 1554
1555 1555   do_cache = res->fi && !itag;
1556 1556   if (out_dev == in_dev && err && IN_DEV_TX_REDIRECTS(out_dev) &&
     1557 +     skb->protocol == htons(ETH_P_IP) &&
1557 1558       (IN_DEV_SHARED_MEDIA(out_dev) ||
1558      -      inet_addr_onlink(out_dev, saddr, FIB_RES_GW(*res)))) {
1559      -     flags |= RTCF_DOREDIRECT;
1560      -     do_cache = false;
1561      - }
     1559 +      inet_addr_onlink(out_dev, saddr, FIB_RES_GW(*res))))
     1560 +     IPCB(skb)->flags |= IPSKB_DOREDIRECT;
1562 1561
1563 1562   if (skb->protocol != htons(ETH_P_IP)) {
1564 1563       /* Not IP (i.e. ARP). Do not create route, if it is
···
2302 2303   r->rtm_flags = (rt->rt_flags & ~0xFFFF) | RTM_F_CLONED;
2303 2304   if (rt->rt_flags & RTCF_NOTIFY)
2304 2305       r->rtm_flags |= RTM_F_NOTIFY;
     2306 + if (IPCB(skb)->flags & IPSKB_DOREDIRECT)
     2307 +     r->rtm_flags |= RTCF_DOREDIRECT;
2305 2308
2306 2309   if (nla_put_be32(skb, RTA_DST, dst))
2307 2310       goto nla_put_failure;
+3 -1
net/ipv4/udp_diag.c
···
 99  99  s_slot = cb->args[0];
100 100  num = s_num = cb->args[1];
101 101
102     - for (slot = s_slot; slot <= table->mask; num = s_num = 0, slot++) {
    102 + for (slot = s_slot; slot <= table->mask; s_num = 0, slot++) {
103 103      struct sock *sk;
104 104      struct hlist_nulls_node *node;
105 105      struct udp_hslot *hslot = &table->hash[slot];
    106 +
    107 +     num = 0;
106 108
107 109      if (hlist_nulls_empty(&hslot->head))
108 110          continue;
+26 -19
net/ipv6/ip6_fib.c
···
 659  659      return 0;
 660  660  }
 661  661
      662 + static void fib6_purge_rt(struct rt6_info *rt, struct fib6_node *fn,
      663 +                           struct net *net)
      664 + {
      665 +     if (atomic_read(&rt->rt6i_ref) != 1) {
      666 +         /* This route is used as dummy address holder in some split
      667 +          * nodes. It is not leaked, but it still holds other resources,
      668 +          * which must be released in time. So, scan ascendant nodes
      669 +          * and replace dummy references to this route with references
      670 +          * to still alive ones.
      671 +          */
      672 +         while (fn) {
      673 +             if (!(fn->fn_flags & RTN_RTINFO) && fn->leaf == rt) {
      674 +                 fn->leaf = fib6_find_prefix(net, fn);
      675 +                 atomic_inc(&fn->leaf->rt6i_ref);
      676 +                 rt6_release(rt);
      677 +             }
      678 +             fn = fn->parent;
      679 +         }
      680 +         /* No more references are possible at this point. */
      681 +         BUG_ON(atomic_read(&rt->rt6i_ref) != 1);
      682 +     }
      683 + }
      684 +
 662  685  /*
 663  686   * Insert routing information in a node.
 664  687   */
···
 830  807      rt->dst.rt6_next = iter->dst.rt6_next;
 831  808      atomic_inc(&rt->rt6i_ref);
 832  809      inet6_rt_notify(RTM_NEWROUTE, rt, info);
 833      -     rt6_release(iter);
 834  810      if (!(fn->fn_flags & RTN_RTINFO)) {
 835  811          info->nl_net->ipv6.rt6_stats->fib_route_nodes++;
 836  812          fn->fn_flags |= RTN_RTINFO;
 837  813      }
      814 +     fib6_purge_rt(iter, fn, info->nl_net);
      815 +     rt6_release(iter);
 838  816  }
 839  817
 840  818  return 0;
···
1346 1322      fn = fib6_repair_tree(net, fn);
1347 1323  }
1348 1324
1349      - if (atomic_read(&rt->rt6i_ref) != 1) {
1350      -     /* This route is used as dummy address holder in some split
1351      -      * nodes. It is not leaked, but it still holds other resources,
1352      -      * which must be released in time. So, scan ascendant nodes
1353      -      * and replace dummy references to this route with references
1354      -      * to still alive ones.
1355      -      */
1356      -     while (fn) {
1357      -         if (!(fn->fn_flags & RTN_RTINFO) && fn->leaf == rt) {
1358      -             fn->leaf = fib6_find_prefix(net, fn);
1359      -             atomic_inc(&fn->leaf->rt6i_ref);
1360      -             rt6_release(rt);
1361      -         }
1362      -         fn = fn->parent;
1363      -     }
1364      -     /* No more references are possible at this point. */
1365      -     BUG_ON(atomic_read(&rt->rt6i_ref) != 1);
1366      - }
     1325 + fib6_purge_rt(rt, fn, net);
1367 1326
1368 1327  inet6_rt_notify(RTM_DELROUTE, rt, info);
1369 1328  rt6_release(rt);
+5 -1
net/ipv6/route.c
···
1242 1242      rt = net->ipv6.ip6_null_entry;
1243 1243  else if (rt->dst.error) {
1244 1244      rt = net->ipv6.ip6_null_entry;
1245      - } else if (rt == net->ipv6.ip6_null_entry) {
     1245 +     goto out;
     1246 + }
     1247 +
     1248 + if (rt == net->ipv6.ip6_null_entry) {
1246 1249      fn = fib6_backtrack(fn, &fl6->saddr);
1247 1250      if (fn)
1248 1251          goto restart;
1249 1252  }
1250 1253
     1254 + out:
1251 1255  dst_hold(&rt->dst);
1252 1256
1253 1257  read_unlock_bh(&table->tb6_lock);
+8 -2
net/ipv6/xfrm6_policy.c
···
130 130  {
131 131      struct flowi6 *fl6 = &fl->u.ip6;
132 132      int onlyproto = 0;
133     -     u16 offset = skb_network_header_len(skb);
134 133      const struct ipv6hdr *hdr = ipv6_hdr(skb);
    134 +     u16 offset = sizeof(*hdr);
135 135      struct ipv6_opt_hdr *exthdr;
136 136      const unsigned char *nh = skb_network_header(skb);
137     -     u8 nexthdr = nh[IP6CB(skb)->nhoff];
    137 +     u16 nhoff = IP6CB(skb)->nhoff;
138 138      int oif = 0;
    139 +     u8 nexthdr;
    140 +
    141 +     if (!nhoff)
    142 +         nhoff = offsetof(struct ipv6hdr, nexthdr);
    143 +
    144 +     nexthdr = nh[nhoff];
139 145
140 146      if (skb_dst(skb))
141 147          oif = skb_dst(skb)->dev->ifindex;
+4 -4
net/llc/sysctl_net_llc.c
···
18 18  {
19 19      .procname = "ack",
20 20      .data = &sysctl_llc2_ack_timeout,
21    -     .maxlen = sizeof(long),
   21 +     .maxlen = sizeof(sysctl_llc2_ack_timeout),
22 22      .mode = 0644,
23 23      .proc_handler = proc_dointvec_jiffies,
24 24  },
25 25  {
26 26      .procname = "busy",
27 27      .data = &sysctl_llc2_busy_timeout,
28    -     .maxlen = sizeof(long),
   28 +     .maxlen = sizeof(sysctl_llc2_busy_timeout),
29 29      .mode = 0644,
30 30      .proc_handler = proc_dointvec_jiffies,
31 31  },
32 32  {
33 33      .procname = "p",
34 34      .data = &sysctl_llc2_p_timeout,
35    -     .maxlen = sizeof(long),
   35 +     .maxlen = sizeof(sysctl_llc2_p_timeout),
36 36      .mode = 0644,
37 37      .proc_handler = proc_dointvec_jiffies,
38 38  },
39 39  {
40 40      .procname = "rej",
41 41      .data = &sysctl_llc2_rej_timeout,
42    -     .maxlen = sizeof(long),
   42 +     .maxlen = sizeof(sysctl_llc2_rej_timeout),
43 43      .mode = 0644,
44 44      .proc_handler = proc_dointvec_jiffies,
45 45  },
+15 -14
net/mac80211/pm.c
···
 86  86      }
 87  87  }
 88  88
 89     - /* tear down aggregation sessions and remove STAs */
 90     - mutex_lock(&local->sta_mtx);
 91     - list_for_each_entry(sta, &local->sta_list, list) {
 92     -     if (sta->uploaded) {
 93     -         enum ieee80211_sta_state state;
 94     -
 95     -         state = sta->sta_state;
 96     -         for (; state > IEEE80211_STA_NOTEXIST; state--)
 97     -             WARN_ON(drv_sta_state(local, sta->sdata, sta,
 98     -                                   state, state - 1));
 99     -     }
100     - }
101     - mutex_unlock(&local->sta_mtx);
102     -
103  89  /* remove all interfaces that were created in the driver */
104  90  list_for_each_entry(sdata, &local->interfaces, list) {
105  91      if (!ieee80211_sdata_running(sdata))
···
 96 110          continue;
 97 111      case NL80211_IFTYPE_STATION:
 98 112          ieee80211_mgd_quiesce(sdata);
    113 +         break;
    114 +     case NL80211_IFTYPE_WDS:
    115 +         /* tear down aggregation sessions and remove STAs */
    116 +         mutex_lock(&local->sta_mtx);
    117 +         sta = sdata->u.wds.sta;
    118 +         if (sta && sta->uploaded) {
    119 +             enum ieee80211_sta_state state;
    120 +
    121 +             state = sta->sta_state;
    122 +             for (; state > IEEE80211_STA_NOTEXIST; state--)
    123 +                 WARN_ON(drv_sta_state(local, sta->sdata,
    124 +                                       sta, state,
    125 +                                       state - 1));
    126 +         }
    127 +         mutex_unlock(&local->sta_mtx);
 99 128          break;
100 129      default:
101 130          break;
+1 -1
net/mac80211/rx.c
···
272 272  else if (rate && rate->flags & IEEE80211_RATE_ERP_G)
273 273      channel_flags |= IEEE80211_CHAN_OFDM | IEEE80211_CHAN_2GHZ;
274 274  else if (rate)
275     -     channel_flags |= IEEE80211_CHAN_OFDM | IEEE80211_CHAN_2GHZ;
    275 +     channel_flags |= IEEE80211_CHAN_CCK | IEEE80211_CHAN_2GHZ;
276 276  else
277 277      channel_flags |= IEEE80211_CHAN_2GHZ;
278 278  put_unaligned_le16(channel_flags, pos);
+14 -3
net/sched/cls_bpf.c
···
180 180  }
181 181
182 182  bpf_size = bpf_len * sizeof(*bpf_ops);
    183 + if (bpf_size != nla_len(tb[TCA_BPF_OPS])) {
    184 +     ret = -EINVAL;
    185 +     goto errout;
    186 + }
    187 +
183 188  bpf_ops = kzalloc(bpf_size, GFP_KERNEL);
184 189  if (bpf_ops == NULL) {
185 190      ret = -ENOMEM;
···
220 215                 struct cls_bpf_head *head)
221 216  {
222 217      unsigned int i = 0x80000000;
    218 +     u32 handle;
223 219
224 220      do {
225 221          if (++head->hgen == 0x7FFFFFFF)
226 222              head->hgen = 1;
227 223      } while (--i > 0 && cls_bpf_get(tp, head->hgen));
228     -     if (i == 0)
229     -         pr_err("Insufficient number of handles\n");
230     -
231     -     return i;
    225 +     if (unlikely(i == 0)) {
    226 +         pr_err("Insufficient number of handles\n");
    227 +         handle = 0;
    228 +     } else {
    229 +         handle = head->hgen;
    230 +     }
    231 +
    232 +     return handle;
232 233  }
233 234
234 235  static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
-1
net/sctp/associola.c
···
1182 1182  asoc->peer.peer_hmacs = new->peer.peer_hmacs;
1183 1183  new->peer.peer_hmacs = NULL;
1184 1184
1185      - sctp_auth_key_put(asoc->asoc_shared_key);
1186 1185  sctp_auth_asoc_init_active_key(asoc, GFP_ATOMIC);
1187 1186  }
1188 1187
-3
net/socket.c
···
869 869  static struct sock_iocb *alloc_sock_iocb(struct kiocb *iocb,
870 870                                           struct sock_iocb *siocb)
871 871  {
872     -     if (!is_sync_kiocb(iocb))
873     -         BUG();
874     -
875 872      siocb->kiocb = iocb;
876 873      iocb->private = siocb;
877 874      return siocb;
+4 -5
net/wireless/nl80211.c
···
2854 2854  if (!rdev->ops->get_key)
2855 2855      return -EOPNOTSUPP;
2856 2856
     2857 + if (!pairwise && mac_addr && !(rdev->wiphy.flags & WIPHY_FLAG_IBSS_RSN))
     2858 +     return -ENOENT;
     2859 +
2857 2860  msg = nlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
2858 2861  if (!msg)
2859 2862      return -ENOMEM;
···
2875 2872  if (mac_addr &&
2876 2873      nla_put(msg, NL80211_ATTR_MAC, ETH_ALEN, mac_addr))
2877 2874      goto nla_put_failure;
2878      -
2879      - if (pairwise && mac_addr &&
2880      -     !(rdev->wiphy.flags & WIPHY_FLAG_IBSS_RSN))
2881      -     return -ENOENT;
2882 2875
2883 2876  err = rdev_get_key(rdev, dev, key_idx, pairwise, mac_addr, &cookie,
2884 2877                     get_key_callback);
···
3046 3047  wdev_lock(dev->ieee80211_ptr);
3047 3048  err = nl80211_key_allowed(dev->ieee80211_ptr);
3048 3049
3049      - if (key.type == NL80211_KEYTYPE_PAIRWISE && mac_addr &&
     3050 + if (key.type == NL80211_KEYTYPE_GROUP && mac_addr &&
3050 3051      !(rdev->wiphy.flags & WIPHY_FLAG_IBSS_RSN))
3051 3052      err = -ENOENT;
+6
net/wireless/util.c
···
308 308      goto out;
309 309  }
310 310
    311 + if (ieee80211_is_mgmt(fc)) {
    312 +     if (ieee80211_has_order(fc))
    313 +         hdrlen += IEEE80211_HT_CTL_LEN;
    314 +     goto out;
    315 + }
    316 +
311 317  if (ieee80211_is_ctl(fc)) {
312 318      /*
313 319       * ACK and CTS are 10 bytes, all others 16. To see how
+2 -2
samples/bpf/test_maps.c
···
69 69
70 70  /* iterate over two elements */
71 71  assert(bpf_get_next_key(map_fd, &key, &next_key) == 0 &&
72    -        next_key == 2);
   72 +        (next_key == 1 || next_key == 2));
73 73  assert(bpf_get_next_key(map_fd, &next_key, &next_key) == 0 &&
74    -        next_key == 1);
   74 +        (next_key == 1 || next_key == 2));
75 75  assert(bpf_get_next_key(map_fd, &next_key, &next_key) == -1 &&
76 76         errno == ENOENT);
77 77