Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v4.0-rc7' into drm-next

Linux 4.0-rc7

Requested by Alex for fixes that -next needs.

Conflicts:
drivers/gpu/drm/i915/intel_sprite.c

+1510 -723
+3 -1
Documentation/devicetree/bindings/net/dsa/dsa.txt
···
 (DSA_MAX_SWITCHES).
 Each of these switch child nodes should have the following required properties:
 
-- reg			: Describes the switch address on the MII bus
+- reg			: Contains two fields. The first one describes the
+			  address on the MII bus. The second is the switch
+			  number that must be unique in cascaded configurations
 - #address-cells	: Must be 1
 - #size-cells		: Must be 0
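An illustrative child node for the new two-cell reg (node name and values are invented here, in the style of the examples dsa.txt itself carries):

	switch@0 {
		/* first cell: address on the MII bus,
		 * second cell: switch number within the cascade */
		reg = <16 0>;
		#address-cells = <1>;
		#size-cells = <0>;
	};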
+8
Documentation/input/alps.txt
···
 byte 4:    0   y6   y5   y4   y3   y2   y1   y0
 byte 5:    0   z6   z5   z4   z3   z2   z1   z0
 
+Protocol Version 2 DualPoint devices send standard PS/2 mouse packets for
+the DualPoint Stick.
+
 Dualpoint device -- interleaved packet format
 ---------------------------------------------
 
···
 byte 6:    0   y9   y8   y7    1    m    r    l
 byte 7:    0   y6   y5   y4   y3   y2   y1   y0
 byte 8:    0   z6   z5   z4   z3   z2   z1   z0
+
+Devices which use the interleaving format normally send standard PS/2 mouse
+packets for the DualPoint Stick + ALPS Absolute Mode packets for the
+touchpad, switching to the interleaved packet format when both the stick and
+the touchpad are used at the same time.
 
 ALPS Absolute Mode - Protocol Version 3
 ---------------------------------------
+6
Documentation/input/event-codes.txt
···
 The kernel does not provide button emulation for such devices but treats
 them as any other INPUT_PROP_BUTTONPAD device.
 
+INPUT_PROP_ACCELEROMETER
+-------------------------
+Directional axes on this device (absolute and/or relative x, y, z) represent
+accelerometer data. All other axes retain their meaning. A device must not mix
+regular directional axes and accelerometer axes on the same event node.
+
 Guidelines:
 ==========
 The guidelines below ensure proper single-touch and multi-finger functionality.
+6 -3
Documentation/input/multi-touch-protocol.txt
···
 
 The type of approaching tool. A lot of kernel drivers cannot distinguish
 between different tool types, such as a finger or a pen. In such cases, the
-event should be omitted. The protocol currently supports MT_TOOL_FINGER and
-MT_TOOL_PEN [2]. For type B devices, this event is handled by input core;
-drivers should instead use input_mt_report_slot_state().
+event should be omitted. The protocol currently supports MT_TOOL_FINGER,
+MT_TOOL_PEN, and MT_TOOL_PALM [2]. For type B devices, this event is handled
+by input core; drivers should instead use input_mt_report_slot_state().
+A contact's ABS_MT_TOOL_TYPE may change over time while still touching the
+device, because the firmware may not be able to determine which tool is being
+used when it first appears.
 
 ABS_MT_BLOB_ID
 
+14 -16
MAINTAINERS
···
 F:	include/uapi/linux/kfd_ioctl.h
 
 AMD MICROCODE UPDATE SUPPORT
-M:	Andreas Herrmann <herrmann.der.user@googlemail.com>
-L:	amd64-microcode@amd64.org
+M:	Borislav Petkov <bp@alien8.de>
 S:	Maintained
 F:	arch/x86/kernel/cpu/microcode/amd*
···
 F:	drivers/platform/x86/intel_menlow.c
 
 INTEL IA32 MICROCODE UPDATE SUPPORT
-M:	Tigran Aivazian <tigran@aivazian.fsnet.co.uk>
+M:	Borislav Petkov <bp@alien8.de>
 S:	Maintained
 F:	arch/x86/kernel/cpu/microcode/core*
 F:	arch/x86/kernel/cpu/microcode/intel*
···
 S:	Maintained
 F:	drivers/char/hw_random/ixp4xx-rng.c
 
-INTEL ETHERNET DRIVERS (e100/e1000/e1000e/fm10k/igb/igbvf/ixgb/ixgbe/ixgbevf/i40e/i40evf)
+INTEL ETHERNET DRIVERS
 M:	Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-M:	Jesse Brandeburg <jesse.brandeburg@intel.com>
-M:	Bruce Allan <bruce.w.allan@intel.com>
-M:	Carolyn Wyborny <carolyn.wyborny@intel.com>
-M:	Don Skidmore <donald.c.skidmore@intel.com>
-M:	Greg Rose <gregory.v.rose@intel.com>
-M:	Matthew Vick <matthew.vick@intel.com>
-M:	John Ronciak <john.ronciak@intel.com>
-M:	Mitch Williams <mitch.a.williams@intel.com>
-M:	Linux NICS <linux.nics@intel.com>
-L:	e1000-devel@lists.sourceforge.net
+R:	Jesse Brandeburg <jesse.brandeburg@intel.com>
+R:	Shannon Nelson <shannon.nelson@intel.com>
+R:	Carolyn Wyborny <carolyn.wyborny@intel.com>
+R:	Don Skidmore <donald.c.skidmore@intel.com>
+R:	Matthew Vick <matthew.vick@intel.com>
+R:	John Ronciak <john.ronciak@intel.com>
+R:	Mitch Williams <mitch.a.williams@intel.com>
+L:	intel-wired-lan@lists.osuosl.org
 W:	http://www.intel.com/support/feedback.htm
 W:	http://e1000.sourceforge.net/
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net.git
-T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-next.git
+Q:	http://patchwork.ozlabs.org/project/intel-wired-lan/list/
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/net-queue.git
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jkirsher/next-queue.git
 S:	Supported
 F:	Documentation/networking/e100.txt
 F:	Documentation/networking/e1000.txt
+1 -1
Makefile
···
 VERSION = 4
 PATCHLEVEL = 0
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Hurr durr I'ma sheep
 
 # *DOCUMENTATION*
+1 -1
arch/powerpc/include/asm/cputhreads.h
···
 
 static inline int cpu_nr_cores(void)
 {
-	return NR_CPUS >> threads_shift;
+	return nr_cpu_ids >> threads_shift;
 }
 
 static inline cpumask_t cpu_online_cores_map(void)
+5 -5
arch/x86/kernel/cpu/perf_event_intel.c
···
 	INTEL_UEVENT_CONSTRAINT(0x01c0, 0x2), /* INST_RETIRED.PREC_DIST */
 	INTEL_EVENT_CONSTRAINT(0xcd, 0x8), /* MEM_TRANS_RETIRED.LOAD_LATENCY */
 	/* CYCLE_ACTIVITY.CYCLES_L1D_PENDING */
-	INTEL_EVENT_CONSTRAINT(0x08a3, 0x4),
+	INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4),
 	/* CYCLE_ACTIVITY.STALLS_L1D_PENDING */
-	INTEL_EVENT_CONSTRAINT(0x0ca3, 0x4),
+	INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4),
 	/* CYCLE_ACTIVITY.CYCLES_NO_EXECUTE */
-	INTEL_EVENT_CONSTRAINT(0x04a3, 0xf),
+	INTEL_UEVENT_CONSTRAINT(0x04a3, 0xf),
 	EVENT_CONSTRAINT_END
 };
···
 	if (c)
 		return c;
 
-	c = intel_pebs_constraints(event);
+	c = intel_shared_regs_constraints(cpuc, event);
 	if (c)
 		return c;
 
-	c = intel_shared_regs_constraints(cpuc, event);
+	c = intel_pebs_constraints(event);
 	if (c)
 		return c;
 
+15 -1
arch/x86/kernel/entry_64.S
···
 	cmpq %r11,(EFLAGS-ARGOFFSET)(%rsp)	/* R11 == RFLAGS */
 	jne opportunistic_sysret_failed
 
-	testq $X86_EFLAGS_RF,%r11		/* sysret can't restore RF */
+	/*
+	 * SYSRET can't restore RF.  SYSRET can restore TF, but unlike IRET,
+	 * restoring TF results in a trap from userspace immediately after
+	 * SYSRET.  This would cause an infinite loop whenever #DB happens
+	 * with register state that satisfies the opportunistic SYSRET
+	 * conditions.  For example, single-stepping this user code:
+	 *
+	 *           movq	$stuck_here,%rcx
+	 *           pushfq
+	 *           popq %r11
+	 *   stuck_here:
+	 *
+	 * would never get past 'stuck_here'.
+	 */
+	testq	$(X86_EFLAGS_RF|X86_EFLAGS_TF), %r11
 	jnz opportunistic_sysret_failed
 
 	/* nothing to check for RSP */
+1 -1
arch/x86/kernel/kgdb.c
··· 72 72 { "bx", 8, offsetof(struct pt_regs, bx) }, 73 73 { "cx", 8, offsetof(struct pt_regs, cx) }, 74 74 { "dx", 8, offsetof(struct pt_regs, dx) }, 75 - { "si", 8, offsetof(struct pt_regs, dx) }, 75 + { "si", 8, offsetof(struct pt_regs, si) }, 76 76 { "di", 8, offsetof(struct pt_regs, di) }, 77 77 { "bp", 8, offsetof(struct pt_regs, bp) }, 78 78 { "sp", 8, offsetof(struct pt_regs, sp) },
+10
arch/x86/kernel/reboot.c
···
 		},
 	},
 
+	/* ASRock */
+	{	/* Handle problems with rebooting on ASRock Q1900DC-ITX */
+		.callback = set_pci_reboot,
+		.ident = "ASRock Q1900DC-ITX",
+		.matches = {
+			DMI_MATCH(DMI_BOARD_VENDOR, "ASRock"),
+			DMI_MATCH(DMI_BOARD_NAME, "Q1900DC-ITX"),
+		},
+	},
+
 	/* ASUS */
 	{	/* Handle problems with rebooting on ASUS P4S800 */
 		.callback = set_bios_reboot,
+9 -1
arch/x86/xen/p2m.c
···
 unsigned long xen_max_p2m_pfn __read_mostly;
 EXPORT_SYMBOL_GPL(xen_max_p2m_pfn);
 
+#ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
+#define P2M_LIMIT CONFIG_XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
+#else
+#define P2M_LIMIT 0
+#endif
+
 static DEFINE_SPINLOCK(p2m_update_lock);
 
 static unsigned long *p2m_mid_missing_mfn;
···
 void __init xen_vmalloc_p2m_tree(void)
 {
 	static struct vm_struct vm;
+	unsigned long p2m_limit;
 
+	p2m_limit = (phys_addr_t)P2M_LIMIT * 1024 * 1024 * 1024 / PAGE_SIZE;
 	vm.flags = VM_ALLOC;
-	vm.size = ALIGN(sizeof(unsigned long) * xen_max_p2m_pfn,
+	vm.size = ALIGN(sizeof(unsigned long) * max(xen_max_p2m_pfn, p2m_limit),
 			PMD_SIZE * PMDS_PER_MID_PAGE);
 	vm_area_register_early(&vm, PMD_SIZE * PMDS_PER_MID_PAGE);
 	pr_notice("p2m virtual area at %p, size is %lx\n", vm.addr, vm.size);
+3 -3
block/blk-settings.c
···
 				       b->physical_block_size);
 
 	t->io_min = max(t->io_min, b->io_min);
-	t->io_opt = lcm(t->io_opt, b->io_opt);
+	t->io_opt = lcm_not_zero(t->io_opt, b->io_opt);
 
 	t->cluster &= b->cluster;
 	t->discard_zeroes_data &= b->discard_zeroes_data;
···
 			b->raid_partial_stripes_expensive);
 
 	/* Find lowest common alignment_offset */
-	t->alignment_offset = lcm(t->alignment_offset, alignment)
+	t->alignment_offset = lcm_not_zero(t->alignment_offset, alignment)
 		% max(t->physical_block_size, t->io_min);
 
 	/* Verify that new alignment_offset is on a logical block boundary */
···
 			b->max_discard_sectors);
 		t->discard_granularity = max(t->discard_granularity,
 					     b->discard_granularity);
-		t->discard_alignment = lcm(t->discard_alignment, alignment) %
+		t->discard_alignment = lcm_not_zero(t->discard_alignment, alignment) %
 			t->discard_granularity;
 	}
 
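Background on the lcm_not_zero() switch: lcm(a, b) is 0 whenever either argument is 0, so a stacking layer that reports 0 ("no preference") used to wipe out the other layer's io_opt/alignment hint. A stand-alone user-space sketch of the two helpers, modelled on the kernel's lib/lcm.c semantics (not the kernel code itself):

#include <stdio.h>

/* User-space models of the kernel helpers (see lib/lcm.c). */
static unsigned long gcd(unsigned long a, unsigned long b)
{
	while (b) {
		unsigned long t = a % b;
		a = b;
		b = t;
	}
	return a;
}

static unsigned long lcm(unsigned long a, unsigned long b)
{
	return (a && b) ? a / gcd(a, b) * b : 0;
}

/* Like lcm(), but 0 means "no preference", so the other value wins. */
static unsigned long lcm_not_zero(unsigned long a, unsigned long b)
{
	unsigned long l = lcm(a, b);

	return l ? l : (a ? a : b);
}

int main(void)
{
	/* Stacked device: top reports no io_opt, bottom reports 4096. */
	printf("lcm:          %lu\n", lcm(0, 4096));          /* 0 -- hint lost */
	printf("lcm_not_zero: %lu\n", lcm_not_zero(0, 4096)); /* 4096 -- kept */
	return 0;
}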
+13 -2
drivers/ata/libata-core.c
··· 4204 4204 { "PIONEER DVD-RW DVR-216D", NULL, ATA_HORKAGE_NOSETXFER }, 4205 4205 4206 4206 /* devices that don't properly handle queued TRIM commands */ 4207 - { "Micron_M[56]*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4207 + { "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4208 4208 ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4209 - { "Crucial_CT*SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, }, 4209 + { "Crucial_CT*M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4210 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4211 + { "Micron_M5[15]0*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4212 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4213 + { "Crucial_CT*M550*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4214 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4215 + { "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | 4216 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4217 + { "Samsung SSD 850 PRO*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | 4218 + ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4210 4219 4211 4220 /* 4212 4221 * As defined, the DRAT (Deterministic Read After Trim) and RZAT ··· 4235 4226 */ 4236 4227 { "INTEL*SSDSC2MH*", NULL, 0, }, 4237 4228 4229 + { "Micron*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4230 + { "Crucial*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4238 4231 { "INTEL*SSD*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4239 4232 { "SSD*INTEL*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, }, 4240 4233 { "Samsung*SSD*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM, },
+1
drivers/dma/bcm2835-dma.c
···
 	 * c->desc is NULL and exit.)
 	 */
 	if (c->desc) {
+		bcm2835_dma_desc_free(&c->desc->vd);
 		c->desc = NULL;
 		bcm2835_dma_abort(c->chan_base);
 
+7
drivers/dma/dma-jz4740.c
···
 	kfree(container_of(vdesc, struct jz4740_dma_desc, vdesc));
 }
 
+#define JZ4740_DMA_BUSWIDTHS (BIT(DMA_SLAVE_BUSWIDTH_1_BYTE) | \
+	BIT(DMA_SLAVE_BUSWIDTH_2_BYTES) | BIT(DMA_SLAVE_BUSWIDTH_4_BYTES))
+
 static int jz4740_dma_probe(struct platform_device *pdev)
 {
 	struct jz4740_dmaengine_chan *chan;
···
 	dd->device_prep_dma_cyclic = jz4740_dma_prep_dma_cyclic;
 	dd->device_config = jz4740_dma_slave_config;
 	dd->device_terminate_all = jz4740_dma_terminate_all;
+	dd->src_addr_widths = JZ4740_DMA_BUSWIDTHS;
+	dd->dst_addr_widths = JZ4740_DMA_BUSWIDTHS;
+	dd->directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+	dd->residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
 	dd->dev = &pdev->dev;
 	INIT_LIST_HEAD(&dd->channels);
 
+7
drivers/dma/edma.c
···
 	 */
 	if (echan->edesc) {
 		int cyclic = echan->edesc->cyclic;
+
+		/*
+		 * free the running request descriptor
+		 * since it is not in any of the vdesc lists
+		 */
+		edma_desc_free(&echan->edesc->vdesc);
+
 		echan->edesc = NULL;
 		edma_stop(echan->ch_num);
 		/* Move the cyclic channel back to default queue */
+3 -1
drivers/dma/moxart-dma.c
···
 
 	spin_lock_irqsave(&ch->vc.lock, flags);
 
-	if (ch->desc)
+	if (ch->desc) {
+		moxart_dma_desc_free(&ch->desc->vd);
 		ch->desc = NULL;
+	}
 
 	ctrl = readl(ch->base + REG_OFF_CTRL);
 	ctrl &= ~(APB_DMA_ENABLE | APB_DMA_FIN_INT_EN | APB_DMA_ERR_INT_EN);
+1
drivers/dma/omap-dma.c
···
 	 * c->desc is NULL and exit.)
 	 */
 	if (c->desc) {
+		omap_dma_desc_free(&c->desc->vd);
 		c->desc = NULL;
 		/* Avoid stopping the dma twice */
 		if (!c->paused)
+7 -15
drivers/firmware/dmi_scan.c
···
 	int i = 0;
 
 	/*
-	 * Stop when we see all the items the table claimed to have
-	 * OR we run off the end of the table (also happens)
+	 * Stop when we have seen all the items the table claimed to have
+	 * (SMBIOS < 3.0 only) OR we reach an end-of-table marker OR we run
+	 * off the end of the table (should never happen but sometimes does
+	 * on bogus implementations.)
 	 */
-	while ((i < num) && (data - buf + sizeof(struct dmi_header)) <= len) {
+	while ((!num || i < num) &&
+	       (data - buf + sizeof(struct dmi_header)) <= len) {
 		const struct dmi_header *dm = (const struct dmi_header *)data;
 
 		/*
···
 	if (memcmp(buf, "_SM3_", 5) == 0 &&
 	    buf[6] < 32 && dmi_checksum(buf, buf[6])) {
 		dmi_ver = get_unaligned_be16(buf + 7);
+		dmi_num = 0;			/* No longer specified */
 		dmi_len = get_unaligned_le32(buf + 12);
 		dmi_base = get_unaligned_le64(buf + 16);
-
-		/*
-		 * The 64-bit SMBIOS 3.0 entry point no longer has a field
-		 * containing the number of structures present in the table.
-		 * Instead, it defines the table size as a maximum size, and
-		 * relies on the end-of-table structure type (#127) to be used
-		 * to signal the end of the table.
-		 * So let's define dmi_num as an upper bound as well: each
-		 * structure has a 4 byte header, so dmi_len / 4 is an upper
-		 * bound for the number of structures in the table.
-		 */
-		dmi_num = dmi_len / 4;
 
 		if (dmi_walk_early(dmi_decode) == 0) {
 			pr_info("SMBIOS %d.%d present.\n",
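Put differently: for a 64-bit SMBIOS 3.0 entry point, dmi_num is now 0 and the walk is bounded by dmi_len alone (plus the type-127 end-of-table record, which the full walker handles elsewhere). A condensed user-space model of just the loop condition (hypothetical helper name, not the kernel function):

#include <stddef.h>
#include <stdint.h>

struct dmi_header {
	uint8_t type;
	uint8_t length;
	uint16_t handle;
};

/* num == 0 means "count unknown" (SMBIOS >= 3.0): rely on len alone. */
static int dmi_keep_walking(int num, int i, const uint8_t *data,
			    const uint8_t *buf, size_t len)
{
	return (!num || i < num) &&
	       (size_t)(data - buf) + sizeof(struct dmi_header) <= len;
}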
+1 -1
drivers/gpio/gpio-mpc8xxx.c
···
 	.xlate	= irq_domain_xlate_twocell,
 };
 
-static struct of_device_id mpc8xxx_gpio_ids[] __initdata = {
+static struct of_device_id mpc8xxx_gpio_ids[] = {
 	{ .compatible = "fsl,mpc8349-gpio", },
 	{ .compatible = "fsl,mpc8572-gpio", },
 	{ .compatible = "fsl,mpc8610-gpio", },
+1 -1
drivers/gpio/gpio-syscon.c
···
 		ret = of_property_read_u32_index(np, "gpio,syscon-dev", 2,
 						 &priv->dir_reg_offset);
 		if (ret)
-			dev_err(dev, "can't read the dir register offset!\n");
+			dev_dbg(dev, "can't read the dir register offset!\n");
 
 		priv->dir_reg_offset <<= 3;
 	}
+10
drivers/gpio/gpiolib-acpi.c
···
 	if (!handler)
 		return AE_BAD_PARAMETER;
 
+	pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin);
+	if (pin < 0)
+		return AE_BAD_PARAMETER;
+
 	desc = gpiochip_request_own_desc(chip, pin, "ACPI:Event");
 	if (IS_ERR(desc)) {
 		dev_err(chip->dev, "Failed to request GPIO\n");
···
 		struct acpi_gpio_connection *conn;
 		struct gpio_desc *desc;
 		bool found;
+
+		pin = acpi_gpiochip_pin_to_gpio_offset(chip, pin);
+		if (pin < 0) {
+			status = AE_BAD_PARAMETER;
+			goto out;
+		}
 
 		mutex_lock(&achip->conn_lock);
 
+1
drivers/gpu/drm/drm_edid_load.c
···
 
 	drm_mode_connector_update_edid_property(connector, edid);
 	ret = drm_add_edid_modes(connector, edid);
+	drm_edid_to_eld(connector, edid);
 	kfree(edid);
 
 	return ret;
+1
drivers/gpu/drm/drm_probe_helper.c
···
 			struct edid *edid = (struct edid *) connector->edid_blob_ptr->data;
 
 			count = drm_add_edid_modes(connector, edid);
+			drm_edid_to_eld(connector, edid);
 		} else
 			count = (*connector_funcs->get_modes)(connector);
 	}
+5 -3
drivers/gpu/drm/exynos/exynos_drm_fimd.c
···
 	unsigned int		ovl_height;
 	unsigned int		fb_width;
 	unsigned int		fb_height;
+	unsigned int		fb_pitch;
 	unsigned int		bpp;
 	unsigned int		pixel_format;
 	dma_addr_t		dma_addr;
···
 	win_data->offset_y = plane->crtc_y;
 	win_data->ovl_width = plane->crtc_width;
 	win_data->ovl_height = plane->crtc_height;
+	win_data->fb_pitch = plane->pitch;
 	win_data->fb_width = plane->fb_width;
 	win_data->fb_height = plane->fb_height;
 	win_data->dma_addr = plane->dma_addr[0] + offset;
 	win_data->bpp = plane->bpp;
 	win_data->pixel_format = plane->pixel_format;
-	win_data->buf_offsize = (plane->fb_width - plane->crtc_width) *
-				(plane->bpp >> 3);
+	win_data->buf_offsize =
+		plane->pitch - (plane->crtc_width * (plane->bpp >> 3));
 	win_data->line_size = plane->crtc_width * (plane->bpp >> 3);
 
 	DRM_DEBUG_KMS("offset_x = %d, offset_y = %d\n",
···
 	writel(val, ctx->regs + VIDWx_BUF_START(win, 0));
 
 	/* buffer end address */
-	size = win_data->fb_width * win_data->ovl_height * (win_data->bpp >> 3);
+	size = win_data->fb_pitch * win_data->ovl_height * (win_data->bpp >> 3);
 	val = (unsigned long)(win_data->dma_addr + size);
 	writel(val, ctx->regs + VIDWx_BUF_END(win, 0));
 
+10 -7
drivers/gpu/drm/exynos/exynos_mixer.c
···
 	unsigned int		fb_x;
 	unsigned int		fb_y;
 	unsigned int		fb_width;
+	unsigned int		fb_pitch;
 	unsigned int		fb_height;
 	unsigned int		src_width;
 	unsigned int		src_height;
···
 	} else {
 		luma_addr[0] = win_data->dma_addr;
 		chroma_addr[0] = win_data->dma_addr
-			+ (win_data->fb_width * win_data->fb_height);
+			+ (win_data->fb_pitch * win_data->fb_height);
 	}
 
 	if (win_data->scan_flags & DRM_MODE_FLAG_INTERLACE) {
···
 			luma_addr[1] = luma_addr[0] + 0x40;
 			chroma_addr[1] = chroma_addr[0] + 0x40;
 		} else {
-			luma_addr[1] = luma_addr[0] + win_data->fb_width;
-			chroma_addr[1] = chroma_addr[0] + win_data->fb_width;
+			luma_addr[1] = luma_addr[0] + win_data->fb_pitch;
+			chroma_addr[1] = chroma_addr[0] + win_data->fb_pitch;
 		}
 	} else {
 		ctx->interlace = false;
···
 	vp_reg_writemask(res, VP_MODE, val, VP_MODE_FMT_MASK);
 
 	/* setting size of input image */
-	vp_reg_write(res, VP_IMG_SIZE_Y, VP_IMG_HSIZE(win_data->fb_width) |
+	vp_reg_write(res, VP_IMG_SIZE_Y, VP_IMG_HSIZE(win_data->fb_pitch) |
 		VP_IMG_VSIZE(win_data->fb_height));
 	/* chroma height has to reduced by 2 to avoid chroma distorions */
-	vp_reg_write(res, VP_IMG_SIZE_C, VP_IMG_HSIZE(win_data->fb_width) |
+	vp_reg_write(res, VP_IMG_SIZE_C, VP_IMG_HSIZE(win_data->fb_pitch) |
 		VP_IMG_VSIZE(win_data->fb_height / 2));
 
 	vp_reg_write(res, VP_SRC_WIDTH, win_data->src_width);
···
 	/* converting dma address base and source offset */
 	dma_addr = win_data->dma_addr
 		+ (win_data->fb_x * win_data->bpp >> 3)
-		+ (win_data->fb_y * win_data->fb_width * win_data->bpp >> 3);
+		+ (win_data->fb_y * win_data->fb_pitch);
 	src_x_offset = 0;
 	src_y_offset = 0;
 
···
 		MXR_GRP_CFG_FORMAT_VAL(fmt), MXR_GRP_CFG_FORMAT_MASK);
 
 	/* setup geometry */
-	mixer_reg_write(res, MXR_GRAPHIC_SPAN(win), win_data->fb_width);
+	mixer_reg_write(res, MXR_GRAPHIC_SPAN(win),
+			win_data->fb_pitch / (win_data->bpp >> 3));
 
 	/* setup display size */
 	if (ctx->mxr_ver == MXR_VER_128_0_0_184 &&
···
 	win_data->fb_y = plane->fb_y;
 	win_data->fb_width = plane->fb_width;
 	win_data->fb_height = plane->fb_height;
+	win_data->fb_pitch = plane->pitch;
 	win_data->src_width = plane->src_width;
 	win_data->src_height = plane->src_height;
 
+1 -1
drivers/gpu/drm/i915/intel_sprite.c
···
 	drm_modeset_lock_all(dev);
 
 	plane = drm_plane_find(dev, set->plane_id);
-	if (!plane) {
+	if (!plane || plane->type != DRM_PLANE_TYPE_OVERLAY) {
 		ret = -ENOENT;
 		goto out_unlock;
 	}
+4 -7
drivers/gpu/drm/radeon/radeon_mn.c
···
 	it = interval_tree_iter_first(&rmn->objects, start, end);
 	while (it) {
 		struct radeon_bo *bo;
-		struct fence *fence;
 		int r;
 
 		bo = container_of(it, struct radeon_bo, mn_it);
···
 			continue;
 		}
 
-		fence = reservation_object_get_excl(bo->tbo.resv);
-		if (fence) {
-			r = radeon_fence_wait((struct radeon_fence *)fence, false);
-			if (r)
-				DRM_ERROR("(%d) failed to wait for user bo\n", r);
-		}
+		r = reservation_object_wait_timeout_rcu(bo->tbo.resv, true,
+			false, MAX_SCHEDULE_TIMEOUT);
+		if (r)
+			DRM_ERROR("(%d) failed to wait for user bo\n", r);
 
 		radeon_ttm_placement_from_domain(bo, RADEON_GEM_DOMAIN_CPU);
 		r = ttm_bo_validate(&bo->tbo, &bo->placement, false, false);
+4
drivers/gpu/drm/radeon/radeon_ttm.c
···
 	enum dma_data_direction direction = write ?
 		DMA_BIDIRECTIONAL : DMA_TO_DEVICE;
 
+	/* double check that we don't free the table twice */
+	if (!ttm->sg->sgl)
+		return;
+
 	/* free the sg table and pages again */
 	dma_unmap_sg(rdev->dev, ttm->sg->sgl, ttm->sg->nents, direction);
 
+1 -1
drivers/iio/accel/bma180.c
···
 
 	mutex_lock(&data->mutex);
 
-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = bma180_get_data_reg(data, bit);
 		if (ret < 0) {
+10 -10
drivers/iio/accel/bmc150-accel.c
···
 	int val;
 	int val2;
 	u8 bw_bits;
-} bmc150_accel_samp_freq_table[] = { {7, 810000, 0x08},
-				     {15, 630000, 0x09},
-				     {31, 250000, 0x0A},
-				     {62, 500000, 0x0B},
-				     {125, 0, 0x0C},
-				     {250, 0, 0x0D},
-				     {500, 0, 0x0E},
-				     {1000, 0, 0x0F} };
+} bmc150_accel_samp_freq_table[] = { {15, 620000, 0x08},
+				     {31, 260000, 0x09},
+				     {62, 500000, 0x0A},
+				     {125, 0, 0x0B},
+				     {250, 0, 0x0C},
+				     {500, 0, 0x0D},
+				     {1000, 0, 0x0E},
+				     {2000, 0, 0x0F} };
 
 static const struct {
 	int bw_bits;
···
 }
 
 static IIO_CONST_ATTR_SAMP_FREQ_AVAIL(
-		"7.810000 15.630000 31.250000 62.500000 125 250 500 1000");
+		"15.620000 31.260000 62.50000 125 250 500 1000 2000");
 
 static struct attribute *bmc150_accel_attributes[] = {
 	&iio_const_attr_sampling_frequency_available.dev_attr.attr,
···
 	int bit, ret, i = 0;
 
 	mutex_lock(&data->mutex);
-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = i2c_smbus_read_word_data(data->client,
 					       BMC150_ACCEL_AXIS_TO_REG(bit));
+1 -1
drivers/iio/accel/kxcjk-1013.c
···
 
 	mutex_lock(&data->mutex);
 
-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = kxcjk1013_get_acc_reg(data, bit);
 		if (ret < 0) {
+2 -1
drivers/iio/adc/Kconfig
···
 
 config CC10001_ADC
 	tristate "Cosmic Circuits 10001 ADC driver"
-	depends on HAS_IOMEM || HAVE_CLK || REGULATOR
+	depends on HAVE_CLK || REGULATOR
+	depends on HAS_IOMEM
 	select IIO_BUFFER
 	select IIO_TRIGGERED_BUFFER
 	help
+2 -3
drivers/iio/adc/at91_adc.c
···
 {
 	struct iio_dev *idev = iio_trigger_get_drvdata(trig);
 	struct at91_adc_state *st = iio_priv(idev);
-	struct iio_buffer *buffer = idev->buffer;
 	struct at91_adc_reg_desc *reg = st->registers;
 	u32 status = at91_adc_readl(st, reg->trigger_register);
 	int value;
···
 		at91_adc_writel(st, reg->trigger_register,
 				status | value);
 
-		for_each_set_bit(bit, buffer->scan_mask,
+		for_each_set_bit(bit, idev->active_scan_mask,
 				 st->num_channels) {
 			struct iio_chan_spec const *chan = idev->channels + bit;
 			at91_adc_writel(st, AT91_ADC_CHER,
···
 		at91_adc_writel(st, reg->trigger_register,
 				status & ~value);
 
-		for_each_set_bit(bit, buffer->scan_mask,
+		for_each_set_bit(bit, idev->active_scan_mask,
 				 st->num_channels) {
 			struct iio_chan_spec const *chan = idev->channels + bit;
 			at91_adc_writel(st, AT91_ADC_CHDR,
+1 -2
drivers/iio/adc/ti_am335x_adc.c
···
 static int tiadc_buffer_postenable(struct iio_dev *indio_dev)
 {
 	struct tiadc_device *adc_dev = iio_priv(indio_dev);
-	struct iio_buffer *buffer = indio_dev->buffer;
 	unsigned int enb = 0;
 	u8 bit;
 
 	tiadc_step_config(indio_dev);
-	for_each_set_bit(bit, buffer->scan_mask, adc_dev->channels)
+	for_each_set_bit(bit, indio_dev->active_scan_mask, adc_dev->channels)
 		enb |= (get_adc_step_bit(adc_dev, bit) << 1);
 	adc_dev->buffer_en_ch_steps = enb;
 
+61 -30
drivers/iio/adc/vf610_adc.c
···
 	struct regulator *vref;
 	struct vf610_adc_feature adc_feature;
 
+	u32 sample_freq_avail[5];
+
 	struct completion completion;
 };
 
+static const u32 vf610_hw_avgs[] = { 1, 4, 8, 16, 32 };
 
 #define VF610_ADC_CHAN(_idx, _chan_type) {			\
 	.type = (_chan_type),					\
···
 	/* sentinel */
 };
 
-/*
- * ADC sample frequency, unit is ADCK cycles.
- * ADC clk source is ipg clock, which is the same as bus clock.
- *
- * ADC conversion time = SFCAdder + AverageNum x (BCT + LSTAdder)
- * SFCAdder: fixed to 6 ADCK cycles
- * AverageNum: 1, 4, 8, 16, 32 samples for hardware average.
- * BCT (Base Conversion Time): fixed to 25 ADCK cycles for 12 bit mode
- * LSTAdder(Long Sample Time): fixed to 3 ADCK cycles
- *
- * By default, enable 12 bit resolution mode, clock source
- * set to ipg clock, So get below frequency group:
- */
-static const u32 vf610_sample_freq_avail[5] =
-{1941176, 559332, 286957, 145374, 73171};
+static inline void vf610_adc_calculate_rates(struct vf610_adc *info)
+{
+	unsigned long adck_rate, ipg_rate = clk_get_rate(info->clk);
+	int i;
+
+	/*
+	 * Calculate ADC sample frequencies
+	 * Sample time unit is ADCK cycles. ADCK clk source is ipg clock,
+	 * which is the same as bus clock.
+	 *
+	 * ADC conversion time = SFCAdder + AverageNum x (BCT + LSTAdder)
+	 * SFCAdder: fixed to 6 ADCK cycles
+	 * AverageNum: 1, 4, 8, 16, 32 samples for hardware average.
+	 * BCT (Base Conversion Time): fixed to 25 ADCK cycles for 12 bit mode
+	 * LSTAdder(Long Sample Time): fixed to 3 ADCK cycles
+	 */
+	adck_rate = ipg_rate / info->adc_feature.clk_div;
+	for (i = 0; i < ARRAY_SIZE(vf610_hw_avgs); i++)
+		info->sample_freq_avail[i] =
+			adck_rate / (6 + vf610_hw_avgs[i] * (25 + 3));
+}
 
 static inline void vf610_adc_cfg_init(struct vf610_adc *info)
 {
+	struct vf610_adc_feature *adc_feature = &info->adc_feature;
+
 	/* set default Configuration for ADC controller */
-	info->adc_feature.clk_sel = VF610_ADCIOC_BUSCLK_SET;
-	info->adc_feature.vol_ref = VF610_ADCIOC_VR_VREF_SET;
+	adc_feature->clk_sel = VF610_ADCIOC_BUSCLK_SET;
+	adc_feature->vol_ref = VF610_ADCIOC_VR_VREF_SET;
 
-	info->adc_feature.calibration = true;
-	info->adc_feature.ovwren = true;
+	adc_feature->calibration = true;
+	adc_feature->ovwren = true;
 
-	info->adc_feature.clk_div = 1;
-	info->adc_feature.res_mode = 12;
-	info->adc_feature.sample_rate = 1;
-	info->adc_feature.lpm = true;
+	adc_feature->res_mode = 12;
+	adc_feature->sample_rate = 1;
+	adc_feature->lpm = true;
+
+	/* Use a save ADCK which is below 20MHz on all devices */
+	adc_feature->clk_div = 8;
+
+	vf610_adc_calculate_rates(info);
 }
 
 static void vf610_adc_cfg_post_set(struct vf610_adc *info)
···
 
 	cfg_data = readl(info->regs + VF610_REG_ADC_CFG);
 
-	/* low power configuration */
 	cfg_data &= ~VF610_ADC_ADLPC_EN;
 	if (adc_feature->lpm)
 		cfg_data |= VF610_ADC_ADLPC_EN;
 
-	/* disable high speed */
 	cfg_data &= ~VF610_ADC_ADHSC_EN;
 
 	writel(cfg_data, info->regs + VF610_REG_ADC_CFG);
···
 	return IRQ_HANDLED;
 }
 
-static IIO_CONST_ATTR_SAMP_FREQ_AVAIL("1941176, 559332, 286957, 145374, 73171");
+static ssize_t vf610_show_samp_freq_avail(struct device *dev,
+				struct device_attribute *attr, char *buf)
+{
+	struct vf610_adc *info = iio_priv(dev_to_iio_dev(dev));
+	size_t len = 0;
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(info->sample_freq_avail); i++)
+		len += scnprintf(buf + len, PAGE_SIZE - len,
+			"%u ", info->sample_freq_avail[i]);
+
+	/* replace trailing space by newline */
+	buf[len - 1] = '\n';
+
+	return len;
+}
+
+static IIO_DEV_ATTR_SAMP_FREQ_AVAIL(vf610_show_samp_freq_avail);
 
 static struct attribute *vf610_attributes[] = {
-	&iio_const_attr_sampling_frequency_available.dev_attr.attr,
+	&iio_dev_attr_sampling_frequency_available.dev_attr.attr,
 	NULL
 };
···
 		return IIO_VAL_FRACTIONAL_LOG2;
 
 	case IIO_CHAN_INFO_SAMP_FREQ:
-		*val = vf610_sample_freq_avail[info->adc_feature.sample_rate];
+		*val = info->sample_freq_avail[info->adc_feature.sample_rate];
 		*val2 = 0;
 		return IIO_VAL_INT;
 
···
 	switch (mask) {
 	case IIO_CHAN_INFO_SAMP_FREQ:
 		for (i = 0;
-			i < ARRAY_SIZE(vf610_sample_freq_avail);
+			i < ARRAY_SIZE(info->sample_freq_avail);
 			i++)
-			if (val == vf610_sample_freq_avail[i]) {
+			if (val == info->sample_freq_avail[i]) {
 				info->adc_feature.sample_rate = i;
 				vf610_adc_sample_set(info);
 				return 0;
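A rough sanity check of the conversion-time formula, assuming a 66 MHz ipg clock (a typical Vybrid rate; the patch itself does not hard-code any clock) and the new clk_div of 8, so ADCK = 8.25 MHz:

#include <stdio.h>

int main(void)
{
	const unsigned int hw_avgs[] = { 1, 4, 8, 16, 32 };
	unsigned long ipg_rate = 66000000, clk_div = 8;	/* assumption */
	unsigned long adck_rate = ipg_rate / clk_div;	/* 8.25 MHz */
	unsigned int i;

	/* rate = ADCK / (SFCAdder + AverageNum * (BCT + LSTAdder)) */
	for (i = 0; i < 5; i++)
		printf("avg=%-2u -> %lu Hz\n", hw_avgs[i],
		       adck_rate / (6 + hw_avgs[i] * (25 + 3)));
	return 0;	/* prints 242647, 69915, 35869, 18171, 9146 */
}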
+1 -1
drivers/iio/gyro/bmg160.c
···
 	int bit, ret, i = 0;
 
 	mutex_lock(&data->mutex);
-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = i2c_smbus_read_word_data(data->client,
 					       BMG160_AXIS_TO_REG(bit));
+1 -1
drivers/iio/imu/adis_trigger.c
···
 	iio_trigger_set_drvdata(adis->trig, adis);
 	ret = iio_trigger_register(adis->trig);
 
-	indio_dev->trig = adis->trig;
+	indio_dev->trig = iio_trigger_get(adis->trig);
 	if (ret)
 		goto error_free_irq;
 
+30 -26
drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
···
 	}
 }
 
-static int inv_mpu6050_write_fsr(struct inv_mpu6050_state *st, int fsr)
+static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val)
 {
-	int result;
+	int result, i;
 	u8 d;
 
-	if (fsr < 0 || fsr > INV_MPU6050_MAX_GYRO_FS_PARAM)
-		return -EINVAL;
-	if (fsr == st->chip_config.fsr)
-		return 0;
+	for (i = 0; i < ARRAY_SIZE(gyro_scale_6050); ++i) {
+		if (gyro_scale_6050[i] == val) {
+			d = (i << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT);
+			result = inv_mpu6050_write_reg(st,
+					st->reg->gyro_config, d);
+			if (result)
+				return result;
 
-	d = (fsr << INV_MPU6050_GYRO_CONFIG_FSR_SHIFT);
-	result = inv_mpu6050_write_reg(st, st->reg->gyro_config, d);
-	if (result)
-		return result;
-	st->chip_config.fsr = fsr;
+			st->chip_config.fsr = i;
+			return 0;
+		}
+	}
 
-	return 0;
+	return -EINVAL;
 }
 
-static int inv_mpu6050_write_accel_fs(struct inv_mpu6050_state *st, int fs)
+static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val)
 {
-	int result;
+	int result, i;
 	u8 d;
 
-	if (fs < 0 || fs > INV_MPU6050_MAX_ACCL_FS_PARAM)
-		return -EINVAL;
-	if (fs == st->chip_config.accl_fs)
-		return 0;
+	for (i = 0; i < ARRAY_SIZE(accel_scale); ++i) {
+		if (accel_scale[i] == val) {
+			d = (i << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT);
+			result = inv_mpu6050_write_reg(st,
+					st->reg->accl_config, d);
+			if (result)
+				return result;
 
-	d = (fs << INV_MPU6050_ACCL_CONFIG_FSR_SHIFT);
-	result = inv_mpu6050_write_reg(st, st->reg->accl_config, d);
-	if (result)
-		return result;
-	st->chip_config.accl_fs = fs;
+			st->chip_config.accl_fs = i;
+			return 0;
+		}
+	}
 
-	return 0;
+	return -EINVAL;
 }
 
 static int inv_mpu6050_write_raw(struct iio_dev *indio_dev,
···
 	case IIO_CHAN_INFO_SCALE:
 		switch (chan->type) {
 		case IIO_ANGL_VEL:
-			result = inv_mpu6050_write_fsr(st, val);
+			result = inv_mpu6050_write_gyro_scale(st, val2);
 			break;
 		case IIO_ACCEL:
-			result = inv_mpu6050_write_accel_fs(st, val);
+			result = inv_mpu6050_write_accel_scale(st, val2);
 			break;
 		default:
 			result = -EINVAL;
+14 -11
drivers/iio/imu/inv_mpu6050/inv_mpu_ring.c
···
 #include <linux/poll.h>
 #include "inv_mpu_iio.h"
 
+static void inv_clear_kfifo(struct inv_mpu6050_state *st)
+{
+	unsigned long flags;
+
+	/* take the spin lock sem to avoid interrupt kick in */
+	spin_lock_irqsave(&st->time_stamp_lock, flags);
+	kfifo_reset(&st->timestamps);
+	spin_unlock_irqrestore(&st->time_stamp_lock, flags);
+}
+
 int inv_reset_fifo(struct iio_dev *indio_dev)
 {
 	int result;
···
 					     INV_MPU6050_BIT_FIFO_RST);
 	if (result)
 		goto reset_fifo_fail;
+
+	/* clear timestamps fifo */
+	inv_clear_kfifo(st);
+
 	/* enable interrupt */
 	if (st->chip_config.accl_fifo_enable ||
 	    st->chip_config.gyro_fifo_enable) {
···
 					   INV_MPU6050_BIT_DATA_RDY_EN);
 
 	return result;
-}
-
-static void inv_clear_kfifo(struct inv_mpu6050_state *st)
-{
-	unsigned long flags;
-
-	/* take the spin lock sem to avoid interrupt kick in */
-	spin_lock_irqsave(&st->time_stamp_lock, flags);
-	kfifo_reset(&st->timestamps);
-	spin_unlock_irqrestore(&st->time_stamp_lock, flags);
 }
 
 /**
···
 flush_fifo:
 	/* Flush HW and SW FIFOs. */
 	inv_reset_fifo(indio_dev);
-	inv_clear_kfifo(st);
 	mutex_unlock(&indio_dev->mlock);
 	iio_trigger_notify_done(indio_dev->trig);
 
+1 -1
drivers/iio/imu/kmx61.c
···
 		base = KMX61_MAG_XOUT_L;
 
 	mutex_lock(&data->lock);
-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = kmx61_read_measurement(data, base, bit);
 		if (ret < 0) {
+3 -2
drivers/iio/industrialio-core.c
···
  * @attr_list: List of IIO device attributes
  *
  * This function frees the memory allocated for each of the IIO device
- * attributes in the list. Note: if you want to reuse the list after calling
- * this function you have to reinitialize it using INIT_LIST_HEAD().
+ * attributes in the list.
  */
 void iio_free_chan_devattr_list(struct list_head *attr_list)
 {
···
 
 	list_for_each_entry_safe(p, n, attr_list, l) {
 		kfree(p->dev_attr.attr.name);
+		list_del(&p->l);
 		kfree(p);
 	}
 }
···
 
 	iio_free_chan_devattr_list(&indio_dev->channel_attr_list);
 	kfree(indio_dev->chan_attr_group.attrs);
+	indio_dev->chan_attr_group.attrs = NULL;
 }
 
 static void iio_dev_release(struct device *device)
+1
drivers/iio/industrialio-event.c
···
 error_free_setup_event_lines:
 	iio_free_chan_devattr_list(&indio_dev->event_interface->dev_attr_list);
 	kfree(indio_dev->event_interface);
+	indio_dev->event_interface = NULL;
 	return ret;
 }
 
+1 -1
drivers/iio/proximity/sx9500.c
···
 
 	mutex_lock(&data->mutex);
 
-	for_each_set_bit(bit, indio_dev->buffer->scan_mask,
+	for_each_set_bit(bit, indio_dev->active_scan_mask,
 			 indio_dev->masklength) {
 		ret = sx9500_read_proximity(data, &indio_dev->channels[bit],
 					    &val);
+8
drivers/infiniband/core/umem.c
···
 	if (dmasync)
 		dma_set_attr(DMA_ATTR_WRITE_BARRIER, &attrs);
 
+	/*
+	 * If the combination of the addr and size requested for this memory
+	 * region causes an integer overflow, return error.
+	 */
+	if ((PAGE_ALIGN(addr + size) <= size) ||
+	    (PAGE_ALIGN(addr + size) <= addr))
+		return ERR_PTR(-EINVAL);
+
 	if (!can_do_mlock())
 		return ERR_PTR(-EPERM);
 
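The two comparisons catch wraparound: near the top of the address space, addr + size overflows and the page-aligned sum lands below both inputs. A stand-alone sketch with invented numbers:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE	4096ULL
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

int main(void)
{
	uint64_t addr = 0xfffffffffffff000ULL;	/* near the top of memory */
	uint64_t size = 0x2000;			/* addr + size wraps to 0x1000 */

	/* the aligned end wrapped around, so it is <= both addr and size */
	printf("aligned end = %#llx\n",
	       (unsigned long long)PAGE_ALIGN(addr + size));
	printf("reject = %d\n",
	       PAGE_ALIGN(addr + size) <= size ||
	       PAGE_ALIGN(addr + size) <= addr);	/* prints reject = 1 */
	return 0;
}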
+29 -19
drivers/input/mouse/alps.c
···
 	mutex_unlock(&alps_mutex);
 }
 
-static void alps_report_bare_ps2_packet(struct input_dev *dev,
+static void alps_report_bare_ps2_packet(struct psmouse *psmouse,
 					unsigned char packet[],
 					bool report_buttons)
 {
+	struct alps_data *priv = psmouse->private;
+	struct input_dev *dev;
+
+	/* Figure out which device to use to report the bare packet */
+	if (priv->proto_version == ALPS_PROTO_V2 &&
+	    (priv->flags & ALPS_DUALPOINT)) {
+		/* On V2 devices the DualPoint Stick reports bare packets */
+		dev = priv->dev2;
+	} else if (unlikely(IS_ERR_OR_NULL(priv->dev3))) {
+		/* Register dev3 mouse if we received PS/2 packet first time */
+		if (!IS_ERR(priv->dev3))
+			psmouse_queue_work(psmouse, &priv->dev3_register_work,
+					   0);
+		return;
+	} else {
+		dev = priv->dev3;
+	}
+
 	if (report_buttons)
 		alps_report_buttons(dev, NULL,
 				packet[0] & 1, packet[0] & 2, packet[0] & 4);
···
 		 * de-synchronization.
 		 */
 
-		alps_report_bare_ps2_packet(priv->dev2,
-					    &psmouse->packet[3], false);
+		alps_report_bare_ps2_packet(psmouse, &psmouse->packet[3],
+					    false);
 
 		/*
 		 * Continue with the standard ALPS protocol handling,
···
 	 * properly we only do this if the device is fully synchronized.
 	 */
 	if (!psmouse->out_of_sync_cnt && (psmouse->packet[0] & 0xc8) == 0x08) {
-
-		/* Register dev3 mouse if we received PS/2 packet first time */
-		if (unlikely(!priv->dev3))
-			psmouse_queue_work(psmouse,
-					   &priv->dev3_register_work, 0);
-
 		if (psmouse->pktcnt == 3) {
-			/* Once dev3 mouse device is registered report data */
-			if (likely(!IS_ERR_OR_NULL(priv->dev3)))
-				alps_report_bare_ps2_packet(priv->dev3,
-							    psmouse->packet,
-							    true);
+			alps_report_bare_ps2_packet(psmouse, psmouse->packet,
+						    true);
 			return PSMOUSE_FULL_PACKET;
 		}
 		return PSMOUSE_GOOD_DATA;
···
 		priv->set_abs_params = alps_set_abs_params_mt;
 		priv->nibble_commands = alps_v3_nibble_commands;
 		priv->addr_command = PSMOUSE_CMD_RESET_WRAP;
-		priv->x_max = 1360;
-		priv->y_max = 660;
 		priv->x_bits = 23;
 		priv->y_bits = 12;
+
+		if (alps_dolphin_get_device_area(psmouse, priv))
+			return -EIO;
+
 		break;
 
 	case ALPS_PROTO_V6:
···
 		priv->set_abs_params = alps_set_abs_params_mt;
 		priv->nibble_commands = alps_v3_nibble_commands;
 		priv->addr_command = PSMOUSE_CMD_RESET_WRAP;
-
-		if (alps_dolphin_get_device_area(psmouse, priv))
-			return -EIO;
+		priv->x_max = 0xfff;
+		priv->y_max = 0x7ff;
 
 		if (priv->fw_ver[1] != 0xba)
 			priv->flags |= ALPS_BUTTONPAD;
+6 -1
drivers/input/mouse/synaptics.c
···
 	},
 	{
 		(const char * const []){"LEN2006", NULL},
+		{2691, 2691},
+		1024, 5045, 2457, 4832
+	},
+	{
+		(const char * const []){"LEN2006", NULL},
 		{ANY_BOARD_ID, ANY_BOARD_ID},
 		1264, 5675, 1171, 4688
 	},
···
 	"LEN2003",
 	"LEN2004",		/* L440 */
 	"LEN2005",
-	"LEN2006",
+	"LEN2006",		/* Edge E440/E540 */
 	"LEN2007",
 	"LEN2008",
 	"LEN2009",
+6 -3
drivers/iommu/arm-smmu.c
···
 		return 0;
 
 	spin_lock_irqsave(&smmu_domain->pgtbl_lock, flags);
-	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_TRANS_OPS)
+	if (smmu_domain->smmu->features & ARM_SMMU_FEAT_TRANS_OPS &&
+			smmu_domain->stage == ARM_SMMU_DOMAIN_S1) {
 		ret = arm_smmu_iova_to_phys_hard(domain, iova);
-	else
+	} else {
 		ret = ops->iova_to_phys(ops, iova);
+	}
+
 	spin_unlock_irqrestore(&smmu_domain->pgtbl_lock, flags);
 
 	return ret;
···
 		return -ENODEV;
 	}
 
-	if (smmu->version == 1 || (!(id & ID0_ATOSNS) && (id & ID0_S1TS))) {
+	if ((id & ID0_S1TS) && ((smmu->version == 1) || (id & ID0_ATOSNS))) {
 		smmu->features |= ARM_SMMU_FEAT_TRANS_OPS;
 		dev_notice(smmu->dev, "\taddress translation ops\n");
 	}
+3 -4
drivers/iommu/intel-iommu.c
···
 
 static void domain_exit(struct dmar_domain *domain)
 {
-	struct dmar_drhd_unit *drhd;
-	struct intel_iommu *iommu;
 	struct page *freelist = NULL;
+	int i;
 
 	/* Domain 0 is reserved, so dont process it */
 	if (!domain)
···
 
 	/* clear attached or cached domains */
 	rcu_read_lock();
-	for_each_active_iommu(iommu, drhd)
-		iommu_detach_domain(domain, iommu);
+	for_each_set_bit(i, domain->iommu_bmp, g_num_of_iommus)
+		iommu_detach_domain(domain, g_iommus[i]);
 	rcu_read_unlock();
 
 	dma_free_pagelist(freelist);
+1
drivers/iommu/ipmmu-vmsa.c
···
 
 static const struct of_device_id ipmmu_of_ids[] = {
 	{ .compatible = "renesas,ipmmu-vmsa", },
+	{ }
 };
 
 static struct platform_driver ipmmu_driver = {
+48 -9
drivers/irqchip/irq-gic-v3-its.c
···
 
 static void its_encode_devid(struct its_cmd_block *cmd, u32 devid)
 {
-	cmd->raw_cmd[0] &= ~(0xffffUL << 32);
+	cmd->raw_cmd[0] &= BIT_ULL(32) - 1;
 	cmd->raw_cmd[0] |= ((u64)devid) << 32;
 }
 
···
 	int i;
 	int psz = SZ_64K;
 	u64 shr = GITS_BASER_InnerShareable;
+	u64 cache = GITS_BASER_WaWb;
 
 	for (i = 0; i < GITS_BASER_NR_REGS; i++) {
 		u64 val = readq_relaxed(its->base + GITS_BASER + i * 8);
···
 		val = (virt_to_phys(base)				 |
 		       (type << GITS_BASER_TYPE_SHIFT)			 |
 		       ((entry_size - 1) << GITS_BASER_ENTRY_SIZE_SHIFT) |
-		       GITS_BASER_WaWb					 |
+		       cache						 |
 		       shr						 |
 		       GITS_BASER_VALID);
 
···
 			 * Shareability didn't stick. Just use
 			 * whatever the read reported, which is likely
 			 * to be the only thing this redistributor
-			 * supports.
+			 * supports. If that's zero, make it
+			 * non-cacheable as well.
 			 */
 			shr = tmp & GITS_BASER_SHAREABILITY_MASK;
+			if (!shr)
+				cache = GITS_BASER_nC;
 			goto retry_baser;
 		}
 
···
 	tmp = readq_relaxed(rbase + GICR_PROPBASER);
 
 	if ((tmp ^ val) & GICR_PROPBASER_SHAREABILITY_MASK) {
+		if (!(tmp & GICR_PROPBASER_SHAREABILITY_MASK)) {
+			/*
+			 * The HW reports non-shareable, we must
+			 * remove the cacheability attributes as
+			 * well.
+			 */
+			val &= ~(GICR_PROPBASER_SHAREABILITY_MASK |
+				 GICR_PROPBASER_CACHEABILITY_MASK);
+			val |= GICR_PROPBASER_nC;
+			writeq_relaxed(val, rbase + GICR_PROPBASER);
+		}
 		pr_info_once("GIC: using cache flushing for LPI property table\n");
 		gic_rdists->flags |= RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING;
 	}
 
 	/* set PENDBASE */
 	val = (page_to_phys(pend_page) |
-	       GICR_PROPBASER_InnerShareable |
-	       GICR_PROPBASER_WaWb);
+	       GICR_PENDBASER_InnerShareable |
+	       GICR_PENDBASER_WaWb);
 
 	writeq_relaxed(val, rbase + GICR_PENDBASER);
+	tmp = readq_relaxed(rbase + GICR_PENDBASER);
+
+	if (!(tmp & GICR_PENDBASER_SHAREABILITY_MASK)) {
+		/*
+		 * The HW reports non-shareable, we must remove the
+		 * cacheability attributes as well.
+		 */
+		val &= ~(GICR_PENDBASER_SHAREABILITY_MASK |
+			 GICR_PENDBASER_CACHEABILITY_MASK);
+		val |= GICR_PENDBASER_nC;
+		writeq_relaxed(val, rbase + GICR_PENDBASER);
+	}
 
 	/* Enable LPIs */
 	val = readl_relaxed(rbase + GICR_CTLR);
···
 		 * This ITS wants a linear CPU number.
 		 */
 		target = readq_relaxed(gic_data_rdist_rd_base() + GICR_TYPER);
-		target = GICR_TYPER_CPU_NUMBER(target);
+		target = GICR_TYPER_CPU_NUMBER(target) << 16;
 	}
 
 	/* Perform collection mapping */
···
 
 	writeq_relaxed(baser, its->base + GITS_CBASER);
 	tmp = readq_relaxed(its->base + GITS_CBASER);
-	writeq_relaxed(0, its->base + GITS_CWRITER);
-	writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR);
 
-	if ((tmp ^ baser) & GITS_BASER_SHAREABILITY_MASK) {
+	if ((tmp ^ baser) & GITS_CBASER_SHAREABILITY_MASK) {
+		if (!(tmp & GITS_CBASER_SHAREABILITY_MASK)) {
+			/*
+			 * The HW reports non-shareable, we must
+			 * remove the cacheability attributes as
+			 * well.
+			 */
+			baser &= ~(GITS_CBASER_SHAREABILITY_MASK |
+				   GITS_CBASER_CACHEABILITY_MASK);
+			baser |= GITS_CBASER_nC;
+			writeq_relaxed(baser, its->base + GITS_CBASER);
+		}
 		pr_info("ITS: using cache flushing for cmd queue\n");
 		its->flags |= ITS_FLAGS_CMDQ_NEEDS_FLUSHING;
 	}
+
+	writeq_relaxed(0, its->base + GITS_CWRITER);
+	writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR);
 
 	if (of_property_read_bool(its->msi_chip.of_node, "msi-controller")) {
 		its->domain = irq_domain_add_tree(NULL, &its_domain_ops, its);
+1 -1
drivers/lguest/Kconfig
···
 config LGUEST
 	tristate "Linux hypervisor example code"
-	depends on X86_32 && EVENTFD && TTY
+	depends on X86_32 && EVENTFD && TTY && PCI_DIRECT
 	select HVC_DRIVER
 	---help---
 	  This is a very simple module which allows you to run
+2 -1
drivers/net/bonding/bond_main.c
···
 	/* Find out if any slaves have the same mapping as this skb. */
 	bond_for_each_slave_rcu(bond, slave, iter) {
 		if (slave->queue_id == skb->queue_mapping) {
-			if (bond_slave_can_tx(slave)) {
+			if (bond_slave_is_up(slave) &&
+			    slave->link == BOND_LINK_UP) {
 				bond_dev_queue_xmit(bond, skb, slave->dev);
 				return 0;
 			}
+11 -7
drivers/net/can/flexcan.c
···
 		rx_state = unlikely(reg_esr & FLEXCAN_ESR_RX_WRN) ?
 			   CAN_STATE_ERROR_WARNING : CAN_STATE_ERROR_ACTIVE;
 		new_state = max(tx_state, rx_state);
-	} else if (unlikely(flt == FLEXCAN_ESR_FLT_CONF_PASSIVE)) {
+	} else {
 		__flexcan_get_berr_counter(dev, &bec);
-		new_state = CAN_STATE_ERROR_PASSIVE;
+		new_state = flt == FLEXCAN_ESR_FLT_CONF_PASSIVE ?
+			    CAN_STATE_ERROR_PASSIVE : CAN_STATE_BUS_OFF;
 		rx_state = bec.rxerr >= bec.txerr ? new_state : 0;
 		tx_state = bec.rxerr <= bec.txerr ? new_state : 0;
-	} else {
-		new_state = CAN_STATE_BUS_OFF;
 	}
 
 	/* state hasn't changed */
···
 	const struct flexcan_devtype_data *devtype_data;
 	struct net_device *dev;
 	struct flexcan_priv *priv;
+	struct regulator *reg_xceiver;
 	struct resource *mem;
 	struct clk *clk_ipg = NULL, *clk_per = NULL;
 	void __iomem *base;
 	int err, irq;
 	u32 clock_freq = 0;
+
+	reg_xceiver = devm_regulator_get(&pdev->dev, "xceiver");
+	if (PTR_ERR(reg_xceiver) == -EPROBE_DEFER)
+		return -EPROBE_DEFER;
+	else if (IS_ERR(reg_xceiver))
+		reg_xceiver = NULL;
 
 	if (pdev->dev.of_node)
 		of_property_read_u32(pdev->dev.of_node,
···
 	priv->pdata = dev_get_platdata(&pdev->dev);
 	priv->devtype_data = devtype_data;
 
-	priv->reg_xceiver = devm_regulator_get(&pdev->dev, "xceiver");
-	if (IS_ERR(priv->reg_xceiver))
-		priv->reg_xceiver = NULL;
+	priv->reg_xceiver = reg_xceiver;
 
 	netif_napi_add(dev, &priv->napi, flexcan_poll, FLEXCAN_NAPI_WEIGHT);
 
+2
drivers/net/can/usb/gs_usb.c
···
 	}
 
 	dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+	if (!dev)
+		return -ENOMEM;
 	init_usb_anchor(&dev->rx_submitted);
 
 	atomic_set(&dev->active_channels, 0);
+42 -27
drivers/net/can/usb/kvaser_usb.c
···
 #include <linux/can/dev.h>
 #include <linux/can/error.h>
 
-#define MAX_TX_URBS			16
 #define MAX_RX_URBS			4
 #define START_TIMEOUT			1000 /* msecs */
 #define STOP_TIMEOUT			1000 /* msecs */
···
 	};
 };
 
+/* Context for an outstanding, not yet ACKed, transmission */
 struct kvaser_usb_tx_urb_context {
 	struct kvaser_usb_net_priv *priv;
 	u32 echo_index;
···
 	struct usb_endpoint_descriptor *bulk_in, *bulk_out;
 	struct usb_anchor rx_submitted;
 
+	/* @max_tx_urbs: Firmware-reported maximum number of oustanding,
+	 * not yet ACKed, transmissions on this device. This value is
+	 * also used as a sentinel for marking free tx contexts.
+	 */
 	u32 fw_version;
 	unsigned int nchannels;
+	unsigned int max_tx_urbs;
 	enum kvaser_usb_family family;
 
 	bool rxinitdone;
···
 
 struct kvaser_usb_net_priv {
 	struct can_priv can;
-
-	spinlock_t tx_contexts_lock;
-	int active_tx_contexts;
-	struct kvaser_usb_tx_urb_context tx_contexts[MAX_TX_URBS];
-
-	struct usb_anchor tx_submitted;
-	struct completion start_comp, stop_comp;
+	struct can_berr_counter bec;
 
 	struct kvaser_usb *dev;
 	struct net_device *netdev;
 	int channel;
 
-	struct can_berr_counter bec;
+	struct completion start_comp, stop_comp;
+	struct usb_anchor tx_submitted;
+
+	spinlock_t tx_contexts_lock;
+	int active_tx_contexts;
+	struct kvaser_usb_tx_urb_context tx_contexts[];
 };
 
 static const struct usb_device_id kvaser_usb_table[] = {
···
 		 * for further details.
 		 */
 		if (tmp->len == 0) {
-			pos = round_up(pos,
-				       dev->bulk_in->wMaxPacketSize);
+			pos = round_up(pos, le16_to_cpu(dev->bulk_in->
+							wMaxPacketSize));
 			continue;
 		}
 
···
 	switch (dev->family) {
 	case KVASER_LEAF:
 		dev->fw_version = le32_to_cpu(msg.u.leaf.softinfo.fw_version);
+		dev->max_tx_urbs =
+			le16_to_cpu(msg.u.leaf.softinfo.max_outstanding_tx);
 		break;
 	case KVASER_USBCAN:
 		dev->fw_version = le32_to_cpu(msg.u.usbcan.softinfo.fw_version);
+		dev->max_tx_urbs =
+			le16_to_cpu(msg.u.usbcan.softinfo.max_outstanding_tx);
 		break;
 	}
 
···
 
 	stats = &priv->netdev->stats;
 
-	context = &priv->tx_contexts[tid % MAX_TX_URBS];
+	context = &priv->tx_contexts[tid % dev->max_tx_urbs];
 
 	/* Sometimes the state change doesn't come after a bus-off event */
 	if (priv->can.restart_ms &&
···
 	spin_lock_irqsave(&priv->tx_contexts_lock, flags);
 
 	can_get_echo_skb(priv->netdev, context->echo_index);
-	context->echo_index = MAX_TX_URBS;
+	context->echo_index = dev->max_tx_urbs;
 	--priv->active_tx_contexts;
 	netif_wake_queue(priv->netdev);
 
···
 	 * number of events in case of a heavy rx load on the bus.
 	 */
 	if (msg->len == 0) {
-		pos = round_up(pos, dev->bulk_in->wMaxPacketSize);
+		pos = round_up(pos, le16_to_cpu(dev->bulk_in->
+						wMaxPacketSize));
 		continue;
 	}
 
···
 
 static void kvaser_usb_reset_tx_urb_contexts(struct kvaser_usb_net_priv *priv)
 {
-	int i;
+	int i, max_tx_urbs;
+
+	max_tx_urbs = priv->dev->max_tx_urbs;
 
 	priv->active_tx_contexts = 0;
-	for (i = 0; i < MAX_TX_URBS; i++)
-		priv->tx_contexts[i].echo_index = MAX_TX_URBS;
+	for (i = 0; i < max_tx_urbs; i++)
+		priv->tx_contexts[i].echo_index = max_tx_urbs;
 }
 
 /* This method might sleep. Do not call it in the atomic context
···
 		*msg_tx_can_flags |= MSG_FLAG_REMOTE_FRAME;
 
 	spin_lock_irqsave(&priv->tx_contexts_lock, flags);
-	for (i = 0; i < ARRAY_SIZE(priv->tx_contexts); i++) {
-		if (priv->tx_contexts[i].echo_index == MAX_TX_URBS) {
+	for (i = 0; i < dev->max_tx_urbs; i++) {
+		if (priv->tx_contexts[i].echo_index == dev->max_tx_urbs) {
 			context = &priv->tx_contexts[i];
 
 			context->echo_index = i;
 			can_put_echo_skb(skb, netdev, context->echo_index);
 			++priv->active_tx_contexts;
-			if (priv->active_tx_contexts >= MAX_TX_URBS)
+			if (priv->active_tx_contexts >= dev->max_tx_urbs)
 				netif_stop_queue(netdev);
 
 			break;
···
 	spin_lock_irqsave(&priv->tx_contexts_lock, flags);
 
 	can_free_echo_skb(netdev, context->echo_index);
-	context->echo_index = MAX_TX_URBS;
+	context->echo_index = dev->max_tx_urbs;
 	--priv->active_tx_contexts;
 	netif_wake_queue(netdev);
 
···
 	if (err)
 		return err;
 
-	netdev = alloc_candev(sizeof(*priv), MAX_TX_URBS);
+	netdev = alloc_candev(sizeof(*priv) +
+			      dev->max_tx_urbs * sizeof(*priv->tx_contexts),
+			      dev->max_tx_urbs);
 	if (!netdev) {
 		dev_err(&intf->dev, "Cannot alloc candev\n");
 		return -ENOMEM;
···
 		return err;
 	}
 
+	dev_dbg(&intf->dev, "Firmware version: %d.%d.%d\n",
+		((dev->fw_version >> 24) & 0xff),
+		((dev->fw_version >> 16) & 0xff),
+		(dev->fw_version & 0xffff));
+
+	dev_dbg(&intf->dev, "Max oustanding tx = %d URBs\n", dev->max_tx_urbs);
+
 	err = kvaser_usb_get_card_info(dev);
 	if (err) {
 		dev_err(&intf->dev,
 			"Cannot get card infos, error %d\n", err);
 		return err;
 	}
-
-	dev_dbg(&intf->dev, "Firmware version: %d.%d.%d\n",
-		((dev->fw_version >> 24) & 0xff),
-		((dev->fw_version >> 16) & 0xff),
-		(dev->fw_version & 0xffff));
 
 	for (i = 0; i < dev->nchannels; i++) {
 		err = kvaser_usb_init_one(intf, id, i);
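The tx_contexts[] change above is the standard C99 flexible-array-member idiom: the context array is now sized at probe time from the firmware-reported max_outstanding_tx rather than a compile-time MAX_TX_URBS. A minimal user-space model of the sizing (hypothetical type names, mirroring the alloc_candev() call in the diff):

#include <stdlib.h>

struct tx_ctx {
	unsigned int echo_index;
};

struct priv {
	unsigned int max_tx_urbs;
	struct tx_ctx tx_contexts[];	/* C99 flexible array member */
};

static struct priv *priv_alloc(unsigned int max_tx_urbs)
{
	/* one allocation covers the struct plus its trailing array */
	struct priv *p = calloc(1, sizeof(*p) +
				max_tx_urbs * sizeof(*p->tx_contexts));

	if (p)
		p->max_tx_urbs = max_tx_urbs;
	return p;
}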
+8 -7
drivers/net/can/usb/peak_usb/pcan_ucan.h
··· 26 26 #define PUCAN_CMD_FILTER_STD 0x008 27 27 #define PUCAN_CMD_TX_ABORT 0x009 28 28 #define PUCAN_CMD_WR_ERR_CNT 0x00a 29 - #define PUCAN_CMD_RX_FRAME_ENABLE 0x00b 30 - #define PUCAN_CMD_RX_FRAME_DISABLE 0x00c 29 + #define PUCAN_CMD_SET_EN_OPTION 0x00b 30 + #define PUCAN_CMD_CLR_DIS_OPTION 0x00c 31 31 #define PUCAN_CMD_END_OF_COLLECTION 0x3ff 32 32 33 33 /* uCAN received messages list */ ··· 101 101 u16 unused; 102 102 }; 103 103 104 - /* uCAN RX_FRAME_ENABLE command fields */ 105 - #define PUCAN_FLTEXT_ERROR 0x0001 106 - #define PUCAN_FLTEXT_BUSLOAD 0x0002 104 + /* uCAN SET_EN/CLR_DIS _OPTION command fields */ 105 + #define PUCAN_OPTION_ERROR 0x0001 106 + #define PUCAN_OPTION_BUSLOAD 0x0002 107 + #define PUCAN_OPTION_CANDFDISO 0x0004 107 108 108 - struct __packed pucan_filter_ext { 109 + struct __packed pucan_options { 109 110 __le16 opcode_channel; 110 111 111 - __le16 ext_mask; 112 + __le16 options; 112 113 u32 unused; 113 114 }; 114 115
+50 -23
drivers/net/can/usb/peak_usb/pcan_usb_fd.c
··· 110 110 u8 unused[5]; 111 111 }; 112 112 113 - /* Extended usage of uCAN commands CMD_RX_FRAME_xxxABLE for PCAN-USB Pro FD */ 113 + /* Extended usage of uCAN commands CMD_xxx_xx_OPTION for PCAN-USB Pro FD */ 114 114 #define PCAN_UFD_FLTEXT_CALIBRATION 0x8000 115 115 116 - struct __packed pcan_ufd_filter_ext { 116 + struct __packed pcan_ufd_options { 117 117 __le16 opcode_channel; 118 118 119 - __le16 ext_mask; 119 + __le16 ucan_mask; 120 120 u16 unused; 121 121 __le16 usb_mask; 122 122 }; ··· 251 251 /* moves the pointer forward */ 252 252 pc += sizeof(struct pucan_wr_err_cnt); 253 253 254 + /* add command to switch from ISO to non-ISO mode, if fw allows it */ 255 + if (dev->can.ctrlmode_supported & CAN_CTRLMODE_FD_NON_ISO) { 256 + struct pucan_options *puo = (struct pucan_options *)pc; 257 + 258 + puo->opcode_channel = 259 + (dev->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO) ? 260 + pucan_cmd_opcode_channel(dev, 261 + PUCAN_CMD_CLR_DIS_OPTION) : 262 + pucan_cmd_opcode_channel(dev, PUCAN_CMD_SET_EN_OPTION); 263 + 264 + puo->options = cpu_to_le16(PUCAN_OPTION_CANDFDISO); 265 + 266 + /* to be sure that no other extended bits will be taken into 267 + * account 268 + */ 269 + puo->unused = 0; 270 + 271 + /* moves the pointer forward */ 272 + pc += sizeof(struct pucan_options); 273 + } 274 + 254 275 /* next, go back to operational mode */ 255 276 cmd = (struct pucan_command *)pc; 256 277 cmd->opcode_channel = pucan_cmd_opcode_channel(dev, ··· 342 321 return pcan_usb_fd_send_cmd(dev, cmd); 343 322 } 344 323 345 - /* set/unset notifications filter: 324 + /* set/unset options 346 - * 325 * 347 - * onoff sets(1)/unset(0) notifications 326 + * onoff set(1)/unset(0) options 348 - * mask each bit defines a kind of notification to set/unset 327 + * mask each bit defines a kind of option to set/unset 349 328 */ 350 - static int pcan_usb_fd_set_filter_ext(struct peak_usb_device *dev, 351 - bool onoff, u16 ext_mask, u16 usb_mask) 329 + static int pcan_usb_fd_set_options(struct peak_usb_device *dev, 330 + bool onoff, u16 ucan_mask, u16 usb_mask) 352 331 { 353 - struct pcan_ufd_filter_ext *cmd = pcan_usb_fd_cmd_buffer(dev); 332 + struct pcan_ufd_options *cmd = pcan_usb_fd_cmd_buffer(dev); 354 333 355 334 cmd->opcode_channel = pucan_cmd_opcode_channel(dev, 356 - (onoff) ? PUCAN_CMD_RX_FRAME_ENABLE : 357 - PUCAN_CMD_RX_FRAME_DISABLE); 335 + (onoff) ? 
PUCAN_CMD_SET_EN_OPTION : 336 + PUCAN_CMD_CLR_DIS_OPTION); 358 337 359 - cmd->ext_mask = cpu_to_le16(ext_mask); 338 + cmd->ucan_mask = cpu_to_le16(ucan_mask); 360 339 cmd->usb_mask = cpu_to_le16(usb_mask); 361 340 362 341 /* send the command */ ··· 791 770 &pcan_usb_pro_fd); 792 771 793 772 /* enable USB calibration messages */ 794 - err = pcan_usb_fd_set_filter_ext(dev, 1, 795 - PUCAN_FLTEXT_ERROR, 796 - PCAN_UFD_FLTEXT_CALIBRATION); 773 + err = pcan_usb_fd_set_options(dev, 1, 774 + PUCAN_OPTION_ERROR, 775 + PCAN_UFD_FLTEXT_CALIBRATION); 797 776 } 798 777 799 778 pdev->usb_if->dev_opened_count++; ··· 827 806 828 807 /* turn off special msgs for that interface if no other dev opened */ 829 808 if (pdev->usb_if->dev_opened_count == 1) 830 - pcan_usb_fd_set_filter_ext(dev, 0, 831 - PUCAN_FLTEXT_ERROR, 832 - PCAN_UFD_FLTEXT_CALIBRATION); 809 + pcan_usb_fd_set_options(dev, 0, 810 + PUCAN_OPTION_ERROR, 811 + PCAN_UFD_FLTEXT_CALIBRATION); 833 812 pdev->usb_if->dev_opened_count--; 834 813 835 814 return 0; ··· 881 860 pdev->usb_if->fw_info.fw_version[2], 882 861 dev->adapter->ctrl_count); 883 862 884 - /* the currently supported hw is non-ISO */ 885 - dev->can.ctrlmode = CAN_CTRLMODE_FD_NON_ISO; 863 + /* check for ability to switch between ISO/non-ISO modes */ 864 + if (pdev->usb_if->fw_info.fw_version[0] >= 2) { 865 + /* firmware >= 2.x supports ISO/non-ISO switching */ 866 + dev->can.ctrlmode_supported |= CAN_CTRLMODE_FD_NON_ISO; 867 + } else { 868 + /* firmware < 2.x only supports fixed(!) non-ISO */ 869 + dev->can.ctrlmode |= CAN_CTRLMODE_FD_NON_ISO; 870 + } 886 871 887 872 /* tell the hardware the can driver is running */ 888 873 err = pcan_usb_fd_drv_loaded(dev, 1); ··· 964 937 if (dev->ctrl_idx == 0) { 965 938 /* turn off calibration message if any device were opened */ 966 939 if (pdev->usb_if->dev_opened_count > 0) 967 - pcan_usb_fd_set_filter_ext(dev, 0, 968 - PUCAN_FLTEXT_ERROR, 969 - PCAN_UFD_FLTEXT_CALIBRATION); 940 + pcan_usb_fd_set_options(dev, 0, 941 + PUCAN_OPTION_ERROR, 942 + PCAN_UFD_FLTEXT_CALIBRATION); 970 943 971 944 /* tell USB adapter that the driver is being unloaded */ 972 945 pcan_usb_fd_drv_loaded(dev, 0);
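The ISO/non-ISO handling above hinges on one distinction: firmware >= 2.x advertises CAN_CTRLMODE_FD_NON_ISO in ctrlmode_supported, so the user may toggle it, while older firmware forces it on in ctrlmode. A compilable sketch of that decision with invented names, not the driver's actual types:

#define CTRLMODE_FD_NON_ISO	0x1

struct can_caps {
	unsigned int ctrlmode;			/* modes currently active */
	unsigned int ctrlmode_supported;	/* modes the user may toggle */
};

static void fd_decide_iso(struct can_caps *caps, unsigned int fw_major)
{
	if (fw_major >= 2)
		caps->ctrlmode_supported |= CTRLMODE_FD_NON_ISO; /* switchable */
	else
		caps->ctrlmode |= CTRLMODE_FD_NON_ISO;	/* fixed non-ISO */
}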
+1 -3
drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
··· 1811 1811 int stats_state; 1812 1812 1813 1813 /* used for synchronization of concurrent threads statistics handling */ 1814 - spinlock_t stats_lock; 1814 + struct mutex stats_lock; 1815 1815 1816 1816 /* used by dmae command loader */ 1817 1817 struct dmae_command stats_dmae; ··· 1935 1935 1936 1936 int fp_array_size; 1937 1937 u32 dump_preset_idx; 1938 - bool stats_started; 1939 - struct semaphore stats_sema; 1940 1938 1941 1939 u8 phys_port_id[ETH_ALEN]; 1942 1940
+54 -45
drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
··· 129 129 u32 xmac_val; 130 130 u32 emac_addr; 131 131 u32 emac_val; 132 - u32 umac_addr; 133 - u32 umac_val; 132 + u32 umac_addr[2]; 133 + u32 umac_val[2]; 134 134 u32 bmac_addr; 135 135 u32 bmac_val[2]; 136 136 }; ··· 7866 7866 return 0; 7867 7867 } 7868 7868 7869 + /* A previous driver DMAE transaction may have occurred when the pre-boot stage 7870 + * ended and boot began, or when a kdump kernel was loaded. Either case would 7871 + * invalidate the addresses of the transaction, resulting in the was-error bit 7872 + * being set in the PCI and causing all hw-to-host PCIe transactions to time out. 7873 + * If this happened, we want to clear the interrupt which detected this from the 7874 + * pglueb, and the 'was done' bit. 7875 + */ 7876 + static void bnx2x_clean_pglue_errors(struct bnx2x *bp) 7877 + { 7878 + if (!CHIP_IS_E1x(bp)) 7879 + REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, 7880 + 1 << BP_ABS_FUNC(bp)); 7881 + } 7882 + 7869 7883 static int bnx2x_init_hw_func(struct bnx2x *bp) 7870 7884 { 7871 7885 int port = BP_PORT(bp); ··· 7972 7958 7973 7959 bnx2x_init_block(bp, BLOCK_PGLUE_B, init_phase); 7974 7960 7975 - if (!CHIP_IS_E1x(bp)) 7976 - REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, func); 7961 + bnx2x_clean_pglue_errors(bp); 7977 7962 7978 7963 bnx2x_init_block(bp, BLOCK_ATC, init_phase); 7979 7964 bnx2x_init_block(bp, BLOCK_DMAE, init_phase); ··· 10154 10141 return base + (BP_ABS_FUNC(bp)) * stride; 10155 10142 } 10156 10143 10144 + static bool bnx2x_prev_unload_close_umac(struct bnx2x *bp, 10145 + u8 port, u32 reset_reg, 10146 + struct bnx2x_mac_vals *vals) 10147 + { 10148 + u32 mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port; 10149 + u32 base_addr; 10150 + 10151 + if (!(mask & reset_reg)) 10152 + return false; 10153 + 10154 + BNX2X_DEV_INFO("Disable umac Rx %02x\n", port); 10155 + base_addr = port ? GRCBASE_UMAC1 : GRCBASE_UMAC0; 10156 + vals->umac_addr[port] = base_addr + UMAC_REG_COMMAND_CONFIG; 10157 + vals->umac_val[port] = REG_RD(bp, vals->umac_addr[port]); 10158 + REG_WR(bp, vals->umac_addr[port], 0); 10159 + 10160 + return true; 10161 + } 10162 + 10157 10163 static void bnx2x_prev_unload_close_mac(struct bnx2x *bp, 10158 10164 struct bnx2x_mac_vals *vals) 10159 10165 { ··· 10181 10149 u8 port = BP_PORT(bp); 10182 10150 10183 10151 /* reset addresses as they also mark which values were changed */ 10184 - vals->bmac_addr = 0; 10185 - vals->umac_addr = 0; 10186 - vals->xmac_addr = 0; 10187 - vals->emac_addr = 0; 10152 + memset(vals, 0, sizeof(*vals)); 10188 10153 10189 10154 reset_reg = REG_RD(bp, MISC_REG_RESET_REG_2); 10190 10155 ··· 10230 10201 REG_WR(bp, vals->xmac_addr, 0); 10231 10202 mac_stopped = true; 10232 10203 } 10233 - mask = MISC_REGISTERS_RESET_REG_2_UMAC0 << port; 10234 - if (mask & reset_reg) { 10235 - BNX2X_DEV_INFO("Disable umac Rx\n"); 10236 - base_addr = BP_PORT(bp) ? 
GRCBASE_UMAC1 : GRCBASE_UMAC0; 10237 - vals->umac_addr = base_addr + UMAC_REG_COMMAND_CONFIG; 10238 - vals->umac_val = REG_RD(bp, vals->umac_addr); 10239 - REG_WR(bp, vals->umac_addr, 0); 10240 - mac_stopped = true; 10241 - } 10204 + 10205 + mac_stopped |= bnx2x_prev_unload_close_umac(bp, 0, 10206 + reset_reg, vals); 10207 + mac_stopped |= bnx2x_prev_unload_close_umac(bp, 1, 10208 + reset_reg, vals); 10242 10209 } 10243 10210 10244 10211 if (mac_stopped) ··· 10530 10505 /* Close the MAC Rx to prevent BRB from filling up */ 10531 10506 bnx2x_prev_unload_close_mac(bp, &mac_vals); 10532 10507 10533 - /* close LLH filters towards the BRB */ 10508 + /* close LLH filters for both ports towards the BRB */ 10534 10509 bnx2x_set_rx_filter(&bp->link_params, 0); 10510 + bp->link_params.port ^= 1; 10511 + bnx2x_set_rx_filter(&bp->link_params, 0); 10512 + bp->link_params.port ^= 1; 10535 10513 10536 10514 /* Check if the UNDI driver was previously loaded */ 10537 10515 if (bnx2x_prev_is_after_undi(bp)) { ··· 10581 10553 10582 10554 if (mac_vals.xmac_addr) 10583 10555 REG_WR(bp, mac_vals.xmac_addr, mac_vals.xmac_val); 10584 - if (mac_vals.umac_addr) 10585 - REG_WR(bp, mac_vals.umac_addr, mac_vals.umac_val); 10556 + if (mac_vals.umac_addr[0]) 10557 + REG_WR(bp, mac_vals.umac_addr[0], mac_vals.umac_val[0]); 10558 + if (mac_vals.umac_addr[1]) 10559 + REG_WR(bp, mac_vals.umac_addr[1], mac_vals.umac_val[1]); 10586 10560 if (mac_vals.emac_addr) 10587 10561 REG_WR(bp, mac_vals.emac_addr, mac_vals.emac_val); 10588 10562 if (mac_vals.bmac_addr) { ··· 10601 10571 return bnx2x_prev_mcp_done(bp); 10602 10572 } 10603 10573 10604 - /* previous driver DMAE transaction may have occurred when pre-boot stage ended 10605 - * and boot began, or when kdump kernel was loaded. Either case would invalidate 10606 - * the addresses of the transaction, resulting in was-error bit set in the pci 10607 - * causing all hw-to-host pcie transactions to timeout. If this happened we want 10608 - * to clear the interrupt which detected this from the pglueb and the was done 10609 - * bit 10610 - */ 10611 - static void bnx2x_prev_interrupted_dmae(struct bnx2x *bp) 10612 - { 10613 - if (!CHIP_IS_E1x(bp)) { 10614 - u32 val = REG_RD(bp, PGLUE_B_REG_PGLUE_B_INT_STS); 10615 - if (val & PGLUE_B_PGLUE_B_INT_STS_REG_WAS_ERROR_ATTN) { 10616 - DP(BNX2X_MSG_SP, 10617 - "'was error' bit was found to be set in pglueb upon startup. Clearing\n"); 10618 - REG_WR(bp, PGLUE_B_REG_WAS_ERROR_PF_7_0_CLR, 10619 - 1 << BP_FUNC(bp)); 10620 - } 10621 - } 10622 - } 10623 - 10624 10574 static int bnx2x_prev_unload(struct bnx2x *bp) 10625 10575 { 10626 10576 int time_counter = 10; ··· 10610 10600 /* clear hw from errors which may have resulted from an interrupted 10611 10601 * dmae transaction. 10612 10602 */ 10613 - bnx2x_prev_interrupted_dmae(bp); 10603 + bnx2x_clean_pglue_errors(bp); 10614 10604 10615 10605 /* Release previously held locks */ 10616 10606 hw_lock_reg = (BP_FUNC(bp) <= 5) ? 
··· 12047 12037 mutex_init(&bp->port.phy_mutex); 12048 12038 mutex_init(&bp->fw_mb_mutex); 12049 12039 mutex_init(&bp->drv_info_mutex); 12040 + mutex_init(&bp->stats_lock); 12050 12041 bp->drv_info_mng_owner = false; 12051 - spin_lock_init(&bp->stats_lock); 12052 - sema_init(&bp->stats_sema, 1); 12053 12042 12054 12043 INIT_DELAYED_WORK(&bp->sp_task, bnx2x_sp_task); 12055 12044 INIT_DELAYED_WORK(&bp->sp_rtnl_task, bnx2x_sp_rtnl_task); ··· 13677 13668 cancel_delayed_work_sync(&bp->sp_task); 13678 13669 cancel_delayed_work_sync(&bp->period_task); 13679 13670 13680 - spin_lock_bh(&bp->stats_lock); 13671 + mutex_lock(&bp->stats_lock); 13681 13672 bp->stats_state = STATS_STATE_DISABLED; 13682 - spin_unlock_bh(&bp->stats_lock); 13673 + mutex_unlock(&bp->stats_lock); 13683 13674 13684 13675 bnx2x_save_statistics(bp); 13685 13676
+3 -1
drivers/net/ethernet/broadcom/bnx2x/bnx2x_sriov.c
··· 2238 2238 2239 2239 cookie.vf = vf; 2240 2240 cookie.state = VF_ACQUIRED; 2241 - bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie); 2241 + rc = bnx2x_stats_safe_exec(bp, bnx2x_set_vf_state, &cookie); 2242 + if (rc) 2243 + goto op_err; 2242 2244 } 2243 2245 2244 2246 DP(BNX2X_MSG_IOV, "set state to acquired\n");
+74 -90
drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.c
··· 123 123 */ 124 124 static void bnx2x_storm_stats_post(struct bnx2x *bp) 125 125 { 126 - if (!bp->stats_pending) { 127 - int rc; 126 + int rc; 128 127 129 - spin_lock_bh(&bp->stats_lock); 128 + if (bp->stats_pending) 129 + return; 130 130 131 - if (bp->stats_pending) { 132 - spin_unlock_bh(&bp->stats_lock); 133 - return; 134 - } 131 + bp->fw_stats_req->hdr.drv_stats_counter = 132 + cpu_to_le16(bp->stats_counter++); 135 133 136 - bp->fw_stats_req->hdr.drv_stats_counter = 137 - cpu_to_le16(bp->stats_counter++); 134 + DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n", 135 + le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter)); 138 136 139 - DP(BNX2X_MSG_STATS, "Sending statistics ramrod %d\n", 140 - le16_to_cpu(bp->fw_stats_req->hdr.drv_stats_counter)); 137 + /* adjust the ramrod to include VF queues statistics */ 138 + bnx2x_iov_adjust_stats_req(bp); 139 + bnx2x_dp_stats(bp); 141 140 142 - /* adjust the ramrod to include VF queues statistics */ 143 - bnx2x_iov_adjust_stats_req(bp); 144 - bnx2x_dp_stats(bp); 145 - 146 - /* send FW stats ramrod */ 147 - rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0, 148 - U64_HI(bp->fw_stats_req_mapping), 149 - U64_LO(bp->fw_stats_req_mapping), 150 - NONE_CONNECTION_TYPE); 151 - if (rc == 0) 152 - bp->stats_pending = 1; 153 - 154 - spin_unlock_bh(&bp->stats_lock); 155 - } 141 + /* send FW stats ramrod */ 142 + rc = bnx2x_sp_post(bp, RAMROD_CMD_ID_COMMON_STAT_QUERY, 0, 143 + U64_HI(bp->fw_stats_req_mapping), 144 + U64_LO(bp->fw_stats_req_mapping), 145 + NONE_CONNECTION_TYPE); 146 + if (rc == 0) 147 + bp->stats_pending = 1; 156 148 } 157 149 158 150 static void bnx2x_hw_stats_post(struct bnx2x *bp) ··· 213 221 */ 214 222 215 223 /* should be called under stats_sema */ 216 - static void __bnx2x_stats_pmf_update(struct bnx2x *bp) 224 + static void bnx2x_stats_pmf_update(struct bnx2x *bp) 217 225 { 218 226 struct dmae_command *dmae; 219 227 u32 opcode; ··· 511 519 } 512 520 513 521 /* should be called under stats_sema */ 514 - static void __bnx2x_stats_start(struct bnx2x *bp) 522 + static void bnx2x_stats_start(struct bnx2x *bp) 515 523 { 516 524 if (IS_PF(bp)) { 517 525 if (bp->port.pmf) ··· 523 531 bnx2x_hw_stats_post(bp); 524 532 bnx2x_storm_stats_post(bp); 525 533 } 526 - 527 - bp->stats_started = true; 528 - } 529 - 530 - static void bnx2x_stats_start(struct bnx2x *bp) 531 - { 532 - if (down_timeout(&bp->stats_sema, HZ/10)) 533 - BNX2X_ERR("Unable to acquire stats lock\n"); 534 - __bnx2x_stats_start(bp); 535 - up(&bp->stats_sema); 536 534 } 537 535 538 536 static void bnx2x_stats_pmf_start(struct bnx2x *bp) 539 537 { 540 - if (down_timeout(&bp->stats_sema, HZ/10)) 541 - BNX2X_ERR("Unable to acquire stats lock\n"); 542 538 bnx2x_stats_comp(bp); 543 - __bnx2x_stats_pmf_update(bp); 544 - __bnx2x_stats_start(bp); 545 - up(&bp->stats_sema); 546 - } 547 - 548 - static void bnx2x_stats_pmf_update(struct bnx2x *bp) 549 - { 550 - if (down_timeout(&bp->stats_sema, HZ/10)) 551 - BNX2X_ERR("Unable to acquire stats lock\n"); 552 - __bnx2x_stats_pmf_update(bp); 553 - up(&bp->stats_sema); 539 + bnx2x_stats_pmf_update(bp); 540 + bnx2x_stats_start(bp); 554 541 } 555 542 556 543 static void bnx2x_stats_restart(struct bnx2x *bp) ··· 539 568 */ 540 569 if (IS_VF(bp)) 541 570 return; 542 - if (down_timeout(&bp->stats_sema, HZ/10)) 543 - BNX2X_ERR("Unable to acquire stats lock\n"); 571 + 544 572 bnx2x_stats_comp(bp); 545 - __bnx2x_stats_start(bp); 546 - up(&bp->stats_sema); 573 + bnx2x_stats_start(bp); 547 574 } 548 575 549 576 static void 
bnx2x_bmac_stats_update(struct bnx2x *bp) ··· 1215 1246 { 1216 1247 u32 *stats_comp = bnx2x_sp(bp, stats_comp); 1217 1248 1218 - /* we run update from timer context, so give up 1219 - * if somebody is in the middle of transition 1220 - */ 1221 - if (down_trylock(&bp->stats_sema)) 1249 + if (bnx2x_edebug_stats_stopped(bp)) 1222 1250 return; 1223 - 1224 - if (bnx2x_edebug_stats_stopped(bp) || !bp->stats_started) 1225 - goto out; 1226 1251 1227 1252 if (IS_PF(bp)) { 1228 1253 if (*stats_comp != DMAE_COMP_VAL) 1229 - goto out; 1254 + return; 1230 1255 1231 1256 if (bp->port.pmf) 1232 1257 bnx2x_hw_stats_update(bp); ··· 1230 1267 BNX2X_ERR("storm stats were not updated for 3 times\n"); 1231 1268 bnx2x_panic(); 1232 1269 } 1233 - goto out; 1270 + return; 1234 1271 } 1235 1272 } else { 1236 1273 /* vf doesn't collect HW statistics, and doesn't get completions ··· 1244 1281 1245 1282 /* vf is done */ 1246 1283 if (IS_VF(bp)) 1247 - goto out; 1284 + return; 1248 1285 1249 1286 if (netif_msg_timer(bp)) { 1250 1287 struct bnx2x_eth_stats *estats = &bp->eth_stats; ··· 1255 1292 1256 1293 bnx2x_hw_stats_post(bp); 1257 1294 bnx2x_storm_stats_post(bp); 1258 - 1259 - out: 1260 - up(&bp->stats_sema); 1261 1295 } 1262 1296 1263 1297 static void bnx2x_port_stats_stop(struct bnx2x *bp) ··· 1318 1358 1319 1359 static void bnx2x_stats_stop(struct bnx2x *bp) 1320 1360 { 1321 - int update = 0; 1322 - 1323 - if (down_timeout(&bp->stats_sema, HZ/10)) 1324 - BNX2X_ERR("Unable to acquire stats lock\n"); 1325 - 1326 - bp->stats_started = false; 1361 + bool update = false; 1327 1362 1328 1363 bnx2x_stats_comp(bp); 1329 1364 ··· 1336 1381 bnx2x_hw_stats_post(bp); 1337 1382 bnx2x_stats_comp(bp); 1338 1383 } 1339 - 1340 - up(&bp->stats_sema); 1341 1384 } 1342 1385 1343 1386 static void bnx2x_stats_do_nothing(struct bnx2x *bp) ··· 1363 1410 1364 1411 void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event) 1365 1412 { 1366 - enum bnx2x_stats_state state; 1367 - void (*action)(struct bnx2x *bp); 1413 + enum bnx2x_stats_state state = bp->stats_state; 1414 + 1368 1415 if (unlikely(bp->panic)) 1369 1416 return; 1370 1417 1371 - spin_lock_bh(&bp->stats_lock); 1372 - state = bp->stats_state; 1373 - bp->stats_state = bnx2x_stats_stm[state][event].next_state; 1374 - action = bnx2x_stats_stm[state][event].action; 1375 - spin_unlock_bh(&bp->stats_lock); 1418 + /* Statistics update run from timer context, and we don't want to stop 1419 + * that context in case someone is in the middle of a transition. 1420 + * For other events, wait a bit until lock is taken. 
1421 + */ 1422 + if (!mutex_trylock(&bp->stats_lock)) { 1423 + if (event == STATS_EVENT_UPDATE) 1424 + return; 1376 1425 1377 - action(bp); 1426 + DP(BNX2X_MSG_STATS, 1427 + "Unlikely stats' lock contention [event %d]\n", event); 1428 + mutex_lock(&bp->stats_lock); 1429 + } 1430 + 1431 + bnx2x_stats_stm[state][event].action(bp); 1432 + bp->stats_state = bnx2x_stats_stm[state][event].next_state; 1433 + 1434 + mutex_unlock(&bp->stats_lock); 1378 1435 1379 1436 if ((event != STATS_EVENT_UPDATE) || netif_msg_timer(bp)) 1380 1437 DP(BNX2X_MSG_STATS, "state %d -> event %d -> state %d\n", ··· 1961 1998 } 1962 1999 } 1963 2000 1964 - void bnx2x_stats_safe_exec(struct bnx2x *bp, 1965 - void (func_to_exec)(void *cookie), 1966 - void *cookie){ 1967 - if (down_timeout(&bp->stats_sema, HZ/10)) 1968 - BNX2X_ERR("Unable to acquire stats lock\n"); 2001 + int bnx2x_stats_safe_exec(struct bnx2x *bp, 2002 + void (func_to_exec)(void *cookie), 2003 + void *cookie) 2004 + { 2005 + int cnt = 10, rc = 0; 2006 + 2007 + /* Wait for statistics to end [while blocking further requests], 2008 + * then run supplied function 'safely'. 2009 + */ 2010 + mutex_lock(&bp->stats_lock); 2011 + 1969 2012 bnx2x_stats_comp(bp); 2013 + while (bp->stats_pending && cnt--) 2014 + if (bnx2x_storm_stats_update(bp)) 2015 + usleep_range(1000, 2000); 2016 + if (bp->stats_pending) { 2017 + BNX2X_ERR("Failed to wait for stats pending to clear [possibly FW is stuck]\n"); 2018 + rc = -EBUSY; 2019 + goto out; 2020 + } 2021 + 1970 2022 func_to_exec(cookie); 1971 - __bnx2x_stats_start(bp); 1972 - up(&bp->stats_sema); 2023 + 2024 + out: 2025 + /* No need to restart statistics - if they're enabled, the timer 2026 + * will restart the statistics. 2027 + */ 2028 + mutex_unlock(&bp->stats_lock); 2029 + 2030 + return rc; 1973 2031 }
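The bnx2x conversion above trades the spinlock/semaphore pair for one mutex with an asymmetric policy: the periodic statistics update must never block waiting for the state machine, so it trylocks and bails on contention, while real events wait briefly. A user-space sketch of that policy using pthreads (all names invented):

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;

static void stats_handle(int event, bool from_timer)
{
	/* pthread_mutex_trylock() returns non-zero when contended */
	if (pthread_mutex_trylock(&stats_lock)) {
		if (from_timer)
			return;	/* skip this periodic update; the next will run */
		pthread_mutex_lock(&stats_lock);	/* rare: wait for it */
	}

	printf("running state machine for event %d\n", event);
	pthread_mutex_unlock(&stats_lock);
}

int main(void)
{
	stats_handle(0, true);	/* timer path: never blocks */
	stats_handle(1, false);	/* event path: may wait */
	return 0;
}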
+3 -3
drivers/net/ethernet/broadcom/bnx2x/bnx2x_stats.h
··· 539 539 void bnx2x_memset_stats(struct bnx2x *bp); 540 540 void bnx2x_stats_init(struct bnx2x *bp); 541 541 void bnx2x_stats_handle(struct bnx2x *bp, enum bnx2x_stats_event event); 542 - void bnx2x_stats_safe_exec(struct bnx2x *bp, 543 - void (func_to_exec)(void *cookie), 544 - void *cookie); 542 + int bnx2x_stats_safe_exec(struct bnx2x *bp, 543 + void (func_to_exec)(void *cookie), 544 + void *cookie); 545 545 546 546 /** 547 547 * bnx2x_save_statistics - save statistics when unloading.
+8 -6
drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
··· 376 376 enum { 377 377 INGQ_EXTRAS = 2, /* firmware event queue and */ 378 378 /* forwarded interrupts */ 379 - MAX_EGRQ = MAX_ETH_QSETS*2 + MAX_OFLD_QSETS*2 380 - + MAX_CTRL_QUEUES + MAX_RDMA_QUEUES + MAX_ISCSI_QUEUES, 381 379 MAX_INGQ = MAX_ETH_QSETS + MAX_OFLD_QSETS + MAX_RDMA_QUEUES 382 380 + MAX_RDMA_CIQS + MAX_ISCSI_QUEUES + INGQ_EXTRAS, 383 381 }; ··· 614 616 unsigned int idma_qid[2]; /* SGE IDMA Hung Ingress Queue ID */ 615 617 616 618 unsigned int egr_start; 619 + unsigned int egr_sz; 617 620 unsigned int ingr_start; 618 - void *egr_map[MAX_EGRQ]; /* qid->queue egress queue map */ 619 - struct sge_rspq *ingr_map[MAX_INGQ]; /* qid->queue ingress queue map */ 620 - DECLARE_BITMAP(starving_fl, MAX_EGRQ); 621 - DECLARE_BITMAP(txq_maperr, MAX_EGRQ); 621 + unsigned int ingr_sz; 622 + void **egr_map; /* qid->queue egress queue map */ 623 + struct sge_rspq **ingr_map; /* qid->queue ingress queue map */ 624 + unsigned long *starving_fl; 625 + unsigned long *txq_maperr; 622 626 struct timer_list rx_timer; /* refills starving FLs */ 623 627 struct timer_list tx_timer; /* checks Tx queues */ 624 628 }; ··· 1136 1136 1137 1137 unsigned int qtimer_val(const struct adapter *adap, 1138 1138 const struct sge_rspq *q); 1139 + 1140 + int t4_init_devlog_params(struct adapter *adapter); 1139 1141 int t4_init_sge_params(struct adapter *adapter); 1140 1142 int t4_init_tp_params(struct adapter *adap); 1141 1143 int t4_filter_field_shift(const struct adapter *adap, int filter_sel);
+7 -1
drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
··· 670 670 "0.9375" }; 671 671 672 672 int i; 673 - u16 incr[NMTUS][NCCTRL_WIN]; 673 + u16 (*incr)[NCCTRL_WIN]; 674 674 struct adapter *adap = seq->private; 675 + 676 + incr = kmalloc(sizeof(*incr) * NMTUS, GFP_KERNEL); 677 + if (!incr) 678 + return -ENOMEM; 675 679 676 680 t4_read_cong_tbl(adap, incr); 677 681 ··· 689 685 adap->params.a_wnd[i], 690 686 dec_fac[adap->params.b_wnd[i]]); 691 687 } 688 + 689 + kfree(incr); 692 690 return 0; 693 691 } 694 692
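The debugfs fix above moves an NMTUS x NCCTRL_WIN table of u16s off the kernel stack while keeping two-dimensional indexing, by declaring a pointer to fixed-size rows. A user-space sketch of the same trick (the dimensions below are assumptions, not necessarily the driver's real values):

#include <stdlib.h>

#define NMTUS		16	/* assumed row count */
#define NCCTRL_WIN	32	/* assumed columns per row */

int fill_table(void)
{
	unsigned short (*incr)[NCCTRL_WIN];	/* pointer to rows of u16 */
	int i, j;

	incr = malloc(sizeof(*incr) * NMTUS);	/* one heap block, NMTUS rows */
	if (!incr)
		return -1;

	for (i = 0; i < NMTUS; i++)
		for (j = 0; j < NCCTRL_WIN; j++)
			incr[i][j] = (unsigned short)(i * j); /* incr[i][j] still works */

	free(incr);
	return 0;
}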
+98 -39
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
··· 920 920 { 921 921 int i; 922 922 923 - for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) { 923 + for (i = 0; i < adap->sge.ingr_sz; i++) { 924 924 struct sge_rspq *q = adap->sge.ingr_map[i]; 925 925 926 926 if (q && q->handler) { ··· 934 934 } 935 935 } 936 936 937 + /* Disable interrupt and napi handler */ 938 + static void disable_interrupts(struct adapter *adap) 939 + { 940 + if (adap->flags & FULL_INIT_DONE) { 941 + t4_intr_disable(adap); 942 + if (adap->flags & USING_MSIX) { 943 + free_msix_queue_irqs(adap); 944 + free_irq(adap->msix_info[0].vec, adap); 945 + } else { 946 + free_irq(adap->pdev->irq, adap); 947 + } 948 + quiesce_rx(adap); 949 + } 950 + } 951 + 937 952 /* 938 953 * Enable NAPI scheduling and interrupt generation for all Rx queues. 939 954 */ ··· 956 941 { 957 942 int i; 958 943 959 - for (i = 0; i < ARRAY_SIZE(adap->sge.ingr_map); i++) { 944 + for (i = 0; i < adap->sge.ingr_sz; i++) { 960 945 struct sge_rspq *q = adap->sge.ingr_map[i]; 961 946 962 947 if (!q) ··· 985 970 int err, msi_idx, i, j; 986 971 struct sge *s = &adap->sge; 987 972 988 - bitmap_zero(s->starving_fl, MAX_EGRQ); 989 - bitmap_zero(s->txq_maperr, MAX_EGRQ); 973 + bitmap_zero(s->starving_fl, s->egr_sz); 974 + bitmap_zero(s->txq_maperr, s->egr_sz); 990 975 991 976 if (adap->flags & USING_MSIX) 992 977 msi_idx = 1; /* vector 0 is for non-queue interrupts */ ··· 998 983 msi_idx = -((int)s->intrq.abs_id + 1); 999 984 } 1000 985 986 + /* NOTE: If you add/delete any Ingress/Egress Queue allocations in here, 987 + * don't forget to update the following, which need to be 988 + * kept synchronized with the changes here. 989 + * 990 + * 1. The calculations of MAX_INGQ in cxgb4.h. 991 + * 992 + * 2. Update enable_msix/name_msix_vecs/request_msix_queue_irqs 993 + * to accommodate any new/deleted Ingress Queues 994 + * which need MSI-X Vectors. 995 + * 996 + * 3. Update sge_qinfo_show() to include information on the 997 + * new/deleted queues. 998 + */ 1001 999 err = t4_sge_alloc_rxq(adap, &s->fw_evtq, true, adap->port[0], 1002 1000 msi_idx, NULL, fwevtq_handler); 1003 1001 if (err) { ··· 4272 4244 4273 4245 static void cxgb_down(struct adapter *adapter) 4274 4246 { 4275 - t4_intr_disable(adapter); 4276 4247 cancel_work_sync(&adapter->tid_release_task); 4277 4248 cancel_work_sync(&adapter->db_full_task); 4278 4249 cancel_work_sync(&adapter->db_drop_task); 4279 4250 adapter->tid_release_task_busy = false; 4280 4251 adapter->tid_release_head = NULL; 4281 4252 4282 - if (adapter->flags & USING_MSIX) { 4283 - free_msix_queue_irqs(adapter); 4284 - free_irq(adapter->msix_info[0].vec, adapter); 4285 - } else 4286 - free_irq(adapter->pdev->irq, adapter); 4287 - quiesce_rx(adapter); 4288 4253 t4_sge_stop(adapter); 4289 4254 t4_free_sge_resources(adapter); 4290 4255 adapter->flags &= ~FULL_INIT_DONE; ··· 4754 4733 if (ret < 0) 4755 4734 return ret; 4756 4735 4757 - ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, MAX_EGRQ, 64, MAX_INGQ, 4758 - 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF, FW_CMD_CAP_PF); 4736 + ret = t4_cfg_pfvf(adap, adap->fn, adap->fn, 0, adap->sge.egr_sz, 64, 4737 + MAX_INGQ, 0, 0, 4, 0xf, 0xf, 16, FW_CMD_CAP_PF, 4738 + FW_CMD_CAP_PF); 4759 4739 if (ret < 0) 4760 4740 return ret; 4761 4741 ··· 5110 5088 enum dev_state state; 5111 5089 u32 params[7], val[7]; 5112 5090 struct fw_caps_config_cmd caps_cmd; 5113 - struct fw_devlog_cmd devlog_cmd; 5114 - u32 devlog_meminfo; 5115 5091 int reset = 1; 5092 + 5093 + /* Grab Firmware Device Log parameters as early as possible so we have 5094 + * access to it for debugging, etc. 
5095 + */ 5096 + ret = t4_init_devlog_params(adap); 5097 + if (ret < 0) 5098 + return ret; 5116 5099 5117 5100 /* Contact FW, advertising Master capability */ 5118 5101 ret = t4_fw_hello(adap, adap->mbox, adap->mbox, MASTER_MAY, &state); ··· 5195 5168 ret = get_vpd_params(adap, &adap->params.vpd); 5196 5169 if (ret < 0) 5197 5170 goto bye; 5198 - 5199 - /* Read firmware device log parameters. We really need to find a way 5200 - * to get these parameters initialized with some default values (which 5201 - * are likely to be correct) for the case where we either don't 5202 - * attache to the firmware or it's crashed when we probe the adapter. 5203 - * That way we'll still be able to perform early firmware startup 5204 - * debugging ... If the request to get the Firmware's Device Log 5205 - * parameters fails, we'll live so we don't make that a fatal error. 5206 - */ 5207 - memset(&devlog_cmd, 0, sizeof(devlog_cmd)); 5208 - devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) | 5209 - FW_CMD_REQUEST_F | FW_CMD_READ_F); 5210 - devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd)); 5211 - ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd), 5212 - &devlog_cmd); 5213 - if (ret == 0) { 5214 - devlog_meminfo = 5215 - ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog); 5216 - adap->params.devlog.memtype = 5217 - FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo); 5218 - adap->params.devlog.start = 5219 - FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4; 5220 - adap->params.devlog.size = ntohl(devlog_cmd.memsize_devlog); 5221 - } 5222 5171 5223 5172 /* 5224 5173 * Find out what ports are available to us. Note that we need to do ··· 5295 5292 adap->tids.ftid_base = val[3]; 5296 5293 adap->tids.nftids = val[4] - val[3] + 1; 5297 5294 adap->sge.ingr_start = val[5]; 5295 + 5296 + /* qids (ingress/egress) returned from firmware can be anywhere 5297 + * in the range from EQ(IQFLINT)_START to EQ(IQFLINT)_END. 5298 + * Hence the driver needs to allocate memory for this range to 5299 + * store the queue info. Get the highest IQFLINT/EQ index returned 5300 + * in FW_EQ_*_CMD.alloc command. 5301 + */ 5302 + params[0] = FW_PARAM_PFVF(EQ_END); 5303 + params[1] = FW_PARAM_PFVF(IQFLINT_END); 5304 + ret = t4_query_params(adap, adap->mbox, adap->fn, 0, 2, params, val); 5305 + if (ret < 0) 5306 + goto bye; 5307 + adap->sge.egr_sz = val[0] - adap->sge.egr_start + 1; 5308 + adap->sge.ingr_sz = val[1] - adap->sge.ingr_start + 1; 5309 + 5310 + adap->sge.egr_map = kcalloc(adap->sge.egr_sz, 5311 + sizeof(*adap->sge.egr_map), GFP_KERNEL); 5312 + if (!adap->sge.egr_map) { 5313 + ret = -ENOMEM; 5314 + goto bye; 5315 + } 5316 + 5317 + adap->sge.ingr_map = kcalloc(adap->sge.ingr_sz, 5318 + sizeof(*adap->sge.ingr_map), GFP_KERNEL); 5319 + if (!adap->sge.ingr_map) { 5320 + ret = -ENOMEM; 5321 + goto bye; 5322 + } 5323 + 5324 + /* Allocate the memory for the various egress queue bitmaps, 5325 + * i.e. starving_fl and txq_maperr. 5326 + */ 5327 + adap->sge.starving_fl = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz), 5328 + sizeof(long), GFP_KERNEL); 5329 + if (!adap->sge.starving_fl) { 5330 + ret = -ENOMEM; 5331 + goto bye; 5332 + } 5333 + 5334 + adap->sge.txq_maperr = kcalloc(BITS_TO_LONGS(adap->sge.egr_sz), 5335 + sizeof(long), GFP_KERNEL); 5336 + if (!adap->sge.txq_maperr) { 5337 + ret = -ENOMEM; 5338 + goto bye; 5339 + } 5340 5298 5341 5299 params[0] = FW_PARAM_PFVF(CLIP_START); 5300 5342 params[1] = FW_PARAM_PFVF(CLIP_END); ··· 5549 5501 * happened to HW/FW, stop issuing commands. 
5550 5502 */ 5551 5503 bye: 5504 + kfree(adap->sge.egr_map); 5505 + kfree(adap->sge.ingr_map); 5506 + kfree(adap->sge.starving_fl); 5507 + kfree(adap->sge.txq_maperr); 5552 5508 if (ret != -ETIMEDOUT && ret != -EIO) 5553 5509 t4_fw_bye(adap, adap->mbox); 5554 5510 return ret; ··· 5580 5528 netif_carrier_off(dev); 5581 5529 } 5582 5530 spin_unlock(&adap->stats_lock); 5531 + disable_interrupts(adap); 5583 5532 if (adap->flags & FULL_INIT_DONE) 5584 5533 cxgb_down(adap); 5585 5534 rtnl_unlock(); ··· 5965 5912 5966 5913 t4_free_mem(adapter->l2t); 5967 5914 t4_free_mem(adapter->tids.tid_tab); 5915 + kfree(adapter->sge.egr_map); 5916 + kfree(adapter->sge.ingr_map); 5917 + kfree(adapter->sge.starving_fl); 5918 + kfree(adapter->sge.txq_maperr); 5968 5919 disable_msi(adapter); 5969 5920 5970 5921 for_each_port(adapter, i) ··· 6293 6236 6294 6237 if (is_offload(adapter)) 6295 6238 detach_ulds(adapter); 6239 + 6240 + disable_interrupts(adapter); 6296 6241 6297 6242 for_each_port(adapter, i) 6298 6243 if (adapter->port[i]->reg_state == NETREG_REGISTERED)
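The probe-time allocations above size the queue maps and their bitmaps from firmware-reported start/end ids instead of compile-time maxima. A small user-space sketch of the sizing arithmetic (the id range below is invented):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define BITS_TO_LONGS(n) (((n) + CHAR_BIT * sizeof(long) - 1) / \
			  (CHAR_BIT * sizeof(long)))

int main(void)
{
	unsigned int egr_start = 64, egr_end = 1087; /* assumed fw-reported ids */
	unsigned int egr_sz = egr_end - egr_start + 1;

	void **egr_map = calloc(egr_sz, sizeof(*egr_map));
	unsigned long *starving_fl = calloc(BITS_TO_LONGS(egr_sz), sizeof(long));

	if (!egr_map || !starving_fl)
		return 1;

	printf("egress map: %u entries, bitmap: %zu longs\n",
	       egr_sz, BITS_TO_LONGS(egr_sz));
	free(starving_fl);
	free(egr_map);
	return 0;
}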
+4 -3
drivers/net/ethernet/chelsio/cxgb4/sge.c
··· 2171 2171 struct adapter *adap = (struct adapter *)data; 2172 2172 struct sge *s = &adap->sge; 2173 2173 2174 - for (i = 0; i < ARRAY_SIZE(s->starving_fl); i++) 2174 + for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++) 2175 2175 for (m = s->starving_fl[i]; m; m &= m - 1) { 2176 2176 struct sge_eth_rxq *rxq; 2177 2177 unsigned int id = __ffs(m) + i * BITS_PER_LONG; ··· 2259 2259 struct adapter *adap = (struct adapter *)data; 2260 2260 struct sge *s = &adap->sge; 2261 2261 2262 - for (i = 0; i < ARRAY_SIZE(s->txq_maperr); i++) 2262 + for (i = 0; i < BITS_TO_LONGS(s->egr_sz); i++) 2263 2263 for (m = s->txq_maperr[i]; m; m &= m - 1) { 2264 2264 unsigned long id = __ffs(m) + i * BITS_PER_LONG; 2265 2265 struct sge_ofld_txq *txq = s->egr_map[id]; ··· 2741 2741 free_rspq_fl(adap, &adap->sge.intrq, NULL); 2742 2742 2743 2743 /* clear the reverse egress queue map */ 2744 - memset(adap->sge.egr_map, 0, sizeof(adap->sge.egr_map)); 2744 + memset(adap->sge.egr_map, 0, 2745 + adap->sge.egr_sz * sizeof(*adap->sge.egr_map)); 2745 2746 } 2746 2747 2747 2748 void t4_sge_start(struct adapter *adap)
+53
drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
··· 4459 4459 } 4460 4460 4461 4461 /** 4462 + * t4_init_devlog_params - initialize adapter->params.devlog 4463 + * @adap: the adapter 4464 + * 4465 + * Initialize various fields of the adapter's Firmware Device Log 4466 + * Parameters structure. 4467 + */ 4468 + int t4_init_devlog_params(struct adapter *adap) 4469 + { 4470 + struct devlog_params *dparams = &adap->params.devlog; 4471 + u32 pf_dparams; 4472 + unsigned int devlog_meminfo; 4473 + struct fw_devlog_cmd devlog_cmd; 4474 + int ret; 4475 + 4476 + /* If we're dealing with newer firmware, the Device Log Parameters 4477 + * are stored in a designated register which allows us to access the 4478 + * Device Log even if we can't talk to the firmware. 4479 + */ 4480 + pf_dparams = 4481 + t4_read_reg(adap, PCIE_FW_REG(PCIE_FW_PF_A, PCIE_FW_PF_DEVLOG)); 4482 + if (pf_dparams) { 4483 + unsigned int nentries, nentries128; 4484 + 4485 + dparams->memtype = PCIE_FW_PF_DEVLOG_MEMTYPE_G(pf_dparams); 4486 + dparams->start = PCIE_FW_PF_DEVLOG_ADDR16_G(pf_dparams) << 4; 4487 + 4488 + nentries128 = PCIE_FW_PF_DEVLOG_NENTRIES128_G(pf_dparams); 4489 + nentries = (nentries128 + 1) * 128; 4490 + dparams->size = nentries * sizeof(struct fw_devlog_e); 4491 + 4492 + return 0; 4493 + } 4494 + 4495 + /* Otherwise, ask the firmware for its Device Log Parameters. 4496 + */ 4497 + memset(&devlog_cmd, 0, sizeof(devlog_cmd)); 4498 + devlog_cmd.op_to_write = htonl(FW_CMD_OP_V(FW_DEVLOG_CMD) | 4499 + FW_CMD_REQUEST_F | FW_CMD_READ_F); 4500 + devlog_cmd.retval_len16 = htonl(FW_LEN16(devlog_cmd)); 4501 + ret = t4_wr_mbox(adap, adap->mbox, &devlog_cmd, sizeof(devlog_cmd), 4502 + &devlog_cmd); 4503 + if (ret) 4504 + return ret; 4505 + 4506 + devlog_meminfo = ntohl(devlog_cmd.memtype_devlog_memaddr16_devlog); 4507 + dparams->memtype = FW_DEVLOG_CMD_MEMTYPE_DEVLOG_G(devlog_meminfo); 4508 + dparams->start = FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(devlog_meminfo) << 4; 4509 + dparams->size = ntohl(devlog_cmd.memsize_devlog); 4510 + 4511 + return 0; 4512 + } 4513 + 4514 + /** 4462 4515 * t4_init_sge_params - initialize adap->params.sge 4463 4516 * @adapter: the adapter 4464 4517 *
+3
drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
··· 63 63 #define MC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4) 64 64 #define EDC_BIST_STATUS_REG(reg_addr, idx) ((reg_addr) + (idx) * 4) 65 65 66 + #define PCIE_FW_REG(reg_addr, idx) ((reg_addr) + (idx) * 4) 67 + 66 68 #define SGE_PF_KDOORBELL_A 0x0 67 69 68 70 #define QID_S 15 ··· 709 707 #define PFNUM_V(x) ((x) << PFNUM_S) 710 708 711 709 #define PCIE_FW_A 0x30b8 710 + #define PCIE_FW_PF_A 0x30bc 712 711 713 712 #define PCIE_CORE_UTL_SYSTEM_BUS_AGENT_STATUS_A 0x5908 714 713
+37 -2
drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
··· 101 101 FW_RI_BIND_MW_WR = 0x18, 102 102 FW_RI_FR_NSMR_WR = 0x19, 103 103 FW_RI_INV_LSTAG_WR = 0x1a, 104 - FW_LASTC2E_WR = 0x40 104 + FW_LASTC2E_WR = 0x70 105 105 }; 106 106 107 107 struct fw_wr_hdr { ··· 993 993 FW_MEMTYPE_CF_EXTMEM = 0x2, 994 994 FW_MEMTYPE_CF_FLASH = 0x4, 995 995 FW_MEMTYPE_CF_INTERNAL = 0x5, 996 + FW_MEMTYPE_CF_EXTMEM1 = 0x6, 996 997 }; 997 998 998 999 struct fw_caps_config_cmd { ··· 1036 1035 FW_PARAMS_MNEM_PFVF = 2, /* function params */ 1037 1036 FW_PARAMS_MNEM_REG = 3, /* limited register access */ 1038 1037 FW_PARAMS_MNEM_DMAQ = 4, /* dma queue params */ 1038 + FW_PARAMS_MNEM_CHNET = 5, /* chnet params */ 1039 1039 FW_PARAMS_MNEM_LAST 1040 1040 }; 1041 1041 ··· 3104 3102 FW_DEVLOG_FACILITY_FCOE = 0x2E, 3105 3103 FW_DEVLOG_FACILITY_FOISCSI = 0x30, 3106 3104 FW_DEVLOG_FACILITY_FOFCOE = 0x32, 3107 - FW_DEVLOG_FACILITY_MAX = 0x32, 3105 + FW_DEVLOG_FACILITY_CHNET = 0x34, 3106 + FW_DEVLOG_FACILITY_MAX = 0x34, 3108 3107 }; 3109 3108 3110 3109 /* log message format */ ··· 3141 3138 #define FW_DEVLOG_CMD_MEMADDR16_DEVLOG_G(x) \ 3142 3139 (((x) >> FW_DEVLOG_CMD_MEMADDR16_DEVLOG_S) & \ 3143 3140 FW_DEVLOG_CMD_MEMADDR16_DEVLOG_M) 3141 + 3142 + /* P C I E F W P F 7 R E G I S T E R */ 3143 + 3144 + /* PF7 stores the Firmware Device Log parameters, which allow Host Drivers to 3145 + * access the "devlog" without needing to contact the firmware. The encoding is 3146 + * mostly the same as that returned by the DEVLOG command, except for the size, 3147 + * which is encoded here as the number of 128-entry multiples minus 1 rather 3148 + * than as the memory size as is done in the DEVLOG command. Thus, 0 means 128 3149 + * entries and 15 means 2048. This in turn constrains the allowed values 3150 + * for the devlog size ... 3151 + */ 3152 + #define PCIE_FW_PF_DEVLOG 7 3153 + 3154 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_S 28 3155 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_M 0xf 3156 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_V(x) \ 3157 + ((x) << PCIE_FW_PF_DEVLOG_NENTRIES128_S) 3158 + #define PCIE_FW_PF_DEVLOG_NENTRIES128_G(x) \ 3159 + (((x) >> PCIE_FW_PF_DEVLOG_NENTRIES128_S) & \ 3160 + PCIE_FW_PF_DEVLOG_NENTRIES128_M) 3161 + 3162 + #define PCIE_FW_PF_DEVLOG_ADDR16_S 4 3163 + #define PCIE_FW_PF_DEVLOG_ADDR16_M 0xffffff 3164 + #define PCIE_FW_PF_DEVLOG_ADDR16_V(x) ((x) << PCIE_FW_PF_DEVLOG_ADDR16_S) 3165 + #define PCIE_FW_PF_DEVLOG_ADDR16_G(x) \ 3166 + (((x) >> PCIE_FW_PF_DEVLOG_ADDR16_S) & PCIE_FW_PF_DEVLOG_ADDR16_M) 3167 + 3168 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_S 0 3169 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_M 0xf 3170 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_V(x) ((x) << PCIE_FW_PF_DEVLOG_MEMTYPE_S) 3171 + #define PCIE_FW_PF_DEVLOG_MEMTYPE_G(x) \ 3172 + (((x) >> PCIE_FW_PF_DEVLOG_MEMTYPE_S) & PCIE_FW_PF_DEVLOG_MEMTYPE_M) 3144 3173 3145 3174 #endif /* _T4FW_INTERFACE_H_ */
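A worked example of the PCIE_FW_PF_DEVLOG size encoding described above: the 4-bit field stores (number of entries / 128) - 1, so 0 decodes to 128 entries and 15 to 2048, and the byte size then follows from the entry size (assumed to be 32 bytes here purely for the arithmetic):

#include <stdio.h>

int main(void)
{
	unsigned int field, entry_size = 32; /* assumed sizeof(struct fw_devlog_e) */

	for (field = 0; field <= 15; field += 15) { /* the two extremes */
		unsigned int nentries = (field + 1) * 128;

		printf("field=%2u -> %4u entries, %6u bytes\n",
		       field, nentries, nentries * entry_size);
	}
	return 0;
}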
+4 -4
drivers/net/ethernet/chelsio/cxgb4/t4fw_version.h
··· 36 36 #define __T4FW_VERSION_H__ 37 37 38 38 #define T4FW_VERSION_MAJOR 0x01 39 - #define T4FW_VERSION_MINOR 0x0C 40 - #define T4FW_VERSION_MICRO 0x19 39 + #define T4FW_VERSION_MINOR 0x0D 40 + #define T4FW_VERSION_MICRO 0x20 41 41 #define T4FW_VERSION_BUILD 0x00 42 42 43 43 #define T5FW_VERSION_MAJOR 0x01 44 - #define T5FW_VERSION_MINOR 0x0C 45 - #define T5FW_VERSION_MICRO 0x19 44 + #define T5FW_VERSION_MINOR 0x0D 45 + #define T5FW_VERSION_MICRO 0x20 46 46 #define T5FW_VERSION_BUILD 0x00 47 47 48 48 #endif
+8 -4
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
··· 1004 1004 ? (tq->pidx - 1) 1005 1005 : (tq->size - 1)); 1006 1006 __be64 *src = (__be64 *)&tq->desc[index]; 1007 - __be64 __iomem *dst = (__be64 *)(tq->bar2_addr + 1007 + __be64 __iomem *dst = (__be64 __iomem *)(tq->bar2_addr + 1008 1008 SGE_UDB_WCDOORBELL); 1009 1009 unsigned int count = EQ_UNIT / sizeof(__be64); 1010 1010 ··· 1018 1018 * DMA. 1019 1019 */ 1020 1020 while (count) { 1021 - writeq(*src, dst); 1021 + /* the (__force u64) is because the compiler 1022 + * doesn't understand the endian swizzling 1023 + * going on 1024 + */ 1025 + writeq((__force u64)*src, dst); 1022 1026 src++; 1023 1027 dst++; 1024 1028 count--; ··· 1256 1252 BUG_ON(DIV_ROUND_UP(ETHTXQ_MAX_HDR, TXD_PER_EQ_UNIT) > 1); 1257 1253 wr = (void *)&txq->q.desc[txq->q.pidx]; 1258 1254 wr->equiq_to_len16 = cpu_to_be32(wr_mid); 1259 - wr->r3[0] = cpu_to_be64(0); 1260 - wr->r3[1] = cpu_to_be64(0); 1255 + wr->r3[0] = cpu_to_be32(0); 1256 + wr->r3[1] = cpu_to_be32(0); 1261 1257 skb_copy_from_linear_data(skb, (void *)wr->ethmacdst, fw_hdr_copy_len); 1262 1258 end = (u64 *)wr + flits; 1263 1259
+3 -3
drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
··· 210 210 211 211 if (rpl) { 212 212 /* request bit in high-order BE word */ 213 - WARN_ON((be32_to_cpu(*(const u32 *)cmd) 213 + WARN_ON((be32_to_cpu(*(const __be32 *)cmd) 214 214 & FW_CMD_REQUEST_F) == 0); 215 215 get_mbox_rpl(adapter, rpl, size, mbox_data); 216 - WARN_ON((be32_to_cpu(*(u32 *)rpl) 216 + WARN_ON((be32_to_cpu(*(__be32 *)rpl) 217 217 & FW_CMD_REQUEST_F) != 0); 218 218 } 219 219 t4_write_reg(adapter, mbox_ctl, ··· 484 484 * o The BAR2 Queue ID. 485 485 * o The BAR2 Queue ID Offset into the BAR2 page. 486 486 */ 487 - bar2_page_offset = ((qid >> qpp_shift) << page_shift); 487 + bar2_page_offset = ((u64)(qid >> qpp_shift) << page_shift); 488 488 bar2_qid = qid & qpp_mask; 489 489 bar2_qid_offset = bar2_qid * SGE_UDB_SIZE; 490 490
+27 -3
drivers/net/ethernet/freescale/fec_main.c
··· 1954 1954 struct fec_enet_private *fep = netdev_priv(ndev); 1955 1955 struct device_node *node; 1956 1956 int err = -ENXIO, i; 1957 + u32 mii_speed, holdtime; 1957 1958 1958 1959 /* 1959 1960 * The i.MX28 dual fec interfaces are not equal. ··· 1992 1991 * Reference Manual has an error on this, and gets fixed on i.MX6Q 1993 1992 * document. 1994 1993 */ 1995 - fep->phy_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 5000000); 1994 + mii_speed = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 5000000); 1996 1995 if (fep->quirks & FEC_QUIRK_ENET_MAC) 1997 - fep->phy_speed--; 1998 - fep->phy_speed <<= 1; 1996 + mii_speed--; 1997 + if (mii_speed > 63) { 1998 + dev_err(&pdev->dev, 1999 + "fec clock (%lu) too fast to get the right mii speed\n", 2000 + clk_get_rate(fep->clk_ipg)); 2001 + err = -EINVAL; 2002 + goto err_out; 2003 + } 2004 + 2005 + /* 2006 + * The i.MX28 and i.MX6 types have another field in the MSCR (aka 2007 + * MII_SPEED) register that defines the MDIO output hold time. Earlier 2008 + * versions are RAZ there, so just ignore the difference and write the 2009 + * register always. 2010 + * The minimal hold time according to IEEE 802.3 (clause 22) is 10 ns. 2011 + * HOLDTIME + 1 is the number of clk cycles the fec is holding the 2012 + * output. 2013 + * The HOLDTIME bitfield takes values between 0 and 7 (inclusive). 2014 + * Given that ceil(clkrate / 5000000) <= 64, the calculation for 2015 + * holdtime cannot result in a value greater than 3. 2016 + */ 2017 + holdtime = DIV_ROUND_UP(clk_get_rate(fep->clk_ipg), 100000000) - 1; 2018 + 2019 + fep->phy_speed = mii_speed << 1 | holdtime << 8; 2020 + 1999 2021 writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED); 2000 2022 2001 2023 fep->mii_bus = mdiobus_alloc();
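A worked example of the MSCR arithmetic above for an assumed 66 MHz ipg clock: mii_speed = ceil(66e6 / 5e6) - 1 = 13 (with the ENET_MAC quirk applied), holdtime = ceil(66e6 / 100e6) - 1 = 0, giving a register value of 13 << 1 = 0x1a:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	unsigned long clk = 66000000;	/* assumed clk_ipg rate */
	unsigned int mii_speed = DIV_ROUND_UP(clk, 5000000) - 1; /* ENET_MAC quirk */
	unsigned int holdtime = DIV_ROUND_UP(clk, 100000000) - 1;
	unsigned int mscr = mii_speed << 1 | holdtime << 8;

	printf("mii_speed=%u holdtime=%u MSCR=0x%x\n", mii_speed, holdtime, mscr);
	return 0;
}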
+3
drivers/net/ethernet/freescale/ucc_geth.c
··· 3893 3893 ugeth->phy_interface = phy_interface; 3894 3894 ugeth->max_speed = max_speed; 3895 3895 3896 + /* Carrier starts down, phylib will bring it up */ 3897 + netif_carrier_off(dev); 3898 + 3896 3899 err = register_netdev(dev); 3897 3900 if (err) { 3898 3901 if (netif_msg_probe(ugeth))
+1 -6
drivers/net/ethernet/marvell/mvneta.c
··· 2658 2658 static int mvneta_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd) 2659 2659 { 2660 2660 struct mvneta_port *pp = netdev_priv(dev); 2661 - int ret; 2662 2661 2663 2662 if (!pp->phy_dev) 2664 2663 return -ENOTSUPP; 2665 2664 2666 - ret = phy_mii_ioctl(pp->phy_dev, ifr, cmd); 2667 - if (!ret) 2668 - mvneta_adjust_link(dev); 2669 - 2670 - return ret; 2665 + return phy_mii_ioctl(pp->phy_dev, ifr, cmd); 2671 2666 } 2672 2667 2673 2668 /* Ethtool methods */
+3 -2
drivers/net/ethernet/mellanox/mlx4/cmd.c
··· 724 724 * on the host, we deprecate the error message for this 725 725 * specific command/input_mod/opcode_mod/fw-status to be debug. 726 726 */ 727 - if (op == MLX4_CMD_SET_PORT && in_modifier == 1 && 727 + if (op == MLX4_CMD_SET_PORT && 728 + (in_modifier == 1 || in_modifier == 2) && 728 729 op_modifier == 0 && context->fw_status == CMD_STAT_BAD_SIZE) 729 730 mlx4_dbg(dev, "command 0x%x failed: fw status = 0x%x\n", 730 731 op, context->fw_status); ··· 1994 1993 goto reset_slave; 1995 1994 slave_state[slave].vhcr_dma = ((u64) param) << 48; 1996 1995 priv->mfunc.master.slave_state[slave].cookie = 0; 1997 - mutex_init(&priv->mfunc.master.gen_eqe_mutex[slave]); 1998 1996 break; 1999 1997 case MLX4_COMM_CMD_VHCR1: 2000 1998 if (slave_state[slave].last_cmd != MLX4_COMM_CMD_VHCR0) ··· 2225 2225 for (i = 0; i < dev->num_slaves; ++i) { 2226 2226 s_state = &priv->mfunc.master.slave_state[i]; 2227 2227 s_state->last_cmd = MLX4_COMM_CMD_RESET; 2228 + mutex_init(&priv->mfunc.master.gen_eqe_mutex[i]); 2228 2229 for (j = 0; j < MLX4_EVENT_TYPES_NUM; ++j) 2229 2230 s_state->event_eq[j].eqn = -1; 2230 2231 __raw_writel((__force u32) 0,
+8 -7
drivers/net/ethernet/mellanox/mlx4/en_netdev.c
··· 2805 2805 netif_carrier_off(dev); 2806 2806 mlx4_en_set_default_moderation(priv); 2807 2807 2808 - err = register_netdev(dev); 2809 - if (err) { 2810 - en_err(priv, "Netdev registration failed for port %d\n", port); 2811 - goto out; 2812 - } 2813 - priv->registered = 1; 2814 - 2815 2808 en_warn(priv, "Using %d TX rings\n", prof->tx_ring_num); 2816 2809 en_warn(priv, "Using %d RX rings\n", prof->rx_ring_num); 2817 2810 ··· 2845 2852 SERVICE_TASK_DELAY); 2846 2853 2847 2854 mlx4_set_stats_bitmap(mdev->dev, &priv->stats_bitmap); 2855 + 2856 + err = register_netdev(dev); 2857 + if (err) { 2858 + en_err(priv, "Netdev registration failed for port %d\n", port); 2859 + goto out; 2860 + } 2861 + 2862 + priv->registered = 1; 2848 2863 2849 2864 return 0; 2850 2865
+7 -11
drivers/net/ethernet/mellanox/mlx4/eq.c
··· 153 153 154 154 /* All active slaves need to receive the event */ 155 155 if (slave == ALL_SLAVES) { 156 - for (i = 0; i < dev->num_slaves; i++) { 157 - if (i != dev->caps.function && 158 - master->slave_state[i].active) 159 - if (mlx4_GEN_EQE(dev, i, eqe)) 160 - mlx4_warn(dev, "Failed to generate event for slave %d\n", 161 - i); 156 + for (i = 0; i <= dev->persist->num_vfs; i++) { 157 + if (mlx4_GEN_EQE(dev, i, eqe)) 158 + mlx4_warn(dev, "Failed to generate event for slave %d\n", 159 + i); 162 160 } 163 161 } else { 164 162 if (mlx4_GEN_EQE(dev, slave, eqe)) ··· 201 203 struct mlx4_eqe *eqe) 202 204 { 203 205 struct mlx4_priv *priv = mlx4_priv(dev); 204 - struct mlx4_slave_state *s_slave = 205 - &priv->mfunc.master.slave_state[slave]; 206 206 207 - if (!s_slave->active) { 208 - /*mlx4_warn(dev, "Trying to pass event to inactive slave\n");*/ 207 + if (slave < 0 || slave > dev->persist->num_vfs || 208 + slave == dev->caps.function || 209 + !priv->mfunc.master.slave_state[slave].active) 209 210 return; 210 - } 211 211 212 212 slave_event(dev, slave, eqe); 213 213 }
+6
drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
··· 3095 3095 if (!priv->mfunc.master.slave_state) 3096 3096 return -EINVAL; 3097 3097 3098 + /* check for slave valid, slave not PF, and slave active */ 3099 + if (slave < 0 || slave > dev->persist->num_vfs || 3100 + slave == dev->caps.function || 3101 + !priv->mfunc.master.slave_state[slave].active) 3102 + return 0; 3103 + 3098 3104 event_eq = &priv->mfunc.master.slave_state[slave].event_eq[eqe->type]; 3099 3105 3100 3106 /* Create the event only if the slave is registered */
+7 -1
drivers/net/ethernet/rocker/rocker.c
··· 4468 4468 struct net_device *master = netdev_master_upper_dev_get(dev); 4469 4469 int err = 0; 4470 4470 4471 + /* There are currently three cases handled here: 4472 + * 1. Joining a bridge 4473 + * 2. Leaving a previously joined bridge 4474 + * 3. Other, e.g. being added to or removed from a bond or openvswitch, 4475 + * in which case nothing is done 4476 + */ 4471 4477 if (master && master->rtnl_link_ops && 4472 4478 !strcmp(master->rtnl_link_ops->kind, "bridge")) 4473 4479 err = rocker_port_bridge_join(rocker_port, master); 4474 - else 4480 + else if (rocker_port_is_bridged(rocker_port)) 4475 4481 err = rocker_port_bridge_leave(rocker_port); 4476 4482 4477 4483 return err;
+3 -1
drivers/net/ipvlan/ipvlan.h
··· 114 114 rx_handler_result_t ipvlan_handle_frame(struct sk_buff **pskb); 115 115 int ipvlan_queue_xmit(struct sk_buff *skb, struct net_device *dev); 116 116 void ipvlan_ht_addr_add(struct ipvl_dev *ipvlan, struct ipvl_addr *addr); 117 - bool ipvlan_addr_busy(struct ipvl_dev *ipvlan, void *iaddr, bool is_v6); 117 + struct ipvl_addr *ipvlan_find_addr(const struct ipvl_dev *ipvlan, 118 + const void *iaddr, bool is_v6); 119 + bool ipvlan_addr_busy(struct ipvl_port *port, void *iaddr, bool is_v6); 118 120 struct ipvl_addr *ipvlan_ht_addr_lookup(const struct ipvl_port *port, 119 121 const void *iaddr, bool is_v6); 120 122 void ipvlan_ht_addr_del(struct ipvl_addr *addr, bool sync);
+21 -9
drivers/net/ipvlan/ipvlan_core.c
··· 81 81 hash = (addr->atype == IPVL_IPV6) ? 82 82 ipvlan_get_v6_hash(&addr->ip6addr) : 83 83 ipvlan_get_v4_hash(&addr->ip4addr); 84 - hlist_add_head_rcu(&addr->hlnode, &port->hlhead[hash]); 84 + if (hlist_unhashed(&addr->hlnode)) 85 + hlist_add_head_rcu(&addr->hlnode, &port->hlhead[hash]); 85 86 } 86 87 87 88 void ipvlan_ht_addr_del(struct ipvl_addr *addr, bool sync) 88 89 { 89 - hlist_del_rcu(&addr->hlnode); 90 + hlist_del_init_rcu(&addr->hlnode); 90 91 if (sync) 91 92 synchronize_rcu(); 92 93 } 93 94 94 - bool ipvlan_addr_busy(struct ipvl_dev *ipvlan, void *iaddr, bool is_v6) 95 + struct ipvl_addr *ipvlan_find_addr(const struct ipvl_dev *ipvlan, 96 + const void *iaddr, bool is_v6) 95 97 { 96 - struct ipvl_port *port = ipvlan->port; 97 98 struct ipvl_addr *addr; 98 99 99 100 list_for_each_entry(addr, &ipvlan->addrs, anode) { ··· 102 101 ipv6_addr_equal(&addr->ip6addr, iaddr)) || 103 102 (!is_v6 && addr->atype == IPVL_IPV4 && 104 103 addr->ip4addr.s_addr == ((struct in_addr *)iaddr)->s_addr)) 104 + return addr; 105 + } 106 + return NULL; 107 + } 108 + 109 + bool ipvlan_addr_busy(struct ipvl_port *port, void *iaddr, bool is_v6) 110 + { 111 + struct ipvl_dev *ipvlan; 112 + 113 + ASSERT_RTNL(); 114 + 115 + list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 116 + if (ipvlan_find_addr(ipvlan, iaddr, is_v6)) 105 117 return true; 106 118 } 107 - 108 - if (ipvlan_ht_addr_lookup(port, iaddr, is_v6)) 109 - return true; 110 - 111 119 return false; 112 120 } 113 121 ··· 202 192 if (skb->protocol == htons(ETH_P_PAUSE)) 203 193 return; 204 194 205 - list_for_each_entry(ipvlan, &port->ipvlans, pnode) { 195 + rcu_read_lock(); 196 + list_for_each_entry_rcu(ipvlan, &port->ipvlans, pnode) { 206 197 if (local && (ipvlan == in_dev)) 207 198 continue; 208 199 ··· 230 219 mcast_acct: 231 220 ipvlan_count_rx(ipvlan, len, ret == NET_RX_SUCCESS, true); 232 221 } 222 + rcu_read_unlock(); 233 223 234 224 /* Locally generated? ...Forward a copy to the main-device as 235 225 * well. On the RX side we'll ignore it (wont give it to any
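The ipvlan fix above leans on hlist_unhashed() plus hlist_del_init_rcu() to make hash add and delete idempotent: delete re-initializes the node, so a repeated delete is a no-op and a later add sees the node as unhashed. A simplified stand-in (not the kernel's hlist implementation) showing why that works:

#include <stdbool.h>
#include <stdio.h>

struct node { struct node *next, **pprev; };

static bool unhashed(const struct node *n) { return !n->pprev; }

static void add_head(struct node *n, struct node **head)
{
	if (!unhashed(n))
		return;			/* already on the list: no-op */
	n->next = *head;
	if (*head)
		(*head)->pprev = &n->next;
	n->pprev = head;
	*head = n;
}

static void del_init(struct node *n)
{
	if (unhashed(n))
		return;			/* double delete: no-op */
	*n->pprev = n->next;
	if (n->next)
		n->next->pprev = n->pprev;
	n->next = NULL;
	n->pprev = NULL;		/* mark unhashed again */
}

int main(void)
{
	struct node *head = NULL;
	struct node a = { NULL, NULL };

	add_head(&a, &head);
	add_head(&a, &head);	/* idempotent: still one entry */
	del_init(&a);
	del_init(&a);		/* idempotent: no corruption */
	printf("list is %s\n", head ? "non-empty" : "empty");
	return 0;
}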
+19 -11
drivers/net/ipvlan/ipvlan_main.c
··· 505 505 if (ipvlan->ipv6cnt > 0 || ipvlan->ipv4cnt > 0) { 506 506 list_for_each_entry_safe(addr, next, &ipvlan->addrs, anode) { 507 507 ipvlan_ht_addr_del(addr, !dev->dismantle); 508 - list_del_rcu(&addr->anode); 508 + list_del(&addr->anode); 509 509 } 510 510 } 511 511 list_del_rcu(&ipvlan->pnode); ··· 607 607 { 608 608 struct ipvl_addr *addr; 609 609 610 - if (ipvlan_addr_busy(ipvlan, ip6_addr, true)) { 610 + if (ipvlan_addr_busy(ipvlan->port, ip6_addr, true)) { 611 611 netif_err(ipvlan, ifup, ipvlan->dev, 612 612 "Failed to add IPv6=%pI6c addr for %s intf\n", 613 613 ip6_addr, ipvlan->dev->name); ··· 620 620 addr->master = ipvlan; 621 621 memcpy(&addr->ip6addr, ip6_addr, sizeof(struct in6_addr)); 622 622 addr->atype = IPVL_IPV6; 623 - list_add_tail_rcu(&addr->anode, &ipvlan->addrs); 623 + list_add_tail(&addr->anode, &ipvlan->addrs); 624 624 ipvlan->ipv6cnt++; 625 - ipvlan_ht_addr_add(ipvlan, addr); 625 + /* If the interface is not up, the address will be added to the hash 626 + * list by ipvlan_open. 627 + */ 628 + if (netif_running(ipvlan->dev)) 629 + ipvlan_ht_addr_add(ipvlan, addr); 626 630 627 631 return 0; 628 632 } ··· 635 631 { 636 632 struct ipvl_addr *addr; 637 633 638 - addr = ipvlan_ht_addr_lookup(ipvlan->port, ip6_addr, true); 634 + addr = ipvlan_find_addr(ipvlan, ip6_addr, true); 639 635 if (!addr) 640 636 return; 641 637 642 638 ipvlan_ht_addr_del(addr, true); 643 - list_del_rcu(&addr->anode); 639 + list_del(&addr->anode); 644 640 ipvlan->ipv6cnt--; 645 641 WARN_ON(ipvlan->ipv6cnt < 0); 646 642 kfree_rcu(addr, rcu); ··· 679 675 { 680 676 struct ipvl_addr *addr; 681 677 682 - if (ipvlan_addr_busy(ipvlan, ip4_addr, false)) { 678 + if (ipvlan_addr_busy(ipvlan->port, ip4_addr, false)) { 683 679 netif_err(ipvlan, ifup, ipvlan->dev, 684 680 "Failed to add IPv4=%pI4 on %s intf.\n", 685 681 ip4_addr, ipvlan->dev->name); ··· 692 688 addr->master = ipvlan; 693 689 memcpy(&addr->ip4addr, ip4_addr, sizeof(struct in_addr)); 694 690 addr->atype = IPVL_IPV4; 695 - list_add_tail_rcu(&addr->anode, &ipvlan->addrs); 691 + list_add_tail(&addr->anode, &ipvlan->addrs); 696 692 ipvlan->ipv4cnt++; 697 - ipvlan_ht_addr_add(ipvlan, addr); 693 + /* If the interface is not up, the address will be added to the hash 694 + * list by ipvlan_open. 695 + */ 696 + if (netif_running(ipvlan->dev)) 697 + ipvlan_ht_addr_add(ipvlan, addr); 698 698 ipvlan_set_broadcast_mac_filter(ipvlan, true); 699 699 700 700 return 0; ··· 708 700 { 709 701 struct ipvl_addr *addr; 710 702 711 - addr = ipvlan_ht_addr_lookup(ipvlan->port, ip4_addr, false); 703 + addr = ipvlan_find_addr(ipvlan, ip4_addr, false); 712 704 if (!addr) 713 705 return; 714 706 715 707 ipvlan_ht_addr_del(addr, true); 716 - list_del_rcu(&addr->anode); 708 + list_del(&addr->anode); 717 709 ipvlan->ipv4cnt--; 718 710 WARN_ON(ipvlan->ipv4cnt < 0); 719 711 if (!ipvlan->ipv4cnt)
+2
drivers/net/usb/asix_common.c
··· 188 188 memcpy(skb_tail_pointer(skb), &padbytes, sizeof(padbytes)); 189 189 skb_put(skb, sizeof(padbytes)); 190 190 } 191 + 192 + usbnet_set_skb_tx_stats(skb, 1, 0); 191 193 return skb; 192 194 } 193 195
+8
drivers/net/usb/cdc_ether.c
··· 522 522 #define DELL_VENDOR_ID 0x413C 523 523 #define REALTEK_VENDOR_ID 0x0bda 524 524 #define SAMSUNG_VENDOR_ID 0x04e8 525 + #define LENOVO_VENDOR_ID 0x17ef 525 526 526 527 static const struct usb_device_id products[] = { 527 528 /* BLACKLIST !! ··· 699 698 /* Samsung USB Ethernet Adapters */ 700 699 { 701 700 USB_DEVICE_AND_INTERFACE_INFO(SAMSUNG_VENDOR_ID, 0xa101, USB_CLASS_COMM, 701 + USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 702 + .driver_info = 0, 703 + }, 704 + 705 + /* Lenovo Thinkpad USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */ 706 + { 707 + USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0x7205, USB_CLASS_COMM, 702 708 USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE), 703 709 .driver_info = 0, 704 710 },
+3 -3
drivers/net/usb/cdc_ncm.c
··· 1172 1172 1173 1173 /* return skb */ 1174 1174 ctx->tx_curr_skb = NULL; 1175 - dev->net->stats.tx_packets += ctx->tx_curr_frame_num; 1176 1175 1177 1176 /* keep private stats: framing overhead and number of NTBs */ 1178 1177 ctx->tx_overhead += skb_out->len - ctx->tx_curr_frame_payload; 1179 1178 ctx->tx_ntbs++; 1180 1179 1181 - /* usbnet has already counted all the framing overhead. 1180 + /* usbnet will count all the framing overhead by default. 1182 1181 * Adjust the stats so that the tx_bytes counter show real 1183 1182 * payload data instead. 1184 1183 */ 1185 - dev->net->stats.tx_bytes -= skb_out->len - ctx->tx_curr_frame_payload; 1184 + usbnet_set_skb_tx_stats(skb_out, n, 1185 + ctx->tx_curr_frame_payload - skb_out->len); 1186 1186 1187 1187 return skb_out; 1188 1188
+2
drivers/net/usb/r8152.c
··· 492 492 /* Define these values to match your device */ 493 493 #define VENDOR_ID_REALTEK 0x0bda 494 494 #define VENDOR_ID_SAMSUNG 0x04e8 495 + #define VENDOR_ID_LENOVO 0x17ef 495 496 496 497 #define MCU_TYPE_PLA 0x0100 497 498 #define MCU_TYPE_USB 0x0000 ··· 4038 4037 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8152)}, 4039 4038 {REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153)}, 4040 4039 {REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)}, 4040 + {REALTEK_USB_DEVICE(VENDOR_ID_LENOVO, 0x7205)}, 4041 4041 {} 4042 4042 }; 4043 4043
+1
drivers/net/usb/sr9800.c
··· 144 144 skb_put(skb, sizeof(padbytes)); 145 145 } 146 146 147 + usbnet_set_skb_tx_stats(skb, 1, 0); 147 148 return skb; 148 149 } 149 150
+14 -3
drivers/net/usb/usbnet.c
··· 1188 1188 struct usbnet *dev = entry->dev; 1189 1189 1190 1190 if (urb->status == 0) { 1191 - if (!(dev->driver_info->flags & FLAG_MULTI_PACKET)) 1192 - dev->net->stats.tx_packets++; 1191 + dev->net->stats.tx_packets += entry->packets; 1193 1192 dev->net->stats.tx_bytes += entry->length; 1194 1193 } else { 1195 1194 dev->net->stats.tx_errors++; ··· 1346 1347 } else 1347 1348 urb->transfer_flags |= URB_ZERO_PACKET; 1348 1349 } 1349 - entry->length = urb->transfer_buffer_length = length; 1350 + urb->transfer_buffer_length = length; 1351 + 1352 + if (info->flags & FLAG_MULTI_PACKET) { 1353 + /* Driver has set number of packets and a length delta. 1354 + * Calculate the complete length and ensure that it's 1355 + * positive. 1356 + */ 1357 + entry->length += length; 1358 + if (WARN_ON_ONCE(entry->length <= 0)) 1359 + entry->length = length; 1360 + } else { 1361 + usbnet_set_skb_tx_stats(skb, 1, length); 1362 + } 1350 1363 1351 1364 spin_lock_irqsave(&dev->txq.lock, flags); 1352 1365 retval = usb_autopm_get_interface_async(dev->intf);
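The usbnet change above replaces the core's one-packet assumption with explicit per-skb accounting: a FLAG_MULTI_PACKET minidriver records the packet count and a byte delta, and the core adds the final URB length at submit time, so tx_bytes reflects payload rather than framing. A plain-C sketch of that contract (stand-in types, not the real struct skb_data):

#include <stdio.h>

struct tx_stats {
	unsigned long packets;
	long length;	/* byte delta until submit, then the final total */
};

static void set_skb_tx_stats(struct tx_stats *s,
			     unsigned long packets, long bytes_delta)
{
	s->packets = packets;
	s->length = bytes_delta;
}

int main(void)
{
	struct tx_stats s;
	long payload = 1514, urb_len = 1514 + 42;	/* assumed 42-byte framing */

	/* aggregating minidriver: one frame, delta = payload - urb length */
	set_skb_tx_stats(&s, 1, payload - urb_len);
	s.length += urb_len;	/* what the core adds when the URB is built */
	printf("tx_packets += %lu, tx_bytes += %ld\n", s.packets, s.length);
	return 0;
}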
+12 -8
drivers/net/wireless/ath/ath9k/beacon.c
··· 219 219 struct ath_common *common = ath9k_hw_common(sc->sc_ah); 220 220 struct ath_vif *avp = (void *)vif->drv_priv; 221 221 struct ath_buf *bf = avp->av_bcbuf; 222 + struct ath_beacon_config *cur_conf = &sc->cur_chan->beacon; 222 223 223 224 ath_dbg(common, CONFIG, "Removing interface at beacon slot: %d\n", 224 225 avp->av_bslot); 225 226 226 227 tasklet_disable(&sc->bcon_tasklet); 228 + 229 + cur_conf->enable_beacon &= ~BIT(avp->av_bslot); 227 230 228 231 if (bf && bf->bf_mpdu) { 229 232 struct sk_buff *skb = bf->bf_mpdu; ··· 524 521 } 525 522 526 523 if (sc->sc_ah->opmode == NL80211_IFTYPE_AP) { 527 - if ((vif->type != NL80211_IFTYPE_AP) || 528 - (sc->nbcnvifs > 1)) { 524 + if (vif->type != NL80211_IFTYPE_AP) { 529 525 ath_dbg(common, CONFIG, 530 526 "An AP interface is already present !\n"); 531 527 return false; ··· 618 616 * enabling/disabling SWBA. 619 617 */ 620 618 if (changed & BSS_CHANGED_BEACON_ENABLED) { 621 - if (!bss_conf->enable_beacon && 622 - (sc->nbcnvifs <= 1)) { 623 - cur_conf->enable_beacon = false; 624 - } else if (bss_conf->enable_beacon) { 625 - cur_conf->enable_beacon = true; 626 - ath9k_cache_beacon_config(sc, ctx, bss_conf); 619 + bool enabled = cur_conf->enable_beacon; 620 + 621 + if (!bss_conf->enable_beacon) { 622 + cur_conf->enable_beacon &= ~BIT(avp->av_bslot); 623 + } else { 624 + cur_conf->enable_beacon |= BIT(avp->av_bslot); 625 + if (!enabled) 626 + ath9k_cache_beacon_config(sc, ctx, bss_conf); 627 627 } 628 628 } 629 629
+1 -1
drivers/net/wireless/ath/ath9k/common.h
··· 54 54 u16 dtim_period; 55 55 u16 bmiss_timeout; 56 56 u8 dtim_count; 57 - bool enable_beacon; 57 + u8 enable_beacon; 58 58 bool ibss_creator; 59 59 u32 nexttbtt; 60 60 u32 intval;
+1 -1
drivers/net/wireless/ath/ath9k/hw.c
··· 424 424 ah->power_mode = ATH9K_PM_UNDEFINED; 425 425 ah->htc_reset_init = true; 426 426 427 - ah->tpc_enabled = true; 427 + ah->tpc_enabled = false; 428 428 429 429 ah->ani_function = ATH9K_ANI_ALL; 430 430 if (!AR_SREV_9300_20_OR_LATER(ah))
+2 -1
drivers/net/wireless/brcm80211/brcmfmac/feature.c
··· 126 126 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_MCHAN, "mchan"); 127 127 if (drvr->bus_if->wowl_supported) 128 128 brcmf_feat_iovar_int_get(ifp, BRCMF_FEAT_WOWL, "wowl"); 129 - brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0); 129 + if (drvr->bus_if->chip != BRCM_CC_43362_CHIP_ID) 130 + brcmf_feat_iovar_int_set(ifp, BRCMF_FEAT_MBSS, "mbss", 0); 130 131 131 132 /* set chip related quirks */ 132 133 switch (drvr->bus_if->chip) {
-1
drivers/net/wireless/iwlwifi/dvm/dev.h
··· 708 708 unsigned long reload_jiffies; 709 709 int reload_count; 710 710 bool ucode_loaded; 711 - bool init_ucode_run; /* Don't run init uCode again */ 712 711 713 712 u8 plcp_delta_threshold; 714 713
+9 -8
drivers/net/wireless/iwlwifi/dvm/mac80211.c
··· 1114 1114 scd_queues &= ~(BIT(IWL_IPAN_CMD_QUEUE_NUM) | 1115 1115 BIT(IWL_DEFAULT_CMD_QUEUE_NUM)); 1116 1116 1117 - if (vif) 1118 - scd_queues &= ~BIT(vif->hw_queue[IEEE80211_AC_VO]); 1119 - 1120 - IWL_DEBUG_TX_QUEUES(priv, "Flushing SCD queues: 0x%x\n", scd_queues); 1121 - if (iwlagn_txfifo_flush(priv, scd_queues)) { 1122 - IWL_ERR(priv, "flush request fail\n"); 1123 - goto done; 1117 + if (drop) { 1118 + IWL_DEBUG_TX_QUEUES(priv, "Flushing SCD queues: 0x%x\n", 1119 + scd_queues); 1120 + if (iwlagn_txfifo_flush(priv, scd_queues)) { 1121 + IWL_ERR(priv, "flush request fail\n"); 1122 + goto done; 1123 + } 1124 1124 } 1125 + 1125 1126 IWL_DEBUG_TX_QUEUES(priv, "wait transmit/flush all frames\n"); 1126 - iwl_trans_wait_tx_queue_empty(priv->trans, 0xffffffff); 1127 + iwl_trans_wait_tx_queue_empty(priv->trans, scd_queues); 1127 1128 done: 1128 1129 mutex_unlock(&priv->mutex); 1129 1130 IWL_DEBUG_MAC80211(priv, "leave\n");
-5
drivers/net/wireless/iwlwifi/dvm/ucode.c
··· 418 418 if (!priv->fw->img[IWL_UCODE_INIT].sec[0].len) 419 419 return 0; 420 420 421 - if (priv->init_ucode_run) 422 - return 0; 423 - 424 421 iwl_init_notification_wait(&priv->notif_wait, &calib_wait, 425 422 calib_complete, ARRAY_SIZE(calib_complete), 426 423 iwlagn_wait_calib, priv); ··· 437 440 */ 438 441 ret = iwl_wait_notification(&priv->notif_wait, &calib_wait, 439 442 UCODE_CALIB_TIMEOUT); 440 - if (!ret) 441 - priv->init_ucode_run = true; 442 443 443 444 goto out; 444 445
+1
drivers/net/wireless/iwlwifi/iwl-drv.c
··· 1257 1257 op->name, err); 1258 1258 #endif 1259 1259 } 1260 + kfree(pieces); 1260 1261 return; 1261 1262 1262 1263 try_again:
+22 -2
drivers/net/wireless/iwlwifi/mvm/rs.c
···   1278 1278 struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode);
 1279 1279 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
 1280 1280 
 1281 + if (!iwl_mvm_sta_from_mac80211(sta)->vif)
 1282 + return;
 1283 + 
 1281 1284 if (!ieee80211_is_data(hdr->frame_control) ||
 1282 1285 info->flags & IEEE80211_TX_CTL_NO_ACK)
 1283 1286 return;
···   2514 2511 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
 2515 2512 struct iwl_lq_sta *lq_sta = mvm_sta;
 2516 2513 
 2514 + if (sta && !iwl_mvm_sta_from_mac80211(sta)->vif) {
 2515 + /* if vif isn't initialized, mvm doesn't know about
 2516 + * this station, so don't do anything with it
 2517 + */
 2518 + sta = NULL;
 2519 + mvm_sta = NULL;
 2520 + }
 2521 + 
 2517 2522 /* TODO: handle rate_idx_mask and rate_idx_mcs_mask */
 2518 2523 
 2519 2524 /* Treat uninitialized rate scaling data same as non-existing. */
···   2837 2826 struct iwl_op_mode *op_mode =
 2838 2827 (struct iwl_op_mode *)mvm_r;
 2839 2828 struct iwl_mvm *mvm = IWL_OP_MODE_GET_MVM(op_mode);
 2829 + 
 2830 + if (!iwl_mvm_sta_from_mac80211(sta)->vif)
 2831 + return;
 2832 + 
 2841 2833 /* Stop any ongoing aggregations as rs starts off assuming no agg */
 2842 2834 for (tid = 0; tid < IWL_MAX_TID_COUNT; tid++)
···   3601 3587 
 3602 3588 MVM_DEBUGFS_READ_WRITE_FILE_OPS(ss_force, 32);
 3603 3589 
 3604 - static void rs_add_debugfs(void *mvm, void *mvm_sta, struct dentry *dir)
 3590 + static void rs_add_debugfs(void *mvm, void *priv_sta, struct dentry *dir)
 3605 3591 {
 3606 - struct iwl_lq_sta *lq_sta = mvm_sta;
 3592 + struct iwl_lq_sta *lq_sta = priv_sta;
 3593 + struct iwl_mvm_sta *mvmsta;
 3594 + 
 3595 + mvmsta = container_of(lq_sta, struct iwl_mvm_sta, lq_sta);
 3596 + 
 3597 + if (!mvmsta->vif)
 3598 + return;
 3607 3599 
 3608 3600 debugfs_create_file("rate_scale_table", S_IRUSR | S_IWUSR, dir,
 3609 3601 lq_sta, &rs_sta_dbgfs_scale_table_ops);
+2
drivers/net/wireless/iwlwifi/mvm/time-event.c
··· 197 197 struct iwl_time_event_notif *notif) 198 198 { 199 199 if (!le32_to_cpu(notif->status)) { 200 + if (te_data->vif->type == NL80211_IFTYPE_STATION) 201 + ieee80211_connection_loss(te_data->vif); 200 202 IWL_DEBUG_TE(mvm, "CSA time event failed to start\n"); 201 203 iwl_mvm_te_clear_data(mvm, te_data); 202 204 return;
+4 -2
drivers/net/wireless/iwlwifi/mvm/tx.c
··· 949 949 mvmsta = iwl_mvm_sta_from_mac80211(sta); 950 950 tid_data = &mvmsta->tid_data[tid]; 951 951 952 - if (WARN_ONCE(tid_data->txq_id != scd_flow, "Q %d, tid %d, flow %d", 953 - tid_data->txq_id, tid, scd_flow)) { 952 + if (tid_data->txq_id != scd_flow) { 953 + IWL_ERR(mvm, 954 + "invalid BA notification: Q %d, tid %d, flow %d\n", 955 + tid_data->txq_id, tid, scd_flow); 954 956 rcu_read_unlock(); 955 957 return 0; 956 958 }
+4 -2
drivers/net/wireless/iwlwifi/pcie/drv.c
··· 368 368 /* 3165 Series */ 369 369 {IWL_PCI_DEVICE(0x3165, 0x4010, iwl3165_2ac_cfg)}, 370 370 {IWL_PCI_DEVICE(0x3165, 0x4012, iwl3165_2ac_cfg)}, 371 - {IWL_PCI_DEVICE(0x3165, 0x4110, iwl3165_2ac_cfg)}, 372 - {IWL_PCI_DEVICE(0x3165, 0x4210, iwl3165_2ac_cfg)}, 373 371 {IWL_PCI_DEVICE(0x3165, 0x4410, iwl3165_2ac_cfg)}, 374 372 {IWL_PCI_DEVICE(0x3165, 0x4510, iwl3165_2ac_cfg)}, 373 + {IWL_PCI_DEVICE(0x3165, 0x4110, iwl3165_2ac_cfg)}, 374 + {IWL_PCI_DEVICE(0x3166, 0x4310, iwl3165_2ac_cfg)}, 375 + {IWL_PCI_DEVICE(0x3166, 0x4210, iwl3165_2ac_cfg)}, 376 + {IWL_PCI_DEVICE(0x3165, 0x8010, iwl3165_2ac_cfg)}, 375 377 376 378 /* 7265 Series */ 377 379 {IWL_PCI_DEVICE(0x095A, 0x5010, iwl7265_2ac_cfg)},
+11 -1
drivers/net/wireless/rtlwifi/pci.c
··· 1124 1124 /*This is for new trx flow*/ 1125 1125 struct rtl_tx_buffer_desc *pbuffer_desc = NULL; 1126 1126 u8 temp_one = 1; 1127 + u8 *entry; 1127 1128 1128 1129 memset(&tcb_desc, 0, sizeof(struct rtl_tcb_desc)); 1129 1130 ring = &rtlpci->tx_ring[BEACON_QUEUE]; 1130 1131 pskb = __skb_dequeue(&ring->queue); 1131 - if (pskb) 1132 + if (rtlpriv->use_new_trx_flow) 1133 + entry = (u8 *)(&ring->buffer_desc[ring->idx]); 1134 + else 1135 + entry = (u8 *)(&ring->desc[ring->idx]); 1136 + if (pskb) { 1137 + pci_unmap_single(rtlpci->pdev, 1138 + rtlpriv->cfg->ops->get_desc( 1139 + (u8 *)entry, true, HW_DESC_TXBUFF_ADDR), 1140 + pskb->len, PCI_DMA_TODEVICE); 1132 1141 kfree_skb(pskb); 1142 + } 1133 1143 1134 1144 /*NB: the beacon data buffer must be 32-bit aligned. */ 1135 1145 pskb = ieee80211_beacon_get(hw, mac->vif);
+1 -4
drivers/net/xen-netfront.c
··· 1008 1008 1009 1009 static int xennet_change_mtu(struct net_device *dev, int mtu) 1010 1010 { 1011 - int max = xennet_can_sg(dev) ? 1012 - XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER : ETH_DATA_LEN; 1011 + int max = xennet_can_sg(dev) ? XEN_NETIF_MAX_TX_SIZE : ETH_DATA_LEN; 1013 1012 1014 1013 if (mtu > max) 1015 1014 return -EINVAL; ··· 1277 1278 1278 1279 netdev->ethtool_ops = &xennet_ethtool_ops; 1279 1280 SET_NETDEV_DEV(netdev, &dev->dev); 1280 - 1281 - netif_set_gso_max_size(netdev, XEN_NETIF_MAX_TX_SIZE - MAX_TCP_HEADER); 1282 1281 1283 1282 np->netdev = netdev; 1284 1283
+8 -3
drivers/of/address.c
··· 450 450 return NULL; 451 451 } 452 452 453 - static int of_empty_ranges_quirk(void) 453 + static int of_empty_ranges_quirk(struct device_node *np) 454 454 { 455 455 if (IS_ENABLED(CONFIG_PPC)) { 456 - /* To save cycles, we cache the result */ 456 + /* To save cycles, we cache the result for global "Mac" setting */ 457 457 static int quirk_state = -1; 458 458 459 + /* PA-SEMI sdc DT bug */ 460 + if (of_device_is_compatible(np, "1682m-sdc")) 461 + return true; 462 + 463 + /* Make quirk cached */ 459 464 if (quirk_state < 0) 460 465 quirk_state = 461 466 of_machine_is_compatible("Power Macintosh") || ··· 495 490 * This code is only enabled on powerpc. --gcl 496 491 */ 497 492 ranges = of_get_property(parent, rprop, &rlen); 498 - if (ranges == NULL && !of_empty_ranges_quirk()) { 493 + if (ranges == NULL && !of_empty_ranges_quirk(parent)) { 499 494 pr_debug("OF: no ranges; cannot translate\n"); 500 495 return 1; 501 496 }
+1
drivers/staging/iio/Kconfig
··· 38 38 config IIO_SIMPLE_DUMMY_BUFFER 39 39 bool "Buffered capture support" 40 40 select IIO_BUFFER 41 + select IIO_TRIGGER 41 42 select IIO_KFIFO_BUF 42 43 help 43 44 Add buffered data capture to the simple dummy driver.
+1
drivers/staging/iio/magnetometer/hmc5843_core.c
··· 592 592 mutex_init(&data->lock); 593 593 594 594 indio_dev->dev.parent = dev; 595 + indio_dev->name = dev->driver->name; 595 596 indio_dev->info = &hmc5843_info; 596 597 indio_dev->modes = INDIO_DIRECT_MODE; 597 598 indio_dev->channels = data->variant->channels;
+5
drivers/tty/serial/fsl_lpuart.c
··· 921 921 writeb(val | UARTPFIFO_TXFE | UARTPFIFO_RXFE, 922 922 sport->port.membase + UARTPFIFO); 923 923 924 + /* explicitly clear RDRF */ 925 + readb(sport->port.membase + UARTSR1); 926 + 924 927 /* flush Tx and Rx FIFO */ 925 928 writeb(UARTCFIFO_TXFLUSH | UARTCFIFO_RXFLUSH, 926 929 sport->port.membase + UARTCFIFO); ··· 1078 1075 1079 1076 sport->txfifo_size = 0x1 << (((temp >> UARTPFIFO_TXSIZE_OFF) & 1080 1077 UARTPFIFO_FIFOSIZE_MASK) + 1); 1078 + 1079 + sport->port.fifosize = sport->txfifo_size; 1081 1080 1082 1081 sport->rxfifo_size = 0x1 << (((temp >> UARTPFIFO_RXSIZE_OFF) & 1083 1082 UARTPFIFO_FIFOSIZE_MASK) + 1);
+1
drivers/tty/serial/samsung.c
··· 963 963 free_irq(ourport->tx_irq, ourport); 964 964 tx_enabled(port) = 0; 965 965 ourport->tx_claimed = 0; 966 + ourport->tx_mode = 0; 966 967 } 967 968 968 969 if (ourport->rx_claimed) {
+8 -1
drivers/usb/host/xhci-hub.c
··· 387 387 status = PORT_PLC; 388 388 port_change_bit = "link state"; 389 389 break; 390 + case USB_PORT_FEAT_C_PORT_CONFIG_ERROR: 391 + status = PORT_CEC; 392 + port_change_bit = "config error"; 393 + break; 390 394 default: 391 395 /* Should never happen */ 392 396 return; ··· 592 588 status |= USB_PORT_STAT_C_LINK_STATE << 16; 593 589 if ((raw_port_status & PORT_WRC)) 594 590 status |= USB_PORT_STAT_C_BH_RESET << 16; 591 + if ((raw_port_status & PORT_CEC)) 592 + status |= USB_PORT_STAT_C_CONFIG_ERROR << 16; 595 593 } 596 594 597 595 if (hcd->speed != HCD_USB3) { ··· 1011 1005 case USB_PORT_FEAT_C_OVER_CURRENT: 1012 1006 case USB_PORT_FEAT_C_ENABLE: 1013 1007 case USB_PORT_FEAT_C_PORT_LINK_STATE: 1008 + case USB_PORT_FEAT_C_PORT_CONFIG_ERROR: 1014 1009 xhci_clear_port_change_bit(xhci, wValue, wIndex, 1015 1010 port_array[wIndex], temp); 1016 1011 break; ··· 1076 1069 */ 1077 1070 status = bus_state->resuming_ports; 1078 1071 1079 - mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC; 1072 + mask = PORT_CSC | PORT_PEC | PORT_OCC | PORT_PLC | PORT_WRC | PORT_CEC; 1080 1073 1081 1074 spin_lock_irqsave(&xhci->lock, flags); 1082 1075 /* For each port, did anything change? If so, set that bit in buf. */
+1 -1
drivers/usb/host/xhci-pci.c
··· 115 115 if (pdev->vendor == PCI_VENDOR_ID_INTEL) { 116 116 xhci->quirks |= XHCI_LPM_SUPPORT; 117 117 xhci->quirks |= XHCI_INTEL_HOST; 118 + xhci->quirks |= XHCI_AVOID_BEI; 118 119 } 119 120 if (pdev->vendor == PCI_VENDOR_ID_INTEL && 120 121 pdev->device == PCI_DEVICE_ID_INTEL_PANTHERPOINT_XHCI) { ··· 131 130 * PPT chipsets. 132 131 */ 133 132 xhci->quirks |= XHCI_SPURIOUS_REBOOT; 134 - xhci->quirks |= XHCI_AVOID_BEI; 135 133 } 136 134 if (pdev->vendor == PCI_VENDOR_ID_INTEL && 137 135 pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI) {
+1 -1
drivers/usb/isp1760/isp1760-udc.c
··· 1203 1203 1204 1204 if (udc->driver) { 1205 1205 dev_err(udc->isp->dev, "UDC already has a gadget driver\n"); 1206 - spin_unlock(&udc->lock); 1206 + spin_unlock_irqrestore(&udc->lock, flags); 1207 1207 return -EBUSY; 1208 1208 } 1209 1209
+7 -2
drivers/usb/serial/ftdi_sio.c
··· 604 604 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 605 605 { USB_DEVICE(FTDI_VID, FTDI_NT_ORIONLXM_PID), 606 606 .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk }, 607 + { USB_DEVICE(FTDI_VID, FTDI_SYNAPSE_SS200_PID) }, 607 608 /* 608 609 * ELV devices: 609 610 */ ··· 1884 1883 { 1885 1884 struct usb_device *udev = serial->dev; 1886 1885 1887 - if ((udev->manufacturer && !strcmp(udev->manufacturer, "CALAO Systems")) || 1888 - (udev->product && !strcmp(udev->product, "BeagleBone/XDS100V2"))) 1886 + if (udev->manufacturer && !strcmp(udev->manufacturer, "CALAO Systems")) 1887 + return ftdi_jtag_probe(serial); 1888 + 1889 + if (udev->product && 1890 + (!strcmp(udev->product, "BeagleBone/XDS100V2") || 1891 + !strcmp(udev->product, "SNAP Connect E10"))) 1889 1892 return ftdi_jtag_probe(serial); 1890 1893 1891 1894 return 0;
+6
drivers/usb/serial/ftdi_sio_ids.h
··· 561 561 */ 562 562 #define FTDI_NT_ORIONLXM_PID 0x7c90 /* OrionLXm Substation Automation Platform */ 563 563 564 + /* 565 + * Synapse Wireless product ids (FTDI_VID) 566 + * http://www.synapse-wireless.com 567 + */ 568 + #define FTDI_SYNAPSE_SS200_PID 0x9090 /* SS200 - SNAP Stick 200 */ 569 + 564 570 565 571 /********************************/ 566 572 /** third-party VID/PID combos **/
+3
drivers/usb/serial/keyspan_pda.c
··· 61 61 /* For Xircom PGSDB9 and older Entrega version of the same device */ 62 62 #define XIRCOM_VENDOR_ID 0x085a 63 63 #define XIRCOM_FAKE_ID 0x8027 64 + #define XIRCOM_FAKE_ID_2 0x8025 /* "PGMFHUB" serial */ 64 65 #define ENTREGA_VENDOR_ID 0x1645 65 66 #define ENTREGA_FAKE_ID 0x8093 66 67 ··· 71 70 #endif 72 71 #ifdef XIRCOM 73 72 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) }, 73 + { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) }, 74 74 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) }, 75 75 #endif 76 76 { USB_DEVICE(KEYSPAN_VENDOR_ID, KEYSPAN_PDA_ID) }, ··· 95 93 #ifdef XIRCOM 96 94 static const struct usb_device_id id_table_fake_xircom[] = { 97 95 { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID) }, 96 + { USB_DEVICE(XIRCOM_VENDOR_ID, XIRCOM_FAKE_ID_2) }, 98 97 { USB_DEVICE(ENTREGA_VENDOR_ID, ENTREGA_FAKE_ID) }, 99 98 { } 100 99 };
+17
drivers/xen/Kconfig
···   55 55 
 56 56 In that case step 3 should be omitted. 
 57 57 
 58 + config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
 59 + int "Hotplugged memory limit (in GiB) for a PV guest"
 60 + default 512 if X86_64
 61 + default 4 if X86_32
 62 + range 0 64 if X86_32
 63 + depends on XEN_HAVE_PVMMU
 64 + depends on XEN_BALLOON_MEMORY_HOTPLUG
 65 + help
 66 + Maximum amount of memory (in GiB) that a PV guest can be
 67 + expanded to when using memory hotplug.
 68 + 
 69 + A PV guest can have more memory than this limit if it is
 70 + started with a larger maximum.
 71 + 
 72 + This value is used to allocate enough space in internal
 73 + tables needed for physical memory administration.
 74 + 
 58 75 config XEN_SCRUB_PAGES
 59 76 bool "Scrub pages before returning them to system"
 60 77 depends on XEN_BALLOON
+23
drivers/xen/balloon.c
··· 229 229 balloon_hotplug = round_up(balloon_hotplug, PAGES_PER_SECTION); 230 230 nid = memory_add_physaddr_to_nid(hotplug_start_paddr); 231 231 232 + #ifdef CONFIG_XEN_HAVE_PVMMU 233 + /* 234 + * add_memory() will build page tables for the new memory so 235 + * the p2m must contain invalid entries so the correct 236 + * non-present PTEs will be written. 237 + * 238 + * If a failure occurs, the original (identity) p2m entries 239 + * are not restored since this region is now known not to 240 + * conflict with any devices. 241 + */ 242 + if (!xen_feature(XENFEAT_auto_translated_physmap)) { 243 + unsigned long pfn, i; 244 + 245 + pfn = PFN_DOWN(hotplug_start_paddr); 246 + for (i = 0; i < balloon_hotplug; i++) { 247 + if (!set_phys_to_machine(pfn + i, INVALID_P2M_ENTRY)) { 248 + pr_warn("set_phys_to_machine() failed, no memory added\n"); 249 + return BP_ECANCELED; 250 + } 251 + } 252 + } 253 + #endif 254 + 232 255 rc = add_memory(nid, hotplug_start_paddr, balloon_hotplug << PAGE_SHIFT); 233 256 234 257 if (rc) {
+5 -1
fs/cifs/cifsencrypt.c
··· 1 1 /* 2 2 * fs/cifs/cifsencrypt.c 3 3 * 4 + * Encryption and hashing operations relating to NTLM, NTLMv2. See MS-NLMP 5 + * for more detailed information 6 + * 4 7 * Copyright (C) International Business Machines Corp., 2005,2013 5 8 * Author(s): Steve French (sfrench@us.ibm.com) 6 9 * ··· 518 515 __func__); 519 516 return rc; 520 517 } 521 - } else if (ses->serverName) { 518 + } else { 519 + /* We use ses->serverName if no domain name available */ 522 520 len = strlen(ses->serverName); 523 521 524 522 server = kmalloc(2 + (len * 2), GFP_KERNEL);
+11 -2
fs/cifs/connect.c
··· 1599 1599 pr_warn("CIFS: username too long\n"); 1600 1600 goto cifs_parse_mount_err; 1601 1601 } 1602 + 1603 + kfree(vol->username); 1602 1604 vol->username = kstrdup(string, GFP_KERNEL); 1603 1605 if (!vol->username) 1604 1606 goto cifs_parse_mount_err; ··· 1702 1700 goto cifs_parse_mount_err; 1703 1701 } 1704 1702 1703 + kfree(vol->domainname); 1705 1704 vol->domainname = kstrdup(string, GFP_KERNEL); 1706 1705 if (!vol->domainname) { 1707 1706 pr_warn("CIFS: no memory for domainname\n"); ··· 1734 1731 } 1735 1732 1736 1733 if (strncasecmp(string, "default", 7) != 0) { 1734 + kfree(vol->iocharset); 1737 1735 vol->iocharset = kstrdup(string, 1738 1736 GFP_KERNEL); 1739 1737 if (!vol->iocharset) { ··· 2917 2913 * calling name ends in null (byte 16) from old smb 2918 2914 * convention. 2919 2915 */ 2920 - if (server->workstation_RFC1001_name && 2921 - server->workstation_RFC1001_name[0] != 0) 2916 + if (server->workstation_RFC1001_name[0] != 0) 2922 2917 rfc1002mangle(ses_init_buf->trailer. 2923 2918 session_req.calling_name, 2924 2919 server->workstation_RFC1001_name, ··· 3695 3692 #endif /* CIFS_WEAK_PW_HASH */ 3696 3693 rc = SMBNTencrypt(tcon->password, ses->server->cryptkey, 3697 3694 bcc_ptr, nls_codepage); 3695 + if (rc) { 3696 + cifs_dbg(FYI, "%s Can't generate NTLM rsp. Error: %d\n", 3697 + __func__, rc); 3698 + cifs_buf_release(smb_buffer); 3699 + return rc; 3700 + } 3698 3701 3699 3702 bcc_ptr += CIFS_AUTH_RESP_SIZE; 3700 3703 if (ses->capabilities & CAP_UNICODE) {
+1
fs/cifs/file.c
··· 1823 1823 cifsFileInfo_put(inv_file); 1824 1824 spin_lock(&cifs_file_list_lock); 1825 1825 ++refind; 1826 + inv_file = NULL; 1826 1827 goto refind_writable; 1827 1828 } 1828 1829 }
+2
fs/cifs/inode.c
··· 771 771 cifs_buf_release(srchinf->ntwrk_buf_start); 772 772 } 773 773 kfree(srchinf); 774 + if (rc) 775 + goto cgii_exit; 774 776 } else 775 777 goto cgii_exit; 776 778
+1 -1
fs/cifs/smb2misc.c
··· 322 322 323 323 /* return pointer to beginning of data area, ie offset from SMB start */ 324 324 if ((*off != 0) && (*len != 0)) 325 - return hdr->ProtocolId + *off; 325 + return (char *)(&hdr->ProtocolId[0]) + *off; 326 326 else 327 327 return NULL; 328 328 }
+2 -1
fs/cifs/smb2ops.c
··· 684 684 685 685 /* No need to change MaxChunks since already set to 1 */ 686 686 chunk_sizes_updated = true; 687 - } 687 + } else 688 + goto cchunk_out; 688 689 } 689 690 690 691 cchunk_out:
+10 -7
fs/cifs/smb2pdu.c
··· 1218 1218 struct smb2_ioctl_req *req; 1219 1219 struct smb2_ioctl_rsp *rsp; 1220 1220 struct TCP_Server_Info *server; 1221 - struct cifs_ses *ses = tcon->ses; 1221 + struct cifs_ses *ses; 1222 1222 struct kvec iov[2]; 1223 1223 int resp_buftype; 1224 1224 int num_iovecs; ··· 1232 1232 /* zero out returned data len, in case of error */ 1233 1233 if (plen) 1234 1234 *plen = 0; 1235 + 1236 + if (tcon) 1237 + ses = tcon->ses; 1238 + else 1239 + return -EIO; 1235 1240 1236 1241 if (ses && (ses->server)) 1237 1242 server = ses->server; ··· 1301 1296 rsp = (struct smb2_ioctl_rsp *)iov[0].iov_base; 1302 1297 1303 1298 if ((rc != 0) && (rc != -EINVAL)) { 1304 - if (tcon) 1305 - cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1299 + cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1306 1300 goto ioctl_exit; 1307 1301 } else if (rc == -EINVAL) { 1308 1302 if ((opcode != FSCTL_SRV_COPYCHUNK_WRITE) && 1309 1303 (opcode != FSCTL_SRV_COPYCHUNK)) { 1310 - if (tcon) 1311 - cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1304 + cifs_stats_fail_inc(tcon, SMB2_IOCTL_HE); 1312 1305 goto ioctl_exit; 1313 1306 } 1314 1307 } ··· 1632 1629 1633 1630 rc = SendReceive2(xid, ses, iov, 1, &resp_buftype, 0); 1634 1631 1635 - if ((rc != 0) && tcon) 1632 + if (rc != 0) 1636 1633 cifs_stats_fail_inc(tcon, SMB2_FLUSH_HE); 1637 1634 1638 1635 free_rsp_buf(resp_buftype, iov[0].iov_base); ··· 2117 2114 struct kvec iov[2]; 2118 2115 int rc = 0; 2119 2116 int len; 2120 - int resp_buftype; 2117 + int resp_buftype = CIFS_NO_BUFFER; 2121 2118 unsigned char *bufptr; 2122 2119 struct TCP_Server_Info *server; 2123 2120 struct cifs_ses *ses = tcon->ses;
+83 -10
fs/fs-writeback.c
···   53 53 struct completion *done; /* set if the caller waits */
 54 54 };
 55 55 
 56 + /*
 57 + * If an inode is constantly having its pages dirtied, but then the
 58 + * updates stop dirtytime_expire_interval seconds in the past, it's
 59 + * possible for the worst case time between when an inode has its
 60 + * timestamps updated and when they finally get written out to be two
 61 + * dirtytime_expire_intervals. We set the default to 12 hours (in
 62 + * seconds), which means most of the time inodes will have their
 63 + * timestamps written to disk after 12 hours, but in the worst case a
 64 + * few inodes might not have their timestamps updated for 24 hours.
 65 + */
 66 + unsigned int dirtytime_expire_interval = 12 * 60 * 60;
 67 + 
 56 68 /**
 57 69 * writeback_in_progress - determine whether there is writeback in progress
 58 70 * @bdi: the device's backing_dev_info structure.
···   287 275 
 288 276 if ((flags & EXPIRE_DIRTY_ATIME) == 0)
 289 277 older_than_this = work->older_than_this;
 290 - else if ((work->reason == WB_REASON_SYNC) == 0) {
 291 - expire_time = jiffies - (HZ * 86400);
 278 + else if (!work->for_sync) {
 279 + expire_time = jiffies - (dirtytime_expire_interval * HZ);
 292 280 older_than_this = &expire_time;
 293 281 }
 294 282 while (!list_empty(delaying_queue)) {
···   470 458 */
 471 459 redirty_tail(inode, wb);
 472 460 } else if (inode->i_state & I_DIRTY_TIME) {
 461 + inode->dirtied_when = jiffies;
 473 462 list_move(&inode->i_wb_list, &wb->b_dirty_time);
 474 463 } else {
 475 464 /* The inode is clean. Remove from writeback lists. */
···   518 505 spin_lock(&inode->i_lock);
 519 506 
 520 507 dirty = inode->i_state & I_DIRTY;
 521 - if (((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) &&
 522 - (inode->i_state & I_DIRTY_TIME)) ||
 523 - (inode->i_state & I_DIRTY_TIME_EXPIRED)) {
 524 - dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED;
 525 - trace_writeback_lazytime(inode);
 526 - }
 508 + if (inode->i_state & I_DIRTY_TIME) {
 509 + if ((dirty & (I_DIRTY_SYNC | I_DIRTY_DATASYNC)) ||
 510 + unlikely(inode->i_state & I_DIRTY_TIME_EXPIRED) ||
 511 + unlikely(time_after(jiffies,
 512 + (inode->dirtied_time_when +
 513 + dirtytime_expire_interval * HZ)))) {
 514 + dirty |= I_DIRTY_TIME | I_DIRTY_TIME_EXPIRED;
 515 + trace_writeback_lazytime(inode);
 516 + }
 517 + } else
 518 + inode->i_state &= ~I_DIRTY_TIME_EXPIRED;
 527 519 inode->i_state &= ~dirty;
 528 520 
 529 521 /*
···   1149 1131 rcu_read_unlock();
 1150 1132 }
 1151 1133 
 1134 + /*
 1135 + * Wake up bdi's periodically to make sure dirtytime inodes get
 1136 + * written back periodically. We deliberately do *not* check the
 1137 + * b_dirtytime list in wb_has_dirty_io(), since this would cause the
 1138 + * kernel to be constantly waking up once there are any dirtytime
 1139 + * inodes on the system. So instead we define a separate delayed work
 1140 + * function which gets called much more rarely. (By default, only
 1141 + * once every 12 hours.)
 1142 + *
 1143 + * If there is any other write activity going on in the file system,
 1144 + * this function won't be necessary. But if the only thing that has
 1145 + * happened on the file system is a dirtytime inode caused by an atime
 1146 + * update, we need this infrastructure below to make sure that inode
 1147 + * eventually gets pushed out to disk.
1148 + */ 1149 + static void wakeup_dirtytime_writeback(struct work_struct *w); 1150 + static DECLARE_DELAYED_WORK(dirtytime_work, wakeup_dirtytime_writeback); 1151 + 1152 + static void wakeup_dirtytime_writeback(struct work_struct *w) 1153 + { 1154 + struct backing_dev_info *bdi; 1155 + 1156 + rcu_read_lock(); 1157 + list_for_each_entry_rcu(bdi, &bdi_list, bdi_list) { 1158 + if (list_empty(&bdi->wb.b_dirty_time)) 1159 + continue; 1160 + bdi_wakeup_thread(bdi); 1161 + } 1162 + rcu_read_unlock(); 1163 + schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ); 1164 + } 1165 + 1166 + static int __init start_dirtytime_writeback(void) 1167 + { 1168 + schedule_delayed_work(&dirtytime_work, dirtytime_expire_interval * HZ); 1169 + return 0; 1170 + } 1171 + __initcall(start_dirtytime_writeback); 1172 + 1173 + int dirtytime_interval_handler(struct ctl_table *table, int write, 1174 + void __user *buffer, size_t *lenp, loff_t *ppos) 1175 + { 1176 + int ret; 1177 + 1178 + ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); 1179 + if (ret == 0 && write) 1180 + mod_delayed_work(system_wq, &dirtytime_work, 0); 1181 + return ret; 1182 + } 1183 + 1152 1184 static noinline void block_dump___mark_inode_dirty(struct inode *inode) 1153 1185 { 1154 1186 if (inode->i_ino || strcmp(inode->i_sb->s_id, "bdev")) { ··· 1337 1269 } 1338 1270 1339 1271 inode->dirtied_when = jiffies; 1340 - list_move(&inode->i_wb_list, dirtytime ? 1341 - &bdi->wb.b_dirty_time : &bdi->wb.b_dirty); 1272 + if (dirtytime) 1273 + inode->dirtied_time_when = jiffies; 1274 + if (inode->i_state & (I_DIRTY_INODE | I_DIRTY_PAGES)) 1275 + list_move(&inode->i_wb_list, &bdi->wb.b_dirty); 1276 + else 1277 + list_move(&inode->i_wb_list, 1278 + &bdi->wb.b_dirty_time); 1342 1279 spin_unlock(&bdi->wb.list_lock); 1343 1280 trace_writeback_dirty_inode_enqueue(inode); 1344 1281
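Condensed, the writeback change above boils down to one predicate: an inode that is dirty only for timestamps gets written once it has sat for dirtytime_expire_interval seconds. A minimal sketch (the helper name dirtytime_expired() is invented for illustration):

    static bool dirtytime_expired(struct inode *inode)
    {
            return time_after(jiffies,
                              inode->dirtied_time_when +
                              dirtytime_expire_interval * HZ);
    }

The dirtytime_work delayed work above exists only so that some flusher actually runs and evaluates this condition even on an otherwise idle filesystem.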
+2 -3
fs/locks.c
··· 1388 1388 int __break_lease(struct inode *inode, unsigned int mode, unsigned int type) 1389 1389 { 1390 1390 int error = 0; 1391 - struct file_lock *new_fl; 1392 1391 struct file_lock_context *ctx = inode->i_flctx; 1393 - struct file_lock *fl; 1392 + struct file_lock *new_fl, *fl, *tmp; 1394 1393 unsigned long break_time; 1395 1394 int want_write = (mode & O_ACCMODE) != O_RDONLY; 1396 1395 LIST_HEAD(dispose); ··· 1419 1420 break_time++; /* so that 0 means no break time */ 1420 1421 } 1421 1422 1422 - list_for_each_entry(fl, &ctx->flc_lease, fl_list) { 1423 + list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) { 1423 1424 if (!leases_conflict(fl, new_fl)) 1424 1425 continue; 1425 1426 if (want_write) {
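The conversion to list_for_each_entry_safe() is the standard fix when the loop body may unlink the entry being visited; the _safe variant latches the next pointer before running the body. A generic sketch of the idiom (should_remove() is a made-up predicate, not locks.c code):

    struct file_lock *fl, *tmp;

    list_for_each_entry_safe(fl, tmp, &ctx->flc_lease, fl_list) {
            if (should_remove(fl))
                    list_del_init(&fl->fl_list);  /* safe: next entry cached */
    }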
+1 -1
fs/nfsd/blocklayout.c
··· 137 137 seg->offset = iomap.offset; 138 138 seg->length = iomap.length; 139 139 140 - dprintk("GET: %lld:%lld %d\n", bex->foff, bex->len, bex->es); 140 + dprintk("GET: 0x%llx:0x%llx %d\n", bex->foff, bex->len, bex->es); 141 141 return 0; 142 142 143 143 out_error:
+3 -3
fs/nfsd/blocklayoutxdr.c
··· 122 122 123 123 p = xdr_decode_hyper(p, &bex.foff); 124 124 if (bex.foff & (block_size - 1)) { 125 - dprintk("%s: unaligned offset %lld\n", 125 + dprintk("%s: unaligned offset 0x%llx\n", 126 126 __func__, bex.foff); 127 127 goto fail; 128 128 } 129 129 p = xdr_decode_hyper(p, &bex.len); 130 130 if (bex.len & (block_size - 1)) { 131 - dprintk("%s: unaligned length %lld\n", 131 + dprintk("%s: unaligned length 0x%llx\n", 132 132 __func__, bex.foff); 133 133 goto fail; 134 134 } 135 135 p = xdr_decode_hyper(p, &bex.soff); 136 136 if (bex.soff & (block_size - 1)) { 137 - dprintk("%s: unaligned disk offset %lld\n", 137 + dprintk("%s: unaligned disk offset 0x%llx\n", 138 138 __func__, bex.soff); 139 139 goto fail; 140 140 }
+8 -4
fs/nfsd/nfs4layouts.c
··· 118 118 { 119 119 struct super_block *sb = exp->ex_path.mnt->mnt_sb; 120 120 121 - if (exp->ex_flags & NFSEXP_NOPNFS) 121 + if (!(exp->ex_flags & NFSEXP_PNFS)) 122 122 return; 123 123 124 124 if (sb->s_export_op->get_uuid && ··· 440 440 list_move_tail(&lp->lo_perstate, reaplist); 441 441 return; 442 442 } 443 - end = seg->offset; 443 + lo->offset = layout_end(seg); 444 444 } else { 445 445 /* retain the whole layout segment on a split. */ 446 446 if (layout_end(seg) < end) { 447 447 dprintk("%s: split not supported\n", __func__); 448 448 return; 449 449 } 450 - 451 - lo->offset = layout_end(seg); 450 + end = seg->offset; 452 451 } 453 452 454 453 layout_update_len(lo, end); ··· 512 513 513 514 spin_lock(&clp->cl_lock); 514 515 list_for_each_entry_safe(ls, n, &clp->cl_lo_states, ls_perclnt) { 516 + if (ls->ls_layout_type != lrp->lr_layout_type) 517 + continue; 518 + 515 519 if (lrp->lr_return_type == RETURN_FSID && 516 520 !fh_fsid_match(&ls->ls_stid.sc_file->fi_fhandle, 517 521 &cstate->current_fh.fh_handle)) ··· 588 586 int error; 589 587 590 588 rpc_ntop((struct sockaddr *)&clp->cl_addr, addr_str, sizeof(addr_str)); 589 + 590 + trace_layout_recall_fail(&ls->ls_stid.sc_stateid); 591 591 592 592 printk(KERN_WARNING 593 593 "nfsd: client %s failed to respond to layout recall. "
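The swapped assignments above are easier to follow with numbers; a sketch assuming a layout stateid currently covering offsets [0, 100):

    /* client returns the head segment [0, 40):
     *     lo->offset = layout_end(seg);  -> layout now covers [40, 100)
     *
     * client returns the tail segment [60, 100):
     *     end = seg->offset;             -> layout now covers [0, 60)
     *
     * before the fix, each branch performed the other one's update
     */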
+1 -1
fs/nfsd/nfs4proc.c
··· 1237 1237 nfserr = ops->proc_getdeviceinfo(exp->ex_path.mnt->mnt_sb, gdp); 1238 1238 1239 1239 gdp->gd_notify_types &= ops->notify_types; 1240 - exp_put(exp); 1241 1240 out: 1241 + exp_put(exp); 1242 1242 return nfserr; 1243 1243 } 1244 1244
+2 -2
fs/nfsd/nfs4state.c
··· 3221 3221 } else 3222 3222 nfs4_free_openowner(&oo->oo_owner); 3223 3223 spin_unlock(&clp->cl_lock); 3224 - return oo; 3224 + return ret; 3225 3225 } 3226 3226 3227 3227 static void init_open_stateid(struct nfs4_ol_stateid *stp, struct nfs4_file *fp, struct nfsd4_open *open) { ··· 5062 5062 } else 5063 5063 nfs4_free_lockowner(&lo->lo_owner); 5064 5064 spin_unlock(&clp->cl_lock); 5065 - return lo; 5065 + return ret; 5066 5066 } 5067 5067 5068 5068 static void
+16 -4
fs/nfsd/nfs4xdr.c
··· 1562 1562 p = xdr_decode_hyper(p, &lgp->lg_seg.offset); 1563 1563 p = xdr_decode_hyper(p, &lgp->lg_seg.length); 1564 1564 p = xdr_decode_hyper(p, &lgp->lg_minlength); 1565 - nfsd4_decode_stateid(argp, &lgp->lg_sid); 1565 + 1566 + status = nfsd4_decode_stateid(argp, &lgp->lg_sid); 1567 + if (status) 1568 + return status; 1569 + 1566 1570 READ_BUF(4); 1567 1571 lgp->lg_maxcount = be32_to_cpup(p++); 1568 1572 ··· 1584 1580 p = xdr_decode_hyper(p, &lcp->lc_seg.offset); 1585 1581 p = xdr_decode_hyper(p, &lcp->lc_seg.length); 1586 1582 lcp->lc_reclaim = be32_to_cpup(p++); 1587 - nfsd4_decode_stateid(argp, &lcp->lc_sid); 1583 + 1584 + status = nfsd4_decode_stateid(argp, &lcp->lc_sid); 1585 + if (status) 1586 + return status; 1587 + 1588 1588 READ_BUF(4); 1589 1589 lcp->lc_newoffset = be32_to_cpup(p++); 1590 1590 if (lcp->lc_newoffset) { ··· 1636 1628 READ_BUF(16); 1637 1629 p = xdr_decode_hyper(p, &lrp->lr_seg.offset); 1638 1630 p = xdr_decode_hyper(p, &lrp->lr_seg.length); 1639 - nfsd4_decode_stateid(argp, &lrp->lr_sid); 1631 + 1632 + status = nfsd4_decode_stateid(argp, &lrp->lr_sid); 1633 + if (status) 1634 + return status; 1635 + 1640 1636 READ_BUF(4); 1641 1637 lrp->lrf_body_len = be32_to_cpup(p++); 1642 1638 if (lrp->lrf_body_len > 0) { ··· 4135 4123 return nfserr_resource; 4136 4124 *p++ = cpu_to_be32(lrp->lrs_present); 4137 4125 if (lrp->lrs_present) 4138 - nfsd4_encode_stateid(xdr, &lrp->lr_sid); 4126 + return nfsd4_encode_stateid(xdr, &lrp->lr_sid); 4139 4127 return nfs_ok; 4140 4128 } 4141 4129 #endif /* CONFIG_NFSD_PNFS */
+5 -1
fs/nfsd/nfscache.c
··· 165 165 { 166 166 unsigned int hashsize; 167 167 unsigned int i; 168 + int status = 0; 168 169 169 170 max_drc_entries = nfsd_cache_size_limit(); 170 171 atomic_set(&num_drc_entries, 0); 171 172 hashsize = nfsd_hashsize(max_drc_entries); 172 173 maskbits = ilog2(hashsize); 173 174 174 - register_shrinker(&nfsd_reply_cache_shrinker); 175 + status = register_shrinker(&nfsd_reply_cache_shrinker); 176 + if (status) 177 + return status; 178 + 175 179 drc_slab = kmem_cache_create("nfsd_drc", sizeof(struct svc_cacherep), 176 180 0, 0, NULL); 177 181 if (!drc_slab)
+1
include/linux/fs.h
··· 604 604 struct mutex i_mutex; 605 605 606 606 unsigned long dirtied_when; /* jiffies of first dirtying */ 607 + unsigned long dirtied_time_when; 607 608 608 609 struct hlist_node i_hash; 609 610 struct list_head i_wb_list; /* backing dev IO list */
+17
include/linux/irqchip/arm-gic-v3.h
··· 126 126 #define GICR_PROPBASER_WaWb (5U << 7) 127 127 #define GICR_PROPBASER_RaWaWt (6U << 7) 128 128 #define GICR_PROPBASER_RaWaWb (7U << 7) 129 + #define GICR_PROPBASER_CACHEABILITY_MASK (7U << 7) 129 130 #define GICR_PROPBASER_IDBITS_MASK (0x1f) 131 + 132 + #define GICR_PENDBASER_NonShareable (0U << 10) 133 + #define GICR_PENDBASER_InnerShareable (1U << 10) 134 + #define GICR_PENDBASER_OuterShareable (2U << 10) 135 + #define GICR_PENDBASER_SHAREABILITY_MASK (3UL << 10) 136 + #define GICR_PENDBASER_nCnB (0U << 7) 137 + #define GICR_PENDBASER_nC (1U << 7) 138 + #define GICR_PENDBASER_RaWt (2U << 7) 139 + #define GICR_PENDBASER_RaWb (3U << 7) 140 + #define GICR_PENDBASER_WaWt (4U << 7) 141 + #define GICR_PENDBASER_WaWb (5U << 7) 142 + #define GICR_PENDBASER_RaWaWt (6U << 7) 143 + #define GICR_PENDBASER_RaWaWb (7U << 7) 144 + #define GICR_PENDBASER_CACHEABILITY_MASK (7U << 7) 130 145 131 146 /* 132 147 * Re-Distributor registers, offsets from SGI_base ··· 197 182 #define GITS_CBASER_WaWb (5UL << 59) 198 183 #define GITS_CBASER_RaWaWt (6UL << 59) 199 184 #define GITS_CBASER_RaWaWb (7UL << 59) 185 + #define GITS_CBASER_CACHEABILITY_MASK (7UL << 59) 200 186 #define GITS_CBASER_NonShareable (0UL << 10) 201 187 #define GITS_CBASER_InnerShareable (1UL << 10) 202 188 #define GITS_CBASER_OuterShareable (2UL << 10) ··· 214 198 #define GITS_BASER_WaWb (5UL << 59) 215 199 #define GITS_BASER_RaWaWt (6UL << 59) 216 200 #define GITS_BASER_RaWaWb (7UL << 59) 201 + #define GITS_BASER_CACHEABILITY_MASK (7UL << 59) 217 202 #define GITS_BASER_TYPE_SHIFT (56) 218 203 #define GITS_BASER_TYPE(r) (((r) >> GITS_BASER_TYPE_SHIFT) & 7) 219 204 #define GITS_BASER_ENTRY_SIZE_SHIFT (48)
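The new GICR_PENDBASER_* attributes and *_CACHEABILITY_MASK definitions support the usual GICv3 write-then-read-back pattern: a redistributor may silently downgrade shareability or cacheability it does not implement, and the masks let the driver see what was actually accepted. A minimal sketch of the sort of thing a redistributor init path might do (pend_page and rbase are assumed to come from the caller):

    u64 val = page_to_phys(pend_page) |
              GICR_PENDBASER_InnerShareable |
              GICR_PENDBASER_RaWaWb;

    writeq_relaxed(val, rbase + GICR_PENDBASER);
    val = readq_relaxed(rbase + GICR_PENDBASER);

    if ((val & GICR_PENDBASER_SHAREABILITY_MASK) !=
        GICR_PENDBASER_InnerShareable)
            pr_warn("GICR_PENDBASER shareability downgraded\n");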
+1
include/linux/lcm.h
··· 4 4 #include <linux/compiler.h> 5 5 6 6 unsigned long lcm(unsigned long a, unsigned long b) __attribute_const__; 7 + unsigned long lcm_not_zero(unsigned long a, unsigned long b) __attribute_const__; 7 8 8 9 #endif /* _LCM_H */
+6
include/linux/netdevice.h
··· 2185 2185 void synchronize_net(void); 2186 2186 int init_dummy_netdev(struct net_device *dev); 2187 2187 2188 + DECLARE_PER_CPU(int, xmit_recursion); 2189 + static inline int dev_recursion_level(void) 2190 + { 2191 + return this_cpu_read(xmit_recursion); 2192 + } 2193 + 2188 2194 struct net_device *dev_get_by_index(struct net *net, int ifindex); 2189 2195 struct net_device *__dev_get_by_index(struct net *net, int ifindex); 2190 2196 struct net_device *dev_get_by_index_rcu(struct net *net, int ifindex);
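dev_recursion_level() reads the same per-cpu counter that the core transmit path maintains, so code invoked from inside a transmit can detect the nesting. Condensed from __dev_queue_xmit() in net/core/dev.c (locking and error handling trimmed):

    if (__this_cpu_read(xmit_recursion) > RECURSION_LIMIT)
            goto recursion_alert;       /* refuse to nest any deeper */

    __this_cpu_inc(xmit_recursion);
    skb = dev_hard_start_xmit(skb, dev, txq, &rc);
    __this_cpu_dec(xmit_recursion);

sk_mc_loop() in the net/core/sock.c hunk below uses dev_recursion_level() to bail out when it finds itself in that nested context.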
+9 -9
include/linux/sunrpc/debug.h
··· 60 60 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 61 61 void rpc_register_sysctl(void); 62 62 void rpc_unregister_sysctl(void); 63 - int sunrpc_debugfs_init(void); 63 + void sunrpc_debugfs_init(void); 64 64 void sunrpc_debugfs_exit(void); 65 - int rpc_clnt_debugfs_register(struct rpc_clnt *); 65 + void rpc_clnt_debugfs_register(struct rpc_clnt *); 66 66 void rpc_clnt_debugfs_unregister(struct rpc_clnt *); 67 - int rpc_xprt_debugfs_register(struct rpc_xprt *); 67 + void rpc_xprt_debugfs_register(struct rpc_xprt *); 68 68 void rpc_xprt_debugfs_unregister(struct rpc_xprt *); 69 69 #else 70 - static inline int 70 + static inline void 71 71 sunrpc_debugfs_init(void) 72 72 { 73 - return 0; 73 + return; 74 74 } 75 75 76 76 static inline void ··· 79 79 return; 80 80 } 81 81 82 - static inline int 82 + static inline void 83 83 rpc_clnt_debugfs_register(struct rpc_clnt *clnt) 84 84 { 85 - return 0; 85 + return; 86 86 } 87 87 88 88 static inline void ··· 91 91 return; 92 92 } 93 93 94 - static inline int 94 + static inline void 95 95 rpc_xprt_debugfs_register(struct rpc_xprt *xprt) 96 96 { 97 - return 0; 97 + return; 98 98 } 99 99 100 100 static inline void
+15 -1
include/linux/usb/usbnet.h
··· 227 227 struct urb *urb; 228 228 struct usbnet *dev; 229 229 enum skb_state state; 230 - size_t length; 230 + long length; 231 + unsigned long packets; 231 232 }; 233 + 234 + /* Drivers that set FLAG_MULTI_PACKET must call this in their 235 + * tx_fixup method before returning an skb. 236 + */ 237 + static inline void 238 + usbnet_set_skb_tx_stats(struct sk_buff *skb, 239 + unsigned long packets, long bytes_delta) 240 + { 241 + struct skb_data *entry = (struct skb_data *) skb->cb; 242 + 243 + entry->packets = packets; 244 + entry->length = bytes_delta; 245 + } 232 246 233 247 extern int usbnet_open(struct net_device *net); 234 248 extern int usbnet_stop(struct net_device *net);
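From a driver author's point of view the contract is: set FLAG_MULTI_PACKET, then call the helper from tx_fixup() before returning the skb. A minimal hypothetical tx_fixup (foo_tx_fixup() and its framing step are invented for illustration):

    static struct sk_buff *foo_tx_fixup(struct usbnet *dev,
                                        struct sk_buff *skb, gfp_t flags)
    {
            int payload = skb->len;

            /* ... prepend device framing header, pad to endpoint size ... */

            /* one packet in this URB; the negative delta cancels the
             * framing overhead so tx_bytes counts payload only
             */
            usbnet_set_skb_tx_stats(skb, 1, payload - skb->len);
            return skb;
    }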
+3
include/linux/writeback.h
··· 130 130 extern unsigned long vm_dirty_bytes; 131 131 extern unsigned int dirty_writeback_interval; 132 132 extern unsigned int dirty_expire_interval; 133 + extern unsigned int dirtytime_expire_interval; 133 134 extern int vm_highmem_is_dirtyable; 134 135 extern int block_dump; 135 136 extern int laptop_mode; ··· 147 146 extern int dirty_bytes_handler(struct ctl_table *table, int write, 148 147 void __user *buffer, size_t *lenp, 149 148 loff_t *ppos); 149 + int dirtytime_interval_handler(struct ctl_table *table, int write, 150 + void __user *buffer, size_t *lenp, loff_t *ppos); 150 151 151 152 struct ctl_table; 152 153 int dirty_writeback_centisecs_handler(struct ctl_table *, int,
-16
include/net/ip.h
··· 453 453 454 454 #endif 455 455 456 - static inline int sk_mc_loop(struct sock *sk) 457 - { 458 - if (!sk) 459 - return 1; 460 - switch (sk->sk_family) { 461 - case AF_INET: 462 - return inet_sk(sk)->mc_loop; 463 - #if IS_ENABLED(CONFIG_IPV6) 464 - case AF_INET6: 465 - return inet6_sk(sk)->mc_loop; 466 - #endif 467 - } 468 - WARN_ON(1); 469 - return 1; 470 - } 471 - 472 456 bool ip_call_ra_chain(struct sk_buff *skb); 473 457 474 458 /*
+2 -1
include/net/ip6_route.h
··· 174 174 175 175 static inline int ip6_skb_dst_mtu(struct sk_buff *skb) 176 176 { 177 - struct ipv6_pinfo *np = skb->sk ? inet6_sk(skb->sk) : NULL; 177 + struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ? 178 + inet6_sk(skb->sk) : NULL; 178 179 179 180 return (np && np->pmtudisc >= IPV6_PMTUDISC_PROBE) ? 180 181 skb_dst(skb)->dev->mtu : dst_mtu(skb_dst(skb));
+2
include/net/sock.h
··· 1762 1762 1763 1763 struct dst_entry *sk_dst_check(struct sock *sk, u32 cookie); 1764 1764 1765 + bool sk_mc_loop(struct sock *sk); 1766 + 1765 1767 static inline bool sk_can_gso(const struct sock *sk) 1766 1768 { 1767 1769 return net_gso_ok(sk->sk_route_caps, sk->sk_gso_type);
+2 -1
include/uapi/linux/input.h
··· 973 973 */ 974 974 #define MT_TOOL_FINGER 0 975 975 #define MT_TOOL_PEN 1 976 - #define MT_TOOL_MAX 1 976 + #define MT_TOOL_PALM 2 977 + #define MT_TOOL_MAX 2 977 978 978 979 /* 979 980 * Values describing the status of a force-feedback effect
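A driver that can classify contacts can report the new tool type per slot; on a type B multi-touch device this goes through input_mt_report_slot_state(). A small sketch (input, slot, is_palm, x and y are assumed to come from the driver's report handling):

    input_mt_slot(input, slot);
    input_mt_report_slot_state(input,
                               is_palm ? MT_TOOL_PALM : MT_TOOL_FINGER,
                               true);
    input_report_abs(input, ABS_MT_POSITION_X, x);
    input_report_abs(input, ABS_MT_POSITION_Y, y);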
+1 -1
include/uapi/linux/nfsd/export.h
··· 47 47 * exported filesystem. 48 48 */ 49 49 #define NFSEXP_V4ROOT 0x10000 50 - #define NFSEXP_NOPNFS 0x20000 50 + #define NFSEXP_PNFS 0x20000 51 51 52 52 /* All flags that we claim to support. (Note we don't support NOACL.) */ 53 53 #define NFSEXP_ALLFLAGS 0x3FE7F
+8
kernel/sysctl.c
··· 1228 1228 .extra1 = &zero, 1229 1229 }, 1230 1230 { 1231 + .procname = "dirtytime_expire_seconds", 1232 + .data = &dirtytime_expire_interval, 1233 + .maxlen = sizeof(dirty_expire_interval), 1234 + .mode = 0644, 1235 + .proc_handler = dirtytime_interval_handler, 1236 + .extra1 = &zero, 1237 + }, 1238 + { 1231 1239 .procname = "nr_pdflush_threads", 1232 1240 .mode = 0444 /* read-only */, 1233 1241 .proc_handler = pdflush_proc_obsolete,
+11
lib/lcm.c
··· 12 12 return 0; 13 13 } 14 14 EXPORT_SYMBOL_GPL(lcm); 15 + 16 + unsigned long lcm_not_zero(unsigned long a, unsigned long b) 17 + { 18 + unsigned long l = lcm(a, b); 19 + 20 + if (l) 21 + return l; 22 + 23 + return (b ? : a); 24 + } 25 + EXPORT_SYMBOL_GPL(lcm_not_zero);
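lcm_not_zero() is meant for stacking two limits where either side may be unset: plain lcm() collapses to 0 as soon as one argument is 0, which is rarely what such a caller wants. Expected results, for illustration:

    lcm_not_zero(4096, 512);   /* == 4096: ordinary LCM */
    lcm_not_zero(4096, 0);     /* == 4096: the unset side is ignored */
    lcm_not_zero(0, 512);      /* ==  512 */
    lcm_not_zero(0, 0);        /* ==    0 */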
+2
lib/nlattr.c
··· 279 279 int minlen = min_t(int, count, nla_len(src)); 280 280 281 281 memcpy(dest, nla_data(src), minlen); 282 + if (count > minlen) 283 + memset(dest + minlen, 0, count - minlen); 282 284 283 285 return minlen; 284 286 }
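The added memset() zero-fills the tail whenever the attribute is shorter than the destination, so callers no longer see stale bytes past the copied data. A sketch with a hypothetical 16-byte struct foo_cfg:

    struct foo_cfg cfg;        /* assume sizeof(cfg) == 16 */

    /* if userspace supplied only an 8-byte attribute, bytes 8..15 of
     * cfg are now guaranteed to be zero instead of stack garbage
     */
    nla_memcpy(&cfg, attrs[FOO_ATTR_CFG], sizeof(cfg));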
+3 -1
net/core/dev.c
··· 2848 2848 #define skb_update_prio(skb) 2849 2849 #endif 2850 2850 2851 - static DEFINE_PER_CPU(int, xmit_recursion); 2851 + DEFINE_PER_CPU(int, xmit_recursion); 2852 + EXPORT_SYMBOL(xmit_recursion); 2853 + 2852 2854 #define RECURSION_LIMIT 10 2853 2855 2854 2856 /**
+1 -1
net/core/fib_rules.c
··· 175 175 176 176 spin_lock(&net->rules_mod_lock); 177 177 list_del_rcu(&ops->list); 178 - fib_rules_cleanup_ops(ops); 179 178 spin_unlock(&net->rules_mod_lock); 180 179 180 + fib_rules_cleanup_ops(ops); 181 181 call_rcu(&ops->rcu, fib_rules_put_rcu); 182 182 } 183 183 EXPORT_SYMBOL_GPL(fib_rules_unregister);
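Moving fib_rules_cleanup_ops() out of the spinlock assumes the caller serializes rule teardown with the RTNL, which is exactly the contract the decnet, ipmr, fib6_rules and ip6mr exit paths below now follow:

    rtnl_lock();
    fib_rules_unregister(ops);
    rtnl_unlock();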
+3 -1
net/core/net_namespace.c
··· 198 198 */ 199 199 int peernet2id(struct net *net, struct net *peer) 200 200 { 201 - int id = __peernet2id(net, peer, true); 201 + bool alloc = atomic_read(&peer->count) == 0 ? false : true; 202 + int id; 202 203 204 + id = __peernet2id(net, peer, alloc); 203 205 return id >= 0 ? id : NETNSA_NSID_NOT_ASSIGNED; 204 206 } 205 207 EXPORT_SYMBOL(peernet2id);
+2 -2
net/core/rtnetlink.c
··· 1932 1932 struct ifinfomsg *ifm, 1933 1933 struct nlattr **tb) 1934 1934 { 1935 - struct net_device *dev; 1935 + struct net_device *dev, *aux; 1936 1936 int err; 1937 1937 1938 - for_each_netdev(net, dev) { 1938 + for_each_netdev_safe(net, dev, aux) { 1939 1939 if (dev->group == group) { 1940 1940 err = do_setlink(skb, dev, ifm, tb, NULL, 0); 1941 1941 if (err < 0)
+19
net/core/sock.c
··· 653 653 sock_reset_flag(sk, bit); 654 654 } 655 655 656 + bool sk_mc_loop(struct sock *sk) 657 + { 658 + if (dev_recursion_level()) 659 + return false; 660 + if (!sk) 661 + return true; 662 + switch (sk->sk_family) { 663 + case AF_INET: 664 + return inet_sk(sk)->mc_loop; 665 + #if IS_ENABLED(CONFIG_IPV6) 666 + case AF_INET6: 667 + return inet6_sk(sk)->mc_loop; 668 + #endif 669 + } 670 + WARN_ON(1); 671 + return true; 672 + } 673 + EXPORT_SYMBOL(sk_mc_loop); 674 + 656 675 /* 657 676 * This is meant for all protocols to use and covers goings on 658 677 * at the socket level. Everything here is generic.
+2
net/decnet/dn_rules.c
··· 248 248 249 249 void __exit dn_fib_rules_cleanup(void) 250 250 { 251 + rtnl_lock(); 251 252 fib_rules_unregister(dn_fib_rules_ops); 253 + rtnl_unlock(); 252 254 rcu_barrier(); 253 255 } 254 256
+7 -16
net/dsa/dsa.c
··· 501 501 #ifdef CONFIG_OF 502 502 static int dsa_of_setup_routing_table(struct dsa_platform_data *pd, 503 503 struct dsa_chip_data *cd, 504 - int chip_index, 504 + int chip_index, int port_index, 505 505 struct device_node *link) 506 506 { 507 - int ret; 508 507 const __be32 *reg; 509 - int link_port_addr; 510 508 int link_sw_addr; 511 509 struct device_node *parent_sw; 512 510 int len; ··· 517 519 if (!reg || (len != sizeof(*reg) * 2)) 518 520 return -EINVAL; 519 521 522 + /* 523 + * Get the destination switch number from the second field of its 'reg' 524 + * property, i.e. for "reg = <0x19 1>" sw_addr is '1'. 525 + */ 520 526 link_sw_addr = be32_to_cpup(reg + 1); 521 527 522 528 if (link_sw_addr >= pd->nr_chips) ··· 537 535 memset(cd->rtable, -1, pd->nr_chips * sizeof(s8)); 538 536 } 539 537 540 - reg = of_get_property(link, "reg", NULL); 541 - if (!reg) { 542 - ret = -EINVAL; 543 - goto out; 544 - } 545 - 546 - link_port_addr = be32_to_cpup(reg); 547 - 548 - cd->rtable[link_sw_addr] = link_port_addr; 538 + cd->rtable[link_sw_addr] = port_index; 549 539 550 540 return 0; 551 - out: 552 - kfree(cd->rtable); 553 - return ret; 554 541 } 555 542 556 543 static void dsa_of_free_platform_data(struct dsa_platform_data *pd) ··· 649 658 if (!strcmp(port_name, "dsa") && link && 650 659 pd->nr_chips > 1) { 651 660 ret = dsa_of_setup_routing_table(pd, cd, 652 - chip_index, link); 661 + chip_index, port_index, link); 653 662 if (ret) 654 663 goto out_free_chip; 655 664 }
+1 -2
net/ipv4/fib_frontend.c
··· 1111 1111 { 1112 1112 unsigned int i; 1113 1113 1114 + rtnl_lock(); 1114 1115 #ifdef CONFIG_IP_MULTIPLE_TABLES 1115 1116 fib4_rules_exit(net); 1116 1117 #endif 1117 - 1118 - rtnl_lock(); 1119 1118 for (i = 0; i < FIB_TABLE_HASHSZ; i++) { 1120 1119 struct fib_table *tb; 1121 1120 struct hlist_head *head;
+6 -1
net/ipv4/ipmr.c
··· 268 268 return 0; 269 269 270 270 err2: 271 - kfree(mrt); 271 + ipmr_free_table(mrt); 272 272 err1: 273 273 fib_rules_unregister(ops); 274 274 return err; ··· 278 278 { 279 279 struct mr_table *mrt, *next; 280 280 281 + rtnl_lock(); 281 282 list_for_each_entry_safe(mrt, next, &net->ipv4.mr_tables, list) { 282 283 list_del(&mrt->list); 283 284 ipmr_free_table(mrt); 284 285 } 285 286 fib_rules_unregister(net->ipv4.mr_rules_ops); 287 + rtnl_unlock(); 286 288 } 287 289 #else 288 290 #define ipmr_for_each_table(mrt, net) \ ··· 310 308 311 309 static void __net_exit ipmr_rules_exit(struct net *net) 312 310 { 311 + rtnl_lock(); 313 312 ipmr_free_table(net->ipv4.mrt); 313 + net->ipv4.mrt = NULL; 314 + rtnl_unlock(); 314 315 } 315 316 #endif 316 317
+4 -3
net/ipv4/tcp_input.c
··· 3105 3105 if (!first_ackt.v64) 3106 3106 first_ackt = last_ackt; 3107 3107 3108 - if (!(sacked & TCPCB_SACKED_ACKED)) 3108 + if (!(sacked & TCPCB_SACKED_ACKED)) { 3109 3109 reord = min(pkts_acked, reord); 3110 - if (!after(scb->end_seq, tp->high_seq)) 3111 - flag |= FLAG_ORIG_SACK_ACKED; 3110 + if (!after(scb->end_seq, tp->high_seq)) 3111 + flag |= FLAG_ORIG_SACK_ACKED; 3112 + } 3112 3113 } 3113 3114 3114 3115 if (sacked & TCPCB_SACKED_ACKED)
+1 -1
net/ipv4/tcp_ipv4.c
··· 1518 1518 skb->sk = sk; 1519 1519 skb->destructor = sock_edemux; 1520 1520 if (sk->sk_state != TCP_TIME_WAIT) { 1521 - struct dst_entry *dst = sk->sk_rx_dst; 1521 + struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1522 1522 1523 1523 if (dst) 1524 1524 dst = dst_check(dst, 0);
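READ_ONCE() documents and enforces a lockless read: sk->sk_rx_dst can change under the early-demux path, and without the annotation the compiler is free to reload or tear the pointer. The shape of the pattern:

    struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst);  /* one snapshot */

    if (dst)
            dst = dst_check(dst, 0);   /* keep working on the snapshot */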
+2
net/ipv6/fib6_rules.c
··· 322 322 323 323 static void __net_exit fib6_rules_net_exit(struct net *net) 324 324 { 325 + rtnl_lock(); 325 326 fib_rules_unregister(net->ipv6.fib6_rules_ops); 327 + rtnl_unlock(); 326 328 } 327 329 328 330 static struct pernet_operations fib6_rules_net_ops = {
+2 -1
net/ipv6/ip6_output.c
··· 542 542 { 543 543 struct sk_buff *frag; 544 544 struct rt6_info *rt = (struct rt6_info *)skb_dst(skb); 545 - struct ipv6_pinfo *np = skb->sk ? inet6_sk(skb->sk) : NULL; 545 + struct ipv6_pinfo *np = skb->sk && !dev_recursion_level() ? 546 + inet6_sk(skb->sk) : NULL; 546 547 struct ipv6hdr *tmp_hdr; 547 548 struct frag_hdr *fh; 548 549 unsigned int mtu, hlen, left, len;
+3 -3
net/ipv6/ip6mr.c
··· 252 252 return 0; 253 253 254 254 err2: 255 - kfree(mrt); 255 + ip6mr_free_table(mrt); 256 256 err1: 257 257 fib_rules_unregister(ops); 258 258 return err; ··· 267 267 list_del(&mrt->list); 268 268 ip6mr_free_table(mrt); 269 269 } 270 - rtnl_unlock(); 271 270 fib_rules_unregister(net->ipv6.mr6_rules_ops); 271 + rtnl_unlock(); 272 272 } 273 273 #else 274 274 #define ip6mr_for_each_table(mrt, net) \ ··· 336 336 337 337 static void ip6mr_free_table(struct mr6_table *mrt) 338 338 { 339 - del_timer(&mrt->ipmr_expire_timer); 339 + del_timer_sync(&mrt->ipmr_expire_timer); 340 340 mroute_clean_tables(mrt); 341 341 kfree(mrt); 342 342 }
+8 -1
net/ipv6/ndisc.c
··· 1218 1218 if (rt) 1219 1219 rt6_set_expires(rt, jiffies + (HZ * lifetime)); 1220 1220 if (ra_msg->icmph.icmp6_hop_limit) { 1221 - in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit; 1221 + /* Only set hop_limit on the interface if it is higher than 1222 + * the current hop_limit. 1223 + */ 1224 + if (in6_dev->cnf.hop_limit < ra_msg->icmph.icmp6_hop_limit) { 1225 + in6_dev->cnf.hop_limit = ra_msg->icmph.icmp6_hop_limit; 1226 + } else { 1227 + ND_PRINTK(2, warn, "RA: Got route advertisement with lower hop_limit than current\n"); 1228 + } 1222 1229 if (rt) 1223 1230 dst_metric_set(&rt->dst, RTAX_HOPLIMIT, 1224 1231 ra_msg->icmph.icmp6_hop_limit);
+12 -1
net/ipv6/tcp_ipv6.c
··· 1411 1411 TCP_SKB_CB(skb)->sacked = 0; 1412 1412 } 1413 1413 1414 + static void tcp_v6_restore_cb(struct sk_buff *skb) 1415 + { 1416 + /* We need to move header back to the beginning if xfrm6_policy_check() 1417 + * and tcp_v6_fill_cb() are going to be called again. 1418 + */ 1419 + memmove(IP6CB(skb), &TCP_SKB_CB(skb)->header.h6, 1420 + sizeof(struct inet6_skb_parm)); 1421 + } 1422 + 1414 1423 static int tcp_v6_rcv(struct sk_buff *skb) 1415 1424 { 1416 1425 const struct tcphdr *th; ··· 1552 1543 inet_twsk_deschedule(tw, &tcp_death_row); 1553 1544 inet_twsk_put(tw); 1554 1545 sk = sk2; 1546 + tcp_v6_restore_cb(skb); 1555 1547 goto process; 1556 1548 } 1557 1549 /* Fall through to ACK */ ··· 1561 1551 tcp_v6_timewait_ack(sk, skb); 1562 1552 break; 1563 1553 case TCP_TW_RST: 1554 + tcp_v6_restore_cb(skb); 1564 1555 goto no_tcp_socket; 1565 1556 case TCP_TW_SUCCESS: 1566 1557 ; ··· 1596 1585 skb->sk = sk; 1597 1586 skb->destructor = sock_edemux; 1598 1587 if (sk->sk_state != TCP_TIME_WAIT) { 1599 - struct dst_entry *dst = sk->sk_rx_dst; 1588 + struct dst_entry *dst = READ_ONCE(sk->sk_rx_dst); 1600 1589 1601 1590 if (dst) 1602 1591 dst = dst_check(dst, inet6_sk(sk)->rx_dst_cookie);
+1 -3
net/iucv/af_iucv.c
··· 1114 1114 noblock, &err); 1115 1115 else 1116 1116 skb = sock_alloc_send_skb(sk, len, noblock, &err); 1117 - if (!skb) { 1118 - err = -ENOMEM; 1117 + if (!skb) 1119 1118 goto out; 1120 - } 1121 1119 if (iucv->transport == AF_IUCV_TRANS_HIPER) 1122 1120 skb_reserve(skb, sizeof(struct af_iucv_trans_hdr) + ETH_HLEN); 1123 1121 if (memcpy_from_msg(skb_put(skb, len), msg, len)) {
+1
net/l2tp/l2tp_core.c
··· 1871 1871 l2tp_wq = alloc_workqueue("l2tp", WQ_UNBOUND, 0); 1872 1872 if (!l2tp_wq) { 1873 1873 pr_err("alloc_workqueue failed\n"); 1874 + unregister_pernet_device(&l2tp_net_ops); 1874 1875 rc = -ENOMEM; 1875 1876 goto out; 1876 1877 }
+6 -2
net/mac80211/agg-rx.c
··· 49 49 container_of(h, struct tid_ampdu_rx, rcu_head); 50 50 int i; 51 51 52 - del_timer_sync(&tid_rx->reorder_timer); 53 - 54 52 for (i = 0; i < tid_rx->buf_size; i++) 55 53 __skb_queue_purge(&tid_rx->reorder_buf[i]); 56 54 kfree(tid_rx->reorder_buf); ··· 90 92 tid, WLAN_BACK_RECIPIENT, reason); 91 93 92 94 del_timer_sync(&tid_rx->session_timer); 95 + 96 + /* make sure ieee80211_sta_reorder_release() doesn't re-arm the timer */ 97 + spin_lock_bh(&tid_rx->reorder_lock); 98 + tid_rx->removed = true; 99 + spin_unlock_bh(&tid_rx->reorder_lock); 100 + del_timer_sync(&tid_rx->reorder_timer); 93 101 94 102 call_rcu(&tid_rx->rcu_head, ieee80211_free_tid_rx); 95 103 }
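The removed flag closes a self-rearming-timer race: del_timer_sync() alone cannot stop a timer whose re-arm path may run concurrently. The pattern, condensed from this hunk and the net/mac80211/rx.c change below ('expires' stands in for the deadline computed there):

    /* teardown: forbid re-arming under the lock the re-arm path takes,
     * then synchronously kill the timer
     */
    spin_lock_bh(&tid_rx->reorder_lock);
    tid_rx->removed = true;
    spin_unlock_bh(&tid_rx->reorder_lock);
    del_timer_sync(&tid_rx->reorder_timer);

    /* release path, under reorder_lock */
    if (!tid_rx->removed)
            mod_timer(&tid_rx->reorder_timer, expires);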
+4 -3
net/mac80211/rx.c
··· 873 873 874 874 set_release_timer: 875 875 876 - mod_timer(&tid_agg_rx->reorder_timer, 877 - tid_agg_rx->reorder_time[j] + 1 + 878 - HT_RX_REORDER_BUF_TIMEOUT); 876 + if (!tid_agg_rx->removed) 877 + mod_timer(&tid_agg_rx->reorder_timer, 878 + tid_agg_rx->reorder_time[j] + 1 + 879 + HT_RX_REORDER_BUF_TIMEOUT); 879 880 } else { 880 881 del_timer(&tid_agg_rx->reorder_timer); 881 882 }
+2
net/mac80211/sta_info.h
··· 175 175 * @reorder_lock: serializes access to reorder buffer, see below. 176 176 * @auto_seq: used for offloaded BA sessions to automatically pick head_seq_and 177 177 * and ssn. 178 + * @removed: this session is removed (but might have been found due to RCU) 178 179 * 179 180 * This structure's lifetime is managed by RCU, assignments to 180 181 * the array holding it must hold the aggregation mutex. ··· 200 199 u16 timeout; 201 200 u8 dialog_token; 202 201 bool auto_seq; 202 + bool removed; 203 203 }; 204 204 205 205 /**
+1 -3
net/openvswitch/vport.c
··· 274 274 ASSERT_OVSL(); 275 275 276 276 hlist_del_rcu(&vport->hash_node); 277 - 278 - vport->ops->destroy(vport); 279 - 280 277 module_put(vport->ops->owner); 278 + vport->ops->destroy(vport); 281 279 } 282 280 283 281 /**
+1 -3
net/sunrpc/clnt.c
··· 303 303 struct super_block *pipefs_sb; 304 304 int err; 305 305 306 - err = rpc_clnt_debugfs_register(clnt); 307 - if (err) 308 - return err; 306 + rpc_clnt_debugfs_register(clnt); 309 307 310 308 pipefs_sb = rpc_get_sb_net(net); 311 309 if (pipefs_sb) {
+29 -23
net/sunrpc/debugfs.c
··· 129 129 .release = tasks_release, 130 130 }; 131 131 132 - int 132 + void 133 133 rpc_clnt_debugfs_register(struct rpc_clnt *clnt) 134 134 { 135 - int len, err; 135 + int len; 136 136 char name[24]; /* enough for "../../rpc_xprt/ + 8 hex digits + NULL */ 137 + struct rpc_xprt *xprt; 137 138 138 139 /* Already registered? */ 139 - if (clnt->cl_debugfs) 140 - return 0; 140 + if (clnt->cl_debugfs || !rpc_clnt_dir) 141 + return; 141 142 142 143 len = snprintf(name, sizeof(name), "%x", clnt->cl_clid); 143 144 if (len >= sizeof(name)) 144 - return -EINVAL; 145 + return; 145 146 146 147 /* make the per-client dir */ 147 148 clnt->cl_debugfs = debugfs_create_dir(name, rpc_clnt_dir); 148 149 if (!clnt->cl_debugfs) 149 - return -ENOMEM; 150 + return; 150 151 151 152 /* make tasks file */ 152 - err = -ENOMEM; 153 153 if (!debugfs_create_file("tasks", S_IFREG | S_IRUSR, clnt->cl_debugfs, 154 154 clnt, &tasks_fops)) 155 155 goto out_err; 156 156 157 - err = -EINVAL; 158 157 rcu_read_lock(); 158 + xprt = rcu_dereference(clnt->cl_xprt); 159 + /* no "debugfs" dentry? Don't bother with the symlink. */ 160 + if (!xprt->debugfs) { 161 + rcu_read_unlock(); 162 + return; 163 + } 159 164 len = snprintf(name, sizeof(name), "../../rpc_xprt/%s", 160 - rcu_dereference(clnt->cl_xprt)->debugfs->d_name.name); 165 + xprt->debugfs->d_name.name); 161 166 rcu_read_unlock(); 167 + 162 168 if (len >= sizeof(name)) 163 169 goto out_err; 164 170 165 - err = -ENOMEM; 166 171 if (!debugfs_create_symlink("xprt", clnt->cl_debugfs, name)) 167 172 goto out_err; 168 173 169 - return 0; 174 + return; 170 175 out_err: 171 176 debugfs_remove_recursive(clnt->cl_debugfs); 172 177 clnt->cl_debugfs = NULL; 173 - return err; 174 178 } 175 179 176 180 void ··· 230 226 .release = xprt_info_release, 231 227 }; 232 228 233 - int 229 + void 234 230 rpc_xprt_debugfs_register(struct rpc_xprt *xprt) 235 231 { 236 232 int len, id; 237 233 static atomic_t cur_id; 238 234 char name[9]; /* 8 hex digits + NULL term */ 239 235 236 + if (!rpc_xprt_dir) 237 + return; 238 + 240 239 id = (unsigned int)atomic_inc_return(&cur_id); 241 240 242 241 len = snprintf(name, sizeof(name), "%x", id); 243 242 if (len >= sizeof(name)) 244 - return -EINVAL; 243 + return; 245 244 246 245 /* make the per-client dir */ 247 246 xprt->debugfs = debugfs_create_dir(name, rpc_xprt_dir); 248 247 if (!xprt->debugfs) 249 - return -ENOMEM; 248 + return; 250 249 251 250 /* make tasks file */ 252 251 if (!debugfs_create_file("info", S_IFREG | S_IRUSR, xprt->debugfs, 253 252 xprt, &xprt_info_fops)) { 254 253 debugfs_remove_recursive(xprt->debugfs); 255 254 xprt->debugfs = NULL; 256 - return -ENOMEM; 257 255 } 258 - 259 - return 0; 260 256 } 261 257 262 258 void ··· 270 266 sunrpc_debugfs_exit(void) 271 267 { 272 268 debugfs_remove_recursive(topdir); 269 + topdir = NULL; 270 + rpc_clnt_dir = NULL; 271 + rpc_xprt_dir = NULL; 273 272 } 274 273 275 - int __init 274 + void __init 276 275 sunrpc_debugfs_init(void) 277 276 { 278 277 topdir = debugfs_create_dir("sunrpc", NULL); 279 278 if (!topdir) 280 - goto out; 279 + return; 281 280 282 281 rpc_clnt_dir = debugfs_create_dir("rpc_clnt", topdir); 283 282 if (!rpc_clnt_dir) ··· 290 283 if (!rpc_xprt_dir) 291 284 goto out_remove; 292 285 293 - return 0; 286 + return; 294 287 out_remove: 295 288 debugfs_remove_recursive(topdir); 296 289 topdir = NULL; 297 - out: 298 - return -ENOMEM; 290 + rpc_clnt_dir = NULL; 299 291 }
+1 -6
net/sunrpc/sunrpc_syms.c
··· 98 98 if (err) 99 99 goto out4; 100 100 101 - err = sunrpc_debugfs_init(); 102 - if (err) 103 - goto out5; 104 - 101 + sunrpc_debugfs_init(); 105 102 #if IS_ENABLED(CONFIG_SUNRPC_DEBUG) 106 103 rpc_register_sysctl(); 107 104 #endif ··· 106 109 init_socket_xprt(); /* clnt sock transport */ 107 110 return 0; 108 111 109 - out5: 110 - unregister_rpc_pipefs(); 111 112 out4: 112 113 unregister_pernet_subsys(&sunrpc_net_ops); 113 114 out3:
+1 -6
net/sunrpc/xprt.c
··· 1331 1331 */ 1332 1332 struct rpc_xprt *xprt_create_transport(struct xprt_create *args) 1333 1333 { 1334 - int err; 1335 1334 struct rpc_xprt *xprt; 1336 1335 struct xprt_class *t; 1337 1336 ··· 1371 1372 return ERR_PTR(-ENOMEM); 1372 1373 } 1373 1374 1374 - err = rpc_xprt_debugfs_register(xprt); 1375 - if (err) { 1376 - xprt_destroy(xprt); 1377 - return ERR_PTR(err); 1378 - } 1375 + rpc_xprt_debugfs_register(xprt); 1379 1376 1380 1377 dprintk("RPC: created transport %p with %u slots\n", xprt, 1381 1378 xprt->max_reqs);
+1 -1
net/tipc/core.c
··· 152 152 static void __exit tipc_exit(void) 153 153 { 154 154 tipc_bearer_cleanup(); 155 + unregister_pernet_subsys(&tipc_net_ops); 155 156 tipc_netlink_stop(); 156 157 tipc_netlink_compat_stop(); 157 158 tipc_socket_stop(); 158 159 tipc_unregister_sysctl(); 159 - unregister_pernet_subsys(&tipc_net_ops); 160 160 161 161 pr_info("Deactivated\n"); 162 162 }