Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'v3.18' into drm-next

Linux 3.18

Backmerge Linus' tree into -next, as we had conflicts in i915/radeon/nouveau
and everyone was resolving them individually.

* tag 'v3.18': (57 commits)
Linux 3.18
watchdog: s3c2410_wdt: Fix the mask bit offset for Exynos7
uapi: fix to export linux/vm_sockets.h
i2c: cadence: Set the hardware time-out register to maximum value
i2c: davinci: generate STP always when NACK is received
ahci: disable MSI on SAMSUNG 0xa800 SSD
context_tracking: Restore previous state in schedule_user
slab: fix nodeid bounds check for non-contiguous node IDs
lib/genalloc.c: export devm_gen_pool_create() for modules
mm: fix anon_vma_clone() error treatment
mm: fix swapoff hang after page migration and fork
fat: fix oops on corrupted vfat fs
ipc/sem.c: fully initialize sem_array before making it visible
drivers/input/evdev.c: don't kfree() a vmalloc address
cxgb4: Fill in supported link mode for SFP modules
xen-netfront: Remove BUGs on paged skb data which crosses a page boundary
mm/vmpressure.c: fix race in vmpressure_work_fn()
mm: frontswap: invalidate expired data on a dup-store failure
mm: do not overwrite reserved pages counter at show_mem()
drm/radeon: kernel panic in drm_calc_vbltimestamp_from_scanoutpos with 3.18.0-rc6
...

Conflicts:
drivers/gpu/drm/i915/intel_display.c
drivers/gpu/drm/nouveau/nouveau_drm.c
drivers/gpu/drm/radeon/radeon_cs.c

+356 -268
+20 -18
MAINTAINERS
···
 F: net/ax25/

 AZ6007 DVB DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
···
 F: fs/btrfs/

 BTTV VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
···
 F: include/media/cx2341x*

 CX88 VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
···
 EDAC-CORE
 M: Doug Thompson <dougthompson@xmission.com>
 M: Borislav Petkov <bp@alien8.de>
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-edac@vger.kernel.org
 W: bluesmoke.sourceforge.net
 S: Supported
···
 F: drivers/edac/e7xxx_edac.c

 EDAC-GHES
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-edac@vger.kernel.org
 W: bluesmoke.sourceforge.net
 S: Maintained
···
 F: drivers/edac/i5000_edac.c

 EDAC-I5400
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-edac@vger.kernel.org
 W: bluesmoke.sourceforge.net
 S: Maintained
 F: drivers/edac/i5400_edac.c

 EDAC-I7300
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-edac@vger.kernel.org
 W: bluesmoke.sourceforge.net
 S: Maintained
 F: drivers/edac/i7300_edac.c

 EDAC-I7CORE
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-edac@vger.kernel.org
 W: bluesmoke.sourceforge.net
 S: Maintained
···
 F: drivers/edac/r82600_edac.c

 EDAC-SBRIDGE
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-edac@vger.kernel.org
 W: bluesmoke.sourceforge.net
 S: Maintained
···
 F: drivers/net/ethernet/ibm/ehea/

 EM28XX VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
···
 F: drivers/media/radio/radio-maxiradio*

 MEDIA INPUT INFRASTRUCTURE (V4L/DVB)
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 P: LinuxTV.org Project
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
···
 F: drivers/media/i2c/saa6588*

 SAA7134 VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
···
 F: drivers/media/radio/si4713/radio-usb-si4713.c

 SIANO DVB DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
···
 F: drivers/leds/leds-net48xx.c

 SOFTLOGIC 6x10 MPEG CODEC
-M: Ismael Luceno <ismael.luceno@corp.bluecherry.net>
+M: Bluecherry Maintainers <maintainers@bluecherrydvr.com>
+M: Andrey Utkin <andrey.utkin@corp.bluecherry.net>
+M: Andrey Utkin <andrey.krieger.utkin@gmail.com>
 L: linux-media@vger.kernel.org
 S: Supported
 F: drivers/media/pci/solo6x10/
···
 F: drivers/media/i2c/tda9840*

 TEA5761 TUNER DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
···
 F: drivers/media/tuners/tea5761.*

 TEA5767 TUNER DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
···
 F: mm/shmem.c

 TM6000 VIDEO4LINUX DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
···
 F: arch/x86/kernel/cpu/mcheck/*

 XC2028/3028 TUNER DRIVER
-M: Mauro Carvalho Chehab <m.chehab@samsung.com>
+M: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
 L: linux-media@vger.kernel.org
 W: http://linuxtv.org
 T: git git://linuxtv.org/media_tree.git
+1 -1
Makefile
···
 VERSION = 3
 PATCHLEVEL = 18
 SUBLEVEL = 0
-EXTRAVERSION = -rc7
+EXTRAVERSION =
 NAME = Diseased Newt

 # *DOCUMENTATION*
+2 -6
arch/s390/kernel/nmi.c
···
	 */
	local_irq_save(flags);
	local_mcck_disable();
-	/*
-	 * Ummm... Does this make sense at all? Copying the percpu struct
-	 * and then zapping it one statement later?
-	 */
-	memcpy(&mcck, this_cpu_ptr(&cpu_mcck), sizeof(mcck));
-	memset(&mcck, 0, sizeof(struct mcck_struct));
+	mcck = *this_cpu_ptr(&cpu_mcck);
+	memset(this_cpu_ptr(&cpu_mcck), 0, sizeof(mcck));
	clear_cpu_flag(CIF_MCCK_PENDING);
	local_mcck_enable();
	local_irq_restore(flags);
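The hunk above replaces a copy-then-zero sequence that zeroed the local snapshot instead of the per-CPU source. A minimal userspace sketch of the two patterns, with a plain struct and hypothetical field names standing in for the per-CPU `struct mcck_struct`:

```c
#include <string.h>

/* Hypothetical stand-in for the kernel's per-CPU struct mcck_struct. */
struct mcck_state { int pending; };

/* Buggy pattern from the old code: the memset() hits the copy, not the
 * source, so the snapshot is wiped and the source is never cleared. */
int snapshot_buggy(struct mcck_state *src)
{
	struct mcck_state mcck;

	memcpy(&mcck, src, sizeof(mcck));
	memset(&mcck, 0, sizeof(mcck));	/* wrong target: zaps the copy */
	return mcck.pending;		/* snapshot is now useless */
}

/* Fixed pattern: copy by assignment, then clear the source so the next
 * event starts from a clean state. */
int snapshot_fixed(struct mcck_state *src)
{
	struct mcck_state mcck = *src;

	memset(src, 0, sizeof(*src));	/* source cleared for next event */
	return mcck.pending;		/* snapshot preserved */
}
```

With a source whose `pending` flag is set, the buggy variant loses the flag and leaves the source dirty, while the fixed variant returns the flag and clears the source.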
+1 -1
arch/x86/boot/compressed/Makefile
···
 suffix-$(CONFIG_KERNEL_LZO) := lzo
 suffix-$(CONFIG_KERNEL_LZ4) := lz4

-RUN_SIZE = $(shell objdump -h vmlinux | \
+RUN_SIZE = $(shell $(OBJDUMP) -h vmlinux | \
	     perl $(srctree)/arch/x86/tools/calc_run_size.pl)
 quiet_cmd_mkpiggy = MKPIGGY $@
       cmd_mkpiggy = $(obj)/mkpiggy $< $(RUN_SIZE) > $@ || ( rm -f $@ ; false )
+2
arch/x86/kernel/cpu/microcode/core.c
···

	if (uci->valid && uci->mc)
		microcode_ops->apply_microcode(cpu);
+#ifdef CONFIG_X86_64
	else if (!uci->mc)
		/*
		 * We might resume and not have applied late microcode but still
···
		 * applying patches early on the APs.
		 */
		load_ucode_ap();
+#endif
 }

 static struct syscore_ops mc_syscore_ops = {
+7 -6
block/bio-integrity.c
···
 {
	struct blk_integrity *bi = bdev_get_integrity(bio->bi_bdev);
	struct blk_integrity_iter iter;
-	struct bio_vec *bv;
+	struct bvec_iter bviter;
+	struct bio_vec bv;
	struct bio_integrity_payload *bip = bio_integrity(bio);
-	unsigned int i, ret = 0;
+	unsigned int ret = 0;
	void *prot_buf = page_address(bip->bip_vec->bv_page) +
		bip->bip_vec->bv_offset;
···
	iter.seed = bip_get_seed(bip);
	iter.prot_buf = prot_buf;

-	bio_for_each_segment_all(bv, bio, i) {
-		void *kaddr = kmap_atomic(bv->bv_page);
+	bio_for_each_segment(bv, bio, bviter) {
+		void *kaddr = kmap_atomic(bv.bv_page);

-		iter.data_buf = kaddr + bv->bv_offset;
-		iter.data_size = bv->bv_len;
+		iter.data_buf = kaddr + bv.bv_offset;
+		iter.data_size = bv.bv_len;

		ret = proc_fn(&iter);
		if (ret) {
+2 -1
drivers/acpi/video.c
···
		return true;

	for (i = 0; i < video->attached_count; i++) {
-		if (video->attached_array[i].bind_info == device)
+		if ((video->attached_array[i].value.int_val & 0xfff) ==
+		    (device->device_id & 0xfff))
			return true;
	}

+4
drivers/ata/ahci.c
···
	{ PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series RAID */
	{ PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */
	{ PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series RAID */
+	{ PCI_VDEVICE(INTEL, 0x9d03), board_ahci }, /* Sunrise Point-LP AHCI */
+	{ PCI_VDEVICE(INTEL, 0x9d05), board_ahci }, /* Sunrise Point-LP RAID */
+	{ PCI_VDEVICE(INTEL, 0x9d07), board_ahci }, /* Sunrise Point-LP RAID */
	{ PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H AHCI */
	{ PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H RAID */
	{ PCI_VDEVICE(INTEL, 0xa105), board_ahci }, /* Sunrise Point-H RAID */
···
	 * enabled. https://bugzilla.kernel.org/show_bug.cgi?id=60731
	 */
	{ PCI_VDEVICE(SAMSUNG, 0x1600), board_ahci_nomsi },
+	{ PCI_VDEVICE(SAMSUNG, 0xa800), board_ahci_nomsi },

	/* Enmotus */
	{ PCI_DEVICE(0x1c44, 0x8000), board_ahci },
+1 -1
drivers/ata/sata_fsl.c
···
	host_priv->csr_base = csr_base;

	irq = irq_of_parse_and_map(ofdev->dev.of_node, 0);
-	if (irq < 0) {
+	if (!irq) {
		dev_err(&ofdev->dev, "invalid irq from platform\n");
		goto error_exit_with_cleanup;
	}
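`irq_of_parse_and_map()` returns an `unsigned int`, with 0 meaning "no mapping", so the old `irq < 0` test could never fire and a failed lookup slipped through. A small sketch of why, where `fake_parse_and_map()` and the probe functions are hypothetical stand-ins:

```c
/* Hypothetical stand-in: returns 0 on failure, an IRQ number on success,
 * mirroring irq_of_parse_and_map()'s unsigned return convention. */
static unsigned int fake_parse_and_map(int fail)
{
	return fail ? 0u : 19u;
}

/* Buggy check: irq is unsigned, so `irq < 0` is always false and the
 * failure value 0 is treated as a valid IRQ. */
int probe_buggy(int fail)
{
	unsigned int irq = fake_parse_and_map(fail);

	if (irq < 0)		/* never true for an unsigned value */
		return -1;
	return (int)irq;
}

/* Fixed check: 0 is the documented failure value. */
int probe_fixed(int fail)
{
	unsigned int irq = fake_parse_and_map(fail);

	if (!irq)
		return -1;
	return (int)irq;
}
```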
-3
drivers/gpu/drm/i915/intel_display.c
···
		ironlake_fdi_disable(crtc);

		ironlake_disable_pch_transcoder(dev_priv, pipe);
-		intel_set_pch_fifo_underrun_reporting(dev_priv, pipe, true);

		if (HAS_PCH_CPT(dev)) {
			/* disable TRANS_DP_CTL */
···

	if (intel_crtc->config.has_pch_encoder) {
		lpt_disable_pch_transcoder(dev_priv);
-		intel_set_pch_fifo_underrun_reporting(dev_priv, TRANSCODER_A,
-						      true);
		intel_ddi_fdi_disable(crtc);
	}

+11 -11
drivers/gpu/drm/i915/intel_lvds.c
···
	int pipe;
	u8 pin;

+	/*
+	 * Unlock registers and just leave them unlocked. Do this before
+	 * checking quirk lists to avoid bogus WARNINGs.
+	 */
+	if (HAS_PCH_SPLIT(dev)) {
+		I915_WRITE(PCH_PP_CONTROL,
+			   I915_READ(PCH_PP_CONTROL) | PANEL_UNLOCK_REGS);
+	} else {
+		I915_WRITE(PP_CONTROL,
+			   I915_READ(PP_CONTROL) | PANEL_UNLOCK_REGS);
+	}
	if (!intel_lvds_supported(dev))
		return;

···
	lvds_encoder->a3_power = I915_READ(lvds_encoder->reg) &
				 LVDS_A3_POWER_MASK;

-	/*
-	 * Unlock registers and just
-	 * leave them unlocked
-	 */
-	if (HAS_PCH_SPLIT(dev)) {
-		I915_WRITE(PCH_PP_CONTROL,
-			   I915_READ(PCH_PP_CONTROL) | PANEL_UNLOCK_REGS);
-	} else {
-		I915_WRITE(PP_CONTROL,
-			   I915_READ(PP_CONTROL) | PANEL_UNLOCK_REGS);
-	}
	lvds_connector->lid_notifier.notifier_call = intel_lid_notify;
	if (acpi_lid_notifier_register(&lvds_connector->lid_notifier)) {
		DRM_DEBUG_KMS("lid notifier registration failed\n");
-1
drivers/gpu/drm/nouveau/core/engine/device/nvc0.c
···
		device->oclass[NVDEV_ENGINE_BSP    ] = &nvc0_bsp_oclass;
		device->oclass[NVDEV_ENGINE_PPP    ] = &nvc0_ppp_oclass;
		device->oclass[NVDEV_ENGINE_COPY0  ] = &nvc0_copy0_oclass;
-		device->oclass[NVDEV_ENGINE_COPY1  ] = &nvc0_copy1_oclass;
		device->oclass[NVDEV_ENGINE_DISP   ] = nva3_disp_oclass;
		device->oclass[NVDEV_ENGINE_PERFMON] = &nvc0_perfmon_oclass;
		break;
+1 -1
drivers/gpu/drm/nouveau/core/engine/fifo/nv04.c
···
	}

	if (status & 0x40000000) {
-		nouveau_fifo_uevent(&priv->base);
		nv_wr32(priv, 0x002100, 0x40000000);
+		nouveau_fifo_uevent(&priv->base);
		status &= ~0x40000000;
	}
 }
+2 -2
drivers/gpu/drm/nouveau/core/engine/fifo/nvc0.c
···
	u32 inte = nv_rd32(priv, 0x002628);
	u32 unkn;

+	nv_wr32(priv, 0x0025a8 + (engn * 0x04), intr);
+
	for (unkn = 0; unkn < 8; unkn++) {
		u32 ints = (intr >> (unkn * 0x04)) & inte;
		if (ints & 0x1) {
···
			nv_mask(priv, 0x002628, ints, 0);
		}
	}
-
-	nv_wr32(priv, 0x0025a8 + (engn * 0x04), intr);
 }

 static void
+1 -1
drivers/gpu/drm/nouveau/core/engine/fifo/nve0.c
···
	}

	if (stat & 0x80000000) {
-		nve0_fifo_intr_engine(priv);
		nv_wr32(priv, 0x002100, 0x80000000);
+		nve0_fifo_intr_engine(priv);
		stat &= ~0x80000000;
	}

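This hunk, like the nv04 and nvc0 fifo hunks above, moves the write to the write-to-clear status register before the event handler, so an event that latches while the handler runs is not wiped by the acknowledge. A single-threaded sketch of the race, using a hypothetical register model where a new event arrives mid-handler:

```c
/* Hypothetical write-to-clear status register model; real hardware
 * latches bits asynchronously, here the handler simulates it. */
static unsigned int irq_stat;
static int events_handled;

static void handle_event(void)
{
	events_handled++;
	irq_stat |= 0x1;	/* a new event latches during handling */
}

/* Buggy order: handle, then clear - the freshly latched event's bit is
 * wiped along with the one we already handled, so it is lost. */
void isr_clear_after(void)
{
	if (irq_stat & 0x1) {
		handle_event();
		irq_stat &= ~0x1;
	}
}

/* Fixed order: clear first, then handle - an event latched during
 * handling stays pending for the next interrupt. */
void isr_clear_before(void)
{
	if (irq_stat & 0x1) {
		irq_stat &= ~0x1;
		handle_event();
	}
}
```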
+1 -1
drivers/gpu/drm/nouveau/nouveau_drm.c
···

	pci_save_state(pdev);
	pci_disable_device(pdev);
-	pci_ignore_hotplug(pdev);
	pci_set_power_state(pdev, PCI_D3hot);
	return 0;
 }
···
	ret = nouveau_do_suspend(drm_dev, true);
	pci_save_state(pdev);
	pci_disable_device(pdev);
+	pci_ignore_hotplug(pdev);
	pci_set_power_state(pdev, PCI_D3cold);
	drm_dev->switch_power_state = DRM_SWITCH_POWER_DYNAMIC_OFF;
	return ret;
+65 -27
drivers/gpu/drm/nouveau/nouveau_fence.c
···
	return container_of(fence->base.lock, struct nouveau_fence_chan, lock);
 }

-static void
+static int
 nouveau_fence_signal(struct nouveau_fence *fence)
 {
+	int drop = 0;
+
	fence_signal_locked(&fence->base);
	list_del(&fence->head);
+	rcu_assign_pointer(fence->channel, NULL);

	if (test_bit(FENCE_FLAG_USER_BITS, &fence->base.flags)) {
		struct nouveau_fence_chan *fctx = nouveau_fctx(fence);

		if (!--fctx->notify_ref)
-			nvif_notify_put(&fctx->notify);
+			drop = 1;
	}

	fence_put(&fence->base);
+	return drop;
 }

 static struct nouveau_fence *
···
 {
	struct nouveau_fence *fence;

-	nvif_notify_fini(&fctx->notify);
-
	spin_lock_irq(&fctx->lock);
	while (!list_empty(&fctx->pending)) {
		fence = list_entry(fctx->pending.next, typeof(*fence), head);

-		nouveau_fence_signal(fence);
-		fence->channel = NULL;
+		if (nouveau_fence_signal(fence))
+			nvif_notify_put(&fctx->notify);
	}
	spin_unlock_irq(&fctx->lock);
+
+	nvif_notify_fini(&fctx->notify);
+	fctx->dead = 1;
+
+	/*
+	 * Ensure that all accesses to fence->channel complete before freeing
+	 * the channel.
+	 */
+	synchronize_rcu();
 }

 static void
···
	kref_put(&fctx->fence_ref, nouveau_fence_context_put);
 }

-static void
+static int
 nouveau_fence_update(struct nouveau_channel *chan, struct nouveau_fence_chan *fctx)
 {
	struct nouveau_fence *fence;
-
+	int drop = 0;
	u32 seq = fctx->read(chan);

	while (!list_empty(&fctx->pending)) {
		fence = list_entry(fctx->pending.next, typeof(*fence), head);

		if ((int)(seq - fence->base.seqno) < 0)
-			return;
+			break;

-		nouveau_fence_signal(fence);
+		drop |= nouveau_fence_signal(fence);
	}
+
+	return drop;
 }

 static int
···
	struct nouveau_fence_chan *fctx =
		container_of(notify, typeof(*fctx), notify);
	unsigned long flags;
+	int ret = NVIF_NOTIFY_KEEP;

	spin_lock_irqsave(&fctx->lock, flags);
	if (!list_empty(&fctx->pending)) {
		struct nouveau_fence *fence;
+		struct nouveau_channel *chan;

		fence = list_entry(fctx->pending.next, typeof(*fence), head);
-		nouveau_fence_update(fence->channel, fctx);
+		chan = rcu_dereference_protected(fence->channel, lockdep_is_held(&fctx->lock));
+		if (nouveau_fence_update(fence->channel, fctx))
+			ret = NVIF_NOTIFY_DROP;
	}
	spin_unlock_irqrestore(&fctx->lock, flags);

-	/* Always return keep here. NVIF refcount is handled with nouveau_fence_update */
-	return NVIF_NOTIFY_KEEP;
+	return ret;
 }

 void
···
	if (!ret) {
		fence_get(&fence->base);
		spin_lock_irq(&fctx->lock);
-		nouveau_fence_update(chan, fctx);
+
+		if (nouveau_fence_update(chan, fctx))
+			nvif_notify_put(&fctx->notify);
+
		list_add_tail(&fence->head, &fctx->pending);
		spin_unlock_irq(&fctx->lock);
	}
···
	if (fence->base.ops == &nouveau_fence_ops_legacy ||
	    fence->base.ops == &nouveau_fence_ops_uevent) {
		struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
+		struct nouveau_channel *chan;
		unsigned long flags;

		if (test_bit(FENCE_FLAG_SIGNALED_BIT, &fence->base.flags))
			return true;

		spin_lock_irqsave(&fctx->lock, flags);
-		nouveau_fence_update(fence->channel, fctx);
+		chan = rcu_dereference_protected(fence->channel, lockdep_is_held(&fctx->lock));
+		if (chan && nouveau_fence_update(chan, fctx))
+			nvif_notify_put(&fctx->notify);
		spin_unlock_irqrestore(&fctx->lock, flags);
	}
	return fence_is_signaled(&fence->base);
···

	if (fence && (!exclusive || !fobj || !fobj->shared_count)) {
		struct nouveau_channel *prev = NULL;
+		bool must_wait = true;

		f = nouveau_local_fence(fence, chan->drm);
-		if (f)
-			prev = f->channel;
+		if (f) {
+			rcu_read_lock();
+			prev = rcu_dereference(f->channel);
+			if (prev && (prev == chan || fctx->sync(f, prev, chan) == 0))
+				must_wait = false;
+			rcu_read_unlock();
+		}

-		if (!prev || (prev != chan && (ret = fctx->sync(f, prev, chan))))
+		if (must_wait)
			ret = fence_wait(fence, intr);

		return ret;
···

	for (i = 0; i < fobj->shared_count && !ret; ++i) {
		struct nouveau_channel *prev = NULL;
+		bool must_wait = true;

		fence = rcu_dereference_protected(fobj->shared[i],
						  reservation_object_held(resv));

		f = nouveau_local_fence(fence, chan->drm);
-		if (f)
-			prev = f->channel;
+		if (f) {
+			rcu_read_lock();
+			prev = rcu_dereference(f->channel);
+			if (prev && (prev == chan || fctx->sync(f, prev, chan) == 0))
+				must_wait = false;
+			rcu_read_unlock();
+		}

-		if (!prev || (prev != chan && (ret = fctx->sync(f, prev, chan))))
+		if (must_wait)
			ret = fence_wait(fence, intr);
-
-		if (ret)
-			break;
	}

	return ret;
···
	struct nouveau_fence *fence = from_fence(f);
	struct nouveau_fence_chan *fctx = nouveau_fctx(fence);

-	return fence->channel ? fctx->name : "dead channel";
+	return !fctx->dead ? fctx->name : "dead channel";
 }

 /*
···
 {
	struct nouveau_fence *fence = from_fence(f);
	struct nouveau_fence_chan *fctx = nouveau_fctx(fence);
-	struct nouveau_channel *chan = fence->channel;
+	struct nouveau_channel *chan;
+	bool ret = false;

-	return (int)(fctx->read(chan) - fence->base.seqno) >= 0;
+	rcu_read_lock();
+	chan = rcu_dereference(fence->channel);
+	if (chan)
+		ret = (int)(fctx->read(chan) - fence->base.seqno) >= 0;
+	rcu_read_unlock();
+
+	return ret;
 }

 static bool nouveau_fence_no_signaling(struct fence *f)
+2 -2
drivers/gpu/drm/nouveau/nouveau_fence.h
···

	bool sysmem;

-	struct nouveau_channel *channel;
+	struct nouveau_channel __rcu *channel;
	unsigned long timeout;
 };

···
	char name[32];

	struct nvif_notify notify;
-	int notify_ref;
+	int notify_ref, dead;
 };

 struct nouveau_fence_priv {
-1
drivers/gpu/drm/radeon/radeon_cs.c
···
			resv = reloc->robj->tbo.resv;
			r = radeon_sync_resv(p->rdev, &p->ib.sync, resv,
					     reloc->tv.shared);
-
			if (r)
				return r;
		}
+2
drivers/gpu/drm/radeon/radeon_kms.c
···

	/* Get associated drm_crtc: */
	drmcrtc = &rdev->mode_info.crtcs[crtc]->base;
+	if (!drmcrtc)
+		return -EINVAL;

	/* Helper routine in DRM core does all the work: */
	return drm_calc_vbltimestamp_from_scanoutpos(dev, crtc, max_error,
+7
drivers/gpu/drm/radeon/radeon_object.c
···
	if (!(rdev->flags & RADEON_IS_PCIE))
		bo->flags &= ~(RADEON_GEM_GTT_WC | RADEON_GEM_GTT_UC);

+#ifdef CONFIG_X86_32
+	/* XXX: Write-combined CPU mappings of GTT seem broken on 32-bit
+	 * See https://bugs.freedesktop.org/show_bug.cgi?id=84627
+	 */
+	bo->flags &= ~RADEON_GEM_GTT_WC;
+#endif
+
	radeon_ttm_placement_from_domain(bo, domain);
	/* Kernel allocation are uninterruptible */
	down_read(&rdev->pm.mclk_lock);
+11
drivers/i2c/busses/i2c-cadence.c
···
 #define CDNS_I2C_DIVA_MAX	4
 #define CDNS_I2C_DIVB_MAX	64

+#define CDNS_I2C_TIMEOUT_MAX	0xFF
+
 #define cdns_i2c_readreg(offset)	readl_relaxed(id->membase + offset)
 #define cdns_i2c_writereg(val, offset)	writel_relaxed(val, id->membase + offset)
···
		dev_err(&pdev->dev, "reg adap failed: %d\n", ret);
		goto err_clk_dis;
	}
+
+	/*
+	 * Cadence I2C controller has a bug wherein it generates
+	 * invalid read transaction after HW timeout in master receiver mode.
+	 * HW timeout is not used by this driver and the interrupt is disabled.
+	 * But the feature itself cannot be disabled. Hence maximum value
+	 * is written to this register to reduce the chances of error.
+	 */
+	cdns_i2c_writereg(CDNS_I2C_TIMEOUT_MAX, CDNS_I2C_TIME_OUT_OFFSET);

	dev_info(&pdev->dev, "%u kHz mmio %08lx irq %d\n",
		 id->i2c_clk / 1000, (unsigned long)r_mem->start, id->irq);
+3 -5
drivers/i2c/busses/i2c-davinci.c
···
	if (dev->cmd_err & DAVINCI_I2C_STR_NACK) {
		if (msg->flags & I2C_M_IGNORE_NAK)
			return msg->len;
-		if (stop) {
-			w = davinci_i2c_read_reg(dev, DAVINCI_I2C_MDR_REG);
-			w |= DAVINCI_I2C_MDR_STP;
-			davinci_i2c_write_reg(dev, DAVINCI_I2C_MDR_REG, w);
-		}
+		w = davinci_i2c_read_reg(dev, DAVINCI_I2C_MDR_REG);
+		w |= DAVINCI_I2C_MDR_STP;
+		davinci_i2c_write_reg(dev, DAVINCI_I2C_MDR_REG, w);
		return -EREMOTEIO;
	}
	return -EIO;
+1 -1
drivers/i2c/busses/i2c-designware-core.c
···
	}

	/* Configure Tx/Rx FIFO threshold levels */
-	dw_writel(dev, dev->tx_fifo_depth - 1, DW_IC_TX_TL);
+	dw_writel(dev, dev->tx_fifo_depth / 2, DW_IC_TX_TL);
	dw_writel(dev, 0, DW_IC_RX_TL);

	/* configure the i2c master */
+5 -5
drivers/i2c/busses/i2c-omap.c
···
		if (stat & OMAP_I2C_STAT_NACK) {
			err |= OMAP_I2C_STAT_NACK;
			omap_i2c_ack_stat(dev, OMAP_I2C_STAT_NACK);
-			break;
		}

		if (stat & OMAP_I2C_STAT_AL) {
			dev_err(dev->dev, "Arbitration lost\n");
			err |= OMAP_I2C_STAT_AL;
			omap_i2c_ack_stat(dev, OMAP_I2C_STAT_AL);
-			break;
		}

		/*
···
			if (dev->fifo_size)
				num_bytes = dev->buf_len;

-			omap_i2c_receive_data(dev, num_bytes, true);
-
-			if (dev->errata & I2C_OMAP_ERRATA_I207)
+			if (dev->errata & I2C_OMAP_ERRATA_I207) {
				i2c_omap_errata_i207(dev, stat);
+				num_bytes = (omap_i2c_read_reg(dev,
+					OMAP_I2C_BUFSTAT_REG) >> 8) & 0x3F;
+			}

+			omap_i2c_receive_data(dev, num_bytes, true);
			omap_i2c_ack_stat(dev, OMAP_I2C_STAT_RDR);
			continue;
		}
+1 -1
drivers/input/evdev.c
···

 err_free_client:
	evdev_detach_client(evdev, client);
-	kfree(client);
+	kvfree(client);
	return error;
 }

+1 -1
drivers/media/i2c/smiapp/smiapp-core.c
···
		ret = smiapp_set_compose(subdev, fh, sel);
		break;
	default:
-		BUG();
+		ret = -EINVAL;
	}

	mutex_unlock(&sensor->mutex);
+3 -3
drivers/media/pci/cx23885/cx23885-core.c
···
	for (line = 0; line < lines; line++) {
		while (offset && offset >= sg_dma_len(sg)) {
			offset -= sg_dma_len(sg);
-			sg++;
+			sg = sg_next(sg);
		}

		if (lpi && line > 0 && !(line % lpi))
···
			*(rp++) = cpu_to_le32(0); /* bits 63-32 */
			todo -= (sg_dma_len(sg)-offset);
			offset = 0;
-			sg++;
+			sg = sg_next(sg);
			while (todo > sg_dma_len(sg)) {
				*(rp++) = cpu_to_le32(RISC_WRITE|
						      sg_dma_len(sg));
				*(rp++) = cpu_to_le32(sg_dma_address(sg));
				*(rp++) = cpu_to_le32(0); /* bits 63-32 */
				todo -= sg_dma_len(sg);
-				sg++;
+				sg = sg_next(sg);
			}
			*(rp++) = cpu_to_le32(RISC_WRITE|RISC_EOL|todo);
			*(rp++) = cpu_to_le32(sg_dma_address(sg));
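Plain `sg++` only works while scatterlist entries are contiguous in one block; `sg_next()` also follows chain entries that link blocks together, which is why the hunk above replaces the pointer increments. A toy model of chained traversal, where `struct entry` is a hypothetical simplification of `struct scatterlist`:

```c
#include <stddef.h>

/* Entries live in fixed-size blocks; a slot with a non-NULL chain
 * pointer links to the next block instead of holding payload. */
struct entry {
	int len;		/* payload length, 0 for a chain slot */
	struct entry *chain;	/* non-NULL when this slot links blocks */
};

/* sg_next()-style step: advance, and follow a chain slot if present. */
static struct entry *next_entry(struct entry *e)
{
	e++;
	if (e->chain)
		e = e->chain;
	return e;
}

/* Sum the first n payload entries across chained blocks. A bare e++
 * here would walk off the end of the first block instead of following
 * the chain. */
int sum_lengths(struct entry *e, int n)
{
	int total = 0;

	while (n--) {
		total += e->len;
		if (n)
			e = next_entry(e);
	}
	return total;
}
```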
+2 -8
drivers/media/pci/solo6x10/solo6x10-core.c
···
	if (!status)
		return IRQ_NONE;

-	if (status & ~solo_dev->irq_mask) {
-		solo_reg_write(solo_dev, SOLO_IRQ_STAT,
-			       status & ~solo_dev->irq_mask);
-		status &= solo_dev->irq_mask;
-	}
+	/* Acknowledge all interrupts immediately */
+	solo_reg_write(solo_dev, SOLO_IRQ_STAT, status);

	if (status & SOLO_IRQ_PCI_ERR)
		solo_p2m_error_isr(solo_dev);
···

	if (status & SOLO_IRQ_G723)
		solo_g723_isr(solo_dev);
-
-	/* Clear all interrupts handled */
-	solo_reg_write(solo_dev, SOLO_IRQ_STAT, status);

	return IRQ_HANDLED;
 }
+1 -1
drivers/media/rc/ir-rc6-decoder.c
···
	case 32:
		if ((scancode & RC6_6A_LCC_MASK) == RC6_6A_MCE_CC) {
			protocol = RC_TYPE_RC6_MCE;
-			scancode &= ~RC6_6A_MCE_TOGGLE_MASK;
			toggle = !!(scancode & RC6_6A_MCE_TOGGLE_MASK);
+			scancode &= ~RC6_6A_MCE_TOGGLE_MASK;
		} else {
			protocol = RC_BIT_RC6_6A_32;
			toggle = 0;
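The decoder masked the toggle bit out of the scancode before sampling it, so `toggle` was always 0; the fix samples first, then masks. A compact sketch of the ordering bug, with a hypothetical mask value standing in for `RC6_6A_MCE_TOGGLE_MASK`:

```c
#include <stdint.h>

#define MCE_TOGGLE_MASK 0x8000u	/* hypothetical stand-in mask */

/* Buggy order: the toggle bit is cleared before it is sampled. */
int decode_buggy(uint32_t scancode, uint32_t *out)
{
	scancode &= ~MCE_TOGGLE_MASK;
	int toggle = !!(scancode & MCE_TOGGLE_MASK);	/* always 0 */

	*out = scancode;
	return toggle;
}

/* Fixed order: sample the toggle bit, then strip it from the scancode. */
int decode_fixed(uint32_t scancode, uint32_t *out)
{
	int toggle = !!(scancode & MCE_TOGGLE_MASK);

	scancode &= ~MCE_TOGGLE_MASK;
	*out = scancode;
	return toggle;
}
```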
+1 -1
drivers/media/usb/s2255/s2255drv.c
···
		break;
	case V4L2_PIX_FMT_JPEG:
	case V4L2_PIX_FMT_MJPEG:
-		buf->vb.v4l2_buf.length = jpgsize;
+		vb2_set_plane_payload(&buf->vb, 0, jpgsize);
		memcpy(vbuf, tmpbuf, jpgsize);
		break;
	case V4L2_PIX_FMT_YUV422P:
+6 -1
drivers/net/bonding/bond_netlink.c
···

		bond_option_arp_ip_targets_clear(bond);
		nla_for_each_nested(attr, data[IFLA_BOND_ARP_IP_TARGET], rem) {
-			__be32 target = nla_get_be32(attr);
+			__be32 target;
+
+			if (nla_len(attr) < sizeof(target))
+				return -EINVAL;
+
+			target = nla_get_be32(attr);

			bond_opt_initval(&newval, (__force u64)target);
			err = __bond_opt_set(bond, BOND_OPT_ARP_TARGETS,
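`nla_get_be32()` reads 4 bytes unconditionally, so a malformed attribute shorter than that would be read past its payload; the fix validates `nla_len()` before reading. A userspace sketch of the same check, where `struct attr` is a hypothetical stand-in for a netlink attribute:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical model of a netlink attribute: payload length + data. */
struct attr {
	int len;
	const unsigned char *data;
};

/* Validate the attribute length before the fixed-size read, mirroring
 * the nla_len() check the fix adds; returns -1 (-EINVAL in the kernel)
 * for a short attribute instead of reading past its payload. */
int get_be32_checked(const struct attr *attr, uint32_t *target)
{
	if (attr->len < (int)sizeof(*target))
		return -1;
	memcpy(target, attr->data, sizeof(*target));
	return 0;
}
```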
+6 -2
drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
···
		     SUPPORTED_10000baseKR_Full | SUPPORTED_1000baseKX_Full |
		     SUPPORTED_10000baseKX4_Full;
	else if (type == FW_PORT_TYPE_FIBER_XFI ||
-		 type == FW_PORT_TYPE_FIBER_XAUI || type == FW_PORT_TYPE_SFP)
+		 type == FW_PORT_TYPE_FIBER_XAUI || type == FW_PORT_TYPE_SFP) {
		v |= SUPPORTED_FIBRE;
-	else if (type == FW_PORT_TYPE_BP40_BA)
+		if (caps & FW_PORT_CAP_SPEED_1G)
+			v |= SUPPORTED_1000baseT_Full;
+		if (caps & FW_PORT_CAP_SPEED_10G)
+			v |= SUPPORTED_10000baseT_Full;
+	} else if (type == FW_PORT_TYPE_BP40_BA)
		v |= SUPPORTED_40000baseSR4_Full;

	if (caps & FW_PORT_CAP_ANEG)
+48 -48
drivers/net/ethernet/renesas/sh_eth.c
···
	return ret;
 }

-#if defined(CONFIG_CPU_SH4) || defined(CONFIG_ARCH_SHMOBILE)
 static void sh_eth_set_receive_align(struct sk_buff *skb)
 {
-	int reserve;
+	uintptr_t reserve = (uintptr_t)skb->data & (SH_ETH_RX_ALIGN - 1);

-	reserve = SH4_SKB_RX_ALIGN - ((u32)skb->data & (SH4_SKB_RX_ALIGN - 1));
	if (reserve)
-		skb_reserve(skb, reserve);
+		skb_reserve(skb, SH_ETH_RX_ALIGN - reserve);
 }
-#else
-static void sh_eth_set_receive_align(struct sk_buff *skb)
-{
-	skb_reserve(skb, SH2_SH3_SKB_RX_ALIGN);
-}
-#endif


 /* CPU <-> EDMAC endian convert */
···
	struct sh_eth_txdesc *txdesc = NULL;
	int rx_ringsize = sizeof(*rxdesc) * mdp->num_rx_ring;
	int tx_ringsize = sizeof(*txdesc) * mdp->num_tx_ring;
+	int skbuff_size = mdp->rx_buf_sz + SH_ETH_RX_ALIGN - 1;

	mdp->cur_rx = 0;
	mdp->cur_tx = 0;
···
	for (i = 0; i < mdp->num_rx_ring; i++) {
		/* skb */
		mdp->rx_skbuff[i] = NULL;
-		skb = netdev_alloc_skb(ndev, mdp->rx_buf_sz);
+		skb = netdev_alloc_skb(ndev, skbuff_size);
		mdp->rx_skbuff[i] = skb;
		if (skb == NULL)
			break;
-		dma_map_single(&ndev->dev, skb->data, mdp->rx_buf_sz,
-			       DMA_FROM_DEVICE);
		sh_eth_set_receive_align(skb);

		/* RX descriptor */
		rxdesc = &mdp->rx_ring[i];
+		/* The size of the buffer is a multiple of 16 bytes. */
+		rxdesc->buffer_length = ALIGN(mdp->rx_buf_sz, 16);
+		dma_map_single(&ndev->dev, skb->data, rxdesc->buffer_length,
+			       DMA_FROM_DEVICE);
		rxdesc->addr = virt_to_phys(PTR_ALIGN(skb->data, 4));
		rxdesc->status = cpu_to_edmac(mdp, RD_RACT | RD_RFP);

-		/* The size of the buffer is 16 byte boundary. */
-		rxdesc->buffer_length = ALIGN(mdp->rx_buf_sz, 16);
		/* Rx descriptor address set */
		if (i == 0) {
			sh_eth_write(ndev, mdp->rx_desc_dma, RDLAR);
···
	struct sk_buff *skb;
	u16 pkt_len = 0;
	u32 desc_status;
+	int skbuff_size = mdp->rx_buf_sz + SH_ETH_RX_ALIGN - 1;

	rxdesc = &mdp->rx_ring[entry];
	while (!(rxdesc->status & cpu_to_edmac(mdp, RD_RACT))) {
···
			if (mdp->cd->rpadir)
				skb_reserve(skb, NET_IP_ALIGN);
			dma_sync_single_for_cpu(&ndev->dev, rxdesc->addr,
-						mdp->rx_buf_sz,
+						ALIGN(mdp->rx_buf_sz, 16),
						DMA_FROM_DEVICE);
			skb_put(skb, pkt_len);
			skb->protocol = eth_type_trans(skb, ndev);
···
		rxdesc->buffer_length = ALIGN(mdp->rx_buf_sz, 16);

		if (mdp->rx_skbuff[entry] == NULL) {
-			skb = netdev_alloc_skb(ndev, mdp->rx_buf_sz);
+			skb = netdev_alloc_skb(ndev, skbuff_size);
			mdp->rx_skbuff[entry] = skb;
			if (skb == NULL)
				break;	/* Better luck next round. */
-			dma_map_single(&ndev->dev, skb->data, mdp->rx_buf_sz,
-				       DMA_FROM_DEVICE);
			sh_eth_set_receive_align(skb);
+			dma_map_single(&ndev->dev, skb->data,
+				       rxdesc->buffer_length, DMA_FROM_DEVICE);

			skb_checksum_none_assert(skb);
			rxdesc->addr = virt_to_phys(PTR_ALIGN(skb->data, 4));
···
	if (ret)
		goto out_free_irq;

+	mdp->is_opened = 1;
+
	return ret;

 out_free_irq:
···
	return NETDEV_TX_OK;
 }

+static struct net_device_stats *sh_eth_get_stats(struct net_device *ndev)
+{
+	struct sh_eth_private *mdp = netdev_priv(ndev);
+
+	if (sh_eth_is_rz_fast_ether(mdp))
+		return &ndev->stats;
+
+	if (!mdp->is_opened)
+		return &ndev->stats;
+
+	ndev->stats.tx_dropped += sh_eth_read(ndev, TROCR);
+	sh_eth_write(ndev, 0, TROCR);	/* (write clear) */
+	ndev->stats.collisions += sh_eth_read(ndev, CDCR);
+	sh_eth_write(ndev, 0, CDCR);	/* (write clear) */
+	ndev->stats.tx_carrier_errors += sh_eth_read(ndev, LCCR);
+	sh_eth_write(ndev, 0, LCCR);	/* (write clear) */
+
+	if (sh_eth_is_gether(mdp)) {
+		ndev->stats.tx_carrier_errors += sh_eth_read(ndev, CERCR);
+		sh_eth_write(ndev, 0, CERCR);	/* (write clear) */
+		ndev->stats.tx_carrier_errors += sh_eth_read(ndev, CEECR);
+		sh_eth_write(ndev, 0, CEECR);	/* (write clear) */
+	} else {
+		ndev->stats.tx_carrier_errors += sh_eth_read(ndev, CNDCR);
+		sh_eth_write(ndev, 0, CNDCR);	/* (write clear) */
+	}
+
+	return &ndev->stats;
+}
+
 /* device close function */
 static int sh_eth_close(struct net_device *ndev)
 {
···
	sh_eth_write(ndev, 0, EDTRR);
	sh_eth_write(ndev, 0, EDRRR);

+	sh_eth_get_stats(ndev);
	/* PHY Disconnect */
	if (mdp->phydev) {
		phy_stop(mdp->phydev);
···

	pm_runtime_put_sync(&mdp->pdev->dev);

+	mdp->is_opened = 0;
+
	return 0;
-}
-
-static struct net_device_stats *sh_eth_get_stats(struct net_device *ndev)
-{
-	struct sh_eth_private *mdp = netdev_priv(ndev);
-
-	if (sh_eth_is_rz_fast_ether(mdp))
-		return &ndev->stats;
-
-	pm_runtime_get_sync(&mdp->pdev->dev);
-
-	ndev->stats.tx_dropped += sh_eth_read(ndev, TROCR);
-	sh_eth_write(ndev, 0, TROCR);	/* (write clear) */
-	ndev->stats.collisions += sh_eth_read(ndev, CDCR);
-	sh_eth_write(ndev, 0, CDCR);	/* (write clear) */
-	ndev->stats.tx_carrier_errors += sh_eth_read(ndev, LCCR);
-	sh_eth_write(ndev, 0, LCCR);	/* (write clear) */
-	if (sh_eth_is_gether(mdp)) {
-		ndev->stats.tx_carrier_errors += sh_eth_read(ndev, CERCR);
-		sh_eth_write(ndev, 0, CERCR);	/* (write clear) */
-		ndev->stats.tx_carrier_errors += sh_eth_read(ndev, CEECR);
-		sh_eth_write(ndev, 0, CEECR);	/* (write clear) */
-	} else {
-		ndev->stats.tx_carrier_errors += sh_eth_read(ndev, CNDCR);
-		sh_eth_write(ndev, 0, CNDCR);	/* (write clear) */
-	}
-	pm_runtime_put_sync(&mdp->pdev->dev);
-
-	return &ndev->stats;
 }

 /* ioctl to device function */
+3 -2
drivers/net/ethernet/renesas/sh_eth.h
··· 162 162
163 163 /* Driver's parameters */
164 164 #if defined(CONFIG_CPU_SH4) || defined(CONFIG_ARCH_SHMOBILE)
165 - #define SH4_SKB_RX_ALIGN 32
165 + #define SH_ETH_RX_ALIGN 32
166 166 #else
167 - #define SH2_SH3_SKB_RX_ALIGN 2
167 + #define SH_ETH_RX_ALIGN 2
168 168 #endif
169 169
170 170 /* Register's bits
··· 522 522
523 523 unsigned no_ether_link:1;
524 524 unsigned ether_link_active_low:1;
525 + unsigned is_opened:1;
525 526 };
526 527
527 528 static inline void sh_eth_soft_swap(char *src, int len)
+9 -9
drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
··· 265 265
266 266 plat_dat = dev_get_platdata(&pdev->dev);
267 267
268 + if (!plat_dat)
269 + plat_dat = devm_kzalloc(&pdev->dev,
270 + sizeof(struct plat_stmmacenet_data),
271 + GFP_KERNEL);
272 + if (!plat_dat) {
273 + pr_err("%s: ERROR: no memory", __func__);
274 + return -ENOMEM;
275 + }
276 +
268 277 /* Set default value for multicast hash bins */
269 278 plat_dat->multicast_filter_bins = HASH_TABLE_SIZE;
270 279
··· 281 272 plat_dat->unicast_filter_entries = 1;
282 273
283 274 if (pdev->dev.of_node) {
284 - if (!plat_dat)
285 - plat_dat = devm_kzalloc(&pdev->dev,
286 - sizeof(struct plat_stmmacenet_data),
287 - GFP_KERNEL);
288 - if (!plat_dat) {
289 - pr_err("%s: ERROR: no memory", __func__);
290 - return -ENOMEM;
291 - }
292 -
293 275 ret = stmmac_probe_config_dt(pdev, plat_dat, &mac);
294 276 if (ret) {
295 277 pr_err("%s: main dt probe failed", __func__);
-5
drivers/net/xen-netfront.c
··· 496 496 len = skb_frag_size(frag);
497 497 offset = frag->page_offset;
498 498
499 - /* Data must not cross a page boundary. */
500 - BUG_ON(len + offset > PAGE_SIZE<<compound_order(page));
501 -
502 499 /* Skip unused frames from start of page */
503 500 page += offset >> PAGE_SHIFT;
504 501 offset &= ~PAGE_MASK;
505 502
506 503 while (len > 0) {
507 504 unsigned long bytes;
508 -
509 - BUG_ON(offset >= PAGE_SIZE);
510 505
511 506 bytes = PAGE_SIZE - offset;
512 507 if (bytes > len)
-2
drivers/of/fdt.c
··· 964 964 int __init __weak early_init_dt_reserve_memory_arch(phys_addr_t base,
965 965 phys_addr_t size, bool nomap)
966 966 {
967 - if (memblock_is_region_reserved(base, size))
968 - return -EBUSY;
969 967 if (nomap)
970 968 return memblock_remove(base, size);
971 969 return memblock_reserve(base, size);
+20 -8
drivers/pci/host/pci-tegra.c
··· 276 276
277 277 struct resource all;
278 278 struct resource io;
279 + struct resource pio;
279 280 struct resource mem;
280 281 struct resource prefetch;
281 282 struct resource busn;
··· 659 658 {
660 659 struct tegra_pcie *pcie = sys_to_pcie(sys);
661 660 int err;
662 - phys_addr_t io_start;
663 661
664 662 err = devm_request_resource(pcie->dev, &pcie->all, &pcie->mem);
665 663 if (err < 0)
··· 668 668 if (err)
669 669 return err;
670 670
671 - io_start = pci_pio_to_address(pcie->io.start);
672 -
673 671 pci_add_resource_offset(&sys->resources, &pcie->mem, sys->mem_offset);
674 672 pci_add_resource_offset(&sys->resources, &pcie->prefetch,
675 673 sys->mem_offset);
676 674 pci_add_resource(&sys->resources, &pcie->busn);
677 675
678 - pci_ioremap_io(nr * SZ_64K, io_start);
676 + pci_ioremap_io(pcie->pio.start, pcie->io.start);
679 677
680 678 return 1;
681 679 }
··· 784 786 static void tegra_pcie_setup_translations(struct tegra_pcie *pcie)
785 787 {
786 788 u32 fpci_bar, size, axi_address;
787 - phys_addr_t io_start = pci_pio_to_address(pcie->io.start);
788 789
789 790 /* Bar 0: type 1 extended configuration space */
790 791 fpci_bar = 0xfe100000;
··· 796 799 /* Bar 1: downstream IO bar */
797 800 fpci_bar = 0xfdfc0000;
798 801 size = resource_size(&pcie->io);
799 - axi_address = io_start;
802 + axi_address = pcie->io.start;
800 803 afi_writel(pcie, axi_address, AFI_AXI_BAR1_START);
801 804 afi_writel(pcie, size >> 12, AFI_AXI_BAR1_SZ);
802 805 afi_writel(pcie, fpci_bar, AFI_FPCI_BAR1);
··· 1687 1690
1688 1691 switch (res.flags & IORESOURCE_TYPE_BITS) {
1689 1692 case IORESOURCE_IO:
1690 - memcpy(&pcie->io, &res, sizeof(res));
1691 - pcie->io.name = np->full_name;
1693 + memcpy(&pcie->pio, &res, sizeof(res));
1694 + pcie->pio.name = np->full_name;
1695 +
1696 + /*
1697 + * The Tegra PCIe host bridge uses this to program the
1698 + * mapping of the I/O space to the physical address,
1699 + * so we override the .start and .end fields here that
1700 + * of_pci_range_to_resource() converted to I/O space.
1701 + * We also set the IORESOURCE_MEM type to clarify that
1702 + * the resource is in the physical memory space.
1703 + */
1704 + pcie->io.start = range.cpu_addr;
1705 + pcie->io.end = range.cpu_addr + range.size - 1;
1706 + pcie->io.flags = IORESOURCE_MEM;
1707 + pcie->io.name = "I/O";
1708 +
1709 + memcpy(&res, &pcie->io, sizeof(res));
1692 1710 break;
1693 1711
1694 1712 case IORESOURCE_MEM:
+1 -1
drivers/watchdog/s3c2410_wdt.c
··· 161 161 static const struct s3c2410_wdt_variant drv_data_exynos7 = {
162 162 .disable_reg = EXYNOS5_WDT_DISABLE_REG_OFFSET,
163 163 .mask_reset_reg = EXYNOS5_WDT_MASK_RESET_REG_OFFSET,
164 - .mask_bit = 0,
164 + .mask_bit = 23,
165 165 .rst_stat_reg = EXYNOS5_RST_STAT_REG_OFFSET,
166 166 .rst_stat_bit = 23, /* A57 WDTRESET */
167 167 .quirks = QUIRK_HAS_PMU_CONFIG | QUIRK_HAS_RST_STAT,
+11 -9
fs/fat/namei_vfat.c
··· 736 736 }
737 737
738 738 alias = d_find_alias(inode);
739 - if (alias && !vfat_d_anon_disconn(alias)) {
739 + /*
740 + * Checking "alias->d_parent == dentry->d_parent" to make sure
741 + * FS is not corrupted (especially double linked dir).
742 + */
743 + if (alias && alias->d_parent == dentry->d_parent &&
744 + !vfat_d_anon_disconn(alias)) {
740 745 /*
741 746 * This inode has non anonymous-DCACHE_DISCONNECTED
742 747 * dentry. This means, the user did ->lookup() by an
··· 760 755
761 756 out:
762 757 mutex_unlock(&MSDOS_SB(sb)->s_lock);
763 - dentry->d_time = dentry->d_parent->d_inode->i_version;
764 - dentry = d_splice_alias(inode, dentry);
765 - if (dentry)
766 - dentry->d_time = dentry->d_parent->d_inode->i_version;
767 - return dentry;
768 -
758 + if (!inode)
759 + dentry->d_time = dir->i_version;
760 + return d_splice_alias(inode, dentry);
769 761 error:
770 762 mutex_unlock(&MSDOS_SB(sb)->s_lock);
771 763 return ERR_PTR(err);
··· 795 793 inode->i_mtime = inode->i_atime = inode->i_ctime = ts;
796 794 /* timestamp is already written, so mark_inode_dirty() is unneeded. */
797 795
798 - dentry->d_time = dentry->d_parent->d_inode->i_version;
799 796 d_instantiate(dentry, inode);
800 797 out:
801 798 mutex_unlock(&MSDOS_SB(sb)->s_lock);
··· 825 824 clear_nlink(inode);
826 825 inode->i_mtime = inode->i_atime = CURRENT_TIME_SEC;
827 826 fat_detach(inode);
828 + dentry->d_time = dir->i_version;
828 829 out:
829 830 mutex_unlock(&MSDOS_SB(sb)->s_lock);
830 830
··· 851 849 clear_nlink(inode);
852 850 inode->i_mtime = inode->i_atime = CURRENT_TIME_SEC;
853 851 fat_detach(inode);
852 + dentry->d_time = dir->i_version;
854 853 out:
855 854 mutex_unlock(&MSDOS_SB(sb)->s_lock);
856 855
··· 892 889 inode->i_mtime = inode->i_atime = inode->i_ctime = ts;
893 890 /* timestamp is already written, so mark_inode_dirty() is unneeded. */
894 891
895 - dentry->d_time = dentry->d_parent->d_inode->i_version;
896 892 d_instantiate(dentry, inode);
897 893
898 894 mutex_unlock(&MSDOS_SB(sb)->s_lock);
+2 -3
fs/jbd2/journal.c
··· 1853 1853 journal->j_chksum_driver = NULL;
1854 1854 return 0;
1855 1855 }
1856 - }
1857 1856
1858 - /* Precompute checksum seed for all metadata */
1859 - if (jbd2_journal_has_csum_v2or3(journal))
1857 + /* Precompute checksum seed for all metadata */
1860 1858 journal->j_csum_seed = jbd2_chksum(journal, ~0,
1861 1859 sb->s_uuid,
1862 1860 sizeof(sb->s_uuid));
1861 + }
1863 1862 }
1864 1863
1865 1864 /* If enabling v1 checksums, downgrade superblock */
+1 -1
include/uapi/linux/Kbuild
··· 427 427 header-y += virtio_pci.h
428 428 header-y += virtio_ring.h
429 429 header-y += virtio_rng.h
430 - header=y += vm_sockets.h
430 + header-y += vm_sockets.h
431 431 header-y += vt.h
432 432 header-y += wait.h
433 433 header-y += wanrouter.h
+8 -7
ipc/sem.c
··· 507 507 return retval;
508 508 }
509 509
510 - id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
511 - if (id < 0) {
512 - ipc_rcu_putref(sma, sem_rcu_free);
513 - return id;
514 - }
515 - ns->used_sems += nsems;
516 -
517 510 sma->sem_base = (struct sem *) &sma[1];
518 511
519 512 for (i = 0; i < nsems; i++) {
··· 521 528 INIT_LIST_HEAD(&sma->list_id);
522 529 sma->sem_nsems = nsems;
523 530 sma->sem_ctime = get_seconds();
531 +
532 + id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni);
533 + if (id < 0) {
534 + ipc_rcu_putref(sma, sem_rcu_free);
535 + return id;
536 + }
537 + ns->used_sems += nsems;
538 +
524 539 sem_unlock(sma, -1);
525 540 rcu_read_unlock();
526 541
+6 -2
kernel/sched/core.c
··· 2874 2874 * or we have been woken up remotely but the IPI has not yet arrived,
2875 2875 * we haven't yet exited the RCU idle mode. Do it here manually until
2876 2876 * we find a better solution.
2877 + *
2878 + * NB: There are buggy callers of this function. Ideally we
2879 + * should warn if prev_state != IN_USER, but that will trigger
2880 + * too frequently to make sense yet.
2877 2881 */
2878 - user_exit();
2882 + enum ctx_state prev_state = exception_enter();
2879 2883 schedule();
2880 - user_enter();
2884 + exception_exit(prev_state);
2881 2885 }
2882 2886 #endif
2883 2887
+1
lib/genalloc.c
··· 598 598
599 599 return pool;
600 600 }
601 + EXPORT_SYMBOL(devm_gen_pool_create);
601 602
602 603 /**
603 604 * dev_get_gen_pool - Obtain the gen_pool (if any) for a device
+1 -1
lib/show_mem.c
··· 28 28 continue;
29 29
30 30 total += zone->present_pages;
31 - reserved = zone->present_pages - zone->managed_pages;
31 + reserved += zone->present_pages - zone->managed_pages;
32 32
33 33 if (is_highmem_idx(zoneid))
34 34 highmem += zone->present_pages;
+3 -1
mm/frontswap.c
··· 244 244 the (older) page from frontswap
245 245 */
246 246 inc_frontswap_failed_stores();
247 - if (dup)
247 + if (dup) {
248 248 __frontswap_clear(sis, offset);
249 + frontswap_ops->invalidate_page(type, offset);
250 + }
249 251 }
250 252 if (frontswap_writethrough_enabled)
251 253 /* report failure so swap also writes to swap device */
+12 -12
mm/memory.c
··· 816 816 if (!pte_file(pte)) {
817 817 swp_entry_t entry = pte_to_swp_entry(pte);
818 818
819 - if (swap_duplicate(entry) < 0)
820 - return entry.val;
819 + if (likely(!non_swap_entry(entry))) {
820 + if (swap_duplicate(entry) < 0)
821 + return entry.val;
821 822
822 - /* make sure dst_mm is on swapoff's mmlist. */
823 - if (unlikely(list_empty(&dst_mm->mmlist))) {
824 - spin_lock(&mmlist_lock);
825 - if (list_empty(&dst_mm->mmlist))
826 - list_add(&dst_mm->mmlist,
827 - &src_mm->mmlist);
828 - spin_unlock(&mmlist_lock);
829 - }
830 - if (likely(!non_swap_entry(entry)))
823 + /* make sure dst_mm is on swapoff's mmlist. */
824 + if (unlikely(list_empty(&dst_mm->mmlist))) {
825 + spin_lock(&mmlist_lock);
826 + if (list_empty(&dst_mm->mmlist))
827 + list_add(&dst_mm->mmlist,
828 + &src_mm->mmlist);
829 + spin_unlock(&mmlist_lock);
830 + }
831 831 rss[MM_SWAPENTS]++;
832 - else if (is_migration_entry(entry)) {
832 + } else if (is_migration_entry(entry)) {
833 833 page = migration_entry_to_page(entry);
834 834
835 835 if (PageAnon(page))
+7 -3
mm/mmap.c
··· 776 776 * shrinking vma had, to cover any anon pages imported.
777 777 */
778 778 if (exporter && exporter->anon_vma && !importer->anon_vma) {
779 - if (anon_vma_clone(importer, exporter))
780 - return -ENOMEM;
779 + int error;
780 +
781 + error = anon_vma_clone(importer, exporter);
782 + if (error)
783 + return error;
781 784 importer->anon_vma = exporter->anon_vma;
782 785 }
783 786 }
··· 2472 2469 if (err)
2473 2470 goto out_free_vma;
2474 2471
2475 - if (anon_vma_clone(new, vma))
2472 + err = anon_vma_clone(new, vma);
2473 + if (err)
2476 2474 goto out_free_mpol;
2477 2475
2478 2476 if (new->vm_file)
+4 -2
mm/rmap.c
··· 274 274 {
275 275 struct anon_vma_chain *avc;
276 276 struct anon_vma *anon_vma;
277 + int error;
277 278
278 279 /* Don't bother if the parent process has no anon_vma here. */
279 280 if (!pvma->anon_vma)
··· 284 283 * First, attach the new VMA to the parent VMA's anon_vmas,
285 284 * so rmap can find non-COWed pages in child processes.
286 285 */
287 - if (anon_vma_clone(vma, pvma))
288 - return -ENOMEM;
286 + error = anon_vma_clone(vma, pvma);
287 + if (error)
288 + return error;
289 289
290 290 /* Then add our own anon_vma. */
291 291 anon_vma = anon_vma_alloc();
+1 -1
mm/slab.c
··· 3076 3076 void *obj;
3077 3077 int x;
3078 3078
3079 - VM_BUG_ON(nodeid > num_online_nodes());
3079 + VM_BUG_ON(nodeid < 0 || nodeid >= MAX_NUMNODES);
3080 3080 n = get_node(cachep, nodeid);
3081 3081 BUG_ON(!n);
3082 3082
+6 -4
mm/vmpressure.c
··· 165 165 unsigned long scanned;
166 166 unsigned long reclaimed;
167 167
168 + spin_lock(&vmpr->sr_lock);
168 169 /*
169 170 * Several contexts might be calling vmpressure(), so it is
170 171 * possible that the work was rescheduled again before the old
··· 174 173 * here. No need for any locks here since we don't care if
175 174 * vmpr->reclaimed is in sync.
176 175 */
177 - if (!vmpr->scanned)
178 - return;
179 -
180 - spin_lock(&vmpr->sr_lock);
181 176 scanned = vmpr->scanned;
177 + if (!scanned) {
178 + spin_unlock(&vmpr->sr_lock);
179 + return;
180 + }
181 +
182 182 reclaimed = vmpr->reclaimed;
183 183 vmpr->scanned = 0;
184 184 vmpr->reclaimed = 0;
+1
net/core/rtnetlink.c
··· 1498 1498 goto errout;
1499 1499 }
1500 1500 if (!netlink_ns_capable(skb, net->user_ns, CAP_NET_ADMIN)) {
1501 + put_net(net);
1501 1502 err = -EPERM;
1502 1503 goto errout;
1503 1504 }
+1
security/keys/internal.h
··· 117 117 #define KEYRING_SEARCH_NO_UPDATE_TIME 0x0004 /* Don't update times */
118 118 #define KEYRING_SEARCH_NO_CHECK_PERM 0x0008 /* Don't check permissions */
119 119 #define KEYRING_SEARCH_DETECT_TOO_DEEP 0x0010 /* Give an error on excessive depth */
120 + #define KEYRING_SEARCH_SKIP_EXPIRED 0x0020 /* Ignore expired keys (intention to replace) */
120 121
121 122 int (*iterator)(const void *object, void *iterator_data);
122 123
+26 -30
security/keys/keyctl.c
··· 26 26 #include <asm/uaccess.h>
27 27 #include "internal.h"
28 28
29 + #define KEY_MAX_DESC_SIZE 4096
30 +
29 31 static int key_get_type_from_user(char *type,
30 32 const char __user *_type,
31 33 unsigned len)
··· 80 78
81 79 description = NULL;
82 80 if (_description) {
83 - description = strndup_user(_description, PAGE_SIZE);
81 + description = strndup_user(_description, KEY_MAX_DESC_SIZE);
84 82 if (IS_ERR(description)) {
85 83 ret = PTR_ERR(description);
86 84 goto error;
··· 179 177 goto error;
180 178
181 179 /* pull the description into kernel space */
182 - description = strndup_user(_description, PAGE_SIZE);
180 + description = strndup_user(_description, KEY_MAX_DESC_SIZE);
183 181 if (IS_ERR(description)) {
184 182 ret = PTR_ERR(description);
185 183 goto error;
··· 289 287 /* fetch the name from userspace */
290 288 name = NULL;
291 289 if (_name) {
292 - name = strndup_user(_name, PAGE_SIZE);
290 + name = strndup_user(_name, KEY_MAX_DESC_SIZE);
293 291 if (IS_ERR(name)) {
294 292 ret = PTR_ERR(name);
295 293 goto error;
··· 564 562 {
565 563 struct key *key, *instkey;
566 564 key_ref_t key_ref;
567 - char *tmpbuf;
565 + char *infobuf;
568 566 long ret;
567 + int desclen, infolen;
569 568
570 569 key_ref = lookup_user_key(keyid, KEY_LOOKUP_PARTIAL, KEY_NEED_VIEW);
571 570 if (IS_ERR(key_ref)) {
··· 589 586 }
590 587
591 588 okay:
592 - /* calculate how much description we're going to return */
593 - ret = -ENOMEM;
594 - tmpbuf = kmalloc(PAGE_SIZE, GFP_KERNEL);
595 - if (!tmpbuf)
596 - goto error2;
597 -
598 589 key = key_ref_to_ptr(key_ref);
590 + desclen = strlen(key->description);
599 591
600 - ret = snprintf(tmpbuf, PAGE_SIZE - 1,
601 - "%s;%d;%d;%08x;%s",
602 - key->type->name,
603 - from_kuid_munged(current_user_ns(), key->uid),
604 - from_kgid_munged(current_user_ns(), key->gid),
605 - key->perm,
606 - key->description ?: "");
607 -
608 - /* include a NUL char at the end of the data */
609 - if (ret > PAGE_SIZE - 1)
610 - ret = PAGE_SIZE - 1;
611 - tmpbuf[ret] = 0;
612 - ret++;
592 + /* calculate how much information we're going to return */
593 + ret = -ENOMEM;
594 + infobuf = kasprintf(GFP_KERNEL,
595 + "%s;%d;%d;%08x;",
596 + key->type->name,
597 + from_kuid_munged(current_user_ns(), key->uid),
598 + from_kgid_munged(current_user_ns(), key->gid),
599 + key->perm);
600 + if (!infobuf)
601 + goto error2;
602 + infolen = strlen(infobuf);
603 + ret = infolen + desclen + 1;
613 604
614 605 /* consider returning the data */
615 - if (buffer && buflen > 0) {
616 - if (buflen > ret)
617 - buflen = ret;
618 -
619 - if (copy_to_user(buffer, tmpbuf, buflen) != 0)
606 + if (buffer && buflen >= ret) {
607 + if (copy_to_user(buffer, infobuf, infolen) != 0 ||
608 + copy_to_user(buffer + infolen, key->description,
609 + desclen + 1) != 0)
620 610 ret = -EFAULT;
621 611 }
622 612
623 - kfree(tmpbuf);
613 + kfree(infobuf);
624 614 error2:
625 615 key_ref_put(key_ref);
626 616 error:
··· 645 649 if (ret < 0)
646 650 goto error;
647 651
648 - description = strndup_user(_description, PAGE_SIZE);
652 + description = strndup_user(_description, KEY_MAX_DESC_SIZE);
649 653 if (IS_ERR(description)) {
650 654 ret = PTR_ERR(description);
651 655 goto error;
+6 -4
security/keys/keyring.c
··· 546 546 }
547 547
548 548 if (key->expiry && ctx->now.tv_sec >= key->expiry) {
549 - ctx->result = ERR_PTR(-EKEYEXPIRED);
549 + if (!(ctx->flags & KEYRING_SEARCH_SKIP_EXPIRED))
550 + ctx->result = ERR_PTR(-EKEYEXPIRED);
550 551 kleave(" = %d [expire]", ctx->skipped_ret);
551 552 goto skipped;
552 553 }
··· 629 628 ctx->index_key.type->name,
630 629 ctx->index_key.description);
631 630
631 + #define STATE_CHECKS (KEYRING_SEARCH_NO_STATE_CHECK | KEYRING_SEARCH_DO_STATE_CHECK)
632 + BUG_ON((ctx->flags & STATE_CHECKS) == 0 ||
633 + (ctx->flags & STATE_CHECKS) == STATE_CHECKS);
634 +
632 635 if (ctx->index_key.description)
633 636 ctx->index_key.desc_len = strlen(ctx->index_key.description);
634 637
··· 642 637 if (ctx->match_data.lookup_type == KEYRING_SEARCH_LOOKUP_ITERATE ||
643 638 keyring_compare_object(keyring, &ctx->index_key)) {
644 639 ctx->skipped_ret = 2;
645 - ctx->flags |= KEYRING_SEARCH_DO_STATE_CHECK;
646 640 switch (ctx->iterator(keyring_key_to_ptr(keyring), ctx)) {
647 641 case 1:
648 642 goto found;
··· 653 649 }
654 650
655 651 ctx->skipped_ret = 0;
656 - if (ctx->flags & KEYRING_SEARCH_NO_STATE_CHECK)
657 - ctx->flags &= ~KEYRING_SEARCH_DO_STATE_CHECK;
658 652
659 653 /* Start processing a new keyring */
660 654 descend_to_keyring:
+2
security/keys/request_key.c
··· 516 516 .match_data.cmp = key_default_cmp,
517 517 .match_data.raw_data = description,
518 518 .match_data.lookup_type = KEYRING_SEARCH_LOOKUP_DIRECT,
519 + .flags = (KEYRING_SEARCH_DO_STATE_CHECK |
520 + KEYRING_SEARCH_SKIP_EXPIRED),
519 521 };
520 522 struct key *key;
521 523 key_ref_t key_ref;
+1
security/keys/request_key_auth.c
··· 249 249 .match_data.cmp = key_default_cmp,
250 250 .match_data.raw_data = description,
251 251 .match_data.lookup_type = KEYRING_SEARCH_LOOKUP_DIRECT,
252 + .flags = KEYRING_SEARCH_DO_STATE_CHECK,
252 253 };
253 254 struct key *authkey;
254 255 key_ref_t authkey_ref;
+2
sound/pci/hda/patch_realtek.c
··· 4790 4790 SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS_HSJACK),
4791 4791 SND_PCI_QUIRK(0x1028, 0x064a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
4792 4792 SND_PCI_QUIRK(0x1028, 0x064b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
4793 + SND_PCI_QUIRK(0x1028, 0x06d9, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
4794 + SND_PCI_QUIRK(0x1028, 0x06da, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
4793 4795 SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
4794 4796 SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
4795 4797 SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),