@@ -370,11 +370,15 @@
 then the interface is considered to be idle, and the kernel may
 autosuspend the device.
 
-Drivers need not be concerned about balancing changes to the usage
-counter; the USB core will undo any remaining "get"s when a driver
-is unbound from its interface.  As a corollary, drivers must not call
-any of the ``usb_autopm_*`` functions after their ``disconnect``
-routine has returned.
+Drivers must be careful to balance their overall changes to the usage
+counter.  Unbalanced "get"s will remain in effect when a driver is
+unbound from its interface, preventing the device from going into
+runtime suspend should the interface be bound to a driver again.  On
+the other hand, drivers are allowed to achieve this balance by calling
+the ``usb_autopm_*`` functions even after their ``disconnect`` routine
+has returned -- say from within a work-queue routine -- provided they
+retain an active reference to the interface (via ``usb_get_intf`` and
+``usb_put_intf``).
 
 Drivers using the async routines are responsible for their own
 synchronization and mutual exclusion.
+2
Documentation/networking/ip-sysctl.txt
@@ -1342,6 +1342,7 @@
 	Default value is 0.
 
 xfrm4_gc_thresh - INTEGER
+	(Obsolete since linux-4.14)
 	The threshold at which we will start garbage collecting for IPv4
 	destination cache entries.  At twice this value the system will
 	refuse new allocations.
@@ -1950,6 +1951,7 @@
 	Default: 0
 
 xfrm6_gc_thresh - INTEGER
+	(Obsolete since linux-4.14)
 	The threshold at which we will start garbage collecting for IPv6
 	destination cache entries.  At twice this value the system will
 	refuse new allocations.
+1-1
Documentation/networking/netdev-FAQ.rst
@@ -132,7 +132,7 @@
 will reply and ask what should be done.
 
 Q: I made changes to only a few patches in a patch series should I resend only those changed?
----------------------------------------------------------------------------------------------
+---------------------------------------------------------------------------------------------
 A: No, please resend the entire patch series and make sure you do number your
 patches such that it is clear this is the latest and greatest set of patches
 that can be applied.
+8-8
Documentation/sysctl/vm.txt
@@ -866,14 +866,14 @@
 increase the success rate of future high-order allocations such as SLUB
 allocations, THP and hugetlbfs pages.
 
-To make it sensible with respect to the watermark_scale_factor parameter,
-the unit is in fractions of 10,000. The default value of 15,000 means
-that up to 150% of the high watermark will be reclaimed in the event of
-a pageblock being mixed due to fragmentation. The level of reclaim is
-determined by the number of fragmentation events that occurred in the
-recent past. If this value is smaller than a pageblock then a pageblocks
-worth of pages will be reclaimed (e.g. 2MB on 64-bit x86). A boost factor
-of 0 will disable the feature.
+To make it sensible with respect to the watermark_scale_factor
+parameter, the unit is in fractions of 10,000. The default value of
+15,000 on !DISCONTIGMEM configurations means that up to 150% of the high
+watermark will be reclaimed in the event of a pageblock being mixed due
+to fragmentation. The level of reclaim is determined by the number of
+fragmentation events that occurred in the recent past. If this value is
+smaller than a pageblock then a pageblocks worth of pages will be reclaimed
+(e.g. 2MB on 64-bit x86). A boost factor of 0 will disable the feature.
 
 =============================================================
+2-2
Makefile
@@ -2,7 +2,7 @@
 VERSION = 5
 PATCHLEVEL = 1
 SUBLEVEL = 0
-EXTRAVERSION = -rc6
+EXTRAVERSION = -rc7
 NAME = Shy Crocodile
 
 # *DOCUMENTATION*
@@ -679,6 +679,7 @@
 KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation)
 KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow)
 KBUILD_CFLAGS += $(call cc-disable-warning, int-in-bool-context)
+KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member)
 
 ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
 KBUILD_CFLAGS += -Os
@@ -721,7 +722,6 @@
 KBUILD_CPPFLAGS += $(call cc-option,-Qunused-arguments,)
 KBUILD_CFLAGS += $(call cc-disable-warning, format-invalid-specifier)
 KBUILD_CFLAGS += $(call cc-disable-warning, gnu)
-KBUILD_CFLAGS += $(call cc-disable-warning, address-of-packed-member)
 # Quiet clang warning: comparison of unsigned expression < 0 is always false
 KBUILD_CFLAGS += $(call cc-disable-warning, tautological-compare)
 # CLANG uses a _MergedGlobals as optimization, but this breaks modpost, as the
@@ -113,10 +113,24 @@
 	}
 
 	READ_BCR(ARC_REG_CLUSTER_BCR, cbcr);
-	if (cbcr.c)
+	if (cbcr.c) {
 		ioc_exists = 1;
-	else
+
+		/*
+		 * As for today we don't support both IOC and ZONE_HIGHMEM enabled
+		 * simultaneously. This happens because as of today IOC aperture covers
+		 * only ZONE_NORMAL (low mem) and any dma transactions outside this
+		 * region won't be HW coherent.
+		 * If we want to use both IOC and ZONE_HIGHMEM we can use
+		 * bounce_buffer to handle dma transactions to HIGHMEM.
+		 * Also it is possible to modify dma_direct cache ops or increase IOC
+		 * aperture size if we are planning to use HIGHMEM without PAE.
+		 */
+		if (IS_ENABLED(CONFIG_HIGHMEM) || is_pae40_enabled())
+			ioc_enable = 0;
+	} else {
 		ioc_enable = 0;
+	}
 
 	/* HS 2.0 didn't have AUX_VOL */
 	if (cpuinfo_arc700[cpu].core.family > 0x51) {
@@ -1171,19 +1185,6 @@
 
 	if (!ioc_enable)
 		return;
-
-	/*
-	 * As for today we don't support both IOC and ZONE_HIGHMEM enabled
-	 * simultaneously. This happens because as of today IOC aperture covers
-	 * only ZONE_NORMAL (low mem) and any dma transactions outside this
-	 * region won't be HW coherent.
-	 * If we want to use both IOC and ZONE_HIGHMEM we can use
-	 * bounce_buffer to handle dma transactions to HIGHMEM.
-	 * Also it is possible to modify dma_direct cache ops or increase IOC
-	 * aperture size if we are planning to use HIGHMEM without PAE.
-	 */
-	if (IS_ENABLED(CONFIG_HIGHMEM))
-		panic("IOC and HIGHMEM can't be used simultaneously");
 
 	/* Flush + invalidate + disable L1 dcache */
 	__dc_disable();
+1-1
arch/arm/Kconfig
@@ -73,7 +73,7 @@
 	select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
 	select HAVE_EXIT_THREAD
 	select HAVE_FTRACE_MCOUNT_RECORD if !XIP_KERNEL
-	select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL
+	select HAVE_FUNCTION_GRAPH_TRACER if !THUMB2_KERNEL && !CC_IS_CLANG
 	select HAVE_FUNCTION_TRACER if !XIP_KERNEL
 	select HAVE_GCC_PLUGINS
 	select HAVE_HW_BREAKPOINT if PERF_EVENTS && (CPU_V6 || CPU_V6K || CPU_V7)
+3-3
arch/arm/Kconfig.debug
@@ -47,8 +47,8 @@
 
 choice
 	prompt "Choose kernel unwinder"
-	default UNWINDER_ARM if AEABI && !FUNCTION_GRAPH_TRACER
-	default UNWINDER_FRAME_POINTER if !AEABI || FUNCTION_GRAPH_TRACER
+	default UNWINDER_ARM if AEABI
+	default UNWINDER_FRAME_POINTER if !AEABI
 	help
 	  This determines which method will be used for unwinding kernel stack
 	  traces for panics, oopses, bugs, warnings, perf, /proc/<pid>/stack,
@@ -65,7 +65,7 @@
 
 config UNWINDER_ARM
 	bool "ARM EABI stack unwinder"
-	depends on AEABI
+	depends on AEABI && !FUNCTION_GRAPH_TRACER
 	select ARM_UNWIND
 	help
 	  This option enables stack unwinding support in the kernel
+15-1
arch/arm/boot/compressed/head.S
@@ -1438,7 +1438,21 @@
 
 		@ Preserve return value of efi_entry() in r4
 		mov	r4, r0
-		bl	cache_clean_flush
+
+		@ our cache maintenance code relies on CP15 barrier instructions
+		@ but since we arrived here with the MMU and caches configured
+		@ by UEFI, we must check that the CP15BEN bit is set in SCTLR.
+		@ Note that this bit is RAO/WI on v6 and earlier, so the ISB in
+		@ the enable path will be executed on v7+ only.
+		mrc	p15, 0, r1, c1, c0, 0	@ read SCTLR
+		tst	r1, #(1 << 5)		@ CP15BEN bit set?
+		bne	0f
+		orr	r1, r1, #(1 << 5)	@ CP15 barrier instructions
+		mcr	p15, 0, r1, c1, c0, 0	@ write SCTLR
+ ARM(		.inst	0xf57ff06f		@ v7+ isb	)
+ THUMB(		isb					)
+
+0:		bl	cache_clean_flush
 		bl	cache_off
 
 		@ Set parameters for booting zImage according to boot protocol
@@ -103,10 +103,15 @@
 		 * to be revisited if support for multiple ftrace entry points
 		 * is added in the future, but for now, the pr_err() below
 		 * deals with a theoretical issue only.
+		 *
+		 * Note that PLTs are place relative, and plt_entries_equal()
+		 * checks whether they point to the same target. Here, we need
+		 * to check if the actual opcodes are in fact identical,
+		 * regardless of the offset in memory so use memcmp() instead.
 		 */
 		trampoline = get_plt_entry(addr, mod->arch.ftrace_trampoline);
-		if (!plt_entries_equal(mod->arch.ftrace_trampoline,
-				       &trampoline)) {
+		if (memcmp(mod->arch.ftrace_trampoline, &trampoline,
+			   sizeof(trampoline))) {
 			if (plt_entry_is_initialized(mod->arch.ftrace_trampoline)) {
 				pr_err("ftrace: far branches to multiple entry points unsupported inside a single module\n");
 				return -EINVAL;
+1-1
arch/arm64/mm/init.c
@@ -363,7 +363,7 @@
 	 * Otherwise, this is a no-op
 	 */
 	u64 base = phys_initrd_start & PAGE_MASK;
-	u64 size = PAGE_ALIGN(phys_initrd_size);
+	u64 size = PAGE_ALIGN(phys_initrd_start + phys_initrd_size) - base;
 
 	/*
 	 * We can only add back the initrd memory if we don't end up
+3-2
arch/mips/net/ebpf_jit.c
@@ -186,8 +186,9 @@
 	 * separate frame pointer, so BPF_REG_10 relative accesses are
 	 * adjusted to be $sp relative.
 	 */
-int ebpf_to_mips_reg(struct jit_ctx *ctx, const struct bpf_insn *insn,
-		     enum which_ebpf_reg w)
+static int ebpf_to_mips_reg(struct jit_ctx *ctx,
+			    const struct bpf_insn *insn,
+			    enum which_ebpf_reg w)
 {
 	int ebpf_reg = (w == src_reg || w == src_reg_no_fp) ?
 		insn->src_reg : insn->dst_reg;
+1
arch/powerpc/configs/skiroot_defconfig
@@ -266,6 +266,7 @@
 CONFIG_MSDOS_FS=m
 CONFIG_VFAT_FS=m
 CONFIG_PROC_KCORE=y
+CONFIG_HUGETLBFS=y
 # CONFIG_MISC_FILESYSTEMS is not set
 # CONFIG_NETWORK_FILESYSTEMS is not set
 CONFIG_NLS=y
+59-40
arch/powerpc/mm/mmu_context_iommu.c
@@ -95,28 +95,15 @@
 		unsigned long entries, unsigned long dev_hpa,
 		struct mm_iommu_table_group_mem_t **pmem)
 {
-	struct mm_iommu_table_group_mem_t *mem;
-	long i, ret, locked_entries = 0;
+	struct mm_iommu_table_group_mem_t *mem, *mem2;
+	long i, ret, locked_entries = 0, pinned = 0;
 	unsigned int pageshift;
-
-	mutex_lock(&mem_list_mutex);
-
-	list_for_each_entry_rcu(mem, &mm->context.iommu_group_mem_list,
-			next) {
-		/* Overlap? */
-		if ((mem->ua < (ua + (entries << PAGE_SHIFT))) &&
-				(ua < (mem->ua +
-				       (mem->entries << PAGE_SHIFT)))) {
-			ret = -EINVAL;
-			goto unlock_exit;
-		}
-
-	}
+	unsigned long entry, chunk;
 
 	if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA) {
 		ret = mm_iommu_adjust_locked_vm(mm, entries, true);
 		if (ret)
-			goto unlock_exit;
+			return ret;
 
 		locked_entries = entries;
 	}
@@ -135,17 +148,28 @@
 	}
 
 	down_read(&mm->mmap_sem);
-	ret = get_user_pages_longterm(ua, entries, FOLL_WRITE, mem->hpages, NULL);
-	up_read(&mm->mmap_sem);
-	if (ret != entries) {
-		/* free the reference taken */
-		for (i = 0; i < ret; i++)
-			put_page(mem->hpages[i]);
+	chunk = (1UL << (PAGE_SHIFT + MAX_ORDER - 1)) /
+			sizeof(struct vm_area_struct *);
+	chunk = min(chunk, entries);
+	for (entry = 0; entry < entries; entry += chunk) {
+		unsigned long n = min(entries - entry, chunk);
 
-		vfree(mem->hpas);
-		kfree(mem);
-		ret = -EFAULT;
-		goto unlock_exit;
+		ret = get_user_pages_longterm(ua + (entry << PAGE_SHIFT), n,
+				FOLL_WRITE, mem->hpages + entry, NULL);
+		if (ret == n) {
+			pinned += n;
+			continue;
+		}
+		if (ret > 0)
+			pinned += ret;
+		break;
+	}
+	up_read(&mm->mmap_sem);
+	if (pinned != entries) {
+		if (!ret)
+			ret = -EFAULT;
+		goto free_exit;
 	}
 
 	pageshift = PAGE_SHIFT;
@@ -180,20 +183,42 @@
 	}
 
 good_exit:
-	ret = 0;
 	atomic64_set(&mem->mapped, 1);
 	mem->used = 1;
 	mem->ua = ua;
 	mem->entries = entries;
-	*pmem = mem;
+
+	mutex_lock(&mem_list_mutex);
+
+	list_for_each_entry_rcu(mem2, &mm->context.iommu_group_mem_list, next) {
+		/* Overlap? */
+		if ((mem2->ua < (ua + (entries << PAGE_SHIFT))) &&
+				(ua < (mem2->ua +
+						(mem2->entries << PAGE_SHIFT)))) {
+			ret = -EINVAL;
+			mutex_unlock(&mem_list_mutex);
+			goto free_exit;
+		}
+	}
 
 	list_add_rcu(&mem->next, &mm->context.iommu_group_mem_list);
 
-unlock_exit:
-	if (locked_entries && ret)
-		mm_iommu_adjust_locked_vm(mm, locked_entries, false);
-
 	mutex_unlock(&mem_list_mutex);
+
+	*pmem = mem;
+
+	return 0;
+
+free_exit:
+	/* free the reference taken */
+	for (i = 0; i < pinned; i++)
+		put_page(mem->hpages[i]);
+
+	vfree(mem->hpas);
+	kfree(mem);
+
+unlock_exit:
+	mm_iommu_adjust_locked_vm(mm, locked_entries, false);
 
 	return ret;
 }
@@ -285,7 +266,7 @@
 long mm_iommu_put(struct mm_struct *mm, struct mm_iommu_table_group_mem_t *mem)
 {
 	long ret = 0;
-	unsigned long entries, dev_hpa;
+	unsigned long unlock_entries = 0;
 
 	mutex_lock(&mem_list_mutex);
 
@@ -306,16 +287,16 @@
 		goto unlock_exit;
 	}
 
-	/* @mapped became 0 so now mappings are disabled, release the region */
-	entries = mem->entries;
-	dev_hpa = mem->dev_hpa;
-	mm_iommu_release(mem);
+	if (mem->dev_hpa == MM_IOMMU_TABLE_INVALID_HPA)
+		unlock_entries = mem->entries;
 
-	if (dev_hpa == MM_IOMMU_TABLE_INVALID_HPA)
-		mm_iommu_adjust_locked_vm(mm, entries, false);
+	/* @mapped became 0 so now mappings are disabled, release the region */
+	mm_iommu_release(mem);
 
 unlock_exit:
 	mutex_unlock(&mem_list_mutex);
+
+	mm_iommu_adjust_locked_vm(mm, unlock_entries, false);
 
 	return ret;
 }
+1-1
arch/powerpc/platforms/Kconfig.cputype
@@ -324,7 +324,7 @@
 
 config PPC_RADIX_MMU
 	bool "Radix MMU Support"
-	depends on PPC_BOOK3S_64
+	depends on PPC_BOOK3S_64 && HUGETLB_PAGE
 	select ARCH_HAS_GIGANTIC_PAGE if (MEMORY_ISOLATION && COMPACTION) || CMA
 	default y
 	help
+1-1
arch/x86/boot/compressed/misc.c
@@ -352,7 +352,7 @@
 	boot_params->hdr.loadflags &= ~KASLR_FLAG;
 
 	/* Save RSDP address for later use. */
-	boot_params->acpi_rsdp_addr = get_rsdp_addr();
+	/* boot_params->acpi_rsdp_addr = get_rsdp_addr(); */
 
 	sanitize_boot_params(boot_params);
 
@@ -81,12 +81,8 @@
 
 	ACPI_FUNCTION_TRACE(ev_enable_gpe);
 
-	/* Clear the GPE status */
-	status = acpi_hw_clear_gpe(gpe_event_info);
-	if (ACPI_FAILURE(status))
-		return_ACPI_STATUS(status);
-
 	/* Enable the requested GPE */
+
 	status = acpi_hw_low_set_gpe(gpe_event_info, ACPI_GPE_ENABLE);
 	return_ACPI_STATUS(status);
 }
@@ -1282,6 +1282,9 @@
 	enum dma_status status;
 	unsigned int residue = 0;
 	unsigned int dptr = 0;
+	unsigned int chcrb;
+	unsigned int tcrb;
+	unsigned int i;
 
 	if (!desc)
 		return 0;
@@ -1330,14 +1333,31 @@
 	}
 
 	/*
+	 * We need to read two registers.
+	 * Make sure the control register does not skip to next chunk
+	 * while reading the counter.
+	 * Trying it 3 times should be enough: Initial read, retry, retry
+	 * for the paranoid.
+	 */
+	for (i = 0; i < 3; i++) {
+		chcrb = rcar_dmac_chan_read(chan, RCAR_DMACHCRB) &
+					    RCAR_DMACHCRB_DPTR_MASK;
+		tcrb = rcar_dmac_chan_read(chan, RCAR_DMATCRB);
+		/* Still the same? */
+		if (chcrb == (rcar_dmac_chan_read(chan, RCAR_DMACHCRB) &
+			      RCAR_DMACHCRB_DPTR_MASK))
+			break;
+	}
+	WARN_ONCE(i >= 3, "residue might be not continuous!");
+
+	/*
 	 * In descriptor mode the descriptor running pointer is not maintained
 	 * by the interrupt handler, find the running descriptor from the
 	 * descriptor pointer field in the CHCRB register. In non-descriptor
 	 * mode just use the running descriptor pointer.
 	 */
 	if (desc->hwdescs.use) {
-		dptr = (rcar_dmac_chan_read(chan, RCAR_DMACHCRB) &
-			RCAR_DMACHCRB_DPTR_MASK) >> RCAR_DMACHCRB_DPTR_SHIFT;
+		dptr = chcrb >> RCAR_DMACHCRB_DPTR_SHIFT;
 		if (dptr == 0)
 			dptr = desc->nchunks;
 		dptr--;
@@ -1355,7 +1375,7 @@
 	}
 
 	/* Add the residue for the current chunk. */
-	residue += rcar_dmac_chan_read(chan, RCAR_DMATCRB) << desc->xfer_shift;
+	residue += tcrb << desc->xfer_shift;
 
 	return residue;
 }
@@ -1368,7 +1388,8 @@
 	enum dma_status status;
 	unsigned long flags;
 	unsigned int residue;
+	bool cyclic;
 
 	status = dma_cookie_status(chan, cookie, txstate);
 	if (status == DMA_COMPLETE || !txstate)
@@ -1375,10 +1396,12 @@
 
 	spin_lock_irqsave(&rchan->lock, flags);
 	residue = rcar_dmac_chan_get_residue(rchan, cookie);
+	cyclic = rchan->desc.running ? rchan->desc.running->cyclic : false;
 	spin_unlock_irqrestore(&rchan->lock, flags);
 
 	/* if there's no residue, the cookie is complete */
-	if (!residue)
+	if (!residue && !cyclic)
 		return DMA_COMPLETE;
 
 	dma_set_residue(txstate, residue);
@@ -1379,7 +1379,7 @@
 
 	status = gpiochip_add_irqchip(chip, lock_key, request_key);
 	if (status)
-		goto err_remove_chip;
+		goto err_free_gpiochip_mask;
 
 	status = of_gpiochip_add(chip);
 	if (status)
@@ -1387,7 +1387,7 @@
 
 	status = gpiochip_init_valid_mask(chip);
 	if (status)
-		goto err_remove_chip;
+		goto err_remove_of_chip;
 
 	for (i = 0; i < chip->ngpio; i++) {
 		struct gpio_desc *desc = &gdev->descs[i];
@@ -1415,14 +1415,18 @@
 	if (gpiolib_initialized) {
 		status = gpiochip_setup_dev(gdev);
 		if (status)
-			goto err_remove_chip;
+			goto err_remove_acpi_chip;
 	}
 	return 0;
 
-err_remove_chip:
+err_remove_acpi_chip:
 	acpi_gpiochip_remove(chip);
+err_remove_of_chip:
 	gpiochip_free_hogs(chip);
 	of_gpiochip_remove(chip);
+err_remove_chip:
+	gpiochip_irqchip_remove(chip);
+err_free_gpiochip_mask:
 	gpiochip_free_valid_mask(chip);
 err_remove_irqchip_mask:
 	gpiochip_irqchip_free_valid_mask(chip);
+12-4
drivers/gpu/drm/bridge/synopsys/dw-hdmi.c
@@ -1046,6 +1046,10 @@
 	if (hdmi->version < 0x200a)
 		return false;
 
+	/* Disable if no DDC bus */
+	if (!hdmi->ddc)
+		return false;
+
 	/* Disable if SCDC is not supported, or if an HF-VSDB block is absent */
 	if (!display->hdmi.scdc.supported ||
 	    !display->hdmi.scdc.scrambling.supported)
@@ -1684,13 +1688,13 @@
 			 * Source Devices compliant shall set the
 			 * Source Version = 1.
 			 */
-			drm_scdc_readb(&hdmi->i2c->adap, SCDC_SINK_VERSION,
+			drm_scdc_readb(hdmi->ddc, SCDC_SINK_VERSION,
 				       &bytes);
-			drm_scdc_writeb(&hdmi->i2c->adap, SCDC_SOURCE_VERSION,
+			drm_scdc_writeb(hdmi->ddc, SCDC_SOURCE_VERSION,
 					min_t(u8, bytes, SCDC_MIN_SOURCE_VERSION));
 
 			/* Enabled Scrambling in the Sink */
-			drm_scdc_set_scrambling(&hdmi->i2c->adap, 1);
+			drm_scdc_set_scrambling(hdmi->ddc, 1);
 
 			/*
 			 * To activate the scrambler feature, you must ensure
@@ -1706,7 +1710,7 @@
 		hdmi_writeb(hdmi, 0, HDMI_FC_SCRAMBLER_CTRL);
 		hdmi_writeb(hdmi, (u8)~HDMI_MC_SWRSTZ_TMDSSWRST_REQ,
 			    HDMI_MC_SWRSTZ);
-		drm_scdc_set_scrambling(&hdmi->i2c->adap, 0);
+		drm_scdc_set_scrambling(hdmi->ddc, 0);
 	}
 }
@@ -1800,6 +1804,8 @@
 	 * iteration for others.
 	 * The Amlogic Meson GX SoCs (v2.01a) have been identified as needing
 	 * the workaround with a single iteration.
+	 * The Rockchip RK3288 SoC (v2.00a) and RK3328/RK3399 SoCs (v2.11a) have
+	 * been identified as needing the workaround with a single iteration.
 	 */
 
 	switch (hdmi->version) {
@@ -1808,8 +1814,10 @@
 		break;
 	case 0x131a:
 	case 0x132a:
+	case 0x200a:
 	case 0x201a:
+	case 0x211a:
 	case 0x212a:
 		count = 1;
 		break;
+4-2
drivers/gpu/drm/i915/intel_ddi.c
@@ -3862,14 +3862,16 @@
 		ret = intel_hdmi_compute_config(encoder, pipe_config, conn_state);
 	else
 		ret = intel_dp_compute_config(encoder, pipe_config, conn_state);
+	if (ret)
+		return ret;
 
-	if (IS_GEN9_LP(dev_priv) && ret)
+	if (IS_GEN9_LP(dev_priv))
 		pipe_config->lane_lat_optim_mask =
 			bxt_ddi_phy_calc_lane_lat_optim_mask(pipe_config->lane_count);
 
 	intel_ddi_compute_min_voltage_level(dev_priv, pipe_config);
 
-	return ret;
+	return 0;
 }
 
+3-3
drivers/gpu/drm/i915/intel_dp.c
@@ -1886,6 +1886,9 @@
 	int pipe_bpp;
 	int ret;
 
+	pipe_config->fec_enable = !intel_dp_is_edp(intel_dp) &&
+		intel_dp_supports_fec(intel_dp, pipe_config);
+
 	if (!intel_dp_supports_dsc(intel_dp, pipe_config))
 		return -EINVAL;
 
@@ -2115,9 +2118,6 @@
 
 	if (adjusted_mode->flags & DRM_MODE_FLAG_DBLCLK)
 		return -EINVAL;
-
-	pipe_config->fec_enable = !intel_dp_is_edp(intel_dp) &&
-		intel_dp_supports_fec(intel_dp, pipe_config);
 
 	ret = intel_dp_compute_link_config(encoder, pipe_config, conn_state);
 	if (ret < 0)
+1-1
drivers/gpu/drm/imx/ipuv3-crtc.c
@@ -71,7 +71,7 @@
 	if (disable_partial)
 		ipu_plane_disable(ipu_crtc->plane[1], true);
 	if (disable_full)
-		ipu_plane_disable(ipu_crtc->plane[0], false);
+		ipu_plane_disable(ipu_crtc->plane[0], true);
 }
 
 static void ipu_crtc_atomic_disable(struct drm_crtc *crtc,
+1-2
drivers/gpu/drm/scheduler/sched_main.c
@@ -366,10 +366,9 @@
 EXPORT_SYMBOL(drm_sched_increase_karma);
 
 /**
- * drm_sched_hw_job_reset - stop the scheduler if it contains the bad job
+ * drm_sched_stop - stop the scheduler
  *
  * @sched: scheduler instance
- * @bad: bad scheduler job
  *
  */
 void drm_sched_stop(struct drm_gpu_scheduler *sched)
@@ -49,9 +49,8 @@
  * ttm_global_mutex - protecting the global BO state
  */
 DEFINE_MUTEX(ttm_global_mutex);
-struct ttm_bo_global ttm_bo_glob = {
-	.use_count = 0
-};
+unsigned ttm_bo_glob_use_count;
+struct ttm_bo_global ttm_bo_glob;
 
 static struct attribute ttm_bo_count = {
 	.name = "bo_count",
@@ -1531,12 +1530,13 @@
 	struct ttm_bo_global *glob = &ttm_bo_glob;
 
 	mutex_lock(&ttm_global_mutex);
-	if (--glob->use_count > 0)
+	if (--ttm_bo_glob_use_count > 0)
 		goto out;
 
 	kobject_del(&glob->kobj);
 	kobject_put(&glob->kobj);
 	ttm_mem_global_release(&ttm_mem_glob);
+	memset(glob, 0, sizeof(*glob));
 out:
 	mutex_unlock(&ttm_global_mutex);
 }
@@ -1548,7 +1548,7 @@
 	unsigned i;
 
 	mutex_lock(&ttm_global_mutex);
-	if (++glob->use_count > 1)
+	if (++ttm_bo_glob_use_count > 1)
 		goto out;
 
 	ret = ttm_mem_global_init(&ttm_mem_glob);
+3-2
drivers/gpu/drm/ttm/ttm_memory.c
@@ -461,8 +461,8 @@
 
 void ttm_mem_global_release(struct ttm_mem_global *glob)
 {
-	unsigned int i;
 	struct ttm_mem_zone *zone;
+	unsigned int i;
 
 	/* let the page allocator first stop the shrink work. */
 	ttm_page_alloc_fini();
@@ -475,9 +475,10 @@
 		zone = glob->zones[i];
 		kobject_del(&zone->kobj);
 		kobject_put(&zone->kobj);
-	}
+	}
 	kobject_del(&glob->kobj);
 	kobject_put(&glob->kobj);
+	memset(glob, 0, sizeof(*glob));
 }
 
 static void ttm_check_swapping(struct ttm_mem_global *glob)
+1-1
drivers/gpu/drm/vc4/vc4_crtc.c
@@ -1042,7 +1042,7 @@
 vc4_crtc_reset(struct drm_crtc *crtc)
 {
 	if (crtc->state)
-		__drm_atomic_helper_crtc_destroy_state(crtc->state);
+		vc4_crtc_destroy_state(crtc, crtc->state);
 
 	crtc->state = kzalloc(sizeof(struct vc4_crtc_state), GFP_KERNEL);
 	if (crtc->state)
+5-28
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
@@ -546,29 +546,13 @@
 }
 
 /**
- * vmw_assume_iommu - Figure out whether coherent dma-remapping might be
- * taking place.
- * @dev: Pointer to the struct drm_device.
- *
- * Return: true if iommu present, false otherwise.
- */
-static bool vmw_assume_iommu(struct drm_device *dev)
-{
-	const struct dma_map_ops *ops = get_dma_ops(dev->dev);
-
-	return !dma_is_direct(ops) && ops &&
-		ops->map_page != dma_direct_map_page;
-}
-
-/**
  * vmw_dma_select_mode - Determine how DMA mappings should be set up for this
  * system.
  *
  * @dev_priv: Pointer to a struct vmw_private
  *
- * This functions tries to determine the IOMMU setup and what actions
- * need to be taken by the driver to make system pages visible to the
- * device.
+ * This functions tries to determine what actions need to be taken by the
+ * driver to make system pages visible to the device.
  * If this function decides that DMA is not possible, it returns -EINVAL.
  * The driver may then try to disable features of the device that require
  * DMA.
@@ -578,22 +562,15 @@
 	static const char *names[vmw_dma_map_max] = {
 		[vmw_dma_phys] = "Using physical TTM page addresses.",
 		[vmw_dma_alloc_coherent] = "Using coherent TTM pages.",
-		[vmw_dma_map_populate] = "Keeping DMA mappings.",
+		[vmw_dma_map_populate] = "Caching DMA mappings.",
		[vmw_dma_map_bind] = "Giving up DMA mappings early."};
 
 	if (vmw_force_coherent)
 		dev_priv->map_mode = vmw_dma_alloc_coherent;
-	else if (vmw_assume_iommu(dev_priv->dev))
-		dev_priv->map_mode = vmw_dma_map_populate;
-	else if (!vmw_force_iommu)
-		dev_priv->map_mode = vmw_dma_phys;
-	else if (IS_ENABLED(CONFIG_SWIOTLB) && swiotlb_nr_tbl())
-		dev_priv->map_mode = vmw_dma_alloc_coherent;
+	else if (vmw_restrict_iommu)
+		dev_priv->map_mode = vmw_dma_map_bind;
 	else
 		dev_priv->map_mode = vmw_dma_map_populate;
-
-	if (dev_priv->map_mode == vmw_dma_map_populate && vmw_restrict_iommu)
-		dev_priv->map_mode = vmw_dma_map_bind;
 
 	/* No TTM coherent page pool? FIXME: Ask TTM instead! */
 	if (!(IS_ENABLED(CONFIG_SWIOTLB) || IS_ENABLED(CONFIG_INTEL_IOMMU)) &&
···185185int i2c_generic_scl_recovery(struct i2c_adapter *adap)186186{187187 struct i2c_bus_recovery_info *bri = adap->bus_recovery_info;188188- int i = 0, scl = 1, ret;188188+ int i = 0, scl = 1, ret = 0;189189190190 if (bri->prepare_recovery)191191 bri->prepare_recovery(adap);
@@ -208,6 +208,9 @@
 		kref_put(&file->async_file->ref,
 			 ib_uverbs_release_async_event_file);
 	put_device(&file->device->dev);
+
+	if (file->disassociate_page)
+		__free_pages(file->disassociate_page, 0);
 	kfree(file);
 }
 
@@ -877,9 +880,50 @@
 	kfree(priv);
 }
 
+/*
+ * Once the zap_vma_ptes has been called touches to the VMA will come here and
+ * we return a dummy writable zero page for all the pfns.
+ */
+static vm_fault_t rdma_umap_fault(struct vm_fault *vmf)
+{
+	struct ib_uverbs_file *ufile = vmf->vma->vm_file->private_data;
+	struct rdma_umap_priv *priv = vmf->vma->vm_private_data;
+	vm_fault_t ret = 0;
+
+	if (!priv)
+		return VM_FAULT_SIGBUS;
+
+	/* Read only pages can just use the system zero page. */
+	if (!(vmf->vma->vm_flags & (VM_WRITE | VM_MAYWRITE))) {
+		vmf->page = ZERO_PAGE(vmf->address);
+		get_page(vmf->page);
+		return 0;
+	}
+
+	mutex_lock(&ufile->umap_lock);
+	if (!ufile->disassociate_page)
+		ufile->disassociate_page =
+			alloc_pages(vmf->gfp_mask | __GFP_ZERO, 0);
+
+	if (ufile->disassociate_page) {
+		/*
+		 * This VMA is forced to always be shared so this doesn't have
+		 * to worry about COW.
+		 */
+		vmf->page = ufile->disassociate_page;
+		get_page(vmf->page);
+	} else {
+		ret = VM_FAULT_SIGBUS;
+	}
+	mutex_unlock(&ufile->umap_lock);
+
+	return ret;
+}
+
 static const struct vm_operations_struct rdma_umap_ops = {
 	.open = rdma_umap_open,
 	.close = rdma_umap_close,
+	.fault = rdma_umap_fault,
 };
 
 static struct rdma_umap_priv *rdma_user_mmap_pre(struct ib_ucontext *ucontext,
@@ -888,7 +932,10 @@
 {
 	struct ib_uverbs_file *ufile = ucontext->ufile;
 	struct rdma_umap_priv *priv;
+
+	if (!(vma->vm_flags & VM_SHARED))
+		return ERR_PTR(-EINVAL);
 
 	if (vma->vm_end - vma->vm_start != size)
 		return ERR_PTR(-EINVAL);
@@ -992,7 +1039,7 @@
 		 * at a time to get the lock ordering right. Typically there
 		 * will only be one mm, so no big deal.
 		 */
-		down_write(&mm->mmap_sem);
+		down_read(&mm->mmap_sem);
 		if (!mmget_still_valid(mm))
 			goto skip_mm;
 		mutex_lock(&ufile->umap_lock);
@@ -1006,10 +1053,9 @@
 
 			zap_vma_ptes(vma, vma->vm_start,
 				     vma->vm_end - vma->vm_start);
-			vma->vm_flags &= ~(VM_SHARED | VM_MAYSHARE);
 		}
 		mutex_unlock(&ufile->umap_lock);
 	skip_mm:
-		up_write(&mm->mmap_sem);
+		up_read(&mm->mmap_sem);
 		mmput(mm);
 	}
 }
+1-1
drivers/infiniband/hw/hns/hns_roce_qp.c
@@ -533,7 +533,7 @@
 
 static int hns_roce_qp_has_sq(struct ib_qp_init_attr *attr)
 {
-	if (attr->qp_type == IB_QPT_XRC_TGT)
+	if (attr->qp_type == IB_QPT_XRC_TGT || !attr->cap.max_send_wr)
 		return 0;
 
 	return 1;
+7-5
drivers/infiniband/hw/mlx5/main.c
@@ -1119,6 +1119,8 @@
 		if (MLX5_CAP_GEN(mdev, qp_packet_based))
 			resp.flags |=
 				MLX5_IB_QUERY_DEV_RESP_PACKET_BASED_CREDIT_MODE;
+
+		resp.flags |= MLX5_IB_QUERY_DEV_RESP_FLAGS_SCAT2CQE_DCT;
 	}
 
 	if (field_avail(typeof(resp), sw_parsing_caps,
@@ -2066,7 +2068,8 @@
 
 	if (vma->vm_flags & VM_WRITE)
 		return -EPERM;
+	vma->vm_flags &= ~VM_MAYWRITE;
 
 	if (!dev->mdev->clock_info_page)
 		return -EOPNOTSUPP;
@@ -2231,18 +2234,17 @@
 
 		if (vma->vm_flags & VM_WRITE)
 			return -EPERM;
+		vma->vm_flags &= ~VM_MAYWRITE;
 
 		/* Don't expose to user-space information it shouldn't have */
 		if (PAGE_SIZE > 4096)
 			return -EOPNOTSUPP;
 
-		vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
 		pfn = (dev->mdev->iseg_base +
 		       offsetof(struct mlx5_init_seg, internal_timer_h)) >>
 			PAGE_SHIFT;
-		if (io_remap_pfn_range(vma, vma->vm_start, pfn,
-				       PAGE_SIZE, vma->vm_page_prot))
-			return -EAGAIN;
-		break;
+		return rdma_user_mmap_io(&context->ibucontext, vma, pfn,
+					 PAGE_SIZE,
+					 pgprot_noncached(vma->vm_page_prot));
 	case MLX5_IB_MMAP_CLOCK_INFO:
 		return mlx5_ib_mmap_clock_info_page(dev, vma, context);
@@ -608,11 +608,6 @@
 	if (unlikely(mapped_segs == mr->mr.max_segs))
 		return -ENOMEM;
 
-	if (mr->mr.length == 0) {
-		mr->mr.user_base = addr;
-		mr->mr.iova = addr;
-	}
-
 	m = mapped_segs / RVT_SEGSZ;
 	n = mapped_segs % RVT_SEGSZ;
 	mr->mr.map[m]->segs[n].vaddr = (void *)addr;
@@ -630,15 +625,22 @@
  * @sg_nents: number of entries in sg
  * @sg_offset: offset in bytes into sg
  *
+ * Overwrite rvt_mr length with mr length calculated by ib_sg_to_pages.
+ *
  * Return: number of sg elements mapped to the memory region
  */
 int rvt_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
		  int sg_nents, unsigned int *sg_offset)
 {
 	struct rvt_mr *mr = to_imr(ibmr);
+	int ret;
 
 	mr->mr.length = 0;
 	mr->mr.page_shift = PAGE_SHIFT;
-	return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset,
-			      rvt_set_page);
+	ret = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, rvt_set_page);
+	mr->mr.user_base = ibmr->iova;
+	mr->mr.iova = ibmr->iova;
+	mr->mr.offset = ibmr->iova - (u64)mr->mr.map[0]->segs[0].vaddr;
+	mr->mr.length = (size_t)ibmr->length;
+	return ret;
 }
@@ -671,7 +673,8 @@
 	ibmr->rkey = key;
 	mr->mr.lkey = key;
 	mr->mr.access_flags = access;
+	mr->mr.iova = ibmr->iova;
 	atomic_set(&mr->mr.lkey_invalid, 0);
 
 	return 0;
+1-1
drivers/input/keyboard/Kconfig
···420420421421config KEYBOARD_SNVS_PWRKEY422422 tristate "IMX SNVS Power Key Driver"423423- depends on SOC_IMX6SX || SOC_IMX7D423423+ depends on ARCH_MXC || COMPILE_TEST424424 depends on OF425425 help426426 This is the snvs powerkey driver for the Freescale i.MX application
···12301230 }1231123112321232 rc = f11_write_control_regs(fn, &f11->sens_query,12331233- &f11->dev_controls, fn->fd.query_base_addr);12331233+ &f11->dev_controls, fn->fd.control_base_addr);12341234 if (rc)12351235 dev_warn(&fn->dev, "Failed to write control registers\n");12361236
+6-6
drivers/mtd/nand/raw/marvell_nand.c
···722722 struct marvell_nfc *nfc = to_marvell_nfc(chip->controller);723723 u32 ndcr_generic;724724725725- if (chip == nfc->selected_chip && die_nr == marvell_nand->selected_die)726726- return;727727-728728- writel_relaxed(marvell_nand->ndtr0, nfc->regs + NDTR0);729729- writel_relaxed(marvell_nand->ndtr1, nfc->regs + NDTR1);730730-731725 /*732726 * Reset the NDCR register to a clean state for this particular chip,733727 * also clear ND_RUN bit.···732738733739 /* Also reset the interrupt status register */734740 marvell_nfc_clear_int(nfc, NDCR_ALL_INT);741741+742742+ if (chip == nfc->selected_chip && die_nr == marvell_nand->selected_die)743743+ return;744744+745745+ writel_relaxed(marvell_nand->ndtr0, nfc->regs + NDTR0);746746+ writel_relaxed(marvell_nand->ndtr1, nfc->regs + NDTR1);735747736748 nfc->selected_chip = chip;737749 marvell_nand->selected_die = die_nr;
+6
drivers/net/dsa/bcm_sf2_cfp.c
···886886 fs->m_ext.data[1]))887887 return -EINVAL;888888889889+ if (fs->location != RX_CLS_LOC_ANY && fs->location >= CFP_NUM_RULES)890890+ return -EINVAL;891891+889892 if (fs->location != RX_CLS_LOC_ANY &&890893 test_bit(fs->location, priv->cfp.used))891894 return -EBUSY;···976973{977974 struct cfp_rule *rule;978975 int ret;976976+977977+ if (loc >= CFP_NUM_RULES)978978+ return -EINVAL;979979980980 /* Refuse deleting unused rules, and those that are not unique since981981 * that could leave IPv6 rules with one of the chained rule in the
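The bcm_sf2_cfp hunks validate fs->location against CFP_NUM_RULES before it is ever used as a bitmap index; without that, test_bit() would read past the end of the bitmap. A sketch of the same validate-before-index shape (NUM_RULES and rule_insert are stand-in names, not the driver's):

```c
#include <errno.h>
#include <limits.h>

#define NUM_RULES 256			/* stand-in for CFP_NUM_RULES */
#define RX_CLS_LOC_ANY 0xffffffffU
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

static unsigned long used[NUM_RULES / BITS_PER_LONG];

static int test_bit_(unsigned int nr, const unsigned long *map)
{
	return (map[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}

/* Reject out-of-range locations before they index the bitmap. */
static int rule_insert(unsigned int location)
{
	if (location != RX_CLS_LOC_ANY && location >= NUM_RULES)
		return -EINVAL;
	if (location != RX_CLS_LOC_ANY && test_bit_(location, used))
		return -EBUSY;
	return 0;
}
```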
···333333 */334334 dwmac->irq_pwr_wakeup = platform_get_irq_byname(pdev,335335 "stm32_pwr_wakeup");336336+ if (dwmac->irq_pwr_wakeup == -EPROBE_DEFER)337337+ return -EPROBE_DEFER;338338+336339 if (!dwmac->clk_eth_ck && dwmac->irq_pwr_wakeup >= 0) {337340 err = device_init_wakeup(&pdev->dev, true);338341 if (err) {
+1-1
drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
···160160 .driver_data = (void *)&galileo_stmmac_dmi_data,161161 },162162 /*163163- * There are 2 types of SIMATIC IOT2000: IOT20202 and IOT2040.163163+ * There are 2 types of SIMATIC IOT2000: IOT2020 and IOT2040.164164 * The asset tag "6ES7647-0AA00-0YA2" is only for IOT2020 which165165 * has only one pci network device while other asset tags are166166 * for IOT2040 which has two.
+6
drivers/net/ieee802154/mcr20a.c
···533533 dev_dbg(printdev(lp), "no slotted operation\n");534534 ret = regmap_update_bits(lp->regmap_dar, DAR_PHY_CTRL1,535535 DAR_PHY_CTRL1_SLOTTED, 0x0);536536+ if (ret < 0)537537+ return ret;536538537539 /* enable irq */538540 enable_irq(lp->spi->irq);···542540 /* Unmask SEQ interrupt */543541 ret = regmap_update_bits(lp->regmap_dar, DAR_PHY_CTRL2,544542 DAR_PHY_CTRL2_SEQMSK, 0x0);543543+ if (ret < 0)544544+ return ret;545545546546 /* Start the RX sequence */547547 dev_dbg(printdev(lp), "start the RX sequence\n");548548 ret = regmap_update_bits(lp->regmap_dar, DAR_PHY_CTRL1,549549 DAR_PHY_CTRL1_XCVSEQ_MASK, MCR20A_XCVSEQ_RX);550550+ if (ret < 0)551551+ return ret;550552551553 return 0;552554}
+4-2
drivers/net/phy/marvell.c
···1597159715981598static void marvell_get_strings(struct phy_device *phydev, u8 *data)15991599{16001600+ int count = marvell_get_sset_count(phydev);16001601 int i;1601160216021602- for (i = 0; i < ARRAY_SIZE(marvell_hw_stats); i++) {16031603+ for (i = 0; i < count; i++) {16031604 strlcpy(data + i * ETH_GSTRING_LEN,16041605 marvell_hw_stats[i].string, ETH_GSTRING_LEN);16051606 }···16281627static void marvell_get_stats(struct phy_device *phydev,16291628 struct ethtool_stats *stats, u64 *data)16301629{16301630+ int count = marvell_get_sset_count(phydev);16311631 int i;1632163216331633- for (i = 0; i < ARRAY_SIZE(marvell_hw_stats); i++)16331633+ for (i = 0; i < count; i++)16341634 data[i] = marvell_get_stat(phydev, i);16351635}16361636
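The marvell PHY hunks bound both loops by marvell_get_sset_count() instead of ARRAY_SIZE(): ethtool sizes the userspace buffers from the reported count, so walking the full array would overrun them on PHYs that expose fewer stats. A sketch of the fixed shape, with hypothetical stat names and a pretend count:

```c
#include <string.h>

#define GSTRING_LEN 32

struct stat_desc { const char *string; };

static const struct stat_desc hw_stats[] = {
	{ "phy_receive_errors" },
	{ "phy_idle_errors" },
	{ "phy_extra_stat" },	/* hypothetical: not on every PHY */
};

/* Pretend this PHY only exposes the first two counters. */
static int get_sset_count(void) { return 2; }

/*
 * Fill a buffer the caller sized as count * GSTRING_LEN.  Looping to
 * ARRAY_SIZE(hw_stats) here would write past the end of 'data'.
 */
static void get_strings(char *data)
{
	int count = get_sset_count();

	for (int i = 0; i < count; i++)
		strncpy(data + i * GSTRING_LEN, hw_stats[i].string,
			GSTRING_LEN);
}

static int check_strings(void)
{
	char buf[2 * GSTRING_LEN];

	get_strings(buf);
	return strcmp(buf, "phy_receive_errors") == 0 &&
	       strcmp(buf + GSTRING_LEN, "phy_idle_errors") == 0;
}
```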
+1-1
drivers/net/slip/slhc.c
···153153void154154slhc_free(struct slcompress *comp)155155{156156- if ( comp == NULLSLCOMPR )156156+ if ( IS_ERR_OR_NULL(comp) )157157 return;158158159159 if ( comp->tstate != NULLSLSTATE )
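The slhc hunk hardens slhc_free() with IS_ERR_OR_NULL() because callers may pass it the raw return value of an allocator that encodes errors as pointers. A simplified reimplementation of that encoding (a sketch of the kernel's ERR_PTR convention, not its actual headers):

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_ERRNO 4095

/* Encode a small negative errno in a pointer, as the kernel does. */
static inline void *err_ptr(long error) { return (void *)error; }

static inline int is_err(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static inline int is_err_or_null(const void *ptr)
{
	return !ptr || is_err(ptr);
}

/* A free routine guarded like the patched slhc_free(). */
static int freed;
static void demo_free(void *obj)
{
	if (is_err_or_null(obj))
		return;
	freed++;
}

static int check_free(void)
{
	static int obj;

	demo_free(NULL);
	demo_free(err_ptr(-12));	/* an -ENOMEM-style error pointer */
	if (freed != 0)
		return 0;
	demo_free(&obj);
	return freed == 1;
}
```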
···11/******************************************************************************22 *33 * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.44- * Copyright(c) 2018 Intel Corporation44+ * Copyright(c) 2018 - 2019 Intel Corporation55 *66 * This program is free software; you can redistribute it and/or modify it77 * under the terms of version 2 of the GNU General Public License as···136136 .ht_params = &iwl5000_ht_params,137137 .led_mode = IWL_LED_BLINK,138138 .internal_wimax_coex = true,139139+ .csr = &iwl_csr_v1,139140};140141141142#define IWL_DEVICE_5150 \
···181181182182 adapter = card->adapter;183183184184- if (test_bit(MWIFIEX_IS_SUSPENDED, &adapter->work_flags)) {184184+ if (!test_bit(MWIFIEX_IS_SUSPENDED, &adapter->work_flags)) {185185 mwifiex_dbg(adapter, WARN,186186 "device already resumed\n");187187 return 0;
+17-2
drivers/pci/pci.c
···62626262 } else if (!strncmp(str, "pcie_scan_all", 13)) {62636263 pci_add_flags(PCI_SCAN_ALL_PCIE_DEVS);62646264 } else if (!strncmp(str, "disable_acs_redir=", 18)) {62656265- disable_acs_redir_param =62666266- kstrdup(str + 18, GFP_KERNEL);62656265+ disable_acs_redir_param = str + 18;62676266 } else {62686267 printk(KERN_ERR "PCI: Unknown option `%s'\n",62696268 str);···62736274 return 0;62746275}62756276early_param("pci", pci_setup);62776277+62786278+/*62796279+ * 'disable_acs_redir_param' is initialized in pci_setup(), above, to point62806280+ * to data in the __initdata section which will be freed after the init62816281+ * sequence is complete. We can't allocate memory in pci_setup() because some62826282+ * architectures do not have any memory allocation service available during62836283+ * an early_param() call. So we allocate memory and copy the variable here62846284+ * before the init section is freed.62856285+ */62866286+static int __init pci_realloc_setup_params(void)62876287+{62886288+ disable_acs_redir_param = kstrdup(disable_acs_redir_param, GFP_KERNEL);62896289+62906290+ return 0;62916291+}62926292+pure_initcall(pci_realloc_setup_params);
+8
drivers/pci/pcie/Kconfig
···142142143143 This is only useful if you have devices that support PTM, but it144144 is safe to enable even if you don't.145145+146146+config PCIE_BW147147+ bool "PCI Express Bandwidth Change Notification"148148+ depends on PCIEPORTBUS149149+ help150150+ This enables PCI Express Bandwidth Change Notification. If151151+ you know link width or rate changes occur only to correct152152+ unreliable links, you may answer Y.
+1-1
drivers/pci/pcie/Makefile
···33# Makefile for PCI Express features and port driver4455pcieportdrv-y := portdrv_core.o portdrv_pci.o err.o66-pcieportdrv-y += bw_notification.o7687obj-$(CONFIG_PCIEPORTBUS) += pcieportdrv.o98···1213obj-$(CONFIG_PCIE_PME) += pme.o1314obj-$(CONFIG_PCIE_DPC) += dpc.o1415obj-$(CONFIG_PCIE_PTM) += ptm.o1616+obj-$(CONFIG_PCIE_BW) += bw_notification.o
+4
drivers/pci/pcie/portdrv.h
···4949static inline int pcie_dpc_init(void) { return 0; }5050#endif51515252+#ifdef CONFIG_PCIE_BW5253int pcie_bandwidth_notification_init(void);5454+#else5555+static inline int pcie_bandwidth_notification_init(void) { return 0; }5656+#endif53575458/* Port Type */5559#define PCIE_ANY_PORT (~0)
···221221 int avg_current;222222 u32 cc_lsb;223223224224+ if (!divider)225225+ return 0;226226+224227 sample &= 0xffffff; /* 24-bits, unsigned */225228 offset &= 0x7ff; /* 10-bits, signed */226229
-6
drivers/power/supply/power_supply_sysfs.c
···383383 char *prop_buf;384384 char *attrname;385385386386- dev_dbg(dev, "uevent\n");387387-388386 if (!psy || !psy->desc) {389387 dev_dbg(dev, "No power supply yet\n");390388 return ret;391389 }392392-393393- dev_dbg(dev, "POWER_SUPPLY_NAME=%s\n", psy->desc->name);394390395391 ret = add_uevent_var(env, "POWER_SUPPLY_NAME=%s", psy->desc->name);396392 if (ret)···422426 ret = -ENOMEM;423427 goto out;424428 }425425-426426- dev_dbg(dev, "prop %s=%s\n", attrname, prop_buf);427429428430 ret = add_uevent_var(env, "POWER_SUPPLY_%s=%s", attrname, prop_buf);429431 kfree(attrname);
-13
drivers/usb/core/driver.c
···473473 pm_runtime_disable(dev);474474 pm_runtime_set_suspended(dev);475475476476- /* Undo any residual pm_autopm_get_interface_* calls */477477- for (r = atomic_read(&intf->pm_usage_cnt); r > 0; --r)478478- usb_autopm_put_interface_no_suspend(intf);479479- atomic_set(&intf->pm_usage_cnt, 0);480480-481476 if (!error)482477 usb_autosuspend_device(udev);483478···16281633 int status;1629163416301635 usb_mark_last_busy(udev);16311631- atomic_dec(&intf->pm_usage_cnt);16321636 status = pm_runtime_put_sync(&intf->dev);16331637 dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",16341638 __func__, atomic_read(&intf->dev.power.usage_count),···16561662 int status;1657166316581664 usb_mark_last_busy(udev);16591659- atomic_dec(&intf->pm_usage_cnt);16601665 status = pm_runtime_put(&intf->dev);16611666 dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",16621667 __func__, atomic_read(&intf->dev.power.usage_count),···16771684 struct usb_device *udev = interface_to_usbdev(intf);1678168516791686 usb_mark_last_busy(udev);16801680- atomic_dec(&intf->pm_usage_cnt);16811687 pm_runtime_put_noidle(&intf->dev);16821688}16831689EXPORT_SYMBOL_GPL(usb_autopm_put_interface_no_suspend);···17071715 status = pm_runtime_get_sync(&intf->dev);17081716 if (status < 0)17091717 pm_runtime_put_sync(&intf->dev);17101710- else17111711- atomic_inc(&intf->pm_usage_cnt);17121718 dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",17131719 __func__, atomic_read(&intf->dev.power.usage_count),17141720 status);···17401750 status = pm_runtime_get(&intf->dev);17411751 if (status < 0 && status != -EINPROGRESS)17421752 pm_runtime_put_noidle(&intf->dev);17431743- else17441744- atomic_inc(&intf->pm_usage_cnt);17451753 dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",17461754 __func__, atomic_read(&intf->dev.power.usage_count),17471755 status);···17631775 struct usb_device *udev = interface_to_usbdev(intf);1764177617651777 usb_mark_last_busy(udev);17661766- atomic_inc(&intf->pm_usage_cnt);17671778 
pm_runtime_get_noresume(&intf->dev);17681779}17691780EXPORT_SYMBOL_GPL(usb_autopm_get_interface_no_resume);
+3-1
drivers/usb/core/message.c
···820820821821 if (dev->state == USB_STATE_SUSPENDED)822822 return -EHOSTUNREACH;823823- if (size <= 0 || !buf || !index)823823+ if (size <= 0 || !buf)824824 return -EINVAL;825825 buf[0] = 0;826826+ if (index <= 0 || index >= 256)827827+ return -EINVAL;826828 tbuf = kmalloc(256, GFP_NOIO);827829 if (!tbuf)828830 return -ENOMEM;
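The message.c hunk zeroes buf[0] before rejecting a bad index, so callers that ignore the return value still see a terminated (empty) string, and it validates the index against the descriptor's u8 range before paying for the allocation. A sketch of that ordering with hypothetical helper names (get_descriptor stands in for the real control transfer):

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char out[64];

/* Hypothetical stand-in for the device's string descriptor reply. */
static int get_descriptor(int index, char *buf, size_t len)
{
	snprintf(buf, len, "string-%d", index);
	return 0;
}

static int read_string(int index, char *buf, int size)
{
	if (size <= 0 || !buf)
		return -EINVAL;
	buf[0] = 0;			/* well-defined result on failure */
	if (index <= 0 || index >= 256)	/* descriptor index is a u8 */
		return -EINVAL;

	char *tbuf = malloc(256);
	if (!tbuf)
		return -ENOMEM;

	int err = get_descriptor(index, tbuf, 256);
	if (!err) {
		strncpy(buf, tbuf, size - 1);
		buf[size - 1] = '\0';
	}
	free(tbuf);
	return err;
}
```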
+15-4
drivers/usb/gadget/udc/dummy_hcd.c
···979979 struct dummy_hcd *dum_hcd = gadget_to_dummy_hcd(g);980980 struct dummy *dum = dum_hcd->dum;981981982982- if (driver->max_speed == USB_SPEED_UNKNOWN)982982+ switch (g->speed) {983983+ /* All the speeds we support */984984+ case USB_SPEED_LOW:985985+ case USB_SPEED_FULL:986986+ case USB_SPEED_HIGH:987987+ case USB_SPEED_SUPER:988988+ break;989989+ default:990990+ dev_err(dummy_dev(dum_hcd), "Unsupported driver max speed %d\n",991991+ driver->max_speed);983992 return -EINVAL;993993+ }984994985995 /*986996 * SLAVE side init ... the layer above hardware, which···17941784 /* Bus speed is 500000 bytes/ms, so use a little less */17951785 total = 490000;17961786 break;17971797- default:17871787+ default: /* Can't happen */17981788 dev_err(dummy_dev(dum_hcd), "bogus device speed\n");17991799- return;17891789+ total = 0;17901790+ break;18001791 }1801179218021793 /* FIXME if HZ != 1000 this will probably misbehave ... */···1839182818401829 /* Used up this frame's bandwidth? */18411830 if (total <= 0)18421842- break;18311831+ continue;1843183218441833 /* find the gadget's ep for this request (if configured) */18451834 address = usb_pipeendpoint (urb->pipe);
+1
drivers/usb/misc/yurex.c
···314314 usb_deregister_dev(interface, &yurex_class);315315316316 /* prevent more I/O from starting */317317+ usb_poison_urb(dev->urb);317318 mutex_lock(&dev->io_mutex);318319 dev->interface = NULL;319320 mutex_unlock(&dev->io_mutex);
+5-8
drivers/usb/storage/realtek_cr.c
···763763 break;764764 case RTS51X_STAT_IDLE:765765 case RTS51X_STAT_SS:766766- usb_stor_dbg(us, "RTS51X_STAT_SS, intf->pm_usage_cnt:%d, power.usage:%d\n",767767- atomic_read(&us->pusb_intf->pm_usage_cnt),766766+ usb_stor_dbg(us, "RTS51X_STAT_SS, power.usage:%d\n",768767 atomic_read(&us->pusb_intf->dev.power.usage_count));769768770770- if (atomic_read(&us->pusb_intf->pm_usage_cnt) > 0) {769769+ if (atomic_read(&us->pusb_intf->dev.power.usage_count) > 0) {771770 usb_stor_dbg(us, "Ready to enter SS state\n");772771 rts51x_set_stat(chip, RTS51X_STAT_SS);773772 /* ignore mass storage interface's children */774773 pm_suspend_ignore_children(&us->pusb_intf->dev, true);775774 usb_autopm_put_interface_async(us->pusb_intf);776776- usb_stor_dbg(us, "RTS51X_STAT_SS 01, intf->pm_usage_cnt:%d, power.usage:%d\n",777777- atomic_read(&us->pusb_intf->pm_usage_cnt),775775+ usb_stor_dbg(us, "RTS51X_STAT_SS 01, power.usage:%d\n",778776 atomic_read(&us->pusb_intf->dev.power.usage_count));779777 }780778 break;···805807 int ret;806808807809 if (working_scsi(srb)) {808808- usb_stor_dbg(us, "working scsi, intf->pm_usage_cnt:%d, power.usage:%d\n",809809- atomic_read(&us->pusb_intf->pm_usage_cnt),810810+ usb_stor_dbg(us, "working scsi, power.usage:%d\n",810811 atomic_read(&us->pusb_intf->dev.power.usage_count));811812812812- if (atomic_read(&us->pusb_intf->pm_usage_cnt) <= 0) {813813+ if (atomic_read(&us->pusb_intf->dev.power.usage_count) <= 0) {813814 ret = usb_autopm_get_interface(us->pusb_intf);814815 usb_stor_dbg(us, "working scsi, ret=%d\n", ret);815816 }
+3-9
drivers/usb/usbip/stub_rx.c
···361361 }362362363363 if (usb_endpoint_xfer_isoc(epd)) {364364- /* validate packet size and number of packets */365365- unsigned int maxp, packets, bytes;366366-367367- maxp = usb_endpoint_maxp(epd);368368- maxp *= usb_endpoint_maxp_mult(epd);369369- bytes = pdu->u.cmd_submit.transfer_buffer_length;370370- packets = DIV_ROUND_UP(bytes, maxp);371371-364364+ /* validate number of packets */372365 if (pdu->u.cmd_submit.number_of_packets < 0 ||373373- pdu->u.cmd_submit.number_of_packets > packets) {366366+ pdu->u.cmd_submit.number_of_packets >367367+ USBIP_MAX_ISO_PACKETS) {374368 dev_err(&sdev->udev->dev,375369 "CMD_SUBMIT: isoc invalid num packets %d\n",376370 pdu->u.cmd_submit.number_of_packets);
+7
drivers/usb/usbip/usbip_common.h
···121121#define USBIP_DIR_OUT 0x00122122#define USBIP_DIR_IN 0x01123123124124+/*125125+ * Arbitrary limit for the maximum number of isochronous packets in an URB,126126+ * compare for example the uhci_submit_isochronous function in127127+ * drivers/usb/host/uhci-q.c128128+ */129129+#define USBIP_MAX_ISO_PACKETS 1024130130+124131/**125132 * struct usbip_header_basic - data pertinent to every request126133 * @command: the usbip request type
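The stub_rx hunk stops deriving the packet limit from peer-supplied fields (maxp and transfer_buffer_length are attacker-controlled too) and instead caps number_of_packets at a fixed USBIP_MAX_ISO_PACKETS before it sizes any allocation. A minimal sketch of that check:

```c
#include <errno.h>

#define MAX_ISO_PACKETS 1024	/* mirrors USBIP_MAX_ISO_PACKETS */

/*
 * Validate an attacker-controlled packet count with a fixed bound,
 * without trusting any other peer-supplied field for the check.
 */
static int check_iso_packets(int number_of_packets)
{
	if (number_of_packets < 0 || number_of_packets > MAX_ISO_PACKETS)
		return -EINVAL;
	return 0;
}
```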
+3-3
drivers/w1/masters/ds2490.c
···10161016 /* alternative 3, 1ms interrupt (greatly speeds search), 64 byte bulk */10171017 alt = 3;10181018 err = usb_set_interface(dev->udev,10191019- intf->altsetting[alt].desc.bInterfaceNumber, alt);10191019+ intf->cur_altsetting->desc.bInterfaceNumber, alt);10201020 if (err) {10211021 dev_err(&dev->udev->dev, "Failed to set alternative setting %d "10221022 "for %d interface: err=%d.\n", alt,10231023- intf->altsetting[alt].desc.bInterfaceNumber, err);10231023+ intf->cur_altsetting->desc.bInterfaceNumber, err);10241024 goto err_out_clear;10251025 }1026102610271027- iface_desc = &intf->altsetting[alt];10271027+ iface_desc = intf->cur_altsetting;10281028 if (iface_desc->desc.bNumEndpoints != NUM_EP-1) {10291029 pr_info("Num endpoints=%d. It is not DS9490R.\n",10301030 iface_desc->desc.bNumEndpoints);
+2-1
fs/block_dev.c
···264264 bio_for_each_segment_all(bvec, &bio, i, iter_all) {265265 if (should_dirty && !PageCompound(bvec->bv_page))266266 set_page_dirty_lock(bvec->bv_page);267267- put_page(bvec->bv_page);267267+ if (!bio_flagged(&bio, BIO_NO_PAGE_REF))268268+ put_page(bvec->bv_page);268269 }269270270271 if (unlikely(bio.bi_status))
···11631163 return 0;11641164}1165116511661166+static int d_name_cmp(struct dentry *dentry, const char *name, size_t len)11671167+{11681168+ int ret;11691169+11701170+ /* take d_lock to ensure dentry->d_name stability */11711171+ spin_lock(&dentry->d_lock);11721172+ ret = dentry->d_name.len - len;11731173+ if (!ret)11741174+ ret = memcmp(dentry->d_name.name, name, len);11751175+ spin_unlock(&dentry->d_lock);11761176+ return ret;11771177+}11781178+11661179/*11671180 * Incorporate results into the local cache. This is either just11681181 * one inode, or a directory, dentry, and possibly linked-to inode (e.g.,···14251412 err = splice_dentry(&req->r_dentry, in);14261413 if (err < 0)14271414 goto done;14281428- } else if (rinfo->head->is_dentry) {14151415+ } else if (rinfo->head->is_dentry &&14161416+ !d_name_cmp(req->r_dentry, rinfo->dname, rinfo->dname_len)) {14291417 struct ceph_vino *ptvino = NULL;1430141814311419 if ((le32_to_cpu(rinfo->diri.in->cap.caps) & CEPH_CAP_FILE_SHARED) ||
+59-11
fs/ceph/mds_client.c
···14141414 list_add(&ci->i_prealloc_cap_flush->i_list, &to_remove);14151415 ci->i_prealloc_cap_flush = NULL;14161416 }14171417+14181418+ if (drop &&14191419+ ci->i_wrbuffer_ref_head == 0 &&14201420+ ci->i_wr_ref == 0 &&14211421+ ci->i_dirty_caps == 0 &&14221422+ ci->i_flushing_caps == 0) {14231423+ ceph_put_snap_context(ci->i_head_snapc);14241424+ ci->i_head_snapc = NULL;14251425+ }14171426 }14181427 spin_unlock(&ci->i_ceph_lock);14191428 while (!list_empty(&to_remove)) {···21702161 return path;21712162}2172216321642164+/* Duplicate the dentry->d_name.name safely */21652165+static int clone_dentry_name(struct dentry *dentry, const char **ppath,21662166+ int *ppathlen)21672167+{21682168+ u32 len;21692169+ char *name;21702170+21712171+retry:21722172+ len = READ_ONCE(dentry->d_name.len);21732173+ name = kmalloc(len + 1, GFP_NOFS);21742174+ if (!name)21752175+ return -ENOMEM;21762176+21772177+ spin_lock(&dentry->d_lock);21782178+ if (dentry->d_name.len != len) {21792179+ spin_unlock(&dentry->d_lock);21802180+ kfree(name);21812181+ goto retry;21822182+ }21832183+ memcpy(name, dentry->d_name.name, len);21842184+ spin_unlock(&dentry->d_lock);21852185+21862186+ name[len] = '\0';21872187+ *ppath = name;21882188+ *ppathlen = len;21892189+ return 0;21902190+}21912191+21732192static int build_dentry_path(struct dentry *dentry, struct inode *dir,21742193 const char **ppath, int *ppathlen, u64 *pino,21752175- int *pfreepath)21942194+ bool *pfreepath, bool parent_locked)21762195{21962196+ int ret;21772197 char *path;2178219821792199 rcu_read_lock();···22112173 if (dir && ceph_snap(dir) == CEPH_NOSNAP) {22122174 *pino = ceph_ino(dir);22132175 rcu_read_unlock();22142214- *ppath = dentry->d_name.name;22152215- *ppathlen = dentry->d_name.len;21762176+ if (parent_locked) {21772177+ *ppath = dentry->d_name.name;21782178+ *ppathlen = dentry->d_name.len;21792179+ } else {21802180+ ret = clone_dentry_name(dentry, ppath, ppathlen);21812181+ if (ret)21822182+ return ret;21832183+ 
*pfreepath = true;21842184+ }22162185 return 0;22172186 }22182187 rcu_read_unlock();···22272182 if (IS_ERR(path))22282183 return PTR_ERR(path);22292184 *ppath = path;22302230- *pfreepath = 1;21852185+ *pfreepath = true;22312186 return 0;22322187}2233218822342189static int build_inode_path(struct inode *inode,22352190 const char **ppath, int *ppathlen, u64 *pino,22362236- int *pfreepath)21912191+ bool *pfreepath)22372192{22382193 struct dentry *dentry;22392194 char *path;···22492204 if (IS_ERR(path))22502205 return PTR_ERR(path);22512206 *ppath = path;22522252- *pfreepath = 1;22072207+ *pfreepath = true;22532208 return 0;22542209}22552210···22602215static int set_request_path_attr(struct inode *rinode, struct dentry *rdentry,22612216 struct inode *rdiri, const char *rpath,22622217 u64 rino, const char **ppath, int *pathlen,22632263- u64 *ino, int *freepath)22182218+ u64 *ino, bool *freepath, bool parent_locked)22642219{22652220 int r = 0;22662221···22702225 ceph_snap(rinode));22712226 } else if (rdentry) {22722227 r = build_dentry_path(rdentry, rdiri, ppath, pathlen, ino,22732273- freepath);22282228+ freepath, parent_locked);22742229 dout(" dentry %p %llx/%.*s\n", rdentry, *ino, *pathlen,22752230 *ppath);22762231 } else if (rpath || rino) {···22962251 const char *path2 = NULL;22972252 u64 ino1 = 0, ino2 = 0;22982253 int pathlen1 = 0, pathlen2 = 0;22992299- int freepath1 = 0, freepath2 = 0;22542254+ bool freepath1 = false, freepath2 = false;23002255 int len;23012256 u16 releases;23022257 void *p, *end;···2304225923052260 ret = set_request_path_attr(req->r_inode, req->r_dentry,23062261 req->r_parent, req->r_path1, req->r_ino1.ino,23072307- &path1, &pathlen1, &ino1, &freepath1);22622262+ &path1, &pathlen1, &ino1, &freepath1,22632263+ test_bit(CEPH_MDS_R_PARENT_LOCKED,22642264+ &req->r_req_flags));23082265 if (ret < 0) {23092266 msg = ERR_PTR(ret);23102267 goto out;23112268 }2312226922702270+ /* If r_old_dentry is set, then assume that its parent is locked */23132271 
ret = set_request_path_attr(NULL, req->r_old_dentry,23142272 req->r_old_dentry_dir,23152273 req->r_path2, req->r_ino2.ino,23162316- &path2, &pathlen2, &ino2, &freepath2);22742274+ &path2, &pathlen2, &ino2, &freepath2, true);23172275 if (ret < 0) {23182276 msg = ERR_PTR(ret);23192277 goto out_free1;
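clone_dentry_name() above samples the name length without the lock, allocates, then re-checks the length under d_lock and retries if a rename slipped in, so the allocation never happens with a spinlock held. A userspace sketch of the same retry shape (toy types, pthread mutex in place of d_lock):

```c
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for a dentry name protected by d_lock. */
struct name {
	pthread_mutex_t lock;
	size_t len;
	const char *str;
};

static struct name demo = { PTHREAD_MUTEX_INITIALIZER, 5, "hello" };

/*
 * Copy n->str without allocating under the lock: sample the length,
 * allocate, re-check the length with the lock held, retry if it
 * changed.  Mirrors the clone_dentry_name() shape.
 */
static char *clone_name(struct name *n, size_t *out_len)
{
	for (;;) {
		size_t len = n->len;	/* racy read, like READ_ONCE() */
		char *copy = malloc(len + 1);

		if (!copy)
			return NULL;

		pthread_mutex_lock(&n->lock);
		if (n->len != len) {
			pthread_mutex_unlock(&n->lock);
			free(copy);
			continue;	/* length changed: retry */
		}
		memcpy(copy, n->str, len);
		pthread_mutex_unlock(&n->lock);

		copy[len] = '\0';
		*out_len = len;
		return copy;
	}
}

static int check_clone(void)
{
	size_t len;
	char *c = clone_name(&demo, &len);
	int ok = c && len == 5 && strcmp(c, "hello") == 0;

	free(c);
	return ok;
}
```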
+6-1
fs/ceph/snap.c
···572572 old_snapc = NULL;573573574574update_snapc:575575- if (ci->i_head_snapc) {575575+ if (ci->i_wrbuffer_ref_head == 0 &&576576+ ci->i_wr_ref == 0 &&577577+ ci->i_dirty_caps == 0 &&578578+ ci->i_flushing_caps == 0) {579579+ ci->i_head_snapc = NULL;580580+ } else {576581 ci->i_head_snapc = ceph_get_snap_context(new_snapc);577582 dout(" new snapc is %p\n", new_snapc);578583 }
+1-14
fs/cifs/file.c
···28772877 struct cifs_tcon *tcon;28782878 struct cifs_sb_info *cifs_sb;28792879 struct dentry *dentry = ctx->cfile->dentry;28802880- unsigned int i;28812880 int rc;2882288128832882 tcon = tlink_tcon(ctx->cfile->tlink);···29392940 list_del_init(&wdata->list);29402941 kref_put(&wdata->refcount, cifs_uncached_writedata_release);29412942 }29422942-29432943- if (!ctx->direct_io)29442944- for (i = 0; i < ctx->npages; i++)29452945- put_page(ctx->bv[i].bv_page);2946294329472944 cifs_stats_bytes_written(tcon, ctx->total_len);29482945 set_bit(CIFS_INO_INVALID_MAPPING, &CIFS_I(dentry->d_inode)->flags);···35773582 struct iov_iter *to = &ctx->iter;35783583 struct cifs_sb_info *cifs_sb;35793584 struct cifs_tcon *tcon;35803580- unsigned int i;35813585 int rc;3582358635833587 tcon = tlink_tcon(ctx->cfile->tlink);···36603666 kref_put(&rdata->refcount, cifs_uncached_readdata_release);36613667 }3662366836633663- if (!ctx->direct_io) {36643664- for (i = 0; i < ctx->npages; i++) {36653665- if (ctx->should_dirty)36663666- set_page_dirty(ctx->bv[i].bv_page);36673667- put_page(ctx->bv[i].bv_page);36683668- }36693669-36693669+ if (!ctx->direct_io)36703670 ctx->total_len = ctx->len - iov_iter_count(to);36713671- }3672367136733672 /* mask nodata case */36743673 if (rc == -ENODATA)
+4
fs/cifs/inode.c
···17351735 if (rc == 0 || rc != -EBUSY)17361736 goto do_rename_exit;1737173717381738+ /* Don't fall back to using SMB on SMB 2+ mount */17391739+ if (server->vals->protocol_id != 0)17401740+ goto do_rename_exit;17411741+17381742 /* open-file renames don't work across directories */17391743 if (to_dentry->d_parent != from_dentry->d_parent)17401744 goto do_rename_exit;
+22-1
fs/cifs/misc.c
···789789{790790 struct cifs_aio_ctx *ctx;791791792792+ /*793793+ * Must use kzalloc to initialize ctx->bv to NULL and ctx->direct_io794794+ * to false so that we know when we have to unreference pages within795795+ * cifs_aio_ctx_release()796796+ */792797 ctx = kzalloc(sizeof(struct cifs_aio_ctx), GFP_KERNEL);793798 if (!ctx)794799 return NULL;···812807 struct cifs_aio_ctx, refcount);813808814809 cifsFileInfo_put(ctx->cfile);815815- kvfree(ctx->bv);810810+811811+ /*812812+ * ctx->bv is only set if setup_aio_ctx_iter() was called successfully,813813+ * which means that iov_iter_get_pages() was a success and thus that814814+ * we have taken reference on pages.815815+ */816816+ if (ctx->bv) {817817+ unsigned i;818818+819819+ for (i = 0; i < ctx->npages; i++) {820820+ if (ctx->should_dirty)821821+ set_page_dirty(ctx->bv[i].bv_page);822822+ put_page(ctx->bv[i].bv_page);823823+ }824824+ kvfree(ctx->bv);825825+ }826826+816827 kfree(ctx);817828}818829
···44 * supporting fast/efficient IO.55 *66 * A note on the read/write ordering memory barriers that are matched between77- * the application and kernel side. When the application reads the CQ ring88- * tail, it must use an appropriate smp_rmb() to order with the smp_wmb()99- * the kernel uses after writing the tail. Failure to do so could cause a1010- * delay in when the application notices that completion events available.1111- * This isn't a fatal condition. Likewise, the application must use an1212- * appropriate smp_wmb() both before writing the SQ tail, and after writing1313- * the SQ tail. The first one orders the sqe writes with the tail write, and1414- * the latter is paired with the smp_rmb() the kernel will issue before1515- * reading the SQ tail on submission.77+ * the application and kernel side.88+ *99+ * After the application reads the CQ ring tail, it must use an1010+ * appropriate smp_rmb() to pair with the smp_wmb() the kernel uses1111+ * before writing the tail (using smp_load_acquire to read the tail will1212+ * do). It also needs a smp_mb() before updating CQ head (ordering the1313+ * entry load(s) with the head store), pairing with an implicit barrier1414+ * through a control-dependency in io_get_cqring (smp_store_release to1515+ * store head will do). Failure to do so could lead to reading invalid1616+ * CQ entries.1717+ *1818+ * Likewise, the application must use an appropriate smp_wmb() before1919+ * writing the SQ tail (ordering SQ entry stores with the tail store),2020+ * which pairs with smp_load_acquire in io_get_sqring (smp_store_release2121+ * to store the tail will do). 
And it needs a barrier ordering the SQ2222+ * head load before writing new SQ entries (smp_load_acquire to read2323+ * head will do).2424+ *2525+ * When using the SQ poll thread (IORING_SETUP_SQPOLL), the application2626+ * needs to check the SQ flags for IORING_SQ_NEED_WAKEUP *after*2727+ * updating the SQ tail; a full memory barrier smp_mb() is needed2828+ * between.1629 *1730 * Also see the examples in the liburing library:1831 *···8370 u32 tail ____cacheline_aligned_in_smp;8471};85727373+/*7474+ * This data is shared with the application through the mmap at offset7575+ * IORING_OFF_SQ_RING.7676+ *7777+ * The offsets to the member fields are published through struct7878+ * io_sqring_offsets when calling io_uring_setup.7979+ */8680struct io_sq_ring {8181+ /*8282+ * Head and tail offsets into the ring; the offsets need to be8383+ * masked to get valid indices.8484+ *8585+ * The kernel controls head and the application controls tail.8686+ */8787 struct io_uring r;8888+ /*8989+ * Bitmask to apply to head and tail offsets (constant, equals9090+ * ring_entries - 1)9191+ */8892 u32 ring_mask;9393+ /* Ring size (constant, power of 2) */8994 u32 ring_entries;9595+ /*9696+ * Number of invalid entries dropped by the kernel due to9797+ * invalid index stored in array9898+ *9999+ * Written by the kernel, shouldn't be modified by the100100+ * application (i.e. 
get number of "new events" by comparing to101101+ * cached value).102102+ *103103+ * After a new SQ head value was read by the application this104104+ * counter includes all submissions that were dropped reaching105105+ * the new SQ head (and possibly more).106106+ */90107 u32 dropped;108108+ /*109109+ * Runtime flags110110+ *111111+ * Written by the kernel, shouldn't be modified by the112112+ * application.113113+ *114114+ * The application needs a full memory barrier before checking115115+ * for IORING_SQ_NEED_WAKEUP after updating the sq tail.116116+ */91117 u32 flags;118118+ /*119119+ * Ring buffer of indices into array of io_uring_sqe, which is120120+ * mmapped by the application using the IORING_OFF_SQES offset.121121+ *122122+ * This indirection could e.g. be used to assign fixed123123+ * io_uring_sqe entries to operations and only submit them to124124+ * the queue when needed.125125+ *126126+ * The kernel modifies neither the indices array nor the entries127127+ * array.128128+ */92129 u32 array[];93130};94131132132+/*133133+ * This data is shared with the application through the mmap at offset134134+ * IORING_OFF_CQ_RING.135135+ *136136+ * The offsets to the member fields are published through struct137137+ * io_cqring_offsets when calling io_uring_setup.138138+ */95139struct io_cq_ring {140140+ /*141141+ * Head and tail offsets into the ring; the offsets need to be142142+ * masked to get valid indices.143143+ *144144+ * The application controls head and the kernel tail.145145+ */96146 struct io_uring r;147147+ /*148148+ * Bitmask to apply to head and tail offsets (constant, equals149149+ * ring_entries - 1)150150+ */97151 u32 ring_mask;152152+ /* Ring size (constant, power of 2) */98153 u32 ring_entries;154154+ /*155155+ * Number of completion events lost because the queue was full;156156+ * this should be avoided by the application by making sure157157+ * there are not more requests pending than there is space in158158+ * the completion queue.159159+ 
*160160+ * Written by the kernel, shouldn't be modified by the161161+ * application (i.e. get number of "new events" by comparing to162162+ * cached value).163163+ *164164+ * As completion events come in out of order this counter is not165165+ * ordered with any other data.166166+ */99167 u32 overflow;168168+ /*169169+ * Ring buffer of completion events.170170+ *171171+ * The kernel writes completion events fresh every time they are172172+ * produced, so the application is allowed to modify pending173173+ * entries.174174+ */100175 struct io_uring_cqe cqes[];101176};102177···322221 struct list_head list;323222 unsigned int flags;324223 refcount_t refs;325325-#define REQ_F_FORCE_NONBLOCK 1 /* inline submission attempt */224224+#define REQ_F_NOWAIT 1 /* must not punt to workers */326225#define REQ_F_IOPOLL_COMPLETED 2 /* polled IO has completed */327226#define REQ_F_FIXED_FILE 4 /* ctx owns file */328227#define REQ_F_SEQ_PREV 8 /* sequential with previous */···418317 /* order cqe stores with ring update */419318 smp_store_release(&ring->r.tail, ctx->cached_cq_tail);420319421421- /*422422- * Write sider barrier of tail update, app has read side. 
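The rewritten io_uring comment spells out which barrier pairs each side needs: the producer's release store of the tail pairs with the consumer's acquire load of it, and likewise for the head in the other direction. A minimal single-producer/single-consumer ring using the C11 equivalents of smp_store_release()/smp_load_acquire() (a sketch of the barrier pairing, not io_uring's actual layout):

```c
#include <stdatomic.h>
#include <stdbool.h>

#define RING_ENTRIES 8		/* power of 2, as in the real rings */
#define RING_MASK (RING_ENTRIES - 1)

struct ring {
	_Atomic unsigned head;	/* consumer-owned */
	_Atomic unsigned tail;	/* producer-owned */
	int entries[RING_ENTRIES];
};

/* Producer: write the entry, then publish it with a release store. */
static bool ring_push(struct ring *r, int v)
{
	unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
	unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);

	if (tail - head == RING_ENTRIES)
		return false;		/* full */
	r->entries[tail & RING_MASK] = v;
	/* Pairs with the consumer's acquire load of tail. */
	atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
	return true;
}

/* Consumer: acquire-load tail, read the entry, then release head. */
static bool ring_pop(struct ring *r, int *v)
{
	unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
	unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);

	if (tail == head)
		return false;		/* empty */
	*v = r->entries[head & RING_MASK];
	/* Pairs with the producer's acquire load of head. */
	atomic_store_explicit(&r->head, head + 1, memory_order_release);
	return true;
}

static int check_ring(void)
{
	static struct ring r;
	int v = 0;

	if (!ring_push(&r, 42) || !ring_pop(&r, &v) || v != 42)
		return 0;
	return !ring_pop(&r, &v);	/* empty again */
}
```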
See423423- * comment at the top of this file.424424- */425425- smp_wmb();426426-427320 if (wq_has_sleeper(&ctx->cq_wait)) {428321 wake_up_interruptible(&ctx->cq_wait);429322 kill_fasync(&ctx->cq_fasync, SIGIO, POLL_IN);···431336 unsigned tail;432337433338 tail = ctx->cached_cq_tail;434434- /* See comment at the top of the file */435435- smp_rmb();339339+ /*340340+ * writes to the cq entry need to come after reading head; the341341+ * control dependency is enough as we're using WRITE_ONCE to342342+ * fill the cq entry343343+ */436344 if (tail - READ_ONCE(ring->r.head) == ring->ring_entries)437345 return NULL;438346···838740}839741840742static int io_prep_rw(struct io_kiocb *req, const struct sqe_submit *s,841841- bool force_nonblock, struct io_submit_state *state)743743+ bool force_nonblock)842744{843745 const struct io_uring_sqe *sqe = s->sqe;844746 struct io_ring_ctx *ctx = req->ctx;···872774 ret = kiocb_set_rw_flags(kiocb, READ_ONCE(sqe->rw_flags));873775 if (unlikely(ret))874776 return ret;875875- if (force_nonblock) {777777+778778+ /* don't allow async punt if RWF_NOWAIT was requested */779779+ if (kiocb->ki_flags & IOCB_NOWAIT)780780+ req->flags |= REQ_F_NOWAIT;781781+782782+ if (force_nonblock)876783 kiocb->ki_flags |= IOCB_NOWAIT;877877- req->flags |= REQ_F_FORCE_NONBLOCK;878878- }784784+879785 if (ctx->flags & IORING_SETUP_IOPOLL) {880786 if (!(kiocb->ki_flags & IOCB_DIRECT) ||881787 !kiocb->ki_filp->f_op->iopoll)···1040938}10419391042940static int io_read(struct io_kiocb *req, const struct sqe_submit *s,10431043- bool force_nonblock, struct io_submit_state *state)941941+ bool force_nonblock)1044942{1045943 struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;1046944 struct kiocb *kiocb = &req->rw;···1049947 size_t iov_count;1050948 int ret;105194910521052- ret = io_prep_rw(req, s, force_nonblock, state);950950+ ret = io_prep_rw(req, s, force_nonblock);1053951 if (ret)1054952 return ret;1055953 file = kiocb->ki_filp;···1087985}10889861089987static 
int io_write(struct io_kiocb *req, const struct sqe_submit *s,10901090- bool force_nonblock, struct io_submit_state *state)988988+ bool force_nonblock)1091989{1092990 struct iovec inline_vecs[UIO_FASTIOV], *iovec = inline_vecs;1093991 struct kiocb *kiocb = &req->rw;···1096994 size_t iov_count;1097995 int ret;109899610991099- ret = io_prep_rw(req, s, force_nonblock, state);997997+ ret = io_prep_rw(req, s, force_nonblock);1100998 if (ret)1101999 return ret;11021000···14381336}1439133714401338static int __io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *req,14411441- const struct sqe_submit *s, bool force_nonblock,14421442- struct io_submit_state *state)13391339+ const struct sqe_submit *s, bool force_nonblock)14431340{14441341 int ret, opcode;14451342···14541353 case IORING_OP_READV:14551354 if (unlikely(s->sqe->buf_index))14561355 return -EINVAL;14571457- ret = io_read(req, s, force_nonblock, state);13561356+ ret = io_read(req, s, force_nonblock);14581357 break;14591358 case IORING_OP_WRITEV:14601359 if (unlikely(s->sqe->buf_index))14611360 return -EINVAL;14621462- ret = io_write(req, s, force_nonblock, state);13611361+ ret = io_write(req, s, force_nonblock);14631362 break;14641363 case IORING_OP_READ_FIXED:14651465- ret = io_read(req, s, force_nonblock, state);13641364+ ret = io_read(req, s, force_nonblock);14661365 break;14671366 case IORING_OP_WRITE_FIXED:14681468- ret = io_write(req, s, force_nonblock, state);13671367+ ret = io_write(req, s, force_nonblock);14691368 break;14701369 case IORING_OP_FSYNC:14711370 ret = io_fsync(req, s->sqe, force_nonblock);···15381437 struct sqe_submit *s = &req->submit;15391438 const struct io_uring_sqe *sqe = s->sqe;1540143915411541- /* Ensure we clear previously set forced non-block flag */15421542- req->flags &= ~REQ_F_FORCE_NONBLOCK;14401440+ /* Ensure we clear previously set non-block flag */15431441 req->rw.ki_flags &= ~IOCB_NOWAIT;1544144215451443 ret = 0;···15571457 s->has_user = cur_mm != NULL;15581458 
s->needs_lock = true;15591459 do {15601560- ret = __io_submit_sqe(ctx, req, s, false, NULL);14601460+ ret = __io_submit_sqe(ctx, req, s, false);15611461 /*15621462 * We can get EAGAIN for polled IO even though15631463 * we're forcing a sync submission from here,···15681468 break;15691469 cond_resched();15701470 } while (1);15711571-15721572- /* drop submission reference */15731573- io_put_req(req);15741471 }14721472+14731473+ /* drop submission reference */14741474+ io_put_req(req);14751475+15751476 if (ret) {15761477 io_cqring_add_event(ctx, sqe->user_data, ret, 0);15771478 io_put_req(req);···17241623 if (unlikely(ret))17251624 goto out;1726162517271727- ret = __io_submit_sqe(ctx, req, s, true, state);17281728- if (ret == -EAGAIN) {16261626+ ret = __io_submit_sqe(ctx, req, s, true);16271627+ if (ret == -EAGAIN && !(req->flags & REQ_F_NOWAIT)) {17291628 struct io_uring_sqe *sqe_copy;1730162917311630 sqe_copy = kmalloc(sizeof(*sqe_copy), GFP_KERNEL);···17991698 * write new data to them.18001699 */18011700 smp_store_release(&ring->r.head, ctx->cached_sq_head);18021802-18031803- /*18041804- * write side barrier of head update, app has read side. 
See18051805- * comment at the top of this file18061806- */18071807- smp_wmb();18081701 }18091809-}18101810-18111811-/*18121812- * Undo last io_get_sqring()18131813- */18141814-static void io_drop_sqring(struct io_ring_ctx *ctx)18151815-{18161816- ctx->cached_sq_head--;18171702}1818170318191704/*···18241737 * though the application is the one updating it.18251738 */18261739 head = ctx->cached_sq_head;18271827- /* See comment at the top of this file */18281828- smp_rmb();18291829- if (head == READ_ONCE(ring->r.tail))17401740+ /* make sure SQ entry isn't read before tail */17411741+ if (head == smp_load_acquire(&ring->r.tail))18301742 return false;1831174318321744 head = READ_ONCE(ring->array[head & ctx->sq_mask]);···18391753 /* drop invalid entries */18401754 ctx->cached_sq_head++;18411755 ring->dropped++;18421842- /* See comment at the top of this file */18431843- smp_wmb();18441756 return false;18451757}18461758···1948186419491865 /* Tell userspace we may need a wakeup call */19501866 ctx->sq_ring->flags |= IORING_SQ_NEED_WAKEUP;19511951- smp_wmb();18671867+ /* make sure to read SQ tail after writing flags */18681868+ smp_mb();1952186919531870 if (!io_get_sqring(ctx, &sqes[0])) {19541871 if (kthread_should_stop()) {···19621877 finish_wait(&ctx->sqo_wait, &wait);1963187819641879 ctx->sq_ring->flags &= ~IORING_SQ_NEED_WAKEUP;19651965- smp_wmb();19661880 continue;19671881 }19681882 finish_wait(&ctx->sqo_wait, &wait);1969188319701884 ctx->sq_ring->flags &= ~IORING_SQ_NEED_WAKEUP;19711971- smp_wmb();19721885 }1973188619741887 i = 0;···20111928static int io_ring_submit(struct io_ring_ctx *ctx, unsigned int to_submit)20121929{20131930 struct io_submit_state state, *statep = NULL;20142014- int i, ret = 0, submit = 0;19311931+ int i, submit = 0;2015193220161933 if (to_submit > IO_PLUG_THRESHOLD) {20171934 io_submit_state_start(&state, ctx, to_submit);···2020193720211938 for (i = 0; i < to_submit; i++) {20221939 struct sqe_submit s;19401940+ int ret;2023194120241942 if 
(!io_get_sqring(ctx, &s))20251943 break;···20281944 s.has_user = true;20291945 s.needs_lock = false;20301946 s.needs_fixed_file = false;19471947+ submit++;2031194820321949 ret = io_submit_sqe(ctx, &s, statep);20332033- if (ret) {20342034- io_drop_sqring(ctx);20352035- break;20362036- }20372037-20382038- submit++;19501950+ if (ret)19511951+ io_cqring_add_event(ctx, s.sqe->user_data, ret, 0);20391952 }20401953 io_commit_sqring(ctx);2041195420421955 if (statep)20431956 io_submit_state_end(statep);2044195720452045- return submit ? submit : ret;19581958+ return submit;20461959}2047196020481961static unsigned io_cqring_events(struct io_cq_ring *ring)···23202239 mmgrab(current->mm);23212240 ctx->sqo_mm = current->mm;2322224123232323- ret = -EINVAL;23242324- if (!cpu_possible(p->sq_thread_cpu))23252325- goto err;23262326-23272242 if (ctx->flags & IORING_SETUP_SQPOLL) {23282243 ret = -EPERM;23292244 if (!capable(CAP_SYS_ADMIN))···23302253 ctx->sq_thread_idle = HZ;2331225423322255 if (p->flags & IORING_SETUP_SQ_AFF) {23332333- int cpu;22562256+ int cpu = array_index_nospec(p->sq_thread_cpu,22572257+ nr_cpu_ids);2334225823352335- cpu = array_index_nospec(p->sq_thread_cpu, NR_CPUS);23362259 ret = -EINVAL;23372337- if (!cpu_possible(p->sq_thread_cpu))22602260+ if (!cpu_possible(cpu))23382261 goto err;2339226223402263 ctx->sqo_thread = kthread_create_on_cpu(io_sq_thread,···2397232023982321static void io_mem_free(void *ptr)23992322{24002400- struct page *page = virt_to_head_page(ptr);23232323+ struct page *page;2401232423252325+ if (!ptr)23262326+ return;23272327+23282328+ page = virt_to_head_page(ptr);24022329 if (put_page_testzero(page))24032330 free_compound_page(page);24042331}···2443236224442363 if (ctx->account_mem)24452364 io_unaccount_mem(ctx->user, imu->nr_bvecs);24462446- kfree(imu->bvec);23652365+ kvfree(imu->bvec);24472366 imu->nr_bvecs = 0;24482367 }24492368···25352454 if (!pages || nr_pages > got_pages) {25362455 kfree(vmas);25372456 kfree(pages);25382538- pages = 
kmalloc_array(nr_pages, sizeof(struct page *),24572457+ pages = kvmalloc_array(nr_pages, sizeof(struct page *),25392458 GFP_KERNEL);25402540- vmas = kmalloc_array(nr_pages,24592459+ vmas = kvmalloc_array(nr_pages,25412460 sizeof(struct vm_area_struct *),25422461 GFP_KERNEL);25432462 if (!pages || !vmas) {···25492468 got_pages = nr_pages;25502469 }2551247025522552- imu->bvec = kmalloc_array(nr_pages, sizeof(struct bio_vec),24712471+ imu->bvec = kvmalloc_array(nr_pages, sizeof(struct bio_vec),25532472 GFP_KERNEL);25542473 ret = -ENOMEM;25552474 if (!imu->bvec) {···25882507 }25892508 if (ctx->account_mem)25902509 io_unaccount_mem(ctx->user, nr_pages);25102510+ kvfree(imu->bvec);25912511 goto err;25922512 }25932513···2611252926122530 ctx->nr_user_bufs++;26132531 }26142614- kfree(pages);26152615- kfree(vmas);25322532+ kvfree(pages);25332533+ kvfree(vmas);26162534 return 0;26172535err:26182618- kfree(pages);26192619- kfree(vmas);25362536+ kvfree(pages);25372537+ kvfree(vmas);26202538 io_sqe_buffer_unregister(ctx);26212539 return ret;26222540}···26542572 __poll_t mask = 0;2655257326562574 poll_wait(file, &ctx->cq_wait, wait);26572657- /* See comment at the top of this file */25752575+ /*25762576+ * synchronizes with barrier from wq_has_sleeper call in25772577+ * io_commit_cqring25782578+ */26582579 smp_rmb();26592659- if (READ_ONCE(ctx->sq_ring->r.tail) + 1 != ctx->cached_sq_head)25802580+ if (READ_ONCE(ctx->sq_ring->r.tail) - ctx->cached_sq_head !=25812581+ ctx->sq_ring->ring_entries)26602582 mask |= EPOLLOUT | EPOLLWRNORM;26612583 if (READ_ONCE(ctx->cq_ring->r.head) != ctx->cached_cq_tail)26622584 mask |= EPOLLIN | EPOLLRDNORM;···27712685 mutex_lock(&ctx->uring_lock);27722686 submitted = io_ring_submit(ctx, to_submit);27732687 mutex_unlock(&ctx->uring_lock);27742774-27752775- if (submitted < 0)27762776- goto out_ctx;27772688 }27782689 if (flags & IORING_ENTER_GETEVENTS) {27792690 unsigned nr_events = 0;2780269127812692 min_complete = min(min_complete, 
ctx->cq_entries);27822782-27832783- /*27842784- * The application could have included the 'to_submit' count27852785- * in how many events it wanted to wait for. If we failed to27862786- * submit the desired count, we may need to adjust the number27872787- * of events to poll/wait for.27882788- */27892789- if (submitted < to_submit)27902790- min_complete = min_t(unsigned, submitted, min_complete);2791269327922694 if (ctx->flags & IORING_SETUP_IOPOLL) {27932695 mutex_lock(&ctx->uring_lock);···28222748 return -EOVERFLOW;2823274928242750 ctx->sq_sqes = io_mem_alloc(size);28252825- if (!ctx->sq_sqes) {28262826- io_mem_free(ctx->sq_ring);27512751+ if (!ctx->sq_sqes)28272752 return -ENOMEM;28282828- }2829275328302754 cq_ring = io_mem_alloc(struct_size(cq_ring, cqes, p->cq_entries));28312831- if (!cq_ring) {28322832- io_mem_free(ctx->sq_ring);28332833- io_mem_free(ctx->sq_sqes);27552755+ if (!cq_ring)28342756 return -ENOMEM;28352835- }2836275728372758 ctx->cq_ring = cq_ring;28382759 cq_ring->ring_mask = p->cq_entries - 1;···30022933 __acquires(ctx->uring_lock)30032934{30042935 int ret;29362936+29372937+ /*29382938+ * We're inside the ring mutex, if the ref is already dying, then29392939+ * someone else killed the ctx or is already going through29402940+ * io_uring_register().29412941+ */29422942+ if (percpu_ref_is_dying(&ctx->refs))29432943+ return -ENXIO;3005294430062945 percpu_ref_kill(&ctx->refs);30072946
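The io_uring hunks above lean on free-running 32-bit ring indices that are masked only on access: the ring is full when `tail - head == ring_entries`, and the `io_uring_poll()` change replaces a wrap-unsafe `tail + 1 != head` test with exactly that subtraction. A minimal userspace sketch of the idiom (names and sizes are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdint.h>

#define RING_ENTRIES 8u  /* must be a power of two */

struct ring {
    uint32_t head;   /* consumer index, free-running */
    uint32_t tail;   /* producer index, free-running */
    int slots[RING_ENTRIES];
};

/* Full when the producer is exactly RING_ENTRIES ahead of the consumer.
 * Unsigned subtraction keeps this correct even after the indices wrap
 * past UINT32_MAX. */
static int ring_full(const struct ring *r)
{
    return r->tail - r->head == RING_ENTRIES;
}

static int ring_empty(const struct ring *r)
{
    return r->tail == r->head;
}

static int ring_push(struct ring *r, int v)
{
    if (ring_full(r))
        return -1;
    r->slots[r->tail & (RING_ENTRIES - 1)] = v;
    r->tail++;
    return 0;
}

static int ring_pop(struct ring *r, int *v)
{
    if (ring_empty(r))
        return -1;
    *v = r->slots[r->head & (RING_ENTRIES - 1)];
    r->head++;
    return 0;
}
```

Because both indices only ever increase, the masked access picks the slot while the unmasked difference distinguishes empty (`== 0`) from full (`== RING_ENTRIES`) without wasting a spare slot.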
+12-2
fs/notify/fanotify/fanotify.c
···346346 __kernel_fsid_t fsid = {};347347348348 fsnotify_foreach_obj_type(type) {349349+ struct fsnotify_mark_connector *conn;350350+349351 if (!fsnotify_iter_should_report_type(iter_info, type))350352 continue;351353352352- fsid = iter_info->marks[type]->connector->fsid;354354+ conn = READ_ONCE(iter_info->marks[type]->connector);355355+ /* Mark is just getting destroyed or created? */356356+ if (!conn)357357+ continue;358358+ fsid = conn->fsid;353359 if (WARN_ON_ONCE(!fsid.val[0] && !fsid.val[1]))354360 continue;355361 return fsid;···414408 return 0;415409 }416410417417- if (FAN_GROUP_FLAG(group, FAN_REPORT_FID))411411+ if (FAN_GROUP_FLAG(group, FAN_REPORT_FID)) {418412 fsid = fanotify_get_fsid(iter_info);413413+ /* Racing with mark destruction or creation? */414414+ if (!fsid.val[0] && !fsid.val[1])415415+ return 0;416416+ }419417420418 event = fanotify_alloc_event(group, inode, mask, data, data_type,421419 &fsid);
+6-6
fs/notify/mark.c
···239239240240void fsnotify_put_mark(struct fsnotify_mark *mark)241241{242242- struct fsnotify_mark_connector *conn;242242+ struct fsnotify_mark_connector *conn = READ_ONCE(mark->connector);243243 void *objp = NULL;244244 unsigned int type = FSNOTIFY_OBJ_TYPE_DETACHED;245245 bool free_conn = false;246246247247 /* Catch marks that were actually never attached to object */248248- if (!mark->connector) {248248+ if (!conn) {249249 if (refcount_dec_and_test(&mark->refcnt))250250 fsnotify_final_mark_destroy(mark);251251 return;···255255 * We have to be careful so that traversals of obj_list under lock can256256 * safely grab mark reference.257257 */258258- if (!refcount_dec_and_lock(&mark->refcnt, &mark->connector->lock))258258+ if (!refcount_dec_and_lock(&mark->refcnt, &conn->lock))259259 return;260260261261- conn = mark->connector;262261 hlist_del_init_rcu(&mark->obj_list);263262 if (hlist_empty(&conn->list)) {264263 objp = fsnotify_detach_connector_from_object(conn, &type);···265266 } else {266267 __fsnotify_recalc_mask(conn);267268 }268268- mark->connector = NULL;269269+ WRITE_ONCE(mark->connector, NULL);269270 spin_unlock(&conn->lock);270271271272 fsnotify_drop_object(type, objp);···619620 /* mark should be the last entry. last is the current last entry */620621 hlist_add_behind_rcu(&mark->obj_list, &last->obj_list);621622added:622622- mark->connector = conn;623623+ WRITE_ONCE(mark->connector, conn);623624out_err:624625 spin_unlock(&conn->lock);625626 spin_unlock(&mark->lock);···807808 refcount_set(&mark->refcnt, 1);808809 fsnotify_get_group(group);809810 mark->group = group;811811+ WRITE_ONCE(mark->connector, NULL);810812}811813812814/*
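The mark.c hunks load `mark->connector` once via `READ_ONCE()` and publish or clear it with `WRITE_ONCE()`, so a lockless reader never sees a torn value and the compiler cannot re-read the pointer between the NULL check and the dereference. A minimal userspace rendition of the pattern (the macro bodies mirror the kernel's single-access idiom for word-sized objects, not its full implementation; the structs are invented):

```c
#include <assert.h>
#include <stddef.h>

/* Force a single, non-tearable access to a scalar, as the kernel's
 * READ_ONCE()/WRITE_ONCE() do for word-sized objects. */
#define READ_ONCE(x)     (*(const volatile __typeof__(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

struct connector { int fsid; };
struct mark { struct connector *connector; };

/* Sample the connector exactly once; every later use goes through the
 * local copy, so a concurrent WRITE_ONCE(mark->connector, NULL) cannot
 * bite between the check and the dereference. */
static int mark_fsid(struct mark *m, int *out)
{
    struct connector *conn = READ_ONCE(m->connector);

    if (!conn)
        return -1;      /* mark is being created or destroyed */
    *out = conn->fsid;
    return 0;
}
```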
···200200 * @dev: driver model's view of this device201201 * @usb_dev: if an interface is bound to the USB major, this will point202202 * to the sysfs representation for that device.203203- * @pm_usage_cnt: PM usage counter for this interface204203 * @reset_ws: Used for scheduling resets from atomic context.205204 * @resetting_device: USB core reset the device, so use alt setting 0 as206205 * current; needs bandwidth alloc after reset.···256257257258 struct device dev; /* interface specific device info */258259 struct device *usb_dev;259259- atomic_t pm_usage_cnt; /* usage counter for autosuspend */260260 struct work_struct reset_ws; /* for resets in atomic context */261261};262262#define to_usb_interface(d) container_of(d, struct usb_interface, dev)
-1
include/net/sctp/command.h
···105105 SCTP_CMD_T1_RETRAN, /* Mark for retransmission after T1 timeout */106106 SCTP_CMD_UPDATE_INITTAG, /* Update peer inittag */107107 SCTP_CMD_SEND_MSG, /* Send the whole use message */108108- SCTP_CMD_SEND_NEXT_ASCONF, /* Send the next ASCONF after ACK */109108 SCTP_CMD_PURGE_ASCONF_QUEUE, /* Purge all asconf queues.*/110109 SCTP_CMD_SET_ASOC, /* Restore association context */111110 SCTP_CMD_LAST
+19-1
include/net/xfrm.h
···306306};307307308308struct xfrm_if_cb {309309- struct xfrm_if *(*decode_session)(struct sk_buff *skb);309309+ struct xfrm_if *(*decode_session)(struct sk_buff *skb,310310+ unsigned short family);310311};311312312313void xfrm_if_register_cb(const struct xfrm_if_cb *ifcb);···13361335 return atomic_read(&x->tunnel_users);13371336}1338133713381338+static inline bool xfrm_id_proto_valid(u8 proto)13391339+{13401340+ switch (proto) {13411341+ case IPPROTO_AH:13421342+ case IPPROTO_ESP:13431343+ case IPPROTO_COMP:13441344+#if IS_ENABLED(CONFIG_IPV6)13451345+ case IPPROTO_ROUTING:13461346+ case IPPROTO_DSTOPTS:13471347+#endif13481348+ return true;13491349+ default:13501350+ return false;13511351+ }13521352+}13531353+13541354+/* IPSEC_PROTO_ANY only matches 3 IPsec protocols, 0 could match all. */13391355static inline int xfrm_id_proto_match(u8 proto, u8 userproto)13401356{13411357 return (!userproto || proto == userproto ||
···43494349 return 0;43504350}4351435143524352+static void __find_good_pkt_pointers(struct bpf_func_state *state,43534353+ struct bpf_reg_state *dst_reg,43544354+ enum bpf_reg_type type, u16 new_range)43554355+{43564356+ struct bpf_reg_state *reg;43574357+ int i;43584358+43594359+ for (i = 0; i < MAX_BPF_REG; i++) {43604360+ reg = &state->regs[i];43614361+ if (reg->type == type && reg->id == dst_reg->id)43624362+ /* keep the maximum range already checked */43634363+ reg->range = max(reg->range, new_range);43644364+ }43654365+43664366+ bpf_for_each_spilled_reg(i, state, reg) {43674367+ if (!reg)43684368+ continue;43694369+ if (reg->type == type && reg->id == dst_reg->id)43704370+ reg->range = max(reg->range, new_range);43714371+ }43724372+}43734373+43524374static void find_good_pkt_pointers(struct bpf_verifier_state *vstate,43534375 struct bpf_reg_state *dst_reg,43544376 enum bpf_reg_type type,43554377 bool range_right_open)43564378{43574357- struct bpf_func_state *state = vstate->frame[vstate->curframe];43584358- struct bpf_reg_state *regs = state->regs, *reg;43594379 u16 new_range;43604360- int i, j;43804380+ int i;4361438143624382 if (dst_reg->off < 0 ||43634383 (dst_reg->off == 0 && range_right_open))···44424422 * the range won't allow anything.44434423 * dst_reg->off is known < MAX_PACKET_OFF, therefore it fits in a u16.44444424 */44454445- for (i = 0; i < MAX_BPF_REG; i++)44464446- if (regs[i].type == type && regs[i].id == dst_reg->id)44474447- /* keep the maximum range already checked */44484448- regs[i].range = max(regs[i].range, new_range);44494449-44504450- for (j = 0; j <= vstate->curframe; j++) {44514451- state = vstate->frame[j];44524452- bpf_for_each_spilled_reg(i, state, reg) {44534453- if (!reg)44544454- continue;44554455- if (reg->type == type && reg->id == dst_reg->id)44564456- reg->range = max(reg->range, new_range);44574457- }44584458- }44254425+ for (i = 0; i <= vstate->curframe; i++)44264426+ __find_good_pkt_pointers(vstate->frame[i], dst_reg, 
type,44274427+ new_range);44594428}4460442944614430/* compute branch direction of the expression "if (reg opcode val) goto target;"···49184909 }49194910}4920491149124912+static void __mark_ptr_or_null_regs(struct bpf_func_state *state, u32 id,49134913+ bool is_null)49144914+{49154915+ struct bpf_reg_state *reg;49164916+ int i;49174917+49184918+ for (i = 0; i < MAX_BPF_REG; i++)49194919+ mark_ptr_or_null_reg(state, &state->regs[i], id, is_null);49204920+49214921+ bpf_for_each_spilled_reg(i, state, reg) {49224922+ if (!reg)49234923+ continue;49244924+ mark_ptr_or_null_reg(state, reg, id, is_null);49254925+ }49264926+}49274927+49214928/* The logic is similar to find_good_pkt_pointers(), both could eventually49224929 * be folded together at some point.49234930 */···49414916 bool is_null)49424917{49434918 struct bpf_func_state *state = vstate->frame[vstate->curframe];49444944- struct bpf_reg_state *reg, *regs = state->regs;49194919+ struct bpf_reg_state *regs = state->regs;49454920 u32 ref_obj_id = regs[regno].ref_obj_id;49464921 u32 id = regs[regno].id;49474947- int i, j;49224922+ int i;4948492349494924 if (ref_obj_id && ref_obj_id == id && is_null)49504925 /* regs[regno] is in the " == NULL" branch.···49534928 */49544929 WARN_ON_ONCE(release_reference_state(state, id));4955493049564956- for (i = 0; i < MAX_BPF_REG; i++)49574957- mark_ptr_or_null_reg(state, &regs[i], id, is_null);49584958-49594959- for (j = 0; j <= vstate->curframe; j++) {49604960- state = vstate->frame[j];49614961- bpf_for_each_spilled_reg(i, state, reg) {49624962- if (!reg)49634963- continue;49644964- mark_ptr_or_null_reg(state, reg, id, is_null);49654965- }49664966- }49314931+ for (i = 0; i <= vstate->curframe; i++)49324932+ __mark_ptr_or_null_regs(vstate->frame[i], id, is_null);49674933}4968493449694935static bool try_match_pkt_pointers(const struct bpf_insn *insn,
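Both verifier hunks fix the same oversight: the old loops walked one frame's registers but every frame's spilled slots, so registers in parent call frames were skipped. The refactor moves the per-frame work into a helper and iterates `0..curframe` once. A toy model of that shape (all struct and function names here are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_REG    4
#define MAX_FRAMES 3

struct reg_state { int type; int id; uint16_t range; };
struct func_state { struct reg_state regs[MAX_REG]; };
struct verifier_state { int curframe; struct func_state *frame[MAX_FRAMES]; };

/* Per-frame helper: widen the checked range of every register that
 * aliases dst (same type and id), keeping the maximum already checked. */
static void widen_frame(struct func_state *st, int type, int id,
                        uint16_t new_range)
{
    for (int i = 0; i < MAX_REG; i++) {
        struct reg_state *r = &st->regs[i];

        if (r->type == type && r->id == id && r->range < new_range)
            r->range = new_range;
    }
}

/* Apply the helper to every call frame, not just the innermost one. */
static void widen_all_frames(struct verifier_state *vs, int type, int id,
                             uint16_t new_range)
{
    for (int f = 0; f <= vs->curframe; f++)
        widen_frame(vs->frame[f], type, id, new_range);
}
```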
+4
kernel/sched/fair.c
···20072007 if (p->last_task_numa_placement) {20082008 delta = runtime - p->last_sum_exec_runtime;20092009 *period = now - p->last_task_numa_placement;20102010+20112011+ /* Avoid time going backwards, prevent potential divide error: */20122012+ if (unlikely((s64)*period < 0))20132013+ *period = 0;20102014 } else {20112015 delta = p->se.avg.load_sum;20122016 *period = LOAD_AVG_MAX;
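The fair.c hunk guards a classic unsigned-time pitfall: `now - p->last_task_numa_placement` wraps to an enormous value if the clock ever steps backwards, and that bogus period later feeds a division. Reinterpreting the difference as signed exposes the wrap as a negative number, which is then clamped. The arithmetic in isolation (the type aliases mimic the kernel's `u64`/`s64`):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t u64;
typedef int64_t s64;

/* Elapsed time between two unsigned timestamps, clamped to zero when
 * the clock appears to have gone backwards: the wrapped difference
 * shows up as a negative value once reinterpreted as signed. */
static u64 elapsed_clamped(u64 now, u64 then)
{
    u64 period = now - then;

    if ((s64)period < 0)
        period = 0;
    return period;
}
```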
+15-2
kernel/seccomp.c
···502502 *503503 * Caller must be holding current->sighand->siglock lock.504504 *505505- * Returns 0 on success, -ve on error.505505+ * Returns 0 on success, -ve on error, or506506+ * - in TSYNC mode: the pid of a thread which was either not in the correct507507+ * seccomp mode or did not have an ancestral seccomp filter508508+ * - in NEW_LISTENER mode: the fd of the new listener506509 */507510static long seccomp_attach_filter(unsigned int flags,508511 struct seccomp_filter *filter)···12611258 if (flags & ~SECCOMP_FILTER_FLAG_MASK)12621259 return -EINVAL;1263126012611261+ /*12621262+ * In the successful case, NEW_LISTENER returns the new listener fd.12631263+ * But in the failure case, TSYNC returns the thread that died. If you12641264+ * combine these two flags, there's no way to tell whether something12651265+ * succeeded or failed. So, let's disallow this combination.12661266+ */12671267+ if ((flags & SECCOMP_FILTER_FLAG_TSYNC) &&12681268+ (flags & SECCOMP_FILTER_FLAG_NEW_LISTENER))12691269+ return -EINVAL;12701270+12641271 /* Prepare the new filter before holding any locks. */12651272 prepared = seccomp_prepare_user_filter(filter);12661273 if (IS_ERR(prepared))···13171304 mutex_unlock(&current->signal->cred_guard_mutex);13181305out_put_fd:13191306 if (flags & SECCOMP_FILTER_FLAG_NEW_LISTENER) {13201320- if (ret < 0) {13071307+ if (ret) {13211308 listener_f->private_data = NULL;13221309 fput(listener_f);13231310 put_unused_fd(listener);
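The new seccomp guard exists because TSYNC and NEW_LISTENER both encode their result in a positive return value (the offending tid vs. the new listener fd), so their combination is ambiguous by construction. The exclusivity check in isolation (the flag bit positions are the kernel's real ones; the error constant stands in for `-EINVAL`):

```c
#include <assert.h>

#define FLAG_TSYNC        (1u << 0)  /* SECCOMP_FILTER_FLAG_TSYNC */
#define FLAG_NEW_LISTENER (1u << 3)  /* SECCOMP_FILTER_FLAG_NEW_LISTENER */
#define ERR_EINVAL        (-22)      /* kernel's -EINVAL */

/* Both flags claim the positive part of the return-value space, so a
 * caller passing both could not tell a new listener fd from the tid of
 * a thread that failed TSYNC. Fail early instead. */
static int check_filter_flags(unsigned int flags)
{
    if ((flags & FLAG_TSYNC) && (flags & FLAG_NEW_LISTENER))
        return ERR_EINVAL;
    return 0;
}
```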
+1-1
kernel/trace/ring_buffer.c
···762762763763 preempt_disable_notrace();764764 time = rb_time_stamp(buffer);765765- preempt_enable_no_resched_notrace();765765+ preempt_enable_notrace();766766767767 return time;768768}
···19371937 depends on m19381938 depends on BLOCK && (64BIT || LBDAF) # for XFS, BTRFS19391939 depends on NETDEVICES && NET_CORE && INET # for TUN19401940+ depends on BLOCK19401941 select TEST_LKM19411942 select XFS_FS19421943 select TUN
+3-3
lib/test_vmalloc.c
···383383static int test_func(void *private)384384{385385 struct test_driver *t = private;386386- cpumask_t newmask = CPU_MASK_NONE;387386 int random_array[ARRAY_SIZE(test_case_array)];388387 int index, i, j, ret;389388 ktime_t kt;390389 u64 delta;391390392392- cpumask_set_cpu(t->cpu, &newmask);393393- set_cpus_allowed_ptr(current, &newmask);391391+ ret = set_cpus_allowed_ptr(current, cpumask_of(t->cpu));392392+ if (ret < 0)393393+ pr_err("Failed to set affinity to %d CPU\n", t->cpu);394394395395 for (i = 0; i < ARRAY_SIZE(test_case_array); i++)396396 random_array[i] = i;
+1
mm/memory_hotplug.c
···874874 */875875 mem = find_memory_block(__pfn_to_section(pfn));876876 nid = mem->nid;877877+ put_device(&mem->dev);877878878879 /* associate pfn range with the zone */879880 zone = move_pfn_range(online_type, nid, pfn, nr_pages);
+19-8
mm/page_alloc.c
···266266267267int min_free_kbytes = 1024;268268int user_min_free_kbytes = -1;269269+#ifdef CONFIG_DISCONTIGMEM270270+/*271271+ * DiscontigMem defines memory ranges as separate pg_data_t even if the ranges272272+ * are not on separate NUMA nodes. Functionally this works but with273273+ * watermark_boost_factor, it can reclaim prematurely as the ranges can be274274+ * quite small. By default, do not boost watermarks on discontigmem as in275275+ * many cases very high-order allocations like THP are likely to be276276+ * unsupported and the premature reclaim offsets the advantage of long-term277277+ * fragmentation avoidance.278278+ */279279+int watermark_boost_factor __read_mostly;280280+#else269281int watermark_boost_factor __read_mostly = 15000;282282+#endif270283int watermark_scale_factor = 10;271284272285static unsigned long nr_kernel_pages __initdata;···34323419 alloc_flags |= ALLOC_KSWAPD;3433342034343421#ifdef CONFIG_ZONE_DMA3234223422+ if (!zone)34233423+ return alloc_flags;34243424+34353425 if (zone_idx(zone) != ZONE_NORMAL)34363436- goto out;34263426+ return alloc_flags;3437342734383428 /*34393429 * If ZONE_DMA32 exists, assume it is the one after ZONE_NORMAL and···34453429 */34463430 BUILD_BUG_ON(ZONE_NORMAL - ZONE_DMA32 != 1);34473431 if (nr_online_nodes > 1 && !populated_zone(--zone))34483448- goto out;34323432+ return alloc_flags;3449343334503450-out:34343434+ alloc_flags |= ALLOC_NOFRAGMENT;34513435#endif /* CONFIG_ZONE_DMA32 */34523436 return alloc_flags;34533437}···3788377237893773 memalloc_noreclaim_restore(noreclaim_flag);37903774 psi_memstall_leave(&pflags);37913791-37923792- if (*compact_result <= COMPACT_INACTIVE) {37933793- WARN_ON_ONCE(page);37943794- return NULL;37953795- }3796377537973776 /*37983777 * At least in one zone compaction wasn't deferred or skipped, so let's
+1
net/appletalk/ddp.c
···19151915 ddp_dl = register_snap_client(ddp_snap_id, atalk_rcv);19161916 if (!ddp_dl) {19171917 pr_crit("Unable to register DDP with SNAP.\n");19181918+ rc = -ENOMEM;19181919 goto out_sock;19191920 }19201921
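The appletalk fix is a classic error-path bug: the `goto out_sock` path returned whatever stale value `rc` happened to hold, so a failed `register_snap_client()` could report success. The rule is to set the error code immediately before every `goto` into shared cleanup. A hypothetical sketch of the shape (function and parameter names are invented):

```c
#include <assert.h>
#include <stddef.h>

#define ERR_ENOMEM (-12) /* kernel's -ENOMEM */

/* Init routine with a shared cleanup tail. Each failure sets rc
 * explicitly before jumping, so no path can leak a stale success
 * code out of the function. */
static int init_proto(void *sock_ok, void *snap_ok)
{
    int rc = 0;

    if (!sock_ok) {
        rc = ERR_ENOMEM;
        goto out;
    }
    if (!snap_ok) {
        rc = ERR_ENOMEM;    /* the assignment the bug was missing */
        goto out_sock;
    }
    return 0;

out_sock:
    /* undo socket registration here */
out:
    return rc;
}
```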
+15-5
net/ipv4/esp4.c
···226226 tail[plen - 1] = proto;227227}228228229229-static void esp_output_udp_encap(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)229229+static int esp_output_udp_encap(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)230230{231231 int encap_type;232232 struct udphdr *uh;···234234 __be16 sport, dport;235235 struct xfrm_encap_tmpl *encap = x->encap;236236 struct ip_esp_hdr *esph = esp->esph;237237+ unsigned int len;237238238239 spin_lock_bh(&x->lock);239240 sport = encap->encap_sport;···242241 encap_type = encap->encap_type;243242 spin_unlock_bh(&x->lock);244243244244+ len = skb->len + esp->tailen - skb_transport_offset(skb);245245+ if (len + sizeof(struct iphdr) >= IP_MAX_MTU)246246+ return -EMSGSIZE;247247+245248 uh = (struct udphdr *)esph;246249 uh->source = sport;247250 uh->dest = dport;248248- uh->len = htons(skb->len + esp->tailen249249- - skb_transport_offset(skb));251251+ uh->len = htons(len);250252 uh->check = 0;251253252254 switch (encap_type) {···266262267263 *skb_mac_header(skb) = IPPROTO_UDP;268264 esp->esph = esph;265265+266266+ return 0;269267}270268271269int esp_output_head(struct xfrm_state *x, struct sk_buff *skb, struct esp_info *esp)···281275 int tailen = esp->tailen;282276283277 /* this is non-NULL only with UDP Encapsulation */284284- if (x->encap)285285- esp_output_udp_encap(x, skb, esp);278278+ if (x->encap) {279279+ int err = esp_output_udp_encap(x, skb, esp);280280+281281+ if (err < 0)282282+ return err;283283+ }286284287285 if (!skb_cloned(skb)) {288286 if (tailen <= skb_tailroom(skb)) {
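`uh->len` is a 16-bit big-endian field, so before the fix `htons(skb->len + esp->tailen - skb_transport_offset(skb))` silently truncated oversized packets. The new check caps the total at IP_MAX_MTU (65535) including the outer IPv4 header and returns `-EMSGSIZE` instead. The arithmetic in isolation (constants are the real kernel values; the function is a sketch):

```c
#include <assert.h>
#include <stdint.h>

#define IP_MAX_MTU  0xFFFFu
#define IPHDR_LEN   20u         /* sizeof(struct iphdr), no options */
#define ERR_EMSGSIZE (-90)      /* kernel's -EMSGSIZE */

/* Compute the UDP length destined for the 16-bit uh->len field and
 * refuse anything that cannot fit in one datagram once the outer
 * IPv4 header is added. */
static int udp_encap_len(unsigned int payload, unsigned int tailen,
                         unsigned int transport_off, uint16_t *lenp)
{
    unsigned int len = payload + tailen - transport_off;

    if (len + IPHDR_LEN >= IP_MAX_MTU)
        return ERR_EMSGSIZE;
    *lenp = (uint16_t)len;
    return 0;
}
```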
···16731673 if (TCP_SKB_CB(tail)->end_seq != TCP_SKB_CB(skb)->seq ||16741674 TCP_SKB_CB(tail)->ip_dsfield != TCP_SKB_CB(skb)->ip_dsfield ||16751675 ((TCP_SKB_CB(tail)->tcp_flags |16761676- TCP_SKB_CB(skb)->tcp_flags) & TCPHDR_URG) ||16761676+ TCP_SKB_CB(skb)->tcp_flags) & (TCPHDR_SYN | TCPHDR_RST | TCPHDR_URG)) ||16771677+ !((TCP_SKB_CB(tail)->tcp_flags &16781678+ TCP_SKB_CB(skb)->tcp_flags) & TCPHDR_ACK) ||16771679 ((TCP_SKB_CB(tail)->tcp_flags ^16781680 TCP_SKB_CB(skb)->tcp_flags) & (TCPHDR_ECE | TCPHDR_CWR)) ||16791681#ifdef CONFIG_TLS_DEVICE···16941692 if (after(TCP_SKB_CB(skb)->ack_seq, TCP_SKB_CB(tail)->ack_seq))16951693 TCP_SKB_CB(tail)->ack_seq = TCP_SKB_CB(skb)->ack_seq;1696169416951695+ /* We have to update both TCP_SKB_CB(tail)->tcp_flags and16961696+ * thtail->fin, so that the fast path in tcp_rcv_established()16971697+ * is not entered if we append a packet with a FIN.16981698+ * SYN, RST, URG are not present.16991699+ * ACK is set on both packets.17001700+ * PSH : we do not really care in TCP stack,17011701+ * at least for 'GRO' packets.17021702+ */17031703+ thtail->fin |= th->fin;16971704 TCP_SKB_CB(tail)->tcp_flags |= TCP_SKB_CB(skb)->tcp_flags;1698170516991706 if (TCP_SKB_CB(skb)->has_rxtstamp) {
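Backlog coalescing in the TCP hunk must not merge segments whose header flags change the receiver's state machine: SYN, RST, or URG on either packet, or ACK missing from either, now terminate the aggregate (while a FIN on the appended packet is mirrored into the built header). The flag predicate alone, as a compilable sketch (the TCPHDR_* values are the kernel's real bit definitions):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define TCPHDR_FIN 0x01
#define TCPHDR_SYN 0x02
#define TCPHDR_RST 0x04
#define TCPHDR_ACK 0x10
#define TCPHDR_URG 0x20

/* Two backlog segments may be coalesced only if neither carries
 * SYN/RST/URG and both carry ACK; anything else must be delivered
 * as a separate packet. FIN is permitted and merged by the caller. */
static bool tcp_flags_can_coalesce(uint8_t tail_flags, uint8_t skb_flags)
{
    if ((tail_flags | skb_flags) & (TCPHDR_SYN | TCPHDR_RST | TCPHDR_URG))
        return false;
    if (!((tail_flags & skb_flags) & TCPHDR_ACK))
        return false;
    return true;
}
```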
+12-4
net/ipv4/udp_offload.c
···352352 struct sk_buff *pp = NULL;353353 struct udphdr *uh2;354354 struct sk_buff *p;355355+ unsigned int ulen;355356356357 /* requires non zero csum, for symmetry with GSO */357358 if (!uh->check) {···360359 return NULL;361360 }362361362362+ /* Do not deal with padded or malicious packets, sorry ! */363363+ ulen = ntohs(uh->len);364364+ if (ulen <= sizeof(*uh) || ulen != skb_gro_len(skb)) {365365+ NAPI_GRO_CB(skb)->flush = 1;366366+ return NULL;367367+ }363368 /* pull encapsulating udp header */364369 skb_gro_pull(skb, sizeof(struct udphdr));365370 skb_gro_postpull_rcsum(skb, uh, sizeof(struct udphdr));···384377385378 /* Terminate the flow on len mismatch or if it grow "too much".386379 * Under small packet flood GRO count could elsewhere grow a lot387387- * leading to execessive truesize values380380+ * leading to excessive truesize values.381381+ * On len mismatch merge the first packet shorter than gso_size,382382+ * otherwise complete the GRO packet.388383 */389389- if (!skb_gro_receive(p, skb) &&384384+ if (ulen > ntohs(uh2->len) || skb_gro_receive(p, skb) ||385385+ ulen != ntohs(uh2->len) ||390386 NAPI_GRO_CB(p)->count >= UDP_GRO_CNT_MAX)391391- pp = p;392392- else if (uh->len != uh2->len)393387 pp = p;394388395389 return pp;
···
 		in6_dev_put(idev);
 	}
 
-	rcu_read_lock();
-	from = rcu_dereference(rt->from);
-	rcu_assign_pointer(rt->from, NULL);
+	from = xchg((__force struct fib6_info **)&rt->from, NULL);
 	fib6_info_release(from);
-	rcu_read_unlock();
 }
 
 static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev,
···
 	/* purge completely the exception to allow releasing the held resources:
 	 * some [sk] cache may keep the dst around for unlimited time
 	 */
-	from = rcu_dereference_protected(rt6_ex->rt6i->from,
-					 lockdep_is_held(&rt6_exception_lock));
-	rcu_assign_pointer(rt6_ex->rt6i->from, NULL);
+	from = xchg((__force struct fib6_info **)&rt6_ex->rt6i->from, NULL);
 	fib6_info_release(from);
 	dst_dev_put(&rt6_ex->rt6i->dst);
···
 
 	rcu_read_lock();
 	res.f6i = rcu_dereference(rt->from);
-	/* This fib6_info_hold() is safe here because we hold reference to rt
-	 * and rt already holds reference to fib6_info.
-	 */
-	fib6_info_hold(res.f6i);
-	rcu_read_unlock();
+	if (!res.f6i)
+		goto out;
 
 	res.nh = &res.f6i->fib6_nh;
 	res.fib6_flags = res.f6i->fib6_flags;
···
 
 	nrt->rt6i_gateway = *(struct in6_addr *)neigh->primary_key;
 
-	/* No need to remove rt from the exception table if rt is
-	 * a cached route because rt6_insert_exception() will
-	 * takes care of it
-	 */
+	/* rt6_insert_exception() will take care of duplicated exceptions */
 	if (rt6_insert_exception(nrt, &res)) {
 		dst_release_immediate(&nrt->dst);
 		goto out;
···
 	call_netevent_notifiers(NETEVENT_REDIRECT, &netevent);
 
 out:
-	fib6_info_release(res.f6i);
+	rcu_read_unlock();
 	neigh_release(neigh);
 }
 
···
 
 static int ip6_pkt_drop(struct sk_buff *skb, u8 code, int ipstats_mib_noroutes)
 {
-	int type;
 	struct dst_entry *dst = skb_dst(skb);
+	struct net *net = dev_net(dst->dev);
+	struct inet6_dev *idev;
+	int type;
+
+	if (netif_is_l3_master(skb->dev) &&
+	    dst->dev == net->loopback_dev)
+		idev = __in6_dev_get_safely(dev_get_by_index_rcu(net, IP6CB(skb)->iif));
+	else
+		idev = ip6_dst_idev(dst);
+
 	switch (ipstats_mib_noroutes) {
 	case IPSTATS_MIB_INNOROUTES:
 		type = ipv6_addr_type(&ipv6_hdr(skb)->daddr);
 		if (type == IPV6_ADDR_ANY) {
-			IP6_INC_STATS(dev_net(dst->dev),
-				      __in6_dev_get_safely(skb->dev),
-				      IPSTATS_MIB_INADDRERRORS);
+			IP6_INC_STATS(net, idev, IPSTATS_MIB_INADDRERRORS);
 			break;
 		}
 		/* FALLTHROUGH */
 	case IPSTATS_MIB_OUTNOROUTES:
-		IP6_INC_STATS(dev_net(dst->dev), ip6_dst_idev(dst),
-			      ipstats_mib_noroutes);
+		IP6_INC_STATS(net, idev, ipstats_mib_noroutes);
 		break;
 	}
+
+	/* Start over by dropping the dst for l3mdev case */
+	if (netif_is_l3_master(skb->dev))
+		skb_dst_drop(skb);
+
 	icmpv6_send(skb, ICMPV6_DEST_UNREACH, code, 0);
 	kfree_skb(skb);
 	return 0;
···
 
 	rcu_read_lock();
 	from = rcu_dereference(rt->from);
-
-	if (fibmatch)
-		err = rt6_fill_node(net, skb, from, NULL, NULL, NULL, iif,
-				    RTM_NEWROUTE, NETLINK_CB(in_skb).portid,
-				    nlh->nlmsg_seq, 0);
-	else
-		err = rt6_fill_node(net, skb, from, dst, &fl6.daddr,
-				    &fl6.saddr, iif, RTM_NEWROUTE,
-				    NETLINK_CB(in_skb).portid, nlh->nlmsg_seq,
-				    0);
+	if (from) {
+		if (fibmatch)
+			err = rt6_fill_node(net, skb, from, NULL, NULL, NULL,
+					    iif, RTM_NEWROUTE,
+					    NETLINK_CB(in_skb).portid,
+					    nlh->nlmsg_seq, 0);
+		else
+			err = rt6_fill_node(net, skb, from, dst, &fl6.daddr,
+					    &fl6.saddr, iif, RTM_NEWROUTE,
+					    NETLINK_CB(in_skb).portid,
+					    nlh->nlmsg_seq, 0);
+	} else {
+		err = -ENETUNREACH;
+	}
 	rcu_read_unlock();
 
 	if (err < 0) {
+5-1
net/ipv6/xfrm6_tunnel.c
···
 	unsigned int i;
 
 	xfrm_flush_gc();
-	xfrm_state_flush(net, IPSEC_PROTO_ANY, false, true);
+	xfrm_state_flush(net, 0, false, true);
 
 	for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++)
 		WARN_ON_ONCE(!hlist_empty(&xfrm6_tn->spi_byaddr[i]));
···
 	xfrm6_tunnel_deregister(&xfrm6_tunnel_handler, AF_INET6);
 	xfrm_unregister_type(&xfrm6_tunnel_type, AF_INET6);
 	unregister_pernet_subsys(&xfrm6_tunnel_net_ops);
+	/* Someone may still hold an xfrm6_tunnel_spi, so wait for
+	 * outstanding RCU readers before destroying the cache.
+	 */
+	rcu_barrier();
 	kmem_cache_destroy(xfrm6_tunnel_spi_kmem);
 }
 
+3-1
net/key/af_key.c
···
 
 		if (rq->sadb_x_ipsecrequest_mode == 0)
 			return -EINVAL;
+		if (!xfrm_id_proto_valid(rq->sadb_x_ipsecrequest_proto))
+			return -EINVAL;
 
-		t->id.proto = rq->sadb_x_ipsecrequest_proto; /* XXX check proto */
+		t->id.proto = rq->sadb_x_ipsecrequest_proto;
 		if ((mode = pfkey_mode_to_xfrm(rq->sadb_x_ipsecrequest_mode)) < 0)
 			return -EINVAL;
 		t->mode = mode;
···
 	void *ph;
 	DECLARE_SOCKADDR(struct sockaddr_ll *, saddr, msg->msg_name);
 	bool need_wait = !(msg->msg_flags & MSG_DONTWAIT);
+	unsigned char *addr = NULL;
 	int tp_len, size_max;
-	unsigned char *addr;
 	void *data;
 	int len_sum = 0;
 	int status = TP_STATUS_AVAILABLE;
···
 	if (likely(saddr == NULL)) {
 		dev	= packet_cached_dev_get(po);
 		proto	= po->num;
-		addr	= NULL;
 	} else {
 		err = -EINVAL;
 		if (msg->msg_namelen < sizeof(struct sockaddr_ll))
···
 					sll_addr)))
 			goto out;
 		proto	= saddr->sll_protocol;
-		addr	= saddr->sll_halen ? saddr->sll_addr : NULL;
 		dev = dev_get_by_index(sock_net(&po->sk), saddr->sll_ifindex);
-		if (addr && dev && saddr->sll_halen < dev->addr_len)
-			goto out_put;
+		if (po->sk.sk_socket->type == SOCK_DGRAM) {
+			if (dev && msg->msg_namelen < dev->addr_len +
+				   offsetof(struct sockaddr_ll, sll_addr))
+				goto out_put;
+			addr = saddr->sll_addr;
+		}
 	}
 
 	err = -ENXIO;
···
 	struct sk_buff *skb;
 	struct net_device *dev;
 	__be16 proto;
-	unsigned char *addr;
+	unsigned char *addr = NULL;
 	int err, reserve = 0;
 	struct sockcm_cookie sockc;
 	struct virtio_net_hdr vnet_hdr = { 0 };
···
 	if (likely(saddr == NULL)) {
 		dev	= packet_cached_dev_get(po);
 		proto	= po->num;
-		addr	= NULL;
 	} else {
 		err = -EINVAL;
 		if (msg->msg_namelen < sizeof(struct sockaddr_ll))
···
 		if (msg->msg_namelen < (saddr->sll_halen + offsetof(struct sockaddr_ll, sll_addr)))
 			goto out;
 		proto	= saddr->sll_protocol;
-		addr	= saddr->sll_halen ? saddr->sll_addr : NULL;
 		dev = dev_get_by_index(sock_net(sk), saddr->sll_ifindex);
-		if (addr && dev && saddr->sll_halen < dev->addr_len)
-			goto out_unlock;
+		if (sock->type == SOCK_DGRAM) {
+			if (dev && msg->msg_namelen < dev->addr_len +
+				   offsetof(struct sockaddr_ll, sll_addr))
+				goto out_unlock;
+			addr = saddr->sll_addr;
+		}
 	}
 
 	err = -ENXIO;
···
 	sock_recv_ts_and_drops(msg, sk, skb);
 
 	if (msg->msg_name) {
+		int copy_len;
+
 		/* If the address length field is there to be filled
 		 * in, we fill it in now.
 		 */
 		if (sock->type == SOCK_PACKET) {
 			__sockaddr_check_size(sizeof(struct sockaddr_pkt));
 			msg->msg_namelen = sizeof(struct sockaddr_pkt);
+			copy_len = msg->msg_namelen;
 		} else {
 			struct sockaddr_ll *sll = &PACKET_SKB_CB(skb)->sa.ll;
 
 			msg->msg_namelen = sll->sll_halen +
 				offsetof(struct sockaddr_ll, sll_addr);
+			copy_len = msg->msg_namelen;
+			if (msg->msg_namelen < sizeof(struct sockaddr_ll)) {
+				memset(msg->msg_name +
+				       offsetof(struct sockaddr_ll, sll_addr),
+				       0, sizeof(sll->sll_addr));
+				msg->msg_namelen = sizeof(struct sockaddr_ll);
+			}
 		}
-		memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa,
-		       msg->msg_namelen);
+		memcpy(msg->msg_name, &PACKET_SKB_CB(skb)->sa, copy_len);
 	}
 
 	if (pkt_sk(sk)->auxdata) {
+3-5
net/rds/ib_recv.c
···
 	unsigned long frag_off;
 	unsigned long to_copy;
 	unsigned long copied;
-	uint64_t uncongested = 0;
+	__le64 uncongested = 0;
 	void *addr;
 
 	/* catch completely corrupt packets */
···
 	copied = 0;
 
 	while (copied < RDS_CONG_MAP_BYTES) {
-		uint64_t *src, *dst;
+		__le64 *src, *dst;
 		unsigned int k;
 
 		to_copy = min(RDS_FRAG_SIZE - frag_off, PAGE_SIZE - map_off);
···
 	}
 
 	/* the congestion map is in little endian order */
-	uncongested = le64_to_cpu(uncongested);
-
-	rds_cong_map_updated(map, uncongested);
+	rds_cong_map_updated(map, le64_to_cpu(uncongested));
 }
 
 static void rds_ib_process_recv(struct rds_connection *conn,
+16-16
net/rxrpc/call_object.c
···
 
 	_enter("");
 
-	if (list_empty(&rxnet->calls))
-		return;
+	if (!list_empty(&rxnet->calls)) {
+		write_lock(&rxnet->call_lock);
 
-	write_lock(&rxnet->call_lock);
+		while (!list_empty(&rxnet->calls)) {
+			call = list_entry(rxnet->calls.next,
+					  struct rxrpc_call, link);
+			_debug("Zapping call %p", call);
 
-	while (!list_empty(&rxnet->calls)) {
-		call = list_entry(rxnet->calls.next, struct rxrpc_call, link);
-		_debug("Zapping call %p", call);
+			rxrpc_see_call(call);
+			list_del_init(&call->link);
 
-		rxrpc_see_call(call);
-		list_del_init(&call->link);
+			pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n",
+			       call, atomic_read(&call->usage),
+			       rxrpc_call_states[call->state],
+			       call->flags, call->events);
 
-		pr_err("Call %p still in use (%d,%s,%lx,%lx)!\n",
-		       call, atomic_read(&call->usage),
-		       rxrpc_call_states[call->state],
-		       call->flags, call->events);
+			write_unlock(&rxnet->call_lock);
+			cond_resched();
+			write_lock(&rxnet->call_lock);
+		}
 
 		write_unlock(&rxnet->call_lock);
-		cond_resched();
-		write_lock(&rxnet->call_lock);
 	}
-
-	write_unlock(&rxnet->call_lock);
 
 	atomic_dec(&rxnet->nr_calls);
 	wait_var_event(&rxnet->nr_calls, !atomic_read(&rxnet->nr_calls));
-29
net/sctp/sm_sideeffect.c
···
 }
 
 
-/* Sent the next ASCONF packet currently stored in the association.
- * This happens after the ASCONF_ACK was succeffully processed.
- */
-static void sctp_cmd_send_asconf(struct sctp_association *asoc)
-{
-	struct net *net = sock_net(asoc->base.sk);
-
-	/* Send the next asconf chunk from the addip chunk
-	 * queue.
-	 */
-	if (!list_empty(&asoc->addip_chunk_list)) {
-		struct list_head *entry = asoc->addip_chunk_list.next;
-		struct sctp_chunk *asconf = list_entry(entry,
-						       struct sctp_chunk, list);
-		list_del_init(entry);
-
-		/* Hold the chunk until an ASCONF_ACK is received. */
-		sctp_chunk_hold(asconf);
-		if (sctp_primitive_ASCONF(net, asoc, asconf))
-			sctp_chunk_free(asconf);
-		else
-			asoc->addip_last_asconf = asconf;
-	}
-}
-
-
 /* These three macros allow us to pull the debugging code out of the
  * main flow of sctp_do_sm() to keep attention focused on the real
  * functionality there.
···
 			local_cork = 1;
 		}
 		sctp_cmd_send_msg(asoc, cmd->obj.msg, gfp);
-		break;
-	case SCTP_CMD_SEND_NEXT_ASCONF:
-		sctp_cmd_send_asconf(asoc);
 		break;
 	case SCTP_CMD_PURGE_ASCONF_QUEUE:
 		sctp_asconf_queue_teardown(asoc);
+27-8
net/sctp/sm_statefuns.c
···
 	return SCTP_DISPOSITION_CONSUME;
 }
 
+static enum sctp_disposition sctp_send_next_asconf(
+					struct net *net,
+					const struct sctp_endpoint *ep,
+					struct sctp_association *asoc,
+					const union sctp_subtype type,
+					struct sctp_cmd_seq *commands)
+{
+	struct sctp_chunk *asconf;
+	struct list_head *entry;
+
+	if (list_empty(&asoc->addip_chunk_list))
+		return SCTP_DISPOSITION_CONSUME;
+
+	entry = asoc->addip_chunk_list.next;
+	asconf = list_entry(entry, struct sctp_chunk, list);
+
+	list_del_init(entry);
+	sctp_chunk_hold(asconf);
+	asoc->addip_last_asconf = asconf;
+
+	return sctp_sf_do_prm_asconf(net, ep, asoc, type, asconf, commands);
+}
+
 /*
  * ADDIP Section 4.3 General rules for address manipulation
  * When building TLV parameters for the ASCONF Chunk that will add or
···
 			SCTP_TO(SCTP_EVENT_TIMEOUT_T4_RTO));
 
 	if (!sctp_process_asconf_ack((struct sctp_association *)asoc,
-				     asconf_ack)) {
-		/* Successfully processed ASCONF_ACK. We can
-		 * release the next asconf if we have one.
-		 */
-		sctp_add_cmd_sf(commands, SCTP_CMD_SEND_NEXT_ASCONF,
-				SCTP_NULL());
-		return SCTP_DISPOSITION_CONSUME;
-	}
+				     asconf_ack))
+		return sctp_send_next_asconf(net, ep,
+					(struct sctp_association *)asoc,
+					type, commands);
 
 	abort = sctp_make_abort(asoc, asconf_ack,
 				sizeof(struct sctp_errhdr));
+32-15
net/tls/tls_device.c
···
 static int tls_device_reencrypt(struct sock *sk, struct sk_buff *skb)
 {
 	struct strp_msg *rxm = strp_msg(skb);
-	int err = 0, offset = rxm->offset, copy, nsg;
+	int err = 0, offset = rxm->offset, copy, nsg, data_len, pos;
 	struct sk_buff *skb_iter, *unused;
 	struct scatterlist sg[1];
 	char *orig_buf, *buf;
···
 	else
 		err = 0;
 
-	copy = min_t(int, skb_pagelen(skb) - offset,
-		     rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE);
+	data_len = rxm->full_len - TLS_CIPHER_AES_GCM_128_TAG_SIZE;
 
-	if (skb->decrypted)
-		skb_store_bits(skb, offset, buf, copy);
+	if (skb_pagelen(skb) > offset) {
+		copy = min_t(int, skb_pagelen(skb) - offset, data_len);
 
-	offset += copy;
-	buf += copy;
-
-	skb_walk_frags(skb, skb_iter) {
-		copy = min_t(int, skb_iter->len,
-			     rxm->full_len - offset + rxm->offset -
-			     TLS_CIPHER_AES_GCM_128_TAG_SIZE);
-
-		if (skb_iter->decrypted)
-			skb_store_bits(skb_iter, offset, buf, copy);
+		if (skb->decrypted)
+			skb_store_bits(skb, offset, buf, copy);
 
 		offset += copy;
 		buf += copy;
+	}
+
+	pos = skb_pagelen(skb);
+	skb_walk_frags(skb, skb_iter) {
+		int frag_pos;
+
+		/* Practically all frags must belong to msg if reencrypt
+		 * is needed with current strparser and coalescing logic,
+		 * but strparser may "get optimized", so let's be safe.
+		 */
+		if (pos + skb_iter->len <= offset)
+			goto done_with_frag;
+		if (pos >= data_len + rxm->offset)
+			break;
+
+		frag_pos = offset - pos;
+		copy = min_t(int, skb_iter->len - frag_pos,
+			     data_len + rxm->offset - offset);
+
+		if (skb_iter->decrypted)
+			skb_store_bits(skb_iter, frag_pos, buf, copy);
+
+		offset += copy;
+		buf += copy;
+done_with_frag:
+		pos += skb_iter->len;
 	}
 
 free_buf:
···
 		/*
 		 * The last request may have been received before this
 		 * registration call. Call the driver notifier if
-		 * initiator is USER and user type is CELL_BASE.
+		 * initiator is USER.
 		 */
-		if (lr->initiator == NL80211_REGDOM_SET_BY_USER &&
-		    lr->user_reg_hint_type == NL80211_USER_REG_HINT_CELL_BASE)
+		if (lr->initiator == NL80211_REGDOM_SET_BY_USER)
 			reg_call_notifier(wiphy, lr);
 	}
 
+14-3
net/xfrm/xfrm_interface.c
···
 	return NULL;
 }
 
-static struct xfrm_if *xfrmi_decode_session(struct sk_buff *skb)
+static struct xfrm_if *xfrmi_decode_session(struct sk_buff *skb,
+					    unsigned short family)
 {
 	struct xfrmi_net *xfrmn;
-	int ifindex;
 	struct xfrm_if *xi;
+	int ifindex = 0;
 
 	if (!secpath_exists(skb) || !skb->dev)
 		return NULL;
 
+	switch (family) {
+	case AF_INET6:
+		ifindex = inet6_sdif(skb);
+		break;
+	case AF_INET:
+		ifindex = inet_sdif(skb);
+		break;
+	}
+	if (!ifindex)
+		ifindex = skb->dev->ifindex;
+
 	xfrmn = net_generic(xs_net(xfrm_input_state(skb)), xfrmi_net_id);
-	ifindex = skb->dev->ifindex;
 
 	for_each_xfrmi_rcu(xfrmn->xfrmi[0], xi) {
 		if (ifindex == xi->dev->ifindex &&
+1-1
net/xfrm/xfrm_policy.c
···
 	ifcb = xfrm_if_get_cb();
 
 	if (ifcb) {
-		xi = ifcb->decode_session(skb);
+		xi = ifcb->decode_session(skb, family);
 		if (xi) {
 			if_id = xi->p.if_id;
 			net = xi->net;
···
 	ret = verify_policy_dir(p->dir);
 	if (ret)
 		return ret;
-	if (p->index && ((p->index & XFRM_POLICY_MAX) != p->dir))
+	if (p->index && (xfrm_policy_id2dir(p->index) != p->dir))
 		return -EINVAL;
 
 	return 0;
···
 			return -EINVAL;
 		}
 
-		switch (ut[i].id.proto) {
-		case IPPROTO_AH:
-		case IPPROTO_ESP:
-		case IPPROTO_COMP:
-#if IS_ENABLED(CONFIG_IPV6)
-		case IPPROTO_ROUTING:
-		case IPPROTO_DSTOPTS:
-#endif
-		case IPSEC_PROTO_ANY:
-			break;
-		default:
+		if (!xfrm_id_proto_valid(ut[i].id.proto))
 			return -EINVAL;
-		}
-
 	}
 
 	return 0;
···
 				 SECCOMP_FILTER_FLAG_LOG,
 				 SECCOMP_FILTER_FLAG_SPEC_ALLOW,
 				 SECCOMP_FILTER_FLAG_NEW_LISTENER };
-	unsigned int flag, all_flags;
+	unsigned int exclusive[] = {
+				SECCOMP_FILTER_FLAG_TSYNC,
+				SECCOMP_FILTER_FLAG_NEW_LISTENER };
+	unsigned int flag, all_flags, exclusive_mask;
 	int i;
 	long ret;
 
-	/* Test detection of known-good filter flags */
+	/* Test detection of individual known-good filter flags */
 	for (i = 0, all_flags = 0; i < ARRAY_SIZE(flags); i++) {
 		int bits = 0;
 
···
 		all_flags |= flag;
 	}
 
-	/* Test detection of all known-good filter flags */
-	ret = seccomp(SECCOMP_SET_MODE_FILTER, all_flags, NULL);
-	EXPECT_EQ(-1, ret);
-	EXPECT_EQ(EFAULT, errno) {
-		TH_LOG("Failed to detect that all known-good filter flags (0x%X) are supported!",
-		       all_flags);
+	/*
+	 * Test detection of all known-good filter flags combined. But
+	 * for the exclusive flags we need to mask them out and try them
+	 * individually for the "all flags" testing.
+	 */
+	exclusive_mask = 0;
+	for (i = 0; i < ARRAY_SIZE(exclusive); i++)
+		exclusive_mask |= exclusive[i];
+	for (i = 0; i < ARRAY_SIZE(exclusive); i++) {
+		flag = all_flags & ~exclusive_mask;
+		flag |= exclusive[i];
+
+		ret = seccomp(SECCOMP_SET_MODE_FILTER, flag, NULL);
+		EXPECT_EQ(-1, ret);
+		EXPECT_EQ(EFAULT, errno) {
+			TH_LOG("Failed to detect that all known-good filter flags (0x%X) are supported!",
+			       flag);
+		}
 	}
 
-	/* Test detection of an unknown filter flag */
+	/* Test detection of an unknown filter flag, without exclusives. */
 	flag = -1;
+	flag &= ~exclusive_mask;
 	ret = seccomp(SECCOMP_SET_MODE_FILTER, flag, NULL);
 	EXPECT_EQ(-1, ret);
 	EXPECT_EQ(EINVAL, errno) {