···
 	bridge link set dev DEV learning on self
 	bridge link set dev DEV learning_sync on self

-Learning_sync attribute enables syncing of the learned/forgotton FDB entry to
+Learning_sync attribute enables syncing of the learned/forgotten FDB entry to
 the bridge's FDB. It's possible, but not optimal, to enable learning on the
 device port and on the bridge port, and disable learning_sync.
···
 port device supports ageing, when the FDB entry expires, it will notify the
 driver which in turn will notify the bridge with SWITCHDEV_FDB_DEL. If the
 device does not support ageing, the driver can simulate ageing using a
-garbage collection timer to monitor FBD entries. Expired entries will be
+garbage collection timer to monitor FDB entries. Expired entries will be
 notified to the bridge using SWITCHDEV_FDB_DEL. See rocker driver for
 example of driver running ageing timer.
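The learning/learning_sync attributes described above are driven from userspace with the iproute2 bridge tool; a minimal sketch (DEV and br0 are placeholders for the actual switchdev port netdev and bridge, and the commands need root):

```
# Sketch: enable device-side learning and sync learned entries to the
# bridge's FDB (DEV/br0 are hypothetical names).
bridge link set dev DEV learning on self
bridge link set dev DEV learning_sync on self
# Synced entries then appear in the software bridge's FDB:
bridge fdb show br br0
```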
+11-8
Documentation/printk-formats.txt
···
 	%ps	versatile_init
 	%pB	prev_fn_of_versatile_init+0x88/0x88

-For printing symbols and function pointers. The ``S`` and ``s`` specifiers
-result in the symbol name with (``S``) or without (``s``) offsets. Where
-this is used on a kernel without KALLSYMS - the symbol address is
-printed instead.
+The ``F`` and ``f`` specifiers are for printing function pointers,
+for example, f->func, &gettimeofday. They have the same result as
+``S`` and ``s`` specifiers. But they do an extra conversion on
+ia64, ppc64 and parisc64 architectures where the function pointers
+are actually function descriptors.
+
+The ``S`` and ``s`` specifiers can be used for printing symbols
+from direct addresses, for example, __builtin_return_address(0),
+(void *)regs->ip. They result in the symbol name with (``S``) or
+without (``s``) offsets. If KALLSYMS are disabled then the symbol
+address is printed instead.

 The ``B`` specifier results in the symbol name with offsets and should be
 used when printing stack backtraces. The specifier takes into
 consideration the effect of compiler optimisations which may occur
 when tail-call``s are used and marked with the noreturn GCC attribute.

-On ia64, ppc64 and parisc64 architectures function pointers are
-actually function descriptors which must first be resolved. The ``F`` and
-``f`` specifiers perform this resolution and then provide the same
-functionality as the ``S`` and ``s`` specifiers.

 Kernel Pointers
 ===============
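A short sketch of how the specifiers documented above might be used in kernel code (example_func is a hypothetical function, and the fragment assumes a kernel-module context, so it is not buildable standalone):

```
/*
 * Sketch: %pS takes a direct address, %pF takes a function pointer
 * (the descriptor is resolved on ia64/ppc64/parisc64).
 */
#include <linux/kernel.h>

static void example_func(void)
{
	pr_info("called from %pS\n", __builtin_return_address(0));
	pr_info("function   %pF\n", &example_func);
}
```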
+36-11
Documentation/sysctl/net.txt
···
 bpf_jit_enable
 --------------

-This enables Berkeley Packet Filter Just in Time compiler.
-Currently supported on x86_64 architecture, bpf_jit provides a framework
-to speed packet filtering, the one used by tcpdump/libpcap for example.
+This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
+and efficient infrastructure allowing to execute bytecode at various
+hook points. It is used in a number of Linux kernel subsystems such
+as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
+and security (e.g. seccomp). LLVM has a BPF back end that can compile
+restricted C into a sequence of BPF instructions. After program load
+through bpf(2) and passing a verifier in the kernel, a JIT will then
+translate these BPF proglets into native CPU instructions. There are
+two flavors of JITs, the newer eBPF JIT currently supported on:
+  - x86_64
+  - arm64
+  - ppc64
+  - sparc64
+  - mips64
+  - s390x
+
+And the older cBPF JIT supported on the following archs:
+  - arm
+  - mips
+  - ppc
+  - sparc
+
+eBPF JITs are a superset of cBPF JITs, meaning the kernel will
+migrate cBPF instructions into eBPF instructions and then JIT
+compile them transparently. Older cBPF JITs can only translate
+tcpdump filters, seccomp rules, etc, but not mentioned eBPF
+programs loaded through bpf(2).
+
 Values :
 	0 - disable the JIT (default value)
 	1 - enable the JIT
···
 bpf_jit_harden
 --------------

-This enables hardening for the Berkeley Packet Filter Just in Time compiler.
-Supported are eBPF JIT backends. Enabling hardening trades off performance,
-but can mitigate JIT spraying.
+This enables hardening for the BPF JIT compiler. Supported are eBPF
+JIT backends. Enabling hardening trades off performance, but can
+mitigate JIT spraying.
 Values :
 	0 - disable JIT hardening (default value)
 	1 - enable JIT hardening for unprivileged users only
···
 bpf_jit_kallsyms
 ----------------

-When Berkeley Packet Filter Just in Time compiler is enabled, then compiled
-images are unknown addresses to the kernel, meaning they neither show up in
-traces nor in /proc/kallsyms. This enables export of these addresses, which
-can be used for debugging/tracing. If bpf_jit_harden is enabled, this feature
-is disabled.
+When BPF JIT compiler is enabled, then compiled images are unknown
+addresses to the kernel, meaning they neither show up in traces nor
+in /proc/kallsyms. This enables export of these addresses, which can
+be used for debugging/tracing. If bpf_jit_harden is enabled, this
+feature is disabled.
 Values :
 	0 - disable JIT kallsyms export (default value)
 	1 - enable JIT kallsyms export for privileged users only
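The three knobs documented above are runtime sysctls; a hedged usage sketch (requires root, assumes a kernel built with CONFIG_BPF_JIT, and the settings persist only until reboot unless added to sysctl configuration):

```
sysctl -w net.core.bpf_jit_enable=1     # turn the JIT on
sysctl -w net.core.bpf_jit_harden=2     # harden JIT images for all users
sysctl -w net.core.bpf_jit_kallsyms=1   # export JIT addresses for tracing
```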
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs"
 CONFIG_PREEMPT=y
-1
arch/arc/configs/haps_hs_smp_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs_idu"
-1
arch/arc/configs/nps_defconfig
···
 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
 # CONFIG_INET_DIAG is not set
 # CONFIG_IPV6 is not set
 # CONFIG_WIRELESS is not set
-1
arch/arc/configs/nsim_700_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_700"
 CONFIG_PREEMPT=y
 # CONFIG_COMPACTION is not set
-1
arch/arc/configs/nsim_hs_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs"
 CONFIG_PREEMPT=y
-1
arch/arc/configs/nsim_hs_smp_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs_idu"
-1
arch/arc/configs/nsimosci_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci"
 # CONFIG_COMPACTION is not set
 CONFIG_NET=y
-1
arch/arc/configs/nsimosci_hs_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs"
 # CONFIG_COMPACTION is not set
-1
arch/arc/configs/nsimosci_hs_smp_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 # CONFIG_ARC_TIMERS_64BIT is not set
-1
arch/arc/configs/tb10x_defconfig
···
 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
 # CONFIG_INET_DIAG is not set
 # CONFIG_IPV6 is not set
 # CONFIG_WIRELESS is not set
+2
arch/arc/include/asm/cache.h
···
 #define ARC_REG_SLC_FLUSH	0x904
 #define ARC_REG_SLC_INVALIDATE	0x905
 #define ARC_REG_SLC_RGN_START	0x914
+#define ARC_REG_SLC_RGN_START1	0x915
 #define ARC_REG_SLC_RGN_END	0x916
+#define ARC_REG_SLC_RGN_END1	0x917

 /* Bit val in SLC_CONTROL */
 #define SLC_CTRL_DIS		0x001
+2
arch/arc/include/asm/mmu.h
···
 	return IS_ENABLED(CONFIG_ARC_HAS_PAE40);
 }

+extern int pae40_exist_but_not_enab(void);
+
 #endif	/* !__ASSEMBLY__ */

 #endif
+3
arch/arc/kernel/intc-arcv2.c
···
 	 * Set a default priority for all available interrupts to prevent
 	 * switching of register banks if Fast IRQ and multiple register banks
 	 * are supported by CPU.
+	 * Also disable all IRQ lines so faulty external hardware won't
+	 * trigger interrupt that kernel is not ready to handle.
 	 */
 	for (i = NR_EXCEPTIONS; i < irq_bcr.irqs + NR_EXCEPTIONS; i++) {
 		write_aux_reg(AUX_IRQ_SELECT, i);
 		write_aux_reg(AUX_IRQ_PRIORITY, ARCV2_IRQ_DEF_PRIO);
+		write_aux_reg(AUX_IRQ_ENABLE, 0);
 	}

 	/* setup status32, don't enable intr yet as kernel doesn't want */
+13-1
arch/arc/kernel/intc-compact.c
···
  */
 void arc_init_IRQ(void)
 {
-	int level_mask = 0;
+	int level_mask = 0, i;

 	/* Is timer high priority Interrupt (Level2 in ARCompact jargon) */
 	level_mask |= IS_ENABLED(CONFIG_ARC_COMPACT_IRQ_LEVELS) << TIMER0_IRQ;
···
 	if (level_mask)
 		pr_info("Level-2 interrupts bitset %x\n", level_mask);
+
+	/*
+	 * Disable all IRQ lines so faulty external hardware won't
+	 * trigger interrupt that kernel is not ready to handle.
+	 */
+	for (i = TIMER0_IRQ; i < NR_CPU_IRQS; i++) {
+		unsigned int ienb;
+
+		ienb = read_aux_reg(AUX_IENABLE);
+		ienb &= ~(1 << i);
+		write_aux_reg(AUX_IENABLE, ienb);
+	}
 }

 /*
+42-8
arch/arc/mm/cache.c
···
 	static DEFINE_SPINLOCK(lock);
 	unsigned long flags;
 	unsigned int ctrl;
+	phys_addr_t end;

 	spin_lock_irqsave(&lock, flags);
···
 	 * END needs to be setup before START (latter triggers the operation)
 	 * END can't be same as START, so add (l2_line_sz - 1) to sz
 	 */
-	write_aux_reg(ARC_REG_SLC_RGN_END, (paddr + sz + l2_line_sz - 1));
-	write_aux_reg(ARC_REG_SLC_RGN_START, paddr);
+	end = paddr + sz + l2_line_sz - 1;
+	if (is_pae40_enabled())
+		write_aux_reg(ARC_REG_SLC_RGN_END1, upper_32_bits(end));
+
+	write_aux_reg(ARC_REG_SLC_RGN_END, lower_32_bits(end));
+
+	if (is_pae40_enabled())
+		write_aux_reg(ARC_REG_SLC_RGN_START1, upper_32_bits(paddr));
+
+	write_aux_reg(ARC_REG_SLC_RGN_START, lower_32_bits(paddr));
+
+	/* Make sure "busy" bit reports correct status, see STAR 9001165532 */
+	read_aux_reg(ARC_REG_SLC_CTRL);

 	while (read_aux_reg(ARC_REG_SLC_CTRL) & SLC_CTRL_BUSY);
···
 	__dc_enable();
 }

+/*
+ * Cache related boot time checks/setups only needed on master CPU:
+ *  - Geometry checks (kernel build and hardware agree: e.g. L1_CACHE_BYTES)
+ *    Assume SMP only, so all cores will have same cache config. A check on
+ *    one core suffices for all
+ *  - IOC setup / dma callbacks only need to be done once
+ */
 void __init arc_cache_init_master(void)
 {
 	unsigned int __maybe_unused cpu = smp_processor_id();
···
 	printk(arc_cache_mumbojumbo(0, str, sizeof(str)));

-	/*
-	 * Only master CPU needs to execute rest of function:
-	 * - Assume SMP so all cores will have same cache config so
-	 *   any geomtry checks will be same for all
-	 * - IOC setup / dma callbacks only need to be setup once
-	 */
 	if (!cpu)
 		arc_cache_init_master();
+
+	/*
+	 * In PAE regime, TLB and cache maintenance ops take wider addresses
+	 * And even if PAE is not enabled in kernel, the upper 32-bits still need
+	 * to be zeroed to keep the ops sane.
+	 * As an optimization for more common !PAE enabled case, zero them out
+	 * once at init, rather than checking/setting to 0 for every runtime op
+	 */
+	if (is_isa_arcv2() && pae40_exist_but_not_enab()) {
+
+		if (IS_ENABLED(CONFIG_ARC_HAS_ICACHE))
+			write_aux_reg(ARC_REG_IC_PTAG_HI, 0);
+
+		if (IS_ENABLED(CONFIG_ARC_HAS_DCACHE))
+			write_aux_reg(ARC_REG_DC_PTAG_HI, 0);
+
+		if (l2_line_sz) {
+			write_aux_reg(ARC_REG_SLC_RGN_END1, 0);
+			write_aux_reg(ARC_REG_SLC_RGN_START1, 0);
+		}
+	}
 }
+45
arch/arc/mm/dma.c
···
 	}
 }

+/*
+ * arc_dma_map_page - map a portion of a page for streaming DMA
+ *
+ * Ensure that any data held in the cache is appropriately discarded
+ * or written back.
+ *
+ * The device owns this memory once this call has completed. The CPU
+ * can regain ownership by calling dma_unmap_page().
+ *
+ * Note: while it takes struct page as arg, caller can "abuse" it to pass
+ * a region larger than PAGE_SIZE, provided it is physically contiguous
+ * and this still works correctly
+ */
 static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
 		unsigned long offset, size_t size, enum dma_data_direction dir,
 		unsigned long attrs)
···
 	_dma_cache_sync(paddr, size, dir);

 	return plat_phys_to_dma(dev, paddr);
+}
+
+/*
+ * arc_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
+ *
+ * After this call, reads by the CPU to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ *
+ * Note: historically this routine was not implemented for ARC
+ */
+static void arc_dma_unmap_page(struct device *dev, dma_addr_t handle,
+			       size_t size, enum dma_data_direction dir,
+			       unsigned long attrs)
+{
+	phys_addr_t paddr = plat_dma_to_phys(dev, handle);
+
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		_dma_cache_sync(paddr, size, dir);
 }

 static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
···
 			   s->length, dir);

 	return nents;
+}
+
+static void arc_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+			     int nents, enum dma_data_direction dir,
+			     unsigned long attrs)
+{
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sg, s, nents, i)
+		arc_dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir,
+				   attrs);
 }

 static void arc_dma_sync_single_for_cpu(struct device *dev,
···
 	.free			= arc_dma_free,
 	.mmap			= arc_dma_mmap,
 	.map_page		= arc_dma_map_page,
+	.unmap_page		= arc_dma_unmap_page,
 	.map_sg			= arc_dma_map_sg,
+	.unmap_sg		= arc_dma_unmap_sg,
 	.sync_single_for_device	= arc_dma_sync_single_for_device,
 	.sync_single_for_cpu	= arc_dma_sync_single_for_cpu,
 	.sync_sg_for_cpu	= arc_dma_sync_sg_for_cpu,
+11-1
arch/arc/mm/tlb.c
···
 /* A copy of the ASID from the PID reg is kept in asid_cache */
 DEFINE_PER_CPU(unsigned int, asid_cache) = MM_CTXT_FIRST_CYCLE;

+static int __read_mostly pae_exists;
+
 /*
  * Utility Routine to erase a J-TLB entry
  * Caller needs to setup Index Reg (manually or via getIndex)
···
 		mmu->u_dtlb = mmu4->u_dtlb * 4;
 		mmu->u_itlb = mmu4->u_itlb * 4;
 		mmu->sasid = mmu4->sasid;
-		mmu->pae = mmu4->pae;
+		pae_exists = mmu->pae = mmu4->pae;
 	}
 }
···
 		      IS_AVAIL2(p_mmu->pae, ", PAE40 ", CONFIG_ARC_HAS_PAE40));

 	return buf;
+}
+
+int pae40_exist_but_not_enab(void)
+{
+	return pae_exists && !is_pae40_enabled();
 }

 void arc_mmu_init(void)
···
 	/* swapper_pg_dir is the pgd for the kernel, used by vmalloc */
 	write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir);
 #endif
+
+	if (pae40_exist_but_not_enab())
+		write_aux_reg(ARC_REG_TLBPD1HI, 0);
 }

 /*
-13
arch/arc/plat-sim/Kconfig
···
-#
-# Copyright (C) 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License version 2 as
-# published by the Free Software Foundation.
-#
-
-menuconfig ARC_PLAT_SIM
-	bool "ARC nSIM based simulation virtual platforms"
-	help
-	  Support for nSIM based ARC simulation platforms
-	  This includes the standalone nSIM (uart only) vs. System C OSCI VP
···
 		stdout-path = "serial0:115200n8";
 	};

-	audio_clkout: audio_clkout {
+	audio_clkout: audio-clkout {
 		/*
 		 * This is same as <&rcar_sound 0>
 		 * but needed to avoid cs2000/rcar_sound probe dead-lock
···

 /*
  * This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
-#define ELF_ET_DYN_BASE	0x100000000UL
+#define ELF_ET_DYN_BASE	(2 * TASK_SIZE_64 / 3)

 #ifndef __ASSEMBLY__
···
  * containing function pointers) to be reinitialized, and zero-initialized
  * .bss variables will be reset to 0.
  */
-u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
+u64 __init kaslr_early_init(u64 dt_phys)
 {
 	void *fdt;
 	u64 seed, offset, mask, module_range;
···
 	/*
 	 * The kernel Image should not extend across a 1GB/32MB/512MB alignment
 	 * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
-	 * happens, increase the KASLR offset by the size of the kernel image
-	 * rounded up by SWAPPER_BLOCK_SIZE.
+	 * happens, round down the KASLR offset by (1 << SWAPPER_TABLE_SHIFT).
+	 *
+	 * NOTE: The references to _text and _end below will already take the
+	 *       modulo offset (the physical displacement modulo 2 MB) into
+	 *       account, given that the physical placement is controlled by
+	 *       the loader, and will not change as a result of the virtual
+	 *       mapping we choose.
 	 */
-	if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
-	    (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT)) {
-		u64 kimg_sz = _end - _text;
-		offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
-				& mask;
-	}
+	if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
+	    (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
+		offset = round_down(offset, 1 << SWAPPER_TABLE_SHIFT);

 	if (IS_ENABLED(CONFIG_KASAN))
 		/*
+4-1
arch/arm64/mm/fault.c
···
 		 * the mmap_sem because it would already be released
 		 * in __lock_page_or_retry in mm/filemap.c.
 		 */
-		if (fatal_signal_pending(current))
+		if (fatal_signal_pending(current)) {
+			if (!user_mode(regs))
+				goto no_context;
 			return 0;
+		}

 		/*
 		 * Clear FAULT_FLAG_ALLOW_RETRY to avoid any risk of
+1-1
arch/powerpc/Kconfig
···
 	select HAVE_OPTPROBES			if PPC64
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_EVENTS_NMI		if PPC64
-	select HAVE_HARDLOCKUP_DETECTOR_PERF	if HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
+	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
+18
arch/powerpc/include/asm/mmu_context.h
···
 	/* Mark this context has been used on the new CPU */
 	if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(next))) {
 		cpumask_set_cpu(smp_processor_id(), mm_cpumask(next));
+
+		/*
+		 * This full barrier orders the store to the cpumask above vs
+		 * a subsequent operation which allows this CPU to begin loading
+		 * translations for next.
+		 *
+		 * When using the radix MMU that operation is the load of the
+		 * MMU context id, which is then moved to SPRN_PID.
+		 *
+		 * For the hash MMU it is either the first load from slb_cache
+		 * in switch_slb(), and/or the store of paca->mm_ctx_id in
+		 * copy_mm_to_paca().
+		 *
+		 * On the read side the barrier is in pte_xchg(), which orders
+		 * the store to the PTE vs the load of mm_cpumask.
+		 */
+		smp_mb();
+
 		new_on_cpu = true;
 	}
+1
arch/powerpc/include/asm/pgtable-be-types.h
···
 	unsigned long *p = (unsigned long *)ptep;
 	__be64 prev;

+	/* See comment in switch_mm_irqs_off() */
 	prev = (__force __be64)__cmpxchg_u64(p, (__force unsigned long)pte_raw(old),
 					     (__force unsigned long)pte_raw(new));
+1
arch/powerpc/include/asm/pgtable-types.h
···
 {
 	unsigned long *p = (unsigned long *)ptep;

+	/* See comment in switch_mm_irqs_off() */
 	return pte_val(old) == __cmpxchg_u64(p, pte_val(old), pte_val(new));
 }
 #endif
+3-2
arch/powerpc/kernel/process.c
···

 	cpumsr = msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);

-	if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) {
+	if (current->thread.regs &&
+	    (current->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP))) {
 		check_if_tm_restore_required(current);
 		/*
 		 * If a thread has already been reclaimed then the
···
 {
 	if (tsk->thread.regs) {
 		preempt_disable();
-		if (tsk->thread.regs->msr & MSR_VSX) {
+		if (tsk->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP)) {
 			BUG_ON(tsk != current);
 			giveup_vsx(tsk);
 		}
+34-22
arch/powerpc/kvm/book3s_64_vio.c
···
 		struct kvm_create_spapr_tce_64 *args)
 {
 	struct kvmppc_spapr_tce_table *stt = NULL;
+	struct kvmppc_spapr_tce_table *siter;
 	unsigned long npages, size;
 	int ret = -ENOMEM;
 	int i;
+	int fd = -1;

 	if (!args->size)
 		return -EINVAL;

-	/* Check this LIOBN hasn't been previously allocated */
-	list_for_each_entry(stt, &kvm->arch.spapr_tce_tables, list) {
-		if (stt->liobn == args->liobn)
-			return -EBUSY;
-	}
-
 	size = _ALIGN_UP(args->size, PAGE_SIZE >> 3);
 	npages = kvmppc_tce_pages(size);
 	ret = kvmppc_account_memlimit(kvmppc_stt_pages(npages), true);
-	if (ret) {
-		stt = NULL;
-		goto fail;
-	}
+	if (ret)
+		return ret;

 	ret = -ENOMEM;
 	stt = kzalloc(sizeof(*stt) + npages * sizeof(struct page *),
 		      GFP_KERNEL);
 	if (!stt)
-		goto fail;
+		goto fail_acct;

 	stt->liobn = args->liobn;
 	stt->page_shift = args->page_shift;
···
 			goto fail;
 	}

-	kvm_get_kvm(kvm);
+	ret = fd = anon_inode_getfd("kvm-spapr-tce", &kvm_spapr_tce_fops,
+				    stt, O_RDWR | O_CLOEXEC);
+	if (ret < 0)
+		goto fail;

 	mutex_lock(&kvm->lock);
-	list_add_rcu(&stt->list, &kvm->arch.spapr_tce_tables);
+
+	/* Check this LIOBN hasn't been previously allocated */
+	ret = 0;
+	list_for_each_entry(siter, &kvm->arch.spapr_tce_tables, list) {
+		if (siter->liobn == args->liobn) {
+			ret = -EBUSY;
+			break;
+		}
+	}
+
+	if (!ret) {
+		list_add_rcu(&stt->list, &kvm->arch.spapr_tce_tables);
+		kvm_get_kvm(kvm);
+	}

 	mutex_unlock(&kvm->lock);

-	return anon_inode_getfd("kvm-spapr-tce", &kvm_spapr_tce_fops,
-				stt, O_RDWR | O_CLOEXEC);
+	if (!ret)
+		return fd;

-fail:
-	if (stt) {
-		for (i = 0; i < npages; i++)
-			if (stt->pages[i])
-				__free_page(stt->pages[i]);
+	put_unused_fd(fd);

-		kfree(stt);
-	}
+ fail:
+	for (i = 0; i < npages; i++)
+		if (stt->pages[i])
+			__free_page(stt->pages[i]);
+
+	kfree(stt);
+ fail_acct:
+	kvmppc_account_memlimit(kvmppc_stt_pages(npages), false);
 	return ret;
 }
+3
arch/powerpc/kvm/book3s_hv_rmhandlers.S
···
 	/* Hypervisor doorbell - exit only if host IPI flag set */
 	cmpwi	r12, BOOK3S_INTERRUPT_H_DOORBELL
 	bne	3f
+BEGIN_FTR_SECTION
+	PPC_MSGSYNC
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
 	lbz	r0, HSTATE_HOST_IPI(r13)
 	cmpwi	r0, 0
 	beq	4f
+65-3
arch/powerpc/kvm/book3s_xive_template.c
···
 	u8 cppr;
 	u16 ack;

-	/* XXX DD1 bug workaround: Check PIPR vs. CPPR first ! */
+	/*
+	 * Ensure any previous store to CPPR is ordered vs.
+	 * the subsequent loads from PIPR or ACK.
+	 */
+	eieio();
+
+	/*
+	 * DD1 bug workaround: If PIPR is less favored than CPPR
+	 * ignore the interrupt or we might incorrectly lose an IPB
+	 * bit.
+	 */
+	if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
+		u8 pipr = __x_readb(__x_tima + TM_QW1_OS + TM_PIPR);
+		if (pipr >= xc->hw_cppr)
+			return;
+	}

 	/* Perform the acknowledge OS to register cycle. */
 	ack = be16_to_cpu(__x_readw(__x_tima + TM_SPC_ACK_OS_REG));
···
 	/*
 	 * If we found an interrupt, adjust what the guest CPPR should
 	 * be as if we had just fetched that interrupt from HW.
+	 *
+	 * Note: This can only make xc->cppr smaller as the previous
+	 * loop will only exit with hirq != 0 if prio is lower than
+	 * the current xc->cppr. Thus we don't need to re-check xc->mfrr
+	 * for pending IPIs.
 	 */
 	if (hirq)
 		xc->cppr = prio;
···
 	xc->cppr = cppr;

 	/*
+	 * Order the above update of xc->cppr with the subsequent
+	 * read of xc->mfrr inside push_pending_to_hw()
+	 */
+	smp_mb();
+
+	/*
 	 * We are masking less, we need to look for pending things
 	 * to deliver and set VP pending bits accordingly to trigger
 	 * a new interrupt otherwise we might miss MFRR changes for
···
 	 * used to signal MFRR changes is EOId when fetched from
 	 * the queue.
 	 */
-	if (irq == XICS_IPI || irq == 0)
+	if (irq == XICS_IPI || irq == 0) {
+		/*
+		 * This barrier orders the setting of xc->cppr vs.
+		 * subsequent test of xc->mfrr done inside
+		 * scan_interrupts and push_pending_to_hw
+		 */
+		smp_mb();
 		goto bail;
+	}

 	/* Find interrupt source */
 	sb = kvmppc_xive_find_source(xive, irq, &src);
 	if (!sb) {
 		pr_devel(" source not found !\n");
 		rc = H_PARAMETER;
+		/* Same as above */
+		smp_mb();
 		goto bail;
 	}
 	state = &sb->irq_state[src];
 	kvmppc_xive_select_irq(state, &hw_num, &xd);

 	state->in_eoi = true;
-	mb();
+
+	/*
+	 * This barrier orders both setting of in_eoi above vs.
+	 * subsequent test of guest_priority, and the setting
+	 * of xc->cppr vs. subsequent test of xc->mfrr done inside
+	 * scan_interrupts and push_pending_to_hw
+	 */
+	smp_mb();

again:
 	if (state->guest_priority == MASKED) {
···

 	}

+	/*
+	 * This barrier orders the above guest_priority check
+	 * and spin_lock/unlock with clearing in_eoi below.
+	 *
+	 * It also has to be a full mb() as it must ensure
+	 * the MMIOs done in source_eoi() are completed before
+	 * state->in_eoi is visible.
+	 */
 	mb();
 	state->in_eoi = false;
bail:
···

 	/* Locklessly write over MFRR */
 	xc->mfrr = mfrr;
+
+	/*
+	 * The load of xc->cppr below and the subsequent MMIO store
+	 * to the IPI must happen after the above mfrr update is
+	 * globally visible so that:
+	 *
+	 * - Synchronize with another CPU doing an H_EOI or a H_CPPR
+	 *   updating xc->cppr then reading xc->mfrr.
+	 *
+	 * - The target of the IPI sees the xc->mfrr update
+	 */
+	mb();

 	/* Shoot the IPI if most favored than target cppr */
 	if (mfrr < xc->cppr)
+5-2
arch/s390/kvm/sthyi.c
···
 		"srl     %[cc],28\n"
 		: [cc] "=d" (cc)
 		: [code] "d" (code), [addr] "a" (addr)
-		: "memory", "cc");
+		: "3", "memory", "cc");
 	return cc;
 }
···
 	VCPU_EVENT(vcpu, 3, "STHYI: fc: %llu addr: 0x%016llx", code, addr);
 	trace_kvm_s390_handle_sthyi(vcpu, code, addr);

-	if (reg1 == reg2 || reg1 & 1 || reg2 & 1 || addr & ~PAGE_MASK)
+	if (reg1 == reg2 || reg1 & 1 || reg2 & 1)
 		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);

 	if (code & 0xffff) {
 		cc = 3;
 		goto out;
 	}
+
+	if (addr & ~PAGE_MASK)
+		return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);

 	/*
 	 * If the page has not yet been faulted in, we want to do that
···
 	load_mm_cr4(this_cpu_read(cpu_tlbstate.loaded_mm));
 }

-static void x86_pmu_event_mapped(struct perf_event *event)
+static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 {
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;
···
 	 * For now, this can't happen because all callers hold mmap_sem
 	 * for write. If this changes, we'll need a different solution.
 	 */
-	lockdep_assert_held_exclusive(&current->mm->mmap_sem);
+	lockdep_assert_held_exclusive(&mm->mmap_sem);

-	if (atomic_inc_return(&current->mm->context.perf_rdpmc_allowed) == 1)
-		on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
+	if (atomic_inc_return(&mm->context.perf_rdpmc_allowed) == 1)
+		on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1);
 }

-static void x86_pmu_event_unmapped(struct perf_event *event)
+static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
 {
-	if (!current->mm)
-		return;
-
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;

-	if (atomic_dec_and_test(&current->mm->context.perf_rdpmc_allowed))
-		on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
+	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
+		on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1);
 }

 static int x86_pmu_event_idx(struct perf_event *event)
···
  * P4_CONFIG_ALIASABLE or bits for P4_PEBS_METRIC, they are
  * either up to date automatically or not applicable at all.
  */
-struct p4_event_alias {
+static struct p4_event_alias {
 	u64 original;
 	u64 alternative;
 } p4_event_aliases[] = {
···

 /*
  * This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
 #define ELF_ET_DYN_BASE		(mmap_is_ia32() ? 0x000400000UL : \
-						  0x100000000UL)
+						  (TASK_SIZE / 3 * 2))

 /* This yields a mask that user programs can use to figure out what
    instruction set this CPU supports. This could be done in user space,
···
 	stop_machine(mtrr_rendezvous_handler, &data, cpu_online_mask);
 }

+static void set_mtrr_cpuslocked(unsigned int reg, unsigned long base,
+				unsigned long size, mtrr_type type)
+{
+	struct set_mtrr_data data = { .smp_reg = reg,
+				      .smp_base = base,
+				      .smp_size = size,
+				      .smp_type = type
+				    };
+
+	stop_machine_cpuslocked(mtrr_rendezvous_handler, &data, cpu_online_mask);
+}
+
 static void set_mtrr_from_inactive_cpu(unsigned int reg, unsigned long base,
 				       unsigned long size, mtrr_type type)
 {
···
 	/* Search for an empty MTRR */
 	i = mtrr_if->get_free_region(base, size, replace);
 	if (i >= 0) {
-		set_mtrr(i, base, size, type);
+		set_mtrr_cpuslocked(i, base, size, type);
 		if (likely(replace < 0)) {
 			mtrr_usage_table[i] = 1;
 		} else {
···
 			if (increment)
 				mtrr_usage_table[i]++;
 			if (unlikely(replace != i)) {
-				set_mtrr(replace, 0, 0, 0);
+				set_mtrr_cpuslocked(replace, 0, 0, 0);
 				mtrr_usage_table[replace] = 0;
 			}
 		}
···
 		goto out;
 	}
 	if (--mtrr_usage_table[reg] < 1)
-		set_mtrr(reg, 0, 0, 0);
+		set_mtrr_cpuslocked(reg, 0, 0, 0);
 	error = reg;
 out:
 	mutex_unlock(&mtrr_mutex);
+4-3
arch/x86/kernel/head64.c
···5353 pudval_t *pud;5454 pmdval_t *pmd, pmd_entry;5555 int i;5656+ unsigned int *next_pgt_ptr;56575758 /* Is the address too large? */5859 if (physaddr >> MAX_PHYSMEM_BITS)···9291 * creates a bunch of nonsense entries but that is fine --9392 * it avoids problems around wraparound.9493 */9595-9696- pud = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);9797- pmd = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);9494+ next_pgt_ptr = fixup_pointer(&next_early_pgt, physaddr);9595+ pud = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr);9696+ pmd = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr);98979998 if (IS_ENABLED(CONFIG_X86_5LEVEL)) {10099 p4d = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
···971971 * Returns zero if CPU booted OK, else error code from972972 * ->wakeup_secondary_cpu.973973 */974974-static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)974974+static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,975975+ int *cpu0_nmi_registered)975976{976977 volatile u32 *trampoline_status =977978 (volatile u32 *) __va(real_mode_header->trampoline_status);···980979 unsigned long start_ip = real_mode_header->trampoline_start;981980982981 unsigned long boot_error = 0;983983- int cpu0_nmi_registered = 0;984982 unsigned long timeout;985983986984 idle->thread.sp = (unsigned long)task_pt_regs(idle);···10351035 boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);10361036 else10371037 boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,10381038- &cpu0_nmi_registered);10381038+ cpu0_nmi_registered);1039103910401040 if (!boot_error) {10411041 /*···10801080 */10811081 smpboot_restore_warm_reset_vector();10821082 }10831083- /*10841084- * Clean up the nmi handler. Do this after the callin and callout sync10851085- * to avoid impact of possible long unregister time.10861086- */10871087- if (cpu0_nmi_registered)10881088- unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");1089108310901084 return boot_error;10911085}···10871093int native_cpu_up(unsigned int cpu, struct task_struct *tidle)10881094{10891095 int apicid = apic->cpu_present_to_apicid(cpu);10961096+ int cpu0_nmi_registered = 0;10901097 unsigned long flags;10911091- int err;10981098+ int err, ret = 0;1092109910931100 WARN_ON(irqs_disabled());10941101···1126113111271132 common_cpu_up(cpu, tidle);1128113311291129- err = do_boot_cpu(apicid, cpu, tidle);11341134+ err = do_boot_cpu(apicid, cpu, tidle, &cpu0_nmi_registered);11301135 if (err) {11311136 pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);11321132- return -EIO;11371137+ ret = -EIO;11381138+ goto unreg_nmi;11331139 }1134114011351141 /*···11461150 touch_nmi_watchdog();11471151 }1148115211491149- return 0;11531153+unreg_nmi:11541154+ /*11551155+ * Clean up the nmi handler. Do this after the callin and callout sync11561156+ * to avoid impact of possible long unregister time.11571157+ */11581158+ if (cpu0_nmi_registered)11591159+ unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");11601160+11611161+ return ret;11501162}1151116311521164/**
+1-1
arch/x86/kvm/cpuid.c
···469469 entry->ecx &= kvm_cpuid_7_0_ecx_x86_features;470470 cpuid_mask(&entry->ecx, CPUID_7_ECX);471471 /* PKU is not yet implemented for shadow paging. */472472- if (!tdp_enabled)472472+ if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))473473 entry->ecx &= ~F(PKU);474474 entry->edx &= kvm_cpuid_7_0_edx_x86_features;475475 entry->edx &= get_scattered_cpuid_leaf(7, 0, CPUID_EDX);
···185185 * index of the protection domain, so pte_pkey * 2 is186186 * is the index of the first bit for the domain.187187 */188188- pkru_bits = (kvm_read_pkru(vcpu) >> (pte_pkey * 2)) & 3;188188+ pkru_bits = (vcpu->arch.pkru >> (pte_pkey * 2)) & 3;189189190190 /* clear present bit, replace PFEC.RSVD with ACC_USER_MASK. */191191 offset = (pfec & ~1) +
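The mmu.h hunk above reads the two permission bits that PKRU holds for a given protection key. A minimal standalone sketch of that shift-and-mask lookup (the helper name is ours for illustration, not a KVM API):

```c
#include <assert.h>
#include <stdint.h>

/* Each protection key owns two bits in PKRU: access-disable at bit 2*key,
 * write-disable at bit 2*key + 1. This mirrors the
 * (pkru >> (pte_pkey * 2)) & 3 expression in the hunk above. */
static inline uint32_t pkru_bits_for_key(uint32_t pkru, unsigned int pkey)
{
	return (pkru >> (pkey * 2)) & 3;
}
```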
+1-8
arch/x86/kvm/svm.c
···1100110011011101 if (vls) {11021102 if (!npt_enabled ||11031103- !boot_cpu_has(X86_FEATURE_VIRTUAL_VMLOAD_VMSAVE) ||11031103+ !boot_cpu_has(X86_FEATURE_V_VMSAVE_VMLOAD) ||11041104 !IS_ENABLED(CONFIG_X86_64)) {11051105 vls = false;11061106 } else {···17751775 * so we do not need to update the CPL here.17761776 */17771777 to_svm(vcpu)->vmcb->save.rflags = rflags;17781778-}17791779-17801780-static u32 svm_get_pkru(struct kvm_vcpu *vcpu)17811781-{17821782- return 0;17831778}1784177917851780static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)···54075412 .cache_reg = svm_cache_reg,54085413 .get_rflags = svm_get_rflags,54095414 .set_rflags = svm_set_rflags,54105410-54115411- .get_pkru = svm_get_pkru,5412541554135416 .tlb_flush = svm_flush_tlb,54145417
+8-17
arch/x86/kvm/vmx.c
···636636637637 u64 current_tsc_ratio;638638639639- bool guest_pkru_valid;640640- u32 guest_pkru;641639 u32 host_pkru;642640643641 /*···2379238123802382 if ((old_rflags ^ to_vmx(vcpu)->rflags) & X86_EFLAGS_VM)23812383 to_vmx(vcpu)->emulation_required = emulation_required(vcpu);23822382-}23832383-23842384-static u32 vmx_get_pkru(struct kvm_vcpu *vcpu)23852385-{23862386- return to_vmx(vcpu)->guest_pkru;23872384}2388238523892386static u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)···90139020 if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)90149021 vmx_set_interrupt_shadow(vcpu, 0);9015902290169016- if (vmx->guest_pkru_valid)90179017- __write_pkru(vmx->guest_pkru);90239023+ if (static_cpu_has(X86_FEATURE_PKU) &&90249024+ kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&90259025+ vcpu->arch.pkru != vmx->host_pkru)90269026+ __write_pkru(vcpu->arch.pkru);9018902790199028 atomic_switch_perf_msrs(vmx);90209029 debugctlmsr = get_debugctlmsr();···91649169 * back on host, so it is safe to read guest PKRU from current91659170 * XSAVE.91669171 */91679167- if (boot_cpu_has(X86_FEATURE_OSPKE)) {91689168- vmx->guest_pkru = __read_pkru();91699169- if (vmx->guest_pkru != vmx->host_pkru) {91709170- vmx->guest_pkru_valid = true;91729172+ if (static_cpu_has(X86_FEATURE_PKU) &&91739173+ kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {91749174+ vcpu->arch.pkru = __read_pkru();91759175+ if (vcpu->arch.pkru != vmx->host_pkru)91719176 __write_pkru(vmx->host_pkru);91729172- } else91739173- vmx->guest_pkru_valid = false;91749177 }9175917891769179 /*···1167411681 .cache_reg = vmx_cache_reg,1167511682 .get_rflags = vmx_get_rflags,1167611683 .set_rflags = vmx_set_rflags,1167711677-1167811678- .get_pkru = vmx_get_pkru,11679116841168011685 .tlb_flush = vmx_flush_tlb,1168111686
···5050static unsigned long stack_maxrandom_size(unsigned long task_size)5151{5252 unsigned long max = 0;5353- if ((current->flags & PF_RANDOMIZE) &&5454- !(current->personality & ADDR_NO_RANDOMIZE)) {5353+ if (current->flags & PF_RANDOMIZE) {5554 max = (-1UL) & __STACK_RND_MASK(task_size == tasksize_32bit());5655 max <<= PAGE_SHIFT;5756 }···78797980static unsigned long arch_rnd(unsigned int rndbits)8081{8282+ if (!(current->flags & PF_RANDOMIZE))8383+ return 0;8184 return (get_random_long() & ((1UL << rndbits) - 1)) << PAGE_SHIFT;8285}83868487unsigned long arch_mmap_rnd(void)8588{8686- if (!(current->flags & PF_RANDOMIZE))8787- return 0;8889 return arch_rnd(mmap_is_ia32() ? mmap32_rnd_bits : mmap64_rnd_bits);8990}9091
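arch_rnd() above masks a random value down to rndbits bits and shifts it into a page-aligned offset. A self-contained sketch of that masking (the function name and the 4 KiB page-shift value are assumptions for illustration):

```c
#include <assert.h>

/* Keep the low rndbits of the random value and shift them up by the
 * page shift, as arch_rnd() does; 12 stands in for PAGE_SHIFT here
 * (assumed 4 KiB pages). */
static unsigned long rnd_offset(unsigned long random, unsigned int rndbits)
{
	const unsigned int page_shift = 12;
	return (random & ((1UL << rndbits) - 1)) << page_shift;
}
```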
+2-2
arch/x86/platform/uv/tlb_uv.c
···2626static struct bau_operations ops __ro_after_init;27272828/* timeouts in nanoseconds (indexed by UVH_AGING_PRESCALE_SEL urgency7 30:28) */2929-static int timeout_base_ns[] = {2929+static const int timeout_base_ns[] = {3030 20,3131 160,3232 1280,···12161216 * set a bit in the UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE register.12171217 * Such a message must be ignored.12181218 */12191219-void process_uv2_message(struct msg_desc *mdp, struct bau_control *bcp)12191219+static void process_uv2_message(struct msg_desc *mdp, struct bau_control *bcp)12201220{12211221 unsigned long mmr_image;12221222 unsigned char swack_vec;
···382382 } \383383} while (0)384384385385+static inline unsigned int throtl_bio_data_size(struct bio *bio)386386+{387387+ /* assume it's one sector */388388+ if (unlikely(bio_op(bio) == REQ_OP_DISCARD))389389+ return 512;390390+ return bio->bi_iter.bi_size;391391+}392392+385393static void throtl_qnode_init(struct throtl_qnode *qn, struct throtl_grp *tg)386394{387395 INIT_LIST_HEAD(&qn->node);···942934 bool rw = bio_data_dir(bio);943935 u64 bytes_allowed, extra_bytes, tmp;944936 unsigned long jiffy_elapsed, jiffy_wait, jiffy_elapsed_rnd;937937+ unsigned int bio_size = throtl_bio_data_size(bio);945938946939 jiffy_elapsed = jiffy_elapsed_rnd = jiffies - tg->slice_start[rw];947940···956947 do_div(tmp, HZ);957948 bytes_allowed = tmp;958949959959- if (tg->bytes_disp[rw] + bio->bi_iter.bi_size <= bytes_allowed) {950950+ if (tg->bytes_disp[rw] + bio_size <= bytes_allowed) {960951 if (wait)961952 *wait = 0;962953 return true;963954 }964955965956 /* Calc approx time to dispatch */966966- extra_bytes = tg->bytes_disp[rw] + bio->bi_iter.bi_size - bytes_allowed;957957+ extra_bytes = tg->bytes_disp[rw] + bio_size - bytes_allowed;967958 jiffy_wait = div64_u64(extra_bytes * HZ, tg_bps_limit(tg, rw));968959969960 if (!jiffy_wait)···10431034static void throtl_charge_bio(struct throtl_grp *tg, struct bio *bio)10441035{10451036 bool rw = bio_data_dir(bio);10371037+ unsigned int bio_size = throtl_bio_data_size(bio);1046103810471039 /* Charge the bio to the group */10481048- tg->bytes_disp[rw] += bio->bi_iter.bi_size;10401040+ tg->bytes_disp[rw] += bio_size;10491041 tg->io_disp[rw]++;10501050- tg->last_bytes_disp[rw] += bio->bi_iter.bi_size;10421042+ tg->last_bytes_disp[rw] += bio_size;10511043 tg->last_io_disp[rw]++;1052104410531045 /*
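throtl_bio_data_size() introduced above charges discard bios as a single 512-byte sector, since they carry no data payload to account. A standalone sketch of that logic with stubbed-out op codes (the enum values are placeholders, not the kernel's REQ_OP encoding, and the function takes the op and size directly instead of a struct bio):

```c
#include <assert.h>

enum { REQ_OP_READ, REQ_OP_WRITE, REQ_OP_DISCARD }; /* placeholder values */

/* Mirror of the helper's logic: discards are assumed to be one sector,
 * everything else is charged by its actual byte count. */
static unsigned int throtl_bio_data_size_sketch(int op, unsigned int bi_size)
{
	if (op == REQ_OP_DISCARD)
		return 512;
	return bi_size;
}
```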
+44-30
block/bsg-lib.c
···2929#include <scsi/scsi_cmnd.h>30303131/**3232- * bsg_destroy_job - routine to teardown/delete a bsg job3232+ * bsg_teardown_job - routine to teardown a bsg job3333 * @job: bsg_job that is to be torn down3434 */3535-static void bsg_destroy_job(struct kref *kref)3535+static void bsg_teardown_job(struct kref *kref)3636{3737 struct bsg_job *job = container_of(kref, struct bsg_job, kref);3838 struct request *rq = job->req;3939-4040- blk_end_request_all(rq, BLK_STS_OK);41394240 put_device(job->dev); /* release reference for the request */43414442 kfree(job->request_payload.sg_list);4543 kfree(job->reply_payload.sg_list);4646- kfree(job);4444+4545+ blk_end_request_all(rq, BLK_STS_OK);4746}48474948void bsg_job_put(struct bsg_job *job)5049{5151- kref_put(&job->kref, bsg_destroy_job);5050+ kref_put(&job->kref, bsg_teardown_job);5251}5352EXPORT_SYMBOL_GPL(bsg_job_put);5453···99100 */100101static void bsg_softirq_done(struct request *rq)101102{102102- struct bsg_job *job = rq->special;103103+ struct bsg_job *job = blk_mq_rq_to_pdu(rq);103104104105 bsg_job_put(job);105106}···121122}122123123124/**124124- * bsg_create_job - create the bsg_job structure for the bsg request125125+ * bsg_prepare_job - create the bsg_job structure for the bsg request125126 * @dev: device that is being sent the bsg request126127 * @req: BSG request that needs a job structure127128 */128128-static int bsg_create_job(struct device *dev, struct request *req)129129+static int bsg_prepare_job(struct device *dev, struct request *req)129130{130131 struct request *rsp = req->next_rq;131131- struct request_queue *q = req->q;132132 struct scsi_request *rq = scsi_req(req);133133- struct bsg_job *job;133133+ struct bsg_job *job = blk_mq_rq_to_pdu(req);134134 int ret;135135136136- BUG_ON(req->special);137137-138138- job = kzalloc(sizeof(struct bsg_job) + q->bsg_job_size, GFP_KERNEL);139139- if (!job)140140- return -ENOMEM;141141-142142- req->special = job;143143- job->req = req;144144- if (q->bsg_job_size)145145- job->dd_data = (void *)&job[1];146136 job->request = rq->cmd;147137 job->request_len = rq->cmd_len;148148- job->reply = rq->sense;149149- job->reply_len = SCSI_SENSE_BUFFERSIZE; /* Size of sense buffer150150- * allocated */138138+151139 if (req->bio) {152140 ret = bsg_map_buffer(&job->request_payload, req);153141 if (ret)···173187{174188 struct device *dev = q->queuedata;175189 struct request *req;176176- struct bsg_job *job;177190 int ret;178191179192 if (!get_device(dev))···184199 break;185200 spin_unlock_irq(q->queue_lock);186201187187- ret = bsg_create_job(dev, req);202202+ ret = bsg_prepare_job(dev, req);188203 if (ret) {189204 scsi_req(req)->result = ret;190205 blk_end_request_all(req, BLK_STS_OK);···192207 continue;193208 }194209195195- job = req->special;196196- ret = q->bsg_job_fn(job);210210+ ret = q->bsg_job_fn(blk_mq_rq_to_pdu(req));197211 spin_lock_irq(q->queue_lock);198212 if (ret)199213 break;···201217 spin_unlock_irq(q->queue_lock);202218 put_device(dev);203219 spin_lock_irq(q->queue_lock);220220+}221221+222222+static int bsg_init_rq(struct request_queue *q, struct request *req, gfp_t gfp)223223+{224224+ struct bsg_job *job = blk_mq_rq_to_pdu(req);225225+ struct scsi_request *sreq = &job->sreq;226226+227227+ memset(job, 0, sizeof(*job));228228+229229+ scsi_req_init(sreq);230230+ sreq->sense_len = SCSI_SENSE_BUFFERSIZE;231231+ sreq->sense = kzalloc(sreq->sense_len, gfp);232232+ if (!sreq->sense)233233+ return -ENOMEM;234234+235235+ job->req = req;236236+ job->reply = sreq->sense;237237+ job->reply_len = sreq->sense_len;238238+ job->dd_data = job + 1;239239+240240+ return 0;241241+}242242+243243+static void bsg_exit_rq(struct request_queue *q, struct request *req)244244+{245245+ struct bsg_job *job = blk_mq_rq_to_pdu(req);246246+ struct scsi_request *sreq = &job->sreq;247247+248248+ kfree(sreq->sense);204249}205250206251/**···248235 q = blk_alloc_queue(GFP_KERNEL);249236 if (!q)250237 return ERR_PTR(-ENOMEM);251251- q->cmd_size = sizeof(struct scsi_request);238238+ q->cmd_size = sizeof(struct bsg_job) + dd_job_size;239239+ q->init_rq_fn = bsg_init_rq;240240+ q->exit_rq_fn = bsg_exit_rq;252241 q->request_fn = bsg_request_fn;253242254243 ret = blk_init_allocated_queue(q);···258243 goto out_cleanup_queue;259244260245 q->queuedata = dev;261261- q->bsg_job_size = dd_job_size;262246 q->bsg_job_fn = job_fn;263247 queue_flag_set_unlocked(QUEUE_FLAG_BIDI, q);264248 queue_flag_set_unlocked(QUEUE_FLAG_SCSI_PASSTHROUGH, q);
+7-3
drivers/acpi/acpica/nsxfeval.c
···100100 free_buffer_on_error = TRUE;101101 }102102103103- status = acpi_get_handle(handle, pathname, &target_handle);104104- if (ACPI_FAILURE(status)) {105105- return_ACPI_STATUS(status);103103+ if (pathname) {104104+ status = acpi_get_handle(handle, pathname, &target_handle);105105+ if (ACPI_FAILURE(status)) {106106+ return_ACPI_STATUS(status);107107+ }108108+ } else {109109+ target_handle = handle;106110 }107111108112 full_pathname = acpi_ns_get_external_pathname(target_handle);
+7-10
drivers/acpi/ec.c
···17411741 * functioning ECDT EC first in order to handle the events.17421742 * https://bugzilla.kernel.org/show_bug.cgi?id=11502117431743 */17441744-int __init acpi_ec_ecdt_start(void)17441744+static int __init acpi_ec_ecdt_start(void)17451745{17461746 acpi_handle handle;17471747···20032003int __init acpi_ec_init(void)20042004{20052005 int result;20062006+ int ecdt_fail, dsdt_fail;2006200720072008 /* register workqueue for _Qxx evaluations */20082009 result = acpi_ec_query_init();20092010 if (result)20102010- goto err_exit;20112011- /* Now register the driver for the EC */20122012- result = acpi_bus_register_driver(&acpi_ec_driver);20132013- if (result)20142014- goto err_exit;20112011+ return result;2015201220162016-err_exit:20172017- if (result)20182018- acpi_ec_query_exit();20192019- return result;20132013+ /* Drivers must be started after acpi_ec_query_init() */20142014+ ecdt_fail = acpi_ec_ecdt_start();20152015+ dsdt_fail = acpi_bus_register_driver(&acpi_ec_driver);20162016+ return ecdt_fail && dsdt_fail ? -ENODEV : 0;20202017}2021201820222019/* EC driver currently not unloadable */
···20752075 /*20762076 * Get the bios in the request so we can re-queue them.20772077 */20782078- if (req_op(shadow[i].request) == REQ_OP_FLUSH ||20792079- req_op(shadow[i].request) == REQ_OP_DISCARD ||20802080- req_op(shadow[i].request) == REQ_OP_SECURE_ERASE ||20782078+ if (req_op(shadow[j].request) == REQ_OP_FLUSH ||20792079+ req_op(shadow[j].request) == REQ_OP_DISCARD ||20802080+ req_op(shadow[j].request) == REQ_OP_SECURE_ERASE ||20812081 shadow[j].request->cmd_flags & REQ_FUA) {20822082 /*20832083 * Flush operations don't contain bios, so
+1-1
drivers/clocksource/Kconfig
···262262263263config CLKSRC_PISTACHIO264264 bool "Clocksource for Pistachio SoC" if COMPILE_TEST265265- depends on HAS_IOMEM265265+ depends on GENERIC_CLOCKEVENTS && HAS_IOMEM266266 select TIMER_OF267267 help268268 Enables the clocksource for the Pistachio SoC.
+1-1
drivers/clocksource/arm_arch_timer.c
···14401440 * While unlikely, it's theoretically possible that none of the frames14411441 * in a timer expose the combination of feature we want.14421442 */14431443- for (i = i; i < timer_count; i++) {14431443+ for (i = 0; i < timer_count; i++) {14441444 timer = &timers[i];1445144514461446 frame = arch_timer_mem_find_best_frame(timer);
+6-5
drivers/clocksource/em_sti.c
···305305 irq = platform_get_irq(pdev, 0);306306 if (irq < 0) {307307 dev_err(&pdev->dev, "failed to get irq\n");308308- return -EINVAL;308308+ return irq;309309 }310310311311 /* map memory, let base point to the STI instance */···314314 if (IS_ERR(p->base))315315 return PTR_ERR(p->base);316316317317- if (devm_request_irq(&pdev->dev, irq, em_sti_interrupt,318318- IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING,319319- dev_name(&pdev->dev), p)) {317317+ ret = devm_request_irq(&pdev->dev, irq, em_sti_interrupt,318318+ IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING,319319+ dev_name(&pdev->dev), p);320320+ if (ret) {320321 dev_err(&pdev->dev, "failed to request low IRQ\n");321321- return -ENOENT;322322+ return ret;322323 }323324324325 /* get hold of clock */
···244244 struct dma_fence *f = e->fence;245245 struct amd_sched_fence *s_fence = to_amd_sched_fence(f);246246247247+ if (dma_fence_is_signaled(f)) {248248+ hash_del(&e->node);249249+ dma_fence_put(f);250250+ kmem_cache_free(amdgpu_sync_slab, e);251251+ continue;252252+ }247253 if (ring && s_fence) {248254 /* For fences from the same ring it is sufficient249255 * when they are scheduled.···260254261255 return &s_fence->scheduled;262256 }263263- }264264-265265- if (dma_fence_is_signaled(f)) {266266- hash_del(&e->node);267267- dma_fence_put(f);268268- kmem_cache_free(amdgpu_sync_slab, e);269269- continue;270257 }271258272259 return f;
+8-3
drivers/gpu/drm/drm_atomic.c
···16551655 if (config->funcs->atomic_check)16561656 ret = config->funcs->atomic_check(state->dev, state);1657165716581658+ if (ret)16591659+ return ret;16601660+16581661 if (!state->allow_modeset) {16591662 for_each_new_crtc_in_state(state, crtc, crtc_state, i) {16601663 if (drm_atomic_crtc_needs_modeset(crtc_state)) {···16681665 }16691666 }1670166716711671- return ret;16681668+ return 0;16721669}16731670EXPORT_SYMBOL(drm_atomic_check_only);16741671···21702167 struct drm_atomic_state *state;21712168 struct drm_modeset_acquire_ctx ctx;21722169 struct drm_plane *plane;21732173- struct drm_out_fence_state *fence_state = NULL;21702170+ struct drm_out_fence_state *fence_state;21742171 unsigned plane_mask;21752172 int ret = 0;21762176- unsigned int i, j, num_fences = 0;21732173+ unsigned int i, j, num_fences;2177217421782175 /* disallow for drivers not supporting atomic: */21792176 if (!drm_core_check_feature(dev, DRIVER_ATOMIC))···22142211 plane_mask = 0;22152212 copied_objs = 0;22162213 copied_props = 0;22142214+ fence_state = NULL;22152215+ num_fences = 0;2217221622182217 for (i = 0; i < arg->count_objs; i++) {22192218 uint32_t obj_id, count_props;
+3-3
drivers/gpu/drm/drm_gem.c
···255255 struct drm_gem_object *obj = ptr;256256 struct drm_device *dev = obj->dev;257257258258+ if (dev->driver->gem_close_object)259259+ dev->driver->gem_close_object(obj, file_priv);260260+258261 if (drm_core_check_feature(dev, DRIVER_PRIME))259262 drm_gem_remove_prime_handles(obj, file_priv);260263 drm_vma_node_revoke(&obj->vma_node, file_priv);261261-262262- if (dev->driver->gem_close_object)263263- dev->driver->gem_close_object(obj, file_priv);264264265265 drm_gem_object_handle_put_unlocked(obj);266266
+1
drivers/gpu/drm/drm_plane.c
···601601602602 crtc = drm_crtc_find(dev, plane_req->crtc_id);603603 if (!crtc) {604604+ drm_framebuffer_put(fb);604605 DRM_DEBUG_KMS("Unknown crtc ID %d\n",605606 plane_req->crtc_id);606607 return -ENOENT;
···4580458045814581 sseu->slice_mask |= BIT(s);4582458245834583- if (IS_GEN9_BC(dev_priv))45834583+ if (IS_GEN9_BC(dev_priv) || IS_CANNONLAKE(dev_priv))45844584 sseu->subslice_mask =45854585 INTEL_INFO(dev_priv)->sseu.subslice_mask;45864586
+8-7
drivers/gpu/drm/i915/i915_gem_context.c
···688688}689689690690static bool691691-needs_pd_load_pre(struct i915_hw_ppgtt *ppgtt,692692- struct intel_engine_cs *engine,693693- struct i915_gem_context *to)691691+needs_pd_load_pre(struct i915_hw_ppgtt *ppgtt, struct intel_engine_cs *engine)694692{693693+ struct i915_gem_context *from = engine->legacy_active_context;694694+695695 if (!ppgtt)696696 return false;697697698698 /* Always load the ppgtt on first use */699699- if (!engine->legacy_active_context)699699+ if (!from)700700 return true;701701702702 /* Same context without new entries, skip */703703- if (engine->legacy_active_context == to &&703703+ if ((!from->ppgtt || from->ppgtt == ppgtt) &&704704 !(intel_engine_flag(engine) & ppgtt->pd_dirty_rings))705705 return false;706706···744744 if (skip_rcs_switch(ppgtt, engine, to))745745 return 0;746746747747- if (needs_pd_load_pre(ppgtt, engine, to)) {747747+ if (needs_pd_load_pre(ppgtt, engine)) {748748 /* Older GENs and non render rings still want the load first,749749 * "PP_DCLV followed by PP_DIR_BASE register through Load750750 * Register Immediate commands in Ring Buffer before submitting···841841 struct i915_hw_ppgtt *ppgtt =842842 to->ppgtt ?: req->i915->mm.aliasing_ppgtt;843843844844- if (needs_pd_load_pre(ppgtt, engine, to)) {844844+ if (needs_pd_load_pre(ppgtt, engine)) {845845 int ret;846846847847 trace_switch_mm(engine, to);···852852 ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);853853 }854854855855+ engine->legacy_active_context = to;855856 return 0;856857 }857858
+4
drivers/gpu/drm/i915/i915_gem_render_state.c
···242242 goto err_unpin;243243 }244244245245+ ret = req->engine->emit_flush(req, EMIT_INVALIDATE);246246+ if (ret)247247+ goto err_unpin;248248+245249 ret = req->engine->emit_bb_start(req,246250 so->batch_offset, so->batch_size,247251 I915_DISPATCH_SECURE);
+9-6
drivers/gpu/drm/i915/intel_bios.c
···11201120 bool is_dvi, is_hdmi, is_dp, is_edp, is_crt;11211121 uint8_t aux_channel, ddc_pin;11221122 /* Each DDI port can have more than one value on the "DVO Port" field,11231123- * so look for all the possible values for each port and abort if more11241124- * than one is found. */11231123+ * so look for all the possible values for each port.11241124+ */11251125 int dvo_ports[][3] = {11261126 {DVO_PORT_HDMIA, DVO_PORT_DPA, -1},11271127 {DVO_PORT_HDMIB, DVO_PORT_DPB, -1},···11301130 {DVO_PORT_CRT, DVO_PORT_HDMIE, DVO_PORT_DPE},11311131 };1132113211331133- /* Find the child device to use, abort if more than one found. */11331133+ /*11341134+ * Find the first child device to reference the port, report if more11351135+ * than one found.11361136+ */11341137 for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {11351138 it = dev_priv->vbt.child_dev + i;11361139···1143114011441141 if (it->common.dvo_port == dvo_ports[port][j]) {11451142 if (child) {11461146- DRM_DEBUG_KMS("More than one child device for port %c in VBT.\n",11431143+ DRM_DEBUG_KMS("More than one child device for port %c in VBT, using the first.\n",11471144 port_name(port));11481148- return;11451145+ } else {11461146+ child = it;11491147 }11501150- child = it;11511148 }11521149 }11531150 }
+1-1
drivers/gpu/drm/i915/intel_ddi.c
···17621762 if (dev_priv->vbt.edp.low_vswing) {17631763 if (voltage == VOLTAGE_INFO_0_85V) {17641764 *n_entries = ARRAY_SIZE(cnl_ddi_translations_edp_0_85V);17651765- return cnl_ddi_translations_dp_0_85V;17651765+ return cnl_ddi_translations_edp_0_85V;17661766 } else if (voltage == VOLTAGE_INFO_0_95V) {17671767 *n_entries = ARRAY_SIZE(cnl_ddi_translations_edp_0_95V);17681768 return cnl_ddi_translations_edp_0_95V;
+7
drivers/gpu/drm/i915/intel_display.c
···34853485 !gpu_reset_clobbers_display(dev_priv))34863486 return;3487348734883488+ /* We have a modeset vs reset deadlock, defensively unbreak it.34893489+ *34903490+ * FIXME: We can do a _lot_ better, this is just a first iteration.34913491+ */34923492+ i915_gem_set_wedged(dev_priv);34933493+ DRM_DEBUG_DRIVER("Wedging GPU to avoid deadlocks with pending modeset updates\n");34943494+34883495 /*34893496 * Need mode_config.mutex so that we don't34903497 * trample ongoing ->detect() and whatnot.
+1-1
drivers/gpu/drm/i915/intel_dsi_dcs_backlight.c
···4646 struct intel_encoder *encoder = connector->encoder;4747 struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);4848 struct mipi_dsi_device *dsi_device;4949- u8 data;4949+ u8 data = 0;5050 enum port port;51515252 /* FIXME: Need to take care of 16 bit brightness level */
+1-1
drivers/gpu/drm/i915/intel_dsi_vbt.c
···306306307307 if (!gpio_desc) {308308 gpio_desc = devm_gpiod_get_index(dev_priv->drm.dev,309309- "panel", gpio_index,309309+ NULL, gpio_index,310310 value ? GPIOD_OUT_LOW :311311 GPIOD_OUT_HIGH);312312
+22-1
drivers/gpu/drm/i915/intel_lrc.c
···12211221 return ret;12221222}1223122312241224+static u8 gtiir[] = {12251225+ [RCS] = 0,12261226+ [BCS] = 0,12271227+ [VCS] = 1,12281228+ [VCS2] = 1,12291229+ [VECS] = 3,12301230+};12311231+12241232static int gen8_init_common_ring(struct intel_engine_cs *engine)12251233{12261234 struct drm_i915_private *dev_priv = engine->i915;···1253124512541246 DRM_DEBUG_DRIVER("Execlists enabled for %s\n", engine->name);1255124712561256- /* After a GPU reset, we may have requests to replay */12481248+ GEM_BUG_ON(engine->id >= ARRAY_SIZE(gtiir));12491249+12501250+ /*12511251+ * Clear any pending interrupt state.12521252+ *12531253+ * We do it twice out of paranoia that some of the IIR are double12541254+ * buffered, and if we only reset it once there may still be12551255+ * an interrupt pending.12561256+ */12571257+ I915_WRITE(GEN8_GT_IIR(gtiir[engine->id]),12581258+ GT_CONTEXT_SWITCH_INTERRUPT << engine->irq_shift);12591259+ I915_WRITE(GEN8_GT_IIR(gtiir[engine->id]),12601260+ GT_CONTEXT_SWITCH_INTERRUPT << engine->irq_shift);12571261 clear_bit(ENGINE_IRQ_EXECLIST, &engine->irq_posted);1258126212631263+ /* After a GPU reset, we may have requests to replay */12591264 submit = false;12601265 for (n = 0; n < ARRAY_SIZE(engine->execlist_port); n++) {12611266 if (!port_isset(&port[n]))
···210210 struct drm_device *dev = intel_dig_port->base.base.dev;211211 struct drm_i915_private *dev_priv = to_i915(dev);212212213213- if (!IS_GEN9(dev_priv)) {214214- DRM_ERROR("LSPCON is supported on GEN9 only\n");213213+ if (!HAS_LSPCON(dev_priv)) {214214+ DRM_ERROR("LSPCON is not supported on this platform\n");215215 return false;216216 }217217
+2-4
drivers/gpu/drm/imx/ipuv3-plane.c
···545545 return;546546 }547547548548+ ics = ipu_drm_fourcc_to_colorspace(fb->format->format);548549 switch (ipu_plane->dp_flow) {549550 case IPU_DP_FLOW_SYNC_BG:550550- ipu_dp_setup_channel(ipu_plane->dp,551551- IPUV3_COLORSPACE_RGB,552552- IPUV3_COLORSPACE_RGB);551551+ ipu_dp_setup_channel(ipu_plane->dp, ics, IPUV3_COLORSPACE_RGB);553552 ipu_dp_set_global_alpha(ipu_plane->dp, true, 0, true);554553 break;555554 case IPU_DP_FLOW_SYNC_FG:556556- ics = ipu_drm_fourcc_to_colorspace(state->fb->format->format);557555 ipu_dp_setup_channel(ipu_plane->dp, ics,558556 IPUV3_COLORSPACE_UNKNOWN);559557 /* Enable local alpha on partial plane */
···11config IMX_IPUV3_CORE22 tristate "IPUv3 core support"33 depends on SOC_IMX5 || SOC_IMX6Q || ARCH_MULTIPLATFORM44+ depends on DRM || !DRM # if DRM=m, this can't be 'y'45 select GENERIC_IRQ_CHIP56 help67 Choose this if you have a i.MX5/6 system and want to use the Image
+3-2
drivers/i2c/busses/i2c-aspeed.c
···410410 }411411412412 /* We are in an invalid state; reset bus to a known state. */413413- if (!bus->msgs && bus->master_state != ASPEED_I2C_MASTER_STOP) {413413+ if (!bus->msgs) {414414 dev_err(bus->dev, "bus in unknown state");415415 bus->cmd_err = -EIO;416416- aspeed_i2c_do_stop(bus);416416+ if (bus->master_state != ASPEED_I2C_MASTER_STOP)417417+ aspeed_i2c_do_stop(bus);417418 goto out_no_complete;418419 }419420 msg = &bus->msgs[bus->msgs_index];
···353353 }354354355355 /*356356- * An I2C ID table is not mandatory, if and only if, a suitable Device357357- * Tree match table entry is supplied for the probing device.356356+ * An I2C ID table is not mandatory, if and only if, a suitable OF357357+ * or ACPI ID table is supplied for the probing device.358358 */359359 if (!driver->id_table &&360360 !i2c_acpi_match_device(dev->driver->acpi_match_table, client) &&
+1-1
drivers/iio/adc/ina2xx-adc.c
···666666{667667 struct iio_dev *indio_dev = data;668668 struct ina2xx_chip_info *chip = iio_priv(indio_dev);669669- unsigned int sampling_us = SAMPLING_PERIOD(chip);669669+ int sampling_us = SAMPLING_PERIOD(chip);670670 int buffer_us;671671672672 /*
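The one-liner above turns sampling_us from unsigned int into int: any subtraction involving it must be able to go negative rather than wrap. A minimal demonstration of the difference (the function names are ours, not from the driver):

```c
#include <assert.h>

/* With unsigned operands, C's usual arithmetic conversions make the
 * difference wrap around to a huge positive value... */
static long long remaining_unsigned(unsigned int sampling_us,
				    unsigned int elapsed_us)
{
	return (long long)(sampling_us - elapsed_us);
}

/* ...whereas signed operands yield the expected negative result. */
static long long remaining_signed(int sampling_us, int elapsed_us)
{
	return sampling_us - elapsed_us;
}
```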
+5-5
drivers/iio/adc/stm32-adc-core.c
···6464#define STM32H7_CKMODE_MASK GENMASK(17, 16)65656666/* STM32 H7 maximum analog clock rate (from datasheet) */6767-#define STM32H7_ADC_MAX_CLK_RATE 720000006767+#define STM32H7_ADC_MAX_CLK_RATE 3600000068686969/**7070 * stm32_adc_common_regs - stm32 common registers, compatible dependent data···148148 return -EINVAL;149149 }150150151151- priv->common.rate = rate;151151+ priv->common.rate = rate / stm32f4_pclk_div[i];152152 val = readl_relaxed(priv->common.base + STM32F4_ADC_CCR);153153 val &= ~STM32F4_ADC_ADCPRE_MASK;154154 val |= i << STM32F4_ADC_ADCPRE_SHIFT;155155 writel_relaxed(val, priv->common.base + STM32F4_ADC_CCR);156156157157 dev_dbg(&pdev->dev, "Using analog clock source at %ld kHz\n",158158- rate / (stm32f4_pclk_div[i] * 1000));158158+ priv->common.rate / 1000);159159160160 return 0;161161}···250250251251out:252252 /* rate used later by each ADC instance to control BOOST mode */253253- priv->common.rate = rate;253253+ priv->common.rate = rate / div;254254255255 /* Set common clock mode and prescaler */256256 val = readl_relaxed(priv->common.base + STM32H7_ADC_CCR);···260260 writel_relaxed(val, priv->common.base + STM32H7_ADC_CCR);261261262262 dev_dbg(&pdev->dev, "Using %s clock/%d source at %ld kHz\n",263263- ckmode ? "bus" : "adc", div, rate / (div * 1000));263263+ ckmode ? "bus" : "adc", div, priv->common.rate / 1000);264264265265 return 0;266266}
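Both hunks above store the divided clock rate (input rate / prescaler) in priv->common.rate rather than the raw input rate, and the prescaler is the smallest divider that brings the rate under the hardware maximum. A sketch of that selection under stated assumptions (the divider table here is illustrative, not the STM32 CCR register encoding):

```c
#include <assert.h>
#include <stddef.h>

/* illustrative prescaler table; real parts encode these in CCR fields */
static const unsigned int dividers[] = { 1, 2, 4, 6, 8, 12, 16, 32, 64, 128, 256 };

/* Return the effective ADC clock: the input rate divided by the first
 * prescaler that satisfies the max-rate constraint, or 0 if none does. */
static unsigned long pick_adc_rate(unsigned long in_rate, unsigned long max_rate)
{
	for (size_t i = 0; i < sizeof(dividers) / sizeof(dividers[0]); i++)
		if (in_rate / dividers[i] <= max_rate)
			return in_rate / dividers[i];
	return 0;
}
```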
···10851085 bool is_ib = (mlx5_ib_port_link_layer(ibdev, port) ==10861086 IB_LINK_LAYER_INFINIBAND);1087108710881088+ /* CM layer calls ib_modify_port() regardless of the link layer. For10891089+ * Ethernet ports, qkey violation and Port capabilities are meaningless.10901090+ */10911091+ if (!is_ib)10921092+ return 0;10931093+10881094 if (MLX5_CAP_GEN(dev->mdev, ib_virt) && is_ib) {10891095 change_mask = props->clr_port_cap_mask | props->set_port_cap_mask;10901096 value = ~props->clr_port_cap_mask | props->set_port_cap_mask;
···265265 if (ps2_command(&psmouse->ps2dev, param, MAKE_PS2_CMD(0, 2, TP_READ_ID)))266266 return -1;267267268268- if (param[0] != TP_MAGIC_IDENT)268268+ /* add new TP ID. */269269+ if (!(param[0] & TP_MAGIC_IDENT))269270 return -1;270271271272 if (firmware_id)···381380 return 0;382381383382 if (trackpoint_read(ps2dev, TP_EXT_BTN, &button_info)) {384384- psmouse_warn(psmouse, "failed to get extended button data\n");385385- button_info = 0;383383+ psmouse_warn(psmouse, "failed to get extended button data, assuming 3 buttons\n");384384+ button_info = 0x33;386385 }387386388387 psmouse->private = kzalloc(sizeof(struct trackpoint_data), GFP_KERNEL);
+2-1
drivers/input/mouse/trackpoint.h
···2121#define TP_COMMAND 0xE2 /* Commands start with this */22222323#define TP_READ_ID 0xE1 /* Sent for device identification */2424-#define TP_MAGIC_IDENT 0x01 /* Sent after a TP_READ_ID followed */2424+#define TP_MAGIC_IDENT 0x03 /* Sent after a TP_READ_ID followed */2525 /* by the firmware ID */2626+ /* Firmware ID includes 0x1, 0x2, 0x3 */262727282829/*
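The header change above relaxes the identity check from an exact match on 0x01 to a bitmask test, so firmware IDs 0x1, 0x2 and 0x3 all pass (the trackpoint.c hunk earlier switches `!=` to `!(... & ...)` accordingly). A minimal side-by-side of the two checks (helper names are ours):

```c
#include <assert.h>

#define TP_MAGIC_IDENT_OLD 0x01 /* previous exact-match value */
#define TP_MAGIC_IDENT_NEW 0x03 /* new mask covering IDs 0x1..0x3 */

static int tp_id_ok_old(unsigned char id)
{
	return id == TP_MAGIC_IDENT_OLD;
}

static int tp_id_ok_new(unsigned char id)
{
	return (id & TP_MAGIC_IDENT_NEW) != 0;
}
```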
···189189190190 ct->chip.irq_suspend = brcmstb_l2_intc_suspend;191191 ct->chip.irq_resume = brcmstb_l2_intc_resume;192192+ ct->chip.irq_pm_shutdown = brcmstb_l2_intc_suspend;192193193194 if (data->can_wake) {194195 /* This IRQ chip can wake the system, set all child interrupts
···1835183518361836#define ACPI_GICV3_ITS_MEM_SIZE (SZ_128K)1837183718381838-#if defined(CONFIG_ACPI_NUMA) && (ACPI_CA_VERSION >= 0x20170531)18381838+#ifdef CONFIG_ACPI_NUMA18391839struct its_srat_map {18401840 /* numa node id */18411841 u32 numa_node;···18431843 u32 its_id;18441844};1845184518461846-static struct its_srat_map its_srat_maps[MAX_NUMNODES] __initdata;18461846+static struct its_srat_map *its_srat_maps __initdata;18471847static int its_in_srat __initdata;1848184818491849static int __init acpi_get_its_numa_node(u32 its_id)···18551855 return its_srat_maps[i].numa_node;18561856 }18571857 return NUMA_NO_NODE;18581858+}18591859+18601860+static int __init gic_acpi_match_srat_its(struct acpi_subtable_header *header,18611861+ const unsigned long end)18621862+{18631863+ return 0;18581864}1859186518601866static int __init gic_acpi_parse_srat_its(struct acpi_subtable_header *header,···18761870 if (its_affinity->header.length < sizeof(*its_affinity)) {18771871 pr_err("SRAT: Invalid header length %d in ITS affinity\n",18781872 its_affinity->header.length);18791879- return -EINVAL;18801880- }18811881-18821882- if (its_in_srat >= MAX_NUMNODES) {18831883- pr_err("SRAT: ITS affinity exceeding max count[%d]\n",18841884- MAX_NUMNODES);18851873 return -EINVAL;18861874 }18871875···1897189718981898static void __init acpi_table_parse_srat_its(void)18991899{19001900+ int count;19011901+19021902+ count = acpi_table_parse_entries(ACPI_SIG_SRAT,19031903+ sizeof(struct acpi_table_srat),19041904+ ACPI_SRAT_TYPE_GIC_ITS_AFFINITY,19051905+ gic_acpi_match_srat_its, 0);19061906+ if (count <= 0)19071907+ return;19081908+19091909+ its_srat_maps = kmalloc(count * sizeof(struct its_srat_map),19101910+ GFP_KERNEL);19111911+ if (!its_srat_maps) {19121912+ pr_warn("SRAT: Failed to allocate memory for its_srat_maps!\n");19131913+ return;19141914+ }19151915+19001916 acpi_table_parse_entries(ACPI_SIG_SRAT,19011917 sizeof(struct acpi_table_srat),19021918 ACPI_SRAT_TYPE_GIC_ITS_AFFINITY,19031919 gic_acpi_parse_srat_its, 0);19041920}19211921+19221922+/* free the its_srat_maps after ITS probing */19231923+static void __init acpi_its_srat_maps_free(void)19241924+{19251925+ kfree(its_srat_maps);19261926+}19051927#else19061928static void __init acpi_table_parse_srat_its(void) { }19071929static int __init acpi_get_its_numa_node(u32 its_id) { return NUMA_NO_NODE; }19301930+static void __init acpi_its_srat_maps_free(void) { }19081931#endif1909193219101933static int __init gic_acpi_parse_madt_its(struct acpi_subtable_header *header,···19741951 acpi_table_parse_srat_its();19751952 acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_TRANSLATOR,19761953 gic_acpi_parse_madt_its, 0);19541954+ acpi_its_srat_maps_free();19771955}19781956#else19791957static void __init its_acpi_probe(void) { }
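The hunk above replaces a fixed `MAX_NUMNODES` array with a two-pass parse: a no-op match callback first counts the SRAT ITS affinity entries, and the map is then sized from that count. A userspace sketch of the same pattern (struct fields and helper names are stand-ins, with malloc in place of kmalloc):

```c
#include <stdlib.h>

struct its_srat_map { unsigned int numa_node; unsigned int its_id; };

/* Stand-in for one SRAT ITS affinity record. */
struct its_affinity { unsigned int proximity_domain; unsigned int its_id; };

/* First pass: count entries (mirrors gic_acpi_match_srat_its returning 0). */
static int count_its_entries(const struct its_affinity *tbl, int n)
{
	(void)tbl;
	return n; /* every record matches in this sketch */
}

/* Second pass: size the map from the count instead of MAX_NUMNODES. */
struct its_srat_map *build_srat_maps(const struct its_affinity *tbl, int n,
				     int *out_count)
{
	int count = count_its_entries(tbl, n);
	struct its_srat_map *maps;
	int i;

	if (count <= 0)
		return NULL;
	maps = malloc(count * sizeof(*maps));
	if (!maps)
		return NULL;
	for (i = 0; i < count; i++) {
		maps[i].numa_node = tbl[i].proximity_domain;
		maps[i].its_id = tbl[i].its_id;
	}
	*out_count = count;
	return maps;
}
```

The count-then-allocate shape is what removes the old "ITS affinity exceeding max count" failure mode: the map can never be too small for the table that was just counted.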
+13-3
drivers/irqchip/irq-gic-v3.c
···353353354354 if (static_key_true(&supports_deactivate))355355 gic_write_eoir(irqnr);356356+ else357357+ isb();356358357359 err = handle_domain_irq(gic_data.domain, irqnr, regs);358360 if (err) {···642640static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,643641 bool force)644642{645645- unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask);643643+ unsigned int cpu;646644 void __iomem *reg;647645 int enabled;648646 u64 val;647647+648648+ if (force)649649+ cpu = cpumask_first(mask_val);650650+ else651651+ cpu = cpumask_any_and(mask_val, cpu_online_mask);649652650653 if (cpu >= nr_cpu_ids)651654 return -EINVAL;···838831 if (ret)839832 return ret;840833841841- for (i = 0; i < nr_irqs; i++)842842- gic_irq_domain_map(domain, virq + i, hwirq + i);834834+ for (i = 0; i < nr_irqs; i++) {835835+ ret = gic_irq_domain_map(domain, virq + i, hwirq + i);836836+ if (ret)837837+ return ret;838838+ }843839844840 return 0;845841}
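The gic_set_affinity() hunk changes the force path to take the first CPU of the requested mask without intersecting it with the online mask. Modeling cpumasks as plain bitmaps (mask_first() and the -EINVAL fallback below are our stand-ins, not kernel API), the selection logic looks like:

```c
/* Find the first set bit; returns width if none (like >= nr_cpu_ids). */
static int mask_first(unsigned long mask, int width)
{
	int i;

	for (i = 0; i < width; i++)
		if (mask & (1UL << i))
			return i;
	return width;
}

/*
 * With force set, pick the first requested CPU even if it is offline;
 * otherwise pick any requested CPU that is also online.
 */
int pick_target_cpu(unsigned long requested, unsigned long online,
		    int width, int force)
{
	int cpu = force ? mask_first(requested, width)
			: mask_first(requested & online, width);

	return (cpu >= width) ? -22 /* -EINVAL */ : cpu;
}
```

Without the force special case, a forced affinity request targeting a not-yet-online CPU would fail the online-mask intersection and be rejected.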
+10-4
drivers/irqchip/irq-gic.c
···361361 if (likely(irqnr > 15 && irqnr < 1020)) {362362 if (static_key_true(&supports_deactivate))363363 writel_relaxed(irqstat, cpu_base + GIC_CPU_EOI);364364+ isb();364365 handle_domain_irq(gic->domain, irqnr, regs);365366 continue;366367 }···402401 goto out;403402404403 cascade_irq = irq_find_mapping(chip_data->domain, gic_irq);405405- if (unlikely(gic_irq < 32 || gic_irq > 1020))404404+ if (unlikely(gic_irq < 32 || gic_irq > 1020)) {406405 handle_bad_irq(desc);407407- else406406+ } else {407407+ isb();408408 generic_handle_irq(cascade_irq);409409+ }409410410411 out:411412 chained_irq_exit(chip, desc);···10301027 if (ret)10311028 return ret;1032102910331033- for (i = 0; i < nr_irqs; i++)10341034- gic_irq_domain_map(domain, virq + i, hwirq + i);10301030+ for (i = 0; i < nr_irqs; i++) {10311031+ ret = gic_irq_domain_map(domain, virq + i, hwirq + i);10321032+ if (ret)10331033+ return ret;10341034+ }1035103510361036 return 0;10371037}
+4-1
drivers/isdn/mISDN/fsm.c
···26262727#define FSM_TIMER_DEBUG 028282929-void2929+int3030mISDN_FsmNew(struct Fsm *fsm,3131 struct FsmNode *fnlist, int fncount)3232{···34343535 fsm->jumpmatrix = kzalloc(sizeof(FSMFNPTR) * fsm->state_count *3636 fsm->event_count, GFP_KERNEL);3737+ if (fsm->jumpmatrix == NULL)3838+ return -ENOMEM;37393840 for (i = 0; i < fncount; i++)3941 if ((fnlist[i].state >= fsm->state_count) ||···4745 } else4846 fsm->jumpmatrix[fsm->state_count * fnlist[i].event +4947 fnlist[i].state] = (FSMFNPTR) fnlist[i].routine;4848+ return 0;5049}5150EXPORT_SYMBOL(mISDN_FsmNew);5251
···79967996 if (mddev->safemode == 1)79977997 mddev->safemode = 0;79987998 /* sync_checkers is always 0 when writes_pending is in per-cpu mode */79997999- if (mddev->in_sync || !mddev->sync_checkers) {79997999+ if (mddev->in_sync || mddev->sync_checkers) {80008000 spin_lock(&mddev->lock);80018001 if (mddev->in_sync) {80028002 mddev->in_sync = 0;···8655865586568656 if (mddev_trylock(mddev)) {86578657 int spares = 0;86588658+86598659+ if (!mddev->external && mddev->safemode == 1)86608660+ mddev->safemode = 0;8658866186598662 if (mddev->ro) {86608663 struct md_rdev *rdev;
+46-15
drivers/md/raid5-cache.c
···236236 bool need_split_bio;237237 struct bio *split_bio;238238239239- unsigned int has_flush:1; /* include flush request */240240- unsigned int has_fua:1; /* include fua request */241241- unsigned int has_null_flush:1; /* include empty flush request */239239+ unsigned int has_flush:1; /* include flush request */240240+ unsigned int has_fua:1; /* include fua request */241241+ unsigned int has_null_flush:1; /* include null flush request */242242+ unsigned int has_flush_payload:1; /* include flush payload */242243 /*243244 * io isn't sent yet, flush/fua request can only be submitted till it's244245 * the first IO in running_ios list···572571 struct r5l_io_unit *io_deferred;573572 struct r5l_log *log = io->log;574573 unsigned long flags;574574+ bool has_null_flush;575575+ bool has_flush_payload;575576576577 if (bio->bi_status)577578 md_error(log->rdev->mddev, log->rdev);···583580584581 spin_lock_irqsave(&log->io_list_lock, flags);585582 __r5l_set_io_unit_state(io, IO_UNIT_IO_END);583583+584584+ /*585585+ * if the io has null_flush or flush payload,586586+ * it is not safe to access it after releasing io_list_lock.587587+ * Therefore, it is necessary to check the condition with588588+ * the lock held.589589+ */590590+ has_null_flush = io->has_null_flush;591591+ has_flush_payload = io->has_flush_payload;592592+586593 if (log->need_cache_flush && !list_empty(&io->stripe_list))587594 r5l_move_to_end_ios(log);588595 else···613600 if (log->need_cache_flush)614601 md_wakeup_thread(log->rdev->mddev->thread);615602616616- if (io->has_null_flush) {603603+ /* finish flush only io_unit and PAYLOAD_FLUSH only io_unit */604604+ if (has_null_flush) {617605 struct bio *bi;618606619607 WARN_ON(bio_list_empty(&io->flush_barriers));620608 while ((bi = bio_list_pop(&io->flush_barriers)) != NULL) {621609 bio_endio(bi);622622- atomic_dec(&io->pending_stripe);610610+ if (atomic_dec_and_test(&io->pending_stripe)) {611611+ __r5l_stripe_write_finished(io);612612+ return;613613+ }623614 }624615 }625625-626626- /* finish flush only io_unit and PAYLOAD_FLUSH only io_unit */627627- if (atomic_read(&io->pending_stripe) == 0)628628- __r5l_stripe_write_finished(io);616616+ /* decrease pending_stripe for flush payload */617617+ if (has_flush_payload)618618+ if (atomic_dec_and_test(&io->pending_stripe))619619+ __r5l_stripe_write_finished(io);629620}630621631622static void r5l_do_submit_io(struct r5l_log *log, struct r5l_io_unit *io)···898881 payload->size = cpu_to_le32(sizeof(__le64));899882 payload->flush_stripes[0] = cpu_to_le64(sect);900883 io->meta_offset += meta_size;884884+ /* multiple flush payloads count as one pending_stripe */885885+ if (!io->has_flush_payload) {886886+ io->has_flush_payload = 1;887887+ atomic_inc(&io->pending_stripe);888888+ }901889 mutex_unlock(&log->io_mutex);902890}903891···25622540 */25632541int r5c_journal_mode_set(struct mddev *mddev, int mode)25642542{25652565- struct r5conf *conf = mddev->private;25662566- struct r5l_log *log = conf->log;25672567-25682568- if (!log)25692569- return -ENODEV;25432543+ struct r5conf *conf;25442544+ int err;2570254525712546 if (mode < R5C_JOURNAL_MODE_WRITE_THROUGH ||25722547 mode > R5C_JOURNAL_MODE_WRITE_BACK)25732548 return -EINVAL;2574254925502550+ err = mddev_lock(mddev);25512551+ if (err)25522552+ return err;25532553+ conf = mddev->private;25542554+ if (!conf || !conf->log) {25552555+ mddev_unlock(mddev);25562556+ return -ENODEV;25572557+ }25582558+25752559 if (raid5_calc_degraded(conf) > 0 &&25762576- mode == R5C_JOURNAL_MODE_WRITE_BACK)25602560+ mode == R5C_JOURNAL_MODE_WRITE_BACK) {25612561+ mddev_unlock(mddev);25772562 return -EINVAL;25632563+ }2578256425792565 mddev_suspend(mddev);25802566 conf->log->r5c_journal_mode = mode;25812567 mddev_resume(mddev);25682568+ mddev_unlock(mddev);2582256925832570 pr_debug("md/raid:%s: setting r5c cache mode to %d: %s\n",25842571 mdname(mddev), mode, r5c_journal_mode_str[mode]);
···206206 * parameter207207 *208208 * This function encodes the @ncycles value as described in the datasheet209209- * (section "SMC Pulse Register"), and then stores the result in the209209+ * (section "SMC Cycle Register"), and then stores the result in the210210 * @conf->setup field at @shift position.211211 *212212 * Returns -EINVAL if @shift is invalid, -ERANGE if @ncycles does not fit in
···13711371 R1_CC_ERROR | /* Card controller error */ \13721372 R1_ERROR) /* General/unknown error */1373137313741374-static bool mmc_blk_has_cmd_err(struct mmc_command *cmd)13741374+static void mmc_blk_eval_resp_error(struct mmc_blk_request *brq)13751375{13761376- if (!cmd->error && cmd->resp[0] & CMD_ERRORS)13771377- cmd->error = -EIO;13761376+ u32 val;1378137713791379- return cmd->error;13781378+ /*13791379+ * Per the SD specification (physical layer version 4.10)[1],13801380+ * section 4.3.3, it explicitly states that "When the last13811381+ * block of user area is read using CMD18, the host should13821382+ * ignore OUT_OF_RANGE error that may occur even the sequence13831383+ * is correct". And JESD84-B51 for eMMC also has a similar13841384+ * statement in section 6.8.3.13851385+ *13861386+ * Multiple block read/write could be done by either predefined13871387+ * method, namely CMD23, or open-ending mode. For open-ending mode,13881388+ * we should ignore the OUT_OF_RANGE error as it's normal behaviour.13891389+ *13901390+ * However the spec[1] doesn't tell us whether we should also13911391+ * ignore that for predefined method. But per the spec[1], section13921392+ * 4.15 Set Block Count Command, it says "If illegal block count13931393+ * is set, out of range error will be indicated during read/write13941394+ * operation (For example, data transfer is stopped at user area13951395+ * boundary)." In other words, we can expect an out of range error13961396+ * in the response for the following CMD18/25. And if argument of13971397+ * CMD23 + the argument of CMD18/25 exceed the max number of blocks,13981398+ * we could also expect to get a -ETIMEDOUT or any error number from13991399+ * the host drivers due to missing data response (for write)/data (for14001400+ * read), as the cards will stop the data transfer by itself per the14011401+ * spec. So we only need to check R1_OUT_OF_RANGE for open-ending mode.14021402+ */14031403+14041404+ if (!brq->stop.error) {14051405+ bool oor_with_open_end;14061406+ /* If there is no error yet, check R1 response */14071407+14081408+ val = brq->stop.resp[0] & CMD_ERRORS;14091409+ oor_with_open_end = val & R1_OUT_OF_RANGE && !brq->mrq.sbc;14101410+14111411+ if (val && !oor_with_open_end)14121412+ brq->stop.error = -EIO;14131413+ }13801414}1381141513821416static enum mmc_blk_status mmc_blk_err_check(struct mmc_card *card,···14341400 * stop.error indicates a problem with the stop command. Data14351401 * may have been transferred, or may still be transferring.14361402 */14371437- if (brq->sbc.error || brq->cmd.error || mmc_blk_has_cmd_err(&brq->stop) ||14381438- brq->data.error) {14031403+14041404+ mmc_blk_eval_resp_error(brq);14051405+14061406+ if (brq->sbc.error || brq->cmd.error ||14071407+ brq->stop.error || brq->data.error) {14391408 switch (mmc_blk_cmd_recovery(card, req, brq, &ecc_err, &gen_err)) {14401409 case ERR_RETRY:14411410 return MMC_BLK_RETRY;
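The decision implemented by mmc_blk_eval_resp_error() (treat R1_OUT_OF_RANGE as benign only for open-ended transfers, i.e. when no CMD23/sbc was issued) can be isolated as follows; the bit positions are assumptions taken from the SD specification, not copied from the driver:

```c
#include <stdbool.h>
#include <stdint.h>

#define R1_OUT_OF_RANGE_SK	(1u << 31)	/* assumed per SD spec */
#define R1_ADDRESS_ERROR_SK	(1u << 30)	/* assumed per SD spec */
#define CMD_ERRORS_SK		(R1_OUT_OF_RANGE_SK | R1_ADDRESS_ERROR_SK)

/*
 * Return a nonzero -EIO-style error only when the R1 status carries an
 * error that is not OUT_OF_RANGE on an open-ended transfer.  have_sbc is
 * true when the transfer was predefined with CMD23.
 */
int eval_resp_error(uint32_t resp0, bool have_sbc)
{
	uint32_t val = resp0 & CMD_ERRORS_SK;
	bool oor_with_open_end = (val & R1_OUT_OF_RANGE_SK) && !have_sbc;

	return (val && !oor_with_open_end) ? -5 /* -EIO */ : 0;
}
```

Note how the predefined (CMD23) case deliberately keeps OUT_OF_RANGE fatal, matching the spec reasoning quoted in the comment above.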
+12-1
drivers/mtd/nand/atmel/nand-controller.c
···13641364 ret = atmel_smc_cs_conf_set_timing(smcconf,13651365 ATMEL_HSMC_TIMINGS_TADL_SHIFT,13661366 ncycles);13671367- if (ret)13671367+ /*13681368+ * Version 4 of the ONFI spec mandates that tADL be at least 40013691369+ * nanoseconds, but, depending on the master clock rate, 400 ns may not13701370+ * fit in the tADL field of the SMC reg. We need to relax the check and13711371+ * accept the -ERANGE return code.13721372+ *13731373+ * Note that previous versions of the ONFI spec had a lower tADL_min13741374+ * (100 or 200 ns). It's not clear why this timing constraint got13751375+ * increased but it seems most NANDs are fine with values lower than13761376+ * 400ns, so we should be safe.13771377+ */13781378+ if (ret && ret != -ERANGE)13681379 return ret;1369138013701381 ncycles = DIV_ROUND_UP(conf->timings.sdr.tAR_min, mckperiodps);
···15691569 new_slave->delay = 0;15701570 new_slave->link_failure_count = 0;1571157115721572- if (bond_update_speed_duplex(new_slave))15721572+ if (bond_update_speed_duplex(new_slave) &&15731573+ bond_needs_speed_duplex(bond))15731574 new_slave->link = BOND_LINK_DOWN;1574157515751576 new_slave->last_rx = jiffies -···21412140 continue;2142214121432142 case BOND_LINK_UP:21442144- if (bond_update_speed_duplex(slave)) {21432143+ if (bond_update_speed_duplex(slave) &&21442144+ bond_needs_speed_duplex(bond)) {21452145 slave->link = BOND_LINK_DOWN;21462146- netdev_warn(bond->dev,21472147- "failed to get link speed/duplex for %s\n",21482148- slave->dev->name);21462146+ if (net_ratelimit())21472147+ netdev_warn(bond->dev,21482148+ "failed to get link speed/duplex for %s\n",21492149+ slave->dev->name);21492150 continue;21502151 }21512152 bond_set_slave_link_state(slave, BOND_LINK_UP,
···46544654 dev->name, adap->params.vpd.id, adap->name, buf);46554655}4656465646574657-static void enable_pcie_relaxed_ordering(struct pci_dev *dev)46584658-{46594659- pcie_capability_set_word(dev, PCI_EXP_DEVCTL, PCI_EXP_DEVCTL_RELAX_EN);46604660-}46614661-46624657/*46634658 * Free the following resources:46644659 * - memory used for tables···49034908 }4904490949054910 pci_enable_pcie_error_reporting(pdev);49064906- enable_pcie_relaxed_ordering(pdev);49074911 pci_set_master(pdev);49084912 pci_save_state(pdev);49094913···49404946 adapter->pf = func;49414947 adapter->msg_enable = DFLT_MSG_ENABLE;49424948 memset(adapter->chan_map, 0xff, sizeof(adapter->chan_map));49494949+49504950+ /* If possible, we use PCIe Relaxed Ordering Attribute to deliver49514951+ * Ingress Packet Data to Free List Buffers in order to allow for49524952+ * chipset performance optimizations between the Root Complex and49534953+ * Memory Controllers. (Messages to the associated Ingress Queue49544954+ * notifying new Packet Placement in the Free List Buffers will be49554955+ * sent without the Relaxed Ordering Attribute thus guaranteeing that49564956+ * all preceding PCIe Transaction Layer Packets will be processed49574957+ * first.) But some Root Complexes have various issues with Upstream49584958+ * Transaction Layer Packets with the Relaxed Ordering Attribute set.49594959+ * The PCIe devices under such Root Complexes will have the Relaxed49604960+ * Ordering bit cleared in the configuration space, so we check our49614961+ * PCIe configuration space to see if it's flagged with advice against49624962+ * using Relaxed Ordering.49634963+ */49644964+ if (!pcie_relaxed_ordering_enabled(pdev))49654965+ adapter->flags |= ROOT_NO_RELAXED_ORDERING;4943496649444967 spin_lock_init(&adapter->stats_lock);49454968 spin_lock_init(&adapter->tid_release_lock);
+3-2
drivers/net/ethernet/chelsio/cxgb4/sge.c
···27192719 struct fw_iq_cmd c;27202720 struct sge *s = &adap->sge;27212721 struct port_info *pi = netdev_priv(dev);27222722+ int relaxed = !(adap->flags & ROOT_NO_RELAXED_ORDERING);2722272327232724 /* Size needs to be multiple of 16, including status entry. */27242725 iq->size = roundup(iq->size, 16);···2773277227742773 flsz = fl->size / 8 + s->stat_len / sizeof(struct tx_desc);27752774 c.iqns_to_fl0congen |= htonl(FW_IQ_CMD_FL0PACKEN_F |27762776- FW_IQ_CMD_FL0FETCHRO_F |27772777- FW_IQ_CMD_FL0DATARO_F |27752775+ FW_IQ_CMD_FL0FETCHRO_V(relaxed) |27762776+ FW_IQ_CMD_FL0DATARO_V(relaxed) |27782777 FW_IQ_CMD_FL0PADEN_F);27792778 if (cong >= 0)27802779 c.iqns_to_fl0congen |=
···28882888 */28892889 adapter->name = pci_name(pdev);28902890 adapter->msg_enable = DFLT_MSG_ENABLE;28912891+28922892+ /* If possible, we use PCIe Relaxed Ordering Attribute to deliver28932893+ * Ingress Packet Data to Free List Buffers in order to allow for28942894+ * chipset performance optimizations between the Root Complex and28952895+ * Memory Controllers. (Messages to the associated Ingress Queue28962896+ * notifying new Packet Placement in the Free List Buffers will be28972897+ * sent without the Relaxed Ordering Attribute thus guaranteeing that28982898+ * all preceding PCIe Transaction Layer Packets will be processed28992899+ * first.) But some Root Complexes have various issues with Upstream29002900+ * Transaction Layer Packets with the Relaxed Ordering Attribute set.29012901+ * The PCIe devices under such Root Complexes will have the Relaxed29022902+ * Ordering bit cleared in the configuration space, so we check our29032903+ * PCIe configuration space to see if it's flagged with advice against29042904+ * using Relaxed Ordering.29052905+ */29062906+ if (!pcie_relaxed_ordering_enabled(pdev))29072907+ adapter->flags |= ROOT_NO_RELAXED_ORDERING;29082908+28912909 err = adap_init0(adapter);28922910 if (err)28932911 goto err_unmap_bar;
+3
drivers/net/ethernet/chelsio/cxgb4vf/sge.c
···22052205 struct port_info *pi = netdev_priv(dev);22062206 struct fw_iq_cmd cmd, rpl;22072207 int ret, iqandst, flsz = 0;22082208+ int relaxed = !(adapter->flags & ROOT_NO_RELAXED_ORDERING);2208220922092210 /*22102211 * If we're using MSI interrupts and we're not initializing the···23012300 cpu_to_be32(23022301 FW_IQ_CMD_FL0HOSTFCMODE_V(SGE_HOSTFCMODE_NONE) |23032302 FW_IQ_CMD_FL0PACKEN_F |23032303+ FW_IQ_CMD_FL0FETCHRO_V(relaxed) |23042304+ FW_IQ_CMD_FL0DATARO_V(relaxed) |23042305 FW_IQ_CMD_FL0PADEN_F);2305230623062307 /* In T6, for egress queue type FL there is internal overhead
+2-2
drivers/net/ethernet/mellanox/mlx4/main.c
···432432 /* Virtual PCI function needs to determine UAR page size from433433 * firmware. Only master PCI function can set the uar page size434434 */435435- if (enable_4k_uar)435435+ if (enable_4k_uar || !dev->persist->num_vfs)436436 dev->uar_page_shift = DEFAULT_UAR_PAGE_SHIFT;437437 else438438 dev->uar_page_shift = PAGE_SHIFT;···2277227722782278 dev->caps.max_fmr_maps = (1 << (32 - ilog2(dev->caps.num_mpts))) - 1;2279227922802280- if (enable_4k_uar) {22802280+ if (enable_4k_uar || !dev->persist->num_vfs) {22812281 init_hca.log_uar_sz = ilog2(dev->caps.num_uars) +22822282 PAGE_SHIFT - DEFAULT_UAR_PAGE_SHIFT;22832283 init_hca.uar_page_sz = DEFAULT_UAR_PAGE_SHIFT - 12;
+2-6
drivers/net/ethernet/netronome/nfp/flower/cmsg.c
···115115 return;116116 }117117118118- if (link) {118118+ if (link)119119 netif_carrier_on(netdev);120120- rtnl_lock();121121- dev_set_mtu(netdev, be16_to_cpu(msg->mtu));122122- rtnl_unlock();123123- } else {120120+ else124121 netif_carrier_off(netdev);125125- }126122 rcu_read_unlock();127123}128124
···328328 * @IWL_UCODE_TLV_CAPA_TX_POWER_ACK: reduced TX power API has larger329329 * command size (command version 4) that supports toggling ACK TX330330 * power reduction.331331+ * @IWL_UCODE_TLV_CAPA_MLME_OFFLOAD: supports MLME offload331332 *332333 * @NUM_IWL_UCODE_TLV_CAPA: number of bits used333334 */···374373 IWL_UCODE_TLV_CAPA_EXTEND_SHARED_MEM_CFG = (__force iwl_ucode_tlv_capa_t)80,375374 IWL_UCODE_TLV_CAPA_LQM_SUPPORT = (__force iwl_ucode_tlv_capa_t)81,376375 IWL_UCODE_TLV_CAPA_TX_POWER_ACK = (__force iwl_ucode_tlv_capa_t)84,376376+ IWL_UCODE_TLV_CAPA_MLME_OFFLOAD = (__force iwl_ucode_tlv_capa_t)96,377377378378 NUM_IWL_UCODE_TLV_CAPA379379#ifdef __CHECKER__
+4-4
drivers/net/wireless/intel/iwlwifi/iwl-config.h
···276276 * @fw_name_pre: Firmware filename prefix. The api version and extension277277 * (.ucode) will be added to filename before loading from disk. The278278 * filename is constructed as fw_name_pre<api>.ucode.279279- * @fw_name_pre_next_step: same as @fw_name_pre, only for next step279279+ * @fw_name_pre_b_or_c_step: same as @fw_name_pre, only for b or c steps280280 * (if supported)281281- * @fw_name_pre_rf_next_step: same as @fw_name_pre_next_step, only for rf next282282- * step. Supported only in integrated solutions.281281+ * @fw_name_pre_rf_next_step: same as @fw_name_pre_b_or_c_step, only for rf282282+ * next step. Supported only in integrated solutions.283283 * @ucode_api_max: Highest version of uCode API supported by driver.284284 * @ucode_api_min: Lowest version of uCode API supported by driver.285285 * @max_inst_size: The maximal length of the fw inst section···330330 /* params specific to an individual device within a device family */331331 const char *name;332332 const char *fw_name_pre;333333- const char *fw_name_pre_next_step;333333+ const char *fw_name_pre_b_or_c_step;334334 const char *fw_name_pre_rf_next_step;335335 /* params not likely to change within a device family */336336 const struct iwl_base_params *base_params;
···25972597 spin_lock_bh(&mvm_sta->lock);25982598 for (i = 0; i <= IWL_MAX_TID_COUNT; i++) {25992599 tid_data = &mvm_sta->tid_data[i];26002600- while ((skb = __skb_dequeue(&tid_data->deferred_tx_frames)))26002600+26012601+ while ((skb = __skb_dequeue(&tid_data->deferred_tx_frames))) {26022602+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);26032603+26042604+ /*26052605+ * The first deferred frame should've stopped the MAC26062606+ * queues, so we should never get a second deferred26072607+ * frame for the RA/TID.26082608+ */26092609+ iwl_mvm_start_mac_queues(mvm, info->hw_queue);26012610 ieee80211_free_txskb(mvm->hw, skb);26112611+ }26022612 }26032613 spin_unlock_bh(&mvm_sta->lock);26042614}
···185185 else186186 udp_hdr(skb)->check = 0;187187188188- /* mac header len should include IV, size is in words */189189- if (info->control.hw_key)188188+ /*189189+ * mac header len should include IV, size is in words unless190190+ * the IV is added by the firmware like in WEP.191191+ * In new Tx API, the IV is always added by the firmware.192192+ */193193+ if (!iwl_mvm_has_new_tx_api(mvm) && info->control.hw_key &&194194+ info->control.hw_key->cipher != WLAN_CIPHER_SUITE_WEP40 &&195195+ info->control.hw_key->cipher != WLAN_CIPHER_SUITE_WEP104)190196 mh_len += info->control.hw_key->iv_len;191197 mh_len /= 2;192198 offload_assist |= mh_len << TX_CMD_OFFLD_MH_SIZE;···18201814 struct iwl_mvm_ba_notif *ba_notif;18211815 struct iwl_mvm_tid_data *tid_data;18221816 struct iwl_mvm_sta *mvmsta;18171817+18181818+ ba_info.flags = IEEE80211_TX_STAT_AMPDU;1823181918241820 if (iwl_mvm_has_new_tx_api(mvm)) {18251821 struct iwl_mvm_compressed_ba_notif *ba_res =
···924924 ntb_free_mw(nt, i);925925926926 /* if there's an actual failure, we should just bail */927927- if (rc < 0) {928928- ntb_link_disable(ndev);927927+ if (rc < 0)929928 return;930930- }931929932930out:933931 if (ntb_link_is_up(ndev, NULL, NULL) == 1)···10571059 int node;10581060 int rc, i;1059106110601060- mw_count = ntb_mw_count(ndev, PIDX);10621062+ mw_count = ntb_peer_mw_count(ndev);1061106310621064 if (!ndev->ops->mw_set_trans) {10631065 dev_err(&ndev->dev, "Inbound MW based NTB API is required\n");
+1-1
drivers/ntb/test/ntb_tool.c
···959959 tc->ntb = ntb;960960 init_waitqueue_head(&tc->link_wq);961961962962- tc->mw_count = min(ntb_mw_count(tc->ntb, PIDX), MAX_MWS);962962+ tc->mw_count = min(ntb_peer_mw_count(tc->ntb), MAX_MWS);963963 for (i = 0; i < tc->mw_count; i++) {964964 rc = tool_init_mw(tc, i);965965 if (rc)
+2-1
drivers/nvme/host/fabrics.c
···794794 int i;795795796796 for (i = 0; i < ARRAY_SIZE(opt_tokens); i++) {797797- if (opt_tokens[i].token & ~allowed_opts) {797797+ if ((opt_tokens[i].token & opts->mask) &&798798+ (opt_tokens[i].token & ~allowed_opts)) {798799 pr_warn("invalid parameter '%s'\n",799800 opt_tokens[i].pattern);800801 }
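With this fix, a token only triggers the "invalid parameter" warning when it is both present in the parsed option mask and outside the allowed set. A small sketch of that double bit test (the OPT_* values are invented for illustration):

```c
#define OPT_A (1 << 0)
#define OPT_B (1 << 1)
#define OPT_C (1 << 2)

/* Count how many options that were actually parsed fall outside allowed_opts. */
int count_invalid_opts(unsigned int parsed_mask, unsigned int allowed_opts)
{
	static const unsigned int tokens[] = { OPT_A, OPT_B, OPT_C };
	int i, invalid = 0;

	for (i = 0; i < 3; i++) {
		/* warn only if the option was given AND is not allowed */
		if ((tokens[i] & parsed_mask) && (tokens[i] & ~allowed_opts))
			invalid++;
	}
	return invalid;
}
```

The pre-fix behaviour corresponds to dropping the `tokens[i] & parsed_mask` term, which warns about every disallowed token whether or not the user supplied it.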
···4747 default n4848 depends on NET49495050+config SCSI_MQ_DEFAULT5151+ bool "SCSI: use blk-mq I/O path by default"5252+ depends on SCSI5353+ ---help---5454+ This option enables the new blk-mq based I/O path for SCSI5555+ devices by default. With the option the scsi_mod.use_blk_mq5656+ module/boot option defaults to Y, without it to N, but it can5757+ still be overridden either way.5858+5959+ If unsure say N.6060+5061config SCSI_PROC_FS5162 bool "legacy /proc/scsi/ support"5263 depends on SCSI && PROC_FS
···401401 for (i = 0; i < vha->hw->max_req_queues; i++) {402402 struct req_que *req = vha->hw->req_q_map[i];403403404404- if (!test_bit(i, vha->hw->req_qid_map))405405- continue;406406-407404 if (req || !buf) {408405 length = req ?409406 req->length : REQUEST_ENTRY_CNT_24XX;···414417 } else if (ent->t263.queue_type == T263_QUEUE_TYPE_RSP) {415418 for (i = 0; i < vha->hw->max_rsp_queues; i++) {416419 struct rsp_que *rsp = vha->hw->rsp_q_map[i];417417-418418- if (!test_bit(i, vha->hw->rsp_qid_map))419419- continue;420420421421 if (rsp || !buf) {422422 length = rsp ?···654660 for (i = 0; i < vha->hw->max_req_queues; i++) {655661 struct req_que *req = vha->hw->req_q_map[i];656662657657- if (!test_bit(i, vha->hw->req_qid_map))658658- continue;659659-660663 if (req || !buf) {661664 qla27xx_insert16(i, buf, len);662665 qla27xx_insert16(1, buf, len);···665674 } else if (ent->t274.queue_type == T274_QUEUE_TYPE_RSP_SHAD) {666675 for (i = 0; i < vha->hw->max_rsp_queues; i++) {667676 struct rsp_que *rsp = vha->hw->rsp_q_map[i];668668-669669- if (!test_bit(i, vha->hw->rsp_qid_map))670670- continue;671677672678 if (rsp || !buf) {673679 qla27xx_insert16(i, buf, len);
+4
drivers/scsi/scsi.c
···800800module_param(scsi_logging_level, int, S_IRUGO|S_IWUSR);801801MODULE_PARM_DESC(scsi_logging_level, "a bit mask of logging levels");802802803803+#ifdef CONFIG_SCSI_MQ_DEFAULT803804bool scsi_use_blk_mq = true;805805+#else806806+bool scsi_use_blk_mq = false;807807+#endif804808module_param_named(use_blk_mq, scsi_use_blk_mq, bool, S_IWUSR | S_IRUGO);805809806810static int __init init_scsi(void)
+3
drivers/scsi/sd.c
···12771277{12781278 struct request *rq = SCpnt->request;1279127912801280+ if (SCpnt->flags & SCMD_ZONE_WRITE_LOCK)12811281+ sd_zbc_write_unlock_zone(SCpnt);12821282+12801283 if (rq->rq_flags & RQF_SPECIAL_PAYLOAD)12811284 __free_page(rq->special_vec.bv_page);12821285
+5-4
drivers/scsi/sd_zbc.c
···294294 test_and_set_bit(zno, sdkp->zones_wlock))295295 return BLKPREP_DEFER;296296297297+ WARN_ON_ONCE(cmd->flags & SCMD_ZONE_WRITE_LOCK);298298+ cmd->flags |= SCMD_ZONE_WRITE_LOCK;299299+297300 return BLKPREP_OK;298301}299302···305302 struct request *rq = cmd->request;306303 struct scsi_disk *sdkp = scsi_disk(rq->rq_disk);307304308308- if (sdkp->zones_wlock) {305305+ if (sdkp->zones_wlock && cmd->flags & SCMD_ZONE_WRITE_LOCK) {309306 unsigned int zno = sd_zbc_zone_no(sdkp, blk_rq_pos(rq));310307 WARN_ON_ONCE(!test_bit(zno, sdkp->zones_wlock));308308+ cmd->flags &= ~SCMD_ZONE_WRITE_LOCK;311309 clear_bit_unlock(zno, sdkp->zones_wlock);312310 smp_mb__after_atomic();313311 }···338334 case REQ_OP_WRITE:339335 case REQ_OP_WRITE_ZEROES:340336 case REQ_OP_WRITE_SAME:341341-342342- /* Unlock the zone */343343- sd_zbc_write_unlock_zone(cmd);344337345338 if (result &&346339 sshdr->sense_key == ILLEGAL_REQUEST &&
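The SCMD_ZONE_WRITE_LOCK flag added above makes lock and unlock symmetric: only the command that actually took a zone's write lock may release it, so moving the unlock into the completion path cannot drop a lock held by another command. A sketch of the pairing with a plain bitmap standing in for sdkp->zones_wlock (all names below are illustrative):

```c
#define CMD_ZONE_WRITE_LOCK_SK (1u << 0)

struct cmd_sketch { unsigned int flags; };

/* Returns 0 on success, -1 if the zone is already locked (BLKPREP_DEFER). */
int zone_write_lock(struct cmd_sketch *cmd, unsigned long *zones_wlock, int zno)
{
	if (*zones_wlock & (1UL << zno))
		return -1;
	*zones_wlock |= 1UL << zno;
	cmd->flags |= CMD_ZONE_WRITE_LOCK_SK;	/* remember who owns the lock */
	return 0;
}

void zone_write_unlock(struct cmd_sketch *cmd, unsigned long *zones_wlock, int zno)
{
	if (!(cmd->flags & CMD_ZONE_WRITE_LOCK_SK))
		return;	/* this command never took the zone lock */
	cmd->flags &= ~CMD_ZONE_WRITE_LOCK_SK;
	*zones_wlock &= ~(1UL << zno);
}
```

A deferred command (the `-1` path) carries no flag, so its later unlock call is a no-op rather than a spurious release.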
+1-1
drivers/scsi/ses.c
···9999100100 ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,101101 NULL, SES_TIMEOUT, SES_RETRIES, NULL);102102- if (unlikely(!ret))102102+ if (unlikely(ret))103103 return ret;104104105105 recv_page_code = ((unsigned char *)buf)[0];
+2-2
drivers/scsi/st.c
···42994299 kref_init(&tpnt->kref);43004300 tpnt->disk = disk;43014301 disk->private_data = &tpnt->driver;43024302- disk->queue = SDp->request_queue;43034302 /* SCSI tape doesn't register this gendisk via add_disk(). Manually43044303 * take queue reference that release_disk() expects. */43054305- if (!blk_get_queue(disk->queue))43044304+ if (!blk_get_queue(SDp->request_queue))43064305 goto out_put_disk;43064306+ disk->queue = SDp->request_queue;43074307 tpnt->driver = &st_template;4308430843094309 tpnt->device = SDp;
+8-7
drivers/soc/imx/gpcv2.c
···200200201201 domain->dev = &pdev->dev;202202203203- ret = pm_genpd_init(&domain->genpd, NULL, true);204204- if (ret) {205205- dev_err(domain->dev, "Failed to init power domain\n");206206- return ret;207207- }208208-209203 domain->regulator = devm_regulator_get_optional(domain->dev, "power");210204 if (IS_ERR(domain->regulator)) {211205 if (PTR_ERR(domain->regulator) != -ENODEV) {212212- dev_err(domain->dev, "Failed to get domain's regulator\n");206206+ if (PTR_ERR(domain->regulator) != -EPROBE_DEFER)207207+ dev_err(domain->dev, "Failed to get domain's regulator\n");213208 return PTR_ERR(domain->regulator);214209 }215210 } else {216211 regulator_set_voltage(domain->regulator,217212 domain->voltage, domain->voltage);213213+ }214214+215215+ ret = pm_genpd_init(&domain->genpd, NULL, true);216216+ if (ret) {217217+ dev_err(domain->dev, "Failed to init power domain\n");218218+ return ret;218219 }219220220221 ret = of_genpd_add_provider_simple(domain->dev->of_node,
···6969#ifdef CONFIG_UNIX98_PTYS7070 if (tty->driver == ptm_driver) {7171 mutex_lock(&devpts_mutex);7272- if (tty->link->driver_data) {7373- struct path *path = tty->link->driver_data;7474-7575- devpts_pty_kill(path->dentry);7676- path_put(path);7777- kfree(path);7878- }7272+ if (tty->link->driver_data)7373+ devpts_pty_kill(tty->link->driver_data);7974 mutex_unlock(&devpts_mutex);8075 }8176#endif···602607static struct cdev ptmx_cdev;603608604609/**605605- * pty_open_peer - open the peer of a pty606606- * @tty: the peer of the pty being opened610610+ * ptm_open_peer - open the peer of a pty611611+ * @master: the open struct file of the ptmx device node612612+ * @tty: the master of the pty being opened613613+ * @flags: the flags for open607614 *608608- * Open the cached dentry in tty->link, providing a safe way for userspace609609- * to get the slave end of a pty (where they have the master fd and cannot610610- * access or trust the mount namespace /dev/pts was mounted inside).615615+ * Provide a race free way for userspace to open the slave end of a pty616616+ * (where they have the master fd and cannot access or trust the mount617617+ * namespace /dev/pts was mounted inside).611618 */612612-static struct file *pty_open_peer(struct tty_struct *tty, int flags)613613-{614614- if (tty->driver->subtype != PTY_TYPE_MASTER)615615- return ERR_PTR(-EIO);616616- return dentry_open(tty->link->driver_data, flags, current_cred());617617-}618618-619619-static int pty_get_peer(struct tty_struct *tty, int flags)619619+int ptm_open_peer(struct file *master, struct tty_struct *tty, int flags)620620{621621 int fd = -1;622622- struct file *filp = NULL;622622+ struct file *filp;623623 int retval = -EINVAL;624624+ struct path path;625625+626626+ if (tty->driver != ptm_driver)627627+ return -EIO;624628625629 fd = get_unused_fd_flags(0);626630 if (fd < 0) {···627633 goto err;628634 }629635630630- filp = pty_open_peer(tty, flags);636636+ /* Compute the slave's path */637637+ path.mnt = devpts_mntget(master, tty->driver_data);638638+ if (IS_ERR(path.mnt)) {639639+ retval = PTR_ERR(path.mnt);640640+ goto err_put;641641+ }642642+ path.dentry = tty->link->driver_data;643643+644644+ filp = dentry_open(&path, flags, current_cred());645645+ mntput(path.mnt);631646 if (IS_ERR(filp)) {632647 retval = PTR_ERR(filp);633648 goto err_put;···665662 return pty_get_pktmode(tty, (int __user *)arg);666663 case TIOCGPTN: /* Get PT Number */667664 return put_user(tty->index, (unsigned int __user *)arg);668668- case TIOCGPTPEER: /* Open the other end */669669- return pty_get_peer(tty, (int) arg);670665 case TIOCSIG: /* Send signal to other side of pty */671666 return pty_signal(tty, (int) arg);672667 }···792791{793792 struct pts_fs_info *fsi;794793 struct tty_struct *tty;795795- struct path *pts_path;796794 struct dentry *dentry;797795 int retval;798796 int index;···845845 retval = PTR_ERR(dentry);846846 goto err_release;847847 }848848- /* We need to cache a fake path for TIOCGPTPEER. */849849- pts_path = kmalloc(sizeof(struct path), GFP_KERNEL);850850- if (!pts_path)851851- goto err_release;852852- pts_path->mnt = filp->f_path.mnt;853853- pts_path->dentry = dentry;854854- path_get(pts_path);855855- tty->link->driver_data = pts_path;848848+ tty->link->driver_data = dentry;856849857850 retval = ptm_driver->ops->open(tty, filp);858851 if (retval)859859- goto err_path_put;852852+ goto err_release;860853861854 tty_debug_hangup(tty, "opening (count=%d)\n", tty->count);862855863856 tty_unlock(tty);864857 return 0;865865-err_path_put:866866- path_put(pts_path);867867- kfree(pts_path);868858err_release:869859 tty_unlock(tty);870860 // This will also put-ref the fsi
+3
drivers/tty/tty_io.c
···25182518 case TIOCSSERIAL:25192519 tty_warn_deprecated_flags(p);25202520 break;25212521+ case TIOCGPTPEER:25222522+ /* Special because the struct file is needed */25232523+ return ptm_open_peer(file, tty, (int)arg);25212524 default:25222525 retval = tty_jobctrl_ioctl(tty, real_tty, file, cmd, arg);25232526 if (retval != -ENOIOCTLCMD)
···62126212 }62136213}6214621462156215-int btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,62166216- int mirror_num, int async_submit)62156215+blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,62166216+ int mirror_num, int async_submit)62176217{62186218 struct btrfs_device *dev;62196219 struct bio *first_bio = bio;···62336233 &map_length, &bbio, mirror_num, 1);62346234 if (ret) {62356235 btrfs_bio_counter_dec(fs_info);62366236- return ret;62366236+ return errno_to_blk_status(ret);62376237 }6238623862396239 total_devs = bbio->num_stripes;···62566256 }6257625762586258 btrfs_bio_counter_dec(fs_info);62596259- return ret;62596259+ return errno_to_blk_status(ret);62606260 }6261626162626262 if (map_length < length) {···62836283 dev_nr, async_submit);62846284 }62856285 btrfs_bio_counter_dec(fs_info);62866286- return 0;62866286+ return BLK_STS_OK;62876287}6288628862896289struct btrfs_device *btrfs_find_device(struct btrfs_fs_info *fs_info, u64 devid,
+3-3
fs/btrfs/volumes.h
···7474 int missing;7575 int can_discard;7676 int is_tgtdev_for_dev_replace;7777- int last_flush_error;7777+ blk_status_t last_flush_error;7878 int flush_bio_sent;79798080#ifdef __BTRFS_NEED_DEVICE_DATA_ORDERED···416416 struct btrfs_fs_info *fs_info, u64 type);417417void btrfs_mapping_init(struct btrfs_mapping_tree *tree);418418void btrfs_mapping_tree_free(struct btrfs_mapping_tree *tree);419419-int btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,420420- int mirror_num, int async_submit);419419+blk_status_t btrfs_map_bio(struct btrfs_fs_info *fs_info, struct bio *bio,420420+ int mirror_num, int async_submit);421421int btrfs_open_devices(struct btrfs_fs_devices *fs_devices,422422 fmode_t flags, void *holder);423423int btrfs_scan_one_device(const char *path, fmode_t flags, void *holder,
+12-6
fs/cifs/dir.c
···194194}195195196196/*197197+ * Don't allow path components longer than the server max.197198 * Don't allow the separator character in a path component.198199 * The VFS will not allow "/", but "\" is allowed by posix.199200 */200201static int201201-check_name(struct dentry *direntry)202202+check_name(struct dentry *direntry, struct cifs_tcon *tcon)202203{203204 struct cifs_sb_info *cifs_sb = CIFS_SB(direntry->d_sb);204205 int i;206206+207207+ if (unlikely(direntry->d_name.len >208208+ tcon->fsAttrInfo.MaxPathNameComponentLength))209209+ return -ENAMETOOLONG;205210206211 if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS)) {207212 for (i = 0; i < direntry->d_name.len; i++) {···505500 return finish_no_open(file, res);506501 }507502508508- rc = check_name(direntry);509509- if (rc)510510- return rc;511511-512503 xid = get_xid();513504514505 cifs_dbg(FYI, "parent inode = 0x%p name is: %pd and dentry = 0x%p\n",···517516 }518517519518 tcon = tlink_tcon(tlink);519519+520520+ rc = check_name(direntry, tcon);521521+ if (rc)522522+ goto out_free_xid;523523+520524 server = tcon->ses->server;521525522526 if (server->ops->new_lease_key)···782776 }783777 pTcon = tlink_tcon(tlink);784778785785- rc = check_name(direntry);779779+ rc = check_name(direntry, pTcon);786780 if (rc)787781 goto lookup_out;788782
···1383138313841384 trace_dax_pmd_fault(inode, vmf, max_pgoff, 0);1385138513861386+ /*13871387+ * Make sure that the faulting address's PMD offset (color) matches13881388+ * the PMD offset from the start of the file. This is necessary so13891389+ * that a PMD range in the page table overlaps exactly with a PMD13901390+ * range in the radix tree.13911391+ */13921392+ if ((vmf->pgoff & PG_PMD_COLOUR) !=13931393+ ((vmf->address >> PAGE_SHIFT) & PG_PMD_COLOUR))13941394+ goto fallback;13951395+13861396 /* Fall back to PTEs if we're going to COW */13871397 if (write && !(vma->vm_flags & VM_SHARED))13881398 goto fallback;
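The PMD colour test added above is pure mask arithmetic and can be sketched in user space. The geometry constants below assume x86-64 (4 KiB pages, 2 MiB PMDs); the helper name is illustrative, not kernel API:

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed x86-64 geometry: 4 KiB pages, 2 MiB PMDs (512 pages per PMD). */
#define PAGE_SHIFT 12
#define PMD_SHIFT 21
#define PG_PMD_COLOUR ((1UL << (PMD_SHIFT - PAGE_SHIFT)) - 1)

/*
 * Mirrors the new fallback test: a PMD fault is only usable when the
 * faulting address and the file offset occupy the same position
 * ("colour") inside their respective 2 MiB-aligned ranges.
 */
static bool pmd_colour_matches(uint64_t pgoff, uint64_t address)
{
	return (pgoff & PG_PMD_COLOUR) ==
	       ((address >> PAGE_SHIFT) & PG_PMD_COLOUR);
}
```

When the colours differ, a 2 MiB range in the page tables would not line up with a 2 MiB range in the radix tree, so the fault falls back to PTEs.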
+49-16
fs/devpts/inode.c
···133133 return sb->s_fs_info;134134}135135136136+static int devpts_ptmx_path(struct path *path)137137+{138138+ struct super_block *sb;139139+ int err;140140+141141+ /* Has the devpts filesystem already been found? */142142+ if (path->mnt->mnt_sb->s_magic == DEVPTS_SUPER_MAGIC)143143+ return 0;144144+145145+ /* Is a devpts filesystem at "pts" in the same directory? */146146+ err = path_pts(path);147147+ if (err)148148+ return err;149149+150150+ /* Is the path the root of a devpts filesystem? */151151+ sb = path->mnt->mnt_sb;152152+ if ((sb->s_magic != DEVPTS_SUPER_MAGIC) ||153153+ (path->mnt->mnt_root != sb->s_root))154154+ return -ENODEV;155155+156156+ return 0;157157+}158158+159159+struct vfsmount *devpts_mntget(struct file *filp, struct pts_fs_info *fsi)160160+{161161+ struct path path;162162+ int err;163163+164164+ path = filp->f_path;165165+ path_get(&path);166166+167167+ err = devpts_ptmx_path(&path);168168+ dput(path.dentry);169169+ if (err) {170170+ mntput(path.mnt);171171+ path.mnt = ERR_PTR(err);172172+ }173173+ if (DEVPTS_SB(path.mnt->mnt_sb) != fsi) {174174+ mntput(path.mnt);175175+ path.mnt = ERR_PTR(-ENODEV);176176+ }177177+ return path.mnt;178178+}179179+136180struct pts_fs_info *devpts_acquire(struct file *filp)137181{138182 struct pts_fs_info *result;···187143 path = filp->f_path;188144 path_get(&path);189145190190- /* Has the devpts filesystem already been found? */191191- sb = path.mnt->mnt_sb;192192- if (sb->s_magic != DEVPTS_SUPER_MAGIC) {193193- /* Is a devpts filesystem at "pts" in the same directory? */194194- err = path_pts(&path);195195- if (err) {196196- result = ERR_PTR(err);197197- goto out;198198- }199199-200200- /* Is the path the root of a devpts filesystem? 
*/201201- result = ERR_PTR(-ENODEV);202202- sb = path.mnt->mnt_sb;203203- if ((sb->s_magic != DEVPTS_SUPER_MAGIC) ||204204- (path.mnt->mnt_root != sb->s_root))205205- goto out;146146+ err = devpts_ptmx_path(&path);147147+ if (err) {148148+ result = ERR_PTR(err);149149+ goto out;206150 }207151208152 /*209153 * pty code needs to hold extra references in case of last /dev/tty close210154 */155155+ sb = path.mnt->mnt_sb;211156 atomic_inc(&sb->s_active);212157 result = DEVPTS_SB(sb);213158
+5-2
fs/ext4/mballoc.c
···23002300 EXT4_MAX_BLOCK_LOG_SIZE);23012301 struct sg {23022302 struct ext4_group_info info;23032303- ext4_grpblk_t counters[blocksize_bits + 2];23032303+ ext4_grpblk_t counters[EXT4_MAX_BLOCK_LOG_SIZE + 2];23042304 } sg;2305230523062306 group--;···23082308 seq_puts(seq, "#group: free frags first ["23092309 " 2^0 2^1 2^2 2^3 2^4 2^5 2^6 "23102310 " 2^7 2^8 2^9 2^10 2^11 2^12 2^13 ]\n");23112311+23122312+ i = (blocksize_bits + 2) * sizeof(sg.info.bb_counters[0]) +23132313+ sizeof(struct ext4_group_info);2311231423122315 grinfo = ext4_get_group_info(sb, group);23132316 /* Load the group info in memory only if not already loaded. */···23232320 buddy_loaded = 1;23242321 }2325232223262326- memcpy(&sg, ext4_get_group_info(sb, group), sizeof(sg));23232323+ memcpy(&sg, ext4_get_group_info(sb, group), i);2327232423282325 if (buddy_loaded)23292326 ext4_mb_unload_buddy(&e4b);
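The VLA removal above depends on the partial-copy length never exceeding the worst-case struct. A minimal sketch of that invariant, with illustrative stand-in types (the real copy length uses `sizeof(sg.info.bb_counters[0])` plus `sizeof(struct ext4_group_info)`):

```c
#include <stddef.h>

/* Illustrative stand-ins for the ext4 structures in the hunk. */
#define EXT4_MAX_BLOCK_LOG_SIZE 16

struct info_hdr { unsigned long bb_free; };

struct sg_sketch {
	struct info_hdr info;
	int counters[EXT4_MAX_BLOCK_LOG_SIZE + 2];
};

/*
 * The hunk replaces the variable-length array with its worst case and
 * then memcpy()s only the live prefix: the header plus one counter per
 * buddy order, i.e. (blocksize_bits + 2) of them.
 */
static size_t live_copy_len(unsigned int blocksize_bits)
{
	return (blocksize_bits + 2) * sizeof(int) + sizeof(struct info_hdr);
}
```

For any supported `blocksize_bits` the live prefix fits inside the fixed-size buffer, and at the maximum it covers it exactly.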
···749749 return 0;750750 }751751752752+ /*753753+ * During the second phase of log recovery, we need iget and754754+ * iput to behave like they do for an active filesystem.755755+ * xfs_fs_drop_inode needs to be able to prevent the deletion756756+ * of inodes before we're done replaying log items on those757757+ * inodes. Turn it off immediately after recovery finishes758758+ * so that we don't leak the quota inodes if subsequent mount759759+ * activities fail.760760+ */761761+ mp->m_super->s_flags |= MS_ACTIVE;752762 error = xlog_recover_finish(mp->m_log);753763 if (!error)754764 xfs_log_work_queue(mp);765765+ mp->m_super->s_flags &= ~MS_ACTIVE;755766756767 return error;757768}
+2-10
fs/xfs/xfs_mount.c
···945945 }946946947947 /*948948- * During the second phase of log recovery, we need iget and949949- * iput to behave like they do for an active filesystem.950950- * xfs_fs_drop_inode needs to be able to prevent the deletion951951- * of inodes before we're done replaying log items on those952952- * inodes.953953- */954954- mp->m_super->s_flags |= MS_ACTIVE;955955-956956- /*957948 * Finish recovering the file system. This part needed to be delayed958949 * until after the root and real-time bitmap inodes were consistently959950 * read in.···10191028 out_quota:10201029 xfs_qm_unmount_quotas(mp);10211030 out_rtunmount:10221022- mp->m_super->s_flags &= ~MS_ACTIVE;10231031 xfs_rtunmount_inodes(mp);10241032 out_rele_rip:10251033 IRELE(rip);10261034 cancel_delayed_work_sync(&mp->m_reclaim_work);10271035 xfs_reclaim_inodes(mp, SYNC_WAIT);10361036+ /* Clean out dquots that might be in memory after quotacheck. */10371037+ xfs_qm_unmount(mp);10281038 out_log_dealloc:10291039 mp->m_flags |= XFS_MOUNT_UNMOUNTING;10301040 xfs_log_mount_cancel(mp);
+26-12
include/asm-generic/vmlinux.lds.h
···6060#define ALIGN_FUNCTION() . = ALIGN(8)61616262/*6363+ * LD_DEAD_CODE_DATA_ELIMINATION option enables -fdata-sections, which6464+ * generates .data.identifier sections, which need to be pulled in with6565+ * .data. We don't want to pull in .data..other sections, which Linux6666+ * has defined. Same for text and bss.6767+ */6868+#ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION6969+#define TEXT_MAIN .text .text.[0-9a-zA-Z_]*7070+#define DATA_MAIN .data .data.[0-9a-zA-Z_]*7171+#define BSS_MAIN .bss .bss.[0-9a-zA-Z_]*7272+#else7373+#define TEXT_MAIN .text7474+#define DATA_MAIN .data7575+#define BSS_MAIN .bss7676+#endif7777+7878+/*6379 * Align to a 32 byte boundary equal to the6480 * alignment gcc 4.5 uses for a struct6581 */···214198215199/*216200 * .data section217217- * LD_DEAD_CODE_DATA_ELIMINATION option enables -fdata-sections generates218218- * .data.identifier which needs to be pulled in with .data, but don't want to219219- * pull in .data..stuff which has its own requirements. Same for bss.220201 */221202#define DATA_DATA \222222- *(.data .data.[0-9a-zA-Z_]*) \203203+ *(DATA_MAIN) \223204 *(.ref.data) \224205 *(.data..shared_aligned) /* percpu related */ \225206 MEM_KEEP(init.data) \···447434 VMLINUX_SYMBOL(__security_initcall_end) = .; \448435 }449436450450-/* .text section. Map to function alignment to avoid address changes437437+/*438438+ * .text section. 
Map to function alignment to avoid address changes451439 * during second ld run in second ld pass when generating System.map452452- * LD_DEAD_CODE_DATA_ELIMINATION option enables -ffunction-sections generates453453- * .text.identifier which needs to be pulled in with .text , but some454454- * architectures define .text.foo which is not intended to be pulled in here.455455- * Those enabling LD_DEAD_CODE_DATA_ELIMINATION must ensure they don't have456456- * conflicting section names, and must pull in .text.[0-9a-zA-Z_]* */440440+ *441441+ * TEXT_MAIN here will match .text.fixup and .text.unlikely if dead442442+ * code elimination is enabled, so these sections should be converted443443+ * to use ".." first.444444+ */457445#define TEXT_TEXT \458446 ALIGN_FUNCTION(); \459459- *(.text.hot .text .text.fixup .text.unlikely) \447447+ *(.text.hot TEXT_MAIN .text.fixup .text.unlikely) \460448 *(.ref.text) \461449 MEM_KEEP(init.text) \462450 MEM_KEEP(exit.text) \···627613 BSS_FIRST_SECTIONS \628614 *(.bss..page_aligned) \629615 *(.dynbss) \630630- *(.bss .bss.[0-9a-zA-Z_]*) \616616+ *(BSS_MAIN) \631617 *(COMMON) \632618 }633619
-1
include/linux/blkdev.h
···568568569569#if defined(CONFIG_BLK_DEV_BSG)570570 bsg_job_fn *bsg_job_fn;571571- int bsg_job_size;572571 struct bsg_class_device bsg_dev;573572#endif574573
···907907/* Page cache limit. The filesystems should put that into their s_maxbytes 908908 limits, otherwise bad things can happen in VM. */ 909909#if BITS_PER_LONG==32910910-#define MAX_LFS_FILESIZE (((loff_t)PAGE_SIZE << (BITS_PER_LONG-1))-1)910910+#define MAX_LFS_FILESIZE ((loff_t)ULONG_MAX << PAGE_SHIFT)911911#elif BITS_PER_LONG==64912912-#define MAX_LFS_FILESIZE ((loff_t)0x7fffffffffffffffLL)912912+#define MAX_LFS_FILESIZE ((loff_t)LLONG_MAX)913913#endif914914915915#define FL_POSIX 1
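On 32-bit the two constants differ by a factor of two: the page cache can index `ULONG_MAX` pages, while the old formula stopped at half that range. A hedged sketch of the arithmetic (the helper names and the 4 KiB page assumption are ours):

```c
#include <stdint.h>

/* Assumed 32-bit configuration: 4 KiB pages, 32-bit unsigned long. */
#define PAGE_SHIFT_32 12
#define PAGE_SIZE_32 (1ULL << PAGE_SHIFT_32)
#define ULONG_MAX_32 0xffffffffULL

/* Old 32-bit limit: half the indexable range, minus one (just under 8 TiB). */
static int64_t max_lfs_old32(void)
{
	return (int64_t)(PAGE_SIZE_32 << (32 - 1)) - 1;
}

/* New 32-bit limit: every page a 32-bit page-cache index can reach (~16 TiB). */
static int64_t max_lfs_new32(void)
{
	return (int64_t)(ULONG_MAX_32 << PAGE_SHIFT_32);
}
```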
+1-1
include/linux/iio/iio.h
···535535 * @scan_timestamp: [INTERN] set if any buffers have requested timestamp536536 * @scan_index_timestamp:[INTERN] cache of the index to the timestamp537537 * @trig: [INTERN] current device trigger (buffer modes)538538- * @trig_readonly [INTERN] mark the current trigger immutable538538+ * @trig_readonly: [INTERN] mark the current trigger immutable539539 * @pollfunc: [DRIVER] function run on trigger being received540540 * @pollfunc_event: [DRIVER] function run on events trigger being received541541 * @channels: [DRIVER] channel specification structure table
+2-2
include/linux/iio/trigger.h
···144144/**145145 * iio_trigger_set_immutable() - set an immutable trigger on destination146146 *147147- * @indio_dev - IIO device structure containing the device148148- * @trig - trigger to assign to device147147+ * @indio_dev: IIO device structure containing the device148148+ * @trig: trigger to assign to device149149 *150150 **/151151int iio_trigger_set_immutable(struct iio_dev *indio_dev, struct iio_trigger *trig);
···37373838/* Historically, SOCKWQ_ASYNC_NOSPACE & SOCKWQ_ASYNC_WAITDATA were located3939 * in sock->flags, but moved into sk->sk_wq->flags to be RCU protected.4040- * Eventually all flags will be in sk->sk_wq_flags.4040+ * Eventually all flags will be in sk->sk_wq->flags.4141 */4242#define SOCKWQ_ASYNC_NOSPACE 04343#define SOCKWQ_ASYNC_WAITDATA 1
···66#include <linux/types.h>77#include <linux/nodemask.h>88#include <uapi/linux/oom.h>99+#include <linux/sched/coredump.h> /* MMF_* */1010+#include <linux/mm.h> /* VM_FAULT* */9111012struct zonelist;1113struct notifier_block;···6361static inline bool tsk_is_oom_victim(struct task_struct * tsk)6462{6563 return tsk->signal->oom_mm;6464+}6565+6666+/*6767+ * Checks whether a page fault on the given mm is still reliable.6868+ * This is no longer true if the oom reaper started to reap the6969+ * address space which is reflected by MMF_UNSTABLE flag set in7070+ * the mm. At that moment any !shared mapping would lose the content7171+ * and could cause a memory corruption (zero pages instead of the7272+ * original content).7373+ *7474+ * User should call this before establishing a page table entry for7575+ * a !shared mapping and under the proper page table lock.7676+ *7777+ * Return 0 when the PF is safe, VM_FAULT_SIGBUS otherwise.7878+ */7979+static inline int check_stable_address_space(struct mm_struct *mm)8080+{8181+ if (unlikely(test_bit(MMF_UNSTABLE, &mm->flags)))8282+ return VM_FAULT_SIGBUS;8383+ return 0;6684}67856886extern unsigned long oom_badness(struct task_struct *p,
···193193}194194195195static inline int skb_array_resize_multiple(struct skb_array **rings,196196- int nrings, int size, gfp_t gfp)196196+ int nrings, unsigned int size,197197+ gfp_t gfp)197198{198199 BUILD_BUG_ON(offsetof(struct skb_array, ring));199200 return ptr_ring_resize_multiple((struct ptr_ring **)rings,
+37
include/linux/wait.h
···757757 __ret; \758758})759759760760+#define __wait_event_killable_timeout(wq_head, condition, timeout) \761761+ ___wait_event(wq_head, ___wait_cond_timeout(condition), \762762+ TASK_KILLABLE, 0, timeout, \763763+ __ret = schedule_timeout(__ret))764764+765765+/**766766+ * wait_event_killable_timeout - sleep until a condition gets true or a timeout elapses767767+ * @wq_head: the waitqueue to wait on768768+ * @condition: a C expression for the event to wait for769769+ * @timeout: timeout, in jiffies770770+ *771771+ * The process is put to sleep (TASK_KILLABLE) until the772772+ * @condition evaluates to true or a kill signal is received.773773+ * The @condition is checked each time the waitqueue @wq_head is woken up.774774+ *775775+ * wake_up() has to be called after changing any variable that could776776+ * change the result of the wait condition.777777+ *778778+ * Returns:779779+ * 0 if the @condition evaluated to %false after the @timeout elapsed,780780+ * 1 if the @condition evaluated to %true after the @timeout elapsed,781781+ * the remaining jiffies (at least 1) if the @condition evaluated782782+ * to %true before the @timeout elapsed, or -%ERESTARTSYS if it was783783+ * interrupted by a kill signal.784784+ *785785+ * Only kill signals interrupt this process.786786+ */787787+#define wait_event_killable_timeout(wq_head, condition, timeout) \788788+({ \789789+ long __ret = timeout; \790790+ might_sleep(); \791791+ if (!___wait_cond_timeout(condition)) \792792+ __ret = __wait_event_killable_timeout(wq_head, \793793+ condition, timeout); \794794+ __ret; \795795+})796796+760797761798#define __wait_event_lock_irq(wq_head, condition, lock, cmd) \762799 (void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0, \
···2929#include <linux/sched/signal.h>3030#include <net/ip.h>31313232-#ifdef CONFIG_NET_RX_BUSY_POLL3333-3434-struct napi_struct;3535-extern unsigned int sysctl_net_busy_read __read_mostly;3636-extern unsigned int sysctl_net_busy_poll __read_mostly;3737-3832/* 0 - Reserved to indicate value not set3933 * 1..NR_CPUS - Reserved for sender_cpu4034 * NR_CPUS+1..~0 - Region available for NAPI IDs4135 */4236#define MIN_NAPI_ID ((unsigned int)(NR_CPUS + 1))3737+3838+#ifdef CONFIG_NET_RX_BUSY_POLL3939+4040+struct napi_struct;4141+extern unsigned int sysctl_net_busy_read __read_mostly;4242+extern unsigned int sysctl_net_busy_poll __read_mostly;43434444static inline bool net_busy_loop_on(void)4545{
···54995499 ieee80211_manage_rx_ba_offl(vif, addr, tid + IEEE80211_NUM_TIDS);55005500}5501550155025502+/**55035503+ * ieee80211_rx_ba_timer_expired - stop a Rx BA session due to timeout55045504+ *55055505+ * Some device drivers do not offload AddBa/DelBa negotiation, but handle rx55065506+ * buffer reordering internally, and therefore also handle the session timer.55075507+ *55085508+ * Trigger the timeout flow, which sends a DelBa.55095509+ *55105510+ * @vif: &struct ieee80211_vif pointer from the add_interface callback55115511+ * @addr: station mac address55125512+ * @tid: the rx tid55135513+ */55145514+void ieee80211_rx_ba_timer_expired(struct ieee80211_vif *vif,55155515+ const u8 *addr, unsigned int tid);55165516+55025517/* Rate control API */5503551855045519/**
+4-1
include/net/sch_generic.h
···806806 old = *pold;807807 *pold = new;808808 if (old != NULL) {809809- qdisc_tree_reduce_backlog(old, old->q.qlen, old->qstats.backlog);809809+ unsigned int qlen = old->q.qlen;810810+ unsigned int backlog = old->qstats.backlog;811811+810812 qdisc_reset(old);813813+ qdisc_tree_reduce_backlog(old, qlen, backlog);811814 }812815 sch_tree_unlock(sch);813816
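The reordering above matters because `qdisc_reset()` zeroes the very counters being reported. A toy model of the fixed sequence (structure and helper names are illustrative; the out-parameters stand in for `qdisc_tree_reduce_backlog()`):

```c
/* Toy counters standing in for old->q.qlen and old->qstats.backlog. */
struct q_sketch {
	unsigned int qlen;
	unsigned int backlog;
};

static void qdisc_reset_sketch(struct q_sketch *q)
{
	q->qlen = 0;
	q->backlog = 0;
}

/*
 * Mirrors the fixed ordering: snapshot the counters before
 * qdisc_reset() zeroes them, then report the reduction afterwards.
 */
static void replace_sketch(struct q_sketch *old, unsigned int *red_qlen,
			   unsigned int *red_backlog)
{
	unsigned int qlen = old->qlen;
	unsigned int backlog = old->backlog;

	qdisc_reset_sketch(old);
	*red_qlen = qlen;
	*red_backlog = backlog;
}
```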
+1-3
include/net/sock.h
···507507static inline int sk_peek_offset(struct sock *sk, int flags)508508{509509 if (unlikely(flags & MSG_PEEK)) {510510- s32 off = READ_ONCE(sk->sk_peek_off);511511- if (off >= 0)512512- return off;510510+ return READ_ONCE(sk->sk_peek_off);513511 }514512515513 return 0;
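The simplification above stops clamping a negative `sk_peek_off` to zero. A user-space sketch of the new contract (names with the `_sketch` suffix and the flag value are ours):

```c
#define MSG_PEEK_SKETCH 0x2	/* illustrative stand-in for MSG_PEEK */

/*
 * Mirror of the simplified sk_peek_offset(): a negative stored offset
 * (peeking at an offset disabled) is now returned as-is; callers are
 * expected to test for < 0 themselves instead of having it masked here.
 */
static int sk_peek_offset_sketch(int sk_peek_off, int flags)
{
	if (flags & MSG_PEEK_SKETCH)
		return sk_peek_off;
	return 0;
}
```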
+4-3
include/net/udp.h
···366366static inline int copy_linear_skb(struct sk_buff *skb, int len, int off,367367 struct iov_iter *to)368368{369369- int n, copy = len - off;369369+ int n;370370371371- n = copy_to_iter(skb->data + off, copy, to);372372- if (n == copy)371371+ n = copy_to_iter(skb->data + off, len, to);372372+ if (n == len)373373 return 0;374374375375+ iov_iter_revert(to, n);375376 return -EFAULT;376377}377378
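The `iov_iter_revert()` call above is the key: on a short copy the iterator must be rewound by the bytes that did land. A toy iterator standing in for `iov_iter` (all names and fields are illustrative):

```c
#include <stddef.h>
#include <string.h>

/* Toy iterator standing in for iov_iter. */
struct iter_sketch {
	char *buf;
	size_t pos;
	size_t limit;	/* capacity: copies past this are partial */
};

/* Like copy_to_iter(): returns the number of bytes actually copied. */
static size_t copy_to_iter_sketch(const char *src, size_t len,
				  struct iter_sketch *it)
{
	size_t room = it->limit - it->pos;
	size_t n = len < room ? len : room;

	memcpy(it->buf + it->pos, src, n);
	it->pos += n;
	return n;
}

/*
 * Mirrors the fixed helper: on a short copy, rewind the iterator by the
 * bytes that did land (iov_iter_revert() in the kernel) so a retry does
 * not leave a hole, and report the failure (-EFAULT there, -1 here).
 */
static int copy_linear_sketch(const char *data, size_t len,
			      struct iter_sketch *it)
{
	size_t n = copy_to_iter_sketch(data, len, it);

	if (n == len)
		return 0;
	it->pos -= n;
	return -1;
}
```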
···66666767/* fsnotify events we care about. */6868#define AUDIT_FS_WATCH (FS_MOVE | FS_CREATE | FS_DELETE | FS_DELETE_SELF |\6969- FS_MOVE_SELF | FS_EVENT_ON_CHILD)6969+ FS_MOVE_SELF | FS_EVENT_ON_CHILD | FS_UNMOUNT)70707171static void audit_free_parent(struct audit_parent *parent)7272{···457457 list_del(&krule->rlist);458458459459 if (list_empty(&watch->rules)) {460460+ /*461461+ * audit_remove_watch() drops our reference to 'parent' which462462+ * can get freed. Grab our own reference to be safe.463463+ */464464+ audit_get_parent(parent);460465 audit_remove_watch(watch);461461-462462- if (list_empty(&parent->watches)) {463463- audit_get_parent(parent);466466+ if (list_empty(&parent->watches))464467 fsnotify_destroy_mark(&parent->mark, audit_watch_group);465465- audit_put_parent(parent);466466- }468468+ audit_put_parent(parent);467469 }468470}469471
+58-28
kernel/events/core.c
···22172217 return can_add_hw;22182218}2219221922202220+/*22212221+ * Complement to update_event_times(). This computes the tstamp_* values to22222222+ * continue 'enabled' state from @now, and effectively discards the time22232223+ * between the prior tstamp_stopped and now (as we were in the OFF state, or22242224+ * just switched (context) time base).22252225+ *22262226+ * This further assumes '@event->state == INACTIVE' (we just came from OFF) and22272227+ * cannot have been scheduled in yet. And going into INACTIVE state means22282228+ * '@event->tstamp_stopped = @now'.22292229+ *22302230+ * Thus given the rules of update_event_times():22312231+ *22322232+ * total_time_enabled = tstamp_stopped - tstamp_enabled22332233+ * total_time_running = tstamp_stopped - tstamp_running22342234+ *22352235+ * We can insert 'tstamp_stopped == now' and reverse them to compute new22362236+ * tstamp_* values.22372237+ */22382238+static void __perf_event_enable_time(struct perf_event *event, u64 now)22392239+{22402240+ WARN_ON_ONCE(event->state != PERF_EVENT_STATE_INACTIVE);22412241+22422242+ event->tstamp_stopped = now;22432243+ event->tstamp_enabled = now - event->total_time_enabled;22442244+ event->tstamp_running = now - event->total_time_running;22452245+}22462246+22202247static void add_event_to_ctx(struct perf_event *event,22212248 struct perf_event_context *ctx)22222249{···2251222422522225 list_add_event(event, ctx);22532226 perf_group_attach(event);22542254- event->tstamp_enabled = tstamp;22552255- event->tstamp_running = tstamp;22562256- event->tstamp_stopped = tstamp;22272227+ /*22282228+ * We can be called with event->state == STATE_OFF when we create with22292229+ * .disabled = 1. 
In that case the IOC_ENABLE will call this function.22302230+ */22312231+ if (event->state == PERF_EVENT_STATE_INACTIVE)22322232+ __perf_event_enable_time(event, tstamp);22572233}2258223422592235static void ctx_sched_out(struct perf_event_context *ctx,···25012471 u64 tstamp = perf_event_time(event);2502247225032473 event->state = PERF_EVENT_STATE_INACTIVE;25042504- event->tstamp_enabled = tstamp - event->total_time_enabled;24742474+ __perf_event_enable_time(event, tstamp);25052475 list_for_each_entry(sub, &event->sibling_list, group_entry) {24762476+ /* XXX should not be > INACTIVE if event isn't */25062477 if (sub->state >= PERF_EVENT_STATE_INACTIVE)25072507- sub->tstamp_enabled = tstamp - sub->total_time_enabled;24782478+ __perf_event_enable_time(sub, tstamp);25082479 }25092480}25102481···51215090 atomic_inc(&event->rb->aux_mmap_count);5122509151235092 if (event->pmu->event_mapped)51245124- event->pmu->event_mapped(event);50935093+ event->pmu->event_mapped(event, vma->vm_mm);51255094}5126509551275096static void perf_pmu_output_stop(struct perf_event *event);···51445113 unsigned long size = perf_data_size(rb);5145511451465115 if (event->pmu->event_unmapped)51475147- event->pmu->event_unmapped(event);51165116+ event->pmu->event_unmapped(event, vma->vm_mm);5148511751495118 /*51505119 * rb->aux_mmap_count will always drop before rb->mmap_count and···54425411 vma->vm_ops = &perf_mmap_vmops;5443541254445413 if (event->pmu->event_mapped)54455445- event->pmu->event_mapped(event);54145414+ event->pmu->event_mapped(event, vma->vm_mm);5446541554475416 return ret;54485417}···1003210001 goto err_context;10033100021003410003 /*1003510035- * Do not allow to attach to a group in a different1003610036- * task or CPU context:1000410004+ * Make sure we're both events for the same CPU;1000510005+ * grouping events for different CPUs is broken; since1000610006+ * you can never concurrently schedule them anyhow.1003710007 */1003810038- if (move_group) {1003910039- /*1004010040- * Make 
sure we're both on the same task, or both1004110041- * per-cpu events.1004210042- */1004310043- if (group_leader->ctx->task != ctx->task)1004410044- goto err_context;1000810008+ if (group_leader->cpu != event->cpu)1000910009+ goto err_context;10045100101004610046- /*1004710047- * Make sure we're both events for the same CPU;1004810048- * grouping events for different CPUs is broken; since1004910049- * you can never concurrently schedule them anyhow.1005010050- */1005110051- if (group_leader->cpu != event->cpu)1005210052- goto err_context;1005310053- } else {1005410054- if (group_leader->ctx != ctx)1005510055- goto err_context;1005610056- }1001110011+ /*1001210012+ * Make sure we're both on the same task, or both1001310013+ * per-CPU events.1001410014+ */1001510015+ if (group_leader->ctx->task != ctx->task)1001610016+ goto err_context;1001710017+1001810018+ /*1001910019+ * Do not allow to attach to a group in a different task1002010020+ * or CPU context. If we're moving SW events, we'll fix1002110021+ * this up later, so allow that.1002210022+ */1002310023+ if (!move_group && group_leader->ctx != ctx)1002410024+ goto err_context;10057100251005810026 /*1005910027 * Only a group leader can be exclusive or pinned
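The timestamp rebasing in `__perf_event_enable_time()` above is an exact inverse of the `update_event_times()` rules quoted in its comment, so the accumulated totals round-trip while the OFF gap is discarded. A sketch of that arithmetic (struct and helper names are illustrative):

```c
#include <stdint.h>

/* Illustrative subset of struct perf_event's timekeeping fields. */
struct ev_sketch {
	uint64_t total_time_enabled;
	uint64_t total_time_running;
	uint64_t tstamp_enabled;
	uint64_t tstamp_running;
	uint64_t tstamp_stopped;
};

/* The update_event_times() rules quoted in the new comment. */
static void update_times_sketch(struct ev_sketch *ev)
{
	ev->total_time_enabled = ev->tstamp_stopped - ev->tstamp_enabled;
	ev->total_time_running = ev->tstamp_stopped - ev->tstamp_running;
}

/*
 * Mirror of __perf_event_enable_time(): re-base all three tstamps at
 * @now so the accumulated totals survive and the OFF gap is discarded.
 */
static void enable_time_sketch(struct ev_sketch *ev, uint64_t now)
{
	ev->tstamp_stopped = now;
	ev->tstamp_enabled = now - ev->total_time_enabled;
	ev->tstamp_running = now - ev->total_time_running;
}
```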
···1000100010011001void irq_modify_status(unsigned int irq, unsigned long clr, unsigned long set)10021002{10031003- unsigned long flags;10031003+ unsigned long flags, trigger, tmp;10041004 struct irq_desc *desc = irq_get_desc_lock(irq, &flags, 0);1005100510061006 if (!desc)···1014101410151015 irq_settings_clr_and_set(desc, clr, set);1016101610171017+ trigger = irqd_get_trigger_type(&desc->irq_data);10181018+10171019 irqd_clear(&desc->irq_data, IRQD_NO_BALANCING | IRQD_PER_CPU |10181020 IRQD_TRIGGER_MASK | IRQD_LEVEL | IRQD_MOVE_PCNTXT);10191021 if (irq_settings_has_no_balance_set(desc))···10271025 if (irq_settings_is_level(desc))10281026 irqd_set(&desc->irq_data, IRQD_LEVEL);1029102710301030- irqd_set(&desc->irq_data, irq_settings_get_trigger_mask(desc));10281028+ tmp = irq_settings_get_trigger_mask(desc);10291029+ if (tmp != IRQ_TYPE_NONE)10301030+ trigger = tmp;10311031+10321032+ irqd_set(&desc->irq_data, trigger);1031103310321034 irq_put_desc_unlock(desc, flags);10331035}
+2-2
kernel/irq/ipi.c
···165165 struct irq_data *data = irq_get_irq_data(irq);166166 struct cpumask *ipimask = data ? irq_data_get_affinity_mask(data) : NULL;167167168168- if (!data || !ipimask || cpu > nr_cpu_ids)168168+ if (!data || !ipimask || cpu >= nr_cpu_ids)169169 return INVALID_HWIRQ;170170171171 if (!cpumask_test_cpu(cpu, ipimask))···195195 if (!chip->ipi_send_single && !chip->ipi_send_mask)196196 return -EINVAL;197197198198- if (cpu > nr_cpu_ids)198198+ if (cpu >= nr_cpu_ids)199199 return -EINVAL;200200201201 if (dest) {
+23-2
kernel/kmod.c
···7171static DECLARE_WAIT_QUEUE_HEAD(kmod_wq);72727373/*7474+ * This is a restriction on having *all* MAX_KMOD_CONCURRENT threads7575+ * running at the same time without returning. When this happens we7676+ * believe you've somehow ended up with a recursive module dependency7777+ * creating a loop.7878+ *7979+ * We have no option but to fail.8080+ *8181+ * Userspace should proactively try to detect and prevent these.8282+ */8383+#define MAX_KMOD_ALL_BUSY_TIMEOUT 58484+8585+/*7486 modprobe_path is set via /proc/sys.7587*/7688char modprobe_path[KMOD_PATH_LEN] = "/sbin/modprobe";···179167 pr_warn_ratelimited("request_module: kmod_concurrent_max (%u) close to 0 (max_modprobes: %u), for module %s, throttling...",180168 atomic_read(&kmod_concurrent_max),181169 MAX_KMOD_CONCURRENT, module_name);182182- wait_event_interruptible(kmod_wq,183183- atomic_dec_if_positive(&kmod_concurrent_max) >= 0);170170+ ret = wait_event_killable_timeout(kmod_wq,171171+ atomic_dec_if_positive(&kmod_concurrent_max) >= 0,172172+ MAX_KMOD_ALL_BUSY_TIMEOUT * HZ);173173+ if (!ret) {174174+ pr_warn_ratelimited("request_module: modprobe %s cannot be processed, kmod busy with %d threads for more than %d seconds now",175175+ module_name, MAX_KMOD_CONCURRENT, MAX_KMOD_ALL_BUSY_TIMEOUT);176176+ return -ETIME;177177+ } else if (ret == -ERESTARTSYS) {178178+ pr_warn_ratelimited("request_module: sigkill sent for modprobe %s, giving up", module_name);179179+ return ret;180180+ }184181 }185182186183 trace_module_request(module_name, wait, _RET_IP_);
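The three-way branching above follows the return contract of `wait_event_killable_timeout()` (0 on timeout, positive on success, `-ERESTARTSYS` on a kill signal). An illustrative mapping of those values to the actions the hunk takes (the enum and helper are ours):

```c
/* Kernel value of ERESTARTSYS, reproduced here for the sketch. */
#define ERESTARTSYS_SKETCH 512

enum throttle_sketch { THROTTLE_OK, THROTTLE_TIMEOUT, THROTTLE_KILLED };

static enum throttle_sketch classify_wait_sketch(long ret)
{
	if (ret == 0)
		return THROTTLE_TIMEOUT;	/* caller returns -ETIME */
	if (ret < 0)
		return THROTTLE_KILLED;		/* -ERESTARTSYS propagated */
	return THROTTLE_OK;			/* slot acquired, proceed */
}
```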
+4-7
kernel/pid.c
···527527 if (!ns)528528 ns = task_active_pid_ns(current);529529 if (likely(pid_alive(task))) {530530- if (type != PIDTYPE_PID)530530+ if (type != PIDTYPE_PID) {531531+ if (type == __PIDTYPE_TGID)532532+ type = PIDTYPE_PID;531533 task = task->group_leader;534534+ }532535 nr = pid_nr_ns(rcu_dereference(task->pids[type].pid), ns);533536 }534537 rcu_read_unlock();···539536 return nr;540537}541538EXPORT_SYMBOL(__task_pid_nr_ns);542542-543543-pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)544544-{545545- return pid_nr_ns(task_tgid(tsk), ns);546546-}547547-EXPORT_SYMBOL(task_tgid_nr_ns);548539549540struct pid_namespace *task_active_pid_ns(struct task_struct *tsk)550541{
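The `__PIDTYPE_TGID` folding above works because a TGID is simply the PID of the thread-group leader, which is why the standalone `task_tgid_nr_ns()` could be deleted. A toy model of the lookup (types and values are illustrative):

```c
/* Illustrative model of the per-type pid table in struct task_struct. */
enum { PIDTYPE_PID, PIDTYPE_PGID, PIDTYPE_SID, PIDTYPE_MAX, __PIDTYPE_TGID };

struct task_sketch {
	int pids[PIDTYPE_MAX];
	struct task_sketch *group_leader;
};

/*
 * Mirror of the fixed lookup: TGID has no slot of its own in pids[];
 * it is resolved as the PIDTYPE_PID entry of the thread-group leader.
 */
static int task_pid_nr_sketch(struct task_sketch *task, int type)
{
	if (type != PIDTYPE_PID) {
		if (type == __PIDTYPE_TGID)
			type = PIDTYPE_PID;
		task = task->group_leader;
	}
	return task->pids[type];
}
```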
+4-3
kernel/sched/wait.c
···70707171 list_for_each_entry_safe(curr, next, &wq_head->head, entry) {7272 unsigned flags = curr->flags;7373-7474- if (curr->func(curr, mode, wake_flags, key) &&7575- (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)7373+ int ret = curr->func(curr, mode, wake_flags, key);7474+ if (ret < 0)7575+ break;7676+ if (ret && (flags & WQ_FLAG_EXCLUSIVE) && !--nr_exclusive)7677 break;7778 }7879}
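The new `ret < 0` check above lets a wake function abort the whole walk, not just decline a wakeup. A toy walk showing both stopping conditions (the structure and helper are illustrative; `->ret` stands in for what `curr->func()` would return):

```c
/*
 * Toy wait-queue entry: ->ret < 0 means error, > 0 means an exclusive
 * waiter was woken, 0 means the entry declined the wakeup.
 */
struct wq_entry_sketch {
	int ret;
	int exclusive;	/* WQ_FLAG_EXCLUSIVE */
};

/* Returns how many entries were visited before the walk stopped. */
static int wake_walk_sketch(const struct wq_entry_sketch *list, int n,
			    int nr_exclusive)
{
	int visited = 0;

	for (int i = 0; i < n; i++) {
		int ret = list[i].ret;

		visited++;
		if (ret < 0)
			break;	/* the fix: an error aborts the walk */
		if (ret && list[i].exclusive && !--nr_exclusive)
			break;
	}
	return visited;
}
```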
+5-1
kernel/signal.c
···11941194 recalc_sigpending_and_wake(t);11951195 }11961196 }11971197- if (action->sa.sa_handler == SIG_DFL)11971197+ /*11981198+ * Don't clear SIGNAL_UNKILLABLE for traced tasks, users won't expect11991199+ * debugging to leave init killable.12001200+ */12011201+ if (action->sa.sa_handler == SIG_DFL && !t->ptrace)11981202 t->signal->flags &= ~SIGNAL_UNKILLABLE;11991203 ret = specific_send_sig_info(sig, info, t);12001204 spin_unlock_irqrestore(&t->sighand->siglock, flags);
+41-9
kernel/time/timer.c
···203203 bool migration_enabled;204204 bool nohz_active;205205 bool is_idle;206206+ bool must_forward_clk;206207 DECLARE_BITMAP(pending_map, WHEEL_SIZE);207208 struct hlist_head vectors[WHEEL_SIZE];208209} ____cacheline_aligned;···857856858857static inline void forward_timer_base(struct timer_base *base)859858{860860- unsigned long jnow = READ_ONCE(jiffies);859859+ unsigned long jnow;861860862861 /*863863- * We only forward the base when it's idle and we have a delta between864864- * base clock and jiffies.862862+ * We only forward the base when we are idle or have just come out of863863+ * idle (must_forward_clk logic), and have a delta between base clock864864+ * and jiffies. In the common case, run_timers will take care of it.865865 */866866- if (!base->is_idle || (long) (jnow - base->clk) < 2)866866+ if (likely(!base->must_forward_clk))867867+ return;868868+869869+ jnow = READ_ONCE(jiffies);870870+ base->must_forward_clk = base->is_idle;871871+ if ((long)(jnow - base->clk) < 2)867872 return;868873869874 /*···945938 * same array bucket then just return:946939 */947940 if (timer_pending(timer)) {941941+ /*942942+ * The downside of this optimization is that it can result in943943+ * larger granularity than you would get from adding a new944944+ * timer with this expiry.945945+ */948946 if (timer->expires == expires)949947 return 1;950948···960948 * dequeue/enqueue dance.961949 */962950 base = lock_timer_base(timer, &flags);951951+ forward_timer_base(base);963952964953 clk = base->clk;965954 idx = calc_wheel_index(expires, clk);···977964 }978965 } else {979966 base = lock_timer_base(timer, &flags);967967+ forward_timer_base(base);980968 }981969982970 ret = detach_if_pending(timer, base, false);···1005991 raw_spin_lock(&base->lock);1006992 WRITE_ONCE(timer->flags,1007993 (timer->flags & ~TIMER_BASEMASK) | base->cpu);994994+ forward_timer_base(base);1008995 }1009996 }10101010-10111011- /* Try to forward a stale timer base clock */10121012- 
forward_timer_base(base);10139971014998 timer->expires = expires;1015999 /*···11241112 WRITE_ONCE(timer->flags,11251113 (timer->flags & ~TIMER_BASEMASK) | cpu);11261114 }11151115+ forward_timer_base(base);1127111611281117 debug_activate(timer, timer->expires);11291118 internal_add_timer(base, timer);···15101497 if (!is_max_delta)15111498 expires = basem + (u64)(nextevt - basej) * TICK_NSEC;15121499 /*15131513- * If we expect to sleep more than a tick, mark the base idle:15001500+ * If we expect to sleep more than a tick, mark the base idle.15011501+ * Also the tick is stopped so any added timer must forward15021502+ * the base clk itself to keep granularity small. This idle15031503+ * logic is only maintained for the BASE_STD base, deferrable15041504+ * timers may still see large granularity skew (by design).15141505 */15151515- if ((expires - basem) > TICK_NSEC)15061506+ if ((expires - basem) > TICK_NSEC) {15071507+ base->must_forward_clk = true;15161508 base->is_idle = true;15091509+ }15171510 }15181511 raw_spin_unlock(&base->lock);15191512···16291610static __latent_entropy void run_timer_softirq(struct softirq_action *h)16301611{16311612 struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_STD]);16131613+16141614+ /*16151615+ * must_forward_clk must be cleared before running timers so that any16161616+ * timer functions that call mod_timer will not try to forward the16171617+ * base. idle tracking / clock forwarding logic is only used with16181618+ * BASE_STD timers.16191619+ *16201620+ * The deferrable base does not do idle tracking at all, so we do16211621+ * not forward it. This can result in very large variations in16221622+ * granularity for deferrable timers, but they can be deferred for16231623+ * long periods due to idle.16241624+ */16251625+ base->must_forward_clk = false;1632162616331627 __run_timers(base);16341628 if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && base->nohz_active)
···889889890890 function_profile_call(trace->func, 0, NULL, NULL);891891892892+ /* If function graph is shutting down, ret_stack can be NULL */893893+ if (!current->ret_stack)894894+ return 0;895895+892896 if (index >= 0 && index < FTRACE_RETFUNC_DEPTH)893897 current->ret_stack[index].subtime = 0;894898
+9-5
kernel/trace/ring_buffer.c
···43864386 * the page that was allocated, with the read page of the buffer.43874387 *43884388 * Returns:43894389- * The page allocated, or NULL on error.43894389+ * The page allocated, or ERR_PTR43904390 */43914391void *ring_buffer_alloc_read_page(struct ring_buffer *buffer, int cpu)43924392{43934393- struct ring_buffer_per_cpu *cpu_buffer = buffer->buffers[cpu];43934393+ struct ring_buffer_per_cpu *cpu_buffer;43944394 struct buffer_data_page *bpage = NULL;43954395 unsigned long flags;43964396 struct page *page;4397439743984398+ if (!cpumask_test_cpu(cpu, buffer->cpumask))43994399+ return ERR_PTR(-ENODEV);44004400+44014401+ cpu_buffer = buffer->buffers[cpu];43984402 local_irq_save(flags);43994403 arch_spin_lock(&cpu_buffer->lock);44004404···44164412 page = alloc_pages_node(cpu_to_node(cpu),44174413 GFP_KERNEL | __GFP_NORETRY, 0);44184414 if (!page)44194419- return NULL;44154415+ return ERR_PTR(-ENOMEM);4420441644214417 bpage = page_address(page);44224418···44714467 *44724468 * for example:44734469 * rpage = ring_buffer_alloc_read_page(buffer, cpu);44744474- * if (!rpage)44754475- * return error;44704470+ * if (IS_ERR(rpage))44714471+ * return PTR_ERR(rpage);44764472 * ret = ring_buffer_read_page(buffer, &rpage, len, cpu, 0);44774473 * if (ret >= 0)44784474 * process_page(rpage, ret);
+1-1
kernel/trace/ring_buffer_benchmark.c
···113113 int i;114114115115 bpage = ring_buffer_alloc_read_page(buffer, cpu);116116- if (!bpage)116116+ if (IS_ERR(bpage))117117 return EVENT_DROPPED;118118119119 ret = ring_buffer_read_page(buffer, &bpage, PAGE_SIZE, cpu, 1);
+13-6
kernel/trace/trace.c
···65986598{65996599 struct ftrace_buffer_info *info = filp->private_data;66006600 struct trace_iterator *iter = &info->iter;66016601- ssize_t ret;66016601+ ssize_t ret = 0;66026602 ssize_t size;6603660366046604 if (!count)···66126612 if (!info->spare) {66136613 info->spare = ring_buffer_alloc_read_page(iter->trace_buffer->buffer,66146614 iter->cpu_file);66156615- info->spare_cpu = iter->cpu_file;66156615+ if (IS_ERR(info->spare)) {66166616+ ret = PTR_ERR(info->spare);66176617+ info->spare = NULL;66186618+ } else {66196619+ info->spare_cpu = iter->cpu_file;66206620+ }66166621 }66176622 if (!info->spare)66186618- return -ENOMEM;66236623+ return ret;6619662466206625 /* Do we have previous read data to read? */66216626 if (info->read < PAGE_SIZE)···67956790 ref->ref = 1;67966791 ref->buffer = iter->trace_buffer->buffer;67976792 ref->page = ring_buffer_alloc_read_page(ref->buffer, iter->cpu_file);67986798- if (!ref->page) {67996799- ret = -ENOMEM;67936793+ if (IS_ERR(ref->page)) {67946794+ ret = PTR_ERR(ref->page);67956795+ ref->page = NULL;68006796 kfree(ref);68016797 break;68026798 }···82998293 if (ret < 0)83008294 goto out_free_cpumask;83018295 /* Used for event triggers */82968296+ ret = -ENOMEM;83028297 temp_buffer = ring_buffer_alloc(PAGE_SIZE, RB_FL_OVERWRITE);83038298 if (!temp_buffer)83048299 goto out_rm_hp_state;···84148407}8415840884168409fs_initcall(tracer_init_tracefs);84178417-late_initcall(clear_boot_tracer);84108410+late_initcall_sync(clear_boot_tracer);
···221221 if (!a)222222 return;223223224224- if (!a->pages) {225225- kfree(a);226226- return;227227- }224224+ if (!a->pages)225225+ goto free;228226229227 for (i = 0; i < a->n_pages; i++) {230228 if (!a->pages[i])231229 break;232230 free_page((unsigned long)a->pages[i]);233231 }232232+233233+ kfree(a->pages);234234+235235+ free:236236+ kfree(a);234237}235238236239struct tracing_map_array *tracing_map_array_alloc(unsigned int n_elts,
+1
kernel/watchdog.c
···240240 * hardlockup detector generates a warning241241 */242242 sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5);243243+ watchdog_update_hrtimer_threshold(sample_period);243244}244245245246/* Commands for resetting the watchdog */
+59
kernel/watchdog_hld.c
···3737}3838EXPORT_SYMBOL(arch_touch_nmi_watchdog);39394040+#ifdef CONFIG_HARDLOCKUP_CHECK_TIMESTAMP4141+static DEFINE_PER_CPU(ktime_t, last_timestamp);4242+static DEFINE_PER_CPU(unsigned int, nmi_rearmed);4343+static ktime_t watchdog_hrtimer_sample_threshold __read_mostly;4444+4545+void watchdog_update_hrtimer_threshold(u64 period)4646+{4747+ /*4848+ * The hrtimer runs with a period of (watchdog_threshold * 2) / 54949+ *5050+ * So it runs effectively with 2.5 times the rate of the NMI5151+ * watchdog. That means the hrtimer should fire 2-3 times before5252+ * the NMI watchdog expires. The NMI watchdog on x86 is based on5353+ * unhalted CPU cycles, so if Turbo-Mode is enabled the CPU cycles5454+ * might run way faster than expected and the NMI fires in a5555+ * smaller period than the one deduced from the nominal CPU5656+ * frequency. Depending on the Turbo-Mode factor this might be fast5757+ * enough to get the NMI period smaller than the hrtimer watchdog5858+ * period and trigger false positives.5959+ *6060+ * The sample threshold is used to check in the NMI handler whether6161+ * the minimum time between two NMI samples has elapsed. 
That6262+ * prevents false positives.6363+ *6464+ * Set this to 4/5 of the actual watchdog threshold period so the6565+ * hrtimer is guaranteed to fire at least once within the real6666+ * watchdog threshold.6767+ */6868+ watchdog_hrtimer_sample_threshold = period * 2;6969+}7070+7171+static bool watchdog_check_timestamp(void)7272+{7373+ ktime_t delta, now = ktime_get_mono_fast_ns();7474+7575+ delta = now - __this_cpu_read(last_timestamp);7676+ if (delta < watchdog_hrtimer_sample_threshold) {7777+ /*7878+ * If ktime is jiffies based, a stalled timer would prevent7979+ * jiffies from being incremented and the filter would look8080+ * at a stale timestamp and never trigger.8181+ */8282+ if (__this_cpu_inc_return(nmi_rearmed) < 10)8383+ return false;8484+ }8585+ __this_cpu_write(nmi_rearmed, 0);8686+ __this_cpu_write(last_timestamp, now);8787+ return true;8888+}8989+#else9090+static inline bool watchdog_check_timestamp(void)9191+{9292+ return true;9393+}9494+#endif9595+4096static struct perf_event_attr wd_hw_attr = {4197 .type = PERF_TYPE_HARDWARE,4298 .config = PERF_COUNT_HW_CPU_CYCLES,···11660 __this_cpu_write(watchdog_nmi_touch, false);11761 return;11862 }6363+6464+ if (!watchdog_check_timestamp())6565+ return;1196612067 /* check for a hardlockup12168 * This is done by making sure our timer interrupt
+7
lib/Kconfig.debug
···798798 select SOFTLOCKUP_DETECTOR799799800800#801801+# Enables a timestamp based low pass filter to compensate for perf based802802+# hard lockup detection which runs too fast due to turbo modes.803803+#804804+config HARDLOCKUP_CHECK_TIMESTAMP805805+ bool806806+807807+#801808# arch/ can define HAVE_HARDLOCKUP_DETECTOR_ARCH to provide their own hard802809# lockup detector rather than the perf based detector.803810#
···16111611 * @page: the page16121612 *16131613 * This function protects unlocked LRU pages from being moved to16141614- * another cgroup and stabilizes their page->mem_cgroup binding.16141614+ * another cgroup.16151615+ *16161616+ * It ensures lifetime of the returned memcg. Caller is responsible16171617+ * for the lifetime of the page; __unlock_page_memcg() is available16181618+ * when @page might get freed inside the locked section.16151619 */16161616-void lock_page_memcg(struct page *page)16201620+struct mem_cgroup *lock_page_memcg(struct page *page)16171621{16181622 struct mem_cgroup *memcg;16191623 unsigned long flags;···16261622 * The RCU lock is held throughout the transaction. The fast16271623 * path can get away without acquiring the memcg->move_lock16281624 * because page moving starts with an RCU grace period.16291629- */16251625+ *16261626+ * The RCU lock also protects the memcg from being freed when16271627+ * the page state that is going to change is the only thing16281628+ * preventing the page itself from being freed. E.g. 
writeback16291629+ * doesn't hold a page reference and relies on PG_writeback to16301630+ * keep off truncation, migration and so forth.16311631+ */16301632 rcu_read_lock();1631163316321634 if (mem_cgroup_disabled())16331633- return;16351635+ return NULL;16341636again:16351637 memcg = page->mem_cgroup;16361638 if (unlikely(!memcg))16371637- return;16391639+ return NULL;1638164016391641 if (atomic_read(&memcg->moving_account) <= 0)16401640- return;16421642+ return memcg;1641164316421644 spin_lock_irqsave(&memcg->move_lock, flags);16431645 if (memcg != page->mem_cgroup) {···16591649 memcg->move_lock_task = current;16601650 memcg->move_lock_flags = flags;1661165116621662- return;16521652+ return memcg;16631653}16641654EXPORT_SYMBOL(lock_page_memcg);1665165516661656/**16671667- * unlock_page_memcg - unlock a page->mem_cgroup binding16681668- * @page: the page16571657+ * __unlock_page_memcg - unlock and unpin a memcg16581658+ * @memcg: the memcg16591659+ *16601660+ * Unlock and unpin a memcg returned by lock_page_memcg().16691661 */16701670-void unlock_page_memcg(struct page *page)16621662+void __unlock_page_memcg(struct mem_cgroup *memcg)16711663{16721672- struct mem_cgroup *memcg = page->mem_cgroup;16731673-16741664 if (memcg && memcg->move_lock_task == current) {16751665 unsigned long flags = memcg->move_lock_flags;16761666···16811671 }1682167216831673 rcu_read_unlock();16741674+}16751675+16761676+/**16771677+ * unlock_page_memcg - unlock a page->mem_cgroup binding16781678+ * @page: the page16791679+ */16801680+void unlock_page_memcg(struct page *page)16811681+{16821682+ __unlock_page_memcg(page->mem_cgroup);16841683}16851684EXPORT_SYMBOL(unlock_page_memcg);16861685
+20-16
mm/memory.c
···6868#include <linux/debugfs.h>6969#include <linux/userfaultfd_k.h>7070#include <linux/dax.h>7171+#include <linux/oom.h>71727273#include <asm/io.h>7374#include <asm/mmu_context.h>···28942893 struct vm_area_struct *vma = vmf->vma;28952894 struct mem_cgroup *memcg;28962895 struct page *page;28962896+ int ret = 0;28972897 pte_t entry;2898289828992899 /* File mapping without ->vm_ops ? */···29262924 vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,29272925 vmf->address, &vmf->ptl);29282926 if (!pte_none(*vmf->pte))29272927+ goto unlock;29282928+ ret = check_stable_address_space(vma->vm_mm);29292929+ if (ret)29292930 goto unlock;29302931 /* Deliver the page fault to userland, check inside PT lock */29312932 if (userfaultfd_missing(vma)) {···29642959 if (!pte_none(*vmf->pte))29652960 goto release;2966296129622962+ ret = check_stable_address_space(vma->vm_mm);29632963+ if (ret)29642964+ goto release;29652965+29672966 /* Deliver the page fault to userland, check inside PT lock */29682967 if (userfaultfd_missing(vma)) {29692968 pte_unmap_unlock(vmf->pte, vmf->ptl);···29872978 update_mmu_cache(vma, vmf->address, vmf->pte);29882979unlock:29892980 pte_unmap_unlock(vmf->pte, vmf->ptl);29902990- return 0;29812981+ return ret;29912982release:29922983 mem_cgroup_cancel_charge(page, memcg, false);29932984 put_page(page);···32613252int finish_fault(struct vm_fault *vmf)32623253{32633254 struct page *page;32643264- int ret;32553255+ int ret = 0;3265325632663257 /* Did we COW the page? 
*/32673258 if ((vmf->flags & FAULT_FLAG_WRITE) &&···32693260 page = vmf->cow_page;32703261 else32713262 page = vmf->page;32723272- ret = alloc_set_pte(vmf, vmf->memcg, page);32633263+32643264+ /*32653265+ * check even for read faults because we might have lost our CoWed32663266+ * page32673267+ */32683268+ if (!(vmf->vma->vm_flags & VM_SHARED))32693269+ ret = check_stable_address_space(vmf->vma->vm_mm);32703270+ if (!ret)32713271+ ret = alloc_set_pte(vmf, vmf->memcg, page);32733272 if (vmf->pte)32743273 pte_unmap_unlock(vmf->pte, vmf->ptl);32753274 return ret;···39163899 if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))39173900 mem_cgroup_oom_synchronize(false);39183901 }39193919-39203920- /*39213921- * This mm has been already reaped by the oom reaper and so the39223922- * refault cannot be trusted in general. Anonymous refaults would39233923- * lose data and give a zero page instead e.g. This is especially39243924- * problem for use_mm() because regular tasks will just die and39253925- * the corrupted data will not be visible anywhere while kthread39263926- * will outlive the oom victim and potentially propagate the date39273927- * further.39283928- */39293929- if (unlikely((current->flags & PF_KTHREAD) && !(ret & VM_FAULT_ERROR)39303930- && test_bit(MMF_UNSTABLE, &vma->vm_mm->flags)))39313931- ret = VM_FAULT_SIGBUS;3932390239333903 return ret;39343904}
-5
mm/mempolicy.c
···861861 *policy |= (pol->flags & MPOL_MODE_FLAGS);862862 }863863864864- if (vma) {865865- up_read(¤t->mm->mmap_sem);866866- vma = NULL;867867- }868868-869864 err = 0;870865 if (nmask) {871866 if (mpol_store_user_nodemask(pol)) {
+3-8
mm/migrate.c
···4141#include <linux/page_idle.h>4242#include <linux/page_owner.h>4343#include <linux/sched/mm.h>4444+#include <linux/ptrace.h>44454546#include <asm/tlbflush.h>4647···16531652 const int __user *, nodes,16541653 int __user *, status, int, flags)16551654{16561656- const struct cred *cred = current_cred(), *tcred;16571655 struct task_struct *task;16581656 struct mm_struct *mm;16591657 int err;···1676167616771677 /*16781678 * Check if this process has the right to modify the specified16791679- * process. The right exists if the process has administrative16801680- * capabilities, superuser privileges or the same16811681- * userid as the target process.16791679+ * process. Use the regular "ptrace_may_access()" checks.16821680 */16831683- tcred = __task_cred(task);16841684- if (!uid_eq(cred->euid, tcred->suid) && !uid_eq(cred->euid, tcred->uid) &&16851685- !uid_eq(cred->uid, tcred->suid) && !uid_eq(cred->uid, tcred->uid) &&16861686- !capable(CAP_SYS_NICE)) {16811681+ if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) {16871682 rcu_read_unlock();16881683 err = -EPERM;16891684 goto out;
-16
mm/nobootmem.c
···146146 NULL)147147 count += __free_memory_core(start, end);148148149149-#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK150150- {151151- phys_addr_t size;152152-153153- /* Free memblock.reserved array if it was allocated */154154- size = get_allocated_memblock_reserved_regions_info(&start);155155- if (size)156156- count += __free_memory_core(start, start + size);157157-158158- /* Free memblock.memory array if it was allocated */159159- size = get_allocated_memblock_memory_regions_info(&start);160160- if (size)161161- count += __free_memory_core(start, start + size);162162- }163163-#endif164164-165149 return count;166150}167151
+12-3
mm/page-writeback.c
···27242724int test_clear_page_writeback(struct page *page)27252725{27262726 struct address_space *mapping = page_mapping(page);27272727+ struct mem_cgroup *memcg;27282728+ struct lruvec *lruvec;27272729 int ret;2728273027292729- lock_page_memcg(page);27312731+ memcg = lock_page_memcg(page);27322732+ lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));27302733 if (mapping && mapping_use_writeback_tags(mapping)) {27312734 struct inode *inode = mapping->host;27322735 struct backing_dev_info *bdi = inode_to_bdi(inode);···27572754 } else {27582755 ret = TestClearPageWriteback(page);27592756 }27572757+ /*27582758+ * NOTE: Page might be free now! Writeback doesn't hold a page27592759+ * reference on its own, it relies on truncation to wait for27602760+ * the clearing of PG_writeback. The below can only access27612761+ * page state that is static across allocation cycles.27622762+ */27602763 if (ret) {27612761- dec_lruvec_page_state(page, NR_WRITEBACK);27642764+ dec_lruvec_state(lruvec, NR_WRITEBACK);27622765 dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);27632766 inc_node_page_state(page, NR_WRITTEN);27642767 }27652765- unlock_page_memcg(page);27682768+ __unlock_page_memcg(memcg);27662769 return ret;27672770}27682771
+22-2
mm/page_alloc.c
···6666#include <linux/kthread.h>6767#include <linux/memcontrol.h>6868#include <linux/ftrace.h>6969+#include <linux/nmi.h>69707071#include <asm/sections.h>7172#include <asm/tlbflush.h>···15851584 /* Reinit limits that are based on free pages after the kernel is up */15861585 files_maxfiles_init();15871586#endif15871587+#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK15881588+ /* Discard memblock private memory */15891589+ memblock_discard();15901590+#endif1588159115891592 for_each_populated_zone(zone)15901593 set_zone_contiguous(zone);···2536253125372532#ifdef CONFIG_HIBERNATION2538253325342534+/*25352535+ * Touch the watchdog for every WD_PAGE_COUNT pages.25362536+ */25372537+#define WD_PAGE_COUNT (128*1024)25382538+25392539void mark_free_pages(struct zone *zone)25402540{25412541- unsigned long pfn, max_zone_pfn;25412541+ unsigned long pfn, max_zone_pfn, page_count = WD_PAGE_COUNT;25422542 unsigned long flags;25432543 unsigned int order, t;25442544 struct page *page;···25582548 if (pfn_valid(pfn)) {25592549 page = pfn_to_page(pfn);2560255025512551+ if (!--page_count) {25522552+ touch_nmi_watchdog();25532553+ page_count = WD_PAGE_COUNT;25542554+ }25552555+25612556 if (page_zone(page) != zone)25622557 continue;25632558···25762561 unsigned long i;2577256225782563 pfn = page_to_pfn(page);25792579- for (i = 0; i < (1UL << order); i++)25642564+ for (i = 0; i < (1UL << order); i++) {25652565+ if (!--page_count) {25662566+ touch_nmi_watchdog();25672567+ page_count = WD_PAGE_COUNT;25682568+ }25802569 swsusp_set_page_free(pfn_to_page(pfn + i));25702570+ }25812571 }25822572 }25832573 spin_unlock_irqrestore(&zone->lock, flags);
+2-2
mm/shmem.c
···39673967 }3968396839693969#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE39703970- if (has_transparent_hugepage() && shmem_huge < SHMEM_HUGE_DENY)39703970+ if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY)39713971 SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;39723972 else39733973 shmem_huge = 0; /* just in case it was patched */···40284028 return -EINVAL;4029402940304030 shmem_huge = huge;40314031- if (shmem_huge < SHMEM_HUGE_DENY)40314031+ if (shmem_huge > SHMEM_HUGE_DENY)40324032 SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;40334033 return count;40344034}
···10071007{10081008 /* This basically follows the spec line by line -- see RFC1112 */10091009 struct igmphdr *ih;10101010- struct in_device *in_dev = __in_dev_get_rcu(skb->dev);10101010+ struct net_device *dev = skb->dev;10111011+ struct in_device *in_dev;10111012 int len = skb->len;10121013 bool dropped = true;1013101410151015+ if (netif_is_l3_master(dev)) {10161016+ dev = dev_get_by_index_rcu(dev_net(dev), IPCB(skb)->iif);10171017+ if (!dev)10181018+ goto drop;10191019+ }10201020+10211021+ in_dev = __in_dev_get_rcu(dev);10141022 if (!in_dev)10151023 goto drop;10161024
+12-4
net/ipv4/route.c
···12671267 if (mtu)12681268 return mtu;1269126912701270- mtu = dst->dev->mtu;12701270+ mtu = READ_ONCE(dst->dev->mtu);1271127112721272 if (unlikely(dst_metric_locked(dst, RTAX_MTU))) {12731273 if (rt->rt_uses_gateway && mtu > 576)···27502750 err = 0;27512751 if (IS_ERR(rt))27522752 err = PTR_ERR(rt);27532753+ else27542754+ skb_dst_set(skb, &rt->dst);27532755 }2754275627552757 if (err)27562758 goto errout_free;2757275927582758- skb_dst_set(skb, &rt->dst);27592760 if (rtm->rtm_flags & RTM_F_NOTIFY)27602761 rt->rt_flags |= RTCF_NOTIFY;2761276227622763 if (rtm->rtm_flags & RTM_F_LOOKUP_TABLE)27632764 table_id = rt->rt_table_id;2764276527652765- if (rtm->rtm_flags & RTM_F_FIB_MATCH)27662766+ if (rtm->rtm_flags & RTM_F_FIB_MATCH) {27672767+ if (!res.fi) {27682768+ err = fib_props[res.type].error;27692769+ if (!err)27702770+ err = -EHOSTUNREACH;27712771+ goto errout_free;27722772+ }27662773 err = fib_dump_info(skb, NETLINK_CB(in_skb).portid,27672774 nlh->nlmsg_seq, RTM_NEWROUTE, table_id,27682775 rt->rt_type, res.prefix, res.prefixlen,27692776 fl4.flowi4_tos, res.fi, 0);27702770- else27772777+ } else {27712778 err = rt_fill_info(net, dst, src, table_id, &fl4, skb,27722779 NETLINK_CB(in_skb).portid, nlh->nlmsg_seq);27802780+ }27732781 if (err < 0)27742782 goto errout_free;27752783
+1-2
net/ipv4/tcp_input.c
···30093009 /* delta_us may not be positive if the socket is locked30103010 * when the retrans timer fires and is rescheduled.30113011 */30123012- if (delta_us > 0)30133013- rto = usecs_to_jiffies(delta_us);30123012+ rto = usecs_to_jiffies(max_t(int, delta_us, 1));30143013 }30153014 inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, rto,30163015 TCP_RTO_MAX);
+2-2
net/ipv4/tcp_ipv4.c
···17221722 */17231723 sock_hold(sk);17241724 refcounted = true;17251725+ if (tcp_filter(sk, skb))17261726+ goto discard_and_relse;17251727 nsk = tcp_check_req(sk, skb, req, false);17261728 if (!nsk) {17271729 reqsk_put(req);···17311729 }17321730 if (nsk == sk) {17331731 reqsk_put(req);17341734- } else if (tcp_filter(sk, skb)) {17351735- goto discard_and_relse;17361732 } else if (tcp_child_process(sk, nsk, skb)) {17371733 tcp_v4_send_reset(nsk, skb);17381734 goto discard_and_relse;
···914914 }915915 nsiblings = iter->rt6i_nsiblings;916916 fib6_purge_rt(iter, fn, info->nl_net);917917+ if (fn->rr_ptr == iter)918918+ fn->rr_ptr = NULL;917919 rt6_release(iter);918920919921 if (nsiblings) {···928926 if (rt6_qualify_for_ecmp(iter)) {929927 *ins = iter->dst.rt6_next;930928 fib6_purge_rt(iter, fn, info->nl_net);929929+ if (fn->rr_ptr == iter)930930+ fn->rr_ptr = NULL;931931 rt6_release(iter);932932 nsiblings--;933933 } else {···10181014 /* Create subtree root node */10191015 sfn = node_alloc();10201016 if (!sfn)10211021- goto st_failure;10171017+ goto failure;1022101810231019 sfn->leaf = info->nl_net->ipv6.ip6_null_entry;10241020 atomic_inc(&info->nl_net->ipv6.ip6_null_entry->rt6i_ref);···1035103110361032 if (IS_ERR(sn)) {10371033 /* If it is failed, discard just allocated10381038- root, and then (in st_failure) stale node10341034+ root, and then (in failure) stale node10391035 in main tree.10401036 */10411037 node_free(sfn);10421038 err = PTR_ERR(sn);10431043- goto st_failure;10391039+ goto failure;10441040 }1045104110461042 /* Now link new subtree to main tree */···1055105110561052 if (IS_ERR(sn)) {10571053 err = PTR_ERR(sn);10581058- goto st_failure;10541054+ goto failure;10591055 }10601056 }10611057···10961092 atomic_inc(&pn->leaf->rt6i_ref);10971093 }10981094#endif10991099- /* Always release dst as dst->__refcnt is guaranteed11001100- * to be taken before entering this function11011101- */11021102- dst_release_immediate(&rt->dst);10951095+ goto failure;11031096 }11041097 return err;1105109811061106-#ifdef CONFIG_IPV6_SUBTREES11071107- /* Subtree creation failed, probably main tree node11081108- is orphan. 
If it is, shoot it.10991099+failure:11001100+ /* fn->leaf could be NULL if fn is an intermediate node and we11011101+ * failed to add the new route to it in both subtree creation11021102+ * failure and fib6_add_rt2node() failure case.11031103+ * In both cases, fib6_repair_tree() should be called to fix11041104+ * fn->leaf.11091105 */11101110-st_failure:11111106 if (fn && !(fn->fn_flags & (RTN_RTINFO|RTN_ROOT)))11121107 fib6_repair_tree(info->nl_net, fn);11131108 /* Always release dst as dst->__refcnt is guaranteed···11141111 */11151112 dst_release_immediate(&rt->dst);11161113 return err;11171117-#endif11181114}1119111511201116/*
+8-11
net/ipv6/route.c
···417417 struct net_device *loopback_dev =418418 dev_net(dev)->loopback_dev;419419420420- if (dev != loopback_dev) {421421- if (idev && idev->dev == dev) {422422- struct inet6_dev *loopback_idev =423423- in6_dev_get(loopback_dev);424424- if (loopback_idev) {425425- rt->rt6i_idev = loopback_idev;426426- in6_dev_put(idev);427427- }420420+ if (idev && idev->dev != loopback_dev) {421421+ struct inet6_dev *loopback_idev = in6_dev_get(loopback_dev);422422+ if (loopback_idev) {423423+ rt->rt6i_idev = loopback_idev;424424+ in6_dev_put(idev);428425 }429426 }430427}···37213724 /* NETDEV_UNREGISTER could be fired for multiple times by37223725 * netdev_wait_allrefs(). Make sure we only call this once.37233726 */37243724- in6_dev_put(net->ipv6.ip6_null_entry->rt6i_idev);37273727+ in6_dev_put_clear(&net->ipv6.ip6_null_entry->rt6i_idev);37253728#ifdef CONFIG_IPV6_MULTIPLE_TABLES37263726- in6_dev_put(net->ipv6.ip6_prohibit_entry->rt6i_idev);37273727- in6_dev_put(net->ipv6.ip6_blk_hole_entry->rt6i_idev);37293729+ in6_dev_put_clear(&net->ipv6.ip6_prohibit_entry->rt6i_idev);37303730+ in6_dev_put_clear(&net->ipv6.ip6_blk_hole_entry->rt6i_idev);37283731#endif37293732 }37303733
+2-2
net/ipv6/tcp_ipv6.c
···14561456 }14571457 sock_hold(sk);14581458 refcounted = true;14591459+ if (tcp_filter(sk, skb))14601460+ goto discard_and_relse;14591461 nsk = tcp_check_req(sk, skb, req, false);14601462 if (!nsk) {14611463 reqsk_put(req);···14661464 if (nsk == sk) {14671465 reqsk_put(req);14681466 tcp_v6_restore_cb(skb);14691469- } else if (tcp_filter(sk, skb)) {14701470- goto discard_and_relse;14711467 } else if (tcp_child_process(sk, nsk, skb)) {14721468 tcp_v6_send_reset(nsk, skb);14731469 goto discard_and_relse;
···77 * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>88 * Copyright 2007, Michael Wu <flamingice@sourmilk.net>99 * Copyright 2007-2010, Intel Corporation1010- * Copyright(c) 2015 Intel Deutschland GmbH1010+ * Copyright(c) 2015-2017 Intel Deutschland GmbH1111 *1212 * This program is free software; you can redistribute it and/or modify1313 * it under the terms of the GNU General Public License version 2 as···466466 rcu_read_unlock();467467}468468EXPORT_SYMBOL(ieee80211_manage_rx_ba_offl);469469+470470+void ieee80211_rx_ba_timer_expired(struct ieee80211_vif *vif,471471+ const u8 *addr, unsigned int tid)472472+{473473+ struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);474474+ struct ieee80211_local *local = sdata->local;475475+ struct sta_info *sta;476476+477477+ rcu_read_lock();478478+ sta = sta_info_get_bss(sdata, addr);479479+ if (!sta)480480+ goto unlock;481481+482482+ set_bit(tid, sta->ampdu_mlme.tid_rx_timer_expired);483483+ ieee80211_queue_work(&local->hw, &sta->ampdu_mlme.work);484484+485485+ unlock:486486+ rcu_read_unlock();487487+}488488+EXPORT_SYMBOL(ieee80211_rx_ba_timer_expired);
···381381}382382383383static size_t upcall_msg_size(const struct dp_upcall_info *upcall_info,384384- unsigned int hdrlen)384384+ unsigned int hdrlen, int actions_attrlen)385385{386386 size_t size = NLMSG_ALIGN(sizeof(struct ovs_header))387387 + nla_total_size(hdrlen) /* OVS_PACKET_ATTR_PACKET */···398398399399 /* OVS_PACKET_ATTR_ACTIONS */400400 if (upcall_info->actions_len)401401- size += nla_total_size(upcall_info->actions_len);401401+ size += nla_total_size(actions_attrlen);402402403403 /* OVS_PACKET_ATTR_MRU */404404 if (upcall_info->mru)···465465 else466466 hlen = skb->len;467467468468- len = upcall_msg_size(upcall_info, hlen - cutlen);468468+ len = upcall_msg_size(upcall_info, hlen - cutlen,469469+ OVS_CB(skb)->acts_origlen);469470 user_skb = genlmsg_new(len, GFP_ATOMIC);470471 if (!user_skb) {471472 err = -ENOMEM;
+2
net/openvswitch/datapath.h
···9999 * when a packet is received by OVS.100100 * @mru: The maximum received fragment size; 0 if the packet is not101101 * fragmented.102102+ * @acts_origlen: The netlink size of the flow actions applied to this skb.102103 * @cutlen: The number of bytes from the packet end to be removed.103104 */104105struct ovs_skb_cb {105106 struct vport *input_vport;106107 u16 mru;108108+ u16 acts_origlen;107109 u32 cutlen;108110};109111#define OVS_CB(skb) ((struct ovs_skb_cb *)(skb)->cb)
···14311431 * be bound to classes which have been destroyed already. --TGR '0414321432 */14331433 for (h = 0; h < q->clhash.hashsize; h++) {14341434- hlist_for_each_entry(cl, &q->clhash.hash[h], common.hnode)14341434+ hlist_for_each_entry(cl, &q->clhash.hash[h], common.hnode) {14351435 tcf_block_put(cl->block);14361436+ cl->block = NULL;14371437+ }14361438 }14371439 for (h = 0; h < q->clhash.hashsize; h++) {14381440 hlist_for_each_entry_safe(cl, next, &q->clhash.hash[h],
+11-1
net/sched/sch_hfsc.c
···14281428 return err;14291429 q->eligible = RB_ROOT;1430143014311431+ err = tcf_block_get(&q->root.block, &q->root.filter_list);14321432+ if (err)14331433+ goto err_tcf;14341434+14311435 q->root.cl_common.classid = sch->handle;14321436 q->root.refcnt = 1;14331437 q->root.sched = q;···14511447 qdisc_watchdog_init(&q->watchdog, sch);1452144814531449 return 0;14501450+14511451+err_tcf:14521452+ qdisc_class_hash_destroy(&q->clhash);14531453+ return err;14541454}1455145514561456static int···15301522 unsigned int i;1531152315321524 for (i = 0; i < q->clhash.hashsize; i++) {15331533- hlist_for_each_entry(cl, &q->clhash.hash[i], cl_common.hnode)15251525+ hlist_for_each_entry(cl, &q->clhash.hash[i], cl_common.hnode) {15341526 tcf_block_put(cl->block);15271527+ cl->block = NULL;15281528+ }15351529 }15361530 for (i = 0; i < q->clhash.hashsize; i++) {15371531 hlist_for_each_entry_safe(cl, next, &q->clhash.hash[i],
+3-1
net/sched/sch_htb.c
···12581258 tcf_block_put(q->block);1259125912601260 for (i = 0; i < q->clhash.hashsize; i++) {12611261- hlist_for_each_entry(cl, &q->clhash.hash[i], common.hnode)12611261+ hlist_for_each_entry(cl, &q->clhash.hash[i], common.hnode) {12621262 tcf_block_put(cl->block);12631263+ cl->block = NULL;12641264+ }12631265 }12641266 for (i = 0; i < q->clhash.hashsize; i++) {12651267 hlist_for_each_entry_safe(cl, next, &q->clhash.hash[i],
+4-1
net/sched/sch_sfq.c
···437437 qdisc_drop(head, sch, to_free);438438439439 slot_queue_add(slot, skb);440440+ qdisc_tree_reduce_backlog(sch, 0, delta);440441 return NET_XMIT_CN;441442 }442443···469468 /* Return Congestion Notification only if we dropped a packet470469 * from this flow.471470 */472472- if (qlen != slot->qlen)471471+ if (qlen != slot->qlen) {472472+ qdisc_tree_reduce_backlog(sch, 0, dropped - qdisc_pkt_len(skb));473473 return NET_XMIT_CN;474474+ }474475475476 /* As we dropped a packet, better let upper stack know this */476477 qdisc_tree_reduce_backlog(sch, 1, dropped);
···421421 dprintk("svc: socket %p(inet %p), busy=%d\n",422422 svsk, sk,423423 test_bit(XPT_BUSY, &svsk->sk_xprt.xpt_flags));424424+425425+ /* Refer to svc_setup_socket() for details. */426426+ rmb();424427 svsk->sk_odata(sk);425428 if (!test_and_set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags))426429 svc_xprt_enqueue(&svsk->sk_xprt);···440437 if (svsk) {441438 dprintk("svc: socket %p(inet %p), write_space busy=%d\n",442439 svsk, sk, test_bit(XPT_BUSY, &svsk->sk_xprt.xpt_flags));440440+441441+ /* Refer to svc_setup_socket() for details. */442442+ rmb();443443 svsk->sk_owspace(sk);444444 svc_xprt_enqueue(&svsk->sk_xprt);445445 }···766760 dprintk("svc: socket %p TCP (listen) state change %d\n",767761 sk, sk->sk_state);768762769769- if (svsk)763763+ if (svsk) {764764+ /* Refer to svc_setup_socket() for details. */765765+ rmb();770766 svsk->sk_odata(sk);767767+ }768768+771769 /*772770 * This callback may be called twice when a new connection773771 * is established as a child socket inherits everything···804794 if (!svsk)805795 printk("svc: socket %p: no user data\n", sk);806796 else {797797+ /* Refer to svc_setup_socket() for details. */798798+ rmb();807799 svsk->sk_ostate(sk);808800 if (sk->sk_state != TCP_ESTABLISHED) {809801 set_bit(XPT_CLOSE, &svsk->sk_xprt.xpt_flags);···13931381 return ERR_PTR(err);13941382 }1395138313961396- inet->sk_user_data = svsk;13971384 svsk->sk_sock = sock;13981385 svsk->sk_sk = inet;13991386 svsk->sk_ostate = inet->sk_state_change;14001387 svsk->sk_odata = inet->sk_data_ready;14011388 svsk->sk_owspace = inet->sk_write_space;13891389+ /*13901390+ * This barrier is necessary in order to prevent race condition13911391+ * with svc_data_ready(), svc_listen_data_ready() and others13921392+ * when calling callbacks above.13931393+ */13941394+ wmb();13951395+ inet->sk_user_data = svsk;1402139614031397 /* Initialize the socket */14041398 if (sock->type == SOCK_DGRAM)
···85858686# try-run8787# Usage: option = $(call try-run, $(CC)...-o "$$TMP",option-ok,otherwise)8888-# Exit code chooses option. "$$TMP" is can be used as temporary file and8989-# is automatically cleaned up.8888+# Exit code chooses option. "$$TMP" serves as a temporary file and is8989+# automatically cleaned up.9090try-run = $(shell set -e; \9191 TMP="$(TMPOUT).$$$$.tmp"; \9292 TMPO="$(TMPOUT).$$$$.o"; \···261261any-prereq = $(filter-out $(PHONY),$?) $(filter-out $(PHONY) $(wildcard $^),$^)262262263263# Execute command if command has changed or prerequisite(s) are updated.264264-#265264if_changed = $(if $(strip $(any-prereq) $(arg-check)), \266265 @set -e; \267266 $(echo-cmd) $(cmd_$(1)); \···314315 $(rule_$(1)), @:)315316316317###317317-# why - tell why a a target got build318318+# why - tell why a target got built318319# enabled by make V=2319320# Output (listed in the order they are checked):320321# (1) - due to target is PHONY
+2-2
scripts/Makefile.asm-generic
···11# include/asm-generic contains a lot of files that are used22# verbatim by several architectures.33#44-# This Makefile reads the file arch/$(SRCARCH)/include/asm/Kbuild44+# This Makefile reads the file arch/$(SRCARCH)/include/$(src)/Kbuild55# and for each file listed in this file with generic-y creates66-# a small wrapper file in $(obj) (arch/$(SRCARCH)/include/generated/asm)66+# a small wrapper file in $(obj) (arch/$(SRCARCH)/include/generated/$(src))7788kbuild-file := $(srctree)/arch/$(SRCARCH)/include/$(src)/Kbuild99-include $(kbuild-file)
+4-4
scripts/Makefile.build
···229229endif230230# Due to recursion, we must skip empty.o.231231# The empty.o file is created in the make process in order to determine232232-# the target endianness and word size. It is made before all other C233233-# files, including recordmcount.232232+# the target endianness and word size. It is made before all other C233233+# files, including recordmcount.234234sub_cmd_record_mcount = \235235 if [ $(@) != "scripts/mod/empty.o" ]; then \236236 $(objtree)/scripts/recordmcount $(RECORDMCOUNT_FLAGS) "$(@)"; \···245245 "$(LD)" "$(NM)" "$(RM)" "$(MV)" \246246 "$(if $(part-of-module),1,0)" "$(@)";247247recordmcount_source := $(srctree)/scripts/recordmcount.pl248248-endif248248+endif # BUILD_C_RECORDMCOUNT249249cmd_record_mcount = \250250 if [ "$(findstring $(CC_FLAGS_FTRACE),$(_c_flags))" = \251251 "$(CC_FLAGS_FTRACE)" ]; then \252252 $(sub_cmd_record_mcount) \253253 fi;254254-endif254254+endif # CONFIG_FTRACE_MCOUNT_RECORD255255256256ifdef CONFIG_STACK_VALIDATION257257ifneq ($(SKIP_STACK_VALIDATION),1)
···11###22-# Makefile.basic lists the most basic programs used during the build process.22+# This Makefile lists the most basic programs used during the build process.33# The programs listed herein are what are needed to do the basic stuff,44# such as fix file dependencies.55# This initial step is needed to avoid files to be recompiled
+3-3
scripts/basic/fixdep.c
···2525 *2626 * So we play the same trick that "mkdep" played before. We replace2727 * the dependency on autoconf.h by a dependency on every config2828- * option which is mentioned in any of the listed prequisites.2828+ * option which is mentioned in any of the listed prerequisites.2929 *3030 * kconfig populates a tree in include/config/ with an empty file3131 * for each config symbol and when the configuration is updated···3434 * the config symbols are rebuilt.3535 *3636 * So if the user changes his CONFIG_HIS_DRIVER option, only the objects3737- * which depend on "include/linux/config/his/driver.h" will be rebuilt,3737+ * which depend on "include/config/his/driver.h" will be rebuilt,3838 * so most likely only his driver ;-)3939 *4040 * The idea above dates, by the way, back to Michael E Chastain, AFAIK.···7575 * and then basically copies the .<target>.d file to stdout, in the7676 * process filtering out the dependency on autoconf.h and adding7777 * dependencies on include/config/my/option.h for every7878- * CONFIG_MY_OPTION encountered in any of the prequisites.7878+ * CONFIG_MY_OPTION encountered in any of the prerequisites.7979 *8080 * It will also filter out all the dependencies on *.ver. We need8181 * to make sure that the generated version checksum are globally up
···15021502static int snd_seq_ioctl_create_queue(struct snd_seq_client *client, void *arg)15031503{15041504 struct snd_seq_queue_info *info = arg;15051505- int result;15061505 struct snd_seq_queue *q;1507150615081508- result = snd_seq_queue_alloc(client->number, info->locked, info->flags);15091509- if (result < 0)15101510- return result;15111511-15121512- q = queueptr(result);15131513- if (q == NULL)15141514- return -EINVAL;15071507+ q = snd_seq_queue_alloc(client->number, info->locked, info->flags);15081508+ if (IS_ERR(q))15091509+ return PTR_ERR(q);1515151015161511 info->queue = q->queue;15171512 info->locked = q->locked;···15161521 if (!info->name[0])15171522 snprintf(info->name, sizeof(info->name), "Queue-%d", q->queue);15181523 strlcpy(q->name, info->name, sizeof(q->name));15191519- queuefree(q);15241524+ snd_use_lock_free(&q->use_lock);1520152515211526 return 0;15221527}
+9-5
sound/core/seq/seq_queue.c
···184184static void queue_use(struct snd_seq_queue *queue, int client, int use);185185186186/* allocate a new queue -187187- * return queue index value or negative value for error187187+ * return pointer to new queue or ERR_PTR(-errno) for error188188+ * The new queue's use_lock is set to 1. It is the caller's responsibility to189189+ * call snd_use_lock_free(&q->use_lock).188190 */189189-int snd_seq_queue_alloc(int client, int locked, unsigned int info_flags)191191+struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int info_flags)190192{191193 struct snd_seq_queue *q;192194193195 q = queue_new(client, locked);194196 if (q == NULL)195195- return -ENOMEM;197197+ return ERR_PTR(-ENOMEM);196198 q->info_flags = info_flags;197199 queue_use(q, client, 1);200200+ snd_use_lock_use(&q->use_lock);198201 if (queue_list_add(q) < 0) {202202+ snd_use_lock_free(&q->use_lock);199203 queue_delete(q);200200- return -ENOMEM;204204+ return ERR_PTR(-ENOMEM);201205 }202202- return q->queue;206206+ return q;203207}204208205209/* delete a queue - queue must be owned by the client */
+1-1
sound/core/seq/seq_queue.h
···717172727373/* create new queue (constructor) */7474-int snd_seq_queue_alloc(int client, int locked, unsigned int flags);7474+struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int flags);75757676/* delete queue (destructor) */7777int snd_seq_queue_delete(int client, int queueid);
+6-1
sound/firewire/iso-resources.c
···210210 */211211void fw_iso_resources_free(struct fw_iso_resources *r)212212{213213- struct fw_card *card = fw_parent_device(r->unit)->card;213213+ struct fw_card *card;214214 int bandwidth, channel;215215+216216+ /* Not initialized. */217217+ if (r->unit == NULL)218218+ return;219219+ card = fw_parent_device(r->unit)->card;215220216221 mutex_lock(&r->mutex);217222
···473473 echo " all Runs all tests (default)"474474 echo " -t Run test ID the number amount of times is recommended"475475 echo " -w Watch test ID run until it runs into an error"476476- echo " -c Run test ID once"477477- echo " -s Run test ID x test-count number of times"476476+ echo " -s Run test ID once"477477+ echo " -c Run test ID x test-count number of times"478478 echo " -l List all test ID list"479479 echo " -h|--help Help"480480 echo
+4
tools/testing/selftests/ntb/ntb_test.sh
···333333	link_test $LOCAL_TOOL $REMOTE_TOOL
334334	link_test $REMOTE_TOOL $LOCAL_TOOL
335335
336336+	# Ensure the link is up on both sides before continuing
337337+	write_file Y $LOCAL_TOOL/link_event
338338+	write_file Y $REMOTE_TOOL/link_event
339339+
336340	for PEER_TRANS in $(ls $LOCAL_TOOL/peer_trans*); do
337341		PT=$(basename $PEER_TRANS)
338342		write_file $MW_SIZE $LOCAL_TOOL/$PT