···
 bridge link set dev DEV learning on self
 bridge link set dev DEV learning_sync on self

-Learning_sync attribute enables syncing of the learned/forgotton FDB entry to
+Learning_sync attribute enables syncing of the learned/forgotten FDB entry to
 the bridge's FDB. It's possible, but not optimal, to enable learning on the
 device port and on the bridge port, and disable learning_sync.
···
 port device supports ageing, when the FDB entry expires, it will notify the
 driver which in turn will notify the bridge with SWITCHDEV_FDB_DEL. If the
 device does not support ageing, the driver can simulate ageing using a
-garbage collection timer to monitor FBD entries. Expired entries will be
+garbage collection timer to monitor FDB entries. Expired entries will be
 notified to the bridge using SWITCHDEV_FDB_DEL. See rocker driver for
 example of driver running ageing timer.
+11-8
Documentation/printk-formats.txt
···
 	%ps	versatile_init
 	%pB	prev_fn_of_versatile_init+0x88/0x88

-For printing symbols and function pointers. The ``S`` and ``s`` specifiers
-result in the symbol name with (``S``) or without (``s``) offsets. Where
-this is used on a kernel without KALLSYMS - the symbol address is
-printed instead.
+The ``F`` and ``f`` specifiers are for printing function pointers,
+for example, f->func, &gettimeofday. They have the same result as
+``S`` and ``s`` specifiers. But they do an extra conversion on
+ia64, ppc64 and parisc64 architectures where the function pointers
+are actually function descriptors.
+
+The ``S`` and ``s`` specifiers can be used for printing symbols
+from direct addresses, for example, __builtin_return_address(0),
+(void *)regs->ip. They result in the symbol name with (``S``) or
+without (``s``) offsets. If KALLSYMS are disabled then the symbol
+address is printed instead.

 The ``B`` specifier results in the symbol name with offsets and should be
 used when printing stack backtraces. The specifier takes into
 consideration the effect of compiler optimisations which may occur
 when tail-call``s are used and marked with the noreturn GCC attribute.

-On ia64, ppc64 and parisc64 architectures function pointers are
-actually function descriptors which must first be resolved. The ``F`` and
-``f`` specifiers perform this resolution and then provide the same
-functionality as the ``S`` and ``s`` specifiers.

 Kernel Pointers
 ===============
+36-11
Documentation/sysctl/net.txt
···
bpf_jit_enable
--------------

-This enables Berkeley Packet Filter Just in Time compiler.
-Currently supported on x86_64 architecture, bpf_jit provides a framework
-to speed packet filtering, the one used by tcpdump/libpcap for example.
+This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
+and efficient infrastructure allowing to execute bytecode at various
+hook points. It is used in a number of Linux kernel subsystems such
+as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
+and security (e.g. seccomp). LLVM has a BPF back end that can compile
+restricted C into a sequence of BPF instructions. After program load
+through bpf(2) and passing a verifier in the kernel, a JIT will then
+translate these BPF proglets into native CPU instructions. There are
+two flavors of JITs, the newer eBPF JIT currently supported on:
+  - x86_64
+  - arm64
+  - ppc64
+  - sparc64
+  - mips64
+  - s390x
+
+And the older cBPF JIT supported on the following archs:
+  - arm
+  - mips
+  - ppc
+  - sparc
+
+eBPF JITs are a superset of cBPF JITs, meaning the kernel will
+migrate cBPF instructions into eBPF instructions and then JIT
+compile them transparently. Older cBPF JITs can only translate
+tcpdump filters, seccomp rules, etc, but not mentioned eBPF
+programs loaded through bpf(2).
+
 Values :
 	0 - disable the JIT (default value)
 	1 - enable the JIT
···
bpf_jit_harden
--------------

-This enables hardening for the Berkeley Packet Filter Just in Time compiler.
-Supported are eBPF JIT backends. Enabling hardening trades off performance,
-but can mitigate JIT spraying.
+This enables hardening for the BPF JIT compiler. Supported are eBPF
+JIT backends. Enabling hardening trades off performance, but can
+mitigate JIT spraying.
 Values :
 	0 - disable JIT hardening (default value)
 	1 - enable JIT hardening for unprivileged users only
···
bpf_jit_kallsyms
----------------

-When Berkeley Packet Filter Just in Time compiler is enabled, then compiled
-images are unknown addresses to the kernel, meaning they neither show up in
-traces nor in /proc/kallsyms. This enables export of these addresses, which
-can be used for debugging/tracing. If bpf_jit_harden is enabled, this feature
-is disabled.
+When BPF JIT compiler is enabled, then compiled images are unknown
+addresses to the kernel, meaning they neither show up in traces nor
+in /proc/kallsyms. This enables export of these addresses, which can
+be used for debugging/tracing. If bpf_jit_harden is enabled, this
+feature is disabled.
 Values :
 	0 - disable JIT kallsyms export (default value)
 	1 - enable JIT kallsyms export for privileged users only
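For reference, the three knobs documented in this hunk can be made persistent through the standard sysctl configuration mechanism. A minimal illustrative fragment; the file name is an arbitrary choice and not part of the patch:

```
# /etc/sysctl.d/99-bpf-jit.conf  (illustrative example)
net.core.bpf_jit_enable = 1
net.core.bpf_jit_harden = 1
net.core.bpf_jit_kallsyms = 1
```

Note that, as the hunk above states, enabling bpf_jit_harden disables the kallsyms export, so setting both to 1 effectively leaves JIT symbols unexported.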
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs"
 CONFIG_PREEMPT=y
-1
arch/arc/configs/haps_hs_smp_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 CONFIG_ARC_BUILTIN_DTB_NAME="haps_hs_idu"
-1
arch/arc/configs/nps_defconfig
···
 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
 # CONFIG_INET_DIAG is not set
 # CONFIG_IPV6 is not set
 # CONFIG_WIRELESS is not set
-1
arch/arc/configs/nsim_700_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_700"
 CONFIG_PREEMPT=y
 # CONFIG_COMPACTION is not set
-1
arch/arc/configs/nsim_hs_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs"
 CONFIG_PREEMPT=y
-1
arch/arc/configs/nsim_hs_smp_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsim_hs_idu"
-1
arch/arc/configs/nsimosci_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci"
 # CONFIG_COMPACTION is not set
 CONFIG_NET=y
-1
arch/arc/configs/nsimosci_hs_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs"
 # CONFIG_COMPACTION is not set
-1
arch/arc/configs/nsimosci_hs_smp_defconfig
···
 # CONFIG_BLK_DEV_BSG is not set
 # CONFIG_IOSCHED_DEADLINE is not set
 # CONFIG_IOSCHED_CFQ is not set
-CONFIG_ARC_PLAT_SIM=y
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 # CONFIG_ARC_TIMERS_64BIT is not set
-1
arch/arc/configs/tb10x_defconfig
···
 # CONFIG_INET_XFRM_MODE_TRANSPORT is not set
 # CONFIG_INET_XFRM_MODE_TUNNEL is not set
 # CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
 # CONFIG_INET_DIAG is not set
 # CONFIG_IPV6 is not set
 # CONFIG_WIRELESS is not set
+2
arch/arc/include/asm/cache.h
···
 #define ARC_REG_SLC_FLUSH	0x904
 #define ARC_REG_SLC_INVALIDATE	0x905
 #define ARC_REG_SLC_RGN_START	0x914
+#define ARC_REG_SLC_RGN_START1	0x915
 #define ARC_REG_SLC_RGN_END	0x916
+#define ARC_REG_SLC_RGN_END1	0x917

 /* Bit val in SLC_CONTROL */
 #define SLC_CTRL_DIS		0x001
+2
arch/arc/include/asm/mmu.h
···
 	return IS_ENABLED(CONFIG_ARC_HAS_PAE40);
 }

+extern int pae40_exist_but_not_enab(void);
+
 #endif	/* !__ASSEMBLY__ */

 #endif
+3
arch/arc/kernel/intc-arcv2.c
···
 	 * Set a default priority for all available interrupts to prevent
 	 * switching of register banks if Fast IRQ and multiple register banks
 	 * are supported by CPU.
+	 * Also disable all IRQ lines so faulty external hardware won't
+	 * trigger interrupt that kernel is not ready to handle.
 	 */
 	for (i = NR_EXCEPTIONS; i < irq_bcr.irqs + NR_EXCEPTIONS; i++) {
 		write_aux_reg(AUX_IRQ_SELECT, i);
 		write_aux_reg(AUX_IRQ_PRIORITY, ARCV2_IRQ_DEF_PRIO);
+		write_aux_reg(AUX_IRQ_ENABLE, 0);
 	}

 	/* setup status32, don't enable intr yet as kernel doesn't want */
+13-1
arch/arc/kernel/intc-compact.c
···
  */
 void arc_init_IRQ(void)
 {
-	int level_mask = 0;
+	int level_mask = 0, i;

 	/* Is timer high priority Interrupt (Level2 in ARCompact jargon) */
 	level_mask |= IS_ENABLED(CONFIG_ARC_COMPACT_IRQ_LEVELS) << TIMER0_IRQ;
···

 	if (level_mask)
 		pr_info("Level-2 interrupts bitset %x\n", level_mask);
+
+	/*
+	 * Disable all IRQ lines so faulty external hardware won't
+	 * trigger interrupt that kernel is not ready to handle.
+	 */
+	for (i = TIMER0_IRQ; i < NR_CPU_IRQS; i++) {
+		unsigned int ienb;
+
+		ienb = read_aux_reg(AUX_IENABLE);
+		ienb &= ~(1 << i);
+		write_aux_reg(AUX_IENABLE, ienb);
+	}
 }

 /*
+42-8
arch/arc/mm/cache.c
···
 	static DEFINE_SPINLOCK(lock);
 	unsigned long flags;
 	unsigned int ctrl;
+	phys_addr_t end;

 	spin_lock_irqsave(&lock, flags);
···
 	 * END needs to be setup before START (latter triggers the operation)
 	 * END can't be same as START, so add (l2_line_sz - 1) to sz
 	 */
-	write_aux_reg(ARC_REG_SLC_RGN_END, (paddr + sz + l2_line_sz - 1));
-	write_aux_reg(ARC_REG_SLC_RGN_START, paddr);
+	end = paddr + sz + l2_line_sz - 1;
+	if (is_pae40_enabled())
+		write_aux_reg(ARC_REG_SLC_RGN_END1, upper_32_bits(end));
+
+	write_aux_reg(ARC_REG_SLC_RGN_END, lower_32_bits(end));
+
+	if (is_pae40_enabled())
+		write_aux_reg(ARC_REG_SLC_RGN_START1, upper_32_bits(paddr));
+
+	write_aux_reg(ARC_REG_SLC_RGN_START, lower_32_bits(paddr));
+
+	/* Make sure "busy" bit reports correct stataus, see STAR 9001165532 */
+	read_aux_reg(ARC_REG_SLC_CTRL);

 	while (read_aux_reg(ARC_REG_SLC_CTRL) & SLC_CTRL_BUSY);
···
 	__dc_enable();
 }

+/*
+ * Cache related boot time checks/setups only needed on master CPU:
+ *  - Geometry checks (kernel build and hardware agree: e.g. L1_CACHE_BYTES)
+ *    Assume SMP only, so all cores will have same cache config. A check on
+ *    one core suffices for all
+ *  - IOC setup / dma callbacks only need to be done once
+ */
 void __init arc_cache_init_master(void)
 {
 	unsigned int __maybe_unused cpu = smp_processor_id();
···
 	printk(arc_cache_mumbojumbo(0, str, sizeof(str)));

-	/*
-	 * Only master CPU needs to execute rest of function:
-	 *  - Assume SMP so all cores will have same cache config so
-	 *    any geomtry checks will be same for all
-	 *  - IOC setup / dma callbacks only need to be setup once
-	 */
 	if (!cpu)
 		arc_cache_init_master();
+
+	/*
+	 * In PAE regime, TLB and cache maintenance ops take wider addresses
+	 * And even if PAE is not enabled in kernel, the upper 32-bits still need
+	 * to be zeroed to keep the ops sane.
+	 * As an optimization for more common !PAE enabled case, zero them out
+	 * once at init, rather than checking/setting to 0 for every runtime op
+	 */
+	if (is_isa_arcv2() && pae40_exist_but_not_enab()) {
+
+		if (IS_ENABLED(CONFIG_ARC_HAS_ICACHE))
+			write_aux_reg(ARC_REG_IC_PTAG_HI, 0);
+
+		if (IS_ENABLED(CONFIG_ARC_HAS_DCACHE))
+			write_aux_reg(ARC_REG_DC_PTAG_HI, 0);
+
+		if (l2_line_sz) {
+			write_aux_reg(ARC_REG_SLC_RGN_END1, 0);
+			write_aux_reg(ARC_REG_SLC_RGN_START1, 0);
+		}
+	}
 }
+45
arch/arc/mm/dma.c
···
 	}
 }

+/*
+ * arc_dma_map_page - map a portion of a page for streaming DMA
+ *
+ * Ensure that any data held in the cache is appropriately discarded
+ * or written back.
+ *
+ * The device owns this memory once this call has completed.  The CPU
+ * can regain ownership by calling dma_unmap_page().
+ *
+ * Note: while it takes struct page as arg, caller can "abuse" it to pass
+ * a region larger than PAGE_SIZE, provided it is physically contiguous
+ * and this still works correctly
+ */
 static dma_addr_t arc_dma_map_page(struct device *dev, struct page *page,
 		unsigned long offset, size_t size, enum dma_data_direction dir,
 		unsigned long attrs)
···
 		_dma_cache_sync(paddr, size, dir);

 	return plat_phys_to_dma(dev, paddr);
+}
+
+/*
+ * arc_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
+ *
+ * After this call, reads by the CPU to the buffer are guaranteed to see
+ * whatever the device wrote there.
+ *
+ * Note: historically this routine was not implemented for ARC
+ */
+static void arc_dma_unmap_page(struct device *dev, dma_addr_t handle,
+			       size_t size, enum dma_data_direction dir,
+			       unsigned long attrs)
+{
+	phys_addr_t paddr = plat_dma_to_phys(dev, handle);
+
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		_dma_cache_sync(paddr, size, dir);
 }

 static int arc_dma_map_sg(struct device *dev, struct scatterlist *sg,
···
 			   s->length, dir);

 	return nents;
+}
+
+static void arc_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
+			     int nents, enum dma_data_direction dir,
+			     unsigned long attrs)
+{
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sg, s, nents, i)
+		arc_dma_unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir,
+				   attrs);
 }

 static void arc_dma_sync_single_for_cpu(struct device *dev,
···
 	.free			= arc_dma_free,
 	.mmap			= arc_dma_mmap,
 	.map_page		= arc_dma_map_page,
+	.unmap_page		= arc_dma_unmap_page,
 	.map_sg			= arc_dma_map_sg,
+	.unmap_sg		= arc_dma_unmap_sg,
 	.sync_single_for_device	= arc_dma_sync_single_for_device,
 	.sync_single_for_cpu	= arc_dma_sync_single_for_cpu,
 	.sync_sg_for_cpu	= arc_dma_sync_sg_for_cpu,
+11-1
arch/arc/mm/tlb.c
···
 /* A copy of the ASID from the PID reg is kept in asid_cache */
 DEFINE_PER_CPU(unsigned int, asid_cache) = MM_CTXT_FIRST_CYCLE;

+static int __read_mostly pae_exists;
+
 /*
  * Utility Routine to erase a J-TLB entry
  * Caller needs to setup Index Reg (manually or via getIndex)
···
 		mmu->u_dtlb = mmu4->u_dtlb * 4;
 		mmu->u_itlb = mmu4->u_itlb * 4;
 		mmu->sasid = mmu4->sasid;
-		mmu->pae = mmu4->pae;
+		pae_exists = mmu->pae = mmu4->pae;
 	}
 }
···
 		  IS_AVAIL2(p_mmu->pae, ", PAE40 ", CONFIG_ARC_HAS_PAE40));

 	return buf;
+}
+
+int pae40_exist_but_not_enab(void)
+{
+	return pae_exists && !is_pae40_enabled();
 }

 void arc_mmu_init(void)
···
 	/* swapper_pg_dir is the pgd for the kernel, used by vmalloc */
 	write_aux_reg(ARC_REG_SCRATCH_DATA0, swapper_pg_dir);
 #endif
+
+	if (pae40_exist_but_not_enab())
+		write_aux_reg(ARC_REG_TLBPD1HI, 0);
 }

 /*
-13
arch/arc/plat-sim/Kconfig
···
-#
-# Copyright (C) 2007-2010, 2011-2012 Synopsys, Inc. (www.synopsys.com)
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License version 2 as
-# published by the Free Software Foundation.
-#
-
-menuconfig ARC_PLAT_SIM
-	bool "ARC nSIM based simulation virtual platforms"
-	help
-	  Support for nSIM based ARC simulation platforms
-	  This includes the standalone nSIM (uart only) vs. System C OSCI VP
···
 		stdout-path = "serial0:115200n8";
 	};

-	audio_clkout: audio_clkout {
+	audio_clkout: audio-clkout {
 		/*
 		 * This is same as <&rcar_sound 0>
 		 * but needed to avoid cs2000/rcar_sound probe dead-lock
···
 /*
  * This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
-#define ELF_ET_DYN_BASE	0x100000000UL
+#define ELF_ET_DYN_BASE	(2 * TASK_SIZE_64 / 3)

 #ifndef __ASSEMBLY__
+1-1
arch/powerpc/Kconfig
···
 	select HAVE_OPTPROBES			if PPC64
 	select HAVE_PERF_EVENTS
 	select HAVE_PERF_EVENTS_NMI		if PPC64
-	select HAVE_HARDLOCKUP_DETECTOR_PERF	if HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
+	select HAVE_HARDLOCKUP_DETECTOR_PERF	if PERF_EVENTS && HAVE_PERF_EVENTS_NMI && !HAVE_HARDLOCKUP_DETECTOR_ARCH
 	select HAVE_PERF_REGS
 	select HAVE_PERF_USER_STACK_DUMP
 	select HAVE_RCU_TABLE_FREE		if SMP
+3-2
arch/powerpc/kernel/process.c
···
 	cpumsr = msr_check_and_set(MSR_FP|MSR_VEC|MSR_VSX);

-	if (current->thread.regs && (current->thread.regs->msr & MSR_VSX)) {
+	if (current->thread.regs &&
+	    (current->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP))) {
 		check_if_tm_restore_required(current);
 		/*
 		 * If a thread has already been reclaimed then the
···
 {
 	if (tsk->thread.regs) {
 		preempt_disable();
-		if (tsk->thread.regs->msr & MSR_VSX) {
+		if (tsk->thread.regs->msr & (MSR_VSX|MSR_VEC|MSR_FP)) {
 			BUG_ON(tsk != current);
 			giveup_vsx(tsk);
 		}
···
 	load_mm_cr4(this_cpu_read(cpu_tlbstate.loaded_mm));
 }

-static void x86_pmu_event_mapped(struct perf_event *event)
+static void x86_pmu_event_mapped(struct perf_event *event, struct mm_struct *mm)
 {
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;
···
 	 * For now, this can't happen because all callers hold mmap_sem
 	 * for write.  If this changes, we'll need a different solution.
 	 */
-	lockdep_assert_held_exclusive(&current->mm->mmap_sem);
+	lockdep_assert_held_exclusive(&mm->mmap_sem);

-	if (atomic_inc_return(&current->mm->context.perf_rdpmc_allowed) == 1)
-		on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
+	if (atomic_inc_return(&mm->context.perf_rdpmc_allowed) == 1)
+		on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1);
 }

-static void x86_pmu_event_unmapped(struct perf_event *event)
+static void x86_pmu_event_unmapped(struct perf_event *event, struct mm_struct *mm)
 {
-	if (!current->mm)
-		return;
-
 	if (!(event->hw.flags & PERF_X86_EVENT_RDPMC_ALLOWED))
 		return;

-	if (atomic_dec_and_test(&current->mm->context.perf_rdpmc_allowed))
-		on_each_cpu_mask(mm_cpumask(current->mm), refresh_pce, NULL, 1);
+	if (atomic_dec_and_test(&mm->context.perf_rdpmc_allowed))
+		on_each_cpu_mask(mm_cpumask(mm), refresh_pce, NULL, 1);
 }

 static int x86_pmu_event_idx(struct perf_event *event)
···
  * P4_CONFIG_ALIASABLE or bits for P4_PEBS_METRIC, they are
  * either up to date automatically or not applicable at all.
  */
-struct p4_event_alias {
+static struct p4_event_alias {
 	u64 original;
 	u64 alternative;
 } p4_event_aliases[] = {
···
 /*
  * This is the base location for PIE (ET_DYN with INTERP) loads. On
- * 64-bit, this is raised to 4GB to leave the entire 32-bit address
+ * 64-bit, this is above 4GB to leave the entire 32-bit address
  * space open for things that want to use the area for 32-bit pointers.
  */
 #define ELF_ET_DYN_BASE		(mmap_is_ia32() ? 0x000400000UL : \
-						  0x100000000UL)
+						  (TASK_SIZE / 3 * 2))

 /* This yields a mask that user programs can use to figure out what
    instruction set this CPU supports. This could be done in user space,
+3
arch/x86/kernel/cpu/aperfmperf.c
···
 	struct aperfmperf_sample *s = this_cpu_ptr(&samples);
 	ktime_t now = ktime_get();
 	s64 time_delta = ktime_ms_delta(now, s->time);
+	unsigned long flags;

 	/* Don't bother re-computing within the cache threshold time. */
 	if (time_delta < APERFMPERF_CACHE_THRESHOLD_MS)
 		return;

+	local_irq_save(flags);
 	rdmsrl(MSR_IA32_APERF, aperf);
 	rdmsrl(MSR_IA32_MPERF, mperf);
+	local_irq_restore(flags);

 	aperf_delta = aperf - s->aperf;
 	mperf_delta = mperf - s->mperf;
···
 	stop_machine(mtrr_rendezvous_handler, &data, cpu_online_mask);
 }

+static void set_mtrr_cpuslocked(unsigned int reg, unsigned long base,
+				unsigned long size, mtrr_type type)
+{
+	struct set_mtrr_data data = { .smp_reg = reg,
+				      .smp_base = base,
+				      .smp_size = size,
+				      .smp_type = type
+				    };
+
+	stop_machine_cpuslocked(mtrr_rendezvous_handler, &data, cpu_online_mask);
+}
+
 static void set_mtrr_from_inactive_cpu(unsigned int reg, unsigned long base,
 				       unsigned long size, mtrr_type type)
 {
···
 	/* Search for an empty MTRR */
 	i = mtrr_if->get_free_region(base, size, replace);
 	if (i >= 0) {
-		set_mtrr(i, base, size, type);
+		set_mtrr_cpuslocked(i, base, size, type);
 		if (likely(replace < 0)) {
 			mtrr_usage_table[i] = 1;
 		} else {
···
 			if (increment)
 				mtrr_usage_table[i]++;
 			if (unlikely(replace != i)) {
-				set_mtrr(replace, 0, 0, 0);
+				set_mtrr_cpuslocked(replace, 0, 0, 0);
 				mtrr_usage_table[replace] = 0;
 			}
 		}
···
 		goto out;
 	}
 	if (--mtrr_usage_table[reg] < 1)
-		set_mtrr(reg, 0, 0, 0);
+		set_mtrr_cpuslocked(reg, 0, 0, 0);
 	error = reg;
 out:
 	mutex_unlock(&mtrr_mutex);
+4-3
arch/x86/kernel/head64.c
···
 	pudval_t *pud;
 	pmdval_t *pmd, pmd_entry;
 	int i;
+	unsigned int *next_pgt_ptr;

 	/* Is the address too large? */
 	if (physaddr >> MAX_PHYSMEM_BITS)
···
 	 * creates a bunch of nonsense entries but that is fine --
 	 * it avoids problems around wraparound.
 	 */
-
-	pud = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
-	pmd = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
+	next_pgt_ptr = fixup_pointer(&next_early_pgt, physaddr);
+	pud = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr);
+	pmd = fixup_pointer(early_dynamic_pgts[(*next_pgt_ptr)++], physaddr);

 	if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
 		p4d = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
···
  * Returns zero if CPU booted OK, else error code from
  * ->wakeup_secondary_cpu.
  */
-static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle)
+static int do_boot_cpu(int apicid, int cpu, struct task_struct *idle,
+		       int *cpu0_nmi_registered)
 {
 	volatile u32 *trampoline_status =
 		(volatile u32 *) __va(real_mode_header->trampoline_status);
···
 	unsigned long start_ip = real_mode_header->trampoline_start;

 	unsigned long boot_error = 0;
-	int cpu0_nmi_registered = 0;
 	unsigned long timeout;

 	idle->thread.sp = (unsigned long)task_pt_regs(idle);
···
 		boot_error = apic->wakeup_secondary_cpu(apicid, start_ip);
 	else
 		boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
-						     &cpu0_nmi_registered);
+						     cpu0_nmi_registered);

 	if (!boot_error) {
 		/*
···
 		 */
 		smpboot_restore_warm_reset_vector();
 	}
-	/*
-	 * Clean up the nmi handler. Do this after the callin and callout sync
-	 * to avoid impact of possible long unregister time.
-	 */
-	if (cpu0_nmi_registered)
-		unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");

 	return boot_error;
 }
···
 int native_cpu_up(unsigned int cpu, struct task_struct *tidle)
 {
 	int apicid = apic->cpu_present_to_apicid(cpu);
+	int cpu0_nmi_registered = 0;
 	unsigned long flags;
-	int err;
+	int err, ret = 0;

 	WARN_ON(irqs_disabled());
···
 	common_cpu_up(cpu, tidle);

-	err = do_boot_cpu(apicid, cpu, tidle);
+	err = do_boot_cpu(apicid, cpu, tidle, &cpu0_nmi_registered);
 	if (err) {
 		pr_err("do_boot_cpu failed(%d) to wakeup CPU#%u\n", err, cpu);
-		return -EIO;
+		ret = -EIO;
+		goto unreg_nmi;
 	}

 	/*
···
 		touch_nmi_watchdog();
 	}

-	return 0;
+unreg_nmi:
+	/*
+	 * Clean up the nmi handler. Do this after the callin and callout sync
+	 * to avoid impact of possible long unregister time.
+	 */
+	if (cpu0_nmi_registered)
+		unregister_nmi_handler(NMI_LOCAL, "wake_cpu0");
+
+	return ret;
 }

 /**
+1-1
arch/x86/kvm/svm.c
···
 	if (vls) {
 		if (!npt_enabled ||
-		    !boot_cpu_has(X86_FEATURE_VIRTUAL_VMLOAD_VMSAVE) ||
+		    !boot_cpu_has(X86_FEATURE_V_VMSAVE_VMLOAD) ||
 		    !IS_ENABLED(CONFIG_X86_64)) {
 			vls = false;
 		} else {
+3-4
arch/x86/mm/mmap.c
···
 static unsigned long stack_maxrandom_size(unsigned long task_size)
 {
 	unsigned long max = 0;
-	if ((current->flags & PF_RANDOMIZE) &&
-		!(current->personality & ADDR_NO_RANDOMIZE)) {
+	if (current->flags & PF_RANDOMIZE) {
 		max = (-1UL) & __STACK_RND_MASK(task_size == tasksize_32bit());
 		max <<= PAGE_SHIFT;
 	}
···

 static unsigned long arch_rnd(unsigned int rndbits)
 {
+	if (!(current->flags & PF_RANDOMIZE))
+		return 0;
 	return (get_random_long() & ((1UL << rndbits) - 1)) << PAGE_SHIFT;
 }

 unsigned long arch_mmap_rnd(void)
 {
-	if (!(current->flags & PF_RANDOMIZE))
-		return 0;
 	return arch_rnd(mmap_is_ia32() ? mmap32_rnd_bits : mmap64_rnd_bits);
 }
+2-2
arch/x86/platform/uv/tlb_uv.c
···
 static struct bau_operations ops __ro_after_init;

 /* timeouts in nanoseconds (indexed by UVH_AGING_PRESCALE_SEL urgency7 30:28) */
-static int timeout_base_ns[] = {
+static const int timeout_base_ns[] = {
 		20,
 		160,
 		1280,
···
  * set a bit in the UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE register.
  * Such a message must be ignored.
  */
-void process_uv2_message(struct msg_desc *mdp, struct bau_control *bcp)
+static void process_uv2_message(struct msg_desc *mdp, struct bau_control *bcp)
 {
 	unsigned long mmr_image;
 	unsigned char swack_vec;
···
 			/*
 			 * Get the bios in the request so we can re-queue them.
 			 */
-			if (req_op(shadow[i].request) == REQ_OP_FLUSH ||
-			    req_op(shadow[i].request) == REQ_OP_DISCARD ||
-			    req_op(shadow[i].request) == REQ_OP_SECURE_ERASE ||
+			if (req_op(shadow[j].request) == REQ_OP_FLUSH ||
+			    req_op(shadow[j].request) == REQ_OP_DISCARD ||
+			    req_op(shadow[j].request) == REQ_OP_SECURE_ERASE ||
 			    shadow[j].request->cmd_flags & REQ_FUA) {
 				/*
 				 * Flush operations don't contain bios, so
+1-1
drivers/clocksource/Kconfig
···
 config CLKSRC_PISTACHIO
 	bool "Clocksource for Pistachio SoC" if COMPILE_TEST
-	depends on HAS_IOMEM
+	depends on GENERIC_CLOCKEVENTS && HAS_IOMEM
 	select TIMER_OF
 	help
 	  Enables the clocksource for the Pistachio SoC.
+1-1
drivers/clocksource/arm_arch_timer.c
···
 	 * While unlikely, it's theoretically possible that none of the frames
 	 * in a timer expose the combination of feature we want.
 	 */
-	for (i = i; i < timer_count; i++) {
+	for (i = 0; i < timer_count; i++) {
 		timer = &timers[i];

 		frame = arch_timer_mem_find_best_frame(timer);
+6-5
drivers/clocksource/em_sti.c
···
 	irq = platform_get_irq(pdev, 0);
 	if (irq < 0) {
 		dev_err(&pdev->dev, "failed to get irq\n");
-		return -EINVAL;
+		return irq;
 	}

 	/* map memory, let base point to the STI instance */
···
 	if (IS_ERR(p->base))
 		return PTR_ERR(p->base);

-	if (devm_request_irq(&pdev->dev, irq, em_sti_interrupt,
-			     IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING,
-			     dev_name(&pdev->dev), p)) {
+	ret = devm_request_irq(&pdev->dev, irq, em_sti_interrupt,
+			       IRQF_TIMER | IRQF_IRQPOLL | IRQF_NOBALANCING,
+			       dev_name(&pdev->dev), p);
+	if (ret) {
 		dev_err(&pdev->dev, "failed to request low IRQ\n");
-		return -ENOENT;
+		return ret;
 	}

 	/* get hold of clock */
···
 		struct dma_fence *f = e->fence;
 		struct amd_sched_fence *s_fence = to_amd_sched_fence(f);

+		if (dma_fence_is_signaled(f)) {
+			hash_del(&e->node);
+			dma_fence_put(f);
+			kmem_cache_free(amdgpu_sync_slab, e);
+			continue;
+		}
 		if (ring && s_fence) {
 			/* For fences from the same ring it is sufficient
 			 * when they are scheduled.
···
 			return &s_fence->scheduled;
 			}
-		}
-
-		if (dma_fence_is_signaled(f)) {
-			hash_del(&e->node);
-			dma_fence_put(f);
-			kmem_cache_free(amdgpu_sync_slab, e);
-			continue;
 		}

 		return f;
+1-1
drivers/gpu/drm/i915/i915_debugfs.c
···
 		sseu->slice_mask |= BIT(s);

-		if (IS_GEN9_BC(dev_priv))
+		if (IS_GEN9_BC(dev_priv) || IS_CANNONLAKE(dev_priv))
 			sseu->subslice_mask =
 				INTEL_INFO(dev_priv)->sseu.subslice_mask;
+8-7
drivers/gpu/drm/i915/i915_gem_context.c
···
 }

 static bool
-needs_pd_load_pre(struct i915_hw_ppgtt *ppgtt,
-		  struct intel_engine_cs *engine,
-		  struct i915_gem_context *to)
+needs_pd_load_pre(struct i915_hw_ppgtt *ppgtt, struct intel_engine_cs *engine)
 {
+	struct i915_gem_context *from = engine->legacy_active_context;
+
 	if (!ppgtt)
 		return false;

 	/* Always load the ppgtt on first use */
-	if (!engine->legacy_active_context)
+	if (!from)
 		return true;

 	/* Same context without new entries, skip */
-	if (engine->legacy_active_context == to &&
+	if ((!from->ppgtt || from->ppgtt == ppgtt) &&
 	    !(intel_engine_flag(engine) & ppgtt->pd_dirty_rings))
 		return false;
···
 	if (skip_rcs_switch(ppgtt, engine, to))
 		return 0;

-	if (needs_pd_load_pre(ppgtt, engine, to)) {
+	if (needs_pd_load_pre(ppgtt, engine)) {
 		/* Older GENs and non render rings still want the load first,
 		 * "PP_DCLV followed by PP_DIR_BASE register through Load
 		 * Register Immediate commands in Ring Buffer before submitting
···
 	struct i915_hw_ppgtt *ppgtt =
 		to->ppgtt ?: req->i915->mm.aliasing_ppgtt;

-	if (needs_pd_load_pre(ppgtt, engine, to)) {
+	if (needs_pd_load_pre(ppgtt, engine)) {
 		int ret;

 		trace_switch_mm(engine, to);
···
 		ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);
 	}

+	engine->legacy_active_context = to;
 	return 0;
 }
+4
drivers/gpu/drm/i915/i915_gem_render_state.c
···242242 goto err_unpin;243243 }244244245245+ ret = req->engine->emit_flush(req, EMIT_INVALIDATE);246246+ if (ret)247247+ goto err_unpin;248248+245249 ret = req->engine->emit_bb_start(req,246250 so->batch_offset, so->batch_size,247251 I915_DISPATCH_SECURE);
+1-1
drivers/gpu/drm/i915/intel_ddi.c
···17621762 if (dev_priv->vbt.edp.low_vswing) {17631763 if (voltage == VOLTAGE_INFO_0_85V) {17641764 *n_entries = ARRAY_SIZE(cnl_ddi_translations_edp_0_85V);17651765- return cnl_ddi_translations_dp_0_85V;17651765+ return cnl_ddi_translations_edp_0_85V;17661766 } else if (voltage == VOLTAGE_INFO_0_95V) {17671767 *n_entries = ARRAY_SIZE(cnl_ddi_translations_edp_0_95V);17681768 return cnl_ddi_translations_edp_0_95V;
+7
drivers/gpu/drm/i915/intel_display.c
···34853485 !gpu_reset_clobbers_display(dev_priv))34863486 return;3487348734883488+ /* We have a modeset vs reset deadlock, defensively unbreak it.34893489+ *34903490+ * FIXME: We can do a _lot_ better, this is just a first iteration.34913491+ */34923492+ i915_gem_set_wedged(dev_priv);34933493+ DRM_DEBUG_DRIVER("Wedging GPU to avoid deadlocks with pending modeset updates\n");34943494+34883495 /*34893496 * Need mode_config.mutex so that we don't34903497 * trample ongoing ->detect() and whatnot.
···189189190190 ct->chip.irq_suspend = brcmstb_l2_intc_suspend;191191 ct->chip.irq_resume = brcmstb_l2_intc_resume;192192+ ct->chip.irq_pm_shutdown = brcmstb_l2_intc_suspend;192193193194 if (data->can_wake) {194195 /* This IRQ chip can wake the system, set all child interrupts
···1835183518361836#define ACPI_GICV3_ITS_MEM_SIZE (SZ_128K)1837183718381838-#if defined(CONFIG_ACPI_NUMA) && (ACPI_CA_VERSION >= 0x20170531)18381838+#ifdef CONFIG_ACPI_NUMA18391839struct its_srat_map {18401840 /* numa node id */18411841 u32 numa_node;···18431843 u32 its_id;18441844};1845184518461846-static struct its_srat_map its_srat_maps[MAX_NUMNODES] __initdata;18461846+static struct its_srat_map *its_srat_maps __initdata;18471847static int its_in_srat __initdata;1848184818491849static int __init acpi_get_its_numa_node(u32 its_id)···18551855 return its_srat_maps[i].numa_node;18561856 }18571857 return NUMA_NO_NODE;18581858+}18591859+18601860+static int __init gic_acpi_match_srat_its(struct acpi_subtable_header *header,18611861+ const unsigned long end)18621862+{18631863+ return 0;18581864}1859186518601866static int __init gic_acpi_parse_srat_its(struct acpi_subtable_header *header,···18761870 if (its_affinity->header.length < sizeof(*its_affinity)) {18771871 pr_err("SRAT: Invalid header length %d in ITS affinity\n",18781872 its_affinity->header.length);18791879- return -EINVAL;18801880- }18811881-18821882- if (its_in_srat >= MAX_NUMNODES) {18831883- pr_err("SRAT: ITS affinity exceeding max count[%d]\n",18841884- MAX_NUMNODES);18851873 return -EINVAL;18861874 }18871875···1897189718981898static void __init acpi_table_parse_srat_its(void)18991899{19001900+ int count;19011901+19021902+ count = acpi_table_parse_entries(ACPI_SIG_SRAT,19031903+ sizeof(struct acpi_table_srat),19041904+ ACPI_SRAT_TYPE_GIC_ITS_AFFINITY,19051905+ gic_acpi_match_srat_its, 0);19061906+ if (count <= 0)19071907+ return;19081908+19091909+ its_srat_maps = kmalloc(count * sizeof(struct its_srat_map),19101910+ GFP_KERNEL);19111911+ if (!its_srat_maps) {19121912+ pr_warn("SRAT: Failed to allocate memory for its_srat_maps!\n");19131913+ return;19141914+ }19151915+19001916 acpi_table_parse_entries(ACPI_SIG_SRAT,19011917 sizeof(struct acpi_table_srat),19021918 ACPI_SRAT_TYPE_GIC_ITS_AFFINITY,19031919 
gic_acpi_parse_srat_its, 0);19041920}19211921+19221922+/* free the its_srat_maps after ITS probing */19231923+static void __init acpi_its_srat_maps_free(void)19241924+{19251925+ kfree(its_srat_maps);19261926+}19051927#else19061928static void __init acpi_table_parse_srat_its(void) { }19071929static int __init acpi_get_its_numa_node(u32 its_id) { return NUMA_NO_NODE; }19301930+static void __init acpi_its_srat_maps_free(void) { }19081931#endif1909193219101933static int __init gic_acpi_parse_madt_its(struct acpi_subtable_header *header,···19741951 acpi_table_parse_srat_its();19751952 acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_TRANSLATOR,19761953 gic_acpi_parse_madt_its, 0);19541954+ acpi_its_srat_maps_free();19771955}19781956#else19791957static void __init its_acpi_probe(void) { }
+13-3
drivers/irqchip/irq-gic-v3.c
···353353354354 if (static_key_true(&supports_deactivate))355355 gic_write_eoir(irqnr);356356+ else357357+ isb();356358357359 err = handle_domain_irq(gic_data.domain, irqnr, regs);358360 if (err) {···642640static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val,643641 bool force)644642{645645- unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask);643643+ unsigned int cpu;646644 void __iomem *reg;647645 int enabled;648646 u64 val;647647+648648+ if (force)649649+ cpu = cpumask_first(mask_val);650650+ else651651+ cpu = cpumask_any_and(mask_val, cpu_online_mask);649652650653 if (cpu >= nr_cpu_ids)651654 return -EINVAL;···838831 if (ret)839832 return ret;840833841841- for (i = 0; i < nr_irqs; i++)842842- gic_irq_domain_map(domain, virq + i, hwirq + i);834834+ for (i = 0; i < nr_irqs; i++) {835835+ ret = gic_irq_domain_map(domain, virq + i, hwirq + i);836836+ if (ret)837837+ return ret;838838+ }843839844840 return 0;845841}
+10-4
drivers/irqchip/irq-gic.c
···361361 if (likely(irqnr > 15 && irqnr < 1020)) {362362 if (static_key_true(&supports_deactivate))363363 writel_relaxed(irqstat, cpu_base + GIC_CPU_EOI);364364+ isb();364365 handle_domain_irq(gic->domain, irqnr, regs);365366 continue;366367 }···402401 goto out;403402404403 cascade_irq = irq_find_mapping(chip_data->domain, gic_irq);405405- if (unlikely(gic_irq < 32 || gic_irq > 1020))404404+ if (unlikely(gic_irq < 32 || gic_irq > 1020)) {406405 handle_bad_irq(desc);407407- else406406+ } else {407407+ isb();408408 generic_handle_irq(cascade_irq);409409+ }409410410411 out:411412 chained_irq_exit(chip, desc);···10301027 if (ret)10311028 return ret;1032102910331033- for (i = 0; i < nr_irqs; i++)10341034- gic_irq_domain_map(domain, virq + i, hwirq + i);10301030+ for (i = 0; i < nr_irqs; i++) {10311031+ ret = gic_irq_domain_map(domain, virq + i, hwirq + i);10321032+ if (ret)10331033+ return ret;10341034+ }1035103510361036 return 0;10371037}
+2-2
drivers/net/ethernet/mellanox/mlx4/main.c
···432432 /* Virtual PCI function needs to determine UAR page size from433433 * firmware. Only master PCI function can set the uar page size434434 */435435- if (enable_4k_uar)435435+ if (enable_4k_uar || !dev->persist->num_vfs)436436 dev->uar_page_shift = DEFAULT_UAR_PAGE_SHIFT;437437 else438438 dev->uar_page_shift = PAGE_SHIFT;···2277227722782278 dev->caps.max_fmr_maps = (1 << (32 - ilog2(dev->caps.num_mpts))) - 1;2279227922802280- if (enable_4k_uar) {22802280+ if (enable_4k_uar || !dev->persist->num_vfs) {22812281 init_hca.log_uar_sz = ilog2(dev->caps.num_uars) +22822282 PAGE_SHIFT - DEFAULT_UAR_PAGE_SHIFT;22832283 init_hca.uar_page_sz = DEFAULT_UAR_PAGE_SHIFT - 12;
···401401 for (i = 0; i < vha->hw->max_req_queues; i++) {402402 struct req_que *req = vha->hw->req_q_map[i];403403404404- if (!test_bit(i, vha->hw->req_qid_map))405405- continue;406406-407404 if (req || !buf) {408405 length = req ?409406 req->length : REQUEST_ENTRY_CNT_24XX;···414417 } else if (ent->t263.queue_type == T263_QUEUE_TYPE_RSP) {415418 for (i = 0; i < vha->hw->max_rsp_queues; i++) {416419 struct rsp_que *rsp = vha->hw->rsp_q_map[i];417417-418418- if (!test_bit(i, vha->hw->rsp_qid_map))419419- continue;420420421421 if (rsp || !buf) {422422 length = rsp ?···654660 for (i = 0; i < vha->hw->max_req_queues; i++) {655661 struct req_que *req = vha->hw->req_q_map[i];656662657657- if (!test_bit(i, vha->hw->req_qid_map))658658- continue;659659-660663 if (req || !buf) {661664 qla27xx_insert16(i, buf, len);662665 qla27xx_insert16(1, buf, len);···665674 } else if (ent->t274.queue_type == T274_QUEUE_TYPE_RSP_SHAD) {666675 for (i = 0; i < vha->hw->max_rsp_queues; i++) {667676 struct rsp_que *rsp = vha->hw->rsp_q_map[i];668668-669669- if (!test_bit(i, vha->hw->rsp_qid_map))670670- continue;671677672678 if (rsp || !buf) {673679 qla27xx_insert16(i, buf, len);
+1-1
drivers/scsi/ses.c
···9999100100 ret = scsi_execute_req(sdev, cmd, DMA_FROM_DEVICE, buf, bufflen,101101 NULL, SES_TIMEOUT, SES_RETRIES, NULL);102102- if (unlikely(!ret))102102+ if (unlikely(ret))103103 return ret;104104105105 recv_page_code = ((unsigned char *)buf)[0];
+2-2
drivers/scsi/st.c
···42994299 kref_init(&tpnt->kref);43004300 tpnt->disk = disk;43014301 disk->private_data = &tpnt->driver;43024302- disk->queue = SDp->request_queue;43034302 /* SCSI tape doesn't register this gendisk via add_disk(). Manually43044303 * take queue reference that release_disk() expects. */43054305- if (!blk_get_queue(disk->queue))43044304+ if (!blk_get_queue(SDp->request_queue))43064305 goto out_put_disk;43064306+ disk->queue = SDp->request_queue;43074307 tpnt->driver = &st_template;4308430843094309 tpnt->device = SDp;
+8-7
drivers/soc/imx/gpcv2.c
···200200201201 domain->dev = &pdev->dev;202202203203- ret = pm_genpd_init(&domain->genpd, NULL, true);204204- if (ret) {205205- dev_err(domain->dev, "Failed to init power domain\n");206206- return ret;207207- }208208-209203 domain->regulator = devm_regulator_get_optional(domain->dev, "power");210204 if (IS_ERR(domain->regulator)) {211205 if (PTR_ERR(domain->regulator) != -ENODEV) {212212- dev_err(domain->dev, "Failed to get domain's regulator\n");206206+ if (PTR_ERR(domain->regulator) != -EPROBE_DEFER)207207+ dev_err(domain->dev, "Failed to get domain's regulator\n");213208 return PTR_ERR(domain->regulator);214209 }215210 } else {216211 regulator_set_voltage(domain->regulator,217212 domain->voltage, domain->voltage);213213+ }214214+215215+ ret = pm_genpd_init(&domain->genpd, NULL, true);216216+ if (ret) {217217+ dev_err(domain->dev, "Failed to init power domain\n");218218+ return ret;218219 }219220220221 ret = of_genpd_add_provider_simple(domain->dev->of_node,
···749749 return 0;750750 }751751752752+ /*753753+ * During the second phase of log recovery, we need iget and754754+ * iput to behave like they do for an active filesystem.755755+ * xfs_fs_drop_inode needs to be able to prevent the deletion756756+ * of inodes before we're done replaying log items on those757757+ * inodes. Turn it off immediately after recovery finishes758758+ * so that we don't leak the quota inodes if subsequent mount759759+ * activities fail.760760+ */761761+ mp->m_super->s_flags |= MS_ACTIVE;752762 error = xlog_recover_finish(mp->m_log);753763 if (!error)754764 xfs_log_work_queue(mp);765765+ mp->m_super->s_flags &= ~MS_ACTIVE;755766756767 return error;757768}
+2-10
fs/xfs/xfs_mount.c
···945945 }946946947947 /*948948- * During the second phase of log recovery, we need iget and949949- * iput to behave like they do for an active filesystem.950950- * xfs_fs_drop_inode needs to be able to prevent the deletion951951- * of inodes before we're done replaying log items on those952952- * inodes.953953- */954954- mp->m_super->s_flags |= MS_ACTIVE;955955-956956- /*957948 * Finish recovering the file system. This part needed to be delayed958949 * until after the root and real-time bitmap inodes were consistently959950 * read in.···10191028 out_quota:10201029 xfs_qm_unmount_quotas(mp);10211030 out_rtunmount:10221022- mp->m_super->s_flags &= ~MS_ACTIVE;10231031 xfs_rtunmount_inodes(mp);10241032 out_rele_rip:10251033 IRELE(rip);10261034 cancel_delayed_work_sync(&mp->m_reclaim_work);10271035 xfs_reclaim_inodes(mp, SYNC_WAIT);10361036+ /* Clean out dquots that might be in memory after quotacheck. */10371037+ xfs_qm_unmount(mp);10281038 out_log_dealloc:10291039 mp->m_flags |= XFS_MOUNT_UNMOUNTING;10301040 xfs_log_mount_cancel(mp);
···66#include <linux/types.h>77#include <linux/nodemask.h>88#include <uapi/linux/oom.h>99+#include <linux/sched/coredump.h> /* MMF_* */1010+#include <linux/mm.h> /* VM_FAULT* */9111012struct zonelist;1113struct notifier_block;···6361static inline bool tsk_is_oom_victim(struct task_struct * tsk)6462{6563 return tsk->signal->oom_mm;6464+}6565+6666+/*6767+ * Checks whether a page fault on the given mm is still reliable.6868+ * This is no longer true if the oom reaper started to reap the6969+ * address space, which is reflected by the MMF_UNSTABLE flag set in7070+ * the mm. At that moment any !shared mapping would lose the content7171+ * and could cause memory corruption (zero pages instead of the7272+ * original content).7373+ *7474+ * User should call this before establishing a page table entry for7575+ * a !shared mapping and under the proper page table lock.7676+ *7777+ * Return 0 when the PF is safe, VM_FAULT_SIGBUS otherwise.7878+ */7979+static inline int check_stable_address_space(struct mm_struct *mm)8080+{8181+ if (unlikely(test_bit(MMF_UNSTABLE, &mm->flags)))8282+ return VM_FAULT_SIGBUS;8383+ return 0;6684}67856886extern unsigned long oom_badness(struct task_struct *p,
+2-2
include/linux/perf_event.h
···310310 * Notification that the event was mapped or unmapped. Called311311 * in the context of the mapping task.312312 */313313- void (*event_mapped) (struct perf_event *event); /*optional*/314314- void (*event_unmapped) (struct perf_event *event); /*optional*/313313+ void (*event_mapped) (struct perf_event *event, struct mm_struct *mm); /* optional */314314+ void (*event_unmapped) (struct perf_event *event, struct mm_struct *mm); /* optional */315315316316 /*317317 * Flags for ->add()/->del()/ ->start()/->stop(). There are
+3-1
include/linux/pid.h
···88 PIDTYPE_PID,99 PIDTYPE_PGID,1010 PIDTYPE_SID,1111- PIDTYPE_MAX1111+ PIDTYPE_MAX,1212+ /* only valid for __task_pid_nr_ns() */1313+ __PIDTYPE_TGID1214};13151416/*
+5-4
include/linux/ptr_ring.h
···436436 __PTR_RING_PEEK_CALL_v; \437437})438438439439-static inline void **__ptr_ring_init_queue_alloc(int size, gfp_t gfp)439439+static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp)440440{441441- return kzalloc(ALIGN(size * sizeof(void *), SMP_CACHE_BYTES), gfp);441441+ return kcalloc(size, sizeof(void *), gfp);442442}443443444444static inline void __ptr_ring_set_size(struct ptr_ring *r, int size)···582582 * In particular if you consume ring in interrupt or BH context, you must583583 * disable interrupts/BH when doing so.584584 */585585-static inline int ptr_ring_resize_multiple(struct ptr_ring **rings, int nrings,585585+static inline int ptr_ring_resize_multiple(struct ptr_ring **rings,586586+ unsigned int nrings,586587 int size,587588 gfp_t gfp, void (*destroy)(void *))588589{···591590 void ***queues;592591 int i;593592594594- queues = kmalloc(nrings * sizeof *queues, gfp);593593+ queues = kmalloc_array(nrings, sizeof(*queues), gfp);595594 if (!queues)596595 goto noqueues;597596
···193193}194194195195static inline int skb_array_resize_multiple(struct skb_array **rings,196196- int nrings, int size, gfp_t gfp)196196+ int nrings, unsigned int size,197197+ gfp_t gfp)197198{198199 BUILD_BUG_ON(offsetof(struct skb_array, ring));199200 return ptr_ring_resize_multiple((struct ptr_ring **)rings,
+37
include/linux/wait.h
···757757 __ret; \758758})759759760760+#define __wait_event_killable_timeout(wq_head, condition, timeout) \761761+ ___wait_event(wq_head, ___wait_cond_timeout(condition), \762762+ TASK_KILLABLE, 0, timeout, \763763+ __ret = schedule_timeout(__ret))764764+765765+/**766766+ * wait_event_killable_timeout - sleep until a condition gets true or a timeout elapses767767+ * @wq_head: the waitqueue to wait on768768+ * @condition: a C expression for the event to wait for769769+ * @timeout: timeout, in jiffies770770+ *771771+ * The process is put to sleep (TASK_KILLABLE) until the772772+ * @condition evaluates to true or a kill signal is received.773773+ * The @condition is checked each time the waitqueue @wq_head is woken up.774774+ *775775+ * wake_up() has to be called after changing any variable that could776776+ * change the result of the wait condition.777777+ *778778+ * Returns:779779+ * 0 if the @condition evaluated to %false after the @timeout elapsed,780780+ * 1 if the @condition evaluated to %true after the @timeout elapsed,781781+ * the remaining jiffies (at least 1) if the @condition evaluated782782+ * to %true before the @timeout elapsed, or -%ERESTARTSYS if it was783783+ * interrupted by a kill signal.784784+ *785785+ * Only kill signals interrupt this process.786786+ */787787+#define wait_event_killable_timeout(wq_head, condition, timeout) \788788+({ \789789+ long __ret = timeout; \790790+ might_sleep(); \791791+ if (!___wait_cond_timeout(condition)) \792792+ __ret = __wait_event_killable_timeout(wq_head, \793793+ condition, timeout); \794794+ __ret; \795795+})796796+760797761798#define __wait_event_lock_irq(wq_head, condition, lock, cmd) \762799 (void)___wait_event(wq_head, condition, TASK_UNINTERRUPTIBLE, 0, 0, \
···808808 old = *pold;809809 *pold = new;810810 if (old != NULL) {811811- qdisc_tree_reduce_backlog(old, old->q.qlen, old->qstats.backlog);811811+ unsigned int qlen = old->q.qlen;812812+ unsigned int backlog = old->qstats.backlog;813813+812814 qdisc_reset(old);815815+ qdisc_tree_reduce_backlog(old, qlen, backlog);813816 }814817 sch_tree_unlock(sch);815818
+1-3
include/net/sock.h
···509509static inline int sk_peek_offset(struct sock *sk, int flags)510510{511511 if (unlikely(flags & MSG_PEEK)) {512512- s32 off = READ_ONCE(sk->sk_peek_off);513513- if (off >= 0)514514- return off;512512+ return READ_ONCE(sk->sk_peek_off);515513 }516514517515 return 0;
+8-6
kernel/audit_watch.c
···66666767/* fsnotify events we care about. */6868#define AUDIT_FS_WATCH (FS_MOVE | FS_CREATE | FS_DELETE | FS_DELETE_SELF |\6969- FS_MOVE_SELF | FS_EVENT_ON_CHILD)6969+ FS_MOVE_SELF | FS_EVENT_ON_CHILD | FS_UNMOUNT)70707171static void audit_free_parent(struct audit_parent *parent)7272{···457457 list_del(&krule->rlist);458458459459 if (list_empty(&watch->rules)) {460460+ /*461461+ * audit_remove_watch() drops our reference to 'parent' which462462+ * can get freed. Grab our own reference to be safe.463463+ */464464+ audit_get_parent(parent);460465 audit_remove_watch(watch);461461-462462- if (list_empty(&parent->watches)) {463463- audit_get_parent(parent);466466+ if (list_empty(&parent->watches))464467 fsnotify_destroy_mark(&parent->mark, audit_watch_group);465465- audit_put_parent(parent);466466- }468468+ audit_put_parent(parent);467469 }468470}469471
+39-8
kernel/events/core.c
···22172217 return can_add_hw;22182218}2219221922202220+/*22212221+ * Complement to update_event_times(). This computes the tstamp_* values to22222222+ * continue 'enabled' state from @now, and effectively discards the time22232223+ * between the prior tstamp_stopped and now (as we were in the OFF state, or22242224+ * just switched (context) time base).22252225+ *22262226+ * This further assumes '@event->state == INACTIVE' (we just came from OFF) and22272227+ * cannot have been scheduled in yet. And going into INACTIVE state means22282228+ * '@event->tstamp_stopped = @now'.22292229+ *22302230+ * Thus given the rules of update_event_times():22312231+ *22322232+ * total_time_enabled = tstamp_stopped - tstamp_enabled22332233+ * total_time_running = tstamp_stopped - tstamp_running22342234+ *22352235+ * We can insert 'tstamp_stopped == now' and reverse them to compute new22362236+ * tstamp_* values.22372237+ */22382238+static void __perf_event_enable_time(struct perf_event *event, u64 now)22392239+{22402240+ WARN_ON_ONCE(event->state != PERF_EVENT_STATE_INACTIVE);22412241+22422242+ event->tstamp_stopped = now;22432243+ event->tstamp_enabled = now - event->total_time_enabled;22442244+ event->tstamp_running = now - event->total_time_running;22452245+}22462246+22202247static void add_event_to_ctx(struct perf_event *event,22212248 struct perf_event_context *ctx)22222249{···2251222422522225 list_add_event(event, ctx);22532226 perf_group_attach(event);22542254- event->tstamp_enabled = tstamp;22552255- event->tstamp_running = tstamp;22562256- event->tstamp_stopped = tstamp;22272227+ /*22282228+ * We can be called with event->state == STATE_OFF when we create with22292229+ * .disabled = 1. 
In that case the IOC_ENABLE will call this function.22302230+ */22312231+ if (event->state == PERF_EVENT_STATE_INACTIVE)22322232+ __perf_event_enable_time(event, tstamp);22572233}2258223422592235static void ctx_sched_out(struct perf_event_context *ctx,···25012471 u64 tstamp = perf_event_time(event);2502247225032473 event->state = PERF_EVENT_STATE_INACTIVE;25042504- event->tstamp_enabled = tstamp - event->total_time_enabled;24742474+ __perf_event_enable_time(event, tstamp);25052475 list_for_each_entry(sub, &event->sibling_list, group_entry) {24762476+ /* XXX should not be > INACTIVE if event isn't */25062477 if (sub->state >= PERF_EVENT_STATE_INACTIVE)25072507- sub->tstamp_enabled = tstamp - sub->total_time_enabled;24782478+ __perf_event_enable_time(sub, tstamp);25082479 }25092480}25102481···51215090 atomic_inc(&event->rb->aux_mmap_count);5122509151235092 if (event->pmu->event_mapped)51245124- event->pmu->event_mapped(event);50935093+ event->pmu->event_mapped(event, vma->vm_mm);51255094}5126509551275096static void perf_pmu_output_stop(struct perf_event *event);···51445113 unsigned long size = perf_data_size(rb);5145511451465115 if (event->pmu->event_unmapped)51475147- event->pmu->event_unmapped(event);51165116+ event->pmu->event_unmapped(event, vma->vm_mm);5148511751495118 /*51505119 * rb->aux_mmap_count will always drop before rb->mmap_count and···54425411 vma->vm_ops = &perf_mmap_vmops;5443541254445413 if (event->pmu->event_mapped)54455445- event->pmu->event_mapped(event);54145414+ event->pmu->event_mapped(event, vma->vm_mm);5446541554475416 return ret;54485417}
+8-2
kernel/irq/chip.c
···1000100010011001void irq_modify_status(unsigned int irq, unsigned long clr, unsigned long set)10021002{10031003- unsigned long flags;10031003+ unsigned long flags, trigger, tmp;10041004 struct irq_desc *desc = irq_get_desc_lock(irq, &flags, 0);1005100510061006 if (!desc)···1014101410151015 irq_settings_clr_and_set(desc, clr, set);1016101610171017+ trigger = irqd_get_trigger_type(&desc->irq_data);10181018+10171019 irqd_clear(&desc->irq_data, IRQD_NO_BALANCING | IRQD_PER_CPU |10181020 IRQD_TRIGGER_MASK | IRQD_LEVEL | IRQD_MOVE_PCNTXT);10191021 if (irq_settings_has_no_balance_set(desc))···10271025 if (irq_settings_is_level(desc))10281026 irqd_set(&desc->irq_data, IRQD_LEVEL);1029102710301030- irqd_set(&desc->irq_data, irq_settings_get_trigger_mask(desc));10281028+ tmp = irq_settings_get_trigger_mask(desc);10291029+ if (tmp != IRQ_TYPE_NONE)10301030+ trigger = tmp;10311031+10321032+ irqd_set(&desc->irq_data, trigger);1031103310321034 irq_put_desc_unlock(desc, flags);10331035}
+2-2
kernel/irq/ipi.c
···165165 struct irq_data *data = irq_get_irq_data(irq);166166 struct cpumask *ipimask = data ? irq_data_get_affinity_mask(data) : NULL;167167168168- if (!data || !ipimask || cpu > nr_cpu_ids)168168+ if (!data || !ipimask || cpu >= nr_cpu_ids)169169 return INVALID_HWIRQ;170170171171 if (!cpumask_test_cpu(cpu, ipimask))···195195 if (!chip->ipi_send_single && !chip->ipi_send_mask)196196 return -EINVAL;197197198198- if (cpu > nr_cpu_ids)198198+ if (cpu >= nr_cpu_ids)199199 return -EINVAL;200200201201 if (dest) {
+23-2
kernel/kmod.c
···7171static DECLARE_WAIT_QUEUE_HEAD(kmod_wq);72727373/*7474+ * This is a restriction on having *all* MAX_KMOD_CONCURRENT threads7575+ * running at the same time without returning. When this happens we7676+ * believe you've somehow ended up with a recursive module dependency7777+ * creating a loop.7878+ *7979+ * We have no option but to fail.8080+ *8181+ * Userspace should proactively try to detect and prevent these.8282+ */8383+#define MAX_KMOD_ALL_BUSY_TIMEOUT 58484+8585+/*7486 modprobe_path is set via /proc/sys.7587*/7688char modprobe_path[KMOD_PATH_LEN] = "/sbin/modprobe";···179167 pr_warn_ratelimited("request_module: kmod_concurrent_max (%u) close to 0 (max_modprobes: %u), for module %s, throttling...",180168 atomic_read(&kmod_concurrent_max),181169 MAX_KMOD_CONCURRENT, module_name);182182- wait_event_interruptible(kmod_wq,183183- atomic_dec_if_positive(&kmod_concurrent_max) >= 0);170170+ ret = wait_event_killable_timeout(kmod_wq,171171+ atomic_dec_if_positive(&kmod_concurrent_max) >= 0,172172+ MAX_KMOD_ALL_BUSY_TIMEOUT * HZ);173173+ if (!ret) {174174+ pr_warn_ratelimited("request_module: modprobe %s cannot be processed, kmod busy with %d threads for more than %d seconds now",175175+ module_name, MAX_KMOD_CONCURRENT, MAX_KMOD_ALL_BUSY_TIMEOUT);176176+ return -ETIME;177177+ } else if (ret == -ERESTARTSYS) {178178+ pr_warn_ratelimited("request_module: sigkill sent for modprobe %s, giving up", module_name);179179+ return ret;180180+ }184181 }185182186183 trace_module_request(module_name, wait, _RET_IP_);
+4-7
kernel/pid.c
···527527 if (!ns)528528 ns = task_active_pid_ns(current);529529 if (likely(pid_alive(task))) {530530- if (type != PIDTYPE_PID)530530+ if (type != PIDTYPE_PID) {531531+ if (type == __PIDTYPE_TGID)532532+ type = PIDTYPE_PID;531533 task = task->group_leader;534534+ }532535 nr = pid_nr_ns(rcu_dereference(task->pids[type].pid), ns);533536 }534537 rcu_read_unlock();···539536 return nr;540537}541538EXPORT_SYMBOL(__task_pid_nr_ns);542542-543543-pid_t task_tgid_nr_ns(struct task_struct *tsk, struct pid_namespace *ns)544544-{545545- return pid_nr_ns(task_tgid(tsk), ns);546546-}547547-EXPORT_SYMBOL(task_tgid_nr_ns);548539549540struct pid_namespace *task_active_pid_ns(struct task_struct *tsk)550541{
+5-1
kernel/signal.c
···11941194 recalc_sigpending_and_wake(t);11951195 }11961196 }11971197- if (action->sa.sa_handler == SIG_DFL)11971197+ /*11981198+ * Don't clear SIGNAL_UNKILLABLE for traced tasks, users won't expect11991199+ * debugging to leave init killable.12001200+ */12011201+ if (action->sa.sa_handler == SIG_DFL && !t->ptrace)11981202 t->signal->flags &= ~SIGNAL_UNKILLABLE;11991203 ret = specific_send_sig_info(sig, info, t);12001204 spin_unlock_irqrestore(&t->sighand->siglock, flags);
+1
kernel/watchdog.c
···240240 * hardlockup detector generates a warning241241 */242242 sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5);243243+ watchdog_update_hrtimer_threshold(sample_period);243244}244245245246/* Commands for resetting the watchdog */
+59
kernel/watchdog_hld.c
···3737}3838EXPORT_SYMBOL(arch_touch_nmi_watchdog);39394040+#ifdef CONFIG_HARDLOCKUP_CHECK_TIMESTAMP4141+static DEFINE_PER_CPU(ktime_t, last_timestamp);4242+static DEFINE_PER_CPU(unsigned int, nmi_rearmed);4343+static ktime_t watchdog_hrtimer_sample_threshold __read_mostly;4444+4545+void watchdog_update_hrtimer_threshold(u64 period)4646+{4747+ /*4848+ * The hrtimer runs with a period of (watchdog_threshold * 2) / 54949+ *5050+ * So it runs effectively with 2.5 times the rate of the NMI5151+ * watchdog. That means the hrtimer should fire 2-3 times before5252+ * the NMI watchdog expires. The NMI watchdog on x86 is based on5353+ * unhalted CPU cycles, so if Turbo-Mode is enabled the CPU cycles5454+ * might run way faster than expected and the NMI fires in a5555+ * smaller period than the one deduced from the nominal CPU5656+ * frequency. Depending on the Turbo-Mode factor this might be fast5757+ * enough to get the NMI period smaller than the hrtimer watchdog5858+ * period and trigger false positives.5959+ *6060+ * The sample threshold is used to check in the NMI handler whether6161+ * the minimum time between two NMI samples has elapsed. 
That6262+ * prevents false positives.6363+ *6464+ * Set this to 4/5 of the actual watchdog threshold period so the6565+ * hrtimer is guaranteed to fire at least once within the real6666+ * watchdog threshold.6767+ */6868+ watchdog_hrtimer_sample_threshold = period * 2;6969+}7070+7171+static bool watchdog_check_timestamp(void)7272+{7373+ ktime_t delta, now = ktime_get_mono_fast_ns();7474+7575+ delta = now - __this_cpu_read(last_timestamp);7676+ if (delta < watchdog_hrtimer_sample_threshold) {7777+ /*7878+ * If ktime is jiffies based, a stalled timer would prevent7979+ * jiffies from being incremented and the filter would look8080+ * at a stale timestamp and never trigger.8181+ */8282+ if (__this_cpu_inc_return(nmi_rearmed) < 10)8383+ return false;8484+ }8585+ __this_cpu_write(nmi_rearmed, 0);8686+ __this_cpu_write(last_timestamp, now);8787+ return true;8888+}8989+#else9090+static inline bool watchdog_check_timestamp(void)9191+{9292+ return true;9393+}9494+#endif9595+4096static struct perf_event_attr wd_hw_attr = {4197 .type = PERF_TYPE_HARDWARE,4298 .config = PERF_COUNT_HW_CPU_CYCLES,···11660 __this_cpu_write(watchdog_nmi_touch, false);11761 return;11862 }6363+6464+ if (!watchdog_check_timestamp())6565+ return;1196612067 /* check for a hardlockup12168 * This is done by making sure our timer interrupt
+7
lib/Kconfig.debug
···798798 select SOFTLOCKUP_DETECTOR799799800800#801801+# Enables a timestamp based low pass filter to compensate for perf based802802+# hard lockup detection which runs too fast due to turbo modes.803803+#804804+config HARDLOCKUP_CHECK_TIMESTAMP805805+ bool806806+807807+#801808# arch/ can define HAVE_HARDLOCKUP_DETECTOR_ARCH to provide their own hard802809# lockup detector rather than the perf based detector.803810#
···16111611 * @page: the page16121612 *16131613 * This function protects unlocked LRU pages from being moved to16141614- * another cgroup and stabilizes their page->mem_cgroup binding.16141614+ * another cgroup.16151615+ *16161616+ * It ensures lifetime of the returned memcg. Caller is responsible16171617+ * for the lifetime of the page; __unlock_page_memcg() is available16181618+ * when @page might get freed inside the locked section.16151619 */16161616-void lock_page_memcg(struct page *page)16201620+struct mem_cgroup *lock_page_memcg(struct page *page)16171621{16181622 struct mem_cgroup *memcg;16191623 unsigned long flags;···16261622 * The RCU lock is held throughout the transaction. The fast16271623 * path can get away without acquiring the memcg->move_lock16281624 * because page moving starts with an RCU grace period.16291629- */16251625+ *16261626+ * The RCU lock also protects the memcg from being freed when16271627+ * the page state that is going to change is the only thing16281628+ * preventing the page itself from being freed. E.g. 
writeback16291629+ * doesn't hold a page reference and relies on PG_writeback to16301630+ * keep off truncation, migration and so forth.16311631+ */16301632 rcu_read_lock();1631163316321634 if (mem_cgroup_disabled())16331633- return;16351635+ return NULL;16341636again:16351637 memcg = page->mem_cgroup;16361638 if (unlikely(!memcg))16371637- return;16391639+ return NULL;1638164016391641 if (atomic_read(&memcg->moving_account) <= 0)16401640- return;16421642+ return memcg;1641164316421644 spin_lock_irqsave(&memcg->move_lock, flags);16431645 if (memcg != page->mem_cgroup) {···16591649 memcg->move_lock_task = current;16601650 memcg->move_lock_flags = flags;1661165116621662- return;16521652+ return memcg;16631653}16641654EXPORT_SYMBOL(lock_page_memcg);1665165516661656/**16671667- * unlock_page_memcg - unlock a page->mem_cgroup binding16681668- * @page: the page16571657+ * __unlock_page_memcg - unlock and unpin a memcg16581658+ * @memcg: the memcg16591659+ *16601660+ * Unlock and unpin a memcg returned by lock_page_memcg().16691661 */16701670-void unlock_page_memcg(struct page *page)16621662+void __unlock_page_memcg(struct mem_cgroup *memcg)16711663{16721672- struct mem_cgroup *memcg = page->mem_cgroup;16731673-16741664 if (memcg && memcg->move_lock_task == current) {16751665 unsigned long flags = memcg->move_lock_flags;16761666···16811671 }1682167216831673 rcu_read_unlock();16741674+}16751675+16761676+/**16771677+ * unlock_page_memcg - unlock a page->mem_cgroup binding16781678+ * @page: the page16791679+ */16801680+void unlock_page_memcg(struct page *page)16811681+{16821682+ __unlock_page_memcg(page->mem_cgroup);16841683}16851684EXPORT_SYMBOL(unlock_page_memcg);16861685
+20-16
mm/memory.c
···6868#include <linux/debugfs.h>6969#include <linux/userfaultfd_k.h>7070#include <linux/dax.h>7171+#include <linux/oom.h>71727273#include <asm/io.h>7374#include <asm/mmu_context.h>···28942893 struct vm_area_struct *vma = vmf->vma;28952894 struct mem_cgroup *memcg;28962895 struct page *page;28962896+ int ret = 0;28972897 pte_t entry;2898289828992899 /* File mapping without ->vm_ops ? */···29262924 vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,29272925 vmf->address, &vmf->ptl);29282926 if (!pte_none(*vmf->pte))29272927+ goto unlock;29282928+ ret = check_stable_address_space(vma->vm_mm);29292929+ if (ret)29292930 goto unlock;29302931 /* Deliver the page fault to userland, check inside PT lock */29312932 if (userfaultfd_missing(vma)) {···29642959 if (!pte_none(*vmf->pte))29652960 goto release;2966296129622962+ ret = check_stable_address_space(vma->vm_mm);29632963+ if (ret)29642964+ goto release;29652965+29672966 /* Deliver the page fault to userland, check inside PT lock */29682967 if (userfaultfd_missing(vma)) {29692968 pte_unmap_unlock(vmf->pte, vmf->ptl);···29872978 update_mmu_cache(vma, vmf->address, vmf->pte);29882979unlock:29892980 pte_unmap_unlock(vmf->pte, vmf->ptl);29902990- return 0;29812981+ return ret;29912982release:29922983 mem_cgroup_cancel_charge(page, memcg, false);29932984 put_page(page);···32613252int finish_fault(struct vm_fault *vmf)32623253{32633254 struct page *page;32643264- int ret;32553255+ int ret = 0;3265325632663257 /* Did we COW the page? */
32673258 if ((vmf->flags & FAULT_FLAG_WRITE) &&···32693260 page = vmf->cow_page;32703261 else32713262 page = vmf->page;32723272- ret = alloc_set_pte(vmf, vmf->memcg, page);32633263+32643264+ /*32653265+ * check even for read faults because we might have lost our CoWed32663266+ * page32673267+ */32683268+ if (!(vmf->vma->vm_flags & VM_SHARED))32693269+ ret = check_stable_address_space(vmf->vma->vm_mm);32703270+ if (!ret)32713271+ ret = alloc_set_pte(vmf, vmf->memcg, page);32733272 if (vmf->pte)32743273 pte_unmap_unlock(vmf->pte, vmf->ptl);32753274 return ret;···39163899 if (task_in_memcg_oom(current) && !(ret & VM_FAULT_OOM))39173900 mem_cgroup_oom_synchronize(false);39183901 }39193919-39203920- /*39213921- * This mm has been already reaped by the oom reaper and so the39223922- * refault cannot be trusted in general. Anonymous refaults would39233923- * lose data and give a zero page instead e.g. This is especially39243924- * problem for use_mm() because regular tasks will just die and39253925- * the corrupted data will not be visible anywhere while kthread39263926- * will outlive the oom victim and potentially propagate the date39273927- * further.39283928- */39293929- if (unlikely((current->flags & PF_KTHREAD) && !(ret & VM_FAULT_ERROR)39303930- && test_bit(MMF_UNSTABLE, &vma->vm_mm->flags)))39313931- ret = VM_FAULT_SIGBUS;3932390239333903 return ret;39343904}
-5
mm/mempolicy.c
···861861 *policy |= (pol->flags & MPOL_MODE_FLAGS);862862 }863863864864- if (vma) {865865- up_read(¤t->mm->mmap_sem);866866- vma = NULL;867867- }868868-869864 err = 0;870865 if (nmask) {871866 if (mpol_store_user_nodemask(pol)) {
+3-8
mm/migrate.c
···4141#include <linux/page_idle.h>4242#include <linux/page_owner.h>4343#include <linux/sched/mm.h>4444+#include <linux/ptrace.h>44454546#include <asm/tlbflush.h>4647···16531652 const int __user *, nodes,16541653 int __user *, status, int, flags)16551654{16561656- const struct cred *cred = current_cred(), *tcred;16571655 struct task_struct *task;16581656 struct mm_struct *mm;16591657 int err;···1676167616771677 /*16781678 * Check if this process has the right to modify the specified16791679- * process. The right exists if the process has administrative16801680- * capabilities, superuser privileges or the same16811681- * userid as the target process.16791679+ * process. Use the regular "ptrace_may_access()" checks.16821680 */16831683- tcred = __task_cred(task);16841684- if (!uid_eq(cred->euid, tcred->suid) && !uid_eq(cred->euid, tcred->uid) &&16851685- !uid_eq(cred->uid, tcred->suid) && !uid_eq(cred->uid, tcred->uid) &&16861686- !capable(CAP_SYS_NICE)) {16811681+ if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) {16871682 rcu_read_unlock();16881683 err = -EPERM;16891684 goto out;
-16
mm/nobootmem.c
···146146 NULL)147147 count += __free_memory_core(start, end);148148149149-#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK150150- {151151- phys_addr_t size;152152-153153- /* Free memblock.reserved array if it was allocated */154154- size = get_allocated_memblock_reserved_regions_info(&start);155155- if (size)156156- count += __free_memory_core(start, start + size);157157-158158- /* Free memblock.memory array if it was allocated */159159- size = get_allocated_memblock_memory_regions_info(&start);160160- if (size)161161- count += __free_memory_core(start, start + size);162162- }163163-#endif164164-165149 return count;166150}167151
+12-3
mm/page-writeback.c
···27242724int test_clear_page_writeback(struct page *page)27252725{27262726 struct address_space *mapping = page_mapping(page);27272727+ struct mem_cgroup *memcg;27282728+ struct lruvec *lruvec;27272729 int ret;2728273027292729- lock_page_memcg(page);27312731+ memcg = lock_page_memcg(page);27322732+ lruvec = mem_cgroup_page_lruvec(page, page_pgdat(page));27302733 if (mapping && mapping_use_writeback_tags(mapping)) {27312734 struct inode *inode = mapping->host;27322735 struct backing_dev_info *bdi = inode_to_bdi(inode);···27572754 } else {27582755 ret = TestClearPageWriteback(page);27592756 }27572757+ /*27582758+ * NOTE: Page might be free now! Writeback doesn't hold a page27592759+ * reference on its own, it relies on truncation to wait for27602760+ * the clearing of PG_writeback. The below can only access27612761+ * page state that is static across allocation cycles.27622762+ */27602763 if (ret) {27612761- dec_lruvec_page_state(page, NR_WRITEBACK);27642764+ dec_lruvec_state(lruvec, NR_WRITEBACK);27622765 dec_zone_page_state(page, NR_ZONE_WRITE_PENDING);27632766 inc_node_page_state(page, NR_WRITTEN);27642767 }27652765- unlock_page_memcg(page);27682768+ __unlock_page_memcg(memcg);27662769 return ret;27672770}27682771
+4
mm/page_alloc.c
···15841584 /* Reinit limits that are based on free pages after the kernel is up */15851585 files_maxfiles_init();15861586#endif15871587+#ifdef CONFIG_ARCH_DISCARD_MEMBLOCK15881588+ /* Discard memblock private memory */15891589+ memblock_discard();15901590+#endif1587159115881592 for_each_populated_zone(zone)15891593 set_zone_contiguous(zone);
net/ipv4/igmp.c
···10071007{10081008 /* This basically follows the spec line by line -- see RFC1112 */10091009 struct igmphdr *ih;10101010- struct in_device *in_dev = __in_dev_get_rcu(skb->dev);10101010+ struct net_device *dev = skb->dev;10111011+ struct in_device *in_dev;10111012 int len = skb->len;10121013 bool dropped = true;1013101410151015+ if (netif_is_l3_master(dev)) {10161016+ dev = dev_get_by_index_rcu(dev_net(dev), IPCB(skb)->iif);10171017+ if (!dev)10181018+ goto drop;10191019+ }10201020+10211021+ in_dev = __in_dev_get_rcu(dev);10141022 if (!in_dev)10151023 goto drop;10161024
+10-3
net/ipv4/route.c
···12671267 if (mtu)12681268 return mtu;1269126912701270- mtu = dst->dev->mtu;12701270+ mtu = READ_ONCE(dst->dev->mtu);1271127112721272 if (unlikely(dst_metric_locked(dst, RTAX_MTU))) {12731273 if (rt->rt_uses_gateway && mtu > 576)···27692769 if (rtm->rtm_flags & RTM_F_LOOKUP_TABLE)27702770 table_id = rt->rt_table_id;2771277127722772- if (rtm->rtm_flags & RTM_F_FIB_MATCH)27722772+ if (rtm->rtm_flags & RTM_F_FIB_MATCH) {27732773+ if (!res.fi) {27742774+ err = fib_props[res.type].error;27752775+ if (!err)27762776+ err = -EHOSTUNREACH;27772777+ goto errout_free;27782778+ }27732779 err = fib_dump_info(skb, NETLINK_CB(in_skb).portid,27742780 nlh->nlmsg_seq, RTM_NEWROUTE, table_id,27752781 rt->rt_type, res.prefix, res.prefixlen,27762782 fl4.flowi4_tos, res.fi, 0);27772777- else27832783+ } else {27782784 err = rt_fill_info(net, dst, src, table_id, &fl4, skb,27792785 NETLINK_CB(in_skb).portid, nlh->nlmsg_seq);27862786+ }27802787 if (err < 0)27812788 goto errout_free;27822789
+1-2
net/ipv4/tcp_input.c
···30093009 /* delta_us may not be positive if the socket is locked30103010 * when the retrans timer fires and is rescheduled.30113011 */30123012- if (delta_us > 0)30133013- rto = usecs_to_jiffies(delta_us);30123012+ rto = usecs_to_jiffies(max_t(int, delta_us, 1));30143013 }30153014 inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, rto,30163015 TCP_RTO_MAX);
net/ipv6/ip6_fib.c
···10131013 nsiblings = iter->rt6i_nsiblings;10141014 iter->rt6i_node = NULL;10151015 fib6_purge_rt(iter, fn, info->nl_net);10161016+ if (fn->rr_ptr == iter)10171017+ fn->rr_ptr = NULL;10161018 rt6_release(iter);1017101910181020 if (nsiblings) {···10281026 *ins = iter->dst.rt6_next;10291027 iter->rt6i_node = NULL;10301028 fib6_purge_rt(iter, fn, info->nl_net);10291029+ if (fn->rr_ptr == iter)10301030+ fn->rr_ptr = NULL;10311031 rt6_release(iter);10321032 nsiblings--;10331033 } else {···11181114 /* Create subtree root node */11191115 sfn = node_alloc();11201116 if (!sfn)11211121- goto st_failure;11171117+ goto failure;1122111811231119 sfn->leaf = info->nl_net->ipv6.ip6_null_entry;11241120 atomic_inc(&info->nl_net->ipv6.ip6_null_entry->rt6i_ref);···1135113111361132 if (IS_ERR(sn)) {11371133 /* If it is failed, discard just allocated11381138- root, and then (in st_failure) stale node11341134+ root, and then (in failure) stale node11391135 in main tree.11401136 */11411137 node_free(sfn);11421138 err = PTR_ERR(sn);11431143- goto st_failure;11391139+ goto failure;11441140 }1145114111461142 /* Now link new subtree to main tree */···1155115111561152 if (IS_ERR(sn)) {11571153 err = PTR_ERR(sn);11581158- goto st_failure;11541154+ goto failure;11591155 }11601156 }11611157···11961192 atomic_inc(&pn->leaf->rt6i_ref);11971193 }11981194#endif11991199- /* Always release dst as dst->__refcnt is guaranteed12001200- * to be taken before entering this function12011201- */12021202- dst_release_immediate(&rt->dst);11951195+ goto failure;12031196 }12041197 return err;1205119812061206-#ifdef CONFIG_IPV6_SUBTREES12071207- /* Subtree creation failed, probably main tree node 12081208- is orphan. If it is, shoot it.
11991199+failure:12001200+ /* fn->leaf could be NULL if fn is an intermediate node and we12011201+ * failed to add the new route to it in both subtree creation12021202+ * failure and fib6_add_rt2node() failure case.12031203+ * In both cases, fib6_repair_tree() should be called to fix12041204+ * fn->leaf.12091205 */12101210-st_failure:12111206 if (fn && !(fn->fn_flags & (RTN_RTINFO|RTN_ROOT)))12121207 fib6_repair_tree(info->nl_net, fn);12131208 /* Always release dst as dst->__refcnt is guaranteed···12141211 */12151212 dst_release_immediate(&rt->dst);12161213 return err;12171217-#endif12181214}1219121512201216/*
···367367}368368369369static size_t upcall_msg_size(const struct dp_upcall_info *upcall_info,370370- unsigned int hdrlen)370370+ unsigned int hdrlen, int actions_attrlen)371371{372372 size_t size = NLMSG_ALIGN(sizeof(struct ovs_header))373373 + nla_total_size(hdrlen) /* OVS_PACKET_ATTR_PACKET */···384384385385 /* OVS_PACKET_ATTR_ACTIONS */386386 if (upcall_info->actions_len)387387- size += nla_total_size(upcall_info->actions_len);387387+ size += nla_total_size(actions_attrlen);388388389389 /* OVS_PACKET_ATTR_MRU */390390 if (upcall_info->mru)···451451 else452452 hlen = skb->len;453453454454- len = upcall_msg_size(upcall_info, hlen - cutlen);454454+ len = upcall_msg_size(upcall_info, hlen - cutlen,455455+ OVS_CB(skb)->acts_origlen);455456 user_skb = genlmsg_new(len, GFP_ATOMIC);456457 if (!user_skb) {457458 err = -ENOMEM;
+2
net/openvswitch/datapath.h
···9999 * when a packet is received by OVS.100100 * @mru: The maximum received fragement size; 0 if the packet is not101101 * fragmented.102102+ * @acts_origlen: The netlink size of the flow actions applied to this skb.102103 * @cutlen: The number of bytes from the packet end to be removed.103104 */104105struct ovs_skb_cb {105106 struct vport *input_vport;106107 u16 mru;108108+ u16 acts_origlen;107109 u32 cutlen;108110};109111#define OVS_CB(skb) ((struct ovs_skb_cb *)(skb)->cb)
sound/core/seq/seq_clientmgr.c
···15021502static int snd_seq_ioctl_create_queue(struct snd_seq_client *client, void *arg)15031503{15041504 struct snd_seq_queue_info *info = arg;15051505- int result;15061505 struct snd_seq_queue *q;1507150615081508- result = snd_seq_queue_alloc(client->number, info->locked, info->flags);15091509- if (result < 0)15101510- return result;15111511-15121512- q = queueptr(result);15131513- if (q == NULL)15141514- return -EINVAL;15071507+ q = snd_seq_queue_alloc(client->number, info->locked, info->flags);15081508+ if (IS_ERR(q))15091509+ return PTR_ERR(q);1515151015161511 info->queue = q->queue;15171512 info->locked = q->locked;···15161521 if (!info->name[0])15171522 snprintf(info->name, sizeof(info->name), "Queue-%d", q->queue);15181523 strlcpy(q->name, info->name, sizeof(q->name));15191519- queuefree(q);15241524+ snd_use_lock_free(&q->use_lock);1520152515211526 return 0;15221527}
+9-5
sound/core/seq/seq_queue.c
···184184static void queue_use(struct snd_seq_queue *queue, int client, int use);185185186186/* allocate a new queue -187187- * return queue index value or negative value for error187187+ * return pointer to new queue or ERR_PTR(-errno) for error188188+ * The new queue's use_lock is set to 1. It is the caller's responsibility to189189+ * call snd_use_lock_free(&q->use_lock).188190 */189189-int snd_seq_queue_alloc(int client, int locked, unsigned int info_flags)191191+struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int info_flags)190192{191193 struct snd_seq_queue *q;192194193195 q = queue_new(client, locked);194196 if (q == NULL)195195- return -ENOMEM;197197+ return ERR_PTR(-ENOMEM);196198 q->info_flags = info_flags;197199 queue_use(q, client, 1);200200+ snd_use_lock_use(&q->use_lock);198201 if (queue_list_add(q) < 0) {202202+ snd_use_lock_free(&q->use_lock);199203 queue_delete(q);200200- return -ENOMEM;204204+ return ERR_PTR(-ENOMEM);201205 }202202- return q->queue;206206+ return q;203207}204208205209/* delete a queue - queue must be owned by the client */
+1-1
sound/core/seq/seq_queue.h
···717172727373/* create new queue (constructor) */7474-int snd_seq_queue_alloc(int client, int locked, unsigned int flags);7474+struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int flags);75757676/* delete queue (destructor) */7777int snd_seq_queue_delete(int client, int queueid);
+11-3
sound/pci/emu10k1/emufx.c
···698698{699699 struct snd_emu10k1_fx8010_control_old_gpr __user *octl;700700701701- if (emu->support_tlv)702702- return copy_from_user(gctl, &_gctl[idx], sizeof(*gctl));701701+ if (emu->support_tlv) {702702+ if (in_kernel)703703+ memcpy(gctl, (void *)&_gctl[idx], sizeof(*gctl));704704+ else if (copy_from_user(gctl, &_gctl[idx], sizeof(*gctl)))705705+ return -EFAULT;706706+ return 0;707707+ }708708+703709 octl = (struct snd_emu10k1_fx8010_control_old_gpr __user *)_gctl;704704- if (copy_from_user(gctl, &octl[idx], sizeof(*octl)))710710+ if (in_kernel)711711+ memcpy(gctl, (void *)&octl[idx], sizeof(*octl));712712+ else if (copy_from_user(gctl, &octl[idx], sizeof(*octl)))705713 return -EFAULT;706714 gctl->tlv = NULL;707715 return 0;
sound/usb/mixer.c
···542542543543 if (size < sizeof(scale))544544 return -ENOMEM;545545+ if (cval->min_mute)546546+ scale[0] = SNDRV_CTL_TLVT_DB_MINMAX_MUTE;545547 scale[2] = cval->dBmin;546548 scale[3] = cval->dBmax;547549 if (copy_to_user(_tlv, scale, sizeof(scale)))
+1
sound/usb/mixer.h
···6464 int cached;6565 int cache_val[MAX_CHANNELS];6666 u8 initialized;6767+ u8 min_mute;6768 void *private_data;6869};6970
+6
sound/usb/mixer_quirks.c
···18781878 if (unitid == 7 && cval->control == UAC_FU_VOLUME)18791879 snd_dragonfly_quirk_db_scale(mixer, cval, kctl);18801880 break;18811881+ /* lowest playback value is muted on C-Media devices */18821882+ case USB_ID(0x0d8c, 0x000c):18831883+ case USB_ID(0x0d8c, 0x0014):18841884+ if (strstr(kctl->id.name, "Playback"))18851885+ cval->min_mute = 1;18861886+ break;18811887 }18821888}18831889
+5
sound/usb/quirks.c
···11421142 case USB_ID(0x0556, 0x0014): /* Phoenix Audio TMX320VC */11431143 case USB_ID(0x05A3, 0x9420): /* ELP HD USB Camera */11441144 case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */11451145+ case USB_ID(0x1395, 0x740a): /* Sennheiser DECT */11451146 case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */11461147 case USB_ID(0x1de7, 0x0013): /* Phoenix Audio MT202exe */11471148 case USB_ID(0x1de7, 0x0014): /* Phoenix Audio TMX320 */···13741373 break;13751374 }13761375 }13761376+ break;13771377+ case USB_ID(0x16d0, 0x0a23):13781378+ if (fp->altsetting == 2)13791379+ return SNDRV_PCM_FMTBIT_DSD_U32_BE;13771380 break;1378138113791382 default:
+2-1
tools/lib/bpf/libbpf.c
···879879 size_t j;880880 int err = *pfd;881881882882- pr_warning("failed to create map: %s\n",882882+ pr_warning("failed to create map (name: '%s'): %s\n",883883+ obj->maps[i].name,883884 strerror(errno));884885 for (j = 0; j < i; j++)885886 zclose(obj->maps[j].fd);
+2-2
tools/testing/selftests/kmod/kmod.sh
···473473 echo " all Runs all tests (default)"474474 echo " -t Run test ID the number amount of times is recommended"475475 echo " -w Watch test ID run until it runs into an error"476476- echo " -c Run test ID once"477477- echo " -s Run test ID x test-count number of times"476476+ echo " -s Run test ID once"477477+ echo " -c Run test ID x test-count number of times"478478 echo " -l List all test ID list"479479 echo " -h|--help Help"480480 echo