···
 cpld3_version

Date:		November 2018
-KernelVersion:	4.21
+KernelVersion:	5.0
Contact:	Vadim Pasternak <vadimp@mellanox.com>
Description:	These files show with which CPLD versions have been burned
		on LED board.
···
 jtag_enable

Date:		November 2018
-KernelVersion:	4.21
+KernelVersion:	5.0
Contact:	Vadim Pasternak <vadimp@mellanox.com>
Description:	These files enable and disable the access to the JTAG domain.
		By default access to the JTAG domain is disabled.
···
 reset_voltmon_upgrade_fail

Date:		November 2018
-KernelVersion:	4.21
+KernelVersion:	5.0
Contact:	Vadim Pasternak <vadimp@mellanox.com>
Description:	These files show the system reset cause, as following: ComEx
		power fail, reset from ComEx, system platform reset, reset

Documentation/admin-guide/kernel-parameters.txt (+3, -4)
···
 			By default, super page will be supported if Intel IOMMU
 			has the capability. With this option, super page will
 			not be supported.
-		sm_off [Default Off]
-			By default, scalable mode will be supported if the
+		sm_on [Default Off]
+			By default, scalable mode will be disabled even if the
 			hardware advertises that it has support for the scalable
 			mode translation. With this option set, scalable mode
-			will not be used even on hardware which claims to support
-			it.
+			will be used on hardware which claims to support it.
 		tboot_noforce [Default Off]
 			Do not force the Intel IOMMU enabled under tboot.
 			By default, tboot will force Intel IOMMU on, which

···
 - compatible : "olpc,ap-sp"
 - reg : base address and length of SoC's WTM registers
 - interrupts : SP-AP interrupt
-- clocks : phandle + clock-specifier for the clock that drives the WTM
-- clock-names: should be "sp"

 Example:
 	ap-sp@d4290000 {
 		compatible = "olpc,ap-sp";
 		reg = <0xd4290000 0x1000>;
 		interrupts = <40>;
-		clocks = <&soc_clocks MMP2_CLK_SP>;
-		clock-names = "sp";
 	}

Documentation/sysctl/fs.txt (+15, -9)
···
 dentry-state:

-From linux/fs/dentry.c:
+From linux/include/linux/dcache.h:
 --------------------------------------------------------------
-struct {
+struct dentry_stat_t dentry_stat {
         int nr_dentry;
         int nr_unused;
         int age_limit;         /* age in seconds */
         int want_pages;        /* pages requested by system */
-        int dummy[2];
-} dentry_stat = {0, 0, 45, 0,};
---------------------------------------------------------------
+        int nr_negative;       /* # of unused negative dentries */
+        int dummy;             /* Reserved for future use */
+};
+--------------------------------------------------------------

-Dentries are dynamically allocated and deallocated, and
-nr_dentry seems to be 0 all the time. Hence it's safe to
-assume that only nr_unused, age_limit and want_pages are
-used. Nr_unused seems to be exactly what its name says.
+Dentries are dynamically allocated and deallocated.
+
+nr_dentry shows the total number of dentries allocated (active
++ unused). nr_unused shows the number of dentries that are not
+actively used, but are saved in the LRU list for future reuse.
+
 Age_limit is the age in seconds after which dcache entries
 can be reclaimed when memory is short and want_pages is
 nonzero when shrink_dcache_pages() has been called and the
 dcache isn't pruned yet.
+
+nr_negative shows the number of unused dentries that are also
+negative dentries which do not map to actual files.

 ==============================================================

Documentation/x86/resctrl_ui.txt (+1, -1)
···
 Tony Luck <tony.luck@intel.com>
 Vikas Shivappa <vikas.shivappa@intel.com>

-This feature is enabled by the CONFIG_X86_RESCTRL and the x86 /proc/cpuinfo
+This feature is enabled by the CONFIG_X86_CPU_RESCTRL and the x86 /proc/cpuinfo
 flag bits:
 RDT (Resource Director Technology) Allocation - "rdt_a"
 CAT (Cache Allocation Technology) - "cat_l3", "cat_l2"

MAINTAINERS (+51, -1)
···
 BPF (Safe dynamic programs and tools)
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Daniel Borkmann <daniel@iogearbox.net>
+R:	Martin KaFai Lau <kafai@fb.com>
+R:	Song Liu <songliubraving@fb.com>
+R:	Yonghong Song <yhs@fb.com>
 L:	netdev@vger.kernel.org
 L:	linux-kernel@vger.kernel.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf.git
···
 F:	tools/bpf/
 F:	tools/lib/bpf/
 F:	tools/testing/selftests/bpf/
+K:	bpf
+N:	bpf

 BPF JIT for ARM
 M:	Shubham Bansal <illusionist.neo@gmail.com>
···
 M:	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
 T:	git git://anongit.freedesktop.org/drm/drm-misc
 L:	dri-devel@lists.freedesktop.org
-L:	xen-devel@lists.xen.org
+L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
 S:	Supported
 F:	drivers/gpu/drm/xen/
 F:	Documentation/gpu/xen-front.rst
···
 OPENCORES I2C BUS DRIVER
 M:	Peter Korsgaard <peter@korsgaard.com>
+M:	Andrew Lunn <andrew@lunn.ch>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 F:	Documentation/i2c/busses/i2c-ocores
 F:	drivers/i2c/busses/i2c-ocores.c
+F:	include/linux/platform_data/i2c-ocores.h

 OPENRISC ARCHITECTURE
 M:	Jonas Bonn <jonas@southpole.se>
···
 F:	drivers/net/dsa/realtek-smi*
 F:	drivers/net/dsa/rtl83*

+REDPINE WIRELESS DRIVER
+M:	Amitkumar Karwar <amitkarwar@gmail.com>
+M:	Siva Rebbagondla <siva8118@gmail.com>
+L:	linux-wireless@vger.kernel.org
+S:	Maintained
+F:	drivers/net/wireless/rsi/
+
 REGISTER MAP ABSTRACTION
 M:	Mark Brown <broonie@kernel.org>
 L:	linux-kernel@vger.kernel.org
···
 L:	netdev@vger.kernel.org
 S:	Supported
 F:	drivers/net/ethernet/sfc/
+
+SFF/SFP/SFP+ MODULE SUPPORT
+M:	Russell King <linux@armlinux.org.uk>
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	drivers/net/phy/phylink.c
+F:	drivers/net/phy/sfp*
+F:	include/linux/phylink.h
+F:	include/linux/sfp.h

 SGI GRU DRIVER
 M:	Dimitri Sivanich <sivanich@sgi.com>
···
 F:	drivers/platform/x86/
 F:	drivers/platform/olpc/

+X86 PLATFORM DRIVERS - ARCH
+R:	Darren Hart <dvhart@infradead.org>
+R:	Andy Shevchenko <andy@infradead.org>
+L:	platform-driver-x86@vger.kernel.org
+L:	x86@kernel.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/core
+S:	Maintained
+F:	arch/x86/platform
+
 X86 VDSO
 M:	Andy Lutomirski <luto@kernel.org>
 L:	linux-kernel@vger.kernel.org
···
 T:	git git://linuxtv.org/media_tree.git
 S:	Maintained
 F:	drivers/media/tuners/tuner-xc2028.*
+
+XDP (eXpress Data Path)
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+M:	David S. Miller <davem@davemloft.net>
+M:	Jakub Kicinski <jakub.kicinski@netronome.com>
+M:	Jesper Dangaard Brouer <hawk@kernel.org>
+M:	John Fastabend <john.fastabend@gmail.com>
+L:	netdev@vger.kernel.org
+L:	xdp-newbies@vger.kernel.org
+S:	Supported
+F:	net/core/xdp.c
+F:	include/net/xdp.h
+F:	kernel/bpf/devmap.c
+F:	kernel/bpf/cpumap.c
+F:	include/trace/events/xdp.h
+K:	xdp
+N:	xdp

 XDP SOCKETS (AF_XDP)
 M:	Björn Töpel <bjorn.topel@intel.com>

···
 	/* For debugging, it is often good idea to remove this GPIO.
 	   It means you can remove back cover (to reboot by removing
 	   battery) and still use the MMC card. */
-	cd-gpios = <&gpio6 0 GPIO_ACTIVE_HIGH>; /* 160 */
+	cd-gpios = <&gpio6 0 GPIO_ACTIVE_LOW>; /* 160 */
 };

 /* most boards use vaux3, only some old versions use vmmc2 instead */

···
 	 * we end up running with module randomization disabled.
 	 */
 	module_alloc_base = (u64)_etext - MODULES_VSIZE;
+	__flush_dcache_area(&module_alloc_base, sizeof(module_alloc_base));

 	/*
 	 * Try to map the FDT early. If this fails, we simply bail,

···
 static int __init nfhd_init(void)
 {
 	u32 blocks, bsize;
+	int ret;
 	int i;

 	nfhd_id = nf_get_id("XHDI");
 	if (!nfhd_id)
 		return -ENODEV;

-	major_num = register_blkdev(major_num, "nfhd");
-	if (major_num <= 0) {
+	ret = register_blkdev(major_num, "nfhd");
+	if (ret < 0) {
 		pr_warn("nfhd: unable to get major number\n");
-		return major_num;
+		return ret;
 	}
+
+	if (!major_num)
+		major_num = ret;

 	for (i = NFHD_DEV_OFFSET; i < 24; i++) {
 		if (nfhd_get_capacity(i, 0, &blocks, &bsize))

···
 	  please say 'N' here. If you want a high-performance kernel to run on
 	  new Loongson 3 machines only, please say 'Y' here.

+config CPU_LOONGSON3_WORKAROUNDS
+	bool "Old Loongson 3 LLSC Workarounds"
+	default y if SMP
+	depends on CPU_LOONGSON3
+	help
+	  Loongson 3 processors have the llsc issues which require workarounds.
+	  Without workarounds the system may hang unexpectedly.
+
+	  Newer Loongson 3 will fix these issues and no workarounds are needed.
+	  The workarounds have no significant side effect on them but may
+	  decrease the performance of the system so this option should be
+	  disabled unless the kernel is intended to be run on old systems.
+
+	  If unsure, please say Y.
+
 config CPU_LOONGSON2E
 	bool "Loongson 2E"
 	depends on SYS_HAS_CPU_LOONGSON2E

arch/mips/boot/dts/ingenic/ci20.dts (+4, -4)
···
 	status = "okay";

 	pinctrl-names = "default";
-	pinctrl-0 = <&pins_uart2>;
+	pinctrl-0 = <&pins_uart3>;
 };

 &uart4 {
···
 		bias-disable;
 	};

-	pins_uart2: uart2 {
-		function = "uart2";
-		groups = "uart2-data", "uart2-hwflow";
+	pins_uart3: uart3 {
+		function = "uart3";
+		groups = "uart3-data", "uart3-hwflow";
 		bias-disable;
 	};

···
 #define __smp_mb__before_atomic()	__smp_mb__before_llsc()
 #define __smp_mb__after_atomic()	smp_llsc_mb()

+/*
+ * Some Loongson 3 CPUs have a bug wherein execution of a memory access (load,
+ * store or pref) in between an ll & sc can cause the sc instruction to
+ * erroneously succeed, breaking atomicity. Whilst it's unusual to write code
+ * containing such sequences, this bug bites harder than we might otherwise
+ * expect due to reordering & speculation:
+ *
+ * 1) A memory access appearing prior to the ll in program order may actually
+ *    be executed after the ll - this is the reordering case.
+ *
+ *    In order to avoid this we need to place a memory barrier (ie. a sync
+ *    instruction) prior to every ll instruction, in between it & any earlier
+ *    memory access instructions. Many of these cases are already covered by
+ *    smp_mb__before_llsc() but for the remaining cases, typically ones in
+ *    which multiple CPUs may operate on a memory location but ordering is not
+ *    usually guaranteed, we use loongson_llsc_mb() below.
+ *
+ *    This reordering case is fixed by 3A R2 CPUs, ie. 3A2000 models and later.
+ *
+ * 2) If a conditional branch exists between an ll & sc with a target outside
+ *    of the ll-sc loop, for example an exit upon value mismatch in cmpxchg()
+ *    or similar, then misprediction of the branch may allow speculative
+ *    execution of memory accesses from outside of the ll-sc loop.
+ *
+ *    In order to avoid this we need a memory barrier (ie. a sync instruction)
+ *    at each affected branch target, for which we also use loongson_llsc_mb()
+ *    defined below.
+ *
+ *    This case affects all current Loongson 3 CPUs.
+ */
+#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS /* Loongson-3's LLSC workaround */
+#define loongson_llsc_mb()	__asm__ __volatile__(__WEAK_LLSC_MB : : :"memory")
+#else
+#define loongson_llsc_mb()	do { } while (0)
+#endif
+
 #include <asm-generic/barrier.h>

 #endif /* __ASM_BARRIER_H */

arch/mips/include/asm/bitops.h (+5)
···
 	: "ir" (1UL << bit), GCC_OFF_SMALL_ASM() (*m));
 #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
 	} else if (kernel_uses_llsc && __builtin_constant_p(bit)) {
+		loongson_llsc_mb();
 		do {
 			__asm__ __volatile__(
 			"	" __LL "%0, %1		# set_bit	\n"
···
 		} while (unlikely(!temp));
 #endif /* CONFIG_CPU_MIPSR2 || CONFIG_CPU_MIPSR6 */
 	} else if (kernel_uses_llsc) {
+		loongson_llsc_mb();
 		do {
 			__asm__ __volatile__(
 			"	.set	push				\n"
···
 	: "ir" (~(1UL << bit)));
 #if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
 	} else if (kernel_uses_llsc && __builtin_constant_p(bit)) {
+		loongson_llsc_mb();
 		do {
 			__asm__ __volatile__(
 			"	" __LL "%0, %1		# clear_bit	\n"
···
 		} while (unlikely(!temp));
 #endif /* CONFIG_CPU_MIPSR2 || CONFIG_CPU_MIPSR6 */
 	} else if (kernel_uses_llsc) {
+		loongson_llsc_mb();
 		do {
 			__asm__ __volatile__(
 			"	.set	push				\n"
···
 	unsigned long *m = ((unsigned long *) addr) + (nr >> SZLONG_LOG);
 	unsigned long temp;

+	loongson_llsc_mb();
 	do {
 		__asm__ __volatile__(
 		"	.set	push				\n"

···
 	}

 	/* reprime cause register */
-	write_gcr_error_cause(0);
+	write_gcr_error_cause(cm_error);
 }

arch/mips/kernel/process.c (+3, -4)
···
 static int get_frame_info(struct mips_frame_info *info)
 {
 	bool is_mmips = IS_ENABLED(CONFIG_CPU_MICROMIPS);
-	union mips_instruction insn, *ip, *ip_end;
+	union mips_instruction insn, *ip;
 	const unsigned int max_insns = 128;
 	unsigned int last_insn_size = 0;
 	unsigned int i;
···
 	if (!ip)
 		goto err;

-	ip_end = (void *)ip + info->func_size;
-
-	for (i = 0; i < max_insns && ip < ip_end; i++) {
+	for (i = 0; i < max_insns; i++) {
 		ip = (void *)ip + last_insn_size;
+
 		if (is_mmips && mm_insn_16bit(ip->halfword[0])) {
 			insn.word = ip->halfword[0] << 16;
 			last_insn_size = 2;

arch/mips/loongson64/Platform (+23)
···
 endif

 cflags-$(CONFIG_CPU_LOONGSON3)	+= -Wa,--trap
+
+#
+# Some versions of binutils, not currently mainline as of 2019/02/04, support
+# an -mfix-loongson3-llsc flag which emits a sync prior to each ll instruction
+# to work around a CPU bug (see loongson_llsc_mb() in asm/barrier.h for a
+# description).
+#
+# We disable this in order to prevent the assembler meddling with the
+# instruction that labels refer to, ie. if we label an ll instruction:
+#
+# 1: ll v0, 0(a0)
+#
+# ...then with the assembler fix applied the label may actually point at a sync
+# instruction inserted by the assembler, and if we were using the label in an
+# exception table the table would no longer contain the address of the ll
+# instruction.
+#
+# Avoid this by explicitly disabling that assembler behaviour. If upstream
+# binutils does not merge support for the flag then we can revisit & remove
+# this later - for now it ensures vendor toolchains don't cause problems.
+#
+cflags-$(CONFIG_CPU_LOONGSON3)	+= $(call as-option,-Wa$(comma)-mno-fix-loongson3-llsc,)
+
 #
 # binutils from v2.25 on and gcc starting from v4.9.0 treat -march=loongson3a
 # as MIPS64 R2; older versions as just R1. This leaves the possibility open

arch/mips/loongson64/common/reset.c (+6, -1)
···
 {
 #ifndef CONFIG_LEFI_FIRMWARE_INTERFACE
 	mach_prepare_shutdown();
-	unreachable();
+
+	/*
+	 * It needs a wait loop here, but mips/kernel/reset.c already calls
+	 * a generic delay loop, machine_hang(), so simply return.
+	 */
+	return;
 #else
 	void (*fw_poweroff)(void) = (void *)loongson_sysconf.poweroff_addr;

arch/mips/mm/tlbex.c (+10)
···
 	 * to mimic that here by taking a load/istream page
 	 * fault.
 	 */
+	if (IS_ENABLED(CONFIG_CPU_LOONGSON3_WORKAROUNDS))
+		uasm_i_sync(p, 0);
 	UASM_i_LA(p, ptr, (unsigned long)tlb_do_page_fault_0);
 	uasm_i_jr(p, ptr);
···
 iPTE_LW(u32 **p, unsigned int pte, unsigned int ptr)
 {
 #ifdef CONFIG_SMP
+	if (IS_ENABLED(CONFIG_CPU_LOONGSON3_WORKAROUNDS))
+		uasm_i_sync(p, 0);
 # ifdef CONFIG_PHYS_ADDR_T_64BIT
 	if (cpu_has_64bits)
 		uasm_i_lld(p, pte, 0, ptr);
···
 #endif

 	uasm_l_nopage_tlbl(&l, p);
+	if (IS_ENABLED(CONFIG_CPU_LOONGSON3_WORKAROUNDS))
+		uasm_i_sync(&p, 0);
 	build_restore_work_registers(&p);
 #ifdef CONFIG_CPU_MICROMIPS
 	if ((unsigned long)tlb_do_page_fault_0 & 1) {
···
 #endif

 	uasm_l_nopage_tlbs(&l, p);
+	if (IS_ENABLED(CONFIG_CPU_LOONGSON3_WORKAROUNDS))
+		uasm_i_sync(&p, 0);
 	build_restore_work_registers(&p);
 #ifdef CONFIG_CPU_MICROMIPS
 	if ((unsigned long)tlb_do_page_fault_1 & 1) {
···
 #endif

 	uasm_l_nopage_tlbm(&l, p);
+	if (IS_ENABLED(CONFIG_CPU_LOONGSON3_WORKAROUNDS))
+		uasm_i_sync(&p, 0);
 	build_restore_work_registers(&p);
 #ifdef CONFIG_CPU_MICROMIPS
 	if ((unsigned long)tlb_do_page_fault_1 & 1) {

arch/mips/pci/pci-octeon.c (+5, -5)
···
 	if (octeon_has_feature(OCTEON_FEATURE_PCIE))
 		return 0;

+	if (!octeon_is_pci_host()) {
+		pr_notice("Not in host mode, PCI Controller not initialized\n");
+		return 0;
+	}
+
 	/* Point pcibios_map_irq() to the PCI version of it */
 	octeon_pcibios_map_irq = octeon_pci_pcibios_map_irq;
···
 		octeon_dma_bar_type = OCTEON_DMA_BAR_TYPE_SMALL;
 	else
 		octeon_dma_bar_type = OCTEON_DMA_BAR_TYPE_BIG;
-
-	if (!octeon_is_pci_host()) {
-		pr_notice("Not in host mode, PCI Controller not initialized\n");
-		return 0;
-	}

 	/* PCI I/O and PCI MEM values */
 	set_io_port_base(OCTEON_PCI_IOSPACE_BASE);

···

 #define pmd_move_must_withdraw pmd_move_must_withdraw
 struct spinlock;
-static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
-					 struct spinlock *old_pmd_ptl,
-					 struct vm_area_struct *vma)
-{
-	if (radix_enabled())
-		return false;
-	/*
-	 * Archs like ppc64 use pgtable to store per pmd
-	 * specific information. So when we switch the pmd,
-	 * we should also withdraw and deposit the pgtable
-	 */
-	return true;
-}
-
-
+extern int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
+				  struct spinlock *old_pmd_ptl,
+				  struct vm_area_struct *vma);
+/*
+ * Hash translation mode use the deposited table to store hash pte
+ * slot information.
+ */
 #define arch_needs_pgtable_deposit arch_needs_pgtable_deposit
 static inline bool arch_needs_pgtable_deposit(void)
 {

arch/powerpc/mm/pgtable-book3s64.c (+22)
···
 		atomic_long_read(&direct_pages_count[MMU_PAGE_1G]) << 20);
 }
 #endif /* CONFIG_PROC_FS */
+
+/*
+ * For hash translation mode, we use the deposited table to store hash slot
+ * information and they are stored at PTRS_PER_PMD offset from related pmd
+ * location. Hence a pmd move requires deposit and withdraw.
+ *
+ * For radix translation with split pmd ptl, we store the deposited table in the
+ * pmd page. Hence if we have different pmd page we need to withdraw during pmd
+ * move.
+ *
+ * With hash we use deposited table always irrespective of anon or not.
+ * With radix we use deposited table only for anonymous mapping.
+ */
+int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
+			   struct spinlock *old_pmd_ptl,
+			   struct vm_area_struct *vma)
+{
+	if (radix_enabled())
+		return (new_pmd_ptl != old_pmd_ptl) && vma_is_anonymous(vma);
+
+	return true;
+}

arch/powerpc/platforms/pseries/papr_scm.c (+4, -1)
···
 {
 	unsigned long ret[PLPAR_HCALL_BUFSIZE];
 	uint64_t rc, token;
+	uint64_t saved = 0;

 	/*
 	 * When the hypervisor cannot map all the requested memory in a single
···
 		rc = plpar_hcall(H_SCM_BIND_MEM, ret, p->drc_index, 0,
 				 p->blocks, BIND_ANY_ADDR, token);
 		token = ret[0];
+		if (!saved)
+			saved = ret[1];
 		cond_resched();
 	} while (rc == H_BUSY);

···
 		return -ENXIO;
 	}

-	p->bound_addr = ret[1];
+	p->bound_addr = saved;

 	dev_dbg(&p->pdev->dev, "bound drc %x to %pR\n", p->drc_index, &p->res);

arch/riscv/Kconfig (+1, -1)
···
 	prompt "Base ISA"
 	default ARCH_RV64I
 	help
-	  This selects the base ISA that this kernel will traget and must match
+	  This selects the base ISA that this kernel will target and must match
 	  the target platform.

 config ARCH_RV32I

arch/riscv/configs/defconfig (+5, -3)
···
 CONFIG_EXPERT=y
 CONFIG_BPF_SYSCALL=y
 CONFIG_SMP=y
-CONFIG_PCI=y
-CONFIG_PCIE_XILINX=y
 CONFIG_MODULES=y
 CONFIG_MODULE_UNLOAD=y
 CONFIG_NET=y
···
 CONFIG_IP_PNP_BOOTP=y
 CONFIG_IP_PNP_RARP=y
 CONFIG_NETLINK_DIAG=y
+CONFIG_PCI=y
+CONFIG_PCIEPORTBUS=y
+CONFIG_PCI_HOST_GENERIC=y
+CONFIG_PCIE_XILINX=y
 CONFIG_DEVTMPFS=y
 CONFIG_BLK_DEV_LOOP=y
 CONFIG_VIRTIO_BLK=y
···
 CONFIG_USB_UAS=y
 CONFIG_VIRTIO_MMIO=y
 CONFIG_SIFIVE_PLIC=y
-CONFIG_RAS=y
 CONFIG_EXT4_FS=y
 CONFIG_EXT4_FS_POSIX_ACL=y
 CONFIG_AUTOFS4_FS=y
···
 CONFIG_NFS_V4_2=y
 CONFIG_ROOT_NFS=y
 CONFIG_CRYPTO_USER_API_HASH=y
+CONFIG_CRYPTO_DEV_VIRTIO=y
 CONFIG_PRINTK_TIME=y
 # CONFIG_RCU_TRACE is not set

···
  * This decides where the kernel will search for a free chunk of vm
  * space during mmap's.
  */
-#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE >> 1)
+#define TASK_UNMAPPED_BASE	PAGE_ALIGN(TASK_SIZE / 3)

 #define STACK_TOP	TASK_SIZE
 #define STACK_TOP_MAX	STACK_TOP

···
 	  branches. Requires a compiler with -mindirect-branch=thunk-extern
 	  support for full protection. The kernel may run slower.

-config X86_RESCTRL
-	bool "Resource Control support"
+config X86_CPU_RESCTRL
+	bool "x86 CPU resource control support"
 	depends on X86 && (CPU_SUP_INTEL || CPU_SUP_AMD)
 	select KERNFS
 	help
-	  Enable Resource Control support.
+	  Enable x86 CPU resource control support.

 	  Provide support for the allocation and monitoring of system resources
 	  usage by the CPU.

arch/x86/boot/compressed/head_64.S (+10)
···
 	leal	TRAMPOLINE_32BIT_PGTABLE_OFFSET(%ecx), %eax
 	movl	%eax, %cr3
3:
+	/* Set EFER.LME=1 as a precaution in case hypervisor pulls the rug */
+	pushl	%ecx
+	pushl	%edx
+	movl	$MSR_EFER, %ecx
+	rdmsr
+	btsl	$_EFER_LME, %eax
+	wrmsr
+	popl	%edx
+	popl	%ecx
+
 	/* Enable PAE and LA57 (if required) paging modes */
 	movl	$X86_CR4_PAE, %eax
 	cmpl	$0, %edx

···
 	.id_table	= snbep_uncore_pci_ids,
 };

+#define NODE_ID_MASK	0x7
+
 /*
  * build pci bus to socket mapping
  */
···
 		err = pci_read_config_dword(ubox_dev, nodeid_loc, &config);
 		if (err)
 			break;
-		nodeid = config;
+		nodeid = config & NODE_ID_MASK;
 		/* get the Node ID mapping */
 		err = pci_read_config_dword(ubox_dev, idmap_loc, &config);
 		if (err)

arch/x86/include/asm/intel-family.h (+2, -1)
···
  * "Big Core" Processors (Branded as Core, Xeon, etc...)
  *
  * The "_X" parts are generally the EP and EX Xeons, or the
- * "Extreme" ones, like Broadwell-E.
+ * "Extreme" ones, like Broadwell-E, or Atom microserver.
  *
  * While adding a new CPUID for a new microarchitecture, add a new
  * group to keep logically sorted out in chronological order. Within
···
 #define INTEL_FAM6_ATOM_GOLDMONT	0x5C /* Apollo Lake */
 #define INTEL_FAM6_ATOM_GOLDMONT_X	0x5F /* Denverton */
 #define INTEL_FAM6_ATOM_GOLDMONT_PLUS	0x7A /* Gemini Lake */
+#define INTEL_FAM6_ATOM_TREMONT_X	0x86 /* Jacobsville */

 /* Xeon Phi */

···
 #include <linux/mod_devicetable.h>
 #include <linux/mm.h>
 #include <linux/sched.h>
+#include <linux/sched/smt.h>
 #include <linux/slab.h>
 #include <linux/tboot.h>
 #include <linux/trace_events.h>
···
 	 * Warn upon starting the first VM in a potentially
 	 * insecure environment.
 	 */
-	if (cpu_smt_control == CPU_SMT_ENABLED)
+	if (sched_smt_active())
 		pr_warn_once(L1TF_MSG_SMT);
 	if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER)
 		pr_warn_once(L1TF_MSG_L1D);

arch/x86/kvm/x86.c (+7)
···
 {
 	u32 access = (kvm_x86_ops->get_cpl(vcpu) == 3) ? PFERR_USER_MASK : 0;

+	/*
+	 * FIXME: this should call handle_emulation_failure if X86EMUL_IO_NEEDED
+	 * is returned, but our callers are not ready for that and they blindly
+	 * call kvm_inject_page_fault. Ensure that they at least do not leak
+	 * uninitialized kernel stack memory into cr2 and error code.
+	 */
+	memset(exception, 0, sizeof(*exception));
 	return kvm_read_guest_virt_helper(addr, val, bytes, vcpu, access,
 					  exception);
 }

···

 #endif

+/*
+ * See set_mce_nospec().
+ *
+ * Machine check recovery code needs to change cache mode of poisoned pages to
+ * UC to avoid speculative access logging another error. But passing the
+ * address of the 1:1 mapping to set_memory_uc() is a fine way to encourage a
+ * speculative access. So we cheat and flip the top bit of the address. This
+ * works fine for the code that updates the page tables. But at the end of the
+ * process we need to flush the TLB and cache and the non-canonical address
+ * causes a #GP fault when used by the INVLPG and CLFLUSH instructions.
+ *
+ * But in the common case we already have a canonical address. This code
+ * will fix the top bit if needed and is a no-op otherwise.
+ */
+static inline unsigned long fix_addr(unsigned long addr)
+{
+#ifdef CONFIG_X86_64
+	return (long)(addr << 1) >> 1;
+#else
+	return addr;
+#endif
+}
+
 static unsigned long __cpa_addr(struct cpa_data *cpa, unsigned long idx)
 {
 	if (cpa->flags & CPA_PAGES_ARRAY) {
···
 	unsigned int i;

 	for (i = 0; i < cpa->numpages; i++)
-		__flush_tlb_one_kernel(__cpa_addr(cpa, i));
+		__flush_tlb_one_kernel(fix_addr(__cpa_addr(cpa, i)));
 }

 static void cpa_flush(struct cpa_data *data, int cache)
···
 		 * Only flush present addresses:
 		 */
 		if (pte && (pte_val(*pte) & _PAGE_PRESENT))
-			clflush_cache_range_opt((void *)addr, PAGE_SIZE);
+			clflush_cache_range_opt((void *)fix_addr(addr), PAGE_SIZE);
 	}
 	mb();
 }
···
 	cpa->numpages = numpages;
 	return ret;
 }
-
-/*
- * Machine check recovery code needs to change cache mode of poisoned
- * pages to UC to avoid speculative access logging another error. But
- * passing the address of the 1:1 mapping to set_memory_uc() is a fine
- * way to encourage a speculative access. So we cheat and flip the top
- * bit of the address. This works fine for the code that updates the
- * page tables. But at the end of the process we need to flush the cache
- * and the non-canonical address causes a #GP fault when used by the
- * CLFLUSH instruction.
- *
- * But in the common case we already have a canonical address. This code
- * will fix the top bit if needed and is a no-op otherwise.
- */
-static inline unsigned long make_addr_canonical_again(unsigned long addr)
-{
-#ifdef CONFIG_X86_64
-	return (long)(addr << 1) >> 1;
-#else
-	return addr;
-#endif
-}
-

 static int change_page_attr_set_clr(unsigned long *addr, int numpages,
 				    pgprot_t mask_set, pgprot_t mask_clr,

arch/xtensa/Kconfig (+2, -2)
···
 	  If unsure, say N.

 config XTENSA_UNALIGNED_USER
-	bool "Unaligned memory access in use space"
+	bool "Unaligned memory access in user space"
 	help
 	  The Xtensa architecture currently does not handle unaligned
 	  memory accesses in hardware but through an exception handler.
···
 	help
 	  Include support for flattened device tree machine descriptions.

-config BUILTIN_DTB
+config BUILTIN_DTB_SOURCE
 	string "DTB to build into the kernel image"
 	depends on OF

arch/xtensa/boot/dts/Makefile (+3, -3)
···
 #
 #

-BUILTIN_DTB := $(patsubst "%",%,$(CONFIG_BUILTIN_DTB)).dtb.o
-ifneq ($(CONFIG_BUILTIN_DTB),"")
-obj-$(CONFIG_OF) += $(BUILTIN_DTB)
+BUILTIN_DTB_SOURCE := $(patsubst "%",%,$(CONFIG_BUILTIN_DTB_SOURCE)).dtb.o
+ifneq ($(CONFIG_BUILTIN_DTB_SOURCE),"")
+obj-$(CONFIG_OF) += $(BUILTIN_DTB_SOURCE)
 endif

 # for CONFIG_OF_ALL_DTBS test

arch/xtensa/configs/audio_kc705_defconfig (+1, -1)
···
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="earlycon=uart8250,mmio32native,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug memmap=0x38000000@0"
 CONFIG_USE_OF=y
-CONFIG_BUILTIN_DTB="kc705"
+CONFIG_BUILTIN_DTB_SOURCE="kc705"
 # CONFIG_COMPACTION is not set
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_PM=y

arch/xtensa/configs/cadence_csp_defconfig (+1, -1)
···
 # CONFIG_PCI is not set
 CONFIG_XTENSA_PLATFORM_XTFPGA=y
 CONFIG_USE_OF=y
-CONFIG_BUILTIN_DTB="csp"
+CONFIG_BUILTIN_DTB_SOURCE="csp"
 # CONFIG_COMPACTION is not set
 CONFIG_XTFPGA_LCD=y
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set

arch/xtensa/configs/generic_kc705_defconfig (+1, -1)
···
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="earlycon=uart8250,mmio32native,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug memmap=0x38000000@0"
 CONFIG_USE_OF=y
-CONFIG_BUILTIN_DTB="kc705"
+CONFIG_BUILTIN_DTB_SOURCE="kc705"
 # CONFIG_COMPACTION is not set
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_NET=y

···
 CONFIG_HOTPLUG_CPU=y
 # CONFIG_INITIALIZE_XTENSA_MMU_INSIDE_VMLINUX is not set
 # CONFIG_PCI is not set
+CONFIG_VECTORS_OFFSET=0x00002000
 CONFIG_XTENSA_PLATFORM_XTFPGA=y
 CONFIG_CMDLINE_BOOL=y
 CONFIG_CMDLINE="earlycon=uart8250,mmio32native,0xfd050020,115200n8 console=ttyS0,115200n8 ip=dhcp root=/dev/nfs rw debug memmap=96M@0"
 CONFIG_USE_OF=y
-CONFIG_BUILTIN_DTB="lx200mx"
+CONFIG_BUILTIN_DTB_SOURCE="lx200mx"
 # CONFIG_COMPACTION is not set
 # CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
 CONFIG_NET=y

···

 	acpi_permanent_mmap = true;

+	/* Initialize debug output. Linux does not use ACPICA defaults */
+	acpi_dbg_level = ACPI_LV_INFO | ACPI_LV_REPAIR;
+
 #ifdef CONFIG_X86
 	/*
 	 * If the machine falls into the DMI check table,

drivers/android/binder.c (+22, -15)
···
 static int __init binder_init(void)
 {
 	int ret;
-	char *device_name, *device_names, *device_tmp;
+	char *device_name, *device_tmp;
 	struct binder_device *device;
 	struct hlist_node *tmp;
+	char *device_names = NULL;

 	ret = binder_alloc_shrinker_init();
 	if (ret)
···
 				    &transaction_log_fops);
 	}

-	/*
-	 * Copy the module_parameter string, because we don't want to
-	 * tokenize it in-place.
-	 */
-	device_names = kstrdup(binder_devices_param, GFP_KERNEL);
-	if (!device_names) {
-		ret = -ENOMEM;
-		goto err_alloc_device_names_failed;
+	if (strcmp(binder_devices_param, "") != 0) {
+		/*
+		 * Copy the module_parameter string, because we don't want to
+		 * tokenize it in-place.
+		 */
+		device_names = kstrdup(binder_devices_param, GFP_KERNEL);
+		if (!device_names) {
+			ret = -ENOMEM;
+			goto err_alloc_device_names_failed;
+		}
+
+		device_tmp = device_names;
+		while ((device_name = strsep(&device_tmp, ","))) {
+			ret = init_binder_device(device_name);
+			if (ret)
+				goto err_init_binder_device_failed;
+		}
 	}

-	device_tmp = device_names;
-	while ((device_name = strsep(&device_tmp, ","))) {
-		ret = init_binder_device(device_name);
-		if (ret)
-			goto err_init_binder_device_failed;
-	}
+	ret = init_binderfs();
+	if (ret)
+		goto err_init_binder_device_failed;

 	return ret;

+9
drivers/android/binder_internal.h
···4646}4747#endif48484949+#ifdef CONFIG_ANDROID_BINDERFS5050+extern int __init init_binderfs(void);5151+#else5252+static inline int __init init_binderfs(void)5353+{5454+ return 0;5555+}5656+#endif5757+4958#endif /* _LINUX_BINDER_INTERNAL_H */
+10-4
drivers/android/binderfs.c
···395395 struct inode *inode = NULL;396396 struct dentry *root = sb->s_root;397397 struct binderfs_info *info = sb->s_fs_info;398398+#if defined(CONFIG_IPC_NS)399399+ bool use_reserve = (info->ipc_ns == &init_ipc_ns);400400+#else401401+ bool use_reserve = true;402402+#endif398403399404 device = kzalloc(sizeof(*device), GFP_KERNEL);400405 if (!device)···418413419414 /* Reserve a new minor number for the new device. */420415 mutex_lock(&binderfs_minors_mutex);421421- minor = ida_alloc_max(&binderfs_minors, BINDERFS_MAX_MINOR, GFP_KERNEL);416416+ minor = ida_alloc_max(&binderfs_minors,417417+ use_reserve ? BINDERFS_MAX_MINOR :418418+ BINDERFS_MAX_MINOR_CAPPED,419419+ GFP_KERNEL);422420 mutex_unlock(&binderfs_minors_mutex);423421 if (minor < 0) {424422 ret = minor;···550542 .fs_flags = FS_USERNS_MOUNT,551543};552544553553-static int __init init_binderfs(void)545545+int __init init_binderfs(void)554546{555547 int ret;556548···568560569561 return ret;570562}571571-572572-device_initcall(init_binderfs);
···7979 ct_idx = get_cacheinfo_idx(this_leaf->type);8080 propname = cache_type_info[ct_idx].size_prop;81818282- if (of_property_read_u32(np, propname, &this_leaf->size))8383- this_leaf->size = 0;8282+ of_property_read_u32(np, propname, &this_leaf->size);8483}85848685/* not cache_line_size() because that's a macro in include/linux/cache.h */···113114 ct_idx = get_cacheinfo_idx(this_leaf->type);114115 propname = cache_type_info[ct_idx].nr_sets_prop;115116116116- if (of_property_read_u32(np, propname, &this_leaf->number_of_sets))117117- this_leaf->number_of_sets = 0;117117+ of_property_read_u32(np, propname, &this_leaf->number_of_sets);118118}119119120120static void cache_associativity(struct cacheinfo *this_leaf)
+5-5
drivers/base/power/runtime.c
···130130{131131 int autosuspend_delay;132132 u64 last_busy, expires = 0;133133- u64 now = ktime_to_ns(ktime_get());133133+ u64 now = ktime_get_mono_fast_ns();134134135135 if (!dev->power.use_autosuspend)136136 goto out;···909909 * If 'expires' is after the current time, we've been called910910 * too early.911911 */912912- if (expires > 0 && expires < ktime_to_ns(ktime_get())) {912912+ if (expires > 0 && expires < ktime_get_mono_fast_ns()) {913913 dev->power.timer_expires = 0;914914 rpm_suspend(dev, dev->power.timer_autosuspends ?915915 (RPM_ASYNC | RPM_AUTO) : RPM_ASYNC);···928928int pm_schedule_suspend(struct device *dev, unsigned int delay)929929{930930 unsigned long flags;931931- ktime_t expires;931931+ u64 expires;932932 int retval;933933934934 spin_lock_irqsave(&dev->power.lock, flags);···945945 /* Other scheduled or pending requests need to be canceled. */946946 pm_runtime_cancel_pending(dev);947947948948- expires = ktime_add(ktime_get(), ms_to_ktime(delay));949949- dev->power.timer_expires = ktime_to_ns(expires);948948+ expires = ktime_get_mono_fast_ns() + (u64)delay * NSEC_PER_MSEC;949949+ dev->power.timer_expires = expires;950950 dev->power.timer_autosuspends = 0;951951 hrtimer_start(&dev->power.suspend_timer, expires, HRTIMER_MODE_ABS);952952
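The runtime-PM change above drops the ktime_t round-trips in favour of plain u64 nanosecond arithmetic on ktime_get_mono_fast_ns(). A minimal model of the expiry computation in the patched pm_schedule_suspend(), with the clock read passed in as a parameter rather than taken from a real clock:

```c
#include <stdint.h>

#define NSEC_PER_MSEC 1000000ULL

/*
 * Absolute expiry in ns for a relative delay in ms. The widening cast
 * mirrors the kernel's "(u64)delay * NSEC_PER_MSEC" and keeps even the
 * largest 32-bit delay from overflowing.
 */
static uint64_t suspend_expires(uint64_t now_ns, unsigned int delay_ms)
{
	return now_ns + (uint64_t)delay_ms * NSEC_PER_MSEC;
}
```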
+12-2
drivers/clk/clk.c
···15131513 if (!parent)15141514 return -EINVAL;1515151515161516- for (i = 0; i < core->num_parents; i++)15171517- if (clk_core_get_parent_by_index(core, i) == parent)15161516+ for (i = 0; i < core->num_parents; i++) {15171517+ if (core->parents[i] == parent)15181518 return i;15191519+15201520+ if (core->parents[i])15211521+ continue;15221522+15231523+ /* Fallback to comparing globally unique names */15241524+ if (!strcmp(parent->name, core->parent_names[i])) {15251525+ core->parents[i] = parent;15261526+ return i;15271527+ }15281528+ }1519152915201530 return -EINVAL;15211531}
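The clk.c hunk above makes parent lookup tolerate an unpopulated cache: check the cached pointers first, fall back to comparing globally unique names for empty slots, and cache a name match for the next call. A simplified model of that logic (the structs are illustrative stand-ins for struct clk_core, not the real layout):

```c
#include <string.h>

struct fake_clk {
	const char *name;
};

struct fake_core {
	int num_parents;
	struct fake_clk **parents;	/* lazily filled cache */
	const char **parent_names;
};

/* Simplified model of the patched clk_fetch_parent_index(). */
static int fetch_parent_index(struct fake_core *core, struct fake_clk *parent)
{
	int i;

	for (i = 0; i < core->num_parents; i++) {
		if (core->parents[i] == parent)
			return i;

		if (core->parents[i])
			continue;

		/* Fallback to comparing globally unique names */
		if (!strcmp(parent->name, core->parent_names[i])) {
			core->parents[i] = parent;	/* cache the hit */
			return i;
		}
	}

	return -1;	/* -EINVAL in the kernel */
}

/* Exercise both paths: first call matches by name, second hits the cache. */
static int fetch_parent_demo(void)
{
	static struct fake_clk osc = { "osc" };
	static const char *names[] = { "pll", "osc" };
	static struct fake_clk *cache[2];
	struct fake_core core = { 2, cache, names };

	if (fetch_parent_index(&core, &osc) != 1)
		return 0;
	if (cache[1] != &osc)
		return 0;
	return fetch_parent_index(&core, &osc) == 1;
}
```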
···2121 local_irq_enable();2222 if (!current_set_polling_and_test()) {2323 unsigned int loop_count = 0;2424- u64 limit = TICK_USEC;2424+ u64 limit = TICK_NSEC;2525 int i;26262727 for (i = 1; i < drv->state_count; i++) {
···406406 }407407}408408409409-static int bcm2835_dma_abort(void __iomem *chan_base)409409+static int bcm2835_dma_abort(struct bcm2835_chan *c)410410{411411- unsigned long cs;411411+ void __iomem *chan_base = c->chan_base;412412 long int timeout = 10000;413413414414- cs = readl(chan_base + BCM2835_DMA_CS);415415- if (!(cs & BCM2835_DMA_ACTIVE))414414+ /*415415+ * A zero control block address means the channel is idle.416416+ * (The ACTIVE flag in the CS register is not a reliable indicator.)417417+ */418418+ if (!readl(chan_base + BCM2835_DMA_ADDR))416419 return 0;417420418421 /* Write 0 to the active bit - Pause the DMA */419422 writel(0, chan_base + BCM2835_DMA_CS);420423421424 /* Wait for any current AXI transfer to complete */422422- while ((cs & BCM2835_DMA_ISPAUSED) && --timeout) {425425+ while ((readl(chan_base + BCM2835_DMA_CS) &426426+ BCM2835_DMA_WAITING_FOR_WRITES) && --timeout)423427 cpu_relax();424424- cs = readl(chan_base + BCM2835_DMA_CS);425425- }426428427427- /* We'll un-pause when we set of our next DMA */429429+ /* Peripheral might be stuck and fail to signal AXI write responses */428430 if (!timeout)429429- return -ETIMEDOUT;431431+ dev_err(c->vc.chan.device->dev,432432+ "failed to complete outstanding writes\n");430433431431- if (!(cs & BCM2835_DMA_ACTIVE))432432- return 0;433433-434434- /* Terminate the control block chain */435435- writel(0, chan_base + BCM2835_DMA_NEXTCB);436436-437437- /* Abort the whole DMA */438438- writel(BCM2835_DMA_ABORT | BCM2835_DMA_ACTIVE,439439- chan_base + BCM2835_DMA_CS);440440-434434+ writel(BCM2835_DMA_RESET, chan_base + BCM2835_DMA_CS);441435 return 0;442436}443437···470476471477 spin_lock_irqsave(&c->vc.lock, flags);472478473473- /* Acknowledge interrupt */474474- writel(BCM2835_DMA_INT, c->chan_base + BCM2835_DMA_CS);479479+ /*480480+ * Clear the INT flag to receive further interrupts. 
Keep the channel481481+ * active in case the descriptor is cyclic or in case the client has482482+ * already terminated the descriptor and issued a new one. (May happen483483+ * if this IRQ handler is threaded.) If the channel is finished, it484484+ * will remain idle despite the ACTIVE flag being set.485485+ */486486+ writel(BCM2835_DMA_INT | BCM2835_DMA_ACTIVE,487487+ c->chan_base + BCM2835_DMA_CS);475488476489 d = c->desc;477490···486485 if (d->cyclic) {487486 /* call the cyclic callback */488487 vchan_cyclic_callback(&d->vd);489489-490490- /* Keep the DMA engine running */491491- writel(BCM2835_DMA_ACTIVE,492492- c->chan_base + BCM2835_DMA_CS);493493- } else {488488+ } else if (!readl(c->chan_base + BCM2835_DMA_ADDR)) {494489 vchan_cookie_complete(&c->desc->vd);495490 bcm2835_dma_start_desc(c);496491 }···776779 struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);777780 struct bcm2835_dmadev *d = to_bcm2835_dma_dev(c->vc.chan.device);778781 unsigned long flags;779779- int timeout = 10000;780782 LIST_HEAD(head);781783782784 spin_lock_irqsave(&c->vc.lock, flags);···785789 list_del_init(&c->node);786790 spin_unlock(&d->lock);787791788788- /*789789- * Stop DMA activity: we assume the callback will not be called790790- * after bcm_dma_abort() returns (even if it does, it will see791791- * c->desc is NULL and exit.)792792- */792792+ /* stop DMA activity */793793 if (c->desc) {794794 vchan_terminate_vdesc(&c->desc->vd);795795 c->desc = NULL;796796- bcm2835_dma_abort(c->chan_base);797797-798798- /* Wait for stopping */799799- while (--timeout) {800800- if (!(readl(c->chan_base + BCM2835_DMA_CS) &801801- BCM2835_DMA_ACTIVE))802802- break;803803-804804- cpu_relax();805805- }806806-807807- if (!timeout)808808- dev_err(d->ddev.dev, "DMA transfer could not be terminated\n");796796+ bcm2835_dma_abort(c);809797 }810798811799 vchan_get_all_descriptors(&c->vc, &head);
···253253 struct vf610_gpio_port *port;254254 struct resource *iores;255255 struct gpio_chip *gc;256256+ int i;256257 int ret;257258258259 port = devm_kzalloc(&pdev->dev, sizeof(*port), GFP_KERNEL);···319318 ret = gpiochip_add_data(gc, port);320319 if (ret < 0)321320 return ret;321321+322322+ /* Mask all GPIO interrupts */323323+ for (i = 0; i < gc->ngpio; i++)324324+ vf610_gpio_writel(0, port->base + PORT_PCR(i));322325323326 /* Clear the interrupt status register for all GPIO's */324327 vf610_gpio_writel(~0, port->base + PORT_ISFR);
+8-1
drivers/gpio/gpiolib.c
···828828 /* Do not leak kernel stack to userspace */829829 memset(&ge, 0, sizeof(ge));830830831831- ge.timestamp = le->timestamp;831831+ /*832832+ * We may be running from a nested threaded interrupt in which case833833+ * we didn't get the timestamp from lineevent_irq_handler().834834+ */835835+ if (!le->timestamp)836836+ ge.timestamp = ktime_get_real_ns();837837+ else838838+ ge.timestamp = le->timestamp;832839833840 if (le->eflags & GPIOEVENT_REQUEST_RISING_EDGE834841 && le->eflags & GPIOEVENT_REQUEST_FALLING_EDGE) {
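The gpiolib fix above falls back to stamping the event at read time when the hard-IRQ handler never ran (nested threaded interrupt). The selection logic, with ktime_get_real_ns() stubbed as a callback so it can run in userspace:

```c
#include <stdint.h>

/*
 * Use the timestamp captured in lineevent_irq_handler() when there is
 * one, otherwise stamp the event now. now_ns stands in for
 * ktime_get_real_ns(); a zero cached value means "never stamped".
 */
static uint64_t event_timestamp(uint64_t cached, uint64_t (*now_ns)(void))
{
	return cached ? cached : now_ns();
}

static uint64_t fake_now(void)
{
	return 42;	/* arbitrary "current time" for the demo */
}
```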
···3838#include "amdgpu_gem.h"3939#include <drm/amdgpu_drm.h>4040#include <linux/dma-buf.h>4141+#include <linux/dma-fence-array.h>41424243/**4344 * amdgpu_gem_prime_get_sg_table - &drm_driver.gem_prime_get_sg_table···188187 return ERR_PTR(ret);189188}190189190190+static int191191+__reservation_object_make_exclusive(struct reservation_object *obj)192192+{193193+ struct dma_fence **fences;194194+ unsigned int count;195195+ int r;196196+197197+ if (!reservation_object_get_list(obj)) /* no shared fences to convert */198198+ return 0;199199+200200+ r = reservation_object_get_fences_rcu(obj, NULL, &count, &fences);201201+ if (r)202202+ return r;203203+204204+ if (count == 0) {205205+ /* Now that was unexpected. */206206+ } else if (count == 1) {207207+ reservation_object_add_excl_fence(obj, fences[0]);208208+ dma_fence_put(fences[0]);209209+ kfree(fences);210210+ } else {211211+ struct dma_fence_array *array;212212+213213+ array = dma_fence_array_create(count, fences,214214+ dma_fence_context_alloc(1), 0,215215+ false);216216+ if (!array)217217+ goto err_fences_put;218218+219219+ reservation_object_add_excl_fence(obj, &array->base);220220+ dma_fence_put(&array->base);221221+ }222222+223223+ return 0;224224+225225+err_fences_put:226226+ while (count--)227227+ dma_fence_put(fences[count]);228228+ kfree(fences);229229+ return -ENOMEM;230230+}231231+191232/**192233 * amdgpu_gem_map_attach - &dma_buf_ops.attach implementation193234 * @dma_buf: Shared DMA buffer···261218262219 if (attach->dev->driver != adev->dev->driver) {263220 /*264264- * Wait for all shared fences to complete before we switch to future265265- * use of exclusive fence on this prime shared bo.221221+ * We only create shared fences for internal use, but importers222222+ * of the dmabuf rely on exclusive fences for implicitly223223+ * tracking write hazards. 
As any of the current fences may224224+ * correspond to a write, we need to convert all existing225225+ * fences on the reservation object into a single exclusive226226+ * fence.266227 */267267- r = reservation_object_wait_timeout_rcu(bo->tbo.resv,268268- true, false,269269- MAX_SCHEDULE_TIMEOUT);270270- if (unlikely(r < 0)) {271271- DRM_DEBUG_PRIME("Fence wait failed: %li\n", r);228228+ r = __reservation_object_make_exclusive(bo->tbo.resv);229229+ if (r)272230 goto error_unreserve;273273- }274231 }275232276233 /* pin buffer into GTT */
+3-2
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
···33633363 struct amdgpu_task_info *task_info)33643364{33653365 struct amdgpu_vm *vm;33663366+ unsigned long flags;3366336733673367- spin_lock(&adev->vm_manager.pasid_lock);33683368+ spin_lock_irqsave(&adev->vm_manager.pasid_lock, flags);3368336933693370 vm = idr_find(&adev->vm_manager.pasid_idr, pasid);33703371 if (vm)33713372 *task_info = vm->task_info;3372337333733373- spin_unlock(&adev->vm_manager.pasid_lock);33743374+ spin_unlock_irqrestore(&adev->vm_manager.pasid_lock, flags);33743375}3375337633763377/**
···591591 dc,592592 context->bw.dce.sclk_khz);593593594594- pp_display_cfg->min_dcfclock_khz = pp_display_cfg->min_engine_clock_khz;594594+ /*595595+ * As a workaround for >4x4K lightup set dcfclock to min_engine_clock value.596596+ * This is not required for less than 5 displays,597597+ * thus don't request dcfclk in dc to avoid impact598598+ * on power saving.599599+ *600600+ */601601+ pp_display_cfg->min_dcfclock_khz = (context->stream_count > 4) ?602602+ pp_display_cfg->min_engine_clock_khz : 0;595603596604 pp_display_cfg->min_engine_clock_deep_sleep_khz597605 = context->bw.dce.sclk_deep_sleep_khz;
···1541515415 }1541615416}15417154171541815418+static bool has_bogus_dpll_config(const struct intel_crtc_state *crtc_state)1541915419+{1542015420+ struct drm_i915_private *dev_priv = to_i915(crtc_state->base.crtc->dev);1542115421+1542215422+ /*1542315423+ * Some SNB BIOSen (eg. ASUS K53SV) are known to misprogram1542415424+ * the hardware when a high res displays plugged in. DPLL P1542515425+ * divider is zero, and the pipe timings are bonkers. We'll1542615426+ * try to disable everything in that case.1542715427+ *1542815428+ * FIXME would be nice to be able to sanitize this state1542915429+ * without several WARNs, but for now let's take the easy1543015430+ * road.1543115431+ */1543215432+ return IS_GEN6(dev_priv) &&1543315433+ crtc_state->base.active &&1543415434+ crtc_state->shared_dpll &&1543515435+ crtc_state->port_clock == 0;1543615436+}1543715437+1541815438static void intel_sanitize_encoder(struct intel_encoder *encoder)1541915439{1542015440 struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);1542115441 struct intel_connector *connector;1544215442+ struct intel_crtc *crtc = to_intel_crtc(encoder->base.crtc);1544315443+ struct intel_crtc_state *crtc_state = crtc ?1544415444+ to_intel_crtc_state(crtc->base.state) : NULL;15422154451542315446 /* We need to check both for a crtc link (meaning that the1542415447 * encoder is active and trying to read from a pipe) and the1542515448 * pipe itself being active. */1542615426- bool has_active_crtc = encoder->base.crtc &&1542715427- to_intel_crtc(encoder->base.crtc)->active;1544915449+ bool has_active_crtc = crtc_state &&1545015450+ crtc_state->base.active;1545115451+1545215452+ if (crtc_state && has_bogus_dpll_config(crtc_state)) {1545315453+ DRM_DEBUG_KMS("BIOS has misprogrammed the hardware. 
Disabling pipe %c\n",1545415454+ pipe_name(crtc->pipe));1545515455+ has_active_crtc = false;1545615456+ }15428154571542915458 connector = intel_encoder_find_connector(encoder);1543015459 if (connector && !has_active_crtc) {···1546415435 /* Connector is active, but has no active pipe. This is1546515436 * fallout from our resume register restoring. Disable1546615437 * the encoder manually again. */1546715467- if (encoder->base.crtc) {1546815468- struct drm_crtc_state *crtc_state = encoder->base.crtc->state;1543815438+ if (crtc_state) {1543915439+ struct drm_encoder *best_encoder;15469154401547015441 DRM_DEBUG_KMS("[ENCODER:%d:%s] manually disabled\n",1547115442 encoder->base.base.id,1547215443 encoder->base.name);1544415444+1544515445+ /* avoid oopsing in case the hooks consult best_encoder */1544615446+ best_encoder = connector->base.state->best_encoder;1544715447+ connector->base.state->best_encoder = &encoder->base;1544815448+1547315449 if (encoder->disable)1547415474- encoder->disable(encoder, to_intel_crtc_state(crtc_state), connector->base.state);1545015450+ encoder->disable(encoder, crtc_state,1545115451+ connector->base.state);1547515452 if (encoder->post_disable)1547615476- encoder->post_disable(encoder, to_intel_crtc_state(crtc_state), connector->base.state);1545315453+ encoder->post_disable(encoder, crtc_state,1545415454+ connector->base.state);1545515455+1545615456+ connector->base.state->best_encoder = best_encoder;1547715457 }1547815458 encoder->base.crtc = NULL;1547915459
···11-//SPDX-License-Identifier: GPL-2.0+11+// SPDX-License-Identifier: GPL-2.022/*33 * Copyright (C) Fuzhou Rockchip Electronics Co.Ltd44 * Author:55 * Sandy Huang <hjc@rock-chips.com>66- *77- * This software is licensed under the terms of the GNU General Public88- * License version 2, as published by the Free Software Foundation, and99- * may be copied, distributed, and modified under those terms.1010- *1111- * This program is distributed in the hope that it will be useful,1212- * but WITHOUT ANY WARRANTY; without even the implied warranty of1313- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the1414- * GNU General Public License for more details.156 */167178#include <drm/drmP.h>
+1-10
drivers/gpu/drm/rockchip/rockchip_rgb.h
···11-//SPDX-License-Identifier: GPL-2.0+11+/* SPDX-License-Identifier: GPL-2.0 */22/*33 * Copyright (C) Fuzhou Rockchip Electronics Co.Ltd44 * Author:55 * Sandy Huang <hjc@rock-chips.com>66- *77- * This software is licensed under the terms of the GNU General Public88- * License version 2, as published by the Free Software Foundation, and99- * may be copied, distributed, and modified under those terms.1010- *1111- * This program is distributed in the hope that it will be useful,1212- * but WITHOUT ANY WARRANTY; without even the implied warranty of1313- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the1414- * GNU General Public License for more details.156 */167178#ifdef CONFIG_ROCKCHIP_RGB
···2727#include <linux/iio/machine.h>2828#include <linux/iio/driver.h>29293030-#define AXP288_ADC_EN_MASK 0xF13131-#define AXP288_ADC_TS_PIN_GPADC 0xF23232-#define AXP288_ADC_TS_PIN_ON 0xF33030+/*3131+ * This mask enables all ADCs except for the battery temp-sensor (TS), that is3232+ * left as-is to avoid breaking charging on devices without a temp-sensor.3333+ */3434+#define AXP288_ADC_EN_MASK 0xF03535+#define AXP288_ADC_TS_ENABLE 0x013636+3737+#define AXP288_ADC_TS_CURRENT_ON_OFF_MASK GENMASK(1, 0)3838+#define AXP288_ADC_TS_CURRENT_OFF (0 << 0)3939+#define AXP288_ADC_TS_CURRENT_ON_WHEN_CHARGING (1 << 0)4040+#define AXP288_ADC_TS_CURRENT_ON_ONDEMAND (2 << 0)4141+#define AXP288_ADC_TS_CURRENT_ON (3 << 0)33423443enum axp288_adc_id {3544 AXP288_ADC_TS,···5344struct axp288_adc_info {5445 int irq;5546 struct regmap *regmap;4747+ bool ts_enabled;5648};57495850static const struct iio_chan_spec axp288_adc_channels[] = {···125115 return IIO_VAL_INT;126116}127117128128-static int axp288_adc_set_ts(struct regmap *regmap, unsigned int mode,129129- unsigned long address)118118+/*119119+ * The current-source used for the battery temp-sensor (TS) is shared120120+ * with the GPADC. For proper fuel-gauge and charger operation the TS121121+ * current-source needs to be permanently on. 
But to read the GPADC we122122+ * need to temporary switch the TS current-source to ondemand, so that123123+ * the GPADC can use it, otherwise we will always read an all 0 value.124124+ */125125+static int axp288_adc_set_ts(struct axp288_adc_info *info,126126+ unsigned int mode, unsigned long address)130127{131128 int ret;132129133133- /* channels other than GPADC do not need to switch TS pin */130130+ /* No need to switch the current-source if the TS pin is disabled */131131+ if (!info->ts_enabled)132132+ return 0;133133+134134+ /* Channels other than GPADC do not need the current source */134135 if (address != AXP288_GP_ADC_H)135136 return 0;136137137137- ret = regmap_write(regmap, AXP288_ADC_TS_PIN_CTRL, mode);138138+ ret = regmap_update_bits(info->regmap, AXP288_ADC_TS_PIN_CTRL,139139+ AXP288_ADC_TS_CURRENT_ON_OFF_MASK, mode);138140 if (ret)139141 return ret;140142141143 /* When switching to the GPADC pin give things some time to settle */142142- if (mode == AXP288_ADC_TS_PIN_GPADC)144144+ if (mode == AXP288_ADC_TS_CURRENT_ON_ONDEMAND)143145 usleep_range(6000, 10000);144146145147 return 0;···167145 mutex_lock(&indio_dev->mlock);168146 switch (mask) {169147 case IIO_CHAN_INFO_RAW:170170- if (axp288_adc_set_ts(info->regmap, AXP288_ADC_TS_PIN_GPADC,148148+ if (axp288_adc_set_ts(info, AXP288_ADC_TS_CURRENT_ON_ONDEMAND,171149 chan->address)) {172150 dev_err(&indio_dev->dev, "GPADC mode\n");173151 ret = -EINVAL;174152 break;175153 }176154 ret = axp288_adc_read_channel(val, chan->address, info->regmap);177177- if (axp288_adc_set_ts(info->regmap, AXP288_ADC_TS_PIN_ON,155155+ if (axp288_adc_set_ts(info, AXP288_ADC_TS_CURRENT_ON,178156 chan->address))179157 dev_err(&indio_dev->dev, "TS pin restore\n");180158 break;···186164 return ret;187165}188166189189-static int axp288_adc_set_state(struct regmap *regmap)167167+static int axp288_adc_initialize(struct axp288_adc_info *info)190168{191191- /* ADC should be always enabled for internal FG to function */192192- if 
(regmap_write(regmap, AXP288_ADC_TS_PIN_CTRL, AXP288_ADC_TS_PIN_ON))193193- return -EIO;169169+ int ret, adc_enable_val;194170195195- return regmap_write(regmap, AXP20X_ADC_EN1, AXP288_ADC_EN_MASK);171171+ /*172172+ * Determine if the TS pin is enabled and set the TS current-source173173+ * accordingly.174174+ */175175+ ret = regmap_read(info->regmap, AXP20X_ADC_EN1, &adc_enable_val);176176+ if (ret)177177+ return ret;178178+179179+ if (adc_enable_val & AXP288_ADC_TS_ENABLE) {180180+ info->ts_enabled = true;181181+ ret = regmap_update_bits(info->regmap, AXP288_ADC_TS_PIN_CTRL,182182+ AXP288_ADC_TS_CURRENT_ON_OFF_MASK,183183+ AXP288_ADC_TS_CURRENT_ON);184184+ } else {185185+ info->ts_enabled = false;186186+ ret = regmap_update_bits(info->regmap, AXP288_ADC_TS_PIN_CTRL,187187+ AXP288_ADC_TS_CURRENT_ON_OFF_MASK,188188+ AXP288_ADC_TS_CURRENT_OFF);189189+ }190190+ if (ret)191191+ return ret;192192+193193+ /* Turn on the ADC for all channels except TS, leave TS as is */194194+ return regmap_update_bits(info->regmap, AXP20X_ADC_EN1,195195+ AXP288_ADC_EN_MASK, AXP288_ADC_EN_MASK);196196}197197198198static const struct iio_info axp288_adc_iio_info = {···244200 * Set ADC to enabled state at all time, including system suspend.245201 * otherwise internal fuel gauge functionality may be affected.246202 */247247- ret = axp288_adc_set_state(axp20x->regmap);203203+ ret = axp288_adc_initialize(info);248204 if (ret) {249205 dev_err(&pdev->dev, "unable to enable ADC device\n");250206 return ret;
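The axp288 conversion from regmap_write() to regmap_update_bits() above means only the two TS current-source bits of AXP288_ADC_TS_PIN_CTRL change while the remaining bits are preserved. A model of the read-modify-write that regmap performs under the hood (regmap itself wraps this in the bus read and write):

```c
#include <stdint.h>

#define TS_CURRENT_MASK		0x03u	/* GENMASK(1, 0) */

/* The value regmap_update_bits() computes before writing back. */
static uint8_t update_bits(uint8_t reg, uint8_t mask, uint8_t val)
{
	return (uint8_t)((reg & ~mask) | (val & mask));
}
```

A plain regmap_write() of the whole register was the original bug class here: it clobbered unrelated control bits alongside the TS current-source field.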
+2-1
drivers/iio/adc/ti-ads8688.c
···41414242#define ADS8688_VREF_MV 40964343#define ADS8688_REALBITS 164444+#define ADS8688_MAX_CHANNELS 844454546/*4647 * enum ads8688_range - ADS8688 reference voltage range···386385{387386 struct iio_poll_func *pf = p;388387 struct iio_dev *indio_dev = pf->indio_dev;389389- u16 buffer[8];388388+ u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)];390389 int i, j = 0;391390392391 for (i = 0; i < indio_dev->masklength; i++) {
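The ti-ads8688 fix enlarges the trigger-handler buffer because iio_push_to_buffers_with_timestamp() appends an s64 timestamp after the scan data; the old u16 buffer[8] had no room for it and overflowed the stack. The sizing arithmetic the patch encodes:

```c
#include <stddef.h>
#include <stdint.h>

#define ADS8688_MAX_CHANNELS 8

/*
 * Bytes needed for a full scan: one u16 sample per channel plus the
 * 8-byte timestamp the IIO core appends (8 x u16 is already 8-byte
 * aligned, so no extra padding is needed in this particular case).
 */
static size_t scan_buf_bytes(void)
{
	return (ADS8688_MAX_CHANNELS + sizeof(int64_t) / sizeof(uint16_t))
		* sizeof(uint16_t);
}
```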
+3-4
drivers/iio/chemical/atlas-ph-sensor.c
···444444 case IIO_CHAN_INFO_SCALE:445445 switch (chan->type) {446446 case IIO_TEMP:447447- *val = 1; /* 0.01 */448448- *val2 = 100;449449- break;447447+ *val = 10;448448+ return IIO_VAL_INT;450449 case IIO_PH:451450 *val = 1; /* 0.001 */452451 *val2 = 1000;···476477 int val, int val2, long mask)477478{478479 struct atlas_data *data = iio_priv(indio_dev);479479- __be32 reg = cpu_to_be32(val);480480+ __be32 reg = cpu_to_be32(val / 10);480481481482 if (val2 != 0 || val < 0 || val > 20000)482483 return -EINVAL;
···204204 if (atomic_dec_and_test(&file->device->refcount))205205 ib_uverbs_comp_dev(file->device);206206207207+ if (file->async_file)208208+ kref_put(&file->async_file->ref,209209+ ib_uverbs_release_async_event_file);207210 put_device(&file->device->dev);208211 kfree(file);209212}···967964968965 /* Get an arbitrary mm pointer that hasn't been cleaned yet */969966 mutex_lock(&ufile->umap_lock);970970- if (!list_empty(&ufile->umaps)) {971971- mm = list_first_entry(&ufile->umaps,972972- struct rdma_umap_priv, list)973973- ->vma->vm_mm;974974- mmget(mm);967967+ while (!list_empty(&ufile->umaps)) {968968+ int ret;969969+970970+ priv = list_first_entry(&ufile->umaps,971971+ struct rdma_umap_priv, list);972972+ mm = priv->vma->vm_mm;973973+ ret = mmget_not_zero(mm);974974+ if (!ret) {975975+ list_del_init(&priv->list);976976+ mm = NULL;977977+ continue;978978+ }979979+ break;975980 }976981 mutex_unlock(&ufile->umap_lock);977982 if (!mm)···11061095 mutex_lock(&file->device->lists_mutex);11071096 list_del_init(&file->list);11081097 mutex_unlock(&file->device->lists_mutex);11091109-11101110- if (file->async_file)11111111- kref_put(&file->async_file->ref,11121112- ib_uverbs_release_async_event_file);1113109811141099 kref_put(&file->ref, ib_uverbs_release_file);11151100
+7-1
drivers/infiniband/core/uverbs_std_types_device.c
···168168static int UVERBS_HANDLER(UVERBS_METHOD_QUERY_PORT)(169169 struct uverbs_attr_bundle *attrs)170170{171171- struct ib_device *ib_dev = attrs->ufile->device->ib_dev;171171+ struct ib_device *ib_dev;172172 struct ib_port_attr attr = {};173173 struct ib_uverbs_query_port_resp_ex resp = {};174174+ struct ib_ucontext *ucontext;174175 int ret;175176 u8 port_num;177177+178178+ ucontext = ib_uverbs_get_ucontext(attrs);179179+ if (IS_ERR(ucontext))180180+ return PTR_ERR(ucontext);181181+ ib_dev = ucontext->device;176182177183 /* FIXME: Extend the UAPI_DEF_OBJ_NEEDS_FN stuff.. */178184 if (!ib_dev->ops.query_port)
+1-1
drivers/infiniband/hw/hfi1/file_ops.c
···488488 vmf = 1;489489 break;490490 case STATUS:491491- if (flags & (unsigned long)(VM_WRITE | VM_EXEC)) {491491+ if (flags & VM_WRITE) {492492 ret = -EPERM;493493 goto done;494494 }
···9797 * The ITS structure - contains most of the infrastructure, with the9898 * top-level MSI domain, the command queue, the collections, and the9999 * list of devices writing to it.100100+ *101101+ * dev_alloc_lock has to be taken for device allocations, while the102102+ * spinlock must be taken to parse data structures such as the device103103+ * list.100104 */101105struct its_node {102106 raw_spinlock_t lock;107107+ struct mutex dev_alloc_lock;103108 struct list_head entry;104109 void __iomem *base;105110 phys_addr_t phys_base;···161156 void *itt;162157 u32 nr_ites;163158 u32 device_id;159159+ bool shared;164160};165161166162static struct {···15861580 nr_irqs /= 2;15871581 } while (nr_irqs > 0);1588158215831583+ if (!nr_irqs)15841584+ err = -ENOSPC;15851585+15891586 if (err)15901587 goto out;15911588···20682059 return 0;20692060}2070206120622062+static u64 its_clear_vpend_valid(void __iomem *vlpi_base)20632063+{20642064+ u32 count = 1000000; /* 1s! */20652065+ bool clean;20662066+ u64 val;20672067+20682068+ val = gits_read_vpendbaser(vlpi_base + GICR_VPENDBASER);20692069+ val &= ~GICR_VPENDBASER_Valid;20702070+ gits_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);20712071+20722072+ do {20732073+ val = gits_read_vpendbaser(vlpi_base + GICR_VPENDBASER);20742074+ clean = !(val & GICR_VPENDBASER_Dirty);20752075+ if (!clean) {20762076+ count--;20772077+ cpu_relax();20782078+ udelay(1);20792079+ }20802080+ } while (!clean && count);20812081+20822082+ return val;20832083+}20842084+20712085static void its_cpu_init_lpis(void)20722086{20732087 void __iomem *rbase = gic_data_rdist_rd_base();···21752143 val = readl_relaxed(rbase + GICR_CTLR);21762144 val |= GICR_CTLR_ENABLE_LPIS;21772145 writel_relaxed(val, rbase + GICR_CTLR);21462146+21472147+ if (gic_rdists->has_vlpis) {21482148+ void __iomem *vlpi_base = gic_data_rdist_vlpi_base();21492149+21502150+ /*21512151+ * It's possible for CPU to receive VLPIs before it is21522152+ * sheduled as a vPE, especially for the 
first CPU, and the21532153+ * VLPI with INTID larger than 2^(IDbits+1) will be considered21542154+ * as out of range and dropped by GIC.21552155+ * So we initialize IDbits to known value to avoid VLPI drop.21562156+ */21572157+ val = (LPI_NRBITS - 1) & GICR_VPROPBASER_IDBITS_MASK;21582158+ pr_debug("GICv4: CPU%d: Init IDbits to 0x%llx for GICR_VPROPBASER\n",21592159+ smp_processor_id(), val);21602160+ gits_write_vpropbaser(val, vlpi_base + GICR_VPROPBASER);21612161+21622162+ /*21632163+ * Also clear Valid bit of GICR_VPENDBASER, in case some21642164+ * ancient programming gets left in and has possibility of21652165+ * corrupting memory.21662166+ */21672167+ val = its_clear_vpend_valid(vlpi_base);21682168+ WARN_ON(val & GICR_VPENDBASER_Dirty);21692169+ }2178217021792171 /* Make sure the GIC has seen the above */21802172 dsb(sy);···24782422 struct its_device *its_dev;24792423 struct msi_domain_info *msi_info;24802424 u32 dev_id;24252425+ int err = 0;2481242624822427 /*24832428 * We ignore "dev" entierely, and rely on the dev_id that has···25012444 return -EINVAL;25022445 }2503244624472447+ mutex_lock(&its->dev_alloc_lock);25042448 its_dev = its_find_device(its, dev_id);25052449 if (its_dev) {25062450 /*···25092451 * another alias (PCI bridge of some sort). 
No need to25102452 * create the device.25112453 */24542454+ its_dev->shared = true;25122455 pr_debug("Reusing ITT for devID %x\n", dev_id);25132456 goto out;25142457 }2515245825162459 its_dev = its_create_device(its, dev_id, nvec, true);25172517- if (!its_dev)25182518- return -ENOMEM;24602460+ if (!its_dev) {24612461+ err = -ENOMEM;24622462+ goto out;24632463+ }2519246425202465 pr_debug("ITT %d entries, %d bits\n", nvec, ilog2(nvec));25212466out:24672467+ mutex_unlock(&its->dev_alloc_lock);25222468 info->scratchpad[0].ptr = its_dev;25232523- return 0;24692469+ return err;25242470}2525247125262472static struct msi_domain_ops its_msi_domain_ops = {···26282566{26292567 struct irq_data *d = irq_domain_get_irq_data(domain, virq);26302568 struct its_device *its_dev = irq_data_get_irq_chip_data(d);25692569+ struct its_node *its = its_dev->its;26312570 int i;2632257126332572 for (i = 0; i < nr_irqs; i++) {···26432580 irq_domain_reset_irq_data(data);26442581 }2645258226462646- /* If all interrupts have been freed, start mopping the floor */26472647- if (bitmap_empty(its_dev->event_map.lpi_map,25832583+ mutex_lock(&its->dev_alloc_lock);25842584+25852585+ /*25862586+ * If all interrupts have been freed, start mopping the25872587+ * floor. This is conditionned on the device not being shared.25882588+ */25892589+ if (!its_dev->shared &&25902590+ bitmap_empty(its_dev->event_map.lpi_map,26482591 its_dev->event_map.nr_lpis)) {26492592 its_lpi_free(its_dev->event_map.lpi_map,26502593 its_dev->event_map.lpi_base,···26612592 its_send_mapd(its_dev, 0);26622593 its_free_device(its_dev);26632594 }25952595+25962596+ mutex_unlock(&its->dev_alloc_lock);2664259726652598 irq_domain_free_irqs_parent(domain, virq, nr_irqs);26662599}···28262755static void its_vpe_deschedule(struct its_vpe *vpe)28272756{28282757 void __iomem *vlpi_base = gic_data_rdist_vlpi_base();28292829- u32 count = 1000000; /* 1s! 
*/28302830- bool clean;28312758 u64 val;2832275928332833- /* We're being scheduled out */28342834- val = gits_read_vpendbaser(vlpi_base + GICR_VPENDBASER);28352835- val &= ~GICR_VPENDBASER_Valid;28362836- gits_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);27602760+ val = its_clear_vpend_valid(vlpi_base);2837276128382838- do {28392839- val = gits_read_vpendbaser(vlpi_base + GICR_VPENDBASER);28402840- clean = !(val & GICR_VPENDBASER_Dirty);28412841- if (!clean) {28422842- count--;28432843- cpu_relax();28442844- udelay(1);28452845- }28462846- } while (!clean && count);28472847-28482848- if (unlikely(!clean && !count)) {27622762+ if (unlikely(val & GICR_VPENDBASER_Dirty)) {28492763 pr_err_ratelimited("ITS virtual pending table not cleaning\n");28502764 vpe->idai = false;28512765 vpe->pending_last = true;···35733517 }3574351835753519 raw_spin_lock_init(&its->lock);35203520+ mutex_init(&its->dev_alloc_lock);35763521 INIT_LIST_HEAD(&its->entry);35773522 INIT_LIST_HEAD(&its->its_device_list);35783523 typer = gic_read_typer(its_base + GITS_TYPER);
···70707171static int xtensa_irq_retrigger(struct irq_data *d)7272{7373- xtensa_set_sr(1 << d->hwirq, intset);7373+ unsigned int mask = 1u << d->hwirq;7474+7575+ if (WARN_ON(mask & ~XCHAL_INTTYPE_MASK_SOFTWARE))7676+ return 0;7777+ xtensa_set_sr(mask, intset);7478 return 1;7579}7680
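The xtensa retrigger fix above only writes INTSET for software interrupt lines; hardware-triggered lines cannot be raised that way, hence the WARN_ON guard. A model of that check, with a hypothetical mask value standing in for the core-specific XCHAL_INTTYPE_MASK_SOFTWARE:

```c
/* Hypothetical software-interrupt mask (bits 7..9); the real value is
 * the core-configuration constant XCHAL_INTTYPE_MASK_SOFTWARE. */
#define SOFT_IRQ_MASK 0x00000380u

/* Mirror of the WARN_ON condition in xtensa_irq_retrigger(). */
static int can_retrigger(unsigned int hwirq)
{
	unsigned int mask = 1u << hwirq;

	return (mask & ~SOFT_IRQ_MASK) == 0;
}
```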
+1-1
drivers/isdn/mISDN/timerdev.c
···170170 spin_lock_irqsave(&timer->dev->lock, flags);171171 if (timer->id >= 0)172172 list_move_tail(&timer->list, &timer->dev->expired);173173- spin_unlock_irqrestore(&timer->dev->lock, flags);174173 wake_up_interruptible(&timer->dev->wait);174174+ spin_unlock_irqrestore(&timer->dev->lock, flags);175175}176176177177static int
+1-1
drivers/md/dm-rq.c
···131131static void rq_completed(struct mapped_device *md)132132{133133 /* nudge anyone waiting on suspend queue */134134- if (unlikely(waitqueue_active(&md->wait)))134134+ if (unlikely(wq_has_sleeper(&md->wait)))135135 wake_up(&md->wait);136136137137 /*
+6-2
drivers/md/dm.c
···699699 true, duration, &io->stats_aux);700700701701 /* nudge anyone waiting on suspend queue */702702- if (unlikely(waitqueue_active(&md->wait)))702702+ if (unlikely(wq_has_sleeper(&md->wait)))703703 wake_up(&md->wait);704704}705705···13361336 return r;13371337 }1338133813391339- bio_trim(clone, sector - clone->bi_iter.bi_sector, len);13391339+ bio_advance(clone, to_bytes(sector - clone->bi_iter.bi_sector));13401340+ clone->bi_iter.bi_size = to_bytes(len);13411341+13421342+ if (bio_integrity(bio))13431343+ bio_integrity_trim(clone);1340134413411345 return 0;13421346}
+22-11
drivers/md/raid5-cache.c
···19351935}1936193619371937static struct stripe_head *19381938-r5c_recovery_alloc_stripe(struct r5conf *conf,19391939- sector_t stripe_sect)19381938+r5c_recovery_alloc_stripe(19391939+ struct r5conf *conf,19401940+ sector_t stripe_sect,19411941+ int noblock)19401942{19411943 struct stripe_head *sh;1942194419431943- sh = raid5_get_active_stripe(conf, stripe_sect, 0, 1, 0);19451945+ sh = raid5_get_active_stripe(conf, stripe_sect, 0, noblock, 0);19441946 if (!sh)19451947 return NULL; /* no more stripe available */19461948···21522150 stripe_sect);2153215121542152 if (!sh) {21552155- sh = r5c_recovery_alloc_stripe(conf, stripe_sect);21532153+ sh = r5c_recovery_alloc_stripe(conf, stripe_sect, 1);21562154 /*21572155 * cannot get stripe from raid5_get_active_stripe21582156 * try replay some stripes···21612159 r5c_recovery_replay_stripes(21622160 cached_stripe_list, ctx);21632161 sh = r5c_recovery_alloc_stripe(21642164- conf, stripe_sect);21622162+ conf, stripe_sect, 1);21652163 }21662164 if (!sh) {21652165+ int new_size = conf->min_nr_stripes * 2;21672166 pr_debug("md/raid:%s: Increasing stripe cache size to %d to recovery data on journal.\n",21682167 mdname(mddev),21692169- conf->min_nr_stripes * 2);21702170- raid5_set_cache_size(mddev,21712171- conf->min_nr_stripes * 2);21722172- sh = r5c_recovery_alloc_stripe(conf,21732173- stripe_sect);21682168+ new_size);21692169+ ret = raid5_set_cache_size(mddev, new_size);21702170+ if (conf->min_nr_stripes <= new_size / 2) {21712171+ pr_err("md/raid:%s: Cannot increase cache size, ret=%d, new_size=%d, min_nr_stripes=%d, max_nr_stripes=%d\n",21722172+ mdname(mddev),21732173+ ret,21742174+ new_size,21752175+ conf->min_nr_stripes,21762176+ conf->max_nr_stripes);21772177+ return -ENOMEM;21782178+ }21792179+ sh = r5c_recovery_alloc_stripe(21802180+ conf, stripe_sect, 0);21742181 }21752182 if (!sh) {21762183 pr_err("md/raid:%s: Cannot get enough stripes due to memory pressure. 
Recovery failed.\n",21772177- mdname(mddev));21842184+ mdname(mddev));21782185 return -ENOMEM;21792186 }21802187 list_add_tail(&sh->lru, cached_stripe_list);
+6-2
drivers/md/raid5.c
···63696369int63706370raid5_set_cache_size(struct mddev *mddev, int size)63716371{63726372+ int result = 0;63726373 struct r5conf *conf = mddev->private;6373637463746375 if (size <= 16 || size > 32768)···6386638563876386 mutex_lock(&conf->cache_size_mutex);63886387 while (size > conf->max_nr_stripes)63896389- if (!grow_one_stripe(conf, GFP_KERNEL))63886388+ if (!grow_one_stripe(conf, GFP_KERNEL)) {63896389+ conf->min_nr_stripes = conf->max_nr_stripes;63906390+ result = -ENOMEM;63906391 break;63926392+ }63916393 mutex_unlock(&conf->cache_size_mutex);6392639463936393- return 0;63956395+ return result;63946396}63956397EXPORT_SYMBOL(raid5_set_cache_size);63966398
+1-1
drivers/mfd/Kconfig
···1419141914201420config MFD_TPS6847014211421 bool "TI TPS68470 Power Management / LED chips"14221422- depends on ACPI && I2C=y14221422+ depends on ACPI && PCI && I2C=y14231423 select MFD_CORE14241424 select REGMAP_I2C14251425 select I2C_DESIGNWARE_PLATFORM
+4-1
drivers/misc/mei/client.c
···401401 struct mei_cl_cb *cb, *next;402402403403 list_for_each_entry_safe(cb, next, head, list) {404404- if (cl == cb->cl)404404+ if (cl == cb->cl) {405405 list_del_init(&cb->list);406406+ if (cb->fop_type == MEI_FOP_READ)407407+ mei_io_cb_free(cb);408408+ }406409 }407410}408411
+2
drivers/misc/mei/hw-me-regs.h
···139139#define MEI_DEV_ID_CNP_H 0xA360 /* Cannon Point H */140140#define MEI_DEV_ID_CNP_H_4 0xA364 /* Cannon Point H 4 (iTouch) */141141142142+#define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */143143+142144/*143145 * MEI HW Section144146 */
···480480 /* let's register it anyway to preserve ordering */481481 slave->offset = 0;482482 slave->mtd.size = 0;483483+484484+ /* Initialize ->erasesize to make add_mtd_device() happy. */485485+ slave->mtd.erasesize = parent->erasesize;486486+483487 printk(KERN_ERR"mtd: partition \"%s\" is out of reach -- disabled\n",484488 part->name);485489 goto out_register;···636632 mutex_unlock(&mtd_partitions_mutex);637633638634 free_partition(new);639639- pr_info("%s:%i\n", __func__, __LINE__);640635641636 return ret;642637}
+6-7
drivers/mtd/nand/raw/gpmi-nand/gpmi-lib.c
···155155156156 /*157157 * Reset BCH here, too. We got failures otherwise :(158158- * See later BCH reset for explanation of MX23 handling158158+ * See later BCH reset for explanation of MX23 and MX28 handling159159 */160160- ret = gpmi_reset_block(r->bch_regs, GPMI_IS_MX23(this));160160+ ret = gpmi_reset_block(r->bch_regs,161161+ GPMI_IS_MX23(this) || GPMI_IS_MX28(this));161162 if (ret)162163 goto err_out;163164···264263 /*265264 * Due to erratum #2847 of the MX23, the BCH cannot be soft reset on this266265 * chip, otherwise it will lock up. So we skip resetting BCH on the MX23.267267- * On the other hand, the MX28 needs the reset, because one case has been268268- * seen where the BCH produced ECC errors constantly after 10000269269- * consecutive reboots. The latter case has not been seen on the MX23270270- * yet, still we don't know if it could happen there as well.266266+ * and MX28.271267 */272272- ret = gpmi_reset_block(r->bch_regs, GPMI_IS_MX23(this));268268+ ret = gpmi_reset_block(r->bch_regs,269269+ GPMI_IS_MX23(this) || GPMI_IS_MX28(this));273270 if (ret)274271 goto err_out;275272
+1
drivers/mtd/nand/raw/nand_base.c
···410410411411/**412412 * nand_fill_oob - [INTERN] Transfer client buffer to oob413413+ * @chip: NAND chip object413414 * @oob: oob data buffer414415 * @len: oob data write length415416 * @ops: oob ops structure
+1-1
drivers/mtd/nand/raw/nand_bbt.c
···158158159159/**160160 * read_bbt - [GENERIC] Read the bad block table starting from page161161- * @chip: NAND chip object161161+ * @this: NAND chip object162162 * @buf: temporary buffer163163 * @page: the starting page164164 * @num: the number of bbt descriptors to read
+22-24
drivers/mtd/nand/spi/core.c
···304304 struct nand_device *nand = spinand_to_nand(spinand);305305 struct mtd_info *mtd = nanddev_to_mtd(nand);306306 struct nand_page_io_req adjreq = *req;307307- unsigned int nbytes = 0;308308- void *buf = NULL;307307+ void *buf = spinand->databuf;308308+ unsigned int nbytes;309309 u16 column = 0;310310 int ret;311311312312- memset(spinand->databuf, 0xff,313313- nanddev_page_size(nand) +314314- nanddev_per_page_oobsize(nand));312312+ /*313313+ * Looks like PROGRAM LOAD (AKA write cache) does not necessarily reset314314+ * the cache content to 0xFF (depends on vendor implementation), so we315315+ * must fill the page cache entirely even if we only want to program316316+ * the data portion of the page, otherwise we might corrupt the BBM or317317+ * user data previously programmed in OOB area.318318+ */319319+ nbytes = nanddev_page_size(nand) + nanddev_per_page_oobsize(nand);320320+ memset(spinand->databuf, 0xff, nbytes);321321+ adjreq.dataoffs = 0;322322+ adjreq.datalen = nanddev_page_size(nand);323323+ adjreq.databuf.out = spinand->databuf;324324+ adjreq.ooblen = nanddev_per_page_oobsize(nand);325325+ adjreq.ooboffs = 0;326326+ adjreq.oobbuf.out = spinand->oobbuf;315327316316- if (req->datalen) {328328+ if (req->datalen)317329 memcpy(spinand->databuf + req->dataoffs, req->databuf.out,318330 req->datalen);319319- adjreq.dataoffs = 0;320320- adjreq.datalen = nanddev_page_size(nand);321321- adjreq.databuf.out = spinand->databuf;322322- nbytes = adjreq.datalen;323323- buf = spinand->databuf;324324- }325331326332 if (req->ooblen) {327333 if (req->mode == MTD_OPS_AUTO_OOB)···338332 else339333 memcpy(spinand->oobbuf + req->ooboffs, req->oobbuf.out,340334 req->ooblen);341341-342342- adjreq.ooblen = nanddev_per_page_oobsize(nand);343343- adjreq.ooboffs = 0;344344- nbytes += nanddev_per_page_oobsize(nand);345345- if (!buf) {346346- buf = spinand->oobbuf;347347- column = nanddev_page_size(nand);348348- }349335 }350336351337 spinand_cache_op_adjust_colum(spinand, &adjreq, 
&column);···368370369371 /*370372 * We need to use the RANDOM LOAD CACHE operation if there's371371- * more than one iteration, because the LOAD operation resets372372- * the cache to 0xff.373373+ * more than one iteration, because the LOAD operation might374374+ * reset the cache to 0xff.373375 */374376 if (nbytes) {375377 column = op.addr.val;···10161018 for (i = 0; i < nand->memorg.ntargets; i++) {10171019 ret = spinand_select_target(spinand, i);10181020 if (ret)10191019- goto err_free_bufs;10211021+ goto err_manuf_cleanup;1020102210211023 ret = spinand_lock_block(spinand, BL_ALL_UNLOCKED);10221024 if (ret)10231023- goto err_free_bufs;10251025+ goto err_manuf_cleanup;10241026 }1025102710261028 ret = nanddev_init(nand, &spinand_ops, THIS_MODULE);
+1-4
drivers/net/caif/caif_serial.c
···257257 if (skb->len == 0) {258258 struct sk_buff *tmp = skb_dequeue(&ser->head);259259 WARN_ON(tmp != skb);260260- if (in_interrupt())261261- dev_kfree_skb_irq(skb);262262- else263263- kfree_skb(skb);260260+ dev_consume_skb_any(skb);264261 }265262 }266263 /* Send flow off if queue is empty */
-3
drivers/net/dsa/b53/b53_srab.c
···511511 /* Clear all pending interrupts */512512 writel(0xffffffff, priv->regs + B53_SRAB_INTR);513513514514- if (dev->pdata && dev->pdata->chip_id != BCM58XX_DEVICE_ID)515515- return;516516-517514 for (i = 0; i < B53_N_PORTS; i++) {518515 port = &priv->port_intrs[i];519516
+12-9
drivers/net/dsa/mv88e6xxx/global1_atu.c
···314314{315315 struct mv88e6xxx_chip *chip = dev_id;316316 struct mv88e6xxx_atu_entry entry;317317+ int spid;317318 int err;318319 u16 val;319320···337336 if (err)338337 goto out;339338339339+ spid = entry.state;340340+340341 if (val & MV88E6XXX_G1_ATU_OP_AGE_OUT_VIOLATION) {341342 dev_err_ratelimited(chip->dev,342343 "ATU age out violation for %pM\n",···347344348345 if (val & MV88E6XXX_G1_ATU_OP_MEMBER_VIOLATION) {349346 dev_err_ratelimited(chip->dev,350350- "ATU member violation for %pM portvec %x\n",351351- entry.mac, entry.portvec);352352- chip->ports[entry.portvec].atu_member_violation++;347347+ "ATU member violation for %pM portvec %x spid %d\n",348348+ entry.mac, entry.portvec, spid);349349+ chip->ports[spid].atu_member_violation++;353350 }354351355352 if (val & MV88E6XXX_G1_ATU_OP_MISS_VIOLATION) {356353 dev_err_ratelimited(chip->dev,357357- "ATU miss violation for %pM portvec %x\n",358358- entry.mac, entry.portvec);359359- chip->ports[entry.portvec].atu_miss_violation++;354354+ "ATU miss violation for %pM portvec %x spid %d\n",355355+ entry.mac, entry.portvec, spid);356356+ chip->ports[spid].atu_miss_violation++;360357 }361358362359 if (val & MV88E6XXX_G1_ATU_OP_FULL_VIOLATION) {363360 dev_err_ratelimited(chip->dev,364364- "ATU full violation for %pM portvec %x\n",365365- entry.mac, entry.portvec);366366- chip->ports[entry.portvec].atu_full_violation++;361361+ "ATU full violation for %pM portvec %x spid %d\n",362362+ entry.mac, entry.portvec, spid);363363+ chip->ports[spid].atu_full_violation++;367364 }368365 mutex_unlock(&chip->reg_lock);369366
+1-1
drivers/net/dsa/mv88e6xxx/serdes.c
···664664 if (port < 9)665665 return 0;666666667667- return mv88e6390_serdes_irq_setup(chip, port);667667+ return mv88e6390x_serdes_irq_setup(chip, port);668668}669669670670void mv88e6390x_serdes_irq_free(struct mv88e6xxx_chip *chip, int port)
···5656/* level of occupied TX descriptors under which we wake up TX process */5757#define MACB_TX_WAKEUP_THRESH(bp) (3 * (bp)->tx_ring_size / 4)58585959-#define MACB_RX_INT_FLAGS (MACB_BIT(RCOMP) | MACB_BIT(RXUBR) \6060- | MACB_BIT(ISR_ROVR))5959+#define MACB_RX_INT_FLAGS (MACB_BIT(RCOMP) | MACB_BIT(ISR_ROVR))6160#define MACB_TX_ERR_FLAGS (MACB_BIT(ISR_TUND) \6261 | MACB_BIT(ISR_RLE) \6362 | MACB_BIT(TXERR))···12691270 queue_writel(queue, ISR, MACB_BIT(RCOMP));12701271 napi_reschedule(napi);12711272 } else {12721272- queue_writel(queue, IER, MACB_RX_INT_FLAGS);12731273+ queue_writel(queue, IER, bp->rx_intr_mask);12731274 }12741275 }12751276···12871288 u32 ctrl;1288128912891290 for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue) {12901290- queue_writel(queue, IDR, MACB_RX_INT_FLAGS |12911291+ queue_writel(queue, IDR, bp->rx_intr_mask |12911292 MACB_TX_INT_FLAGS |12921293 MACB_BIT(HRESP));12931294 }···1317131813181319 /* Enable interrupts */13191320 queue_writel(queue, IER,13201320- MACB_RX_INT_FLAGS |13211321+ bp->rx_intr_mask |13211322 MACB_TX_INT_FLAGS |13221323 MACB_BIT(HRESP));13231324 }···13711372 (unsigned int)(queue - bp->queues),13721373 (unsigned long)status);1373137413741374- if (status & MACB_RX_INT_FLAGS) {13751375+ if (status & bp->rx_intr_mask) {13751376 /* There's no point taking any more interrupts13761377 * until we have processed the buffers. The13771378 * scheduling call may fail if the poll routine13781379 * is already scheduled, so disable interrupts13791380 * now.13801381 */13811381- queue_writel(queue, IDR, MACB_RX_INT_FLAGS);13821382+ queue_writel(queue, IDR, bp->rx_intr_mask);13821383 if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)13831384 queue_writel(queue, ISR, MACB_BIT(RCOMP));13841385···14111412 /* There is a hardware issue under heavy load where DMA can14121413 * stop, this causes endless "used buffer descriptor read"14131414 * interrupts but it can be cleared by re-enabling RX. 
See14141414- * the at91 manual, section 41.3.1 or the Zynq manual14151415- * section 16.7.4 for details.14151415+ * the at91rm9200 manual, section 41.3.1 or the Zynq manual14161416+ * section 16.7.4 for details. RXUBR is only enabled for14171417+ * these two versions.14161418 */14171419 if (status & MACB_BIT(RXUBR)) {14181420 ctrl = macb_readl(bp, NCR);···2259225922602260 /* Enable interrupts */22612261 queue_writel(queue, IER,22622262- MACB_RX_INT_FLAGS |22622262+ bp->rx_intr_mask |22632263 MACB_TX_INT_FLAGS |22642264 MACB_BIT(HRESP));22652265 }···39073907};3908390839093909static const struct macb_config emac_config = {39103910+ .caps = MACB_CAPS_NEEDS_RSTONUBR,39103911 .clk_init = at91ether_clk_init,39113912 .init = at91ether_init,39123913};···39293928};3930392939313930static const struct macb_config zynq_config = {39323932- .caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_NO_GIGABIT_HALF,39313931+ .caps = MACB_CAPS_GIGABIT_MODE_AVAILABLE | MACB_CAPS_NO_GIGABIT_HALF |39323932+ MACB_CAPS_NEEDS_RSTONUBR,39333933 .dma_burst_length = 16,39343934 .clk_init = macb_clk_init,39353935 .init = macb_init,···40844082 bp->tx_bd_rd_prefetch = (2 << (val - 1)) *40854083 macb_dma_desc_get_size(bp);40864084 }40854085+40864086+ bp->rx_intr_mask = MACB_RX_INT_FLAGS;40874087+ if (bp->caps & MACB_CAPS_NEEDS_RSTONUBR)40884088+ bp->rx_intr_mask |= MACB_BIT(RXUBR);4087408940884090 mac = of_get_mac_address(np);40894091 if (mac) {
-1
drivers/net/ethernet/cavium/Kconfig
···5454 tristate "Cavium PTP coprocessor as PTP clock"5555 depends on 64BIT && PCI5656 imply PTP_1588_CLOCK5757- default y5857 ---help---5958 This driver adds support for the Precision Time Protocol Clocks and6059 Timestamping coprocessor (PTP) found on Cavium processors.
+2-1
drivers/net/ethernet/cisco/enic/enic_main.c
···14341434 * csum is correct or is zero.14351435 */14361436 if ((netdev->features & NETIF_F_RXCSUM) && !csum_not_calc &&14371437- tcp_udp_csum_ok && ipv4_csum_ok && outer_csum_ok) {14371437+ tcp_udp_csum_ok && outer_csum_ok &&14381438+ (ipv4_csum_ok || ipv6)) {14381439 skb->ip_summed = CHECKSUM_UNNECESSARY;14391440 skb->csum_level = encap;14401441 }
···256256 e->m_neigh.family = n->ops->family;257257 memcpy(&e->m_neigh.dst_ip, n->primary_key, n->tbl->key_len);258258 e->out_dev = out_dev;259259+ e->route_dev = route_dev;259260260261 /* It's important to add the neigh to the hash table before checking261262 * the neigh validity state. So if we'll get a notification, in case the···370369 e->m_neigh.family = n->ops->family;371370 memcpy(&e->m_neigh.dst_ip, n->primary_key, n->tbl->key_len);372371 e->out_dev = out_dev;372372+ e->route_dev = route_dev;373373374374 /* It's importent to add the neigh to the hash table before checking375375 * the neigh validity state. So if we'll get a notification, in case the···614612 struct mlx5_flow_spec *spec,615613 struct tc_cls_flower_offload *f,616614 void *headers_c,617617- void *headers_v)615615+ void *headers_v, u8 *match_level)618616{619617 int tunnel_type;620618 int err = 0;621619622620 tunnel_type = mlx5e_tc_tun_get_type(filter_dev);623621 if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_VXLAN) {622622+ *match_level = MLX5_MATCH_L4;624623 err = mlx5e_tc_tun_parse_vxlan(priv, spec, f,625624 headers_c, headers_v);626625 } else if (tunnel_type == MLX5E_TC_TUNNEL_TYPE_GRETAP) {626626+ *match_level = MLX5_MATCH_L3;627627 err = mlx5e_tc_tun_parse_gretap(priv, spec, f,628628 headers_c, headers_v);629629 } else {
···11341134 int err = 0;11351135 u8 *smac_v;1136113611371137- if (vport->info.spoofchk && !is_valid_ether_addr(vport->info.mac)) {11381138- mlx5_core_warn(esw->dev,11391139- "vport[%d] configure ingress rules failed, illegal mac with spoofchk\n",11401140- vport->vport);11411141- return -EPERM;11421142- }11431143-11441137 esw_vport_cleanup_ingress_rules(esw, vport);1145113811461139 if (!vport->info.vlan && !vport->info.qos && !vport->info.spoofchk) {···17211728 int vport_num;17221729 int err;1723173017241724- if (!MLX5_ESWITCH_MANAGER(dev))17311731+ if (!MLX5_VPORT_MANAGER(dev))17251732 return 0;1726173317271734 esw_info(dev,···1790179717911798void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)17921799{17931793- if (!esw || !MLX5_ESWITCH_MANAGER(esw->dev))18001800+ if (!esw || !MLX5_VPORT_MANAGER(esw->dev))17941801 return;1795180217961803 esw_info(esw->dev, "cleanup\n");···18201827 mutex_lock(&esw->state_lock);18211828 evport = &esw->vports[vport];1822182918231823- if (evport->info.spoofchk && !is_valid_ether_addr(mac)) {18301830+ if (evport->info.spoofchk && !is_valid_ether_addr(mac))18241831 mlx5_core_warn(esw->dev,18251825- "MAC invalidation is not allowed when spoofchk is on, vport(%d)\n",18321832+ "Set invalid MAC while spoofchk is on, vport(%d)\n",18261833 vport);18271827- err = -EPERM;18281828- goto unlock;18291829- }1830183418311835 err = mlx5_modify_nic_vport_mac_address(esw->dev, vport, mac);18321836 if (err) {···19691979 evport = &esw->vports[vport];19701980 pschk = evport->info.spoofchk;19711981 evport->info.spoofchk = spoofchk;19821982+ if (pschk && !is_valid_ether_addr(evport->info.mac))19831983+ mlx5_core_warn(esw->dev,19841984+ "Spoofchk in set while MAC is invalid, vport(%d)\n",19851985+ evport->vport);19721986 if (evport->enabled && esw->mode == SRIOV_LEGACY)19731987 err = esw_vport_ingress_config(esw, evport);19741988 if (err)
···160160 MLX5_SET_TO_ONES(fte_match_set_misc, misc,161161 source_eswitch_owner_vhca_id);162162163163- if (attr->match_level == MLX5_MATCH_NONE)164164- spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS;165165- else166166- spec->match_criteria_enable = MLX5_MATCH_OUTER_HEADERS |167167- MLX5_MATCH_MISC_PARAMETERS;168168-169169- if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_DECAP)170170- spec->match_criteria_enable |= MLX5_MATCH_INNER_HEADERS;163163+ spec->match_criteria_enable = MLX5_MATCH_MISC_PARAMETERS;164164+ if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_DECAP) {165165+ if (attr->tunnel_match_level != MLX5_MATCH_NONE)166166+ spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;167167+ if (attr->match_level != MLX5_MATCH_NONE)168168+ spec->match_criteria_enable |= MLX5_MATCH_INNER_HEADERS;169169+ } else if (attr->match_level != MLX5_MATCH_NONE) {170170+ spec->match_criteria_enable |= MLX5_MATCH_OUTER_HEADERS;171171+ }171172172173 if (flow_act.action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)173174 flow_act.modify_id = attr->mod_hdr_id;
+21
drivers/net/ethernet/mellanox/mlx5/core/lag.c
···616616 }617617}618618619619+int mlx5_lag_get_pf_num(struct mlx5_core_dev *dev, int *pf_num)620620+{621621+ struct mlx5_lag *ldev;622622+ int n;623623+624624+ ldev = mlx5_lag_dev_get(dev);625625+ if (!ldev) {626626+ mlx5_core_warn(dev, "no lag device, can't get pf num\n");627627+ return -EINVAL;628628+ }629629+630630+ for (n = 0; n < MLX5_MAX_PORTS; n++)631631+ if (ldev->pf[n].dev == dev) {632632+ *pf_num = n;633633+ return 0;634634+ }635635+636636+ mlx5_core_warn(dev, "wasn't able to locate pf in the lag device\n");637637+ return -EINVAL;638638+}639639+619640/* Must be called with intf_mutex held */620641void mlx5_lag_remove(struct mlx5_core_dev *dev)621642{
···795795796796/* get pq index according to PQ_FLAGS */797797static u16 *qed_init_qm_get_idx_from_flags(struct qed_hwfn *p_hwfn,798798- u32 pq_flags)798798+ unsigned long pq_flags)799799{800800 struct qed_qm_info *qm_info = &p_hwfn->qm_info;801801802802 /* Can't have multiple flags set here */803803- if (bitmap_weight((unsigned long *)&pq_flags,803803+ if (bitmap_weight(&pq_flags,804804 sizeof(pq_flags) * BITS_PER_BYTE) > 1) {805805- DP_ERR(p_hwfn, "requested multiple pq flags 0x%x\n", pq_flags);805805+ DP_ERR(p_hwfn, "requested multiple pq flags 0x%lx\n", pq_flags);806806 goto err;807807 }808808809809 if (!(qed_get_pq_flags(p_hwfn) & pq_flags)) {810810- DP_ERR(p_hwfn, "pq flag 0x%x is not set\n", pq_flags);810810+ DP_ERR(p_hwfn, "pq flag 0x%lx is not set\n", pq_flags);811811 goto err;812812 }813813
+16-4
drivers/net/ethernet/qlogic/qed/qed_l2.c
···609609 (!!(accept_filter & QED_ACCEPT_MCAST_MATCHED) &&610610 !!(accept_filter & QED_ACCEPT_MCAST_UNMATCHED)));611611612612+ SET_FIELD(state, ETH_VPORT_TX_MODE_UCAST_ACCEPT_ALL,613613+ (!!(accept_filter & QED_ACCEPT_UCAST_MATCHED) &&614614+ !!(accept_filter & QED_ACCEPT_UCAST_UNMATCHED)));615615+612616 SET_FIELD(state, ETH_VPORT_TX_MODE_BCAST_ACCEPT_ALL,613617 !!(accept_filter & QED_ACCEPT_BCAST));614618···746742 if (rc) {747743 qed_sp_destroy_request(p_hwfn, p_ent);748744 return rc;745745+ }746746+747747+ if (p_params->update_ctl_frame_check) {748748+ p_cmn->ctl_frame_mac_check_en = p_params->mac_chk_en;749749+ p_cmn->ctl_frame_ethtype_check_en = p_params->ethtype_chk_en;749750 }750751751752 /* Update mcast bins for VFs, PF doesn't use this functionality */···22162207 u16 num_queues = 0;2217220822182209 /* Since the feature controls only queue-zones,22192219- * make sure we have the contexts [rx, tx, xdp] to22102210+ * make sure we have the contexts [rx, xdp, tcs] to22202211 * match.22212212 */22222213 for_each_hwfn(cdev, i) {···22262217 u16 cids;2227221822282219 cids = hwfn->pf_params.eth_pf_params.num_cons;22292229- num_queues += min_t(u16, l2_queues, cids / 3);22202220+ cids /= (2 + info->num_tc);22212221+ num_queues += min_t(u16, l2_queues, cids);22302222 }2231222322322224 /* queues might theoretically be >256, but interrupts'···26982688 if (type == QED_FILTER_RX_MODE_TYPE_PROMISC) {26992689 accept_flags.rx_accept_filter |= QED_ACCEPT_UCAST_UNMATCHED |27002690 QED_ACCEPT_MCAST_UNMATCHED;27012701- accept_flags.tx_accept_filter |= QED_ACCEPT_MCAST_UNMATCHED;26912691+ accept_flags.tx_accept_filter |= QED_ACCEPT_UCAST_UNMATCHED |26922692+ QED_ACCEPT_MCAST_UNMATCHED;27022693 } else if (type == QED_FILTER_RX_MODE_TYPE_MULTI_PROMISC) {27032694 accept_flags.rx_accept_filter |= QED_ACCEPT_MCAST_UNMATCHED;27042695 accept_flags.tx_accept_filter |= QED_ACCEPT_MCAST_UNMATCHED;···28712860 p_hwfn = p_cid->p_owner;28722861 rc = qed_get_queue_coalesce(p_hwfn, coal, 
handle);28732862 if (rc)28742874- DP_NOTICE(p_hwfn, "Unable to read queue coalescing\n");28632863+ DP_VERBOSE(cdev, QED_MSG_DEBUG,28642864+ "Unable to read queue coalescing\n");2875286528762866 return rc;28772867}
···24512451{24522452 struct qed_ll2_tx_pkt_info pkt;24532453 const skb_frag_t *frag;24542454+ u8 flags = 0, nr_frags;24542455 int rc = -EINVAL, i;24552456 dma_addr_t mapping;24562457 u16 vlan = 0;24572457- u8 flags = 0;2458245824592459 if (unlikely(skb->ip_summed != CHECKSUM_NONE)) {24602460 DP_INFO(cdev, "Cannot transmit a checksummed packet\n");24612461 return -EINVAL;24622462 }2463246324642464- if (1 + skb_shinfo(skb)->nr_frags > CORE_LL2_TX_MAX_BDS_PER_PACKET) {24642464+ /* Cache number of fragments from SKB since SKB may be freed by24652465+ * the completion routine after calling qed_ll2_prepare_tx_packet()24662466+ */24672467+ nr_frags = skb_shinfo(skb)->nr_frags;24682468+24692469+ if (1 + nr_frags > CORE_LL2_TX_MAX_BDS_PER_PACKET) {24652470 DP_ERR(cdev, "Cannot transmit a packet with %d fragments\n",24662466- 1 + skb_shinfo(skb)->nr_frags);24712471+ 1 + nr_frags);24672472 return -EINVAL;24682473 }24692474···24902485 }2491248624922487 memset(&pkt, 0, sizeof(pkt));24932493- pkt.num_of_bds = 1 + skb_shinfo(skb)->nr_frags;24882488+ pkt.num_of_bds = 1 + nr_frags;24942489 pkt.vlan = vlan;24952490 pkt.bd_flags = flags;24962491 pkt.tx_dest = QED_LL2_TX_DEST_NW;···25012496 test_bit(QED_LL2_XMIT_FLAGS_FIP_DISCOVERY, &xmit_flags))25022497 pkt.remove_stag = true;2503249824992499+ /* qed_ll2_prepare_tx_packet() may actually send the packet if25002500+ * there are no fragments in the skb and subsequently the completion25012501+ * routine may run and free the SKB, so no dereferencing the SKB25022502+ * beyond this point unless skb has any fragments.25032503+ */25042504 rc = qed_ll2_prepare_tx_packet(&cdev->hwfns[0], cdev->ll2->handle,25052505 &pkt, 1);25062506 if (rc)25072507 goto err;2508250825092509- for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {25092509+ for (i = 0; i < nr_frags; i++) {25102510 frag = &skb_shinfo(skb)->frags[i];2511251125122512 mapping = skb_frag_dma_map(&cdev->pdev->dev, frag, 0,
···721721{722722 unsigned long clk = clk_get_rate(priv->plat->stmmac_clk);723723724724- if (!clk)725725- return 0;724724+ if (!clk) {725725+ clk = priv->plat->clk_ref_rate;726726+ if (!clk)727727+ return 0;728728+ }726729727730 return (usec * (clk / 1000000)) / 256;728731}···734731{735732 unsigned long clk = clk_get_rate(priv->plat->stmmac_clk);736733737737- if (!clk)738738- return 0;734734+ if (!clk) {735735+ clk = priv->plat->clk_ref_rate;736736+ if (!clk)737737+ return 0;738738+ }739739740740 return (riwt * 256) / (clk / 1000000);741741}
+13-4
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···3023302330243024 tx_q = &priv->tx_queue[queue];3025302530263026+ if (priv->tx_path_in_lpi_mode)30273027+ stmmac_disable_eee_mode(priv);30283028+30263029 /* Manage oversized TCP frames for GMAC4 device */30273030 if (skb_is_gso(skb) && priv->tso) {30283028- if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6))30313031+ if (skb_shinfo(skb)->gso_type & (SKB_GSO_TCPV4 | SKB_GSO_TCPV6)) {30323032+ /*30333033+ * There is no way to determine the number of TSO30343034+ * capable Queues. Let's use always the Queue 030353035+ * because if TSO is supported then at least this30363036+ * one will be capable.30373037+ */30383038+ skb_set_queue_mapping(skb, 0);30393039+30293040 return stmmac_tso_xmit(skb, dev);30413041+ }30303042 }3031304330323044 if (unlikely(stmmac_tx_avail(priv, queue) < nfrags + 1)) {···30523040 }30533041 return NETDEV_TX_BUSY;30543042 }30553055-30563056- if (priv->tx_path_in_lpi_mode)30573057- stmmac_disable_eee_mode(priv);3058304330593044 entry = tx_q->cur_tx;30603045 first_entry = entry;
···17391739 tx_level -= db->rptr->len; /* '-' koz len is negative */1740174017411741 /* now should come skb pointer - free it */17421742- dev_kfree_skb_irq(db->rptr->addr.skb);17421742+ dev_consume_skb_irq(db->rptr->addr.skb);17431743 bdx_tx_db_inc_rptr(db);17441744 }17451745
···35123512 bp->descr_block_virt->xmt_data[comp].long_1,35133513 p_xmt_drv_descr->p_skb->len,35143514 DMA_TO_DEVICE);35153515- dev_kfree_skb_irq(p_xmt_drv_descr->p_skb);35153515+ dev_consume_skb_irq(p_xmt_drv_descr->p_skb);3516351635173517 /*35183518 * Move to start of next packet by updating completion index
+7-3
drivers/net/geneve.c
···15121512 }15131513#if IS_ENABLED(CONFIG_IPV6)15141514 case AF_INET6: {15151515- struct rt6_info *rt = rt6_lookup(geneve->net,15161516- &info->key.u.ipv6.dst, NULL, 0,15171517- NULL, 0);15151515+ struct rt6_info *rt;15161516+15171517+ if (!__in6_dev_get(dev))15181518+ break;15191519+15201520+ rt = rt6_lookup(geneve->net, &info->key.u.ipv6.dst, NULL, 0,15211521+ NULL, 0);1518152215191523 if (rt && rt->dst.dev)15201524 ldev_mtu = rt->dst.dev->mtu - GENEVE_IPV6_HLEN;
+3-3
drivers/net/ieee802154/mcr20a.c
···905905 }906906 break;907907 case (DAR_IRQSTS1_RXIRQ | DAR_IRQSTS1_SEQIRQ):908908- /* rx is starting */909909- dev_dbg(printdev(lp), "RX is starting\n");910910- mcr20a_handle_rx(lp);908908+ /* rx is starting */909909+ dev_dbg(printdev(lp), "RX is starting\n");910910+ mcr20a_handle_rx(lp);911911 break;912912 case (DAR_IRQSTS1_RXIRQ | DAR_IRQSTS1_TXIRQ | DAR_IRQSTS1_SEQIRQ):913913 if (lp->is_tx) {
···898898 struct phy_txts *phy_txts)899899{900900 struct skb_shared_hwtstamps shhwtstamps;901901+ struct dp83640_skb_info *skb_info;901902 struct sk_buff *skb;902902- u64 ns;903903 u8 overflow;904904+ u64 ns;904905905906 /* We must already have the skb that triggered this. */906906-907907+again:907908 skb = skb_dequeue(&dp83640->tx_queue);908908-909909 if (!skb) {910910 pr_debug("have timestamp but tx_queue empty\n");911911 return;···919919 skb = skb_dequeue(&dp83640->tx_queue);920920 }921921 return;922922+ }923923+ skb_info = (struct dp83640_skb_info *)skb->cb;924924+ if (time_after(jiffies, skb_info->tmo)) {925925+ kfree_skb(skb);926926+ goto again;922927 }923928924929 ns = phy2txts(phy_txts);···14771472static void dp83640_txtstamp(struct phy_device *phydev,14781473 struct sk_buff *skb, int type)14791474{14751475+ struct dp83640_skb_info *skb_info = (struct dp83640_skb_info *)skb->cb;14801476 struct dp83640_private *dp83640 = phydev->priv;1481147714821478 switch (dp83640->hwts_tx_en) {···14901484 /* fall through */14911485 case HWTSTAMP_TX_ON:14921486 skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;14871487+ skb_info->tmo = jiffies + SKB_TIMESTAMP_TIMEOUT;14931488 skb_queue_tail(&dp83640->tx_queue, skb);14941489 break;14951490
-16
drivers/net/phy/marvell.c
···847847848848 /* SGMII-to-Copper mode initialization */849849 if (phydev->interface == PHY_INTERFACE_MODE_SGMII) {850850-851850 /* Select page 18 */852851 err = marvell_set_page(phydev, 18);853852 if (err < 0)···869870 err = marvell_set_page(phydev, MII_MARVELL_COPPER_PAGE);870871 if (err < 0)871872 return err;872872-873873- /* There appears to be a bug in the 88e1512 when used in874874- * SGMII to copper mode, where the AN advertisement register875875- * clears the pause bits each time a negotiation occurs.876876- * This means we can never be truely sure what was advertised,877877- * so disable Pause support.878878- */879879- linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,880880- phydev->supported);881881- linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT,882882- phydev->supported);883883- linkmode_clear_bit(ETHTOOL_LINK_MODE_Asym_Pause_BIT,884884- phydev->advertising);885885- linkmode_clear_bit(ETHTOOL_LINK_MODE_Pause_BIT,886886- phydev->advertising);887873 }888874889875 return m88e1318_config_init(phydev);
+1-2
drivers/net/tun.c
···866866 if (rtnl_dereference(tun->xdp_prog))867867 sock_set_flag(&tfile->sk, SOCK_XDP);868868869869- tun_set_real_num_queues(tun);870870-871869 /* device is allowed to go away first, so no need to hold extra872870 * refcnt.873871 */···877879 rcu_assign_pointer(tfile->tun, tun);878880 rcu_assign_pointer(tun->tfiles[tun->numqueues], tfile);879881 tun->numqueues++;882882+ tun_set_real_num_queues(tun);880883out:881884 return err;882885}
+124-47
drivers/net/virtio_net.c
···5757#define VIRTIO_XDP_TX BIT(0)5858#define VIRTIO_XDP_REDIR BIT(1)59596060+#define VIRTIO_XDP_FLAG BIT(0)6161+6062/* RX packet size EWMA. The average packet size is used to determine the packet6163 * buffer size when refilling RX rings. As the entire RX ring may be refilled6264 * at once, the weight is chosen so that the EWMA will be insensitive to short-···253251 */254252 char padding[4];255253};254254+255255+static bool is_xdp_frame(void *ptr)256256+{257257+ return (unsigned long)ptr & VIRTIO_XDP_FLAG;258258+}259259+260260+static void *xdp_to_ptr(struct xdp_frame *ptr)261261+{262262+ return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);263263+}264264+265265+static struct xdp_frame *ptr_to_xdp(void *ptr)266266+{267267+ return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);268268+}256269257270/* Converting between virtqueue no. and kernel tx/rx queue no.258271 * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq···479462480463 sg_init_one(sq->sg, xdpf->data, xdpf->len);481464482482- err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);465465+ err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdp_to_ptr(xdpf),466466+ GFP_ATOMIC);483467 if (unlikely(err))484468 return -ENOSPC; /* Caller handle free/refcnt */485469···500482{501483 struct virtnet_info *vi = netdev_priv(dev);502484 struct receive_queue *rq = vi->rq;503503- struct xdp_frame *xdpf_sent;504485 struct bpf_prog *xdp_prog;505486 struct send_queue *sq;506487 unsigned int len;488488+ int packets = 0;489489+ int bytes = 0;507490 int drops = 0;508491 int kicks = 0;509492 int ret, err;493493+ void *ptr;510494 int i;495495+496496+ /* Only allow ndo_xdp_xmit if XDP is loaded on dev, as this497497+ * indicate XDP resources have been successfully allocated.498498+ */499499+ xdp_prog = rcu_dereference(rq->xdp_prog);500500+ if (!xdp_prog)501501+ return -ENXIO;511502512503 sq = virtnet_xdp_sq(vi);513504···526499 goto out;527500 }528501529529- /* Only allow ndo_xdp_xmit if XDP is loaded on dev, 
as this530530- * indicate XDP resources have been successfully allocated.531531- */532532- xdp_prog = rcu_dereference(rq->xdp_prog);533533- if (!xdp_prog) {534534- ret = -ENXIO;535535- drops = n;536536- goto out;537537- }538538-539502 /* Free up any pending old buffers before queueing new ones. */540540- while ((xdpf_sent = virtqueue_get_buf(sq->vq, &len)) != NULL)541541- xdp_return_frame(xdpf_sent);503503+ while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {504504+ if (likely(is_xdp_frame(ptr))) {505505+ struct xdp_frame *frame = ptr_to_xdp(ptr);506506+507507+ bytes += frame->len;508508+ xdp_return_frame(frame);509509+ } else {510510+ struct sk_buff *skb = ptr;511511+512512+ bytes += skb->len;513513+ napi_consume_skb(skb, false);514514+ }515515+ packets++;516516+ }542517543518 for (i = 0; i < n; i++) {544519 struct xdp_frame *xdpf = frames[i];···559530 }560531out:561532 u64_stats_update_begin(&sq->stats.syncp);533533+ sq->stats.bytes += bytes;534534+ sq->stats.packets += packets;562535 sq->stats.xdp_tx += n;563536 sq->stats.xdp_tx_drops += drops;564537 sq->stats.kicks += kicks;···1363133213641333static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)13651334{13661366- struct sk_buff *skb;13671335 unsigned int len;13681336 unsigned int packets = 0;13691337 unsigned int bytes = 0;13381338+ void *ptr;1370133913711371- while ((skb = virtqueue_get_buf(sq->vq, &len)) != NULL) {13721372- pr_debug("Sent skb %p\n", skb);13401340+ while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {13411341+ if (likely(!is_xdp_frame(ptr))) {13421342+ struct sk_buff *skb = ptr;1373134313741374- bytes += skb->len;13441344+ pr_debug("Sent skb %p\n", skb);13451345+13461346+ bytes += skb->len;13471347+ napi_consume_skb(skb, in_napi);13481348+ } else {13491349+ struct xdp_frame *frame = ptr_to_xdp(ptr);13501350+13511351+ bytes += frame->len;13521352+ xdp_return_frame(frame);13531353+ }13751354 packets++;13761376-13771377- napi_consume_skb(skb, in_napi);13781355 
}1379135613801357 /* Avoid overhead when no packets have been processed···13971358 u64_stats_update_end(&sq->stats.syncp);13981359}1399136013611361+static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)13621362+{13631363+ if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))13641364+ return false;13651365+ else if (q < vi->curr_queue_pairs)13661366+ return true;13671367+ else13681368+ return false;13691369+}13701370+14001371static void virtnet_poll_cleantx(struct receive_queue *rq)14011372{14021373 struct virtnet_info *vi = rq->vq->vdev->priv;···14141365 struct send_queue *sq = &vi->sq[index];14151366 struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index);1416136714171417- if (!sq->napi.weight)13681368+ if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index))14181369 return;1419137014201371 if (__netif_tx_trylock(txq)) {···14911442{14921443 struct send_queue *sq = container_of(napi, struct send_queue, napi);14931444 struct virtnet_info *vi = sq->vq->vdev->priv;14941494- struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));14451445+ unsigned int index = vq2txq(sq->vq);14461446+ struct netdev_queue *txq;1495144714481448+ if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {14491449+ /* We don't need to enable cb for XDP */14501450+ napi_complete_done(napi, 0);14511451+ return 0;14521452+ }14531453+14541454+ txq = netdev_get_tx_queue(vi->dev, index);14961455 __netif_tx_lock(txq, raw_smp_processor_id());14971456 free_old_xmit_skbs(sq, true);14981457 __netif_tx_unlock(txq);···24522395 return -ENOMEM;24532396 }2454239723982398+ old_prog = rtnl_dereference(vi->rq[0].xdp_prog);23992399+ if (!prog && !old_prog)24002400+ return 0;24012401+24552402 if (prog) {24562403 prog = bpf_prog_add(prog, vi->max_queue_pairs - 1);24572404 if (IS_ERR(prog))···24632402 }2464240324652404 /* Make sure NAPI is not using any XDP TX queues for RX. 
*/24662466- if (netif_running(dev))24672467- for (i = 0; i < vi->max_queue_pairs; i++)24052405+ if (netif_running(dev)) {24062406+ for (i = 0; i < vi->max_queue_pairs; i++) {24682407 napi_disable(&vi->rq[i].napi);24082408+ virtnet_napi_tx_disable(&vi->sq[i].napi);24092409+ }24102410+ }2469241124702470- netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);24122412+ if (!prog) {24132413+ for (i = 0; i < vi->max_queue_pairs; i++) {24142414+ rcu_assign_pointer(vi->rq[i].xdp_prog, prog);24152415+ if (i == 0)24162416+ virtnet_restore_guest_offloads(vi);24172417+ }24182418+ synchronize_net();24192419+ }24202420+24712421 err = _virtnet_set_queues(vi, curr_qp + xdp_qp);24722422 if (err)24732423 goto err;24242424+ netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);24742425 vi->xdp_queue_pairs = xdp_qp;2475242624762476- for (i = 0; i < vi->max_queue_pairs; i++) {24772477- old_prog = rtnl_dereference(vi->rq[i].xdp_prog);24782478- rcu_assign_pointer(vi->rq[i].xdp_prog, prog);24792479- if (i == 0) {24802480- if (!old_prog)24272427+ if (prog) {24282428+ for (i = 0; i < vi->max_queue_pairs; i++) {24292429+ rcu_assign_pointer(vi->rq[i].xdp_prog, prog);24302430+ if (i == 0 && !old_prog)24812431 virtnet_clear_guest_offloads(vi);24822482- if (!prog)24832483- virtnet_restore_guest_offloads(vi);24842432 }24332433+ }24342434+24352435+ for (i = 0; i < vi->max_queue_pairs; i++) {24852436 if (old_prog)24862437 bpf_prog_put(old_prog);24872487- if (netif_running(dev))24382438+ if (netif_running(dev)) {24882439 virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);24402440+ virtnet_napi_tx_enable(vi, vi->sq[i].vq,24412441+ &vi->sq[i].napi);24422442+ }24892443 }2490244424912445 return 0;2492244624932447err:24942494- for (i = 0; i < vi->max_queue_pairs; i++)24952495- virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);24482448+ if (!prog) {24492449+ virtnet_clear_guest_offloads(vi);24502450+ for (i = 0; i < vi->max_queue_pairs; i++)24512451+ rcu_assign_pointer(vi->rq[i].xdp_prog, old_prog);24522452+ 
}24532453+24542454+ if (netif_running(dev)) {24552455+ for (i = 0; i < vi->max_queue_pairs; i++) {24562456+ virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);24572457+ virtnet_napi_tx_enable(vi, vi->sq[i].vq,24582458+ &vi->sq[i].napi);24592459+ }24602460+ }24962461 if (prog)24972462 bpf_prog_sub(prog, vi->max_queue_pairs - 1);24982463 return err;···27002613 put_page(vi->rq[i].alloc_frag.page);27012614}2702261527032703-static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)27042704-{27052705- if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))27062706- return false;27072707- else if (q < vi->curr_queue_pairs)27082708- return true;27092709- else27102710- return false;27112711-}27122712-27132616static void free_unused_bufs(struct virtnet_info *vi)27142617{27152618 void *buf;···27082631 for (i = 0; i < vi->max_queue_pairs; i++) {27092632 struct virtqueue *vq = vi->sq[i].vq;27102633 while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {27112711- if (!is_xdp_raw_buffer_queue(vi, i))26342634+ if (!is_xdp_frame(buf))27122635 dev_kfree_skb(buf);27132636 else27142714- put_page(virt_to_head_page(buf));26372637+ xdp_return_frame(ptr_to_xdp(buf));27152638 }27162639 }27172640
drivers/net/wireless/ti/wlcore/sdio.c
···164164 }165165166166 sdio_claim_host(func);167167+ /*168168+ * To guarantee that the SDIO card is power cycled, as required to make169169+ * the FW programming to succeed, let's do a brute force HW reset.170170+ */171171+ mmc_hw_reset(card->host);172172+167173 sdio_enable_func(func);168174 sdio_release_host(func);169175···180174{181175 struct sdio_func *func = dev_to_sdio_func(glue->dev);182176 struct mmc_card *card = func->card;183183- int error;184177185178 sdio_claim_host(func);186179 sdio_disable_func(func);187180 sdio_release_host(func);188181189182 /* Let runtime PM know the card is powered off */190190- error = pm_runtime_put(&card->dev);191191- if (error < 0 && error != -EBUSY) {192192- dev_err(&card->dev, "%s failed: %i\n", __func__, error);193193-194194- return error;195195- }196196-183183+ pm_runtime_put(&card->dev);197184 return 0;198185}199186
+7-1
drivers/nvme/host/core.c
···12531253 * effects say only one namespace is affected.12541254 */12551255 if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) {12561256+ mutex_lock(&ctrl->scan_lock);12561257 nvme_start_freeze(ctrl);12571258 nvme_wait_freeze(ctrl);12581259 }···12821281 */12831282 if (effects & NVME_CMD_EFFECTS_LBCC)12841283 nvme_update_formats(ctrl);12851285- if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK))12841284+ if (effects & (NVME_CMD_EFFECTS_LBCC | NVME_CMD_EFFECTS_CSE_MASK)) {12861285 nvme_unfreeze(ctrl);12861286+ mutex_unlock(&ctrl->scan_lock);12871287+ }12871288 if (effects & NVME_CMD_EFFECTS_CCC)12881289 nvme_init_identify(ctrl);12891290 if (effects & (NVME_CMD_EFFECTS_NIC | NVME_CMD_EFFECTS_NCC))···34043401 if (nvme_identify_ctrl(ctrl, &id))34053402 return;3406340334043404+ mutex_lock(&ctrl->scan_lock);34073405 nn = le32_to_cpu(id->nn);34083406 if (ctrl->vs >= NVME_VS(1, 1, 0) &&34093407 !(ctrl->quirks & NVME_QUIRK_IDENTIFY_CNS)) {···34133409 }34143410 nvme_scan_ns_sequential(ctrl, nn);34153411out_free_id:34123412+ mutex_unlock(&ctrl->scan_lock);34163413 kfree(id);34173414 down_write(&ctrl->namespaces_rwsem);34183415 list_sort(NULL, &ctrl->namespaces, ns_cmp);···3657365236583653 ctrl->state = NVME_CTRL_NEW;36593654 spin_lock_init(&ctrl->lock);36553655+ mutex_init(&ctrl->scan_lock);36603656 INIT_LIST_HEAD(&ctrl->namespaces);36613657 init_rwsem(&ctrl->namespaces_rwsem);36623658 ctrl->dev = dev;
drivers/nvme/host/pci.c
···25572557 if (dev->ctrl.ctrl_config & NVME_CC_ENABLE)25582558 nvme_dev_disable(dev, false);2559255925602560- /*25612561- * Introduce CONNECTING state from nvme-fc/rdma transports to mark the25622562- * initializing procedure here.25632563- */25642564- if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_CONNECTING)) {25652565- dev_warn(dev->ctrl.device,25662566- "failed to mark controller CONNECTING\n");25672567- goto out;25682568- }25692569-25602560+ mutex_lock(&dev->shutdown_lock);25702561 result = nvme_pci_enable(dev);25712562 if (result)25722563 goto out;···25762585 */25772586 dev->ctrl.max_hw_sectors = NVME_MAX_KB_SZ << 1;25782587 dev->ctrl.max_segments = NVME_MAX_SEGS;25882588+ mutex_unlock(&dev->shutdown_lock);25892589+25902590+ /*25912591+ * Introduce CONNECTING state from nvme-fc/rdma transports to mark the25922592+ * initializing procedure here.25932593+ */25942594+ if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_CONNECTING)) {25952595+ dev_warn(dev->ctrl.device,25962596+ "failed to mark controller CONNECTING\n");25972597+ goto out;25982598+ }2579259925802600 result = nvme_init_identify(&dev->ctrl);25812601 if (result)
+7-4
drivers/pci/controller/dwc/pci-imx6.c
···310310 imx6_pcie->pd_pcie = dev_pm_domain_attach_by_name(dev, "pcie");311311 if (IS_ERR(imx6_pcie->pd_pcie))312312 return PTR_ERR(imx6_pcie->pd_pcie);313313+ /* Do nothing when power domain missing */314314+ if (!imx6_pcie->pd_pcie)315315+ return 0;313316 link = device_link_add(dev, imx6_pcie->pd_pcie,314317 DL_FLAG_STATELESS |315318 DL_FLAG_PM_RUNTIME |···326323 if (IS_ERR(imx6_pcie->pd_pcie_phy))327324 return PTR_ERR(imx6_pcie->pd_pcie_phy);328325329329- device_link_add(dev, imx6_pcie->pd_pcie_phy,326326+ link = device_link_add(dev, imx6_pcie->pd_pcie_phy,330327 DL_FLAG_STATELESS |331328 DL_FLAG_PM_RUNTIME |332329 DL_FLAG_RPM_ACTIVE);333333- if (IS_ERR(link)) {334334- dev_err(dev, "Failed to add device_link to pcie_phy pd: %ld\n", PTR_ERR(link));335335- return PTR_ERR(link);330330+ if (!link) {331331+ dev_err(dev, "Failed to add device_link to pcie_phy pd.\n");332332+ return -EINVAL;336333 }337334338335 return 0;
-16
drivers/pci/controller/dwc/pcie-armada8k.c
···2222#include <linux/resource.h>2323#include <linux/of_pci.h>2424#include <linux/of_irq.h>2525-#include <linux/gpio/consumer.h>26252726#include "pcie-designware.h"2827···2930 struct dw_pcie *pci;3031 struct clk *clk;3132 struct clk *clk_reg;3232- struct gpio_desc *reset_gpio;3333};34343535#define PCIE_VENDOR_REGS_OFFSET 0x8000···137139 struct dw_pcie *pci = to_dw_pcie_from_pp(pp);138140 struct armada8k_pcie *pcie = to_armada8k_pcie(pci);139141140140- if (pcie->reset_gpio) {141141- /* assert and then deassert the reset signal */142142- gpiod_set_value_cansleep(pcie->reset_gpio, 1);143143- msleep(100);144144- gpiod_set_value_cansleep(pcie->reset_gpio, 0);145145- }146142 dw_pcie_setup_rc(pp);147143 armada8k_pcie_establish_link(pcie);148144···246254 if (IS_ERR(pci->dbi_base)) {247255 dev_err(dev, "couldn't remap regs base %p\n", base);248256 ret = PTR_ERR(pci->dbi_base);249249- goto fail_clkreg;250250- }251251-252252- /* Get reset gpio signal and hold asserted (logically high) */253253- pcie->reset_gpio = devm_gpiod_get_optional(dev, "reset",254254- GPIOD_OUT_HIGH);255255- if (IS_ERR(pcie->reset_gpio)) {256256- ret = PTR_ERR(pcie->reset_gpio);257257 goto fail_clkreg;258258 }259259
+3-2
drivers/pci/quirks.c
···639639 break;640640 }641641}642642-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_SYNOPSYS, PCI_ANY_ID,643643- quirk_synopsys_haps);642642+DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_VENDOR_ID_SYNOPSYS, PCI_ANY_ID,643643+ PCI_CLASS_SERIAL_USB_XHCI, 0,644644+ quirk_synopsys_haps);644645645646/*646647 * Let's make the southbridge information explicit instead of having to
drivers/platform/x86/Kconfig
···905905config ACPI_CMPC906906 tristate "CMPC Laptop Extras"907907 depends on ACPI && INPUT908908+ depends on BACKLIGHT_LCD_SUPPORT908909 depends on RFKILL || RFKILL=n909910 select BACKLIGHT_CLASS_DEVICE910911 help···11291128config SAMSUNG_Q1011301129 tristate "Samsung Q10 Extras"11311130 depends on ACPI11311131+ depends on BACKLIGHT_LCD_SUPPORT11321132 select BACKLIGHT_CLASS_DEVICE11331133 ---help---11341134 This driver provides support for backlight control on Samsung Q10
drivers/scsi/scsi_debug.c
···62626363/* make sure inq_product_rev string corresponds to this version */6464#define SDEBUG_VERSION "0188" /* format to fit INQUIRY revision field */6565-static const char *sdebug_version_date = "20180128";6565+static const char *sdebug_version_date = "20190125";66666767#define MY_NAME "scsi_debug"6868···735735 (sdebug_lbpu || sdebug_lbpws || sdebug_lbpws10);736736}737737738738-static void *fake_store(unsigned long long lba)738738+static void *lba2fake_store(unsigned long long lba)739739{740740 lba = do_div(lba, sdebug_store_sectors);741741···25142514 return ret;25152515}2516251625172517-/* If fake_store(lba,num) compares equal to arr(num), then copy top half of25182518- * arr into fake_store(lba,num) and return true. If comparison fails then25172517+/* If lba2fake_store(lba,num) compares equal to arr(num), then copy top half of25182518+ * arr into lba2fake_store(lba,num) and return true. If comparison fails then25192519 * return false. */25202520static bool comp_write_worker(u64 lba, u32 num, const u8 *arr)25212521{···26432643 if (sdt->app_tag == cpu_to_be16(0xffff))26442644 continue;2645264526462646- ret = dif_verify(sdt, fake_store(sector), sector, ei_lba);26462646+ ret = dif_verify(sdt, lba2fake_store(sector), sector, ei_lba);26472647 if (ret) {26482648 dif_errors++;26492649 return ret;···32613261static int resp_write_same(struct scsi_cmnd *scp, u64 lba, u32 num,32623262 u32 ei_lba, bool unmap, bool ndob)32633263{32643264+ int ret;32643265 unsigned long iflags;32653266 unsigned long long i;32663266- int ret;32673267- u64 lba_off;32673267+ u32 lb_size = sdebug_sector_size;32683268+ u64 block, lbaa;32693269+ u8 *fs1p;3268327032693271 ret = check_device_access_params(scp, lba, num);32703272 if (ret)···32783276 unmap_region(lba, num);32793277 goto out;32803278 }32813281-32823282- lba_off = lba * sdebug_sector_size;32793279+ lbaa = lba;32803280+ block = do_div(lbaa, sdebug_store_sectors);32833281 /* if ndob then zero 1 logical block, else fetch 1 logical block 
*/32823282+ fs1p = fake_storep + (block * lb_size);32843283 if (ndob) {32853285- memset(fake_storep + lba_off, 0, sdebug_sector_size);32843284+ memset(fs1p, 0, lb_size);32863285 ret = 0;32873286 } else32883288- ret = fetch_to_dev_buffer(scp, fake_storep + lba_off,32893289- sdebug_sector_size);32873287+ ret = fetch_to_dev_buffer(scp, fs1p, lb_size);3290328832913289 if (-1 == ret) {32923290 write_unlock_irqrestore(&atomic_rw, iflags);32933291 return DID_ERROR << 16;32943294- } else if (sdebug_verbose && !ndob && (ret < sdebug_sector_size))32923292+ } else if (sdebug_verbose && !ndob && (ret < lb_size))32953293 sdev_printk(KERN_INFO, scp->device,32963294 "%s: %s: lb size=%u, IO sent=%d bytes\n",32973297- my_name, "write same",32983298- sdebug_sector_size, ret);32953295+ my_name, "write same", lb_size, ret);3299329633003297 /* Copy first sector to remaining blocks */33013301- for (i = 1 ; i < num ; i++)33023302- memcpy(fake_storep + ((lba + i) * sdebug_sector_size),33033303- fake_storep + lba_off,33043304- sdebug_sector_size);33053305-32983298+ for (i = 1 ; i < num ; i++) {32993299+ lbaa = lba + i;33003300+ block = do_div(lbaa, sdebug_store_sectors);33013301+ memmove(fake_storep + (block * lb_size), fs1p, lb_size);33023302+ }33063303 if (scsi_debug_lbp())33073304 map_region(lba, num);33083305out:
+8-4
drivers/scsi/sd_zbc.c
···462462 sdkp->device->use_10_for_rw = 0;463463464464 /*465465- * If something changed, revalidate the disk zone bitmaps once we have466466- * the capacity, that is on the second revalidate execution during disk467467- * scan and always during normal revalidate.465465+ * Revalidate the disk zone bitmaps once the block device capacity is466466+ * set on the second revalidate execution during disk scan and if467467+ * something changed when executing a normal revalidate.468468 */469469- if (sdkp->first_scan)469469+ if (sdkp->first_scan) {470470+ sdkp->zone_blocks = zone_blocks;471471+ sdkp->nr_zones = nr_zones;470472 return 0;473473+ }474474+471475 if (sdkp->zone_blocks != zone_blocks ||472476 sdkp->nr_zones != nr_zones ||473477 disk->queue->nr_zones != nr_zones) {
drivers/tty/serial/8250/8250_pci.c
···34203420serial_pci_guess_board(struct pci_dev *dev, struct pciserial_board *board)34213421{34223422 int num_iomem, num_port, first_port = -1, i;34233423+ int rc;34243424+34253425+ rc = serial_pci_is_class_communication(dev);34263426+ if (rc)34273427+ return rc;3423342834243429 /*34253430 * Should we try to make guesses for multiport serial devices later?···36513646 }3652364736533648 board = &pci_boards[ent->driver_data];36543654-36553655- rc = serial_pci_is_class_communication(dev);36563656- if (rc)36573657- return rc;3658364936593650 rc = serial_pci_is_blacklisted(dev);36603651 if (rc)
+8-5
drivers/tty/serial/earlycon-riscv-sbi.c
···1010#include <linux/serial_core.h>1111#include <asm/sbi.h>12121313-static void sbi_console_write(struct console *con,1414- const char *s, unsigned int n)1313+static void sbi_putc(struct uart_port *port, int c)1514{1616- int i;1515+ sbi_console_putchar(c);1616+}17171818- for (i = 0; i < n; ++i)1919- sbi_console_putchar(s[i]);1818+static void sbi_console_write(struct console *con,1919+ const char *s, unsigned n)2020+{2121+ struct earlycon_device *dev = con->data;2222+ uart_console_write(&dev->port, s, n, sbi_putc);2023}21242225static int __init early_sbi_setup(struct earlycon_device *device,
+6
drivers/tty/serial/serial_core.c
···130130 struct uart_port *port;131131 unsigned long flags;132132133133+ if (!state)134134+ return;135135+133136 port = uart_port_lock(state, flags);134137 __uart_start(tty);135138 uart_port_unlock(port, flags);···729726 struct uart_state *state = tty->driver_data;730727 upstat_t mask = UPSTAT_SYNC_FIFO;731728 struct uart_port *port;729729+730730+ if (!state)731731+ return;732732733733 port = uart_port_ref(state);734734 if (!port)
+8-1
drivers/tty/serial/sh-sci.c
···1921192119221922static void sci_free_irq(struct sci_port *port)19231923{19241924- int i;19241924+ int i, j;1925192519261926 /*19271927 * Intentionally in reverse order so we iterate over the muxed···19351935 * interrupt sources.19361936 */19371937 if (unlikely(irq < 0))19381938+ continue;19391939+19401940+ /* Check if already freed (irq was muxed) */19411941+ for (j = 0; j < i; j++)19421942+ if (port->irqs[j] == irq)19431943+ j = i + 1;19441944+ if (j > i)19381945 continue;1939194619401947 free_irq(port->irqs[i], port);
+2-2
drivers/usb/dwc3/dwc3-exynos.c
···7878 for (i = 0; i < exynos->num_clks; i++) {7979 ret = clk_prepare_enable(exynos->clks[i]);8080 if (ret) {8181- while (--i > 0)8181+ while (i-- > 0)8282 clk_disable_unprepare(exynos->clks[i]);8383 return ret;8484 }···223223 for (i = 0; i < exynos->num_clks; i++) {224224 ret = clk_prepare_enable(exynos->clks[i]);225225 if (ret) {226226- while (--i > 0)226226+ while (i-- > 0)227227 clk_disable_unprepare(exynos->clks[i]);228228 return ret;229229 }
+1-1
drivers/usb/dwc3/gadget.c
···11191119 unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);11201120 unsigned int rem = length % maxp;1121112111221122- if (rem && usb_endpoint_dir_out(dep->endpoint.desc)) {11221122+ if ((!length || rem) && usb_endpoint_dir_out(dep->endpoint.desc)) {11231123 struct dwc3 *dwc = dep->dwc;11241124 struct dwc3_trb *trb;11251125
+1-1
drivers/usb/gadget/udc/net2272.c
···20832083#if defined(PLX_PCI_RDK2)20842084 /* see if PCI int for us by checking irqstat */20852085 intcsr = readl(dev->rdk2.fpga_base_addr + RDK2_IRQSTAT);20862086- if (!intcsr & (1 << NET2272_PCI_IRQ)) {20862086+ if (!(intcsr & (1 << NET2272_PCI_IRQ))) {20872087 spin_unlock(&dev->lock);20882088 return IRQ_NONE;20892089 }
fs/autofs/inode.c
···266266 }267267 root_inode = autofs_get_inode(s, S_IFDIR | 0755);268268 root = d_make_root(root_inode);269269- if (!root)269269+ if (!root) {270270+ ret = -ENOMEM;270271 goto fail_ino;272272+ }271273 pipe = NULL;272274273275 root->d_fsdata = ino;
+50-28
fs/btrfs/ctree.c
···968968 return 0;969969}970970971971+static struct extent_buffer *alloc_tree_block_no_bg_flush(972972+ struct btrfs_trans_handle *trans,973973+ struct btrfs_root *root,974974+ u64 parent_start,975975+ const struct btrfs_disk_key *disk_key,976976+ int level,977977+ u64 hint,978978+ u64 empty_size)979979+{980980+ struct btrfs_fs_info *fs_info = root->fs_info;981981+ struct extent_buffer *ret;982982+983983+ /*984984+ * If we are COWing a node/leaf from the extent, chunk, device or free985985+ * space trees, make sure that we do not finish block group creation of986986+ * pending block groups. We do this to avoid a deadlock.987987+ * COWing can result in allocation of a new chunk, and flushing pending988988+ * block groups (btrfs_create_pending_block_groups()) can be triggered989989+ * when finishing allocation of a new chunk. Creation of a pending block990990+ * group modifies the extent, chunk, device and free space trees,991991+ * therefore we could deadlock with ourselves since we are holding a992992+ * lock on an extent buffer that btrfs_create_pending_block_groups() may993993+ * try to COW later.994994+ * For similar reasons, we also need to delay flushing pending block995995+ * groups when splitting a leaf or node, from one of those trees, since996996+ * we are holding a write lock on it and its parent or when inserting a997997+ * new root node for one of those trees.998998+ */999999+ if (root == fs_info->extent_root ||10001000+ root == fs_info->chunk_root ||10011001+ root == fs_info->dev_root ||10021002+ root == fs_info->free_space_root)10031003+ trans->can_flush_pending_bgs = false;10041004+10051005+ ret = btrfs_alloc_tree_block(trans, root, parent_start,10061006+ root->root_key.objectid, disk_key, level,10071007+ hint, empty_size);10081008+ trans->can_flush_pending_bgs = true;10091009+10101010+ return ret;10111011+}10121012+9711013/*9721014 * does the dirty work in cow of a single block. 
The parent block (if9731015 * supplied) is updated to point to the new cow copy. The new buffer is marked···10571015 if ((root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) && parent)10581016 parent_start = parent->start;1059101710601060- /*10611061- * If we are COWing a node/leaf from the extent, chunk, device or free10621062- * space trees, make sure that we do not finish block group creation of10631063- * pending block groups. We do this to avoid a deadlock.10641064- * COWing can result in allocation of a new chunk, and flushing pending10651065- * block groups (btrfs_create_pending_block_groups()) can be triggered10661066- * when finishing allocation of a new chunk. Creation of a pending block10671067- * group modifies the extent, chunk, device and free space trees,10681068- * therefore we could deadlock with ourselves since we are holding a10691069- * lock on an extent buffer that btrfs_create_pending_block_groups() may10701070- * try to COW later.10711071- */10721072- if (root == fs_info->extent_root ||10731073- root == fs_info->chunk_root ||10741074- root == fs_info->dev_root ||10751075- root == fs_info->free_space_root)10761076- trans->can_flush_pending_bgs = false;10771077-10781078- cow = btrfs_alloc_tree_block(trans, root, parent_start,10791079- root->root_key.objectid, &disk_key, level,10801080- search_start, empty_size);10811081- trans->can_flush_pending_bgs = true;10181018+ cow = alloc_tree_block_no_bg_flush(trans, root, parent_start, &disk_key,10191019+ level, search_start, empty_size);10821020 if (IS_ERR(cow))10831021 return PTR_ERR(cow);10841022···33673345 else33683346 btrfs_node_key(lower, &lower_key, 0);3369334733703370- c = btrfs_alloc_tree_block(trans, root, 0, root->root_key.objectid,33713371- &lower_key, level, root->node->start, 0);33483348+ c = alloc_tree_block_no_bg_flush(trans, root, 0, &lower_key, level,33493349+ root->node->start, 0);33723350 if (IS_ERR(c))33733351 return PTR_ERR(c);33743352···34973475 mid = (c_nritems + 1) / 
2;34983476 btrfs_node_key(c, &disk_key, mid);3499347735003500- split = btrfs_alloc_tree_block(trans, root, 0, root->root_key.objectid,35013501- &disk_key, level, c->start, 0);34783478+ split = alloc_tree_block_no_bg_flush(trans, root, 0, &disk_key, level,34793479+ c->start, 0);35023480 if (IS_ERR(split))35033481 return PTR_ERR(split);35043482···42824260 else42834261 btrfs_item_key(l, &disk_key, mid);4284426242854285- right = btrfs_alloc_tree_block(trans, root, 0, root->root_key.objectid,42864286- &disk_key, 0, l->start, 0);42634263+ right = alloc_tree_block_no_bg_flush(trans, root, 0, &disk_key, 0,42644264+ l->start, 0);42874265 if (IS_ERR(right))42884266 return PTR_ERR(right);42894267
···84848585#define NUMBER_OF_SMB2_COMMANDS 0x001386868787-/* 4 len + 52 transform hdr + 64 hdr + 56 create rsp */8888-#define MAX_SMB2_HDR_SIZE 0x00b08787+/* 52 transform hdr + 64 hdr + 88 create rsp */8888+#define SMB2_TRANSFORM_HEADER_SIZE 528989+#define MAX_SMB2_HDR_SIZE 20489909091#define SMB2_PROTO_NUMBER cpu_to_le32(0x424d53fe)9192#define SMB2_TRANSFORM_PROTO_NUM cpu_to_le32(0x424d53fd)···649648 __u8 Buffer[0];650649} __packed;651650651651+/*652652+ * Maximum size of a SMB2_CREATE response is 64 (smb2 header) +653653+ * 88 (fixed part of create response) + 520 (path) + 150 (contexts) +654654+ * 2 bytes of padding.655655+ */656656+#define MAX_SMB2_CREATE_RESPONSE_SIZE 824657657+652658struct smb2_create_rsp {653659 struct smb2_sync_hdr sync_hdr;654660 __le16 StructureSize; /* Must be 89 */···1003995 __u64 PersistentFileId; /* opaque endianness */1004996 __u64 VolatileFileId; /* opaque endianness */1005997} __packed;998998+999999+/*10001000+ * Maximum size of a SMB2_CLOSE response is 64 (smb2 header) + 60 (data)10011001+ */10021002+#define MAX_SMB2_CLOSE_RESPONSE_SIZE 1241006100310071004struct smb2_close_rsp {10081005 struct smb2_sync_hdr sync_hdr;···14101397 __le32 FileNameLength;14111398 char FileName[0]; /* Name to be assigned to new link */14121399} __packed; /* level 11 Set */14131413-14141414-#define SMB2_MAX_EA_BUF 655361415140014161401struct smb2_file_full_ea_info { /* encoding of response for level 15 */14171402 __le32 next_entry_offset;
+33-5
fs/dcache.c
fs/dcache.c

···
 static DEFINE_PER_CPU(long, nr_dentry);
 static DEFINE_PER_CPU(long, nr_dentry_unused);
+static DEFINE_PER_CPU(long, nr_dentry_negative);

 #if defined(CONFIG_SYSCTL) && defined(CONFIG_PROC_FS)
···
 	return sum < 0 ? 0 : sum;
 }

+static long get_nr_dentry_negative(void)
+{
+	int i;
+	long sum = 0;
+
+	for_each_possible_cpu(i)
+		sum += per_cpu(nr_dentry_negative, i);
+	return sum < 0 ? 0 : sum;
+}
+
 int proc_nr_dentry(struct ctl_table *table, int write, void __user *buffer,
 		   size_t *lenp, loff_t *ppos)
 {
 	dentry_stat.nr_dentry = get_nr_dentry();
 	dentry_stat.nr_unused = get_nr_dentry_unused();
+	dentry_stat.nr_negative = get_nr_dentry_negative();
 	return proc_doulongvec_minmax(table, write, buffer, lenp, ppos);
 }
 #endif
···
 	flags &= ~(DCACHE_ENTRY_TYPE | DCACHE_FALLTHRU);
 	WRITE_ONCE(dentry->d_flags, flags);
 	dentry->d_inode = NULL;
+	if (dentry->d_flags & DCACHE_LRU_LIST)
+		this_cpu_inc(nr_dentry_negative);
 }

 static void dentry_free(struct dentry *dentry)
···
  * The per-cpu "nr_dentry_unused" counters are updated with
  * the DCACHE_LRU_LIST bit.
  *
+ * The per-cpu "nr_dentry_negative" counters are only updated
+ * when deleted from or added to the per-superblock LRU list, not
+ * from/to the shrink list. That is to avoid an unneeded dec/inc
+ * pair when moving from LRU to shrink list in select_collect().
+ *
  * These helper functions make sure we always follow the
  * rules. d_lock must be held by the caller.
  */
···
 	D_FLAG_VERIFY(dentry, 0);
 	dentry->d_flags |= DCACHE_LRU_LIST;
 	this_cpu_inc(nr_dentry_unused);
+	if (d_is_negative(dentry))
+		this_cpu_inc(nr_dentry_negative);
 	WARN_ON_ONCE(!list_lru_add(&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
 }
···
 	D_FLAG_VERIFY(dentry, DCACHE_LRU_LIST);
 	dentry->d_flags &= ~DCACHE_LRU_LIST;
 	this_cpu_dec(nr_dentry_unused);
+	if (d_is_negative(dentry))
+		this_cpu_dec(nr_dentry_negative);
 	WARN_ON_ONCE(!list_lru_del(&dentry->d_sb->s_dentry_lru, &dentry->d_lru));
 }
···
 	D_FLAG_VERIFY(dentry, DCACHE_LRU_LIST);
 	dentry->d_flags &= ~DCACHE_LRU_LIST;
 	this_cpu_dec(nr_dentry_unused);
+	if (d_is_negative(dentry))
+		this_cpu_dec(nr_dentry_negative);
 	list_lru_isolate(lru, &dentry->d_lru);
 }
···
 {
 	D_FLAG_VERIFY(dentry, DCACHE_LRU_LIST);
 	dentry->d_flags |= DCACHE_SHRINK_LIST;
+	if (d_is_negative(dentry))
+		this_cpu_dec(nr_dentry_negative);
 	list_lru_isolate_move(lru, &dentry->d_lru, list);
 }
···
  */
 void shrink_dcache_sb(struct super_block *sb)
 {
-	long freed;
-
 	do {
 		LIST_HEAD(dispose);

-		freed = list_lru_walk(&sb->s_dentry_lru,
+		list_lru_walk(&sb->s_dentry_lru,
 			dentry_lru_isolate_shrink, &dispose, 1024);
-
-		this_cpu_sub(nr_dentry_unused, freed);
 		shrink_dentry_list(&dispose);
 	} while (list_lru_count(&sb->s_dentry_lru) > 0);
 }
···
 	WARN_ON(d_in_lookup(dentry));

 	spin_lock(&dentry->d_lock);
+	/*
+	 * Decrement negative dentry count if it was in the LRU list.
+	 */
+	if (dentry->d_flags & DCACHE_LRU_LIST)
+		this_cpu_dec(nr_dentry_negative);
 	hlist_add_head(&dentry->d_u.d_alias, &inode->i_dentry);
 	raw_write_seqcount_begin(&dentry->d_seq);
 	__d_set_inode_and_type(dentry, inode, add_flags);
+24 -12
fs/debugfs/inode.c

···
 	inode_unlock(d_inode(dentry->d_parent));
 	dput(dentry);
 	simple_release_fs(&debugfs_mount, &debugfs_mount_count);
-	return NULL;
+	return ERR_PTR(-ENOMEM);
 }

 static struct dentry *end_creating(struct dentry *dentry)
···
 	dentry = start_creating(name, parent);

 	if (IS_ERR(dentry))
-		return NULL;
+		return dentry;

 	inode = debugfs_get_inode(dentry->d_sb);
 	if (unlikely(!inode))
···
  * This function will return a pointer to a dentry if it succeeds. This
  * pointer must be passed to the debugfs_remove() function when the file is
  * to be removed (no automatic cleanup happens if your module is unloaded,
- * you are responsible here.) If an error occurs, %NULL will be returned.
+ * you are responsible here.) If an error occurs, %ERR_PTR(-ERROR) will be
+ * returned.
  *
  * If debugfs is not enabled in the kernel, the value -%ENODEV will be
  * returned.
···
  * This function will return a pointer to a dentry if it succeeds. This
  * pointer must be passed to the debugfs_remove() function when the file is
  * to be removed (no automatic cleanup happens if your module is unloaded,
- * you are responsible here.) If an error occurs, %NULL will be returned.
+ * you are responsible here.) If an error occurs, %ERR_PTR(-ERROR) will be
+ * returned.
  *
  * If debugfs is not enabled in the kernel, the value -%ENODEV will be
  * returned.
···
  * This function will return a pointer to a dentry if it succeeds. This
  * pointer must be passed to the debugfs_remove() function when the file is
  * to be removed (no automatic cleanup happens if your module is unloaded,
- * you are responsible here.) If an error occurs, %NULL will be returned.
+ * you are responsible here.) If an error occurs, %ERR_PTR(-ERROR) will be
+ * returned.
  *
  * If debugfs is not enabled in the kernel, the value -%ENODEV will be
  * returned.
···
 	struct inode *inode;

 	if (IS_ERR(dentry))
-		return NULL;
+		return dentry;

 	inode = debugfs_get_inode(dentry->d_sb);
 	if (unlikely(!inode))
···
 	struct inode *inode;

 	if (IS_ERR(dentry))
-		return NULL;
+		return dentry;

 	inode = debugfs_get_inode(dentry->d_sb);
 	if (unlikely(!inode))
···
  * This function will return a pointer to a dentry if it succeeds. This
  * pointer must be passed to the debugfs_remove() function when the symbolic
  * link is to be removed (no automatic cleanup happens if your module is
- * unloaded, you are responsible here.) If an error occurs, %NULL will be
- * returned.
+ * unloaded, you are responsible here.) If an error occurs, %ERR_PTR(-ERROR)
+ * will be returned.
  *
  * If debugfs is not enabled in the kernel, the value -%ENODEV will be
  * returned.
···
 	struct inode *inode;
 	char *link = kstrdup(target, GFP_KERNEL);
 	if (!link)
-		return NULL;
+		return ERR_PTR(-ENOMEM);

 	dentry = start_creating(name, parent);
 	if (IS_ERR(dentry)) {
 		kfree(link);
-		return NULL;
+		return dentry;
 	}

 	inode = debugfs_get_inode(dentry->d_sb);
···
 	struct dentry *dentry = NULL, *trap;
 	struct name_snapshot old_name;

+	if (IS_ERR(old_dir))
+		return old_dir;
+	if (IS_ERR(new_dir))
+		return new_dir;
+	if (IS_ERR_OR_NULL(old_dentry))
+		return old_dentry;
+
 	trap = lock_rename(new_dir, old_dir);
 	/* Source or destination directories don't exist? */
 	if (d_really_is_negative(old_dir) || d_really_is_negative(new_dir))
···
 	if (dentry && !IS_ERR(dentry))
 		dput(dentry);
 	unlock_rename(new_dir, old_dir);
-	return NULL;
+	if (IS_ERR(dentry))
+		return dentry;
+	return ERR_PTR(-EINVAL);
 }
 EXPORT_SYMBOL_GPL(debugfs_rename);
+7 -1
fs/drop_caches.c

···
 	spin_lock(&sb->s_inode_list_lock);
 	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
 		spin_lock(&inode->i_lock);
+		/*
+		 * We must skip inodes in unusual state. We may also skip
+		 * inodes without pages but we deliberately won't in case
+		 * we need to reschedule to avoid softlockups.
+		 */
 		if ((inode->i_state & (I_FREEING|I_WILL_FREE|I_NEW)) ||
-		    (inode->i_mapping->nrpages == 0)) {
+		    (inode->i_mapping->nrpages == 0 && !need_resched())) {
 			spin_unlock(&inode->i_lock);
 			continue;
 		}
···
 		spin_unlock(&inode->i_lock);
 		spin_unlock(&sb->s_inode_list_lock);

+		cond_resched();
 		invalidate_mapping_pages(inode->i_mapping, 0, -1);
 		iput(toput_inode);
 		toput_inode = inode;
fs/iomap.c

···
 	atomic_set(&iop->read_count, 0);
 	atomic_set(&iop->write_count, 0);
 	bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
+
+	/*
+	 * migrate_page_move_mapping() assumes that pages with private data have
+	 * their count elevated by 1.
+	 */
+	get_page(page);
 	set_page_private(page, (unsigned long)iop);
 	SetPagePrivate(page);
 	return iop;
···
 	WARN_ON_ONCE(atomic_read(&iop->write_count));
 	ClearPagePrivate(page);
 	set_page_private(page, 0);
+	put_page(page);
 	kfree(iop);
 }
···
 	if (page_has_private(page)) {
 		ClearPagePrivate(page);
+		get_page(newpage);
 		set_page_private(newpage, page_private(page));
 		set_page_private(page, 0);
+		put_page(page);
 		SetPagePrivate(newpage);
 	}
···
 	loff_t pos = iocb->ki_pos, start = pos;
 	loff_t end = iocb->ki_pos + count - 1, ret = 0;
 	unsigned int flags = IOMAP_DIRECT;
+	bool wait_for_completion = is_sync_kiocb(iocb);
 	struct blk_plug plug;
 	struct iomap_dio *dio;
···
 	dio->end_io = end_io;
 	dio->error = 0;
 	dio->flags = 0;
-	dio->wait_for_completion = is_sync_kiocb(iocb);

 	dio->submit.iter = iter;
 	dio->submit.waiter = current;
···
 		dio_warn_stale_pagecache(iocb->ki_filp);
 	ret = 0;

-	if (iov_iter_rw(iter) == WRITE && !dio->wait_for_completion &&
+	if (iov_iter_rw(iter) == WRITE && !wait_for_completion &&
 	    !inode->i_sb->s_dio_done_wq) {
 		ret = sb_init_dio_done_wq(inode->i_sb);
 		if (ret < 0)
···
 		if (ret <= 0) {
 			/* magic error code to fall back to buffered I/O */
 			if (ret == -ENOTBLK) {
-				dio->wait_for_completion = true;
+				wait_for_completion = true;
 				ret = 0;
 			}
 			break;
···
 	if (dio->flags & IOMAP_DIO_WRITE_FUA)
 		dio->flags &= ~IOMAP_DIO_NEED_SYNC;

+	/*
+	 * We are about to drop our additional submission reference, which
+	 * might be the last reference to the dio. There are three different
+	 * ways we can progress here:
+	 *
+	 *  (a) If this is the last reference we will always complete and free
+	 *	the dio ourselves.
+	 *  (b) If this is not the last reference, and we serve an asynchronous
+	 *	iocb, we must never touch the dio after the decrement, the
+	 *	I/O completion handler will complete and free it.
+	 *  (c) If this is not the last reference, but we serve a synchronous
+	 *	iocb, the I/O completion handler will wake us up on the drop
+	 *	of the final reference, and we will complete and free it here
+	 *	after we got woken by the I/O completion handler.
+	 */
+	dio->wait_for_completion = wait_for_completion;
 	if (!atomic_dec_and_test(&dio->ref)) {
-		if (!dio->wait_for_completion)
+		if (!wait_for_completion)
 			return -EIOCBQUEUED;

 		for (;;) {
···
 		__set_current_state(TASK_RUNNING);
 	}

-	ret = iomap_dio_complete(dio);
-
-	return ret;
+	return iomap_dio_complete(dio);

 out_free_dio:
 	kfree(dio);
+5
fs/nfs/super.c

···
 	size_t len;
 	char *end;

+	if (unlikely(!dev_name || !*dev_name)) {
+		dfprintk(MOUNT, "NFS: device name not specified\n");
+		return -EINVAL;
+	}
+
 	/* Is the host name protected with square brakcets? */
 	if (*dev_name == '[') {
 		end = strchr(++dev_name, ']');
+5 -4
fs/nfs/write.c

···
 	nfs_set_page_writeback(page);
 	WARN_ON_ONCE(test_bit(PG_CLEAN, &req->wb_flags));

-	ret = 0;
+	ret = req->wb_context->error;
 	/* If there is a fatal error that covers this write, just exit */
-	if (nfs_error_is_fatal_on_server(req->wb_context->error))
+	if (nfs_error_is_fatal_on_server(ret))
 		goto out_launder;

+	ret = 0;
 	if (!nfs_pageio_add_request(pgio, req)) {
 		ret = pgio->pg_error;
 		/*
···
 			nfs_context_set_write_error(req->wb_context, ret);
 			if (nfs_error_is_fatal_on_server(ret))
 				goto out_launder;
-		}
+		} else
+			ret = -EAGAIN;
 		nfs_redirty_request(req);
-		ret = -EAGAIN;
 	} else
 		nfs_add_stats(page_file_mapping(page)->host,
 				NFSIOS_WRITEPAGES, 1);
fs/proc/proc_net.c

···
 	return maybe_get_net(PDE_NET(PDE(inode)));
 }

+static int proc_net_d_revalidate(struct dentry *dentry, unsigned int flags)
+{
+	return 0;
+}
+
+static const struct dentry_operations proc_net_dentry_ops = {
+	.d_revalidate	= proc_net_d_revalidate,
+	.d_delete	= always_delete_dentry,
+};
+
+static void pde_force_lookup(struct proc_dir_entry *pde)
+{
+	/* /proc/net/ entries can be changed under us by setns(CLONE_NEWNET) */
+	pde->proc_dops = &proc_net_dentry_ops;
+}
+
 static int seq_open_net(struct inode *inode, struct file *file)
 {
 	unsigned int state_size = PDE(inode)->state_size;
···
 	p = proc_create_reg(name, mode, &parent, data);
 	if (!p)
 		return NULL;
+	pde_force_lookup(p);
 	p->proc_fops = &proc_net_seq_fops;
 	p->seq_ops = ops;
 	p->state_size = state_size;
···
 	p = proc_create_reg(name, mode, &parent, data);
 	if (!p)
 		return NULL;
+	pde_force_lookup(p);
 	p->proc_fops = &proc_net_seq_fops;
 	p->seq_ops = ops;
 	p->state_size = state_size;
···
 	p = proc_create_reg(name, mode, &parent, data);
 	if (!p)
 		return NULL;
+	pde_force_lookup(p);
 	p->proc_fops = &proc_net_single_fops;
 	p->single_show = show;
 	return proc_register(parent, p);
···
 	p = proc_create_reg(name, mode, &parent, data);
 	if (!p)
 		return NULL;
+	pde_force_lookup(p);
 	p->proc_fops = &proc_net_single_fops;
 	p->single_show = show;
 	p->write = write;
+8 -3
fs/xfs/scrub/repair.c

···
 		if (!uuid_equal(&btblock->bb_u.s.bb_uuid,
 				&mp->m_sb.sb_meta_uuid))
 			goto out;
+		/*
+		 * Read verifiers can reference b_ops, so we set the pointer
+		 * here. If the verifier fails we'll reset the buffer state
+		 * to what it was before we touched the buffer.
+		 */
+		bp->b_ops = fab->buf_ops;
 		fab->buf_ops->verify_read(bp);
 		if (bp->b_error) {
+			bp->b_ops = NULL;
 			bp->b_error = 0;
 			goto out;
 		}

 		/*
 		 * Some read verifiers will (re)set b_ops, so we must be
-		 * careful not to blow away any such assignment.
+		 * careful not to change b_ops after running the verifier.
 		 */
-		if (!bp->b_ops)
-			bp->b_ops = fab->buf_ops;
 	}

 	/*
fs/xfs/xfs_buf.c

···
 }

 /*
+ * Set buffer ops on an unchecked buffer and validate it, if possible.
+ *
  * If the caller passed in an ops structure and the buffer doesn't have ops
  * assigned, set the ops and use them to verify the contents. If the contents
  * cannot be verified, we'll clear XBF_DONE. We assume the buffer has no
  * recorded errors and is already in XBF_DONE state.
+ *
+ * Under normal operations, every in-core buffer must have buffer ops assigned
+ * to them when the buffer is read in from disk so that we can validate the
+ * metadata.
+ *
+ * However, there are two scenarios where one can encounter in-core buffers
+ * that don't have buffer ops. The first is during log recovery of buffers on
+ * a V4 filesystem, though these buffers are purged at the end of recovery.
+ *
+ * The other is online repair, which tries to match arbitrary metadata blocks
+ * with btree types in order to find the root. If online repair doesn't match
+ * the buffer with /any/ btree type, the buffer remains in memory in DONE state
+ * with no ops, and a subsequent read_buf call from elsewhere will not set the
+ * ops. This function helps us fix this situation.
  */
 int
 xfs_buf_ensure_ops(
···
 		xfs_buf_ioerror(bp, -EIO);
 		bp->b_flags &= ~XBF_DONE;
 		xfs_buf_stale(bp);
-		if (bp->b_flags & XBF_ASYNC)
-			xfs_buf_ioend(bp);
+		xfs_buf_ioend(bp);
 		return -EIO;
 	}
include/linux/dcache.h

···
 struct dentry_stat_t {
 	long nr_dentry;
 	long nr_unused;
-	long age_limit;		/* age in seconds */
-	long want_pages;	/* pages requested by system */
-	long dummy[2];
+	long age_limit;		/* age in seconds */
+	long want_pages;	/* pages requested by system */
+	long nr_negative;	/* # of unused negative dentries */
+	long dummy;		/* Reserved for future use */
 };
 extern struct dentry_stat_t dentry_stat;
include/linux/fs.h

···
 	struct user_namespace *s_user_ns;

 	/*
-	 * Keep the lru lists last in the structure so they always sit on their
-	 * own individual cachelines.
+	 * The list_lru structure is essentially just a pointer to a table
+	 * of per-node lru lists, each of which has its own spinlock.
+	 * There is no need to put them into separate cachelines.
 	 */
-	struct list_lru		s_dentry_lru ____cacheline_aligned_in_smp;
-	struct list_lru		s_inode_lru ____cacheline_aligned_in_smp;
+	struct list_lru		s_dentry_lru;
+	struct list_lru		s_inode_lru;
 	struct rcu_head		rcu;
 	struct work_struct	destroy_work;
include/linux/sched.h

···
 	/* cg_list protected by css_set_lock and tsk->alloc_lock: */
 	struct list_head		cg_list;
 #endif
-#ifdef CONFIG_X86_RESCTRL
+#ifdef CONFIG_X86_CPU_RESCTRL
 	u32				closid;
 	u32				rmid;
 #endif
+1
include/linux/sched/coredump.h

···
 #define MMF_HUGE_ZERO_PAGE	23	/* mm has ever used the global huge zero page */
 #define MMF_DISABLE_THP		24	/* disable THP for all VMAs */
 #define MMF_OOM_VICTIM		25	/* mm is the oom victim */
+#define MMF_OOM_REAP_QUEUED	26	/* mm was queued for oom_reaper */
 #define MMF_DISABLE_THP_MASK	(1 << MMF_DISABLE_THP)

 #define MMF_INIT_MASK		(MMF_DUMPABLE_MASK | MMF_DUMP_FILTER_MASK |\
include/rdma/ib_verbs.h

···
 	const struct uapi_definition	*driver_def;
 	enum rdma_driver_id		driver_id;
+
 	/*
-	 * Provides synchronization between device unregistration and netlink
-	 * commands on a device. To be used only by core.
+	 * Positive refcount indicates that the device is currently
+	 * registered and cannot be unregistered.
 	 */
 	refcount_t refcount;
 	struct completion unreg_completion;
···
 int ib_check_mr_status(struct ib_mr *mr, u32 check_mask,
 		       struct ib_mr_status *mr_status);

+/**
+ * ib_device_try_get: Hold a registration lock
+ * device: The device to lock
+ *
+ * A device under an active registration lock cannot become unregistered. It
+ * is only possible to obtain a registration lock on a device that is fully
+ * registered, otherwise this function returns false.
+ *
+ * The registration lock is only necessary for actions which require the
+ * device to still be registered. Uses that only require the device pointer to
+ * be valid should use get_device(&ibdev->dev) to hold the memory.
+ */
+static inline bool ib_device_try_get(struct ib_device *dev)
+{
+	return refcount_inc_not_zero(&dev->refcount);
+}
+
+void ib_device_put(struct ib_device *device);
 struct net_device *ib_get_net_dev_by_params(struct ib_device *dev, u8 port,
 					    u16 pkey, const union ib_gid *gid,
 					    const struct sockaddr *addr);
+5 -1
include/sound/compress_driver.h

···
 	if (snd_BUG_ON(!stream))
 		return;

-	stream->runtime->state = SNDRV_PCM_STATE_SETUP;
+	if (stream->direction == SND_COMPRESS_PLAYBACK)
+		stream->runtime->state = SNDRV_PCM_STATE_SETUP;
+	else
+		stream->runtime->state = SNDRV_PCM_STATE_PREPARED;
+
 	wake_up(&stream->runtime->sleep);
 }
+1
include/sound/hda_codec.h

···
 	unsigned int response_reset:1;	/* controller was reset */
 	unsigned int in_reset:1;	/* during reset operation */
 	unsigned int no_response_fallback:1; /* don't fallback at RIRB error */
+	unsigned int bus_probing:1;	/* during probing process */

 	int primary_dig_out_type;	/* primary digital out PCM type */
 	unsigned int mixer_assigned;	/* codec addr for mixer name */
+6
include/uapi/linux/virtio_config.h

···
 #define VIRTIO_F_RING_PACKED		34

 /*
+ * This feature indicates that memory accesses by the driver and the
+ * device are ordered in a way described by the platform.
+ */
+#define VIRTIO_F_ORDER_PLATFORM		36
+
+/*
  * Does the device support Single Root I/O Virtualization?
  */
 #define VIRTIO_F_SR_IOV			37
init/Kconfig

···
 	  per default but can be enabled through passing psi=1 on the
 	  kernel commandline during boot.

+	  This feature adds some code to the task wakeup and sleep
+	  paths of the scheduler. The overhead is too low to affect
+	  common scheduling-intense workloads in practice (such as
+	  webservers, memcache), but it does show up in artificial
+	  scheduler stress tests, such as hackbench.
+
+	  If you are paranoid and not sure what the kernel will be
+	  used for, say Y.
+
+	  Say N if unsure.
+
 endmenu # "CPU/Task time and stats accounting"

 config CPU_ISOLATION
···
 	  PIDs controller is designed to stop this from happening.

 	  It should be noted that organisational operations (such as attaching
-	  to a cgroup hierarchy will *not* be blocked by the PIDs controller),
+	  to a cgroup hierarchy) will *not* be blocked by the PIDs controller,
 	  since the PIDs limit only affects a process's ability to fork, not to
 	  attach to a cgroup.
kernel/cpu.c

···
 #ifdef CONFIG_HOTPLUG_SMT
 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED;
-EXPORT_SYMBOL_GPL(cpu_smt_control);
-
-static bool cpu_smt_available __read_mostly;

 void __init cpu_smt_disable(bool force)
 {
···
 /*
  * The decision whether SMT is supported can only be done after the full
- * CPU identification. Called from architecture code before non boot CPUs
- * are brought up.
- */
-void __init cpu_smt_check_topology_early(void)
-{
-	if (!topology_smt_supported())
-		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
-}
-
-/*
- * If SMT was disabled by BIOS, detect it here, after the CPUs have been
- * brought online. This ensures the smt/l1tf sysfs entries are consistent
- * with reality. cpu_smt_available is set to true during the bringup of non
- * boot CPUs when a SMT sibling is detected. Note, this may overwrite
- * cpu_smt_control's previous setting.
+ * CPU identification. Called from architecture code.
  */
 void __init cpu_smt_check_topology(void)
 {
-	if (!cpu_smt_available)
+	if (!topology_smt_supported())
 		cpu_smt_control = CPU_SMT_NOT_SUPPORTED;
 }
···
 static inline bool cpu_smt_allowed(unsigned int cpu)
 {
-	if (topology_is_primary_thread(cpu))
+	if (cpu_smt_control == CPU_SMT_ENABLED)
 		return true;

-	/*
-	 * If the CPU is not a 'primary' thread and the booted_once bit is
-	 * set then the processor has SMT support. Store this information
-	 * for the late check of SMT support in cpu_smt_check_topology().
-	 */
-	if (per_cpu(cpuhp_state, cpu).booted_once)
-		cpu_smt_available = true;
-
-	if (cpu_smt_control == CPU_SMT_ENABLED)
+	if (topology_is_primary_thread(cpu))
 		return true;

 	/*
···
 		 */
 		cpuhp_offline_cpu_device(cpu);
 	}
-	if (!ret) {
+	if (!ret)
 		cpu_smt_control = ctrlval;
-		arch_smt_update();
-	}
 	cpu_maps_update_done();
 	return ret;
 }
···
 	cpu_maps_update_begin();
 	cpu_smt_control = CPU_SMT_ENABLED;
-	arch_smt_update();
 	for_each_present_cpu(cpu) {
 		/* Skip online CPUs and CPUs on offline nodes */
 		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
+7 -7
kernel/events/core.c

···
 		void __user *buffer, size_t *lenp,
 		loff_t *ppos)
 {
-	int ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
-
-	if (ret || !write)
-		return ret;
-
+	int ret;
+	int perf_cpu = sysctl_perf_cpu_time_max_percent;
 	/*
 	 * If throttling is disabled don't allow the write:
 	 */
-	if (sysctl_perf_cpu_time_max_percent == 100 ||
-	    sysctl_perf_cpu_time_max_percent == 0)
+	if (write && (perf_cpu == 100 || perf_cpu == 0))
 		return -EINVAL;

+	ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+	if (ret || !write)
+		return ret;
+
 	max_samples_per_tick = DIV_ROUND_UP(sysctl_perf_event_sample_rate, HZ);
 	perf_sample_period_ns = NSEC_PER_SEC / sysctl_perf_event_sample_rate;
kernel/futex.c

···
 	 * decrement the counter at queue_unlock() when some error has
 	 * occurred and we don't end up adding the task to the list.
 	 */
-	hb_waiters_inc(hb);
+	hb_waiters_inc(hb); /* implies smp_mb(); (A) */

 	q->lock_ptr = &hb->lock;

-	spin_lock(&hb->lock); /* implies smp_mb(); (A) */
+	spin_lock(&hb->lock);
 	return hb;
 }
···
 	 * and BUG when futex_unlock_pi() interleaves with this.
 	 *
 	 * Therefore acquire wait_lock while holding hb->lock, but drop the
-	 * latter before calling rt_mutex_start_proxy_lock(). This still fully
-	 * serializes against futex_unlock_pi() as that does the exact same
-	 * lock handoff sequence.
+	 * latter before calling __rt_mutex_start_proxy_lock(). This
+	 * interleaves with futex_unlock_pi() -- which does a similar lock
+	 * handoff -- such that the latter can observe the futex_q::pi_state
+	 * before __rt_mutex_start_proxy_lock() is done.
 	 */
 	raw_spin_lock_irq(&q.pi_state->pi_mutex.wait_lock);
 	spin_unlock(q.lock_ptr);
+	/*
+	 * __rt_mutex_start_proxy_lock() unconditionally enqueues the @rt_waiter
+	 * such that futex_unlock_pi() is guaranteed to observe the waiter when
+	 * it sees the futex_q::pi_state.
+	 */
 	ret = __rt_mutex_start_proxy_lock(&q.pi_state->pi_mutex, &rt_waiter, current);
 	raw_spin_unlock_irq(&q.pi_state->pi_mutex.wait_lock);

 	if (ret) {
 		if (ret == 1)
 			ret = 0;
-
-		spin_lock(q.lock_ptr);
-		goto no_block;
+		goto cleanup;
 	}
-
 	if (unlikely(to))
 		hrtimer_start_expires(&to->timer, HRTIMER_MODE_ABS);

 	ret = rt_mutex_wait_proxy_lock(&q.pi_state->pi_mutex, to, &rt_waiter);

+cleanup:
 	spin_lock(q.lock_ptr);
 	/*
-	 * If we failed to acquire the lock (signal/timeout), we must
+	 * If we failed to acquire the lock (deadlock/signal/timeout), we must
 	 * first acquire the hb->lock before removing the lock from the
-	 * rt_mutex waitqueue, such that we can keep the hb and rt_mutex
-	 * wait lists consistent.
+	 * rt_mutex waitqueue, such that we can keep the hb and rt_mutex wait
+	 * lists consistent.
 	 *
 	 * In particular; it is important that futex_unlock_pi() can not
 	 * observe this inconsistency.
···
 		 * there is no point where we hold neither; and therefore
 		 * wake_futex_pi() must observe a state consistent with what we
 		 * observed.
+		 *
+		 * In particular; this forces __rt_mutex_start_proxy() to
+		 * complete such that we're guaranteed to observe the
+		 * rt_waiter. Also see the WARN in wake_futex_pi().
 		 */
 		raw_spin_lock_irq(&pi_state->pi_mutex.wait_lock);
 		spin_unlock(&hb->lock);
+32 -5
kernel/locking/rtmutex.c

···
 	rt_mutex_set_owner(lock, NULL);
 }

+/**
+ * __rt_mutex_start_proxy_lock() - Start lock acquisition for another task
+ * @lock:		the rt_mutex to take
+ * @waiter:		the pre-initialized rt_mutex_waiter
+ * @task:		the task to prepare
+ *
+ * Starts the rt_mutex acquire; it enqueues the @waiter and does deadlock
+ * detection. It does not wait, see rt_mutex_wait_proxy_lock() for that.
+ *
+ * NOTE: does _NOT_ remove the @waiter on failure; must either call
+ * rt_mutex_wait_proxy_lock() or rt_mutex_cleanup_proxy_lock() after this.
+ *
+ * Returns:
+ *  0 - task blocked on lock
+ *  1 - acquired the lock for task, caller should wake it up
+ * <0 - error
+ *
+ * Special API call for PI-futex support.
+ */
 int __rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 			      struct rt_mutex_waiter *waiter,
 			      struct task_struct *task)
 {
 	int ret;
+
+	lockdep_assert_held(&lock->wait_lock);

 	if (try_to_take_rt_mutex(lock, task, NULL))
 		return 1;
···
 		ret = 0;
 	}

-	if (unlikely(ret))
-		remove_waiter(lock, waiter);
-
 	debug_rt_mutex_print_deadlock(waiter);

 	return ret;
···
  * @waiter:		the pre-initialized rt_mutex_waiter
  * @task:		the task to prepare
  *
+ * Starts the rt_mutex acquire; it enqueues the @waiter and does deadlock
+ * detection. It does not wait, see rt_mutex_wait_proxy_lock() for that.
+ *
+ * NOTE: unlike __rt_mutex_start_proxy_lock this _DOES_ remove the @waiter
+ * on failure.
+ *
  * Returns:
  *  0 - task blocked on lock
  *  1 - acquired the lock for task, caller should wake it up
  * <0 - error
  *
- * Special API call for FUTEX_REQUEUE_PI support.
+ * Special API call for PI-futex support.
  */
 int rt_mutex_start_proxy_lock(struct rt_mutex *lock,
 			      struct rt_mutex_waiter *waiter,
···
 	raw_spin_lock_irq(&lock->wait_lock);
 	ret = __rt_mutex_start_proxy_lock(lock, waiter, task);
+	if (unlikely(ret))
+		remove_waiter(lock, waiter);
 	raw_spin_unlock_irq(&lock->wait_lock);

 	return ret;
···
  * @lock:		the rt_mutex we were woken on
  * @waiter:		the pre-initialized rt_mutex_waiter
  *
- * Attempt to clean up after a failed rt_mutex_wait_proxy_lock().
+ * Attempt to clean up after a failed __rt_mutex_start_proxy_lock() or
+ * rt_mutex_wait_proxy_lock().
  *
  * Unless we acquired the lock; we're still enqueued on the wait-list and can
  * in fact still be granted ownership until we're removed. Therefore we can
+3 -1
kernel/relay.c

···
 	dentry = chan->cb->create_buf_file(tmpname, chan->parent,
 					   S_IRUSR, buf,
 					   &chan->is_global);
+	if (IS_ERR(dentry))
+		dentry = NULL;

 	kfree(tmpname);
···
 		dentry = chan->cb->create_buf_file(NULL, NULL,
 						   S_IRUSR, buf,
 						   &chan->is_global);
-		if (WARN_ON(dentry))
+		if (IS_ERR_OR_NULL(dentry))
 			goto free_buf;
 	}
+1
kernel/sched/fair.c

···

 #ifdef CONFIG_SCHED_SMT
 DEFINE_STATIC_KEY_FALSE(sched_smt_present);
+EXPORT_SYMBOL_GPL(sched_smt_present);

 static inline void set_idle_cores(int cpu, int val)
 {
+17 -4
kernel/sched/psi.c

···
  * sampling of the aggregate task states would be.
  */

+#include "../workqueue_internal.h"
 #include <linux/sched/loadavg.h>
 #include <linux/seq_file.h>
 #include <linux/proc_fs.h>
···
 		groupc->tasks[t]++;

 	write_seqcount_end(&groupc->seq);
-
-	if (!delayed_work_pending(&group->clock_work))
-		schedule_delayed_work(&group->clock_work, PSI_FREQ);
 }

 static struct psi_group *iterate_groups(struct task_struct *task, void **iter)
···
 {
 	int cpu = task_cpu(task);
 	struct psi_group *group;
+	bool wake_clock = true;
 	void *iter = NULL;

 	if (!task->pid)
···
 	task->psi_flags &= ~clear;
 	task->psi_flags |= set;

-	while ((group = iterate_groups(task, &iter)))
+	/*
+	 * Periodic aggregation shuts off if there is a period of no
+	 * task changes, so we wake it back up if necessary. However,
+	 * don't do this if the task change is the aggregation worker
+	 * itself going to sleep, or we'll ping-pong forever.
+	 */
+	if (unlikely((clear & TSK_RUNNING) &&
+		     (task->flags & PF_WQ_WORKER) &&
+		     wq_worker_last_func(task) == psi_update_work))
+		wake_clock = false;
+
+	while ((group = iterate_groups(task, &iter))) {
 		psi_group_change(group, cpu, clear, set);
+		if (wake_clock && !delayed_work_pending(&group->clock_work))
+			schedule_delayed_work(&group->clock_work, PSI_FREQ);
+	}
 }

 void psi_memstall_tick(struct task_struct *task, int cpu)
+59-4
kernel/signal.c
···688688}689689EXPORT_SYMBOL_GPL(dequeue_signal);690690691691+static int dequeue_synchronous_signal(kernel_siginfo_t *info)692692+{693693+ struct task_struct *tsk = current;694694+ struct sigpending *pending = &tsk->pending;695695+ struct sigqueue *q, *sync = NULL;696696+697697+ /*698698+ * Might a synchronous signal be in the queue?699699+ */700700+ if (!((pending->signal.sig[0] & ~tsk->blocked.sig[0]) & SYNCHRONOUS_MASK))701701+ return 0;702702+703703+ /*704704+ * Return the first synchronous signal in the queue.705705+ */706706+ list_for_each_entry(q, &pending->list, list) {707707+ /* Synchronous signals have a postive si_code */708708+ if ((q->info.si_code > SI_USER) &&709709+ (sigmask(q->info.si_signo) & SYNCHRONOUS_MASK)) {710710+ sync = q;711711+ goto next;712712+ }713713+ }714714+ return 0;715715+next:716716+ /*717717+ * Check if there is another siginfo for the same signal.718718+ */719719+ list_for_each_entry_continue(q, &pending->list, list) {720720+ if (q->info.si_signo == sync->info.si_signo)721721+ goto still_pending;722722+ }723723+724724+ sigdelset(&pending->signal, sync->info.si_signo);725725+ recalc_sigpending();726726+still_pending:727727+ list_del_init(&sync->list);728728+ copy_siginfo(info, &sync->info);729729+ __sigqueue_free(sync);730730+ return info->si_signo;731731+}732732+691733/*692734 * Tell a process that it has a new active signal..693735 *···1099105711001058 result = TRACE_SIGNAL_DELIVERED;11011059 /*11021102- * Skip useless siginfo allocation for SIGKILL SIGSTOP,11031103- * and kernel threads.10601060+ * Skip useless siginfo allocation for SIGKILL and kernel threads.11041061 */11051105- if (sig_kernel_only(sig) || (t->flags & PF_KTHREAD))10621062+ if ((sig == SIGKILL) || (t->flags & PF_KTHREAD))11061063 goto out_set;1107106411081065 /*···24352394 goto relock;24362395 }2437239623972397+ /* Has this task already been marked for death? 
*/23982398+ ksig->info.si_signo = signr = SIGKILL;23992399+ if (signal_group_exit(signal))24002400+ goto fatal;24012401+24382402 for (;;) {24392403 struct k_sigaction *ka;24402404···24532407 goto relock;24542408 }2455240924562456- signr = dequeue_signal(current, &current->blocked, &ksig->info);24102410+ /*24112411+ * Signals generated by the execution of an instruction24122412+ * need to be delivered before any other pending signals24132413+ * so that the instruction pointer in the signal stack24142414+ * frame points to the faulting instruction.24152415+ */24162416+ signr = dequeue_synchronous_signal(&ksig->info);24172417+ if (!signr)24182418+ signr = dequeue_signal(current, &current->blocked, &ksig->info);2457241924582420 if (!signr)24592421 break; /* will return 0 */···25432489 continue;25442490 }2545249124922492+ fatal:25462493 spin_unlock_irq(&sighand->siglock);2547249425482495 /*
-2
kernel/smp.c
···584584 num_nodes, (num_nodes > 1 ? "s" : ""),585585 num_cpus, (num_cpus > 1 ? "s" : ""));586586587587- /* Final decision about SMT support */588588- cpu_smt_check_topology();589587 /* Any cleanup work */590588 smp_cpus_done(setup_max_cpus);591589}
···55 * Copyright (C) IBM Corporation, 2010-201266 * Author: Srikar Dronamraju <srikar@linux.vnet.ibm.com>77 */88-#define pr_fmt(fmt) "trace_kprobe: " fmt88+#define pr_fmt(fmt) "trace_uprobe: " fmt991010#include <linux/ctype.h>1111#include <linux/module.h>···160160 if (ret >= 0) {161161 if (ret == maxlen)162162 dst[ret - 1] = '\0';163163+ else164164+ /*165165+ * Include the terminating null byte. In this case it166166+ * was copied by strncpy_from_user but not accounted167167+ * for in ret.168168+ */169169+ ret++;163170 *(u32 *)dest = make_data_loc(ret, (void *)dst - base);164171 }165172
+23
kernel/workqueue.c
···910910}911911912912/**913913+ * wq_worker_last_func - retrieve worker's last work function914914+ *915915+ * Determine the last function a worker executed. This is called from916916+ * the scheduler to get a worker's last known identity.917917+ *918918+ * CONTEXT:919919+ * spin_lock_irq(rq->lock)920920+ *921921+ * Return:922922+ * The last work function %current executed as a worker, NULL if it923923+ * hasn't executed any work yet.924924+ */925925+work_func_t wq_worker_last_func(struct task_struct *task)926926+{927927+ struct worker *worker = kthread_data(task);928928+929929+ return worker->last_func;930930+}931931+932932+/**913933 * worker_set_flags - set worker flags and adjust nr_running accordingly914934 * @worker: self915935 * @flags: flags to set···22032183 /* clear cpu intensive status */22042184 if (unlikely(cpu_intensive))22052185 worker_clr_flags(worker, WORKER_CPU_INTENSIVE);21862186+21872187+ /* tag the worker for identification in schedule() */21882188+ worker->last_func = worker->current_func;2206218922072190 /* we're done with it, release */22082191 hash_del(&worker->hentry);
+5-1
kernel/workqueue_internal.h
···53535454 /* used only by rescuers to point to the target workqueue */5555 struct workqueue_struct *rescue_wq; /* I: the workqueue to rescue */5656+5757+ /* used by the scheduler to determine a worker's last known identity */5858+ work_func_t last_func;5659};57605861/**···70677168/*7269 * Scheduler hooks for concurrency managed workqueue. Only to be used from7373- * sched/core.c and workqueue.c.7070+ * sched/ and workqueue.c.7471 */7572void wq_worker_waking_up(struct task_struct *task, int cpu);7673struct task_struct *wq_worker_sleeping(struct task_struct *task);7474+work_func_t wq_worker_last_func(struct task_struct *task);77757876#endif /* _KERNEL_WORKQUEUE_INTERNAL_H */
···55UBSAN_SANITIZE_tags.o := n66KCOV_INSTRUMENT := n7788+CFLAGS_REMOVE_common.o = -pg89CFLAGS_REMOVE_generic.o = -pg910# Function splitter causes unnecessary splits in __asan_load1/__asan_store11011# see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+2-1
mm/memory-failure.c
···372372 if (fail || tk->addr_valid == 0) {373373 pr_err("Memory failure: %#lx: forcibly killing %s:%d because of failure to unmap corrupted page\n",374374 pfn, tk->tsk->comm, tk->tsk->pid);375375- force_sig(SIGKILL, tk->tsk);375375+ do_send_sig_info(SIGKILL, SEND_SIG_PRIV,376376+ tk->tsk, PIDTYPE_PID);376377 }377378378379 /*
+27-35
mm/memory_hotplug.c
···12331233bool is_mem_section_removable(unsigned long start_pfn, unsigned long nr_pages)12341234{12351235 struct page *page = pfn_to_page(start_pfn);12361236- struct page *end_page = page + nr_pages;12361236+ unsigned long end_pfn = min(start_pfn + nr_pages, zone_end_pfn(page_zone(page)));12371237+ struct page *end_page = pfn_to_page(end_pfn);1237123812381239 /* Check the starting page of each pageblock within the range */12391240 for (; page < end_page; page = next_active_pageblock(page)) {···12741273 i++;12751274 if (i == MAX_ORDER_NR_PAGES || pfn + i >= end_pfn)12761275 continue;12761276+ /* Check if we got outside of the zone */12771277+ if (zone && !zone_spans_pfn(zone, pfn + i))12781278+ return 0;12771279 page = pfn_to_page(pfn + i);12781280 if (zone && page_zone(page) != zone)12791281 return 0;···13051301static unsigned long scan_movable_pages(unsigned long start, unsigned long end)13061302{13071303 unsigned long pfn;13081308- struct page *page;13041304+13091305 for (pfn = start; pfn < end; pfn++) {13101310- if (pfn_valid(pfn)) {13111311- page = pfn_to_page(pfn);13121312- if (PageLRU(page))13131313- return pfn;13141314- if (__PageMovable(page))13151315- return pfn;13161316- if (PageHuge(page)) {13171317- if (hugepage_migration_supported(page_hstate(page)) &&13181318- page_huge_active(page))13191319- return pfn;13201320- else13211321- pfn = round_up(pfn + 1,13221322- 1 << compound_order(page)) - 1;13231323- }13241324- }13061306+ struct page *page, *head;13071307+ unsigned long skip;13081308+13091309+ if (!pfn_valid(pfn))13101310+ continue;13111311+ page = pfn_to_page(pfn);13121312+ if (PageLRU(page))13131313+ return pfn;13141314+ if (__PageMovable(page))13151315+ return pfn;13161316+13171317+ if (!PageHuge(page))13181318+ continue;13191319+ head = compound_head(page);13201320+ if (hugepage_migration_supported(page_hstate(head)) &&13211321+ page_huge_active(head))13221322+ return pfn;13231323+ skip = (1 << compound_order(head)) - (page - head);13241324+ pfn 
+= skip - 1;13251325 }13261326 return 0;13271327}···13521344{13531345 unsigned long pfn;13541346 struct page *page;13551355- int not_managed = 0;13561347 int ret = 0;13571348 LIST_HEAD(source);13581349···13991392 else14001393 ret = isolate_movable_page(page, ISOLATE_UNEVICTABLE);14011394 if (!ret) { /* Success */14021402- put_page(page);14031395 list_add_tail(&page->lru, &source);14041396 if (!__PageMovable(page))14051397 inc_node_page_state(page, NR_ISOLATED_ANON +···14071401 } else {14081402 pr_warn("failed to isolate pfn %lx\n", pfn);14091403 dump_page(page, "isolation failed");14101410- put_page(page);14111411- /* Because we don't have big zone->lock. we should14121412- check this again here. */14131413- if (page_count(page)) {14141414- not_managed++;14151415- ret = -EBUSY;14161416- break;14171417- }14181404 }14051405+ put_page(page);14191406 }14201407 if (!list_empty(&source)) {14211421- if (not_managed) {14221422- putback_movable_pages(&source);14231423- goto out;14241424- }14251425-14261408 /* Allocate a new page from the nearest neighbor node */14271409 ret = migrate_pages(&source, new_node_page, NULL, 0,14281410 MIGRATE_SYNC, MR_MEMORY_HOTPLUG);···14231429 putback_movable_pages(&source);14241430 }14251431 }14261426-out:14321432+14271433 return ret;14281434}14291435···15701576 we assume this for now. .*/15711577 if (!test_pages_in_a_zone(start_pfn, end_pfn, &valid_start,15721578 &valid_end)) {15731573- mem_hotplug_done();15741579 ret = -EINVAL;15751580 reason = "multizone range";15761581 goto failed_removal;···15841591 MIGRATE_MOVABLE,15851592 SKIP_HWPOISON | REPORT_FAILURE);15861593 if (ret) {15871587- mem_hotplug_done();15881594 reason = "failure to isolate range";15891595 goto failed_removal;15901596 }
+5-7
mm/migrate.c
···709709 /* Simple case, sync compaction */710710 if (mode != MIGRATE_ASYNC) {711711 do {712712- get_bh(bh);713712 lock_buffer(bh);714713 bh = bh->b_this_page;715714···719720720721 /* async case, we cannot block on lock_buffer so use trylock_buffer */721722 do {722722- get_bh(bh);723723 if (!trylock_buffer(bh)) {724724 /*725725 * We failed to lock the buffer and cannot stall in726726 * async migration. Release the taken locks727727 */728728 struct buffer_head *failed_bh = bh;729729- put_bh(failed_bh);730729 bh = head;731730 while (bh != failed_bh) {732731 unlock_buffer(bh);733733- put_bh(bh);734732 bh = bh->b_this_page;735733 }736734 return false;···814818 bh = head;815819 do {816820 unlock_buffer(bh);817817- put_bh(bh);818821 bh = bh->b_this_page;819822820823 } while (bh != head);···11301135 * If migration is successful, decrease refcount of the newpage11311136 * which will not free the page because new page owner increased11321137 * refcounter. As well, if it is LRU page, add the page to LRU11331133- * list in here.11381138+ * list in here. Use the old state of the isolated source page to11391139+ * determine if we migrated a LRU page. newpage was already unlocked11401140+ * and possibly modified by its owner - don't rely on the page11411141+ * state.11341142 */11351143 if (rc == MIGRATEPAGE_SUCCESS) {11361136- if (unlikely(__PageMovable(newpage)))11441144+ if (unlikely(!is_lru))11371145 put_page(newpage);11381146 else11391147 putback_lru_page(newpage);
+10-2
mm/oom_kill.c
···647647648648static void wake_oom_reaper(struct task_struct *tsk)649649{650650- /* tsk is already queued? */651651- if (tsk == oom_reaper_list || tsk->oom_reaper_list)650650+ /* mm is already queued? */651651+ if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))652652 return;653653654654 get_task_struct(tsk);···975975 * still freeing memory.976976 */977977 read_lock(&tasklist_lock);978978+979979+ /*980980+ * The task 'p' might have already exited before reaching here. The981981+ * put_task_struct() will free task_struct 'p' while the loop still try982982+ * to access the field of 'p', so, get an extra reference.983983+ */984984+ get_task_struct(p);978985 for_each_thread(p, t) {979986 list_for_each_entry(child, &t->children, sibling) {980987 unsigned int child_points;···1001994 }1002995 }1003996 }997997+ put_task_struct(p);1004998 read_unlock(&tasklist_lock);100599910061000 /*
-12
mm/page_alloc.c
···57015701 cond_resched();57025702 }57035703 }57045704-#ifdef CONFIG_SPARSEMEM57055705- /*57065706- * If the zone does not span the rest of the section then57075707- * we should at least initialize those pages. Otherwise we57085708- * could blow up on a poisoned page in some paths which depend57095709- * on full sections being initialized (e.g. memory hotplug).57105710- */57115711- while (end_pfn % PAGES_PER_SECTION) {57125712- __init_single_page(pfn_to_page(end_pfn), end_pfn, zone, nid);57135713- end_pfn++;57145714- }57155715-#endif57165704}5717570557185706#ifdef CONFIG_ZONE_DEVICE
+3
net/batman-adv/bat_v_elp.c
···104104105105 ret = cfg80211_get_station(real_netdev, neigh->addr, &sinfo);106106107107+ /* free the TID stats immediately */108108+ cfg80211_sinfo_release_content(&sinfo);109109+107110 dev_put(real_netdev);108111 if (ret == -ENOENT) {109112 /* Node is not associated anymore! It would be
+3-2
net/batman-adv/hard-interface.c
···2020#include "main.h"21212222#include <linux/atomic.h>2323-#include <linux/bug.h>2423#include <linux/byteorder/generic.h>2524#include <linux/errno.h>2625#include <linux/gfp.h>···178179 parent_dev = __dev_get_by_index((struct net *)parent_net,179180 dev_get_iflink(net_dev));180181 /* if we got a NULL parent_dev there is something broken.. */181181- if (WARN(!parent_dev, "Cannot find parent device"))182182+ if (!parent_dev) {183183+ pr_err("Cannot find parent device\n");182184 return false;185185+ }183186184187 if (batadv_mutual_parents(net_dev, net, parent_dev, parent_net))185188 return false;
···2293229322942294 xt_compat_lock(NFPROTO_BRIDGE);2295229522962296- ret = xt_compat_init_offsets(NFPROTO_BRIDGE, tmp.nentries);22972297- if (ret < 0)22982298- goto out_unlock;22962296+ if (tmp.nentries) {22972297+ ret = xt_compat_init_offsets(NFPROTO_BRIDGE, tmp.nentries);22982298+ if (ret < 0)22992299+ goto out_unlock;23002300+ }23012301+22992302 ret = compat_copy_entries(entries_tmp, tmp.entries_size, &state);23002303 if (ret < 0)23012304 goto out_unlock;
+3
net/core/dev.c
···87128712 set_bit(__LINK_STATE_PRESENT, &dev->state);87138713 set_bit(__LINK_STATE_START, &dev->state);8714871487158715+ /* napi_busy_loop stats accounting wants this */87168716+ dev_net_set(dev, &init_net);87178717+87158718 /* Note : We dont allocate pcpu_refcnt for dummy devices,87168719 * because users of this 'device' dont need to change87178720 * its refcount.
+2
net/core/filter.c
···41124112 /* Only some socketops are supported */41134113 switch (optname) {41144114 case SO_RCVBUF:41154115+ val = min_t(u32, val, sysctl_rmem_max);41154116 sk->sk_userlocks |= SOCK_RCVBUF_LOCK;41164117 sk->sk_rcvbuf = max_t(int, val * 2, SOCK_MIN_RCVBUF);41174118 break;41184119 case SO_SNDBUF:41204120+ val = min_t(u32, val, sysctl_wmem_max);41194121 sk->sk_userlocks |= SOCK_SNDBUF_LOCK;41204122 sk->sk_sndbuf = max_t(int, val * 2, SOCK_MIN_SNDBUF);41214123 break;
+1-2
net/core/skmsg.c
···545545 struct sk_psock *psock = container_of(gc, struct sk_psock, gc);546546547547 /* No sk_callback_lock since already detached. */548548- if (psock->parser.enabled)549549- strp_done(&psock->parser.strp);548548+ strp_done(&psock->parser.strp);550549551550 cancel_work_sync(&psock->work);552551
···205205 rtnl_unlock();206206}207207208208+static struct lock_class_key dsa_master_addr_list_lock_key;209209+208210int dsa_master_setup(struct net_device *dev, struct dsa_port *cpu_dp)209211{210212 int ret;···220218 wmb();221219222220 dev->dsa_ptr = cpu_dp;221221+ lockdep_set_class(&dev->addr_list_lock,222222+ &dsa_master_addr_list_lock_key);223223224224 ret = dsa_master_ethtool_setup(dev);225225 if (ret)
+10-7
net/dsa/slave.c
···140140static void dsa_slave_change_rx_flags(struct net_device *dev, int change)141141{142142 struct net_device *master = dsa_slave_to_master(dev);143143-144144- if (change & IFF_ALLMULTI)145145- dev_set_allmulti(master, dev->flags & IFF_ALLMULTI ? 1 : -1);146146- if (change & IFF_PROMISC)147147- dev_set_promiscuity(master, dev->flags & IFF_PROMISC ? 1 : -1);143143+ if (dev->flags & IFF_UP) {144144+ if (change & IFF_ALLMULTI)145145+ dev_set_allmulti(master,146146+ dev->flags & IFF_ALLMULTI ? 1 : -1);147147+ if (change & IFF_PROMISC)148148+ dev_set_promiscuity(master,149149+ dev->flags & IFF_PROMISC ? 1 : -1);150150+ }148151}149152150153static void dsa_slave_set_rx_mode(struct net_device *dev)···642639 int ret;643640644641 /* Port's PHY and MAC both need to be EEE capable */645645- if (!dev->phydev && !dp->pl)642642+ if (!dev->phydev || !dp->pl)646643 return -ENODEV;647644648645 if (!ds->ops->set_mac_eee)···662659 int ret;663660664661 /* Port's PHY and MAC both need to be EEE capable */665665- if (!dev->phydev && !dp->pl)662662+ if (!dev->phydev || !dp->pl)666663 return -ENODEV;667664668665 if (!ds->ops->get_mac_eee)
···10071007 }1008100810091009 if (nf_ct_key_equal(h, tuple, zone, net)) {10101010+ /* Tuple is taken already, so caller will need to find10111011+ * a new source port to use.10121012+ *10131013+ * Only exception:10141014+ * If the *original tuples* are identical, then both10151015+ * conntracks refer to the same flow.10161016+ * This is a rare situation, it can occur e.g. when10171017+ * more than one UDP packet is sent from same socket10181018+ * in different threads.10191019+ *10201020+ * Let nf_ct_resolve_clash() deal with this later.10211021+ */10221022+ if (nf_ct_tuple_equal(&ignored_conntrack->tuplehash[IP_CT_DIR_ORIGINAL].tuple,10231023+ &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple))10241024+ continue;10251025+10101026 NF_CT_STAT_INC_ATOMIC(net, found);10111027 rcu_read_unlock();10121028 return 1;
···8484 }8585}86868787+static size_t fa_index(struct flex_array *fa, void *elem, size_t count)8888+{8989+ size_t index = 0;9090+9191+ while (count--) {9292+ if (elem == flex_array_get(fa, index))9393+ break;9494+ index++;9595+ }9696+9797+ return index;9898+}9999+87100/* Migrates chunks from stream queues to new stream queues if needed,88101 * but not across associations. Also, removes those chunks to streams89102 * higher than the new max.···160147161148 if (stream->out) {162149 fa_copy(out, stream->out, 0, min(outcnt, stream->outcnt));150150+ if (stream->out_curr) {151151+ size_t index = fa_index(stream->out, stream->out_curr,152152+ stream->outcnt);153153+154154+ BUG_ON(index == stream->outcnt);155155+ stream->out_curr = flex_array_get(out, index);156156+ }163157 fa_free(stream->out);164158 }165159
+10-1
net/smc/af_smc.c
···1505150515061506 smc = smc_sk(sk);15071507 lock_sock(sk);15081508+ if (sk->sk_state == SMC_CLOSED && (sk->sk_shutdown & RCV_SHUTDOWN)) {15091509+ /* socket was connected before, no more data to read */15101510+ rc = 0;15111511+ goto out;15121512+ }15081513 if ((sk->sk_state == SMC_INIT) ||15091514 (sk->sk_state == SMC_LISTEN) ||15101515 (sk->sk_state == SMC_CLOSED))···1845184018461841 smc = smc_sk(sk);18471842 lock_sock(sk);18481848-18431843+ if (sk->sk_state == SMC_CLOSED && (sk->sk_shutdown & RCV_SHUTDOWN)) {18441844+ /* socket was connected before, no more data to read */18451845+ rc = 0;18461846+ goto out;18471847+ }18491848 if (sk->sk_state == SMC_INIT ||18501849 sk->sk_state == SMC_LISTEN ||18511850 sk->sk_state == SMC_CLOSED)
···941941EXPORT_SYMBOL(dlci_ioctl_set);942942943943static long sock_do_ioctl(struct net *net, struct socket *sock,944944- unsigned int cmd, unsigned long arg,945945- unsigned int ifreq_size)944944+ unsigned int cmd, unsigned long arg)946945{947946 int err;948947 void __user *argp = (void __user *)arg;···967968 } else {968969 struct ifreq ifr;969970 bool need_copyout;970970- if (copy_from_user(&ifr, argp, ifreq_size))971971+ if (copy_from_user(&ifr, argp, sizeof(struct ifreq)))971972 return -EFAULT;972973 err = dev_ioctl(net, cmd, &ifr, &need_copyout);973974 if (!err && need_copyout)974974- if (copy_to_user(argp, &ifr, ifreq_size))975975+ if (copy_to_user(argp, &ifr, sizeof(struct ifreq)))975976 return -EFAULT;976977 }977978 return err;···10701071 err = open_related_ns(&net->ns, get_net_ns);10711072 break;10721073 default:10731073- err = sock_do_ioctl(net, sock, cmd, arg,10741074- sizeof(struct ifreq));10741074+ err = sock_do_ioctl(net, sock, cmd, arg);10751075 break;10761076 }10771077 return err;···27782780 int err;2779278127802782 set_fs(KERNEL_DS);27812781- err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv,27822782- sizeof(struct compat_ifreq));27832783+ err = sock_do_ioctl(net, sock, cmd, (unsigned long)&ktv);27832784 set_fs(old_fs);27842785 if (!err)27852786 err = compat_put_timeval(&ktv, up);···27942797 int err;2795279827962799 set_fs(KERNEL_DS);27972797- err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts,27982798- sizeof(struct compat_ifreq));28002800+ err = sock_do_ioctl(net, sock, cmd, (unsigned long)&kts);27992801 set_fs(old_fs);28002802 if (!err)28012803 err = compat_put_timespec(&kts, up);···29902994 return dev_ioctl(net, cmd, &ifreq, NULL);29912995}2992299629972997+static int compat_ifreq_ioctl(struct net *net, struct socket *sock,29982998+ unsigned int cmd,29992999+ struct compat_ifreq __user *uifr32)30003000+{30013001+ struct ifreq __user *uifr;30023002+ int err;30033003+30043004+ /* Handle the fact that while struct ifreq has the same 
*layout* on30053005+ * 32/64 for everything but ifreq::ifru_ifmap and ifreq::ifru_data,30063006+ * which are handled elsewhere, it still has different *size* due to30073007+ * ifreq::ifru_ifmap (which is 16 bytes on 32 bit, 24 bytes on 64-bit,30083008+ * resulting in struct ifreq being 32 and 40 bytes respectively).30093009+ * As a result, if the struct happens to be at the end of a page and30103010+ * the next page isn't readable/writable, we get a fault. To prevent30113011+ * that, copy back and forth to the full size.30123012+ */30133013+30143014+ uifr = compat_alloc_user_space(sizeof(*uifr));30153015+ if (copy_in_user(uifr, uifr32, sizeof(*uifr32)))30163016+ return -EFAULT;30173017+30183018+ err = sock_do_ioctl(net, sock, cmd, (unsigned long)uifr);30193019+30203020+ if (!err) {30213021+ switch (cmd) {30223022+ case SIOCGIFFLAGS:30233023+ case SIOCGIFMETRIC:30243024+ case SIOCGIFMTU:30253025+ case SIOCGIFMEM:30263026+ case SIOCGIFHWADDR:30273027+ case SIOCGIFINDEX:30283028+ case SIOCGIFADDR:30293029+ case SIOCGIFBRDADDR:30303030+ case SIOCGIFDSTADDR:30313031+ case SIOCGIFNETMASK:30323032+ case SIOCGIFPFLAGS:30333033+ case SIOCGIFTXQLEN:30343034+ case SIOCGMIIPHY:30353035+ case SIOCGMIIREG:30363036+ case SIOCGIFNAME:30373037+ if (copy_in_user(uifr32, uifr, sizeof(*uifr32)))30383038+ err = -EFAULT;30393039+ break;30403040+ }30413041+ }30423042+ return err;30433043+}30443044+29933045static int compat_sioc_ifmap(struct net *net, unsigned int cmd,29943046 struct compat_ifreq __user *uifr32)29953047{···31533109 }3154311031553111 set_fs(KERNEL_DS);31563156- ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r,31573157- sizeof(struct compat_ifreq));31123112+ ret = sock_do_ioctl(net, sock, cmd, (unsigned long) r);31583113 set_fs(old_fs);3159311431603115out:···32533210 case SIOCSIFTXQLEN:32543211 case SIOCBRADDIF:32553212 case SIOCBRDELIF:32133213+ case SIOCGIFNAME:32563214 case SIOCSIFNAME:32573215 case SIOCGMIIPHY:32583216 case SIOCGMIIREG:32593217 case 
SIOCSMIIREG:32603260- case SIOCSARP:32613261- case SIOCGARP:32623262- case SIOCDARP:32633263- case SIOCATMARK:32643218 case SIOCBONDENSLAVE:32653219 case SIOCBONDRELEASE:32663220 case SIOCBONDSETHWADDR:32673221 case SIOCBONDCHANGEACTIVE:32683268- case SIOCGIFNAME:32693269- return sock_do_ioctl(net, sock, cmd, arg,32703270- sizeof(struct compat_ifreq));32223222+ return compat_ifreq_ioctl(net, sock, cmd, argp);32233223+32243224+ case SIOCSARP:32253225+ case SIOCGARP:32263226+ case SIOCDARP:32273227+ case SIOCATMARK:32283228+ return sock_do_ioctl(net, sock, cmd, arg);32713229 }3272323032733231 return -ENOIOCTLCMD;
+99-6
net/sunrpc/xprtrdma/svc_rdma_sendto.c
···537537 DMA_TO_DEVICE);538538}539539540540+/* If the xdr_buf has more elements than the device can541541+ * transmit in a single RDMA Send, then the reply will542542+ * have to be copied into a bounce buffer.543543+ */544544+static bool svc_rdma_pull_up_needed(struct svcxprt_rdma *rdma,545545+ struct xdr_buf *xdr,546546+ __be32 *wr_lst)547547+{548548+ int elements;549549+550550+ /* xdr->head */551551+ elements = 1;552552+553553+ /* xdr->pages */554554+ if (!wr_lst) {555555+ unsigned int remaining;556556+ unsigned long pageoff;557557+558558+ pageoff = xdr->page_base & ~PAGE_MASK;559559+ remaining = xdr->page_len;560560+ while (remaining) {561561+ ++elements;562562+ remaining -= min_t(u32, PAGE_SIZE - pageoff,563563+ remaining);564564+ pageoff = 0;565565+ }566566+ }567567+568568+ /* xdr->tail */569569+ if (xdr->tail[0].iov_len)570570+ ++elements;571571+572572+ /* assume 1 SGE is needed for the transport header */573573+ return elements >= rdma->sc_max_send_sges;574574+}575575+576576+/* The device is not capable of sending the reply directly.577577+ * Assemble the elements of @xdr into the transport header578578+ * buffer.579579+ */580580+static int svc_rdma_pull_up_reply_msg(struct svcxprt_rdma *rdma,581581+ struct svc_rdma_send_ctxt *ctxt,582582+ struct xdr_buf *xdr, __be32 *wr_lst)583583+{584584+ unsigned char *dst, *tailbase;585585+ unsigned int taillen;586586+587587+ dst = ctxt->sc_xprt_buf;588588+ dst += ctxt->sc_sges[0].length;589589+590590+ memcpy(dst, xdr->head[0].iov_base, xdr->head[0].iov_len);591591+ dst += xdr->head[0].iov_len;592592+593593+ tailbase = xdr->tail[0].iov_base;594594+ taillen = xdr->tail[0].iov_len;595595+ if (wr_lst) {596596+ u32 xdrpad;597597+598598+ xdrpad = xdr_padsize(xdr->page_len);599599+ if (taillen && xdrpad) {600600+ tailbase += xdrpad;601601+ taillen -= xdrpad;602602+ }603603+ } else {604604+ unsigned int len, remaining;605605+ unsigned long pageoff;606606+ struct page **ppages;607607+608608+ ppages = xdr->pages + 
(xdr->page_base >> PAGE_SHIFT);609609+ pageoff = xdr->page_base & ~PAGE_MASK;610610+ remaining = xdr->page_len;611611+ while (remaining) {612612+ len = min_t(u32, PAGE_SIZE - pageoff, remaining);613613+614614+ memcpy(dst, page_address(*ppages), len);615615+ remaining -= len;616616+ dst += len;617617+ pageoff = 0;618618+ }619619+ }620620+621621+ if (taillen)622622+ memcpy(dst, tailbase, taillen);623623+624624+ ctxt->sc_sges[0].length += xdr->len;625625+ ib_dma_sync_single_for_device(rdma->sc_pd->device,626626+ ctxt->sc_sges[0].addr,627627+ ctxt->sc_sges[0].length,628628+ DMA_TO_DEVICE);629629+630630+ return 0;631631+}632632+540633/* svc_rdma_map_reply_msg - Map the buffer holding RPC message541634 * @rdma: controlling transport542635 * @ctxt: send_ctxt for the Send WR···652559 u32 xdr_pad;653560 int ret;654561655655- if (++ctxt->sc_cur_sge_no >= rdma->sc_max_send_sges)656656- return -EIO;562562+ if (svc_rdma_pull_up_needed(rdma, xdr, wr_lst))563563+ return svc_rdma_pull_up_reply_msg(rdma, ctxt, xdr, wr_lst);564564+565565+ ++ctxt->sc_cur_sge_no;657566 ret = svc_rdma_dma_map_buf(rdma, ctxt,658567 xdr->head[0].iov_base,659568 xdr->head[0].iov_len);···686591 while (remaining) {687592 len = min_t(u32, PAGE_SIZE - page_off, remaining);688593689689- if (++ctxt->sc_cur_sge_no >= rdma->sc_max_send_sges)690690- return -EIO;594594+ ++ctxt->sc_cur_sge_no;691595 ret = svc_rdma_dma_map_page(rdma, ctxt, *ppages++,692596 page_off, len);693597 if (ret < 0)···700606 len = xdr->tail[0].iov_len;701607tail:702608 if (len) {703703- if (++ctxt->sc_cur_sge_no >= rdma->sc_max_send_sges)704704- return -EIO;609609+ ++ctxt->sc_cur_sge_no;705610 ret = svc_rdma_dma_map_buf(rdma, ctxt, base, len);706611 if (ret < 0)707612 return ret;
+3-6
net/sunrpc/xprtrdma/svc_rdma_transport.c
···419419 /* Transport header, head iovec, tail iovec */420420 newxprt->sc_max_send_sges = 3;421421 /* Add one SGE per page list entry */422422- newxprt->sc_max_send_sges += svcrdma_max_req_size / PAGE_SIZE;423423- if (newxprt->sc_max_send_sges > dev->attrs.max_send_sge) {424424- pr_err("svcrdma: too few Send SGEs available (%d needed)\n",425425- newxprt->sc_max_send_sges);426426- goto errout;427427- }422422+ newxprt->sc_max_send_sges += (svcrdma_max_req_size / PAGE_SIZE) + 1;423423+ if (newxprt->sc_max_send_sges > dev->attrs.max_send_sge)424424+ newxprt->sc_max_send_sges = dev->attrs.max_send_sge;428425 newxprt->sc_max_req_size = svcrdma_max_req_size;429426 newxprt->sc_max_requests = svcrdma_max_requests;430427 newxprt->sc_max_bc_requests = svcrdma_max_bc_requests;
+5-1
net/tls/tls_sw.c
···439439 struct scatterlist *sge = sk_msg_elem(msg_en, start);440440 int rc;441441442442+ memcpy(rec->iv_data, tls_ctx->tx.iv, sizeof(rec->iv_data));443443+442444 sge->offset += tls_ctx->tx.prepend_size;443445 sge->length -= tls_ctx->tx.prepend_size;444446···450448 aead_request_set_ad(aead_req, TLS_AAD_SPACE_SIZE);451449 aead_request_set_crypt(aead_req, rec->sg_aead_in,452450 rec->sg_aead_out,453453- data_len, tls_ctx->tx.iv);451451+ data_len, rec->iv_data);454452455453 aead_request_set_callback(aead_req, CRYPTO_TFM_REQ_MAY_BACKLOG,456454 tls_encrypt_done, sk);···17941792 if (atomic_read(&ctx->encrypt_pending))17951793 crypto_wait_req(-EINPROGRESS, &ctx->async_wait);1796179417951795+ release_sock(sk);17971796 cancel_delayed_work_sync(&ctx->tx_work.work);17971797+ lock_sock(sk);1798179817991799 /* Tx whatever records we can transmit and abandon the rest */18001800 tls_tx_records(sk, -1);
+21-8
net/vmw_vsock/virtio_transport.c
···7575{7676 struct virtio_vsock *vsock = virtio_vsock_get();77777878+ if (!vsock)7979+ return VMADDR_CID_ANY;8080+7881 return vsock->guest_cid;7982}8083···587584588585 virtio_vsock_update_guest_cid(vsock);589586590590- ret = vsock_core_init(&virtio_transport.transport);591591- if (ret < 0)592592- goto out_vqs;593593-594587 vsock->rx_buf_nr = 0;595588 vsock->rx_buf_max_nr = 0;596589 atomic_set(&vsock->queued_replies, 0);···617618 mutex_unlock(&the_virtio_vsock_mutex);618619 return 0;619620620620-out_vqs:621621- vsock->vdev->config->del_vqs(vsock->vdev);622621out:623622 kfree(vsock);624623 mutex_unlock(&the_virtio_vsock_mutex);···633636 flush_work(&vsock->tx_work);634637 flush_work(&vsock->event_work);635638 flush_work(&vsock->send_pkt_work);639639+640640+ /* Reset all connected sockets when the device disappears */641641+ vsock_for_each_connected_socket(virtio_vsock_reset_sock);636642637643 vdev->config->reset(vdev);638644···669669670670 mutex_lock(&the_virtio_vsock_mutex);671671 the_virtio_vsock = NULL;672672- vsock_core_exit();673672 mutex_unlock(&the_virtio_vsock_mutex);674673675674 vdev->config->del_vqs(vdev);···701702 virtio_vsock_workqueue = alloc_workqueue("virtio_vsock", 0, 0);702703 if (!virtio_vsock_workqueue)703704 return -ENOMEM;705705+704706 ret = register_virtio_driver(&virtio_vsock_driver);705707 if (ret)706706- destroy_workqueue(virtio_vsock_workqueue);708708+ goto out_wq;709709+710710+ ret = vsock_core_init(&virtio_transport.transport);711711+ if (ret)712712+ goto out_vdr;713713+714714+ return 0;715715+716716+out_vdr:717717+ unregister_virtio_driver(&virtio_vsock_driver);718718+out_wq:719719+ destroy_workqueue(virtio_vsock_workqueue);707720 return ret;721721+708722}709723710724static void __exit virtio_vsock_exit(void)711725{726726+ vsock_core_exit();712727 unregister_virtio_driver(&virtio_vsock_driver);713728 destroy_workqueue(virtio_vsock_workqueue);714729}
···21122112 return 0;21132113}2114211421152115+/* allow waiting for a capture stream that hasn't been started */21162116+#if IS_ENABLED(CONFIG_SND_PCM_OSS)21172117+#define wait_capture_start(substream) ((substream)->oss.oss)21182118+#else21192119+#define wait_capture_start(substream) false21202120+#endif21212121+21152122/* the common loop for read/write data */21162123snd_pcm_sframes_t __snd_pcm_lib_xfer(struct snd_pcm_substream *substream,21172124 void *data, bool interleaved,···21892182 err = snd_pcm_start(substream);21902183 if (err < 0)21912184 goto _end_unlock;21922192- } else {21852185+ } else if (!wait_capture_start(substream)) {21932186 /* nothing to do */21942187 err = 0;21952188 goto _end_unlock;
+2-1
sound/pci/hda/hda_bind.c
···115115 err = snd_hda_codec_build_controls(codec);116116 if (err < 0)117117 goto error_module;118118- if (codec->card->registered) {118118+ /* only register after the bus probe finished; otherwise it's racy */119119+ if (!codec->bus->bus_probing && codec->card->registered) {119120 err = snd_card_register(codec->card);120121 if (err < 0)121122 goto error_module;
···84518451 ca0132_exit_chip(codec);8452845284538453 snd_hda_power_down(codec);84548454- if (IS_ENABLED(CONFIG_PCI) && spec->mem_base)84548454+#ifdef CONFIG_PCI84558455+ if (spec->mem_base)84558456 pci_iounmap(codec->bus->pci, spec->mem_base);84578457+#endif84568458 kfree(spec->spec_init_verbs);84578459 kfree(codec->spec);84588460}
+79-59
sound/pci/hda/patch_realtek.c
···
 	int codec_variant;	/* flag for other variants */
 	unsigned int has_alc5505_dsp:1;
 	unsigned int no_depop_delay:1;
+	unsigned int done_hp_init:1;

 	/* for PLL fix */
 	hda_nid_t pll_nid;
···
 	}
 }

+/* get a primary headphone pin if available */
+static hda_nid_t alc_get_hp_pin(struct alc_spec *spec)
+{
+	if (spec->gen.autocfg.hp_pins[0])
+		return spec->gen.autocfg.hp_pins[0];
+	if (spec->gen.autocfg.line_out_type == AC_JACK_HP_OUT)
+		return spec->gen.autocfg.line_out_pins[0];
+	return 0;
+}

 /*
  * Realtek SSID verification
···
 	 * 15   : 1 --> enable the function "Mute internal speaker
 	 *	        when the external headphone out jack is plugged"
 	 */
-	if (!spec->gen.autocfg.hp_pins[0] &&
-	    !(spec->gen.autocfg.line_out_pins[0] &&
-	      spec->gen.autocfg.line_out_type == AUTO_PIN_HP_OUT)) {
+	if (!alc_get_hp_pin(spec)) {
 		hda_nid_t nid;
 		tmp = (ass >> 11) & 0x3;	/* HP to chassis */
 		nid = ports[tmp];
···
 static void alc282_init(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp_pin_sense;
 	int coef78;
···
 static void alc282_shutup(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp_pin_sense;
 	int coef78;
···
 static void alc283_init(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp_pin_sense;
-
-	if (!spec->gen.autocfg.hp_outs) {
-		if (spec->gen.autocfg.line_out_type == AC_JACK_HP_OUT)
-			hp_pin = spec->gen.autocfg.line_out_pins[0];
-	}

 	alc283_restore_default_value(codec);
···
 static void alc283_shutup(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp_pin_sense;
-
-	if (!spec->gen.autocfg.hp_outs) {
-		if (spec->gen.autocfg.line_out_type == AC_JACK_HP_OUT)
-			hp_pin = spec->gen.autocfg.line_out_pins[0];
-	}

 	if (!hp_pin) {
 		alc269_shutup(codec);
···
 static void alc256_init(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp_pin_sense;

 	if (!hp_pin)
···
 static void alc256_shutup(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp_pin_sense;

 	if (!hp_pin) {
···
 static void alc225_init(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp1_pin_sense, hp2_pin_sense;

 	if (!hp_pin)
···
 static void alc225_shutup(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp1_pin_sense, hp2_pin_sense;

 	if (!hp_pin) {
···
 static void alc_default_init(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp_pin_sense;

 	if (!hp_pin)
···
 static void alc_default_shutup(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
 	bool hp_pin_sense;

 	if (!hp_pin) {
···

 	alc_auto_setup_eapd(codec, false);
 	snd_hda_shutup_pins(codec);
+}
+
+static void alc294_hp_init(struct hda_codec *codec)
+{
+	struct alc_spec *spec = codec->spec;
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);
+	int i, val;
+
+	if (!hp_pin)
+		return;
+
+	snd_hda_codec_write(codec, hp_pin, 0,
+			    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+
+	msleep(100);
+
+	snd_hda_codec_write(codec, hp_pin, 0,
+			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+	alc_update_coef_idx(codec, 0x6f, 0x000f, 0);/* Set HP depop to manual mode */
+	alc_update_coefex_idx(codec, 0x58, 0x00, 0x8000, 0x8000); /* HP depop procedure start */
+
+	/* Wait for depop procedure finish  */
+	val = alc_read_coefex_idx(codec, 0x58, 0x01);
+	for (i = 0; i < 20 && val & 0x0080; i++) {
+		msleep(50);
+		val = alc_read_coefex_idx(codec, 0x58, 0x01);
+	}
+	/* Set HP depop to auto mode */
+	alc_update_coef_idx(codec, 0x6f, 0x000f, 0x000b);
+	msleep(50);
+}
+
+static void alc294_init(struct hda_codec *codec)
+{
+	struct alc_spec *spec = codec->spec;
+
+	if (!spec->done_hp_init) {
+		alc294_hp_init(codec);
+		spec->done_hp_init = true;
+	}
+	alc_default_init(codec);
 }

 static void alc5505_coef_set(struct hda_codec *codec, unsigned int index_reg,
···
 	struct alc_spec *spec = codec->spec;

 	hda_nid_t mux_pin = spec->gen.imux_pins[spec->gen.cur_mux[0]];
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
+	hda_nid_t hp_pin = alc_get_hp_pin(spec);

 	int new_headset_mode;
···
 static void alc_shutup_dell_xps13(struct hda_codec *codec)
 {
 	struct alc_spec *spec = codec->spec;
-	int hp_pin = spec->gen.autocfg.hp_pins[0];
+	int hp_pin = alc_get_hp_pin(spec);

 	/* Prevent pop noises when headphones are plugged in */
 	snd_hda_codec_write(codec, hp_pin, 0,
···
 	if (action == HDA_FIXUP_ACT_PROBE) {
 		int mic_pin = find_ext_mic_pin(codec);
-		int hp_pin = spec->gen.autocfg.hp_pins[0];
+		int hp_pin = alc_get_hp_pin(spec);

 		if (snd_BUG_ON(!mic_pin || !hp_pin))
 			return;
···
 	ALC294_FIXUP_ASUS_HEADSET_MIC,
 	ALC294_FIXUP_ASUS_SPK,
 	ALC225_FIXUP_HEADSET_JACK,
+	ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
 };

 static const struct hda_fixup alc269_fixups[] = {
···
 		.type = HDA_FIXUP_FUNC,
 		.v.func = alc_fixup_headset_jack,
 	},
+	[ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE] = {
+		.type = HDA_FIXUP_PINS,
+		.v.pins = (const struct hda_pintbl[]) {
+			{ 0x1a, 0x01a1913c }, /* use as headset mic, without its own jack detect */
+			{ }
+		},
+		.chained = true,
+		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
+	},
 };

 static const struct snd_pci_quirk alc269_fixup_tbl[] = {
···
 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
 	SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC),
+	SND_PCI_QUIRK(0x1558, 0x1325, "System76 Darter Pro (darp5)", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS),
 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
 	SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE),
···
 	alc_update_coef_idx(codec, 0x4, 0, 1<<11);
 }

-static void alc294_hp_init(struct hda_codec *codec)
-{
-	struct alc_spec *spec = codec->spec;
-	hda_nid_t hp_pin = spec->gen.autocfg.hp_pins[0];
-	int i, val;
-
-	if (!hp_pin)
-		return;
-
-	snd_hda_codec_write(codec, hp_pin, 0,
-			    AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
-
-	msleep(100);
-
-	snd_hda_codec_write(codec, hp_pin, 0,
-			    AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
-
-	alc_update_coef_idx(codec, 0x6f, 0x000f, 0);/* Set HP depop to manual mode */
-	alc_update_coefex_idx(codec, 0x58, 0x00, 0x8000, 0x8000); /* HP depop procedure start */
-
-	/* Wait for depop procedure finish  */
-	val = alc_read_coefex_idx(codec, 0x58, 0x01);
-	for (i = 0; i < 20 && val & 0x0080; i++) {
-		msleep(50);
-		val = alc_read_coefex_idx(codec, 0x58, 0x01);
-	}
-	/* Set HP depop to auto mode */
-	alc_update_coef_idx(codec, 0x6f, 0x000f, 0x000b);
-	msleep(50);
-}
-
 /*
  */
 static int patch_alc269(struct hda_codec *codec)
···
 		spec->codec_variant = ALC269_TYPE_ALC294;
 		spec->gen.mixer_nid = 0; /* ALC2x4 does not have any loopback mixer path */
 		alc_update_coef_idx(codec, 0x6b, 0x0018, (1<<4) | (1<<3)); /* UAJ MIC Vref control by verb */
-		alc294_hp_init(codec);
+		spec->init_hook = alc294_init;
 		break;
 	case 0x10ec0300:
 		spec->codec_variant = ALC269_TYPE_ALC300;
···
 		spec->codec_variant = ALC269_TYPE_ALC700;
 		spec->gen.mixer_nid = 0; /* ALC700 does not have any loopback mixer path */
 		alc_update_coef_idx(codec, 0x4a, 1 << 15, 0); /* Combo jack auto trigger control */
-		alc294_hp_init(codec);
+		spec->init_hook = alc294_init;
 		break;

 	}
+2
sound/usb/quirks.c
···
 		return SNDRV_PCM_FMTBIT_DSD_U32_BE;
 	break;

+	case USB_ID(0x10cb, 0x0103): /* The Bit Opus #3; with fp->dsd_raw */
 	case USB_ID(0x152a, 0x85de): /* SMSL D1 DAC */
 	case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */
 	case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */
···
 	case 0x20b1:  /* XMOS based devices */
 	case 0x152a:  /* Thesycon devices */
 	case 0x25ce:  /* Mytek devices */
+	case 0x2ab6:  /* T+A devices */
 		if (fp->dsd_raw)
 			return SNDRV_PCM_FMTBIT_DSD_U32_BE;
 		break;
+1-5
tools/bpf/bpftool/common.c
···
 	snprintf(path, sizeof(path), "/proc/self/fdinfo/%d", fd);

 	fdi = fopen(path, "r");
-	if (!fdi) {
-		p_err("can't open fdinfo: %s", strerror(errno));
+	if (!fdi)
 		return NULL;
-	}

 	while ((n = getline(&line, &line_n, fdi)) > 0) {
 		char *value;
···
 		value = strchr(line, '\t');
 		if (!value || !value[1]) {
-			p_err("malformed fdinfo!?");
 			free(line);
 			return NULL;
 		}
···
 		return line;
 	}

-	p_err("key '%s' not found in fdinfo", key);
 	free(line);
 	fclose(fdi);
 	return NULL;
+24-9
tools/bpf/bpftool/map.c
···
 	return argv + i;
 }

+/* on per cpu maps we must copy the provided value on all value instances */
+static void fill_per_cpu_value(struct bpf_map_info *info, void *value)
+{
+	unsigned int i, n, step;
+
+	if (!map_is_per_cpu(info->type))
+		return;
+
+	n = get_possible_cpus();
+	step = round_up(info->value_size, 8);
+	for (i = 1; i < n; i++)
+		memcpy(value + i * step, value, info->value_size);
+}
+
 static int parse_elem(char **argv, struct bpf_map_info *info,
 		      void *key, void *value, __u32 key_size, __u32 value_size,
 		      __u32 *flags, __u32 **value_fd)
···
 		argv = parse_bytes(argv, "value", value, value_size);
 		if (!argv)
 			return -1;
+
+		fill_per_cpu_value(info, value);
 	}

 	return parse_elem(argv, info, key, NULL, key_size, value_size,
···
 		jsonw_uint_field(json_wtr, "owner_prog_type",
 				 prog_type);
 	}
-	if (atoi(owner_jited))
-		jsonw_bool_field(json_wtr, "owner_jited", true);
-	else
-		jsonw_bool_field(json_wtr, "owner_jited", false);
+	if (owner_jited)
+		jsonw_bool_field(json_wtr, "owner_jited",
+				 !!atoi(owner_jited));

 	free(owner_prog_type);
 	free(owner_jited);
···
 		char *owner_prog_type = get_fdinfo(fd, "owner_prog_type");
 		char *owner_jited = get_fdinfo(fd, "owner_jited");

-		printf("\n\t");
+		if (owner_prog_type || owner_jited)
+			printf("\n\t");
 		if (owner_prog_type) {
 			unsigned int prog_type = atoi(owner_prog_type);
···
 			else
 				printf("owner_prog_type %d  ", prog_type);
 		}
-		if (atoi(owner_jited))
-			printf("owner jited");
-		else
-			printf("owner not jited");
+		if (owner_jited)
+			printf("owner%s jited",
+			       atoi(owner_jited) ? "" : " not");

 		free(owner_prog_type);
 		free(owner_jited);
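The per-CPU value replication above can be sketched stand-alone. This is a minimal illustration, not bpftool's code: `slot_step` and `fill_per_cpu` are hypothetical names standing in for the kernel's `round_up()` and the loop over `get_possible_cpus()`.

```c
#include <assert.h>
#include <string.h>

/* Stride between per-CPU slots: the value size rounded up to an
 * 8-byte boundary, matching the per-CPU map value layout. */
static unsigned int slot_step(unsigned int value_size)
{
	return (value_size + 7u) & ~7u;
}

/* Replicate the first slot's value into every other CPU's slot. */
static void fill_per_cpu(unsigned char *buf, unsigned int value_size,
			 unsigned int n_cpus)
{
	unsigned int i, step = slot_step(value_size);

	for (i = 1; i < n_cpus; i++)
		memcpy(buf + i * step, buf, value_size);
}
```

With a 5-byte value the stride is 8, so CPU 1's copy starts at offset 8, CPU 2's at 16, and so on.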
+3-2
tools/bpf/bpftool/prog.c
···

 static int prog_fd_by_tag(unsigned char *tag)
 {
-	struct bpf_prog_info info = {};
-	__u32 len = sizeof(info);
 	unsigned int id = 0;
 	int err;
 	int fd;

 	while (true) {
+		struct bpf_prog_info info = {};
+		__u32 len = sizeof(info);
+
 		err = bpf_prog_get_next_id(id, &id);
 		if (err) {
 			p_err("%s", strerror(errno));
+1-1
tools/iio/iio_generic_buffer.c
···

 int main(int argc, char **argv)
 {
-	unsigned long long num_loops = 2;
+	long long num_loops = 2;
 	unsigned long timedelay = 1000000;
 	unsigned long buf_len = 128;
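Why the signedness of the loop counter matters can be shown in isolation. A sketch under the assumption that a negative count is meant as a "run forever" sentinel (function names here are mine, not the tool's):

```c
#include <assert.h>

/* A signed counter keeps -1 usable as a "run forever" sentinel. */
static int is_infinite(long long num_loops)
{
	return num_loops < 0;
}

/* With an unsigned counter the same -1 silently wraps to a huge
 * positive value, so a below-zero check could never fire. */
static unsigned long long as_unsigned(long long num_loops)
{
	return (unsigned long long)num_loops;
}
```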
tools/perf/Documentation/perf-c2c.txt
···
 The perf c2c tool provides means for Shared Data C2C/HITM analysis. It allows
 you to track down the cacheline contentions.

-The tool is based on x86's load latency and precise store facility events
-provided by Intel CPUs. These events provide:
+On x86, the tool is based on load latency and precise store facility events
+provided by Intel CPUs. On PowerPC, the tool uses random instruction sampling
+with thresholding feature.
+
+These events provide:
   - memory address of the access
   - type of the access (load and store details)
   - latency (in cycles) of the load access
···
 -l::
 --ldlat::
-	Configure mem-loads latency.
+	Configure mem-loads latency. (x86 only)

 -k::
 --all-kernel::
···
 	-W,-d,--phys-data,--sample-cpu

 Unless specified otherwise with '-e' option, following events are monitored by
-default:
+default on x86:

   cpu/mem-loads,ldlat=30/P
   cpu/mem-stores/P
+
+and following on PowerPC:
+
+  cpu/mem-loads/
+  cpu/mem-stores/

 User can pass any 'perf record' option behind '--' mark, like (to enable
 callchains and system wide monitoring):
+1-1
tools/perf/Documentation/perf-mem.txt
···
 	Be more verbose (show counter open errors, etc)

 --ldlat <n>::
-	Specify desired latency for loads event.
+	Specify desired latency for loads event. (x86 only)

 In addition, for report all perf report options are valid, and for record
 all perf record options.
tools/perf/arch/powerpc/util/mem-events.c
···
+// SPDX-License-Identifier: GPL-2.0
+#include "mem-events.h"
+
+/* PowerPC does not support 'ldlat' parameter. */
+char *perf_mem_events__name(int i)
+{
+	if (i == PERF_MEM_EVENTS__LOAD)
+		return (char *) "cpu/mem-loads/";
+
+	return (char *) "cpu/mem-stores/";
+}
+3-6
tools/perf/builtin-script.c
···
 		.force_header = false,
 	};
 	struct perf_evsel *ev2;
-	static bool init;
 	u64 val;

-	if (!init) {
-		perf_stat__init_shadow_stats();
-		init = true;
-	}
 	if (!evsel->stats)
 		perf_evlist__alloc_stats(script->session->evlist, false);
 	if (evsel_script(evsel->leader)->gnum++ == 0)
···
 		return;
 	}

-	if (PRINT_FIELD(TRACE)) {
+	if (PRINT_FIELD(TRACE) && sample->raw_data) {
 		event_format__fprintf(evsel->tp_format, sample->cpu,
 				      sample->raw_data, sample->raw_size, fp);
 	}
···
 	int ret;

 	signal(SIGINT, sig_handler);
+
+	perf_stat__init_shadow_stats();

 	/* override event processing functions */
 	if (script->show_task_events) {
+18-7
tools/perf/builtin-trace.c
···

 static bool perf_evlist__add_vfs_getname(struct perf_evlist *evlist)
 {
-	struct perf_evsel *evsel = perf_evsel__newtp("probe", "vfs_getname");
+	bool found = false;
+	struct perf_evsel *evsel, *tmp;
+	struct parse_events_error err = { .idx = 0, };
+	int ret = parse_events(evlist, "probe:vfs_getname*", &err);

-	if (IS_ERR(evsel))
+	if (ret)
 		return false;

-	if (perf_evsel__field(evsel, "pathname") == NULL) {
+	evlist__for_each_entry_safe(evlist, evsel, tmp) {
+		if (!strstarts(perf_evsel__name(evsel), "probe:vfs_getname"))
+			continue;
+
+		if (perf_evsel__field(evsel, "pathname")) {
+			evsel->handler = trace__vfs_getname;
+			found = true;
+			continue;
+		}
+
+		list_del_init(&evsel->node);
+		evsel->evlist = NULL;
 		perf_evsel__delete(evsel);
-		return false;
 	}

-	evsel->handler = trace__vfs_getname;
-	perf_evlist__add(evlist, evsel);
-	return true;
+	return found;
 }

 static struct perf_evsel *perf_evsel__new_pgfault(u64 config)
+19-13
tools/perf/tests/attr.py
···
 #! /usr/bin/python
 # SPDX-License-Identifier: GPL-2.0

+from __future__ import print_function
+
 import os
 import sys
 import glob
···
 import tempfile
 import logging
 import shutil
-import ConfigParser
+
+try:
+    import configparser
+except ImportError:
+    import ConfigParser as configparser

 def data_equal(a, b):
     # Allow multiple values in assignment separated by '|'
···
     def equal(self, other):
         for t in Event.terms:
             log.debug("      [%s] %s %s" % (t, self[t], other[t]));
-            if not self.has_key(t) or not other.has_key(t):
+            if t not in self or t not in other:
                 return False
             if not data_equal(self[t], other[t]):
                 return False
         return True

     def optional(self):
-        if self.has_key('optional') and self['optional'] == '1':
+        if 'optional' in self and self['optional'] == '1':
             return True
         return False

     def diff(self, other):
         for t in Event.terms:
-            if not self.has_key(t) or not other.has_key(t):
+            if t not in self or t not in other:
                 continue
             if not data_equal(self[t], other[t]):
                 log.warning("expected %s=%s, got %s" % (t, self[t], other[t]))
···
 # - expected values assignments
 class Test(object):
     def __init__(self, path, options):
-        parser = ConfigParser.SafeConfigParser()
+        parser = configparser.SafeConfigParser()
         parser.read(path)

         log.warning("running '%s'" % path)
···
         return True

     def load_events(self, path, events):
-        parser_event = ConfigParser.SafeConfigParser()
+        parser_event = configparser.SafeConfigParser()
         parser_event.read(path)

         # The event record section header contains 'event' word,
···
         # Read parent event if there's any
         if (':' in section):
             base = section[section.index(':') + 1:]
-            parser_base = ConfigParser.SafeConfigParser()
+            parser_base = configparser.SafeConfigParser()
             parser_base.read(self.test_dir + '/' + base)
             base_items = parser_base.items('event')
···
     for f in glob.glob(options.test_dir + '/' + options.test):
         try:
             Test(f, options).run()
-        except Unsup, obj:
+        except Unsup as obj:
             log.warning("unsupp  %s" % obj.getMsg())
-        except Notest, obj:
+        except Notest as obj:
             log.warning("skipped %s" % obj.getMsg())

 def setup_log(verbose):
···
     parser.add_option("-p", "--perf",
                       action="store", type="string", dest="perf")
     parser.add_option("-v", "--verbose",
-                      action="count", dest="verbose")
+                      default=0, action="count", dest="verbose")

     options, args = parser.parse_args()
     if args:
···
     setup_log(options.verbose)

     if not options.test_dir:
-        print 'FAILED no -d option specified'
+        print('FAILED no -d option specified')
         sys.exit(-1)

     if not options.test:
···
     try:
         run_tests(options)

-    except Fail, obj:
-        print "FAILED %s" % obj.getMsg();
+    except Fail as obj:
+        print("FAILED %s" % obj.getMsg())
         sys.exit(-1)

     sys.exit(0)
+1-1
tools/perf/tests/evsel-tp-sched.c
···
 		return -1;
 	}

-	is_signed = !!(field->flags | TEP_FIELD_IS_SIGNED);
+	is_signed = !!(field->flags & TEP_FIELD_IS_SIGNED);
 	if (should_be_signed && !is_signed) {
 		pr_debug("%s: \"%s\" signedness(%d) is wrong, should be %d\n",
 			 evsel->name, name, is_signed, should_be_signed);
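The one-character fix above is the classic OR-instead-of-AND flag test. A reduced demonstration, assuming (as with `TEP_FIELD_IS_SIGNED`) the flag is a single bit; the names here are illustrative only:

```c
#include <assert.h>

#define FIELD_IS_SIGNED (1u << 0)	/* stand-in for TEP_FIELD_IS_SIGNED */

/* Buggy: OR merges the flag in, so the result is nonzero for
 * every possible input and the check always reports "signed". */
static int is_signed_or(unsigned int flags)
{
	return !!(flags | FIELD_IS_SIGNED);
}

/* Fixed: AND actually tests whether the bit is set. */
static int is_signed_and(unsigned int flags)
{
	return !!(flags & FIELD_IS_SIGNED);
}
```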
+10-6
tools/perf/ui/browsers/annotate.c
···
 	return ret;
 }

-static int disasm__cmp(struct annotation_line *a, struct annotation_line *b)
+static double disasm__cmp(struct annotation_line *a, struct annotation_line *b,
+			  int percent_type)
 {
 	int i;

 	for (i = 0; i < a->data_nr; i++) {
-		if (a->data[i].percent == b->data[i].percent)
+		if (a->data[i].percent[percent_type] == b->data[i].percent[percent_type])
 			continue;
-		return a->data[i].percent < b->data[i].percent;
+		return a->data[i].percent[percent_type] -
+		       b->data[i].percent[percent_type];
 	}
 	return 0;
 }

-static void disasm_rb_tree__insert(struct rb_root *root, struct annotation_line *al)
+static void disasm_rb_tree__insert(struct annotate_browser *browser,
+				   struct annotation_line *al)
 {
+	struct rb_root *root = &browser->entries;
 	struct rb_node **p = &root->rb_node;
 	struct rb_node *parent = NULL;
 	struct annotation_line *l;
···
 		parent = *p;
 		l = rb_entry(parent, struct annotation_line, rb_node);

-		if (disasm__cmp(al, l))
+		if (disasm__cmp(al, l, browser->opts->percent_type) < 0)
 			p = &(*p)->rb_left;
 		else
 			p = &(*p)->rb_right;
···
 			RB_CLEAR_NODE(&pos->al.rb_node);
 			continue;
 		}
-		disasm_rb_tree__insert(&browser->entries, &pos->al);
+		disasm_rb_tree__insert(browser, &pos->al);
 	}
 	pthread_mutex_unlock(&notes->lock);
tools/perf/util/cpumap.c
···
 	if (!cpu_list)
 		return cpu_map__read_all_cpu_map();

-	if (!isdigit(*cpu_list))
+	/*
+	 * must handle the case of empty cpumap to cover
+	 * TOPOLOGY header for NUMA nodes with no CPU
+	 * ( e.g., because of CPU hotplug)
+	 */
+	if (!isdigit(*cpu_list) && *cpu_list != '\0')
 		goto out;

 	while (isdigit(*cpu_list)) {
···

 	if (nr_cpus > 0)
 		cpus = cpu_map__trim_new(nr_cpus, tmp_cpus);
-	else
+	else if (*cpu_list != '\0')
 		cpus = cpu_map__default_new();
+	else
+		cpus = cpu_map__dummy_new();
 invalid:
 	free(tmp_cpus);
 out:
+1-1
tools/perf/util/mem-events.c
···
 static char mem_loads_name[100];
 static bool mem_loads_name__init;

-char *perf_mem_events__name(int i)
+char * __weak perf_mem_events__name(int i)
 {
 	if (i == PERF_MEM_EVENTS__LOAD) {
 		if (!mem_loads_name__init) {
+4-2
tools/perf/util/ordered-events.c
···
 	 * Current buffer might not have all the events allocated
	 * yet, we need to free only allocated ones ...
	 */
-	list_del(&oe->buffer->list);
-	ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
+	if (oe->buffer) {
+		list_del(&oe->buffer->list);
+		ordered_events_buffer__free(oe->buffer, oe->buffer_idx, oe);
+	}

 	/* ... and continue with the rest */
 	list_for_each_entry_safe(buffer, tmp, &oe->to_free, list) {
+2
tools/perf/util/setup.py
···
         vars[var] = sub("-mcet", "", vars[var])
     if not clang_has_option("-fcf-protection"):
         vars[var] = sub("-fcf-protection", "", vars[var])
+    if not clang_has_option("-fstack-clash-protection"):
+        vars[var] = sub("-fstack-clash-protection", "", vars[var])

 from distutils.core import setup, Extension
+22-1
tools/perf/util/symbol-elf.c
···
 #define EM_AARCH64	183  /* ARM 64 bit */
 #endif

+#ifndef ELF32_ST_VISIBILITY
+#define ELF32_ST_VISIBILITY(o)	((o) & 0x03)
+#endif
+
+/* For ELF64 the definitions are the same.  */
+#ifndef ELF64_ST_VISIBILITY
+#define ELF64_ST_VISIBILITY(o)	ELF32_ST_VISIBILITY (o)
+#endif
+
+/* How to extract information held in the st_other field.  */
+#ifndef GELF_ST_VISIBILITY
+#define GELF_ST_VISIBILITY(val)	ELF64_ST_VISIBILITY (val)
+#endif
+
 typedef Elf64_Nhdr GElf_Nhdr;

 #ifdef HAVE_CPLUS_DEMANGLE_SUPPORT
···
 	return GELF_ST_TYPE(sym->st_info);
 }

+static inline uint8_t elf_sym__visibility(const GElf_Sym *sym)
+{
+	return GELF_ST_VISIBILITY(sym->st_other);
+}
+
 #ifndef STT_GNU_IFUNC
 #define STT_GNU_IFUNC 10
 #endif
···
 	return elf_sym__type(sym) == STT_NOTYPE &&
 		sym->st_name != 0 &&
 		sym->st_shndx != SHN_UNDEF &&
-		sym->st_shndx != SHN_ABS;
+		sym->st_shndx != SHN_ABS &&
+		elf_sym__visibility(sym) != STV_HIDDEN &&
+		elf_sym__visibility(sym) != STV_INTERNAL;
 }

 static bool elf_sym__filter(GElf_Sym *sym)
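The visibility filter above boils down to masking the low two bits of `st_other` and rejecting hidden/internal symbols. A self-contained sketch using the standard ELF numeric `STV_*` values; the macro and enum names are mine, renamed to avoid clashing with `<elf.h>`:

```c
#include <assert.h>

/* The low two bits of st_other hold the symbol visibility. */
#define MY_ST_VISIBILITY(o)	((o) & 0x03)

enum { VIS_DEFAULT = 0, VIS_INTERNAL = 1, VIS_HIDDEN = 2, VIS_PROTECTED = 3 };

/* Mirror of the new filter condition: keep a symbol only if it is
 * not hidden and not internal. */
static int visibility_ok(unsigned char st_other)
{
	unsigned char vis = MY_ST_VISIBILITY(st_other);

	return vis != VIS_HIDDEN && vis != VIS_INTERNAL;
}
```

Note that any higher bits of `st_other` (which are processor-specific) are ignored by the mask.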
tools/testing/selftests/net/xfrm_policy.sh
···
 SPI1=0x1
 SPI2=0x2

+do_esp_policy() {
+    local ns=$1
+    local me=$2
+    local remote=$3
+    local lnet=$4
+    local rnet=$5
+
+    # to encrypt packets as they go out (includes forwarded packets that need encapsulation)
+    ip -net $ns xfrm policy add src $lnet dst $rnet dir out tmpl src $me dst $remote proto esp mode tunnel priority 100 action allow
+    # to fwd decrypted packets after esp processing:
+    ip -net $ns xfrm policy add src $rnet dst $lnet dir fwd tmpl src $remote dst $me proto esp mode tunnel priority 100 action allow
+}
+
 do_esp() {
     local ns=$1
     local me=$2
···
     ip -net $ns xfrm state add src $remote dst $me proto esp spi $spi_in enc aes $KEY_AES auth sha1 $KEY_SHA mode tunnel sel src $rnet dst $lnet
     ip -net $ns xfrm state add src $me dst $remote proto esp spi $spi_out enc aes $KEY_AES auth sha1 $KEY_SHA mode tunnel sel src $lnet dst $rnet

-    # to encrypt packets as they go out (includes forwarded packets that need encapsulation)
-    ip -net $ns xfrm policy add src $lnet dst $rnet dir out tmpl src $me dst $remote proto esp mode tunnel priority 100 action allow
-    # to fwd decrypted packets after esp processing:
-    ip -net $ns xfrm policy add src $rnet dst $lnet dir fwd tmpl src $remote dst $me proto esp mode tunnel priority 100 action allow
+    do_esp_policy $ns $me $remote $lnet $rnet
+}
+
+# add policies with different netmasks, to make sure kernel carries
+# the policies contained within new netmask over when search tree is
+# re-built.
+# peer netns that are supposed to be encapsulated via esp have addresses
+# in the 10.0.1.0/24 and 10.0.2.0/24 subnets, respectively.
+#
+# Adding a policy for '10.0.1.0/23' will make it necessary to
+# alter the prefix of 10.0.1.0 subnet.
+# In case new prefix overlaps with existing node, the node and all
+# policies it carries need to be merged with the existing one(s).
+#
+# Do that here.
+do_overlap()
+{
+    local ns=$1
+
+    # adds new nodes to tree (neither network exists yet in policy database).
+    ip -net $ns xfrm policy add src 10.1.0.0/24 dst 10.0.0.0/24 dir fwd priority 200 action block
+
+    # adds a new node in the 10.0.0.0/24 tree (dst node exists).
+    ip -net $ns xfrm policy add src 10.2.0.0/24 dst 10.0.0.0/24 dir fwd priority 200 action block
+
+    # adds a 10.2.0.0/23 node, but for different dst.
+    ip -net $ns xfrm policy add src 10.2.0.0/23 dst 10.0.1.0/24 dir fwd priority 200 action block
+
+    # dst now overlaps with the 10.0.1.0/24 ESP policy in fwd.
+    # kernel must 'promote' existing one (10.0.0.0/24) to 10.0.0.0/23.
+    # But 10.0.0.0/23 also includes existing 10.0.1.0/24, so that node
+    # also has to be merged too, including source-sorted subtrees.
+    # old:
+    # 10.0.0.0/24 (node 1 in dst tree of the bin)
+    #  10.1.0.0/24 (node in src tree of dst node 1)
+    #  10.2.0.0/24 (node in src tree of dst node 1)
+    # 10.0.1.0/24 (node 2 in dst tree of the bin)
+    #  10.0.2.0/24 (node in src tree of dst node 2)
+    #  10.2.0.0/24 (node in src tree of dst node 2)
+    #
+    # The next 'policy add' adds dst '10.0.0.0/23', which means
+    # that dst node 1 and dst node 2 have to be merged including
+    # the sub-tree.  As no duplicates are allowed, policies in
+    # the two '10.0.2.0/24' are also merged.
+    #
+    # after the 'add', internal search tree should look like this:
+    # 10.0.0.0/23 (node in dst tree of bin)
+    #  10.0.2.0/24 (node in src tree of dst node)
+    #  10.1.0.0/24 (node in src tree of dst node)
+    #  10.2.0.0/24 (node in src tree of dst node)
+    #
+    # 10.0.0.0/24 and 10.0.1.0/24 nodes have been merged as 10.0.0.0/23.
+    ip -net $ns xfrm policy add src 10.1.0.0/24 dst 10.0.0.0/23 dir fwd priority 200 action block
+}

 do_esp_policy_get_check() {
···
     return $lret
 }

+check_exceptions()
+{
+    logpostfix="$1"
+    local lret=0
+
+    # ping to .254 should be excluded from the tunnel (exception is in place).
+    check_xfrm 0 254
+    if [ $? -ne 0 ]; then
+        echo "FAIL: expected ping to .254 to fail ($logpostfix)"
+        lret=1
+    else
+        echo "PASS: ping to .254 bypassed ipsec tunnel ($logpostfix)"
+    fi
+
+    # ping to .253 should use use ipsec due to direct policy exception.
+    check_xfrm 1 253
+    if [ $? -ne 0 ]; then
+        echo "FAIL: expected ping to .253 to use ipsec tunnel ($logpostfix)"
+        lret=1
+    else
+        echo "PASS: direct policy matches ($logpostfix)"
+    fi
+
+    # ping to .2 should use ipsec.
+    check_xfrm 1 2
+    if [ $? -ne 0 ]; then
+        echo "FAIL: expected ping to .2 to use ipsec tunnel ($logpostfix)"
+        lret=1
+    else
+        echo "PASS: policy matches ($logpostfix)"
+    fi
+
+    return $lret
+}
+
 #check for needed privileges
 if [ "$(id -u)" -ne 0 ];then
     echo "SKIP: Need root privileges"
···
 do_exception ns3 dead:3::1 dead:3::10 dead:2::fd dead:2:f0::/96
 do_exception ns4 dead:3::10 dead:3::1 dead:1::fd dead:1:f0::/96

-# ping to .254 should now be excluded from the tunnel
-check_xfrm 0 254
+check_exceptions "exceptions"
 if [ $? -ne 0 ]; then
-    echo "FAIL: expected ping to .254 to fail"
     ret=1
-else
-    echo "PASS: ping to .254 bypassed ipsec tunnel"
 fi

-# ping to .253 should use use ipsec due to direct policy exception.
-check_xfrm 1 253
+# insert block policies with adjacent/overlapping netmasks
+do_overlap ns3
+
+check_exceptions "exceptions and block policies"
 if [ $? -ne 0 ]; then
-    echo "FAIL: expected ping to .253 to use ipsec tunnel"
     ret=1
-else
-    echo "PASS: direct policy matches"
 fi

-# ping to .2 should use ipsec.
-check_xfrm 1 2
-if [ $? -ne 0 ]; then
-    echo "FAIL: expected ping to .2 to use ipsec tunnel"
-    ret=1
-else
-    echo "PASS: policy matches"
-fi
+for n in ns3 ns4;do
+	ip -net $n xfrm policy set hthresh4 28 24 hthresh6 126 125
+	sleep $((RANDOM%5))
+done
+
+check_exceptions "exceptions and block policies after hresh changes"
+
+# full flush of policy db, check everything gets freed incl. internal meta data
+ip -net ns3 xfrm policy flush
+
+do_esp_policy ns3 10.0.3.1 10.0.3.10 10.0.1.0/24 10.0.2.0/24
+do_exception ns3 10.0.3.1 10.0.3.10 10.0.2.253 10.0.2.240/28
+
+# move inexact policies to hash table
+ip -net ns3 xfrm policy set hthresh4 16 16
+
+sleep $((RANDOM%5))
+check_exceptions "exceptions and block policies after hthresh change in ns3"
+
+# restore original hthresh settings -- move policies back to tables
+for n in ns3 ns4;do
+	ip -net $n xfrm policy set hthresh4 32 32 hthresh6 128 128
+	sleep $((RANDOM%5))
+done
+check_exceptions "exceptions and block policies after hresh change to normal"

 for i in 1 2 3 4;do ip netns del ns$i;done
···
16081608#ifdef SYSCALL_NUM_RET_SHARE_REG
16091609# define EXPECT_SYSCALL_RETURN(val, action)	EXPECT_EQ(-1, action)
16101610#else
16111611-# define EXPECT_SYSCALL_RETURN(val, action)	EXPECT_EQ(val, action)
16111611+# define EXPECT_SYSCALL_RETURN(val, action)	\
16121612+	do {					\
16131613+		errno = 0;			\
16141614+		if (val < 0) {			\
16151615+			EXPECT_EQ(-1, action);	\
16161616+			EXPECT_EQ(-(val), errno); \
16171617+		} else {			\
16181618+			EXPECT_EQ(val, action);	\
16191619+		}				\
16201620+	} while (0)
16121621#endif
16131622
16141623/* Use PTRACE_GETREGS and PTRACE_SETREGS when available. This is useful for
···
16561647
16571648/* Architecture-specific syscall changing routine. */
16581649void change_syscall(struct __test_metadata *_metadata,
16591659-		    pid_t tracee, int syscall)
16501650+		    pid_t tracee, int syscall, int result)
16601651{
16611652	int ret;
16621653	ARCH_REGS regs;
···
17151706#ifdef SYSCALL_NUM_RET_SHARE_REG
17161707	TH_LOG("Can't modify syscall return on this architecture");
17171708#else
17181718-	regs.SYSCALL_RET = EPERM;
17091709+	regs.SYSCALL_RET = result;
17191710#endif
17201711
17211712#ifdef HAVE_GETREGS
···
17431734	case 0x1002:
17441735		/* change getpid to getppid. */
17451736		EXPECT_EQ(__NR_getpid, get_syscall(_metadata, tracee));
17461746-		change_syscall(_metadata, tracee, __NR_getppid);
17371737+		change_syscall(_metadata, tracee, __NR_getppid, 0);
17471738		break;
17481739	case 0x1003:
17491749-		/* skip gettid. */
17401740+		/* skip gettid with valid return code. */
17501741		EXPECT_EQ(__NR_gettid, get_syscall(_metadata, tracee));
17511751-		change_syscall(_metadata, tracee, -1);
17421742+		change_syscall(_metadata, tracee, -1, 45000);
17521743		break;
17531744	case 0x1004:
17451745+		/* skip openat with error. */
17461746+		EXPECT_EQ(__NR_openat, get_syscall(_metadata, tracee));
17471747+		change_syscall(_metadata, tracee, -1, -ESRCH);
17481748+		break;
17491749+	case 0x1005:
17541750		/* do nothing (allow getppid) */
17551751		EXPECT_EQ(__NR_getppid, get_syscall(_metadata, tracee));
17561752		break;
···
17881774	nr = get_syscall(_metadata, tracee);
17891775
17901776	if (nr == __NR_getpid)
17911791-		change_syscall(_metadata, tracee, __NR_getppid);
17771777+		change_syscall(_metadata, tracee, __NR_getppid, 0);
17781778+	if (nr == __NR_gettid)
17791779+		change_syscall(_metadata, tracee, -1, 45000);
17921780	if (nr == __NR_openat)
17931793-		change_syscall(_metadata, tracee, -1);
17811781+		change_syscall(_metadata, tracee, -1, -ESRCH);
17941782}
17951783
17961784FIXTURE_DATA(TRACE_syscall) {
···
18091793		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE | 0x1002),
18101794		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_gettid, 0, 1),
18111795		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE | 0x1003),
18121812-		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getppid, 0, 1),
17961796+		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_openat, 0, 1),
18131797		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE | 0x1004),
17981798+		BPF_JUMP(BPF_JMP|BPF_JEQ|BPF_K, __NR_getppid, 0, 1),
17991799+		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_TRACE | 0x1005),
18141800		BPF_STMT(BPF_RET|BPF_K, SECCOMP_RET_ALLOW),
18151801	};
···
18601842	EXPECT_NE(self->mypid, syscall(__NR_getpid));
18611843}
18621844
18631863-TEST_F(TRACE_syscall, ptrace_syscall_dropped)
18451845+TEST_F(TRACE_syscall, ptrace_syscall_errno)
18641846{
18651847	/* Swap SECCOMP_RET_TRACE tracer for PTRACE_SYSCALL tracer. */
18661848	teardown_trace_fixture(_metadata, self->tracer);
18671849	self->tracer = setup_trace_fixture(_metadata, tracer_ptrace, NULL,
18681850					   true);
18691851
18701870-	/* Tracer should skip the open syscall, resulting in EPERM. */
18711871-	EXPECT_SYSCALL_RETURN(EPERM, syscall(__NR_openat));
18521852+	/* Tracer should skip the open syscall, resulting in ESRCH. */
18531853+	EXPECT_SYSCALL_RETURN(-ESRCH, syscall(__NR_openat));
18541854+}
18551855+
18561856+TEST_F(TRACE_syscall, ptrace_syscall_faked)
18571857+{
18581858+	/* Swap SECCOMP_RET_TRACE tracer for PTRACE_SYSCALL tracer. */
18591859+	teardown_trace_fixture(_metadata, self->tracer);
18601860+	self->tracer = setup_trace_fixture(_metadata, tracer_ptrace, NULL,
18611861+					   true);
18621862+
18631863+	/* Tracer should skip the gettid syscall, resulting in a fake pid. */
18641864+	EXPECT_SYSCALL_RETURN(45000, syscall(__NR_gettid));
18721865}
18731866
18741867TEST_F(TRACE_syscall, syscall_allowed)
···
19121883	EXPECT_NE(self->mypid, syscall(__NR_getpid));
19131884}
19141885
19151915-TEST_F(TRACE_syscall, syscall_dropped)
18861886+TEST_F(TRACE_syscall, syscall_errno)
18871887+{
18881888+	long ret;
18891889+
18901890+	ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
18911891+	ASSERT_EQ(0, ret);
18921892+
18931893+	ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &self->prog, 0, 0);
18941894+	ASSERT_EQ(0, ret);
18951895+
18961896+	/* openat has been skipped and an errno returned. */
18971897+	EXPECT_SYSCALL_RETURN(-ESRCH, syscall(__NR_openat));
18981898+}
18991899+
19001900+TEST_F(TRACE_syscall, syscall_faked)
19161901{
19171902	long ret;
19181903
···
19371894	ASSERT_EQ(0, ret);
19381895
19391896	/* gettid has been skipped and an altered return value stored. */
19401940-	EXPECT_SYSCALL_RETURN(EPERM, syscall(__NR_gettid));
19411941-	EXPECT_NE(self->mytid, syscall(__NR_gettid));
18971897+	EXPECT_SYSCALL_RETURN(45000, syscall(__NR_gettid));
19421898}
19431899
19441900TEST_F(TRACE_syscall, skip_after_RET_TRACE)
+1-1
tools/testing/selftests/timers/Makefile
···
11# SPDX-License-Identifier: GPL-2.0
22CFLAGS += -O3 -Wl,-no-as-needed -Wall
33-LDFLAGS += -lrt -lpthread -lm
33+LDLIBS += -lrt -lpthread -lm
44
55# these are all "safe" tests that don't modify
66# system time or require escalated privileges
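The LDFLAGS-to-LDLIBS change matters because of argument order in GNU make's built-in link rule: LDFLAGS is expanded before the source/object files and LDLIBS after them, and the linker scans `-l` libraries left to right, pulling in only symbols that are already undefined at that point. A sketch of the effect (comments only; the single correct line is the same one the patch adds):

```make
# GNU make's built-in link rule places the variables in this order:
#   $(CC) $(CFLAGS) $(CPPFLAGS) $(LDFLAGS) prog.c $(LOADLIBES) $(LDLIBS) -o prog
#
# "-lrt -lpthread -lm" in LDFLAGS therefore lands *before* the objects
# that need those libraries, so the linker may resolve nothing from them
# and the build fails with undefined references. In LDLIBS the libraries
# follow the objects and can satisfy their undefined symbols:
LDLIBS += -lrt -lpthread -lm
```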