···
 - compatible :
 	- "fsl,vf610-edma" for eDMA used similar to that on Vybrid vf610 SoC
 	- "fsl,imx7ulp-edma" for eDMA2 used similar to that on i.mx7ulp
-	- "fsl,fsl,ls1028a-edma" for eDMA used similar to that on Vybrid vf610 SoC
+	- "fsl,ls1028a-edma" followed by "fsl,vf610-edma" for eDMA used on the
+	  LS1028A SoC.
 - reg : Specifies base physical address(s) and size of the eDMA registers.
 	The 1st region is eDMA control register's address and size.
 	The 2nd and the 3rd regions are programmable channel multiplexing
···
 3. Raw Gadget provides a way to select a UDC device/driver to bind to,
    while GadgetFS currently binds to the first available UDC.
 
-4. Raw Gadget uses predictable endpoint names (handles) across different
-   UDCs (as long as UDCs have enough endpoints of each required transfer
-   type).
+4. Raw Gadget explicitly exposes information about endpoint addresses and
+   capabilities, allowing a user to write UDC-agnostic gadgets.
 
 5. Raw Gadget has an ioctl-based interface instead of a filesystem-based one.
 
···
 Raw Gadget and react to those depending on what kind of USB device
 needs to be emulated.
 
+Note that some UDC drivers have fixed addresses assigned to endpoints, and
+therefore arbitrary endpoint addresses can't be used in the descriptors.
+Nevertheless, Raw Gadget provides a UDC-agnostic way to write USB gadgets.
+Once a USB_RAW_EVENT_CONNECT event is received via USB_RAW_IOCTL_EVENT_FETCH,
+the USB_RAW_IOCTL_EPS_INFO ioctl can be used to find out information about
+the endpoints that the UDC driver has. Based on that information, the user
+must choose UDC endpoints that will be used for the gadget being emulated,
+and properly assign addresses in endpoint descriptors.
+
+You can find usage examples (along with a test suite) here:
+
+https://github.com/xairy/raw-gadget
+
+Internal details
+~~~~~~~~~~~~~~~~
+
+Currently, every endpoint read/write ioctl submits a USB request and waits
+until its completion. This is the desired mode for coverage-guided fuzzing
+(as we'd like all USB request processing to happen during the lifetime of a
+syscall), and it must be kept in the implementation. (This might be slow for
+real-world applications, hence the O_NONBLOCK improvement suggestion below.)
+
 Potential future improvements
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-- Implement ioctl's for setting/clearing halt status on endpoints.
-
-- Reporting more events (suspend, resume, etc.) through
-  USB_RAW_IOCTL_EVENT_FETCH.
+- Report more events (suspend, resume, etc.) through USB_RAW_IOCTL_EVENT_FETCH.
 
 - Support O_NONBLOCK I/O.
+
+- Support USB 3 features (accept SS endpoint companion descriptor when
+  enabling endpoints; allow providing stream_id for bulk transfers).
+
+- Support ISO transfer features (expose frame_number for completed requests).
MAINTAINERS (+16 -4)
···
 
 DRM DRIVER FOR VMWARE VIRTUAL GPU
 M:	"VMware Graphics" <linux-graphics-maintainer@vmware.com>
-M:	Thomas Hellstrom <thellstrom@vmware.com>
+M:	Roland Scheidegger <sroland@vmware.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
-T:	git git://people.freedesktop.org/~thomash/linux
+T:	git git://people.freedesktop.org/~sroland/linux
 F:	drivers/gpu/drm/vmwgfx/
 F:	include/uapi/drm/vmwgfx_drm.h
 
···
 F:	drivers/media/platform/sti/hva
 
 HWPOISON MEMORY FAILURE HANDLING
-M:	Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+M:	Naoya Horiguchi <naoya.horiguchi@nec.com>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	mm/hwpoison-inject.c
···
 F:	drivers/i2c/busses/i2c-parport.c
 
 I2C SUBSYSTEM
-M:	Wolfram Sang <wsa@the-dreams.de>
+M:	Wolfram Sang <wsa@kernel.org>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 W:	https://i2c.wiki.kernel.org/
···
 S:	Maintained
 W:	http://lse.sourceforge.net/kdump/
 F:	Documentation/admin-guide/kdump/
+F:	fs/proc/vmcore.c
+F:	include/linux/crash_core.h
+F:	include/linux/crash_dump.h
+F:	include/uapi/linux/vmcore.h
+F:	kernel/crash_*.c
 
 KEENE FM RADIO TRANSMITTER DRIVER
 M:	Hans Verkuil <hverkuil@xs4all.nl>
···
 L:	netdev@vger.kernel.org
 S:	Maintained
 F:	drivers/net/ethernet/mediatek/
+
+MEDIATEK I2C CONTROLLER DRIVER
+M:	Qii Wang <qii.wang@mediatek.com>
+L:	linux-i2c@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/i2c/i2c-mt65xx.txt
+F:	drivers/i2c/busses/i2c-mt65xx.c
 
 MEDIATEK JPEG DRIVER
 M:	Rick Chang <rick.chang@mediatek.com>
···
 #include <linux/clocksource.h>
 #include <linux/console.h>
 #include <linux/module.h>
+#include <linux/sizes.h>
 #include <linux/cpu.h>
 #include <linux/of_clk.h>
 #include <linux/of_fdt.h>
···
 	if ((unsigned int)__arc_dccm_base != cpu->dccm.base_addr)
 		panic("Linux built with incorrect DCCM Base address\n");
 
-	if (CONFIG_ARC_DCCM_SZ != cpu->dccm.sz)
+	if (CONFIG_ARC_DCCM_SZ * SZ_1K != cpu->dccm.sz)
 		panic("Linux built with incorrect DCCM Size\n");
 #endif
 
 #ifdef CONFIG_ARC_HAS_ICCM
-	if (CONFIG_ARC_ICCM_SZ != cpu->iccm.sz)
+	if (CONFIG_ARC_ICCM_SZ * SZ_1K != cpu->iccm.sz)
 		panic("Linux built with incorrect ICCM Size\n");
 #endif
···
 };
 
 &mmc3 {
+	pinctrl-names = "default";
+	pinctrl-0 = <&mmc3_pins>;
 	vmmc-supply = <&wl12xx_vmmc>;
 	/* uart2_tx.sdmmc3_dat1 pad as wakeirq */
 	interrupts-extended = <&wakeupgen GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH
···
 			OMAP4_IOPAD(0x09a, PIN_INPUT | MUX_MODE0)
 			OMAP4_IOPAD(0x09c, PIN_INPUT | MUX_MODE0)
 			OMAP4_IOPAD(0x09e, PIN_INPUT | MUX_MODE0)
+		>;
+	};
+
+	/*
+	 * Android uses PIN_OFF_INPUT_PULLDOWN | PIN_INPUT_PULLUP | MUX_MODE3
+	 * for gpio_100, but the internal pull makes wlan flakey on some
+	 * devices. Off mode value should be tested if we have off mode working
+	 * later on.
+	 */
+	mmc3_pins: pinmux_mmc3_pins {
+		pinctrl-single,pins = <
+			/* 0x4a10008e gpmc_wait2.gpio_100 d23 */
+			OMAP4_IOPAD(0x08e, PIN_INPUT | MUX_MODE3)
+
+			/* 0x4a100102 abe_mcbsp1_dx.sdmmc3_dat2 ab25 */
+			OMAP4_IOPAD(0x102, PIN_INPUT_PULLUP | MUX_MODE1)
+
+			/* 0x4a100104 abe_mcbsp1_fsx.sdmmc3_dat3 ac27 */
+			OMAP4_IOPAD(0x104, PIN_INPUT_PULLUP | MUX_MODE1)
+
+			/* 0x4a100118 uart2_cts.sdmmc3_clk ab26 */
+			OMAP4_IOPAD(0x118, PIN_INPUT | MUX_MODE1)
+
+			/* 0x4a10011a uart2_rts.sdmmc3_cmd ab27 */
+			OMAP4_IOPAD(0x11a, PIN_INPUT_PULLUP | MUX_MODE1)
+
+			/* 0x4a10011c uart2_rx.sdmmc3_dat0 aa25 */
+			OMAP4_IOPAD(0x11c, PIN_INPUT_PULLUP | MUX_MODE1)
+
+			/* 0x4a10011e uart2_tx.sdmmc3_dat1 aa26 */
+			OMAP4_IOPAD(0x11e, PIN_INPUT_PULLUP | MUX_MODE1)
 		>;
 	};
 
···
 };
 
 /*
- * As uart1 is wired to mdm6600 with rts and cts, we can use the cts pin for
- * uart1 wakeirq.
+ * The uart1 port is wired to mdm6600 with rts and cts. The modem uses gpio_149
+ * for wake-up events for both the USB PHY and the UART. We can use gpio_149
+ * pad as the shared wakeirq for the UART rather than the RX or CTS pad as we
+ * have gpio_149 trigger before the UART transfer starts.
 */
 &uart1 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&uart1_pins>;
 	interrupts-extended = <&wakeupgen GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH
-				&omap4_pmx_core 0xfc>;
+				&omap4_pmx_core 0x110>;
+	uart-has-rtscts;
+	current-speed = <115200>;
 };
 
 &uart3 {
···
 #define GIC_CPU_CTRL			0x00
 #define GIC_CPU_CTRL_ENABLE		1
 
-int __init ox820_boot_secondary(unsigned int cpu, struct task_struct *idle)
+static int __init ox820_boot_secondary(unsigned int cpu,
+		struct task_struct *idle)
 {
 	/*
 	 * Write the address of secondary startup into the
···
 CONFIG_PCIE_ARMADA_8K=y
 CONFIG_PCIE_KIRIN=y
 CONFIG_PCIE_HISI_STB=y
-CONFIG_PCIE_TEGRA194=m
+CONFIG_PCIE_TEGRA194_HOST=m
 CONFIG_DEVTMPFS=y
 CONFIG_DEVTMPFS_MOUNT=y
 CONFIG_FW_LOADER_USER_HELPER=y
···
 CONFIG_MEDIA_SDR_SUPPORT=y
 CONFIG_MEDIA_CONTROLLER=y
 CONFIG_VIDEO_V4L2_SUBDEV_API=y
+CONFIG_MEDIA_PLATFORM_SUPPORT=y
 # CONFIG_DVB_NET is not set
 CONFIG_MEDIA_USB_SUPPORT=y
 CONFIG_USB_VIDEO_CLASS=m
···
 CONFIG_DRM_TEGRA=m
 CONFIG_DRM_PANEL_LVDS=m
 CONFIG_DRM_PANEL_SIMPLE=m
-CONFIG_DRM_DUMB_VGA_DAC=m
+CONFIG_DRM_SIMPLE_BRIDGE=m
 CONFIG_DRM_PANEL_TRULY_NT35597_WQXGA=m
+CONFIG_DRM_DISPLAY_CONNECTOR=m
 CONFIG_DRM_SII902X=m
 CONFIG_DRM_THINE_THC63LVD1024=m
 CONFIG_DRM_TI_SN65DSI86=m
···
 CONFIG_ARCH_R8A774A1=y
 CONFIG_ARCH_R8A774B1=y
 CONFIG_ARCH_R8A774C0=y
-CONFIG_ARCH_R8A7795=y
+CONFIG_ARCH_R8A77950=y
+CONFIG_ARCH_R8A77951=y
 CONFIG_ARCH_R8A77960=y
 CONFIG_ARCH_R8A77961=y
 CONFIG_ARCH_R8A77965=y
···
 	}							\
 } while(0)
 
+static inline bool __lazy_irq_pending(u8 irq_happened)
+{
+	return !!(irq_happened & ~PACA_IRQ_HARD_DIS);
+}
+
+/*
+ * Check if a lazy IRQ is pending. Should be called with IRQs hard disabled.
+ */
 static inline bool lazy_irq_pending(void)
 {
-	return !!(get_paca()->irq_happened & ~PACA_IRQ_HARD_DIS);
+	return __lazy_irq_pending(get_paca()->irq_happened);
+}
+
+/*
+ * Check if a lazy IRQ is pending, with no debugging checks.
+ * Should be called with IRQs hard disabled.
+ * For use in RI disabled code or other constrained situations.
+ */
+static inline bool lazy_irq_pending_nocheck(void)
+{
+	return __lazy_irq_pending(local_paca->irq_happened);
 }
 
 /*
···
 #ifdef CONFIG_PPC_BOOK3S
 	/*
 	 * If MSR EE/RI was never enabled, IRQs not reconciled, NVGPRs not
-	 * touched, AMR not set, no exit work created, then this can be used.
+	 * touched, no exit work created, then this can be used.
 	 */
 	.balign IFETCH_ALIGN_BYTES
 	.globl fast_interrupt_return
 fast_interrupt_return:
 _ASM_NOKPROBE_SYMBOL(fast_interrupt_return)
+	kuap_check_amr r3, r4
 	ld	r4,_MSR(r1)
 	andi.	r0,r4,MSR_PR
 	bne	.Lfast_user_interrupt_return
+	kuap_restore_amr r3
 	andi.	r0,r4,MSR_RI
 	li	r3,0 /* 0 return value, no EMULATE_STACK_STORE */
 	bne+	.Lfast_kernel_interrupt_return
···
 	andis.	r0, r5, (DSISR_BAD_FAULT_32S | DSISR_DABRMATCH)@h
 #endif
 	bne	handle_page_fault_tramp_2	/* if not, try to put a PTE */
-	rlwinm	r3, r5, 32 - 24, 30, 30		/* DSISR_STORE -> _PAGE_RW */
+	rlwinm	r3, r5, 32 - 15, 21, 21		/* DSISR_STORE -> _PAGE_RW */
 	bl	hash_page
 	b	handle_page_fault_tramp_1
 FTR_SECTION_ELSE
···
 	andc.	r1,r1,r0		/* check access & ~permission */
 	bne-	InstructionAddressInvalid /* return if access not permitted */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
+	rlwimi	r0,r0,32-2,31,31	/* _PAGE_USER -> PP lsb */
 	ori	r1, r1, 0xe06		/* clear out reserved bits */
 	andc	r1, r0, r1		/* PP = user? 1 : 0 */
 BEGIN_FTR_SECTION
···
 	 * we would need to update the pte atomically with lwarx/stwcx.
 	 */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
-	rlwinm	r1,r0,0,30,30		/* _PAGE_RW -> PP msb */
-	rlwimi	r0,r0,1,30,30		/* _PAGE_USER -> PP msb */
+	rlwinm	r1,r0,32-9,30,30	/* _PAGE_RW -> PP msb */
+	rlwimi	r0,r0,32-1,30,30	/* _PAGE_USER -> PP msb */
+	rlwimi	r0,r0,32-1,31,31	/* _PAGE_USER -> PP lsb */
 	ori	r1,r1,0xe04		/* clear out reserved bits */
 	andc	r1,r0,r1		/* PP = user? rw? 1: 3: 0 */
BEGIN_FTR_SECTION
···
 	 * we would need to update the pte atomically with lwarx/stwcx.
 	 */
 	/* Convert linux-style PTE to low word of PPC-style PTE */
+	rlwimi	r0,r0,32-2,31,31	/* _PAGE_USER -> PP lsb */
 	li	r1,0xe06		/* clear out reserved bits & PP msb */
 	andc	r1,r0,r1		/* PP = user? 1: 0 */
 BEGIN_FTR_SECTION
arch/powerpc/kernel/head_40x.S (+2 -1)
···
 /* 0x0C00 - System Call Exception */
 	START_EXCEPTION(0x0C00,	SystemCall)
 	SYSCALL_ENTRY	0xc00
+/*	Trap_0D is commented out to get more space for system call exception */
 
-	EXCEPTION(0x0D00, Trap_0D, unknown_exception, EXC_XFER_STD)
+/*	EXCEPTION(0x0D00, Trap_0D, unknown_exception, EXC_XFER_STD) */
 	EXCEPTION(0x0E00, Trap_0E, unknown_exception, EXC_XFER_STD)
 	EXCEPTION(0x0F00, Trap_0F, unknown_exception, EXC_XFER_STD)
 
arch/powerpc/kernel/ima_arch.c (+3 -3)
···
  * to be stored as an xattr or as an appended signature.
  *
  * To avoid duplicate signature verification as much as possible, the IMA
- * policy rule for module appraisal is added only if CONFIG_MODULE_SIG_FORCE
+ * policy rule for module appraisal is added only if CONFIG_MODULE_SIG
  * is not enabled.
  */
 static const char *const secure_rules[] = {
 	"appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
-#ifndef CONFIG_MODULE_SIG_FORCE
+#ifndef CONFIG_MODULE_SIG
 	"appraise func=MODULE_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
 #endif
 	NULL
···
 	"measure func=KEXEC_KERNEL_CHECK template=ima-modsig",
 	"measure func=MODULE_CHECK template=ima-modsig",
 	"appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
-#ifndef CONFIG_MODULE_SIG_FORCE
+#ifndef CONFIG_MODULE_SIG
 	"appraise func=MODULE_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
 #endif
 	NULL
arch/powerpc/kernel/syscall_64.c (+11 -9)
···
 	BUG_ON(!FULL_REGS(regs));
 	BUG_ON(regs->softe != IRQS_ENABLED);
 
+	kuap_check_amr();
+
 	account_cpu_user_entry();
 
 #ifdef CONFIG_PPC_SPLPAR
···
 		accumulate_stolen_time();
 	}
 #endif
-
-	kuap_check_amr();
 
 	/*
 	 * This is not required for the syscall exit path, but makes the
···
 	unsigned long *ti_flagsp = &current_thread_info()->flags;
 	unsigned long ti_flags;
 	unsigned long ret = 0;
+
+	kuap_check_amr();
 
 	regs->result = r3;
···
 
 	/* This pattern matches prep_irq_for_idle */
 	__hard_EE_RI_disable();
-	if (unlikely(lazy_irq_pending())) {
+	if (unlikely(lazy_irq_pending_nocheck())) {
 		__hard_RI_enable();
 		trace_hardirqs_off();
 		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
···
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	local_paca->tm_scratch = regs->msr;
 #endif
-
-	kuap_check_amr();
 
 	account_cpu_user_exit();
 
···
 	BUG_ON(!(regs->msr & MSR_PR));
 	BUG_ON(!FULL_REGS(regs));
 	BUG_ON(regs->softe != IRQS_ENABLED);
+
+	kuap_check_amr();
 
 	local_irq_save(flags);
 
···
 
 	trace_hardirqs_on();
 	__hard_EE_RI_disable();
-	if (unlikely(lazy_irq_pending())) {
+	if (unlikely(lazy_irq_pending_nocheck())) {
 		__hard_RI_enable();
 		trace_hardirqs_off();
 		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
···
 	local_paca->tm_scratch = regs->msr;
 #endif
 
-	kuap_check_amr();
-
 	account_cpu_user_exit();
 
 	return ret;
···
 		unrecoverable_exception(regs);
 	BUG_ON(regs->msr & MSR_PR);
 	BUG_ON(!FULL_REGS(regs));
+
+	kuap_check_amr();
 
 	if (unlikely(*ti_flagsp & _TIF_EMULATE_STACK_STORE)) {
 		clear_bits(_TIF_EMULATE_STACK_STORE, ti_flagsp);
···
 
 	trace_hardirqs_on();
 	__hard_EE_RI_disable();
-	if (unlikely(lazy_irq_pending())) {
+	if (unlikely(lazy_irq_pending_nocheck())) {
 		__hard_RI_enable();
 		irq_soft_mask_set(IRQS_ALL_DISABLED);
 		trace_hardirqs_off();
···
 		break;
 	case R_390_64:		/* Direct 64 bit.  */
 	case R_390_GLOB_DAT:
+	case R_390_JMP_SLOT:
 		*(u64 *)loc = val;
 		break;
 	case R_390_PC16:	/* PC relative 16 bit.  */
arch/s390/mm/hugetlbpage.c (+6 -3)
···
 		rste &= ~_SEGMENT_ENTRY_NOEXEC;
 
 	/* Set correct table type for 2G hugepages */
-	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
-		rste |= _REGION_ENTRY_TYPE_R3 | _REGION3_ENTRY_LARGE;
-	else
+	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3) {
+		if (likely(pte_present(pte)))
+			rste |= _REGION3_ENTRY_LARGE;
+		rste |= _REGION_ENTRY_TYPE_R3;
+	} else if (likely(pte_present(pte)))
 		rste |= _SEGMENT_ENTRY_LARGE;
+
 	clear_huge_pte_skeys(mm, rste);
 	pte_val(*ptep) = rste;
 }
arch/s390/pci/pci_mmio.c (+211 -2)
···
 #include <linux/mm.h>
 #include <linux/errno.h>
 #include <linux/pci.h>
+#include <asm/pci_io.h>
+#include <asm/pci_debug.h>
+
+static inline void zpci_err_mmio(u8 cc, u8 status, u64 offset)
+{
+	struct {
+		u64 offset;
+		u8 cc;
+		u8 status;
+	} data = {offset, cc, status};
+
+	zpci_err_hex(&data, sizeof(data));
+}
+
+static inline int __pcistb_mio_inuser(
+		void __iomem *ioaddr, const void __user *src,
+		u64 len, u8 *status)
+{
+	int cc = -ENXIO;
+
+	asm volatile (
+		"	sacf 256\n"
+		"0:	.insn	rsy,0xeb00000000d4,%[len],%[ioaddr],%[src]\n"
+		"1:	ipm	%[cc]\n"
+		"	srl	%[cc],28\n"
+		"2:	sacf 768\n"
+		EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
+		: [cc] "+d" (cc), [len] "+d" (len)
+		: [ioaddr] "a" (ioaddr), [src] "Q" (*((u8 __force *)src))
+		: "cc", "memory");
+	*status = len >> 24 & 0xff;
+	return cc;
+}
+
+static inline int __pcistg_mio_inuser(
+		void __iomem *ioaddr, const void __user *src,
+		u64 ulen, u8 *status)
+{
+	register u64 addr asm("2") = (u64 __force) ioaddr;
+	register u64 len asm("3") = ulen;
+	int cc = -ENXIO;
+	u64 val = 0;
+	u64 cnt = ulen;
+	u8 tmp;
+
+	/*
+	 * copy 0 < @len <= 8 bytes from @src into the right most bytes of
+	 * a register, then store it to PCI at @ioaddr while in secondary
+	 * address space. pcistg then uses the user mappings.
+	 */
+	asm volatile (
+		"	sacf	256\n"
+		"0:	llgc	%[tmp],0(%[src])\n"
+		"	sllg	%[val],%[val],8\n"
+		"	aghi	%[src],1\n"
+		"	ogr	%[val],%[tmp]\n"
+		"	brctg	%[cnt],0b\n"
+		"1:	.insn	rre,0xb9d40000,%[val],%[ioaddr]\n"
+		"2:	ipm	%[cc]\n"
+		"	srl	%[cc],28\n"
+		"3:	sacf	768\n"
+		EX_TABLE(0b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
+		:
+		[src] "+a" (src), [cnt] "+d" (cnt),
+		[val] "+d" (val), [tmp] "=d" (tmp),
+		[len] "+d" (len), [cc] "+d" (cc),
+		[ioaddr] "+a" (addr)
+		:: "cc", "memory");
+	*status = len >> 24 & 0xff;
+
+	/* did we read everything from user memory? */
+	if (!cc && cnt != 0)
+		cc = -EFAULT;
+
+	return cc;
+}
+
+static inline int __memcpy_toio_inuser(void __iomem *dst,
+		const void __user *src, size_t n)
+{
+	int size, rc = 0;
+	u8 status = 0;
+	mm_segment_t old_fs;
+
+	if (!src)
+		return -EINVAL;
+
+	old_fs = enable_sacf_uaccess();
+	while (n > 0) {
+		size = zpci_get_max_write_size((u64 __force) dst,
+					       (u64 __force) src, n,
+					       ZPCI_MAX_WRITE_SIZE);
+		if (size > 8) /* main path */
+			rc = __pcistb_mio_inuser(dst, src, size, &status);
+		else
+			rc = __pcistg_mio_inuser(dst, src, size, &status);
+		if (rc)
+			break;
+		src += size;
+		dst += size;
+		n -= size;
+	}
+	disable_sacf_uaccess(old_fs);
+	if (rc)
+		zpci_err_mmio(rc, status, (__force u64) dst);
+	return rc;
+}
 
 static long get_pfn(unsigned long user_addr, unsigned long access,
 		    unsigned long *pfn)
···
 	if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length)
 		return -EINVAL;
+
+	/*
+	 * Only support read access to MIO capable devices on a MIO enabled
+	 * system. Otherwise we would have to check for every address if it is
+	 * a special ZPCI_ADDR and we would have to do a get_pfn() which we
+	 * don't need for MIO capable devices.
+	 */
+	if (static_branch_likely(&have_mio)) {
+		ret = __memcpy_toio_inuser((void __iomem *) mmio_addr,
+					   user_buffer,
+					   length);
+		return ret;
+	}
+
 	if (length > 64) {
 		buf = kmalloc(length, GFP_KERNEL);
 		if (!buf)
···
 	ret = get_pfn(mmio_addr, VM_WRITE, &pfn);
 	if (ret)
 		goto out;
-	io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK));
+	io_addr = (void __iomem *)((pfn << PAGE_SHIFT) |
+			(mmio_addr & ~PAGE_MASK));
 
 	ret = -EFAULT;
 	if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE)
···
 	if (buf != local_buf)
 		kfree(buf);
 	return ret;
+}
+
+static inline int __pcilg_mio_inuser(
+		void __user *dst, const void __iomem *ioaddr,
+		u64 ulen, u8 *status)
+{
+	register u64 addr asm("2") = (u64 __force) ioaddr;
+	register u64 len asm("3") = ulen;
+	u64 cnt = ulen;
+	int shift = ulen * 8;
+	int cc = -ENXIO;
+	u64 val, tmp;
+
+	/*
+	 * read 0 < @len <= 8 bytes from the PCI memory mapped at @ioaddr (in
+	 * user space) into a register using pcilg then store these bytes at
+	 * user address @dst
+	 */
+	asm volatile (
+		"	sacf	256\n"
+		"0:	.insn	rre,0xb9d60000,%[val],%[ioaddr]\n"
+		"1:	ipm	%[cc]\n"
+		"	srl	%[cc],28\n"
+		"	ltr	%[cc],%[cc]\n"
+		"	jne	4f\n"
+		"2:	ahi	%[shift],-8\n"
+		"	srlg	%[tmp],%[val],0(%[shift])\n"
+		"3:	stc	%[tmp],0(%[dst])\n"
+		"	aghi	%[dst],1\n"
+		"	brctg	%[cnt],2b\n"
+		"4:	sacf	768\n"
+		EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b)
+		:
+		[cc] "+d" (cc), [val] "=d" (val), [len] "+d" (len),
+		[dst] "+a" (dst), [cnt] "+d" (cnt), [tmp] "=d" (tmp),
+		[shift] "+d" (shift)
+		:
+		[ioaddr] "a" (addr)
+		: "cc", "memory");
+
+	/* did we write everything to the user space buffer? */
+	if (!cc && cnt != 0)
+		cc = -EFAULT;
+
+	*status = len >> 24 & 0xff;
+	return cc;
+}
+
+static inline int __memcpy_fromio_inuser(void __user *dst,
+		const void __iomem *src,
+		unsigned long n)
+{
+	int size, rc = 0;
+	u8 status;
+	mm_segment_t old_fs;
+
+	old_fs = enable_sacf_uaccess();
+	while (n > 0) {
+		size = zpci_get_max_write_size((u64 __force) src,
+					       (u64 __force) dst, n,
+					       ZPCI_MAX_READ_SIZE);
+		rc = __pcilg_mio_inuser(dst, src, size, &status);
+		if (rc)
+			break;
+		src += size;
+		dst += size;
+		n -= size;
+	}
+	disable_sacf_uaccess(old_fs);
+	if (rc)
+		zpci_err_mmio(rc, status, (__force u64) dst);
+	return rc;
 }
 
 SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
···
 	if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length)
 		return -EINVAL;
+
+	/*
+	 * Only support write access to MIO capable devices on a MIO enabled
+	 * system. Otherwise we would have to check for every address if it is
+	 * a special ZPCI_ADDR and we would have to do a get_pfn() which we
+	 * don't need for MIO capable devices.
+	 */
+	if (static_branch_likely(&have_mio)) {
+		ret = __memcpy_fromio_inuser(
+				user_buffer, (const void __iomem *)mmio_addr,
+				length);
+		return ret;
+	}
+
 	if (length > 64) {
 		buf = kmalloc(length, GFP_KERNEL);
 		if (!buf)
 			return -ENOMEM;
-	} else
+	} else {
 		buf = local_buf;
+	}
 
 	ret = get_pfn(mmio_addr, VM_READ, &pfn);
 	if (ret)
···
 /* SPDX-License-Identifier: GPL-2.0 */
 #include <asm-generic/xor.h>
-#include <shared/timer-internal.h>
+#include <linux/time-internal.h>
 
 /* pick an arbitrary one - measuring isn't possible with inf-cpu */
 #define XOR_SELECT_TEMPLATE(x)	\
···
 #define PECOFF_COMPAT_RESERVE 0x0
 #endif
 
-unsigned long efi32_stub_entry;
-unsigned long efi64_stub_entry;
-unsigned long efi_pe_entry;
-unsigned long efi32_pe_entry;
-unsigned long kernel_info;
-unsigned long startup_64;
-unsigned long _ehead;
-unsigned long _end;
+static unsigned long efi32_stub_entry;
+static unsigned long efi64_stub_entry;
+static unsigned long efi_pe_entry;
+static unsigned long efi32_pe_entry;
+static unsigned long kernel_info;
+static unsigned long startup_64;
+static unsigned long _ehead;
+static unsigned long _end;
 
 /*----------------------------------------------------------------------*/
 
arch/x86/hyperv/hv_init.c (+17 -2)
···
 
 	rdmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
 	if (re_ctrl.target_vp == hv_vp_index[cpu]) {
-		/* Reassign to some other online CPU */
+		/*
+		 * Reassign reenlightenment notifications to some other online
+		 * CPU or just disable the feature if there are no online CPUs
+		 * left (happens on hibernation).
+		 */
 		new_cpu = cpumask_any_but(cpu_online_mask, cpu);
 
-		re_ctrl.target_vp = hv_vp_index[new_cpu];
+		if (new_cpu < nr_cpu_ids)
+			re_ctrl.target_vp = hv_vp_index[new_cpu];
+		else
+			re_ctrl.enabled = 0;
+
 		wrmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
 	}
 
···
 
 	hv_hypercall_pg = hv_hypercall_pg_saved;
 	hv_hypercall_pg_saved = NULL;
+
+	/*
+	 * Reenlightenment notifications are disabled by hv_cpu_die(0),
+	 * reenable them here if hv_reenlightenment_cb was previously set.
+	 */
+	if (hv_reenlightenment_cb)
+		set_hv_tscchange_cb(hv_reenlightenment_cb);
 }
 
 /* Note: when the ops are called, only CPU0 is online and IRQs are disabled. */
···
 /*
  * Initialize the stackprotector canary value.
  *
- * NOTE: this must only be called from functions that never return,
+ * NOTE: this must only be called from functions that never return
  * and it must always be inlined.
+ *
+ * In addition, it should be called from a compilation unit for which
+ * stack protector is disabled. Alternatively, the caller should not end
+ * with a function call which gets tail-call optimized as that would
+ * lead to checking a modified canary value.
  */
 static __always_inline void boot_init_stack_canary(void)
 {
arch/x86/kernel/smpboot.c (+8 -0)
···
 
 	wmb();
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+
+	/*
+	 * Prevent tail call to cpu_startup_entry() because the stack protector
+	 * guard has been changed a couple of function calls up, in
+	 * boot_init_stack_canary() and must not be checked before tail calling
+	 * another function.
+	 */
+	prevent_tail_call_optimization();
 }
 
 /**
arch/x86/kernel/unwind_orc.c (+16 -7)
···
 
 unsigned long *unwind_get_return_address_ptr(struct unwind_state *state)
 {
+	struct task_struct *task = state->task;
+
 	if (unwind_done(state))
 		return NULL;
 
 	if (state->regs)
 		return &state->regs->ip;
+
+	if (task != current && state->sp == task->thread.sp) {
+		struct inactive_task_frame *frame = (void *)task->thread.sp;
+		return &frame->ret_addr;
+	}
 
 	if (state->sp)
 		return (unsigned long *)state->sp - 1;
···
 void __unwind_start(struct unwind_state *state, struct task_struct *task,
 		    struct pt_regs *regs, unsigned long *first_frame)
 {
-	if (!orc_init)
-		goto done;
-
 	memset(state, 0, sizeof(*state));
 	state->task = task;
+
+	if (!orc_init)
+		goto err;
 
 	/*
 	 * Refuse to unwind the stack of a task while it's executing on another
···
 	 * checks to prevent it from going off the rails.
 	 */
 	if (task_on_another_cpu(task))
-		goto done;
+		goto err;
 
 	if (regs) {
 		if (user_mode(regs))
-			goto done;
+			goto the_end;
 
 		state->ip = regs->ip;
 		state->sp = regs->sp;
···
 	 * generate some kind of backtrace if this happens.
 	 */
 	void *next_page = (void *)PAGE_ALIGN((unsigned long)state->sp);
+	state->error = true;
 	if (get_stack_info(next_page, state->task, &state->stack_info,
 			   &state->stack_mask))
 		return;
···
 
 	return;
 
-done:
+err:
+	state->error = true;
+the_end:
 	state->stack_info.type = STACK_TYPE_UNKNOWN;
-	return;
 }
 EXPORT_SYMBOL_GPL(__unwind_start);
arch/x86/kvm/hyperv.c (+1 -1)
···
 	 */
 	kvm_make_vcpus_request_mask(kvm,
 				    KVM_REQ_TLB_FLUSH | KVM_REQUEST_NO_WAKEUP,
-				    vcpu_mask, &hv_vcpu->tlb_flush);
+				    NULL, vcpu_mask, &hv_vcpu->tlb_flush);
 
 ret_success:
 	/* We always do full TLB flush, set rep_done = rep_cnt. */
arch/x86/kvm/svm/nested.c (+31 -8)
···
 #include <linux/kernel.h>
 
 #include <asm/msr-index.h>
+#include <asm/debugreg.h>
 
 #include "kvm_emulate.h"
 #include "trace.h"
···
 	svm->vmcb->save.rsp = nested_vmcb->save.rsp;
 	svm->vmcb->save.rip = nested_vmcb->save.rip;
 	svm->vmcb->save.dr7 = nested_vmcb->save.dr7;
-	svm->vmcb->save.dr6 = nested_vmcb->save.dr6;
+	svm->vcpu.arch.dr6 = nested_vmcb->save.dr6;
 	svm->vmcb->save.cpl = nested_vmcb->save.cpl;
 
 	svm->nested.vmcb_msrpm = nested_vmcb->control.msrpm_base_pa & ~0x0fffULL;
···
 	nested_vmcb->save.rsp = vmcb->save.rsp;
 	nested_vmcb->save.rax = vmcb->save.rax;
 	nested_vmcb->save.dr7 = vmcb->save.dr7;
-	nested_vmcb->save.dr6 = vmcb->save.dr6;
+	nested_vmcb->save.dr6 = svm->vcpu.arch.dr6;
 	nested_vmcb->save.cpl = vmcb->save.cpl;
 
 	nested_vmcb->control.int_ctl = vmcb->control.int_ctl;
···
 /* DB exceptions for our internal use must not cause vmexit */
 static int nested_svm_intercept_db(struct vcpu_svm *svm)
 {
-	unsigned long dr6;
+	unsigned long dr6 = svm->vmcb->save.dr6;
+
+	/* Always catch it and pass it to userspace if debugging.  */
+	if (svm->vcpu.guest_debug &
+	    (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))
+		return NESTED_EXIT_HOST;
 
 	/* if we're not singlestepping, it's not ours */
 	if (!svm->nmi_singlestep)
-		return NESTED_EXIT_DONE;
+		goto reflected_db;
 
 	/* if it's not a singlestep exception, it's not ours */
-	if (kvm_get_dr(&svm->vcpu, 6, &dr6))
-		return NESTED_EXIT_DONE;
 	if (!(dr6 & DR6_BS))
-		return NESTED_EXIT_DONE;
+		goto reflected_db;
 
 	/* if the guest is singlestepping, it should get the vmexit */
 	if (svm->nmi_singlestep_guest_rflags & X86_EFLAGS_TF) {
 		disable_nmi_singlestep(svm);
-		return NESTED_EXIT_DONE;
+		goto reflected_db;
 	}
 
 	/* it's ours, the nested hypervisor must not see this one */
 	return NESTED_EXIT_HOST;
+
+reflected_db:
+	/*
+	 * Synchronize guest DR6 here just like in kvm_deliver_exception_payload;
+	 * it will be moved into the nested VMCB by nested_svm_vmexit.  Once
+	 * exceptions will be moved to svm_check_nested_events, all this stuff
+	 * will just go away and we could just return NESTED_EXIT_HOST
+	 * unconditionally.  db_interception will queue the exception, which
+	 * will be processed by svm_check_nested_events if a nested vmexit is
+	 * required, and we will just use kvm_deliver_exception_payload to copy
+	 * the payload to DR6 before vmexit.
+	 */
+	WARN_ON(svm->vcpu.arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT);
+	svm->vcpu.arch.dr6 &= ~(DR_TRAP_BITS | DR6_RTM);
+	svm->vcpu.arch.dr6 |= dr6 & ~DR6_FIXED_1;
+	return NESTED_EXIT_DONE;
 }
 
 static int nested_svm_intercept_ioio(struct vcpu_svm *svm)
···
 		if (svm->nested.intercept_exceptions & excp_bits) {
 			if (exit_code == SVM_EXIT_EXCP_BASE + DB_VECTOR)
 				vmexit = nested_svm_intercept_db(svm);
+			else if (exit_code == SVM_EXIT_EXCP_BASE + BP_VECTOR &&
+				 svm->vcpu.guest_debug & KVM_GUESTDBG_USE_SW_BP)
+				vmexit = NESTED_EXIT_HOST;
 			else
 				vmexit = NESTED_EXIT_DONE;
 		}
arch/x86/kvm/svm/svm.c | +22 -14
···
 	mark_dirty(svm->vmcb, VMCB_ASID);
 }

-static u64 svm_get_dr6(struct kvm_vcpu *vcpu)
+static void svm_set_dr6(struct vcpu_svm *svm, unsigned long value)
 {
-	return to_svm(vcpu)->vmcb->save.dr6;
-}
+	struct vmcb *vmcb = svm->vmcb;

-static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)
-{
-	struct vcpu_svm *svm = to_svm(vcpu);
-
-	svm->vmcb->save.dr6 = value;
-	mark_dirty(svm->vmcb, VMCB_DR);
+	if (unlikely(value != vmcb->save.dr6)) {
+		vmcb->save.dr6 = value;
+		mark_dirty(vmcb, VMCB_DR);
+	}
 }

 static void svm_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
···
 	get_debugreg(vcpu->arch.db[1], 1);
 	get_debugreg(vcpu->arch.db[2], 2);
 	get_debugreg(vcpu->arch.db[3], 3);
-	vcpu->arch.dr6 = svm_get_dr6(vcpu);
+	/*
+	 * We cannot reset svm->vmcb->save.dr6 to DR6_FIXED_1|DR6_RTM here,
+	 * because db_interception might need it.  We can do it before vmentry.
+	 */
+	vcpu->arch.dr6 = svm->vmcb->save.dr6;
 	vcpu->arch.dr7 = svm->vmcb->save.dr7;
-
 	vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_WONT_EXIT;
 	set_dr_intercepts(svm);
 }
···
 	if (!(svm->vcpu.guest_debug &
 	    (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP)) &&
 		!svm->nmi_singlestep) {
-		kvm_queue_exception(&svm->vcpu, DB_VECTOR);
+		u32 payload = (svm->vmcb->save.dr6 ^ DR6_RTM) & ~DR6_FIXED_1;
+		kvm_queue_exception_p(&svm->vcpu, DB_VECTOR, payload);
 		return 1;
 	}
···
 	svm->vmcb->save.cr2 = vcpu->arch.cr2;

+	/*
+	 * Run with all-zero DR6 unless needed, so that we can get the exact cause
+	 * of a #DB.
+	 */
+	if (unlikely(svm->vcpu.arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
+		svm_set_dr6(svm, vcpu->arch.dr6);
+	else
+		svm_set_dr6(svm, DR6_FIXED_1 | DR6_RTM);
+
 	clgi();
 	kvm_load_guest_xsave_state(vcpu);
···
 	.set_idt = svm_set_idt,
 	.get_gdt = svm_get_gdt,
 	.set_gdt = svm_set_gdt,
-	.get_dr6 = svm_get_dr6,
-	.set_dr6 = svm_set_dr6,
 	.set_dr7 = svm_set_dr7,
 	.sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
 	.cache_reg = svm_cache_reg,
arch/x86/kvm/vmx/vmx.c | +4 -37
···
 	vmx_vcpu_pi_load(vcpu, cpu);

-	vmx->host_pkru = read_pkru();
 	vmx->host_debugctlmsr = get_debugctlmsr();
 }
···
 		dr6 = vmcs_readl(EXIT_QUALIFICATION);
 		if (!(vcpu->guest_debug &
 		      (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))) {
-			vcpu->arch.dr6 &= ~DR_TRAP_BITS;
-			vcpu->arch.dr6 |= dr6 | DR6_RTM;
 			if (is_icebp(intr_info))
 				WARN_ON(!skip_emulated_instruction(vcpu));

-			kvm_queue_exception(vcpu, DB_VECTOR);
+			kvm_queue_exception_p(vcpu, DB_VECTOR, dr6);
 			return 1;
 		}
-		kvm_run->debug.arch.dr6 = dr6 | DR6_FIXED_1;
+		kvm_run->debug.arch.dr6 = dr6 | DR6_FIXED_1 | DR6_RTM;
 		kvm_run->debug.arch.dr7 = vmcs_readl(GUEST_DR7);
 		/* fall through */
 	case BP_VECTOR:
···
 	 * guest debugging itself.
 	 */
 	if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP) {
-		vcpu->run->debug.arch.dr6 = vcpu->arch.dr6;
+		vcpu->run->debug.arch.dr6 = DR6_BD | DR6_RTM | DR6_FIXED_1;
 		vcpu->run->debug.arch.dr7 = dr7;
 		vcpu->run->debug.arch.pc = kvm_get_linear_rip(vcpu);
 		vcpu->run->debug.arch.exception = DB_VECTOR;
 		vcpu->run->exit_reason = KVM_EXIT_DEBUG;
 		return 0;
 	} else {
-		vcpu->arch.dr6 &= ~DR_TRAP_BITS;
-		vcpu->arch.dr6 |= DR6_BD | DR6_RTM;
-		kvm_queue_exception(vcpu, DB_VECTOR);
+		kvm_queue_exception_p(vcpu, DB_VECTOR, DR6_BD);
 		return 1;
 	}
 }
···
 		return 1;

 	return kvm_skip_emulated_instruction(vcpu);
-}
-
-static u64 vmx_get_dr6(struct kvm_vcpu *vcpu)
-{
-	return vcpu->arch.dr6;
-}
-
-static void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
-{
 }

 static void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
···
 	kvm_load_guest_xsave_state(vcpu);

-	if (static_cpu_has(X86_FEATURE_PKU) &&
-	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
-	    vcpu->arch.pkru != vmx->host_pkru)
-		__write_pkru(vcpu->arch.pkru);
-
 	pt_guest_enter(vmx);

 	if (vcpu_to_pmu(vcpu)->version)
···
 	vcpu->arch.regs_dirty = 0;

 	pt_guest_exit(vmx);
-
-	/*
-	 * eager fpu is enabled if PKEY is supported and CR4 is switched
-	 * back on host, so it is safe to read guest PKRU from current
-	 * XSAVE.
-	 */
-	if (static_cpu_has(X86_FEATURE_PKU) &&
-	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
-		vcpu->arch.pkru = rdpkru();
-		if (vcpu->arch.pkru != vmx->host_pkru)
-			__write_pkru(vmx->host_pkru);
-	}

 	kvm_load_host_xsave_state(vcpu);
···
 	.set_idt = vmx_set_idt,
 	.get_gdt = vmx_get_gdt,
 	.set_gdt = vmx_set_gdt,
-	.get_dr6 = vmx_get_dr6,
-	.set_dr6 = vmx_set_dr6,
 	.set_dr7 = vmx_set_dr7,
 	.sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
 	.cache_reg = vmx_cache_reg,
arch/x86/kvm/x86.c | +37 -23
···
 }
 EXPORT_SYMBOL_GPL(kvm_requeue_exception);

-static void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr,
-				  unsigned long payload)
+void kvm_queue_exception_p(struct kvm_vcpu *vcpu, unsigned nr,
+			   unsigned long payload)
 {
 	kvm_multiple_exception(vcpu, nr, false, 0, true, payload, false);
 }
+EXPORT_SYMBOL_GPL(kvm_queue_exception_p);

 static void kvm_queue_exception_e_p(struct kvm_vcpu *vcpu, unsigned nr,
 				    u32 error_code, unsigned long payload)
···
 		    vcpu->arch.ia32_xss != host_xss)
 			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
 	}
+
+	if (static_cpu_has(X86_FEATURE_PKU) &&
+	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
+	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)) &&
+	    vcpu->arch.pkru != vcpu->arch.host_pkru)
+		__write_pkru(vcpu->arch.pkru);
 }
 EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);

 void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
 {
+	if (static_cpu_has(X86_FEATURE_PKU) &&
+	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
+	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU))) {
+		vcpu->arch.pkru = rdpkru();
+		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
+			__write_pkru(vcpu->arch.host_pkru);
+	}
+
 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {

 		if (vcpu->arch.xcr0 != host_xcr0)
···
 	}
 }

-static void kvm_update_dr6(struct kvm_vcpu *vcpu)
-{
-	if (!(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP))
-		kvm_x86_ops.set_dr6(vcpu, vcpu->arch.dr6);
-}
-
 static void kvm_update_dr7(struct kvm_vcpu *vcpu)
 {
 	unsigned long dr7;
···
 		if (val & 0xffffffff00000000ULL)
 			return -1; /* #GP */
 		vcpu->arch.dr6 = (val & DR6_VOLATILE) | kvm_dr6_fixed(vcpu);
-		kvm_update_dr6(vcpu);
 		break;
 	case 5:
 		/* fall through */
···
 	case 4:
 		/* fall through */
 	case 6:
-		if (vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP)
-			*val = vcpu->arch.dr6;
-		else
-			*val = kvm_x86_ops.get_dr6(vcpu);
+		*val = vcpu->arch.dr6;
 		break;
 	case 5:
 		/* fall through */
···
 	kvm_x86_ops.vcpu_load(vcpu, cpu);

+	/* Save host pkru register if supported */
+	vcpu->arch.host_pkru = read_pkru();
+
 	/* Apply any externally detected TSC adjustments (due to suspend) */
 	if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
 		adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
···
 	unsigned bank_num = mcg_cap & 0xff, bank;

 	r = -EINVAL;
-	if (!bank_num || bank_num >= KVM_MAX_MCE_BANKS)
+	if (!bank_num || bank_num > KVM_MAX_MCE_BANKS)
 		goto out;
 	if (mcg_cap & ~(kvm_mce_cap_supported | 0xff | 0xff0000))
 		goto out;
···
 	memcpy(vcpu->arch.db, dbgregs->db, sizeof(vcpu->arch.db));
 	kvm_update_dr0123(vcpu);
 	vcpu->arch.dr6 = dbgregs->dr6;
-	kvm_update_dr6(vcpu);
 	vcpu->arch.dr7 = dbgregs->dr7;
 	kvm_update_dr7(vcpu);
···
 	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
 		kvm_run->debug.arch.dr6 = DR6_BS | DR6_FIXED_1 | DR6_RTM;
-		kvm_run->debug.arch.pc = vcpu->arch.singlestep_rip;
+		kvm_run->debug.arch.pc = kvm_get_linear_rip(vcpu);
 		kvm_run->debug.arch.exception = DB_VECTOR;
 		kvm_run->exit_reason = KVM_EXIT_DEBUG;
 		return 0;
···
 					   vcpu->arch.db);

 		if (dr6 != 0) {
-			vcpu->arch.dr6 &= ~DR_TRAP_BITS;
-			vcpu->arch.dr6 |= dr6 | DR6_RTM;
-			kvm_queue_exception(vcpu, DB_VECTOR);
+			kvm_queue_exception_p(vcpu, DB_VECTOR, dr6);
 			*r = 1;
 			return true;
 		}
···
 	zalloc_cpumask_var(&cpus, GFP_ATOMIC);

 	kvm_make_vcpus_request_mask(kvm, KVM_REQ_SCAN_IOAPIC,
-				    vcpu_bitmap, cpus);
+				    NULL, vcpu_bitmap, cpus);

 	free_cpumask_var(cpus);
 }
···
  */
 void kvm_request_apicv_update(struct kvm *kvm, bool activate, ulong bit)
 {
+	struct kvm_vcpu *except;
 	unsigned long old, new, expected;

 	if (!kvm_x86_ops.check_apicv_inhibit_reasons ||
···
 	trace_kvm_apicv_update_request(activate, bit);
 	if (kvm_x86_ops.pre_update_apicv_exec_ctrl)
 		kvm_x86_ops.pre_update_apicv_exec_ctrl(kvm, activate);
-	kvm_make_all_cpus_request(kvm, KVM_REQ_APICV_UPDATE);
+
+	/*
+	 * Sending request to update APICV for all other vcpus,
+	 * while update the calling vcpu immediately instead of
+	 * waiting for another #VMEXIT to handle the request.
+	 */
+	except = kvm_get_running_vcpu();
+	kvm_make_all_cpus_request_except(kvm, KVM_REQ_APICV_UPDATE,
+					 except);
+	if (except)
+		kvm_vcpu_update_apicv(except);
 }
 EXPORT_SYMBOL_GPL(kvm_request_apicv_update);
···
 		WARN_ON(vcpu->guest_debug & KVM_GUESTDBG_USE_HW_BP);
 		kvm_x86_ops.sync_dirty_debug_regs(vcpu);
 		kvm_update_dr0123(vcpu);
-		kvm_update_dr6(vcpu);
 		kvm_update_dr7(vcpu);
 		vcpu->arch.switch_db_regs &= ~KVM_DEBUGREG_RELOAD;
 	}
···
 	memset(vcpu->arch.db, 0, sizeof(vcpu->arch.db));
 	kvm_update_dr0123(vcpu);
 	vcpu->arch.dr6 = DR6_INIT;
-	kvm_update_dr6(vcpu);
 	vcpu->arch.dr7 = DR7_FIXED_1;
 	kvm_update_dr7(vcpu);
arch/x86/mm/mmio-mod.c | +2 -2
···
 	int cpu;
 	int err;

-	if (downed_cpus == NULL &&
+	if (!cpumask_available(downed_cpus) &&
 	    !alloc_cpumask_var(&downed_cpus, GFP_KERNEL)) {
 		pr_notice("Failed to allocate mask\n");
 		goto out;
···
 	int cpu;
 	int err;

-	if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0)
+	if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0)
 		return;
 	pr_notice("Re-enabling CPUs...\n");
 	for_each_cpu(cpu, downed_cpus) {
arch/x86/xen/smp_pv.c | +1
···
 	cpu_bringup();
 	boot_init_stack_canary();
 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+	prevent_tail_call_optimization();
 }

 void xen_smp_intr_free_pv(unsigned int cpu)
drivers/acpi/ec.c | +5 -1
···
 	 * to allow the caller to process events properly after that.
 	 */
 	ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
-	if (ret == ACPI_INTERRUPT_HANDLED)
+	if (ret == ACPI_INTERRUPT_HANDLED) {
 		pm_pr_dbg("EC GPE dispatched\n");

+		/* Flush the event and query workqueues. */
+		acpi_ec_flush_work();
+	}
+
 	return false;
 }
drivers/acpi/sleep.c | +4 -11
···
 	return 0;
 }

-static void acpi_s2idle_sync(void)
-{
-	/* The EC driver uses special workqueues that need to be flushed. */
-	acpi_ec_flush_work();
-	acpi_os_wait_events_complete(); /* synchronize Notify handling */
-}
-
 static bool acpi_s2idle_wake(void)
 {
 	if (!acpi_sci_irq_valid())
···
 			return true;

 		/*
-		 * Cancel the wakeup and process all pending events in case
+		 * Cancel the SCI wakeup and process all pending events in case
 		 * there are any wakeup ones in there.
 		 *
 		 * Note that if any non-EC GPEs are active at this point, the
···
 		 * should be missed by canceling the wakeup here.
 		 */
 		pm_system_cancel_wakeup();
-
-		acpi_s2idle_sync();
+		acpi_os_wait_events_complete();

 		/*
 		 * The SCI is in the "suspended" state now and it cannot produce
···
 	 * of GPEs.
 	 */
 	acpi_os_wait_events_complete(); /* synchronize GPE processing */
-	acpi_s2idle_sync();
+	acpi_ec_flush_work(); /* flush the EC driver's workqueues */
+	acpi_os_wait_events_complete(); /* synchronize Notify handling */

 	s2idle_wakeup = false;
drivers/base/core.c | +37 -18
···
 			link->flags |= DL_FLAG_STATELESS;
 			goto reorder;
 		} else {
+			link->flags |= DL_FLAG_STATELESS;
 			goto out;
 		}
 	}
···
 	    flags & DL_FLAG_PM_RUNTIME)
 		pm_runtime_resume(supplier);

+	list_add_tail_rcu(&link->s_node, &supplier->links.consumers);
+	list_add_tail_rcu(&link->c_node, &consumer->links.suppliers);
+
 	if (flags & DL_FLAG_SYNC_STATE_ONLY) {
 		dev_dbg(consumer,
 			"Linked as a sync state only consumer to %s\n",
 			dev_name(supplier));
 		goto out;
 	}
+
 reorder:
 	/*
 	 * Move the consumer and all of the devices depending on it to the end
···
 	 */
 	device_reorder_to_tail(consumer, NULL);

-	list_add_tail_rcu(&link->s_node, &supplier->links.consumers);
-	list_add_tail_rcu(&link->c_node, &consumer->links.suppliers);
-
 	dev_dbg(consumer, "Linked as a consumer to %s\n", dev_name(supplier));

- out:
+out:
 	device_pm_unlock();
 	device_links_write_unlock();
···
 	list_add_tail(&sup->links.defer_sync, &deferred_sync);
 }

+static void device_link_drop_managed(struct device_link *link)
+{
+	link->flags &= ~DL_FLAG_MANAGED;
+	WRITE_ONCE(link->status, DL_STATE_NONE);
+	kref_put(&link->kref, __device_link_del);
+}
+
 /**
  * device_links_driver_bound - Update device links after probing its driver.
  * @dev: Device to update the links for.
···
  */
 void device_links_driver_bound(struct device *dev)
 {
-	struct device_link *link;
+	struct device_link *link, *ln;
 	LIST_HEAD(sync_list);

 	/*
···
 	else
 		__device_links_queue_sync_state(dev, &sync_list);

-	list_for_each_entry(link, &dev->links.suppliers, c_node) {
+	list_for_each_entry_safe(link, ln, &dev->links.suppliers, c_node) {
+		struct device *supplier;
+
 		if (!(link->flags & DL_FLAG_MANAGED))
 			continue;

-		WARN_ON(link->status != DL_STATE_CONSUMER_PROBE);
-		WRITE_ONCE(link->status, DL_STATE_ACTIVE);
+		supplier = link->supplier;
+		if (link->flags & DL_FLAG_SYNC_STATE_ONLY) {
+			/*
+			 * When DL_FLAG_SYNC_STATE_ONLY is set, it means no
+			 * other DL_MANAGED_LINK_FLAGS have been set. So, it's
+			 * safe to drop the managed link completely.
+			 */
+			device_link_drop_managed(link);
+		} else {
+			WARN_ON(link->status != DL_STATE_CONSUMER_PROBE);
+			WRITE_ONCE(link->status, DL_STATE_ACTIVE);
+		}

+		/*
+		 * This needs to be done even for the deleted
+		 * DL_FLAG_SYNC_STATE_ONLY device link in case it was the last
+		 * device link that was preventing the supplier from getting a
+		 * sync_state() call.
+		 */
 		if (defer_sync_state_count)
-			__device_links_supplier_defer_sync(link->supplier);
+			__device_links_supplier_defer_sync(supplier);
 		else
-			__device_links_queue_sync_state(link->supplier,
-							&sync_list);
+			__device_links_queue_sync_state(supplier, &sync_list);
 	}

 	dev->links.status = DL_DEV_DRIVER_BOUND;
···
 	device_links_write_unlock();

 	device_links_flush_sync_list(&sync_list, dev);
-}
-
-static void device_link_drop_managed(struct device_link *link)
-{
-	link->flags &= ~DL_FLAG_MANAGED;
-	WRITE_ONCE(link->status, DL_STATE_NONE);
-	kref_put(&link->kref, __device_link_del);
 }

 /**
drivers/block/null_blk_main.c | +7
···
 {
 	if (nullb->dev->discard == false)
 		return;
+
+	if (nullb->dev->zoned) {
+		nullb->dev->discard = false;
+		pr_info("discard option is ignored in zoned mode\n");
+		return;
+	}
+
 	nullb->q->limits.discard_granularity = nullb->dev->blocksize;
 	nullb->q->limits.discard_alignment = nullb->dev->blocksize;
 	blk_queue_max_discard_sectors(nullb->q, UINT_MAX >> 9);
drivers/block/null_blk_zoned.c | +4
···
 		pr_err("zone_size must be power-of-two\n");
 		return -EINVAL;
 	}
+	if (dev->zone_size > dev->size) {
+		pr_err("Zone size larger than device capacity\n");
+		return -EINVAL;
+	}

 	dev->zone_size_sects = dev->zone_size << ZONE_SIZE_SHIFT;
 	dev->nr_zones = dev_size >>
···
 	if (adev->type != &i2c_adapter_type)
 		return 0;

-	addr_info->added_client = i2c_new_device(to_i2c_adapter(adev),
-						 &addr_info->binfo);
+	addr_info->added_client = i2c_new_client_device(to_i2c_adapter(adev),
+							&addr_info->binfo);

 	if (!addr_info->adapter_name)
 		return 1; /* Only try the first I2C adapter by default. */
···
 	resource_size_t kmem_size;
 	resource_size_t kmem_end;
 	struct resource *new_res;
+	const char *new_res_name;
 	int numa_node;
 	int rc;
···
 	kmem_size &= ~(memory_block_size_bytes() - 1);
 	kmem_end = kmem_start + kmem_size;

-	/* Region is permanently reserved.  Hot-remove not yet implemented. */
-	new_res = request_mem_region(kmem_start, kmem_size, dev_name(dev));
+	new_res_name = kstrdup(dev_name(dev), GFP_KERNEL);
+	if (!new_res_name)
+		return -ENOMEM;
+
+	/* Region is permanently reserved if hotremove fails. */
+	new_res = request_mem_region(kmem_start, kmem_size, new_res_name);
 	if (!new_res) {
 		dev_warn(dev, "could not reserve region [%pa-%pa]\n",
 			 &kmem_start, &kmem_end);
+		kfree(new_res_name);
 		return -EBUSY;
 	}
···
 	 * unknown to us that will break add_memory() below.
 	 */
 	new_res->flags = IORESOURCE_SYSTEM_RAM;
-	new_res->name = dev_name(dev);

 	rc = add_memory(numa_node, new_res->start, resource_size(new_res));
 	if (rc) {
 		release_resource(new_res);
 		kfree(new_res);
+		kfree(new_res_name);
 		return rc;
 	}
 	dev_dax->dax_kmem_res = new_res;
···
 	struct resource *res = dev_dax->dax_kmem_res;
 	resource_size_t kmem_start = res->start;
 	resource_size_t kmem_size = resource_size(res);
+	const char *res_name = res->name;
 	int rc;

 	/*
···
 	/* Release and free dax resources */
 	release_resource(res);
 	kfree(res);
+	kfree(res_name);
 	dev_dax->dax_kmem_res = NULL;

 	return 0;
drivers/dma/dmatest.c | +5 -4
···
 		mutex_unlock(&info->lock);
 		return ret;
 	} else if (dmatest_run) {
-		if (is_threaded_test_pending(info))
-			start_threaded_tests(info);
-		else
-			pr_info("Could not start test, no channels configured\n");
+		if (!is_threaded_test_pending(info)) {
+			pr_info("No channels configured, continue with any\n");
+			add_threaded_test(info);
+		}
+		start_threaded_tests(info);
 	} else {
 		stop_threaded_test(info);
 	}
drivers/dma/idxd/device.c | +7
···
 	perm.ignore = 0;
 	iowrite32(perm.bits, idxd->reg_base + offset);

+	/*
+	 * A readback from the device ensures that any previously generated
+	 * completion record writes are visible to software based on PCI
+	 * ordering rules.
+	 */
+	perm.bits = ioread32(idxd->reg_base + offset);
+
 	return 0;
 }
drivers/dma/idxd/irq.c | +19 -7
···
 	struct llist_node *head;
 	int queued = 0;

+	*processed = 0;
 	head = llist_del_all(&irq_entry->pending_llist);
 	if (!head)
 		return 0;
···
 	struct list_head *node, *next;
 	int queued = 0;

+	*processed = 0;
 	if (list_empty(&irq_entry->work_list))
 		return 0;
···
 	return queued;
 }

-irqreturn_t idxd_wq_thread(int irq, void *data)
+static int idxd_desc_process(struct idxd_irq_entry *irq_entry)
 {
-	struct idxd_irq_entry *irq_entry = data;
-	int rc, processed = 0, retry = 0;
+	int rc, processed, total = 0;

 	/*
 	 * There are two lists we are processing. The pending_llist is where
···
 	 */
 	do {
 		rc = irq_process_work_list(irq_entry, &processed);
-		if (rc != 0) {
-			retry++;
+		total += processed;
+		if (rc != 0)
 			continue;
-		}

 		rc = irq_process_pending_llist(irq_entry, &processed);
-	} while (rc != 0 && retry != 10);
+		total += processed;
+	} while (rc != 0);

+	return total;
+}
+
+irqreturn_t idxd_wq_thread(int irq, void *data)
+{
+	struct idxd_irq_entry *irq_entry = data;
+	int processed;
+
+	processed = idxd_desc_process(irq_entry);
 	idxd_unmask_msix_vector(irq_entry->idxd, irq_entry->id);
+	/* catch anything unprocessed after unmasking */
+	processed += idxd_desc_process(irq_entry);

 	if (processed == 0)
 		return IRQ_NONE;
drivers/dma/owl-dma.c | +3 -5
···
  * @id: physical index to this channel
  * @base: virtual memory base for the dma channel
  * @vchan: the virtual channel currently being served by this physical channel
- * @lock: a lock to use when altering an instance of this struct
  */
 struct owl_dma_pchan {
 	u32 id;
 	void __iomem *base;
 	struct owl_dma_vchan *vchan;
-	spinlock_t lock;
 };

 /**
···
 	for (i = 0; i < od->nr_pchans; i++) {
 		pchan = &od->pchans[i];

-		spin_lock_irqsave(&pchan->lock, flags);
+		spin_lock_irqsave(&od->lock, flags);
 		if (!pchan->vchan) {
 			pchan->vchan = vchan;
-			spin_unlock_irqrestore(&pchan->lock, flags);
+			spin_unlock_irqrestore(&od->lock, flags);
 			break;
 		}

-		spin_unlock_irqrestore(&pchan->lock, flags);
+		spin_unlock_irqrestore(&od->lock, flags);
 	}

 	return pchan;
drivers/dma/tegra210-adma.c | +1 -1
···
 	ret = dma_async_device_register(&tdma->dma_dev);
 	if (ret < 0) {
 		dev_err(&pdev->dev, "ADMA registration failed: %d\n", ret);
-		goto irq_dispose;
+		goto rpm_put;
 	}

 	ret = of_dma_controller_register(pdev->dev.of_node,
···
 		si = alloc_screen_info();
 		if (!si)
 			return NULL;
-		efi_setup_gop(si, &gop_proto, size);
+		status = efi_setup_gop(si, &gop_proto, size);
+		if (status != EFI_SUCCESS) {
+			free_screen_info(si);
+			return NULL;
+		}
 	}
 	return si;
 }
drivers/firmware/efi/libstub/efistub.h | +13
···
 #define EFI_LOCATE_BY_REGISTER_NOTIFY		1
 #define EFI_LOCATE_BY_PROTOCOL			2

+/*
+ * An efi_boot_memmap is used by efi_get_memory_map() to return the
+ * EFI memory map in a dynamically allocated buffer.
+ *
+ * The buffer allocated for the EFI memory map includes extra room for
+ * a minimum of EFI_MMAP_NR_SLACK_SLOTS additional EFI memory descriptors.
+ * This facilitates the reuse of the EFI memory map buffer when a second
+ * call to ExitBootServices() is needed because of intervening changes to
+ * the EFI memory map. Other related structures, e.g. x86 e820ext, need
+ * to factor in this headroom requirement as well.
+ */
+#define EFI_MMAP_NR_SLACK_SLOTS	8
+
 struct efi_boot_memmap {
 	efi_memory_desc_t **map;
 	unsigned long *map_size;
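The headroom rule described in the comment above reduces to a small piece of arithmetic that callers (such as the e820 sizing code later in this series) must mirror. Here is a minimal sketch under that stated assumption; `mmap_nr_slots()` is a hypothetical helper name, not a function in the stub:

```c
#include <assert.h>

#define EFI_MMAP_NR_SLACK_SLOTS	8

/*
 * Hypothetical helper: how many descriptor slots a caller should budget
 * for an EFI memory map of map_size bytes with desc_size-byte entries,
 * including slack for descriptors added before an ExitBootServices()
 * retry.
 */
static unsigned int mmap_nr_slots(unsigned long map_size,
				  unsigned long desc_size)
{
	return map_size / desc_size + EFI_MMAP_NR_SLACK_SLOTS;
}
```

A 4800-byte map with 48-byte descriptors, for example, budgets 100 + 8 = 108 slots.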
drivers/firmware/efi/libstub/mem.c | -2
···
 #include "efistub.h"

-#define EFI_MMAP_NR_SLACK_SLOTS	8
-
 static inline bool mmap_has_headroom(unsigned long buff_size,
 				     unsigned long map_size,
 				     unsigned long desc_size)
drivers/firmware/efi/libstub/tpm.c | +3 -2
···
 	efi_status_t status;
 	efi_physical_addr_t log_location = 0, log_last_entry = 0;
 	struct linux_efi_tpm_eventlog *log_tbl = NULL;
-	struct efi_tcg2_final_events_table *final_events_table;
+	struct efi_tcg2_final_events_table *final_events_table = NULL;
 	unsigned long first_entry_addr, last_entry_addr;
 	size_t log_size, last_entry_size;
 	efi_bool_t truncated;
···
 	 * Figure out whether any events have already been logged to the
 	 * final events structure, and if so how much space they take up
 	 */
-	final_events_table = get_efi_config_table(LINUX_EFI_TPM_FINAL_LOG_GUID);
+	if (version == EFI_TCG2_EVENT_LOG_FORMAT_TCG_2)
+		final_events_table = get_efi_config_table(LINUX_EFI_TPM_FINAL_LOG_GUID);
 	if (final_events_table && final_events_table->nr_events) {
 		struct tcg_pcr_event2_head *header;
 		int offset;
drivers/firmware/efi/libstub/x86-stub.c | +9 -15
···
 				  struct setup_data **e820ext,
 				  u32 *e820ext_size)
 {
-	unsigned long map_size, desc_size, buff_size;
-	struct efi_boot_memmap boot_map;
-	efi_memory_desc_t *map;
+	unsigned long map_size, desc_size, map_key;
 	efi_status_t status;
-	__u32 nr_desc;
+	__u32 nr_desc, desc_version;

-	boot_map.map		= &map;
-	boot_map.map_size	= &map_size;
-	boot_map.desc_size	= &desc_size;
-	boot_map.desc_ver	= NULL;
-	boot_map.key_ptr	= NULL;
-	boot_map.buff_size	= &buff_size;
+	/* Only need the size of the mem map and size of each mem descriptor */
+	map_size = 0;
+	status = efi_bs_call(get_memory_map, &map_size, NULL, &map_key,
+			     &desc_size, &desc_version);
+	if (status != EFI_BUFFER_TOO_SMALL)
+		return (status != EFI_SUCCESS) ? status : EFI_UNSUPPORTED;

-	status = efi_get_memory_map(&boot_map);
-	if (status != EFI_SUCCESS)
-		return status;
-
-	nr_desc = buff_size / desc_size;
+	nr_desc = map_size / desc_size + EFI_MMAP_NR_SLACK_SLOTS;

 	if (nr_desc > ARRAY_SIZE(params->e820_table)) {
 		u32 nr_e820ext = nr_desc - ARRAY_SIZE(params->e820_table);
drivers/firmware/efi/tpm.c | +4 -1
···
 	tbl_size = sizeof(*log_tbl) + log_tbl->size;
 	memblock_reserve(efi.tpm_log, tbl_size);

-	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR)
+	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR ||
+	    log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
+		pr_warn(FW_BUG "TPM Final Events table missing or invalid\n");
 		goto out;
+	}

 	final_tbl = early_memremap(efi.tpm_final_log, sizeof(*final_tbl));
···
 	hws->funcs.verify_allow_pstate_change_high(dc);
 }

+/**
+ * delay_cursor_until_vupdate() - Delay cursor update if too close to VUPDATE.
+ *
+ * Software keepout workaround to prevent cursor update locking from stalling
+ * out cursor updates indefinitely or from old values from being retained in
+ * the case where the viewport changes in the same frame as the cursor.
+ *
+ * The idea is to calculate the remaining time from VPOS to VUPDATE. If it's
+ * too close to VUPDATE, then stall out until VUPDATE finishes.
+ *
+ * TODO: Optimize cursor programming to be once per frame before VUPDATE
+ * to avoid the need for this workaround.
+ */
+static void delay_cursor_until_vupdate(struct dc *dc, struct pipe_ctx *pipe_ctx)
+{
+	struct dc_stream_state *stream = pipe_ctx->stream;
+	struct crtc_position position;
+	uint32_t vupdate_start, vupdate_end;
+	unsigned int lines_to_vupdate, us_to_vupdate, vpos;
+	unsigned int us_per_line, us_vupdate;
+
+	if (!dc->hwss.calc_vupdate_position || !dc->hwss.get_position)
+		return;
+
+	if (!pipe_ctx->stream_res.stream_enc || !pipe_ctx->stream_res.tg)
+		return;
+
+	dc->hwss.calc_vupdate_position(dc, pipe_ctx, &vupdate_start,
+				       &vupdate_end);
+
+	dc->hwss.get_position(&pipe_ctx, 1, &position);
+	vpos = position.vertical_count;
+
+	/* Avoid wraparound calculation issues */
+	vupdate_start += stream->timing.v_total;
+	vupdate_end += stream->timing.v_total;
+	vpos += stream->timing.v_total;
+
+	if (vpos <= vupdate_start) {
+		/* VPOS is in VACTIVE or back porch. */
+		lines_to_vupdate = vupdate_start - vpos;
+	} else if (vpos > vupdate_end) {
+		/* VPOS is in the front porch. */
+		return;
+	} else {
+		/* VPOS is in VUPDATE. */
+		lines_to_vupdate = 0;
+	}
+
+	/* Calculate time until VUPDATE in microseconds. */
+	us_per_line =
+		stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz;
+	us_to_vupdate = lines_to_vupdate * us_per_line;
+
+	/* 70 us is a conservative estimate of cursor update time */
+	if (us_to_vupdate > 70)
+		return;
+
+	/* Stall out until the cursor update completes. */
+	us_vupdate = (vupdate_end - vupdate_start + 1) * us_per_line;
+	udelay(us_to_vupdate + us_vupdate);
+}
+
 void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock)
 {
 	/* cursor lock is per MPCC tree, so only need to lock one pipe per stream */
 	if (!pipe || pipe->top_pipe)
 		return;
+
+	/* Prevent cursor lock from stalling out cursor updates. */
+	if (lock)
+		delay_cursor_until_vupdate(dc, pipe);

 	dc->res_pool->mpc->funcs->cursor_lock(dc->res_pool->mpc,
 					      pipe->stream_res.opp->inst, lock);
···
 	return vertical_line_start;
 }

-static void dcn10_calc_vupdate_position(
+void dcn10_calc_vupdate_position(
 	struct dc *dc,
 	struct pipe_ctx *pipe_ctx,
 	uint32_t *start_line,
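The microsecond conversion in `delay_cursor_until_vupdate()` above is worth checking in isolation. This is a standalone sketch of that one expression; `us_per_line()` is a hypothetical free function (in the driver the computation is inline), and `pix_clk_100hz` is the pixel clock in units of 100 Hz, as in the hunk:

```c
#include <assert.h>

/*
 * Microseconds per scanline: a line is h_total pixels, and the pixel
 * clock is pix_clk_100hz * 100 Hz, so one line takes
 *   h_total * 1e6 / (pix_clk_100hz * 100)
 * = h_total * 10000 / pix_clk_100hz microseconds (truncated).
 */
static unsigned int us_per_line(unsigned int h_total,
				unsigned int pix_clk_100hz)
{
	return h_total * 10000u / pix_clk_100hz;
}
```

For a 1080p60 CEA timing (h_total 2200, pixel clock 148.5 MHz, i.e. 1485000 in 100 Hz units) this yields about 14 us per line, which puts the 70 us keepout at roughly five scanlines before VUPDATE.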
···
-/*
- * Copyright 2017 Advanced Micro Devices, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: AMD
- *
- */
-
-#include "dml_common_defs.h"
-#include "dcn_calc_math.h"
-
-#include "dml_inline_defs.h"
-
-double dml_round(double a)
-{
-	double round_pt = 0.5;
-	double ceil = dml_ceil(a, 1);
-	double floor = dml_floor(a, 1);
-
-	if (a - floor >= round_pt)
-		return ceil;
-	else
-		return floor;
-}
-
···
-/*
- * Copyright 2017 Advanced Micro Devices, Inc.
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice shall be included in
- * all copies or substantial portions of the Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
- * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
- * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
- * OTHER DEALINGS IN THE SOFTWARE.
- *
- * Authors: AMD
- *
- */
-
-#ifndef __DC_COMMON_DEFS_H__
-#define __DC_COMMON_DEFS_H__
-
-#include "dm_services.h"
-#include "dc_features.h"
-#include "display_mode_structs.h"
-#include "display_mode_enums.h"
-
-
-double dml_round(double a);
-
-#endif /* __DC_COMMON_DEFS_H__ */
···
 					      PINCTRL_STATE_DEFAULT);
 	dev->pinctrl_pins_gpio = pinctrl_lookup_state(dev->pinctrl,
 						      "gpio");
+	if (IS_ERR(dev->pinctrl_pins_default) ||
+	    IS_ERR(dev->pinctrl_pins_gpio)) {
+		dev_info(&pdev->dev, "pinctrl states incomplete for recovery\n");
+		return -EINVAL;
+	}
+
+	/*
+	 * pins will be taken as GPIO, so we might as well inform pinctrl about
+	 * this and move the state to GPIO
+	 */
+	pinctrl_select_state(dev->pinctrl, dev->pinctrl_pins_gpio);
+
 	rinfo->sda_gpiod = devm_gpiod_get(&pdev->dev, "sda", GPIOD_IN);
 	if (PTR_ERR(rinfo->sda_gpiod) == -EPROBE_DEFER)
 		return -EPROBE_DEFER;
···
 		return -EPROBE_DEFER;
 
 	if (IS_ERR(rinfo->sda_gpiod) ||
-	    IS_ERR(rinfo->scl_gpiod) ||
-	    IS_ERR(dev->pinctrl_pins_default) ||
-	    IS_ERR(dev->pinctrl_pins_gpio)) {
+	    IS_ERR(rinfo->scl_gpiod)) {
 		dev_info(&pdev->dev, "recovery information incomplete\n");
 		if (!IS_ERR(rinfo->sda_gpiod)) {
 			gpiod_put(rinfo->sda_gpiod);
···
 			gpiod_put(rinfo->scl_gpiod);
 			rinfo->scl_gpiod = NULL;
 		}
+		pinctrl_select_state(dev->pinctrl, dev->pinctrl_pins_default);
 		return -EINVAL;
 	}
+
+	/* change the state of the pins back to their default state */
+	pinctrl_select_state(dev->pinctrl, dev->pinctrl_pins_default);
 
 	dev_info(&pdev->dev, "using scl, sda for recovery\n");
+17 -7
drivers/i2c/i2c-core-base.c
···
  * Mux support by Rodolfo Giometti <giometti@enneenne.com> and
  * Michael Lawnick <michael.lawnick.ext@nsn.com>
  *
- * Copyright (C) 2013-2017 Wolfram Sang <wsa@the-dreams.de>
+ * Copyright (C) 2013-2017 Wolfram Sang <wsa@kernel.org>
  */
 
 #define pr_fmt(fmt) "i2c-core: " fmt
···
 	} else if (ACPI_COMPANION(dev)) {
 		irq = i2c_acpi_get_irq(client);
 	}
-	if (irq == -EPROBE_DEFER)
-		return irq;
+	if (irq == -EPROBE_DEFER) {
+		status = irq;
+		goto put_sync_adapter;
+	}
 
 	if (irq < 0)
 		irq = 0;
···
 	 */
 	if (!driver->id_table &&
 	    !i2c_acpi_match_device(dev->driver->acpi_match_table, client) &&
-	    !i2c_of_match_device(dev->driver->of_match_table, client))
-		return -ENODEV;
+	    !i2c_of_match_device(dev->driver->of_match_table, client)) {
+		status = -ENODEV;
+		goto put_sync_adapter;
+	}
 
 	if (client->flags & I2C_CLIENT_WAKE) {
 		int wakeirq;
 
 		wakeirq = of_irq_get_byname(dev->of_node, "wakeup");
-		if (wakeirq == -EPROBE_DEFER)
-			return wakeirq;
+		if (wakeirq == -EPROBE_DEFER) {
+			status = wakeirq;
+			goto put_sync_adapter;
+		}
 
 		device_init_wakeup(&client->dev, true);
···
 err_clear_wakeup_irq:
 	dev_pm_clear_wake_irq(&client->dev);
 	device_init_wakeup(&client->dev, false);
+put_sync_adapter:
+	if (client->flags & I2C_CLIENT_HOST_NOTIFY)
+		pm_runtime_put_sync(&client->adapter->dev);
+
 	return status;
 }
+1 -1
drivers/i2c/i2c-core-of.c
···
  * Copyright (C) 2008 Jochen Friedrich <jochen@scram.de>
  * based on a previous patch from Jon Smirl <jonsmirl@gmail.com>
  *
- * Copyright (C) 2013, 2018 Wolfram Sang <wsa@the-dreams.de>
+ * Copyright (C) 2013, 2018 Wolfram Sang <wsa@kernel.org>
  */
 
 #include <dt-bindings/i2c/i2c.h>
···
 	return ret;
 }
 
+static bool iommu_is_attach_deferred(struct iommu_domain *domain,
+				     struct device *dev)
+{
+	if (domain->ops->is_attach_deferred)
+		return domain->ops->is_attach_deferred(domain, dev);
+
+	return false;
+}
+
 /**
  * iommu_group_add_device - add a device to an iommu group
  * @group: the group into which to add the device (reference should be held)
···
 
 	mutex_lock(&group->mutex);
 	list_add_tail(&device->list, &group->devices);
-	if (group->domain)
+	if (group->domain && !iommu_is_attach_deferred(group->domain, dev))
 		ret = __iommu_attach_device(group->domain, dev);
 	mutex_unlock(&group->mutex);
 	if (ret)
···
 				struct device *dev)
 {
 	int ret;
-	if ((domain->ops->is_attach_deferred != NULL) &&
-	    domain->ops->is_attach_deferred(domain, dev))
-		return 0;
 
 	if (unlikely(domain->ops->attach_dev == NULL))
 		return -ENODEV;
···
 static void __iommu_detach_device(struct iommu_domain *domain,
 				  struct device *dev)
 {
-	if ((domain->ops->is_attach_deferred != NULL) &&
-	    domain->ops->is_attach_deferred(domain, dev))
+	if (iommu_is_attach_deferred(domain, dev))
 		return;
 
 	if (unlikely(domain->ops->detach_dev == NULL))
+1
drivers/ipack/carriers/tpci200.c
···
 			"(bn 0x%X, sn 0x%X) failed to map driver user space!",
 			tpci200->info->pdev->bus->number,
 			tpci200->info->pdev->devfn);
+		res = -ENOMEM;
 		goto out_release_mem8_space;
 	}
+3
drivers/misc/cardreader/rtsx_pcr.c
···
 
 	rtsx_disable_aspm(pcr);
 
+	/* Fixes DMA transfer timeout issue after disabling ASPM on RTS5260 */
+	msleep(1);
+
 	if (option->ltr_enabled)
 		rtsx_set_ltr_latency(pcr, option->ltr_active_latency);
···
 
 	mtd->oobavail = ret;
 
+	/* Propagate ECC information to mtd_info */
+	mtd->ecc_strength = nand->eccreq.strength;
+	mtd->ecc_step_size = nand->eccreq.step_size;
+
 	return 0;
 
 err_cleanup_nanddev:
+2 -10
drivers/mtd/ubi/debug.c
···
 {
 	struct ubi_device *ubi = s->private;
 
-	if (*pos == 0)
-		return SEQ_START_TOKEN;
-
 	if (*pos < ubi->peb_count)
 		return pos;
···
 {
 	struct ubi_device *ubi = s->private;
 
-	if (v == SEQ_START_TOKEN)
-		return pos;
 	(*pos)++;
 
 	if (*pos < ubi->peb_count)
···
 	int err;
 
 	/* If this is the start, print a header */
-	if (iter == SEQ_START_TOKEN) {
-		seq_puts(s,
-			 "physical_block_number\terase_count\tblock_status\tread_status\n");
-		return 0;
-	}
+	if (*block_number == 0)
+		seq_puts(s, "physical_block_number\terase_count\n");
 
 	err = ubi_io_is_bad(ubi, *block_number);
 	if (err)
+4 -1
drivers/net/can/ifi_canfd/ifi_canfd.c
···
 	u32 id, rev;
 
 	addr = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(addr))
+		return PTR_ERR(addr);
+
 	irq = platform_get_irq(pdev, 0);
-	if (IS_ERR(addr) || irq < 0)
+	if (irq < 0)
 		return -EINVAL;
 
 	id = readl(addr + IFI_CANFD_IP_ID);
···
 
 	priv->regs = devm_platform_ioremap_resource(pdev, 0);
 	if (IS_ERR(priv->regs))
-		return -ENOMEM;
+		return PTR_ERR(priv->regs);
 
 	dev = b53_switch_alloc(&pdev->dev, &b53_srab_ops, priv);
 	if (!dev)
+2 -7
drivers/net/dsa/mt7530.c
···
 		mt7530_write(priv, MT7530_PVC_P(port),
 			     PORT_SPEC_TAG);
 
-	/* Disable auto learning on the cpu port */
-	mt7530_set(priv, MT7530_PSC_P(port), SA_DIS);
-
-	/* Unknown unicast frame fordwarding to the cpu port */
-	mt7530_set(priv, MT7530_MFC, UNU_FFP(BIT(port)));
+	/* Unknown multicast frame forwarding to the cpu port */
+	mt7530_rmw(priv, MT7530_MFC, UNM_FFP_MASK, UNM_FFP(BIT(port)));
 
 	/* Set CPU port number */
 	if (priv->id == ID_MT7621)
···
 
 	/* Enable and reset MIB counters */
 	mt7530_mib_reset(ds);
-
-	mt7530_clear(priv, MT7530_MFC, UNU_FFP_MASK);
 
 	for (i = 0; i < MT7530_NUM_PORTS; i++) {
 		/* Disable forwarding by default on all ports */
···
 	[GCB]	= vsc9959_gcb_regmap,
 };
 
-/* Addresses are relative to the PCI device's base address and
- * will be fixed up at ioremap time.
- */
-static struct resource vsc9959_target_io_res[] = {
+/* Addresses are relative to the PCI device's base address */
+static const struct resource vsc9959_target_io_res[] = {
 	[ANA] = {
 		.start	= 0x0280000,
 		.end	= 0x028ffff,
···
 	},
 };
 
-static struct resource vsc9959_port_io_res[] = {
+static const struct resource vsc9959_port_io_res[] = {
 	{
 		.start	= 0x0100000,
 		.end	= 0x010ffff,
···
 /* Port MAC 0 Internal MDIO bus through which the SerDes acting as an
  * SGMII/QSGMII MAC PCS can be found.
  */
-static struct resource vsc9959_imdio_res = {
+static const struct resource vsc9959_imdio_res = {
 	.start		= 0x8030,
 	.end		= 0x8040,
 	.name		= "imdio",
···
 	struct device *dev = ocelot->dev;
 	resource_size_t imdio_base;
 	void __iomem *imdio_regs;
-	struct resource *res;
+	struct resource res;
 	struct enetc_hw *hw;
 	struct mii_bus *bus;
 	int port;
···
 	imdio_base = pci_resource_start(felix->pdev,
 					felix->info->imdio_pci_bar);
 
-	res = felix->info->imdio_res;
-	res->flags = IORESOURCE_MEM;
-	res->start += imdio_base;
-	res->end += imdio_base;
+	memcpy(&res, felix->info->imdio_res, sizeof(res));
+	res.flags = IORESOURCE_MEM;
+	res.start += imdio_base;
+	res.end += imdio_base;
 
-	imdio_regs = devm_ioremap_resource(dev, res);
+	imdio_regs = devm_ioremap_resource(dev, &res);
 	if (IS_ERR(imdio_regs)) {
 		dev_err(dev, "failed to map internal MDIO registers\n");
 		return PTR_ERR(imdio_regs);
+1 -1
drivers/net/ethernet/apple/bmac.c
···
 	int i;
 	unsigned short data;
 
-	for (i = 0; i < 6; i++)
+	for (i = 0; i < 3; i++)
 		{
 			reset_and_select_srom(dev);
 			data = read_srom(dev, i + EnetAddressOffset/2, SROMAddressBits);
+7 -6
drivers/net/ethernet/freescale/ucc_geth.c
···
 #include <soc/fsl/qe/ucc.h>
 #include <soc/fsl/qe/ucc_fast.h>
 #include <asm/machdep.h>
+#include <net/sch_generic.h>
 
 #include "ucc_geth.h"
···
 
 static void ugeth_quiesce(struct ucc_geth_private *ugeth)
 {
-	/* Prevent any further xmits, plus detach the device. */
-	netif_device_detach(ugeth->ndev);
-
-	/* Wait for any current xmits to finish. */
-	netif_tx_disable(ugeth->ndev);
+	/* Prevent any further xmits */
+	netif_tx_stop_all_queues(ugeth->ndev);
 
 	/* Disable the interrupt to avoid NAPI rescheduling. */
 	disable_irq(ugeth->ug_info->uf_info.irq);
···
 {
 	napi_enable(&ugeth->napi);
 	enable_irq(ugeth->ug_info->uf_info.irq);
-	netif_device_attach(ugeth->ndev);
+
+	/* allow to xmit again  */
+	netif_tx_wake_all_queues(ugeth->ndev);
+	__netdev_watchdog_up(ugeth->ndev);
 }
 
 /* Called every time the controller might need to be made
+1 -1
drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
···
 		    (port->first_rxq >> MVPP2_CLS_OVERSIZE_RXQ_LOW_BITS));
 
 	val = mvpp2_read(port->priv, MVPP2_CLS_SWFWD_PCTRL_REG);
-	val |= MVPP2_CLS_SWFWD_PCTRL_MASK(port->id);
+	val &= ~MVPP2_CLS_SWFWD_PCTRL_MASK(port->id);
 	mvpp2_write(port->priv, MVPP2_CLS_SWFWD_PCTRL_REG, val);
 }
···
 	/* Enable PTP clock */
 	regmap_read(gmac->nss_common, NSS_COMMON_CLK_GATE, &val);
 	val |= NSS_COMMON_CLK_GATE_PTP_EN(gmac->id);
+	switch (gmac->phy_mode) {
+	case PHY_INTERFACE_MODE_RGMII:
+		val |= NSS_COMMON_CLK_GATE_RGMII_RX_EN(gmac->id) |
+			NSS_COMMON_CLK_GATE_RGMII_TX_EN(gmac->id);
+		break;
+	case PHY_INTERFACE_MODE_SGMII:
+		val |= NSS_COMMON_CLK_GATE_GMII_RX_EN(gmac->id) |
+			NSS_COMMON_CLK_GATE_GMII_TX_EN(gmac->id);
+		break;
+	default:
+		/* We don't get here; the switch above will have errored out */
+		unreachable();
+	}
 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_GATE, val);
 
 	if (gmac->phy_mode == PHY_INTERFACE_MODE_SGMII) {
···
 	ale_params.nu_switch_ale = true;
 
 	common->ale = cpsw_ale_create(&ale_params);
-	if (!common->ale) {
+	if (IS_ERR(common->ale)) {
 		dev_err(dev, "error initializing ale engine\n");
+		ret = PTR_ERR(common->ale);
 		goto err_of_clear;
 	}
+4
drivers/net/ethernet/ti/cpsw.c
···
 	struct cpsw_common *cpsw = dev_get_drvdata(dev);
 	int i;
 
+	rtnl_lock();
+
 	for (i = 0; i < cpsw->data.slaves; i++)
 		if (cpsw->slaves[i].ndev)
 			if (netif_running(cpsw->slaves[i].ndev))
 				cpsw_ndo_stop(cpsw->slaves[i].ndev);
+
+	rtnl_unlock();
 
 	/* Select sleep pin state */
 	pinctrl_pm_select_sleep_state(dev);
+1 -1
drivers/net/ethernet/ti/cpsw_ale.c
···
 
 	ale = devm_kzalloc(params->dev, sizeof(*ale), GFP_KERNEL);
 	if (!ale)
-		return NULL;
+		return ERR_PTR(-ENOMEM);
 
 	ale->p0_untag_vid_mask =
 		devm_kmalloc_array(params->dev, BITS_TO_LONGS(VLAN_N_VID),
+2 -2
drivers/net/ethernet/ti/cpsw_priv.c
···
 	ale_params.ale_ports = CPSW_ALE_PORTS_NUM;
 
 	cpsw->ale = cpsw_ale_create(&ale_params);
-	if (!cpsw->ale) {
+	if (IS_ERR(cpsw->ale)) {
 		dev_err(dev, "error initializing ale engine\n");
-		return -ENODEV;
+		return PTR_ERR(cpsw->ale);
 	}
 
 	dma_params.dev		= dev;
+2 -2
drivers/net/ethernet/ti/netcp_ethss.c
···
 		ale_params.nu_switch_ale = true;
 	}
 	gbe_dev->ale = cpsw_ale_create(&ale_params);
-	if (!gbe_dev->ale) {
+	if (IS_ERR(gbe_dev->ale)) {
 		dev_err(gbe_dev->dev, "error initializing ale engine\n");
-		ret = -ENODEV;
+		ret = PTR_ERR(gbe_dev->ale);
 		goto free_sec_ports;
 	} else {
 		dev_dbg(gbe_dev->dev, "Created a gbe ale engine\n");
···
 	const struct vsc85xx_hw_stat *hw_stats;
 	u64 *stats;
 	int nstats;
+	/* PHY address within the package. */
+	u8 addr;
 	/* For multiple port PHYs; the MDIO address of the base PHY in the
 	 * package.
 	 */
···
 	rcu_read_lock_bh();
 	keypair = rcu_dereference_bh(peer->keypairs.current_keypair);
 	send = keypair && READ_ONCE(keypair->sending.is_valid) &&
-	       (atomic64_read(&keypair->sending.counter.counter) > REKEY_AFTER_MESSAGES ||
+	       (atomic64_read(&keypair->sending_counter) > REKEY_AFTER_MESSAGES ||
 		(keypair->i_am_the_initiator &&
 		 wg_birthdate_has_expired(keypair->sending.birthdate, REKEY_AFTER_TIME)));
 	rcu_read_unlock_bh();
···
 	struct message_data *header;
 	struct sk_buff *trailer;
 	int num_frags;
+
+	/* Force hash calculation before encryption so that flow analysis is
+	 * consistent over the inner packet.
+	 */
+	skb_get_hash(skb);
 
 	/* Calculate lengths. */
 	padding_len = calculate_skb_padding(skb);
···
 	skb_list_walk_safe(first, skb, next) {
 		if (likely(encrypt_packet(skb,
 				PACKET_CB(first)->keypair))) {
-			wg_reset_packet(skb);
+			wg_reset_packet(skb, true);
 		} else {
 			state = PACKET_STATE_DEAD;
 			break;
···
 
 void wg_packet_send_staged_packets(struct wg_peer *peer)
 {
-	struct noise_symmetric_key *key;
 	struct noise_keypair *keypair;
 	struct sk_buff_head packets;
 	struct sk_buff *skb;
···
 	rcu_read_unlock_bh();
 	if (unlikely(!keypair))
 		goto out_nokey;
-	key = &keypair->sending;
-	if (unlikely(!READ_ONCE(key->is_valid)))
+	if (unlikely(!READ_ONCE(keypair->sending.is_valid)))
 		goto out_nokey;
-	if (unlikely(wg_birthdate_has_expired(key->birthdate,
+	if (unlikely(wg_birthdate_has_expired(keypair->sending.birthdate,
 					      REJECT_AFTER_TIME)))
 		goto out_invalid;
···
 		 */
 		PACKET_CB(skb)->ds = ip_tunnel_ecn_encap(0, ip_hdr(skb), skb);
 		PACKET_CB(skb)->nonce =
-				atomic64_inc_return(&key->counter.counter) - 1;
+				atomic64_inc_return(&keypair->sending_counter) - 1;
 		if (unlikely(PACKET_CB(skb)->nonce >= REJECT_AFTER_MESSAGES))
 			goto out_invalid;
 	}
···
 	return;
 
 out_invalid:
-	WRITE_ONCE(key->is_valid, false);
+	WRITE_ONCE(keypair->sending.is_valid, false);
 out_nokey:
 	wg_noise_keypair_put(keypair, false);
+4
drivers/net/wireless/intel/iwlwifi/pcie/drv.c
···
 			iwl_trans->cfg = &iwl_ax101_cfg_quz_hr;
 		else if (iwl_trans->cfg == &iwl_ax201_cfg_qu_hr)
 			iwl_trans->cfg = &iwl_ax201_cfg_quz_hr;
+		else if (iwl_trans->cfg == &killer1650s_2ax_cfg_qu_b0_hr_b0)
+			iwl_trans->cfg = &iwl_ax1650s_cfg_quz_hr;
+		else if (iwl_trans->cfg == &killer1650i_2ax_cfg_qu_b0_hr_b0)
+			iwl_trans->cfg = &iwl_ax1650i_cfg_quz_hr;
 	}
 
 #endif
+5
drivers/nvme/host/pci.c
···
 
 	while (nvme_cqe_pending(nvmeq)) {
 		found++;
+		/*
+		 * load-load control dependency between phase and the rest of
+		 * the cqe requires a full read memory barrier
+		 */
+		dma_rmb();
 		nvme_handle_cqe(nvmeq, nvmeq->cq_head);
 		nvme_update_cq_head(nvmeq);
 	}
···
 			rmcd_error("pinned %ld out of %ld pages",
 				   pinned, nr_pages);
 			ret = -EFAULT;
+			/*
+			 * Set nr_pages up to mean "how many pages to unpin, in
+			 * the error handler:
+			 */
+			nr_pages = pinned;
 			goto err_pg;
 		}
···
 	tristate "DesignWare USB3 DRD Core Support"
 	depends on (USB || USB_GADGET) && HAS_DMA
 	select USB_XHCI_PLATFORM if USB_XHCI_HCD
+	select USB_ROLE_SWITCH if USB_DWC3_DUAL_ROLE
 	help
 	  Say Y or M here if your system has a Dual Role SuperSpeed
 	  USB controller based on the DesignWare USB3 IP Core.
···
 	char *name;
 	int ret;
 
+	if (strlen(page) < len)
+		return -EOVERFLOW;
+
 	name = kstrdup(page, GFP_KERNEL);
 	if (!name)
 		return -ENOMEM;
+3 -1
drivers/usb/gadget/legacy/audio.c
···
 		struct usb_descriptor_header *usb_desc;
 
 		usb_desc = usb_otg_descriptor_alloc(cdev->gadget);
-		if (!usb_desc)
+		if (!usb_desc) {
+			status = -ENOMEM;
 			goto fail;
+		}
 		usb_otg_descriptor_init(cdev->gadget, usb_desc);
 		otg_desc[0] = usb_desc;
 		otg_desc[1] = NULL;
+3 -1
drivers/usb/gadget/legacy/cdc2.c
···
 		struct usb_descriptor_header *usb_desc;
 
 		usb_desc = usb_otg_descriptor_alloc(gadget);
-		if (!usb_desc)
+		if (!usb_desc) {
+			status = -ENOMEM;
 			goto fail1;
+		}
 		usb_otg_descriptor_init(gadget, usb_desc);
 		otg_desc[0] = usb_desc;
 		otg_desc[1] = NULL;
···
 	if (!map)
 		return NULL;
 
-	return (void *)(uintptr_t)(map->addr + addr - map->start);
+	return (void __user *)(uintptr_t)(map->addr + addr - map->start);
 }
 
 /* Can we switch to this memory table? */
···
  * not happen in this case.
  */
 static inline void __user *__vhost_get_user(struct vhost_virtqueue *vq,
-					    void *addr, unsigned int size,
+					    void __user *addr, unsigned int size,
 					    int type)
 {
 	void __user *uaddr = vhost_vq_meta_fetch(vq,
+5 -13
fs/afs/fs_probe.c
···
 	struct afs_server *server = call->server;
 	unsigned int server_index = call->server_index;
 	unsigned int index = call->addr_ix;
-	unsigned int rtt = UINT_MAX;
+	unsigned int rtt_us = 0;
 	bool have_result = false;
-	u64 _rtt;
 	int ret = call->error;
 
 	_enter("%pU,%u", &server->uuid, index);
···
 		}
 	}
 
-	/* Get the RTT and scale it to fit into a 32-bit value that represents
-	 * over a minute of time so that we can access it with one instruction
-	 * on a 32-bit system.
-	 */
-	_rtt = rxrpc_kernel_get_rtt(call->net->socket, call->rxcall);
-	_rtt /= 64;
-	rtt = (_rtt > UINT_MAX) ? UINT_MAX : _rtt;
-	if (rtt < server->probe.rtt) {
-		server->probe.rtt = rtt;
+	rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
+	if (rtt_us < server->probe.rtt) {
+		server->probe.rtt = rtt_us;
 		alist->preferred = index;
 		have_result = true;
 	}
···
 	spin_unlock(&server->probe_lock);
 
 	_debug("probe [%u][%u] %pISpc rtt=%u ret=%d",
-	       server_index, index, &alist->addrs[index].transport,
-	       (unsigned int)rtt, ret);
+	       server_index, index, &alist->addrs[index].transport, rtt_us, ret);
 
 	have_result |= afs_fs_probe_done(server);
 	if (have_result)
+4 -4
fs/afs/fsclient.c
···
 		ASSERTCMP(req->offset, <=, PAGE_SIZE);
 		if (req->offset == PAGE_SIZE) {
 			req->offset = 0;
-			if (req->page_done)
-				req->page_done(req);
 			req->index++;
 			if (req->remain > 0)
 				goto begin_page;
···
 		if (req->offset < PAGE_SIZE)
 			zero_user_segment(req->pages[req->index],
 					  req->offset, PAGE_SIZE);
-		if (req->page_done)
-			req->page_done(req);
 		req->offset = 0;
 	}
+
+	if (req->page_done)
+		for (req->index = 0; req->index < req->nr_pages; req->index++)
+			req->page_done(req);
 
 	_leave(" = 0 [done]");
 	return 0;
+5 -13
fs/afs/vl_probe.c
···
 	struct afs_addr_list *alist = call->alist;
 	struct afs_vlserver *server = call->vlserver;
 	unsigned int server_index = call->server_index;
+	unsigned int rtt_us = 0;
 	unsigned int index = call->addr_ix;
-	unsigned int rtt = UINT_MAX;
 	bool have_result = false;
-	u64 _rtt;
 	int ret = call->error;
 
 	_enter("%s,%u,%u,%d,%d", server->name, server_index, index, ret, call->abort_code);
···
 		}
 	}
 
-	/* Get the RTT and scale it to fit into a 32-bit value that represents
-	 * over a minute of time so that we can access it with one instruction
-	 * on a 32-bit system.
-	 */
-	_rtt = rxrpc_kernel_get_rtt(call->net->socket, call->rxcall);
-	_rtt /= 64;
-	rtt = (_rtt > UINT_MAX) ? UINT_MAX : _rtt;
-	if (rtt < server->probe.rtt) {
-		server->probe.rtt = rtt;
+	rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
+	if (rtt_us < server->probe.rtt) {
+		server->probe.rtt = rtt_us;
 		alist->preferred = index;
 		have_result = true;
 	}
···
 	spin_unlock(&server->probe_lock);
 
 	_debug("probe [%u][%u] %pISpc rtt=%u ret=%d",
-	       server_index, index, &alist->addrs[index].transport,
-	       (unsigned int)rtt, ret);
+	       server_index, index, &alist->addrs[index].transport, rtt_us, ret);
 
 	have_result |= afs_vl_probe_done(server);
 	if (have_result) {
+4 -4
fs/afs/yfsclient.c
···
 		ASSERTCMP(req->offset, <=, PAGE_SIZE);
 		if (req->offset == PAGE_SIZE) {
 			req->offset = 0;
-			if (req->page_done)
-				req->page_done(req);
 			req->index++;
 			if (req->remain > 0)
 				goto begin_page;
···
 		if (req->offset < PAGE_SIZE)
 			zero_user_segment(req->pages[req->index],
 					  req->offset, PAGE_SIZE);
-		if (req->page_done)
-			req->page_done(req);
 		req->offset = 0;
 	}
+
+	if (req->page_done)
+		for (req->index = 0; req->index < req->nr_pages; req->index++)
+			req->page_done(req);
 
 	_leave(" = 0 [done]");
 	return 0;
···
 			 * than it negotiated since it will refuse the read
 			 * then.
 			 */
-			if ((tcon->ses) && !(tcon->ses->capabilities &
+			if (!(tcon->ses->capabilities &
 				tcon->ses->server->vals->cap_large_files)) {
 				current_read_size = min_t(uint,
 					current_read_size, CIFSMaxBufSize);
+1 -1
fs/cifs/inode.c
···
  * cifs_backup_query_path_info - SMB1 fallback code to get ino
  *
  * Fallback code to get file metadata when we don't have access to
- * @full_path (EACCESS) and have backup creds.
+ * @full_path (EACCES) and have backup creds.
  *
  * @data will be set to search info result buffer
  * @resp_buf will be set to cifs resp buf and needs to be freed with
···
 #define EXT4_MAX_BLOCK_FILE_PHYS	0xFFFFFFFF
 
 /* Max logical block we can support */
-#define EXT4_MAX_LOGICAL_BLOCK		0xFFFFFFFF
+#define EXT4_MAX_LOGICAL_BLOCK		0xFFFFFFFE
 
 /*
  * Structure of an inode on the disk
+31
fs/ext4/extents.c
···
 	.iomap_begin		= ext4_iomap_xattr_begin,
 };
 
+static int ext4_fiemap_check_ranges(struct inode *inode, u64 start, u64 *len)
+{
+	u64 maxbytes;
+
+	if (ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS))
+		maxbytes = inode->i_sb->s_maxbytes;
+	else
+		maxbytes = EXT4_SB(inode->i_sb)->s_bitmap_maxbytes;
+
+	if (*len == 0)
+		return -EINVAL;
+	if (start > maxbytes)
+		return -EFBIG;
+
+	/*
+	 * Shrink request scope to what the fs can actually handle.
+	 */
+	if (*len > maxbytes || (maxbytes - *len) < start)
+		*len = maxbytes - start;
+	return 0;
+}
+
 static int _ext4_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
 			__u64 start, __u64 len, bool from_es_cache)
 {
···
 
 	if (fiemap_check_flags(fieinfo, ext4_fiemap_flags))
 		return -EBADR;
+
+	/*
+	 * For bitmap files the maximum size limit could be smaller than
+	 * s_maxbytes, so check len here manually instead of just relying on the
+	 * generic check.
+	 */
+	error = ext4_fiemap_check_ranges(inode, start, &len);
+	if (error)
+		return error;
 
 	if (fieinfo->fi_flags & FIEMAP_FLAG_XATTR) {
 		fieinfo->fi_flags &= ~FIEMAP_FLAG_XATTR;
+2 -31
fs/ext4/ioctl.c
···
 	fa->fsx_projid = from_kprojid(&init_user_ns, ei->i_projid);
 }
 
-/* copied from fs/ioctl.c */
-static int fiemap_check_ranges(struct super_block *sb,
-			       u64 start, u64 len, u64 *new_len)
-{
-	u64 maxbytes = (u64) sb->s_maxbytes;
-
-	*new_len = len;
-
-	if (len == 0)
-		return -EINVAL;
-
-	if (start > maxbytes)
-		return -EFBIG;
-
-	/*
-	 * Shrink request scope to what the fs can actually handle.
-	 */
-	if (len > maxbytes || (maxbytes - len) < start)
-		*new_len = maxbytes - start;
-
-	return 0;
-}
-
 /* So that the fiemap access checks can't overflow on 32 bit machines. */
 #define FIEMAP_MAX_EXTENTS	(UINT_MAX / sizeof(struct fiemap_extent))
 
···
 	struct fiemap __user *ufiemap = (struct fiemap __user *) arg;
 	struct fiemap_extent_info fieinfo = { 0, };
 	struct inode *inode = file_inode(filp);
-	struct super_block *sb = inode->i_sb;
-	u64 len;
 	int error;
 
 	if (copy_from_user(&fiemap, ufiemap, sizeof(fiemap)))
···
 
 	if (fiemap.fm_extent_count > FIEMAP_MAX_EXTENTS)
 		return -EINVAL;
-
-	error = fiemap_check_ranges(sb, fiemap.fm_start, fiemap.fm_length,
-				    &len);
-	if (error)
-		return error;
 
 	fieinfo.fi_flags = fiemap.fm_flags;
 	fieinfo.fi_extents_max = fiemap.fm_extent_count;
···
 	if (fieinfo.fi_flags & FIEMAP_FLAG_SYNC)
 		filemap_write_and_wait(inode->i_mapping);
 
-	error = ext4_get_es_cache(inode, &fieinfo, fiemap.fm_start, len);
+	error = ext4_get_es_cache(inode, &fieinfo, fiemap.fm_start,
+				  fiemap.fm_length);
 	fiemap.fm_flags = fieinfo.fi_flags;
 	fiemap.fm_mapped_extents = fieinfo.fi_extents_mapped;
 	if (copy_to_user(ufiemap, &fiemap, sizeof(fiemap)))
···
 	bool				needs_fixed_file;
 	u8				opcode;
 
+	u16				buf_index;
+
 	struct io_ring_ctx	*ctx;
 	struct list_head	list;
 	unsigned int		flags;
···
 		goto err;
 
 	ctx->flags = p->flags;
+	init_waitqueue_head(&ctx->sqo_wait);
 	init_waitqueue_head(&ctx->cq_wait);
 	INIT_LIST_HEAD(&ctx->cq_overflow_list);
 	init_completion(&ctx->completions[0]);
···
 	for (i = 0; i < rb->to_free; i++) {
 		struct io_kiocb *req = rb->reqs[i];
 
-		if (req->flags & REQ_F_FIXED_FILE) {
-			req->file = NULL;
-			percpu_ref_put(req->fixed_file_refs);
-		}
 		if (req->flags & REQ_F_INFLIGHT)
 			inflight++;
 		__io_req_aux_free(req);
···
 	if ((req->flags & REQ_F_LINK_HEAD) || io_is_fallback_req(req))
 		return false;
 
-	if (!(req->flags & REQ_F_FIXED_FILE) || req->io)
+	if (req->file || req->io)
 		rb->need_iter++;
 
 	rb->reqs[rb->to_free++] = req;
···
 
 	req->rw.addr = READ_ONCE(sqe->addr);
 	req->rw.len = READ_ONCE(sqe->len);
-	/* we own ->private, reuse it for the buffer index / buffer ID */
-	req->rw.kiocb.private = (void *) (unsigned long)
-					READ_ONCE(sqe->buf_index);
+	req->buf_index = READ_ONCE(sqe->buf_index);
 	return 0;
 }
···
 	struct io_ring_ctx *ctx = req->ctx;
 	size_t len = req->rw.len;
 	struct io_mapped_ubuf *imu;
-	unsigned index, buf_index;
+	u16 index, buf_index;
 	size_t offset;
 	u64 buf_addr;
 
···
 	if (unlikely(!ctx->user_bufs))
 		return -EFAULT;
 
-	buf_index = (unsigned long) req->rw.kiocb.private;
+	buf_index = req->buf_index;
 	if (unlikely(buf_index >= ctx->nr_user_bufs))
 		return -EFAULT;
 
···
 					  bool needs_lock)
 {
 	struct io_buffer *kbuf;
-	int bgid;
+	u16 bgid;
 
 	kbuf = (struct io_buffer *) (unsigned long) req->rw.addr;
-	bgid = (int) (unsigned long) req->rw.kiocb.private;
+	bgid = req->buf_index;
 	kbuf = io_buffer_select(req, len, bgid, kbuf, needs_lock);
 	if (IS_ERR(kbuf))
 		return kbuf;
···
 	}
 
 	/* buffer index only valid with fixed read/write, or buffer select */
-	if (req->rw.kiocb.private && !(req->flags & REQ_F_BUFFER_SELECT))
+	if (req->buf_index && !(req->flags & REQ_F_BUFFER_SELECT))
 		return -EINVAL;
 
 	if (opcode == IORING_OP_READ || opcode == IORING_OP_WRITE) {
···
 	struct file *out = sp->file_out;
 	unsigned int flags = sp->flags & ~SPLICE_F_FD_IN_FIXED;
 	loff_t *poff_in, *poff_out;
-	long ret;
+	long ret = 0;
 
 	if (force_nonblock)
 		return -EAGAIN;
 
 	poff_in = (sp->off_in == -1) ? NULL : &sp->off_in;
 	poff_out = (sp->off_out == -1) ? NULL : &sp->off_out;
-	ret = do_splice(in, poff_in, out, poff_out, sp->len, flags);
-	if (force_nonblock && ret == -EAGAIN)
-		return -EAGAIN;
+
+	if (sp->len)
+		ret = do_splice(in, poff_in, out, poff_out, sp->len, flags);
 
 	io_put_file(req, in, (sp->flags & SPLICE_F_FD_IN_FIXED));
 	req->flags &= ~REQ_F_NEED_CLEANUP;
···
 	req->result = mask;
 	init_task_work(&req->task_work, func);
 	/*
-	 * If this fails, then the task is exiting. Punt to one of the io-wq
-	 * threads to ensure the work gets run, we can't always rely on exit
-	 * cancelation taking care of this.
+	 * If this fails, then the task is exiting. When a task exits, the
+	 * work gets canceled, so just cancel this request as well instead
+	 * of executing it. We can't safely execute it anyway, as we may not
+	 * have the needed state needed for it anyway.
 	 */
 	ret = task_work_add(tsk, &req->task_work, true);
 	if (unlikely(ret)) {
+		WRITE_ONCE(poll->canceled, true);
 		tsk = io_wq_get_task(req->ctx->io_wq);
 		task_work_add(tsk, &req->task_work, true);
 	}
···
 	if (!req_need_defer(req) && list_empty_careful(&ctx->defer_list))
 		return 0;
 
-	if (!req->io && io_alloc_async_ctx(req))
-		return -EAGAIN;
-
-	ret = io_req_defer_prep(req, sqe);
-	if (ret < 0)
-		return ret;
+	if (!req->io) {
+		if (io_alloc_async_ctx(req))
+			return -EAGAIN;
+		ret = io_req_defer_prep(req, sqe);
+		if (ret < 0)
+			return ret;
+	}
 
 	spin_lock_irq(&ctx->completion_lock);
 	if (!req_need_defer(req) && list_empty(&ctx->defer_list)) {
···
 	if (ret)
 		return ret;
 
-	if (ctx->flags & IORING_SETUP_IOPOLL) {
+	/* If the op doesn't have a file, we're not polling for it */
+	if ((ctx->flags & IORING_SETUP_IOPOLL) && req->file) {
 		const bool in_async = io_wq_current_is_worker();
 
 		if (req->result == -EAGAIN)
···
 			io_double_put_req(req);
 		}
 	} else if (req->flags & REQ_F_FORCE_ASYNC) {
-		ret = io_req_defer_prep(req, sqe);
-		if (unlikely(ret < 0))
-			goto fail_req;
+		if (!req->io) {
+			ret = -EAGAIN;
+			if (io_alloc_async_ctx(req))
+				goto fail_req;
+			ret = io_req_defer_prep(req, sqe);
+			if (unlikely(ret < 0))
+				goto fail_req;
+		}
+
 		/*
 		 * Never try inline submit of IOSQE_ASYNC is set, go straight
 		 * to async execution.
···
 			finish_wait(&ctx->sqo_wait, &wait);
 
 			ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
+			ret = 0;
 			continue;
 		}
 		finish_wait(&ctx->sqo_wait,
&wait);···68466838{68476839 int ret;6848684068496849- init_waitqueue_head(&ctx->sqo_wait);68506841 mmgrab(current->mm);68516842 ctx->sqo_mm = current->mm;68526843
···
 	state = new;
 	state->owner = owner;
 	atomic_inc(&owner->so_count);
-	list_add_rcu(&state->inode_states, &nfsi->open_states);
 	ihold(inode);
 	state->inode = inode;
+	list_add_rcu(&state->inode_states, &nfsi->open_states);
 	spin_unlock(&inode->i_lock);
 	/* Note: The reclaim code dictates that we add stateless
 	 * and read-only stateids to the end of the list */
···
 	if (fh_type != OVL_FILEID_V0)
 		return ERR_PTR(-EINVAL);
 
+	if (buflen <= OVL_FH_WIRE_OFFSET)
+		return ERR_PTR(-EINVAL);
+
 	fh = kzalloc(buflen, GFP_KERNEL);
 	if (!fh)
 		return ERR_PTR(-ENOMEM);
fs/overlayfs/inode.c | +18
···
 	if (attr->ia_valid & (ATTR_KILL_SUID|ATTR_KILL_SGID))
 		attr->ia_valid &= ~ATTR_MODE;
 
+	/*
+	 * We might have to translate ovl file into real file object
+	 * once use cases emerge. For now, simply don't let underlying
+	 * filesystem rely on attr->ia_file
+	 */
+	attr->ia_valid &= ~ATTR_FILE;
+
+	/*
+	 * If open(O_TRUNC) is done, VFS calls ->setattr with ATTR_OPEN
+	 * set. Overlayfs does not pass O_TRUNC flag to underlying
+	 * filesystem during open -> do not pass ATTR_OPEN. This
+	 * disables optimization in fuse which assumes open(O_TRUNC)
+	 * already set file size to 0. But we never passed O_TRUNC to
+	 * fuse. So by clearing ATTR_OPEN, fuse will be forced to send
+	 * setattr request to server.
+	 */
+	attr->ia_valid &= ~ATTR_OPEN;
+
 	inode_lock(upperdentry->d_inode);
 	old_cred = ovl_override_creds(dentry->d_sb);
 	err = notify_change(upperdentry, attr, NULL);
fs/splice.c | +1 -1
···
 	 * Check pipe occupancy without the inode lock first. This function
 	 * is speculative anyways, so missing one is ok.
 	 */
-	if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))
+	if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage))
 		return 0;
 
 	ret = 0;
···
 /* &a[0] degrades to a pointer: a different type from an array */
 #define __must_be_array(a)	BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
 
+/*
+ * This is needed in functions which generate the stack canary, see
+ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
+ */
+#define prevent_tail_call_optimization()	mb()
+
 #endif /* __LINUX_COMPILER_H */
···
 
 	int num_adapters;
 	int max_adapters;
-	struct i2c_adapter *adapter[0];
+	struct i2c_adapter *adapter[];
 };
 
 struct i2c_mux_core *i2c_mux_alloc(struct i2c_adapter *parent,
include/linux/i2c.h | +1 -1
···
 /*
  * i2c.h - definitions for the Linux i2c bus interface
  * Copyright (C) 1995-2000 Simon G. Vogl
- * Copyright (C) 2013-2019 Wolfram Sang <wsa@the-dreams.de>
+ * Copyright (C) 2013-2019 Wolfram Sang <wsa@kernel.org>
  *
  * With some changes from Kyösti Mälkki <kmalkki@cc.hut.fi> and
  * Frodo Looijaard <frodol@dds.nl>
include/linux/kvm_host.h | +3
···
 void kvm_reload_remote_mmus(struct kvm *kvm);
 
 bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
+				 struct kvm_vcpu *except,
 				 unsigned long *vcpu_bitmap, cpumask_var_t tmp);
 bool kvm_make_all_cpus_request(struct kvm *kvm, unsigned int req);
+bool kvm_make_all_cpus_request_except(struct kvm *kvm, unsigned int req,
+				      struct kvm_vcpu *except);
 bool kvm_make_cpus_request_mask(struct kvm *kvm, unsigned int req,
 				unsigned long *vcpu_bitmap);
 
···
 	u32 table_id;
 	/* filter_set is an optimization that an entry is set */
 	bool filter_set;
-	bool dump_all_families;
 	bool dump_routes;
 	bool dump_exceptions;
 	unsigned char protocol;
···
 	__u8	data[0];
 };
 
+/* Maximum number of non-control endpoints in struct usb_raw_eps_info. */
+#define USB_RAW_EPS_NUM_MAX	30
+
+/* Maximum length of UDC endpoint name in struct usb_raw_ep_info. */
+#define USB_RAW_EP_NAME_MAX	16
+
+/* Used as addr in struct usb_raw_ep_info if endpoint accepts any address. */
+#define USB_RAW_EP_ADDR_ANY	0xff
+
+/*
+ * struct usb_raw_ep_caps - exposes endpoint capabilities from struct usb_ep
+ * (technically from its member struct usb_ep_caps).
+ */
+struct usb_raw_ep_caps {
+	__u32	type_control : 1;
+	__u32	type_iso : 1;
+	__u32	type_bulk : 1;
+	__u32	type_int : 1;
+	__u32	dir_in : 1;
+	__u32	dir_out : 1;
+};
+
+/*
+ * struct usb_raw_ep_limits - exposes endpoint limits from struct usb_ep.
+ * @maxpacket_limit: Maximum packet size value supported by this endpoint.
+ * @max_streams: maximum number of streams supported by this endpoint
+ *	(actual number is 2^n).
+ * @reserved: Empty, reserved for potential future extensions.
+ */
+struct usb_raw_ep_limits {
+	__u16	maxpacket_limit;
+	__u16	max_streams;
+	__u32	reserved;
+};
+
+/*
+ * struct usb_raw_ep_info - stores information about a gadget endpoint.
+ * @name: Name of the endpoint as it is defined in the UDC driver.
+ * @addr: Address of the endpoint that must be specified in the endpoint
+ *	descriptor passed to USB_RAW_IOCTL_EP_ENABLE ioctl.
+ * @caps: Endpoint capabilities.
+ * @limits: Endpoint limits.
+ */
+struct usb_raw_ep_info {
+	__u8				name[USB_RAW_EP_NAME_MAX];
+	__u32				addr;
+	struct usb_raw_ep_caps		caps;
+	struct usb_raw_ep_limits	limits;
+};
+
+/*
+ * struct usb_raw_eps_info - argument for USB_RAW_IOCTL_EPS_INFO ioctl.
+ * eps: Structures that store information about non-control endpoints.
+ */
+struct usb_raw_eps_info {
+	struct usb_raw_ep_info	eps[USB_RAW_EPS_NUM_MAX];
+};
+
 /*
  * Initializes a Raw Gadget instance.
  * Accepts a pointer to the usb_raw_init struct as an argument.
···
 #define USB_RAW_IOCTL_EVENT_FETCH	_IOR('U', 2, struct usb_raw_event)
 
 /*
- * Queues an IN (OUT for READ) urb as a response to the last control request
- * received on endpoint 0, provided that was an IN (OUT for READ) request and
- * waits until the urb is completed. Copies received data to user for READ.
+ * Queues an IN (OUT for READ) request as a response to the last setup request
+ * received on endpoint 0 (provided that was an IN (OUT for READ) request), and
+ * waits until the request is completed. Copies received data to user for READ.
  * Accepts a pointer to the usb_raw_ep_io struct as an argument.
- * Returns length of trasferred data on success or negative error code on
+ * Returns length of transferred data on success or negative error code on
  * failure.
  */
 #define USB_RAW_IOCTL_EP0_WRITE		_IOW('U', 3, struct usb_raw_ep_io)
 #define USB_RAW_IOCTL_EP0_READ		_IOWR('U', 4, struct usb_raw_ep_io)
 
 /*
- * Finds an endpoint that supports the transfer type specified in the
- * descriptor and enables it.
- * Accepts a pointer to the usb_endpoint_descriptor struct as an argument.
+ * Finds an endpoint that satisfies the parameters specified in the provided
+ * descriptors (address, transfer type, etc.) and enables it.
+ * Accepts a pointer to the usb_raw_ep_descs struct as an argument.
  * Returns enabled endpoint handle on success or negative error code on failure.
  */
 #define USB_RAW_IOCTL_EP_ENABLE		_IOW('U', 5, struct usb_endpoint_descriptor)
 
-/* Disables specified endpoint.
+/*
+ * Disables specified endpoint.
  * Accepts endpoint handle as an argument.
  * Returns 0 on success or negative error code on failure.
  */
 #define USB_RAW_IOCTL_EP_DISABLE	_IOW('U', 6, __u32)
 
 /*
- * Queues an IN (OUT for READ) urb as a response to the last control request
- * received on endpoint usb_raw_ep_io.ep, provided that was an IN (OUT for READ)
- * request and waits until the urb is completed. Copies received data to user
- * for READ.
+ * Queues an IN (OUT for READ) request as a response to the last setup request
+ * received on endpoint usb_raw_ep_io.ep (provided that was an IN (OUT for READ)
+ * request), and waits until the request is completed. Copies received data to
+ * user for READ.
  * Accepts a pointer to the usb_raw_ep_io struct as an argument.
- * Returns length of trasferred data on success or negative error code on
+ * Returns length of transferred data on success or negative error code on
  * failure.
  */
 #define USB_RAW_IOCTL_EP_WRITE		_IOW('U', 7, struct usb_raw_ep_io)
···
  * Returns 0 on success or negative error code on failure.
  */
 #define USB_RAW_IOCTL_VBUS_DRAW		_IOW('U', 10, __u32)
+
+/*
+ * Fills in the usb_raw_eps_info structure with information about non-control
+ * endpoints available for the currently connected UDC.
+ * Returns the number of available endpoints on success or negative error code
+ * on failure.
+ */
+#define USB_RAW_IOCTL_EPS_INFO		_IOR('U', 11, struct usb_raw_eps_info)
+
+/*
+ * Stalls a pending control request on endpoint 0.
+ * Returns 0 on success or negative error code on failure.
+ */
+#define USB_RAW_IOCTL_EP0_STALL		_IO('U', 12)
+
+/*
+ * Sets or clears halt or wedge status of the endpoint.
+ * Accepts endpoint handle as an argument.
+ * Returns 0 on success or negative error code on failure.
+ */
+#define USB_RAW_IOCTL_EP_SET_HALT	_IOW('U', 13, __u32)
+#define USB_RAW_IOCTL_EP_CLEAR_HALT	_IOW('U', 14, __u32)
+#define USB_RAW_IOCTL_EP_SET_WEDGE	_IOW('U', 15, __u32)
 
 #endif /* _UAPI__LINUX_USB_RAW_GADGET_H */
init/main.c | +2
···
 
 	/* Do the rest non-__init'ed, we're now alive */
 	arch_call_rest_init();
+
+	prevent_tail_call_optimization();
 }
 
 /* Call all constructor functions linked into the kernel. */
kernel/bpf/syscall.c | +14 -3
···
 
 	mutex_lock(&map->freeze_mutex);
 
-	if ((vma->vm_flags & VM_WRITE) && map->frozen) {
-		err = -EPERM;
-		goto out;
+	if (vma->vm_flags & VM_WRITE) {
+		if (map->frozen) {
+			err = -EPERM;
+			goto out;
+		}
+		/* map is meant to be read-only, so do not allow mapping as
+		 * writable, because it's possible to leak a writable page
+		 * reference and allows user-space to still modify it after
+		 * freezing, while verifier will assume contents do not change
+		 */
+		if (map->map_flags & BPF_F_RDONLY_PROG) {
+			err = -EACCES;
+			goto out;
+		}
 	}
 
 	/* set default open/close callbacks */
···
 	struct rq *rq = rq_of(cfs_rq);
 	struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
 	struct sched_entity *se;
-	int enqueue = 1;
 	long task_delta, idle_task_delta;
 
 	se = cfs_rq->tg->se[cpu_of(rq)];
···
 	idle_task_delta = cfs_rq->idle_h_nr_running;
 	for_each_sched_entity(se) {
 		if (se->on_rq)
-			enqueue = 0;
-
+			break;
 		cfs_rq = cfs_rq_of(se);
-		if (enqueue) {
-			enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP);
-		} else {
-			update_load_avg(cfs_rq, se, 0);
-			se_update_runnable(se);
-		}
+		enqueue_entity(cfs_rq, se, ENQUEUE_WAKEUP);
 
 		cfs_rq->h_nr_running += task_delta;
 		cfs_rq->idle_h_nr_running += idle_task_delta;
 
+		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(cfs_rq))
-			break;
+			goto unthrottle_throttle;
 	}
 
-	if (!se)
-		add_nr_running(rq, task_delta);
+	for_each_sched_entity(se) {
+		cfs_rq = cfs_rq_of(se);
 
+		update_load_avg(cfs_rq, se, UPDATE_TG);
+		se_update_runnable(se);
+
+		cfs_rq->h_nr_running += task_delta;
+		cfs_rq->idle_h_nr_running += idle_task_delta;
+
+
+		/* end evaluation on encountering a throttled cfs_rq */
+		if (cfs_rq_throttled(cfs_rq))
+			goto unthrottle_throttle;
+
+		/*
+		 * One parent has been throttled and cfs_rq removed from the
+		 * list. Add it back to not break the leaf list.
+		 */
+		if (throttled_hierarchy(cfs_rq))
+			list_add_leaf_cfs_rq(cfs_rq);
+	}
+
+	/* At this point se is NULL and we are at root level*/
+	add_nr_running(rq, task_delta);
+
+unthrottle_throttle:
 	/*
 	 * The cfs_rq_throttled() breaks in the above iteration can result in
 	 * incomplete leaf list maintenance, resulting in triggering the
···
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 
-		list_add_leaf_cfs_rq(cfs_rq);
+		if (list_add_leaf_cfs_rq(cfs_rq))
+			break;
 	}
 
 	assert_list_leaf_cfs_rq(rq);
···
 		/* end evaluation on encountering a throttled cfs_rq */
 		if (cfs_rq_throttled(cfs_rq))
 			goto enqueue_throttle;
+
+		/*
+		 * One parent has been throttled and cfs_rq removed from the
+		 * list. Add it back to not break the leaf list.
+		 */
+		if (throttled_hierarchy(cfs_rq))
+			list_add_leaf_cfs_rq(cfs_rq);
 	}
 
 enqueue_throttle:
···
 	unsigned long hashval;
 	int ret;
 
+	/*
+	 * Print the real pointer value for NULL and error pointers,
+	 * as they are not actual addresses.
+	 */
+	if (IS_ERR_OR_NULL(ptr))
+		return pointer_string(buf, end, ptr, spec);
+
 	/* When debugging early boot use non-cryptographically secure hash. */
 	if (unlikely(debug_boot_weak_hash)) {
 		hashval = hash_long((unsigned long)ptr, 32);
···
 	slots = handle_to_slots(handle);
 	write_lock(&slots->lock);
 	*(unsigned long *)handle = 0;
-	write_unlock(&slots->lock);
-	if (zhdr->slots == slots)
+	if (zhdr->slots == slots) {
+		write_unlock(&slots->lock);
 		return; /* simple case, nothing else to do */
+	}
 
 	/* we are freeing a foreign handle if we are here */
 	zhdr->foreign_handles--;
 	is_free = true;
-	read_lock(&slots->lock);
 	if (!test_bit(HANDLES_ORPHANED, &slots->pool)) {
-		read_unlock(&slots->lock);
+		write_unlock(&slots->lock);
 		return;
 	}
 	for (i = 0; i <= BUDDY_MASK; i++) {
···
 			break;
 		}
 	}
-	read_unlock(&slots->lock);
+	write_unlock(&slots->lock);
 
 	if (is_free) {
 		struct z3fold_pool *pool = slots_to_pool(slots);
···
 	zhdr->start_middle = 0;
 	zhdr->cpu = -1;
 	zhdr->foreign_handles = 0;
+	zhdr->mapped_count = 0;
 	zhdr->slots = slots;
 	zhdr->pool = pool;
 	INIT_LIST_HEAD(&zhdr->buddy);
net/ax25/af_ax25.c | +4 -2
···
 		break;
 
 	case SO_BINDTODEVICE:
-		if (optlen > IFNAMSIZ)
-			optlen = IFNAMSIZ;
+		if (optlen > IFNAMSIZ - 1)
+			optlen = IFNAMSIZ - 1;
+
+		memset(devname, 0, sizeof(devname));
 
 		if (copy_from_user(devname, optval, optlen)) {
 			res = -EFAULT;
net/core/dev.c | +15 -5
···
 	return 0;
 }
 
-static int __netif_receive_skb_core(struct sk_buff *skb, bool pfmemalloc,
+static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc,
 				    struct packet_type **ppt_prev)
 {
 	struct packet_type *ptype, *pt_prev;
 	rx_handler_func_t *rx_handler;
+	struct sk_buff *skb = *pskb;
 	struct net_device *orig_dev;
 	bool deliver_exact = false;
 	int ret = NET_RX_DROP;
···
 		ret2 = do_xdp_generic(rcu_dereference(skb->dev->xdp_prog), skb);
 		preempt_enable();
 
-		if (ret2 != XDP_PASS)
-			return NET_RX_DROP;
+		if (ret2 != XDP_PASS) {
+			ret = NET_RX_DROP;
+			goto out;
+		}
 		skb_reset_mac_len(skb);
 	}
 
···
 	}
 
 out:
+	/* The invariant here is that if *ppt_prev is not NULL
+	 * then skb should also be non-NULL.
+	 *
+	 * Apparently *ppt_prev assignment above holds this invariant due to
+	 * skb dereferencing near it.
+	 */
+	*pskb = skb;
 	return ret;
 }
···
 	struct packet_type *pt_prev = NULL;
 	int ret;
 
-	ret = __netif_receive_skb_core(skb, pfmemalloc, &pt_prev);
+	ret = __netif_receive_skb_core(&skb, pfmemalloc, &pt_prev);
 	if (pt_prev)
 		ret = INDIRECT_CALL_INET(pt_prev->func, ipv6_rcv, ip_rcv, skb,
 					 skb->dev, pt_prev, orig_dev);
···
 		struct packet_type *pt_prev = NULL;
 
 		skb_list_del_init(skb);
-		__netif_receive_skb_core(skb, pfmemalloc, &pt_prev);
+		__netif_receive_skb_core(&skb, pfmemalloc, &pt_prev);
 		if (!pt_prev)
 			continue;
 		if (pt_curr != pt_prev || od_curr != orig_dev) {
net/core/flow_dissector.c | +21 -5
···
 	return ret;
 }
 
-int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
+static int flow_dissector_bpf_prog_detach(struct net *net)
 {
 	struct bpf_prog *attached;
-	struct net *net;
 
-	net = current->nsproxy->net_ns;
 	mutex_lock(&flow_dissector_mutex);
 	attached = rcu_dereference_protected(net->flow_dissector_prog,
 					     lockdep_is_held(&flow_dissector_mutex));
···
 	mutex_unlock(&flow_dissector_mutex);
 	return 0;
 }
+
+int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
+{
+	return flow_dissector_bpf_prog_detach(current->nsproxy->net_ns);
+}
+
+static void __net_exit flow_dissector_pernet_pre_exit(struct net *net)
+{
+	/* We're not racing with attach/detach because there are no
+	 * references to netns left when pre_exit gets called.
+	 */
+	if (rcu_access_pointer(net->flow_dissector_prog))
+		flow_dissector_bpf_prog_detach(net);
+}
+
+static struct pernet_operations flow_dissector_pernet_ops __net_initdata = {
+	.pre_exit = flow_dissector_pernet_pre_exit,
+};
 
 /**
  * __skb_flow_get_ports - extract the upper layer ports and return them
···
 	skb_flow_dissector_init(&flow_keys_basic_dissector,
 				flow_keys_basic_dissector_keys,
 				ARRAY_SIZE(flow_keys_basic_dissector_keys));
-	return 0;
-}
 
+	return register_pernet_subsys(&flow_dissector_pernet_ops);
+}
 core_initcall(init_default_flow_dissectors);
net/dsa/tag_mtk.c | +15
···
 #define MTK_HDR_XMIT_TAGGED_TPID_8100	1
 #define MTK_HDR_RECV_SOURCE_PORT_MASK	GENMASK(2, 0)
 #define MTK_HDR_XMIT_DP_BIT_MASK	GENMASK(5, 0)
+#define MTK_HDR_XMIT_SA_DIS		BIT(6)
 
 static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
 				    struct net_device *dev)
···
 	struct dsa_port *dp = dsa_slave_to_port(dev);
 	u8 *mtk_tag;
 	bool is_vlan_skb = true;
+	unsigned char *dest = eth_hdr(skb)->h_dest;
+	bool is_multicast_skb = is_multicast_ether_addr(dest) &&
+				!is_broadcast_ether_addr(dest);
 
 	/* Build the special tag after the MAC Source Address. If VLAN header
 	 * is present, it's required that VLAN header and special tag is
···
 		   MTK_HDR_XMIT_UNTAGGED;
 	mtk_tag[1] = (1 << dp->index) & MTK_HDR_XMIT_DP_BIT_MASK;
 
+	/* Disable SA learning for multicast frames */
+	if (unlikely(is_multicast_skb))
+		mtk_tag[1] |= MTK_HDR_XMIT_SA_DIS;
+
 	/* Tag control information is kept for 802.1Q */
 	if (!is_vlan_skb) {
 		mtk_tag[2] = 0;
···
 {
 	int port;
 	__be16 *phdr, hdr;
+	unsigned char *dest = eth_hdr(skb)->h_dest;
+	bool is_multicast_skb = is_multicast_ether_addr(dest) &&
+				!is_broadcast_ether_addr(dest);
 
 	if (unlikely(!pskb_may_pull(skb, MTK_HDR_LEN)))
 		return NULL;
···
 	skb->dev = dsa_master_find_slave(dev, 0, port);
 	if (!skb->dev)
 		return NULL;
+
+	/* Only unicast or broadcast frames are offloaded */
+	if (likely(!is_multicast_skb))
+		skb->offload_fwd_mark = 1;
 
 	return skb;
 }
net/ethtool/netlink.c | +2 -2
···
 	ret = ops->reply_size(req_info, reply_data);
 	if (ret < 0)
 		goto err_cleanup;
-	reply_len = ret;
+	reply_len = ret + ethnl_reply_header_size();
 	ret = -ENOMEM;
 	rskb = ethnl_reply_init(reply_len, req_info->dev, ops->reply_cmd,
 				ops->hdr_attr, info, &reply_payload);
···
 	ret = ops->reply_size(req_info, reply_data);
 	if (ret < 0)
 		goto err_cleanup;
-	reply_len = ret;
+	reply_len = ret + ethnl_reply_header_size();
 	ret = -ENOMEM;
 	skb = genlmsg_new(reply_len, GFP_KERNEL);
 	if (!skb)
net/ethtool/strset.c | -1
···
 	int len = 0;
 	int ret;
 
-	len += ethnl_reply_header_size();
 	for (i = 0; i < ETH_SS_COUNT; i++) {
 		const struct strset_info *set_info = &data->sets[i];
 
net/ipv4/fib_frontend.c | +1 -2
···
 	else
 		filter->dump_exceptions = false;
 
-	filter->dump_all_families = (rtm->rtm_family == AF_UNSPEC);
 	filter->flags = rtm->rtm_flags;
 	filter->protocol = rtm->rtm_protocol;
 	filter->rt_type = rtm->rtm_type;
···
 	if (filter.table_id) {
 		tb = fib_get_table(net, filter.table_id);
 		if (!tb) {
-			if (filter.dump_all_families)
+			if (rtnl_msg_family(cb->nlh) != PF_INET)
 				return skb->len;
 
 			NL_SET_ERR_MSG(cb->extack, "ipv4: FIB table does not exist");
net/ipv4/inet_connection_sock.c | +24 -19
···
 #include <net/addrconf.h>
 
 #if IS_ENABLED(CONFIG_IPV6)
-/* match_wildcard == true:  IPV6_ADDR_ANY equals to any IPv6 addresses if IPv6
- *                          only, and any IPv4 addresses if not IPv6 only
- * match_wildcard == false: addresses must be exactly the same, i.e.
- *                          IPV6_ADDR_ANY only equals to IPV6_ADDR_ANY,
- *                          and 0.0.0.0 equals to 0.0.0.0 only
+/* match_sk*_wildcard == true:  IPV6_ADDR_ANY equals to any IPv6 addresses
+ *                              if IPv6 only, and any IPv4 addresses
+ *                              if not IPv6 only
+ * match_sk*_wildcard == false: addresses must be exactly the same, i.e.
+ *                              IPV6_ADDR_ANY only equals to IPV6_ADDR_ANY,
+ *                              and 0.0.0.0 equals to 0.0.0.0 only
  */
 static bool ipv6_rcv_saddr_equal(const struct in6_addr *sk1_rcv_saddr6,
 				 const struct in6_addr *sk2_rcv_saddr6,
 				 __be32 sk1_rcv_saddr, __be32 sk2_rcv_saddr,
 				 bool sk1_ipv6only, bool sk2_ipv6only,
-				 bool match_wildcard)
+				 bool match_sk1_wildcard,
+				 bool match_sk2_wildcard)
 {
 	int addr_type = ipv6_addr_type(sk1_rcv_saddr6);
 	int addr_type2 = sk2_rcv_saddr6 ? ipv6_addr_type(sk2_rcv_saddr6) : IPV6_ADDR_MAPPED;
···
 		if (!sk2_ipv6only) {
 			if (sk1_rcv_saddr == sk2_rcv_saddr)
 				return true;
-			if (!sk1_rcv_saddr || !sk2_rcv_saddr)
-				return match_wildcard;
+			return (match_sk1_wildcard && !sk1_rcv_saddr) ||
+				(match_sk2_wildcard && !sk2_rcv_saddr);
 		}
 		return false;
 	}
···
 	if (addr_type == IPV6_ADDR_ANY && addr_type2 == IPV6_ADDR_ANY)
 		return true;
 
-	if (addr_type2 == IPV6_ADDR_ANY && match_wildcard &&
+	if (addr_type2 == IPV6_ADDR_ANY && match_sk2_wildcard &&
 	    !(sk2_ipv6only && addr_type == IPV6_ADDR_MAPPED))
 		return true;
 
-	if (addr_type == IPV6_ADDR_ANY && match_wildcard &&
+	if (addr_type == IPV6_ADDR_ANY && match_sk1_wildcard &&
 	    !(sk1_ipv6only && addr_type2 == IPV6_ADDR_MAPPED))
 		return true;
 
···
 }
 #endif
 
-/* match_wildcard == true:  0.0.0.0 equals to any IPv4 addresses
- * match_wildcard == false: addresses must be exactly the same, i.e.
- *                          0.0.0.0 only equals to 0.0.0.0
+/* match_sk*_wildcard == true:  0.0.0.0 equals to any IPv4 addresses
+ * match_sk*_wildcard == false: addresses must be exactly the same, i.e.
+ *                              0.0.0.0 only equals to 0.0.0.0
  */
 static bool ipv4_rcv_saddr_equal(__be32 sk1_rcv_saddr, __be32 sk2_rcv_saddr,
-				 bool sk2_ipv6only, bool match_wildcard)
+				 bool sk2_ipv6only, bool match_sk1_wildcard,
+				 bool match_sk2_wildcard)
 {
 	if (!sk2_ipv6only) {
 		if (sk1_rcv_saddr == sk2_rcv_saddr)
 			return true;
-		if (!sk1_rcv_saddr || !sk2_rcv_saddr)
-			return match_wildcard;
+		return (match_sk1_wildcard && !sk1_rcv_saddr) ||
+			(match_sk2_wildcard && !sk2_rcv_saddr);
 	}
 	return false;
 }
···
 					  sk2->sk_rcv_saddr,
 					  ipv6_only_sock(sk),
 					  ipv6_only_sock(sk2),
+					  match_wildcard,
 					  match_wildcard);
 #endif
 	return ipv4_rcv_saddr_equal(sk->sk_rcv_saddr, sk2->sk_rcv_saddr,
-				    ipv6_only_sock(sk2), match_wildcard);
+				    ipv6_only_sock(sk2), match_wildcard,
+				    match_wildcard);
 }
 EXPORT_SYMBOL(inet_rcv_saddr_equal);
···
 					    tb->fast_rcv_saddr,
 					    sk->sk_rcv_saddr,
 					    tb->fast_ipv6_only,
-					    ipv6_only_sock(sk), true);
+					    ipv6_only_sock(sk), true, false);
 #endif
 	return ipv4_rcv_saddr_equal(tb->fast_rcv_saddr, sk->sk_rcv_saddr,
-				    ipv6_only_sock(sk), true);
+				    ipv6_only_sock(sk), true, false);
 }
 
 /* Obtain a reference to a local port for the given sock,
···
 
 	mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id);
 	if (!mrt) {
-		if (filter.dump_all_families)
+		if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR)
 			return skb->len;
 
 		NL_SET_ERR_MSG(cb->extack, "ipv4: MR table does not exist");
net/ipv4/nexthop.c | +2 -1
···
 	return 0;
 
 nla_put_failure:
+	nlmsg_cancel(skb, nlh);
 	return -EMSGSIZE;
 }
···
 			return -EINVAL;
 		}
 	}
-	for (i = NHA_GROUP + 1; i < __NHA_MAX; ++i) {
+	for (i = NHA_GROUP_TYPE + 1; i < __NHA_MAX; ++i) {
 		if (!tb[i])
 			continue;
 		if (tb[NHA_FDB])
net/ipv4/route.c | +6 -8
···
 	atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ;
 	u32 old = READ_ONCE(*p_tstamp);
 	u32 now = (u32)jiffies;
-	u32 new, delta = 0;
+	u32 delta = 0;
 
 	if (old != now && cmpxchg(p_tstamp, old, now) == old)
 		delta = prandom_u32_max(now - old);
 
-	/* Do not use atomic_add_return() as it makes UBSAN unhappy */
-	do {
-		old = (u32)atomic_read(p_id);
-		new = old + delta + segs;
-	} while (atomic_cmpxchg(p_id, old, new) != old);
-
-	return new - segs;
+	/* If UBSAN reports an error there, please make sure your compiler
+	 * supports -fno-strict-overflow before reporting it that was a bug
+	 * in UBSAN, and it has been fixed in GCC-8.
+	 */
+	return atomic_add_return(segs + delta, p_id) - segs;
 }
 EXPORT_SYMBOL(ip_idents_reserve);
 
net/ipv6/ip6_fib.c | +1 -1
···
 	if (arg.filter.table_id) {
 		tb = fib6_get_table(net, arg.filter.table_id);
 		if (!tb) {
-			if (arg.filter.dump_all_families)
+			if (rtnl_msg_family(cb->nlh) != PF_INET6)
 				goto out;
 
 			NL_SET_ERR_MSG_MOD(cb->extack, "FIB table does not exist");
net/ipv6/ip6mr.c | +3 -2
···
 #ifdef CONFIG_IPV6_MROUTE_MULTIPLE_TABLES
 #define ip6mr_for_each_table(mrt, net) \
 	list_for_each_entry_rcu(mrt, &net->ipv6.mr6_tables, list, \
-				lockdep_rtnl_is_held())
+				lockdep_rtnl_is_held() || \
+				list_empty(&net->ipv6.mr6_tables))
 
 static struct mr_table *ip6mr_mr_table_iter(struct net *net,
 					    struct mr_table *mrt)
···
 
 	mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id);
 	if (!mrt) {
-		if (filter.dump_all_families)
+		if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR)
 			return skb->len;
 
 		NL_SET_ERR_MSG_MOD(cb->extack, "MR table does not exist");
net/mptcp/crypto.c | +9 -15
···
 void mptcp_crypto_hmac_sha(u64 key1, u64 key2, u8 *msg, int len, void *hmac)
 {
 	u8 input[SHA256_BLOCK_SIZE + SHA256_DIGEST_SIZE];
-	__be32 mptcp_hashed_key[SHA256_DIGEST_WORDS];
-	__be32 *hash_out = (__force __be32 *)hmac;
 	struct sha256_state state;
 	u8 key1be[8];
 	u8 key2be[8];
···
 
 	sha256_init(&state);
 	sha256_update(&state, input, SHA256_BLOCK_SIZE + SHA256_DIGEST_SIZE);
-	sha256_final(&state, (u8 *)mptcp_hashed_key);
-
-	/* takes only first 160 bits */
-	for (i = 0; i < 5; i++)
-		hash_out[i] = mptcp_hashed_key[i];
+	sha256_final(&state, (u8 *)hmac);
 }
 
 #ifdef CONFIG_MPTCP_HMAC_TEST
···
 };
 
 /* we can't reuse RFC 4231 test vectors, as we have constraint on the
- * input and key size, and we truncate the output.
+ * input and key size.
  */
 static struct test_cast tests[] = {
 	{
 		.key = "0b0b0b0b0b0b0b0b",
 		.msg = "48692054",
-		.result = "8385e24fb4235ac37556b6b886db106284a1da67",
+		.result = "8385e24fb4235ac37556b6b886db106284a1da671699f46db1f235ec622dcafa",
 	},
 	{
 		.key = "aaaaaaaaaaaaaaaa",
 		.msg = "dddddddd",
-		.result = "2c5e219164ff1dca1c4a92318d847bb6b9d44492",
+		.result = "2c5e219164ff1dca1c4a92318d847bb6b9d44492984e1eb71aff9022f71046e9",
 	},
 	{
 		.key = "0102030405060708",
 		.msg = "cdcdcdcd",
-		.result = "e73b9ba9969969cefb04aa0d6df18ec2fcc075b6",
+		.result = "e73b9ba9969969cefb04aa0d6df18ec2fcc075b6f23b4d8c4da736a5dbbc6e7d",
 	},
 };
 
 static int __init test_mptcp_crypto(void)
 {
-	char hmac[20], hmac_hex[41];
+	char hmac[32], hmac_hex[65];
 	u32 nonce1, nonce2;
 	u64 key1, key2;
 	u8 msg[8];
···
 		put_unaligned_be32(nonce2, &msg[4]);
 
 		mptcp_crypto_hmac_sha(key1, key2, msg, 8, hmac);
-		for (j = 0; j < 20; ++j)
+		for (j = 0; j < 32; ++j)
 			sprintf(&hmac_hex[j << 1], "%02x", hmac[j] & 0xff);
-		hmac_hex[40] = 0;
+		hmac_hex[64] = 0;
 
-		if (memcmp(hmac_hex, tests[i].result, 40))
+		if (memcmp(hmac_hex, tests[i].result, 64))
 			pr_err("test %d failed, got %s expected %s", i,
 			       hmac_hex, tests[i].result);
 		else
···
 	/* We analyse the number of packets that get ACK'd per RTT
 	 * period and increase the window if we managed to fill it.
 	 */
-	if (call->peer->rtt_usage == 0)
+	if (call->peer->rtt_count == 0)
 		goto out;
 	if (ktime_before(skb->tstamp,
-			 ktime_add_ns(call->cong_tstamp,
-				      call->peer->rtt)))
+			 ktime_add_us(call->cong_tstamp,
+				      call->peer->srtt_us >> 3)))
 		goto out_no_clear_ca;
 	change = rxrpc_cong_rtt_window_end;
 	call->cong_tstamp = skb->tstamp;
···
 }

 /*
+ * Return true if the ACK is valid - ie. it doesn't appear to have regressed
+ * with respect to the ack state conveyed by preceding ACKs.
+ */
+static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
+			       rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt)
+{
+	rxrpc_seq_t base = READ_ONCE(call->ackr_first_seq);
+
+	if (after(first_pkt, base))
+		return true; /* The window advanced */
+
+	if (before(first_pkt, base))
+		return false; /* firstPacket regressed */
+
+	if (after_eq(prev_pkt, call->ackr_prev_seq))
+		return true; /* previousPacket hasn't regressed. */
+
+	/* Some rx implementations put a serial number in previousPacket. */
+	if (after_eq(prev_pkt, base + call->tx_winsize))
+		return false;
+	return true;
+}
+
+/*
  * Process an ACK packet.
  *
  * ack.firstPacket is the sequence number of the first soft-ACK'd/NAK'd packet
···
 	}

 	/* Discard any out-of-order or duplicate ACKs (outside lock). */
-	if (before(first_soft_ack, call->ackr_first_seq) ||
-	    before(prev_pkt, call->ackr_prev_seq))
+	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+		trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
+					   first_soft_ack, call->ackr_first_seq,
+					   prev_pkt, call->ackr_prev_seq);
 		return;
+	}

 	buf.info.rxMTU = 0;
 	ioffset = offset + nr_acks + 3;
···
 	spin_lock(&call->input_lock);

 	/* Discard any out-of-order or duplicate ACKs (inside lock). */
-	if (before(first_soft_ack, call->ackr_first_seq) ||
-	    before(prev_pkt, call->ackr_prev_seq))
+	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
+		trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
+					   first_soft_ack, call->ackr_first_seq,
+					   prev_pkt, call->ackr_prev_seq);
 		goto out;
+	}
 	call->acks_latest_ts = skb->tstamp;

 	call->ackr_first_seq = first_soft_ack;
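The new `rxrpc_is_ack_valid()` replaces two plain `before()` checks with a window test that tolerates a regressed `previousPacket` (some rx implementations put a serial number in that field) while still rejecting regressed or implausible ACK state. A Python model of that decision (a sketch, not kernel code), using the kernel's wrapping signed-32-bit sequence comparisons:

```python
# Model of rxrpc_is_ack_valid(): first_base/prev_base stand in for
# call->ackr_first_seq / call->ackr_prev_seq.
def s32(x: int) -> int:
    """Interpret x as a signed 32-bit integer (kernel serial math)."""
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

def before(a, b):   return s32(a - b) < 0
def after(a, b):    return s32(b - a) < 0
def after_eq(a, b): return s32(a - b) >= 0

def ack_is_valid(first_base, prev_base, tx_winsize, first_pkt, prev_pkt):
    if after(first_pkt, first_base):
        return True                  # the window advanced
    if before(first_pkt, first_base):
        return False                 # firstPacket regressed
    if after_eq(prev_pkt, prev_base):
        return True                  # previousPacket hasn't regressed
    # Some rx implementations put a serial number in previousPacket;
    # a value beyond the transmit window is treated as bogus.
    if after_eq(prev_pkt, first_base + tx_winsize):
        return False
    return True

assert ack_is_valid(100, 50, 64, 101, 50)        # window advanced
assert not ack_is_valid(100, 50, 64, 99, 50)     # firstPacket regressed
assert ack_is_valid(100, 250, 64, 100, 120)      # tolerated regression
assert not ack_is_valid(100, 250, 64, 100, 200)  # looks like a serial number
assert ack_is_valid(0xFFFFFFF0, 0, 64, 5, 0)     # advance across wrap-around
```

Because the comparisons go through signed 32-bit subtraction, the check keeps working when sequence numbers wrap, which a naive `<` comparison would not.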
-5
net/rxrpc/misc.c
···
  */
 unsigned int rxrpc_rx_jumbo_max = 4;

-/*
- * Time till packet resend (in milliseconds).
- */
-unsigned long rxrpc_resend_timeout = 4 * HZ;
-
 const s8 rxrpc_ack_priority[] = {
 	[0] = 0,
 	[RXRPC_ACK_DELAY] = 1,
···
 }

 /*
- * Add RTT information to cache.  This is called in softirq mode and has
- * exclusive access to the peer RTT data.
- */
-void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
-			rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
-			ktime_t send_time, ktime_t resp_time)
-{
-	struct rxrpc_peer *peer = call->peer;
-	s64 rtt;
-	u64 sum = peer->rtt_sum, avg;
-	u8 cursor = peer->rtt_cursor, usage = peer->rtt_usage;
-
-	rtt = ktime_to_ns(ktime_sub(resp_time, send_time));
-	if (rtt < 0)
-		return;
-
-	spin_lock(&peer->rtt_input_lock);
-
-	/* Replace the oldest datum in the RTT buffer */
-	sum -= peer->rtt_cache[cursor];
-	sum += rtt;
-	peer->rtt_cache[cursor] = rtt;
-	peer->rtt_cursor = (cursor + 1) & (RXRPC_RTT_CACHE_SIZE - 1);
-	peer->rtt_sum = sum;
-	if (usage < RXRPC_RTT_CACHE_SIZE) {
-		usage++;
-		peer->rtt_usage = usage;
-	}
-
-	spin_unlock(&peer->rtt_input_lock);
-
-	/* Now recalculate the average */
-	if (usage == RXRPC_RTT_CACHE_SIZE) {
-		avg = sum / RXRPC_RTT_CACHE_SIZE;
-	} else {
-		avg = sum;
-		do_div(avg, usage);
-	}
-
-	/* Don't need to update this under lock */
-	peer->rtt = avg;
-	trace_rxrpc_rtt_rx(call, why, send_serial, resp_serial, rtt,
-			   usage, avg);
-}
-
-/*
  * Perform keep-alive pings.
  */
 static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
+7-5
net/rxrpc/peer_object.c
···
 	spin_lock_init(&peer->rtt_input_lock);
 	peer->debug_id = atomic_inc_return(&rxrpc_debug_id);

+	rxrpc_peer_init_rtt(peer);
+
 	if (RXRPC_TX_SMSS > 2190)
 		peer->cong_cwnd = 2;
 	else if (RXRPC_TX_SMSS > 1095)
···
 EXPORT_SYMBOL(rxrpc_kernel_get_peer);

 /**
- * rxrpc_kernel_get_rtt - Get a call's peer RTT
+ * rxrpc_kernel_get_srtt - Get a call's peer smoothed RTT
  * @sock: The socket on which the call is in progress.
  * @call: The call to query
  *
- * Get the call's peer RTT.
+ * Get the call's peer smoothed RTT.
  */
-u64 rxrpc_kernel_get_rtt(struct socket *sock, struct rxrpc_call *call)
+u32 rxrpc_kernel_get_srtt(struct socket *sock, struct rxrpc_call *call)
 {
-	return call->peer->rtt;
+	return call->peer->srtt_us >> 3;
 }
-EXPORT_SYMBOL(rxrpc_kernel_get_rtt);
+EXPORT_SYMBOL(rxrpc_kernel_get_srtt);
+4-4
net/rxrpc/proc.c
···
 		seq_puts(seq,
 			 "Proto Local "
 			 " Remote "
-			 " Use CW MTU LastUse RTT Rc\n"
+			 " Use CW MTU LastUse RTT RTO\n"
 			 );
 		return 0;
 	}
···
 	now = ktime_get_seconds();
 	seq_printf(seq,
 		   "UDP %-47.47s %-47.47s %3u"
-		   " %3u %5u %6llus %12llu %2u\n",
+		   " %3u %5u %6llus %8u %8u\n",
 		   lbuff,
 		   rbuff,
 		   atomic_read(&peer->usage),
 		   peer->cong_cwnd,
 		   peer->mtu,
 		   now - peer->last_tx_at,
-		   peer->rtt,
-		   peer->rtt_cursor);
+		   peer->srtt_us >> 3,
+		   jiffies_to_usecs(peer->rto_j));

 	return 0;
 }
+195
net/rxrpc/rtt.c
···
+// SPDX-License-Identifier: GPL-2.0
+/* RTT/RTO calculation.
+ *
+ * Adapted from TCP for AF_RXRPC by David Howells (dhowells@redhat.com)
+ *
+ * https://tools.ietf.org/html/rfc6298
+ * https://tools.ietf.org/html/rfc1122#section-4.2.3.1
+ * http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-partridge87.pdf
+ */
+
+#include <linux/net.h>
+#include "ar-internal.h"
+
+#define RXRPC_RTO_MAX	((unsigned)(120 * HZ))
+#define RXRPC_TIMEOUT_INIT ((unsigned)(1*HZ))	/* RFC6298 2.1 initial RTO value */
+#define rxrpc_jiffies32 ((u32)jiffies)		/* As rxrpc_jiffies32 */
+#define rxrpc_min_rtt_wlen 300			/* As sysctl_tcp_min_rtt_wlen */
+
+static u32 rxrpc_rto_min_us(struct rxrpc_peer *peer)
+{
+	return 200;
+}
+
+static u32 __rxrpc_set_rto(const struct rxrpc_peer *peer)
+{
+	return _usecs_to_jiffies((peer->srtt_us >> 3) + peer->rttvar_us);
+}
+
+static u32 rxrpc_bound_rto(u32 rto)
+{
+	return min(rto, RXRPC_RTO_MAX);
+}
+
+/*
+ * Called to compute a smoothed rtt estimate. The data fed to this
+ * routine either comes from timestamps, or from segments that were
+ * known _not_ to have been retransmitted [see Karn/Partridge
+ * Proceedings SIGCOMM 87]. The algorithm is from the SIGCOMM 88
+ * piece by Van Jacobson.
+ * NOTE: the next three routines used to be one big routine.
+ * To save cycles in the RFC 1323 implementation it was better to break
+ * it up into three procedures. -- erics
+ */
+static void rxrpc_rtt_estimator(struct rxrpc_peer *peer, long sample_rtt_us)
+{
+	long m = sample_rtt_us; /* RTT */
+	u32 srtt = peer->srtt_us;
+
+	/* The following amusing code comes from Jacobson's
+	 * article in SIGCOMM '88.  Note that rtt and mdev
+	 * are scaled versions of rtt and mean deviation.
+	 * This is designed to be as fast as possible
+	 * m stands for "measurement".
+	 *
+	 * On a 1990 paper the rto value is changed to:
+	 * RTO = rtt + 4 * mdev
+	 *
+	 * Funny. This algorithm seems to be very broken.
+	 * These formulae increase RTO, when it should be decreased, increase
+	 * too slowly, when it should be increased quickly, decrease too quickly
+	 * etc. I guess in BSD RTO takes ONE value, so that it is absolutely
+	 * does not matter how to _calculate_ it. Seems, it was trap
+	 * that VJ failed to avoid. 8)
+	 */
+	if (srtt != 0) {
+		m -= (srtt >> 3);	/* m is now error in rtt est */
+		srtt += m;		/* rtt = 7/8 rtt + 1/8 new */
+		if (m < 0) {
+			m = -m;		/* m is now abs(error) */
+			m -= (peer->mdev_us >> 2);   /* similar update on mdev */
+			/* This is similar to one of Eifel findings.
+			 * Eifel blocks mdev updates when rtt decreases.
+			 * This solution is a bit different: we use finer gain
+			 * for mdev in this case (alpha*beta).
+			 * Like Eifel it also prevents growth of rto,
+			 * but also it limits too fast rto decreases,
+			 * happening in pure Eifel.
+			 */
+			if (m > 0)
+				m >>= 3;
+		} else {
+			m -= (peer->mdev_us >> 2);   /* similar update on mdev */
+		}
+
+		peer->mdev_us += m;		/* mdev = 3/4 mdev + 1/4 new */
+		if (peer->mdev_us > peer->mdev_max_us) {
+			peer->mdev_max_us = peer->mdev_us;
+			if (peer->mdev_max_us > peer->rttvar_us)
+				peer->rttvar_us = peer->mdev_max_us;
+		}
+	} else {
+		/* no previous measure. */
+		srtt = m << 3;		/* take the measured time to be rtt */
+		peer->mdev_us = m << 1;	/* make sure rto = 3*rtt */
+		peer->rttvar_us = max(peer->mdev_us, rxrpc_rto_min_us(peer));
+		peer->mdev_max_us = peer->rttvar_us;
+	}
+
+	peer->srtt_us = max(1U, srtt);
+}
+
+/*
+ * Calculate rto without backoff.  This is the second half of Van Jacobson's
+ * routine referred to above.
+ */
+static void rxrpc_set_rto(struct rxrpc_peer *peer)
+{
+	u32 rto;
+
+	/* 1. If rtt variance happened to be less 50msec, it is hallucination.
+	 *    It cannot be less due to utterly erratic ACK generation made
+	 *    at least by solaris and freebsd. "Erratic ACKs" has _nothing_
+	 *    to do with delayed acks, because at cwnd>2 true delack timeout
+	 *    is invisible. Actually, Linux-2.4 also generates erratic
+	 *    ACKs in some circumstances.
+	 */
+	rto = __rxrpc_set_rto(peer);
+
+	/* 2. Fixups made earlier cannot be right.
+	 *    If we do not estimate RTO correctly without them,
+	 *    all the algo is pure shit and should be replaced
+	 *    with correct one. It is exactly, which we pretend to do.
+	 */
+
+	/* NOTE: clamping at RXRPC_RTO_MIN is not required, current algo
+	 * guarantees that rto is higher.
+	 */
+	peer->rto_j = rxrpc_bound_rto(rto);
+}
+
+static void rxrpc_ack_update_rtt(struct rxrpc_peer *peer, long rtt_us)
+{
+	if (rtt_us < 0)
+		return;
+
+	//rxrpc_update_rtt_min(peer, rtt_us);
+	rxrpc_rtt_estimator(peer, rtt_us);
+	rxrpc_set_rto(peer);
+
+	/* RFC6298: only reset backoff on valid RTT measurement. */
+	peer->backoff = 0;
+}
+
+/*
+ * Add RTT information to cache.  This is called in softirq mode and has
+ * exclusive access to the peer RTT data.
+ */
+void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
+			rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
+			ktime_t send_time, ktime_t resp_time)
+{
+	struct rxrpc_peer *peer = call->peer;
+	s64 rtt_us;
+
+	rtt_us = ktime_to_us(ktime_sub(resp_time, send_time));
+	if (rtt_us < 0)
+		return;
+
+	spin_lock(&peer->rtt_input_lock);
+	rxrpc_ack_update_rtt(peer, rtt_us);
+	if (peer->rtt_count < 3)
+		peer->rtt_count++;
+	spin_unlock(&peer->rtt_input_lock);
+
+	trace_rxrpc_rtt_rx(call, why, send_serial, resp_serial,
+			   peer->srtt_us >> 3, peer->rto_j);
+}
+
+/*
+ * Get the retransmission timeout to set in jiffies, backing it off each time
+ * we retransmit.
+ */
+unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *peer, bool retrans)
+{
+	u64 timo_j;
+	u8 backoff = READ_ONCE(peer->backoff);
+
+	timo_j = peer->rto_j;
+	timo_j <<= backoff;
+	if (retrans && timo_j * 2 <= RXRPC_RTO_MAX)
+		WRITE_ONCE(peer->backoff, backoff + 1);
+
+	if (timo_j < 1)
+		timo_j = 1;
+
+	return timo_j;
+}
+
+void rxrpc_peer_init_rtt(struct rxrpc_peer *peer)
+{
+	peer->rto_j = RXRPC_TIMEOUT_INIT;
+	peer->mdev_us = jiffies_to_usecs(RXRPC_TIMEOUT_INIT);
+	peer->backoff = 0;
+	//minmax_reset(&peer->rtt_min, rxrpc_jiffies32, ~0U);
+}
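The new rtt.c replaces rxrpc's old eight-sample RTT average with TCP's Jacobson/Karels estimator: `srtt_us` is kept scaled by 8 and `mdev_us` by 4 so the 7/8 and 3/4 exponential averages reduce to shifts and adds, and the resulting RTO is doubled on each retransmission per RFC 6298. A simplified Python model (a sketch, not kernel code; the `rttvar`/`mdev_max` maintenance and the finer-gain update for negative errors are elided for brevity):

```python
# Scaled Jacobson/Karels RTT estimator with RFC 6298-style
# exponential RTO backoff, mirroring rxrpc_rtt_estimator() and
# rxrpc_get_rto_backoff() in outline.
class PeerRtt:
    def __init__(self):
        self.srtt = 0     # smoothed RTT, scaled by 8 (like srtt_us)
        self.mdev = 0     # mean deviation, scaled by 4 (like mdev_us)
        self.backoff = 0

def rtt_sample(p: PeerRtt, m: int) -> None:
    """Feed one RTT measurement m (microseconds) into the estimator."""
    if p.srtt == 0:                          # first measurement
        p.srtt = m << 3                      # take it as the RTT
        p.mdev = m << 1                      # make sure rto ~ 3*rtt
    else:
        err = m - (p.srtt >> 3)              # error in current estimate
        p.srtt += err                        # srtt = 7/8 srtt + 1/8 m
        p.mdev += abs(err) - (p.mdev >> 2)   # mdev = 3/4 mdev + 1/4 |err|
    p.backoff = 0    # RFC 6298: reset backoff on a valid measurement

def rto_us(p: PeerRtt) -> int:
    """RTO before backoff: estimate plus a deviation-based margin."""
    return (p.srtt >> 3) + p.mdev

def rto_backed_off(p: PeerRtt, retrans: bool, rto_max: int) -> int:
    timo = rto_us(p) << p.backoff
    if retrans and timo * 2 <= rto_max:
        p.backoff += 1                       # double the next timeout
    return max(timo, 1)

p = PeerRtt()
rtt_sample(p, 100)
assert p.srtt >> 3 == 100 and p.mdev == 200
rtt_sample(p, 100)            # steady samples shrink the deviation term
assert p.srtt >> 3 == 100 and p.mdev == 150
t1 = rto_backed_off(p, retrans=True, rto_max=120_000_000)
t2 = rto_backed_off(p, retrans=True, rto_max=120_000_000)
assert t2 == 2 * t1           # exponential backoff while retransmitting
```

The backoff only resets when a fresh RTT sample arrives, which is why `rxrpc_ack_update_rtt()` clears `peer->backoff` after a valid measurement rather than on every ACK.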
+1-2
net/rxrpc/rxkad.c
···
 	ret = rxkad_decrypt_ticket(conn, skb, ticket, ticket_len, &session_key,
 				   &expiry, _abort_code);
 	if (ret < 0)
-		goto temporary_error_free_resp;
+		goto temporary_error_free_ticket;

 	/* use the session key from inside the ticket to decrypt the
 	 * response */
···

 temporary_error_free_ticket:
 	kfree(ticket);
-temporary_error_free_resp:
 	kfree(response);
 temporary_error:
 	/* Ignore the response packet if we got a temporary error such as
+9-17
net/rxrpc/sendmsg.c
···
 					    struct rxrpc_call *call)
 {
 	rxrpc_seq_t tx_start, tx_win;
-	signed long rtt2, timeout;
-	u64 rtt;
+	signed long rtt, timeout;

-	rtt = READ_ONCE(call->peer->rtt);
-	rtt2 = nsecs_to_jiffies64(rtt) * 2;
-	if (rtt2 < 2)
-		rtt2 = 2;
+	rtt = READ_ONCE(call->peer->srtt_us) >> 3;
+	rtt = usecs_to_jiffies(rtt) * 2;
+	if (rtt < 2)
+		rtt = 2;

-	timeout = rtt2;
+	timeout = rtt;
 	tx_start = READ_ONCE(call->tx_hard_ack);

 	for (;;) {
···
 			return -EINTR;

 		if (tx_win != tx_start) {
-			timeout = rtt2;
+			timeout = rtt;
 			tx_start = tx_win;
 		}

···
 			_debug("need instant resend %d", ret);
 			rxrpc_instant_resend(call, ix);
 		} else {
-			unsigned long now = jiffies, resend_at;
+			unsigned long now = jiffies;
+			unsigned long resend_at = now + call->peer->rto_j;

-			if (call->peer->rtt_usage > 1)
-				resend_at = nsecs_to_jiffies(call->peer->rtt * 3 / 2);
-			else
-				resend_at = rxrpc_resend_timeout;
-			if (resend_at < 1)
-				resend_at = 1;
-
-			resend_at += now;
 			WRITE_ONCE(call->resend_at, resend_at);
 			rxrpc_reduce_call_timer(call, resend_at, now,
 						rxrpc_timer_set_for_send);
···
 		timeout = asoc->timeouts[cmd->obj.to];
 		BUG_ON(!timeout);

-		timer->expires = jiffies + timeout;
-		sctp_association_hold(asoc);
-		add_timer(timer);
+		/*
+		 * SCTP has a hard time with timer starts. Because we process
+		 * timer starts as side effects, it can be hard to tell if we
+		 * have already started a timer or not, which leads to BUG
+		 * halts when we call add_timer. So here, instead of just starting
+		 * a timer, if the timer is already started, and just mod
+		 * the timer with the shorter of the two expiration times
+		 */
+		if (!timer_pending(timer))
+			sctp_association_hold(asoc);
+		timer_reduce(timer, jiffies + timeout);
 		break;

 	case SCTP_CMD_TIMER_RESTART:
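The fix works because `timer_reduce()` is idempotent where `add_timer()` is not: it rearms the timer only if it is idle or the new expiry is earlier, so a second "start" side effect can no longer hit the `add_timer()` BUG, and the association reference is taken only for the transition from idle to pending. A Python model of those semantics (a sketch, not kernel code):

```python
# Model of timer_reduce() plus the patched SCTP_CMD_TIMER_START side
# effect: hold the association only when the timer was not already
# pending, and keep the shorter of the two expiration times.
class Timer:
    def __init__(self):
        self.expires = None

    def pending(self) -> bool:
        return self.expires is not None

    def reduce(self, expires: int) -> None:
        """Arm the timer, or pull an armed timer's expiry earlier."""
        if self.expires is None or expires < self.expires:
            self.expires = expires

class Assoc:
    def __init__(self):
        self.refcnt = 0
        self.timer = Timer()

    def start_timer(self, now: int, timeout: int) -> None:
        if not self.timer.pending():
            self.refcnt += 1        # sctp_association_hold()
        self.timer.reduce(now + timeout)

a = Assoc()
a.start_timer(now=0, timeout=30)
a.start_timer(now=0, timeout=10)    # duplicate start: no BUG, no extra hold
assert a.refcnt == 1
assert a.timer.expires == 10        # shorter of the two expiration times
a.start_timer(now=0, timeout=50)
assert a.timer.expires == 10        # a later expiry does not push it out
```

The refcount rule matters as much as the expiry rule: holding the association once per pending timer keeps the release in the timer handler balanced no matter how many start side effects were queued.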
+5-4
net/sctp/sm_statefuns.c
···
 	/* Update the content of current association. */
 	sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
 	sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev));
-	if (sctp_state(asoc, SHUTDOWN_PENDING) &&
+	if ((sctp_state(asoc, SHUTDOWN_PENDING) ||
+	     sctp_state(asoc, SHUTDOWN_SENT)) &&
 	    (sctp_sstate(asoc->base.sk, CLOSING) ||
 	     sock_flag(asoc->base.sk, SOCK_DEAD))) {
-		/* if were currently in SHUTDOWN_PENDING, but the socket
-		 * has been closed by user, don't transition to ESTABLISHED.
-		 * Instead trigger SHUTDOWN bundled with COOKIE_ACK.
+		/* If the socket has been closed by user, don't
+		 * transition to ESTABLISHED. Instead trigger SHUTDOWN
+		 * bundled with COOKIE_ACK.
 		 */
 		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));
 		return sctp_sf_do_9_2_start_shutdown(net, ep, asoc,
···
 		goto out;
 	}

-	/* Guard against races in evm_read_xattrs */
+	/*
+	 * xattr_list_mutex guards against races in evm_read_xattrs().
+	 * Entries are only added to the evm_config_xattrnames list
+	 * and never deleted. Therefore, the list is traversed
+	 * using list_for_each_entry_lockless() without holding
+	 * the mutex in evm_calc_hmac_or_hash(), evm_find_protected_xattrs()
+	 * and evm_protected_xattr().
+	 */
 	mutex_lock(&xattr_list_mutex);
 	list_for_each_entry(tmp, &evm_config_xattrnames, list) {
 		if (strcmp(xattr->name, tmp->name) == 0) {
+6-6
security/integrity/ima/ima_crypto.c
···
 	loff_t i_size;
 	int rc;
 	struct file *f = file;
-	bool new_file_instance = false, modified_flags = false;
+	bool new_file_instance = false, modified_mode = false;

 	/*
 	 * For consistency, fail file's opened with the O_DIRECT flag on
···
 		f = dentry_open(&file->f_path, flags, file->f_cred);
 		if (IS_ERR(f)) {
 			/*
-			 * Cannot open the file again, lets modify f_flags
+			 * Cannot open the file again, lets modify f_mode
 			 * of original and continue
 			 */
 			pr_info_ratelimited("Unable to reopen file for reading.\n");
 			f = file;
-			f->f_flags |= FMODE_READ;
-			modified_flags = true;
+			f->f_mode |= FMODE_READ;
+			modified_mode = true;
 		} else {
 			new_file_instance = true;
 		}
···
 out:
 	if (new_file_instance)
 		fput(f);
-	else if (modified_flags)
-		f->f_flags &= ~FMODE_READ;
+	else if (modified_mode)
+		f->f_mode &= ~FMODE_READ;
 	return rc;
 }
+1-2
security/integrity/ima/ima_fs.c
···
 		integrity_audit_msg(AUDIT_INTEGRITY_STATUS, NULL, NULL,
 				    "policy_update", "signed policy required",
 				    1, 0);
-		if (ima_appraise & IMA_APPRAISE_ENFORCE)
-			result = -EACCES;
+		result = -EACCES;
 	} else {
 		result = ima_parse_add_rule(data);
 	}
+14-2
security/security.c
···

 int security_secid_to_secctx(u32 secid, char **secdata, u32 *seclen)
 {
-	return call_int_hook(secid_to_secctx, -EOPNOTSUPP, secid, secdata,
-			     seclen);
+	struct security_hook_list *hp;
+	int rc;
+
+	/*
+	 * Currently, only one LSM can implement secid_to_secctx (i.e this
+	 * LSM hook is not "stackable").
+	 */
+	hlist_for_each_entry(hp, &security_hook_heads.secid_to_secctx, list) {
+		rc = hp->hook.secid_to_secctx(secid, secdata, seclen);
+		if (rc != LSM_RET_DEFAULT(secid_to_secctx))
+			return rc;
+	}
+
+	return LSM_RET_DEFAULT(secid_to_secctx);
 }
 EXPORT_SYMBOL(security_secid_to_secctx);
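The rewritten `security_secid_to_secctx()` walks the registered hooks and returns the first result that differs from the hook's default, instead of letting `call_int_hook()`'s last-caller-wins behavior clobber a real answer. A Python model of that "first non-default return wins" walk (a sketch, not kernel code; `selinux_hook` is a hypothetical single implementor standing in for whichever LSM provides the hook):

```python
# Model of the patched hook dispatch: iterate registered hooks and
# stop at the first one that returns something other than the hook's
# LSM_RET_DEFAULT value (-EOPNOTSUPP for secid_to_secctx).
EOPNOTSUPP = 95
DEFAULT = -EOPNOTSUPP

def secid_to_secctx(hooks, secid):
    for hook in hooks:
        rc = hook(secid)
        if rc != DEFAULT:
            return rc           # first LSM with a real answer wins
    return DEFAULT              # nobody implements the hook

def selinux_hook(secid):        # hypothetical implementor
    return 0 if secid == 42 else -22   # -EINVAL for unknown secids

assert secid_to_secctx([], 42) == DEFAULT
assert secid_to_secctx([lambda s: DEFAULT, selinux_hook], 42) == 0
assert secid_to_secctx([selinux_hook], 7) == -22
```

The point of the short-circuit is that a hook further down the list can no longer overwrite a successful (or meaningfully failing) result from the one LSM that actually implements the operation.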
···
 	local packets_t0
 	local packets_t1

+	RET=0
+
 	if [ $(devlink_trap_policers_num_get) -eq 0 ]; then
 		check_err 1 "Failed to dump policers"
 	fi
···

 trap_policer_bind_test()
 {
+	RET=0
+
 	devlink trap group set $DEVLINK_DEV group l2_drops policer 1
 	check_err $? "Failed to bind a valid policer"
 	if [ $(devlink_trap_group_policer_get "l2_drops") -ne 1 ]; then