···
 			architectures force reset to be always executed
 	i8042.unlock	[HW] Unlock (ignore) the keylock
 	i8042.kbdreset	[HW] Reset device connected to KBD port
+	i8042.probe_defer
+			[HW] Allow deferred probing upon i8042 probe errors
 
 	i810=		[HW,DRM]
···
 			Default is 1 (enabled)
 
 	kvm-intel.emulate_invalid_guest_state=
-			[KVM,Intel] Enable emulation of invalid guest states
-			Default is 0 (disabled)
+			[KVM,Intel] Disable emulation of invalid guest state.
+			Ignored if kvm-intel.enable_unrestricted_guest=1, as
+			guest state is never invalid for unrestricted guests.
+			This param doesn't apply to nested guests (L2), as KVM
+			never emulates invalid L2 guest state.
+			Default is 1 (enabled)
 
 	kvm-intel.flexpriority=
 			[KVM,Intel] Disable FlexPriority feature (TPR shadow).
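The parameters above are ordinary kernel boot parameters. As an illustrative fragment only (placement and values depend on the distribution; the settings shown are hypothetical, not recommendations), they could be passed via the boot loader:

```
# Illustrative only: defer i8042 probing and disable emulation of
# invalid guest state for kvm-intel.
GRUB_CMDLINE_LINUX="i8042.probe_defer kvm-intel.emulate_invalid_guest_state=0"
```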
···
     description:
       Properties for single BUCK regulator.
 
+    properties:
+      op_mode:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2, 3]
+        default: 1
+        description: |
+          Describes the different operating modes of the regulator with power
+          mode change in SOC. The different possible values are:
+            0 - always off mode
+            1 - on in normal mode
+            2 - low power mode
+            3 - suspend mode
+
     required:
       - regulator-name
···
       Properties for single BUCK regulator.
 
     properties:
+      op_mode:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2, 3]
+        default: 1
+        description: |
+          Describes the different operating modes of the regulator with power
+          mode change in SOC. The different possible values are:
+            0 - always off mode
+            1 - on in normal mode
+            2 - low power mode
+            3 - suspend mode
+
       s5m8767,pmic-ext-control-gpios:
         maxItems: 1
         description: |
Documentation/i2c/summary.rst (+5, -3)
···
 and so are not advertised as being I2C but come under different names,
 e.g. TWI (Two Wire Interface), IIC.
 
-The official I2C specification is the `"I2C-bus specification and user
-manual" (UM10204) <https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_
-published by NXP Semiconductors.
+The latest official I2C specification is the `"I2C-bus specification and user
+manual" (UM10204) <https://www.nxp.com/webapp/Download?colCode=UM10204>`_
+published by NXP Semiconductors. However, you need to log-in to the site to
+access the PDF. An older version of the specification (revision 6) is archived
+`here <https://web.archive.org/web/20210813122132/https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_.
 
 SMBus (System Management Bus) is based on the I2C protocol, and is mostly
 a subset of I2C protocols and signaling. Many I2C devices will work on an
Documentation/networking/bonding.rst (+6, -5)
···
 ad_actor_system
 
 	In an AD system, this specifies the mac-address for the actor in
-	protocol packet exchanges (LACPDUs). The value cannot be NULL or
-	multicast. It is preferred to have the local-admin bit set for this
-	mac but driver does not enforce it. If the value is not given then
-	system defaults to using the masters' mac address as actors' system
-	address.
+	protocol packet exchanges (LACPDUs). The value cannot be a multicast
+	address. If the all-zeroes MAC is specified, bonding will internally
+	use the MAC of the bond itself. It is preferred to have the
+	local-admin bit set for this mac but driver does not enforce it. If
+	the value is not given then system defaults to using the masters'
+	mac address as actors' system address.
 
 	This parameter has effect only in 802.3ad mode and is available through
 	SysFs interface.
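As a hedged illustration of the behaviour described above (the interface name `bond0` and the MAC values are hypothetical), the parameter can be exercised through the bonding SysFs interface:

```
# 802.3ad mode with an explicit actor system address
echo 802.3ad > /sys/class/net/bond0/bonding/mode
echo 00:02:03:04:05:06 > /sys/class/net/bond0/bonding/ad_actor_system

# All-zeroes makes bonding fall back to the bond's own MAC
echo 00:00:00:00:00:00 > /sys/class/net/bond0/bonding/ad_actor_system
```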
···
 	IRQ config, enable, reset
 
 DPNI (Datapath Network Interface)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Contains TX/RX queues, network interface configuration, and RX buffer pool
 configuration mechanisms. The TX/RX queues are in memory and are identified
 by queue number.
···
 a virtual function (VF), jumbo frames must first be enabled in the physical
 function (PF). The VF MTU setting cannot be larger than the PF MTU.
 
+NBASE-T Support
+---------------
+The ixgbe driver supports NBASE-T on some devices. However, the advertisement
+of NBASE-T speeds is suppressed by default, to accommodate broken network
+switches which cannot cope with advertised NBASE-T speeds. Use the ethtool
+command to enable advertising NBASE-T speeds on devices which support it::
+
+  ethtool -s eth? advertise 0x1800000001028
+
+On Linux systems with INTERFACES(5), this can be specified as a pre-up command
+in /etc/network/interfaces so that the interface is always brought up with
+NBASE-T support, e.g.::
+
+  iface eth? inet dhcp
+       pre-up ethtool -s eth? advertise 0x1800000001028 || true
+
 Generic Receive Offload, aka GRO
 --------------------------------
 The driver supports the in-kernel software implementation of GRO. GRO has
Documentation/networking/ip-sysctl.rst (+4, -2)
···
 ip_no_pmtu_disc - INTEGER
 	Disable Path MTU Discovery. If enabled in mode 1 and a
 	fragmentation-required ICMP is received, the PMTU to this
-	destination will be set to min_pmtu (see below). You will need
+	destination will be set to the smallest of the old MTU to
+	this destination and min_pmtu (see below). You will need
 	to raise min_pmtu to the smallest interface MTU on your system
 	manually if you want to avoid locally generated fragments.
···
 	Default: FALSE
 
 min_pmtu - INTEGER
-	default 552 - minimum discovered Path MTU
+	default 552 - minimum Path MTU. Unless this is changed manually,
+	each cached pmtu will never be lower than this setting.
 
 ip_forward_use_pmtu - BOOLEAN
 	By default we don't trust protocol path MTUs while forwarding
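For reference, both knobs above are ordinary sysctls; a minimal config fragment (the values shown are illustrative, not recommendations) would look like:

```
# /etc/sysctl.d/99-pmtu.conf (hypothetical values)
net.ipv4.ip_no_pmtu_disc = 1
# cached PMTUs are clamped so they never drop below this floor
net.ipv4.route.min_pmtu = 1280
```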
Documentation/networking/timestamping.rst (+2, -2)
···
 	  and hardware timestamping is not possible (SKBTX_IN_PROGRESS not set).
 	- As soon as the driver has sent the packet and/or obtained a
 	  hardware time stamp for it, it passes the time stamp back by
-	  calling skb_hwtstamp_tx() with the original skb, the raw
-	  hardware time stamp. skb_hwtstamp_tx() clones the original skb and
+	  calling skb_tstamp_tx() with the original skb, the raw
+	  hardware time stamp. skb_tstamp_tx() clones the original skb and
 	  adds the timestamps, therefore the original skb has to be freed now.
 	  If obtaining the hardware time stamp somehow fails, then the driver
 	  should not fall back to software time stamping. The rationale is that
Documentation/sound/hd-audio/models.rst (+2)
···
 	Headset support on USI machines
 dual-codecs
 	Lenovo laptops with dual codecs
+alc285-hp-amp-init
+	HP laptops which require speaker amplifier initialization (ALC285)
 
 ALC680
 ======
MAINTAINERS (+9, -9)
···
 F:	drivers/phy/qualcomm/phy-ath79-usb.c
 
 ATHEROS ATH GENERIC UTILITIES
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
 F:	drivers/net/wireless/ath/*
···
 F:	drivers/net/wireless/ath/ath5k/
 
 ATHEROS ATH6KL WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Supported
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/ath6kl
···
 F:	include/uapi/linux/netdevice.h
 
 NETWORKING DRIVERS (WIRELESS)
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	linux-wireless@vger.kernel.org
 S:	Maintained
 Q:	http://patchwork.kernel.org/project/linux-wireless/list/
···
 M:	Ryder Lee <ryder.lee@mediatek.com>
 M:	Jianjun Wang <jianjun.wang@mediatek.com>
 L:	linux-pci@vger.kernel.org
-L:	linux-mediatek@lists.infradead.org
+L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
 F:	Documentation/devicetree/bindings/pci/mediatek*
 F:	drivers/pci/controller/*mediatek*
···
 F:	drivers/media/tuners/qt1010*
 
 QUALCOMM ATHEROS ATH10K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	ath10k@lists.infradead.org
 S:	Supported
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/ath10k
···
 F:	drivers/net/wireless/ath/ath10k/
 
 QUALCOMM ATHEROS ATH11K WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	ath11k@lists.infradead.org
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/kvalo/ath.git
···
 F:	drivers/media/platform/qcom/venus/
 
 QUALCOMM WCN36XX WIRELESS DRIVER
-M:	Kalle Valo <kvalo@codeaurora.org>
+M:	Kalle Valo <kvalo@kernel.org>
 L:	wcn36xx@lists.infradead.org
 S:	Supported
 W:	https://wireless.wiki.kernel.org/en/users/Drivers/wcn36xx
···
 SILVACO I3C DUAL-ROLE MASTER
 M:	Miquel Raynal <miquel.raynal@bootlin.com>
 M:	Conor Culhane <conor.culhane@silvaco.com>
-L:	linux-i3c@lists.infradead.org
+L:	linux-i3c@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	Documentation/devicetree/bindings/i3c/silvaco,i3c-master.yaml
 F:	drivers/i3c/master/svc-i3c-master.c
···
 F:	arch/x86/kernel/cpu/zhaoxin.c
 
 ZONEFS FILESYSTEM
-M:	Damien Le Moal <damien.lemoal@wdc.com>
+M:	Damien Le Moal <damien.lemoal@opensource.wdc.com>
 M:	Naohiro Aota <naohiro.aota@wdc.com>
 R:	Johannes Thumshirn <jth@kernel.org>
 L:	linux-fsdevel@vger.kernel.org
···
 	tstne	r0, #0x04000000		@ bit 26 set on both ARM and Thumb-2
 	reteq	lr
 	and	r8, r0, #0x00000f00	@ mask out CP number
- THUMB(	lsr	r8, r8, #8	)
 	mov	r7, #1
-	add	r6, r10, #TI_USED_CP
- ARM(	strb	r7, [r6, r8, lsr #8]	)	@ set appropriate used_cp[]
- THUMB(	strb	r7, [r6, r8]	)	@ set appropriate used_cp[]
+	add	r6, r10, r8, lsr #8	@ add used_cp[] array offset first
+	strb	r7, [r6, #TI_USED_CP]	@ set appropriate used_cp[]
 #ifdef CONFIG_IWMMXT
 	@ Test if we need to give access to iWMMXt coprocessors
 	ldr	r5, [r10, #TI_FLAGS]
···
 	bcs	iwmmxt_task_enable
 #endif
  ARM(	add	pc, pc, r8, lsr #6	)
- THUMB(	lsl	r8, r8, #2	)
+ THUMB(	lsr	r8, r8, #6	)
  THUMB(	add	pc, r8	)
 	nop
arch/arm/kernel/head-nommu.S (+1)
···
 	add	r12, r12, r10
 	ret	r12
 1:	bl	__after_proc_init
+	ldr	r7, __secondary_data		@ reload r7
 	ldr	sp, [r7, #12]			@ set up the stack pointer
 	ldr	r0, [r7, #16]			@ set up task pointer
 	mov	fp, #0
arch/arm/mach-rockchip/platsmp.c (+1, -1)
···
 	rockchip_boot_fn = __pa_symbol(secondary_startup);
 
 	/* copy the trampoline to sram, that runs during startup of the core */
-	memcpy(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz);
+	memcpy_toio(sram_base_addr, &rockchip_secondary_trampoline, trampoline_sz);
 	flush_cache_all();
 	outer_clean_range(0, trampoline_sz);
 
arch/arm64/Kconfig.platforms (-1)
···
 
 config ARCH_MESON
 	bool "Amlogic Platforms"
-	select COMMON_CLK
 	help
 	  This enables support for the arm64 based Amlogic SoCs
 	  such as the s905, S905X/D, S912, A113X/D or S905X/D2
···
 				 initrd_len, cmdline, 0);
 	if (!dtb) {
 		pr_err("Preparing for new dtb failed\n");
+		ret = -EINVAL;
 		goto out_err;
 	}
 
···
 config STACK_GROWSUP
 	def_bool y
 
-config ARCH_DEFCONFIG
-	string
-	default "arch/parisc/configs/generic-32bit_defconfig" if !64BIT
-	default "arch/parisc/configs/generic-64bit_defconfig" if 64BIT
-
 config GENERIC_LOCKBREAK
 	bool
 	default y
arch/parisc/include/asm/futex.h (+2, -2)
···
 _futex_spin_lock(u32 __user *uaddr)
 {
 	extern u32 lws_lock_start[];
-	long index = ((long)uaddr & 0x3f8) >> 1;
+	long index = ((long)uaddr & 0x7f8) >> 1;
 	arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
 	preempt_disable();
 	arch_spin_lock(s);
···
 _futex_spin_unlock(u32 __user *uaddr)
 {
 	extern u32 lws_lock_start[];
-	long index = ((long)uaddr & 0x3f8) >> 1;
+	long index = ((long)uaddr & 0x7f8) >> 1;
 	arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
 	arch_spin_unlock(s);
 	preempt_enable();
arch/parisc/kernel/syscall.S (+1, -1)
···
 	extrd,u	%r1,PSW_W_BIT,1,%r1
 	/* sp must be aligned on 4, so deposit the W bit setting into
 	 * the bottom of sp temporarily */
-	or,ev	%r1,%r30,%r30
+	or,od	%r1,%r30,%r30
 
 	/* Clip LWS number to a 32-bit value for 32-bit processes */
 	depdi	0, 31, 32, %r20
arch/parisc/kernel/traps.c (+2)
···
 		}
 		mmap_read_unlock(current->mm);
 	}
+	/* CPU could not fetch instruction, so clear stale IIR value. */
+	regs->iir = 0xbaadf00d;
 	fallthrough;
 	case 27:
 		/* Data memory protection ID trap */
arch/powerpc/kernel/module_64.c (+34, -8)
···
 			const char *name)
 {
 	long reladdr;
+	func_desc_t desc;
+	int i;
 
 	if (is_mprofile_ftrace_call(name))
 		return create_ftrace_stub(entry, addr, me);
 
-	memcpy(entry->jump, ppc64_stub_insns, sizeof(ppc64_stub_insns));
+	for (i = 0; i < sizeof(ppc64_stub_insns) / sizeof(u32); i++) {
+		if (patch_instruction(&entry->jump[i],
+				      ppc_inst(ppc64_stub_insns[i])))
+			return 0;
+	}
 
 	/* Stub uses address relative to r2. */
 	reladdr = (unsigned long)entry - my_r2(sechdrs, me);
···
 	}
 	pr_debug("Stub %p get data from reladdr %li\n", entry, reladdr);
 
-	entry->jump[0] |= PPC_HA(reladdr);
-	entry->jump[1] |= PPC_LO(reladdr);
-	entry->funcdata = func_desc(addr);
-	entry->magic = STUB_MAGIC;
+	if (patch_instruction(&entry->jump[0],
+			      ppc_inst(entry->jump[0] | PPC_HA(reladdr))))
+		return 0;
+
+	if (patch_instruction(&entry->jump[1],
+			      ppc_inst(entry->jump[1] | PPC_LO(reladdr))))
+		return 0;
+
+	// func_desc_t is 8 bytes if ABIv2, else 16 bytes
+	desc = func_desc(addr);
+	for (i = 0; i < sizeof(func_desc_t) / sizeof(u32); i++) {
+		if (patch_instruction(((u32 *)&entry->funcdata) + i,
+				      ppc_inst(((u32 *)(&desc))[i])))
+			return 0;
+	}
+
+	if (patch_instruction(&entry->magic, ppc_inst(STUB_MAGIC)))
+		return 0;
 
 	return 1;
 }
···
 			me->name, *instruction, instruction);
 		return 0;
 	}
+
 	/* ld r2,R2_STACK_OFFSET(r1) */
-	*instruction = PPC_INST_LD_TOC;
+	if (patch_instruction(instruction, ppc_inst(PPC_INST_LD_TOC)))
+		return 0;
+
 	return 1;
 }
···
 			}
 
 			/* Only replace bits 2 through 26 */
-			*(uint32_t *)location
-				= (*(uint32_t *)location & ~0x03fffffc)
+			value = (*(uint32_t *)location & ~0x03fffffc)
 				| (value & 0x03fffffc);
+
+			if (patch_instruction((u32 *)location, ppc_inst(value)))
+				return -EFAULT;
+
 			break;
 
 		case R_PPC64_REL64:
arch/powerpc/mm/ptdump/ptdump.c (+1, -1)
···
 {
 	pte_t pte = __pte(st->current_flags);
 
-	if (!IS_ENABLED(CONFIG_PPC_DEBUG_WX) || !st->check_wx)
+	if (!IS_ENABLED(CONFIG_DEBUG_WX) || !st->check_wx)
 		return;
 
 	if (!pte_write(pte) || !pte_exec(pte))
arch/powerpc/platforms/85xx/smp.c (+2, -2)
···
 	local_irq_save(flags);
 	hard_irq_disable();
 
-	if (qoriq_pm_ops)
+	if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
 		qoriq_pm_ops->cpu_up_prepare(cpu);
 
 	/* if cpu is not spinning, reset it */
···
 	booting_thread_hwid = cpu_thread_in_core(nr);
 	primary = cpu_first_thread_sibling(nr);
 
-	if (qoriq_pm_ops)
+	if (qoriq_pm_ops && qoriq_pm_ops->cpu_up_prepare)
 		qoriq_pm_ops->cpu_up_prepare(nr);
 
 	/*
···
 CONFIG_UNIX_DIAG=m
 CONFIG_XFRM_USER=m
 CONFIG_NET_KEY=m
+CONFIG_NET_SWITCHDEV=y
 CONFIG_SMC=m
 CONFIG_SMC_DIAG=m
 CONFIG_INET=y
···
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
 CONFIG_MLX5_CORE_EN=y
+CONFIG_MLX5_ESWITCH=y
 # CONFIG_NET_VENDOR_MICREL is not set
 # CONFIG_NET_VENDOR_MICROCHIP is not set
 # CONFIG_NET_VENDOR_MICROSEMI is not set
arch/s390/configs/defconfig (+2)
···
 CONFIG_UNIX_DIAG=m
 CONFIG_XFRM_USER=m
 CONFIG_NET_KEY=m
+CONFIG_NET_SWITCHDEV=y
 CONFIG_SMC=m
 CONFIG_SMC_DIAG=m
 CONFIG_INET=y
···
 CONFIG_MLX4_EN=m
 CONFIG_MLX5_CORE=m
 CONFIG_MLX5_CORE_EN=y
+CONFIG_MLX5_ESWITCH=y
 # CONFIG_NET_VENDOR_MICREL is not set
 # CONFIG_NET_VENDOR_MICROCHIP is not set
 # CONFIG_NET_VENDOR_MICROSEMI is not set
···
 
 	early_reserve_initrd();
 
-	if (efi_enabled(EFI_BOOT))
-		efi_memblock_x86_reserve_range();
-
 	memblock_x86_reserve_range_setup_data();
 
 	reserve_ibft_region();
···
 	}
 
 	return 0;
-}
-
-static char * __init prepare_command_line(void)
-{
-#ifdef CONFIG_CMDLINE_BOOL
-#ifdef CONFIG_CMDLINE_OVERRIDE
-	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
-#else
-	if (builtin_cmdline[0]) {
-		/* append boot loader cmdline to builtin */
-		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
-		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
-		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
-	}
-#endif
-#endif
-
-	strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
-
-	parse_early_param();
-
-	return command_line;
 }
 
 /*
···
 	x86_init.oem.arch_setup();
 
 	/*
-	 * x86_configure_nx() is called before parse_early_param() (called by
-	 * prepare_command_line()) to detect whether hardware doesn't support
-	 * NX (so that the early EHCI debug console setup can safely call
-	 * set_fixmap()). It may then be called again from within noexec_setup()
-	 * during parsing early parameters to honor the respective command line
-	 * option.
-	 */
-	x86_configure_nx();
-
-	/*
-	 * This parses early params and it needs to run before
-	 * early_reserve_memory() because latter relies on such settings
-	 * supplied as early params.
-	 */
-	*cmdline_p = prepare_command_line();
-
-	/*
 	 * Do some memory reservations *before* memory is added to memblock, so
 	 * memblock allocations won't overwrite it.
 	 *
···
 	data_resource.end = __pa_symbol(_edata)-1;
 	bss_resource.start = __pa_symbol(__bss_start);
 	bss_resource.end = __pa_symbol(__bss_stop)-1;
+
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0]) {
+		/* append boot loader cmdline to builtin */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif
+#endif
+
+	strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
+	*cmdline_p = command_line;
+
+	/*
+	 * x86_configure_nx() is called before parse_early_param() to detect
+	 * whether hardware doesn't support NX (so that the early EHCI debug
+	 * console setup can safely call set_fixmap()). It may then be called
+	 * again from within noexec_setup() during parsing early parameters
+	 * to honor the respective command line option.
+	 */
+	x86_configure_nx();
+
+	parse_early_param();
+
+	if (efi_enabled(EFI_BOOT))
+		efi_memblock_x86_reserve_range();
 
 #ifdef CONFIG_MEMORY_HOTPLUG
 	/*
arch/x86/kvm/mmu/mmu.c (+15, -1)
···
 static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
 				struct kvm_page_fault *fault, int mmu_seq)
 {
-	if (is_obsolete_sp(vcpu->kvm, to_shadow_page(vcpu->arch.mmu->root_hpa)))
+	struct kvm_mmu_page *sp = to_shadow_page(vcpu->arch.mmu->root_hpa);
+
+	/* Special roots, e.g. pae_root, are not backed by shadow pages. */
+	if (sp && is_obsolete_sp(vcpu->kvm, sp))
+		return true;
+
+	/*
+	 * Roots without an associated shadow page are considered invalid if
+	 * there is a pending request to free obsolete roots.  The request is
+	 * only a hint that the current root _may_ be obsolete and needs to be
+	 * reloaded, e.g. if the guest frees a PGD that KVM is tracking as a
+	 * previous root, then __kvm_mmu_prepare_zap_page() signals all vCPUs
+	 * to reload even if no vCPU is actively using the root.
+	 */
+	if (!sp && kvm_test_request(KVM_REQ_MMU_RELOAD, vcpu))
 		return true;
 
 	return fault->slot &&
···
 	 * iterator walks off the end of the paging structure.
 	 */
 	bool valid;
+	/*
+	 * True if KVM dropped mmu_lock and yielded in the middle of a walk, in
+	 * which case tdp_iter_next() needs to restart the walk at the root
+	 * level instead of advancing to the next entry.
+	 */
+	bool yielded;
 };
 
 /*
arch/x86/kvm/mmu/tdp_mmu.c (+16, -13)
···
 			     struct tdp_iter *iter,
 			     u64 new_spte)
 {
+	WARN_ON_ONCE(iter->yielded);
+
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
 	/*
···
 		      u64 new_spte, bool record_acc_track,
 		      bool record_dirty_log)
 {
+	WARN_ON_ONCE(iter->yielded);
+
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
 	/*
···
  * If this function should yield and flush is set, it will perform a remote
  * TLB flush before yielding.
  *
- * If this function yields, it will also reset the tdp_iter's walk over the
- * paging structure and the calling function should skip to the next
- * iteration to allow the iterator to continue its traversal from the
- * paging structure root.
+ * If this function yields, iter->yielded is set and the caller must skip to
+ * the next iteration, where tdp_iter_next() will reset the tdp_iter's walk
+ * over the paging structures to allow the iterator to continue its traversal
+ * from the paging structure root.
  *
- * Return true if this function yielded and the iterator's traversal was reset.
- * Return false if a yield was not needed.
+ * Returns true if this function yielded.
  */
-static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
-					     struct tdp_iter *iter, bool flush,
-					     bool shared)
+static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
+							  struct tdp_iter *iter,
+							  bool flush, bool shared)
 {
+	WARN_ON(iter->yielded);
+
 	/* Ensure forward progress has been made before yielding. */
 	if (iter->next_last_level_gfn == iter->yielded_gfn)
 		return false;
···
 
 		WARN_ON(iter->gfn > iter->next_last_level_gfn);
 
-		tdp_iter_restart(iter);
-
-		return true;
+		iter->yielded = true;
 	}
 
-	return false;
+	return iter->yielded;
 }
 
 /*
arch/x86/kvm/svm/svm.c (+12, -9)
···
 	to_svm(vcpu)->vmcb->save.rflags = rflags;
 }
 
+static bool svm_get_if_flag(struct kvm_vcpu *vcpu)
+{
+	struct vmcb *vmcb = to_svm(vcpu)->vmcb;
+
+	return sev_es_guest(vcpu->kvm)
+		? vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK
+		: kvm_get_rflags(vcpu) & X86_EFLAGS_IF;
+}
+
 static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 {
 	switch (reg) {
···
 	if (!gif_set(svm))
 		return true;
 
-	if (sev_es_guest(vcpu->kvm)) {
-		/*
-		 * SEV-ES guests to not expose RFLAGS. Use the VMCB interrupt mask
-		 * bit to determine the state of the IF flag.
-		 */
-		if (!(vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK))
-			return true;
-	} else if (is_guest_mode(vcpu)) {
+	if (is_guest_mode(vcpu)) {
 		/* As long as interrupts are being delivered...  */
 		if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK)
 		    ? !(svm->vmcb01.ptr->save.rflags & X86_EFLAGS_IF)
···
 		if (nested_exit_on_intr(svm))
 			return false;
 	} else {
-		if (!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF))
+		if (!svm_get_if_flag(vcpu))
 			return true;
 	}
 
···
 	.cache_reg = svm_cache_reg,
 	.get_rflags = svm_get_rflags,
 	.set_rflags = svm_set_rflags,
+	.get_if_flag = svm_get_if_flag,
 
 	.tlb_flush_all = svm_flush_tlb,
 	.tlb_flush_current = svm_flush_tlb,
arch/x86/kvm/vmx/vmx.c (+32, -13)
···
 	vmx->emulation_required = vmx_emulation_required(vcpu);
 }
 
+static bool vmx_get_if_flag(struct kvm_vcpu *vcpu)
+{
+	return vmx_get_rflags(vcpu) & X86_EFLAGS_IF;
+}
+
 u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)
 {
 	u32 interruptibility = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);
···
 	if (pi_test_and_set_on(&vmx->pi_desc))
 		return 0;
 
-	if (vcpu != kvm_get_running_vcpu() &&
-	    !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
+	if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
 		kvm_vcpu_kick(vcpu);
 
 	return 0;
···
 		vmx_flush_pml_buffer(vcpu);
 
 	/*
-	 * We should never reach this point with a pending nested VM-Enter, and
-	 * more specifically emulation of L2 due to invalid guest state (see
-	 * below) should never happen as that means we incorrectly allowed a
-	 * nested VM-Enter with an invalid vmcs12.
+	 * KVM should never reach this point with a pending nested VM-Enter.
+	 * More specifically, short-circuiting VM-Entry to emulate L2 due to
+	 * invalid guest state should never happen as that means KVM knowingly
+	 * allowed a nested VM-Enter with an invalid vmcs12. More below.
 	 */
 	if (KVM_BUG_ON(vmx->nested.nested_run_pending, vcpu->kvm))
 		return -EIO;
-
-	/* If guest state is invalid, start emulating */
-	if (vmx->emulation_required)
-		return handle_invalid_guest_state(vcpu);
 
 	if (is_guest_mode(vcpu)) {
 		/*
···
 		 */
 		nested_mark_vmcs12_pages_dirty(vcpu);
 
+		/*
+		 * Synthesize a triple fault if L2 state is invalid. In normal
+		 * operation, nested VM-Enter rejects any attempt to enter L2
+		 * with invalid state. However, those checks are skipped if
+		 * state is being stuffed via RSM or KVM_SET_NESTED_STATE. If
+		 * L2 state is invalid, it means either L1 modified SMRAM state
+		 * or userspace provided bad state. Synthesize TRIPLE_FAULT as
+		 * doing so is architecturally allowed in the RSM case, and is
+		 * the least awful solution for the userspace case without
+		 * risking false positives.
+		 */
+		if (vmx->emulation_required) {
+			nested_vmx_vmexit(vcpu, EXIT_REASON_TRIPLE_FAULT, 0, 0);
+			return 1;
+		}
+
 		if (nested_vmx_reflect_vmexit(vcpu))
 			return 1;
 	}
+
+	/* If guest state is invalid, start emulating.  L2 is handled above. */
+	if (vmx->emulation_required)
+		return handle_invalid_guest_state(vcpu);
 
 	if (exit_reason.failed_vmentry) {
 		dump_vmcs(vcpu);
···
 	 * consistency check VM-Exit due to invalid guest state and bail.
 	 */
 	if (unlikely(vmx->emulation_required)) {
-
-		/* We don't emulate invalid state of a nested guest */
-		vmx->fail = is_guest_mode(vcpu);
+		vmx->fail = 0;
 
 		vmx->exit_reason.full = EXIT_REASON_INVALID_STATE;
 		vmx->exit_reason.failed_vmentry = 1;
···
 	.cache_reg = vmx_cache_reg,
 	.get_rflags = vmx_get_rflags,
 	.set_rflags = vmx_set_rflags,
+	.get_if_flag = vmx_get_if_flag,
 
 	.tlb_flush_all = vmx_flush_tlb_all,
 	.tlb_flush_current = vmx_flush_tlb_current,
arch/x86/kvm/x86.c (+3, -10)
···
 	MSR_IA32_UMWAIT_CONTROL,
 
 	MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1,
-	MSR_ARCH_PERFMON_FIXED_CTR0 + 2, MSR_ARCH_PERFMON_FIXED_CTR0 + 3,
+	MSR_ARCH_PERFMON_FIXED_CTR0 + 2,
 	MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
 	MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
 	MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
···
 
 		if (!msr_info->host_initiated)
 			return 1;
-		if (guest_cpuid_has(vcpu, X86_FEATURE_PDCM) && kvm_get_msr_feature(&msr_ent))
+		if (kvm_get_msr_feature(&msr_ent))
 			return 1;
 		if (data & ~msr_ent.data)
 			return 1;
···
 {
 	struct kvm_run *kvm_run = vcpu->run;
 
-	/*
-	 * if_flag is obsolete and useless, so do not bother
-	 * setting it for SEV-ES guests.  Userspace can just
-	 * use kvm_run->ready_for_interrupt_injection.
-	 */
-	kvm_run->if_flag = !vcpu->arch.guest_state_protected
-		&& (kvm_get_rflags(vcpu) & X86_EFLAGS_IF) != 0;
-
+	kvm_run->if_flag = static_call(kvm_x86_get_if_flag)(vcpu);
 	kvm_run->cr8 = kvm_get_cr8(vcpu);
 	kvm_run->apic_base = kvm_get_apic_base(vcpu);
 
arch/x86/net/bpf_jit_comp.c (+43, -8)
···
 	case BPF_LDX | BPF_MEM | BPF_DW:
 	case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
 		if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
-			/* test src_reg, src_reg */
-			maybe_emit_mod(&prog, src_reg, src_reg, true); /* always 1 byte */
-			EMIT2(0x85, add_2reg(0xC0, src_reg, src_reg));
-			/* jne start_of_ldx */
-			EMIT2(X86_JNE, 0);
+			/* Though the verifier prevents negative insn->off in BPF_PROBE_MEM
+			 * add abs(insn->off) to the limit to make sure that negative
+			 * offset won't be an issue.
+			 * insn->off is s16, so it won't affect valid pointers.
+			 */
+			u64 limit = TASK_SIZE_MAX + PAGE_SIZE + abs(insn->off);
+			u8 *end_of_jmp1, *end_of_jmp2;
+
+			/* Conservatively check that src_reg + insn->off is a kernel address:
+			 * 1. src_reg + insn->off >= limit
+			 * 2. src_reg + insn->off doesn't become small positive.
+			 * Cannot do src_reg + insn->off >= limit in one branch,
+			 * since it needs two spare registers, but JIT has only one.
+			 */
+
+			/* movabsq r11, limit */
+			EMIT2(add_1mod(0x48, AUX_REG), add_1reg(0xB8, AUX_REG));
+			EMIT((u32)limit, 4);
+			EMIT(limit >> 32, 4);
+			/* cmp src_reg, r11 */
+			maybe_emit_mod(&prog, src_reg, AUX_REG, true);
+			EMIT2(0x39, add_2reg(0xC0, src_reg, AUX_REG));
+			/* if unsigned '<' goto end_of_jmp2 */
+			EMIT2(X86_JB, 0);
+			end_of_jmp1 = prog;
+
+			/* mov r11, src_reg */
+			emit_mov_reg(&prog, true, AUX_REG, src_reg);
+			/* add r11, insn->off */
+			maybe_emit_1mod(&prog, AUX_REG, true);
+			EMIT2_off32(0x81, add_1reg(0xC0, AUX_REG), insn->off);
+			/* jmp if not carry to start_of_ldx
+			 * Otherwise ERR_PTR(-EINVAL) + 128 will be the user addr
+			 * that has to be rejected.
+			 */
+			EMIT2(0x73 /* JNC */, 0);
+			end_of_jmp2 = prog;
+
 			/* xor dst_reg, dst_reg */
 			emit_mov_imm32(&prog, false, dst_reg, 0);
 			/* jmp byte_after_ldx */
 			EMIT2(0xEB, 0);
 
-			/* populate jmp_offset for JNE above */
-			temp[4] = prog - temp - 5 /* sizeof(test + jne) */;
+			/* populate jmp_offset for JB above to jump to xor dst_reg */
+			end_of_jmp1[-1] = end_of_jmp2 - end_of_jmp1;
+			/* populate jmp_offset for JNC above to jump to start_of_ldx */
 			start_of_ldx = prog;
+			end_of_jmp2[-1] = start_of_ldx - end_of_jmp2;
 		}
 		emit_ldx(&prog, BPF_SIZE(insn->code), dst_reg, src_reg, insn->off);
 		if (BPF_MODE(insn->code) == BPF_PROBE_MEM) {
···
 		 * End result: x86 insn "mov rbx, qword ptr [rax+0x14]"
 		 * of 4 bytes will be ignored and rbx will be zero inited.
 		 */
-		ex->fixup = (prog - temp) | (reg2pt_regs[dst_reg] << 8);
+		ex->fixup = (prog - start_of_ldx) | (reg2pt_regs[dst_reg] << 8);
 	}
 	break;
···28592859 goto invalid_fld;28602860 }2861286128622862- if (ata_is_ncq(tf->protocol) && (cdb[2 + cdb_offset] & 0x3) == 0)28632863- tf->protocol = ATA_PROT_NCQ_NODATA;28622862+ if ((cdb[2 + cdb_offset] & 0x3) == 0) {28632863+ /*28642864+ * When T_LENGTH is zero (No data is transferred), dir should28652865+ * be DMA_NONE.28662866+ */28672867+ if (scmd->sc_data_direction != DMA_NONE) {28682868+ fp = 2 + cdb_offset;28692869+ goto invalid_fld;28702870+ }28712871+28722872+ if (ata_is_ncq(tf->protocol))28732873+ tf->protocol = ATA_PROT_NCQ_NODATA;28742874+ }2864287528652876 /* enable LBA */28662877 tf->flags |= ATA_TFLAG_LBA;
+4-1
drivers/auxdisplay/charlcd.c
···3737 bool must_clear;38383939 /* contains the LCD config state */4040- unsigned long int flags;4040+ unsigned long flags;41414242 /* Current escape sequence and its length or -1 if outside */4343 struct {···578578 * Since charlcd_init_display() needs to write data, we have to579579 * mark the LCD initialized just before.580580 */581581+ if (WARN_ON(!lcd->ops->init_display))582582+ return -EINVAL;583583+581584 ret = lcd->ops->init_display(lcd);582585 if (ret)583586 return ret;
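The charlcd hunk above guards a driver-supplied callback before invoking it, so a driver that never filled in `init_display` fails cleanly instead of dereferencing NULL. A minimal standalone sketch of the pattern (struct and names invented, `-22` standing in for `-EINVAL`):

```c
#include <stddef.h>

/* Invented miniature of the charlcd ops table: only the hook we care about. */
struct lcd_ops {
	int (*init_display)(void);
};

/* Refuse to run init when the hook was never provided. */
static int lcd_init(const struct lcd_ops *ops)
{
	if (!ops->init_display)
		return -22;	/* stands in for -EINVAL */

	return ops->init_display();
}

/* Stub hook that "succeeds", for demonstration. */
static int fake_init_display(void)
{
	return 0;
}
```

With the guard, a half-populated ops table returns an error code the caller can propagate rather than crashing.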
+1-1
drivers/base/power/main.c
···19021902 device_block_probing();1903190319041904 mutex_lock(&dpm_list_mtx);19051905- while (!list_empty(&dpm_list)) {19051905+ while (!list_empty(&dpm_list) && !error) {19061906 struct device *dev = to_device(dpm_list.next);1907190719081908 get_device(dev);
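The one-character fix above adds `&& !error` so the dpm list walk stops at the first failing device instead of continuing (or, since a failed device stays at the head of the list, spinning on the same entry). A toy model of the loop-condition change, with an array standing in for the device list (names invented):

```c
/* Walk "devices" (modeled as an array of per-device result codes) until
 * either the end is reached or one of them fails. The "&& !error" clause
 * is the fix: without it the loop would keep going after a failure. */
static int prepare_all(const int *results, int n, int *visited)
{
	int error = 0;
	int i = 0;

	*visited = 0;
	while (i < n && !error) {
		error = results[i++];
		(*visited)++;
	}
	return error;
}

/* Convenience wrapper: how many entries were touched before stopping. */
static int visited_until_error(const int *results, int n)
{
	int v;

	prepare_all(results, n, &v);
	return v;
}
```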
+12-3
drivers/block/xen-blkfront.c
···15121512 unsigned long flags;15131513 struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;15141514 struct blkfront_info *info = rinfo->dev_info;15151515+ unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;1515151615161516- if (unlikely(info->connected != BLKIF_STATE_CONNECTED))15171517+ if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {15181518+ xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);15171519 return IRQ_HANDLED;15201520+ }1518152115191522 spin_lock_irqsave(&rinfo->ring_lock, flags);15201523 again:···15321529 for (i = rinfo->ring.rsp_cons; i != rp; i++) {15331530 unsigned long id;15341531 unsigned int op;15321532+15331533+ eoiflag = 0;1535153415361535 RING_COPY_RESPONSE(&rinfo->ring, i, &bret);15371536 id = bret.id;···1651164616521647 spin_unlock_irqrestore(&rinfo->ring_lock, flags);1653164816491649+ xen_irq_lateeoi(irq, eoiflag);16501650+16541651 return IRQ_HANDLED;1655165216561653 err:16571654 info->connected = BLKIF_STATE_ERROR;1658165516591656 spin_unlock_irqrestore(&rinfo->ring_lock, flags);16571657+16581658+ /* No EOI in order to avoid further interrupts. */1660165916611660 pr_alert("%s disabled for further use\n", info->gd->disk_name);16621661 return IRQ_HANDLED;···17011692 if (err)17021693 goto fail;1703169417041704- err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0,17051705- "blkif", rinfo);16951695+ err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt,16961696+ 0, "blkif", rinfo);17061697 if (err <= 0) {17071698 xenbus_dev_fatal(dev, err,17081699 "bind_evtchn_to_irqhandler failed");
+4-4
drivers/bus/sunxi-rsb.c
···687687688688static void sunxi_rsb_hw_exit(struct sunxi_rsb *rsb)689689{690690- /* Keep the clock and PM reference counts consistent. */691691- if (pm_runtime_status_suspended(rsb->dev))692692- pm_runtime_resume(rsb->dev);693690 reset_control_assert(rsb->rstc);694694- clk_disable_unprepare(rsb->clk);691691+692692+ /* Keep the clock and PM reference counts consistent. */693693+ if (!pm_runtime_status_suspended(rsb->dev))694694+ clk_disable_unprepare(rsb->clk);695695}696696697697static int __maybe_unused sunxi_rsb_runtime_suspend(struct device *dev)
+14-9
drivers/char/ipmi/ipmi_msghandler.c
···30313031 * with removing the device attributes while reading a device30323032 * attribute.30333033 */30343034- schedule_work(&bmc->remove_work);30343034+ queue_work(remove_work_wq, &bmc->remove_work);30353035}3036303630373037/*···53925392 if (initialized)53935393 goto out;5394539453955395- init_srcu_struct(&ipmi_interfaces_srcu);53955395+ rv = init_srcu_struct(&ipmi_interfaces_srcu);53965396+ if (rv)53975397+ goto out;53985398+53995399+ remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");54005400+ if (!remove_work_wq) {54015401+ pr_err("unable to create ipmi-msghandler-remove-wq workqueue");54025402+ rv = -ENOMEM;54035403+ goto out_wq;54045404+ }5396540553975406 timer_setup(&ipmi_timer, ipmi_timeout, 0);53985407 mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);5399540854005409 atomic_notifier_chain_register(&panic_notifier_list, &panic_block);5401541054025402- remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");54035403- if (!remove_work_wq) {54045404- pr_err("unable to create ipmi-msghandler-remove-wq workqueue");54055405- rv = -ENOMEM;54065406- goto out;54075407- }54085408-54095411 initialized = true;5410541254135413+out_wq:54145414+ if (rv)54155415+ cleanup_srcu_struct(&ipmi_interfaces_srcu);54115416out:54125417 mutex_unlock(&ipmi_interfaces_mutex);54135418 return rv;
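The ipmi_msghandler hunk above reorders initialization so that each failure path unwinds exactly what was already set up, in reverse order of acquisition. A simplified sketch of that goto-unwind shape (stub resources and names invented; the real patch additionally folds the srcu cleanup under an `out_wq` label guarded by `rv`):

```c
#include <string.h>

/* Trace buffer recording acquire ("+") and release ("-") events,
 * purely for illustrating the ordering. */
static char log_buf[64];

static int init_srcu(int fail)
{
	strcat(log_buf, "srcu+");	/* logs the attempt */
	return fail ? -1 : 0;
}

static void cleanup_srcu(void)
{
	strcat(log_buf, "srcu-");
}

static int create_wq(int fail)
{
	strcat(log_buf, "wq+");		/* logs the attempt */
	return fail ? -1 : 0;
}

/* Two-resource init: a later failure releases only the earlier resource. */
static int msghandler_init(int fail_srcu, int fail_wq)
{
	int rv;

	log_buf[0] = '\0';

	rv = init_srcu(fail_srcu);
	if (rv)
		goto out;		/* nothing acquired yet */

	rv = create_wq(fail_wq);
	if (rv)
		goto out_srcu;		/* undo only what already succeeded */

	return 0;

out_srcu:
	cleanup_srcu();
out:
	return rv;
}
```

The point of the patch is precisely this invariant: no error path frees something that was never allocated, and nothing allocated is leaked.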
+4-3
drivers/char/ipmi/ipmi_ssif.c
···16591659 }16601660 }1661166116621662+ ssif_info->client = client;16631663+ i2c_set_clientdata(client, ssif_info);16641664+16621665 rv = ssif_check_and_remove(client, ssif_info);16631666 /* If rv is 0 and addr source is not SI_ACPI, continue probing */16641667 if (!rv && ssif_info->addr_source == SI_ACPI) {···16811678 "Trying %s-specified SSIF interface at i2c address 0x%x, adapter %s, slave address 0x%x\n",16821679 ipmi_addr_src_to_str(ssif_info->addr_source),16831680 client->addr, client->adapter->name, slave_addr);16841684-16851685- ssif_info->client = client;16861686- i2c_set_clientdata(client, ssif_info);1687168116881682 /* Now check for system interface capabilities */16891683 msg[0] = IPMI_NETFN_APP_REQUEST << 2;···1881188118821882 dev_err(&ssif_info->client->dev,18831883 "Unable to start IPMI SSIF: %d\n", rv);18841884+ i2c_set_clientdata(client, NULL);18841885 kfree(ssif_info);18851886 }18861887 kfree(resp);
+12-3
drivers/clk/clk.c
···3418341834193419 clk_prepare_lock();3420342034213421+ /*34223422+ * Set hw->core after grabbing the prepare_lock to synchronize with34233423+ * callers of clk_core_fill_parent_index() where we treat hw->core34243424+ * being NULL as the clk not being registered yet. This is crucial so34253425+ * that clks aren't parented until their parent is fully registered.34263426+ */34273427+ core->hw->core = core;34283428+34213429 ret = clk_pm_runtime_get(core);34223430 if (ret)34233431 goto unlock;···35903582out:35913583 clk_pm_runtime_put(core);35923584unlock:35933593- if (ret)35853585+ if (ret) {35943586 hlist_del_init(&core->child_node);35873587+ core->hw->core = NULL;35883588+ }3595358935963590 clk_prepare_unlock();35973591···38573847 core->num_parents = init->num_parents;38583848 core->min_rate = 0;38593849 core->max_rate = ULONG_MAX;38603860- hw->core = core;3861385038623851 ret = clk_core_populate_parent_map(core, init);38633852 if (ret)···38743865 goto fail_create_clk;38753866 }3876386738773877- clk_core_link_consumer(hw->core, hw->clk);38683868+ clk_core_link_consumer(core, hw->clk);3878386938793870 ret = __clk_core_init(core);38803871 if (!ret)
+7
drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.c
···211211 return adf_4xxx_fw_config[obj_num].ae_mask;212212}213213214214+static u32 get_vf2pf_sources(void __iomem *pmisc_addr)215215+{216216+ /* For the moment do not report vf2pf sources */217217+ return 0;218218+}219219+214220void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data)215221{216222 hw_data->dev_class = &adf_4xxx_class;···260254 hw_data->set_msix_rttable = set_msix_default_rttable;261255 hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;262256 hw_data->enable_pfvf_comms = pfvf_comms_disabled;257257+ hw_data->get_vf2pf_sources = get_vf2pf_sources;263258 hw_data->disable_iov = adf_disable_sriov;264259 hw_data->min_iov_compat_ver = ADF_PFVF_COMPAT_THIS_VERSION;265260
···106106{107107 struct idxd_desc *d, *t, *found = NULL;108108 struct llist_node *head;109109+ LIST_HEAD(flist);109110110111 desc->completion->status = IDXD_COMP_DESC_ABORT;111112 /*···121120 found = desc;122121 continue;123122 }124124- list_add_tail(&desc->list, &ie->work_list);123123+124124+ if (d->completion->status)125125+ list_add_tail(&d->list, &flist);126126+ else127127+ list_add_tail(&d->list, &ie->work_list);125128 }126129 }127130···135130136131 if (found)137132 complete_desc(found, IDXD_COMPLETE_ABORT);133133+134134+ /*135135+ * complete_desc() will return desc to allocator and the desc can be136136+ * acquired by a different process and the desc->list can be modified.137137+ * Delete desc from list so the list traversal does not get corrupted138138+ * by the other process.139139+ */140140+ list_for_each_entry_safe(d, t, &flist, list) {141141+ list_del_init(&d->list);142142+ complete_desc(d, IDXD_COMPLETE_NORMAL);143143+ }138144}139145140146int idxd_submit_desc(struct idxd_wq *wq, struct idxd_desc *desc)
···4646struct dln2_gpio {4747 struct platform_device *pdev;4848 struct gpio_chip gpio;4949+ struct irq_chip irqchip;49505051 /*5152 * Cache pin direction to save us one transfer, since the hardware has···384383 mutex_unlock(&dln2->irq_lock);385384}386385387387-static struct irq_chip dln2_gpio_irqchip = {388388- .name = "dln2-irq",389389- .irq_mask = dln2_irq_mask,390390- .irq_unmask = dln2_irq_unmask,391391- .irq_set_type = dln2_irq_set_type,392392- .irq_bus_lock = dln2_irq_bus_lock,393393- .irq_bus_sync_unlock = dln2_irq_bus_unlock,394394-};395395-396386static void dln2_gpio_event(struct platform_device *pdev, u16 echo,397387 const void *data, int len)398388{···465473 dln2->gpio.direction_output = dln2_gpio_direction_output;466474 dln2->gpio.set_config = dln2_gpio_set_config;467475476476+ dln2->irqchip.name = "dln2-irq",477477+ dln2->irqchip.irq_mask = dln2_irq_mask,478478+ dln2->irqchip.irq_unmask = dln2_irq_unmask,479479+ dln2->irqchip.irq_set_type = dln2_irq_set_type,480480+ dln2->irqchip.irq_bus_lock = dln2_irq_bus_lock,481481+ dln2->irqchip.irq_bus_sync_unlock = dln2_irq_bus_unlock,482482+468483 girq = &dln2->gpio.irq;469469- girq->chip = &dln2_gpio_irqchip;484484+ girq->chip = &dln2->irqchip;470485 /* The event comes from the outside so no parent handler */471486 girq->parent_handler = NULL;472487 girq->num_parents = 0;
+1-5
drivers/gpio/gpio-virtio.c
···100100 virtqueue_kick(vgpio->request_vq);101101 mutex_unlock(&vgpio->lock);102102103103- if (!wait_for_completion_timeout(&line->completion, HZ)) {104104- dev_err(dev, "GPIO operation timed out\n");105105- ret = -ETIMEDOUT;106106- goto out;107107- }103103+ wait_for_completion(&line->completion);108104109105 if (unlikely(res->status != VIRTIO_GPIO_STATUS_OK)) {110106 dev_err(dev, "GPIO request failed: %d\n", gpio);
+8-9
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···31663166bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type)31673167{31683168 switch (asic_type) {31693169+#ifdef CONFIG_DRM_AMDGPU_SI31703170+ case CHIP_HAINAN:31713171+#endif31723172+ case CHIP_TOPAZ:31733173+ /* chips with no display hardware */31743174+ return false;31693175#if defined(CONFIG_DRM_AMD_DC)31703176 case CHIP_TAHITI:31713177 case CHIP_PITCAIRN:···44674461int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,44684462 struct amdgpu_reset_context *reset_context)44694463{44704470- int i, j, r = 0;44644464+ int i, r = 0;44714465 struct amdgpu_job *job = NULL;44724466 bool need_full_reset =44734467 test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);···4489448344904484 /*clear job fence from fence drv to avoid force_completion44914485 *leave NULL and vm flush fence in fence drv */44924492- for (j = 0; j <= ring->fence_drv.num_fences_mask; j++) {44934493- struct dma_fence *old, **ptr;44864486+ amdgpu_fence_driver_clear_job_fences(ring);4494448744954495- ptr = &ring->fence_drv.fences[j];44964496- old = rcu_dereference_protected(*ptr, 1);44974497- if (old && test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &old->flags)) {44984498- RCU_INIT_POINTER(*ptr, NULL);44994499- }45004500- }45014488 /* after all hw jobs are reset, hw fence is meaningless, so force_completion */45024489 amdgpu_fence_driver_force_completion(ring);45034490 }
···384384 struct amdgpu_vm_bo_base *bo_base;385385 int r;386386387387- if (bo->tbo.resource->mem_type == TTM_PL_SYSTEM)387387+ if (!bo->tbo.resource || bo->tbo.resource->mem_type == TTM_PL_SYSTEM)388388 return;389389390390 r = ttm_bo_validate(&bo->tbo, &placement, &ctx);
+23-4
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···328328329329/**330330 * DOC: runpm (int)331331- * Override for runtime power management control for dGPUs in PX/HG laptops. The amdgpu driver can dynamically power down332332- * the dGPU on PX/HG laptops when it is idle. The default is -1 (auto enable). Setting the value to 0 disables this functionality.331331+ * Override for runtime power management control for dGPUs. The amdgpu driver can dynamically power down332332+ * the dGPUs when they are idle if supported. The default is -1 (auto enable).333333+ * Setting the value to 0 disables this functionality.333334 */334334-MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = PX only default)");335335+MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = auto)");335336module_param_named(runpm, amdgpu_runtime_pm, int, 0444);336337337338/**···21542153 adev->in_s3 = true;21552154 r = amdgpu_device_suspend(drm_dev, true);21562155 adev->in_s3 = false;21572157-21562156+ if (r)21572157+ return r;21582158+ if (!adev->in_s0ix)21592159+ r = amdgpu_asic_reset(adev);21582160 return r;21592161}21602162···22382234 if (amdgpu_device_supports_px(drm_dev))22392235 drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;2240223622372237+ /*22382238+ * By setting mp1_state as PP_MP1_STATE_UNLOAD, MP1 will do some22392239+ * proper cleanups and put itself into a state ready for PNP. 
That22402240+ * can address some random resuming failure observed on BOCO capable22412241+ * platforms.22422242+ * TODO: this may be also needed for PX capable platform.22432243+ */22442244+ if (amdgpu_device_supports_boco(drm_dev))22452245+ adev->mp1_state = PP_MP1_STATE_UNLOAD;22462246+22412247 ret = amdgpu_device_suspend(drm_dev, false);22422248 if (ret) {22432249 adev->in_runpm = false;22502250+ if (amdgpu_device_supports_boco(drm_dev))22512251+ adev->mp1_state = PP_MP1_STATE_NONE;22442252 return ret;22452253 }22542254+22552255+ if (amdgpu_device_supports_boco(drm_dev))22562256+ adev->mp1_state = PP_MP1_STATE_NONE;2246225722472258 if (amdgpu_device_supports_px(drm_dev)) {22482259 /* Only need to handle PCI state in the driver for ATPX
+87-39
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
···7777 * Cast helper7878 */7979static const struct dma_fence_ops amdgpu_fence_ops;8080+static const struct dma_fence_ops amdgpu_job_fence_ops;8081static inline struct amdgpu_fence *to_amdgpu_fence(struct dma_fence *f)8182{8283 struct amdgpu_fence *__f = container_of(f, struct amdgpu_fence, base);83848484- if (__f->base.ops == &amdgpu_fence_ops)8585+ if (__f->base.ops == &amdgpu_fence_ops ||8686+ __f->base.ops == &amdgpu_job_fence_ops)8587 return __f;86888789 return NULL;···160158 }161159162160 seq = ++ring->fence_drv.sync_seq;163163- if (job != NULL && job->job_run_counter) {161161+ if (job && job->job_run_counter) {164162 /* reinit seq for resubmitted jobs */165163 fence->seqno = seq;166164 } else {167167- dma_fence_init(fence, &amdgpu_fence_ops,168168- &ring->fence_drv.lock,169169- adev->fence_context + ring->idx,170170- seq);171171- }172172-173173- if (job != NULL) {174174- /* mark this fence has a parent job */175175- set_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &fence->flags);165165+ if (job)166166+ dma_fence_init(fence, &amdgpu_job_fence_ops,167167+ &ring->fence_drv.lock,168168+ adev->fence_context + ring->idx, seq);169169+ else170170+ dma_fence_init(fence, &amdgpu_fence_ops,171171+ &ring->fence_drv.lock,172172+ adev->fence_context + ring->idx, seq);176173 }177174178175 amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,···622621}623622624623/**624624+ * amdgpu_fence_driver_clear_job_fences - clear job embedded fences of ring625625+ *626626+ * @ring: fence of the ring to be cleared627627+ *628628+ */629629+void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)630630+{631631+ int i;632632+ struct dma_fence *old, **ptr;633633+634634+ for (i = 0; i <= ring->fence_drv.num_fences_mask; i++) {635635+ ptr = &ring->fence_drv.fences[i];636636+ old = rcu_dereference_protected(*ptr, 1);637637+ if (old && old->ops == &amdgpu_job_fence_ops)638638+ RCU_INIT_POINTER(*ptr, NULL);639639+ }640640+}641641+642642+/**625643 * amdgpu_fence_driver_force_completion - 
force signal latest fence of ring626644 *627645 * @ring: fence of the ring to signal···663643664644static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)665645{666666- struct amdgpu_ring *ring;646646+ return (const char *)to_amdgpu_fence(f)->ring->name;647647+}667648668668- if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) {669669- struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);649649+static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)650650+{651651+ struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);670652671671- ring = to_amdgpu_ring(job->base.sched);672672- } else {673673- ring = to_amdgpu_fence(f)->ring;674674- }675675- return (const char *)ring->name;653653+ return (const char *)to_amdgpu_ring(job->base.sched)->name;676654}677655678656/**···683665 */684666static bool amdgpu_fence_enable_signaling(struct dma_fence *f)685667{686686- struct amdgpu_ring *ring;668668+ if (!timer_pending(&to_amdgpu_fence(f)->ring->fence_drv.fallback_timer))669669+ amdgpu_fence_schedule_fallback(to_amdgpu_fence(f)->ring);687670688688- if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) {689689- struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);671671+ return true;672672+}690673691691- ring = to_amdgpu_ring(job->base.sched);692692- } else {693693- ring = to_amdgpu_fence(f)->ring;694694- }674674+/**675675+ * amdgpu_job_fence_enable_signaling - enable signalling on job fence676676+ * @f: fence677677+ *678678+ * This is the simliar function with amdgpu_fence_enable_signaling above, it679679+ * only handles the job embedded fence.680680+ */681681+static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)682682+{683683+ struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);695684696696- if (!timer_pending(&ring->fence_drv.fallback_timer))697697- amdgpu_fence_schedule_fallback(ring);685685+ if 
(!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))686686+ amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));698687699688 return true;700689}···717692{718693 struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);719694720720- if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) {721721- /* free job if fence has a parent job */722722- struct amdgpu_job *job;723723-724724- job = container_of(f, struct amdgpu_job, hw_fence);725725- kfree(job);726726- } else {727695 /* free fence_slab if it's separated fence*/728728- struct amdgpu_fence *fence;696696+ kmem_cache_free(amdgpu_fence_slab, to_amdgpu_fence(f));697697+}729698730730- fence = to_amdgpu_fence(f);731731- kmem_cache_free(amdgpu_fence_slab, fence);732732- }699699+/**700700+ * amdgpu_job_fence_free - free up the job with embedded fence701701+ *702702+ * @rcu: RCU callback head703703+ *704704+ * Free up the job with embedded fence after the RCU grace period.705705+ */706706+static void amdgpu_job_fence_free(struct rcu_head *rcu)707707+{708708+ struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);709709+710710+ /* free job if fence has a parent job */711711+ kfree(container_of(f, struct amdgpu_job, hw_fence));733712}734713735714/**···749720 call_rcu(&f->rcu, amdgpu_fence_free);750721}751722723723+/**724724+ * amdgpu_job_fence_release - callback that job embedded fence can be freed725725+ *726726+ * @f: fence727727+ *728728+ * This is the simliar function with amdgpu_fence_release above, it729729+ * only handles the job embedded fence.730730+ */731731+static void amdgpu_job_fence_release(struct dma_fence *f)732732+{733733+ call_rcu(&f->rcu, amdgpu_job_fence_free);734734+}735735+752736static const struct dma_fence_ops amdgpu_fence_ops = {753737 .get_driver_name = amdgpu_fence_get_driver_name,754738 .get_timeline_name = amdgpu_fence_get_timeline_name,···769727 .release = amdgpu_fence_release,770728};771729730730+static const struct dma_fence_ops 
amdgpu_job_fence_ops = {731731+ .get_driver_name = amdgpu_fence_get_driver_name,732732+ .get_timeline_name = amdgpu_job_fence_get_timeline_name,733733+ .enable_signaling = amdgpu_job_fence_enable_signaling,734734+ .release = amdgpu_job_fence_release,735735+};772736773737/*774738 * Fence debugfs
+1-3
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
···5353#define AMDGPU_FENCE_FLAG_INT (1 << 1)5454#define AMDGPU_FENCE_FLAG_TC_WB_ONLY (1 << 2)55555656-/* fence flag bit to indicate the face is embedded in job*/5757-#define AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT (DMA_FENCE_FLAG_USER_BITS + 1)5858-5956#define to_amdgpu_ring(s) container_of((s), struct amdgpu_ring, sched)60576158#define AMDGPU_IB_POOL_SIZE (1024 * 1024)···111114 struct dma_fence **fences;112115};113116117117+void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);114118void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);115119116120int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
···18081808 return 0;18091809 }1810181018111811+ /*18121812+ * Pair the operations done in gmc_v9_0_hw_init and thus maintain18131813+ * a correct cached state for GMC. Otherwise, the "gate" again18141814+ * operation on S3 resuming will fail due to wrong cached state.18151815+ */18161816+ if (adev->mmhub.funcs->update_power_gating)18171817+ adev->mmhub.funcs->update_power_gating(adev, false);18181818+18111819 amdgpu_irq_put(adev, &adev->gmc.ecc_irq, 0);18121820 amdgpu_irq_put(adev, &adev->gmc.vm_fault, 0);18131821
···13281328 pp_dpm_powergate_vce(handle, gate);13291329 break;13301330 case AMD_IP_BLOCK_TYPE_GMC:13311331- pp_dpm_powergate_mmhub(handle);13311331+ /*13321332+ * For now, this is only used on PICASSO.13331333+ * And only "gate" operation is supported.13341334+ */13351335+ if (gate)13361336+ pp_dpm_powergate_mmhub(handle);13321337 break;13331338 case AMD_IP_BLOCK_TYPE_GFX:13341339 ret = pp_dpm_powergate_gfx(handle, gate);
···120120121121int smu_v12_0_set_gfx_cgpg(struct smu_context *smu, bool enable)122122{123123- if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG))123123+ /* SMU12 is currently only implemented for the Renoir series, so no APU check is needed here. */124124+ if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG) || smu->adev->in_s0ix)124125 return 0;125126126127 return smu_cmn_send_smc_msg_with_param(smu,···191190192191 kfree(smu_table->watermarks_table);193192 smu_table->watermarks_table = NULL;193193+194194+ kfree(smu_table->gpu_metrics_table);195195+ smu_table->gpu_metrics_table = NULL;194196195197 return 0;196198}
···11211121 if (crtc->state)11221122 crtc->funcs->atomic_destroy_state(crtc, crtc->state);1123112311241124- __drm_atomic_helper_crtc_reset(crtc, &ast_state->base);11241124+ if (ast_state)11251125+ __drm_atomic_helper_crtc_reset(crtc, &ast_state->base);11261126+ else11271127+ __drm_atomic_helper_crtc_reset(crtc, NULL);11251128}1126112911271130static struct drm_crtc_state *
+7-1
drivers/gpu/drm/drm_fb_helper.c
···17431743 sizes->fb_width, sizes->fb_height);1744174417451745 info->par = fb_helper;17461746- snprintf(info->fix.id, sizeof(info->fix.id), "%s",17461746+ /*17471747+ * The DRM drivers fbdev emulation device name can be confusing if the17481748+ * driver name also has a "drm" suffix on it, leading to names such as17491749+ * "simpledrmdrmfb" in /proc/fb. Unfortunately, it's an uAPI and can't17501750+ * be changed due to user-space tools (e.g: pm-utils) matching against it.17511751+ */17521752+ snprintf(info->fix.id, sizeof(info->fix.id), "%sdrmfb",17471753 fb_helper->dev->driver->name);1748175417491755}
+1-1
drivers/gpu/drm/i915/display/intel_dmc.c
···596596 continue;597597598598 offset = readcount + dmc->dmc_info[id].dmc_offset * 4;599599- if (fw->size - offset < 0) {599599+ if (offset > fw->size) {600600 drm_err(&dev_priv->drm, "Reading beyond the fw_size\n");601601 continue;602602 }
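The intel_dmc one-liner above fixes a classic unsigned-arithmetic bug: with `fw->size` and `offset` both unsigned, `fw->size - offset < 0` can never be true, because the subtraction wraps around to a huge positive value instead of going negative. A standalone illustration of the broken and fixed forms (function names invented):

```c
#include <stdbool.h>
#include <stddef.h>

/* The pre-fix shape: always false, since a size_t difference wraps
 * and can never compare less than zero. */
static bool bounds_check_broken(size_t fw_size, size_t offset)
{
	return fw_size - offset < 0;
}

/* The post-fix shape: compare the unsigned values directly. */
static bool bounds_check_fixed(size_t fw_size, size_t offset)
{
	return offset > fw_size;	/* true when the read would overrun */
}
```

Compilers typically flag the broken form with a "comparison is always false" warning, which is how bugs like this are usually caught.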
+1-1
drivers/gpu/drm/i915/gem/i915_gem_context.c
···564564 container_of_user(base, typeof(*ext), base);565565 const struct set_proto_ctx_engines *set = data;566566 struct drm_i915_private *i915 = set->i915;567567+ struct i915_engine_class_instance prev_engine;567568 u64 flags;568569 int err = 0, n, i, j;569570 u16 slot, width, num_siblings;···630629 /* Create contexts / engines */631630 for (i = 0; i < width; ++i) {632631 intel_engine_mask_t current_mask = 0;633633- struct i915_engine_class_instance prev_engine;634632635633 for (j = 0; j < num_siblings; ++j) {636634 struct i915_engine_class_instance ci;
···14441444 struct list_head node; /* all dips are on a list */14451445};1446144614471447+/* only for RNR timeout issue of HIP08 */14481448+#define HNS_ROCE_CLOCK_ADJUST 100014491449+#define HNS_ROCE_MAX_CQ_PERIOD 6514501450+#define HNS_ROCE_MAX_EQ_PERIOD 6514511451+#define HNS_ROCE_RNR_TIMER_10NS 114521452+#define HNS_ROCE_1US_CFG 99914531453+#define HNS_ROCE_1NS_CFG 014541454+14471455#define HNS_ROCE_AEQ_DEFAULT_BURST_NUM 0x014481456#define HNS_ROCE_AEQ_DEFAULT_INTERVAL 0x014491457#define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x0
···1919#include <linux/module.h>2020#include <linux/input.h>2121#include <linux/serio.h>2222+#include <asm/unaligned.h>22232324#define DRIVER_DESC "SpaceTec SpaceBall 2003/3003/4000 FLX driver"2425···76757776 case 'D': /* Ball data */7877 if (spaceball->idx != 15) return;7979- for (i = 0; i < 6; i++)7878+ /*7979+ * Skip first three bytes; read six axes worth of data.8080+ * Axis values are signed 16-bit big-endian.8181+ */8282+ data += 3;8383+ for (i = 0; i < ARRAY_SIZE(spaceball_axes); i++) {8084 input_report_abs(dev, spaceball_axes[i],8181- (__s16)((data[2 * i + 3] << 8) | data[2 * i + 2]));8585+ (__s16)get_unaligned_be16(&data[i * 2]));8686+ }8287 break;83888489 case 'K': /* Button data */
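The spaceball change above switches the axis parsing to explicit big-endian 16-bit reads via `get_unaligned_be16()`. The byte assembly that helper performs here, high byte first with the result interpreted as signed, can be sketched as:

```c
#include <stdint.h>

/* Assemble a signed 16-bit big-endian value from two bytes at an
 * arbitrary (possibly unaligned) offset, as get_unaligned_be16() does. */
static int16_t be16_axis(const unsigned char *p)
{
	/* high byte first, then low byte; result reinterpreted as signed */
	return (int16_t)(((uint16_t)p[0] << 8) | p[1]);
}
```

Byte-wise assembly like this is also immune to the host's alignment and endianness, which is why the kernel helper exists.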
···916916 set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit);917917 set_bit(BTN_LEFT, input_dev->keybit);918918919919+ INIT_WORK(&dev->work, atp_reinit);920920+919921 error = input_register_device(dev->input);920922 if (error)921923 goto err_free_buffer;922924923925 /* save our data pointer in this interface device */924926 usb_set_intfdata(iface, dev);925925-926926- INIT_WORK(&dev->work, atp_reinit);927927928928 return 0;929929
+7-1
drivers/input/mouse/elantech.c
···15881588 */15891589static int elantech_change_report_id(struct psmouse *psmouse)15901590{15911591- unsigned char param[2] = { 0x10, 0x03 };15911591+ /*15921592+ * NOTE: the code is expecting to receive param[] as an array of 315931593+ * items (see __ps2_command()), even if in this case only 2 are15941594+ * actually needed. Make sure the array size is 3 to avoid potential15951595+ * stack out-of-bound accesses.15961596+ */15971597+ unsigned char param[3] = { 0x10, 0x03 };1592159815931599 if (elantech_write_reg_params(psmouse, 0x7, param) ||15941600 elantech_read_reg_params(psmouse, 0x7, param) ||
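The elantech fix above sizes `param[]` for the three bytes the PS/2 command helper may write, not just the two the caller consumes, avoiding a stack out-of-bounds write. A toy illustration of that caller/callee contract (the helper behaviour below is invented to mimic what the comment describes):

```c
/* Invented stand-in for __ps2_command(): the callee unconditionally
 * fills three response bytes, so the caller's buffer must have room
 * for three even when only two of them matter. */
static void ps2_read_response(unsigned char *param)
{
	param[0] = 0x10;
	param[1] = 0x03;
	param[2] = 0x00;	/* third byte written regardless */
}

/* Correctly sized caller buffer: three bytes, matching the contract. */
static unsigned char response_byte(int i)
{
	unsigned char param[3] = { 0xaa, 0xaa, 0xaa };

	ps2_read_response(param);
	return param[i];
}
```

Had `param` been declared with two elements, the third store would land past the end of the array, exactly the bug the patch comment warns about.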
+21
drivers/input/serio/i8042-x86ia64io.h
···995995 { }996996};997997998998+static const struct dmi_system_id i8042_dmi_probe_defer_table[] __initconst = {999999+ {10001000+ /* ASUS ZenBook UX425UA */10011001+ .matches = {10021002+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),10031003+ DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"),10041004+ },10051005+ },10061006+ {10071007+ /* ASUS ZenBook UM325UA */10081008+ .matches = {10091009+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),10101010+ DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),10111011+ },10121012+ },10131013+ { }10141014+};10151015+9981016#endif /* CONFIG_X86 */999101710001018#ifdef CONFIG_PNP···1332131413331315 if (dmi_check_system(i8042_dmi_kbdreset_table))13341316 i8042_kbdreset = true;13171317+13181318+ if (dmi_check_system(i8042_dmi_probe_defer_table))13191319+ i8042_probe_defer = true;1335132013361321 /*13371322 * A20 was already enabled during early kernel init. But some buggy
+35-19
drivers/input/serio/i8042.c
···
 module_param_named(unlock, i8042_unlock, bool, 0);
 MODULE_PARM_DESC(unlock, "Ignore keyboard lock.");

+static bool i8042_probe_defer;
+module_param_named(probe_defer, i8042_probe_defer, bool, 0);
+MODULE_PARM_DESC(probe_defer, "Allow deferred probing.");
+
 enum i8042_controller_reset_mode {
         I8042_RESET_NEVER,
         I8042_RESET_ALWAYS,
···
  * LCS/Telegraphics.
  */

-static int __init i8042_check_mux(void)
+static int i8042_check_mux(void)
 {
         unsigned char mux_version;
···
 /*
  * The following is used to test AUX IRQ delivery.
  */
-static struct completion i8042_aux_irq_delivered __initdata;
-static bool i8042_irq_being_tested __initdata;
+static struct completion i8042_aux_irq_delivered;
+static bool i8042_irq_being_tested;

-static irqreturn_t __init i8042_aux_test_irq(int irq, void *dev_id)
+static irqreturn_t i8042_aux_test_irq(int irq, void *dev_id)
 {
         unsigned long flags;
         unsigned char str, data;
···
  * verifies success by reading CTR. Used when testing for presence of AUX
  * port.
  */
-static int __init i8042_toggle_aux(bool on)
+static int i8042_toggle_aux(bool on)
 {
         unsigned char param;
         int i;
···
  * the presence of an AUX interface.
  */

-static int __init i8042_check_aux(void)
+static int i8042_check_aux(void)
 {
         int retval = -1;
         bool irq_registered = false;
···

         if (i8042_command(&ctr[n++ % 2], I8042_CMD_CTL_RCTR)) {
                 pr_err("Can't read CTR while initializing i8042\n");
-                return -EIO;
+                return i8042_probe_defer ? -EPROBE_DEFER : -EIO;
         }

 } while (n < 2 || ctr[0] != ctr[1]);
···
         i8042_controller_reset(false);
 }

-static int __init i8042_create_kbd_port(void)
+static int i8042_create_kbd_port(void)
 {
         struct serio *serio;
         struct i8042_port *port = &i8042_ports[I8042_KBD_PORT_NO];
···
         return 0;
 }

-static int __init i8042_create_aux_port(int idx)
+static int i8042_create_aux_port(int idx)
 {
         struct serio *serio;
         int port_no = idx < 0 ? I8042_AUX_PORT_NO : I8042_MUX_PORT_NO + idx;
···
         return 0;
 }

-static void __init i8042_free_kbd_port(void)
+static void i8042_free_kbd_port(void)
 {
         kfree(i8042_ports[I8042_KBD_PORT_NO].serio);
         i8042_ports[I8042_KBD_PORT_NO].serio = NULL;
 }

-static void __init i8042_free_aux_ports(void)
+static void i8042_free_aux_ports(void)
 {
         int i;
···
         }
 }

-static void __init i8042_register_ports(void)
+static void i8042_register_ports(void)
 {
         int i;
···
         i8042_aux_irq_registered = i8042_kbd_irq_registered = false;
 }

-static int __init i8042_setup_aux(void)
+static int i8042_setup_aux(void)
 {
         int (*aux_enable)(void);
         int error;
···
         return error;
 }

-static int __init i8042_setup_kbd(void)
+static int i8042_setup_kbd(void)
 {
         int error;
···
         return 0;
 }

-static int __init i8042_probe(struct platform_device *dev)
+static int i8042_probe(struct platform_device *dev)
 {
         int error;
···
                 .pm = &i8042_pm_ops,
 #endif
         },
+        .probe          = i8042_probe,
         .remove         = i8042_remove,
         .shutdown       = i8042_shutdown,
 };
···

 static int __init i8042_init(void)
 {
-        struct platform_device *pdev;
         int err;

         dbg_init();
···
         /* Set this before creating the dev to allow i8042_command to work right away */
         i8042_present = true;

-        pdev = platform_create_bundle(&i8042_driver, i8042_probe, NULL, 0, NULL, 0);
-        if (IS_ERR(pdev)) {
-                err = PTR_ERR(pdev);
+        err = platform_driver_register(&i8042_driver);
+        if (err)
                 goto err_platform_exit;
+
+        i8042_platform_device = platform_device_alloc("i8042", -1);
+        if (!i8042_platform_device) {
+                err = -ENOMEM;
+                goto err_unregister_driver;
         }
+
+        err = platform_device_add(i8042_platform_device);
+        if (err)
+                goto err_free_device;

         bus_register_notifier(&serio_bus, &i8042_kbd_bind_notifier_block);
         panic_blink = i8042_panic_blink;

         return 0;

+err_free_device:
+        platform_device_put(i8042_platform_device);
+err_unregister_driver:
+        platform_driver_unregister(&i8042_driver);
 err_platform_exit:
         i8042_platform_exit();
         return err;
···
         { .id = "911", .data = &gt911_chip_data },
         { .id = "9271", .data = &gt911_chip_data },
         { .id = "9110", .data = &gt911_chip_data },
+        { .id = "9111", .data = &gt911_chip_data },
         { .id = "927", .data = &gt911_chip_data },
         { .id = "928", .data = &gt911_chip_data },
···

         usleep_range(6000, 10000);              /* T4: > 5ms */

-        /* end select I2C slave addr */
-        error = gpiod_direction_input(ts->gpiod_rst);
-        if (error)
-                goto error;
+        /*
+         * Put the reset pin back in to input / high-impedance mode to save
+         * power. Only do this in the non ACPI case since some ACPI boards
+         * don't have a pull-up, so there the reset pin must stay active-high.
+         */
+        if (ts->irq_pin_access_method == IRQ_PIN_ACCESS_GPIO) {
+                error = gpiod_direction_input(ts->gpiod_rst);
+                if (error)
+                        goto error;
+        }

         return 0;
···
                 return -EINVAL;
         }

+        /*
+         * Normally we put the reset pin in input / high-impedance mode to save
+         * power. But some x86/ACPI boards don't have a pull-up, so for the ACPI
+         * case, leave the pin as is. This results in the pin not being touched
+         * at all on x86/ACPI boards, except when needed for error-recovery.
+         */
+        ts->gpiod_rst_flags = GPIOD_ASIS;
+
         return devm_acpi_dev_add_driver_gpios(dev, gpio_mapping);
 }
 #else
···
         if (!ts->client)
                 return -EINVAL;
         dev = &ts->client->dev;
+
+        /*
+         * By default we request the reset pin as input, leaving it in
+         * high-impedance when not resetting the controller to save power.
+         */
+        ts->gpiod_rst_flags = GPIOD_IN;

         ts->avdd28 = devm_regulator_get(dev, "AVDD28");
         if (IS_ERR(ts->avdd28)) {
···
         ts->gpiod_int = gpiod;

         /* Get the reset line GPIO pin number */
-        gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, GPIOD_IN);
+        gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, ts->gpiod_rst_flags);
         if (IS_ERR(gpiod)) {
                 error = PTR_ERR(gpiod);
                 if (error != -EPROBE_DEFER)
drivers/input/touchscreen/goodix.h (+1)
···
         struct gpio_desc *gpiod_rst;
         int gpio_count;
         int gpio_int_idx;
+        enum gpiod_flags gpiod_rst_flags;
         char id[GOODIX_ID_MAX_LEN + 1];
         char cfg_name[64];
         u16 version;
drivers/input/touchscreen/goodix_fwupload.c (+1, -1)
···

         error = goodix_reset_no_int_sync(ts);
         if (error)
-                return error;
+                goto release;

         error = goodix_enter_upload_mode(ts->client);
         if (error)
···
                                  struct mmc_command *cmd)
 {
         struct meson_mx_sdhc_host *host = mmc_priv(mmc);
+        bool manual_stop = false;
         u32 ictl, send;
         int pack_len;
···
                 else
                         /* software flush: */
                         ictl |= MESON_SDHC_ICTL_DATA_XFER_OK;
+
+                /*
+                 * Mimic the logic from the vendor driver where (only)
+                 * SD_IO_RW_EXTENDED commands with more than one block set the
+                 * MESON_SDHC_MISC_MANUAL_STOP bit. This fixes the firmware
+                 * download in the brcmfmac driver for a BCM43362/1 card.
+                 * Without this sdio_memcpy_toio() (with a size of 219557
+                 * bytes) times out if MESON_SDHC_MISC_MANUAL_STOP is not set.
+                 */
+                manual_stop = cmd->data->blocks > 1 &&
+                              cmd->opcode == SD_IO_RW_EXTENDED;
         } else {
                 pack_len = 0;

                 ictl |= MESON_SDHC_ICTL_RESP_OK;
         }
+
+        regmap_update_bits(host->regmap, MESON_SDHC_MISC,
+                           MESON_SDHC_MISC_MANUAL_STOP,
+                           manual_stop ? MESON_SDHC_MISC_MANUAL_STOP : 0);

         if (cmd->opcode == MMC_STOP_TRANSMISSION)
                 send |= MESON_SDHC_SEND_DATA_STOP;
···
         }
 }

-static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc,
-                                              struct mmc_ios *ios)
-{
-        struct sdhci_host *host = mmc_priv(mmc);
-        u32 val;
-
-        val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
-
-        if (ios->enhanced_strobe)
-                val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
-        else
-                val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
-
-        sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
-
-}
-
 static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask)
 {
         struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
···
                 tegra_sdhci_pad_autocalib(host);
                 tegra_host->pad_calib_required = false;
         }
+}
+
+static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc,
+                                              struct mmc_ios *ios)
+{
+        struct sdhci_host *host = mmc_priv(mmc);
+        u32 val;
+
+        val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
+
+        if (ios->enhanced_strobe) {
+                val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
+                /*
+                 * When CMD13 is sent from mmc_select_hs400es() after
+                 * switching to HS400ES mode, the bus is operating at
+                 * either MMC_HIGH_26_MAX_DTR or MMC_HIGH_52_MAX_DTR.
+                 * To meet Tegra SDHCI requirement at HS400ES mode, force SDHCI
+                 * interface clock to MMC_HS200_MAX_DTR (200 MHz) so that host
+                 * controller CAR clock and the interface clock are rate matched.
+                 */
+                tegra_sdhci_set_clock(host, MMC_HS200_MAX_DTR);
+        } else {
+                val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;
+        }
+
+        sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);
 }

 static unsigned int tegra_sdhci_get_max_clock(struct sdhci_host *host)
drivers/net/bonding/bond_options.c (+1, -1)
···
                 mac = (u8 *)&newval->value;
         }

-        if (!is_valid_ether_addr(mac))
+        if (is_multicast_ether_addr(mac))
                 goto err;

         netdev_dbg(bond->dev, "Setting ad_actor_system to %pM\n", mac);
drivers/net/dsa/mv88e6xxx/chip.c (+4)
···
         if ((!mv88e6xxx_port_ppu_updates(chip, port) ||
              mode == MLO_AN_FIXED) && ops->port_sync_link)
                 err = ops->port_sync_link(chip, port, mode, false);
+
+        if (!err && ops->port_set_speed_duplex)
+                err = ops->port_set_speed_duplex(chip, port, SPEED_UNFORCED,
+                                                 DUPLEX_UNFORCED);
         mv88e6xxx_reg_unlock(chip);

         if (err)
drivers/net/dsa/mv88e6xxx/port.c (+2, -2)
···
         if (err)
                 return err;

-        if (speed)
+        if (speed != SPEED_UNFORCED)
                 dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed);
         else
                 dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
···
         if (err)
                 return err;

-        if (speed)
+        if (speed != SPEED_UNFORCED)
                 dev_dbg(chip->dev, "p%d: Speed set to %d Mbps\n", port, speed);
         else
                 dev_dbg(chip->dev, "p%d: Speed unforced\n", port);
···
         struct bcm_sysport_priv *priv = netdev_priv(dev);
         struct device *kdev = &priv->pdev->dev;
         struct bcm_sysport_tx_ring *ring;
+        unsigned long flags, desc_flags;
         struct bcm_sysport_cb *cb;
         struct netdev_queue *txq;
         u32 len_status, addr_lo;
         unsigned int skb_len;
-        unsigned long flags;
         dma_addr_t mapping;
         u16 queue;
         int ret;
···
         ring->desc_count--;

         /* Ports are latched, so write upper address first */
+        spin_lock_irqsave(&priv->desc_lock, desc_flags);
         tdma_writel(priv, len_status, TDMA_WRITE_PORT_HI(ring->index));
         tdma_writel(priv, addr_lo, TDMA_WRITE_PORT_LO(ring->index));
+        spin_unlock_irqrestore(&priv->desc_lock, desc_flags);

         /* Check ring space and update SW control flow */
         if (ring->desc_count == 0)
···
         }

         /* Initialize both hardware and software ring */
+        spin_lock_init(&priv->desc_lock);
         for (i = 0; i < dev->num_tx_queues; i++) {
                 ret = bcm_sysport_init_tx_ring(priv, i);
                 if (ret) {
···
                  * Internal or external PHY with MDIO access
                  */
                 phydev = phy_attach(priv->dev, phy_name, pd->phy_interface);
-                if (!phydev) {
+                if (IS_ERR(phydev)) {
                         dev_err(kdev, "failed to register PHY device\n");
-                        return -ENODEV;
+                        return PTR_ERR(phydev);
                 }
         } else {
                 /*
drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h (+2)
···
         __u64 bytes_per_cdan;
 };

+#define DPAA2_ETH_CH_STATS      7
+
 /* Maximum number of queues associated with a DPNI */
 #define DPAA2_ETH_MAX_TCS               8
 #define DPAA2_ETH_MAX_RX_QUEUES_PER_TC  16
···
          * is not set to GqiRda, choose the queue format in a priority order:
          * DqoRda, GqiRda, GqiQpl. Use GqiQpl as default.
          */
-        if (priv->queue_format == GVE_GQI_RDA_FORMAT) {
-                dev_info(&priv->pdev->dev,
-                         "Driver is running with GQI RDA queue format.\n");
-        } else if (dev_op_dqo_rda) {
+        if (dev_op_dqo_rda) {
                 priv->queue_format = GVE_DQO_RDA_FORMAT;
                 dev_info(&priv->pdev->dev,
                          "Driver is running with DQO RDA queue format.\n");
···
                          "Driver is running with GQI RDA queue format.\n");
                 supported_features_mask =
                         be32_to_cpu(dev_op_gqi_rda->supported_features_mask);
+        } else if (priv->queue_format == GVE_GQI_RDA_FORMAT) {
+                dev_info(&priv->pdev->dev,
+                         "Driver is running with GQI RDA queue format.\n");
         } else {
                 priv->queue_format = GVE_GQI_QPL_FORMAT;
                 if (dev_op_gqi_qpl)
···
         if (ret)
                 return ret;

+        mutex_lock(&handle->dbgfs_lock);
         save_buf = &hns3_dbg_cmd[index].buf;

         if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
···
                 read_buf = *save_buf;
         } else {
                 read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
-                if (!read_buf)
-                        return -ENOMEM;
+                if (!read_buf) {
+                        ret = -ENOMEM;
+                        goto out;
+                }

                 /* save the buffer addr until the last read operation */
                 *save_buf = read_buf;
-        }

-        /* get data ready for the first time to read */
-        if (!*ppos) {
+                /* get data ready for the first time to read */
                 ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
                                         read_buf, hns3_dbg_cmd[index].buf_len);
                 if (ret)
···

         size = simple_read_from_buffer(buffer, count, ppos, read_buf,
                                        strlen(read_buf));
-        if (size > 0)
+        if (size > 0) {
+                mutex_unlock(&handle->dbgfs_lock);
                 return size;
+        }

 out:
         /* free the buffer for the last read operation */
···
                 *save_buf = NULL;
         }

+        mutex_unlock(&handle->dbgfs_lock);
         return ret;
 }
···
                 debugfs_create_dir(hns3_dbg_dentry[i].name,
                                    handle->hnae3_dbgfs);

+        mutex_init(&handle->dbgfs_lock);
+
         for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) {
                 if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES &&
                      ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2) ||
···
         return 0;

 out:
+        mutex_destroy(&handle->dbgfs_lock);
         debugfs_remove_recursive(handle->hnae3_dbgfs);
         handle->hnae3_dbgfs = NULL;
         return ret;
···
                 hns3_dbg_cmd[i].buf = NULL;
         }

+        mutex_destroy(&handle->dbgfs_lock);
         debugfs_remove_recursive(handle->hnae3_dbgfs);
         handle->hnae3_dbgfs = NULL;
 }
···
 #include "ice_lib.h"
 #include "ice_dcb_lib.h"

+static bool ice_alloc_rx_buf_zc(struct ice_rx_ring *rx_ring)
+{
+        rx_ring->xdp_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->xdp_buf), GFP_KERNEL);
+        return !!rx_ring->xdp_buf;
+}
+
+static bool ice_alloc_rx_buf(struct ice_rx_ring *rx_ring)
+{
+        rx_ring->rx_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL);
+        return !!rx_ring->rx_buf;
+}
+
 /**
  * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI
  * @qs_cfg: gathered variables needed for PF->VSI queues assignment
···
                 xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,
                                  ring->q_index, ring->q_vector->napi.napi_id);

+        kfree(ring->rx_buf);
         ring->xsk_pool = ice_xsk_pool(ring);
         if (ring->xsk_pool) {
+                if (!ice_alloc_rx_buf_zc(ring))
+                        return -ENOMEM;
                 xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);

                 ring->rx_buf_len =
···
                 dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",
                          ring->q_index);
         } else {
+                if (!ice_alloc_rx_buf(ring))
+                        return -ENOMEM;
                 if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))
                         /* coverity[check_return] */
                         xdp_rxq_info_reg(&ring->xdp_rxq,
drivers/net/ethernet/intel/ice/ice_ptp.c (+5, -8)
···
                 scaled_ppm = -scaled_ppm;
         }

-        while ((u64)scaled_ppm > div_u64(U64_MAX, incval)) {
+        while ((u64)scaled_ppm > div64_u64(U64_MAX, incval)) {
                 /* handle overflow by scaling down the scaled_ppm and
                  * the divisor, losing some precision
                  */
···
                 if (err)
                         continue;

-                /* Check if the timestamp is valid */
-                if (!(raw_tstamp & ICE_PTP_TS_VALID))
+                /* Check if the timestamp is invalid or stale */
+                if (!(raw_tstamp & ICE_PTP_TS_VALID) ||
+                    raw_tstamp == tx->tstamps[idx].cached_tstamp)
                         continue;
-
-                /* clear the timestamp register, so that it won't show valid
-                 * again when re-used.
-                 */
-                ice_clear_phy_tstamp(hw, tx->quad, phy_idx);

                 /* The timestamp is valid, so we'll go ahead and clear this
                  * index and then send the timestamp up to the stack.
                  */
                 spin_lock(&tx->lock);
+                tx->tstamps[idx].cached_tstamp = raw_tstamp;
                 clear_bit(idx, tx->in_use);
                 skb = tx->tstamps[idx].skb;
                 tx->tstamps[idx].skb = NULL;
drivers/net/ethernet/intel/ice/ice_ptp.h (+6)
···
  * struct ice_tx_tstamp - Tracking for a single Tx timestamp
  * @skb: pointer to the SKB for this timestamp request
  * @start: jiffies when the timestamp was first requested
+ * @cached_tstamp: last read timestamp
  *
  * This structure tracks a single timestamp request. The SKB pointer is
  * provided when initiating a request. The start time is used to ensure that
  * we discard old requests that were not fulfilled within a 2 second time
  * window.
+ * Timestamp values in the PHY are read only and do not get cleared except at
+ * hardware reset or when a new timestamp value is captured. The cached_tstamp
+ * field is used to detect the case where a new timestamp has not yet been
+ * captured, ensuring that we avoid sending stale timestamp data to the stack.
  */
 struct ice_tx_tstamp {
         struct sk_buff *skb;
         unsigned long start;
+        u64 cached_tstamp;
 };

 /**
drivers/net/ethernet/intel/ice/ice_txrx.c (+13, -6)
···
         }

 rx_skip_free:
-        memset(rx_ring->rx_buf, 0, sizeof(*rx_ring->rx_buf) * rx_ring->count);
+        if (rx_ring->xsk_pool)
+                memset(rx_ring->xdp_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->xdp_buf)));
+        else
+                memset(rx_ring->rx_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->rx_buf)));

         /* Zero out the descriptor ring */
         size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),
···
         if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))
                 xdp_rxq_info_unreg(&rx_ring->xdp_rxq);
         rx_ring->xdp_prog = NULL;
-        devm_kfree(rx_ring->dev, rx_ring->rx_buf);
-        rx_ring->rx_buf = NULL;
+        if (rx_ring->xsk_pool) {
+                kfree(rx_ring->xdp_buf);
+                rx_ring->xdp_buf = NULL;
+        } else {
+                kfree(rx_ring->rx_buf);
+                rx_ring->rx_buf = NULL;
+        }

         if (rx_ring->desc) {
                 size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),
···
         /* warn if we are about to overwrite the pointer */
         WARN_ON(rx_ring->rx_buf);
         rx_ring->rx_buf =
-                devm_kcalloc(dev, sizeof(*rx_ring->rx_buf), rx_ring->count,
-                             GFP_KERNEL);
+                kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL);
         if (!rx_ring->rx_buf)
                 return -ENOMEM;
···
         return 0;

 err:
-        devm_kfree(dev, rx_ring->rx_buf);
+        kfree(rx_ring->rx_buf);
         rx_ring->rx_buf = NULL;
         return -ENOMEM;
 }
drivers/net/ethernet/intel/ice/ice_txrx.h (-1)
···
 #define ICE_MAX_DATA_PER_TXD_ALIGNED \
         (~(ICE_MAX_READ_REQ_SIZE - 1) & ICE_MAX_DATA_PER_TXD)

-#define ICE_RX_BUF_WRITE        16      /* Must be power of 2 */
 #define ICE_MAX_TXQ_PER_TXQG    128

 /* Attempt to maximize the headroom available for incoming frames. We use a 2K
drivers/net/ethernet/intel/ice/ice_xsk.c (+32, -34)
···
 #include "ice_txrx_lib.h"
 #include "ice_lib.h"

+static struct xdp_buff **ice_xdp_buf(struct ice_rx_ring *rx_ring, u32 idx)
+{
+        return &rx_ring->xdp_buf[idx];
+}
+
 /**
  * ice_qp_reset_stats - Resets all stats for rings of given index
  * @vsi: VSI that contains rings of interest
···
         dma_addr_t dma;

         rx_desc = ICE_RX_DESC(rx_ring, ntu);
-        xdp = &rx_ring->xdp_buf[ntu];
+        xdp = ice_xdp_buf(rx_ring, ntu);

         nb_buffs = min_t(u16, count, rx_ring->count - ntu);
         nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs);
···
         }

         ntu += nb_buffs;
-        if (ntu == rx_ring->count) {
-                rx_desc = ICE_RX_DESC(rx_ring, 0);
-                xdp = rx_ring->xdp_buf;
+        if (ntu == rx_ring->count)
                 ntu = 0;
-        }

-        /* clear the status bits for the next_to_use descriptor */
-        rx_desc->wb.status_error0 = 0;
         ice_release_rx_desc(rx_ring, ntu);

         return count == nb_buffs;
···
 /**
  * ice_construct_skb_zc - Create an sk_buff from zero-copy buffer
  * @rx_ring: Rx ring
- * @xdp_arr: Pointer to the SW ring of xdp_buff pointers
+ * @xdp: Pointer to XDP buffer
  *
  * This function allocates a new skb from a zero-copy Rx buffer.
  *
  * Returns the skb on success, NULL on failure.
  */
 static struct sk_buff *
-ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp_arr)
+ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
 {
-        struct xdp_buff *xdp = *xdp_arr;
+        unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
         unsigned int metasize = xdp->data - xdp->data_meta;
         unsigned int datasize = xdp->data_end - xdp->data;
-        unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;
         struct sk_buff *skb;

         skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard,
···
                 skb_metadata_set(skb, metasize);

         xsk_buff_free(xdp);
-        *xdp_arr = NULL;
         return skb;
 }
···
 int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)
 {
         unsigned int total_rx_bytes = 0, total_rx_packets = 0;
-        u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);
         struct ice_tx_ring *xdp_ring;
         unsigned int xdp_xmit = 0;
         struct bpf_prog *xdp_prog;
···
         while (likely(total_rx_packets < (unsigned int)budget)) {
                 union ice_32b_rx_flex_desc *rx_desc;
                 unsigned int size, xdp_res = 0;
-                struct xdp_buff **xdp;
+                struct xdp_buff *xdp;
                 struct sk_buff *skb;
                 u16 stat_err_bits;
                 u16 vlan_tag = 0;
···
                  */
                 dma_rmb();

+                xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean);
+
                 size = le16_to_cpu(rx_desc->wb.pkt_len) &
                        ICE_RX_FLX_DESC_PKT_LEN_M;
-                if (!size)
-                        break;
+                if (!size) {
+                        xdp->data = NULL;
+                        xdp->data_end = NULL;
+                        xdp->data_hard_start = NULL;
+                        xdp->data_meta = NULL;
+                        goto construct_skb;
+                }

-                xdp = &rx_ring->xdp_buf[rx_ring->next_to_clean];
-                xsk_buff_set_size(*xdp, size);
-                xsk_buff_dma_sync_for_cpu(*xdp, rx_ring->xsk_pool);
+                xsk_buff_set_size(xdp, size);
+                xsk_buff_dma_sync_for_cpu(xdp, rx_ring->xsk_pool);

-                xdp_res = ice_run_xdp_zc(rx_ring, *xdp, xdp_prog, xdp_ring);
+                xdp_res = ice_run_xdp_zc(rx_ring, xdp, xdp_prog, xdp_ring);
                 if (xdp_res) {
                         if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))
                                 xdp_xmit |= xdp_res;
                         else
-                                xsk_buff_free(*xdp);
+                                xsk_buff_free(xdp);

-                        *xdp = NULL;
                         total_rx_bytes += size;
                         total_rx_packets++;
-                        cleaned_count++;

                         ice_bump_ntc(rx_ring);
                         continue;
                 }
-
+construct_skb:
                 /* XDP_PASS path */
                 skb = ice_construct_skb_zc(rx_ring, xdp);
                 if (!skb) {
···
                         break;
                 }

-                cleaned_count++;
                 ice_bump_ntc(rx_ring);

                 if (eth_skb_pad(skb)) {
···
                 ice_receive_skb(rx_ring, skb, vlan_tag);
         }

-        if (cleaned_count >= ICE_RX_BUF_WRITE)
-                failure = !ice_alloc_rx_bufs_zc(rx_ring, cleaned_count);
+        failure = !ice_alloc_rx_bufs_zc(rx_ring, ICE_DESC_UNUSED(rx_ring));

         ice_finalize_xdp_rx(xdp_ring, xdp_xmit);
         ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);
···
  */
 void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring)
 {
-        u16 i;
+        u16 count_mask = rx_ring->count - 1;
+        u16 ntc = rx_ring->next_to_clean;
+        u16 ntu = rx_ring->next_to_use;

-        for (i = 0; i < rx_ring->count; i++) {
-                struct xdp_buff **xdp = &rx_ring->xdp_buf[i];
+        for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) {
+                struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc);

-                if (!xdp)
-                        continue;
-
-                *xdp = NULL;
+                xsk_buff_free(xdp);
         }
 }
drivers/net/ethernet/intel/igb/igb_main.c (+27, -20)
···
         struct vf_mac_filter *entry = NULL;
         int ret = 0;

+        if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
+            !vf_data->trusted) {
+                dev_warn(&pdev->dev,
+                         "VF %d requested MAC filter but is administratively denied\n",
+                         vf);
+                return -EINVAL;
+        }
+        if (!is_valid_ether_addr(addr)) {
+                dev_warn(&pdev->dev,
+                         "VF %d attempted to set invalid MAC filter\n",
+                         vf);
+                return -EINVAL;
+        }
+
         switch (info) {
         case E1000_VF_MAC_FILTER_CLR:
                 /* remove all unicast MAC filters related to the current VF */
···
                 }
                 break;
         case E1000_VF_MAC_FILTER_ADD:
-                if ((vf_data->flags & IGB_VF_FLAG_PF_SET_MAC) &&
-                    !vf_data->trusted) {
-                        dev_warn(&pdev->dev,
-                                 "VF %d requested MAC filter but is administratively denied\n",
-                                 vf);
-                        return -EINVAL;
-                }
-                if (!is_valid_ether_addr(addr)) {
-                        dev_warn(&pdev->dev,
-                                 "VF %d attempted to set invalid MAC filter\n",
-                                 vf);
-                        return -EINVAL;
-                }
-
                 /* try to find empty slot in the list */
                 list_for_each(pos, &adapter->vf_macs.l) {
                         entry = list_entry(pos, struct vf_mac_filter, l);
···
         return __igb_shutdown(to_pci_dev(dev), NULL, 0);
 }

-static int __maybe_unused igb_resume(struct device *dev)
+static int __maybe_unused __igb_resume(struct device *dev, bool rpm)
 {
         struct pci_dev *pdev = to_pci_dev(dev);
         struct net_device *netdev = pci_get_drvdata(pdev);
···

         wr32(E1000_WUS, ~0);

-        rtnl_lock();
+        if (!rpm)
+                rtnl_lock();
         if (!err && netif_running(netdev))
                 err = __igb_open(netdev, true);

         if (!err)
                 netif_device_attach(netdev);
-        rtnl_unlock();
+        if (!rpm)
+                rtnl_unlock();

         return err;
+}
+
+static int __maybe_unused igb_resume(struct device *dev)
+{
+        return __igb_resume(dev, false);
 }

 static int __maybe_unused igb_runtime_idle(struct device *dev)
···
 static int __maybe_unused igb_runtime_resume(struct device *dev)
 {
-        return igb_resume(dev);
+        return __igb_resume(dev, true);
 }

 static void igb_shutdown(struct pci_dev *pdev)
···
  * @pdev: Pointer to PCI device
  *
  * Restart the card from scratch, as if from a cold-boot. Implementation
- * resembles the first-half of the igb_resume routine.
+ * resembles the first-half of the __igb_resume routine.
  **/
 static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev)
 {
···
  *
  * This callback is called when the error recovery driver tells us that
  * its OK to resume normal operation. Implementation resembles the
- * second-half of the igb_resume routine.
+ * second-half of the __igb_resume routine.
  */
 static void igb_io_resume(struct pci_dev *pdev)
 {
···
                 mod_timer(&adapter->watchdog_timer, jiffies + 1);
         }

+        if (icr & IGC_ICR_TS)
+                igc_tsync_interrupt(adapter);
+
         napi_schedule(&q_vector->napi);

         return IRQ_HANDLED;
···
                 if (!test_bit(__IGC_DOWN, &adapter->state))
                         mod_timer(&adapter->watchdog_timer, jiffies + 1);
         }
+
+        if (icr & IGC_ICR_TS)
+                igc_tsync_interrupt(adapter);

         napi_schedule(&q_vector->napi);
drivers/net/ethernet/intel/igc/igc_ptp.c (+14, -1)
···
  */
 static bool igc_is_crosststamp_supported(struct igc_adapter *adapter)
 {
-        return IS_ENABLED(CONFIG_X86_TSC) ? pcie_ptm_enabled(adapter->pdev) : false;
+        if (!IS_ENABLED(CONFIG_X86_TSC))
+                return false;
+
+        /* FIXME: it was noticed that enabling support for PCIe PTM in
+         * some i225-V models could cause lockups when bringing the
+         * interface up/down. There should be no downsides to
+         * disabling crosstimestamping support for i225-V, as it
+         * doesn't have any PTP support. That way we gain some time
+         * while root causing the issue.
+         */
+        if (adapter->pdev->device == IGC_DEV_ID_I225_V)
+                return false;
+
+        return pcie_ptm_enabled(adapter->pdev);
 }

 static struct system_counterval_t igc_device_tstamp_to_system(u64 tstamp)
drivers/net/ethernet/intel/ixgbe/ixgbe_main.c (+4)
···
         if (!speed && hw->mac.ops.get_link_capabilities) {
                 ret = hw->mac.ops.get_link_capabilities(hw, &speed,
                                                         &autoneg);
+                /* remove NBASE-T speeds from default autonegotiation
+                 * to accommodate broken network switches in the field
+                 * which cannot cope with advertised NBASE-T speeds
+                 */
                 speed &= ~(IXGBE_LINK_SPEED_5GB_FULL |
                            IXGBE_LINK_SPEED_2_5GB_FULL);
         }
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c (+3)
···
         /* flush pending Tx transactions */
         ixgbe_clear_tx_pending(hw);

+        /* set MDIO speed before talking to the PHY in case it's the 1st time */
+        ixgbe_set_mdio_speed(hw);
+
         /* PHY ops must be identified and initialized prior to reset */
         status = hw->phy.ops.init(hw);
         if (status == IXGBE_ERR_SFP_NOT_SUPPORTED ||
drivers/net/ethernet/lantiq_xrx200.c (+25, -11)
···
         struct xrx200_chan chan_tx;
         struct xrx200_chan chan_rx;

+        u16 rx_buf_size;
+
         struct net_device *net_dev;
         struct device *dev;
···
         xrx200_pmac_w32(priv, val, offset);
 }

+static int xrx200_max_frame_len(int mtu)
+{
+        return VLAN_ETH_HLEN + mtu;
+}
+
+static int xrx200_buffer_size(int mtu)
+{
+        return round_up(xrx200_max_frame_len(mtu), 4 * XRX200_DMA_BURST_LEN);
+}
+
 /* drop all the packets from the DMA ring */
 static void xrx200_flush_dma(struct xrx200_chan *ch)
 {
···
                         break;

                 desc->ctl = LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) |
-                            (ch->priv->net_dev->mtu + VLAN_ETH_HLEN +
-                             ETH_FCS_LEN);
+                            ch->priv->rx_buf_size;
                 ch->dma.desc++;
                 ch->dma.desc %= LTQ_DESC_NUM;
         }
···

 static int xrx200_alloc_skb(struct xrx200_chan *ch)
 {
-        int len = ch->priv->net_dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
         struct sk_buff *skb = ch->skb[ch->dma.desc];
+        struct xrx200_priv *priv = ch->priv;
         dma_addr_t mapping;
         int ret = 0;

-        ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev,
-                                                          len);
+        ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(priv->net_dev,
+                                                          priv->rx_buf_size);
         if (!ch->skb[ch->dma.desc]) {
                 ret = -ENOMEM;
                 goto skip;
         }

-        mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data,
-                                 len, DMA_FROM_DEVICE);
-        if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) {
+        mapping = dma_map_single(priv->dev, ch->skb[ch->dma.desc]->data,
+                                 priv->rx_buf_size, DMA_FROM_DEVICE);
+        if (unlikely(dma_mapping_error(priv->dev, mapping))) {
                 dev_kfree_skb_any(ch->skb[ch->dma.desc]);
                 ch->skb[ch->dma.desc] = skb;
                 ret = -ENOMEM;
···
         wmb();
 skip:
         ch->dma.desc_base[ch->dma.desc].ctl =
-                LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | len;
+                LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | priv->rx_buf_size;

         return ret;
 }
···
         skb->protocol = eth_type_trans(skb, net_dev);
         netif_receive_skb(skb);
         net_dev->stats.rx_packets++;
-        net_dev->stats.rx_bytes += len - ETH_FCS_LEN;
+        net_dev->stats.rx_bytes += len;

         return 0;
 }
···
         int ret = 0;

         net_dev->mtu = new_mtu;
+        priv->rx_buf_size = xrx200_buffer_size(new_mtu);

         if (new_mtu <= old_mtu)
                 return ret;
···
                 ret = xrx200_alloc_skb(ch_rx);
                 if (ret) {
                         net_dev->mtu = old_mtu;
+                        priv->rx_buf_size = xrx200_buffer_size(old_mtu);
                         break;
                 }
                 dev_kfree_skb_any(skb);
···
         net_dev->netdev_ops = &xrx200_netdev_ops;
         SET_NETDEV_DEV(net_dev, dev);
         net_dev->min_mtu = ETH_ZLEN;
-        net_dev->max_mtu = XRX200_DMA_DATA_LEN - VLAN_ETH_HLEN - ETH_FCS_LEN;
+        net_dev->max_mtu = XRX200_DMA_DATA_LEN - xrx200_max_frame_len(0);
+        priv->rx_buf_size = xrx200_buffer_size(ETH_DATA_LEN);

         /* load the memory ranges */
         priv->pmac_reg = devm_platform_get_and_ioremap_resource(pdev, 0, NULL);
···
 #include "setup.h"
 #include "en/params.h"
 #include "en/txrx.h"
+#include "en/health.h"

 /* It matches XDP_UMEM_MIN_CHUNK_SIZE, but as this constant is private and may
  * change unexpectedly, and mlx5e has a minimum valid stride size for striding
···

 void mlx5e_activate_xsk(struct mlx5e_channel *c)
 {
+        /* ICOSQ recovery deactivates RQs. Suspend the recovery to avoid
+         * activating XSKRQ in the middle of recovery.
+         */
+        mlx5e_reporter_icosq_suspend_recovery(c);
         set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
+        mlx5e_reporter_icosq_resume_recovery(c);
+
         /* TX queue is created active. */

         spin_lock_bh(&c->async_icosq_lock);
···

 void mlx5e_deactivate_xsk(struct mlx5e_channel *c)
 {
-        mlx5e_deactivate_rq(&c->xskrq);
+        /* ICOSQ recovery may reactivate XSKRQ if clear_bit is called in the
+         * middle of recovery. Suspend the recovery to avoid it.
+         */
+        mlx5e_reporter_icosq_suspend_recovery(c);
+        clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
+        mlx5e_reporter_icosq_resume_recovery(c);
+        synchronize_net(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
+
         /* TX queue is disabled on close. */
 }
···172172 int is_l4;173173};174174175175+/* Rx Frame Steering */176176+enum stmmac_rfs_type {177177+ STMMAC_RFS_T_VLAN,178178+ STMMAC_RFS_T_MAX,179179+};180180+181181+struct stmmac_rfs_entry {182182+ unsigned long cookie;183183+ int in_use;184184+ int type;185185+ int tc;186186+};187187+175188struct stmmac_priv {176189 /* Frequently used values are kept adjacent for cache effect */177190 u32 tx_coal_frames[MTL_MAX_TX_QUEUES];···302289 struct stmmac_tc_entry *tc_entries;303290 unsigned int flow_entries_max;304291 struct stmmac_flow_entry *flow_entries;292292+ unsigned int rfs_entries_max[STMMAC_RFS_T_MAX];293293+ unsigned int rfs_entries_cnt[STMMAC_RFS_T_MAX];294294+ unsigned int rfs_entries_total;295295+ struct stmmac_rfs_entry *rfs_entries;305296306297 /* Pulse Per Second output */307298 struct stmmac_pps_cfg pps[STMMAC_PPS_MAX];
+12-4
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
···14611461{14621462 struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];14631463 struct stmmac_rx_buffer *buf = &rx_q->buf_pool[i];14641464+ gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);14651465+14661466+ if (priv->dma_cap.addr64 <= 32)14671467+ gfp |= GFP_DMA32;1464146814651469 if (!buf->page) {14661466- buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);14701470+ buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp);14671471 if (!buf->page)14681472 return -ENOMEM;14691473 buf->page_offset = stmmac_rx_offset(priv);14701474 }1471147514721476 if (priv->sph && !buf->sec_page) {14731473- buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);14771477+ buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp);14741478 if (!buf->sec_page)14751479 return -ENOMEM;14761480···44864482 struct stmmac_rx_queue *rx_q = &priv->rx_queue[queue];44874483 int dirty = stmmac_rx_dirty(priv, queue);44884484 unsigned int entry = rx_q->dirty_rx;44854485+ gfp_t gfp = (GFP_ATOMIC | __GFP_NOWARN);44864486+44874487+ if (priv->dma_cap.addr64 <= 32)44884488+ gfp |= GFP_DMA32;4489448944904490 while (dirty-- > 0) {44914491 struct stmmac_rx_buffer *buf = &rx_q->buf_pool[entry];···45024494 p = rx_q->dma_rx + entry;4503449545044496 if (!buf->page) {45054505- buf->page = page_pool_dev_alloc_pages(rx_q->page_pool);44974497+ buf->page = page_pool_alloc_pages(rx_q->page_pool, gfp);45064498 if (!buf->page)45074499 break;45084500 }4509450145104502 if (priv->sph && !buf->sec_page) {45114511- buf->sec_page = page_pool_dev_alloc_pages(rx_q->page_pool);45034503+ buf->sec_page = page_pool_alloc_pages(rx_q->page_pool, gfp);45124504 if (!buf->sec_page)45134505 break;45144506
···239239 /* Check if we have a GPIO associated with this fixed phy */240240 if (!gpiod) {241241 gpiod = fixed_phy_get_gpiod(np);242242- if (IS_ERR(gpiod))243243- return ERR_CAST(gpiod);242242+ if (!gpiod)243243+ return ERR_PTR(-EINVAL);244244 }245245246246 /* Get the next available PHY address, up to PHY_MAX_ADDR */
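The fixed_phy change swaps an `IS_ERR()` check for a `!gpiod` check because the helper reports failure as NULL, not as an encoded error pointer. A userspace re-creation of the kernel's `ERR_PTR` convention shows why the two checks are not interchangeable; `lookup_gpiod()` is a hypothetical stand-in for a helper that swallows errors internally:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Minimal userspace copy of the kernel's error-pointer helpers. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error) { return (void *)error; }

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical lookup in the style of fixed_phy_get_gpiod(): errors
 * are handled inside, so failure is reported as plain NULL. */
static void *lookup_gpiod(int present)
{
	return present ? (void *)0x1000 : NULL;
}
```

`IS_ERR(NULL)` is false, so testing such a helper's result with `IS_ERR()` silently treats a failed lookup as success; the caller must check for NULL instead.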
+3
drivers/net/phy/mdio_bus.c
···460460461461 if (addr == mdiodev->addr) {462462 device_set_node(dev, of_fwnode_handle(child));463463+ /* The refcount on "child" is passed to the mdio464464+ * device. Do _not_ use of_node_put(child) here.465465+ */463466 return;464467 }465468 }
+59-56
drivers/net/tun.c
···209209 struct tun_prog __rcu *steering_prog;210210 struct tun_prog __rcu *filter_prog;211211 struct ethtool_link_ksettings link_ksettings;212212+ /* init args */213213+ struct file *file;214214+ struct ifreq *ifr;212215};213216214217struct veth {215218 __be16 h_vlan_proto;216219 __be16 h_vlan_TCI;217220};221221+222222+static void tun_flow_init(struct tun_struct *tun);223223+static void tun_flow_uninit(struct tun_struct *tun);218224219225static int tun_napi_receive(struct napi_struct *napi, int budget)220226{···959953960954static const struct ethtool_ops tun_ethtool_ops;961955956956+static int tun_net_init(struct net_device *dev)957957+{958958+ struct tun_struct *tun = netdev_priv(dev);959959+ struct ifreq *ifr = tun->ifr;960960+ int err;961961+962962+ dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);963963+ if (!dev->tstats)964964+ return -ENOMEM;965965+966966+ spin_lock_init(&tun->lock);967967+968968+ err = security_tun_dev_alloc_security(&tun->security);969969+ if (err < 0) {970970+ free_percpu(dev->tstats);971971+ return err;972972+ }973973+974974+ tun_flow_init(tun);975975+976976+ dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST |977977+ TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |978978+ NETIF_F_HW_VLAN_STAG_TX;979979+ dev->features = dev->hw_features | NETIF_F_LLTX;980980+ dev->vlan_features = dev->features &981981+ ~(NETIF_F_HW_VLAN_CTAG_TX |982982+ NETIF_F_HW_VLAN_STAG_TX);983983+984984+ tun->flags = (tun->flags & ~TUN_FEATURES) |985985+ (ifr->ifr_flags & TUN_FEATURES);986986+987987+ INIT_LIST_HEAD(&tun->disabled);988988+ err = tun_attach(tun, tun->file, false, ifr->ifr_flags & IFF_NAPI,989989+ ifr->ifr_flags & IFF_NAPI_FRAGS, false);990990+ if (err < 0) {991991+ tun_flow_uninit(tun);992992+ security_tun_dev_free_security(tun->security);993993+ free_percpu(dev->tstats);994994+ return err;995995+ }996996+ return 0;997997+}998998+962999/* Net device detach from fd. 
*/9631000static void tun_net_uninit(struct net_device *dev)9641001{···12181169}1219117012201171static const struct net_device_ops tun_netdev_ops = {11721172+ .ndo_init = tun_net_init,12211173 .ndo_uninit = tun_net_uninit,12221174 .ndo_open = tun_net_open,12231175 .ndo_stop = tun_net_close,···13021252}1303125313041254static const struct net_device_ops tap_netdev_ops = {12551255+ .ndo_init = tun_net_init,13051256 .ndo_uninit = tun_net_uninit,13061257 .ndo_open = tun_net_open,13071258 .ndo_stop = tun_net_close,···13431292#define MAX_MTU 655351344129313451294/* Initialize net device. */13461346-static void tun_net_init(struct net_device *dev)12951295+static void tun_net_initialize(struct net_device *dev)13471296{13481297 struct tun_struct *tun = netdev_priv(dev);13491298···22572206 BUG_ON(!(list_empty(&tun->disabled)));2258220722592208 free_percpu(dev->tstats);22602260- /* We clear tstats so that tun_set_iff() can tell if22612261- * tun_free_netdev() has been called from register_netdevice().22622262- */22632263- dev->tstats = NULL;22642264-22652209 tun_flow_uninit(tun);22662210 security_tun_dev_free_security(tun->security);22672211 __tun_set_ebpf(tun, &tun->steering_prog, NULL);···27622716 tun->rx_batched = 0;27632717 RCU_INIT_POINTER(tun->steering_prog, NULL);2764271827652765- dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);27662766- if (!dev->tstats) {27672767- err = -ENOMEM;27682768- goto err_free_dev;27692769- }27192719+ tun->ifr = ifr;27202720+ tun->file = file;2770272127712771- spin_lock_init(&tun->lock);27722772-27732773- err = security_tun_dev_alloc_security(&tun->security);27742774- if (err < 0)27752775- goto err_free_stat;27762776-27772777- tun_net_init(dev);27782778- tun_flow_init(tun);27792779-27802780- dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST |27812781- TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |27822782- NETIF_F_HW_VLAN_STAG_TX;27832783- dev->features = dev->hw_features | NETIF_F_LLTX;27842784- dev->vlan_features = dev->features 
&27852785- ~(NETIF_F_HW_VLAN_CTAG_TX |27862786- NETIF_F_HW_VLAN_STAG_TX);27872787-27882788- tun->flags = (tun->flags & ~TUN_FEATURES) |27892789- (ifr->ifr_flags & TUN_FEATURES);27902790-27912791- INIT_LIST_HEAD(&tun->disabled);27922792- err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI,27932793- ifr->ifr_flags & IFF_NAPI_FRAGS, false);27942794- if (err < 0)27952795- goto err_free_flow;27222722+ tun_net_initialize(dev);2796272327972724 err = register_netdevice(tun->dev);27982798- if (err < 0)27992799- goto err_detach;27252725+ if (err < 0) {27262726+ free_netdev(dev);27272727+ return err;27282728+ }28002729 /* free_netdev() won't check refcnt, to avoid race28012730 * with dev_put() we need publish tun after registration.28022731 */···2788276727892768 strcpy(ifr->ifr_name, tun->dev->name);27902769 return 0;27912791-27922792-err_detach:27932793- tun_detach_all(dev);27942794- /* We are here because register_netdevice() has failed.27952795- * If register_netdevice() already called tun_free_netdev()27962796- * while dealing with the error, dev->stats has been cleared.27972797- */27982798- if (!dev->tstats)27992799- goto err_free_dev;28002800-28012801-err_free_flow:28022802- tun_flow_uninit(tun);28032803- security_tun_dev_free_security(tun->security);28042804-err_free_stat:28052805- free_percpu(dev->tstats);28062806-err_free_dev:28072807- free_netdev(dev);28082808- return err;28092770}2810277128112772static void tun_get_iff(struct tun_struct *tun, struct ifreq *ifr)
+5-3
drivers/net/usb/asix_common.c
···991010#include "asix.h"11111212+#define AX_HOST_EN_RETRIES 301313+1214int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,1315 u16 size, void *data, int in_pm)1416{···7068 int i, ret;7169 u8 smsr;72707373- for (i = 0; i < 30; ++i) {7171+ for (i = 0; i < AX_HOST_EN_RETRIES; ++i) {7472 ret = asix_set_sw_mii(dev, in_pm);7573 if (ret == -ENODEV || ret == -ETIMEDOUT)7674 break;···7977 0, 0, 1, &smsr, in_pm);8078 if (ret == -ENODEV)8179 break;8282- else if (ret < 0)8080+ else if (ret < sizeof(smsr))8381 continue;8482 else if (smsr & AX_HOST_EN)8583 break;8684 }87858888- return ret;8686+ return i >= AX_HOST_EN_RETRIES ? -ETIMEDOUT : ret;8987}90889189static void reset_asix_rx_fixup_info(struct asix_rx_fixup_info *rx)
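The asix change above fixes a bounded-retry loop so that exhausting the retries reports `-ETIMEDOUT` instead of leaking whatever the last read returned. A self-contained sketch of that pattern, with `poll_fn` standing in for the MII status read:

```c
#include <assert.h>
#include <errno.h>

#define AX_HOST_EN_RETRIES 30

/* Stand-in for the hardware status read: <0 is an error, >0 means the
 * AX_HOST_EN condition was observed, 0 means "not yet". */
typedef int (*poll_fn)(void);

/* Distinguish "loop exhausted" (-ETIMEDOUT) from "a hard error ended
 * the loop early" (propagate that error) and from success. */
static int wait_host_en(poll_fn poll)
{
	int i, ret = 0;

	for (i = 0; i < AX_HOST_EN_RETRIES; ++i) {
		ret = poll();
		if (ret == -ENODEV)
			break;          /* hard error: stop retrying */
		if (ret > 0)
			break;          /* condition met */
	}

	return i >= AX_HOST_EN_RETRIES ? -ETIMEDOUT : ret;
}

/* Sample pollers for the three outcomes. */
static int poll_ok(void)     { return 1; }
static int poll_enodev(void) { return -ENODEV; }
static int poll_busy(void)   { return 0; }
```

Without the final ternary, a caller that never sees the condition would receive the (successful) return of the last partial read and wrongly conclude the handover happened.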
···3232#define NETNEXT_VERSION "12"33333434/* Information for net */3535-#define NET_VERSION "11"3535+#define NET_VERSION "12"36363737#define DRIVER_VERSION "v1." NETNEXT_VERSION "." NET_VERSION3838#define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>"···40164016 ocp_write_word(tp, type, PLA_BP_BA, 0);40174017}4018401840194019+static inline void rtl_reset_ocp_base(struct r8152 *tp)40204020+{40214021+ tp->ocp_base = -1;40224022+}40234023+40194024static int rtl_phy_patch_request(struct r8152 *tp, bool request, bool wait)40204025{40214026 u16 data, check;···40914086 rtl_patch_key_set(tp, key_addr, 0);4092408740934088 rtl_phy_patch_request(tp, false, wait);40944094-40954095- ocp_write_word(tp, MCU_TYPE_PLA, PLA_OCP_GPHY_BASE, tp->ocp_base);4096408940974090 return 0;40984091}···48034800 u32 len;48044801 u8 *data;4805480248034803+ rtl_reset_ocp_base(tp);48044804+48064805 if (sram_read(tp, SRAM_GPHY_FW_VER) >= __le16_to_cpu(phy->version)) {48074806 dev_dbg(&tp->intf->dev, "PHY firmware has been the newest\n");48084807 return;···48504845 }48514846 }4852484748534853- ocp_write_word(tp, MCU_TYPE_PLA, PLA_OCP_GPHY_BASE, tp->ocp_base);48484848+ rtl_reset_ocp_base(tp);48494849+48544850 rtl_phy_patch_request(tp, false, wait);4855485148564852 if (sram_read(tp, SRAM_GPHY_FW_VER) == __le16_to_cpu(phy->version))···4866486048674861 ver_addr = __le16_to_cpu(phy_ver->ver.addr);48684862 ver = __le16_to_cpu(phy_ver->ver.data);48634863+48644864+ rtl_reset_ocp_base(tp);4869486548704866 if (sram_read(tp, ver_addr) >= ver) {48714867 dev_dbg(&tp->intf->dev, "PHY firmware has been the newest\n");···48844876static void rtl8152_fw_phy_fixup(struct r8152 *tp, struct fw_phy_fixup *fix)48854877{48864878 u16 addr, data;48794879+48804880+ rtl_reset_ocp_base(tp);4887488148884882 addr = __le16_to_cpu(fix->setting.addr);48894883 data = ocp_reg_read(tp, addr);···49184908 u32 length;49194909 int i, num;4920491049114911+ rtl_reset_ocp_base(tp);49124912+49214913 num = 
phy->pre_num;49224914 for (i = 0; i < num; i++)49234915 sram_write(tp, __le16_to_cpu(phy->pre_set[i].addr),···49494937 u16 mode_reg, bp_index;49504938 u32 length, i, num;49514939 __le16 *data;49404940+49414941+ rtl_reset_ocp_base(tp);4952494249534943 mode_reg = __le16_to_cpu(phy->mode_reg);49544944 sram_write(tp, mode_reg, __le16_to_cpu(phy->mode_pre));···51215107 if (rtl_fw->post_fw)51225108 rtl_fw->post_fw(tp);5123510951105110+ rtl_reset_ocp_base(tp);51245111 strscpy(rtl_fw->version, fw_hdr->version, RTL_VER_SIZE);51255112 dev_info(&tp->intf->dev, "load %s successfully\n", rtl_fw->version);51265113}···65996584 return true;66006585}6601658665876587+static void r8156_mdio_force_mode(struct r8152 *tp)65886588+{65896589+ u16 data;65906590+65916591+ /* Select force mode through 0xa5b4 bit 1565926592+ * 0: MDIO force mode65936593+ * 1: MMD force mode65946594+ */65956595+ data = ocp_reg_read(tp, 0xa5b4);65966596+ if (data & BIT(15)) {65976597+ data &= ~BIT(15);65986598+ ocp_reg_write(tp, 0xa5b4, data);65996599+ }66006600+}66016601+66026602static void set_carrier(struct r8152 *tp)66036603{66046604 struct net_device *netdev = tp->netdev;···80468016 ocp_data |= ACT_ODMA;80478017 ocp_write_byte(tp, MCU_TYPE_USB, USB_BMU_CONFIG, ocp_data);8048801880198019+ r8156_mdio_force_mode(tp);80498020 rtl_tally_reset(tp);8050802180518022 tp->coalesce = 15000; /* 15 us */···81768145 ocp_data &= ~(RX_AGG_DISABLE | RX_ZERO_EN);81778146 ocp_write_word(tp, MCU_TYPE_USB, USB_USB_CTRL, ocp_data);8178814781488148+ r8156_mdio_force_mode(tp);81798149 rtl_tally_reset(tp);8180815081818151 tp->coalesce = 15000; /* 15 us */···8499846785008468 mutex_lock(&tp->control);8501846984708470+ rtl_reset_ocp_base(tp);84718471+85028472 if (test_bit(SELECTIVE_SUSPEND, &tp->flags))85038473 ret = rtl8152_runtime_resume(tp);85048474 else···85168482 struct r8152 *tp = usb_get_intfdata(intf);8517848385188484 clear_bit(SELECTIVE_SUSPEND, &tp->flags);84858485+ rtl_reset_ocp_base(tp);85198486 
tp->rtl_ops.init(tp);85208487 queue_delayed_work(system_long_wq, &tp->hw_phy_work, 0);85218488 set_ethernet_addr(tp, true);
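The r8152 patch repeatedly calls `rtl_reset_ocp_base()` around firmware loads and resume because the driver caches the last OCP base it programmed and skips the bus write when the value is unchanged; any event that may move the hardware base behind the cache must invalidate it. A sketch of that cached-write-with-sentinel pattern, with register I/O stubbed out by globals:

```c
#include <assert.h>

static int hw_base;      /* stand-in for the device register */
static int hw_writes;    /* counts actual bus transactions */

struct dev { int ocp_base; };

/* Write-through cache: skip the (slow) bus write on a cache hit. */
static void write_base(struct dev *d, int base)
{
	if (base == d->ocp_base)
		return;
	hw_base = base;
	hw_writes++;
	d->ocp_base = base;
}

/* Sentinel reset: -1 never matches a real base, so the next access is
 * forced through to hardware. */
static void reset_ocp_base(struct dev *d)
{
	d->ocp_base = -1;
}

static int run_demo(void)
{
	struct dev d = { .ocp_base = -1 };

	write_base(&d, 0x100);
	write_base(&d, 0x100);  /* cache hit, no bus write */
	reset_ocp_base(&d);
	write_base(&d, 0x100);  /* invalidated, forced through */
	return hw_writes;
}
```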
+6-2
drivers/net/veth.c
···879879880880 stats->xdp_bytes += skb->len;881881 skb = veth_xdp_rcv_skb(rq, skb, bq, stats);882882- if (skb)883883- napi_gro_receive(&rq->xdp_napi, skb);882882+ if (skb) {883883+ if (skb_shared(skb) || skb_unclone(skb, GFP_ATOMIC))884884+ netif_receive_skb(skb);885885+ else886886+ napi_gro_receive(&rq->xdp_napi, skb);887887+ }884888 }885889 done++;886890 }
+3-6
drivers/net/virtio_net.c
···733733 pr_debug("%s: rx error: len %u exceeds max size %d\n",734734 dev->name, len, GOOD_PACKET_LEN);735735 dev->stats.rx_length_errors++;736736- goto err_len;736736+ goto err;737737 }738738739739 if (likely(!vi->xdp_enabled)) {···825825826826skip_xdp:827827 skb = build_skb(buf, buflen);828828- if (!skb) {829829- put_page(page);828828+ if (!skb)830829 goto err;831831- }832830 skb_reserve(skb, headroom - delta);833831 skb_put(skb, len);834832 if (!xdp_prog) {···837839 if (metasize)838840 skb_metadata_set(skb, metasize);839841840840-err:841842 return skb;842843843844err_xdp:844845 rcu_read_unlock();845846 stats->xdp_drops++;846846-err_len:847847+err:847848 stats->drops++;848849 put_page(page);849850xdp_xmit:
+9-5
drivers/net/wireless/broadcom/brcm80211/Kconfig
···77 depends on MAC8021188 depends on BCMA_POSSIBLE99 select BCMA1010- select NEW_LEDS if BCMA_DRIVER_GPIO1111- select LEDS_CLASS if BCMA_DRIVER_GPIO1210 select BRCMUTIL1311 select FW_LOADER1412 select CORDIC1513 help1614 This module adds support for PCIe wireless adapters based on Broadcom1717- IEEE802.11n SoftMAC chipsets. It also has WLAN led support, which will1818- be available if you select BCMA_DRIVER_GPIO. If you choose to build a1919- module, the driver will be called brcmsmac.ko.1515+ IEEE802.11n SoftMAC chipsets. If you choose to build a module, the1616+ driver will be called brcmsmac.ko.1717+1818+config BRCMSMAC_LEDS1919+ def_bool BRCMSMAC && BCMA_DRIVER_GPIO && MAC80211_LEDS2020+ help2121+ The brcmsmac LED support depends on the presence of the2222+ BCMA_DRIVER_GPIO driver, and it only works if LED support2323+ is enabled and reachable from the driver module.20242125source "drivers/net/wireless/broadcom/brcm80211/brcmfmac/Kconfig"2226
···22config IWLEGACY33 tristate44 select FW_LOADER55- select NEW_LEDS66- select LEDS_CLASS75 select LEDS_TRIGGERS86 select MAC80211_LEDS97108config IWL4965119 tristate "Intel Wireless WiFi 4965AGN (iwl4965)"1210 depends on PCI && MAC802111111+ depends on LEDS_CLASS=y || LEDS_CLASS=MAC802111312 select IWLEGACY1413 help1514 This option enables support for···3738config IWL39453839 tristate "Intel PRO/Wireless 3945ABG/BG Network Connection (iwl3945)"3940 depends on PCI && MAC802114141+ depends on LEDS_CLASS=y || LEDS_CLASS=MAC802114042 select IWLEGACY4143 help4244 Select to build the driver supporting the:
+1-1
drivers/net/wireless/intel/iwlwifi/Kconfig
···47474848config IWLWIFI_LEDS4949 bool5050- depends on LEDS_CLASS=y || LEDS_CLASS=IWLWIFI5050+ depends on LEDS_CLASS=y || LEDS_CLASS=MAC802115151 depends on IWLMVM || IWLDVM5252 select LEDS_TRIGGERS5353 select MAC80211_LEDS
+3-2
drivers/net/wireless/intel/iwlwifi/mvm/tx.c
···269269 u8 rate_plcp;270270 u32 rate_flags = 0;271271 bool is_cck;272272- struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);273272274273 /* info->control is only relevant for non HW rate control */275274 if (!ieee80211_hw_check(mvm->hw, HAS_RATE_CONTROL)) {275275+ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);276276+276277 /* HT rate doesn't make sense for a non data frame */277278 WARN_ONCE(info->control.rates[0].flags & IEEE80211_TX_RC_MCS &&278279 !ieee80211_is_data(fc),279280 "Got a HT rate (flags:0x%x/mcs:%d/fc:0x%x/state:%d) for a non data frame\n",280281 info->control.rates[0].flags,281282 info->control.rates[0].idx,282282- le16_to_cpu(fc), mvmsta->sta_state);283283+ le16_to_cpu(fc), sta ? mvmsta->sta_state : -1);283284284285 rate_idx = info->control.rates[0].idx;285286 }
···203203 unsigned int rx_queue_max;204204 unsigned int rx_queue_len;205205 unsigned long last_rx_time;206206+ unsigned int rx_slots_needed;206207 bool stalled;207208208209 struct xenvif_copy_state rx_copy;
+49-28
drivers/net/xen-netback/rx.c
···3333#include <xen/xen.h>3434#include <xen/events.h>35353636+/*3737+ * Update the needed ring page slots for the first SKB queued.3838+ * Note that any call sequence outside the RX thread calling this function3939+ * needs to wake up the RX thread via a call of xenvif_kick_thread()4040+ * afterwards in order to avoid a race with putting the thread to sleep.4141+ */4242+static void xenvif_update_needed_slots(struct xenvif_queue *queue,4343+ const struct sk_buff *skb)4444+{4545+ unsigned int needed = 0;4646+4747+ if (skb) {4848+ needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);4949+ if (skb_is_gso(skb))5050+ needed++;5151+ if (skb->sw_hash)5252+ needed++;5353+ }5454+5555+ WRITE_ONCE(queue->rx_slots_needed, needed);5656+}5757+3658static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)3759{3860 RING_IDX prod, cons;3939- struct sk_buff *skb;4040- int needed;4141- unsigned long flags;6161+ unsigned int needed;42624343- spin_lock_irqsave(&queue->rx_queue.lock, flags);4444-4545- skb = skb_peek(&queue->rx_queue);4646- if (!skb) {4747- spin_unlock_irqrestore(&queue->rx_queue.lock, flags);6363+ needed = READ_ONCE(queue->rx_slots_needed);6464+ if (!needed)4865 return false;4949- }5050-5151- needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);5252- if (skb_is_gso(skb))5353- needed++;5454- if (skb->sw_hash)5555- needed++;5656-5757- spin_unlock_irqrestore(&queue->rx_queue.lock, flags);58665967 do {6068 prod = queue->rx.sring->req_prod;···88808981 spin_lock_irqsave(&queue->rx_queue.lock, flags);90829191- __skb_queue_tail(&queue->rx_queue, skb);9292-9393- queue->rx_queue_len += skb->len;9494- if (queue->rx_queue_len > queue->rx_queue_max) {8383+ if (queue->rx_queue_len >= queue->rx_queue_max) {9584 struct net_device *dev = queue->vif->dev;96859786 netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));8787+ kfree_skb(skb);8888+ queue->vif->dev->stats.rx_dropped++;8989+ } else {9090+ if (skb_queue_empty(&queue->rx_queue))9191+ xenvif_update_needed_slots(queue, 
skb);9292+9393+ __skb_queue_tail(&queue->rx_queue, skb);9494+9595+ queue->rx_queue_len += skb->len;9896 }999710098 spin_unlock_irqrestore(&queue->rx_queue.lock, flags);···114100115101 skb = __skb_dequeue(&queue->rx_queue);116102 if (skb) {103103+ xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue));104104+117105 queue->rx_queue_len -= skb->len;118106 if (queue->rx_queue_len < queue->rx_queue_max) {119107 struct netdev_queue *txq;···150134 break;151135 xenvif_rx_dequeue(queue);152136 kfree_skb(skb);137137+ queue->vif->dev->stats.rx_dropped++;153138 }154139}155140···504487 xenvif_rx_copy_flush(queue);505488}506489507507-static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)490490+static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue)508491{509492 RING_IDX prod, cons;510493511494 prod = queue->rx.sring->req_prod;512495 cons = queue->rx.req_cons;513496497497+ return prod - cons;498498+}499499+500500+static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue)501501+{502502+ unsigned int needed = READ_ONCE(queue->rx_slots_needed);503503+514504 return !queue->stalled &&515515- prod - cons < 1 &&505505+ xenvif_rx_queue_slots(queue) < needed &&516506 time_after(jiffies,517507 queue->last_rx_time + queue->vif->stall_timeout);518508}519509520510static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)521511{522522- RING_IDX prod, cons;512512+ unsigned int needed = READ_ONCE(queue->rx_slots_needed);523513524524- prod = queue->rx.sring->req_prod;525525- cons = queue->rx.req_cons;526526-527527- return queue->stalled && prod - cons >= 1;514514+ return queue->stalled && xenvif_rx_queue_slots(queue) >= needed;528515}529516530517bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
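The xen-netback fix precomputes how many ring slots the head-of-queue skb needs so both the producer and the stall checks can read one value with `READ_ONCE()`. The estimate itself is simple arithmetic, sketched here in userspace:

```c
#include <assert.h>

#define XEN_PAGE_SIZE 4096
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* One ring slot per page of payload, plus one extra slot for a GSO
 * descriptor and one for a hash extra-info segment when present. */
static unsigned int rx_slots_needed(unsigned int len, int gso, int hashed)
{
	unsigned int needed = DIV_ROUND_UP(len, XEN_PAGE_SIZE);

	if (gso)
		needed++;
	if (hashed)
		needed++;
	return needed;
}
```

An empty queue stores 0, which is why `xenvif_rx_ring_slots_available()` can bail out early on `!needed` without peeking at the queue under a lock.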
···524524 phy->gpiod_ena = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);525525 if (IS_ERR(phy->gpiod_ena)) {526526 nfc_err(dev, "Unable to get ENABLE GPIO\n");527527- return PTR_ERR(phy->gpiod_ena);527527+ r = PTR_ERR(phy->gpiod_ena);528528+ goto out_free;528529 }529530530531 phy->se_status.is_ese_present =···536535 r = st21nfca_hci_platform_init(phy);537536 if (r < 0) {538537 nfc_err(&client->dev, "Unable to reboot st21nfca\n");539539- return r;538538+ goto out_free;540539 }541540542541 r = devm_request_threaded_irq(&client->dev, client->irq, NULL,···545544 ST21NFCA_HCI_DRIVER_NAME, phy);546545 if (r < 0) {547546 nfc_err(&client->dev, "Unable to register IRQ handler\n");548548- return r;547547+ goto out_free;549548 }550549551551- return st21nfca_hci_probe(phy, &i2c_phy_ops, LLC_SHDLC_NAME,552552- ST21NFCA_FRAME_HEADROOM,553553- ST21NFCA_FRAME_TAILROOM,554554- ST21NFCA_HCI_LLC_MAX_PAYLOAD,555555- &phy->hdev,556556- &phy->se_status);550550+ r = st21nfca_hci_probe(phy, &i2c_phy_ops, LLC_SHDLC_NAME,551551+ ST21NFCA_FRAME_HEADROOM,552552+ ST21NFCA_FRAME_TAILROOM,553553+ ST21NFCA_HCI_LLC_MAX_PAYLOAD,554554+ &phy->hdev,555555+ &phy->se_status);556556+ if (r)557557+ goto out_free;558558+559559+ return 0;560560+561561+out_free:562562+ kfree_skb(phy->pending_skb);563563+ return r;557564}558565559566static int st21nfca_hci_i2c_remove(struct i2c_client *client)···572563573564 if (phy->powered)574565 st21nfca_hci_i2c_disable(phy);566566+ if (phy->pending_skb)567567+ kfree_skb(phy->pending_skb);575568576569 return 0;577570}
+2-2
drivers/pci/controller/Kconfig
···332332 If unsure, say Y if you have an Apple Silicon system.333333334334config PCIE_MT7621335335- tristate "MediaTek MT7621 PCIe Controller"336336- depends on (RALINK && SOC_MT7621) || (MIPS && COMPILE_TEST)335335+ bool "MediaTek MT7621 PCIe Controller"336336+ depends on SOC_MT7621 || (MIPS && COMPILE_TEST)337337 select PHY_MT7621_PCI338338 default SOC_MT7621339339 help
+11-4
drivers/pci/msi.c
···722722 goto out_disable;723723 }724724725725- /* Ensure that all table entries are masked. */726726- msix_mask_all(base, tsize);727727-728725 ret = msix_setup_entries(dev, base, entries, nvec, affd);729726 if (ret)730727 goto out_disable;···748751 /* Set MSI-X enabled bits and unmask the function */749752 pci_intx_for_msi(dev, 0);750753 dev->msix_enabled = 1;754754+755755+ /*756756+ * Ensure that all table entries are masked to prevent757757+ * stale entries from firing in a crash kernel.758758+ *759759+ * Done late to deal with a broken Marvell NVME device760760+ * which takes the MSI-X mask bits into account even761761+ * when MSI-X is disabled, which prevents MSI delivery.762762+ */763763+ msix_mask_all(base, tsize);751764 pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);752765753766 pcibios_free_irq(dev);···784777 free_msi_irqs(dev);785778786779out_disable:787787- pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0);780780+ pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE, 0);788781789782 return ret;790783}
···285285 desc = (const struct mtk_pin_desc *)hw->soc->pins;286286 *gpio_chip = &hw->chip;287287288288- /* Be greedy to guess first gpio_n is equal to eint_n */289289- if (desc[eint_n].eint.eint_n == eint_n)288288+ /*289289+ * Be greedy to guess first gpio_n is equal to eint_n.290290+ * Only eint virtual eint number is greater than gpio number.291291+ */292292+ if (hw->soc->npins > eint_n &&293293+ desc[eint_n].eint.eint_n == eint_n)290294 *gpio_n = eint_n;291295 else292296 *gpio_n = mtk_xt_find_eint_num(hw, eint_n);
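The mediatek pinctrl fix adds a bounds check before indexing the pin table with the eint number, since virtual eints can exceed the number of physical pins. A sketch of that guard-then-fallback pattern; the table contents and the linear-scan fallback (standing in for `mtk_xt_find_eint_num()`) are illustrative:

```c
#include <assert.h>

struct pin_desc { int eint_n; };

/* Fast path: guess gpio_n == eint_n, but only after checking that
 * eint_n is a valid index into the pin table. Otherwise fall back to
 * scanning for the matching entry. */
static int eint_to_gpio(const struct pin_desc *desc, int npins, int eint_n)
{
	if (npins > eint_n && desc[eint_n].eint_n == eint_n)
		return eint_n;

	for (int i = 0; i < npins; i++)
		if (desc[i].eint_n == eint_n)
			return i;
	return -1;
}

/* Hypothetical table: pins 0 and 1 map 1:1, pin 2 carries eint 5. */
static const struct pin_desc sample_pins[] = { {0}, {1}, {5} };
```

Without the `npins > eint_n` check, a virtual eint number larger than the pin count would read past the end of the table before the fallback ever ran.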
+4-4
drivers/pinctrl/stm32/pinctrl-stm32.c
···12511251 bank_nr = args.args[1] / STM32_GPIO_PINS_PER_BANK;12521252 bank->gpio_chip.base = args.args[1];1253125312541254- npins = args.args[2];12551255- while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3,12561256- ++i, &args))12571257- npins += args.args[2];12541254+ /* get the last defined gpio line (offset + nb of pins) */12551255+ npins = args.args[0] + args.args[2];12561256+ while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, ++i, &args))12571257+ npins = max(npins, (int)(args.args[0] + args.args[2]));12581258 } else {12591259 bank_nr = pctl->nbanks;12601260 bank->gpio_chip.base = bank_nr * STM32_GPIO_PINS_PER_BANK;
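The stm32 pinctrl fix changes how the bank's line count is derived from its `gpio-ranges`: the count is the highest `offset + npins` across all entries, not the sum of the counts, because ranges may start at a non-zero offset or leave holes. A minimal sketch with a hypothetical bank:

```c
#include <assert.h>

struct gpio_range { int offset; int npins; };

#define MAX_OF(a, b) ((a) > (b) ? (a) : (b))

/* Last defined gpio line across all ranges = max(offset + count). */
static int bank_npins(const struct gpio_range *r, int n)
{
	int npins = 0;

	for (int i = 0; i < n; i++)
		npins = MAX_OF(npins, r[i].offset + r[i].npins);
	return npins;
}

/* Hypothetical bank: lines 0-3 and 10-15 are routed. Summing the
 * counts would give 10; the real line count is 16. */
static const struct gpio_range sample_ranges[] = { {0, 4}, {10, 6} };
```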
+2-2
drivers/platform/mellanox/mlxbf-pmc.c
···13741374 pmc->block[i].counters = info[2];13751375 pmc->block[i].type = info[3];1376137613771377- if (IS_ERR(pmc->block[i].mmio_base))13781378- return PTR_ERR(pmc->block[i].mmio_base);13771377+ if (!pmc->block[i].mmio_base)13781378+ return -ENOMEM;1379137913801380 ret = mlxbf_pmc_create_groups(dev, i);13811381 if (ret)
···625625 }626626627627 gmux_data->iostart = res->start;628628- gmux_data->iolen = res->end - res->start;628628+ gmux_data->iolen = resource_size(res);629629630630 if (gmux_data->iolen < GMUX_MIN_IO_LEN) {631631 pr_err("gmux I/O region too small (%lu < %u)\n",
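The apple-gmux one-liner is the classic inclusive-range off-by-one: a `struct resource` describes `[start, end]` with both ends included, so its length is `end - start + 1`, which is exactly what the kernel's `resource_size()` helper computes. A userspace equivalent:

```c
#include <assert.h>

/* Inclusive [start, end] range, as in the kernel's struct resource. */
struct resource { unsigned long start, end; };

/* end - start alone undercounts by one byte; resource_size() adds it. */
static unsigned long resource_size(const struct resource *res)
{
	return res->end - res->start + 1;
}

/* Hypothetical I/O region of 0x100 bytes at 0x700. */
static const struct resource gmux_io = { .start = 0x700, .end = 0x7ff };
```

With the old arithmetic this region measured 0xff bytes and could spuriously fail a minimum-length check like the `GMUX_MIN_IO_LEN` test above.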
-15
drivers/platform/x86/intel/Kconfig
···33# Intel x86 Platform Specific Drivers44#5566-menuconfig X86_PLATFORM_DRIVERS_INTEL77- bool "Intel x86 Platform Specific Device Drivers"88- default y99- help1010- Say Y here to get to see options for device drivers for1111- various Intel x86 platforms, including vendor-specific1212- drivers. This option alone does not add any kernel code.1313-1414- If you say N, all options in this submenu will be skipped1515- and disabled.1616-1717-if X86_PLATFORM_DRIVERS_INTEL1818-196source "drivers/platform/x86/intel/atomisp2/Kconfig"207source "drivers/platform/x86/intel/int1092/Kconfig"218source "drivers/platform/x86/intel/int33fe/Kconfig"···170183171184 To compile this driver as a module, choose M here: the module172185 will be called intel-uncore-frequency.173173-174174-endif # X86_PLATFORM_DRIVERS_INTEL
+1-1
drivers/platform/x86/intel/pmc/pltdrv.c
···65656666 retval = platform_device_register(pmc_core_device);6767 if (retval)6868- kfree(pmc_core_device);6868+ platform_device_put(pmc_core_device);69697070 return retval;7171}
+30-28
drivers/platform/x86/system76_acpi.c
···3535 union acpi_object *nfan;3636 union acpi_object *ntmp;3737 struct input_dev *input;3838+ bool has_open_ec;3839};39404041static const struct acpi_device_id device_ids[] = {···280279281280static void system76_battery_init(void)282281{283283- acpi_handle handle;284284-285285- handle = ec_get_handle();286286- if (handle && acpi_has_method(handle, "GBCT"))287287- battery_hook_register(&system76_battery_hook);282282+ battery_hook_register(&system76_battery_hook);288283}289284290285static void system76_battery_exit(void)291286{292292- acpi_handle handle;293293-294294- handle = ec_get_handle();295295- if (handle && acpi_has_method(handle, "GBCT"))296296- battery_hook_unregister(&system76_battery_hook);287287+ battery_hook_unregister(&system76_battery_hook);297288}298289299290// Get the airplane mode LED brightness···666673 acpi_dev->driver_data = data;667674 data->acpi_dev = acpi_dev;668675676676+ // Some models do not run open EC firmware. Check for an ACPI method677677+ // that only exists on open EC to guard functionality specific to it.678678+ data->has_open_ec = acpi_has_method(acpi_device_handle(data->acpi_dev), "NFAN");679679+669680 err = system76_get(data, "INIT");670681 if (err)671682 return err;···715718 if (err)716719 goto error;717720718718- err = system76_get_object(data, "NFAN", &data->nfan);719719- if (err)720720- goto error;721721+ if (data->has_open_ec) {722722+ err = system76_get_object(data, "NFAN", &data->nfan);723723+ if (err)724724+ goto error;721725722722- err = system76_get_object(data, "NTMP", &data->ntmp);723723- if (err)724724- goto error;726726+ err = system76_get_object(data, "NTMP", &data->ntmp);727727+ if (err)728728+ goto error;725729726726- data->therm = devm_hwmon_device_register_with_info(&acpi_dev->dev,727727- "system76_acpi", data, &thermal_chip_info, NULL);728728- err = PTR_ERR_OR_ZERO(data->therm);729729- if (err)730730- goto error;730730+ data->therm = devm_hwmon_device_register_with_info(&acpi_dev->dev,731731+ 
"system76_acpi", data, &thermal_chip_info, NULL);732732+ err = PTR_ERR_OR_ZERO(data->therm);733733+ if (err)734734+ goto error;731735732732- system76_battery_init();736736+ system76_battery_init();737737+ }733738734739 return 0;735740736741error:737737- kfree(data->ntmp);738738- kfree(data->nfan);742742+ if (data->has_open_ec) {743743+ kfree(data->ntmp);744744+ kfree(data->nfan);745745+ }739746 return err;740747}741748···750749751750 data = acpi_driver_data(acpi_dev);752751753753- system76_battery_exit();752752+ if (data->has_open_ec) {753753+ system76_battery_exit();754754+ kfree(data->nfan);755755+ kfree(data->ntmp);756756+ }754757755758 devm_led_classdev_unregister(&acpi_dev->dev, &data->ap_led);756759 devm_led_classdev_unregister(&acpi_dev->dev, &data->kb_led);757757-758758- kfree(data->nfan);759759- kfree(data->ntmp);760760761761 system76_get(data, "FINI");762762
···30533053 struct smp_completion_resp *psmpPayload;30543054 struct task_status_struct *ts;30553055 struct pm8001_device *pm8001_dev;30563056- char *pdma_respaddr = NULL;3057305630583057 psmpPayload = (struct smp_completion_resp *)(piomb + 4);30593058 status = le32_to_cpu(psmpPayload->status);···30793080 if (pm8001_dev)30803081 atomic_dec(&pm8001_dev->running_req);30813082 if (pm8001_ha->smp_exp_mode == SMP_DIRECT) {30833083+ struct scatterlist *sg_resp = &t->smp_task.smp_resp;30843084+ u8 *payload;30853085+ void *to;30863086+30823087 pm8001_dbg(pm8001_ha, IO,30833088 "DIRECT RESPONSE Length:%d\n",30843089 param);30853085- pdma_respaddr = (char *)(phys_to_virt(cpu_to_le6430863086- ((u64)sg_dma_address30873087- (&t->smp_task.smp_resp))));30903090+ to = kmap_atomic(sg_page(sg_resp));30913091+ payload = to + sg_resp->offset;30883092 for (i = 0; i < param; i++) {30893089- *(pdma_respaddr+i) = psmpPayload->_r_a[i];30933093+ *(payload + i) = psmpPayload->_r_a[i];30903094 pm8001_dbg(pm8001_ha, IO,30913095 "SMP Byte%d DMA data 0x%x psmp 0x%x\n",30923092- i, *(pdma_respaddr + i),30963096+ i, *(payload + i),30933097 psmpPayload->_r_a[i]);30943098 }30993099+ kunmap_atomic(to);30953100 }30963101 break;30973102 case IO_ABORTED:···42394236 struct sas_task *task = ccb->task;42404237 struct domain_device *dev = task->dev;42414238 struct pm8001_device *pm8001_dev = dev->lldd_dev;42424242- struct scatterlist *sg_req, *sg_resp;42394239+ struct scatterlist *sg_req, *sg_resp, *smp_req;42434240 u32 req_len, resp_len;42444241 struct smp_req smp_cmd;42454242 u32 opc;42464243 struct inbound_queue_table *circularQ;42474247- char *preq_dma_addr = NULL;42484248- __le64 tmp_addr;42494244 u32 i, length;42454245+ u8 *payload;42464246+ u8 *to;4250424742514248 memset(&smp_cmd, 0, sizeof(smp_cmd));42524249 /*···42834280 pm8001_ha->smp_exp_mode = SMP_INDIRECT;428442814285428242864286- tmp_addr = cpu_to_le64((u64)sg_dma_address(&task->smp_task.smp_req));42874287- preq_dma_addr = (char 
*)phys_to_virt(tmp_addr);42834283+ smp_req = &task->smp_task.smp_req;42844284+ to = kmap_atomic(sg_page(smp_req));42854285+ payload = to + smp_req->offset;4288428642894287 /* INDIRECT MODE command settings. Use DMA */42904288 if (pm8001_ha->smp_exp_mode == SMP_INDIRECT) {···42934289 /* for SPCv indirect mode. Place the top 4 bytes of42944290 * SMP Request header here. */42954291 for (i = 0; i < 4; i++)42964296- smp_cmd.smp_req16[i] = *(preq_dma_addr + i);42924292+ smp_cmd.smp_req16[i] = *(payload + i);42974293 /* exclude top 4 bytes for SMP req header */42984294 smp_cmd.long_smp_req.long_req_addr =42994295 cpu_to_le64((u64)sg_dma_address···43244320 pm8001_dbg(pm8001_ha, IO, "SMP REQUEST DIRECT MODE\n");43254321 for (i = 0; i < length; i++)43264322 if (i < 16) {43274327- smp_cmd.smp_req16[i] = *(preq_dma_addr+i);43234323+ smp_cmd.smp_req16[i] = *(payload + i);43284324 pm8001_dbg(pm8001_ha, IO,43294325 "Byte[%d]:%x (DMA data:%x)\n",43304326 i, smp_cmd.smp_req16[i],43314331- *(preq_dma_addr));43274327+ *(payload));43324328 } else {43334333- smp_cmd.smp_req[i] = *(preq_dma_addr+i);43294329+ smp_cmd.smp_req[i] = *(payload + i);43344330 pm8001_dbg(pm8001_ha, IO,43354331 "Byte[%d]:%x (DMA data:%x)\n",43364332 i, smp_cmd.smp_req[i],43374337- *(preq_dma_addr));43334333+ *(payload));43384334 }43394335 }43404340-43364336+ kunmap_atomic(to);43414337 build_smp_cmd(pm8001_dev->device_id, smp_cmd.tag,43424338 &smp_cmd, pm8001_ha->smp_exp_mode, length);43434339 rc = pm8001_mpi_build_cmd(pm8001_ha, circularQ, opc, &smp_cmd,
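The pm8001 hunks above replace an invalid `phys_to_virt()` on a DMA address with `kmap_atomic(sg_page(sg))` plus the scatterlist's intra-page offset: a DMA address is a bus address, and the CPU must go through a mapping of the backing page to touch the buffer. A minimal userspace model of the "map page, add offset" access (everything here — `struct fake_sg`, `copy_smp_response`, the clamping guard — is an illustrative stand-in, not kernel API):

```c
#include <stddef.h>

/*
 * Simplified model: a scatterlist entry points at a backing page plus
 * an offset, and the CPU addresses the buffer through a mapping of
 * that page -- never by treating the DMA address as a physical one.
 */
struct fake_sg {
	unsigned char *page;   /* stands in for kmap_atomic(sg_page(sg)) */
	size_t offset;         /* intra-page offset, like sg->offset */
	size_t length;
};

/* Copy "param" response bytes into the mapped buffer, byte by byte as
 * the SMP completion path does. Returns the number of bytes copied.
 * The clamp against sg->length is an added guard for this sketch. */
static size_t copy_smp_response(struct fake_sg *sg,
				const unsigned char *resp, size_t param)
{
	unsigned char *payload = sg->page + sg->offset; /* to + offset */
	size_t i;

	if (param > sg->length)
		param = sg->length;
	for (i = 0; i < param; i++)
		payload[i] = resp[i];
	return param;
}
```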
+5-2
drivers/scsi/vmw_pvscsi.c
···586586 * Commands like INQUIRY may transfer less data than587587 * requested by the initiator via bufflen. Set residual588588 * count to make upper layer aware of the actual amount589589- * of data returned.589589+ * of data returned. There are cases when the controller590590+ * returns zero dataLen with non-zero data - do not set591591+ * residual count in that case.590592 */591591- scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen);593593+ if (e->dataLen && (e->dataLen < scsi_bufflen(cmd)))594594+ scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen);592595 cmd->result = (DID_OK << 16);593596 break;594597
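The residual rule the vmw_pvscsi hunk encodes can be stated as one pure function: only report a residual when the controller returned a non-zero `dataLen` smaller than the initiator's buffer. A sketch (the function name is illustrative; in the driver the result feeds `scsi_set_resid()`):

```c
#include <stdint.h>

/* Returns the residual byte count to report, or 0 when none should be
 * set. A zero dataLen may still carry data on some controllers, so
 * reporting a full-buffer residual for it would be wrong. */
static uint32_t pvscsi_residual(uint32_t bufflen, uint32_t data_len)
{
	if (data_len == 0 || data_len >= bufflen)
		return 0;
	return bufflen - data_len;
}
```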
+19
drivers/soc/imx/imx8m-blk-ctrl.c
···17171818#define BLK_SFT_RSTN 0x01919#define BLK_CLK_EN 0x42020+#define BLK_MIPI_RESET_DIV 0x8 /* Mini/Nano DISPLAY_BLK_CTRL only */20212122struct imx8m_blk_ctrl_domain;2223···3736 const char *gpc_name;3837 u32 rst_mask;3938 u32 clk_mask;3939+4040+ /*4141+ * i.MX8M Mini and Nano have a third DISPLAY_BLK_CTRL register4242+ * which is used to control the reset for the MIPI Phy.4343+ * Since it's only present in certain circumstances,4444+ * an if-statement should be used before setting and clearing this4545+ * register.4646+ */4747+ u32 mipi_phy_rst_mask;4048};41494250#define DOMAIN_MAX_CLKS 3···88788979 /* put devices into reset */9080 regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask);8181+ if (data->mipi_phy_rst_mask)8282+ regmap_clear_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask);91839284 /* enable upstream and blk-ctrl clocks to allow reset to propagate */9385 ret = clk_bulk_prepare_enable(data->num_clks, domain->clks);···11199112100 /* release reset */113101 regmap_set_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask);102102+ if (data->mipi_phy_rst_mask)103103+ regmap_set_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask);114104115105 /* disable upstream clocks */116106 clk_bulk_disable_unprepare(data->num_clks, domain->clks);···134120 struct imx8m_blk_ctrl *bc = domain->bc;135121136122 /* put devices into reset and disable clocks */123123+ if (data->mipi_phy_rst_mask)124124+ regmap_clear_bits(bc->regmap, BLK_MIPI_RESET_DIV, data->mipi_phy_rst_mask);125125+137126 regmap_clear_bits(bc->regmap, BLK_SFT_RSTN, data->rst_mask);138127 regmap_clear_bits(bc->regmap, BLK_CLK_EN, data->clk_mask);139128···497480 .gpc_name = "mipi-dsi",498481 .rst_mask = BIT(5),499482 .clk_mask = BIT(8) | BIT(9),483483+ .mipi_phy_rst_mask = BIT(17),500484 },501485 [IMX8MM_DISPBLK_PD_MIPI_CSI] = {502486 .name = "dispblk-mipi-csi",···506488 .gpc_name = "mipi-csi",507489 .rst_mask = BIT(3) | BIT(4),508490 .clk_mask = BIT(10) | BIT(11),491491+ 
.mipi_phy_rst_mask = BIT(16),509492 },510493};511494
+4
drivers/soc/imx/soc-imx.c
···3636 int ret;3737 int i;38383939+ /* Return early if this is running on devices with different SoCs */4040+ if (!__mxc_cpu_type)4141+ return 0;4242+3943 if (of_machine_is_compatible("fsl,ls1021a"))4044 return 0;4145
+1-1
drivers/soc/tegra/fuse/fuse-tegra.c
···320320};321321builtin_platform_driver(tegra_fuse_driver);322322323323-bool __init tegra_fuse_read_spare(unsigned int spare)323323+u32 __init tegra_fuse_read_spare(unsigned int spare)324324{325325 unsigned int offset = fuse->soc->info->spare + spare * 4;326326
+1-1
drivers/soc/tegra/fuse/fuse.h
···6565void tegra_init_revision(void);6666void tegra_init_apbmisc(void);67676868-bool __init tegra_fuse_read_spare(unsigned int spare);6868+u32 __init tegra_fuse_read_spare(unsigned int spare);6969u32 __init tegra_fuse_read_early(unsigned int offset);70707171u8 tegra_get_major_rev(void);
···11// SPDX-License-Identifier: GPL-2.0-only22/*33- * Copyright (c) 2015-2016, Linaro Limited33+ * Copyright (c) 2015-2017, 2019-2021 Linaro Limited44 */55+#include <linux/anon_inodes.h>56#include <linux/device.h>66-#include <linux/dma-buf.h>77-#include <linux/fdtable.h>87#include <linux/idr.h>88+#include <linux/mm.h>99#include <linux/sched.h>1010#include <linux/slab.h>1111#include <linux/tee_drv.h>1212#include <linux/uio.h>1313-#include <linux/module.h>1413#include "tee_private.h"1515-1616-MODULE_IMPORT_NS(DMA_BUF);17141815static void release_registered_pages(struct tee_shm *shm)1916{···2831 }2932}30333131-static void tee_shm_release(struct tee_shm *shm)3434+static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)3235{3333- struct tee_device *teedev = shm->ctx->teedev;3434-3535- if (shm->flags & TEE_SHM_DMA_BUF) {3636- mutex_lock(&teedev->mutex);3737- idr_remove(&teedev->idr, shm->id);3838- mutex_unlock(&teedev->mutex);3939- }4040-4136 if (shm->flags & TEE_SHM_POOL) {4237 struct tee_shm_pool_mgr *poolm;4338···55665667 tee_device_put(teedev);5768}5858-5959-static struct sg_table *tee_shm_op_map_dma_buf(struct dma_buf_attachment6060- *attach, enum dma_data_direction dir)6161-{6262- return NULL;6363-}6464-6565-static void tee_shm_op_unmap_dma_buf(struct dma_buf_attachment *attach,6666- struct sg_table *table,6767- enum dma_data_direction dir)6868-{6969-}7070-7171-static void tee_shm_op_release(struct dma_buf *dmabuf)7272-{7373- struct tee_shm *shm = dmabuf->priv;7474-7575- tee_shm_release(shm);7676-}7777-7878-static int tee_shm_op_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)7979-{8080- struct tee_shm *shm = dmabuf->priv;8181- size_t size = vma->vm_end - vma->vm_start;8282-8383- /* Refuse sharing shared memory provided by application */8484- if (shm->flags & TEE_SHM_USER_MAPPED)8585- return -EINVAL;8686-8787- return remap_pfn_range(vma, vma->vm_start, shm->paddr >> PAGE_SHIFT,8888- size, vma->vm_page_prot);8989-}9090-9191-static const 
struct dma_buf_ops tee_shm_dma_buf_ops = {9292- .map_dma_buf = tee_shm_op_map_dma_buf,9393- .unmap_dma_buf = tee_shm_op_unmap_dma_buf,9494- .release = tee_shm_op_release,9595- .mmap = tee_shm_op_mmap,9696-};97699870struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)9971{···90140 goto err_dev_put;91141 }92142143143+ refcount_set(&shm->refcount, 1);93144 shm->flags = flags | TEE_SHM_POOL;94145 shm->ctx = ctx;95146 if (flags & TEE_SHM_DMA_BUF)···104153 goto err_kfree;105154 }106155107107-108156 if (flags & TEE_SHM_DMA_BUF) {109109- DEFINE_DMA_BUF_EXPORT_INFO(exp_info);110110-111157 mutex_lock(&teedev->mutex);112158 shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL);113159 mutex_unlock(&teedev->mutex);···112164 ret = ERR_PTR(shm->id);113165 goto err_pool_free;114166 }115115-116116- exp_info.ops = &tee_shm_dma_buf_ops;117117- exp_info.size = shm->size;118118- exp_info.flags = O_RDWR;119119- exp_info.priv = shm;120120-121121- shm->dmabuf = dma_buf_export(&exp_info);122122- if (IS_ERR(shm->dmabuf)) {123123- ret = ERR_CAST(shm->dmabuf);124124- goto err_rem;125125- }126167 }127168128169 teedev_ctx_get(ctx);129170130171 return shm;131131-err_rem:132132- if (flags & TEE_SHM_DMA_BUF) {133133- mutex_lock(&teedev->mutex);134134- idr_remove(&teedev->idr, shm->id);135135- mutex_unlock(&teedev->mutex);136136- }137172err_pool_free:138173 poolm->ops->free(poolm, shm);139174err_kfree:···177246 goto err;178247 }179248249249+ refcount_set(&shm->refcount, 1);180250 shm->flags = flags | TEE_SHM_REGISTER;181251 shm->ctx = ctx;182252 shm->id = -1;···238306 goto err;239307 }240308241241- if (flags & TEE_SHM_DMA_BUF) {242242- DEFINE_DMA_BUF_EXPORT_INFO(exp_info);243243-244244- exp_info.ops = &tee_shm_dma_buf_ops;245245- exp_info.size = shm->size;246246- exp_info.flags = O_RDWR;247247- exp_info.priv = shm;248248-249249- shm->dmabuf = dma_buf_export(&exp_info);250250- if (IS_ERR(shm->dmabuf)) {251251- ret = ERR_CAST(shm->dmabuf);252252- 
teedev->desc->ops->shm_unregister(ctx, shm);253253- goto err;254254- }255255- }256256-257309 return shm;258310err:259311 if (shm) {···255339}256340EXPORT_SYMBOL_GPL(tee_shm_register);257341342342+static int tee_shm_fop_release(struct inode *inode, struct file *filp)343343+{344344+ tee_shm_put(filp->private_data);345345+ return 0;346346+}347347+348348+static int tee_shm_fop_mmap(struct file *filp, struct vm_area_struct *vma)349349+{350350+ struct tee_shm *shm = filp->private_data;351351+ size_t size = vma->vm_end - vma->vm_start;352352+353353+ /* Refuse sharing shared memory provided by application */354354+ if (shm->flags & TEE_SHM_USER_MAPPED)355355+ return -EINVAL;356356+357357+ /* check for overflowing the buffer's size */358358+ if (vma->vm_pgoff + vma_pages(vma) > shm->size >> PAGE_SHIFT)359359+ return -EINVAL;360360+361361+ return remap_pfn_range(vma, vma->vm_start, shm->paddr >> PAGE_SHIFT,362362+ size, vma->vm_page_prot);363363+}364364+365365+static const struct file_operations tee_shm_fops = {366366+ .owner = THIS_MODULE,367367+ .release = tee_shm_fop_release,368368+ .mmap = tee_shm_fop_mmap,369369+};370370+258371/**259372 * tee_shm_get_fd() - Increase reference count and return file descriptor260373 * @shm: Shared memory handle···296351 if (!(shm->flags & TEE_SHM_DMA_BUF))297352 return -EINVAL;298353299299- get_dma_buf(shm->dmabuf);300300- fd = dma_buf_fd(shm->dmabuf, O_CLOEXEC);354354+ /* matched by tee_shm_put() in tee_shm_op_release() */355355+ refcount_inc(&shm->refcount);356356+ fd = anon_inode_getfd("tee_shm", &tee_shm_fops, shm, O_RDWR);301357 if (fd < 0)302302- dma_buf_put(shm->dmabuf);358358+ tee_shm_put(shm);303359 return fd;304360}305361···310364 */311365void tee_shm_free(struct tee_shm *shm)312366{313313- /*314314- * dma_buf_put() decreases the dmabuf reference counter and will315315- * call tee_shm_release() when the last reference is gone.316316- *317317- * In the case of driver private memory we call tee_shm_release318318- * directly 
instead as it doesn't have a reference counter.319319- */320320- if (shm->flags & TEE_SHM_DMA_BUF)321321- dma_buf_put(shm->dmabuf);322322- else323323- tee_shm_release(shm);367367+ tee_shm_put(shm);324368}325369EXPORT_SYMBOL_GPL(tee_shm_free);326370···417481 teedev = ctx->teedev;418482 mutex_lock(&teedev->mutex);419483 shm = idr_find(&teedev->idr, id);484484+ /*485485+ * If the tee_shm was found in the IDR it must have a refcount486486+ * larger than 0 due to the guarantee in tee_shm_put() below. So487487+ * it's safe to use refcount_inc().488488+ */420489 if (!shm || shm->ctx != ctx)421490 shm = ERR_PTR(-EINVAL);422422- else if (shm->flags & TEE_SHM_DMA_BUF)423423- get_dma_buf(shm->dmabuf);491491+ else492492+ refcount_inc(&shm->refcount);424493 mutex_unlock(&teedev->mutex);425494 return shm;426495}···437496 */438497void tee_shm_put(struct tee_shm *shm)439498{440440- if (shm->flags & TEE_SHM_DMA_BUF)441441- dma_buf_put(shm->dmabuf);499499+ struct tee_device *teedev = shm->ctx->teedev;500500+ bool do_release = false;501501+502502+ mutex_lock(&teedev->mutex);503503+ if (refcount_dec_and_test(&shm->refcount)) {504504+ /*505505+ * refcount has reached 0, we must now remove it from the506506+ * IDR before releasing the mutex. This will guarantee that507507+ * the refcount_inc() in tee_shm_get_from_id() never starts508508+ * from 0.509509+ */510510+ if (shm->flags & TEE_SHM_DMA_BUF)511511+ idr_remove(&teedev->idr, shm->id);512512+ do_release = true;513513+ }514514+ mutex_unlock(&teedev->mutex);515515+516516+ if (do_release)517517+ tee_shm_release(teedev, shm);442518}443519EXPORT_SYMBOL_GPL(tee_shm_put);
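The tee_shm rework replaces dma-buf reference counting with an explicit refcount whose final put unpublishes the object from the IDR under the same mutex used by lookups, so a lookup can never take a reference on an object whose count already reached zero. A single-threaded userspace sketch of that lifetime rule (`struct obj` and the one-entry `table_slot` are stand-ins for the kernel's tee_shm and IDR; a plain int under a pthread mutex stands in for `refcount_t`):

```c
#include <stdbool.h>
#include <stddef.h>
#include <pthread.h>

struct obj {
	int refcount;
	bool released;
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct obj *table_slot;	/* one-entry stand-in for the IDR */

static struct obj *obj_get_from_table(void)
{
	struct obj *o;

	pthread_mutex_lock(&table_lock);
	o = table_slot;
	if (o)
		o->refcount++;	/* never starts from 0: see obj_put() */
	pthread_mutex_unlock(&table_lock);
	return o;
}

static void obj_put(struct obj *o)
{
	bool do_release = false;

	pthread_mutex_lock(&table_lock);
	if (--o->refcount == 0) {
		table_slot = NULL;	/* unpublish before dropping lock */
		do_release = true;
	}
	pthread_mutex_unlock(&table_lock);

	if (do_release)
		o->released = true;	/* tee_shm_release() in the driver */
}
```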
+27-3
drivers/tty/hvc/hvc_xen.c
···3737 struct xenbus_device *xbdev;3838 struct xencons_interface *intf;3939 unsigned int evtchn;4040+ XENCONS_RING_IDX out_cons;4141+ unsigned int out_cons_same;4042 struct hvc_struct *hvc;4143 int irq;4244 int vtermno;···140138 XENCONS_RING_IDX cons, prod;141139 int recv = 0;142140 struct xencons_info *xencons = vtermno_to_xencons(vtermno);141141+ unsigned int eoiflag = 0;142142+143143 if (xencons == NULL)144144 return -EINVAL;145145 intf = xencons->intf;···161157 mb(); /* read ring before consuming */162158 intf->in_cons = cons;163159164164- notify_daemon(xencons);160160+ /*161161+ * When to mark interrupt having been spurious:162162+ * - there was no new data to be read, and163163+ * - the backend did not consume some output bytes, and164164+ * - the previous round with no read data didn't see consumed bytes165165+ * (we might have a race with an interrupt being in flight while166166+ * updating xencons->out_cons, so account for that by allowing one167167+ * round without any visible reason)168168+ */169169+ if (intf->out_cons != xencons->out_cons) {170170+ xencons->out_cons = intf->out_cons;171171+ xencons->out_cons_same = 0;172172+ }173173+ if (recv) {174174+ notify_daemon(xencons);175175+ } else if (xencons->out_cons_same++ > 1) {176176+ eoiflag = XEN_EOI_FLAG_SPURIOUS;177177+ }178178+179179+ xen_irq_lateeoi(xencons->irq, eoiflag);180180+165181 return recv;166182}167183···410386 if (ret)411387 return ret;412388 info->evtchn = evtchn;413413- irq = bind_evtchn_to_irq(evtchn);389389+ irq = bind_interdomain_evtchn_to_irq_lateeoi(dev, evtchn);414390 if (irq < 0)415391 return irq;416392 info->irq = irq;···575551 return r;576552577553 info = vtermno_to_xencons(HVC_COOKIE);578578- info->irq = bind_evtchn_to_irq(info->evtchn);554554+ info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn);579555 }580556 if (info->irq < 0)581557 info->irq = 0; /* NO_IRQ */
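The heuristic the hvc_xen hunk adds — flag the interrupt as spurious only after repeated rounds with no data read and no backend progress, allowing one grace round for an in-flight update — is a small pure decision function. A sketch mirroring the patch's fields and its `> 1` threshold (names are illustrative; the real code additionally calls `notify_daemon()` and `xen_irq_lateeoi()`):

```c
#include <stdbool.h>

struct cons_state {
	unsigned int out_cons;       /* last seen backend consumer index */
	unsigned int out_cons_same;  /* quiet rounds without progress */
};

/* Returns true when the EOI should carry the SPURIOUS flag. */
static bool cons_round_spurious(struct cons_state *s,
				unsigned int ring_out_cons, int recv)
{
	if (ring_out_cons != s->out_cons) {
		s->out_cons = ring_out_cons;	/* backend made progress */
		s->out_cons_same = 0;
	}
	if (recv)
		return false;			/* data was read: real IRQ */
	/* allow one quiet round for a racing out_cons update */
	return s->out_cons_same++ > 1;
}
```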
+22-1
drivers/tty/n_hdlc.c
···140140 struct n_hdlc_buf_list rx_buf_list;141141 struct n_hdlc_buf_list tx_free_buf_list;142142 struct n_hdlc_buf_list rx_free_buf_list;143143+ struct work_struct write_work;144144+ struct tty_struct *tty_for_write_work;143145};144146145147/*···156154/* Local functions */157155158156static struct n_hdlc *n_hdlc_alloc(void);157157+static void n_hdlc_tty_write_work(struct work_struct *work);159158160159/* max frame size for memory allocations */161160static int maxframe = 4096;···213210 wake_up_interruptible(&tty->read_wait);214211 wake_up_interruptible(&tty->write_wait);215212213213+ cancel_work_sync(&n_hdlc->write_work);214214+216215 n_hdlc_free_buf_list(&n_hdlc->rx_free_buf_list);217216 n_hdlc_free_buf_list(&n_hdlc->tx_free_buf_list);218217 n_hdlc_free_buf_list(&n_hdlc->rx_buf_list);···246241 return -ENFILE;247242 }248243244244+ INIT_WORK(&n_hdlc->write_work, n_hdlc_tty_write_work);245245+ n_hdlc->tty_for_write_work = tty;249246 tty->disc_data = n_hdlc;250247 tty->receive_room = 65536;251248···342335} /* end of n_hdlc_send_frames() */343336344337/**338338+ * n_hdlc_tty_write_work - Asynchronous callback for transmit wakeup339339+ * @work: pointer to work_struct340340+ *341341+ * Called when low level device driver can accept more send data.342342+ */343343+static void n_hdlc_tty_write_work(struct work_struct *work)344344+{345345+ struct n_hdlc *n_hdlc = container_of(work, struct n_hdlc, write_work);346346+ struct tty_struct *tty = n_hdlc->tty_for_write_work;347347+348348+ n_hdlc_send_frames(n_hdlc, tty);349349+} /* end of n_hdlc_tty_write_work() */350350+351351+/**345352 * n_hdlc_tty_wakeup - Callback for transmit wakeup346353 * @tty: pointer to associated tty instance data347354 *···365344{366345 struct n_hdlc *n_hdlc = tty->disc_data;367346368368- n_hdlc_send_frames(n_hdlc, tty);347347+ schedule_work(&n_hdlc->write_work);369348} /* end of n_hdlc_tty_wakeup() */370349371350/**
···10291029 return;10301030 }1031103110321032+ *status = 0;10331033+10321034 cdnsp_finish_td(pdev, td, event, pep, status);10331035}10341036···15251523 spin_lock_irqsave(&pdev->lock, flags);1526152415271525 if (pdev->cdnsp_state & (CDNSP_STATE_HALTED | CDNSP_STATE_DYING)) {15281528- cdnsp_died(pdev);15261526+ /*15271527+ * While removing or stopping driver there may still be deferred15281528+ * not handled interrupt which should not be treated as error.15291529+ * Driver should simply ignore it.15301530+ */15311531+ if (pdev->gadget_driver)15321532+ cdnsp_died(pdev);15331533+15291534 spin_unlock_irqrestore(&pdev->lock, flags);15301535 return IRQ_HANDLED;15311536 }
+2-2
drivers/usb/cdns3/cdnsp-trace.h
···5757 __entry->first_prime_det = pep->stream_info.first_prime_det;5858 __entry->drbls_count = pep->stream_info.drbls_count;5959 ),6060- TP_printk("%s: SID: %08x ep state: %x stream: enabled: %d num %d "6060+ TP_printk("%s: SID: %08x, ep state: %x, stream: enabled: %d num %d "6161 "tds %d, first prime: %d drbls %d",6262- __get_str(name), __entry->state, __entry->stream_id,6262+ __get_str(name), __entry->stream_id, __entry->state,6363 __entry->enabled, __entry->num_streams, __entry->td_count,6464 __entry->first_prime_det, __entry->drbls_count)6565);
···273273 gpd->dw3_info |= cpu_to_le32(GPD_EXT_FLAG_ZLP);274274 }275275276276+ /* prevent reorder, make sure GPD's HWO is set last */277277+ mb();276278 gpd->dw0_info |= cpu_to_le32(GPD_FLAGS_IOC | GPD_FLAGS_HWO);277279278280 mreq->gpd = gpd;···308306 gpd->next_gpd = cpu_to_le32(lower_32_bits(enq_dma));309307 ext_addr |= GPD_EXT_NGP(mtu, upper_32_bits(enq_dma));310308 gpd->dw3_info = cpu_to_le32(ext_addr);309309+ /* prevent reorder, make sure GPD's HWO is set last */310310+ mb();311311 gpd->dw0_info |= cpu_to_le32(GPD_FLAGS_IOC | GPD_FLAGS_HWO);312312313313 mreq->gpd = gpd;···449445 return;450446 }451447 mtu3_setbits(mbase, MU3D_EP_TXCR0(mep->epnum), TX_TXPKTRDY);452452-448448+ /* prevent reorder, make sure GPD's HWO is set last */449449+ mb();453450 /* by pass the current GDP */454451 gpd_current->dw0_info |= cpu_to_le32(GPD_FLAGS_BPS | GPD_FLAGS_HWO);455452
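The mtu3 hunks insert `mb()` so the GPD's HWO (hardware-own) bit is published only after the rest of the descriptor is written. The same publish pattern sketched in userspace with a C11 release fence (`struct gpd` and `gpd_publish` are illustrative; real cross-CPU or CPU-to-device publication needs atomic stores or the kernel's `mb()`, a fence over plain stores only models the intent):

```c
#include <stdatomic.h>
#include <stdint.h>

#define GPD_FLAGS_HWO 0x01u	/* as in the driver's flags word */

struct gpd {
	uint32_t dw0_info;	/* owned bit lives here */
	uint32_t buffer;
	uint32_t length;
};

static void gpd_publish(struct gpd *g, uint32_t buf, uint32_t len)
{
	g->buffer = buf;
	g->length = len;
	/* prevent reorder: make sure GPD's HWO is set last */
	atomic_thread_fence(memory_order_release);
	g->dw0_info |= GPD_FLAGS_HWO;
}
```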
+4-2
drivers/usb/serial/cp210x.c
···1635163516361636 /* 2 banks of GPIO - One for the pins taken from each serial port */16371637 if (intf_num == 0) {16381638+ priv->gc.ngpio = 2;16391639+16381640 if (mode.eci == CP210X_PIN_MODE_MODEM) {16391641 /* mark all GPIOs of this interface as reserved */16401642 priv->gpio_altfunc = 0xff;···16471645 priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &16481646 CP210X_ECI_GPIO_MODE_MASK) >>16491647 CP210X_ECI_GPIO_MODE_OFFSET);16501650- priv->gc.ngpio = 2;16511648 } else if (intf_num == 1) {16491649+ priv->gc.ngpio = 3;16501650+16521651 if (mode.sci == CP210X_PIN_MODE_MODEM) {16531652 /* mark all GPIOs of this interface as reserved */16541653 priv->gpio_altfunc = 0xff;···16601657 priv->gpio_pushpull = (u8)((le16_to_cpu(config.gpio_mode) &16611658 CP210X_SCI_GPIO_MODE_MASK) >>16621659 CP210X_SCI_GPIO_MODE_OFFSET);16631663- priv->gc.ngpio = 3;16641660 } else {16651661 return -ENODEV;16661662 }
···17321732 }17331733 return root;17341734fail:17351735+ /*17361736+ * If our caller provided us an anonymous device, then it's his17371737+ * responsibility to free it in case we fail. So we have to set our17381738+ * root's anon_dev to 0 to avoid a double free, once by btrfs_put_root()17391739+ * and once again by our caller.17401740+ */17411741+ if (anon_dev)17421742+ root->anon_dev = 0;17351743 btrfs_put_root(root);17361744 return ERR_PTR(ret);17371745}
···66116611 if (test_bit(EXTENT_BUFFER_UPTODATE, &eb->bflags))66126612 return 0;6613661366146614+ /*66156615+ * We could have had EXTENT_BUFFER_UPTODATE cleared by the write66166616+ * operation, which could potentially still be in flight. In this case66176617+ * we simply want to return an error.66186618+ */66196619+ if (unlikely(test_bit(EXTENT_BUFFER_WRITE_ERR, &eb->bflags)))66206620+ return -EIO;66216621+66146622 if (eb->fs_info->sectorsize < PAGE_SIZE)66156623 return read_extent_buffer_subpage(eb, wait, mirror_num);66166624
···617617 * Since we don't abort the transaction in this case, free the618618 * tree block so that we don't leak space and leave the619619 * filesystem in an inconsistent state (an extent item in the620620- * extent tree without backreferences). Also no need to have621621- * the tree block locked since it is not in any tree at this622622- * point, so no other task can find it and use it.620620+ * extent tree with a backreference for a root that does not621621+ * exist).623622 */624624- btrfs_free_tree_block(trans, root, leaf, 0, 1);623623+ btrfs_tree_lock(leaf);624624+ btrfs_clean_tree_block(leaf);625625+ btrfs_tree_unlock(leaf);626626+ btrfs_free_tree_block(trans, objectid, leaf, 0, 1);625627 free_extent_buffer(leaf);626628 goto fail;627629
···11811181 parent_objectid, victim_name,11821182 victim_name_len);11831183 if (ret < 0) {11841184+ kfree(victim_name);11841185 return ret;11851186 } else if (!ret) {11861187 ret = -ENOENT;···39783977 goto done;39793978 }39803979 if (btrfs_header_generation(path->nodes[0]) != trans->transid) {39803980+ ctx->last_dir_item_offset = min_key.offset;39813981 ret = overwrite_item(trans, log, dst_path,39823982 path->nodes[0], path->slots[0],39833983 &min_key);
+4-2
fs/btrfs/volumes.c
···1370137013711371 bytenr_orig = btrfs_sb_offset(0);13721372 ret = btrfs_sb_log_location_bdev(bdev, 0, READ, &bytenr);13731373- if (ret)13741374- return ERR_PTR(ret);13731373+ if (ret) {13741374+ device = ERR_PTR(ret);13751375+ goto error_bdev_put;13761376+ }1375137713761378 disk_super = btrfs_read_disk_super(bdev, bytenr, bytenr_orig);13771379 if (IS_ERR(disk_super)) {
+8-8
fs/ceph/caps.c
···43504350{43514351 struct ceph_mds_client *mdsc = ceph_sb_to_mdsc(ci->vfs_inode.i_sb);43524352 int bits = (fmode << 1) | 1;43534353- bool is_opened = false;43534353+ bool already_opened = false;43544354 int i;4355435543564356 if (count == 1)···4358435843594359 spin_lock(&ci->i_ceph_lock);43604360 for (i = 0; i < CEPH_FILE_MODE_BITS; i++) {43614361- if (bits & (1 << i))43624362- ci->i_nr_by_mode[i] += count;43634363-43644361 /*43654365- * If any of the mode ref is larger than 1,43624362+ * If any of the mode ref is larger than 0,43664363 * that means it has been already opened by43674364 * others. Just skip checking the PIN ref.43684365 */43694369- if (i && ci->i_nr_by_mode[i] > 1)43704370- is_opened = true;43664366+ if (i && ci->i_nr_by_mode[i])43674367+ already_opened = true;43684368+43694369+ if (bits & (1 << i))43704370+ ci->i_nr_by_mode[i] += count;43714371 }4372437243734373- if (!is_opened)43734373+ if (!already_opened)43744374 percpu_counter_inc(&mdsc->metric.opened_inodes);43754375 spin_unlock(&ci->i_ceph_lock);43764376}
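The ceph fix above is an ordering bug in plain form: whether the inode was "already opened" must be decided from the existing ref counts before this opener's counts are added, and any non-zero non-PIN ref (not only `> 1`) means someone else holds it open. A sketch of the corrected loop (array size and names are illustrative; the return value models the "count a newly opened inode" decision feeding `percpu_counter_inc()`):

```c
#include <stdbool.h>

#define FILE_MODE_BITS 4	/* PIN plus the file-mode refs, as in ceph */

/* Returns true when this open should tick the opened-inodes metric. */
static bool track_open(int nr_by_mode[FILE_MODE_BITS], int bits, int count)
{
	bool already_opened = false;
	int i;

	for (i = 0; i < FILE_MODE_BITS; i++) {
		/* check before adding, skipping the PIN ref at i == 0 */
		if (i && nr_by_mode[i])
			already_opened = true;
		if (bits & (1 << i))
			nr_by_mode[i] += count;
	}
	return !already_opened;
}
```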
···30643064 (cifs_sb->ctx->rsize > server->ops->negotiate_rsize(tcon, ctx)))30653065 cifs_sb->ctx->rsize = server->ops->negotiate_rsize(tcon, ctx);3066306630673067+ /*30683068+ * The cookie is initialized from volume info returned above.30693069+ * Inside cifs_fscache_get_super_cookie it checks30703070+ * that we do not get super cookie twice.30713071+ */30723072+ cifs_fscache_get_super_cookie(tcon);30733073+30673074out:30683075 mnt_ctx->server = server;30693076 mnt_ctx->ses = ses;
+37-1
fs/cifs/fs_context.c
···435435}436436437437/*438438+ * Remove duplicate path delimiters. Windows is supposed to do that439439+ * but there are some bugs that prevent rename from working if there are440440+ * multiple delimiters.441441+ *442442+ * Returns a sanitized duplicate of @path. The caller is responsible for443443+ * cleaning up the original.444444+ */445445+#define IS_DELIM(c) ((c) == '/' || (c) == '\\')446446+static char *sanitize_path(char *path)447447+{448448+ char *cursor1 = path, *cursor2 = path;449449+450450+ /* skip all prepended delimiters */451451+ while (IS_DELIM(*cursor1))452452+ cursor1++;453453+454454+ /* copy the first letter */455455+ *cursor2 = *cursor1;456456+457457+ /* copy the remainder... */458458+ while (*(cursor1++)) {459459+ /* ... skipping all duplicated delimiters */460460+ if (IS_DELIM(*cursor1) && IS_DELIM(*cursor2))461461+ continue;462462+ *(++cursor2) = *cursor1;463463+ }464464+465465+ /* if the last character is a delimiter, skip it */466466+ if (IS_DELIM(*(cursor2 - 1)))467467+ cursor2--;468468+469469+ *(cursor2) = '\0';470470+ return kstrdup(path, GFP_KERNEL);471471+}472472+473473+/*438474 * Parse a devname into substrings and populate the ctx->UNC and ctx->prepath439475 * fields with the result. Returns 0 on success and an error otherwise440476 * (e.g. ENOMEM or EINVAL)···529493 if (!*pos)530494 return 0;531495532532- ctx->prepath = kstrdup(pos, GFP_KERNEL);496496+ ctx->prepath = sanitize_path(pos);533497 if (!ctx->prepath)534498 return -ENOMEM;535499
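The `sanitize_path()` logic added above is self-contained string processing and ports directly to userspace: skip leading delimiters, collapse runs of `/` or `\` to one, and drop a trailing delimiter. A testable copy (the kernel version `kstrdup()`s the result; this sketch edits the caller's buffer in place and adds a guard for the all-delimiter input, which the mount path parser never sees):

```c
#include <stdbool.h>

static bool is_delim(char c)
{
	return c == '/' || c == '\\';
}

static void sanitize_path(char *path)
{
	char *cursor1 = path, *cursor2 = path;

	/* skip all prepended delimiters */
	while (is_delim(*cursor1))
		cursor1++;

	/* copy the first letter */
	*cursor2 = *cursor1;

	/* copy the remainder, skipping duplicated delimiters */
	while (*(cursor1++)) {
		if (is_delim(*cursor1) && is_delim(*cursor2))
			continue;
		*(++cursor2) = *cursor1;
	}

	/* if the last character is a delimiter, drop it
	 * (the cursor2 > path guard is an addition for this sketch) */
	if (cursor2 > path && is_delim(*(cursor2 - 1)))
		cursor2--;

	*cursor2 = '\0';
}
```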
-13
fs/cifs/inode.c
···13561356 goto out;13571357 }1358135813591359-#ifdef CONFIG_CIFS_FSCACHE13601360- /* populate tcon->resource_id */13611361- tcon->resource_id = CIFS_I(inode)->uniqueid;13621362-#endif13631363-13641359 if (rc && tcon->pipe) {13651360 cifs_dbg(FYI, "ipc connection - fake read inode\n");13661361 spin_lock(&inode->i_lock);···13701375 iget_failed(inode);13711376 inode = ERR_PTR(rc);13721377 }13731373-13741374- /*13751375- * The cookie is initialized from volume info returned above.13761376- * Inside cifs_fscache_get_super_cookie it checks13771377- * that we do not get super cookie twice.13781378- */13791379- cifs_fscache_get_super_cookie(tcon);13801380-13811378out:13821379 kfree(path);13831380 free_xid(xid);
+56-16
fs/file.c
···841841 spin_unlock(&files->file_lock);842842}843843844844+static inline struct file *__fget_files_rcu(struct files_struct *files,845845+ unsigned int fd, fmode_t mask, unsigned int refs)846846+{847847+ for (;;) {848848+ struct file *file;849849+ struct fdtable *fdt = rcu_dereference_raw(files->fdt);850850+ struct file __rcu **fdentry;851851+852852+ if (unlikely(fd >= fdt->max_fds))853853+ return NULL;854854+855855+ fdentry = fdt->fd + array_index_nospec(fd, fdt->max_fds);856856+ file = rcu_dereference_raw(*fdentry);857857+ if (unlikely(!file))858858+ return NULL;859859+860860+ if (unlikely(file->f_mode & mask))861861+ return NULL;862862+863863+ /*864864+ * Ok, we have a file pointer. However, because we do865865+ * this all locklessly under RCU, we may be racing with866866+ * that file being closed.867867+ *868868+ * Such a race can take two forms:869869+ *870870+ * (a) the file ref already went down to zero,871871+ * and get_file_rcu_many() fails. Just try872872+ * again:873873+ */874874+ if (unlikely(!get_file_rcu_many(file, refs)))875875+ continue;876876+877877+ /*878878+ * (b) the file table entry has changed under us.879879+ * Note that we don't need to re-check the 'fdt->fd'880880+ * pointer having changed, because it always goes881881+ * hand-in-hand with 'fdt'.882882+ *883883+ * If so, we need to put our refs and try again.884884+ */885885+ if (unlikely(rcu_dereference_raw(files->fdt) != fdt) ||886886+ unlikely(rcu_dereference_raw(*fdentry) != file)) {887887+ fput_many(file, refs);888888+ continue;889889+ }890890+891891+ /*892892+ * Ok, we have a ref to the file, and checked that it893893+ * still exists.894894+ */895895+ return file;896896+ }897897+}898898+844899static struct file *__fget_files(struct files_struct *files, unsigned int fd,845900 fmode_t mask, unsigned int refs)846901{847902 struct file *file;848903849904 rcu_read_lock();850850-loop:851851- file = files_lookup_fd_rcu(files, fd);852852- if (file) {853853- /* File object ref couldn't be 
taken.854854- * dup2() atomicity guarantee is the reason855855- * we loop to catch the new file (or NULL pointer)856856- */857857- if (file->f_mode & mask)858858- file = NULL;859859- else if (!get_file_rcu_many(file, refs))860860- goto loop;861861- else if (files_lookup_fd_raw(files, fd) != file) {862862- fput_many(file, refs);863863- goto loop;864864- }865865- }905905+ file = __fget_files_rcu(files, fd, mask, refs);866906 rcu_read_unlock();867907868908 return file;
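The two retry conditions in `__fget_files_rcu()` — (a) the file's refcount already hit zero, (b) the table slot changed underneath us — can be modelled without RCU. A single-threaded sketch (`struct fake_file`, `fget_model` and a bare pointer slot stand in for the fdtable; in the kernel, a concurrent close updates the slot, which is what ends the retry loop):

```c
#include <stdbool.h>
#include <stddef.h>

struct fake_file {
	int f_count;
};

/* get_file_rcu(): take a ref unless the count already dropped to 0 */
static bool get_file_ref(struct fake_file *f)
{
	if (f->f_count == 0)
		return false;
	f->f_count++;
	return true;
}

static struct fake_file *fget_model(struct fake_file **slot)
{
	for (;;) {
		struct fake_file *file = *slot;

		if (!file)
			return NULL;
		if (!get_file_ref(file))
			continue;	/* (a) ref hit zero: look again */
		if (*slot != file) {	/* (b) slot changed under us */
			file->f_count--;	/* undo our ref, retry */
			continue;
		}
		return file;		/* valid ref on a live file */
	}
}
```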
···17871787MODULE_AUTHOR("Damien Le Moal");17881788MODULE_DESCRIPTION("Zone file system for zoned block devices");17891789MODULE_LICENSE("GPL");17901790+MODULE_ALIAS_FS("zonefs");17901791module_init(zonefs_init);17911792module_exit(zonefs_exit);
···19371937 * @udp_tunnel_nic: UDP tunnel offload state19381938 * @xdp_state: stores info on attached XDP BPF programs19391939 *19401940- * @nested_level: Used as as a parameter of spin_lock_nested() of19401940+ * @nested_level: Used as a parameter of spin_lock_nested() of19411941 * dev->addr_list_lock.19421942 * @unlink_list: As netif_addr_lock() can be called recursively,19431943 * keep a list of interfaces to be deleted.
···195195 * @offset: offset of buffer in user space196196 * @pages: locked pages from userspace197197 * @num_pages: number of locked pages198198- * @dmabuf: dmabuf used to for exporting to user space198198+ * @refcount: reference counter199199 * @flags: defined by TEE_SHM_* in tee_drv.h200200 * @id: unique id of a shared memory object on this device, shared201201 * with user space···214214 unsigned int offset;215215 struct page **pages;216216 size_t num_pages;217217- struct dma_buf *dmabuf;217217+ refcount_t refcount;218218 u32 flags;219219 int id;220220 u64 sec_world_id;
+23-2
include/linux/virtio_net.h
···77#include <uapi/linux/udp.h>88#include <uapi/linux/virtio_net.h>991010+static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type)1111+{1212+ switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {1313+ case VIRTIO_NET_HDR_GSO_TCPV4:1414+ return protocol == cpu_to_be16(ETH_P_IP);1515+ case VIRTIO_NET_HDR_GSO_TCPV6:1616+ return protocol == cpu_to_be16(ETH_P_IPV6);1717+ case VIRTIO_NET_HDR_GSO_UDP:1818+ return protocol == cpu_to_be16(ETH_P_IP) ||1919+ protocol == cpu_to_be16(ETH_P_IPV6);2020+ default:2121+ return false;2222+ }2323+}2424+1025static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,1126 const struct virtio_net_hdr *hdr)1227{2828+ if (skb->protocol)2929+ return 0;3030+1331 switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {1432 case VIRTIO_NET_HDR_GSO_TCPV4:1533 case VIRTIO_NET_HDR_GSO_UDP:···10688 if (!skb->protocol) {10789 __be16 protocol = dev_parse_header_protocol(skb);10890109109- virtio_net_hdr_set_proto(skb, hdr);110110- if (protocol && protocol != skb->protocol)9191+ if (!protocol)9292+ virtio_net_hdr_set_proto(skb, hdr);9393+ else if (!virtio_net_hdr_match_proto(protocol, hdr->gso_type))11194 return -EINVAL;9595+ else9696+ skb->protocol = protocol;11297 }11398retry:11499 if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
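The `virtio_net_hdr_match_proto()` helper added above is a pure mapping from GSO type to permitted L3 protocol. A host-byte-order re-statement (the kernel compares `cpu_to_be16()` values; constants below match the uapi headers, the function name is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

#define ETH_P_IP			0x0800
#define ETH_P_IPV6			0x86DD
#define VIRTIO_NET_HDR_GSO_TCPV4	1
#define VIRTIO_NET_HDR_GSO_UDP		3
#define VIRTIO_NET_HDR_GSO_TCPV6	4
#define VIRTIO_NET_HDR_GSO_ECN		0x80

/* A parsed header protocol that disagrees with the announced gso_type
 * is rejected instead of silently trusted. */
static bool gso_type_matches_proto(uint16_t protocol, uint8_t gso_type)
{
	switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
	case VIRTIO_NET_HDR_GSO_TCPV4:
		return protocol == ETH_P_IP;
	case VIRTIO_NET_HDR_GSO_TCPV6:
		return protocol == ETH_P_IPV6;
	case VIRTIO_NET_HDR_GSO_UDP:
		return protocol == ETH_P_IP || protocol == ETH_P_IPV6;
	default:
		return false;
	}
}
```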
···136136 * MPTCP_EVENT_REMOVED: token, rem_id137137 * An address has been lost by the peer.138138 *139139- * MPTCP_EVENT_SUB_ESTABLISHED: token, family, saddr4 | saddr6,140140- * daddr4 | daddr6, sport, dport, backup,141141- * if_idx [, error]139139+ * MPTCP_EVENT_SUB_ESTABLISHED: token, family, loc_id, rem_id,140140+ * saddr4 | saddr6, daddr4 | daddr6, sport,141141+ * dport, backup, if_idx [, error]142142 * A new subflow has been established. 'error' should not be set.143143 *144144- * MPTCP_EVENT_SUB_CLOSED: token, family, saddr4 | saddr6, daddr4 | daddr6,145145- * sport, dport, backup, if_idx [, error]144144+ * MPTCP_EVENT_SUB_CLOSED: token, family, loc_id, rem_id, saddr4 | saddr6,145145+ * daddr4 | daddr6, sport, dport, backup, if_idx146146+ * [, error]146147 * A subflow has been closed. An error (copy of sk_err) could be set if an147148 * error has been detected for this subflow.148149 *149149- * MPTCP_EVENT_SUB_PRIORITY: token, family, saddr4 | saddr6, daddr4 | daddr6,150150- * sport, dport, backup, if_idx [, error]151151- * The priority of a subflow has changed. 'error' should not be set.150150+ * MPTCP_EVENT_SUB_PRIORITY: token, family, loc_id, rem_id, saddr4 | saddr6,151151+ * daddr4 | daddr6, sport, dport, backup, if_idx152152+ * [, error]153153+ * The priority of a subflow has changed. 'error' should not be set.152154 */153155enum mptcp_event_type {154156 MPTCP_EVENT_UNSPEC = 0,
+3-3
include/uapi/linux/nfc.h
···263263#define NFC_SE_ENABLED 0x1264264265265struct sockaddr_nfc {266266- sa_family_t sa_family;266266+ __kernel_sa_family_t sa_family;267267 __u32 dev_idx;268268 __u32 target_idx;269269 __u32 nfc_protocol;···271271272272#define NFC_LLCP_MAX_SERVICE_NAME 63273273struct sockaddr_nfc_llcp {274274- sa_family_t sa_family;274274+ __kernel_sa_family_t sa_family;275275 __u32 dev_idx;276276 __u32 target_idx;277277 __u32 nfc_protocol;278278 __u8 dsap; /* Destination SAP, if known */279279 __u8 ssap; /* Source SAP to be bound to */280280 char service_name[NFC_LLCP_MAX_SERVICE_NAME]; /* Service name URI */;281281- size_t service_name_len;281281+ __kernel_size_t service_name_len;282282};283283284284/* NFC socket protocols */
···718718{719719 int rc = 0;720720 struct sk_buff *skb;721721- static unsigned int failed = 0;721721+ unsigned int failed = 0;722722723723 /* NOTE: kauditd_thread takes care of all our locking, we just use724724 * the netlink info passed to us (e.g. sk and portid) */···735735 continue;736736 }737737738738+retry:738739 /* grab an extra skb reference in case of error */739740 skb_get(skb);740741 rc = netlink_unicast(sk, skb, portid, 0);741742 if (rc < 0) {742742- /* fatal failure for our queue flush attempt? */743743+ /* send failed - try a few times unless fatal error */743744 if (++failed >= retry_limit ||744745 rc == -ECONNREFUSED || rc == -EPERM) {745745- /* yes - error processing for the queue */746746 sk = NULL;747747 if (err_hook)748748 (*err_hook)(skb);749749- if (!skb_hook)750750- goto out;751751- /* keep processing with the skb_hook */749749+ if (rc == -EAGAIN)750750+ rc = 0;751751+ /* continue to drain the queue */752752 continue;753753 } else754754- /* no - requeue to preserve ordering */755755- skb_queue_head(queue, skb);754754+ goto retry;756755 } else {757757- /* it worked - drop the extra reference and continue */756756+ /* skb sent - drop the extra reference and continue */758757 consume_skb(skb);759758 failed = 0;760759 }761760 }762761763763-out:764762 return (rc >= 0 ? 0 : rc);765763}766764···16071609 audit_panic("cannot initialize netlink socket in namespace");16081610 return -ENOMEM;16091611 }16101610- aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT;16121612+ /* limit the timeout in case auditd is blocked/stopped */16131613+ aunet->sk->sk_sndtimeo = HZ / 10;1611161416121615 return 0;16131616}
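The audit change above reworks queue draining: a transient send failure is retried in place (preserving ordering) up to a limit, while a fatal error drops the message and continues draining the rest of the queue. A hedged, generic sketch of that retry policy, with an invented `send_fn` contract (0 = success, -1 = transient failure, -2 = fatal) standing in for `netlink_unicast()`:

```c
#include <stdbool.h>

enum { RETRY_LIMIT = 5 };	/* illustrative, like audit's retry_limit */

/* Retry a transient failure in place a few times; on a fatal error or
 * exhausted retries, report it and let the caller keep draining. */
static int send_with_retry(int (*send_fn)(void *msg), void *msg, bool *fatal)
{
	int failed = 0;

	*fatal = false;
	for (;;) {
		int rc = send_fn(msg);

		if (rc == 0)
			return 0;	/* sent: caller moves to the next message */
		if (rc == -2 || ++failed >= RETRY_LIMIT) {
			*fatal = true;	/* give up on this message, keep draining */
			return rc;
		}
		/* transient error: retry the same message, preserving order */
	}
}

/* Toy senders for demonstration only. */
static int attempts;
static int flaky_send(void *msg) { (void)msg; return ++attempts < 3 ? -1 : 0; }
static int dead_send(void *msg)  { (void)msg; return -2; }
```

This matches the shape of the patched loop: `goto retry` on a recoverable error replaces the old requeue-at-head dance, and fatal errors no longer abort the whole flush.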
+36-17
kernel/bpf/verifier.c
···13661366 reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off);13671367}1368136813691369+static bool __reg32_bound_s64(s32 a)13701370+{13711371+ return a >= 0 && a <= S32_MAX;13721372+}13731373+13691374static void __reg_assign_32_into_64(struct bpf_reg_state *reg)13701375{13711376 reg->umin_value = reg->u32_min_value;13721377 reg->umax_value = reg->u32_max_value;13731373- /* Attempt to pull 32-bit signed bounds into 64-bit bounds13741374- * but must be positive otherwise set to worse case bounds13751375- * and refine later from tnum.13781378+13791379+ /* Attempt to pull 32-bit signed bounds into 64-bit bounds but must13801380+ * be positive otherwise set to worse case bounds and refine later13811381+ * from tnum.13761382 */13771377- if (reg->s32_min_value >= 0 && reg->s32_max_value >= 0)13781378- reg->smax_value = reg->s32_max_value;13791379- else13801380- reg->smax_value = U32_MAX;13811381- if (reg->s32_min_value >= 0)13831383+ if (__reg32_bound_s64(reg->s32_min_value) &&13841384+ __reg32_bound_s64(reg->s32_max_value)) {13821385 reg->smin_value = reg->s32_min_value;13831383- else13861386+ reg->smax_value = reg->s32_max_value;13871387+ } else {13841388 reg->smin_value = 0;13891389+ reg->smax_value = U32_MAX;13901390+ }13851391}1386139213871393static void __reg_combine_32_into_64(struct bpf_reg_state *reg)···23852379 */23862380 if (insn->src_reg != BPF_REG_FP)23872381 return 0;23882388- if (BPF_SIZE(insn->code) != BPF_DW)23892389- return 0;2390238223912383 /* dreg = *(u64 *)[fp - off] was a fill from the stack.23922384 * that [fp - off] slot contains scalar that needs to be···24062402 return -ENOTSUPP;24072403 /* scalars can only be spilled into stack */24082404 if (insn->dst_reg != BPF_REG_FP)24092409- return 0;24102410- if (BPF_SIZE(insn->code) != BPF_DW)24112405 return 0;24122406 spi = (-insn->off - 1) / BPF_REG_SIZE;24132407 if (spi >= 64) {···4553455145544552 if (insn->imm == BPF_CMPXCHG) {45554553 /* Check comparison of R0 with memory location 
*/45564556- err = check_reg_arg(env, BPF_REG_0, SRC_OP);45544554+ const u32 aux_reg = BPF_REG_0;45554555+45564556+ err = check_reg_arg(env, aux_reg, SRC_OP);45574557 if (err)45584558 return err;45594559+45604560+ if (is_pointer_value(env, aux_reg)) {45614561+ verbose(env, "R%d leaks addr into mem\n", aux_reg);45624562+ return -EACCES;45634563+ }45594564 }4560456545614566 if (is_pointer_value(env, insn->src_reg)) {···45974588 load_reg = -1;45984589 }4599459046004600- /* check whether we can read the memory */45914591+ /* Check whether we can read the memory, with second call for fetch45924592+ * case to simulate the register fill.45934593+ */46014594 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,46024602- BPF_SIZE(insn->code), BPF_READ, load_reg, true);45954595+ BPF_SIZE(insn->code), BPF_READ, -1, true);45964596+ if (!err && load_reg >= 0)45974597+ err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,45984598+ BPF_SIZE(insn->code), BPF_READ, load_reg,45994599+ true);46034600 if (err)46044601 return err;4605460246064606- /* check whether we can write into the same memory */46034603+ /* Check whether we can write into the same memory. */46074604 err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,46084605 BPF_SIZE(insn->code), BPF_WRITE, -1, true);46094606 if (err)···83238308 insn->dst_reg);83248309 }83258310 zext_32_to_64(dst_reg);83118311+83128312+ __update_reg_bounds(dst_reg);83138313+ __reg_deduce_bounds(dst_reg);83148314+ __reg_bound_offset(dst_reg);83268315 }83278316 } else {83288317 /* case: R = imm
+11
kernel/crash_core.c
···6677#include <linux/buildid.h>88#include <linux/crash_core.h>99+#include <linux/init.h>910#include <linux/utsname.h>1011#include <linux/vmalloc.h>1112···295294 return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base,296295 "crashkernel=", suffix_tbl[SUFFIX_LOW]);297296}297297+298298+/*299299+ * Add a dummy early_param handler to mark crashkernel= as a known command line300300+ * parameter and suppress incorrect warnings in init/main.c.301301+ */302302+static int __init parse_crashkernel_dummy(char *arg)303303+{304304+ return 0;305305+}306306+early_param("crashkernel", parse_crashkernel_dummy);298307299308Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,300309 void *data, size_t data_len)
+1-1
kernel/locking/rtmutex.c
···13801380 * - the VCPU on which owner runs is preempted13811381 */13821382 if (!owner->on_cpu || need_resched() ||13831383- rt_mutex_waiter_is_top_waiter(lock, waiter) ||13831383+ !rt_mutex_waiter_is_top_waiter(lock, waiter) ||13841384 vcpu_is_preempted(task_cpu(owner))) {13851385 res = false;13861386 break;
+9
kernel/signal.c
···41854185 ss_mode != 0))41864186 return -EINVAL;4187418741884188+ /*41894189+ * Return before taking any locks if no actual41904190+ * sigaltstack changes were requested.41914191+ */41924192+ if (t->sas_ss_sp == (unsigned long)ss_sp &&41934193+ t->sas_ss_size == ss_size &&41944194+ t->sas_ss_flags == ss_flags)41954195+ return 0;41964196+41884197 sigaltstack_lock();41894198 if (ss_mode == SS_DISABLE) {41904199 ss_size = 0;
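The sigaltstack hunk above adds a lock-free fast path: if the requested stack settings equal the current ones, return before taking `sigaltstack_lock()`. A sketch of that compare-before-lock pattern, using an invented struct and a trivial spinlock in place of the kernel's internals:

```c
#include <stdatomic.h>

/* Illustrative stand-in for the task's sigaltstack state. */
struct alt_stack {
	unsigned long sp;
	unsigned long size;
	int flags;
};

static struct alt_stack cur;
static atomic_flag cur_lock = ATOMIC_FLAG_INIT;

static int set_alt_stack(const struct alt_stack *req)
{
	/* Fast path: nothing would change, so skip the lock entirely.
	 * An unlocked read is acceptable here because a racing writer
	 * makes the outcome equivalent to some ordering of the updates. */
	if (req->sp == cur.sp && req->size == cur.size &&
	    req->flags == cur.flags)
		return 0;

	while (atomic_flag_test_and_set(&cur_lock))
		;	/* trivial spinlock standing in for sigaltstack_lock() */
	cur = *req;
	atomic_flag_clear(&cur_lock);
	return 0;
}
```

The payoff is the same as in the patch: repeated no-op calls (common when user space restores the same alternate stack on every signal return) never contend on the lock.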
···264264long inc_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v)265265{266266 struct ucounts *iter;267267+ long max = LONG_MAX;267268 long ret = 0;268269269270 for (iter = ucounts; iter; iter = iter->ns->ucounts) {270270- long max = READ_ONCE(iter->ns->ucount_max[type]);271271 long new = atomic_long_add_return(v, &iter->ucount[type]);272272 if (new < 0 || new > max)273273 ret = LONG_MAX;274274 else if (iter == ucounts)275275 ret = new;276276+ max = READ_ONCE(iter->ns->ucount_max[type]);276277 }277278 return ret;278279}···313312{314313 /* Caller must hold a reference to ucounts */315314 struct ucounts *iter;315315+ long max = LONG_MAX;316316 long dec, ret = 0;317317318318 for (iter = ucounts; iter; iter = iter->ns->ucounts) {319319- long max = READ_ONCE(iter->ns->ucount_max[type]);320319 long new = atomic_long_add_return(1, &iter->ucount[type]);321320 if (new < 0 || new > max)322321 goto unwind;323322 if (iter == ucounts)324323 ret = new;324324+ max = READ_ONCE(iter->ns->ucount_max[type]);325325 /*326326 * Grab an extra ucount reference for the caller when327327 * the rlimit count was previously 0.···341339 return 0;342340}343341344344-bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max)342342+bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long rlimit)345343{346344 struct ucounts *iter;347347- if (get_ucounts_value(ucounts, type) > max)348348- return true;345345+ long max = rlimit;346346+ if (rlimit > LONG_MAX)347347+ max = LONG_MAX;349348 for (iter = ucounts; iter; iter = iter->ns->ucounts) {350350- max = READ_ONCE(iter->ns->ucount_max[type]);351349 if (get_ucounts_value(iter, type) > max)352350 return true;351351+ max = READ_ONCE(iter->ns->ucount_max[type]);353352 }354353 return false;355354}
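The ucounts hunks above shift when `ucount_max` is read: `max` now starts at `LONG_MAX` and each node's configured limit is applied to the *next* level visited in the walk, not to the node itself. A mechanical user-space sketch of that traversal over a two-level chain (names and structure are illustrative, not the kernel's `struct ucounts`):

```c
#include <limits.h>
#include <stddef.h>

struct counter_ns {
	long count;
	long max;			/* limit this node imposes up the chain */
	struct counter_ns *parent;	/* NULL at the root */
};

/* Walk leaf -> root charging v at each level. Returns the new leaf
 * count, or LONG_MAX if any level exceeded the limit in effect for it.
 * The leaf itself is checked against LONG_MAX, mirroring the patch. */
static long charge(struct counter_ns *leaf, long v)
{
	long max = LONG_MAX;
	long ret = 0;

	for (struct counter_ns *it = leaf; it; it = it->parent) {
		long new = (it->count += v);

		if (new < 0 || new > max)
			ret = LONG_MAX;
		else if (it == leaf)
			ret = new;
		max = it->max;	/* applies to the next (parent) level */
	}
	return ret;
}
```

The same one-iteration shift appears in all three patched functions (`inc_rlimit_ucounts`, the get-and-charge path, and `is_ucounts_overlimit`), keeping the limit semantics consistent across them.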
+9-2
mm/damon/dbgfs.c
···353353 const char __user *buf, size_t count, loff_t *ppos)354354{355355 struct damon_ctx *ctx = file->private_data;356356+ struct damon_target *t, *next_t;356357 bool id_is_pid = true;357358 char *kbuf, *nrs;358359 unsigned long *targets;···398397 goto unlock_out;399398 }400399401401- /* remove targets with previously-set primitive */402402- damon_set_targets(ctx, NULL, 0);400400+ /* remove previously set targets */401401+ damon_for_each_target_safe(t, next_t, ctx) {402402+ if (targetid_is_pid(ctx))403403+ put_pid((struct pid *)t->id);404404+ damon_destroy_target(t);405405+ }403406404407 /* Configure the context for the address space type */405408 if (id_is_pid)···655650 if (!targetid_is_pid(ctx))656651 return;657652653653+ mutex_lock(&ctx->kdamond_lock);658654 damon_for_each_target_safe(t, next, ctx) {659655 put_pid((struct pid *)t->id);660656 damon_destroy_target(t);661657 }658658+ mutex_unlock(&ctx->kdamond_lock);662659}663660664661static struct damon_ctx *dbgfs_new_ctx(void)
···14701470 if (!(flags & MF_COUNT_INCREASED)) {14711471 res = get_hwpoison_page(p, flags);14721472 if (!res) {14731473- /*14741474- * Check "filter hit" and "race with other subpage."14751475- */14761473 lock_page(head);14771477- if (PageHWPoison(head)) {14781478- if ((hwpoison_filter(p) && TestClearPageHWPoison(p))14791479- || (p != head && TestSetPageHWPoison(head))) {14741474+ if (hwpoison_filter(p)) {14751475+ if (TestClearPageHWPoison(head))14801476 num_poisoned_pages_dec();14811481- unlock_page(head);14821482- return 0;14831483- }14771477+ unlock_page(head);14781478+ return 0;14841479 }14851480 unlock_page(head);14861481 res = MF_FAILED;···22342239 } else if (ret == 0) {22352240 if (soft_offline_free_page(page) && try_again) {22362241 try_again = false;22422242+ flags &= ~MF_COUNT_INCREASED;22372243 goto retry;22382244 }22392245 }
+1-2
mm/mempolicy.c
···21402140 * memory with both reclaim and compact as well.21412141 */21422142 if (!page && (gfp & __GFP_DIRECT_RECLAIM))21432143- page = __alloc_pages_node(hpage_node,21442144- gfp, order);21432143+ page = __alloc_pages(gfp, order, hpage_node, nmask);2145214421462145 goto out;21472146 }
+56-9
mm/vmscan.c
···10211021 unlock_page(page);10221022}1023102310241024+static bool skip_throttle_noprogress(pg_data_t *pgdat)10251025+{10261026+ int reclaimable = 0, write_pending = 0;10271027+ int i;10281028+10291029+ /*10301030+ * If kswapd is disabled, reschedule if necessary but do not10311031+ * throttle as the system is likely near OOM.10321032+ */10331033+ if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)10341034+ return true;10351035+10361036+ /*10371037+ * If there are a lot of dirty/writeback pages then do not10381038+ * throttle as throttling will occur when the pages cycle10391039+ * towards the end of the LRU if still under writeback.10401040+ */10411041+ for (i = 0; i < MAX_NR_ZONES; i++) {10421042+ struct zone *zone = pgdat->node_zones + i;10431043+10441044+ if (!populated_zone(zone))10451045+ continue;10461046+10471047+ reclaimable += zone_reclaimable_pages(zone);10481048+ write_pending += zone_page_state_snapshot(zone,10491049+ NR_ZONE_WRITE_PENDING);10501050+ }10511051+ if (2 * write_pending <= reclaimable)10521052+ return true;10531053+10541054+ return false;10551055+}10561056+10241057void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason)10251058{10261059 wait_queue_head_t *wqh = &pgdat->reclaim_wait[reason];···10891056 }1090105710911058 break;10591059+ case VMSCAN_THROTTLE_CONGESTED:10601060+ fallthrough;10921061 case VMSCAN_THROTTLE_NOPROGRESS:10931093- timeout = HZ/2;10621062+ if (skip_throttle_noprogress(pgdat)) {10631063+ cond_resched();10641064+ return;10651065+ }10661066+10671067+ timeout = 1;10681068+10941069 break;10951070 case VMSCAN_THROTTLE_ISOLATED:10961071 timeout = HZ/50;···33623321 if (!current_is_kswapd() && current_may_throttle() &&33633322 !sc->hibernation_mode &&33643323 test_bit(LRUVEC_CONGESTED, &target_lruvec->flags))33653365- reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);33243324+ reclaim_throttle(pgdat, VMSCAN_THROTTLE_CONGESTED);3366332533673326 if (should_continue_reclaim(pgdat, sc->nr_reclaimed - 
nr_reclaimed,33683327 sc))···34273386 }3428338734293388 /*34303430- * Do not throttle kswapd on NOPROGRESS as it will throttle on34313431- * VMSCAN_THROTTLE_WRITEBACK if there are too many pages under34323432- * writeback and marked for immediate reclaim at the tail of34333433- * the LRU.33893389+ * Do not throttle kswapd or cgroup reclaim on NOPROGRESS as it will33903390+ * throttle on VMSCAN_THROTTLE_WRITEBACK if there are too many pages33913391+ * under writeback and marked for immediate reclaim at the tail of the33923392+ * LRU.34343393 */34353435- if (current_is_kswapd())33943394+ if (current_is_kswapd() || cgroup_reclaim(sc))34363395 return;3437339634383397 /* Throttle if making no progress at high prioities. */34393439- if (sc->priority < DEF_PRIORITY - 2)33983398+ if (sc->priority == 1 && !sc->nr_reclaimed)34403399 reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS);34413400}34423401···34563415 unsigned long nr_soft_scanned;34573416 gfp_t orig_mask;34583417 pg_data_t *last_pgdat = NULL;34183418+ pg_data_t *first_pgdat = NULL;3459341934603420 /*34613421 * If the number of buffer_heads in the machine exceeds the maximum···35203478 /* need some check for avoid more shrink_zone() */35213479 }3522348034813481+ if (!first_pgdat)34823482+ first_pgdat = zone->zone_pgdat;34833483+35233484 /* See comment about same check for global reclaim above */35243485 if (zone->zone_pgdat == last_pgdat)35253486 continue;35263487 last_pgdat = zone->zone_pgdat;35273488 shrink_node(zone->zone_pgdat, sc);35283528- consider_reclaim_throttle(zone->zone_pgdat, sc);35293489 }34903490+34913491+ if (first_pgdat)34923492+ consider_reclaim_throttle(first_pgdat, sc);3530349335313494 /*35323495 * Restore to original mask to avoid the impact on the caller if we
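The core test added in `skip_throttle_noprogress()` above is the `2 * write_pending <= reclaimable` comparison: throttling is skipped unless more than half of the node's reclaimable pages are already dirty or under writeback, since such pages will self-throttle at the tail of the LRU anyway. A tiny sketch of just that heuristic (the helper name here is mine, not the kernel's):

```c
#include <stdbool.h>

/* Skip NOPROGRESS throttling unless write-pending pages are the
 * majority of reclaimable pages, i.e. write_pending > reclaimable / 2,
 * written multiplication-style to avoid integer-division rounding. */
static bool should_skip_throttle(long reclaimable, long write_pending)
{
	return 2 * write_pending <= reclaimable;
}
```

Writing the comparison as `2 * write_pending <= reclaimable` rather than `write_pending <= reclaimable / 2` avoids losing the odd-count case to truncating division.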
···12191219{12201220 struct ieee80211_sub_if_data *sdata = vif_to_sdata(txq->txq.vif);1221122112221222- if (local->in_reconfig)12221222+ /* In reconfig don't transmit now, but mark for waking later */12231223+ if (local->in_reconfig) {12241224+ set_bit(IEEE80211_TXQ_STOP_NETIF_TX, &txq->flags);12231225 return;12261226+ }1224122712251228 if (!check_sdata_in_driver(sdata))12261229 return;
+10-3
net/mac80211/mlme.c
···24522452 u16 tx_time)24532453{24542454 struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;24552455- u16 tid = ieee80211_get_tid(hdr);24562456- int ac = ieee80211_ac_from_tid(tid);24572457- struct ieee80211_sta_tx_tspec *tx_tspec = &ifmgd->tx_tspec[ac];24552455+ u16 tid;24562456+ int ac;24572457+ struct ieee80211_sta_tx_tspec *tx_tspec;24582458 unsigned long now = jiffies;24592459+24602460+ if (!ieee80211_is_data_qos(hdr->frame_control))24612461+ return;24622462+24632463+ tid = ieee80211_get_tid(hdr);24642464+ ac = ieee80211_ac_from_tid(tid);24652465+ tx_tspec = &ifmgd->tx_tspec[ac];2459246624602467 if (likely(!tx_tspec->admitted_time))24612468 return;
+1
net/mac80211/rx.c
···29442944 if (!fwd_skb)29452945 goto out;2946294629472947+ fwd_skb->dev = sdata->dev;29472948 fwd_hdr = (struct ieee80211_hdr *) fwd_skb->data;29482949 fwd_hdr->frame_control &= ~cpu_to_le16(IEEE80211_FCTL_RETRY);29492950 info = IEEE80211_SKB_CB(fwd_skb);
+12-9
net/mac80211/sta_info.c
···644644 /* check if STA exists already */645645 if (sta_info_get_bss(sdata, sta->sta.addr)) {646646 err = -EEXIST;647647- goto out_err;647647+ goto out_cleanup;648648 }649649650650 sinfo = kzalloc(sizeof(struct station_info), GFP_KERNEL);651651 if (!sinfo) {652652 err = -ENOMEM;653653- goto out_err;653653+ goto out_cleanup;654654 }655655656656 local->num_sta++;···667667668668 list_add_tail_rcu(&sta->list, &local->sta_list);669669670670+ /* update channel context before notifying the driver about state671671+ * change, this enables driver using the updated channel context right away.672672+ */673673+ if (sta->sta_state >= IEEE80211_STA_ASSOC) {674674+ ieee80211_recalc_min_chandef(sta->sdata);675675+ if (!sta->sta.support_p2p_ps)676676+ ieee80211_recalc_p2p_go_ps_allowed(sta->sdata);677677+ }678678+670679 /* notify driver */671680 err = sta_info_insert_drv_state(local, sdata, sta);672681 if (err)673682 goto out_remove;674683675684 set_sta_flag(sta, WLAN_STA_INSERTED);676676-677677- if (sta->sta_state >= IEEE80211_STA_ASSOC) {678678- ieee80211_recalc_min_chandef(sta->sdata);679679- if (!sta->sta.support_p2p_ps)680680- ieee80211_recalc_p2p_go_ps_allowed(sta->sdata);681681- }682685683686 /* accept BA sessions now */684687 clear_sta_flag(sta, WLAN_STA_BLOCK_BA);···709706 out_drop_sta:710707 local->num_sta--;711708 synchronize_net();709709+ out_cleanup:712710 cleanup_single_sta(sta);713713- out_err:714711 mutex_unlock(&local->sta_mtx);715712 kfree(sinfo);716713 rcu_read_lock();
+2
net/mac80211/sta_info.h
···176176 * @failed_bar_ssn: ssn of the last failed BAR tx attempt177177 * @bar_pending: BAR needs to be re-sent178178 * @amsdu: support A-MSDU withing A-MDPU179179+ * @ssn: starting sequence number of the session179180 *180181 * This structure's lifetime is managed by RCU, assignments to181182 * the array holding it must hold the aggregation mutex.···200199 u8 stop_initiator;201200 bool tx_stop;202201 u16 buf_size;202202+ u16 ssn;203203204204 u16 failed_bar_ssn;205205 bool bar_pending;
+5-5
net/mac80211/tx.c
···18221822 struct ieee80211_tx_info *info = IEEE80211_SKB_CB(tx->skb);18231823 ieee80211_tx_result res = TX_CONTINUE;1824182418251825+ if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL))18261826+ CALL_TXH(ieee80211_tx_h_rate_ctrl);18271827+18251828 if (unlikely(info->flags & IEEE80211_TX_INTFL_RETRANSMISSION)) {18261829 __skb_queue_tail(&tx->skbs, tx->skb);18271830 tx->skb = NULL;18281831 goto txh_done;18291832 }18301830-18311831- if (!ieee80211_hw_check(&tx->local->hw, HAS_RATE_CONTROL))18321832- CALL_TXH(ieee80211_tx_h_rate_ctrl);1833183318341834 CALL_TXH(ieee80211_tx_h_michael_mic_add);18351835 CALL_TXH(ieee80211_tx_h_sequence);···4191419141924192 ieee80211_aggr_check(sdata, sta, skb);4193419341944194+ sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift);41954195+41944196 if (sta) {41954197 struct ieee80211_fast_tx *fast_tx;41964196-41974197- sk_pacing_shift_update(skb->sk, sdata->local->hw.tx_sk_pacing_shift);4198419841994199 fast_tx = rcu_dereference(sta->fast_tx);42004200
+14-9
net/mac80211/util.c
···943943 struct ieee802_11_elems *elems)944944{945945 const void *data = elem->data + 1;946946- u8 len = elem->datalen - 1;946946+ u8 len;947947+948948+ if (!elem->datalen)949949+ return;950950+951951+ len = elem->datalen - 1;947952948953 switch (elem->data[0]) {949954 case WLAN_EID_EXT_HE_MU_EDCA:···20682063 chandef.chan = chan;2069206420702065 skb = ieee80211_probereq_get(&local->hw, src, ssid, ssid_len,20712071- 100 + ie_len);20662066+ local->scan_ies_len + ie_len);20722067 if (!skb)20732068 return NULL;20742069···26512646 mutex_unlock(&local->sta_mtx);26522647 }2653264826492649+ /*26502650+ * If this is for hw restart things are still running.26512651+ * We may want to change that later, however.26522652+ */26532653+ if (local->open_count && (!suspended || reconfig_due_to_wowlan))26542654+ drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART);26552655+26542656 if (local->in_reconfig) {26552657 local->in_reconfig = false;26562658 barrier();···26752663 ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,26762664 IEEE80211_QUEUE_STOP_REASON_SUSPEND,26772665 false);26782678-26792679- /*26802680- * If this is for hw restart things are still running.26812681- * We may want to change that later, however.26822682- */26832683- if (local->open_count && (!suspended || reconfig_due_to_wowlan))26842684- drv_reconfig_complete(local, IEEE80211_RECONFIG_TYPE_RESTART);2685266626862667 if (!suspended)26872668 return 0;
···525525 case TCP_NODELAY:526526 case TCP_THIN_LINEAR_TIMEOUTS:527527 case TCP_CONGESTION:528528- case TCP_ULP:529528 case TCP_CORK:530529 case TCP_KEEPIDLE:531530 case TCP_KEEPINTVL:
···11951195 }11961196 hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[cb->args[0]],11971197 hnnode) {11981198- if (NF_CT_DIRECTION(h) != IP_CT_DIR_ORIGINAL)11991199- continue;12001198 ct = nf_ct_tuplehash_to_ctrack(h);12011199 if (nf_ct_is_expired(ct)) {12021200 if (i < ARRAY_SIZE(nf_ct_evict) &&···12041206 }1205120712061208 if (!net_eq(net, nf_ct_net(ct)))12091209+ continue;12101210+12111211+ if (NF_CT_DIRECTION(h) != IP_CT_DIR_ORIGINAL)12071212 continue;1208121312091214 if (cb->args[1]) {
···868868869869 err = pep_accept_conn(newsk, skb);870870 if (err) {871871+ __sock_put(sk);871872 sock_put(newsk);872873 newsk = NULL;873874 goto drop;···947946 ret = -EBUSY;948947 else if (sk->sk_state == TCP_ESTABLISHED)949948 ret = -EISCONN;949949+ else if (!pn->pn_sk.sobject)950950+ ret = -EADDRNOTAVAIL;950951 else951952 ret = pep_sock_enable(sk, NULL, 0);952953 release_sock(sk);
+1
net/rds/connection.c
···253253 * should end up here, but if it254254 * does, reset/destroy the connection.255255 */256256+ kfree(conn->c_path);256257 kmem_cache_free(rds_conn_slab, conn);257258 conn = ERR_PTR(-EOPNOTSUPP);258259 goto out;
···27362736 q->tins = kvcalloc(CAKE_MAX_TINS, sizeof(struct cake_tin_data),27372737 GFP_KERNEL);27382738 if (!q->tins)27392739- goto nomem;27392739+ return -ENOMEM;2740274027412741 for (i = 0; i < CAKE_MAX_TINS; i++) {27422742 struct cake_tin_data *b = q->tins + i;···27662766 q->min_netlen = ~0;27672767 q->min_adjlen = ~0;27682768 return 0;27692769-27702770-nomem:27712771- cake_destroy(sch);27722772- return -ENOMEM;27732769}2774277027752771static int cake_dump(struct Qdisc *sch, struct sk_buff *skb)
+2-2
net/sched/sch_ets.c
···666666 }667667 }668668 for (i = q->nbands; i < oldbands; i++) {669669- qdisc_tree_flush_backlog(q->classes[i].qdisc);670670- if (i >= q->nstrict)669669+ if (i >= q->nstrict && q->classes[i].qdisc->q.qlen)671670 list_del(&q->classes[i].alist);671671+ qdisc_tree_flush_backlog(q->classes[i].qdisc);672672 }673673 q->nstrict = nstrict;674674 memcpy(q->prio2band, priomap, sizeof(priomap));
+2-1
net/sched/sch_frag.c
···11// SPDX-License-Identifier: GPL-2.0 OR Linux-OpenIB22#include <net/netlink.h>33#include <net/sch_generic.h>44+#include <net/pkt_sched.h>45#include <net/dst.h>56#include <net/ip.h>67#include <net/ip6_fib.h>···138137139138int sch_frag_xmit_hook(struct sk_buff *skb, int (*xmit)(struct sk_buff *skb))140139{141141- u16 mru = qdisc_skb_cb(skb)->mru;140140+ u16 mru = tc_skb_cb(skb)->mru;142141 int err;143142144143 if (mru && skb->len > mru + skb->dev->hard_header_len)
···184184}185185186186/* Final destructor for endpoint. */187187+static void sctp_endpoint_destroy_rcu(struct rcu_head *head)188188+{189189+ struct sctp_endpoint *ep = container_of(head, struct sctp_endpoint, rcu);190190+ struct sock *sk = ep->base.sk;191191+192192+ sctp_sk(sk)->ep = NULL;193193+ sock_put(sk);194194+195195+ kfree(ep);196196+ SCTP_DBG_OBJCNT_DEC(ep);197197+}198198+187199static void sctp_endpoint_destroy(struct sctp_endpoint *ep)188200{189201 struct sock *sk;···225213 if (sctp_sk(sk)->bind_hash)226214 sctp_put_port(sk);227215228228- sctp_sk(sk)->ep = NULL;229229- /* Give up our hold on the sock */230230- sock_put(sk);231231-232232- kfree(ep);233233- SCTP_DBG_OBJCNT_DEC(ep);216216+ call_rcu(&ep->rcu, sctp_endpoint_destroy_rcu);234217}235218236219/* Hold a reference to an endpoint. */237237-void sctp_endpoint_hold(struct sctp_endpoint *ep)220220+int sctp_endpoint_hold(struct sctp_endpoint *ep)238221{239239- refcount_inc(&ep->base.refcnt);222222+ return refcount_inc_not_zero(&ep->base.refcnt);240223}241224242225/* Release a reference to an endpoint and clean up if there are
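The endpoint hunk above changes `sctp_endpoint_hold()` to use `refcount_inc_not_zero()`: a lookup path may only take a reference if the object is not already on its way to being freed. A hedged user-space analog of that get/put contract using C11 atomics (the kernel's `refcount_t` additionally saturates and warns; this sketch shows only the not-zero semantics):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Take a reference only while the count is still non-zero; a zero count
 * means the final put already happened and the object is being torn
 * down, so the caller must skip it. */
static bool ref_get_not_zero(_Atomic long *refs)
{
	long old = atomic_load(refs);

	while (old != 0) {
		if (atomic_compare_exchange_weak(refs, &old, old + 1))
			return true;	/* reference taken */
		/* CAS failure reloaded 'old'; retry with the fresh value */
	}
	return false;
}

/* Returns true when the caller dropped the last reference and is now
 * responsible for freeing (the patched SCTP code defers that free via
 * call_rcu() so concurrent lookups stay safe). */
static bool ref_put(_Atomic long *refs)
{
	return atomic_fetch_sub(refs, 1) == 1;
}
```

This is exactly why `sctp_transport_traverse_process()` in the following socket.c hunk checks the return value of `sctp_endpoint_hold()` before invoking the callback: an endpoint whose association was peeled off may already be dying.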
+15-8
net/sctp/socket.c
···53385338}53395339EXPORT_SYMBOL_GPL(sctp_transport_lookup_process);5340534053415341-int sctp_for_each_transport(int (*cb)(struct sctp_transport *, void *),53425342- int (*cb_done)(struct sctp_transport *, void *),53435343- struct net *net, int *pos, void *p) {53415341+int sctp_transport_traverse_process(sctp_callback_t cb, sctp_callback_t cb_done,53425342+ struct net *net, int *pos, void *p)53435343+{53445344 struct rhashtable_iter hti;53455345 struct sctp_transport *tsp;53465346+ struct sctp_endpoint *ep;53465347 int ret;5347534853485349again:···5352535153535352 tsp = sctp_transport_get_idx(net, &hti, *pos + 1);53545353 for (; !IS_ERR_OR_NULL(tsp); tsp = sctp_transport_get_next(net, &hti)) {53555355- ret = cb(tsp, p);53565356- if (ret)53575357- break;53545354+ ep = tsp->asoc->ep;53555355+ if (sctp_endpoint_hold(ep)) { /* asoc can be peeled off */53565356+ ret = cb(ep, tsp, p);53575357+ if (ret)53585358+ break;53595359+ sctp_endpoint_put(ep);53605360+ }53585361 (*pos)++;53595362 sctp_transport_put(tsp);53605363 }53615364 sctp_transport_walk_stop(&hti);5362536553635366 if (ret) {53645364- if (cb_done && !cb_done(tsp, p)) {53675367+ if (cb_done && !cb_done(ep, tsp, p)) {53655368 (*pos)++;53695369+ sctp_endpoint_put(ep);53665370 sctp_transport_put(tsp);53675371 goto again;53685372 }53735373+ sctp_endpoint_put(ep);53695374 sctp_transport_put(tsp);53705375 }5371537653725377 return ret;53735378}53745374-EXPORT_SYMBOL_GPL(sctp_for_each_transport);53795379+EXPORT_SYMBOL_GPL(sctp_transport_traverse_process);5375538053765381/* 7.2.1 Association Status (SCTP_STATUS)53775382
+3-1
net/smc/af_smc.c
···194194 /* cleanup for a dangling non-blocking connect */195195 if (smc->connect_nonblock && sk->sk_state == SMC_INIT)196196 tcp_abort(smc->clcsock->sk, ECONNABORTED);197197- flush_work(&smc->connect_work);197197+198198+ if (cancel_work_sync(&smc->connect_work))199199+ sock_put(&smc->sk); /* sock_hold in smc_connect for passive closing */198200199201 if (sk->sk_state == SMC_LISTEN)200202 /* smc_close_non_accepted() is called and acquires
+5
net/smc/smc.h
···180180 u16 tx_cdc_seq; /* sequence # for CDC send */181181 u16 tx_cdc_seq_fin; /* sequence # - tx completed */182182 spinlock_t send_lock; /* protect wr_sends */183183+ atomic_t cdc_pend_tx_wr; /* number of pending tx CDC wqe184184+ * - inc when post wqe,185185+ * - dec on polled tx cqe186186+ */187187+ wait_queue_head_t cdc_pend_tx_wq; /* wakeup on no cdc_pend_tx_wr*/183188 struct delayed_work tx_work; /* retry of smc_cdc_msg_send */184189 u32 tx_off; /* base offset in peer rmb */185190
+24-28
net/smc/smc_cdc.c
···3131 struct smc_sock *smc;3232 int diff;33333434- if (!conn)3535- /* already dismissed */3636- return;3737-3834 smc = container_of(conn, struct smc_sock, conn);3935 bh_lock_sock(&smc->sk);4036 if (!wc_status) {···4751 conn);4852 conn->tx_cdc_seq_fin = cdcpend->ctrl_seq;4953 }5454+5555+ if (atomic_dec_and_test(&conn->cdc_pend_tx_wr) &&5656+ unlikely(wq_has_sleeper(&conn->cdc_pend_tx_wq)))5757+ wake_up(&conn->cdc_pend_tx_wq);5858+ WARN_ON(atomic_read(&conn->cdc_pend_tx_wr) < 0);5959+5060 smc_tx_sndbuf_nonfull(smc);5161 bh_unlock_sock(&smc->sk);5262}···109107 conn->tx_cdc_seq++;110108 conn->local_tx_ctrl.seqno = conn->tx_cdc_seq;111109 smc_host_msg_to_cdc((struct smc_cdc_msg *)wr_buf, conn, &cfed);110110+111111+ atomic_inc(&conn->cdc_pend_tx_wr);112112+ smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */113113+112114 rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend);113115 if (!rc) {114116 smc_curs_copy(&conn->rx_curs_confirmed, &cfed, conn);···120114 } else {121115 conn->tx_cdc_seq--;122116 conn->local_tx_ctrl.seqno = conn->tx_cdc_seq;117117+ atomic_dec(&conn->cdc_pend_tx_wr);123118 }124119125120 return rc;···143136 peer->token = htonl(local->token);144137 peer->prod_flags.failover_validation = 1;145138139139+ /* We need to set pend->conn here to make sure smc_cdc_tx_handler()140140+ * can handle properly141141+ */142142+ smc_cdc_add_pending_send(conn, pend);143143+144144+ atomic_inc(&conn->cdc_pend_tx_wr);145145+ smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */146146+146147 rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend);148148+ if (unlikely(rc))149149+ atomic_dec(&conn->cdc_pend_tx_wr);150150+147151 return rc;148152}149153···211193 return rc;212194}213195214214-static bool smc_cdc_tx_filter(struct smc_wr_tx_pend_priv *tx_pend,215215- unsigned long data)196196+void smc_cdc_wait_pend_tx_wr(struct smc_connection *conn)216197{217217- struct smc_connection *conn = (struct smc_connection 
*)data;218218- struct smc_cdc_tx_pend *cdc_pend =219219- (struct smc_cdc_tx_pend *)tx_pend;220220-221221- return cdc_pend->conn == conn;222222-}223223-224224-static void smc_cdc_tx_dismisser(struct smc_wr_tx_pend_priv *tx_pend)225225-{226226- struct smc_cdc_tx_pend *cdc_pend =227227- (struct smc_cdc_tx_pend *)tx_pend;228228-229229- cdc_pend->conn = NULL;230230-}231231-232232-void smc_cdc_tx_dismiss_slots(struct smc_connection *conn)233233-{234234- struct smc_link *link = conn->lnk;235235-236236- smc_wr_tx_dismiss_slots(link, SMC_CDC_MSG_TYPE,237237- smc_cdc_tx_filter, smc_cdc_tx_dismisser,238238- (unsigned long)conn);198198+ wait_event(conn->cdc_pend_tx_wq, !atomic_read(&conn->cdc_pend_tx_wr));239199}240200241201/* Send a SMC-D CDC header.
···647647 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {648648 struct smc_link *lnk = &lgr->lnk[i];649649650650- if (smc_link_usable(lnk))650650+ if (smc_link_sendable(lnk))651651 lnk->state = SMC_LNK_INACTIVE;652652 }653653 wake_up_all(&lgr->llc_msg_waiter);···11271127 smc_ism_unset_conn(conn);11281128 tasklet_kill(&conn->rx_tsklet);11291129 } else {11301130- smc_cdc_tx_dismiss_slots(conn);11301130+ smc_cdc_wait_pend_tx_wr(conn);11311131 if (current_work() != &conn->abort_work)11321132 cancel_work_sync(&conn->abort_work);11331133 }···12041204 smc_llc_link_clear(lnk, log);12051205 smcr_buf_unmap_lgr(lnk);12061206 smcr_rtoken_clear_link(lnk);12071207- smc_ib_modify_qp_reset(lnk);12071207+ smc_ib_modify_qp_error(lnk);12081208 smc_wr_free_link(lnk);12091209 smc_ib_destroy_queue_pair(lnk);12101210 smc_ib_dealloc_protection_domain(lnk);···13361336 else13371337 tasklet_unlock_wait(&conn->rx_tsklet);13381338 } else {13391339- smc_cdc_tx_dismiss_slots(conn);13391339+ smc_cdc_wait_pend_tx_wr(conn);13401340 }13411341 smc_lgr_unregister_conn(conn);13421342 smc_close_active_abort(smc);···14591459/* Called when an SMCR device is removed or the smc module is unloaded.14601460 * If smcibdev is given, all SMCR link groups using this device are terminated.14611461 * If smcibdev is NULL, all SMCR link groups are terminated.14621462+ *14631463+ * We must wait here for QPs to be destroyed before we destroy the CQs,14641464+ * or we won't receive any CQEs and cdc_pend_tx_wr cannot reach 0, thus14651465+ * smc_sock cannot be released.14621466 */14631467void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)14641468{14651469 struct smc_link_group *lgr, *lg;14661470 LIST_HEAD(lgr_free_list);14711471+ LIST_HEAD(lgr_linkdown_list);14671472 int i;1468147314691474 spin_lock_bh(&smc_lgr_list.lock);···14801475 list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) {14811476 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {14821477 if (lgr->lnk[i].smcibdev == smcibdev)14831483- smcr_link_down_cond_sched(&lgr->lnk[i]);14781478+ list_move_tail(&lgr->list, &lgr_linkdown_list);14841479 }14851480 }14861481 }···14901485 list_del_init(&lgr->list);14911486 smc_llc_set_termination_rsn(lgr, SMC_LLC_DEL_OP_INIT_TERM);14921487 __smc_lgr_terminate(lgr, false);14881488+ }14891489+14901490+ list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) {14911491+ for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {14921492+ if (lgr->lnk[i].smcibdev == smcibdev) {14931493+ mutex_lock(&lgr->llc_conf_mutex);14941494+ smcr_link_down_cond(&lgr->lnk[i]);14951495+ mutex_unlock(&lgr->llc_conf_mutex);14961496+ }14971497+ }14931498 }1494149914951500 if (smcibdev) {···16011586 if (!lgr || lnk->state == SMC_LNK_UNUSED || list_empty(&lgr->list))16021587 return;1603158816041604- smc_ib_modify_qp_reset(lnk);16051589 to_lnk = smc_switch_conns(lgr, lnk, true);16061590 if (!to_lnk) { /* no backup link available */16071591 smcr_link_clear(lnk, true);···18381824 conn->local_tx_ctrl.common.type = SMC_CDC_MSG_TYPE;18391825 conn->local_tx_ctrl.len = SMC_WR_TX_SIZE;18401826 conn->urg_state = SMC_URG_READ;18271827+ init_waitqueue_head(&conn->cdc_pend_tx_wq);18411828 INIT_WORK(&smc->conn.abort_work, smc_conn_abort_work);18421829 if (ini->is_smcd) {18431830 conn->rx_off = sizeof(struct smcd_cdc_msg);
···16301630 delllc.reason = htonl(rsn);1631163116321632 for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {16331633- if (!smc_link_usable(&lgr->lnk[i]))16331633+ if (!smc_link_sendable(&lgr->lnk[i]))16341634 continue;16351635 if (!smc_llc_send_message_wait(&lgr->lnk[i], &delllc))16361636 break;
+9-42
net/smc/smc_wr.c
···6262}63636464/* wait till all pending tx work requests on the given link are completed */6565-int smc_wr_tx_wait_no_pending_sends(struct smc_link *link)6565+void smc_wr_tx_wait_no_pending_sends(struct smc_link *link)6666{6767- if (wait_event_timeout(link->wr_tx_wait, !smc_wr_is_tx_pend(link),6868- SMC_WR_TX_WAIT_PENDING_TIME))6969- return 0;7070- else /* timeout */7171- return -EPIPE;6767+ wait_event(link->wr_tx_wait, !smc_wr_is_tx_pend(link));7268}73697470static inline int smc_wr_tx_find_pending_index(struct smc_link *link, u64 wr_id)···8387 struct smc_wr_tx_pend pnd_snd;8488 struct smc_link *link;8589 u32 pnd_snd_idx;8686- int i;87908891 link = wc->qp->qp_context;8992···123128 }124129125130 if (wc->status) {126126- for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) {127127- /* clear full struct smc_wr_tx_pend including .priv */128128- memset(&link->wr_tx_pends[i], 0,129129- sizeof(link->wr_tx_pends[i]));130130- memset(&link->wr_tx_bufs[i], 0,131131- sizeof(link->wr_tx_bufs[i]));132132- clear_bit(i, link->wr_tx_mask);133133- }134131 if (link->lgr->smc_version == SMC_V2) {135132 memset(link->wr_tx_v2_pend, 0,136133 sizeof(*link->wr_tx_v2_pend));···175188static inline int smc_wr_tx_get_free_slot_index(struct smc_link *link, u32 *idx)176189{177190 *idx = link->wr_tx_cnt;178178- if (!smc_link_usable(link))191191+ if (!smc_link_sendable(link))179192 return -ENOLINK;180193 for_each_clear_bit(*idx, link->wr_tx_mask, link->wr_tx_cnt) {181194 if (!test_and_set_bit(*idx, link->wr_tx_mask))···218231 } else {219232 rc = wait_event_interruptible_timeout(220233 link->wr_tx_wait,221221- !smc_link_usable(link) ||234234+ !smc_link_sendable(link) ||222235 lgr->terminating ||223236 (smc_wr_tx_get_free_slot_index(link, &idx) != -EBUSY),224237 SMC_WR_TX_WAIT_FREE_SLOT_TIME);···345358 unsigned long timeout)346359{347360 struct smc_wr_tx_pend *pend;361361+ u32 pnd_idx;348362 int rc;349363350364 pend = container_of(priv, struct smc_wr_tx_pend, priv);351365 pend->compl_requested = 1;352352- init_completion(&link->wr_tx_compl[pend->idx]);366366+ pnd_idx = pend->idx;367367+ init_completion(&link->wr_tx_compl[pnd_idx]);353368354369 rc = smc_wr_tx_send(link, priv);355370 if (rc)356371 return rc;357372 /* wait for completion by smc_wr_tx_process_cqe() */358373 rc = wait_for_completion_interruptible_timeout(359359- &link->wr_tx_compl[pend->idx], timeout);374374+ &link->wr_tx_compl[pnd_idx], timeout);360375 if (rc <= 0)361376 rc = -ENODATA;362377 if (rc > 0)···406417 break;407418 }408419 return rc;409409-}410410-411411-void smc_wr_tx_dismiss_slots(struct smc_link *link, u8 wr_tx_hdr_type,412412- smc_wr_tx_filter filter,413413- smc_wr_tx_dismisser dismisser,414414- unsigned long data)415415-{416416- struct smc_wr_tx_pend_priv *tx_pend;417417- struct smc_wr_rx_hdr *wr_tx;418418- int i;419419-420420- for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) {421421- wr_tx = (struct smc_wr_rx_hdr *)&link->wr_tx_bufs[i];422422- if (wr_tx->type != wr_tx_hdr_type)423423- continue;424424- tx_pend = &link->wr_tx_pends[i].priv;425425- if (filter(tx_pend, data))426426- dismisser(tx_pend);427427- }428420}429421430422/****************************** receive queue ********************************/···643673 smc_wr_wakeup_reg_wait(lnk);644674 smc_wr_wakeup_tx_wait(lnk);645675646646- if (smc_wr_tx_wait_no_pending_sends(lnk))647647- memset(lnk->wr_tx_mask, 0,648648- BITS_TO_LONGS(SMC_WR_BUF_CNT) *649649- sizeof(*lnk->wr_tx_mask));676676+ smc_wr_tx_wait_no_pending_sends(lnk);650677 wait_event(lnk->wr_reg_wait, (!atomic_read(&lnk->wr_reg_refcnt)));651678 wait_event(lnk->wr_tx_wait, (!atomic_read(&lnk->wr_tx_refcnt)));652679
+2-3
net/smc/smc_wr.h
···2222#define SMC_WR_BUF_CNT 16 /* # of ctrl buffers per link */23232424#define SMC_WR_TX_WAIT_FREE_SLOT_TIME (10 * HZ)2525-#define SMC_WR_TX_WAIT_PENDING_TIME (5 * HZ)26252726#define SMC_WR_TX_SIZE 44 /* actual size of wr_send data (<=SMC_WR_BUF_SIZE) */2827···61626263static inline bool smc_wr_tx_link_hold(struct smc_link *link)6364{6464- if (!smc_link_usable(link))6565+ if (!smc_link_sendable(link))6566 return false;6667 atomic_inc(&link->wr_tx_refcnt);6768 return true;···129130 smc_wr_tx_filter filter,130131 smc_wr_tx_dismisser dismisser,131132 unsigned long data);132132-int smc_wr_tx_wait_no_pending_sends(struct smc_link *link);133133+void smc_wr_tx_wait_no_pending_sends(struct smc_link *link);133134134135int smc_wr_rx_register_handler(struct smc_wr_rx_handler *handler);135136int smc_wr_rx_post_init(struct smc_link *link);
+4-4
net/tipc/crypto.c
···524524 return -EEXIST;525525526526 /* Allocate a new AEAD */527527- tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);527527+ tmp = kzalloc(sizeof(*tmp), GFP_ATOMIC);528528 if (unlikely(!tmp))529529 return -ENOMEM;530530···14741474 return -EEXIST;1475147514761476 /* Allocate crypto */14771477- c = kzalloc(sizeof(*c), GFP_KERNEL);14771477+ c = kzalloc(sizeof(*c), GFP_ATOMIC);14781478 if (!c)14791479 return -ENOMEM;14801480···14881488 }1489148914901490 /* Allocate statistic structure */14911491- c->stats = alloc_percpu(struct tipc_crypto_stats);14911491+ c->stats = alloc_percpu_gfp(struct tipc_crypto_stats, GFP_ATOMIC);14921492 if (!c->stats) {14931493 if (c->wq)14941494 destroy_workqueue(c->wq);···24612461 }2462246224632463 /* Lets duplicate it first */24642464- skey = kmemdup(aead->key, tipc_aead_key_size(aead->key), GFP_KERNEL);24642464+ skey = kmemdup(aead->key, tipc_aead_key_size(aead->key), GFP_ATOMIC);24652465 rcu_read_unlock();2466246624672467 /* Now, generate new key, initiate & distribute it */
+2-1
net/vmw_vsock/virtio_transport_common.c
···12991299 space_available = virtio_transport_space_update(sk, pkt);1300130013011301 /* Update CID in case it has changed after a transport reset event */13021302- vsk->local_addr.svm_cid = dst.svm_cid;13021302+ if (vsk->local_addr.svm_cid != VMADDR_CID_ANY)13031303+ vsk->local_addr.svm_cid = dst.svm_cid;1303130413041305 if (space_available)13051306 sk->sk_write_space(sk);
···132132 return AE_NOT_FOUND;133133 }134134135135- info->handle = handle;136136-137135 /*138136 * On some Intel platforms, multiple children of the HDAS139137 * device can be found, but only one of them is the SoundWire···141143 */142144 if (FIELD_GET(GENMASK(31, 28), adr) != SDW_LINK_TYPE)143145 return AE_OK; /* keep going */146146+147147+ /* found the correct SoundWire controller */148148+ info->handle = handle;144149145150 /* device found, stop namespace walk */146151 return AE_CTRL_TERMINATE;···165164 acpi_status status;166165167166 info->handle = NULL;167167+ /*168168+ * In the HDAS ACPI scope, 'SNDW' may be either a child or a169169+ * grandchild of 'HDAS'. So walk the ACPI namespace from170170+ * 'HDAS' with a max depth of 2 to find the 'SNDW'171171+ * device.172172+ */168173 status = acpi_walk_namespace(ACPI_TYPE_DEVICE,169169- parent_handle, 1,174174+ parent_handle, 2,170175 sdw_intel_acpi_cb,171176 NULL, info, NULL);172177 if (ACPI_FAILURE(status) || info->handle == NULL)
+15-6
sound/pci/hda/patch_hdmi.c
···2947294729482948/* Intel Haswell and onwards; audio component with eld notifier */29492949static int intel_hsw_common_init(struct hda_codec *codec, hda_nid_t vendor_nid,29502950- const int *port_map, int port_num, int dev_num)29502950+ const int *port_map, int port_num, int dev_num,29512951+ bool send_silent_stream)29512952{29522953 struct hdmi_spec *spec;29532954 int err;···29812980 * Enable silent stream feature, if it is enabled via29822981 * module param or Kconfig option29832982 */29842984- if (enable_silent_stream)29832983+ if (send_silent_stream)29852984 spec->send_silent_stream = true;2986298529872986 return parse_intel_hdmi(codec);···2989298829902989static int patch_i915_hsw_hdmi(struct hda_codec *codec)29912990{29922992- return intel_hsw_common_init(codec, 0x08, NULL, 0, 3);29912991+ return intel_hsw_common_init(codec, 0x08, NULL, 0, 3,29922992+ enable_silent_stream);29932993}2994299429952995static int patch_i915_glk_hdmi(struct hda_codec *codec)29962996{29972997- return intel_hsw_common_init(codec, 0x0b, NULL, 0, 3);29972997+ /*29982998+ * Silent stream calls audio component .get_power() from29992999+ * .pin_eld_notify(). On GLK this will deadlock in i915 due30003000+ * to the audio vs. CDCLK workaround.30013001+ */30023002+ return intel_hsw_common_init(codec, 0x0b, NULL, 0, 3, false);29983003}2999300430003005static int patch_i915_icl_hdmi(struct hda_codec *codec)···30113004 */30123005 static const int map[] = {0x0, 0x4, 0x6, 0x8, 0xa, 0xb};3013300630143014- return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 3);30073007+ return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 3,30083008+ enable_silent_stream);30153009}3016301030173011static int patch_i915_tgl_hdmi(struct hda_codec *codec)···30243016 static const int map[] = {0x4, 0x6, 0x8, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf};30253017 int ret;3026301830273027- ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 4);30193019+ ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 4,30203020+ enable_silent_stream);30283021 if (!ret) {30293022 struct hdmi_spec *spec = codec->spec;30303023
···2020#define AIU_MEM_I2S_CONTROL_MODE_16BIT BIT(6)2121#define AIU_MEM_I2S_BUF_CNTL_INIT BIT(0)2222#define AIU_RST_SOFT_I2S_FAST BIT(0)2323+#define AIU_I2S_MISC_HOLD_EN BIT(2)2424+#define AIU_I2S_MISC_FORCE_LEFT_RIGHT BIT(4)23252426#define AIU_FIFO_I2S_BLOCK 2562527···9290 unsigned int val;9391 int ret;94929393+ snd_soc_component_update_bits(component, AIU_I2S_MISC,9494+ AIU_I2S_MISC_HOLD_EN,9595+ AIU_I2S_MISC_HOLD_EN);9696+9597 ret = aiu_fifo_hw_params(substream, params, dai);9698 if (ret)9799 return ret;···122116 val = FIELD_PREP(AIU_MEM_I2S_MASKS_IRQ_BLOCK, val);123117 snd_soc_component_update_bits(component, AIU_MEM_I2S_MASKS,124118 AIU_MEM_I2S_MASKS_IRQ_BLOCK, val);119119+120120+ /*121121+ * Most (all?) supported SoCs have this bit set by default. The vendor122122+ * driver however sets it manually (depending on the version either123123+ * while un-setting AIU_I2S_MISC_HOLD_EN or right before that). Follow124124+ * the same approach for consistency with the vendor driver.125125+ */126126+ snd_soc_component_update_bits(component, AIU_I2S_MISC,127127+ AIU_I2S_MISC_FORCE_LEFT_RIGHT,128128+ AIU_I2S_MISC_FORCE_LEFT_RIGHT);129129+130130+ snd_soc_component_update_bits(component, AIU_I2S_MISC,131131+ AIU_I2S_MISC_HOLD_EN, 0);125132126133 return 0;127134}
···7272 ip link set $h1.10 address $h1_10_mac7373}74747575+rif_mac_profile_consolidation_test()7676+{7777+ local count=$1; shift7878+ local h1_20_mac7979+8080+ RET=08181+8282+ if [[ $count -eq 1 ]]; then8383+ return8484+ fi8585+8686+ h1_20_mac=$(mac_get $h1.20)8787+8888+ # Set the MAC of $h1.20 to that of $h1.10 and confirm that they are8989+ # using the same MAC profile.9090+ ip link set $h1.20 address 00:11:11:11:11:119191+ check_err $?9292+9393+ occ=$(devlink -j resource show $DEVLINK_DEV \9494+ | jq '.[][][] | select(.name=="rif_mac_profiles") |.["occ"]')9595+9696+ [[ $occ -eq $((count - 1)) ]]9797+ check_err $? "MAC profile occupancy did not decrease"9898+9999+ log_test "RIF MAC profile consolidation"100100+101101+ ip link set $h1.20 address $h1_20_mac102102+}103103+75104rif_mac_profile_shared_replacement_test()76105{77106 local count=$1; shift···133104 create_max_rif_mac_profiles $count134105135106 rif_mac_profile_replacement_test107107+ rif_mac_profile_consolidation_test $count136108 rif_mac_profile_shared_replacement_test $count137109}138110
···110110 ret = _vcpu_set_msr(vm, 0, MSR_IA32_PERF_CAPABILITIES, PMU_CAP_LBR_FMT);111111 TEST_ASSERT(ret == 0, "Bad PERF_CAPABILITIES didn't fail.");112112113113- /* testcase 4, set capabilities when we don't have PDCM bit */114114- entry_1_0->ecx &= ~X86_FEATURE_PDCM;115115- vcpu_set_cpuid(vm, VCPU_ID, cpuid);116116- ret = _vcpu_set_msr(vm, 0, MSR_IA32_PERF_CAPABILITIES, host_cap.capabilities);117117- TEST_ASSERT(ret == 0, "Bad PERF_CAPABILITIES didn't fail.");118118-119119- /* testcase 5, set capabilities when we don't have PMU version bits */120120- entry_1_0->ecx |= X86_FEATURE_PDCM;121121- eax.split.version_id = 0;122122- entry_1_0->ecx = eax.full;123123- vcpu_set_cpuid(vm, VCPU_ID, cpuid);124124- ret = _vcpu_set_msr(vm, 0, MSR_IA32_PERF_CAPABILITIES, PMU_CAP_FW_WRITES);125125- TEST_ASSERT(ret == 0, "Bad PERF_CAPABILITIES didn't fail.");126126-127127- vcpu_set_msr(vm, 0, MSR_IA32_PERF_CAPABILITIES, 0);128128- ASSERT_EQ(vcpu_get_msr(vm, VCPU_ID, MSR_IA32_PERF_CAPABILITIES), 0);129129-130113 kvm_vm_free(vm);131114}
+34-11
tools/testing/selftests/net/fcnal-test.sh
···455455 ip netns del ${NSC} >/dev/null 2>&1456456}457457458458+cleanup_vrf_dup()459459+{460460+ ip link del ${NSA_DEV2} >/dev/null 2>&1461461+ ip netns pids ${NSC} | xargs kill 2>/dev/null462462+ ip netns del ${NSC} >/dev/null 2>&1463463+}464464+465465+setup_vrf_dup()466466+{467467+ # some VRF tests use ns-C which has the same config as468468+ # ns-B but for a device NOT in the VRF469469+ create_ns ${NSC} "-" "-"470470+ connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \471471+ ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64472472+}473473+458474setup()459475{460476 local with_vrf=${1}···500484501485 ip -netns ${NSB} ro add ${VRF_IP}/32 via ${NSA_IP} dev ${NSB_DEV}502486 ip -netns ${NSB} -6 ro add ${VRF_IP6}/128 via ${NSA_IP6} dev ${NSB_DEV}503503-504504- # some VRF tests use ns-C which has the same config as505505- # ns-B but for a device NOT in the VRF506506- create_ns ${NSC} "-" "-"507507- connect_ns ${NSA} ${NSA_DEV2} ${NSA_IP}/24 ${NSA_IP6}/64 \508508- ${NSC} ${NSC_DEV} ${NSB_IP}/24 ${NSB_IP6}/64509487 else510488 ip -netns ${NSA} ro add ${NSB_LO_IP}/32 via ${NSB_IP} dev ${NSA_DEV}511489 ip -netns ${NSA} ro add ${NSB_LO_IP6}/128 via ${NSB_IP6} dev ${NSA_DEV}···12501240 log_test_addr ${a} $? 1 "Global server, local connection"1251124112521242 # run MD5 tests12431243+ setup_vrf_dup12531244 ipv4_tcp_md512451245+ cleanup_vrf_dup1254124612551247 #12561248 # enable VRF global server···18101798 for a in ${NSA_IP} ${VRF_IP}18111799 do18121800 log_start18011801+ show_hint "Socket not bound to VRF, but address is in VRF"18131802 run_cmd nettest -s -R -P icmp -l ${a} -b18141814- log_test_addr ${a} $? 0 "Raw socket bind to local address"18031803+ log_test_addr ${a} $? 1 "Raw socket bind to local address"1815180418161805 log_start18171806 run_cmd nettest -s -R -P icmp -l ${a} -I ${NSA_DEV} -b···22042191 log_start22052192 show_hint "Fails since VRF device does not support linklocal or multicast"22062193 run_cmd ${ping6} -c1 -w1 ${a}22072207- log_test_addr ${a} $? 2 "ping out, VRF bind"21942194+ log_test_addr ${a} $? 1 "ping out, VRF bind"22082195 done2209219622102197 for a in ${NSB_IP6} ${NSB_LO_IP6} ${NSB_LINKIP6}%${NSA_DEV} ${MCAST}%${NSA_DEV}···27322719 log_test_addr ${a} $? 1 "Global server, local connection"2733272027342721 # run MD5 tests27222722+ setup_vrf_dup27352723 ipv6_tcp_md527242724+ cleanup_vrf_dup2736272527372726 #27382727 # enable VRF global server···34293414 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b34303415 log_test_addr ${a} $? 0 "TCP socket bind to local address after device bind"3431341634173417+ # Sadly, the kernel allows binding a socket to a device and then34183418+ # binding to an address not on the device. So this test passes34193419+ # when it really should not34323420 a=${NSA_LO_IP6}34333421 log_start34343434- show_hint "Should fail with 'Cannot assign requested address'"34223422+ show_hint "Technically should fail since address is not on device but kernel allows"34353423 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b34363436- log_test_addr ${a} $? 1 "TCP socket bind to out of scope local address"34243424+ log_test_addr ${a} $? 0 "TCP socket bind to out of scope local address"34373425}3438342634393427ipv6_addr_bind_vrf()···34773459 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b34783460 log_test_addr ${a} $? 0 "TCP socket bind to local address with device bind"3479346134623462+ # Sadly, the kernel allows binding a socket to a device and then34633463+ # binding to an address not on the device. The only restriction34643464+ # is that the address is valid in the L3 domain. So this test34653465+ # passes when it really should not34803466 a=${VRF_IP6}34813467 log_start34683468+ show_hint "Technically should fail since address is not on device but kernel allows"34823469 run_cmd nettest -6 -s -l ${a} -I ${NSA_DEV} -t1 -b34833483- log_test_addr ${a} $? 1 "TCP socket bind to VRF address with device bind"34703470+ log_test_addr ${a} $? 0 "TCP socket bind to VRF address with device bind"3484347134853472 a=${NSA_LO_IP6}34863473 log_start
···1313NETIFS[p6]=veth51414NETIFS[p7]=veth61515NETIFS[p8]=veth71616+NETIFS[p9]=veth81717+NETIFS[p10]=veth916181719# Port that does not have a cable connected.1820NETIF_NO_CABLE=eth8
+1-1
tools/testing/selftests/net/icmp_redirect.sh
···311311 ip -netns h1 ro get ${H1_VRF_ARG} ${H2_N2_IP} | \312312 grep -E -v 'mtu|redirected' | grep -q "cache"313313 fi314314- log_test $? 0 "IPv4: ${desc}"314314+ log_test $? 0 "IPv4: ${desc}" 0315315316316 # No PMTU info for test "redirect" and "mtu exception plus redirect"317317 if [ "$with_redirect" = "yes" ] && [ "$desc" != "redirect exception plus mtu" ]; then