···
 	architectures force reset to be always executed
 	i8042.unlock	[HW] Unlock (ignore) the keylock
 	i8042.kbdreset	[HW] Reset device connected to KBD port
+	i8042.probe_defer
+			[HW] Allow deferred probing upon i8042 probe errors
 
 	i810=		[HW,DRM]
···
 			Default is 1 (enabled)
 
 	kvm-intel.emulate_invalid_guest_state=
-			[KVM,Intel] Enable emulation of invalid guest states
-			Default is 0 (disabled)
+			[KVM,Intel] Disable emulation of invalid guest state.
+			Ignored if kvm-intel.enable_unrestricted_guest=1, as
+			guest state is never invalid for unrestricted guests.
+			This param doesn't apply to nested guests (L2), as KVM
+			never emulates invalid L2 guest state.
+			Default is 1 (enabled)
 
 	kvm-intel.flexpriority=
 			[KVM,Intel] Disable FlexPriority feature (TPR shadow).
···
     description:
       Properties for single BUCK regulator.
 
+    properties:
+      op_mode:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2, 3]
+        default: 1
+        description: |
+          Describes the different operating modes of the regulator with power
+          mode change in SOC. The different possible values are:
+            0 - always off mode
+            1 - on in normal mode
+            2 - low power mode
+            3 - suspend mode
+
     required:
       - regulator-name
···
       Properties for single BUCK regulator.
 
     properties:
+      op_mode:
+        $ref: /schemas/types.yaml#/definitions/uint32
+        enum: [0, 1, 2, 3]
+        default: 1
+        description: |
+          Describes the different operating modes of the regulator with power
+          mode change in SOC. The different possible values are:
+            0 - always off mode
+            1 - on in normal mode
+            2 - low power mode
+            3 - suspend mode
+
       s5m8767,pmic-ext-control-gpios:
         maxItems: 1
         description: |
+5-3
Documentation/i2c/summary.rst
···
 and so are not advertised as being I2C but come under different names,
 e.g. TWI (Two Wire Interface), IIC.
 
-The official I2C specification is the `"I2C-bus specification and user
-manual" (UM10204) <https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_
-published by NXP Semiconductors.
+The latest official I2C specification is the `"I2C-bus specification and user
+manual" (UM10204) <https://www.nxp.com/webapp/Download?colCode=UM10204>`_
+published by NXP Semiconductors. However, you need to log in to the site to
+access the PDF. An older version of the specification (revision 6) is archived
+`here <https://web.archive.org/web/20210813122132/https://www.nxp.com/docs/en/user-guide/UM10204.pdf>`_.
 
 SMBus (System Management Bus) is based on the I2C protocol, and is mostly
 a subset of I2C protocols and signaling. Many I2C devices will work on an
+6-5
Documentation/networking/bonding.rst
···
 ad_actor_system
 
 	In an AD system, this specifies the mac-address for the actor in
-	protocol packet exchanges (LACPDUs). The value cannot be NULL or
-	multicast. It is preferred to have the local-admin bit set for this
-	mac but driver does not enforce it. If the value is not given then
-	system defaults to using the masters' mac address as actors' system
-	address.
+	protocol packet exchanges (LACPDUs). The value cannot be a multicast
+	address. If the all-zeroes MAC is specified, bonding will internally
+	use the MAC of the bond itself. It is preferred to have the
+	local-admin bit set for this mac but driver does not enforce it. If
+	the value is not given then system defaults to using the masters'
+	mac address as actors' system address.
 
 	This parameter has effect only in 802.3ad mode and is available through
 	SysFs interface.
···
 	IRQ config, enable, reset
 
 DPNI (Datapath Network Interface)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 Contains TX/RX queues, network interface configuration, and RX buffer pool
 configuration mechanisms. The TX/RX queues are in memory and are identified
 by queue number.
+4-2
Documentation/networking/ip-sysctl.rst
···
 ip_no_pmtu_disc - INTEGER
 	Disable Path MTU Discovery. If enabled in mode 1 and a
 	fragmentation-required ICMP is received, the PMTU to this
-	destination will be set to min_pmtu (see below). You will need
+	destination will be set to the smallest of the old MTU to
+	this destination and min_pmtu (see below). You will need
 	to raise min_pmtu to the smallest interface MTU on your system
 	manually if you want to avoid locally generated fragments.
···
 	Default: FALSE
 
 min_pmtu - INTEGER
-	default 552 - minimum discovered Path MTU
+	default 552 - minimum Path MTU. Unless this is changed manually,
+	each cached pmtu will never be lower than this setting.
 
 ip_forward_use_pmtu - BOOLEAN
 	By default we don't trust protocol path MTUs while forwarding
+2-2
Documentation/networking/timestamping.rst
···
   and hardware timestamping is not possible (SKBTX_IN_PROGRESS not set).
   As soon as the driver has sent the packet and/or obtained a
   hardware time stamp for it, it passes the time stamp back by
-  calling skb_hwtstamp_tx() with the original skb, the raw
-  hardware time stamp. skb_hwtstamp_tx() clones the original skb and
+  calling skb_tstamp_tx() with the original skb, the raw
+  hardware time stamp. skb_tstamp_tx() clones the original skb and
   adds the timestamps, therefore the original skb has to be freed now.
   If obtaining the hardware time stamp somehow fails, then the driver
   should not fall back to software time stamping. The rationale is that
+2
Documentation/sound/hd-audio/models.rst
···
 	Headset support on USI machines
 dual-codecs
 	Lenovo laptops with dual codecs
+alc285-hp-amp-init
+	HP laptops which require speaker amplifier initialization (ALC285)
 
 ALC680
 ======
+2-2
MAINTAINERS
···
 M:	Ryder Lee <ryder.lee@mediatek.com>
 M:	Jianjun Wang <jianjun.wang@mediatek.com>
 L:	linux-pci@vger.kernel.org
-L:	linux-mediatek@lists.infradead.org
+L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
 F:	Documentation/devicetree/bindings/pci/mediatek*
 F:	drivers/pci/controller/*mediatek*
···
 SILVACO I3C DUAL-ROLE MASTER
 M:	Miquel Raynal <miquel.raynal@bootlin.com>
 M:	Conor Culhane <conor.culhane@silvaco.com>
-L:	linux-i3c@lists.infradead.org
+L:	linux-i3c@lists.infradead.org (moderated for non-subscribers)
 S:	Maintained
 F:	Documentation/devicetree/bindings/i3c/silvaco,i3c-master.yaml
 F:	drivers/i3c/master/svc-i3c-master.c
···
 config STACK_GROWSUP
 	def_bool y
 
-config ARCH_DEFCONFIG
-	string
-	default "arch/parisc/configs/generic-32bit_defconfig" if !64BIT
-	default "arch/parisc/configs/generic-64bit_defconfig" if 64BIT
-
 config GENERIC_LOCKBREAK
 	bool
 	default y
+2-2
arch/parisc/include/asm/futex.h
···
 _futex_spin_lock(u32 __user *uaddr)
 {
 	extern u32 lws_lock_start[];
-	long index = ((long)uaddr & 0x3f8) >> 1;
+	long index = ((long)uaddr & 0x7f8) >> 1;
 	arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
 	preempt_disable();
 	arch_spin_lock(s);
···
 _futex_spin_unlock(u32 __user *uaddr)
 {
 	extern u32 lws_lock_start[];
-	long index = ((long)uaddr & 0x3f8) >> 1;
+	long index = ((long)uaddr & 0x7f8) >> 1;
 	arch_spinlock_t *s = (arch_spinlock_t *)&lws_lock_start[index];
 	arch_spin_unlock(s);
 	preempt_enable();
+1-1
arch/parisc/kernel/syscall.S
···
 	extrd,u	%r1,PSW_W_BIT,1,%r1
 	/* sp must be aligned on 4, so deposit the W bit setting into
 	 * the bottom of sp temporarily */
-	or,ev	%r1,%r30,%r30
+	or,od	%r1,%r30,%r30
 
 	/* Clip LWS number to a 32-bit value for 32-bit processes */
 	depdi	0, 31, 32, %r20
+2
arch/parisc/kernel/traps.c
···
 		}
 		mmap_read_unlock(current->mm);
 	}
+	/* CPU could not fetch instruction, so clear stale IIR value. */
+	regs->iir = 0xbaadf00d;
 	fallthrough;
 	case 27:
 		/* Data memory protection ID trap */
+1-1
arch/powerpc/mm/ptdump/ptdump.c
···
 {
 	pte_t pte = __pte(st->current_flags);
 
-	if (!IS_ENABLED(CONFIG_PPC_DEBUG_WX) || !st->check_wx)
+	if (!IS_ENABLED(CONFIG_DEBUG_WX) || !st->check_wx)
 		return;
 
 	if (!pte_write(pte) || !pte_exec(pte))
···
 
 	early_reserve_initrd();
 
-	if (efi_enabled(EFI_BOOT))
-		efi_memblock_x86_reserve_range();
-
 	memblock_x86_reserve_range_setup_data();
 
 	reserve_ibft_region();
···
 	}
 
 	return 0;
-}
-
-static char * __init prepare_command_line(void)
-{
-#ifdef CONFIG_CMDLINE_BOOL
-#ifdef CONFIG_CMDLINE_OVERRIDE
-	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
-#else
-	if (builtin_cmdline[0]) {
-		/* append boot loader cmdline to builtin */
-		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
-		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
-		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
-	}
-#endif
-#endif
-
-	strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
-
-	parse_early_param();
-
-	return command_line;
 }
 
 /*
···
 	x86_init.oem.arch_setup();
 
 	/*
-	 * x86_configure_nx() is called before parse_early_param() (called by
-	 * prepare_command_line()) to detect whether hardware doesn't support
-	 * NX (so that the early EHCI debug console setup can safely call
-	 * set_fixmap()). It may then be called again from within noexec_setup()
-	 * during parsing early parameters to honor the respective command line
-	 * option.
-	 */
-	x86_configure_nx();
-
-	/*
-	 * This parses early params and it needs to run before
-	 * early_reserve_memory() because latter relies on such settings
-	 * supplied as early params.
-	 */
-	*cmdline_p = prepare_command_line();
-
-	/*
 	 * Do some memory reservations *before* memory is added to memblock, so
 	 * memblock allocations won't overwrite it.
 	 *
···
 	data_resource.end = __pa_symbol(_edata)-1;
 	bss_resource.start = __pa_symbol(__bss_start);
 	bss_resource.end = __pa_symbol(__bss_stop)-1;
+
+#ifdef CONFIG_CMDLINE_BOOL
+#ifdef CONFIG_CMDLINE_OVERRIDE
+	strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+#else
+	if (builtin_cmdline[0]) {
+		/* append boot loader cmdline to builtin */
+		strlcat(builtin_cmdline, " ", COMMAND_LINE_SIZE);
+		strlcat(builtin_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+		strlcpy(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
+	}
+#endif
+#endif
+
+	strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
+	*cmdline_p = command_line;
+
+	/*
+	 * x86_configure_nx() is called before parse_early_param() to detect
+	 * whether hardware doesn't support NX (so that the early EHCI debug
+	 * console setup can safely call set_fixmap()). It may then be called
+	 * again from within noexec_setup() during parsing early parameters
+	 * to honor the respective command line option.
+	 */
+	x86_configure_nx();
+
+	parse_early_param();
+
+	if (efi_enabled(EFI_BOOT))
+		efi_memblock_x86_reserve_range();
 
 #ifdef CONFIG_MEMORY_HOTPLUG
 	/*
···
 	 * iterator walks off the end of the paging structure.
 	 */
 	bool valid;
+	/*
+	 * True if KVM dropped mmu_lock and yielded in the middle of a walk, in
+	 * which case tdp_iter_next() needs to restart the walk at the root
+	 * level instead of advancing to the next entry.
+	 */
+	bool yielded;
 };
 
 /*
+16-13
arch/x86/kvm/mmu/tdp_mmu.c
···
 				  struct tdp_iter *iter,
 				  u64 new_spte)
 {
+	WARN_ON_ONCE(iter->yielded);
+
 	lockdep_assert_held_read(&kvm->mmu_lock);
 
 	/*
···
 		      u64 new_spte, bool record_acc_track,
 		      bool record_dirty_log)
 {
+	WARN_ON_ONCE(iter->yielded);
+
 	lockdep_assert_held_write(&kvm->mmu_lock);
 
 	/*
···
 * If this function should yield and flush is set, it will perform a remote
 * TLB flush before yielding.
 *
- * If this function yields, it will also reset the tdp_iter's walk over the
- * paging structure and the calling function should skip to the next
- * iteration to allow the iterator to continue its traversal from the
- * paging structure root.
+ * If this function yields, iter->yielded is set and the caller must skip to
+ * the next iteration, where tdp_iter_next() will reset the tdp_iter's walk
+ * over the paging structures to allow the iterator to continue its traversal
+ * from the paging structure root.
 *
- * Return true if this function yielded and the iterator's traversal was reset.
- * Return false if a yield was not needed.
+ * Returns true if this function yielded.
 */
-static inline bool tdp_mmu_iter_cond_resched(struct kvm *kvm,
-					     struct tdp_iter *iter, bool flush,
-					     bool shared)
+static inline bool __must_check tdp_mmu_iter_cond_resched(struct kvm *kvm,
+							  struct tdp_iter *iter,
+							  bool flush, bool shared)
 {
+	WARN_ON(iter->yielded);
+
 	/* Ensure forward progress has been made before yielding. */
 	if (iter->next_last_level_gfn == iter->yielded_gfn)
 		return false;
···
 
 		WARN_ON(iter->gfn > iter->next_last_level_gfn);
 
-		tdp_iter_restart(iter);
-
-		return true;
+		iter->yielded = true;
 	}
 
-	return false;
+	return iter->yielded;
 }
 
 /*
+12-9
arch/x86/kvm/svm/svm.c
···
 	to_svm(vcpu)->vmcb->save.rflags = rflags;
 }
 
+static bool svm_get_if_flag(struct kvm_vcpu *vcpu)
+{
+	struct vmcb *vmcb = to_svm(vcpu)->vmcb;
+
+	return sev_es_guest(vcpu->kvm)
+	       ? vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK
+	       : kvm_get_rflags(vcpu) & X86_EFLAGS_IF;
+}
+
 static void svm_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg)
 {
 	switch (reg) {
···
 	if (!gif_set(svm))
 		return true;
 
-	if (sev_es_guest(vcpu->kvm)) {
-		/*
-		 * SEV-ES guests to not expose RFLAGS. Use the VMCB interrupt mask
-		 * bit to determine the state of the IF flag.
-		 */
-		if (!(vmcb->control.int_state & SVM_GUEST_INTERRUPT_MASK))
-			return true;
-	} else if (is_guest_mode(vcpu)) {
+	if (is_guest_mode(vcpu)) {
 		/* As long as interrupts are being delivered... */
 		if ((svm->nested.ctl.int_ctl & V_INTR_MASKING_MASK)
 		    ? !(svm->vmcb01.ptr->save.rflags & X86_EFLAGS_IF)
···
 		if (nested_exit_on_intr(svm))
 			return false;
 	} else {
-		if (!(kvm_get_rflags(vcpu) & X86_EFLAGS_IF))
+		if (!svm_get_if_flag(vcpu))
 			return true;
 	}
 
···
 	.cache_reg = svm_cache_reg,
 	.get_rflags = svm_get_rflags,
 	.set_rflags = svm_set_rflags,
+	.get_if_flag = svm_get_if_flag,
 
 	.tlb_flush_all = svm_flush_tlb,
 	.tlb_flush_current = svm_flush_tlb,
+32-13
arch/x86/kvm/vmx/vmx.c
···
 	vmx->emulation_required = vmx_emulation_required(vcpu);
 }
 
+static bool vmx_get_if_flag(struct kvm_vcpu *vcpu)
+{
+	return vmx_get_rflags(vcpu) & X86_EFLAGS_IF;
+}
+
 u32 vmx_get_interrupt_shadow(struct kvm_vcpu *vcpu)
 {
 	u32 interruptibility = vmcs_read32(GUEST_INTERRUPTIBILITY_INFO);
···
 	if (pi_test_and_set_on(&vmx->pi_desc))
 		return 0;
 
-	if (vcpu != kvm_get_running_vcpu() &&
-	    !kvm_vcpu_trigger_posted_interrupt(vcpu, false))
+	if (!kvm_vcpu_trigger_posted_interrupt(vcpu, false))
 		kvm_vcpu_kick(vcpu);
 
 	return 0;
···
 		vmx_flush_pml_buffer(vcpu);
 
 	/*
-	 * We should never reach this point with a pending nested VM-Enter, and
-	 * more specifically emulation of L2 due to invalid guest state (see
-	 * below) should never happen as that means we incorrectly allowed a
-	 * nested VM-Enter with an invalid vmcs12.
+	 * KVM should never reach this point with a pending nested VM-Enter.
+	 * More specifically, short-circuiting VM-Entry to emulate L2 due to
+	 * invalid guest state should never happen as that means KVM knowingly
+	 * allowed a nested VM-Enter with an invalid vmcs12. More below.
 	 */
 	if (KVM_BUG_ON(vmx->nested.nested_run_pending, vcpu->kvm))
 		return -EIO;
-
-	/* If guest state is invalid, start emulating */
-	if (vmx->emulation_required)
-		return handle_invalid_guest_state(vcpu);
 
 	if (is_guest_mode(vcpu)) {
 		/*
···
 		 */
 		nested_mark_vmcs12_pages_dirty(vcpu);
 
+		/*
+		 * Synthesize a triple fault if L2 state is invalid. In normal
+		 * operation, nested VM-Enter rejects any attempt to enter L2
+		 * with invalid state. However, those checks are skipped if
+		 * state is being stuffed via RSM or KVM_SET_NESTED_STATE. If
+		 * L2 state is invalid, it means either L1 modified SMRAM state
+		 * or userspace provided bad state. Synthesize TRIPLE_FAULT as
+		 * doing so is architecturally allowed in the RSM case, and is
+		 * the least awful solution for the userspace case without
+		 * risking false positives.
+		 */
+		if (vmx->emulation_required) {
+			nested_vmx_vmexit(vcpu, EXIT_REASON_TRIPLE_FAULT, 0, 0);
+			return 1;
+		}
+
 		if (nested_vmx_reflect_vmexit(vcpu))
 			return 1;
 	}
+
+	/* If guest state is invalid, start emulating. L2 is handled above. */
+	if (vmx->emulation_required)
+		return handle_invalid_guest_state(vcpu);
 
 	if (exit_reason.failed_vmentry) {
 		dump_vmcs(vcpu);
···
 	 * consistency check VM-Exit due to invalid guest state and bail.
 	 */
 	if (unlikely(vmx->emulation_required)) {
-
-		/* We don't emulate invalid state of a nested guest */
-		vmx->fail = is_guest_mode(vcpu);
+		vmx->fail = 0;
 
 		vmx->exit_reason.full = EXIT_REASON_INVALID_STATE;
 		vmx->exit_reason.failed_vmentry = 1;
···
 	.cache_reg = vmx_cache_reg,
 	.get_rflags = vmx_get_rflags,
 	.set_rflags = vmx_set_rflags,
+	.get_if_flag = vmx_get_if_flag,
 
 	.tlb_flush_all = vmx_flush_tlb_all,
 	.tlb_flush_current = vmx_flush_tlb_current,
+2-9
arch/x86/kvm/x86.c
···
 	MSR_IA32_UMWAIT_CONTROL,
 
 	MSR_ARCH_PERFMON_FIXED_CTR0, MSR_ARCH_PERFMON_FIXED_CTR1,
-	MSR_ARCH_PERFMON_FIXED_CTR0 + 2, MSR_ARCH_PERFMON_FIXED_CTR0 + 3,
+	MSR_ARCH_PERFMON_FIXED_CTR0 + 2,
 	MSR_CORE_PERF_FIXED_CTR_CTRL, MSR_CORE_PERF_GLOBAL_STATUS,
 	MSR_CORE_PERF_GLOBAL_CTRL, MSR_CORE_PERF_GLOBAL_OVF_CTRL,
 	MSR_ARCH_PERFMON_PERFCTR0, MSR_ARCH_PERFMON_PERFCTR1,
···
 {
 	struct kvm_run *kvm_run = vcpu->run;
 
-	/*
-	 * if_flag is obsolete and useless, so do not bother
-	 * setting it for SEV-ES guests. Userspace can just
-	 * use kvm_run->ready_for_interrupt_injection.
-	 */
-	kvm_run->if_flag = !vcpu->arch.guest_state_protected
-		&& (kvm_get_rflags(vcpu) & X86_EFLAGS_IF) != 0;
-
+	kvm_run->if_flag = static_call(kvm_x86_get_if_flag)(vcpu);
 	kvm_run->cr8 = kvm_get_cr8(vcpu);
 	kvm_run->apic_base = kvm_get_apic_base(vcpu);
···
 	bool must_clear;
 
 	/* contains the LCD config state */
-	unsigned long int flags;
+	unsigned long flags;
 
 	/* Current escape sequence and it's length or -1 if outside */
 	struct {
···
 	 * Since charlcd_init_display() needs to write data, we have to
 	 * enable mark the LCD initialized just before.
 	 */
+	if (WARN_ON(!lcd->ops->init_display))
+		return -EINVAL;
+
 	ret = lcd->ops->init_display(lcd);
 	if (ret)
 		return ret;
+1-1
drivers/base/power/main.c
···
 	device_block_probing();
 
 	mutex_lock(&dpm_list_mtx);
-	while (!list_empty(&dpm_list)) {
+	while (!list_empty(&dpm_list) && !error) {
 		struct device *dev = to_device(dpm_list.next);
 
 		get_device(dev);
+12-3
drivers/block/xen-blkfront.c
···
 	unsigned long flags;
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
+	unsigned int eoiflag = XEN_EOI_FLAG_SPURIOUS;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED)) {
+		xen_irq_lateeoi(irq, XEN_EOI_FLAG_SPURIOUS);
 		return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
···
 	for (i = rinfo->ring.rsp_cons; i != rp; i++) {
 		unsigned long id;
 		unsigned int op;
+
+		eoiflag = 0;
 
 		RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
 		id = bret.id;
···
 
 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
 
+	xen_irq_lateeoi(irq, eoiflag);
+
 	return IRQ_HANDLED;
 
  err:
 	info->connected = BLKIF_STATE_ERROR;
 
 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+
+	/* No EOI in order to avoid further interrupts. */
 
 	pr_alert("%s disabled for further use\n", info->gd->disk_name);
 	return IRQ_HANDLED;
···
 	if (err)
 		goto fail;
 
-	err = bind_evtchn_to_irqhandler(rinfo->evtchn, blkif_interrupt, 0,
-					"blkif", rinfo);
+	err = bind_evtchn_to_irqhandler_lateeoi(rinfo->evtchn, blkif_interrupt,
+						0, "blkif", rinfo);
 	if (err <= 0) {
 		xenbus_dev_fatal(dev, err,
 				 "bind_evtchn_to_irqhandler failed");
+4-4
drivers/bus/sunxi-rsb.c
···
 
 static void sunxi_rsb_hw_exit(struct sunxi_rsb *rsb)
 {
-	/* Keep the clock and PM reference counts consistent. */
-	if (pm_runtime_status_suspended(rsb->dev))
-		pm_runtime_resume(rsb->dev);
 	reset_control_assert(rsb->rstc);
-	clk_disable_unprepare(rsb->clk);
+
+	/* Keep the clock and PM reference counts consistent. */
+	if (!pm_runtime_status_suspended(rsb->dev))
+		clk_disable_unprepare(rsb->clk);
 }
 
 static int __maybe_unused sunxi_rsb_runtime_suspend(struct device *dev)
+14-9
drivers/char/ipmi/ipmi_msghandler.c
···
 	 * with removing the device attributes while reading a device
 	 * attribute.
 	 */
-	schedule_work(&bmc->remove_work);
+	queue_work(remove_work_wq, &bmc->remove_work);
 }
 
 /*
···
 	if (initialized)
 		goto out;
 
-	init_srcu_struct(&ipmi_interfaces_srcu);
+	rv = init_srcu_struct(&ipmi_interfaces_srcu);
+	if (rv)
+		goto out;
+
+	remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
+	if (!remove_work_wq) {
+		pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
+		rv = -ENOMEM;
+		goto out_wq;
+	}
 
 	timer_setup(&ipmi_timer, ipmi_timeout, 0);
 	mod_timer(&ipmi_timer, jiffies + IPMI_TIMEOUT_JIFFIES);
 
 	atomic_notifier_chain_register(&panic_notifier_list, &panic_block);
 
-	remove_work_wq = create_singlethread_workqueue("ipmi-msghandler-remove-wq");
-	if (!remove_work_wq) {
-		pr_err("unable to create ipmi-msghandler-remove-wq workqueue");
-		rv = -ENOMEM;
-		goto out;
-	}
-
 	initialized = true;
 
+out_wq:
+	if (rv)
+		cleanup_srcu_struct(&ipmi_interfaces_srcu);
 out:
 	mutex_unlock(&ipmi_interfaces_mutex);
 	return rv;
+4-3
drivers/char/ipmi/ipmi_ssif.c
···
 		}
 	}
 
+	ssif_info->client = client;
+	i2c_set_clientdata(client, ssif_info);
+
 	rv = ssif_check_and_remove(client, ssif_info);
 	/* If rv is 0 and addr source is not SI_ACPI, continue probing */
 	if (!rv && ssif_info->addr_source == SI_ACPI) {
···
 		"Trying %s-specified SSIF interface at i2c address 0x%x, adapter %s, slave address 0x%x\n",
 		ipmi_addr_src_to_str(ssif_info->addr_source),
 		client->addr, client->adapter->name, slave_addr);
-
-	ssif_info->client = client;
-	i2c_set_clientdata(client, ssif_info);
 
 	/* Now check for system interface capabilities */
 	msg[0] = IPMI_NETFN_APP_REQUEST << 2;
···
 
 		dev_err(&ssif_info->client->dev,
 			"Unable to start IPMI SSIF: %d\n", rv);
+		i2c_set_clientdata(client, NULL);
 		kfree(ssif_info);
 	}
 	kfree(resp);
+7
drivers/crypto/qat/qat_4xxx/adf_4xxx_hw_data.c
···
 	return adf_4xxx_fw_config[obj_num].ae_mask;
 }
 
+static u32 get_vf2pf_sources(void __iomem *pmisc_addr)
+{
+	/* For the moment do not report vf2pf sources */
+	return 0;
+}
+
 void adf_init_hw_data_4xxx(struct adf_hw_device_data *hw_data)
 {
 	hw_data->dev_class = &adf_4xxx_class;
···
 	hw_data->set_msix_rttable = set_msix_default_rttable;
 	hw_data->set_ssm_wdtimer = adf_gen4_set_ssm_wdtimer;
 	hw_data->enable_pfvf_comms = pfvf_comms_disabled;
+	hw_data->get_vf2pf_sources = get_vf2pf_sources;
 	hw_data->disable_iov = adf_disable_sriov;
 	hw_data->min_iov_compat_ver = ADF_PFVF_COMPAT_THIS_VERSION;
+9-10
drivers/gpio/gpio-dln2.c
···
 struct dln2_gpio {
 	struct platform_device *pdev;
 	struct gpio_chip gpio;
+	struct irq_chip irqchip;
 
 	/*
 	 * Cache pin direction to save us one transfer, since the hardware has
···
 	mutex_unlock(&dln2->irq_lock);
 }
 
-static struct irq_chip dln2_gpio_irqchip = {
-	.name = "dln2-irq",
-	.irq_mask = dln2_irq_mask,
-	.irq_unmask = dln2_irq_unmask,
-	.irq_set_type = dln2_irq_set_type,
-	.irq_bus_lock = dln2_irq_bus_lock,
-	.irq_bus_sync_unlock = dln2_irq_bus_unlock,
-};
-
 static void dln2_gpio_event(struct platform_device *pdev, u16 echo,
 			    const void *data, int len)
 {
···
 	dln2->gpio.direction_output = dln2_gpio_direction_output;
 	dln2->gpio.set_config = dln2_gpio_set_config;
 
+	dln2->irqchip.name = "dln2-irq",
+	dln2->irqchip.irq_mask = dln2_irq_mask,
+	dln2->irqchip.irq_unmask = dln2_irq_unmask,
+	dln2->irqchip.irq_set_type = dln2_irq_set_type,
+	dln2->irqchip.irq_bus_lock = dln2_irq_bus_lock,
+	dln2->irqchip.irq_bus_sync_unlock = dln2_irq_bus_unlock,
+
 	girq = &dln2->gpio.irq;
-	girq->chip = &dln2_gpio_irqchip;
+	girq->chip = &dln2->irqchip;
 	/* The event comes from the outside so no parent handler */
 	girq->parent_handler = NULL;
 	girq->num_parents = 0;
+1-5
drivers/gpio/gpio-virtio.c
···
 	virtqueue_kick(vgpio->request_vq);
 	mutex_unlock(&vgpio->lock);
 
-	if (!wait_for_completion_timeout(&line->completion, HZ)) {
-		dev_err(dev, "GPIO operation timed out\n");
-		ret = -ETIMEDOUT;
-		goto out;
-	}
+	wait_for_completion(&line->completion);
 
 	if (unlikely(res->status != VIRTIO_GPIO_STATUS_OK)) {
 		dev_err(dev, "GPIO request failed: %d\n", gpio);
+8-9
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
···
 bool amdgpu_device_asic_has_dc_support(enum amd_asic_type asic_type)
 {
 	switch (asic_type) {
+#ifdef CONFIG_DRM_AMDGPU_SI
+	case CHIP_HAINAN:
+#endif
+	case CHIP_TOPAZ:
+		/* chips with no display hardware */
+		return false;
 #if defined(CONFIG_DRM_AMD_DC)
 	case CHIP_TAHITI:
 	case CHIP_PITCAIRN:
···
 int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 				 struct amdgpu_reset_context *reset_context)
 {
-	int i, j, r = 0;
+	int i, r = 0;
 	struct amdgpu_job *job = NULL;
 	bool need_full_reset =
 		test_bit(AMDGPU_NEED_FULL_RESET, &reset_context->flags);
···
 
 		/*clear job fence from fence drv to avoid force_completion
 		 *leave NULL and vm flush fence in fence drv */
-		for (j = 0; j <= ring->fence_drv.num_fences_mask; j++) {
-			struct dma_fence *old, **ptr;
+		amdgpu_fence_driver_clear_job_fences(ring);
 
-			ptr = &ring->fence_drv.fences[j];
-			old = rcu_dereference_protected(*ptr, 1);
-			if (old && test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &old->flags)) {
-				RCU_INIT_POINTER(*ptr, NULL);
-			}
-		}
 		/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
 		amdgpu_fence_driver_force_completion(ring);
 	}
···
 	struct amdgpu_vm_bo_base *bo_base;
 	int r;
 
-	if (bo->tbo.resource->mem_type == TTM_PL_SYSTEM)
+	if (!bo->tbo.resource || bo->tbo.resource->mem_type == TTM_PL_SYSTEM)
 		return;
 
 	r = ttm_bo_validate(&bo->tbo, &placement, &ctx);
+23-4
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···
 
 /**
  * DOC: runpm (int)
- * Override for runtime power management control for dGPUs in PX/HG laptops. The amdgpu driver can dynamically power down
- * the dGPU on PX/HG laptops when it is idle. The default is -1 (auto enable). Setting the value to 0 disables this functionality.
+ * Override for runtime power management control for dGPUs. The amdgpu driver can dynamically power down
+ * the dGPUs when they are idle if supported. The default is -1 (auto enable).
+ * Setting the value to 0 disables this functionality.
  */
-MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = PX only default)");
+MODULE_PARM_DESC(runpm, "PX runtime pm (2 = force enable with BAMACO, 1 = force enable with BACO, 0 = disable, -1 = auto)");
 module_param_named(runpm, amdgpu_runtime_pm, int, 0444);
 
 /**
···
 	adev->in_s3 = true;
 	r = amdgpu_device_suspend(drm_dev, true);
 	adev->in_s3 = false;
-
+	if (r)
+		return r;
+	if (!adev->in_s0ix)
+		r = amdgpu_asic_reset(adev);
 	return r;
 }
 
···
 	if (amdgpu_device_supports_px(drm_dev))
 		drm_dev->switch_power_state = DRM_SWITCH_POWER_CHANGING;
 
+	/*
+	 * By setting mp1_state as PP_MP1_STATE_UNLOAD, MP1 will do some
+	 * proper cleanups and put itself into a state ready for PNP. That
+	 * can address some random resuming failure observed on BOCO capable
+	 * platforms.
+	 * TODO: this may be also needed for PX capable platform.
+	 */
+	if (amdgpu_device_supports_boco(drm_dev))
+		adev->mp1_state = PP_MP1_STATE_UNLOAD;
+
 	ret = amdgpu_device_suspend(drm_dev, false);
 	if (ret) {
 		adev->in_runpm = false;
+		if (amdgpu_device_supports_boco(drm_dev))
+			adev->mp1_state = PP_MP1_STATE_NONE;
 		return ret;
 	}
+
+	if (amdgpu_device_supports_boco(drm_dev))
+		adev->mp1_state = PP_MP1_STATE_NONE;
 
 	if (amdgpu_device_supports_px(drm_dev)) {
 		/* Only need to handle PCI state in the driver for ATPX
+87-39
drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
···7777 * Cast helper7878 */7979static const struct dma_fence_ops amdgpu_fence_ops;8080+static const struct dma_fence_ops amdgpu_job_fence_ops;8081static inline struct amdgpu_fence *to_amdgpu_fence(struct dma_fence *f)8182{8283 struct amdgpu_fence *__f = container_of(f, struct amdgpu_fence, base);83848484- if (__f->base.ops == &amdgpu_fence_ops)8585+ if (__f->base.ops == &amdgpu_fence_ops ||8686+ __f->base.ops == &amdgpu_job_fence_ops)8587 return __f;86888789 return NULL;···160158 }161159162160 seq = ++ring->fence_drv.sync_seq;163163- if (job != NULL && job->job_run_counter) {161161+ if (job && job->job_run_counter) {164162 /* reinit seq for resubmitted jobs */165163 fence->seqno = seq;166164 } else {167167- dma_fence_init(fence, &amdgpu_fence_ops,168168- &ring->fence_drv.lock,169169- adev->fence_context + ring->idx,170170- seq);171171- }172172-173173- if (job != NULL) {174174- /* mark this fence has a parent job */175175- set_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &fence->flags);165165+ if (job)166166+ dma_fence_init(fence, &amdgpu_job_fence_ops,167167+ &ring->fence_drv.lock,168168+ adev->fence_context + ring->idx, seq);169169+ else170170+ dma_fence_init(fence, &amdgpu_fence_ops,171171+ &ring->fence_drv.lock,172172+ adev->fence_context + ring->idx, seq);176173 }177174178175 amdgpu_ring_emit_fence(ring, ring->fence_drv.gpu_addr,···622621}623622624623/**624624+ * amdgpu_fence_driver_clear_job_fences - clear job embedded fences of ring625625+ *626626+ * @ring: fence of the ring to be cleared627627+ *628628+ */629629+void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring)630630+{631631+ int i;632632+ struct dma_fence *old, **ptr;633633+634634+ for (i = 0; i <= ring->fence_drv.num_fences_mask; i++) {635635+ ptr = &ring->fence_drv.fences[i];636636+ old = rcu_dereference_protected(*ptr, 1);637637+ if (old && old->ops == &amdgpu_job_fence_ops)638638+ RCU_INIT_POINTER(*ptr, NULL);639639+ }640640+}641641+642642+/**625643 * amdgpu_fence_driver_force_completion - 
force signal latest fence of ring626644 *627645 * @ring: fence of the ring to signal···663643664644static const char *amdgpu_fence_get_timeline_name(struct dma_fence *f)665645{666666- struct amdgpu_ring *ring;646646+ return (const char *)to_amdgpu_fence(f)->ring->name;647647+}667648668668- if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) {669669- struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);649649+static const char *amdgpu_job_fence_get_timeline_name(struct dma_fence *f)650650+{651651+ struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);670652671671- ring = to_amdgpu_ring(job->base.sched);672672- } else {673673- ring = to_amdgpu_fence(f)->ring;674674- }675675- return (const char *)ring->name;653653+ return (const char *)to_amdgpu_ring(job->base.sched)->name;676654}677655678656/**···683665 */684666static bool amdgpu_fence_enable_signaling(struct dma_fence *f)685667{686686- struct amdgpu_ring *ring;668668+ if (!timer_pending(&to_amdgpu_fence(f)->ring->fence_drv.fallback_timer))669669+ amdgpu_fence_schedule_fallback(to_amdgpu_fence(f)->ring);687670688688- if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) {689689- struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);671671+ return true;672672+}690673691691- ring = to_amdgpu_ring(job->base.sched);692692- } else {693693- ring = to_amdgpu_fence(f)->ring;694694- }674674+/**675675+ * amdgpu_job_fence_enable_signaling - enable signalling on job fence676676+ * @f: fence677677+ *678678+ * This is similar to amdgpu_fence_enable_signaling above; it679679+ * only handles the job-embedded fence.680680+ */681681+static bool amdgpu_job_fence_enable_signaling(struct dma_fence *f)682682+{683683+ struct amdgpu_job *job = container_of(f, struct amdgpu_job, hw_fence);695684696696- if (!timer_pending(&ring->fence_drv.fallback_timer))697697- amdgpu_fence_schedule_fallback(ring);685685+ if 
(!timer_pending(&to_amdgpu_ring(job->base.sched)->fence_drv.fallback_timer))686686+ amdgpu_fence_schedule_fallback(to_amdgpu_ring(job->base.sched));698687699688 return true;700689}···717692{718693 struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);719694720720- if (test_bit(AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT, &f->flags)) {721721- /* free job if fence has a parent job */722722- struct amdgpu_job *job;723723-724724- job = container_of(f, struct amdgpu_job, hw_fence);725725- kfree(job);726726- } else {727695 /* free fence_slab if it's separated fence*/728728- struct amdgpu_fence *fence;696696+ kmem_cache_free(amdgpu_fence_slab, to_amdgpu_fence(f));697697+}729698730730- fence = to_amdgpu_fence(f);731731- kmem_cache_free(amdgpu_fence_slab, fence);732732- }699699+/**700700+ * amdgpu_job_fence_free - free up the job with embedded fence701701+ *702702+ * @rcu: RCU callback head703703+ *704704+ * Free up the job with embedded fence after the RCU grace period.705705+ */706706+static void amdgpu_job_fence_free(struct rcu_head *rcu)707707+{708708+ struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);709709+710710+ /* free job if fence has a parent job */711711+ kfree(container_of(f, struct amdgpu_job, hw_fence));733712}734713735714/**···749720 call_rcu(&f->rcu, amdgpu_fence_free);750721}751722723723+/**724724+ * amdgpu_job_fence_release - callback that job embedded fence can be freed725725+ *726726+ * @f: fence727727+ *728728+ * This is similar to amdgpu_fence_release above; it729729+ * only handles the job-embedded fence.730730+ */731731+static void amdgpu_job_fence_release(struct dma_fence *f)732732+{733733+ call_rcu(&f->rcu, amdgpu_job_fence_free);734734+}735735+752736static const struct dma_fence_ops amdgpu_fence_ops = {753737 .get_driver_name = amdgpu_fence_get_driver_name,754738 .get_timeline_name = amdgpu_fence_get_timeline_name,···769727 .release = amdgpu_fence_release,770728};771729730730+static const struct dma_fence_ops 
amdgpu_job_fence_ops = {731731+ .get_driver_name = amdgpu_fence_get_driver_name,732732+ .get_timeline_name = amdgpu_job_fence_get_timeline_name,733733+ .enable_signaling = amdgpu_job_fence_enable_signaling,734734+ .release = amdgpu_job_fence_release,735735+};772736773737/*774738 * Fence debugfs
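The amdgpu_fence.c rework above drops the AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT flag and instead gives job-embedded fences their own dma_fence_ops table, so a fence's kind is identified by its ops pointer. A minimal userspace sketch of that dispatch pattern (hypothetical struct and function names, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>

struct fence_ops { const char *name; };
struct fence { const struct fence_ops *ops; };

static const struct fence_ops plain_fence_ops = { "plain" };
static const struct fence_ops job_fence_ops = { "job" };

struct job {
	int run_counter;
	struct fence hw_fence; /* fence embedded in the job, as in amdgpu_job */
};

/* Mirrors the reworked to_amdgpu_fence(): a fence is recognized by its ops
 * pointer alone, so no flag bit is needed to tell the two kinds apart. */
static struct fence *to_fence(struct fence *f)
{
	if (f->ops == &plain_fence_ops || f->ops == &job_fence_ops)
		return f;
	return NULL;
}

/* Recover the enclosing job from its embedded fence (container_of pattern). */
static struct job *fence_to_job(struct fence *f)
{
	return (struct job *)((char *)f - offsetof(struct job, hw_fence));
}
```

The ops-pointer test replaces a per-fence flag with a comparison against static data, which also lets each kind get its own release and timeline-name callbacks.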
+1-3
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
···5353#define AMDGPU_FENCE_FLAG_INT (1 << 1)5454#define AMDGPU_FENCE_FLAG_TC_WB_ONLY (1 << 2)55555656-/* fence flag bit to indicate the face is embedded in job*/5757-#define AMDGPU_FENCE_FLAG_EMBED_IN_JOB_BIT (DMA_FENCE_FLAG_USER_BITS + 1)5858-5956#define to_amdgpu_ring(s) container_of((s), struct amdgpu_ring, sched)60576158#define AMDGPU_IB_POOL_SIZE (1024 * 1024)···111114 struct dma_fence **fences;112115};113116117117+void amdgpu_fence_driver_clear_job_fences(struct amdgpu_ring *ring);114118void amdgpu_fence_driver_force_completion(struct amdgpu_ring *ring);115119116120int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
+7
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
···246246{247247 int r;248248 struct amdgpu_device *adev = (struct amdgpu_device *)handle;249249+ bool idle_work_unexecuted;250250+251251+ idle_work_unexecuted = cancel_delayed_work_sync(&adev->vcn.idle_work);252252+ if (idle_work_unexecuted) {253253+ if (adev->pm.dpm_enabled)254254+ amdgpu_dpm_enable_uvd(adev, false);255255+ }249256250257 r = vcn_v1_0_hw_fini(adev);251258 if (r)
···120120121121int smu_v12_0_set_gfx_cgpg(struct smu_context *smu, bool enable)122122{123123- if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG))123123+ /* So far SMU12 is only implemented for the Renoir series, so no APU check is needed here. */124124+ if (!(smu->adev->pg_flags & AMD_PG_SUPPORT_GFX_PG) || smu->adev->in_s0ix)124125 return 0;125126126127 return smu_cmn_send_smc_msg_with_param(smu,
···14441444 struct list_head node; /* all dips are on a list */14451445};1446144614471447+/* only for RNR timeout issue of HIP08 */14481448+#define HNS_ROCE_CLOCK_ADJUST 100014491449+#define HNS_ROCE_MAX_CQ_PERIOD 6514501450+#define HNS_ROCE_MAX_EQ_PERIOD 6514511451+#define HNS_ROCE_RNR_TIMER_10NS 114521452+#define HNS_ROCE_1US_CFG 99914531453+#define HNS_ROCE_1NS_CFG 014541454+14471455#define HNS_ROCE_AEQ_DEFAULT_BURST_NUM 0x014481456#define HNS_ROCE_AEQ_DEFAULT_INTERVAL 0x014491457#define HNS_ROCE_CEQ_DEFAULT_BURST_NUM 0x0
···1919#include <linux/module.h>2020#include <linux/input.h>2121#include <linux/serio.h>2222+#include <asm/unaligned.h>22232324#define DRIVER_DESC "SpaceTec SpaceBall 2003/3003/4000 FLX driver"2425···76757776 case 'D': /* Ball data */7877 if (spaceball->idx != 15) return;7979- for (i = 0; i < 6; i++)7878+ /*7979+ * Skip first three bytes; read six axes worth of data.8080+ * Axis values are signed 16-bit big-endian.8181+ */8282+ data += 3;8383+ for (i = 0; i < ARRAY_SIZE(spaceball_axes); i++) {8084 input_report_abs(dev, spaceball_axes[i],8181- (__s16)((data[2 * i + 3] << 8) | data[2 * i + 2]));8585+ (__s16)get_unaligned_be16(&data[i * 2]));8686+ }8287 break;83888489 case 'K': /* Button data */
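The SpaceBall hunk above replaces open-coded shift-and-or byte assembly with get_unaligned_be16(). A userspace sketch of what that helper provides (hypothetical function name): each axis is a signed 16-bit big-endian value that may sit at an odd offset in the packet, so it is assembled byte by byte rather than read through a potentially misaligned pointer.

```c
#include <assert.h>
#include <stdint.h>

/* Assemble a signed 16-bit big-endian axis value from two bytes at any
 * alignment; this is the portable equivalent of get_unaligned_be16(). */
static int16_t read_be16_axis(const uint8_t *p)
{
	return (int16_t)(((uint16_t)p[0] << 8) | p[1]);
}
```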
···916916 set_bit(BTN_TOOL_TRIPLETAP, input_dev->keybit);917917 set_bit(BTN_LEFT, input_dev->keybit);918918919919+ INIT_WORK(&dev->work, atp_reinit);920920+919921 error = input_register_device(dev->input);920922 if (error)921923 goto err_free_buffer;922924923925 /* save our data pointer in this interface device */924926 usb_set_intfdata(iface, dev);925925-926926- INIT_WORK(&dev->work, atp_reinit);927927928928 return 0;929929
+7-1
drivers/input/mouse/elantech.c
···15881588 */15891589static int elantech_change_report_id(struct psmouse *psmouse)15901590{15911591- unsigned char param[2] = { 0x10, 0x03 };15911591+ /*15921592+ * NOTE: the code is expecting to receive param[] as an array of 315931593+ * items (see __ps2_command()), even if in this case only 2 are15941594+ * actually needed. Make sure the array size is 3 to avoid potential15951595+ * stack out-of-bound accesses.15961596+ */15971597+ unsigned char param[3] = { 0x10, 0x03 };1592159815931599 if (elantech_write_reg_params(psmouse, 0x7, param) ||15941600 elantech_read_reg_params(psmouse, 0x7, param) ||
+21
drivers/input/serio/i8042-x86ia64io.h
···995995 { }996996};997997998998+static const struct dmi_system_id i8042_dmi_probe_defer_table[] __initconst = {999999+ {10001000+ /* ASUS ZenBook UX425UA */10011001+ .matches = {10021002+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),10031003+ DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX425UA"),10041004+ },10051005+ },10061006+ {10071007+ /* ASUS ZenBook UM325UA */10081008+ .matches = {10091009+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),10101010+ DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),10111011+ },10121012+ },10131013+ { }10141014+};10151015+9981016#endif /* CONFIG_X86 */999101710001018#ifdef CONFIG_PNP···1332131413331315 if (dmi_check_system(i8042_dmi_kbdreset_table))13341316 i8042_kbdreset = true;13171317+13181318+ if (dmi_check_system(i8042_dmi_probe_defer_table))13191319+ i8042_probe_defer = true;1335132013361321 /*13371322 * A20 was already enabled during early kernel init. But some buggy
+35-19
drivers/input/serio/i8042.c
···4545module_param_named(unlock, i8042_unlock, bool, 0);4646MODULE_PARM_DESC(unlock, "Ignore keyboard lock.");47474848+static bool i8042_probe_defer;4949+module_param_named(probe_defer, i8042_probe_defer, bool, 0);5050+MODULE_PARM_DESC(probe_defer, "Allow deferred probing.");5151+4852enum i8042_controller_reset_mode {4953 I8042_RESET_NEVER,5054 I8042_RESET_ALWAYS,···715711 * LCS/Telegraphics.716712 */717713718718-static int __init i8042_check_mux(void)714714+static int i8042_check_mux(void)719715{720716 unsigned char mux_version;721717···744740/*745741 * The following is used to test AUX IRQ delivery.746742 */747747-static struct completion i8042_aux_irq_delivered __initdata;748748-static bool i8042_irq_being_tested __initdata;743743+static struct completion i8042_aux_irq_delivered;744744+static bool i8042_irq_being_tested;749745750750-static irqreturn_t __init i8042_aux_test_irq(int irq, void *dev_id)746746+static irqreturn_t i8042_aux_test_irq(int irq, void *dev_id)751747{752748 unsigned long flags;753749 unsigned char str, data;···774770 * verifies success by reading CTR. Used when testing for presence of AUX775771 * port.776772 */777777-static int __init i8042_toggle_aux(bool on)773773+static int i8042_toggle_aux(bool on)778774{779775 unsigned char param;780776 int i;···802798 * the presence of an AUX interface.803799 */804800805805-static int __init i8042_check_aux(void)801801+static int i8042_check_aux(void)806802{807803 int retval = -1;808804 bool irq_registered = false;···1009100510101006 if (i8042_command(&ctr[n++ % 2], I8042_CMD_CTL_RCTR)) {10111007 pr_err("Can't read CTR while initializing i8042\n");10121012- return -EIO;10081008+ return i8042_probe_defer ? 
-EPROBE_DEFER : -EIO;10131009 }1014101010151011 } while (n < 2 || ctr[0] != ctr[1]);···13241320 i8042_controller_reset(false);13251321}1326132213271327-static int __init i8042_create_kbd_port(void)13231323+static int i8042_create_kbd_port(void)13281324{13291325 struct serio *serio;13301326 struct i8042_port *port = &i8042_ports[I8042_KBD_PORT_NO];···13531349 return 0;13541350}1355135113561356-static int __init i8042_create_aux_port(int idx)13521352+static int i8042_create_aux_port(int idx)13571353{13581354 struct serio *serio;13591355 int port_no = idx < 0 ? I8042_AUX_PORT_NO : I8042_MUX_PORT_NO + idx;···13901386 return 0;13911387}1392138813931393-static void __init i8042_free_kbd_port(void)13891389+static void i8042_free_kbd_port(void)13941390{13951391 kfree(i8042_ports[I8042_KBD_PORT_NO].serio);13961392 i8042_ports[I8042_KBD_PORT_NO].serio = NULL;13971393}1398139413991399-static void __init i8042_free_aux_ports(void)13951395+static void i8042_free_aux_ports(void)14001396{14011397 int i;14021398···14061402 }14071403}1408140414091409-static void __init i8042_register_ports(void)14051405+static void i8042_register_ports(void)14101406{14111407 int i;14121408···14471443 i8042_aux_irq_registered = i8042_kbd_irq_registered = false;14481444}1449144514501450-static int __init i8042_setup_aux(void)14461446+static int i8042_setup_aux(void)14511447{14521448 int (*aux_enable)(void);14531449 int error;···14891485 return error;14901486}1491148714921492-static int __init i8042_setup_kbd(void)14881488+static int i8042_setup_kbd(void)14931489{14941490 int error;14951491···15391535 return 0;15401536}1541153715421542-static int __init i8042_probe(struct platform_device *dev)15381538+static int i8042_probe(struct platform_device *dev)15431539{15441540 int error;15451541···16041600 .pm = &i8042_pm_ops,16051601#endif16061602 },16031603+ .probe = i8042_probe,16071604 .remove = i8042_remove,16081605 .shutdown = i8042_shutdown,16091606};···1615161016161611static int __init 
i8042_init(void)16171612{16181618- struct platform_device *pdev;16191613 int err;1620161416211615 dbg_init();···16301626 /* Set this before creating the dev to allow i8042_command to work right away */16311627 i8042_present = true;1632162816331633- pdev = platform_create_bundle(&i8042_driver, i8042_probe, NULL, 0, NULL, 0);16341634- if (IS_ERR(pdev)) {16351635- err = PTR_ERR(pdev);16291629+ err = platform_driver_register(&i8042_driver);16301630+ if (err)16361631 goto err_platform_exit;16321632+16331633+ i8042_platform_device = platform_device_alloc("i8042", -1);16341634+ if (!i8042_platform_device) {16351635+ err = -ENOMEM;16361636+ goto err_unregister_driver;16371637 }16381638+16391639+ err = platform_device_add(i8042_platform_device);16401640+ if (err)16411641+ goto err_free_device;1638164216391643 bus_register_notifier(&serio_bus, &i8042_kbd_bind_notifier_block);16401644 panic_blink = i8042_panic_blink;1641164516421646 return 0;1643164716481648+err_free_device:16491649+ platform_device_put(i8042_platform_device);16501650+err_unregister_driver:16511651+ platform_driver_unregister(&i8042_driver);16441652 err_platform_exit:16451653 i8042_platform_exit();16461654 return err;
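The i8042_init() rewrite above splits platform_create_bundle() into an explicit platform_driver_register() plus device alloc/add, so each failure path can unwind the earlier steps in reverse order. A userspace sketch of that unwind structure (the register/add functions are stand-ins, not the kernel API):

```c
#include <assert.h>
#include <stdbool.h>

static bool driver_registered;
static bool device_added;

static int register_driver(void) { driver_registered = true; return 0; }
static void unregister_driver(void) { driver_registered = false; }
static int add_device(bool fail) { if (fail) return -1; device_added = true; return 0; }
static void put_device_ref(void) { /* drop the allocated device's reference */ }

/* Mirrors the shape of the rewritten i8042_init(): on failure, undo the
 * completed steps in reverse (err_free_device, then err_unregister_driver). */
static int init(bool fail_add)
{
	int err = register_driver();
	if (err)
		return err;

	err = add_device(fail_add);
	if (err) {
		put_device_ref();	/* err_free_device */
		unregister_driver();	/* err_unregister_driver */
		return err;
	}
	return 0;
}
```

Registering the driver before the device is what allows probing to be retried (and deferred) through the normal driver core paths instead of the one-shot bundle helper.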
···102102 { .id = "911", .data = &gt911_chip_data },103103 { .id = "9271", .data = &gt911_chip_data },104104 { .id = "9110", .data = &gt911_chip_data },105105+ { .id = "9111", .data = &gt911_chip_data },105106 { .id = "927", .data = &gt911_chip_data },106107 { .id = "928", .data = &gt911_chip_data },107108···651650652651 usleep_range(6000, 10000); /* T4: > 5ms */653652654654- /* end select I2C slave addr */655655- error = gpiod_direction_input(ts->gpiod_rst);656656- if (error)657657- goto error;653653+ /*654654+ * Put the reset pin back into input / high-impedance mode to save655655+ * power. Only do this in the non-ACPI case since some ACPI boards656656+ * don't have a pull-up, so there the reset pin must stay active-high.657657+ */658658+ if (ts->irq_pin_access_method == IRQ_PIN_ACCESS_GPIO) {659659+ error = gpiod_direction_input(ts->gpiod_rst);660660+ if (error)661661+ goto error;662662+ }658663659664 return 0;660665···794787 return -EINVAL;795788 }796789790790+ /*791791+ * Normally we put the reset pin in input / high-impedance mode to save792792+ * power. But some x86/ACPI boards don't have a pull-up, so for the ACPI793793+ * case, leave the pin as is. 
This results in the pin not being touched794794+ * at all on x86/ACPI boards, except when needed for error recovery.795795+ */796796+ ts->gpiod_rst_flags = GPIOD_ASIS;797797+797798 return devm_acpi_dev_add_driver_gpios(dev, gpio_mapping);798799}799800#else···826811 if (!ts->client)827812 return -EINVAL;828813 dev = &ts->client->dev;814814+815815+ /*816816+ * By default we request the reset pin as input, leaving it in817817+ * high-impedance when not resetting the controller to save power.818818+ */819819+ ts->gpiod_rst_flags = GPIOD_IN;829820830821 ts->avdd28 = devm_regulator_get(dev, "AVDD28");831822 if (IS_ERR(ts->avdd28)) {···870849 ts->gpiod_int = gpiod;871850872851 /* Get the reset line GPIO pin number */873873- gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, GPIOD_IN);852852+ gpiod = devm_gpiod_get_optional(dev, GOODIX_GPIO_RST_NAME, ts->gpiod_rst_flags);874853 if (IS_ERR(gpiod)) {875854 error = PTR_ERR(gpiod);876855 if (error != -EPROBE_DEFER)
+1
drivers/input/touchscreen/goodix.h
···8787 struct gpio_desc *gpiod_rst;8888 int gpio_count;8989 int gpio_int_idx;9090+ enum gpiod_flags gpiod_rst_flags;9091 char id[GOODIX_ID_MAX_LEN + 1];9192 char cfg_name[64];9293 u16 version;
+1-1
drivers/input/touchscreen/goodix_fwupload.c
···207207208208 error = goodix_reset_no_int_sync(ts);209209 if (error)210210- return error;210210+ goto release;211211212212 error = goodix_enter_upload_mode(ts->client);213213 if (error)
···135135 struct mmc_command *cmd)136136{137137 struct meson_mx_sdhc_host *host = mmc_priv(mmc);138138+ bool manual_stop = false;138139 u32 ictl, send;139140 int pack_len;140141···173172 else174173 /* software flush: */175174 ictl |= MESON_SDHC_ICTL_DATA_XFER_OK;175175+176176+ /*177177+ * Mimic the logic from the vendor driver where (only)178178+ * SD_IO_RW_EXTENDED commands with more than one block set the179179+ * MESON_SDHC_MISC_MANUAL_STOP bit. This fixes the firmware180180+ * download in the brcmfmac driver for a BCM43362/1 card.181181+ * Without this sdio_memcpy_toio() (with a size of 219557182182+ * bytes) times out if MESON_SDHC_MISC_MANUAL_STOP is not set.183183+ */184184+ manual_stop = cmd->data->blocks > 1 &&185185+ cmd->opcode == SD_IO_RW_EXTENDED;176186 } else {177187 pack_len = 0;178188179189 ictl |= MESON_SDHC_ICTL_RESP_OK;180190 }191191+192192+ regmap_update_bits(host->regmap, MESON_SDHC_MISC,193193+ MESON_SDHC_MISC_MANUAL_STOP,194194+ manual_stop ? MESON_SDHC_MISC_MANUAL_STOP : 0);181195182196 if (cmd->opcode == MMC_STOP_TRANSMISSION)183197 send |= MESON_SDHC_SEND_DATA_STOP;
···356356 }357357}358358359359-static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc,360360- struct mmc_ios *ios)361361-{362362- struct sdhci_host *host = mmc_priv(mmc);363363- u32 val;364364-365365- val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);366366-367367- if (ios->enhanced_strobe)368368- val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;369369- else370370- val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;371371-372372- sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);373373-374374-}375375-376359static void tegra_sdhci_reset(struct sdhci_host *host, u8 mask)377360{378361 struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);···774791 tegra_sdhci_pad_autocalib(host);775792 tegra_host->pad_calib_required = false;776793 }794794+}795795+796796+static void tegra_sdhci_hs400_enhanced_strobe(struct mmc_host *mmc,797797+ struct mmc_ios *ios)798798+{799799+ struct sdhci_host *host = mmc_priv(mmc);800800+ u32 val;801801+802802+ val = sdhci_readl(host, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);803803+804804+ if (ios->enhanced_strobe) {805805+ val |= SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;806806+ /*807807+ * When CMD13 is sent from mmc_select_hs400es() after808808+ * switching to HS400ES mode, the bus is operating at809809+ * either MMC_HIGH_26_MAX_DTR or MMC_HIGH_52_MAX_DTR.810810+ * To meet Tegra SDHCI requirement at HS400ES mode, force SDHCI811811+ * interface clock to MMC_HS200_MAX_DTR (200 MHz) so that host812812+ * controller CAR clock and the interface clock are rate matched.813813+ */814814+ tegra_sdhci_set_clock(host, MMC_HS200_MAX_DTR);815815+ } else {816816+ val &= ~SDHCI_TEGRA_SYS_SW_CTRL_ENHANCED_STROBE;817817+ }818818+819819+ sdhci_writel(host, val, SDHCI_TEGRA_VENDOR_SYS_SW_CTRL);777820}778821779822static unsigned int tegra_sdhci_get_max_clock(struct sdhci_host *host)
+1-1
drivers/net/bonding/bond_options.c
···15261526 mac = (u8 *)&newval->value;15271527 }1528152815291529- if (!is_valid_ether_addr(mac))15291529+ if (is_multicast_ether_addr(mac))15301530 goto err;1531153115321532 netdev_dbg(bond->dev, "Setting ad_actor_system to %pM\n", mac);
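The bonding change above relaxes the ad_actor_system check from is_valid_ether_addr() (which rejects both multicast and all-zeroes addresses) to is_multicast_ether_addr() (which rejects only addresses with the I/G bit set in the first octet), so an all-zeroes value can restore the default. A userspace sketch of the two predicates (hypothetical helper names):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Multicast MACs have the I/G bit (bit 0 of the first octet) set. */
static bool mac_is_multicast(const uint8_t mac[6])
{
	return (mac[0] & 0x01) != 0;
}

/* The relaxed check: reject only multicast, so all-zeroes is accepted
 * and can be used to reset ad_actor_system to its default. */
static bool ad_actor_system_acceptable(const uint8_t mac[6])
{
	return !mac_is_multicast(mac);
}
```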
···738738 * is not set to GqiRda, choose the queue format in a priority order:739739 * DqoRda, GqiRda, GqiQpl. Use GqiQpl as default.740740 */741741- if (priv->queue_format == GVE_GQI_RDA_FORMAT) {742742- dev_info(&priv->pdev->dev,743743- "Driver is running with GQI RDA queue format.\n");744744- } else if (dev_op_dqo_rda) {741741+ if (dev_op_dqo_rda) {745742 priv->queue_format = GVE_DQO_RDA_FORMAT;746743 dev_info(&priv->pdev->dev,747744 "Driver is running with DQO RDA queue format.\n");···750753 "Driver is running with GQI RDA queue format.\n");751754 supported_features_mask =752755 be32_to_cpu(dev_op_gqi_rda->supported_features_mask);756756+ } else if (priv->queue_format == GVE_GQI_RDA_FORMAT) {757757+ dev_info(&priv->pdev->dev,758758+ "Driver is running with GQI RDA queue format.\n");753759 } else {754760 priv->queue_format = GVE_GQI_QPL_FORMAT;755761 if (dev_op_gqi_qpl)
+17
drivers/net/ethernet/intel/ice/ice_base.c
···66#include "ice_lib.h"77#include "ice_dcb_lib.h"8899+static bool ice_alloc_rx_buf_zc(struct ice_rx_ring *rx_ring)1010+{1111+ rx_ring->xdp_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->xdp_buf), GFP_KERNEL);1212+ return !!rx_ring->xdp_buf;1313+}1414+1515+static bool ice_alloc_rx_buf(struct ice_rx_ring *rx_ring)1616+{1717+ rx_ring->rx_buf = kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL);1818+ return !!rx_ring->rx_buf;1919+}2020+921/**1022 * __ice_vsi_get_qs_contig - Assign a contiguous chunk of queues to VSI1123 * @qs_cfg: gathered variables needed for PF->VSI queues assignment···504492 xdp_rxq_info_reg(&ring->xdp_rxq, ring->netdev,505493 ring->q_index, ring->q_vector->napi.napi_id);506494495495+ kfree(ring->rx_buf);507496 ring->xsk_pool = ice_xsk_pool(ring);508497 if (ring->xsk_pool) {498498+ if (!ice_alloc_rx_buf_zc(ring))499499+ return -ENOMEM;509500 xdp_rxq_info_unreg_mem_model(&ring->xdp_rxq);510501511502 ring->rx_buf_len =···523508 dev_info(dev, "Registered XDP mem model MEM_TYPE_XSK_BUFF_POOL on Rx ring %d\n",524509 ring->q_index);525510 } else {511511+ if (!ice_alloc_rx_buf(ring))512512+ return -ENOMEM;526513 if (!xdp_rxq_info_is_reg(&ring->xdp_rxq))527514 /* coverity[check_return] */528515 xdp_rxq_info_reg(&ring->xdp_rxq,
+13-6
drivers/net/ethernet/intel/ice/ice_txrx.c
···419419 }420420421421rx_skip_free:422422- memset(rx_ring->rx_buf, 0, sizeof(*rx_ring->rx_buf) * rx_ring->count);422422+ if (rx_ring->xsk_pool)423423+ memset(rx_ring->xdp_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->xdp_buf)));424424+ else425425+ memset(rx_ring->rx_buf, 0, array_size(rx_ring->count, sizeof(*rx_ring->rx_buf)));423426424427 /* Zero out the descriptor ring */425428 size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),···449446 if (xdp_rxq_info_is_reg(&rx_ring->xdp_rxq))450447 xdp_rxq_info_unreg(&rx_ring->xdp_rxq);451448 rx_ring->xdp_prog = NULL;452452- devm_kfree(rx_ring->dev, rx_ring->rx_buf);453453- rx_ring->rx_buf = NULL;449449+ if (rx_ring->xsk_pool) {450450+ kfree(rx_ring->xdp_buf);451451+ rx_ring->xdp_buf = NULL;452452+ } else {453453+ kfree(rx_ring->rx_buf);454454+ rx_ring->rx_buf = NULL;455455+ }454456455457 if (rx_ring->desc) {456458 size = ALIGN(rx_ring->count * sizeof(union ice_32byte_rx_desc),···483475 /* warn if we are about to overwrite the pointer */484476 WARN_ON(rx_ring->rx_buf);485477 rx_ring->rx_buf =486486- devm_kcalloc(dev, sizeof(*rx_ring->rx_buf), rx_ring->count,487487- GFP_KERNEL);478478+ kcalloc(rx_ring->count, sizeof(*rx_ring->rx_buf), GFP_KERNEL);488479 if (!rx_ring->rx_buf)489480 return -ENOMEM;490481···512505 return 0;513506514507err:515515- devm_kfree(dev, rx_ring->rx_buf);508508+ kfree(rx_ring->rx_buf);516509 rx_ring->rx_buf = NULL;517510 return -ENOMEM;518511}
-1
drivers/net/ethernet/intel/ice/ice_txrx.h
···2424#define ICE_MAX_DATA_PER_TXD_ALIGNED \2525 (~(ICE_MAX_READ_REQ_SIZE - 1) & ICE_MAX_DATA_PER_TXD)26262727-#define ICE_RX_BUF_WRITE 16 /* Must be power of 2 */2827#define ICE_MAX_TXQ_PER_TXQG 12829283029/* Attempt to maximize the headroom available for incoming frames. We use a 2K
+32-34
drivers/net/ethernet/intel/ice/ice_xsk.c
···1212#include "ice_txrx_lib.h"1313#include "ice_lib.h"14141515+static struct xdp_buff **ice_xdp_buf(struct ice_rx_ring *rx_ring, u32 idx)1616+{1717+ return &rx_ring->xdp_buf[idx];1818+}1919+1520/**1621 * ice_qp_reset_stats - Resets all stats for rings of given index1722 * @vsi: VSI that contains rings of interest···377372 dma_addr_t dma;378373379374 rx_desc = ICE_RX_DESC(rx_ring, ntu);380380- xdp = &rx_ring->xdp_buf[ntu];375375+ xdp = ice_xdp_buf(rx_ring, ntu);381376382377 nb_buffs = min_t(u16, count, rx_ring->count - ntu);383378 nb_buffs = xsk_buff_alloc_batch(rx_ring->xsk_pool, xdp, nb_buffs);···395390 }396391397392 ntu += nb_buffs;398398- if (ntu == rx_ring->count) {399399- rx_desc = ICE_RX_DESC(rx_ring, 0);400400- xdp = rx_ring->xdp_buf;393393+ if (ntu == rx_ring->count)401394 ntu = 0;402402- }403395404404- /* clear the status bits for the next_to_use descriptor */405405- rx_desc->wb.status_error0 = 0;406396 ice_release_rx_desc(rx_ring, ntu);407397408398 return count == nb_buffs;···419419/**420420 * ice_construct_skb_zc - Create an sk_buff from zero-copy buffer421421 * @rx_ring: Rx ring422422- * @xdp_arr: Pointer to the SW ring of xdp_buff pointers422422+ * @xdp: Pointer to XDP buffer423423 *424424 * This function allocates a new skb from a zero-copy Rx buffer.425425 *426426 * Returns the skb on success, NULL on failure.427427 */428428static struct sk_buff *429429-ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff **xdp_arr)429429+ice_construct_skb_zc(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)430430{431431- struct xdp_buff *xdp = *xdp_arr;431431+ unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;432432 unsigned int metasize = xdp->data - xdp->data_meta;433433 unsigned int datasize = xdp->data_end - xdp->data;434434- unsigned int datasize_hard = xdp->data_end - xdp->data_hard_start;435434 struct sk_buff *skb;436435437436 skb = __napi_alloc_skb(&rx_ring->q_vector->napi, datasize_hard,···444445 skb_metadata_set(skb, 
metasize);445446446447 xsk_buff_free(xdp);447447- *xdp_arr = NULL;448448 return skb;449449}450450···505507int ice_clean_rx_irq_zc(struct ice_rx_ring *rx_ring, int budget)506508{507509 unsigned int total_rx_bytes = 0, total_rx_packets = 0;508508- u16 cleaned_count = ICE_DESC_UNUSED(rx_ring);509510 struct ice_tx_ring *xdp_ring;510511 unsigned int xdp_xmit = 0;511512 struct bpf_prog *xdp_prog;···519522 while (likely(total_rx_packets < (unsigned int)budget)) {520523 union ice_32b_rx_flex_desc *rx_desc;521524 unsigned int size, xdp_res = 0;522522- struct xdp_buff **xdp;525525+ struct xdp_buff *xdp;523526 struct sk_buff *skb;524527 u16 stat_err_bits;525528 u16 vlan_tag = 0;···537540 */538541 dma_rmb();539542543543+ xdp = *ice_xdp_buf(rx_ring, rx_ring->next_to_clean);544544+540545 size = le16_to_cpu(rx_desc->wb.pkt_len) &541546 ICE_RX_FLX_DESC_PKT_LEN_M;542542- if (!size)543543- break;547547+ if (!size) {548548+ xdp->data = NULL;549549+ xdp->data_end = NULL;550550+ xdp->data_hard_start = NULL;551551+ xdp->data_meta = NULL;552552+ goto construct_skb;553553+ }544554545545- xdp = &rx_ring->xdp_buf[rx_ring->next_to_clean];546546- xsk_buff_set_size(*xdp, size);547547- xsk_buff_dma_sync_for_cpu(*xdp, rx_ring->xsk_pool);555555+ xsk_buff_set_size(xdp, size);556556+ xsk_buff_dma_sync_for_cpu(xdp, rx_ring->xsk_pool);548557549549- xdp_res = ice_run_xdp_zc(rx_ring, *xdp, xdp_prog, xdp_ring);558558+ xdp_res = ice_run_xdp_zc(rx_ring, xdp, xdp_prog, xdp_ring);550559 if (xdp_res) {551560 if (xdp_res & (ICE_XDP_TX | ICE_XDP_REDIR))552561 xdp_xmit |= xdp_res;553562 else554554- xsk_buff_free(*xdp);563563+ xsk_buff_free(xdp);555564556556- *xdp = NULL;557565 total_rx_bytes += size;558566 total_rx_packets++;559559- cleaned_count++;560567561568 ice_bump_ntc(rx_ring);562569 continue;563570 }564564-571571+construct_skb:565572 /* XDP_PASS path */566573 skb = ice_construct_skb_zc(rx_ring, xdp);567574 if (!skb) {···573572 break;574573 }575574576576- cleaned_count++;577575 
ice_bump_ntc(rx_ring);578576579577 if (eth_skb_pad(skb)) {···594594 ice_receive_skb(rx_ring, skb, vlan_tag);595595 }596596597597- if (cleaned_count >= ICE_RX_BUF_WRITE)598598- failure = !ice_alloc_rx_bufs_zc(rx_ring, cleaned_count);597597+ failure = !ice_alloc_rx_bufs_zc(rx_ring, ICE_DESC_UNUSED(rx_ring));599598600599 ice_finalize_xdp_rx(xdp_ring, xdp_xmit);601600 ice_update_rx_ring_stats(rx_ring, total_rx_packets, total_rx_bytes);···810811 */811812void ice_xsk_clean_rx_ring(struct ice_rx_ring *rx_ring)812813{813813- u16 i;814814+ u16 count_mask = rx_ring->count - 1;815815+ u16 ntc = rx_ring->next_to_clean;816816+ u16 ntu = rx_ring->next_to_use;814817815815- for (i = 0; i < rx_ring->count; i++) {816816- struct xdp_buff **xdp = &rx_ring->xdp_buf[i];818818+ for ( ; ntc != ntu; ntc = (ntc + 1) & count_mask) {819819+ struct xdp_buff *xdp = *ice_xdp_buf(rx_ring, ntc);817820818818- if (!xdp)819819- continue;820820-821821- *xdp = NULL;821821+ xsk_buff_free(xdp);822822 }823823}824824
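The reworked ice_xsk_clean_rx_ring() above walks only the entries between next_to_clean and next_to_use, wrapping the index with a power-of-two mask instead of iterating the full ring and testing for NULL. A small sketch of that index walk (hypothetical function name):

```c
#include <assert.h>

/* With a power-of-two ring size, masking with (count - 1) wraps the index,
 * so only the in-flight entries between ntc and ntu are visited. */
static unsigned int ring_entries_in_use(unsigned int ntc, unsigned int ntu,
					unsigned int count)
{
	unsigned int count_mask = count - 1; /* count must be a power of two */
	unsigned int n = 0;

	for (; ntc != ntu; ntc = (ntc + 1) & count_mask)
		n++;
	return n;
}
```

Skipping the unused slots also removes the need to NULL out entries on the hot path, which is why the cleanup loop can call xsk_buff_free() unconditionally.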
+13-6
drivers/net/ethernet/intel/igb/igb_main.c
···92549254 return __igb_shutdown(to_pci_dev(dev), NULL, 0);92559255}9256925692579257-static int __maybe_unused igb_resume(struct device *dev)92579257+static int __maybe_unused __igb_resume(struct device *dev, bool rpm)92589258{92599259 struct pci_dev *pdev = to_pci_dev(dev);92609260 struct net_device *netdev = pci_get_drvdata(pdev);···9297929792989298 wr32(E1000_WUS, ~0);9299929993009300- rtnl_lock();93009300+ if (!rpm)93019301+ rtnl_lock();93019302 if (!err && netif_running(netdev))93029303 err = __igb_open(netdev, true);9303930493049305 if (!err)93059306 netif_device_attach(netdev);93069306- rtnl_unlock();93079307+ if (!rpm)93089308+ rtnl_unlock();9307930993089310 return err;93119311+}93129312+93139313+static int __maybe_unused igb_resume(struct device *dev)93149314+{93159315+ return __igb_resume(dev, false);93099316}9310931793119318static int __maybe_unused igb_runtime_idle(struct device *dev)···9333932693349327static int __maybe_unused igb_runtime_resume(struct device *dev)93359328{93369336- return igb_resume(dev);93299329+ return __igb_resume(dev, true);93379330}9338933193399332static void igb_shutdown(struct pci_dev *pdev)···94499442 * @pdev: Pointer to PCI device94509443 *94519444 * Restart the card from scratch, as if from a cold-boot. Implementation94529452- * resembles the first-half of the igb_resume routine.94459445+ * resembles the first-half of the __igb_resume routine.94539446 **/94549447static pci_ers_result_t igb_io_slot_reset(struct pci_dev *pdev)94559448{···94899482 *94909483 * This callback is called when the error recovery driver tells us that94919484 * it's OK to resume normal operation. Implementation resembles the94929492- * second-half of the igb_resume routine.94859485+ * second-half of the __igb_resume routine.94939486 */94949487static void igb_io_resume(struct pci_dev *pdev)94959488{
+6
drivers/net/ethernet/intel/igc/igc_main.c
···
 			mod_timer(&adapter->watchdog_timer, jiffies + 1);
 	}
 
+	if (icr & IGC_ICR_TS)
+		igc_tsync_interrupt(adapter);
+
 	napi_schedule(&q_vector->napi);
 
 	return IRQ_HANDLED;
···
 		if (!test_bit(__IGC_DOWN, &adapter->state))
 			mod_timer(&adapter->watchdog_timer, jiffies + 1);
 	}
+
+	if (icr & IGC_ICR_TS)
+		igc_tsync_interrupt(adapter);
 
 	napi_schedule(&q_vector->napi);
 
+14-1
drivers/net/ethernet/intel/igc/igc_ptp.c
···
 */
static bool igc_is_crosststamp_supported(struct igc_adapter *adapter)
{
-	return IS_ENABLED(CONFIG_X86_TSC) ? pcie_ptm_enabled(adapter->pdev) : false;
+	if (!IS_ENABLED(CONFIG_X86_TSC))
+		return false;
+
+	/* FIXME: it was noticed that enabling support for PCIe PTM in
+	 * some i225-V models could cause lockups when bringing the
+	 * interface up/down. There should be no downsides to
+	 * disabling crosstimestamping support for i225-V, as it
+	 * doesn't have any PTP support. That way we gain some time
+	 * while root causing the issue.
+	 */
+	if (adapter->pdev->device == IGC_DEV_ID_I225_V)
+		return false;
+
+	return pcie_ptm_enabled(adapter->pdev);
}

static struct system_counterval_t igc_device_tstamp_to_system(u64 tstamp)
+25-11
drivers/net/ethernet/lantiq_xrx200.c
···
 	struct xrx200_chan chan_tx;
 	struct xrx200_chan chan_rx;
 
+	u16 rx_buf_size;
+
 	struct net_device *net_dev;
 	struct device *dev;
···
 	xrx200_pmac_w32(priv, val, offset);
}

+static int xrx200_max_frame_len(int mtu)
+{
+	return VLAN_ETH_HLEN + mtu;
+}
+
+static int xrx200_buffer_size(int mtu)
+{
+	return round_up(xrx200_max_frame_len(mtu), 4 * XRX200_DMA_BURST_LEN);
+}
+
/* drop all the packets from the DMA ring */
static void xrx200_flush_dma(struct xrx200_chan *ch)
{
···
 			break;
 
 		desc->ctl = LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) |
-			    (ch->priv->net_dev->mtu + VLAN_ETH_HLEN +
-			     ETH_FCS_LEN);
+			    ch->priv->rx_buf_size;
 		ch->dma.desc++;
 		ch->dma.desc %= LTQ_DESC_NUM;
 	}
···

static int xrx200_alloc_skb(struct xrx200_chan *ch)
{
-	int len = ch->priv->net_dev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
 	struct sk_buff *skb = ch->skb[ch->dma.desc];
+	struct xrx200_priv *priv = ch->priv;
 	dma_addr_t mapping;
 	int ret = 0;
 
-	ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev,
-							  len);
+	ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(priv->net_dev,
+							  priv->rx_buf_size);
 	if (!ch->skb[ch->dma.desc]) {
 		ret = -ENOMEM;
 		goto skip;
 	}
 
-	mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data,
-				 len, DMA_FROM_DEVICE);
-	if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) {
+	mapping = dma_map_single(priv->dev, ch->skb[ch->dma.desc]->data,
+				 priv->rx_buf_size, DMA_FROM_DEVICE);
+	if (unlikely(dma_mapping_error(priv->dev, mapping))) {
 		dev_kfree_skb_any(ch->skb[ch->dma.desc]);
 		ch->skb[ch->dma.desc] = skb;
 		ret = -ENOMEM;
···
 	wmb();
skip:
 	ch->dma.desc_base[ch->dma.desc].ctl =
-		LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | len;
+		LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | priv->rx_buf_size;
 
 	return ret;
}
···
 	skb->protocol = eth_type_trans(skb, net_dev);
 	netif_receive_skb(skb);
 	net_dev->stats.rx_packets++;
-	net_dev->stats.rx_bytes += len - ETH_FCS_LEN;
+	net_dev->stats.rx_bytes += len;
 
 	return 0;
}
···
 	int ret = 0;
 
 	net_dev->mtu = new_mtu;
+	priv->rx_buf_size = xrx200_buffer_size(new_mtu);
 
 	if (new_mtu <= old_mtu)
 		return ret;
···
 		ret = xrx200_alloc_skb(ch_rx);
 		if (ret) {
 			net_dev->mtu = old_mtu;
+			priv->rx_buf_size = xrx200_buffer_size(old_mtu);
 			break;
 		}
 		dev_kfree_skb_any(skb);
···
 	net_dev->netdev_ops = &xrx200_netdev_ops;
 	SET_NETDEV_DEV(net_dev, dev);
 	net_dev->min_mtu = ETH_ZLEN;
-	net_dev->max_mtu = XRX200_DMA_DATA_LEN - VLAN_ETH_HLEN - ETH_FCS_LEN;
+	net_dev->max_mtu = XRX200_DMA_DATA_LEN - xrx200_max_frame_len(0);
+	priv->rx_buf_size = xrx200_buffer_size(ETH_DATA_LEN);
 
 	/* load the memory ranges */
 	priv->pmac_reg = devm_platform_get_and_ioremap_resource(pdev, 0, NULL);
drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
···
 #include "setup.h"
 #include "en/params.h"
 #include "en/txrx.h"
+#include "en/health.h"
 
/* It matches XDP_UMEM_MIN_CHUNK_SIZE, but as this constant is private and may
 * change unexpectedly, and mlx5e has a minimum valid stride size for striding
···

void mlx5e_activate_xsk(struct mlx5e_channel *c)
{
+	/* ICOSQ recovery deactivates RQs. Suspend the recovery to avoid
+	 * activating XSKRQ in the middle of recovery.
+	 */
+	mlx5e_reporter_icosq_suspend_recovery(c);
 	set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
+	mlx5e_reporter_icosq_resume_recovery(c);
+
 	/* TX queue is created active. */
 
 	spin_lock_bh(&c->async_icosq_lock);
···

void mlx5e_deactivate_xsk(struct mlx5e_channel *c)
{
-	mlx5e_deactivate_rq(&c->xskrq);
+	/* ICOSQ recovery may reactivate XSKRQ if clear_bit is called in the
+	 * middle of recovery. Suspend the recovery to avoid it.
+	 */
+	mlx5e_reporter_icosq_suspend_recovery(c);
+	clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
+	mlx5e_reporter_icosq_resume_recovery(c);
+	synchronize_net(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
+
 	/* TX queue is disabled on close. */
}
drivers/net/phy/fixed_phy.c
···
 	/* Check if we have a GPIO associated with this fixed phy */
 	if (!gpiod) {
 		gpiod = fixed_phy_get_gpiod(np);
-		if (IS_ERR(gpiod))
-			return ERR_CAST(gpiod);
+		if (!gpiod)
+			return ERR_PTR(-EINVAL);
 	}
 
 	/* Get the next available PHY address, up to PHY_MAX_ADDR */
+59-56
drivers/net/tun.c
···
 	struct tun_prog __rcu *steering_prog;
 	struct tun_prog __rcu *filter_prog;
 	struct ethtool_link_ksettings link_ksettings;
+	/* init args */
+	struct file *file;
+	struct ifreq *ifr;
};

struct veth {
 	__be16 h_vlan_proto;
 	__be16 h_vlan_TCI;
};
+
+static void tun_flow_init(struct tun_struct *tun);
+static void tun_flow_uninit(struct tun_struct *tun);

static int tun_napi_receive(struct napi_struct *napi, int budget)
{
···

static const struct ethtool_ops tun_ethtool_ops;

+static int tun_net_init(struct net_device *dev)
+{
+	struct tun_struct *tun = netdev_priv(dev);
+	struct ifreq *ifr = tun->ifr;
+	int err;
+
+	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
+	if (!dev->tstats)
+		return -ENOMEM;
+
+	spin_lock_init(&tun->lock);
+
+	err = security_tun_dev_alloc_security(&tun->security);
+	if (err < 0) {
+		free_percpu(dev->tstats);
+		return err;
+	}
+
+	tun_flow_init(tun);
+
+	dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST |
+			   TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |
+			   NETIF_F_HW_VLAN_STAG_TX;
+	dev->features = dev->hw_features | NETIF_F_LLTX;
+	dev->vlan_features = dev->features &
+			     ~(NETIF_F_HW_VLAN_CTAG_TX |
+			       NETIF_F_HW_VLAN_STAG_TX);
+
+	tun->flags = (tun->flags & ~TUN_FEATURES) |
+		     (ifr->ifr_flags & TUN_FEATURES);
+
+	INIT_LIST_HEAD(&tun->disabled);
+	err = tun_attach(tun, tun->file, false, ifr->ifr_flags & IFF_NAPI,
+			 ifr->ifr_flags & IFF_NAPI_FRAGS, false);
+	if (err < 0) {
+		tun_flow_uninit(tun);
+		security_tun_dev_free_security(tun->security);
+		free_percpu(dev->tstats);
+		return err;
+	}
+	return 0;
+}
+
/* Net device detach from fd. */
static void tun_net_uninit(struct net_device *dev)
{
···
}

static const struct net_device_ops tun_netdev_ops = {
+	.ndo_init		= tun_net_init,
 	.ndo_uninit		= tun_net_uninit,
 	.ndo_open		= tun_net_open,
 	.ndo_stop		= tun_net_close,
···
}

static const struct net_device_ops tap_netdev_ops = {
+	.ndo_init		= tun_net_init,
 	.ndo_uninit		= tun_net_uninit,
 	.ndo_open		= tun_net_open,
 	.ndo_stop		= tun_net_close,
···
#define MAX_MTU 65535

/* Initialize net device. */
-static void tun_net_init(struct net_device *dev)
+static void tun_net_initialize(struct net_device *dev)
{
 	struct tun_struct *tun = netdev_priv(dev);
···
 	BUG_ON(!(list_empty(&tun->disabled)));
 
 	free_percpu(dev->tstats);
-	/* We clear tstats so that tun_set_iff() can tell if
-	 * tun_free_netdev() has been called from register_netdevice().
-	 */
-	dev->tstats = NULL;
-
 	tun_flow_uninit(tun);
 	security_tun_dev_free_security(tun->security);
 	__tun_set_ebpf(tun, &tun->steering_prog, NULL);
···
 	tun->rx_batched = 0;
 	RCU_INIT_POINTER(tun->steering_prog, NULL);
 
-	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
-	if (!dev->tstats) {
-		err = -ENOMEM;
-		goto err_free_dev;
-	}
+	tun->ifr = ifr;
+	tun->file = file;
 
-	spin_lock_init(&tun->lock);
-
-	err = security_tun_dev_alloc_security(&tun->security);
-	if (err < 0)
-		goto err_free_stat;
-
-	tun_net_init(dev);
-	tun_flow_init(tun);
-
-	dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST |
-			   TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |
-			   NETIF_F_HW_VLAN_STAG_TX;
-	dev->features = dev->hw_features | NETIF_F_LLTX;
-	dev->vlan_features = dev->features &
-			     ~(NETIF_F_HW_VLAN_CTAG_TX |
-			       NETIF_F_HW_VLAN_STAG_TX);
-
-	tun->flags = (tun->flags & ~TUN_FEATURES) |
-		     (ifr->ifr_flags & TUN_FEATURES);
-
-	INIT_LIST_HEAD(&tun->disabled);
-	err = tun_attach(tun, file, false, ifr->ifr_flags & IFF_NAPI,
-			 ifr->ifr_flags & IFF_NAPI_FRAGS, false);
-	if (err < 0)
-		goto err_free_flow;
+	tun_net_initialize(dev);
 
 	err = register_netdevice(tun->dev);
-	if (err < 0)
-		goto err_detach;
+	if (err < 0) {
+		free_netdev(dev);
+		return err;
+	}
 	/* free_netdev() won't check refcnt, to avoid race
 	 * with dev_put() we need publish tun after registration.
 	 */
···

 	strcpy(ifr->ifr_name, tun->dev->name);
 	return 0;
-
-err_detach:
-	tun_detach_all(dev);
-	/* We are here because register_netdevice() has failed.
-	 * If register_netdevice() already called tun_free_netdev()
-	 * while dealing with the error, dev->stats has been cleared.
-	 */
-	if (!dev->tstats)
-		goto err_free_dev;
-
-err_free_flow:
-	tun_flow_uninit(tun);
-	security_tun_dev_free_security(tun->security);
-err_free_stat:
-	free_percpu(dev->tstats);
-err_free_dev:
-	free_netdev(dev);
-	return err;
}

static void tun_get_iff(struct tun_struct *tun, struct ifreq *ifr)
+5-3
drivers/net/usb/asix_common.c
···
#include "asix.h"

+#define AX_HOST_EN_RETRIES 30
+
int asix_read_cmd(struct usbnet *dev, u8 cmd, u16 value, u16 index,
		  u16 size, void *data, int in_pm)
{
···
 	int i, ret;
 	u8 smsr;
 
-	for (i = 0; i < 30; ++i) {
+	for (i = 0; i < AX_HOST_EN_RETRIES; ++i) {
 		ret = asix_set_sw_mii(dev, in_pm);
 		if (ret == -ENODEV || ret == -ETIMEDOUT)
 			break;
···
 				    0, 0, 1, &smsr, in_pm);
 		if (ret == -ENODEV)
 			break;
-		else if (ret < 0)
+		else if (ret < sizeof(smsr))
 			continue;
 		else if (smsr & AX_HOST_EN)
 			break;
 	}
 
-	return ret;
+	return i >= AX_HOST_EN_RETRIES ? -ETIMEDOUT : ret;
}

static void reset_asix_rx_fixup_info(struct asix_rx_fixup_info *rx)
+2-2
drivers/net/usb/pegasus.c
···
 		goto goon;
 
 	rx_status = buf[count - 2];
-	if (rx_status & 0x1e) {
+	if (rx_status & 0x1c) {
 		netif_dbg(pegasus, rx_err, net,
 			  "RX packet error %x\n", rx_status);
 		net->stats.rx_errors++;
-		if (rx_status & 0x06)	/* long or runt */
+		if (rx_status & 0x04)	/* runt */
 			net->stats.rx_length_errors++;
 		if (rx_status & 0x08)
 			net->stats.rx_crc_errors++;
+39-4
drivers/net/usb/r8152.c
···
#define NETNEXT_VERSION		"12"

/* Information for net */
-#define NET_VERSION		"11"
+#define NET_VERSION		"12"

#define DRIVER_VERSION		"v1." NETNEXT_VERSION "." NET_VERSION
#define DRIVER_AUTHOR "Realtek linux nic maintainers <nic_swsd@realtek.com>"
···
 	ocp_write_word(tp, type, PLA_BP_BA, 0);
}

+static inline void rtl_reset_ocp_base(struct r8152 *tp)
+{
+	tp->ocp_base = -1;
+}
+
static int rtl_phy_patch_request(struct r8152 *tp, bool request, bool wait)
{
 	u16 data, check;
···
 	rtl_patch_key_set(tp, key_addr, 0);
 
 	rtl_phy_patch_request(tp, false, wait);
-
-	ocp_write_word(tp, MCU_TYPE_PLA, PLA_OCP_GPHY_BASE, tp->ocp_base);
 
 	return 0;
}
···
 	u32 len;
 	u8 *data;
 
+	rtl_reset_ocp_base(tp);
+
 	if (sram_read(tp, SRAM_GPHY_FW_VER) >= __le16_to_cpu(phy->version)) {
 		dev_dbg(&tp->intf->dev, "PHY firmware has been the newest\n");
 		return;
···
 		}
 	}
 
-	ocp_write_word(tp, MCU_TYPE_PLA, PLA_OCP_GPHY_BASE, tp->ocp_base);
+	rtl_reset_ocp_base(tp);
+
 	rtl_phy_patch_request(tp, false, wait);
 
 	if (sram_read(tp, SRAM_GPHY_FW_VER) == __le16_to_cpu(phy->version))
···
 
 	ver_addr = __le16_to_cpu(phy_ver->ver.addr);
 	ver = __le16_to_cpu(phy_ver->ver.data);
+
+	rtl_reset_ocp_base(tp);
 
 	if (sram_read(tp, ver_addr) >= ver) {
 		dev_dbg(&tp->intf->dev, "PHY firmware has been the newest\n");
···
static void rtl8152_fw_phy_fixup(struct r8152 *tp, struct fw_phy_fixup *fix)
{
 	u16 addr, data;
+
+	rtl_reset_ocp_base(tp);
 
 	addr = __le16_to_cpu(fix->setting.addr);
 	data = ocp_reg_read(tp, addr);
···
 	u32 length;
 	int i, num;
 
+	rtl_reset_ocp_base(tp);
+
 	num = phy->pre_num;
 	for (i = 0; i < num; i++)
 		sram_write(tp, __le16_to_cpu(phy->pre_set[i].addr),
···
 	u16 mode_reg, bp_index;
 	u32 length, i, num;
 	__le16 *data;
+
+	rtl_reset_ocp_base(tp);
 
 	mode_reg = __le16_to_cpu(phy->mode_reg);
 	sram_write(tp, mode_reg, __le16_to_cpu(phy->mode_pre));
···
 	if (rtl_fw->post_fw)
 		rtl_fw->post_fw(tp);
 
+	rtl_reset_ocp_base(tp);
 	strscpy(rtl_fw->version, fw_hdr->version, RTL_VER_SIZE);
 	dev_info(&tp->intf->dev, "load %s successfully\n", rtl_fw->version);
}
···
 	return true;
}

+static void r8156_mdio_force_mode(struct r8152 *tp)
+{
+	u16 data;
+
+	/* Select force mode through 0xa5b4 bit 15
+	 * 0: MDIO force mode
+	 * 1: MMD force mode
+	 */
+	data = ocp_reg_read(tp, 0xa5b4);
+	if (data & BIT(15)) {
+		data &= ~BIT(15);
+		ocp_reg_write(tp, 0xa5b4, data);
+	}
+}
+
static void set_carrier(struct r8152 *tp)
{
 	struct net_device *netdev = tp->netdev;
···
 	ocp_data |= ACT_ODMA;
 	ocp_write_byte(tp, MCU_TYPE_USB, USB_BMU_CONFIG, ocp_data);
 
+	r8156_mdio_force_mode(tp);
 	rtl_tally_reset(tp);
 
 	tp->coalesce = 15000;	/* 15 us */
···
 	ocp_data &= ~(RX_AGG_DISABLE | RX_ZERO_EN);
 	ocp_write_word(tp, MCU_TYPE_USB, USB_USB_CTRL, ocp_data);
 
+	r8156_mdio_force_mode(tp);
 	rtl_tally_reset(tp);
 
 	tp->coalesce = 15000;	/* 15 us */
···
 
 	mutex_lock(&tp->control);
 
+	rtl_reset_ocp_base(tp);
+
 	if (test_bit(SELECTIVE_SUSPEND, &tp->flags))
 		ret = rtl8152_runtime_resume(tp);
 	else
···
 	struct r8152 *tp = usb_get_intfdata(intf);
 
 	clear_bit(SELECTIVE_SUSPEND, &tp->flags);
+	rtl_reset_ocp_base(tp);
 	tp->rtl_ops.init(tp);
 	queue_delayed_work(system_long_wq, &tp->hw_phy_work, 0);
 	set_ethernet_addr(tp, true);
+6-2
drivers/net/veth.c
···
 
 			stats->xdp_bytes += skb->len;
 			skb = veth_xdp_rcv_skb(rq, skb, bq, stats);
-			if (skb)
-				napi_gro_receive(&rq->xdp_napi, skb);
+			if (skb) {
+				if (skb_shared(skb) || skb_unclone(skb, GFP_ATOMIC))
+					netif_receive_skb(skb);
+				else
+					napi_gro_receive(&rq->xdp_napi, skb);
+			}
 		}
 		done++;
 	}
+1
drivers/net/xen-netback/common.h
···
 	unsigned int rx_queue_max;
 	unsigned int rx_queue_len;
 	unsigned long last_rx_time;
+	unsigned int rx_slots_needed;
 	bool stalled;
 
 	struct xenvif_copy_state rx_copy;
+49-28
drivers/net/xen-netback/rx.c
···
#include <xen/xen.h>
#include <xen/events.h>

+/*
+ * Update the needed ring page slots for the first SKB queued.
+ * Note that any call sequence outside the RX thread calling this function
+ * needs to wake up the RX thread via a call of xenvif_kick_thread()
+ * afterwards in order to avoid a race with putting the thread to sleep.
+ */
+static void xenvif_update_needed_slots(struct xenvif_queue *queue,
+				       const struct sk_buff *skb)
+{
+	unsigned int needed = 0;
+
+	if (skb) {
+		needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
+		if (skb_is_gso(skb))
+			needed++;
+		if (skb->sw_hash)
+			needed++;
+	}
+
+	WRITE_ONCE(queue->rx_slots_needed, needed);
+}
+
static bool xenvif_rx_ring_slots_available(struct xenvif_queue *queue)
{
 	RING_IDX prod, cons;
-	struct sk_buff *skb;
-	int needed;
-	unsigned long flags;
+	unsigned int needed;
 
-	spin_lock_irqsave(&queue->rx_queue.lock, flags);
-
-	skb = skb_peek(&queue->rx_queue);
-	if (!skb) {
-		spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
+	needed = READ_ONCE(queue->rx_slots_needed);
+	if (!needed)
 		return false;
-	}
-
-	needed = DIV_ROUND_UP(skb->len, XEN_PAGE_SIZE);
-	if (skb_is_gso(skb))
-		needed++;
-	if (skb->sw_hash)
-		needed++;
-
-	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
 
 	do {
 		prod = queue->rx.sring->req_prod;
···
 
 	spin_lock_irqsave(&queue->rx_queue.lock, flags);
 
-	__skb_queue_tail(&queue->rx_queue, skb);
-
-	queue->rx_queue_len += skb->len;
-	if (queue->rx_queue_len > queue->rx_queue_max) {
+	if (queue->rx_queue_len >= queue->rx_queue_max) {
 		struct net_device *dev = queue->vif->dev;
 
 		netif_tx_stop_queue(netdev_get_tx_queue(dev, queue->id));
+		kfree_skb(skb);
+		queue->vif->dev->stats.rx_dropped++;
+	} else {
+		if (skb_queue_empty(&queue->rx_queue))
+			xenvif_update_needed_slots(queue, skb);
+
+		__skb_queue_tail(&queue->rx_queue, skb);
+
+		queue->rx_queue_len += skb->len;
 	}
 
 	spin_unlock_irqrestore(&queue->rx_queue.lock, flags);
···
 
 	skb = __skb_dequeue(&queue->rx_queue);
 	if (skb) {
+		xenvif_update_needed_slots(queue, skb_peek(&queue->rx_queue));
+
 		queue->rx_queue_len -= skb->len;
 		if (queue->rx_queue_len < queue->rx_queue_max) {
 			struct netdev_queue *txq;
···
 			break;
 		xenvif_rx_dequeue(queue);
 		kfree_skb(skb);
+		queue->vif->dev->stats.rx_dropped++;
 	}
}
···
 	xenvif_rx_copy_flush(queue);
}

-static bool xenvif_rx_queue_stalled(struct xenvif_queue *queue)
+static RING_IDX xenvif_rx_queue_slots(const struct xenvif_queue *queue)
{
 	RING_IDX prod, cons;
 
 	prod = queue->rx.sring->req_prod;
 	cons = queue->rx.req_cons;
 
+	return prod - cons;
+}
+
+static bool xenvif_rx_queue_stalled(const struct xenvif_queue *queue)
+{
+	unsigned int needed = READ_ONCE(queue->rx_slots_needed);
+
 	return !queue->stalled &&
-	       prod - cons < 1 &&
+	       xenvif_rx_queue_slots(queue) < needed &&
 	       time_after(jiffies,
			  queue->last_rx_time + queue->vif->stall_timeout);
}

static bool xenvif_rx_queue_ready(struct xenvif_queue *queue)
{
-	RING_IDX prod, cons;
+	unsigned int needed = READ_ONCE(queue->rx_slots_needed);
 
-	prod = queue->rx.sring->req_prod;
-	cons = queue->rx.req_cons;
-
-	return queue->stalled && prod - cons >= 1;
+	return queue->stalled && xenvif_rx_queue_slots(queue) >= needed;
}

bool xenvif_have_rx_work(struct xenvif_queue *queue, bool test_kthread)
drivers/pinctrl/mediatek/pinctrl-paris.c
···
 	desc = (const struct mtk_pin_desc *)hw->soc->pins;
 	*gpio_chip = &hw->chip;
 
-	/* Be greedy to guess first gpio_n is equal to eint_n */
-	if (desc[eint_n].eint.eint_n == eint_n)
+	/*
+	 * Be greedy to guess first gpio_n is equal to eint_n.
+	 * Only eint virtual eint number is greater than gpio number.
+	 */
+	if (hw->soc->npins > eint_n &&
+	    desc[eint_n].eint.eint_n == eint_n)
 		*gpio_n = eint_n;
 	else
 		*gpio_n = mtk_xt_find_eint_num(hw, eint_n);
+4-4
drivers/pinctrl/stm32/pinctrl-stm32.c
···
 		bank_nr = args.args[1] / STM32_GPIO_PINS_PER_BANK;
 		bank->gpio_chip.base = args.args[1];
 
-		npins = args.args[2];
-		while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3,
-							 ++i, &args))
-			npins += args.args[2];
+		/* get the last defined gpio line (offset + nb of pins) */
+		npins = args.args[0] + args.args[2];
+		while (!of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, ++i, &args))
+			npins = max(npins, (int)(args.args[0] + args.args[2]));
 	} else {
 		bank_nr = pctl->nbanks;
 		bank->gpio_chip.base = bank_nr * STM32_GPIO_PINS_PER_BANK;
+2-2
drivers/platform/mellanox/mlxbf-pmc.c
···
 		pmc->block[i].counters = info[2];
 		pmc->block[i].type = info[3];
 
-		if (IS_ERR(pmc->block[i].mmio_base))
-			return PTR_ERR(pmc->block[i].mmio_base);
+		if (!pmc->block[i].mmio_base)
+			return -ENOMEM;
 
 		ret = mlxbf_pmc_create_groups(dev, i);
 		if (ret)
drivers/platform/x86/apple-gmux.c
···
 	}
 
 	gmux_data->iostart = res->start;
-	gmux_data->iolen = res->end - res->start;
+	gmux_data->iolen = resource_size(res);
 
 	if (gmux_data->iolen < GMUX_MIN_IO_LEN) {
 		pr_err("gmux I/O region too small (%lu < %u)\n",
-15
drivers/platform/x86/intel/Kconfig
···
# Intel x86 Platform Specific Drivers
#

-menuconfig X86_PLATFORM_DRIVERS_INTEL
-	bool "Intel x86 Platform Specific Device Drivers"
-	default y
-	help
-	  Say Y here to get to see options for device drivers for
-	  various Intel x86 platforms, including vendor-specific
-	  drivers. This option alone does not add any kernel code.
-
-	  If you say N, all options in this submenu will be skipped
-	  and disabled.
-
-if X86_PLATFORM_DRIVERS_INTEL
-
source "drivers/platform/x86/intel/atomisp2/Kconfig"
source "drivers/platform/x86/intel/int1092/Kconfig"
source "drivers/platform/x86/intel/int33fe/Kconfig"
···

	  To compile this driver as a module, choose M here: the module
	  will be called intel-uncore-frequency.
-
-endif # X86_PLATFORM_DRIVERS_INTEL
+1-1
drivers/platform/x86/intel/pmc/pltdrv.c
···
 
 	retval = platform_device_register(pmc_core_device);
 	if (retval)
-		kfree(pmc_core_device);
+		platform_device_put(pmc_core_device);
 
 	return retval;
}
+30-28
drivers/platform/x86/system76_acpi.c
···
 	union acpi_object *nfan;
 	union acpi_object *ntmp;
 	struct input_dev *input;
+	bool has_open_ec;
};

static const struct acpi_device_id device_ids[] = {
···

static void system76_battery_init(void)
{
-	acpi_handle handle;
-
-	handle = ec_get_handle();
-	if (handle && acpi_has_method(handle, "GBCT"))
-		battery_hook_register(&system76_battery_hook);
+	battery_hook_register(&system76_battery_hook);
}

static void system76_battery_exit(void)
{
-	acpi_handle handle;
-
-	handle = ec_get_handle();
-	if (handle && acpi_has_method(handle, "GBCT"))
-		battery_hook_unregister(&system76_battery_hook);
+	battery_hook_unregister(&system76_battery_hook);
}

// Get the airplane mode LED brightness
···
 	acpi_dev->driver_data = data;
 	data->acpi_dev = acpi_dev;
 
+	// Some models do not run open EC firmware. Check for an ACPI method
+	// that only exists on open EC to guard functionality specific to it.
+	data->has_open_ec = acpi_has_method(acpi_device_handle(data->acpi_dev), "NFAN");
+
 	err = system76_get(data, "INIT");
 	if (err)
 		return err;
···
 	if (err)
 		goto error;
 
-	err = system76_get_object(data, "NFAN", &data->nfan);
-	if (err)
-		goto error;
+	if (data->has_open_ec) {
+		err = system76_get_object(data, "NFAN", &data->nfan);
+		if (err)
+			goto error;
 
-	err = system76_get_object(data, "NTMP", &data->ntmp);
-	if (err)
-		goto error;
+		err = system76_get_object(data, "NTMP", &data->ntmp);
+		if (err)
+			goto error;
 
-	data->therm = devm_hwmon_device_register_with_info(&acpi_dev->dev,
-		"system76_acpi", data, &thermal_chip_info, NULL);
-	err = PTR_ERR_OR_ZERO(data->therm);
-	if (err)
-		goto error;
+		data->therm = devm_hwmon_device_register_with_info(&acpi_dev->dev,
+			"system76_acpi", data, &thermal_chip_info, NULL);
+		err = PTR_ERR_OR_ZERO(data->therm);
+		if (err)
+			goto error;
 
-	system76_battery_init();
+		system76_battery_init();
+	}
 
 	return 0;

error:
-	kfree(data->ntmp);
-	kfree(data->nfan);
+	if (data->has_open_ec) {
+		kfree(data->ntmp);
+		kfree(data->nfan);
+	}
 	return err;
}
···
 
 	data = acpi_driver_data(acpi_dev);
 
-	system76_battery_exit();
+	if (data->has_open_ec) {
+		system76_battery_exit();
+		kfree(data->nfan);
+		kfree(data->ntmp);
+	}
 
 	devm_led_classdev_unregister(&acpi_dev->dev, &data->ap_led);
 	devm_led_classdev_unregister(&acpi_dev->dev, &data->kb_led);
-
-	kfree(data->nfan);
-	kfree(data->ntmp);
 
 	system76_get(data, "FINI");
 
drivers/scsi/vmw_pvscsi.c
···
 		 * Commands like INQUIRY may transfer less data than
 		 * requested by the initiator via bufflen. Set residual
 		 * count to make upper layer aware of the actual amount
-		 * of data returned.
+		 * of data returned. There are cases when controller
+		 * returns zero dataLen with non zero data - do not set
+		 * residual count in that case.
 		 */
-		scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen);
+		if (e->dataLen && (e->dataLen < scsi_bufflen(cmd)))
+			scsi_set_resid(cmd, scsi_bufflen(cmd) - e->dataLen);
 		cmd->result = (DID_OK << 16);
 		break;
···11// SPDX-License-Identifier: GPL-2.0-only22/*33- * Copyright (c) 2015-2016, Linaro Limited33+ * Copyright (c) 2015-2017, 2019-2021 Linaro Limited44 */55+#include <linux/anon_inodes.h>56#include <linux/device.h>66-#include <linux/dma-buf.h>77-#include <linux/fdtable.h>87#include <linux/idr.h>88+#include <linux/mm.h>99#include <linux/sched.h>1010#include <linux/slab.h>1111#include <linux/tee_drv.h>1212#include <linux/uio.h>1313-#include <linux/module.h>1413#include "tee_private.h"1515-1616-MODULE_IMPORT_NS(DMA_BUF);17141815static void release_registered_pages(struct tee_shm *shm)1916{···2831 }2932}30333131-static void tee_shm_release(struct tee_shm *shm)3434+static void tee_shm_release(struct tee_device *teedev, struct tee_shm *shm)3235{3333- struct tee_device *teedev = shm->ctx->teedev;3434-3535- if (shm->flags & TEE_SHM_DMA_BUF) {3636- mutex_lock(&teedev->mutex);3737- idr_remove(&teedev->idr, shm->id);3838- mutex_unlock(&teedev->mutex);3939- }4040-4136 if (shm->flags & TEE_SHM_POOL) {4237 struct tee_shm_pool_mgr *poolm;4338···55665667 tee_device_put(teedev);5768}5858-5959-static struct sg_table *tee_shm_op_map_dma_buf(struct dma_buf_attachment6060- *attach, enum dma_data_direction dir)6161-{6262- return NULL;6363-}6464-6565-static void tee_shm_op_unmap_dma_buf(struct dma_buf_attachment *attach,6666- struct sg_table *table,6767- enum dma_data_direction dir)6868-{6969-}7070-7171-static void tee_shm_op_release(struct dma_buf *dmabuf)7272-{7373- struct tee_shm *shm = dmabuf->priv;7474-7575- tee_shm_release(shm);7676-}7777-7878-static int tee_shm_op_mmap(struct dma_buf *dmabuf, struct vm_area_struct *vma)7979-{8080- struct tee_shm *shm = dmabuf->priv;8181- size_t size = vma->vm_end - vma->vm_start;8282-8383- /* Refuse sharing shared memory provided by application */8484- if (shm->flags & TEE_SHM_USER_MAPPED)8585- return -EINVAL;8686-8787- return remap_pfn_range(vma, vma->vm_start, shm->paddr >> PAGE_SHIFT,8888- size, vma->vm_page_prot);8989-}9090-9191-static const 
struct dma_buf_ops tee_shm_dma_buf_ops = {
-	.map_dma_buf = tee_shm_op_map_dma_buf,
-	.unmap_dma_buf = tee_shm_op_unmap_dma_buf,
-	.release = tee_shm_op_release,
-	.mmap = tee_shm_op_mmap,
-};
-
 struct tee_shm *tee_shm_alloc(struct tee_context *ctx, size_t size, u32 flags)
 {
···
 		goto err_dev_put;
 	}
 
+	refcount_set(&shm->refcount, 1);
 	shm->flags = flags | TEE_SHM_POOL;
 	shm->ctx = ctx;
 	if (flags & TEE_SHM_DMA_BUF)
···
 		goto err_kfree;
 	}
 
-
 	if (flags & TEE_SHM_DMA_BUF) {
-		DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
-
 		mutex_lock(&teedev->mutex);
 		shm->id = idr_alloc(&teedev->idr, shm, 1, 0, GFP_KERNEL);
 		mutex_unlock(&teedev->mutex);
···
 			ret = ERR_PTR(shm->id);
 			goto err_pool_free;
 		}
-
-		exp_info.ops = &tee_shm_dma_buf_ops;
-		exp_info.size = shm->size;
-		exp_info.flags = O_RDWR;
-		exp_info.priv = shm;
-
-		shm->dmabuf = dma_buf_export(&exp_info);
-		if (IS_ERR(shm->dmabuf)) {
-			ret = ERR_CAST(shm->dmabuf);
-			goto err_rem;
-		}
 	}
 
 	teedev_ctx_get(ctx);
 
 	return shm;
-err_rem:
-	if (flags & TEE_SHM_DMA_BUF) {
-		mutex_lock(&teedev->mutex);
-		idr_remove(&teedev->idr, shm->id);
-		mutex_unlock(&teedev->mutex);
-	}
 err_pool_free:
 	poolm->ops->free(poolm, shm);
 err_kfree:
···
 		goto err;
 	}
 
+	refcount_set(&shm->refcount, 1);
 	shm->flags = flags | TEE_SHM_REGISTER;
 	shm->ctx = ctx;
 	shm->id = -1;
···
 		goto err;
 	}
 
-	if (flags & TEE_SHM_DMA_BUF) {
-		DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
-
-		exp_info.ops = &tee_shm_dma_buf_ops;
-		exp_info.size = shm->size;
-		exp_info.flags = O_RDWR;
-		exp_info.priv = shm;
-
-		shm->dmabuf = dma_buf_export(&exp_info);
-		if (IS_ERR(shm->dmabuf)) {
-			ret = ERR_CAST(shm->dmabuf);
-			teedev->desc->ops->shm_unregister(ctx, shm);
-			goto err;
-		}
-	}
-
 	return shm;
 err:
 	if (shm) {
···
 }
 EXPORT_SYMBOL_GPL(tee_shm_register);
 
+static int tee_shm_fop_release(struct inode *inode, struct file *filp)
+{
+	tee_shm_put(filp->private_data);
+	return 0;
+}
+
+static int tee_shm_fop_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+	struct tee_shm *shm = filp->private_data;
+	size_t size = vma->vm_end - vma->vm_start;
+
+	/* Refuse sharing shared memory provided by application */
+	if (shm->flags & TEE_SHM_USER_MAPPED)
+		return -EINVAL;
+
+	/* check for overflowing the buffer's size */
+	if (vma->vm_pgoff + vma_pages(vma) > shm->size >> PAGE_SHIFT)
+		return -EINVAL;
+
+	return remap_pfn_range(vma, vma->vm_start, shm->paddr >> PAGE_SHIFT,
+			       size, vma->vm_page_prot);
+}
+
+static const struct file_operations tee_shm_fops = {
+	.owner = THIS_MODULE,
+	.release = tee_shm_fop_release,
+	.mmap = tee_shm_fop_mmap,
+};
+
 /**
  * tee_shm_get_fd() - Increase reference count and return file descriptor
  * @shm:	Shared memory handle
···
 	if (!(shm->flags & TEE_SHM_DMA_BUF))
 		return -EINVAL;
 
-	get_dma_buf(shm->dmabuf);
-	fd = dma_buf_fd(shm->dmabuf, O_CLOEXEC);
+	/* matched by tee_shm_put() in tee_shm_op_release() */
+	refcount_inc(&shm->refcount);
+	fd = anon_inode_getfd("tee_shm", &tee_shm_fops, shm, O_RDWR);
 	if (fd < 0)
-		dma_buf_put(shm->dmabuf);
+		tee_shm_put(shm);
 	return fd;
 }
···
  */
 void tee_shm_free(struct tee_shm *shm)
 {
-	/*
-	 * dma_buf_put() decreases the dmabuf reference counter and will
-	 * call tee_shm_release() when the last reference is gone.
-	 *
-	 * In the case of driver private memory we call tee_shm_release
-	 * directly instead as it doesn't have a reference counter.
-	 */
-	if (shm->flags & TEE_SHM_DMA_BUF)
-		dma_buf_put(shm->dmabuf);
-	else
-		tee_shm_release(shm);
+	tee_shm_put(shm);
 }
 EXPORT_SYMBOL_GPL(tee_shm_free);
···
 	teedev = ctx->teedev;
 	mutex_lock(&teedev->mutex);
 	shm = idr_find(&teedev->idr, id);
+	/*
+	 * If the tee_shm was found in the IDR it must have a refcount
+	 * larger than 0 due to the guarantee in tee_shm_put() below. So
+	 * it's safe to use refcount_inc().
+	 */
 	if (!shm || shm->ctx != ctx)
 		shm = ERR_PTR(-EINVAL);
-	else if (shm->flags & TEE_SHM_DMA_BUF)
-		get_dma_buf(shm->dmabuf);
+	else
+		refcount_inc(&shm->refcount);
 	mutex_unlock(&teedev->mutex);
 	return shm;
 }
···
  */
 void tee_shm_put(struct tee_shm *shm)
 {
-	if (shm->flags & TEE_SHM_DMA_BUF)
-		dma_buf_put(shm->dmabuf);
+	struct tee_device *teedev = shm->ctx->teedev;
+	bool do_release = false;
+
+	mutex_lock(&teedev->mutex);
+	if (refcount_dec_and_test(&shm->refcount)) {
+		/*
+		 * refcount has reached 0, we must now remove it from the
+		 * IDR before releasing the mutex. This will guarantee that
+		 * the refcount_inc() in tee_shm_get_from_id() never starts
+		 * from 0.
+		 */
+		if (shm->flags & TEE_SHM_DMA_BUF)
+			idr_remove(&teedev->idr, shm->id);
+		do_release = true;
+	}
+	mutex_unlock(&teedev->mutex);
+
+	if (do_release)
+		tee_shm_release(teedev, shm);
 }
 EXPORT_SYMBOL_GPL(tee_shm_put);
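The tee_shm change above replaces dma-buf lifetime management with a plain refcount whose final put unpublishes the object from the IDR under the same mutex that lookups take, so `refcount_inc()` in the lookup path can never run on a count of zero. Below is a minimal userspace sketch of that pattern, with hypothetical names (`obj_*`, a fixed-size array standing in for the kernel IDR, a pthread mutex standing in for `teedev->mutex`); it is an illustration of the locking discipline, not the driver's actual code.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdlib.h>

struct obj {
	int id;
	int refcount;	/* the 0-transition is protected by table_lock */
};

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct obj *table[16];	/* toy stand-in for the kernel IDR */

static struct obj *obj_alloc(int id)
{
	struct obj *o = calloc(1, sizeof(*o));

	o->id = id;
	o->refcount = 1;	/* caller's initial reference */
	pthread_mutex_lock(&table_lock);
	table[id] = o;		/* publish only after fully initialized */
	pthread_mutex_unlock(&table_lock);
	return o;
}

static struct obj *obj_get_from_id(int id)
{
	struct obj *o;

	pthread_mutex_lock(&table_lock);
	o = table[id];
	if (o)
		o->refcount++;	/* never starts from 0: see obj_put() */
	pthread_mutex_unlock(&table_lock);
	return o;
}

static bool obj_put(struct obj *o)	/* returns true if released */
{
	bool release = false;

	pthread_mutex_lock(&table_lock);
	if (--o->refcount == 0) {
		table[o->id] = NULL;	/* unpublish before dropping lock */
		release = true;
	}
	pthread_mutex_unlock(&table_lock);
	if (release)
		free(o);
	return release;
}
```

Because the table entry is cleared inside the same critical section that observes the count reach zero, a concurrent `obj_get_from_id()` either finds the object with a non-zero count or does not find it at all.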
+27-3
drivers/tty/hvc/hvc_xen.c
···
 	struct xenbus_device *xbdev;
 	struct xencons_interface *intf;
 	unsigned int evtchn;
+	XENCONS_RING_IDX out_cons;
+	unsigned int out_cons_same;
 	struct hvc_struct *hvc;
 	int irq;
 	int vtermno;
···
 	XENCONS_RING_IDX cons, prod;
 	int recv = 0;
 	struct xencons_info *xencons = vtermno_to_xencons(vtermno);
+	unsigned int eoiflag = 0;
+
 	if (xencons == NULL)
 		return -EINVAL;
 	intf = xencons->intf;
···
 	mb(); /* read ring before consuming */
 	intf->in_cons = cons;
 
-	notify_daemon(xencons);
+	/*
+	 * When to mark interrupt having been spurious:
+	 * - there was no new data to be read, and
+	 * - the backend did not consume some output bytes, and
+	 * - the previous round with no read data didn't see consumed bytes
+	 *   (we might have a race with an interrupt being in flight while
+	 *   updating xencons->out_cons, so account for that by allowing one
+	 *   round without any visible reason)
+	 */
+	if (intf->out_cons != xencons->out_cons) {
+		xencons->out_cons = intf->out_cons;
+		xencons->out_cons_same = 0;
+	}
+	if (recv) {
+		notify_daemon(xencons);
+	} else if (xencons->out_cons_same++ > 1) {
+		eoiflag = XEN_EOI_FLAG_SPURIOUS;
+	}
+
+	xen_irq_lateeoi(xencons->irq, eoiflag);
+
 	return recv;
 }
···
 	if (ret)
 		return ret;
 	info->evtchn = evtchn;
-	irq = bind_evtchn_to_irq(evtchn);
+	irq = bind_interdomain_evtchn_to_irq_lateeoi(dev, evtchn);
 	if (irq < 0)
 		return irq;
 	info->irq = irq;
···
 		return r;
 
 	info = vtermno_to_xencons(HVC_COOKIE);
-	info->irq = bind_evtchn_to_irq(info->evtchn);
+	info->irq = bind_evtchn_to_irq_lateeoi(info->evtchn);
 	}
 	if (info->irq < 0)
 		info->irq = 0; /* NO_IRQ */
···
  *	@udp_tunnel_nic:	UDP tunnel offload state
  *	@xdp_state:		stores info on attached XDP BPF programs
  *
- *	@nested_level:	Used as as a parameter of spin_lock_nested() of
+ *	@nested_level:	Used as a parameter of spin_lock_nested() of
  *			dev->addr_list_lock.
  *	@unlink_list:	As netif_addr_lock() can be called recursively,
  *			keep a list of interfaces to be deleted.
···
  * @offset:	offset of buffer in user space
  * @pages:	locked pages from userspace
  * @num_pages:	number of locked pages
- * @dmabuf:	dmabuf used to for exporting to user space
+ * @refcount:	reference counter
  * @flags:	defined by TEE_SHM_* in tee_drv.h
  * @id:		unique id of a shared memory object on this device, shared
  *		with user space
···
 	unsigned int offset;
 	struct page **pages;
 	size_t num_pages;
-	struct dma_buf *dmabuf;
+	refcount_t refcount;
 	u32 flags;
 	int id;
 	u64 sec_world_id;
+23-2
include/linux/virtio_net.h
···
 #include <uapi/linux/udp.h>
 #include <uapi/linux/virtio_net.h>
 
+static inline bool virtio_net_hdr_match_proto(__be16 protocol, __u8 gso_type)
+{
+	switch (gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
+	case VIRTIO_NET_HDR_GSO_TCPV4:
+		return protocol == cpu_to_be16(ETH_P_IP);
+	case VIRTIO_NET_HDR_GSO_TCPV6:
+		return protocol == cpu_to_be16(ETH_P_IPV6);
+	case VIRTIO_NET_HDR_GSO_UDP:
+		return protocol == cpu_to_be16(ETH_P_IP) ||
+		       protocol == cpu_to_be16(ETH_P_IPV6);
+	default:
+		return false;
+	}
+}
+
 static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,
 					   const struct virtio_net_hdr *hdr)
 {
+	if (skb->protocol)
+		return 0;
+
 	switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
 	case VIRTIO_NET_HDR_GSO_TCPV4:
 	case VIRTIO_NET_HDR_GSO_UDP:
···
 	if (!skb->protocol) {
 		__be16 protocol = dev_parse_header_protocol(skb);
 
-		virtio_net_hdr_set_proto(skb, hdr);
-		if (protocol && protocol != skb->protocol)
+		if (!protocol)
+			virtio_net_hdr_set_proto(skb, hdr);
+		else if (!virtio_net_hdr_match_proto(protocol, hdr->gso_type))
 			return -EINVAL;
+		else
+			skb->protocol = protocol;
 	}
 retry:
 	if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
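The new `virtio_net_hdr_match_proto()` rejects headers whose GSO type contradicts the link-layer protocol instead of silently overwriting it. The following is a userspace mock of that check, using plain host-order constants (`ETH_P_IP` is 0x0800, `ETH_P_IPV6` is 0x86DD, and the `VIRTIO_NET_HDR_GSO_*` values are from the virtio spec) in place of the kernel's `cpu_to_be16()` and skb types; it only demonstrates the matching logic.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ETH_P_IP	0x0800
#define ETH_P_IPV6	0x86DD

#define GSO_TCPV4	1	/* VIRTIO_NET_HDR_GSO_TCPV4 */
#define GSO_UDP		3	/* VIRTIO_NET_HDR_GSO_UDP */
#define GSO_TCPV6	4	/* VIRTIO_NET_HDR_GSO_TCPV6 */
#define GSO_ECN		0x80	/* VIRTIO_NET_HDR_GSO_ECN, a modifier bit */

static bool match_proto(uint16_t protocol, uint8_t gso_type)
{
	switch (gso_type & ~GSO_ECN) {	/* ECN is a modifier, mask it off */
	case GSO_TCPV4:
		return protocol == ETH_P_IP;
	case GSO_TCPV6:
		return protocol == ETH_P_IPV6;
	case GSO_UDP:	/* UFO may be carried over IPv4 or IPv6 */
		return protocol == ETH_P_IP || protocol == ETH_P_IPV6;
	default:
		return false;
	}
}
```

A TCPv4 GSO header on an IPv6 frame now fails validation rather than corrupting the packet's protocol field.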
···
 #include <linux/buildid.h>
 #include <linux/crash_core.h>
+#include <linux/init.h>
 #include <linux/utsname.h>
 #include <linux/vmalloc.h>
 
···
 	return __parse_crashkernel(cmdline, system_ram, crash_size, crash_base,
 				"crashkernel=", suffix_tbl[SUFFIX_LOW]);
 }
+
+/*
+ * Add a dummy early_param handler to mark crashkernel= as a known command line
+ * parameter and suppress incorrect warnings in init/main.c.
+ */
+static int __init parse_crashkernel_dummy(char *arg)
+{
+	return 0;
+}
+early_param("crashkernel", parse_crashkernel_dummy);
 
 Elf_Word *append_elf_note(Elf_Word *buf, char *name, unsigned int type,
 			  void *data, size_t data_len)
+9-6
kernel/ucount.c
···
 long inc_rlimit_ucounts(struct ucounts *ucounts, enum ucount_type type, long v)
 {
 	struct ucounts *iter;
+	long max = LONG_MAX;
 	long ret = 0;
 
 	for (iter = ucounts; iter; iter = iter->ns->ucounts) {
-		long max = READ_ONCE(iter->ns->ucount_max[type]);
 		long new = atomic_long_add_return(v, &iter->ucount[type]);
 		if (new < 0 || new > max)
 			ret = LONG_MAX;
 		else if (iter == ucounts)
 			ret = new;
+		max = READ_ONCE(iter->ns->ucount_max[type]);
 	}
 	return ret;
 }
···
 {
 	/* Caller must hold a reference to ucounts */
 	struct ucounts *iter;
+	long max = LONG_MAX;
 	long dec, ret = 0;
 
 	for (iter = ucounts; iter; iter = iter->ns->ucounts) {
-		long max = READ_ONCE(iter->ns->ucount_max[type]);
 		long new = atomic_long_add_return(1, &iter->ucount[type]);
 		if (new < 0 || new > max)
 			goto unwind;
 		if (iter == ucounts)
 			ret = new;
+		max = READ_ONCE(iter->ns->ucount_max[type]);
 		/*
 		 * Grab an extra ucount reference for the caller when
 		 * the rlimit count was previously 0.
···
 	return 0;
 }
 
-bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long max)
+bool is_ucounts_overlimit(struct ucounts *ucounts, enum ucount_type type, unsigned long rlimit)
 {
 	struct ucounts *iter;
-	if (get_ucounts_value(ucounts, type) > max)
-		return true;
+	long max = rlimit;
+	if (rlimit > LONG_MAX)
+		max = LONG_MAX;
 	for (iter = ucounts; iter; iter = iter->ns->ucounts) {
-		max = READ_ONCE(iter->ns->ucount_max[type]);
 		if (get_ucounts_value(iter, type) > max)
 			return true;
+		max = READ_ONCE(iter->ns->ucount_max[type]);
 	}
 	return false;
 }
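The ucounts fix above changes *when* the limit is read: each level's count is now compared against the limit gathered from the previous iteration, so the innermost level starts from `LONG_MAX` and a namespace's configured maximum constrains the next level up the chain. The sketch below is a mechanical userspace mirror of that loop ordering only, with hypothetical names (`struct ns`, `charge()`); it makes no claim about the full ucounts accounting semantics.

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

struct ns {
	struct ns *parent;	/* walk toward the root, like iter->ns->ucounts */
	long count;		/* charged value at this level */
	long max;		/* limit this level contributes to the walk */
};

static long charge(struct ns *ns, long v)
{
	long max = LONG_MAX;	/* no limit applies to the first level */
	long ret = 0;

	for (struct ns *iter = ns; iter; iter = iter->parent) {
		long new = iter->count += v;

		if (new < 0 || new > max)
			ret = LONG_MAX;		/* over limit somewhere */
		else if (iter == ns)
			ret = new;		/* report the caller's level */
		max = iter->max;	/* applies to the next iteration */
	}
	return ret;
}
```

Note how the first comparison uses `LONG_MAX` and the last configured `max` in the chain is never consulted, exactly the one-step shift the diff introduces.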
+9-2
mm/damon/dbgfs.c
···
 		const char __user *buf, size_t count, loff_t *ppos)
 {
 	struct damon_ctx *ctx = file->private_data;
+	struct damon_target *t, *next_t;
 	bool id_is_pid = true;
 	char *kbuf, *nrs;
 	unsigned long *targets;
···
 		goto unlock_out;
 	}
 
-	/* remove targets with previously-set primitive */
-	damon_set_targets(ctx, NULL, 0);
+	/* remove previously set targets */
+	damon_for_each_target_safe(t, next_t, ctx) {
+		if (targetid_is_pid(ctx))
+			put_pid((struct pid *)t->id);
+		damon_destroy_target(t);
+	}
 
 	/* Configure the context for the address space type */
 	if (id_is_pid)
···
 	if (!targetid_is_pid(ctx))
 		return;
 
+	mutex_lock(&ctx->kdamond_lock);
 	damon_for_each_target_safe(t, next, ctx) {
 		put_pid((struct pid *)t->id);
 		damon_destroy_target(t);
 	}
+	mutex_unlock(&ctx->kdamond_lock);
 }
 
 static struct damon_ctx *dbgfs_new_ctx(void)
···
 	if (!(flags & MF_COUNT_INCREASED)) {
 		res = get_hwpoison_page(p, flags);
 		if (!res) {
-			/*
-			 * Check "filter hit" and "race with other subpage."
-			 */
 			lock_page(head);
-			if (PageHWPoison(head)) {
-				if ((hwpoison_filter(p) && TestClearPageHWPoison(p))
-				    || (p != head && TestSetPageHWPoison(head))) {
+			if (hwpoison_filter(p)) {
+				if (TestClearPageHWPoison(head))
 					num_poisoned_pages_dec();
-					unlock_page(head);
-					return 0;
-				}
+				unlock_page(head);
+				return 0;
 			}
 			unlock_page(head);
 			res = MF_FAILED;
···
 	} else if (ret == 0) {
 		if (soft_offline_free_page(page) && try_again) {
 			try_again = false;
+			flags &= ~MF_COUNT_INCREASED;
 			goto retry;
 		}
 	}
+1-2
mm/mempolicy.c
···
 		 * memory with both reclaim and compact as well.
 		 */
 		if (!page && (gfp & __GFP_DIRECT_RECLAIM))
-			page = __alloc_pages_node(hpage_node,
-						gfp, order);
+			page = __alloc_pages(gfp, order, hpage_node, nmask);
 
 		goto out;
 	}
+56-9
mm/vmscan.c
···
 	unlock_page(page);
 }
 
+static bool skip_throttle_noprogress(pg_data_t *pgdat)
+{
+	int reclaimable = 0, write_pending = 0;
+	int i;
+
+	/*
+	 * If kswapd is disabled, reschedule if necessary but do not
+	 * throttle as the system is likely near OOM.
+	 */
+	if (pgdat->kswapd_failures >= MAX_RECLAIM_RETRIES)
+		return true;
+
+	/*
+	 * If there are a lot of dirty/writeback pages then do not
+	 * throttle as throttling will occur when the pages cycle
+	 * towards the end of the LRU if still under writeback.
+	 */
+	for (i = 0; i < MAX_NR_ZONES; i++) {
+		struct zone *zone = pgdat->node_zones + i;
+
+		if (!populated_zone(zone))
+			continue;
+
+		reclaimable += zone_reclaimable_pages(zone);
+		write_pending += zone_page_state_snapshot(zone,
+						  NR_ZONE_WRITE_PENDING);
+	}
+	if (2 * write_pending <= reclaimable)
+		return true;
+
+	return false;
+}
+
 void reclaim_throttle(pg_data_t *pgdat, enum vmscan_throttle_state reason)
 {
 	wait_queue_head_t *wqh = &pgdat->reclaim_wait[reason];
···
 		}
 
 		break;
+	case VMSCAN_THROTTLE_CONGESTED:
+		fallthrough;
 	case VMSCAN_THROTTLE_NOPROGRESS:
-		timeout = HZ/2;
+		if (skip_throttle_noprogress(pgdat)) {
+			cond_resched();
+			return;
+		}
+
+		timeout = 1;
+
 		break;
 	case VMSCAN_THROTTLE_ISOLATED:
 		timeout = HZ/50;
···
 	if (!current_is_kswapd() && current_may_throttle() &&
 	    !sc->hibernation_mode &&
 	    test_bit(LRUVEC_CONGESTED, &target_lruvec->flags))
-		reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
+		reclaim_throttle(pgdat, VMSCAN_THROTTLE_CONGESTED);
 
 	if (should_continue_reclaim(pgdat, sc->nr_reclaimed - nr_reclaimed,
 				    sc))
···
 	}
 
 	/*
-	 * Do not throttle kswapd on NOPROGRESS as it will throttle on
-	 * VMSCAN_THROTTLE_WRITEBACK if there are too many pages under
-	 * writeback and marked for immediate reclaim at the tail of
-	 * the LRU.
+	 * Do not throttle kswapd or cgroup reclaim on NOPROGRESS as it will
+	 * throttle on VMSCAN_THROTTLE_WRITEBACK if there are too many pages
+	 * under writeback and marked for immediate reclaim at the tail of the
+	 * LRU.
 	 */
-	if (current_is_kswapd())
+	if (current_is_kswapd() || cgroup_reclaim(sc))
 		return;
 
 	/* Throttle if making no progress at high prioities. */
-	if (sc->priority < DEF_PRIORITY - 2)
+	if (sc->priority == 1 && !sc->nr_reclaimed)
 		reclaim_throttle(pgdat, VMSCAN_THROTTLE_NOPROGRESS);
 }
···
 	unsigned long nr_soft_scanned;
 	gfp_t orig_mask;
 	pg_data_t *last_pgdat = NULL;
+	pg_data_t *first_pgdat = NULL;
 
 	/*
 	 * If the number of buffer_heads in the machine exceeds the maximum
···
 			/* need some check for avoid more shrink_zone() */
 		}
 
+		if (!first_pgdat)
+			first_pgdat = zone->zone_pgdat;
+
 		/* See comment about same check for global reclaim above */
 		if (zone->zone_pgdat == last_pgdat)
 			continue;
 		last_pgdat = zone->zone_pgdat;
 		shrink_node(zone->zone_pgdat, sc);
-		consider_reclaim_throttle(zone->zone_pgdat, sc);
 	}
+
+	if (first_pgdat)
+		consider_reclaim_throttle(first_pgdat, sc);
 
 	/*
 	 * Restore to original mask to avoid the impact on the caller if we
···
 	}
 	hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[cb->args[0]],
 				   hnnode) {
-		if (NF_CT_DIRECTION(h) != IP_CT_DIR_ORIGINAL)
-			continue;
 		ct = nf_ct_tuplehash_to_ctrack(h);
 		if (nf_ct_is_expired(ct)) {
 			if (i < ARRAY_SIZE(nf_ct_evict) &&
···
 		}
 
 		if (!net_eq(net, nf_ct_net(ct)))
+			continue;
+
+		if (NF_CT_DIRECTION(h) != IP_CT_DIR_ORIGINAL)
 			continue;
 
 		if (cb->args[1]) {
···
 #include <net/mpls.h>
 #include <net/ndisc.h>
 #include <net/nsh.h>
+#include <net/netfilter/nf_conntrack_zones.h>
 
 #include "conntrack.h"
 #include "datapath.h"
···
 #endif
 	bool post_ct = false;
 	int res, err;
+	u16 zone = 0;
 
 	/* Extract metadata from packet. */
 	if (tun_info) {
···
 		key->recirc_id = tc_ext ? tc_ext->chain : 0;
 		OVS_CB(skb)->mru = tc_ext ? tc_ext->mru : 0;
 		post_ct = tc_ext ? tc_ext->post_ct : false;
+		zone = post_ct ? tc_ext->zone : 0;
 	} else {
 		key->recirc_id = 0;
 	}
···
 #endif
 
 	err = key_extract(skb, key);
-	if (!err)
+	if (!err) {
 		ovs_ct_fill_key(skb, key, post_ct);	/* Must be after key_extract(). */
+		if (post_ct && !skb_get_nfct(skb))
+			key->ct_zone = zone;
+	}
 	return err;
 }
+2
net/phonet/pep.c
···
 		ret = -EBUSY;
 	else if (sk->sk_state == TCP_ESTABLISHED)
 		ret = -EISCONN;
+	else if (!pn->pn_sk.sobject)
+		ret = -EADDRNOTAVAIL;
 	else
 		ret = pep_sock_enable(sk, NULL, 0);
 	release_sock(sk);
···
 }
 
 /* Final destructor for endpoint.  */
+static void sctp_endpoint_destroy_rcu(struct rcu_head *head)
+{
+	struct sctp_endpoint *ep = container_of(head, struct sctp_endpoint, rcu);
+	struct sock *sk = ep->base.sk;
+
+	sctp_sk(sk)->ep = NULL;
+	sock_put(sk);
+
+	kfree(ep);
+	SCTP_DBG_OBJCNT_DEC(ep);
+}
+
 static void sctp_endpoint_destroy(struct sctp_endpoint *ep)
 {
 	struct sock *sk;
···
 	if (sctp_sk(sk)->bind_hash)
 		sctp_put_port(sk);
 
-	sctp_sk(sk)->ep = NULL;
-	/* Give up our hold on the sock */
-	sock_put(sk);
-
-	kfree(ep);
-	SCTP_DBG_OBJCNT_DEC(ep);
+	call_rcu(&ep->rcu, sctp_endpoint_destroy_rcu);
 }
 
 /* Hold a reference to an endpoint. */
-void sctp_endpoint_hold(struct sctp_endpoint *ep)
+int sctp_endpoint_hold(struct sctp_endpoint *ep)
 {
-	refcount_inc(&ep->base.refcnt);
+	return refcount_inc_not_zero(&ep->base.refcnt);
 }
 
 /* Release a reference to an endpoint and clean up if there are
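`sctp_endpoint_hold()` now uses `refcount_inc_not_zero()`: a traversal that races with the final put only takes a reference if the count has not already dropped to zero. The kernel implements this with a saturating compare-and-swap loop; below is a userspace sketch of the same idea using C11 `<stdatomic.h>` (without the kernel's saturation handling), purely to illustrate why the caller must check the return value.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Take a reference only if the object is still alive (count != 0). */
static bool refcount_inc_not_zero(atomic_int *ref)
{
	int old = atomic_load(ref);

	while (old != 0) {
		/* On failure, 'old' is reloaded with the current value. */
		if (atomic_compare_exchange_weak(ref, &old, old + 1))
			return true;	/* reference taken */
	}
	return false;	/* object is already on its way to destruction */
}
```

This is the standard fix for lookup-vs-free races on refcounted objects reachable through a shared structure: a plain `refcount_inc()` on a count that may be zero would resurrect a dying object.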
+15-8
net/sctp/socket.c
···
 }
 EXPORT_SYMBOL_GPL(sctp_transport_lookup_process);
 
-int sctp_for_each_transport(int (*cb)(struct sctp_transport *, void *),
-			    int (*cb_done)(struct sctp_transport *, void *),
-			    struct net *net, int *pos, void *p) {
+int sctp_transport_traverse_process(sctp_callback_t cb, sctp_callback_t cb_done,
+				    struct net *net, int *pos, void *p)
+{
 	struct rhashtable_iter hti;
 	struct sctp_transport *tsp;
+	struct sctp_endpoint *ep;
 	int ret;
 
 again:
···
 
 	tsp = sctp_transport_get_idx(net, &hti, *pos + 1);
 	for (; !IS_ERR_OR_NULL(tsp); tsp = sctp_transport_get_next(net, &hti)) {
-		ret = cb(tsp, p);
-		if (ret)
-			break;
+		ep = tsp->asoc->ep;
+		if (sctp_endpoint_hold(ep)) { /* asoc can be peeled off */
+			ret = cb(ep, tsp, p);
+			if (ret)
+				break;
+			sctp_endpoint_put(ep);
+		}
 		(*pos)++;
 		sctp_transport_put(tsp);
 	}
 	sctp_transport_walk_stop(&hti);
 
 	if (ret) {
-		if (cb_done && !cb_done(tsp, p)) {
+		if (cb_done && !cb_done(ep, tsp, p)) {
 			(*pos)++;
+			sctp_endpoint_put(ep);
 			sctp_transport_put(tsp);
 			goto again;
 		}
+		sctp_endpoint_put(ep);
 		sctp_transport_put(tsp);
 	}
 
 	return ret;
 }
-EXPORT_SYMBOL_GPL(sctp_for_each_transport);
+EXPORT_SYMBOL_GPL(sctp_transport_traverse_process);
 
 /* 7.2.1 Association Status (SCTP_STATUS)
+5
net/smc/smc.h
···
 	u16			tx_cdc_seq;	/* sequence # for CDC send */
 	u16			tx_cdc_seq_fin;	/* sequence # - tx completed */
 	spinlock_t		send_lock;	/* protect wr_sends */
+	atomic_t		cdc_pend_tx_wr; /* number of pending tx CDC wqe
+						 * - inc when post wqe,
+						 * - dec on polled tx cqe
+						 */
+	wait_queue_head_t	cdc_pend_tx_wq; /* wakeup on no cdc_pend_tx_wr*/
 	struct delayed_work	tx_work;	/* retry of smc_cdc_msg_send */
 	u32			tx_off;		/* base offset in peer rmb */
+24-28
net/smc/smc_cdc.c
···
 	struct smc_sock *smc;
 	int diff;
 
-	if (!conn)
-		/* already dismissed */
-		return;
-
 	smc = container_of(conn, struct smc_sock, conn);
 	bh_lock_sock(&smc->sk);
 	if (!wc_status) {
···
 			       conn);
 		conn->tx_cdc_seq_fin = cdcpend->ctrl_seq;
 	}
+
+	if (atomic_dec_and_test(&conn->cdc_pend_tx_wr) &&
+	    unlikely(wq_has_sleeper(&conn->cdc_pend_tx_wq)))
+		wake_up(&conn->cdc_pend_tx_wq);
+	WARN_ON(atomic_read(&conn->cdc_pend_tx_wr) < 0);
+
 	smc_tx_sndbuf_nonfull(smc);
 	bh_unlock_sock(&smc->sk);
 }
···
 	conn->tx_cdc_seq++;
 	conn->local_tx_ctrl.seqno = conn->tx_cdc_seq;
 	smc_host_msg_to_cdc((struct smc_cdc_msg *)wr_buf, conn, &cfed);
+
+	atomic_inc(&conn->cdc_pend_tx_wr);
+	smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */
+
 	rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend);
 	if (!rc) {
 		smc_curs_copy(&conn->rx_curs_confirmed, &cfed, conn);
···
 	} else {
 		conn->tx_cdc_seq--;
 		conn->local_tx_ctrl.seqno = conn->tx_cdc_seq;
+		atomic_dec(&conn->cdc_pend_tx_wr);
 	}
 
 	return rc;
···
 	peer->token = htonl(local->token);
 	peer->prod_flags.failover_validation = 1;
 
+	/* We need to set pend->conn here to make sure smc_cdc_tx_handler()
+	 * can handle it properly
+	 */
+	smc_cdc_add_pending_send(conn, pend);
+
+	atomic_inc(&conn->cdc_pend_tx_wr);
+	smp_mb__after_atomic(); /* Make sure cdc_pend_tx_wr added before post */
+
 	rc = smc_wr_tx_send(link, (struct smc_wr_tx_pend_priv *)pend);
+	if (unlikely(rc))
+		atomic_dec(&conn->cdc_pend_tx_wr);
+
 	return rc;
 }
···
 	return rc;
 }
 
-static bool smc_cdc_tx_filter(struct smc_wr_tx_pend_priv *tx_pend,
-			      unsigned long data)
+void smc_cdc_wait_pend_tx_wr(struct smc_connection *conn)
 {
-	struct smc_connection *conn = (struct smc_connection *)data;
-	struct smc_cdc_tx_pend *cdc_pend =
-		(struct smc_cdc_tx_pend *)tx_pend;
-
-	return cdc_pend->conn == conn;
-}
-
-static void smc_cdc_tx_dismisser(struct smc_wr_tx_pend_priv *tx_pend)
-{
-	struct smc_cdc_tx_pend *cdc_pend =
-		(struct smc_cdc_tx_pend *)tx_pend;
-
-	cdc_pend->conn = NULL;
-}
-
-void smc_cdc_tx_dismiss_slots(struct smc_connection *conn)
-{
-	struct smc_link *link = conn->lnk;
-
-	smc_wr_tx_dismiss_slots(link, SMC_CDC_MSG_TYPE,
-				smc_cdc_tx_filter, smc_cdc_tx_dismisser,
-				(unsigned long)conn);
+	wait_event(conn->cdc_pend_tx_wq, !atomic_read(&conn->cdc_pend_tx_wr));
 }
 
 /* Send a SMC-D CDC header.
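The smc_cdc.c change replaces slot "dismissing" with simple drain accounting: every posted work request increments `cdc_pend_tx_wr`, every polled completion decrements it and wakes `cdc_pend_tx_wq`, and teardown sleeps until the counter is zero. A userspace sketch of that pattern follows, using a pthread condition variable in place of the kernel waitqueue and hypothetical names (`post_send`, `complete_send`, `wait_no_pending`); the kernel version additionally orders the increment before the post with `smp_mb__after_atomic()`.

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t drained = PTHREAD_COND_INITIALIZER;
static int pend_tx;	/* stand-in for conn->cdc_pend_tx_wr */

static void post_send(void)		/* before posting a work request */
{
	pthread_mutex_lock(&lock);
	pend_tx++;
	pthread_mutex_unlock(&lock);
}

static void complete_send(void)		/* from the completion handler */
{
	pthread_mutex_lock(&lock);
	if (--pend_tx == 0)
		pthread_cond_broadcast(&drained);	/* wake teardown */
	pthread_mutex_unlock(&lock);
}

static void wait_no_pending(void)	/* teardown path */
{
	pthread_mutex_lock(&lock);
	while (pend_tx > 0)
		pthread_cond_wait(&drained, &lock);
	pthread_mutex_unlock(&lock);
}
```

Waiting for the counter to drain guarantees no completion handler can run after the connection is torn down, which is what the removed `conn == NULL` "already dismissed" check papered over.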
···
 	for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
 		struct smc_link *lnk = &lgr->lnk[i];
 
-		if (smc_link_usable(lnk))
+		if (smc_link_sendable(lnk))
 			lnk->state = SMC_LNK_INACTIVE;
 	}
 	wake_up_all(&lgr->llc_msg_waiter);
···
 		smc_ism_unset_conn(conn);
 		tasklet_kill(&conn->rx_tsklet);
 	} else {
-		smc_cdc_tx_dismiss_slots(conn);
+		smc_cdc_wait_pend_tx_wr(conn);
 		if (current_work() != &conn->abort_work)
 			cancel_work_sync(&conn->abort_work);
 	}
···
 	smc_llc_link_clear(lnk, log);
 	smcr_buf_unmap_lgr(lnk);
 	smcr_rtoken_clear_link(lnk);
-	smc_ib_modify_qp_reset(lnk);
+	smc_ib_modify_qp_error(lnk);
 	smc_wr_free_link(lnk);
 	smc_ib_destroy_queue_pair(lnk);
 	smc_ib_dealloc_protection_domain(lnk);
···
 		else
 			tasklet_unlock_wait(&conn->rx_tsklet);
 	} else {
-		smc_cdc_tx_dismiss_slots(conn);
+		smc_cdc_wait_pend_tx_wr(conn);
 	}
 	smc_lgr_unregister_conn(conn);
 	smc_close_active_abort(smc);
···
 /* Called when an SMCR device is removed or the smc module is unloaded.
  * If smcibdev is given, all SMCR link groups using this device are terminated.
  * If smcibdev is NULL, all SMCR link groups are terminated.
+ *
+ * We must wait here for the QPs to be destroyed before we destroy the CQs,
+ * or we won't receive any CQEs and cdc_pend_tx_wr cannot reach 0, thus
+ * smc_sock cannot be released.
  */
 void smc_smcr_terminate_all(struct smc_ib_device *smcibdev)
 {
 	struct smc_link_group *lgr, *lg;
 	LIST_HEAD(lgr_free_list);
+	LIST_HEAD(lgr_linkdown_list);
 	int i;
 
 	spin_lock_bh(&smc_lgr_list.lock);
···
 		list_for_each_entry_safe(lgr, lg, &smc_lgr_list.list, list) {
 			for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
 				if (lgr->lnk[i].smcibdev == smcibdev)
-					smcr_link_down_cond_sched(&lgr->lnk[i]);
+					list_move_tail(&lgr->list, &lgr_linkdown_list);
 			}
 		}
 	}
···
 		list_del_init(&lgr->list);
 		smc_llc_set_termination_rsn(lgr, SMC_LLC_DEL_OP_INIT_TERM);
 		__smc_lgr_terminate(lgr, false);
+	}
+
+	list_for_each_entry_safe(lgr, lg, &lgr_linkdown_list, list) {
+		for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
+			if (lgr->lnk[i].smcibdev == smcibdev) {
+				mutex_lock(&lgr->llc_conf_mutex);
+				smcr_link_down_cond(&lgr->lnk[i]);
+				mutex_unlock(&lgr->llc_conf_mutex);
+			}
+		}
 	}
 
 	if (smcibdev) {
···
 	if (!lgr || lnk->state == SMC_LNK_UNUSED || list_empty(&lgr->list))
 		return;
 
-	smc_ib_modify_qp_reset(lnk);
 	to_lnk = smc_switch_conns(lgr, lnk, true);
 	if (!to_lnk) { /* no backup link available */
 		smcr_link_clear(lnk, true);
···
 	conn->local_tx_ctrl.common.type = SMC_CDC_MSG_TYPE;
 	conn->local_tx_ctrl.len = SMC_WR_TX_SIZE;
 	conn->urg_state = SMC_URG_READ;
+	init_waitqueue_head(&conn->cdc_pend_tx_wq);
 	INIT_WORK(&smc->conn.abort_work, smc_conn_abort_work);
 	if (ini->is_smcd) {
 		conn->rx_off = sizeof(struct smcd_cdc_msg);
···
 	delllc.reason = htonl(rsn);
 
 	for (i = 0; i < SMC_LINKS_PER_LGR_MAX; i++) {
-		if (!smc_link_usable(&lgr->lnk[i]))
+		if (!smc_link_sendable(&lgr->lnk[i]))
 			continue;
 		if (!smc_llc_send_message_wait(&lgr->lnk[i], &delllc))
 			break;
+9-42
net/smc/smc_wr.c
···
 }
 
 /* wait till all pending tx work requests on the given link are completed */
-int smc_wr_tx_wait_no_pending_sends(struct smc_link *link)
+void smc_wr_tx_wait_no_pending_sends(struct smc_link *link)
 {
-	if (wait_event_timeout(link->wr_tx_wait, !smc_wr_is_tx_pend(link),
-			       SMC_WR_TX_WAIT_PENDING_TIME))
-		return 0;
-	else /* timeout */
-		return -EPIPE;
+	wait_event(link->wr_tx_wait, !smc_wr_is_tx_pend(link));
 }
 
 static inline int smc_wr_tx_find_pending_index(struct smc_link *link, u64 wr_id)
···
 	struct smc_wr_tx_pend pnd_snd;
 	struct smc_link *link;
 	u32 pnd_snd_idx;
-	int i;
 
 	link = wc->qp->qp_context;
···
 	}
 
 	if (wc->status) {
-		for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) {
-			/* clear full struct smc_wr_tx_pend including .priv */
-			memset(&link->wr_tx_pends[i], 0,
-			       sizeof(link->wr_tx_pends[i]));
-			memset(&link->wr_tx_bufs[i], 0,
-			       sizeof(link->wr_tx_bufs[i]));
-			clear_bit(i, link->wr_tx_mask);
-		}
 		if (link->lgr->smc_version == SMC_V2) {
 			memset(link->wr_tx_v2_pend, 0,
 			       sizeof(*link->wr_tx_v2_pend));
···
 static inline int smc_wr_tx_get_free_slot_index(struct smc_link *link, u32 *idx)
 {
 	*idx = link->wr_tx_cnt;
-	if (!smc_link_usable(link))
+	if (!smc_link_sendable(link))
 		return -ENOLINK;
 	for_each_clear_bit(*idx, link->wr_tx_mask, link->wr_tx_cnt) {
 		if (!test_and_set_bit(*idx, link->wr_tx_mask))
···
 	} else {
 		rc = wait_event_interruptible_timeout(
 			link->wr_tx_wait,
-			!smc_link_usable(link) ||
+			!smc_link_sendable(link) ||
 			lgr->terminating ||
 			(smc_wr_tx_get_free_slot_index(link, &idx) != -EBUSY),
 			SMC_WR_TX_WAIT_FREE_SLOT_TIME);
···
 			 unsigned long timeout)
 {
 	struct smc_wr_tx_pend *pend;
+	u32 pnd_idx;
 	int rc;
 
 	pend = container_of(priv, struct smc_wr_tx_pend, priv);
 	pend->compl_requested = 1;
-	init_completion(&link->wr_tx_compl[pend->idx]);
+	pnd_idx = pend->idx;
+	init_completion(&link->wr_tx_compl[pnd_idx]);
 
 	rc = smc_wr_tx_send(link, priv);
 	if (rc)
 		return rc;
 	/* wait for completion by smc_wr_tx_process_cqe() */
 	rc = wait_for_completion_interruptible_timeout(
-		&link->wr_tx_compl[pend->idx], timeout);
+		&link->wr_tx_compl[pnd_idx], timeout);
 	if (rc <= 0)
 		rc = -ENODATA;
 	if (rc > 0)
···
 		break;
 	}
 	return rc;
-}
-
-void smc_wr_tx_dismiss_slots(struct smc_link *link, u8 wr_tx_hdr_type,
-			     smc_wr_tx_filter filter,
-			     smc_wr_tx_dismisser dismisser,
-			     unsigned long data)
-{
-	struct smc_wr_tx_pend_priv *tx_pend;
-	struct smc_wr_rx_hdr *wr_tx;
-	int i;
-
-	for_each_set_bit(i, link->wr_tx_mask, link->wr_tx_cnt) {
-		wr_tx = (struct smc_wr_rx_hdr *)&link->wr_tx_bufs[i];
-		if (wr_tx->type != wr_tx_hdr_type)
-			continue;
-		tx_pend = &link->wr_tx_pends[i].priv;
-		if (filter(tx_pend, data))
-			dismisser(tx_pend);
-	}
 }
 
 /****************************** receive queue ********************************/
···
 	smc_wr_wakeup_reg_wait(lnk);
 	smc_wr_wakeup_tx_wait(lnk);
 
-	if (smc_wr_tx_wait_no_pending_sends(lnk))
-		memset(lnk->wr_tx_mask, 0,
-		       BITS_TO_LONGS(SMC_WR_BUF_CNT) *
-		       sizeof(*lnk->wr_tx_mask));
+	smc_wr_tx_wait_no_pending_sends(lnk);
 	wait_event(lnk->wr_reg_wait, (!atomic_read(&lnk->wr_reg_refcnt)));
 	wait_event(lnk->wr_tx_wait, (!atomic_read(&lnk->wr_tx_refcnt)));
+2-3
net/smc/smc_wr.h
···
 #define SMC_WR_BUF_CNT 16	/* # of ctrl buffers per link */
 
 #define SMC_WR_TX_WAIT_FREE_SLOT_TIME	(10 * HZ)
-#define SMC_WR_TX_WAIT_PENDING_TIME	(5 * HZ)
 
 #define SMC_WR_TX_SIZE 44 /* actual size of wr_send data (<=SMC_WR_BUF_SIZE) */
 
···
 
 static inline bool smc_wr_tx_link_hold(struct smc_link *link)
 {
-	if (!smc_link_usable(link))
+	if (!smc_link_sendable(link))
 		return false;
 	atomic_inc(&link->wr_tx_refcnt);
 	return true;
···
 			  smc_wr_tx_filter filter,
 			  smc_wr_tx_dismisser dismisser,
 			  unsigned long data);
-int smc_wr_tx_wait_no_pending_sends(struct smc_link *link);
+void smc_wr_tx_wait_no_pending_sends(struct smc_link *link);
 
 int smc_wr_rx_register_handler(struct smc_wr_rx_handler *handler);
 int smc_wr_rx_post_init(struct smc_link *link);
+4-4
net/tipc/crypto.c
···
 		return -EEXIST;
 
 	/* Allocate a new AEAD */
-	tmp = kzalloc(sizeof(*tmp), GFP_KERNEL);
+	tmp = kzalloc(sizeof(*tmp), GFP_ATOMIC);
 	if (unlikely(!tmp))
 		return -ENOMEM;
 
···
 		return -EEXIST;
 
 	/* Allocate crypto */
-	c = kzalloc(sizeof(*c), GFP_KERNEL);
+	c = kzalloc(sizeof(*c), GFP_ATOMIC);
 	if (!c)
 		return -ENOMEM;
 
···
 	}
 
 	/* Allocate statistic structure */
-	c->stats = alloc_percpu(struct tipc_crypto_stats);
+	c->stats = alloc_percpu_gfp(struct tipc_crypto_stats, GFP_ATOMIC);
 	if (!c->stats) {
 		if (c->wq)
 			destroy_workqueue(c->wq);
···
 	}
 
 	/* Lets duplicate it first */
-	skey = kmemdup(aead->key, tipc_aead_key_size(aead->key), GFP_KERNEL);
+	skey = kmemdup(aead->key, tipc_aead_key_size(aead->key), GFP_ATOMIC);
 	rcu_read_unlock();
 
 	/* Now, generate new key, initiate & distribute it */
···
 		return AE_NOT_FOUND;
 	}
 
-	info->handle = handle;
-
 	/*
 	 * On some Intel platforms, multiple children of the HDAS
 	 * device can be found, but only one of them is the SoundWire
···
 	 */
 	if (FIELD_GET(GENMASK(31, 28), adr) != SDW_LINK_TYPE)
 		return AE_OK; /* keep going */
+
+	/* found the correct SoundWire controller */
+	info->handle = handle;
 
 	/* device found, stop namespace walk */
 	return AE_CTRL_TERMINATE;
···
 	acpi_status status;
 
 	info->handle = NULL;
+	/*
+	 * In the HDAS ACPI scope, 'SNDW' may be either the child of
+	 * 'HDAS' or the grandchild of 'HDAS'. So let's go through
+	 * the ACPI from 'HDAS' at max depth of 2 to find the 'SNDW'
+	 * device.
+	 */
 	status = acpi_walk_namespace(ACPI_TYPE_DEVICE,
-				     parent_handle, 1,
+				     parent_handle, 2,
 				     sdw_intel_acpi_cb,
 				     NULL, info, NULL);
 	if (ACPI_FAILURE(status) || info->handle == NULL)
+15-6
sound/pci/hda/patch_hdmi.c
···2947294729482948/* Intel Haswell and onwards; audio component with eld notifier */29492949static int intel_hsw_common_init(struct hda_codec *codec, hda_nid_t vendor_nid,29502950- const int *port_map, int port_num, int dev_num)29502950+ const int *port_map, int port_num, int dev_num,29512951+ bool send_silent_stream)29512952{29522953 struct hdmi_spec *spec;29532954 int err;···29812980 * Enable silent stream feature, if it is enabled via29822981 * module param or Kconfig option29832982 */29842984- if (enable_silent_stream)29832983+ if (send_silent_stream)29852984 spec->send_silent_stream = true;2986298529872986 return parse_intel_hdmi(codec);···2989298829902989static int patch_i915_hsw_hdmi(struct hda_codec *codec)29912990{29922992- return intel_hsw_common_init(codec, 0x08, NULL, 0, 3);29912991+ return intel_hsw_common_init(codec, 0x08, NULL, 0, 3,29922992+ enable_silent_stream);29932993}2994299429952995static int patch_i915_glk_hdmi(struct hda_codec *codec)29962996{29972997- return intel_hsw_common_init(codec, 0x0b, NULL, 0, 3);29972997+ /*29982998+ * Silent stream calls audio component .get_power() from29992999+ * .pin_eld_notify(). On GLK this will deadlock in i915 due30003000+ * to the audio vs. CDCLK workaround.30013001+ */30023002+ return intel_hsw_common_init(codec, 0x0b, NULL, 0, 3, false);29983003}2999300430003005static int patch_i915_icl_hdmi(struct hda_codec *codec)···30113004 */30123005 static const int map[] = {0x0, 0x4, 0x6, 0x8, 0xa, 0xb};3013300630143014- return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 3);30073007+ return intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 3,30083008+ enable_silent_stream);30153009}3016301030173011static int patch_i915_tgl_hdmi(struct hda_codec *codec)···30243016 static const int map[] = {0x4, 0x6, 0x8, 0xa, 0xb, 0xc, 0xd, 0xe, 0xf};30253017 int ret;3026301830273027- ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 4);30193019+ ret = intel_hsw_common_init(codec, 0x02, map, ARRAY_SIZE(map), 4,30203020+ enable_silent_stream);30283021 if (!ret) {30293022 struct hdmi_spec *spec = codec->spec;30303023
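The HDMI patch above threads a `send_silent_stream` bool through `intel_hsw_common_init()` so GLK can force the feature off while the other platforms keep honoring the module parameter. A stripped-down userspace sketch of that plumbing (hypothetical struct and function names):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the enable_silent_stream module parameter. */
static bool enable_silent_stream = true;

struct spec {
	bool send_silent_stream;
};

/* Common init: the policy is now an explicit caller decision. */
static void common_init(struct spec *s, bool send_silent_stream)
{
	s->send_silent_stream = send_silent_stream;
}

static void patch_hsw(struct spec *s)
{
	common_init(s, enable_silent_stream); /* follow the module param */
}

static void patch_glk(struct spec *s)
{
	/* GLK would deadlock in i915, so the feature is forced off. */
	common_init(s, false);
}
```

Pushing the decision to the call site keeps the common path free of per-platform special cases, which is why the diff adds a parameter instead of a GLK check inside `intel_hsw_common_init()`.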
···2020#define AIU_MEM_I2S_CONTROL_MODE_16BIT BIT(6)2121#define AIU_MEM_I2S_BUF_CNTL_INIT BIT(0)2222#define AIU_RST_SOFT_I2S_FAST BIT(0)2323+#define AIU_I2S_MISC_HOLD_EN BIT(2)2424+#define AIU_I2S_MISC_FORCE_LEFT_RIGHT BIT(4)23252426#define AIU_FIFO_I2S_BLOCK 2562527···9290 unsigned int val;9391 int ret;94929393+ snd_soc_component_update_bits(component, AIU_I2S_MISC,9494+ AIU_I2S_MISC_HOLD_EN,9595+ AIU_I2S_MISC_HOLD_EN);9696+9597 ret = aiu_fifo_hw_params(substream, params, dai);9698 if (ret)9799 return ret;···122116 val = FIELD_PREP(AIU_MEM_I2S_MASKS_IRQ_BLOCK, val);123117 snd_soc_component_update_bits(component, AIU_MEM_I2S_MASKS,124118 AIU_MEM_I2S_MASKS_IRQ_BLOCK, val);119119+120120+ /*121121+ * Most (all?) supported SoCs have this bit set by default. The vendor122122+ * driver however sets it manually (depending on the version either123123+ * while un-setting AIU_I2S_MISC_HOLD_EN or right before that). Follow124124+ * the same approach for consistency with the vendor driver.125125+ */126126+ snd_soc_component_update_bits(component, AIU_I2S_MISC,127127+ AIU_I2S_MISC_FORCE_LEFT_RIGHT,128128+ AIU_I2S_MISC_FORCE_LEFT_RIGHT);129129+130130+ snd_soc_component_update_bits(component, AIU_I2S_MISC,131131+ AIU_I2S_MISC_HOLD_EN, 0);125132126133 return 0;127134}
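The AIU hunks above bracket the FIFO reconfiguration with `AIU_I2S_MISC_HOLD_EN`: set it before touching the registers, then force `AIU_I2S_MISC_FORCE_LEFT_RIGHT` and release the hold. The read-modify-write semantics behind `snd_soc_component_update_bits()` can be sketched in userspace against a fake register (bit values taken from the BIT() defines in the diff):

```c
#include <assert.h>
#include <stdint.h>

#define AIU_I2S_MISC_HOLD_EN          (1u << 2)
#define AIU_I2S_MISC_FORCE_LEFT_RIGHT (1u << 4)

/*
 * Read-modify-write, like snd_soc_component_update_bits(): only the
 * bits selected by 'mask' are replaced by the matching bits of 'val'.
 */
static void update_bits(uint32_t *reg, uint32_t mask, uint32_t val)
{
	*reg = (*reg & ~mask) | (val & mask);
}

/* The hw_params sequence from the patch, against a fake AIU_I2S_MISC. */
static uint32_t run_hw_params_sequence(void)
{
	uint32_t misc = 0;

	/* hold the FIFO while it is being reconfigured */
	update_bits(&misc, AIU_I2S_MISC_HOLD_EN, AIU_I2S_MISC_HOLD_EN);
	/* ... FIFO and IRQ-block setup would happen here ... */
	update_bits(&misc, AIU_I2S_MISC_FORCE_LEFT_RIGHT,
		    AIU_I2S_MISC_FORCE_LEFT_RIGHT);
	/* release the hold */
	update_bits(&misc, AIU_I2S_MISC_HOLD_EN, 0);

	return misc;
}
```

After the sequence the register keeps FORCE_LEFT_RIGHT set with HOLD_EN cleared, and all bits outside the mask are untouched on every step.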