@@ -100,6 +100,8 @@
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A510     | #2051678        | ARM64_ERRATUM_2051678       |
 +----------------+-----------------+-----------------+-----------------------------+
+| ARM            | Cortex-A510     | #2077057        | ARM64_ERRATUM_2077057       |
++----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A710     | #2119858        | ARM64_ERRATUM_2119858       |
 +----------------+-----------------+-----------------+-----------------------------+
 | ARM            | Cortex-A710     | #2054223        | ARM64_ERRATUM_2054223       |
+8
Documentation/dev-tools/kselftest.rst
@@ -7,6 +7,14 @@
 paths in the kernel. Tests are intended to be run after building, installing
 and booting a kernel.
 
+Kselftest from mainline can be run on older stable kernels. Running tests
+from mainline offers the best coverage. Several test rings run mainline
+kselftest suite on stable releases. The reason is that when a new test
+gets added to test existing code to regression test a bug, we should be
+able to run that test on an older kernel. Hence, it is important to keep
+code that can still test an older kernel and make sure it skips the test
+gracefully on newer releases.
+
 You can find additional information on Kselftest framework, how to
 write new tests using the framework on Kselftest wiki:
 
@@ -107,6 +107,10 @@
           - const: imem
           - const: config
 
+  qcom,qmp:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description: phandle to the AOSS side-channel message RAM
+
   qcom,smem-states:
     $ref: /schemas/types.yaml#/definitions/phandle-array
     description: State bits used in by the AP to signal the modem.
@@ -225,6 +221,8 @@
         interconnect-names = "memory",
                              "imem",
                              "config";
+
+        qcom,qmp = <&aoss_qmp>;
 
         qcom,smem-states = <&ipa_smp2p_out 0>,
                            <&ipa_smp2p_out 1>;
@@ -23,8 +23,9 @@
     minItems: 1
     maxItems: 256
     items:
-      minimum: 0
-      maximum: 256
+      items:
+        - minimum: 0
+          maximum: 256
   description:
     Chip select used by the device.
 
+16
Documentation/filesystems/netfs_library.rst
@@ -462,6 +462,10 @@
 			     struct iov_iter *iter,
 			     netfs_io_terminated_t term_func,
 			     void *term_func_priv);
+
+	int (*query_occupancy)(struct netfs_cache_resources *cres,
+			       loff_t start, size_t len, size_t granularity,
+			       loff_t *_data_start, size_t *_data_len);
 };
 
 With a termination handler function pointer::
@@ -539,6 +535,18 @@
    with the number of bytes transferred or an error code, plus a flag
    indicating whether the termination is definitely happening in the caller's
    context.
+
+ * ``query_occupancy()``
+
+   [Required] Called to find out where the next piece of data is within a
+   particular region of the cache. The start and length of the region to be
+   queried are passed in, along with the granularity to which the answer needs
+   to be aligned. The function passes back the start and length of the data,
+   if any, available within that region. Note that there may be a hole at the
+   front.
+
+   It returns 0 if some data was found, -ENODATA if there was no usable data
+   within the region or -ENOBUFS if there is no caching on this file.
 
 Note that these methods are passed a pointer to the cache resource structure,
 not the read request structure as they could be used in other situations where
-24
Documentation/gpu/todo.rst
@@ -300,30 +300,6 @@
 
 Level: Advanced
 
-Garbage collect fbdev scrolling acceleration
---------------------------------------------
-
-Scroll acceleration has been disabled in fbcon. Now it works as the old
-SCROLL_REDRAW mode. A ton of code was removed in fbcon.c and the hook bmove was
-removed from fbcon_ops.
-Remaining tasks:
-
-- a bunch of the hooks in fbcon_ops could be removed or simplified by calling
-  directly instead of the function table (with a switch on p->rotate)
-
-- fb_copyarea is unused after this, and can be deleted from all drivers
-
-- after that, fb_copyarea can be deleted from fb_ops in include/linux/fb.h as
-  well as cfb_copyarea
-
-Note that not all acceleration code can be deleted, since clearing and cursor
-support is still accelerated, which might be good candidates for further
-deletion projects.
-
-Contact: Daniel Vetter
-
-Level: Intermediate
-
 idr_init_base()
 ---------------
 
@@ -4157,9 +4157,8 @@
 K:	csky
 
 CA8210 IEEE-802.15.4 RADIO DRIVER
-M:	Harry Morris <h.morris@cascoda.com>
 L:	linux-wpan@vger.kernel.org
-S:	Maintained
+S:	Orphan
 W:	https://github.com/Cascoda/ca8210-linux.git
 F:	Documentation/devicetree/bindings/net/ieee802154/ca8210.txt
 F:	drivers/net/ieee802154/ca8210.c
@@ -10879,6 +10880,12 @@
 F:	drivers/ata/pata_arasan_cf.c
 F:	include/linux/pata_arasan_cf_data.h
 
+LIBATA PATA DRIVERS
+R:	Sergey Shtylyov <s.shtylyov@omp.ru>
+L:	linux-ide@vger.kernel.org
+F:	drivers/ata/ata_*.c
+F:	drivers/ata/pata_*.c
+
 LIBATA PATA FARADAY FTIDE010 AND GEMINI SATA BRIDGE DRIVERS
 M:	Linus Walleij <linus.walleij@linaro.org>
 L:	linux-ide@vger.kernel.org
@@ -12405,7 +12400,7 @@
 F:	kernel/sched/membarrier.c
 
 MEMBLOCK
-M:	Mike Rapoport <rppt@linux.ibm.com>
+M:	Mike Rapoport <rppt@kernel.org>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	Documentation/core-api/boot-time-mm.rst
@@ -16473,6 +16468,14 @@
 F:	Documentation/devicetree/bindings/i2c/renesas,rmobile-iic.yaml
 F:	drivers/i2c/busses/i2c-rcar.c
 F:	drivers/i2c/busses/i2c-sh_mobile.c
+
+RENESAS R-CAR SATA DRIVER
+R:	Sergey Shtylyov <s.shtylyov@omp.ru>
+S:	Supported
+L:	linux-ide@vger.kernel.org
+L:	linux-renesas-soc@vger.kernel.org
+F:	Documentation/devicetree/bindings/ata/renesas,rcar-sata.yaml
+F:	drivers/ata/sata_rcar.c
 
 RENESAS R-CAR THERMAL DRIVERS
 M:	Niklas Söderlund <niklas.soderlund@ragnatech.se>
+2-2
arch/arm/crypto/blake2s-shash.c
@@ -13,12 +13,12 @@
 static int crypto_blake2s_update_arm(struct shash_desc *desc,
 				     const u8 *in, unsigned int inlen)
 {
-	return crypto_blake2s_update(desc, in, inlen, blake2s_compress);
+	return crypto_blake2s_update(desc, in, inlen, false);
 }
 
 static int crypto_blake2s_final_arm(struct shash_desc *desc, u8 *out)
 {
-	return crypto_blake2s_final(desc, out, blake2s_compress);
+	return crypto_blake2s_final(desc, out, false);
 }
 
 #define BLAKE2S_ALG(name, driver_name, digest_size)			\
+16
arch/arm64/Kconfig
@@ -680,6 +680,22 @@
 
 	  If unsure, say Y.
 
+config ARM64_ERRATUM_2077057
+	bool "Cortex-A510: 2077057: workaround software-step corrupting SPSR_EL2"
+	help
+	  This option adds the workaround for ARM Cortex-A510 erratum 2077057.
+	  Affected Cortex-A510 may corrupt SPSR_EL2 when a step exception is
+	  expected, but a Pointer Authentication trap is taken instead. The
+	  erratum causes SPSR_EL1 to be copied to SPSR_EL2, which could allow
+	  EL1 to cause a return to EL2 with a guest controlled ELR_EL2.
+
+	  This can only happen when EL2 is stepping EL1.
+
+	  When these conditions occur, the SPSR_EL2 value is unchanged from the
+	  previous guest entry, and can be restored from the in-memory copy.
+
+	  If unsure, say Y.
+
 config ARM64_ERRATUM_2119858
 	bool "Cortex-A710/X2: 2119858: workaround TRBE overwriting trace data in FILL mode"
 	default y
@@ -797,6 +797,24 @@
 		xfer_to_guest_mode_work_pending();
 }
 
+/*
+ * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while
+ * the vCPU is running.
+ *
+ * This must be noinstr as instrumentation may make use of RCU, and this is not
+ * safe during the EQS.
+ */
+static int noinstr kvm_arm_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+{
+	int ret;
+
+	guest_state_enter_irqoff();
+	ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
+	guest_state_exit_irqoff();
+
+	return ret;
+}
+
 /**
  * kvm_arch_vcpu_ioctl_run - the main VCPU run function to execute guest code
  * @vcpu:	The VCPU pointer
@@ -899,9 +881,9 @@
 		 * Enter the guest
 		 */
 		trace_kvm_entry(*vcpu_pc(vcpu));
-		guest_enter_irqoff();
+		guest_timing_enter_irqoff();
 
-		ret = kvm_call_hyp_ret(__kvm_vcpu_run, vcpu);
+		ret = kvm_arm_vcpu_enter_exit(vcpu);
 
 		vcpu->mode = OUTSIDE_GUEST_MODE;
 		vcpu->stat.exits++;
@@ -936,26 +918,23 @@
 		kvm_arch_vcpu_ctxsync_fp(vcpu);
 
 		/*
-		 * We may have taken a host interrupt in HYP mode (ie
-		 * while executing the guest). This interrupt is still
-		 * pending, as we haven't serviced it yet!
+		 * We must ensure that any pending interrupts are taken before
+		 * we exit guest timing so that timer ticks are accounted as
+		 * guest time. Transiently unmask interrupts so that any
+		 * pending interrupts are taken.
 		 *
-		 * We're now back in SVC mode, with interrupts
-		 * disabled.  Enabling the interrupts now will have
-		 * the effect of taking the interrupt again, in SVC
-		 * mode this time.
+		 * Per ARM DDI 0487G.b section D1.13.4, an ISB (or other
+		 * context synchronization event) is necessary to ensure that
+		 * pending interrupts are taken.
 		 */
 		local_irq_enable();
+		isb();
+		local_irq_disable();
 
-		/*
-		 * We do local_irq_enable() before calling guest_exit() so
-		 * that if a timer interrupt hits while running the guest we
-		 * account that tick as being spent in the guest. We enable
-		 * preemption after calling guest_exit() so that if we get
-		 * preempted we make sure ticks after that is not counted as
-		 * guest time.
-		 */
-		guest_exit();
+		guest_timing_exit_irqoff();
+
+		local_irq_enable();
+
 		trace_kvm_exit(ret, kvm_vcpu_trap_get_class(vcpu), *vcpu_pc(vcpu));
 
 		/* Exit types that need handling before we can be preempted */
+8
arch/arm64/kvm/handle_exit.c
@@ -228,6 +228,14 @@
 {
 	struct kvm_run *run = vcpu->run;
 
+	if (ARM_SERROR_PENDING(exception_index)) {
+		/*
+		 * The SError is handled by handle_exit_early(). If the guest
+		 * survives it will re-execute the original instruction.
+		 */
+		return 1;
+	}
+
 	exception_index = ARM_EXCEPTION_CODE(exception_index);
 
 	switch (exception_index) {
+21-2
arch/arm64/kvm/hyp/include/hyp/switch.h
@@ -402,6 +402,24 @@
 	return false;
 }
 
+static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code)
+{
+	/*
+	 * Check for the conditions of Cortex-A510's #2077057. When these occur
+	 * SPSR_EL2 can't be trusted, but isn't needed either as it is
+	 * unchanged from the value in vcpu_gp_regs(vcpu)->pstate.
+	 * Are we single-stepping the guest, and took a PAC exception from the
+	 * active-not-pending state?
+	 */
+	if (cpus_have_final_cap(ARM64_WORKAROUND_2077057) &&
+	    vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP &&
+	    *vcpu_cpsr(vcpu) & DBG_SPSR_SS &&
+	    ESR_ELx_EC(read_sysreg_el2(SYS_ESR)) == ESR_ELx_EC_PAC)
+		write_sysreg_el2(*vcpu_cpsr(vcpu), SYS_SPSR);
+
+	vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
+}
+
 /*
  * Return true when we were able to fixup the guest exit and should return to
  * the guest, false when we should restore the host state and return to the
@@ -431,7 +413,7 @@
 	 * Save PSTATE early so that we can evaluate the vcpu mode
 	 * early on.
 	 */
-	vcpu->arch.ctxt.regs.pstate = read_sysreg_el2(SYS_SPSR);
+	synchronize_vcpu_pstate(vcpu, exit_code);
 
 	/*
 	 * Check whether we want to repaint the state one way or
@@ -442,7 +424,8 @@
 	if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
 		vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
 
-	if (ARM_SERROR_PENDING(*exit_code)) {
+	if (ARM_SERROR_PENDING(*exit_code) &&
+	    ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ) {
 		u8 esr_ec = kvm_vcpu_trap_get_class(vcpu);
 
 		/*
@@ -414,6 +414,24 @@
 	return -ENOIOCTLCMD;
 }
 
+/*
+ * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while
+ * the vCPU is running.
+ *
+ * This must be noinstr as instrumentation may make use of RCU, and this is not
+ * safe during the EQS.
+ */
+static int noinstr kvm_mips_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+{
+	int ret;
+
+	guest_state_enter_irqoff();
+	ret = kvm_mips_callbacks->vcpu_run(vcpu);
+	guest_state_exit_irqoff();
+
+	return ret;
+}
+
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 {
 	int r = -EINTR;
@@ -452,7 +434,7 @@
 	lose_fpu(1);
 
 	local_irq_disable();
-	guest_enter_irqoff();
+	guest_timing_enter_irqoff();
 	trace_kvm_enter(vcpu);
 
 	/*
@@ -463,10 +445,23 @@
 	 */
 	smp_store_mb(vcpu->mode, IN_GUEST_MODE);
 
-	r = kvm_mips_callbacks->vcpu_run(vcpu);
+	r = kvm_mips_vcpu_enter_exit(vcpu);
+
+	/*
+	 * We must ensure that any pending interrupts are taken before
+	 * we exit guest timing so that timer ticks are accounted as
+	 * guest time. Transiently unmask interrupts so that any
+	 * pending interrupts are taken.
+	 *
+	 * TODO: is there a barrier which ensures that pending interrupts are
+	 * recognised? Currently this just hopes that the CPU takes any pending
+	 * interrupts between the enable and disable.
+	 */
+	local_irq_enable();
+	local_irq_disable();
 
 	trace_kvm_out(vcpu);
-	guest_exit_irqoff();
+	guest_timing_exit_irqoff();
 	local_irq_enable();
 
 out:
@@ -1199,7 +1168,7 @@
 /*
  * Return value is in the form (errcode<<2 | RESUME_FLAG_HOST | RESUME_FLAG_NV)
  */
-int kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
+static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
 {
 	struct kvm_run *run = vcpu->run;
 	u32 cause = vcpu->arch.host_cp0_cause;
@@ -1385,6 +1354,17 @@
 		    read_c0_config5() & MIPS_CONF5_MSAEN)
 			__kvm_restore_msacsr(&vcpu->arch);
 	}
+	return ret;
+}
+
+int noinstr kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
+{
+	int ret;
+
+	guest_state_exit_irqoff();
+	ret = __kvm_mips_handle_exit(vcpu);
+	guest_state_enter_irqoff();
+
 	return ret;
 }
 
+9-3
arch/mips/kvm/vz.c
@@ -458,8 +458,8 @@
 /**
  * _kvm_vz_save_htimer() - Switch to software emulation of guest timer.
  * @vcpu:	Virtual CPU.
- * @compare:	Pointer to write compare value to.
- * @cause:	Pointer to write cause value to.
+ * @out_compare: Pointer to write compare value to.
+ * @out_cause:	Pointer to write cause value to.
  *
  * Save VZ guest timer state and switch to software emulation of guest CP0
  * timer. The hard timer must already be in use, so preemption should be
@@ -1541,11 +1541,14 @@
 }
 
 /**
- * kvm_trap_vz_handle_cop_unusuable() - Guest used unusable coprocessor.
+ * kvm_trap_vz_handle_cop_unusable() - Guest used unusable coprocessor.
  * @vcpu:	Virtual CPU context.
 *
 * Handle when the guest attempts to use a coprocessor which hasn't been allowed
  * by the root context.
+ *
+ * Return:	value indicating whether to resume the host or the guest
+ *		(RESUME_HOST or RESUME_GUEST)
  */
 static int kvm_trap_vz_handle_cop_unusable(struct kvm_vcpu *vcpu)
 {
@@ -1592,6 +1595,9 @@
  *
  * Handle when the guest attempts to use MSA when it is disabled in the root
  * context.
+ *
+ * Return:	value indicating whether to resume the host or the guest
+ *		(RESUME_HOST or RESUME_GUEST)
  */
 static int kvm_trap_vz_handle_msa_disabled(struct kvm_vcpu *vcpu)
 {
+31-17
arch/riscv/kvm/vcpu.c
@@ -90,6 +90,7 @@
 int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *cntx;
+	struct kvm_vcpu_csr *reset_csr = &vcpu->arch.guest_reset_csr;
 
 	/* Mark this VCPU never ran */
 	vcpu->arch.ran_atleast_once = false;
@@ -106,6 +105,9 @@
 	cntx->hstatus |= HSTATUS_VTW;
 	cntx->hstatus |= HSTATUS_SPVP;
 	cntx->hstatus |= HSTATUS_SPV;
+
+	/* By default, make CY, TM, and IR counters accessible in VU mode */
+	reset_csr->scounteren = 0x7;
 
 	/* Setup VCPU timer */
 	kvm_riscv_vcpu_timer_init(vcpu);
@@ -703,6 +699,20 @@
 	csr_write(CSR_HVIP, csr->hvip);
 }
 
+/*
+ * Actually run the vCPU, entering an RCU extended quiescent state (EQS) while
+ * the vCPU is running.
+ *
+ * This must be noinstr as instrumentation may make use of RCU, and this is not
+ * safe during the EQS.
+ */
+static void noinstr kvm_riscv_vcpu_enter_exit(struct kvm_vcpu *vcpu)
+{
+	guest_state_enter_irqoff();
+	__kvm_riscv_switch_to(&vcpu->arch);
+	guest_state_exit_irqoff();
+}
+
 int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 {
 	int ret;
@@ -808,9 +790,9 @@
 			continue;
 		}
 
-		guest_enter_irqoff();
+		guest_timing_enter_irqoff();
 
-		__kvm_riscv_switch_to(&vcpu->arch);
+		kvm_riscv_vcpu_enter_exit(vcpu);
 
 		vcpu->mode = OUTSIDE_GUEST_MODE;
 		vcpu->stat.exits++;
@@ -830,25 +812,21 @@
 		kvm_riscv_vcpu_sync_interrupts(vcpu);
 
 		/*
-		 * We may have taken a host interrupt in VS/VU-mode (i.e.
-		 * while executing the guest). This interrupt is still
-		 * pending, as we haven't serviced it yet!
+		 * We must ensure that any pending interrupts are taken before
+		 * we exit guest timing so that timer ticks are accounted as
+		 * guest time. Transiently unmask interrupts so that any
+		 * pending interrupts are taken.
 		 *
-		 * We're now back in HS-mode with interrupts disabled
-		 * so enabling the interrupts now will have the effect
-		 * of taking the interrupt again, in HS-mode this time.
+		 * There's no barrier which ensures that pending interrupts are
+		 * recognised, so we just hope that the CPU takes any pending
+		 * interrupts between the enable and disable.
 		 */
 		local_irq_enable();
+		local_irq_disable();
 
-		/*
-		 * We do local_irq_enable() before calling guest_exit() so
-		 * that if a timer interrupt hits while running the guest
-		 * we account that tick as being spent in the guest. We
-		 * enable preemption after calling guest_exit() so that if
-		 * we get preempted we make sure ticks after that is not
-		 * counted as guest time.
-		 */
-		guest_exit();
+		guest_timing_exit_irqoff();
+
+		local_irq_enable();
 
 		preempt_enable();
 
+2-1
arch/riscv/kvm/vcpu_sbi_base.c
@@ -9,6 +9,7 @@
 #include <linux/errno.h>
 #include <linux/err.h>
 #include <linux/kvm_host.h>
+#include <linux/version.h>
 #include <asm/csr.h>
 #include <asm/sbi.h>
 #include <asm/kvm_vcpu_timer.h>
@@ -33,7 +32,7 @@
 		*out_val = KVM_SBI_IMPID;
 		break;
 	case SBI_EXT_BASE_GET_IMP_VERSION:
-		*out_val = 0;
+		*out_val = LINUX_VERSION_CODE;
 		break;
 	case SBI_EXT_BASE_PROBE_EXT:
 		if ((cp->a0 >= SBI_EXT_EXPERIMENTAL_START &&
+2-2
arch/x86/crypto/blake2s-shash.c
@@ -18,12 +18,12 @@
 static int crypto_blake2s_update_x86(struct shash_desc *desc,
 				     const u8 *in, unsigned int inlen)
 {
-	return crypto_blake2s_update(desc, in, inlen, blake2s_compress);
+	return crypto_blake2s_update(desc, in, inlen, false);
 }
 
 static int crypto_blake2s_final_x86(struct shash_desc *desc, u8 *out)
 {
-	return crypto_blake2s_final(desc, out, blake2s_compress);
+	return crypto_blake2s_final(desc, out, false);
 }
 
 #define BLAKE2S_ALG(name, driver_name, digest_size)			\
@@ -4041,6 +4041,21 @@
 	return 0;
 }
 
+static void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+				  int trig_mode, int vector)
+{
+	struct kvm_vcpu *vcpu = apic->vcpu;
+
+	if (vmx_deliver_posted_interrupt(vcpu, vector)) {
+		kvm_lapic_set_irr(vector, apic);
+		kvm_make_request(KVM_REQ_EVENT, vcpu);
+		kvm_vcpu_kick(vcpu);
+	} else {
+		trace_kvm_apicv_accept_irq(vcpu->vcpu_id, delivery_mode,
+					   trig_mode, vector);
+	}
+}
+
 /*
  * Set up the vmcs's constant host-state fields, i.e., host-state fields that
  * will not change in the lifetime of the guest.
@@ -6769,7 +6754,7 @@
 static noinstr void vmx_vcpu_enter_exit(struct kvm_vcpu *vcpu,
 					struct vcpu_vmx *vmx)
 {
-	kvm_guest_enter_irqoff();
+	guest_state_enter_irqoff();
 
 	/* L1D Flush includes CPU buffer clear to mitigate MDS */
 	if (static_branch_unlikely(&vmx_l1d_should_flush))
@@ -6785,7 +6770,7 @@
 
 	vcpu->arch.cr2 = native_read_cr2();
 
-	kvm_guest_exit_irqoff();
+	guest_state_exit_irqoff();
 }
 
 static fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu)
@@ -7783,7 +7768,7 @@
 	.hwapic_isr_update = vmx_hwapic_isr_update,
 	.guest_apic_has_interrupt = vmx_guest_apic_has_interrupt,
 	.sync_pir_to_irr = vmx_sync_pir_to_irr,
-	.deliver_posted_interrupt = vmx_deliver_posted_interrupt,
+	.deliver_interrupt = vmx_deliver_interrupt,
 	.dy_apicv_has_pending_interrupt = pi_has_pending_interrupt,
 
 	.set_tss_addr = vmx_set_tss_addr,
+6-4
arch/x86/kvm/x86.c
@@ -90,6 +90,8 @@
 u64 __read_mostly kvm_mce_cap_supported = MCG_CTL_P | MCG_SER_P;
 EXPORT_SYMBOL_GPL(kvm_mce_cap_supported);
 
+#define ERR_PTR_USR(e) ((void __user *)ERR_PTR(e))
+
 #define emul_to_vcpu(ctxt) \
 	((struct kvm_vcpu *)(ctxt)->vcpu)
 
@@ -4342,7 +4340,7 @@
 	void __user *uaddr = (void __user*)(unsigned long)attr->addr;
 
 	if ((u64)(unsigned long)uaddr != attr->addr)
-		return ERR_PTR(-EFAULT);
+		return ERR_PTR_USR(-EFAULT);
 	return uaddr;
 }
 
@@ -10043,6 +10041,8 @@
 		set_debugreg(0, 7);
 	}
 
+	guest_timing_enter_irqoff();
+
 	for (;;) {
 		/*
 		 * Assert that vCPU vs. VM APICv state is consistent.  An APICv
@@ -10129,7 +10125,7 @@
 	 * of accounting via context tracking, but the loss of accuracy is
 	 * acceptable for all known use cases.
 	 */
-	vtime_account_guest_exit();
+	guest_timing_exit_irqoff();
 
 	if (lapic_in_kernel(vcpu)) {
 		s64 delta = vcpu->arch.apic->lapic_timer.advance_expire_delta;
@@ -11642,8 +11638,6 @@
 	cancel_delayed_work_sync(&kvm->arch.kvmclock_update_work);
 	kvm_free_pit(kvm);
 }
-
-#define ERR_PTR_USR(e) ((void __user *)ERR_PTR(e))
 
 /**
  * __x86_set_memory_region: Setup KVM internal memory slot
-45
arch/x86/kvm/x86.h
@@ -10,51 +10,6 @@
 
 void kvm_spurious_fault(void);
 
-static __always_inline void kvm_guest_enter_irqoff(void)
-{
-	/*
-	 * VMENTER enables interrupts (host state), but the kernel state is
-	 * interrupts disabled when this is invoked. Also tell RCU about
-	 * it. This is the same logic as for exit_to_user_mode().
-	 *
-	 * This ensures that e.g. latency analysis on the host observes
-	 * guest mode as interrupt enabled.
-	 *
-	 * guest_enter_irqoff() informs context tracking about the
-	 * transition to guest mode and if enabled adjusts RCU state
-	 * accordingly.
-	 */
-	instrumentation_begin();
-	trace_hardirqs_on_prepare();
-	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
-	instrumentation_end();
-
-	guest_enter_irqoff();
-	lockdep_hardirqs_on(CALLER_ADDR0);
-}
-
-static __always_inline void kvm_guest_exit_irqoff(void)
-{
-	/*
-	 * VMEXIT disables interrupts (host state), but tracing and lockdep
-	 * have them in state 'on' as recorded before entering guest mode.
-	 * Same as enter_from_user_mode().
-	 *
-	 * context_tracking_guest_exit() restores host context and reinstates
-	 * RCU if enabled and required.
-	 *
-	 * This needs to be done immediately after VM-Exit, before any code
-	 * that might contain tracepoints or call out to the greater world,
-	 * e.g. before x86_spec_ctrl_restore_host().
-	 */
-	lockdep_hardirqs_off(CALLER_ADDR0);
-	context_tracking_guest_exit();
-
-	instrumentation_begin();
-	trace_hardirqs_off_finish();
-	instrumentation_end();
-}
-
 #define KVM_NESTED_VMENTER_CONSISTENCY_CHECK(consistency_check)	\
 ({									\
 	bool failed = (consistency_check);				\
@@ -148,28 +148,12 @@
 	return rc;
 }
 
-static void __init xen_fill_possible_map(void)
-{
-	int i, rc;
-
-	if (xen_initial_domain())
-		return;
-
-	for (i = 0; i < nr_cpu_ids; i++) {
-		rc = HYPERVISOR_vcpu_op(VCPUOP_is_up, i, NULL);
-		if (rc >= 0) {
-			num_processors++;
-			set_cpu_possible(i, true);
-		}
-	}
-}
-
-static void __init xen_filter_cpu_maps(void)
+static void __init _get_smp_config(unsigned int early)
 {
 	int i, rc;
 	unsigned int subtract = 0;
 
-	if (!xen_initial_domain())
+	if (early)
 		return;
 
 	num_processors = 0;
@@ -194,7 +210,6 @@
 	 * sure the old memory can be recycled. */
 	make_lowmem_page_readwrite(xen_initial_gdt);
 
-	xen_filter_cpu_maps();
 	xen_setup_vcpu_info_placement();
 
 	/*
@@ -459,5 +476,8 @@
 void __init xen_smp_init(void)
 {
 	smp_ops = xen_smp_ops;
-	xen_fill_possible_map();
+
+	/* Avoid searching for BIOS MP tables */
+	x86_init.mpparse.find_smp_config = x86_init_noop;
+	x86_init.mpparse.get_smp_config = _get_smp_config;
 }
@@ -15,12 +15,12 @@
 static int crypto_blake2s_update_generic(struct shash_desc *desc,
 					 const u8 *in, unsigned int inlen)
 {
-	return crypto_blake2s_update(desc, in, inlen, blake2s_compress_generic);
+	return crypto_blake2s_update(desc, in, inlen, true);
 }
 
 static int crypto_blake2s_final_generic(struct shash_desc *desc, u8 *out)
 {
-	return crypto_blake2s_final(desc, out, blake2s_compress_generic);
+	return crypto_blake2s_final(desc, out, true);
 }
 
 #define BLAKE2S_ALG(name, driver_name, digest_size)			\
+1
drivers/acpi/Kconfig
@@ -11,6 +11,7 @@
 	depends on ARCH_SUPPORTS_ACPI
 	select PNP
 	select NLS
+	select CRC32
 	default y if X86
 	help
 	  Advanced Configuration and Power Interface (ACPI) support for
+10
drivers/ata/libata-core.c
@@ -2007,6 +2007,9 @@
 {
 	struct ata_port *ap = dev->link->ap;
 
+	if (dev->horkage & ATA_HORKAGE_NO_LOG_DIR)
+		return false;
+
 	if (ata_read_log_page(dev, ATA_LOG_DIRECTORY, 0, ap->sector_buf, 1))
 		return false;
 	return get_unaligned_le16(&ap->sector_buf[log * 2]) ? true : false;
@@ -4075,6 +4072,13 @@
 	{ "WDC WD2500JD-*",		NULL,	ATA_HORKAGE_WD_BROKEN_LPM },
 	{ "WDC WD3000JD-*",		NULL,	ATA_HORKAGE_WD_BROKEN_LPM },
 	{ "WDC WD3200JD-*",		NULL,	ATA_HORKAGE_WD_BROKEN_LPM },
+
+	/*
+	 * This sata dom device goes on a walkabout when the ATA_LOG_DIRECTORY
+	 * log page is accessed. Ensure we never ask for this log page with
+	 * these devices.
+	 */
+	{ "SATADOM-ML 3ME",		NULL,	ATA_HORKAGE_NO_LOG_DIR },
 
 	/* End Marker */
 	{ }
+22-17
drivers/char/random.c
@@ -762,7 +762,7 @@
 	return arch_init;
 }
 
-static bool __init crng_init_try_arch_early(struct crng_state *crng)
+static bool __init crng_init_try_arch_early(void)
 {
 	int i;
 	bool arch_init = true;
@@ -774,7 +774,7 @@
 			rv = random_get_entropy();
 			arch_init = false;
 		}
-		crng->state[i] ^= rv;
+		primary_crng.state[i] ^= rv;
 	}
 
 	return arch_init;
@@ -788,22 +788,20 @@
 	crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
 }
 
-static void __init crng_initialize_primary(struct crng_state *crng)
+static void __init crng_initialize_primary(void)
 {
-	_extract_entropy(&crng->state[4], sizeof(u32) * 12);
-	if (crng_init_try_arch_early(crng) && trust_cpu && crng_init < 2) {
+	_extract_entropy(&primary_crng.state[4], sizeof(u32) * 12);
+	if (crng_init_try_arch_early() && trust_cpu && crng_init < 2) {
 		invalidate_batched_entropy();
 		numa_crng_init();
 		crng_init = 2;
 		pr_notice("crng init done (trusting CPU's manufacturer)\n");
 	}
-	crng->init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
+	primary_crng.init_time = jiffies - CRNG_RESEED_INTERVAL - 1;
}
 
-static void crng_finalize_init(struct crng_state *crng)
+static void crng_finalize_init(void)
 {
-	if (crng != &primary_crng || crng_init >= 2)
-		return;
 	if (!system_wq) {
 		/* We can't call numa_crng_init until we have workqueues,
 		 * so mark this for processing later. */
@@ -812,6 +814,7 @@
 	invalidate_batched_entropy();
 	numa_crng_init();
 	crng_init = 2;
+	crng_need_final_init = false;
 	process_random_ready_list();
 	wake_up_interruptible(&crng_init_wait);
 	kill_fasync(&fasync, SIGIO, POLL_IN);
@@ -979,7 +980,8 @@
 	memzero_explicit(&buf, sizeof(buf));
 	WRITE_ONCE(crng->init_time, jiffies);
 	spin_unlock_irqrestore(&crng->lock, flags);
-	crng_finalize_init(crng);
+	if (crng == &primary_crng && crng_init < 2)
+		crng_finalize_init();
 }
 
 static void _extract_crng(struct crng_state *crng, u8 out[CHACHA_BLOCK_SIZE])
@@ -1697,8 +1697,8 @@
 {
 	init_std_data();
 	if (crng_need_final_init)
-		crng_finalize_init(&primary_crng);
-	crng_initialize_primary(&primary_crng);
+		crng_finalize_init();
+	crng_initialize_primary();
 	crng_global_init_time = jiffies;
 	if (ratelimit_disable) {
 		urandom_warning.interval = 0;
@@ -1856,7 +1856,10 @@
 		 */
 		if (!capable(CAP_SYS_ADMIN))
 			return -EPERM;
-		input_pool.entropy_count = 0;
+		if (xchg(&input_pool.entropy_count, 0) && random_write_wakeup_bits) {
+			wake_up_interruptible(&random_write_wait);
+			kill_fasync(&fasync, SIGIO, POLL_OUT);
+		}
 		return 0;
 	case RNDRESEEDCRNG:
 		if (!capable(CAP_SYS_ADMIN))
@@ -2208,13 +2205,15 @@
 		return;
 	}
 
-	/* Suspend writing if we're above the trickle threshold.
+	/* Throttle writing if we're above the trickle threshold.
 	 * We'll be woken up again once below random_write_wakeup_thresh,
-	 * or when the calling thread is about to terminate.
+	 * when the calling thread is about to terminate, or once
+	 * CRNG_RESEED_INTERVAL has lapsed.
 	 */
-	wait_event_interruptible(random_write_wait,
+	wait_event_interruptible_timeout(random_write_wait,
 			!system_wq || kthread_should_stop() ||
-			POOL_ENTROPY_BITS() <= random_write_wakeup_bits);
+			POOL_ENTROPY_BITS() <= random_write_wakeup_bits,
+			CRNG_RESEED_INTERVAL);
 	mix_pool_bytes(buffer, count);
 	credit_entropy_bits(entropy);
 }
+2
drivers/dma-buf/dma-heap.c
@@ -14,6 +14,7 @@
 #include <linux/xarray.h>
 #include <linux/list.h>
 #include <linux/slab.h>
+#include <linux/nospec.h>
 #include <linux/uaccess.h>
 #include <linux/syscalls.h>
 #include <linux/dma-heap.h>
@@ -136,6 +135,7 @@
 	if (nr >= ARRAY_SIZE(dma_heap_ioctl_cmds))
 		return -EINVAL;
 
+	nr = array_index_nospec(nr, ARRAY_SIZE(dma_heap_ioctl_cmds));
 	/* Get the kernel ioctl cmd that matches */
 	kcmd = dma_heap_ioctl_cmds[nr];
 
@@ -1031,6 +1031,20 @@
 	}
 }
 
+#if IS_ENABLED(CONFIG_SUSPEND)
+/**
+ * amdgpu_acpi_is_s3_active
+ *
+ * @adev: amdgpu_device_pointer
+ *
+ * returns true if supported, false if not.
+ */
+bool amdgpu_acpi_is_s3_active(struct amdgpu_device *adev)
+{
+	return !(adev->flags & AMD_IS_APU) ||
+		(pm_suspend_target_state == PM_SUSPEND_MEM);
+}
+
 /**
  * amdgpu_acpi_is_s0ix_active
  *
@@ -1054,11 +1040,24 @@
  */
 bool amdgpu_acpi_is_s0ix_active(struct amdgpu_device *adev)
 {
-#if IS_ENABLED(CONFIG_AMD_PMC) && IS_ENABLED(CONFIG_SUSPEND)
-	if (acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0) {
-		if (adev->flags & AMD_IS_APU)
-			return pm_suspend_target_state == PM_SUSPEND_TO_IDLE;
+	if (!(adev->flags & AMD_IS_APU) ||
+	    (pm_suspend_target_state != PM_SUSPEND_TO_IDLE))
+		return false;
+
+	if (!(acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0)) {
+		dev_warn_once(adev->dev,
+			      "Power consumption will be higher as BIOS has not been configured for suspend-to-idle.\n"
+			      "To use suspend-to-idle change the sleep mode in BIOS setup.\n");
+		return false;
 	}
-#endif
+
+#if !IS_ENABLED(CONFIG_AMD_PMC)
+	dev_warn_once(adev->dev,
+		      "Power consumption will be higher as the kernel has not been compiled with CONFIG_AMD_PMC.\n");
 	return false;
+#else
+	return true;
+#endif /* CONFIG_AMD_PMC */
 }
+
+#endif /* CONFIG_SUSPEND */
+9-2
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
···22462246static int amdgpu_pmops_prepare(struct device *dev)22472247{22482248 struct drm_device *drm_dev = dev_get_drvdata(dev);22492249+ struct amdgpu_device *adev = drm_to_adev(drm_dev);2249225022502251 /* Return a positive number here so22512252 * DPM_FLAG_SMART_SUSPEND works properly22522253 */22532254 if (amdgpu_device_supports_boco(drm_dev))22542254- return pm_runtime_suspended(dev) &&22552255- pm_suspend_via_firmware();22552255+ return pm_runtime_suspended(dev);22562256+22572257+ /* if we will not support s3 or s2i for the device22582258+ * then skip suspend22592259+ */22602260+ if (!amdgpu_acpi_is_s0ix_active(adev) &&22612261+ !amdgpu_acpi_is_s3_active(adev))22622262+ return 1;2256226322572264 return 0;22582265}
+1-1
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
···
1904 1904          unsigned i;
1905 1905          int r;
1906 1906
1907      -        if (direct_submit && !ring->sched.ready) {
     1907 +        if (!direct_submit && !ring->sched.ready) {
1908 1908                  DRM_ERROR("Trying to move memory with ring turned off.\n");
1909 1909                  return -EINVAL;
1910 1910          }
···3696369636973697static int sienna_cichlid_enable_mgpu_fan_boost(struct smu_context *smu)36983698{36993699- struct smu_table_context *table_context = &smu->smu_table;37003700- PPTable_t *smc_pptable = table_context->driver_pptable;36993699+ uint16_t *mgpu_fan_boost_limit_rpm;3701370037013701+ GET_PPTABLE_MEMBER(MGpuFanBoostLimitRpm, &mgpu_fan_boost_limit_rpm);37023702 /*37033703 * Skip the MGpuFanBoost setting for those ASICs37043704 * which do not support it37053705 */37063706- if (!smc_pptable->MGpuFanBoostLimitRpm)37063706+ if (*mgpu_fan_boost_limit_rpm == 0)37073707 return 0;3708370837093709 return smu_cmn_send_smc_msg_with_param(smu,
···345345static bool adl_tc_phy_status_complete(struct intel_digital_port *dig_port)346346{347347 struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev);348348+ enum tc_port tc_port = intel_port_to_tc(i915, dig_port->base.port);348349 struct intel_uncore *uncore = &i915->uncore;349350 u32 val;350351351351- val = intel_uncore_read(uncore, TCSS_DDI_STATUS(dig_port->tc_phy_fia_idx));352352+ val = intel_uncore_read(uncore, TCSS_DDI_STATUS(tc_port));352353 if (val == 0xffffffff) {353354 drm_dbg_kms(&i915->drm,354355 "Port %s: PHY in TCCOLD, assuming not complete\n",
+7-2
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
···25052505 timeout) < 0) {25062506 i915_request_put(rq);2507250725082508- tl = intel_context_timeline_lock(ce);25082508+ /*25092509+ * Error path, cannot use intel_context_timeline_lock as25102510+ * that is user interruptible and this cleanup step25112511+ * must be done.25122512+ */25132513+ mutex_lock(&ce->timeline->mutex);25092514 intel_context_exit(ce);25102510- intel_context_timeline_unlock(tl);25152515+ mutex_unlock(&ce->timeline->mutex);2511251625122517 if (nonblock)25132518 return -EWOULDBLOCK;
+5
drivers/gpu/drm/i915/gt/uc/intel_guc.h
···206206 * context usage for overflows.207207 */208208 struct delayed_work work;209209+210210+ /**211211+ * @shift: Right shift value for the gpm timestamp212212+ */213213+ u32 shift;209214 } timestamp;210215211216#ifdef CONFIG_DRM_I915_SELFTEST
+97-17
drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
···11131113 if (new_start == lower_32_bits(*prev_start))11141114 return;1115111511161116+ /*11171117+ * When gt is unparked, we update the gt timestamp and start the ping11181118+ * worker that updates the gt_stamp every POLL_TIME_CLKS. As long as gt11191119+ * is unparked, all switched in contexts will have a start time that is11201120+ * within +/- POLL_TIME_CLKS of the most recent gt_stamp.11211121+ *11221122+ * If neither gt_stamp nor new_start has rolled over, then the11231123+ * gt_stamp_hi does not need to be adjusted, however if one of them has11241124+ * rolled over, we need to adjust gt_stamp_hi accordingly.11251125+ *11261126+ * The below conditions address the cases of new_start rollover and11271127+ * gt_stamp_last rollover respectively.11281128+ */11161129 if (new_start < gt_stamp_last &&11171130 (new_start - gt_stamp_last) <= POLL_TIME_CLKS)11181131 gt_stamp_hi++;···11371124 *prev_start = ((u64)gt_stamp_hi << 32) | new_start;11381125}1139112611401140-static void guc_update_engine_gt_clks(struct intel_engine_cs *engine)11271127+/*11281128+ * GuC updates shared memory and KMD reads it. Since this is not synchronized,11291129+ * we run into a race where the value read is inconsistent. Sometimes the11301130+ * inconsistency is in reading the upper MSB bytes of the last_in value when11311131+ * this race occurs. 2 types of cases are seen - upper 8 bits are zero and upper11321132+ * 24 bits are zero. Since these are non-zero values, it is non-trivial to11331133+ * determine validity of these values. Instead we read the values multiple times11341134+ * until they are consistent. In test runs, 3 attempts results in consistent11351135+ * values. 
The upper bound is set to 6 attempts and may need to be tuned as per11361136+ * any new occurrences.11371137+ */11381138+static void __get_engine_usage_record(struct intel_engine_cs *engine,11391139+ u32 *last_in, u32 *id, u32 *total)11411140{11421141 struct guc_engine_usage_record *rec = intel_guc_engine_usage(engine);11421142+ int i = 0;11431143+11441144+ do {11451145+ *last_in = READ_ONCE(rec->last_switch_in_stamp);11461146+ *id = READ_ONCE(rec->current_context_index);11471147+ *total = READ_ONCE(rec->total_runtime);11481148+11491149+ if (READ_ONCE(rec->last_switch_in_stamp) == *last_in &&11501150+ READ_ONCE(rec->current_context_index) == *id &&11511151+ READ_ONCE(rec->total_runtime) == *total)11521152+ break;11531153+ } while (++i < 6);11541154+}11551155+11561156+static void guc_update_engine_gt_clks(struct intel_engine_cs *engine)11571157+{11431158 struct intel_engine_guc_stats *stats = &engine->stats.guc;11441159 struct intel_guc *guc = &engine->gt->uc.guc;11451145- u32 last_switch = rec->last_switch_in_stamp;11461146- u32 ctx_id = rec->current_context_index;11471147- u32 total = rec->total_runtime;11601160+ u32 last_switch, ctx_id, total;1148116111491162 lockdep_assert_held(&guc->timestamp.lock);11631163+11641164+ __get_engine_usage_record(engine, &last_switch, &ctx_id, &total);1150116511511166 stats->running = ctx_id != ~0U && last_switch;11521167 if (stats->running)
GEN10_RPM_CONFIG0_CTC_SHIFT_PARAMETER_SHIFT;11621162+11631163+ return 3 - shift;11641164+}11651165+11661166+static u64 gpm_timestamp(struct intel_gt *gt)11671167+{11681168+ u32 lo, hi, old_hi, loop = 0;11691169+11701170+ hi = intel_uncore_read(gt->uncore, MISC_STATUS1);11711171+ do {11721172+ lo = intel_uncore_read(gt->uncore, MISC_STATUS0);11731173+ old_hi = hi;11741174+ hi = intel_uncore_read(gt->uncore, MISC_STATUS1);11751175+ } while (old_hi != hi && loop++ < 2);11761176+11771177+ return ((u64)hi << 32) | lo;11781178+}11791179+11801180+static void guc_update_pm_timestamp(struct intel_guc *guc, ktime_t *now)11811181+{11821182+ struct intel_gt *gt = guc_to_gt(guc);11831183+ u32 gt_stamp_lo, gt_stamp_hi;11841184+ u64 gpm_ts;1198118511991186 lockdep_assert_held(&guc->timestamp.lock);1200118712011188 gt_stamp_hi = upper_32_bits(guc->timestamp.gt_stamp);12021202- gt_stamp_now = intel_uncore_read(engine->uncore,12031203- RING_TIMESTAMP(engine->mmio_base));11891189+ gpm_ts = gpm_timestamp(gt) >> guc->timestamp.shift;11901190+ gt_stamp_lo = lower_32_bits(gpm_ts);12041191 *now = ktime_get();1205119212061206- if (gt_stamp_now < lower_32_bits(guc->timestamp.gt_stamp))11931193+ if (gt_stamp_lo < lower_32_bits(guc->timestamp.gt_stamp))12071194 gt_stamp_hi++;1208119512091209- guc->timestamp.gt_stamp = ((u64)gt_stamp_hi << 32) | gt_stamp_now;11961196+ guc->timestamp.gt_stamp = ((u64)gt_stamp_hi << 32) | gt_stamp_lo;12101197}1211119812121199/*···12771208 if (!in_reset && intel_gt_pm_get_if_awake(gt)) {12781209 stats_saved = *stats;12791210 gt_stamp_saved = guc->timestamp.gt_stamp;12111211+ /*12121212+ * Update gt_clks, then gt timestamp to simplify the 'gt_stamp -12131213+ * start_gt_clk' calculation below for active engines.12141214+ */12801215 guc_update_engine_gt_clks(engine);12811281- guc_update_pm_timestamp(guc, engine, now);12161216+ guc_update_pm_timestamp(guc, now);12821217 intel_gt_pm_put_async(gt);12831218 if (i915_reset_count(gpu_error) != reset_count) {12841219 
*stats = stats_saved;···1314124113151242 spin_lock_irqsave(&guc->timestamp.lock, flags);1316124312441244+ guc_update_pm_timestamp(guc, &unused);13171245 for_each_engine(engine, gt, id) {13181318- guc_update_pm_timestamp(guc, engine, &unused);13191246 guc_update_engine_gt_clks(engine);13201247 engine->stats.guc.prev_total = 0;13211248 }···13321259 ktime_t unused;1333126013341261 spin_lock_irqsave(&guc->timestamp.lock, flags);13351335- for_each_engine(engine, gt, id) {13361336- guc_update_pm_timestamp(guc, engine, &unused);12621262+12631263+ guc_update_pm_timestamp(guc, &unused);12641264+ for_each_engine(engine, gt, id)13371265 guc_update_engine_gt_clks(engine);13381338- }12661266+13391267 spin_unlock_irqrestore(&guc->timestamp.lock, flags);13401268}13411269···14091335void intel_guc_busyness_unpark(struct intel_gt *gt)14101336{14111337 struct intel_guc *guc = >->uc.guc;13381338+ unsigned long flags;13391339+ ktime_t unused;1412134014131341 if (!guc_submission_initialized(guc))14141342 return;1415134313441344+ spin_lock_irqsave(&guc->timestamp.lock, flags);13451345+ guc_update_pm_timestamp(guc, &unused);13461346+ spin_unlock_irqrestore(&guc->timestamp.lock, flags);14161347 mod_delayed_work(system_highpri_wq, &guc->timestamp.work,14171348 guc->timestamp.ping_delay);14181349}···18621783 spin_lock_init(&guc->timestamp.lock);18631784 INIT_DELAYED_WORK(&guc->timestamp.work, guc_timestamp_ping);18641785 guc->timestamp.ping_delay = (POLL_TIME_CLKS / gt->clock_frequency + 1) * HZ;17861786+ guc->timestamp.shift = gpm_timestamp_shift(gt);1865178718661788 return 0;18671789}
+1-1
drivers/gpu/drm/i915/i915_gpu_error.c
···
1522 1522          struct i915_request *rq = NULL;
1523 1523          unsigned long flags;
1524 1524
1525      -        ee = intel_engine_coredump_alloc(engine, GFP_KERNEL);
     1525 +        ee = intel_engine_coredump_alloc(engine, ALLOW_FAIL);
1526 1526          if (!ee)
1527 1527                  return NULL;
1528 1528
···158158 case LAYER_1:159159 kmb->plane_status[plane_id].ctrl = LCD_CTRL_VL2_ENABLE;160160 break;161161- case LAYER_2:162162- kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL1_ENABLE;163163- break;164164- case LAYER_3:165165- kmb->plane_status[plane_id].ctrl = LCD_CTRL_GL2_ENABLE;166166- break;167161 }168162169163 kmb->plane_status[plane_id].disable = true;
+5-1
drivers/gpu/drm/mxsfb/mxsfb_kms.c
···361361 bridge_state =362362 drm_atomic_get_new_bridge_state(state,363363 mxsfb->bridge);364364- bus_format = bridge_state->input_bus_cfg.format;364364+ if (!bridge_state)365365+ bus_format = MEDIA_BUS_FMT_FIXED;366366+ else367367+ bus_format = bridge_state->input_bus_cfg.format;368368+365369 if (bus_format == MEDIA_BUS_FMT_FIXED) {366370 dev_warn_once(drm->dev,367371 "Bridge does not provide bus format, assuming MEDIA_BUS_FMT_RGB888_1X24.\n"
···2121#include <linux/export.h>2222#include <linux/kmemleak.h>2323#include <linux/cc_platform.h>2424+#include <linux/iopoll.h>2425#include <asm/pci-direct.h>2526#include <asm/iommu.h>2627#include <asm/apic.h>···835834 status = readl(iommu->mmio_base + MMIO_STATUS_OFFSET);836835 if (status & (MMIO_STATUS_GALOG_RUN_MASK))837836 break;837837+ udelay(10);838838 }839839840840 if (WARN_ON(i >= LOOP_TIMEOUT))
+10-3
drivers/iommu/intel/irq_remapping.c
···569569 fn, &intel_ir_domain_ops,570570 iommu);571571 if (!iommu->ir_domain) {572572- irq_domain_free_fwnode(fn);573572 pr_err("IR%d: failed to allocate irqdomain\n", iommu->seq_id);574574- goto out_free_bitmap;573573+ goto out_free_fwnode;575574 }576575 iommu->ir_msi_domain =577576 arch_create_remap_msi_irq_domain(iommu->ir_domain,···594595595596 if (dmar_enable_qi(iommu)) {596597 pr_err("Failed to enable queued invalidation\n");597597- goto out_free_bitmap;598598+ goto out_free_ir_domain;598599 }599600 }600601···618619619620 return 0;620621622622+out_free_ir_domain:623623+ if (iommu->ir_msi_domain)624624+ irq_domain_remove(iommu->ir_msi_domain);625625+ iommu->ir_msi_domain = NULL;626626+ irq_domain_remove(iommu->ir_domain);627627+ iommu->ir_domain = NULL;628628+out_free_fwnode:629629+ irq_domain_free_fwnode(fn);621630out_free_bitmap:622631 bitmap_free(bitmap);623632out_free_pages:
+1
drivers/iommu/ioasid.c
···
349 349
350 350  /**
351 351   * ioasid_get - obtain a reference to the IOASID
     352 + * @ioasid: the ID to get
352 353   */
353 354  void ioasid_get(ioasid_t ioasid)
354 355  {
+19-14
drivers/iommu/iommu.c
···207207208208static void dev_iommu_free(struct device *dev)209209{210210- iommu_fwspec_free(dev);211211- kfree(dev->iommu);210210+ struct dev_iommu *param = dev->iommu;211211+212212 dev->iommu = NULL;213213+ if (param->fwspec) {214214+ fwnode_handle_put(param->fwspec->iommu_fwnode);215215+ kfree(param->fwspec);216216+ }217217+ kfree(param);213218}214219215220static int __iommu_probe_device(struct device *dev, struct list_head *group_list)···985980 return ret;986981}987982988988-/**989989- * iommu_group_for_each_dev - iterate over each device in the group990990- * @group: the group991991- * @data: caller opaque data to be passed to callback function992992- * @fn: caller supplied callback function993993- *994994- * This function is called by group users to iterate over group devices.995995- * Callers should hold a reference count to the group during callback.996996- * The group->mutex is held across callbacks, which will block calls to997997- * iommu_group_add/remove_device.998998- */999983static int __iommu_group_for_each_dev(struct iommu_group *group, void *data,1000984 int (*fn)(struct device *, void *))1001985{···9991005 return ret;10001006}1001100710021002-10081008+/**10091009+ * iommu_group_for_each_dev - iterate over each device in the group10101010+ * @group: the group10111011+ * @data: caller opaque data to be passed to callback function10121012+ * @fn: caller supplied callback function10131013+ *10141014+ * This function is called by group users to iterate over group devices.10151015+ * Callers should hold a reference count to the group during callback.10161016+ * The group->mutex is held across callbacks, which will block calls to10171017+ * iommu_group_add/remove_device.10181018+ */10031019int iommu_group_for_each_dev(struct iommu_group *group, void *data,10041020 int (*fn)(struct device *, void *))10051021{···30363032 * iommu_sva_bind_device() - Bind a process address space to a device30373033 * @dev: the device30383034 * @mm: the mm to bind, caller 
must hold a reference to it30353035+ * @drvdata: opaque data pointer to pass to bind callback30393036 *30403037 * Create a bond between device and address space, allowing the device to access30413038 * the mm using the returned PASID. If a bond already exists between @device and
+1-1
drivers/iommu/omap-iommu.c
···
1085 1085  }
1086 1086
1087 1087  /**
1088      - * omap_iommu_suspend_prepare - prepare() dev_pm_ops implementation
     1088 + * omap_iommu_prepare - prepare() dev_pm_ops implementation
1089 1089   * @dev: iommu device
1090 1090   *
1091 1091   * This function performs the necessary checks to determine if the IOMMU
+4-4
drivers/md/md.c
···58695869 nowait = nowait && blk_queue_nowait(bdev_get_queue(rdev->bdev));58705870 }5871587158725872- /* Set the NOWAIT flags if all underlying devices support it */58735873- if (nowait)58745874- blk_queue_flag_set(QUEUE_FLAG_NOWAIT, mddev->queue);58755875-58765872 if (!bioset_initialized(&mddev->bio_set)) {58775873 err = bioset_init(&mddev->bio_set, BIO_POOL_SIZE, 0, BIOSET_NEED_BVECS);58785874 if (err)···60066010 else60076011 blk_queue_flag_clear(QUEUE_FLAG_NONROT, mddev->queue);60086012 blk_queue_flag_set(QUEUE_FLAG_IO_STAT, mddev->queue);60136013+60146014+ /* Set the NOWAIT flags if all underlying devices support it */60156015+ if (nowait)60166016+ blk_queue_flag_set(QUEUE_FLAG_NOWAIT, mddev->queue);60096017 }60106018 if (pers->sync_request) {60116019 if (mddev->kobj.sd &&
+1
drivers/net/dsa/Kconfig
···
36 36  config NET_DSA_MT7530
37 37          tristate "MediaTek MT753x and MT7621 Ethernet switch support"
38 38          select NET_DSA_TAG_MTK
   39 +        select MEDIATEK_GE_PHY
39 40          help
40 41            This enables support for the MediaTek MT7530, MT7531, and MT7621
41 42            Ethernet switch chips.
+13-1
drivers/net/ethernet/amd/xgbe/xgbe-drv.c
···721721 if (!channel->tx_ring)722722 break;723723724724+ /* Deactivate the Tx timer */724725 del_timer_sync(&channel->tx_timer);726726+ channel->tx_timer_active = 0;725727 }726728}727729···25522550 buf2_len = xgbe_rx_buf2_len(rdata, packet, len);25532551 len += buf2_len;2554255225532553+ if (buf2_len > rdata->rx.buf.dma_len) {25542554+ /* Hardware inconsistency within the descriptors25552555+ * that has resulted in a length underflow.25562556+ */25572557+ error = 1;25582558+ goto skip_data;25592559+ }25602560+25552561 if (!skb) {25562562 skb = xgbe_create_skb(pdata, napi, rdata,25572563 buf1_len);···25892579 if (!last || context_next)25902580 goto read_again;2591258125922592- if (!skb)25822582+ if (!skb || error) {25832583+ dev_kfree_skb(skb);25932584 goto next_packet;25852585+ }2594258625952587 /* Be sure we don't exceed the configured MTU */25962588 max_len = netdev->mtu + ETH_HLEN;
+1-1
drivers/net/ethernet/google/gve/gve_adminq.c
···
301 301   */
302 302  static int gve_adminq_kick_and_wait(struct gve_priv *priv)
303 303  {
304      -        u32 tail, head;
    304 +        int tail, head;
305 305          int i;
306 306
307 307          tail = ioread32be(&priv->reg_bar0->adminq_event_counter);
···
144 144          __I40E_VIRTCHNL_OP_PENDING,
145 145          __I40E_RECOVERY_MODE,
146 146          __I40E_VF_RESETS_DISABLED,      /* disable resets during i40e_remove */
    147 +        __I40E_IN_REMOVE,
147 148          __I40E_VFS_RELEASING,
148 149          /* This must be last as it determines the size of the BITMAP */
149 150          __I40E_STATE_SIZE__,
+29-2
drivers/net/ethernet/intel/i40e/i40e_main.c
···53725372 /* There is no need to reset BW when mqprio mode is on. */53735373 if (pf->flags & I40E_FLAG_TC_MQPRIO)53745374 return 0;53755375- if (!vsi->mqprio_qopt.qopt.hw && !(pf->flags & I40E_FLAG_DCB_ENABLED)) {53755375+53765376+ if (!vsi->mqprio_qopt.qopt.hw) {53775377+ if (pf->flags & I40E_FLAG_DCB_ENABLED)53785378+ goto skip_reset;53795379+53805380+ if (IS_ENABLED(CONFIG_I40E_DCB) &&53815381+ i40e_dcb_hw_get_num_tc(&pf->hw) == 1)53825382+ goto skip_reset;53835383+53765384 ret = i40e_set_bw_limit(vsi, vsi->seid, 0);53775385 if (ret)53785386 dev_info(&pf->pdev->dev,···53885380 vsi->seid);53895381 return ret;53905382 }53835383+53845384+skip_reset:53915385 memset(&bw_data, 0, sizeof(bw_data));53925386 bw_data.tc_valid_bits = enabled_tc;53935387 for (i = 0; i < I40E_MAX_TRAFFIC_CLASS; i++)···1086310853 bool lock_acquired)1086410854{1086510855 int ret;1085610856+1085710857+ if (test_bit(__I40E_IN_REMOVE, pf->state))1085810858+ return;1086610859 /* Now we wait for GRST to settle out.1086710860 * We don't have to delete the VEBs or VSIs from the hw switch1086810861 * because the reset will make them disappear.···12225122121222612213 vsi->req_queue_pairs = queue_count;1222712214 i40e_prep_for_reset(pf);1221512215+ if (test_bit(__I40E_IN_REMOVE, pf->state))1221612216+ return pf->alloc_rss_size;12228122171222912218 pf->alloc_rss_size = new_rss_size;1223012219···13052130371305313038 if (need_reset)1305413039 i40e_prep_for_reset(pf);1304013040+1304113041+ /* VSI shall be deleted in a moment, just return EINVAL */1304213042+ if (test_bit(__I40E_IN_REMOVE, pf->state))1304313043+ return -EINVAL;13055130441305613045 old_prog = xchg(&vsi->xdp_prog, prog);1305713046···1594715928 i40e_write_rx_ctl(hw, I40E_PFQF_HENA(0), 0);1594815929 i40e_write_rx_ctl(hw, I40E_PFQF_HENA(1), 0);15949159301595015950- while (test_bit(__I40E_RESET_RECOVERY_PENDING, pf->state))1593115931+ /* Grab __I40E_RESET_RECOVERY_PENDING and set __I40E_IN_REMOVE1593215932+ * flags, once they are set, 
i40e_rebuild should not be called as1593315933+ * i40e_prep_for_reset always returns early.1593415934+ */1593515935+ while (test_and_set_bit(__I40E_RESET_RECOVERY_PENDING, pf->state))1595115936 usleep_range(1000, 2000);1593715937+ set_bit(__I40E_IN_REMOVE, pf->state);15952159381595315939 if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {1595415940 set_bit(__I40E_VF_RESETS_DISABLED, pf->state);···1615116127static void i40e_pci_error_reset_done(struct pci_dev *pdev)1615216128{1615316129 struct i40e_pf *pf = pci_get_drvdata(pdev);1613016130+1613116131+ if (test_bit(__I40E_IN_REMOVE, pf->state))1613216132+ return;16154161331615516134 i40e_reset_and_rebuild(pf, false, false);1615616135}
···14141414 if (err)14151415 goto err_out;1416141614171417- if (!attr->chain && esw_attr->int_port) {14171417+ if (!attr->chain && esw_attr->int_port &&14181418+ attr->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {14181419 /* If decap route device is internal port, change the14191420 * source vport value in reg_c0 back to uplink just in14201421 * case the rule performs goto chain > 0. If we have a miss···31893188 if (!(actions &31903189 (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {31913190 NL_SET_ERR_MSG_MOD(extack, "Rule must have at least one forward/drop action");31913191+ return false;31923192+ }31933193+31943194+ if (!(~actions &31953195+ (MLX5_FLOW_CONTEXT_ACTION_FWD_DEST | MLX5_FLOW_CONTEXT_ACTION_DROP))) {31963196+ NL_SET_ERR_MSG_MOD(extack, "Rule cannot support forward+drop action");31973197+ return false;31983198+ }31993199+32003200+ if (actions & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR &&32013201+ actions & MLX5_FLOW_CONTEXT_ACTION_DROP) {32023202+ NL_SET_ERR_MSG_MOD(extack, "Drop with modify header action is not supported");31923203 return false;31933204 }31943205
···2121#include "dwxgmac2.h"22222323#define REG_SPACE_SIZE 0x10602424+#define GMAC4_REG_SPACE_SIZE 0x116C2425#define MAC100_ETHTOOL_NAME "st_mac100"2526#define GMAC_ETHTOOL_NAME "st_gmac"2627#define XGMAC_ETHTOOL_NAME "st_xgmac"2828+2929+/* Same as DMA_CHAN_BASE_ADDR defined in dwmac4_dma.h3030+ *3131+ * It is here because dwmac_dma.h and dwmac4_dam.h can not be included at the3232+ * same time due to the conflicting macro names.3333+ */3434+#define GMAC4_DMA_CHAN_BASE_ADDR 0x0000110027352836#define ETHTOOL_DMA_OFFSET 552937···442434443435 if (priv->plat->has_xgmac)444436 return XGMAC_REGSIZE * 4;437437+ else if (priv->plat->has_gmac4)438438+ return GMAC4_REG_SPACE_SIZE;445439 return REG_SPACE_SIZE;446440}447441···456446 stmmac_dump_mac_regs(priv, priv->hw, reg_space);457447 stmmac_dump_dma_regs(priv, priv->ioaddr, reg_space);458448459459- if (!priv->plat->has_xgmac) {460460- /* Copy DMA registers to where ethtool expects them */449449+ /* Copy DMA registers to where ethtool expects them */450450+ if (priv->plat->has_gmac4) {451451+ /* GMAC4 dumps its DMA registers at its DMA_CHAN_BASE_ADDR */452452+ memcpy(®_space[ETHTOOL_DMA_OFFSET],453453+ ®_space[GMAC4_DMA_CHAN_BASE_ADDR / 4],454454+ NUM_DWMAC4_DMA_REGS * 4);455455+ } else if (!priv->plat->has_xgmac) {461456 memcpy(®_space[ETHTOOL_DMA_OFFSET],462457 ®_space[DMA_BUS_MODE / 4],463458 NUM_DWMAC1000_DMA_REGS * 4);
···145145146146static void get_systime(void __iomem *ioaddr, u64 *systime)147147{148148- u64 ns;148148+ u64 ns, sec0, sec1;149149150150- /* Get the TSSS value */151151- ns = readl(ioaddr + PTP_STNSR);152152- /* Get the TSS and convert sec time value to nanosecond */153153- ns += readl(ioaddr + PTP_STSR) * 1000000000ULL;150150+ /* Get the TSS value */151151+ sec1 = readl_relaxed(ioaddr + PTP_STSR);152152+ do {153153+ sec0 = sec1;154154+ /* Get the TSSS value */155155+ ns = readl_relaxed(ioaddr + PTP_STNSR);156156+ /* Get the TSS value */157157+ sec1 = readl_relaxed(ioaddr + PTP_STSR);158158+ } while (sec0 != sec1);154159155160 if (systime)156156- *systime = ns;161161+ *systime = ns + (sec1 * 1000000000ULL);157162}158163159164static void get_ptptime(void __iomem *ptpaddr, u64 *ptp_time)
···1111#include <linux/pm_runtime.h>1212#include <linux/bitops.h>13131414+#include "linux/soc/qcom/qcom_aoss.h"1515+1416#include "ipa.h"1517#include "ipa_power.h"1618#include "ipa_endpoint.h"···6664 * struct ipa_power - IPA power management information6765 * @dev: IPA device pointer6866 * @core: IPA core clock6767+ * @qmp: QMP handle for AOSS communication6968 * @spinlock: Protects modem TX queue enable/disable7069 * @flags: Boolean state flags7170 * @interconnect_count: Number of elements in interconnect[]···7572struct ipa_power {7673 struct device *dev;7774 struct clk *core;7575+ struct qmp *qmp;7876 spinlock_t spinlock; /* used with STOPPED/STARTED power flags */7977 DECLARE_BITMAP(flags, IPA_POWER_FLAG_COUNT);8078 u32 interconnect_count;···386382 clear_bit(IPA_POWER_FLAG_STARTED, ipa->power->flags);387383}388384385385+static int ipa_power_retention_init(struct ipa_power *power)386386+{387387+ struct qmp *qmp = qmp_get(power->dev);388388+389389+ if (IS_ERR(qmp)) {390390+ if (PTR_ERR(qmp) == -EPROBE_DEFER)391391+ return -EPROBE_DEFER;392392+393393+ /* We assume any other error means it's not defined/needed */394394+ qmp = NULL;395395+ }396396+ power->qmp = qmp;397397+398398+ return 0;399399+}400400+401401+static void ipa_power_retention_exit(struct ipa_power *power)402402+{403403+ qmp_put(power->qmp);404404+ power->qmp = NULL;405405+}406406+407407+/* Control register retention on power collapse */408408+void ipa_power_retention(struct ipa *ipa, bool enable)409409+{410410+ static const char fmt[] = "{ class: bcm, res: ipa_pc, val: %c }";411411+ struct ipa_power *power = ipa->power;412412+ char buf[36]; /* Exactly enough for fmt[]; size a multiple of 4 */413413+ int ret;414414+415415+ if (!power->qmp)416416+ return; /* Not needed on this platform */417417+418418+ (void)snprintf(buf, sizeof(buf), fmt, enable ? 
'1' : '0');419419+420420+ ret = qmp_send(power->qmp, buf, sizeof(buf));421421+ if (ret)422422+ dev_err(power->dev, "error %d sending QMP %sable request\n",423423+ ret, enable ? "en" : "dis");424424+}425425+389426int ipa_power_setup(struct ipa *ipa)390427{391428 int ret;···483438 if (ret)484439 goto err_kfree;485440441441+ ret = ipa_power_retention_init(power);442442+ if (ret)443443+ goto err_interconnect_exit;444444+486445 pm_runtime_set_autosuspend_delay(dev, IPA_AUTOSUSPEND_DELAY);487446 pm_runtime_use_autosuspend(dev);488447 pm_runtime_enable(dev);489448490449 return power;491450451451+err_interconnect_exit:452452+ ipa_interconnect_exit(power);492453err_kfree:493454 kfree(power);494455err_clk_put:···511460512461 pm_runtime_disable(dev);513462 pm_runtime_dont_use_autosuspend(dev);463463+ ipa_power_retention_exit(power);514464 ipa_interconnect_exit(power);515465 kfree(power);516466 clk_put(clk);
+7
drivers/net/ipa/ipa_power.h
···4141void ipa_power_modem_queue_active(struct ipa *ipa);42424343/**4444+ * ipa_power_retention() - Control register retention on power collapse4545+ * @ipa: IPA pointer4646+ * @enable: Whether retention should be enabled or disabled4747+ */4848+void ipa_power_retention(struct ipa *ipa, bool enable);4949+5050+/**4451 * ipa_power_setup() - Set up IPA power management4552 * @ipa: IPA pointer4653 *
+5
drivers/net/ipa/ipa_uc.c
···11111212#include "ipa.h"1313#include "ipa_uc.h"1414+#include "ipa_power.h"14151516/**1617 * DOC: The IPA embedded microcontroller···155154 case IPA_UC_RESPONSE_INIT_COMPLETED:156155 if (ipa->uc_powered) {157156 ipa->uc_loaded = true;157157+ ipa_power_retention(ipa, true);158158 pm_runtime_mark_last_busy(dev);159159 (void)pm_runtime_put_autosuspend(dev);160160 ipa->uc_powered = false;···186184187185 ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_1);188186 ipa_interrupt_remove(ipa->interrupt, IPA_IRQ_UC_0);187187+ if (ipa->uc_loaded)188188+ ipa_power_retention(ipa, false);189189+189190 if (!ipa->uc_powered)190191 return;191192
+21-12
drivers/net/macsec.c
···38703870 struct macsec_dev *macsec = macsec_priv(dev);38713871 struct net_device *real_dev = macsec->real_dev;3872387238733873+ /* If h/w offloading is available, propagate to the device */38743874+ if (macsec_is_offloaded(macsec)) {38753875+ const struct macsec_ops *ops;38763876+ struct macsec_context ctx;38773877+38783878+ ops = macsec_get_ops(netdev_priv(dev), &ctx);38793879+ if (ops) {38803880+ ctx.secy = &macsec->secy;38813881+ macsec_offload(ops->mdo_del_secy, &ctx);38823882+ }38833883+ }38843884+38733885 unregister_netdevice_queue(dev, head);38743886 list_del_rcu(&macsec->secys);38753887 macsec_del_dev(macsec);···38953883 struct macsec_dev *macsec = macsec_priv(dev);38963884 struct net_device *real_dev = macsec->real_dev;38973885 struct macsec_rxh_data *rxd = macsec_data_rtnl(real_dev);38983898-38993899- /* If h/w offloading is available, propagate to the device */39003900- if (macsec_is_offloaded(macsec)) {39013901- const struct macsec_ops *ops;39023902- struct macsec_context ctx;39033903-39043904- ops = macsec_get_ops(netdev_priv(dev), &ctx);39053905- if (ops) {39063906- ctx.secy = &macsec->secy;39073907- macsec_offload(ops->mdo_del_secy, &ctx);39083908- }39093909- }3910388639113887 macsec_common_dellink(dev, head);39123888···40174017 if (macsec->offload != MACSEC_OFFLOAD_OFF &&40184018 !macsec_check_offload(macsec->offload, macsec))40194019 return -EOPNOTSUPP;40204020+40214021+ /* send_sci must be set to true when transmit sci explicitly is set */40224022+ if ((data && data[IFLA_MACSEC_SCI]) &&40234023+ (data && data[IFLA_MACSEC_INC_SCI])) {40244024+ u8 send_sci = !!nla_get_u8(data[IFLA_MACSEC_INC_SCI]);40254025+40264026+ if (!send_sci)40274027+ return -EINVAL;40284028+ }4020402940214030 if (data && data[IFLA_MACSEC_ICV_LEN])40224031 icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
+13-13
drivers/net/phy/at803x.c
···16881688 if (ret < 0)16891689 return ret;1690169016911691- if (phydev->link && phydev->speed == SPEED_2500)16921692- phydev->interface = PHY_INTERFACE_MODE_2500BASEX;16931693- else16941694- phydev->interface = PHY_INTERFACE_MODE_SMII;16951695-16961696- /* generate seed as a lower random value to make PHY linked as SLAVE easily,16971697- * except for master/slave configuration fault detected.16981698- * the reason for not putting this code into the function link_change_notify is16991699- * the corner case where the link partner is also the qca8081 PHY and the seed17001700- * value is configured as the same value, the link can't be up and no link change17011701- * occurs.17021702- */17031703- if (!phydev->link) {16911691+ if (phydev->link) {16921692+ if (phydev->speed == SPEED_2500)16931693+ phydev->interface = PHY_INTERFACE_MODE_2500BASEX;16941694+ else16951695+ phydev->interface = PHY_INTERFACE_MODE_SGMII;16961696+ } else {16971697+ /* generate seed as a lower random value to make PHY linked as SLAVE easily,16981698+ * except for master/slave configuration fault detected.16991699+ * the reason for not putting this code into the function link_change_notify is17001700+ * the corner case where the link partner is also the qca8081 PHY and the seed17011701+ * value is configured as the same value, the link can't be up and no link change17021702+ * occurs.17031703+ */17041704 if (phydev->master_slave_state == MASTER_SLAVE_STATE_ERR) {17051705 qca808x_phy_ms_seed_enable(phydev, false);17061706 } else {
···42534253 container_of(work, struct nvme_ctrl, async_event_work);4254425442554255 nvme_aen_uevent(ctrl);42564256- ctrl->ops->submit_async_event(ctrl);42564256+42574257+ /*42584258+ * The transport drivers must guarantee AER submission here is safe by42594259+ * flushing ctrl async_event_work after changing the controller state42604260+ * from LIVE and before freeing the admin queue.42614261+ */42624262+ if (ctrl->state == NVME_CTRL_LIVE)42634263+ ctrl->ops->submit_async_event(ctrl);42574264}4258426542594266static bool nvme_ctrl_pp_status(struct nvme_ctrl *ctrl)
···
 				     sizeof(*girq->parents),
 				     GFP_KERNEL);
 	if (!girq->parents) {
-		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
-		return -ENOMEM;
+		err = -ENOMEM;
+		goto out_remove;
 	}
 
 	if (is_7211) {
 		pc->wake_irq = devm_kcalloc(dev, BCM2835_NUM_IRQS,
 					    sizeof(*pc->wake_irq),
 					    GFP_KERNEL);
-		if (!pc->wake_irq)
-			return -ENOMEM;
+		if (!pc->wake_irq) {
+			err = -ENOMEM;
+			goto out_remove;
+		}
 	}
 
 	/*
···
 	len = strlen(dev_name(pc->dev)) + 16;
 	name = devm_kzalloc(pc->dev, len, GFP_KERNEL);
-	if (!name)
-		return -ENOMEM;
+	if (!name) {
+		err = -ENOMEM;
+		goto out_remove;
+	}
 
 	snprintf(name, len, "%s:bank%d", dev_name(pc->dev), i);
···
 	err = gpiochip_add_data(&pc->gpio_chip, pc);
 	if (err) {
 		dev_err(dev, "could not add GPIO chip\n");
-		pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
-		return err;
+		goto out_remove;
 	}
 
 	return 0;
+
+out_remove:
+	pinctrl_remove_gpio_range(pc->pctl_dev, &pc->gpio_range);
+	return err;
 }
 
 static struct platform_driver bcm2835_pinctrl_driver = {
+3-2
drivers/pinctrl/intel/pinctrl-cherryview.c
···
 
 		offset = cctx->intr_lines[intr_line];
 		if (offset == CHV_INVALID_HWIRQ) {
-			dev_err(dev, "interrupt on unused interrupt line %u\n", intr_line);
-			continue;
+			dev_warn_once(dev, "interrupt on unmapped interrupt line %u\n", intr_line);
+			/* Some boards expect hwirq 0 to trigger in this case */
+			offset = 0;
 		}
 
 		generic_handle_domain_irq(gc->irq.domain, offset);
+36-28
drivers/pinctrl/intel/pinctrl-intel.c
···
 	value &= ~PADCFG0_PMODE_MASK;
 	value |= PADCFG0_PMODE_GPIO;
 
-	/* Disable input and output buffers */
-	value |= PADCFG0_GPIORXDIS;
+	/* Disable TX buffer and enable RX (this will be input) */
+	value &= ~PADCFG0_GPIORXDIS;
 	value |= PADCFG0_GPIOTXDIS;
 
 	/* Disable SCI/SMI/NMI generation */
···
 	}
 
 	intel_gpio_set_gpio_mode(padcfg0);
-
-	/* Disable TX buffer and enable RX (this will be input) */
-	__intel_gpio_set_direction(padcfg0, true);
 
 	raw_spin_unlock_irqrestore(&pctrl->lock, flags);
···
 	intel_gpio_set_gpio_mode(reg);
 
-	/* Disable TX buffer and enable RX (this will be input) */
-	__intel_gpio_set_direction(reg, true);
-
 	value = readl(reg);
 
 	value &= ~(PADCFG0_RXEVCFG_MASK | PADCFG0_RXINV);
···
 	}
 
 	return IRQ_RETVAL(ret);
+}
+
+static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
+{
+	int i;
+
+	for (i = 0; i < pctrl->ncommunities; i++) {
+		const struct intel_community *community;
+		void __iomem *base;
+		unsigned int gpp;
+
+		community = &pctrl->communities[i];
+		base = community->regs;
+
+		for (gpp = 0; gpp < community->ngpps; gpp++) {
+			/* Mask and clear all interrupts */
+			writel(0, base + community->ie_offset + gpp * 4);
+			writel(0xffff, base + community->is_offset + gpp * 4);
+		}
+	}
+}
+
+static int intel_gpio_irq_init_hw(struct gpio_chip *gc)
+{
+	struct intel_pinctrl *pctrl = gpiochip_get_data(gc);
+
+	/*
+	 * Make sure the interrupt lines are in a proper state before
+	 * further configuration.
+	 */
+	intel_gpio_irq_init(pctrl);
+
+	return 0;
 }
 
 static int intel_gpio_add_community_ranges(struct intel_pinctrl *pctrl,
···
 	girq->num_parents = 0;
 	girq->default_type = IRQ_TYPE_NONE;
 	girq->handler = handle_bad_irq;
+	girq->init_hw = intel_gpio_irq_init_hw;
 
 	ret = devm_gpiochip_add_data(pctrl->dev, &pctrl->chip, pctrl);
 	if (ret) {
···
 	return 0;
 }
 EXPORT_SYMBOL_GPL(intel_pinctrl_suspend_noirq);
-
-static void intel_gpio_irq_init(struct intel_pinctrl *pctrl)
-{
-	size_t i;
-
-	for (i = 0; i < pctrl->ncommunities; i++) {
-		const struct intel_community *community;
-		void __iomem *base;
-		unsigned int gpp;
-
-		community = &pctrl->communities[i];
-		base = community->regs;
-
-		for (gpp = 0; gpp < community->ngpps; gpp++) {
-			/* Mask and clear all interrupts */
-			writel(0, base + community->ie_offset + gpp * 4);
-			writel(0xffff, base + community->is_offset + gpp * 4);
-		}
-	}
-}
 
 static bool intel_gpio_update_reg(void __iomem *reg, u32 mask, u32 value)
 {
···
 
 menuconfig SURFACE_PLATFORMS
 	bool "Microsoft Surface Platform-Specific Device Drivers"
+	depends on ARM64 || X86 || COMPILE_TEST
 	default y
 	help
 	  Say Y here to get to see options for platform-specific device drivers
···
 	return ret;
 }
 
-static DEFINE_MUTEX(punit_misc_dev_lock);
+/* Lock to prevent module registration when already opened by user space */
+static DEFINE_MUTEX(punit_misc_dev_open_lock);
+/* Lock to allow one share misc device for all ISST interace */
+static DEFINE_MUTEX(punit_misc_dev_reg_lock);
 static int misc_usage_count;
 static int misc_device_ret;
 static int misc_device_open;
···
 	int i, ret = 0;
 
 	/* Fail open, if a module is going away */
-	mutex_lock(&punit_misc_dev_lock);
+	mutex_lock(&punit_misc_dev_open_lock);
 	for (i = 0; i < ISST_IF_DEV_MAX; ++i) {
 		struct isst_if_cmd_cb *cb = &punit_callbacks[i];
···
 	} else {
 		misc_device_open++;
 	}
-	mutex_unlock(&punit_misc_dev_lock);
+	mutex_unlock(&punit_misc_dev_open_lock);
 
 	return ret;
 }
···
 {
 	int i;
 
-	mutex_lock(&punit_misc_dev_lock);
+	mutex_lock(&punit_misc_dev_open_lock);
 	misc_device_open--;
 	for (i = 0; i < ISST_IF_DEV_MAX; ++i) {
 		struct isst_if_cmd_cb *cb = &punit_callbacks[i];
···
 		if (cb->registered)
 			module_put(cb->owner);
 	}
-	mutex_unlock(&punit_misc_dev_lock);
+	mutex_unlock(&punit_misc_dev_open_lock);
 
 	return 0;
 }
···
 	.name = "isst_interface",
 	.fops = &isst_if_char_driver_ops,
 };
+
+static int isst_misc_reg(void)
+{
+	mutex_lock(&punit_misc_dev_reg_lock);
+	if (misc_device_ret)
+		goto unlock_exit;
+
+	if (!misc_usage_count) {
+		misc_device_ret = isst_if_cpu_info_init();
+		if (misc_device_ret)
+			goto unlock_exit;
+
+		misc_device_ret = misc_register(&isst_if_char_driver);
+		if (misc_device_ret) {
+			isst_if_cpu_info_exit();
+			goto unlock_exit;
+		}
+	}
+	misc_usage_count++;
+
+unlock_exit:
+	mutex_unlock(&punit_misc_dev_reg_lock);
+
+	return misc_device_ret;
+}
+
+static void isst_misc_unreg(void)
+{
+	mutex_lock(&punit_misc_dev_reg_lock);
+	if (misc_usage_count)
+		misc_usage_count--;
+	if (!misc_usage_count && !misc_device_ret) {
+		misc_deregister(&isst_if_char_driver);
+		isst_if_cpu_info_exit();
+	}
+	mutex_unlock(&punit_misc_dev_reg_lock);
+}
 
 /**
  * isst_if_cdev_register() - Register callback for IOCTL
···
  */
 int isst_if_cdev_register(int device_type, struct isst_if_cmd_cb *cb)
 {
-	if (misc_device_ret)
-		return misc_device_ret;
+	int ret;
 
 	if (device_type >= ISST_IF_DEV_MAX)
 		return -EINVAL;
 
-	mutex_lock(&punit_misc_dev_lock);
+	mutex_lock(&punit_misc_dev_open_lock);
+	/* Device is already open, we don't want to add new callbacks */
 	if (misc_device_open) {
-		mutex_unlock(&punit_misc_dev_lock);
+		mutex_unlock(&punit_misc_dev_open_lock);
 		return -EAGAIN;
-	}
-	if (!misc_usage_count) {
-		int ret;
-
-		misc_device_ret = misc_register(&isst_if_char_driver);
-		if (misc_device_ret)
-			goto unlock_exit;
-
-		ret = isst_if_cpu_info_init();
-		if (ret) {
-			misc_deregister(&isst_if_char_driver);
-			misc_device_ret = ret;
-			goto unlock_exit;
-		}
 	}
 	memcpy(&punit_callbacks[device_type], cb, sizeof(*cb));
 	punit_callbacks[device_type].registered = 1;
-	misc_usage_count++;
-unlock_exit:
-	mutex_unlock(&punit_misc_dev_lock);
+	mutex_unlock(&punit_misc_dev_open_lock);
 
-	return misc_device_ret;
+	ret = isst_misc_reg();
+	if (ret) {
+		/*
+		 * No need of mutex as the misc device register failed
+		 * as no one can open device yet. Hence no contention.
+		 */
+		punit_callbacks[device_type].registered = 0;
+		return ret;
+	}
+	return 0;
 }
 EXPORT_SYMBOL_GPL(isst_if_cdev_register);
···
  */
 void isst_if_cdev_unregister(int device_type)
 {
-	mutex_lock(&punit_misc_dev_lock);
-	misc_usage_count--;
+	isst_misc_unreg();
+	mutex_lock(&punit_misc_dev_open_lock);
 	punit_callbacks[device_type].registered = 0;
 	if (device_type == ISST_IF_DEV_MBOX)
 		isst_delete_hash();
-	if (!misc_usage_count && !misc_device_ret) {
-		misc_deregister(&isst_if_char_driver);
-		isst_if_cpu_info_exit();
-	}
-	mutex_unlock(&punit_misc_dev_lock);
+	mutex_unlock(&punit_misc_dev_open_lock);
 }
 EXPORT_SYMBOL_GPL(isst_if_cdev_unregister);
+22-3
drivers/platform/x86/thinkpad_acpi.c
···
 	.attrs = fan_driver_attributes,
 };
 
-#define TPACPI_FAN_Q1	0x0001		/* Unitialized HFSP */
-#define TPACPI_FAN_2FAN	0x0002		/* EC 0x31 bit 0 selects fan2 */
-#define TPACPI_FAN_2CTL	0x0004		/* selects fan2 control */
+#define TPACPI_FAN_Q1		0x0001		/* Uninitialized HFSP */
+#define TPACPI_FAN_2FAN		0x0002		/* EC 0x31 bit 0 selects fan2 */
+#define TPACPI_FAN_2CTL		0x0004		/* selects fan2 control */
+#define TPACPI_FAN_NOFAN	0x0008		/* no fan available */
 
 static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
 	TPACPI_QEC_IBM('1', 'Y', TPACPI_FAN_Q1),
···
 	TPACPI_Q_LNV3('N', '4', '0', TPACPI_FAN_2CTL),	/* P1 / X1 Extreme (4nd gen) */
 	TPACPI_Q_LNV3('N', '3', '0', TPACPI_FAN_2CTL),	/* P15 (1st gen) / P15v (1st gen) */
 	TPACPI_Q_LNV3('N', '3', '2', TPACPI_FAN_2CTL),	/* X1 Carbon (9th gen) */
+	TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN),	/* X1 Tablet (2nd gen) */
 };
 
 static int __init fan_init(struct ibm_init_struct *iibm)
···
 
 	quirks = tpacpi_check_quirks(fan_quirk_table,
 				     ARRAY_SIZE(fan_quirk_table));
+
+	if (quirks & TPACPI_FAN_NOFAN) {
+		pr_info("No integrated ThinkPad fan available\n");
+		return -ENODEV;
+	}
 
 	if (gfan_handle) {
 		/* 570, 600e/x, 770e, 770x */
···
 #define DYTC_CMD_MMC_GET	8 /* To get current MMC function and mode */
 #define DYTC_CMD_RESET		0x1ff /* To reset back to default */
 
+#define DYTC_CMD_FUNC_CAP	3 /* To get DYTC capabilities */
+#define DYTC_FC_MMC		27 /* MMC Mode supported */
+
 #define DYTC_GET_FUNCTION_BIT	8  /* Bits 8-11 - function setting */
 #define DYTC_GET_MODE_BIT	12 /* Bits 12-15 - mode setting */
 
···
 	/* Check DYTC is enabled and supports mode setting */
 	if (dytc_version < 5)
 		return -ENODEV;
+
+	/* Check what capabilities are supported. Currently MMC is needed */
+	err = dytc_command(DYTC_CMD_FUNC_CAP, &output);
+	if (err)
+		return err;
+	if (!(output & BIT(DYTC_FC_MMC))) {
+		dbg_printk(TPACPI_DBG_INIT, " DYTC MMC mode not supported\n");
+		return -ENODEV;
+	}
 
 	dbg_printk(TPACPI_DBG_INIT,
 		   "DYTC version %d: thermal mode available\n", dytc_version);
···
 #include <linux/string.h>
 /* For gpio_get_desc() which is EXPORT_SYMBOL_GPL() */
 #include "../../gpio/gpiolib.h"
+#include "../../gpio/gpiolib-acpi.h"
 
 /*
  * Helper code to get Linux IRQ numbers given a description of the IRQ source
···
 	int polarity;	/* ACPI_ACTIVE_HIGH / ACPI_ACTIVE_LOW / ACPI_ACTIVE_BOTH */
 };
 
-static int x86_acpi_irq_helper_gpiochip_find(struct gpio_chip *gc, void *data)
+static int gpiochip_find_match_label(struct gpio_chip *gc, void *data)
 {
 	return gc->label && !strcmp(gc->label, data);
 }
···
 		return irq;
 	case X86_ACPI_IRQ_TYPE_GPIOINT:
 		/* Like acpi_dev_gpio_irq_get(), but without parsing ACPI resources */
-		chip = gpiochip_find(data->chip, x86_acpi_irq_helper_gpiochip_find);
+		chip = gpiochip_find(data->chip, gpiochip_find_match_label);
 		if (!chip) {
 			pr_err("error cannot find GPIO chip %s\n", data->chip);
 			return -ENODEV;
···
 };
 
 struct x86_dev_info {
+	char *invalid_aei_gpiochip;
 	const char * const *modules;
-	struct gpiod_lookup_table **gpiod_lookup_tables;
+	struct gpiod_lookup_table * const *gpiod_lookup_tables;
 	const struct x86_i2c_client_info *i2c_client_info;
 	const struct platform_device_info *pdev_info;
 	const struct x86_serdev_info *serdev_info;
 	int i2c_client_count;
 	int pdev_count;
 	int serdev_count;
+	int (*init)(void);
+	void (*exit)(void);
 };
 
 /* Generic / shared bq24190 settings */
···
 };
 
 static const char * const bq24190_modules[] __initconst = {
-	"crystal_cove_charger", /* For the bq24190 IRQ */
-	"bq24190_charger",	/* For the Vbus regulator for intel-int3496 */
+	"intel_crystal_cove_charger", /* For the bq24190 IRQ */
+	"bq24190_charger",	      /* For the Vbus regulator for intel-int3496 */
 	NULL
 };
···
 	},
 };
 
-static struct gpiod_lookup_table *asus_me176c_gpios[] = {
+static struct gpiod_lookup_table * const asus_me176c_gpios[] = {
 	&int3496_gpo2_pin22_gpios,
 	&asus_me176c_goodix_gpios,
 	NULL
···
 	.serdev_count = ARRAY_SIZE(asus_me176c_serdevs),
 	.gpiod_lookup_tables = asus_me176c_gpios,
 	.modules = bq24190_modules,
+	.invalid_aei_gpiochip = "INT33FC:02",
 };
 
 /* Asus TF103C tablets have an Android factory img with everything hardcoded */
···
 	},
 };
 
-static struct gpiod_lookup_table *asus_tf103c_gpios[] = {
+static struct gpiod_lookup_table * const asus_tf103c_gpios[] = {
 	&int3496_gpo2_pin22_gpios,
 	NULL
 };
···
 	.pdev_count = ARRAY_SIZE(int3496_pdevs),
 	.gpiod_lookup_tables = asus_tf103c_gpios,
 	.modules = bq24190_modules,
+	.invalid_aei_gpiochip = "INT33FC:02",
 };
 
 /*
···
 	.i2c_client_count = ARRAY_SIZE(chuwi_hi8_i2c_clients),
 };
 
+#define CZC_EC_EXTRA_PORT	0x68
+#define CZC_EC_ANDROID_KEYS	0x63
+
+static int __init czc_p10t_init(void)
+{
+	/*
+	 * The device boots up in "Windows 7" mode, when the home button sends a
+	 * Windows specific key sequence (Left Meta + D) and the second button
+	 * sends an unknown one while also toggling the Radio Kill Switch.
+	 * This is a surprising behavior when the second button is labeled "Back".
+	 *
+	 * The vendor-supplied Android-x86 build switches the device to a "Android"
+	 * mode by writing value 0x63 to the I/O port 0x68. This just seems to just
+	 * set bit 6 on address 0x96 in the EC region; switching the bit directly
+	 * seems to achieve the same result. It uses a "p10t_switcher" to do the
+	 * job. It doesn't seem to be able to do anything else, and no other use
+	 * of the port 0x68 is known.
+	 *
+	 * In the Android mode, the home button sends just a single scancode,
+	 * which can be handled in Linux userspace more reasonably and the back
+	 * button only sends a scancode without toggling the kill switch.
+	 * The scancode can then be mapped either to Back or RF Kill functionality
+	 * in userspace, depending on how the button is labeled on that particular
+	 * model.
+	 */
+	outb(CZC_EC_ANDROID_KEYS, CZC_EC_EXTRA_PORT);
+	return 0;
+}
+
+static const struct x86_dev_info czc_p10t __initconst = {
+	.init = czc_p10t_init,
+};
+
 /*
  * Whitelabel (sold as various brands) TM800A550L tablets.
  * These tablet's DSDT contains a whole bunch of bogus ACPI I2C devices
···
 	},
 };
 
-static struct gpiod_lookup_table *whitelabel_tm800a550l_gpios[] = {
+static struct gpiod_lookup_table * const whitelabel_tm800a550l_gpios[] = {
 	&whitelabel_tm800a550l_goodix_gpios,
 	NULL
 };
···
 		.driver_data = (void *)&chuwi_hi8_info,
 	},
 	{
+		/* CZC P10T */
+		.ident = "CZC ODEON TPC-10 (\"P10T\")",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "CZC"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "ODEON*TPC-10"),
+		},
+		.driver_data = (void *)&czc_p10t,
+	},
+	{
+		/* A variant of CZC P10T */
+		.ident = "ViewSonic ViewPad 10",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "ViewSonic"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "VPAD10"),
+		},
+		.driver_data = (void *)&czc_p10t,
+	},
+	{
 		/* Whitelabel (sold as various brands) TM800A550L */
 		.matches = {
 			DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
···
 static struct i2c_client **i2c_clients;
 static struct platform_device **pdevs;
 static struct serdev_device **serdevs;
-static struct gpiod_lookup_table **gpiod_lookup_tables;
+static struct gpiod_lookup_table * const *gpiod_lookup_tables;
+static void (*exit_handler)(void);
 
 static __init int x86_instantiate_i2c_client(const struct x86_dev_info *dev_info,
 					     int idx)
···
 
 	kfree(i2c_clients);
 
+	if (exit_handler)
+		exit_handler();
+
 	for (i = 0; gpiod_lookup_tables && gpiod_lookup_tables[i]; i++)
 		gpiod_remove_lookup_table(gpiod_lookup_tables[i]);
 }
···
 {
 	const struct x86_dev_info *dev_info;
 	const struct dmi_system_id *id;
+	struct gpio_chip *chip;
 	int i, ret = 0;
 
 	id = dmi_first_match(x86_android_tablet_ids);
···
 		return -ENODEV;
 
 	dev_info = id->driver_data;
+
+	/*
+	 * The broken DSDTs on these devices often also include broken
+	 * _AEI (ACPI Event Interrupt) handlers, disable these.
+	 */
+	if (dev_info->invalid_aei_gpiochip) {
+		chip = gpiochip_find(dev_info->invalid_aei_gpiochip,
+				     gpiochip_find_match_label);
+		if (!chip) {
+			pr_err("error cannot find GPIO chip %s\n", dev_info->invalid_aei_gpiochip);
+			return -ENODEV;
+		}
+		acpi_gpiochip_free_interrupts(chip);
+	}
 
 	/*
 	 * Since this runs from module_init() it cannot use -EPROBE_DEFER,
···
 	gpiod_lookup_tables = dev_info->gpiod_lookup_tables;
 	for (i = 0; gpiod_lookup_tables && gpiod_lookup_tables[i]; i++)
 		gpiod_add_lookup_table(gpiod_lookup_tables[i]);
+
+	if (dev_info->init) {
+		ret = dev_info->init();
+		if (ret < 0) {
+			x86_android_tablet_cleanup();
+			return ret;
+		}
+		exit_handler = dev_info->exit;
+	}
 
 	i2c_clients = kcalloc(dev_info->i2c_client_count, sizeof(*i2c_clients), GFP_KERNEL);
 	if (!i2c_clients) {
···
 module_init(x86_android_tablet_init);
 module_exit(x86_android_tablet_cleanup);
 
-MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com");
+MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>");
 MODULE_DESCRIPTION("X86 Android tablets DSDT fixups driver");
 MODULE_LICENSE("GPL");
+2-1
drivers/regulator/max20086-regulator.c
···
 
 #include <linux/err.h>
 #include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
 #include <linux/i2c.h>
 #include <linux/module.h>
 #include <linux/regmap.h>
···
 	node = of_get_child_by_name(chip->dev->of_node, "regulators");
 	if (!node) {
 		dev_err(chip->dev, "regulators node not found\n");
-		return PTR_ERR(node);
+		return -ENODEV;
 	}
 
 	for (i = 0; i < chip->info->num_outputs; ++i)
···
 			      SCSI_TIMEOUT, 3, NULL);
 }
 
+static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
+					unsigned int depth)
+{
+	int new_shift = sbitmap_calculate_shift(depth);
+	bool need_alloc = !sdev->budget_map.map;
+	bool need_free = false;
+	int ret;
+	struct sbitmap sb_backup;
+
+	/*
+	 * realloc if new shift is calculated, which is caused by setting
+	 * up one new default queue depth after calling ->slave_configure
+	 */
+	if (!need_alloc && new_shift != sdev->budget_map.shift)
+		need_alloc = need_free = true;
+
+	if (!need_alloc)
+		return 0;
+
+	/*
+	 * Request queue has to be frozen for reallocating budget map,
+	 * and here disk isn't added yet, so freezing is pretty fast
+	 */
+	if (need_free) {
+		blk_mq_freeze_queue(sdev->request_queue);
+		sb_backup = sdev->budget_map;
+	}
+	ret = sbitmap_init_node(&sdev->budget_map,
+				scsi_device_max_queue_depth(sdev),
+				new_shift, GFP_KERNEL,
+				sdev->request_queue->node, false, true);
+	if (need_free) {
+		if (ret)
+			sdev->budget_map = sb_backup;
+		else
+			sbitmap_free(&sb_backup);
+		ret = 0;
+		blk_mq_unfreeze_queue(sdev->request_queue);
+	}
+	return ret;
+}
+
 /**
  * scsi_alloc_sdev - allocate and setup a scsi_Device
  * @starget: which target to allocate a &scsi_device for
···
 	 * default device queue depth to figure out sbitmap shift
	 * since we use this queue depth most of times.
	 */
-	if (sbitmap_init_node(&sdev->budget_map,
-			      scsi_device_max_queue_depth(sdev),
-			      sbitmap_calculate_shift(depth),
-			      GFP_KERNEL, sdev->request_queue->node,
-			      false, true)) {
+	if (scsi_realloc_sdev_budget_map(sdev, depth)) {
 		put_device(&starget->dev);
 		kfree(sdev);
 		goto out;
···
 		}
 		return SCSI_SCAN_NO_RESPONSE;
 	}
+
+	/*
+	 * The queue_depth is often changed in ->slave_configure.
+	 * Set up budget map again since memory consumption of
+	 * the map depends on actual queue depth.
+	 */
+	scsi_realloc_sdev_budget_map(sdev, sdev->queue_depth);
 }
 
 if (sdev->scsi_level >= SCSI_3)
···
 	writel_relaxed(0, spicc->base + SPICC_INTREG);
 
 	irq = platform_get_irq(pdev, 0);
+	if (irq < 0) {
+		ret = irq;
+		goto out_master;
+	}
+
 	ret = devm_request_irq(&pdev->dev, irq, meson_spicc_irq,
 			       0, NULL, spicc);
 	if (ret) {
+1-1
drivers/spi/spi-mt65xx.c
···
 	else
 		mdata->state = MTK_SPI_IDLE;
 
-	if (!master->can_dma(master, master->cur_msg->spi, trans)) {
+	if (!master->can_dma(master, NULL, trans)) {
 		if (trans->rx_buf) {
 			cnt = mdata->xfer_len / 4;
 			ioread32_rep(mdata->base + SPI_RX_DATA_REG,
+17-30
drivers/spi/spi-stm32-qspi.c
···
 	struct resource *res;
 	int ret, irq;
 
-	ctrl = spi_alloc_master(dev, sizeof(*qspi));
+	ctrl = devm_spi_alloc_master(dev, sizeof(*qspi));
 	if (!ctrl)
 		return -ENOMEM;
···
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi");
 	qspi->io_base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(qspi->io_base)) {
-		ret = PTR_ERR(qspi->io_base);
-		goto err_master_put;
-	}
+	if (IS_ERR(qspi->io_base))
+		return PTR_ERR(qspi->io_base);
 
 	qspi->phys_base = res->start;
 
 	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "qspi_mm");
 	qspi->mm_base = devm_ioremap_resource(dev, res);
-	if (IS_ERR(qspi->mm_base)) {
-		ret = PTR_ERR(qspi->mm_base);
-		goto err_master_put;
-	}
+	if (IS_ERR(qspi->mm_base))
+		return PTR_ERR(qspi->mm_base);
 
 	qspi->mm_size = resource_size(res);
-	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ) {
-		ret = -EINVAL;
-		goto err_master_put;
-	}
+	if (qspi->mm_size > STM32_QSPI_MAX_MMAP_SZ)
+		return -EINVAL;
 
 	irq = platform_get_irq(pdev, 0);
-	if (irq < 0) {
-		ret = irq;
-		goto err_master_put;
-	}
+	if (irq < 0)
+		return irq;
 
 	ret = devm_request_irq(dev, irq, stm32_qspi_irq, 0,
 			       dev_name(dev), qspi);
 	if (ret) {
 		dev_err(dev, "failed to request irq\n");
-		goto err_master_put;
+		return ret;
 	}
 
 	init_completion(&qspi->data_completion);
 	init_completion(&qspi->match_completion);
 
 	qspi->clk = devm_clk_get(dev, NULL);
-	if (IS_ERR(qspi->clk)) {
-		ret = PTR_ERR(qspi->clk);
-		goto err_master_put;
-	}
+	if (IS_ERR(qspi->clk))
+		return PTR_ERR(qspi->clk);
 
 	qspi->clk_rate = clk_get_rate(qspi->clk);
-	if (!qspi->clk_rate) {
-		ret = -EINVAL;
-		goto err_master_put;
-	}
+	if (!qspi->clk_rate)
+		return -EINVAL;
 
 	ret = clk_prepare_enable(qspi->clk);
 	if (ret) {
 		dev_err(dev, "can not enable the clock\n");
-		goto err_master_put;
+		return ret;
 	}
 
 	rstc = devm_reset_control_get_exclusive(dev, NULL);
···
 	pm_runtime_enable(dev);
 	pm_runtime_get_noresume(dev);
 
-	ret = devm_spi_register_master(dev, ctrl);
+	ret = spi_register_master(ctrl);
 	if (ret)
 		goto err_pm_runtime_free;
···
 	stm32_qspi_dma_free(qspi);
 err_clk_disable:
 	clk_disable_unprepare(qspi->clk);
-err_master_put:
-	spi_master_put(qspi->ctrl);
 
 	return ret;
 }
···
 	struct stm32_qspi *qspi = platform_get_drvdata(pdev);
 
 	pm_runtime_get_sync(qspi->dev);
+	spi_unregister_master(qspi->ctrl);
 	/* disable qspi */
 	writel_relaxed(0, qspi->io_base + QSPI_CR);
 	stm32_qspi_dma_free(qspi);
+4-3
drivers/spi/spi-stm32.c
···
  *                      time between frames (if driver has this functionality)
  * @set_number_of_data: optional routine to configure registers to desired
  *                      number of data (if driver has this functionality)
- * @can_dma: routine to determine if the transfer is eligible for DMA use
  * @transfer_one_dma_start: routine to start transfer a single spi_transfer
  *                          using DMA
  * @dma_rx_cb: routine to call after DMA RX channel operation is complete
···
  * @baud_rate_div_min: minimum baud rate divisor
  * @baud_rate_div_max: maximum baud rate divisor
  * @has_fifo: boolean to know if fifo is used for driver
- * @has_startbit: boolean to know if start bit is used to start transfer
+ * @flags: compatible specific SPI controller flags used at registration time
  */
 struct stm32_spi_cfg {
 	const struct stm32_spi_regspec *regs;
···
 	unsigned int baud_rate_div_min;
 	unsigned int baud_rate_div_max;
 	bool has_fifo;
+	u16 flags;
 };
 
 /**
···
 	.baud_rate_div_min = STM32F4_SPI_BR_DIV_MIN,
 	.baud_rate_div_max = STM32F4_SPI_BR_DIV_MAX,
 	.has_fifo = false,
+	.flags = SPI_MASTER_MUST_TX,
 };
 
 static const struct stm32_spi_cfg stm32h7_spi_cfg = {
···
 	master->prepare_message = stm32_spi_prepare_msg;
 	master->transfer_one = stm32_spi_transfer_one;
 	master->unprepare_message = stm32_spi_unprepare_msg;
-	master->flags = SPI_MASTER_MUST_TX;
+	master->flags = spi->cfg->flags;
 
 	spi->dma_tx = dma_request_chan(spi->dev, "tx");
 	if (IS_ERR(spi->dma_tx)) {
+14-4
drivers/spi/spi-uniphier.c
···
 		if (ret) {
 			dev_err(&pdev->dev, "failed to get TX DMA capacities: %d\n",
 				ret);
-			goto out_disable_clk;
+			goto out_release_dma;
 		}
 		dma_tx_burst = caps.max_burst;
 	}
···
 	if (IS_ERR_OR_NULL(master->dma_rx)) {
 		if (PTR_ERR(master->dma_rx) == -EPROBE_DEFER) {
 			ret = -EPROBE_DEFER;
-			goto out_disable_clk;
+			goto out_release_dma;
 		}
 		master->dma_rx = NULL;
 		dma_rx_burst = INT_MAX;
···
 		if (ret) {
 			dev_err(&pdev->dev, "failed to get RX DMA capacities: %d\n",
 				ret);
-			goto out_disable_clk;
+			goto out_release_dma;
 		}
 		dma_rx_burst = caps.max_burst;
 	}
···
 
 	ret = devm_spi_register_master(&pdev->dev, master);
 	if (ret)
-		goto out_disable_clk;
+		goto out_release_dma;
 
 	return 0;
+
+out_release_dma:
+	if (!IS_ERR_OR_NULL(master->dma_rx)) {
+		dma_release_channel(master->dma_rx);
+		master->dma_rx = NULL;
+	}
+	if (!IS_ERR_OR_NULL(master->dma_tx)) {
+		dma_release_channel(master->dma_tx);
+		master->dma_tx = NULL;
+	}
 
 out_disable_clk:
 	clk_disable_unprepare(priv->clk);
+20
drivers/video/console/Kconfig
···
 	help
 	  Low-level framebuffer-based console driver.
 
+config FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
+	bool "Enable legacy fbcon hardware acceleration code"
+	depends on FRAMEBUFFER_CONSOLE
+	default y if PARISC
+	default n
+	help
+	  This option enables the fbcon (framebuffer text-based) hardware
+	  acceleration for graphics drivers which were written for the fbdev
+	  graphics interface.
+
+	  On modern machines, on mainstream machines (like x86-64) or when
+	  using a modern Linux distribution those fbdev drivers usually aren't used.
+	  So enabling this option wouldn't have any effect, which is why you want
+	  to disable this option on such newer machines.
+
+	  If you compile this kernel for older machines which still require the
+	  fbdev drivers, you may want to say Y.
+
+	  If unsure, select n.
+
 config FRAMEBUFFER_CONSOLE_DETECT_PRIMARY
 	bool "Map the console to the primary display device"
 	depends on FRAMEBUFFER_CONSOLE
+16
drivers/video/fbdev/core/bitblit.c
···
 	}
 }
 
+static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+		      int sx, int dy, int dx, int height, int width)
+{
+	struct fb_copyarea area;
+
+	area.sx = sx * vc->vc_font.width;
+	area.sy = sy * vc->vc_font.height;
+	area.dx = dx * vc->vc_font.width;
+	area.dy = dy * vc->vc_font.height;
+	area.height = height * vc->vc_font.height;
+	area.width = width * vc->vc_font.width;
+
+	info->fbops->fb_copyarea(info, &area);
+}
+
 static void bit_clear(struct vc_data *vc, struct fb_info *info, int sy,
 		      int sx, int height, int width)
 {
···
 
 void fbcon_set_bitops(struct fbcon_ops *ops)
 {
+	ops->bmove = bit_bmove;
 	ops->clear = bit_clear;
 	ops->putcs = bit_putcs;
 	ops->clear_margins = bit_clear_margins;
+536-21
drivers/video/fbdev/core/fbcon.c
···
 				   int count, int ypos, int xpos);
 static void fbcon_clear_margins(struct vc_data *vc, int bottom_only);
 static void fbcon_cursor(struct vc_data *vc, int mode);
+static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx,
+			int height, int width);
 static int fbcon_switch(struct vc_data *vc);
 static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch);
 static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table);
···
 /*
  * Internal routines
  */
+static __inline__ void ywrap_up(struct vc_data *vc, int count);
+static __inline__ void ywrap_down(struct vc_data *vc, int count);
+static __inline__ void ypan_up(struct vc_data *vc, int count);
+static __inline__ void ypan_down(struct vc_data *vc, int count);
+static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy, int sx,
+			    int dy, int dx, int height, int width, u_int y_break);
 static void fbcon_set_disp(struct fb_info *info, struct fb_var_screeninfo *var,
 			   int unit);
+static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p,
+			      int line, int count, int dy);
 static void fbcon_modechanged(struct fb_info *info);
 static void fbcon_set_all_vcs(struct fb_info *info);
 static void fbcon_start(void);
···
 	struct vc_data *svc = *default_mode;
 	struct fbcon_display *t, *p = &fb_display[vc->vc_num];
 	int logo = 1, new_rows, new_cols, rows, cols;
-	int ret;
+	int cap, ret;
 
 	if (WARN_ON(info_idx == -1))
		return;
···
 	con2fb_map[vc->vc_num] = info_idx;
 
 	info = registered_fb[con2fb_map[vc->vc_num]];
+	cap = info->flags;
 
 	if (logo_shown < 0 && console_loglevel <= CONSOLE_LOGLEVEL_QUIET)
 		logo_shown = FBCON_LOGO_DONTSHOW;
···
 
 	ops->graphics = 0;
 
+#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION
+	if ((cap & FBINFO_HWACCEL_COPYAREA) &&
+	    !(cap & FBINFO_HWACCEL_DISABLED))
+		p->scrollmode = SCROLL_MOVE;
+	else /* default to something safe */
+		p->scrollmode = SCROLL_REDRAW;
+#endif
+
 	/*
 	 * ++guenther: console.c:vc_allocate() relies on initializing
 	 * vc_{cols,rows}, but we must not set those if we are only
···
 *  This system is now divided into two levels because of complications
 *  caused by hardware scrolling. Top level functions:
 *
- *	fbcon_clear(), fbcon_putc(), fbcon_clear_margins()
+ *	fbcon_bmove(), fbcon_clear(), fbcon_putc(), fbcon_clear_margins()
 *
 *	handles y values in range [0, scr_height-1] that correspond to real
 *	screen positions. y_wrap shift means that first line of bitmap may be
 *	anywhere on this display. These functions convert lineoffsets to
 *	bitmap offsets and deal with the wrap-around case by splitting blits.
 *
+ *	fbcon_bmove_physical_8()    -- These functions fast implementations
 *	fbcon_clear_physical_8()    -- of original fbcon_XXX fns.
 *	fbcon_putc_physical_8()	    -- (font width != 8) may be added later
 *
···
 	}
 }
 
+static __inline__ void ywrap_up(struct vc_data *vc, int count)
+{
+	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+	struct fbcon_ops *ops = info->fbcon_par;
+	struct fbcon_display *p = &fb_display[vc->vc_num];
+
+	p->yscroll += count;
+	if (p->yscroll >= p->vrows)	/* Deal with wrap */
+		p->yscroll -= p->vrows;
+	ops->var.xoffset = 0;
+	ops->var.yoffset = p->yscroll * vc->vc_font.height;
+	ops->var.vmode |= FB_VMODE_YWRAP;
+	ops->update_start(info);
+	scrollback_max += count;
+	if (scrollback_max > scrollback_phys_max)
+		scrollback_max = scrollback_phys_max;
+	scrollback_current = 0;
+}
+
+static __inline__ void ywrap_down(struct vc_data *vc, int count)
+{
+	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+	struct fbcon_ops *ops = info->fbcon_par;
+	struct fbcon_display *p = &fb_display[vc->vc_num];
+
+	p->yscroll -= count;
+	if (p->yscroll < 0)	/* Deal with wrap */
+		p->yscroll += p->vrows;
+	ops->var.xoffset = 0;
+	ops->var.yoffset = p->yscroll * vc->vc_font.height;
+	ops->var.vmode |= FB_VMODE_YWRAP;
+	ops->update_start(info);
+	scrollback_max -= count;
+	if (scrollback_max < 0)
+		scrollback_max = 0;
+	scrollback_current = 0;
+}
+
+static __inline__ void ypan_up(struct vc_data *vc, int count)
+{
+	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+	struct fbcon_display *p = &fb_display[vc->vc_num];
+	struct fbcon_ops *ops = info->fbcon_par;
+
+	p->yscroll += count;
+	if (p->yscroll > p->vrows - vc->vc_rows) {
+		ops->bmove(vc, info, p->vrows - vc->vc_rows,
+			   0, 0, 0, vc->vc_rows, vc->vc_cols);
+		p->yscroll -= p->vrows - vc->vc_rows;
+	}
+
+	ops->var.xoffset = 0;
+	ops->var.yoffset = p->yscroll * vc->vc_font.height;
+	ops->var.vmode &= ~FB_VMODE_YWRAP;
+	ops->update_start(info);
+	fbcon_clear_margins(vc, 1);
+	scrollback_max += count;
+	if (scrollback_max > scrollback_phys_max)
+		scrollback_max = scrollback_phys_max;
+	scrollback_current = 0;
+}
+
+static __inline__ void ypan_up_redraw(struct vc_data *vc, int t, int count)
+{
+	struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+	struct fbcon_ops *ops = info->fbcon_par;
+	struct fbcon_display *p = &fb_display[vc->vc_num];
+
+	p->yscroll +=
count;14621462+14631463+ if (p->yscroll > p->vrows - vc->vc_rows) {14641464+ p->yscroll -= p->vrows - vc->vc_rows;14651465+ fbcon_redraw_move(vc, p, t + count, vc->vc_rows - count, t);14661466+ }14671467+14681468+ ops->var.xoffset = 0;14691469+ ops->var.yoffset = p->yscroll * vc->vc_font.height;14701470+ ops->var.vmode &= ~FB_VMODE_YWRAP;14711471+ ops->update_start(info);14721472+ fbcon_clear_margins(vc, 1);14731473+ scrollback_max += count;14741474+ if (scrollback_max > scrollback_phys_max)14751475+ scrollback_max = scrollback_phys_max;14761476+ scrollback_current = 0;14771477+}14781478+14791479+static __inline__ void ypan_down(struct vc_data *vc, int count)14801480+{14811481+ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];14821482+ struct fbcon_display *p = &fb_display[vc->vc_num];14831483+ struct fbcon_ops *ops = info->fbcon_par;14841484+14851485+ p->yscroll -= count;14861486+ if (p->yscroll < 0) {14871487+ ops->bmove(vc, info, 0, 0, p->vrows - vc->vc_rows,14881488+ 0, vc->vc_rows, vc->vc_cols);14891489+ p->yscroll += p->vrows - vc->vc_rows;14901490+ }14911491+14921492+ ops->var.xoffset = 0;14931493+ ops->var.yoffset = p->yscroll * vc->vc_font.height;14941494+ ops->var.vmode &= ~FB_VMODE_YWRAP;14951495+ ops->update_start(info);14961496+ fbcon_clear_margins(vc, 1);14971497+ scrollback_max -= count;14981498+ if (scrollback_max < 0)14991499+ scrollback_max = 0;15001500+ scrollback_current = 0;15011501+}15021502+15031503+static __inline__ void ypan_down_redraw(struct vc_data *vc, int t, int count)15041504+{15051505+ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];15061506+ struct fbcon_ops *ops = info->fbcon_par;15071507+ struct fbcon_display *p = &fb_display[vc->vc_num];15081508+15091509+ p->yscroll -= count;15101510+15111511+ if (p->yscroll < 0) {15121512+ p->yscroll += p->vrows - vc->vc_rows;15131513+ fbcon_redraw_move(vc, p, t, vc->vc_rows - count, t + count);15141514+ }15151515+15161516+ ops->var.xoffset = 0;15171517+ ops->var.yoffset 
= p->yscroll * vc->vc_font.height;15181518+ ops->var.vmode &= ~FB_VMODE_YWRAP;15191519+ ops->update_start(info);15201520+ fbcon_clear_margins(vc, 1);15211521+ scrollback_max -= count;15221522+ if (scrollback_max < 0)15231523+ scrollback_max = 0;15241524+ scrollback_current = 0;15251525+}15261526+15271527+static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p,15281528+ int line, int count, int dy)15291529+{15301530+ unsigned short *s = (unsigned short *)15311531+ (vc->vc_origin + vc->vc_size_row * line);15321532+15331533+ while (count--) {15341534+ unsigned short *start = s;15351535+ unsigned short *le = advance_row(s, 1);15361536+ unsigned short c;15371537+ int x = 0;15381538+ unsigned short attr = 1;15391539+15401540+ do {15411541+ c = scr_readw(s);15421542+ if (attr != (c & 0xff00)) {15431543+ attr = c & 0xff00;15441544+ if (s > start) {15451545+ fbcon_putcs(vc, start, s - start,15461546+ dy, x);15471547+ x += s - start;15481548+ start = s;15491549+ }15501550+ }15511551+ console_conditional_schedule();15521552+ s++;15531553+ } while (s < le);15541554+ if (s > start)15551555+ fbcon_putcs(vc, start, s - start, dy, x);15561556+ console_conditional_schedule();15571557+ dy++;15581558+ }15591559+}15601560+15611561+static void fbcon_redraw_blit(struct vc_data *vc, struct fb_info *info,15621562+ struct fbcon_display *p, int line, int count, int ycount)15631563+{15641564+ int offset = ycount * vc->vc_cols;15651565+ unsigned short *d = (unsigned short *)15661566+ (vc->vc_origin + vc->vc_size_row * line);15671567+ unsigned short *s = d + offset;15681568+ struct fbcon_ops *ops = info->fbcon_par;15691569+15701570+ while (count--) {15711571+ unsigned short *start = s;15721572+ unsigned short *le = advance_row(s, 1);15731573+ unsigned short c;15741574+ int x = 0;15751575+15761576+ do {15771577+ c = scr_readw(s);15781578+15791579+ if (c == scr_readw(d)) {15801580+ if (s > start) {15811581+ ops->bmove(vc, info, line + ycount, x,15821582+ line, x, 1, 
s-start);15831583+ x += s - start + 1;15841584+ start = s + 1;15851585+ } else {15861586+ x++;15871587+ start++;15881588+ }15891589+ }15901590+15911591+ scr_writew(c, d);15921592+ console_conditional_schedule();15931593+ s++;15941594+ d++;15951595+ } while (s < le);15961596+ if (s > start)15971597+ ops->bmove(vc, info, line + ycount, x, line, x, 1,15981598+ s-start);15991599+ console_conditional_schedule();16001600+ if (ycount > 0)16011601+ line++;16021602+ else {16031603+ line--;16041604+ /* NOTE: We subtract two lines from these pointers */16051605+ s -= vc->vc_size_row;16061606+ d -= vc->vc_size_row;16071607+ }16081608+ }16091609+}16101610+14131611static void fbcon_redraw(struct vc_data *vc, struct fbcon_display *p,14141612 int line, int count, int offset)14151613{···16881450{16891451 struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];16901452 struct fbcon_display *p = &fb_display[vc->vc_num];14531453+ int scroll_partial = info->flags & FBINFO_PARTIAL_PAN_OK;1691145416921455 if (fbcon_is_inactive(vc, info))16931456 return true;···17051466 case SM_UP:17061467 if (count > vc->vc_rows) /* Maximum realistic size */17071468 count = vc->vc_rows;17081708- fbcon_redraw(vc, p, t, b - t - count,17091709- count * vc->vc_cols);17101710- fbcon_clear(vc, b - count, 0, count, vc->vc_cols);17111711- scr_memsetw((unsigned short *) (vc->vc_origin +17121712- vc->vc_size_row *17131713- (b - count)),17141714- vc->vc_video_erase_char,17151715- vc->vc_size_row * count);17161716- return true;14691469+ if (logo_shown >= 0)14701470+ goto redraw_up;14711471+ switch (fb_scrollmode(p)) {14721472+ case SCROLL_MOVE:14731473+ fbcon_redraw_blit(vc, info, p, t, b - t - count,14741474+ count);14751475+ fbcon_clear(vc, b - count, 0, count, vc->vc_cols);14761476+ scr_memsetw((unsigned short *) (vc->vc_origin +14771477+ vc->vc_size_row *14781478+ (b - count)),14791479+ vc->vc_video_erase_char,14801480+ vc->vc_size_row * count);14811481+ return true;14821482+14831483+ case 
SCROLL_WRAP_MOVE:14841484+ if (b - t - count > 3 * vc->vc_rows >> 2) {14851485+ if (t > 0)14861486+ fbcon_bmove(vc, 0, 0, count, 0, t,14871487+ vc->vc_cols);14881488+ ywrap_up(vc, count);14891489+ if (vc->vc_rows - b > 0)14901490+ fbcon_bmove(vc, b - count, 0, b, 0,14911491+ vc->vc_rows - b,14921492+ vc->vc_cols);14931493+ } else if (info->flags & FBINFO_READS_FAST)14941494+ fbcon_bmove(vc, t + count, 0, t, 0,14951495+ b - t - count, vc->vc_cols);14961496+ else14971497+ goto redraw_up;14981498+ fbcon_clear(vc, b - count, 0, count, vc->vc_cols);14991499+ break;15001500+15011501+ case SCROLL_PAN_REDRAW:15021502+ if ((p->yscroll + count <=15031503+ 2 * (p->vrows - vc->vc_rows))15041504+ && ((!scroll_partial && (b - t == vc->vc_rows))15051505+ || (scroll_partial15061506+ && (b - t - count >15071507+ 3 * vc->vc_rows >> 2)))) {15081508+ if (t > 0)15091509+ fbcon_redraw_move(vc, p, 0, t, count);15101510+ ypan_up_redraw(vc, t, count);15111511+ if (vc->vc_rows - b > 0)15121512+ fbcon_redraw_move(vc, p, b,15131513+ vc->vc_rows - b, b);15141514+ } else15151515+ fbcon_redraw_move(vc, p, t + count, b - t - count, t);15161516+ fbcon_clear(vc, b - count, 0, count, vc->vc_cols);15171517+ break;15181518+15191519+ case SCROLL_PAN_MOVE:15201520+ if ((p->yscroll + count <=15211521+ 2 * (p->vrows - vc->vc_rows))15221522+ && ((!scroll_partial && (b - t == vc->vc_rows))15231523+ || (scroll_partial15241524+ && (b - t - count >15251525+ 3 * vc->vc_rows >> 2)))) {15261526+ if (t > 0)15271527+ fbcon_bmove(vc, 0, 0, count, 0, t,15281528+ vc->vc_cols);15291529+ ypan_up(vc, count);15301530+ if (vc->vc_rows - b > 0)15311531+ fbcon_bmove(vc, b - count, 0, b, 0,15321532+ vc->vc_rows - b,15331533+ vc->vc_cols);15341534+ } else if (info->flags & FBINFO_READS_FAST)15351535+ fbcon_bmove(vc, t + count, 0, t, 0,15361536+ b - t - count, vc->vc_cols);15371537+ else15381538+ goto redraw_up;15391539+ fbcon_clear(vc, b - count, 0, count, vc->vc_cols);15401540+ break;15411541+15421542+ case 
SCROLL_REDRAW:15431543+ redraw_up:15441544+ fbcon_redraw(vc, p, t, b - t - count,15451545+ count * vc->vc_cols);15461546+ fbcon_clear(vc, b - count, 0, count, vc->vc_cols);15471547+ scr_memsetw((unsigned short *) (vc->vc_origin +15481548+ vc->vc_size_row *15491549+ (b - count)),15501550+ vc->vc_video_erase_char,15511551+ vc->vc_size_row * count);15521552+ return true;15531553+ }15541554+ break;1717155517181556 case SM_DOWN:17191557 if (count > vc->vc_rows) /* Maximum realistic size */17201558 count = vc->vc_rows;17211721- fbcon_redraw(vc, p, b - 1, b - t - count,17221722- -count * vc->vc_cols);17231723- fbcon_clear(vc, t, 0, count, vc->vc_cols);17241724- scr_memsetw((unsigned short *) (vc->vc_origin +17251725- vc->vc_size_row *17261726- t),17271727- vc->vc_video_erase_char,17281728- vc->vc_size_row * count);17291729- return true;15591559+ if (logo_shown >= 0)15601560+ goto redraw_down;15611561+ switch (fb_scrollmode(p)) {15621562+ case SCROLL_MOVE:15631563+ fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,15641564+ -count);15651565+ fbcon_clear(vc, t, 0, count, vc->vc_cols);15661566+ scr_memsetw((unsigned short *) (vc->vc_origin +15671567+ vc->vc_size_row *15681568+ t),15691569+ vc->vc_video_erase_char,15701570+ vc->vc_size_row * count);15711571+ return true;15721572+15731573+ case SCROLL_WRAP_MOVE:15741574+ if (b - t - count > 3 * vc->vc_rows >> 2) {15751575+ if (vc->vc_rows - b > 0)15761576+ fbcon_bmove(vc, b, 0, b - count, 0,15771577+ vc->vc_rows - b,15781578+ vc->vc_cols);15791579+ ywrap_down(vc, count);15801580+ if (t > 0)15811581+ fbcon_bmove(vc, count, 0, 0, 0, t,15821582+ vc->vc_cols);15831583+ } else if (info->flags & FBINFO_READS_FAST)15841584+ fbcon_bmove(vc, t, 0, t + count, 0,15851585+ b - t - count, vc->vc_cols);15861586+ else15871587+ goto redraw_down;15881588+ fbcon_clear(vc, t, 0, count, vc->vc_cols);15891589+ break;15901590+15911591+ case SCROLL_PAN_MOVE:15921592+ if ((count - p->yscroll <= p->vrows - vc->vc_rows)15931593+ && ((!scroll_partial 
&& (b - t == vc->vc_rows))15941594+ || (scroll_partial15951595+ && (b - t - count >15961596+ 3 * vc->vc_rows >> 2)))) {15971597+ if (vc->vc_rows - b > 0)15981598+ fbcon_bmove(vc, b, 0, b - count, 0,15991599+ vc->vc_rows - b,16001600+ vc->vc_cols);16011601+ ypan_down(vc, count);16021602+ if (t > 0)16031603+ fbcon_bmove(vc, count, 0, 0, 0, t,16041604+ vc->vc_cols);16051605+ } else if (info->flags & FBINFO_READS_FAST)16061606+ fbcon_bmove(vc, t, 0, t + count, 0,16071607+ b - t - count, vc->vc_cols);16081608+ else16091609+ goto redraw_down;16101610+ fbcon_clear(vc, t, 0, count, vc->vc_cols);16111611+ break;16121612+16131613+ case SCROLL_PAN_REDRAW:16141614+ if ((count - p->yscroll <= p->vrows - vc->vc_rows)16151615+ && ((!scroll_partial && (b - t == vc->vc_rows))16161616+ || (scroll_partial16171617+ && (b - t - count >16181618+ 3 * vc->vc_rows >> 2)))) {16191619+ if (vc->vc_rows - b > 0)16201620+ fbcon_redraw_move(vc, p, b, vc->vc_rows - b,16211621+ b - count);16221622+ ypan_down_redraw(vc, t, count);16231623+ if (t > 0)16241624+ fbcon_redraw_move(vc, p, count, t, 0);16251625+ } else16261626+ fbcon_redraw_move(vc, p, t, b - t - count, t + count);16271627+ fbcon_clear(vc, t, 0, count, vc->vc_cols);16281628+ break;16291629+16301630+ case SCROLL_REDRAW:16311631+ redraw_down:16321632+ fbcon_redraw(vc, p, b - 1, b - t - count,16331633+ -count * vc->vc_cols);16341634+ fbcon_clear(vc, t, 0, count, vc->vc_cols);16351635+ scr_memsetw((unsigned short *) (vc->vc_origin +16361636+ vc->vc_size_row *16371637+ t),16381638+ vc->vc_video_erase_char,16391639+ vc->vc_size_row * count);16401640+ return true;16411641+ }17301642 }17311643 return false;16441644+}16451645+16461646+16471647+static void fbcon_bmove(struct vc_data *vc, int sy, int sx, int dy, int dx,16481648+ int height, int width)16491649+{16501650+ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];16511651+ struct fbcon_display *p = &fb_display[vc->vc_num];16521652+16531653+ if (fbcon_is_inactive(vc, info))16541654+ 
return;16551655+16561656+ if (!width || !height)16571657+ return;16581658+16591659+ /* Split blits that cross physical y_wrap case.16601660+ * Pathological case involves 4 blits, better to use recursive16611661+ * code rather than unrolled case16621662+ *16631663+ * Recursive invocations don't need to erase the cursor over and16641664+ * over again, so we use fbcon_bmove_rec()16651665+ */16661666+ fbcon_bmove_rec(vc, p, sy, sx, dy, dx, height, width,16671667+ p->vrows - p->yscroll);16681668+}16691669+16701670+static void fbcon_bmove_rec(struct vc_data *vc, struct fbcon_display *p, int sy, int sx,16711671+ int dy, int dx, int height, int width, u_int y_break)16721672+{16731673+ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];16741674+ struct fbcon_ops *ops = info->fbcon_par;16751675+ u_int b;16761676+16771677+ if (sy < y_break && sy + height > y_break) {16781678+ b = y_break - sy;16791679+ if (dy < sy) { /* Avoid trashing self */16801680+ fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width,16811681+ y_break);16821682+ fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx,16831683+ height - b, width, y_break);16841684+ } else {16851685+ fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx,16861686+ height - b, width, y_break);16871687+ fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width,16881688+ y_break);16891689+ }16901690+ return;16911691+ }16921692+16931693+ if (dy < y_break && dy + height > y_break) {16941694+ b = y_break - dy;16951695+ if (dy < sy) { /* Avoid trashing self */16961696+ fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width,16971697+ y_break);16981698+ fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx,16991699+ height - b, width, y_break);17001700+ } else {17011701+ fbcon_bmove_rec(vc, p, sy + b, sx, dy + b, dx,17021702+ height - b, width, y_break);17031703+ fbcon_bmove_rec(vc, p, sy, sx, dy, dx, b, width,17041704+ y_break);17051705+ }17061706+ return;17071707+ }17081708+ ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,17091709+ height, 
width);17101710+}17111711+17121712+static void updatescrollmode_accel(struct fbcon_display *p,17131713+ struct fb_info *info,17141714+ struct vc_data *vc)17151715+{17161716+#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION17171717+ struct fbcon_ops *ops = info->fbcon_par;17181718+ int cap = info->flags;17191719+ u16 t = 0;17201720+ int ypan = FBCON_SWAP(ops->rotate, info->fix.ypanstep,17211721+ info->fix.xpanstep);17221722+ int ywrap = FBCON_SWAP(ops->rotate, info->fix.ywrapstep, t);17231723+ int yres = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);17241724+ int vyres = FBCON_SWAP(ops->rotate, info->var.yres_virtual,17251725+ info->var.xres_virtual);17261726+ int good_pan = (cap & FBINFO_HWACCEL_YPAN) &&17271727+ divides(ypan, vc->vc_font.height) && vyres > yres;17281728+ int good_wrap = (cap & FBINFO_HWACCEL_YWRAP) &&17291729+ divides(ywrap, vc->vc_font.height) &&17301730+ divides(vc->vc_font.height, vyres) &&17311731+ divides(vc->vc_font.height, yres);17321732+ int reading_fast = cap & FBINFO_READS_FAST;17331733+ int fast_copyarea = (cap & FBINFO_HWACCEL_COPYAREA) &&17341734+ !(cap & FBINFO_HWACCEL_DISABLED);17351735+ int fast_imageblit = (cap & FBINFO_HWACCEL_IMAGEBLIT) &&17361736+ !(cap & FBINFO_HWACCEL_DISABLED);17371737+17381738+ if (good_wrap || good_pan) {17391739+ if (reading_fast || fast_copyarea)17401740+ p->scrollmode = good_wrap ?17411741+ SCROLL_WRAP_MOVE : SCROLL_PAN_MOVE;17421742+ else17431743+ p->scrollmode = good_wrap ? 
SCROLL_REDRAW :17441744+ SCROLL_PAN_REDRAW;17451745+ } else {17461746+ if (reading_fast || (fast_copyarea && !fast_imageblit))17471747+ p->scrollmode = SCROLL_MOVE;17481748+ else17491749+ p->scrollmode = SCROLL_REDRAW;17501750+ }17511751+#endif17321752}1733175317341754static void updatescrollmode(struct fbcon_display *p,···20051507 p->vrows -= (yres - (fh * vc->vc_rows)) / fh;20061508 if ((yres % fh) && (vyres % fh < yres % fh))20071509 p->vrows--;15101510+15111511+ /* update scrollmode in case hardware acceleration is used */15121512+ updatescrollmode_accel(p, info, vc);20081513}2009151420101515#define PITCH(w) (((w) + 7) >> 3)···2165166421661665 updatescrollmode(p, info, vc);2167166621682168- scrollback_phys_max = 0;16671667+ switch (fb_scrollmode(p)) {16681668+ case SCROLL_WRAP_MOVE:16691669+ scrollback_phys_max = p->vrows - vc->vc_rows;16701670+ break;16711671+ case SCROLL_PAN_MOVE:16721672+ case SCROLL_PAN_REDRAW:16731673+ scrollback_phys_max = p->vrows - 2 * vc->vc_rows;16741674+ if (scrollback_phys_max < 0)16751675+ scrollback_phys_max = 0;16761676+ break;16771677+ default:16781678+ scrollback_phys_max = 0;16791679+ break;16801680+ }16811681+21691682 scrollback_max = 0;21701683 scrollback_current = 0;21711684
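The y-wrap bookkeeping that ywrap_up()/ywrap_down() reintroduce above boils down to modular arithmetic on yscroll: the offset advances modulo the number of virtual rows, and a visible row maps to physical row (row + yscroll) % vrows, which is what the real_y() helper computes. A toy model (names are ours, not the patch's):

```c
#include <assert.h>

/* Toy model of fbcon's y-wrap scrolling state; illustrative only. */
struct toy_disp {
	int yscroll;	/* current scroll offset, in character rows */
	int vrows;	/* virtual rows in the framebuffer */
};

/* Mirrors ywrap_up(): advance and wrap (assumes count < vrows,
 * as fbcon clamps count to vc_rows before scrolling). */
static void toy_ywrap_up(struct toy_disp *p, int count)
{
	p->yscroll += count;
	if (p->yscroll >= p->vrows)	/* deal with wrap */
		p->yscroll -= p->vrows;
}

/* Mirrors ywrap_down(): retreat and wrap. */
static void toy_ywrap_down(struct toy_disp *p, int count)
{
	p->yscroll -= count;
	if (p->yscroll < 0)
		p->yscroll += p->vrows;
}

/* Visible row -> physical row, the real_y() mapping. */
static int toy_real_y(const struct toy_disp *p, int row)
{
	return (row + p->yscroll) % p->vrows;
}
```

This is why good_wrap in updatescrollmode_accel() demands that the font height divide both yres and vyres: the modulo mapping only stays row-aligned when the wrap step tiles the buffer exactly.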
+72
drivers/video/fbdev/core/fbcon.h
···2929 /* Filled in by the low-level console driver */3030 const u_char *fontdata;3131 int userfont; /* != 0 if fontdata kmalloc()ed */3232+#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION3333+ u_short scrollmode; /* Scroll Method, use fb_scrollmode() */3434+#endif3235 u_short inverse; /* != 0 text black on white as default */3336 short yscroll; /* Hardware scrolling */3437 int vrows; /* number of virtual rows */···5451};55525653struct fbcon_ops {5454+ void (*bmove)(struct vc_data *vc, struct fb_info *info, int sy,5555+ int sx, int dy, int dx, int height, int width);5756 void (*clear)(struct vc_data *vc, struct fb_info *info, int sy,5857 int sx, int height, int width);5958 void (*putcs)(struct vc_data *vc, struct fb_info *info,···153148154149#define attr_bgcol_ec(bgshift, vc, info) attr_col_ec(bgshift, vc, info, 0)155150#define attr_fgcol_ec(fgshift, vc, info) attr_col_ec(fgshift, vc, info, 1)151151+152152+ /*153153+ * Scroll Method154154+ */155155+156156+/* There are several methods fbcon can use to move text around the screen:157157+ *158158+ * Operation Pan Wrap159159+ *---------------------------------------------160160+ * SCROLL_MOVE copyarea No No161161+ * SCROLL_PAN_MOVE copyarea Yes No162162+ * SCROLL_WRAP_MOVE copyarea No Yes163163+ * SCROLL_REDRAW imageblit No No164164+ * SCROLL_PAN_REDRAW imageblit Yes No165165+ * SCROLL_WRAP_REDRAW imageblit No Yes166166+ *167167+ * (SCROLL_WRAP_REDRAW is not implemented yet)168168+ *169169+ * In general, fbcon will choose the best scrolling170170+ * method based on the rule below:171171+ *172172+ * Pan/Wrap > accel imageblit > accel copyarea >173173+ * soft imageblit > (soft copyarea)174174+ *175175+ * Exception to the rule: Pan + accel copyarea is176176+ * preferred over Pan + accel imageblit.177177+ *178178+ * The above is typical for PCI/AGP cards. 
Unless179179+ * overridden, fbcon will never use soft copyarea.180180+ *181181+ * If you need to override the above rule, set the182182+ * appropriate flags in fb_info->flags. For example,183183+ * to prefer copyarea over imageblit, set184184+ * FBINFO_READS_FAST.185185+ *186186+ * Other notes:187187+ * + use the hardware engine to move the text188188+ * (hw-accelerated copyarea() and fillrect())189189+ * + use hardware-supported panning on a large virtual screen190190+ * + amifb can not only pan, but also wrap the display by N lines191191+ * (i.e. visible line i = physical line (i+N) % yres).192192+ * + read what's already rendered on the screen and193193+ * write it in a different place (this is cfb_copyarea())194194+ * + re-render the text to the screen195195+ *196196+ * Whether to use wrapping or panning can only be figured out at197197+ * runtime (when we know whether our font height is a multiple198198+ * of the pan/wrap step)199199+ *200200+ */201201+202202+#define SCROLL_MOVE 0x001203203+#define SCROLL_PAN_MOVE 0x002204204+#define SCROLL_WRAP_MOVE 0x003205205+#define SCROLL_REDRAW 0x004206206+#define SCROLL_PAN_REDRAW 0x005207207+208208+static inline u_short fb_scrollmode(struct fbcon_display *fb)209209+{210210+#ifdef CONFIG_FRAMEBUFFER_CONSOLE_LEGACY_ACCELERATION211211+ return fb->scrollmode;212212+#else213213+ /* hardcoded to SCROLL_REDRAW if acceleration was disabled. */214214+ return SCROLL_REDRAW;215215+#endif216216+}217217+156218157219#ifdef CONFIG_FB_TILEBLITTING158220extern void fbcon_set_tileops(struct vc_data *vc, struct fb_info *info);
+16
drivers/video/fbdev/core/tiles.c
···1616#include <asm/types.h>1717#include "fbcon.h"18181919+static void tile_bmove(struct vc_data *vc, struct fb_info *info, int sy,2020+ int sx, int dy, int dx, int height, int width)2121+{2222+ struct fb_tilearea area;2323+2424+ area.sx = sx;2525+ area.sy = sy;2626+ area.dx = dx;2727+ area.dy = dy;2828+ area.height = height;2929+ area.width = width;3030+3131+ info->tileops->fb_tilecopy(info, &area);3232+}3333+1934static void tile_clear(struct vc_data *vc, struct fb_info *info, int sy,2035 int sx, int height, int width)2136{···133118 struct fb_tilemap map;134119 struct fbcon_ops *ops = info->fbcon_par;135120121121+ ops->bmove = tile_bmove;136122 ops->clear = tile_clear;137123 ops->putcs = tile_putcs;138124 ops->clear_margins = tile_clear_margins;
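The preference rule tabulated in the fbcon.h comment above (Pan/Wrap > accel imageblit > accel copyarea > soft imageblit, with FBINFO_READS_FAST as the override) condenses to a few branches, matching what updatescrollmode_accel() does in fbcon.c. A standalone sketch, with the FBINFO_* capability tests reduced to plain booleans:

```c
#include <assert.h>

/* Values as defined in fbcon.h by this patch. */
enum {
	SCROLL_MOVE		= 0x001,
	SCROLL_PAN_MOVE		= 0x002,
	SCROLL_WRAP_MOVE	= 0x003,
	SCROLL_REDRAW		= 0x004,
	SCROLL_PAN_REDRAW	= 0x005,
};

/* Condensed form of updatescrollmode_accel()'s decision; the five
 * booleans stand in for the good_wrap/good_pan/FBINFO_* tests. */
static int pick_scrollmode(int good_wrap, int good_pan, int reads_fast,
			   int fast_copyarea, int fast_imageblit)
{
	if (good_wrap || good_pan) {
		if (reads_fast || fast_copyarea)
			return good_wrap ? SCROLL_WRAP_MOVE : SCROLL_PAN_MOVE;
		return good_wrap ? SCROLL_REDRAW : SCROLL_PAN_REDRAW;
	}
	if (reads_fast || (fast_copyarea && !fast_imageblit))
		return SCROLL_MOVE;
	return SCROLL_REDRAW;
}
```

Note the "exception to the rule" from the comment falls out naturally: without pan/wrap, an accelerated imageblit keeps the driver on SCROLL_REDRAW even when copyarea is also accelerated, unless FBINFO_READS_FAST forces SCROLL_MOVE.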
+6-6
drivers/video/fbdev/skeletonfb.c
···505505}506506507507/**508508- * xxxfb_copyarea - OBSOLETE function.508508+ * xxxfb_copyarea - REQUIRED function. Can use generic routines if509509+ non accelerated hardware and packed pixel based.509510 * Copies one area of the screen to another area.510510- * Will be deleted in a future version511511 *512512 * @info: frame buffer structure that represents a single frame buffer513513 * @area: Structure providing the data to copy the framebuffer contents514514 * from one region to another.515515 *516516- * This drawing operation copied a rectangular area from one area of the516516+ * This drawing operation copies a rectangular area from one area of the517517 * screen to another area.518518 */519519void xxxfb_copyarea(struct fb_info *p, const struct fb_copyarea *area) 
+4-5
fs/9p/fid.c
···9696 dentry, dentry, from_kuid(&init_user_ns, uid),9797 any);9898 ret = NULL;9999-100100- if (d_inode(dentry))101101- ret = v9fs_fid_find_inode(d_inode(dentry), uid);102102-10399 /* we'll recheck under lock if there's anything to look in */104104- if (!ret && dentry->d_fsdata) {100100+ if (dentry->d_fsdata) {105101 struct hlist_head *h = (struct hlist_head *)&dentry->d_fsdata;106102107103 spin_lock(&dentry->d_lock);···109113 }110114 }111115 spin_unlock(&dentry->d_lock);116116+ } else {117117+ if (dentry->d_inode)118118+ ret = v9fs_fid_find_inode(dentry->d_inode, uid);112119 }113120114121 return ret;
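The fs/9p change above reorders v9fs_fid_find(): fids cached on the dentry (d_fsdata) are searched first, and the inode-based lookup is now only a fallback for dentries that carry no fid list. The control flow, with simplified stand-in types rather than the real 9p structures:

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the 9p fid cache; illustrative only. */
struct toy_fid { int uid; struct toy_fid *next; };

struct toy_dentry {
	struct toy_fid *fids;		/* stands in for dentry->d_fsdata */
	struct toy_fid *inode_fid;	/* stands in for v9fs_fid_find_inode() */
};

/* Lookup order after the patch: dentry list first, inode fallback
 * only when the dentry has no list (taken under d_lock for real). */
static struct toy_fid *toy_fid_find(struct toy_dentry *d, int uid, int any)
{
	struct toy_fid *f;

	if (d->fids) {
		for (f = d->fids; f; f = f->next)
			if (any || f->uid == uid)
				return f;
		return NULL;	/* no inode fallback in this branch */
	}
	return d->inode_fid;
}
```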
···124124{125125 if (refcount_dec_and_test(&cache->refs)) {126126 WARN_ON(cache->pinned > 0);127127- WARN_ON(cache->reserved > 0);127127+ /*128128+ * If there was a failure to cleanup a log tree, very likely due129129+ * to an IO failure on a writeback attempt of one or more of its130130+ * extent buffers, we could not do proper (and cheap) unaccounting131131+ * of their reserved space, so don't warn on reserved > 0 in that132132+ * case.133133+ */134134+ if (!(cache->flags & BTRFS_BLOCK_GROUP_METADATA) ||135135+ !BTRFS_FS_LOG_CLEANUP_ERROR(cache->fs_info))136136+ WARN_ON(cache->reserved > 0);128137129138 /*130139 * A block_group shouldn't be on the discard_list anymore.···25532544 int ret;25542545 bool dirty_bg_running;2555254625472547+ /*25482548+ * This can only happen when we are doing read-only scrub on read-only25492549+ * mount.25502550+ * In that case we should not start a new transaction on read-only fs.25512551+ * Thus here we skip all chunk allocations.25522552+ */25532553+ if (sb_rdonly(fs_info->sb)) {25542554+ mutex_lock(&fs_info->ro_block_group_mutex);25552555+ ret = inc_block_group_ro(cache, 0);25562556+ mutex_unlock(&fs_info->ro_block_group_mutex);25572557+ return ret;25582558+ }25592559+25562560 do {25572561 trans = btrfs_join_transaction(root);25582562 if (IS_ERR(trans))···39963974 * important and indicates a real bug if this happens.39973975 */39983976 if (WARN_ON(space_info->bytes_pinned > 0 ||39993999- space_info->bytes_reserved > 0 ||40003977 space_info->bytes_may_use > 0))40013978 btrfs_dump_space_info(info, space_info, 0, 0);39793979+39803980+ /*39813981+ * If there was a failure to cleanup a log tree, very likely due39823982+ * to an IO failure on a writeback attempt of one or more of its39833983+ * extent buffers, we could not do proper (and cheap) unaccounting39843984+ * of their reserved space, so don't warn on bytes_reserved > 0 in39853985+ * that case.39863986+ */39873987+ if (!(space_info->flags & BTRFS_BLOCK_GROUP_METADATA) 
||39883988+ !BTRFS_FS_LOG_CLEANUP_ERROR(info)) {39893989+ if (WARN_ON(space_info->bytes_reserved > 0))39903990+ btrfs_dump_space_info(info, space_info, 0, 0);39913991+ }39923992+40023993 WARN_ON(space_info->reclaim_size > 0);40033994 list_del(&space_info->list);40043995 btrfs_sysfs_remove_space_info(space_info);
+6
fs/btrfs/ctree.h
···145145 BTRFS_FS_STATE_DUMMY_FS_INFO,146146147147 BTRFS_FS_STATE_NO_CSUMS,148148+149149+ /* Indicates there was an error cleaning up a log tree. */150150+ BTRFS_FS_STATE_LOG_CLEANUP_ERROR,148151};149152150153#define BTRFS_BACKREF_REV_MAX 256···3596359335973594#define BTRFS_FS_ERROR(fs_info) (unlikely(test_bit(BTRFS_FS_STATE_ERROR, \35983595 &(fs_info)->fs_state)))35963596+#define BTRFS_FS_LOG_CLEANUP_ERROR(fs_info) \35973597+ (unlikely(test_bit(BTRFS_FS_STATE_LOG_CLEANUP_ERROR, \35983598+ &(fs_info)->fs_state)))3599359936003600__printf(5, 6)36013601__cold
+2-5
fs/btrfs/ioctl.c
···805805 goto fail;806806 }807807808808- spin_lock(&fs_info->trans_lock);809809- list_add(&pending_snapshot->list,810810- &trans->transaction->pending_snapshots);811811- spin_unlock(&fs_info->trans_lock);808808+ trans->pending_snapshot = pending_snapshot;812809813810 ret = btrfs_commit_transaction(trans);814811 if (ret)···33513354 struct block_device *bdev = NULL;33523355 fmode_t mode;33533356 int ret;33543354- bool cancel;33573357+ bool cancel = false;3355335833563359 if (!capable(CAP_SYS_ADMIN))33573360 return -EPERM;
+19-2
fs/btrfs/qgroup.c
···11851185 struct btrfs_trans_handle *trans = NULL;11861186 int ret = 0;1187118711881188+ /*11891189+ * We need to have subvol_sem write locked, to prevent races between11901190+ * concurrent tasks trying to disable quotas, because we will unlock11911191+ * and relock qgroup_ioctl_lock across BTRFS_FS_QUOTA_ENABLED changes.11921192+ */11931193+ lockdep_assert_held_write(&fs_info->subvol_sem);11941194+11881195 mutex_lock(&fs_info->qgroup_ioctl_lock);11891196 if (!fs_info->quota_root)11901197 goto out;11981198+11991199+ /*12001200+ * Request qgroup rescan worker to complete and wait for it. This wait12011201+ * must be done before transaction start for quota disable since it may12021202+ * deadlock with transaction by the qgroup rescan worker.12031203+ */12041204+ clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);12051205+ btrfs_qgroup_wait_for_completion(fs_info, false);11911206 mutex_unlock(&fs_info->qgroup_ioctl_lock);1192120711931208 /*···12201205 if (IS_ERR(trans)) {12211206 ret = PTR_ERR(trans);12221207 trans = NULL;12081208+ set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);12231209 goto out;12241210 }1225121112261212 if (!fs_info->quota_root)12271213 goto out;1228121412291229- clear_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);12301230- btrfs_qgroup_wait_for_completion(fs_info, false);12311215 spin_lock(&fs_info->qgroup_lock);12321216 quota_root = fs_info->quota_root;12331217 fs_info->quota_root = NULL;···33973383 btrfs_warn(fs_info,33983384 "qgroup rescan init failed, qgroup is not enabled");33993385 ret = -EINVAL;33863386+ } else if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) {33873387+ /* Quota disable is in progress */33883388+ ret = -EBUSY;34003389 }3401339034023391 if (ret) {
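The qgroup change above fixes a deadlock by ordering operations: disable clears BTRFS_FS_QUOTA_ENABLED and waits for the rescan worker *before* joining a transaction, and a new rescan now bails out with -EBUSY once the bit is clear. A minimal model of that handshake (everything here is a toy, not the btrfs API):

```c
#include <assert.h>

#define TOY_EBUSY 16	/* stands in for -EBUSY */

static int quota_enabled = 1;	/* stands in for BTRFS_FS_QUOTA_ENABLED */

/* Mirrors the rescan-init check the patch adds. */
static int toy_rescan_start(void)
{
	if (!quota_enabled)
		return -TOY_EBUSY;	/* quota disable is in progress */
	return 0;
}

/* First step of disable: clear the bit *before* blocking on the
 * worker or starting a transaction, so no new rescan can race in. */
static void toy_quota_disable_begin(void)
{
	quota_enabled = 0;
}
```

In the real code the bit is set again on the error path (when btrfs_start_transaction() fails), so a failed disable leaves quotas usable.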
+24
fs/btrfs/transaction.c
···20002000 btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);20012001}2002200220032003+/*20042004+ * Add a pending snapshot associated with the given transaction handle to the20052005+ * respective handle. This must be called after the transaction commit started20062006+ * and while holding fs_info->trans_lock.20072007+ * This serves to guarantee a caller of btrfs_commit_transaction() that it can20082008+ * safely free the pending snapshot pointer in case btrfs_commit_transaction()20092009+ * returns an error.20102010+ */20112011+static void add_pending_snapshot(struct btrfs_trans_handle *trans)20122012+{20132013+ struct btrfs_transaction *cur_trans = trans->transaction;20142014+20152015+ if (!trans->pending_snapshot)20162016+ return;20172017+20182018+ lockdep_assert_held(&trans->fs_info->trans_lock);20192019+ ASSERT(cur_trans->state >= TRANS_STATE_COMMIT_START);20202020+20212021+ list_add(&trans->pending_snapshot->list, &cur_trans->pending_snapshots);20222022+}20232023+20032024int btrfs_commit_transaction(struct btrfs_trans_handle *trans)20042025{20052026 struct btrfs_fs_info *fs_info = trans->fs_info;···20932072 spin_lock(&fs_info->trans_lock);20942073 if (cur_trans->state >= TRANS_STATE_COMMIT_START) {20952074 enum btrfs_trans_state want_state = TRANS_STATE_COMPLETED;20752075+20762076+ add_pending_snapshot(trans);2096207720972078 spin_unlock(&fs_info->trans_lock);20982079 refcount_inc(&cur_trans->use_count);···21862163 * COMMIT_DOING so make sure to wait for num_writers to == 1 again.21872164 */21882165 spin_lock(&fs_info->trans_lock);21662166+ add_pending_snapshot(trans);21892167 cur_trans->state = TRANS_STATE_COMMIT_DOING;21902168 spin_unlock(&fs_info->trans_lock);21912169 wait_event(cur_trans->writer_wait,
+2
fs/btrfs/transaction.h
···123123 struct btrfs_transaction *transaction;124124 struct btrfs_block_rsv *block_rsv;125125 struct btrfs_block_rsv *orig_rsv;126126+ /* Set by a task that wants to create a snapshot. */127127+ struct btrfs_pending_snapshot *pending_snapshot;126128 refcount_t use_count;127129 unsigned int type;128130 /*
+23
fs/btrfs/tree-log.c
···34143414 if (log->node) {34153415 ret = walk_log_tree(trans, log, &wc);34163416 if (ret) {34173417+ /*34183418+ * We weren't able to traverse the entire log tree, the34193419+ * typical scenario is getting an -EIO when reading an34203420+ * extent buffer of the tree, due to a previous writeback34213421+ * failure of it.34223422+ */34233423+ set_bit(BTRFS_FS_STATE_LOG_CLEANUP_ERROR,34243424+ &log->fs_info->fs_state);34253425+34263426+ /*34273427+ * Some extent buffers of the log tree may still be dirty34283428+ * and not yet written back to storage, because we may34293429+ * have updates to a log tree without syncing a log tree,34303430+ * such as during rename and link operations. So flush34313431+ * them out and wait for their writeback to complete, so34323432+ * that we properly cleanup their state and pages.34333433+ */34343434+ btrfs_write_marked_extents(log->fs_info,34353435+ &log->dirty_log_pages,34363436+ EXTENT_DIRTY | EXTENT_NEW);34373437+ btrfs_wait_tree_log_extents(log,34383438+ EXTENT_DIRTY | EXTENT_NEW);34393439+34173440 if (trans)34183441 btrfs_abort_transaction(trans, ret);34193442 else
+59
fs/cachefiles/io.c
···192192}193193194194/*195195+ * Query the occupancy of the cache in a region, returning where the next chunk196196+ * of data starts and how long it is.197197+ */198198+static int cachefiles_query_occupancy(struct netfs_cache_resources *cres,199199+ loff_t start, size_t len, size_t granularity,200200+ loff_t *_data_start, size_t *_data_len)201201+{202202+ struct cachefiles_object *object;203203+ struct file *file;204204+ loff_t off, off2;205205+206206+ *_data_start = -1;207207+ *_data_len = 0;208208+209209+ if (!fscache_wait_for_operation(cres, FSCACHE_WANT_READ))210210+ return -ENOBUFS;211211+212212+ object = cachefiles_cres_object(cres);213213+ file = cachefiles_cres_file(cres);214214+ granularity = max_t(size_t, object->volume->cache->bsize, granularity);215215+216216+ _enter("%pD,%li,%llx,%zx/%llx",217217+ file, file_inode(file)->i_ino, start, len,218218+ i_size_read(file_inode(file)));219219+220220+ off = cachefiles_inject_read_error();221221+ if (off == 0)222222+ off = vfs_llseek(file, start, SEEK_DATA);223223+ if (off == -ENXIO)224224+ return -ENODATA; /* Beyond EOF */225225+ if (off < 0 && off >= (loff_t)-MAX_ERRNO)226226+ return -ENOBUFS; /* Error. */227227+ if (round_up(off, granularity) >= start + len)228228+ return -ENODATA; /* No data in range */229229+230230+ off2 = cachefiles_inject_read_error();231231+ if (off2 == 0)232232+ off2 = vfs_llseek(file, off, SEEK_HOLE);233233+ if (off2 == -ENXIO)234234+ return -ENODATA; /* Beyond EOF */235235+ if (off2 < 0 && off2 >= (loff_t)-MAX_ERRNO)236236+ return -ENOBUFS; /* Error. */237237+238238+ /* Round away partial blocks */239239+ off = round_up(off, granularity);240240+ off2 = round_down(off2, granularity);241241+ if (off2 <= off)242242+ return -ENODATA;243243+244244+ *_data_start = off;245245+ if (off2 > start + len)246246+ *_data_len = len;247247+ else248248+ *_data_len = off2 - off;249249+ return 0;250250+}251251+252252+/*195253 * Handle completion of a write to the cache.196254 */197255static void cachefiles_write_complete(struct kiocb *iocb, long ret)···603545 .write = cachefiles_write,604546 .prepare_read = cachefiles_prepare_read,605547 .prepare_write = cachefiles_prepare_write,548548+ .query_occupancy = cachefiles_query_occupancy,606549};607550608551/*
+16-7
fs/cifs/connect.c
···162162 mutex_unlock(&server->srv_mutex);163163}164164165165-/**165165+/*166166 * Mark all sessions and tcons for reconnect.167167 *168168 * @server needs to be previously set to CifsNeedReconnect.···18311831 int i;1832183218331833 for (i = 1; i < chan_count; i++) {18341834- /*18351835- * note: for now, we're okay accessing ses->chans18361836- * without chan_lock. But when chans can go away, we'll18371837- * need to introduce ref counting to make sure that chan18381838- * is not freed from under us.18391839- */18341834+ spin_unlock(&ses->chan_lock);18401835 cifs_put_tcp_session(ses->chans[i].server, 0);18361836+ spin_lock(&ses->chan_lock);18411837 ses->chans[i].server = NULL;18421838 }18431839 }···19751979 ctx->password = NULL;19761980 goto out_key_put;19771981 }19821982+ }19831983+19841984+ ctx->workstation_name = kstrdup(ses->workstation_name, GFP_KERNEL);19851985+ if (!ctx->workstation_name) {19861986+ cifs_dbg(FYI, "Unable to allocate memory for workstation_name\n");19871987+ rc = -ENOMEM;19881988+ kfree(ctx->username);19891989+ ctx->username = NULL;19901990+ kfree_sensitive(ctx->password);19911991+ ctx->password = NULL;19921992+ kfree(ctx->domainname);19931993+ ctx->domainname = NULL;19941994+ goto out_key_put;19781995 }1979199619801997out_key_put:
+83-140
fs/cifs/file.c
···42694269 for (i = 0; i < rdata->nr_pages; i++) {42704270 struct page *page = rdata->pages[i];4271427142724272- lru_cache_add(page);42734273-42744272 if (rdata->result == 0 ||42754273 (rdata->result == -EAGAIN && got_bytes)) {42764274 flush_dcache_page(page);···42764278 } else42774279 SetPageError(page);4278428042794279- unlock_page(page);42804280-42814281 if (rdata->result == 0 ||42824282 (rdata->result == -EAGAIN && got_bytes))42834283 cifs_readpage_to_fscache(rdata->mapping->host, page);42844284+42854285+ unlock_page(page);4284428642854287 got_bytes -= min_t(unsigned int, PAGE_SIZE, got_bytes);42864288···43384340 * fill them until the writes are flushed.43394341 */43404342 zero_user(page, 0, PAGE_SIZE);43414341- lru_cache_add(page);43424343 flush_dcache_page(page);43434344 SetPageUptodate(page);43444345 unlock_page(page);···43474350 continue;43484351 } else {43494352 /* no need to hold page hostage */43504350- lru_cache_add(page);43514353 unlock_page(page);43524354 put_page(page);43534355 rdata->pages[i] = NULL;···43894393 return readpages_fill_pages(server, rdata, iter, iter->count);43904394}4391439543924392-static int43934393-readpages_get_pages(struct address_space *mapping, struct list_head *page_list,43944394- unsigned int rsize, struct list_head *tmplist,43954395- unsigned int *nr_pages, loff_t *offset, unsigned int *bytes)43964396-{43974397- struct page *page, *tpage;43984398- unsigned int expected_index;43994399- int rc;44004400- gfp_t gfp = readahead_gfp_mask(mapping);44014401-44024402- INIT_LIST_HEAD(tmplist);44034403-44044404- page = lru_to_page(page_list);44054405-44064406- /*44074407- * Lock the page and put it in the cache. Since no one else44084408- * should have access to this page, we're safe to simply set44094409- * PG_locked without checking it first.44104410- */44114411- __SetPageLocked(page);44124412- rc = add_to_page_cache_locked(page, mapping,44134413- page->index, gfp);44144414-44154415- /* give up if we can't stick it in the cache */44164416- if (rc) {44174417- __ClearPageLocked(page);44184418- return rc;44194419- }44204420-44214421- /* move first page to the tmplist */44224422- *offset = (loff_t)page->index << PAGE_SHIFT;44234423- *bytes = PAGE_SIZE;44244424- *nr_pages = 1;44254425- list_move_tail(&page->lru, tmplist);44264426-44274427- /* now try and add more pages onto the request */44284428- expected_index = page->index + 1;44294429- list_for_each_entry_safe_reverse(page, tpage, page_list, lru) {44304430- /* discontinuity ? */44314431- if (page->index != expected_index)44324432- break;44334433-44344434- /* would this page push the read over the rsize? */44354435- if (*bytes + PAGE_SIZE > rsize)44364436- break;44374437-44384438- __SetPageLocked(page);44394439- rc = add_to_page_cache_locked(page, mapping, page->index, gfp);44404440- if (rc) {44414441- __ClearPageLocked(page);44424442- break;44434443- }44444444- list_move_tail(&page->lru, tmplist);44454445- (*bytes) += PAGE_SIZE;44464446- expected_index++;44474447- (*nr_pages)++;44484448- }44494449- return rc;44504450-}44514451-44524452-static int cifs_readpages(struct file *file, struct address_space *mapping,44534453- struct list_head *page_list, unsigned num_pages)43964396+static void cifs_readahead(struct readahead_control *ractl)44544397{44554398 int rc;44564456- int err = 0;44574457- struct list_head tmplist;44584458- struct cifsFileInfo *open_file = file->private_data;44594459- struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(file);43994399+ struct cifsFileInfo *open_file = ractl->file->private_data;44004400+ struct cifs_sb_info *cifs_sb = CIFS_FILE_SB(ractl->file);44604401 struct TCP_Server_Info *server;44614402 pid_t pid;44624462- unsigned int xid;44034403+ unsigned int xid, nr_pages, last_batch_size = 0, cache_nr_pages = 0;44044404+ pgoff_t next_cached = ULONG_MAX;44054405+ bool caching = fscache_cookie_enabled(cifs_inode_cookie(ractl->mapping->host)) &&44064406+ cifs_inode_cookie(ractl->mapping->host)->cache_priv;44074407+ bool check_cache = caching;4463440844644409 xid = get_xid();44654465- /*44664466- * Reads as many pages as possible from fscache. Returns -ENOBUFS44674467- * immediately if the cookie is negative44684468- *44694469- * After this point, every page in the list might have PG_fscache set,44704470- * so we will need to clean that up off of every page we don't use.44714471- */44724472- rc = cifs_readpages_from_fscache(mapping->host, mapping, page_list,44734473- &num_pages);44744474- if (rc == 0) {44754475- free_xid(xid);44764476- return rc;44774477- }4478441044794411 if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)44804412 pid = open_file->pid;···44134489 server = cifs_pick_channel(tlink_tcon(open_file->tlink)->ses);4414449044154491 cifs_dbg(FYI, "%s: file=%p mapping=%p num_pages=%u\n",44164416- __func__, file, mapping, num_pages);44924492+ __func__, ractl->file, ractl->mapping, readahead_count(ractl));4417449344184494 /*44194419- * Start with the page at end of list and move it to private44204420- * list. Do the same with any following pages until we hit44214421- * the rsize limit, hit an index discontinuity, or run out of44224422- * pages. Issue the async read and then start the loop again44234423- * until the list is empty.44244424- *44254425- * Note that list order is important. The page_list is in44264426- * the order of declining indexes. When we put the pages in44274427- * the rdata->pages, then we want them in increasing order.44954495+ * Chop the readahead request up into rsize-sized read requests.44284496 */44294429- while (!list_empty(page_list) && !err) {44304430- unsigned int i, nr_pages, bytes, rsize;44314431- loff_t offset;44324432- struct page *page, *tpage;44974497+ while ((nr_pages = readahead_count(ractl) - last_batch_size)) {44984498+ unsigned int i, got, rsize;44994499+ struct page *page;44334500 struct cifs_readdata *rdata;44344501 struct cifs_credits credits_on_stack;44354502 struct cifs_credits *credits = &credits_on_stack;45034503+ pgoff_t index = readahead_index(ractl) + last_batch_size;45044504+45054505+ /*45064506+ * Find out if we have anything cached in the range of45074507+ * interest, and if so, where the next chunk of cached data is.45084508+ */45094509+ if (caching) {45104510+ if (check_cache) {45114511+ rc = cifs_fscache_query_occupancy(45124512+ ractl->mapping->host, index, nr_pages,45134513+ &next_cached, &cache_nr_pages);45144514+ if (rc < 0)45154515+ caching = false;45164516+ check_cache = false;45174517+ }45184518+45194519+ if (index == next_cached) {45204520+ /*45214521+ * TODO: Send a whole batch of pages to be read45224522+ * by the cache.45234523+ */45244524+ page = readahead_page(ractl);45254525+ last_batch_size = 1 << thp_order(page);45264526+ if (cifs_readpage_from_fscache(ractl->mapping->host,45274527+ page) < 0) {45284528+ /*45294529+ * TODO: Deal with cache read failure45304530+ * here, but for the moment, delegate45314531+ * that to readpage.45324532+ */45334533+ caching = false;45344534+ }45354535+ unlock_page(page);45364536+ next_cached++;45374537+ cache_nr_pages--;45384538+ if (cache_nr_pages == 0)45394539+ check_cache = true;45404540+ continue;45414541+ }45424542+ }4436454344374544 if (open_file->invalidHandle) {44384545 rc = cifs_reopen_file(open_file, true);44394439- if (rc == -EAGAIN)44404440- continue;44414441- else if (rc)45464546+ if (rc) {45474547+ if (rc == -EAGAIN)45484548+ continue;44424549 break;45504550+ }44434551 }4444455244454553 rc = server->ops->wait_mtu_credits(server, cifs_sb->ctx->rsize,44464554 &rsize, credits);44474555 if (rc)44484556 break;45574557+ nr_pages = min_t(size_t, rsize / PAGE_SIZE, readahead_count(ractl));45584558+ nr_pages = min_t(size_t, nr_pages, next_cached - index);4449455944504560 /*44514561 * Give up immediately if rsize is too small to read an entire···44874529 * reach this point however since we set ra_pages to 0 when the44884530 * rsize is smaller than a cache page.44894531 */44904490- if (unlikely(rsize < PAGE_SIZE)) {44914491- add_credits_and_wake_if(server, credits, 0);44924492- free_xid(xid);44934493- return 0;44944494- }44954495-44964496- nr_pages = 0;44974497- err = readpages_get_pages(mapping, page_list, rsize, &tmplist,44984498- &nr_pages, &offset, &bytes);44994499- if (!nr_pages) {45324532+ if (unlikely(!nr_pages)) {45004533 add_credits_and_wake_if(server, credits, 0);45014534 break;45024535 }···44954546 rdata = cifs_readdata_alloc(nr_pages, cifs_readv_complete);44964547 if (!rdata) {44974548 /* best to give up if we're out of mem */44984498- list_for_each_entry_safe(page, tpage, &tmplist, lru) {44994499- list_del(&page->lru);45004500- lru_cache_add(page);45014501- unlock_page(page);45024502- put_page(page);45034503- }45044504- rc = -ENOMEM;45054549 add_credits_and_wake_if(server, credits, 0);45064550 break;45074551 }4508455245094509- rdata->cfile = cifsFileInfo_get(open_file);45104510- rdata->server = server;45114511- rdata->mapping = mapping;45124512- rdata->offset = offset;45134513- rdata->bytes = bytes;45144514- rdata->pid = pid;45154515- rdata->pagesz = PAGE_SIZE;45164516- rdata->tailsz = PAGE_SIZE;45174517- rdata->read_into_pages = cifs_readpages_read_into_pages;45184518- rdata->copy_into_pages = cifs_readpages_copy_into_pages;45194519- rdata->credits = credits_on_stack;45204520-45214521- list_for_each_entry_safe(page, tpage, &tmplist, lru) {45224522- list_del(&page->lru);45234523- rdata->pages[rdata->nr_pages++] = page;45534553+ got = __readahead_batch(ractl, rdata->pages, nr_pages);45544554+ if (got != nr_pages) {45554555+ pr_warn("__readahead_batch() returned %u/%u\n",45564556+ got, nr_pages);45574557+ nr_pages = got;45244558 }4525455945264526- rc = adjust_credits(server, &rdata->credits, rdata->bytes);45604560+ rdata->nr_pages = nr_pages;45614561+ rdata->bytes = readahead_batch_length(ractl);45624562+ rdata->cfile = cifsFileInfo_get(open_file);45634563+ rdata->server = server;45644564+ rdata->mapping = ractl->mapping;45654565+ rdata->offset = readahead_pos(ractl);45664566+ rdata->pid = pid;45674567+ rdata->pagesz = PAGE_SIZE;45684568+ rdata->tailsz = PAGE_SIZE;45694569+ rdata->read_into_pages = cifs_readpages_read_into_pages;45704570+ rdata->copy_into_pages = cifs_readpages_copy_into_pages;45714571+ rdata->credits = credits_on_stack;4527457245734573+ rc = adjust_credits(server, &rdata->credits, rdata->bytes);45284574 if (!rc) {45294575 if (rdata->cfile->invalidHandle)45304576 rc = -EAGAIN;···45314587 add_credits_and_wake_if(server, &rdata->credits, 0);45324588 for (i = 0; i < rdata->nr_pages; i++) {45334589 page = rdata->pages[i];45344534- lru_cache_add(page);45354590 unlock_page(page);45364591 put_page(page);45374592 }···45404597 }4541459845424599 kref_put(&rdata->refcount, cifs_readdata_release);46004600+ last_batch_size = nr_pages;45434601 }4544460245454603 free_xid(xid);45464546- return rc;45474604}4548460545494606/*···48674924 * In the non-cached mode (mount with cache=none), we shunt off direct read and write requests48684925 * so this method should never be called.48694926 *48704870- * Direct IO is not yet supported in the cached mode. 49274927+ * Direct IO is not yet supported in the cached mode.48714928 */48724929static ssize_t48734930cifs_direct_io(struct kiocb *iocb, struct iov_iter *iter)···4949500649505007const struct address_space_operations cifs_addr_ops = {49515008 .readpage = cifs_readpage,49524952- .readpages = cifs_readpages,50095009+ .readahead = cifs_readahead,49535010 .writepage = cifs_writepage,49545011 .writepages = cifs_writepages,49555012 .write_begin = cifs_write_begin,
+111-21
fs/cifs/fscache.c
···134134 }135135}136136137137+static inline void fscache_end_operation(struct netfs_cache_resources *cres)138138+{139139+ const struct netfs_cache_ops *ops = fscache_operation_valid(cres);140140+141141+ if (ops)142142+ ops->end_operation(cres);143143+}144144+145145+/*146146+ * Fallback page reading interface.147147+ */148148+static int fscache_fallback_read_page(struct inode *inode, struct page *page)149149+{150150+ struct netfs_cache_resources cres;151151+ struct fscache_cookie *cookie = cifs_inode_cookie(inode);152152+ struct iov_iter iter;153153+ struct bio_vec bvec[1];154154+ int ret;155155+156156+ memset(&cres, 0, sizeof(cres));157157+ bvec[0].bv_page = page;158158+ bvec[0].bv_offset = 0;159159+ bvec[0].bv_len = PAGE_SIZE;160160+ iov_iter_bvec(&iter, READ, bvec, ARRAY_SIZE(bvec), PAGE_SIZE);161161+162162+ ret = fscache_begin_read_operation(&cres, cookie);163163+ if (ret < 0)164164+ return ret;165165+166166+ ret = fscache_read(&cres, page_offset(page), &iter, NETFS_READ_HOLE_FAIL,167167+ NULL, NULL);168168+ fscache_end_operation(&cres);169169+ return ret;170170+}171171+172172+/*173173+ * Fallback page writing interface.174174+ */175175+static int fscache_fallback_write_page(struct inode *inode, struct page *page,176176+ bool no_space_allocated_yet)177177+{178178+ struct netfs_cache_resources cres;179179+ struct fscache_cookie *cookie = cifs_inode_cookie(inode);180180+ struct iov_iter iter;181181+ struct bio_vec bvec[1];182182+ loff_t start = page_offset(page);183183+ size_t len = PAGE_SIZE;184184+ int ret;185185+186186+ memset(&cres, 0, sizeof(cres));187187+ bvec[0].bv_page = page;188188+ bvec[0].bv_offset = 0;189189+ bvec[0].bv_len = PAGE_SIZE;190190+ iov_iter_bvec(&iter, WRITE, bvec, ARRAY_SIZE(bvec), PAGE_SIZE);191191+192192+ ret = fscache_begin_write_operation(&cres, cookie);193193+ if (ret < 0)194194+ return ret;195195+196196+ ret = cres.ops->prepare_write(&cres, &start, &len, i_size_read(inode),197197+ no_space_allocated_yet);198198+ if (ret == 0)199199+ ret = fscache_write(&cres, page_offset(page), &iter, NULL, NULL);200200+ fscache_end_operation(&cres);201201+ return ret;202202+}203203+137204/*138205 * Retrieve a page from FS-Cache139206 */140207int __cifs_readpage_from_fscache(struct inode *inode, struct page *page)141208{142142- cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p\n",143143- __func__, CIFS_I(inode)->fscache, page, inode);144144- return -ENOBUFS; // Needs conversion to using netfslib145145-}209209+ int ret;146210147147-/*148148- * Retrieve a set of pages from FS-Cache149149- */150150-int __cifs_readpages_from_fscache(struct inode *inode,151151- struct address_space *mapping,152152- struct list_head *pages,153153- unsigned *nr_pages)154154-{155155- cifs_dbg(FYI, "%s: (0x%p/%u/0x%p)\n",156156- __func__, CIFS_I(inode)->fscache, *nr_pages, inode);157157- return -ENOBUFS; // Needs conversion to using netfslib211211+ cifs_dbg(FYI, "%s: (fsc:%p, p:%p, i:0x%p\n",212212+ __func__, cifs_inode_cookie(inode), page, inode);213213+214214+ ret = fscache_fallback_read_page(inode, page);215215+ if (ret < 0)216216+ return ret;217217+218218+ /* Read completed synchronously */219219+ SetPageUptodate(page);220220+ return 0;158221}159222160223void __cifs_readpage_to_fscache(struct inode *inode, struct page *page)161224{162162- struct cifsInodeInfo *cifsi = CIFS_I(inode);163163-164164- WARN_ON(!cifsi->fscache);165165-166225 cifs_dbg(FYI, "%s: (fsc: %p, p: %p, i: %p)\n",167167- __func__, cifsi->fscache, page, inode);226226+ __func__, cifs_inode_cookie(inode), page, inode);168227169169- // Needs conversion to using netfslib228228+ fscache_fallback_write_page(inode, page, true);229229+}230230+231231+/*232232+ * Query the cache occupancy.233233+ */234234+int __cifs_fscache_query_occupancy(struct inode *inode,235235+ pgoff_t first, unsigned int nr_pages,236236+ pgoff_t *_data_first,237237+ unsigned int *_data_nr_pages)238238+{239239+ struct netfs_cache_resources cres;240240+ struct fscache_cookie *cookie = cifs_inode_cookie(inode);241241+ loff_t start, data_start;242242+ size_t len, data_len;243243+ int ret;244244+245245+ ret = fscache_begin_read_operation(&cres, cookie);246246+ if (ret < 0)247247+ return ret;248248+249249+ start = first * PAGE_SIZE;250250+ len = nr_pages * PAGE_SIZE;251251+ ret = cres.ops->query_occupancy(&cres, start, len, PAGE_SIZE,252252+ &data_start, &data_len);253253+ if (ret == 0) {254254+ *_data_first = data_start / PAGE_SIZE;255255+ *_data_nr_pages = data_len / PAGE_SIZE;256256+ }257257+258258+ fscache_end_operation(&cres);259259+ return ret;170260}
+50-31
fs/cifs/fscache.h
···99#ifndef _CIFS_FSCACHE_H1010#define _CIFS_FSCACHE_H11111212+#include <linux/swap.h>1213#include <linux/fscache.h>13141415#include "cifsglob.h"···5958}605961606262-extern int cifs_fscache_release_page(struct page *page, gfp_t gfp);6363-extern int __cifs_readpage_from_fscache(struct inode *, struct page *);6464-extern int __cifs_readpages_from_fscache(struct inode *,6565- struct address_space *,6666- struct list_head *,6767- unsigned *);6868-extern void __cifs_readpage_to_fscache(struct inode *, struct page *);6969-7061static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode)7162{7263 return CIFS_I(inode)->fscache;···7380 i_size_read(inode), flags);7481}75828383+extern int __cifs_fscache_query_occupancy(struct inode *inode,8484+ pgoff_t first, unsigned int nr_pages,8585+ pgoff_t *_data_first,8686+ unsigned int *_data_nr_pages);8787+8888+static inline int cifs_fscache_query_occupancy(struct inode *inode,8989+ pgoff_t first, unsigned int nr_pages,9090+ pgoff_t *_data_first,9191+ unsigned int *_data_nr_pages)9292+{9393+ if (!cifs_inode_cookie(inode))9494+ return -ENOBUFS;9595+ return __cifs_fscache_query_occupancy(inode, first, nr_pages,9696+ _data_first, _data_nr_pages);9797+}9898+9999+extern int __cifs_readpage_from_fscache(struct inode *pinode, struct page *ppage);100100+extern void __cifs_readpage_to_fscache(struct inode *pinode, struct page *ppage);101101+102102+76103static inline int cifs_readpage_from_fscache(struct inode *inode,77104 struct page *page)78105{7979- if (CIFS_I(inode)->fscache)106106+ if (cifs_inode_cookie(inode))80107 return __cifs_readpage_from_fscache(inode, page);8181-8282- return -ENOBUFS;8383-}8484-8585-static inline int cifs_readpages_from_fscache(struct inode *inode,8686- struct address_space *mapping,8787- struct list_head *pages,8888- unsigned *nr_pages)8989-{9090- if (CIFS_I(inode)->fscache)9191- return __cifs_readpages_from_fscache(inode, mapping, pages,9292- nr_pages);93108 return -ENOBUFS;94109}9511096111static inline void cifs_readpage_to_fscache(struct inode *inode,97112 struct page *page)98113{9999- if (PageFsCache(page))114114+ if (cifs_inode_cookie(inode))100115 __cifs_readpage_to_fscache(inode, page);116116+}117117+118118+static inline int cifs_fscache_release_page(struct page *page, gfp_t gfp)119119+{120120+ if (PageFsCache(page)) {121121+ if (current_is_kswapd() || !(gfp & __GFP_FS))122122+ return false;123123+ wait_on_page_fscache(page);124124+ fscache_note_page_release(cifs_inode_cookie(page->mapping->host));125125+ }126126+ return true;101127}102128103129#else /* CONFIG_CIFS_FSCACHE */···135123static inline struct fscache_cookie *cifs_inode_cookie(struct inode *inode) { return NULL; }136124static inline void cifs_invalidate_cache(struct inode *inode, unsigned int flags) {}137125126126+static inline int cifs_fscache_query_occupancy(struct inode *inode,127127+ pgoff_t first, unsigned int nr_pages,128128+ pgoff_t *_data_first,129129+ unsigned int *_data_nr_pages)130130+{131131+ *_data_first = ULONG_MAX;132132+ *_data_nr_pages = 0;133133+ return -ENOBUFS;134134+}135135+138136static inline int139137cifs_readpage_from_fscache(struct inode *inode, struct page *page)140138{141139 return -ENOBUFS;142140}143141144144-static inline int cifs_readpages_from_fscache(struct inode *inode,145145- struct address_space *mapping,146146- struct list_head *pages,147147- unsigned *nr_pages)148148-{149149- return -ENOBUFS;150150-}142142+static inline143143+void cifs_readpage_to_fscache(struct inode *inode, struct page *page) {}151144152152-static inline void cifs_readpage_to_fscache(struct inode *inode,153153- struct page *page) {}145145+static inline int cifs_fscache_release_page(struct page *page, gfp_t gfp)146146+{147147+ return true; /* May release page */148148+}154149155150#endif /* CONFIG_CIFS_FSCACHE */156151
+56-57
fs/erofs/zdata.c
···810810 return false;811811}812812813813-static void z_erofs_decompressqueue_work(struct work_struct *work);814814-static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,815815- bool sync, int bios)816816-{817817- struct erofs_sb_info *const sbi = EROFS_SB(io->sb);818818-819819- /* wake up the caller thread for sync decompression */820820- if (sync) {821821- unsigned long flags;822822-823823- spin_lock_irqsave(&io->u.wait.lock, flags);824824- if (!atomic_add_return(bios, &io->pending_bios))825825- wake_up_locked(&io->u.wait);826826- spin_unlock_irqrestore(&io->u.wait.lock, flags);827827- return;828828- }829829-830830- if (atomic_add_return(bios, &io->pending_bios))831831- return;832832- /* Use workqueue and sync decompression for atomic contexts only */833833- if (in_atomic() || irqs_disabled()) {834834- queue_work(z_erofs_workqueue, &io->u.work);835835- /* enable sync decompression for readahead */836836- if (sbi->opt.sync_decompress == EROFS_SYNC_DECOMPRESS_AUTO)837837- sbi->opt.sync_decompress = EROFS_SYNC_DECOMPRESS_FORCE_ON;838838- return;839839- }840840- z_erofs_decompressqueue_work(&io->u.work);841841-}842842-843813static bool z_erofs_page_is_invalidated(struct page *page)844814{845815 return !page->mapping && !z_erofs_is_shortlived_page(page);846846-}847847-848848-static void z_erofs_decompressqueue_endio(struct bio *bio)849849-{850850- tagptr1_t t = tagptr_init(tagptr1_t, bio->bi_private);851851- struct z_erofs_decompressqueue *q = tagptr_unfold_ptr(t);852852- blk_status_t err = bio->bi_status;853853- struct bio_vec *bvec;854854- struct bvec_iter_all iter_all;855855-856856- bio_for_each_segment_all(bvec, bio, iter_all) {857857- struct page *page = bvec->bv_page;858858-859859- DBG_BUGON(PageUptodate(page));860860- DBG_BUGON(z_erofs_page_is_invalidated(page));861861-862862- if (err)863863- SetPageError(page);864864-865865- if (erofs_page_is_managed(EROFS_SB(q->sb), page)) {866866- if (!err)867867- SetPageUptodate(page);868868- unlock_page(page);869869- }870870- }871871- z_erofs_decompress_kickoff(q, tagptr_unfold_tags(t), -1);872872- bio_put(bio);873816}874817875818static int z_erofs_decompress_pcluster(struct super_block *sb,···10661123 kvfree(bgq);10671124}1068112511261126+static void z_erofs_decompress_kickoff(struct z_erofs_decompressqueue *io,11271127+ bool sync, int bios)11281128+{11291129+ struct erofs_sb_info *const sbi = EROFS_SB(io->sb);11301130+11311131+ /* wake up the caller thread for sync decompression */11321132+ if (sync) {11331133+ unsigned long flags;11341134+11351135+ spin_lock_irqsave(&io->u.wait.lock, flags);11361136+ if (!atomic_add_return(bios, &io->pending_bios))11371137+ wake_up_locked(&io->u.wait);11381138+ spin_unlock_irqrestore(&io->u.wait.lock, flags);11391139+ return;11401140+ }11411141+11421142+ if (atomic_add_return(bios, &io->pending_bios))11431143+ return;11441144+ /* Use workqueue and sync decompression for atomic contexts only */11451145+ if (in_atomic() || irqs_disabled()) {11461146+ queue_work(z_erofs_workqueue, &io->u.work);11471147+ /* enable sync decompression for readahead */11481148+ if (sbi->opt.sync_decompress == EROFS_SYNC_DECOMPRESS_AUTO)11491149+ sbi->opt.sync_decompress = EROFS_SYNC_DECOMPRESS_FORCE_ON;11501150+ return;11511151+ }11521152+ z_erofs_decompressqueue_work(&io->u.work);11531153+}11541154+10691155static struct page *pickup_page_for_submission(struct z_erofs_pcluster *pcl,10701156 unsigned int nr,10711157 struct page **pagepool,···12701298 WRITE_ONCE(*bypass_qtail, &pcl->next);1271129912721300 qtail[JQ_BYPASS] = &pcl->next;13011301+}13021302+13031303+static void z_erofs_decompressqueue_endio(struct bio *bio)13041304+{13051305+ tagptr1_t t = tagptr_init(tagptr1_t, bio->bi_private);13061306+ struct z_erofs_decompressqueue *q = tagptr_unfold_ptr(t);13071307+ blk_status_t err = bio->bi_status;13081308+ struct bio_vec *bvec;13091309+ struct bvec_iter_all iter_all;13101310+13111311+ bio_for_each_segment_all(bvec, bio, iter_all) {13121312+ struct page *page = bvec->bv_page;13131313+13141314+ DBG_BUGON(PageUptodate(page));13151315+ DBG_BUGON(z_erofs_page_is_invalidated(page));13161316+13171317+ if (err)13181318+ SetPageError(page);13191319+13201320+ if (erofs_page_is_managed(EROFS_SB(q->sb), page)) {13211321+ if (!err)13221322+ SetPageUptodate(page);13231323+ unlock_page(page);13241324+ }13251325+ }13261326+ z_erofs_decompress_kickoff(q, tagptr_unfold_tags(t), -1);13271327+ bio_put(bio);12731328}1274132912751330static void z_erofs_submit_queue(struct super_block *sb,
+7
fs/erofs/zmap.c
···630630 if (endoff >= m.clusterofs) {631631 m.headtype = m.type;632632 map->m_la = (m.lcn << lclusterbits) | m.clusterofs;633633+ /*634634+ * For ztailpacking files, in order to inline data more635635+ * effectively, special EOF lclusters are now supported636636+ * which can have three parts at most.637637+ */638638+ if (ztailpacking && end > inode->i_size)639639+ end = inode->i_size;633640 break;634641 }635642 /* m.lcn should be >= 1 if endoff < m.clusterofs */
+6-6
fs/ext4/namei.c
···13171317 dx_set_count(entries, count + 1);13181318}1319131913201320-#ifdef CONFIG_UNICODE13201320+#if IS_ENABLED(CONFIG_UNICODE)13211321/*13221322 * Test whether a case-insensitive directory entry matches the filename13231323 * being searched for. If quick is set, assume the name being looked up···14281428 f.crypto_buf = fname->crypto_buf;14291429#endif1430143014311431-#ifdef CONFIG_UNICODE14311431+#if IS_ENABLED(CONFIG_UNICODE)14321432 if (parent->i_sb->s_encoding && IS_CASEFOLDED(parent) &&14331433 (!IS_ENCRYPTED(parent) || fscrypt_has_encryption_key(parent))) {14341434 if (fname->cf_name.name) {···18001800 }18011801 }1802180218031803-#ifdef CONFIG_UNICODE18031803+#if IS_ENABLED(CONFIG_UNICODE)18041804 if (!inode && IS_CASEFOLDED(dir)) {18051805 /* Eventually we want to call d_add_ci(dentry, NULL)18061806 * for negative dentries in the encoding case as···23082308 if (fscrypt_is_nokey_name(dentry))23092309 return -ENOKEY;2310231023112311-#ifdef CONFIG_UNICODE23112311+#if IS_ENABLED(CONFIG_UNICODE)23122312 if (sb_has_strict_encoding(sb) && IS_CASEFOLDED(dir) &&23132313 sb->s_encoding && utf8_validate(sb->s_encoding, &dentry->d_name))23142314 return -EINVAL;···31263126 ext4_fc_track_unlink(handle, dentry);31273127 retval = ext4_mark_inode_dirty(handle, dir);3128312831293129-#ifdef CONFIG_UNICODE31293129+#if IS_ENABLED(CONFIG_UNICODE)31303130 /* VFS negative dentries are incompatible with Encoding and31313131 * Case-insensitiveness. Eventually we'll want avoid31323132 * invalidating the dentries here, alongside with returning the···32313231 retval = __ext4_unlink(handle, dir, &dentry->d_name, d_inode(dentry));32323232 if (!retval)32333233 ext4_fc_track_unlink(handle, dentry);32343234-#ifdef CONFIG_UNICODE32343234+#if IS_ENABLED(CONFIG_UNICODE)32353235 /* VFS negative dentries are incompatible with Encoding and32363236 * Case-insensitiveness. Eventually we'll want avoid32373237 * invalidating the dentries here, alongside with returning the
+1-1
fs/f2fs/f2fs.h
···488488 */489489 struct fscrypt_str crypto_buf;490490#endif491491-#ifdef CONFIG_UNICODE491491+#if IS_ENABLED(CONFIG_UNICODE)492492 /*493493 * For casefolded directories: the casefolded name, but it's left NULL494494 * if the original name is not valid Unicode, if the directory is both
+1-1
fs/f2fs/hash.c
···105105 return;106106 }107107108108-#ifdef CONFIG_UNICODE108108+#if IS_ENABLED(CONFIG_UNICODE)109109 if (IS_CASEFOLDED(dir)) {110110 /*111111 * If the casefolded name is provided, hash it instead of the
+2-2
fs/f2fs/namei.c
···561561 goto out_iput;562562 }563563out_splice:564564-#ifdef CONFIG_UNICODE564564+#if IS_ENABLED(CONFIG_UNICODE)565565 if (!inode && IS_CASEFOLDED(dir)) {566566 /* Eventually we want to call d_add_ci(dentry, NULL)567567 * for negative dentries in the encoding case as···622622 goto fail;623623 }624624 f2fs_delete_entry(de, page, dir, inode);625625-#ifdef CONFIG_UNICODE625625+#if IS_ENABLED(CONFIG_UNICODE)626626 /* VFS negative dentries are incompatible with Encoding and627627 * Case-insensitiveness. Eventually we'll want avoid628628 * invalidating the dentries here, alongside with returning the
+2-2
fs/f2fs/recovery.c
···46464747static struct kmem_cache *fsync_entry_slab;48484949-#ifdef CONFIG_UNICODE4949+#if IS_ENABLED(CONFIG_UNICODE)5050extern struct kmem_cache *f2fs_cf_name_slab;5151#endif5252···149149 if (err)150150 return err;151151 f2fs_hash_filename(dir, fname);152152-#ifdef CONFIG_UNICODE152152+#if IS_ENABLED(CONFIG_UNICODE)153153 /* Case-sensitive match is fine for recovery */154154 kmem_cache_free(f2fs_cf_name_slab, fname->cf_name.name);155155 fname->cf_name.name = NULL;
+5-5
fs/f2fs/super.c
···257257 va_end(args);258258}259259260260-#ifdef CONFIG_UNICODE260260+#if IS_ENABLED(CONFIG_UNICODE)261261static const struct f2fs_sb_encodings {262262 __u16 magic;263263 char *name;···12591259 return -EINVAL;12601260 }12611261#endif12621262-#ifndef CONFIG_UNICODE12621262+#if !IS_ENABLED(CONFIG_UNICODE)12631263 if (f2fs_sb_has_casefold(sbi)) {12641264 f2fs_err(sbi,12651265 "Filesystem with casefold feature cannot be mounted without CONFIG_UNICODE");···16191619 f2fs_destroy_iostat(sbi);16201620 for (i = 0; i < NR_PAGE_TYPE; i++)16211621 kvfree(sbi->write_io[i]);16221622-#ifdef CONFIG_UNICODE16221622+#if IS_ENABLED(CONFIG_UNICODE)16231623 utf8_unload(sb->s_encoding);16241624#endif16251625 kfree(sbi);···3903390339043904static int f2fs_setup_casefold(struct f2fs_sb_info *sbi)39053905{39063906-#ifdef CONFIG_UNICODE39063906+#if IS_ENABLED(CONFIG_UNICODE)39073907 if (f2fs_sb_has_casefold(sbi) && !sbi->sb->s_encoding) {39083908 const struct f2fs_sb_encodings *encoding_info;39093909 struct unicode_map *encoding;···44584458 for (i = 0; i < NR_PAGE_TYPE; i++)44594459 kvfree(sbi->write_io[i]);4460446044614461-#ifdef CONFIG_UNICODE44614461+#if IS_ENABLED(CONFIG_UNICODE)44624462 utf8_unload(sb->s_encoding);44634463 sb->s_encoding = NULL;44644464#endif
···21212222#include "../internal.h"23232424+#define IOEND_BATCH_SIZE 40962525+2426/*2527 * Structure allocated for each folio when block size < folio size2628 * to track sub-folio uptodate status and I/O completions.···10411039 * state, release holds on bios, and finally free up memory. Do not use the10421040 * ioend after this.10431041 */10441044-static void10421042+static u3210451043iomap_finish_ioend(struct iomap_ioend *ioend, int error)10461044{10471045 struct inode *inode = ioend->io_inode;···10501048 u64 start = bio->bi_iter.bi_sector;10511049 loff_t offset = ioend->io_offset;10521050 bool quiet = bio_flagged(bio, BIO_QUIET);10511051+ u32 folio_count = 0;1053105210541053 for (bio = &ioend->io_inline_bio; bio; bio = next) {10551054 struct folio_iter fi;···10651062 next = bio->bi_private;1066106310671064 /* walk all folios in bio, ending page IO on them */10681068- bio_for_each_folio_all(fi, bio)10651065+ bio_for_each_folio_all(fi, bio) {10691066 iomap_finish_folio_write(inode, fi.folio, fi.length,10701067 error);10681068+ folio_count++;10691069+ }10711070 bio_put(bio);10721071 }10731072 /* The ioend has been freed by bio_put() */···10791074"%s: writeback error on inode %lu, offset %lld, sector %llu",10801075 inode->i_sb->s_id, inode->i_ino, offset, start);10811076 }10771077+ return folio_count;10821078}1083107910801080+/*10811081+ * Ioend completion routine for merged bios. This can only be called from task10821082+ * contexts as merged ioends can be of unbound length. Hence we have to break up10831083+ * the writeback completions into manageable chunks to avoid long scheduler10841084+ * holdoffs. 
We aim to keep scheduler holdoffs down below 10ms so that we get10851085+ * good batch processing throughput without creating adverse scheduler latency10861086+ * conditions.10871087+ */10841088void10851089iomap_finish_ioends(struct iomap_ioend *ioend, int error)10861090{10871091 struct list_head tmp;10921092+ u32 completions;10931093+10941094+ might_sleep();1088109510891096 list_replace_init(&ioend->io_list, &tmp);10901090- iomap_finish_ioend(ioend, error);10971097+ completions = iomap_finish_ioend(ioend, error);1091109810921099 while (!list_empty(&tmp)) {11001100+ if (completions > IOEND_BATCH_SIZE * 8) {11011101+ cond_resched();11021102+ completions = 0;11031103+ }10931104 ioend = list_first_entry(&tmp, struct iomap_ioend, io_list);10941105 list_del_init(&ioend->io_list);10951095- iomap_finish_ioend(ioend, error);11061106+ completions += iomap_finish_ioend(ioend, error);10961107 }10971108}10981109EXPORT_SYMBOL_GPL(iomap_finish_ioends);···11281107 (next->io_type == IOMAP_UNWRITTEN))11291108 return false;11301109 if (ioend->io_offset + ioend->io_size != next->io_offset)11101110+ return false;11111111+ /*11121112+ * Do not merge physically discontiguous ioends. 
The filesystem11131113+ * completion functions will have to iterate the physical11141114+ * discontiguities even if we merge the ioends at a logical level, so11151115+ * we don't gain anything by merging physical discontiguities here.11161116+ *11171117+ * We cannot use bio->bi_iter.bi_sector here as it is modified during11181118+ * submission so does not point to the start sector of the bio at11191119+ * completion.11201120+ */11211121+ if (ioend->io_sector + (ioend->io_size >> 9) != next->io_sector)11311122 return false;11321123 return true;11331124}···12421209 ioend->io_flags = wpc->iomap.flags;12431210 ioend->io_inode = inode;12441211 ioend->io_size = 0;12121212+ ioend->io_folios = 0;12451213 ioend->io_offset = offset;12461214 ioend->io_bio = bio;12151215+ ioend->io_sector = sector;12471216 return ioend;12481217}12491218···12851250 if (offset != wpc->ioend->io_offset + wpc->ioend->io_size)12861251 return false;12871252 if (sector != bio_end_sector(wpc->ioend->io_bio))12531253+ return false;12541254+ /*12551255+ * Limit ioend bio chain lengths to minimise IO completion latency. This12561256+ * also prevents long tight loops ending page writeback on all the12571257+ * folios in the ioend.12581258+ */12591259+ if (wpc->ioend->io_folios >= IOEND_BATCH_SIZE)12881260 return false;12891261 return true;12901262}···13771335 &submit_list);13781336 count++;13791337 }13381338+ if (count)13391339+ wpc->ioend->io_folios++;1380134013811341 WARN_ON_ONCE(!wpc->ioend && !list_empty(&submit_list));13821342 WARN_ON_ONCE(!folio_test_locked(folio));
+5-5
fs/libfs.c
···13791379 (inode->i_op == &empty_dir_inode_operations);13801380}1381138113821382-#ifdef CONFIG_UNICODE13821382+#if IS_ENABLED(CONFIG_UNICODE)13831383/*13841384 * Determine if the name of a dentry should be casefolded.13851385 *···14731473};14741474#endif1475147514761476-#if defined(CONFIG_FS_ENCRYPTION) && defined(CONFIG_UNICODE)14761476+#if defined(CONFIG_FS_ENCRYPTION) && IS_ENABLED(CONFIG_UNICODE)14771477static const struct dentry_operations generic_encrypted_ci_dentry_ops = {14781478 .d_hash = generic_ci_d_hash,14791479 .d_compare = generic_ci_d_compare,···15081508#ifdef CONFIG_FS_ENCRYPTION15091509 bool needs_encrypt_ops = dentry->d_flags & DCACHE_NOKEY_NAME;15101510#endif15111511-#ifdef CONFIG_UNICODE15111511+#if IS_ENABLED(CONFIG_UNICODE)15121512 bool needs_ci_ops = dentry->d_sb->s_encoding;15131513#endif15141514-#if defined(CONFIG_FS_ENCRYPTION) && defined(CONFIG_UNICODE)15141514+#if defined(CONFIG_FS_ENCRYPTION) && IS_ENABLED(CONFIG_UNICODE)15151515 if (needs_encrypt_ops && needs_ci_ops) {15161516 d_set_d_op(dentry, &generic_encrypted_ci_dentry_ops);15171517 return;···15231523 return;15241524 }15251525#endif15261526-#ifdef CONFIG_UNICODE15261526+#if IS_ENABLED(CONFIG_UNICODE)15271527 if (needs_ci_ops) {15281528 d_set_d_op(dentry, &generic_ci_dentry_ops);15291529 return;
+10-8
fs/lockd/svcsubs.c
···179179static int nlm_unlock_files(struct nlm_file *file)180180{181181 struct file_lock lock;182182- struct file *f;183182183183+ locks_init_lock(&lock);184184 lock.fl_type = F_UNLCK;185185 lock.fl_start = 0;186186 lock.fl_end = OFFSET_MAX;187187- for (f = file->f_file[0]; f <= file->f_file[1]; f++) {188188- if (f && vfs_lock_file(f, F_SETLK, &lock, NULL) < 0) {189189- pr_warn("lockd: unlock failure in %s:%d\n",190190- __FILE__, __LINE__);191191- return 1;192192- }193193- }187187+ if (file->f_file[O_RDONLY] &&188188+ vfs_lock_file(file->f_file[O_RDONLY], F_SETLK, &lock, NULL))189189+ goto out_err;190190+ if (file->f_file[O_WRONLY] &&191191+ vfs_lock_file(file->f_file[O_WRONLY], F_SETLK, &lock, NULL))192192+ goto out_err;194193 return 0;194194+out_err:195195+ pr_warn("lockd: unlock failure in %s:%d\n", __FILE__, __LINE__);196196+ return 1;195197}196198197199/*
+3-1
fs/nfsd/nfs4state.c
···41304130 status = nfserr_clid_inuse;41314131 if (client_has_state(old)41324132 && !same_creds(&unconf->cl_cred,41334133- &old->cl_cred))41334133+ &old->cl_cred)) {41344134+ old = NULL;41344135 goto out;41364136+ }41354137 status = mark_client_expired_locked(old);41364138 if (status) {41374139 old = NULL;
+3-3
fs/notify/fanotify/fanotify_user.c
···701701 if (fanotify_is_perm_event(event->mask))702702 FANOTIFY_PERM(event)->fd = fd;703703704704- if (f)705705- fd_install(fd, f);706706-707704 if (info_mode) {708705 ret = copy_info_records_to_user(event, info, info_mode, pidfd,709706 buf, count);710707 if (ret < 0)711708 goto out_close_fd;712709 }710710+711711+ if (f)712712+ fd_install(fd, f);713713714714 return metadata.event_len;715715
+13-3
fs/overlayfs/copy_up.c
···145145 if (err == -ENOTTY || err == -EINVAL)146146 return 0;147147 pr_warn("failed to retrieve lower fileattr (%pd2, err=%i)\n",148148- old, err);148148+ old->dentry, err);149149 return err;150150 }151151···157157 */158158 if (oldfa.flags & OVL_PROT_FS_FLAGS_MASK) {159159 err = ovl_set_protattr(inode, new->dentry, &oldfa);160160- if (err)160160+ if (err == -EPERM)161161+ pr_warn_once("copying fileattr: no xattr on upper\n");162162+ else if (err)161163 return err;162164 }163165···169167170168 err = ovl_real_fileattr_get(new, &newfa);171169 if (err) {170170+ /*171171+ * Returning an error if upper doesn't support fileattr will172172+ * result in a regression, so revert to the old behavior.173173+ */174174+ if (err == -ENOTTY || err == -EINVAL) {175175+ pr_warn_once("copying fileattr: no support on upper\n");176176+ return 0;177177+ }172178 pr_warn("failed to retrieve upper fileattr (%pd2, err=%i)\n",173173- new, err);179179+ new->dentry, err);174180 return err;175181 }176182
+8-3
fs/quota/dquot.c
···690690 /* This is not very clever (and fast) but currently I don't know about691691 * any other simple way of getting quota data to disk and we must get692692 * them there for userspace to be visible... */693693- if (sb->s_op->sync_fs)694694- sb->s_op->sync_fs(sb, 1);695695- sync_blockdev(sb->s_bdev);693693+ if (sb->s_op->sync_fs) {694694+ ret = sb->s_op->sync_fs(sb, 1);695695+ if (ret)696696+ return ret;697697+ }698698+ ret = sync_blockdev(sb->s_bdev);699699+ if (ret)700700+ return ret;696701697702 /*698703 * Now when everything is written we can discard the pagecache so
+12-7
fs/super.c
···16161616 percpu_rwsem_acquire(sb->s_writers.rw_sem + level, 0, _THIS_IP_);16171617}1618161816191619-static void sb_freeze_unlock(struct super_block *sb)16191619+static void sb_freeze_unlock(struct super_block *sb, int level)16201620{16211621- int level;16221622-16231623- for (level = SB_FREEZE_LEVELS - 1; level >= 0; level--)16211621+ for (level--; level >= 0; level--)16241622 percpu_up_write(sb->s_writers.rw_sem + level);16251623}16261624···16891691 sb_wait_write(sb, SB_FREEZE_PAGEFAULT);1690169216911693 /* All writers are done so after syncing there won't be dirty data */16921692- sync_filesystem(sb);16941694+ ret = sync_filesystem(sb);16951695+ if (ret) {16961696+ sb->s_writers.frozen = SB_UNFROZEN;16971697+ sb_freeze_unlock(sb, SB_FREEZE_PAGEFAULT);16981698+ wake_up(&sb->s_writers.wait_unfrozen);16991699+ deactivate_locked_super(sb);17001700+ return ret;17011701+ }1693170216941703 /* Now wait for internal filesystem counter */16951704 sb->s_writers.frozen = SB_FREEZE_FS;···17081703 printk(KERN_ERR17091704 "VFS:Filesystem freeze failed\n");17101705 sb->s_writers.frozen = SB_UNFROZEN;17111711- sb_freeze_unlock(sb);17061706+ sb_freeze_unlock(sb, SB_FREEZE_FS);17121707 wake_up(&sb->s_writers.wait_unfrozen);17131708 deactivate_locked_super(sb);17141709 return ret;···17531748 }1754174917551750 sb->s_writers.frozen = SB_UNFROZEN;17561756- sb_freeze_unlock(sb);17511751+ sb_freeze_unlock(sb, SB_FREEZE_FS);17571752out:17581753 wake_up(&sb->s_writers.wait_unfrozen);17591754 deactivate_locked_super(sb);
+12-6
fs/sync.c
···2929 */3030int sync_filesystem(struct super_block *sb)3131{3232- int ret;3232+ int ret = 0;33333434 /*3535 * We need to be protected against the filesystem going from···5252 * at a time.5353 */5454 writeback_inodes_sb(sb, WB_REASON_SYNC);5555- if (sb->s_op->sync_fs)5656- sb->s_op->sync_fs(sb, 0);5555+ if (sb->s_op->sync_fs) {5656+ ret = sb->s_op->sync_fs(sb, 0);5757+ if (ret)5858+ return ret;5959+ }5760 ret = sync_blockdev_nowait(sb->s_bdev);5858- if (ret < 0)6161+ if (ret)5962 return ret;60636164 sync_inodes_sb(sb);6262- if (sb->s_op->sync_fs)6363- sb->s_op->sync_fs(sb, 1);6565+ if (sb->s_op->sync_fs) {6666+ ret = sb->s_op->sync_fs(sb, 1);6767+ if (ret)6868+ return ret;6969+ }6470 return sync_blockdev(sb->s_bdev);6571}6672EXPORT_SYMBOL(sync_filesystem);
+5-13
fs/unicode/Kconfig
···33# UTF-8 normalization44#55config UNICODE66- bool "UTF-8 normalization and casefolding support"66+ tristate "UTF-8 normalization and casefolding support"77 help88 Say Y here to enable UTF-8 NFD normalization and NFD+CF casefolding99- support.1010-1111-config UNICODE_UTF8_DATA1212- tristate "UTF-8 normalization and casefolding tables"1313- depends on UNICODE1414- default UNICODE1515- help1616- This contains a large table of case foldings, which can be loaded as1717- a separate module if you say M here. To be on the safe side stick1818- to the default of Y. Saying N here makes no sense, if you do not want1919- utf8 casefolding support, disable CONFIG_UNICODE instead.99+ support. If you say M here the large table of case foldings will1010+ be a separate loadable module that gets requested only when a file1111+ system actually use it.20122113config UNICODE_NORMALIZATION_SELFTEST2214 tristate "Test UTF-8 normalization support"2323- depends on UNICODE_UTF8_DATA1515+ depends on UNICODE
···136136 memalloc_nofs_restore(nofs_flag);137137}138138139139-/* Finish all pending io completions. */139139+/*140140+ * Finish all pending IO completions that require transactional modifications.141141+ *142142+ * We try to merge physical and logically contiguous ioends before completion to143143+ * minimise the number of transactions we need to perform during IO completion.144144+ * Both unwritten extent conversion and COW remapping need to iterate and modify145145+ * one physical extent at a time, so we gain nothing by merging physically146146+ * discontiguous extents here.147147+ *148148+ * The ioend chain length that we can be processing here is largely unbound in149149+ * length and we may have to perform significant amounts of work on each ioend150150+ * to complete it. Hence we have to be careful about holding the CPU for too151151+ * long in this loop.152152+ */140153void141154xfs_end_io(142155 struct work_struct *work)···170157 list_del_init(&ioend->io_list);171158 iomap_ioend_try_merge(ioend, &tmp);172159 xfs_end_ioend(ioend);160160+ cond_resched();173161 }174162}175163
+3-6
fs/xfs/xfs_bmap_util.c
···850850 rblocks = 0;851851 }852852853853- /*854854- * Allocate and setup the transaction.855855- */856853 error = xfs_trans_alloc_inode(ip, &M_RES(mp)->tr_write,857854 dblocks, rblocks, false, &tp);858855 if (error)···866869 if (error)867870 goto error;868871869869- /*870870- * Complete the transaction871871- */872872+ ip->i_diflags |= XFS_DIFLAG_PREALLOC;873873+ xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);874874+872875 error = xfs_trans_commit(tp);873876 xfs_iunlock(ip, XFS_ILOCK_EXCL);874877 if (error)
+26-60
fs/xfs/xfs_file.c
···6666 return !((pos | len) & mask);6767}68686969-int7070-xfs_update_prealloc_flags(7171- struct xfs_inode *ip,7272- enum xfs_prealloc_flags flags)7373-{7474- struct xfs_trans *tp;7575- int error;7676-7777- error = xfs_trans_alloc(ip->i_mount, &M_RES(ip->i_mount)->tr_writeid,7878- 0, 0, 0, &tp);7979- if (error)8080- return error;8181-8282- xfs_ilock(ip, XFS_ILOCK_EXCL);8383- xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);8484-8585- if (!(flags & XFS_PREALLOC_INVISIBLE)) {8686- VFS_I(ip)->i_mode &= ~S_ISUID;8787- if (VFS_I(ip)->i_mode & S_IXGRP)8888- VFS_I(ip)->i_mode &= ~S_ISGID;8989- xfs_trans_ichgtime(tp, ip, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);9090- }9191-9292- if (flags & XFS_PREALLOC_SET)9393- ip->i_diflags |= XFS_DIFLAG_PREALLOC;9494- if (flags & XFS_PREALLOC_CLEAR)9595- ip->i_diflags &= ~XFS_DIFLAG_PREALLOC;9696-9797- xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);9898- if (flags & XFS_PREALLOC_SYNC)9999- xfs_trans_set_sync(tp);100100- return xfs_trans_commit(tp);101101-}102102-10369/*10470 * Fsync operations on directories are much simpler than on regular files,10571 * as there is no file data to flush, and thus also no need for explicit···861895 return error;862896}863897898898+/* Does this file, inode, or mount want synchronous writes? 
*/899899+static inline bool xfs_file_sync_writes(struct file *filp)900900+{901901+ struct xfs_inode *ip = XFS_I(file_inode(filp));902902+903903+ if (xfs_has_wsync(ip->i_mount))904904+ return true;905905+ if (filp->f_flags & (__O_SYNC | O_DSYNC))906906+ return true;907907+ if (IS_SYNC(file_inode(filp)))908908+ return true;909909+910910+ return false;911911+}912912+864913#define XFS_FALLOC_FL_SUPPORTED \865914 (FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE | \866915 FALLOC_FL_COLLAPSE_RANGE | FALLOC_FL_ZERO_RANGE | \···891910 struct inode *inode = file_inode(file);892911 struct xfs_inode *ip = XFS_I(inode);893912 long error;894894- enum xfs_prealloc_flags flags = 0;895913 uint iolock = XFS_IOLOCK_EXCL | XFS_MMAPLOCK_EXCL;896914 loff_t new_size = 0;897915 bool do_file_insert = false;···934954 if (error)935955 goto out_unlock;936956 }957957+958958+ error = file_modified(file);959959+ if (error)960960+ goto out_unlock;937961938962 if (mode & FALLOC_FL_PUNCH_HOLE) {939963 error = xfs_free_file_space(ip, offset, len);···9881004 }9891005 do_file_insert = true;9901006 } else {991991- flags |= XFS_PREALLOC_SET;992992-9931007 if (!(mode & FALLOC_FL_KEEP_SIZE) &&9941008 offset + len > i_size_read(inode)) {9951009 new_size = offset + len;···10391057 }10401058 }1041105910421042- if (file->f_flags & O_DSYNC)10431043- flags |= XFS_PREALLOC_SYNC;10441044-10451045- error = xfs_update_prealloc_flags(ip, flags);10461046- if (error)10471047- goto out_unlock;10481048-10491060 /* Change file size if needed */10501061 if (new_size) {10511062 struct iattr iattr;···10571082 * leave shifted extents past EOF and hence losing access to10581083 * the data that is contained within them.10591084 */10601060- if (do_file_insert)10851085+ if (do_file_insert) {10611086 error = xfs_insert_file_space(ip, offset, len);10871087+ if (error)10881088+ goto out_unlock;10891089+ }10901090+10911091+ if (xfs_file_sync_writes(file))10921092+ error = xfs_log_force_inode(ip);1062109310631094out_unlock:10641095 
xfs_iunlock(ip, iolock);···10941113 if (lockflags)10951114 xfs_iunlock(ip, lockflags);10961115 return ret;10971097-}10981098-10991099-/* Does this file, inode, or mount want synchronous writes? */11001100-static inline bool xfs_file_sync_writes(struct file *filp)11011101-{11021102- struct xfs_inode *ip = XFS_I(file_inode(filp));11031103-11041104- if (xfs_has_wsync(ip->i_mount))11051105- return true;11061106- if (filp->f_flags & (__O_SYNC | O_DSYNC))11071107- return true;11081108- if (IS_SYNC(file_inode(filp)))11091109- return true;11101110-11111111- return false;11121116}1113111711141118STATIC loff_t
···1464146414651465 if (bmx.bmv_count < 2)14661466 return -EINVAL;14671467- if (bmx.bmv_count > ULONG_MAX / recsize)14671467+ if (bmx.bmv_count >= INT_MAX / recsize)14681468 return -ENOMEM;1469146914701470 buf = kvcalloc(bmx.bmv_count, sizeof(*buf), GFP_KERNEL);
+39-3
fs/xfs/xfs_pnfs.c
···7171}72727373/*7474+ * We cannot use file based VFS helpers such as file_modified() to update7575+ * inode state as we modify the data/metadata in the inode here. Hence we have7676+ * to open code the timestamp updates and SUID/SGID stripping. We also need7777+ * to set the inode prealloc flag to ensure that the extents we allocate are not7878+ * removed if the inode is reclaimed from memory before xfs_fs_block_commit()7979+ * is from the client to indicate that data has been written and the file size8080+ * can be extended.8181+ */8282+static int8383+xfs_fs_map_update_inode(8484+ struct xfs_inode *ip)8585+{8686+ struct xfs_trans *tp;8787+ int error;8888+8989+ error = xfs_trans_alloc(ip->i_mount, &M_RES(ip->i_mount)->tr_writeid,9090+ 0, 0, 0, &tp);9191+ if (error)9292+ return error;9393+9494+ xfs_ilock(ip, XFS_ILOCK_EXCL);9595+ xfs_trans_ijoin(tp, ip, XFS_ILOCK_EXCL);9696+9797+ VFS_I(ip)->i_mode &= ~S_ISUID;9898+ if (VFS_I(ip)->i_mode & S_IXGRP)9999+ VFS_I(ip)->i_mode &= ~S_ISGID;100100+ xfs_trans_ichgtime(tp, ip, XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);101101+ ip->i_diflags |= XFS_DIFLAG_PREALLOC;102102+103103+ xfs_trans_log_inode(tp, ip, XFS_ILOG_CORE);104104+ return xfs_trans_commit(tp);105105+}106106+107107+/*74108 * Get a layout for the pNFS client.75109 */76110int···198164 * that the blocks allocated and handed out to the client are199165 * guaranteed to be present even after a server crash.200166 */201201- error = xfs_update_prealloc_flags(ip,202202- XFS_PREALLOC_SET | XFS_PREALLOC_SYNC);167167+ error = xfs_fs_map_update_inode(ip);168168+ if (!error)169169+ error = xfs_log_force_inode(ip);203170 if (error)204171 goto out_unlock;172172+205173 } else {206174 xfs_iunlock(ip, lock_flags);207175 }···291255 length = end - start;292256 if (!length)293257 continue;294294-258258+295259 /*296260 * Make sure reads through the pagecache see the new data.297261 */
+5-1
fs/xfs/xfs_super.c
···735735 int wait)736736{737737 struct xfs_mount *mp = XFS_M(sb);738738+ int error;738739739740 trace_xfs_fs_sync_fs(mp, __return_address);740741···745744 if (!wait)746745 return 0;747746748748- xfs_log_force(mp, XFS_LOG_SYNC);747747+ error = xfs_log_force(mp, XFS_LOG_SYNC);748748+ if (error)749749+ return error;750750+749751 if (laptop_mode) {750752 /*751753 * The disk must be active because we're syncing.
···262262263263 /* Draws a rectangle */264264 void (*fb_fillrect) (struct fb_info *info, const struct fb_fillrect *rect);265265- /* Copy data from area to another. Obsolete. */265265+ /* Copy data from area to another */266266 void (*fb_copyarea) (struct fb_info *info, const struct fb_copyarea *region);267267 /* Draws a image to the display */268268 void (*fb_imageblit) (struct fb_info *info, const struct fb_image *image);
···263263 struct list_head io_list; /* next ioend in chain */264264 u16 io_type;265265 u16 io_flags; /* IOMAP_F_* */266266+ u32 io_folios; /* folios added to ioend */266267 struct inode *io_inode; /* file being written to */267268 size_t io_size; /* size of the extent */268269 loff_t io_offset; /* offset in the file */270270+ sector_t io_sector; /* start sector of ioend */269271 struct bio *io_bio; /* bio being built */270272 struct bio io_inline_bio; /* MUST BE LAST! */271273};
+109-3
include/linux/kvm_host.h
···2929#include <linux/refcount.h>3030#include <linux/nospec.h>3131#include <linux/notifier.h>3232+#include <linux/ftrace.h>3233#include <linux/hashtable.h>3434+#include <linux/instrumentation.h>3335#include <linux/interval_tree.h>3436#include <linux/rbtree.h>3537#include <linux/xarray.h>···370368 u64 last_used_slot_gen;371369};372370373373-/* must be called with irqs disabled */374374-static __always_inline void guest_enter_irqoff(void)371371+/*372372+ * Start accounting time towards a guest.373373+ * Must be called before entering guest context.374374+ */375375+static __always_inline void guest_timing_enter_irqoff(void)375376{376377 /*377378 * This is running in ioctl context so its safe to assume that it's the···383378 instrumentation_begin();384379 vtime_account_guest_enter();385380 instrumentation_end();381381+}386382383383+/*384384+ * Enter guest context and enter an RCU extended quiescent state.385385+ *386386+ * Between guest_context_enter_irqoff() and guest_context_exit_irqoff() it is387387+ * unsafe to use any code which may directly or indirectly use RCU, tracing388388+ * (including IRQ flag tracing), or lockdep. All code in this period must be389389+ * non-instrumentable.390390+ */391391+static __always_inline void guest_context_enter_irqoff(void)392392+{387393 /*388394 * KVM does not hold any references to rcu protected data when it389395 * switches CPU into a guest mode. In fact switching to a guest mode···410394 }411395}412396413413-static __always_inline void guest_exit_irqoff(void)397397+/*398398+ * Deprecated. 
Architectures should move to guest_timing_enter_irqoff() and399399+ * guest_state_enter_irqoff().400400+ */401401+static __always_inline void guest_enter_irqoff(void)402402+{403403+ guest_timing_enter_irqoff();404404+ guest_context_enter_irqoff();405405+}406406+407407+/**408408+ * guest_state_enter_irqoff - Fixup state when entering a guest409409+ *410410+ * Entry to a guest will enable interrupts, but the kernel state is interrupts411411+ * disabled when this is invoked. Also tell RCU about it.412412+ *413413+ * 1) Trace interrupts on state414414+ * 2) Invoke context tracking if enabled to adjust RCU state415415+ * 3) Tell lockdep that interrupts are enabled416416+ *417417+ * Invoked from architecture specific code before entering a guest.418418+ * Must be called with interrupts disabled and the caller must be419419+ * non-instrumentable.420420+ * The caller has to invoke guest_timing_enter_irqoff() before this.421421+ *422422+ * Note: this is analogous to exit_to_user_mode().423423+ */424424+static __always_inline void guest_state_enter_irqoff(void)425425+{426426+ instrumentation_begin();427427+ trace_hardirqs_on_prepare();428428+ lockdep_hardirqs_on_prepare(CALLER_ADDR0);429429+ instrumentation_end();430430+431431+ guest_context_enter_irqoff();432432+ lockdep_hardirqs_on(CALLER_ADDR0);433433+}434434+435435+/*436436+ * Exit guest context and exit an RCU extended quiescent state.437437+ *438438+ * Between guest_context_enter_irqoff() and guest_context_exit_irqoff() it is439439+ * unsafe to use any code which may directly or indirectly use RCU, tracing440440+ * (including IRQ flag tracing), or lockdep. 
All code in this period must be441441+ * non-instrumentable.442442+ */443443+static __always_inline void guest_context_exit_irqoff(void)414444{415445 context_tracking_guest_exit();446446+}416447448448+/*449449+ * Stop accounting time towards a guest.450450+ * Must be called after exiting guest context.451451+ */452452+static __always_inline void guest_timing_exit_irqoff(void)453453+{417454 instrumentation_begin();418455 /* Flush the guest cputime we spent on the guest */419456 vtime_account_guest_exit();420457 instrumentation_end();458458+}459459+460460+/*461461+ * Deprecated. Architectures should move to guest_state_exit_irqoff() and462462+ * guest_timing_exit_irqoff().463463+ */464464+static __always_inline void guest_exit_irqoff(void)465465+{466466+ guest_context_exit_irqoff();467467+ guest_timing_exit_irqoff();421468}422469423470static inline void guest_exit(void)···490411 local_irq_save(flags);491412 guest_exit_irqoff();492413 local_irq_restore(flags);414414+}415415+416416+/**417417+ * guest_state_exit_irqoff - Establish state when returning from guest mode418418+ *419419+ * Entry from a guest disables interrupts, but guest mode is traced as420420+ * interrupts enabled. 
Also with NO_HZ_FULL RCU might be idle.421421+ *422422+ * 1) Tell lockdep that interrupts are disabled423423+ * 2) Invoke context tracking if enabled to reactivate RCU424424+ * 3) Trace interrupts off state425425+ *426426+ * Invoked from architecture specific code after exiting a guest.427427+ * Must be invoked with interrupts disabled and the caller must be428428+ * non-instrumentable.429429+ * The caller has to invoke guest_timing_exit_irqoff() after this.430430+ *431431+ * Note: this is analogous to enter_from_user_mode().432432+ */433433+static __always_inline void guest_state_exit_irqoff(void)434434+{435435+ lockdep_hardirqs_off(CALLER_ADDR0);436436+ guest_context_exit_irqoff();437437+438438+ instrumentation_begin();439439+ trace_hardirqs_off_finish();440440+ instrumentation_end();493441}494442495443static inline int kvm_vcpu_exiting_guest_mode(struct kvm_vcpu *vcpu)
+1
include/linux/libata.h
···380380 ATA_HORKAGE_MAX_TRIM_128M = (1 << 26), /* Limit max trim size to 128M */381381 ATA_HORKAGE_NO_NCQ_ON_ATI = (1 << 27), /* Disable NCQ on ATI chipset */382382 ATA_HORKAGE_NO_ID_DEV_LOG = (1 << 28), /* Identify device log missing */383383+ ATA_HORKAGE_NO_LOG_DIR = (1 << 29), /* Do not read log directory */383384384385 /* DMA mask for user DMA control: User visible values; DO NOT385386 renumber */
+7
include/linux/netfs.h
···244244 int (*prepare_write)(struct netfs_cache_resources *cres,245245 loff_t *_start, size_t *_len, loff_t i_size,246246 bool no_space_allocated_yet);247247+248248+ /* Query the occupancy of the cache in a region, returning where the249249+ * next chunk of data starts and how long it is.250250+ */251251+ int (*query_occupancy)(struct netfs_cache_resources *cres,252252+ loff_t start, size_t len, size_t granularity,253253+ loff_t *_data_start, size_t *_data_len);247254};248255249256struct readahead_control;
···6262{6363 return (address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1);6464}6565+#define pte_index pte_index65666667#ifndef pmd_index6768static inline unsigned long pmd_index(unsigned long address)
-1
include/linux/sched.h
···16801680#define PF_MEMALLOC 0x00000800 /* Allocating memory */16811681#define PF_NPROC_EXCEEDED 0x00001000 /* set_user() noticed that RLIMIT_NPROC was exceeded */16821682#define PF_USED_MATH 0x00002000 /* If unset the fpu must be initialized before use */16831683-#define PF_USED_ASYNC 0x00004000 /* Used async_schedule*(), used by module init */16841683#define PF_NOFREEZE 0x00008000 /* This thread should not be frozen */16851684#define PF_FROZEN 0x00010000 /* Frozen for system suspend */16861685#define PF_KSWAPD 0x00020000 /* I am kswapd */
···4747/*4848 * Inserts the grant references into the mapping table of an instance4949 * of gntdev. N.B. This does not perform the mapping, which is deferred5050- * until mmap() is called with @index as the offset.5050+ * until mmap() is called with @index as the offset. @index should be5151+ * considered opaque to userspace, with one exception: if no grant5252+ * references have ever been inserted into the mapping table of this5353+ * instance, @index will be set to 0. This is necessary to use gntdev5454+ * with userspace APIs that expect a file descriptor that can be5555+ * mmap()'d at offset 0, such as Wayland. If @count is set to 0, this5656+ * ioctl will fail.5157 */5258#define IOCTL_GNTDEV_MAP_GRANT_REF \5359_IOC(_IOC_NONE, 'G', 0, sizeof(struct ioctl_gntdev_map_grant_ref))
···205205 atomic_inc(&entry_count);206206 spin_unlock_irqrestore(&async_lock, flags);207207208208- /* mark that this task has queued an async job, used by module init */209209- current->flags |= PF_USED_ASYNC;210210-211208 /* schedule for execution */212209 queue_work_node(node, system_unbound_wq, &entry->work);213210
+43-19
kernel/audit.c
···541541/**542542 * kauditd_rehold_skb - Handle a audit record send failure in the hold queue543543 * @skb: audit record544544+ * @error: error code (unused)544545 *545546 * Description:546547 * This should only be used by the kauditd_thread when it fails to flush the547548 * hold queue.548549 */549549-static void kauditd_rehold_skb(struct sk_buff *skb)550550+static void kauditd_rehold_skb(struct sk_buff *skb, __always_unused int error)550551{551551- /* put the record back in the queue at the same place */552552- skb_queue_head(&audit_hold_queue, skb);552552+ /* put the record back in the queue */553553+ skb_queue_tail(&audit_hold_queue, skb);553554}554555555556/**556557 * kauditd_hold_skb - Queue an audit record, waiting for auditd557558 * @skb: audit record559559+ * @error: error code558560 *559561 * Description:560562 * Queue the audit record, waiting for an instance of auditd. When this···566564 * and queue it, if we have room. If we want to hold on to the record, but we567565 * don't have room, record a record lost message.568566 */569569-static void kauditd_hold_skb(struct sk_buff *skb)567567+static void kauditd_hold_skb(struct sk_buff *skb, int error)570568{571569 /* at this point it is uncertain if we will ever send this to auditd so572570 * try to send the message via printk before we go any further */573571 kauditd_printk_skb(skb);574572575573 /* can we just silently drop the message? */576576- if (!audit_default) {577577- kfree_skb(skb);578578- return;574574+ if (!audit_default)575575+ goto drop;576576+577577+ /* the hold queue is only for when the daemon goes away completely,578578+ * not -EAGAIN failures; if we are in a -EAGAIN state requeue the579579+ * record on the retry queue unless it's full, in which case drop it580580+ */581581+ if (error == -EAGAIN) {582582+ if (!audit_backlog_limit ||583583+ skb_queue_len(&audit_retry_queue) < audit_backlog_limit) {584584+ skb_queue_tail(&audit_retry_queue, skb);585585+ return;586586+ }587587+ audit_log_lost("kauditd retry queue overflow");588588+ goto drop;579589 }580590581581- /* if we have room, queue the message */591591+ /* if we have room in the hold queue, queue the message */582592 if (!audit_backlog_limit ||583593 skb_queue_len(&audit_hold_queue) < audit_backlog_limit) {584594 skb_queue_tail(&audit_hold_queue, skb);···599585600586 /* we have no other options - drop the message */601587 audit_log_lost("kauditd hold queue overflow");588588+drop:602589 kfree_skb(skb);603590}604591605592/**606593 * kauditd_retry_skb - Queue an audit record, attempt to send again to auditd607594 * @skb: audit record595595+ * @error: error code (unused)608596 *609597 * Description:610598 * Not as serious as kauditd_hold_skb() as we still have a connected auditd,611599 * but for some reason we are having problems sending it audit records so612600 * queue the given record and attempt to resend.613601 */614614-static void kauditd_retry_skb(struct sk_buff *skb)602602+static void kauditd_retry_skb(struct sk_buff *skb, __always_unused int error)615603{616616- /* NOTE: because records should only live in the retry queue for a617617- * short period of time, before either being sent or moved to the hold618618- * queue, we don't currently enforce a limit on this queue */619619- skb_queue_tail(&audit_retry_queue, skb);604604+ if (!audit_backlog_limit ||605605+ skb_queue_len(&audit_retry_queue) < audit_backlog_limit) {606606+ skb_queue_tail(&audit_retry_queue, skb);607607+ return;608608+ }609609+610610+ /* we have to drop the record, send it via printk as a last effort */611611+ kauditd_printk_skb(skb);612612+ audit_log_lost("kauditd retry queue overflow");613613+ kfree_skb(skb);620614}621615622616/**···662640 /* flush the retry queue to the hold queue, but don't touch the main663641 * queue since we need to process that normally for multicast */664642 while ((skb = skb_dequeue(&audit_retry_queue)))665665- kauditd_hold_skb(skb);643643+ kauditd_hold_skb(skb, -ECONNREFUSED);666644}667645668646/**···736714 struct sk_buff_head *queue,737715 unsigned int retry_limit,738716 void (*skb_hook)(struct sk_buff *skb),739739- void (*err_hook)(struct sk_buff *skb))717717+ void (*err_hook)(struct sk_buff *skb, int error))740718{741719 int rc = 0;742742- struct sk_buff *skb;720720+ struct sk_buff *skb = NULL;721721+ struct sk_buff *skb_tail;743722 unsigned int failed = 0;744723745724 /* NOTE: kauditd_thread takes care of all our locking, we just use746725 * the netlink info passed to us (e.g. sk and portid) */747726748748- while ((skb = skb_dequeue(queue))) {727727+ skb_tail = skb_peek_tail(queue);728728+ while ((skb != skb_tail) && (skb = skb_dequeue(queue))) {749729 /* call the skb_hook for each skb we touch */750730 if (skb_hook)751731 (*skb_hook)(skb);···755731 /* can we send to anyone via unicast? */756732 if (!sk) {757733 if (err_hook)758758- (*err_hook)(skb);734734+ (*err_hook)(skb, -ECONNREFUSED);759735 continue;760736 }761737···769745 rc == -ECONNREFUSED || rc == -EPERM) {770746 sk = NULL;771747 if (err_hook)772772- (*err_hook)(skb);748748+ (*err_hook)(skb, rc);773749 if (rc == -EAGAIN)774750 rc = 0;775751 /* continue to drain the queue */
···550550static void notrace inc_misses_counter(struct bpf_prog *prog)551551{552552 struct bpf_prog_stats *stats;553553+ unsigned int flags;553554554555 stats = this_cpu_ptr(prog->stats);555555- u64_stats_update_begin(&stats->syncp);556556+ flags = u64_stats_update_begin_irqsave(&stats->syncp);556557 u64_stats_inc(&stats->misses);557557- u64_stats_update_end(&stats->syncp);558558+ u64_stats_update_end_irqrestore(&stats->syncp, flags);558559}559560560561/* The logic is similar to bpf_prog_run(), but with an explicit
+14
kernel/cgroup/cgroup-v1.c
···549549550550 BUILD_BUG_ON(sizeof(cgrp->root->release_agent_path) < PATH_MAX);551551552552+ /*553553+ * Release agent gets called with all capabilities,554554+ * require capabilities to set release agent.555555+ */556556+ if ((of->file->f_cred->user_ns != &init_user_ns) ||557557+ !capable(CAP_SYS_ADMIN))558558+ return -EPERM;559559+552560 cgrp = cgroup_kn_lock_live(of->kn, false);553561 if (!cgrp)554562 return -ENODEV;···962954 /* Specifying two release agents is forbidden */963955 if (ctx->release_agent)964956 return invalfc(fc, "release_agent respecified");957957+ /*958958+ * Release agent gets called with all capabilities,959959+ * require capabilities to set release agent.960960+ */961961+ if ((fc->user_ns != &init_user_ns) || !capable(CAP_SYS_ADMIN))962962+ return invalfc(fc, "Setting release_agent not allowed");965963 ctx->release_agent = param->string;966964 param->string = NULL;967965 break;
+51-14
kernel/cgroup/cpuset.c
···591591}592592593593/*594594+ * validate_change_legacy() - Validate conditions specific to legacy (v1)595595+ * behavior.596596+ */597597+static int validate_change_legacy(struct cpuset *cur, struct cpuset *trial)598598+{599599+ struct cgroup_subsys_state *css;600600+ struct cpuset *c, *par;601601+ int ret;602602+603603+ WARN_ON_ONCE(!rcu_read_lock_held());604604+605605+ /* Each of our child cpusets must be a subset of us */606606+ ret = -EBUSY;607607+ cpuset_for_each_child(c, css, cur)608608+ if (!is_cpuset_subset(c, trial))609609+ goto out;610610+611611+ /* On legacy hierarchy, we must be a subset of our parent cpuset. */612612+ ret = -EACCES;613613+ par = parent_cs(cur);614614+ if (par && !is_cpuset_subset(trial, par))615615+ goto out;616616+617617+ ret = 0;618618+out:619619+ return ret;620620+}621621+622622+/*594623 * validate_change() - Used to validate that any proposed cpuset change595624 * follows the structural rules for cpusets.596625 *···643614{644615 struct cgroup_subsys_state *css;645616 struct cpuset *c, *par;646646- int ret;647647-648648- /* The checks don't apply to root cpuset */649649- if (cur == &top_cpuset)650650- return 0;617617+ int ret = 0;651618652619 rcu_read_lock();653653- par = parent_cs(cur);654620655655- /* On legacy hierarchy, we must be a subset of our parent cpuset. */656656- ret = -EACCES;657657- if (!is_in_v2_mode() && !is_cpuset_subset(trial, par))621621+ if (!is_in_v2_mode())622622+ ret = validate_change_legacy(cur, trial);623623+ if (ret)658624 goto out;625625+626626+ /* Remaining checks don't apply to root cpuset */627627+ if (cur == &top_cpuset)628628+ goto out;629629+630630+ par = parent_cs(cur);659631660632 /*661633 * If either I or some sibling (!= me) is exclusive, we can't···12051175 *12061176 * Because of the implicit cpu exclusive nature of a partition root,12071177 * cpumask changes that violates the cpu exclusivity rule will not be12081208- * permitted when checked by validate_change(). The validate_change()12091209- * function will also prevent any changes to the cpu list if it is not12101210- * a superset of children's cpu lists.11781178+ * permitted when checked by validate_change().12111179 */12121180static int update_parent_subparts_cpumask(struct cpuset *cpuset, int cmd,12131181 struct cpumask *newmask,···15501522 struct cpuset *sibling;15511523 struct cgroup_subsys_state *pos_css;1552152415251525+ percpu_rwsem_assert_held(&cpuset_rwsem);15261526+15531527 /*15541528 * Check all its siblings and call update_cpumasks_hier()15551529 * if their use_parent_ecpus flag is set in order for them15561530 * to use the right effective_cpus value.15311531+ *15321532+ * The update_cpumasks_hier() function may sleep. So we have to15331533+ * release the RCU read lock before calling it.15571534 */15581535 rcu_read_lock();15591536 cpuset_for_each_child(sibling, pos_css, parent) {···15661533 continue;15671534 if (!sibling->use_parent_ecpus)15681535 continue;15361536+ if (!css_tryget_online(&sibling->css))15371537+ continue;1569153815391539+ rcu_read_unlock();15701540 update_cpumasks_hier(sibling, tmp);15411541+ rcu_read_lock();15421542+ css_put(&sibling->css);15711543 }15721544 rcu_read_unlock();15731545}···16451607 * Make sure that subparts_cpus is a subset of cpus_allowed.16461608 */16471609 if (cs->nr_subparts_cpus) {16481648- cpumask_andnot(cs->subparts_cpus, cs->subparts_cpus,16491649- cs->cpus_allowed);16101610+ cpumask_and(cs->subparts_cpus, cs->subparts_cpus, cs->cpus_allowed);16501611 cs->nr_subparts_cpus = cpumask_weight(cs->subparts_cpus);16511612 }16521613 spin_unlock_irq(&callback_lock);
+5-20
kernel/module.c
···37253725 }37263726 freeinit->module_init = mod->init_layout.base;3727372737283728- /*37293729- * We want to find out whether @mod uses async during init. Clear37303730- * PF_USED_ASYNC. async_schedule*() will set it.37313731- */37323732- current->flags &= ~PF_USED_ASYNC;37333733-37343728 do_mod_ctors(mod);37353729 /* Start the module */37363730 if (mod->init != NULL)···3750375637513757 /*37523758 * We need to finish all async code before the module init sequence37533753- * is done. This has potential to deadlock. For example, a newly37543754- * detected block device can trigger request_module() of the37553755- * default iosched from async probing task. Once userland helper37563756- * reaches here, async_synchronize_full() will wait on the async37573757- * task waiting on request_module() and deadlock.37593759+ * is done. This has potential to deadlock if synchronous module37603760+ * loading is requested from async (which is not allowed!).37583761 *37593759- * This deadlock is avoided by perfomring async_synchronize_full()37603760- * iff module init queued any async jobs. This isn't a full37613761- * solution as it will deadlock the same if module loading from37623762- * async jobs nests more than once; however, due to the various37633763- * constraints, this hack seems to be the best option for now.37643764- * Please refer to the following thread for details.37653765- *37663766- * http://thread.gmane.org/gmane.linux.kernel/142081437623762+ * See commit 0fdff3ec6d87 ("async, kmod: warn on synchronous37633763+ * request_module() from async workers") for more details.37673764 */37683768- if (!mod->async_probe_requested && (current->flags & PF_USED_ASYNC))37653765+ if (!mod->async_probe_requested)37693766 async_synchronize_full();3770376737713768 ftrace_free_mem(mod, mod->init_layout.base, mod->init_layout.base +
+1-1
kernel/printk/sysctl.c
···1212static const int ten_thousand = 10000;13131414static int proc_dointvec_minmax_sysadmin(struct ctl_table *table, int write,1515- void __user *buffer, size_t *lenp, loff_t *ppos)1515+ void *buffer, size_t *lenp, loff_t *ppos)1616{1717 if (write && !capable(CAP_SYS_ADMIN))1818 return -EPERM;
+2-3
kernel/stackleak.c
···7070#define skip_erasing() false7171#endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */72727373-asmlinkage void notrace stackleak_erase(void)7373+asmlinkage void noinstr stackleak_erase(void)7474{7575 /* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */7676 unsigned long kstack_ptr = current->lowest_stack;···124124 /* Reset the 'lowest_stack' value for the next syscall */125125 current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64;126126}127127-NOKPROBE_SYMBOL(stackleak_erase);128127129129-void __used __no_caller_saved_registers notrace stackleak_track_stack(void)128128+void __used __no_caller_saved_registers noinstr stackleak_track_stack(void)130129{131130 unsigned long sp = current_stack_pointer;132131
···124124 * considered failure, and furthermore, a likely bug in the caller, so a warning125125 * is also emitted.126126 */127127-struct page *try_grab_compound_head(struct page *page,128128- int refs, unsigned int flags)127127+__maybe_unused struct page *try_grab_compound_head(struct page *page,128128+ int refs, unsigned int flags)129129{130130 if (flags & FOLL_GET)131131 return try_get_compound_head(page, refs);···208208 */209209bool __must_check try_grab_page(struct page *page, unsigned int flags)210210{211211- if (!(flags & (FOLL_GET | FOLL_PIN)))212212- return true;211211+ WARN_ON_ONCE((flags & (FOLL_GET | FOLL_PIN)) == (FOLL_GET | FOLL_PIN));213212214214- return try_grab_compound_head(page, 1, flags);213213+ if (flags & FOLL_GET)214214+ return try_get_page(page);215215+ else if (flags & FOLL_PIN) {216216+ int refs = 1;217217+218218+ page = compound_head(page);219219+220220+ if (WARN_ON_ONCE(page_ref_count(page) <= 0))221221+ return false;222222+223223+ if (hpage_pincount_available(page))224224+ hpage_pincount_add(page, 1);225225+ else226226+ refs = GUP_PIN_COUNTING_BIAS;227227+228228+ /*229229+ * Similar to try_grab_compound_head(): even if using the230230+ * hpage_pincount_add/_sub() routines, be sure to231231+ * *also* increment the normal page refcount field at least232232+ * once, so that the page really is pinned.233233+ */234234+ page_ref_add(page, refs);235235+236236+ mod_node_page_state(page_pgdat(page), NR_FOLL_PIN_ACQUIRED, 1);237237+ }238238+239239+ return true;215240}216241217242/**
···14101410{14111411 unsigned long flags;14121412 struct kmemleak_object *object;14131413- int i;14131413+ struct zone *zone;14141414+ int __maybe_unused i;14141415 int new_leaks = 0;1415141614161417 jiffies_last_scan = jiffies;···14511450 * Struct page scanning for each node.14521451 */14531452 get_online_mems();14541454- for_each_online_node(i) {14551455- unsigned long start_pfn = node_start_pfn(i);14561456- unsigned long end_pfn = node_end_pfn(i);14531453+ for_each_populated_zone(zone) {14541454+ unsigned long start_pfn = zone->zone_start_pfn;14551455+ unsigned long end_pfn = zone_end_pfn(zone);14571456 unsigned long pfn;1458145714591458 for (pfn = start_pfn; pfn < end_pfn; pfn++) {···14621461 if (!page)14631462 continue;1464146314651465- /* only scan pages belonging to this node */14661466- if (page_to_nid(page) != i)14641464+ /* only scan pages belonging to this zone */14651465+ if (page_zone(page) != zone)14671466 continue;14681467 /* only scan if page is in use */14691468 if (page_count(page) == 0)
+1-1
mm/page_isolation.c
···115115 * onlining - just onlined memory won't immediately be considered for116116 * allocation.117117 */118118- if (!isolated_page && PageBuddy(page)) {118118+ if (!isolated_page) {119119 nr_pages = move_freepages_block(zone, page, migratetype, NULL);120120 __mod_zone_freepage_state(zone, nr_pages, migratetype);121121 }
+27-28
mm/page_table_check.c
···8686{8787 struct page_ext *page_ext;8888 struct page *page;8989+ unsigned long i;8990 bool anon;9090- int i;91919292 if (!pfn_valid(pfn))9393 return;···121121{122122 struct page_ext *page_ext;123123 struct page *page;124124+ unsigned long i;124125 bool anon;125125- int i;126126127127 if (!pfn_valid(pfn))128128 return;···152152void __page_table_check_zero(struct page *page, unsigned int order)153153{154154 struct page_ext *page_ext = lookup_page_ext(page);155155- int i;155155+ unsigned long i;156156157157 BUG_ON(!page_ext);158158- for (i = 0; i < (1 << order); i++) {158158+ for (i = 0; i < (1ul << order); i++) {159159 struct page_table_check *ptc = get_page_table_check(page_ext);160160161161 BUG_ON(atomic_read(&ptc->anon_map_count));···206206void __page_table_check_pte_set(struct mm_struct *mm, unsigned long addr,207207 pte_t *ptep, pte_t pte)208208{209209- pte_t old_pte;210210-211209 if (&init_mm == mm)212210 return;213211214214- old_pte = *ptep;215215- if (pte_user_accessible_page(old_pte)) {216216- page_table_check_clear(mm, addr, pte_pfn(old_pte),217217- PAGE_SIZE >> PAGE_SHIFT);218218- }219219-212212+ __page_table_check_pte_clear(mm, addr, *ptep);220213 if (pte_user_accessible_page(pte)) {221214 page_table_check_set(mm, addr, pte_pfn(pte),222215 PAGE_SIZE >> PAGE_SHIFT,···221228void __page_table_check_pmd_set(struct mm_struct *mm, unsigned long addr,222229 pmd_t *pmdp, pmd_t pmd)223230{224224- pmd_t old_pmd;225225-226231 if (&init_mm == mm)227232 return;228233229229- old_pmd = *pmdp;230230- if (pmd_user_accessible_page(old_pmd)) {231231- page_table_check_clear(mm, addr, pmd_pfn(old_pmd),232232- PMD_PAGE_SIZE >> PAGE_SHIFT);233233- }234234-234234+ __page_table_check_pmd_clear(mm, addr, *pmdp);235235 if (pmd_user_accessible_page(pmd)) {236236 page_table_check_set(mm, addr, pmd_pfn(pmd),237237 PMD_PAGE_SIZE >> PAGE_SHIFT,···236250void __page_table_check_pud_set(struct mm_struct *mm, unsigned long addr,237251 pud_t *pudp, pud_t pud)238252{239239- pud_t old_pud;240240-241253 if (&init_mm == mm)242254 return;243255244244- old_pud = *pudp;245245- if (pud_user_accessible_page(old_pud)) {246246- page_table_check_clear(mm, addr, pud_pfn(old_pud),247247- PUD_PAGE_SIZE >> PAGE_SHIFT);248248- }249249-256256+ __page_table_check_pud_clear(mm, addr, *pudp);250257 if (pud_user_accessible_page(pud)) {251258 page_table_check_set(mm, addr, pud_pfn(pud),252259 PUD_PAGE_SIZE >> PAGE_SHIFT,···247268 }248269}249270EXPORT_SYMBOL(__page_table_check_pud_set);271271+272272+void __page_table_check_pte_clear_range(struct mm_struct *mm,273273+ unsigned long addr,274274+ pmd_t pmd)275275+{276276+ if (&init_mm == mm)277277+ return;278278+279279+ if (!pmd_bad(pmd) && !pmd_leaf(pmd)) {280280+ pte_t *ptep = pte_offset_map(&pmd, addr);281281+ unsigned long i;282282+283283+ pte_unmap(ptep);284284+ for (i = 0; i < PTRS_PER_PTE; i++) {285285+ __page_table_check_pte_clear(mm, addr, *ptep);286286+ addr += PAGE_SIZE;287287+ ptep++;288288+ }289289+ }290290+}
+16-7
net/ax25/af_ax25.c
···7777{7878 ax25_dev *ax25_dev;7979 ax25_cb *s;8080+ struct sock *sk;80818182 if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)8283 return;···8685again:8786 ax25_for_each(s, &ax25_list) {8887 if (s->ax25_dev == ax25_dev) {8888+ sk = s->sk;8989+ sock_hold(sk);8990 spin_unlock_bh(&ax25_list_lock);9090- lock_sock(s->sk);9191+ lock_sock(sk);9192 s->ax25_dev = NULL;9292- release_sock(s->sk);9393+ ax25_dev_put(ax25_dev);9494+ release_sock(sk);9395 ax25_disconnect(s, ENETUNREACH);9496 spin_lock_bh(&ax25_list_lock);9595-9797+ sock_put(sk);9698 /* The entry could have been deleted from the9799 * list meanwhile and thus the next pointer is98100 * no longer valid. Play it safe and restart···359355 if (copy_from_user(&ax25_ctl, arg, sizeof(ax25_ctl)))360356 return -EFAULT;361357362362- if ((ax25_dev = ax25_addr_ax25dev(&ax25_ctl.port_addr)) == NULL)363363- return -ENODEV;364364-365358 if (ax25_ctl.digi_count > AX25_MAX_DIGIS)366359 return -EINVAL;367360368361 if (ax25_ctl.arg > ULONG_MAX / HZ && ax25_ctl.cmd != AX25_KILL)369362 return -EINVAL;370363364364+ ax25_dev = ax25_addr_ax25dev(&ax25_ctl.port_addr);365365+ if (!ax25_dev)366366+ return -ENODEV;367367+371368 digi.ndigi = ax25_ctl.digi_count;372369 for (k = 0; k < digi.ndigi; k++)373370 digi.calls[k] = ax25_ctl.digi_addr[k];374371375375- if ((ax25 = ax25_find_cb(&ax25_ctl.source_addr, &ax25_ctl.dest_addr, &digi, ax25_dev->dev)) == NULL)372372+ ax25 = ax25_find_cb(&ax25_ctl.source_addr, &ax25_ctl.dest_addr, &digi, ax25_dev->dev);373373+ if (!ax25) {374374+ ax25_dev_put(ax25_dev);376375 return -ENOTCONN;376376+ }377377378378 switch (ax25_ctl.cmd) {379379 case AX25_KILL:···444436 }445437446438out_put:439439+ ax25_dev_put(ax25_dev);447440 ax25_cb_put(ax25);448441 return ret;449442
···1322132213231323 /* skb changing from pure zc to mixed, must charge zc */13241324 if (unlikely(skb_zcopy_pure(skb))) {13251325- if (!sk_wmem_schedule(sk, skb->data_len))13251325+ u32 extra = skb->truesize -13261326+ SKB_TRUESIZE(skb_end_offset(skb));13271327+13281328+ if (!sk_wmem_schedule(sk, extra))13261329 goto wait_for_space;1327133013281328- sk_mem_charge(sk, skb->data_len);13311331+ sk_mem_charge(sk, extra);13291332 skb_shinfo(skb)->flags &= ~SKBFL_PURE_ZEROCOPY;13301333 }13311334
+2
net/ipv4/tcp_input.c
···16601660 (mss != tcp_skb_seglen(skb)))16611661 goto out;1662166216631663+ if (!tcp_skb_can_collapse(prev, skb))16641664+ goto out;16631665 len = skb->len;16641666 pcount = tcp_skb_pcount(skb);16651667 if (tcp_skb_shift(prev, skb, pcount, len))
···981981 int id = HDA_FIXUP_ID_NOT_SET;982982 const char *name = NULL;983983 const char *type = NULL;984984- int vendor, device;984984+ unsigned int vendor, device;985985986986 if (codec->fixup_id != HDA_FIXUP_ID_NOT_SET)987987 return;
+4
sound/pci/hda/hda_codec.c
···30003000{30013001 struct hda_pcm *cpcm;3002300230033003+ /* Skip the shutdown if codec is not registered */30043004+ if (!codec->registered)30053005+ return;30063006+30033007 list_for_each_entry(cpcm, &codec->pcm_list_head, list)30043008 snd_pcm_suspend_all(cpcm->pcm);30053009
···9393 dev_err(&op->dev, "platform_device_alloc() failed\n");94949595 ret = platform_device_add(pdata->codec_device);9696- if (ret)9696+ if (ret) {9797 dev_err(&op->dev, "platform_device_add() failed: %d\n", ret);9898+ platform_device_put(pdata->codec_device);9999+ }9810099101 ret = snd_soc_register_card(card);100100- if (ret)102102+ if (ret) {101103 dev_err(&op->dev, "snd_soc_register_card() failed: %d\n", ret);104104+ platform_device_del(pdata->codec_device);105105+ platform_device_put(pdata->codec_device);106106+ }102107103108 platform_set_drvdata(op, pdata);104104-105109 return ret;110110+106111}107112108113static int pcm030_fabric_remove(struct platform_device *op)
+25-1
sound/soc/generic/simple-card.c
···2828 .hw_params = asoc_simple_hw_params,2929};30303131+static int asoc_simple_parse_platform(struct device_node *node,3232+ struct snd_soc_dai_link_component *dlc)3333+{3434+ struct of_phandle_args args;3535+ int ret;3636+3737+ if (!node)3838+ return 0;3939+4040+ /*4141+ * Get node via "sound-dai = <&phandle port>"4242+ * it will be used as xxx_of_node on soc_bind_dai_link()4343+ */4444+ ret = of_parse_phandle_with_args(node, DAI, CELL, 0, &args);4545+ if (ret)4646+ return ret;4747+4848+ /* dai_name is not required and may not exist for plat component */4949+5050+ dlc->of_node = args.np;5151+5252+ return 0;5353+}5454+3155static int asoc_simple_parse_dai(struct device_node *node,3256 struct snd_soc_dai_link_component *dlc,3357 int *is_single_link)···313289 if (ret < 0)314290 goto dai_link_of_err;315291316316- ret = asoc_simple_parse_dai(plat, platforms, NULL);292292+ ret = asoc_simple_parse_platform(plat, platforms);317293 if (ret < 0)318294 goto dai_link_of_err;319295
+1-1
sound/soc/mediatek/Kconfig
···216216217217config SND_SOC_MT8195_MT6359_RT1011_RT5682218218 tristate "ASoC Audio driver for MT8195 with MT6359 RT1011 RT5682 codec"219219- depends on I2C219219+ depends on I2C && GPIOLIB220220 depends on SND_SOC_MT8195 && MTK_PMIC_WRAP221221 select SND_SOC_MT6359222222 select SND_SOC_RT1011
+5-2
sound/soc/qcom/qdsp6/q6apm-dai.c
···308308 struct snd_pcm_runtime *runtime = substream->runtime;309309 struct q6apm_dai_rtd *prtd = runtime->private_data;310310311311- q6apm_graph_stop(prtd->graph);312312- q6apm_unmap_memory_regions(prtd->graph, substream->stream);311311+ if (prtd->state) { /* only stop graph that is started */312312+ q6apm_graph_stop(prtd->graph);313313+ q6apm_unmap_memory_regions(prtd->graph, substream->stream);314314+ }315315+313316 q6apm_graph_close(prtd->graph);314317 prtd->graph = NULL;315318 kfree(prtd);
···316316 if (sign_bit)317317 mask = BIT(sign_bit + 1) - 1;318318319319- val = ((ucontrol->value.integer.value[0] + min) & mask);319319+ if (ucontrol->value.integer.value[0] < 0)320320+ return -EINVAL;321321+ val = ucontrol->value.integer.value[0];322322+ if (mc->platform_max && val > mc->platform_max)323323+ return -EINVAL;324324+ if (val > max - min)325325+ return -EINVAL;326326+ val = (val + min) & mask;320327 if (invert)321328 val = max - val;322329 val_mask = mask << shift;323330 val = val << shift;324331 if (snd_soc_volsw_is_stereo(mc)) {325325- val2 = ((ucontrol->value.integer.value[1] + min) & mask);332332+ if (ucontrol->value.integer.value[1] < 0)333333+ return -EINVAL;334334+ val2 = ucontrol->value.integer.value[1];335335+ if (mc->platform_max && val2 > mc->platform_max)336336+ return -EINVAL;337337+ if (val2 > max - min)338338+ return -EINVAL;339339+ val2 = (val2 + min) & mask;326340 if (invert)327341 val2 = max - val2;328342 if (reg == reg2) {···423409 int err = 0;424410 unsigned int val, val_mask;425411412412+ if (ucontrol->value.integer.value[0] < 0)413413+ return -EINVAL;414414+ val = ucontrol->value.integer.value[0];415415+ if (mc->platform_max && val > mc->platform_max)416416+ return -EINVAL;417417+ if (val > max - min)418418+ return -EINVAL;426419 val_mask = mask << shift;427427- val = (ucontrol->value.integer.value[0] + min) & mask;420420+ val = (val + min) & mask;428421 val = val << shift;429422430423 err = snd_soc_component_update_bits(component, reg, val_mask, val);···879858 long val = ucontrol->value.integer.value[0];880859 unsigned int i;881860861861+ if (val < mc->min || val > mc->max)862862+ return -EINVAL;882863 if (invert)883864 val = max - val;884865 val &= mask;
+13-7
sound/soc/soc-pcm.c
···4646 snd_pcm_stream_lock_irq(snd_soc_dpcm_get_substream(rtd, stream));4747}48484949-#define snd_soc_dpcm_stream_lock_irqsave(rtd, stream, flags) \5050- snd_pcm_stream_lock_irqsave(snd_soc_dpcm_get_substream(rtd, stream), flags)4949+#define snd_soc_dpcm_stream_lock_irqsave_nested(rtd, stream, flags) \5050+ snd_pcm_stream_lock_irqsave_nested(snd_soc_dpcm_get_substream(rtd, stream), flags)51515252static inline void snd_soc_dpcm_stream_unlock_irq(struct snd_soc_pcm_runtime *rtd,5353 int stream)···12681268void dpcm_be_disconnect(struct snd_soc_pcm_runtime *fe, int stream)12691269{12701270 struct snd_soc_dpcm *dpcm, *d;12711271+ LIST_HEAD(deleted_dpcms);1271127212721273 snd_soc_dpcm_mutex_assert_held(fe);12731274···12881287 /* BEs still alive need new FE */12891288 dpcm_be_reparent(fe, dpcm->be, stream);1290128912911291- dpcm_remove_debugfs_state(dpcm);12921292-12931290 list_del(&dpcm->list_be);12941294- list_del(&dpcm->list_fe);12951295- kfree(dpcm);12911291+ list_move(&dpcm->list_fe, &deleted_dpcms);12961292 }12971293 snd_soc_dpcm_stream_unlock_irq(fe, stream);12941294+12951295+ while (!list_empty(&deleted_dpcms)) {12961296+ dpcm = list_first_entry(&deleted_dpcms, struct snd_soc_dpcm,12971297+ list_fe);12981298+ list_del(&dpcm->list_fe);12991299+ dpcm_remove_debugfs_state(dpcm);13001300+ kfree(dpcm);13011301+ }12981302}1299130313001304/* get BE for DAI widget and stream */···21002094 be = dpcm->be;21012095 be_substream = snd_soc_dpcm_get_substream(be, stream);2102209621032103- snd_soc_dpcm_stream_lock_irqsave(be, stream, flags);20972097+ snd_soc_dpcm_stream_lock_irqsave_nested(be, stream, flags);2104209821052099 /* is this op for this BE ? */21062100 if (!snd_soc_dpcm_be_can_update(fe, be, stream))
+24-3
sound/soc/xilinx/xlnx_formatter_pcm.c
···3737#define XLNX_AUD_XFER_COUNT 0x283838#define XLNX_AUD_CH_STS_START 0x2C3939#define XLNX_BYTES_PER_CH 0x444040+#define XLNX_AUD_ALIGN_BYTES 6440414142#define AUD_STS_IOC_IRQ_MASK BIT(31)4243#define AUD_STS_CH_STS_MASK BIT(29)···369368 snd_soc_set_runtime_hwparams(substream, &xlnx_pcm_hardware);370369 runtime->private_data = stream_data;371370372372- /* Resize the period size divisible by 64 */371371+ /* Resize the period bytes as divisible by 64 */373372 err = snd_pcm_hw_constraint_step(runtime, 0,374374- SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 64);373373+ SNDRV_PCM_HW_PARAM_PERIOD_BYTES,374374+ XLNX_AUD_ALIGN_BYTES);375375 if (err) {376376 dev_err(component->dev,377377- "unable to set constraint on period bytes\n");377377+ "Unable to set constraint on period bytes\n");378378+ return err;379379+ }380380+381381+ /* Resize the buffer bytes as divisible by 64 */382382+ err = snd_pcm_hw_constraint_step(runtime, 0,383383+ SNDRV_PCM_HW_PARAM_BUFFER_BYTES,384384+ XLNX_AUD_ALIGN_BYTES);385385+ if (err) {386386+ dev_err(component->dev,387387+ "Unable to set constraint on buffer bytes\n");388388+ return err;389389+ }390390+391391+ /* Set periods as integer multiple */392392+ err = snd_pcm_hw_constraint_integer(runtime,393393+ SNDRV_PCM_HW_PARAM_PERIODS);394394+ if (err < 0) {395395+ dev_err(component->dev,396396+ "Unable to set constraint on periods to be integer\n");378397 return err;379398 }380399
···2828// 5. We can read keycode from same /dev/lirc device29293030#include <linux/bpf.h>3131-#include <linux/lirc.h>3231#include <linux/input.h>3332#include <errno.h>3433#include <stdio.h>
+1-1
tools/testing/selftests/cpufreq/main.sh
···194194195195# Run requested functions196196clear_dumps $OUTFILE197197-do_test >> $OUTFILE.txt197197+do_test | tee -a $OUTFILE.txt198198dmesg_dumps $OUTFILE
+1-1
tools/testing/selftests/exec/Makefile
···5566TEST_PROGS := binfmt_script non-regular77TEST_GEN_PROGS := execveat load_address_4096 load_address_2097152 load_address_1677721688-TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir pipe88+TEST_GEN_FILES := execveat.symlink execveat.denatured script subdir99# Makefile is a run-time dependency, since it's accessed by the execveat test1010TEST_FILES := Makefile1111
+2-2
tools/testing/selftests/futex/Makefile
···1111 @for DIR in $(SUBDIRS); do \1212 BUILD_TARGET=$(OUTPUT)/$$DIR; \1313 mkdir $$BUILD_TARGET -p; \1414- make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\1414+ $(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@;\1515 if [ -e $$DIR/$(TEST_PROGS) ]; then \1616 rsync -a $$DIR/$(TEST_PROGS) $$BUILD_TARGET/; \1717 fi \···3232 @for DIR in $(SUBDIRS); do \3333 BUILD_TARGET=$(OUTPUT)/$$DIR; \3434 mkdir $$BUILD_TARGET -p; \3535- make OUTPUT=$$BUILD_TARGET -C $$DIR $@;\3535+ $(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@;\3636 done3737endef
+3-1
tools/testing/selftests/kselftest_harness.h
···877877 }878878879879 t->timed_out = true;880880- kill(t->pid, SIGKILL);880880+ // signal process group881881+ kill(-(t->pid), SIGKILL);881882}882883883884void __wait_for_test(struct __test_metadata *t)···988987 ksft_print_msg("ERROR SPAWNING TEST CHILD\n");989988 t->passed = 0;990989 } else if (t->pid == 0) {990990+ setpgrp();991991 t->fn(t, variant);992992 if (t->skip)993993 _exit(255);
···2727 net6_port_net6_port net_port_mac_proto_net"
2828
2929 # Reported bugs, also described by TYPE_ variables below
3030-BUGS="flush_remove_add"
3030+BUGS="flush_remove_add reload"
3131
3232 # List of possible paths to pktgen script from kernel tree for performance tests
3333 PKTGEN_SCRIPT_PATHS="
···352352 # display	display text for test report
353353 TYPE_flush_remove_add="
354354 display		Add two elements, flush, re-add
355355+"
356356+
357357+TYPE_reload="
358358+display		net,mac with reload
359359+type_spec	ipv4_addr . ether_addr
360360+chain_spec	ip daddr . ether saddr
361361+dst		addr4
362362+src		mac
363363+start		1
364364+count		1
365365+src_delta	2000
366366+tools		sendip nc bash
367367+proto		udp
368368+
369369+race_repeat	0
370370+
371371+perf_duration	0
355372 "
356373
357374 # Set template for all tests, types and rules are filled in depending on test
···14871470 	nft flush set t s 2>/dev/null || return 1
14881471 	nft add element t s ${elem2} 2>/dev/null || return 1
14891472 	done
14731473+	nft flush ruleset
14741474+}
14751475+
14761476+# - add ranged element, check that packets match it
14771477+# - reload the set, check packets still match
14781478+test_bug_reload() {
14791479+	setup veth send_"${proto}" set || return ${KSELFTEST_SKIP}
14801480+	rstart=${start}
14811481+
14821482+	range_size=1
14831483+	for i in $(seq "${start}" $((start + count))); do
14841484+		end=$((start + range_size))
14851485+
14861486+		# Avoid negative or zero-sized port ranges
14871487+		if [ $((end / 65534)) -gt $((start / 65534)) ]; then
14881488+			start=${end}
14891489+			end=$((end + 1))
14901490+		fi
14911491+		srcstart=$((start + src_delta))
14921492+		srcend=$((end + src_delta))
14931493+
14941494+		add "$(format)" || return 1
14951495+		range_size=$((range_size + 1))
14961496+		start=$((end + range_size))
14971497+	done
14981498+
14991499+	# check that the kernel allocates the pcpu scratch map
15001500+	# for a reload with no element add/delete
15011501+	( echo flush set inet filter test ;
15021502+	  nft list set inet filter test ) | nft -f -
15031503+
15041504+	start=${rstart}
15051505+	range_size=1
15061506+
15071507+	for i in $(seq "${start}" $((start + count))); do
15081508+		end=$((start + range_size))
15091509+
15101510+		# Avoid negative or zero-sized port ranges
15111511+		if [ $((end / 65534)) -gt $((start / 65534)) ]; then
15121512+			start=${end}
15131513+			end=$((end + 1))
15141514+		fi
15151515+		srcstart=$((start + src_delta))
15161516+		srcend=$((end + src_delta))
15171517+
15181518+		for j in $(seq ${start} $((range_size / 2 + 1)) ${end}); do
15191519+			send_match "${j}" $((j + src_delta)) || return 1
15201520+		done
15211521+
15221522+		range_size=$((range_size + 1))
15231523+		start=$((end + range_size))
15241524+	done
15251525+
14901526 	nft flush ruleset
14911527 }
14921528
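The reload step in the middle of `test_bug_reload` leans on a small nft idiom worth spelling out: list the live set, prepend a flush of that same set, and feed both back into `nft -f -`, so the flush and re-add are applied in a single transaction. The kernel then clones the set with no element-level additions or deletions, which is exactly the path that previously failed to allocate the per-cpu scratch map. As a CLI fragment (requires root and a loaded `inet filter` table with set `test`):

```
( echo "flush set inet filter test"
  nft list set inet filter test ) | nft -f -
```

Because `nft list set` prints a complete `add`-able definition, the pipeline is a self-contained "reload with identical contents".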
+152
tools/testing/selftests/netfilter/nft_nat.sh
···899899 	ip netns exec "$ns0" nft delete table $family nat
900900 }
901901
902902+test_stateless_nat_ip()
903903+{
904904+	local lret=0
905905+
906906+	ip netns exec "$ns0" sysctl net.ipv4.conf.veth0.forwarding=1 > /dev/null
907907+	ip netns exec "$ns0" sysctl net.ipv4.conf.veth1.forwarding=1 > /dev/null
908908+
909909+	ip netns exec "$ns2" ping -q -c 1 10.0.1.99 > /dev/null # ping ns2->ns1
910910+	if [ $? -ne 0 ] ; then
911911+		echo "ERROR: cannot ping $ns1 from $ns2 before loading stateless rules"
912912+		return 1
913913+	fi
914914+
915915+ip netns exec "$ns0" nft -f /dev/stdin <<EOF
916916+table ip stateless {
917917+	map xlate_in {
918918+		typeof meta iifname . ip saddr . ip daddr : ip daddr
919919+		elements = {
920920+			"veth1" . 10.0.2.99 . 10.0.1.99 : 10.0.2.2,
921921+		}
922922+	}
923923+	map xlate_out {
924924+		typeof meta iifname . ip saddr . ip daddr : ip daddr
925925+		elements = {
926926+			"veth0" . 10.0.1.99 . 10.0.2.2 : 10.0.2.99
927927+		}
928928+	}
929929+
930930+	chain prerouting {
931931+		type filter hook prerouting priority -400; policy accept;
932932+		ip saddr set meta iifname . ip saddr . ip daddr map @xlate_in
933933+		ip daddr set meta iifname . ip saddr . ip daddr map @xlate_out
934934+	}
935935+}
936936+EOF
937937+	if [ $? -ne 0 ]; then
938938+		echo "SKIP: Could not add ip stateless rules"
939939+		return $ksft_skip
940940+	fi
941941+
942942+	reset_counters
943943+
944944+	ip netns exec "$ns2" ping -q -c 1 10.0.1.99 > /dev/null # ping ns2->ns1
945945+	if [ $? -ne 0 ] ; then
946946+		echo "ERROR: cannot ping $ns1 from $ns2 with stateless rules"
947947+		lret=1
948948+	fi
949949+
950950+	# ns1 should have seen packets from .2.2, due to stateless rewrite.
951951+	expect="packets 1 bytes 84"
952952+	cnt=$(ip netns exec "$ns1" nft list counter inet filter ns0insl | grep -q "$expect")
953953+	if [ $? -ne 0 ]; then
954954+		bad_counter "$ns1" ns0insl "$expect" "test_stateless 1"
955955+		lret=1
956956+	fi
957957+
958958+	for dir in "in" "out" ; do
959959+		cnt=$(ip netns exec "$ns2" nft list counter inet filter ns1${dir} | grep -q "$expect")
960960+		if [ $? -ne 0 ]; then
961961+			bad_counter "$ns2" ns1$dir "$expect" "test_stateless 2"
962962+			lret=1
963963+		fi
964964+	done
965965+
966966+	# ns1 should not have seen packets from ns2, due to masquerade
967967+	expect="packets 0 bytes 0"
968968+	for dir in "in" "out" ; do
969969+		cnt=$(ip netns exec "$ns1" nft list counter inet filter ns2${dir} | grep -q "$expect")
970970+		if [ $? -ne 0 ]; then
971971+			bad_counter "$ns1" ns0$dir "$expect" "test_stateless 3"
972972+			lret=1
973973+		fi
974974+
975975+		cnt=$(ip netns exec "$ns0" nft list counter inet filter ns1${dir} | grep -q "$expect")
976976+		if [ $? -ne 0 ]; then
977977+			bad_counter "$ns0" ns1$dir "$expect" "test_stateless 4"
978978+			lret=1
979979+		fi
980980+	done
981981+
982982+	reset_counters
983983+
984984+	socat -h > /dev/null 2>&1
985985+	if [ $? -ne 0 ];then
986986+		echo "SKIP: Could not run stateless nat frag test without socat tool"
987987+		if [ $lret -eq 0 ]; then
988988+			return $ksft_skip
989989+		fi
990990+
991991+		ip netns exec "$ns0" nft delete table ip stateless
992992+		return $lret
993993+	fi
994994+
995995+	local tmpfile=$(mktemp)
996996+	dd if=/dev/urandom of=$tmpfile bs=4096 count=1 2>/dev/null
997997+
998998+	local outfile=$(mktemp)
999999+	ip netns exec "$ns1" timeout 3 socat -u UDP4-RECV:4233 OPEN:$outfile < /dev/null &
10001000+	sc_r=$!
10011001+
10021002+	sleep 1
10031003+	# send a udp payload larger than the MTU -> ip fragmentation
10041004+	ip netns exec "$ns2" timeout 3 socat - UDP4-SENDTO:"10.0.1.99:4233" < "$tmpfile" > /dev/null
10051005+	if [ $? -ne 0 ] ; then
10061006+		echo "ERROR: failed to test udp $ns1 to $ns2 with stateless ip nat" 1>&2
10071007+		lret=1
10081008+	fi
10091009+
10101010+	wait
10111011+
10121012+	cmp "$tmpfile" "$outfile"
10131013+	if [ $? -ne 0 ]; then
10141014+		ls -l "$tmpfile" "$outfile"
10151015+		echo "ERROR: in and output file mismatch when checking udp with stateless nat" 1>&2
10161016+		lret=1
10171017+	fi
10181018+
10191019+	rm -f "$tmpfile" "$outfile"
10201020+
10211021+	# ns1 should have seen packets from .2.2, due to stateless rewrite.
10221022+	expect="packets 3 bytes 4164"
10231023+	cnt=$(ip netns exec "$ns1" nft list counter inet filter ns0insl | grep -q "$expect")
10241024+	if [ $? -ne 0 ]; then
10251025+		bad_counter "$ns1" ns0insl "$expect" "test_stateless 5"
10261026+		lret=1
10271027+	fi
10281028+
10291029+	ip netns exec "$ns0" nft delete table ip stateless
10301030+	if [ $? -ne 0 ]; then
10311031+		echo "ERROR: Could not delete table ip stateless" 1>&2
10321032+		lret=1
10331033+	fi
10341034+
10351035+	test $lret -eq 0 && echo "PASS: IP stateless NAT for $ns2"
10361036+
10371037+	return $lret
10381038+}
10391039+
9021040 # ip netns exec "$ns0" ping -c 1 -q 10.0.$i.99
9031041 for i in 0 1 2; do
9041042 ip netns exec ns$i-$sfx nft -f /dev/stdin <<EOF
···1103965 EOF
1104966 done
1105967
968968+# special case for stateless nat check, counter needs to
969969+# be done before (input) ip defragmentation
970970+ip netns exec ns1-$sfx nft -f /dev/stdin <<EOF
971971+table inet filter {
972972+	counter ns0insl {}
973973+
974974+	chain pre {
975975+		type filter hook prerouting priority -400; policy accept;
976976+		ip saddr 10.0.2.2 counter name "ns0insl"
977977+	}
978978+}
979979+EOF
980980+
1106981 sleep 3
1107982 # test basic connectivity
1108983 for i in 1 2; do
···11701019 $test_inet_nat && test_redirect6 inet
11711020
11721021 test_port_shadowing
10221022+test_stateless_nat_ip
11731023
11741024 if [ $ret -ne 0 ];then
11751025 	echo -n "FAIL: "