···
 	Usage: required
 	Definition: See soc/fsl/qman.txt and soc/fsl/bman.txt

+- fsl,erratum-a050385
+	Usage: optional
+	Value type: boolean
+	Definition: A boolean property. Indicates the presence of
+		erratum A050385, which indicates that DMA transactions that
+		are split can result in an FMan lock.
+
 =============================================================================
 FMan MURAM Node
+8
Documentation/filesystems/porting.rst
···
 d_alloc_pseudo() is internal-only; uses outside of alloc_file_pseudo() are
 very suspect (and won't work in modules). Such uses are very likely to
 be misspelled d_alloc_anon().
+
+---
+
+**mandatory**
+
+[should've been added in 2016] stale comment in finish_open() notwithstanding,
+failure exits in ->atomic_open() instances should *NOT* fput() the file,
+no matter what. Everything is handled by the caller.
+1 -1
Documentation/kbuild/kbuild.rst
···
 KBUILD_EXTRA_SYMBOLS
 --------------------
 For modules that use symbols from other modules.
-See more details in modules.txt.
+See more details in modules.rst.

 ALLSOURCE_ARCHS
 ---------------
+1 -1
Documentation/kbuild/kconfig-macro-language.rst
···
 	def_bool y

 Then, Kconfig moves onto the evaluation stage to resolve inter-symbol
-dependency as explained in kconfig-language.txt.
+dependency as explained in kconfig-language.rst.


 Variables
+3 -3
Documentation/kbuild/makefiles.rst
···
     $(KBUILD_AFLAGS_MODULE) is used to add arch-specific options that
     are used for assembler.

-    From commandline AFLAGS_MODULE shall be used (see kbuild.txt).
+    From commandline AFLAGS_MODULE shall be used (see kbuild.rst).

 KBUILD_CFLAGS_KERNEL
     $(CC) options specific for built-in
···

     $(KBUILD_CFLAGS_MODULE) is used to add arch-specific options that
     are used for $(CC).
-    From commandline CFLAGS_MODULE shall be used (see kbuild.txt).
+    From commandline CFLAGS_MODULE shall be used (see kbuild.rst).

 KBUILD_LDFLAGS_MODULE
     Options for $(LD) when linking modules
···
     $(KBUILD_LDFLAGS_MODULE) is used to add arch-specific options
     used when linking modules. This is often a linker script.

-    From commandline LDFLAGS_MODULE shall be used (see kbuild.txt).
+    From commandline LDFLAGS_MODULE shall be used (see kbuild.rst).

 KBUILD_LDS
+2 -2
Documentation/kbuild/modules.rst
···

 The syntax of the Module.symvers file is::

-	<CRC> <Symbol> <Namespace> <Module> <Export Type>
+	<CRC> <Symbol> <Module> <Export Type> <Namespace>

-	0xe1cc2a05 usb_stor_suspend USB_STORAGE drivers/usb/storage/usb-storage EXPORT_SYMBOL_GPL
+	0xe1cc2a05 usb_stor_suspend drivers/usb/storage/usb-storage EXPORT_SYMBOL_GPL USB_STORAGE

 The fields are separated by tabs and values may be empty (e.g.
 if no namespace is defined for an exported symbol).
···
     # Delete a snapshot using:
     $ devlink region del pci/0000:00:05.0/cr-space snapshot 1

-    # Trigger (request) a snapshot be taken:
-    $ devlink region trigger pci/0000:00:05.0/cr-space
-
     # Dump a snapshot:
     $ devlink region dump pci/0000:00:05.0/fw-health snapshot 1
     0000000000000000 0014 95dc 0014 9514 0035 1670 0034 db30
+3 -3
Documentation/networking/net_failover.rst
···
 ========

 The net_failover driver provides an automated failover mechanism via APIs
-to create and destroy a failover master netdev and mananges a primary and
+to create and destroy a failover master netdev and manages a primary and
 standby slave netdevs that get registered via the generic failover
-infrastructrure.
+infrastructure.

 The failover netdev acts a master device and controls 2 slave devices. The
 original paravirtual interface is registered as 'standby' slave netdev and
···
 =============================================

 net_failover enables hypervisor controlled accelerated datapath to virtio-net
-enabled VMs in a transparent manner with no/minimal guest userspace chanages.
+enabled VMs in a transparent manner with no/minimal guest userspace changes.

 To support this, the hypervisor needs to enable VIRTIO_NET_F_STANDBY
 feature on the virtio-net interface and assign the same MAC address to both
+1 -1
Documentation/networking/rds.txt
···
 	set SO_RDS_TRANSPORT on a socket for which the transport has
 	been previously attached explicitly (by SO_RDS_TRANSPORT) or
 	implicitly (via bind(2)) will return an error of EOPNOTSUPP.
-	An attempt to set SO_RDS_TRANSPPORT to RDS_TRANS_NONE will
+	An attempt to set SO_RDS_TRANSPORT to RDS_TRANS_NONE will
 	always return EINVAL.

 RDMA for RDS
···
 	help
 	  Support for ARC HS38x Cores based on ARCv2 ISA
 	  The notable features are:
-	    - SMP configurations of upto 4 core with coherency
+	    - SMP configurations of up to 4 cores with coherency
 	    - Optional L2 Cache and IO-Coherency
 	    - Revised Interrupt Architecture (multiple priorites, reg banks,
 		auto stack switch, auto regfile save/restore)
···
 	help
 	  In SMP configuration cores can be configured as Halt-on-reset
 	  or they could all start at same time. For Halt-on-reset, non
-	  masters are parked until Master kicks them so they can start of
+	  masters are parked until Master kicks them so they can start off
 	  at designated entry point. For other case, all jump to common
 	  entry point and spin wait for Master's signal.
-2
arch/arc/configs/nps_defconfig
···
 CONFIG_MODULE_FORCE_LOAD=y
 CONFIG_MODULE_UNLOAD=y
 # CONFIG_BLK_DEV_BSG is not set
-# CONFIG_IOSCHED_DEADLINE is not set
-# CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARC_PLAT_EZNPS=y
 CONFIG_SMP=y
 CONFIG_NR_CPUS=4096
-2
arch/arc/configs/nsimosci_defconfig
···
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
 # CONFIG_BLK_DEV_BSG is not set
-# CONFIG_IOSCHED_DEADLINE is not set
-# CONFIG_IOSCHED_CFQ is not set
 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci"
 # CONFIG_COMPACTION is not set
 CONFIG_NET=y
-2
arch/arc/configs/nsimosci_hs_defconfig
···
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
 # CONFIG_BLK_DEV_BSG is not set
-# CONFIG_IOSCHED_DEADLINE is not set
-# CONFIG_IOSCHED_CFQ is not set
 CONFIG_ISA_ARCV2=y
 CONFIG_ARC_BUILTIN_DTB_NAME="nsimosci_hs"
 # CONFIG_COMPACTION is not set
-2
arch/arc/configs/nsimosci_hs_smp_defconfig
···
 CONFIG_KPROBES=y
 CONFIG_MODULES=y
 # CONFIG_BLK_DEV_BSG is not set
-# CONFIG_IOSCHED_DEADLINE is not set
-# CONFIG_IOSCHED_CFQ is not set
 CONFIG_ISA_ARCV2=y
 CONFIG_SMP=y
 # CONFIG_ARC_TIMERS_64BIT is not set
···
 .endm

 #define ASM_NL		 `	/* use '`' to mark new line in macro */
+#define __ALIGN		.align 4
+#define __ALIGN_STR	__stringify(__ALIGN)

 /* annotation for data we want in DCCM - if enabled in .config */
 .macro ARCFP_DATA nm
···
 }
 #endif

+/*
+ * The number of CPUs online, not counting this CPU (which may not be
+ * fully online and so not counted in num_online_cpus()).
+ */
+static inline unsigned int num_other_online_cpus(void)
+{
+	unsigned int this_cpu_online = cpu_online(smp_processor_id());
+
+	return num_online_cpus() - this_cpu_online;
+}
+
 void smp_send_stop(void)
 {
 	unsigned long timeout;

-	if (num_online_cpus() > 1) {
+	if (num_other_online_cpus()) {
 		cpumask_t mask;

 		cpumask_copy(&mask, cpu_online_mask);
···

 	/* Wait up to one second for other CPUs to stop */
 	timeout = USEC_PER_SEC;
-	while (num_online_cpus() > 1 && timeout--)
+	while (num_other_online_cpus() && timeout--)
 		udelay(1);

-	if (num_online_cpus() > 1)
+	if (num_other_online_cpus())
 		pr_warn("SMP: failed to stop secondary CPUs %*pbl\n",
 			cpumask_pr_args(cpu_online_mask));
···

 	cpus_stopped = 1;

-	if (num_online_cpus() == 1) {
+	/*
+	 * If this cpu is the only one alive at this point in time, online or
+	 * not, there are no stop messages to be sent around, so just back out.
+	 */
+	if (num_other_online_cpus() == 0) {
 		sdei_mask_local_cpu();
 		return;
 	}
···
 	cpumask_copy(&mask, cpu_online_mask);
 	cpumask_clear_cpu(smp_processor_id(), &mask);

-	atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1);
+	atomic_set(&waiting_for_crash_ipi, num_other_online_cpus());

 	pr_crit("SMP: stopping secondary CPUs\n");
 	smp_cross_call(&mask, IPI_CPU_CRASH_STOP);
···
 	 * If we're configured to take boot arguments from DT, look for those
 	 * now.
 	 */
-	if (IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_DTB))
+	if (IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_DTB) ||
+	    IS_ENABLED(CONFIG_MIPS_CMDLINE_DTB_EXTEND))
 		of_scan_flat_dt(bootcmdline_scan_chosen, &dt_bootargs);
 #endif
···
 	unsigned long k_cur;
 	phys_addr_t pa = __pa(kasan_early_shadow_page);

-	if (!early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
-		int ret = kasan_init_shadow_page_tables(k_start, k_end);
-
-		if (ret)
-			panic("kasan: kasan_init_shadow_page_tables() failed");
-	}
 	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
 		pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
 		pte_t *ptep = pte_offset_kernel(pmd, k_cur);
···
 	int ret;
 	struct memblock_region *reg;

-	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE)) {
+	if (early_mmu_has_feature(MMU_FTR_HPTE_TABLE) ||
+	    IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
 		ret = kasan_init_shadow_page_tables(KASAN_SHADOW_START, KASAN_SHADOW_END);

 		if (ret)
+17 -1
arch/s390/kvm/kvm-s390.c
···
 	/* Initial reset is a superset of the normal reset */
 	kvm_arch_vcpu_ioctl_normal_reset(vcpu);

-	/* this equals initial cpu reset in pop, but we don't switch to ESA */
+	/*
+	 * This equals initial cpu reset in pop, but we don't switch to ESA.
+	 * We do not only reset the internal data, but also ...
+	 */
 	vcpu->arch.sie_block->gpsw.mask = 0;
 	vcpu->arch.sie_block->gpsw.addr = 0;
 	kvm_s390_set_prefix(vcpu, 0);
···
 	memset(vcpu->arch.sie_block->gcr, 0, sizeof(vcpu->arch.sie_block->gcr));
 	vcpu->arch.sie_block->gcr[0] = CR0_INITIAL_MASK;
 	vcpu->arch.sie_block->gcr[14] = CR14_INITIAL_MASK;
+
+	/* ... the data in sync regs */
+	memset(vcpu->run->s.regs.crs, 0, sizeof(vcpu->run->s.regs.crs));
+	vcpu->run->s.regs.ckc = 0;
+	vcpu->run->s.regs.crs[0] = CR0_INITIAL_MASK;
+	vcpu->run->s.regs.crs[14] = CR14_INITIAL_MASK;
+	vcpu->run->psw_addr = 0;
+	vcpu->run->psw_mask = 0;
+	vcpu->run->s.regs.todpr = 0;
+	vcpu->run->s.regs.cputm = 0;
+	vcpu->run->s.regs.ckc = 0;
+	vcpu->run->s.regs.pp = 0;
+	vcpu->run->s.regs.gbea = 1;
 	vcpu->run->s.regs.fpc = 0;
 	vcpu->arch.sie_block->gbea = 1;
 	vcpu->arch.sie_block->pp = 0;
···
 avx512_supported :=$(call as-instr,vpmovm2b %k1$(comma)%zmm5,yes,no)
 sha1_ni_supported :=$(call as-instr,sha1msg1 %xmm0$(comma)%xmm1,yes,no)
 sha256_ni_supported :=$(call as-instr,sha256msg1 %xmm0$(comma)%xmm1,yes,no)
+adx_supported := $(call as-instr,adox %r10$(comma)%r10,yes,no)

 obj-$(CONFIG_CRYPTO_GLUE_HELPER_X86) += glue_helper.o
···
 obj-$(CONFIG_CRYPTO_NHPOLY1305_SSE2) += nhpoly1305-sse2.o
 obj-$(CONFIG_CRYPTO_NHPOLY1305_AVX2) += nhpoly1305-avx2.o
-obj-$(CONFIG_CRYPTO_CURVE25519_X86) += curve25519-x86_64.o
+
+# These modules require the assembler to support ADX.
+ifeq ($(adx_supported),yes)
+	obj-$(CONFIG_CRYPTO_CURVE25519_X86) += curve25519-x86_64.o
+endif

 # These modules require assembler to support AVX.
 ifeq ($(avx_supported),yes)
+7 -10
arch/x86/events/amd/uncore.c
···

 	/*
 	 * NB and Last level cache counters (MSRs) are shared across all cores
-	 * that share the same NB / Last level cache. Interrupts can be directed
-	 * to a single target core, however, event counts generated by processes
-	 * running on other cores cannot be masked out. So we do not support
-	 * sampling and per-thread events.
+	 * that share the same NB / Last level cache. On family 16h and below,
+	 * interrupts can be directed to a single target core, however, event
+	 * counts generated by processes running on other cores cannot be masked
+	 * out. So we do not support sampling and per-thread events via
+	 * CAP_NO_INTERRUPT, and we do not enable counter overflow interrupts:
 	 */
-	if (is_sampling_event(event) || event->attach_state & PERF_ATTACH_TASK)
-		return -EINVAL;
-
-	/* and we do not enable counter overflow interrupts */
 	hwc->config = event->attr.config & AMD64_RAW_EVENT_MASK_NB;
 	hwc->idx = -1;
···
 	.start		= amd_uncore_start,
 	.stop		= amd_uncore_stop,
 	.read		= amd_uncore_read,
-	.capabilities	= PERF_PMU_CAP_NO_EXCLUDE,
+	.capabilities	= PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
 };

 static struct pmu amd_llc_pmu = {
···
 	.start		= amd_uncore_start,
 	.stop		= amd_uncore_stop,
 	.read		= amd_uncore_read,
-	.capabilities	= PERF_PMU_CAP_NO_EXCLUDE,
+	.capabilities	= PERF_PMU_CAP_NO_EXCLUDE | PERF_PMU_CAP_NO_INTERRUPT,
 };

 static struct amd_uncore *amd_uncore_alloc(unsigned int cpu)
-1
arch/x86/include/asm/kvm_emulate.h
···
 	u64 d;
 	unsigned long _eip;
 	struct operand memop;
-	/* Fields above regs are cleared together. */
 	unsigned long _regs[NR_VCPU_REGS];
 	struct operand *memopp;
 	struct fetch_cache fetch;
+8 -6
arch/x86/kernel/apic/vector.c
···
 	bool managed = apicd->is_managed;

 	/*
-	 * This should never happen. Managed interrupts are not
-	 * migrated except on CPU down, which does not involve the
-	 * cleanup vector. But try to keep the accounting correct
-	 * nevertheless.
+	 * Managed interrupts are usually not migrated away
+	 * from an online CPU, but CPU isolation 'managed_irq'
+	 * can make that happen.
+	 * 1) Activation does not take the isolation into account
+	 *    to keep the code simple
+	 * 2) Migration away from an isolated CPU can happen when
+	 *    a non-isolated CPU which is in the calculated
+	 *    affinity mask comes online.
 	 */
-	WARN_ON_ONCE(managed);
-
 	trace_vector_free_moved(apicd->irq, cpu, vector, managed);
 	irq_matrix_free(vector_matrix, cpu, vector, managed);
 	per_cpu(vector_irq, cpu)[vector] = VECTOR_UNUSED;
+5 -4
arch/x86/kernel/cpu/mce/intel.c
···
 		return;

 	if ((val & 3UL) == 1UL) {
-		/* PPIN available but disabled: */
+		/* PPIN locked in disabled mode */
 		return;
 	}

-	/* If PPIN is disabled, but not locked, try to enable: */
-	if (!(val & 3UL)) {
+	/* If PPIN is disabled, try to enable */
+	if (!(val & 2UL)) {
 		wrmsrl_safe(MSR_PPIN_CTL, val | 2UL);
 		rdmsrl_safe(MSR_PPIN_CTL, &val);
 	}

-	if ((val & 3UL) == 2UL)
+	/* Is the enable bit set? */
+	if (val & 2UL)
 		set_cpu_cap(c, X86_FEATURE_INTEL_PPIN);
 	}
 }
+7 -2
arch/x86/kernel/cpu/mce/therm_throt.c
···
 {
 	struct thermal_state *state = &per_cpu(thermal_state, cpu);
 	struct device *dev = get_cpu_device(cpu);
+	u32 l;

-	cancel_delayed_work(&state->package_throttle.therm_work);
-	cancel_delayed_work(&state->core_throttle.therm_work);
+	/* Mask the thermal vector before draining any pending work */
+	l = apic_read(APIC_LVTTHMR);
+	apic_write(APIC_LVTTHMR, l | APIC_LVT_MASKED);
+
+	cancel_delayed_work_sync(&state->package_throttle.therm_work);
+	cancel_delayed_work_sync(&state->core_throttle.therm_work);

 	state->package_throttle.rate_control_active = false;
 	state->core_throttle.rate_control_active = false;
+1 -1
arch/x86/kvm/Kconfig
···
 	depends on (X86_64 && !KASAN) || !COMPILE_TEST
 	depends on EXPERT
 	help
-	  Add -Werror to the build flags for (and only for) i915.ko.
+	  Add -Werror to the build flags for KVM.

 	  If in doubt, say "N".
···
 		return;

 	kvm_vcpu_unmap(vcpu, &vmx->nested.hv_evmcs_map, true);
-	vmx->nested.hv_evmcs_vmptr = -1ull;
+	vmx->nested.hv_evmcs_vmptr = 0;
 	vmx->nested.hv_evmcs = NULL;
 }

···
 	if (!nested_enlightened_vmentry(vcpu, &evmcs_gpa))
 		return 1;

-	if (unlikely(evmcs_gpa != vmx->nested.hv_evmcs_vmptr)) {
+	if (unlikely(!vmx->nested.hv_evmcs ||
+		     evmcs_gpa != vmx->nested.hv_evmcs_vmptr)) {
 		if (!vmx->nested.hv_evmcs)
 			vmx->nested.current_vmptr = -1ull;
+14 -2
arch/x86/kvm/vmx/vmx.c
···
 	kvm_cpu_vmxoff();
 }

+/*
+ * There is no X86_FEATURE for SGX yet, but anyway we need to query CPUID
+ * directly instead of going through cpu_has(), to ensure KVM is trapping
+ * ENCLS whenever it's supported in hardware. It does not matter whether
+ * the host OS supports or has enabled SGX.
+ */
+static bool cpu_has_sgx(void)
+{
+	return cpuid_eax(0) >= 0x12 && (cpuid_eax(0x12) & BIT(0));
+}
+
 static __init int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt,
 				      u32 msr, u32 *result)
 {
···
 			SECONDARY_EXEC_ENABLE_USR_WAIT_PAUSE |
 			SECONDARY_EXEC_PT_USE_GPA |
 			SECONDARY_EXEC_PT_CONCEAL_VMX |
-			SECONDARY_EXEC_ENABLE_VMFUNC |
-			SECONDARY_EXEC_ENCLS_EXITING;
+			SECONDARY_EXEC_ENABLE_VMFUNC;
+		if (cpu_has_sgx())
+			opt2 |= SECONDARY_EXEC_ENCLS_EXITING;
 		if (adjust_vmx_controls(min2, opt2,
 					MSR_IA32_VMX_PROCBASED_CTLS2,
 					&_cpu_based_2nd_exec_control) < 0)
+5 -3
arch/x86/kvm/x86.c
···

 	cpu = get_cpu();
 	policy = cpufreq_cpu_get(cpu);
-	if (policy && policy->cpuinfo.max_freq)
-		max_tsc_khz = policy->cpuinfo.max_freq;
+	if (policy) {
+		if (policy->cpuinfo.max_freq)
+			max_tsc_khz = policy->cpuinfo.max_freq;
+		cpufreq_cpu_put(policy);
+	}
 	put_cpu();
-	cpufreq_cpu_put(policy);
 #endif
 	cpufreq_register_notifier(&kvmclock_cpufreq_notifier_block,
 				  CPUFREQ_TRANSITION_NOTIFIER);
+24 -2
arch/x86/mm/fault.c
···
 	return pmd_k;
 }

-void vmalloc_sync_all(void)
+static void vmalloc_sync(void)
 {
 	unsigned long address;
···
 		}
 		spin_unlock(&pgd_lock);
 	}
+}
+
+void vmalloc_sync_mappings(void)
+{
+	vmalloc_sync();
+}
+
+void vmalloc_sync_unmappings(void)
+{
+	vmalloc_sync();
 }

 /*
···

 #else /* CONFIG_X86_64: */

-void vmalloc_sync_all(void)
+void vmalloc_sync_mappings(void)
 {
+	/*
+	 * 64-bit mappings might allocate new p4d/pud pages
+	 * that need to be propagated to all tasks' PGDs.
+	 */
 	sync_global_pgds(VMALLOC_START & PGDIR_MASK, VMALLOC_END);
+}
+
+void vmalloc_sync_unmappings(void)
+{
+	/*
+	 * Unmappings never allocate or free p4d/pud pages.
+	 * No work is required here.
+	 */
 }

 /*
+18
arch/x86/mm/ioremap.c
···
 	return 0;
 }

+/*
+ * The EFI runtime services data area is not covered by walk_mem_res(), but must
+ * be mapped encrypted when SEV is active.
+ */
+static void __ioremap_check_other(resource_size_t addr, struct ioremap_desc *desc)
+{
+	if (!sev_active())
+		return;
+
+	if (efi_mem_type(addr) == EFI_RUNTIME_SERVICES_DATA)
+		desc->flags |= IORES_MAP_ENCRYPTED;
+}
+
 static int __ioremap_collect_map_flags(struct resource *res, void *arg)
 {
 	struct ioremap_desc *desc = arg;
···
  * To avoid multiple resource walks, this function walks resources marked as
  * IORESOURCE_MEM and IORESOURCE_BUSY and looking for system RAM and/or a
  * resource described not as IORES_DESC_NONE (e.g. IORES_DESC_ACPI_TABLES).
+ *
+ * After that, deal with misc other ranges in __ioremap_check_other() which do
+ * not fall into the above category.
  */
 static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
 				struct ioremap_desc *desc)
···
 	memset(desc, 0, sizeof(struct ioremap_desc));

 	walk_mem_res(start, end, desc, __ioremap_collect_map_flags);
+
+	__ioremap_check_other(addr, desc);
 }

 /*
+1 -1
block/blk-iocost.c
···
 		return false;

 	/* is something in flight? */
-	if (atomic64_read(&iocg->done_vtime) < atomic64_read(&iocg->vtime))
+	if (atomic64_read(&iocg->done_vtime) != atomic64_read(&iocg->vtime))
 		return false;

 	return true;
+22
block/blk-mq-sched.c
···
 	WARN_ON(e && (rq->tag != -1));

 	if (blk_mq_sched_bypass_insert(hctx, !!e, rq)) {
+		/*
+		 * Firstly normal IO request is inserted to scheduler queue or
+		 * sw queue, meantime we add flush request to dispatch queue(
+		 * hctx->dispatch) directly and there is at most one in-flight
+		 * flush request for each hw queue, so it doesn't matter to add
+		 * flush request to tail or front of the dispatch queue.
+		 *
+		 * Secondly in case of NCQ, flush request belongs to non-NCQ
+		 * command, and queueing it will fail when there is any
+		 * in-flight normal IO request(NCQ command). When adding flush
+		 * rq to the front of hctx->dispatch, it is easier to introduce
+		 * extra time to flush rq's latency because of S_SCHED_RESTART
+		 * compared with adding to the tail of dispatch queue, then
+		 * chance of flush merge is increased, and less flush requests
+		 * will be issued to controller. It is observed that ~10% time
+		 * is saved in blktests block/004 on disk attached to AHCI/NCQ
+		 * drive when adding flush rq to the front of hctx->dispatch.
+		 *
+		 * Simply queue flush rq to the front of hctx->dispatch so that
+		 * intensive flush workloads can benefit in case of NCQ HW.
+		 */
+		at_head = (rq->rq_flags & RQF_FLUSH_SEQ) ? true : at_head;
 		blk_mq_request_bypass_insert(rq, at_head, false);
 		goto run;
 	}
+36
block/genhd.c
···
 }
 EXPORT_SYMBOL_GPL(disk_map_sector_rcu);

+/**
+ * disk_has_partitions
+ * @disk: gendisk of interest
+ *
+ * Walk through the partition table and check if valid partition exists.
+ *
+ * CONTEXT:
+ * Don't care.
+ *
+ * RETURNS:
+ * True if the gendisk has at least one valid non-zero size partition.
+ * Otherwise false.
+ */
+bool disk_has_partitions(struct gendisk *disk)
+{
+	struct disk_part_tbl *ptbl;
+	int i;
+	bool ret = false;
+
+	rcu_read_lock();
+	ptbl = rcu_dereference(disk->part_tbl);
+
+	/* Iterate partitions skipping the whole device at index 0 */
+	for (i = 1; i < ptbl->len; i++) {
+		if (rcu_dereference(ptbl->part[i])) {
+			ret = true;
+			break;
+		}
+	}
+
+	rcu_read_unlock();
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(disk_has_partitions);
+
 /*
  * Can be deleted altogether. Later.
  *
+1 -1
drivers/acpi/apei/ghes.c
···
 	 * New allocation must be visible in all pgd before it can be found by
 	 * an NMI allocating from the pool.
 	 */
-	vmalloc_sync_all();
+	vmalloc_sync_mappings();

 	rc = gen_pool_add(ghes_estatus_pool, addr, PAGE_ALIGN(len), -1);
 	if (rc)
···
 #ifdef GENERAL_DEBUG
 #define PRINTK(args...) printk(args)
 #else
-#define PRINTK(args...)
+#define PRINTK(args...) do {} while (0)
 #endif /* GENERAL_DEBUG */

 #ifdef EXTRA_DEBUG
+8 -8
drivers/auxdisplay/Kconfig
···
 	  If unsure, say N.

 config CFAG12864B_RATE
-        int "Refresh rate (hertz)"
+	int "Refresh rate (hertz)"
 	depends on CFAG12864B
 	default "20"
 	---help---
···
 config PANEL_LCD_PIN_E
 	depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO="0"
-        int "Parallel port pin number & polarity connected to the LCD E signal (-17...17) "
+	int "Parallel port pin number & polarity connected to the LCD E signal (-17...17) "
 	range -17 17
 	default 14
 	---help---
···
 config PANEL_LCD_PIN_RS
 	depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO="0"
-        int "Parallel port pin number & polarity connected to the LCD RS signal (-17...17) "
+	int "Parallel port pin number & polarity connected to the LCD RS signal (-17...17) "
 	range -17 17
 	default 17
 	---help---
···
 config PANEL_LCD_PIN_RW
 	depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO="0"
-        int "Parallel port pin number & polarity connected to the LCD RW signal (-17...17) "
+	int "Parallel port pin number & polarity connected to the LCD RW signal (-17...17) "
 	range -17 17
 	default 16
 	---help---
···
 config PANEL_LCD_PIN_SCL
 	depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO!="0"
-        int "Parallel port pin number & polarity connected to the LCD SCL signal (-17...17) "
+	int "Parallel port pin number & polarity connected to the LCD SCL signal (-17...17) "
 	range -17 17
 	default 1
 	---help---
···
 config PANEL_LCD_PIN_SDA
 	depends on PANEL_PROFILE="0" && PANEL_LCD="1" && PANEL_LCD_PROTO!="0"
-        int "Parallel port pin number & polarity connected to the LCD SDA signal (-17...17) "
+	int "Parallel port pin number & polarity connected to the LCD SDA signal (-17...17) "
 	range -17 17
 	default 2
 	---help---
···
 config PANEL_LCD_PIN_BL
 	depends on PANEL_PROFILE="0" && PANEL_LCD="1"
-        int "Parallel port pin number & polarity connected to the LCD backlight signal (-17...17) "
+	int "Parallel port pin number & polarity connected to the LCD backlight signal (-17...17) "
 	range -17 17
 	default 0
 	---help---
 	  This describes the number of the parallel port pin to which the LCD 'BL' signal
-          has been connected. It can be :
+	  has been connected. It can be :

 	    0 : no connection (eg: connected to ground)
 	    1..17 : directly connected to any of these pins on the DB25 plug
+1 -1
drivers/auxdisplay/charlcd.c
···
 		int len;
 	} esc_seq;

-	unsigned long long drvdata[0];
+	unsigned long long drvdata[];
 };

 #define charlcd_to_priv(p)	container_of(p, struct charlcd_priv, lcd)
···
 {
 	if (!pdev->dev.coherent_dma_mask)
 		pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
-	if (!pdev->dma_mask)
-		pdev->dma_mask = DMA_BIT_MASK(32);
-	if (!pdev->dev.dma_mask)
-		pdev->dev.dma_mask = &pdev->dma_mask;
+	if (!pdev->dev.dma_mask) {
+		pdev->platform_dma_mask = DMA_BIT_MASK(32);
+		pdev->dev.dma_mask = &pdev->platform_dma_mask;
+	}
 };
···
 	pdev->dev.of_node_reused = pdevinfo->of_node_reused;

 	if (pdevinfo->dma_mask) {
-		/*
-		 * This memory isn't freed when the device is put,
-		 * I don't have a nice idea for that though. Conceptually
-		 * dma_mask in struct device should not be a pointer.
-		 * See http://thread.gmane.org/gmane.linux.kernel.pci/9081
-		 */
-		pdev->dev.dma_mask =
-			kmalloc(sizeof(*pdev->dev.dma_mask), GFP_KERNEL);
-		if (!pdev->dev.dma_mask)
-			goto err;
-
-		kmemleak_ignore(pdev->dev.dma_mask);
-
-		*pdev->dev.dma_mask = pdevinfo->dma_mask;
+		pdev->platform_dma_mask = pdevinfo->dma_mask;
+		pdev->dev.dma_mask = &pdev->platform_dma_mask;
 		pdev->dev.coherent_dma_mask = pdevinfo->dma_mask;
 	}
···
 	if (ret) {
 err:
 		ACPI_COMPANION_SET(&pdev->dev, NULL);
-		kfree(pdev->dev.dma_mask);
 		platform_device_put(pdev);
 		return ERR_PTR(ret);
 	}
+12 -5
drivers/block/virtio_blk.c
···
 	err = virtblk_add_req(vblk->vqs[qid].vq, vbr, vbr->sg, num);
 	if (err) {
 		virtqueue_kick(vblk->vqs[qid].vq);
-		blk_mq_stop_hw_queue(hctx);
+		/* Don't stop the queue if -ENOMEM: we may have failed to
+		 * bounce the buffer due to global resource outage.
+		 */
+		if (err == -ENOSPC)
+			blk_mq_stop_hw_queue(hctx);
 		spin_unlock_irqrestore(&vblk->vqs[qid].lock, flags);
-		/* Out of mem doesn't actually happen, since we fall back
-		 * to direct descriptors */
-		if (err == -ENOMEM || err == -ENOSPC)
+		switch (err) {
+		case -ENOSPC:
 			return BLK_STS_DEV_RESOURCE;
-		return BLK_STS_IOERR;
+		case -ENOMEM:
+			return BLK_STS_RESOURCE;
+		default:
+			return BLK_STS_IOERR;
+		}
 	}

 	if (bd->last && virtqueue_kick_prepare(vblk->vqs[qid].vq))
···
  *
  * Returns: The number of clocks that are possible parents of this node
  */
-unsigned int of_clk_get_parent_count(struct device_node *np)
+unsigned int of_clk_get_parent_count(const struct device_node *np)
 {
 	int count;
···
 }
 EXPORT_SYMBOL_GPL(of_clk_get_parent_count);

-const char *of_clk_get_parent_name(struct device_node *np, int index)
+const char *of_clk_get_parent_name(const struct device_node *np, int index)
 {
 	struct of_phandle_args clkspec;
 	struct property *prop;
···
 	bool enable = (state == AMD_CG_STATE_GATE);

 	if (enable) {
-		if (jpeg_v2_0_is_idle(handle))
+		if (!jpeg_v2_0_is_idle(handle))
 			return -EBUSY;
 		jpeg_v2_0_enable_clock_gating(adev);
 	} else {
+1 -1
drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c
···
 			continue;

 		if (enable) {
-			if (jpeg_v2_5_is_idle(handle))
+			if (!jpeg_v2_5_is_idle(handle))
 				return -EBUSY;
 			jpeg_v2_5_enable_clock_gating(adev, i);
 		} else {
+23 -2
drivers/gpu/drm/amd/amdgpu/soc15.c
···
 #define HDP_MEM_POWER_CTRL__RC_MEM_POWER_CTRL_EN_MASK	0x00010000L
 #define HDP_MEM_POWER_CTRL__RC_MEM_POWER_LS_EN_MASK	0x00020000L
 #define mmHDP_MEM_POWER_CTRL_BASE_IDX	0
+
+/* for Vega20/arcturus register offset change */
+#define	mmROM_INDEX_VG20	0x00e4
+#define	mmROM_INDEX_VG20_BASE_IDX	0
+#define	mmROM_DATA_VG20		0x00e5
+#define	mmROM_DATA_VG20_BASE_IDX	0
+
 /*
  * Indirect registers accessor
  */
···
 {
 	u32 *dw_ptr;
 	u32 i, length_dw;
+	uint32_t rom_index_offset;
+	uint32_t rom_data_offset;

 	if (bios == NULL)
 		return false;
···
 	dw_ptr = (u32 *)bios;
 	length_dw = ALIGN(length_bytes, 4) / 4;

+	switch (adev->asic_type) {
+	case CHIP_VEGA20:
+	case CHIP_ARCTURUS:
+		rom_index_offset = SOC15_REG_OFFSET(SMUIO, 0, mmROM_INDEX_VG20);
+		rom_data_offset = SOC15_REG_OFFSET(SMUIO, 0, mmROM_DATA_VG20);
+		break;
+	default:
+		rom_index_offset = SOC15_REG_OFFSET(SMUIO, 0, mmROM_INDEX);
+		rom_data_offset = SOC15_REG_OFFSET(SMUIO, 0, mmROM_DATA);
+		break;
+	}
+
 	/* set rom index to 0 */
-	WREG32(SOC15_REG_OFFSET(SMUIO, 0, mmROM_INDEX), 0);
+	WREG32(rom_index_offset, 0);
 	/* read out the rom data */
 	for (i = 0; i < length_dw; i++)
-		dw_ptr[i] = RREG32(SOC15_REG_OFFSET(SMUIO, 0, mmROM_DATA));
+		dw_ptr[i] = RREG32(rom_data_offset);

 	return true;
 }
+1-1
drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
···1352135213531353 if (enable) {13541354 /* wait for STATUS to clear */13551355- if (vcn_v1_0_is_idle(handle))13551355+ if (!vcn_v1_0_is_idle(handle))13561356 return -EBUSY;13571357 vcn_v1_0_enable_clock_gating(adev);13581358 } else {
+1-1
drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
···1217121712181218 if (enable) {12191219 /* wait for STATUS to clear */12201220- if (vcn_v2_0_is_idle(handle))12201220+ if (!vcn_v2_0_is_idle(handle))12211221 return -EBUSY;12221222 vcn_v2_0_enable_clock_gating(adev);12231223 } else {
···522522523523 acrtc_state = to_dm_crtc_state(acrtc->base.state);524524525525- DRM_DEBUG_DRIVER("crtc:%d, vupdate-vrr:%d\n", acrtc->crtc_id,526526- amdgpu_dm_vrr_active(acrtc_state));525525+ DRM_DEBUG_DRIVER("crtc:%d, vupdate-vrr:%d, planes:%d\n", acrtc->crtc_id,526526+ amdgpu_dm_vrr_active(acrtc_state),527527+ acrtc_state->active_planes);527528528529 amdgpu_dm_crtc_handle_crc_irq(&acrtc->base);529530 drm_crtc_handle_vblank(&acrtc->base);···544543 &acrtc_state->vrr_params.adjust);545544 }546545547547- if (acrtc->pflip_status == AMDGPU_FLIP_SUBMITTED) {546546+ /*547547+ * If there aren't any active_planes then DCH HUBP may be clock-gated.548548+ * In that case, pageflip completion interrupts won't fire and pageflip549549+ * completion events won't get delivered. Prevent this by sending550550+ * pending pageflip events from here if a flip is still pending.551551+ *552552+ * If any planes are enabled, use dm_pflip_high_irq() instead, to553553+ * avoid race conditions between flip programming and completion,554554+ * which could cause too early flip completion events.555555+ */556556+ if (acrtc->pflip_status == AMDGPU_FLIP_SUBMITTED &&557557+ acrtc_state->active_planes == 0) {548558 if (acrtc->event) {549559 drm_crtc_send_vblank_event(&acrtc->base, acrtc->event);550560 acrtc->event = NULL;
···272272{273273 struct intel_gvt *gvt = vgpu->gvt;274274275275- mutex_lock(&vgpu->vgpu_lock);276276-277275 WARN(vgpu->active, "vGPU is still active!\n");278276277277+ /*278278+ * remove the vgpu from the idr first so the later cleanup can279279+ * judge whether the service needs stopping when no vgpu is active.280280+ */281281+ mutex_lock(&gvt->lock);282282+ idr_remove(&gvt->vgpu_idr, vgpu->id);283283+ mutex_unlock(&gvt->lock);284284+285285+ mutex_lock(&vgpu->vgpu_lock);279286 intel_gvt_debugfs_remove_vgpu(vgpu);280287 intel_vgpu_clean_sched_policy(vgpu);281288 intel_vgpu_clean_submission(vgpu);···297290 mutex_unlock(&vgpu->vgpu_lock);298291299292 mutex_lock(&gvt->lock);300300- idr_remove(&gvt->vgpu_idr, vgpu->id);301293 if (idr_is_empty(&gvt->vgpu_idr))302294 intel_gvt_clean_irq(gvt);303295 intel_gvt_update_vgpu_types(gvt);
+20-8
drivers/gpu/drm/i915/i915_request.c
···527527 return NOTIFY_DONE;528528}529529530530+static void irq_semaphore_cb(struct irq_work *wrk)531531+{532532+ struct i915_request *rq =533533+ container_of(wrk, typeof(*rq), semaphore_work);534534+535535+ i915_schedule_bump_priority(rq, I915_PRIORITY_NOSEMAPHORE);536536+ i915_request_put(rq);537537+}538538+530539static int __i915_sw_fence_call531540semaphore_notify(struct i915_sw_fence *fence, enum i915_sw_fence_notify state)532541{533533- struct i915_request *request =534534- container_of(fence, typeof(*request), semaphore);542542+ struct i915_request *rq = container_of(fence, typeof(*rq), semaphore);535543536544 switch (state) {537545 case FENCE_COMPLETE:538538- i915_schedule_bump_priority(request, I915_PRIORITY_NOSEMAPHORE);546546+ if (!(READ_ONCE(rq->sched.attr.priority) & I915_PRIORITY_NOSEMAPHORE)) {547547+ i915_request_get(rq);548548+ init_irq_work(&rq->semaphore_work, irq_semaphore_cb);549549+ irq_work_queue(&rq->semaphore_work);550550+ }539551 break;540552541553 case FENCE_FREE:542542- i915_request_put(request);554554+ i915_request_put(rq);543555 break;544556 }545557···788776 struct dma_fence *fence;789777 int err;790778791791- GEM_BUG_ON(i915_request_timeline(rq) ==792792- rcu_access_pointer(signal->timeline));779779+ if (i915_request_timeline(rq) == rcu_access_pointer(signal->timeline))780780+ return 0;793781794782 if (i915_request_started(signal))795783 return 0;···833821 return 0;834822835823 err = 0;836836- if (intel_timeline_sync_is_later(i915_request_timeline(rq), fence))824824+ if (!intel_timeline_sync_is_later(i915_request_timeline(rq), fence))837825 err = i915_sw_fence_await_dma_fence(&rq->submit,838826 fence, 0,839827 I915_FENCE_GFP);···13301318 * decide whether to preempt the entire chain so that it is ready to13311319 * run at the earliest possible convenience.13321320 */13331333- i915_sw_fence_commit(&rq->semaphore);13341321 if (attr && rq->engine->schedule)13351322 rq->engine->schedule(rq, attr);13231323+ 
i915_sw_fence_commit(&rq->semaphore);13361324 i915_sw_fence_commit(&rq->submit);13371325}13381326
+2
drivers/gpu/drm/i915/i915_request.h
···2626#define I915_REQUEST_H27272828#include <linux/dma-fence.h>2929+#include <linux/irq_work.h>2930#include <linux/lockdep.h>30313132#include "gem/i915_gem_context_types.h"···209208 };210209 struct list_head execute_cb;211210 struct i915_sw_fence semaphore;211211+ struct irq_work semaphore_work;212212213213 /*214214 * A list of everyone we wait upon, and everyone who waits upon us.
···842842 }843843}844844845845-static irqreturn_t stm32_dfsdm_adc_trigger_handler(int irq, void *p)846846-{847847- struct iio_poll_func *pf = p;848848- struct iio_dev *indio_dev = pf->indio_dev;849849- struct stm32_dfsdm_adc *adc = iio_priv(indio_dev);850850- int available = stm32_dfsdm_adc_dma_residue(adc);851851-852852- while (available >= indio_dev->scan_bytes) {853853- s32 *buffer = (s32 *)&adc->rx_buf[adc->bufi];854854-855855- stm32_dfsdm_process_data(adc, buffer);856856-857857- iio_push_to_buffers_with_timestamp(indio_dev, buffer,858858- pf->timestamp);859859- available -= indio_dev->scan_bytes;860860- adc->bufi += indio_dev->scan_bytes;861861- if (adc->bufi >= adc->buf_sz)862862- adc->bufi = 0;863863- }864864-865865- iio_trigger_notify_done(indio_dev->trig);866866-867867- return IRQ_HANDLED;868868-}869869-870845static void stm32_dfsdm_dma_buffer_done(void *data)871846{872847 struct iio_dev *indio_dev = data;873848 struct stm32_dfsdm_adc *adc = iio_priv(indio_dev);874849 int available = stm32_dfsdm_adc_dma_residue(adc);875850 size_t old_pos;876876-877877- if (indio_dev->currentmode & INDIO_BUFFER_TRIGGERED) {878878- iio_trigger_poll_chained(indio_dev->trig);879879- return;880880- }881851882852 /*883853 * FIXME: In Kernel interface does not support cyclic DMA buffer,and···876906 adc->bufi = 0;877907 old_pos = 0;878908 }879879- /* regular iio buffer without trigger */909909+ /*910910+ * In DMA mode the trigger services of IIO are not used911911+ * (e.g. no call to iio_trigger_poll).912912+ * Calling the irq handler associated with the hardware trigger is913913+ * not relevant as the conversions have already been done. Data914914+ * transfers are performed directly in the DMA callback instead.915915+ * This implementation avoids calling a trigger irq handler that916916+ * may sleep, in an atomic context (DMA irq handler context).917917+ */880918 if (adc->dev_data->type == DFSDM_IIO)881919 iio_push_to_buffers(indio_dev, buffer);882920 }···15141536 }1515153715161538 ret = iio_triggered_buffer_setup(indio_dev,15171517- &iio_pollfunc_store_time,15181518- &stm32_dfsdm_adc_trigger_handler,15391539+ &iio_pollfunc_store_time, NULL,15191540 &stm32_dfsdm_buffer_setup_ops);15201541 if (ret) {15211542 stm32_dfsdm_dma_release(indio_dev);
+2
drivers/iio/chemical/Kconfig
···9191 tristate "SPS30 particulate matter sensor"9292 depends on I2C9393 select CRC89494+ select IIO_BUFFER9595+ select IIO_TRIGGERED_BUFFER9496 help9597 Say Y here to build support for the Sensirion SPS30 particulate9698 matter sensor.
+8-7
drivers/iio/light/vcnl4000.c
···167167 data->vcnl4200_ps.reg = VCNL4200_PS_DATA;168168 switch (id) {169169 case VCNL4200_PROD_ID:170170- /* Integration time is 50ms, but the experiments */171171- /* show 54ms in total. */172172- data->vcnl4200_al.sampling_rate = ktime_set(0, 54000 * 1000);173173- data->vcnl4200_ps.sampling_rate = ktime_set(0, 4200 * 1000);170170+ /* Default wait time is 50ms, add 20% tolerance. */171171+ data->vcnl4200_al.sampling_rate = ktime_set(0, 60000 * 1000);172172+ /* Default wait time is 4.8ms, add 20% tolerance. */173173+ data->vcnl4200_ps.sampling_rate = ktime_set(0, 5760 * 1000);174174 data->al_scale = 24000;175175 break;176176 case VCNL4040_PROD_ID:177177- /* Integration time is 80ms, add 10ms. */178178- data->vcnl4200_al.sampling_rate = ktime_set(0, 100000 * 1000);179179- data->vcnl4200_ps.sampling_rate = ktime_set(0, 100000 * 1000);177177+ /* Default wait time is 80ms, add 20% tolerance. */178178+ data->vcnl4200_al.sampling_rate = ktime_set(0, 96000 * 1000);179179+ /* Default wait time is 5ms, add 20% tolerance. */180180+ data->vcnl4200_ps.sampling_rate = ktime_set(0, 6000 * 1000);180181 data->al_scale = 120000;181182 break;182183 }
+1-1
drivers/iio/magnetometer/ak8974.c
···564564 * We read all axes and discard all but one, for optimized565565 * reading, use the triggered buffer.566566 */567567- *val = le16_to_cpu(hw_values[chan->address]);567567+ *val = (s16)le16_to_cpu(hw_values[chan->address]);568568569569 ret = IIO_VAL_INT;570570 }
···17321732 * the erase operation does not exceed the max_busy_timeout, we should17331733 * use R1B response. Or we need to prevent the host from doing hw busy17341734 * detection, which is done by converting to a R1 response instead.17351735+ * Note, some hosts require R1B, which also means they are on their own17361736+ * when it comes to dealing with the busy timeout.17351737 */17361736- if (card->host->max_busy_timeout &&17381738+ if (!(card->host->caps & MMC_CAP_NEED_RSP_BUSY) &&17391739+ card->host->max_busy_timeout &&17371740 busy_timeout > card->host->max_busy_timeout) {17381741 cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_AC;17391742 } else {
+5-2
drivers/mmc/core/mmc.c
···19101910 * If the max_busy_timeout of the host is specified, validate it against19111911 * the sleep cmd timeout. A failure means we need to prevent the host19121912 * from doing hw busy detection, which is done by converting to a R119131913- * response instead of a R1B.19131913+ * response instead of a R1B. Note, some hosts require R1B, which also19141914+ * means they are on their own when it comes to dealing with the busy19151915+ * timeout.19141916 */19151915- if (host->max_busy_timeout && (timeout_ms > host->max_busy_timeout)) {19171917+ if (!(host->caps & MMC_CAP_NEED_RSP_BUSY) && host->max_busy_timeout &&19181918+ (timeout_ms > host->max_busy_timeout)) {19161919 cmd.flags = MMC_RSP_R1 | MMC_CMD_AC;19171920 } else {19181921 cmd.flags = MMC_RSP_R1B | MMC_CMD_AC;
+4-2
drivers/mmc/core/mmc_ops.c
···542542 * If the max_busy_timeout of the host is specified, make sure it's543543 * enough to fit the used timeout_ms. In case it's not, let's instruct544544 * the host to avoid HW busy detection, by converting to a R1 response545545- * instead of a R1B.545545+ * instead of a R1B. Note, some hosts require R1B, which also means546546+ * they are on their own when it comes to dealing with the busy timeout.546547 */547547- if (host->max_busy_timeout && (timeout_ms > host->max_busy_timeout))548548+ if (!(host->caps & MMC_CAP_NEED_RSP_BUSY) && host->max_busy_timeout &&549549+ (timeout_ms > host->max_busy_timeout))548550 use_r1b_resp = false;549551550552 cmd.opcode = MMC_SWITCH;
···132132133133 sdhci_reset(host, mask);134134135135- if (host->mmc->caps & MMC_CAP_NONREMOVABLE)135135+ if ((host->mmc->caps & MMC_CAP_NONREMOVABLE)136136+ || mmc_gpio_get_cd(host->mmc) >= 0)136137 sdhci_at91_set_force_card_detect(host);137138138139 if (priv->cal_always_on && (mask & SDHCI_RESET_ALL))···428427 * detection procedure using the SDMCC_CD signal is bypassed.429428 * This bit is reset when a software reset for all command is performed430429 * so we need to implement our own reset function to set back this bit.430430+ *431431+ * WA: SAMA5D2 doesn't drive CMD if using CD GPIO line.431432 */432432- if (host->mmc->caps & MMC_CAP_NONREMOVABLE)433433+ if ((host->mmc->caps & MMC_CAP_NONREMOVABLE)434434+ || mmc_gpio_get_cd(host->mmc) >= 0)433435 sdhci_at91_set_force_card_detect(host);434436435437 pm_runtime_put_autosuspend(&pdev->dev);
+3
drivers/mmc/host/sdhci-omap.c
···11921192 if (of_find_property(dev->of_node, "dmas", NULL))11931193 sdhci_switch_external_dma(host, true);1194119411951195+ /* An R1B response is required to properly manage HW busy detection. */11961196+ mmc->caps |= MMC_CAP_NEED_RSP_BUSY;11971197+11951198 ret = sdhci_setup_host(host);11961199 if (ret)11971200 goto err_put_sync;
···17411741 if (!dsa_is_user_port(ds, port))17421742 continue;1743174317441744- kthread_destroy_worker(sp->xmit_worker);17441744+ if (sp->xmit_worker)17451745+ kthread_destroy_worker(sp->xmit_worker);17451746 }1746174717471748 sja1105_tas_teardown(ds);
+1-1
drivers/net/ethernet/broadcom/bcmsysport.c
···21352135 return -ENOSPC;2136213621372137 index = find_first_zero_bit(priv->filters, RXCHK_BRCM_TAG_MAX);21382138- if (index > RXCHK_BRCM_TAG_MAX)21382138+ if (index >= RXCHK_BRCM_TAG_MAX)21392139 return -ENOSPC;2140214021412141 /* Location is the classification ID, and index is the position
···53815381static int cfg_queues(struct adapter *adap)53825382{53835383 u32 avail_qsets, avail_eth_qsets, avail_uld_qsets;53845384+ u32 i, n10g = 0, qidx = 0, n1g = 0;53855385+ u32 ncpus = num_online_cpus();53845386 u32 niqflint, neq, num_ulds;53855387 struct sge *s = &adap->sge;53865386- u32 i, n10g = 0, qidx = 0;53875387-#ifndef CONFIG_CHELSIO_T4_DCB53885388- int q10g = 0;53895389-#endif53885388+ u32 q10g = 0, q1g;5390538953915390 /* Reduce memory usage in kdump environment, disable all offload. */53925391 if (is_kdump_kernel() || (is_uld(adap) && t4_uld_mem_alloc(adap))) {···54235424 n10g += is_x_10g_port(&adap2pinfo(adap, i)->link_cfg);5424542554255426 avail_eth_qsets = min_t(u32, avail_qsets, MAX_ETH_QSETS);54275427+54285428+ /* We default to 1 queue per non-10G port and up to # of cores queues54295429+ * per 10G port.54305430+ */54315431+ if (n10g)54325432+ q10g = (avail_eth_qsets - (adap->params.nports - n10g)) / n10g;54335433+54345434+ n1g = adap->params.nports - n10g;54265435#ifdef CONFIG_CHELSIO_T4_DCB54275436 /* For Data Center Bridging support we need to be able to support up54285437 * to 8 Traffic Priorities; each of which will be assigned to its54295438 * own TX Queue in order to prevent Head-Of-Line Blocking.54305439 */54405440+ q1g = 8;54315441 if (adap->params.nports * 8 > avail_eth_qsets) {54325442 dev_err(adap->pdev_dev, "DCB avail_eth_qsets=%d < %d!\n",54335443 avail_eth_qsets, adap->params.nports * 8);54345444 return -ENOMEM;54355445 }5436544654375437- for_each_port(adap, i) {54385438- struct port_info *pi = adap2pinfo(adap, i);54475447+ if (adap->params.nports * ncpus < avail_eth_qsets)54485448+ q10g = max(8U, ncpus);54495449+ else54505450+ q10g = max(8U, q10g);5439545154405440- pi->first_qset = qidx;54415441- pi->nqsets = is_kdump_kernel() ? 
1 : 8;54425442- qidx += pi->nqsets;54435443- }54525452+ while ((q10g * n10g) > (avail_eth_qsets - n1g * q1g))54535453+ q10g--;54545454+54445455#else /* !CONFIG_CHELSIO_T4_DCB */54455445- /* We default to 1 queue per non-10G port and up to # of cores queues54465446- * per 10G port.54475447- */54485448- if (n10g)54495449- q10g = (avail_eth_qsets - (adap->params.nports - n10g)) / n10g;54505450- if (q10g > netif_get_num_default_rss_queues())54515451- q10g = netif_get_num_default_rss_queues();54525452-54535453- if (is_kdump_kernel())54565456+ q1g = 1;54575457+ q10g = min(q10g, ncpus);54585458+#endif /* !CONFIG_CHELSIO_T4_DCB */54595459+ if (is_kdump_kernel()) {54545460 q10g = 1;54615461+ q1g = 1;54625462+ }5455546354565464 for_each_port(adap, i) {54575465 struct port_info *pi = adap2pinfo(adap, i);5458546654595467 pi->first_qset = qidx;54605460- pi->nqsets = is_x_10g_port(&pi->link_cfg) ? q10g : 1;54685468+ pi->nqsets = is_x_10g_port(&pi->link_cfg) ? q10g : q1g;54615469 qidx += pi->nqsets;54625470 }54635463-#endif /* !CONFIG_CHELSIO_T4_DCB */5464547154655472 s->ethqsets = qidx;54665473 s->max_ethqsets = qidx; /* MSI-X may lower it later */···54785473 * capped by the number of available cores.54795474 */54805475 num_ulds = adap->num_uld + adap->num_ofld_uld;54815481- i = min_t(u32, MAX_OFLD_QSETS, num_online_cpus());54765476+ i = min_t(u32, MAX_OFLD_QSETS, ncpus);54825477 avail_uld_qsets = roundup(i, adap->params.nports);54835478 if (avail_qsets < num_ulds * adap->params.nports) {54845479 adap->params.offload = 0;
+108-6
drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
···11/* Copyright 2008 - 2016 Freescale Semiconductor Inc.22+ * Copyright 2020 NXP23 *34 * Redistribution and use in source and binary forms, with or without45 * modification, are permitted provided that the following conditions are met:···124123#define FSL_QMAN_MAX_OAL 127125124126125/* Default alignment for start of data in an Rx FD */126126+#ifdef CONFIG_DPAA_ERRATUM_A050385127127+/* aligning data start to 64 avoids DMA transaction splits, unless the buffer128128+ * is crossing a 4k page boundary129129+ */130130+#define DPAA_FD_DATA_ALIGNMENT (fman_has_errata_a050385() ? 64 : 16)131131+/* aligning to 256 avoids DMA transaction splits caused by 4k page boundary132132+ * crossings; also, all SG fragments except the last must have a size multiple133133+ * of 256 to avoid DMA transaction splits134134+ */135135+#define DPAA_A050385_ALIGN 256136136+#define DPAA_FD_RX_DATA_ALIGNMENT (fman_has_errata_a050385() ? \137137+ DPAA_A050385_ALIGN : 16)138138+#else127139#define DPAA_FD_DATA_ALIGNMENT 16140140+#define DPAA_FD_RX_DATA_ALIGNMENT DPAA_FD_DATA_ALIGNMENT141141+#endif128142129143/* The DPAA requires 256 bytes reserved and mapped for the SGT */130144#define DPAA_SGT_SIZE 256···174158#define DPAA_PARSE_RESULTS_SIZE sizeof(struct fman_prs_result)175159#define DPAA_TIME_STAMP_SIZE 8176160#define DPAA_HASH_RESULTS_SIZE 8161161+#ifdef CONFIG_DPAA_ERRATUM_A050385162162+#define DPAA_RX_PRIV_DATA_SIZE (DPAA_A050385_ALIGN - (DPAA_PARSE_RESULTS_SIZE\163163+ + DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE))164164+#else177165#define DPAA_RX_PRIV_DATA_SIZE (u16)(DPAA_TX_PRIV_DATA_SIZE + \178166 dpaa_rx_extra_headroom)167167+#endif179168180169#define DPAA_ETH_PCD_RXQ_NUM 128181170···201180202181#define DPAA_BP_RAW_SIZE 4096203182183183+#ifdef CONFIG_DPAA_ERRATUM_A050385184184+#define dpaa_bp_size(raw_size) (SKB_WITH_OVERHEAD(raw_size) & \185185+ ~(DPAA_A050385_ALIGN - 1))186186+#else204187#define dpaa_bp_size(raw_size) SKB_WITH_OVERHEAD(raw_size)188188+#endif205189206190static int 
dpaa_max_frm;207191···12181192 buf_prefix_content.pass_prs_result = true;12191193 buf_prefix_content.pass_hash_result = true;12201194 buf_prefix_content.pass_time_stamp = true;12211221- buf_prefix_content.data_align = DPAA_FD_DATA_ALIGNMENT;11951195+ buf_prefix_content.data_align = DPAA_FD_RX_DATA_ALIGNMENT;1222119612231197 rx_p = ¶ms.specific_params.rx_params;12241198 rx_p->err_fqid = errq->fqid;···16881662 return CHECKSUM_NONE;16891663}1690166416651665+#define PTR_IS_ALIGNED(x, a) (IS_ALIGNED((unsigned long)(x), (a)))16661666+16911667/* Build a linear skb around the received buffer.16921668 * We are guaranteed there is enough room at the end of the data buffer to16931669 * accommodate the shared info area of the skb.···1761173317621734 sg_addr = qm_sg_addr(&sgt[i]);17631735 sg_vaddr = phys_to_virt(sg_addr);17641764- WARN_ON(!IS_ALIGNED((unsigned long)sg_vaddr,17651765- SMP_CACHE_BYTES));17361736+ WARN_ON(!PTR_IS_ALIGNED(sg_vaddr, SMP_CACHE_BYTES));1766173717671738 dma_unmap_page(priv->rx_dma_dev, sg_addr,17681739 DPAA_BP_RAW_SIZE, DMA_FROM_DEVICE);···20492022 return 0;20502023}2051202420252025+#ifdef CONFIG_DPAA_ERRATUM_A05038520262026+int dpaa_a050385_wa(struct net_device *net_dev, struct sk_buff **s)20272027+{20282028+ struct dpaa_priv *priv = netdev_priv(net_dev);20292029+ struct sk_buff *new_skb, *skb = *s;20302030+ unsigned char *start, i;20312031+20322032+ /* check linear buffer alignment */20332033+ if (!PTR_IS_ALIGNED(skb->data, DPAA_A050385_ALIGN))20342034+ goto workaround;20352035+20362036+ /* linear buffers just need to have an aligned start */20372037+ if (!skb_is_nonlinear(skb))20382038+ return 0;20392039+20402040+ /* linear data size for nonlinear skbs needs to be aligned */20412041+ if (!IS_ALIGNED(skb_headlen(skb), DPAA_A050385_ALIGN))20422042+ goto workaround;20432043+20442044+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {20452045+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];20462046+20472047+ /* all fragments need to have aligned start 
addresses */20482048+ if (!IS_ALIGNED(skb_frag_off(frag), DPAA_A050385_ALIGN))20492049+ goto workaround;20502050+20512051+ /* all but last fragment need to have aligned sizes */20522052+ if (!IS_ALIGNED(skb_frag_size(frag), DPAA_A050385_ALIGN) &&20532053+ (i < skb_shinfo(skb)->nr_frags - 1))20542054+ goto workaround;20552055+ }20562056+20572057+ return 0;20582058+20592059+workaround:20602060+ /* copy all the skb content into a new linear buffer */20612061+ new_skb = netdev_alloc_skb(net_dev, skb->len + DPAA_A050385_ALIGN - 1 +20622062+ priv->tx_headroom);20632063+ if (!new_skb)20642064+ return -ENOMEM;20652065+20662066+ /* NET_SKB_PAD bytes already reserved, adding up to tx_headroom */20672067+ skb_reserve(new_skb, priv->tx_headroom - NET_SKB_PAD);20682068+20692069+ /* Workaround for DPAA_A050385 requires data start to be aligned */20702070+ start = PTR_ALIGN(new_skb->data, DPAA_A050385_ALIGN);20712071+ if (start - new_skb->data != 0)20722072+ skb_reserve(new_skb, start - new_skb->data);20732073+20742074+ skb_put(new_skb, skb->len);20752075+ skb_copy_bits(skb, 0, new_skb->data, skb->len);20762076+ skb_copy_header(new_skb, skb);20772077+ new_skb->dev = skb->dev;20782078+20792079+ /* We move the headroom when we align it so we have to reset the20802080+ * network and transport header offsets relative to the new data20812081+ * pointer. The checksum offload relies on these offsets.20822082+ */20832083+ skb_set_network_header(new_skb, skb_network_offset(skb));20842084+ skb_set_transport_header(new_skb, skb_transport_offset(skb));20852085+20862086+ /* TODO: does timestamping need the result in the old skb? 
*/20872087+ dev_kfree_skb(skb);20882088+ *s = new_skb;20892089+20902090+ return 0;20912091+}20922092+#endif20932093+20522094static netdev_tx_t20532095dpaa_start_xmit(struct sk_buff *skb, struct net_device *net_dev)20542096{···2163206721642068 nonlinear = skb_is_nonlinear(skb);21652069 }20702070+20712071+#ifdef CONFIG_DPAA_ERRATUM_A05038520722072+ if (unlikely(fman_has_errata_a050385())) {20732073+ if (dpaa_a050385_wa(net_dev, &skb))20742074+ goto enomem;20752075+ nonlinear = skb_is_nonlinear(skb);20762076+ }20772077+#endif2166207821672079 if (nonlinear) {21682080 /* Just create a S/G fd based on the skb */···28452741 headroom = (u16)(bl->priv_data_size + DPAA_PARSE_RESULTS_SIZE +28462742 DPAA_TIME_STAMP_SIZE + DPAA_HASH_RESULTS_SIZE);2847274328482848- return DPAA_FD_DATA_ALIGNMENT ? ALIGN(headroom,28492849- DPAA_FD_DATA_ALIGNMENT) :28502850- headroom;27442744+ return ALIGN(headroom, DPAA_FD_DATA_ALIGNMENT);28512745}2852274628532747static int dpaa_eth_probe(struct platform_device *pdev)
···88 help99 Freescale Data-Path Acceleration Architecture Frame Manager1010 (FMan) support1111+1212+config DPAA_ERRATUM_A0503851313+ bool1414+ depends on ARM64 && FSL_DPAA1515+ default y1616+ help1717+ DPAA FMan erratum A050385 software workaround implementation:1818+ align buffers, data start, SG fragment length to avoid FMan DMA1919+ splits.2020+ FMAN DMA read or writes under heavy traffic load may cause FMAN2121+ internal resource leak thus stopping further packet processing.2222+ The FMAN internal queue can overflow when FMAN splits single2323+ read or write transactions into multiple smaller transactions2424+ such that more than 17 AXI transactions are in flight from FMAN2525+ to interconnect. When the FMAN internal queue overflows, it can2626+ stall further packet processing. The issue can occur with any2727+ one of the following three conditions:2828+ 1. FMAN AXI transaction crosses 4K address boundary (Errata2929+ A010022)3030+ 2. FMAN DMA address for an AXI transaction is not 16 byte3131+ aligned, i.e. the last 4 bits of an address are non-zero3232+ 3. Scatter Gather (SG) frames have more than one SG buffer in3333+ the SG list and any one of the buffers, except the last3434+ buffer in the SG list has data size that is not a multiple3535+ of 16 bytes, i.e., other than 16, 32, 48, 64, etc.3636+ With any one of the above three conditions present, there is3737+ likelihood of stalled FMAN packet processing, especially under3838+ stress with multiple ports injecting line-rate traffic.
+18
drivers/net/ethernet/freescale/fman/fman.c
···11/*22 * Copyright 2008-2015 Freescale Semiconductor Inc.33+ * Copyright 2020 NXP34 *45 * Redistribution and use in source and binary forms, with or without56 * modification, are permitted provided that the following conditions are met:···566565 u32 total_num_of_tasks;567566 u32 qmi_def_tnums_thresh;568567};568568+569569+#ifdef CONFIG_DPAA_ERRATUM_A050385570570+static bool fman_has_err_a050385;571571+#endif569572570573static irqreturn_t fman_exceptions(struct fman *fman,571574 enum fman_exceptions exception)···25232518}25242519EXPORT_SYMBOL(fman_bind);2525252025212521+#ifdef CONFIG_DPAA_ERRATUM_A05038525222522+bool fman_has_errata_a050385(void)25232523+{25242524+ return fman_has_err_a050385;25252525+}25262526+EXPORT_SYMBOL(fman_has_errata_a050385);25272527+#endif25282528+25262529static irqreturn_t fman_err_irq(int irq, void *handle)25272530{25282531 struct fman *fman = (struct fman *)handle;···28572844 __func__);28582845 goto fman_free;28592846 }28472847+28482848+#ifdef CONFIG_DPAA_ERRATUM_A05038528492849+ fman_has_err_a050385 =28502850+ of_property_read_bool(fm_node, "fsl,erratum-a050385");28512851+#endif2860285228612853 return fman;28622854
+5
drivers/net/ethernet/freescale/fman/fman.h
···11/*22 * Copyright 2008-2015 Freescale Semiconductor Inc.33+ * Copyright 2020 NXP34 *45 * Redistribution and use in source and binary forms, with or without56 * modification, are permitted provided that the following conditions are met:···398397u16 fman_get_max_frm(void);399398400399int fman_get_rx_extra_headroom(void);400400+401401+#ifdef CONFIG_DPAA_ERRATUM_A050385402402+bool fman_has_errata_a050385(void);403403+#endif401404402405struct fman *fman_bind(struct device *dev);403406
+1
drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h
···4646 HCLGE_MBX_PUSH_VLAN_INFO, /* (PF -> VF) push port base vlan */4747 HCLGE_MBX_GET_MEDIA_TYPE, /* (VF -> PF) get media type */4848 HCLGE_MBX_PUSH_PROMISC_INFO, /* (PF -> VF) push vf promisc info */4949+ HCLGE_MBX_VF_UNINIT, /* (VF -> PF) vf is uninitializing */49505051 HCLGE_MBX_GET_VF_FLR_STATUS = 200, /* (M7 -> PF) get vf flr status */5152 HCLGE_MBX_PUSH_LINK_STATUS, /* (M7 -> PF) get port link status */
···22772277 if (!str || !*str)22782278 return -EINVAL;22792279 while ((opt = strsep(&str, ",")) != NULL) {22802280- if (!strncmp(opt, "eee_timer:", 6)) {22802280+ if (!strncmp(opt, "eee_timer:", 10)) {22812281 if (kstrtoint(opt + 10, 0, &eee_timer))22822282 goto err;22832283 }
+17-15
drivers/net/ethernet/sfc/ef10.c
···28532853 }2854285428552855 /* Transmit timestamps are only available for 8XXX series. They result28562856- * in three events per packet. These occur in order, and are:28572857- * - the normal completion event28562856+ * in up to three events per packet. These occur in order, and are:28572857+ * - the normal completion event (may be omitted)28582858 * - the low part of the timestamp28592859 * - the high part of the timestamp28602860+ *28612861+ * It's possible for multiple completion events to appear before the28622862+ * corresponding timestamps. So we can for example get:28632863+ * COMP N28642864+ * COMP N+128652865+ * TS_LO N28662866+ * TS_HI N28672867+ * TS_LO N+128682868+ * TS_HI N+128692869+ *28702870+ * In addition it's also possible for the adjacent completions to be28712871+ * merged, so we may not see COMP N above. As such, the completion28722872+ * events are not very useful here.28602873 *28612874 * Each part of the timestamp is itself split across two 16 bit28622875 * fields in the event.···2878286528792866 switch (tx_ev_type) {28802867 case TX_TIMESTAMP_EVENT_TX_EV_COMPLETION:28812881- /* In case of Queue flush or FLR, we might have received28822882- * the previous TX completion event but not the Timestamp28832883- * events.28842884- */28852885- if (tx_queue->completed_desc_ptr != tx_queue->ptr_mask)28862886- efx_xmit_done(tx_queue, tx_queue->completed_desc_ptr);28872887-28882888- tx_ev_desc_ptr = EFX_QWORD_FIELD(*event,28892889- ESF_DZ_TX_DESCR_INDX);28902890- tx_queue->completed_desc_ptr =28912891- tx_ev_desc_ptr & tx_queue->ptr_mask;28682868+ /* Ignore this event - see above. */28922869 break;2893287028942871 case TX_TIMESTAMP_EVENT_TX_EV_TSTAMP_LO:···28902887 ts_part = efx_ef10_extract_event_ts(event);28912888 tx_queue->completed_timestamp_major = ts_part;2892288928932893- efx_xmit_done(tx_queue, tx_queue->completed_desc_ptr);28942894- tx_queue->completed_desc_ptr = tx_queue->ptr_mask;28902890+ efx_xmit_done_single(tx_queue);28952891 break;2896289228972893 default:
···208208 * avoid cache-line ping-pong between the xmit path and the209209 * completion path.210210 * @merge_events: Number of TX merged completion events211211- * @completed_desc_ptr: Most recent completed pointer - only used with212212- * timestamping.213211 * @completed_timestamp_major: Top part of the most recent tx timestamp.214212 * @completed_timestamp_minor: Low part of the most recent tx timestamp.215213 * @insert_count: Current insert pointer···267269 unsigned int merge_events;268270 unsigned int bytes_compl;269271 unsigned int pkts_compl;270270- unsigned int completed_desc_ptr;271272 u32 completed_timestamp_major;272273 u32 completed_timestamp_minor;273274
+38
drivers/net/ethernet/sfc/tx.c
···535535 return efx_enqueue_skb(tx_queue, skb);536536}537537538538+void efx_xmit_done_single(struct efx_tx_queue *tx_queue)539539+{540540+ unsigned int pkts_compl = 0, bytes_compl = 0;541541+ unsigned int read_ptr;542542+ bool finished = false;543543+544544+ read_ptr = tx_queue->read_count & tx_queue->ptr_mask;545545+546546+ while (!finished) {547547+ struct efx_tx_buffer *buffer = &tx_queue->buffer[read_ptr];548548+549549+ if (!efx_tx_buffer_in_use(buffer)) {550550+ struct efx_nic *efx = tx_queue->efx;551551+552552+ netif_err(efx, hw, efx->net_dev,553553+ "TX queue %d spurious single TX completion\n",554554+ tx_queue->queue);555555+ efx_schedule_reset(efx, RESET_TYPE_TX_SKIP);556556+ return;557557+ }558558+559559+ /* Need to check the flag before dequeueing. */560560+ if (buffer->flags & EFX_TX_BUF_SKB)561561+ finished = true;562562+ efx_dequeue_buffer(tx_queue, buffer, &pkts_compl, &bytes_compl);563563+564564+ ++tx_queue->read_count;565565+ read_ptr = tx_queue->read_count & tx_queue->ptr_mask;566566+ }567567+568568+ tx_queue->pkts_compl += pkts_compl;569569+ tx_queue->bytes_compl += bytes_compl;570570+571571+ EFX_WARN_ON_PARANOID(pkts_compl != 1);572572+573573+ efx_xmit_done_check_empty(tx_queue);574574+}575575+538576void efx_init_tx_queue_core_txq(struct efx_tx_queue *tx_queue)539577{540578 struct efx_nic *efx = tx_queue->efx;
+16-13
drivers/net/ethernet/sfc/tx_common.c
···8080 tx_queue->xmit_more_available = false;8181 tx_queue->timestamping = (efx_ptp_use_mac_tx_timestamps(efx) &&8282 tx_queue->channel == efx_ptp_channel(efx));8383- tx_queue->completed_desc_ptr = tx_queue->ptr_mask;8483 tx_queue->completed_timestamp_major = 0;8584 tx_queue->completed_timestamp_minor = 0;8685···209210 while (read_ptr != stop_index) {210211 struct efx_tx_buffer *buffer = &tx_queue->buffer[read_ptr];211212212212- if (!(buffer->flags & EFX_TX_BUF_OPTION) &&213213- unlikely(buffer->len == 0)) {213213+ if (!efx_tx_buffer_in_use(buffer)) {214214 netif_err(efx, tx_err, efx->net_dev,215215- "TX queue %d spurious TX completion id %x\n",215215+ "TX queue %d spurious TX completion id %d\n",216216 tx_queue->queue, read_ptr);217217 efx_schedule_reset(efx, RESET_TYPE_TX_SKIP);218218 return;···221223222224 ++tx_queue->read_count;223225 read_ptr = tx_queue->read_count & tx_queue->ptr_mask;226226+ }227227+}228228+229229+void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue)230230+{231231+ if ((int)(tx_queue->read_count - tx_queue->old_write_count) >= 0) {232232+ tx_queue->old_write_count = READ_ONCE(tx_queue->write_count);233233+ if (tx_queue->read_count == tx_queue->old_write_count) {234234+ /* Ensure that read_count is flushed. */235235+ smp_mb();236236+ tx_queue->empty_read_count =237237+ tx_queue->read_count | EFX_EMPTY_COUNT_VALID;238238+ }224239 }225240}226241···267256 netif_tx_wake_queue(tx_queue->core_txq);268257 }269258270270- /* Check whether the hardware queue is now empty */271271- if ((int)(tx_queue->read_count - tx_queue->old_write_count) >= 0) {272272- tx_queue->old_write_count = READ_ONCE(tx_queue->write_count);273273- if (tx_queue->read_count == tx_queue->old_write_count) {274274- smp_mb();275275- tx_queue->empty_read_count =276276- tx_queue->read_count | EFX_EMPTY_COUNT_VALID;277277- }278278- }259259+ efx_xmit_done_check_empty(tx_queue);279260}280261281262/* Remove buffers put into a tx_queue for the current packet.
+6
drivers/net/ethernet/sfc/tx_common.h
···2121 unsigned int *pkts_compl,2222 unsigned int *bytes_compl);23232424+static inline bool efx_tx_buffer_in_use(struct efx_tx_buffer *buffer)2525+{2626+ return buffer->len || (buffer->flags & EFX_TX_BUF_OPTION);2727+}2828+2929+void efx_xmit_done_check_empty(struct efx_tx_queue *tx_queue);2430void efx_xmit_done(struct efx_tx_queue *tx_queue, unsigned int index);25312632void efx_enqueue_unwind(struct efx_tx_queue *tx_queue,
···2424static void dwmac1000_core_init(struct mac_device_info *hw,2525 struct net_device *dev)2626{2727+ struct stmmac_priv *priv = netdev_priv(dev);2728 void __iomem *ioaddr = hw->pcsr;2829 u32 value = readl(ioaddr + GMAC_CONTROL);2930 int mtu = dev->mtu;···3635 * Broadcom tags can look like invalid LLC/SNAP packets and cause the3736 * hardware to truncate packets on reception.3837 */3939- if (netdev_uses_dsa(dev))3838+ if (netdev_uses_dsa(dev) || !priv->plat->enh_desc)4039 value &= ~GMAC_CONTROL_ACS;41404241 if (mtu > 1500)
+11-8
drivers/net/ipvlan/ipvlan_core.c
···293293 }294294 if (dev)295295 dev_put(dev);296296+ cond_resched();296297 }297298}298299···499498 struct ethhdr *ethh = eth_hdr(skb);500499 int ret = NET_XMIT_DROP;501500502502- /* In this mode we dont care about multicast and broadcast traffic */503503- if (is_multicast_ether_addr(ethh->h_dest)) {504504- pr_debug_ratelimited("Dropped {multi|broad}cast of type=[%x]\n",505505- ntohs(skb->protocol));506506- kfree_skb(skb);507507- goto out;508508- }509509-510501 /* The ipvlan is a pseudo-L2 device, so the packets that we receive511502 * will have L2; which need to discarded and processed further512503 * in the net-ns of the main-device.513504 */514505 if (skb_mac_header_was_set(skb)) {506506+ /* In this mode we dont care about507507+ * multicast and broadcast traffic */508508+ if (is_multicast_ether_addr(ethh->h_dest)) {509509+ pr_debug_ratelimited(510510+ "Dropped {multi|broad}cast of type=[%x]\n",511511+ ntohs(skb->protocol));512512+ kfree_skb(skb);513513+ goto out;514514+ }515515+515516 skb_pull(skb, sizeof(*ethh));516517 skb->mac_header = (typeof(skb->mac_header))~0U;517518 skb_reset_network_header(skb);
···7373 /* same phy as above, with just a different OUI */7474 .phy_id = 0x002bdc00,7575 .phy_id_mask = 0xfffffc00,7676+ .name = "Broadcom BCM63XX (2)",7677 /* PHY_BASIC_FEATURES */7778 .flags = PHY_IS_INTERNAL,7879 .config_init = bcm63xx_config_init,
+2-1
drivers/net/phy/phy.c
···727727 phy_trigger_machine(phydev);728728 }729729730730- if (phy_clear_interrupt(phydev))730730+ /* did_interrupt() may have cleared the interrupt already */731731+ if (!phydev->drv->did_interrupt && phy_clear_interrupt(phydev))731732 goto phy_err;732733 return IRQ_HANDLED;733734
+5-1
drivers/net/phy/phy_device.c
···286286 if (!mdio_bus_phy_may_suspend(phydev))287287 return 0;288288289289+ phydev->suspended_by_mdio_bus = 1;290290+289291 return phy_suspend(phydev);290292}291293···296294 struct phy_device *phydev = to_phy_device(dev);297295 int ret;298296299299- if (!mdio_bus_phy_may_suspend(phydev))297297+ if (!phydev->suspended_by_mdio_bus)300298 goto no_resume;299299+300300+ phydev->suspended_by_mdio_bus = 0;301301302302 ret = phy_resume(phydev);303303 if (ret < 0)
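The phy_device.c fix records at suspend time that the MDIO bus layer actually suspended the PHY, and keys resume off that flag instead of re-running the may-suspend check, whose result can change in between. A toy model of that pairing, with invented names:

```c
#include <stdbool.h>

struct dev {
	bool may_suspend;       /* policy; can change while suspended */
	bool suspended_by_bus;  /* records what suspend actually did */
	int power;              /* 1 = on, 0 = off */
};

static void bus_suspend(struct dev *d)
{
	if (!d->may_suspend)
		return;
	d->suspended_by_bus = true;
	d->power = 0;
}

static void bus_resume(struct dev *d)
{
	/* Key point of the fix: trust the flag recorded at suspend
	 * time, not a fresh policy check that may no longer match
	 * what suspend actually did. */
	if (!d->suspended_by_bus)
		return;
	d->suspended_by_bus = false;
	d->power = 1;
}
```

With the asymmetric version (re-checking the policy in resume), a policy change while suspended would leave the device powered off forever; the flag makes suspend and resume exact mirrors.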
+7-1
drivers/net/phy/phylink.c
···761761 config.interface = interface;762762763763 ret = phylink_validate(pl, supported, &config);764764- if (ret)764764+ if (ret) {765765+ phylink_warn(pl, "validation of %s with support %*pb and advertisement %*pb failed: %d\n",766766+ phy_modes(config.interface),767767+ __ETHTOOL_LINK_MODE_MASK_NBITS, phy->supported,768768+ __ETHTOOL_LINK_MODE_MASK_NBITS, config.advertising,769769+ ret);765770 return ret;771771+ }766772767773 phy->phylink = pl;768774 phy->phy_link_change = phylink_phy_change;
+10-4
drivers/net/slip/slhc.c
···232232 struct cstate *cs = lcs->next;233233 unsigned long deltaS, deltaA;234234 short changes = 0;235235- int hlen;235235+ int nlen, hlen;236236 unsigned char new_seq[16];237237 unsigned char *cp = new_seq;238238 struct iphdr *ip;···248248 return isize;249249250250 ip = (struct iphdr *) icp;251251+ if (ip->version != 4 || ip->ihl < 5)252252+ return isize;251253252254 /* Bail if this packet isn't TCP, or is an IP fragment */253255 if (ip->protocol != IPPROTO_TCP || (ntohs(ip->frag_off) & 0x3fff)) {···260258 comp->sls_o_tcp++;261259 return isize;262260 }263263- /* Extract TCP header */261261+ nlen = ip->ihl * 4;262262+ if (isize < nlen + sizeof(*th))263263+ return isize;264264265265- th = (struct tcphdr *)(((unsigned char *)ip) + ip->ihl*4);266266- hlen = ip->ihl*4 + th->doff*4;265265+ th = (struct tcphdr *)(icp + nlen);266266+ if (th->doff < sizeof(struct tcphdr) / 4)267267+ return isize;268268+ hlen = nlen + th->doff * 4;267269268270 /* Bail if the TCP packet isn't `compressible' (i.e., ACK isn't set or269271 * some other control bit is set). Also uncompressible if
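The slhc.c hunk hardens slhc_compress() against malformed input: it validates the IP version and IHL, checks that the buffer really holds the full IP header plus a TCP header, and bounds th->doff before computing hlen. The same checks in a standalone sketch (standard IPv4/TCP header offsets; the function name is illustrative, not the kernel's):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Returns true only if buf/len contain a plausible IPv4 + TCP header
 * pair, mirroring the bounds checks added in the patch above. */
static bool ipv4_tcp_headers_sane(const uint8_t *buf, size_t len)
{
	if (len < 20)                         /* minimal IPv4 header */
		return false;
	if ((buf[0] >> 4) != 4 || (buf[0] & 0x0f) < 5)
		return false;                 /* ip->version != 4 || ihl < 5 */
	size_t nlen = (size_t)(buf[0] & 0x0f) * 4; /* IP header bytes */
	if (len < nlen + 20)                  /* room for minimal TCP header */
		return false;
	unsigned doff = buf[nlen + 12] >> 4;  /* TCP data-offset field */
	if (doff < 5)                         /* th->doff < sizeof(tcphdr)/4 */
		return false;
	return len >= nlen + doff * 4;        /* full hlen must fit */
}
```

Each rejection corresponds to a `return isize;` bail-out in the patch, i.e. "send uncompressed rather than read past the buffer".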
···327327config RTC_DRV_MAX8907328328 tristate "Maxim MAX8907"329329 depends on MFD_MAX8907 || COMPILE_TEST330330+ select REGMAP_IRQ330331 help331332 If you say yes here you will get support for the332333 RTC of Maxim MAX8907 PMIC.
+24-3
drivers/s390/block/dasd.c
···178178 (unsigned long) block);179179 INIT_LIST_HEAD(&block->ccw_queue);180180 spin_lock_init(&block->queue_lock);181181+ INIT_LIST_HEAD(&block->format_list);182182+ spin_lock_init(&block->format_lock);181183 timer_setup(&block->timer, dasd_block_timeout, 0);182184 spin_lock_init(&block->profile.lock);183185···1781177917821780 if (dasd_ese_needs_format(cqr->block, irb)) {17831781 if (rq_data_dir((struct request *)cqr->callback_data) == READ) {17841784- device->discipline->ese_read(cqr);17821782+ device->discipline->ese_read(cqr, irb);17851783 cqr->status = DASD_CQR_SUCCESS;17861784 cqr->stopclk = now;17871785 dasd_device_clear_timer(device);17881786 dasd_schedule_device_bh(device);17891787 return;17901788 }17911791- fcqr = device->discipline->ese_format(device, cqr);17891789+ fcqr = device->discipline->ese_format(device, cqr, irb);17921790 if (IS_ERR(fcqr)) {17911791+ if (PTR_ERR(fcqr) == -EINVAL) {17921792+ cqr->status = DASD_CQR_ERROR;17931793+ return;17941794+ }17931795 /*17941796 * If we can't format now, let the request go17951797 one extra round. Maybe we can format later.17961798 */17971799 cqr->status = DASD_CQR_QUEUED;18001800+ dasd_schedule_device_bh(device);18011801+ return;17981802 } else {17991803 fcqr->status = DASD_CQR_QUEUED;18001804 cqr->status = DASD_CQR_QUEUED;···27562748{27572749 struct request *req;27582750 blk_status_t error = BLK_STS_OK;27512751+ unsigned int proc_bytes;27592752 int status;2760275327612754 req = (struct request *) cqr->callback_data;27622755 dasd_profile_end(cqr->block, cqr, req);2763275627572757+ proc_bytes = cqr->proc_bytes;27642758 status = cqr->block->base->discipline->free_cp(cqr, req);27652759 if (status < 0)27662760 error = errno_to_blk_status(status);···27932783 blk_mq_end_request(req, error);27942784 blk_mq_run_hw_queues(req->q, true);27952785 } else {27962796- blk_mq_complete_request(req);27862786+ /*27872787+ * Partial completed requests can happen with ESE devices.27882788+ * During read we might have gotten a NRF error and have to27892789+ * complete a request partially.27902790+ */27912791+ if (proc_bytes) {27922792+ blk_update_request(req, BLK_STS_OK,27932793+ blk_rq_bytes(req) - proc_bytes);27942794+ blk_mq_requeue_request(req, true);27952795+ } else {27962796+ blk_mq_complete_request(req);27972797+ }27972798 }27982799}27992800
+156-7
drivers/s390/block/dasd_eckd.c
···207207 geo->head |= head;208208}209209210210+/*211211+ * calculate failing track from sense data depending if212212+ * it is an EAV device or not213213+ */214214+static int dasd_eckd_track_from_irb(struct irb *irb, struct dasd_device *device,215215+ sector_t *track)216216+{217217+ struct dasd_eckd_private *private = device->private;218218+ u8 *sense = NULL;219219+ u32 cyl;220220+ u8 head;221221+222222+ sense = dasd_get_sense(irb);223223+ if (!sense) {224224+ DBF_DEV_EVENT(DBF_WARNING, device, "%s",225225+ "ESE error no sense data\n");226226+ return -EINVAL;227227+ }228228+ if (!(sense[27] & DASD_SENSE_BIT_2)) {229229+ DBF_DEV_EVENT(DBF_WARNING, device, "%s",230230+ "ESE error no valid track data\n");231231+ return -EINVAL;232232+ }233233+234234+ if (sense[27] & DASD_SENSE_BIT_3) {235235+ /* enhanced addressing */236236+ cyl = sense[30] << 20;237237+ cyl |= (sense[31] & 0xF0) << 12;238238+ cyl |= sense[28] << 8;239239+ cyl |= sense[29];240240+ } else {241241+ cyl = sense[29] << 8;242242+ cyl |= sense[30];243243+ }244244+ head = sense[31] & 0x0F;245245+ *track = cyl * private->rdc_data.trk_per_cyl + head;246246+ return 0;247247+}248248+210249static int set_timestamp(struct ccw1 *ccw, struct DE_eckd_data *data,211250 struct dasd_device *device)212251{···30252986 0, NULL);30262987}3027298829892989+static bool test_and_set_format_track(struct dasd_format_entry *to_format,29902990+ struct dasd_block *block)29912991+{29922992+ struct dasd_format_entry *format;29932993+ unsigned long flags;29942994+ bool rc = false;29952995+29962996+ spin_lock_irqsave(&block->format_lock, flags);29972997+ list_for_each_entry(format, &block->format_list, list) {29982998+ if (format->track == to_format->track) {29992999+ rc = true;30003000+ goto out;30013001+ }30023002+ }30033003+ list_add_tail(&to_format->list, &block->format_list);30043004+30053005+out:30063006+ spin_unlock_irqrestore(&block->format_lock, flags);30073007+ return rc;30083008+}30093009+30103010+static void clear_format_track(struct dasd_format_entry *format,30113011+ struct dasd_block *block)30123012+{30133013+ unsigned long flags;30143014+30153015+ spin_lock_irqsave(&block->format_lock, flags);30163016+ list_del_init(&format->list);30173017+ spin_unlock_irqrestore(&block->format_lock, flags);30183018+}30193019+30283020/*30293021 * Callback function to free ESE format requests.30303022 */···30632993{30642994 struct dasd_device *device = cqr->startdev;30652995 struct dasd_eckd_private *private = device->private;29962996+ struct dasd_format_entry *format = data;3066299729982998+ clear_format_track(format, cqr->basedev->block);30672999 private->count--;30683000 dasd_ffree_request(cqr, device);30693001}3070300230713003static struct dasd_ccw_req *30723072-dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr)30043004+dasd_eckd_ese_format(struct dasd_device *startdev, struct dasd_ccw_req *cqr,30053005+ struct irb *irb)30733006{30743007 struct dasd_eckd_private *private;30083008+ struct dasd_format_entry *format;30753009 struct format_data_t fdata;30763010 unsigned int recs_per_trk;30773011 struct dasd_ccw_req *fcqr;···30853011 struct request *req;30863012 sector_t first_trk;30873013 sector_t last_trk;30143014+ sector_t curr_trk;30883015 int rc;3089301630903017 req = cqr->callback_data;30913091- base = cqr->block->base;30183018+ block = cqr->block;30193019+ base = block->base;30923020 private = base->private;30933093- block = base->block;30943021 blksize = block->bp_block;30953022 recs_per_trk = recs_per_track(&private->rdc_data, 0, blksize);30233023+ format = &startdev->format_entry;3096302430973025 first_trk = blk_rq_pos(req) >> block->s2b_shift;30983026 sector_div(first_trk, recs_per_trk);30993027 last_trk =31003028 (blk_rq_pos(req) + blk_rq_sectors(req) - 1) >> block->s2b_shift;31013029 sector_div(last_trk, recs_per_trk);30303030+ rc = dasd_eckd_track_from_irb(irb, base, &curr_trk);30313031+ if (rc)30323032+ return ERR_PTR(rc);3102303331033103- fdata.start_unit = first_trk;31043104- fdata.stop_unit = last_trk;30343034+ if (curr_trk < first_trk || curr_trk > last_trk) {30353035+ DBF_DEV_EVENT(DBF_WARNING, startdev,30363036+ "ESE error track %llu not within range %llu - %llu\n",30373037+ curr_trk, first_trk, last_trk);30383038+ return ERR_PTR(-EINVAL);30393039+ }30403040+ format->track = curr_trk;30413041+ /* test if track is already in formatting by another thread */30423042+ if (test_and_set_format_track(format, block))30433043+ return ERR_PTR(-EEXIST);30443044+30453045+ fdata.start_unit = curr_trk;30463046+ fdata.stop_unit = curr_trk;31053047 fdata.blksize = blksize;31063048 fdata.intensity = private->uses_cdl ? DASD_FMT_INT_COMPAT : 0;31073049···31343044 return fcqr;3135304531363046 fcqr->callback = dasd_eckd_ese_format_cb;30473047+ fcqr->callback_data = (void *) format;3137304831383049 return fcqr;31393050}···31423051/*31433052 * When data is read from an unformatted area of an ESE volume, this function31443053 * returns zeroed data and thereby mimics a read of zero data.30543054+ *30553055+ * The first unformatted track is the one that got the NRF error, the address is30563056+ * encoded in the sense data.30573057+ *30583058+ * All tracks before have returned valid data and should not be touched.30593059+ * All tracks after the unformatted track might be formatted or not. This is30603060+ * currently not known, remember the processed data and return the remainder of30613061+ * the request to the blocklayer in __dasd_cleanup_cqr().31453062 */31463146-static void dasd_eckd_ese_read(struct dasd_ccw_req *cqr)30633063+static int dasd_eckd_ese_read(struct dasd_ccw_req *cqr, struct irb *irb)31473064{30653065+ struct dasd_eckd_private *private;30663066+ sector_t first_trk, last_trk;30673067+ sector_t first_blk, last_blk;31483068 unsigned int blksize, off;30693069+ unsigned int recs_per_trk;31493070 struct dasd_device *base;31503071 struct req_iterator iter;30723072+ struct dasd_block *block;30733073+ unsigned int skip_block;30743074+ unsigned int blk_count;31513075 struct request *req;31523076 struct bio_vec bv;30773077+ sector_t curr_trk;30783078+ sector_t end_blk;31533079 char *dst;30803080+ int rc;3154308131553082 req = (struct request *) cqr->callback_data;31563083 base = cqr->block->base;31573084 blksize = base->block->bp_block;30853085+ block = cqr->block;30863086+ private = base->private;30873087+ skip_block = 0;30883088+ blk_count = 0;30893089+30903090+ recs_per_trk = recs_per_track(&private->rdc_data, 0, blksize);30913091+ first_trk = first_blk = blk_rq_pos(req) >> block->s2b_shift;30923092+ sector_div(first_trk, recs_per_trk);30933093+ last_trk = last_blk =30943094+ (blk_rq_pos(req) + blk_rq_sectors(req) - 1) >> block->s2b_shift;30953095+ sector_div(last_trk, recs_per_trk);30963096+ rc = dasd_eckd_track_from_irb(irb, base, &curr_trk);30973097+ if (rc)30983098+ return rc;30993099+31003100+ /* sanity check if the current track from sense data is valid */31013101+ if (curr_trk < first_trk || curr_trk > last_trk) {31023102+ DBF_DEV_EVENT(DBF_WARNING, base,31033103+ "ESE error track %llu not within range %llu - %llu\n",31043104+ curr_trk, first_trk, last_trk);31053105+ return -EINVAL;31063106+ }31073107+31083108+ /*31093109+ * if not the first track got the NRF error we have to skip over valid31103110+ * blocks31113111+ */31123112+ if (curr_trk != first_trk)31133113+ skip_block = curr_trk * recs_per_trk - first_blk;31143114+31153115+ /* we have no information beyond the current track */31163116+ end_blk = (curr_trk + 1) * recs_per_trk;3158311731593118 rq_for_each_segment(bv, req, iter) {31603119 dst = page_address(bv.bv_page) + bv.bv_offset;31613120 for (off = 0; off < bv.bv_len; off += blksize) {31623162- if (dst && rq_data_dir(req) == READ) {31213121+ if (first_blk + blk_count >= end_blk) {31223122+ cqr->proc_bytes = blk_count * blksize;31233123+ return 0;31243124+ }31253125+ if (dst && !skip_block) {31633126 dst += off;31643127 memset(dst, 0, blksize);31283128+ } else {31293129+ skip_block--;31653130 }31313131+ blk_count++;31663132 }31673133 }31343134+ return 0;31683135}3169313631703137/*
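dasd_eckd_track_from_irb() in the hunk above assembles the failing cylinder and head from individual sense bytes, using a wider layout when the enhanced-addressing bit is set. A userspace sketch of just that bit manipulation (the SENSE_ENHANCED mask and the trk_per_cyl value are illustrative stand-ins, not the kernel's constants):

```c
#include <stdint.h>

#define SENSE_ENHANCED 0x10  /* stand-in for DASD_SENSE_BIT_3 */

/* Decode cylinder/head from a 32-byte sense buffer and return the
 * linear track number, mirroring the byte layout in the hunk above. */
static uint64_t track_from_sense(const uint8_t sense[32], unsigned trk_per_cyl)
{
	uint32_t cyl;
	uint8_t head = sense[31] & 0x0F;

	if (sense[27] & SENSE_ENHANCED) {
		/* enhanced addressing: cylinder spans sense[28..31] */
		cyl  = (uint32_t)sense[30] << 20;
		cyl |= (uint32_t)(sense[31] & 0xF0) << 12;
		cyl |= (uint32_t)sense[28] << 8;
		cyl |= sense[29];
	} else {
		cyl  = (uint32_t)sense[29] << 8;
		cyl |= sense[30];
	}
	return (uint64_t)cyl * trk_per_cyl + head;
}
```

The low nibble of sense[31] always carries the head; only the cylinder encoding widens on enhanced-addressing (EAV) devices.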
···15891589 tty_debug_hangup(tty, "freeing structure\n");15901590 /*15911591 * The release_tty function takes care of the details of clearing15921592- * the slots and preserving the termios structure. The tty_unlock_pair15931593- * should be safe as we keep a kref while the tty is locked (so the15941594- * unlock never unlocks a freed tty).15921592+ * the slots and preserving the termios structure.15951593 */15961594 mutex_lock(&tty_mutex);15971595 tty_port_set_kopened(tty->port, 0);···16191621 tty_debug_hangup(tty, "freeing structure\n");16201622 /*16211623 * The release_tty function takes care of the details of clearing16221622- * the slots and preserving the termios structure. The tty_unlock_pair16231623- * should be safe as we keep a kref while the tty is locked (so the16241624- * unlock never unlocks a freed tty).16241624+ * the slots and preserving the termios structure.16251625 */16261626 mutex_lock(&tty_mutex);16271627 release_tty(tty, idx);···27302734 struct serial_struct32 v32;27312735 struct serial_struct v;27322736 int err;27332733- memset(&v, 0, sizeof(struct serial_struct));2734273727352735- if (!tty->ops->set_serial)27382738+ memset(&v, 0, sizeof(v));27392739+ memset(&v32, 0, sizeof(v32));27402740+27412741+ if (!tty->ops->get_serial)27362742 return -ENOTTY;27372743 err = tty->ops->get_serial(tty, &v);27382744 if (!err) {
+4-3
drivers/usb/chipidea/udc.c
···15301530static void ci_hdrc_gadget_connect(struct usb_gadget *_gadget, int is_active)15311531{15321532 struct ci_hdrc *ci = container_of(_gadget, struct ci_hdrc, gadget);15331533- unsigned long flags;1534153315351534 if (is_active) {15361535 pm_runtime_get_sync(&_gadget->dev);15371536 hw_device_reset(ci);15381538- spin_lock_irqsave(&ci->lock, flags);15371537+ spin_lock_irq(&ci->lock);15391538 if (ci->driver) {15401539 hw_device_state(ci, ci->ep0out->qh.dma);15411540 usb_gadget_set_state(_gadget, USB_STATE_POWERED);15411541+ spin_unlock_irq(&ci->lock);15421542 usb_udc_vbus_handler(_gadget, true);15431543+ } else {15441544+ spin_unlock_irq(&ci->lock);15431545 }15441544- spin_unlock_irqrestore(&ci->lock, flags);15451546 } else {15461547 usb_udc_vbus_handler(_gadget, false);15471548 if (ci->driver)
···11/* SPDX-License-Identifier: GPL-2.0 */22/* iTCO Vendor Specific Support hooks */33#ifdef CONFIG_ITCO_VENDOR_SUPPORT44+extern int iTCO_vendorsupport;45extern void iTCO_vendor_pre_start(struct resource *, unsigned int);56extern void iTCO_vendor_pre_stop(struct resource *);67extern int iTCO_vendor_check_noreboot_on(void);78#else99+#define iTCO_vendorsupport 0810#define iTCO_vendor_pre_start(acpibase, heartbeat) {}911#define iTCO_vendor_pre_stop(acpibase) {}1012#define iTCO_vendor_check_noreboot_on() 1
+9-7
drivers/watchdog/iTCO_vendor_support.c
···3939/* Broken BIOS */4040#define BROKEN_BIOS 91141414242-static int vendorsupport;4343-module_param(vendorsupport, int, 0);4242+int iTCO_vendorsupport;4343+EXPORT_SYMBOL(iTCO_vendorsupport);4444+4545+module_param_named(vendorsupport, iTCO_vendorsupport, int, 0);4446MODULE_PARM_DESC(vendorsupport, "iTCO vendor specific support mode, default="4547 "0 (none), 1=SuperMicro Pent3, 911=Broken SMI BIOS");4648···154152void iTCO_vendor_pre_start(struct resource *smires,155153 unsigned int heartbeat)156154{157157- switch (vendorsupport) {155155+ switch (iTCO_vendorsupport) {158156 case SUPERMICRO_OLD_BOARD:159157 supermicro_old_pre_start(smires);160158 break;···167165168166void iTCO_vendor_pre_stop(struct resource *smires)169167{170170- switch (vendorsupport) {168168+ switch (iTCO_vendorsupport) {171169 case SUPERMICRO_OLD_BOARD:172170 supermicro_old_pre_stop(smires);173171 break;···180178181179int iTCO_vendor_check_noreboot_on(void)182180{183183- switch (vendorsupport) {181181+ switch (iTCO_vendorsupport) {184182 case SUPERMICRO_OLD_BOARD:185183 return 0;186184 default:···191189192190static int __init iTCO_vendor_init_module(void)193191{194194- if (vendorsupport == SUPERMICRO_NEW_BOARD) {192192+ if (iTCO_vendorsupport == SUPERMICRO_NEW_BOARD) {195193 pr_warn("Option vendorsupport=%d is no longer supported, "196194 "please use the w83627hf_wdt driver instead\n",197195 SUPERMICRO_NEW_BOARD);198196 return -EINVAL;199197 }200200- pr_info("vendor-support=%d\n", vendorsupport);198198+ pr_info("vendor-support=%d\n", iTCO_vendorsupport);201199 return 0;202200}203201
+16-12
drivers/watchdog/iTCO_wdt.c
···459459 if (!p->tco_res)460460 return -ENODEV;461461462462- p->smi_res = platform_get_resource(pdev, IORESOURCE_IO, ICH_RES_IO_SMI);463463- if (!p->smi_res)464464- return -ENODEV;465465-466462 p->iTCO_version = pdata->version;467463 p->pci_dev = to_pci_dev(dev->parent);464464+465465+ p->smi_res = platform_get_resource(pdev, IORESOURCE_IO, ICH_RES_IO_SMI);466466+ if (p->smi_res) {467467+ /* The TCO logic uses the TCO_EN bit in the SMI_EN register */468468+ if (!devm_request_region(dev, p->smi_res->start,469469+ resource_size(p->smi_res),470470+ pdev->name)) {471471+ pr_err("I/O address 0x%04llx already in use, device disabled\n",472472+ (u64)SMI_EN(p));473473+ return -EBUSY;474474+ }475475+ } else if (iTCO_vendorsupport ||476476+ turn_SMI_watchdog_clear_off >= p->iTCO_version) {477477+ pr_err("SMI I/O resource is missing\n");478478+ return -ENODEV;479479+ }468480469481 iTCO_wdt_no_reboot_bit_setup(p, pdata);470482···504492 /* Set the NO_REBOOT bit to prevent later reboots, just for sure */505493 p->update_no_reboot_bit(p->no_reboot_priv, true);506494507507- /* The TCO logic uses the TCO_EN bit in the SMI_EN register */508508- if (!devm_request_region(dev, p->smi_res->start,509509- resource_size(p->smi_res),510510- pdev->name)) {511511- pr_err("I/O address 0x%04llx already in use, device disabled\n",512512- (u64)SMI_EN(p));513513- return -EBUSY;514514- }515495 if (turn_SMI_watchdog_clear_off >= p->iTCO_version) {516496 /*517497 * Bit 13: TCO_EN -> 0
···8181 * List of server addresses.8282 */8383struct afs_addr_list {8484- struct rcu_head rcu; /* Must be first */8484+ struct rcu_head rcu;8585 refcount_t usage;8686 u32 version; /* Version */8787 unsigned char max_addrs;
+2-2
fs/btrfs/block-group.c
···856856 found_raid1c34 = true;857857 up_read(&sinfo->groups_sem);858858 }859859- if (found_raid56)859859+ if (!found_raid56)860860 btrfs_clear_fs_incompat(fs_info, RAID56);861861- if (found_raid1c34)861861+ if (!found_raid1c34)862862 btrfs_clear_fs_incompat(fs_info, RAID1C34);863863 }864864}
+4
fs/btrfs/inode.c
···94969496 ret = btrfs_sync_log(trans, BTRFS_I(old_inode)->root, &ctx);94979497 if (ret)94989498 commit_transaction = true;94999499+ } else if (sync_log) {95009500+ mutex_lock(&root->log_mutex);95019501+ list_del(&ctx.list);95029502+ mutex_unlock(&root->log_mutex);94999503 }95009504 if (commit_transaction) {95019505 ret = btrfs_commit_transaction(trans);
···539539 mk = ci->ci_master_key->payload.data[0];540540541541 /*542542+ * With proper, non-racy use of FS_IOC_REMOVE_ENCRYPTION_KEY, all inodes543543+ * protected by the key were cleaned by sync_filesystem(). But if544544+ * userspace is still using the files, inodes can be dirtied between545545+ * then and now. We mustn't lose any writes, so skip dirty inodes here.546546+ */547547+ if (inode->i_state & I_DIRTY_ALL)548548+ return 0;549549+550550+ /*542551 * Note: since we aren't holding ->mk_secret_sem, the result here can543552 * immediately become outdated. But there's no correctness problem with544553 * unnecessarily evicting. Nor is there a correctness problem with not
···301301 * FR_SENT: request is in userspace, waiting for an answer302302 * FR_FINISHED: request is finished303303 * FR_PRIVATE: request is on private list304304+ * FR_ASYNC: request is asynchronous304305 */305306enum fuse_req_flag {306307 FR_ISREPLY,···315314 FR_SENT,316315 FR_FINISHED,317316 FR_PRIVATE,317317+ FR_ASYNC,318318};319319320320/**
···191191 struct llist_head put_llist;192192 struct work_struct ref_work;193193 struct completion done;194194- struct rcu_head rcu;195194};196195197196struct io_ring_ctx {···343344 struct sockaddr __user *addr;344345 int __user *addr_len;345346 int flags;347347+ unsigned long nofile;346348};347349348350struct io_sync {···398398 struct filename *filename;399399 struct statx __user *buffer;400400 struct open_how how;401401+ unsigned long nofile;401402};402403403404struct io_files_update {···25792578 return ret;25802579 }2581258025812581+ req->open.nofile = rlimit(RLIMIT_NOFILE);25822582 req->flags |= REQ_F_NEED_CLEANUP;25832583 return 0;25842584}···26212619 return ret;26222620 }2623262126222622+ req->open.nofile = rlimit(RLIMIT_NOFILE);26242623 req->flags |= REQ_F_NEED_CLEANUP;26252624 return 0;26262625}···26402637 if (ret)26412638 goto err;2642263926432643- ret = get_unused_fd_flags(req->open.how.flags);26402640+ ret = __get_unused_fd_flags(req->open.how.flags, req->open.nofile);26442641 if (ret < 0)26452642 goto err;26462643···33253322 accept->addr = u64_to_user_ptr(READ_ONCE(sqe->addr));33263323 accept->addr_len = u64_to_user_ptr(READ_ONCE(sqe->addr2));33273324 accept->flags = READ_ONCE(sqe->accept_flags);33253325+ accept->nofile = rlimit(RLIMIT_NOFILE);33283326 return 0;33293327#else33303328 return -EOPNOTSUPP;···3342333833433339 file_flags = force_nonblock ? O_NONBLOCK : 0;33443340 ret = __sys_accept4_file(req->file, file_flags, accept->addr,33453345- accept->addr_len, accept->flags);33413341+ accept->addr_len, accept->flags,33423342+ accept->nofile);33463343 if (ret == -EAGAIN && force_nonblock)33473344 return -EAGAIN;33483345 if (ret == -ERESTARTSYS)···41374132{41384133 ssize_t ret = 0;4139413441354135+ if (!sqe)41364136+ return 0;41374137+41404138 if (io_op_defs[req->opcode].file_table) {41414139 ret = io_grab_files(req);41424140 if (unlikely(ret))···49164908 if (sqe_flags & (IOSQE_IO_LINK|IOSQE_IO_HARDLINK)) {49174909 req->flags |= REQ_F_LINK;49184910 INIT_LIST_HEAD(&req->link_list);49114911+49124912+ if (io_alloc_async_ctx(req)) {49134913+ ret = -EAGAIN;49144914+ goto err_req;49154915+ }49194916 ret = io_req_defer_prep(req, sqe);49204917 if (ret)49214918 req->flags |= REQ_F_FAIL_LINK;···53445331 complete(&data->done);53455332}5346533353475347-static void __io_file_ref_exit_and_free(struct rcu_head *rcu)53345334+static void io_file_ref_exit_and_free(struct work_struct *work)53485335{53495349- struct fixed_file_data *data = container_of(rcu, struct fixed_file_data,53505350- rcu);53365336+ struct fixed_file_data *data;53375337+53385338+ data = container_of(work, struct fixed_file_data, ref_work);53395339+53405340+ /*53415341+ * Ensure any percpu-ref atomic switch callback has run, it could have53425342+ * been in progress when the files were being unregistered. Once53435343+ * that's done, we can safely exit and free the ref and containing53445344+ * data structure.53455345+ */53465346+ rcu_barrier();53515347 percpu_ref_exit(&data->refs);53525348 kfree(data);53535353-}53545354-53555355-static void io_file_ref_exit_and_free(struct rcu_head *rcu)53565356-{53575357- /*53585358- * We need to order our exit+free call against the potentially53595359- * existing call_rcu() for switching to atomic. One way to do that53605360- * is to have this rcu callback queue the final put and free, as we53615361- * could otherwise have a pre-existing atomic switch complete _after_53625362- * the free callback we queued.53635363- */53645364- call_rcu(rcu, __io_file_ref_exit_and_free);53655349}5366535053675351static int io_sqe_files_unregister(struct io_ring_ctx *ctx)···53795369 for (i = 0; i < nr_tables; i++)53805370 kfree(data->table[i].files);53815371 kfree(data->table);53825382- call_rcu(&data->rcu, io_file_ref_exit_and_free);53725372+ INIT_WORK(&data->ref_work, io_file_ref_exit_and_free);53735373+ queue_work(system_wq, &data->ref_work);53835374 ctx->file_data = NULL;53845375 ctx->nr_user_files = 0;53855376 return 0;
+48-6
fs/locks.c
···725725{726726 locks_delete_global_blocked(waiter);727727 list_del_init(&waiter->fl_blocked_member);728728- waiter->fl_blocker = NULL;729728}730729731730static void __locks_wake_up_blocks(struct file_lock *blocker)···739740 waiter->fl_lmops->lm_notify(waiter);740741 else741742 wake_up(&waiter->fl_wait);743743+744744+ /*745745+ * The setting of fl_blocker to NULL marks the "done"746746+ * point in deleting a block. Paired with acquire at the top747747+ * of locks_delete_block().748748+ */749749+ smp_store_release(&waiter->fl_blocker, NULL);742750 }743751}744752···759753{760754 int status = -ENOENT;761755756756+ /*757757+ * If fl_blocker is NULL, it won't be set again as this thread "owns"758758+ * the lock and is the only one that might try to claim the lock.759759+ *760760+ * We use acquire/release to manage fl_blocker so that we can761761+ * optimize away taking the blocked_lock_lock in many cases.762762+ *763763+ * The smp_load_acquire guarantees two things:764764+ *765765+ * 1/ that fl_blocked_requests can be tested locklessly. If something766766+ * was recently added to that list it must have been in a locked region767767+ * *before* the locked region when fl_blocker was set to NULL.768768+ *769769+ * 2/ that no other thread is accessing 'waiter', so it is safe to free770770+ * it. __locks_wake_up_blocks is careful not to touch waiter after771771+ * fl_blocker is released.772772+ *773773+ * If a lockless check of fl_blocker shows it to be NULL, we know that774774+ * no new locks can be inserted into its fl_blocked_requests list, and775775+ * can avoid doing anything further if the list is empty.776776+ */777777+ if (!smp_load_acquire(&waiter->fl_blocker) &&778778+ list_empty(&waiter->fl_blocked_requests))779779+ return status;780780+762781 spin_lock(&blocked_lock_lock);763782 if (waiter->fl_blocker)764783 status = 0;765784 __locks_wake_up_blocks(waiter);766785 __locks_delete_block(waiter);786786+787787+ /*788788+ * The setting of fl_blocker to NULL marks the "done" point in deleting789789+ * a block. Paired with acquire at the top of this function.790790+ */791791+ smp_store_release(&waiter->fl_blocker, NULL);767792 spin_unlock(&blocked_lock_lock);768793 return status;769794}···13871350 error = posix_lock_inode(inode, fl, NULL);13881351 if (error != FILE_LOCK_DEFERRED)13891352 break;13901390- error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);13531353+ error = wait_event_interruptible(fl->fl_wait,13541354+ list_empty(&fl->fl_blocked_member));13911355 if (error)13921356 break;13931357 }···14731435 error = posix_lock_inode(inode, &fl, NULL);14741436 if (error != FILE_LOCK_DEFERRED)14751437 break;14761476- error = wait_event_interruptible(fl.fl_wait, !fl.fl_blocker);14381438+ error = wait_event_interruptible(fl.fl_wait,14391439+ list_empty(&fl.fl_blocked_member));14771440 if (!error) {14781441 /*14791442 * If we've been sleeping someone might have···1677163816781639 locks_dispose_list(&dispose);16791640 error = wait_event_interruptible_timeout(new_fl->fl_wait,16801680- !new_fl->fl_blocker, break_time);16411641+ list_empty(&new_fl->fl_blocked_member),16421642+ break_time);1681164316821644 percpu_down_read(&file_rwsem);16831645 spin_lock(&ctx->flc_lock);···21622122 error = flock_lock_inode(inode, fl);21632123 if (error != FILE_LOCK_DEFERRED)21642124 break;21652165- error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);21252125+ error = wait_event_interruptible(fl->fl_wait,21262126+ list_empty(&fl->fl_blocked_member));21662127 if (error)21672128 break;21682129 }···24402399 error = vfs_lock_file(filp, cmd, fl, NULL);24412400 if (error != FILE_LOCK_DEFERRED)24422401 break;24432443- error = wait_event_interruptible(fl->fl_wait, !fl->fl_blocker);24022402+ error = wait_event_interruptible(fl->fl_wait,24032403+ list_empty(&fl->fl_blocked_member));24442404 if (error)24452405 break;24462406 }
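The fs/locks.c change above documents a release/acquire pairing: fl_blocker is cleared with smp_store_release() only after the waiter is unlinked, so locks_delete_block() can test it with smp_load_acquire() and skip blocked_lock_lock entirely when the waiter was already woken. The shape of that fast path in portable C11 atomics (a toy model, not the kernel code; the blocked-requests list is reduced to a counter):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct waiter {
	_Atomic(void *) blocker;  /* stands in for fl_blocker */
	int blocked_count;        /* stands in for fl_blocked_requests */
};

/* Writer side: unlink first, then publish "done" with a release store,
 * so a reader that observes NULL also observes the unlink. */
static void wake_waiter(struct waiter *w)
{
	w->blocked_count = 0;
	atomic_store_explicit(&w->blocker, NULL, memory_order_release);
}

/* Reader side: if the acquire load sees NULL and the list is empty,
 * the slow path (taking the global lock) can be skipped. */
static bool delete_block_fast_path(struct waiter *w)
{
	return atomic_load_explicit(&w->blocker, memory_order_acquire) == NULL
		&& w->blocked_count == 0;
}
```

This is also why the wait_event_interruptible() conditions in the hunk switch from `!fl->fl_blocker` to `list_empty(&fl->fl_blocked_member)`: the release store of NULL is now the very last step of deleting a block, so the list membership, not the pointer, is what the sleeper must watch.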
+1
fs/nfs/client.c
···153153 if ((clp = kzalloc(sizeof(*clp), GFP_KERNEL)) == NULL)154154 goto error_0;155155156156+ clp->cl_minorversion = cl_init->minorversion;156157 clp->cl_nfs_mod = cl_init->nfs_mod;157158 if (!try_module_get(clp->cl_nfs_mod->owner))158159 goto error_dealloc;
+9
fs/nfs/fs_context.c
···832832 if (len > maxnamlen)833833 goto out_hostname;834834835835+ kfree(ctx->nfs_server.hostname);836836+835837 /* N.B. caller will free nfs_server.hostname in all cases */836838 ctx->nfs_server.hostname = kmemdup_nul(dev_name, len, GFP_KERNEL);837839 if (!ctx->nfs_server.hostname)···12411239 goto out_version_unavailable;12421240 }12431241 ctx->nfs_mod = nfs_mod;12421242+ }12431243+12441244+ /* Ensure the filesystem context has the correct fs_type */12451245+ if (fc->fs_type != ctx->nfs_mod->nfs_fs) {12461246+ module_put(fc->fs_type->owner);12471247+ __module_get(ctx->nfs_mod->nfs_fs->owner);12481248+ fc->fs_type = ctx->nfs_mod->nfs_fs;12441249 }12451250 return 0;12461251
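The first fs_context.c hunk frees any hostname left over from an earlier parse before duplicating the new one, closing a leak when the option is supplied twice. That free-before-overwrite setter pattern in isolation (names are illustrative):

```c
#include <stdlib.h>
#include <string.h>

struct parse_ctx {
	char *hostname;  /* NULL until first set */
};

/* Store a fresh NUL-terminated copy of name[0..len), releasing any
 * previously stored value so repeated parsing does not leak. */
static int set_hostname(struct parse_ctx *ctx, const char *name, size_t len)
{
	char *copy = malloc(len + 1);

	if (!copy)
		return -1;
	memcpy(copy, name, len);
	copy[len] = '\0';
	free(ctx->hostname);  /* drop the earlier value instead of leaking it */
	ctx->hostname = copy;
	return 0;
}
```

Allocating the new copy before freeing the old one keeps the context valid if allocation fails, which matches the patch's ordering of kfree() before kmemdup_nul().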
+2
fs/nfs/fscache.c
···3131struct nfs_server_key {3232 struct {3333 uint16_t nfsversion; /* NFS protocol version */3434+ uint32_t minorversion; /* NFSv4 minor version */3435 uint16_t family; /* address family */3536 __be16 port; /* IP port */3637 } hdr;···56555756 memset(&key, 0, sizeof(key));5857 key.hdr.nfsversion = clp->rpc_ops->version;5858+ key.hdr.minorversion = clp->cl_minorversion;5959 key.hdr.family = clp->cl_addr.ss_family;60606161 switch (clp->cl_addr.ss_family) {
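Adding cl_minorversion to the key header means two mounts that differ only in NFSv4 minor version now index distinct cache cookies. A toy version of such a memcmp-compared key (hypothetical struct, illustrative only):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct server_key {
	uint16_t nfsversion;
	uint32_t minorversion;	/* the field the fix adds */
	uint16_t family;
	uint16_t port;
};

/* Keys are compared as raw bytes, so they must be zeroed first:
 * otherwise struct padding garbage can make equal keys miscompare. */
static void key_init(struct server_key *k, uint16_t ver, uint32_t minor)
{
	memset(k, 0, sizeof(*k));
	k->nfsversion = ver;
	k->minorversion = minor;
}

static int key_equal(const struct server_key *a, const struct server_key *b)
{
	return memcmp(a, b, sizeof(*a)) == 0;
}
```

Zeroing before filling is essential whenever padding takes part in the comparison, which is why the original code memset()s the key first.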
+1-1
fs/nfs/namespace.c
···153153 /* Open a new filesystem context, transferring parameters from the154154 * parent superblock, including the network namespace.155155 */156156- fc = fs_context_for_submount(&nfs_fs_type, path->dentry);156156+ fc = fs_context_for_submount(path->mnt->mnt_sb->s_type, path->dentry);157157 if (IS_ERR(fc))158158 return ERR_CAST(fc);159159
fs/open.c
···860860 * the return value of d_splice_alias(), then the caller needs to perform dput()861861 * on it after finish_open().862862 *863863- * On successful return @file is a fully instantiated open file. After this, if864864- * an error occurs in ->atomic_open(), it needs to clean up with fput().865865- *866863 * Returns zero on success or -errno if the open failed.867864 */868865int finish_open(struct file *file, struct dentry *dentry,
+1
fs/overlayfs/Kconfig
···9393 bool "Overlayfs: auto enable inode number mapping"9494 default n9595 depends on OVERLAY_FS9696+ depends on 64BIT9697 help9798 If this config option is enabled then overlay filesystems will use9899 unused high bits in undelying filesystem inode numbers to map all
+6
fs/overlayfs/file.c
···244244 if (iocb->ki_flags & IOCB_WRITE) {245245 struct inode *inode = file_inode(orig_iocb->ki_filp);246246247247+ /* Actually acquired in ovl_write_iter() */248248+ __sb_writers_acquired(file_inode(iocb->ki_filp)->i_sb,249249+ SB_FREEZE_WRITE);247250 file_end_write(iocb->ki_filp);248251 ovl_copyattr(ovl_inode_real(inode), inode);249252 }···349346 goto out;350347351348 file_start_write(real.file);349349+ /* Pacify lockdep, same trick as done in aio_write() */350350+ __sb_writers_release(file_inode(real.file)->i_sb,351351+ SB_FREEZE_WRITE);352352 aio_req->fd = real;353353 real.flags = 0;354354 aio_req->orig_iocb = iocb;
fs/overlayfs/super.c
···14111411 if (ofs->config.xino == OVL_XINO_ON)14121412 pr_info("\"xino=on\" is useless with all layers on same fs, ignore.\n");14131413 ofs->xino_mode = 0;14141414+ } else if (ofs->config.xino == OVL_XINO_OFF) {14151415+ ofs->xino_mode = -1;14141416 } else if (ofs->config.xino == OVL_XINO_ON && ofs->xino_mode < 0) {14151417 /*14161418 * This is a roundup of number of bits needed for encoding···16251623 sb->s_stack_depth = 0;16261624 sb->s_maxbytes = MAX_LFS_FILESIZE;16271625 /* Assume underlaying fs uses 32bit inodes unless proven otherwise */16281628- if (ofs->config.xino != OVL_XINO_OFF)16261626+ if (ofs->config.xino != OVL_XINO_OFF) {16291627 ofs->xino_mode = BITS_PER_LONG - 32;16281628+ if (!ofs->xino_mode) {16291629+ pr_warn("xino not supported on 32bit kernel, falling back to xino=off.\n");16301630+ ofs->config.xino = OVL_XINO_OFF;16311631+ }16321632+ }1630163316311634 /* alloc/destroy_inode needed for setting up traps in inode cache */16321635 sb->s_op = &ovl_super_operations;
include/crypto/curve25519.h
···3333 const u8 secret[CURVE25519_KEY_SIZE],3434 const u8 basepoint[CURVE25519_KEY_SIZE])3535{3636- if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519))3636+ if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519) &&3737+ (!IS_ENABLED(CONFIG_CRYPTO_CURVE25519_X86) || IS_ENABLED(CONFIG_AS_ADX)))3738 curve25519_arch(mypublic, secret, basepoint);3839 else3940 curve25519_generic(mypublic, secret, basepoint);···5049 CURVE25519_KEY_SIZE)))5150 return false;52515353- if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519))5252+ if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_CURVE25519) &&5353+ (!IS_ENABLED(CONFIG_CRYPTO_CURVE25519_X86) || IS_ENABLED(CONFIG_AS_ADX)))5454 curve25519_base_arch(pub, secret);5555 else5656 curve25519_generic(pub, secret, curve25519_base_point);
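The change above tightens a compile-time dispatch: the arch implementation is used only when every build-time prerequisite holds, otherwise the generic C code runs. A simplified stand-in for that pattern (these CONFIG_* macros and the one-line IS_ENABLED() are illustrative; the real kernel macro also copes with undefined symbols):

```c
#include <assert.h>

/* Simplified stand-ins for Kconfig symbols, always defined to 0 or 1. */
#define CONFIG_HAVE_ARCH_IMPL	1
#define CONFIG_NEEDS_ADX	1
#define CONFIG_HAS_ADX		0
#define IS_ENABLED(x)		(x)

static int arch_impl(void)    { return 1; }
static int generic_impl(void) { return 2; }

/* Mirrors the fixed condition: take the arch path only when it is
 * built AND its extra assembler requirement is satisfied; the compiler
 * folds the constant condition and drops the dead branch. */
static int dispatch(void)
{
	if (IS_ENABLED(CONFIG_HAVE_ARCH_IMPL) &&
	    (!IS_ENABLED(CONFIG_NEEDS_ADX) || IS_ENABLED(CONFIG_HAS_ADX)))
		return arch_impl();
	return generic_impl();
}
```

With the values above the ADX requirement is unmet, so dispatch() falls back to the generic path, exactly as the patched condition does.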
+2-2
include/drm/drm_dp_mst_helper.h
···8181 * &drm_dp_mst_topology_mgr.base.lock.8282 * @num_sdp_stream_sinks: Number of stream sinks. Protected by8383 * &drm_dp_mst_topology_mgr.base.lock.8484- * @available_pbn: Available bandwidth for this port. Protected by8484+ * @full_pbn: Max possible bandwidth for this port. Protected by8585 * &drm_dp_mst_topology_mgr.base.lock.8686 * @next: link to next port on this branch device8787 * @aux: i2c aux transport to talk to device connected to this port, protected···126126 u8 dpcd_rev;127127 u8 num_sdp_streams;128128 u8 num_sdp_stream_sinks;129129- uint16_t available_pbn;129129+ uint16_t full_pbn;130130 struct list_head next;131131 /**132132 * @mstb: the branch device connected to this port, if there is one.
include/linux/file.h
···8585extern int replace_fd(unsigned fd, struct file *file, unsigned flags);8686extern void set_close_on_exec(unsigned int fd, int flag);8787extern bool get_close_on_exec(unsigned int fd);8888+extern int __get_unused_fd_flags(unsigned flags, unsigned long nofile);8889extern int get_unused_fd_flags(unsigned flags);8990extern void put_unused_fd(unsigned int fd);9091
include/linux/phy.h
···357357 * is_gigabit_capable: Set to true if PHY supports 1000Mbps358358 * has_fixups: Set to true if this phy has fixups/quirks.359359 * suspended: Set to true if this phy has been suspended successfully.360360+ * suspended_by_mdio_bus: Set to true if this phy was suspended by MDIO bus.360361 * sysfs_links: Internal boolean tracking sysfs symbolic links setup/removal.361362 * loopback_enabled: Set true if this phy has been loopbacked successfully.362363 * state: state of the PHY for management purposes···397396 unsigned is_gigabit_capable:1;398397 unsigned has_fixups:1;399398 unsigned suspended:1;399399+ unsigned suspended_by_mdio_bus:1;400400 unsigned sysfs_links:1;401401 unsigned loopback_enabled:1;402402···559557 /*560558 * Checks if the PHY generated an interrupt.561559 * For multi-PHY devices with shared PHY interrupt pin560560+ * Set interrupt bits have to be cleared.562561 */563562 int (*did_interrupt)(struct phy_device *phydev);564563
include/linux/rhashtable.h
···972972/**973973 * rhashtable_lookup_get_insert_key - lookup and insert object into hash table974974 * @ht: hash table975975+ * @key: key975976 * @obj: pointer to hash head inside object976977 * @params: hash table parameters977977- * @data: pointer to element data already in hashes978978 *979979 * Just like rhashtable_lookup_insert_key(), but this function returns the980980 * object if it exists, NULL if it does not and the insertion was successful,
+2-1
include/linux/socket.h
···401401 int addr_len);402402extern int __sys_accept4_file(struct file *file, unsigned file_flags,403403 struct sockaddr __user *upeer_sockaddr,404404- int __user *upeer_addrlen, int flags);404404+ int __user *upeer_addrlen, int flags,405405+ unsigned long nofile);405406extern int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr,406407 int __user *upeer_addrlen, int flags);407408extern int __sys_socket(int family, int type, int protocol);
+3-2
include/linux/vmalloc.h
···141141142142extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,143143 unsigned long pgoff);144144-void vmalloc_sync_all(void);145145-144144+void vmalloc_sync_mappings(void);145145+void vmalloc_sync_unmappings(void);146146+146147/*147148 * Lowlevel-APIs (not for driver use!)148149 */
+16
include/linux/workqueue.h
···487487 *488488 * We queue the work to the CPU on which it was submitted, but if the CPU dies489489 * it can be processed by another CPU.490490+ *491491+ * Memory-ordering properties: If it returns %true, guarantees that all stores492492+ * preceding the call to queue_work() in the program order will be visible from493493+ * the CPU which will execute @work by the time such work executes, e.g.,494494+ *495495+ * { x is initially 0 }496496+ *497497+ * CPU0 CPU1498498+ *499499+ * WRITE_ONCE(x, 1); [ @work is being executed ]500500+ * r0 = queue_work(wq, work); r1 = READ_ONCE(x);501501+ *502502+ * Forbids: r0 == true && r1 == 0490503 */491504static inline bool queue_work(struct workqueue_struct *wq,492505 struct work_struct *work)···559546 * This puts a job in the kernel-global workqueue if it was not already560547 * queued and leaves it in the same position on the kernel-global561548 * workqueue otherwise.549549+ *550550+ * Shares the same memory-ordering properties of queue_work(), cf. the551551+ * DocBook header of queue_work().562552 */563553static inline bool schedule_work(struct work_struct *work)564554{
init/Kconfig
···767767 bool768768769769config CC_HAS_INT128770770- def_bool y771771- depends on !$(cc-option,-D__SIZEOF_INT128__=0)770770+ def_bool !$(cc-option,$(m64-flag) -D__SIZEOF_INT128__=0) && 64BIT772771773772#774773# For architectures that know their GCC __int128 support is sound
kernel/cgroup/cgroup.c
···35423542static int cgroup_io_pressure_show(struct seq_file *seq, void *v)35433543{35443544 struct cgroup *cgrp = seq_css(seq)->cgroup;35453545- struct psi_group *psi = cgroup_id(cgrp) == 1 ? &psi_system : &cgrp->psi;35453545+ struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : &cgrp->psi;3546354635473547 return psi_show(seq, psi, PSI_IO);35483548}35493549static int cgroup_memory_pressure_show(struct seq_file *seq, void *v)35503550{35513551 struct cgroup *cgrp = seq_css(seq)->cgroup;35523552- struct psi_group *psi = cgroup_id(cgrp) == 1 ? &psi_system : &cgrp->psi;35523552+ struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : &cgrp->psi;3553355335543554 return psi_show(seq, psi, PSI_MEM);35553555}35563556static int cgroup_cpu_pressure_show(struct seq_file *seq, void *v)35573557{35583558 struct cgroup *cgrp = seq_css(seq)->cgroup;35593559- struct psi_group *psi = cgroup_id(cgrp) == 1 ? &psi_system : &cgrp->psi;35593559+ struct psi_group *psi = cgroup_ino(cgrp) == 1 ? &psi_system : &cgrp->psi;3560356035613561 return psi_show(seq, psi, PSI_CPU);35623562}···44004400 }44014401 } while (!css_set_populated(cset) && list_empty(&cset->dying_tasks));4402440244034403- if (!list_empty(&cset->tasks))44034403+ if (!list_empty(&cset->tasks)) {44044404 it->task_pos = cset->tasks.next;44054405- else if (!list_empty(&cset->mg_tasks))44054405+ it->cur_tasks_head = &cset->tasks;44064406+ } else if (!list_empty(&cset->mg_tasks)) {44064407 it->task_pos = cset->mg_tasks.next;44074407- else44084408+ it->cur_tasks_head = &cset->mg_tasks;44094409+ } else {44084410 it->task_pos = cset->dying_tasks.next;44114411+ it->cur_tasks_head = &cset->dying_tasks;44124412+ }4409441344104414 it->tasks_head = &cset->tasks;44114415 it->mg_tasks_head = &cset->mg_tasks;···44674463 else44684464 it->task_pos = it->task_pos->next;4469446544704470- if (it->task_pos == it->tasks_head)44664466+ if (it->task_pos == it->tasks_head) {44714467 it->task_pos = it->mg_tasks_head->next;44724472- if 
(it->task_pos == it->mg_tasks_head)44684468+ it->cur_tasks_head = it->mg_tasks_head;44694469+ }44704470+ if (it->task_pos == it->mg_tasks_head) {44734471 it->task_pos = it->dying_tasks_head->next;44724472+ it->cur_tasks_head = it->dying_tasks_head;44734473+ }44744474 if (it->task_pos == it->dying_tasks_head)44754475 css_task_iter_advance_css_set(it);44764476 } else {···44934485 goto repeat;4494448644954487 /* and dying leaders w/o live member threads */44964496- if (!atomic_read(&task->signal->live))44884488+ if (it->cur_tasks_head == it->dying_tasks_head &&44894489+ !atomic_read(&task->signal->live))44974490 goto repeat;44984491 } else {44994492 /* skip all dying ones */45004500- if (task->flags & PF_EXITING)44934493+ if (it->cur_tasks_head == it->dying_tasks_head)45014494 goto repeat;45024495 }45034496}···46044595 struct kernfs_open_file *of = s->private;46054596 struct css_task_iter *it = of->priv;4606459745984598+ if (pos)45994599+ (*pos)++;46004600+46074601 return css_task_iter_next(it);46084602}46094603···46224610 * from position 0, so we can simply keep iterating on !0 *pos.46234611 */46244612 if (!it) {46254625- if (WARN_ON_ONCE((*pos)++))46134613+ if (WARN_ON_ONCE((*pos)))46264614 return ERR_PTR(-EINVAL);4627461546284616 it = kzalloc(sizeof(*it), GFP_KERNEL);···46304618 return ERR_PTR(-ENOMEM);46314619 of->priv = it;46324620 css_task_iter_start(&cgrp->self, iter_flags, it);46334633- } else if (!(*pos)++) {46214621+ } else if (!(*pos)) {46344622 css_task_iter_end(it);46354623 css_task_iter_start(&cgrp->self, iter_flags, it);46364636- }46244624+ } else46254625+ return it->cur_task;4637462646384627 return cgroup_procs_next(s, NULL, NULL);46394628}···62706257 cgroup_bpf_get(sock_cgroup_ptr(skcd));62716258 return;62726259 }62606260+62616261+ /* Don't associate the sock with unrelated interrupted task's cgroup. */62626262+ if (in_interrupt())62636263+ return;6273626462746265 rcu_read_lock();62756266
+55-38
kernel/futex.c
···385385 */386386static struct futex_hash_bucket *hash_futex(union futex_key *key)387387{388388- u32 hash = jhash2((u32*)&key->both.word,389389- (sizeof(key->both.word)+sizeof(key->both.ptr))/4,388388+ u32 hash = jhash2((u32 *)key, offsetof(typeof(*key), both.offset) / 4,390389 key->both.offset);390390+391391 return &futex_queues[hash & (futex_hashsize - 1)];392392}393393···429429430430 switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) {431431 case FUT_OFF_INODE:432432- ihold(key->shared.inode); /* implies smp_mb(); (B) */432432+ smp_mb(); /* explicit smp_mb(); (B) */433433 break;434434 case FUT_OFF_MMSHARED:435435 futex_get_mm(key); /* implies smp_mb(); (B) */···463463464464 switch (key->both.offset & (FUT_OFF_INODE|FUT_OFF_MMSHARED)) {465465 case FUT_OFF_INODE:466466- iput(key->shared.inode);467466 break;468467 case FUT_OFF_MMSHARED:469468 mmdrop(key->private.mm);···504505 return timeout;505506}506507508508+/*509509+ * Generate a machine wide unique identifier for this inode.510510+ *511511+ * This relies on u64 not wrapping in the life-time of the machine; which with512512+ * 1ns resolution means almost 585 years.513513+ *514514+ * This further relies on the fact that a well formed program will not unmap515515+ * the file while it has a (shared) futex waiting on it. This mapping will have516516+ * a file reference which pins the mount and inode.517517+ *518518+ * If for some reason an inode gets evicted and read back in again, it will get519519+ * a new sequence number and will _NOT_ match, even though it is the exact same520520+ * file.521521+ *522522+ * It is important that match_futex() will never have a false-positive, esp.523523+ * for PI futexes that can mess up the state. 
The above argues that false-negatives524524+ * are only possible for malformed programs.525525+ */526526+static u64 get_inode_sequence_number(struct inode *inode)527527+{528528+ static atomic64_t i_seq;529529+ u64 old;530530+531531+ /* Does the inode already have a sequence number? */532532+ old = atomic64_read(&inode->i_sequence);533533+ if (likely(old))534534+ return old;535535+536536+ for (;;) {537537+ u64 new = atomic64_add_return(1, &i_seq);538538+ if (WARN_ON_ONCE(!new))539539+ continue;540540+541541+ old = atomic64_cmpxchg_relaxed(&inode->i_sequence, 0, new);542542+ if (old)543543+ return old;544544+ return new;545545+ }546546+}547547+507548/**508549 * get_futex_key() - Get parameters which are the keys for a futex509550 * @uaddr: virtual address of the futex···556517 *557518 * The key words are stored in @key on success.558519 *559559- * For shared mappings, it's (page->index, file_inode(vma->vm_file),560560- * offset_within_page). For private mappings, it's (uaddr, current->mm).561561- * We can usually work out the index without swapping in the page.520520+ * For shared mappings (when @fshared), the key is:521521+ * ( inode->i_sequence, page->index, offset_within_page )522522+ * [ also see get_inode_sequence_number() ]523523+ *524524+ * For private mappings (or when !@fshared), the key is:525525+ * ( current->mm, address, 0 )526526+ *527527+ * This allows (cross process, where applicable) identification of the futex528528+ * without keeping the page pinned for the duration of the FUTEX_WAIT.562529 *563530 * lock_page() might sleep, the caller should not hold a spinlock.564531 */···704659 key->private.mm = mm;705660 key->private.address = address;706661707707- get_futex_key_refs(key); /* implies smp_mb(); (B) */708708-709662 } else {710663 struct inode *inode;711664···735692 goto again;736693 }737694738738- /*739739- * Take a reference unless it is about to be freed. 
Previously740740- * this reference was taken by ihold under the page lock741741- * pinning the inode in place so i_lock was unnecessary. The742742- * only way for this check to fail is if the inode was743743- * truncated in parallel which is almost certainly an744744- * application bug. In such a case, just retry.745745- *746746- * We are not calling into get_futex_key_refs() in file-backed747747- * cases, therefore a successful atomic_inc return below will748748- * guarantee that get_futex_key() will still imply smp_mb(); (B).749749- */750750- if (!atomic_inc_not_zero(&inode->i_count)) {751751- rcu_read_unlock();752752- put_page(page);753753-754754- goto again;755755- }756756-757757- /* Should be impossible but lets be paranoid for now */758758- if (WARN_ON_ONCE(inode->i_mapping != mapping)) {759759- err = -EFAULT;760760- rcu_read_unlock();761761- iput(inode);762762-763763- goto out;764764- }765765-766695 key->both.offset |= FUT_OFF_INODE; /* inode-based key */767767- key->shared.inode = inode;696696+ key->shared.i_seq = get_inode_sequence_number(inode);768697 key->shared.pgoff = basepage_index(tail);769698 rcu_read_unlock();770699 }700700+701701+ get_futex_key_refs(key); /* implies smp_mb(); (B) */771702772703out:773704 put_page(page);
kernel/pid.c
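The new get_inode_sequence_number() gives each inode a machine-wide unique, never-zero 64-bit tag, allocated lazily and race-free without locks. The same allocate-once pattern in userspace C11 atomics (a sketch, not the kernel code):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint64_t i_seq;	/* global generator; 0 means "unassigned" */

struct inode {
	_Atomic uint64_t i_sequence;
};

/* Assign a unique non-zero sequence number on first use; later calls
 * (and racing callers) all observe the same winning value. */
static uint64_t get_inode_sequence_number(struct inode *inode)
{
	uint64_t old = atomic_load(&inode->i_sequence);

	if (old)
		return old;

	for (;;) {
		/* fetch_add returns the previous value, hence the +1 */
		uint64_t new = atomic_fetch_add(&i_seq, 1) + 1;

		if (!new)	/* skip 0 on (theoretical) wraparound */
			continue;
		old = 0;
		if (atomic_compare_exchange_strong(&inode->i_sequence,
						   &old, new))
			return new;
		return old;	/* somebody else won the race */
	}
}
```

The compare-exchange ensures that even if two callers race on the same inode, both return the single value that actually landed in i_sequence.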
···247247 tmp = tmp->parent;248248 }249249250250+ /*251251+ * ENOMEM is not the most obvious choice especially for the case252252+ * where the child subreaper has already exited and the pid253253+ * namespace denies the creation of any new processes. But ENOMEM254254+ * is what we have exposed to userspace for a long time and it is255255+ * documented behavior for pid namespaces. So we can't easily256256+ * change it even if there were an error code better suited.257257+ */258258+ retval = -ENOMEM;259259+250260 if (unlikely(is_child_reaper(pid))) {251261 if (pid_ns_prepare_proc(ns))252262 goto out_free;
kernel/workqueue.c
···14111411 return;14121412 rcu_read_lock();14131413retry:14141414- if (req_cpu == WORK_CPU_UNBOUND)14151415- cpu = wq_select_unbound_cpu(raw_smp_processor_id());14161416-14171414 /* pwq which will be used unless @work is executing elsewhere */14181418- if (!(wq->flags & WQ_UNBOUND))14191419- pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);14201420- else14151415+ if (wq->flags & WQ_UNBOUND) {14161416+ if (req_cpu == WORK_CPU_UNBOUND)14171417+ cpu = wq_select_unbound_cpu(raw_smp_processor_id());14211418 pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu));14191419+ } else {14201420+ if (req_cpu == WORK_CPU_UNBOUND)14211421+ cpu = raw_smp_processor_id();14221422+ pwq = per_cpu_ptr(wq->cpu_pwqs, cpu);14231423+ }1422142414231425 /*14241426 * If @work was previously on a different pool, it might still be
+9-3
mm/madvise.c
···335335 }336336337337 page = pmd_page(orig_pmd);338338+339339+ /* Do not interfere with other mappings of this page */340340+ if (page_mapcount(page) != 1)341341+ goto huge_unlock;342342+338343 if (next - addr != HPAGE_PMD_SIZE) {339344 int err;340340-341341- if (page_mapcount(page) != 1)342342- goto huge_unlock;343345344346 get_page(page);345347 spin_unlock(ptl);···427425 addr -= PAGE_SIZE;428426 continue;429427 }428428+429429+ /* Do not interfere with other mappings of this page */430430+ if (page_mapcount(page) != 1)431431+ continue;430432431433 VM_BUG_ON_PAGE(PageTransCompound(page), page);432434
+68-49
mm/memcontrol.c
···22972297 #define MEMCG_DELAY_SCALING_SHIFT 142298229822992299/*23002300- * Scheduled by try_charge() to be executed from the userland return path23012301- * and reclaims memory over the high limit.23002300+ * Get the number of jiffies that we should penalise a mischievous cgroup which23012301+ * is exceeding its memory.high by checking both it and its ancestors.23022302 */23032303-void mem_cgroup_handle_over_high(void)23032303+static unsigned long calculate_high_delay(struct mem_cgroup *memcg,23042304+ unsigned int nr_pages)23042305{23052305- unsigned long usage, high, clamped_high;23062306- unsigned long pflags;23072307- unsigned long penalty_jiffies, overage;23082308- unsigned int nr_pages = current->memcg_nr_pages_over_high;23092309- struct mem_cgroup *memcg;23062306+ unsigned long penalty_jiffies;23072307+ u64 max_overage = 0;2310230823112311- if (likely(!nr_pages))23122312- return;23092309+ do {23102310+ unsigned long usage, high;23112311+ u64 overage;2313231223142314- memcg = get_mem_cgroup_from_mm(current->mm);23152315- reclaim_high(memcg, nr_pages, GFP_KERNEL);23162316- current->memcg_nr_pages_over_high = 0;23132313+ usage = page_counter_read(&memcg->memory);23142314+ high = READ_ONCE(memcg->high);23152315+23162316+ /*23172317+ * Prevent division by 0 in overage calculation by acting as if23182318+ * it was a threshold of 1 page23192319+ */23202320+ high = max(high, 1UL);23212321+23222322+ overage = usage - high;23232323+ overage <<= MEMCG_DELAY_PRECISION_SHIFT;23242324+ overage = div64_u64(overage, high);23252325+23262326+ if (overage > max_overage)23272327+ max_overage = overage;23282328+ } while ((memcg = parent_mem_cgroup(memcg)) &&23292329+ !mem_cgroup_is_root(memcg));23302330+23312331+ if (!max_overage)23322332+ return 0;2317233323182334 /*23192319- * memory.high is breached and reclaim is unable to keep up. 
Throttle23202320- * allocators proactively to slow down excessive growth.23212321- *23222335 * We use overage compared to memory.high to calculate the number of23232336 * jiffies to sleep (penalty_jiffies). Ideally this value should be23242337 * fairly lenient on small overages, and increasingly harsh when the···23392326 * its crazy behaviour, so we exponentially increase the delay based on23402327 * overage amount.23412328 */23422342-23432343- usage = page_counter_read(&memcg->memory);23442344- high = READ_ONCE(memcg->high);23452345-23462346- if (usage <= high)23472347- goto out;23482348-23492349- /*23502350- * Prevent division by 0 in overage calculation by acting as if it was a23512351- * threshold of 1 page23522352- */23532353- clamped_high = max(high, 1UL);23542354-23552355- overage = div_u64((u64)(usage - high) << MEMCG_DELAY_PRECISION_SHIFT,23562356- clamped_high);23572357-23582358- penalty_jiffies = ((u64)overage * overage * HZ)23592359- >> (MEMCG_DELAY_PRECISION_SHIFT + MEMCG_DELAY_SCALING_SHIFT);23292329+ penalty_jiffies = max_overage * max_overage * HZ;23302330+ penalty_jiffies >>= MEMCG_DELAY_PRECISION_SHIFT;23312331+ penalty_jiffies >>= MEMCG_DELAY_SCALING_SHIFT;2360233223612333 /*23622334 * Factor in the task's own contribution to the overage, such that four···23582360 * application moving forwards and also permit diagnostics, albeit23592361 * extremely slowly.23602362 */23612361- penalty_jiffies = min(penalty_jiffies, MEMCG_MAX_HIGH_DELAY_JIFFIES);23632363+ return min(penalty_jiffies, MEMCG_MAX_HIGH_DELAY_JIFFIES);23642364+}23652365+23662366+/*23672367+ * Scheduled by try_charge() to be executed from the userland return path23682368+ * and reclaims memory over the high limit.23692369+ */23702370+void mem_cgroup_handle_over_high(void)23712371+{23722372+ unsigned long penalty_jiffies;23732373+ unsigned long pflags;23742374+ unsigned int nr_pages = current->memcg_nr_pages_over_high;23752375+ struct mem_cgroup *memcg;23762376+23772377+ if 
(likely(!nr_pages))23782378+ return;23792379+23802380+ memcg = get_mem_cgroup_from_mm(current->mm);23812381+ reclaim_high(memcg, nr_pages, GFP_KERNEL);23822382+ current->memcg_nr_pages_over_high = 0;23832383+23842384+ /*23852385+ * memory.high is breached and reclaim is unable to keep up. Throttle23862386+ * allocators proactively to slow down excessive growth.23872387+ */23882388+ penalty_jiffies = calculate_high_delay(memcg, nr_pages);2362238923632390 /*23642391 * Don't sleep if the amount of jiffies this memcg owes us is so low···40504027 struct mem_cgroup_thresholds *thresholds;40514028 struct mem_cgroup_threshold_ary *new;40524029 unsigned long usage;40534053- int i, j, size;40304030+ int i, j, size, entries;4054403140554032 mutex_lock(&memcg->thresholds_lock);40564033···40704047 __mem_cgroup_threshold(memcg, type == _MEMSWAP);4071404840724049 /* Calculate new number of threshold */40734073- size = 0;40504050+ size = entries = 0;40744051 for (i = 0; i < thresholds->primary->size; i++) {40754052 if (thresholds->primary->entries[i].eventfd != eventfd)40764053 size++;40544054+ else40554055+ entries++;40774056 }4078405740794058 new = thresholds->spare;40594059+40604060+ /* If no items related to eventfd have been cleared, nothing to do */40614061+ if (!entries)40624062+ goto unlock;4080406340814064 /* Set thresholds array to NULL if we don't have thresholds */40824065 if (!size) {···67116682 if (!mem_cgroup_sockets_enabled)67126683 return;6713668467146714- /*67156715- * Socket cloning can throw us here with sk_memcg already67166716- * filled. It won't however, necessarily happen from67176717- * process context. 
So the test for root memcg given67186718- * the current task's memcg won't help us in this case.67196719- *67206720- * Respecting the original socket's memcg is a better67216721- * decision in this case.67226722- */67236723- if (sk->sk_memcg) {67246724- css_get(&sk->sk_memcg->css);66856685+ /* Do not associate the sock with unrelated interrupted task's memcg. */66866686+ if (in_interrupt())67256687 return;67266726- }6727668867286689 rcu_read_lock();67296690 memcg = mem_cgroup_from_task(current);
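calculate_high_delay() above turns the worst ancestor overage into a fixed-point fraction, squares it, and shifts the product down into jiffies. The arithmetic can be checked in isolation; note that only MEMCG_DELAY_SCALING_SHIFT = 14 appears in the hunk, so the precision shift of 20 and the 2-second cap below are assumptions matching mainline:

```c
#include <assert.h>
#include <stdint.h>

#define DELAY_PRECISION_SHIFT	20		/* assumed, as in mainline */
#define DELAY_SCALING_SHIFT	14		/* from the hunk above */
#define HZ			1000
#define MAX_HIGH_DELAY_JIFFIES	(2 * HZ)	/* assumed 2 s cap */

/* Fixed-point overage fraction: (usage - high) / high, scaled up so
 * integer division keeps 20 bits of precision. */
static uint64_t overage_frac(uint64_t usage, uint64_t high)
{
	if (high < 1)
		high = 1;	/* avoid division by zero */
	if (usage <= high)
		return 0;
	return ((usage - high) << DELAY_PRECISION_SHIFT) / high;
}

/* Quadratic penalty: lenient near the limit, harsh far above it. */
static uint64_t penalty_jiffies(uint64_t max_overage)
{
	uint64_t p = max_overage * max_overage * HZ;

	p >>= DELAY_PRECISION_SHIFT;
	p >>= DELAY_SCALING_SHIFT;
	return p < MAX_HIGH_DELAY_JIFFIES ? p : MAX_HIGH_DELAY_JIFFIES;
}
```

For example, a cgroup at exactly 2x its memory.high has an overage fraction of 1.0 (1 << 20 in fixed point), which already saturates the cap.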
+18-9
mm/mmu_notifier.c
···307307 * ->release returns.308308 */309309 id = srcu_read_lock(&srcu);310310- hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist)310310+ hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist,311311+ srcu_read_lock_held(&srcu))311312 /*312313 * If ->release runs before mmu_notifier_unregister it must be313314 * handled, as it's the only way for the driver to flush all···371370372371 id = srcu_read_lock(&srcu);373372 hlist_for_each_entry_rcu(subscription,374374- &mm->notifier_subscriptions->list, hlist) {373373+ &mm->notifier_subscriptions->list, hlist,374374+ srcu_read_lock_held(&srcu)) {375375 if (subscription->ops->clear_flush_young)376376 young |= subscription->ops->clear_flush_young(377377 subscription, mm, start, end);···391389392390 id = srcu_read_lock(&srcu);393391 hlist_for_each_entry_rcu(subscription,394394- &mm->notifier_subscriptions->list, hlist) {392392+ &mm->notifier_subscriptions->list, hlist,393393+ srcu_read_lock_held(&srcu)) {395394 if (subscription->ops->clear_young)396395 young |= subscription->ops->clear_young(subscription,397396 mm, start, end);···410407411408 id = srcu_read_lock(&srcu);412409 hlist_for_each_entry_rcu(subscription,413413- &mm->notifier_subscriptions->list, hlist) {410410+ &mm->notifier_subscriptions->list, hlist,411411+ srcu_read_lock_held(&srcu)) {414412 if (subscription->ops->test_young) {415413 young = subscription->ops->test_young(subscription, mm,416414 address);···432428433429 id = srcu_read_lock(&srcu);434430 hlist_for_each_entry_rcu(subscription,435435- &mm->notifier_subscriptions->list, hlist) {431431+ &mm->notifier_subscriptions->list, hlist,432432+ srcu_read_lock_held(&srcu)) {436433 if (subscription->ops->change_pte)437434 subscription->ops->change_pte(subscription, mm, address,438435 pte);···481476 int id;482477483478 id = srcu_read_lock(&srcu);484484- hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist) {479479+ hlist_for_each_entry_rcu(subscription, 
&subscriptions->list, hlist,480480+ srcu_read_lock_held(&srcu)) {485481 const struct mmu_notifier_ops *ops = subscription->ops;486482487483 if (ops->invalidate_range_start) {···534528 int id;535529536530 id = srcu_read_lock(&srcu);537537- hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist) {531531+ hlist_for_each_entry_rcu(subscription, &subscriptions->list, hlist,532532+ srcu_read_lock_held(&srcu)) {538533 /*539534 * Call invalidate_range here too to avoid the need for the540535 * subsystem of having to register an invalidate_range_end···589582590583 id = srcu_read_lock(&srcu);591584 hlist_for_each_entry_rcu(subscription,592592- &mm->notifier_subscriptions->list, hlist) {585585+ &mm->notifier_subscriptions->list, hlist,586586+ srcu_read_lock_held(&srcu)) {593587 if (subscription->ops->invalidate_range)594588 subscription->ops->invalidate_range(subscription, mm,595589 start, end);···722714723715 spin_lock(&mm->notifier_subscriptions->lock);724716 hlist_for_each_entry_rcu(subscription,725725- &mm->notifier_subscriptions->list, hlist) {717717+ &mm->notifier_subscriptions->list, hlist,718718+ lockdep_is_held(&mm->notifier_subscriptions->lock)) {726719 if (subscription->ops != ops)727720 continue;728721
+7-3
mm/nommu.c
···370370EXPORT_SYMBOL_GPL(vm_unmap_aliases);371371372372/*373373- * Implement a stub for vmalloc_sync_all() if the architecture chose not to374374- * have one.373373+ * Implement a stub for vmalloc_sync_[un]mapping() if the architecture374374+ * chose not to have one.375375 */376376-void __weak vmalloc_sync_all(void)376376+void __weak vmalloc_sync_mappings(void)377377+{378378+}379379+380380+void __weak vmalloc_sync_unmappings(void)377381{378382}379383
+30-11
mm/slub.c
···1973197319741974 if (node == NUMA_NO_NODE)19751975 searchnode = numa_mem_id();19761976- else if (!node_present_pages(node))19771977- searchnode = node_to_mem_node(node);1978197619791977 object = get_partial_node(s, get_node(s, searchnode), c, flags);19801978 if (object || node != NUMA_NO_NODE)···25612563 struct page *page;2562256425632565 page = c->page;25642564- if (!page)25662566+ if (!page) {25672567+ /*25682568+ * if the node is not online or has no normal memory, just25692569+ * ignore the node constraint25702570+ */25712571+ if (unlikely(node != NUMA_NO_NODE &&25722572+ !node_state(node, N_NORMAL_MEMORY)))25732573+ node = NUMA_NO_NODE;25652574 goto new_slab;25752575+ }25662576redo:2567257725682578 if (unlikely(!node_match(page, node))) {25692569- int searchnode = node;25702570-25712571- if (node != NUMA_NO_NODE && !node_present_pages(node))25722572- searchnode = node_to_mem_node(node);25732573-25742574- if (unlikely(!node_match(page, searchnode))) {25792579+ /*25802580+ * same as above but node_match() being false already25812581+ * implies node != NUMA_NO_NODE25822582+ */25832583+ if (!node_state(node, N_NORMAL_MEMORY)) {25842584+ node = NUMA_NO_NODE;25852585+ goto redo;25862586+ } else {25752587 stat(s, ALLOC_NODE_MISMATCH);25762588 deactivate_slab(s, page, c->freelist, c);25772589 goto new_slab;···30052997 barrier();3006299830072999 if (likely(page == c->page)) {30083008- set_freepointer(s, tail_obj, c->freelist);30003000+ void **freelist = READ_ONCE(c->freelist);30013001+30023002+ set_freepointer(s, tail_obj, freelist);3009300330103004 if (unlikely(!this_cpu_cmpxchg_double(30113005 s->cpu_slab->freelist, s->cpu_slab->tid,30123012- c->freelist, tid,30063006+ freelist, tid,30133007 head, next_tid(tid)))) {3014300830153009 note_cmpxchg_failure("slab_free", s, tid);···31843174 void *object = c->freelist;3185317531863176 if (unlikely(!object)) {31773177+ /*31783178+ * We may have removed an object from c->freelist using31793179+ * the fastpath in the 
previous iteration; in that case,31803180+ * c->tid has not been bumped yet.31813181+ * Since ___slab_alloc() may reenable interrupts while31823182+ * allocating memory, we should bump c->tid now.31833183+ */31843184+ c->tid = next_tid(c->tid);31853185+31873186 /*31883187 * Invoking slow path likely have side-effect31893188 * of re-populating per CPU c->freelist
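The READ_ONCE(c->freelist) fix snapshots the list head once and uses that same local for both the link and the cmpxchg, closing a window in which a re-read head could diverge between the two uses. The single-snapshot idiom in a userspace lock-free stack (sketch with C11 atomics):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stddef.h>

struct obj {
	struct obj *next;
};

static _Atomic(struct obj *) freelist;

/* Push: read the head exactly once into a local, link against that
 * local, and CAS against the same local. Re-reading the shared head
 * between those two steps is the class of bug the patch removes. */
static void push(struct obj *o)
{
	struct obj *head = atomic_load(&freelist);	/* one snapshot */

	do {
		o->next = head;	/* link against the snapshot ... */
	} while (!atomic_compare_exchange_weak(&freelist, &head, o));
	/* ... and CAS against it; on failure, head is reloaded. */
}

static struct obj *pop(void)
{
	struct obj *head = atomic_load(&freelist);

	while (head && !atomic_compare_exchange_weak(&freelist, &head,
						     head->next))
		;
	return head;
}
```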
+6-2
mm/sparse.c
···734734 struct mem_section *ms = __pfn_to_section(pfn);735735 bool section_is_early = early_section(ms);736736 struct page *memmap = NULL;737737+ bool empty;737738 unsigned long *subsection_map = ms->usage738739 ? &ms->usage->subsection_map[0] : NULL;739740···765764 * For 2/ and 3/ the SPARSEMEM_VMEMMAP={y,n} cases are unified766765 */767766 bitmap_xor(subsection_map, map, subsection_map, SUBSECTIONS_PER_SECTION);768768- if (bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION)) {767767+ empty = bitmap_empty(subsection_map, SUBSECTIONS_PER_SECTION);768768+ if (empty) {769769 unsigned long section_nr = pfn_to_section_nr(pfn);770770771771 /*···781779 ms->usage = NULL;782780 }783781 memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);784784- ms->section_mem_map = (unsigned long)NULL;785782 }786783787784 if (section_is_early && memmap)788785 free_map_bootmem(memmap);789786 else790787 depopulate_section_memmap(pfn, nr_pages, altmap);788788+789789+ if (empty)790790+ ms->section_mem_map = (unsigned long)NULL;791791}792792793793static struct page * __meminit section_activate(int nid, unsigned long pfn,
+7-4
mm/vmalloc.c
···
 	 * First make sure the mappings are removed from all page-tables
 	 * before they are freed.
 	 */
-	vmalloc_sync_all();
+	vmalloc_sync_unmappings();
 
 	/*
 	 * TODO: to calculate a flush range without looping.
···
 EXPORT_SYMBOL(remap_vmalloc_range);
 
 /*
- * Implement a stub for vmalloc_sync_all() if the architecture chose not to
- * have one.
+ * Implement stubs for vmalloc_sync_[un]mappings () if the architecture chose
+ * not to have one.
  *
  * The purpose of this function is to make sure the vmalloc area
  * mappings are identical in all page-tables in the system.
  */
-void __weak vmalloc_sync_all(void)
+void __weak vmalloc_sync_mappings(void)
 {
 }
 
+void __weak vmalloc_sync_unmappings(void)
+{
+}
 
 static int f(pte_t *pte, unsigned long addr, void *data)
 {
+4
net/batman-adv/bat_iv_ogm.c
···
 
 	lockdep_assert_held(&hard_iface->bat_iv.ogm_buff_mutex);
 
+	/* interface already disabled by batadv_iv_ogm_iface_disable */
+	if (!*ogm_buff)
+		return;
+
 	/* the interface gets activated here to avoid race conditions between
 	 * the moment of activating the interface in
 	 * hardif_activate_interface() where the originator mac is set and
···
 	kfree(css_cls_state(css));
 }
 
+/*
+ * To avoid freezing of sockets creation for tasks with big number of threads
+ * and opened sockets lets release file_lock every 1000 iterated descriptors.
+ * New sockets will already have been created with new classid.
+ */
+
+struct update_classid_context {
+	u32 classid;
+	unsigned int batch;
+};
+
+#define UPDATE_CLASSID_BATCH 1000
+
 static int update_classid_sock(const void *v, struct file *file, unsigned n)
 {
 	int err;
+	struct update_classid_context *ctx = (void *)v;
 	struct socket *sock = sock_from_file(file, &err);
 
 	if (sock) {
 		spin_lock(&cgroup_sk_update_lock);
-		sock_cgroup_set_classid(&sock->sk->sk_cgrp_data,
-					(unsigned long)v);
+		sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid);
 		spin_unlock(&cgroup_sk_update_lock);
 	}
+	if (--ctx->batch == 0) {
+		ctx->batch = UPDATE_CLASSID_BATCH;
+		return n + 1;
+	}
 	return 0;
+}
+
+static void update_classid_task(struct task_struct *p, u32 classid)
+{
+	struct update_classid_context ctx = {
+		.classid = classid,
+		.batch = UPDATE_CLASSID_BATCH
+	};
+	unsigned int fd = 0;
+
+	do {
+		task_lock(p);
+		fd = iterate_fd(p->files, fd, update_classid_sock, &ctx);
+		task_unlock(p);
+		cond_resched();
+	} while (fd);
 }
 
 static void cgrp_attach(struct cgroup_taskset *tset)
···
 	struct task_struct *p;
 
 	cgroup_taskset_for_each(p, css, tset) {
-		task_lock(p);
-		iterate_fd(p->files, 0, update_classid_sock,
-			   (void *)(unsigned long)css_cls_state(css)->classid);
-		task_unlock(p);
+		update_classid_task(p, css_cls_state(css)->classid);
 	}
 }
 
···
 	css_task_iter_start(css, 0, &it);
 	while ((p = css_task_iter_next(&it))) {
-		task_lock(p);
-		iterate_fd(p->files, 0, update_classid_sock,
-			   (void *)(unsigned long)cs->classid);
-		task_unlock(p);
+		update_classid_task(p, cs->classid);
 		cond_resched();
 	}
 	css_task_iter_end(&it);
+4-1
net/core/sock.c
···
 	atomic_set(&newsk->sk_zckey, 0);
 
 	sock_reset_flag(newsk, SOCK_DONE);
-	mem_cgroup_sk_alloc(newsk);
+
+	/* sk->sk_memcg will be populated at accept() time */
+	newsk->sk_memcg = NULL;
+
 	cgroup_sk_alloc(&newsk->sk_cgrp_data);
 
 	rcu_read_lock();
···
 }
 EXPORT_SYMBOL_GPL(gre_del_protocol);
 
-/* Fills in tpi and returns header length to be pulled. */
+/* Fills in tpi and returns header length to be pulled.
+ * Note that caller must use pskb_may_pull() before pulling GRE header.
+ */
 int gre_parse_header(struct sk_buff *skb, struct tnl_ptk_info *tpi,
 		     bool *csum_err, __be16 proto, int nhs)
 {
···
 	 *	- When dealing with WCCPv2, Skip extra 4 bytes in GRE header
 	 */
 	if (greh->flags == 0 && tpi->proto == htons(ETH_P_WCCP)) {
+		u8 _val, *val;
+
+		val = skb_header_pointer(skb, nhs + hdr_len,
+					 sizeof(_val), &_val);
+		if (!val)
+			return -EINVAL;
 		tpi->proto = proto;
-		if ((*(u8 *)options & 0xF0) != 0x40)
+		if ((*val & 0xF0) != 0x40)
 			hdr_len += 4;
 	}
 	tpi->hdr_len = hdr_len;
+20
net/ipv4/inet_connection_sock.c
···
 		}
 		spin_unlock_bh(&queue->fastopenq.lock);
 	}
+
 out:
 	release_sock(sk);
+	if (newsk && mem_cgroup_sockets_enabled) {
+		int amt;
+
+		/* atomically get the memory usage, set and charge the
+		 * newsk->sk_memcg.
+		 */
+		lock_sock(newsk);
+
+		/* The socket has not been accepted yet, no need to look at
+		 * newsk->sk_wmem_queued.
+		 */
+		amt = sk_mem_pages(newsk->sk_forward_alloc +
+				   atomic_read(&newsk->sk_rmem_alloc));
+		mem_cgroup_sk_alloc(newsk);
+		if (newsk->sk_memcg && amt)
+			mem_cgroup_charge_skmem(newsk->sk_memcg, amt);
+
+		release_sock(newsk);
+	}
 	if (req)
 		reqsk_put(req);
 	return newsk;
+20-24
net/ipv4/inet_diag.c
···
 	aux = handler->idiag_get_aux_size(sk, net_admin);
 
 	return nla_total_size(sizeof(struct tcp_info))
-		+ nla_total_size(1) /* INET_DIAG_SHUTDOWN */
-		+ nla_total_size(1) /* INET_DIAG_TOS */
-		+ nla_total_size(1) /* INET_DIAG_TCLASS */
-		+ nla_total_size(4) /* INET_DIAG_MARK */
-		+ nla_total_size(4) /* INET_DIAG_CLASS_ID */
-		+ nla_total_size(sizeof(struct inet_diag_meminfo))
 		+ nla_total_size(sizeof(struct inet_diag_msg))
+		+ inet_diag_msg_attrs_size()
+		+ nla_total_size(sizeof(struct inet_diag_meminfo))
 		+ nla_total_size(SK_MEMINFO_VARS * sizeof(u32))
 		+ nla_total_size(TCP_CA_NAME_MAX)
 		+ nla_total_size(sizeof(struct tcpvegas_info))
···
 
 	if (net_admin && nla_put_u32(skb, INET_DIAG_MARK, sk->sk_mark))
 		goto errout;
+
+	if (ext & (1 << (INET_DIAG_CLASS_ID - 1)) ||
+	    ext & (1 << (INET_DIAG_TCLASS - 1))) {
+		u32 classid = 0;
+
+#ifdef CONFIG_SOCK_CGROUP_DATA
+		classid = sock_cgroup_classid(&sk->sk_cgrp_data);
+#endif
+		/* Fallback to socket priority if class id isn't set.
+		 * Classful qdiscs use it as direct reference to class.
+		 * For cgroup2 classid is always zero.
+		 */
+		if (!classid)
+			classid = sk->sk_priority;
+
+		if (nla_put_u32(skb, INET_DIAG_CLASS_ID, classid))
+			goto errout;
+	}
 
 	r->idiag_uid = from_kuid_munged(user_ns, sock_i_uid(sk));
 	r->idiag_inode = sock_i_ino(sk);
···
 		sz = ca_ops->get_info(sk, ext, &attr, &info);
 		rcu_read_unlock();
 		if (sz && nla_put(skb, attr, sz, &info) < 0)
-			goto errout;
-	}
-
-	if (ext & (1 << (INET_DIAG_CLASS_ID - 1)) ||
-	    ext & (1 << (INET_DIAG_TCLASS - 1))) {
-		u32 classid = 0;
-
-#ifdef CONFIG_SOCK_CGROUP_DATA
-		classid = sock_cgroup_classid(&sk->sk_cgrp_data);
-#endif
-		/* Fallback to socket priority if class id isn't set.
-		 * Classful qdiscs use it as direct reference to class.
-		 * For cgroup2 classid is always zero.
-		 */
-		if (!classid)
-			classid = sk->sk_priority;
-
-		if (nla_put_u32(skb, INET_DIAG_CLASS_ID, classid))
 			goto errout;
 	}
 
···
 
 # gcc version including patch level
 gcc-version := $(shell,$(srctree)/scripts/gcc-version.sh $(CC))
+
+# machine bit flags
+# $(m32-flag): -m32 if the compiler supports it, or an empty string otherwise.
+# $(m64-flag): -m64 if the compiler supports it, or an empty string otherwise.
+cc-option-bit = $(if-success,$(CC) -Werror $(1) -E -x c /dev/null -o /dev/null,$(1))
+m32-flag := $(cc-option-bit,-m32)
+m64-flag := $(cc-option-bit,-m64)
···
 #include <string.h>
 #include <regex.h>
 
-#include "../../util/debug.h"
-#include "../../util/header.h"
+#include "../../../util/debug.h"
+#include "../../../util/header.h"
 
 static inline void
 cpuid(unsigned int op, unsigned int *a, unsigned int *b, unsigned int *c,
···
 #ifndef BENCH_H
 #define BENCH_H
 
+#include <sys/time.h>
+
+extern struct timeval bench__start, bench__end, bench__runtime;
+
 /*
  * The madvise transparent hugepage constants were added in glibc
  * 2.13. For compatibility with older versions of glibc, define these
+4-4
tools/perf/bench/epoll-ctl.c
···
 
 static unsigned int nthreads = 0;
 static unsigned int nsecs = 8;
-struct timeval start, end, runtime;
 static bool done, __verbose, randomize;
 
 /*
···
 {
 	/* inform all threads that we're done for the day */
 	done = true;
-	gettimeofday(&end, NULL);
-	timersub(&end, &start, &runtime);
+	gettimeofday(&bench__end, NULL);
+	timersub(&bench__end, &bench__start, &bench__runtime);
 }
 
 static void nest_epollfd(void)
···
 		exit(EXIT_FAILURE);
 	}
 
+	memset(&act, 0, sizeof(act));
 	sigfillset(&act.sa_mask);
 	act.sa_sigaction = toggle_done;
 	sigaction(SIGINT, &act, NULL);
···
 
 	threads_starting = nthreads;
 
-	gettimeofday(&start, NULL);
+	gettimeofday(&bench__start, NULL);
 
 	do_threads(worker, cpu);
 
+6-6
tools/perf/bench/epoll-wait.c
···
 
 static unsigned int nthreads = 0;
 static unsigned int nsecs = 8;
-struct timeval start, end, runtime;
 static bool wdone, done, __verbose, randomize, nonblocking;
 
 /*
···
 {
 	/* inform all threads that we're done for the day */
 	done = true;
-	gettimeofday(&end, NULL);
-	timersub(&end, &start, &runtime);
+	gettimeofday(&bench__end, NULL);
+	timersub(&bench__end, &bench__start, &bench__runtime);
 }
 
 static void print_summary(void)
···
 
 	printf("\nAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n",
 	       avg, rel_stddev_stats(stddev, avg),
-	       (int) runtime.tv_sec);
+	       (int)bench__runtime.tv_sec);
 }
 
 static int do_threads(struct worker *worker, struct perf_cpu_map *cpu)
···
 		exit(EXIT_FAILURE);
 	}
 
+	memset(&act, 0, sizeof(act));
 	sigfillset(&act.sa_mask);
 	act.sa_sigaction = toggle_done;
 	sigaction(SIGINT, &act, NULL);
···
 
 	threads_starting = nthreads;
 
-	gettimeofday(&start, NULL);
+	gettimeofday(&bench__start, NULL);
 
 	do_threads(worker, cpu);
 
···
 	qsort(worker, nthreads, sizeof(struct worker), cmpworker);
 
 	for (i = 0; i < nthreads; i++) {
-		unsigned long t = worker[i].ops/runtime.tv_sec;
+		unsigned long t = worker[i].ops / bench__runtime.tv_sec;
 
 		update_stats(&throughput_stats, t);
 
+7-6
tools/perf/bench/futex-hash.c
···
 static bool fshared = false, done = false, silent = false;
 static int futex_flag = 0;
 
-struct timeval start, end, runtime;
+struct timeval bench__start, bench__end, bench__runtime;
 static pthread_mutex_t thread_lock;
 static unsigned int threads_starting;
 static struct stats throughput_stats;
···
 {
 	/* inform all threads that we're done for the day */
 	done = true;
-	gettimeofday(&end, NULL);
-	timersub(&end, &start, &runtime);
+	gettimeofday(&bench__end, NULL);
+	timersub(&bench__end, &bench__start, &bench__runtime);
 }
 
 static void print_summary(void)
···
 
 	printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n",
 	       !silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg),
-	       (int) runtime.tv_sec);
+	       (int)bench__runtime.tv_sec);
 }
 
 int bench_futex_hash(int argc, const char **argv)
···
 	if (!cpu)
 		goto errmem;
 
+	memset(&act, 0, sizeof(act));
 	sigfillset(&act.sa_mask);
 	act.sa_sigaction = toggle_done;
 	sigaction(SIGINT, &act, NULL);
···
 
 	threads_starting = nthreads;
 	pthread_attr_init(&thread_attr);
-	gettimeofday(&start, NULL);
+	gettimeofday(&bench__start, NULL);
 	for (i = 0; i < nthreads; i++) {
 		worker[i].tid = i;
 		worker[i].futex = calloc(nfutexes, sizeof(*worker[i].futex));
···
 	pthread_mutex_destroy(&thread_lock);
 
 	for (i = 0; i < nthreads; i++) {
-		unsigned long t = worker[i].ops/runtime.tv_sec;
+		unsigned long t = worker[i].ops / bench__runtime.tv_sec;
 		update_stats(&throughput_stats, t);
 		if (!silent) {
 			if (nfutexes == 1)
+6-6
tools/perf/bench/futex-lock-pi.c
···
 static bool done = false, fshared = false;
 static unsigned int nthreads = 0;
 static int futex_flag = 0;
-struct timeval start, end, runtime;
 static pthread_mutex_t thread_lock;
 static unsigned int threads_starting;
 static struct stats throughput_stats;
···
 
 	printf("%sAveraged %ld operations/sec (+- %.2f%%), total secs = %d\n",
 	       !silent ? "\n" : "", avg, rel_stddev_stats(stddev, avg),
-	       (int) runtime.tv_sec);
+	       (int)bench__runtime.tv_sec);
 }
 
 static void toggle_done(int sig __maybe_unused,
···
 {
 	/* inform all threads that we're done for the day */
 	done = true;
-	gettimeofday(&end, NULL);
-	timersub(&end, &start, &runtime);
+	gettimeofday(&bench__end, NULL);
+	timersub(&bench__end, &bench__start, &bench__runtime);
 }
 
 static void *workerfn(void *arg)
···
 	if (!cpu)
 		err(EXIT_FAILURE, "calloc");
 
+	memset(&act, 0, sizeof(act));
 	sigfillset(&act.sa_mask);
 	act.sa_sigaction = toggle_done;
 	sigaction(SIGINT, &act, NULL);
···
 
 	threads_starting = nthreads;
 	pthread_attr_init(&thread_attr);
-	gettimeofday(&start, NULL);
+	gettimeofday(&bench__start, NULL);
 
 	create_threads(worker, thread_attr, cpu);
 	pthread_attr_destroy(&thread_attr);
···
 	pthread_mutex_destroy(&thread_lock);
 
 	for (i = 0; i < nthreads; i++) {
-		unsigned long t = worker[i].ops/runtime.tv_sec;
+		unsigned long t = worker[i].ops / bench__runtime.tv_sec;
 
 		update_stats(&throughput_stats, t);
 		if (!silent)
···
 #include "../perf-sys.h"
 #include "cloexec.h"
 
-volatile long the_var;
+static volatile long the_var;
 
 static noinline int test_function(void)
 {
···
 
 static unsigned long long **previous_count;
 static unsigned long long **current_count;
-struct timespec start_time;
+static struct timespec start_time;
 static unsigned long long timediff;
 
 static int cpuidle_get_count_percent(unsigned int id, double *percent,
···
 #endif
 #define CSTATE_DESC_LEN 60
 
-int cpu_count;
+extern int cpu_count;
 
 /* Hard to define the right names ...: */
 enum power_range_e {
···
 #include <sched.h>
 #include <time.h>
 #include <cpuid.h>
-#include <linux/capability.h>
+#include <sys/capability.h>
 #include <errno.h>
 #include <math.h>
 
···
 int *irqs_per_cpu;		/* indexed by cpu_num */
 
 void setup_all_buffers(void);
+
+char *sys_lpi_file;
+char *sys_lpi_file_sysfs = "/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us";
+char *sys_lpi_file_debugfs = "/sys/kernel/debug/pmc_core/slp_s0_residency_usec";
 
 int cpu_is_not_present(int cpu)
 {
···
  *
  * record snapshot of
  * /sys/devices/system/cpu/cpuidle/low_power_idle_cpu_residency_us
- *
- * return 1 if config change requires a restart, else return 0
  */
 int snapshot_cpu_lpi_us(void)
 {
···
 /*
  * snapshot_sys_lpi()
  *
- * record snapshot of
- * /sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us
- *
- * return 1 if config change requires a restart, else return 0
+ * record snapshot of sys_lpi_file
  */
 int snapshot_sys_lpi_us(void)
 {
 	FILE *fp;
 	int retval;
 
-	fp = fopen_or_die("/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us", "r");
+	fp = fopen_or_die(sys_lpi_file, "r");
 
 	retval = fscanf(fp, "%lld", &cpuidle_cur_sys_lpi_us);
 	if (retval != 1) {
···
 	err(-5, "no /dev/cpu/0/msr, Try \"# modprobe msr\" ");
 }
 
-void check_permissions()
+/*
+ * check for CAP_SYS_RAWIO
+ * return 0 on success
+ * return 1 on fail
+ */
+int check_for_cap_sys_rawio(void)
 {
-	struct __user_cap_header_struct cap_header_data;
-	cap_user_header_t cap_header = &cap_header_data;
-	struct __user_cap_data_struct cap_data_data;
-	cap_user_data_t cap_data = &cap_data_data;
-	extern int capget(cap_user_header_t hdrp, cap_user_data_t datap);
+	cap_t caps;
+	cap_flag_value_t cap_flag_value;
+
+	caps = cap_get_proc();
+	if (caps == NULL)
+		err(-6, "cap_get_proc\n");
+
+	if (cap_get_flag(caps, CAP_SYS_RAWIO, CAP_EFFECTIVE, &cap_flag_value))
+		err(-6, "cap_get\n");
+
+	if (cap_flag_value != CAP_SET) {
+		warnx("capget(CAP_SYS_RAWIO) failed,"
+		      " try \"# setcap cap_sys_rawio=ep %s\"", progname);
+		return 1;
+	}
+
+	if (cap_free(caps) == -1)
+		err(-6, "cap_free\n");
+
+	return 0;
+}
+void check_permissions(void)
+{
 	int do_exit = 0;
 	char pathname[32];
 
 	/* check for CAP_SYS_RAWIO */
-	cap_header->pid = getpid();
-	cap_header->version = _LINUX_CAPABILITY_VERSION;
-	if (capget(cap_header, cap_data) < 0)
-		err(-6, "capget(2) failed");
-
-	if ((cap_data->effective & (1 << CAP_SYS_RAWIO)) == 0) {
-		do_exit++;
-		warnx("capget(CAP_SYS_RAWIO) failed,"
-		      " try \"# setcap cap_sys_rawio=ep %s\"", progname);
-	}
+	do_exit += check_for_cap_sys_rawio();
 
 	/* test file permissions */
 	sprintf(pathname, "/dev/cpu/%d/msr", base_cpu);
···
 	case INTEL_FAM6_ATOM_GOLDMONT:	/* BXT */
 	case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
 	case INTEL_FAM6_ATOM_GOLDMONT_D:	/* DNV */
+	case INTEL_FAM6_ATOM_TREMONT:	/* EHL */
 		pkg_cstate_limits = glm_pkg_cstate_limits;
 		break;
 	default:
···
 
 	switch (model) {
 	case INTEL_FAM6_SKYLAKE_X:
+		return 1;
+	}
+	return 0;
+}
+int is_ehl(unsigned int family, unsigned int model)
+{
+	if (!genuine_intel)
+		return 0;
+
+	switch (model) {
+	case INTEL_FAM6_ATOM_TREMONT:
 		return 1;
 	}
 	return 0;
···
 		dump_nhm_cst_cfg();
 }
 
+static void dump_sysfs_file(char *path)
+{
+	FILE *input;
+	char cpuidle_buf[64];
+
+	input = fopen(path, "r");
+	if (input == NULL) {
+		if (debug)
+			fprintf(outf, "NSFOD %s\n", path);
+		return;
+	}
+	if (!fgets(cpuidle_buf, sizeof(cpuidle_buf), input))
+		err(1, "%s: failed to read file", path);
+	fclose(input);
+
+	fprintf(outf, "%s: %s", strrchr(path, '/') + 1, cpuidle_buf);
+}
 static void
 dump_sysfs_cstate_config(void)
 {
···
 
 	if (!DO_BIC(BIC_sysfs))
 		return;
+
+	if (access("/sys/devices/system/cpu/cpuidle", R_OK)) {
+		fprintf(outf, "cpuidle not loaded\n");
+		return;
+	}
+
+	dump_sysfs_file("/sys/devices/system/cpu/cpuidle/current_driver");
+	dump_sysfs_file("/sys/devices/system/cpu/cpuidle/current_governor");
+	dump_sysfs_file("/sys/devices/system/cpu/cpuidle/current_governor_ro");
 
 	for (state = 0; state < 10; ++state) {
 
···
 		else
 			BIC_PRESENT(BIC_PkgWatt);
 		break;
+	case INTEL_FAM6_ATOM_TREMONT:	/* EHL */
+		do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_DRAM | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_GFX | RAPL_PKG_POWER_INFO;
+		if (rapl_joules) {
+			BIC_PRESENT(BIC_Pkg_J);
+			BIC_PRESENT(BIC_Cor_J);
+			BIC_PRESENT(BIC_RAM_J);
+			BIC_PRESENT(BIC_GFX_J);
+		} else {
+			BIC_PRESENT(BIC_PkgWatt);
+			BIC_PRESENT(BIC_CorWatt);
+			BIC_PRESENT(BIC_RAMWatt);
+			BIC_PRESENT(BIC_GFXWatt);
+		}
+		break;
 	case INTEL_FAM6_SKYLAKE_L:	/* SKL */
 	case INTEL_FAM6_CANNONLAKE_L:	/* CNL */
 		do_rapl = RAPL_PKG | RAPL_CORES | RAPL_CORE_POLICY | RAPL_DRAM | RAPL_DRAM_PERF_STATUS | RAPL_PKG_PERF_STATUS | RAPL_GFX | RAPL_PKG_POWER_INFO;
···
 	case INTEL_FAM6_ATOM_GOLDMONT:	/* BXT */
 	case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
 	case INTEL_FAM6_ATOM_GOLDMONT_D:	/* DNV */
+	case INTEL_FAM6_ATOM_TREMONT:	/* EHL */
 		return 1;
 	}
 	return 0;
···
 	case INTEL_FAM6_CANNONLAKE_L:	/* CNL */
 	case INTEL_FAM6_ATOM_GOLDMONT:	/* BXT */
 	case INTEL_FAM6_ATOM_GOLDMONT_PLUS:
+	case INTEL_FAM6_ATOM_TREMONT:	/* EHL */
 		return 1;
 	}
 	return 0;
···
 	case INTEL_FAM6_SKYLAKE:
 	case INTEL_FAM6_KABYLAKE_L:
 	case INTEL_FAM6_KABYLAKE:
+	case INTEL_FAM6_COMETLAKE_L:
+	case INTEL_FAM6_COMETLAKE:
 		return INTEL_FAM6_SKYLAKE_L;
 
 	case INTEL_FAM6_ICELAKE_L:
 	case INTEL_FAM6_ICELAKE_NNPI:
+	case INTEL_FAM6_TIGERLAKE_L:
+	case INTEL_FAM6_TIGERLAKE:
 		return INTEL_FAM6_CANNONLAKE_L;
 
 	case INTEL_FAM6_ATOM_TREMONT_D:
 		return INTEL_FAM6_ATOM_GOLDMONT_D;
+
+	case INTEL_FAM6_ATOM_TREMONT_L:
+		return INTEL_FAM6_ATOM_TREMONT;
+
+	case INTEL_FAM6_ICELAKE_X:
+		return INTEL_FAM6_SKYLAKE_X;
 	}
 	return model;
 }
···
 	do_slm_cstates = is_slm(family, model);
 	do_knl_cstates  = is_knl(family, model);
 
-	if (do_slm_cstates || do_knl_cstates || is_cnl(family, model))
+	if (do_slm_cstates || do_knl_cstates || is_cnl(family, model) ||
+	    is_ehl(family, model))
 		BIC_NOT_PRESENT(BIC_CPU_c3);
 
 	if (!quiet)
···
 	else
 		BIC_NOT_PRESENT(BIC_CPU_LPI);
 
-	if (!access("/sys/devices/system/cpu/cpuidle/low_power_idle_system_residency_us", R_OK))
+	if (!access(sys_lpi_file_sysfs, R_OK)) {
+		sys_lpi_file = sys_lpi_file_sysfs;
 		BIC_PRESENT(BIC_SYS_LPI);
-	else
+	} else if (!access(sys_lpi_file_debugfs, R_OK)) {
+		sys_lpi_file = sys_lpi_file_debugfs;
+		BIC_PRESENT(BIC_SYS_LPI);
+	} else {
+		sys_lpi_file_sysfs = NULL;
 		BIC_NOT_PRESENT(BIC_SYS_LPI);
+	}
 
 	if (!quiet)
 		decode_misc_feature_control();
···
 }
 
 void print_version() {
-	fprintf(outf, "turbostat version 19.08.31"
+	fprintf(outf, "turbostat version 20.03.20"
 		" - Len Brown <lenb@kernel.org>\n");
 }
 
···
 	}
 
 	msrp->msr_num = msr_num;
-	strncpy(msrp->name, name, NAME_BYTES);
+	strncpy(msrp->name, name, NAME_BYTES - 1);
 	if (path)
-		strncpy(msrp->path, path, PATH_BYTES);
+		strncpy(msrp->path, path, PATH_BYTES - 1);
 	msrp->width = width;
 	msrp->type = type;
 	msrp->format = format;
+8-8
tools/testing/ktest/ktest.pl
···
 	"EMAIL_WHEN_STARTED"		=> 0,
 	"NUM_TESTS"			=> 1,
 	"TEST_TYPE"			=> "build",
-	"BUILD_TYPE"			=> "randconfig",
+	"BUILD_TYPE"			=> "oldconfig",
 	"MAKE_CMD"			=> "make",
 	"CLOSE_CONSOLE_SIGNAL"		=> "INT",
 	"TIMEOUT"			=> 120,
···
 	}
 
 	if (!$skip && $rest !~ /^\s*$/) {
-	    die "$name: $.: Gargbage found after $type\n$_";
+	    die "$name: $.: Garbage found after $type\n$_";
 	}
 
 	if ($skip && $type eq "TEST_START") {
···
 	}
 
 	if ($rest !~ /^\s*$/) {
-	    die "$name: $.: Gargbage found after DEFAULTS\n$_";
+	    die "$name: $.: Garbage found after DEFAULTS\n$_";
 	}
 
     } elsif (/^\s*INCLUDE\s+(\S+)/) {
···
 	    # on of these sections that have SKIP defined.
 	    # The save variable can be
 	    # defined multiple times and the new one simply overrides
-	    # the prevous one.
+	    # the previous one.
 	    set_variable($lvalue, $rvalue);
 
 	} else {
···
 	foreach my $option (keys %not_used) {
 	    print "$option\n";
 	}
-	print "Set IGRNORE_UNUSED = 1 to have ktest ignore unused variables\n";
+	print "Set IGNORE_UNUSED = 1 to have ktest ignore unused variables\n";
 	if (!read_yn "Do you want to continue?") {
 	    exit -1;
 	}
···
 	# Check for recursive evaluations.
 	# 100 deep should be more than enough.
 	if ($r++ > 100) {
-	    die "Over 100 evaluations accurred with $option\n" .
+	    die "Over 100 evaluations occurred with $option\n" .
 		"Check for recursive variables\n";
 	}
 	$prev = $option;
···
 
     } else {
 	# Make sure everything has been written to disk
-	run_ssh("sync");
+	run_ssh("sync", 10);
 
 	if (defined($time)) {
 	    start_monitor;
···
 
 sub dodie {
 
-	# avoid recusion
+	# avoid recursion
 	return if ($in_die);
 	$in_die = 1;
 
+11-11
tools/testing/ktest/sample.conf
···
 #
 
 # Options set in the beginning of the file are considered to be
-# default options. These options can be overriden by test specific
+# default options. These options can be overridden by test specific
 # options, with the following exceptions:
 #
 #  LOG_FILE
···
 #
 # This config file can also contain "config variables".
 # These are assigned with ":=" instead of the ktest option
-# assigment "=".
+# assignment "=".
 #
 # The difference between ktest options and config variables
 # is that config variables can be used multiple times,
···
 #### Using options in other options ####
 #
 # Options that are defined in the config file may also be used
-# by other options. All options are evaulated at time of
+# by other options. All options are evaluated at time of
 # use (except that config variables are evaluated at config
 # processing time).
 #
···
 #TEST = ssh user@machine /root/run_test
 
 # The build type is any make config type or special command
-#  (default randconfig)
+#  (default oldconfig)
 #  nobuild - skip the clean and build step
 #  useconfig:/path/to/config - use the given config and run
 #      oldconfig on it.
···
 
 # Line to define a successful boot up in console output.
 # This is what the line contains, not the entire line. If you need
-# the entire line to match, then use regural expression syntax like:
+# the entire line to match, then use regular expression syntax like:
 #  (do not add any quotes around it)
 #
 # SUCCESS_LINE = ^MyBox Login:$
···
 #   (ignored if POWEROFF_ON_SUCCESS is set)
 #REBOOT_ON_SUCCESS = 1
 
-# In case there are isses with rebooting, you can specify this
+# In case there are issues with rebooting, you can specify this
 # to always powercycle after this amount of time after calling
 # reboot.
 # Note, POWERCYCLE_AFTER_REBOOT = 0 does NOT disable it. It just
···
 #   (default undefined)
 #POWERCYCLE_AFTER_REBOOT = 5
 
-# In case there's isses with halting, you can specify this
+# In case there's issues with halting, you can specify this
 # to always poweroff after this amount of time after calling
 # halt.
 # Note, POWEROFF_AFTER_HALT = 0 does NOT disable it. It just
···
 #
 # PATCHCHECK_START is required and is the first patch to
 # test (the SHA1 of the commit). You may also specify anything
-# that git checkout allows (branch name, tage, HEAD~3).
+# that git checkout allows (branch name, tag, HEAD~3).
 #
 # PATCHCHECK_END is the last patch to check (default HEAD)
 #
···
 # IGNORE_WARNINGS is set for the given commit's sha1
 #
 # IGNORE_WARNINGS can be used to disable the failure of patchcheck
-# on a particuler commit (SHA1). You can add more than one commit
+# on a particular commit (SHA1). You can add more than one commit
 # by adding a list of SHA1s that are space delimited.
 #
 # If BUILD_NOCLEAN is set, then make mrproper will not be run on
···
 # whatever reason. (Can't reboot, want to inspect each iteration)
 # Doing a BISECT_MANUAL will have the test wait for you to
 # tell it if the test passed or failed after each iteration.
-# This is basicall the same as running git bisect yourself
+# This is basically the same as running git bisect yourself
 # but ktest will rebuild and install the kernel for you.
 #
 # BISECT_CHECK = 1 (optional, default 0)
···
 #
 # CONFIG_BISECT_EXEC (optional)
 #  The config bisect is a separate program that comes with ktest.pl.
-#  By befault, it will look for:
+#  By default, it will look for:
 #  `pwd`/config-bisect.pl # the location ktest.pl was executed from.
 #  If it does not find it there, it will look for:
 #  `dirname <ktest.pl>`/config-bisect.pl # The directory that holds ktest.pl
+31-3
tools/testing/selftests/net/fib_tests.sh
···
 	fi
 	log_test $rc 0 "Prefix route with metric on link up"
 
+	# verify peer metric added correctly
+	set -e
+	run_cmd "$IP -6 addr flush dev dummy2"
+	run_cmd "$IP -6 addr add dev dummy2 2001:db8:104::1 peer 2001:db8:104::2 metric 260"
+	set +e
+
+	check_route6 "2001:db8:104::1 dev dummy2 proto kernel metric 260"
+	log_test $? 0 "Set metric with peer route on local side"
+	log_test $? 0 "User specified metric on local address"
+	check_route6 "2001:db8:104::2 dev dummy2 proto kernel metric 260"
+	log_test $? 0 "Set metric with peer route on peer side"
+
+	set -e
+	run_cmd "$IP -6 addr change dev dummy2 2001:db8:104::1 peer 2001:db8:104::3 metric 261"
+	set +e
+
+	check_route6 "2001:db8:104::1 dev dummy2 proto kernel metric 261"
+	log_test $? 0 "Modify metric and peer address on local side"
+	check_route6 "2001:db8:104::3 dev dummy2 proto kernel metric 261"
+	log_test $? 0 "Modify metric and peer address on peer side"
+
 	$IP li del dummy1
 	$IP li del dummy2
 	cleanup
···
 
 	run_cmd "$IP addr flush dev dummy2"
 	run_cmd "$IP addr add dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 260"
-	run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.2 metric 261"
 	rc=$?
 	if [ $rc -eq 0 ]; then
-		check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261"
+		check_route "172.16.104.2 dev dummy2 proto kernel scope link src 172.16.104.1 metric 260"
 		rc=$?
 	fi
-	log_test $rc 0 "Modify metric of address with peer route"
+	log_test $rc 0 "Set metric of address with peer route"
+
+	run_cmd "$IP addr change dev dummy2 172.16.104.1/32 peer 172.16.104.3 metric 261"
+	rc=$?
+	if [ $rc -eq 0 ]; then
+		check_route "172.16.104.3 dev dummy2 proto kernel scope link src 172.16.104.1 metric 261"
+		rc=$?
+	fi
+	log_test $rc 0 "Modify metric and peer address for peer route"
 
 	$IP li del dummy1
 	$IP li del dummy2
···
 
 	  If in doubt, select 'None'
 
-config INITRAMFS_COMPRESSION_NONE
-	bool "None"
-	help
-	  Do not compress the built-in initramfs at all. This may sound wasteful
-	  in space, but, you should be aware that the built-in initramfs will be
-	  compressed at a later stage anyways along with the rest of the kernel,
-	  on those architectures that support this. However, not compressing the
-	  initramfs may lead to slightly higher memory consumption during a
-	  short time at boot, while both the cpio image and the unpacked
-	  filesystem image will be present in memory simultaneously
-
 config INITRAMFS_COMPRESSION_GZIP
 	bool "Gzip"
 	depends on RD_GZIP
···
 
 	  If you choose this, keep in mind that most distros don't provide lz4
 	  by default which could cause a build failure.
+
+config INITRAMFS_COMPRESSION_NONE
+	bool "None"
+	help
+	  Do not compress the built-in initramfs at all. This may sound wasteful
+	  in space, but, you should be aware that the built-in initramfs will be
+	  compressed at a later stage anyways along with the rest of the kernel,
+	  on those architectures that support this. However, not compressing the
+	  initramfs may lead to slightly higher memory consumption during a
+	  short time at boot, while both the cpio image and the unpacked
+	  filesystem image will be present in memory simultaneously
 
 endchoice