@@ -562 +562 @@
 ================ =========================================
 
 		If control status is "forceoff" or "notsupported" writes
-		are rejected.
+		are rejected. Note that enabling SMT on PowerPC skips
+		offline cores.
 
 What:		/sys/devices/system/cpu/cpuX/power/energy_perf_bias
 Date:		March 2019
@@ -162 +162 @@
 
 
 Module parameters::
-max_read_size
-max_write_size
-   Maximum size of read or write requests. When a request larger than this size
-   is received, dm-crypt will split the request. The splitting improves
-   concurrency (the split requests could be encrypted in parallel by multiple
-   cores), but it also causes overhead. The user should tune these parameters to
-   fit the actual workload.
+
+   max_read_size
+   max_write_size
+      Maximum size of read or write requests. When a request larger than this size
+      is received, dm-crypt will split the request. The splitting improves
+      concurrency (the split requests could be encrypted in parallel by multiple
+      cores), but it also causes overhead. The user should tune these parameters to
+      fit the actual workload.
 
 
 Example scripts
+22-14
Documentation/arch/riscv/hwprobe.rst
@@ -239 +239 @@
 	ratified in commit 98918c844281 ("Merge pull request #1217 from
 	riscv/zawrs") of riscv-isa-manual.
 
-* :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: A bitmask that contains performance
-  information about the selected set of processors.
+* :c:macro:`RISCV_HWPROBE_KEY_CPUPERF_0`: Deprecated. Returns similar values to
+  :c:macro:`RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF`, but the key was
+  mistakenly classified as a bitmask rather than a value.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNKNOWN`: The performance of misaligned
-    accesses is unknown.
+* :c:macro:`RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF`: An enum value describing
+  the performance of misaligned scalar native word accesses on the selected set
+  of processors.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_EMULATED`: Misaligned accesses are
-    emulated via software, either in or below the kernel. These accesses are
-    always extremely slow.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN`: The performance of
+    misaligned scalar accesses is unknown.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SLOW`: Misaligned accesses are slower
-    than equivalent byte accesses. Misaligned accesses may be supported
-    directly in hardware, or trapped and emulated by software.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED`: Misaligned scalar
+    accesses are emulated via software, either in or below the kernel. These
+    accesses are always extremely slow.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_FAST`: Misaligned accesses are faster
-    than equivalent byte accesses.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW`: Misaligned scalar native
+    word sized accesses are slower than the equivalent quantity of byte
+    accesses. Misaligned accesses may be supported directly in hardware, or
+    trapped and emulated by software.
 
-  * :c:macro:`RISCV_HWPROBE_MISALIGNED_UNSUPPORTED`: Misaligned accesses are
-    not supported at all and will generate a misaligned address fault.
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_FAST`: Misaligned scalar native
+    word sized accesses are faster than the equivalent quantity of byte
+    accesses.
+
+  * :c:macro:`RISCV_HWPROBE_MISALIGNED_SCALAR_UNSUPPORTED`: Misaligned scalar
+    accesses are not supported at all and will generate a misaligned address
+    fault.
 
* :c:macro:`RISCV_HWPROBE_KEY_ZICBOZ_BLOCK_SIZE`: An unsigned int which
  represents the size of the Zicboz block in bytes.
@@ -7 +7 @@
 title: Qualcomm Display Clock & Reset Controller on SM6350
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm display clock control module provides the clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm Global Clock & Reset Controller on MSM8994
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm global clock control module provides the clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm Global Clock & Reset Controller on SM6125
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm global clock control module provides the clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm Global Clock & Reset Controller on SM6350
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm global clock control module provides the clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm Graphics Clock & Reset Controller on SM6115
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm graphics clock control module provides clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm Graphics Clock & Reset Controller on SM6125
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm graphics clock control module provides clocks and power domains on

@@ -7 +7 @@
 title: Qualcomm Camera Clock & Reset Controller on SM6350
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm camera clock control module provides the clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm Display Clock & Reset Controller on SM6375
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm display clock control module provides the clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm Global Clock & Reset Controller on SM6375
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm global clock control module provides the clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm Graphics Clock & Reset Controller on SM6375
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm graphics clock control module provides clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm SM8350 Video Clock & Reset Controller
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm video clock control module provides the clocks, resets and power

@@ -7 +7 @@
 title: Qualcomm Graphics Clock & Reset Controller on SM8450
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm graphics clock control module provides the clocks, resets and power

@@ -7 +7 @@
 title: ASUS Z00T TM5P5 NT35596 5.5" 1080×1920 LCD Panel
 
 maintainers:
-  - Konrad Dybcio <konradybcio@gmail.com>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |+
   This panel seems to only be found in the Asus Z00T

@@ -7 +7 @@
 title: Sony TD4353 JDI 5 / 5.7" 2160x1080 MIPI-DSI Panel
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   The Sony TD4353 JDI is a 5 (XZ2c) / 5.7 (XZ2) inch 2160x1080

@@ -8 +8 @@
 
 maintainers:
   - Bjorn Andersson <andersson@kernel.org>
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   RPMh interconnect providers support system bandwidth requirements through

@@ -8 +8 @@
 
 maintainers:
   - Bjorn Andersson <andersson@kernel.org>
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   RPMh interconnect providers support system bandwidth requirements through

@@ -8 +8 @@
 
 maintainers:
   - Bjorn Andersson <andersson@kernel.org>
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   RPMh interconnect providers support system bandwidth requirements through

@@ -7 +7 @@
 title: Qualcomm Technologies legacy IOMMU implementations
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   Qualcomm "B" family devices which are not compatible with arm-smmu have

@@ -7 +7 @@
 title: Qualcomm Technologies, Inc. MDM9607 TLMM block
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description:
   Top Level Mode Multiplexer pin controller in Qualcomm MDM9607 SoC.

@@ -7 +7 @@
 title: Qualcomm Technologies, Inc. SM6350 TLMM block
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description:
   Top Level Mode Multiplexer pin controller in Qualcomm SM6350 SoC.

@@ -7 +7 @@
 title: Qualcomm Technologies, Inc. SM6375 TLMM block
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@somainline.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description:
   Top Level Mode Multiplexer pin controller in Qualcomm SM6375 SoC.

@@ -7 +7 @@
 title: Qualcomm Technologies, Inc. (QTI) RPM Master Stats
 
 maintainers:
-  - Konrad Dybcio <konrad.dybcio@linaro.org>
+  - Konrad Dybcio <konradybcio@kernel.org>
 
 description: |
   The Qualcomm RPM (Resource Power Manager) architecture includes a concept
+4-4
Documentation/filesystems/caching/fscache.rst
@@ -318 +318 @@
 Debugging
 =========
 
-If CONFIG_FSCACHE_DEBUG is enabled, the FS-Cache facility can have runtime
-debugging enabled by adjusting the value in::
+If CONFIG_NETFS_DEBUG is enabled, the FS-Cache facility and NETFS support can
+have runtime debugging enabled by adjusting the value in::
 
-	/sys/module/fscache/parameters/debug
+	/sys/module/netfs/parameters/debug
 
 This is a bitmask of debugging streams to enable:

@@ -343 +343 @@
 The appropriate set of values should be OR'd together and the result written to
 the control file. For example::
 
-	echo $((1|8|512)) >/sys/module/fscache/parameters/debug
+	echo $((1|8|512)) >/sys/module/netfs/parameters/debug
 
 will turn on all function entry debugging.
@@ -27 +27 @@
 
 #include <asm/numa.h>
 
-static int acpi_early_node_map[NR_CPUS] __initdata = { NUMA_NO_NODE };
+static int acpi_early_node_map[NR_CPUS] __initdata = { [0 ... NR_CPUS - 1] = NUMA_NO_NODE };
 
 int __init acpi_numa_get_nid(unsigned int cpu)
 {
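The hunk above replaces a one-element brace initializer with a range designator. A minimal userspace sketch of why that matters (the array names and sizes here are invented for illustration, not the kernel's): C zero-fills the unnamed trailing elements, so `{ NUMA_NO_NODE }` only marks CPU 0 and silently maps every other CPU to node 0, while the GCC/Clang `[a ... b]` range-designator extension marks them all.

```c
#include <assert.h>

#define NR_FAKE_CPUS 4
#define NUMA_NO_NODE (-1)

/* Partial initializer: only index 0 gets NUMA_NO_NODE, the rest are 0. */
static int partial_map[NR_FAKE_CPUS] = { NUMA_NO_NODE };

/* GNU range designator, as in the fix: every slot gets NUMA_NO_NODE. */
static int full_map[NR_FAKE_CPUS] = { [0 ... NR_FAKE_CPUS - 1] = NUMA_NO_NODE };
```

Note the range designator is a GNU extension, not ISO C; standard code would need an explicit loop or per-index designators.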
-3
arch/arm64/kernel/setup.c
@@ -355 +355 @@
 	smp_init_cpus();
 	smp_build_mpidr_hash();
 
-	/* Init percpu seeds for random tags after cpus are set up. */
-	kasan_init_sw_tags();
-
 #ifdef CONFIG_ARM64_SW_TTBR0_PAN
 	/*
 	 * Make sure init_thread_info.ttbr0 always generates translation
+2
arch/arm64/kernel/smp.c
@@ -467 +467 @@
 	init_gic_priority_masking();
 
 	kasan_init_hw_tags();
+	/* Init percpu seeds for random tags after cpus are set up. */
+	kasan_init_sw_tags();
 }
 
 /*
@@ -164 +164 @@
 /**
  * kvm_arch_init_vm - initializes a VM data structure
  * @kvm:	pointer to the KVM struct
+ * @type:	kvm device type
  */
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {

@@ -522 +521 @@
 
 static void vcpu_set_pauth_traps(struct kvm_vcpu *vcpu)
 {
-	if (vcpu_has_ptrauth(vcpu)) {
+	if (vcpu_has_ptrauth(vcpu) && !is_protected_kvm_enabled()) {
 		/*
-		 * Either we're running running an L2 guest, and the API/APK
-		 * bits come from L1's HCR_EL2, or API/APK are both set.
+		 * Either we're running an L2 guest, and the API/APK bits come
+		 * from L1's HCR_EL2, or API/APK are both set.
 		 */
 		if (unlikely(vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))) {
 			u64 val;

@@ -542 +541 @@
 		 * Save the host keys if there is any chance for the guest
 		 * to use pauth, as the entry code will reload the guest
 		 * keys in that case.
-		 * Protected mode is the exception to that rule, as the
-		 * entry into the EL2 code eagerly switch back and forth
-		 * between host and hyp keys (and kvm_hyp_ctxt is out of
-		 * reach anyway).
 		 */
-		if (is_protected_kvm_enabled())
-			return;
-
 		if (vcpu->arch.hcr_el2 & (HCR_API | HCR_APK)) {
 			struct kvm_cpu_context *ctxt;
+
 			ctxt = this_cpu_ptr_hyp_sym(kvm_hyp_ctxt);
 			ptrauth_save_keys(ctxt);
 		}
@@ -173 +173 @@
 static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
 {
 	/*
-	 * Make sure we handle the exit for workarounds and ptrauth
-	 * before the pKVM handling, as the latter could decide to
-	 * UNDEF.
+	 * Make sure we handle the exit for workarounds before the pKVM
+	 * handling, as the latter could decide to UNDEF.
 	 */
 	return (kvm_hyp_handle_sysreg(vcpu, exit_code) ||
 		kvm_handle_pvm_sysreg(vcpu, exit_code));
@@ -9 +9 @@
 #include <kvm/arm_vgic.h>
 #include "vgic.h"
 
-/**
+/*
  * vgic_irqfd_set_irq: inject the IRQ corresponding to the
  * irqchip routing entry
  *

@@ -75 +75 @@
 	msi->flags = e->msi.flags;
 	msi->devid = e->msi.devid;
 }
-/**
+
+/*
  * kvm_set_msi: inject the MSI corresponding to the
  * MSI routing entry
  *

@@ -99 +98 @@
 	return vgic_its_inject_msi(kvm, &msi);
 }
 
-/**
+/*
  * kvm_arch_set_irq_inatomic: fast-path for irqfd injection
  */
 int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *e,
+11-7
arch/arm64/kvm/vgic/vgic-its.c
@@ -2040 +2040 @@
  * @start_id: the ID of the first entry in the table
  *            (non zero for 2d level tables)
  * @fn: function to apply on each entry
+ * @opaque: pointer to opaque data
  *
  * Return: < 0 on error, 0 if last element was identified, 1 otherwise
  *         (the last element may not be found on second level tables)

@@ -2080 +2079 @@
 	return 1;
 }
 
-/**
+/*
  * vgic_its_save_ite - Save an interrupt translation entry at @gpa
  */
 static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,

@@ -2100 +2099 @@
 
 /**
  * vgic_its_restore_ite - restore an interrupt translation entry
+ *
+ * @its: its handle
  * @event_id: id used for indexing
  * @ptr: pointer to the ITE entry
  * @opaque: pointer to the its_device

@@ -2234 +2231 @@
  * @its: ITS handle
  * @dev: ITS device
  * @ptr: GPA
+ * @dte_esz: device table entry size
  */
 static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
 			     gpa_t ptr, int dte_esz)

@@ -2317 +2313 @@
 	return 1;
 }
 
-/**
+/*
  * vgic_its_save_device_tables - Save the device table and all ITT
  * into guest RAM
  *

@@ -2390 +2386 @@
 	return ret;
 }
 
-/**
+/*
  * vgic_its_restore_device_tables - Restore the device table and all ITT
  * from guest RAM to internal data structs
  */

@@ -2482 +2478 @@
 	return 1;
 }
 
-/**
+/*
  * vgic_its_save_collection_table - Save the collection table into
  * guest RAM
  */

@@ -2522 +2518 @@
 	return ret;
 }
 
-/**
+/*
  * vgic_its_restore_collection_table - reads the collection table
  * in guest memory and restores the ITS internal state. Requires the
  * BASER registers to be restored before.

@@ -2560 +2556 @@
 	return ret;
 }
 
-/**
+/*
  * vgic_its_save_tables_v0 - Save the ITS tables into guest ARM
  * according to v0 ABI
  */

@@ -2575 +2571 @@
 	return vgic_its_save_collection_table(its);
 }
 
-/**
+/*
  * vgic_its_restore_tables_v0 - Restore the ITS tables from guest RAM
  * to internal data structs according to V0 ABI
  *
+1-1
arch/arm64/kvm/vgic/vgic-v3.c
@@ -370 +370 @@
 			      dist->its_vm.vpes[i]->irq));
 }
 
-/**
+/*
  * vgic_v3_save_pending_tables - Save the pending tables into guest RAM
  * kvm lock and all vcpu lock must be held
  */
+1-1
arch/arm64/kvm/vgic/vgic.c
@@ -313 +313 @@
  * with all locks dropped.
  */
 bool vgic_queue_irq_unlock(struct kvm *kvm, struct vgic_irq *irq,
-			   unsigned long flags)
+			   unsigned long flags) __releases(&irq->irq_lock)
 {
 	struct kvm_vcpu *vcpu;
@@ -145 +145 @@
 
 #ifdef CONFIG_HOTPLUG_SMT
 #include <linux/cpu_smt.h>
+#include <linux/cpumask.h>
 #include <asm/cputhreads.h>
 
 static inline bool topology_is_primary_thread(unsigned int cpu)

@@ -156 +155 @@
 static inline bool topology_smt_thread_allowed(unsigned int cpu)
 {
 	return cpu_thread_in_core(cpu) < cpu_smt_num_threads;
+}
+
+#define topology_is_core_online topology_is_core_online
+static inline bool topology_is_core_online(unsigned int cpu)
+{
+	int i, first_cpu = cpu_first_thread_sibling(cpu);
+
+	for (i = first_cpu; i < first_cpu + threads_per_core; ++i) {
+		if (cpu_online(i))
+			return true;
+	}
+	return false;
 }
 #endif
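The new `topology_is_core_online()` above declares a core online if any sibling thread in that core is online. A standalone toy model of the same walk (everything here, including `toy_` names, the mask layout, and the fixed 4-thread core width, is invented for illustration; the kernel uses its real cpumask and topology helpers):

```c
#include <assert.h>
#include <stdbool.h>

enum { THREADS_PER_CORE = 4, NR_CPUS_TOY = 16 };

/* One 32-bit word is enough for our 16 toy CPUs. */
static const unsigned int sample_online_mask[1] = { 1u << 5 }; /* only CPU 5 online */

static bool cpu_online_toy(const unsigned int *mask, unsigned int cpu)
{
	return (mask[cpu / 32] >> (cpu % 32)) & 1u;
}

/* Same shape as the kernel helper: scan all threads of cpu's core. */
static bool topology_is_core_online_toy(const unsigned int *mask, unsigned int cpu)
{
	unsigned int i, first = cpu - (cpu % THREADS_PER_CORE);

	for (i = first; i < first + THREADS_PER_CORE; i++) {
		if (cpu_online_toy(mask, i))
			return true;
	}
	return false;
}
```

With only CPU 5 online, every CPU in core 1 (CPUs 4–7) reports its core as online, while CPUs in other cores do not.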
+1
arch/powerpc/kernel/setup-common.c
@@ -959 +959 @@
 	mem_topology_setup();
 	/* Set max_mapnr before paging_init() */
 	set_max_mapnr(max_pfn);
+	high_memory = (void *)__va(max_low_pfn * PAGE_SIZE);
 
 	/*
 	 * Release secondary cpus out of their spinloops at 0x60 now that
+2-2
arch/powerpc/mm/init-common.c
@@ -74 +74 @@
 
 #define CTOR(shift) static void ctor_##shift(void *addr) \
 {									\
-	memset(addr, 0, sizeof(void *) << (shift));			\
+	memset(addr, 0, sizeof(pgd_t) << (shift));			\
 }
 
 CTOR(0); CTOR(1); CTOR(2); CTOR(3); CTOR(4); CTOR(5); CTOR(6); CTOR(7);

@@ -117 +117 @@
 void pgtable_cache_add(unsigned int shift)
 {
 	char *name;
-	unsigned long table_size = sizeof(void *) << shift;
+	unsigned long table_size = sizeof(pgd_t) << shift;
 	unsigned long align = table_size;
 
 	/* When batching pgtable pointers for RCU freeing, we store
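The hunk above sizes the page-table caches by the entry type instead of by `sizeof(void *)`. A small sketch of the difference (the `fake_pgd_t` type and helper names are invented; they stand in for a configuration where the table entry is wider than a pointer):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical entry wider than a pointer (e.g. a 16-byte pgd_t). */
typedef struct { unsigned long long val[2]; } fake_pgd_t;

/* Old scheme: assumes every entry is pointer-sized. */
static size_t table_size_by_ptr(unsigned int shift)
{
	return sizeof(void *) << shift;
}

/* Fixed scheme: stays correct whatever the entry type's size is. */
static size_t table_size_by_entry(unsigned int shift)
{
	return sizeof(fake_pgd_t) << shift;
}
```

When the entry type outgrows a pointer, the pointer-based size allocates too little for a table of `1 << shift` entries; sizing by the entry type cannot drift.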
@@ -29 +29 @@
 
 #include <asm/numa.h>
 
-static int acpi_early_node_map[NR_CPUS] __initdata = { NUMA_NO_NODE };
+static int acpi_early_node_map[NR_CPUS] __initdata = { [0 ... NR_CPUS - 1] = NUMA_NO_NODE };
 
 int __init acpi_numa_get_nid(unsigned int cpu)
 {
+4
arch/riscv/kernel/patch.c
@@ -205 +205 @@
 	int ret;
 
 	ret = patch_insn_set(addr, c, len);
+	if (!ret)
+		flush_icache_range((uintptr_t)addr, (uintptr_t)addr + len);
 
 	return ret;
 }

@@ -241 +239 @@
 	int ret;
 
 	ret = patch_insn_write(addr, insns, len);
+	if (!ret)
+		flush_icache_range((uintptr_t)addr, (uintptr_t)addr + len);
 
 	return ret;
 }
+6-5
arch/riscv/kernel/sys_hwprobe.c
@@ -178 +178 @@
 			perf = this_perf;
 
 		if (perf != this_perf) {
-			perf = RISCV_HWPROBE_MISALIGNED_UNKNOWN;
+			perf = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
 			break;
 		}
 	}
 
 	if (perf == -1ULL)
-		return RISCV_HWPROBE_MISALIGNED_UNKNOWN;
+		return RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
 
 	return perf;
 }

@@ -192 +192 @@
 static u64 hwprobe_misaligned(const struct cpumask *cpus)
 {
 	if (IS_ENABLED(CONFIG_RISCV_EFFICIENT_UNALIGNED_ACCESS))
-		return RISCV_HWPROBE_MISALIGNED_FAST;
+		return RISCV_HWPROBE_MISALIGNED_SCALAR_FAST;
 
 	if (IS_ENABLED(CONFIG_RISCV_EMULATED_UNALIGNED_ACCESS) && unaligned_ctl_available())
-		return RISCV_HWPROBE_MISALIGNED_EMULATED;
+		return RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED;
 
-	return RISCV_HWPROBE_MISALIGNED_SLOW;
+	return RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW;
 }
 #endif

@@ -225 +225 @@
 		break;
 
 	case RISCV_HWPROBE_KEY_CPUPERF_0:
+	case RISCV_HWPROBE_KEY_MISALIGNED_SCALAR_PERF:
 		pair->value = hwprobe_misaligned(cpus);
 		break;
+2-2
arch/riscv/kernel/traps.c
@@ -319 +319 @@
 
 	regs->epc += 4;
 	regs->orig_a0 = regs->a0;
+	regs->a0 = -ENOSYS;
 
 	riscv_v_vstate_discard(regs);

@@ -329 +328 @@
 
 	if (syscall >= 0 && syscall < NR_syscalls)
 		syscall_handler(regs, syscall);
-	else if (syscall != -1)
-		regs->a0 = -ENOSYS;
+
 	/*
 	 * Ultimately, this value will get limited by KSTACK_OFFSET_MAX(),
 	 * so the maximum stack offset is 1k bytes (10 bits).
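The traps.c change above presets the return register to `-ENOSYS` and lets a valid syscall number overwrite it, instead of patching the error in after the fact. A toy dispatch loop showing the same shape (the `toy_` table and functions are invented stand-ins, not kernel code):

```c
#include <assert.h>
#include <errno.h>

enum { NR_TOY_SYSCALLS = 2 };

static long toy_getpid(void) { return 42; }
static long toy_getuid(void) { return 1000; }

static long (*const toy_table[NR_TOY_SYSCALLS])(void) = {
	toy_getpid,
	toy_getuid,
};

static long toy_dispatch(long nr)
{
	long a0 = -ENOSYS;	/* preset, as the patch does for regs->a0 */

	/* Only an in-range number overwrites the preset error; out-of-range
	 * numbers (including the -1 "skip" value) need no special casing. */
	if (nr >= 0 && nr < NR_TOY_SYSCALLS)
		a0 = toy_table[nr]();
	return a0;
}
```

The preset removes the `else if (syscall != -1)` branch: every path that does not run a handler already carries the right error.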
+3-3
arch/riscv/kernel/traps_misaligned.c
@@ -338 +338 @@
 	perf_sw_event(PERF_COUNT_SW_ALIGNMENT_FAULTS, 1, regs, addr);
 
 #ifdef CONFIG_RISCV_PROBE_UNALIGNED_ACCESS
-	*this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_EMULATED;
+	*this_cpu_ptr(&misaligned_access_speed) = RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED;
 #endif
 
 	if (!unaligned_enabled)

@@ -532 +532 @@
 	unsigned long tmp_var, tmp_val;
 	bool misaligned_emu_detected;
 
-	*mas_ptr = RISCV_HWPROBE_MISALIGNED_UNKNOWN;
+	*mas_ptr = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
 
 	__asm__ __volatile__ (
 		"       "REG_L" %[tmp], 1(%[ptr])\n"
 		: [tmp] "=r" (tmp_val) : [ptr] "r" (&tmp_var) : "memory");
 
-	misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_EMULATED);
+	misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED);
 	/*
 	 * If unaligned_ctl is already set, this means that we detected that all
 	 * CPUS uses emulated misaligned access at boot time. If that changed
+6-6
arch/riscv/kernel/unaligned_access_speed.c
@@ -34 +34 @@
 	struct page *page = param;
 	void *dst;
 	void *src;
-	long speed = RISCV_HWPROBE_MISALIGNED_SLOW;
+	long speed = RISCV_HWPROBE_MISALIGNED_SCALAR_SLOW;
 
-	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN)
 		return 0;
 
 	/* Make an unaligned destination buffer. */

@@ -95 +95 @@
 	}
 
 	if (word_cycles < byte_cycles)
-		speed = RISCV_HWPROBE_MISALIGNED_FAST;
+		speed = RISCV_HWPROBE_MISALIGNED_SCALAR_FAST;
 
 	ratio = div_u64((byte_cycles * 100), word_cycles);
 	pr_info("cpu%d: Ratio of byte access time to unaligned word access is %d.%02d, unaligned accesses are %s\n",
 		cpu,
 		ratio / 100,
 		ratio % 100,
-		(speed == RISCV_HWPROBE_MISALIGNED_FAST) ? "fast" : "slow");
+		(speed == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST) ? "fast" : "slow");
 
 	per_cpu(misaligned_access_speed, cpu) = speed;
 

@@ -110 +110 @@
 	 * Set the value of fast_misaligned_access of a CPU. These operations
 	 * are atomic to avoid race conditions.
 	 */
-	if (speed == RISCV_HWPROBE_MISALIGNED_FAST)
+	if (speed == RISCV_HWPROBE_MISALIGNED_SCALAR_FAST)
 		cpumask_set_cpu(cpu, &fast_misaligned_access);
 	else
 		cpumask_clear_cpu(cpu, &fast_misaligned_access);

@@ -188 +188 @@
 	static struct page *buf;
 
 	/* We are already set since the last check */
-	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_UNKNOWN)
+	if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN)
 		goto exit;
 
 	buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
@@ -351 +351 @@
 	 * reversing the LDR calculation to get cluster of APICs, i.e. no
 	 * additional work is required.
 	 */
-	if (apic_x2apic_mode(apic)) {
-		WARN_ON_ONCE(ldr != kvm_apic_calc_x2apic_ldr(kvm_x2apic_id(apic)));
+	if (apic_x2apic_mode(apic))
 		return;
-	}
 
 	if (WARN_ON_ONCE(!kvm_apic_map_get_logical_dest(new, ldr,
 							&cluster, &mask))) {

@@ -2964 +2966 @@
 				struct kvm_lapic_state *s, bool set)
 {
 	if (apic_x2apic_mode(vcpu->arch.apic)) {
+		u32 x2apic_id = kvm_x2apic_id(vcpu->arch.apic);
 		u32 *id = (u32 *)(s->regs + APIC_ID);
 		u32 *ldr = (u32 *)(s->regs + APIC_LDR);
 		u64 icr;
 
 		if (vcpu->kvm->arch.x2apic_format) {
-			if (*id != vcpu->vcpu_id)
+			if (*id != x2apic_id)
 				return -EINVAL;
 		} else {
+			/*
+			 * Ignore the userspace value when setting APIC state.
+			 * KVM's model is that the x2APIC ID is readonly, e.g.
+			 * KVM only supports delivering interrupts to KVM's
+			 * version of the x2APIC ID. However, for backwards
+			 * compatibility, don't reject attempts to set a
+			 * mismatched ID for userspace that hasn't opted into
+			 * x2apic_format.
+			 */
 			if (set)
-				*id >>= 24;
+				*id = x2apic_id;
 			else
-				*id <<= 24;
+				*id = x2apic_id << 24;
 		}
 
 		/*

@@ -2994 +2986 @@
 		 * split to ICR+ICR2 in userspace for backwards compatibility.
 		 */
 		if (set) {
-			*ldr = kvm_apic_calc_x2apic_ldr(*id);
+			*ldr = kvm_apic_calc_x2apic_ldr(x2apic_id);
 
 			icr = __kvm_lapic_get_reg(s->regs, APIC_ICR) |
 			      (u64)__kvm_lapic_get_reg(s->regs, APIC_ICR2) << 32;
+4-3
arch/x86/kvm/svm/sev.c
@@ -2276 +2276 @@
 
 	for (gfn = gfn_start, i = 0; gfn < gfn_start + npages; gfn++, i++) {
 		struct sev_data_snp_launch_update fw_args = {0};
-		bool assigned;
+		bool assigned = false;
 		int level;
 
 		ret = snp_lookup_rmpentry((u64)pfn + i, &assigned, &level);

@@ -2290 +2290 @@
 		if (src) {
 			void *vaddr = kmap_local_pfn(pfn + i);
 
-			ret = copy_from_user(vaddr, src + i * PAGE_SIZE, PAGE_SIZE);
-			if (ret)
+			if (copy_from_user(vaddr, src + i * PAGE_SIZE, PAGE_SIZE)) {
+				ret = -EFAULT;
 				goto err;
+			}
 			kunmap_local(vaddr);
 		}
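The sev.c hunk above fixes a classic `copy_from_user()` misuse: that function returns the number of bytes *not* copied, never a -errno, so forwarding its return value surfaces a positive byte count where callers expect an error code. A userspace sketch of the bug and the fix (the `fake_`/`toy_` names and the `faulted` parameter simulating a partial fault are invented for illustration):

```c
#include <assert.h>
#include <string.h>

#define TOY_EFAULT 14	/* stand-in for -EFAULT */

static char toy_dst[8];

/* Mimics copy_from_user(): copies what it can, returns bytes left uncopied. */
static unsigned long fake_copy_from_user(char *dst, const char *src,
					 unsigned long n, unsigned long faulted)
{
	unsigned long copied = n > faulted ? n - faulted : 0;

	memcpy(dst, src, copied);
	return n - copied;
}

/* Fixed pattern: collapse any nonzero remainder to a real error code. */
static int toy_import(char *dst, const char *src, unsigned long n,
		      unsigned long faulted)
{
	if (fake_copy_from_user(dst, src, n, faulted))
		return -TOY_EFAULT;
	return 0;
}
```

The old code's `ret = copy_from_user(...); if (ret) goto err;` would have propagated, say, `3` (a byte count) as if it were an errno.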
+2-4
arch/x86/kvm/x86.c
@@ -427 +427 @@
 
 int kvm_set_user_return_msr(unsigned slot, u64 value, u64 mask)
 {
-	unsigned int cpu = smp_processor_id();
-	struct kvm_user_return_msrs *msrs = per_cpu_ptr(user_return_msrs, cpu);
+	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);
 	int err;
 
 	value = (value & mask) | (msrs->values[slot].host & ~mask);

@@ -449 +450 @@
 
 static void drop_user_return_notifiers(void)
 {
-	unsigned int cpu = smp_processor_id();
-	struct kvm_user_return_msrs *msrs = per_cpu_ptr(user_return_msrs, cpu);
+	struct kvm_user_return_msrs *msrs = this_cpu_ptr(user_return_msrs);
 
 	if (msrs->registered)
 		kvm_on_user_return(&msrs->urn);
@@ -20 +21 @@
 
 /* Local prototypes */
 
+static void
+acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node,
+				  acpi_adr_space_type space_id);
+
 static acpi_status
 acpi_ev_reg_run(acpi_handle obj_handle,
 		u32 level, void *context, void **return_value);

@@ -65 +61 @@
 		    acpi_gbl_default_address_spaces
 		    [i])) {
 			acpi_ev_execute_reg_methods(acpi_gbl_root_node,
+						    ACPI_UINT32_MAX,
 						    acpi_gbl_default_address_spaces
 						    [i], ACPI_REG_CONNECT);
 		}

@@ -673 +668 @@
  * FUNCTION:    acpi_ev_execute_reg_methods
  *
  * PARAMETERS:  node            - Namespace node for the device
+ *              max_depth       - Depth to which search for _REG
  *              space_id        - The address space ID
  *              function        - Passed to _REG: On (1) or Off (0)
  *

@@ -685 +679 @@
 
 void
-acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
+acpi_ev_execute_reg_methods(struct acpi_namespace_node *node, u32 max_depth,
 			    acpi_adr_space_type space_id, u32 function)
 {
 	struct acpi_reg_walk_info info;

@@ -719 +713 @@
 	 * regions and _REG methods. (i.e. handlers must be installed for all
 	 * regions of this Space ID before we can run any _REG methods)
 	 */
-	(void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, ACPI_UINT32_MAX,
+	(void)acpi_ns_walk_namespace(ACPI_TYPE_ANY, node, max_depth,
 				     ACPI_NS_WALK_UNLOCK, acpi_ev_reg_run, NULL,
 				     &info, NULL);

@@ -820 +814 @@
  *
  ******************************************************************************/
 
-void
+static void
 acpi_ev_execute_orphan_reg_method(struct acpi_namespace_node *device_node,
 				  acpi_adr_space_type space_id)
+7-57
drivers/acpi/acpica/evxfregn.c
@@ -85 +85 @@
 	/* Run all _REG methods for this address space */
 
 	if (run_reg) {
-		acpi_ev_execute_reg_methods(node, space_id, ACPI_REG_CONNECT);
+		acpi_ev_execute_reg_methods(node, ACPI_UINT32_MAX, space_id,
+					    ACPI_REG_CONNECT);
 	}
 
 unlock_and_exit:

@@ -264 +263 @@
  * FUNCTION:    acpi_execute_reg_methods
  *
  * PARAMETERS:  device          - Handle for the device
+ *              max_depth       - Depth to which search for _REG
  *              space_id        - The address space ID
  *
  * RETURN:      Status

@@ -273 +271 @@
  *
  ******************************************************************************/
 acpi_status
-acpi_execute_reg_methods(acpi_handle device, acpi_adr_space_type space_id)
+acpi_execute_reg_methods(acpi_handle device, u32 max_depth,
+			 acpi_adr_space_type space_id)
 {
 	struct acpi_namespace_node *node;
 	acpi_status status;

@@ -299 +296 @@
 
 		/* Run all _REG methods for this address space */
 
-		acpi_ev_execute_reg_methods(node, space_id, ACPI_REG_CONNECT);
+		acpi_ev_execute_reg_methods(node, max_depth, space_id,
+					    ACPI_REG_CONNECT);
 	} else {
 		status = AE_BAD_PARAMETER;
 	}

@@ -310 +306 @@
 }
 
 ACPI_EXPORT_SYMBOL(acpi_execute_reg_methods)
-
-/*******************************************************************************
- *
- * FUNCTION:    acpi_execute_orphan_reg_method
- *
- * PARAMETERS:  device          - Handle for the device
- *              space_id        - The address space ID
- *
- * RETURN:      Status
- *
- * DESCRIPTION: Execute an "orphan" _REG method that appears under an ACPI
- *              device. This is a _REG method that has no corresponding region
- *              within the device's scope.
- *
- ******************************************************************************/
-acpi_status
-acpi_execute_orphan_reg_method(acpi_handle device, acpi_adr_space_type space_id)
-{
-	struct acpi_namespace_node *node;
-	acpi_status status;
-
-	ACPI_FUNCTION_TRACE(acpi_execute_orphan_reg_method);
-
-	/* Parameter validation */
-
-	if (!device) {
-		return_ACPI_STATUS(AE_BAD_PARAMETER);
-	}
-
-	status = acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
-	if (ACPI_FAILURE(status)) {
-		return_ACPI_STATUS(status);
-	}
-
-	/* Convert and validate the device handle */
-
-	node = acpi_ns_validate_handle(device);
-	if (node) {
-
-		/*
-		 * If an "orphan" _REG method is present in the device's scope
-		 * for the given address space ID, run it.
-		 */
-
-		acpi_ev_execute_orphan_reg_method(node, space_id);
-	} else {
-		status = AE_BAD_PARAMETER;
-	}
-
-	(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
-	return_ACPI_STATUS(status);
-}
-
-ACPI_EXPORT_SYMBOL(acpi_execute_orphan_reg_method)
@@ -2273 +2273 @@
 	if (device->handler)
 		goto ok;
 
+	acpi_ec_register_opregions(device);
+
 	if (!device->flags.initialized) {
 		device->flags.power_manageable =
 			device->power.states[ACPI_STATE_D0].flags.valid;
+13-2
drivers/ata/libata-scsi.c
@@ -951 +951 @@
 				     &sense_key, &asc, &ascq);
 		ata_scsi_set_sense(qc->dev, cmd, sense_key, asc, ascq);
 	} else {
-		/* ATA PASS-THROUGH INFORMATION AVAILABLE */
-		ata_scsi_set_sense(qc->dev, cmd, RECOVERED_ERROR, 0, 0x1D);
+		/*
+		 * ATA PASS-THROUGH INFORMATION AVAILABLE
+		 *
+		 * Note: we are supposed to call ata_scsi_set_sense(), which
+		 * respects the D_SENSE bit, instead of unconditionally
+		 * generating the sense data in descriptor format. However,
+		 * because hdparm, hddtemp, and udisks incorrectly assume sense
+		 * data in descriptor format, without even looking at the
+		 * RESPONSE CODE field in the returned sense data (to see which
+		 * format the returned sense data is in), we are stuck with
+		 * being bug compatible with older kernels.
+		 */
+		scsi_build_sense(cmd, 1, RECOVERED_ERROR, 0, 0x1D);
 	}
 }
+5-4
drivers/atm/idt77252.c
···11181118 rpp->len += skb->len;1119111911201120 if (stat & SAR_RSQE_EPDU) {11211121+ unsigned int len, truesize;11211122 unsigned char *l1l2;11221122- unsigned int len;1123112311241124 l1l2 = (unsigned char *) ((unsigned long) skb->data + skb->len - 6);11251125···11891189 ATM_SKB(skb)->vcc = vcc;11901190 __net_timestamp(skb);1191119111921192+ truesize = skb->truesize;11921193 vcc->push(vcc, skb);11931194 atomic_inc(&vcc->stats->rx);1194119511951195- if (skb->truesize > SAR_FB_SIZE_3)11961196+ if (truesize > SAR_FB_SIZE_3)11961197 add_rx_skb(card, 3, SAR_FB_SIZE_3, 1);11971197- else if (skb->truesize > SAR_FB_SIZE_2)11981198+ else if (truesize > SAR_FB_SIZE_2)11981199 add_rx_skb(card, 2, SAR_FB_SIZE_2, 1);11991199- else if (skb->truesize > SAR_FB_SIZE_1)12001200+ else if (truesize > SAR_FB_SIZE_1)12001201 add_rx_skb(card, 1, SAR_FB_SIZE_1, 1);12011202 else12021203 add_rx_skb(card, 0, SAR_FB_SIZE_0, 1);
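The idt77252 change above is a use-after-free fix: `vcc->push()` hands the skb to the network stack, which may free it, so `skb->truesize` must be copied to a local before the call. The pattern in miniature (hypothetical `struct buf`/`push()` names, not driver code):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Sketch of the pattern: once a buffer is handed off to a consumer
 * that may free it, none of its fields may be read afterwards, so
 * anything still needed is copied to a local first.
 */
struct buf {
	unsigned int truesize;
};

static void push(struct buf *b)
{
	/* Consumer takes ownership and may free the buffer. */
	free(b);
}

static unsigned int handoff(struct buf *b)
{
	unsigned int truesize = b->truesize;	/* copy before handoff */

	push(b);		/* b must not be dereferenced after this */
	return truesize;	/* safe: uses the saved copy */
}

static unsigned int demo_handoff(unsigned int ts)
{
	struct buf *b = malloc(sizeof(*b));

	b->truesize = ts;
	return handoff(b);
}
```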
+34-8
drivers/char/xillybus/xillyusb.c
···5050static const char xillyname[] = "xillyusb";51515252static unsigned int fifo_buf_order;5353+static struct workqueue_struct *wakeup_wq;53545455#define USB_VENDOR_ID_XILINX 0x03fd5556#define USB_VENDOR_ID_ALTERA 0x09fb···570569 * errors if executed. The mechanism relies on that xdev->error is assigned571570 * a non-zero value by report_io_error() prior to queueing wakeup_all(),572571 * which prevents bulk_in_work() from calling process_bulk_in().573573- *574574- * The fact that wakeup_all() and bulk_in_work() are queued on the same575575- * workqueue makes their concurrent execution very unlikely, however the576576- * kernel's API doesn't seem to ensure this strictly.577572 */578573579574static void wakeup_all(struct work_struct *work)···624627625628 if (do_once) {626629 kref_get(&xdev->kref); /* xdev is used by work item */627627- queue_work(xdev->workq, &xdev->wakeup_workitem);630630+ queue_work(wakeup_wq, &xdev->wakeup_workitem);628631 }629632}630633···1903190619041907static int xillyusb_setup_base_eps(struct xillyusb_dev *xdev)19051908{19091909+ struct usb_device *udev = xdev->udev;19101910+19111911+ /* Verify that device has the two fundamental bulk in/out endpoints */19121912+ if (usb_pipe_type_check(udev, usb_sndbulkpipe(udev, MSG_EP_NUM)) ||19131913+ usb_pipe_type_check(udev, usb_rcvbulkpipe(udev, IN_EP_NUM)))19141914+ return -ENODEV;19151915+19061916 xdev->msg_ep = endpoint_alloc(xdev, MSG_EP_NUM | USB_DIR_OUT,19071917 bulk_out_work, 1, 2);19081918 if (!xdev->msg_ep)···19391935 __le16 *chandesc,19401936 int num_channels)19411937{19421942- struct xillyusb_channel *chan;19381938+ struct usb_device *udev = xdev->udev;19391939+ struct xillyusb_channel *chan, *new_channels;19431940 int i;1944194119451942 chan = kcalloc(num_channels, sizeof(*chan), GFP_KERNEL);19461943 if (!chan)19471944 return -ENOMEM;1948194519491949- xdev->channels = chan;19461946+ new_channels = chan;1950194719511948 for (i = 0; i < num_channels; i++, chan++) {19521949 unsigned int 
in_desc = le16_to_cpu(*chandesc++);···19761971 */1977197219781973 if ((out_desc & 0x80) && i < 14) { /* Entry is valid */19741974+ if (usb_pipe_type_check(udev,19751975+ usb_sndbulkpipe(udev, i + 2))) {19761976+ dev_err(xdev->dev,19771977+ "Missing BULK OUT endpoint %d\n",19781978+ i + 2);19791979+ kfree(new_channels);19801980+ return -ENODEV;19811981+ }19821982+19791983 chan->writable = 1;19801984 chan->out_synchronous = !!(out_desc & 0x40);19811985 chan->out_seekable = !!(out_desc & 0x20);···19941980 }19951981 }1996198219831983+ xdev->channels = new_channels;19971984 return 0;19981985}19991986···21112096 * just after responding with the IDT, there is no reason for any21122097 * work item to be running now. To be sure that xdev->channels21132098 * is updated on anything that might run in parallel, flush the21142114- * workqueue, which rarely does anything.20992099+ * device's workqueue and the wakeup work item. This rarely21002100+ * does anything.21152101 */21162102 flush_workqueue(xdev->workq);21032103+ flush_work(&xdev->wakeup_workitem);2117210421182105 xdev->num_channels = num_channels;21192106···22752258{22762259 int rc = 0;2277226022612261+ wakeup_wq = alloc_workqueue(xillyname, 0, 0);22622262+ if (!wakeup_wq)22632263+ return -ENOMEM;22642264+22782265 if (LOG2_INITIAL_FIFO_BUF_SIZE > PAGE_SHIFT)22792266 fifo_buf_order = LOG2_INITIAL_FIFO_BUF_SIZE - PAGE_SHIFT;22802267 else···2286226522872266 rc = usb_register(&xillyusb_driver);2288226722682268+ if (rc)22692269+ destroy_workqueue(wakeup_wq);22702270+22892271 return rc;22902272}2291227322922274static void __exit xillyusb_exit(void)22932275{22942276 usb_deregister(&xillyusb_driver);22772277+22782278+ destroy_workqueue(wakeup_wq);22952279}2296228022972281module_init(xillyusb_init);
···10571057 r = amdgpu_ring_parse_cs(ring, p, job, ib);10581058 if (r)10591059 return r;10601060+10611061+ if (ib->sa_bo)10621062+ ib->gpu_addr = amdgpu_sa_bo_gpu_addr(ib->sa_bo);10601063 } else {10611064 ib->ptr = (uint32_t *)kptr;10621065 r = amdgpu_ring_patch_cs_in_place(ring, p, job, ib);
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c
···685685686686 switch (args->in.op) {687687 case AMDGPU_CTX_OP_ALLOC_CTX:688688+ if (args->in.flags)689689+ return -EINVAL;688690 r = amdgpu_ctx_alloc(adev, fpriv, filp, priority, &id);689691 args->out.alloc.ctx_id = id;690692 break;691693 case AMDGPU_CTX_OP_FREE_CTX:694694+ if (args->in.flags)695695+ return -EINVAL;692696 r = amdgpu_ctx_free(fpriv, id);693697 break;694698 case AMDGPU_CTX_OP_QUERY_STATE:699699+ if (args->in.flags)700700+ return -EINVAL;695701 r = amdgpu_ctx_query(adev, fpriv, id, &args->out);696702 break;697703 case AMDGPU_CTX_OP_QUERY_STATE2:704704+ if (args->in.flags)705705+ return -EINVAL;698706 r = amdgpu_ctx_query2(adev, fpriv, id, &args->out);699707 break;700708 case AMDGPU_CTX_OP_GET_STABLE_PSTATE:
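The amdgpu_ctx hunk above tightens the ioctl ABI: ops that take no flags now reject any non-zero value instead of silently ignoring it, which keeps those bits usable for future extensions. The convention, reduced to a helper (hypothetical name, not amdgpu code):

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/*
 * Sketch: an ioctl op rejects any flag bit outside its allowed set, so
 * old kernels fail loudly on flags they do not understand rather than
 * accepting and ignoring them.
 */
static int ctx_op_check_flags(uint32_t flags, uint32_t allowed)
{
	if (flags & ~allowed)
		return -EINVAL;
	return 0;
}
```

Userspace that always passed zero is unaffected; anything that passed garbage now gets `-EINVAL`.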
+24-2
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
···509509 int i, r = 0;510510 int j;511511512512+ if (adev->enable_mes) {513513+ for (i = 0; i < adev->gfx.num_compute_rings; i++) {514514+ j = i + xcc_id * adev->gfx.num_compute_rings;515515+ amdgpu_mes_unmap_legacy_queue(adev,516516+ &adev->gfx.compute_ring[j],517517+ RESET_QUEUES, 0, 0);518518+ }519519+ return 0;520520+ }521521+512522 if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)513523 return -EINVAL;514524···560550 struct amdgpu_ring *kiq_ring = &kiq->ring;561551 int i, r = 0;562552 int j;553553+554554+ if (adev->enable_mes) {555555+ if (amdgpu_gfx_is_master_xcc(adev, xcc_id)) {556556+ for (i = 0; i < adev->gfx.num_gfx_rings; i++) {557557+ j = i + xcc_id * adev->gfx.num_gfx_rings;558558+ amdgpu_mes_unmap_legacy_queue(adev,559559+ &adev->gfx.gfx_ring[j],560560+ PREEMPT_QUEUES, 0, 0);561561+ }562562+ }563563+ return 0;564564+ }563565564566 if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)565567 return -EINVAL;···1017995 if (amdgpu_device_skip_hw_access(adev))1018996 return 0;101999710201020- if (adev->mes.ring.sched.ready)998998+ if (adev->mes.ring[0].sched.ready)1021999 return amdgpu_mes_rreg(adev, reg);1022100010231001 BUG_ON(!ring->funcs->emit_rreg);···10871065 if (amdgpu_device_skip_hw_access(adev))10881066 return;1089106710901090- if (adev->mes.ring.sched.ready) {10681068+ if (adev->mes.ring[0].sched.ready) {10911069 amdgpu_mes_wreg(adev, reg, v);10921070 return;10931071 }
+3-2
drivers/gpu/drm/amd/amdgpu/amdgpu_gmc.c
···589589 ring = adev->rings[i];590590 vmhub = ring->vm_hub;591591592592- if (ring == &adev->mes.ring ||592592+ if (ring == &adev->mes.ring[0] ||593593+ ring == &adev->mes.ring[1] ||593594 ring == &adev->umsch_mm.ring)594595 continue;595596···762761 unsigned long flags;763762 uint32_t seq;764763765765- if (adev->mes.ring.sched.ready) {764764+ if (adev->mes.ring[0].sched.ready) {766765 amdgpu_mes_reg_write_reg_wait(adev, reg0, reg1,767766 ref, mask);768767 return;
+51-32
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
···135135 idr_init(&adev->mes.queue_id_idr);136136 ida_init(&adev->mes.doorbell_ida);137137 spin_lock_init(&adev->mes.queue_id_lock);138138- spin_lock_init(&adev->mes.ring_lock);139138 mutex_init(&adev->mes.mutex_hidden);139139+140140+ for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++)141141+ spin_lock_init(&adev->mes.ring_lock[i]);140142141143 adev->mes.total_max_queue = AMDGPU_FENCE_MES_QUEUE_ID_MASK;142144 adev->mes.vmid_mask_mmhub = 0xffffff00;···165163 adev->mes.sdma_hqd_mask[i] = 0xfc;166164 }167165168168- r = amdgpu_device_wb_get(adev, &adev->mes.sch_ctx_offs);169169- if (r) {170170- dev_err(adev->dev,171171- "(%d) ring trail_fence_offs wb alloc failed\n", r);172172- goto error_ids;173173- }174174- adev->mes.sch_ctx_gpu_addr =175175- adev->wb.gpu_addr + (adev->mes.sch_ctx_offs * 4);176176- adev->mes.sch_ctx_ptr =177177- (uint64_t *)&adev->wb.wb[adev->mes.sch_ctx_offs];166166+ for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) {167167+ r = amdgpu_device_wb_get(adev, &adev->mes.sch_ctx_offs[i]);168168+ if (r) {169169+ dev_err(adev->dev,170170+ "(%d) ring trail_fence_offs wb alloc failed\n",171171+ r);172172+ goto error;173173+ }174174+ adev->mes.sch_ctx_gpu_addr[i] =175175+ adev->wb.gpu_addr + (adev->mes.sch_ctx_offs[i] * 4);176176+ adev->mes.sch_ctx_ptr[i] =177177+ (uint64_t *)&adev->wb.wb[adev->mes.sch_ctx_offs[i]];178178179179- r = amdgpu_device_wb_get(adev, &adev->mes.query_status_fence_offs);180180- if (r) {181181- amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs);182182- dev_err(adev->dev,183183- "(%d) query_status_fence_offs wb alloc failed\n", r);184184- goto error_ids;179179+ r = amdgpu_device_wb_get(adev,180180+ &adev->mes.query_status_fence_offs[i]);181181+ if (r) {182182+ dev_err(adev->dev,183183+ "(%d) query_status_fence_offs wb alloc failed\n",184184+ r);185185+ goto error;186186+ }187187+ adev->mes.query_status_fence_gpu_addr[i] = adev->wb.gpu_addr +188188+ (adev->mes.query_status_fence_offs[i] * 4);189189+ adev->mes.query_status_fence_ptr[i] =190190+ 
(uint64_t *)&adev->wb.wb[adev->mes.query_status_fence_offs[i]];185191 }186186- adev->mes.query_status_fence_gpu_addr =187187- adev->wb.gpu_addr + (adev->mes.query_status_fence_offs * 4);188188- adev->mes.query_status_fence_ptr =189189- (uint64_t *)&adev->wb.wb[adev->mes.query_status_fence_offs];190192191193 r = amdgpu_device_wb_get(adev, &adev->mes.read_val_offs);192194 if (r) {193193- amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs);194194- amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs);195195 dev_err(adev->dev,196196 "(%d) read_val_offs alloc failed\n", r);197197- goto error_ids;197197+ goto error;198198 }199199 adev->mes.read_val_gpu_addr =200200 adev->wb.gpu_addr + (adev->mes.read_val_offs * 4);···216212error_doorbell:217213 amdgpu_mes_doorbell_free(adev);218214error:219219- amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs);220220- amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs);221221- amdgpu_device_wb_free(adev, adev->mes.read_val_offs);222222-error_ids:215215+ for (i = 0; i < AMDGPU_MAX_MES_PIPES; i++) {216216+ if (adev->mes.sch_ctx_ptr[i])217217+ amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs[i]);218218+ if (adev->mes.query_status_fence_ptr[i])219219+ amdgpu_device_wb_free(adev,220220+ adev->mes.query_status_fence_offs[i]);221221+ }222222+ if (adev->mes.read_val_ptr)223223+ amdgpu_device_wb_free(adev, adev->mes.read_val_offs);224224+223225 idr_destroy(&adev->mes.pasid_idr);224226 idr_destroy(&adev->mes.gang_id_idr);225227 idr_destroy(&adev->mes.queue_id_idr);···236226237227void amdgpu_mes_fini(struct amdgpu_device *adev)238228{229229+ int i;230230+239231 amdgpu_bo_free_kernel(&adev->mes.event_log_gpu_obj,240232 &adev->mes.event_log_gpu_addr,241233 &adev->mes.event_log_cpu_addr);242234243243- amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs);244244- amdgpu_device_wb_free(adev, adev->mes.query_status_fence_offs);245245- amdgpu_device_wb_free(adev, adev->mes.read_val_offs);235235+ for (i = 0; i < AMDGPU_MAX_MES_PIPES; 
i++) {236236+ if (adev->mes.sch_ctx_ptr[i])237237+ amdgpu_device_wb_free(adev, adev->mes.sch_ctx_offs[i]);238238+ if (adev->mes.query_status_fence_ptr[i])239239+ amdgpu_device_wb_free(adev,240240+ adev->mes.query_status_fence_offs[i]);241241+ }242242+ if (adev->mes.read_val_ptr)243243+ amdgpu_device_wb_free(adev, adev->mes.read_val_offs);244244+246245 amdgpu_mes_doorbell_free(adev);247246248247 idr_destroy(&adev->mes.pasid_idr);···1518149915191500 amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix,15201501 sizeof(ucode_prefix));15211521- if (adev->enable_uni_mes && pipe == AMDGPU_MES_SCHED_PIPE) {15021502+ if (adev->enable_uni_mes) {15221503 snprintf(fw_name, sizeof(fw_name),15231504 "amdgpu/%s_uni_mes.bin", ucode_prefix);15241505 } else if (amdgpu_ip_version(adev, GC_HWIP, 0) >= IP_VERSION(11, 0, 0) &&
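With the MES writeback slots now allocated per pipe in a loop, the shared `error` label can no longer free a fixed set of offsets; it frees only the slots whose pointers are non-NULL, i.e. the ones actually allocated before the failure. A self-contained sketch of that partial-cleanup idiom (toy `slot[]` array, not amdgpu code):

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_PIPES 2

/*
 * Sketch: on a mid-loop allocation failure, only the slots that were
 * actually allocated (tracked by a non-NULL pointer) are freed, so one
 * shared error label stays correct for every failure point.
 */
static void *slot[MAX_PIPES];

static int alloc_slots(int fail_at)
{
	int i;

	for (i = 0; i < MAX_PIPES; i++) {
		if (i == fail_at)	/* injected failure for the demo */
			goto error;
		slot[i] = malloc(16);
		if (!slot[i])
			goto error;
	}
	return 0;

error:
	for (i = 0; i < MAX_PIPES; i++) {
		if (slot[i]) {
			free(slot[i]);
			slot[i] = NULL;
		}
	}
	return -1;
}
```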
···35463546 return r;35473547}3548354835493549-static int gfx_v12_0_kiq_disable_kgq(struct amdgpu_device *adev)35503550-{35513551- struct amdgpu_kiq *kiq = &adev->gfx.kiq[0];35523552- struct amdgpu_ring *kiq_ring = &kiq->ring;35533553- int i, r = 0;35543554-35553555- if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)35563556- return -EINVAL;35573557-35583558- if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *35593559- adev->gfx.num_gfx_rings))35603560- return -ENOMEM;35613561-35623562- for (i = 0; i < adev->gfx.num_gfx_rings; i++)35633563- kiq->pmf->kiq_unmap_queues(kiq_ring, &adev->gfx.gfx_ring[i],35643564- PREEMPT_QUEUES, 0, 0);35653565-35663566- if (adev->gfx.kiq[0].ring.sched.ready)35673567- r = amdgpu_ring_test_helper(kiq_ring);35683568-35693569- return r;35703570-}35713571-35723549static int gfx_v12_0_hw_fini(void *handle)35733550{35743551 struct amdgpu_device *adev = (struct amdgpu_device *)handle;35753575- int r;35763552 uint32_t tmp;3577355335783554 amdgpu_irq_put(adev, &adev->gfx.priv_reg_irq, 0);···3556358035573581 if (!adev->no_hw_access) {35583582 if (amdgpu_async_gfx_ring) {35593559- r = gfx_v12_0_kiq_disable_kgq(adev);35603560- if (r)35833583+ if (amdgpu_gfx_disable_kgq(adev, 0))35613584 DRM_ERROR("KGQ disable failed\n");35623585 }35633586
+1-1
drivers/gpu/drm/amd/amdgpu/gmc_v11_0.c
···231231 /* This is necessary for SRIOV as well as for GFXOFF to function232232 * properly under bare metal233233 */234234- if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring.sched.ready) &&234234+ if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring[0].sched.ready) &&235235 (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev))) {236236 amdgpu_gmc_fw_reg_write_reg_wait(adev, req, ack, inv_req,237237 1 << vmid, GET_INST(GC, 0));
+1-1
drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
···299299 /* This is necessary for SRIOV as well as for GFXOFF to function300300 * properly under bare metal301301 */302302- if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring.sched.ready) &&302302+ if ((adev->gfx.kiq[0].ring.sched.ready || adev->mes.ring[0].sched.ready) &&303303 (amdgpu_sriov_runtime(adev) || !amdgpu_sriov_vf(adev))) {304304 struct amdgpu_vmhub *hub = &adev->vmhub[vmhub];305305 const unsigned eng = 17;
···539539 }540540541541 /* IGT will check if the cursor size is configured */542542- drm->mode_config.cursor_width = drm->mode_config.max_width;543543- drm->mode_config.cursor_height = drm->mode_config.max_height;542542+ drm->mode_config.cursor_width = 512;543543+ drm->mode_config.cursor_height = 512;544544545545 /* Use OVL device for all DMA memory allocations */546546 crtc = drm_crtc_from_index(drm, 0);
+1-3
drivers/gpu/drm/rockchip/inno_hdmi.c
···279279 const u8 *buffer, size_t len)280280{281281 struct inno_hdmi *hdmi = connector_to_inno_hdmi(connector);282282- u8 packed_frame[HDMI_MAXIMUM_INFO_FRAME_SIZE];283282 ssize_t i;284283285284 if (type != HDMI_INFOFRAME_TYPE_AVI) {···290291 inno_hdmi_disable_frame(connector, type);291292292293 for (i = 0; i < len; i++)293293- hdmi_writeb(hdmi, HDMI_CONTROL_PACKET_ADDR + i,294294- packed_frame[i]);294294+ hdmi_writeb(hdmi, HDMI_CONTROL_PACKET_ADDR + i, buffer[i]);295295296296 return 0;297297}
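The inno_hdmi fix above is an uninitialized-data bug: the loop wrote `packed_frame[]`, a local array that was never filled, into the hardware, leaking stack contents instead of the caller-supplied infoframe. The corrected shape, as a standalone sketch (register file modeled as an array; names are invented):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Sketch: the infoframe bytes must come from the caller-supplied
 * buffer; a local array that is never written would push arbitrary
 * stack contents into the control-packet registers.
 */
static uint8_t regs[32];	/* stands in for the packet register window */

static void write_infoframe(const uint8_t *buffer, size_t len)
{
	size_t i;

	for (i = 0; i < len && i < sizeof(regs); i++)
		regs[i] = buffer[i];	/* source the caller's data */
}
```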
+11-3
drivers/gpu/drm/v3d/v3d_sched.c
···315315 struct v3d_dev *v3d = job->base.v3d;316316 struct drm_device *dev = &v3d->drm;317317 struct dma_fence *fence;318318- int i, csd_cfg0_reg, csd_cfg_reg_count;318318+ int i, csd_cfg0_reg;319319320320 v3d->csd_job = job;321321···335335 v3d_switch_perfmon(v3d, &job->base);336336337337 csd_cfg0_reg = V3D_CSD_QUEUED_CFG0(v3d->ver);338338- csd_cfg_reg_count = v3d->ver < 71 ? 6 : 7;339339- for (i = 1; i <= csd_cfg_reg_count; i++)338338+ for (i = 1; i <= 6; i++)340339 V3D_CORE_WRITE(0, csd_cfg0_reg + 4 * i, job->args.cfg[i]);340340+341341+ /* Although V3D 7.1 has an eighth configuration register, we are not342342+ * using it. Therefore, make sure it remains unused.343343+ *344344+ * XXX: Set the CFG7 register345345+ */346346+ if (v3d->ver >= 71)347347+ V3D_CORE_WRITE(0, V3D_V7_CSD_QUEUED_CFG7, 0);348348+341349 /* CFG0 write kicks off the job. */342350 V3D_CORE_WRITE(0, csd_cfg0_reg, job->args.cfg[0]);343351
+50-9
drivers/gpu/drm/xe/xe_device.c
···8787 spin_unlock(&xe->clients.lock);88888989 file->driver_priv = xef;9090+ kref_init(&xef->refcount);9191+9092 return 0;9393+}9494+9595+static void xe_file_destroy(struct kref *ref)9696+{9797+ struct xe_file *xef = container_of(ref, struct xe_file, refcount);9898+ struct xe_device *xe = xef->xe;9999+100100+ xa_destroy(&xef->exec_queue.xa);101101+ mutex_destroy(&xef->exec_queue.lock);102102+ xa_destroy(&xef->vm.xa);103103+ mutex_destroy(&xef->vm.lock);104104+105105+ spin_lock(&xe->clients.lock);106106+ xe->clients.count--;107107+ spin_unlock(&xe->clients.lock);108108+109109+ xe_drm_client_put(xef->client);110110+ kfree(xef);111111+}112112+113113+/**114114+ * xe_file_get() - Take a reference to the xe file object115115+ * @xef: Pointer to the xe file116116+ *117117+ * Anyone with a pointer to xef must take a reference to the xe file118118+ * object using this call.119119+ *120120+ * Return: xe file pointer121121+ */122122+struct xe_file *xe_file_get(struct xe_file *xef)123123+{124124+ kref_get(&xef->refcount);125125+ return xef;126126+}127127+128128+/**129129+ * xe_file_put() - Drop a reference to the xe file object130130+ * @xef: Pointer to the xe file131131+ *132132+ * Used to drop reference to the xef object133133+ */134134+void xe_file_put(struct xe_file *xef)135135+{136136+ kref_put(&xef->refcount, xe_file_destroy);91137}9213893139static void xe_file_close(struct drm_device *dev, struct drm_file *file)···14397 struct xe_vm *vm;14498 struct xe_exec_queue *q;14599 unsigned long idx;100100+101101+ xe_pm_runtime_get(xe);146102147103 /*148104 * No need for exec_queue.lock here as there is no contention for it···156108 xe_exec_queue_kill(q);157109 xe_exec_queue_put(q);158110 }159159- xa_destroy(&xef->exec_queue.xa);160160- mutex_destroy(&xef->exec_queue.lock);161111 mutex_lock(&xef->vm.lock);162112 xa_for_each(&xef->vm.xa, idx, vm)163113 xe_vm_close_and_put(vm);164114 mutex_unlock(&xef->vm.lock);165165- xa_destroy(&xef->vm.xa);166166- 
mutex_destroy(&xef->vm.lock);167115168168- spin_lock(&xe->clients.lock);169169- xe->clients.count--;170170- spin_unlock(&xe->clients.lock);116116+ xe_file_put(xef);171117172172- xe_drm_client_put(xef->client);173173- kfree(xef);118118+ xe_pm_runtime_put(xe);174119}175120176121static const struct drm_ioctl_desc xe_ioctls[] = {
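The xe_device change above converts `xe_file` teardown to reference counting: `xe_file_close()` no longer destroys the object directly, it drops a reference, and `xe_file_destroy()` runs only when the last holder (file, exec queue, or VM) puts its ref. The kref pattern in a simplified form (plain int instead of the kernel's atomic refcount, so this is a single-threaded sketch only):

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Sketch of the kref pattern: every holder of the pointer takes a
 * reference, and the destructor runs only when the last put drops the
 * count to zero. A real kref uses atomic ops; a plain int is fine for
 * this single-threaded illustration.
 */
struct file_obj {
	int refcount;
};

static int destroyed;

static struct file_obj *file_create(void)
{
	struct file_obj *f = malloc(sizeof(*f));

	f->refcount = 1;	/* creator holds the initial reference */
	destroyed = 0;
	return f;
}

static struct file_obj *file_get(struct file_obj *f)
{
	f->refcount++;
	return f;
}

static void file_put(struct file_obj *f)
{
	if (--f->refcount == 0) {
		destroyed = 1;	/* destructor runs exactly once */
		free(f);
	}
}
```

This is what lets an exec queue outlive the file that created it without touching freed memory.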
···251251252252 /* Accumulate all the exec queues from this client */253253 mutex_lock(&xef->exec_queue.lock);254254- xa_for_each(&xef->exec_queue.xa, i, q) {254254+ xa_for_each(&xef->exec_queue.xa, i, q)255255 xe_exec_queue_update_run_ticks(q);256256- xef->run_ticks[q->class] += q->run_ticks - q->old_run_ticks;257257- q->old_run_ticks = q->run_ticks;258258- }259256 mutex_unlock(&xef->exec_queue.lock);260257261258 /* Get the total GPU cycles */
+9-1
drivers/gpu/drm/xe/xe_exec_queue.c
···3737{3838 if (q->vm)3939 xe_vm_put(q->vm);4040+4141+ if (q->xef)4242+ xe_file_put(q->xef);4343+4044 kfree(q);4145}4246···653649 goto kill_exec_queue;654650655651 args->exec_queue_id = id;652652+ q->xef = xe_file_get(xef);656653657654 return 0;658655···767762 */768763void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)769764{765765+ struct xe_file *xef;770766 struct xe_lrc *lrc;771767 u32 old_ts, new_ts;772768···779773 if (!q->vm || !q->vm->xef)780774 return;781775776776+ xef = q->vm->xef;777777+782778 /*783779 * Only sample the first LRC. For parallel submission, all of them are784780 * scheduled together and we compensate that below by multiplying by···791783 */792784 lrc = q->lrc[0];793785 new_ts = xe_lrc_update_timestamp(lrc, &old_ts);794794- q->run_ticks += (new_ts - old_ts) * q->width;786786+ xef->run_ticks[q->class] += (new_ts - old_ts) * q->width;795787}796788797789void xe_exec_queue_kill(struct xe_exec_queue *q)
+3-4
drivers/gpu/drm/xe/xe_exec_queue_types.h
···3838 * a kernel object.3939 */4040struct xe_exec_queue {4141+ /** @xef: Back pointer to xe file if this is user created exec queue */4242+ struct xe_file *xef;4343+4144 /** @gt: graphics tile this exec queue can submit to */4245 struct xe_gt *gt;4346 /**···142139 * Protected by @vm's resv. Unused if @vm == NULL.143140 */144141 u64 tlb_flush_seqno;145145- /** @old_run_ticks: prior hw engine class run time in ticks for this exec queue */146146- u64 old_run_ticks;147147- /** @run_ticks: hw engine class run time in ticks for this exec queue */148148- u64 run_ticks;149142 /** @lrc: logical ring context for this exec queue */150143 struct xe_lrc *lrc[];151144};
···1313#include "xe_guc.h"1414#include "xe_guc_ct.h"1515#include "xe_mmio.h"1616+#include "xe_pm.h"1617#include "xe_sriov.h"1718#include "xe_trace.h"1819#include "regs/xe_guc_regs.h"2020+2121+#define FENCE_STACK_BIT DMA_FENCE_FLAG_USER_BITS19222023/*2124 * TLB inval depends on pending commands in the CT queue and then the real···3633 return hw_tlb_timeout + 2 * delay;3734}38353636+static void3737+__invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence)3838+{3939+ bool stack = test_bit(FENCE_STACK_BIT, &fence->base.flags);4040+4141+ trace_xe_gt_tlb_invalidation_fence_signal(xe, fence);4242+ xe_gt_tlb_invalidation_fence_fini(fence);4343+ dma_fence_signal(&fence->base);4444+ if (!stack)4545+ dma_fence_put(&fence->base);4646+}4747+4848+static void4949+invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence)5050+{5151+ list_del(&fence->link);5252+ __invalidation_fence_signal(xe, fence);5353+}39544055static void xe_gt_tlb_fence_timeout(struct work_struct *work)4156{···7554 xe_gt_err(gt, "TLB invalidation fence timeout, seqno=%d recv=%d",7655 fence->seqno, gt->tlb_invalidation.seqno_recv);77567878- list_del(&fence->link);7957 fence->base.error = -ETIME;8080- dma_fence_signal(&fence->base);8181- dma_fence_put(&fence->base);5858+ invalidation_fence_signal(xe, fence);8259 }8360 if (!list_empty(>->tlb_invalidation.pending_fences))8461 queue_delayed_work(system_wq,···10687 return 0;10788}10889109109-static void110110-__invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence)111111-{112112- trace_xe_gt_tlb_invalidation_fence_signal(xe, fence);113113- dma_fence_signal(&fence->base);114114- dma_fence_put(&fence->base);115115-}116116-117117-static void118118-invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fence *fence)119119-{120120- list_del(&fence->link);121121- __invalidation_fence_signal(xe, fence);122122-}123123-12490/**12591 * 
xe_gt_tlb_invalidation_reset - Initialize GT TLB invalidation reset12692 * @gt: graphics tile···115111void xe_gt_tlb_invalidation_reset(struct xe_gt *gt)116112{117113 struct xe_gt_tlb_invalidation_fence *fence, *next;118118- struct xe_guc *guc = >->uc.guc;119114 int pending_seqno;120115121116 /*···137134 else138135 pending_seqno = gt->tlb_invalidation.seqno - 1;139136 WRITE_ONCE(gt->tlb_invalidation.seqno_recv, pending_seqno);140140- wake_up_all(&guc->ct.wq);141137142138 list_for_each_entry_safe(fence, next,143139 >->tlb_invalidation.pending_fences, link)···167165 int seqno;168166 int ret;169167168168+ xe_gt_assert(gt, fence);169169+170170 /*171171 * XXX: The seqno algorithm relies on TLB invalidation being processed172172 * in order which they currently are, if that changes the algorithm will···177173178174 mutex_lock(&guc->ct.lock);179175 seqno = gt->tlb_invalidation.seqno;180180- if (fence) {181181- fence->seqno = seqno;182182- trace_xe_gt_tlb_invalidation_fence_send(xe, fence);183183- }176176+ fence->seqno = seqno;177177+ trace_xe_gt_tlb_invalidation_fence_send(xe, fence);184178 action[1] = seqno;185179 ret = xe_guc_ct_send_locked(&guc->ct, action, len,186180 G2H_LEN_DW_TLB_INVALIDATE, 1);···211209 TLB_INVALIDATION_SEQNO_MAX;212210 if (!gt->tlb_invalidation.seqno)213211 gt->tlb_invalidation.seqno = 1;214214- ret = seqno;215212 }216213 mutex_unlock(&guc->ct.lock);217214···224223/**225224 * xe_gt_tlb_invalidation_guc - Issue a TLB invalidation on this GT for the GuC226225 * @gt: graphics tile226226+ * @fence: invalidation fence which will be signal on TLB invalidation227227+ * completion227228 *228229 * Issue a TLB invalidation for the GuC. 
Completion of TLB is asynchronous and229229- * caller can use seqno + xe_gt_tlb_invalidation_wait to wait for completion.230230+ * caller can use the invalidation fence to wait for completion.230231 *231231- * Return: Seqno which can be passed to xe_gt_tlb_invalidation_wait on success,232232- * negative error code on error.232232+ * Return: 0 on success, negative error code on error233233 */234234-static int xe_gt_tlb_invalidation_guc(struct xe_gt *gt)234234+static int xe_gt_tlb_invalidation_guc(struct xe_gt *gt,235235+ struct xe_gt_tlb_invalidation_fence *fence)235236{236237 u32 action[] = {237238 XE_GUC_ACTION_TLB_INVALIDATION,···241238 MAKE_INVAL_OP(XE_GUC_TLB_INVAL_GUC),242239 };243240244244- return send_tlb_invalidation(>->uc.guc, NULL, action,241241+ return send_tlb_invalidation(>->uc.guc, fence, action,245242 ARRAY_SIZE(action));246243}247244···260257261258 if (xe_guc_ct_enabled(>->uc.guc.ct) &&262259 gt->uc.guc.submission_state.enabled) {263263- int seqno;260260+ struct xe_gt_tlb_invalidation_fence fence;261261+ int ret;264262265265- seqno = xe_gt_tlb_invalidation_guc(gt);266266- if (seqno <= 0)267267- return seqno;263263+ xe_gt_tlb_invalidation_fence_init(gt, &fence, true);264264+ ret = xe_gt_tlb_invalidation_guc(gt, &fence);265265+ if (ret < 0) {266266+ xe_gt_tlb_invalidation_fence_fini(&fence);267267+ return ret;268268+ }268269269269- xe_gt_tlb_invalidation_wait(gt, seqno);270270+ xe_gt_tlb_invalidation_fence_wait(&fence);270271 } else if (xe_device_uc_enabled(xe) && !xe_device_wedged(xe)) {271272 if (IS_SRIOV_VF(xe))272273 return 0;···297290 *298291 * @gt: graphics tile299292 * @fence: invalidation fence which will be signal on TLB invalidation300300- * completion, can be NULL293293+ * completion301294 * @start: start address302295 * @end: end address303296 * @asid: address space id304297 *305298 * Issue a range based TLB invalidation if supported, if not fallback to a full306306- * TLB invalidation. 
Completion of TLB is asynchronous and caller can either use307307- * the invalidation fence or seqno + xe_gt_tlb_invalidation_wait to wait for308308- * completion.299299+ * TLB invalidation. Completion of TLB is asynchronous and caller can use300300+ * the invalidation fence to wait for completion.309301 *310310- * Return: Seqno which can be passed to xe_gt_tlb_invalidation_wait on success,311311- * negative error code on error.302302+ * Return: Negative error code on error, 0 on success312303 */313304int xe_gt_tlb_invalidation_range(struct xe_gt *gt,314305 struct xe_gt_tlb_invalidation_fence *fence,···317312 u32 action[MAX_TLB_INVALIDATION_LEN];318313 int len = 0;319314315315+ xe_gt_assert(gt, fence);316316+320317 /* Execlists not supported */321318 if (gt_to_xe(gt)->info.force_execlist) {322322- if (fence)323323- __invalidation_fence_signal(xe, fence);324324-319319+ __invalidation_fence_signal(xe, fence);325320 return 0;326321 }327322···387382 * @vma: VMA to invalidate388383 *389384 * Issue a range based TLB invalidation if supported, if not fallback to a full390390- * TLB invalidation. Completion of TLB is asynchronous and caller can either use391391- * the invalidation fence or seqno + xe_gt_tlb_invalidation_wait to wait for392392- * completion.385385+ * TLB invalidation. 
Completion of TLB is asynchronous and caller can use386386+ * the invalidation fence to wait for completion.393387 *394394- * Return: Seqno which can be passed to xe_gt_tlb_invalidation_wait on success,395395- * negative error code on error.388388+ * Return: Negative error code on error, 0 on success396389 */397390int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,398391 struct xe_gt_tlb_invalidation_fence *fence,···401398 return xe_gt_tlb_invalidation_range(gt, fence, xe_vma_start(vma),402399 xe_vma_end(vma),403400 xe_vma_vm(vma)->usm.asid);404404-}405405-406406-/**407407- * xe_gt_tlb_invalidation_wait - Wait for TLB to complete408408- * @gt: graphics tile409409- * @seqno: seqno to wait which was returned from xe_gt_tlb_invalidation410410- *411411- * Wait for tlb_timeout_jiffies() for a TLB invalidation to complete.412412- *413413- * Return: 0 on success, -ETIME on TLB invalidation timeout414414- */415415-int xe_gt_tlb_invalidation_wait(struct xe_gt *gt, int seqno)416416-{417417- struct xe_guc *guc = >->uc.guc;418418- int ret;419419-420420- /* Execlists not supported */421421- if (gt_to_xe(gt)->info.force_execlist)422422- return 0;423423-424424- /*425425- * XXX: See above, this algorithm only works if seqno are always in426426- * order427427- */428428- ret = wait_event_timeout(guc->ct.wq,429429- tlb_invalidation_seqno_past(gt, seqno),430430- tlb_timeout_jiffies(gt));431431- if (!ret) {432432- struct drm_printer p = xe_gt_err_printer(gt);433433-434434- xe_gt_err(gt, "TLB invalidation time'd out, seqno=%d, recv=%d\n",435435- seqno, gt->tlb_invalidation.seqno_recv);436436- xe_guc_ct_print(&guc->ct, &p, true);437437- return -ETIME;438438- }439439-440440- return 0;441401}442402443403/**···446480 return 0;447481 }448482449449- /*450450- * wake_up_all() and wait_event_timeout() already have the correct451451- * barriers.452452- */453483 WRITE_ONCE(gt->tlb_invalidation.seqno_recv, msg[0]);454454- wake_up_all(&guc->ct.wq);455484456485 list_for_each_entry_safe(fence, 
next,457486 >->tlb_invalidation.pending_fences, link) {···468507 spin_unlock_irqrestore(>->tlb_invalidation.pending_lock, flags);469508470509 return 0;510510+}511511+512512+static const char *513513+invalidation_fence_get_driver_name(struct dma_fence *dma_fence)514514+{515515+ return "xe";516516+}517517+518518+static const char *519519+invalidation_fence_get_timeline_name(struct dma_fence *dma_fence)520520+{521521+ return "invalidation_fence";522522+}523523+524524+static const struct dma_fence_ops invalidation_fence_ops = {525525+ .get_driver_name = invalidation_fence_get_driver_name,526526+ .get_timeline_name = invalidation_fence_get_timeline_name,527527+};528528+529529+/**530530+ * xe_gt_tlb_invalidation_fence_init - Initialize TLB invalidation fence531531+ * @gt: GT532532+ * @fence: TLB invalidation fence to initialize533533+ * @stack: fence is stack variable534534+ *535535+ * Initialize TLB invalidation fence for use. xe_gt_tlb_invalidation_fence_fini536536+ * must be called if fence is not signaled.537537+ */538538+void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt,539539+ struct xe_gt_tlb_invalidation_fence *fence,540540+ bool stack)541541+{542542+ xe_pm_runtime_get_noresume(gt_to_xe(gt));543543+544544+ spin_lock_irq(>->tlb_invalidation.lock);545545+ dma_fence_init(&fence->base, &invalidation_fence_ops,546546+ >->tlb_invalidation.lock,547547+ dma_fence_context_alloc(1), 1);548548+ spin_unlock_irq(>->tlb_invalidation.lock);549549+ INIT_LIST_HEAD(&fence->link);550550+ if (stack)551551+ set_bit(FENCE_STACK_BIT, &fence->base.flags);552552+ else553553+ dma_fence_get(&fence->base);554554+ fence->gt = gt;555555+}556556+557557+/**558558+ * xe_gt_tlb_invalidation_fence_fini - Finalize TLB invalidation fence559559+ * @fence: TLB invalidation fence to finalize560560+ *561561+ * Drop PM ref which fence took durinig init.562562+ */563563+void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence)564564+{565565+ 
next, 457486 &gt->tlb_invalidation.pending_fences, link) {···468507 spin_unlock_irqrestore(&gt->tlb_invalidation.pending_lock, flags);469508470509 return 0;510510+}511511+512512+static const char *513513+invalidation_fence_get_driver_name(struct dma_fence *dma_fence)514514+{515515+ return "xe";516516+}517517+518518+static const char *519519+invalidation_fence_get_timeline_name(struct dma_fence *dma_fence)520520+{521521+ return "invalidation_fence";522522+}523523+524524+static const struct dma_fence_ops invalidation_fence_ops = {525525+ .get_driver_name = invalidation_fence_get_driver_name,526526+ .get_timeline_name = invalidation_fence_get_timeline_name,527527+};528528+529529+/**530530+ * xe_gt_tlb_invalidation_fence_init - Initialize TLB invalidation fence531531+ * @gt: GT532532+ * @fence: TLB invalidation fence to initialize533533+ * @stack: fence is stack variable534534+ *535535+ * Initialize TLB invalidation fence for use. xe_gt_tlb_invalidation_fence_fini536536+ * must be called if fence is not signaled.537537+ */538538+void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt,539539+ struct xe_gt_tlb_invalidation_fence *fence,540540+ bool stack)541541+{542542+ xe_pm_runtime_get_noresume(gt_to_xe(gt));543543+544544+ spin_lock_irq(&gt->tlb_invalidation.lock);545545+ dma_fence_init(&fence->base, &invalidation_fence_ops,546546+ &gt->tlb_invalidation.lock,547547+ dma_fence_context_alloc(1), 1);548548+ spin_unlock_irq(&gt->tlb_invalidation.lock);549549+ INIT_LIST_HEAD(&fence->link);550550+ if (stack)551551+ set_bit(FENCE_STACK_BIT, &fence->base.flags);552552+ else553553+ dma_fence_get(&fence->base);554554+ fence->gt = gt;555555+}556556+557557+/**558558+ * xe_gt_tlb_invalidation_fence_fini - Finalize TLB invalidation fence559559+ * @fence: TLB invalidation fence to finalize560560+ *561561+ * Drop PM ref which fence took during init.562562+ */563563+void xe_gt_tlb_invalidation_fence_fini(struct xe_gt_tlb_invalidation_fence *fence)564564+{565565+
xe_pm_runtime_put(gt_to_xe(fence->gt));471566}
@@ -1601 +1601 @@
 	XE_WARN_ON(vm->pt_root[id]);
 
 	trace_xe_vm_free(vm);
+
+	if (vm->xef)
+		xe_file_put(vm->xef);
+
 	kfree(vm);
 }
@@ -1920 +1916 @@
 	}
 
 	args->vm_id = id;
-	vm->xef = xef;
+	vm->xef = xe_file_get(xef);
 
 	/* Record BO memory for VM pagetable created against client */
 	for_each_tile(tile, xe, id)
@@ -3341 +3337 @@
 {
 	struct xe_device *xe = xe_vma_vm(vma)->xe;
 	struct xe_tile *tile;
+	struct xe_gt_tlb_invalidation_fence fence[XE_MAX_TILES_PER_DEVICE];
 	u32 tile_needs_invalidate = 0;
-	int seqno[XE_MAX_TILES_PER_DEVICE];
 	u8 id;
-	int ret;
+	int ret = 0;
 
 	xe_assert(xe, !xe_vma_is_null(vma));
 	trace_xe_vma_invalidate(vma);
@@ -3369 +3365 @@
 
 	for_each_tile(tile, xe, id) {
 		if (xe_pt_zap_ptes(tile, vma)) {
-			tile_needs_invalidate |= BIT(id);
 			xe_device_wmb(xe);
+			xe_gt_tlb_invalidation_fence_init(tile->primary_gt,
+							  &fence[id], true);
+
 			/*
 			 * FIXME: We potentially need to invalidate multiple
 			 * GTs within the tile
 			 */
-			seqno[id] = xe_gt_tlb_invalidation_vma(tile->primary_gt, NULL, vma);
-			if (seqno[id] < 0)
-				return seqno[id];
+			ret = xe_gt_tlb_invalidation_vma(tile->primary_gt,
+							 &fence[id], vma);
+			if (ret < 0) {
+				xe_gt_tlb_invalidation_fence_fini(&fence[id]);
+				goto wait;
+			}
+
+			tile_needs_invalidate |= BIT(id);
 		}
 	}
 
-	for_each_tile(tile, xe, id) {
-		if (tile_needs_invalidate & BIT(id)) {
-			ret = xe_gt_tlb_invalidation_wait(tile->primary_gt, seqno[id]);
-			if (ret < 0)
-				return ret;
-		}
-	}
+wait:
+	for_each_tile(tile, xe, id)
+		if (tile_needs_invalidate & BIT(id))
+			xe_gt_tlb_invalidation_fence_wait(&fence[id]);
 
 	vma->tile_invalidated = vma->tile_mask;
 
-	return 0;
+	return ret;
 }
 
 struct xe_vm_snapshot {
drivers/i2c/busses/i2c-qcom-geni.c | +3 -1

@@ -986 +986 @@
 		return ret;
 
 	ret = clk_prepare_enable(gi2c->core_clk);
-	if (ret)
+	if (ret) {
+		geni_icc_disable(&gi2c->se);
 		return ret;
+	}
 
 	ret = geni_se_resources_on(&gi2c->se);
 	if (ret) {
drivers/i2c/busses/i2c-tegra.c | +2 -2

@@ -1802 +1802 @@
 	 * domain.
 	 *
 	 * VI I2C device shouldn't be marked as IRQ-safe because VI I2C won't
-	 * be used for atomic transfers.
+	 * be used for atomic transfers. ACPI device is not IRQ safe also.
 	 */
-	if (!IS_VI(i2c_dev))
+	if (!IS_VI(i2c_dev) && !has_acpi_companion(i2c_dev->dev))
 		pm_runtime_irq_safe(i2c_dev->dev);
 
 	pm_runtime_enable(i2c_dev->dev);
drivers/iommu/io-pgfault.c | +1

@@ -170 +170 @@
 		report_partial_fault(iopf_param, fault);
 		iopf_put_dev_fault_param(iopf_param);
 		/* A request that is not the last does not need to be ack'd */
+		return;
 	}
 
 	/*
drivers/md/dm-ioctl.c | +20 -2

@@ -1181 +1181 @@
 		suspend_flags &= ~DM_SUSPEND_LOCKFS_FLAG;
 	if (param->flags & DM_NOFLUSH_FLAG)
 		suspend_flags |= DM_SUSPEND_NOFLUSH_FLAG;
-	if (!dm_suspended_md(md))
-		dm_suspend(md, suspend_flags);
+	if (!dm_suspended_md(md)) {
+		r = dm_suspend(md, suspend_flags);
+		if (r) {
+			down_write(&_hash_lock);
+			hc = dm_get_mdptr(md);
+			if (hc && !hc->new_map) {
+				hc->new_map = new_map;
+				new_map = NULL;
+			} else {
+				r = -ENXIO;
+			}
+			up_write(&_hash_lock);
+			if (new_map) {
+				dm_sync_table(md);
+				dm_table_destroy(new_map);
+			}
+			dm_put(md);
+			return r;
+		}
+	}
 
 	old_size = dm_get_size(md);
 	old_map = dm_swap_table(md, new_map);
@@ -617 +617 @@
 	return -1;
 }
 
+static bool rdev_in_recovery(struct md_rdev *rdev, struct r1bio *r1_bio)
+{
+	return !test_bit(In_sync, &rdev->flags) &&
+	       rdev->recovery_offset < r1_bio->sector + r1_bio->sectors;
+}
+
 static int choose_bb_rdev(struct r1conf *conf, struct r1bio *r1_bio,
 			  int *max_sectors)
 {
@@ -641 +635 @@
 
 		rdev = conf->mirrors[disk].rdev;
 		if (!rdev || test_bit(Faulty, &rdev->flags) ||
+		    rdev_in_recovery(rdev, r1_bio) ||
 		    test_bit(WriteMostly, &rdev->flags))
 			continue;
 
@@ -680 +673 @@
 
 		rdev = conf->mirrors[disk].rdev;
 		if (!rdev || test_bit(Faulty, &rdev->flags) ||
-		    !test_bit(WriteMostly, &rdev->flags))
+		    !test_bit(WriteMostly, &rdev->flags) ||
+		    rdev_in_recovery(rdev, r1_bio))
 			continue;
 
 		/* there are no bad blocks, we can use this disk */
@@ -741 +733 @@
 	if (!rdev || test_bit(Faulty, &rdev->flags))
 		return false;
 
-	/* still in recovery */
-	if (!test_bit(In_sync, &rdev->flags) &&
-	    rdev->recovery_offset < r1_bio->sector + r1_bio->sectors)
+	if (rdev_in_recovery(rdev, r1_bio))
 		return false;
 
 	/* don't read from slow disk unless have to */
drivers/media/usb/dvb-usb/dvb-usb-init.c | +4 -31

@@ -23 +23 @@
 module_param_named(force_pid_filter_usage, dvb_usb_force_pid_filter_usage, int, 0444);
 MODULE_PARM_DESC(force_pid_filter_usage, "force all dvb-usb-devices to use a PID filter, if any (default: 0).");
 
-static int dvb_usb_check_bulk_endpoint(struct dvb_usb_device *d, u8 endpoint)
-{
-	if (endpoint) {
-		int ret;
-
-		ret = usb_pipe_type_check(d->udev, usb_sndbulkpipe(d->udev, endpoint));
-		if (ret)
-			return ret;
-		ret = usb_pipe_type_check(d->udev, usb_rcvbulkpipe(d->udev, endpoint));
-		if (ret)
-			return ret;
-	}
-	return 0;
-}
-
-static void dvb_usb_clear_halt(struct dvb_usb_device *d, u8 endpoint)
-{
-	if (endpoint) {
-		usb_clear_halt(d->udev, usb_sndbulkpipe(d->udev, endpoint));
-		usb_clear_halt(d->udev, usb_rcvbulkpipe(d->udev, endpoint));
-	}
-}
-
 static int dvb_usb_adapter_init(struct dvb_usb_device *d, short *adapter_nrs)
 {
 	struct dvb_usb_adapter *adap;
 	int ret, n, o;
 
-	ret = dvb_usb_check_bulk_endpoint(d, d->props.generic_bulk_ctrl_endpoint);
-	if (ret)
-		return ret;
-	ret = dvb_usb_check_bulk_endpoint(d, d->props.generic_bulk_ctrl_endpoint_response);
-	if (ret)
-		return ret;
 	for (n = 0; n < d->props.num_adapters; n++) {
 		adap = &d->adapter[n];
 		adap->dev = d;
@@ -103 +132 @@
 	 * when reloading the driver w/o replugging the device
 	 * sometimes a timeout occurs, this helps
 	 */
-	dvb_usb_clear_halt(d, d->props.generic_bulk_ctrl_endpoint);
-	dvb_usb_clear_halt(d, d->props.generic_bulk_ctrl_endpoint_response);
+	if (d->props.generic_bulk_ctrl_endpoint != 0) {
+		usb_clear_halt(d->udev, usb_sndbulkpipe(d->udev, d->props.generic_bulk_ctrl_endpoint));
+		usb_clear_halt(d->udev, usb_rcvbulkpipe(d->udev, d->props.generic_bulk_ctrl_endpoint));
+	}
 
 	return 0;
drivers/misc/fastrpc.c | +3 -19

@@ -2085 +2085 @@
 	return err;
 }
 
-static int is_attach_rejected(struct fastrpc_user *fl)
-{
-	/* Check if the device node is non-secure */
-	if (!fl->is_secure_dev) {
-		dev_dbg(&fl->cctx->rpdev->dev, "untrusted app trying to attach to privileged DSP PD\n");
-		return -EACCES;
-	}
-	return 0;
-}
-
 static long fastrpc_device_ioctl(struct file *file, unsigned int cmd,
 				 unsigned long arg)
 {
@@ -2097 +2107 @@
 		err = fastrpc_invoke(fl, argp);
 		break;
 	case FASTRPC_IOCTL_INIT_ATTACH:
-		err = is_attach_rejected(fl);
-		if (!err)
-			err = fastrpc_init_attach(fl, ROOT_PD);
+		err = fastrpc_init_attach(fl, ROOT_PD);
 		break;
 	case FASTRPC_IOCTL_INIT_ATTACH_SNS:
-		err = is_attach_rejected(fl);
-		if (!err)
-			err = fastrpc_init_attach(fl, SENSORS_PD);
+		err = fastrpc_init_attach(fl, SENSORS_PD);
 		break;
 	case FASTRPC_IOCTL_INIT_CREATE_STATIC:
-		err = is_attach_rejected(fl);
-		if (!err)
-			err = fastrpc_init_create_static_process(fl, argp);
+		err = fastrpc_init_create_static_process(fl, argp);
 		break;
 	case FASTRPC_IOCTL_INIT_CREATE:
 		err = fastrpc_init_create_process(fl, argp);
drivers/misc/lkdtm/refcount.c | +16

@@ -182 +182 @@
 	check_negative(&neg, 3);
 }
 
+/*
+ * A refcount_sub_and_test() by zero when the counter is at zero should act like
+ * refcount_sub_and_test() above when going negative.
+ */
+static void lkdtm_REFCOUNT_SUB_AND_TEST_ZERO(void)
+{
+	refcount_t neg = REFCOUNT_INIT(0);
+
+	pr_info("attempting bad refcount_sub_and_test() at zero\n");
+	if (refcount_sub_and_test(0, &neg))
+		pr_warn("Weird: refcount_sub_and_test() reported zero\n");
+
+	check_negative(&neg, 0);
+}
+
 static void check_from_zero(refcount_t *ref)
 {
 	switch (refcount_read(ref)) {
@@ -415 +400 @@
 	CRASHTYPE(REFCOUNT_DEC_NEGATIVE),
 	CRASHTYPE(REFCOUNT_DEC_AND_TEST_NEGATIVE),
 	CRASHTYPE(REFCOUNT_SUB_AND_TEST_NEGATIVE),
+	CRASHTYPE(REFCOUNT_SUB_AND_TEST_ZERO),
 	CRASHTYPE(REFCOUNT_INC_ZERO),
 	CRASHTYPE(REFCOUNT_ADD_ZERO),
 	CRASHTYPE(REFCOUNT_INC_SATURATED),
drivers/net/dsa/vitesse-vsc73xx-core.c | +41 -13

@@ -40 +40 @@
 #define VSC73XX_BLOCK_ARBITER	0x5 /* Only subblock 0 */
 #define VSC73XX_BLOCK_SYSTEM	0x7 /* Only subblock 0 */
 
+/* MII Block subblock */
+#define VSC73XX_BLOCK_MII_INTERNAL	0x0 /* Internal MDIO subblock */
+#define VSC73XX_BLOCK_MII_EXTERNAL	0x1 /* External MDIO subblock */
+
 #define CPU_PORT	6 /* CPU port */
 
 /* MAC Block registers */
@@ -229 +225 @@
 #define VSC73XX_MII_CMD		0x1
 #define VSC73XX_MII_DATA	0x2
 
+#define VSC73XX_MII_STAT_BUSY	BIT(3)
+
 /* Arbiter block 5 registers */
 #define VSC73XX_ARBEMPTY	0x0c
 #define VSC73XX_ARBDISC		0x0e
@@ -305 +299 @@
 #define IS_739X(a) (IS_7395(a) || IS_7398(a))
 
 #define VSC73XX_POLL_SLEEP_US		1000
+#define VSC73XX_MDIO_POLL_SLEEP_US	5
 #define VSC73XX_POLL_TIMEOUT_US		10000
 
 struct vsc73xx_counter {
@@ -534 +527 @@
 	return 0;
 }
 
+static int vsc73xx_mdio_busy_check(struct vsc73xx *vsc)
+{
+	int ret, err;
+	u32 val;
+
+	ret = read_poll_timeout(vsc73xx_read, err,
+				err < 0 || !(val & VSC73XX_MII_STAT_BUSY),
+				VSC73XX_MDIO_POLL_SLEEP_US,
+				VSC73XX_POLL_TIMEOUT_US, false, vsc,
+				VSC73XX_BLOCK_MII, VSC73XX_BLOCK_MII_INTERNAL,
+				VSC73XX_MII_STAT, &val);
+	if (ret)
+		return ret;
+	return err;
+}
+
 static int vsc73xx_phy_read(struct dsa_switch *ds, int phy, int regnum)
 {
 	struct vsc73xx *vsc = ds->priv;
@@ -557 +534 @@
 	u32 val;
 	int ret;
 
+	ret = vsc73xx_mdio_busy_check(vsc);
+	if (ret)
+		return ret;
+
 	/* Setting bit 26 means "read" */
 	cmd = BIT(26) | (phy << 21) | (regnum << 16);
 	ret = vsc73xx_write(vsc, VSC73XX_BLOCK_MII, 0, 1, cmd);
 	if (ret)
 		return ret;
-	msleep(2);
+
+	ret = vsc73xx_mdio_busy_check(vsc);
+	if (ret)
+		return ret;
+
 	ret = vsc73xx_read(vsc, VSC73XX_BLOCK_MII, 0, 2, &val);
 	if (ret)
 		return ret;
@@ -594 +563 @@
 	u32 cmd;
 	int ret;
 
-	/* It was found through tedious experiments that this router
-	 * chip really hates to have it's PHYs reset. They
-	 * never recover if that happens: autonegotiation stops
-	 * working after a reset. Just filter out this command.
-	 * (Resetting the whole chip is OK.)
-	 */
-	if (regnum == 0 && (val & BIT(15))) {
-		dev_info(vsc->dev, "reset PHY - disallowed\n");
-		return 0;
-	}
+	ret = vsc73xx_mdio_busy_check(vsc);
+	if (ret)
+		return ret;
 
-	cmd = (phy << 21) | (regnum << 16);
+	cmd = (phy << 21) | (regnum << 16) | val;
 	ret = vsc73xx_write(vsc, VSC73XX_BLOCK_MII, 0, 1, cmd);
 	if (ret)
 		return ret;
@@ -981 +957 @@
 
 	if (duplex == DUPLEX_FULL)
 		val |= VSC73XX_MAC_CFG_FDX;
+	else
+		/* In datasheet description ("Port Mode Procedure" in 5.6.2)
+		 * this bit is configured only for half duplex.
+		 */
+		val |= VSC73XX_MAC_CFG_WEXC_DIS;
 
 	/* This routine is described in the datasheet (below ARBDISC register
 	 * description)
@@ -996 +967 @@
 	get_random_bytes(&seed, 1);
 	val |= seed << VSC73XX_MAC_CFG_SEED_OFFSET;
 	val |= VSC73XX_MAC_CFG_SEED_LOAD;
-	val |= VSC73XX_MAC_CFG_WEXC_DIS;
 
 	/* Those bits are responsible for MTU only. Kernel takes care about MTU,
 	 * let's enable +8 bytes frame length unconditionally.
drivers/net/ethernet/cadence/macb_main.c | +2 -2

@@ -5250 +5250 @@
 	if (bp->wol & MACB_WOL_ENABLED) {
 		/* Check for IP address in WOL ARP mode */
 		idev = __in_dev_get_rcu(bp->dev);
-		if (idev && idev->ifa_list)
-			ifa = rcu_access_pointer(idev->ifa_list);
+		if (idev)
+			ifa = rcu_dereference(idev->ifa_list);
 		if ((bp->wolopts & WAKE_ARP) && !ifa) {
 			netdev_err(netdev, "IP address not assigned as required by WoL walk ARP\n");
 			return -EOPNOTSUPP;
@@ -191 +191 @@
 	if (ret)
 		netdev_err(netdev, "failed to adjust link.\n");
 
+	hdev->hw.mac.req_speed = (u32)speed;
+	hdev->hw.mac.req_duplex = (u8)duplex;
+
 	ret = hclge_cfg_flowctrl(hdev);
 	if (ret)
 		netdev_err(netdev, "failed to configure flow control.\n");
@@ -5167 +5167 @@
 #endif
 };
 
-static u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout)
-{
-	int i;
-
-	/* The supported periods are organized in ascending order */
-	for (i = 0; i < MLX5E_LRO_TIMEOUT_ARR_SIZE - 1; i++)
-		if (MLX5_CAP_ETH(mdev, lro_timer_supported_periods[i]) >= wanted_timeout)
-			break;
-
-	return MLX5_CAP_ETH(mdev, lro_timer_supported_periods[i]);
-}
-
 void mlx5e_build_nic_params(struct mlx5e_priv *priv, struct mlx5e_xsk *xsk, u16 mtu)
 {
 	struct mlx5e_params *params = &priv->channels.params;
@@ -5296 +5308 @@
 	struct mlx5e_rq_stats *rq_stats;
 
 	ASSERT_RTNL();
-	if (mlx5e_is_uplink_rep(priv))
+	if (mlx5e_is_uplink_rep(priv) || !priv->stats_nch)
 		return;
 
 	channel_stats = priv->channel_stats[i];
@@ -5316 +5328 @@
 	struct mlx5e_sq_stats *sq_stats;
 
 	ASSERT_RTNL();
+	if (!priv->stats_nch)
+		return;
+
 	/* no special case needed for ptp htb etc since txq2sq_stats is kept up
 	 * to date for active sq_stats, otherwise get_base_stats takes care of
 	 * inactive sqs.
drivers/net/ethernet/mellanox/mlx5/core/lib/sd.c | +9 -9

@@ -126 +126 @@
 }
 
 static int mlx5_query_sd(struct mlx5_core_dev *dev, bool *sdm,
-			 u8 *host_buses, u8 *sd_group)
+			 u8 *host_buses)
 {
 	u32 out[MLX5_ST_SZ_DW(mpir_reg)];
 	int err;
 
 	err = mlx5_query_mpir_reg(dev, out);
-	if (err)
-		return err;
-
-	err = mlx5_query_nic_vport_sd_group(dev, sd_group);
 	if (err)
 		return err;
 
@@ -162 +166 @@
 	if (mlx5_core_is_ecpf(dev))
 		return 0;
 
+	err = mlx5_query_nic_vport_sd_group(dev, &sd_group);
+	if (err)
+		return err;
+
+	if (!sd_group)
+		return 0;
+
 	if (!MLX5_CAP_MCAM_REG(dev, mpir))
 		return 0;
 
-	err = mlx5_query_sd(dev, &sdm, &host_buses, &sd_group);
+	err = mlx5_query_sd(dev, &sdm, &host_buses);
 	if (err)
 		return err;
 
 	if (!sdm)
-		return 0;
-
-	if (!sd_group)
 		return 0;
 
 	group_id = mlx5_sd_group_id(dev, sd_group);
@@ -168 +168 @@
 	if (err)
 		goto napi_deinit;
 
+	mlxbf_gige_enable_mac_rx_filter(priv, MLXBF_GIGE_BCAST_MAC_FILTER_IDX);
+	mlxbf_gige_enable_mac_rx_filter(priv, MLXBF_GIGE_LOCAL_MAC_FILTER_IDX);
+	mlxbf_gige_enable_multicast_rx(priv);
+
 	/* Set bits in INT_EN that we care about */
 	int_en = MLXBF_GIGE_INT_EN_HW_ACCESS_ERROR |
 		 MLXBF_GIGE_INT_EN_TX_CHECKSUM_INPUTS |
@@ -383 +379 @@
 	void __iomem *plu_base;
 	void __iomem *base;
 	int addr, phy_irq;
+	unsigned int i;
 	int err;
 
 	base = devm_platform_ioremap_resource(pdev, MLXBF_GIGE_RES_MAC);
@@ -427 +422 @@
 
 	priv->rx_q_entries = MLXBF_GIGE_DEFAULT_RXQ_SZ;
 	priv->tx_q_entries = MLXBF_GIGE_DEFAULT_TXQ_SZ;
+
+	for (i = 0; i <= MLXBF_GIGE_MAX_FILTER_IDX; i++)
+		mlxbf_gige_disable_mac_rx_filter(priv, i);
+	mlxbf_gige_disable_multicast_rx(priv);
+	mlxbf_gige_disable_promisc(priv);
 
 	/* Write initial MAC address to hardware */
 	mlxbf_gige_initial_mac(priv);
@@ -11 +11 @@
 #include "mlxbf_gige.h"
 #include "mlxbf_gige_regs.h"
 
-void mlxbf_gige_set_mac_rx_filter(struct mlxbf_gige *priv,
-				  unsigned int index, u64 dmac)
+void mlxbf_gige_enable_multicast_rx(struct mlxbf_gige *priv)
+{
+	void __iomem *base = priv->base;
+	u64 data;
+
+	data = readq(base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL);
+	data |= MLXBF_GIGE_RX_MAC_FILTER_EN_MULTICAST;
+	writeq(data, base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL);
+}
+
+void mlxbf_gige_disable_multicast_rx(struct mlxbf_gige *priv)
+{
+	void __iomem *base = priv->base;
+	u64 data;
+
+	data = readq(base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL);
+	data &= ~MLXBF_GIGE_RX_MAC_FILTER_EN_MULTICAST;
+	writeq(data, base + MLXBF_GIGE_RX_MAC_FILTER_GENERAL);
+}
+
+void mlxbf_gige_enable_mac_rx_filter(struct mlxbf_gige *priv,
+				     unsigned int index)
 {
 	void __iomem *base = priv->base;
 	u64 control;
-
-	/* Write destination MAC to specified MAC RX filter */
-	writeq(dmac, base + MLXBF_GIGE_RX_MAC_FILTER +
-	       (index * MLXBF_GIGE_RX_MAC_FILTER_STRIDE));
 
 	/* Enable MAC receive filter mask for specified index */
 	control = readq(base + MLXBF_GIGE_CONTROL);
 	control |= (MLXBF_GIGE_CONTROL_EN_SPECIFIC_MAC << index);
 	writeq(control, base + MLXBF_GIGE_CONTROL);
+}
+
+void mlxbf_gige_disable_mac_rx_filter(struct mlxbf_gige *priv,
+				      unsigned int index)
+{
+	void __iomem *base = priv->base;
+	u64 control;
+
+	/* Disable MAC receive filter mask for specified index */
+	control = readq(base + MLXBF_GIGE_CONTROL);
+	control &= ~(MLXBF_GIGE_CONTROL_EN_SPECIFIC_MAC << index);
+	writeq(control, base + MLXBF_GIGE_CONTROL);
+}
+
+void mlxbf_gige_set_mac_rx_filter(struct mlxbf_gige *priv,
+				  unsigned int index, u64 dmac)
+{
+	void __iomem *base = priv->base;
+
+	/* Write destination MAC to specified MAC RX filter */
+	writeq(dmac, base + MLXBF_GIGE_RX_MAC_FILTER +
+	       (index * MLXBF_GIGE_RX_MAC_FILTER_STRIDE));
 }
 
 void mlxbf_gige_get_mac_rx_filter(struct mlxbf_gige *priv,
drivers/net/ethernet/microsoft/mana/mana_en.c | +19 -9

@@ -599 +599 @@
 	else
 		*headroom = XDP_PACKET_HEADROOM;
 
-	*alloc_size = mtu + MANA_RXBUF_PAD + *headroom;
+	*alloc_size = SKB_DATA_ALIGN(mtu + MANA_RXBUF_PAD + *headroom);
+
+	/* Using page pool in this case, so alloc_size is PAGE_SIZE */
+	if (*alloc_size < PAGE_SIZE)
+		*alloc_size = PAGE_SIZE;
 
 	*datasize = mtu + ETH_HLEN;
 }
@@ -1792 +1788 @@
 static int mana_cq_handler(void *context, struct gdma_queue *gdma_queue)
 {
 	struct mana_cq *cq = context;
-	u8 arm_bit;
 	int w;
 
 	WARN_ON_ONCE(cq->gdma_cq != gdma_queue);
@@ -1802 +1799 @@
 		mana_poll_tx_cq(cq);
 
 	w = cq->work_done;
+	cq->work_done_since_doorbell += w;
 
-	if (w < cq->budget &&
-	    napi_complete_done(&cq->napi, w)) {
-		arm_bit = SET_ARM_BIT;
-	} else {
-		arm_bit = 0;
+	if (w < cq->budget) {
+		mana_gd_ring_cq(gdma_queue, SET_ARM_BIT);
+		cq->work_done_since_doorbell = 0;
+		napi_complete_done(&cq->napi, w);
+	} else if (cq->work_done_since_doorbell >
+		   cq->gdma_cq->queue_size / COMP_ENTRY_SIZE * 4) {
+		/* MANA hardware requires at least one doorbell ring every 8
+		 * wraparounds of CQ even if there is no need to arm the CQ.
+		 * This driver rings the doorbell as soon as we have exceeded
+		 * 4 wraparounds.
+		 */
+		mana_gd_ring_cq(gdma_queue, 0);
+		cq->work_done_since_doorbell = 0;
 	}
-
-	mana_gd_ring_cq(gdma_queue, arm_bit);
 
 	return w;
 }
drivers/net/ethernet/xilinx/xilinx_axienet.h | +8 -8

@@ -160 +160 @@
 #define XAE_RCW1_OFFSET		0x00000404 /* Rx Configuration Word 1 */
 #define XAE_TC_OFFSET		0x00000408 /* Tx Configuration */
 #define XAE_FCC_OFFSET		0x0000040C /* Flow Control Configuration */
-#define XAE_EMMC_OFFSET		0x00000410 /* EMAC mode configuration */
-#define XAE_PHYC_OFFSET		0x00000414 /* RGMII/SGMII configuration */
+#define XAE_EMMC_OFFSET		0x00000410 /* MAC speed configuration */
+#define XAE_PHYC_OFFSET		0x00000414 /* RX Max Frame Configuration */
 #define XAE_ID_OFFSET		0x000004F8 /* Identification register */
-#define XAE_MDIO_MC_OFFSET	0x00000500 /* MII Management Config */
-#define XAE_MDIO_MCR_OFFSET	0x00000504 /* MII Management Control */
-#define XAE_MDIO_MWD_OFFSET	0x00000508 /* MII Management Write Data */
-#define XAE_MDIO_MRD_OFFSET	0x0000050C /* MII Management Read Data */
+#define XAE_MDIO_MC_OFFSET	0x00000500 /* MDIO Setup */
+#define XAE_MDIO_MCR_OFFSET	0x00000504 /* MDIO Control */
+#define XAE_MDIO_MWD_OFFSET	0x00000508 /* MDIO Write Data */
+#define XAE_MDIO_MRD_OFFSET	0x0000050C /* MDIO Read Data */
 #define XAE_UAW0_OFFSET		0x00000700 /* Unicast address word 0 */
 #define XAE_UAW1_OFFSET		0x00000704 /* Unicast address word 1 */
-#define XAE_FMI_OFFSET		0x00000708 /* Filter Mask Index */
+#define XAE_FMI_OFFSET		0x00000708 /* Frame Filter Control */
 #define XAE_AF0_OFFSET		0x00000710 /* Address Filter 0 */
 #define XAE_AF1_OFFSET		0x00000714 /* Address Filter 1 */
 
@@ -308 +308 @@
  */
 #define XAE_UAW1_UNICASTADDR_MASK	0x0000FFFF
 
-/* Bit masks for Axi Ethernet FMI register */
+/* Bit masks for Axi Ethernet FMC register */
 #define XAE_FMI_PM_MASK			0x80000000 /* Promis. mode enable */
 #define XAE_FMI_IND_MASK		0x00000003 /* Index Mask */
drivers/net/gtp.c | +3

@@ -1269 +1269 @@
 	if (skb_cow_head(skb, dev->needed_headroom))
 		goto tx_err;
 
+	if (!pskb_inet_may_pull(skb))
+		goto tx_err;
+
 	skb_reset_inner_headers(skb);
 
 	/* PDP context lookups in gtp_build_skb_*() need rcu read-side lock. */
drivers/net/phy/vitesse.c | -14

@@ -237 +237 @@
 	return 0;
 }
 
-static int vsc73xx_config_aneg(struct phy_device *phydev)
-{
-	/* The VSC73xx switches does not like to be instructed to
-	 * do autonegotiation in any way, it prefers that you just go
-	 * with the power-on/reset defaults. Writing some registers will
-	 * just make autonegotiation permanently fail.
-	 */
-	return 0;
-}
-
 /* This adds a skew for both TX and RX clocks, so the skew should only be
  * applied to "rgmii-id" interfaces. It may not work as expected
  * on "rgmii-txid", "rgmii-rxid" or "rgmii" interfaces.
@@ -434 +444 @@
 	.phy_id_mask	= 0x000ffff0,
 	/* PHY_GBIT_FEATURES */
 	.config_init	= vsc738x_config_init,
-	.config_aneg	= vsc73xx_config_aneg,
 	.read_page	= vsc73xx_read_page,
 	.write_page	= vsc73xx_write_page,
 }, {
@@ -442 +453 @@
 	.phy_id_mask	= 0x000ffff0,
 	/* PHY_GBIT_FEATURES */
 	.config_init	= vsc738x_config_init,
-	.config_aneg	= vsc73xx_config_aneg,
 	.read_page	= vsc73xx_read_page,
 	.write_page	= vsc73xx_write_page,
 }, {
@@ -450 +462 @@
 	.phy_id_mask	= 0x000ffff0,
 	/* PHY_GBIT_FEATURES */
 	.config_init	= vsc739x_config_init,
-	.config_aneg	= vsc73xx_config_aneg,
 	.read_page	= vsc73xx_read_page,
 	.write_page	= vsc73xx_write_page,
 }, {
@@ -458 +471 @@
 	.phy_id_mask	= 0x000ffff0,
 	/* PHY_GBIT_FEATURES */
 	.config_init	= vsc739x_config_init,
-	.config_aneg	= vsc73xx_config_aneg,
 	.read_page	= vsc73xx_read_page,
 	.write_page	= vsc73xx_write_page,
 }, {
@@ -639 +639 @@
 int iwl_pcie_txq_alloc(struct iwl_trans *trans, struct iwl_txq *txq,
 		       int slots_num, bool cmd_queue);
 
-dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, void *addr);
+dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset,
+				    unsigned int len);
 struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb,
 				   struct iwl_cmd_meta *cmd_meta,
 				   u8 **hdr, unsigned int hdr_room);
drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c | +4 -1

@@ -168 +168 @@
 	struct ieee80211_hdr *hdr = (void *)skb->data;
 	unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room;
 	unsigned int mss = skb_shinfo(skb)->gso_size;
+	unsigned int data_offset = 0;
 	dma_addr_t start_hdr_phys;
 	u16 length, amsdu_pad;
 	u8 *start_hdr;
@@ -261 +260 @@
 			int ret;
 
 			tb_len = min_t(unsigned int, tso.size, data_left);
-			tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, tso.data);
+			tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, data_offset,
+							   tb_len);
 			/* Not a real mapping error, use direct comparison */
 			if (unlikely(tb_phys == DMA_MAPPING_ERROR))
 				goto out_err;
@@ -274 +272 @@
 				goto out_err;
 
 			data_left -= tb_len;
+			data_offset += tb_len;
 			tso_build_data(skb, &tso, tb_len);
 		}
 	}
drivers/net/wireless/intel/iwlwifi/pcie/tx.c | +22 -10

@@ -1814 +1814 @@
 /**
  * iwl_pcie_get_sgt_tb_phys - Find TB address in mapped SG list
  * @sgt: scatter gather table
- * @addr: Virtual address
+ * @offset: Offset into the mapped memory (i.e. SKB payload data)
+ * @len: Length of the area
  *
- * Find the entry that includes the address for the given address and return
- * correct physical address for the TB entry.
+ * Find the DMA address that corresponds to the SKB payload data at the
+ * position given by @offset.
  *
  * Returns: Address for TB entry
  */
-dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, void *addr)
+dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset,
+				    unsigned int len)
 {
 	struct scatterlist *sg;
+	unsigned int sg_offset = 0;
 	int i;
 
+	/*
+	 * Search the mapped DMA areas in the SG for the area that contains the
+	 * data at offset with the given length.
+	 */
 	for_each_sgtable_dma_sg(sgt, sg, i) {
-		if (addr >= sg_virt(sg) &&
-		    (u8 *)addr < (u8 *)sg_virt(sg) + sg_dma_len(sg))
-			return sg_dma_address(sg) +
-			       ((unsigned long)addr - (unsigned long)sg_virt(sg));
+		if (offset >= sg_offset &&
+		    offset + len <= sg_offset + sg_dma_len(sg))
+			return sg_dma_address(sg) + offset - sg_offset;
+
+		sg_offset += sg_dma_len(sg);
 	}
 
 	WARN_ON_ONCE(1);
@@ -1883 +1875 @@
 
 	sg_init_table(sgt->sgl, skb_shinfo(skb)->nr_frags + 1);
 
-	sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, 0, skb->len);
+	/* Only map the data, not the header (it is copied to the TSO page) */
+	sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, skb_headlen(skb),
+				       skb->data_len);
 	if (WARN_ON_ONCE(sgt->orig_nents <= 0))
 		return NULL;
 
@@ -1910 +1900 @@
 	struct ieee80211_hdr *hdr = (void *)skb->data;
 	unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room;
 	unsigned int mss = skb_shinfo(skb)->gso_size;
+	unsigned int data_offset = 0;
 	u16 length, iv_len, amsdu_pad;
 	dma_addr_t start_hdr_phys;
 	u8 *start_hdr, *pos_hdr;
@@ -2011 +2000 @@
 					      data_left);
 			dma_addr_t tb_phys;
 
-			tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, tso.data);
+			tb_phys = iwl_pcie_get_sgt_tb_phys(sgt, data_offset, size);
 			/* Not a real mapping error, use direct comparison */
 			if (unlikely(tb_phys == DMA_MAPPING_ERROR))
 				return -EINVAL;
@@ -2022 +2011 @@
 				      tb_phys, size);
 
 			data_left -= size;
+			data_offset += size;
 			tso_build_data(skb, &tso, size);
 		}
 	}
@@ -498 +498 @@
 	}
 	if (fua)
 		lim.features |= BLK_FEAT_FUA;
-	if (is_nd_pfn(dev))
+	if (is_nd_pfn(dev) || pmem_should_map_pages(dev))
 		lim.features |= BLK_FEAT_DAX;
 
 	if (!devm_request_mem_region(dev, res->start, resource_size(res),
drivers/of/irq.c | +11 -4

@@ -344 +344 @@
 	struct device_node *p;
 	const __be32 *addr;
 	u32 intsize;
-	int i, res;
+	int i, res, addr_len;
+	__be32 addr_buf[3] = { 0 };
 
 	pr_debug("of_irq_parse_one: dev=%pOF, index=%d\n", device, index);
 
@@ -354 +353 @@
 		return of_irq_parse_oldworld(device, index, out_irq);
 
 	/* Get the reg property (if any) */
-	addr = of_get_property(device, "reg", NULL);
+	addr = of_get_property(device, "reg", &addr_len);
+
+	/* Prevent out-of-bounds read in case of longer interrupt parent address size */
+	if (addr_len > (3 * sizeof(__be32)))
+		addr_len = 3 * sizeof(__be32);
+	if (addr)
+		memcpy(addr_buf, addr, addr_len);
 
 	/* Try the new-style interrupts-extended first */
 	res = of_parse_phandle_with_args(device, "interrupts-extended",
 					 "#interrupt-cells", index, out_irq);
 	if (!res)
-		return of_irq_parse_raw(addr, out_irq);
+		return of_irq_parse_raw(addr_buf, out_irq);
 
 	/* Look for the interrupt parent. */
 	p = of_irq_find_parent(device);
@@ -396 +389 @@
 
 
 	/* Check if there are any interrupt-map translations to process */
-	res = of_irq_parse_raw(addr, out_irq);
+	res = of_irq_parse_raw(addr_buf, out_irq);
 out:
 	of_node_put(p);
 	return res;
drivers/platform/x86/Kconfig | +1

@@ -477 +477 @@
 	tristate "Lenovo Yoga Tablet Mode Control"
 	depends on ACPI_WMI
 	depends on INPUT
+	depends on IDEAPAD_LAPTOP
 	select INPUT_SPARSEKMAP
 	help
 	  This driver maps the Tablet Mode Control switch to SW_TABLET_MODE input
drivers/platform/x86/amd/pmf/spc.c | +11 -21

@@ -150 +150 @@
 	return 0;
 }
 
-static int amd_pmf_get_sensor_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
+static void amd_pmf_get_sensor_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
 {
 	struct amd_sfh_info sfh_info;
-	int ret;
+
+	/* Get the latest information from SFH */
+	in->ev_info.user_present = false;
 
 	/* Get ALS data */
-	ret = amd_get_sfh_info(&sfh_info, MT_ALS);
-	if (!ret)
+	if (!amd_get_sfh_info(&sfh_info, MT_ALS))
 		in->ev_info.ambient_light = sfh_info.ambient_light;
 	else
-		return ret;
+		dev_dbg(dev->dev, "ALS is not enabled/detected\n");
 
 	/* get HPD data */
-	ret = amd_get_sfh_info(&sfh_info, MT_HPD);
-	if (ret)
-		return ret;
-
-	switch (sfh_info.user_present) {
-	case SFH_NOT_DETECTED:
-		in->ev_info.user_present = 0xff; /* assume no sensors connected */
-		break;
-	case SFH_USER_PRESENT:
-		in->ev_info.user_present = 1;
-		break;
-	case SFH_USER_AWAY:
-		in->ev_info.user_present = 0;
-		break;
+	if (!amd_get_sfh_info(&sfh_info, MT_HPD)) {
+		if (sfh_info.user_present == SFH_USER_PRESENT)
+			in->ev_info.user_present = true;
+	} else {
+		dev_dbg(dev->dev, "HPD is not enabled/detected\n");
 	}
-
-	return 0;
 }
 
 void amd_pmf_populate_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in)
drivers/platform/x86/ideapad-laptop.c | +132 -16

@@ -126 +126 @@
 
 struct ideapad_private {
 	struct acpi_device *adev;
+	struct mutex vpc_mutex; /* protects the VPC calls */
 	struct rfkill *rfk[IDEAPAD_RFKILL_DEV_NUM];
 	struct ideapad_rfk_priv rfk_priv[IDEAPAD_RFKILL_DEV_NUM];
 	struct platform_device *platform_device;
@@ -147 +146 @@
 		bool touchpad_ctrl_via_ec : 1;
 		bool ctrl_ps2_aux_port : 1;
 		bool usb_charging : 1;
+		bool ymc_ec_trigger : 1;
 	} features;
 	struct {
 		bool initialized;
@@ -195 +193 @@
 MODULE_PARM_DESC(touchpad_ctrl_via_ec,
	"Enable registering a 'touchpad' sysfs-attribute which can be used to manually "
	"tell the EC to enable/disable the touchpad. This may not work on all models.");
+
+static bool ymc_ec_trigger __read_mostly;
+module_param(ymc_ec_trigger, bool, 0444);
+MODULE_PARM_DESC(ymc_ec_trigger,
+	"Enable EC triggering work-around to force emitting tablet mode events. "
+	"If you need this please report this to: platform-driver-x86@vger.kernel.org");
 
 /*
  * shared data
@@ -301 +293 @@
 {
 	struct ideapad_private *priv = s->private;
 	unsigned long value;
+
+	guard(mutex)(&priv->vpc_mutex);
 
 	if (!read_ec_data(priv->adev->handle, VPCCMD_R_BL_MAX, &value))
 		seq_printf(s, "Backlight max: %lu\n", value);
@@ -422 +412 @@
 	unsigned long result;
 	int err;
 
-	err = read_ec_data(priv->adev->handle, VPCCMD_R_CAMERA, &result);
+	scoped_guard(mutex, &priv->vpc_mutex)
+		err = read_ec_data(priv->adev->handle, VPCCMD_R_CAMERA, &result);
 	if (err)
 		return err;
@@ -442 +431 @@
 	if (err)
 		return err;
 
-	err = write_ec_cmd(priv->adev->handle, VPCCMD_W_CAMERA, state);
+	scoped_guard(mutex, &priv->vpc_mutex)
+		err = write_ec_cmd(priv->adev->handle, VPCCMD_W_CAMERA, state);
 	if (err)
 		return err;
@@ -496 +484 @@
 	unsigned long result;
 	int err;
 
-	err = read_ec_data(priv->adev->handle, VPCCMD_R_FAN, &result);
+	scoped_guard(mutex, &priv->vpc_mutex)
+		err = read_ec_data(priv->adev->handle, VPCCMD_R_FAN, &result);
 	if (err)
 		return err;
@@ -519 +506 @@
 	if (state > 4 || state == 3)
 		return -EINVAL;
 
-	err = write_ec_cmd(priv->adev->handle, VPCCMD_W_FAN, state);
+	scoped_guard(mutex, &priv->vpc_mutex)
+		err = write_ec_cmd(priv->adev->handle, VPCCMD_W_FAN, state);
 	if (err)
 		return err;
@@ -605 +591 @@
 	unsigned long result;
 	int err;
 
-	err = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &result);
+	scoped_guard(mutex, &priv->vpc_mutex)
+		err = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &result);
 	if (err)
 		return err;
@@ -627 +612 @@
 	if (err)
 		return err;
 
-	err = write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, state);
+	scoped_guard(mutex, &priv->vpc_mutex)
+		err = write_ec_cmd(priv->adev->handle, VPCCMD_W_TOUCHPAD, state);
 	if (err)
 		return err;
@@ -1021 +1005 @@
 	struct ideapad_rfk_priv *priv = data;
 	int opcode = ideapad_rfk_data[priv->dev].opcode;
 
+	guard(mutex)(&priv->priv->vpc_mutex);
+
 	return write_ec_cmd(priv->priv->adev->handle, opcode, !blocked);
 }
@@ -1036 +1018 @@
 	int i;
 
 	if (priv->features.hw_rfkill_switch) {
+		guard(mutex)(&priv->vpc_mutex);
+
 		if (read_ec_data(priv->adev->handle, VPCCMD_R_RF, &hw_blocked))
 			return;
 		hw_blocked = !hw_blocked;
@@ -1211 +1191 @@
 {
 	unsigned long long_pressed;
 
-	if (read_ec_data(priv->adev->handle, VPCCMD_R_NOVO, &long_pressed))
-		return;
+	scoped_guard(mutex, &priv->vpc_mutex)
+		if (read_ec_data(priv->adev->handle, VPCCMD_R_NOVO, &long_pressed))
+			return;
 
 	if (long_pressed)
 		ideapad_input_report(priv, 17);
@@ -1225 +1204 @@
 {
 	unsigned long bit, value;
 
-	if (read_ec_data(priv->adev->handle, VPCCMD_R_SPECIAL_BUTTONS, &value))
-		return;
+	scoped_guard(mutex, &priv->vpc_mutex)
+		if (read_ec_data(priv->adev->handle, VPCCMD_R_SPECIAL_BUTTONS, &value))
+			return;
 
 	for_each_set_bit (bit, &value, 16) {
 		switch (bit) {
@@ -1260 +1238 @@
 	unsigned long now;
 	int err;
 
+	guard(mutex)(&priv->vpc_mutex);
+
 	err = read_ec_data(priv->adev->handle, VPCCMD_R_BL, &now);
 	if (err)
 		return err;
@@ -1273 +1249 @@
 {
 	struct ideapad_private *priv = bl_get_data(blightdev);
 	int err;
+
+	guard(mutex)(&priv->vpc_mutex);
 
 	err = write_ec_cmd(priv->adev->handle, VPCCMD_W_BL,
 			   blightdev->props.brightness);
@@ -1353 +1327 @@
 	if (!blightdev)
 		return;
 
+	guard(mutex)(&priv->vpc_mutex);
+
 	if (read_ec_data(priv->adev->handle, VPCCMD_R_BL_POWER, &power))
 		return;
@@ -1367 +1339 @@
 
 	/* if we control brightness via acpi video driver */
 	if (!priv->blightdev)
-		read_ec_data(priv->adev->handle, VPCCMD_R_BL, &now);
+		scoped_guard(mutex, &priv->vpc_mutex)
+			read_ec_data(priv->adev->handle, VPCCMD_R_BL, &now);
 	else
 		backlight_force_update(priv->blightdev, BACKLIGHT_UPDATE_HOTKEY);
 }
@@ -1593 +1564 @@
 	int ret;
 
 	/* Without reading from EC touchpad LED doesn't switch state */
-	ret = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value);
+	scoped_guard(mutex, &priv->vpc_mutex)
+		ret = read_ec_data(priv->adev->handle, VPCCMD_R_TOUCHPAD, &value);
 	if (ret)
 		return;
@@ -1622 +1592 @@
 	priv->r_touchpad_val = value;
 }
 
+static const struct dmi_system_id ymc_ec_trigger_quirk_dmi_table[] = {
+	{
+		/* Lenovo Yoga 7 14ARB7 */
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+			DMI_MATCH(DMI_PRODUCT_NAME, "82QF"),
+		},
+	},
+	{
+		/* Lenovo Yoga 7 14ACN6 */
+		.matches =
{16061606+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),16071607+ DMI_MATCH(DMI_PRODUCT_NAME, "82N7"),16081608+ },16091609+ },16101610+ { }16111611+};16121612+16131613+static void ideapad_laptop_trigger_ec(void)16141614+{16151615+ struct ideapad_private *priv;16161616+ int ret;16171617+16181618+ guard(mutex)(&ideapad_shared_mutex);16191619+16201620+ priv = ideapad_shared;16211621+ if (!priv)16221622+ return;16231623+16241624+ if (!priv->features.ymc_ec_trigger)16251625+ return;16261626+16271627+ scoped_guard(mutex, &priv->vpc_mutex)16281628+ ret = write_ec_cmd(priv->adev->handle, VPCCMD_W_YMC, 1);16291629+ if (ret)16301630+ dev_warn(&priv->platform_device->dev, "Could not write YMC: %d\n", ret);16311631+}16321632+16331633+static int ideapad_laptop_nb_notify(struct notifier_block *nb,16341634+ unsigned long action, void *data)16351635+{16361636+ switch (action) {16371637+ case IDEAPAD_LAPTOP_YMC_EVENT:16381638+ ideapad_laptop_trigger_ec();16391639+ break;16401640+ }16411641+16421642+ return 0;16431643+}16441644+16451645+static struct notifier_block ideapad_laptop_notifier = {16461646+ .notifier_call = ideapad_laptop_nb_notify,16471647+};16481648+16491649+static BLOCKING_NOTIFIER_HEAD(ideapad_laptop_chain_head);16501650+16511651+int ideapad_laptop_register_notifier(struct notifier_block *nb)16521652+{16531653+ return blocking_notifier_chain_register(&ideapad_laptop_chain_head, nb);16541654+}16551655+EXPORT_SYMBOL_NS_GPL(ideapad_laptop_register_notifier, IDEAPAD_LAPTOP);16561656+16571657+int ideapad_laptop_unregister_notifier(struct notifier_block *nb)16581658+{16591659+ return blocking_notifier_chain_unregister(&ideapad_laptop_chain_head, nb);16601660+}16611661+EXPORT_SYMBOL_NS_GPL(ideapad_laptop_unregister_notifier, IDEAPAD_LAPTOP);16621662+16631663+void ideapad_laptop_call_notifier(unsigned long action, void *data)16641664+{16651665+ blocking_notifier_call_chain(&ideapad_laptop_chain_head, action, data);16661666+}16671667+EXPORT_SYMBOL_NS_GPL(ideapad_laptop_call_notifier, 
IDEAPAD_LAPTOP);16681668+16251669static void ideapad_acpi_notify(acpi_handle handle, u32 event, void *data)16261670{16271671 struct ideapad_private *priv = data;16281672 unsigned long vpc1, vpc2, bit;1629167316301630- if (read_ec_data(handle, VPCCMD_R_VPC1, &vpc1))16311631- return;16741674+ scoped_guard(mutex, &priv->vpc_mutex) {16751675+ if (read_ec_data(handle, VPCCMD_R_VPC1, &vpc1))16761676+ return;1632167716331633- if (read_ec_data(handle, VPCCMD_R_VPC2, &vpc2))16341634- return;16781678+ if (read_ec_data(handle, VPCCMD_R_VPC2, &vpc2))16791679+ return;16801680+ }1635168116361682 vpc1 = (vpc2 << 8) | vpc1;16371683···18341728 priv->features.ctrl_ps2_aux_port =18351729 ctrl_ps2_aux_port || dmi_check_system(ctrl_ps2_aux_port_list);18361730 priv->features.touchpad_ctrl_via_ec = touchpad_ctrl_via_ec;17311731+ priv->features.ymc_ec_trigger =17321732+ ymc_ec_trigger || dmi_check_system(ymc_ec_trigger_quirk_dmi_table);1837173318381734 if (!read_ec_data(handle, VPCCMD_R_FAN, &val))18391735 priv->features.fan_mode = true;···20141906 priv->adev = adev;20151907 priv->platform_device = pdev;2016190819091909+ err = devm_mutex_init(&pdev->dev, &priv->vpc_mutex);19101910+ if (err)19111911+ return err;19121912+20171913 ideapad_check_features(priv);2018191420191915 err = ideapad_sysfs_init(priv);···20861974 if (err)20871975 goto shared_init_failed;2088197619771977+ ideapad_laptop_register_notifier(&ideapad_laptop_notifier);19781978+20891979 return 0;2090198020911981shared_init_failed:···21192005{21202006 struct ideapad_private *priv = dev_get_drvdata(&pdev->dev);21212007 int i;20082008+20092009+ ideapad_laptop_unregister_notifier(&ideapad_laptop_notifier);2122201021232011 ideapad_shared_exit(priv);21242012
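The hunks above convert open-coded EC accesses to `guard(mutex)`/`scoped_guard` from `<linux/cleanup.h>`, which release the lock automatically on every exit path. A minimal userspace sketch of the same pattern, built on GCC/Clang's `cleanup` variable attribute and pthreads (the names `vpc_mutex`, `read_vpc`, `write_vpc` are illustrative, not the driver's API):

```c
#include <pthread.h>
#include <stddef.h>

/*
 * Userspace approximation of the kernel's scoped_guard() idiom: a variable
 * carrying __attribute__((cleanup)) unlocks the mutex when it goes out of
 * scope, so early returns inside the guarded block cannot leak the lock.
 */
static void unlock_cleanup(pthread_mutex_t **m)
{
	pthread_mutex_unlock(*m);
}

/* Run the following statement/block with the mutex held. */
#define scoped_guard_mutex(m)						\
	for (pthread_mutex_t *_guard					\
		__attribute__((cleanup(unlock_cleanup))) =		\
		(pthread_mutex_lock(m), (m)), *_once = (m);		\
	     _once; _once = NULL)

static pthread_mutex_t vpc_mutex = PTHREAD_MUTEX_INITIALIZER;
static int vpc_state;

int read_vpc(void)
{
	int v;

	scoped_guard_mutex(&vpc_mutex)
		v = vpc_state;	/* lock held only for this statement */

	return v;		/* mutex already released here */
}

int write_vpc(int v)
{
	scoped_guard_mutex(&vpc_mutex)
		vpc_state = v;
	return 0;
}
```

The `for`-loop trick makes the guarded statement execute exactly once while scoping the cleanup variable to it, which is also how the kernel macro is shaped.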
···2020#define LENOVO_YMC_QUERY_INSTANCE 02121#define LENOVO_YMC_QUERY_METHOD 0x0122222323-static bool ec_trigger __read_mostly;2424-module_param(ec_trigger, bool, 0444);2525-MODULE_PARM_DESC(ec_trigger, "Enable EC triggering work-around to force emitting tablet mode events");2626-2723static bool force;2824module_param(force, bool, 0444);2925MODULE_PARM_DESC(force, "Force loading on boards without a convertible DMI chassis-type");3030-3131-static const struct dmi_system_id ec_trigger_quirk_dmi_table[] = {3232- {3333- /* Lenovo Yoga 7 14ARB7 */3434- .matches = {3535- DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),3636- DMI_MATCH(DMI_PRODUCT_NAME, "82QF"),3737- },3838- },3939- {4040- /* Lenovo Yoga 7 14ACN6 */4141- .matches = {4242- DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),4343- DMI_MATCH(DMI_PRODUCT_NAME, "82N7"),4444- },4545- },4646- { }4747-};48264927static const struct dmi_system_id allowed_chasis_types_dmi_table[] = {5028 {···40624163struct lenovo_ymc_private {4264 struct input_dev *input_dev;4343- struct acpi_device *ec_acpi_dev;4465};4545-4646-static void lenovo_ymc_trigger_ec(struct wmi_device *wdev, struct lenovo_ymc_private *priv)4747-{4848- int err;4949-5050- if (!priv->ec_acpi_dev)5151- return;5252-5353- err = write_ec_cmd(priv->ec_acpi_dev->handle, VPCCMD_W_YMC, 1);5454- if (err)5555- dev_warn(&wdev->dev, "Could not write YMC: %d\n", err);5656-}57665867static const struct key_entry lenovo_ymc_keymap[] = {5968 /* Laptop */···9012591126free_obj:92127 kfree(obj);9393- lenovo_ymc_trigger_ec(wdev, priv);128128+ ideapad_laptop_call_notifier(IDEAPAD_LAPTOP_YMC_EVENT, &code);94129}9595-9696-static void acpi_dev_put_helper(void *p) { acpi_dev_put(p); }9713098131static int lenovo_ymc_probe(struct wmi_device *wdev, const void *ctx)99132{···106143 return -ENODEV;107144 }108145109109- ec_trigger |= dmi_check_system(ec_trigger_quirk_dmi_table);110110-111146 priv = devm_kzalloc(&wdev->dev, sizeof(*priv), GFP_KERNEL);112147 if (!priv)113148 return -ENOMEM;114114-115115- if (ec_trigger) 
{116116- pr_debug("Lenovo YMC enable EC triggering.\n");117117- priv->ec_acpi_dev = acpi_dev_get_first_match_dev("VPC2004", NULL, -1);118118-119119- if (!priv->ec_acpi_dev) {120120- dev_err(&wdev->dev, "Could not find EC ACPI device.\n");121121- return -ENODEV;122122- }123123- err = devm_add_action_or_reset(&wdev->dev,124124- acpi_dev_put_helper, priv->ec_acpi_dev);125125- if (err) {126126- dev_err(&wdev->dev,127127- "Could not clean up EC ACPI device: %d\n", err);128128- return err;129129- }130130- }131149132150 input_dev = devm_input_allocate_device(&wdev->dev);133151 if (!input_dev)···136192 dev_set_drvdata(&wdev->dev, priv);137193138194 /* Report the state for the first time on probe */139139- lenovo_ymc_trigger_ec(wdev, priv);140195 lenovo_ymc_notify(wdev, NULL);141196 return 0;142197}···160217MODULE_AUTHOR("Gergo Koteles <soyer@irl.hu>");161218MODULE_DESCRIPTION("Lenovo Yoga Mode Control driver");162219MODULE_LICENSE("GPL");220220+MODULE_IMPORT_NS(IDEAPAD_LAPTOP);
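The lenovo-ymc hunk above stops writing `VPCCMD_W_YMC` directly and instead fires `IDEAPAD_LAPTOP_YMC_EVENT` through a notifier chain that ideapad-laptop subscribes to. A minimal single-threaded sketch of that publish/subscribe shape (the kernel's `blocking_notifier_*` helpers additionally serialize with an rwsem, omitted here; all names below are illustrative):

```c
#include <stddef.h>

/* Simplified notifier chain: subscribers register a callback node,
 * publishers walk the chain and invoke each callback with an action code. */
struct notifier_block {
	int (*notifier_call)(struct notifier_block *nb,
			     unsigned long action, void *data);
	struct notifier_block *next;
};

static struct notifier_block *chain_head;

void notifier_register(struct notifier_block *nb)
{
	nb->next = chain_head;
	chain_head = nb;
}

void notifier_call_chain(unsigned long action, void *data)
{
	for (struct notifier_block *nb = chain_head; nb; nb = nb->next)
		nb->notifier_call(nb, action, data);
}

/* Example subscriber mirroring the shape of ideapad_laptop_nb_notify() */
#define YMC_EVENT 1UL

static unsigned long last_action;

static int ymc_notify(struct notifier_block *nb, unsigned long action,
		      void *data)
{
	(void)nb; (void)data;
	last_action = action;	/* a real handler would poke the EC here */
	return 0;
}

struct notifier_block ymc_nb = { .notifier_call = ymc_notify };
```

This decoupling is why the patch can drop lenovo-ymc's ACPI device lookup entirely: only the driver that owns the EC handle touches it.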
+23-13
drivers/s390/block/dasd.c
···16011601 if (!sense)16021602 return 0;1603160316041604- return !!(sense[1] & SNS1_NO_REC_FOUND) ||16051605- !!(sense[1] & SNS1_FILE_PROTECTED) ||16061606- scsw_cstat(&irb->scsw) == SCHN_STAT_INCORR_LEN;16041604+ if (sense[1] & SNS1_NO_REC_FOUND)16051605+ return 1;16061606+16071607+ if ((sense[1] & SNS1_INV_TRACK_FORMAT) &&16081608+ scsw_is_tm(&irb->scsw) &&16091609+ !(sense[2] & SNS2_ENV_DATA_PRESENT))16101610+ return 1;16111611+16121612+ return 0;16071613}1608161416091615static int dasd_ese_oos_cond(u8 *sense)···16301624 struct dasd_device *device;16311625 unsigned long now;16321626 int nrf_suppressed = 0;16331633- int fp_suppressed = 0;16271627+ int it_suppressed = 0;16341628 struct request *req;16351629 u8 *sense = NULL;16361630 int expires;···16851679 */16861680 sense = dasd_get_sense(irb);16871681 if (sense) {16881688- fp_suppressed = (sense[1] & SNS1_FILE_PROTECTED) &&16891689- test_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);16821682+ it_suppressed = (sense[1] & SNS1_INV_TRACK_FORMAT) &&16831683+ !(sense[2] & SNS2_ENV_DATA_PRESENT) &&16841684+ test_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags);16901685 nrf_suppressed = (sense[1] & SNS1_NO_REC_FOUND) &&16911686 test_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);16921687···17021695 return;17031696 }17041697 }17051705- if (!(fp_suppressed || nrf_suppressed))16981698+ if (!(it_suppressed || nrf_suppressed))17061699 device->discipline->dump_sense_dbf(device, irb, "int");1707170017081701 if (device->features & DASD_FEATURE_ERPLOG)···24662459 rc = 0;24672460 list_for_each_entry_safe(cqr, n, ccw_queue, blocklist) {24682461 /*24692469- * In some cases the 'File Protected' or 'Incorrect Length'24702470- * error might be expected and error recovery would be24712471- * unnecessary in these cases. 
Check if the according suppress24722472- * bit is set.24622462+ * In some cases certain errors might be expected and24632463+ * error recovery would be unnecessary in these cases.24642464+ * Check if the according suppress bit is set.24732465 */24742466 sense = dasd_get_sense(&cqr->irb);24752475- if (sense && sense[1] & SNS1_FILE_PROTECTED &&24762476- test_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags))24672467+ if (sense && (sense[1] & SNS1_INV_TRACK_FORMAT) &&24682468+ !(sense[2] & SNS2_ENV_DATA_PRESENT) &&24692469+ test_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags))24702470+ continue;24712471+ if (sense && (sense[1] & SNS1_NO_REC_FOUND) &&24722472+ test_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags))24772473 continue;24782474 if (scsw_cstat(&cqr->irb.scsw) == 0x40 &&24792475 test_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags))
+2-8
drivers/s390/block/dasd_3990_erp.c
···1386138613871387 struct dasd_device *device = erp->startdev;1388138813891389- /*13901390- * In some cases the 'File Protected' error might be expected and13911391- * log messages shouldn't be written then.13921392- * Check if the according suppress bit is set.13931393- */13941394- if (!test_bit(DASD_CQR_SUPPRESS_FP, &erp->flags))13951395- dev_err(&device->cdev->dev,13961396- "Accessing the DASD failed because of a hardware error\n");13891389+ dev_err(&device->cdev->dev,13901390+ "Accessing the DASD failed because of a hardware error\n");1397139113981392 return dasd_3990_erp_cleanup(erp, DASD_CQR_FAILED);13991393
+25-32
drivers/s390/block/dasd_eckd.c
···22752275 cqr->status = DASD_CQR_FILLED;22762276 /* Set flags to suppress output for expected errors */22772277 set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);22782278+ set_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags);2278227922792280 return cqr;22802281}···25572556 cqr->buildclk = get_tod_clock();25582557 cqr->status = DASD_CQR_FILLED;25592558 /* Set flags to suppress output for expected errors */25602560- set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);25612559 set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags);2562256025632561 return cqr;···4130413041314131 /* Set flags to suppress output for expected errors */41324132 if (dasd_eckd_is_ese(basedev)) {41334133- set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);41344134- set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags);41354133 set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);41364134 }41374135···4631463346324634 /* Set flags to suppress output for expected errors */46334635 if (dasd_eckd_is_ese(basedev)) {46344634- set_bit(DASD_CQR_SUPPRESS_FP, &cqr->flags);46354635- set_bit(DASD_CQR_SUPPRESS_IL, &cqr->flags);46364636 set_bit(DASD_CQR_SUPPRESS_NRF, &cqr->flags);46374637+ set_bit(DASD_CQR_SUPPRESS_IT, &cqr->flags);46374638 }4638463946394640 return cqr;···57775780{57785781 u8 *sense = dasd_get_sense(irb);5779578257805780- if (scsw_is_tm(&irb->scsw)) {57815781- /*57825782- * In some cases the 'File Protected' or 'Incorrect Length'57835783- * error might be expected and log messages shouldn't be written57845784- * then. 
Check if the according suppress bit is set.57855785- */57865786- if (sense && (sense[1] & SNS1_FILE_PROTECTED) &&57875787- test_bit(DASD_CQR_SUPPRESS_FP, &req->flags))57885788- return;57895789- if (scsw_cstat(&irb->scsw) == 0x40 &&57905790- test_bit(DASD_CQR_SUPPRESS_IL, &req->flags))57915791- return;57835783+ /*57845784+ * In some cases certain errors might be expected and57855785+ * log messages shouldn't be written then.57865786+ * Check if the according suppress bit is set.57875787+ */57885788+ if (sense && (sense[1] & SNS1_INV_TRACK_FORMAT) &&57895789+ !(sense[2] & SNS2_ENV_DATA_PRESENT) &&57905790+ test_bit(DASD_CQR_SUPPRESS_IT, &req->flags))57915791+ return;5792579257935793+ if (sense && sense[0] & SNS0_CMD_REJECT &&57945794+ test_bit(DASD_CQR_SUPPRESS_CR, &req->flags))57955795+ return;57965796+57975797+ if (sense && sense[1] & SNS1_NO_REC_FOUND &&57985798+ test_bit(DASD_CQR_SUPPRESS_NRF, &req->flags))57995799+ return;58005800+58015801+ if (scsw_cstat(&irb->scsw) == 0x40 &&58025802+ test_bit(DASD_CQR_SUPPRESS_IL, &req->flags))58035803+ return;58045804+58055805+ if (scsw_is_tm(&irb->scsw))57935806 dasd_eckd_dump_sense_tcw(device, req, irb);57945794- } else {57955795- /*57965796- * In some cases the 'Command Reject' or 'No Record Found'57975797- * error might be expected and log messages shouldn't be57985798- * written then. Check if the according suppress bit is set.57995799- */58005800- if (sense && sense[0] & SNS0_CMD_REJECT &&58015801- test_bit(DASD_CQR_SUPPRESS_CR, &req->flags))58025802- return;58035803-58045804- if (sense && sense[1] & SNS1_NO_REC_FOUND &&58055805- test_bit(DASD_CQR_SUPPRESS_NRF, &req->flags))58065806- return;58075807-58075807+ else58085808 dasd_eckd_dump_sense_ccw(device, req, irb);58095809- }58105809}5811581058125811static int dasd_eckd_reload_device(struct dasd_device *device)
···2727#include "ia_css_prbs.h"2828#include "ia_css_input_port.h"29293030-/* Input modes, these enumerate all supported input modes.3131- * Note that not all ISP modes support all input modes.3030+/*3131+ * Input modes, these enumerate all supported input modes.3232+ * This enum is part of the atomisp firmware ABI and must3333+ * NOT be changed!3434+ * Note that not all ISP modes support all input modes.3235 */3336enum ia_css_input_mode {3437 IA_CSS_INPUT_MODE_SENSOR, /** data from sensor */3538 IA_CSS_INPUT_MODE_FIFO, /** data from input-fifo */3939+ IA_CSS_INPUT_MODE_TPG, /** data from test-pattern generator */3640 IA_CSS_INPUT_MODE_PRBS, /** data from pseudo-random bit stream */3741 IA_CSS_INPUT_MODE_MEMORY, /** data from a frame in memory */3842 IA_CSS_INPUT_MODE_BUFFERED_SENSOR /** data is sent through mipi buffer */
···344344345345#define IA_CSS_MIPI_SIZE_CHECK_MAX_NOF_ENTRIES_PER_PORT (3)346346347347-/* SP configuration information */347347+/*348348+ * SP configuration information349349+ *350350+ * This struct is part of the atomisp firmware ABI and is directly copied351351+ * to ISP DRAM by sh_css_store_sp_group_to_ddr()352352+ *353353+ * Do NOT change this struct's layout or remove seemingly unused fields!354354+ */348355struct sh_css_sp_config {349356 u8 no_isp_sync; /* Signal host immediately after start */350357 u8 enable_raw_pool_locking; /** Enable Raw Buffer Locking for HALv3 Support */···361354 host (true) or when they are passed to the preview/video pipe362355 (false). */363356357357+ /*358358+ * Note the fields below are only used on the ISP2400 not on the ISP2401,359359+ * sh_css_store_sp_group_to_ddr() skip copying these when run on the ISP2401.360360+ */364361 struct {365362 u8 a_changed;366363 u8 b_changed;···374363 } input_formatter;375364376365 sync_generator_cfg_t sync_gen;366366+ tpg_cfg_t tpg;377367 prbs_cfg_t prbs;378368 input_system_cfg_t input_circuit;379369 u8 input_circuit_cfg_changed;380380- u32 mipi_sizes_for_check[N_CSI_PORTS][IA_CSS_MIPI_SIZE_CHECK_MAX_NOF_ENTRIES_PER_PORT];381381- u8 enable_isys_event_queue;370370+ u32 mipi_sizes_for_check[N_CSI_PORTS][IA_CSS_MIPI_SIZE_CHECK_MAX_NOF_ENTRIES_PER_PORT];371371+ /* These last 2 fields are used on both the ISP2400 and the ISP2401 */372372+ u8 enable_isys_event_queue;382373 u8 disable_cont_vf;383374};384375
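The comments added above stress that `enum ia_css_input_mode` and `struct sh_css_sp_config` are firmware ABI and must not be reordered. One way to make such a contract enforceable in C is to pin offsets and sizes with `_Static_assert`; the struct below is a simplified stand-in, not the real `sh_css_sp_config` layout:

```c
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-in for a firmware-ABI struct; the point is that any
 * reorder, insertion, or removal of a field breaks the build instead of
 * silently corrupting what the firmware reads from DRAM. */
struct fw_abi_config {
	uint8_t  no_isp_sync;
	uint8_t  enable_raw_pool_locking;
	uint8_t  lock_all;
	uint8_t  disable_cont_vf;
	uint32_t mipi_sizes_for_check[4][3];
};

_Static_assert(offsetof(struct fw_abi_config, no_isp_sync) == 0,
	       "ABI: no_isp_sync must stay first");
_Static_assert(offsetof(struct fw_abi_config, mipi_sizes_for_check) == 4,
	       "ABI: mipi_sizes_for_check must start at byte 4");
_Static_assert(sizeof(struct fw_abi_config) == 52,
	       "ABI: struct size is fixed by the firmware");
```

The same idea applies to the enum: once `IA_CSS_INPUT_MODE_TPG` is back in its slot, every later enumerator regains the numeric value the firmware expects.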
+66-17
drivers/thermal/gov_bang_bang.c
···13131414#include "thermal_core.h"15151616+static void bang_bang_set_instance_target(struct thermal_instance *instance,1717+ unsigned int target)1818+{1919+ if (instance->target != 0 && instance->target != 1 &&2020+ instance->target != THERMAL_NO_TARGET)2121+ pr_debug("Unexpected state %ld of thermal instance %s in bang-bang\n",2222+ instance->target, instance->name);2323+2424+ /*2525+ * Enable the fan when the trip is crossed on the way up and disable it2626+ * when the trip is crossed on the way down.2727+ */2828+ instance->target = target;2929+ instance->initialized = true;3030+3131+ dev_dbg(&instance->cdev->device, "target=%ld\n", instance->target);3232+3333+ mutex_lock(&instance->cdev->lock);3434+ __thermal_cdev_update(instance->cdev);3535+ mutex_unlock(&instance->cdev->lock);3636+}3737+1638/**1739 * bang_bang_control - controls devices associated with the given zone1840 * @tz: thermal_zone_device···7654 tz->temperature, trip->hysteresis);77557856 list_for_each_entry(instance, &tz->thermal_instances, tz_node) {7979- if (instance->trip != trip)5757+ if (instance->trip == trip)5858+ bang_bang_set_instance_target(instance, crossed_up);5959+ }6060+}6161+6262+static void bang_bang_manage(struct thermal_zone_device *tz)6363+{6464+ const struct thermal_trip_desc *td;6565+ struct thermal_instance *instance;6666+6767+ /* If the code below has run already, nothing needs to be done. 
*/6868+ if (tz->governor_data)6969+ return;7070+7171+ for_each_trip_desc(tz, td) {7272+ const struct thermal_trip *trip = &td->trip;7373+7474+ if (tz->temperature >= td->threshold ||7575+ trip->temperature == THERMAL_TEMP_INVALID ||7676+ trip->type == THERMAL_TRIP_CRITICAL ||7777+ trip->type == THERMAL_TRIP_HOT)8078 continue;81798282- if (instance->target != 0 && instance->target != 1 &&8383- instance->target != THERMAL_NO_TARGET)8484- pr_debug("Unexpected state %ld of thermal instance %s in bang-bang\n",8585- instance->target, instance->name);8686-8780 /*8888- * Enable the fan when the trip is crossed on the way up and8989- * disable it when the trip is crossed on the way down.8181+ * If the initial cooling device state is "on", but the zone8282+ * temperature is not above the trip point, the core will not8383+ * call bang_bang_control() until the zone temperature reaches8484+ * the trip point temperature which may be never. In those8585+ * cases, set the initial state of the cooling device to 0.9086 */9191- instance->target = crossed_up;9292-9393- dev_dbg(&instance->cdev->device, "target=%ld\n", instance->target);9494-9595- mutex_lock(&instance->cdev->lock);9696- instance->cdev->updated = false; /* cdev needs update */9797- mutex_unlock(&instance->cdev->lock);8787+ list_for_each_entry(instance, &tz->thermal_instances, tz_node) {8888+ if (!instance->initialized && instance->trip == trip)8989+ bang_bang_set_instance_target(instance, 0);9090+ }9891 }9992100100- list_for_each_entry(instance, &tz->thermal_instances, tz_node)101101- thermal_cdev_update(instance->cdev);9393+ tz->governor_data = (void *)true;9494+}9595+9696+static void bang_bang_update_tz(struct thermal_zone_device *tz,9797+ enum thermal_notify_event reason)9898+{9999+ /*100100+ * Let bang_bang_manage() know that it needs to walk trips after binding101101+ * a new cdev and after system resume.102102+ */103103+ if (reason == THERMAL_TZ_BIND_CDEV || reason == THERMAL_TZ_RESUME)104104+ tz->governor_data = 
NULL;102105}103106104107static struct thermal_governor thermal_gov_bang_bang = {105108 .name = "bang_bang",106109 .trip_crossed = bang_bang_control,110110+ .manage = bang_bang_manage,111111+ .update_tz = bang_bang_update_tz,107112};108113THERMAL_GOVERNOR_DECLARE(thermal_gov_bang_bang);
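The governor logic the hunks above implement is classic bang-bang control with hysteresis: full-on when the trip is crossed upward, full-off once the temperature falls below trip minus hysteresis, previous state retained inside the band. Condensed into a pure function for illustration (names and the `THERMAL_NO_TARGET` sentinel mirror the driver, but this is a model, not the kernel code path):

```c
#define THERMAL_NO_TARGET (-1L)

/*
 * Decision model: returns the cooling-device target (0 or 1) for a given
 * temperature. Inside the hysteresis band there is no crossing, so the
 * current state is kept; an uninitialized state defaults to off, matching
 * what bang_bang_manage() does for instances the core never kicked.
 */
long bang_bang_target(int temp, int trip_temp, int hyst, long cur)
{
	if (temp >= trip_temp)
		return 1;		/* crossed on the way up */
	if (temp <= trip_temp - hyst)
		return 0;		/* crossed on the way down */
	return cur == THERMAL_NO_TARGET ? 0 : cur;
}
```

The new `bang_bang_manage()` exists precisely for the last case: a fan that starts "on" below the trip would otherwise never see a crossing event and never be turned off.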
···2727#include <linux/pm_wakeirq.h>2828#include <linux/dma-mapping.h>2929#include <linux/sys_soc.h>3030-#include <linux/pm_domain.h>31303231#include "8250.h"3332···117118/* Timeout low and High */118119#define UART_OMAP_TO_L 0x26119120#define UART_OMAP_TO_H 0x27120120-121121-/*122122- * Copy of the genpd flags for the console.123123- * Only used if console suspend is disabled124124- */125125-static unsigned int genpd_flags_console;126121127122struct omap8250_priv {128123 void __iomem *membase;···16501657{16511658 struct omap8250_priv *priv = dev_get_drvdata(dev);16521659 struct uart_8250_port *up = serial8250_get_port(priv->line);16531653- struct generic_pm_domain *genpd = pd_to_genpd(dev->pm_domain);16541660 int err = 0;1655166116561662 serial8250_suspend_port(priv->line);···16601668 if (!device_may_wakeup(dev))16611669 priv->wer = 0;16621670 serial_out(up, UART_OMAP_WER, priv->wer);16631663- if (uart_console(&up->port)) {16641664- if (console_suspend_enabled)16651665- err = pm_runtime_force_suspend(dev);16661666- else {16671667- /*16681668- * The pd shall not be powered-off (no console suspend).16691669- * Make copy of genpd flags before to set it always on.16701670- * The original value is restored during the resume.16711671- */16721672- genpd_flags_console = genpd->flags;16731673- genpd->flags |= GENPD_FLAG_ALWAYS_ON;16741674- }16751675- }16711671+ if (uart_console(&up->port) && console_suspend_enabled)16721672+ err = pm_runtime_force_suspend(dev);16761673 flush_work(&priv->qos_work);1677167416781675 return err;···16711690{16721691 struct omap8250_priv *priv = dev_get_drvdata(dev);16731692 struct uart_8250_port *up = serial8250_get_port(priv->line);16741674- struct generic_pm_domain *genpd = pd_to_genpd(dev->pm_domain);16751693 int err;1676169416771695 if (uart_console(&up->port) && console_suspend_enabled) {16781678- if (console_suspend_enabled) {16791679- err = pm_runtime_force_resume(dev);16801680- if (err)16811681- return err;16821682- } else16831683- 
genpd->flags = genpd_flags_console;16961696+ err = pm_runtime_force_resume(dev);16971697+ if (err)16981698+ return err;16841699 }1685170016861701 serial8250_resume_port(priv->line);
···29232923 pm_runtime_set_autosuspend_delay(&pdev->dev, UART_AUTOSUSPEND_TIMEOUT);29242924 pm_runtime_set_active(&pdev->dev);29252925 pm_runtime_enable(&pdev->dev);29262926+ pm_runtime_mark_last_busy(&pdev->dev);2926292729272928 ret = lpuart_global_reset(sport);29282929 if (ret)
+2-10
drivers/tty/vt/conmakehash.c
···1111 * Copyright (C) 1995-1997 H. Peter Anvin1212 */13131414-#include <libgen.h>1515-#include <linux/limits.h>1614#include <stdio.h>1715#include <stdlib.h>1816#include <sysexits.h>···7779{7880 FILE *ctbl;7981 const char *tblname;8080- char base_tblname[PATH_MAX];8182 char buffer[65536];8283 int fontlen;8384 int i, nuni, nent;···242245 for ( i = 0 ; i < fontlen ; i++ )243246 nuni += unicount[i];244247245245- strncpy(base_tblname, tblname, PATH_MAX);246246- base_tblname[PATH_MAX - 1] = 0;247248 printf("\248249/*\n\249249- * Do not edit this file; it was automatically generated by\n\250250- *\n\251251- * conmakehash %s > [this file]\n\252252- *\n\250250+ * Automatically generated file; Do not edit.\n\253251 */\n\254252\n\255253#include <linux/types.h>\n\256254\n\257255u8 dfont_unicount[%d] = \n\258258-{\n\t", basename(base_tblname), fontlen);256256+{\n\t", fontlen);259257260258 for ( i = 0 ; i < fontlen ; i++ )261259 {
+1-1
drivers/usb/host/xhci-mem.c
···1872187218731873 cancel_delayed_work_sync(&xhci->cmd_timer);1874187418751875- for (i = 0; i < xhci->max_interrupters; i++) {18751875+ for (i = 0; xhci->interrupters && i < xhci->max_interrupters; i++) {18761876 if (xhci->interrupters[i]) {18771877 xhci_remove_interrupter(xhci, xhci->interrupters[i]);18781878 xhci_free_interrupter(xhci, xhci->interrupters[i]);
···137137 if (ret)138138 return ret;139139140140- return err;140140+ return err ?: UCSI_CCI_LENGTH(*cci);141141}142142143143static int ucsi_read_error(struct ucsi *ucsi, u8 connector_num)
+2-1
fs/9p/vfs_addr.c
···75757676 /* if we just extended the file size, any portion not in7777 * cache won't be on server and is zeroes */7878- __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);7878+ if (subreq->rreq->origin != NETFS_DIO_READ)7979+ __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);79808081 netfs_subreq_terminated(subreq, err ?: total, false);8182}
+2-1
fs/afs/file.c
···242242243243 req->error = error;244244 if (subreq) {245245- __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);245245+ if (subreq->rreq->origin != NETFS_DIO_READ)246246+ __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);246247 netfs_subreq_terminated(subreq, error ?: req->actual_len, false);247248 req->subreq = NULL;248249 } else if (req->done) {
···810810 ret = bch2_disk_accounting_mod(trans, &acc_btree_key, &replicas_sectors, 1, gc);811811 if (ret)812812 return ret;813813+ } else {814814+ bool insert = !(flags & BTREE_TRIGGER_overwrite);815815+ struct disk_accounting_pos acc_inum_key = {816816+ .type = BCH_DISK_ACCOUNTING_inum,817817+ .inum.inum = k.k->p.inode,818818+ };819819+ s64 v[3] = {820820+ insert ? 1 : -1,821821+ insert ? k.k->size : -((s64) k.k->size),822822+ replicas_sectors,823823+ };824824+ ret = bch2_disk_accounting_mod(trans, &acc_inum_key, v, ARRAY_SIZE(v), gc);825825+ if (ret)826826+ return ret;813827 }814828815829 if (bch2_bkey_rebalance_opts(k)) {···915901 enum bch_data_type type,916902 unsigned sectors)917903{918918- struct bch_fs *c = trans->c;919904 struct btree_iter iter;920905 int ret = 0;921906···924911 return PTR_ERR(a);925912926913 if (a->v.data_type && type && a->v.data_type != type) {927927- bch2_fsck_err(c, FSCK_CAN_IGNORE|FSCK_NEED_FSCK,914914+ bch2_fsck_err(trans, FSCK_CAN_IGNORE|FSCK_NEED_FSCK,928915 bucket_metadata_type_mismatch,929916 "bucket %llu:%llu gen %u different types of data in same bucket: %s, %s\n"930917 "while marking %s",···10451032static int __bch2_trans_mark_dev_sb(struct btree_trans *trans, struct bch_dev *ca,10461033 enum btree_iter_update_trigger_flags flags)10471034{10481048- struct bch_sb_layout *layout = &ca->disk_sb.sb->layout;10351035+ struct bch_fs *c = trans->c;10361036+10371037+ mutex_lock(&c->sb_lock);10381038+ struct bch_sb_layout layout = ca->disk_sb.sb->layout;10391039+ mutex_unlock(&c->sb_lock);10401040+10491041 u64 bucket = 0;10501042 unsigned i, bucket_sectors = 0;10511043 int ret;1052104410531053- for (i = 0; i < layout->nr_superblocks; i++) {10541054- u64 offset = le64_to_cpu(layout->sb_offset[i]);10451045+ for (i = 0; i < layout.nr_superblocks; i++) {10461046+ u64 offset = le64_to_cpu(layout.sb_offset[i]);1055104710561048 if (offset == BCH_SB_SECTOR) {10571049 ret = bch2_trans_mark_metadata_sectors(trans, ca,···10671049 }1068105010691051 
ret = bch2_trans_mark_metadata_sectors(trans, ca, offset,10701070- offset + (1 << layout->sb_max_size_bits),10521052+ offset + (1 << layout.sb_max_size_bits),10711053 BCH_DATA_sb, &bucket, &bucket_sectors, flags);10721054 if (ret)10731055 return ret;
+9-2
fs/bcachefs/buckets_waiting_for_journal.c
···9393 .dev_bucket = (u64) dev << 56 | bucket,9494 .journal_seq = journal_seq,9595 };9696- size_t i, size, new_bits, nr_elements = 1, nr_rehashes = 0;9696+ size_t i, size, new_bits, nr_elements = 1, nr_rehashes = 0, nr_rehashes_this_size = 0;9797 int ret = 0;98989999 mutex_lock(&b->lock);···106106 for (i = 0; i < size; i++)107107 nr_elements += t->d[i].journal_seq > flushed_seq;108108109109- new_bits = t->bits + (nr_elements * 3 > size);109109+ new_bits = ilog2(roundup_pow_of_two(nr_elements * 3));110110111111 n = kvmalloc(sizeof(*n) + (sizeof(n->d[0]) << new_bits), GFP_KERNEL);112112 if (!n) {···115115 }116116117117retry_rehash:118118+ if (nr_rehashes_this_size == 3) {119119+ new_bits++;120120+ nr_rehashes_this_size = 0;121121+ }122122+118123 nr_rehashes++;124124+ nr_rehashes_this_size++;125125+119126 bucket_table_init(n, new_bits);120127121128 tmp = new;
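The buckets_waiting_for_journal fix above sizes the new hash table from the live element count, `ilog2(roundup_pow_of_two(nr_elements * 3))`, instead of growing the old size by at most one bit, and bumps the size again after three failed rehashes at one width. The sizing rule, condensed (helper name is illustrative):

```c
/*
 * Pick table bits so nr_elements fits at no more than ~1/3 load factor:
 * the table size 2^bits must be >= nr_elements * 3. Equivalent to
 * ilog2(roundup_pow_of_two(nr_elements * 3)) for nr_elements >= 1.
 */
unsigned table_bits_for(unsigned long long nr_elements)
{
	unsigned long long need = nr_elements * 3;
	unsigned bits = 0;

	while ((1ULL << bits) < need)
		bits++;
	return bits;
}
```

Capping rehash attempts per size (three, in the patch) bounds the worst case when an adversarial key distribution keeps colliding at a width the load factor said should suffice.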
+2-4
fs/bcachefs/data_update.c
···250250 * it's been hard to reproduce, so this should give us some more251251 * information when it does occur:252252 */253253- struct printbuf err = PRINTBUF;254254- int invalid = bch2_bkey_invalid(c, bkey_i_to_s_c(insert), __btree_node_type(0, m->btree_id), 0, &err);255255- printbuf_exit(&err);256256-253253+ int invalid = bch2_bkey_validate(c, bkey_i_to_s_c(insert), __btree_node_type(0, m->btree_id),254254+ BCH_VALIDATE_commit);257255 if (invalid) {258256 struct printbuf buf = PRINTBUF;259257
···193193 * only insert fully created inodes in the inode hash table. But194194 * discard_new_inode() expects it to be set...195195 */196196- inode->v.i_flags |= I_NEW;196196+ inode->v.i_state |= I_NEW;197197 /*198198 * We don't want bch2_evict_inode() to delete the inode on disk,199199 * we just raced and had another inode in cache. Normally new
···72727373#ifdef CONFIG_BINFMT_FLAT_NO_DATA_START_OFFSET7474#define DATA_START_OFFSET_WORDS (0)7575+#define MAX_SHARED_LIBS_UPDATE (0)7576#else7677#define DATA_START_OFFSET_WORDS (MAX_SHARED_LIBS)7878+#define MAX_SHARED_LIBS_UPDATE (MAX_SHARED_LIBS)7779#endif78807981struct lib_info {···882880 return res;883881884882 /* Update data segment pointers for all libraries */885885- for (i = 0; i < MAX_SHARED_LIBS; i++) {883883+ for (i = 0; i < MAX_SHARED_LIBS_UPDATE; i++) {886884 if (!libinfo.lib_list[i].loaded)887885 continue;888886 for (j = 0; j < MAX_SHARED_LIBS; j++) {
+67
fs/btrfs/delayed-ref.c
···11341134 return find_ref_head(delayed_refs, bytenr, false);11351135}1136113611371137+static int find_comp(struct btrfs_delayed_ref_node *entry, u64 root, u64 parent)11381138+{11391139+ int type = parent ? BTRFS_SHARED_BLOCK_REF_KEY : BTRFS_TREE_BLOCK_REF_KEY;11401140+11411141+ if (type < entry->type)11421142+ return -1;11431143+ if (type > entry->type)11441144+ return 1;11451145+11461146+ if (type == BTRFS_TREE_BLOCK_REF_KEY) {11471147+ if (root < entry->ref_root)11481148+ return -1;11491149+ if (root > entry->ref_root)11501150+ return 1;11511151+ } else {11521152+ if (parent < entry->parent)11531153+ return -1;11541154+ if (parent > entry->parent)11551155+ return 1;11561156+ }11571157+ return 0;11581158+}11591159+11601160+/*11611161+ * Check to see if a given root/parent reference is attached to the head. This11621162+ * only checks for BTRFS_ADD_DELAYED_REF references that match, as that11631163+ * indicates the reference exists for the given root or parent. This is for11641164+ * tree blocks only.11651165+ *11661166+ * @head: the head of the bytenr we're searching.11671167+ * @root: the root objectid of the reference if it is a normal reference.11681168+ * @parent: the parent if this is a shared backref.11691169+ */11701170+bool btrfs_find_delayed_tree_ref(struct btrfs_delayed_ref_head *head,11711171+ u64 root, u64 parent)11721172+{11731173+ struct rb_node *node;11741174+ bool found = false;11751175+11761176+ lockdep_assert_held(&head->mutex);11771177+11781178+ spin_lock(&head->lock);11791179+ node = head->ref_tree.rb_root.rb_node;11801180+ while (node) {11811181+ struct btrfs_delayed_ref_node *entry;11821182+ int ret;11831183+11841184+ entry = rb_entry(node, struct btrfs_delayed_ref_node, ref_node);11851185+ ret = find_comp(entry, root, parent);11861186+ if (ret < 0) {11871187+ node = node->rb_left;11881188+ } else if (ret > 0) {11891189+ node = node->rb_right;11901190+ } else {11911191+ /*11921192+ * We only want to count ADD actions, as drops mean 
the11931193+ * ref doesn't exist.11941194+ */11951195+ if (entry->action == BTRFS_ADD_DELAYED_REF)11961196+ found = true;11971197+ break;11981198+ }11991199+ }12001200+ spin_unlock(&head->lock);12011201+ return found;12021202+}12031203+11371204void __cold btrfs_delayed_ref_exit(void)11381205{11391206 kmem_cache_destroy(btrfs_delayed_ref_head_cachep);
···54725472 struct btrfs_root *root, u64 bytenr, u64 parent,54735473 int level)54745474{54755475+ struct btrfs_delayed_ref_root *delayed_refs;54765476+ struct btrfs_delayed_ref_head *head;54755477 struct btrfs_path *path;54765478 struct btrfs_extent_inline_ref *iref;54775479 int ret;54805480+ bool exists = false;5478548154795482 path = btrfs_alloc_path();54805483 if (!path)54815484 return -ENOMEM;54825482-54855485+again:54835486 ret = lookup_extent_backref(trans, path, &iref, bytenr,54845487 root->fs_info->nodesize, parent,54855488 btrfs_root_id(root), level, 0);54895489+ if (ret != -ENOENT) {54905490+ /*54915491+ * If we get 0 then we found our reference, return 1, else54925492+ * return the error if it's not -ENOENT;54935493+ */54945494+ btrfs_free_path(path);54955495+ return (ret < 0 ) ? ret : 1;54965496+ }54975497+54985498+ /*54995499+ * We could have a delayed ref with this reference, so look it up while55005500+ * we're holding the path open to make sure we don't race with the55015501+ * delayed ref running.55025502+ */55035503+ delayed_refs = &trans->transaction->delayed_refs;55045504+ spin_lock(&delayed_refs->lock);55055505+ head = btrfs_find_delayed_ref_head(delayed_refs, bytenr);55065506+ if (!head)55075507+ goto out;55085508+ if (!mutex_trylock(&head->mutex)) {55095509+ /*55105510+ * We're contended, means that the delayed ref is running, get a55115511+ * reference and wait for the ref head to be complete and then55125512+ * try again.55135513+ */55145514+ refcount_inc(&head->refs);55155515+ spin_unlock(&delayed_refs->lock);55165516+55175517+ btrfs_release_path(path);55185518+55195519+ mutex_lock(&head->mutex);55205520+ mutex_unlock(&head->mutex);55215521+ btrfs_put_delayed_ref_head(head);55225522+ goto again;55235523+ }55245524+55255525+ exists = btrfs_find_delayed_tree_ref(head, root->root_key.objectid, parent);55265526+ mutex_unlock(&head->mutex);55275527+out:55285528+ spin_unlock(&delayed_refs->lock);54865529 btrfs_free_path(path);54875487- if (ret 
== -ENOENT)54885488- return 0;54895489- if (ret < 0)54905490- return ret;54915491- return 1;55305530+ return exists ? 1 : 0;54925531}5493553254945533/*
+7-7
fs/btrfs/extent_io.c
···14961496 free_extent_map(em);14971497 em = NULL;1498149814991499+ /*15001500+ * Although the PageDirty bit might be cleared before entering15011501+ * this function, subpage dirty bit is not cleared.15021502+ * So clear subpage dirty bit here so next time we won't submit15031503+ * page for range already written to disk.15041504+ */15051505+ btrfs_folio_clear_dirty(fs_info, page_folio(page), cur, iosize);14991506 btrfs_set_range_writeback(inode, cur, cur + iosize - 1);15001507 if (!PageWriteback(page)) {15011508 btrfs_err(inode->root->fs_info,···15101503 page->index, cur, end);15111504 }1512150515131513- /*15141514- * Although the PageDirty bit is cleared before entering this15151515- * function, subpage dirty bit is not cleared.15161516- * So clear subpage dirty bit here so next time we won't submit15171517- * page for range already written to disk.15181518- */15191519- btrfs_folio_clear_dirty(fs_info, page_folio(page), cur, iosize);1520150615211507 submit_extent_page(bio_ctrl, disk_bytenr, page, iosize,15221508 cur - page_offset(page));
+6-16
fs/btrfs/extent_map.c
···11471147 return 0;1148114811491149 /*11501150- * We want to be fast because we can be called from any path trying to11511151- * allocate memory, so if the lock is busy we don't want to spend time11501150+ * We want to be fast so if the lock is busy we don't want to spend time11521151 * waiting for it - either some task is about to do IO for the inode or11531152 * we may have another task shrinking extent maps, here in this code, so11541153 * skip this inode.···11901191 /*11911192 * Stop if we need to reschedule or there's contention on the11921193 * lock. This is to avoid slowing other tasks trying to take the11931193- * lock and because the shrinker might be called during a memory11941194- * allocation path and we want to avoid taking a very long time11951195- * and slowing down all sorts of tasks.11941194+ * lock.11961195 */11971196 if (need_resched() || rwlock_needbreak(&tree->lock))11981197 break;···12191222 if (ctx->scanned >= ctx->nr_to_scan)12201223 break;1221122412221222- /*12231223- * We may be called from memory allocation paths, so we don't12241224- * want to take too much time and slowdown tasks.12251225- */12261226- if (need_resched())12271227- break;12251225+ cond_resched();1228122612291227 inode = btrfs_find_first_inode(root, min_ino);12301228 }···12771285 ctx.last_ino);12781286 }1279128712801280- /*12811281- * We may be called from memory allocation paths, so we don't want to12821282- * take too much time and slowdown tasks, so stop if we need reschedule.12831283- */12841284- while (ctx.scanned < ctx.nr_to_scan && !need_resched()) {12881288+ while (ctx.scanned < ctx.nr_to_scan) {12851289 struct btrfs_root *root;12861290 unsigned long count;12911291+12921292+ cond_resched();1287129312881294 spin_lock(&fs_info->fs_roots_radix_lock);12891295 count = radix_tree_gang_lookup(&fs_info->fs_roots_radix,
···347347 int ret;348348 int need_later_update;349349 int name_len;350350- char name[];350350+ char name[] __counted_by(name_len);351351};352352353353/* See the comment at lru_cache.h about struct btrfs_lru_cache_entry. */···61576157 u64 offset = key->offset;61586158 u64 end;61596159 u64 bs = sctx->send_root->fs_info->sectorsize;61606160+ struct btrfs_file_extent_item *ei;61616161+ u64 disk_byte;61626162+ u64 data_offset;61636163+ u64 num_bytes;61646164+ struct btrfs_inode_info info = { 0 };6160616561616166 end = min_t(u64, btrfs_file_extent_end(path), sctx->cur_inode_size);61626167 if (offset >= end)61636168 return 0;6164616961656165- if (clone_root && IS_ALIGNED(end, bs)) {61666166- struct btrfs_file_extent_item *ei;61676167- u64 disk_byte;61686168- u64 data_offset;61706170+ num_bytes = end - offset;6169617161706170- ei = btrfs_item_ptr(path->nodes[0], path->slots[0],61716171- struct btrfs_file_extent_item);61726172- disk_byte = btrfs_file_extent_disk_bytenr(path->nodes[0], ei);61736173- data_offset = btrfs_file_extent_offset(path->nodes[0], ei);61746174- ret = clone_range(sctx, path, clone_root, disk_byte,61756175- data_offset, offset, end - offset);61766176- } else {61776177- ret = send_extent_data(sctx, path, offset, end - offset);61786178- }61726172+ if (!clone_root)61736173+ goto write_data;61746174+61756175+ if (IS_ALIGNED(end, bs))61766176+ goto clone_data;61776177+61786178+ /*61796179+ * If the extent end is not aligned, we can clone if the extent ends at61806180+ * the i_size of the inode and the clone range ends at the i_size of the61816181+ * source inode, otherwise the clone operation fails with -EINVAL.61826182+ */61836183+ if (end != sctx->cur_inode_size)61846184+ goto write_data;61856185+61866186+ ret = get_inode_info(clone_root->root, clone_root->ino, &info);61876187+ if (ret < 0)61886188+ return ret;61896189+61906190+ if (clone_root->offset + num_bytes == info.size)61916191+ goto clone_data;61926192+61936193+write_data:61946194+ ret = 
send_extent_data(sctx, path, offset, num_bytes);61956195+ sctx->cur_inode_next_write_offset = end;61966196+ return ret;61976197+61986198+clone_data:61996199+ ei = btrfs_item_ptr(path->nodes[0], path->slots[0],62006200+ struct btrfs_file_extent_item);62016201+ disk_byte = btrfs_file_extent_disk_bytenr(path->nodes[0], ei);62026202+ data_offset = btrfs_file_extent_offset(path->nodes[0], ei);62036203+ ret = clone_range(sctx, path, clone_root, disk_byte, data_offset, offset,62046204+ num_bytes);61796205 sctx->cur_inode_next_write_offset = end;61806206 return ret;61816207}
+17-1
fs/btrfs/super.c
···2828#include <linux/btrfs.h>2929#include <linux/security.h>3030#include <linux/fs_parser.h>3131+#include <linux/swap.h>3132#include "messages.h"3233#include "delayed-inode.h"3334#include "ctree.h"···2402240124032402 trace_btrfs_extent_map_shrinker_count(fs_info, nr);2404240324052405- return nr;24042404+ /*24052405+ * Only report the real number for DEBUG builds, as there are reports of24062406+ * serious performance degradation caused by too frequent shrinks.24072407+ */24082408+ if (IS_ENABLED(CONFIG_BTRFS_DEBUG))24092409+ return nr;24102410+ return 0;24062411}2407241224082413static long btrfs_free_cached_objects(struct super_block *sb, struct shrink_control *sc)24092414{24102415 const long nr_to_scan = min_t(unsigned long, LONG_MAX, sc->nr_to_scan);24112416 struct btrfs_fs_info *fs_info = btrfs_sb(sb);24172417+24182418+ /*24192419+ * We may be called from any task trying to allocate memory and we don't24202420+ * want to slow it down with scanning and dropping extent maps. It would24212421+ * also cause heavy lock contention if many tasks concurrently enter24222422+ * here. Therefore only allow kswapd tasks to scan and drop extent maps.24232423+ */24242424+ if (!current_is_kswapd())24252425+ return 0;2412242624132427 return btrfs_free_extent_maps(fs_info, nr_to_scan);24142428}
+72-2
fs/btrfs/tree-checker.c
···569569570570 /* dir type check */571571 dir_type = btrfs_dir_ftype(leaf, di);572572- if (unlikely(dir_type >= BTRFS_FT_MAX)) {572572+ if (unlikely(dir_type <= BTRFS_FT_UNKNOWN ||573573+ dir_type >= BTRFS_FT_MAX)) {573574 dir_item_err(leaf, slot,574574- "invalid dir item type, have %u expect [0, %u)",575575+ "invalid dir item type, have %u expect (0, %u)",575576 dir_type, BTRFS_FT_MAX);576577 return -EUCLEAN;577578 }···17641763 return 0;17651764}1766176517661766+static int check_dev_extent_item(const struct extent_buffer *leaf,17671767+ const struct btrfs_key *key,17681768+ int slot,17691769+ struct btrfs_key *prev_key)17701770+{17711771+ struct btrfs_dev_extent *de;17721772+ const u32 sectorsize = leaf->fs_info->sectorsize;17731773+17741774+ de = btrfs_item_ptr(leaf, slot, struct btrfs_dev_extent);17751775+ /* Basic fixed member checks. */17761776+ if (unlikely(btrfs_dev_extent_chunk_tree(leaf, de) !=17771777+ BTRFS_CHUNK_TREE_OBJECTID)) {17781778+ generic_err(leaf, slot,17791779+ "invalid dev extent chunk tree id, has %llu expect %llu",17801780+ btrfs_dev_extent_chunk_tree(leaf, de),17811781+ BTRFS_CHUNK_TREE_OBJECTID);17821782+ return -EUCLEAN;17831783+ }17841784+ if (unlikely(btrfs_dev_extent_chunk_objectid(leaf, de) !=17851785+ BTRFS_FIRST_CHUNK_TREE_OBJECTID)) {17861786+ generic_err(leaf, slot,17871787+ "invalid dev extent chunk objectid, has %llu expect %llu",17881788+ btrfs_dev_extent_chunk_objectid(leaf, de),17891789+ BTRFS_FIRST_CHUNK_TREE_OBJECTID);17901790+ return -EUCLEAN;17911791+ }17921792+ /* Alignment check. 
*/17931793+ if (unlikely(!IS_ALIGNED(key->offset, sectorsize))) {17941794+ generic_err(leaf, slot,17951795+ "invalid dev extent key.offset, has %llu not aligned to %u",17961796+ key->offset, sectorsize);17971797+ return -EUCLEAN;17981798+ }17991799+ if (unlikely(!IS_ALIGNED(btrfs_dev_extent_chunk_offset(leaf, de),18001800+ sectorsize))) {18011801+ generic_err(leaf, slot,18021802+ "invalid dev extent chunk offset, has %llu not aligned to %u",18031803+ btrfs_dev_extent_chunk_offset(leaf, de),18041804+ sectorsize);18051805+ return -EUCLEAN;18061806+ }18071807+ if (unlikely(!IS_ALIGNED(btrfs_dev_extent_length(leaf, de),18081808+ sectorsize))) {18091809+ generic_err(leaf, slot,18101810+ "invalid dev extent length, has %llu not aligned to %u",18111811+ btrfs_dev_extent_length(leaf, de), sectorsize);18121812+ return -EUCLEAN;18131813+ }18141814+ /* Overlap check with previous dev extent. */18151815+ if (slot && prev_key->objectid == key->objectid &&18161816+ prev_key->type == key->type) {18171817+ struct btrfs_dev_extent *prev_de;18181818+ u64 prev_len;18191819+18201820+ prev_de = btrfs_item_ptr(leaf, slot - 1, struct btrfs_dev_extent);18211821+ prev_len = btrfs_dev_extent_length(leaf, prev_de);18221822+ if (unlikely(prev_key->offset + prev_len > key->offset)) {18231823+ generic_err(leaf, slot,18241824+ "dev extent overlap, prev offset %llu len %llu current offset %llu",18251825+ prev_key->offset, prev_len, key->offset);18261826+ return -EUCLEAN;18271827+ }18281828+ }18291829+ return 0;18301830+}18311831+17671832/*17681833 * Common point to switch the item-specific validation.17691834 */···18651798 break;18661799 case BTRFS_DEV_ITEM_KEY:18671800 ret = check_dev_item(leaf, key, slot);18011801+ break;18021802+ case BTRFS_DEV_EXTENT_KEY:18031803+ ret = check_dev_extent_item(leaf, key, slot, prev_key);18681804 break;18691805 case BTRFS_INODE_ITEM_KEY:18701806 ret = check_inode_item(leaf, key, slot);
+25-3
fs/ceph/addr.c
···246246 if (err >= 0) {247247 if (sparse && err > 0)248248 err = ceph_sparse_ext_map_end(op);249249- if (err < subreq->len)249249+ if (err < subreq->len &&250250+ subreq->rreq->origin != NETFS_DIO_READ)250251 __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);251252 if (IS_ENCRYPTED(inode) && err > 0) {252253 err = ceph_fscrypt_decrypt_extents(inode,···283282 size_t len;284283 int mode;285284286286- __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);285285+ if (rreq->origin != NETFS_DIO_READ)286286+ __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);287287 __clear_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);288288289289 if (subreq->start >= inode->i_size)···426424 struct ceph_netfs_request_data *priv;427425 int ret = 0;428426427427+ /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */428428+ __set_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags);429429+429430 if (rreq->origin != NETFS_READAHEAD)430431 return 0;431432···503498};504499505500#ifdef CONFIG_CEPH_FSCACHE501501+static void ceph_set_page_fscache(struct page *page)502502+{503503+ folio_start_private_2(page_folio(page)); /* [DEPRECATED] */504504+}505505+506506static void ceph_fscache_write_terminated(void *priv, ssize_t error, bool was_async)507507{508508 struct inode *inode = priv;···525515 ceph_fscache_write_terminated, inode, true, caching);526516}527517#else518518+static inline void ceph_set_page_fscache(struct page *page)519519+{520520+}521521+528522static inline void ceph_fscache_write_to_cache(struct inode *inode, u64 off, u64 len, bool caching)529523{530524}···720706 len = wlen;721707722708 set_page_writeback(page);709709+ if (caching)710710+ ceph_set_page_fscache(page);723711 ceph_fscache_write_to_cache(inode, page_off, len, caching);724712725713 if (IS_ENCRYPTED(inode)) {···804788 redirty_page_for_writepage(wbc, page);805789 return AOP_WRITEPAGE_ACTIVATE;806790 }791791+792792+ folio_wait_private_2(page_folio(page)); /* [DEPRECATED] */807793808794 err = writepage_nounlock(page, 
wbc);809795 if (err == -ERESTARTSYS) {···10801062 unlock_page(page);10811063 break;10821064 }10831083- if (PageWriteback(page)) {10651065+ if (PageWriteback(page) ||10661066+ PagePrivate2(page) /* [DEPRECATED] */) {10841067 if (wbc->sync_mode == WB_SYNC_NONE) {10851068 doutc(cl, "%p under writeback\n", page);10861069 unlock_page(page);···10891070 }10901071 doutc(cl, "waiting on writeback %p\n", page);10911072 wait_on_page_writeback(page);10731073+ folio_wait_private_2(page_folio(page)); /* [DEPRECATED] */10921074 }1093107510941076 if (!clear_page_dirty_for_io(page)) {···12741254 }1275125512761256 set_page_writeback(page);12571257+ if (caching)12581258+ ceph_set_page_fscache(page);12771259 len += thp_size(page);12781260 }12791261 ceph_fscache_write_to_cache(inode, offset, len, caching);
-2
fs/ceph/inode.c
···577577578578 /* Set parameters for the netfs library */579579 netfs_inode_init(&ci->netfs, &ceph_netfs_ops, false);580580- /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */581581- __set_bit(NETFS_ICTX_USE_PGPRIV2, &ci->netfs.flags);582580583581 spin_lock_init(&ci->i_ceph_lock);584582
+7-1
fs/exec.c
···16921692 unsigned int mode;16931693 vfsuid_t vfsuid;16941694 vfsgid_t vfsgid;16951695+ int err;1695169616961697 if (!mnt_may_suid(file->f_path.mnt))16971698 return;···17091708 /* Be careful if suid/sgid is set */17101709 inode_lock(inode);1711171017121712- /* reload atomically mode/uid/gid now that lock held */17111711+ /* Atomically reload and check mode/uid/gid now that lock held. */17131712 mode = inode->i_mode;17141713 vfsuid = i_uid_into_vfsuid(idmap, inode);17151714 vfsgid = i_gid_into_vfsgid(idmap, inode);17151715+ err = inode_permission(idmap, inode, MAY_EXEC);17161716 inode_unlock(inode);17171717+17181718+ /* Did the exec bit vanish out from under us? Give up. */17191719+ if (err)17201720+ return;1717172117181722 /* We ignore suid/sgid if there are no mappings for them in the ns */17191723 if (!vfsuid_has_mapping(bprm->cred->user_ns, vfsuid) ||
+12-16
fs/file.c
···4646#define BITBIT_NR(nr) BITS_TO_LONGS(BITS_TO_LONGS(nr))4747#define BITBIT_SIZE(nr) (BITBIT_NR(nr) * sizeof(long))48484949+#define fdt_words(fdt) ((fdt)->max_fds / BITS_PER_LONG) // words in ->open_fds4950/*5051 * Copy 'count' fd bits from the old table to the new table and clear the extra5152 * space if any. This does not copy the file pointers. Called with the files5253 * spinlock held for write.5354 */5454-static void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt,5555- unsigned int count)5555+static inline void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt,5656+ unsigned int copy_words)5657{5757- unsigned int cpy, set;5858+ unsigned int nwords = fdt_words(nfdt);58595959- cpy = count / BITS_PER_BYTE;6060- set = (nfdt->max_fds - count) / BITS_PER_BYTE;6161- memcpy(nfdt->open_fds, ofdt->open_fds, cpy);6262- memset((char *)nfdt->open_fds + cpy, 0, set);6363- memcpy(nfdt->close_on_exec, ofdt->close_on_exec, cpy);6464- memset((char *)nfdt->close_on_exec + cpy, 0, set);6565-6666- cpy = BITBIT_SIZE(count);6767- set = BITBIT_SIZE(nfdt->max_fds) - cpy;6868- memcpy(nfdt->full_fds_bits, ofdt->full_fds_bits, cpy);6969- memset((char *)nfdt->full_fds_bits + cpy, 0, set);6060+ bitmap_copy_and_extend(nfdt->open_fds, ofdt->open_fds,6161+ copy_words * BITS_PER_LONG, nwords * BITS_PER_LONG);6262+ bitmap_copy_and_extend(nfdt->close_on_exec, ofdt->close_on_exec,6363+ copy_words * BITS_PER_LONG, nwords * BITS_PER_LONG);6464+ bitmap_copy_and_extend(nfdt->full_fds_bits, ofdt->full_fds_bits,6565+ copy_words, nwords);7066}71677268/*···8084 memcpy(nfdt->fd, ofdt->fd, cpy);8185 memset((char *)nfdt->fd + cpy, 0, set);82868383- copy_fd_bitmaps(nfdt, ofdt, ofdt->max_fds);8787+ copy_fd_bitmaps(nfdt, ofdt, fdt_words(ofdt));8488}85898690/*···375379 open_files = sane_fdtable_size(old_fdt, max_fds);376380 }377381378378- copy_fd_bitmaps(new_fdt, old_fdt, open_files);382382+ copy_fd_bitmaps(new_fdt, old_fdt, open_files / BITS_PER_LONG);379383380384 old_fds = 
old_fdt->fd;381385 new_fds = new_fdt->fd;
···24242525config NETFS_DEBUG2626 bool "Enable dynamic debugging netfslib and FS-Cache"2727- depends on NETFS2727+ depends on NETFS_SUPPORT2828 help2929 This permits debugging to be dynamically enabled in the local caching3030 management module. If this is set, the debugging output may be
+109-14
fs/netfs/buffered_read.c
···1010#include "internal.h"11111212/*1313+ * [DEPRECATED] Unlock the folios in a read operation for when the filesystem1414+ * is using PG_private_2 and direct writing to the cache from here rather than1515+ * marking the page for writeback.1616+ *1717+ * Note that we don't touch folio->private in this code.1818+ */1919+static void netfs_rreq_unlock_folios_pgpriv2(struct netfs_io_request *rreq,2020+ size_t *account)2121+{2222+ struct netfs_io_subrequest *subreq;2323+ struct folio *folio;2424+ pgoff_t start_page = rreq->start / PAGE_SIZE;2525+ pgoff_t last_page = ((rreq->start + rreq->len) / PAGE_SIZE) - 1;2626+ bool subreq_failed = false;2727+2828+ XA_STATE(xas, &rreq->mapping->i_pages, start_page);2929+3030+ /* Walk through the pagecache and the I/O request lists simultaneously.3131+ * We may have a mixture of cached and uncached sections and we only3232+ * really want to write out the uncached sections. This is slightly3333+ * complicated by the possibility that we might have huge pages with a3434+ * mixture inside.3535+ */3636+ subreq = list_first_entry(&rreq->subrequests,3737+ struct netfs_io_subrequest, rreq_link);3838+ subreq_failed = (subreq->error < 0);3939+4040+ trace_netfs_rreq(rreq, netfs_rreq_trace_unlock_pgpriv2);4141+4242+ rcu_read_lock();4343+ xas_for_each(&xas, folio, last_page) {4444+ loff_t pg_end;4545+ bool pg_failed = false;4646+ bool folio_started = false;4747+4848+ if (xas_retry(&xas, folio))4949+ continue;5050+5151+ pg_end = folio_pos(folio) + folio_size(folio) - 1;5252+5353+ for (;;) {5454+ loff_t sreq_end;5555+5656+ if (!subreq) {5757+ pg_failed = true;5858+ break;5959+ }6060+6161+ if (!folio_started &&6262+ test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags) &&6363+ fscache_operation_valid(&rreq->cache_resources)) {6464+ trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache);6565+ folio_start_private_2(folio);6666+ folio_started = true;6767+ }6868+6969+ pg_failed |= subreq_failed;7070+ sreq_end = subreq->start + subreq->len - 1;7171+ if 
(pg_end < sreq_end)7272+ break;7373+7474+ *account += subreq->transferred;7575+ if (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) {7676+ subreq = list_next_entry(subreq, rreq_link);7777+ subreq_failed = (subreq->error < 0);7878+ } else {7979+ subreq = NULL;8080+ subreq_failed = false;8181+ }8282+8383+ if (pg_end == sreq_end)8484+ break;8585+ }8686+8787+ if (!pg_failed) {8888+ flush_dcache_folio(folio);8989+ folio_mark_uptodate(folio);9090+ }9191+9292+ if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) {9393+ if (folio->index == rreq->no_unlock_folio &&9494+ test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags))9595+ _debug("no unlock");9696+ else9797+ folio_unlock(folio);9898+ }9999+ }100100+ rcu_read_unlock();101101+}102102+103103+/*13104 * Unlock the folios in a read operation. We need to set PG_writeback on any14105 * folios we're going to write back before we unlock them.15106 *···12635 }12736 }128373838+ /* Handle deprecated PG_private_2 case. */3939+ if (test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) {4040+ netfs_rreq_unlock_folios_pgpriv2(rreq, &account);4141+ goto out;4242+ }4343+12944 /* Walk through the pagecache and the I/O request lists simultaneously.13045 * We may have a mixture of cached and uncached sections and we only13146 * really want to write out the uncached sections. 
This is slightly···14952 loff_t pg_end;15053 bool pg_failed = false;15154 bool wback_to_cache = false;152152- bool folio_started = false;1535515456 if (xas_retry(&xas, folio))15557 continue;···16266 pg_failed = true;16367 break;16468 }165165- if (test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags)) {166166- if (!folio_started && test_bit(NETFS_SREQ_COPY_TO_CACHE,167167- &subreq->flags)) {168168- trace_netfs_folio(folio, netfs_folio_trace_copy_to_cache);169169- folio_start_private_2(folio);170170- folio_started = true;171171- }172172- } else {173173- wback_to_cache |=174174- test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);175175- }6969+7070+ wback_to_cache |= test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags);17671 pg_failed |= subreq_failed;17772 sreq_end = subreq->start + subreq->len - 1;17873 if (pg_end < sreq_end)···211124 }212125 rcu_read_unlock();213126127127+out:214128 task_io_account_read(account);215129 if (rreq->netfs_ops->done)216130 rreq->netfs_ops->done(rreq);···483395}484396485397/**486486- * netfs_write_begin - Helper to prepare for writing398398+ * netfs_write_begin - Helper to prepare for writing [DEPRECATED]487399 * @ctx: The netfs context488400 * @file: The file to read from489401 * @mapping: The mapping to read from···514426 * inode before calling this.515427 *516428 * This is usable whether or not caching is enabled.429429+ *430430+ * Note that this should be considered deprecated and netfs_perform_write()431431+ * used instead.517432 */518433int netfs_write_begin(struct netfs_inode *ctx,519434 struct file *file, struct address_space *mapping,···557466 if (!netfs_is_cache_enabled(ctx) &&558467 netfs_skip_folio_read(folio, pos, len, false)) {559468 netfs_stat(&netfs_n_rh_write_zskip);560560- goto have_folio;469469+ goto have_folio_no_wait;561470 }562471563472 rreq = netfs_alloc_request(mapping, file,···598507 netfs_put_request(rreq, false, netfs_rreq_trace_put_return);599508600509have_folio:510510+ ret = folio_wait_private_2_killable(folio);511511+ 
if (ret < 0)512512+ goto error;513513+have_folio_no_wait:601514 *_folio = folio;602515 _leave(" = 0");603516 return 0;
···741741 spin_lock(&cookie->lock);742742 }743743 if (test_bit(FSCACHE_COOKIE_DO_LRU_DISCARD, &cookie->flags)) {744744+ if (atomic_read(&cookie->n_accesses) != 0)745745+ /* still being accessed: postpone it */746746+ break;747747+744748 __fscache_set_cookie_state(cookie,745749 FSCACHE_COOKIE_STATE_LRU_DISCARDING);746750 wake = true;
+155-6
fs/netfs/io.c
···9999}100100101101/*102102+ * [DEPRECATED] Deal with the completion of writing the data to the cache. We103103+ * have to clear the PG_fscache bits on the folios involved and release the104104+ * caller's ref.105105+ *106106+ * May be called in softirq mode and we inherit a ref from the caller.107107+ */108108+static void netfs_rreq_unmark_after_write(struct netfs_io_request *rreq,109109+ bool was_async)110110+{111111+ struct netfs_io_subrequest *subreq;112112+ struct folio *folio;113113+ pgoff_t unlocked = 0;114114+ bool have_unlocked = false;115115+116116+ rcu_read_lock();117117+118118+ list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {119119+ XA_STATE(xas, &rreq->mapping->i_pages, subreq->start / PAGE_SIZE);120120+121121+ xas_for_each(&xas, folio, (subreq->start + subreq->len - 1) / PAGE_SIZE) {122122+ if (xas_retry(&xas, folio))123123+ continue;124124+125125+ /* We might have multiple writes from the same huge126126+ * folio, but we mustn't unlock a folio more than once.127127+ */128128+ if (have_unlocked && folio->index <= unlocked)129129+ continue;130130+ unlocked = folio_next_index(folio) - 1;131131+ trace_netfs_folio(folio, netfs_folio_trace_end_copy);132132+ folio_end_private_2(folio);133133+ have_unlocked = true;134134+ }135135+ }136136+137137+ rcu_read_unlock();138138+ netfs_rreq_completed(rreq, was_async);139139+}140140+141141+static void netfs_rreq_copy_terminated(void *priv, ssize_t transferred_or_error,142142+ bool was_async) /* [DEPRECATED] */143143+{144144+ struct netfs_io_subrequest *subreq = priv;145145+ struct netfs_io_request *rreq = subreq->rreq;146146+147147+ if (IS_ERR_VALUE(transferred_or_error)) {148148+ netfs_stat(&netfs_n_rh_write_failed);149149+ trace_netfs_failure(rreq, subreq, transferred_or_error,150150+ netfs_fail_copy_to_cache);151151+ } else {152152+ netfs_stat(&netfs_n_rh_write_done);153153+ }154154+155155+ trace_netfs_sreq(subreq, netfs_sreq_trace_write_term);156156+157157+ /* If we decrement nr_copy_ops to 0, the 
ref belongs to us. */158158+ if (atomic_dec_and_test(&rreq->nr_copy_ops))159159+ netfs_rreq_unmark_after_write(rreq, was_async);160160+161161+ netfs_put_subrequest(subreq, was_async, netfs_sreq_trace_put_terminated);162162+}163163+164164+/*165165+ * [DEPRECATED] Perform any outstanding writes to the cache. We inherit a ref166166+ * from the caller.167167+ */168168+static void netfs_rreq_do_write_to_cache(struct netfs_io_request *rreq)169169+{170170+ struct netfs_cache_resources *cres = &rreq->cache_resources;171171+ struct netfs_io_subrequest *subreq, *next, *p;172172+ struct iov_iter iter;173173+ int ret;174174+175175+ trace_netfs_rreq(rreq, netfs_rreq_trace_copy);176176+177177+ /* We don't want terminating writes trying to wake us up whilst we're178178+ * still going through the list.179179+ */180180+ atomic_inc(&rreq->nr_copy_ops);181181+182182+ list_for_each_entry_safe(subreq, p, &rreq->subrequests, rreq_link) {183183+ if (!test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags)) {184184+ list_del_init(&subreq->rreq_link);185185+ netfs_put_subrequest(subreq, false,186186+ netfs_sreq_trace_put_no_copy);187187+ }188188+ }189189+190190+ list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {191191+ /* Amalgamate adjacent writes */192192+ while (!list_is_last(&subreq->rreq_link, &rreq->subrequests)) {193193+ next = list_next_entry(subreq, rreq_link);194194+ if (next->start != subreq->start + subreq->len)195195+ break;196196+ subreq->len += next->len;197197+ list_del_init(&next->rreq_link);198198+ netfs_put_subrequest(next, false,199199+ netfs_sreq_trace_put_merged);200200+ }201201+202202+ ret = cres->ops->prepare_write(cres, &subreq->start, &subreq->len,203203+ subreq->len, rreq->i_size, true);204204+ if (ret < 0) {205205+ trace_netfs_failure(rreq, subreq, ret, netfs_fail_prepare_write);206206+ trace_netfs_sreq(subreq, netfs_sreq_trace_write_skip);207207+ continue;208208+ }209209+210210+ iov_iter_xarray(&iter, ITER_SOURCE, &rreq->mapping->i_pages,211211+ 
+						  subreq->start, subreq->len);
+
+		atomic_inc(&rreq->nr_copy_ops);
+		netfs_stat(&netfs_n_rh_write);
+		netfs_get_subrequest(subreq, netfs_sreq_trace_get_copy_to_cache);
+		trace_netfs_sreq(subreq, netfs_sreq_trace_write);
+		cres->ops->write(cres, subreq->start, &iter,
+				 netfs_rreq_copy_terminated, subreq);
+	}
+
+	/* If we decrement nr_copy_ops to 0, the usage ref belongs to us. */
+	if (atomic_dec_and_test(&rreq->nr_copy_ops))
+		netfs_rreq_unmark_after_write(rreq, false);
+}
+
+static void netfs_rreq_write_to_cache_work(struct work_struct *work) /* [DEPRECATED] */
+{
+	struct netfs_io_request *rreq =
+		container_of(work, struct netfs_io_request, work);
+
+	netfs_rreq_do_write_to_cache(rreq);
+}
+
+static void netfs_rreq_write_to_cache(struct netfs_io_request *rreq) /* [DEPRECATED] */
+{
+	rreq->work.func = netfs_rreq_write_to_cache_work;
+	if (!queue_work(system_unbound_wq, &rreq->work))
+		BUG();
+}
+
+/*
  * Handle a short read.
  */
 static void netfs_rreq_short_read(struct netfs_io_request *rreq,
···
 	clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
 	wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS);
 
+	if (test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags) &&
+	    test_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags))
+		return netfs_rreq_write_to_cache(rreq);
+
 	netfs_rreq_completed(rreq, was_async);
 }
···
 	if (transferred_or_error == 0) {
 		if (__test_and_set_bit(NETFS_SREQ_NO_PROGRESS, &subreq->flags)) {
-			subreq->error = -ENODATA;
+			if (rreq->origin != NETFS_DIO_READ)
+				subreq->error = -ENODATA;
 			goto failed;
 		}
 	} else {
···
 		}
 		if (subreq->len > ictx->zero_point - subreq->start)
 			subreq->len = ictx->zero_point - subreq->start;
+
+		/* We limit buffered reads to the EOF, but let the
+		 * server deal with larger-than-EOF DIO/unbuffered
+		 * reads.
+		 */
+		if (subreq->len > rreq->i_size - subreq->start)
+			subreq->len = rreq->i_size - subreq->start;
 	}
-	if (subreq->len > rreq->i_size - subreq->start)
-		subreq->len = rreq->i_size - subreq->start;
 	if (rreq->rsize && subreq->len > rreq->rsize)
 		subreq->len = rreq->rsize;
···
 	do {
 		_debug("submit %llx + %llx >= %llx",
 		       rreq->start, rreq->submitted, rreq->i_size);
-		if (rreq->origin == NETFS_DIO_READ &&
-		    rreq->start + rreq->submitted >= rreq->i_size)
-			break;
 		if (!netfs_rreq_submit_slice(rreq, &io_iter))
+			break;
+		if (test_bit(NETFS_SREQ_NO_PROGRESS, &rreq->flags))
 			break;
 		if (test_bit(NETFS_RREQ_BLOCKED, &rreq->flags) &&
 		    test_bit(NETFS_RREQ_NONBLOCK, &rreq->flags))
···
 {
 	rreq->netfs_priv = get_nfs_open_context(nfs_file_open_context(file));
 	rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id);
+	/* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */
+	__set_bit(NETFS_RREQ_USE_PGPRIV2, &rreq->flags);
 
 	return 0;
 }
···
 		return;
 
 	sreq = netfs->sreq;
-	if (test_bit(NFS_IOHDR_EOF, &hdr->flags))
+	if (test_bit(NFS_IOHDR_EOF, &hdr->flags) &&
+	    sreq->rreq->origin != NETFS_DIO_READ)
 		__set_bit(NETFS_SREQ_CLEAR_TAIL, &sreq->flags);
 
 	if (hdr->error)
-2
fs/nfs/fscache.h
···
 static inline void nfs_netfs_inode_init(struct nfs_inode *nfsi)
 {
 	netfs_inode_init(&nfsi->netfs, &nfs_netfs_ops, false);
-	/* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */
-	__set_bit(NETFS_ICTX_USE_PGPRIV2, &nfsi->netfs.flags);
 }
 extern void nfs_netfs_initiate_read(struct nfs_pgio_header *hdr);
 extern void nfs_netfs_read_completion(struct nfs_pgio_header *hdr);
···
 		}
 		break;
 	case XFS_ATTR_FORK:
-		if (!xfs_has_attr(mp) && !xfs_has_attr2(mp))
+		/*
+		 * "attr" means that an attr fork was created at some point in
+		 * the life of this filesystem. "attr2" means that inodes have
+		 * variable-sized data/attr fork areas. Hence we only check
+		 * attr here.
+		 */
+		if (!xfs_has_attr(mp))
 			xchk_ino_set_corrupt(sc, sc->ip->i_ino);
 		break;
 	default:
+11
fs/xfs/xfs_ioctl.c
···
 		/* Can't change realtime flag if any extents are allocated. */
 		if (ip->i_df.if_nextents || ip->i_delayed_blks)
 			return -EINVAL;
+
+		/*
+		 * If S_DAX is enabled on this file, we can only switch the
+		 * device if both support fsdax. We can't update S_DAX because
+		 * there might be other threads walking down the access paths.
+		 */
+		if (IS_DAX(VFS_I(ip)) &&
+		    (mp->m_ddev_targp->bt_daxdev == NULL ||
+		     (mp->m_rtdev_targp &&
+		      mp->m_rtdev_targp->bt_daxdev == NULL)))
+			return -EINVAL;
 	}
 
 	if (rtflag) {
+6-1
fs/xfs/xfs_trans_ail.c
···
 	set_freezable();
 
 	while (1) {
-		if (tout)
+		/*
+		 * Long waits of 50ms or more occur when we've run out of items
+		 * to push, so we only want uninterruptible state if we're
+		 * actually blocked on something.
+		 */
+		if (tout && tout <= 20)
 			set_current_state(TASK_KILLABLE|TASK_FREEZABLE);
 		else
 			set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
···
 	dst[nbits / BITS_PER_LONG] &= BITMAP_LAST_WORD_MASK(nbits);
 }
 
+static inline void bitmap_copy_and_extend(unsigned long *to,
+					  const unsigned long *from,
+					  unsigned int count, unsigned int size)
+{
+	unsigned int copy = BITS_TO_LONGS(count);
+
+	memcpy(to, from, copy * sizeof(long));
+	if (count % BITS_PER_LONG)
+		to[copy - 1] &= BITMAP_LAST_WORD_MASK(count);
+	memset(to + copy, 0, bitmap_size(size) - copy * sizeof(long));
+}
+
 /*
  * On 32-bit systems bitmaps are represented as u32 arrays internally. On LE64
  * machines the order of hi and lo parts of numbers match the bitmap structure.
···
  *
  * I_PINNING_FSCACHE_WB	Inode is pinning an fscache object for writeback.
  *
+ * I_LRU_ISOLATING	Inode is pinned being isolated from LRU without holding
+ *			i_count.
+ *
  * Q: What is the difference between I_WILL_FREE and I_FREEING?
  */
 #define I_DIRTY_SYNC		(1 << 0)
···
 #define I_DONTCACHE		(1 << 16)
 #define I_SYNC_QUEUED		(1 << 17)
 #define I_PINNING_NETFS_WB	(1 << 18)
+#define __I_LRU_ISOLATING	19
+#define I_LRU_ISOLATING		(1 << __I_LRU_ISOLATING)
 
 #define I_DIRTY_INODE (I_DIRTY_SYNC | I_DIRTY_DATASYNC)
 #define I_DIRTY (I_DIRTY_INODE | I_DIRTY_PAGES)
+30-3
include/linux/hugetlb.h
···
 static inline spinlock_t *huge_pte_lockptr(struct hstate *h,
 					   struct mm_struct *mm, pte_t *pte)
 {
-	if (huge_page_size(h) == PMD_SIZE)
+	const unsigned long size = huge_page_size(h);
+
+	VM_WARN_ON(size == PAGE_SIZE);
+
+	/*
+	 * hugetlb must use the exact same PT locks as core-mm page table
+	 * walkers would. When modifying a PTE table, hugetlb must take the
+	 * PTE PT lock, when modifying a PMD table, hugetlb must take the PMD
+	 * PT lock etc.
+	 *
+	 * The expectation is that any hugetlb folio smaller than a PMD is
+	 * always mapped into a single PTE table and that any hugetlb folio
+	 * smaller than a PUD (but at least as big as a PMD) is always mapped
+	 * into a single PMD table.
+	 *
+	 * If that does not hold for an architecture, then that architecture
+	 * must disable split PT locks such that all *_lockptr() functions
+	 * will give us the same result: the per-MM PT lock.
+	 *
+	 * Note that with e.g., CONFIG_PGTABLE_LEVELS=2 where
+	 * PGDIR_SIZE==P4D_SIZE==PUD_SIZE==PMD_SIZE, we'd use pud_lockptr()
+	 * and core-mm would use pmd_lockptr(). However, in such configurations
+	 * split PMD locks are disabled -- they don't make sense on a single
+	 * PGDIR page table -- and the end result is the same.
+	 */
+	if (size >= PUD_SIZE)
+		return pud_lockptr(mm, (pud_t *) pte);
+	else if (size >= PMD_SIZE || IS_ENABLED(CONFIG_HIGHPTE))
 		return pmd_lockptr(mm, (pmd_t *) pte);
-	VM_BUG_ON(huge_page_size(h) == PAGE_SIZE);
-	return &mm->page_table_lock;
+	/* pte_alloc_huge() only applies with !CONFIG_HIGHPTE */
+	return ptep_lockptr(mm, pte);
 }
 
 #ifndef hugepages_supported
···
 #define NETFS_ICTX_ODIRECT	0		/* The file has DIO in progress */
 #define NETFS_ICTX_UNBUFFERED	1		/* I/O should not use the pagecache */
 #define NETFS_ICTX_WRITETHROUGH	2		/* Write-through caching */
-#define NETFS_ICTX_USE_PGPRIV2	31		/* [DEPRECATED] Use PG_private_2 to mark
-						 * write to cache on read */
 };
 
 /*
···
 #define NETFS_RREQ_DONT_UNLOCK_FOLIOS	3	/* Don't unlock the folios on completion */
 #define NETFS_RREQ_FAILED		4	/* The request failed */
 #define NETFS_RREQ_IN_PROGRESS		5	/* Unlocked when the request completes */
-#define NETFS_RREQ_WRITE_TO_CACHE	7	/* Need to write to the cache */
 #define NETFS_RREQ_UPLOAD_TO_SERVER	8	/* Need to write to the server */
 #define NETFS_RREQ_NONBLOCK		9	/* Don't block if possible (O_NONBLOCK) */
 #define NETFS_RREQ_BLOCKED		10	/* We blocked */
···
 	if (oldp)
 		*oldp = old;
 
-	if (old == i) {
+	if (old > 0 && old == i) {
 		smp_acquire__after_ctrl_dep();
 		return true;
 	}
 
-	if (unlikely(old < 0 || old - i < 0))
+	if (unlikely(old <= 0 || old - i < 0))
 		refcount_warn_saturate(r, REFCOUNT_SUB_UAF);
 
 	return false;
···
 	THERMAL_TZ_BIND_CDEV, /* Cooling dev is bind to the thermal zone */
 	THERMAL_TZ_UNBIND_CDEV, /* Cooling dev is unbind from the thermal zone */
 	THERMAL_INSTANCE_WEIGHT_CHANGED, /* Thermal instance weight changed */
+	THERMAL_TZ_RESUME, /* Thermal zone is resuming after system sleep */
 };
 
 /**
···
  * IO completion data structure (Completion Queue Entry)
  */
 struct io_uring_cqe {
-	__u64	user_data;	/* sqe->data submission passed back */
+	__u64	user_data;	/* sqe->user_data value passed back */
 	__s32	res;		/* result code for this event */
 	__u32	flags;
+2-1
include/uapi/linux/nsfs.h
···
 #define __LINUX_NSFS_H
 
 #include <linux/ioctl.h>
+#include <linux/types.h>
 
 #define NSIO	0xb7
···
 /* Get owner UID (in the caller's user namespace) for a user namespace */
 #define NS_GET_OWNER_UID	_IO(NSIO, 0x4)
 /* Get the id for a mount namespace */
-#define NS_GET_MNTNS_ID		_IO(NSIO, 0x5)
+#define NS_GET_MNTNS_ID		_IOR(NSIO, 0x5, __u64)
 /* Translate pid from target pid namespace into the caller's pid namespace. */
 #define NS_GET_PID_FROM_PIDNS	_IOR(NSIO, 0x6, int)
 /* Return thread-group leader id of pid in the callers pid namespace. */
···
 #define FASTRPC_IOCTL_ALLOC_DMA_BUFF	_IOWR('R', 1, struct fastrpc_alloc_dma_buf)
 #define FASTRPC_IOCTL_FREE_DMA_BUFF	_IOWR('R', 2, __u32)
 #define FASTRPC_IOCTL_INVOKE		_IOWR('R', 3, struct fastrpc_invoke)
-/* This ioctl is only supported with secure device nodes */
 #define FASTRPC_IOCTL_INIT_ATTACH	_IO('R', 4)
 #define FASTRPC_IOCTL_INIT_CREATE	_IOWR('R', 5, struct fastrpc_init_create)
 #define FASTRPC_IOCTL_MMAP		_IOWR('R', 6, struct fastrpc_req_mmap)
 #define FASTRPC_IOCTL_MUNMAP		_IOWR('R', 7, struct fastrpc_req_munmap)
-/* This ioctl is only supported with secure device nodes */
 #define FASTRPC_IOCTL_INIT_ATTACH_SNS	_IO('R', 8)
-/* This ioctl is only supported with secure device nodes */
 #define FASTRPC_IOCTL_INIT_CREATE_STATIC _IOWR('R', 9, struct fastrpc_init_create_static)
 #define FASTRPC_IOCTL_MEM_MAP		_IOWR('R', 10, struct fastrpc_mem_map)
 #define FASTRPC_IOCTL_MEM_UNMAP		_IOWR('R', 11, struct fastrpc_mem_unmap)
+2-2
init/Kconfig
···
 config RUSTC_VERSION_TEXT
 	string
 	depends on RUST
-	default $(shell,command -v $(RUSTC) >/dev/null 2>&1 && $(RUSTC) --version || echo n)
+	default "$(shell,$(RUSTC) --version 2>/dev/null)"
 
 config BINDGEN_VERSION_TEXT
 	string
···
 	# The dummy parameter `workaround-for-0.69.0` is required to support 0.69.0
 	# (https://github.com/rust-lang/rust-bindgen/pull/2678). It can be removed when
 	# the minimum version is upgraded past that (0.69.1 already fixed the issue).
-	default $(shell,command -v $(BINDGEN) >/dev/null 2>&1 && $(BINDGEN) --version workaround-for-0.69.0 || echo n)
+	default "$(shell,$(BINDGEN) --version workaround-for-0.69.0 2>/dev/null)"
 
 #
 # Place an empty function call at each tracepoint site. Can be
···
 		spi = i / BPF_REG_SIZE;
 
 		if (exact != NOT_EXACT &&
-		    old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
-		    cur->stack[spi].slot_type[i % BPF_REG_SIZE])
+		    (i >= cur->allocated_stack ||
+		     old->stack[spi].slot_type[i % BPF_REG_SIZE] !=
+		     cur->stack[spi].slot_type[i % BPF_REG_SIZE]))
 			return false;
 
 		if (!(old->stack[spi].spilled_ptr.live & REG_LIVE_READ)
+11-1
kernel/cpu.c
···
 	return ret;
 }
 
+/**
+ * Check if the core a CPU belongs to is online
+ */
+#if !defined(topology_is_core_online)
+static inline bool topology_is_core_online(unsigned int cpu)
+{
+	return true;
+}
+#endif
+
 int cpuhp_smt_enable(void)
 {
 	int cpu, ret = 0;
···
 		/* Skip online CPUs and CPUs on offline nodes */
 		if (cpu_online(cpu) || !node_online(cpu_to_node(cpu)))
 			continue;
-		if (!cpu_smt_thread_allowed(cpu))
+		if (!cpu_smt_thread_allowed(cpu) || !topology_is_core_online(cpu))
 			continue;
 		ret = _cpu_up(cpu, 0, CPUHP_ONLINE);
 		if (ret)
···
 
 	ret = __perf_event_account_interrupt(event, throttle);
 
-	if (event->prog && !bpf_overflow_handler(event, data, regs))
+	if (event->prog && event->prog->type == BPF_PROG_TYPE_PERF_EVENT &&
+	    !bpf_overflow_handler(event, data, regs))
 		return ret;
 
 	/*
+22-3
kernel/fork.c
···
  */
 int pidfd_prepare(struct pid *pid, unsigned int flags, struct file **ret)
 {
-	bool thread = flags & PIDFD_THREAD;
-
-	if (!pid || !pid_has_task(pid, thread ? PIDTYPE_PID : PIDTYPE_TGID))
+	if (!pid)
 		return -EINVAL;
+
+	scoped_guard(rcu) {
+		struct task_struct *tsk;
+
+		if (flags & PIDFD_THREAD)
+			tsk = pid_task(pid, PIDTYPE_PID);
+		else
+			tsk = pid_task(pid, PIDTYPE_TGID);
+		if (!tsk)
+			return -EINVAL;
+
+		/* Don't create pidfds for kernel threads for now. */
+		if (tsk->flags & PF_KTHREAD)
+			return -EINVAL;
+	}
 
 	return __pidfd_prepare(pid, flags, ret);
 }
···
 	 */
 	if (clone_flags & CLONE_PIDFD) {
 		int flags = (clone_flags & CLONE_THREAD) ? PIDFD_THREAD : 0;
+
+		/* Don't create pidfds for kernel threads for now. */
+		if (args->kthread) {
+			retval = -EINVAL;
+			goto bad_fork_free_pid;
+		}
 
 		/* Note that no task has been attached to @pid yet. */
 		retval = __pidfd_prepare(pid, flags, &pidfile);
+6-49
kernel/kallsyms.c
···
 	return kallsyms_relative_base - 1 - kallsyms_offsets[idx];
 }
 
-static void cleanup_symbol_name(char *s)
-{
-	char *res;
-
-	if (!IS_ENABLED(CONFIG_LTO_CLANG))
-		return;
-
-	/*
-	 * LLVM appends various suffixes for local functions and variables that
-	 * must be promoted to global scope as part of LTO. This can break
-	 * hooking of static functions with kprobes. '.' is not a valid
-	 * character in an identifier in C. Suffixes only in LLVM LTO observed:
-	 * - foo.llvm.[0-9a-f]+
-	 */
-	res = strstr(s, ".llvm.");
-	if (res)
-		*res = '\0';
-
-	return;
-}
-
-static int compare_symbol_name(const char *name, char *namebuf)
-{
-	/* The kallsyms_seqs_of_names is sorted based on names after
-	 * cleanup_symbol_name() (see scripts/kallsyms.c) if clang lto is enabled.
-	 * To ensure correct bisection in kallsyms_lookup_names(), do
-	 * cleanup_symbol_name(namebuf) before comparing name and namebuf.
-	 */
-	cleanup_symbol_name(namebuf);
-	return strcmp(name, namebuf);
-}
-
 static unsigned int get_symbol_seq(int index)
 {
 	unsigned int i, seq = 0;
···
 		seq = get_symbol_seq(mid);
 		off = get_symbol_offset(seq);
 		kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
-		ret = compare_symbol_name(name, namebuf);
+		ret = strcmp(name, namebuf);
 		if (ret > 0)
 			low = mid + 1;
 		else if (ret < 0)
···
 		seq = get_symbol_seq(low - 1);
 		off = get_symbol_offset(seq);
 		kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
-		if (compare_symbol_name(name, namebuf))
+		if (strcmp(name, namebuf))
 			break;
 		low--;
 	}
···
 		seq = get_symbol_seq(high + 1);
 		off = get_symbol_offset(seq);
 		kallsyms_expand_symbol(off, namebuf, ARRAY_SIZE(namebuf));
-		if (compare_symbol_name(name, namebuf))
+		if (strcmp(name, namebuf))
 			break;
 		high++;
 	}
···
 		if (modbuildid)
 			*modbuildid = NULL;
 
-		ret = strlen(namebuf);
-		goto found;
+		return strlen(namebuf);
 	}
 
 	/* See if it's in a module or a BPF JITed image. */
···
 		ret = ftrace_mod_address_lookup(addr, symbolsize,
 						offset, modname, namebuf);
 
-found:
-	cleanup_symbol_name(namebuf);
 	return ret;
 }
···
 
 int lookup_symbol_name(unsigned long addr, char *symname)
 {
-	int res;
-
 	symname[0] = '\0';
 	symname[KSYM_NAME_LEN - 1] = '\0';
···
 		/* Grab name */
 		kallsyms_expand_symbol(get_symbol_offset(pos),
 				       symname, KSYM_NAME_LEN);
-		goto found;
+		return 0;
 	}
 	/* See if it's in a module. */
-	res = lookup_module_symbol_name(addr, symname);
-	if (res)
-		return res;
-
-found:
-	cleanup_symbol_name(symname);
-	return 0;
+	return lookup_module_symbol_name(addr, symname);
 }
 
 /* Look up a kernel symbol and return it in a text buffer. */
+1-21
kernel/kallsyms_selftest.c
···
 		stat.min, stat.max, div_u64(stat.sum, stat.real_cnt));
 }
 
-static bool match_cleanup_name(const char *s, const char *name)
-{
-	char *p;
-	int len;
-
-	if (!IS_ENABLED(CONFIG_LTO_CLANG))
-		return false;
-
-	p = strstr(s, ".llvm.");
-	if (!p)
-		return false;
-
-	len = strlen(name);
-	if (p - s != len)
-		return false;
-
-	return !strncmp(s, name, len);
-}
-
 static int find_symbol(void *data, const char *name, unsigned long addr)
 {
 	struct test_stat *stat = (struct test_stat *)data;
 
-	if (strcmp(name, stat->name) == 0 ||
-	    (!stat->perf && match_cleanup_name(name, stat->name))) {
+	if (!strcmp(name, stat->name)) {
 		stat->real_cnt++;
 		stat->addr = addr;
+1-1
kernel/trace/trace.c
···
 	trace_access_unlock(iter->cpu_file);
 
 	if (ret < 0) {
-		if (trace_empty(iter)) {
+		if (trace_empty(iter) && !iter->closed) {
 			if ((filp->f_flags & O_NONBLOCK))
 				return -EAGAIN;
 
···
 
 	if (unlikely(!pte_same(old_pte, vmf->orig_pte))) {
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
-		goto out;
+		return 0;
 	}
 
 	pte = pte_modify(old_pte, vma->vm_page_prot);
···
 	if (!migrate_misplaced_folio(folio, vma, target_nid)) {
 		nid = target_nid;
 		flags |= TNF_MIGRATED;
-	} else {
-		flags |= TNF_MIGRATE_FAIL;
-		vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
-					       vmf->address, &vmf->ptl);
-		if (unlikely(!vmf->pte))
-			goto out;
-		if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
-			pte_unmap_unlock(vmf->pte, vmf->ptl);
-			goto out;
-		}
-		goto out_map;
+		task_numa_fault(last_cpupid, nid, nr_pages, flags);
+		return 0;
 	}
 
-out:
-	if (nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, nid, nr_pages, flags);
-	return 0;
+	flags |= TNF_MIGRATE_FAIL;
+	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
+				       vmf->address, &vmf->ptl);
+	if (unlikely(!vmf->pte))
+		return 0;
+	if (unlikely(!pte_same(ptep_get(vmf->pte), vmf->orig_pte))) {
+		pte_unmap_unlock(vmf->pte, vmf->ptl);
+		return 0;
+	}
 out_map:
 	/*
 	 * Make it present again, depending on how arch implements
···
 	numa_rebuild_single_mapping(vmf, vma, vmf->address, vmf->pte,
 				    writable);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
-	goto out;
+
+	if (nid != NUMA_NO_NODE)
+		task_numa_fault(last_cpupid, nid, nr_pages, flags);
+	return 0;
 }
 
 static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
+11-5
mm/migrate.c
···
 	return rc;
 }
 
-static inline int try_split_folio(struct folio *folio, struct list_head *split_folios)
+static inline int try_split_folio(struct folio *folio, struct list_head *split_folios,
+				  enum migrate_mode mode)
 {
 	int rc;
 
-	folio_lock(folio);
+	if (mode == MIGRATE_ASYNC) {
+		if (!folio_trylock(folio))
+			return -EAGAIN;
+	} else {
+		folio_lock(folio);
+	}
 	rc = split_folio_to_list(folio, split_folios);
 	folio_unlock(folio);
 	if (!rc)
···
 			 */
 			if (nr_pages > 2 &&
 			    !list_empty(&folio->_deferred_list)) {
-				if (try_split_folio(folio, split_folios) == 0) {
+				if (!try_split_folio(folio, split_folios, mode)) {
 					nr_failed++;
 					stats->nr_thp_failed += is_thp;
 					stats->nr_thp_split += is_thp;
···
 			if (!thp_migration_supported() && is_thp) {
 				nr_failed++;
 				stats->nr_thp_failed++;
-				if (!try_split_folio(folio, split_folios)) {
+				if (!try_split_folio(folio, split_folios, mode)) {
 					stats->nr_thp_split++;
 					stats->nr_split++;
 					continue;
···
 				stats->nr_thp_failed += is_thp;
 				/* Large folio NUMA faulting doesn't split to retry. */
 				if (is_large && !nosplit) {
-					int ret = try_split_folio(folio, split_folios);
+					int ret = try_split_folio(folio, split_folios, mode);
 
 					if (!ret) {
 						stats->nr_thp_split += is_thp;
+4-11
mm/mm_init.c
···
 		panic("Failed to allocate %ld bytes for node %d memory map\n",
 		      size, pgdat->node_id);
 	pgdat->node_mem_map = map + offset;
-	mod_node_early_perpage_metadata(pgdat->node_id,
-					DIV_ROUND_UP(size, PAGE_SIZE));
+	memmap_boot_pages_add(DIV_ROUND_UP(size, PAGE_SIZE));
 	pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n",
 		 __func__, pgdat->node_id, (unsigned long)pgdat,
 		 (unsigned long)pgdat->node_mem_map);
···
 
 		set_pageblock_migratetype(page, MIGRATE_CMA);
 		set_page_refcounted(page);
+		/* pages were reserved and not allocated */
+		clear_page_tag_ref(page);
 		__free_pages(page, pageblock_order);
 
 		adjust_managed_page_count(page, pageblock_nr_pages);
···
 	}
 
 	/* pages were reserved and not allocated */
-	if (mem_alloc_profiling_enabled()) {
-		union codetag_ref *ref = get_page_tag_ref(page);
-
-		if (ref) {
-			set_codetag_empty(ref);
-			put_page_tag_ref(ref);
-		}
-	}
-
+	clear_page_tag_ref(page);
 	__free_pages_core(page, order, MEMINIT_EARLY);
 }
+11-3
mm/mseal.c
···
 
 static bool is_madv_discard(int behavior)
 {
-	return	behavior &
-		(MADV_FREE | MADV_DONTNEED | MADV_DONTNEED_LOCKED |
-		 MADV_REMOVE | MADV_DONTFORK | MADV_WIPEONFORK);
+	switch (behavior) {
+	case MADV_FREE:
+	case MADV_DONTNEED:
+	case MADV_DONTNEED_LOCKED:
+	case MADV_REMOVE:
+	case MADV_DONTFORK:
+	case MADV_WIPEONFORK:
+		return true;
+	}
+
+	return false;
 }
 
 static bool is_ro_anon(struct vm_area_struct *vma)
+21-31
mm/page_alloc.c
···
 
 static bool page_contains_unaccepted(struct page *page, unsigned int order);
 static void accept_page(struct page *page, unsigned int order);
-static bool try_to_accept_memory(struct zone *zone, unsigned int order);
+static bool cond_accept_memory(struct zone *zone, unsigned int order);
 static inline bool has_unaccepted_memory(void);
 static bool __free_unaccepted(struct page *page);
···
 	if (!(alloc_flags & ALLOC_CMA))
 		unusable_free += zone_page_state(z, NR_FREE_CMA_PAGES);
 #endif
-#ifdef CONFIG_UNACCEPTED_MEMORY
-	unusable_free += zone_page_state(z, NR_UNACCEPTED);
-#endif
 
 	return unusable_free;
 }
···
 		}
 	}
 
+	cond_accept_memory(zone, order);
+
 	/*
 	 * Detect whether the number of free pages is below high
 	 * watermark.  If so, we will decrease pcp->high and free
···
 				       gfp_mask)) {
 			int ret;
 
-			if (has_unaccepted_memory()) {
-				if (try_to_accept_memory(zone, order))
-					goto try_this_zone;
-			}
+			if (cond_accept_memory(zone, order))
+				goto try_this_zone;
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 			/*
···
 
 			return page;
 		} else {
-			if (has_unaccepted_memory()) {
-				if (try_to_accept_memory(zone, order))
-					goto try_this_zone;
-			}
+			if (cond_accept_memory(zone, order))
+				goto try_this_zone;
 
 #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
 			/* Try again if zone has deferred pages */
···
 	for_each_online_pgdat(pgdat)
 		pgdat->per_cpu_nodestats =
 			alloc_percpu(struct per_cpu_nodestat);
-	store_early_perpage_metadata();
 }
 
 __meminit void zone_pcp_init(struct zone *zone)
···
 
 void free_reserved_page(struct page *page)
 {
-	if (mem_alloc_profiling_enabled()) {
-		union codetag_ref *ref = get_page_tag_ref(page);
-
-		if (ref) {
-			set_codetag_empty(ref);
-			put_page_tag_ref(ref);
-		}
-	}
+	clear_page_tag_ref(page);
 	ClearPageReserved(page);
 	init_page_count(page);
 	__free_page(page);
···
 	struct page *page;
 	bool last;
 
-	if (list_empty(&zone->unaccepted_pages))
-		return false;
-
 	spin_lock_irqsave(&zone->lock, flags);
 	page = list_first_entry_or_null(&zone->unaccepted_pages,
 					struct page, lru);
···
 	return true;
 }
 
-static bool try_to_accept_memory(struct zone *zone, unsigned int order)
+static bool cond_accept_memory(struct zone *zone, unsigned int order)
 {
 	long to_accept;
-	int ret = false;
+	bool ret = false;
+
+	if (!has_unaccepted_memory())
+		return false;
+
+	if (list_empty(&zone->unaccepted_pages))
+		return false;
 
 	/* How much to accept to get to high watermark? */
 	to_accept = high_wmark_pages(zone) -
 		    (zone_page_state(zone, NR_FREE_PAGES) -
-		    __zone_watermark_unusable_free(zone, order, 0));
+		    __zone_watermark_unusable_free(zone, order, 0) -
+		    zone_page_state(zone, NR_UNACCEPTED));
 
-	/* Accept at least one page */
-	do {
+	while (to_accept > 0) {
 		if (!try_to_accept_memory_one(zone))
 			break;
 		ret = true;
 		to_accept -= MAX_ORDER_NR_PAGES;
-	} while (to_accept > 0);
+	}
 
 	return ret;
 }
···
 {
 }
 
-static bool try_to_accept_memory(struct zone *zone, unsigned int order)
+static bool cond_accept_memory(struct zone *zone, unsigned int order)
 {
 	return false;
 }
···
 			page = alloc_pages_noprof(alloc_gfp, order);
 		else
 			page = alloc_pages_node_noprof(nid, alloc_gfp, order);
-		if (unlikely(!page)) {
-			if (!nofail)
-				break;
-
-			/* fall back to the zero order allocations */
-			alloc_gfp |= __GFP_NOFAIL;
-			order = 0;
-			continue;
-		}
+		if (unlikely(!page))
+			break;
 
 		/*
 		 * Higher order allocations must be able to be treated as
+25-27
mm/vmstat.c
···
 }
 #endif
 
+/*
+ * Count number of pages "struct page" and "struct page_ext" consume.
+ * nr_memmap_boot_pages: # of pages allocated by boot allocator
+ * nr_memmap_pages: # of pages that were allocated by buddy allocator
+ */
+static atomic_long_t nr_memmap_boot_pages = ATOMIC_LONG_INIT(0);
+static atomic_long_t nr_memmap_pages = ATOMIC_LONG_INIT(0);
+
+void memmap_boot_pages_add(long delta)
+{
+	atomic_long_add(delta, &nr_memmap_boot_pages);
+}
+
+void memmap_pages_add(long delta)
+{
+	atomic_long_add(delta, &nr_memmap_pages);
+}
+
 #ifdef CONFIG_COMPACTION
 
 struct contig_page_info {
···
 	"pgdemote_kswapd",
 	"pgdemote_direct",
 	"pgdemote_khugepaged",
-	"nr_memmap",
-	"nr_memmap_boot",
-	/* enum writeback_stat_item counters */
+	/* system-wide enum vm_stat_item counters */
 	"nr_dirty_threshold",
 	"nr_dirty_background_threshold",
+	"nr_memmap_pages",
+	"nr_memmap_boot_pages",
 
 #if defined(CONFIG_VM_EVENT_COUNTERS) || defined(CONFIG_MEMCG)
 	/* enum vm_event_item counters */
···
 #define NR_VMSTAT_ITEMS (NR_VM_ZONE_STAT_ITEMS + \
 			 NR_VM_NUMA_EVENT_ITEMS + \
 			 NR_VM_NODE_STAT_ITEMS + \
-			 NR_VM_WRITEBACK_STAT_ITEMS + \
+			 NR_VM_STAT_ITEMS + \
 			 (IS_ENABLED(CONFIG_VM_EVENT_COUNTERS) ? \
 			  NR_VM_EVENT_ITEMS : 0))
···
 
 	global_dirty_limits(v + NR_DIRTY_BG_THRESHOLD,
 			    v + NR_DIRTY_THRESHOLD);
-	v += NR_VM_WRITEBACK_STAT_ITEMS;
+	v[NR_MEMMAP_PAGES] = atomic_long_read(&nr_memmap_pages);
+	v[NR_MEMMAP_BOOT_PAGES] = atomic_long_read(&nr_memmap_boot_pages);
+	v += NR_VM_STAT_ITEMS;
 
 #ifdef CONFIG_VM_EVENT_COUNTERS
 	all_vm_events(v);
···
 module_init(extfrag_debug_init);
 
 #endif
-
-/*
- * Page metadata size (struct page and page_ext) in pages
- */
-static unsigned long early_perpage_metadata[MAX_NUMNODES] __meminitdata;
-
-void __meminit mod_node_early_perpage_metadata(int nid, long delta)
-{
-	early_perpage_metadata[nid] += delta;
-}
-
-void __meminit store_early_perpage_metadata(void)
-{
-	int nid;
-	struct pglist_data *pgdat;
-
-	for_each_online_pgdat(pgdat) {
-		nid = pgdat->node_id;
-		mod_node_page_state(NODE_DATA(nid), NR_MEMMAP_BOOT,
-				    early_perpage_metadata[nid]);
-	}
-}
+5-1
net/bridge/br_netfilter_hooks.c
···
 	if (likely(nf_ct_is_confirmed(ct)))
 		return NF_ACCEPT;
 
+	if (WARN_ON_ONCE(refcount_read(&nfct->use) != 1)) {
+		nf_reset_ct(skb);
+		return NF_ACCEPT;
+	}
+
 	WARN_ON_ONCE(skb_shared(skb));
-	WARN_ON_ONCE(refcount_read(&nfct->use) != 1);
 
 	/* We can't call nf_confirm here, it would create a dependency
 	 * on nf_conntrack module.
+17-9
net/core/dev.c
···
 	}
 }
 
+static bool netdev_has_ip_or_hw_csum(netdev_features_t features)
+{
+	netdev_features_t ip_csum_mask = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
+	bool ip_csum = (features & ip_csum_mask) == ip_csum_mask;
+	bool hw_csum = features & NETIF_F_HW_CSUM;
+
+	return ip_csum || hw_csum;
+}
+
 static netdev_features_t netdev_fix_features(struct net_device *dev,
 	netdev_features_t features)
 {
···
 		features &= ~NETIF_F_LRO;
 	}
 
-	if (features & NETIF_F_HW_TLS_TX) {
-		bool ip_csum = (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) ==
-			(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM);
-		bool hw_csum = features & NETIF_F_HW_CSUM;
-
-		if (!ip_csum && !hw_csum) {
-			netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n");
-			features &= ~NETIF_F_HW_TLS_TX;
-		}
+	if ((features & NETIF_F_HW_TLS_TX) && !netdev_has_ip_or_hw_csum(features)) {
+		netdev_dbg(dev, "Dropping TLS TX HW offload feature since no CSUM feature.\n");
+		features &= ~NETIF_F_HW_TLS_TX;
 	}
 
 	if ((features & NETIF_F_HW_TLS_RX) && !(features & NETIF_F_RXCSUM)) {
 		netdev_dbg(dev, "Dropping TLS RX HW offload feature since no RXCSUM feature.\n");
 		features &= ~NETIF_F_HW_TLS_RX;
+	}
+
+	if ((features & NETIF_F_GSO_UDP_L4) && !netdev_has_ip_or_hw_csum(features)) {
+		netdev_dbg(dev, "Dropping USO feature since no CSUM feature.\n");
+		features &= ~NETIF_F_GSO_UDP_L4;
 	}
 
 	return features;
···
 	 */
 	if (unlikely(len != icsk->icsk_ack.rcv_mss)) {
 		u64 val = (u64)skb->len << TCP_RMEM_TO_WIN_SCALE;
+		u8 old_ratio = tcp_sk(sk)->scaling_ratio;
 
 		do_div(val, skb->truesize);
 		tcp_sk(sk)->scaling_ratio = val ? val : 1;
+
+		if (old_ratio != tcp_sk(sk)->scaling_ratio)
+			WRITE_ONCE(tcp_sk(sk)->window_clamp,
+				   tcp_win_from_space(sk, sk->sk_rcvbuf));
 	}
 	icsk->icsk_ack.rcv_mss = min_t(unsigned int, len,
 				       tcp_sk(sk)->advmss);
···
 	 * <prev RTT . ><current RTT .. ><next RTT .... >
 	 */
 
-	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf)) {
+	if (READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_moderate_rcvbuf) &&
+	    !(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
 		u64 rcvwin, grow;
 		int rcvbuf;
···
 
 		rcvbuf = min_t(u64, tcp_space_from_win(sk, rcvwin),
 			       READ_ONCE(sock_net(sk)->ipv4.sysctl_tcp_rmem[2]));
-		if (!(sk->sk_userlocks & SOCK_RCVBUF_LOCK)) {
-			if (rcvbuf > sk->sk_rcvbuf) {
-				WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
+		if (rcvbuf > sk->sk_rcvbuf) {
+			WRITE_ONCE(sk->sk_rcvbuf, rcvbuf);
 
-				/* Make the window clamp follow along. */
-				WRITE_ONCE(tp->window_clamp,
-					   tcp_win_from_space(sk, rcvbuf));
-			}
-		} else {
-			/* Make the window clamp follow along while being bounded
-			 * by SO_RCVBUF.
-			 */
-			int clamp = tcp_win_from_space(sk, min(rcvbuf, sk->sk_rcvbuf));
-
-			if (clamp > tp->window_clamp)
-				WRITE_ONCE(tp->window_clamp, clamp);
+			/* Make the window clamp follow along. */
+			WRITE_ONCE(tp->window_clamp,
+				   tcp_win_from_space(sk, rcvbuf));
 		}
 	}
 	tp->rcvq_space.space = copied;
+6
net/ipv4/udp_offload.c
···282282 skb_transport_header(gso_skb)))283283 return ERR_PTR(-EINVAL);284284285285+ /* We don't know if egress device can segment and checksum the packet286286+ * when IPv6 extension headers are present. Fall back to software GSO.287287+ */288288+ if (gso_skb->ip_summed != CHECKSUM_PARTIAL)289289+ features &= ~(NETIF_F_GSO_UDP_L4 | NETIF_F_CSUM_MASK);290290+285291 if (skb_gso_ok(gso_skb, features | NETIF_F_GSO_ROBUST)) {286292 /* Packet is from an untrusted source, reset gso_segs. */287293 skb_shinfo(gso_skb)->gso_segs = DIV_ROUND_UP(gso_skb->len - sizeof(*uh),
···820820{821821#if IS_ENABLED(CONFIG_NF_CONNTRACK)822822 static const unsigned long flags = IPS_CONFIRMED | IPS_DYING;823823- const struct nf_conn *ct = (void *)skb_nfct(entry->skb);823823+ struct nf_conn *ct = (void *)skb_nfct(entry->skb);824824+ unsigned long status;825825+ unsigned int use;824826825825- if (ct && ((ct->status & flags) == IPS_DYING)827827+ if (!ct)828828+ return false;829829+830830+ status = READ_ONCE(ct->status);831831+ if ((status & flags) == IPS_DYING)826832 return true;833833+834834+ if (status & IPS_CONFIRMED)835835+ return false;836836+837837+ /* in some cases skb_clone() can occur after initial conntrack838838+ * pickup, but conntrack assumes exclusive skb->_nfct ownership for839839+ * unconfirmed entries.840840+ *841841+ * This happens for br_netfilter and with ip multicast routing.842842+ * This can't be solved with serialization here because one clone could843843+ * have been queued for local delivery.844844+ */845845+ use = refcount_read(&ct->ct_general.use);846846+ if (likely(use == 1))847847+ return false;848848+849849+ /* Can't decrement further? Exclusive ownership. */850850+ if (!refcount_dec_not_one(&ct->ct_general.use))851851+ return false;852852+853853+ skb_set_nfct(entry->skb, 0);854854+ /* No nf_ct_put(): we already decremented .use and it cannot855855+ * drop down to 0.856856+ */857857+ return true;827858#endif828859 return false;829860}
···4040define_panicking_intrinsics!("`f32` should not be used", {4141 __addsf3,4242 __eqsf2,4343+ __extendsfdf2,4344 __gesf2,4445 __lesf2,4546 __ltsf2,4647 __mulsf3,4748 __nesf2,4949+ __truncdfsf2,4850 __unordsf2,4951});50525153define_panicking_intrinsics!("`f64` should not be used", {5254 __adddf3,5555+ __eqdf2,5356 __ledf2,5457 __ltdf2,5558 __muldf3,
+1-1
rust/macros/lib.rs
···9494/// - `license`: ASCII string literal of the license of the kernel module (required).9595/// - `alias`: array of ASCII string literals of the alias names of the kernel module.9696/// - `firmware`: array of ASCII string literals of the firmware files of9797-/// the kernel module.9797+/// the kernel module.9898#[proc_macro]9999pub fn module(ts: TokenStream) -> TokenStream {100100 module::module(ts)
···145145 parser.add_argument('--cfgs', action='append', default=[])146146 parser.add_argument("srctree", type=pathlib.Path)147147 parser.add_argument("objtree", type=pathlib.Path)148148+ parser.add_argument("sysroot", type=pathlib.Path)148149 parser.add_argument("sysroot_src", type=pathlib.Path)149150 parser.add_argument("exttree", type=pathlib.Path, nargs="?")150151 args = parser.parse_args()···155154 level=logging.INFO if args.verbose else logging.WARNING156155 )157156157157+ # Making sure that the `sysroot` and `sysroot_src` belong to the same toolchain.158158+ assert args.sysroot in args.sysroot_src.parents159159+158160 rust_project = {159161 "crates": generate_crates(args.srctree, args.objtree, args.sysroot_src, args.exttree, args.cfgs),160160- "sysroot_src": str(args.sysroot_src),162162+ "sysroot": str(args.sysroot),161163 }162164163165 json.dump(rust_project, sys.stdout, sort_keys=True, indent=4)
+2-2
scripts/generate_rust_target.rs
···162162 "data-layout",163163 "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",164164 );165165- let mut features = "-3dnow,-3dnowa,-mmx,+soft-float".to_string();165165+ let mut features = "-mmx,+soft-float".to_string();166166 if cfg.has("MITIGATION_RETPOLINE") {167167 features += ",+retpoline-external-thunk";168168 }···179179 "data-layout",180180 "e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-i128:128-f64:32:64-f80:32-n8:16:32-S128",181181 );182182- let mut features = "-3dnow,-3dnowa,-mmx,+soft-float".to_string();182182+ let mut features = "-mmx,+soft-float".to_string();183183 if cfg.has("MITIGATION_RETPOLINE") {184184 features += ",+retpoline-external-thunk";185185 }
+2-29
scripts/kallsyms.c
···55 * This software may be used and distributed according to the terms66 * of the GNU General Public License, incorporated herein by reference.77 *88- * Usage: kallsyms [--all-symbols] [--absolute-percpu]99- * [--lto-clang] in.map > out.S88+ * Usage: kallsyms [--all-symbols] [--absolute-percpu] in.map > out.S109 *1110 * Table compression uses all the unused char codes on the symbols and1211 * maps these to the most used substrings (tokens). For instance, it might···6162static unsigned int table_size, table_cnt;6263static int all_symbols;6364static int absolute_percpu;6464-static int lto_clang;65656666static int token_profit[0x10000];6767···71737274static void usage(void)7375{7474- fprintf(stderr, "Usage: kallsyms [--all-symbols] [--absolute-percpu] "7575- "[--lto-clang] in.map > out.S\n");7676+ fprintf(stderr, "Usage: kallsyms [--all-symbols] [--absolute-percpu] in.map > out.S\n");7677 exit(1);7778}7879···341344 return s->percpu_absolute;342345}343346344344-static void cleanup_symbol_name(char *s)345345-{346346- char *p;347347-348348- /*349349- * ASCII[.] = 2e350350- * ASCII[0-9] = 30,39351351- * ASCII[A-Z] = 41,5a352352- * ASCII[_] = 5f353353- * ASCII[a-z] = 61,7a354354- *355355- * As above, replacing the first '.' in ".llvm." 
with '\0' does not356356- * affect the main sorting, but it helps us with subsorting.357357- */358358- p = strstr(s, ".llvm.");359359- if (p)360360- *p = '\0';361361-}362362-363347static int compare_names(const void *a, const void *b)364348{365349 int ret;···503525 output_label("kallsyms_relative_base");504526 output_address(relative_base);505527 printf("\n");506506-507507- if (lto_clang)508508- for (i = 0; i < table_cnt; i++)509509- cleanup_symbol_name((char *)table[i]->sym);510528511529 sort_symbols_by_name();512530 output_label("kallsyms_seqs_of_names");···781807 static const struct option long_options[] = {782808 {"all-symbols", no_argument, &all_symbols, 1},783809 {"absolute-percpu", no_argument, &absolute_percpu, 1},784784- {"lto-clang", no_argument, <o_clang, 1},785810 {},786811 };787812
-4
scripts/link-vmlinux.sh
···156156 kallsymopt="${kallsymopt} --absolute-percpu"157157 fi158158159159- if is_enabled CONFIG_LTO_CLANG; then160160- kallsymopt="${kallsymopt} --lto-clang"161161- fi162162-163159 info KSYMS "${2}.S"164160 scripts/kallsyms ${kallsymopt} "${1}" > "${2}.S"165161
+22-13
security/keys/trusted-keys/trusted_dcp.c
···186186 return ret;187187}188188189189-static int decrypt_blob_key(u8 *key)189189+static int decrypt_blob_key(u8 *encrypted_key, u8 *plain_key)190190{191191- return do_dcp_crypto(key, key, false);191191+ return do_dcp_crypto(encrypted_key, plain_key, false);192192}193193194194-static int encrypt_blob_key(u8 *key)194194+static int encrypt_blob_key(u8 *plain_key, u8 *encrypted_key)195195{196196- return do_dcp_crypto(key, key, true);196196+ return do_dcp_crypto(plain_key, encrypted_key, true);197197}198198199199static int trusted_dcp_seal(struct trusted_key_payload *p, char *datablob)200200{201201 struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob;202202 int blen, ret;203203+ u8 plain_blob_key[AES_KEYSIZE_128];203204204205 blen = calc_blob_len(p->key_len);205206 if (blen > MAX_BLOB_SIZE)···208207209208 b->fmt_version = DCP_BLOB_VERSION;210209 get_random_bytes(b->nonce, AES_KEYSIZE_128);211211- get_random_bytes(b->blob_key, AES_KEYSIZE_128);210210+ get_random_bytes(plain_blob_key, AES_KEYSIZE_128);212211213213- ret = do_aead_crypto(p->key, b->payload, p->key_len, b->blob_key,212212+ ret = do_aead_crypto(p->key, b->payload, p->key_len, plain_blob_key,214213 b->nonce, true);215214 if (ret) {216215 pr_err("Unable to encrypt blob payload: %i\n", ret);217217- return ret;216216+ goto out;218217 }219218220220- ret = encrypt_blob_key(b->blob_key);219219+ ret = encrypt_blob_key(plain_blob_key, b->blob_key);221220 if (ret) {222221 pr_err("Unable to encrypt blob key: %i\n", ret);223223- return ret;222222+ goto out;224223 }225224226226- b->payload_len = get_unaligned_le32(&p->key_len);225225+ put_unaligned_le32(p->key_len, &b->payload_len);227226 p->blob_len = blen;228228- return 0;227227+ ret = 0;228228+229229+out:230230+ memzero_explicit(plain_blob_key, sizeof(plain_blob_key));231231+232232+ return ret;229233}230234231235static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)232236{233237 struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob;234238 
int blen, ret;239239+ u8 plain_blob_key[AES_KEYSIZE_128];235240236241 if (b->fmt_version != DCP_BLOB_VERSION) {237242 pr_err("DCP blob has bad version: %i, expected %i\n",···255248 goto out;256249 }257250258258- ret = decrypt_blob_key(b->blob_key);251251+ ret = decrypt_blob_key(b->blob_key, plain_blob_key);259252 if (ret) {260253 pr_err("Unable to decrypt blob key: %i\n", ret);261254 goto out;262255 }263256264257 ret = do_aead_crypto(b->payload, p->key, p->key_len + DCP_BLOB_AUTHLEN,265265- b->blob_key, b->nonce, false);258258+ plain_blob_key, b->nonce, false);266259 if (ret) {267260 pr_err("Unwrap of DCP payload failed: %i\n", ret);268261 goto out;···270263271264 ret = 0;272265out:266266+ memzero_explicit(plain_blob_key, sizeof(plain_blob_key));267267+273268 return ret;274269}275270
···38523852 if (default_noexec &&38533853 (prot & PROT_EXEC) && !(vma->vm_flags & VM_EXEC)) {38543854 int rc = 0;38553855- if (vma_is_initial_heap(vma)) {38553855+ /*38563856+ * We don't use the vma_is_initial_heap() helper as it has38573857+ * a history of problems and is currently broken on systems38583858+ * where there is no heap, e.g. brk == start_brk. Before38593859+ * replacing the conditional below with vma_is_initial_heap(),38603860+ * or something similar, please ensure that the logic is the38613861+ * same as what we have below or you have tested every possible38623862+ * corner case you can think to test.38633863+ */38643864+ if (vma->vm_start >= vma->vm_mm->start_brk &&38653865+ vma->vm_end <= vma->vm_mm->brk) {38563866 rc = avc_has_perm(sid, sid, SECCLASS_PROCESS,38573867 PROCESS__EXECHEAP, NULL);38583868 } else if (!vma->vm_file && (vma_is_initial_stack(vma) ||
+1-1
sound/core/timer.c
···547547 /* check the actual time for the start tick;548548 * bail out as error if it's way too low (< 100us)549549 */550550- if (start) {550550+ if (start && !(timer->hw.flags & SNDRV_TIMER_HW_SLAVE)) {551551 if ((u64)snd_timer_hw_resolution(timer) * ticks < 100000)552552 return -EINVAL;553553 }
+1-1
sound/pci/hda/cs35l41_hda.c
···134134};135135136136static const struct cs_dsp_client_ops client_ops = {137137- .control_remove = hda_cs_dsp_control_remove,137137+ /* cs_dsp requires the client to provide this even if it is empty */138138};139139140140static int cs35l41_request_tuning_param_file(struct cs35l41_hda *cs35l41, char *tuning_filename,
+1-1
sound/pci/hda/cs35l56_hda.c
···413413}414414415415static const struct cs_dsp_client_ops cs35l56_hda_client_ops = {416416- .control_remove = hda_cs_dsp_control_remove,416416+ /* cs_dsp requires the client to provide this even if it is empty */417417};418418419419static int cs35l56_hda_request_firmware_file(struct cs35l56_hda *cs35l56,
+99-1
sound/pci/hda/patch_realtek.c
···1111 */12121313#include <linux/acpi.h>1414+#include <linux/cleanup.h>1415#include <linux/init.h>1516#include <linux/delay.h>1617#include <linux/slab.h>1718#include <linux/pci.h>1819#include <linux/dmi.h>1920#include <linux/module.h>2121+#include <linux/i2c.h>2022#include <linux/input.h>2123#include <linux/leds.h>2224#include <linux/ctype.h>2525+#include <linux/spi/spi.h>2326#include <sound/core.h>2427#include <sound/jack.h>2528#include <sound/hda_codec.h>···586583 switch (codec->core.vendor_id) {587584 case 0x10ec0236:588585 case 0x10ec0256:589589- case 0x10ec0257:590586 case 0x19e58326:591587 case 0x10ec0283:592588 case 0x10ec0285:···68586856 }68596857}6860685868596859+static void cs35lxx_autodet_fixup(struct hda_codec *cdc,68606860+ const struct hda_fixup *fix,68616861+ int action)68626862+{68636863+ struct device *dev = hda_codec_dev(cdc);68646864+ struct acpi_device *adev;68656865+ struct fwnode_handle *fwnode __free(fwnode_handle) = NULL;68666866+ const char *bus = NULL;68676867+ static const struct {68686868+ const char *hid;68696869+ const char *name;68706870+ } acpi_ids[] = {{ "CSC3554", "cs35l54-hda" },68716871+ { "CSC3556", "cs35l56-hda" },68726872+ { "CSC3557", "cs35l57-hda" }};68736873+ char *match;68746874+ int i, count = 0, count_devindex = 0;68756875+68766876+ switch (action) {68776877+ case HDA_FIXUP_ACT_PRE_PROBE:68786878+ for (i = 0; i < ARRAY_SIZE(acpi_ids); ++i) {68796879+ adev = acpi_dev_get_first_match_dev(acpi_ids[i].hid, NULL, -1);68806880+ if (adev)68816881+ break;68826882+ }68836883+ if (!adev) {68846884+ dev_err(dev, "Failed to find ACPI entry for a Cirrus Amp\n");68856885+ return;68866886+ }68876887+68886888+ count = i2c_acpi_client_count(adev);68896889+ if (count > 0) {68906890+ bus = "i2c";68916891+ } else {68926892+ count = acpi_spi_count_resources(adev);68936893+ if (count > 0)68946894+ bus = "spi";68956895+ }68966896+68976897+ fwnode = fwnode_handle_get(acpi_fwnode_handle(adev));68986898+ acpi_dev_put(adev);68996899+69006900+ if 
(!bus) {69016901+ dev_err(dev, "Did not find any buses for %s\n", acpi_ids[i].hid);69026902+ return;69036903+ }69046904+69056905+ if (!fwnode) {69066906+ dev_err(dev, "Could not get fwnode for %s\n", acpi_ids[i].hid);69076907+ return;69086908+ }69096909+69106910+ /*69116911+ * When available, the cirrus,dev-index property is an accurate69126912+ * count of the amps in a system and is used in preference to69136913+ * the count of bus devices that can contain additional address69146914+ * alias entries.69156915+ */69166916+ count_devindex = fwnode_property_count_u32(fwnode, "cirrus,dev-index");69176917+ if (count_devindex > 0)69186918+ count = count_devindex;69196919+69206920+ match = devm_kasprintf(dev, GFP_KERNEL, "-%%s:00-%s.%%d", acpi_ids[i].name);69216921+ if (!match)69226922+ return;69236923+ dev_info(dev, "Found %d %s on %s (%s)\n", count, acpi_ids[i].hid, bus, match);69246924+ comp_generic_fixup(cdc, action, bus, acpi_ids[i].hid, match, count);69256925+69266926+ break;69276927+ case HDA_FIXUP_ACT_FREE:69286928+ /*69296929+ * Pass the action on to comp_generic_fixup() so that69306930+ * hda_component_manager functions can be called in just one69316931+ * place.
In this context the bus, hid, match_str or count69326932+ * values do not need to be calculated.69336933+ */69346934+ comp_generic_fixup(cdc, action, NULL, NULL, NULL, 0);69356935+ break;69366936+ }69376937+}69386938+68616939static void cs35l41_fixup_i2c_two(struct hda_codec *cdc, const struct hda_fixup *fix, int action)68626940{68636941 comp_generic_fixup(cdc, action, "i2c", "CSC3551", "-%s:00-cs35l41-hda.%d", 2);···76107528 ALC256_FIXUP_CHROME_BOOK,76117529 ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7,76127530 ALC287_FIXUP_LENOVO_SSID_17AA3820,75317531+ ALCXXX_FIXUP_CS35LXX,76137532};7614753376157534/* A special fixup for Lenovo C940 and Yoga Duet 7;···99409857 .type = HDA_FIXUP_FUNC,99419858 .v.func = alc287_fixup_lenovo_ssid_17aa3820,99429859 },98609860+ [ALCXXX_FIXUP_CS35LXX] = {98619861+ .type = HDA_FIXUP_FUNC,98629862+ .v.func = cs35lxx_autodet_fixup,98639863+ },99439864};9944986599459866static const struct snd_pci_quirk alc269_fixup_tbl[] = {···1035810271 SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),1035910272 SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),1036010273 SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),1027410274+ SND_PCI_QUIRK(0x103c, 0x8d01, "HP ZBook Power 14 G12", ALCXXX_FIXUP_CS35LXX),1027510275+ SND_PCI_QUIRK(0x103c, 0x8d08, "HP EliteBook 1045 14 G12", ALCXXX_FIXUP_CS35LXX),1027610276+ SND_PCI_QUIRK(0x103c, 0x8d85, "HP EliteBook 1040 14 G12", ALCXXX_FIXUP_CS35LXX),1027710277+ SND_PCI_QUIRK(0x103c, 0x8d86, "HP Elite x360 1040 14 G12", ALCXXX_FIXUP_CS35LXX),1027810278+ SND_PCI_QUIRK(0x103c, 0x8d8c, "HP EliteBook 830 13 G12", ALCXXX_FIXUP_CS35LXX),1027910279+ SND_PCI_QUIRK(0x103c, 0x8d8d, "HP Elite x360 830 13 G12", ALCXXX_FIXUP_CS35LXX),1028010280+ SND_PCI_QUIRK(0x103c, 0x8d8e, "HP EliteBook 840 14 G12", ALCXXX_FIXUP_CS35LXX),1028110281+ SND_PCI_QUIRK(0x103c, 0x8d8f, "HP EliteBook 840 14 G12", 
ALCXXX_FIXUP_CS35LXX),1028210282+ SND_PCI_QUIRK(0x103c, 0x8d90, "HP EliteBook 860 16 G12", ALCXXX_FIXUP_CS35LXX),1028310283+ SND_PCI_QUIRK(0x103c, 0x8d91, "HP ZBook Firefly 14 G12", ALCXXX_FIXUP_CS35LXX),1028410284+ SND_PCI_QUIRK(0x103c, 0x8d92, "HP ZBook Firefly 16 G12", ALCXXX_FIXUP_CS35LXX),1036110285 SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),1036210286 SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),1036310287 SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+9-5
sound/pci/hda/tas2781_hda_i2c.c
···22//33// TAS2781 HDA I2C driver44//55-// Copyright 2023 Texas Instruments, Inc.55+// Copyright 2023 - 2024 Texas Instruments, Inc.66//77// Author: Shenghao Ding <shenghao-ding@ti.com>88+// Current maintainer: Baojun Xu <baojun.xu@ti.com>891010+#include <asm/unaligned.h>911#include <linux/acpi.h>1012#include <linux/crc8.h>1113#include <linux/crc32.h>···521519 static const unsigned char rgno_array[CALIB_MAX] = {522520 0x74, 0x0c, 0x14, 0x70, 0x7c,523521 };524524- unsigned char *data;522522+ int offset = 0;525523 int i, j, rc;524524+ __be32 data;526525527526 for (i = 0; i < tas_priv->ndev; i++) {528528- data = tas_priv->cali_data.data +529529- i * TASDEVICE_SPEAKER_CALIBRATION_SIZE;530527 for (j = 0; j < CALIB_MAX; j++) {528528+ data = cpu_to_be32(529529+ *(uint32_t *)&tas_priv->cali_data.data[offset]);531530 rc = tasdevice_dev_bulk_write(tas_priv, i,532531 TASDEVICE_REG(0, page_array[j], rgno_array[j]),533533- &(data[4 * j]), 4);532532+ (unsigned char *)&data, 4);534533 if (rc < 0)535534 dev_err(tas_priv->dev,536535 "chn %d calib %d bulk_wr err = %d\n",537536 i, j, rc);537537+ offset += 4;538538 }539539 }540540}
···11+Why do we want a copy of kernel headers in tools?22+=================================================33+44+There used to be no copies, with tools/ code using kernel headers55+directly. From time to time tools/perf/ broke due to legitimate kernel66+hacking. At some point Linus complained about such direct usage. Then we77+adopted the current model.88+99+The way these headers are used in perf is not restricted to just1010+including them to compile something.1111+1212+They are sometimes used in scripts that convert defines into string1313+tables, etc, so some change may break one of these scripts, or new MSRs1414+may use some different #define pattern, etc.1515+1616+E.g.:1717+1818+ $ ls -1 tools/perf/trace/beauty/*.sh | head -51919+ tools/perf/trace/beauty/arch_errno_names.sh2020+ tools/perf/trace/beauty/drm_ioctl.sh2121+ tools/perf/trace/beauty/fadvise.sh2222+ tools/perf/trace/beauty/fsconfig.sh2323+ tools/perf/trace/beauty/fsmount.sh2424+ $2525+ $ tools/perf/trace/beauty/fadvise.sh2626+ static const char *fadvise_advices[] = {2727+ [0] = "NORMAL",2828+ [1] = "RANDOM",2929+ [2] = "SEQUENTIAL",3030+ [3] = "WILLNEED",3131+ [4] = "DONTNEED",3232+ [5] = "NOREUSE",3333+ };3434+ $3535+3636+The tools/perf/check-headers.sh script, part of the tools/ build3737+process, points out changes in the original files.3838+3939+So it's important not to touch the copies in tools/ when changing the4040+original kernel headers; that will be done later, when check-headers.sh4141+informs the perf tools hackers about the change.4242+4343+Another explanation from Ingo Molnar:4444+It's better than all the alternatives we tried so far:4545+4646+ - Symbolic links and direct #includes: this was the original approach but4747+ was pushed back on from the kernel side, when tooling modified the4848+ headers and broke them accidentally for kernel builds.4949+5050+ - Duplicate self-defined ABI headers like glibc: double the maintenance5151+ burden, double the chance for mistakes, plus
there's no tech-driven5252+ notification mechanism to look at new kernel side changes.5353+5454+What we are doing now is a third option:5555+5656+ - A software-enforced copy-on-write mechanism of kernel headers to5757+ tooling, driven by non-fatal warnings on the tooling side build when5858+ kernel headers get modified:5959+6060+ Warning: Kernel ABI header differences:6161+ diff -u tools/include/uapi/drm/i915_drm.h include/uapi/drm/i915_drm.h6262+ diff -u tools/include/uapi/linux/fs.h include/uapi/linux/fs.h6363+ diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h6464+ ...6565+6666+ The tooling policy is to always pick up the kernel side headers as-is,6767+ and integrate them into the tooling build. The warnings above serve as a6868+ notification to tooling maintainers that there are changes on the kernel6969+ side.7070+7171+We've been using this for many years now, and while it might seem hacky,7272+it works surprisingly well.7373+
···21632163 * supports this per context flag.21642164 */21652165#define I915_CONTEXT_PARAM_LOW_LATENCY 0xe21662166+21672167+/*21682168+ * I915_CONTEXT_PARAM_CONTEXT_IMAGE:21692169+ *21702170+ * Allows userspace to provide own context images.21712171+ *21722172+ * Note that this is a debug API not available on production kernel builds.21732173+ */21742174+#define I915_CONTEXT_PARAM_CONTEXT_IMAGE 0xf21662175/* Must be kept compact -- no holes and well documented */2167217621682177 /** @value: Context parameter value to be set or queried */···25722563 __u64 extensions; \25732564 struct i915_engine_class_instance engines[N__]; \25742565} __attribute__((packed)) name__25662566+25672567+struct i915_gem_context_param_context_image {25682568+ /** @engine: Engine class & instance to be configured. */25692569+ struct i915_engine_class_instance engine;25702570+25712571+ /** @flags: One of the supported flags or zero. */25722572+ __u32 flags;25732573+#define I915_CONTEXT_IMAGE_FLAG_ENGINE_INDEX (1u << 0)25742574+25752575+ /** @size: Size of the image blob pointed to by @image. */25762576+ __u32 size;25772577+25782578+ /** @mbz: Must be zero. */25792579+ __u32 mbz;25802580+25812581+ /** @image: Userspace memory containing the context image. */25822582+ __u64 image;25832583+} __attribute__((packed));2575258425762585/**25772586 * struct drm_i915_gem_context_create_ext_setparam - Context parameter
···192192/* Flags that describe what fields in emulation_failure hold valid data. */193193#define KVM_INTERNAL_ERROR_EMULATION_FLAG_INSTRUCTION_BYTES (1ULL << 0)194194195195+/*196196+ * struct kvm_run can be modified by userspace at any time, so KVM must be197197+ * careful to avoid TOCTOU bugs. In order to protect KVM, HINT_UNSAFE_IN_KVM()198198+ * renames fields in struct kvm_run from <symbol> to <symbol>__unsafe when199199+ * compiled into the kernel, ensuring that any use within KVM is obvious and200200+ * gets extra scrutiny.201201+ */202202+#ifdef __KERNEL__203203+#define HINT_UNSAFE_IN_KVM(_symbol) _symbol##__unsafe204204+#else205205+#define HINT_UNSAFE_IN_KVM(_symbol) _symbol206206+#endif207207+195208/* for KVM_RUN, returned by mmap(vcpu_fd, offset=0) */196209struct kvm_run {197210 /* in */198211 __u8 request_interrupt_window;199199- __u8 immediate_exit;212212+ __u8 HINT_UNSAFE_IN_KVM(immediate_exit);200213 __u8 padding1[6];201214202215 /* out */···931918#define KVM_CAP_GUEST_MEMFD 234932919#define KVM_CAP_VM_TYPES 235933920#define KVM_CAP_PRE_FAULT_MEMORY 236921921+#define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237922922+#define KVM_CAP_X86_GUEST_MODE 238934923935924struct kvm_irq_routing_irqchip {936925 __u32 irqchip;
···11+# SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note12#23# 64-bit system call numbers and entry vectors34#45# The format is:55-# <number> <abi> <name> <entry point>66+# <number> <abi> <name> <entry point> [<compat entry point> [noreturn]]67#78# The __x64_sys_*() stubs are created on-the-fly for sys_*() system calls89#···696857 common fork sys_fork706958 common vfork sys_vfork717059 64 execve sys_execve7272-60 common exit sys_exit7171+60 common exit sys_exit - noreturn737261 common wait4 sys_wait4747362 common kill sys_kill757463 common uname sys_newuname···240239228 common clock_gettime sys_clock_gettime241240229 common clock_getres sys_clock_getres242241230 common clock_nanosleep sys_clock_nanosleep243243-231 common exit_group sys_exit_group242242+231 common exit_group sys_exit_group - noreturn244243232 common epoll_wait sys_epoll_wait245244233 common epoll_ctl sys_epoll_ctl246245234 common tgkill sys_tgkill···344343332 common statx sys_statx345344333 common io_pgetevents sys_io_pgetevents346345334 common rseq sys_rseq346346+335 common uretprobe sys_uretprobe347347# don't use numbers 387 through 423, add new calls after the last348348# 'common' entry349349424 common pidfd_send_signal sys_pidfd_send_signal
···7676 __kernel_size_t msg_controllen; /* ancillary data buffer length */7777 struct kiocb *msg_iocb; /* ptr to iocb for async requests */7878 struct ubuf_info *msg_ubuf;7979- int (*sg_from_iter)(struct sock *sk, struct sk_buff *skb,7979+ int (*sg_from_iter)(struct sk_buff *skb,8080 struct iov_iter *from, size_t length);8181};8282···442442extern int __sys_socket(int family, int type, int protocol);443443extern struct file *__sys_socket_file(int family, int type, int protocol);444444extern int __sys_bind(int fd, struct sockaddr __user *umyaddr, int addrlen);445445+extern int __sys_bind_socket(struct socket *sock, struct sockaddr_storage *address,446446+ int addrlen);445447extern int __sys_connect_file(struct file *file, struct sockaddr_storage *addr,446448 int addrlen, int file_flags);447449extern int __sys_connect(int fd, struct sockaddr __user *uservaddr,448450 int addrlen);449451extern int __sys_listen(int fd, int backlog);452452+extern int __sys_listen_socket(struct socket *sock, int backlog);450453extern int __sys_getsockname(int fd, struct sockaddr __user *usockaddr,451454 int __user *usockaddr_len);452455extern int __sys_getpeername(int fd, struct sockaddr __user *usockaddr,
+161-2
tools/perf/trace/beauty/include/uapi/linux/fs.h
···329329/* per-IO negation of O_APPEND */330330#define RWF_NOAPPEND ((__force __kernel_rwf_t)0x00000020)331331332332+/* Atomic Write */333333+#define RWF_ATOMIC ((__force __kernel_rwf_t)0x00000040)334334+332335/* mask of flags supported by the kernel */333336#define RWF_SUPPORTED (RWF_HIPRI | RWF_DSYNC | RWF_SYNC | RWF_NOWAIT |\334334- RWF_APPEND | RWF_NOAPPEND)337337+ RWF_APPEND | RWF_NOAPPEND | RWF_ATOMIC)338338+339339+#define PROCFS_IOCTL_MAGIC 'f'335340336341/* Pagemap ioctl */337337-#define PAGEMAP_SCAN _IOWR('f', 16, struct pm_scan_arg)342342+#define PAGEMAP_SCAN _IOWR(PROCFS_IOCTL_MAGIC, 16, struct pm_scan_arg)338343339344/* Bitmasks provided in pm_scan_args masks and reported in page_region.categories. */340345#define PAGE_IS_WPALLOWED (1 << 0)···396391 __u64 category_mask;397392 __u64 category_anyof_mask;398393 __u64 return_mask;394394+};395395+396396+/* /proc/<pid>/maps ioctl */397397+#define PROCMAP_QUERY _IOWR(PROCFS_IOCTL_MAGIC, 17, struct procmap_query)398398+399399+enum procmap_query_flags {400400+ /*401401+ * VMA permission flags.402402+ *403403+ * Can be used as part of procmap_query.query_flags field to look up404404+ * only VMAs satisfying specified subset of permissions. E.g., specifying405405+ * PROCMAP_QUERY_VMA_READABLE only will return both readable and read/write VMAs,406406+ * while having PROCMAP_QUERY_VMA_READABLE | PROCMAP_QUERY_VMA_WRITABLE will only407407+ * return read/write VMAs, though both executable/non-executable and408408+ * private/shared will be ignored.409409+ *410410+ * PROCMAP_QUERY_VMA_* flags are also returned in procmap_query.vma_flags411411+ * field to specify actual VMA permissions.412412+ */413413+ PROCMAP_QUERY_VMA_READABLE = 0x01,414414+ PROCMAP_QUERY_VMA_WRITABLE = 0x02,415415+ PROCMAP_QUERY_VMA_EXECUTABLE = 0x04,416416+ PROCMAP_QUERY_VMA_SHARED = 0x08,417417+ /*418418+ * Query modifier flags.419419+ *420420+ * By default VMA that covers provided address is returned, or -ENOENT421421+ * is returned. 
With PROCMAP_QUERY_COVERING_OR_NEXT_VMA flag set, closest422422+ * VMA with vma_start > addr will be returned if no covering VMA is423423+ * found.424424+ *425425+ * PROCMAP_QUERY_FILE_BACKED_VMA instructs query to consider only VMAs that426426+ * have file backing. Can be combined with PROCMAP_QUERY_COVERING_OR_NEXT_VMA427427+ * to iterate all VMAs with file backing.428428+ */429429+ PROCMAP_QUERY_COVERING_OR_NEXT_VMA = 0x10,430430+ PROCMAP_QUERY_FILE_BACKED_VMA = 0x20,431431+};432432+433433+/*434434+ * Input/output argument structure passed into ioctl() call. It can be used435435+ * to query a set of VMAs (Virtual Memory Areas) of a process.436436+ *437437+ * Each field can be one of three kinds, marked in a short comment to the438438+ * right of the field:439439+ * - "in", input argument, user has to provide this value, kernel doesn't modify it;440440+ * - "out", output argument, kernel sets this field with VMA data;441441+ * - "in/out", input and output argument; user provides initial value (used442442+ * to specify maximum allowable buffer size), and kernel sets it to actual443443+ * amount of data written (or zero, if there is no data).444444+ *445445+ * If a matching VMA is found (according to criteria specified by446446+ * query_addr/query_flags), all the out fields are filled out, and ioctl()447447+ * returns 0. If there is no matching VMA, -ENOENT will be returned.448448+ * In case of any other error, negative error code other than -ENOENT is449449+ * returned.450450+ *451451+ * Most of the data is similar to the one returned as text in /proc/<pid>/maps452452+ * file, but procmap_query provides more querying flexibility.
There are no453453+ * consistency guarantees between subsequent ioctl() calls, but data returned454454+ * for matched VMA is self-consistent.455455+ */456456+struct procmap_query {457457+ /* Query struct size, for backwards/forward compatibility */458458+ __u64 size;459459+ /*460460+ * Query flags, a combination of enum procmap_query_flags values.461461+ * Defines query filtering and behavior, see enum procmap_query_flags.462462+ *463463+ * Input argument, provided by user. Kernel doesn't modify it.464464+ */465465+ __u64 query_flags; /* in */466466+ /*467467+ * Query address. By default, VMA that covers this address will468468+ * be looked up. PROCMAP_QUERY_* flags above modify this default469469+ * behavior further.470470+ *471471+ * Input argument, provided by user. Kernel doesn't modify it.472472+ */473473+ __u64 query_addr; /* in */474474+ /* VMA starting (inclusive) and ending (exclusive) address, if VMA is found. */475475+ __u64 vma_start; /* out */476476+ __u64 vma_end; /* out */477477+ /* VMA permissions flags. A combination of PROCMAP_QUERY_VMA_* flags. */478478+ __u64 vma_flags; /* out */479479+ /* VMA backing page size granularity. */480480+ __u64 vma_page_size; /* out */481481+ /*482482+ * VMA file offset. If VMA has file backing, this specifies offset483483+ * within the file that VMA's start address corresponds to.484484+ * Is set to zero if VMA has no backing file.485485+ */486486+ __u64 vma_offset; /* out */487487+ /* Backing file's inode number, or zero, if VMA has no backing file. */488488+ __u64 inode; /* out */489489+ /* Backing file's device major/minor number, or zero, if VMA has no backing file. */490490+ __u32 dev_major; /* out */491491+ __u32 dev_minor; /* out */492492+ /*493493+ * If set to non-zero value, signals the request to return VMA name494494+ * (i.e., VMA's backing file's absolute path, with " (deleted)" suffix495495+ * appended, if file was unlinked from FS) for matched VMA. 
VMA name496496+ * can also be some special name (e.g., "[heap]", "[stack]") or could497497+ * be even user-supplied with prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME).498498+ *499499+ * Kernel will set this field to zero, if VMA has no associated name.500500+ * Otherwise kernel will return actual amount of bytes filled in501501+ * user-supplied buffer (see vma_name_addr field below), including the502502+ * terminating zero.503503+ *504504+ * If VMA name is longer that user-supplied maximum buffer size,505505+ * -E2BIG error is returned.506506+ *507507+ * If this field is set to non-zero value, vma_name_addr should point508508+ * to valid user space memory buffer of at least vma_name_size bytes.509509+ * If set to zero, vma_name_addr should be set to zero as well510510+ */511511+ __u32 vma_name_size; /* in/out */512512+ /*513513+ * If set to non-zero value, signals the request to extract and return514514+ * VMA's backing file's build ID, if the backing file is an ELF file515515+ * and it contains embedded build ID.516516+ *517517+ * Kernel will set this field to zero, if VMA has no backing file,518518+ * backing file is not an ELF file, or ELF file has no build ID519519+ * embedded.520520+ *521521+ * Build ID is a binary value (not a string). 
Kernel will set522522+ * build_id_size field to exact number of bytes used for build ID.523523+ * If build ID is requested and present, but needs more bytes than524524+ * user-supplied maximum buffer size (see build_id_addr field below),525525+ * -E2BIG error will be returned.526526+ *527527+ * If this field is set to non-zero value, build_id_addr should point528528+ * to valid user space memory buffer of at least build_id_size bytes.529529+ * If set to zero, build_id_addr should be set to zero as well530530+ */531531+ __u32 build_id_size; /* in/out */532532+ /*533533+ * User-supplied address of a buffer of at least vma_name_size bytes534534+ * for kernel to fill with matched VMA's name (see vma_name_size field535535+ * description above for details).536536+ *537537+ * Should be set to zero if VMA name should not be returned.538538+ */539539+ __u64 vma_name_addr; /* in */540540+ /*541541+ * User-supplied address of a buffer of at least build_id_size bytes542542+ * for kernel to fill with matched VMA's ELF build ID, if available543543+ * (see build_id_size field description above for details).544544+ *545545+ * Should be set to zero if build ID should not be returned.546546+ */547547+ __u64 build_id_addr; /* in */399548};400549401550#endif /* _UAPI_LINUX_FS_H */
···154154 */
155155struct statmount {
156156 __u32 size; /* Total size, including strings */
157157- __u32 __spare1;
157157+ __u32 mnt_opts; /* [str] Mount options of the mount */
158158 __u64 mask; /* What results were written */
159159 __u32 sb_dev_major; /* Device ID */
160160 __u32 sb_dev_minor;
···172172 __u64 propagate_from; /* Propagation from in current namespace */
173173 __u32 mnt_root; /* [str] Root of mount relative to root of fs */
174174 __u32 mnt_point; /* [str] Mountpoint relative to current root */
175175- __u64 __spare2[50];
175175+ __u64 mnt_ns_id; /* ID of the mount namespace */
176176+ __u64 __spare2[49];
176177 char str[]; /* Variable size part containing strings */
177178};
178179
···189188 __u32 spare;
190189 __u64 mnt_id;
191190 __u64 param;
191191+ __u64 mnt_ns_id;
192192};
193193
194194/* List of all mnt_id_req versions. */
195195#define MNT_ID_REQ_SIZE_VER0 24 /* sizeof first published struct */
196196+#define MNT_ID_REQ_SIZE_VER1 32 /* sizeof second published struct */
196197
197198/*
198199 * @mask bits for statmount(2)
···205202#define STATMOUNT_MNT_ROOT 0x00000008U /* Want/got mnt_root */
206203#define STATMOUNT_MNT_POINT 0x00000010U /* Want/got mnt_point */
207204#define STATMOUNT_FS_TYPE 0x00000020U /* Want/got fs_type */
205205+#define STATMOUNT_MNT_NS_ID 0x00000040U /* Want/got mnt_ns_id */
206206+#define STATMOUNT_MNT_OPTS 0x00000080U /* Want/got mnt_opts */
208207
209208/*
210209 * Special @mnt_id values that can be passed to listmount
211210 */
212211#define LSMT_ROOT 0xffffffffffffffff /* root mount */
212212+#define LISTMOUNT_REVERSE (1 << 0) /* List later mounts first */
213213
214214#endif /* _UAPI_LINUX_MOUNT_H */
+10-2
tools/perf/trace/beauty/include/uapi/linux/stat.h
···126126 __u64 stx_mnt_id;
127127 __u32 stx_dio_mem_align; /* Memory buffer alignment for direct I/O */
128128 __u32 stx_dio_offset_align; /* File offset alignment for direct I/O */
129129- __u64 stx_subvol; /* Subvolume identifier */
130129 /* 0xa0 */
131131- __u64 __spare3[11]; /* Spare space for future expansion */
130130+ __u64 stx_subvol; /* Subvolume identifier */
131131+ __u32 stx_atomic_write_unit_min; /* Min atomic write unit in bytes */
132132+ __u32 stx_atomic_write_unit_max; /* Max atomic write unit in bytes */
133133+ /* 0xb0 */
134134+ __u32 stx_atomic_write_segments_max; /* Max atomic write segment count */
135135+ __u32 __spare1[1];
136136+ /* 0xb8 */
137137+ __u64 __spare3[9]; /* Spare space for future expansion */
132138 /* 0x100 */
133139};
134140
···163157#define STATX_DIOALIGN 0x00002000U /* Want/got direct I/O alignment info */
164158#define STATX_MNT_ID_UNIQUE 0x00004000U /* Want/got extended stx_mount_id */
165159#define STATX_SUBVOL 0x00008000U /* Want/got stx_subvol */
160160+#define STATX_WRITE_ATOMIC 0x00010000U /* Want/got atomic_write_* fields */
166161
167162#define STATX__RESERVED 0x80000000U /* Reserved for future struct statx expansion */
168163
···199192#define STATX_ATTR_MOUNT_ROOT 0x00002000 /* Root of a mount */
200193#define STATX_ATTR_VERITY 0x00100000 /* [I] Verity protected file */
201194#define STATX_ATTR_DAX 0x00200000 /* File is currently in DAX state */
195195+#define STATX_ATTR_WRITE_ATOMIC 0x00400000 /* File supports atomic write operations */
202196
203197
204198#endif /* _UAPI_LINUX_STAT_H */
···142142 * *
143143 *****************************************************************************/
144144
145145-#define SNDRV_PCM_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 17)
145145+#define SNDRV_PCM_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 18)
146146
147147typedef unsigned long snd_pcm_uframes_t;
148148typedef signed long snd_pcm_sframes_t;
···334334 unsigned char id[16];
335335 unsigned short id16[8];
336336 unsigned int id32[4];
337337-};
337337+} __attribute__((deprecated));
338338
339339struct snd_pcm_info {
340340 unsigned int device; /* RO/WR (control): device number */
···348348 int dev_subclass; /* SNDRV_PCM_SUBCLASS_* */
349349 unsigned int subdevices_count;
350350 unsigned int subdevices_avail;
351351- union snd_pcm_sync_id sync; /* hardware synchronization ID */
351351+ unsigned char pad1[16]; /* was: hardware synchronization ID */
352352 unsigned char reserved[64]; /* reserved for future... */
353353};
354354
···420420 unsigned int rate_num; /* R: rate numerator */
421421 unsigned int rate_den; /* R: rate denominator */
422422 snd_pcm_uframes_t fifo_size; /* R: chip FIFO size in frames */
423423- unsigned char reserved[64]; /* reserved for future */
423423+ unsigned char sync[16]; /* R: synchronization ID (perfect sync - one clock source) */
424424+ unsigned char reserved[48]; /* reserved for future */
424425
425426
426427enum {
+54
tools/testing/selftests/bpf/progs/iters.c
···14321432 return sum;
14331433}
14341434
14351435+__u32 upper, select_n, result;
14361436+__u64 global;
14371437+
14381438+static __noinline bool nest_2(char *str)
14391439+{
14401440+ /* some insns (including branch insns) to ensure stacksafe() is triggered
14411441+ * in nest_2(). This way, stacksafe() can compare frame associated with nest_1().
14421442+ */
14431443+ if (str[0] == 't')
14441444+ return true;
14451445+ if (str[1] == 'e')
14461446+ return true;
14471447+ if (str[2] == 's')
14481448+ return true;
14491449+ if (str[3] == 't')
14501450+ return true;
14511451+ return false;
14521452+}
14531453+
14541454+static __noinline bool nest_1(int n)
14551455+{
14561456+ /* case 0: allocate stack, case 1: no allocate stack */
14571457+ switch (n) {
14581458+ case 0: {
14591459+ char comm[16];
14601460+
14611461+ if (bpf_get_current_comm(comm, 16))
14621462+ return false;
14631463+ return nest_2(comm);
14641464+ }
14651465+ case 1:
14661466+ return nest_2((char *)&global);
14671467+ default:
14681468+ return false;
14691469+ }
14701470+}
14711471+
14721472+SEC("raw_tp")
14731473+__success
14741474+int iter_subprog_check_stacksafe(const void *ctx)
14751475+{
14761476+ long i;
14771477+
14781478+ bpf_for(i, 0, upper) {
14791479+ if (!nest_1(select_n)) {
14801480+ result = 1;
14811481+ return 0;
14821482+ }
14831483+ }
14841484+
14851485+ result = 2;
14861486+ return 0;
14871487+}
14881488+
14351489char _license[] SEC("license") = "GPL";
+35
tools/testing/selftests/core/close_range_test.c
···589589 EXPECT_EQ(close(fd3), 0);
590590}
591591
592592+TEST(close_range_bitmap_corruption)
593593+{
594594+ pid_t pid;
595595+ int status;
596596+ struct __clone_args args = {
597597+ .flags = CLONE_FILES,
598598+ .exit_signal = SIGCHLD,
599599+ };
600600+
601601+ /* get the first 128 descriptors open */
602602+ for (int i = 2; i < 128; i++)
603603+ EXPECT_GE(dup2(0, i), 0);
604604+
605605+ /* get descriptor table shared */
606606+ pid = sys_clone3(&args, sizeof(args));
607607+ ASSERT_GE(pid, 0);
608608+
609609+ if (pid == 0) {
610610+ /* unshare and truncate descriptor table down to 64 */
611611+ if (sys_close_range(64, ~0U, CLOSE_RANGE_UNSHARE))
612612+ exit(EXIT_FAILURE);
613613+
614614+ ASSERT_EQ(fcntl(64, F_GETFD), -1);
615615+ /* ... and verify that the range 64..127 is not
616616+ stuck "fully used" according to secondary bitmap */
617617+ EXPECT_EQ(dup(0), 64)
618618+ exit(EXIT_FAILURE);
619619+ exit(EXIT_SUCCESS);
620620+ }
621621+
622622+ EXPECT_EQ(waitpid(pid, &status, 0), pid);
623623+ EXPECT_EQ(true, WIFEXITED(status));
624624+ EXPECT_EQ(0, WEXITSTATUS(status));
625625+}
626626+
592627TEST_HARNESS_MAIN
···8989 int fd, ret = -1;
9090 int compaction_index = 0;
9191 char nr_hugepages[20] = {0};
9292- char init_nr_hugepages[20] = {0};
9292+ char init_nr_hugepages[24] = {0};
9393
9494- sprintf(init_nr_hugepages, "%lu", initial_nr_hugepages);
9494+ snprintf(init_nr_hugepages, sizeof(init_nr_hugepages),
9595+ "%lu", initial_nr_hugepages);
9596
9697 /* We want to test with 80% of available memory. Else, OOM killer comes
9798 in to play */
+3
tools/testing/selftests/mm/run_vmtests.sh
···374374# MADV_POPULATE_READ and MADV_POPULATE_WRITE tests
375375CATEGORY="madv_populate" run_test ./madv_populate
376376
377377+if [ -x ./memfd_secret ]
378378+then
377379(echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope 2>&1) | tap_prefix
378380CATEGORY="memfd_secret" run_test ./memfd_secret
381381+fi
379382
380383# KSM KSM_MERGE_TIME_HUGE_PAGES test with size of 100
381384CATEGORY="ksm" run_test ./ksm_tests -H -s 100
+1-1
tools/testing/selftests/net/af_unix/msg_oob.c
···209209
210210static void __recvpair(struct __test_metadata *_metadata,
211211 FIXTURE_DATA(msg_oob) *self,
212212- const void *expected_buf, int expected_len,
212212+ const char *expected_buf, int expected_len,
213213 int buf_len, int flags)
214214{
215215 int i, ret[2], recv_errno[2], expected_errno = 0;
+1
tools/testing/selftests/net/lib.sh
···146146
147147 for ns in "$@"; do
148148 [ -z "${ns}" ] && continue
149149+ ip netns pids "${ns}" 2> /dev/null | xargs -r kill || true
149150 ip netns delete "${ns}" &> /dev/null || true
150151 if ! busywait $BUSYWAIT_TIMEOUT ip netns list \| grep -vq "^$ns$" &> /dev/null; then
151152 echo "Warn: Failed to remove namespace $ns"