@@ -435,8 +435,8 @@
 Controlling Controllers
 -----------------------
 
-Availablity
-~~~~~~~~~~~
+Availability
+~~~~~~~~~~~~
 
 A controller is available in a cgroup when it is supported by the kernel (i.e.,
 compiled in, not disabled and not attached to a v1 hierarchy) and listed in the
+6 -5
Documentation/core-api/symbol-namespaces.rst
@@ -76,20 +76,21 @@
 within the corresponding compilation unit before the #include for
 <linux/export.h>. Typically it's placed before the first #include statement.
 
-Using the EXPORT_SYMBOL_GPL_FOR_MODULES() macro
------------------------------------------------
+Using the EXPORT_SYMBOL_FOR_MODULES() macro
+-------------------------------------------
 
 Symbols exported using this macro are put into a module namespace. This
-namespace cannot be imported.
+namespace cannot be imported. These exports are GPL-only as they are only
+intended for in-tree modules.
 
 The macro takes a comma separated list of module names, allowing only those
 modules to access this symbol. Simple tail-globs are supported.
 
 For example::
 
-  EXPORT_SYMBOL_GPL_FOR_MODULES(preempt_notifier_inc, "kvm,kvm-*")
+  EXPORT_SYMBOL_FOR_MODULES(preempt_notifier_inc, "kvm,kvm-*")
 
-will limit usage of this symbol to modules whoes name matches the given
+will limit usage of this symbol to modules whose name matches the given
 patterns.
 
 How to use Symbols exported in Namespaces
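The "simple tail-glob" semantics documented above (a comma-separated list such as `"kvm,kvm-*"`, where only a trailing `*` is a wildcard) can be sketched in plain C. This is an illustrative model only; the function names and the fixed-size pattern buffer are assumptions, not the kernel's implementation:

```c
#include <stdbool.h>
#include <string.h>

/* Match one pattern against a module name; only a trailing '*' globs. */
static bool tail_glob_match(const char *pat, const char *name)
{
	size_t len = strlen(pat);

	if (len && pat[len - 1] == '*')
		return strncmp(pat, name, len - 1) == 0;
	return strcmp(pat, name) == 0;
}

/* Walk a comma-separated pattern list such as "kvm,kvm-*". */
static bool list_match(const char *list, const char *name)
{
	const char *p = list;

	while (*p) {
		const char *comma = strchr(p, ',');
		size_t n = comma ? (size_t)(comma - p) : strlen(p);
		char pat[64];

		if (n < sizeof(pat)) {
			memcpy(pat, p, n);
			pat[n] = '\0';
			if (tail_glob_match(pat, name))
				return true;
		}
		p = comma ? comma + 1 : p + n;
	}
	return false;
}
```

With the example list from the document, `list_match("kvm,kvm-*", "kvm-amd")` matches while an unrelated module name does not.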
@@ -7,7 +7,7 @@
 title: Infineon Buck Regulators with PMBUS interfaces
 
 maintainers:
-  - Not Me.
+  - Guenter Roeck <linux@roeck-us.net>
 
 allOf:
   - $ref: regulator.yaml#
+2
Documentation/networking/mptcp-sysctl.rst
@@ -12,5 +12,7 @@
 	resent to an MPTCP peer that has not acknowledged a previous
 	ADD_ADDR message.
 
+	Do not retransmit if set to 0.
+
 	The default value matches TCP_RTO_MAX. This is a per-namespace
 	sysctl.
+16 -9
Documentation/process/security-bugs.rst
@@ -8,8 +8,22 @@
 disclosed as quickly as possible. Please report security bugs to the
 Linux kernel security team.
 
-Contact
--------
+The security team and maintainers almost always require additional
+information beyond what was initially provided in a report and rely on
+active and efficient collaboration with the reporter to perform further
+testing (e.g., verifying versions, configuration options, mitigations, or
+patches). Before contacting the security team, the reporter must ensure
+they are available to explain their findings, engage in discussions, and
+run additional tests. Reports where the reporter does not respond promptly
+or cannot effectively discuss their findings may be abandoned if the
+communication does not quickly improve.
+
+As it is with any bug, the more information provided the easier it
+will be to diagnose and fix. Please review the procedure outlined in
+'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what
+information is helpful. Any exploit code is very helpful and will not
+be released without consent from the reporter unless it has already been
+made public.
 
 The Linux kernel security team can be contacted by email at
 <security@kernel.org>. This is a private list of security officers
@@ -32,13 +18,6 @@
 that can speed up the process considerably. It is possible that the
 security team will bring in extra help from area maintainers to
 understand and fix the security vulnerability.
-
-As it is with any bug, the more information provided the easier it
-will be to diagnose and fix. Please review the procedure outlined in
-'Documentation/admin-guide/reporting-issues.rst' if you are unclear about what
-information is helpful. Any exploit code is very helpful and will not
-be released without consent from the reporter unless it has already been
-made public.
 
 Please send plain text emails without attachments where possible.
 It is much harder to have a context-quoted discussion about a complex
+2 -2
Documentation/userspace-api/iommufd.rst
@@ -43,7 +43,7 @@
 
 - IOMMUFD_OBJ_HWPT_PAGING, representing an actual hardware I/O page table
   (i.e. a single struct iommu_domain) managed by the iommu driver. "PAGING"
-  primarly indicates this type of HWPT should be linked to an IOAS. It also
+  primarily indicates this type of HWPT should be linked to an IOAS. It also
   indicates that it is backed by an iommu_domain with __IOMMU_DOMAIN_PAGING
   feature flag. This can be either an UNMANAGED stage-1 domain for a device
   running in the user space, or a nesting parent stage-2 domain for mappings
@@ -76,7 +76,7 @@
 
 * Security namespace for guest owned ID, e.g. guest-controlled cache tags
 * Non-device-affiliated event reporting, e.g. invalidation queue errors
-* Access to a sharable nesting parent pagetable across physical IOMMUs
+* Access to a shareable nesting parent pagetable across physical IOMMUs
 * Virtualization of various platforms IDs, e.g. RIDs and others
 * Delivery of paravirtualized invalidation
 * Direct assigned invalidation queues
+34 -4
MAINTAINERS
@@ -8427,6 +8427,17 @@
 F:	drivers/gpu/drm/scheduler/
 F:	include/drm/gpu_scheduler.h
 
+DRM GPUVM
+M:	Danilo Krummrich <dakr@kernel.org>
+R:	Matthew Brost <matthew.brost@intel.com>
+R:	Thomas Hellström <thomas.hellstrom@linux.intel.com>
+R:	Alice Ryhl <aliceryhl@google.com>
+L:	dri-devel@lists.freedesktop.org
+S:	Supported
+T:	git https://gitlab.freedesktop.org/drm/misc/kernel.git
+F:	drivers/gpu/drm/drm_gpuvm.c
+F:	include/drm/drm_gpuvm.h
+
 DRM LOG
 M:	Jocelyn Falempe <jfalempe@redhat.com>
 M:	Javier Martinez Canillas <javierm@redhat.com>
@@ -10667,7 +10656,8 @@
 F:	block/partitions/efi.*
 
 HABANALABS PCI DRIVER
-M:	Yaron Avizrat <yaron.avizrat@intel.com>
+M:	Koby Elbaz <koby.elbaz@intel.com>
+M:	Konstantin Sinyuk <konstantin.sinyuk@intel.com>
 L:	dri-devel@lists.freedesktop.org
 S:	Supported
 C:	irc://irc.oftc.net/dri-devel
@@ -11026,7 +11014,7 @@
 F:	drivers/perf/hisilicon/hns3_pmu.c
 
 HISILICON I2C CONTROLLER DRIVER
-M:	Yicong Yang <yangyicong@hisilicon.com>
+M:	Devyn Liu <liudingyuan@h-partners.com>
 L:	linux-i2c@vger.kernel.org
 S:	Maintained
 W:	https://www.hisilicon.com
@@ -12294,7 +12282,6 @@
 F:	include/linux/net/intel/*/
 
 INTEL ETHERNET PROTOCOL DRIVER FOR RDMA
-M:	Mustafa Ismail <mustafa.ismail@intel.com>
 M:	Tatyana Nikolova <tatyana.e.nikolova@intel.com>
 L:	linux-rdma@vger.kernel.org
 S:	Supported
@@ -16070,6 +16059,23 @@
 F:	mm/migrate.c
 F:	mm/migrate_device.c
 
+MEMORY MANAGEMENT - MGLRU (MULTI-GEN LRU)
+M:	Andrew Morton <akpm@linux-foundation.org>
+M:	Axel Rasmussen <axelrasmussen@google.com>
+M:	Yuanchu Xie <yuanchu@google.com>
+R:	Wei Xu <weixugc@google.com>
+L:	linux-mm@kvack.org
+S:	Maintained
+W:	http://www.linux-mm.org
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
+F:	Documentation/admin-guide/mm/multigen_lru.rst
+F:	Documentation/mm/multigen_lru.rst
+F:	include/linux/mm_inline.h
+F:	include/linux/mmzone.h
+F:	mm/swap.c
+F:	mm/vmscan.c
+F:	mm/workingset.c
+
 MEMORY MANAGEMENT - MISC
 M:	Andrew Morton <akpm@linux-foundation.org>
 M:	David Hildenbrand <david@redhat.com>
@@ -16277,8 +16249,10 @@
 W:	http://www.linux-mm.org
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
 F:	rust/helpers/mm.c
+F:	rust/helpers/page.c
 F:	rust/kernel/mm.rs
 F:	rust/kernel/mm/
+F:	rust/kernel/page.rs
 
 MEMORY MAPPING
 M:	Andrew Morton <akpm@linux-foundation.org>
@@ -22205,7 +22175,7 @@
 
 S390 NETWORK DRIVERS
 M:	Alexandra Winter <wintera@linux.ibm.com>
-M:	Thorsten Winkler <twinkler@linux.ibm.com>
+R:	Aswin Karuvally <aswin@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 L:	netdev@vger.kernel.org
 S:	Supported
@@ -102,7 +102,13 @@
 
 ifdef CONFIG_OBJTOOL
 ifdef CONFIG_CC_HAS_ANNOTATE_TABLEJUMP
+# The annotate-tablejump option can not be passed to LLVM backend when LTO is enabled.
+# Ensure it is aware of linker with LTO, '--loongarch-annotate-tablejump' also needs to
+# be passed via '-mllvm' to ld.lld.
 KBUILD_CFLAGS			+= -mannotate-tablejump
+ifdef CONFIG_LTO_CLANG
+KBUILD_LDFLAGS			+= -mllvm --loongarch-annotate-tablejump
+endif
 else
 KBUILD_CFLAGS			+= -fno-jump-tables # keep compatibility with older compilers
 endif
@@ -8,6 +8,7 @@
 #include <linux/module.h>
 #include <linux/moduleloader.h>
 #include <linux/ftrace.h>
+#include <linux/sort.h>
 
 Elf_Addr module_emit_got_entry(struct module *mod, Elf_Shdr *sechdrs, Elf_Addr val)
 {
@@ -62,39 +61,38 @@
 	return (Elf_Addr)&plt[nr];
 }
 
-static int is_rela_equal(const Elf_Rela *x, const Elf_Rela *y)
+#define cmp_3way(a, b) ((a) < (b) ? -1 : (a) > (b))
+
+static int compare_rela(const void *x, const void *y)
 {
-	return x->r_info == y->r_info && x->r_addend == y->r_addend;
-}
+	int ret;
+	const Elf_Rela *rela_x = x, *rela_y = y;
 
-static bool duplicate_rela(const Elf_Rela *rela, int idx)
-{
-	int i;
+	ret = cmp_3way(rela_x->r_info, rela_y->r_info);
+	if (ret == 0)
+		ret = cmp_3way(rela_x->r_addend, rela_y->r_addend);
 
-	for (i = 0; i < idx; i++) {
-		if (is_rela_equal(&rela[i], &rela[idx]))
-			return true;
-	}
-
-	return false;
+	return ret;
 }
 
 static void count_max_entries(Elf_Rela *relas, int num,
			      unsigned int *plts, unsigned int *gots)
 {
-	unsigned int i, type;
+	unsigned int i;
+
+	sort(relas, num, sizeof(Elf_Rela), compare_rela, NULL);
 
 	for (i = 0; i < num; i++) {
-		type = ELF_R_TYPE(relas[i].r_info);
-		switch (type) {
+		if (i && !compare_rela(&relas[i-1], &relas[i]))
+			continue;
+
+		switch (ELF_R_TYPE(relas[i].r_info)) {
 		case R_LARCH_SOP_PUSH_PLT_PCREL:
 		case R_LARCH_B26:
-			if (!duplicate_rela(relas, i))
-				(*plts)++;
+			(*plts)++;
 			break;
 		case R_LARCH_GOT_PC_HI20:
-			if (!duplicate_rela(relas, i))
-				(*gots)++;
+			(*gots)++;
 			break;
 		default:
 			break; /* Do nothing. */
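The pattern this patch switches to, replacing an O(n²) duplicate scan with one sort followed by an adjacent-element comparison, can be sketched in userspace C. Names here are illustrative (`qsort` stands in for the kernel's `sort()`); the three-way comparator mirrors the patch's `cmp_3way`:

```c
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for Elf_Rela: two fields that together identify an entry. */
struct rela {
	uint64_t r_info;
	int64_t r_addend;
};

#define cmp_3way(a, b) ((a) < (b) ? -1 : (a) > (b))

static int compare_rela(const void *x, const void *y)
{
	const struct rela *a = x, *b = y;
	int ret = cmp_3way(a->r_info, b->r_info);

	return ret ? ret : cmp_3way(a->r_addend, b->r_addend);
}

/* Sort once, then count only entries differing from their predecessor. */
static unsigned int count_unique(struct rela *relas, size_t num)
{
	unsigned int count = 0;
	size_t i;

	qsort(relas, num, sizeof(*relas), compare_rela);
	for (i = 0; i < num; i++) {
		if (i && !compare_rela(&relas[i - 1], &relas[i]))
			continue;	/* duplicate of the previous entry */
		count++;
	}
	return count;
}
```

After sorting, equal entries are adjacent, so one linear pass suffices; total cost drops from O(n²) comparisons to O(n log n).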
+5 -5
arch/loongarch/kernel/signal.c
@@ -677,17 +677,17 @@
 	for (i = 1; i < 32; i++)
 		err |= __put_user(regs->regs[i], &sc->sc_regs[i]);
 
+#ifdef CONFIG_CPU_HAS_LBT
+	if (extctx->lbt.addr)
+		err |= protected_save_lbt_context(extctx);
+#endif
+
 	if (extctx->lasx.addr)
 		err |= protected_save_lasx_context(extctx);
 	else if (extctx->lsx.addr)
 		err |= protected_save_lsx_context(extctx);
 	else if (extctx->fpu.addr)
 		err |= protected_save_fpu_context(extctx);
-
-#ifdef CONFIG_CPU_HAS_LBT
-	if (extctx->lbt.addr)
-		err |= protected_save_lbt_context(extctx);
-#endif
 
 	/* Set the "end" magic */
 	info = (struct sctx_info *)extctx->end.addr;
+22
arch/loongarch/kernel/time.c
@@ -5,6 +5,7 @@
  * Copyright (C) 2020-2022 Loongson Technology Corporation Limited
  */
 #include <linux/clockchips.h>
+#include <linux/cpuhotplug.h>
 #include <linux/delay.h>
 #include <linux/export.h>
 #include <linux/init.h>
@@ -103,6 +102,23 @@
 	return 0;
 }
 
+static int arch_timer_starting(unsigned int cpu)
+{
+	set_csr_ecfg(ECFGF_TIMER);
+
+	return 0;
+}
+
+static int arch_timer_dying(unsigned int cpu)
+{
+	constant_set_state_shutdown(this_cpu_ptr(&constant_clockevent_device));
+
+	/* Clear Timer Interrupt */
+	write_csr_tintclear(CSR_TINTCLR_TI);
+
+	return 0;
+}
+
 static unsigned long get_loops_per_jiffy(void)
 {
 	unsigned long lpj = (unsigned long)const_clock_freq;
@@ -189,6 +171,10 @@
 
 	lpj_fine = get_loops_per_jiffy();
 	pr_info("Constant clock event device register\n");
+
+	cpuhp_setup_state(CPUHP_AP_LOONGARCH_ARCH_TIMER_STARTING,
+			  "clockevents/loongarch/timer:starting",
+			  arch_timer_starting, arch_timer_dying);
 
 	return 0;
 }
+6 -1
arch/loongarch/kvm/intc/eiointc.c
@@ -45,7 +45,12 @@
 	}
 
 	cpu = s->sw_coremap[irq];
-	vcpu = kvm_get_vcpu(s->kvm, cpu);
+	vcpu = kvm_get_vcpu_by_id(s->kvm, cpu);
+	if (unlikely(vcpu == NULL)) {
+		kvm_err("%s: invalid target cpu: %d\n", __func__, cpu);
+		return;
+	}
+
 	if (level) {
 		/* if not enable return false */
 		if (!test_bit(irq, (unsigned long *)s->enable.reg_u32))
+4 -4
arch/loongarch/kvm/intc/ipi.c
@@ -99,7 +99,7 @@
 static int send_ipi_data(struct kvm_vcpu *vcpu, gpa_t addr, uint64_t data)
 {
 	int i, idx, ret;
-	uint32_t val = 0, mask = 0;
+	uint64_t val = 0, mask = 0;
 
 	/*
 	 * Bit 27-30 is mask for byte writing.
@@ -108,7 +108,7 @@
 	if ((data >> 27) & 0xf) {
 		/* Read the old val */
 		idx = srcu_read_lock(&vcpu->kvm->srcu);
-		ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
+		ret = kvm_io_bus_read(vcpu, KVM_IOCSR_BUS, addr, 4, &val);
 		srcu_read_unlock(&vcpu->kvm->srcu, idx);
 		if (unlikely(ret)) {
 			kvm_err("%s: : read data from addr %llx failed\n", __func__, addr);
@@ -124,7 +124,7 @@
 	}
 	val |= ((uint32_t)(data >> 32) & ~mask);
 	idx = srcu_read_lock(&vcpu->kvm->srcu);
-	ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, sizeof(val), &val);
+	ret = kvm_io_bus_write(vcpu, KVM_IOCSR_BUS, addr, 4, &val);
 	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 	if (unlikely(ret))
 		kvm_err("%s: : write data to addr %llx failed\n", __func__, addr);
@@ -298,7 +298,7 @@
 	cpu = (attr->attr >> 16) & 0x3ff;
 	addr = attr->attr & 0xff;
 
-	vcpu = kvm_get_vcpu(dev->kvm, cpu);
+	vcpu = kvm_get_vcpu_by_id(dev->kvm, cpu);
 	if (unlikely(vcpu == NULL)) {
 		kvm_err("%s: invalid target cpu: %d\n", __func__, cpu);
 		return -EINVAL;
+10
arch/loongarch/kvm/intc/pch_pic.c
@@ -195,6 +195,11 @@
 		return -EINVAL;
 	}
 
+	if (addr & (len - 1)) {
+		kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len);
+		return -EINVAL;
+	}
+
 	/* statistics of pch pic reading */
 	vcpu->stat.pch_pic_read_exits++;
 	ret = loongarch_pch_pic_read(s, addr, len, val);
@@ -304,6 +299,11 @@
 
 	if (!s) {
 		kvm_err("%s: pch pic irqchip not valid!\n", __func__);
+		return -EINVAL;
+	}
+
+	if (addr & (len - 1)) {
+		kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len);
 		return -EINVAL;
 	}
 
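The alignment check added above relies on a standard bit trick: for a power-of-two access size `len`, `addr & (len - 1)` is nonzero exactly when `addr` is not a multiple of `len`. A minimal userspace model (the function name is illustrative):

```c
#include <stdint.h>

/* For power-of-two len, len - 1 is a mask of the low bits; any of those
 * bits set in addr means addr is not a multiple of len. */
static int is_misaligned(uint64_t addr, unsigned int len)
{
	return (addr & (len - 1)) != 0;
}
```

This avoids a division or modulo on a hot path, which is why the kernel uses this form rather than `addr % len`.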
@@ -1,5 +1,6 @@
 CONFIG_NO_HZ_IDLE=y
 CONFIG_HIGH_RES_TIMERS=y
+CONFIG_POSIX_AUX_CLOCKS=y
 CONFIG_BPF_SYSCALL=y
 # CONFIG_CPU_ISOLATION is not set
 # CONFIG_UTS_NS is not set
@@ -12,7 +11,7 @@
 CONFIG_KEXEC=y
 CONFIG_MARCH_Z13=y
 CONFIG_NR_CPUS=2
-CONFIG_HZ_100=y
+CONFIG_HZ_1000=y
 # CONFIG_CHSC_SCH is not set
 # CONFIG_SCM_BUS is not set
 # CONFIG_AP is not set
@@ -1326,8 +1326,8 @@
 
 static __init int print_s5_reset_status_mmio(void)
 {
-	unsigned long value;
 	void __iomem *addr;
+	u32 value;
 	int i;
 
 	if (!cpu_feature_enabled(X86_FEATURE_ZEN))
@@ -1340,12 +1340,16 @@
 	value = ioread32(addr);
 	iounmap(addr);
 
+	/* Value with "all bits set" is an error response and should be ignored. */
+	if (value == U32_MAX)
+		return 0;
+
 	for (i = 0; i < ARRAY_SIZE(s5_reset_reason_txt); i++) {
 		if (!(value & BIT(i)))
 			continue;
 
 		if (s5_reset_reason_txt[i]) {
-			pr_info("x86/amd: Previous system reset reason [0x%08lx]: %s\n",
+			pr_info("x86/amd: Previous system reset reason [0x%08x]: %s\n",
 				value, s5_reset_reason_txt[i]);
 		}
 	}
+1 -3
arch/x86/kernel/cpu/bugs.c
@@ -1068,10 +1068,8 @@
 	if (gds_mitigation == GDS_MITIGATION_AUTO) {
 		if (should_mitigate_vuln(X86_BUG_GDS))
 			gds_mitigation = GDS_MITIGATION_FULL;
-		else {
+		else
 			gds_mitigation = GDS_MITIGATION_OFF;
-			return;
-		}
 	}
 
 	/* No microcode */
@@ -5033,6 +5033,7 @@
 	unsigned int memflags;
 	int i;
 	struct xarray elv_tbl, et_tbl;
+	bool queues_frozen = false;
 
 	lockdep_assert_held(&set->tag_list_lock);
 
@@ -5057,9 +5056,6 @@
 		blk_mq_sysfs_unregister_hctxs(q);
 	}
 
-	list_for_each_entry(q, &set->tag_list, tag_set_list)
-		blk_mq_freeze_queue_nomemsave(q);
-
 	/*
 	 * Switch IO scheduler to 'none', cleaning up the data associated
 	 * with the previous scheduler. We will switch back once we are done
@@ -5066,6 +5068,9 @@
 		if (blk_mq_elv_switch_none(q, &elv_tbl))
 			goto switch_back;
 
+	list_for_each_entry(q, &set->tag_list, tag_set_list)
+		blk_mq_freeze_queue_nomemsave(q);
+	queues_frozen = true;
 	if (blk_mq_realloc_tag_set_tags(set, nr_hw_queues) < 0)
 		goto switch_back;
 
@@ -5092,8 +5091,12 @@
 	}
 switch_back:
 	/* The blk_mq_elv_switch_back unfreezes queue for us. */
-	list_for_each_entry(q, &set->tag_list, tag_set_list)
+	list_for_each_entry(q, &set->tag_list, tag_set_list) {
+		/* switch_back expects queue to be frozen */
+		if (!queues_frozen)
+			blk_mq_freeze_queue_nomemsave(q);
 		blk_mq_elv_switch_back(q, &elv_tbl, &et_tbl);
+	}
 
 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
 		blk_mq_sysfs_register_hctxs(q);
+4 -4
block/blk-rq-qos.c
@@ -2,8 +2,6 @@
 
 #include "blk-rq-qos.h"
 
-__read_mostly DEFINE_STATIC_KEY_FALSE(block_rq_qos);
-
 /*
  * Increment 'v', if 'v' is below 'below'. Returns true if we succeeded,
  * false if 'v' + 1 would be bigger than 'below'.
@@ -317,7 +319,7 @@
 		struct rq_qos *rqos = q->rq_qos;
 		q->rq_qos = rqos->next;
 		rqos->ops->exit(rqos);
-		static_branch_dec(&block_rq_qos);
 	}
+	blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q);
 	mutex_unlock(&q->rq_qos_mutex);
 }
@@ -344,7 +346,7 @@
 		goto ebusy;
 	rqos->next = q->rq_qos;
 	q->rq_qos = rqos;
-	static_branch_inc(&block_rq_qos);
+	blk_queue_flag_set(QUEUE_FLAG_QOS_ENABLED, q);
 
 	blk_mq_unfreeze_queue(q, memflags);
 
@@ -375,6 +377,8 @@
 			break;
 		}
 	}
+	if (!q->rq_qos)
+		blk_queue_flag_clear(QUEUE_FLAG_QOS_ENABLED, q);
 	blk_mq_unfreeze_queue(q, memflags);
 
 	mutex_lock(&q->debugfs_mutex);
+31 -17
block/blk-rq-qos.h
@@ -12,7 +12,6 @@
 #include "blk-mq-debugfs.h"
 
 struct blk_mq_debugfs_attr;
-extern struct static_key_false block_rq_qos;
 
 enum rq_qos_id {
	RQ_QOS_WBT,
@@ -113,43 +112,55 @@
 
 static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio)
 {
-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
+	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
+	    q->rq_qos)
 		__rq_qos_cleanup(q->rq_qos, bio);
 }
 
 static inline void rq_qos_done(struct request_queue *q, struct request *rq)
 {
-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos &&
-	    !blk_rq_is_passthrough(rq))
+	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
+	    q->rq_qos && !blk_rq_is_passthrough(rq))
 		__rq_qos_done(q->rq_qos, rq);
 }
 
 static inline void rq_qos_issue(struct request_queue *q, struct request *rq)
 {
-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
+	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
+	    q->rq_qos)
 		__rq_qos_issue(q->rq_qos, rq);
 }
 
 static inline void rq_qos_requeue(struct request_queue *q, struct request *rq)
 {
-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
+	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
+	    q->rq_qos)
 		__rq_qos_requeue(q->rq_qos, rq);
 }
 
 static inline void rq_qos_done_bio(struct bio *bio)
 {
-	if (static_branch_unlikely(&block_rq_qos) &&
-	    bio->bi_bdev && (bio_flagged(bio, BIO_QOS_THROTTLED) ||
-			     bio_flagged(bio, BIO_QOS_MERGED))) {
-		struct request_queue *q = bdev_get_queue(bio->bi_bdev);
-		if (q->rq_qos)
-			__rq_qos_done_bio(q->rq_qos, bio);
-	}
+	struct request_queue *q;
+
+	if (!bio->bi_bdev || (!bio_flagged(bio, BIO_QOS_THROTTLED) &&
+			      !bio_flagged(bio, BIO_QOS_MERGED)))
+		return;
+
+	q = bdev_get_queue(bio->bi_bdev);
+
+	/*
+	 * If a bio has BIO_QOS_xxx set, it implicitly implies that
+	 * q->rq_qos is present. So, we skip re-checking q->rq_qos
+	 * here as an extra optimization and directly call
+	 * __rq_qos_done_bio().
+	 */
+	__rq_qos_done_bio(q->rq_qos, bio);
 }
 
 static inline void rq_qos_throttle(struct request_queue *q, struct bio *bio)
 {
-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) {
+	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
+	    q->rq_qos) {
 		bio_set_flag(bio, BIO_QOS_THROTTLED);
 		__rq_qos_throttle(q->rq_qos, bio);
 	}
@@ -158,14 +169,16 @@
 static inline void rq_qos_track(struct request_queue *q, struct request *rq,
				struct bio *bio)
 {
-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
+	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
+	    q->rq_qos)
 		__rq_qos_track(q->rq_qos, rq, bio);
 }
 
 static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
				struct bio *bio)
 {
-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos) {
+	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
+	    q->rq_qos) {
		bio_set_flag(bio, BIO_QOS_MERGED);
		__rq_qos_merge(q->rq_qos, rq, bio);
	}
@@ -173,7 +186,8 @@
 
 static inline void rq_qos_queue_depth_changed(struct request_queue *q)
 {
-	if (static_branch_unlikely(&block_rq_qos) && q->rq_qos)
+	if (unlikely(test_bit(QUEUE_FLAG_QOS_ENABLED, &q->queue_flags)) &&
+	    q->rq_qos)
		__rq_qos_queue_depth_changed(q->rq_qos);
 }
 
+6 -6
block/blk-settings.c
@@ -157,24 +157,21 @@
 	switch (bi->csum_type) {
 	case BLK_INTEGRITY_CSUM_NONE:
 		if (bi->pi_tuple_size) {
-			pr_warn("pi_tuple_size must be 0 when checksum type \
-				is none\n");
+			pr_warn("pi_tuple_size must be 0 when checksum type is none\n");
 			return -EINVAL;
 		}
 		break;
 	case BLK_INTEGRITY_CSUM_CRC:
 	case BLK_INTEGRITY_CSUM_IP:
 		if (bi->pi_tuple_size != sizeof(struct t10_pi_tuple)) {
-			pr_warn("pi_tuple_size mismatch for T10 PI: expected \
-				%zu, got %u\n",
+			pr_warn("pi_tuple_size mismatch for T10 PI: expected %zu, got %u\n",
 				sizeof(struct t10_pi_tuple),
 				bi->pi_tuple_size);
 			return -EINVAL;
 		}
 		break;
 	case BLK_INTEGRITY_CSUM_CRC64:
 		if (bi->pi_tuple_size != sizeof(struct crc64_pi_tuple)) {
-			pr_warn("pi_tuple_size mismatch for CRC64 PI: \
-				expected %zu, got %u\n",
+			pr_warn("pi_tuple_size mismatch for CRC64 PI: expected %zu, got %u\n",
 				sizeof(struct crc64_pi_tuple),
 				bi->pi_tuple_size);
 			return -EINVAL;
@@ -969,6 +972,8 @@
 		goto incompatible;
 	if (ti->csum_type != bi->csum_type)
 		goto incompatible;
+	if (ti->pi_tuple_size != bi->pi_tuple_size)
+		goto incompatible;
 	if ((ti->flags & BLK_INTEGRITY_REF_TAG) !=
 	    (bi->flags & BLK_INTEGRITY_REF_TAG))
 		goto incompatible;
@@ -979,6 +980,7 @@
 	ti->flags |= (bi->flags & BLK_INTEGRITY_DEVICE_CAPABLE) |
 		     (bi->flags & BLK_INTEGRITY_REF_TAG);
 	ti->csum_type = bi->csum_type;
+	ti->pi_tuple_size = bi->pi_tuple_size;
 	ti->metadata_size = bi->metadata_size;
 	ti->pi_offset = bi->pi_offset;
 	ti->interval_exp = bi->interval_exp;
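The `pr_warn()` cleanup in the first hunk fixes a classic C pitfall: a backslash-newline inside a string literal splices the source lines, so the indentation of the continuation line becomes part of the message. A minimal illustration (the strings are abbreviated from the warnings above):

```c
#include <string.h>

/* The backslash-newline is removed by the preprocessor, but the leading
 * whitespace of the continuation line stays inside the literal. */
static const char broken[] = "pi_tuple_size must be 0 when checksum type \
        is none";
static const char fixed[]  = "pi_tuple_size must be 0 when checksum type is none";
```

So the "broken" form logs a run of embedded spaces in the middle of the sentence, which is why the patch joins each message onto one line instead.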
@@ -620,10 +620,8 @@
 	unsigned int chan = CR_CHAN(insn->chanspec);
 	unsigned int base_chan = (chan < 32) ? 0 : chan;
 	unsigned int _data[2];
+	unsigned int i;
 	int ret;
-
-	if (insn->n == 0)
-		return 0;
 
 	memset(_data, 0, sizeof(_data));
 	memset(&_insn, 0, sizeof(_insn));
@@ -633,18 +635,21 @@
 	if (insn->insn == INSN_WRITE) {
 		if (!(s->subdev_flags & SDF_WRITABLE))
 			return -EINVAL;
-		_data[0] = 1U << (chan - base_chan);		   /* mask */
-		_data[1] = data[0] ? (1U << (chan - base_chan)) : 0; /* bits */
+		_data[0] = 1U << (chan - base_chan);	/* mask */
+	}
+	for (i = 0; i < insn->n; i++) {
+		if (insn->insn == INSN_WRITE)
+			_data[1] = data[i] ? _data[0] : 0;	/* bits */
+
+		ret = s->insn_bits(dev, s, &_insn, _data);
+		if (ret < 0)
+			return ret;
+
+		if (insn->insn == INSN_READ)
+			data[i] = (_data[1] >> (chan - base_chan)) & 1;
 	}
 
-	ret = s->insn_bits(dev, s, &_insn, _data);
-	if (ret < 0)
-		return ret;
-
-	if (insn->insn == INSN_READ)
-		data[0] = (_data[1] >> (chan - base_chan)) & 1;
-
-	return 1;
+	return insn->n;
 }
 
 static int __comedi_device_postconfig_async(struct comedi_device *dev,
+2 -1
drivers/comedi/drivers/pcl726.c
@@ -328,7 +328,8 @@
 	 * Hook up the external trigger source interrupt only if the
 	 * user config option is valid and the board supports interrupts.
 	 */
-	if (it->options[1] && (board->irq_mask & (1 << it->options[1]))) {
+	if (it->options[1] > 0 && it->options[1] < 16 &&
+	    (board->irq_mask & (1U << it->options[1]))) {
 		ret = request_irq(it->options[1], pcl726_interrupt, 0,
 				  dev->board_name, dev);
 		if (ret == 0) {
+12 -17
drivers/cpuidle/governors/menu.c
@@ -287,20 +287,15 @@
 		return 0;
 	}
 
-	if (tick_nohz_tick_stopped()) {
-		/*
-		 * If the tick is already stopped, the cost of possible short
-		 * idle duration misprediction is much higher, because the CPU
-		 * may be stuck in a shallow idle state for a long time as a
-		 * result of it. In that case say we might mispredict and use
-		 * the known time till the closest timer event for the idle
-		 * state selection.
-		 */
-		if (predicted_ns < TICK_NSEC)
-			predicted_ns = data->next_timer_ns;
-	} else if (latency_req > predicted_ns) {
-		latency_req = predicted_ns;
-	}
+	/*
+	 * If the tick is already stopped, the cost of possible short idle
+	 * duration misprediction is much higher, because the CPU may be stuck
+	 * in a shallow idle state for a long time as a result of it. In that
+	 * case, say we might mispredict and use the known time till the closest
+	 * timer event for the idle state selection.
+	 */
+	if (tick_nohz_tick_stopped() && predicted_ns < TICK_NSEC)
+		predicted_ns = data->next_timer_ns;
 
 	/*
 	 * Find the idle state with the lowest power while satisfying
@@ -311,13 +316,15 @@
 		if (idx == -1)
 			idx = i; /* first enabled state */
 
+		if (s->exit_latency_ns > latency_req)
+			break;
+
 		if (s->target_residency_ns > predicted_ns) {
 			/*
 			 * Use a physical idle state, not busy polling, unless
 			 * a timer is going to trigger soon enough.
 			 */
 			if ((drv->states[idx].flags & CPUIDLE_FLAG_POLLING) &&
-			    s->exit_latency_ns <= latency_req &&
 			    s->target_residency_ns <= data->next_timer_ns) {
 				predicted_ns = s->target_residency_ns;
 				idx = i;
@@ -351,8 +354,6 @@
 
 			return idx;
 		}
-		if (s->exit_latency_ns > latency_req)
-			break;
 
 		idx = i;
 	}
@@ -299,6 +299,25 @@
 	irq_type = amdgpu_display_crtc_idx_to_irq_type(adev, acrtc->crtc_id);
 
 	if (enable) {
+		struct dc *dc = adev->dm.dc;
+		struct drm_vblank_crtc *vblank = drm_crtc_vblank_crtc(crtc);
+		struct psr_settings *psr = &acrtc_state->stream->link->psr_settings;
+		struct replay_settings *pr = &acrtc_state->stream->link->replay_settings;
+		bool sr_supported = (psr->psr_version != DC_PSR_VERSION_UNSUPPORTED) ||
+				    pr->config.replay_supported;
+
+		/*
+		 * IPS & self-refresh feature can cause vblank counter resets between
+		 * vblank disable and enable.
+		 * It may cause system stuck due to waiting for the vblank counter.
+		 * Call this function to estimate missed vblanks by using timestamps and
+		 * update the vblank counter in DRM.
+		 */
+		if (dc->caps.ips_support &&
+		    dc->config.disable_ips != DMUB_IPS_DISABLE_ALL &&
+		    sr_supported && vblank->config.disable_immediate)
+			drm_crtc_vblank_restore(crtc);
+
 		/* vblank irq on -> Only need vupdate irq in vrr mode */
 		if (amdgpu_dm_crtc_vrr_active(acrtc_state))
 			rc = amdgpu_dm_crtc_set_vupdate_irq(crtc, true);
+1 -4
drivers/gpu/drm/amd/display/dc/bios/bios_parser.c
@@ -174,11 +174,8 @@
 		return object_id;
 	}
 
-	if (tbl->ucNumberOfObjects <= i) {
-		dm_error("Can't find connector id %d in connector table of size %d.\n",
-			 i, tbl->ucNumberOfObjects);
+	if (tbl->ucNumberOfObjects <= i)
 		return object_id;
-	}
 
 	id = le16_to_cpu(tbl->asObjects[i].usObjectID);
 	object_id = object_id_from_bios_object_id(id);
@@ -83,22 +83,13 @@
 static int dce60_get_dp_ref_freq_khz(struct clk_mgr *clk_mgr_base)
 {
 	struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
-	int dprefclk_wdivider;
-	int dp_ref_clk_khz;
-	int target_div;
+	struct dc_context *ctx = clk_mgr_base->ctx;
+	int dp_ref_clk_khz = 0;
 
-	/* DCE6 has no DPREFCLK_CNTL to read DP Reference Clock source */
-
-	/* Read the mmDENTIST_DISPCLK_CNTL to get the currently
-	 * programmed DID DENTIST_DPREFCLK_WDIVIDER*/
-	REG_GET(DENTIST_DISPCLK_CNTL, DENTIST_DPREFCLK_WDIVIDER, &dprefclk_wdivider);
-
-	/* Convert DENTIST_DPREFCLK_WDIVIDERto actual divider*/
-	target_div = dentist_get_divider_from_did(dprefclk_wdivider);
-
-	/* Calculate the current DFS clock, in kHz.*/
-	dp_ref_clk_khz = (DENTIST_DIVIDER_RANGE_SCALE_FACTOR
-			* clk_mgr->base.dentist_vco_freq_khz) / target_div;
+	if (ASIC_REV_IS_TAHITI_P(ctx->asic_id.hw_internal_rev))
+		dp_ref_clk_khz = ctx->dc_bios->fw_info.default_display_engine_pll_frequency;
+	else
+		dp_ref_clk_khz = clk_mgr_base->clks.dispclk_khz;
 
 	return dce_adjust_dp_ref_freq_for_ss(clk_mgr, dp_ref_clk_khz);
 }
@@ -99,8 +108,6 @@
 			struct dc_state *context)
 {
 	struct dm_pp_display_configuration *pp_display_cfg = &context->pp_display_cfg;
-
-	pp_display_cfg->avail_mclk_switch_time_us = dce110_get_min_vblank_time_us(context);
 
 	dce110_fill_display_configs(context, pp_display_cfg);
 
@@ -112,11 +123,9 @@
 {
 	struct clk_mgr_internal *clk_mgr_dce = TO_CLK_MGR_INTERNAL(clk_mgr_base);
 	struct dm_pp_power_level_change_request level_change_req;
-	int patched_disp_clk = context->bw_ctx.bw.dce.dispclk_khz;
-
-	/*TODO: W/A for dal3 linux, investigate why this works */
-	if (!clk_mgr_dce->dfs_bypass_active)
-		patched_disp_clk = patched_disp_clk * 115 / 100;
+	const int max_disp_clk =
+		clk_mgr_dce->max_clks_by_state[DM_PP_CLOCKS_STATE_PERFORMANCE].display_clk_khz;
+	int patched_disp_clk = MIN(max_disp_clk, context->bw_ctx.bw.dce.dispclk_khz);
 
 	level_change_req.power_level = dce_get_required_clocks_state(clk_mgr_base, context);
 	/* get max clock state from PPLIB */
+14 -1
drivers/gpu/drm/amd/display/dc/core/dc.c
@@ -217,10 +217,23 @@
 			connectors_num,
 			num_virtual_links);
 
-	// condition loop on link_count to allow skipping invalid indices
+	/* When getting the number of connectors, the VBIOS reports the number of valid indices,
+	 * but it doesn't say which indices are valid, and not every index has an actual connector.
+	 * So, if we don't find a connector on an index, that is not an error.
+	 *
+	 * - There is no guarantee that the first N indices will be valid
+	 * - VBIOS may report a higher amount of valid indices than there are actual connectors
+	 * - Some VBIOS have valid configurations for more connectors than there actually are
+	 *   on the card. This may be because the manufacturer used the same VBIOS for different
+	 *   variants of the same card.
	 */
 	for (i = 0; dc->link_count < connectors_num && i < MAX_LINKS; i++) {
+		struct graphics_object_id connector_id = bios->funcs->get_connector_id(bios, i);
 		struct link_init_data link_init_params = {0};
 		struct dc_link *link;
+
+		if (connector_id.id == CONNECTOR_ID_UNKNOWN)
+			continue;
 
 		DC_LOG_DC("BIOS object table - printing link object info for connector number: %d, link_index: %d", i, dc->link_count);
 
···944944 // TODO: Handle mux change case if force_static is set945945 // If force_static is set, just change the replay_allow_active state directly946946 if (replay != NULL && link->replay_settings.replay_feature_enabled)947947- replay->funcs->replay_enable(replay, *allow_active, wait, panel_inst, link);947947+ replay->funcs->replay_enable(replay, *allow_active, wait, panel_inst);948948 link->replay_settings.replay_allow_active = *allow_active;949949 }950950
-20
drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
···40484048 */40494049 uint8_t digbe_inst;40504050 /**40514051- * @hpo_stream_enc_inst: HPO stream encoder instance40524052- */40534053- uint8_t hpo_stream_enc_inst;40544054- /**40554055- * @hpo_link_enc_inst: HPO link encoder instance40564056- */40574057- uint8_t hpo_link_enc_inst;40584058- /**40594051 * AUX HW instance.40604052 */40614053 uint8_t aux_inst;···41514159 * This does not support HDMI/DP2 for now.41524160 */41534161 uint8_t phy_rate;41544154- /**41554155- * @hpo_stream_enc_inst: HPO stream encoder instance41564156- */41574157- uint8_t hpo_stream_enc_inst;41584158- /**41594159- * @hpo_link_enc_inst: HPO link encoder instance41604160- */41614161- uint8_t hpo_link_enc_inst;41624162- /**41634163- * @pad: Align structure to 4 byte boundary.41644164- */41654165- uint8_t pad[2];41664162};4167416341684164/**
···381381 len: usize,382382}383383384384+// On arm32 architecture, dividing a `u64` by a constant will generate a call385385+// to `__aeabi_uldivmod`, which is not present in the kernel.386386+// So use the multiply-by-inverse method for this architecture.387387+fn div10(val: u64) -> u64 {388388+ if cfg!(target_arch = "arm") {389389+ let val_h = val >> 32;390390+ let val_l = val & 0xFFFFFFFF;391391+ let b_h: u64 = 0x66666666;392392+ let b_l: u64 = 0x66666667;393393+394394+ let tmp1 = val_h * b_l + ((val_l * b_l) >> 32);395395+ let tmp2 = val_l * b_h + (tmp1 & 0xffffffff);396396+ let tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32);397397+398398+ tmp3 >> 2399399+ } else {400400+ val / 10401401+ }402402+}403403+384404impl DecFifo {385405 fn push(&mut self, data: u64, len: usize) {386406 let mut chunk = data;
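The arm32 branch in the hunk above is the classic fixed-point reciprocal trick: `0x6666666666666667 == ceil(2**66 / 10)`, so the four 32x32-to-64-bit partial products compute `floor(val * 0x6666666666666667 / 2**66)` without a 64-bit divide. A quick host-side sketch (Python, not kernel code) mirrors the sequence and spot-checks it against plain integer division for representative inputs:

```python
# Mirror of the arm32 multiply-by-inverse branch above, spot-checked
# against plain integer division. Python ints are arbitrary precision,
# so each 32x32->64 partial product is exact.

MASK32 = 0xFFFFFFFF

def div10(val: int) -> int:
    """Compute val // 10 using only 32x32->64-bit multiplies."""
    val_h, val_l = val >> 32, val & MASK32
    b_h, b_l = 0x66666666, 0x66666667  # 0x6666666666666667 split in halves

    tmp1 = val_h * b_l + ((val_l * b_l) >> 32)
    tmp2 = val_l * b_h + (tmp1 & MASK32)
    tmp3 = val_h * b_h + (tmp1 >> 32) + (tmp2 >> 32)  # high 64 bits of val*b

    return tmp3 >> 2  # i.e. floor(val * b / 2**66)

# Representative values a decimal FIFO would see.
for v in (0, 9, 10, 99, 123456789, 10**18, 2**60):
    assert div10(v) == v // 10
```

The check is only over representative values; the point is to show how the high 64 bits of the 128-bit product are assembled from the two 32-bit halves of each operand.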
···15061506 if (!(master_ctl & GEN11_GU_MISC_IRQ))15071507 return 0;1508150815091509+ intel_display_rpm_assert_block(display);15101510+15091511 iir = intel_de_read(display, GEN11_GU_MISC_IIR);15101512 if (likely(iir))15111513 intel_de_write(display, GEN11_GU_MISC_IIR, iir);15141514+15151515+ intel_display_rpm_assert_unblock(display);1512151615131517 return iir;15141518}
+75-18
drivers/gpu/drm/i915/display/intel_tc.c
···2323#include "intel_modeset_lock.h"2424#include "intel_tc.h"25252626+#define DP_PIN_ASSIGNMENT_NONE 0x02627#define DP_PIN_ASSIGNMENT_C 0x32728#define DP_PIN_ASSIGNMENT_D 0x42829#define DP_PIN_ASSIGNMENT_E 0x5···6766 enum tc_port_mode init_mode;6867 enum phy_fia phy_fia;6968 u8 phy_fia_idx;6969+ u8 max_lane_count;7070};71717272static enum intel_display_power_domain···309307 REG_FIELD_GET(TCSS_DDI_STATUS_PIN_ASSIGNMENT_MASK, val);310308311309 switch (pin_assignment) {310310+ case DP_PIN_ASSIGNMENT_NONE:311311+ return 0;312312 default:313313 MISSING_CASE(pin_assignment);314314 fallthrough;···369365 }370366}371367372372-int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port)368368+static int get_max_lane_count(struct intel_tc_port *tc)373369{374374- struct intel_display *display = to_intel_display(dig_port);375375- struct intel_tc_port *tc = to_tc_port(dig_port);370370+ struct intel_display *display = to_intel_display(tc->dig_port);371371+ struct intel_digital_port *dig_port = tc->dig_port;376372377377- if (!intel_encoder_is_tc(&dig_port->base) || tc->mode != TC_PORT_DP_ALT)373373+ if (tc->mode != TC_PORT_DP_ALT)378374 return 4;379375380376 assert_tc_cold_blocked(tc);···386382 return mtl_tc_port_get_max_lane_count(dig_port);387383388384 return intel_tc_port_get_max_lane_count(dig_port);385385+}386386+387387+static void read_pin_configuration(struct intel_tc_port *tc)388388+{389389+ tc->max_lane_count = get_max_lane_count(tc);390390+}391391+392392+int intel_tc_port_max_lane_count(struct intel_digital_port *dig_port)393393+{394394+ struct intel_display *display = to_intel_display(dig_port);395395+ struct intel_tc_port *tc = to_tc_port(dig_port);396396+397397+ if (!intel_encoder_is_tc(&dig_port->base))398398+ return 4;399399+400400+ if (DISPLAY_VER(display) < 20)401401+ return get_max_lane_count(tc);402402+403403+ return tc->max_lane_count;389404}390405391406void intel_tc_port_set_fia_lane_count(struct intel_digital_port *dig_port,···619596 tc_cold_wref = 
__tc_cold_block(tc, &domain);620597621598 tc->mode = tc_phy_get_current_mode(tc);622622- if (tc->mode != TC_PORT_DISCONNECTED)599599+ if (tc->mode != TC_PORT_DISCONNECTED) {623600 tc->lock_wakeref = tc_cold_block(tc);601601+602602+ read_pin_configuration(tc);603603+ }624604625605 __tc_cold_unblock(tc, domain, tc_cold_wref);626606}···682656683657 tc->lock_wakeref = tc_cold_block(tc);684658685685- if (tc->mode == TC_PORT_TBT_ALT)659659+ if (tc->mode == TC_PORT_TBT_ALT) {660660+ read_pin_configuration(tc);661661+686662 return true;663663+ }687664688665 if ((!tc_phy_is_ready(tc) ||689666 !icl_tc_phy_take_ownership(tc, true)) &&···697668 goto out_unblock_tc_cold;698669 }699670671671+ read_pin_configuration(tc);700672701673 if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes))702674 goto out_release_phy;···888858 port_wakeref = intel_display_power_get(display, port_power_domain);889859890860 tc->mode = tc_phy_get_current_mode(tc);891891- if (tc->mode != TC_PORT_DISCONNECTED)861861+ if (tc->mode != TC_PORT_DISCONNECTED) {892862 tc->lock_wakeref = tc_cold_block(tc);863863+864864+ read_pin_configuration(tc);865865+ }893866894867 intel_display_power_put(display, port_power_domain, port_wakeref);895868}···906873907874 if (tc->mode == TC_PORT_TBT_ALT) {908875 tc->lock_wakeref = tc_cold_block(tc);876876+877877+ read_pin_configuration(tc);878878+909879 return true;910880 }911881···929893 }930894931895 tc->lock_wakeref = tc_cold_block(tc);896896+897897+ read_pin_configuration(tc);932898933899 if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes))934900 goto out_unblock_tc_cold;···11621124 tc_cold_wref = __tc_cold_block(tc, &domain);1163112511641126 tc->mode = tc_phy_get_current_mode(tc);11651165- if (tc->mode != TC_PORT_DISCONNECTED)11271127+ if (tc->mode != TC_PORT_DISCONNECTED) {11661128 tc->lock_wakeref = tc_cold_block(tc);11291129+11301130+ read_pin_configuration(tc);11311131+ /*11321132+ * Set a valid lane count value for a DP-alt sink which got11331133+ * 
disconnected. The driver can only disable the output on this PHY.11341134+ */11351135+ if (tc->max_lane_count == 0)11361136+ tc->max_lane_count = 4;11371137+ }1167113811681139 drm_WARN_ON(display->drm,11691140 (tc->mode == TC_PORT_DP_ALT || tc->mode == TC_PORT_LEGACY) &&···11851138{11861139 tc->lock_wakeref = tc_cold_block(tc);1187114011881188- if (tc->mode == TC_PORT_TBT_ALT)11411141+ if (tc->mode == TC_PORT_TBT_ALT) {11421142+ read_pin_configuration(tc);11431143+11891144 return true;11451145+ }1190114611911147 if (!xelpdp_tc_phy_enable_tcss_power(tc, true))11921148 goto out_unblock_tccold;1193114911941150 xelpdp_tc_phy_take_ownership(tc, true);11511151+11521152+ read_pin_configuration(tc);1195115311961154 if (!tc_phy_verify_legacy_or_dp_alt_mode(tc, required_lanes))11971155 goto out_release_phy;···12781226 tc->phy_ops->get_hw_state(tc);12791227}1280122812811281-static bool tc_phy_is_ready_and_owned(struct intel_tc_port *tc,12821282- bool phy_is_ready, bool phy_is_owned)12291229+/* Is the PHY owned by display i.e. is it in legacy or DP-alt mode? 
*/12301230+static bool tc_phy_owned_by_display(struct intel_tc_port *tc,12311231+ bool phy_is_ready, bool phy_is_owned)12831232{12841233 struct intel_display *display = to_intel_display(tc->dig_port);1285123412861286- drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready);12351235+ if (DISPLAY_VER(display) < 20) {12361236+ drm_WARN_ON(display->drm, phy_is_owned && !phy_is_ready);1287123712881288- return phy_is_ready && phy_is_owned;12381238+ return phy_is_ready && phy_is_owned;12391239+ } else {12401240+ return phy_is_owned;12411241+ }12891242}1290124312911244static bool tc_phy_is_connected(struct intel_tc_port *tc,···13011244 bool phy_is_owned = tc_phy_is_owned(tc);13021245 bool is_connected;1303124613041304- if (tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned))12471247+ if (tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned))13051248 is_connected = port_pll_type == ICL_PORT_DPLL_MG_PHY;13061249 else13071250 is_connected = port_pll_type == ICL_PORT_DPLL_DEFAULT;···14091352 phy_is_ready = tc_phy_is_ready(tc);14101353 phy_is_owned = tc_phy_is_owned(tc);1411135414121412- if (!tc_phy_is_ready_and_owned(tc, phy_is_ready, phy_is_owned)) {13551355+ if (!tc_phy_owned_by_display(tc, phy_is_ready, phy_is_owned)) {14131356 mode = get_tc_mode_in_phy_not_owned_state(tc, live_mode);14141357 } else {14151358 drm_WARN_ON(display->drm, live_mode == TC_PORT_TBT_ALT);···14981441 intel_display_power_flush_work(display);14991442 if (!intel_tc_cold_requires_aux_pw(dig_port)) {15001443 enum intel_display_power_domain aux_domain;15011501- bool aux_powered;1502144415031445 aux_domain = intel_aux_power_domain(dig_port);15041504- aux_powered = intel_display_power_is_enabled(display, aux_domain);15051505- drm_WARN_ON(display->drm, aux_powered);14461446+ if (intel_display_power_is_enabled(display, aux_domain))14471447+ drm_dbg_kms(display->drm, "Port %s: AUX unexpectedly powered\n",14481448+ tc->port_name);15061449 }1507145015081451 tc_phy_disconnect(tc);
+11-9
drivers/gpu/drm/i915/gt/intel_workarounds.c
···634634static void icl_ctx_workarounds_init(struct intel_engine_cs *engine,635635 struct i915_wa_list *wal)636636{637637+ struct drm_i915_private *i915 = engine->i915;638638+637639 /* Wa_1406697149 (WaDisableBankHangMode:icl) */638640 wa_write(wal, GEN8_L3CNTLREG, GEN8_ERRDETBCTRL);639641···671669672670 /* Wa_1406306137:icl,ehl */673671 wa_mcr_masked_en(wal, GEN9_ROW_CHICKEN4, GEN11_DIS_PICK_2ND_EU);672672+673673+ if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) {674674+ /*675675+ * Disable Repacking for Compression (masked R/W access)676676+ * before rendering compressed surfaces for display.677677+ */678678+ wa_masked_en(wal, CACHE_MODE_0_GEN7,679679+ DISABLE_REPACKING_FOR_COMPRESSION);680680+ }674681}675682676683/*···23152304 RING_PSMI_CTL(RENDER_RING_BASE),23162305 GEN12_WAIT_FOR_EVENT_POWER_DOWN_DISABLE |23172306 GEN8_RC_SEMA_IDLE_MSG_DISABLE);23182318- }23192319-23202320- if (IS_JASPERLAKE(i915) || IS_ELKHARTLAKE(i915)) {23212321- /*23222322- * "Disable Repacking for Compression (masked R/W access)23232323- * before rendering compressed surfaces for display."23242324- */23252325- wa_masked_en(wal, CACHE_MODE_0_GEN7,23262326- DISABLE_REPACKING_FOR_COMPRESSION);23272307 }2328230823292309 if (GRAPHICS_VER(i915) == 11) {
+3-3
drivers/gpu/drm/nouveau/nouveau_exec.c
···6060 * virtual address in the GPU's VA space there is no guarantee that the actual6161 * mappings are created in the GPU's MMU. If the given memory is swapped out6262 * at the time the bind operation is executed the kernel will stash the mapping6363- * details into it's internal alloctor and create the actual MMU mappings once6363+ * details into it's internal allocator and create the actual MMU mappings once6464 * the memory is swapped back in. While this is transparent for userspace, it is6565 * guaranteed that all the backing memory is swapped back in and all the memory6666 * mappings, as requested by userspace previously, are actually mapped once the6767 * DRM_NOUVEAU_EXEC ioctl is called to submit an exec job.6868 *6969 * A VM_BIND job can be executed either synchronously or asynchronously. If7070- * exectued asynchronously, userspace may provide a list of syncobjs this job7070+ * executed asynchronously, userspace may provide a list of syncobjs this job7171 * will wait for and/or a list of syncobj the kernel will signal once the7272 * VM_BIND job finished execution. If executed synchronously the ioctl will7373 * block until the bind job is finished. For synchronous jobs the kernel will···8282 * Since VM_BIND jobs update the GPU's VA space on job submit, EXEC jobs do have8383 * an up to date view of the VA space. However, the actual mappings might still8484 * be pending. Hence, EXEC jobs require to have the particular fences - of8585- * the corresponding VM_BIND jobs they depent on - attached to them.8585+ * the corresponding VM_BIND jobs they depend on - attached to them.8686 */87878888static int
+2-1
drivers/gpu/drm/nouveau/nvif/vmm.c
···219219 case RAW: args->type = NVIF_VMM_V0_TYPE_RAW; break;220220 default:221221 WARN_ON(1);222222- return -EINVAL;222222+ ret = -EINVAL;223223+ goto done;223224 }224225225226 memcpy(args->data, argv, argc);
···5353 bool "Rockchip cdn DP"5454 depends on EXTCON=y || (EXTCON=m && DRM_ROCKCHIP=m)5555 select DRM_DISPLAY_HELPER5656+ select DRM_BRIDGE_CONNECTOR5657 select DRM_DISPLAY_DP_HELPER5758 help5859 This selects support for Rockchip SoC specific extensions
+5-4
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
···25792579}2580258025812581/*25822582- * The window registers are only updated when config done is written.25832583- * Until that they read back the old value. As we read-modify-write25842584- * these registers mark them as non-volatile. This makes sure we read25852585- * the new values from the regmap register cache.25822582+ * The window and video port registers are only updated when config25832583+ * done is written. Until that they read back the old value. As we25842584+ * read-modify-write these registers mark them as non-volatile. This25852585+ * makes sure we read the new values from the regmap register cache.25862586 */25872587static const struct regmap_range vop2_nonvolatile_range[] = {25882588+ regmap_reg_range(RK3568_VP0_CTRL_BASE, RK3588_VP3_CTRL_BASE + 255),25882589 regmap_reg_range(0x1000, 0x23ff),25892590};25902591
···408408409409 /* Special layout, prepared below.. */410410 vm = xe_vm_create(xe, XE_VM_FLAG_MIGRATION |411411- XE_VM_FLAG_SET_TILE_ID(tile));411411+ XE_VM_FLAG_SET_TILE_ID(tile), NULL);412412 if (IS_ERR(vm))413413 return ERR_CAST(vm);414414
+1-1
drivers/gpu/drm/xe/xe_pxp_submit.c
···101101 xe_assert(xe, hwe);102102103103 /* PXP instructions must be issued from PPGTT */104104- vm = xe_vm_create(xe, XE_VM_FLAG_GSC);104104+ vm = xe_vm_create(xe, XE_VM_FLAG_GSC, NULL);105105 if (IS_ERR(vm))106106 return PTR_ERR(vm);107107
+23-25
drivers/gpu/drm/xe/xe_vm.c
···16401640 }16411641}1642164216431643-struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags)16431643+struct xe_vm *xe_vm_create(struct xe_device *xe, u32 flags, struct xe_file *xef)16441644{16451645 struct drm_gem_object *vm_resv_obj;16461646 struct xe_vm *vm;···16611661 vm->xe = xe;1662166216631663 vm->size = 1ull << xe->info.va_bits;16641664-16651664 vm->flags = flags;1666166516661666+ if (xef)16671667+ vm->xef = xe_file_get(xef);16671668 /**16681669 * GSC VMs are kernel-owned, only used for PXP ops and can sometimes be16691670 * manipulated under the PXP mutex. However, the PXP mutex can be taken···17951794 if (number_tiles > 1)17961795 vm->composite_fence_ctx = dma_fence_context_alloc(1);1797179617971797+ if (xef && xe->info.has_asid) {17981798+ u32 asid;17991799+18001800+ down_write(&xe->usm.lock);18011801+ err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm,18021802+ XA_LIMIT(1, XE_MAX_ASID - 1),18031803+ &xe->usm.next_asid, GFP_KERNEL);18041804+ up_write(&xe->usm.lock);18051805+ if (err < 0)18061806+ goto err_unlock_close;18071807+18081808+ vm->usm.asid = asid;18091809+ }18101810+17981811 trace_xe_vm_create(vm);1799181218001813 return vm;···18291814 for_each_tile(tile, xe, id)18301815 xe_range_fence_tree_fini(&vm->rftree[id]);18311816 ttm_lru_bulk_move_fini(&xe->ttm, &vm->lru_bulk_move);18171817+ if (vm->xef)18181818+ xe_file_put(vm->xef);18321819 kfree(vm);18331820 if (flags & XE_VM_FLAG_LR_MODE)18341821 xe_pm_runtime_put(xe);···20762059 struct xe_device *xe = to_xe_device(dev);20772060 struct xe_file *xef = to_xe_file(file);20782061 struct drm_xe_vm_create *args = data;20792079- struct xe_tile *tile;20802062 struct xe_vm *vm;20812081- u32 id, asid;20632063+ u32 id;20822064 int err;20832065 u32 flags = 0;20842066···21132097 if (args->flags & DRM_XE_VM_CREATE_FLAG_FAULT_MODE)21142098 flags |= XE_VM_FLAG_FAULT_MODE;2115209921162116- vm = xe_vm_create(xe, flags);21002100+ vm = xe_vm_create(xe, flags, xef);21172101 if (IS_ERR(vm))21182102 return 
PTR_ERR(vm);21192119-21202120- if (xe->info.has_asid) {21212121- down_write(&xe->usm.lock);21222122- err = xa_alloc_cyclic(&xe->usm.asid_to_vm, &asid, vm,21232123- XA_LIMIT(1, XE_MAX_ASID - 1),21242124- &xe->usm.next_asid, GFP_KERNEL);21252125- up_write(&xe->usm.lock);21262126- if (err < 0)21272127- goto err_close_and_put;21282128-21292129- vm->usm.asid = asid;21302130- }21312131-21322132- vm->xef = xe_file_get(xef);21332133-21342134- /* Record BO memory for VM pagetable created against client */21352135- for_each_tile(tile, xe, id)21362136- if (vm->pt_root[id])21372137- xe_drm_client_add_bo(vm->xef->client, vm->pt_root[id]->bo);2138210321392104#if IS_ENABLED(CONFIG_DRM_XE_DEBUG_MEM)21402105 /* Warning: Security issue - never enable by default */···34183421free_bind_ops:34193422 if (args->num_binds > 1)34203423 kvfree(*bind_ops);34243424+ *bind_ops = NULL;34213425 return err;34223426}34233427···35253527 struct xe_exec_queue *q = NULL;35263528 u32 num_syncs, num_ufence = 0;35273529 struct xe_sync_entry *syncs = NULL;35283528- struct drm_xe_vm_bind_op *bind_ops;35303530+ struct drm_xe_vm_bind_op *bind_ops = NULL;35293531 struct xe_vma_ops vops;35303532 struct dma_fence *fence;35313533 int err;
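The ASID assignment moved into `xe_vm_create()` above uses `xa_alloc_cyclic()`: allocate the next free ID in a bounded range, starting the search at a moving hint so recently freed IDs are not reused immediately. A rough standalone model of that allocation policy (an approximation for illustration, not the xarray API):

```python
# Toy model of cyclic ID allocation in a bounded range [lo, hi],
# approximating what xa_alloc_cyclic() does for the ASID assignment:
# the search starts at a moving hint (like xe->usm.next_asid), so a
# just-freed ID is not immediately handed back out.

class CyclicAllocator:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.next = lo          # moving hint
        self.used = {}          # id -> owner, like the asid_to_vm xarray

    def alloc(self, owner):
        span = self.hi - self.lo + 1
        for off in range(span):
            cand = self.lo + (self.next - self.lo + off) % span
            if cand not in self.used:
                self.used[cand] = owner
                self.next = self.lo + (cand - self.lo + 1) % span
                return cand
        return -1               # range exhausted

a = CyclicAllocator(1, 5)
ids = [a.alloc(f"vm{i}") for i in range(3)]
assert ids == [1, 2, 3]
a.used.pop(1)                   # free ID 1
assert a.alloc("vm3") == 4      # hint keeps advancing; 1 is not reused yet
```

Doing this inside `xe_vm_create()` (rather than in the ioctl handler, as before) means the error path can unwind the allocation together with the rest of the VM setup.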
···849849static int ad7124_syscalib_locked(struct ad7124_state *st, const struct iio_chan_spec *chan)850850{851851 struct device *dev = &st->sd.spi->dev;852852- struct ad7124_channel *ch = &st->channels[chan->channel];852852+ struct ad7124_channel *ch = &st->channels[chan->address];853853 int ret;854854855855 if (ch->syscalib_mode == AD7124_SYSCALIB_ZERO_SCALE) {···865865 if (ret < 0)866866 return ret;867867868868- dev_dbg(dev, "offset for channel %d after zero-scale calibration: 0x%x\n",869869- chan->channel, ch->cfg.calibration_offset);868868+ dev_dbg(dev, "offset for channel %lu after zero-scale calibration: 0x%x\n",869869+ chan->address, ch->cfg.calibration_offset);870870 } else {871871 ch->cfg.calibration_gain = st->gain_default;872872···880880 if (ret < 0)881881 return ret;882882883883- dev_dbg(dev, "gain for channel %d after full-scale calibration: 0x%x\n",884884- chan->channel, ch->cfg.calibration_gain);883883+ dev_dbg(dev, "gain for channel %lu after full-scale calibration: 0x%x\n",884884+ chan->address, ch->cfg.calibration_gain);885885 }886886887887 return 0;···924924{925925 struct ad7124_state *st = iio_priv(indio_dev);926926927927- st->channels[chan->channel].syscalib_mode = mode;927927+ st->channels[chan->address].syscalib_mode = mode;928928929929 return 0;930930}···934934{935935 struct ad7124_state *st = iio_priv(indio_dev);936936937937- return st->channels[chan->channel].syscalib_mode;937937+ return st->channels[chan->address].syscalib_mode;938938}939939940940static const struct iio_enum ad7124_syscalib_mode_enum = {
+75-12
drivers/iio/adc/ad7173.c
···200200 /*201201 * Following fields are used to compare equality. If you202202 * make adaptations in it, you most likely also have to adapt203203- * ad7173_find_live_config(), too.203203+ * ad7173_is_setup_equal(), too.204204 */205205 struct_group(config_props,206206 bool bipolar;···561561 st->config_usage_counter = 0;562562}563563564564-static struct ad7173_channel_config *565565-ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)564564+/**565565+ * ad7173_is_setup_equal - Compare two channel setups566566+ * @cfg1: First channel configuration567567+ * @cfg2: Second channel configuration568568+ *569569+ * Compares all configuration options that affect the registers connected to570570+ * SETUP_SEL, namely CONFIGx, FILTERx, GAINx and OFFSETx.571571+ *572572+ * Returns: true if the setups are identical, false otherwise573573+ */574574+static bool ad7173_is_setup_equal(const struct ad7173_channel_config *cfg1,575575+ const struct ad7173_channel_config *cfg2)566576{567567- struct ad7173_channel_config *cfg_aux;568568- int i;569569-570577 /*571578 * This is just to make sure that the comparison is adapted after572579 * struct ad7173_channel_config was changed.···586579 u8 ref_sel;587580 }));588581582582+ return cfg1->bipolar == cfg2->bipolar &&583583+ cfg1->input_buf == cfg2->input_buf &&584584+ cfg1->odr == cfg2->odr &&585585+ cfg1->ref_sel == cfg2->ref_sel;586586+}587587+588588+static struct ad7173_channel_config *589589+ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)590590+{591591+ struct ad7173_channel_config *cfg_aux;592592+ int i;593593+589594 for (i = 0; i < st->num_channels; i++) {590595 cfg_aux = &st->channels[i].cfg;591596592592- if (cfg_aux->live &&593593- cfg->bipolar == cfg_aux->bipolar &&594594- cfg->input_buf == cfg_aux->input_buf &&595595- cfg->odr == cfg_aux->odr &&596596- cfg->ref_sel == cfg_aux->ref_sel)597597+ if (cfg_aux->live && ad7173_is_setup_equal(cfg, cfg_aux))597598 return 
cfg_aux;598599 }599600 return NULL;···12431228 const unsigned long *scan_mask)12441229{12451230 struct ad7173_state *st = iio_priv(indio_dev);12461246- int i, ret;12311231+ int i, j, k, ret;1247123212481233 for (i = 0; i < indio_dev->num_channels; i++) {12491234 if (test_bit(i, scan_mask))···12521237 ret = ad_sd_write_reg(&st->sd, AD7173_REG_CH(i), 2, 0);12531238 if (ret < 0)12541239 return ret;12401240+ }12411241+12421242+ /*12431243+ * On some chips, there are more channels than setups, so if there were12441244+ * more unique setups requested than the number of available slots,12451245+ * ad7173_set_channel() will have written over some of the slots. We12461246+ * can detect this by making sure each assigned cfg_slot matches the12471247+ * requested configuration. If it doesn't, we know that the slot was12481248+ * overwritten by a different channel.12491249+ */12501250+ for_each_set_bit(i, scan_mask, indio_dev->num_channels) {12511251+ const struct ad7173_channel_config *cfg1, *cfg2;12521252+12531253+ cfg1 = &st->channels[i].cfg;12541254+12551255+ for_each_set_bit(j, scan_mask, indio_dev->num_channels) {12561256+ cfg2 = &st->channels[j].cfg;12571257+12581258+ /*12591259+ * Only compare configs that are assigned to the same12601260+ * SETUP_SEL slot and don't compare channel to itself.12611261+ */12621262+ if (i == j || cfg1->cfg_slot != cfg2->cfg_slot)12631263+ continue;12641264+12651265+ /*12661266+ * If we find two different configs trying to use the12671267+ * same SETUP_SEL slot, then we know that we12681268+ * have too many unique configurations requested for12691269+ * the available slots and at least one was overwritten.12701270+ */12711271+ if (!ad7173_is_setup_equal(cfg1, cfg2)) {12721272+ /*12731273+ * At this point, there isn't a way to tell12741274+ * which setups are actually programmed in the12751275+ * ADC anymore, so we could read them back to12761276+ * see, but it is simpler to just turn off all12771277+ * of the live flags so that
everything gets12781278+ * reprogrammed on the next attempt to read a sample.12791279+ */12801280+ for (k = 0; k < st->num_channels; k++)12811281+ st->channels[k].cfg.live = false;12821282+12831283+ dev_err(&st->sd.spi->dev,12841284+ "Too many unique channel configurations requested for scan\n");12851285+ return -EINVAL;12861286+ }12871287+ }12551288 }1256128912571290 return 0;
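The scan-mask validation added above reduces to one rule: two enabled channels may share a SETUP_SEL slot only if they requested identical setups. A standalone model of that pairwise check (hypothetical helper name and data shape, not the driver's API):

```python
# Model of the setup-slot conflict check: each enabled channel records
# which SETUP_SEL slot it was assigned and the configuration it asked
# for. If two channels share a slot but wanted different configs, more
# unique configs were requested than slots exist, and at least one
# slot was silently overwritten.

def find_slot_conflict(channels):
    """channels: list of (cfg_slot, config) tuples for enabled channels.
    Returns True if any slot is claimed by two different configs."""
    for i, (slot_i, cfg_i) in enumerate(channels):
        for j, (slot_j, cfg_j) in enumerate(channels):
            if i == j or slot_i != slot_j:
                continue
            if cfg_i != cfg_j:
                return True
    return False

# Three unique configs squeezed into two slots: one slot was reused.
assert find_slot_conflict([(0, "odr=10"), (1, "odr=20"), (1, "odr=50")])
# Two channels legitimately sharing one identical setup is fine.
assert not find_slot_conflict([(0, "odr=10"), (1, "odr=20"), (0, "odr=10")])
```

Like the driver, the model cannot tell which of the conflicting configs actually survived in hardware, only that a conflict happened, which is why the patch drops all `live` flags rather than trying to repair individual slots.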
···8989 struct completion completion;9090 struct mutex lock;9191 u16 last_val[RZG2L_ADC_MAX_CHANNELS];9292- bool was_rpm_active;9392};94939594/**···427428 if (!indio_dev)428429 return -ENOMEM;429430431431+ platform_set_drvdata(pdev, indio_dev);432432+430433 adc = iio_priv(indio_dev);431434432435 adc->hw_params = device_get_match_data(dev);···460459 ret = devm_pm_runtime_enable(dev);461460 if (ret)462461 return ret;463463-464464- platform_set_drvdata(pdev, indio_dev);465462466463 ret = rzg2l_adc_hw_init(dev, adc);467464 if (ret)···540541 };541542 int ret;542543543543- if (pm_runtime_suspended(dev)) {544544- adc->was_rpm_active = false;545545- } else {546546- ret = pm_runtime_force_suspend(dev);547547- if (ret)548548- return ret;549549- adc->was_rpm_active = true;550550- }544544+ ret = pm_runtime_force_suspend(dev);545545+ if (ret)546546+ return ret;551547552548 ret = reset_control_bulk_assert(ARRAY_SIZE(resets), resets);553549 if (ret)···551557 return 0;552558553559rpm_restore:554554- if (adc->was_rpm_active)555555- pm_runtime_force_resume(dev);556556-560560+ pm_runtime_force_resume(dev);557561 return ret;558562}559563···569577 if (ret)570578 return ret;571579572572- if (adc->was_rpm_active) {573573- ret = pm_runtime_force_resume(dev);574574- if (ret)575575- goto resets_restore;576576- }580580+ ret = pm_runtime_force_resume(dev);581581+ if (ret)582582+ goto resets_restore;577583578584 ret = rzg2l_adc_hw_init(dev, adc);579585 if (ret)···580590 return 0;581591582592rpm_restore:583583- if (adc->was_rpm_active) {584584- pm_runtime_mark_last_busy(dev);585585- pm_runtime_put_autosuspend(dev);586586- }593593+ pm_runtime_force_suspend(dev);587594resets_restore:588595 reset_control_bulk_assert(ARRAY_SIZE(resets), resets);589596 return ret;
+5-1
drivers/iio/imu/inv_icm42600/inv_icm42600_temp.c
···3232 goto exit;33333434 *temp = (s16)be16_to_cpup(raw);3535+ /*3636+ * Temperature data is invalid if both accel and gyro are off.3737+ * Return -EBUSY in this case.3838+ */3539 if (*temp == INV_ICM42600_DATA_INVALID)3636- ret = -EINVAL;4040+ ret = -EBUSY;37413842exit:3943 mutex_unlock(&st->lock);
···3213321332143214 /* Bring chip out of reset if there is an assigned GPIO line */32153215 gpiod = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_HIGH);32163216+ if (IS_ERR(gpiod))32173217+ return dev_err_probe(dev, PTR_ERR(gpiod), "failed to get reset GPIO\n");32183218+32163219 /* Deassert the signal */32173217- if (gpiod) {32183218- dev_info(dev, "release reset\n");32193219- gpiod_set_value(gpiod, 0);32203220- }32203220+ dev_info(dev, "release reset\n");32213221+ gpiod_set_value(gpiod, 0);3221322232223223 data->regmap = regmap;32233224
···994994 old_entry = xa_store(&dev->qp_xa, 1, qp, GFP_KERNEL);995995 if (xa_is_err(old_entry))996996 ret = xa_err(old_entry);997997+ else998998+ qp->ibqp.qp_num = 1;997999 } else {9981000 ret = xa_alloc_cyclic(&dev->qp_xa, &qp->ibqp.qp_num, qp,9991001 XA_LIMIT(1, dev->attrs.max_qp - 1),···10331031 if (ret)10341032 goto err_out_cmd;10351033 } else {10361036- init_kernel_qp(dev, qp, attrs);10341034+ ret = init_kernel_qp(dev, qp, attrs);10351035+ if (ret)10361036+ goto err_out_xa;10371037 }1038103810391039 qp->attrs.max_send_sge = attrs->cap.max_send_sge;
+3-3
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
···30433043 if (!hr_dev->is_vf)30443044 hns_roce_free_link_table(hr_dev);3045304530463046- if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP09)30463046+ if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09)30473047 free_dip_entry(hr_dev);30483048}30493049···54765476 return ret;54775477}5478547854795479-static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 qpn,54795479+static int hns_roce_v2_query_sccc(struct hns_roce_dev *hr_dev, u32 sccn,54805480 void *buffer)54815481{54825482 struct hns_roce_v2_scc_context *context;···54885488 return PTR_ERR(mailbox);5489548954905490 ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma, HNS_ROCE_CMD_QUERY_SCCC,54915491- qpn);54915491+ sccn);54925492 if (ret)54935493 goto out;54945494
+8-1
drivers/infiniband/hw/hns/hns_roce_restrack.c
···100100 struct hns_roce_v2_qp_context qpc;101101 struct hns_roce_v2_scc_context sccc;102102 } context = {};103103+ u32 sccn = hr_qp->qpn;103104 int ret;104105105106 if (!hr_dev->hw->query_qpc)···117116 !hr_dev->hw->query_sccc)118117 goto out;119118120120- ret = hr_dev->hw->query_sccc(hr_dev, hr_qp->qpn, &context.sccc);119119+ if (hr_qp->cong_type == CONG_TYPE_DIP) {120120+ if (!hr_qp->dip)121121+ goto out;122122+ sccn = hr_qp->dip->dip_idx;123123+ }124124+125125+ ret = hr_dev->hw->query_sccc(hr_dev, sccn, &context.sccc);121126 if (ret)122127 ibdev_warn_ratelimited(&hr_dev->ib_dev,123128 "failed to query SCCC, ret = %d.\n",
+8-21
drivers/infiniband/sw/rxe/rxe_net.c
···345345346346static void rxe_skb_tx_dtor(struct sk_buff *skb)347347{348348- struct net_device *ndev = skb->dev;349349- struct rxe_dev *rxe;350350- unsigned int qp_index;351351- struct rxe_qp *qp;348348+ struct rxe_qp *qp = skb->sk->sk_user_data;352349 int skb_out;353350354354- rxe = rxe_get_dev_from_net(ndev);355355- if (!rxe && is_vlan_dev(ndev))356356- rxe = rxe_get_dev_from_net(vlan_dev_real_dev(ndev));357357- if (WARN_ON(!rxe))358358- return;359359-360360- qp_index = (int)(uintptr_t)skb->sk->sk_user_data;361361- if (!qp_index)362362- return;363363-364364- qp = rxe_pool_get_index(&rxe->qp_pool, qp_index);365365- if (!qp)366366- goto put_dev;367367-368351 skb_out = atomic_dec_return(&qp->skb_out);369369- if (qp->need_req_skb && skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW)352352+ if (unlikely(qp->need_req_skb &&353353+ skb_out < RXE_INFLIGHT_SKBS_PER_QP_LOW))370354 rxe_sched_task(&qp->send_task);371355372356 rxe_put(qp);373373-put_dev:374374- ib_device_put(&rxe->ib_dev);375357 sock_put(skb->sk);376358}377359···365383 sock_hold(sk);366384 skb->sk = sk;367385 skb->destructor = rxe_skb_tx_dtor;386386+ rxe_get(pkt->qp);368387 atomic_inc(&pkt->qp->skb_out);369388370389 if (skb->protocol == htons(ETH_P_IP))···388405 sock_hold(sk);389406 skb->sk = sk;390407 skb->destructor = rxe_skb_tx_dtor;408408+ rxe_get(pkt->qp);391409 atomic_inc(&pkt->qp->skb_out);392410393411 if (skb->protocol == htons(ETH_P_IP))···480496 rcu_read_unlock();481497 goto out;482498 }499499+500500+ /* Add time stamp to skb. */501501+ skb->tstamp = ktime_get();483502484503 skb_reserve(skb, hdr_len + LL_RESERVED_SPACE(ndev));485504
+1-1
drivers/infiniband/sw/rxe/rxe_qp.c
···244244 err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, 0, &qp->sk);245245 if (err < 0)246246 return err;247247- qp->sk->sk->sk_user_data = (void *)(uintptr_t)qp->elem.index;247247+ qp->sk->sk->sk_user_data = qp;248248249249 /* pick a source UDP port number for this QP based on250250 * the source QPN. this spreads traffic for different QPs
+2-2
drivers/iommu/amd/init.c
···36383638{36393639 u32 seg = 0, bus, dev, fn;36403640 char *hid, *uid, *p, *addr;36413641- char acpiid[ACPIID_LEN] = {0};36413641+ char acpiid[ACPIID_LEN + 1] = { }; /* size with NULL terminator */36423642 int i;3643364336443644 addr = strchr(str, '@');···36643664 /* We have the '@', make it the terminator to get just the acpiid */36653665 *addr++ = 0;3666366636673667- if (strlen(str) > ACPIID_LEN + 1)36673667+ if (strlen(str) > ACPIID_LEN)36683668 goto not_found;3669366936703670 if (sscanf(str, "=%s", acpiid) != 1)
+1-1
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
···29972997 /* ATS is being switched off, invalidate the entire ATC */29982998 arm_smmu_atc_inv_master(master, IOMMU_NO_PASID);29992999 }30003000- master->ats_enabled = state->ats_enabled;3001300030023001 arm_smmu_remove_master_domain(master, state->old_domain, state->ssid);30023002+ master->ats_enabled = state->ats_enabled;30033003}3004300430053005static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
+5-3
drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
···301301 struct iommu_vevent_tegra241_cmdqv vevent_data;302302 int i;303303304304- for (i = 0; i < LVCMDQ_ERR_MAP_NUM_64; i++)305305- vevent_data.lvcmdq_err_map[i] =306306- readq_relaxed(REG_VINTF(vintf, LVCMDQ_ERR_MAP_64(i)));304304+ for (i = 0; i < LVCMDQ_ERR_MAP_NUM_64; i++) {305305+ u64 err = readq_relaxed(REG_VINTF(vintf, LVCMDQ_ERR_MAP_64(i)));306306+307307+ vevent_data.lvcmdq_err_map[i] = cpu_to_le64(err);308308+ }307309308310 iommufd_viommu_report_event(viommu, IOMMU_VEVENTQ_TYPE_TEGRA241_CMDQV,309311 &vevent_data, sizeof(vevent_data));
···339339 * so all the races disappear.340340 */341341static bool create_on_open = true;342342+static bool legacy_async_del_gendisk = true;342343343344/*344345 * We have a system wide 'event count' that is incremented···878877 export_rdev(rdev, mddev);879878 }880879881881- /* Call del_gendisk after release reconfig_mutex to avoid882882- * deadlock (e.g. call del_gendisk under the lock and an883883- * access to sysfs files waits the lock)884884- * And MD_DELETED is only used for md raid which is set in885885- * do_md_stop. dm raid only uses md_stop to stop. So dm raid886886- * doesn't need to check MD_DELETED when getting reconfig lock887887- */888888- if (test_bit(MD_DELETED, &mddev->flags))889889- del_gendisk(mddev->gendisk);880880+ if (!legacy_async_del_gendisk) {881881+ /*882882+ * Call del_gendisk after release reconfig_mutex to avoid883883+ * deadlock (e.g. call del_gendisk under the lock and an884884+ * access to sysfs files waits the lock)885885+ * And MD_DELETED is only used for md raid which is set in886886+ * do_md_stop. dm raid only uses md_stop to stop. So dm raid887887+ * doesn't need to check MD_DELETED when getting reconfig lock888888+ */889889+ if (test_bit(MD_DELETED, &mddev->flags))890890+ del_gendisk(mddev->gendisk);891891+ }890892}891893EXPORT_SYMBOL_GPL(mddev_unlock);892894···14231419 else {14241420 if (sb->events_hi == sb->cp_events_hi &&14251421 sb->events_lo == sb->cp_events_lo) {14261426- mddev->resync_offset = sb->resync_offset;14221422+ mddev->resync_offset = sb->recovery_cp;14271423 } else14281424 mddev->resync_offset = 0;14291425 }···15511547 mddev->minor_version = sb->minor_version;15521548 if (mddev->in_sync)15531549 {15541554- sb->resync_offset = mddev->resync_offset;15501550+ sb->recovery_cp = mddev->resync_offset;15551551 sb->cp_events_hi = (mddev->events>>32);15561552 sb->cp_events_lo = (u32)mddev->events;15571553 if (mddev->resync_offset == MaxSector)15581554 sb->state = (1<< MD_SB_CLEAN);15591555 } else15601560- sb->resync_offset = 0;15561556+ sb->recovery_cp = 0;1561155715621558 sb->layout = mddev->layout;15631559 sb->chunk_size = mddev->chunk_sectors << 9;···48394835static struct md_sysfs_entry md_metadata =48404836__ATTR_PREALLOC(metadata_version, S_IRUGO|S_IWUSR, metadata_show, metadata_store);4841483748384838+static bool rdev_needs_recovery(struct md_rdev *rdev, sector_t sectors)48394839+{48404840+ return rdev->raid_disk >= 0 &&48414841+ !test_bit(Journal, &rdev->flags) &&48424842+ !test_bit(Faulty, &rdev->flags) &&48434843+ !test_bit(In_sync, &rdev->flags) &&48444844+ rdev->recovery_offset < sectors;48454845+}48464846+48474847+static enum sync_action md_get_active_sync_action(struct mddev *mddev)48484848+{48494849+ struct md_rdev *rdev;48504850+ bool is_recover = false;48514851+48524852+ if (mddev->resync_offset < MaxSector)48534853+ return ACTION_RESYNC;48544854+48554855+ if (mddev->reshape_position != MaxSector)48564856+ return ACTION_RESHAPE;48574857+48584858+ rcu_read_lock();48594859+ rdev_for_each_rcu(rdev, mddev) {48604860+ if (rdev_needs_recovery(rdev, MaxSector)) {48614861+ is_recover = true;48624862+ break;48634863+ }48644864+ }48654865+ rcu_read_unlock();48664866+48674867+ return is_recover ? ACTION_RECOVER : ACTION_IDLE;48684868+}48694869+48424870enum sync_action md_sync_action(struct mddev *mddev)48434871{48444872 unsigned long recovery = mddev->recovery;48734873+ enum sync_action active_action;4845487448464875 /*48474876 * frozen has the highest priority, means running sync_thread will be···48984861 !test_bit(MD_RECOVERY_NEEDED, &recovery))48994862 return ACTION_IDLE;4900486349014901- if (test_bit(MD_RECOVERY_RESHAPE, &recovery) ||49024902- mddev->reshape_position != MaxSector)48644864+ /*48654865+ * Check if any sync operation (resync/recover/reshape) is48664866+ * currently active. This ensures that only one sync operation48674867+ * can run at a time. Returns the type of active operation, or48684868+ * ACTION_IDLE if none are active.48694869+ */48704870+ active_action = md_get_active_sync_action(mddev);48714871+ if (active_action != ACTION_IDLE)48724872+ return active_action;48734873+48744874+ if (test_bit(MD_RECOVERY_RESHAPE, &recovery))49034875 return ACTION_RESHAPE;4904487649054877 if (test_bit(MD_RECOVERY_RECOVER, &recovery))···58645818{58655819 struct mddev *mddev = container_of(ko, struct mddev, kobj);5866582058215821+ if (legacy_async_del_gendisk) {58225822+ if (mddev->sysfs_state)58235823+ sysfs_put(mddev->sysfs_state);58245824+ if (mddev->sysfs_level)58255825+ sysfs_put(mddev->sysfs_level);58265826+ del_gendisk(mddev->gendisk);58275827+ }58675828 put_disk(mddev->gendisk);58685829}58695830···60736020static int md_alloc_and_put(dev_t dev, char *name)60746021{60756022 struct mddev *mddev = md_alloc(dev, name);60236023+60246024+ if (legacy_async_del_gendisk)60256025+ pr_warn("md: async del_gendisk mode will be removed in future, please upgrade to mdadm-4.5+\n");6076602660776027 if (IS_ERR(mddev))60786028 return PTR_ERR(mddev);···64876431 mddev->persistent = 0;64886432 mddev->level = LEVEL_NONE;64896433 mddev->clevel[0] = 0;64906490- /* if UNTIL_STOP is set, it's cleared here */64916491- mddev->hold_active = 0;64926492- /* Don't clear MD_CLOSING, or mddev can be opened again. */64936493- mddev->flags &= BIT_ULL_MASK(MD_CLOSING);64346434+64356435+ /*64366436+ * For legacy_async_del_gendisk mode, it can stop the array in the64376437+ * middle of assembling it, then it still can access the array. So64386438+ * it needs to clear MD_CLOSING. If not legacy_async_del_gendisk,64396439+ * it can't open the array again after stopping it. So it doesn't64406440+ * clear MD_CLOSING.64416441+ */64426442+ if (legacy_async_del_gendisk && mddev->hold_active) {64436443+ clear_bit(MD_CLOSING, &mddev->flags);64446444+ } else {64456445+ /* if UNTIL_STOP is set, it's cleared here */64466446+ mddev->hold_active = 0;64476447+ /* Don't clear MD_CLOSING, or mddev can be opened again. */64486448+ mddev->flags &= BIT_ULL_MASK(MD_CLOSING);64496449+ }64946450 mddev->sb_flags = 0;64956451 mddev->ro = MD_RDWR;64966452 mddev->metadata_type[0] = 0;···6726665867276659 export_array(mddev);67286660 md_clean(mddev);67296729- set_bit(MD_DELETED, &mddev->flags);66616661+ if (!legacy_async_del_gendisk)66626662+ set_bit(MD_DELETED, &mddev->flags);67306663 }67316664 md_new_event();67326665 sysfs_notify_dirent_safe(mddev->sysfs_state);···90378968 start = MaxSector;90388969 rcu_read_lock();90398970 rdev_for_each_rcu(rdev, mddev)90409040- if (rdev->raid_disk >= 0 &&90419041- !test_bit(Journal, &rdev->flags) &&90429042- !test_bit(Faulty, &rdev->flags) &&90439043- !test_bit(In_sync, &rdev->flags) &&90449044- rdev->recovery_offset < start)89718971+ if (rdev_needs_recovery(rdev, start))90458972 start = rdev->recovery_offset;90468973 rcu_read_unlock();90478974···93969331 test_bit(MD_RECOVERY_RECOVER, &mddev->recovery)) {93979332 rcu_read_lock();93989333 rdev_for_each_rcu(rdev, mddev)93999399- if (rdev->raid_disk >= 0 &&94009400- mddev->delta_disks >= 0 &&94019401- !test_bit(Journal, &rdev->flags) &&94029402- !test_bit(Faulty, &rdev->flags) &&94039403- !test_bit(In_sync, &rdev->flags) &&94049404- rdev->recovery_offset < mddev->curr_resync)93349334+ if (mddev->delta_disks >= 0 &&93359335+ rdev_needs_recovery(rdev, mddev->curr_resync))94059336 rdev->recovery_offset = mddev->curr_resync;94069337 rcu_read_unlock();94079338 }···1045310392module_param(start_dirty_degraded, int, S_IRUGO|S_IWUSR);1045410393module_param_call(new_array, add_named_array, NULL, NULL, S_IWUSR);1045510394module_param(create_on_open, bool, S_IRUSR|S_IWUSR);1039510395+module_param(legacy_async_del_gendisk, bool, 0600);10456103961045710397MODULE_LICENSE("GPL");1045810398MODULE_DESCRIPTION("MD RAID framework");
···24572457 dev->dev_ops->cfg_port_member(dev, i, val | cpu_port);24582458 }2459245924602460+ /* HSR ports are set up once, so use the assigned membership24612461+ * when the port is enabled.24622462+ */24632463+ if (!port_member && p->stp_state == BR_STATE_FORWARDING &&24642464+ (dev->hsr_ports & BIT(port)))24652465+ port_member = dev->hsr_ports;24602466 dev->dev_ops->cfg_port_member(dev, port, port_member | cpu_port);24612467}24622468
···53325332{53335333 int i;5334533453355335- netdev_assert_locked(bp->dev);53355335+ netdev_assert_locked_or_invisible(bp->dev);5336533653375337 /* Under netdev instance lock and all our NAPIs have been disabled.53385338 * It's safe to delete the hash table.
···606606 if (!npc_check_field(rvu, blkaddr, NPC_LB, intf))607607 *features &= ~BIT_ULL(NPC_OUTER_VID);608608609609- /* Set SPI flag only if AH/ESP and IPSEC_SPI are in the key */610610- if (npc_check_field(rvu, blkaddr, NPC_IPSEC_SPI, intf) &&609609+ /* Allow extracting SPI field from AH and ESP headers at same offset */610610+ if (npc_is_field_present(rvu, NPC_IPSEC_SPI, intf) &&611611 (*features & (BIT_ULL(NPC_IPPROTO_ESP) | BIT_ULL(NPC_IPPROTO_AH))))612612 *features |= BIT_ULL(NPC_IPSEC_SPI);613613
+2
drivers/net/ethernet/mediatek/mtk_ppe_offload.c
···101101 if (!IS_ENABLED(CONFIG_NET_MEDIATEK_SOC_WED))102102 return -1;103103104104+ rcu_read_lock();104105 err = dev_fill_forward_path(dev, addr, &stack);106106+ rcu_read_unlock();105107 if (err)106108 return err;107109
···373373 refcount_t refcnt;374374 u32 root_tsar_ix;375375 struct mlx5_qos_domain *domain;376376- /* Contains all vports with QoS enabled but no explicit node.377377- * Cannot be NULL if QoS is enabled, but may be a fake node378378- * referencing the root TSAR if the esw doesn't support nodes.379379- */380380- struct mlx5_esw_sched_node *node0;381376 } qos;382377383378 struct mlx5_esw_bridge_offloads *br_offloads;
···85858686 ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev,8787 tbl,8888+ 0,8889 &matcher->end_ft_id);8990 if (ret) {9091 mlx5hws_err(tbl->ctx, "Isolated matcher: failed to create end flow table\n");···113112 if (mlx5hws_matcher_is_isolated(matcher))114113 ret = hws_matcher_create_end_ft_isolated(matcher);115114 else116116- ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev, tbl,115115+ ret = mlx5hws_table_create_default_ft(tbl->ctx->mdev,116116+ tbl,117117+ 0,117118 &matcher->end_ft_id);118119119120 if (ret) {
···3232/* MAC Specific Addr 1 Top Reg */3333#define LAN865X_REG_MAC_H_SADDR1 0x0001002334343535+/* MAC TSU Timer Increment Register */3636+#define LAN865X_REG_MAC_TSU_TIMER_INCR 0x000100773737+#define MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS 0x00283838+3539struct lan865x_priv {3640 struct work_struct multicast_work;3741 struct net_device *netdev;···315311316312 phy_start(netdev->phydev);317313314314+ netif_start_queue(netdev);315315+318316 return 0;319317}320318···348342 if (!priv->tc6) {349343 ret = -ENODEV;350344 goto free_netdev;345345+ }346346+347347+ /* LAN865x Rev.B0/B1 configuration parameters from AN1760348348+ * As per the Configuration Application Note AN1760 published in the349349+ * link, https://www.microchip.com/en-us/application-notes/an1760350350+ * Revision F (DS60001760G - June 2024), configure the MAC to set time351351+ * stamping at the end of the Start of Frame Delimiter (SFD) and set the352352+ * Timer Increment reg to 40 ns to be used as a 25 MHz internal clock.353353+ */354354+ ret = oa_tc6_write_register(priv->tc6, LAN865X_REG_MAC_TSU_TIMER_INCR,355355+ MAC_TSU_TIMER_INCR_COUNT_NANOSECONDS);356356+ if (ret) {357357+ dev_err(&spi->dev, "Failed to config TSU Timer Incr reg: %d\n",358358+ ret);359359+ goto oa_tc6_exit;351360 }352361353362 /* As per the point s3 in the below errata, SPI receive Ethernet frame
···203203 }204204}205205206206+static void icssg_enable_fw_offload(struct prueth *prueth)207207+{208208+ struct prueth_emac *emac;209209+ int mac;210210+211211+ for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {212212+ emac = prueth->emac[mac];213213+ if (prueth->is_hsr_offload_mode) {214214+ if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM)215215+ icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE);216216+ else217217+ icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE);218218+ }219219+220220+ if (prueth->is_switch_mode || prueth->is_hsr_offload_mode) {221221+ if (netif_running(emac->ndev)) {222222+ icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan,223223+ ICSSG_FDB_ENTRY_P0_MEMBERSHIP |224224+ ICSSG_FDB_ENTRY_P1_MEMBERSHIP |225225+ ICSSG_FDB_ENTRY_P2_MEMBERSHIP |226226+ ICSSG_FDB_ENTRY_BLOCK,227227+ true);228228+ icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID,229229+ BIT(emac->port_id) | DEFAULT_PORT_MASK,230230+ BIT(emac->port_id) | DEFAULT_UNTAG_MASK,231231+ true);232232+ if (prueth->is_hsr_offload_mode)233233+ icssg_vtbl_modify(emac, DEFAULT_VID,234234+ DEFAULT_PORT_MASK,235235+ DEFAULT_UNTAG_MASK, true);236236+ icssg_set_pvid(prueth, emac->port_vlan, emac->port_id);237237+ if (prueth->is_switch_mode)238238+ icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE);239239+ }240240+ }241241+ }242242+}243243+206244static int prueth_emac_common_start(struct prueth *prueth)207245{208246 struct prueth_emac *emac;···791753 ret = prueth_emac_common_start(prueth);792754 if (ret)793755 goto free_rx_irq;756756+ icssg_enable_fw_offload(prueth);794757 }795758796759 flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET;···1399136014001361static void icssg_change_mode(struct prueth *prueth)14011362{14021402- struct prueth_emac *emac;14031403- int mac, ret;13631363+ int ret;1404136414051365 ret = prueth_emac_restart(prueth);14061366 if (ret) {···14071369 return;14081370 }1409137114101410- for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {14111411- emac = prueth->emac[mac];14121412- if (prueth->is_hsr_offload_mode) {14131413- if (emac->ndev->features & NETIF_F_HW_HSR_TAG_RM)14141414- icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_ENABLE);14151415- else14161416- icssg_set_port_state(emac, ICSSG_EMAC_HSR_RX_OFFLOAD_DISABLE);14171417- }14181418-14191419- if (netif_running(emac->ndev)) {14201420- icssg_fdb_add_del(emac, eth_stp_addr, prueth->default_vlan,14211421- ICSSG_FDB_ENTRY_P0_MEMBERSHIP |14221422- ICSSG_FDB_ENTRY_P1_MEMBERSHIP |14231423- ICSSG_FDB_ENTRY_P2_MEMBERSHIP |14241424- ICSSG_FDB_ENTRY_BLOCK,14251425- true);14261426- icssg_vtbl_modify(emac, emac->port_vlan | DEFAULT_VID,14271427- BIT(emac->port_id) | DEFAULT_PORT_MASK,14281428- BIT(emac->port_id) | DEFAULT_UNTAG_MASK,14291429- true);14301430- if (prueth->is_hsr_offload_mode)14311431- icssg_vtbl_modify(emac, DEFAULT_VID,14321432- DEFAULT_PORT_MASK,14331433- DEFAULT_UNTAG_MASK, true);14341434- icssg_set_pvid(prueth, emac->port_vlan, emac->port_id);14351435- if (prueth->is_switch_mode)14361436- icssg_set_port_state(emac, ICSSG_EMAC_PORT_VLAN_AWARE_ENABLE);14371437- }14381438- }13721372+ icssg_enable_fw_offload(prueth);14391373}1440137414411375static int prueth_netdevice_port_link(struct net_device *ndev,
+1-1
drivers/net/ethernet/wangxun/libwx/wx_vf_lib.c
···192192 u8 i, j;193193194194 /* Fill out hash function seeds */195195- netdev_rss_key_fill(wx->rss_key, sizeof(wx->rss_key));195195+ netdev_rss_key_fill(wx->rss_key, WX_RSS_KEY_SIZE);196196 for (i = 0; i < WX_RSS_KEY_SIZE / 4; i++)197197 wr32(wx, WX_VXRSSRK(i), wx->rss_key[i]);198198
···362362 u16 mask;363363};364364365365+struct vsc8531_skb_cb {366366+ u32 ns;367367+};368368+369369+#define VSC8531_SKB_CB(skb) \370370+ ((struct vsc8531_skb_cb *)((skb)->cb))371371+365372struct vsc8531_private {366373 int rate_magic;367374 u16 supp_led_modes;···417410 */418411 struct mutex ts_lock;419412 struct mutex phc_lock;413413+414414+ /* list of skbs that were received and need timestamp information but415415+ * have not received it yet416416+ */417417+ struct sk_buff_head rx_skbs_list;420418};421419422420/* Shared structure between the PHYs of the same package.
···356356 if (!sock || !buf)357357 return -EINVAL;358358359359+ if (!sock->metric_tbl_addr) {360360+ dev_err(sock->dev, "Metrics table address not available\n");361361+ return -ENOMEM;362362+ }363363+359364 /* Do not support lseek(), also don't allow more than the size of metric table */360365 if (size != sizeof(struct hsmp_metric_table)) {361366 dev_err(sock->dev, "Wrong buffer size\n");
···125125 if (ret)126126 goto fail_platform_device_add;127127128128- ret = dell_smbios_register_device(&platform_device->dev,129129- &dell_smbios_smm_call);128128+ ret = dell_smbios_register_device(&platform_device->dev, 0, &dell_smbios_smm_call);130129 if (ret)131130 goto fail_register;132131
+1-3
drivers/platform/x86/dell/dell-smbios-wmi.c
···264264 if (ret)265265 return ret;266266267267- /* ID is used by dell-smbios to set priority of drivers */268268- wdev->dev.id = 1;269269- ret = dell_smbios_register_device(&wdev->dev, &dell_smbios_wmi_call);267267+ ret = dell_smbios_register_device(&wdev->dev, 1, &dell_smbios_wmi_call);270268 if (ret)271269 return ret;272270
···330330 }331331332332 if (config.speed_hz > perclk_rate / 2) {333333- dev_err(fsl_lpspi->dev,334334- "per-clk should be at least two times of transfer speed");335335- return -EINVAL;333333+ div = 2;334334+ } else {335335+ div = DIV_ROUND_UP(perclk_rate, config.speed_hz);336336 }337337-338338- div = DIV_ROUND_UP(perclk_rate, config.speed_hz);339337340338 for (prescale = 0; prescale <= prescale_max; prescale++) {341339 scldiv = div / (1 << prescale) - 2;
+4
drivers/spi/spi-mem.c
···265265 */266266bool spi_mem_supports_op(struct spi_mem *mem, const struct spi_mem_op *op)267267{268268+ /* Make sure the operation frequency is correct before going further */269269+ spi_mem_adjust_op_freq(mem, (struct spi_mem_op *)op);270270+268271 if (spi_mem_check_op(op))269272 return false;270273···580577 * spi_mem_calc_op_duration() - Derives the theoretical length (in ns) of an581578 * operation. This helps finding the best variant582579 * among a list of possible choices.580580+ * @mem: the SPI memory583581 * @op: the operation to benchmark584582 *585583 * Some chips have per-op frequency limitations, PCBs usually have their own
···147147 if (up->port.uartclk == SERIAL_RSA_BAUD_BASE * 16)148148 serial_out(up, UART_RSA_FRR, 0);149149}150150-EXPORT_SYMBOL_GPL_FOR_MODULES(rsa_enable, "8250_base");150150+EXPORT_SYMBOL_FOR_MODULES(rsa_enable, "8250_base");151151152152/*153153 * Attempts to turn off the RSA FIFO and resets the RSA board back to 115kbps compat mode. It is···179179 up->port.uartclk = SERIAL_RSA_BAUD_BASE_LO * 16;180180 uart_port_unlock_irq(&up->port);181181}182182-EXPORT_SYMBOL_GPL_FOR_MODULES(rsa_disable, "8250_base");182182+EXPORT_SYMBOL_FOR_MODULES(rsa_disable, "8250_base");183183184184void rsa_autoconfig(struct uart_8250_port *up)185185{···192192 if (__rsa_enable(up))193193 up->port.type = PORT_RSA;194194}195195-EXPORT_SYMBOL_GPL_FOR_MODULES(rsa_autoconfig, "8250_base");195195+EXPORT_SYMBOL_FOR_MODULES(rsa_autoconfig, "8250_base");196196197197void rsa_reset(struct uart_8250_port *up)198198{···201201202202 serial_out(up, UART_RSA_FRR, 0);203203}204204-EXPORT_SYMBOL_GPL_FOR_MODULES(rsa_reset, "8250_base");204204+EXPORT_SYMBOL_FOR_MODULES(rsa_reset, "8250_base");205205206206#ifdef CONFIG_SERIAL_8250_DEPRECATED_OPTIONS207207#ifndef MODULE
+44-32
drivers/ufs/core/ufshcd.c
···13031303 *13041304 * Return: 0 upon success; -EBUSY upon timeout.13051305 */13061306-static int ufshcd_wait_for_doorbell_clr(struct ufs_hba *hba,13061306+static int ufshcd_wait_for_pending_cmds(struct ufs_hba *hba,13071307 u64 wait_timeout_us)13081308{13091309 int ret = 0;···14311431 down_write(&hba->clk_scaling_lock);1432143214331433 if (!hba->clk_scaling.is_allowed ||14341434- ufshcd_wait_for_doorbell_clr(hba, timeout_us)) {14341434+ ufshcd_wait_for_pending_cmds(hba, timeout_us)) {14351435 ret = -EBUSY;14361436 up_write(&hba->clk_scaling_lock);14371437 mutex_unlock(&hba->wb_mutex);···31993199}3200320032013201/*32023202- * Return: 0 upon success; < 0 upon failure.32023202+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;32033203+ * < 0 if another error occurred.32033204 */32043205static int ufshcd_wait_for_dev_cmd(struct ufs_hba *hba,32053206 struct ufshcd_lrb *lrbp, int max_timeout)···32763275 }32773276 }3278327732793279- WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);32803278 return err;32813279}32823280···32943294}3295329532963296/*32973297- * Return: 0 upon success; < 0 upon failure.32973297+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;32983298+ * < 0 if another error occurred.32983299 */32993300static int ufshcd_issue_dev_cmd(struct ufs_hba *hba, struct ufshcd_lrb *lrbp,33003301 const u32 tag, int timeout)···33183317 * @cmd_type: specifies the type (NOP, Query...)33193318 * @timeout: timeout in milliseconds33203319 *33213321- * Return: 0 upon success; < 0 upon failure.33203320+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;33213321+ * < 0 if another error occurred.33223322 *33233323 * NOTE: Since there is only one available tag for device management commands,33243324 * it is expected you hold the hba->dev_cmd.lock mutex.···33653363 (*request)->upiu_req.selector = selector;33663364}3367336533663366+/*33673367+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;33683368+ * < 0 if another error occurred.33693369+ */33683370static int ufshcd_query_flag_retry(struct ufs_hba *hba,33693371 enum query_opcode opcode, enum flag_idn idn, u8 index, bool *flag_res)33703372{···33893383 dev_err(hba->dev,33903384 "%s: query flag, opcode %d, idn %d, failed with error %d after %d retries\n",33913385 __func__, opcode, idn, ret, retries);33923392- WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret);33933386 return ret;33943387}33953388···34003395 * @index: flag index to access34013396 * @flag_res: the flag value after the query request completes34023397 *34033403- * Return: 0 for success; < 0 upon failure.33983398+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;33993399+ * < 0 if another error occurred.34043400 */34053401int ufshcd_query_flag(struct ufs_hba *hba, enum query_opcode opcode,34063402 enum flag_idn idn, u8 index, bool *flag_res)···3457345134583452out_unlock:34593453 ufshcd_dev_man_unlock(hba);34603460- WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);34613454 return err;34623455}34633456···34693464 * @selector: selector field34703465 * @attr_val: the attribute value after the query request completes34713466 *34723472- * Return: 0 upon success; < 0 upon failure.34733473-*/34673467+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;34683468+ * < 0 if another error occurred.34693469+ */34743470int ufshcd_query_attr(struct ufs_hba *hba, enum query_opcode opcode,34753471 enum attr_idn idn, u8 index, u8 selector, u32 *attr_val)34763472{···3519351335203514out_unlock:35213515 ufshcd_dev_man_unlock(hba);35223522- WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);35233516 return err;35243517}35253518···35333528 * @attr_val: the attribute value after the query request35343529 * completes35353530 *35363536- * Return: 0 for success; < 0 upon failure.35373537-*/35313531+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;35323532+ * < 0 if another error occurred.35333533+ */35383534int ufshcd_query_attr_retry(struct ufs_hba *hba,35393535 enum query_opcode opcode, enum attr_idn idn, u8 index, u8 selector,35403536 u32 *attr_val)···35573551 dev_err(hba->dev,35583552 "%s: query attribute, idn %d, failed with error %d after %d retries\n",35593553 __func__, idn, ret, QUERY_REQ_RETRIES);35603560- WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret);35613554 return ret;35623555}3563355635643557/*35653565- * Return: 0 if successful; < 0 upon failure.35583558+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;35593559+ * < 0 if another error occurred.35663560 */35673561static int __ufshcd_query_descriptor(struct ufs_hba *hba,35683562 enum query_opcode opcode, enum desc_idn idn, u8 index,···36213615out_unlock:36223616 hba->dev_cmd.query.descriptor = NULL;36233617 ufshcd_dev_man_unlock(hba);36243624- WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);36253618 return err;36263619}36273620···36373632 * The buf_len parameter will contain, on return, the length parameter36383633 * received on the response.36393634 *36403640- * Return: 0 for success; < 0 upon failure.36353635+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;36363636+ * < 0 if another error occurred.36413637 */36423638int ufshcd_query_descriptor_retry(struct ufs_hba *hba,36433639 enum query_opcode opcode,···36563650 break;36573651 }3658365236593659- WARN_ONCE(err > 0, "Incorrect return value %d > 0\n", err);36603653 return err;36613654}36623655···36683663 * @param_read_buf: pointer to buffer where parameter would be read36693664 * @param_size: sizeof(param_read_buf)36703665 *36713671- * Return: 0 in case of success; < 0 upon failure.36663666+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;36673667+ * < 0 if another error occurred.36723668 */36733669int ufshcd_read_desc_param(struct ufs_hba *hba,36743670 enum desc_idn desc_id,···37363730out:37373731 if (is_kmalloc)37383732 kfree(desc_buf);37393739- WARN_ONCE(ret > 0, "Incorrect return value %d > 0\n", ret);37403733 return ret;37413734}37423735···47864781 *47874782 * Set fDeviceInit flag and poll until device toggles it.47884783 *47894789- * Return: 0 upon success; < 0 upon failure.47844784+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;47854785+ * < 0 if another error occurred.47904786 */47914787static int ufshcd_complete_dev_init(struct ufs_hba *hba)47924788{···51415135 * not respond with NOP IN UPIU within timeout of %NOP_OUT_TIMEOUT51425136 * and we retry sending NOP OUT for %NOP_OUT_RETRIES iterations.51435137 *51445144- * Return: 0 upon success; < 0 upon failure.51385138+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;51395139+ * < 0 if another error occurred.51455140 */51465141static int ufshcd_verify_dev_init(struct ufs_hba *hba)51475142{···55665559 irqreturn_t retval = IRQ_NONE;55675560 struct uic_command *cmd;5568556155695569- spin_lock(hba->host->host_lock);55625562+ guard(spinlock_irqsave)(hba->host->host_lock);55705563 cmd = hba->active_uic_cmd;55715571- if (WARN_ON_ONCE(!cmd))55645564+ if (!cmd)55725565 goto unlock;5573556655745567 if (ufshcd_is_auto_hibern8_error(hba, intr_status))···55935586 ufshcd_add_uic_command_trace(hba, cmd, UFS_CMD_COMP);5594558755955588unlock:55965596- spin_unlock(hba->host->host_lock);55975597-55985589 return retval;55995590}56005591···58745869 * as the device is allowed to manage its own way of handling background58755870 * operations.58765871 *58775877- * Return: zero on success, non-zero on failure.58725872+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;58735873+ * < 0 if another error occurred.58785874 */58795875static int ufshcd_enable_auto_bkops(struct ufs_hba *hba)58805876{···59145908 * host is idle so that BKOPS are managed effectively without any negative59155909 * impacts.59165910 *59175917- * Return: zero on success, non-zero on failure.59115911+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;59125912+ * < 0 if another error occurred.59185913 */59195914static int ufshcd_disable_auto_bkops(struct ufs_hba *hba)59205915{···60656058 __func__, err);60666059}6067606060616061+/*60626062+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;60636063+ * < 0 if another error occurred.60646064+ */60686065int ufshcd_read_device_lvl_exception_id(struct ufs_hba *hba, u64 *exception_id)60696066{60706067 struct utp_upiu_query_v4_0 *upiu_resp;···69316920 bool queue_eh_work = false;69326921 irqreturn_t retval = IRQ_NONE;6933692269346934- spin_lock(hba->host->host_lock);69236923+ guard(spinlock_irqsave)(hba->host->host_lock);69356924 hba->errors |= UFSHCD_ERROR_MASK & intr_status;6936692569376926 if (hba->errors & INT_FATAL_ERRORS) {···69906979 */69916980 hba->errors = 0;69926981 hba->uic_error = 0;69936993- spin_unlock(hba->host->host_lock);69826982+69946983 return retval;69956984}69966985···74657454 * @sg_list: Pointer to SG list when DATA IN/OUT UPIU is required in ARPMB operation74667455 * @dir: DMA direction74677456 *74687468- * Return: zero on success, non-zero on failure.74577457+ * Return: 0 upon success; > 0 in case the UFS device reported an OCS error;74587458+ * < 0 if another error occurred.74697459 */74707460int ufshcd_advanced_rpmb_req_handler(struct ufs_hba *hba, struct utp_upiu_req *req_upiu,74717461 struct utp_upiu_req *rsp_upiu, struct ufs_ehs *req_ehs,
+15-24
drivers/ufs/host/ufs-qcom.c
···20702070 return IRQ_HANDLED;20712071}2072207220732073-static void ufs_qcom_irq_free(struct ufs_qcom_irq *uqi)20742074-{20752075- for (struct ufs_qcom_irq *q = uqi; q->irq; q++)20762076- devm_free_irq(q->hba->dev, q->irq, q->hba);20772077-20782078- platform_device_msi_free_irqs_all(uqi->hba->dev);20792079- devm_kfree(uqi->hba->dev, uqi);20802080-}20812081-20822082-DEFINE_FREE(ufs_qcom_irq, struct ufs_qcom_irq *, if (_T) ufs_qcom_irq_free(_T))20832083-20842073static int ufs_qcom_config_esi(struct ufs_hba *hba)20852074{20862075 struct ufs_qcom_host *host = ufshcd_get_variant(hba);···20842095 */20852096 nr_irqs = hba->nr_hw_queues - hba->nr_queues[HCTX_TYPE_POLL];2086209720872087- struct ufs_qcom_irq *qi __free(ufs_qcom_irq) =20882088- devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL);20892089- if (!qi)20902090- return -ENOMEM;20912091- /* Preset so __free() has a pointer to hba in all error paths */20922092- qi[0].hba = hba;20932093-20942098 ret = platform_device_msi_init_and_alloc_irqs(hba->dev, nr_irqs,20952099 ufs_qcom_write_msi_msg);20962100 if (ret) {20972097- dev_err(hba->dev, "Failed to request Platform MSI %d\n", ret);20982098- return ret;21012101+ dev_warn(hba->dev, "Platform MSI not supported or failed, continuing without ESI\n");21022102+ return ret; /* Continue without ESI */21032103+ }21042104+21052105+ struct ufs_qcom_irq *qi = devm_kcalloc(hba->dev, nr_irqs, sizeof(*qi), GFP_KERNEL);21062106+21072107+ if (!qi) {21082108+ platform_device_msi_free_irqs_all(hba->dev);21092109+ return -ENOMEM;20992110 }2100211121012112 for (int idx = 0; idx < nr_irqs; idx++) {···21062117 ret = devm_request_irq(hba->dev, qi[idx].irq, ufs_qcom_mcq_esi_handler,21072118 IRQF_SHARED, "qcom-mcq-esi", qi + idx);21082119 if (ret) {21092109- dev_err(hba->dev, "%s: Fail to request IRQ for %d, err = %d\n",21202120+ dev_err(hba->dev, "%s: Failed to request IRQ for %d, err = %d\n",21102121 __func__, qi[idx].irq, ret);21112111- qi[idx].irq = 0;21222122+ /* Free previously allocated IRQs */21232123+ for (int j = 0; j < idx; j++)21242124+ devm_free_irq(hba->dev, qi[j].irq, qi + j);21252125+ platform_device_msi_free_irqs_all(hba->dev);21262126+ devm_kfree(hba->dev, qi);21122127 return ret;21132128 }21142129 }21152115-21162116- retain_and_null_ptr(qi);2117213021182131 if (host->hw_ver.major >= 6) {21192132 ufshcd_rmwl(hba, ESI_VEC_MASK, FIELD_PREP(ESI_VEC_MASK, MAX_ESI_VEC - 1),
···338338 schedule_work(&ci->usb_phy->chg_work);339339 break;340340 case CI_HDRC_CONTROLLER_PULLUP_EVENT:341341- if (ci->role == CI_ROLE_GADGET)341341+ if (ci->role == CI_ROLE_GADGET &&342342+ ci->gadget.speed == USB_SPEED_HIGH)342343 imx_usbmisc_pullup(data->usbmisc_data,343344 ci->gadget.connected);344345 break;
+16-7
drivers/usb/chipidea/usbmisc_imx.c
···10681068 unsigned long flags;10691069 u32 val;1070107010711071+ if (on)10721072+ return;10731073+10711074 spin_lock_irqsave(&usbmisc->lock, flags);10721075 val = readl(usbmisc->base + MX7D_USBNC_USB_CTRL2);10731073- if (!on) {10741074- val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_MASK;10751075- val |= MX7D_USBNC_USB_CTRL2_OPMODE(1);10761076- val |= MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN;10771077- } else {10781078- val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN;10791079- }10761076+ val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_MASK;10771077+ val |= MX7D_USBNC_USB_CTRL2_OPMODE(1);10781078+ val |= MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN;10791079+ writel(val, usbmisc->base + MX7D_USBNC_USB_CTRL2);10801080+ spin_unlock_irqrestore(&usbmisc->lock, flags);10811081+10821082+ /* Last for at least 1 micro-frame to let host see disconnect signal */10831083+ usleep_range(125, 150);10841084+10851085+ spin_lock_irqsave(&usbmisc->lock, flags);10861086+ val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_MASK;10871087+ val |= MX7D_USBNC_USB_CTRL2_OPMODE(0);10881088+ val &= ~MX7D_USBNC_USB_CTRL2_OPMODE_OVERRIDE_EN;10801089 writel(val, usbmisc->base + MX7D_USBNC_USB_CTRL2);10811090 spin_unlock_irqrestore(&usbmisc->lock, flags);10821091}
+16-12
drivers/usb/core/hcd.c
···16361636 struct usb_hcd *hcd = bus_to_hcd(urb->dev->bus);16371637 struct usb_anchor *anchor = urb->anchor;16381638 int status = urb->unlinked;16391639- unsigned long flags;1640163916411640 urb->hcpriv = NULL;16421641 if (unlikely((urb->transfer_flags & URB_SHORT_NOT_OK) &&···16531654 /* pass ownership to the completion handler */16541655 urb->status = status;16551656 /*16561656- * Only collect coverage in the softirq context and disable interrupts16571657- * to avoid scenarios with nested remote coverage collection sections16581658- * that KCOV does not support.16591659- * See the comment next to kcov_remote_start_usb_softirq() for details.16571657+ * This function can be called in task context inside another remote16581658+ * coverage collection section, but kcov doesn't support that kind of16591659+ * recursion yet. Only collect coverage in softirq context for now.16601660 */16611661- flags = kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum);16611661+ kcov_remote_start_usb_softirq((u64)urb->dev->bus->busnum);16621662 urb->complete(urb);16631663- kcov_remote_stop_softirq(flags);16631663+ kcov_remote_stop_softirq();1664166416651665 usb_anchor_resume_wakeups(anchor);16661666 atomic_dec(&urb->use_count);···17171719 * @urb: urb being returned to the USB device driver.17181720 * @status: completion status code for the URB.17191721 *17201720- * Context: atomic. The completion callback is invoked in caller's context.17211721- * For HCDs with HCD_BH flag set, the completion callback is invoked in BH17221722- * context (except for URBs submitted to the root hub which always complete in17231723- * caller's context).17221722+ * Context: atomic. 
The completion callback is invoked either in a work queue17231723+ * (BH) context or in the caller's context, depending on whether the HCD_BH17241724+ * flag is set in the @hcd structure, except that URBs submitted to the17251725+ * root hub always complete in BH context.17241726 *17251727 * This hands the URB from HCD to its USB device driver, using its17261728 * completion function. The HCD has freed all per-urb resources···21642166 urb->complete = usb_ehset_completion;21652167 urb->status = -EINPROGRESS;21662168 urb->actual_length = 0;21672167- urb->transfer_flags = URB_DIR_IN;21692169+ urb->transfer_flags = URB_DIR_IN | URB_NO_TRANSFER_DMA_MAP;21682170 usb_get_urb(urb);21692171 atomic_inc(&urb->use_count);21702172 atomic_inc(&urb->dev->urbnum);···2228223022292231 /* Complete remaining DATA and STATUS stages using the same URB */22302232 urb->status = -EINPROGRESS;22332233+ urb->transfer_flags &= ~URB_NO_TRANSFER_DMA_MAP;22312234 usb_get_urb(urb);22322235 atomic_inc(&urb->use_count);22332236 atomic_inc(&urb->dev->urbnum);22372237+ if (map_urb_for_dma(hcd, urb, GFP_KERNEL)) {22382238+ usb_put_urb(urb);22392239+ goto out1;22402240+ }22412241+22342242 retval = hcd->driver->submit_single_step_set_feature(hcd, urb, 0);22352243 if (!retval && !wait_for_completion_timeout(&done,22362244 msecs_to_jiffies(2000))) {
···288288 dwc3_ep0_prepare_one_trb(dep, dwc->ep0_trb_addr, 8,289289 DWC3_TRBCTL_CONTROL_SETUP, false);290290 ret = dwc3_ep0_start_trans(dep);291291- WARN_ON(ret < 0);291291+ if (ret < 0)292292+ dev_err(dwc->dev, "ep0 out start transfer failed: %d\n", ret);293293+292294 for (i = 2; i < DWC3_ENDPOINTS_NUM; i++) {293295 struct dwc3_ep *dwc3_ep;294296···10631061 ret = dwc3_ep0_start_trans(dep);10641062 }1065106310661066- WARN_ON(ret < 0);10641064+ if (ret < 0)10651065+ dev_err(dwc->dev,10661066+ "ep0 data phase start transfer failed: %d\n", ret);10671067}1068106810691069static int dwc3_ep0_start_control_status(struct dwc3_ep *dep)···1082107810831079static void __dwc3_ep0_do_control_status(struct dwc3 *dwc, struct dwc3_ep *dep)10841080{10851085- WARN_ON(dwc3_ep0_start_control_status(dep));10811081+ int ret;10821082+10831083+ ret = dwc3_ep0_start_control_status(dep);10841084+ if (ret)10851085+ dev_err(dwc->dev,10861086+ "ep0 status phase start transfer failed: %d\n", ret);10861087}1087108810881089static void dwc3_ep0_do_control_status(struct dwc3 *dwc,···11301121 cmd |= DWC3_DEPCMD_PARAM(dep->resource_index);11311122 memset(¶ms, 0, sizeof(params));11321123 ret = dwc3_send_gadget_ep_cmd(dep, cmd, ¶ms);11331133- WARN_ON_ONCE(ret);11241124+ if (ret)11251125+ dev_err_ratelimited(dwc->dev,11261126+ "ep0 data phase end transfer failed: %d\n", ret);11271127+11341128 dep->resource_index = 0;11351129}11361130
+17-2
drivers/usb/dwc3/gadget.c
···17721772 dep->flags |= DWC3_EP_DELAY_STOP;17731773 return 0;17741774 }17751775- WARN_ON_ONCE(ret);17751775+17761776+ if (ret)17771777+ dev_err_ratelimited(dep->dwc->dev,17781778+ "end transfer failed: %d\n", ret);17791779+17761780 dep->resource_index = 0;1777178117781782 if (!interrupt)···37813777static void dwc3_gadget_endpoint_transfer_not_ready(struct dwc3_ep *dep,37823778 const struct dwc3_event_depevt *event)37833779{37803780+ /*37813781+ * During a device-initiated disconnect, a late xferNotReady event can37823782+ * be generated after the End Transfer command resets the event filter,37833783+ * but before the controller is halted. Ignore it to prevent a new37843784+ * transfer from starting.37853785+ */37863786+ if (!dep->dwc->connected)37873787+ return;37883788+37843789 dwc3_gadget_endpoint_frame_from_event(dep, event);3785379037863791 /*···40524039 dep->flags &= ~DWC3_EP_STALL;4053404040544041 ret = dwc3_send_clear_stall_ep_cmd(dep);40554055- WARN_ON_ONCE(ret);40424042+ if (ret)40434043+ dev_err_ratelimited(dwc->dev,40444044+ "failed to clear STALL on %s\n", dep->name);40564045 }40574046}40584047
···704704 if (!xhci->devs[i])705705 continue;706706707707- retval = xhci_disable_slot(xhci, i);708708- xhci_free_virt_device(xhci, i);707707+ retval = xhci_disable_and_free_slot(xhci, i);709708 if (retval)710709 xhci_err(xhci, "Failed to disable slot %d, %d. Enter test mode anyway\n",711710 i, retval);
+11-11
drivers/usb/host/xhci-mem.c
···865865 * will be manipulated by the configure endpoint, allocate device, or update866866 * hub functions while this function is removing the TT entries from the list.867867 */868868-void xhci_free_virt_device(struct xhci_hcd *xhci, int slot_id)868868+void xhci_free_virt_device(struct xhci_hcd *xhci, struct xhci_virt_device *dev,869869+ int slot_id)869870{870870- struct xhci_virt_device *dev;871871 int i;872872 int old_active_eps = 0;873873874874 /* Slot ID 0 is reserved */875875- if (slot_id == 0 || !xhci->devs[slot_id])875875+ if (slot_id == 0 || !dev)876876 return;877877878878- dev = xhci->devs[slot_id];879879-880880- xhci->dcbaa->dev_context_ptrs[slot_id] = 0;881881- if (!dev)882882- return;878878+ /* If device ctx array still points to _this_ device, clear it */879879+ if (dev->out_ctx &&880880+ xhci->dcbaa->dev_context_ptrs[slot_id] == cpu_to_le64(dev->out_ctx->dma))881881+ xhci->dcbaa->dev_context_ptrs[slot_id] = 0;883882884883 trace_xhci_free_virt_device(dev);885884···919920 dev->udev->slot_id = 0;920921 if (dev->rhub_port && dev->rhub_port->slot_id == slot_id)921922 dev->rhub_port->slot_id = 0;922922- kfree(xhci->devs[slot_id]);923923- xhci->devs[slot_id] = NULL;923923+ if (xhci->devs[slot_id] == dev)924924+ xhci->devs[slot_id] = NULL;925925+ kfree(dev);924926}925927926928/*···962962out:963963 /* we are now at a leaf device */964964 xhci_debugfs_remove_slot(xhci, slot_id);965965- xhci_free_virt_device(xhci, slot_id);965965+ xhci_free_virt_device(xhci, vdev, slot_id);966966}967967968968int xhci_alloc_virt_device(struct xhci_hcd *xhci, int slot_id,
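The reworked `xhci_free_virt_device()` above clears `dev_context_ptrs[slot_id]` and `xhci->devs[slot_id]` only when they still refer to the device being freed, so a newer occupant of the same slot is never clobbered by a stale teardown. The core pattern, sketched with hypothetical types:

```c
#include <assert.h>
#include <stddef.h>

struct vdev { int id; };

/*
 * Tear down @dev, which once occupied @table[slot]. Clear the slot only
 * if it still points at this device; if the slot has been reused for a
 * newer device in the meantime, leave the new occupant alone.
 */
static void release_slot(struct vdev **table, int slot, struct vdev *dev)
{
	if (table[slot] == dev)
		table[slot] = NULL;
	/* real code would free @dev itself here */
}
```

Passing the device pointer in explicitly (rather than re-reading it from the table) is what makes the identity check possible.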
+4-3
drivers/usb/host/xhci-pci-renesas.c
···4747#define RENESAS_ROM_ERASE_MAGIC 0x5A65726F4848#define RENESAS_ROM_WRITE_MAGIC 0x53524F4D49495050-#define RENESAS_RETRY 100005151-#define RENESAS_DELAY 105050+#define RENESAS_RETRY 50000 /* 50000 * RENESAS_DELAY ~= 500ms */5151+#define RENESAS_CHIP_ERASE_RETRY 500000 /* 500000 * RENESAS_DELAY ~= 5s */5252+#define RENESAS_DELAY 1052535354#define RENESAS_FW_NAME "renesas_usb_fw.mem"5455···408407 /* sleep a bit while ROM is erased */409408 msleep(20);410409411411- for (i = 0; i < RENESAS_RETRY; i++) {410410+ for (i = 0; i < RENESAS_CHIP_ERASE_RETRY; i++) {412411 retval = pci_read_config_byte(pdev, RENESAS_ROM_STATUS,413412 &status);414413 status &= RENESAS_ROM_STATUS_ERASE;
+7-2
drivers/usb/host/xhci-ring.c
···15921592 command->slot_id = 0;15931593}1594159415951595-static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id)15951595+static void xhci_handle_cmd_disable_slot(struct xhci_hcd *xhci, int slot_id,15961596+ u32 cmd_comp_code)15961597{15971598 struct xhci_virt_device *virt_dev;15981599 struct xhci_slot_ctx *slot_ctx;···16081607 if (xhci->quirks & XHCI_EP_LIMIT_QUIRK)16091608 /* Delete default control endpoint resources */16101609 xhci_free_device_endpoint_resources(xhci, virt_dev, true);16101610+ if (cmd_comp_code == COMP_SUCCESS) {16111611+ xhci->dcbaa->dev_context_ptrs[slot_id] = 0;16121612+ xhci->devs[slot_id] = NULL;16131613+ }16111614}1612161516131616static void xhci_handle_cmd_config_ep(struct xhci_hcd *xhci, int slot_id)···18611856 xhci_handle_cmd_enable_slot(slot_id, cmd, cmd_comp_code);18621857 break;18631858 case TRB_DISABLE_SLOT:18641864- xhci_handle_cmd_disable_slot(xhci, slot_id);18591859+ xhci_handle_cmd_disable_slot(xhci, slot_id, cmd_comp_code);18651860 break;18661861 case TRB_CONFIG_EP:18671862 if (!cmd->completion)
+16-7
drivers/usb/host/xhci.c
···309309 return -EINVAL;310310311311 iman = readl(&ir->ir_set->iman);312312+ iman &= ~IMAN_IP;312313 iman |= IMAN_IE;313314 writel(iman, &ir->ir_set->iman);314315···326325 return -EINVAL;327326328327 iman = readl(&ir->ir_set->iman);328328+ iman &= ~IMAN_IP;329329 iman &= ~IMAN_IE;330330 writel(iman, &ir->ir_set->iman);331331···39343932 * Obtaining a new device slot to inform the xHCI host that39353933 * the USB device has been reset.39363934 */39373937- ret = xhci_disable_slot(xhci, udev->slot_id);39383938- xhci_free_virt_device(xhci, udev->slot_id);39353935+ ret = xhci_disable_and_free_slot(xhci, udev->slot_id);39393936 if (!ret) {39403937 ret = xhci_alloc_dev(hcd, udev);39413938 if (ret == 1)···40914090 xhci_disable_slot(xhci, udev->slot_id);4092409140934092 spin_lock_irqsave(&xhci->lock, flags);40944094- xhci_free_virt_device(xhci, udev->slot_id);40934093+ xhci_free_virt_device(xhci, virt_dev, udev->slot_id);40954094 spin_unlock_irqrestore(&xhci->lock, flags);4096409540974096}···41384137 xhci_free_command(xhci, command);4139413841404139 return 0;41404140+}41414141+41424142+int xhci_disable_and_free_slot(struct xhci_hcd *xhci, u32 slot_id)41434143+{41444144+ struct xhci_virt_device *vdev = xhci->devs[slot_id];41454145+ int ret;41464146+41474147+ ret = xhci_disable_slot(xhci, slot_id);41484148+ xhci_free_virt_device(xhci, vdev, slot_id);41494149+ return ret;41414150}4142415141434152/*···42564245 return 1;4257424642584247disable_slot:42594259- xhci_disable_slot(xhci, udev->slot_id);42604260- xhci_free_virt_device(xhci, udev->slot_id);42484248+ xhci_disable_and_free_slot(xhci, udev->slot_id);4261424942624250 return 0;42634251}···43924382 dev_warn(&udev->dev, "Device not responding to setup %s.\n", act);4393438343944384 mutex_unlock(&xhci->mutex);43954395- ret = xhci_disable_slot(xhci, udev->slot_id);43964396- xhci_free_virt_device(xhci, udev->slot_id);43854385+ ret = xhci_disable_and_free_slot(xhci, udev->slot_id);43974386 if (!ret) {43984387 if 
(xhci_alloc_dev(hcd, udev) == 1)43994388 xhci_setup_addressable_virt_dev(xhci, udev);
···718718 return 0;719719}720720721721-/*722722- * Certain older XenBus toolstack cannot handle reading values that are723723- * not populated. Some Xen 3.4 installation are incapable of doing this724724- * so if we are running on anything older than 4 do not attempt to read725725- * control/platform-feature-xs_reset_watches.726726- */727727-static bool xen_strict_xenbus_quirk(void)728728-{729729-#ifdef CONFIG_X86730730- uint32_t eax, ebx, ecx, edx, base;731731-732732- base = xen_cpuid_base();733733- cpuid(base + 1, &eax, &ebx, &ecx, &edx);734734-735735- if ((eax >> 16) < 4)736736- return true;737737-#endif738738- return false;739739-740740-}741721static void xs_reset_watches(void)742722{743723 int err;744724745725 if (!xen_hvm_domain() || xen_initial_domain())746746- return;747747-748748- if (xen_strict_xenbus_quirk())749726 return;750727751728 if (!xenbus_read_unsigned("control",
···1512151215131513/*15141514 * Return 0 if we have submitted or queued the sector for submission.15151515- * Return <0 for critical errors.15151515+ * Return <0 for critical errors, and the sector will have its dirty flag cleared.15161516 *15171517 * Caller should make sure filepos < i_size and handle filepos >= i_size case.15181518 */···15351535 ASSERT(filepos < i_size);1536153615371537 em = btrfs_get_extent(inode, NULL, filepos, sectorsize);15381538- if (IS_ERR(em))15381538+ if (IS_ERR(em)) {15391539+ /*15401540+ * When submission failed, we should still clear the folio dirty.15411541+ * Or the folio will be written back again but without any15421542+ * ordered extent.15431543+ */15441544+ btrfs_folio_clear_dirty(fs_info, folio, filepos, sectorsize);15451545+ btrfs_folio_set_writeback(fs_info, folio, filepos, sectorsize);15461546+ btrfs_folio_clear_writeback(fs_info, folio, filepos, sectorsize);15391547 return PTR_ERR(em);15481548+ }1540154915411550 extent_offset = filepos - em->start;15421551 em_end = btrfs_extent_map_end(em);···16181609 folio_unlock(folio);16191610 return 1;16201611 }16211621- if (ret < 0)16121612+ if (ret < 0) {16131613+ btrfs_folio_clear_dirty(fs_info, folio, start, len);16141614+ btrfs_folio_set_writeback(fs_info, folio, start, len);16151615+ btrfs_folio_clear_writeback(fs_info, folio, start, len);16221616 return ret;16171617+ }1623161816241619 for (cur = start; cur < start + len; cur += fs_info->sectorsize)16251620 set_bit((cur - folio_start) >> fs_info->sectorsize_bits, &range_bitmap);···16791666 * Here we set writeback and clear for the range. 
If the full folio16801667 * is no longer dirty then we clear the PAGECACHE_TAG_DIRTY tag.16811668 *16821682- * If we hit any error, the corresponding sector will still be dirty16831683- * thus no need to clear PAGECACHE_TAG_DIRTY.16691669+ * If we hit any error, the corresponding sector will have its dirty16701670+ * flag cleared and writeback finished, thus no need to handle the error case.16841671 */16851672 if (!submitted_io && !error) {16861673 btrfs_folio_set_writeback(fs_info, folio, start, len);···18261813 xas_load(&xas);18271814 xas_set_mark(&xas, PAGECACHE_TAG_WRITEBACK);18281815 xas_clear_mark(&xas, PAGECACHE_TAG_DIRTY);18161816+ xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE);18291817 xas_unlock_irqrestore(&xas, flags);1830181818311819 btrfs_set_header_flag(eb, BTRFS_HEADER_FLAG_WRITTEN);
+19-10
fs/btrfs/inode.c
···41894189 return ret;41904190}4191419141924192+static void update_time_after_link_or_unlink(struct btrfs_inode *dir)41934193+{41944194+ struct timespec64 now;41954195+41964196+ /*41974197+ * If we are replaying a log tree, we do not want to update the mtime41984198+ * and ctime of the parent directory with the current time, since the41994199+ * log replay procedure is responsible for setting them to their correct42004200+ * values (the ones it had when the fsync was done).42014201+ */42024202+ if (test_bit(BTRFS_FS_LOG_RECOVERING, &dir->root->fs_info->flags))42034203+ return;42044204+42054205+ now = inode_set_ctime_current(&dir->vfs_inode);42064206+ inode_set_mtime_to_ts(&dir->vfs_inode, now);42074207+}42084208+41924209/*41934210 * unlink helper that gets used here in inode.c and in the tree logging41944211 * recovery code. It remove a link in a directory with a given name, and···43064289 inode_inc_iversion(&inode->vfs_inode);43074290 inode_set_ctime_current(&inode->vfs_inode);43084291 inode_inc_iversion(&dir->vfs_inode);43094309- inode_set_mtime_to_ts(&dir->vfs_inode, inode_set_ctime_current(&dir->vfs_inode));42924292+ update_time_after_link_or_unlink(dir);4310429343114294 return btrfs_update_inode(trans, dir);43124295}···67006683 btrfs_i_size_write(parent_inode, parent_inode->vfs_inode.i_size +67016684 name->len * 2);67026685 inode_inc_iversion(&parent_inode->vfs_inode);67036703- /*67046704- * If we are replaying a log tree, we do not want to update the mtime67056705- * and ctime of the parent directory with the current time, since the67066706- * log replay procedure is responsible for setting them to their correct67076707- * values (the ones it had when the fsync was done).67086708- */67096709- if (!test_bit(BTRFS_FS_LOG_RECOVERING, &root->fs_info->flags))67106710- inode_set_mtime_to_ts(&parent_inode->vfs_inode,67116711- inode_set_ctime_current(&parent_inode->vfs_inode));66866686+ update_time_after_link_or_unlink(parent_inode);6712668767136688 ret = 
btrfs_update_inode(trans, parent_inode);67146689 if (ret)
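Both the unlink and link paths now funnel through one helper that skips the parent-directory timestamp bump while log replay is running, since replay restores the values saved at fsync time. A reduced sketch of that guard (struct and names hypothetical):

```c
#include <assert.h>

struct dir_times { long mtime, ctime; };

/*
 * Bump mtime/ctime to @now, except during log replay, when the recorded
 * fsync-time values must be left untouched.
 */
static void update_times_after_link_or_unlink(struct dir_times *d,
					      int log_recovering, long now)
{
	if (log_recovering)
		return;
	d->ctime = now;
	d->mtime = now;
}
```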
+18-1
fs/btrfs/subpage.c
···448448449449 spin_lock_irqsave(&bfs->lock, flags);450450 bitmap_set(bfs->bitmaps, start_bit, len >> fs_info->sectorsize_bits);451451+452452+ /*453453+ * Don't clear the TOWRITE tag when starting writeback on a still-dirty454454+ * folio. Doing so can cause WB_SYNC_ALL writepages() to overlook it,455455+ * assume writeback is complete, and exit too early — violating sync456456+ * ordering guarantees.457457+ */451458 if (!folio_test_writeback(folio))452452- folio_start_writeback(folio);459459+ __folio_start_writeback(folio, true);460460+ if (!folio_test_dirty(folio)) {461461+ struct address_space *mapping = folio_mapping(folio);462462+ XA_STATE(xas, &mapping->i_pages, folio->index);463463+ unsigned long flags;464464+465465+ xas_lock_irqsave(&xas, flags);466466+ xas_load(&xas);467467+ xas_clear_mark(&xas, PAGECACHE_TAG_TOWRITE);468468+ xas_unlock_irqrestore(&xas, flags);469469+ }453470 spin_unlock_irqrestore(&bfs->lock, flags);454471}455472
+8-5
fs/btrfs/super.c
···8888 refcount_t refs;8989};90909191+static void btrfs_emit_options(struct btrfs_fs_info *info,9292+ struct btrfs_fs_context *old);9393+9194enum {9295 Opt_acl,9396 Opt_clear_cache,···701698702699 if (!test_bit(BTRFS_FS_STATE_REMOUNTING, &info->fs_state)) {703700 if (btrfs_raw_test_opt(*mount_opt, SPACE_CACHE)) {704704- btrfs_info(info, "disk space caching is enabled");705701 btrfs_warn(info,706702"space cache v1 is being deprecated and will be removed in a future release, please use -o space_cache=v2");707703 }708708- if (btrfs_raw_test_opt(*mount_opt, FREE_SPACE_TREE))709709- btrfs_info(info, "using free-space-tree");710704 }711705712706 return ret;···979979 btrfs_err(fs_info, "open_ctree failed: %d", ret);980980 return ret;981981 }982982+983983+ btrfs_emit_options(fs_info, NULL);982984983985 inode = btrfs_iget(BTRFS_FIRST_FREE_OBJECTID, fs_info->fs_root);984986 if (IS_ERR(inode)) {···14391437{14401438 btrfs_info_if_set(info, old, NODATASUM, "setting nodatasum");14411439 btrfs_info_if_set(info, old, DEGRADED, "allowing degraded mounts");14421442- btrfs_info_if_set(info, old, NODATASUM, "setting nodatasum");14401440+ btrfs_info_if_set(info, old, NODATACOW, "setting nodatacow");14431441 btrfs_info_if_set(info, old, SSD, "enabling ssd optimizations");14441442 btrfs_info_if_set(info, old, SSD_SPREAD, "using spread ssd allocation scheme");14451443 btrfs_info_if_set(info, old, NOBARRIER, "turning off barriers");···14611459 btrfs_info_if_set(info, old, IGNOREMETACSUMS, "ignoring meta csums");14621460 btrfs_info_if_set(info, old, IGNORESUPERFLAGS, "ignoring unknown super block flags");1463146114621462+ btrfs_info_if_unset(info, old, NODATASUM, "setting datasum");14641463 btrfs_info_if_unset(info, old, NODATACOW, "setting datacow");14651464 btrfs_info_if_unset(info, old, SSD, "not using ssd optimizations");14661465 btrfs_info_if_unset(info, old, SSD_SPREAD, "not using spread ssd allocation scheme");14671467- btrfs_info_if_unset(info, old, NOBARRIER, "turning off 
barriers");14661466+ btrfs_info_if_unset(info, old, NOBARRIER, "turning on barriers");14681467 btrfs_info_if_unset(info, old, NOTREELOG, "enabling tree log");14691468 btrfs_info_if_unset(info, old, SPACE_CACHE, "disabling disk space caching");14701469 btrfs_info_if_unset(info, old, FREE_SPACE_TREE, "disabling free space tree");
+99-34
fs/btrfs/zoned.c
···1717#include "accessors.h"1818#include "bio.h"1919#include "transaction.h"2020+#include "sysfs.h"20212122/* Maximum number of zones to report per blkdev_report_zones() call */2223#define BTRFS_REPORT_NR_ZONES 4096···42414342/* Number of superblock log zones */4443#define BTRFS_NR_SB_LOG_ZONES 24444+4545+/* Default number of max active zones when the device has no limits. */4646+#define BTRFS_DEFAULT_MAX_ACTIVE_ZONES 12845474648/*4749 * Minimum of active zones we need:···420416 if (!IS_ALIGNED(nr_sectors, zone_sectors))421417 zone_info->nr_zones++;422418423423- max_active_zones = bdev_max_active_zones(bdev);419419+ max_active_zones = min_not_zero(bdev_max_active_zones(bdev),420420+ bdev_max_open_zones(bdev));421421+ if (!max_active_zones && zone_info->nr_zones > BTRFS_DEFAULT_MAX_ACTIVE_ZONES)422422+ max_active_zones = BTRFS_DEFAULT_MAX_ACTIVE_ZONES;424423 if (max_active_zones && max_active_zones < BTRFS_MIN_ACTIVE_ZONES) {425424 btrfs_err(fs_info,426425"zoned: %s: max active zones %u is too small, need at least %u active zones",···21752168 goto out_unlock;21762169 }2177217021782178- /* No space left */21792179- if (btrfs_zoned_bg_is_full(block_group)) {21802180- ret = false;21812181- goto out_unlock;21712171+ if (block_group->flags & BTRFS_BLOCK_GROUP_DATA) {21722172+ /* The caller should check if the block group is full. */21732173+ if (WARN_ON_ONCE(btrfs_zoned_bg_is_full(block_group))) {21742174+ ret = false;21752175+ goto out_unlock;21762176+ }21772177+ } else {21782178+ /* Since it is already written, it should have been active. 
*/21792179+ WARN_ON_ONCE(block_group->meta_write_pointer != block_group->start);21822180 }2183218121842182 for (i = 0; i < map->num_stripes; i++) {···22422230 struct btrfs_fs_info *fs_info = block_group->fs_info;22432231 const u64 end = block_group->start + block_group->length;22442232 struct extent_buffer *eb;22452245- unsigned long index, start = (block_group->start >> fs_info->sectorsize_bits);22332233+ unsigned long index, start = (block_group->start >> fs_info->nodesize_bits);2246223422472235 rcu_read_lock();22482236 xa_for_each_start(&fs_info->buffer_tree, index, eb, start) {···22552243 rcu_read_lock();22562244 }22572245 rcu_read_unlock();22462246+}22472247+22482248+static int call_zone_finish(struct btrfs_block_group *block_group,22492249+ struct btrfs_io_stripe *stripe)22502250+{22512251+ struct btrfs_device *device = stripe->dev;22522252+ const u64 physical = stripe->physical;22532253+ struct btrfs_zoned_device_info *zinfo = device->zone_info;22542254+ int ret;22552255+22562256+ if (!device->bdev)22572257+ return 0;22582258+22592259+ if (zinfo->max_active_zones == 0)22602260+ return 0;22612261+22622262+ if (btrfs_dev_is_sequential(device, physical)) {22632263+ unsigned int nofs_flags;22642264+22652265+ nofs_flags = memalloc_nofs_save();22662266+ ret = blkdev_zone_mgmt(device->bdev, REQ_OP_ZONE_FINISH,22672267+ physical >> SECTOR_SHIFT,22682268+ zinfo->zone_size >> SECTOR_SHIFT);22692269+ memalloc_nofs_restore(nofs_flags);22702270+22712271+ if (ret)22722272+ return ret;22732273+ }22742274+22752275+ if (!(block_group->flags & BTRFS_BLOCK_GROUP_DATA))22762276+ zinfo->reserved_active_zones++;22772277+ btrfs_dev_clear_active_zone(device, physical);22782278+22792279+ return 0;22582280}2259228122602282static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_written)···23752329 down_read(&dev_replace->rwsem);23762330 map = block_group->physical_map;23772331 for (i = 0; i < map->num_stripes; i++) {23782378- struct btrfs_device *device = 
map->stripes[i].dev;23792379- const u64 physical = map->stripes[i].physical;23802380- struct btrfs_zoned_device_info *zinfo = device->zone_info;23812381- unsigned int nofs_flags;2382233223832383- if (!device->bdev)23842384- continue;23852385-23862386- if (zinfo->max_active_zones == 0)23872387- continue;23882388-23892389- nofs_flags = memalloc_nofs_save();23902390- ret = blkdev_zone_mgmt(device->bdev, REQ_OP_ZONE_FINISH,23912391- physical >> SECTOR_SHIFT,23922392- zinfo->zone_size >> SECTOR_SHIFT);23932393- memalloc_nofs_restore(nofs_flags);23942394-23332333+ ret = call_zone_finish(block_group, &map->stripes[i]);23952334 if (ret) {23962335 up_read(&dev_replace->rwsem);23972336 return ret;23982337 }23992399-24002400- if (!(block_group->flags & BTRFS_BLOCK_GROUP_DATA))24012401- zinfo->reserved_active_zones++;24022402- btrfs_dev_clear_active_zone(device, physical);24032338 }24042339 up_read(&dev_replace->rwsem);24052340···25312504void btrfs_zoned_reserve_data_reloc_bg(struct btrfs_fs_info *fs_info)25322505{25332506 struct btrfs_space_info *data_sinfo = fs_info->data_sinfo;25342534- struct btrfs_space_info *space_info = data_sinfo->sub_group[0];25072507+ struct btrfs_space_info *space_info = data_sinfo;25352508 struct btrfs_trans_handle *trans;25362509 struct btrfs_block_group *bg;25372510 struct list_head *bg_list;25382511 u64 alloc_flags;25392539- bool initial = false;25122512+ bool first = true;25402513 bool did_chunk_alloc = false;25412514 int index;25422515 int ret;···25502523 if (sb_rdonly(fs_info->sb))25512524 return;2552252525532553- ASSERT(space_info->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC);25542526 alloc_flags = btrfs_get_alloc_profile(fs_info, space_info->flags);25552527 index = btrfs_bg_flags_to_raid_index(alloc_flags);2556252825572557- bg_list = &data_sinfo->block_groups[index];25292529+ /* Scan the data space_info to find empty block groups. Take the second one. 
*/25582530again:25312531+ bg_list = &space_info->block_groups[index];25592532 list_for_each_entry(bg, bg_list, list) {25602560- if (bg->used > 0)25332533+ if (bg->alloc_offset != 0)25612534 continue;2562253525632563- if (!initial) {25642564- initial = true;25362536+ if (first) {25372537+ first = false;25652538 continue;25392539+ }25402540+25412541+ if (space_info == data_sinfo) {25422542+ /* Migrate the block group to the data relocation space_info. */25432543+ struct btrfs_space_info *reloc_sinfo = data_sinfo->sub_group[0];25442544+ int factor;25452545+25462546+ ASSERT(reloc_sinfo->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC);25472547+ factor = btrfs_bg_type_to_factor(bg->flags);25482548+25492549+ down_write(&space_info->groups_sem);25502550+ list_del_init(&bg->list);25512551+ /* We can assume this as we choose the second empty one. */25522552+ ASSERT(!list_empty(&space_info->block_groups[index]));25532553+ up_write(&space_info->groups_sem);25542554+25552555+ spin_lock(&space_info->lock);25562556+ space_info->total_bytes -= bg->length;25572557+ space_info->disk_total -= bg->length * factor;25582558+ /* There is no allocation ever happened. */25592559+ ASSERT(bg->used == 0);25602560+ ASSERT(bg->zone_unusable == 0);25612561+ /* No super block in a block group on the zoned setup. */25622562+ ASSERT(bg->bytes_super == 0);25632563+ spin_unlock(&space_info->lock);25642564+25652565+ bg->space_info = reloc_sinfo;25662566+ if (reloc_sinfo->block_group_kobjs[index] == NULL)25672567+ btrfs_sysfs_add_block_group_type(bg);25682568+25692569+ btrfs_add_bg_to_space_info(fs_info, bg);25662570 }2567257125682572 fs_info->data_reloc_bg = bg->start;···26102552 if (IS_ERR(trans))26112553 return;2612255425552555+ /* Allocate new BG in the data relocation space_info. 
*/25562556+ space_info = data_sinfo->sub_group[0];25572557+ ASSERT(space_info->subgroup_id == BTRFS_SUB_GROUP_DATA_RELOC);26132558 ret = btrfs_chunk_alloc(trans, space_info, alloc_flags, CHUNK_ALLOC_FORCE);26142559 btrfs_end_transaction(trans);26152560 if (ret == 1) {25612561+ /*25622562+ * We allocated a new block group in the data relocation space_info. We25632563+ * can take that one.25642564+ */25652565+ first = false;26162566 did_chunk_alloc = true;26172617- bg_list = &space_info->block_groups[index];26182567 goto again;26192568 }26202569}
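The migration above subtracts the block group's length (and its RAID-factored on-disk size) from the data space_info's totals and re-adds it under the relocation sub-group, so the global accounting stays balanced. A toy model of that conserved move, with the fields simplified to two counters:

```c
#include <assert.h>

struct sinfo { long total_bytes, disk_total; };

/*
 * Move @len bytes of accounting from @from to @to; @factor models the
 * RAID replication factor applied to the on-disk total.
 */
static void migrate_bg(struct sinfo *from, struct sinfo *to,
		       long len, int factor)
{
	from->total_bytes -= len;
	from->disk_total  -= len * factor;
	to->total_bytes   += len;
	to->disk_total    += len * factor;
}
```

The invariant to check is that the sum across both space_infos is unchanged by the move.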
···183183 struct debugfs_fs_info *sb_opts = sb->s_fs_info;184184 struct debugfs_fs_info *new_opts = fc->s_fs_info;185185186186+ if (!new_opts)187187+ return 0;188188+186189 sync_filesystem(sb);187190188191 /* structure copy of new mount options to sb */···285282286283static int debugfs_get_tree(struct fs_context *fc)287284{285285+ int err;286286+288287 if (!(debugfs_allow & DEBUGFS_ALLOW_API))289288 return -EPERM;290289291291- return get_tree_single(fc, debugfs_fill_super);290290+ err = get_tree_single(fc, debugfs_fill_super);291291+ if (err)292292+ return err;293293+294294+ return debugfs_reconfigure(fc);292295}293296294297static void debugfs_free_fc(struct fs_context *fc)
+20-3
fs/ext4/fsmap.c
···393393 /* Reserved GDT blocks */394394 if (!ext4_has_feature_meta_bg(sb) || metagroup < first_meta_bg) {395395 len = le16_to_cpu(sbi->s_es->s_reserved_gdt_blocks);396396+397397+ /*398398+ * mkfs.ext4 can set s_reserved_gdt_blocks as 0 in some cases,399399+ * check for that.400400+ */401401+ if (!len)402402+ return 0;403403+396404 error = ext4_getfsmap_fill(meta_list, fsb, len,397405 EXT4_FMR_OWN_RESV_GDT);398406 if (error)···534526 ext4_group_t end_ag;535527 ext4_grpblk_t first_cluster;536528 ext4_grpblk_t last_cluster;529529+ struct ext4_fsmap irec;537530 int error = 0;538531539532 bofs = le32_to_cpu(sbi->s_es->s_first_data_block);···618609 goto err;619610 }620611621621- /* Report any gaps at the end of the bg */612612+ /*613613+ * The dummy record below will cause ext4_getfsmap_helper() to report614614+ * any allocated blocks at the end of the range.615615+ */616616+ irec.fmr_device = 0;617617+ irec.fmr_physical = end_fsb + 1;618618+ irec.fmr_length = 0;619619+ irec.fmr_owner = EXT4_FMR_OWN_FREE;620620+ irec.fmr_flags = 0;621621+622622 info->gfi_last = true;623623- error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1,624624- 0, info);623623+ error = ext4_getfsmap_helper(sb, info, &irec);625624 if (error)626625 goto err;627626
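The fix replaces the old end-of-group call with a zero-length dummy record placed one block past the end of the queried range; the shared helper then emits whatever is still pending before that sentinel. The flush-by-sentinel idea in miniature, with a hypothetical reporter that tracks gaps between records fed in ascending order:

```c
#include <assert.h>

struct rec { unsigned long start, len; };

struct reporter {
	unsigned long next;   /* first block not yet accounted for */
	unsigned long gaps;   /* free blocks found between records */
};

/*
 * Each call reports the gap (if any) before @r and advances the cursor.
 * A zero-length sentinel at end+1 flushes the final gap without adding
 * any extent of its own.
 */
static void report(struct reporter *rp, const struct rec *r)
{
	if (r->start > rp->next)
		rp->gaps += r->start - rp->next;
	rp->next = r->start + r->len;
}
```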
+2-2
fs/ext4/indirect.c
···539539 int indirect_blks;540540 int blocks_to_boundary = 0;541541 int depth;542542- int count = 0;542542+ u64 count = 0;543543 ext4_fsblk_t first_block = 0;544544545545 trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);···588588 count++;589589 /* Fill in size of a hole we found */590590 map->m_pblk = 0;591591- map->m_len = min_t(unsigned int, map->m_len, count);591591+ map->m_len = umin(map->m_len, count);592592 goto cleanup;593593 }594594
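Widening `count` to `u64` and switching to `umin()` keeps the comparison in 64 bits; truncating a large hole size to a 32-bit type before comparing could wrap or lose high bits. A standalone illustration (function name is mine, not ext4's):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Clamp a requested 32-bit length by a 64-bit block count, doing the
 * comparison at 64 bits so a huge count is never truncated first.
 */
static uint32_t clamp_len(uint32_t want, uint64_t count)
{
	return (uint64_t)want < count ? want : (uint32_t)count;
}
```

With a count of 2^33, casting it to 32 bits before the comparison would yield 0 and wrongly shrink the mapping; comparing at 64 bits returns the requested length.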
+2-2
fs/ext4/inode.c
···146146 */147147int ext4_inode_is_fast_symlink(struct inode *inode)148148{149149- if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {149149+ if (!ext4_has_feature_ea_inode(inode->i_sb)) {150150 int ea_blocks = EXT4_I(inode)->i_file_acl ?151151 EXT4_CLUSTER_SIZE(inode->i_sb) >> 9 : 0;152152···31553155 folio_unlock(folio);31563156 folio_put(folio);31573157 /*31583158- * block_write_begin may have instantiated a few blocks31583158+ * ext4_block_write_begin may have instantiated a few blocks31593159 * outside i_size. Trim these off again. Don't need31603160 * i_size_read because we hold inode lock.31613161 */
···547547 * first page of the bio. Otherwise it can deadlock.548548 */549549 if (io->io_bio)550550- gfp_flags = GFP_NOWAIT | __GFP_NOWARN;550550+ gfp_flags = GFP_NOWAIT;551551 retry_encrypt:552552 bounce_page = fscrypt_encrypt_pagecache_blocks(folio,553553 enc_bytes, 0, gfp_flags);
+8-4
fs/ext4/super.c
···268268void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block)269269{270270 struct buffer_head *bh = bdev_getblk(sb->s_bdev, block,271271- sb->s_blocksize, GFP_NOWAIT | __GFP_NOWARN);271271+ sb->s_blocksize, GFP_NOWAIT);272272273273 if (likely(bh)) {274274 if (trylock_buffer(bh))···19981998 fc->fs_private = ctx;19991999 fc->ops = &ext4_context_ops;2000200020012001+ /* i_version is always enabled now */20022002+ fc->sb_flags |= SB_I_VERSION;20032003+20012004 return 0;20022005}20032006···29782975 SEQ_OPTS_PRINT("min_batch_time=%u", sbi->s_min_batch_time);29792976 if (nodefs || sbi->s_max_batch_time != EXT4_DEF_MAX_BATCH_TIME)29802977 SEQ_OPTS_PRINT("max_batch_time=%u", sbi->s_max_batch_time);29782978+ if (nodefs && sb->s_flags & SB_I_VERSION)29792979+ SEQ_OPTS_PUTS("i_version");29812980 if (nodefs || sbi->s_stripe)29822981 SEQ_OPTS_PRINT("stripe=%lu", sbi->s_stripe);29832982 if (nodefs || EXT4_MOUNT_DATA_FLAGS &···53195314 sb->s_flags = (sb->s_flags & ~SB_POSIXACL) |53205315 (test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0);5321531653225322- /* i_version is always enabled now */53235323- sb->s_flags |= SB_I_VERSION;53245324-53255317 /* HSM events are allowed by default. */53265318 sb->s_iflags |= SB_I_ALLOW_HSM;53275319···54165414 err = ext4_load_and_init_journal(sb, es, ctx);54175415 if (err)54185416 goto failed_mount3a;54175417+ if (bdev_read_only(sb->s_bdev))54185418+ needs_recovery = 0;54195419 } else if (test_opt(sb, NOLOAD) && !sb_rdonly(sb) &&54205420 ext4_has_feature_journal_needs_recovery(sb)) {54215421 ext4_msg(sb, KERN_ERR, "required journal recovery "
+1-1
fs/fhandle.c
···402402 if (retval)403403 return retval;404404405405- CLASS(get_unused_fd, fd)(O_CLOEXEC);405405+ CLASS(get_unused_fd, fd)(open_flag);406406 if (fd < 0)407407 return fd;408408
+5-4
fs/fs-writeback.c
···26082608 wakeup_bdi = inode_io_list_move_locked(inode, wb,26092609 dirty_list);2610261026112611- spin_unlock(&wb->list_lock);26122612- spin_unlock(&inode->i_lock);26132613- trace_writeback_dirty_inode_enqueue(inode);26142614-26152611 /*26162612 * If this is the first dirty inode for this bdi,26172613 * we have to wake-up the corresponding bdi thread···26172621 if (wakeup_bdi &&26182622 (wb->bdi->capabilities & BDI_CAP_WRITEBACK))26192623 wb_wakeup_delayed(wb);26242624+26252625+ spin_unlock(&wb->list_lock);26262626+ spin_unlock(&inode->i_lock);26272627+ trace_writeback_dirty_inode_enqueue(inode);26282628+26202629 return;26212630 }26222631 }
-5
fs/fuse/inode.c
···289289 }290290 }291291292292- if (attr->blksize != 0)293293- inode->i_blkbits = ilog2(attr->blksize);294294- else295295- inode->i_blkbits = inode->i_sb->s_blocksize_bits;296296-297292 /*298293 * Don't set the sticky bit in i_mode, unless we want the VFS299294 * to check permissions. This prevents failures due to the
+7-7
fs/iomap/direct-io.c
···363363 if (iomap->flags & IOMAP_F_SHARED)364364 dio->flags |= IOMAP_DIO_COW;365365366366- if (iomap->flags & IOMAP_F_NEW) {366366+ if (iomap->flags & IOMAP_F_NEW)367367 need_zeroout = true;368368- } else if (iomap->type == IOMAP_MAPPED) {369369- if (iomap_dio_can_use_fua(iomap, dio))370370- bio_opf |= REQ_FUA;371371- else372372- dio->flags &= ~IOMAP_DIO_WRITE_THROUGH;373373- }368368+ else if (iomap->type == IOMAP_MAPPED &&369369+ iomap_dio_can_use_fua(iomap, dio))370370+ bio_opf |= REQ_FUA;371371+372372+ if (!(bio_opf & REQ_FUA))373373+ dio->flags &= ~IOMAP_DIO_WRITE_THROUGH;374374375375 /*376376 * We can only do deferred completion for pure overwrites that
···253253 nfs_page_clear_headlock(req);254254}255255256256-/*257257- * nfs_page_group_sync_on_bit_locked256256+/**257257+ * nfs_page_group_sync_on_bit_locked - Test if all requests have @bit set258258+ * @req: request in page group259259+ * @bit: PG_* bit that is used to sync page group258260 *259261 * must be called with page group lock held260262 */261261-static bool262262-nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit)263263+bool nfs_page_group_sync_on_bit_locked(struct nfs_page *req, unsigned int bit)263264{264265 struct nfs_page *head = req->wb_head;265266 struct nfs_page *tmp;
+10-19
fs/nfs/write.c
···153153 }154154}155155156156-static int157157-nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode)156156+static void nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode)158157{159159- int ret;160160-161161- if (!test_bit(PG_REMOVE, &req->wb_flags))162162- return 0;163163- ret = nfs_page_group_lock(req);164164- if (ret)165165- return ret;166158 if (test_and_clear_bit(PG_REMOVE, &req->wb_flags))167159 nfs_page_set_inode_ref(req, inode);168168- nfs_page_group_unlock(req);169169- return 0;170160}171161172162/**···575585 }576586 }577587588588+ ret = nfs_page_group_lock(head);589589+ if (ret < 0)590590+ goto out_unlock;591591+578592 /* Ensure that nobody removed the request before we locked it */579593 if (head != folio->private) {594594+ nfs_page_group_unlock(head);580595 nfs_unlock_and_release_request(head);581596 goto retry;582597 }583598584584- ret = nfs_cancel_remove_inode(head, inode);585585- if (ret < 0)586586- goto out_unlock;587587-588588- ret = nfs_page_group_lock(head);589589- if (ret < 0)590590- goto out_unlock;599599+ nfs_cancel_remove_inode(head, inode);591600592601 /* lock each request in the page group */593602 for (subreq = head->wb_this_page;···775786{776787 struct nfs_inode *nfsi = NFS_I(nfs_page_to_inode(req));777788778778- if (nfs_page_group_sync_on_bit(req, PG_REMOVE)) {789789+ nfs_page_group_lock(req);790790+ if (nfs_page_group_sync_on_bit_locked(req, PG_REMOVE)) {779791 struct folio *folio = nfs_page_to_folio(req->wb_head);780792 struct address_space *mapping = folio->mapping;781793···788798 }789799 spin_unlock(&mapping->i_private_lock);790800 }801801+ nfs_page_group_unlock(req);791802792803 if (test_and_clear_bit(PG_INODE_REF, &req->wb_flags)) {793804 atomic_long_dec(&nfsi->nrequests);
···111111 return;112112 }113113 if (IS_MNT_SHARED(mnt)) {114114- m = propagation_source(mnt);114114+ if (type == MS_SLAVE || !hlist_empty(&mnt->mnt_slave_list))115115+ m = propagation_source(mnt);115116 if (list_empty(&mnt->mnt_share)) {116117 mnt_release_group_id(mnt);117118 } else {···638637 }639638640639 // now to_umount consists of all acceptable candidates641641- // deal with reparenting of remaining overmounts on those640640+ // deal with reparenting of surviving overmounts on those642641 list_for_each_entry(m, &to_umount, mnt_list) {643643- if (m->overmount)644644- reparent(m->overmount);642642+ struct mount *over = m->overmount;643643+ if (over && !will_be_unmounted(over))644644+ reparent(over);645645 }646646647647 // and fold them into the set
···4646 struct mutex srv_mutex;4747 int status;4848 unsigned int cli_cap;4949- __be32 inet_addr;4949+ union {5050+ __be32 inet_addr;5151+#if IS_ENABLED(CONFIG_IPV6)5252+ u8 inet6_addr[16];5353+#endif5454+ };5055 char *request_buf;5156 struct ksmbd_transport *transport;5257 struct nls_table *local_nls;
+10-3
fs/smb/server/oplock.c
···11021102 if (!atomic_inc_not_zero(&opinfo->refcount))11031103 continue;1104110411051105- if (ksmbd_conn_releasing(opinfo->conn))11051105+ if (ksmbd_conn_releasing(opinfo->conn)) {11061106+ opinfo_put(opinfo);11061107 continue;11081108+ }1107110911081110 oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL);11091111 opinfo_put(opinfo);···11411139 if (!atomic_inc_not_zero(&opinfo->refcount))11421140 continue;1143114111441144- if (ksmbd_conn_releasing(opinfo->conn))11421142+ if (ksmbd_conn_releasing(opinfo->conn)) {11431143+ opinfo_put(opinfo);11451144 continue;11451145+ }11461146+11461147 oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL);11471148 opinfo_put(opinfo);11481149 }···13481343 if (!atomic_inc_not_zero(&brk_op->refcount))13491344 continue;1350134513511351- if (ksmbd_conn_releasing(brk_op->conn))13461346+ if (ksmbd_conn_releasing(brk_op->conn)) {13471347+ opinfo_put(brk_op);13521348 continue;13491349+ }1353135013541351 if (brk_op->is_lease && (brk_op->o_lease->state &13551352 (~(SMB2_LEASE_READ_CACHING_LE |
···187187 unsigned short flags;188188 unsigned int fragments;189189 u64 lookup_table_start, xattr_id_table_start, next_table;190190- int err;190190+ int err, devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);191191192192 TRACE("Entered squashfs_fill_superblock\n");193193+194194+ if (!devblksize) {195195+ errorf(fc, "squashfs: unable to set blocksize\n");196196+ return -EINVAL;197197+ }193198194199 sb->s_fs_info = kzalloc(sizeof(*msblk), GFP_KERNEL);195200 if (sb->s_fs_info == NULL) {···206201207202 msblk->panic_on_errors = (opts->errors == Opt_errors_panic);208203209209- msblk->devblksize = sb_min_blocksize(sb, SQUASHFS_DEVBLK_SIZE);210210- if (!msblk->devblksize) {211211- errorf(fc, "squashfs: unable to set blocksize\n");212212- return -EINVAL;213213- }214214-204204+ msblk->devblksize = devblksize;215205 msblk->devblksize_log2 = ffz(~msblk->devblksize);216206217207 mutex_init(&msblk->meta_index_mutex);
+1
include/linux/blkdev.h
···656656 QUEUE_FLAG_SQ_SCHED, /* single queue style io dispatch */657657 QUEUE_FLAG_DISABLE_WBT_DEF, /* for sched to disable/enable wbt */658658 QUEUE_FLAG_NO_ELV_SWITCH, /* can't switch elevator any more */659659+ QUEUE_FLAG_QOS_ENABLED, /* qos is enabled */659660 QUEUE_FLAG_MAX660661};661662
-8
include/linux/compiler.h
···288288#define __ADDRESSABLE(sym) \289289 ___ADDRESSABLE(sym, __section(".discard.addressable"))290290291291-#define __ADDRESSABLE_ASM(sym) \292292- .pushsection .discard.addressable,"aw"; \293293- .align ARCH_SEL(8,4); \294294- ARCH_SEL(.quad, .long) __stringify(sym); \295295- .popsection;296296-297297-#define __ADDRESSABLE_ASM_STR(sym) __stringify(__ADDRESSABLE_ASM(sym))298298-299291/*300292 * This returns a constant expression while determining if an argument is301293 * a constant expression, most importantly without evaluating the argument.
···57575858/*5959 * The softirq flavor of kcov_remote_*() functions is introduced as a temporary6060- * workaround for KCOV's lack of nested remote coverage sections support.6161-6262- * Adding support is tracked in https://bugzilla.kernel.org/show_bug.cgi?id=210337.6363-6464- * kcov_remote_start_usb_softirq():6565-6666- * 1. Only collects coverage when called in the softirq context. This allows6767- * avoiding nested remote coverage collection sections in the task context.6868- * For example, USB/IP calls usb_hcd_giveback_urb() in the task context6969- * within an existing remote coverage collection section. Thus, KCOV should7070- * not attempt to start collecting coverage within the coverage collection7171- * section in __usb_hcd_giveback_urb() in this case.7272- *7373- * 2. Disables interrupts for the duration of the coverage collection section.7474- * This allows avoiding nested remote coverage collection sections in the7575- * softirq context (a softirq might occur during the execution of a work in7676- * the BH workqueue, which runs with in_serving_softirq() > 0).7777- * For example, usb_giveback_urb_bh() runs in the BH workqueue with7878- * interrupts enabled, so __usb_hcd_giveback_urb() might be interrupted in7979- * the middle of its remote coverage collection section, and the interrupt8080- * handler might invoke __usb_hcd_giveback_urb() again.6060+ * work around for kcov's lack of nested remote coverage sections support in6161+ * task context. Adding support for nested sections is tracked in:6262+ * https://bugzilla.kernel.org/show_bug.cgi?id=2103378163 */82648383-static inline unsigned long kcov_remote_start_usb_softirq(u64 id)6565+static inline void kcov_remote_start_usb_softirq(u64 id)8466{8585- unsigned long flags = 0;8686-8787- if (in_serving_softirq()) {8888- local_irq_save(flags);6767+ if (in_serving_softirq() && !in_hardirq())8968 kcov_remote_start_usb(id);9090- }9191-9292- return flags;9369}94709595-static inline void kcov_remote_stop_softirq(unsigned long flags)7171+static inline void kcov_remote_stop_softirq(void)9672{9797- if (in_serving_softirq()) {7373+ if (in_serving_softirq() && !in_hardirq())9874 kcov_remote_stop();9999- local_irq_restore(flags);100100- }10175}1027610377#ifdef CONFIG_64BIT···105131}106132static inline void kcov_remote_start_common(u64 id) {}107133static inline void kcov_remote_start_usb(u64 id) {}108108-static inline unsigned long kcov_remote_start_usb_softirq(u64 id)109109-{110110- return 0;111111-}112112-static inline void kcov_remote_stop_softirq(unsigned long flags) {}134134+static inline void kcov_remote_start_usb_softirq(u64 id) {}135135+static inline void kcov_remote_stop_softirq(void) {}113136114137#endif /* CONFIG_KCOV */115138#endif /* _LINUX_KCOV_H */
···150150 bool active; /* T if stream is active */151151 bool need_retry; /* T if this stream needs retrying */152152 bool failed; /* T if this stream failed */153153+ bool transferred_valid; /* T if ->transferred is valid */153154};154155155156/*
···173173#else174174#error unspecified endianness175175#endif176176- __u32 resync_offset; /* 11 resync checkpoint sector count */176176+ __u32 recovery_cp; /* 11 resync checkpoint sector count */177177 /* There are only valid for minor_version > 90 */178178 __u64 reshape_position; /* 12,13 next address in array-space for reshape */179179 __u32 new_level; /* 14 new level we are reshaping to */
···9797config KEXEC_HANDOVER9898 bool "kexec handover"9999 depends on ARCH_SUPPORTS_KEXEC_HANDOVER && ARCH_SUPPORTS_KEXEC_FILE100100+ depends on !DEFERRED_STRUCT_PAGE_INIT100101 select MEMBLOCK_KHO_SCRATCH101102 select KEXEC_FILE102103 select DEBUG_FS
+5-6
kernel/cgroup/cpuset.c
···280280{281281 if (!cpusets_insane_config() &&282282 movable_only_nodes(nodes)) {283283- static_branch_enable(&cpusets_insane_config_key);283283+ static_branch_enable_cpuslocked(&cpusets_insane_config_key);284284 pr_info("Unsupported (movable nodes only) cpuset configuration detected (nmask=%*pbl)!\n"285285 "Cpuset allocations might fail even with a lot of memory available.\n",286286 nodemask_pr_args(nodes));···18431843 if (is_partition_valid(cs))18441844 adding = cpumask_and(tmp->addmask,18451845 xcpus, parent->effective_xcpus);18461846- } else if (is_partition_invalid(cs) &&18461846+ } else if (is_partition_invalid(cs) && !cpumask_empty(xcpus) &&18471847 cpumask_subset(xcpus, parent->effective_xcpus)) {18481848 struct cgroup_subsys_state *css;18491849 struct cpuset *child;···33583358 else33593359 return -EINVAL;3360336033613361- css_get(&cs->css);33623361 cpus_read_lock();33633362 mutex_lock(&cpuset_mutex);33643363 if (is_cpuset_online(cs))33653364 retval = update_prstate(cs, val);33663365 mutex_unlock(&cpuset_mutex);33673366 cpus_read_unlock();33683368- css_put(&cs->css);33693367 return retval ?: nbytes;33703368}33713369···38683870 partcmd = partcmd_invalidate;38693871 /*38703872 * On the other hand, an invalid partition root may be transitioned38713871- * back to a regular one.38733873+ * back to a regular one with a non-empty effective xcpus.38723874 */38733873- else if (is_partition_valid(parent) && is_partition_invalid(cs))38753875+ else if (is_partition_valid(parent) && is_partition_invalid(cs) &&38763876+ !cpumask_empty(cs->effective_xcpus))38743877 partcmd = partcmd_update;3875387838763879 if (partcmd >= 0) {
···46614661 } else {46624662 iter->hash = alloc_and_copy_ftrace_hash(size_bits, hash);46634663 }46644664+ } else {46654665+ if (hash)46664666+ iter->hash = alloc_and_copy_ftrace_hash(hash->size_bits, hash);46674667+ else46684668+ iter->hash = EMPTY_HASH;46694669+ }4664467046654665- if (!iter->hash) {46664666- trace_parser_put(&iter->parser);46674667- goto out_unlock;46684668- }46694669- } else46704670- iter->hash = hash;46714671+ if (!iter->hash) {46724672+ trace_parser_put(&iter->parser);46734673+ goto out_unlock;46744674+ }4671467546724676 ret = 0;46734677···65476543 ftrace_hash_move_and_update_ops(iter->ops, orig_hash,65486544 iter->hash, filter_hash);65496545 mutex_unlock(&ftrace_lock);65506550- } else {65516551- /* For read only, the hash is the ops hash */65526552- iter->hash = NULL;65536546 }6554654765556548 mutex_unlock(&iter->ops->func_hash->regex_lock);
+1-1
kernel/trace/ring_buffer.c
···76667666 rb_test_started = true;7667766776687668 set_current_state(TASK_INTERRUPTIBLE);76697669- /* Just run for 10 seconds */;76697669+ /* Just run for 10 seconds */76707670 schedule_timeout(10 * HZ);7671767176727672 kthread_stop(rb_hammer);
···2727 unsigned long enter_funcs[FTRACE_RETFUNC_DEPTH];2828};29293030+struct fgraph_ent_args {3131+ struct ftrace_graph_ent_entry ent;3232+ /* Force the sizeof of args[] to have FTRACE_REGS_MAX_ARGS entries */3333+ unsigned long args[FTRACE_REGS_MAX_ARGS];3434+};3535+3036struct fgraph_data {3137 struct fgraph_cpu_data __percpu *cpu_data;32383339 /* Place to preserve last processed entry. */3440 union {3535- struct ftrace_graph_ent_entry ent;4141+ struct fgraph_ent_args ent;4242+ /* TODO allow retaddr to have args */3643 struct fgraph_retaddr_ent_entry rent;3737- } ent;4444+ };3845 struct ftrace_graph_ret_entry ret;3946 int failed;4047 int cpu;···634627 * Save current and next entries for later reference635628 * if the output fails.636629 */637637- if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT))638638- data->ent.rent = *(struct fgraph_retaddr_ent_entry *)curr;639639- else640640- data->ent.ent = *curr;630630+ if (unlikely(curr->ent.type == TRACE_GRAPH_RETADDR_ENT)) {631631+ data->rent = *(struct fgraph_retaddr_ent_entry *)curr;632632+ } else {633633+ int size = min((int)sizeof(data->ent), (int)iter->ent_size);634634+635635+ memcpy(&data->ent, curr, size);636636+ }641637 /*642638 * If the next event is not a return type, then643639 * we only care about what type it is. Otherwise we can
+5-5
lib/crypto/Kconfig
···140140config CRYPTO_LIB_SHA1141141 tristate142142 help143143- The SHA-1 library functions. Select this if your module uses any of144144- the functions from <crypto/sha1.h>.143143+ The SHA-1 and HMAC-SHA1 library functions. Select this if your module144144+ uses any of the functions from <crypto/sha1.h>.145145146146config CRYPTO_LIB_SHA1_ARCH147147 bool···157157config CRYPTO_LIB_SHA256158158 tristate159159 help160160- Enable the SHA-256 library interface. This interface may be fulfilled161161- by either the generic implementation or an arch-specific one, if one162162- is available and enabled.160160+ The SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 library functions.161161+ Select this if your module uses any of these functions from162162+ <crypto/sha2.h>.163163164164config CRYPTO_LIB_SHA256_ARCH165165 bool
···853853#define hwpoison_hugetlb_range NULL854854#endif855855856856+static int hwpoison_test_walk(unsigned long start, unsigned long end,857857+ struct mm_walk *walk)858858+{859859+ /* We also want to consider pages mapped into VM_PFNMAP. */860860+ return 0;861861+}862862+856863static const struct mm_walk_ops hwpoison_walk_ops = {857864 .pmd_entry = hwpoison_pte_range,858865 .hugetlb_entry = hwpoison_hugetlb_range,866866+ .test_walk = hwpoison_test_walk,859867 .walk_lock = PGWALK_RDLOCK,860868};861869
+30-8
mm/migrate.c
···4343#include <linux/sched/sysctl.h>4444#include <linux/memory-tiers.h>4545#include <linux/pagewalk.h>4646-#include <linux/balloon_compaction.h>4747-#include <linux/zsmalloc.h>48464947#include <asm/tlbflush.h>5048···50525153#include "internal.h"5254#include "swap.h"5555+5656+static const struct movable_operations *offline_movable_ops;5757+static const struct movable_operations *zsmalloc_movable_ops;5858+5959+int set_movable_ops(const struct movable_operations *ops, enum pagetype type)6060+{6161+ /*6262+ * We only allow for selected types and don't handle concurrent6363+ * registration attempts yet.6464+ */6565+ switch (type) {6666+ case PGTY_offline:6767+ if (offline_movable_ops && ops)6868+ return -EBUSY;6969+ offline_movable_ops = ops;7070+ break;7171+ case PGTY_zsmalloc:7272+ if (zsmalloc_movable_ops && ops)7373+ return -EBUSY;7474+ zsmalloc_movable_ops = ops;7575+ break;7676+ default:7777+ return -EINVAL;7878+ }7979+ return 0;8080+}8181+EXPORT_SYMBOL_GPL(set_movable_ops);53825483static const struct movable_operations *page_movable_ops(struct page *page)5584{···8762 * it as movable, the page type must be sticky until the page gets freed8863 * back to the buddy.8964 */9090-#ifdef CONFIG_BALLOON_COMPACTION9165 if (PageOffline(page))9266 /* Only balloon compaction sets PageOffline pages movable. */9393- return &balloon_mops;9494-#endif /* CONFIG_BALLOON_COMPACTION */9595-#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)6767+ return offline_movable_ops;9668 if (PageZsmalloc(page))9797- return &zsmalloc_mops;9898-#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */6969+ return zsmalloc_movable_ops;7070+9971 return NULL;10072}10173
+47-35
mm/mremap.c
···323323}324324#endif325325326326+static inline bool uffd_supports_page_table_move(struct pagetable_move_control *pmc)327327+{328328+ /*329329+ * If we are moving a VMA that has uffd-wp registered but with330330+ * remap events disabled (new VMA will not be registered with uffd), we331331+ * need to ensure that the uffd-wp state is cleared from all pgtables.332332+ * This means recursing into lower page tables in move_page_tables().333333+ *334334+ * We might get called with VMAs reversed when recovering from a335335+ * failed page table move. In that case, the336336+ * "old"-but-actually-"originally new" VMA during recovery will not have337337+ * a uffd context. Recursing into lower page tables during the original338338+ * move but not during the recovery move will cause trouble, because we339339+ * run into already-existing page tables. So check both VMAs.340340+ */341341+ return !vma_has_uffd_without_event_remap(pmc->old) &&342342+ !vma_has_uffd_without_event_remap(pmc->new);343343+}344344+326345#ifdef CONFIG_HAVE_MOVE_PMD327346static bool move_normal_pmd(struct pagetable_move_control *pmc,328347 pmd_t *old_pmd, pmd_t *new_pmd)···353334 pmd_t pmd;354335355336 if (!arch_supports_page_table_move())337337+ return false;338338+ if (!uffd_supports_page_table_move(pmc))356339 return false;357340 /*358341 * The destination pmd shouldn't be established, free_pgtables()···380359 * this point, and verify that it really is empty. We'll see.381360 */382361 if (WARN_ON_ONCE(!pmd_none(*new_pmd)))383383- return false;384384-385385- /* If this pmd belongs to a uffd vma with remap events disabled, we need386386- * to ensure that the uffd-wp state is cleared from all pgtables. This387387- * means recursing into lower page tables in move_page_tables(), and we388388- * can reuse the existing code if we simply treat the entry as "not389389- * moved".390390- */391391- if (vma_has_uffd_without_event_remap(vma))392362 return false;393363394364 /*···430418431419 if (!arch_supports_page_table_move())432420 return false;421421+ if (!uffd_supports_page_table_move(pmc))422422+ return false;433423 /*434424 * The destination pud shouldn't be established, free_pgtables()435425 * should have released it.436426 */437427 if (WARN_ON_ONCE(!pud_none(*new_pud)))438438- return false;439439-440440- /* If this pud belongs to a uffd vma with remap events disabled, we need441441- * to ensure that the uffd-wp state is cleared from all pgtables. This442442- * means recursing into lower page tables in move_page_tables(), and we443443- * can reuse the existing code if we simply treat the entry as "not444444- * moved".445445- */446446- if (vma_has_uffd_without_event_remap(vma))447428 return false;448429449430 /*···1625162016261621static bool vma_multi_allowed(struct vm_area_struct *vma)16271622{16281628- struct file *file;16231623+ struct file *file = vma->vm_file;1629162416301625 /*16311626 * We can't support moving multiple uffd VMAs as notify requires···16381633 * Custom get unmapped area might result in MREMAP_FIXED not16391634 * being obeyed.16401635 */16411641- file = vma->vm_file;16421642- if (file && !vma_is_shmem(vma) && !is_vm_hugetlb_page(vma)) {16431643- const struct file_operations *fop = file->f_op;16361636+ if (!file || !file->f_op->get_unmapped_area)16371637+ return true;16381638+ /* Known good. */16391639+ if (vma_is_shmem(vma))16401640+ return true;16411641+ if (is_vm_hugetlb_page(vma))16421642+ return true;16431643+ if (file->f_op->get_unmapped_area == thp_get_unmapped_area)16441644+ return true;1644164516451645- if (fop->get_unmapped_area)16461646- return false;16471647- }16481648-16491649- return true;16461646+ return false;16501647}1651164816521649static int check_prep_vma(struct vma_remap_struct *vrm)···18251818 unsigned long start = vrm->addr;18261819 unsigned long end = vrm->addr + vrm->old_len;18271820 unsigned long new_addr = vrm->new_addr;18281828- bool allowed = true, seen_vma = false;18291821 unsigned long target_addr = new_addr;18301822 unsigned long res = -EFAULT;18311823 unsigned long last_end;18241824+ bool seen_vma = false;18251825+18321826 VMA_ITERATOR(vmi, current->mm, start);1833182718341828 /*···18421834 unsigned long addr = max(vma->vm_start, start);18431835 unsigned long len = min(end, vma->vm_end) - addr;18441836 unsigned long offset, res_vma;18451845-18461846- if (!allowed)18471847- return -EFAULT;18371837+ bool multi_allowed;1848183818491839 /* No gap permitted at the start of the range. */18501840 if (!seen_vma && start < vma->vm_start)···18711865 vrm->new_addr = target_addr + offset;18721866 vrm->old_len = vrm->new_len = len;1873186718741874- allowed = vma_multi_allowed(vma);18751875- if (seen_vma && !allowed)18761876- return -EFAULT;18681868+ multi_allowed = vma_multi_allowed(vma);18691869+ if (!multi_allowed) {18701870+ /* This is not the first VMA, abort immediately. */18711871+ if (seen_vma)18721872+ return -EFAULT;18731873+ /* This is the first, but there are more, abort. */18741874+ if (vma->vm_end < end)18751875+ return -EFAULT;18761876+ }1877187718781878 res_vma = check_prep_vma(vrm);18791879 if (!res_vma)···18881876 return res_vma;1889187718901878 if (!seen_vma) {18911891- VM_WARN_ON_ONCE(allowed && res_vma != new_addr);18791879+ VM_WARN_ON_ONCE(multi_allowed && res_vma != new_addr);18921880 res = res_vma;18931881 }18941882
···2246224622472247static int __init zs_init(void)22482248{22492249+ int rc __maybe_unused;22502250+22492251#ifdef CONFIG_ZPOOL22502252 zpool_register_driver(&zs_zpool_driver);22532253+#endif22542254+#ifdef CONFIG_COMPACTION22552255+ rc = set_movable_ops(&zsmalloc_mops, PGTY_zsmalloc);22562256+ if (rc)22572257+ return rc;22512258#endif22522259 zs_stat_init();22532260 return 0;···22642257{22652258#ifdef CONFIG_ZPOOL22662259 zpool_unregister_driver(&zs_zpool_driver);22602260+#endif22612261+#ifdef CONFIG_COMPACTION22622262+ set_movable_ops(NULL, PGTY_zsmalloc);22672263#endif22682264 zs_stat_exit();22692265}
+14-3
net/bluetooth/hci_conn.c
···339339 case BT_CODEC_TRANSPARENT:340340 if (!find_next_esco_param(conn, esco_param_msbc,341341 ARRAY_SIZE(esco_param_msbc)))342342- return false;342342+ return -EINVAL;343343+343344 param = &esco_param_msbc[conn->attempt - 1];344345 cp.tx_coding_format.id = 0x03;345346 cp.rx_coding_format.id = 0x03;···831830 /* Check if ISO connection is a BIS and terminate advertising832831 * set and BIG if there are no other connections using it.833832 */834834- bis = hci_conn_hash_lookup_big(hdev, conn->iso_qos.bcast.big);833833+ bis = hci_conn_hash_lookup_big_state(hdev,834834+ conn->iso_qos.bcast.big,835835+ BT_CONNECTED,836836+ HCI_ROLE_MASTER);837837+ if (bis)838838+ return;839839+840840+ bis = hci_conn_hash_lookup_big_state(hdev,841841+ conn->iso_qos.bcast.big,842842+ BT_CONNECT,843843+ HCI_ROLE_MASTER);835844 if (bis)836845 return;837846···22602249 * the start periodic advertising and create BIG commands have22612250 * been queued22622251 */22632263- hci_conn_hash_list_state(hdev, bis_mark_per_adv, PA_LINK,22522252+ hci_conn_hash_list_state(hdev, bis_mark_per_adv, BIS_LINK,22642253 BT_BOUND, &data);2265225422662255 /* Queue start periodic advertising and create BIG */
+10-5
net/bluetooth/hci_event.c
···67456745 qos->ucast.out.latency =67466746 DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),67476747 1000);67486748- qos->ucast.in.sdu = le16_to_cpu(ev->c_mtu);67496749- qos->ucast.out.sdu = le16_to_cpu(ev->p_mtu);67486748+ qos->ucast.in.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0;67496749+ qos->ucast.out.sdu = ev->p_bn ? le16_to_cpu(ev->p_mtu) : 0;67506750 qos->ucast.in.phy = ev->c_phy;67516751 qos->ucast.out.phy = ev->p_phy;67526752 break;···67606760 qos->ucast.in.latency =67616761 DIV_ROUND_CLOSEST(get_unaligned_le24(ev->p_latency),67626762 1000);67636763- qos->ucast.out.sdu = le16_to_cpu(ev->c_mtu);67646764- qos->ucast.in.sdu = le16_to_cpu(ev->p_mtu);67636763+ qos->ucast.out.sdu = ev->c_bn ? le16_to_cpu(ev->c_mtu) : 0;67646764+ qos->ucast.in.sdu = ev->p_bn ? le16_to_cpu(ev->p_mtu) : 0;67656765 qos->ucast.out.phy = ev->c_phy;67666766 qos->ucast.in.phy = ev->p_phy;67676767 break;···69576957 continue;69586958 }6959695969606960- if (ev->status != 0x42)69606960+ if (ev->status != 0x42) {69616961 /* Mark PA sync as established */69626962 set_bit(HCI_CONN_PA_SYNC, &bis->flags);69636963+ /* Reset cleanup callback of PA Sync so it doesn't69646964+ * terminate the sync when deleting the connection.69656965+ */69666966+ conn->cleanup = NULL;69676967+ }6963696869646969 bis->sync_handle = conn->sync_handle;69656970 bis->iso_qos.bcast.big = ev->handle;
+16-9
net/bluetooth/hci_sync.c
···33443344 * advertising data. This also applies to the case33453345 * where BR/EDR was toggled during the AUTO_OFF phase.33463346 */33473347- if (hci_dev_test_flag(hdev, HCI_ADVERTISING) ||33473347+ if (hci_dev_test_flag(hdev, HCI_ADVERTISING) &&33483348 list_empty(&hdev->adv_instances)) {33493349 if (ext_adv_capable(hdev)) {33503350 err = hci_setup_ext_adv_instance_sync(hdev, 0x00);···45314531{45324532 struct hci_cp_le_set_host_feature cp;4533453345344534- if (!cis_capable(hdev))45344534+ if (!iso_capable(hdev))45354535 return 0;4536453645374537 memset(&cp, 0, sizeof(cp));4538453845394539 /* Connected Isochronous Channels (Host Support) */45404540 cp.bit_number = 32;45414541- cp.bit_value = 1;45414541+ cp.bit_value = iso_enabled(hdev) ? 0x01 : 0x00;4542454245434543 return __hci_cmd_sync_status(hdev, HCI_OP_LE_SET_HOST_FEATURE,45444544 sizeof(cp), &cp, HCI_CMD_TIMEOUT);···6985698569866986 hci_dev_lock(hdev);6987698769886988- hci_dev_clear_flag(hdev, HCI_PA_SYNC);69896989-69906988 if (!hci_conn_valid(hdev, conn))69916989 clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);69926990···70457047 /* SID has not been set listen for HCI_EV_LE_EXT_ADV_REPORT to update70467048 * it.70477049 */70487048- if (conn->sid == HCI_SID_INVALID)70497049- __hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL,70507050- HCI_EV_LE_EXT_ADV_REPORT,70517051- conn->conn_timeout, NULL);70507050+ if (conn->sid == HCI_SID_INVALID) {70517051+ err = __hci_cmd_sync_status_sk(hdev, HCI_OP_NOP, 0, NULL,70527052+ HCI_EV_LE_EXT_ADV_REPORT,70537053+ conn->conn_timeout, NULL);70547054+ if (err == -ETIMEDOUT)70557055+ goto done;70567056+ }7052705770537058 memset(&cp, 0, sizeof(cp));70547059 cp.options = qos->bcast.options;···70807079 if (err == -ETIMEDOUT)70817080 __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC_CANCEL,70827081 0, NULL, HCI_CMD_TIMEOUT);70827082+70837083+done:70847084+ hci_dev_clear_flag(hdev, HCI_PA_SYNC);70857085+70867086+ /* Update passive scan since HCI_PA_SYNC flag has been cleared */70877087+ hci_update_passive_scan_sync(hdev);7083708870847089 return err;70857090}
···37793779 features &= ~NETIF_F_TSO_MANGLEID;37803780 }3781378137823782+ /* NETIF_F_IPV6_CSUM does not support IPv6 extension headers,37833783+ * so neither does TSO that depends on it.37843784+ */37853785+ if (features & NETIF_F_IPV6_CSUM &&37863786+ (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6 ||37873787+ (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4 &&37883788+ vlan_get_protocol(skb) == htons(ETH_P_IPV6))) &&37893789+ skb_transport_header_was_set(skb) &&37903790+ skb_network_header_len(skb) != sizeof(struct ipv6hdr) &&37913791+ !ipv6_has_hopopt_jumbo(skb))37923792+ features &= ~(NETIF_F_IPV6_CSUM | NETIF_F_TSO6 | NETIF_F_GSO_UDP_L4);37933793+37823794 return features;37833795}37843796
+7-1
net/hsr/hsr_slave.c
···6363 skb_push(skb, ETH_HLEN);6464 skb_reset_mac_header(skb);6565 if ((!hsr->prot_version && protocol == htons(ETH_P_PRP)) ||6666- protocol == htons(ETH_P_HSR))6666+ protocol == htons(ETH_P_HSR)) {6767+ if (!pskb_may_pull(skb, ETH_HLEN + HSR_HLEN)) {6868+ kfree_skb(skb);6969+ goto finish_consume;7070+ }7171+6772 skb_set_network_header(skb, ETH_HLEN + HSR_HLEN);7373+ }6874 skb_reset_mac_len(skb);69757076 /* Only the frames received over the interlink port will assign a
+2-4
net/ipv4/netfilter/nf_reject_ipv4.c
···247247 if (!oth)248248 return;249249250250- if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) &&251251- nf_reject_fill_skb_dst(oldskb) < 0)250250+ if (!skb_dst(oldskb) && nf_reject_fill_skb_dst(oldskb) < 0)252251 return;253252254253 if (skb_rtable(oldskb)->rt_flags & (RTCF_BROADCAST | RTCF_MULTICAST))···320321 if (iph->frag_off & htons(IP_OFFSET))321322 return;322323323323- if ((hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) &&324324- nf_reject_fill_skb_dst(skb_in) < 0)324324+ if (!skb_dst(skb_in) && nf_reject_fill_skb_dst(skb_in) < 0)325325 return;326326327327 if (skb_csum_unnecessary(skb_in) ||
+2-3
net/ipv6/netfilter/nf_reject_ipv6.c
···293293 fl6.fl6_sport = otcph->dest;294294 fl6.fl6_dport = otcph->source;295295296296- if (hook == NF_INET_PRE_ROUTING || hook == NF_INET_INGRESS) {296296+ if (!skb_dst(oldskb)) {297297 nf_ip6_route(net, &dst, flowi6_to_flowi(&fl6), false);298298 if (!dst)299299 return;···397397 if (hooknum == NF_INET_LOCAL_OUT && skb_in->dev == NULL)398398 skb_in->dev = net->loopback_dev;399399400400- if ((hooknum == NF_INET_PRE_ROUTING || hooknum == NF_INET_INGRESS) &&401401- nf_reject6_fill_skb_dst(skb_in) < 0)400400+ if (!skb_dst(skb_in) && nf_reject6_fill_skb_dst(skb_in) < 0)402401 return;403402404403 icmpv6_send(skb_in, ICMPV6_DEST_UNREACH, code, 0);
+5-1
net/ipv6/seg6_hmac.c
···3535#include <net/xfrm.h>36363737#include <crypto/hash.h>3838+#include <crypto/utils.h>3839#include <net/seg6.h>3940#include <net/genetlink.h>4041#include <net/seg6_hmac.h>···281280 if (seg6_hmac_compute(hinfo, srh, &ipv6_hdr(skb)->saddr, hmac_output))282281 return false;283282284284- if (memcmp(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN) != 0)283283+ if (crypto_memneq(hmac_output, tlv->hmac, SEG6_HMAC_FIELD_LEN))285284 return false;286285287286 return true;···304303{305304 struct seg6_pernet_data *sdata = seg6_pernet(net);306305 int err;306306+307307+ if (!__hmac_get_algo(hinfo->alg_id))308308+ return -EINVAL;307309308310 err = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node,309311 rht_params);
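The switch from `memcmp()` to `crypto_memneq()` closes a timing side channel: `memcmp()` may return at the first mismatching byte, leaking how long a prefix of the guessed HMAC was correct. A minimal userspace sketch of the constant-time idea (the kernel's `crypto_memneq()` additionally uses barriers to defeat compiler optimization):

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time inequality test in the spirit of crypto_memneq():
 * OR all byte differences together so the loop never exits early,
 * regardless of where the first mismatch lies. Returns nonzero if
 * the buffers differ. */
static unsigned long ct_memneq(const void *a, const void *b, size_t n)
{
	const volatile uint8_t *pa = a;
	const volatile uint8_t *pb = b;
	unsigned long diff = 0;
	size_t i;

	for (i = 0; i < n; i++)
		diff |= pa[i] ^ pb[i];
	return diff;
}
```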
+4-2
net/mptcp/options.c
···11181118 return hmac == mp_opt->ahmac;11191119}1120112011211121-/* Return false if a subflow has been reset, else return true */11211121+/* Return false in case of error (or subflow has been reset),11221122+ * else return true.11231123+ */11221124bool mptcp_incoming_options(struct sock *sk, struct sk_buff *skb)11231125{11241126 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk);···1224122212251223 mpext = skb_ext_add(skb, SKB_EXT_MPTCP);12261224 if (!mpext)12271227- return true;12251225+ return false;1228122612291227 memset(mpext, 0, sizeof(*mpext));12301228
···18081808 return tls_decrypt_sg(sk, NULL, sgout, &darg);18091809}1810181018111811+/* All records returned from a recvmsg() call must have the same type.18121812+ * 0 is not a valid content type. Use it as "no type reported, yet".18131813+ */18111814static int tls_record_content_type(struct msghdr *msg, struct tls_msg *tlm,18121815 u8 *control)18131816{···20542051 if (err < 0)20552052 goto end;2056205320542054+ /* process_rx_list() will set @control if it processed any records */20572055 copied = err;20582058- if (len <= copied || (copied && control != TLS_RECORD_TYPE_DATA) || rx_more)20562056+ if (len <= copied || rx_more ||20572057+ (control && control != TLS_RECORD_TYPE_DATA))20592058 goto end;2060205920612060 target = sock_rcvlowat(sk, flags & MSG_WAITALL, len);
+18-12
rust/kernel/alloc/allocator.rs
···4343/// For more details see [self].4444pub struct KVmalloc;45454646-/// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment.4747-fn aligned_size(new_layout: Layout) -> usize {4848- // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first.4949- let layout = new_layout.pad_to_align();5050-5151- // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()`5252- // which together with the slab guarantees means the `krealloc` will return a properly aligned5353- // object (see comments in `kmalloc()` for more information).5454- layout.size()5555-}5656-5746/// # Invariants5847///5948/// One of the following: `krealloc`, `vrealloc`, `kvrealloc`.···7788 old_layout: Layout,7889 flags: Flags,7990 ) -> Result<NonNull<[u8]>, AllocError> {8080- let size = aligned_size(layout);9191+ let size = layout.size();8192 let ptr = match ptr {8293 Some(ptr) => {8394 if old_layout.size() == 0 {···112123 }113124}114125126126+impl Kmalloc {127127+ /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of128128+ /// `layout`.129129+ pub fn aligned_layout(layout: Layout) -> Layout {130130+ // Note that `layout.size()` (after padding) is guaranteed to be a multiple of131131+ // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return132132+ // a properly aligned object (see comments in `kmalloc()` for more information).133133+ layout.pad_to_align()134134+ }135135+}136136+115137// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that116138// - memory remains valid until it is explicitly freed,117139// - passing a pointer to a valid memory allocation is OK,···135135 old_layout: Layout,136136 flags: Flags,137137 ) -> Result<NonNull<[u8]>, AllocError> {138138+ let layout = Kmalloc::aligned_layout(layout);139139+138140 // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`.139141 
unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) }140142 }···178176 old_layout: Layout,179177 flags: Flags,180178 ) -> Result<NonNull<[u8]>, AllocError> {179179+ // `KVmalloc` may use the `Kmalloc` backend, hence we have to enforce a `Kmalloc`180180+ // compatible layout.181181+ let layout = Kmalloc::aligned_layout(layout);182182+181183 // TODO: Support alignments larger than PAGE_SIZE.182184 if layout.align() > bindings::PAGE_SIZE {183185 pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n");
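`Kmalloc::aligned_layout()` relies on Rust's `Layout::pad_to_align()`, which rounds the size up to the next multiple of the alignment so that the slab allocator's size-based alignment guarantee applies to the returned object. The rounding itself is the standard power-of-two round-up, sketched here in C:

```c
#include <stddef.h>

/* Round size up to a multiple of align (align must be a power of two),
 * mirroring what Layout::pad_to_align() does for the Kmalloc backend. */
static size_t pad_to_align(size_t size, size_t align)
{
	return (size + align - 1) & ~(align - 1);
}
```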
+11
rust/kernel/alloc/allocator_test.rs
···2222pub type Vmalloc = Kmalloc;2323pub type KVmalloc = Kmalloc;24242525+impl Cmalloc {2626+ /// Returns a [`Layout`] that makes [`Kmalloc`] fulfill the requested size and alignment of2727+ /// `layout`.2828+ pub fn aligned_layout(layout: Layout) -> Layout {2929+ // Note that `layout.size()` (after padding) is guaranteed to be a multiple of3030+ // `layout.align()` which together with the slab guarantees means that `Kmalloc` will return3131+ // a properly aligned object (see comments in `kmalloc()` for more information).3232+ layout.pad_to_align()3333+ }3434+}3535+2536extern "C" {2637 #[link_name = "aligned_alloc"]2738 fn libc_aligned_alloc(align: usize, size: usize) -> *mut crate::ffi::c_void;
+184-24
rust/kernel/device.rs
···15151616pub mod property;17171818-/// A reference-counted device.1818+/// The core representation of a device in the kernel's driver model.1919///2020-/// This structure represents the Rust abstraction for a C `struct device`. This implementation2121-/// abstracts the usage of an already existing C `struct device` within Rust code that we get2222-/// passed from the C side.2020+/// This structure represents the Rust abstraction for a C `struct device`. A [`Device`] can either2121+/// exist as temporary reference (see also [`Device::from_raw`]), which is only valid within a2222+/// certain scope or as [`ARef<Device>`], owning a dedicated reference count.2323///2424-/// An instance of this abstraction can be obtained temporarily or permanent.2424+/// # Device Types2525///2626-/// A temporary one is bound to the lifetime of the C `struct device` pointer used for creation.2727-/// A permanent instance is always reference-counted and hence not restricted by any lifetime2828-/// boundaries.2626+/// A [`Device`] can represent either a bus device or a class device.2927///3030-/// For subsystems it is recommended to create a permanent instance to wrap into a subsystem3131-/// specific device structure (e.g. `pci::Device`). This is useful for passing it to drivers in3232-/// `T::probe()`, such that a driver can store the `ARef<Device>` (equivalent to storing a3333-/// `struct device` pointer in a C driver) for arbitrary purposes, e.g. allocating DMA coherent3434-/// memory.2828+/// ## Bus Devices2929+///3030+/// A bus device is a [`Device`] that is associated with a physical or virtual bus. Examples of3131+/// buses include PCI, USB, I2C, and SPI. Devices attached to a bus are registered with a specific3232+/// bus type, which facilitates matching devices with appropriate drivers based on IDs or other3333+/// identifying information. 
Bus devices are visible in sysfs under `/sys/bus/<bus-name>/devices/`.3434+///3535+/// ## Class Devices3636+///3737+/// A class device is a [`Device`] that is associated with a logical category of functionality3838+/// rather than a physical bus. Examples of classes include block devices, network interfaces, sound3939+/// cards, and input devices. Class devices are grouped under a common class and exposed to4040+/// userspace via entries in `/sys/class/<class-name>/`.4141+///4242+/// # Device Context4343+///4444+/// [`Device`] references are generic over a [`DeviceContext`], which represents the type state of4545+/// a [`Device`].4646+///4747+/// As the name indicates, this type state represents the context of the scope the [`Device`]4848+/// reference is valid in. For instance, the [`Bound`] context guarantees that the [`Device`] is4949+/// bound to a driver for the entire duration of the existence of a [`Device<Bound>`] reference.5050+///5151+/// Other [`DeviceContext`] types besides [`Bound`] are [`Normal`], [`Core`] and [`CoreInternal`].5252+///5353+/// Unless selected otherwise [`Device`] defaults to the [`Normal`] [`DeviceContext`], which by5454+/// itself has no additional requirements.5555+///5656+/// It is always up to the caller of [`Device::from_raw`] to select the correct [`DeviceContext`]5757+/// type for the corresponding scope the [`Device`] reference is created in.5858+///5959+/// All [`DeviceContext`] types other than [`Normal`] are intended to be used with6060+/// [bus devices](#bus-devices) only.6161+///6262+/// # Implementing Bus Devices6363+///6464+/// This section provides a guideline to implement bus specific devices, such as [`pci::Device`] or6565+/// [`platform::Device`].6666+///6767+/// A bus specific device should be defined as follows.6868+///6969+/// ```ignore7070+/// #[repr(transparent)]7171+/// pub struct Device<Ctx: device::DeviceContext = device::Normal>(7272+/// Opaque<bindings::bus_device_type>,7373+/// PhantomData<Ctx>,7474+/// 
);7575+/// ```7676+///7777+/// Since devices are reference counted, [`AlwaysRefCounted`] should be implemented for `Device`7878+/// (i.e. `Device<Normal>`). Note that [`AlwaysRefCounted`] must not be implemented for any other7979+/// [`DeviceContext`], since all other device context types are only valid within a certain scope.8080+///8181+/// In order to be able to implement the [`DeviceContext`] dereference hierarchy, bus device8282+/// implementations should call the [`impl_device_context_deref`] macro as shown below.8383+///8484+/// ```ignore8585+/// // SAFETY: `Device` is a transparent wrapper of a type that doesn't depend on `Device`'s8686+/// // generic argument.8787+/// kernel::impl_device_context_deref!(unsafe { Device });8888+/// ```8989+///9090+/// In order to convert from a any [`Device<Ctx>`] to [`ARef<Device>`], bus devices can implement9191+/// the following macro call.9292+///9393+/// ```ignore9494+/// kernel::impl_device_context_into_aref!(Device);9595+/// ```9696+///9797+/// Bus devices should also implement the following [`AsRef`] implementation, such that users can9898+/// easily derive a generic [`Device`] reference.9999+///100100+/// ```ignore101101+/// impl<Ctx: device::DeviceContext> AsRef<device::Device<Ctx>> for Device<Ctx> {102102+/// fn as_ref(&self) -> &device::Device<Ctx> {103103+/// ...104104+/// }105105+/// }106106+/// ```107107+///108108+/// # Implementing Class Devices109109+///110110+/// Class device implementations require less infrastructure and depend slightly more on the111111+/// specific subsystem.112112+///113113+/// An example implementation for a class device could look like this.114114+///115115+/// ```ignore116116+/// #[repr(C)]117117+/// pub struct Device<T: class::Driver> {118118+/// dev: Opaque<bindings::class_device_type>,119119+/// data: T::Data,120120+/// }121121+/// ```122122+///123123+/// This class device uses the sub-classing pattern to embed the driver's private data within the124124+/// allocation of the 
class device. For this to be possible the class device is generic over the125125+/// class specific `Driver` trait implementation.126126+///127127+/// Just like any device, class devices are reference counted and should hence implement128128+/// [`AlwaysRefCounted`] for `Device`.129129+///130130+/// Class devices should also implement the following [`AsRef`] implementation, such that users can131131+/// easily derive a generic [`Device`] reference.132132+///133133+/// ```ignore134134+/// impl<T: class::Driver> AsRef<device::Device> for Device<T> {135135+/// fn as_ref(&self) -> &device::Device {136136+/// ...137137+/// }138138+/// }139139+/// ```140140+///141141+/// An example for a class device implementation is [`drm::Device`].35142///36143/// # Invariants37144///···14942///15043/// `bindings::device::release` is valid to be called from any thread, hence `ARef<Device>` can be15144/// dropped from any thread.4545+///4646+/// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted4747+/// [`drm::Device`]: kernel::drm::Device4848+/// [`impl_device_context_deref`]: kernel::impl_device_context_deref4949+/// [`pci::Device`]: kernel::pci::Device5050+/// [`platform::Device`]: kernel::platform::Device15251#[repr(transparent)]15352pub struct Device<Ctx: DeviceContext = Normal>(Opaque<bindings::device>, PhantomData<Ctx>);15453···424311// synchronization in `struct device`.425312unsafe impl Sync for Device {}426313427427-/// Marker trait for the context of a bus specific device.314314+/// Marker trait for the context or scope of a bus specific device.428315///429429-/// Some functions of a bus specific device should only be called from a certain context, i.e. 
bus430430-/// callbacks, such as `probe()`.316316+/// [`DeviceContext`] is a marker trait for types representing the context of a bus specific317317+/// [`Device`].431318///432432-/// This is the marker trait for structures representing the context of a bus specific device.319319+/// The specific device context types are: [`CoreInternal`], [`Core`], [`Bound`] and [`Normal`].320320+///321321+/// [`DeviceContext`] types are hierarchical, which means that there is a strict hierarchy that322322+/// defines which [`DeviceContext`] type can be derived from another. For instance, any323323+/// [`Device<Core>`] can dereference to a [`Device<Bound>`].324324+///325325+/// The following enumeration illustrates the dereference hierarchy of [`DeviceContext`] types.326326+///327327+/// - [`CoreInternal`] => [`Core`] => [`Bound`] => [`Normal`]328328+///329329+/// Bus devices can automatically implement the dereference hierarchy by using330330+/// [`impl_device_context_deref`].331331+///332332+/// Note that the guarantee for a [`Device`] reference to have a certain [`DeviceContext`] comes333333+/// from the specific scope the [`Device`] reference is valid in.334334+///335335+/// [`impl_device_context_deref`]: kernel::impl_device_context_deref433336pub trait DeviceContext: private::Sealed {}434337435435-/// The [`Normal`] context is the context of a bus specific device when it is not an argument of436436-/// any bus callback.338338+/// The [`Normal`] context is the default [`DeviceContext`] of any [`Device`].339339+///340340+/// The normal context does not indicate any specific context. Any `Device<Ctx>` is also a valid341341+/// [`Device<Normal>`]. 
It is the only [`DeviceContext`] for which it is valid to implement342342+/// [`AlwaysRefCounted`] for.343343+///344344+/// [`AlwaysRefCounted`]: kernel::types::AlwaysRefCounted437345pub struct Normal;438346439439-/// The [`Core`] context is the context of a bus specific device when it is supplied as argument of440440-/// any of the bus callbacks, such as `probe()`.347347+/// The [`Core`] context is the context of a bus specific device when it appears as argument of348348+/// any bus specific callback, such as `probe()`.349349+///350350+/// The core context indicates that the [`Device<Core>`] reference's scope is limited to the bus351351+/// callback it appears in. It is intended to be used for synchronization purposes. Bus device352352+/// implementations can implement methods for [`Device<Core>`], such that they can only be called353353+/// from bus callbacks.441354pub struct Core;442355443443-/// Semantically the same as [`Core`] but reserved for internal usage of the corresponding bus356356+/// Semantically the same as [`Core`], but reserved for internal usage of the corresponding bus444357/// abstraction.358358+///359359+/// The internal core context is intended to be used in exactly the same way as the [`Core`]360360+/// context, with the difference that this [`DeviceContext`] is internal to the corresponding bus361361+/// abstraction.362362+///363363+/// This context mainly exists to share generic [`Device`] infrastructure that should only be called364364+/// from bus callbacks with bus abstractions, but without making them accessible for drivers.445365pub struct CoreInternal;446366447447-/// The [`Bound`] context is the context of a bus specific device reference when it is guaranteed to448448-/// be bound for the duration of its lifetime.367367+/// The [`Bound`] context is the [`DeviceContext`] of a bus specific device when it is guaranteed to368368+/// be bound to a driver.369369+///370370+/// The bound context indicates that for the entire duration of the 
lifetime of a [`Device<Bound>`]371371+/// reference, the [`Device`] is guaranteed to be bound to a driver.372372+///373373+/// Some APIs, such as [`dma::CoherentAllocation`] or [`Devres`] rely on the [`Device`] to be bound,374374+/// which can be proven with the [`Bound`] device context.375375+///376376+/// Any abstraction that can guarantee a scope where the corresponding bus device is bound, should377377+/// provide a [`Device<Bound>`] reference to its users for this scope. This allows users to benefit378378+/// from optimizations for accessing device resources, see also [`Devres::access`].379379+///380380+/// [`Devres`]: kernel::devres::Devres381381+/// [`Devres::access`]: kernel::devres::Devres::access382382+/// [`dma::CoherentAllocation`]: kernel::dma::CoherentAllocation449383pub struct Bound;450384451385mod private {
+18-9
rust/kernel/devres.rs
···115115 /// Contains all the fields shared with [`Self::callback`].116116 // TODO: Replace with `UnsafePinned`, once available.117117 //118118- // Subsequently, the `drop_in_place()` in `Devres::drop` and the explicit `Send` and `Sync'119119- // impls can be removed.118118+ // Subsequently, the `drop_in_place()` in `Devres::drop` and `Devres::new` as well as the119119+ // explicit `Send` and `Sync' impls can be removed.120120 #[pin]121121 inner: Opaque<Inner<T>>,122122+ _add_action: (),122123}123124124125impl<T: Send> Devres<T> {···141140 dev: dev.into(),142141 callback,143142 // INVARIANT: `inner` is properly initialized.144144- inner <- {143143+ inner <- Opaque::pin_init(try_pin_init!(Inner {144144+ devm <- Completion::new(),145145+ revoke <- Completion::new(),146146+ data <- Revocable::new(data),147147+ })),148148+ // TODO: Replace with "initializer code blocks" [1] once available.149149+ //150150+ // [1] https://github.com/Rust-for-Linux/pin-init/pull/69151151+ _add_action: {145152 // SAFETY: `this` is a valid pointer to uninitialized memory.146153 let inner = unsafe { &raw mut (*this.as_ptr()).inner };147154···161152 // live at least as long as the returned `impl PinInit<Self, Error>`.162153 to_result(unsafe {163154 bindings::devm_add_action(dev.as_raw(), Some(callback), inner.cast())164164- })?;155155+ }).inspect_err(|_| {156156+ let inner = Opaque::cast_into(inner);165157166166- Opaque::pin_init(try_pin_init!(Inner {167167- devm <- Completion::new(),168168- revoke <- Completion::new(),169169- data <- Revocable::new(data),170170- }))158158+ // SAFETY: `inner` is a valid pointer to an `Inner<T>` and valid for both reads159159+ // and writes.160160+ unsafe { core::ptr::drop_in_place(inner) };161161+ })?;171162 },172163 })173164 }
+87-2
rust/kernel/driver.rs
···2233//! Generic support for drivers of different buses (e.g., PCI, Platform, Amba, etc.).44//!55-//! Each bus / subsystem is expected to implement [`RegistrationOps`], which allows drivers to66-//! register using the [`Registration`] class.55+//! This documentation describes how to implement a bus specific driver API and how to align it with66+//! the design of (bus specific) devices.77+//!88+//! Note: Readers are expected to know the content of the documentation of [`Device`] and99+//! [`DeviceContext`].1010+//!1111+//! # Driver Trait1212+//!1313+//! The main driver interface is defined by a bus specific driver trait. For instance:1414+//!1515+//! ```ignore1616+//! pub trait Driver: Send {1717+//! /// The type holding information about each device ID supported by the driver.1818+//! type IdInfo: 'static;1919+//!2020+//! /// The table of OF device ids supported by the driver.2121+//! const OF_ID_TABLE: Option<of::IdTable<Self::IdInfo>> = None;2222+//!2323+//! /// The table of ACPI device ids supported by the driver.2424+//! const ACPI_ID_TABLE: Option<acpi::IdTable<Self::IdInfo>> = None;2525+//!2626+//! /// Driver probe.2727+//! fn probe(dev: &Device<device::Core>, id_info: &Self::IdInfo) -> Result<Pin<KBox<Self>>>;2828+//!2929+//! /// Driver unbind (optional).3030+//! fn unbind(dev: &Device<device::Core>, this: Pin<&Self>) {3131+//! let _ = (dev, this);3232+//! }3333+//! }3434+//! ```3535+//!3636+//! For specific examples see [`auxiliary::Driver`], [`pci::Driver`] and [`platform::Driver`].3737+//!3838+//! The `probe()` callback should return a `Result<Pin<KBox<Self>>>`, i.e. the driver's private3939+//! data. The bus abstraction should store the pointer in the corresponding bus device. The generic4040+//! [`Device`] infrastructure provides common helpers for this purpose on its4141+//! [`Device<CoreInternal>`] implementation.4242+//!4343+//! All driver callbacks should provide a reference to the driver's private data. Once the driver4444+//! 
is unbound from the device, the bus abstraction should take back the ownership of the driver's4545+//! private data from the corresponding [`Device`] and [`drop`] it.4646+//!4747+//! All driver callbacks should provide a [`Device<Core>`] reference (see also [`device::Core`]).4848+//!4949+//! # Adapter5050+//!5151+//! The adapter implementation of a bus represents the abstraction layer between the C bus5252+//! callbacks and the Rust bus callbacks. It therefore has to be generic over an implementation of5353+//! the [driver trait](#driver-trait).5454+//!5555+//! ```ignore5656+//! pub struct Adapter<T: Driver>;5757+//! ```5858+//!5959+//! There's a common [`Adapter`] trait that can be implemented to inherit common driver6060+//! infrastructure, such as finding the ID info from an [`of::IdTable`] or [`acpi::IdTable`].6161+//!6262+//! # Driver Registration6363+//!6464+//! In order to register C driver types (such as `struct platform_driver`) the [adapter](#adapter)6565+//! should implement the [`RegistrationOps`] trait.6666+//!6767+//! This trait implementation can be used to create the actual registration with the common6868+//! [`Registration`] type.6969+//!7070+//! Typically, bus abstractions want to provide a bus specific `module_bus_driver!` macro, which7171+//! creates a kernel module with exactly one [`Registration`] for the bus specific adapter.7272+//!7373+//! The generic driver infrastructure provides a helper for this with the [`module_driver`] macro.7474+//!7575+//! # Device IDs7676+//!7777+//! Besides the common device ID types, such as [`of::DeviceId`] and [`acpi::DeviceId`], most buses7878+//! may need to implement their own device ID types.7979+//!8080+//! For this purpose the generic infrastructure in [`device_id`] should be used.8181+//!8282+//! [`auxiliary::Driver`]: kernel::auxiliary::Driver8383+//! [`Core`]: device::Core8484+//! [`Device`]: device::Device8585+//! [`Device<Core>`]: device::Device<device::Core>8686+//! 
[`Device<CoreInternal>`]: device::Device<device::CoreInternal>8787+//! [`DeviceContext`]: device::DeviceContext8888+//! [`device_id`]: kernel::device_id8989+//! [`module_driver`]: kernel::module_driver9090+//! [`pci::Driver`]: kernel::pci::Driver9191+//! [`platform::Driver`]: kernel::platform::Driver792893use crate::error::{Error, Result};994use crate::{acpi, device, of, str::CStr, try_pin_init, types::Opaque, ThisModule};
+25-7
rust/kernel/drm/device.rs
···55//! C header: [`include/linux/drm/drm_device.h`](srctree/include/linux/drm/drm_device.h)6677use crate::{88+ alloc::allocator::Kmalloc,89 bindings, device, drm,910 drm::driver::AllocImpl,1011 error::from_err_ptr,···1312 prelude::*,1413 types::{ARef, AlwaysRefCounted, Opaque},1514};1616-use core::{mem, ops::Deref, ptr, ptr::NonNull};1515+use core::{alloc::Layout, mem, ops::Deref, ptr, ptr::NonNull};17161817#[cfg(CONFIG_DRM_LEGACY)]1918macro_rules! drm_legacy_fields {···5453///5554/// `self.dev` is a valid instance of a `struct device`.5655#[repr(C)]5757-#[pin_data]5856pub struct Device<T: drm::Driver> {5957 dev: Opaque<bindings::drm_device>,6060- #[pin]6158 data: T::Data,6259}6360···95969697 /// Create a new `drm::Device` for a `drm::Driver`.9798 pub fn new(dev: &device::Device, data: impl PinInit<T::Data, Error>) -> Result<ARef<Self>> {9999+ // `__drm_dev_alloc` uses `kmalloc()` to allocate memory, hence ensure a `kmalloc()`100100+ // compatible `Layout`.101101+ let layout = Kmalloc::aligned_layout(Layout::new::<Self>());102102+98103 // SAFETY:99104 // - `VTABLE`, as a `const` is pinned to the read-only section of the compilation,100105 // - `dev` is valid by its type invarants,···106103 bindings::__drm_dev_alloc(107104 dev.as_raw(),108105 &Self::VTABLE,109109- mem::size_of::<Self>(),106106+ layout.size(),110107 mem::offset_of!(Self, dev),111108 )112109 }···120117 // - `raw_data` is a valid pointer to uninitialized memory.121118 // - `raw_data` will not move until it is dropped.122119 unsafe { data.__pinned_init(raw_data) }.inspect_err(|_| {123123- // SAFETY: `__drm_dev_alloc()` was successful, hence `raw_drm` must be valid and the120120+ // SAFETY: `raw_drm` is a valid pointer to `Self`, given that `__drm_dev_alloc` was121121+ // successful.122122+ let drm_dev = unsafe { Self::into_drm_device(raw_drm) };123123+124124+ // SAFETY: `__drm_dev_alloc()` was successful, hence `drm_dev` must be valid and the124125 // refcount must be non-zero.125125- unsafe { 
bindings::drm_dev_put(ptr::addr_of_mut!((*raw_drm.as_ptr()).dev).cast()) };126126+ unsafe { bindings::drm_dev_put(drm_dev) };126127 })?;127128128129 // SAFETY: The reference count is one, and now we take ownership of that reference as a···145138 // SAFETY: By the safety requirements of this function `ptr` is a valid pointer to a146139 // `struct drm_device` embedded in `Self`.147140 unsafe { crate::container_of!(Opaque::cast_from(ptr), Self, dev) }.cast_mut()141141+ }142142+143143+ /// # Safety144144+ ///145145+ /// `ptr` must be a valid pointer to `Self`.146146+ unsafe fn into_drm_device(ptr: NonNull<Self>) -> *mut bindings::drm_device {147147+ // SAFETY: By the safety requirements of this function, `ptr` is a valid pointer to `Self`.148148+ unsafe { &raw mut (*ptr.as_ptr()).dev }.cast()148149 }149150150151 /// Not intended to be called externally, except via declare_drm_ioctls!()···204189 }205190206191 unsafe fn dec_ref(obj: NonNull<Self>) {192192+ // SAFETY: `obj` is a valid pointer to `Self`.193193+ let drm_dev = unsafe { Self::into_drm_device(obj) };194194+207195 // SAFETY: The safety requirements guarantee that the refcount is non-zero.208208- unsafe { bindings::drm_dev_put(obj.cast().as_ptr()) };196196+ unsafe { bindings::drm_dev_put(drm_dev) };209197 }210198}211199
+1-1
rust/kernel/faux.rs
···44//!55//! This module provides bindings for working with faux devices in kernel modules.66//!77-//! C header: [`include/linux/device/faux.h`]77+//! C header: [`include/linux/device/faux.h`](srctree/include/linux/device/faux.h)8899use crate::{bindings, device, error::code::*, prelude::*};1010use core::ptr::{addr_of_mut, null, null_mut, NonNull};
···393393 return 0;394394}395395396396-static int cs35l63_sdw_kick_divider(struct cs35l56_private *cs35l56,397397- struct sdw_slave *peripheral)398398-{399399- unsigned int curr_scale_reg, next_scale_reg;400400- int curr_scale, next_scale, ret;401401-402402- if (!cs35l56->base.init_done)403403- return 0;404404-405405- if (peripheral->bus->params.curr_bank) {406406- curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1;407407- next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0;408408- } else {409409- curr_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B0;410410- next_scale_reg = SDW_SCP_BUSCLOCK_SCALE_B1;411411- }412412-413413- /*414414- * Current clock scale value must be different to new value.415415- * Modify current to guarantee this. If next still has the dummy416416- * value we wrote when it was current, the core code has not set417417- * a new scale so restore its original good value418418- */419419- curr_scale = sdw_read_no_pm(peripheral, curr_scale_reg);420420- if (curr_scale < 0) {421421- dev_err(cs35l56->base.dev, "Failed to read current clock scale: %d\n", curr_scale);422422- return curr_scale;423423- }424424-425425- next_scale = sdw_read_no_pm(peripheral, next_scale_reg);426426- if (next_scale < 0) {427427- dev_err(cs35l56->base.dev, "Failed to read next clock scale: %d\n", next_scale);428428- return next_scale;429429- }430430-431431- if (next_scale == CS35L56_SDW_INVALID_BUS_SCALE) {432432- next_scale = cs35l56->old_sdw_clock_scale;433433- ret = sdw_write_no_pm(peripheral, next_scale_reg, next_scale);434434- if (ret < 0) {435435- dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n",436436- ret);437437- return ret;438438- }439439- }440440-441441- cs35l56->old_sdw_clock_scale = curr_scale;442442- ret = sdw_write_no_pm(peripheral, curr_scale_reg, CS35L56_SDW_INVALID_BUS_SCALE);443443- if (ret < 0) {444444- dev_err(cs35l56->base.dev, "Failed to modify current clock scale: %d\n", ret);445445- return ret;446446- }447447-448448- dev_dbg(cs35l56->base.dev, "Next bus 
scale: %#x\n", next_scale);449449-450450- return 0;451451-}452452-453453-static int cs35l56_sdw_bus_config(struct sdw_slave *peripheral,454454- struct sdw_bus_params *params)455455-{456456- struct cs35l56_private *cs35l56 = dev_get_drvdata(&peripheral->dev);457457-458458- if ((cs35l56->base.type == 0x63) && (cs35l56->base.rev < 0xa1))459459- return cs35l63_sdw_kick_divider(cs35l56, peripheral);460460-461461- return 0;462462-}463463-464396static int __maybe_unused cs35l56_sdw_clk_stop(struct sdw_slave *peripheral,465397 enum sdw_clk_stop_mode mode,466398 enum sdw_clk_stop_type type)···408476 .read_prop = cs35l56_sdw_read_prop,409477 .interrupt_callback = cs35l56_sdw_interrupt,410478 .update_status = cs35l56_sdw_update_status,411411- .bus_config = cs35l56_sdw_bus_config,412479#ifdef DEBUG413480 .clk_stop = cs35l56_sdw_clk_stop,414481#endif
+26-3
sound/soc/codecs/cs35l56-shared.c
···838838};839839EXPORT_SYMBOL_NS_GPL(cs35l56_calibration_controls, "SND_SOC_CS35L56_SHARED");840840841841+static const struct cirrus_amp_cal_controls cs35l63_calibration_controls = {842842+ .alg_id = 0xbf210,843843+ .mem_region = WMFW_ADSP2_YM,844844+ .ambient = "CAL_AMBIENT",845845+ .calr = "CAL_R",846846+ .status = "CAL_STATUS",847847+ .checksum = "CAL_CHECKSUM",848848+};849849+841850int cs35l56_get_calibration(struct cs35l56_base *cs35l56_base)842851{843852 u64 silicon_uid = 0;···921912void cs35l56_log_tuning(struct cs35l56_base *cs35l56_base, struct cs_dsp *cs_dsp)922913{923914 __be32 pid, sid, tid;915915+ unsigned int alg_id;924916 int ret;917917+918918+ switch (cs35l56_base->type) {919919+ case 0x54:920920+ case 0x56:921921+ case 0x57:922922+ alg_id = 0x9f212;923923+ break;924924+ default:925925+ alg_id = 0xbf212;926926+ break;927927+ }925928926929 scoped_guard(mutex, &cs_dsp->pwr_lock) {927930 ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_PRJCT_ID",928928- WMFW_ADSP2_XM, 0x9f212),931931+ WMFW_ADSP2_XM, alg_id),929932 0, &pid, sizeof(pid));930933 if (!ret)931934 ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_CHNNL_ID",932932- WMFW_ADSP2_XM, 0x9f212),935935+ WMFW_ADSP2_XM, alg_id),933936 0, &sid, sizeof(sid));934937 if (!ret)935938 ret = cs_dsp_coeff_read_ctrl(cs_dsp_get_ctl(cs_dsp, "AS_SNPSHT_ID",936936- WMFW_ADSP2_XM, 0x9f212),939939+ WMFW_ADSP2_XM, alg_id),937940 0, &tid, sizeof(tid));938941 }939942···995974 case 0x35A54:996975 case 0x35A56:997976 case 0x35A57:977977+ cs35l56_base->calibration_controls = &cs35l56_calibration_controls;998978 break;999979 case 0x35A630:980980+ cs35l56_base->calibration_controls = &cs35l63_calibration_controls;1000981 devid = devid >> 4;1001982 break;1002983 default:
+1-1
sound/soc/codecs/cs35l56.c
···695695 return ret;696696697697 ret = cs_amp_write_cal_coeffs(&cs35l56->dsp.cs_dsp,698698- &cs35l56_calibration_controls,698698+ cs35l56->base.calibration_controls,699699 &cs35l56->base.cal_data);700700701701 wm_adsp_stop(&cs35l56->dsp);
···193193 if (stat.st_size < BOOTCONFIG_FOOTER_SIZE)194194 return 0;195195196196- if (lseek(fd, -BOOTCONFIG_MAGIC_LEN, SEEK_END) < 0)196196+ if (lseek(fd, -(off_t)BOOTCONFIG_MAGIC_LEN, SEEK_END) < 0)197197 return pr_errno("Failed to lseek for magic", -errno);198198199199 if (read(fd, magic, BOOTCONFIG_MAGIC_LEN) < 0)···203203 if (memcmp(magic, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN) != 0)204204 return 0;205205206206- if (lseek(fd, -BOOTCONFIG_FOOTER_SIZE, SEEK_END) < 0)206206+ if (lseek(fd, -(off_t)BOOTCONFIG_FOOTER_SIZE, SEEK_END) < 0)207207 return pr_errno("Failed to lseek for size", -errno);208208209209 if (read(fd, &size, sizeof(uint32_t)) < 0)
+28
tools/include/linux/args.h
···11+/* SPDX-License-Identifier: GPL-2.0 */22+33+#ifndef _LINUX_ARGS_H44+#define _LINUX_ARGS_H55+66+/*77+ * How do these macros work?88+ *99+ * In __COUNT_ARGS() _0 to _12 are just placeholders from the start1010+ * in order to make sure _n is positioned over the correct number1111+ * from 12 to 0 (depending on X, which is a variadic argument list).1212+ * They serve no purpose other than occupying a position. Since each1313+ * macro parameter must have a distinct identifier, those identifiers1414+ * are as good as any.1515+ *1616+ * In COUNT_ARGS() we use actual integers, so __COUNT_ARGS() returns1717+ * that as _n.1818+ */1919+2020+/* This counts to 15. Any more, it will return 16th argument. */2121+#define __COUNT_ARGS(_0, _1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _n, X...) _n2222+#define COUNT_ARGS(X...) __COUNT_ARGS(, ##X, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)2323+2424+/* Concatenate two parameters, but allow them to be expanded beforehand. */2525+#define __CONCAT(a, b) a ## b2626+#define CONCATENATE(a, b) __CONCAT(a, b)2727+2828+#endif /* _LINUX_ARGS_H */
+23
tools/objtool/arch/loongarch/special.c
···2727 struct table_info *next_table;2828 unsigned long tmp_insn_offset;2929 unsigned long tmp_rodata_offset;3030+ bool is_valid_list = false;30313132 rsec = find_section_by_name(file->elf, ".rela.discard.tablejump_annotate");3233 if (!rsec)···3635 INIT_LIST_HEAD(&table_list);37363837 for_each_reloc(rsec, reloc) {3838+ if (reloc->sym->sec->rodata)3939+ continue;4040+4141+ if (strcmp(insn->sec->name, reloc->sym->sec->name))4242+ continue;4343+3944 orig_table = malloc(sizeof(struct table_info));4045 if (!orig_table) {4146 WARN("malloc failed");···56495750 if (reloc_idx(reloc) + 1 == sec_num_entries(rsec))5851 break;5252+5353+ if (strcmp(insn->sec->name, (reloc + 1)->sym->sec->name)) {5454+ list_for_each_entry(orig_table, &table_list, jump_info) {5555+ if (orig_table->insn_offset == insn->offset) {5656+ is_valid_list = true;5757+ break;5858+ }5959+ }6060+6161+ if (!is_valid_list) {6262+ list_del_init(&table_list);6363+ continue;6464+ }6565+6666+ break;6767+ }5968 }60696170 list_for_each_entry(orig_table, &table_list, jump_info) {
+4-3
tools/power/cpupower/man/cpupower-set.1
···8181.RE82828383.PP8484-\-\-turbo\-boost, \-t8484+\-\-turbo\-boost, \-\-boost, \-t8585.RS 48686-This option is used to enable or disable the turbo boost feature on8787-supported Intel and AMD processors.8686+This option is used to enable or disable the boost feature on8787+supported Intel and AMD processors, and other boost supported systems.8888+(The --boost option is an alias for the --turbo-boost option)88898990This option takes as parameter either \fB1\fP to enable, or \fB0\fP to disable the feature.9091
+15-1
tools/power/cpupower/utils/cpufreq-info.c
···128128 /* ToDo: Make this more global */129129 unsigned long pstates[MAX_HW_PSTATES] = {0,};130130131131- ret = cpufreq_has_boost_support(cpu, &support, &active, &b_states);131131+ ret = cpufreq_has_x86_boost_support(cpu, &support, &active, &b_states);132132 if (ret) {133133 printf(_("Error while evaluating Boost Capabilities"134134 " on CPU %d -- are you root?\n"), cpu);···204204 return 0;205205}206206207207+static int get_boost_mode_generic(unsigned int cpu)208208+{209209+ bool active;210210+211211+ if (!cpufreq_has_generic_boost_support(&active)) {212212+ printf(_(" boost state support:\n"));213213+ printf(_(" Active: %s\n"), active ? _("yes") : _("no"));214214+ }215215+216216+ return 0;217217+}218218+207219/* --boost / -b */208220209221static int get_boost_mode(unsigned int cpu)···226214 cpupower_cpu_info.vendor == X86_VENDOR_HYGON ||227215 cpupower_cpu_info.vendor == X86_VENDOR_INTEL)228216 return get_boost_mode_x86(cpu);217217+ else218218+ get_boost_mode_generic(cpu);229219230220 freqs = cpufreq_get_boost_frequencies(cpu);231221 if (freqs) {
···103103104104/* cpuid and cpuinfo helpers **************************/105105106106+int cpufreq_has_generic_boost_support(bool *active);107107+int cpupower_set_turbo_boost(int turbo_boost);108108+106109/* X86 ONLY ****************************************/107110#if defined(__i386__) || defined(__x86_64__)108111···121118122119extern int cpupower_set_epp(unsigned int cpu, char *epp);123120extern int cpupower_set_amd_pstate_mode(char *mode);124124-extern int cpupower_set_turbo_boost(int turbo_boost);125121126122/* Read/Write msr ****************************/127123···141139142140/* AMD HW pstate decoding **************************/143141144144-extern int cpufreq_has_boost_support(unsigned int cpu, int *support,145145- int *active, int * states);142142+int cpufreq_has_x86_boost_support(unsigned int cpu, int *support,143143+ int *active, int *states);146144147145/* AMD P-State stuff **************************/148146bool cpupower_amd_pstate_enabled(void);···183181{ return -1; };184182static inline int cpupower_set_amd_pstate_mode(char *mode)185183{ return -1; };186186-static inline int cpupower_set_turbo_boost(int turbo_boost)187187-{ return -1; };188184189185/* Read/Write msr ****************************/190186191191-static inline int cpufreq_has_boost_support(unsigned int cpu, int *support,192192- int *active, int * states)187187+static inline int cpufreq_has_x86_boost_support(unsigned int cpu, int *support,188188+ int *active, int *states)193189{ return -1; }194190195191static inline bool cpupower_amd_pstate_enabled(void)
+54-22
tools/power/cpupower/utils/helpers/misc.c
···88#include "helpers/helpers.h"99#include "helpers/sysfs.h"1010#include "cpufreq.h"1111+#include "cpupower_intern.h"11121213#if defined(__i386__) || defined(__x86_64__)13141414-#include "cpupower_intern.h"1515-1615#define MSR_AMD_HWCR 0xc001001517161818-int cpufreq_has_boost_support(unsigned int cpu, int *support, int *active,1919- int *states)1717+int cpufreq_has_x86_boost_support(unsigned int cpu, int *support, int *active,1818+ int *states)2019{2120 int ret;2221 unsigned long long val;···123124 return 0;124125}125126126126-int cpupower_set_turbo_boost(int turbo_boost)127127-{128128- char path[SYSFS_PATH_MAX];129129- char linebuf[2] = {};130130-131131- snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost");132132-133133- if (!is_valid_path(path))134134- return -1;135135-136136- snprintf(linebuf, sizeof(linebuf), "%d", turbo_boost);137137-138138- if (cpupower_write_sysfs(path, linebuf, 2) <= 0)139139- return -1;140140-141141- return 0;142142-}143143-144127bool cpupower_amd_pstate_enabled(void)145128{146129 char *driver = cpufreq_get_driver(0);···140159}141160142161#endif /* #if defined(__i386__) || defined(__x86_64__) */162162+163163+int cpufreq_has_generic_boost_support(bool *active)164164+{165165+ char path[SYSFS_PATH_MAX];166166+ char linebuf[2] = {};167167+ unsigned long val;168168+ char *endp;169169+170170+ snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost");171171+172172+ if (!is_valid_path(path))173173+ return -EACCES;174174+175175+ if (cpupower_read_sysfs(path, linebuf, 2) <= 0)176176+ return -EINVAL;177177+178178+ val = strtoul(linebuf, &endp, 0);179179+ if (endp == linebuf || errno == ERANGE)180180+ return -EINVAL;181181+182182+ switch (val) {183183+ case 0:184184+ *active = false;185185+ break;186186+ case 1:187187+ *active = true;188188+ break;189189+ default:190190+ return -EINVAL;191191+ }192192+193193+ return 0;194194+}143195144196/* get_cpustate145197 *···272258 ((unsigned int)(speed % 1000) / 100));273259 }274260 
}261261+}262262+263263+int cpupower_set_turbo_boost(int turbo_boost)264264+{265265+ char path[SYSFS_PATH_MAX];266266+ char linebuf[2] = {};267267+268268+ snprintf(path, sizeof(path), PATH_TO_CPU "cpufreq/boost");269269+270270+ if (!is_valid_path(path))271271+ return -1;272272+273273+ snprintf(linebuf, sizeof(linebuf), "%d", turbo_boost);274274+275275+ if (cpupower_write_sysfs(path, linebuf, 2) <= 0)276276+ return -1;277277+278278+ return 0;275279}
-3
tools/testing/selftests/coredump/stackdump_test.c
···446446 if (info.coredump_mask & PIDFD_COREDUMPED)447447 goto out;448448449449- if (read(fd_coredump, &c, 1) < 1)450450- goto out;451451-452449 exit_code = EXIT_SUCCESS;453450out:454451 if (fd_peer_pidfd >= 0)
···11+#!/bin/bash22+# SPDX-License-Identifier: GPL-2.033+#44+# Test if a bond interface works with lacp_active=off.55+66+# shellcheck disable=SC203477+REQUIRE_MZ=no88+NUM_NETIFS=099+lib_dir=$(dirname "$0")1010+# shellcheck disable=SC10911111+source "$lib_dir"/../../../net/forwarding/lib.sh1212+1313+# shellcheck disable=SC23171414+check_port_state()1515+{1616+ local netns=$11717+ local port=$21818+ local state=$31919+2020+ ip -n "${netns}" -d -j link show "$port" | \2121+ jq -e ".[].linkinfo.info_slave_data.ad_actor_oper_port_state_str | index(\"${state}\") != null" > /dev/null2222+}2323+2424+check_pkt_count()2525+{2626+ RET=02727+ local ns="$1"2828+ local iface="$2"2929+3030+ # wait 65s, one per 30s3131+ slowwait_for_counter 65 2 tc_rule_handle_stats_get \3232+ "dev ${iface} egress" 101 ".packets" "-n ${ns}" &> /dev/null3333+}3434+3535+setup() {3636+ setup_ns c_ns s_ns3737+3838+ # shellcheck disable=SC21543939+ ip -n "${c_ns}" link add eth0 type veth peer name eth0 netns "${s_ns}"4040+ ip -n "${c_ns}" link add eth1 type veth peer name eth1 netns "${s_ns}"4141+4242+ # Add tc filter to count the pkts4343+ tc -n "${c_ns}" qdisc add dev eth0 clsact4444+ tc -n "${c_ns}" filter add dev eth0 egress handle 101 protocol 0x8809 matchall action pass4545+ tc -n "${s_ns}" qdisc add dev eth1 clsact4646+ tc -n "${s_ns}" filter add dev eth1 egress handle 101 protocol 0x8809 matchall action pass4747+4848+ ip -n "${s_ns}" link add bond0 type bond mode 802.3ad lacp_active on lacp_rate fast4949+ ip -n "${s_ns}" link set eth0 master bond05050+ ip -n "${s_ns}" link set eth1 master bond05151+5252+ ip -n "${c_ns}" link add bond0 type bond mode 802.3ad lacp_active off lacp_rate fast5353+ ip -n "${c_ns}" link set eth0 master bond05454+ ip -n "${c_ns}" link set eth1 master bond05555+5656+}5757+5858+trap cleanup_all_ns EXIT5959+setup6060+6161+# The bond will send 2 lacpdu pkts during init time, let's wait at least 2s6262+# after interface up6363+ip -n "${c_ns}" link set bond0 up6464+sleep 
26565+6666+# 1. The passive side shouldn't send LACPDU.6767+check_pkt_count "${c_ns}" "eth0" && RET=16868+log_test "802.3ad lacp_active off" "init port"6969+7070+ip -n "${s_ns}" link set bond0 up7171+# 2. The passive side should not have the 'active' flag.7272+RET=07373+slowwait 2 check_port_state "${c_ns}" "eth0" "active" && RET=17474+log_test "802.3ad lacp_active off" "port state active"7575+7676+# 3. The active side should have the 'active' flag.7777+RET=07878+slowwait 2 check_port_state "${s_ns}" "eth0" "active" || RET=17979+log_test "802.3ad lacp_active on" "port state active"8080+8181+# 4. Make sure the connection is not expired.8282+RET=08383+slowwait 5 check_port_state "${s_ns}" "eth0" "distributing"8484+slowwait 10 check_port_state "${s_ns}" "eth0" "expired" && RET=18585+log_test "bond 802.3ad lacp_active off" "port connection"8686+8787+# After testing, disconnect one port on each side to check the state.8888+ip -n "${s_ns}" link set eth0 nomaster8989+ip -n "${s_ns}" link set eth0 up9090+ip -n "${c_ns}" link set eth1 nomaster9191+ip -n "${c_ns}" link set eth1 up9292+# Due to Periodic Machine and Rx Machine state change, the bond will still9393+# send lacpdu pkts in a few seconds. sleep at lease 5s to make sure9494+# negotiation finished9595+sleep 59696+9797+# 5. The active side should keep sending LACPDU.9898+check_pkt_count "${s_ns}" "eth1" || RET=19999+log_test "bond 802.3ad lacp_active on" "port pkt after disconnect"100100+101101+# 6. The passive side shouldn't send LACPDU anymore.102102+check_pkt_count "${c_ns}" "eth0" && RET=1103103+log_test "bond 802.3ad lacp_active off" "port pkt after disconnect"104104+105105+exit "$EXIT_STATUS"
···107107 #endif108108#endif109109110110+#ifndef __NR_open_tree_attr111111+ #if defined __alpha__112112+ #define __NR_open_tree_attr 577113113+ #elif defined _MIPS_SIM114114+ #if _MIPS_SIM == _MIPS_SIM_ABI32 /* o32 */115115+ #define __NR_open_tree_attr (467 + 4000)116116+ #endif117117+ #if _MIPS_SIM == _MIPS_SIM_NABI32 /* n32 */118118+ #define __NR_open_tree_attr (467 + 6000)119119+ #endif120120+ #if _MIPS_SIM == _MIPS_SIM_ABI64 /* n64 */121121+ #define __NR_open_tree_attr (467 + 5000)122122+ #endif123123+ #elif defined __ia64__124124+ #define __NR_open_tree_attr (467 + 1024)125125+ #else126126+ #define __NR_open_tree_attr 467127127+ #endif128128+#endif129129+110130#ifndef MOUNT_ATTR_IDMAP111131#define MOUNT_ATTR_IDMAP 0x00100000112132#endif···139119 struct mount_attr *attr, size_t size)140120{141121 return syscall(__NR_mount_setattr, dfd, path, flags, attr, size);122122+}123123+124124+static inline int sys_open_tree_attr(int dfd, const char *path, unsigned int flags,125125+ struct mount_attr *attr, size_t size)126126+{127127+ return syscall(__NR_open_tree_attr, dfd, path, flags, attr, size);142128}143129144130static ssize_t write_nointr(int fd, const void *buf, size_t count)···12481222 attr.userns_fd = get_userns_fd(0, 10000, 10000);12491223 ASSERT_GE(attr.userns_fd, 0);12501224 ASSERT_NE(sys_mount_setattr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0);12251225+ /*12261226+ * Make sure that open_tree_attr() without OPEN_TREE_CLONE is not a way12271227+ * to bypass this mount_setattr() restriction.12281228+ */12291229+ ASSERT_LT(sys_open_tree_attr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0);12301230+12511231 ASSERT_EQ(close(attr.userns_fd), 0);12521232 ASSERT_EQ(close(open_tree_fd), 0);12531233}···12871255 ASSERT_GE(attr.userns_fd, 0);12881256 ASSERT_NE(sys_mount_setattr(open_tree_fd, "", AT_EMPTY_PATH, &attr,12891257 sizeof(attr)), 0);12581258+ /*12591259+ * Make sure that open_tree_attr() without OPEN_TREE_CLONE is not a way12601260+ * to 
bypass this mount_setattr() restriction.12611261+ */12621262+ ASSERT_LT(sys_open_tree_attr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0);12631263+12901264 ASSERT_EQ(close(attr.userns_fd), 0);12911265 ASSERT_EQ(close(open_tree_fd), 0);12921266}···13591321 ASSERT_EQ(close(open_tree_fd), 0);13601322}1361132313241324+static bool expected_uid_gid(int dfd, const char *path, int flags,13251325+ uid_t expected_uid, gid_t expected_gid)13261326+{13271327+ int ret;13281328+ struct stat st;13291329+13301330+ ret = fstatat(dfd, path, &st, flags);13311331+ if (ret < 0)13321332+ return false;13331333+13341334+ return st.st_uid == expected_uid && st.st_gid == expected_gid;13351335+}13361336+13621337/**13631338 * Validate that currently changing the idmapping of an idmapped mount fails.13641339 */···13811330 struct mount_attr attr = {13821331 .attr_set = MOUNT_ATTR_IDMAP,13831332 };13331333+13341334+ ASSERT_TRUE(expected_uid_gid(-EBADF, "/mnt/D", 0, 0, 0));1384133513851336 if (!mount_setattr_supported())13861337 SKIP(return, "mount_setattr syscall not supported");···14011348 AT_EMPTY_PATH, &attr, sizeof(attr)), 0);14021349 ASSERT_EQ(close(attr.userns_fd), 0);1403135013511351+ EXPECT_FALSE(expected_uid_gid(open_tree_fd, ".", 0, 0, 0));13521352+ EXPECT_TRUE(expected_uid_gid(open_tree_fd, ".", 0, 10000, 10000));13531353+14041354 /* Change idmapping on a detached mount that is already idmapped. 
*/14051355 attr.userns_fd = get_userns_fd(0, 20000, 10000);14061356 ASSERT_GE(attr.userns_fd, 0);14071357 ASSERT_NE(sys_mount_setattr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0);13581358+ /*13591359+ * Make sure that open_tree_attr() without OPEN_TREE_CLONE is not a way13601360+ * to bypass this mount_setattr() restriction.13611361+ */13621362+ EXPECT_LT(sys_open_tree_attr(open_tree_fd, "", AT_EMPTY_PATH, &attr, sizeof(attr)), 0);13631363+ EXPECT_FALSE(expected_uid_gid(open_tree_fd, ".", 0, 20000, 20000));13641364+ EXPECT_TRUE(expected_uid_gid(open_tree_fd, ".", 0, 10000, 10000));13651365+14081366 ASSERT_EQ(close(attr.userns_fd), 0);14091367 ASSERT_EQ(close(open_tree_fd), 0);14101410-}14111411-14121412-static bool expected_uid_gid(int dfd, const char *path, int flags,14131413- uid_t expected_uid, gid_t expected_gid)14141414-{14151415- int ret;14161416- struct stat st;14171417-14181418- ret = fstatat(dfd, path, &st, flags);14191419- if (ret < 0)14201420- return false;14211421-14221422- return st.st_uid == expected_uid && st.st_gid == expected_gid;14231368}1424136914251370TEST_F(mount_setattr_idmapped, idmap_mount_tree_invalid)
+29
tools/testing/selftests/net/forwarding/router.sh
···1818# | 2001:db8:1::1/64 2001:db8:2::1/64 |1919# | |2020# +-----------------------------------------------------------------+2121+#2222+#shellcheck disable=SC2034 # SC doesn't see our uses of global variables21232224ALL_TESTS="2325 ping_ipv4···2927 ipv4_sip_equal_dip3028 ipv6_sip_equal_dip3129 ipv4_dip_link_local3030+ ipv4_sip_link_local3231"33323433NUM_NETIFS=4···331328 ip route del 169.254.1.0/24 dev $rp2332329 ip neigh del 169.254.1.1 lladdr 00:11:22:33:44:55 dev $rp2333330 tc filter del dev $rp2 egress protocol ip pref 1 handle 101 flower331331+}332332+333333+ipv4_sip_link_local()334334+{335335+ local sip=169.254.1.1336336+337337+ RET=0338338+339339+ # Disable rpfilter to prevent packets to be dropped because of it.340340+ sysctl_set net.ipv4.conf.all.rp_filter 0341341+ sysctl_set net.ipv4.conf."$rp1".rp_filter 0342342+343343+ tc filter add dev "$rp2" egress protocol ip pref 1 handle 101 \344344+ flower src_ip "$sip" action pass345345+346346+ $MZ "$h1" -t udp "sp=54321,dp=12345" -c 5 -d 1msec -b "$rp1mac" \347347+ -A "$sip" -B 198.51.100.2 -q348348+349349+ tc_check_packets "dev $rp2 egress" 101 5350350+ check_err $? "Packets were dropped"351351+352352+ log_test "IPv4 source IP is link-local"353353+354354+ tc filter del dev "$rp2" egress protocol ip pref 1 handle 101 flower355355+ sysctl_restore net.ipv4.conf."$rp1".rp_filter356356+ sysctl_restore net.ipv4.conf.all.rp_filter334357}335358336359trap cleanup EXIT
+3-2
tools/testing/selftests/net/mptcp/mptcp_connect.c
···183183 struct addrinfo *hints,184184 struct addrinfo **res)185185{186186-again:187187- int err = getaddrinfo(node, service, hints, res);186186+ int err;188187188188+again:189189+ err = getaddrinfo(node, service, hints, res);189190 if (err) {190191 const char *errstr;191192
+3-2
tools/testing/selftests/net/mptcp/mptcp_inq.c
···7575 struct addrinfo *hints,7676 struct addrinfo **res)7777{7878-again:7979- int err = getaddrinfo(node, service, hints, res);7878+ int err;80798080+again:8181+ err = getaddrinfo(node, service, hints, res);8182 if (err) {8283 const char *errstr;8384
+1
tools/testing/selftests/net/mptcp/mptcp_join.sh
···38423842 # remove and re-add38433843 if reset_with_events "delete re-add signal" &&38443844 mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then38453845+ ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=038453846 pm_nl_set_limits $ns1 0 338463847 pm_nl_set_limits $ns2 3 338473848 pm_nl_add_endpoint $ns1 10.0.2.1 id 1 flags signal
+3-2
tools/testing/selftests/net/mptcp/mptcp_sockopt.c
···162162 struct addrinfo *hints,163163 struct addrinfo **res)164164{165165-again:166166- int err = getaddrinfo(node, service, hints, res);165165+ int err;167166167167+again:168168+ err = getaddrinfo(node, service, hints, res);168169 if (err) {169170 const char *errstr;170171
+1
tools/testing/selftests/net/mptcp/pm_netlink.sh
···198198check "get_limits" "${default_limits}" "subflows above hard limit"199199200200set_limits 8 8201201+flush_endpoint ## to make sure it doesn't affect the limits201202check "get_limits" "$(format_limits 8 8)" "set limits"202203203204flush_endpoint
···186186 ]187187 },188188 {189189+ "id": "34c0",190190+ "name": "Test TBF with HHF Backlog Accounting in gso_skb case against underflow",191191+ "category": [192192+ "qdisc",193193+ "tbf",194194+ "hhf"195195+ ],196196+ "plugins": {197197+ "requires": [198198+ "nsPlugin"199199+ ]200200+ },201201+ "setup": [202202+ "$IP link set dev $DUMMY up || true",203203+ "$IP addr add 10.10.11.10/24 dev $DUMMY || true",204204+ "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",205205+ "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 hhf limit 1000",206206+ [207207+ "ping -I $DUMMY -c2 10.10.11.11",208208+ 1209209+ ],210210+ "$TC qdisc change dev $DUMMY handle 2: parent 1:1 hhf limit 1"211211+ ],212212+ "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",213213+ "expExitCode": "0",214214+ "verifyCmd": "$TC -s qdisc show dev $DUMMY",215215+ "matchPattern": "backlog 0b 0p",216216+ "matchCount": "1",217217+ "teardown": [218218+ "$TC qdisc del dev $DUMMY handle 1: root"219219+ ]220220+ },221221+ {222222+ "id": "fd68",223223+ "name": "Test TBF with CODEL Backlog Accounting in gso_skb case against underflow",224224+ "category": [225225+ "qdisc",226226+ "tbf",227227+ "codel"228228+ ],229229+ "plugins": {230230+ "requires": [231231+ "nsPlugin"232232+ ]233233+ },234234+ "setup": [235235+ "$IP link set dev $DUMMY up || true",236236+ "$IP addr add 10.10.11.10/24 dev $DUMMY || true",237237+ "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",238238+ "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 codel limit 1000",239239+ [240240+ "ping -I $DUMMY -c2 10.10.11.11",241241+ 1242242+ ],243243+ "$TC qdisc change dev $DUMMY handle 2: parent 1:1 codel limit 1"244244+ ],245245+ "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",246246+ "expExitCode": "0",247247+ "verifyCmd": "$TC -s qdisc show dev $DUMMY",248248+ "matchPattern": "backlog 0b 0p",249249+ "matchCount": "1",250250+ "teardown": [251251+ "$TC qdisc 
del dev $DUMMY handle 1: root"252252+ ]253253+ },254254+ {255255+ "id": "514e",256256+ "name": "Test TBF with PIE Backlog Accounting in gso_skb case against underflow",257257+ "category": [258258+ "qdisc",259259+ "tbf",260260+ "pie"261261+ ],262262+ "plugins": {263263+ "requires": [264264+ "nsPlugin"265265+ ]266266+ },267267+ "setup": [268268+ "$IP link set dev $DUMMY up || true",269269+ "$IP addr add 10.10.11.10/24 dev $DUMMY || true",270270+ "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",271271+ "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 pie limit 1000",272272+ [273273+ "ping -I $DUMMY -c2 10.10.11.11",274274+ 1275275+ ],276276+ "$TC qdisc change dev $DUMMY handle 2: parent 1:1 pie limit 1"277277+ ],278278+ "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",279279+ "expExitCode": "0",280280+ "verifyCmd": "$TC -s qdisc show dev $DUMMY",281281+ "matchPattern": "backlog 0b 0p",282282+ "matchCount": "1",283283+ "teardown": [284284+ "$TC qdisc del dev $DUMMY handle 1: root"285285+ ]286286+ },287287+ {288288+ "id": "6c97",289289+ "name": "Test TBF with FQ Backlog Accounting in gso_skb case against underflow",290290+ "category": [291291+ "qdisc",292292+ "tbf",293293+ "fq"294294+ ],295295+ "plugins": {296296+ "requires": [297297+ "nsPlugin"298298+ ]299299+ },300300+ "setup": [301301+ "$IP link set dev $DUMMY up || true",302302+ "$IP addr add 10.10.11.10/24 dev $DUMMY || true",303303+ "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",304304+ "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 fq limit 1000",305305+ [306306+ "ping -I $DUMMY -c2 10.10.11.11",307307+ 1308308+ ],309309+ "$TC qdisc change dev $DUMMY handle 2: parent 1:1 fq limit 1"310310+ ],311311+ "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",312312+ "expExitCode": "0",313313+ "verifyCmd": "$TC -s qdisc show dev $DUMMY",314314+ "matchPattern": "backlog 0b 0p",315315+ "matchCount": "1",316316+ "teardown": 
[317317+ "$TC qdisc del dev $DUMMY handle 1: root"318318+ ]319319+ },320320+ {321321+ "id": "5d0b",322322+ "name": "Test TBF with FQ_CODEL Backlog Accounting in gso_skb case against underflow",323323+ "category": [324324+ "qdisc",325325+ "tbf",326326+ "fq_codel"327327+ ],328328+ "plugins": {329329+ "requires": [330330+ "nsPlugin"331331+ ]332332+ },333333+ "setup": [334334+ "$IP link set dev $DUMMY up || true",335335+ "$IP addr add 10.10.11.10/24 dev $DUMMY || true",336336+ "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",337337+ "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 fq_codel limit 1000",338338+ [339339+ "ping -I $DUMMY -c2 10.10.11.11",340340+ 1341341+ ],342342+ "$TC qdisc change dev $DUMMY handle 2: parent 1:1 fq_codel limit 1"343343+ ],344344+ "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",345345+ "expExitCode": "0",346346+ "verifyCmd": "$TC -s qdisc show dev $DUMMY",347347+ "matchPattern": "backlog 0b 0p",348348+ "matchCount": "1",349349+ "teardown": [350350+ "$TC qdisc del dev $DUMMY handle 1: root"351351+ ]352352+ },353353+ {354354+ "id": "21c3",355355+ "name": "Test TBF with FQ_PIE Backlog Accounting in gso_skb case against underflow",356356+ "category": [357357+ "qdisc",358358+ "tbf",359359+ "fq_pie"360360+ ],361361+ "plugins": {362362+ "requires": [363363+ "nsPlugin"364364+ ]365365+ },366366+ "setup": [367367+ "$IP link set dev $DUMMY up || true",368368+ "$IP addr add 10.10.11.10/24 dev $DUMMY || true",369369+ "$TC qdisc add dev $DUMMY root handle 1: tbf rate 8bit burst 100b latency 100ms",370370+ "$TC qdisc replace dev $DUMMY handle 2: parent 1:1 fq_pie limit 1000",371371+ [372372+ "ping -I $DUMMY -c2 10.10.11.11",373373+ 1374374+ ],375375+ "$TC qdisc change dev $DUMMY handle 2: parent 1:1 fq_pie limit 1"376376+ ],377377+ "cmdUnderTest": "$TC qdisc del dev $DUMMY handle 2: parent 1:1",378378+ "expExitCode": "0",379379+ "verifyCmd": "$TC -s qdisc show dev $DUMMY",380380+ "matchPattern": "backlog 
0b 0p",381381+ "matchCount": "1",382382+ "teardown": [383383+ "$TC qdisc del dev $DUMMY handle 1: root"384384+ ]385385+ },386386+ {189387 "id": "a4bb",190388 "name": "Test FQ_CODEL with HTB parent - force packet drop with empty queue",191389 "category": [
+2-2
tools/testing/selftests/ublk/kublk.c
···1400140014011401 if (!((1ULL << i) & features))14021402 continue;14031403- if (i < sizeof(feat_map) / sizeof(feat_map[0]))14031403+ if (i < ARRAY_SIZE(feat_map))14041404 feat = feat_map[i];14051405 else14061406 feat = "unknown";···14771477 printf("\tdefault: nr_queues=2(max 32), depth=128(max 1024), dev_id=-1(auto allocation)\n");14781478 printf("\tdefault: nthreads=nr_queues");1479147914801480- for (i = 0; i < sizeof(tgt_ops_list) / sizeof(tgt_ops_list[0]); i++) {14801480+ for (i = 0; i < ARRAY_SIZE(tgt_ops_list); i++) {14811481 const struct ublk_tgt_ops *ops = tgt_ops_list[i];1482148214831483 if (ops->usage)
+4
tools/testing/shared/linux/idr.h
···11+/* Avoid duplicate definitions due to system headers. */22+#ifdef __CONCAT33+#undef __CONCAT44+#endif15#include "../../../../include/linux/idr.h"
+8
tools/tracing/latency/Makefile.config
···11# SPDX-License-Identifier: GPL-2.0-only2233+include $(srctree)/tools/scripts/utilities.mak44+35STOP_ERROR :=66+77+ifndef ($(NO_LIBTRACEEVENT),1)88+ ifeq ($(call get-executable,$(PKG_CONFIG)),)99+ $(error Error: $(PKG_CONFIG) needed by libtraceevent/libtracefs is missing on this system, please install it)1010+ endif1111+endif412513define lib_setup614 $(eval LIB_INCLUDES += $(shell sh -c "$(PKG_CONFIG) --cflags lib$(1)"))
+8
tools/tracing/rtla/Makefile.config
···11# SPDX-License-Identifier: GPL-2.0-only2233+include $(srctree)/tools/scripts/utilities.mak44+35STOP_ERROR :=4657LIBTRACEEVENT_MIN_VERSION = 1.568LIBTRACEFS_MIN_VERSION = 1.699+1010+ifndef ($(NO_LIBTRACEEVENT),1)1111+ ifeq ($(call get-executable,$(PKG_CONFIG)),)1212+ $(error Error: $(PKG_CONFIG) needed by libtraceevent/libtracefs is missing on this system, please install it)1313+ endif1414+endif715816define lib_setup917 $(eval LIB_INCLUDES += $(shell sh -c "$(PKG_CONFIG) --cflags lib$(1)"))