···
     description: Possible handler of SuperSpeed signals retiming
     type: boolean

-  port:
-    $ref: /schemas/graph.yaml#/$defs/port-base
-    description:
-      A port node to link the device to a TypeC controller for the purpose of
-      handling altmode muxing and orientation switching.
-
-    properties:
-      endpoint:
-        $ref: /schemas/graph.yaml#/$defs/endpoint-base
-        unevaluatedProperties: false
-        properties:
-          data-lanes:
-            $ref: /schemas/types.yaml#/definitions/uint32-array
-            minItems: 1
-            maxItems: 8
-            uniqueItems: true
-            items:
-              maximum: 8
-
-  ports:
-    $ref: /schemas/graph.yaml#/properties/ports
-    properties:
-      port@0:
-        $ref: /schemas/graph.yaml#/properties/port
-        description:
-          Super Speed (SS) Output endpoint to the Type-C connector
-
-      port@1:
-        $ref: /schemas/graph.yaml#/$defs/port-base
-        description:
-          Super Speed (SS) Input endpoint from the Super-Speed PHY
-        unevaluatedProperties: false
-
-        properties:
-          endpoint:
-            $ref: /schemas/graph.yaml#/$defs/endpoint-base
-            unevaluatedProperties: false
-            properties:
-              data-lanes:
-                $ref: /schemas/types.yaml#/definitions/uint32-array
-                minItems: 1
-                maxItems: 8
-                uniqueItems: true
-                items:
-                  maximum: 8
-
-oneOf:
-  - required:
-      - port
-  - required:
-      - ports
-
 additionalProperties: true
+4-3
Documentation/networking/ax25.rst
···
 There is a mailing list for discussing Linux amateur radio matters
 called linux-hams@vger.kernel.org. To subscribe to it, send a message to
-majordomo@vger.kernel.org with the words "subscribe linux-hams" in the body
-of the message, the subject field is ignored. You don't need to be
-subscribed to post but of course that means you might miss an answer.
+linux-hams+subscribe@vger.kernel.org or use the web interface at
+https://vger.kernel.org. The subject and body of the message are
+ignored. You don't need to be subscribed to post but of course that
+means you might miss an answer.
···

 Checksum offload header fields are in big endian format.

+Packet format::
+
   Bit             0 - 6        7            8-15          16-31
   Function        Header Type  Next Header  Checksum Valid  Reserved

 Header Type is to indicate the type of header, this usually is set to CHECKSUM

 Header types
-= ==========================================
+
+= ===============
 0 Reserved
 1 Reserved
 2 checksum header
+= ===============

 Checksum Valid is to indicate whether the header checksum is valid. Value of 1
 implies that checksum is calculated on this packet and is valid, value of 0
···
 packets and either ACK the MAP command or deliver the IP packet to the
 network stack as needed

-MAP header|IP Packet|Optional padding|MAP header|IP Packet|Optional padding....
+Packet format::

-MAP header|IP Packet|Optional padding|MAP header|Command Packet|Optional pad...
+  MAP header|IP Packet|Optional padding|MAP header|IP Packet|Optional padding....
+
+  MAP header|IP Packet|Optional padding|MAP header|Command Packet|Optional pad...

 3. Userspace configuration
 ==========================
+2-4
Documentation/networking/net_failover.rst
···
 received only on the 'failover' device.

 Below is the patch snippet used with 'cloud-ifupdown-helper' script found on
-Debian cloud images:
+Debian cloud images::

-::
 @@ -27,6 +27,8 @@ do_setup() {
      local working="$cfgdir/.$INTERFACE"
      local final="$cfgdir/$INTERFACE"
···

 The following script is executed on the destination hypervisor once migration
 completes, and it reattaches the VF to the VM and brings down the virtio-net
-interface.
+interface::

-::
 # reattach-vf.sh
 #!/bin/bash

+17-2
MAINTAINERS
···

 ARM AND ARM64 SoC SUB-ARCHITECTURES (COMMON PARTS)
 M:	Arnd Bergmann <arnd@arndb.de>
+M:	Krzysztof Kozlowski <krzk@kernel.org>
+M:	Alexandre Belloni <alexandre.belloni@bootlin.com>
+M:	Linus Walleij <linus.walleij@linaro.org>
+R:	Drew Fustini <fustini@kernel.org>
 L:	linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
 L:	soc@lists.linux.dev
 S:	Maintained
···
 ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS
 M:	Corentin Chary <corentin.chary@gmail.com>
 M:	Luke D. Jones <luke@ljones.dev>
+M:	Denis Benato <benato.denis96@gmail.com>
 L:	platform-driver-x86@vger.kernel.org
 S:	Maintained
 W:	https://asus-linux.org/
···
 F:	include/uapi/linux/io_uring/
 F:	io_uring/

+IO_URING ZCRX
+M:	Pavel Begunkov <asml.silence@gmail.com>
+L:	io-uring@vger.kernel.org
+L:	netdev@vger.kernel.org
+T:	git https://github.com/isilence/linux.git zcrx/for-next
+T:	git git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux.git
+S:	Maintained
+F:	io_uring/zcrx.*
+
 IPMI SUBSYSTEM
 M:	Corey Minyard <corey@minyard.net>
 L:	openipmi-developer@lists.sourceforge.net (moderated for non-subscribers)
···

 LINUX-NEXT TREE
 M:	Stephen Rothwell <sfr@canb.auug.org.au>
+M:	Mark Brown <broonie@kernel.org>
 L:	linux-next@vger.kernel.org
 S:	Supported
 B:	mailto:linux-next@vger.kernel.org and the appropriate development tree
···
 F:	drivers/vfio/cdx/*

 VFIO DRIVER
-M:	Alex Williamson <alex.williamson@redhat.com>
+M:	Alex Williamson <alex@shazbot.org>
 L:	kvm@vger.kernel.org
 S:	Maintained
 T:	git https://github.com/awilliam/linux-vfio.git
···
 F:	drivers/media/test-drivers/vimc/*

 VIRT LIB
-M:	Alex Williamson <alex.williamson@redhat.com>
+M:	Alex Williamson <alex@shazbot.org>
 M:	Paolo Bonzini <pbonzini@redhat.com>
 L:	kvm@vger.kernel.org
 S:	Supported
···
 	    from != folio_page(src, 0))
 		return;

-	WARN_ON_ONCE(!folio_try_hugetlb_mte_tagging(dst));
+	folio_try_hugetlb_mte_tagging(dst);

 	/*
 	 * Populate tags for all subpages.
···
 		}
 		folio_set_hugetlb_mte_tagged(dst);
 	} else if (page_mte_tagged(from)) {
-		/* It's a new page, shouldn't have been tagged yet */
-		WARN_ON_ONCE(!try_page_mte_tagging(to));
+		/*
+		 * Most of the time it's a new page that shouldn't have been
+		 * tagged yet. However, folio migration can end up reusing the
+		 * same page without untagging it. Ignore the warning if the
+		 * page is already tagged.
+		 */
+		try_page_mte_tagging(to);
 		mte_copy_page_tags(kto, kfrom);
 		set_page_mte_tagged(to);
+1-1
arch/csky/abiv2/cacheflush.c
···

 	folio = page_folio(pfn_to_page(pfn));

-	if (test_and_set_bit(PG_dcache_clean, &folio->flags))
+	if (test_and_set_bit(PG_dcache_clean, &folio->flags.f))
 		return;

 	icache_inv_range(address, address + nr*PAGE_SIZE);
+2-2
arch/csky/abiv2/inc/abi/cacheflush.h
···

 static inline void flush_dcache_folio(struct folio *folio)
 {
-	if (test_bit(PG_dcache_clean, &folio->flags))
-		clear_bit(PG_dcache_clean, &folio->flags);
+	if (test_bit(PG_dcache_clean, &folio->flags.f))
+		clear_bit(PG_dcache_clean, &folio->flags.f);
 }
 #define flush_dcache_folio flush_dcache_folio

+2-2
arch/mips/mti-malta/malta-setup.c
···
 		.name = "keyboard",
 		.start = 0x60,
 		.end = 0x6f,
-		.flags = IORESOURCE_IO | IORESOURCE_BUSY
+		.flags = IORESOURCE_IO
 	},
 	{
 		.name = "dma page reg",
···

 	/* Request I/O space for devices used on the Malta board. */
 	for (i = 0; i < ARRAY_SIZE(standard_io_resources); i++)
-		request_resource(&ioport_resource, standard_io_resources+i);
+		insert_resource(&ioport_resource, standard_io_resources + i);

 	/*
 	 * Enable DMA channel 4 (cascade channel) in the PIIX4 south bridge.
···
 	return __pgprot(prot);
 }

+#define pgprot_dmacoherent	pgprot_writecombine
+
 /*
  * Both Svade and Svadu control the hardware behavior when the PTE A/D bits need to be set. By
  * default the M-mode firmware enables the hardware updating scheme when only Svadu is present in
+6
arch/riscv/include/asm/vdso/arch_data.h
···

 	/* Boolean indicating all CPUs have the same static hwprobe values. */
 	__u8 homogeneous_cpus;
+
+	/*
+	 * A gate to check and see if the hwprobe data is actually ready, as
+	 * probing is deferred to avoid boot slowdowns.
+	 */
+	__u8 ready;
 };

 #endif /* __RISCV_ASM_VDSO_ARCH_DATA_H */
+1-3
arch/riscv/kernel/cpu.c
···
 		return -ENODEV;
 	}

-	if (!of_device_is_available(node)) {
-		pr_info("CPU with hartid=%lu is not available\n", *hart);
+	if (!of_device_is_available(node))
 		return -ENODEV;
-	}

 	if (of_property_read_string(node, "riscv,isa-base", &isa))
 		goto old_interface;
+2-2
arch/riscv/kernel/cpufeature.c
···
 {
 	int cpu;
 	u32 prev_vlenb = 0;
-	u32 vlenb;
+	u32 vlenb = 0;

-	/* Ignore thead,vlenb property if xtheavector is not enabled in the kernel */
+	/* Ignore thead,vlenb property if xtheadvector is not enabled in the kernel */
 	if (!IS_ENABLED(CONFIG_RISCV_ISA_XTHEADVECTOR))
 		return 0;

···
  * more details.
  */
 #include <linux/syscalls.h>
+#include <linux/completion.h>
+#include <linux/atomic.h>
+#include <linux/once.h>
 #include <asm/cacheflush.h>
 #include <asm/cpufeature.h>
 #include <asm/hwprobe.h>
···
 	u64 id = -1ULL;
 	bool first = true;
 	int cpu;
+
+	if (pair->key != RISCV_HWPROBE_KEY_MVENDORID &&
+	    pair->key != RISCV_HWPROBE_KEY_MIMPID &&
+	    pair->key != RISCV_HWPROBE_KEY_MARCHID)
+		goto out;

 	for_each_cpu(cpu, cpus) {
 		u64 cpu_id;
···
 		}
 	}

+out:
 	pair->value = id;
 }
···
 	return 0;
 }

-static int do_riscv_hwprobe(struct riscv_hwprobe __user *pairs,
-			    size_t pair_count, size_t cpusetsize,
-			    unsigned long __user *cpus_user,
-			    unsigned int flags)
-{
-	if (flags & RISCV_HWPROBE_WHICH_CPUS)
-		return hwprobe_get_cpus(pairs, pair_count, cpusetsize,
-					cpus_user, flags);
-
-	return hwprobe_get_values(pairs, pair_count, cpusetsize,
-				  cpus_user, flags);
-}
-
 #ifdef CONFIG_MMU

-static int __init init_hwprobe_vdso_data(void)
+static DECLARE_COMPLETION(boot_probes_done);
+static atomic_t pending_boot_probes = ATOMIC_INIT(1);
+
+void riscv_hwprobe_register_async_probe(void)
+{
+	atomic_inc(&pending_boot_probes);
+}
+
+void riscv_hwprobe_complete_async_probe(void)
+{
+	if (atomic_dec_and_test(&pending_boot_probes))
+		complete(&boot_probes_done);
+}
+
+static int complete_hwprobe_vdso_data(void)
 {
 	struct vdso_arch_data *avd = vdso_k_arch_data;
 	u64 id_bitsmash = 0;
 	struct riscv_hwprobe pair;
 	int key;
+
+	if (unlikely(!atomic_dec_and_test(&pending_boot_probes)))
+		wait_for_completion(&boot_probes_done);

 	/*
 	 * Initialize vDSO data with the answers for the "all CPUs" case, to
···
 	 * vDSO should defer to the kernel for exotic cpu masks.
 	 */
 	avd->homogeneous_cpus = id_bitsmash != 0 && id_bitsmash != -1;
+
+	/*
+	 * Make sure all the VDSO values are visible before we look at them.
+	 * This pairs with the implicit "no speculatively visible accesses"
+	 * barrier in the VDSO hwprobe code.
+	 */
+	smp_wmb();
+	avd->ready = true;
+	return 0;
+}
+
+static int __init init_hwprobe_vdso_data(void)
+{
+	struct vdso_arch_data *avd = vdso_k_arch_data;
+
+	/*
+	 * Prevent the vDSO cached values from being used, as they're not ready
+	 * yet.
+	 */
+	avd->ready = false;
 	return 0;
 }

 arch_initcall_sync(init_hwprobe_vdso_data);

+#else
+
+static int complete_hwprobe_vdso_data(void) { return 0; }
+
 #endif /* CONFIG_MMU */
+
+static int do_riscv_hwprobe(struct riscv_hwprobe __user *pairs,
+			    size_t pair_count, size_t cpusetsize,
+			    unsigned long __user *cpus_user,
+			    unsigned int flags)
+{
+	DO_ONCE_SLEEPABLE(complete_hwprobe_vdso_data);
+
+	if (flags & RISCV_HWPROBE_WHICH_CPUS)
+		return hwprobe_get_cpus(pairs, pair_count, cpusetsize,
+					cpus_user, flags);
+
+	return hwprobe_get_values(pairs, pair_count, cpusetsize,
+				  cpus_user, flags);
+}

 SYSCALL_DEFINE5(riscv_hwprobe, struct riscv_hwprobe __user *, pairs,
 		size_t, pair_count, size_t, cpusetsize, unsigned long __user *,
+7-2
arch/riscv/kernel/unaligned_access_speed.c
···
 static int __init vec_check_unaligned_access_speed_all_cpus(void *unused __always_unused)
 {
 	schedule_on_each_cpu(check_vector_unaligned_access);
+	riscv_hwprobe_complete_async_probe();

 	return 0;
 }
···
 			per_cpu(vector_misaligned_access, cpu) = unaligned_vector_speed_param;
 	} else if (!check_vector_unaligned_access_emulated_all_cpus() &&
 		   IS_ENABLED(CONFIG_RISCV_PROBE_VECTOR_UNALIGNED_ACCESS)) {
-		kthread_run(vec_check_unaligned_access_speed_all_cpus,
-			    NULL, "vec_check_unaligned_access_speed_all_cpus");
+		riscv_hwprobe_register_async_probe();
+		if (IS_ERR(kthread_run(vec_check_unaligned_access_speed_all_cpus,
+				       NULL, "vec_check_unaligned_access_speed_all_cpus"))) {
+			pr_warn("Failed to create vec_unalign_check kthread\n");
+			riscv_hwprobe_complete_async_probe();
+		}
 	}

 	/*
+1-1
arch/riscv/kernel/vdso/hwprobe.c
···
 	 * homogeneous, then this function can handle requests for arbitrary
 	 * masks.
 	 */
-	if ((flags != 0) || (!all_cpus && !avd->homogeneous_cpus))
+	if (flags != 0 || (!all_cpus && !avd->homogeneous_cpus) || unlikely(!avd->ready))
 		return riscv_hwprobe(pairs, pair_count, cpusetsize, cpus, flags);

 	/* This is something we can handle, fill out the pairs. */
+3-8
arch/x86/kernel/cpu/bugs.c
···
 		break;
 	default:
 		if (retbleed_mitigation != RETBLEED_MITIGATION_STUFF) {
-			pr_err(RETBLEED_INTEL_MSG);
+			if (retbleed_mitigation != RETBLEED_MITIGATION_NONE)
+				pr_err(RETBLEED_INTEL_MSG);
+
 			retbleed_mitigation = RETBLEED_MITIGATION_NONE;
 		}
 	}
···
 	}
 }
 #endif
-
-static inline bool match_option(const char *arg, int arglen, const char *opt)
-{
-	int len = strlen(opt);
-
-	return len == arglen && !strncmp(arg, opt, len);
-}

 /* The kernel command line selection for spectre v2 */
 enum spectre_v2_mitigation_cmd {
+1-1
arch/x86/kernel/cpu/microcode/amd.c
···
 	}

 	switch (cur_rev >> 8) {
-	case 0x80012: return cur_rev <= 0x800126f; break;
+	case 0x80012: return cur_rev <= 0x8001277; break;
 	case 0x80082: return cur_rev <= 0x800820f; break;
 	case 0x83010: return cur_rev <= 0x830107c; break;
 	case 0x86001: return cur_rev <= 0x860010e; break;
+10-1
arch/x86/kernel/cpu/resctrl/monitor.c
···
 		r->mon.mbm_cfg_mask = ecx & MAX_EVT_CONFIG_BITS;
 	}

-	if (rdt_cpu_has(X86_FEATURE_ABMC)) {
+	/*
+	 * resctrl assumes a system that supports assignable counters can
+	 * switch to "default" mode. Ensure that there is a "default" mode
+	 * to switch to. This enforces a dependency between the independent
+	 * X86_FEATURE_ABMC and X86_FEATURE_CQM_MBM_TOTAL/X86_FEATURE_CQM_MBM_LOCAL
+	 * hardware features.
+	 */
+	if (rdt_cpu_has(X86_FEATURE_ABMC) &&
+	    (rdt_cpu_has(X86_FEATURE_CQM_MBM_TOTAL) ||
+	     rdt_cpu_has(X86_FEATURE_CQM_MBM_LOCAL))) {
 		r->mon.mbm_cntr_assignable = true;
 		cpuid_count(0x80000020, 5, &eax, &ebx, &ecx, &edx);
 		r->mon.num_mbm_cntrs = (ebx & GENMASK(15, 0)) + 1;
+10
block/blk-settings.c
···
 	if (!bi->interval_exp)
 		bi->interval_exp = ilog2(lim->logical_block_size);

+	/*
+	 * The PI generation / validation helpers do not expect intervals to
+	 * straddle multiple bio_vecs. Enforce alignment so that those are
+	 * never generated, and that each buffer is aligned as expected.
+	 */
+	if (bi->csum_type) {
+		lim->dma_alignment = max(lim->dma_alignment,
+					 (1U << bi->interval_exp) - 1);
+	}
+
 	return 0;
 }

+6
drivers/acpi/acpica/tbprint.c
···
 {
 	struct acpi_table_header local_header;

+#pragma GCC diagnostic push
+#if defined(__GNUC__) && __GNUC__ >= 11
+#pragma GCC diagnostic ignored "-Wstringop-overread"
+#endif
+
 	if (ACPI_COMPARE_NAMESEG(header->signature, ACPI_SIG_FACS)) {

 		/* FACS only has signature and length fields */
···
 			   local_header.asl_compiler_id,
 			   local_header.asl_compiler_revision));
 	}
+#pragma GCC diagnostic pop
 }
···
 	return 0;
 }

-/**
- * rimt_get_fwnode() - Retrieve fwnode associated with an RIMT node
- *
- * @node: RIMT table node to be looked-up
- *
- * Returns: fwnode_handle pointer on success, NULL on failure
- */
-static struct fwnode_handle *rimt_get_fwnode(struct acpi_rimt_node *node)
-{
-	struct fwnode_handle *fwnode = NULL;
-	struct rimt_fwnode *curr;
-
-	spin_lock(&rimt_fwnode_lock);
-	list_for_each_entry(curr, &rimt_fwnode_list, list) {
-		if (curr->rimt_node == node) {
-			fwnode = curr->fwnode;
-			break;
-		}
-	}
-	spin_unlock(&rimt_fwnode_lock);
-
-	return fwnode;
-}
-
 static acpi_status rimt_match_node_callback(struct acpi_rimt_node *node,
 					    void *context)
 {
···
 	return NULL;
 }

+/*
+ * RISC-V supports IOMMU as a PCI device or a platform device.
+ * When it is a platform device, there should be a namespace device as
+ * well along with RIMT. To create the link between RIMT information and
+ * the platform device, the IOMMU driver should register itself with the
+ * RIMT module. This is true for PCI based IOMMU as well.
+ */
+int rimt_iommu_register(struct device *dev)
+{
+	struct fwnode_handle *rimt_fwnode;
+	struct acpi_rimt_node *node;
+
+	node = rimt_scan_node(ACPI_RIMT_NODE_TYPE_IOMMU, dev);
+	if (!node) {
+		pr_err("Could not find IOMMU node in RIMT\n");
+		return -ENODEV;
+	}
+
+	if (dev_is_pci(dev)) {
+		rimt_fwnode = acpi_alloc_fwnode_static();
+		if (!rimt_fwnode)
+			return -ENOMEM;
+
+		rimt_fwnode->dev = dev;
+		if (!dev->fwnode)
+			dev->fwnode = rimt_fwnode;
+
+		rimt_set_fwnode(node, rimt_fwnode);
+	} else {
+		rimt_set_fwnode(node, dev->fwnode);
+	}
+
+	return 0;
+}
+
+#ifdef CONFIG_IOMMU_API
+
+/**
+ * rimt_get_fwnode() - Retrieve fwnode associated with an RIMT node
+ *
+ * @node: RIMT table node to be looked-up
+ *
+ * Returns: fwnode_handle pointer on success, NULL on failure
+ */
+static struct fwnode_handle *rimt_get_fwnode(struct acpi_rimt_node *node)
+{
+	struct fwnode_handle *fwnode = NULL;
+	struct rimt_fwnode *curr;
+
+	spin_lock(&rimt_fwnode_lock);
+	list_for_each_entry(curr, &rimt_fwnode_list, list) {
+		if (curr->rimt_node == node) {
+			fwnode = curr->fwnode;
+			break;
+		}
+	}
+	spin_unlock(&rimt_fwnode_lock);
+
+	return fwnode;
+}
+
 static bool rimt_pcie_rc_supports_ats(struct acpi_rimt_node *node)
 {
 	struct acpi_rimt_pcie_rc *pci_rc;
···

 	return NULL;
 }
-
-/*
- * RISC-V supports IOMMU as a PCI device or a platform device.
- * When it is a platform device, there should be a namespace device as
- * well along with RIMT. To create the link between RIMT information and
- * the platform device, the IOMMU driver should register itself with the
- * RIMT module. This is true for PCI based IOMMU as well.
- */
-int rimt_iommu_register(struct device *dev)
-{
-	struct fwnode_handle *rimt_fwnode;
-	struct acpi_rimt_node *node;
-
-	node = rimt_scan_node(ACPI_RIMT_NODE_TYPE_IOMMU, dev);
-	if (!node) {
-		pr_err("Could not find IOMMU node in RIMT\n");
-		return -ENODEV;
-	}
-
-	if (dev_is_pci(dev)) {
-		rimt_fwnode = acpi_alloc_fwnode_static();
-		if (!rimt_fwnode)
-			return -ENOMEM;
-
-		rimt_fwnode->dev = dev;
-		if (!dev->fwnode)
-			dev->fwnode = rimt_fwnode;
-
-		rimt_set_fwnode(node, rimt_fwnode);
-	} else {
-		rimt_set_fwnode(node, dev->fwnode);
-	}
-
-	return 0;
-}
-
-#ifdef CONFIG_IOMMU_API

 static struct acpi_rimt_node *rimt_node_map_id(struct acpi_rimt_node *node,
 					       u32 id_in, u32 *id_out,
+15-23
drivers/android/binder.c
···
 	} else {
 		if (!internal)
 			node->local_weak_refs++;
-		if (!node->has_weak_ref && list_empty(&node->work.entry)) {
-			if (target_list == NULL) {
-				pr_err("invalid inc weak node for %d\n",
-					node->debug_id);
-				return -EINVAL;
-			}
-			/*
-			 * See comment above
-			 */
+		if (!node->has_weak_ref && target_list && list_empty(&node->work.entry))
 			binder_enqueue_work_ilocked(&node->work, target_list);
-		}
 	}
 	return 0;
 }
···

 /**
  * struct binder_ptr_fixup - data to be fixed-up in target buffer
- * @offset	offset in target buffer to fixup
- * @skip_size	bytes to skip in copy (fixup will be written later)
- * @fixup_data	data to write at fixup offset
- * @node	list node
+ * @offset:	offset in target buffer to fixup
+ * @skip_size:	bytes to skip in copy (fixup will be written later)
+ * @fixup_data:	data to write at fixup offset
+ * @node:	list node
  *
  * This is used for the pointer fixup list (pf) which is created and consumed
  * during binder_transaction() and is only accessed locally. No
···

 /**
  * struct binder_sg_copy - scatter-gather data to be copied
- * @offset		offset in target buffer
- * @sender_uaddr	user address in source buffer
- * @length		bytes to copy
- * @node		list node
+ * @offset:		offset in target buffer
+ * @sender_uaddr:	user address in source buffer
+ * @length:		bytes to copy
+ * @node:		list node
  *
  * This is used for the sg copy list (sgc) which is created and consumed
  * during binder_transaction() and is only accessed locally. No
···

 /**
  * binder_free_buf() - free the specified buffer
- * @proc:	binder proc that owns buffer
- * @buffer:	buffer to be freed
- * @is_failure:	failed to send transaction
+ * @proc:	binder proc that owns buffer
+ * @thread:	binder thread performing the buffer release
+ * @buffer:	buffer to be freed
+ * @is_failure:	failed to send transaction
  *
- * If buffer for an async transaction, enqueue the next async
+ * If the buffer is for an async transaction, enqueue the next async
  * transaction from the node.
  *
- * Cleanup buffer and free it.
+ * Cleanup the buffer and free it.
  */
 static void
 binder_free_buf(struct binder_proc *proc,
+14-4
drivers/android/binder/freeze.rs
···
             return Ok(true);
         }
         if freeze.is_clearing {
-            _removed_listener = freeze_entry.remove_node();
+            kernel::warn_on!(freeze.num_cleared_duplicates != 0);
+            if freeze.num_pending_duplicates > 0 {
+                // The primary freeze listener was deleted, so convert a pending duplicate back
+                // into the primary one.
+                freeze.num_pending_duplicates -= 1;
+                freeze.is_pending = true;
+                freeze.is_clearing = true;
+            } else {
+                _removed_listener = freeze_entry.remove_node();
+            }
             drop(node_refs);
             writer.write_code(BR_CLEAR_FREEZE_NOTIFICATION_DONE)?;
             writer.write_payload(&self.cookie.0)?;
             Ok(true)
         } else {
-            let is_frozen = freeze.node.owner.inner.lock().is_frozen;
+            let is_frozen = freeze.node.owner.inner.lock().is_frozen.is_fully_frozen();
             if freeze.last_is_frozen == Some(is_frozen) {
                 return Ok(true);
             }
···
             );
             return Err(EINVAL);
         }
-        if freeze.is_clearing {
-            // Immediately send another FreezeMessage for BR_CLEAR_FREEZE_NOTIFICATION_DONE.
+        let is_frozen = freeze.node.owner.inner.lock().is_frozen.is_fully_frozen();
+        if freeze.is_clearing || freeze.last_is_frozen != Some(is_frozen) {
+            // Immediately send another FreezeMessage.
             clear_msg = Some(FreezeMessage::init(alloc, cookie));
         }
         freeze.is_pending = false;
···
 const PROC_DEFER_FLUSH: u8 = 1;
 const PROC_DEFER_RELEASE: u8 = 2;

+#[derive(Copy, Clone)]
+pub(crate) enum IsFrozen {
+    Yes,
+    No,
+    InProgress,
+}
+
+impl IsFrozen {
+    /// Whether incoming transactions should be rejected due to freeze.
+    pub(crate) fn is_frozen(self) -> bool {
+        match self {
+            IsFrozen::Yes => true,
+            IsFrozen::No => false,
+            IsFrozen::InProgress => true,
+        }
+    }
+
+    /// Whether freeze notifications consider this process frozen.
+    pub(crate) fn is_fully_frozen(self) -> bool {
+        match self {
+            IsFrozen::Yes => true,
+            IsFrozen::No => false,
+            IsFrozen::InProgress => false,
+        }
+    }
+}
+
 /// The fields of `Process` protected by the spinlock.
 pub(crate) struct ProcessInner {
     is_manager: bool,
···
     /// are woken up.
     outstanding_txns: u32,
     /// Process is frozen and unable to service binder transactions.
-    pub(crate) is_frozen: bool,
+    pub(crate) is_frozen: IsFrozen,
     /// Process received sync transactions since last frozen.
     pub(crate) sync_recv: bool,
     /// Process received async transactions since last frozen.
···
             started_thread_count: 0,
             defer_work: 0,
             outstanding_txns: 0,
-            is_frozen: false,
+            is_frozen: IsFrozen::No,
             sync_recv: false,
             async_recv: false,
             binderfs_file: None,
···
         let is_manager = {
             let mut inner = self.inner.lock();
             inner.is_dead = true;
-            inner.is_frozen = false;
+            inner.is_frozen = IsFrozen::No;
             inner.sync_recv = false;
             inner.async_recv = false;
             inner.is_manager
···
             .alloc
             .take_for_each(|offset, size, debug_id, odata| {
                 let ptr = offset + address;
-                pr_warn!(
-                    "{}: removing orphan mapping {offset}:{size}\n",
-                    self.pid_in_current_ns()
-                );
                 let mut alloc =
                     Allocation::new(self.clone(), debug_id, offset, size, ptr, false);
                 if let Some(data) = odata {
···
             return;
         }
         inner.outstanding_txns -= 1;
-        inner.is_frozen && inner.outstanding_txns == 0
+        inner.is_frozen.is_frozen() && inner.outstanding_txns == 0
     };

     if wake {
···
         let mut inner = self.inner.lock();
         inner.sync_recv = false;
         inner.async_recv = false;
-        inner.is_frozen = false;
+        inner.is_frozen = IsFrozen::No;
         drop(inner);
         msgs.send_messages();
         return Ok(());
···
     let mut inner = self.inner.lock();
     inner.sync_recv = false;
     inner.async_recv = false;
-    inner.is_frozen = true;
+    inner.is_frozen = IsFrozen::InProgress;

     if info.timeout_ms > 0 {
         let mut jiffies = kernel::time::msecs_to_jiffies(info.timeout_ms);
···
             .wait_interruptible_timeout(&mut inner, jiffies)
         {
             CondVarTimeoutResult::Signal { .. } => {
-                inner.is_frozen = false;
+                inner.is_frozen = IsFrozen::No;
                 return Err(ERESTARTSYS);
             }
             CondVarTimeoutResult::Woken { jiffies: remaining } => {
···
     }

     if inner.txns_pending_locked() {
-        inner.is_frozen = false;
+        inner.is_frozen = IsFrozen::No;
         Err(EAGAIN)
     } else {
         drop(inner);
         match self.prepare_freeze_messages() {
             Ok(batch) => {
+                self.inner.lock().is_frozen = IsFrozen::Yes;
                 batch.send_messages();
                 Ok(())
             }
             Err(kernel::alloc::AllocError) => {
-                self.inner.lock().is_frozen = false;
+                self.inner.lock().is_frozen = IsFrozen::No;
                 Err(ENOMEM)
             }
         }
+3-3
drivers/android/binder/transaction.rs
···

         if oneway {
             if let Some(target_node) = self.target_node.clone() {
-                if process_inner.is_frozen {
+                if process_inner.is_frozen.is_frozen() {
                     process_inner.async_recv = true;
                     if self.flags & TF_UPDATE_TXN != 0 {
                         if let Some(t_outdated) =
···
                 }
             }

-            if process_inner.is_frozen {
+            if process_inner.is_frozen.is_frozen() {
                 return Err(BinderError::new_frozen_oneway());
             } else {
                 return Ok(());
···
         }
     }

-    if process_inner.is_frozen {
+    if process_inner.is_frozen.is_frozen() {
         process_inner.sync_recv = true;
         return Err(BinderError::new_frozen());
     }
+1-1
drivers/base/arch_topology.c
···
 	 * frequency (by keeping the initial capacity_freq_ref value).
 	 */
 	cpu_clk = of_clk_get(cpu_node, 0);
-	if (!PTR_ERR_OR_ZERO(cpu_clk)) {
+	if (!IS_ERR_OR_NULL(cpu_clk)) {
 		per_cpu(capacity_freq_ref, cpu) =
 			clk_get_rate(cpu_clk) / HZ_PER_KHZ;
 		clk_put(cpu_clk);
+1-1
drivers/base/core.c
···
 		return 0;

 	if (fw_devlink_sync_state == FW_DEVLINK_SYNC_STATE_STRICT) {
-		dev_warn(sup, "sync_state() pending due to %s\n",
+		dev_info(sup, "sync_state() pending due to %s\n",
 			 dev_name(link->consumer));
 		return 0;
 	}
+83-53
drivers/base/devcoredump.c
···
 	void *data;
 	size_t datalen;
 	/*
-	 * Here, mutex is required to serialize the calls to del_wk work between
-	 * user/kernel space which happens when devcd is added with device_add()
-	 * and that sends uevent to user space. User space reads the uevents,
-	 * and calls to devcd_data_write() which try to modify the work which is
-	 * not even initialized/queued from devcoredump.
+	 * There are 2 races for which mutex is required.
 	 *
+	 * The first race is between device creation and userspace writing to
+	 * schedule immediate destruction.
 	 *
+	 * This race is handled by arming the timer before device creation, but
+	 * when device creation fails the timer still exists.
 	 *
-	 *        cpu0(X)                               cpu1(Y)
+	 * To solve this, hold the mutex during device_add(), and set
+	 * init_completed on success before releasing the mutex.
 	 *
-	 * dev_coredump() uevent sent to user space
-	 * device_add()  ======================> user space process Y reads the
-	 *                                       uevents writes to devcd fd
-	 *                                       which results into writes to
+	 * That way the timer will never fire until device_add() is called,
+	 * it will do nothing if init_completed is not set. The timer is also
+	 * cancelled in that case.
 	 *
-	 *                                       devcd_data_write()
-	 *                                         mod_delayed_work()
-	 *                                           try_to_grab_pending()
-	 *                                             timer_delete()
-	 *                                               debug_assert_init()
-	 *                                       INIT_DELAYED_WORK()
-	 *                                       schedule_delayed_work()
-	 *
-	 *
-	 * Also, mutex alone would not be enough to avoid scheduling of
-	 * del_wk work after it get flush from a call to devcd_free()
-	 * mentioned as below.
-	 *
-	 *	disabled_store()
-	 *        devcd_free()
-	 *          mutex_lock()             devcd_data_write()
-	 *          flush_delayed_work()
-	 *          mutex_unlock()
-	 *                                   mutex_lock()
-	 *                                   mod_delayed_work()
-	 *                                   mutex_unlock()
-	 * So, delete_work flag is required.
+	 * The second race involves multiple parallel invocations of devcd_free(),
+	 * add a deleted flag so only 1 can call the destructor.
 	 */
 	struct mutex mutex;
-	bool delete_work;
+	bool init_completed, deleted;
 	struct module *owner;
 	ssize_t (*read)(char *buffer, loff_t offset, size_t count,
 			void *data, size_t datalen);
 	void (*free)(void *data);
+	/*
+	 * If nothing interferes and device_add() returns success,
+	 * del_wk will destroy the device after the timer fires.
+	 *
+	 * Multiple userspace processes can interfere in the working of the timer:
+	 * - Writing to the coredump will reschedule the timer to run immediately,
+	 *   if still armed.
+	 *
+	 *   This is handled by using "if (cancel_delayed_work()) {
+	 *   schedule_delayed_work() }", to prevent re-arming after having
+	 *   been previously fired.
+	 * - Writing to /sys/class/devcoredump/disabled will destroy the
+	 *   coredump synchronously.
+	 *   This is handled by using disable_delayed_work_sync(), and then
+	 *   checking if deleted flag is set with &devcd->mutex held.
+	 */
 	struct delayed_work del_wk;
 	struct device *failing_dev;
 };
···
 	kfree(devcd);
 }

+static void __devcd_del(struct devcd_entry *devcd)
+{
+	devcd->deleted = true;
+	device_del(&devcd->devcd_dev);
+	put_device(&devcd->devcd_dev);
+}
+
 static void devcd_del(struct work_struct *wk)
 {
 	struct devcd_entry *devcd;
+	bool init_completed;

 	devcd = container_of(wk, struct devcd_entry, del_wk.work);

-	device_del(&devcd->devcd_dev);
-	put_device(&devcd->devcd_dev);
+	/* devcd->mutex serializes against dev_coredumpm_timeout */
+	mutex_lock(&devcd->mutex);
+	init_completed = devcd->init_completed;
+	mutex_unlock(&devcd->mutex);
+
+	if (init_completed)
+		__devcd_del(devcd);
 }

 static ssize_t devcd_data_read(struct file *filp, struct kobject *kobj,
···
 	struct device *dev = kobj_to_dev(kobj);
 	struct devcd_entry *devcd = dev_to_devcd(dev);

-	mutex_lock(&devcd->mutex);
-	if (!devcd->delete_work) {
-		devcd->delete_work = true;
-		mod_delayed_work(system_wq, &devcd->del_wk, 0);
-	}
-	mutex_unlock(&devcd->mutex);
+	/*
+	 * Although it's tempting to use mod_delayed work here,
+	 * that will cause a reschedule if the timer already fired.
+	 */
+	if (cancel_delayed_work(&devcd->del_wk))
+		schedule_delayed_work(&devcd->del_wk, 0);

 	return count;
 }
···
 {
 	struct devcd_entry *devcd = dev_to_devcd(dev);

+	/*
+	 * To prevent a race with devcd_data_write(), disable work and
+	 * complete manually instead.
+	 *
+	 * We cannot rely on the return value of
+	 * disable_delayed_work_sync() here, because it might be in the
+	 * middle of a cancel_delayed_work + schedule_delayed_work pair.
+	 *
+	 * devcd->mutex here guards against multiple parallel invocations
+	 * of devcd_free().
+	 */
+	disable_delayed_work_sync(&devcd->del_wk);
 	mutex_lock(&devcd->mutex);
-	if (!devcd->delete_work)
-		devcd->delete_work = true;
-
-	flush_delayed_work(&devcd->del_wk);
+	if (!devcd->deleted)
+		__devcd_del(devcd);
 	mutex_unlock(&devcd->mutex);
 	return 0;
 }
···
  *                                              put_device() <- last reference
  *             error = fn(dev, data)            devcd_dev_release()
  *             devcd_free(dev, data)            kfree(devcd)
- *             mutex_lock(&devcd->mutex);
  *
  *
  * In the above diagram, it looks like disabled_store() would be racing with parallelly
- * running devcd_del() and result in memory abort while acquiring devcd->mutex which
- * is called after kfree of devcd memory after dropping its last reference with
+ * running devcd_del() and result in memory abort after dropping its last reference with
  * put_device(). However, this will not happens as fn(dev, data) runs
  * with its own reference to device via klist_node so it is not its last reference.
  * so, above situation would not occur.
···
 	devcd->read = read;
 	devcd->free = free;
 	devcd->failing_dev = get_device(dev);
-	devcd->delete_work = false;
+	devcd->deleted = false;

 	mutex_init(&devcd->mutex);
 	device_initialize(&devcd->devcd_dev);
···
 		     atomic_inc_return(&devcd_count));
 	devcd->devcd_dev.class = &devcd_class;

-	mutex_lock(&devcd->mutex);
 	dev_set_uevent_suppress(&devcd->devcd_dev, true);
+
+	/* devcd->mutex prevents devcd_del() completing until init finishes */
+	mutex_lock(&devcd->mutex);
+	devcd->init_completed = false;
+	INIT_DELAYED_WORK(&devcd->del_wk, devcd_del);
+	schedule_delayed_work(&devcd->del_wk, timeout);
+
 	if (device_add(&devcd->devcd_dev))
 		goto put_device;

···

 	dev_set_uevent_suppress(&devcd->devcd_dev, false);
 	kobject_uevent(&devcd->devcd_dev.kobj, KOBJ_ADD);
-	INIT_DELAYED_WORK(&devcd->del_wk, devcd_del);
-	schedule_delayed_work(&devcd->del_wk, timeout);
+
+	/*
+	 * Safe to run devcd_del() now that we are done with devcd_dev.
+	 * Alternatively we could have taken a
ref on devcd_dev before408408+ * dropping the lock.409409+ */410410+ devcd->init_completed = true;429411 mutex_unlock(&devcd->mutex);430412 return;431413 put_device:432432- put_device(&devcd->devcd_dev);433414 mutex_unlock(&devcd->mutex);415415+ cancel_delayed_work_sync(&devcd->del_wk);416416+ put_device(&devcd->devcd_dev);417417+434418 put_module:435419 module_put(owner);436420 free:
···317317 unsigned int count = 0;318318 const unsigned int num_sample_bytes = comedi_bytes_per_sample(s);319319320320- if (!s->munge || (async->cmd.flags & CMDF_RAWDATA)) {320320+ if (!s->munge || (async->cmd.flags & CMDF_RAWDATA) || async->cmd.chanlist_len == 0) {321321 async->munge_count += num_bytes;322322 return num_bytes;323323 }
+5-1
drivers/cpufreq/amd-pstate.c
···16141614 * min_perf value across kexec reboots. If this CPU is just onlined normally after this, the16151615 * limits, epp and desired perf will get reset to the cached values in cpudata struct16161616 */16171617- return amd_pstate_update_perf(policy, perf.bios_min_perf, 0U, 0U, 0U, false);16171617+ return amd_pstate_update_perf(policy, perf.bios_min_perf,16181618+ FIELD_GET(AMD_CPPC_DES_PERF_MASK, cpudata->cppc_req_cached),16191619+ FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached),16201620+ FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached),16211621+ false);16181622}1619162316201624static int amd_pstate_suspend(struct cpufreq_policy *policy)
+9-12
drivers/cpuidle/governors/menu.c
···188188	 *189189	 * This can deal with workloads that have long pauses interspersed190190	 * with sporadic activity with a bunch of short pauses.191191+	 *192192+	 * However, if the number of remaining samples is too small to exclude193193+	 * any more outliers, allow the deepest available idle state to be194194+	 * selected because there are systems where the time spent by CPUs in195195+	 * deep idle states is correlated to the maximum frequency the CPUs196196+	 * can get to.  On those systems, shallow idle states should be avoided197197+	 * unless there is a clear indication that the given CPU is most likely198198+	 * going to be woken up shortly.191199	 */192192-	if (divisor * 4 <= INTERVALS * 3) {193193-		/*194194-		 * If there are sufficiently many data points still under195195-		 * consideration after the outliers have been eliminated,196196-		 * returning without a prediction would be a mistake because it197197-		 * is likely that the next interval will not exceed the current198198-		 * maximum, so return the latter in that case.199199-		 */200200-		if (divisor >= INTERVALS / 2)201201-			return max;202202-200200+	if (divisor * 4 <= INTERVALS * 3)203201		return UINT_MAX;204204-	}205202206203	/* Update the thresholds for the next round. */207204	if (avg - min > max - avg)
+1-1
drivers/firewire/core-transaction.c
···269269}270270271271static int allocate_tlabel(struct fw_card *card)272272-__must_hold(&card->transactions_lock)272272+__must_hold(&card->transactions.lock)273273{274274 int tlabel;275275
+10
drivers/firewire/init_ohci1394_dma.c
···167167168168/**169169 * init_ohci1394_wait_for_busresets - wait until bus resets are completed170170+ * @ohci: Pointer to the OHCI-1394 controller structure170171 *171172 * OHCI1394 initialization itself and any device going on- or offline172173 * and any cable issue cause a IEEE1394 bus reset. The OHCI1394 spec···190189191190/**192191 * init_ohci1394_enable_physical_dma - Enable physical DMA for remote debugging192192+ * @ohci: Pointer to the OHCI-1394 controller structure193193+ *193194 * This enables remote DMA access over IEEE1394 from every host for the low194195 * 4GB of address space. DMA accesses above 4GB are not available currently.195196 */···204201205202/**206203 * init_ohci1394_reset_and_init_dma - init controller and enable DMA204204+ * @ohci: Pointer to the OHCI-1394 controller structure205205+ *207206 * This initializes the given controller and enables physical DMA engine in it.208207 */209208static inline void __init init_ohci1394_reset_and_init_dma(struct ohci *ohci)···235230236231/**237232 * init_ohci1394_controller - Map the registers of the controller and init DMA233233+ * @num: PCI bus number234234+ * @slot: PCI device number235235+ * @func: PCI function number236236+ *238237 * This maps the registers of the specified controller and initializes it239238 */240239static inline void __init init_ohci1394_controller(int num, int slot, int func)···293284294285/**295286 * setup_ohci1394_dma - enables early OHCI1394 DMA initialization287287+ * @opt: Kernel command line parameter string296288 */297289static int __init setup_ohci1394_dma(char *opt)298290{
···309309	SCMI_DEBUG_COUNTERS_LAST310310};311311312312-static inline void scmi_inc_count(atomic_t *arr, int stat)312312+/**313313+ * struct scmi_debug_info  - Debug common info314314+ * @top_dentry: A reference to the top debugfs dentry315315+ * @name: Name of this SCMI instance316316+ * @type: Type of this SCMI instance317317+ * @is_atomic: Flag to state if the transport of this instance is atomic318318+ * @counters: An array of atomic_t's used for tracking statistics (if enabled)319319+ */320320+struct scmi_debug_info {321321+	struct dentry *top_dentry;322322+	const char *name;323323+	const char *type;324324+	bool is_atomic;325325+	atomic_t counters[SCMI_DEBUG_COUNTERS_LAST];326326+};327327+328328+static inline void scmi_inc_count(struct scmi_debug_info *dbg, int stat)313329{314314-	if (IS_ENABLED(CONFIG_ARM_SCMI_DEBUG_COUNTERS))315315-		atomic_inc(&arr[stat]);330330+	if (IS_ENABLED(CONFIG_ARM_SCMI_DEBUG_COUNTERS)) {331331+		if (dbg)332332+			atomic_inc(&dbg->counters[stat]);333333+	}316334}317335318318-static inline void scmi_dec_count(atomic_t *arr, int stat)336336+static inline void scmi_dec_count(struct scmi_debug_info *dbg, int stat)319337{320320-	if (IS_ENABLED(CONFIG_ARM_SCMI_DEBUG_COUNTERS))321321-		atomic_dec(&arr[stat]);338338+	if (IS_ENABLED(CONFIG_ARM_SCMI_DEBUG_COUNTERS)) {339339+		if (dbg)340340+			atomic_dec(&dbg->counters[stat]);341341+	}322342}323343324344enum scmi_bad_msg {
+21-38
drivers/firmware/arm_scmi/driver.c
···116116#define ph_to_pi(h) container_of(h, struct scmi_protocol_instance, ph)117117118118/**119119- * struct scmi_debug_info - Debug common info120120- * @top_dentry: A reference to the top debugfs dentry121121- * @name: Name of this SCMI instance122122- * @type: Type of this SCMI instance123123- * @is_atomic: Flag to state if the transport of this instance is atomic124124- * @counters: An array of atomic_c's used for tracking statistics (if enabled)125125- */126126-struct scmi_debug_info {127127- struct dentry *top_dentry;128128- const char *name;129129- const char *type;130130- bool is_atomic;131131- atomic_t counters[SCMI_DEBUG_COUNTERS_LAST];132132-};133133-134134-/**135119 * struct scmi_info - Structure representing a SCMI instance136120 *137121 * @id: A sequence number starting from zero identifying this instance···594610 /* Set in-flight */595611 set_bit(xfer->hdr.seq, minfo->xfer_alloc_table);596612 hash_add(minfo->pending_xfers, &xfer->node, xfer->hdr.seq);597597- scmi_inc_count(info->dbg->counters, XFERS_INFLIGHT);613613+ scmi_inc_count(info->dbg, XFERS_INFLIGHT);598614599615 xfer->pending = true;600616}···803819 hash_del(&xfer->node);804820 xfer->pending = false;805821806806- scmi_dec_count(info->dbg->counters, XFERS_INFLIGHT);822822+ scmi_dec_count(info->dbg, XFERS_INFLIGHT);807823 }824824+ xfer->flags = 0;808825 hlist_add_head(&xfer->node, &minfo->free_xfers);809826 }810827 spin_unlock_irqrestore(&minfo->xfer_lock, flags);···824839{825840 struct scmi_info *info = handle_to_scmi_info(handle);826841827827- xfer->flags &= ~SCMI_XFER_FLAG_IS_RAW;828828- xfer->flags &= ~SCMI_XFER_FLAG_CHAN_SET;829842 return __scmi_xfer_put(&info->tx_minfo, xfer);830843}831844···10171034 spin_unlock_irqrestore(&minfo->xfer_lock, flags);1018103510191036 scmi_bad_message_trace(cinfo, msg_hdr, MSG_UNEXPECTED);10201020- scmi_inc_count(info->dbg->counters, ERR_MSG_UNEXPECTED);10371037+ scmi_inc_count(info->dbg, ERR_MSG_UNEXPECTED);1021103810221039 return xfer;10231040 
}···10451062 msg_type, xfer_id, msg_hdr, xfer->state);1046106310471064 scmi_bad_message_trace(cinfo, msg_hdr, MSG_INVALID);10481048- scmi_inc_count(info->dbg->counters, ERR_MSG_INVALID);10651065+ scmi_inc_count(info->dbg, ERR_MSG_INVALID);1049106610501067 /* On error the refcount incremented above has to be dropped */10511068 __scmi_xfer_put(minfo, xfer);···10901107 PTR_ERR(xfer));1091110810921109 scmi_bad_message_trace(cinfo, msg_hdr, MSG_NOMEM);10931093- scmi_inc_count(info->dbg->counters, ERR_MSG_NOMEM);11101110+ scmi_inc_count(info->dbg, ERR_MSG_NOMEM);1094111110951112 scmi_clear_channel(info, cinfo);10961113 return;···11061123 trace_scmi_msg_dump(info->id, cinfo->id, xfer->hdr.protocol_id,11071124 xfer->hdr.id, "NOTI", xfer->hdr.seq,11081125 xfer->hdr.status, xfer->rx.buf, xfer->rx.len);11091109- scmi_inc_count(info->dbg->counters, NOTIFICATION_OK);11261126+ scmi_inc_count(info->dbg, NOTIFICATION_OK);1110112711111128 scmi_notify(cinfo->handle, xfer->hdr.protocol_id,11121129 xfer->hdr.id, xfer->rx.buf, xfer->rx.len, ts);···11661183 if (xfer->hdr.type == MSG_TYPE_DELAYED_RESP) {11671184 scmi_clear_channel(info, cinfo);11681185 complete(xfer->async_done);11691169- scmi_inc_count(info->dbg->counters, DELAYED_RESPONSE_OK);11861186+ scmi_inc_count(info->dbg, DELAYED_RESPONSE_OK);11701187 } else {11711188 complete(&xfer->done);11721172- scmi_inc_count(info->dbg->counters, RESPONSE_OK);11891189+ scmi_inc_count(info->dbg, RESPONSE_OK);11731190 }1174119111751192 if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) {···12791296 "timed out in resp(caller: %pS) - polling\n",12801297 (void *)_RET_IP_);12811298 ret = -ETIMEDOUT;12821282- scmi_inc_count(info->dbg->counters, XFERS_RESPONSE_POLLED_TIMEOUT);12991299+ scmi_inc_count(info->dbg, XFERS_RESPONSE_POLLED_TIMEOUT);12831300 }12841301 }12851302···13041321 "RESP" : "resp",13051322 xfer->hdr.seq, xfer->hdr.status,13061323 xfer->rx.buf, xfer->rx.len);13071307- scmi_inc_count(info->dbg->counters, RESPONSE_POLLED_OK);13241324+ 
scmi_inc_count(info->dbg, RESPONSE_POLLED_OK);1308132513091326 if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) {13101327 scmi_raw_message_report(info->raw, xfer,···13191336 dev_err(dev, "timed out in resp(caller: %pS)\n",13201337 (void *)_RET_IP_);13211338 ret = -ETIMEDOUT;13221322- scmi_inc_count(info->dbg->counters, XFERS_RESPONSE_TIMEOUT);13391339+ scmi_inc_count(info->dbg, XFERS_RESPONSE_TIMEOUT);13231340 }13241341 }13251342···14031420 !is_transport_polling_capable(info->desc)) {14041421 dev_warn_once(dev,14051422 "Polling mode is not supported by transport.\n");14061406- scmi_inc_count(info->dbg->counters, SENT_FAIL_POLLING_UNSUPPORTED);14231423+ scmi_inc_count(info->dbg, SENT_FAIL_POLLING_UNSUPPORTED);14071424 return -EINVAL;14081425 }1409142614101427 cinfo = idr_find(&info->tx_idr, pi->proto->id);14111428 if (unlikely(!cinfo)) {14121412- scmi_inc_count(info->dbg->counters, SENT_FAIL_CHANNEL_NOT_FOUND);14291429+ scmi_inc_count(info->dbg, SENT_FAIL_CHANNEL_NOT_FOUND);14131430 return -EINVAL;14141431 }14151432 /* True ONLY if also supported by transport. 
*/···14441461 ret = info->desc->ops->send_message(cinfo, xfer);14451462 if (ret < 0) {14461463 dev_dbg(dev, "Failed to send message %d\n", ret);14471447- scmi_inc_count(info->dbg->counters, SENT_FAIL);14641464+ scmi_inc_count(info->dbg, SENT_FAIL);14481465 return ret;14491466 }1450146714511468 trace_scmi_msg_dump(info->id, cinfo->id, xfer->hdr.protocol_id,14521469 xfer->hdr.id, "CMND", xfer->hdr.seq,14531470 xfer->hdr.status, xfer->tx.buf, xfer->tx.len);14541454- scmi_inc_count(info->dbg->counters, SENT_OK);14711471+ scmi_inc_count(info->dbg, SENT_OK);1455147214561473 ret = scmi_wait_for_message_response(cinfo, xfer);14571474 if (!ret && xfer->hdr.status) {14581475 ret = scmi_to_linux_errno(xfer->hdr.status);14591459- scmi_inc_count(info->dbg->counters, ERR_PROTOCOL);14761476+ scmi_inc_count(info->dbg, ERR_PROTOCOL);14601477 }1461147814621479 if (info->desc->ops->mark_txdone)···30273044 u8 channels[SCMI_MAX_CHANNELS] = {};30283045 DECLARE_BITMAP(protos, SCMI_MAX_CHANNELS) = {};3029304630303030- if (!info->dbg)30313031- return -EINVAL;30323032-30333047 /* Enumerate all channels to collect their ids */30343048 idr_for_each_entry(&info->tx_idr, cinfo, id) {30353049 /*···31983218 if (!info->dbg)31993219 dev_warn(dev, "Failed to setup SCMI debugfs.\n");3200322032013201- if (IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) {32213221+ if (info->dbg && IS_ENABLED(CONFIG_ARM_SCMI_RAW_MODE_SUPPORT)) {32023222 ret = scmi_debugfs_raw_mode_setup(info);32033223 if (!coex) {32043224 if (ret)···34023422{34033423 if (IS_ENABLED(CONFIG_ARM_SCMI_DEBUG_COUNTERS)) {34043424 struct scmi_info *info = handle_to_scmi_info(handle);34253425+34263426+ if (!info->dbg)34273427+ return 0;3405342834063429 return atomic_read(&info->dbg->counters[XFERS_INFLIGHT]);34073430 } else {
···174174 *p = color & 0xff;175175}176176177177+/*178178+ * Special case if the pixel crosses page boundaries179179+ */180180+static void drm_panic_write_pixel24_xpage(void *vaddr, struct page *next_page,181181+ unsigned int offset, u32 color)182182+{183183+ u8 *vaddr2;184184+ u8 *p = vaddr + offset;185185+186186+ vaddr2 = kmap_local_page_try_from_panic(next_page);187187+188188+ *p++ = color & 0xff;189189+ color >>= 8;190190+191191+ if (offset == PAGE_SIZE - 1)192192+ p = vaddr2;193193+194194+ *p++ = color & 0xff;195195+ color >>= 8;196196+197197+ if (offset == PAGE_SIZE - 2)198198+ p = vaddr2;199199+200200+ *p = color & 0xff;201201+ kunmap_local(vaddr2);202202+}203203+177204static void drm_panic_write_pixel32(void *vaddr, unsigned int offset, u32 color)178205{179206 u32 *p = vaddr + offset;···258231 page = new_page;259232 vaddr = kmap_local_page_try_from_panic(pages[page]);260233 }261261- if (vaddr)234234+ if (!vaddr)235235+ continue;236236+237237+ // Special case for 24bit, as a pixel might cross page boundaries238238+ if (cpp == 3 && offset + 3 > PAGE_SIZE)239239+ drm_panic_write_pixel24_xpage(vaddr, pages[page + 1],240240+ offset, fg32);241241+ else262242 drm_panic_write_pixel(vaddr, offset, fg32, cpp);263243 }264244 }···355321 page = new_page;356322 vaddr = kmap_local_page_try_from_panic(pages[page]);357323 }358358- drm_panic_write_pixel(vaddr, offset, color, cpp);324324+ if (!vaddr)325325+ continue;326326+327327+ // Special case for 24bit, as a pixel might cross page boundaries328328+ if (cpp == 3 && offset + 3 > PAGE_SIZE)329329+ drm_panic_write_pixel24_xpage(vaddr, pages[page + 1],330330+ offset, color);331331+ else332332+ drm_panic_write_pixel(vaddr, offset, color, cpp);359333 }360334 }361335 if (vaddr)···471429static void drm_panic_logo_draw(struct drm_scanout_buffer *sb, struct drm_rect *rect,472430 const struct font_desc *font, u32 fg_color)473431{432432+ if (rect->x2 > sb->width || rect->y2 > sb->height)433433+ return;434434+474435 if (logo_mono)475436 
drm_panic_blit(sb, rect, logo_mono->data,476437 DIV_ROUND_UP(drm_rect_width(rect), 8), 1, fg_color);···522477 struct drm_panic_line *line, int yoffset, u32 fg_color)523478{524479 int chars_per_row = sb->width / font->width;525525- struct drm_rect r_txt = DRM_RECT_INIT(0, yoffset, sb->width, sb->height);480480+ struct drm_rect r_txt = DRM_RECT_INIT(0, yoffset, sb->width, font->height);526481 struct drm_panic_line line_wrap;527482528483 if (line->len > chars_per_row) {···565520 struct drm_panic_line line;566521 int yoffset;567522568568- if (!font)523523+ if (!font || font->width > sb->width)569524 return;570525571526 yoffset = sb->height - font->height - (sb->height % font->height) / 2;···778733 pr_debug("QR width %d and scale %d\n", qr_width, scale);779734 r_qr_canvas = DRM_RECT_INIT(0, 0, qr_canvas_width * scale, qr_canvas_width * scale);780735781781- v_margin = (sb->height - drm_rect_height(&r_qr_canvas) - drm_rect_height(&r_msg)) / 5;736736+ v_margin = sb->height - drm_rect_height(&r_qr_canvas) - drm_rect_height(&r_msg);737737+ if (v_margin < 0)738738+ return -ENOSPC;739739+ v_margin /= 5;782740783741 drm_rect_translate(&r_qr_canvas, (sb->width - r_qr_canvas.x2) / 2, 2 * v_margin);784742 r_qr = DRM_RECT_INIT(r_qr_canvas.x1 + QR_MARGIN * scale, r_qr_canvas.y1 + QR_MARGIN * scale,···794746 /* Fill with the background color, and draw text on top */795747 drm_panic_fill(sb, &r_screen, bg_color);796748797797- if (!drm_rect_overlap(&r_logo, &r_msg) && !drm_rect_overlap(&r_logo, &r_qr))749749+ if (!drm_rect_overlap(&r_logo, &r_msg) && !drm_rect_overlap(&r_logo, &r_qr_canvas))798750 drm_panic_logo_draw(sb, &r_logo, font, fg_color);799751800752 draw_txt_rectangle(sb, font, panic_msg, panic_msg_lines, true, &r_msg, fg_color);
+13-12
drivers/gpu/drm/i915/display/intel_fb.c
···2117211721182118 intel_frontbuffer_put(intel_fb->frontbuffer);2119211921202120+ kfree(intel_fb->panic);21202121 kfree(intel_fb);21212122}21222123···22162215 struct intel_display *display = to_intel_display(obj->dev);22172216 struct drm_framebuffer *fb = &intel_fb->base;22182217 u32 max_stride;22192219- int ret = -EINVAL;22182218+ int ret;22202219 int i;22202220+22212221+ intel_fb->panic = intel_panic_alloc();22222222+ if (!intel_fb->panic)22232223+ return -ENOMEM;2221222422222225 /*22232226 * intel_frontbuffer_get() must be done before22242227 * intel_fb_bo_framebuffer_init() to avoid set_tiling vs. addfb race.22252228 */22262229 intel_fb->frontbuffer = intel_frontbuffer_get(obj);22272227- if (!intel_fb->frontbuffer)22282228- return -ENOMEM;22302230+ if (!intel_fb->frontbuffer) {22312231+ ret = -ENOMEM;22322232+ goto err_free_panic;22332233+ }2229223422302235 ret = intel_fb_bo_framebuffer_init(fb, obj, mode_cmd);22312236 if (ret)···23302323 intel_fb_bo_framebuffer_fini(obj);23312324err_frontbuffer_put:23322325 intel_frontbuffer_put(intel_fb->frontbuffer);23262326+err_free_panic:23272327+ kfree(intel_fb->panic);23282328+23332329 return ret;23342330}23352331···23592349struct intel_framebuffer *intel_framebuffer_alloc(void)23602350{23612351 struct intel_framebuffer *intel_fb;23622362- struct intel_panic *panic;2363235223642353 intel_fb = kzalloc(sizeof(*intel_fb), GFP_KERNEL);23652354 if (!intel_fb)23662355 return NULL;23672367-23682368- panic = intel_panic_alloc();23692369- if (!panic) {23702370- kfree(intel_fb);23712371- return NULL;23722372- }23732373-23742374- intel_fb->panic = panic;2375235623762357 return intel_fb;23772358}
+7-3
drivers/gpu/drm/panthor/panthor_mmu.c
···11751175		break;1176117611771177	case DRM_PANTHOR_VM_BIND_OP_TYPE_UNMAP:11781178-		/* Partial unmaps might trigger a remap with either a prev or a next VA,11791179-		 * but not both.11781178+		/* Two VMAs can be needed for an unmap, as an unmap can happen11791179+		 * in the middle of a drm_gpuva, requiring a remap with both11801180+		 * prev & next VA. Or an unmap can span more than one drm_gpuva11811181+		 * where the first and last ones are covered partially, requiring11821182+		 * a remap for the first with a prev VA and remap for the last11831183+		 * with a next VA.11801184		 */11811181-		vma_count = 1;11851185+		vma_count = 2;11821186		break;1183118711841188	default:
···291291 return data;292292}293293294294-static int temp1_input_read(struct device *dev)294294+static int temp1_input_read(struct device *dev, long *temp)295295{296296 struct sht3x_data *data = sht3x_update_client(dev);297297298298 if (IS_ERR(data))299299 return PTR_ERR(data);300300301301- return data->temperature;301301+ *temp = data->temperature;302302+ return 0;302303}303304304304-static int humidity1_input_read(struct device *dev)305305+static int humidity1_input_read(struct device *dev, long *humidity)305306{306307 struct sht3x_data *data = sht3x_update_client(dev);307308308309 if (IS_ERR(data))309310 return PTR_ERR(data);310311311311- return data->humidity;312312+ *humidity = data->humidity;313313+ return 0;312314}313315314316/*···708706 u32 attr, int channel, long *val)709707{710708 enum sht3x_limits index;709709+ int ret;711710712711 switch (type) {713712 case hwmon_chip:···723720 case hwmon_temp:724721 switch (attr) {725722 case hwmon_temp_input:726726- *val = temp1_input_read(dev);727727- break;723723+ return temp1_input_read(dev, val);728724 case hwmon_temp_alarm:729729- *val = temp1_alarm_read(dev);725725+ ret = temp1_alarm_read(dev);726726+ if (ret < 0)727727+ return ret;728728+ *val = ret;730729 break;731730 case hwmon_temp_max:732731 index = limit_max;···753748 case hwmon_humidity:754749 switch (attr) {755750 case hwmon_humidity_input:756756- *val = humidity1_input_read(dev);757757- break;751751+ return humidity1_input_read(dev, val);758752 case hwmon_humidity_alarm:759759- *val = humidity1_alarm_read(dev);753753+ ret = humidity1_alarm_read(dev);754754+ if (ret < 0)755755+ return ret;756756+ *val = ret;760757 break;761758 case hwmon_humidity_max:762759 index = limit_max;
+2
drivers/misc/amd-sbi/Kconfig
···22config AMD_SBRMI_I2C33 tristate "AMD side band RMI support"44 depends on I2C55+ depends on ARM || ARM64 || COMPILE_TEST56 select REGMAP_I2C67 help78 Side band RMI over I2C support for AMD out of band management.99+ This driver is intended to run on the BMC, not the managed node.810911 This driver can also be built as a module. If so, the module will1012 be called sbrmi-i2c.
···120120#define MEI_DEV_ID_PTL_H 0xE370 /* Panther Lake H */121121#define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */122122123123+#define MEI_DEV_ID_WCL_P 0x4D70 /* Wildcat Lake P */124124+123125/*124126 * MEI HW Section125127 */
···17371737{17381738 unsigned long status, flags;17391739 struct vmballoon *b;17401740- int ret;17401740+ int ret = 0;1741174117421742 b = container_of(b_dev_info, struct vmballoon, b_dev_info);17431743···17961796 * A failure happened. While we can deflate the page we just17971797 * inflated, this deflation can also encounter an error. Instead17981798 * we will decrease the size of the balloon to reflect the17991799- * change and report failure.17991799+ * change.18001800 */18011801 atomic64_dec(&b->size);18021802- ret = -EBUSY;18031802 } else {18041803 /*18051804 * Success. Take a reference for the page, and we will add it to18061805 * the list after acquiring the lock.18071806 */18081807 get_page(newpage);18091809- ret = 0;18101808 }1811180918121810 /* Update the balloon list under the @pages_lock */···18151817 * If we succeed just insert it to the list and update the statistics18161818 * under the lock.18171819 */18181818- if (!ret) {18201820+ if (status == VMW_BALLOON_SUCCESS) {18191821 balloon_page_insert(&b->b_dev_info, newpage);18201822 __count_vm_event(BALLOON_MIGRATE);18211823 }
···22872287 unblock_netpoll_tx();22882288 }2289228922902290- if (bond_mode_can_use_xmit_hash(bond))22902290+ /* broadcast mode uses the all_slaves to loop through slaves. */22912291+ if (bond_mode_can_use_xmit_hash(bond) ||22922292+ BOND_MODE(bond) == BOND_MODE_BROADCAST)22912293 bond_update_slave_arr(bond, NULL);2292229422932295 if (!slave_dev->netdev_ops->ndo_bpf ||···2465246324662464 bond_upper_dev_unlink(bond, slave);2467246524682468- if (bond_mode_can_use_xmit_hash(bond))24662466+ if (bond_mode_can_use_xmit_hash(bond) ||24672467+ BOND_MODE(bond) == BOND_MODE_BROADCAST)24692468 bond_update_slave_arr(bond, slave);2470246924712470 slave_info(bond_dev, slave_dev, "Releasing %s interface\n",···28742871{28752872 struct bonding *bond = container_of(work, struct bonding,28762873 mii_work.work);28772877- bool should_notify_peers = false;28742874+ bool should_notify_peers;28782875 bool commit;28792876 unsigned long delay;28802877 struct slave *slave;···28862883 goto re_arm;2887288428882885 rcu_read_lock();28862886+28892887 should_notify_peers = bond_should_notify_peers(bond);28902888 commit = !!bond_miimon_inspect(bond);28912891- if (bond->send_peer_notif) {28922892- rcu_read_unlock();28932893- if (rtnl_trylock()) {28942894- bond->send_peer_notif--;28952895- rtnl_unlock();28962896- }28972897- } else {28982898- rcu_read_unlock();28992899- }2900288929012901- if (commit) {28902890+ rcu_read_unlock();28912891+28922892+ if (commit || bond->send_peer_notif) {29022893 /* Race avoidance with bond_close cancel of workqueue */29032894 if (!rtnl_trylock()) {29042895 delay = 1;29052905- should_notify_peers = false;29062896 goto re_arm;29072897 }2908289829092909- bond_for_each_slave(bond, slave, iter) {29102910- bond_commit_link_state(slave, BOND_SLAVE_NOTIFY_LATER);28992899+ if (commit) {29002900+ bond_for_each_slave(bond, slave, iter) {29012901+ bond_commit_link_state(slave,29022902+ BOND_SLAVE_NOTIFY_LATER);29032903+ }29042904+ bond_miimon_commit(bond);29112905 }29122912- 
bond_miimon_commit(bond);29062906+29072907+ if (bond->send_peer_notif) {29082908+ bond->send_peer_notif--;29092909+ if (should_notify_peers)29102910+ call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,29112911+ bond->dev);29122912+ }2913291329142914 rtnl_unlock(); /* might sleep, hold no other locks */29152915 }···29202914re_arm:29212915 if (bond->params.miimon)29222916 queue_delayed_work(bond->wq, &bond->mii_work, delay);29232923-29242924- if (should_notify_peers) {29252925- if (!rtnl_trylock())29262926- return;29272927- call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, bond->dev);29282928- rtnl_unlock();29292929- }29302917}2931291829322919static int bond_upper_dev_walk(struct net_device *upper,
+1-1
drivers/net/can/bxcan.c
···842842 u32 id;843843 int i, j;844844845845- if (can_dropped_invalid_skb(ndev, skb))845845+ if (can_dev_dropped_skb(ndev, skb))846846 return NETDEV_TX_OK;847847848848 if (bxcan_tx_busy(priv))
+4-2
drivers/net/can/dev/netlink.c
···452452 }453453454454 if (data[IFLA_CAN_RESTART_MS]) {455455- if (!priv->do_set_mode) {455455+ unsigned int restart_ms = nla_get_u32(data[IFLA_CAN_RESTART_MS]);456456+457457+ if (restart_ms != 0 && !priv->do_set_mode) {456458 NL_SET_ERR_MSG(extack,457459 "Device doesn't support restart from Bus Off");458460 return -EOPNOTSUPP;···463461 /* Do not allow changing restart delay while running */464462 if (dev->flags & IFF_UP)465463 return -EBUSY;466466- priv->restart_ms = nla_get_u32(data[IFLA_CAN_RESTART_MS]);464464+ priv->restart_ms = restart_ms;467465 }468466469467 if (data[IFLA_CAN_RESTART]) {
+1-1
drivers/net/can/esd/esdacc.c
···254254 u32 acc_id;255255 u32 acc_dlc;256256257257- if (can_dropped_invalid_skb(netdev, skb))257257+ if (can_dev_dropped_skb(netdev, skb))258258 return NETDEV_TX_OK;259259260260 /* Access core->tx_fifo_tail only once because it may be changed
+1-1
drivers/net/can/rockchip/rockchip_canfd-tx.c
···7272 int err;7373 u8 i;74747575- if (can_dropped_invalid_skb(ndev, skb))7575+ if (can_dev_dropped_skb(ndev, skb))7676 return NETDEV_TX_OK;77777878 if (!netif_subqueue_maybe_stop(priv->ndev, 0,
+1-1
drivers/net/ethernet/dlink/dl2k.c
···733733 u64 tfc_vlan_tag = 0;734734735735 if (np->link_status == 0) { /* Link Down */736736- dev_kfree_skb(skb);736736+ dev_kfree_skb_any(skb);737737 return NETDEV_TX_OK;738738 }739739 entry = np->cur_tx % TX_RING_SIZE;
···978978 int err;979979980980 dev->priv.devc = mlx5_devcom_register_device(dev);981981- if (IS_ERR(dev->priv.devc))982982- mlx5_core_warn(dev, "failed to register devcom device %pe\n",983983- dev->priv.devc);981981+ if (!dev->priv.devc)982982+ mlx5_core_warn(dev, "failed to register devcom device\n");984983985984 err = mlx5_query_board_id(dev);986985 if (err) {
+22-2
drivers/net/ethernet/renesas/ravb_main.c
···2211221122122212	skb_tx_timestamp(skb);22132213	}22142214-	/* Descriptor type must be set after all the above writes */22152215-	dma_wmb();22142214+22162215	if (num_tx_desc > 1) {22172216		desc->die_dt = DT_FEND;22182217		desc--;22182218+		/* When using multi-descriptors, DT_FEND needs to get written22192219+		 * before DT_FSTART, but the compiler may reorder the memory22202220+		 * writes in an attempt to optimize the code.22212221+		 * Use a dma_wmb() barrier to make sure DT_FEND and DT_FSTART22222222+		 * are written exactly in the order shown in the code.22232223+		 * This is particularly important for cases where the DMA engine22242224+		 * is already running when we are running this code. If the DMA22252225+		 * sees DT_FSTART without the corresponding DT_FEND it will enter22262226+		 * an error condition.22272227+		 */22282228+		dma_wmb();22192229		desc->die_dt = DT_FSTART;22202230	} else {22312231+		/* Descriptor type must be set after all the above writes */22322232+		dma_wmb();22212233		desc->die_dt = DT_FSINGLE;22222234	}22352235+22362236+	/* Before ringing the doorbell we need to make sure that the latest22372237+	 * writes have been committed to memory, otherwise it could delay22382238+	 * things until the doorbell is rung again.22392239+	 * This is in replacement of the read operation mentioned in the HW22402240+	 * manuals.22412241+	 */22422242+	dma_wmb();22232243	ravb_modify(ndev, TCCR, TCCR_TSRQ0 << q, TCCR_TSRQ0 << q);2224224422252245	priv->cur_tx[q] += num_tx_desc;
···560560static __poll_t ovpn_tcp_poll(struct file *file, struct socket *sock,561561 poll_table *wait)562562{563563- __poll_t mask = datagram_poll(file, sock, wait);563563+ struct sk_buff_head *queue = &sock->sk->sk_receive_queue;564564 struct ovpn_socket *ovpn_sock;565565+ struct ovpn_peer *peer = NULL;566566+ __poll_t mask;565567566568 rcu_read_lock();567569 ovpn_sock = rcu_dereference_sk_user_data(sock->sk);568568- if (ovpn_sock && ovpn_sock->peer &&569569- !skb_queue_empty(&ovpn_sock->peer->tcp.user_queue))570570- mask |= EPOLLIN | EPOLLRDNORM;570570+ /* if we landed in this callback, we expect to have a571571+ * meaningful state. The ovpn_socket lifecycle would572572+ * prevent it otherwise.573573+ */574574+ if (WARN(!ovpn_sock || !ovpn_sock->peer,575575+ "ovpn: null state in ovpn_tcp_poll!")) {576576+ rcu_read_unlock();577577+ return 0;578578+ }579579+580580+ if (ovpn_peer_hold(ovpn_sock->peer)) {581581+ peer = ovpn_sock->peer;582582+ queue = &peer->tcp.user_queue;583583+ }571584 rcu_read_unlock();585585+586586+ mask = datagram_poll_queue(file, sock, wait, queue);587587+588588+ if (peer)589589+ ovpn_peer_put(peer);572590573591 return mask;574592}
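The lifetime pattern in the ovpn_tcp_poll() rework — take a reference while the lookup is protected (RCU in the driver), do the potentially-sleeping work outside the protected section, then drop the reference — can be sketched with single-threaded toy counters (not the kernel's refcount API; `toy_*` names are illustrative):

```c
#include <assert.h>

/* Toy stand-in for the hold/put pattern: hold fails once teardown has
 * dropped the last reference, exactly like ovpn_peer_hold() above. */
struct toy_peer {
	int refcount;
};

static int toy_peer_hold(struct toy_peer *p)
{
	if (p->refcount <= 0)	/* peer already being torn down */
		return 0;
	p->refcount++;
	return 1;
}

static void toy_peer_put(struct toy_peer *p)
{
	p->refcount--;
}

/* mirrors the poll path: hold under the "lock", use, put afterwards */
static int toy_poll(void)
{
	struct toy_peer peer = { .refcount = 1 };

	if (!toy_peer_hold(&peer))
		return -1;
	/* ... the unlocked queue poll would run here ... */
	toy_peer_put(&peer);
	return peer.refcount;	/* back to the caller's single reference */
}

static struct toy_peer dying_peer = { .refcount = 0 };
```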
+2-2
drivers/net/phy/micrel.c
···42624262{42634263 struct lan8814_shared_priv *shared = phy_package_get_priv(phydev);4264426442654265+ shared->phydev = phydev;42664266+42654267 /* Initialise shared lock for clock*/42664268 mutex_init(&shared->shared_lock);42674269···43184316 return 0;4319431743204318 phydev_dbg(phydev, "successfully registered ptp clock\n");43214321-43224322- shared->phydev = phydev;4323431943244320 /* The EP.4 is shared between all the PHYs in the package and also it43254321 * can be accessed by any of the PHYs
···671671 }672672}673673674674+static int of_check_msi_parent(struct device_node *dev_node, struct device_node **msi_node)675675+{676676+ struct of_phandle_args msi_spec;677677+ int ret;678678+679679+ /*680680+ * An msi-parent phandle with a missing or == 0 #msi-cells681681+ * property identifies a 1:1 ID translation mapping.682682+ *683683+ * Set the msi controller node if the firmware matches this684684+ * condition.685685+ */686686+ ret = of_parse_phandle_with_optional_args(dev_node, "msi-parent", "#msi-cells",687687+ 0, &msi_spec);688688+ if (ret)689689+ return ret;690690+691691+ if ((*msi_node && *msi_node != msi_spec.np) || msi_spec.args_count != 0)692692+ ret = -EINVAL;693693+694694+ if (!ret) {695695+ /* Return with a node reference held */696696+ *msi_node = msi_spec.np;697697+ return 0;698698+ }699699+ of_node_put(msi_spec.np);700700+701701+ return ret;702702+}703703+674704/**675705 * of_msi_xlate - map a MSI ID and find relevant MSI controller node676706 * @dev: device for which the mapping is to be done.···708678 * @id_in: Device ID.709679 *710680 * Walk up the device hierarchy looking for devices with a "msi-map"711711- * property. If found, apply the mapping to @id_in.681681+ * or "msi-parent" property. 
If found, apply the mapping to @id_in.712682 * If @msi_np points to a non-NULL device node pointer, only entries targeting713683 * that node will be matched; if it points to a NULL value, it will receive the714684 * device node of the first matching target phandle, with a reference held.···722692723693 /*724694 * Walk up the device parent links looking for one with a725725- * "msi-map" property.695695+ * "msi-map" or an "msi-parent" property.726696 */727727- for (parent_dev = dev; parent_dev; parent_dev = parent_dev->parent)697697+ for (parent_dev = dev; parent_dev; parent_dev = parent_dev->parent) {728698 if (!of_map_id(parent_dev->of_node, id_in, "msi-map",729699 "msi-map-mask", msi_np, &id_out))730700 break;701701+ if (!of_check_msi_parent(parent_dev->of_node, msi_np))702702+ break;703703+ }731704 return id_out;732705}706706+EXPORT_SYMBOL_GPL(of_msi_xlate);733707734708/**735709 * of_msi_map_get_device_domain - Use msi-map to find the relevant MSI domain···775741776742 of_for_each_phandle(&it, err, np, "msi-parent", "#msi-cells", 0) {777743 d = irq_find_matching_host(it.node, token);778778- if (d)744744+ if (d) {745745+ of_node_put(it.node);779746 return d;747747+ }780748 }781749782750 return NULL;
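The parent walk added to of_msi_xlate() — climb the device hierarchy and stop at the first node that can translate the ID, trying "msi-map" before the 1:1 "msi-parent" case — can be sketched in plain C (toy structures, not OF code; the flags stand in for the firmware properties):

```c
#include <assert.h>
#include <stddef.h>

/* Toy device node: each flag stands in for the presence of the
 * corresponding firmware property on that node. */
struct toy_node {
	struct toy_node *parent;
	int has_msi_map;	/* stand-in for an "msi-map" property */
	int has_msi_parent;	/* stand-in for an "msi-parent" property */
};

static int toy_msi_xlate(const struct toy_node *dev)
{
	for (const struct toy_node *n = dev; n; n = n->parent) {
		if (n->has_msi_map)
			return 1;	/* ID translated via msi-map */
		if (n->has_msi_parent)
			return 2;	/* 1:1 mapping via msi-parent */
	}
	return 0;			/* no MSI topology found */
}

static struct toy_node root = { .has_msi_parent = 1 };
static struct toy_node bridge = { .parent = &root };
static struct toy_node dev_node = { .parent = &bridge };
static struct toy_node mapped_dev = { .parent = &bridge, .has_msi_map = 1 };
static struct toy_node orphan;
```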
···5555#define PARF_AXI_MSTR_WR_ADDR_HALT_V2 0x1a85656#define PARF_Q2A_FLUSH 0x1ac5757#define PARF_LTSSM 0x1b05858-#define PARF_SLV_DBI_ELBI 0x1b45958#define PARF_INT_ALL_STATUS 0x2246059#define PARF_INT_ALL_CLEAR 0x2286160#define PARF_INT_ALL_MASK 0x22c···6465#define PARF_DBI_BASE_ADDR_V2_HI 0x3546566#define PARF_SLV_ADDR_SPACE_SIZE_V2 0x3586667#define PARF_SLV_ADDR_SPACE_SIZE_V2_HI 0x35c6767-#define PARF_BLOCK_SLV_AXI_WR_BASE 0x3606868-#define PARF_BLOCK_SLV_AXI_WR_BASE_HI 0x3646969-#define PARF_BLOCK_SLV_AXI_WR_LIMIT 0x3687070-#define PARF_BLOCK_SLV_AXI_WR_LIMIT_HI 0x36c7171-#define PARF_BLOCK_SLV_AXI_RD_BASE 0x3707272-#define PARF_BLOCK_SLV_AXI_RD_BASE_HI 0x3747373-#define PARF_BLOCK_SLV_AXI_RD_LIMIT 0x3787474-#define PARF_BLOCK_SLV_AXI_RD_LIMIT_HI 0x37c7575-#define PARF_ECAM_BASE 0x3807676-#define PARF_ECAM_BASE_HI 0x3847768#define PARF_NO_SNOOP_OVERRIDE 0x3d47869#define PARF_ATU_BASE_ADDR 0x6347970#define PARF_ATU_BASE_ADDR_HI 0x638···87988899/* PARF_SYS_CTRL register fields */89100#define MAC_PHY_POWERDOWN_IN_P2_D_MUX_EN BIT(29)9090-#define PCIE_ECAM_BLOCKER_EN BIT(26)91101#define MST_WAKEUP_EN BIT(13)92102#define SLV_WAKEUP_EN BIT(12)93103#define MSTR_ACLK_CGC_DIS BIT(10)···133145134146/* PARF_LTSSM register fields */135147#define LTSSM_EN BIT(8)136136-137137-/* PARF_SLV_DBI_ELBI */138138-#define SLV_DBI_ELBI_ADDR_BASE GENMASK(11, 0)139148140149/* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */141150#define PARF_INT_ALL_LINK_UP BIT(13)···309324 /* Ensure that PERST has been asserted for at least 100 ms */310325 msleep(PCIE_T_PVPERL_MS);311326 qcom_perst_assert(pcie, false);312312-}313313-314314-static void qcom_pci_config_ecam(struct dw_pcie_rp *pp)315315-{316316- struct dw_pcie *pci = to_dw_pcie_from_pp(pp);317317- struct qcom_pcie *pcie = to_qcom_pcie(pci);318318- u64 addr, addr_end;319319- u32 val;320320-321321- writel_relaxed(lower_32_bits(pci->dbi_phys_addr), pcie->parf + PARF_ECAM_BASE);322322- writel_relaxed(upper_32_bits(pci->dbi_phys_addr), 
pcie->parf + PARF_ECAM_BASE_HI);323323-324324- /*325325- * The only device on the root bus is a single Root Port. If we try to326326- * access any devices other than Device/Function 00.0 on Bus 0, the TLP327327- * will go outside of the controller to the PCI bus. But with CFG Shift328328- * Feature (ECAM) enabled in iATU, there is no guarantee that the329329- * response is going to be all F's. Hence, to make sure that the330330- * requester gets all F's response for accesses other than the Root331331- * Port, configure iATU to block the transactions starting from332332- * function 1 of the root bus to the end of the root bus (i.e., from333333- * dbi_base + 4KB to dbi_base + 1MB).334334- */335335- addr = pci->dbi_phys_addr + SZ_4K;336336- writel_relaxed(lower_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_WR_BASE);337337- writel_relaxed(upper_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_WR_BASE_HI);338338-339339- writel_relaxed(lower_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_RD_BASE);340340- writel_relaxed(upper_32_bits(addr), pcie->parf + PARF_BLOCK_SLV_AXI_RD_BASE_HI);341341-342342- addr_end = pci->dbi_phys_addr + SZ_1M - 1;343343-344344- writel_relaxed(lower_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_WR_LIMIT);345345- writel_relaxed(upper_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_WR_LIMIT_HI);346346-347347- writel_relaxed(lower_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_RD_LIMIT);348348- writel_relaxed(upper_32_bits(addr_end), pcie->parf + PARF_BLOCK_SLV_AXI_RD_LIMIT_HI);349349-350350- val = readl_relaxed(pcie->parf + PARF_SYS_CTRL);351351- val |= PCIE_ECAM_BLOCKER_EN;352352- writel_relaxed(val, pcie->parf + PARF_SYS_CTRL);353327}354328355329static int qcom_pcie_start_link(struct dw_pcie *pci)···12641320{12651321 struct dw_pcie *pci = to_dw_pcie_from_pp(pp);12661322 struct qcom_pcie *pcie = to_qcom_pcie(pci);12671267- u16 offset;12681323 int ret;1269132412701325 qcom_ep_reset_assert(pcie);···12711328 ret = 
pcie->cfg->ops->init(pcie);12721329 if (ret)12731330 return ret;12741274-12751275- if (pp->ecam_enabled) {12761276- /*12771277- * Override ELBI when ECAM is enabled, as when ECAM is enabled,12781278- * ELBI moves under the 'config' space.12791279- */12801280- offset = FIELD_GET(SLV_DBI_ELBI_ADDR_BASE, readl(pcie->parf + PARF_SLV_DBI_ELBI));12811281- pci->elbi_base = pci->dbi_base + offset;12821282-12831283- qcom_pci_config_ecam(pp);12841284- }1285133112861332 ret = qcom_pcie_phy_power_on(pcie);12871333 if (ret)
+9-25
drivers/pci/pcie/aspm.c
···243243 /* Clock PM state */244244 u32 clkpm_capable:1; /* Clock PM capable? */245245 u32 clkpm_enabled:1; /* Current Clock PM state */246246- u32 clkpm_default:1; /* Default Clock PM state by BIOS or247247- override */246246+ u32 clkpm_default:1; /* Default Clock PM state by BIOS */248247 u32 clkpm_disable:1; /* Clock PM disabled */249248};250249···375376 pcie_set_clkpm_nocheck(link, enable);376377}377378378378-static void pcie_clkpm_override_default_link_state(struct pcie_link_state *link,379379- int enabled)380380-{381381- struct pci_dev *pdev = link->downstream;382382-383383- /* For devicetree platforms, enable ClockPM by default */384384- if (of_have_populated_dt() && !enabled) {385385- link->clkpm_default = 1;386386- pci_info(pdev, "ASPM: DT platform, enabling ClockPM\n");387387- }388388-}389389-390379static void pcie_clkpm_cap_init(struct pcie_link_state *link, int blacklist)391380{392381 int capable = 1, enabled = 1;···397410 }398411 link->clkpm_enabled = enabled;399412 link->clkpm_default = enabled;400400- pcie_clkpm_override_default_link_state(link, enabled);401413 link->clkpm_capable = capable;402414 link->clkpm_disable = blacklist ? 
1 : 0;403415}···797811 struct pci_dev *pdev = link->downstream;798812 u32 override;799813800800- /* For devicetree platforms, enable all ASPM states by default */814814+ /* For devicetree platforms, enable L0s and L1 by default */801815 if (of_have_populated_dt()) {802802- link->aspm_default = PCIE_LINK_STATE_ASPM_ALL;816816+ if (link->aspm_support & PCIE_LINK_STATE_L0S)817817+ link->aspm_default |= PCIE_LINK_STATE_L0S;818818+ if (link->aspm_support & PCIE_LINK_STATE_L1)819819+ link->aspm_default |= PCIE_LINK_STATE_L1;803820 override = link->aspm_default & ~link->aspm_enabled;804821 if (override)805805- pci_info(pdev, "ASPM: DT platform, enabling%s%s%s%s%s%s%s\n",806806- FLAG(override, L0S_UP, " L0s-up"),807807- FLAG(override, L0S_DW, " L0s-dw"),808808- FLAG(override, L1, " L1"),809809- FLAG(override, L1_1, " ASPM-L1.1"),810810- FLAG(override, L1_2, " ASPM-L1.2"),811811- FLAG(override, L1_1_PCIPM, " PCI-PM-L1.1"),812812- FLAG(override, L1_2_PCIPM, " PCI-PM-L1.2"));822822+ pci_info(pdev, "ASPM: default states%s%s\n",823823+ FLAG(override, L0S, " L0s"),824824+ FLAG(override, L1, " L1"));813825 }814826}815827
+1
drivers/platform/mellanox/mlxbf-pmc.c
···20152015 if (pmc->block[blk_num].type == MLXBF_PMC_TYPE_CRSPACE) {20162016 /* Program crspace counters to count clock cycles using "count_clock" sysfs */20172017 attr = &pmc->block[blk_num].attr_count_clock;20182018+ sysfs_attr_init(&attr->dev_attr.attr);20182019 attr->dev_attr.attr.mode = 0644;20192020 attr->dev_attr.show = mlxbf_pmc_count_clock_show;20202021 attr->dev_attr.store = mlxbf_pmc_count_clock_store;
+10-2
drivers/platform/x86/dell/alienware-wmi-wmax.c
···210210 .driver_data = &g_series_quirks,211211 },212212 {213213+ .ident = "Dell Inc. G15 5530",214214+ .matches = {215215+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),216216+ DMI_MATCH(DMI_PRODUCT_NAME, "Dell G15 5530"),217217+ },218218+ .driver_data = &g_series_quirks,219219+ },220220+ {213221 .ident = "Dell Inc. G16 7630",214222 .matches = {215223 DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),···1647163916481640static int wmax_wmi_suspend(struct device *dev)16491641{16501650- if (awcc->hwmon)16421642+ if (awcc && awcc->hwmon)16511643 awcc_hwmon_suspend(dev);1652164416531645 return 0;···1655164716561648static int wmax_wmi_resume(struct device *dev)16571649{16581658- if (awcc->hwmon)16501650+ if (awcc && awcc->hwmon)16591651 awcc_hwmon_resume(dev);1660165216611653 return 0;
+1-1
drivers/ptp/ptp_ocp.c
···25482548 for (i = 0; i < OCP_SMA_NUM; i++) {25492549 bp->sma[i].fixed_fcn = true;25502550 bp->sma[i].fixed_dir = true;25512551- bp->sma[1].dpll_prop.capabilities &=25512551+ bp->sma[i].dpll_prop.capabilities &=25522552 ~DPLL_PIN_CAPABILITIES_DIRECTION_CAN_CHANGE;25532553 }25542554 return;
+1-1
drivers/scsi/libfc/fc_fcp.c
···503503 host_bcode = FC_ERROR;504504 goto err;505505 }506506- if (offset + len > fsp->data_len) {506506+ if (size_add(offset, len) > fsp->data_len) {507507 /* this should never happen */508508 if ((fr_flags(fp) & FCPHF_CRC_UNCHECKED) &&509509 fc_frame_crc_check(fp))
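The libfc fix above relies on the kernel's size_add() from <linux/overflow.h>, which saturates to SIZE_MAX on overflow, so a bounds check like "size_add(offset, len) > fsp->data_len" cannot be defeated by a wrapped sum. A minimal stand-in with the same saturating behaviour (sketch only, not the kernel implementation):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Saturating size_t addition: a wrapped sum is detected because
 * unsigned overflow yields a result smaller than either operand. */
static size_t size_add_sat(size_t a, size_t b)
{
	size_t sum = a + b;	/* unsigned addition wraps on overflow */

	return sum < a ? SIZE_MAX : sum;
}
```

With a plain `offset + len`, a huge `len` could wrap below `data_len` and slip past the check; the saturated sum stays at SIZE_MAX and the comparison still trips.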
+4-4
drivers/scsi/qla4xxx/ql4_os.c
···41044104 * The mid-level driver tries to ensure that queuecommand never gets41054105 * invoked concurrently with itself or the interrupt handler (although41064106 * the interrupt handler may call this routine as part of request-41074107- * completion handling). Unfortunely, it sometimes calls the scheduler41074107+ * completion handling). Unfortunately, it sometimes calls the scheduler41084108 * in interrupt context which is a big NO! NO!.41094109 **/41104110static int qla4xxx_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *cmd)···46474647 cmd = scsi_host_find_tag(ha->host, index);46484648 /*46494649 * We cannot just check if the index is valid,46504650- * becase if we are run from the scsi eh, then46504650+ * because if we are run from the scsi eh, then46514651 * the scsi/block layer is going to prevent46524652 * the tag from being released.46534653 */···49524952 /* Upon successful firmware/chip reset, re-initialize the adapter */49534953 if (status == QLA_SUCCESS) {49544954 /* For ISP-4xxx, force function 1 to always initialize49554955- * before function 3 to prevent both funcions from49554955+ * before function 3 to prevent both functions from49564956 * stepping on top of the other */49574957 if (is_qla40XX(ha) && (ha->mac_index == 3))49584958 ssleep(6);···69146914 struct ddb_entry *ddb_entry = NULL;6915691569166916 /* Create session object, with INVALID_ENTRY,69176917- * the targer_id would get set when we issue the login69176917+ * the target_id would get set when we issue the login69186918 */69196919 cls_sess = iscsi_session_setup(&qla4xxx_iscsi_transport, ha->host,69206920 cmds_max, sizeof(struct ddb_entry),
+46-52
drivers/scsi/storvsc_drv.c
···14061406 }1407140714081408 /*14091409- * Our channel array is sparsely populated and we14091409+ * Our channel array could be sparsely populated and we14101410 * initiated I/O on a processor/hw-q that does not14111411 * currently have a designated channel. Fix this.14121412 * The strategy is simple:14131413- * I. Ensure NUMA locality14141414- * II. Distribute evenly (best effort)14131413+ * I. Prefer the channel associated with the current CPU14141414+ * II. Ensure NUMA locality14151415+ * III. Distribute evenly (best effort)14151416 */14171417+14181418+ /* Prefer the channel on the I/O issuing processor/hw-q */14191419+ if (cpumask_test_cpu(q_num, &stor_device->alloced_cpus))14201420+ return stor_device->stor_chns[q_num];1416142114171422 node_mask = cpumask_of_node(cpu_to_node(q_num));14181423···14741469 /* See storvsc_change_target_cpu(). */14751470 outgoing_channel = READ_ONCE(stor_device->stor_chns[q_num]);14761471 if (outgoing_channel != NULL) {14771477- if (outgoing_channel->target_cpu == q_num) {14781478- /*14791479- * Ideally, we want to pick a different channel if14801480- * available on the same NUMA node.14811481- */14821482- node_mask = cpumask_of_node(cpu_to_node(q_num));14831483- for_each_cpu_wrap(tgt_cpu,14841484- &stor_device->alloced_cpus, q_num + 1) {14851485- if (!cpumask_test_cpu(tgt_cpu, node_mask))14861486- continue;14871487- if (tgt_cpu == q_num)14881488- continue;14891489- channel = READ_ONCE(14901490- stor_device->stor_chns[tgt_cpu]);14911491- if (channel == NULL)14921492- continue;14931493- if (hv_get_avail_to_write_percent(14941494- &channel->outbound)14951495- > ring_avail_percent_lowater) {14961496- outgoing_channel = channel;14971497- goto found_channel;14981498- }14991499- }14721472+ if (hv_get_avail_to_write_percent(&outgoing_channel->outbound)14731473+ > ring_avail_percent_lowater)14741474+ goto found_channel;1500147515011501- /*15021502- * All the other channels on the same NUMA node are15031503- * busy. 
Try to use the channel on the current CPU15041504- */15051505- if (hv_get_avail_to_write_percent(15061506- &outgoing_channel->outbound)15071507- > ring_avail_percent_lowater)14761476+ /*14771477+ * Channel is busy, try to find a channel on the same NUMA node14781478+ */14791479+ node_mask = cpumask_of_node(cpu_to_node(q_num));14801480+ for_each_cpu_wrap(tgt_cpu, &stor_device->alloced_cpus,14811481+ q_num + 1) {14821482+ if (!cpumask_test_cpu(tgt_cpu, node_mask))14831483+ continue;14841484+ channel = READ_ONCE(stor_device->stor_chns[tgt_cpu]);14851485+ if (!channel)14861486+ continue;14871487+ if (hv_get_avail_to_write_percent(&channel->outbound)14881488+ > ring_avail_percent_lowater) {14891489+ outgoing_channel = channel;15081490 goto found_channel;15091509-15101510- /*15111511- * If we reach here, all the channels on the current15121512- * NUMA node are busy. Try to find a channel in15131513- * other NUMA nodes15141514- */15151515- for_each_cpu(tgt_cpu, &stor_device->alloced_cpus) {15161516- if (cpumask_test_cpu(tgt_cpu, node_mask))15171517- continue;15181518- channel = READ_ONCE(15191519- stor_device->stor_chns[tgt_cpu]);15201520- if (channel == NULL)15211521- continue;15221522- if (hv_get_avail_to_write_percent(15231523- &channel->outbound)15241524- > ring_avail_percent_lowater) {15251525- outgoing_channel = channel;15261526- goto found_channel;15271527- }15281491 }15291492 }14931493+14941494+ /*14951495+ * If we reach here, all the channels on the current14961496+ * NUMA node are busy. 
Try to find a channel in14971497+ * all NUMA nodes14981498+ */14991499+ for_each_cpu_wrap(tgt_cpu, &stor_device->alloced_cpus,15001500+ q_num + 1) {15011501+ channel = READ_ONCE(stor_device->stor_chns[tgt_cpu]);15021502+ if (!channel)15031503+ continue;15041504+ if (hv_get_avail_to_write_percent(&channel->outbound)15051505+ > ring_avail_percent_lowater) {15061506+ outgoing_channel = channel;15071507+ goto found_channel;15081508+ }15091509+ }15101510+ /*15111511+ * If we reach here, all the channels are busy. Use the15121512+ * original channel found.15131513+ */15301514 } else {15311515 spin_lock_irqsave(&stor_device->lock, flags);15321516 outgoing_channel = stor_device->stor_chns[q_num];
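The storvsc fallback order above can be condensed into a toy version (illustrative C, not driver code): starting just past the issuing hw queue, wrap around the candidate CPUs and take the first channel with enough ring headroom, else keep the original channel. `avail_pct[cpu]` stands in for the free ring percentage, -1 meaning no channel on that CPU; the real code additionally inserts a NUMA-local pass before the global one.

```c
#include <assert.h>

/* Wrap-around channel search, modelled on for_each_cpu_wrap() with a
 * start of q_num + 1.  Returns the chosen CPU index. */
static int toy_pick_channel(const int *avail_pct, int ncpus, int q_num,
			    int lowater)
{
	for (int i = 1; i <= ncpus; i++) {
		int cpu = (q_num + i) % ncpus;

		if (avail_pct[cpu] < 0)		/* no channel here */
			continue;
		if (avail_pct[cpu] > lowater)	/* enough room to write */
			return cpu;
	}
	return q_num;	/* everything busy: keep the original channel */
}

static const int toy_avail[] = { 5, -1, 40, 5 };
static const int toy_all_busy[] = { 5, 5 };
```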
+97-31
drivers/spi/spi-airoha-snfi.c
···192192#define SPI_NAND_OP_RESET 0xff193193#define SPI_NAND_OP_DIE_SELECT 0xc2194194195195+/* SNAND FIFO commands */196196+#define SNAND_FIFO_TX_BUSWIDTH_SINGLE 0x08197197+#define SNAND_FIFO_TX_BUSWIDTH_DUAL 0x09198198+#define SNAND_FIFO_TX_BUSWIDTH_QUAD 0x0a199199+#define SNAND_FIFO_RX_BUSWIDTH_SINGLE 0x0c200200+#define SNAND_FIFO_RX_BUSWIDTH_DUAL 0x0e201201+#define SNAND_FIFO_RX_BUSWIDTH_QUAD 0x0f202202+195203#define SPI_NAND_CACHE_SIZE (SZ_4K + SZ_256)196204#define SPI_MAX_TRANSFER_SIZE 511197205···395387 return regmap_write(as_ctrl->regmap_ctrl, REG_SPI_CTRL_DUMMY, 0);396388}397389398398-static int airoha_snand_write_data(struct airoha_snand_ctrl *as_ctrl, u8 cmd,399399- const u8 *data, int len)390390+static int airoha_snand_write_data(struct airoha_snand_ctrl *as_ctrl,391391+ const u8 *data, int len, int buswidth)400392{401393 int i, data_len;394394+ u8 cmd;395395+396396+ switch (buswidth) {397397+ case 0:398398+ case 1:399399+ cmd = SNAND_FIFO_TX_BUSWIDTH_SINGLE;400400+ break;401401+ case 2:402402+ cmd = SNAND_FIFO_TX_BUSWIDTH_DUAL;403403+ break;404404+ case 4:405405+ cmd = SNAND_FIFO_TX_BUSWIDTH_QUAD;406406+ break;407407+ default:408408+ return -EINVAL;409409+ }402410403411 for (i = 0; i < len; i += data_len) {404412 int err;···433409 return 0;434410}435411436436-static int airoha_snand_read_data(struct airoha_snand_ctrl *as_ctrl, u8 *data,437437- int len)412412+static int airoha_snand_read_data(struct airoha_snand_ctrl *as_ctrl,413413+ u8 *data, int len, int buswidth)438414{439415 int i, data_len;416416+ u8 cmd;417417+418418+ switch (buswidth) {419419+ case 0:420420+ case 1:421421+ cmd = SNAND_FIFO_RX_BUSWIDTH_SINGLE;422422+ break;423423+ case 2:424424+ cmd = SNAND_FIFO_RX_BUSWIDTH_DUAL;425425+ break;426426+ case 4:427427+ cmd = SNAND_FIFO_RX_BUSWIDTH_QUAD;428428+ break;429429+ default:430430+ return -EINVAL;431431+ }440432441433 for (i = 0; i < len; i += data_len) {442434 int err;443435444436 data_len = min(len - i, SPI_MAX_TRANSFER_SIZE);445445- err = 
airoha_snand_set_fifo_op(as_ctrl, 0xc, data_len);437437+ err = airoha_snand_set_fifo_op(as_ctrl, cmd, data_len);446438 if (err)447439 return err;448440···658618 if (desc->info.offset + desc->info.length > U32_MAX)659619 return -EINVAL;660620621621+ /* continuous reading is not supported */622622+ if (desc->info.length > SPI_NAND_CACHE_SIZE)623623+ return -E2BIG;624624+661625 if (!airoha_snand_supports_op(desc->mem, &desc->info.op_tmpl))662626 return -EOPNOTSUPP;663627···698654699655 err = airoha_snand_nfi_config(as_ctrl);700656 if (err)701701- return err;657657+ goto error_dma_mode_off;702658703659 dma_addr = dma_map_single(as_ctrl->dev, txrx_buf, SPI_NAND_CACHE_SIZE,704660 DMA_FROM_DEVICE);705661 err = dma_mapping_error(as_ctrl->dev, dma_addr);706662 if (err)707707- return err;663663+ goto error_dma_mode_off;708664709665 /* set dma addr */710666 err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_STRADDR,···733689 if (err)734690 goto error_dma_unmap;735691736736- /* set read addr */737737- err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_RD_CTL3, 0x0);692692+ /* set read addr: zero page offset + descriptor read offset */693693+ err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_RD_CTL3,694694+ desc->info.offset);738695 if (err)739696 goto error_dma_unmap;740697···805760error_dma_unmap:806761 dma_unmap_single(as_ctrl->dev, dma_addr, SPI_NAND_CACHE_SIZE,807762 DMA_FROM_DEVICE);763763+error_dma_mode_off:764764+ airoha_snand_set_mode(as_ctrl, SPI_MODE_MANUAL);808765 return err;809766}810767···871824 if (err)872825 goto error_dma_unmap;873826874874- err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_PG_CTL2, 0x0);827827+ /* set write addr: zero page offset + descriptor write offset */828828+ err = regmap_write(as_ctrl->regmap_nfi, REG_SPI_NFI_PG_CTL2,829829+ desc->info.offset);875830 if (err)876831 goto error_dma_unmap;877832···941892error_dma_unmap:942893 dma_unmap_single(as_ctrl->dev, dma_addr, SPI_NAND_CACHE_SIZE,943894 DMA_TO_DEVICE);895895+ 
airoha_snand_set_mode(as_ctrl, SPI_MODE_MANUAL);944896 return err;945897}946898947899static int airoha_snand_exec_op(struct spi_mem *mem,948900 const struct spi_mem_op *op)949901{950950- u8 data[8], cmd, opcode = op->cmd.opcode;951902 struct airoha_snand_ctrl *as_ctrl;903903+ int op_len, addr_len, dummy_len;904904+ u8 buf[20], *data;952905 int i, err;953906954907 as_ctrl = spi_controller_get_devdata(mem->spi->controller);908908+909909+ op_len = op->cmd.nbytes;910910+ addr_len = op->addr.nbytes;911911+ dummy_len = op->dummy.nbytes;912912+913913+ if (op_len + dummy_len + addr_len > sizeof(buf))914914+ return -EIO;915915+916916+ data = buf;917917+ for (i = 0; i < op_len; i++)918918+ *data++ = op->cmd.opcode >> (8 * (op_len - i - 1));919919+ for (i = 0; i < addr_len; i++)920920+ *data++ = op->addr.val >> (8 * (addr_len - i - 1));921921+ for (i = 0; i < dummy_len; i++)922922+ *data++ = 0xff;955923956924 /* switch to manual mode */957925 err = airoha_snand_set_mode(as_ctrl, SPI_MODE_MANUAL);···980914 return err;981915982916 /* opcode */983983- err = airoha_snand_write_data(as_ctrl, 0x8, &opcode, sizeof(opcode));917917+ data = buf;918918+ err = airoha_snand_write_data(as_ctrl, data, op_len,919919+ op->cmd.buswidth);984920 if (err)985921 return err;986922987923 /* addr part */988988- cmd = opcode == SPI_NAND_OP_GET_FEATURE ? 
0x11 : 0x8;989989- put_unaligned_be64(op->addr.val, data);990990-991991- for (i = ARRAY_SIZE(data) - op->addr.nbytes;992992- i < ARRAY_SIZE(data); i++) {993993- err = airoha_snand_write_data(as_ctrl, cmd, &data[i],994994- sizeof(data[0]));924924+ data += op_len;925925+ if (addr_len) {926926+ err = airoha_snand_write_data(as_ctrl, data, addr_len,927927+ op->addr.buswidth);995928 if (err)996929 return err;997930 }998931999932 /* dummy */10001000- data[0] = 0xff;10011001- for (i = 0; i < op->dummy.nbytes; i++) {10021002- err = airoha_snand_write_data(as_ctrl, 0x8, &data[0],10031003- sizeof(data[0]));933933+ data += addr_len;934934+ if (dummy_len) {935935+ err = airoha_snand_write_data(as_ctrl, data, dummy_len,936936+ op->dummy.buswidth);1004937 if (err)1005938 return err;1006939 }10079401008941 /* data */10091009- if (op->data.dir == SPI_MEM_DATA_IN) {10101010- err = airoha_snand_read_data(as_ctrl, op->data.buf.in,10111011- op->data.nbytes);10121012- if (err)10131013- return err;10141014- } else {10151015- err = airoha_snand_write_data(as_ctrl, 0x8, op->data.buf.out,10161016- op->data.nbytes);942942+ if (op->data.nbytes) {943943+ if (op->data.dir == SPI_MEM_DATA_IN)944944+ err = airoha_snand_read_data(as_ctrl, op->data.buf.in,945945+ op->data.nbytes,946946+ op->data.buswidth);947947+ else948948+ err = airoha_snand_write_data(as_ctrl, op->data.buf.out,949949+ op->data.nbytes,950950+ op->data.buswidth);1017951 if (err)1018952 return err;1019953 }
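The command-buffer assembly the airoha exec_op() rework introduces — opcode, address, and dummy bytes serialized MSB first into one buffer before a single FIFO write — can be sketched with an illustrative helper (not the driver function):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Serialize opcode (MSB first), address (MSB first) and 0xff dummy
 * cycles into buf; returns the byte count or -1 if buf is too small,
 * mirroring the driver's -EIO bounds check. */
static int toy_build_op_buf(uint8_t *buf, size_t buflen, uint16_t opcode,
			    int op_len, uint64_t addr, int addr_len,
			    int dummy_len)
{
	uint8_t *p = buf;

	if ((size_t)(op_len + addr_len + dummy_len) > buflen)
		return -1;

	for (int i = 0; i < op_len; i++)	/* opcode, MSB first */
		*p++ = (uint8_t)(opcode >> (8 * (op_len - i - 1)));
	for (int i = 0; i < addr_len; i++)	/* address, MSB first */
		*p++ = (uint8_t)(addr >> (8 * (addr_len - i - 1)));
	for (int i = 0; i < dummy_len; i++)	/* dummy cycles as 0xff */
		*p++ = 0xff;

	return (int)(p - buf);
}

static uint8_t toy_op_buf[20];
```

For a 1-byte opcode 0x0b, 3 address bytes of 0x123456 and one dummy byte, the buffer comes out as 0x0b 0x12 0x34 0x56 0xff.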
+2-2
drivers/spi/spi-amlogic-spifc-a4.c
···286286287287 for (i = 0; i <= LANE_MAX; i++) {288288 if (buswidth == 1 << i) {289289- conf = i << __bf_shf(mask);289289+ conf = i << __ffs(mask);290290 return regmap_update_bits(sfc->regmap_base, SFC_SPI_CFG,291291 mask, conf);292292 }···566566 if (!op->data.nbytes)567567 goto end_xfer;568568569569- conf = (op->data.nbytes >> RAW_SIZE_BW) << __bf_shf(RAW_EXT_SIZE);569569+ conf = (op->data.nbytes >> RAW_SIZE_BW) << __ffs(RAW_EXT_SIZE);570570 ret = regmap_update_bits(sfc->regmap_base, SFC_SPI_CFG, RAW_EXT_SIZE, conf);571571 if (ret)572572 goto err_out;
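For a contiguous field mask, both the old and the new expression in the amlogic change reduce to the index of the mask's lowest set bit — the shift needed to place a value into the field. A sketch of that computation (illustrative; like __ffs(), it assumes a non-zero mask):

```c
#include <assert.h>

/* Index of the lowest set bit of a non-zero mask, i.e. the left shift
 * that positions a value inside the field selected by the mask. */
static unsigned int toy_mask_shift(unsigned long mask)
{
	unsigned int shift = 0;

	while (mask && !(mask & 1UL)) {
		mask >>= 1;
		shift++;
	}
	return shift;
}
```

So placing the value 2 into a 0xf0 field yields `2 << 4 == 0x20`, which is the pattern both call sites rely on.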
+3-2
drivers/spi/spi-cadence-quadspi.c
···19951995 if (cqspi->use_direct_mode) {19961996 ret = cqspi_request_mmap_dma(cqspi);19971997 if (ret == -EPROBE_DEFER)19981998- goto probe_setup_failed;19981998+ goto probe_dma_failed;19991999 }2000200020012001 if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM))) {···2019201920202020 return 0;20212021probe_setup_failed:20222022- cqspi_controller_enable(cqspi, 0);20232022 if (!(ddata && (ddata->quirks & CQSPI_DISABLE_RUNTIME_PM)))20242023 pm_runtime_disable(dev);20242024+probe_dma_failed:20252025+ cqspi_controller_enable(cqspi, 0);20252026probe_reset_failed:20262027 if (cqspi->is_jh7110)20272028 cqspi_jh7110_disable_clk(pdev, cqspi);
+3-1
drivers/spi/spi-dw-mmio.c
···358358 if (IS_ERR(dwsmmio->rstc))359359 return PTR_ERR(dwsmmio->rstc);360360361361- reset_control_deassert(dwsmmio->rstc);361361+ ret = reset_control_deassert(dwsmmio->rstc);362362+ if (ret)363363+ return dev_err_probe(&pdev->dev, ret, "Failed to deassert resets\n");362364363365 dws->bus_num = pdev->id;364366
···404404#define FSPI_NEED_INIT BIT(0)405405#define FSPI_DTR_MODE BIT(1)406406 int flags;407407+ /* save the previous operation clock rate */408408+ unsigned long pre_op_rate;409409+ /* the max clock rate fspi output to device */410410+ unsigned long max_rate;407411};408412409413static inline int needs_ip_only(struct nxp_fspi *f)···689685 * change the mode back to mode 0.690686 */691687 reg = fspi_readl(f, f->iobase + FSPI_MCR0);692692- if (op_is_dtr)688688+ if (op_is_dtr) {693689 reg |= FSPI_MCR0_RXCLKSRC(3);694694- else /*select mode 0 */690690+ f->max_rate = 166000000;691691+ } else { /*select mode 0 */695692 reg &= ~FSPI_MCR0_RXCLKSRC(3);693693+ f->max_rate = 66000000;694694+ }696695 fspi_writel(f, reg, f->iobase + FSPI_MCR0);697696}698697···726719 0, POLL_TOUT, true);727720 if (ret)728721 dev_warn(f->dev, "DLL lock failed, please fix it!\n");722722+723723+ /*724724+ * For ERR050272, DLL lock status bit is not accurate,725725+ * wait for 4us more as a workaround.726726+ */727727+ udelay(4);729728}730729731730/*···793780 uint64_t size_kb;794781795782 /*796796- * Return, if previously selected target device is same as current797797- * requested target device. 
Also the DTR or STR mode do not change.783783+ * Return when the following conditions are all met:784784+ * 1. the previously selected target device is the same as the current785785+ * requested target device.786786+ * 2. the DTR or STR mode does not change.787787+ * 3. the previous operation's max rate equals the current one.788788+ *789789+ * In any other case, re-configuration is needed.798790 */799791 if ((f->selected == spi_get_chipselect(spi, 0)) &&800800- (!!(f->flags & FSPI_DTR_MODE) == op_is_dtr))792792+ (!!(f->flags & FSPI_DTR_MODE) == op_is_dtr) &&793793+ (f->pre_op_rate == op->max_freq))801794 return;802795803796 /* Reset FLSHxxCR0 registers */···821802 dev_dbg(f->dev, "Target device [CS:%x] selected\n", spi_get_chipselect(spi, 0));822803823804 nxp_fspi_select_rx_sample_clk_source(f, op_is_dtr);805805+ rate = min(f->max_rate, op->max_freq);824806825807 if (op_is_dtr) {826808 f->flags |= FSPI_DTR_MODE;···851831 nxp_fspi_dll_calibration(f);852832 else853833 nxp_fspi_dll_override(f);834834+835835+ f->pre_op_rate = op->max_freq;854836855837 f->selected = spi_get_chipselect(spi, 0);856838}
···182182 return retval;183183#endif184184185185- retval = agilent_82350b_write(board, buffer, 1, 0, &num_bytes);186186- *bytes_written += num_bytes;187187- if (retval < 0)188188- return retval;185185+ if (fifotransferlength > 0) {186186+ retval = agilent_82350b_write(board, buffer, 1, 0, &num_bytes);187187+ *bytes_written += num_bytes;188188+ if (retval < 0)189189+ return retval;190190+ }189191190192 write_byte(tms_priv, tms_priv->imr0_bits & ~HR_BOIE, IMR0);191193 for (i = 1; i < fifotransferlength;) {···219217 break;220218 }221219 write_byte(tms_priv, tms_priv->imr0_bits, IMR0);222222- if (retval)220220+ if (retval < 0)223221 return retval;224222225223 if (send_eoi) {
+5
drivers/staging/gpib/fmh_gpib/fmh_gpib.c
···15171517 resource_size(e_priv->gpib_iomem_res));15181518 }15191519 fmh_gpib_generic_detach(board);15201520+15211521+ if (board->dev) {15221522+ put_device(board->dev);15231523+ board->dev = NULL;15241524+ }15201525}1521152615221527static int fmh_gpib_pci_attach_impl(struct gpib_board *board,
+10-3
drivers/staging/gpib/ni_usb/ni_usb_gpib.c
···327327 board->status &= ~clear_mask;328328 board->status &= ~ni_usb_ibsta_mask;329329 board->status |= ni_usb_ibsta & ni_usb_ibsta_mask;330330- // FIXME should generate events on DTAS and DCAS330330+ if (ni_usb_ibsta & DCAS)331331+ push_gpib_event(board, EVENT_DEV_CLR);332332+ if (ni_usb_ibsta & DTAS)333333+ push_gpib_event(board, EVENT_DEV_TRG);331334332335 spin_lock_irqsave(&board->spinlock, flags);333336/* remove set status bits from monitored set why ?***/···697694 */698695 break;699696 case NIUSB_ATN_STATE_ERROR:700700- retval = -EIO;701701- dev_err(&usb_dev->dev, "read when ATN set\n");697697+ if (status.ibsta & DCAS) {698698+ retval = -EINTR;699699+ } else {700700+ retval = -EIO;701701+ dev_dbg(&usb_dev->dev, "read when ATN set stat: 0x%06x\n", status.ibsta);702702+ }702703 break;703704 case NIUSB_ADDRESSING_ERROR:704705 retval = -EIO;
+1
drivers/tee/qcomtee/Kconfig
···22# Qualcomm Trusted Execution Environment Configuration33config QCOMTEE44 tristate "Qualcomm TEE Support"55+ depends on ARCH_QCOM || COMPILE_TEST56 depends on !CPU_BIG_ENDIAN67 select QCOM_SCM78 select QCOM_TZMEM_MODE_SHMBRIDGE
+1-1
drivers/tee/qcomtee/call.c
···308308 }309309310310 /* Release any IO and OO objects not processed. */311311- for (; u[i].type && i < num_params; i++) {311311+ for (; i < num_params && u[i].type; i++) {312312 if (u[i].type == QCOMTEE_ARG_TYPE_OO ||313313 u[i].type == QCOMTEE_ARG_TYPE_IO)314314 qcomtee_object_put(u[i].o);
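The qcomtee one-liner above is a classic short-circuit ordering fix: with "i < num_params" evaluated first, u[i] is never read out of bounds even when no zero-type terminator exists inside the array. A sketch with toy types (not the qcomtee structures):

```c
#include <assert.h>

struct toy_param {
	int type;
};

/* Count consecutive non-zero-type params starting at 'start'.  The
 * bound is tested before the element, so the loop stops safely at
 * num_params even without a terminator in the array. */
static int toy_count_params(const struct toy_param *u, int num_params,
			    int start)
{
	int n = 0;

	for (int i = start; i < num_params && u[i].type; i++)
		n++;
	return n;
}

static const struct toy_param no_terminator[] = { {1}, {2}, {3} };
static const struct toy_param early_stop[] = { {1}, {0}, {3} };
```

With the operands reversed, the `no_terminator` case would dereference `u[num_params]` before noticing the index is out of range.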
···435435 while436436 (serial_in(up, MTK_UART_DEBUG0));437437438438+ clk_disable_unprepare(data->uart_clk);438439 clk_disable_unprepare(data->bus_clk);439440440441 return 0;···446445 struct mtk8250_data *data = dev_get_drvdata(dev);447446448447 clk_prepare_enable(data->bus_clk);448448+ clk_prepare_enable(data->uart_clk);449449450450 return 0;451451}···477475 int dmacnt;478476#endif479477480480- data->uart_clk = devm_clk_get(&pdev->dev, "baud");478478+ data->uart_clk = devm_clk_get_enabled(&pdev->dev, "baud");481479 if (IS_ERR(data->uart_clk)) {482480 /*483481 * For compatibility with older device trees try unnamed484482 * clk when no baud clk can be found.485483 */486486- data->uart_clk = devm_clk_get(&pdev->dev, NULL);484484+ data->uart_clk = devm_clk_get_enabled(&pdev->dev, NULL);487485 if (IS_ERR(data->uart_clk)) {488486 dev_warn(&pdev->dev, "Can't get uart clock\n");489487 return PTR_ERR(data->uart_clk);
-7
drivers/tty/serial/sc16is7xx.c
···588588 div /= prescaler;589589 }590590591591- /* Enable enhanced features */592592- sc16is7xx_efr_lock(port);593593- sc16is7xx_port_update(port, SC16IS7XX_EFR_REG,594594- SC16IS7XX_EFR_ENABLE_BIT,595595- SC16IS7XX_EFR_ENABLE_BIT);596596- sc16is7xx_efr_unlock(port);597597-598591 /* If bit MCR_CLKSEL is set, the divide by 4 prescaler is activated. */599592 sc16is7xx_port_update(port, SC16IS7XX_MCR_REG,600593 SC16IS7XX_MCR_CLKSEL_BIT,
+8-6
drivers/tty/serial/sh-sci.c
···10141014 struct sci_port *s = to_sci_port(port);10151015 const struct plat_sci_reg *reg;10161016 int copied = 0;10171017- u16 status;10171017+ u32 status;1018101810191019- reg = sci_getreg(port, s->params->overrun_reg);10201020- if (!reg->size)10211021- return 0;10191019+ if (s->type != SCI_PORT_RSCI) {10201020+ reg = sci_getreg(port, s->params->overrun_reg);10211021+ if (!reg->size)10221022+ return 0;10231023+ }1022102410231023- status = sci_serial_in(port, s->params->overrun_reg);10251025+ status = s->ops->read_reg(port, s->params->overrun_reg);10241026 if (status & s->params->overrun_mask) {10251027 status &= ~s->params->overrun_mask;10261026- sci_serial_out(port, s->params->overrun_reg, status);10281028+ s->ops->write_reg(port, s->params->overrun_reg, status);1027102910281030 port->icount.overrun++;10291031
···182182config USB_USBIO183183 tristate "Intel USBIO Bridge support"184184 depends on USB && ACPI185185+ depends on X86 || COMPILE_TEST185186 select AUXILIARY_BUS186187 help187188 This adds support for Intel USBIO drivers.
···13391339 * Don't update inode if the file type is different13401340 */13411341 umode = p9mode2unixmode(v9ses, st, &rdev);13421342- if (inode_wrong_type(inode, umode)) {13431343- /*13441344- * Do this as a way of letting the caller know the inode should not13451345- * be reused13461346- */13471347- v9fs_invalidate_inode_attr(inode);13421342+ if (inode_wrong_type(inode, umode))13481343 goto out;13491349- }1350134413511345 /*13521346 * We don't want to refresh inode->i_size,
+1-7
fs/9p/vfs_inode_dotl.c
···897897 /*898898 * Don't update inode if the file type is different899899 */900900- if (inode_wrong_type(inode, st->st_mode)) {901901- /*902902- * Do this as a way of letting the caller know the inode should not903903- * be reused904904- */905905- v9fs_invalidate_inode_attr(inode);900900+ if (inode_wrong_type(inode, st->st_mode))906901 goto out;907907- }908902909903 /*910904 * We don't want to refresh inode->i_size,
+1-1
fs/btrfs/delayed-inode.c
···2110211021112111 for (int i = 0; i < count; i++) {21122112 __btrfs_kill_delayed_node(delayed_nodes[i]);21132113+ btrfs_delayed_node_ref_tracker_dir_print(delayed_nodes[i]);21132114 btrfs_release_delayed_node(delayed_nodes[i],21142115 &delayed_node_trackers[i]);21152115- btrfs_delayed_node_ref_tracker_dir_print(delayed_nodes[i]);21162116 }21172117 }21182118}
+7
fs/btrfs/delayed-inode.h
···219219 if (!btrfs_test_opt(node->root->fs_info, REF_TRACKER))220220 return;221221222222+ /*223223+ * Only print if there are leaked references. The caller is224224+ * holding one reference, so if refs == 1 there is no leak.225225+ */226226+ if (refcount_read(&node->refs) == 1)227227+ return;228228+222229 ref_tracker_dir_print(&node->ref_dir.dir,223230 BTRFS_DELAYED_NODE_REF_TRACKER_DISPLAY_LIMIT);224231}
+1-1
fs/btrfs/ref-verify.c
···982982983983 extent_root = btrfs_extent_root(fs_info, 0);984984 /* If the extent tree is damaged we cannot ignore it (IGNOREBADROOTS). */985985- if (IS_ERR(extent_root)) {985985+ if (!extent_root) {986986 btrfs_warn(fs_info, "ref-verify: extent tree not available, disabling");987987 btrfs_clear_opt(fs_info->mount_opt, REF_VERIFY);988988 return 0;
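The ref-verify fix matters because `btrfs_extent_root()` reports failure by returning NULL, not an error pointer, so the old `IS_ERR()` test could never trigger. The distinction shows up in a simplified userspace copy of the kernel's error-pointer helpers (reduced from `<linux/err.h>`):

```c
#include <errno.h>
#include <stdbool.h>

/* Simplified userspace copies of the kernel's <linux/err.h> helpers. */
#define MAX_ERRNO 4095

static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

/* Only pointers in the top MAX_ERRNO bytes of the address space are
 * error pointers. IS_ERR(NULL) is false, which is why a lookup that
 * returns NULL on failure must be tested with !ptr, not IS_ERR(). */
static inline bool IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```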
+48-8
fs/btrfs/send.c
···41024102 return ret;41034103}4104410441054105+static int rbtree_check_dir_ref_comp(const void *k, const struct rb_node *node)41064106+{41074107+ const struct recorded_ref *data = k;41084108+ const struct recorded_ref *ref = rb_entry(node, struct recorded_ref, node);41094109+41104110+ if (data->dir > ref->dir)41114111+ return 1;41124112+ if (data->dir < ref->dir)41134113+ return -1;41144114+ if (data->dir_gen > ref->dir_gen)41154115+ return 1;41164116+ if (data->dir_gen < ref->dir_gen)41174117+ return -1;41184118+ return 0;41194119+}41204120+41214121+static bool rbtree_check_dir_ref_less(struct rb_node *node, const struct rb_node *parent)41224122+{41234123+ const struct recorded_ref *entry = rb_entry(node, struct recorded_ref, node);41244124+41254125+ return rbtree_check_dir_ref_comp(entry, parent) < 0;41264126+}41274127+41284128+static int record_check_dir_ref_in_tree(struct rb_root *root,41294129+ struct recorded_ref *ref, struct list_head *list)41304130+{41314131+ struct recorded_ref *tmp_ref;41324132+ int ret;41334133+41344134+ if (rb_find(ref, root, rbtree_check_dir_ref_comp))41354135+ return 0;41364136+41374137+ ret = dup_ref(ref, list);41384138+ if (ret < 0)41394139+ return ret;41404140+41414141+ tmp_ref = list_last_entry(list, struct recorded_ref, list);41424142+ rb_add(&tmp_ref->node, root, rbtree_check_dir_ref_less);41434143+ tmp_ref->root = root;41444144+ return 0;41454145+}41464146+41054147static int rename_current_inode(struct send_ctx *sctx,41064148 struct fs_path *current_path,41074149 struct fs_path *new_path)···41714129 struct recorded_ref *cur;41724130 struct recorded_ref *cur2;41734131 LIST_HEAD(check_dirs);41324132+ struct rb_root rbtree_check_dirs = RB_ROOT;41744133 struct fs_path *valid_path = NULL;41754134 u64 ow_inode = 0;41764135 u64 ow_gen;41774136 u64 ow_mode;41784178- u64 last_dir_ino_rm = 0;41794137 bool did_overwrite = false;41804138 bool is_orphan = false;41814139 bool can_rename = true;···44794437 goto out;44804438 }44814439 
}44824482- ret = dup_ref(cur, &check_dirs);44404440+ ret = record_check_dir_ref_in_tree(&rbtree_check_dirs, cur, &check_dirs);44834441 if (ret < 0)44844442 goto out;44854443 }···45074465 }4508446645094467 list_for_each_entry(cur, &sctx->deleted_refs, list) {45104510- ret = dup_ref(cur, &check_dirs);44684468+ ret = record_check_dir_ref_in_tree(&rbtree_check_dirs, cur, &check_dirs);45114469 if (ret < 0)45124470 goto out;45134471 }···45174475 * We have a moved dir. Add the old parent to check_dirs45184476 */45194477 cur = list_first_entry(&sctx->deleted_refs, struct recorded_ref, list);45204520- ret = dup_ref(cur, &check_dirs);44784478+ ret = record_check_dir_ref_in_tree(&rbtree_check_dirs, cur, &check_dirs);45214479 if (ret < 0)45224480 goto out;45234481 } else if (!S_ISDIR(sctx->cur_inode_mode)) {···45514509 if (is_current_inode_path(sctx, cur->full_path))45524510 fs_path_reset(&sctx->cur_inode_path);45534511 }45544554- ret = dup_ref(cur, &check_dirs);45124512+ ret = record_check_dir_ref_in_tree(&rbtree_check_dirs, cur, &check_dirs);45554513 if (ret < 0)45564514 goto out;45574515 }···45944552 ret = cache_dir_utimes(sctx, cur->dir, cur->dir_gen);45954553 if (ret < 0)45964554 goto out;45974597- } else if (ret == inode_state_did_delete &&45984598- cur->dir != last_dir_ino_rm) {45554555+ } else if (ret == inode_state_did_delete) {45994556 ret = can_rmdir(sctx, cur->dir, cur->dir_gen);46004557 if (ret < 0)46014558 goto out;···46064565 ret = send_rmdir(sctx, valid_path);46074566 if (ret < 0)46084567 goto out;46094609- last_dir_ino_rm = cur->dir;46104568 }46114569 }46124570 }
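The send.c hunks above replace repeated list scans with an rbtree keyed on `(dir, dir_gen)`, making the duplicate-parent check O(log n). The two-key comparison at the heart of `rbtree_check_dir_ref_comp()` can be sketched standalone (simplified struct, qsort-style comparator in place of the kernel rbtree API):

```c
#include <stdlib.h>

/* Minimal stand-in for the patch's recorded_ref key: the parent
 * directory inode number and its generation. */
struct ref_key {
	unsigned long long dir;
	unsigned long long dir_gen;
};

/* Order by dir first, then dir_gen - the same shape as the patch's
 * rbtree_check_dir_ref_comp(), usable with qsort()/bsearch() here. */
int ref_key_cmp(const void *a, const void *b)
{
	const struct ref_key *x = a, *y = b;

	if (x->dir != y->dir)
		return x->dir > y->dir ? 1 : -1;
	if (x->dir_gen != y->dir_gen)
		return x->dir_gen > y->dir_gen ? 1 : -1;
	return 0;
}
```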
+7-1
fs/btrfs/super.c
···20682068 fs_info->super_copy = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);20692069 fs_info->super_for_commit = kzalloc(BTRFS_SUPER_INFO_SIZE, GFP_KERNEL);20702070 if (!fs_info->super_copy || !fs_info->super_for_commit) {20712071- btrfs_free_fs_info(fs_info);20712071+ /*20722072+ * Don't call btrfs_free_fs_info() to free it as it's still20732073+ * only partially initialized.20742074+ */20752075+ kfree(fs_info->super_copy);20762076+ kfree(fs_info->super_for_commit);20772077+ kvfree(fs_info);20722078 return -ENOMEM;20732079 }20742080 btrfs_init_fs_info(fs_info);
+30-29
fs/erofs/zmap.c
···5555 } else {5656 m->partialref = !!(advise & Z_EROFS_LI_PARTIAL_REF);5757 m->clusterofs = le16_to_cpu(di->di_clusterofs);5858- if (m->clusterofs >= 1 << vi->z_lclusterbits) {5959- DBG_BUGON(1);6060- return -EFSCORRUPTED;6161- }6258 m->pblk = le32_to_cpu(di->di_u.blkaddr);6359 }6460 return 0;···236240static int z_erofs_load_lcluster_from_disk(struct z_erofs_maprecorder *m,237241 unsigned int lcn, bool lookahead)238242{243243+ struct erofs_inode *vi = EROFS_I(m->inode);244244+ int err;245245+246246+ if (vi->datalayout == EROFS_INODE_COMPRESSED_COMPACT) {247247+ err = z_erofs_load_compact_lcluster(m, lcn, lookahead);248248+ } else {249249+ DBG_BUGON(vi->datalayout != EROFS_INODE_COMPRESSED_FULL);250250+ err = z_erofs_load_full_lcluster(m, lcn);251251+ }252252+ if (err)253253+ return err;254254+239255 if (m->type >= Z_EROFS_LCLUSTER_TYPE_MAX) {240256 erofs_err(m->inode->i_sb, "unknown type %u @ lcn %u of nid %llu",241241- m->type, lcn, EROFS_I(m->inode)->nid);257257+ m->type, lcn, EROFS_I(m->inode)->nid);242258 DBG_BUGON(1);243259 return -EOPNOTSUPP;260260+ } else if (m->type != Z_EROFS_LCLUSTER_TYPE_NONHEAD &&261261+ m->clusterofs >= (1 << vi->z_lclusterbits)) {262262+ DBG_BUGON(1);263263+ return -EFSCORRUPTED;244264 }245245-246246- switch (EROFS_I(m->inode)->datalayout) {247247- case EROFS_INODE_COMPRESSED_FULL:248248- return z_erofs_load_full_lcluster(m, lcn);249249- case EROFS_INODE_COMPRESSED_COMPACT:250250- return z_erofs_load_compact_lcluster(m, lcn, lookahead);251251- default:252252- return -EINVAL;253253- }265265+ return 0;254266}255267256268static int z_erofs_extent_lookback(struct z_erofs_maprecorder *m,···272268 unsigned long lcn = m->lcn - lookback_distance;273269 int err;274270271271+ if (!lookback_distance)272272+ break;273273+275274 err = z_erofs_load_lcluster_from_disk(m, lcn, false);276275 if (err)277276 return err;278278-279277 if (m->type == Z_EROFS_LCLUSTER_TYPE_NONHEAD) {280278 lookback_distance = m->delta[0];281281- if (!lookback_distance)282282- break;283279 continue;284284- } else {285285- m->headtype = m->type;286286- m->map->m_la = (lcn << lclusterbits) | m->clusterofs;287287- return 0;288280 }281281+ m->headtype = m->type;282282+ m->map->m_la = (lcn << lclusterbits) | m->clusterofs;283283+ return 0;289284 }290285 erofs_err(sb, "bogus lookback distance %u @ lcn %lu of nid %llu",291286 lookback_distance, m->lcn, vi->nid);···434431 end = inode->i_size;435432 } else {436433 if (m.type != Z_EROFS_LCLUSTER_TYPE_NONHEAD) {437437- /* m.lcn should be >= 1 if endoff < m.clusterofs */438438- if (!m.lcn) {439439- erofs_err(sb, "invalid logical cluster 0 at nid %llu",440440- vi->nid);441441- err = -EFSCORRUPTED;442442- goto unmap_out;443443- }444434 end = (m.lcn << lclusterbits) | m.clusterofs;445435 map->m_flags |= EROFS_MAP_FULL_MAPPED;446436 m.delta[0] = 1;···592596 vi->z_fragmentoff = map->m_plen;593597 if (recsz > offsetof(struct z_erofs_extent, pstart_lo))594598 vi->z_fragmentoff |= map->m_pa << 32;595595- } else if (map->m_plen) {599599+ } else if (map->m_plen & Z_EROFS_EXTENT_PLEN_MASK) {596600 map->m_flags |= EROFS_MAP_MAPPED |597601 EROFS_MAP_FULL_MAPPED | EROFS_MAP_ENCODED;598602 fmt = map->m_plen >> Z_EROFS_EXTENT_PLEN_FMT_BIT;···711715 struct erofs_map_blocks *map)712716{713717 struct erofs_sb_info *sbi = EROFS_I_SB(inode);718718+ u64 pend;714719715720 if (!(map->m_flags & EROFS_MAP_ENCODED))716721 return 0;···729732 if (unlikely(map->m_plen > Z_EROFS_PCLUSTER_MAX_SIZE ||730733 map->m_llen > Z_EROFS_PCLUSTER_MAX_DSIZE))731734 return -EOPNOTSUPP;735735+ /* Filesystems beyond 48-bit physical block addresses are invalid */736736+ if (unlikely(check_add_overflow(map->m_pa, map->m_plen, &pend) ||737737+ (pend >> sbi->blkszbits) >= BIT_ULL(48)))738738+ return -EFSCORRUPTED;732739 return 0;733740}734741
-9
fs/hugetlbfs/inode.c
···478478 if (!hugetlb_vma_trylock_write(vma))479479 continue;480480481481- /*482482- * Skip VMAs without shareable locks. Per the design in commit483483- * 40549ba8f8e0, these will be handled by remove_inode_hugepages()484484- * called after this function with proper locking.485485- */486486- if (!__vma_shareable_lock(vma))487487- goto skip;488488-489481 v_start = vma_offset_start(vma, start);490482 v_end = vma_offset_end(vma, end);491483···488496 * vmas. Therefore, lock is not held when calling489497 * unmap_hugepage_range for private vmas.490498 */491491-skip:492499 hugetlb_vma_unlock_write(vma);493500 }494501}
···867867 mlog_errno(ret);868868 goto out;869869 }870870+ /*871871+ * Invalidate extent cache after moving/defragging to prevent872872+ * stale cached data with outdated extent flags.873873+ */874874+ ocfs2_extent_map_trunc(inode, cpos);870875871876 context->clusters_moved += alloc_size;872877next:
+7-9
fs/resctrl/monitor.c
···17821782 mba_mbps_default_event = QOS_L3_MBM_TOTAL_EVENT_ID;1783178317841784 if (r->mon.mbm_cntr_assignable) {17851785- if (!resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))17861786- resctrl_enable_mon_event(QOS_L3_MBM_TOTAL_EVENT_ID);17871787- if (!resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))17881788- resctrl_enable_mon_event(QOS_L3_MBM_LOCAL_EVENT_ID);17891789- mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].evt_cfg = r->mon.mbm_cfg_mask;17901790- mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID].evt_cfg = r->mon.mbm_cfg_mask &17911791- (READS_TO_LOCAL_MEM |17921792- READS_TO_LOCAL_S_MEM |17931793- NON_TEMP_WRITE_TO_LOCAL_MEM);17851785+ if (resctrl_is_mon_event_enabled(QOS_L3_MBM_TOTAL_EVENT_ID))17861786+ mon_event_all[QOS_L3_MBM_TOTAL_EVENT_ID].evt_cfg = r->mon.mbm_cfg_mask;17871787+ if (resctrl_is_mon_event_enabled(QOS_L3_MBM_LOCAL_EVENT_ID))17881788+ mon_event_all[QOS_L3_MBM_LOCAL_EVENT_ID].evt_cfg = r->mon.mbm_cfg_mask &17891789+ (READS_TO_LOCAL_MEM |17901790+ READS_TO_LOCAL_S_MEM |17911791+ NON_TEMP_WRITE_TO_LOCAL_MEM);17941792 r->mon.mbm_assign_on_mkdir = true;17951793 resctrl_file_fflags_init("num_mbm_cntrs",17961794 RFTYPE_MON_INFO | RFTYPE_RES_CACHE);
+1-3
fs/smb/client/cifsglob.h
···534534 void (*new_lease_key)(struct cifs_fid *);535535 int (*generate_signingkey)(struct cifs_ses *ses,536536 struct TCP_Server_Info *server);537537- int (*calc_signature)(struct smb_rqst *, struct TCP_Server_Info *,538538- bool allocate_crypto);539537 int (*set_integrity)(const unsigned int, struct cifs_tcon *tcon,540538 struct cifsFileInfo *src_file);541539 int (*enum_snapshots)(const unsigned int xid, struct cifs_tcon *tcon,···730732 bool nosharesock;731733 bool tcp_nodelay;732734 bool terminate;733733- unsigned int credits; /* send no more requests at once */735735+ int credits; /* send no more requests at once */734736 unsigned int max_credits; /* can override large 32000 default at mnt */735737 unsigned int in_flight; /* number of requests on the wire to server */736738 unsigned int max_in_flight; /* max number of requests that were on wire */
···172172 * in order to notice the broken connection.173173 */174174 wake_up_all(&sc->status_wait);175175+ wake_up_all(&sc->send_io.lcredits.wait_queue);175176 wake_up_all(&sc->send_io.credits.wait_queue);176177 wake_up_all(&sc->send_io.pending.dec_wait_queue);177178 wake_up_all(&sc->send_io.pending.zero_wait_queue);···496495 struct smbdirect_send_io *request =497496 container_of(wc->wr_cqe, struct smbdirect_send_io, cqe);498497 struct smbdirect_socket *sc = request->socket;498498+ int lcredits = 0;499499500500 log_rdma_send(INFO, "smbdirect_send_io 0x%p completed wc->status=%s\n",501501 request, ib_wc_status_msg(wc->status));···506504 request->sge[i].addr,507505 request->sge[i].length,508506 DMA_TO_DEVICE);507507+ mempool_free(request, sc->send_io.mem.pool);508508+ lcredits += 1;509509510510 if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {511511 if (wc->status != IB_WC_WR_FLUSH_ERR)512512 log_rdma_send(ERR, "wc->status=%s wc->opcode=%d\n",513513 ib_wc_status_msg(wc->status), wc->opcode);514514- mempool_free(request, sc->send_io.mem.pool);515514 smbd_disconnect_rdma_connection(sc);516515 return;517516 }517517+518518+ atomic_add(lcredits, &sc->send_io.lcredits.count);519519+ wake_up(&sc->send_io.lcredits.wait_queue);518520519521 if (atomic_dec_and_test(&sc->send_io.pending.count))520522 wake_up(&sc->send_io.pending.zero_wait_queue);521523522524 wake_up(&sc->send_io.pending.dec_wait_queue);523523-524524- mempool_free(request, sc->send_io.mem.pool);525525}526526527527static void dump_smbdirect_negotiate_resp(struct smbdirect_negotiate_resp *resp)···571567 log_rdma_event(ERR, "error: credits_granted==0\n");572568 return false;573569 }570570+ atomic_set(&sc->send_io.lcredits.count, sp->send_credit_target);574571 atomic_set(&sc->send_io.credits.count, le16_to_cpu(packet->credits_granted));575572576573 if (le32_to_cpu(packet->preferred_send_size) > sp->max_recv_size) {···11191114 struct smbdirect_data_transfer *packet;11201115 int new_credits = 
0;1121111611171117+wait_lcredit:11181118+ /* Wait for local send credits */11191119+ rc = wait_event_interruptible(sc->send_io.lcredits.wait_queue,11201120+ atomic_read(&sc->send_io.lcredits.count) > 0 ||11211121+ sc->status != SMBDIRECT_SOCKET_CONNECTED);11221122+ if (rc)11231123+ goto err_wait_lcredit;11241124+11251125+ if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {11261126+ log_outgoing(ERR, "disconnected not sending on wait_credit\n");11271127+ rc = -EAGAIN;11281128+ goto err_wait_lcredit;11291129+ }11301130+ if (unlikely(atomic_dec_return(&sc->send_io.lcredits.count) < 0)) {11311131+ atomic_inc(&sc->send_io.lcredits.count);11321132+ goto wait_lcredit;11331133+ }11341134+11221135wait_credit:11231136 /* Wait for send credits. A SMBD packet needs one credit */11241137 rc = wait_event_interruptible(sc->send_io.credits.wait_queue,···11531130 if (unlikely(atomic_dec_return(&sc->send_io.credits.count) < 0)) {11541131 atomic_inc(&sc->send_io.credits.count);11551132 goto wait_credit;11561156- }11571157-11581158-wait_send_queue:11591159- wait_event(sc->send_io.pending.dec_wait_queue,11601160- atomic_read(&sc->send_io.pending.count) < sp->send_credit_target ||11611161- sc->status != SMBDIRECT_SOCKET_CONNECTED);11621162-11631163- if (sc->status != SMBDIRECT_SOCKET_CONNECTED) {11641164- log_outgoing(ERR, "disconnected not sending on wait_send_queue\n");11651165- rc = -EAGAIN;11661166- goto err_wait_send_queue;11671167- }11681168-11691169- if (unlikely(atomic_inc_return(&sc->send_io.pending.count) >11701170- sp->send_credit_target)) {11711171- atomic_dec(&sc->send_io.pending.count);11721172- goto wait_send_queue;11731133 }1174113411751135 request = mempool_alloc(sc->send_io.mem.pool, GFP_KERNEL);···12351229 le32_to_cpu(packet->data_length),12361230 le32_to_cpu(packet->remaining_data_length));1237123112321232+ /*12331233+ * Now that we got a local and a remote credit12341234+ * we add us as pending12351235+ */12361236+ 
atomic_inc(&sc->send_io.pending.count);12371237+12381238 rc = smbd_post_send(sc, request);12391239 if (!rc)12401240 return 0;12411241+12421242+ if (atomic_dec_and_test(&sc->send_io.pending.count))12431243+ wake_up(&sc->send_io.pending.zero_wait_queue);12441244+12451245+ wake_up(&sc->send_io.pending.dec_wait_queue);1241124612421247err_dma:12431248 for (i = 0; i < request->num_sge; i++)···12631246 atomic_sub(new_credits, &sc->recv_io.credits.count);1264124712651248err_alloc:12661266- if (atomic_dec_and_test(&sc->send_io.pending.count))12671267- wake_up(&sc->send_io.pending.zero_wait_queue);12681268-12691269-err_wait_send_queue:12701270- /* roll back send credits and pending */12711249 atomic_inc(&sc->send_io.credits.count);12501250+ wake_up(&sc->send_io.credits.wait_queue);1272125112731252err_wait_credit:12531253+ atomic_inc(&sc->send_io.lcredits.count);12541254+ wake_up(&sc->send_io.lcredits.wait_queue);12551255+12561256+err_wait_lcredit:12741257 return rc;12751258}12761259···17841767 struct smbdirect_socket *sc;17851768 struct smbdirect_socket_parameters *sp;17861769 struct rdma_conn_param conn_param;17701770+ struct ib_qp_cap qp_cap;17871771 struct ib_qp_init_attr qp_attr;17881772 struct sockaddr_in *addr_in = (struct sockaddr_in *) dstaddr;17891773 struct ib_port_immutable port_immutable;···18561838 goto config_failed;18571839 }1858184018411841+ sp->responder_resources =18421842+ min_t(u8, sp->responder_resources,18431843+ sc->ib.dev->attrs.max_qp_rd_atom);18441844+ log_rdma_mr(INFO, "responder_resources=%d\n",18451845+ sp->responder_resources);18461846+18471847+ /*18481848+ * We use allocate sp->responder_resources * 2 MRs18491849+ * and each MR needs WRs for REG and INV, so18501850+ * we use '* 4'.18511851+ *18521852+ * +1 for ib_drain_qp()18531853+ */18541854+ memset(&qp_cap, 0, sizeof(qp_cap));18551855+ qp_cap.max_send_wr = sp->send_credit_target + sp->responder_resources * 4 + 1;18561856+ qp_cap.max_recv_wr = sp->recv_credit_max + 1;18571857+ 
qp_cap.max_send_sge = SMBDIRECT_SEND_IO_MAX_SGE;18581858+ qp_cap.max_recv_sge = SMBDIRECT_RECV_IO_MAX_SGE;18591859+18591860 sc->ib.pd = ib_alloc_pd(sc->ib.dev, 0);18601861 if (IS_ERR(sc->ib.pd)) {18611862 rc = PTR_ERR(sc->ib.pd);···1885184818861849 sc->ib.send_cq =18871850 ib_alloc_cq_any(sc->ib.dev, sc,18881888- sp->send_credit_target, IB_POLL_SOFTIRQ);18511851+ qp_cap.max_send_wr, IB_POLL_SOFTIRQ);18891852 if (IS_ERR(sc->ib.send_cq)) {18901853 sc->ib.send_cq = NULL;18911854 goto alloc_cq_failed;···1893185618941857 sc->ib.recv_cq =18951858 ib_alloc_cq_any(sc->ib.dev, sc,18961896- sp->recv_credit_max, IB_POLL_SOFTIRQ);18591859+ qp_cap.max_recv_wr, IB_POLL_SOFTIRQ);18971860 if (IS_ERR(sc->ib.recv_cq)) {18981861 sc->ib.recv_cq = NULL;18991862 goto alloc_cq_failed;···19021865 memset(&qp_attr, 0, sizeof(qp_attr));19031866 qp_attr.event_handler = smbd_qp_async_error_upcall;19041867 qp_attr.qp_context = sc;19051905- qp_attr.cap.max_send_wr = sp->send_credit_target;19061906- qp_attr.cap.max_recv_wr = sp->recv_credit_max;19071907- qp_attr.cap.max_send_sge = SMBDIRECT_SEND_IO_MAX_SGE;19081908- qp_attr.cap.max_recv_sge = SMBDIRECT_RECV_IO_MAX_SGE;19091909- qp_attr.cap.max_inline_data = 0;18681868+ qp_attr.cap = qp_cap;19101869 qp_attr.sq_sig_type = IB_SIGNAL_REQ_WR;19111870 qp_attr.qp_type = IB_QPT_RC;19121871 qp_attr.send_cq = sc->ib.send_cq;···19151882 goto create_qp_failed;19161883 }19171884 sc->ib.qp = sc->rdma.cm_id->qp;19181918-19191919- sp->responder_resources =19201920- min_t(u8, sp->responder_resources,19211921- sc->ib.dev->attrs.max_qp_rd_atom);19221922- log_rdma_mr(INFO, "responder_resources=%d\n",19231923- sp->responder_resources);1924188519251886 memset(&conn_param, 0, sizeof(conn_param));19261887 conn_param.initiator_depth = sp->initiator_depth;
+1
fs/smb/client/trace.c
···44 *55 * Author(s): Steve French <stfrench@microsoft.com>66 */77+#include "cifsglob.h"78#define CREATE_TRACE_POINTS89#include "trace.h"
+12-1
fs/smb/common/smbdirect/smbdirect_socket.h
···142142 } mem;143143144144 /*145145- * The credit state for the send side145145+ * The local credit state for ib_post_send()146146+ */147147+ struct {148148+ atomic_t count;149149+ wait_queue_head_t wait_queue;150150+ } lcredits;151151+152152+ /*153153+ * The remote credit state for the send side146154 */147155 struct {148156 atomic_t count;···344336 disable_work_sync(&sc->idle.immediate_work);345337 INIT_DELAYED_WORK(&sc->idle.timer_work, __smbdirect_socket_disabled_work);346338 disable_delayed_work_sync(&sc->idle.timer_work);339339+340340+ atomic_set(&sc->send_io.lcredits.count, 0);341341+ init_waitqueue_head(&sc->send_io.lcredits.wait_queue);347342348343 atomic_set(&sc->send_io.credits.count, 0);349344 init_waitqueue_head(&sc->send_io.credits.wait_queue);
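The new `lcredits` counter added above is consumed with the same take/roll-back discipline the code already uses for remote credits: decrement atomically, and if the count went negative, restore it and wait or retry. A non-blocking sketch of that pattern with C11 atomics (the wait-queue side is omitted):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Take one credit, or fail without blocking. Mirrors the
 * atomic_dec_return()/atomic_inc() rollback pattern the patch uses
 * for send_io.lcredits. */
bool try_take_credit(atomic_int *credits)
{
	if (atomic_fetch_sub(credits, 1) - 1 < 0) {
		atomic_fetch_add(credits, 1); /* roll back: none left */
		return false;
	}
	return true;
}

/* Return a credit, e.g. from a send-completion handler. */
void put_credit(atomic_int *credits)
{
	atomic_fetch_add(credits, 1);
}
```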
+219-125
fs/smb/server/transport_rdma.c
···219219 * in order to notice the broken connection.220220 */221221 wake_up_all(&sc->status_wait);222222+ wake_up_all(&sc->send_io.lcredits.wait_queue);222223 wake_up_all(&sc->send_io.credits.wait_queue);223224 wake_up_all(&sc->send_io.pending.zero_wait_queue);224225 wake_up_all(&sc->recv_io.reassembly.wait_queue);···451450 struct smbdirect_recv_io *recvmsg;452451453452 disable_work_sync(&sc->disconnect_work);454454- if (sc->status < SMBDIRECT_SOCKET_DISCONNECTING) {453453+ if (sc->status < SMBDIRECT_SOCKET_DISCONNECTING)455454 smb_direct_disconnect_rdma_work(&sc->disconnect_work);456456- wait_event_interruptible(sc->status_wait,457457- sc->status == SMBDIRECT_SOCKET_DISCONNECTED);458458- }455455+ if (sc->status < SMBDIRECT_SOCKET_DISCONNECTED)456456+ wait_event(sc->status_wait, sc->status == SMBDIRECT_SOCKET_DISCONNECTED);459457460458 /*461459 * Wake up all waiters in all wait queues···471471472472 if (sc->ib.qp) {473473 ib_drain_qp(sc->ib.qp);474474- ib_mr_pool_destroy(sc->ib.qp, &sc->ib.qp->rdma_mrs);475474 sc->ib.qp = NULL;476475 rdma_destroy_qp(sc->rdma.cm_id);477476 }···522523 struct smbdirect_send_io *msg)523524{524525 int i;526526+527527+ /*528528+ * The list needs to be empty!529529+ * The caller should take care of it.530530+ */531531+ WARN_ON_ONCE(!list_empty(&msg->sibling_list));525532526533 if (msg->num_sge > 0) {527534 ib_dma_unmap_single(sc->ib.dev,···914909915910static void send_done(struct ib_cq *cq, struct ib_wc *wc)916911{917917- struct smbdirect_send_io *sendmsg, *sibling;912912+ struct smbdirect_send_io *sendmsg, *sibling, *next;918913 struct smbdirect_socket *sc;919919- struct list_head *pos, *prev, *end;914914+ int lcredits = 0;920915921916 sendmsg = container_of(wc->wr_cqe, struct smbdirect_send_io, cqe);922917 sc = sendmsg->socket;···925920 ib_wc_status_msg(wc->status), wc->status,926921 wc->opcode);927922923923+ /*924924+ * Free possible siblings and then the main send_io925925+ */926926+ list_for_each_entry_safe(sibling, next, 
&sendmsg->sibling_list, sibling_list) {927927+ list_del_init(&sibling->sibling_list);928928+ smb_direct_free_sendmsg(sc, sibling);929929+ lcredits += 1;930930+ }931931+ /* Note this frees wc->wr_cqe, but not wc */932932+ smb_direct_free_sendmsg(sc, sendmsg);933933+ lcredits += 1;934934+928935 if (wc->status != IB_WC_SUCCESS || wc->opcode != IB_WC_SEND) {929936 pr_err("Send error. status='%s (%d)', opcode=%d\n",930937 ib_wc_status_msg(wc->status), wc->status,931938 wc->opcode);932939 smb_direct_disconnect_rdma_connection(sc);940940+ return;933941 }942942+943943+ atomic_add(lcredits, &sc->send_io.lcredits.count);944944+ wake_up(&sc->send_io.lcredits.wait_queue);934945935946 if (atomic_dec_and_test(&sc->send_io.pending.count))936947 wake_up(&sc->send_io.pending.zero_wait_queue);937937-938938- /* iterate and free the list of messages in reverse. the list's head939939- * is invalid.940940- */941941- for (pos = &sendmsg->sibling_list, prev = pos->prev, end = sendmsg->sibling_list.next;942942- prev != end; pos = prev, prev = prev->prev) {943943- sibling = container_of(pos, struct smbdirect_send_io, sibling_list);944944- smb_direct_free_sendmsg(sc, sibling);945945- }946946-947947- sibling = container_of(pos, struct smbdirect_send_io, sibling_list);948948- smb_direct_free_sendmsg(sc, sibling);949948}950949951950static int manage_credits_prior_sending(struct smbdirect_socket *sc)···997988 ret = ib_post_send(sc->ib.qp, wr, NULL);998989 if (ret) {999990 pr_err("failed to post send: %d\n", ret);10001000- if (atomic_dec_and_test(&sc->send_io.pending.count))10011001- wake_up(&sc->send_io.pending.zero_wait_queue);1002991 smb_direct_disconnect_rdma_connection(sc);1003992 }1004993 return ret;···10391032 last->wr.send_flags = IB_SEND_SIGNALED;10401033 last->wr.wr_cqe = &last->cqe;1041103410351035+ /*10361036+ * Remove last from send_ctx->msg_list10371037+ * and splice the rest of send_ctx->msg_list10381038+ * to last->sibling_list.10391039+ *10401040+ * send_ctx->msg_list is a valid 
empty list
+	 * at the end.
+	 */
+	list_del_init(&last->sibling_list);
+	list_splice_tail_init(&send_ctx->msg_list, &last->sibling_list);
+	send_ctx->wr_cnt = 0;
+
 	ret = smb_direct_post_send(sc, &first->wr);
-	if (!ret) {
-		smb_direct_send_ctx_init(send_ctx,
-					 send_ctx->need_invalidate_rkey,
-					 send_ctx->remote_key);
-	} else {
-		atomic_add(send_ctx->wr_cnt, &sc->send_io.credits.count);
-		wake_up(&sc->send_io.credits.wait_queue);
-		list_for_each_entry_safe(first, last, &send_ctx->msg_list,
-					 sibling_list) {
-			smb_direct_free_sendmsg(sc, first);
+	if (ret) {
+		struct smbdirect_send_io *sibling, *next;
+
+		list_for_each_entry_safe(sibling, next, &last->sibling_list, sibling_list) {
+			list_del_init(&sibling->sibling_list);
+			smb_direct_free_sendmsg(sc, sibling);
 		}
+		smb_direct_free_sendmsg(sc, last);
 	}
+
 	return ret;
 }
···
 		else if (ret < 0)
 			return ret;
 	} while (true);
+}
+
+static int wait_for_send_lcredit(struct smbdirect_socket *sc,
+				 struct smbdirect_send_batch *send_ctx)
+{
+	if (send_ctx && (atomic_read(&sc->send_io.lcredits.count) <= 1)) {
+		int ret;
+
+		ret = smb_direct_flush_send_list(sc, send_ctx, false);
+		if (ret)
+			return ret;
+	}
+
+	return wait_for_credits(sc,
+				&sc->send_io.lcredits.wait_queue,
+				&sc->send_io.lcredits.count,
+				1);
 }
 
 static int wait_for_send_credits(struct smbdirect_socket *sc,
···
 	int data_length;
 	struct scatterlist sg[SMBDIRECT_SEND_IO_MAX_SGE - 1];
 
+	ret = wait_for_send_lcredit(sc, send_ctx);
+	if (ret)
+		goto lcredit_failed;
+
 	ret = wait_for_send_credits(sc, send_ctx);
 	if (ret)
-		return ret;
+		goto credit_failed;
 
 	data_length = 0;
 	for (i = 0; i < niov; i++)
···
 
 	ret = smb_direct_create_header(sc, data_length, remaining_data_length,
 				       &msg);
-	if (ret) {
-		atomic_inc(&sc->send_io.credits.count);
-		return ret;
-	}
+	if (ret)
+		goto header_failed;
 
 	for (i = 0; i < niov; i++) {
 		struct ib_sge *sge;
···
 	return 0;
 err:
 	smb_direct_free_sendmsg(sc, msg);
+header_failed:
 	atomic_inc(&sc->send_io.credits.count);
+credit_failed:
+	atomic_inc(&sc->send_io.lcredits.count);
+lcredit_failed:
 	return ret;
 }
···
 	return ret;
 }
 
-static unsigned int smb_direct_get_max_fr_pages(struct smbdirect_socket *sc)
-{
-	return min_t(unsigned int,
-		     sc->ib.dev->attrs.max_fast_reg_page_list_len,
-		     256);
-}
-
-static int smb_direct_init_params(struct smbdirect_socket *sc,
-				  struct ib_qp_cap *cap)
+static int smb_direct_init_params(struct smbdirect_socket *sc)
 {
 	struct smbdirect_socket_parameters *sp = &sc->parameters;
-	struct ib_device *device = sc->ib.dev;
-	int max_send_sges, max_rw_wrs, max_send_wrs;
-	unsigned int max_sge_per_wr, wrs_per_credit;
+	int max_send_sges;
+	unsigned int maxpages;
 
 	/* need 3 more sge. because a SMB_DIRECT header, SMB2 header,
 	 * SMB2 response could be mapped.
···
 		return -EINVAL;
 	}
 
-	/* Calculate the number of work requests for RDMA R/W.
-	 * The maximum number of pages which can be registered
-	 * with one Memory region can be transferred with one
-	 * R/W credit. And at least 4 work requests for each credit
-	 * are needed for MR registration, RDMA R/W, local & remote
-	 * MR invalidation.
-	 */
-	sc->rw_io.credits.num_pages = smb_direct_get_max_fr_pages(sc);
-	sc->rw_io.credits.max = DIV_ROUND_UP(sp->max_read_write_size,
-					     (sc->rw_io.credits.num_pages - 1) *
-					     PAGE_SIZE);
+	atomic_set(&sc->send_io.lcredits.count, sp->send_credit_target);
 
-	max_sge_per_wr = min_t(unsigned int, device->attrs.max_send_sge,
-			       device->attrs.max_sge_rd);
-	max_sge_per_wr = max_t(unsigned int, max_sge_per_wr,
-			       max_send_sges);
-	wrs_per_credit = max_t(unsigned int, 4,
-			       DIV_ROUND_UP(sc->rw_io.credits.num_pages,
-					    max_sge_per_wr) + 1);
-	max_rw_wrs = sc->rw_io.credits.max * wrs_per_credit;
-
-	max_send_wrs = sp->send_credit_target + max_rw_wrs;
-	if (max_send_wrs > device->attrs.max_cqe ||
-	    max_send_wrs > device->attrs.max_qp_wr) {
-		pr_err("consider lowering send_credit_target = %d\n",
-		       sp->send_credit_target);
-		pr_err("Possible CQE overrun, device reporting max_cqe %d max_qp_wr %d\n",
-		       device->attrs.max_cqe, device->attrs.max_qp_wr);
-		return -EINVAL;
-	}
-
-	if (sp->recv_credit_max > device->attrs.max_cqe ||
-	    sp->recv_credit_max > device->attrs.max_qp_wr) {
-		pr_err("consider lowering receive_credit_max = %d\n",
-		       sp->recv_credit_max);
-		pr_err("Possible CQE overrun, device reporting max_cpe %d max_qp_wr %d\n",
-		       device->attrs.max_cqe, device->attrs.max_qp_wr);
-		return -EINVAL;
-	}
-
-	if (device->attrs.max_send_sge < SMBDIRECT_SEND_IO_MAX_SGE) {
-		pr_err("warning: device max_send_sge = %d too small\n",
-		       device->attrs.max_send_sge);
-		return -EINVAL;
-	}
-	if (device->attrs.max_recv_sge < SMBDIRECT_RECV_IO_MAX_SGE) {
-		pr_err("warning: device max_recv_sge = %d too small\n",
-		       device->attrs.max_recv_sge);
-		return -EINVAL;
-	}
+	maxpages = DIV_ROUND_UP(sp->max_read_write_size, PAGE_SIZE);
+	sc->rw_io.credits.max = rdma_rw_mr_factor(sc->ib.dev,
+						  sc->rdma.cm_id->port_num,
+						  maxpages);
+	sc->rw_io.credits.num_pages = DIV_ROUND_UP(maxpages, sc->rw_io.credits.max);
+	/* add one extra in order to handle unaligned pages */
+	sc->rw_io.credits.max += 1;
 
 	sc->recv_io.credits.target = 1;
 
 	atomic_set(&sc->rw_io.credits.count, sc->rw_io.credits.max);
 
-	cap->max_send_wr = max_send_wrs;
-	cap->max_recv_wr = sp->recv_credit_max;
-	cap->max_send_sge = SMBDIRECT_SEND_IO_MAX_SGE;
-	cap->max_recv_sge = SMBDIRECT_RECV_IO_MAX_SGE;
-	cap->max_inline_data = 0;
-	cap->max_rdma_ctxs = sc->rw_io.credits.max;
 	return 0;
 }
···
 	return -ENOMEM;
 }
 
-static int smb_direct_create_qpair(struct smbdirect_socket *sc,
-				   struct ib_qp_cap *cap)
+static u32 smb_direct_rdma_rw_send_wrs(struct ib_device *dev, const struct ib_qp_init_attr *attr)
+{
+	/*
+	 * This could be split out of rdma_rw_init_qp()
+	 * and be a helper function next to rdma_rw_mr_factor()
+	 *
+	 * We can't check unlikely(rdma_rw_force_mr) here,
+	 * but that is most likely 0 anyway.
+	 */
+	u32 factor;
+
+	WARN_ON_ONCE(attr->port_num == 0);
+
+	/*
+	 * Each context needs at least one RDMA READ or WRITE WR.
+	 *
+	 * For some hardware we might need more, eventually we should ask the
+	 * HCA driver for a multiplier here.
+	 */
+	factor = 1;
+
+	/*
+	 * If the device needs MRs to perform RDMA READ or WRITE operations,
+	 * we'll need two additional MRs for the registrations and the
+	 * invalidation.
+	 */
+	if (rdma_protocol_iwarp(dev, attr->port_num) || dev->attrs.max_sgl_rd)
+		factor += 2;	/* inv + reg */
+
+	return factor * attr->cap.max_rdma_ctxs;
+}
+
+static int smb_direct_create_qpair(struct smbdirect_socket *sc)
 {
 	struct smbdirect_socket_parameters *sp = &sc->parameters;
 	int ret;
+	struct ib_qp_cap qp_cap;
 	struct ib_qp_init_attr qp_attr;
-	int pages_per_rw;
+	u32 max_send_wr;
+	u32 rdma_send_wr;
+
+	/*
+	 * Note that {rdma,ib}_create_qp() will call
+	 * rdma_rw_init_qp() if cap->max_rdma_ctxs is not 0.
+	 * It will adjust cap->max_send_wr to the required
+	 * number of additional WRs for the RDMA RW operations.
+	 * It will cap cap->max_send_wr to the device limit.
+	 *
+	 * +1 for ib_drain_qp
+	 */
+	qp_cap.max_send_wr = sp->send_credit_target + 1;
+	qp_cap.max_recv_wr = sp->recv_credit_max + 1;
+	qp_cap.max_send_sge = SMBDIRECT_SEND_IO_MAX_SGE;
+	qp_cap.max_recv_sge = SMBDIRECT_RECV_IO_MAX_SGE;
+	qp_cap.max_inline_data = 0;
+	qp_cap.max_rdma_ctxs = sc->rw_io.credits.max;
+
+	/*
+	 * Find out the number of max_send_wr
+	 * after rdma_rw_init_qp() adjusted it.
+	 *
+	 * We only do it on a temporary variable,
+	 * as rdma_create_qp() will trigger
+	 * rdma_rw_init_qp() again.
+	 */
+	memset(&qp_attr, 0, sizeof(qp_attr));
+	qp_attr.cap = qp_cap;
+	qp_attr.port_num = sc->rdma.cm_id->port_num;
+	rdma_send_wr = smb_direct_rdma_rw_send_wrs(sc->ib.dev, &qp_attr);
+	max_send_wr = qp_cap.max_send_wr + rdma_send_wr;
+
+	if (qp_cap.max_send_wr > sc->ib.dev->attrs.max_cqe ||
+	    qp_cap.max_send_wr > sc->ib.dev->attrs.max_qp_wr) {
+		pr_err("Possible CQE overrun: max_send_wr %d\n",
+		       qp_cap.max_send_wr);
+		pr_err("device %.*s reporting max_cqe %d max_qp_wr %d\n",
+		       IB_DEVICE_NAME_MAX,
+		       sc->ib.dev->name,
+		       sc->ib.dev->attrs.max_cqe,
+		       sc->ib.dev->attrs.max_qp_wr);
+		pr_err("consider lowering send_credit_target = %d\n",
+		       sp->send_credit_target);
+		return -EINVAL;
+	}
+
+	if (qp_cap.max_rdma_ctxs &&
+	    (max_send_wr >= sc->ib.dev->attrs.max_cqe ||
+	     max_send_wr >= sc->ib.dev->attrs.max_qp_wr)) {
+		pr_err("Possible CQE overrun: rdma_send_wr %d + max_send_wr %d = %d\n",
+		       rdma_send_wr, qp_cap.max_send_wr, max_send_wr);
+		pr_err("device %.*s reporting max_cqe %d max_qp_wr %d\n",
+		       IB_DEVICE_NAME_MAX,
+		       sc->ib.dev->name,
+		       sc->ib.dev->attrs.max_cqe,
+		       sc->ib.dev->attrs.max_qp_wr);
+		pr_err("consider lowering send_credit_target = %d, max_rdma_ctxs = %d\n",
+		       sp->send_credit_target, qp_cap.max_rdma_ctxs);
+		return -EINVAL;
+	}
+
+	if (qp_cap.max_recv_wr > sc->ib.dev->attrs.max_cqe ||
+	    qp_cap.max_recv_wr > sc->ib.dev->attrs.max_qp_wr) {
+		pr_err("Possible CQE overrun: max_recv_wr %d\n",
+		       qp_cap.max_recv_wr);
+		pr_err("device %.*s reporting max_cqe %d max_qp_wr %d\n",
+		       IB_DEVICE_NAME_MAX,
+		       sc->ib.dev->name,
+		       sc->ib.dev->attrs.max_cqe,
+		       sc->ib.dev->attrs.max_qp_wr);
+		pr_err("consider lowering receive_credit_max = %d\n",
+		       sp->recv_credit_max);
+		return -EINVAL;
+	}
+
+	if (qp_cap.max_send_sge > sc->ib.dev->attrs.max_send_sge ||
+	    qp_cap.max_recv_sge > sc->ib.dev->attrs.max_recv_sge) {
+		pr_err("device %.*s max_send_sge/max_recv_sge = %d/%d too small\n",
+		       IB_DEVICE_NAME_MAX,
+		       sc->ib.dev->name,
+		       sc->ib.dev->attrs.max_send_sge,
+		       sc->ib.dev->attrs.max_recv_sge);
+		return -EINVAL;
+	}
 
 	sc->ib.pd = ib_alloc_pd(sc->ib.dev, 0);
 	if (IS_ERR(sc->ib.pd)) {
···
 	}
 
 	sc->ib.send_cq = ib_alloc_cq_any(sc->ib.dev, sc,
-					 sp->send_credit_target +
-					 cap->max_rdma_ctxs,
+					 max_send_wr,
 					 IB_POLL_WORKQUEUE);
 	if (IS_ERR(sc->ib.send_cq)) {
 		pr_err("Can't create RDMA send CQ\n");
···
 	}
 
 	sc->ib.recv_cq = ib_alloc_cq_any(sc->ib.dev, sc,
-					 sp->recv_credit_max,
+					 qp_cap.max_recv_wr,
					 IB_POLL_WORKQUEUE);
 	if (IS_ERR(sc->ib.recv_cq)) {
 		pr_err("Can't create RDMA recv CQ\n");
···
 		goto err;
 	}
 
+	/*
+	 * We reset completely here!
+	 * As the above use was just temporary
+	 * to calc max_send_wr and rdma_send_wr.
+	 *
+	 * rdma_create_qp() will trigger rdma_rw_init_qp()
+	 * again if max_rdma_ctxs is not 0.
+	 */
 	memset(&qp_attr, 0, sizeof(qp_attr));
 	qp_attr.event_handler = smb_direct_qpair_handler;
 	qp_attr.qp_context = sc;
-	qp_attr.cap = *cap;
+	qp_attr.cap = qp_cap;
 	qp_attr.sq_sig_type = IB_SIGNAL_REQ_WR;
 	qp_attr.qp_type = IB_QPT_RC;
 	qp_attr.send_cq = sc->ib.send_cq;
···
 
 	sc->ib.qp = sc->rdma.cm_id->qp;
 	sc->rdma.cm_id->event_handler = smb_direct_cm_handler;
-
-	pages_per_rw = DIV_ROUND_UP(sp->max_read_write_size, PAGE_SIZE) + 1;
-	if (pages_per_rw > sc->ib.dev->attrs.max_sgl_rd) {
-		ret = ib_mr_pool_init(sc->ib.qp, &sc->ib.qp->rdma_mrs,
-				      sc->rw_io.credits.max, IB_MR_TYPE_MEM_REG,
-				      sc->rw_io.credits.num_pages, 0);
-		if (ret) {
-			pr_err("failed to init mr pool count %zu pages %zu\n",
-			       sc->rw_io.credits.max, sc->rw_io.credits.num_pages);
-			goto err;
-		}
-	}
 
 	return 0;
 err:
···
 
 static int smb_direct_connect(struct smbdirect_socket *sc)
 {
-	struct ib_qp_cap qp_cap;
 	int ret;
 
-	ret = smb_direct_init_params(sc, &qp_cap);
+	ret = smb_direct_init_params(sc);
 	if (ret) {
 		pr_err("Can't configure RDMA parameters\n");
 		return ret;
···
 		return ret;
 	}
 
-	ret = smb_direct_create_qpair(sc, &qp_cap);
+	ret = smb_direct_create_qpair(sc);
 	if (ret) {
 		pr_err("Can't accept RDMA client: %d\n", ret);
 		return ret;
+21-5
fs/sysfs/group.c
···
 }
 EXPORT_SYMBOL_GPL(compat_only_sysfs_link_entry_to_kobj);
 
-static int sysfs_group_attrs_change_owner(struct kernfs_node *grp_kn,
+static int sysfs_group_attrs_change_owner(struct kobject *kobj,
+					  struct kernfs_node *grp_kn,
 					  const struct attribute_group *grp,
 					  struct iattr *newattrs)
 {
 	struct kernfs_node *kn;
-	int error;
+	int error, i;
+	umode_t mode;
 
 	if (grp->attrs) {
 		struct attribute *const *attr;
 
-		for (attr = grp->attrs; *attr; attr++) {
+		for (i = 0, attr = grp->attrs; *attr; i++, attr++) {
+			if (grp->is_visible) {
+				mode = grp->is_visible(kobj, *attr, i);
+				if (mode & SYSFS_GROUP_INVISIBLE)
+					break;
+				if (!mode)
+					continue;
+			}
 			kn = kernfs_find_and_get(grp_kn, (*attr)->name);
 			if (!kn)
 				return -ENOENT;
···
 	if (grp->bin_attrs) {
 		const struct bin_attribute *const *bin_attr;
 
-		for (bin_attr = grp->bin_attrs; *bin_attr; bin_attr++) {
+		for (i = 0, bin_attr = grp->bin_attrs; *bin_attr; i++, bin_attr++) {
+			if (grp->is_bin_visible) {
+				mode = grp->is_bin_visible(kobj, *bin_attr, i);
+				if (mode & SYSFS_GROUP_INVISIBLE)
+					break;
+				if (!mode)
+					continue;
+			}
 			kn = kernfs_find_and_get(grp_kn, (*bin_attr)->attr.name);
 			if (!kn)
 				return -ENOENT;
···
 
 	error = kernfs_setattr(grp_kn, &newattrs);
 	if (!error)
-		error = sysfs_group_attrs_change_owner(grp_kn, grp, &newattrs);
+		error = sysfs_group_attrs_change_owner(kobj, grp_kn, grp, &newattrs);
 
 	kernfs_put(grp_kn);
 
+10-1
fs/xfs/Kconfig
···
 
 	  See the xfs man page in section 5 for additional information.
 
+	  This option is mandatory to support zoned block devices. For these
+	  devices, the realtime subvolume must be backed by a zoned block
+	  device and a regular block device used as the main device (for
+	  metadata). If the zoned block device is a host-managed SMR hard-disk
+	  containing conventional zones at the beginning of its address space,
+	  XFS will use the disk conventional zones as the main device and the
+	  remaining sequential write required zones as the backing storage for
+	  the realtime subvolume.
+
 	  If unsure, say N.
 
 config XFS_DRAIN_INTENTS
···
 	bool "XFS online metadata check usage data collection"
 	default y
 	depends on XFS_ONLINE_SCRUB
-	select DEBUG_FS
+	depends on DEBUG_FS
 	help
 	  If you say Y here, the kernel will gather usage data about
 	  the online metadata check subsystem. This includes the number
+31-3
fs/xfs/scrub/nlinks.c
···
 	return error;
 }
 
+static uint
+xchk_nlinks_ilock_dir(
+	struct xfs_inode	*ip)
+{
+	uint			lock_mode = XFS_ILOCK_SHARED;
+
+	/*
+	 * We're going to scan the directory entries, so we must be ready to
+	 * pull the data fork mappings into memory if they aren't already.
+	 */
+	if (xfs_need_iread_extents(&ip->i_df))
+		lock_mode = XFS_ILOCK_EXCL;
+
+	/*
+	 * We're going to scan the parent pointers, so we must be ready to
+	 * pull the attr fork mappings into memory if they aren't already.
+	 */
+	if (xfs_has_parent(ip->i_mount) && xfs_inode_has_attr_fork(ip) &&
+	    xfs_need_iread_extents(&ip->i_af))
+		lock_mode = XFS_ILOCK_EXCL;
+
+	/*
+	 * Take the IOLOCK so that other threads cannot start a directory
+	 * update while we're scanning.
+	 */
+	lock_mode |= XFS_IOLOCK_SHARED;
+	xfs_ilock(ip, lock_mode);
+	return lock_mode;
+}
+
 /* Walk a directory to bump the observed link counts of the children. */
 STATIC int
 xchk_nlinks_collect_dir(
···
 		return 0;
 
 	/* Prevent anyone from changing this directory while we walk it. */
-	xfs_ilock(dp, XFS_IOLOCK_SHARED);
-	lock_mode = xfs_ilock_data_map_shared(dp);
+	lock_mode = xchk_nlinks_ilock_dir(dp);
 
 	/*
 	 * The dotdot entry of an unlinked directory still points to the last
···
 		xchk_iscan_abort(&xnc->collect_iscan);
 out_unlock:
 	xfs_iunlock(dp, lock_mode);
-	xfs_iunlock(dp, XFS_IOLOCK_SHARED);
 	return error;
 }
+1-1
fs/xfs/xfs_buf.c
···
 	const char		*descr)
 {
 	/* The maximum size of the buftarg is only known once the sb is read. */
-	btp->bt_nr_sectors = (xfs_daddr_t)-1;
+	btp->bt_nr_sectors = XFS_BUF_DADDR_MAX;
 
 	/* Set up device logical sector size mask */
 	btp->bt_logical_sectorsize = logical_sectorsize;
···
 	bool			m_update_sb;	/* sb needs update in mount */
 	unsigned int		m_max_open_zones;
 	unsigned int		m_zonegc_low_space;
-	struct xfs_mru_cache	*m_zone_cache;	/* Inode to open zone cache */
 
 	/* max_atomic_write mount option value */
 	unsigned long long	m_awu_max_bytes;
+42-11
fs/xfs/xfs_super.c
···
  * Table driven mount option parser.
  */
 enum {
-	Opt_logbufs, Opt_logbsize, Opt_logdev, Opt_rtdev,
+	Op_deprecated, Opt_logbufs, Opt_logbsize, Opt_logdev, Opt_rtdev,
 	Opt_wsync, Opt_noalign, Opt_swalloc, Opt_sunit, Opt_swidth, Opt_nouuid,
 	Opt_grpid, Opt_nogrpid, Opt_bsdgroups, Opt_sysvgroups,
 	Opt_allocsize, Opt_norecovery, Opt_inode64, Opt_inode32,
···
 	Opt_lifetime, Opt_nolifetime, Opt_max_atomic_write,
 };
 
+#define fsparam_dead(NAME) \
+	__fsparam(NULL, (NAME), Op_deprecated, fs_param_deprecated, NULL)
+
 static const struct fs_parameter_spec xfs_fs_parameters[] = {
+	/*
+	 * These mount options were supposed to be deprecated in September 2025
+	 * but the deprecation warning was buggy, so not all users were
+	 * notified.  The deprecation is now obnoxiously loud and postponed to
+	 * September 2030.
+	 */
+	fsparam_dead("attr2"),
+	fsparam_dead("noattr2"),
+	fsparam_dead("ikeep"),
+	fsparam_dead("noikeep"),
+
 	fsparam_u32("logbufs",		Opt_logbufs),
 	fsparam_string("logbsize",	Opt_logbsize),
 	fsparam_string("logdev",	Opt_logdev),
···
 
 	truncate_inode_pages_final(&inode->i_data);
 	clear_inode(inode);
+
+	if (IS_ENABLED(CONFIG_XFS_RT) &&
+	    S_ISREG(inode->i_mode) && inode->i_private) {
+		xfs_open_zone_put(inode->i_private);
+		inode->i_private = NULL;
+	}
 }
 
 static void
···
 static inline void
 xfs_fs_warn_deprecated(
 	struct fs_context	*fc,
-	struct fs_parameter	*param,
-	uint64_t		flag,
-	bool			value)
+	struct fs_parameter	*param)
 {
-	/* Don't print the warning if reconfiguring and current mount point
-	 * already had the flag set
+	/*
+	 * Always warn about someone passing in a deprecated mount option.
+	 * Previously we wouldn't print the warning if we were reconfiguring
+	 * and current mount point already had the flag set, but that was not
+	 * the right thing to do.
+	 *
+	 * Many distributions mount the root filesystem with no options in the
+	 * initramfs and rely on mount -a to remount the root fs with the
+	 * options in fstab.  However, the old behavior meant that there would
+	 * never be a warning about deprecated mount options for the root fs in
+	 * /etc/fstab.  On a single-fs system, that means no warning at all.
+	 *
+	 * Compounding this problem are distribution scripts that copy
+	 * /proc/mounts to fstab, which means that we can't remove mount
+	 * options unless we're 100% sure they have only ever been advertised
+	 * in /proc/mounts in response to explicitly provided mount options.
 	 */
-	if ((fc->purpose & FS_CONTEXT_FOR_RECONFIGURE) &&
-	    !!(XFS_M(fc->root->d_sb)->m_features & flag) == value)
-		return;
 	xfs_warn(fc->s_fs_info, "%s mount option is deprecated.", param->key);
 }
 
···
 		return opt;
 
 	switch (opt) {
+	case Op_deprecated:
+		xfs_fs_warn_deprecated(fc, param);
+		return 0;
 	case Opt_logbufs:
 		parsing_mp->m_logbufs = result.uint_32;
 		return 0;
···
 		xfs_mount_set_dax_mode(parsing_mp, result.uint_32);
 		return 0;
 #endif
-	/* Following mount options will be removed in September 2025 */
 	case Opt_max_open_zones:
 		parsing_mp->m_max_open_zones = result.uint_32;
 		return 0;
···
 	struct xfs_mount	*mp;
 	int			i;
 
-	mp = kzalloc(sizeof(struct xfs_mount), GFP_KERNEL | __GFP_NOFAIL);
+	mp = kzalloc(sizeof(struct xfs_mount), GFP_KERNEL);
 	if (!mp)
 		return -ENOMEM;
 
+60-88
fs/xfs/xfs_zone_alloc.c
···
 #include "xfs_trace.h"
 #include "xfs_mru_cache.h"
 
+static void
+xfs_open_zone_free_rcu(
+	struct callback_head	*cb)
+{
+	struct xfs_open_zone	*oz = container_of(cb, typeof(*oz), oz_rcu);
+
+	xfs_rtgroup_rele(oz->oz_rtg);
+	kfree(oz);
+}
+
 void
 xfs_open_zone_put(
 	struct xfs_open_zone	*oz)
 {
-	if (atomic_dec_and_test(&oz->oz_ref)) {
-		xfs_rtgroup_rele(oz->oz_rtg);
-		kfree(oz);
-	}
+	if (atomic_dec_and_test(&oz->oz_ref))
+		call_rcu(&oz->oz_rcu, xfs_open_zone_free_rcu);
 }
 
 static inline uint32_t
···
 }
 
 /*
- * Try to pack inodes that are written back after they were closed tight instead
- * of trying to open new zones for them or spread them to the least recently
- * used zone.  This optimizes the data layout for workloads that untar or copy
- * a lot of small files.  Right now this does not separate multiple such
+ * Try to tightly pack small files that are written back after they were closed
+ * instead of trying to open new zones for them or spread them to the least
+ * recently used zone.  This optimizes the data layout for workloads that untar
+ * or copy a lot of small files.  Right now this does not separate multiple such
  * streams.
  */
 static inline bool xfs_zoned_pack_tight(struct xfs_inode *ip)
 {
+	struct xfs_mount	*mp = ip->i_mount;
+	size_t			zone_capacity =
+		XFS_FSB_TO_B(mp, mp->m_groups[XG_TYPE_RTG].blocks);
+
+	/*
+	 * Do not pack writes to files that are already using a full zone to
+	 * avoid fragmentation.
+	 */
+	if (i_size_read(VFS_I(ip)) >= zone_capacity)
+		return false;
+
 	return !inode_is_open_for_write(VFS_I(ip)) &&
 		!(ip->i_diflags & XFS_DIFLAG_APPEND);
 }
···
 }
 
 /*
- * Cache the last zone written to for an inode so that it is considered first
- * for subsequent writes.
- */
-struct xfs_zone_cache_item {
-	struct xfs_mru_cache_elem	mru;
-	struct xfs_open_zone		*oz;
-};
-
-static inline struct xfs_zone_cache_item *
-xfs_zone_cache_item(struct xfs_mru_cache_elem *mru)
-{
-	return container_of(mru, struct xfs_zone_cache_item, mru);
-}
-
-static void
-xfs_zone_cache_free_func(
-	void			*data,
-	struct xfs_mru_cache_elem *mru)
-{
-	struct xfs_zone_cache_item *item = xfs_zone_cache_item(mru);
-
-	xfs_open_zone_put(item->oz);
-	kfree(item);
-}
-
-/*
  * Check if we have a cached last open zone available for the inode and
  * if yes return a reference to it.
  */
 static struct xfs_open_zone *
-xfs_cached_zone(
-	struct xfs_mount	*mp,
-	struct xfs_inode	*ip)
+xfs_get_cached_zone(
+	struct xfs_inode	*ip)
 {
-	struct xfs_mru_cache_elem *mru;
-	struct xfs_open_zone	*oz;
+	struct xfs_open_zone	*oz;
 
-	mru = xfs_mru_cache_lookup(mp->m_zone_cache, ip->i_ino);
-	if (!mru)
-		return NULL;
-	oz = xfs_zone_cache_item(mru)->oz;
+	rcu_read_lock();
+	oz = VFS_I(ip)->i_private;
 	if (oz) {
 		/*
 		 * GC only steals open zones at mount time, so no GC zones
 		 * should end up in the cache.
 		 */
 		ASSERT(!oz->oz_is_gc);
-		ASSERT(atomic_read(&oz->oz_ref) > 0);
-		atomic_inc(&oz->oz_ref);
+		if (!atomic_inc_not_zero(&oz->oz_ref))
+			oz = NULL;
 	}
-	xfs_mru_cache_done(mp->m_zone_cache);
+	rcu_read_unlock();
+
 	return oz;
 }
 
 /*
- * Update the last used zone cache for a given inode.
+ * Stash our zone in the inode so that it is reused for future allocations.
  *
- * The caller must have a reference on the open zone.
+ * The open_zone structure will be pinned until either the inode is freed or
+ * until the cached open zone is replaced with a different one because the
+ * current one was full when we tried to use it.  This means we keep any
+ * open zone around forever as long as any inode that used it for the last
+ * write is cached, which slightly increases the memory use of cached inodes
+ * that were ever written to, but significantly simplifies the cached zone
+ * lookup.  Because the open_zone is clearly marked as full when all data
+ * in the underlying RTG was written, the caching is always safe.
  */
 static void
-xfs_zone_cache_create_association(
-	struct xfs_inode	*ip,
-	struct xfs_open_zone	*oz)
+xfs_set_cached_zone(
+	struct xfs_inode	*ip,
+	struct xfs_open_zone	*oz)
 {
-	struct xfs_mount	*mp = ip->i_mount;
-	struct xfs_zone_cache_item *item = NULL;
-	struct xfs_mru_cache_elem *mru;
+	struct xfs_open_zone	*old_oz;
 
-	ASSERT(atomic_read(&oz->oz_ref) > 0);
 	atomic_inc(&oz->oz_ref);
-
-	mru = xfs_mru_cache_lookup(mp->m_zone_cache, ip->i_ino);
-	if (mru) {
-		/*
-		 * If we have an association already, update it to point to the
-		 * new zone.
-		 */
-		item = xfs_zone_cache_item(mru);
-		xfs_open_zone_put(item->oz);
-		item->oz = oz;
-		xfs_mru_cache_done(mp->m_zone_cache);
-		return;
-	}
-
-	item = kmalloc(sizeof(*item), GFP_KERNEL);
-	if (!item) {
-		xfs_open_zone_put(oz);
-		return;
-	}
-	item->oz = oz;
-	xfs_mru_cache_insert(mp->m_zone_cache, ip->i_ino, &item->mru);
+	old_oz = xchg(&VFS_I(ip)->i_private, oz);
+	if (old_oz)
+		xfs_open_zone_put(old_oz);
 }
 
 static void
···
 	 * the inode is still associated with a zone and use that if so.
 	 */
 	if (!*oz)
-		*oz = xfs_cached_zone(mp, ip);
+		*oz = xfs_get_cached_zone(ip);
 
 	if (!*oz) {
 select_zone:
 		*oz = xfs_select_zone(mp, write_hint, pack_tight);
 		if (!*oz)
 			goto out_error;
-
-		xfs_zone_cache_create_association(ip, *oz);
+		xfs_set_cached_zone(ip, *oz);
 	}
 
 	alloc_len = xfs_zone_alloc_blocks(*oz, XFS_B_TO_FSB(mp, ioend->io_size),
···
 		xfs_open_zone_put(oz);
 	}
 	spin_unlock(&zi->zi_open_zones_lock);
+
+	/*
+	 * Wait for all open zones to be freed so that they drop the group
+	 * references:
+	 */
+	rcu_barrier();
 }
 
 struct xfs_init_zones {
···
 	error = xfs_zone_gc_mount(mp);
 	if (error)
 		goto out_free_zone_info;
-
-	/*
-	 * Set up a mru cache to track inode to open zone for data placement
-	 * purposes. The magic values for group count and life time is the
-	 * same as the defaults for file streams, which seems sane enough.
-	 */
-	xfs_mru_cache_create(&mp->m_zone_cache, mp,
-			5000, 10, xfs_zone_cache_free_func);
 	return 0;
 
 out_free_zone_info:
···
 {
 	xfs_zone_gc_unmount(mp);
 	xfs_free_zone_info(mp->m_zone_info);
-	xfs_mru_cache_destroy(mp->m_zone_cache);
 }
+46-35
fs/xfs/xfs_zone_gc.c
···
 	struct xfs_rtgroup	*victim_rtg = NULL;
 	unsigned int		bucket;
 
-	if (xfs_is_shutdown(mp))
-		return false;
-
-	if (iter->victim_rtg)
-		return true;
-
-	/*
-	 * Don't start new work if we are asked to stop or park.
-	 */
-	if (kthread_should_stop() || kthread_should_park())
-		return false;
-
-	if (!xfs_zoned_need_gc(mp))
-		return false;
-
 	spin_lock(&zi->zi_used_buckets_lock);
 	for (bucket = 0; bucket < XFS_ZONE_USED_BUCKETS; bucket++) {
 		victim_rtg = xfs_zone_gc_pick_victim_from(mp, bucket);
···
 	} while (next);
 }
 
+static bool
+xfs_zone_gc_should_start_new_work(
+	struct xfs_zone_gc_data	*data)
+{
+	if (xfs_is_shutdown(data->mp))
+		return false;
+	if (!xfs_zone_gc_space_available(data))
+		return false;
+
+	if (!data->iter.victim_rtg) {
+		if (kthread_should_stop() || kthread_should_park())
+			return false;
+		if (!xfs_zoned_need_gc(data->mp))
+			return false;
+		if (!xfs_zone_gc_select_victim(data))
+			return false;
+	}
+
+	return true;
+}
+
 /*
  * Handle the work to read and write data for GC and to reset the zones,
  * including handling all completions.
···
  * Note that the order of the chunks is preserved so that we don't undo the
  * optimal order established by xfs_zone_gc_query().
  */
-static bool
+static void
 xfs_zone_gc_handle_work(
 	struct xfs_zone_gc_data	*data)
 {
···
 	zi->zi_reset_list = NULL;
 	spin_unlock(&zi->zi_reset_list_lock);
 
-	if (!xfs_zone_gc_select_victim(data) ||
-	    !xfs_zone_gc_space_available(data)) {
-		if (list_empty(&data->reading) &&
-		    list_empty(&data->writing) &&
-		    list_empty(&data->resetting) &&
-		    !reset_list)
-			return false;
-	}
-
-	__set_current_state(TASK_RUNNING);
-	try_to_freeze();
-
-	if (reset_list)
+	if (reset_list) {
+		set_current_state(TASK_RUNNING);
 		xfs_zone_gc_reset_zones(data, reset_list);
+	}
 
 	list_for_each_entry_safe(chunk, next, &data->resetting, entry) {
 		if (READ_ONCE(chunk->state) != XFS_GC_BIO_DONE)
 			break;
+		set_current_state(TASK_RUNNING);
 		xfs_zone_gc_finish_reset(chunk);
 	}
 
 	list_for_each_entry_safe(chunk, next, &data->writing, entry) {
 		if (READ_ONCE(chunk->state) != XFS_GC_BIO_DONE)
 			break;
+		set_current_state(TASK_RUNNING);
 		xfs_zone_gc_finish_chunk(chunk);
 	}
 
···
 	list_for_each_entry_safe(chunk, next, &data->reading, entry) {
 		if (READ_ONCE(chunk->state) != XFS_GC_BIO_DONE)
 			break;
+		set_current_state(TASK_RUNNING);
 		xfs_zone_gc_write_chunk(chunk);
 	}
 	blk_finish_plug(&plug);
 
-	blk_start_plug(&plug);
-	while (xfs_zone_gc_start_chunk(data))
-		;
-	blk_finish_plug(&plug);
-	return true;
+	if (xfs_zone_gc_should_start_new_work(data)) {
+		set_current_state(TASK_RUNNING);
+		blk_start_plug(&plug);
+		while (xfs_zone_gc_start_chunk(data))
+			;
+		blk_finish_plug(&plug);
+	}
 }
 
 /*
···
 	for (;;) {
 		set_current_state(TASK_INTERRUPTIBLE | TASK_FREEZABLE);
 		xfs_set_zonegc_running(mp);
-		if (xfs_zone_gc_handle_work(data))
+
+		xfs_zone_gc_handle_work(data);
+
+		/*
+		 * Only sleep if nothing set the state to running.  Else check
+		 * for work again as someone might have queued up more work and
+		 * woken us in the meantime.
+		 */
+		if (get_current_state() == TASK_RUNNING) {
+			try_to_freeze();
 			continue;
+		}
 
 		if (list_empty(&data->reading) &&
 		    list_empty(&data->writing) &&
+2
fs/xfs/xfs_zone_priv.h
···
 	 * the life time of an open zone.
 	 */
 	struct xfs_rtgroup	*oz_rtg;
+
+	struct rcu_head		oz_rcu;
 };
 
 /*
+19-2
include/linux/arm_ffa.h
···
 	 * an `struct ffa_mem_region_addr_range`.
 	 */
 	u32 composite_off;
+	u8 impdef_val[16];
 	u64 reserved;
 };
 
···
 #define CONSTITUENTS_OFFSET(x)	\
 	(offsetof(struct ffa_composite_mem_region, constituents[x]))
 
+#define FFA_EMAD_HAS_IMPDEF_FIELD(version)	((version) >= FFA_VERSION_1_2)
+#define FFA_MEM_REGION_HAS_EP_MEM_OFFSET(version)	((version) > FFA_VERSION_1_0)
+
+static inline u32 ffa_emad_size_get(u32 ffa_version)
+{
+	u32 sz;
+	struct ffa_mem_region_attributes *ep_mem_access;
+
+	if (FFA_EMAD_HAS_IMPDEF_FIELD(ffa_version))
+		sz = sizeof(*ep_mem_access);
+	else
+		sz = sizeof(*ep_mem_access) - sizeof(ep_mem_access->impdef_val);
+
+	return sz;
+}
+
 static inline u32
 ffa_mem_desc_offset(struct ffa_mem_region *buf, int count, u32 ffa_version)
 {
-	u32 offset = count * sizeof(struct ffa_mem_region_attributes);
+	u32 offset = count * ffa_emad_size_get(ffa_version);
 	/*
 	 * Earlier to v1.1, the endpoint memory descriptor array started at
 	 * offset 32(i.e. offset of ep_mem_offset in the current structure)
 	 */
-	if (ffa_version <= FFA_VERSION_1_0)
+	if (!FFA_MEM_REGION_HAS_EP_MEM_OFFSET(ffa_version))
 		offset += offsetof(struct ffa_mem_region, ep_mem_offset);
 	else
 		offset += sizeof(struct ffa_mem_region);
+1-1
include/linux/cgroup-defs.h
···
 	int nr_frozen_tasks;
 
 	/* Freeze time data consistency protection */
-	seqcount_t freeze_seq;
+	seqcount_spinlock_t freeze_seq;
 
 	/*
 	 * Most recent time the cgroup was requested to freeze.
+4-3
include/linux/exportfs.h
···
 static inline bool exportfs_can_encode_fh(const struct export_operations *nop,
 					  int fh_flags)
 {
-	if (!nop)
-		return false;
-
 	/*
 	 * If a non-decodeable file handle was requested, we only need to make
 	 * sure that filesystem did not opt-out of encoding fid.
 	 */
 	if (fh_flags & EXPORT_FH_FID)
 		return exportfs_can_encode_fid(nop);
+
+	/* Normal file handles cannot be created without export ops */
+	if (!nop)
+		return false;

 	/*
 	 * If a connectable file handle was requested, we need to make sure that
+5
include/linux/gpio/regmap.h
···
 *	offset to a register/bitmask pair. If not
 *	given the default gpio_regmap_simple_xlate()
 *	is used.
+ * @fixed_direction_output:
+ *	(Optional) Bitmap representing the fixed direction of
+ *	the GPIO lines. Useful when there are GPIO lines with a
+ *	fixed direction mixed together in the same register.
 * @drvdata: (Optional) Pointer to driver specific data which is
 *	not used by gpio-remap but is provided "as is" to the
 *	driver callback(s).
···
 	int reg_stride;
 	int ngpio_per_reg;
 	struct irq_domain *irq_domain;
+	unsigned long *fixed_direction_output;

 #ifdef CONFIG_REGMAP_IRQ
 	struct regmap_irq_chip *regmap_irq_chip;
+5-3
include/linux/hung_task.h
···
 * always zero. So we can use these bits to encode the specific blocking
 * type.
 *
+ * Note that on architectures where this is not guaranteed, or for any
+ * unaligned lock, this tracking mechanism is silently skipped for that
+ * lock.
+ *
 * Type encoding:
 * 00 - Blocked on mutex (BLOCKER_TYPE_MUTEX)
 * 01 - Blocked on semaphore (BLOCKER_TYPE_SEM)
···
 	 * If the lock pointer matches the BLOCKER_TYPE_MASK, return
 	 * without writing anything.
 	 */
-	if (WARN_ON_ONCE(lock_ptr & BLOCKER_TYPE_MASK))
+	if (lock_ptr & BLOCKER_TYPE_MASK)
 		return;

 	WRITE_ONCE(current->blocker, lock_ptr | type);
···

 static inline void hung_task_clear_blocker(void)
 {
-	WARN_ON_ONCE(!READ_ONCE(current->blocker));
-
 	WRITE_ONCE(current->blocker, 0UL);
 }
···
 				  struct sk_buff_head *sk_queue,
 				  unsigned int flags, int *off, int *err);
 struct sk_buff *skb_recv_datagram(struct sock *sk, unsigned int flags, int *err);
+__poll_t datagram_poll_queue(struct file *file, struct socket *sock,
+			     struct poll_table_struct *wait,
+			     struct sk_buff_head *rcv_queue);
 __poll_t datagram_poll(struct file *file, struct socket *sock,
 		       struct poll_table_struct *wait);
 int skb_copy_datagram_iter(const struct sk_buff *from, int offset,
+4
include/linux/virtio_net.h
···
 	if (!tnl_hdr_negotiated)
 		return -EINVAL;

+	vhdr->hash_hdr.hash_value = 0;
+	vhdr->hash_hdr.hash_report = 0;
+	vhdr->hash_hdr.padding = 0;
+
 	/* Let the basic parsing deal with plain GSO features. */
 	skb_shinfo(skb)->gso_type &= ~tnl_gso_type;
 	ret = virtio_net_hdr_from_skb(skb, hdr, true, false, vlan_hlen);
+15
include/uapi/drm/xe_drm.h
···
 * valid on VMs with DRM_XE_VM_CREATE_FLAG_FAULT_MODE set. The CPU address
 * mirror flag are only valid for DRM_XE_VM_BIND_OP_MAP operations, the BO
 * handle MBZ, and the BO offset MBZ.
+ * - %DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET - Can be used in combination with
+ *   %DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR to reset madvises when the underlying
+ *   CPU address space range is unmapped (typically with munmap(2) or brk(2)).
+ *   The madvise values set with &DRM_IOCTL_XE_MADVISE are reset to the values
+ *   that were present immediately after the &DRM_IOCTL_XE_VM_BIND.
+ *   The reset GPU virtual address range is the intersection of the range bound
+ *   using &DRM_IOCTL_XE_VM_BIND and the virtual CPU address space range
+ *   unmapped.
+ *   This functionality is present to mimic the behaviour of CPU address space
+ *   madvises set using madvise(2), which are typically reset on unmap.
+ *   Note: free(3) may or may not call munmap(2) and/or brk(2), and may thus
+ *   not invoke autoreset. Neither will stack variables going out of scope.
+ *   Therefore it's recommended to always explicitly reset the madvises when
+ *   freeing the memory backing a region used in a &DRM_IOCTL_XE_MADVISE call.
 *
 * The @prefetch_mem_region_instance for %DRM_XE_VM_BIND_OP_PREFETCH can also be:
 * - %DRM_XE_CONSULT_MEM_ADVISE_PREF_LOC, which ensures prefetching occurs in
···
 #define DRM_XE_VM_BIND_FLAG_DUMPABLE	(1 << 3)
 #define DRM_XE_VM_BIND_FLAG_CHECK_PXP	(1 << 4)
 #define DRM_XE_VM_BIND_FLAG_CPU_ADDR_MIRROR	(1 << 5)
+#define DRM_XE_VM_BIND_FLAG_MADVISE_AUTORESET	(1 << 6)
 	/** @flags: Bind flags */
 	__u32 flags;
···
 		return 1;
 }

+static bool io_should_commit(struct io_kiocb *req, unsigned int issue_flags)
+{
+	/*
+	 * If we came in unlocked, we have no choice but to consume the
+	 * buffer here, otherwise nothing ensures that the buffer won't
+	 * get used by others. This does mean it'll be pinned until the
+	 * IO completes, coming in unlocked means we're being called from
+	 * io-wq context and there may be further retries in async hybrid
+	 * mode. For the locked case, the caller must call commit when
+	 * the transfer completes (or if we get -EAGAIN and must poll of
+	 * retry).
+	 */
+	if (issue_flags & IO_URING_F_UNLOCKED)
+		return true;
+
+	/* uring_cmd commits kbuf upfront, no need to auto-commit */
+	if (!io_file_can_poll(req) && req->opcode != IORING_OP_URING_CMD)
+		return true;
+	return false;
+}
+
 static struct io_br_sel io_ring_buffer_select(struct io_kiocb *req, size_t *len,
 					      struct io_buffer_list *bl,
 					      unsigned int issue_flags)
···
 	sel.buf_list = bl;
 	sel.addr = u64_to_user_ptr(buf->addr);

-	if (issue_flags & IO_URING_F_UNLOCKED || !io_file_can_poll(req)) {
-		/*
-		 * If we came in unlocked, we have no choice but to consume the
-		 * buffer here, otherwise nothing ensures that the buffer won't
-		 * get used by others. This does mean it'll be pinned until the
-		 * IO completes, coming in unlocked means we're being called from
-		 * io-wq context and there may be further retries in async hybrid
-		 * mode. For the locked case, the caller must call commit when
-		 * the transfer completes (or if we get -EAGAIN and must poll of
-		 * retry).
-		 */
+	if (io_should_commit(req, issue_flags)) {
 		io_kbuf_commit(req, sel.buf_list, *len, 1);
 		sel.buf_list = NULL;
 	}
···
 		return -EINVAL;

 	iwa = io_uring_alloc_async_data(NULL, req);
-	if (!unlikely(iwa))
+	if (unlikely(!iwa))
 		return -ENOMEM;
 	iwa->req = req;
+1-1
kernel/cgroup/cgroup.c
···
 	 * if the parent has to be frozen, the child has too.
 	 */
 	cgrp->freezer.e_freeze = parent->freezer.e_freeze;
-	seqcount_init(&cgrp->freezer.freeze_seq);
+	seqcount_spinlock_init(&cgrp->freezer.freeze_seq, &css_set_lock);
 	if (cgrp->freezer.e_freeze) {
 		/*
 		 * Set the CGRP_FREEZE flag, so when a process will be
···
 void __irq_set_handler(unsigned int irq, irq_flow_handler_t handle, int is_chained,
 		       const char *name)
 {
-	scoped_irqdesc_get_and_lock(irq, 0)
+	scoped_irqdesc_get_and_buslock(irq, 0)
 		__irq_do_set_handler(scoped_irqdesc, handle, is_chained, name);
 }
 EXPORT_SYMBOL_GPL(__irq_set_handler);
+2-2
kernel/irq/manage.c
···

 static int __disable_irq_nosync(unsigned int irq)
 {
-	scoped_irqdesc_get_and_lock(irq, IRQ_GET_DESC_CHECK_GLOBAL) {
+	scoped_irqdesc_get_and_buslock(irq, IRQ_GET_DESC_CHECK_GLOBAL) {
 		__disable_irq(scoped_irqdesc);
 		return 0;
 	}
···
 */
 void enable_irq(unsigned int irq)
 {
-	scoped_irqdesc_get_and_lock(irq, IRQ_GET_DESC_CHECK_GLOBAL) {
+	scoped_irqdesc_get_and_buslock(irq, IRQ_GET_DESC_CHECK_GLOBAL) {
 		struct irq_desc *desc = scoped_irqdesc;

 		if (WARN(!desc->irq_data.chip, "enable_irq before setup/request_irq: irq %u\n", irq))
+12
kernel/sched/fair.c
···

 	cfs_rq->throttle_count = pcfs_rq->throttle_count;
 	cfs_rq->throttled_clock_pelt = rq_clock_pelt(cpu_rq(cpu));
+
+	/*
+	 * It is not enough to sync the "pelt_clock_throttled" indicator
+	 * with the parent cfs_rq when the hierarchy is not queued.
+	 * Always join a throttled hierarchy with PELT clock throttled
+	 * and leaf it to the first enqueue, or distribution to
+	 * unthrottle the PELT clock.
+	 */
+	if (cfs_rq->throttle_count)
+		cfs_rq->pelt_clock_throttled = 1;
 }

 /* conditionally throttle active cfs_rq's from put_prev_entity() */
···
 		if (!cfs_rq_pelt_clock_throttled(cfs_rq))
 			list_add_leaf_cfs_rq(cfs_rq);
 	}
+
+	assert_list_leaf_cfs_rq(rq_of(cfs_rq));
 }
 #else /* !CONFIG_FAIR_GROUP_SCHED: */
 static void propagate_entity_cfs_rq(struct sched_entity *se) { }
···
 		return -ENOMEM;
 	}

-	for (int i = 0; i <= MAX_AUX_CLOCKS; i++) {
+	for (int i = 0; i < MAX_AUX_CLOCKS; i++) {
 		char id[2] = { [0] = '0' + i, };
 		struct kobject *clk = kobject_create_and_add(id, auxo);

+1
kernel/trace/rv/monitors/pagefault/Kconfig
···
 	select RV_LTL_MONITOR
 	depends on RV_MON_RTAPP
 	depends on X86 || RISCV
+	depends on MMU
 	default y
 	select LTL_MON_EVENTS_ID
 	bool "pagefault monitor"
+6-6
kernel/trace/rv/rv.c
···

 	list_for_each_entry_continue(mon, &rv_monitors_list, list) {
 		if (mon->enabled)
-			return mon;
+			return &mon->list;
 	}

 	return NULL;
···

 static void *enabled_monitors_start(struct seq_file *m, loff_t *pos)
 {
-	struct rv_monitor *mon;
+	struct list_head *head;
 	loff_t l;

 	mutex_lock(&rv_interface_lock);
···
 	if (list_empty(&rv_monitors_list))
 		return NULL;

-	mon = list_entry(&rv_monitors_list, struct rv_monitor, list);
+	head = &rv_monitors_list;

 	for (l = 0; l <= *pos; ) {
-		mon = enabled_monitors_next(m, mon, &l);
-		if (!mon)
+		head = enabled_monitors_next(m, head, &l);
+		if (!head)
 			break;
 	}

-	return mon;
+	return head;
 }

 /*
+1-1
lib/crypto/Kconfig
···

 config CRYPTO_LIB_POLY1305_ARCH
 	bool
-	depends on CRYPTO_LIB_POLY1305 && !UML
+	depends on CRYPTO_LIB_POLY1305 && !UML && !KMSAN
 	default y if ARM
 	default y if ARM64 && KERNEL_MODE_NEON
 	default y if MIPS
···
 	if (IS_ERR(param_ctx))
 		return PTR_ERR(param_ctx);
 	test_ctx = damon_new_ctx();
+	if (!test_ctx)
+		return -ENOMEM;
 	err = damon_commit_ctx(test_ctx, param_ctx);
-	if (err) {
-		damon_destroy_ctx(test_ctx);
+	if (err)
 		goto out;
-	}
 	err = damon_commit_ctx(kdamond->damon_ctx, param_ctx);
 out:
+	damon_destroy_ctx(test_ctx);
 	damon_destroy_ctx(param_ctx);
 	return err;
 }
+3
mm/huge_memory.c
···
 	if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
 		return false;

+	if (folio_contain_hwpoisoned_page(folio))
+		return false;
+
 	for (i = 0; i < folio_nr_pages(folio); i++) {
 		if (pages_identical(folio_page(folio, i), ZERO_PAGE(0))) {
 			if (++num_zero_pages > khugepaged_max_ptes_none)
+2-3
mm/hugetlb.c
···
 	p4d_t *p4d = p4d_offset(pgd, addr);
 	pud_t *pud = pud_offset(p4d, addr);

-	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
-	hugetlb_vma_assert_locked(vma);
 	if (sz != PMD_SIZE)
 		return 0;
 	if (!ptdesc_pmd_is_shared(virt_to_ptdesc(ptep)))
 		return 0;
-
+	i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+	hugetlb_vma_assert_locked(vma);
 	pud_clear(pud);
 	/*
 	 * Once our caller drops the rmap lock, some other process might be
···
 }

 /*
- * Perform final tasks for MADV_DONTUNMAP operation, clearing mlock() and
- * account flags on remaining VMA by convention (it cannot be mlock()'d any
- * longer, as pages in range are no longer mapped), and removing anon_vma_chain
- * links from it (if the entire VMA was copied over).
+ * Perform final tasks for MADV_DONTUNMAP operation, clearing mlock() flag on
+ * remaining VMA by convention (it cannot be mlock()'d any longer, as pages in
+ * range are no longer mapped), and removing anon_vma_chain links from it if the
+ * entire VMA was copied over.
 */
 static void dontunmap_complete(struct vma_remap_struct *vrm,
 			       struct vm_area_struct *new_vma)
···
 	unsigned long old_start = vrm->vma->vm_start;
 	unsigned long old_end = vrm->vma->vm_end;

-	/*
-	 * We always clear VM_LOCKED[ONFAULT] | VM_ACCOUNT on the old
-	 * vma.
-	 */
-	vm_flags_clear(vrm->vma, VM_LOCKED_MASK | VM_ACCOUNT);
+	/* We always clear VM_LOCKED[ONFAULT] on the old VMA. */
+	vm_flags_clear(vrm->vma, VM_LOCKED_MASK);

 	/*
 	 * anon_vma links of the old vma is no longer needed after its page
+3
mm/page_owner.c
···
 	unsigned long flags;
 	struct stack *stack;

+	if (!gfpflags_allow_spinning(gfp_mask))
+		return;
+
 	set_current_in_page_owner();
 	stack = kmalloc(sizeof(*stack), gfp_nested_mask(gfp_mask));
 	if (!stack) {
+21-10
mm/slub.c
···
 	}
 }

-static inline void mark_failed_objexts_alloc(struct slab *slab)
+static inline bool mark_failed_objexts_alloc(struct slab *slab)
 {
-	slab->obj_exts = OBJEXTS_ALLOC_FAIL;
+	return cmpxchg(&slab->obj_exts, 0, OBJEXTS_ALLOC_FAIL) == 0;
 }

 static inline void handle_failed_objexts_alloc(unsigned long obj_exts,
···
 #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */

 static inline void mark_objexts_empty(struct slabobj_ext *obj_exts) {}
-static inline void mark_failed_objexts_alloc(struct slab *slab) {}
+static inline bool mark_failed_objexts_alloc(struct slab *slab) { return false; }
 static inline void handle_failed_objexts_alloc(unsigned long obj_exts,
 		struct slabobj_ext *vec, unsigned int objects) {}

···
 				   slab_nid(slab));
 	}
 	if (!vec) {
-		/* Mark vectors which failed to allocate */
-		mark_failed_objexts_alloc(slab);
+		/*
+		 * Try to mark vectors which failed to allocate.
+		 * If this operation fails, there may be a racing process
+		 * that has already completed the allocation.
+		 */
+		if (!mark_failed_objexts_alloc(slab) &&
+		    slab_obj_exts(slab))
+			return 0;

 		return -ENOMEM;
 	}
···
 #ifdef CONFIG_MEMCG
 	new_exts |= MEMCG_DATA_OBJEXTS;
 #endif
+retry:
 	old_exts = READ_ONCE(slab->obj_exts);
 	handle_failed_objexts_alloc(old_exts, vec, objects);
 	if (new_slab) {
···
 		 * be simply assigned.
 		 */
 		slab->obj_exts = new_exts;
-	} else if ((old_exts & ~OBJEXTS_FLAGS_MASK) ||
-		   cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) {
+	} else if (old_exts & ~OBJEXTS_FLAGS_MASK) {
 		/*
 		 * If the slab is already in use, somebody can allocate and
 		 * assign slabobj_exts in parallel. In this case the existing
···
 		else
 			kfree(vec);
 		return 0;
+	} else if (cmpxchg(&slab->obj_exts, old_exts, new_exts) != old_exts) {
+		/* Retry if a racing thread changed slab->obj_exts from under us. */
+		goto retry;
 	}

 	if (allow_spin)
···

 	if (!allow_spin && !spin_trylock_irqsave(&n->list_lock, flags)) {
 		/* Unlucky, discard newly allocated slab */
-		slab->frozen = 1;
 		defer_deactivate_slab(slab, NULL);
 		return NULL;
 	}
···
 		struct slab *slab = container_of(pos, struct slab, llnode);

 #ifdef CONFIG_SLUB_TINY
-		discard_slab(slab->slab_cache, slab);
+		free_slab(slab->slab_cache, slab);
 #else
-		deactivate_slab(slab->slab_cache, slab, slab->flush_freelist);
+		if (slab->frozen)
+			deactivate_slab(slab->slab_cache, slab, slab->flush_freelist);
+		else
+			free_slab(slab->slab_cache, slab);
 #endif
 	}
 }
+34-10
net/core/datagram.c
···
 EXPORT_SYMBOL(skb_copy_and_csum_datagram_msg);

 /**
- *	datagram_poll - generic datagram poll
+ *	datagram_poll_queue - same as datagram_poll, but on a specific receive
+ *			      queue
 *	@file: file struct
 *	@sock: socket
 *	@wait: poll table
+ *	@rcv_queue: receive queue to poll
 *
- *	Datagram poll: Again totally generic. This also handles
- *	sequenced packet sockets providing the socket receive queue
- *	is only ever holding data ready to receive.
+ *	Performs polling on the given receive queue, handling shutdown, error,
+ *	and connection state. This is useful for protocols that deliver
+ *	userspace-bound packets through a custom queue instead of
+ *	sk->sk_receive_queue.
 *
- *	Note: when you *don't* use this routine for this protocol,
- *	and you use a different write policy from sock_writeable()
- *	then please supply your own write_space callback.
+ *	Return: poll bitmask indicating the socket's current state
 */
-__poll_t datagram_poll(struct file *file, struct socket *sock,
-		       poll_table *wait)
+__poll_t datagram_poll_queue(struct file *file, struct socket *sock,
+			     poll_table *wait, struct sk_buff_head *rcv_queue)
 {
 	struct sock *sk = sock->sk;
 	__poll_t mask;
···
 		mask |= EPOLLHUP;

 	/* readable? */
-	if (!skb_queue_empty_lockless(&sk->sk_receive_queue))
+	if (!skb_queue_empty_lockless(rcv_queue))
 		mask |= EPOLLIN | EPOLLRDNORM;

 	/* Connection-based need to check for termination and startup */
···
 		sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);

 	return mask;
+}
+EXPORT_SYMBOL(datagram_poll_queue);
+
+/**
+ *	datagram_poll - generic datagram poll
+ *	@file: file struct
+ *	@sock: socket
+ *	@wait: poll table
+ *
+ *	Datagram poll: Again totally generic. This also handles
+ *	sequenced packet sockets providing the socket receive queue
+ *	is only ever holding data ready to receive.
+ *
+ *	Note: when you *don't* use this routine for this protocol,
+ *	and you use a different write policy from sock_writeable()
+ *	then please supply your own write_space callback.
+ *
+ *	Return: poll bitmask indicating the socket's current state
+ */
+__poll_t datagram_poll(struct file *file, struct socket *sock, poll_table *wait)
+{
+	return datagram_poll_queue(file, sock, wait,
+				   &sock->sk->sk_receive_queue);
 }
 EXPORT_SYMBOL(datagram_poll);
···
 	if (skb_queue_len(&cell->napi_skbs) == 1)
 		napi_schedule(&cell->napi);

-	if (have_bh_lock)
-		local_unlock_nested_bh(&gcells->cells->bh_lock);
-
 	res = NET_RX_SUCCESS;

 unlock:
+	if (have_bh_lock)
+		local_unlock_nested_bh(&gcells->cells->bh_lock);
 	rcu_read_unlock();
 	return res;
 }
-3
net/core/rtnetlink.c
···
 	int err;
 	u16 vid;

-	if (!netlink_capable(skb, CAP_NET_ADMIN))
-		return -EPERM;
-
 	if (!del_bulk) {
 		err = nlmsg_parse_deprecated(nlh, sizeof(*ndm), tb, NDA_MAX,
 					     NULL, extack);
+7-1
net/hsr/hsr_netlink.c
···
 		       struct netlink_ext_ack *extack)
 {
 	struct net *link_net = rtnl_newlink_link_net(params);
+	struct net_device *link[2], *interlink = NULL;
 	struct nlattr **data = params->data;
 	enum hsr_version proto_version;
 	unsigned char multicast_spec;
 	u8 proto = HSR_PROTOCOL_HSR;

-	struct net_device *link[2], *interlink = NULL;
+	if (!net_eq(link_net, dev_net(dev))) {
+		NL_SET_ERR_MSG_MOD(extack,
+				   "HSR slaves/interlink must be on the same net namespace than HSR link");
+		return -EINVAL;
+	}
+
 	if (!data) {
 		NL_SET_ERR_MSG_MOD(extack, "No slave devices specified");
 		return -EINVAL;
+6
net/mptcp/pm_kernel.c
···
 	}

 subflow:
+	/* No need to try establishing subflows to remote id0 if not allowed */
+	if (mptcp_pm_add_addr_c_flag_case(msk))
+		goto exit;
+
 	/* check if should create a new subflow */
 	while (msk->pm.local_addr_used < endp_subflow_max &&
 	       msk->pm.extra_subflows < limit_extra_subflows) {
···
 		__mptcp_subflow_connect(sk, &local, &addrs[i]);
 		spin_lock_bh(&msk->pm.lock);
 	}
+
+exit:
 	mptcp_pm_nl_check_work_pending(msk);
 }
+7-6
net/sctp/inqueue.c
···
 		chunk->head_skb = chunk->skb;

 		/* skbs with "cover letter" */
-		if (chunk->head_skb && chunk->skb->data_len == chunk->skb->len)
+		if (chunk->head_skb && chunk->skb->data_len == chunk->skb->len) {
+			if (WARN_ON(!skb_shinfo(chunk->skb)->frag_list)) {
+				__SCTP_INC_STATS(dev_net(chunk->skb->dev),
+						 SCTP_MIB_IN_PKT_DISCARDS);
+				sctp_chunk_free(chunk);
+				goto next_chunk;
+			}
 			chunk->skb = skb_shinfo(chunk->skb)->frag_list;
-
-		if (WARN_ON(!chunk->skb)) {
-			__SCTP_INC_STATS(dev_net(chunk->skb->dev), SCTP_MIB_IN_PKT_DISCARDS);
-			sctp_chunk_free(chunk);
-			goto next_chunk;
 		}
 	}
-13
net/smc/smc_inet.c
···
 	.protocol	= IPPROTO_SMC,
 	.prot		= &smc_inet_prot,
 	.ops		= &smc_inet_stream_ops,
-	.flags		= INET_PROTOSW_ICSK,
 };

 #if IS_ENABLED(CONFIG_IPV6)
···
 	.protocol	= IPPROTO_SMC,
 	.prot		= &smc_inet6_prot,
 	.ops		= &smc_inet6_stream_ops,
-	.flags		= INET_PROTOSW_ICSK,
 };
 #endif /* CONFIG_IPV6 */
-
-static unsigned int smc_sync_mss(struct sock *sk, u32 pmtu)
-{
-	/* No need pass it through to clcsock, mss can always be set by
-	 * sock_create_kern or smc_setsockopt.
-	 */
-	return 0;
-}

 static int smc_inet_init_sock(struct sock *sk)
 {
···

 	/* init common smc sock */
 	smc_sk_init(net, sk, IPPROTO_SMC);
-
-	inet_csk(sk)->icsk_sync_mss = smc_sync_mss;
-
 	/* create clcsock */
 	return smc_create_clcsk(net, sk, sk->sk_family);
 }
+19-19
net/vmw_vsock/af_vsock.c
···
 		goto err;
 	}

-	if (vsk->transport) {
-		if (vsk->transport == new_transport) {
-			ret = 0;
-			goto err;
-		}
+	if (vsk->transport && vsk->transport == new_transport) {
+		ret = 0;
+		goto err;
+	}

+	/* We increase the module refcnt to prevent the transport unloading
+	 * while there are open sockets assigned to it.
+	 */
+	if (!new_transport || !try_module_get(new_transport->module)) {
+		ret = -ENODEV;
+		goto err;
+	}
+
+	/* It's safe to release the mutex after a successful try_module_get().
+	 * Whichever transport `new_transport` points at, it won't go away until
+	 * the last module_put() below or in vsock_deassign_transport().
+	 */
+	mutex_unlock(&vsock_register_mutex);
+
+	if (vsk->transport) {
 		/* transport->release() must be called with sock lock acquired.
 		 * This path can only be taken during vsock_connect(), where we
 		 * have already held the sock lock. In the other cases, this
···
 		sk->sk_state = TCP_CLOSE;
 		vsk->peer_shutdown = 0;
 	}
-
-	/* We increase the module refcnt to prevent the transport unloading
-	 * while there are open sockets assigned to it.
-	 */
-	if (!new_transport || !try_module_get(new_transport->module)) {
-		ret = -ENODEV;
-		goto err;
-	}
-
-	/* It's safe to release the mutex after a successful try_module_get().
-	 * Whichever transport `new_transport` points at, it won't go away until
-	 * the last module_put() below or in vsock_deassign_transport().
-	 */
-	mutex_unlock(&vsock_register_mutex);

 	if (sk->sk_type == SOCK_SEQPACKET) {
 		if (!new_transport->seqpacket_allow ||
···

     /// Returns a reference to the parent [`device::Device`], if any.
     pub fn parent(&self) -> Option<&device::Device> {
-        let ptr: *const Self = self;
-        // CAST: `Device<Ctx: DeviceContext>` types are transparent to each other.
-        let ptr: *const Device = ptr.cast();
-        // SAFETY: `ptr` was derived from `&self`.
-        let this = unsafe { &*ptr };
-
-        this.as_ref().parent()
+        self.as_ref().parent()
     }
 }
+2-2
rust/kernel/device.rs
···

     /// Returns a reference to the parent device, if any.
     #[cfg_attr(not(CONFIG_AUXILIARY_BUS), expect(dead_code))]
-    pub(crate) fn parent(&self) -> Option<&Self> {
+    pub(crate) fn parent(&self) -> Option<&Device> {
         // SAFETY:
         // - By the type invariant `self.as_raw()` is always valid.
         // - The parent device is only ever set at device creation.
···
         // - Since `parent` is not NULL, it must be a valid pointer to a `struct device`.
         // - `parent` is valid for the lifetime of `self`, since a `struct device` holds a
         //   reference count of its parent.
-        Some(unsafe { Self::from_raw(parent) })
+        Some(unsafe { Device::from_raw(parent) })
     }
 }
+3-2
tools/objtool/check.c
···
 	 * these come from the Rust standard library).
 	 */
 	return str_ends_with(func->name, "_4core5sliceSp15copy_from_slice17len_mismatch_fail") ||
+	       str_ends_with(func->name, "_4core6option13expect_failed") ||
	       str_ends_with(func->name, "_4core6option13unwrap_failed") ||
	       str_ends_with(func->name, "_4core6result13unwrap_failed") ||
	       str_ends_with(func->name, "_4core9panicking5panic") ||
···

 	for_each_reloc(sec->rsec, reloc) {
 		if (arch_absolute_reloc(file->elf, reloc)) {
-			WARN("section %s has absolute relocation at offset 0x%lx",
-			     sec->name, reloc_offset(reloc));
+			WARN("section %s has absolute relocation at offset 0x%llx",
+			     sec->name, (unsigned long long)reloc_offset(reloc));
 			ret++;
 		}
 	}
···
 	return labs(a - b) <= (a + b) / 100 * err;
 }

+/*
+ * Checks if two given values differ by less than err% of their sum and assert
+ * with detailed debug info if not.
+ */
+static inline int values_close_report(long a, long b, int err)
+{
+	long diff = labs(a - b);
+	long limit = (a + b) / 100 * err;
+	double actual_err = (a + b) ? (100.0 * diff / (a + b)) : 0.0;
+	int close = diff <= limit;
+
+	if (!close)
+		fprintf(stderr,
+			"[FAIL] actual=%ld expected=%ld | diff=%ld | limit=%ld | "
+			"tolerance=%d%% | actual_error=%.2f%%\n",
+			a, b, diff, limit, err, actual_err);
+
+	return close;
+}
+
 extern ssize_t read_text(const char *path, char *buf, size_t max_len);
 extern ssize_t write_text(const char *path, char *buf, ssize_t len);
+9-9
tools/testing/selftests/cgroup/test_cpu.c
···
 	if (user_usec <= 0)
 		goto cleanup;

-	if (!values_close(usage_usec, expected_usage_usec, 1))
+	if (!values_close_report(usage_usec, expected_usage_usec, 1))
 		goto cleanup;

 	ret = KSFT_PASS;
···

 	user_usec = cg_read_key_long(cpucg, "cpu.stat", "user_usec");
 	nice_usec = cg_read_key_long(cpucg, "cpu.stat", "nice_usec");
-	if (!values_close(nice_usec, expected_nice_usec, 1))
+	if (!values_close_report(nice_usec, expected_nice_usec, 1))
 		goto cleanup;

 	ret = KSFT_PASS;
···
 			goto cleanup;

 		delta = children[i + 1].usage - children[i].usage;
-		if (!values_close(delta, children[0].usage, 35))
+		if (!values_close_report(delta, children[0].usage, 35))
 			goto cleanup;
 	}
···
 	int ret = KSFT_FAIL, i;

 	for (i = 0; i < num_children - 1; i++) {
-		if (!values_close(children[i + 1].usage, children[0].usage, 15))
+		if (!values_close_report(children[i + 1].usage, children[0].usage, 15))
 			goto cleanup;
 	}
···

 	nested_leaf_usage = leaf[1].usage + leaf[2].usage;
 	if (overprovisioned) {
-		if (!values_close(leaf[0].usage, nested_leaf_usage, 15))
+		if (!values_close_report(leaf[0].usage, nested_leaf_usage, 15))
 			goto cleanup;
-	} else if (!values_close(leaf[0].usage * 2, nested_leaf_usage, 15))
+	} else if (!values_close_report(leaf[0].usage * 2, nested_leaf_usage, 15))
 		goto cleanup;


 	child_usage = cg_read_key_long(child, "cpu.stat", "usage_usec");
 	if (child_usage <= 0)
 		goto cleanup;
-	if (!values_close(child_usage, nested_leaf_usage, 1))
+	if (!values_close_report(child_usage, nested_leaf_usage, 1))
 		goto cleanup;

 	ret = KSFT_PASS;
···
 	expected_usage_usec
 		= n_periods * quota_usec + MIN(remainder_usec, quota_usec);

-	if (!values_close(usage_usec, expected_usage_usec, 10))
+	if (!values_close_report(usage_usec, expected_usage_usec, 10))
 		goto cleanup;

 	ret = KSFT_PASS;
···
 	expected_usage_usec
 		= n_periods * quota_usec + MIN(remainder_usec, quota_usec);

-	if (!values_close(usage_usec, expected_usage_usec, 10))
+	if (!values_close_report(usage_usec, expected_usage_usec, 10))
 		goto cleanup;

 	ret = KSFT_PASS;
+9-9
tools/testing/selftests/net/mptcp/mptcp_join.sh
···
 {
	# no laminar endpoints: routing rules are used
	if reset_with_tcp_filter "without a laminar endpoint" ns1 10.0.2.2 REJECT &&
-	   mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
+	   continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
		pm_nl_set_limits $ns1 0 2
		pm_nl_set_limits $ns2 2 2
		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
···

	# laminar endpoints: this endpoint is used
	if reset_with_tcp_filter "with a laminar endpoint" ns1 10.0.2.2 REJECT &&
-	   mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
+	   continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
		pm_nl_set_limits $ns1 0 2
		pm_nl_set_limits $ns2 2 2
		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
···

	# laminar endpoints: these endpoints are used
	if reset_with_tcp_filter "with multiple laminar endpoints" ns1 10.0.2.2 REJECT &&
-	   mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
+	   continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
		pm_nl_set_limits $ns1 0 2
		pm_nl_set_limits $ns2 2 2
		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
···

	# laminar endpoints: only one endpoint is used
	if reset_with_tcp_filter "single laminar endpoint" ns1 10.0.2.2 REJECT &&
-	   mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
+	   continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
		pm_nl_set_limits $ns1 0 2
		pm_nl_set_limits $ns2 2 2
		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
···

	# laminar endpoints: subflow and laminar flags
	if reset_with_tcp_filter "sublow + laminar endpoints" ns1 10.0.2.2 REJECT &&
-	   mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
+	   continue_if mptcp_lib_kallsyms_has "mptcp_pm_get_endp_laminar_max$"; then
		pm_nl_set_limits $ns1 0 4
		pm_nl_set_limits $ns2 2 4
		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
···
	# subflow_rebuild_header is needed to support the implicit flag
	# userspace pm type prevents add_addr
	if reset "implicit EP" &&
-	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
		pm_nl_set_limits $ns1 2 2
		pm_nl_set_limits $ns2 2 2
		pm_nl_add_endpoint $ns1 10.0.2.1 flags signal
···
	fi

	if reset_with_tcp_filter "delete and re-add" ns2 10.0.3.2 REJECT OUTPUT &&
-	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
		start_events
		pm_nl_set_limits $ns1 0 3
		pm_nl_set_limits $ns2 0 3
···

	# remove and re-add
	if reset_with_events "delete re-add signal" &&
-	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
		ip netns exec $ns1 sysctl -q net.mptcp.add_addr_timeout=0
		pm_nl_set_limits $ns1 0 3
		pm_nl_set_limits $ns2 3 3
···

	# flush and re-add
	if reset_with_tcp_filter "flush re-add" ns2 10.0.3.2 REJECT OUTPUT &&
-	   mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
+	   continue_if mptcp_lib_kallsyms_has "subflow_rebuild_header$"; then
		pm_nl_set_limits $ns1 0 2
		pm_nl_set_limits $ns2 1 2
		# broadcast IP: no packet for this address will be received on ns1
+2-15
tools/testing/selftests/net/sctp_hello.c
···
 static int do_client(int argc, char *argv[])
 {
 	struct sockaddr_storage ss;
-	char buf[] = "hello";
 	int csk, ret, len;
 
 	if (argc < 5) {
···
 	set_addr(&ss, argv[3], argv[4], &len);
 	ret = connect(csk, (struct sockaddr *)&ss, len);
-	if (ret < 0) {
-		printf("failed to connect to peer\n");
+	if (ret < 0)
 		return -1;
-	}
 
-	ret = send(csk, buf, strlen(buf) + 1, 0);
-	if (ret < 0) {
-		printf("failed to send msg %d\n", ret);
-		return -1;
-	}
+	recv(csk, NULL, 0, 0);
 	close(csk);
 
 	return 0;
···
 {
 	struct sockaddr_storage ss;
 	int lsk, csk, ret, len;
-	char buf[20];
 
 	if (argc < 2 || (strcmp(argv[1], "server") && strcmp(argv[1], "client"))) {
 		printf("%s server|client ...\n", argv[0]);
···
 		return -1;
 	}
 
-	ret = recv(csk, buf, sizeof(buf), 0);
-	if (ret <= 0) {
-		printf("failed to recv msg %d\n", ret);
-		return -1;
-	}
 	close(csk);
 	close(lsk);
 
+45-28
tools/testing/selftests/net/sctp_vrf.sh
···
 	modprobe sctp_diag
 	setup_ns CLIENT_NS1 CLIENT_NS2 SERVER_NS
 
-	ip net exec $CLIENT_NS1 sysctl -w net.ipv6.conf.default.accept_dad=0 2>&1 >/dev/null
-	ip net exec $CLIENT_NS2 sysctl -w net.ipv6.conf.default.accept_dad=0 2>&1 >/dev/null
-	ip net exec $SERVER_NS sysctl -w net.ipv6.conf.default.accept_dad=0 2>&1 >/dev/null
+	ip net exec $CLIENT_NS1 sysctl -wq net.ipv6.conf.default.accept_dad=0
+	ip net exec $CLIENT_NS2 sysctl -wq net.ipv6.conf.default.accept_dad=0
+	ip net exec $SERVER_NS sysctl -wq net.ipv6.conf.default.accept_dad=0
 
 	ip -n $SERVER_NS link add veth1 type veth peer name veth1 netns $CLIENT_NS1
 	ip -n $SERVER_NS link add veth2 type veth peer name veth1 netns $CLIENT_NS2
···
 }
 
 cleanup() {
-	ip netns exec $SERVER_NS pkill sctp_hello 2>&1 >/dev/null
+	wait_client $CLIENT_NS1
+	wait_client $CLIENT_NS2
+	stop_server
 	cleanup_ns $CLIENT_NS1 $CLIENT_NS2 $SERVER_NS
 }
 
-wait_server() {
+start_server() {
 	local IFACE=$1
 	local CNT=0
 
-	until ip netns exec $SERVER_NS ss -lS src $SERVER_IP:$SERVER_PORT | \
-	      grep LISTEN | grep "$IFACE" 2>&1 >/dev/null; do
-		[ $((CNT++)) = "20" ] && { RET=3; return $RET; }
+	ip netns exec $SERVER_NS ./sctp_hello server $AF $SERVER_IP $SERVER_PORT $IFACE &
+	disown
+	until ip netns exec $SERVER_NS ss -SlH | grep -q "$IFACE"; do
+		[ $((CNT++)) -eq 30 ] && { RET=3; return $RET; }
+		sleep 0.1
+	done
+}
+
+stop_server() {
+	local CNT=0
+
+	ip netns exec $SERVER_NS pkill sctp_hello
+	while ip netns exec $SERVER_NS ss -SaH | grep -q .; do
+		[ $((CNT++)) -eq 30 ] && break
+		sleep 0.1
+	done
+}
+
+wait_client() {
+	local CLIENT_NS=$1
+	local CNT=0
+
+	while ip netns exec $CLIENT_NS ss -SaH | grep -q .; do
+		[ $((CNT++)) -eq 30 ] && break
 		sleep 0.1
 	done
 }
···
 	local CLIENT_NS=$1
 	local IFACE=$2
 
-	ip netns exec $SERVER_NS pkill sctp_hello 2>&1 >/dev/null
-	ip netns exec $SERVER_NS ./sctp_hello server $AF $SERVER_IP \
-		$SERVER_PORT $IFACE 2>&1 >/dev/null &
-	disown
-	wait_server $IFACE || return $RET
+	start_server $IFACE || return $RET
 	timeout 3 ip netns exec $CLIENT_NS ./sctp_hello client $AF \
-		$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT 2>&1 >/dev/null
+		$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT
 	RET=$?
+	wait_client $CLIENT_NS
+	stop_server
 	return $RET
 }
···
 	local IFACE1=$1
 	local IFACE2=$2
 
-	ip netns exec $SERVER_NS pkill sctp_hello 2>&1 >/dev/null
-	ip netns exec $SERVER_NS ./sctp_hello server $AF $SERVER_IP \
-		$SERVER_PORT $IFACE1 2>&1 >/dev/null &
-	disown
-	wait_server $IFACE1 || return $RET
-	ip netns exec $SERVER_NS ./sctp_hello server $AF $SERVER_IP \
-		$SERVER_PORT $IFACE2 2>&1 >/dev/null &
-	disown
-	wait_server $IFACE2 || return $RET
+	start_server $IFACE1 || return $RET
+	start_server $IFACE2 || return $RET
 	timeout 3 ip netns exec $CLIENT_NS1 ./sctp_hello client $AF \
-		$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT 2>&1 >/dev/null && \
+		$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT && \
 	timeout 3 ip netns exec $CLIENT_NS2 ./sctp_hello client $AF \
-		$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT 2>&1 >/dev/null
+		$SERVER_IP $SERVER_PORT $CLIENT_IP $CLIENT_PORT
 	RET=$?
+	wait_client $CLIENT_NS1
+	wait_client $CLIENT_NS2
+	stop_server
 	return $RET
 }
 
 testup() {
-	ip netns exec $SERVER_NS sysctl -w net.sctp.l3mdev_accept=1 2>&1 >/dev/null
+	ip netns exec $SERVER_NS sysctl -wq net.sctp.l3mdev_accept=1
 	echo -n "TEST 01: nobind, connect from client 1, l3mdev_accept=1, Y "
 	do_test $CLIENT_NS1 || { echo "[FAIL]"; return $RET; }
 	echo "[PASS]"
···
 	do_test $CLIENT_NS2 && { echo "[FAIL]"; return $RET; }
 	echo "[PASS]"
 
-	ip netns exec $SERVER_NS sysctl -w net.sctp.l3mdev_accept=0 2>&1 >/dev/null
+	ip netns exec $SERVER_NS sysctl -wq net.sctp.l3mdev_accept=0
 	echo -n "TEST 03: nobind, connect from client 1, l3mdev_accept=0, N "
 	do_test $CLIENT_NS1 && { echo "[FAIL]"; return $RET; }
 	echo "[PASS]"
···
 	do_testx vrf-1 vrf-2 || { echo "[FAIL]"; return $RET; }
 	echo "[PASS]"
 
-	echo -n "TEST 12: bind vrf-2 & 1 in server, connect from client 1 & 2, N "
+	echo -n "TEST 12: bind vrf-2 & 1 in server, connect from client 1 & 2, Y "
 	do_testx vrf-2 vrf-1 || { echo "[FAIL]"; return $RET; }
 	echo "[PASS]"
 }
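The new start_server, stop_server and wait_client helpers all rely on the same bounded-poll idiom: retry a check every 0.1s and give up after 30 attempts, instead of assuming pkill or connect has taken effect immediately. A minimal sketch of that idiom, with a hypothetical wait_for helper that is not part of the patch:

```shell
#!/bin/bash
# Bounded-poll sketch in the style of sctp_vrf.sh's new helpers.
# "wait_for" is a hypothetical name used only for illustration:
# re-evaluate a condition every 0.1s, give up after $2 attempts,
# mirroring the "[ $((CNT++)) -eq 30 ]" guard in the patch.
wait_for() {
	local cond=$1 tries=$2 cnt=0

	until eval "$cond"; do
		[ $((cnt++)) -eq "$tries" ] && return 3
		sleep 0.1
	done
}

wait_for true 30 && echo "condition met"
wait_for false 3 || echo "gave up, rc=$?"
```

Polling `ss -SaH` until no SCTP associations remain, rather than trusting `pkill` to have finished teardown, is what keeps the later re-bind tests from racing against lingering sockets.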