···
Note that it only applies to the new descriptor-based interface. For a
description of the deprecated integer-based GPIO interface please refer to
-gpio-legacy.txt (actually, there is no real mapping possible with the old
+legacy.rst (actually, there is no real mapping possible with the old
interface; you just fetch an integer from somewhere and request the
corresponding GPIO).
+3-3
Documentation/driver-api/gpio/consumer.rst
···
This document describes the consumer interface of the GPIO framework. Note that
it describes the new descriptor-based interface. For a description of the
-deprecated integer-based GPIO interface please refer to gpio-legacy.txt.
+deprecated integer-based GPIO interface please refer to legacy.rst.


Guidelines for GPIOs consumers
···
The two last flags are used for use cases where open drain is mandatory, such
as I2C: if the line is not already configured as open drain in the mappings
-(see board.txt), then open drain will be enforced anyway and a warning will be
+(see board.rst), then open drain will be enforced anyway and a warning will be
printed that the board configuration needs to be updated to match the use case.

Both functions return either a valid GPIO descriptor, or an error code checkable
···
The same is applicable for open drain or open source output lines: those do not
actively drive their output high (open drain) or low (open source), they just
switch their output to a high impedance value. The consumer should not need to
-care. (For details read about open drain in driver.txt.)
+care. (For details read about open drain in driver.rst.)

With this, all the gpiod_set_(array)_value_xxx() functions interpret the
parameter "value" as "asserted" ("1") or "de-asserted" ("0"). The physical line
+3-3
Documentation/driver-api/gpio/intro.rst
···
ways to obtain and use GPIOs:

 - The descriptor-based interface is the preferred way to manipulate GPIOs,
-   and is described by all the files in this directory excepted gpio-legacy.txt.
+   and is described by all the files in this directory excepted legacy.rst.
 - The legacy integer-based interface which is considered deprecated (but still
-   usable for compatibility reasons) is documented in gpio-legacy.txt.
+   usable for compatibility reasons) is documented in legacy.rst.

The remainder of this document applies to the new descriptor-based interface.
-gpio-legacy.txt contains the same information applied to the legacy
+legacy.rst contains the same information applied to the legacy
integer-based interface.

+13-3
Documentation/filesystems/btrfs.rst
···
 * Subvolumes (separate internal filesystem roots)
 * Object level mirroring and striping
 * Checksums on data and metadata (multiple algorithms available)
- * Compression
+ * Compression (multiple algorithms available)
+ * Reflink, deduplication
+ * Scrub (on-line checksum verification)
+ * Hierarchical quota groups (subvolume and snapshot support)
 * Integrated multiple device support, with several raid algorithms
 * Offline filesystem check
- * Efficient incremental backup and FS mirroring
+ * Efficient incremental backup and FS mirroring (send/receive)
+ * Trim/discard
 * Online filesystem defragmentation
+ * Swapfile support
+ * Zoned mode
+ * Read/write metadata verification
+ * Online resize (shrink, grow)

-For more information please refer to the wiki
+For more information please refer to the documentation site or wiki
+
+  https://btrfs.readthedocs.io

  https://btrfs.wiki.kernel.org
···
 unpoison-pfn
	Software-unpoison page at PFN echoed into this file. This way
	a page can be reused again. This only works for Linux
-	injected failures, not for real memory failures.
+	injected failures, not for real memory failures. Once any hardware
+	memory failure happens, this feature is disabled.

Note these injection interfaces are not stable and might change between
kernel versions
+89-38
MAINTAINERS
···
M: Jean-Philippe Brucker <jean-philippe@linaro.org>
L: linux-acpi@vger.kernel.org
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
S: Maintained
F: drivers/acpi/viot.c
F: include/linux/acpi_viot.h
···
M: Joerg Roedel <joro@8bytes.org>
R: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
F: drivers/iommu/amd/
···
M: Chester Lin <clin@suse.com>
R: Andreas Färber <afaerber@suse.de>
R: Matthias Brugger <mbrugger@suse.com>
+R: NXP S32 Linux Team <s32@nxp.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm64/boot/dts/freescale/s32g*.dts*
···
M: Shubham Bansal <illusionist.neo@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
-S: Maintained
+S: Odd Fixes
F: arch/arm/net/

BPF JIT for ARM64
···
M: Jakub Kicinski <kuba@kernel.org>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
-S: Supported
+S: Odd Fixes
F: drivers/net/ethernet/netronome/nfp/bpf/

BPF JIT for POWERPC (32-BIT AND 64-BIT)
M: Naveen N. Rao <naveen.n.rao@linux.ibm.com>
+M: Michael Ellerman <mpe@ellerman.id.au>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
-S: Maintained
+S: Supported
F: arch/powerpc/net/

BPF JIT for RISC-V (32-bit)
···
M: Vasily Gorbik <gor@linux.ibm.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
-S: Maintained
+S: Supported
F: arch/s390/net/
X: arch/s390/net/pnet.c
···
M: David S. Miller <davem@davemloft.net>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
-S: Maintained
+S: Odd Fixes
F: arch/sparc/net/

BPF JIT for X86 32-BIT
M: Wang YanQing <udknight@gmail.com>
L: netdev@vger.kernel.org
L: bpf@vger.kernel.org
-S: Maintained
+S: Odd Fixes
F: arch/x86/net/bpf_jit_comp32.c

BPF JIT for X86 64-BIT
···
F: include/linux/bpf_lsm.h
F: kernel/bpf/bpf_lsm.c
F: security/bpf/
+
+BPF L7 FRAMEWORK
+M: John Fastabend <john.fastabend@gmail.com>
+M: Jakub Sitnicki <jakub@cloudflare.com>
+L: netdev@vger.kernel.org
+L: bpf@vger.kernel.org
+S: Maintained
+F: include/linux/skmsg.h
+F: net/core/skmsg.c
+F: net/core/sock_map.c
+F: net/ipv4/tcp_bpf.c
+F: net/ipv4/udp_bpf.c
+F: net/unix/unix_bpf.c

BPFTOOL
M: Quentin Monnet <quentin@isovalent.com>
···
N: bcm[9]?47622

BROADCOM BCM2711/BCM2835 ARM ARCHITECTURE
-M: Nicolas Saenz Julienne <nsaenz@kernel.org>
+M: Florian Fainelli <f.fainelli@gmail.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
L: linux-rpi-kernel@lists.infradead.org (moderated for non-subscribers)
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/nsaenz/linux-rpi.git
+T: git git://github.com/broadcom/stblinux.git
F: Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml
F: drivers/pci/controller/pcie-brcmstb.c
F: drivers/staging/vc04_services
···
M: Marek Szyprowski <m.szyprowski@samsung.com>
R: Robin Murphy <robin.murphy@arm.com>
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
S: Supported
W: http://git.infradead.org/users/hch/dma-mapping.git
T: git git://git.infradead.org/users/hch/dma-mapping.git
···
DMA MAPPING BENCHMARK
M: Xiang Chen <chenxiang66@hisilicon.com>
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
F: kernel/dma/map_benchmark.c
F: tools/testing/selftests/dma/
···
EXYNOS SYSMMU (IOMMU) driver
M: Marek Szyprowski <m.szyprowski@samsung.com>
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
S: Maintained
F: drivers/iommu/exynos-iommu.c
···
F: Documentation/driver-api/gpio/
F: drivers/gpio/
F: include/asm-generic/gpio.h
+F: include/dt-bindings/gpio/
F: include/linux/gpio.h
F: include/linux/gpio/
F: include/linux/of_gpio.h
···

HWPOISON MEMORY FAILURE HANDLING
M: Naoya Horiguchi <naoya.horiguchi@nec.com>
+R: Miaohe Lin <linmiaohe@huawei.com>
L: linux-mm@kvack.org
S: Maintained
F: mm/hwpoison-inject.c
···
M: David Woodhouse <dwmw2@infradead.org>
M: Lu Baolu <baolu.lu@linux.intel.com>
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
S: Supported
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
F: drivers/iommu/intel/
···
M: Joerg Roedel <joro@8bytes.org>
M: Will Deacon <will@kernel.org>
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
F: Documentation/devicetree/bindings/iommu/
···
R: James Morse <james.morse@arm.com>
R: Alexandru Elisei <alexandru.elisei@arm.com>
R: Suzuki K Poulose <suzuki.poulose@arm.com>
+R: Oliver Upton <oliver.upton@linux.dev>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
L: kvmarm@lists.cs.columbia.edu (moderated for non-subscribers)
S: Maintained
···
F: tools/testing/selftests/kvm/s390x/

KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86)
+M: Sean Christopherson <seanjc@google.com>
M: Paolo Bonzini <pbonzini@redhat.com>
-R: Sean Christopherson <seanjc@google.com>
-R: Vitaly Kuznetsov <vkuznets@redhat.com>
-R: Wanpeng Li <wanpengli@tencent.com>
-R: Jim Mattson <jmattson@google.com>
-R: Joerg Roedel <joro@8bytes.org>
L: kvm@vger.kernel.org
S: Supported
-W: http://www.linux-kvm.org
T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
F: arch/x86/include/asm/kvm*
-F: arch/x86/include/asm/pvclock-abi.h
F: arch/x86/include/asm/svm.h
F: arch/x86/include/asm/vmx*.h
F: arch/x86/include/uapi/asm/kvm*
F: arch/x86/include/uapi/asm/svm.h
F: arch/x86/include/uapi/asm/vmx.h
-F: arch/x86/kernel/kvm.c
-F: arch/x86/kernel/kvmclock.c
F: arch/x86/kvm/
F: arch/x86/kvm/*/
+
+KVM PARAVIRT (KVM/paravirt)
+M: Paolo Bonzini <pbonzini@redhat.com>
+R: Wanpeng Li <wanpengli@tencent.com>
+R: Vitaly Kuznetsov <vkuznets@redhat.com>
+L: kvm@vger.kernel.org
+S: Supported
+T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
+F: arch/x86/kernel/kvm.c
+F: arch/x86/kernel/kvmclock.c
+F: arch/x86/include/asm/pvclock-abi.h
+F: include/linux/kvm_para.h
+F: include/uapi/linux/kvm_para.h
+F: include/uapi/asm-generic/kvm_para.h
+F: include/asm-generic/kvm_para.h
+F: arch/um/include/asm/kvm_para.h
+F: arch/x86/include/asm/kvm_para.h
+F: arch/x86/include/uapi/asm/kvm_para.h
+
+KVM X86 HYPER-V (KVM/hyper-v)
+M: Vitaly Kuznetsov <vkuznets@redhat.com>
+M: Sean Christopherson <seanjc@google.com>
+M: Paolo Bonzini <pbonzini@redhat.com>
+L: kvm@vger.kernel.org
+S: Supported
+T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git
+F: arch/x86/kvm/hyperv.*
+F: arch/x86/kvm/kvm_onhyperv.*
+F: arch/x86/kvm/svm/hyperv.*
+F: arch/x86/kvm/svm/svm_onhyperv.*
+F: arch/x86/kvm/vmx/evmcs.*

KERNFS
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
···
S: Maintained
F: include/net/l3mdev.h
F: net/l3mdev
-
-L7 BPF FRAMEWORK
-M: John Fastabend <john.fastabend@gmail.com>
-M: Daniel Borkmann <daniel@iogearbox.net>
-M: Jakub Sitnicki <jakub@cloudflare.com>
-L: netdev@vger.kernel.org
-L: bpf@vger.kernel.org
-S: Maintained
-F: include/linux/skmsg.h
-F: net/core/skmsg.c
-F: net/core/sock_map.c
-F: net/ipv4/tcp_bpf.c
-F: net/ipv4/udp_bpf.c
-F: net/unix/unix_bpf.c

LANDLOCK SECURITY MODULE
M: Mickaël Salaün <mic@digikod.net>
···
LOONGARCH
M: Huacai Chen <chenhuacai@kernel.org>
R: WANG Xuerui <kernel@xen0n.name>
+L: loongarch@lists.linux.dev
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson.git
F: arch/loongarch/
···
MEDIATEK IOMMU DRIVER
M: Yong Wu <yong.wu@mediatek.com>
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
L: linux-mediatek@lists.infradead.org (moderated for non-subscribers)
S: Supported
F: Documentation/devicetree/bindings/iommu/mediatek*
···
L: linux-mm@kvack.org
S: Maintained
W: http://www.linux-mm.org
-T: quilt https://ozlabs.org/~akpm/mmotm/
-T: quilt https://ozlabs.org/~akpm/mmots/
-T: git git://github.com/hnaz/linux-mm.git
+T: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
+T: quilt git://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new
F: include/linux/gfp.h
F: include/linux/memory_hotplug.h
F: include/linux/mm.h
···
F: include/linux/vmalloc.h
F: mm/
F: tools/testing/selftests/vm/
+
+MEMORY HOT(UN)PLUG
+M: David Hildenbrand <david@redhat.com>
+M: Oscar Salvador <osalvador@suse.de>
+L: linux-mm@kvack.org
+S: Maintained
+F: Documentation/admin-guide/mm/memory-hotplug.rst
+F: Documentation/core-api/memory-hotplug.rst
+F: drivers/base/memory.c
+F: include/linux/memory_hotplug.h
+F: mm/memory_hotplug.c
+F: tools/testing/selftests/memory-hotplug/

MEMORY TECHNOLOGY DEVICES (MTD)
M: Miquel Raynal <miquel.raynal@bootlin.com>
···
NETWORKING [TLS]
M: Boris Pismenny <borisp@nvidia.com>
M: John Fastabend <john.fastabend@gmail.com>
-M: Daniel Borkmann <daniel@iogearbox.net>
M: Jakub Kicinski <kuba@kernel.org>
L: netdev@vger.kernel.org
S: Maintained
···
F: drivers/iio/gyro/fxas21002c_spi.c

NXP i.MX CLOCK DRIVERS
-M: Abel Vesa <abel.vesa@nxp.com>
+M: Abel Vesa <abelvesa@kernel.org>
L: linux-clk@vger.kernel.org
L: linux-imx@nxp.com
S: Maintained
···

OPENCOMPUTE PTP CLOCK DRIVER
M: Jonathan Lemon <jonathan.lemon@gmail.com>
+M: Vadim Fedorenko <vadfed@fb.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/ptp/ptp_ocp.c
···
F: drivers/cpufreq/qcom-cpufreq-nvmem.c

QUALCOMM CRYPTO DRIVERS
-M: Thara Gopinath <thara.gopinath@linaro.org>
+M: Thara Gopinath <thara.gopinath@gmail.com>
L: linux-crypto@vger.kernel.org
L: linux-arm-msm@vger.kernel.org
S: Maintained
···
QUALCOMM IOMMU
M: Rob Clark <robdclark@gmail.com>
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
L: linux-arm-msm@vger.kernel.org
S: Maintained
F: drivers/iommu/arm/arm-smmu/qcom_iommu.c
···

QUALCOMM TSENS THERMAL DRIVER
M: Amit Kucheria <amitk@kernel.org>
-M: Thara Gopinath <thara.gopinath@linaro.org>
+M: Thara Gopinath <thara.gopinath@gmail.com>
L: linux-pm@vger.kernel.org
L: linux-arm-msm@vger.kernel.org
S: Maintained
···
SWIOTLB SUBSYSTEM
M: Christoph Hellwig <hch@infradead.org>
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
S: Supported
W: http://git.infradead.org/users/hch/dma-mapping.git
T: git git://git.infradead.org/users/hch/dma-mapping.git
···
F: Documentation/devicetree/bindings/usb/
F: Documentation/usb/
F: drivers/usb/
+F: include/dt-bindings/usb/
F: include/linux/usb.h
F: include/linux/usb/
···
M: Stefano Stabellini <sstabellini@kernel.org>
L: xen-devel@lists.xenproject.org (moderated for non-subscribers)
L: iommu@lists.linux-foundation.org
+L: iommu@lists.linux.dev
S: Supported
F: arch/x86/xen/*swiotlb*
F: drivers/xen/*swiotlb*
···
		return 0;

	/*
-	 * Exclude HYP BSS from kmemleak so that it doesn't get peeked
-	 * at, which would end badly once the section is inaccessible.
-	 * None of other sections should ever be introspected.
+	 * Exclude HYP sections from kmemleak so that they don't get peeked
+	 * at, which would end badly once inaccessible.
	 */
	kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start);
+	kmemleak_free_part(__va(hyp_mem_base), hyp_mem_size);
	return pkvm_drop_host_privileges();
}
···
	if (of_address_to_resource(np, 0, &res))
		panic("Failed to get resource for %s", node);

+	of_node_put(np);
+
	if (!request_mem_region(res.start,
				resource_size(&res),
				res.name))
···
		return;

	if (parisc_requires_coherency()) {
-		flush_user_cache_page(vma, vmaddr);
+		if (vma->vm_flags & VM_SHARED)
+			flush_data_cache();
+		else
+			flush_user_cache_page(vma, vmaddr);
		return;
	}
+1-1
arch/parisc/math-emu/decode_exc.c
···
	 * that happen. Want to keep this overhead low, but still provide
	 * some information to the customer. All exits from this routine
	 * need to restore Fpu_register[0]
-	*/
+	 */

	bflags=(Fpu_register[0] & 0xf8000000);
	Fpu_register[0] &= 0x07ffffff;
···
	{ "get-time-of-day", -1, -1, -1, -1, -1 },
	{ "ibm,get-vpd", -1, 0, -1, 1, 2 },
	{ "ibm,lpar-perftools", -1, 2, 3, -1, -1 },
-	{ "ibm,platform-dump", -1, 4, 5, -1, -1 },
+	{ "ibm,platform-dump", -1, 4, 5, -1, -1 },		/* Special cased */
	{ "ibm,read-slot-reset-state", -1, -1, -1, -1, -1 },
	{ "ibm,scan-log-dump", -1, 0, 1, -1, -1 },
	{ "ibm,set-dynamic-indicator", -1, 2, -1, -1, -1 },
···
			size = 1;

		end = base + size - 1;
+
+		/*
+		 * Special case for ibm,platform-dump - NULL buffer
+		 * address is used to indicate end of dump processing
+		 */
+		if (!strcmp(f->name, "ibm,platform-dump") &&
+		    base == 0)
+			return false;
+
		if (!in_rmo_buf(base, end))
			goto err;
	}
+7-6
arch/powerpc/kernel/setup-common.c
···
	/* Print various info about the machine that has been gathered so far. */
	print_system_info();

-	/* Reserve large chunks of memory for use by CMA for KVM. */
-	kvm_cma_reserve();
-
-	/* Reserve large chunks of memory for us by CMA for hugetlb */
-	gigantic_hugetlb_cma_reserve();
-
	klp_init_thread_info(&init_task);

	setup_initial_init_mm(_stext, _etext, _edata, _end);
···
	smp_release_cpus();

	initmem_init();
+
+	/*
+	 * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
+	 * be called after initmem_init(), so that pageblock_order is initialised.
+	 */
+	kvm_cma_reserve();
+	gigantic_hugetlb_cma_reserve();

	early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT);
···
#include <asm/archrandom.h>
#include <asm/cputable.h>
#include <asm/machdep.h>
+#include "microwatt.h"

#define DARN_ERR 0xFFFFFFFFFFFFFFFFul
···
	return 1;
}

-static __init int rng_init(void)
+void __init microwatt_rng_init(void)
{
	unsigned long val;
	int i;
···
	for (i = 0; i < 10; i++) {
		if (microwatt_get_random_darn(&val)) {
			ppc_md.get_random_seed = microwatt_get_random_darn;
-			return 0;
+			return;
		}
	}
-
-	pr_warn("Unable to use DARN for get_random_seed()\n");
-
-	return -EIO;
}
-machine_subsys_initcall(microwatt, rng_init);
···
#include <asm/prom.h>
#include <asm/machdep.h>
#include <asm/smp.h>
+#include "powernv.h"

#define DARN_ERR 0xFFFFFFFFFFFFFFFFul
···
};

static DEFINE_PER_CPU(struct powernv_rng *, powernv_rng);
-
int powernv_hwrng_present(void)
{
···
			return 0;
		}
	}
-
-	pr_warn("Unable to use DARN for get_random_seed()\n");
-
	return -EIO;
}
···

	rng_init_per_cpu(rng, dn);

-	pr_info_once("Registering arch random hook.\n");
-
	ppc_md.get_random_seed = powernv_get_random_long;

	return 0;
}

-static __init int rng_init(void)
+static int __init pnv_get_random_long_early(unsigned long *v)
{
	struct device_node *dn;
-	int rc;
+
+	if (!slab_is_available())
+		return 0;
+
+	if (cmpxchg(&ppc_md.get_random_seed, pnv_get_random_long_early,
+		    NULL) != pnv_get_random_long_early)
+		return 0;

	for_each_compatible_node(dn, NULL, "ibm,power-rng") {
-		rc = rng_create(dn);
-		if (rc) {
-			pr_err("Failed creating rng for %pOF (%d).\n",
-			       dn, rc);
+		if (rng_create(dn))
			continue;
-		}
-
		/* Create devices for hwrng driver */
		of_platform_device_create(dn, NULL, NULL);
	}

-	initialise_darn();
+	if (!ppc_md.get_random_seed)
+		return 0;
+	return ppc_md.get_random_seed(v);
+}

+void __init pnv_rng_init(void)
+{
+	struct device_node *dn;
+
+	/* Prefer darn over the rest. */
+	if (!initialise_darn())
+		return;
+
+	dn = of_find_compatible_node(NULL, NULL, "ibm,power-rng");
+	if (dn)
+		ppc_md.get_random_seed = pnv_get_random_long_early;
+
+	of_node_put(dn);
+}
+
+static int __init pnv_rng_late_init(void)
+{
+	unsigned long v;
+	/* In case it wasn't called during init for some other reason. */
+	if (ppc_md.get_random_seed == pnv_get_random_long_early)
+		pnv_get_random_long_early(&v);
	return 0;
}
-machine_subsys_initcall(powernv, rng_init);
+machine_subsys_initcall(powernv, pnv_rng_late_init);
···
	unsigned long src;
	int rc;

+	if (!(iter_is_iovec(iter) || iov_iter_is_kvec(iter)))
+		return -EINVAL;
+	/* Multi-segment iterators are not supported */
+	if (iter->nr_segs > 1)
+		return -EINVAL;
	if (!csize)
		return 0;
	src = pfn_to_phys(pfn) + offset;
···
		rc = copy_oldmem_user(iter->iov->iov_base, src, csize);
	else
		rc = copy_oldmem_kernel(iter->kvec->iov_base, src, csize);
-	return rc;
+	if (rc < 0)
+		return rc;
+	iov_iter_advance(iter, csize);
+	return csize;
}

/*
+21-1
arch/s390/kernel/perf_cpum_cf.c
···
	return err;
}

+/* Events CPU_CYLCES and INSTRUCTIONS can be submitted with two different
+ * attribute::type values:
+ * - PERF_TYPE_HARDWARE:
+ * - pmu->type:
+ * Handle both type of invocations identical. They address the same hardware.
+ * The result is different when event modifiers exclude_kernel and/or
+ * exclude_user are also set.
+ */
+static int cpumf_pmu_event_type(struct perf_event *event)
+{
+	u64 ev = event->attr.config;
+
+	if (cpumf_generic_events_basic[PERF_COUNT_HW_CPU_CYCLES] == ev ||
+	    cpumf_generic_events_basic[PERF_COUNT_HW_INSTRUCTIONS] == ev ||
+	    cpumf_generic_events_user[PERF_COUNT_HW_CPU_CYCLES] == ev ||
+	    cpumf_generic_events_user[PERF_COUNT_HW_INSTRUCTIONS] == ev)
+		return PERF_TYPE_HARDWARE;
+	return PERF_TYPE_RAW;
+}
+
static int cpumf_pmu_event_init(struct perf_event *event)
{
	unsigned int type = event->attr.type;
···
		err = __hw_perf_event_init(event, type);
	else if (event->pmu->type == type)
		/* Registered as unknown PMU */
-		err = __hw_perf_event_init(event, PERF_TYPE_RAW);
+		err = __hw_perf_event_init(event, cpumf_pmu_event_type(event));
	else
		return -ENOENT;
+15-5
arch/s390/kernel/perf_pai_crypto.c
···
	/* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */
	if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type)
		return -ENOENT;
-	/* PAI crypto event must be valid */
-	if (a->config > PAI_CRYPTO_BASE + paicrypt_cnt)
+	/* PAI crypto event must be in valid range */
+	if (a->config < PAI_CRYPTO_BASE ||
+	    a->config > PAI_CRYPTO_BASE + paicrypt_cnt)
		return -EINVAL;
	/* Allow only CPU wide operation, no process context for now. */
	if (event->hw.target || event->cpu == -1)
···
	if (rc)
		return rc;

+	/* Event initialization sets last_tag to 0. When later on the events
+	 * are deleted and re-added, do not reset the event count value to zero.
+	 * Events are added, deleted and re-added when 2 or more events
+	 * are active at the same time.
+	 */
+	event->hw.last_tag = 0;
	cpump->event = event;
	event->destroy = paicrypt_event_destroy;
···
{
	u64 sum;

-	sum = paicrypt_getall(event);		/* Get current value */
-	local64_set(&event->hw.prev_count, sum);
-	local64_set(&event->count, 0);
+	if (!event->hw.last_tag) {
+		event->hw.last_tag = 1;
+		sum = paicrypt_getall(event);	/* Get current value */
+		local64_set(&event->count, 0);
+		local64_set(&event->hw.prev_count, sum);
+	}
}

static int paicrypt_add(struct perf_event *event, int flags)
···
		blk_mq_exit_queue(q);
	}

-	/*
-	 * In theory, request pool of sched_tags belongs to request queue.
-	 * However, the current implementation requires tag_set for freeing
-	 * requests, so free the pool now.
-	 *
-	 * Queue has become frozen, there can't be any in-queue requests, so
-	 * it is safe to free requests now.
-	 */
-	mutex_lock(&q->sysfs_lock);
-	if (q->elevator)
-		blk_mq_sched_free_rqs(q);
-	mutex_unlock(&q->sysfs_lock);
-
	/* @q is and will stay empty, shutdown and put */
	blk_put_queue(q);
}
-1
block/blk-ia-ranges.c
···
	}

	for (i = 0; i < iars->nr_ia_ranges; i++) {
-		iars->ia_range[i].queue = q;
		ret = kobject_init_and_add(&iars->ia_range[i].kobj,
					   &blk_ia_range_ktype, &iars->kobj,
					   "%d", i);
+18-11
block/blk-mq-debugfs.c
···
	}
}

-void blk_mq_debugfs_unregister(struct request_queue *q)
-{
-	q->sched_debugfs_dir = NULL;
-}
-
static void blk_mq_debugfs_register_ctx(struct blk_mq_hw_ctx *hctx,
					struct blk_mq_ctx *ctx)
{
···

void blk_mq_debugfs_unregister_hctx(struct blk_mq_hw_ctx *hctx)
{
+	if (!hctx->queue->debugfs_dir)
+		return;
	debugfs_remove_recursive(hctx->debugfs_dir);
	hctx->sched_debugfs_dir = NULL;
	hctx->debugfs_dir = NULL;
···
{
	struct elevator_type *e = q->elevator->type;

+	lockdep_assert_held(&q->debugfs_mutex);
+
	/*
	 * If the parent directory has not been created yet, return, we will be
	 * called again later on and the directory/files will be created then.
···

void blk_mq_debugfs_unregister_sched(struct request_queue *q)
{
+	lockdep_assert_held(&q->debugfs_mutex);
+
	debugfs_remove_recursive(q->sched_debugfs_dir);
	q->sched_debugfs_dir = NULL;
}
···

void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos)
{
+	lockdep_assert_held(&rqos->q->debugfs_mutex);
+
+	if (!rqos->q->debugfs_dir)
+		return;
	debugfs_remove_recursive(rqos->debugfs_dir);
	rqos->debugfs_dir = NULL;
}
···
{
	struct request_queue *q = rqos->q;
	const char *dir_name = rq_qos_id_to_name(rqos->id);
+
+	lockdep_assert_held(&q->debugfs_mutex);

	if (rqos->debugfs_dir || !rqos->ops->debugfs_attrs)
		return;
···
	debugfs_create_files(rqos->debugfs_dir, rqos, rqos->ops->debugfs_attrs);
}

-void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q)
-{
-	debugfs_remove_recursive(q->rqos_debugfs_dir);
-	q->rqos_debugfs_dir = NULL;
-}
-
void blk_mq_debugfs_register_sched_hctx(struct request_queue *q,
					struct blk_mq_hw_ctx *hctx)
{
	struct elevator_type *e = q->elevator->type;
+
+	lockdep_assert_held(&q->debugfs_mutex);

	/*
	 * If the parent debugfs directory has not been created yet, return;
···

void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx)
{
+	lockdep_assert_held(&hctx->queue->debugfs_mutex);
+
+	if (!hctx->queue->debugfs_dir)
+		return;
	debugfs_remove_recursive(hctx->sched_debugfs_dir);
	hctx->sched_debugfs_dir = NULL;
}
···
		return NULL;
	}

-	rq_qos_throttle(q, *bio);
-
	if (blk_mq_get_hctx_type((*bio)->bi_opf) != rq->mq_hctx->type)
		return NULL;
	if (op_is_flush(rq->cmd_flags) != op_is_flush((*bio)->bi_opf))
		return NULL;

-	rq->cmd_flags = (*bio)->bi_opf;
+	/*
+	 * If any qos ->throttle() end up blocking, we will have flushed the
+	 * plug and hence killed the cached_rq list as well. Pop this entry
+	 * before we throttle.
+	 */
	plug->cached_rq = rq_list_next(rq);
+	rq_qos_throttle(q, *bio);
+
+	rq->cmd_flags = (*bio)->bi_opf;
	INIT_LIST_HEAD(&rq->queuelist);
	return rq;
}
···
	if (queue_is_mq(q))
		blk_mq_release(q);

-	blk_trace_shutdown(q);
-	mutex_lock(&q->debugfs_mutex);
-	debugfs_remove_recursive(q->debugfs_dir);
-	mutex_unlock(&q->debugfs_mutex);
-
-	if (queue_is_mq(q))
-		blk_mq_debugfs_unregister(q);
-
	bioset_exit(&q->bio_split);

	if (blk_queue_has_srcu(q))
···
		goto unlock;
	}

+	if (queue_is_mq(q))
+		__blk_mq_register_dev(dev, q);
+	mutex_lock(&q->sysfs_lock);
+
	mutex_lock(&q->debugfs_mutex);
	q->debugfs_dir = debugfs_create_dir(kobject_name(q->kobj.parent),
					    blk_debugfs_root);
-	mutex_unlock(&q->debugfs_mutex);
-
-	if (queue_is_mq(q)) {
-		__blk_mq_register_dev(dev, q);
+	if (queue_is_mq(q))
		blk_mq_debugfs_register(q);
-	}
-
-	mutex_lock(&q->sysfs_lock);
+	mutex_unlock(&q->debugfs_mutex);

	ret = disk_register_independent_access_ranges(disk, NULL);
	if (ret)
···
	/* Now that we've deleted all child objects, we can delete the queue. */
	kobject_uevent(&q->kobj, KOBJ_REMOVE);
	kobject_del(&q->kobj);
-
	mutex_unlock(&q->sysfs_dir_lock);
+
+	mutex_lock(&q->debugfs_mutex);
+	blk_trace_shutdown(q);
+	debugfs_remove_recursive(q->debugfs_dir);
+	q->debugfs_dir = NULL;
+	q->sched_debugfs_dir = NULL;
+	q->rqos_debugfs_dir = NULL;
+	mutex_unlock(&q->debugfs_mutex);

	kobject_put(&disk_to_dev(disk)->kobj);
}
+12-30
block/genhd.c
···623623 * Prevent new I/O from crossing bio_queue_enter().624624 */625625 blk_queue_start_drain(q);626626+ blk_mq_freeze_queue_wait(q);626627627628 if (!(disk->flags & GENHD_FL_HIDDEN)) {628629 sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi");···647646 pm_runtime_set_memalloc_noio(disk_to_dev(disk), false);648647 device_del(disk_to_dev(disk));649648650650- blk_mq_freeze_queue_wait(q);651651-652649 blk_throtl_cancel_bios(disk->queue);653650654651 blk_sync_queue(q);655652 blk_flush_integrity();653653+ blk_mq_cancel_work_sync(q);654654+655655+ blk_mq_quiesce_queue(q);656656+ if (q->elevator) {657657+ mutex_lock(&q->sysfs_lock);658658+ elevator_exit(q);659659+ mutex_unlock(&q->sysfs_lock);660660+ }661661+ rq_qos_exit(q);662662+ blk_mq_unquiesce_queue(q);663663+656664 /*657665 * Allow using passthrough request again after the queue is torn down.658666 */···11301120 NULL11311121};1132112211331133-static void disk_release_mq(struct request_queue *q)11341134-{11351135- blk_mq_cancel_work_sync(q);11361136-11371137- /*11381138- * There can't be any non non-passthrough bios in flight here, but11391139- * requests stay around longer, including passthrough ones so we11401140- * still need to freeze the queue here.11411141- */11421142- blk_mq_freeze_queue(q);11431143-11441144- /*11451145- * Since the I/O scheduler exit code may access cgroup information,11461146- * perform I/O scheduler exit before disassociating from the block11471147- * cgroup controller.11481148- */11491149- if (q->elevator) {11501150- mutex_lock(&q->sysfs_lock);11511151- elevator_exit(q);11521152- mutex_unlock(&q->sysfs_lock);11531153- }11541154- rq_qos_exit(q);11551155- __blk_mq_unfreeze_queue(q, true);11561156-}11571157-11581123/**11591124 * disk_release - releases all allocated resources of the gendisk11601125 * @dev: the device representing this disk···1150116511511166 might_sleep();11521167 WARN_ON_ONCE(disk_live(disk));11531153-11541154- if (queue_is_mq(disk->queue))11551155- disk_release_mq(disk->queue);1156116811571169 blkcg_exit_queue(disk->queue);11581170
-4
block/holder.c
···79798080 WARN_ON_ONCE(!bdev->bd_holder);81818282- /* FIXME: remove the following once add_disk() handles errors */8383- if (WARN_ON(!bdev->bd_holder_dir))8484- goto out_unlock;8585-8682 holder = bd_find_holder_disk(bdev, disk);8783 if (holder) {8884 holder->refcnt++;
+2-2
certs/Makefile
···33# Makefile for the linux kernel signature checking certificates.44#5566-obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o common.o77-obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o common.o66+obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o77+obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o88obj-$(CONFIG_SYSTEM_REVOCATION_LIST) += revocation_certificates.o99ifneq ($(CONFIG_SYSTEM_BLACKLIST_HASH_LIST),)1010
+4-4
certs/blacklist.c
···1515#include <linux/err.h>1616#include <linux/seq_file.h>1717#include <linux/uidgid.h>1818-#include <linux/verification.h>1818+#include <keys/asymmetric-type.h>1919#include <keys/system_keyring.h>2020#include "blacklist.h"2121-#include "common.h"22212322/*2423 * According to crypto/asymmetric_keys/x509_cert_parser.c:x509_note_pkey_algo(),···364365 if (revocation_certificate_list_size)365366 pr_notice("Loading compiled-in revocation X.509 certificates\n");366367367367- return load_certificate_list(revocation_certificate_list, revocation_certificate_list_size,368368- blacklist_keyring);368368+ return x509_load_certificate_list(revocation_certificate_list,369369+ revocation_certificate_list_size,370370+ blacklist_keyring);369371}370372late_initcall(load_revocation_certificate_list);371373#endif
···7575 This option provides support for verifying the signature(s) on a7676 signed PE binary.77777878+config FIPS_SIGNATURE_SELFTEST7979+ bool "Run FIPS selftests on the X.509+PKCS7 signature verification"8080+ help8181+ This option causes some selftests to be run on the signature8282+ verification code, using some built in data. This is required8383+ for FIPS.8484+ depends on KEYS8585+ depends on ASYMMETRIC_KEY_TYPE8686+ depends on PKCS7_MESSAGE_PARSER8787+7888endif # ASYMMETRIC_KEY_TYPE
···244244/*245245 * Module stuff246246 */247247+extern int __init certs_selftest(void);247248static int __init x509_key_init(void)248249{249249- return register_asymmetric_key_parser(&x509_key_parser);250250+ int ret;251251+252252+ ret = register_asymmetric_key_parser(&x509_key_parser);253253+ if (ret < 0)254254+ return ret;255255+ return fips_signature_selftest();250256}251257252258static void __exit x509_key_exit(void)
+1-1
drivers/base/memory.c
···558558 if (kstrtoull(buf, 0, &pfn) < 0)559559 return -EINVAL;560560 pfn >>= PAGE_SHIFT;561561- ret = memory_failure(pfn, 0);561561+ ret = memory_failure(pfn, MF_SW_SIMULATED);562562 if (ret == -EOPNOTSUPP)563563 ret = 0;564564 return ret ? ret : count;
+5-3
drivers/base/regmap/regmap-irq.c
···252252 struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data);253253 struct regmap *map = d->map;254254 const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq);255255+ unsigned int reg = irq_data->reg_offset / map->reg_stride;255256 unsigned int mask, type;256257257258 type = irq_data->type.type_falling_val | irq_data->type.type_rising_val;···269268 * at the corresponding offset in regmap_irq_set_type().270269 */271270 if (d->chip->type_in_mask && type)272272- mask = d->type_buf[irq_data->reg_offset / map->reg_stride];271271+ mask = d->type_buf[reg] & irq_data->mask;273272 else274273 mask = irq_data->mask;275274276275 if (d->chip->clear_on_unmask)277276 d->clear_status = true;278277279279- d->mask_buf[irq_data->reg_offset / map->reg_stride] &= ~mask;278278+ d->mask_buf[reg] &= ~mask;280279}281280282281static void regmap_irq_disable(struct irq_data *data)···387386 subreg = &chip->sub_reg_offsets[b];388387 for (i = 0; i < subreg->num_regs; i++) {389388 unsigned int offset = subreg->offset[i];389389+ unsigned int index = offset / map->reg_stride;390390391391 if (chip->not_fixed_stride)392392 ret = regmap_read(map,···396394 else397395 ret = regmap_read(map,398396 chip->status_base + offset,399399- &data->status_buf[offset]);397397+ &data->status_buf[index]);400398401399 if (ret)402400 break;
+8-7
drivers/base/regmap/regmap.c
···18801880 */18811881bool regmap_can_raw_write(struct regmap *map)18821882{18831883- return map->bus && map->bus->write && map->format.format_val &&18841884- map->format.format_reg;18831883+ return map->write && map->format.format_val && map->format.format_reg;18851884}18861885EXPORT_SYMBOL_GPL(regmap_can_raw_write);18871886···21542155 size_t write_len;21552156 int ret;2156215721572157- if (!map->bus)21582158- return -EINVAL;21592159- if (!map->bus->write)21582158+ if (!map->write)21602159 return -ENOTSUPP;21602160+21612161 if (val_len % map->format.val_bytes)21622162 return -EINVAL;21632163 if (!IS_ALIGNED(reg, map->reg_stride))···22762278 * Some devices don't support bulk write, for them we have a series of22772279 * single write operations.22782280 */22792279- if (!map->bus || !map->format.parse_inplace) {22812281+ if (!map->write || !map->format.parse_inplace) {22802282 map->lock(map->lock_arg);22812283 for (i = 0; i < val_count; i++) {22822284 unsigned int ival;···29022904 size_t read_len;29032905 int ret;2904290629072907+ if (!map->read)29082908+ return -ENOTSUPP;29092909+29052910 if (val_len % map->format.val_bytes)29062911 return -EINVAL;29072912 if (!IS_ALIGNED(reg, map->reg_stride))···30183017 if (val_count == 0)30193018 return -EINVAL;3020301930213021- if (map->format.parse_inplace && (vol || map->cache_type == REGCACHE_NONE)) {30203020+ if (map->read && map->format.parse_inplace && (vol || map->cache_type == REGCACHE_NONE)) {30223021 ret = regmap_raw_read(map, reg, val, val_bytes * val_count);30233022 if (ret != 0)30243023 return ret;
+12-7
drivers/block/xen-blkfront.c
···21142114 return;2115211521162116 /* No more blkif_request(). */21172117- blk_mq_stop_hw_queues(info->rq);21182118- blk_mark_disk_dead(info->gd);21192119- set_capacity(info->gd, 0);21172117+ if (info->rq && info->gd) {21182118+ blk_mq_stop_hw_queues(info->rq);21192119+ blk_mark_disk_dead(info->gd);21202120+ set_capacity(info->gd, 0);21212121+ }2120212221212123 for_each_rinfo(info, rinfo, i) {21222124 /* No more gnttab callback work. */···2459245724602458 dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename);2461245924622462- del_gendisk(info->gd);24602460+ if (info->gd)24612461+ del_gendisk(info->gd);2463246224642463 mutex_lock(&blkfront_mutex);24652464 list_del(&info->info_list);24662465 mutex_unlock(&blkfront_mutex);2467246624682467 blkif_free(info, 0);24692469- xlbd_release_minors(info->gd->first_minor, info->gd->minors);24702470- blk_cleanup_disk(info->gd);24712471- blk_mq_free_tag_set(&info->tag_set);24682468+ if (info->gd) {24692469+ xlbd_release_minors(info->gd->first_minor, info->gd->minors);24702470+ blk_cleanup_disk(info->gd);24712471+ blk_mq_free_tag_set(&info->tag_set);24722472+ }2472247324732474 kfree(info);24742475 return 0;
+6-8
drivers/bus/bt1-apb.c
···175175 int ret;176176177177 apb->prst = devm_reset_control_get_optional_exclusive(apb->dev, "prst");178178- if (IS_ERR(apb->prst)) {179179- dev_warn(apb->dev, "Couldn't get reset control line\n");180180- return PTR_ERR(apb->prst);181181- }178178+ if (IS_ERR(apb->prst))179179+ return dev_err_probe(apb->dev, PTR_ERR(apb->prst),180180+ "Couldn't get reset control line\n");182181183182 ret = reset_control_deassert(apb->prst);184183 if (ret)···198199 int ret;199200200201 apb->pclk = devm_clk_get(apb->dev, "pclk");201201- if (IS_ERR(apb->pclk)) {202202- dev_err(apb->dev, "Couldn't get APB clock descriptor\n");203203- return PTR_ERR(apb->pclk);204204- }202202+ if (IS_ERR(apb->pclk))203203+ return dev_err_probe(apb->dev, PTR_ERR(apb->pclk),204204+ "Couldn't get APB clock descriptor\n");205205206206 ret = clk_prepare_enable(apb->pclk);207207 if (ret) {
+6-8
drivers/bus/bt1-axi.c
···135135 int ret;136136137137 axi->arst = devm_reset_control_get_optional_exclusive(axi->dev, "arst");138138- if (IS_ERR(axi->arst)) {139139- dev_warn(axi->dev, "Couldn't get reset control line\n");140140- return PTR_ERR(axi->arst);141141- }138138+ if (IS_ERR(axi->arst))139139+ return dev_err_probe(axi->dev, PTR_ERR(axi->arst),140140+ "Couldn't get reset control line\n");142141143142 ret = reset_control_deassert(axi->arst);144143 if (ret)···158159 int ret;159160160161 axi->aclk = devm_clk_get(axi->dev, "aclk");161161- if (IS_ERR(axi->aclk)) {162162- dev_err(axi->dev, "Couldn't get AXI Interconnect clock\n");163163- return PTR_ERR(axi->aclk);164164- }162162+ if (IS_ERR(axi->aclk))163163+ return dev_err_probe(axi->dev, PTR_ERR(axi->aclk),164164+ "Couldn't get AXI Interconnect clock\n");165165166166 ret = clk_prepare_enable(axi->aclk);167167 if (ret) {
+3-3
drivers/char/random.c
···87878888/* Control how we warn userspace. */8989static struct ratelimit_state urandom_warning =9090- RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3);9090+ RATELIMIT_STATE_INIT_FLAGS("urandom_warning", HZ, 3, RATELIMIT_MSG_ON_RELEASE);9191static int ratelimit_disable __read_mostly =9292 IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM);9393module_param_named(ratelimit_disable, ratelimit_disable, int, 0644);···408408409409 /*410410 * Immediately overwrite the ChaCha key at index 4 with random411411- * bytes, in case userspace causes copy_to_user() below to sleep411411+ * bytes, in case userspace causes copy_to_iter() below to sleep412412 * forever, so that we still retain forward secrecy in that case.413413 */414414 crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE);···10091009 if (new_count & MIX_INFLIGHT)10101010 return;1011101110121012- if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ))10121012+ if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ))10131013 return;1014101410151015 if (unlikely(!fast_pool->mix.func))
···434434static int grgpio_remove(struct platform_device *ofdev)435435{436436 struct grgpio_priv *priv = platform_get_drvdata(ofdev);437437- int i;438438- int ret = 0;439439-440440- if (priv->domain) {441441- for (i = 0; i < GRGPIO_MAX_NGPIO; i++) {442442- if (priv->uirqs[i].refcnt != 0) {443443- ret = -EBUSY;444444- goto out;445445- }446446- }447447- }448437449438 gpiochip_remove(&priv->gc);450439451440 if (priv->domain)452441 irq_domain_remove(priv->domain);453442454454-out:455455- return ret;443443+ return 0;456444}457445458446static const struct of_device_id grgpio_match[] = {
+1-1
drivers/gpio/gpio-mxs.c
···11// SPDX-License-Identifier: GPL-2.0+22//33-// MXC GPIO support. (c) 2008 Daniel Mack <daniel@caiaq.de>33+// MXS GPIO support. (c) 2008 Daniel Mack <daniel@caiaq.de>44// Copyright 2008 Juergen Beisert, kernel@pengutronix.de55//66// Based on code from Freescale,
···385385 unsigned long *base = gpiochip_get_data(gc);386386 const struct winbond_gpio_info *info;387387 bool val;388388+ int ret;388389389390 winbond_gpio_get_info(&offset, &info);390391391391- val = winbond_sio_enter(*base);392392- if (val)393393- return val;392392+ ret = winbond_sio_enter(*base);393393+ if (ret)394394+ return ret;394395395396 winbond_sio_select_logical(*base, info->dev);396397
+14-6
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
···17981798 DRM_INFO("amdgpu: %uM of VRAM memory ready\n",17991799 (unsigned) (adev->gmc.real_vram_size / (1024 * 1024)));1800180018011801- /* Compute GTT size, either bsaed on 3/4th the size of RAM size18011801+ /* Compute GTT size, either based on 1/2 the size of RAM size18021802 * or whatever the user passed on module init */18031803 if (amdgpu_gtt_size == -1) {18041804 struct sysinfo si;1805180518061806 si_meminfo(&si);18071807- gtt_size = min(max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),18081808- adev->gmc.mc_vram_size),18091809- ((uint64_t)si.totalram * si.mem_unit * 3/4));18101810- }18111811- else18071807+ /* Certain GL unit tests for large textures can cause problems18081808+ * with the OOM killer since there is no way to link this memory18091809+ * to a process. This was originally mitigated (but not necessarily18101810+ * eliminated) by limiting the GTT size. The problem is this limit18111811+ * is often too low for many modern games so just make the limit 1/218121812+ * of system memory which aligns with TTM. The OOM accounting needs18131813+ * to be addressed, but we shouldn't prevent common 3D applications18141814+ * from being usable just to potentially mitigate that corner case.18151815+ */18161816+ gtt_size = max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20),18171817+ (u64)si.totalram * si.mem_unit / 2);18181818+ } else {18121819 gtt_size = (uint64_t)amdgpu_gtt_size << 20;18201820+ }1813182118141822 /* Initialize GTT memory pool */18151823 r = amdgpu_gtt_mgr_init(adev, gtt_size);
···550550 if (!bw_params->clk_table.entries[i].dtbclk_mhz)551551 bw_params->clk_table.entries[i].dtbclk_mhz = def_max.dtbclk_mhz;552552 }553553- ASSERT(bw_params->clk_table.entries[i].dcfclk_mhz);553553+ ASSERT(bw_params->clk_table.entries[i-1].dcfclk_mhz);554554 bw_params->vram_type = bios_info->memory_type;555555 bw_params->num_channels = bios_info->ma_channel_number;556556 if (!bw_params->num_channels)
+1-1
drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
···944944945945 return;946946947947- for (lane = 1; lane < LANE_COUNT_DP_MAX; lane++) {947947+ for (lane = 0; lane < LANE_COUNT_DP_MAX; lane++) {948948 if (lt_settings->voltage_swing)949949 lane_settings[lane].VOLTAGE_SWING = *lt_settings->voltage_swing;950950 if (lt_settings->pre_emphasis)
···17661766 break;17671767 }17681768 }17691769-17701770- /*17711771- * TO-DO: So far the code logic below only addresses single eDP case.17721772- * For dual eDP case, there are a few things that need to be17731773- * implemented first:17741774- *17751775- * 1. Change the fastboot logic above, so eDP link[0 or 1]'s17761776- * stream[0 or 1] will all be checked.17771777- *17781778- * 2. Change keep_edp_vdd_on to an array, and maintain keep_edp_vdd_on17791779- * for each eDP.17801780- *17811781- * Once above 2 things are completed, we can then change the logic below17821782- * correspondingly, so dual eDP case will be fully covered.17831783- */17841784-17851785- // We are trying to enable eDP, don't power down VDD if eDP stream is existing17861786- if ((edp_stream_num == 1 && edp_streams[0] != NULL) || can_apply_edp_fast_boot) {17691769+ // We are trying to enable eDP, don't power down VDD17701770+ if (can_apply_edp_fast_boot)17871771 keep_edp_vdd_on = true;17881788- DC_LOG_EVENT_LINK_TRAINING("Keep eDP Vdd on\n");17891789- } else {17901790- DC_LOG_EVENT_LINK_TRAINING("No eDP stream enabled, turn eDP Vdd off\n");17911791- }17921772 }1793177317941774 // Check seamless boot support
+3
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.c
···212212 break;213213 }214214215215+ /* Set default color space based on format if none is given. */216216+ color_space = input_color_space ? input_color_space : color_space;217217+215218 if (is_2bit == 1 && alpha_2bit_lut != NULL) {216219 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0);217220 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);
···153153 break;154154 }155155156156+ /* Set default color space based on format if none is given. */157157+ color_space = input_color_space ? input_color_space : color_space;158158+156159 if (is_2bit == 1 && alpha_2bit_lut != NULL) {157160 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0);158161 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);
+3
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
···294294 break;295295 }296296297297+ /* Set default color space based on format if none is given. */298298+ color_space = input_color_space ? input_color_space : color_space;299299+297300 if (is_2bit == 1 && alpha_2bit_lut != NULL) {298301 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0);299302 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);
···23962396}2397239723982398/*23992399- * Display WA #22010492432: ehl, tgl, adl-p23992399+ * Display WA #22010492432: ehl, tgl, adl-s, adl-p24002400 * Program half of the nominal DCO divider fraction value.24012401 */24022402static bool···24042404{24052405 return ((IS_PLATFORM(i915, INTEL_ELKHARTLAKE) &&24062406 IS_JSL_EHL_DISPLAY_STEP(i915, STEP_B0, STEP_FOREVER)) ||24072407- IS_TIGERLAKE(i915) || IS_ALDERLAKE_P(i915)) &&24072407+ IS_TIGERLAKE(i915) || IS_ALDERLAKE_S(i915) || IS_ALDERLAKE_P(i915)) &&24082408 i915->dpll.ref_clks.nssc == 38400;24092409}24102410
+3-2
drivers/gpu/drm/i915/i915_drm_client.c
···116116 total += busy_add(ctx, class);117117 rcu_read_unlock();118118119119- seq_printf(m, "drm-engine-%s:\t%llu ns\n",120120- uabi_class_names[class], total);119119+ if (capacity)120120+ seq_printf(m, "drm-engine-%s:\t%llu ns\n",121121+ uabi_class_names[class], total);121122122123 if (capacity > 1)123124 seq_printf(m, "drm-engine-capacity-%s:\t%u\n",
+10-4
drivers/gpu/drm/msm/adreno/adreno_gpu.c
···498498499499 ring->cur = ring->start;500500 ring->next = ring->start;501501-502502- /* reset completed fence seqno: */503503- ring->memptrs->fence = ring->fctx->completed_fence;504501 ring->memptrs->rptr = 0;502502+503503+ /* Detect and clean up an impossible fence, ie. if GPU managed504504+ * to scribble something invalid, we don't want that to confuse505505+ * us into mistakingly believing that submits have completed.506506+ */507507+ if (fence_before(ring->fctx->last_fence, ring->memptrs->fence)) {508508+ ring->memptrs->fence = ring->fctx->last_fence;509509+ }505510 }506511507512 return 0;···10621057 for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++)10631058 release_firmware(adreno_gpu->fw[i]);1064105910651065- pm_runtime_disable(&priv->gpu_pdev->dev);10601060+ if (pm_runtime_enabled(&priv->gpu_pdev->dev))10611061+ pm_runtime_disable(&priv->gpu_pdev->dev);1066106210671063 msm_gpu_cleanup(&adreno_gpu->base);10681064}
+8-1
drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
···1111 struct msm_drm_private *priv = dev->dev_private;1212 struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms);13131414- return drm_add_modes_noedid(connector, dpu_kms->catalog->caps->max_linewidth,1414+ /*1515+ * We should ideally be limiting the modes only to the maxlinewidth but1616+ * on some chipsets this will allow even 4k modes to be added which will1717+ * fail the per SSPP bandwidth checks. So, till we have dual-SSPP support1818+ * and source split support added lets limit the modes based on max_mixer_width1919+ * as 4K modes can then be supported.2020+ */2121+ return drm_add_modes_noedid(connector, dpu_kms->catalog->caps->max_mixer_width,1522 dev->mode_config.max_height);1623}1724
+2
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
···216216 encoder = mdp4_lcdc_encoder_init(dev, panel_node);217217 if (IS_ERR(encoder)) {218218 DRM_DEV_ERROR(dev->dev, "failed to construct LCDC encoder\n");219219+ of_node_put(panel_node);219220 return PTR_ERR(encoder);220221 }221222···226225 connector = mdp4_lvds_connector_init(dev, panel_node, encoder);227226 if (IS_ERR(connector)) {228227 DRM_DEV_ERROR(dev->dev, "failed to initialize LVDS connector\n");228228+ of_node_put(panel_node);229229 return PTR_ERR(connector);230230 }231231
+25-8
drivers/gpu/drm/msm/dp/dp_ctrl.c
···15341534 return ret;15351535}1536153615371537+static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl);15381538+15371539static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl)15381540{15391541 int ret = 0;···1559155715601558 ret = dp_ctrl_on_link(&ctrl->dp_ctrl);15611559 if (!ret)15621562- ret = dp_ctrl_on_stream(&ctrl->dp_ctrl);15601560+ ret = dp_ctrl_on_stream_phy_test_report(&ctrl->dp_ctrl);15631561 else15641562 DRM_ERROR("failed to enable DP link controller\n");15651563···18151813 return dp_ctrl_setup_main_link(ctrl, &training_step);18161814}1817181518181818-int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl)18161816+static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl)18171817+{18181818+ int ret;18191819+ struct dp_ctrl_private *ctrl;18201820+18211821+ ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl);18221822+18231823+ ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock;18241824+18251825+ ret = dp_ctrl_enable_stream_clocks(ctrl);18261826+ if (ret) {18271827+ DRM_ERROR("Failed to start pixel clocks. ret=%d\n", ret);18281828+ return ret;18291829+ }18301830+18311831+ dp_ctrl_send_phy_test_pattern(ctrl);18321832+18331833+ return 0;18341834+}18351835+18361836+int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl, bool force_link_train)18191837{18201838 int ret = 0;18211839 bool mainlink_ready = false;···18711849 goto end;18721850 }1873185118741874- if (ctrl->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN) {18751875- dp_ctrl_send_phy_test_pattern(ctrl);18761876- return 0;18771877- }18781878-18791879- if (!dp_ctrl_channel_eq_ok(ctrl))18521852+ if (force_link_train || !dp_ctrl_channel_eq_ok(ctrl))18801853 dp_ctrl_link_retrain(ctrl);1881185418821855 /* stop txing train pattern to end link training */
···145145146146uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj);147147int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);148148-void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma);148148+void msm_gem_unpin_locked(struct drm_gem_object *obj);149149struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj,150150 struct msm_gem_address_space *aspace);151151int msm_gem_get_iova(struct drm_gem_object *obj,···377377 } *cmd; /* array of size nr_cmds */378378 struct {379379/* make sure these don't conflict w/ MSM_SUBMIT_BO_x */380380-#define BO_VALID 0x8000 /* is current addr in cmdstream correct/valid? */381381-#define BO_LOCKED 0x4000 /* obj lock is held */382382-#define BO_ACTIVE 0x2000 /* active refcnt is held */383383-#define BO_PINNED 0x1000 /* obj is pinned and on active list */380380+#define BO_VALID 0x8000 /* is current addr in cmdstream correct/valid? */381381+#define BO_LOCKED 0x4000 /* obj lock is held */382382+#define BO_ACTIVE 0x2000 /* active refcnt is held */383383+#define BO_OBJ_PINNED 0x1000 /* obj (pages) is pinned and on active list */384384+#define BO_VMA_PINNED 0x0800 /* vma (virtual address) is pinned */384385 uint32_t flags;385386 union {386387 struct msm_gem_object *obj;
+15
drivers/gpu/drm/msm/msm_gem_prime.c
···1111#include "msm_drv.h"1212#include "msm_gem.h"13131414+int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)1515+{1616+ int ret;1717+1818+ /* Ensure the mmap offset is initialized. We lazily initialize it,1919+ * so if it has not been first mmap'd directly as a GEM object, the2020+ * mmap offset will not be already initialized.2121+ */2222+ ret = drm_gem_create_mmap_offset(obj);2323+ if (ret)2424+ return ret;2525+2626+ return drm_gem_prime_mmap(obj, vma);2727+}2828+1429struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)1530{1631 struct msm_gem_object *msm_obj = to_msm_bo(obj);
+12-6
drivers/gpu/drm/msm/msm_gem_submit.c
···232232 */233233 submit->bos[i].flags &= ~cleanup_flags;234234235235- if (flags & BO_PINNED)236236- msm_gem_unpin_vma_locked(obj, submit->bos[i].vma);235235+ if (flags & BO_VMA_PINNED)236236+ msm_gem_unpin_vma(submit->bos[i].vma);237237+238238+ if (flags & BO_OBJ_PINNED)239239+ msm_gem_unpin_locked(obj);237240238241 if (flags & BO_ACTIVE)239242 msm_gem_active_put(obj);···247244248245static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, int i)249246{250250- submit_cleanup_bo(submit, i, BO_PINNED | BO_ACTIVE | BO_LOCKED);247247+ unsigned cleanup_flags = BO_VMA_PINNED | BO_OBJ_PINNED |248248+ BO_ACTIVE | BO_LOCKED;249249+ submit_cleanup_bo(submit, i, cleanup_flags);251250252251 if (!(submit->bos[i].flags & BO_VALID))253252 submit->bos[i].iova = 0;···380375 if (ret)381376 break;382377383383- submit->bos[i].flags |= BO_PINNED;378378+ submit->bos[i].flags |= BO_OBJ_PINNED | BO_VMA_PINNED;384379 submit->bos[i].vma = vma;385380386381 if (vma->iova == submit->bos[i].iova) {···516511 unsigned i;517512518513 if (error)519519- cleanup_flags |= BO_PINNED | BO_ACTIVE;514514+ cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED | BO_ACTIVE;520515521516 for (i = 0; i < submit->nr_bos; i++) {522517 struct msm_gem_object *msm_obj = submit->bos[i].obj;···534529 struct drm_gem_object *obj = &submit->bos[i].obj->base;535530536531 msm_gem_lock(obj);537537- submit_cleanup_bo(submit, i, BO_PINNED | BO_ACTIVE);532532+ /* Note, VMA already fence-unpinned before submit: */533533+ submit_cleanup_bo(submit, i, BO_OBJ_PINNED | BO_ACTIVE);538534 msm_gem_unlock(obj);539535 drm_gem_object_put(obj);540536 }
+2-4
drivers/gpu/drm/msm/msm_gem_vma.c
···6262 unsigned size = vma->node.size;63636464 /* Print a message if we try to purge a vma in use */6565- if (GEM_WARN_ON(msm_gem_vma_inuse(vma)))6666- return;6565+ GEM_WARN_ON(msm_gem_vma_inuse(vma));67666867 /* Don't do anything if the memory isn't mapped */6968 if (!vma->mapped)···127128void msm_gem_close_vma(struct msm_gem_address_space *aspace,128129 struct msm_gem_vma *vma)129130{130130- if (GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped))131131- return;131131+ GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped);132132133133 spin_lock(&aspace->lock);134134 if (vma->iova)
+5-22
drivers/gpu/drm/msm/msm_gpu.c
···164164 return ret;165165}166166167167-static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring,168168- uint32_t fence)169169-{170170- struct msm_gem_submit *submit;171171- unsigned long flags;172172-173173- spin_lock_irqsave(&ring->submit_lock, flags);174174- list_for_each_entry(submit, &ring->submits, node) {175175- if (fence_after(submit->seqno, fence))176176- break;177177-178178- msm_update_fence(submit->ring->fctx,179179- submit->hw_fence->seqno);180180- dma_fence_signal(submit->hw_fence);181181- }182182- spin_unlock_irqrestore(&ring->submit_lock, flags);183183-}184184-185167#ifdef CONFIG_DEV_COREDUMP186168static ssize_t msm_gpu_devcoredump_read(char *buffer, loff_t offset,187169 size_t count, void *data, size_t datalen)···418436 * one more to clear the faulting submit419437 */420438 if (ring == cur_ring)421421- fence++;439439+ ring->memptrs->fence = ++fence;422440423423- update_fences(gpu, ring, fence);441441+ msm_update_fence(ring->fctx, fence);424442 }425443426444 if (msm_gpu_active(gpu)) {···654672 msm_submit_retire(submit);655673656674 pm_runtime_mark_last_busy(&gpu->pdev->dev);657657- pm_runtime_put_autosuspend(&gpu->pdev->dev);658675659676 spin_lock_irqsave(&ring->submit_lock, flags);660677 list_del(&submit->node);···666685 if (!gpu->active_submits)667686 msm_devfreq_idle(gpu);668687 mutex_unlock(&gpu->active_lock);688688+689689+ pm_runtime_put_autosuspend(&gpu->pdev->dev);669690670691 msm_gem_submit_put(submit);671692}···718735 int i;719736720737 for (i = 0; i < gpu->nr_rings; i++)721721- update_fences(gpu, gpu->rb[i], gpu->rb[i]->memptrs->fence);738738+ msm_update_fence(gpu->rb[i]->fctx, gpu->rb[i]->memptrs->fence);722739723740 kthread_queue_work(gpu->worker, &gpu->retire_work);724741 update_sw_cntrs(gpu);
@@ -248 +248 @@
 {
	struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	mutex_lock(&vc4->purgeable.lock);
	list_add_tail(&bo->size_head, &vc4->purgeable.list);
	vc4->purgeable.num++;
@@ -261 +258 @@
 static void vc4_bo_remove_from_purgeable_pool_locked(struct vc4_bo *bo)
 {
	struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;

	/* list_del_init() is used here because the caller might release
	 * the purgeable lock in order to acquire the madv one and update the
@@ -393 +387 @@
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct vc4_bo *bo;

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return ERR_PTR(-ENODEV);
+
	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
	if (!bo)
		return ERR_PTR(-ENOMEM);
@@ -421 +412 @@
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct drm_gem_cma_object *cma_obj;
	struct vc4_bo *bo;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return ERR_PTR(-ENODEV);

	if (size == 0)
		return ERR_PTR(-EINVAL);
@@ -483 +471 @@
	return bo;
 }

-int vc4_dumb_create(struct drm_file *file_priv,
-		    struct drm_device *dev,
-		    struct drm_mode_create_dumb *args)
+int vc4_bo_dumb_create(struct drm_file *file_priv,
+		       struct drm_device *dev,
+		       struct drm_mode_create_dumb *args)
 {
-	int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct vc4_bo *bo = NULL;
	int ret;

-	if (args->pitch < min_pitch)
-		args->pitch = min_pitch;
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

-	if (args->size < args->pitch * args->height)
-		args->size = args->pitch * args->height;
+	ret = vc4_dumb_fixup_args(args);
+	if (ret)
+		return ret;

	bo = vc4_bo_create(dev, args->size, false, VC4_BO_TYPE_DUMB);
	if (IS_ERR(bo))
@@ -614 +601 @@

 int vc4_bo_inc_usecnt(struct vc4_bo *bo)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
	int ret;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	/* Fast path: if the BO is already retained by someone, no need to
	 * check the madv status.
@@ -654 +637 @@

 void vc4_bo_dec_usecnt(struct vc4_bo *bo)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	/* Fast path: if the BO is still retained by someone, no need to test
	 * the madv value.
	 */
@@ -778 +756 @@
	struct vc4_bo *bo = NULL;
	int ret;

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
	ret = vc4_grab_bin_bo(vc4, vc4file);
	if (ret)
		return ret;
@@ -804 +779 @@
 int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
		      struct drm_file *file_priv)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct drm_vc4_mmap_bo *args = data;
	struct drm_gem_object *gem_obj;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	gem_obj = drm_gem_object_lookup(file_priv, args->handle);
	if (!gem_obj) {
@@ -833 +804 @@
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct vc4_bo *bo = NULL;
	int ret;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	if (args->size == 0)
		return -EINVAL;
@@ -907 +875 @@
 int vc4_set_tiling_ioctl(struct drm_device *dev, void *data,
			 struct drm_file *file_priv)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct drm_vc4_set_tiling *args = data;
	struct drm_gem_object *gem_obj;
	struct vc4_bo *bo;
	bool t_format;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	if (args->flags != 0)
		return -EINVAL;
@@ -954 +918 @@
 int vc4_get_tiling_ioctl(struct drm_device *dev, void *data,
			 struct drm_file *file_priv)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct drm_vc4_get_tiling *args = data;
	struct drm_gem_object *gem_obj;
	struct vc4_bo *bo;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	if (args->flags != 0 || args->modifier != 0)
		return -EINVAL;
@@ -987 +947 @@
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	int i;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	/* Create the initial set of BO labels that the kernel will
	 * use. This lets us avoid a bunch of string reallocation in
@@ -1049 +1006 @@
	char *name;
	struct drm_gem_object *gem_obj;
	int ret = 0, label;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	if (!args->len)
		return -EINVAL;
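The guard added throughout vc4_bo.c follows one pattern: V3D-only entry points bail out on the BCM2711 (vc5), warning once instead of flooding the log. A minimal userspace sketch of that warn-once shape (not the kernel macro, which also dumps a stack trace; this uses a GNU C statement expression, as the kernel does):

```c
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

/* Sketch of a WARN_ON_ONCE()-style guard: evaluates to its condition,
 * but complains only on the first true evaluation per call site.
 */
#define WARN_ON_ONCE_SIM(cond) ({				\
	static bool __warned;					\
	bool __cond = (cond);					\
	if (__cond && !__warned) {				\
		__warned = true;				\
		fprintf(stderr, "WARNING: %s\n", #cond);	\
	}							\
	__cond;							\
})

/* Same shape as the vc4_bo.c guards: refuse V3D-only paths on vc5,
 * warning once rather than on every call.
 */
static int v3d_only_ioctl(bool is_vc5)
{
	if (WARN_ON_ONCE_SIM(is_vc5))
		return -ENODEV;

	return 0;
}
```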
+147-53
drivers/gpu/drm/vc4/vc4_crtc.c
@@ -256 +256 @@
	 * Removing 1 from the FIFO full level however
	 * seems to completely remove that issue.
	 */
-	if (!vc4->hvs->hvs5)
+	if (!vc4->is_vc5)
		return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1;

	return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX;
@@ -389 +389 @@
	if (is_dsi)
		CRTC_WRITE(PV_HACT_ACT, mode->hdisplay * pixel_rep);

-	if (vc4->hvs->hvs5)
+	if (vc4->is_vc5)
		CRTC_WRITE(PV_MUX_CFG,
			   VC4_SET_FIELD(PV_MUX_CFG_RGB_PIXEL_MUX_MODE_NO_SWAP,
					 PV_MUX_CFG_RGB_PIXEL_MUX_MODE));
@@ -775 +775 @@
	struct drm_framebuffer *old_fb;
	struct drm_pending_vblank_event *event;

-	struct vc4_seqno_cb cb;
+	union {
+		struct dma_fence_cb fence;
+		struct vc4_seqno_cb seqno;
+	} cb;
 };

 /* Called when the V3D execution for the BO being flipped to is done, so that
  * we can actually update the plane's address to point to it.
  */
 static void
-vc4_async_page_flip_complete(struct vc4_seqno_cb *cb)
+vc4_async_page_flip_complete(struct vc4_async_flip_state *flip_state)
 {
-	struct vc4_async_flip_state *flip_state =
-		container_of(cb, struct vc4_async_flip_state, cb);
	struct drm_crtc *crtc = flip_state->crtc;
	struct drm_device *dev = crtc->dev;
	struct drm_plane *plane = crtc->primary;
@@ -803 +802 @@
	drm_crtc_vblank_put(crtc);
	drm_framebuffer_put(flip_state->fb);

-	/* Decrement the BO usecnt in order to keep the inc/dec calls balanced
-	 * when the planes are updated through the async update path.
-	 * FIXME: we should move to generic async-page-flip when it's
-	 * available, so that we can get rid of this hand-made cleanup_fb()
-	 * logic.
-	 */
-	if (flip_state->old_fb) {
-		struct drm_gem_cma_object *cma_bo;
-		struct vc4_bo *bo;
-
-		cma_bo = drm_fb_cma_get_gem_obj(flip_state->old_fb, 0);
-		bo = to_vc4_bo(&cma_bo->base);
-		vc4_bo_dec_usecnt(bo);
+	if (flip_state->old_fb)
		drm_framebuffer_put(flip_state->old_fb);
-	}

	kfree(flip_state);
 }

-/* Implements async (non-vblank-synced) page flips.
- *
- * The page flip ioctl needs to return immediately, so we grab the
- * modeset semaphore on the pipe, and queue the address update for
- * when V3D is done with the BO being flipped to.
- */
-static int vc4_async_page_flip(struct drm_crtc *crtc,
-			       struct drm_framebuffer *fb,
-			       struct drm_pending_vblank_event *event,
-			       uint32_t flags)
+static void vc4_async_page_flip_seqno_complete(struct vc4_seqno_cb *cb)
 {
-	struct drm_device *dev = crtc->dev;
-	struct drm_plane *plane = crtc->primary;
-	int ret = 0;
-	struct vc4_async_flip_state *flip_state;
-	struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0);
-	struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
+	struct vc4_async_flip_state *flip_state =
+		container_of(cb, struct vc4_async_flip_state, cb.seqno);
+	struct vc4_bo *bo = NULL;

-	/* Increment the BO usecnt here, so that we never end up with an
-	 * unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the
-	 * plane is later updated through the non-async path.
-	 * FIXME: we should move to generic async-page-flip when it's
-	 * available, so that we can get rid of this hand-made prepare_fb()
-	 * logic.
+	if (flip_state->old_fb) {
+		struct drm_gem_cma_object *cma_bo =
+			drm_fb_cma_get_gem_obj(flip_state->old_fb, 0);
+		bo = to_vc4_bo(&cma_bo->base);
+	}
+
+	vc4_async_page_flip_complete(flip_state);
+
+	/*
+	 * Decrement the BO usecnt in order to keep the inc/dec
+	 * calls balanced when the planes are updated through
+	 * the async update path.
+	 *
+	 * FIXME: we should move to generic async-page-flip when
+	 * it's available, so that we can get rid of this
+	 * hand-made cleanup_fb() logic.
	 */
-	ret = vc4_bo_inc_usecnt(bo);
+	if (bo)
+		vc4_bo_dec_usecnt(bo);
+}
+
+static void vc4_async_page_flip_fence_complete(struct dma_fence *fence,
+					       struct dma_fence_cb *cb)
+{
+	struct vc4_async_flip_state *flip_state =
+		container_of(cb, struct vc4_async_flip_state, cb.fence);
+
+	vc4_async_page_flip_complete(flip_state);
+	dma_fence_put(fence);
+}
+
+static int vc4_async_set_fence_cb(struct drm_device *dev,
+				  struct vc4_async_flip_state *flip_state)
+{
+	struct drm_framebuffer *fb = flip_state->fb;
+	struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0);
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
+	struct dma_fence *fence;
+	int ret;
+
+	if (!vc4->is_vc5) {
+		struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
+
+		return vc4_queue_seqno_cb(dev, &flip_state->cb.seqno, bo->seqno,
+					  vc4_async_page_flip_seqno_complete);
+	}
+
+	ret = dma_resv_get_singleton(cma_bo->base.resv, DMA_RESV_USAGE_READ, &fence);
	if (ret)
		return ret;

-	flip_state = kzalloc(sizeof(*flip_state), GFP_KERNEL);
-	if (!flip_state) {
-		vc4_bo_dec_usecnt(bo);
-		return -ENOMEM;
+	/* If there's no fence, complete the page flip immediately */
+	if (!fence) {
+		vc4_async_page_flip_fence_complete(fence, &flip_state->cb.fence);
+		return 0;
	}
+
+	/* If the fence has already been completed, complete the page flip */
+	if (dma_fence_add_callback(fence, &flip_state->cb.fence,
+				   vc4_async_page_flip_fence_complete))
+		vc4_async_page_flip_fence_complete(fence, &flip_state->cb.fence);
+
+	return 0;
+}
+
+static int
+vc4_async_page_flip_common(struct drm_crtc *crtc,
+			   struct drm_framebuffer *fb,
+			   struct drm_pending_vblank_event *event,
+			   uint32_t flags)
+{
+	struct drm_device *dev = crtc->dev;
+	struct drm_plane *plane = crtc->primary;
+	struct vc4_async_flip_state *flip_state;
+
+	flip_state = kzalloc(sizeof(*flip_state), GFP_KERNEL);
+	if (!flip_state)
+		return -ENOMEM;

	drm_framebuffer_get(fb);
	flip_state->fb = fb;
@@ -919 +881 @@
	 */
	drm_atomic_set_fb_for_plane(plane->state, fb);

-	vc4_queue_seqno_cb(dev, &flip_state->cb, bo->seqno,
-			   vc4_async_page_flip_complete);
+	vc4_async_set_fence_cb(dev, flip_state);

	/* Driver takes ownership of state on successful async commit. */
	return 0;
+}
+
+/* Implements async (non-vblank-synced) page flips.
+ *
+ * The page flip ioctl needs to return immediately, so we grab the
+ * modeset semaphore on the pipe, and queue the address update for
+ * when V3D is done with the BO being flipped to.
+ */
+static int vc4_async_page_flip(struct drm_crtc *crtc,
+			       struct drm_framebuffer *fb,
+			       struct drm_pending_vblank_event *event,
+			       uint32_t flags)
+{
+	struct drm_device *dev = crtc->dev;
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
+	struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0);
+	struct vc4_bo *bo = to_vc4_bo(&cma_bo->base);
+	int ret;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
+	/*
+	 * Increment the BO usecnt here, so that we never end up with an
+	 * unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the
+	 * plane is later updated through the non-async path.
+	 *
+	 * FIXME: we should move to generic async-page-flip when
+	 * it's available, so that we can get rid of this
+	 * hand-made prepare_fb() logic.
+	 */
+	ret = vc4_bo_inc_usecnt(bo);
+	if (ret)
+		return ret;
+
+	ret = vc4_async_page_flip_common(crtc, fb, event, flags);
+	if (ret) {
+		vc4_bo_dec_usecnt(bo);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int vc5_async_page_flip(struct drm_crtc *crtc,
+			       struct drm_framebuffer *fb,
+			       struct drm_pending_vblank_event *event,
+			       uint32_t flags)
+{
+	return vc4_async_page_flip_common(crtc, fb, event, flags);
 }

 int vc4_page_flip(struct drm_crtc *crtc,
@@ -981 +894 @@
		  uint32_t flags,
		  struct drm_modeset_acquire_ctx *ctx)
 {
-	if (flags & DRM_MODE_PAGE_FLIP_ASYNC)
-		return vc4_async_page_flip(crtc, fb, event, flags);
-	else
+	if (flags & DRM_MODE_PAGE_FLIP_ASYNC) {
+		struct drm_device *dev = crtc->dev;
+		struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+		if (vc4->is_vc5)
+			return vc5_async_page_flip(crtc, fb, event, flags);
+		else
+			return vc4_async_page_flip(crtc, fb, event, flags);
+	} else {
		return drm_atomic_helper_page_flip(crtc, fb, event, flags, ctx);
+	}
 }

 struct drm_crtc_state *vc4_crtc_duplicate_state(struct drm_crtc *crtc)
@@ -1243 +1149 @@
				  crtc_funcs, NULL);
	drm_crtc_helper_add(crtc, crtc_helper_funcs);

-	if (!vc4->hvs->hvs5) {
+	if (!vc4->is_vc5) {
		drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r));

		drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size);
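The vc4_crtc.c rework keys on `vc4_async_flip_state` holding the two mutually exclusive completion callbacks (seqno-based on vc4, dma_fence-based on vc5) in one union, with `container_of()` recovering the outer state from whichever member was armed. A self-contained sketch of that layout, with illustrative stand-in types rather than the DRM ones:

```c
#include <stddef.h>

/* container_of() reduced to its core: subtract the member offset to get
 * back to the enclosing structure.
 */
#define container_of_sim(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct seqno_cb { void (*func)(struct seqno_cb *cb); };
struct fence_cb { void (*func)(struct fence_cb *cb); };

/* Stand-in for vc4_async_flip_state: only one completion path ever
 * runs, so both callback records share one union slot.
 */
struct flip_state {
	int frame;
	union {
		struct fence_cb fence;
		struct seqno_cb seqno;
	} cb;
};

static int last_completed_frame;

/* Completion handler in the style of vc4_async_page_flip_seqno_complete() */
static void seqno_complete(struct seqno_cb *cb)
{
	struct flip_state *state =
		container_of_sim(cb, struct flip_state, cb.seqno);

	last_completed_frame = state->frame;
}
```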
+81-16
drivers/gpu/drm/vc4/vc4_drv.c
@@ -63 +63 @@
	return map;
 }

+int vc4_dumb_fixup_args(struct drm_mode_create_dumb *args)
+{
+	int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8);
+
+	if (args->pitch < min_pitch)
+		args->pitch = min_pitch;
+
+	if (args->size < args->pitch * args->height)
+		args->size = args->pitch * args->height;
+
+	return 0;
+}
+
+static int vc5_dumb_create(struct drm_file *file_priv,
+			   struct drm_device *dev,
+			   struct drm_mode_create_dumb *args)
+{
+	int ret;
+
+	ret = vc4_dumb_fixup_args(args);
+	if (ret)
+		return ret;
+
+	return drm_gem_cma_dumb_create_internal(file_priv, dev, args);
+}
+
 static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
			       struct drm_file *file_priv)
 {
@@ -98 +72 @@

	if (args->pad != 0)
		return -EINVAL;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	if (!vc4->v3d)
		return -ENODEV;
@@ -145 +116 @@

 static int vc4_open(struct drm_device *dev, struct drm_file *file)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct vc4_file *vc4file;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	vc4file = kzalloc(sizeof(*vc4file), GFP_KERNEL);
	if (!vc4file)
		return -ENOMEM;
+	vc4file->dev = vc4;

	vc4_perfmon_open_file(vc4file);
	file->driver_priv = vc4file;
@@ -165 +131 @@
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct vc4_file *vc4file = file->driver_priv;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;

	if (vc4file->bin_bo_used)
		vc4_v3d_bin_bo_put(vc4);
@@ -197 +160 @@
	DRM_IOCTL_DEF_DRV(VC4_PERFMON_GET_VALUES, vc4_perfmon_get_values_ioctl, DRM_RENDER_ALLOW),
 };

-static struct drm_driver vc4_drm_driver = {
+static const struct drm_driver vc4_drm_driver = {
	.driver_features = (DRIVER_MODESET |
			    DRIVER_ATOMIC |
			    DRIVER_GEM |
@@ -212 +175 @@

	.gem_create_object = vc4_create_object,

-	DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_dumb_create),
+	DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_bo_dumb_create),

	.ioctls = vc4_drm_ioctls,
	.num_ioctls = ARRAY_SIZE(vc4_drm_ioctls),
+	.fops = &vc4_drm_fops,
+
+	.name = DRIVER_NAME,
+	.desc = DRIVER_DESC,
+	.date = DRIVER_DATE,
+	.major = DRIVER_MAJOR,
+	.minor = DRIVER_MINOR,
+	.patchlevel = DRIVER_PATCHLEVEL,
+};
+
+static const struct drm_driver vc5_drm_driver = {
+	.driver_features = (DRIVER_MODESET |
+			    DRIVER_ATOMIC |
+			    DRIVER_GEM),
+
+#if defined(CONFIG_DEBUG_FS)
+	.debugfs_init = vc4_debugfs_init,
+#endif
+
+	DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc5_dumb_create),
+
	.fops = &vc4_drm_fops,

	.name = DRIVER_NAME,
@@ -270 +212 @@
 static int vc4_drm_bind(struct device *dev)
 {
	struct platform_device *pdev = to_platform_device(dev);
+	const struct drm_driver *driver;
	struct rpi_firmware *firmware = NULL;
	struct drm_device *drm;
	struct vc4_dev *vc4;
	struct device_node *node;
	struct drm_crtc *crtc;
+	bool is_vc5;
	int ret = 0;

	dev->coherent_dma_mask = DMA_BIT_MASK(32);

-	/* If VC4 V3D is missing, don't advertise render nodes. */
-	node = of_find_matching_node_and_match(NULL, vc4_v3d_dt_match, NULL);
-	if (!node || !of_device_is_available(node))
-		vc4_drm_driver.driver_features &= ~DRIVER_RENDER;
-	of_node_put(node);
+	is_vc5 = of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5");
+	if (is_vc5)
+		driver = &vc5_drm_driver;
+	else
+		driver = &vc4_drm_driver;

-	vc4 = devm_drm_dev_alloc(dev, &vc4_drm_driver, struct vc4_dev, base);
+	vc4 = devm_drm_dev_alloc(dev, driver, struct vc4_dev, base);
	if (IS_ERR(vc4))
		return PTR_ERR(vc4);
+	vc4->is_vc5 = is_vc5;

	drm = &vc4->base;
	platform_set_drvdata(pdev, drm);
	INIT_LIST_HEAD(&vc4->debugfs_list);

-	mutex_init(&vc4->bin_bo_lock);
+	if (!is_vc5) {
+		mutex_init(&vc4->bin_bo_lock);

-	ret = vc4_bo_cache_init(drm);
-	if (ret)
-		return ret;
+		ret = vc4_bo_cache_init(drm);
+		if (ret)
+			return ret;
+	}

	ret = drmm_mode_config_init(drm);
	if (ret)
		return ret;

-	ret = vc4_gem_init(drm);
-	if (ret)
-		return ret;
+	if (!is_vc5) {
+		ret = vc4_gem_init(drm);
+		if (ret)
+			return ret;
+	}

	node = of_find_compatible_node(NULL, NULL, "raspberrypi,bcm2835-firmware");
	if (node) {
@@ -323 +258 @@
		return -EPROBE_DEFER;
	}

-	ret = drm_aperture_remove_framebuffers(false, &vc4_drm_driver);
+	ret = drm_aperture_remove_framebuffers(false, driver);
	if (ret)
		return ret;

+13-6
drivers/gpu/drm/vc4/vc4_drv.h
@@ -48 +48 @@
  * done. This way, only events related to a specific job will be counted.
  */
 struct vc4_perfmon {
+	struct vc4_dev *dev;
+
	/* Tracks the number of users of the perfmon, when this counter reaches
	 * zero the perfmon is destroyed.
	 */
@@ -75 +73 @@

 struct vc4_dev {
	struct drm_device base;
+
+	bool is_vc5;

	unsigned int irq;

@@ -320 +316 @@
 };

 struct vc4_hvs {
+	struct vc4_dev *vc4;
	struct platform_device *pdev;
	void __iomem *regs;
	u32 __iomem *dlist;
@@ -338 +333 @@
	struct drm_mm_node mitchell_netravali_filter;

	struct debugfs_regset32 regset;
-
-	/* HVS version 5 flag, therefore requires updated dlist structures */
-	bool hvs5;
 };

 struct vc4_plane {
@@ -582 +580 @@
 #define VC4_REG32(reg) { .name = #reg, .offset = reg }

 struct vc4_exec_info {
+	struct vc4_dev *dev;
+
	/* Sequence number for this bin/render job. */
	uint64_t seqno;

@@ -705 +701 @@
  * released when the DRM file is closed should be placed here.
  */
 struct vc4_file {
+	struct vc4_dev *dev;
+
	struct {
		struct idr idr;
		struct mutex lock;
@@ -820 +814 @@
 struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size);
 struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t size,
			     bool from_cache, enum vc4_kernel_bo_type type);
-int vc4_dumb_create(struct drm_file *file_priv,
-		    struct drm_device *dev,
-		    struct drm_mode_create_dumb *args);
+int vc4_bo_dumb_create(struct drm_file *file_priv,
+		       struct drm_device *dev,
+		       struct drm_mode_create_dumb *args);
 int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
			struct drm_file *file_priv);
 int vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
@@ -891 +885 @@

 /* vc4_drv.c */
 void __iomem *vc4_ioremap_regs(struct platform_device *dev, int index);
+int vc4_dumb_fixup_args(struct drm_mode_create_dumb *args);

 /* vc4_dpi.c */
 extern struct platform_driver vc4_dpi_driver;
+40
drivers/gpu/drm/vc4/vc4_gem.c
@@ -76 +76 @@
	u32 i;
	int ret = 0;

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
	if (!vc4->v3d) {
		DRM_DEBUG("VC4_GET_HANG_STATE with no VC4 V3D probed\n");
		return -ENODEV;
@@ -389 +386 @@
	unsigned long timeout_expire;
	DEFINE_WAIT(wait);

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
	if (vc4->finished_seqno >= seqno)
		return 0;

@@ -474 +468 @@
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct vc4_exec_info *exec;

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
 again:
	exec = vc4_first_bin_job(vc4);
	if (!exec)
@@ -522 +513 @@
	if (!exec)
		return;

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	/* A previous RCL may have written to one of our textures, and
	 * our full cache flush at bin time may have occurred before
	 * that RCL completed. Flush the texture cache now, but not
@@ -542 +530 @@
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	bool was_empty = list_empty(&vc4->render_job_list);
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;

	list_move_tail(&exec->head, &vc4->render_job_list);
	if (was_empty)
@@ -1012 +997 @@
	unsigned long irqflags;
	struct vc4_seqno_cb *cb, *cb_temp;

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	spin_lock_irqsave(&vc4->job_lock, irqflags);
	while (!list_empty(&vc4->job_done_list)) {
		struct vc4_exec_info *exec =
@@ -1050 +1032 @@
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	unsigned long irqflags;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	cb->func = func;
	INIT_WORK(&cb->work, vc4_seqno_cb_work);
@@ -1104 +1083 @@
 vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
		     struct drm_file *file_priv)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct drm_vc4_wait_seqno *args = data;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno,
					       &args->timeout_ns);
@@ -1118 +1093 @@
 vc4_wait_bo_ioctl(struct drm_device *dev, void *data,
		  struct drm_file *file_priv)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	int ret;
	struct drm_vc4_wait_bo *args = data;
	struct drm_gem_object *gem_obj;
	struct vc4_bo *bo;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	if (args->pad != 0)
		return -EINVAL;
@@ -1173 +1144 @@
			  args->shader_rec_size,
			  args->bo_handle_count);

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
	if (!vc4->v3d) {
		DRM_DEBUG("VC4_SUBMIT_CL with no VC4 V3D probed\n");
		return -ENODEV;
@@ -1199 +1167 @@
		DRM_ERROR("malloc failure on exec struct\n");
		return -ENOMEM;
	}
+	exec->dev = vc4;

	ret = vc4_v3d_pm_get(vc4);
	if (ret) {
@@ -1309 +1276 @@
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
	vc4->dma_fence_context = dma_fence_context_alloc(1);

	INIT_LIST_HEAD(&vc4->bin_job_list);
@@ -1357 +1321 @@
 int vc4_gem_madvise_ioctl(struct drm_device *dev, void *data,
			  struct drm_file *file_priv)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct drm_vc4_gem_madvise *args = data;
	struct drm_gem_object *gem_obj;
	struct vc4_bo *bo;
	int ret;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	switch (args->madv) {
	case VC4_MADV_DONTNEED:
+1-1
drivers/gpu/drm/vc4/vc4_hdmi.c
@@ -1481 +1481 @@
				    unsigned int bpc,
				    enum vc4_hdmi_output_format fmt)
 {
-	unsigned long long clock = mode->clock * 1000;
+	unsigned long long clock = mode->clock * 1000ULL;

	if (mode->flags & DRM_MODE_FLAG_DBLCLK)
		clock = clock * 2;
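The one-line vc4_hdmi change matters because the DRM mode clock is a plain int in kHz: `clock * 1000` is computed in 32-bit arithmetic and can overflow before the result is widened into the 64-bit variable. The sketch below uses unsigned operands so the wraparound is well-defined C (the kernel field is signed, where overflow is undefined behavior), and assumes the usual 32-bit `int`/`unsigned int` ABI:

```c
/* kHz-to-Hz conversion done two ways, mirroring the vc4_hdmi fix. */

static unsigned long long khz_to_hz_narrow(unsigned int khz)
{
	return khz * 1000u;	/* multiply in 32 bits; widening comes too late */
}

static unsigned long long khz_to_hz_wide(unsigned int khz)
{
	return khz * 1000ULL;	/* operand widened first: full 64-bit multiply */
}
```

For a 5 GHz-class rate (5,000,000 kHz) the narrow version wraps modulo 2^32 while the wide version returns the exact value.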
+9-9
drivers/gpu/drm/vc4/vc4_hvs.c
@@ -220 +220 @@

 int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output)
 {
+	struct vc4_dev *vc4 = hvs->vc4;
	u32 reg;
	int ret;

-	if (!hvs->hvs5)
+	if (!vc4->is_vc5)
		return output;

	switch (output) {
@@ -274 +273 @@
 static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
				struct drm_display_mode *mode, bool oneshot)
 {
+	struct vc4_dev *vc4 = hvs->vc4;
	struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc);
	struct vc4_crtc_state *vc4_crtc_state = to_vc4_crtc_state(crtc->state);
	unsigned int chan = vc4_crtc_state->assigned_channel;
@@ -293 +291 @@
	 */
	dispctrl = SCALER_DISPCTRLX_ENABLE;

-	if (!hvs->hvs5)
+	if (!vc4->is_vc5)
		dispctrl |= VC4_SET_FIELD(mode->hdisplay,
					  SCALER_DISPCTRLX_WIDTH) |
			    VC4_SET_FIELD(mode->vdisplay,
@@ -314 +312 @@

	HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx |
		  SCALER_DISPBKGND_AUTOHS |
-		  ((!hvs->hvs5) ? SCALER_DISPBKGND_GAMMA : 0) |
+		  ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) |
		  (interlace ? SCALER_DISPBKGND_INTERLACE : 0));

	/* Reload the LUT, since the SRAMs would have been disabled if
@@ -619 +617 @@
	if (!hvs)
		return -ENOMEM;

+	hvs->vc4 = vc4;
	hvs->pdev = pdev;
-
-	if (of_device_is_compatible(pdev->dev.of_node, "brcm,bcm2711-hvs"))
-		hvs->hvs5 = true;

	hvs->regs = vc4_ioremap_regs(pdev, 0);
	if (IS_ERR(hvs->regs))
@@ -630 +630 @@
	hvs->regset.regs = hvs_regs;
	hvs->regset.nregs = ARRAY_SIZE(hvs_regs);

-	if (hvs->hvs5) {
+	if (vc4->is_vc5) {
		hvs->core_clk = devm_clk_get(&pdev->dev, NULL);
		if (IS_ERR(hvs->core_clk)) {
			dev_err(&pdev->dev, "Couldn't get core clock\n");
@@ -644 +644 @@
		}
	}

-	if (!hvs->hvs5)
+	if (!vc4->is_vc5)
		hvs->dlist = hvs->regs + SCALER_DLIST_START;
	else
		hvs->dlist = hvs->regs + SCALER5_DLIST_START;
@@ -665 +665 @@
	 * between planes when they don't overlap on the screen, but
	 * for now we just allocate globally.
	 */
-	if (!hvs->hvs5)
+	if (!vc4->is_vc5)
		/* 48k words of 2x12-bit pixels */
		drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024);
	else
+16
drivers/gpu/drm/vc4/vc4_irq.c
@@ -265 +265 @@
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	if (!vc4->v3d)
		return;

@@ -281 +278 @@
 vc4_irq_disable(struct drm_device *dev)
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;

	if (!vc4->v3d)
		return;
@@ -302 +296 @@

 int vc4_irq_install(struct drm_device *dev, int irq)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	int ret;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	if (irq == IRQ_NOTCONNECTED)
		return -ENOTCONN;
@@ -326 +316 @@
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	vc4_irq_disable(dev);
	free_irq(vc4->irq, dev);
 }
@@ -338 +325 @@
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);
	unsigned long irqflags;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;

	/* Acknowledge any stale IRQs. */
	V3D_WRITE(V3D_INTCTL, V3D_DRIVER_IRQS);
+16-8
drivers/gpu/drm/vc4/vc4_kms.c
@@ -393 +393 @@
		old_hvs_state->fifo_state[channel].pending_commit = NULL;
	}

-	if (vc4->hvs->hvs5) {
+	if (vc4->is_vc5) {
		unsigned long state_rate = max(old_hvs_state->core_clock_rate,
					       new_hvs_state->core_clock_rate);
		unsigned long core_rate = max_t(unsigned long,
@@ -412 +412 @@

	vc4_ctm_commit(vc4, state);

-	if (vc4->hvs->hvs5)
+	if (vc4->is_vc5)
		vc5_hvs_pv_muxing_commit(vc4, state);
	else
		vc4_hvs_pv_muxing_commit(vc4, state);
@@ -430 +430 @@

	drm_atomic_helper_cleanup_planes(dev, state);

-	if (vc4->hvs->hvs5) {
+	if (vc4->is_vc5) {
		drm_dbg(dev, "Running the core clock at %lu Hz\n",
			new_hvs_state->core_clock_rate);

@@ -479 +479 @@
					     struct drm_file *file_priv,
					     const struct drm_mode_fb_cmd2 *mode_cmd)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct drm_mode_fb_cmd2 mode_cmd_local;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return ERR_PTR(-ENODEV);

	/* If the user didn't specify a modifier, use the
	 * vc4_set_tiling_ioctl() state for the BO.
@@ -1001 +997 @@
	.fb_create = vc4_fb_create,
 };

+static const struct drm_mode_config_funcs vc5_mode_funcs = {
+	.atomic_check = vc4_atomic_check,
+	.atomic_commit = drm_atomic_helper_commit,
+	.fb_create = drm_gem_fb_create,
+};
+
 int vc4_kms_load(struct drm_device *dev)
 {
	struct vc4_dev *vc4 = to_vc4_dev(dev);
-	bool is_vc5 = of_device_is_compatible(dev->dev->of_node,
-					      "brcm,bcm2711-vc5");
	int ret;

	/*
@@ -1017 +1009 @@
	 * the BCM2711, but the load tracker computations are used for
	 * the core clock rate calculation.
	 */
-	if (!is_vc5) {
+	if (!vc4->is_vc5) {
		/* Start with the load tracker enabled. Can be
		 * disabled through the debugfs load_tracker file.
		 */
@@ -1033 +1025 @@
		return ret;
	}

-	if (is_vc5) {
+	if (vc4->is_vc5) {
		dev->mode_config.max_width = 7680;
		dev->mode_config.max_height = 7680;
	} else {
@@ -1041 +1033 @@
		dev->mode_config.max_height = 2048;
	}

-	dev->mode_config.funcs = &vc4_mode_funcs;
+	dev->mode_config.funcs = vc4->is_vc5 ? &vc5_mode_funcs : &vc4_mode_funcs;
	dev->mode_config.helper_private = &vc4_mode_config_helpers;
	dev->mode_config.preferred_depth = 24;
	dev->mode_config.async_page_flip = true;
+46-1
drivers/gpu/drm/vc4/vc4_perfmon.c
@@ -17 +17 @@

 void vc4_perfmon_get(struct vc4_perfmon *perfmon)
 {
+	struct vc4_dev *vc4 = perfmon->dev;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	if (perfmon)
		refcount_inc(&perfmon->refcnt);
 }

 void vc4_perfmon_put(struct vc4_perfmon *perfmon)
 {
-	if (perfmon && refcount_dec_and_test(&perfmon->refcnt))
+	struct vc4_dev *vc4;
+
+	if (!perfmon)
+		return;
+
+	vc4 = perfmon->dev;
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
+	if (refcount_dec_and_test(&perfmon->refcnt))
		kfree(perfmon);
 }

@@ -45 +31 @@
 {
	unsigned int i;
	u32 mask;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;

	if (WARN_ON_ONCE(!perfmon || vc4->active_perfmon))
		return;
@@ -66 +49 @@
 {
	unsigned int i;

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	if (WARN_ON_ONCE(!vc4->active_perfmon ||
			 perfmon != vc4->active_perfmon))
		return;
@@ -84 +64 @@

 struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id)
 {
+	struct vc4_dev *vc4 = vc4file->dev;
	struct vc4_perfmon *perfmon;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return NULL;

	mutex_lock(&vc4file->perfmon.lock);
	perfmon = idr_find(&vc4file->perfmon.idr, id);
@@ -100 +76 @@

 void vc4_perfmon_open_file(struct vc4_file *vc4file)
 {
+	struct vc4_dev *vc4 = vc4file->dev;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	mutex_init(&vc4file->perfmon.lock);
	idr_init_base(&vc4file->perfmon.idr, VC4_PERFMONID_MIN);
+	vc4file->dev = vc4;
 }

 static int vc4_perfmon_idr_del(int id, void *elem, void *data)
@@ -121 +91 @@

 void vc4_perfmon_close_file(struct vc4_file *vc4file)
 {
+	struct vc4_dev *vc4 = vc4file->dev;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return;
+
	mutex_lock(&vc4file->perfmon.lock);
	idr_for_each(&vc4file->perfmon.idr, vc4_perfmon_idr_del, NULL);
	idr_destroy(&vc4file->perfmon.idr);
@@ -141 +106 @@
	struct vc4_perfmon *perfmon;
	unsigned int i;
	int ret;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	if (!vc4->v3d) {
		DRM_DEBUG("Creating perfmon no VC4 V3D probed\n");
@@ -165 +127 @@
			  GFP_KERNEL);
	if (!perfmon)
		return -ENOMEM;
+	perfmon->dev = vc4;

	for (i = 0; i < req->ncounters; i++)
		perfmon->events[i] = req->events[i];
@@ -196 +157 @@
	struct drm_vc4_perfmon_destroy *req = data;
	struct vc4_perfmon *perfmon;

+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;
+
	if (!vc4->v3d) {
		DRM_DEBUG("Destroying perfmon no VC4 V3D probed\n");
		return -ENODEV;
@@ -223 +181 @@
	struct drm_vc4_perfmon_get_values *req = data;
	struct vc4_perfmon *perfmon;
	int ret;
+
+	if (WARN_ON_ONCE(vc4->is_vc5))
+		return -ENODEV;

	if (!vc4->v3d) {
		DRM_DEBUG("Getting perfmon no VC4 V3D probed\n");
+21-8
drivers/gpu/drm/vc4/vc4_plane.c
@@ -489 +489 @@
	}

	/* Align it to 64 or 128 (hvs5) bytes */
-	lbm = roundup(lbm, vc4->hvs->hvs5 ? 128 : 64);
+	lbm = roundup(lbm, vc4->is_vc5 ? 128 : 64);

	/* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */
-	lbm /= vc4->hvs->hvs5 ? 4 : 2;
+	lbm /= vc4->is_vc5 ? 4 : 2;

	return lbm;
 }
@@ -608 +608 @@
		ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm,
						 &vc4_state->lbm,
						 lbm_size,
-						 vc4->hvs->hvs5 ? 64 : 32,
+						 vc4->is_vc5 ? 64 : 32,
						 0, 0);
		spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags);

@@ -917 +917 @@
	mix_plane_alpha = state->alpha != DRM_BLEND_ALPHA_OPAQUE &&
			  fb->format->has_alpha;

-	if (!vc4->hvs->hvs5) {
+	if (!vc4->is_vc5) {
		/* Control word */
		vc4_dlist_write(vc4_state,
				SCALER_CTL0_VALID |
@@ -1321 +1321 @@

	old_vc4_state = to_vc4_plane_state(plane->state);
	new_vc4_state = to_vc4_plane_state(new_plane_state);
+
+	if (!new_vc4_state->hw_dlist)
+		return -EINVAL;
+
	if (old_vc4_state->dlist_count != new_vc4_state->dlist_count ||
	    old_vc4_state->pos0_offset != new_vc4_state->pos0_offset ||
	    old_vc4_state->pos2_offset != new_vc4_state->pos2_offset ||
@@ -1385 +1381 @@
	.atomic_update = vc4_plane_atomic_update,
	.prepare_fb = vc4_prepare_fb,
	.cleanup_fb = vc4_cleanup_fb,
+	.atomic_async_check = vc4_plane_atomic_async_check,
+	.atomic_async_update = vc4_plane_atomic_async_update,
+};
+
+static const struct drm_plane_helper_funcs vc5_plane_helper_funcs = {
+	.atomic_check = vc4_plane_atomic_check,
+	.atomic_update = vc4_plane_atomic_update,
	.atomic_async_check = vc4_plane_atomic_async_check,
	.atomic_async_update = vc4_plane_atomic_async_update,
 };
@@ -1464 +1453 @@
 struct drm_plane *vc4_plane_init(struct drm_device *dev,
				 enum drm_plane_type type)
 {
+	struct vc4_dev *vc4 = to_vc4_dev(dev);
	struct drm_plane *plane = NULL;
	struct vc4_plane *vc4_plane;
	u32 formats[ARRAY_SIZE(hvs_formats)];
	int num_formats = 0;
	int ret = 0;
	unsigned i;
-	bool hvs5 = of_device_is_compatible(dev->dev->of_node,
-					    "brcm,bcm2711-vc5");
	static const uint64_t modifiers[] = {
		DRM_FORMAT_MOD_BROADCOM_VC4_T_TILED,
		DRM_FORMAT_MOD_BROADCOM_SAND128,
@@ -1486 +1476 @@
		return ERR_PTR(-ENOMEM);

	for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) {
-		if (!hvs_formats[i].hvs5_only || hvs5) {
+		if (!hvs_formats[i].hvs5_only || vc4->is_vc5) {
			formats[num_formats] = hvs_formats[i].drm;
			num_formats++;
		}
@@ -1500 +1490 @@
	if (ret)
		return ERR_PTR(ret);

-	drm_plane_helper_add(plane, &vc4_plane_helper_funcs);
+	if (vc4->is_vc5)
+		drm_plane_helper_add(plane, &vc5_plane_helper_funcs);
+	else
+		drm_plane_helper_add(plane, &vc4_plane_helper_funcs);

	drm_plane_create_alpha_property(plane);
	drm_plane_create_rotation_property(plane, DRM_MODE_ROTATE_0,
···15111511 int i;15121512 int ret;1513151315141514- ret = i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2,15141514+ /*15151515+ * Find on fxls8471, after config reset bit, it reset immediately,15161516+ * and will not give ACK, so here do not check the return value.15171517+ * The following code will read the reset register, and check whether15181518+ * this reset works.15191519+ */15201520+ i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2,15151521 MMA8452_CTRL_REG2_RST);15161516- if (ret < 0)15171517- return ret;1518152215191523 for (i = 0; i < 10; i++) {15201524 usleep_range(100, 200);···15611557 mutex_init(&data->lock);1562155815631559 data->chip_info = device_get_match_data(&client->dev);15641564- if (!data->chip_info && id) {15651565- data->chip_info = &mma_chip_info_table[id->driver_data];15661566- } else {15671567- dev_err(&client->dev, "unknown device model\n");15681568- return -ENODEV;15601560+ if (!data->chip_info) {15611561+ if (id) {15621562+ data->chip_info = &mma_chip_info_table[id->driver_data];15631563+ } else {15641564+ dev_err(&client->dev, "unknown device model\n");15651565+ return -ENODEV;15661566+ }15691567 }1570156815711569 ret = iio_read_mount_matrix(&client->dev, &data->orientation);
···639639 dev_dbg(yas5xx->dev, "calibration data: %*ph\n", 14, data);640640641641 /* Sanity check, is this all zeroes? */642642- if (memchr_inv(data, 0x00, 13)) {642642+ if (memchr_inv(data, 0x00, 13) == NULL) {643643 if (!(data[13] & BIT(7)))644644 dev_warn(yas5xx->dev, "calibration is blank!\n");645645 }
+3
drivers/iio/proximity/sx9324.c
···885885 break;886886 ret = device_property_read_u32_array(dev, prop, pin_defs,887887 ARRAY_SIZE(pin_defs));888888+ if (ret)889889+ break;890890+888891 for (pin = 0; pin < SX9324_NUM_PINS; pin++)889892 raw |= (pin_defs[pin] << (2 * pin)) &890893 SX9324_REG_AFE_PH0_PIN_MASK(pin);
+1-1
drivers/iio/test/Kconfig
···66# Keep in alphabetical order77config IIO_RESCALE_KUNIT_TEST88 bool "Test IIO rescale conversion functions"99- depends on KUNIT=y && !IIO_RESCALE99+ depends on KUNIT=y && IIO_RESCALE=y1010 default KUNIT_ALL_TESTS1111 help1212 If you want to run tests on the iio-rescale code say Y here.
···272272 atomic_t io_count;273273 struct mapped_device *md;274274275275+ struct bio *split_bio;275276 /* The three fields represent mapped part of original bio */276277 struct bio *orig_bio;277278 unsigned int sector_offset; /* offset to end of orig_bio */
+7-1
drivers/md/dm-era-target.c
···14001400static void stop_worker(struct era *era)14011401{14021402 atomic_set(&era->suspended, 1);14031403- flush_workqueue(era->wq);14031403+ drain_workqueue(era->wq);14041404}1405140514061406/*----------------------------------------------------------------···15701570 }1571157115721572 stop_worker(era);15731573+15741574+ r = metadata_commit(era->md);15751575+ if (r) {15761576+ DMERR("%s: metadata_commit failed", __func__);15771577+ /* FIXME: fail mode */15781578+ }15731579}1574158015751581static int era_preresume(struct dm_target *ti)
+1-1
drivers/md/dm-log.c
···615615 log_clear_bit(lc, lc->clean_bits, i);616616617617 /* clear any old bits -- device has shrunk */618618- for (i = lc->region_count; i % (sizeof(*lc->clean_bits) << BYTE_SHIFT); i++)618618+ for (i = lc->region_count; i % BITS_PER_LONG; i++)619619 log_clear_bit(lc, lc->clean_bits, i);620620621621 /* copy clean across to sync */
+10-5
drivers/md/dm.c
···594594 atomic_set(&io->io_count, 2);595595 this_cpu_inc(*md->pending_io);596596 io->orig_bio = bio;597597+ io->split_bio = NULL;597598 io->md = md;598599 spin_lock_init(&io->lock);599600 io->start_time = jiffies;···888887{889888 blk_status_t io_error;890889 struct mapped_device *md = io->md;891891- struct bio *bio = io->orig_bio;890890+ struct bio *bio = io->split_bio ? io->split_bio : io->orig_bio;892891893892 if (io->status == BLK_STS_DM_REQUEUE) {894893 unsigned long flags;···940939 if (io_error == BLK_STS_AGAIN) {941940 /* io_uring doesn't handle BLK_STS_AGAIN (yet) */942941 queue_io(md, bio);942942+ return;943943 }944944 }945945- return;945945+ if (io_error == BLK_STS_DM_REQUEUE)946946+ return;946947 }947948948949 if (bio_is_flush_with_data(bio)) {···16941691 * Remainder must be passed to submit_bio_noacct() so it gets handled16951692 * *after* bios already submitted have been completely processed.16961693 */16971697- bio_trim(bio, io->sectors, ci.sector_count);16981698- trace_block_split(bio, bio->bi_iter.bi_sector);16991699- bio_inc_remaining(bio);16941694+ WARN_ON_ONCE(!dm_io_flagged(io, DM_IO_WAS_SPLIT));16951695+ io->split_bio = bio_split(bio, io->sectors, GFP_NOIO,16961696+ &md->queue->bio_split);16971697+ bio_chain(io->split_bio, bio);16981698+ trace_block_split(io->split_bio, bio->bi_iter.bi_sector);17001699 submit_bio_noacct(bio);17011700out:17021701 /*
+1
drivers/memory/Kconfig
···105105config OMAP_GPMC106106 tristate "Texas Instruments OMAP SoC GPMC driver"107107 depends on OF_ADDRESS108108+ depends on ARCH_OMAP2PLUS || ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST108109 select GPIOLIB109110 help110111 This driver is for the General Purpose Memory Controller (GPMC)
+4-1
drivers/memory/mtk-smi.c
···404404 of_node_put(smi_com_node);405405 if (smi_com_pdev) {406406 /* smi common is the supplier, Make sure it is ready before */407407- if (!platform_get_drvdata(smi_com_pdev))407407+ if (!platform_get_drvdata(smi_com_pdev)) {408408+ put_device(&smi_com_pdev->dev);408409 return -EPROBE_DEFER;410410+ }409411 smi_com_dev = &smi_com_pdev->dev;410412 link = device_link_add(dev, smi_com_dev,411413 DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS);412414 if (!link) {413415 dev_err(dev, "Unable to link smi-common dev\n");416416+ put_device(&smi_com_pdev->dev);414417 return -ENODEV;415418 }416419 *com_dev = smi_com_dev;
+18-11
drivers/memory/samsung/exynos5422-dmc.c
···1187118711881188 dmc->timing_row = devm_kmalloc_array(dmc->dev, TIMING_COUNT,11891189 sizeof(u32), GFP_KERNEL);11901190- if (!dmc->timing_row)11911191- return -ENOMEM;11901190+ if (!dmc->timing_row) {11911191+ ret = -ENOMEM;11921192+ goto put_node;11931193+ }1192119411931195 dmc->timing_data = devm_kmalloc_array(dmc->dev, TIMING_COUNT,11941196 sizeof(u32), GFP_KERNEL);11951195- if (!dmc->timing_data)11961196- return -ENOMEM;11971197+ if (!dmc->timing_data) {11981198+ ret = -ENOMEM;11991199+ goto put_node;12001200+ }1197120111981202 dmc->timing_power = devm_kmalloc_array(dmc->dev, TIMING_COUNT,11991203 sizeof(u32), GFP_KERNEL);12001200- if (!dmc->timing_power)12011201- return -ENOMEM;12041204+ if (!dmc->timing_power) {12051205+ ret = -ENOMEM;12061206+ goto put_node;12071207+ }1202120812031209 dmc->timings = of_lpddr3_get_ddr_timings(np_ddr, dmc->dev,12041210 DDR_TYPE_LPDDR3,12051211 &dmc->timings_arr_size);12061212 if (!dmc->timings) {12071207- of_node_put(np_ddr);12081213 dev_warn(dmc->dev, "could not get timings from DT\n");12091209- return -EINVAL;12141214+ ret = -EINVAL;12151215+ goto put_node;12101216 }1211121712121218 dmc->min_tck = of_lpddr3_get_min_tck(np_ddr, dmc->dev);12131219 if (!dmc->min_tck) {12141214- of_node_put(np_ddr);12151220 dev_warn(dmc->dev, "could not get tck from DT\n");12161216- return -EINVAL;12211221+ ret = -EINVAL;12221222+ goto put_node;12171223 }1218122412191225 /* Sorted array of OPPs with frequency ascending */···12331227 clk_period_ps);12341228 }1235122912361236- of_node_put(np_ddr);1237123012381231 /* Take the highest frequency's timings as 'bypass' */12391232 dmc->bypass_timing_row = dmc->timing_row[idx - 1];12401233 dmc->bypass_timing_data = dmc->timing_data[idx - 1];12411234 dmc->bypass_timing_power = dmc->timing_power[idx - 1];1242123512361236+put_node:12371237+ of_node_put(np_ddr);12431238 return ret;12441239}12451240
···36843684 if (!rtnl_trylock())36853685 return;3686368636873687- if (should_notify_peers)36873687+ if (should_notify_peers) {36883688+ bond->send_peer_notif--;36883689 call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,36893690 bond->dev);36913691+ }36903692 if (should_notify_rtnl) {36913693 bond_slave_state_notify(bond);36923694 bond_slave_link_notify(bond);
+21-1
drivers/net/dsa/qca8k.c
···23342334qca8k_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu)23352335{23362336 struct qca8k_priv *priv = ds->priv;23372337+ int ret;2337233823382339 /* We have only have a general MTU setting.23392340 * DSA always set the CPU port's MTU to the largest MTU of the slave···23452344 if (!dsa_is_cpu_port(ds, port))23462345 return 0;2347234623472347+ /* To change the MAX_FRAME_SIZE the cpu ports must be off or23482348+ * the switch panics.23492349+ * Turn off both cpu ports before applying the new value to prevent23502350+ * this.23512351+ */23522352+ if (priv->port_enabled_map & BIT(0))23532353+ qca8k_port_set_status(priv, 0, 0);23542354+23552355+ if (priv->port_enabled_map & BIT(6))23562356+ qca8k_port_set_status(priv, 6, 0);23572357+23482358 /* Include L2 header / FCS length */23492349- return qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN);23592359+ ret = qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN);23602360+23612361+ if (priv->port_enabled_map & BIT(0))23622362+ qca8k_port_set_status(priv, 0, 1);23632363+23642364+ if (priv->port_enabled_map & BIT(6))23652365+ qca8k_port_set_status(priv, 6, 1);23662366+23672367+ return ret;23502368}2351236923522370static int
···43434444 for (i = 0; i < fw_image->fw_info.fw_section_cnt; i++) {4545 len += fw_image->fw_section_info[i].fw_section_len;4646- memcpy(&host_image->image_section_info[i],4747- &fw_image->fw_section_info[i],4848- sizeof(struct fw_section_info_st));4646+ host_image->image_section_info[i] = fw_image->fw_section_info[i];4947 }50485149 if (len != fw_image->fw_len ||
+48-1
drivers/net/ethernet/intel/ice/ice_ethtool.c
···21902190}2191219121922192/**21932193+ * ice_set_phy_type_from_speed - set phy_types based on speeds21942194+ * and advertised modes21952195+ * @ks: ethtool link ksettings struct21962196+ * @phy_type_low: pointer to the lower part of phy_type21972197+ * @phy_type_high: pointer to the higher part of phy_type21982198+ * @adv_link_speed: targeted link speeds bitmap21992199+ */22002200+static void22012201+ice_set_phy_type_from_speed(const struct ethtool_link_ksettings *ks,22022202+ u64 *phy_type_low, u64 *phy_type_high,22032203+ u16 adv_link_speed)22042204+{22052205+ /* Handle 1000M speed in a special way because ice_update_phy_type22062206+ * enables all link modes, but having mixed copper and optical22072207+ * standards is not supported.22082208+ */22092209+ adv_link_speed &= ~ICE_AQ_LINK_SPEED_1000MB;22102210+22112211+ if (ethtool_link_ksettings_test_link_mode(ks, advertising,22122212+ 1000baseT_Full))22132213+ *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_T |22142214+ ICE_PHY_TYPE_LOW_1G_SGMII;22152215+22162216+ if (ethtool_link_ksettings_test_link_mode(ks, advertising,22172217+ 1000baseKX_Full))22182218+ *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_KX;22192219+22202220+ if (ethtool_link_ksettings_test_link_mode(ks, advertising,22212221+ 1000baseX_Full))22222222+ *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_SX |22232223+ ICE_PHY_TYPE_LOW_1000BASE_LX;22242224+22252225+ ice_update_phy_type(phy_type_low, phy_type_high, adv_link_speed);22262226+}22272227+22282228+/**21932229 * ice_set_link_ksettings - Set Speed and Duplex21942230 * @netdev: network interface device structure21952231 * @ks: ethtool ksettings···23562320 adv_link_speed = curr_link_speed;2357232123582322 /* Convert the advertise link speeds to their corresponded PHY_TYPE */23592359- ice_update_phy_type(&phy_type_low, &phy_type_high, adv_link_speed);23232323+ ice_set_phy_type_from_speed(ks, &phy_type_low, &phy_type_high,23242324+ adv_link_speed);2360232523612326 if (!autoneg_changed && adv_link_speed == 
curr_link_speed) {23622327 netdev_info(netdev, "Nothing changed, exiting without setting anything.\n");···35073470 new_rx = ch->combined_count + ch->rx_count;35083471 new_tx = ch->combined_count + ch->tx_count;3509347234733473+ if (new_rx < vsi->tc_cfg.numtc) {34743474+ netdev_err(dev, "Cannot set less Rx channels, than Traffic Classes you have (%u)\n",34753475+ vsi->tc_cfg.numtc);34763476+ return -EINVAL;34773477+ }34783478+ if (new_tx < vsi->tc_cfg.numtc) {34793479+ netdev_err(dev, "Cannot set less Tx channels, than Traffic Classes you have (%u)\n",34803480+ vsi->tc_cfg.numtc);34813481+ return -EINVAL;34823482+ }35103483 if (new_rx > ice_get_max_rxq(pf)) {35113484 netdev_err(dev, "Maximum allowed Rx channels is %d\n",35123485 ice_get_max_rxq(pf));
+37-5
drivers/net/ethernet/intel/ice/ice_lib.c
···909909 * @vsi: the VSI being configured910910 * @ctxt: VSI context structure911911 */912912-static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)912912+static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)913913{914914 u16 offset = 0, qmap = 0, tx_count = 0, pow = 0;915915 u16 num_txq_per_tc, num_rxq_per_tc;···982982 else983983 vsi->num_rxq = num_rxq_per_tc;984984985985+ if (vsi->num_rxq > vsi->alloc_rxq) {986986+ dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",987987+ vsi->num_rxq, vsi->alloc_rxq);988988+ return -EINVAL;989989+ }990990+985991 vsi->num_txq = tx_count;992992+ if (vsi->num_txq > vsi->alloc_txq) {993993+ dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",994994+ vsi->num_txq, vsi->alloc_txq);995995+ return -EINVAL;996996+ }986997987998 if (vsi->type == ICE_VSI_VF && vsi->num_txq != vsi->num_rxq) {988999 dev_dbg(ice_pf_to_dev(vsi->back), "VF VSI should have same number of Tx and Rx queues. 
Hence making them equal\n");···10111000 */10121001 ctxt->info.q_mapping[0] = cpu_to_le16(vsi->rxq_map[0]);10131002 ctxt->info.q_mapping[1] = cpu_to_le16(vsi->num_rxq);10031003+10041004+ return 0;10141005}1015100610161007/**···12001187 if (vsi->type == ICE_VSI_CHNL) {12011188 ice_chnl_vsi_setup_q_map(vsi, ctxt);12021189 } else {12031203- ice_vsi_setup_q_map(vsi, ctxt);11901190+ ret = ice_vsi_setup_q_map(vsi, ctxt);11911191+ if (ret)11921192+ goto out;11931193+12041194 if (!init_vsi) /* means VSI being updated */12051195 /* must to indicate which section of VSI context are12061196 * being modified···34803464 *34813465 * Prepares VSI tc_config to have queue configurations based on MQPRIO options.34823466 */34833483-static void34673467+static int34843468ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt,34853469 u8 ena_tc)34863470{···3529351335303514 /* Set actual Tx/Rx queue pairs */35313515 vsi->num_txq = offset + qcount_tx;35163516+ if (vsi->num_txq > vsi->alloc_txq) {35173517+ dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n",35183518+ vsi->num_txq, vsi->alloc_txq);35193519+ return -EINVAL;35203520+ }35213521+35323522 vsi->num_rxq = offset + qcount_rx;35233523+ if (vsi->num_rxq > vsi->alloc_rxq) {35243524+ dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n",35253525+ vsi->num_rxq, vsi->alloc_rxq);35263526+ return -EINVAL;35273527+ }3533352835343529 /* Setup queue TC[0].qmap for given VSI context */35353530 ctxt->info.tc_mapping[0] = cpu_to_le16(qmap);···35583531 dev_dbg(ice_pf_to_dev(vsi->back), "vsi->num_rxq = %d\n", vsi->num_rxq);35593532 dev_dbg(ice_pf_to_dev(vsi->back), "all_numtc %u, all_enatc: 0x%04x, tc_cfg.numtc %u\n",35603533 vsi->all_numtc, vsi->all_enatc, vsi->tc_cfg.numtc);35343534+35353535+ return 0;35613536}3562353735633538/**···3609358036103581 if (vsi->type == ICE_VSI_PF &&36113582 test_bit(ICE_FLAG_TC_MQPRIO, pf->flags))36123612- 
ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc);35833583+ ret = ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc);36133584 else36143614- ice_vsi_setup_q_map(vsi, ctx);35853585+ ret = ice_vsi_setup_q_map(vsi, ctx);35863586+35873587+ if (ret)35883588+ goto out;3615358936163590 /* must to indicate which section of VSI context are being modified */36173591 ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
···110110 struct smsc_phy_priv *priv = phydev->priv;111111 int rc;112112113113- if (!priv->energy_enable)113113+ if (!priv->energy_enable || phydev->irq != PHY_POLL)114114 return 0;115115116116 rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);···210210 * response on link pulses to detect presence of plugged Ethernet cable.211211 * The Energy Detect Power-Down mode is enabled again in the end of procedure to212212 * save approximately 220 mW of power if cable is unplugged.213213+ * The workaround is only applicable to poll mode. Energy Detect Power-Down may214214+ * not be used in interrupt mode lest link change detection becomes unreliable.213215 */214216static int lan87xx_read_status(struct phy_device *phydev)215217{···219217220218 int err = genphy_read_status(phydev);221219222222- if (!phydev->link && priv->energy_enable) {220220+ if (!phydev->link && priv->energy_enable && phydev->irq == PHY_POLL) {223221 /* Disable EDPD to wake up PHY */224222 int rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);225223 if (rc < 0)
+4
drivers/net/veth.c
···312312static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev)313313{314314 struct veth_priv *rcv_priv, *priv = netdev_priv(dev);315315+ struct netdev_queue *queue = NULL;315316 struct veth_rq *rq = NULL;316317 struct net_device *rcv;317318 int length = skb->len;···330329 rxq = skb_get_queue_mapping(skb);331330 if (rxq < rcv->real_num_rx_queues) {332331 rq = &rcv_priv->rq[rxq];332332+ queue = netdev_get_tx_queue(dev, rxq);333333334334 /* The napi pointer is available when an XDP program is335335 * attached or when GRO is enabled···342340343341 skb_tx_timestamp(skb);344342 if (likely(veth_forward_skb(rcv, skb, rq, use_napi) == NET_RX_SUCCESS)) {343343+ if (queue)344344+ txq_trans_cond_update(queue);345345 if (!use_napi)346346 dev_lstats_add(dev, length);347347 } else {
+6-19
drivers/net/virtio_net.c
···27682768static void virtnet_freeze_down(struct virtio_device *vdev)27692769{27702770 struct virtnet_info *vi = vdev->priv;27712771- int i;2772277127732772 /* Make sure no work handler is accessing the device */27742773 flush_work(&vi->config_work);···27752776 netif_tx_lock_bh(vi->dev);27762777 netif_device_detach(vi->dev);27772778 netif_tx_unlock_bh(vi->dev);27782778- cancel_delayed_work_sync(&vi->refill);27792779-27802780- if (netif_running(vi->dev)) {27812781- for (i = 0; i < vi->max_queue_pairs; i++) {27822782- napi_disable(&vi->rq[i].napi);27832783- virtnet_napi_tx_disable(&vi->sq[i].napi);27842784- }27852785- }27792779+ if (netif_running(vi->dev))27802780+ virtnet_close(vi->dev);27862781}2787278227882783static int init_vqs(struct virtnet_info *vi);···27842791static int virtnet_restore_up(struct virtio_device *vdev)27852792{27862793 struct virtnet_info *vi = vdev->priv;27872787- int err, i;27942794+ int err;2788279527892796 err = init_vqs(vi);27902797 if (err)···27932800 virtio_device_ready(vdev);2794280127952802 if (netif_running(vi->dev)) {27962796- for (i = 0; i < vi->curr_queue_pairs; i++)27972797- if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))27982798- schedule_delayed_work(&vi->refill, 0);27992799-28002800- for (i = 0; i < vi->max_queue_pairs; i++) {28012801- virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);28022802- virtnet_napi_tx_enable(vi, vi->sq[i].vq,28032803- &vi->sq[i].napi);28042804- }28032803+ err = virtnet_open(vi->dev);28042804+ if (err)28052805+ return err;28052806 }2806280728072808 netif_tx_lock_bh(vi->dev);
+14
drivers/nvme/host/core.c
···25462546 .vid = 0x1e0f,25472547 .mn = "KCD6XVUL6T40",25482548 .quirks = NVME_QUIRK_NO_APST,25492549+ },25502550+ {25512551+ /*25522552+ * The external Samsung X5 SSD fails initialization without a25532553+ * delay before checking if it is ready and has a whole set of25542554+ * other problems. To make this even more interesting, it25552555+ * shares the PCI ID with internal Samsung 970 Evo Plus that25562556+ * does not need or want these quirks.25572557+ */25582558+ .vid = 0x144d,25592559+ .mn = "Samsung Portable SSD X5",25602560+ .quirks = NVME_QUIRK_DELAY_BEFORE_CHK_RDY |25612561+ NVME_QUIRK_NO_DEEPEST_PS |25622562+ NVME_QUIRK_IGNORE_DEV_SUBNQN,25492563 }25502564};25512565
···160160static void ibmvfc_tgt_implicit_logout_and_del(struct ibmvfc_target *);161161static void ibmvfc_tgt_move_login(struct ibmvfc_target *);162162163163-static void ibmvfc_release_sub_crqs(struct ibmvfc_host *);164164-static void ibmvfc_init_sub_crqs(struct ibmvfc_host *);163163+static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *);164164+static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *);165165166166static const char *unknown_error = "unknown error";167167···917917 struct vio_dev *vdev = to_vio_dev(vhost->dev);918918 unsigned long flags;919919920920- ibmvfc_release_sub_crqs(vhost);920920+ ibmvfc_dereg_sub_crqs(vhost);921921922922 /* Re-enable the CRQ */923923 do {···936936 spin_unlock(vhost->crq.q_lock);937937 spin_unlock_irqrestore(vhost->host->host_lock, flags);938938939939- ibmvfc_init_sub_crqs(vhost);939939+ ibmvfc_reg_sub_crqs(vhost);940940941941 return rc;942942}···955955 struct vio_dev *vdev = to_vio_dev(vhost->dev);956956 struct ibmvfc_queue *crq = &vhost->crq;957957958958- ibmvfc_release_sub_crqs(vhost);958958+ ibmvfc_dereg_sub_crqs(vhost);959959960960 /* Close the CRQ */961961 do {···988988 spin_unlock(vhost->crq.q_lock);989989 spin_unlock_irqrestore(vhost->host->host_lock, flags);990990991991- ibmvfc_init_sub_crqs(vhost);991991+ ibmvfc_reg_sub_crqs(vhost);992992993993 return rc;994994}···56825682 queue->cur = 0;56835683 queue->fmt = fmt;56845684 queue->size = PAGE_SIZE / fmt_size;56855685+56865686+ queue->vhost = vhost;56855687 return 0;56865688}56875689···5759575757605758 ENTER;5761575957625762- if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT))57635763- return -ENOMEM;57645764-57655760 rc = h_reg_sub_crq(vdev->unit_address, scrq->msg_token, PAGE_SIZE,57665761 &scrq->cookie, &scrq->hw_irq);57675762···57895790 }5790579157915792 scrq->hwq_id = index;57925792- scrq->vhost = vhost;5793579357945794 LEAVE;57955795 return 0;···57985800 rc = plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address, scrq->cookie);57995801 } while 
(rtas_busy_delay(rc));58005802reg_failed:58015801- ibmvfc_free_queue(vhost, scrq);58025803 LEAVE;58035804 return rc;58045805}···58235826 if (rc)58245827 dev_err(dev, "Failed to free sub-crq[%d]: rc=%ld\n", index, rc);5825582858265826- ibmvfc_free_queue(vhost, scrq);58295829+ /* Clean out the queue */58305830+ memset(scrq->msgs.crq, 0, PAGE_SIZE);58315831+ scrq->cur = 0;58325832+58335833+ LEAVE;58345834+}58355835+58365836+static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost)58375837+{58385838+ int i, j;58395839+58405840+ ENTER;58415841+ if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs)58425842+ return;58435843+58445844+ for (i = 0; i < nr_scsi_hw_queues; i++) {58455845+ if (ibmvfc_register_scsi_channel(vhost, i)) {58465846+ for (j = i; j > 0; j--)58475847+ ibmvfc_deregister_scsi_channel(vhost, j - 1);58485848+ vhost->do_enquiry = 0;58495849+ return;58505850+ }58515851+ }58525852+58535853+ LEAVE;58545854+}58555855+58565856+static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost)58575857+{58585858+ int i;58595859+58605860+ ENTER;58615861+ if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs)58625862+ return;58635863+58645864+ for (i = 0; i < nr_scsi_hw_queues; i++)58655865+ ibmvfc_deregister_scsi_channel(vhost, i);58665866+58275867 LEAVE;58285868}5829586958305870static void ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost)58315871{58725872+ struct ibmvfc_queue *scrq;58325873 int i, j;5833587458345875 ENTER;···58825847 }5883584858845849 for (i = 0; i < nr_scsi_hw_queues; i++) {58855885- if (ibmvfc_register_scsi_channel(vhost, i)) {58865886- for (j = i; j > 0; j--)58875887- ibmvfc_deregister_scsi_channel(vhost, j - 1);58505850+ scrq = &vhost->scsi_scrqs.scrqs[i];58515851+ if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT)) {58525852+ for (j = i; j > 0; j--) {58535853+ scrq = &vhost->scsi_scrqs.scrqs[j - 1];58545854+ ibmvfc_free_queue(vhost, scrq);58555855+ }58885856 kfree(vhost->scsi_scrqs.scrqs);58895857 vhost->scsi_scrqs.scrqs = NULL;58905858 
vhost->scsi_scrqs.active_queues = 0;58915859 vhost->do_enquiry = 0;58925892- break;58605860+ vhost->mq_enabled = 0;58615861+ return;58935862 }58945863 }58645864+58655865+ ibmvfc_reg_sub_crqs(vhost);5895586658965867 LEAVE;58975868}5898586958995870static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost)59005871{58725872+ struct ibmvfc_queue *scrq;59015873 int i;5902587459035875 ENTER;59045876 if (!vhost->scsi_scrqs.scrqs)59055877 return;5906587859075907- for (i = 0; i < nr_scsi_hw_queues; i++)59085908- ibmvfc_deregister_scsi_channel(vhost, i);58795879+ ibmvfc_dereg_sub_crqs(vhost);58805880+58815881+ for (i = 0; i < nr_scsi_hw_queues; i++) {58825882+ scrq = &vhost->scsi_scrqs.scrqs[i];58835883+ ibmvfc_free_queue(vhost, scrq);58845884+ }5909588559105886 kfree(vhost->scsi_scrqs.scrqs);59115887 vhost->scsi_scrqs.scrqs = NULL;
+1-1
drivers/scsi/ibmvscsi/ibmvfc.h
···789789 spinlock_t _lock;790790 spinlock_t *q_lock;791791792792+ struct ibmvfc_host *vhost;792793 struct ibmvfc_event_pool evt_pool;793794 struct list_head sent;794795 struct list_head free;···798797 union ibmvfc_iu cancel_rsp;799798800799 /* Sub-CRQ fields */801801- struct ibmvfc_host *vhost;802800 unsigned long cookie;803801 unsigned long vios_cookie;804802 unsigned long hw_irq;
+20-2
drivers/scsi/scsi_debug.c
···28262826 }28272827}2828282828292829+static inline void zbc_set_zone_full(struct sdebug_dev_info *devip,28302830+ struct sdeb_zone_state *zsp)28312831+{28322832+ switch (zsp->z_cond) {28332833+ case ZC2_IMPLICIT_OPEN:28342834+ devip->nr_imp_open--;28352835+ break;28362836+ case ZC3_EXPLICIT_OPEN:28372837+ devip->nr_exp_open--;28382838+ break;28392839+ default:28402840+ WARN_ONCE(true, "Invalid zone %llu condition %x\n",28412841+ zsp->z_start, zsp->z_cond);28422842+ break;28432843+ }28442844+ zsp->z_cond = ZC5_FULL;28452845+}28462846+28292847static void zbc_inc_wp(struct sdebug_dev_info *devip,28302848 unsigned long long lba, unsigned int num)28312849{···28562838 if (zsp->z_type == ZBC_ZTYPE_SWR) {28572839 zsp->z_wp += num;28582840 if (zsp->z_wp >= zend)28592859- zsp->z_cond = ZC5_FULL;28412841+ zbc_set_zone_full(devip, zsp);28602842 return;28612843 }28622844···28752857 n = num;28762858 }28772859 if (zsp->z_wp >= zend)28782878- zsp->z_cond = ZC5_FULL;28602860+ zbc_set_zone_full(devip, zsp);2879286128802862 num -= n;28812863 lba += n;
+6-1
drivers/scsi/scsi_transport_iscsi.c
···212212 return NULL;213213214214 mutex_lock(&iscsi_ep_idr_mutex);215215- id = idr_alloc(&iscsi_ep_idr, ep, 0, -1, GFP_NOIO);215215+216216+ /*217217+ * First endpoint id should be 1 to comply with user space218218+ * applications (iscsid).219219+ */220220+ id = idr_alloc(&iscsi_ep_idr, ep, 1, -1, GFP_NOIO);216221 if (id < 0) {217222 mutex_unlock(&iscsi_ep_idr_mutex);218223 printk(KERN_ERR "Could not allocate endpoint ID. Error %d.\n",
+22-5
drivers/scsi/storvsc_drv.c
···18441844 .cmd_per_lun = 2048,18451845 .this_id = -1,18461846 /* Ensure there are no gaps in presented sgls */18471847- .virt_boundary_mask = PAGE_SIZE-1,18471847+ .virt_boundary_mask = HV_HYP_PAGE_SIZE - 1,18481848 .no_write_same = 1,18491849 .track_queue_depth = 1,18501850 .change_queue_depth = storvsc_change_queue_depth,···18951895 int target = 0;18961896 struct storvsc_device *stor_device;18971897 int max_sub_channels = 0;18981898+ u32 max_xfer_bytes;1898189918991900 /*19001901 * We support sub-channels for storage on SCSI and FC controllers.···19691968 }19701969 /* max cmd length */19711970 host->max_cmd_len = STORVSC_MAX_CMD_LEN;19721972-19731971 /*19741974- * set the table size based on the info we got19751975- * from the host.19721972+ * Any reasonable Hyper-V configuration should provide19731973+ * max_transfer_bytes value aligning to HV_HYP_PAGE_SIZE,19741974+ * protecting it from any weird value.19761975 */19771977- host->sg_tablesize = (stor_device->max_transfer_bytes >> PAGE_SHIFT);19761976+ max_xfer_bytes = round_down(stor_device->max_transfer_bytes, HV_HYP_PAGE_SIZE);19771977+ /* max_hw_sectors_kb */19781978+ host->max_sectors = max_xfer_bytes >> 9;19791979+ /*19801980+ * There are 2 requirements for Hyper-V storvsc sgl segments,19811981+ * based on which the below calculation for max segments is19821982+ * done:19831983+ *19841984+ * 1. Except for the first and last sgl segment, all sgl segments19851985+ * should be align to HV_HYP_PAGE_SIZE, that also means the19861986+ * maximum number of segments in a sgl can be calculated by19871987+ * dividing the total max transfer length by HV_HYP_PAGE_SIZE.19881988+ *19891989+ * 2. 
Except for the first and last, each entry in the SGL must19901990+ * have an offset that is a multiple of HV_HYP_PAGE_SIZE.19911991+ */19921992+ host->sg_tablesize = (max_xfer_bytes >> HV_HYP_PAGE_SHIFT) + 1;19781993 /*19791994 * For non-IDE disks, the host supports multiple channels.19801995 * Set the number of HW queues we are supporting.
+1
drivers/soc/bcm/brcmstb/pm/pm-arm.c
···783783 }784784785785 ret = brcmstb_init_sram(dn);786786+ of_node_put(dn);786787 if (ret) {787788 pr_err("error setting up SRAM for PM\n");788789 return ret;
···6969#define CDNS_SPI_BAUD_DIV_SHIFT 3 /* Baud rate divisor shift in CR */7070#define CDNS_SPI_SS_SHIFT 10 /* Slave Select field shift in CR */7171#define CDNS_SPI_SS0 0x1 /* Slave Select zero */7272+#define CDNS_SPI_NOSS 0x3C /* No Slave select */72737374/*7475 * SPI Interrupt Registers bit Masks···9392#define CDNS_SPI_ER_ENABLE 0x00000001 /* SPI Enable Bit Mask */9493#define CDNS_SPI_ER_DISABLE 0x0 /* SPI Disable Bit Mask */95949696-/* SPI FIFO depth in bytes */9797-#define CDNS_SPI_FIFO_DEPTH 1289898-9995/* Default number of chip select lines */10096#define CDNS_SPI_DEFAULT_NUM_CS 410197···108110 * @rx_bytes: Number of bytes requested109111 * @dev_busy: Device busy flag110112 * @is_decoded_cs: Flag for decoder property set or not113113+ * @tx_fifo_depth: Depth of the TX FIFO111114 */112115struct cdns_spi {113116 void __iomem *regs;···122123 int rx_bytes;123124 u8 dev_busy;124125 u32 is_decoded_cs;126126+ unsigned int tx_fifo_depth;125127};126128127129/* Macros for the SPI controller read/write */···304304{305305 unsigned long trans_cnt = 0;306306307307- while ((trans_cnt < CDNS_SPI_FIFO_DEPTH) &&307307+ while ((trans_cnt < xspi->tx_fifo_depth) &&308308 (xspi->tx_bytes > 0)) {309309310310 /* When xspi in busy condition, bytes may send failed,···450450 * @master: Pointer to the spi_master structure which provides451451 * information about the controller.452452 *453453- * This function disables the SPI master controller.453453+ * This function disables the SPI master controller when no slave selected.454454 *455455 * Return: 0 always456456 */457457static int cdns_unprepare_transfer_hardware(struct spi_master *master)458458{459459 struct cdns_spi *xspi = spi_master_get_devdata(master);460460+ u32 ctrl_reg;460461461461- cdns_spi_write(xspi, CDNS_SPI_ER, CDNS_SPI_ER_DISABLE);462462+ /* Disable the SPI if slave is deselected */463463+ ctrl_reg = cdns_spi_read(xspi, CDNS_SPI_CR);464464+ ctrl_reg = (ctrl_reg & CDNS_SPI_CR_SSCTRL) >> CDNS_SPI_SS_SHIFT;465465+ if 
(ctrl_reg == CDNS_SPI_NOSS)466466+ cdns_spi_write(xspi, CDNS_SPI_ER, CDNS_SPI_ER_DISABLE);462467463468 return 0;469469+}470470+471471+/**472472+ * cdns_spi_detect_fifo_depth - Detect the FIFO depth of the hardware473473+ * @xspi: Pointer to the cdns_spi structure474474+ *475475+ * The depth of the TX FIFO is a synthesis configuration parameter of the SPI476476+ * IP. The FIFO threshold register is sized so that its maximum value can be the477477+ * FIFO size - 1. This is used to detect the size of the FIFO.478478+ */479479+static void cdns_spi_detect_fifo_depth(struct cdns_spi *xspi)480480+{481481+ /* The MSBs will get truncated giving us the size of the FIFO */482482+ cdns_spi_write(xspi, CDNS_SPI_THLD, 0xffff);483483+ xspi->tx_fifo_depth = cdns_spi_read(xspi, CDNS_SPI_THLD) + 1;484484+485485+ /* Reset to default */486486+ cdns_spi_write(xspi, CDNS_SPI_THLD, 0x1);464487}465488466489/**···557534 &xspi->is_decoded_cs);558535 if (ret < 0)559536 xspi->is_decoded_cs = 0;537537+538538+ cdns_spi_detect_fifo_depth(xspi);560539561540 /* SPI controller initializations */562541 cdns_spi_init_hw(xspi);
+1-1
drivers/spi/spi-mem.c
···808808 op->data.dir != SPI_MEM_DATA_IN)809809 return -EINVAL;810810811811- if (ctlr->mem_ops && ctlr->mem_ops->poll_status) {811811+ if (ctlr->mem_ops && ctlr->mem_ops->poll_status && !mem->spi->cs_gpiod) {812812 ret = spi_mem_access_start(mem);813813 if (ret)814814 return ret;
+7-4
drivers/spi/spi-rockchip.c
···381381 rs->tx_left = rs->tx ? xfer->len / rs->n_bytes : 0;382382 rs->rx_left = xfer->len / rs->n_bytes;383383384384- if (rs->cs_inactive)385385- writel_relaxed(INT_RF_FULL | INT_CS_INACTIVE, rs->regs + ROCKCHIP_SPI_IMR);386386- else387387- writel_relaxed(INT_RF_FULL, rs->regs + ROCKCHIP_SPI_IMR);384384+ writel_relaxed(0xffffffff, rs->regs + ROCKCHIP_SPI_ICR);385385+388386 spi_enable_chip(rs, true);389387390388 if (rs->tx_left)391389 rockchip_spi_pio_writer(rs);390390+391391+ if (rs->cs_inactive)392392+ writel_relaxed(INT_RF_FULL | INT_CS_INACTIVE, rs->regs + ROCKCHIP_SPI_IMR);393393+ else394394+ writel_relaxed(INT_RF_FULL, rs->regs + ROCKCHIP_SPI_IMR);392395393396 /* 1 means the transfer is in progress */394397 return 1;
-2
drivers/tty/sysrq.c
···581581582582 rcu_sysrq_start();583583 rcu_read_lock();584584- printk_prefer_direct_enter();585584 /*586585 * Raise the apparent loglevel to maximum so that the sysrq header587586 * is shown to provide the user with positive feedback. We do not···622623 pr_cont("\n");623624 console_loglevel = orig_log_level;624625 }625625- printk_prefer_direct_exit();626626 rcu_read_unlock();627627 rcu_sysrq_end();628628
+48-28
drivers/ufs/core/ufshcd.c
···748748}749749750750/**751751- * ufshcd_utrl_clear - Clear a bit in UTRLCLR register751751+ * ufshcd_utrl_clear() - Clear requests from the controller request list.752752 * @hba: per adapter instance753753- * @pos: position of the bit to be cleared753753+ * @mask: mask with one bit set for each request to be cleared754754 */755755-static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 pos)755755+static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 mask)756756{757757 if (hba->quirks & UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR)758758- ufshcd_writel(hba, (1 << pos), REG_UTP_TRANSFER_REQ_LIST_CLEAR);759759- else760760- ufshcd_writel(hba, ~(1 << pos),761761- REG_UTP_TRANSFER_REQ_LIST_CLEAR);758758+ mask = ~mask;759759+ /*760760+ * From the UFSHCI specification: "UTP Transfer Request List CLear761761+ * Register (UTRLCLR): This field is bit significant. Each bit762762+ * corresponds to a slot in the UTP Transfer Request List, where bit 0763763+ * corresponds to request slot 0. A bit in this field is set to ‘0’764764+ * by host software to indicate to the host controller that a transfer765765+ * request slot is cleared. The host controller766766+ * shall free up any resources associated to the request slot767767+ * immediately, and shall set the associated bit in UTRLDBR to ‘0’. The768768+ * host software indicates no change to request slots by setting the769769+ * associated bits in this field to ‘1’. Bits in this field shall only770770+ * be set ‘1’ or ‘0’ by host software when UTRLRSR is set to ‘1’."771771+ */772772+ ufshcd_writel(hba, ~mask, REG_UTP_TRANSFER_REQ_LIST_CLEAR);762773}763774764775/**···28742863 return ufshcd_compose_devman_upiu(hba, lrbp);28752864}2876286528772877-static int28782878-ufshcd_clear_cmd(struct ufs_hba *hba, int tag)28662866+/*28672867+ * Clear all the requests from the controller for which a bit has been set in28682868+ * @mask and wait until the controller confirms that these requests have been28692869+ * cleared.28702870+ */28712871+static int ufshcd_clear_cmds(struct ufs_hba *hba, u32 mask)28792872{28802880- int err = 0;28812873 unsigned long flags;28822882- u32 mask = 1 << tag;2883287428842875 /* clear outstanding transaction before retry */28852876 spin_lock_irqsave(hba->host->host_lock, flags);28862886- ufshcd_utrl_clear(hba, tag);28772877+ ufshcd_utrl_clear(hba, mask);28872878 spin_unlock_irqrestore(hba->host->host_lock, flags);2888287928892880 /*28902881 * wait for h/w to clear corresponding bit in door-bell.28912882 * max. wait is 1 sec.28922883 */28932893- err = ufshcd_wait_for_register(hba,28942894- REG_UTP_TRANSFER_REQ_DOOR_BELL,28952895- mask, ~mask, 1000, 1000);28962896-28972897- return err;28842884+ return ufshcd_wait_for_register(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL,28852885+ mask, ~mask, 1000, 1000);28982886}2899288729002888static int···29732963 err = -ETIMEDOUT;29742964 dev_dbg(hba->dev, "%s: dev_cmd request timedout, tag %d\n",29752965 __func__, lrbp->task_tag);29762976- if (!ufshcd_clear_cmd(hba, lrbp->task_tag))29662966+ if (!ufshcd_clear_cmds(hba, 1U << lrbp->task_tag))29772967 /* successfully cleared the command, retry if needed */29782968 err = -EAGAIN;29792969 /*···69686958}6969695969706960/**69716971- * ufshcd_eh_device_reset_handler - device reset handler registered to69726972- * scsi layer.69616961+ * ufshcd_eh_device_reset_handler() - Reset a single logical unit.69736962 * @cmd: SCSI command pointer69746963 *69756964 * Returns SUCCESS/FAILED69766965 */69776966static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd)69786967{69686968+ unsigned long flags, pending_reqs = 0, not_cleared = 0;69796969 struct Scsi_Host *host;69806970 struct ufs_hba *hba;69816971 u32 pos;···69946984 }6995698569966986 /* clear the commands that were pending for corresponding LUN */69976997- for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) {69986998- if (hba->lrb[pos].lun == lun) {69996999- err = ufshcd_clear_cmd(hba, pos);70007000- if (err)70017001- break;70027002- __ufshcd_transfer_req_compl(hba, 1U << pos);70037003- }69876987+ spin_lock_irqsave(&hba->outstanding_lock, flags);69886988+ for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs)69896989+ if (hba->lrb[pos].lun == lun)69906990+ __set_bit(pos, &pending_reqs);69916991+ hba->outstanding_reqs &= ~pending_reqs;69926992+ spin_unlock_irqrestore(&hba->outstanding_lock, flags);69936993+69946994+ if (ufshcd_clear_cmds(hba, pending_reqs) < 0) {69956995+ spin_lock_irqsave(&hba->outstanding_lock, flags);69966996+ not_cleared = pending_reqs &69976997+ ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL);69986998+ hba->outstanding_reqs |= not_cleared;69996999+ spin_unlock_irqrestore(&hba->outstanding_lock, flags);70007000+70017001+ dev_err(hba->dev, "%s: failed to clear requests %#lx\n",70027002+ __func__, not_cleared);70047003 }70047004+ __ufshcd_transfer_req_compl(hba, pending_reqs & ~not_cleared);7005700570067006out:70077007 hba->req_abort_count = 0;···71087088 goto out;71097089 }7110709071117111- err = ufshcd_clear_cmd(hba, tag);70917091+ err = ufshcd_clear_cmds(hba, 1U << tag);71127092 if (err)71137093 dev_err(hba->dev, "%s: Failed clearing cmd at tag %d, err %d\n",71147094 __func__, tag, err);
+3
drivers/usb/chipidea/udc.c
···10481048 struct ci_hdrc *ci = req->context;10491049 unsigned long flags;1050105010511051+ if (req->status < 0)10521052+ return;10531053+10511054 if (ci->setaddr) {10521055 hw_usb_set_address(ci, ci->address);10531056 ci->setaddr = false;
···436436 break;437437 case 0x200:438438 switch (bcdDevice) {439439- case 0x100:439439+ case 0x100: /* GC */440440 case 0x105:441441+ return TYPE_HXN;442442+ case 0x300: /* GT / TA */443443+ if (pl2303_supports_hx_status(serial))444444+ return TYPE_TA;445445+ fallthrough;441446 case 0x305:447447+ case 0x400: /* GL */442448 case 0x405:449449+ return TYPE_HXN;450450+ case 0x500: /* GE / TB */451451+ if (pl2303_supports_hx_status(serial))452452+ return TYPE_TB;453453+ fallthrough;454454+ case 0x505:455455+ case 0x600: /* GS */443456 case 0x605:444444- /*445445- * Assume it's an HXN-type if the device doesn't446446- * support the old read request value.447447- */448448- if (!pl2303_supports_hx_status(serial))449449- return TYPE_HXN;450450- break;451451- case 0x300:452452- return TYPE_TA;453453- case 0x500:454454- return TYPE_TB;457457+ case 0x700: /* GR */458458+ case 0x705:459459+ return TYPE_HXN;455460 }456461 break;457462 }
-1
drivers/usb/typec/tcpm/Kconfig
···5656 tristate "Intel WhiskeyCove PMIC USB Type-C PHY driver"5757 depends on ACPI5858 depends on MFD_INTEL_PMC_BXT5959- depends on INTEL_SOC_PMIC6059 depends on BXT_WC_PMIC_OPREGION6160 help6261 This driver adds support for USB Type-C on Intel Broxton platforms
+2
drivers/video/console/sticore.c
···11481148 return ret;11491149}1150115011511151+#if defined(CONFIG_FB_STI)11511152/* check if given fb_info is the primary device */11521153int fb_is_primary_device(struct fb_info *info)11531154{···11641163 return (sti->info == info);11651164}11661165EXPORT_SYMBOL(fb_is_primary_device);11661166+#endif1167116711681168MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer");11691169MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines");
+2-4
drivers/video/fbdev/au1100fb.c
···560560 /* Blank the LCD */561561 au1100fb_fb_blank(VESA_POWERDOWN, &fbdev->info);562562563563- if (fbdev->lcdclk)564564- clk_disable(fbdev->lcdclk);563563+ clk_disable(fbdev->lcdclk);565564566565 memcpy(&fbregs, fbdev->regs, sizeof(struct au1100fb_regs));567566···576577577578 memcpy(fbdev->regs, &fbregs, sizeof(struct au1100fb_regs));578579579579- if (fbdev->lcdclk)580580- clk_enable(fbdev->lcdclk);580580+ clk_enable(fbdev->lcdclk);581581582582 /* Unblank the LCD */583583 au1100fb_fb_blank(VESA_NO_BLANKING, &fbdev->info);
···472472 struct fb_info *info;473473 struct intelfb_info *dinfo;474474 int i, err, dvo;475475- int aperture_size, stolen_size;475475+ int aperture_size, stolen_size = 0;476476 struct agp_kern_info gtt_info;477477 int agp_memtype;478478 const char *s;···571571 return -ENODEV;572572 }573573574574- if (intelfbhw_get_memory(pdev, &aperture_size,&stolen_size)) {574574+ if (intelfbhw_get_memory(pdev, &aperture_size, &stolen_size)) {575575 cleanup(dinfo);576576 return -ENODEV;577577 }
+5-7
drivers/video/fbdev/intelfb/intelfbhw.c
···201201 case PCI_DEVICE_ID_INTEL_945GME:202202 case PCI_DEVICE_ID_INTEL_965G:203203 case PCI_DEVICE_ID_INTEL_965GM:204204- /* 915, 945 and 965 chipsets support a 256MB aperture.205205- Aperture size is determined by inspected the206206- base address of the aperture. */207207- if (pci_resource_start(pdev, 2) & 0x08000000)208208- *aperture_size = MB(128);209209- else210210- *aperture_size = MB(256);204204+ /*205205+ * 915, 945 and 965 chipsets support 64MB, 128MB or 256MB206206+ * aperture. Determine size from PCI resource length.207207+ */208208+ *aperture_size = pci_resource_len(pdev, 2);211209 break;212210 default:213211 if ((tmp & INTEL_GMCH_MEM_MASK) == INTEL_GMCH_MEM_64M)
+1-1
drivers/video/fbdev/omap/sossi.c
···359359 int bus_pick_count, bus_pick_width;360360361361 /*362362- * We set explicitly the the bus_pick_count as well, although362362+ * We set explicitly the bus_pick_count as well, although363363 * with remapping/reordering disabled it will be calculated by HW364364 * as (32 / bus_pick_width).365365 */
+1-1
drivers/video/fbdev/omap2/omapfb/dss/hdmi_phy.c
···143143 /*144144 * In OMAP5+, the HFBITCLK must be divided by 2 before issuing the145145 * HDMI_PHYPWRCMD_LDOON command.146146- */146146+ */147147 if (phy_feat->bist_ctrl)148148 REG_FLD_MOD(phy->base, HDMI_TXPHY_BIST_CONTROL, 1, 11, 11);149149
+1-1
drivers/video/fbdev/pxa3xx-gcu.c
···381381 struct pxa3xx_gcu_batch *buffer;382382 struct pxa3xx_gcu_priv *priv = to_pxa3xx_gcu_priv(file);383383384384- int words = count / 4;384384+ size_t words = count / 4;385385386386 /* Does not need to be atomic. There's a lock in user space,387387 * but anyhow, this is just for statistics. */
+1-2
drivers/video/fbdev/simplefb.c
···237237 if (IS_ERR(clock)) {238238 if (PTR_ERR(clock) == -EPROBE_DEFER) {239239 while (--i >= 0) {240240- if (par->clks[i])241241- clk_put(par->clks[i]);240240+ clk_put(par->clks[i]);242241 }243242 kfree(par->clks);244243 return -EPROBE_DEFER;
+8-7
drivers/video/fbdev/skeletonfb.c
···96969797 /*9898 * Modern graphical hardware not only supports pipelines but some 9999- * also support multiple monitors where each display can have its 9999+ * also support multiple monitors where each display can have100100 * its own unique data. In this case each display could be 101101 * represented by a separate framebuffer device thus a separate 102102 * struct fb_info. Now the struct xxx_par represents the graphics···838838 *839839 * See Documentation/driver-api/pm/devices.rst for more information840840 */841841-static int xxxfb_suspend(struct pci_dev *dev, pm_message_t msg)841841+static int xxxfb_suspend(struct device *dev)842842{843843- struct fb_info *info = pci_get_drvdata(dev);843843+ struct fb_info *info = dev_get_drvdata(dev);844844 struct xxxfb_par *par = info->par;845845846846 /* suspend here */···853853 *854854 * See Documentation/driver-api/pm/devices.rst for more information855855 */856856-static int xxxfb_resume(struct pci_dev *dev)856856+static int xxxfb_resume(struct device *dev)857857{858858- struct fb_info *info = pci_get_drvdata(dev);858858+ struct fb_info *info = dev_get_drvdata(dev);859859 struct xxxfb_par *par = info->par;860860861861 /* resume here */···873873 { 0, }874874};875875876876+static SIMPLE_DEV_PM_OPS(xxxfb_pm_ops, xxxfb_suspend, xxxfb_resume);877877+876878/* For PCI drivers */877879static struct pci_driver xxxfb_driver = {878880 .name = "xxxfb",879881 .id_table = xxxfb_id_table,880882 .probe = xxxfb_probe,881883 .remove = xxxfb_remove,882882- .suspend = xxxfb_suspend, /* optional but recommended */883883- .resume = xxxfb_resume, /* optional but recommended */884884+ .driver.pm = xxxfb_pm_ops, /* optional but recommended */884885};885886886887MODULE_DEVICE_TABLE(pci, xxxfb_id_table);
···1616#include <linux/mmu_notifier.h>1717#include <linux/types.h>1818#include <xen/interface/event_channel.h>1919+#include <xen/grant_table.h>19202021struct gntdev_dmabuf_priv;2122···5756 struct gnttab_unmap_grant_ref *unmap_ops;5857 struct gnttab_map_grant_ref *kmap_ops;5958 struct gnttab_unmap_grant_ref *kunmap_ops;5959+ bool *being_removed;6060 struct page **pages;6161 unsigned long pages_vm_start;6262···7573 /* Needed to avoid allocation in gnttab_dma_free_pages(). */7674 xen_pfn_t *frames;7775#endif7676+7777+ /* Number of live grants */7878+ atomic_t live_grants;7979+ /* Needed to avoid allocation in __unmap_grant_pages */8080+ struct gntab_unmap_queue_data unmap_data;7881};79828083struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
+106-51
drivers/xen/gntdev.c
···3535#include <linux/slab.h>3636#include <linux/highmem.h>3737#include <linux/refcount.h>3838+#include <linux/workqueue.h>38393940#include <xen/xen.h>4041#include <xen/grant_table.h>···6160MODULE_PARM_DESC(limit,6261 "Maximum number of grants that may be mapped by one mapping request");63626363+/* True in PV mode, false otherwise */6464static int use_ptemod;65656666-static int unmap_grant_pages(struct gntdev_grant_map *map,6767- int offset, int pages);6666+static void unmap_grant_pages(struct gntdev_grant_map *map,6767+ int offset, int pages);68686969static struct miscdevice gntdev_miscdev;7070···122120 kvfree(map->unmap_ops);123121 kvfree(map->kmap_ops);124122 kvfree(map->kunmap_ops);123123+ kvfree(map->being_removed);125124 kfree(map);126125}127126···143140 add->unmap_ops = kvmalloc_array(count, sizeof(add->unmap_ops[0]),144141 GFP_KERNEL);145142 add->pages = kvcalloc(count, sizeof(add->pages[0]), GFP_KERNEL);143143+ add->being_removed =144144+ kvcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL);146145 if (NULL == add->grants ||147146 NULL == add->map_ops ||148147 NULL == add->unmap_ops ||149149- NULL == add->pages)148148+ NULL == add->pages ||149149+ NULL == add->being_removed)150150 goto err;151151 if (use_ptemod) {152152 add->kmap_ops = kvmalloc_array(count, sizeof(add->kmap_ops[0]),
···256250 if (!refcount_dec_and_test(&map->users))257251 return;258252259259- if (map->pages && !use_ptemod)253253+ if (map->pages && !use_ptemod) {254254+ /*255255+ * Increment the reference count. This ensures that the256256+ * subsequent call to unmap_grant_pages() will not wind up257257+ * re-entering itself. It *can* wind up calling258258+ * gntdev_put_map() recursively, but such calls will be with a259259+ * reference count greater than 1, so they will return before260260+ * this code is reached. The recursion depth is thus limited to261261+ * 1. Do NOT use refcount_inc() here, as it will detect that262262+ * the reference count is zero and WARN().263263+ */264264+ refcount_set(&map->users, 1);265265+266266+ /*267267+ * Unmap the grants. This may or may not be asynchronous, so it268268+ * is possible that the reference count is 1 on return, but it269269+ * could also be greater than 1.270270+ */260271 unmap_grant_pages(map, 0, map->count);272272+273273+ /* Check if the memory now needs to be freed */274274+ if (!refcount_dec_and_test(&map->users))275275+ return;276276+277277+ /*278278+ * All pages have been returned to the hypervisor, so free the279279+ * map.280280+ */281281+ }261282262283 if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {263284 notify_remote_via_evtchn(map->notify.event);···316283317284int gntdev_map_grant_pages(struct gntdev_grant_map *map)318285{286286+ size_t alloced = 0;319287 int i, err = 0;320288321289 if (!use_ptemod) {···365331 map->count);366332367333 for (i = 0; i < map->count; i++) {368368- if (map->map_ops[i].status == GNTST_okay)334334+ if (map->map_ops[i].status == GNTST_okay) {369335 map->unmap_ops[i].handle = map->map_ops[i].handle;370370- else if (!err)336336+ if (!use_ptemod)337337+ alloced++;338338+ } else if (!err)371339 err = -EINVAL;372340373341 if (map->flags & GNTMAP_device_map)374342 map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;375343376344 if (use_ptemod) {377377- if (map->kmap_ops[i].status == GNTST_okay)345345+ if (map->kmap_ops[i].status == GNTST_okay) {346346+ if (map->map_ops[i].status == GNTST_okay)347347+ alloced++;378348 map->kunmap_ops[i].handle = map->kmap_ops[i].handle;379379- else if (!err)349349+ } else if (!err)380350 err = -EINVAL;381351 }382352 }353353+ atomic_add(alloced, &map->live_grants);383354 return err;384355}385356386386-static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,387387- int pages)357357+static void __unmap_grant_pages_done(int result,358358+ struct gntab_unmap_queue_data *data)388359{389389- int i, err = 0;390390- struct gntab_unmap_queue_data unmap_data;360360+ unsigned int i;361361+ struct gntdev_grant_map *map = data->data;362362+ unsigned int offset = data->unmap_ops - map->unmap_ops;391363392392- if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {393393- int pgno = (map->notify.addr >> PAGE_SHIFT);394394- if (pgno >= offset && pgno < offset + pages) {395395- /* No need for kmap, pages are in lowmem */396396- uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));397397- tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;398398- map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;399399- }400400- }401401-402402- unmap_data.unmap_ops = map->unmap_ops + offset;403403- unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;404404- unmap_data.pages = map->pages + offset;405405- unmap_data.count = pages;406406-407407- err = gnttab_unmap_refs_sync(&unmap_data);408408- if (err)409409- return err;410410-411411- for (i = 0; i < pages; i++) {412412- if (map->unmap_ops[offset+i].status)413413- err = -EINVAL;364364+ for (i = 0; i < data->count; i++) {365365+ WARN_ON(map->unmap_ops[offset+i].status);414366 pr_debug("unmap handle=%d st=%d\n",415367 map->unmap_ops[offset+i].handle,416368 map->unmap_ops[offset+i].status);417369 map->unmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;418370 if (use_ptemod) {419419- if (map->kunmap_ops[offset+i].status)420420- err = -EINVAL;371371+ WARN_ON(map->kunmap_ops[offset+i].status);421372 pr_debug("kunmap handle=%u st=%d\n",422373 map->kunmap_ops[offset+i].handle,423374 map->kunmap_ops[offset+i].status);424375 map->kunmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;425376 }426377 }427427- return err;378378+ /*379379+ * Decrease the live-grant counter. This must happen after the loop to380380+ * prevent premature reuse of the grants by gnttab_mmap().381381+ */382382+ atomic_sub(data->count, &map->live_grants);383383+384384+ /* Release reference taken by __unmap_grant_pages */385385+ gntdev_put_map(NULL, map);428386}429387430430-static int unmap_grant_pages(struct gntdev_grant_map *map, int offset,431431- int pages)388388+static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset,389389+ int pages)432390{433433- int range, err = 0;391391+ if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {392392+ int pgno = (map->notify.addr >> PAGE_SHIFT);393393+394394+ if (pgno >= offset && pgno < offset + pages) {395395+ /* No need for kmap, pages are in lowmem */396396+ uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));397397+398398+ tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;399399+ map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;400400+ }401401+ }402402+403403+ map->unmap_data.unmap_ops = map->unmap_ops + offset;404404+ map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;405405+ map->unmap_data.pages = map->pages + offset;406406+ map->unmap_data.count = pages;407407+ map->unmap_data.done = __unmap_grant_pages_done;408408+ map->unmap_data.data = map;409409+ refcount_inc(&map->users); /* to keep map alive during async call below */410410+411411+ gnttab_unmap_refs_async(&map->unmap_data);412412+}413413+414414+static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,415415+ int pages)416416+{417417+ int range;418418+419419+ if (atomic_read(&map->live_grants) == 0)420420+ return; /* Nothing to do */434421435422 pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);436423437424 /* It is possible the requested range will have a "hole" where we438425 * already unmapped some of the grants. Only unmap valid ranges.439426 */440440- while (pages && !err) {441441- while (pages &&442442- map->unmap_ops[offset].handle == INVALID_GRANT_HANDLE) {427427+ while (pages) {428428+ while (pages && map->being_removed[offset]) {443429 offset++;444430 pages--;445431 }446432 range = 0;447433 while (range < pages) {448448- if (map->unmap_ops[offset + range].handle ==449449- INVALID_GRANT_HANDLE)434434+ if (map->being_removed[offset + range])450435 break;436436+ map->being_removed[offset + range] = true;451437 range++;452438 }453453- err = __unmap_grant_pages(map, offset, range);439439+ if (range)440440+ __unmap_grant_pages(map, offset, range);454441 offset += range;455442 pages -= range;456443 }457457-458458- return err;459444}460445461446/* ------------------------------------------------------------------ */···526473 struct gntdev_grant_map *map =527474 container_of(mn, struct gntdev_grant_map, notifier);528475 unsigned long mstart, mend;529529- int err;530476531477 if (!mmu_notifier_range_blockable(range))532478 return false;···546494 map->index, map->count,547495 map->vma->vm_start, map->vma->vm_end,548496 range->start, range->end, mstart, mend);549549- err = unmap_grant_pages(map,497497+ unmap_grant_pages(map,550498 (mstart - map->vma->vm_start) >> PAGE_SHIFT,551499 (mend - mstart) >> PAGE_SHIFT);552552- WARN_ON(err);553500554501 return true;555502}···1036985 goto unlock_out;1037986 if (use_ptemod && map->vma)1038987 goto unlock_out;988988+ if (atomic_read(&map->live_grants)) {989989+ err = -EAGAIN;990990+ goto unlock_out;991991+ }1039992 refcount_inc(&map->users);10409931041994 vma->vm_ops = &gntdev_vmops;
+9-13
fs/9p/fid.c
···152152 const unsigned char **wnames, *uname;153153 int i, n, l, clone, access;154154 struct v9fs_session_info *v9ses;155155- struct p9_fid *fid, *old_fid = NULL;155155+ struct p9_fid *fid, *old_fid;156156157157 v9ses = v9fs_dentry2v9ses(dentry);158158 access = v9ses->flags & V9FS_ACCESS_MASK;···194194 if (IS_ERR(fid))195195 return fid;196196197197+ refcount_inc(&fid->count);197198 v9fs_fid_add(dentry->d_sb->s_root, fid);198199 }199200 /* If we are root ourself just return that */200200- if (dentry->d_sb->s_root == dentry) {201201- refcount_inc(&fid->count);201201+ if (dentry->d_sb->s_root == dentry)202202 return fid;203203- }204203 /*205204 * Do a multipath walk with attached root.206205 * When walking parent we need to make sure we···211212 fid = ERR_PTR(n);212213 goto err_out;213214 }215215+ old_fid = fid;214216 clone = 1;215217 i = 0;216218 while (i < n) {···221221 * walk to ensure none of the patch component change222222 */223223 fid = p9_client_walk(fid, l, &wnames[i], clone);224224+ /* non-cloning walk will return the same fid */225225+ if (fid != old_fid) {226226+ p9_client_clunk(old_fid);227227+ old_fid = fid;228228+ }224229 if (IS_ERR(fid)) {225225- if (old_fid) {226226- /*227227- * If we fail, clunk fid which are mapping228228- * to path component and not the last component229229- * of the path.230230- */231231- p9_client_clunk(old_fid);232232- }233230 kfree(wnames);234231 goto err_out;235232 }236236- old_fid = fid;237233 i += l;238234 clone = 0;239235 }
+13
fs/9p/vfs_addr.c
···5858 */5959static int v9fs_init_request(struct netfs_io_request *rreq, struct file *file)6060{6161+ struct inode *inode = file_inode(file);6262+ struct v9fs_inode *v9inode = V9FS_I(inode);6163 struct p9_fid *fid = file->private_data;6464+6565+ BUG_ON(!fid);6666+6767+ /* we might need to read from a fid that was opened write-only6868+ * for read-modify-write of page cache, use the writeback fid6969+ * for that */7070+ if (rreq->origin == NETFS_READ_FOR_WRITE &&7171+ (fid->mode & O_ACCMODE) == O_WRONLY) {7272+ fid = v9inode->writeback_fid;7373+ BUG_ON(!fid);7474+ }62756376 refcount_inc(&fid->count);6477 rreq->netfs_priv = fid;
+4-4
fs/9p/vfs_inode.c
···12511251 return ERR_PTR(-ECHILD);1252125212531253 v9ses = v9fs_dentry2v9ses(dentry);12541254- fid = v9fs_fid_lookup(dentry);12541254+ if (!v9fs_proto_dotu(v9ses))12551255+ return ERR_PTR(-EBADF);12561256+12551257 p9_debug(P9_DEBUG_VFS, "%pd\n", dentry);12581258+ fid = v9fs_fid_lookup(dentry);1256125912571260 if (IS_ERR(fid))12581261 return ERR_CAST(fid);12591259-12601260- if (!v9fs_proto_dotu(v9ses))12611261- return ERR_PTR(-EBADF);1262126212631263 st = p9_client_stat(fid);12641264 p9_client_clunk(fid);
+3
fs/9p/vfs_inode_dotl.c
···274274 if (IS_ERR(ofid)) {275275 err = PTR_ERR(ofid);276276 p9_debug(P9_DEBUG_VFS, "p9_client_walk failed %d\n", err);277277+ p9_client_clunk(dfid);277278 goto out;278279 }279280···286285 if (err) {287286 p9_debug(P9_DEBUG_VFS, "Failed to get acl values in creat %d\n",288287 err);288288+ p9_client_clunk(dfid);289289 goto error;290290 }291291 err = p9_client_create_dotl(ofid, name, v9fs_open_to_dotl_flags(flags),···294292 if (err < 0) {295293 p9_debug(P9_DEBUG_VFS, "p9_client_open_dotl failed in creat %d\n",296294 err);295295+ p9_client_clunk(dfid);297296 goto error;298297 }299298 v9fs_invalidate_inode_attr(dir);
+2-1
fs/afs/inode.c
···745745746746 _enter("{ ino=%lu v=%u }", inode->i_ino, inode->i_generation);747747748748- if (!(query_flags & AT_STATX_DONT_SYNC) &&748748+ if (vnode->volume &&749749+ !(query_flags & AT_STATX_DONT_SYNC) &&749750 !test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) {750751 key = afs_request_key(vnode->volume->cell);751752 if (IS_ERR(key))
+1
fs/btrfs/block-group.h
···104104 unsigned int relocating_repair:1;105105 unsigned int chunk_item_inserted:1;106106 unsigned int zone_is_active:1;107107+ unsigned int zoned_data_reloc_ongoing:1;107108108109 int disk_cache_state;109110
+2
fs/btrfs/ctree.h
···13301330 * existing extent into a file range.13311331 */13321332 bool is_new_extent;13331333+ /* Indicate if we should update the inode's mtime and ctime. */13341334+ bool update_times;13331335 /* Meaningful only if is_new_extent is true. */13341336 int qgroup_reserved;13351337 /*
+11-2
fs/btrfs/disk-io.c
···46324632 int ret;4633463346344634 set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags);46354635+46364636+ /*46374637+ * We may have the reclaim task running and relocating a data block group,46384638+ * in which case it may create delayed iputs. So stop it before we park46394639+ * the cleaner kthread otherwise we can get new delayed iputs after46404640+ * parking the cleaner, and that can make the async reclaim task to hang46414641+ * if it's waiting for delayed iputs to complete, since the cleaner is46424642+ * parked and can not run delayed iputs - this will make us hang when46434643+ * trying to stop the async reclaim task.46444644+ */46454645+ cancel_work_sync(&fs_info->reclaim_bgs_work);46354646 /*46364647 * We don't want the cleaner to start new transactions, add more delayed46374648 * iputs, etc. while we're closing. We can't use kthread_stop() yet···46824671 cancel_work_sync(&fs_info->async_reclaim_work);46834672 cancel_work_sync(&fs_info->async_data_reclaim_work);46844673 cancel_work_sync(&fs_info->preempt_reclaim_work);46854685-46864686- cancel_work_sync(&fs_info->reclaim_bgs_work);4687467446884675 /* Cancel or finish ongoing discard work */46894676 btrfs_discard_cleanup(fs_info);
+18-2
fs/btrfs/extent-tree.c
···38323832 block_group->start == fs_info->data_reloc_bg ||38333833 fs_info->data_reloc_bg == 0);3834383438353835- if (block_group->ro) {38353835+ if (block_group->ro || block_group->zoned_data_reloc_ongoing) {38363836 ret = 1;38373837 goto out;38383838 }···38943894out:38953895 if (ret && ffe_ctl->for_treelog)38963896 fs_info->treelog_bg = 0;38973897- if (ret && ffe_ctl->for_data_reloc)38973897+ if (ret && ffe_ctl->for_data_reloc &&38983898+ fs_info->data_reloc_bg == block_group->start) {38993899+ /*39003900+ * Do not allow further allocations from this block group.39013901+ * Compared to increasing the ->ro, setting the39023902+ * ->zoned_data_reloc_ongoing flag still allows nocow39033903+ * writers to come in. See btrfs_inc_nocow_writers().39043904+ *39053905+ * We need to disable an allocation to avoid an allocation of39063906+ * regular (non-relocation data) extent. With mix of relocation39073907+ * extents and regular extents, we can dispatch WRITE commands39083908+ * (for relocation extents) and ZONE APPEND commands (for39093909+ * regular extents) at the same time to the same zone, which39103910+ * easily break the write pointer.39113911+ */39123912+ block_group->zoned_data_reloc_ongoing = 1;38983913 fs_info->data_reloc_bg = 0;39143914+ }38993915 spin_unlock(&fs_info->relocation_bg_lock);39003916 spin_unlock(&fs_info->treelog_bg_lock);39013917 spin_unlock(&block_group->lock);
···23232323 */23242324 btrfs_inode_unlock(inode, BTRFS_ILOCK_MMAP);2325232523262326- if (ret != BTRFS_NO_LOG_SYNC) {23272327- if (!ret) {23282328- ret = btrfs_sync_log(trans, root, &ctx);23292329- if (!ret) {23302330- ret = btrfs_end_transaction(trans);23312331- goto out;23322332- }23332333- }23342334- if (!full_sync) {23352335- ret = btrfs_wait_ordered_range(inode, start, len);23362336- if (ret) {23372337- btrfs_end_transaction(trans);23382338- goto out;23392339- }23402340- }23412341- ret = btrfs_commit_transaction(trans);23422342- } else {23262326+ if (ret == BTRFS_NO_LOG_SYNC) {23432327 ret = btrfs_end_transaction(trans);23282328+ goto out;23442329 }23302330+23312331+ /* We successfully logged the inode, attempt to sync the log. */23322332+ if (!ret) {23332333+ ret = btrfs_sync_log(trans, root, &ctx);23342334+ if (!ret) {23352335+ ret = btrfs_end_transaction(trans);23362336+ goto out;23372337+ }23382338+ }23392339+23402340+ /*23412341+ * At this point we need to commit the transaction because we had23422342+ * btrfs_need_log_full_commit() or some other error.23432343+ *23442344+ * If we didn't do a full sync we have to stop the trans handle, wait on23452345+ * the ordered extents, start it again and commit the transaction. If23462346+ * we attempt to wait on the ordered extents here we could deadlock with23472347+ * something like fallocate() that is holding the extent lock trying to23482348+ * start a transaction while some other thread is trying to commit the23492349+ * transaction while we (fsync) are currently holding the transaction23502350+ * open.23512351+ */23522352+ if (!full_sync) {23532353+ ret = btrfs_end_transaction(trans);23542354+ if (ret)23552355+ goto out;23562356+ ret = btrfs_wait_ordered_range(inode, start, len);23572357+ if (ret)23582358+ goto out;23592359+23602360+ /*23612361+ * This is safe to use here because we're only interested in23622362+ * making sure the transaction that had the ordered extents is23632363+ * committed. 
We aren't waiting on anything past this point,23642364+ * we're purely getting the transaction and committing it.23652365+ */23662366+ trans = btrfs_attach_transaction_barrier(root);23672367+ if (IS_ERR(trans)) {23682368+ ret = PTR_ERR(trans);23692369+23702370+ /*23712371+ * We committed the transaction and there's no currently23722372+ * running transaction, this means everything we care23732373+ * about made it to disk and we are done.23742374+ */23752375+ if (ret == -ENOENT)23762376+ ret = 0;23772377+ goto out;23782378+ }23792379+ }23802380+23812381+ ret = btrfs_commit_transaction(trans);23452382out:23462383 ASSERT(list_empty(&ctx.list));23472384 err = file_check_and_advance_wb_err(file);···2756271927572720 ret = btrfs_block_rsv_migrate(&fs_info->trans_block_rsv, rsv,27582721 min_size, false);27592759- BUG_ON(ret);27222722+ if (WARN_ON(ret))27232723+ goto out_trans;27602724 trans->block_rsv = rsv;2761272527622726 cur_offset = start;···28412803 extent_info->file_offset += replace_len;28422804 }2843280528062806+ /*28072807+ * We are releasing our handle on the transaction, balance the28082808+ * dirty pages of the btree inode and flush delayed items, and28092809+ * then get a new transaction handle, which may now point to a28102810+ * new transaction in case someone else may have committed the28112811+ * transaction we used to replace/drop file extent items. So28122812+ * bump the inode's iversion and update mtime and ctime except28132813+ * if we are called from a dedupe context. 
This is because a28142814+ * power failure/crash may happen after the transaction is28152815+ * committed and before we finish replacing/dropping all the28162816+ * file extent items we need.28172817+ */28182818+ inode_inc_iversion(&inode->vfs_inode);28192819+28202820+ if (!extent_info || extent_info->update_times) {28212821+ inode->vfs_inode.i_mtime = current_time(&inode->vfs_inode);28222822+ inode->vfs_inode.i_ctime = inode->vfs_inode.i_mtime;28232823+ }28242824+28442825 ret = btrfs_update_inode(trans, root, inode);28452826 if (ret)28462827 break;···2876281928772820 ret = btrfs_block_rsv_migrate(&fs_info->trans_block_rsv,28782821 rsv, min_size, false);28792879- BUG_ON(ret); /* shouldn't happen */28222822+ if (WARN_ON(ret))28232823+ break;28802824 trans->block_rsv = rsv;2881282528822826 cur_offset = drop_args.drop_end;
fs/btrfs/reflink.c
···344344 int ret;345345 const u64 len = olen_aligned;346346 u64 last_dest_end = destoff;347347+ u64 prev_extent_end = off;347348348349 ret = -ENOMEM;349350 buf = kvmalloc(fs_info->nodesize, GFP_KERNEL);···364363 key.offset = off;365364366365 while (1) {367367- u64 next_key_min_offset = key.offset + 1;368366 struct btrfs_file_extent_item *extent;369367 u64 extent_gen;370368 int type;···431431 * The first search might have left us at an extent item that432432 * ends before our target range's start, can happen if we have433433 * holes and NO_HOLES feature enabled.434434+ *435435+ * Subsequent searches may leave us on a file range we have436436+ * processed before - this happens due to a race with ordered437437+ * extent completion for a file range that is outside our source438438+ * range, but that range was part of a file extent item that439439+ * also covered a leading part of our source range.434440 */435435- if (key.offset + datal <= off) {441441+ if (key.offset + datal <= prev_extent_end) {436442 path->slots[0]++;437443 goto process_slot;438444 } else if (key.offset >= off + len) {439445 break;440446 }441441- next_key_min_offset = key.offset + datal;447447+448448+ prev_extent_end = key.offset + datal;442449 size = btrfs_item_size(leaf, slot);443450 read_extent_buffer(leaf, buf, btrfs_item_ptr_offset(leaf, slot),444451 size);···496489 clone_info.file_offset = new_key.offset;497490 clone_info.extent_buf = buf;498491 clone_info.is_new_extent = false;492492+ clone_info.update_times = !no_time_update;499493 ret = btrfs_replace_file_extents(BTRFS_I(inode), path,500494 drop_start, new_key.offset + datal - 1,501495 &clone_info, &trans);···558550 break;559551560552 btrfs_release_path(path);561561- key.offset = next_key_min_offset;553553+ key.offset = prev_extent_end;562554563555 if (fatal_signal_pending(current)) {564556 ret = -EINTR;
+40-7
fs/btrfs/super.c
···763763 compress_force = false;764764 no_compress++;765765 } else {766766+ btrfs_err(info, "unrecognized compression value %s",767767+ args[0].from);766768 ret = -EINVAL;767769 goto out;768770 }···823821 case Opt_thread_pool:824822 ret = match_int(&args[0], &intarg);825823 if (ret) {824824+ btrfs_err(info, "unrecognized thread_pool value %s",825825+ args[0].from);826826 goto out;827827 } else if (intarg == 0) {828828+ btrfs_err(info, "invalid value 0 for thread_pool");828829 ret = -EINVAL;829830 goto out;830831 }···888883 break;889884 case Opt_ratio:890885 ret = match_int(&args[0], &intarg);891891- if (ret)886886+ if (ret) {887887+ btrfs_err(info, "unrecognized metadata_ratio value %s",888888+ args[0].from);892889 goto out;890890+ }893891 info->metadata_ratio = intarg;894892 btrfs_info(info, "metadata ratio %u",895893 info->metadata_ratio);···909901 btrfs_set_and_info(info, DISCARD_ASYNC,910902 "turning on async discard");911903 } else {904904+ btrfs_err(info, "unrecognized discard mode value %s",905905+ args[0].from);912906 ret = -EINVAL;913907 goto out;914908 }···943933 btrfs_set_and_info(info, FREE_SPACE_TREE,944934 "enabling free space tree");945935 } else {936936+ btrfs_err(info, "unrecognized space_cache value %s",937937+ args[0].from);946938 ret = -EINVAL;947939 goto out;948940 }···10261014 break;10271015 case Opt_check_integrity_print_mask:10281016 ret = match_int(&args[0], &intarg);10291029- if (ret)10171017+ if (ret) {10181018+ btrfs_err(info,10191019+ "unrecognized check_integrity_print_mask value %s",10201020+ args[0].from);10301021 goto out;10221022+ }10311023 info->check_integrity_print_mask = intarg;10321024 btrfs_info(info, "check_integrity_print_mask 0x%x",10331025 info->check_integrity_print_mask);···10461030 goto out;10471031#endif10481032 case Opt_fatal_errors:10491049- if (strcmp(args[0].from, "panic") == 0)10331033+ if (strcmp(args[0].from, "panic") == 0) {10501034 btrfs_set_opt(info->mount_opt,10511035 PANIC_ON_FATAL_ERROR);10521052- else 
if (strcmp(args[0].from, "bug") == 0)10361036+ } else if (strcmp(args[0].from, "bug") == 0) {10531037 btrfs_clear_opt(info->mount_opt,10541038 PANIC_ON_FATAL_ERROR);10551055- else {10391039+ } else {10401040+ btrfs_err(info, "unrecognized fatal_errors value %s",10411041+ args[0].from);10561042 ret = -EINVAL;10571043 goto out;10581044 }···10621044 case Opt_commit_interval:10631045 intarg = 0;10641046 ret = match_int(&args[0], &intarg);10651065- if (ret)10471047+ if (ret) {10481048+ btrfs_err(info, "unrecognized commit_interval value %s",10491049+ args[0].from);10501050+ ret = -EINVAL;10661051 goto out;10521052+ }10671053 if (intarg == 0) {10681054 btrfs_info(info,10691055 "using default commit interval %us",···10811059 break;10821060 case Opt_rescue:10831061 ret = parse_rescue_options(info, args[0].from);10841084- if (ret < 0)10621062+ if (ret < 0) {10631063+ btrfs_err(info, "unrecognized rescue value %s",10641064+ args[0].from);10851065 goto out;10661066+ }10861067 break;10871068#ifdef CONFIG_BTRFS_DEBUG10881069 case Opt_fragment_all:···20101985 if (ret)20111986 goto restore;2012198719881988+ /* V1 cache is not supported for subpage mount. */19891989+ if (fs_info->sectorsize < PAGE_SIZE && btrfs_test_opt(fs_info, SPACE_CACHE)) {19901990+ btrfs_warn(fs_info,19911991+ "v1 space cache is not supported for page size %lu with sectorsize %u",19921992+ PAGE_SIZE, fs_info->sectorsize);19931993+ ret = -EINVAL;19941994+ goto restore;19951995+ }20131996 btrfs_remount_begin(fs_info, old_opts, *flags);20141997 btrfs_resize_thread_pool(fs_info,20151998 fs_info->thread_pool_size, old_thread_pool_size);
+27
fs/btrfs/zoned.c
···21392139 factor = div64_u64(used * 100, total);21402140 return factor >= fs_info->bg_reclaim_threshold;21412141}21422142+21432143+void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logical,21442144+ u64 length)21452145+{21462146+ struct btrfs_block_group *block_group;21472147+21482148+ if (!btrfs_is_zoned(fs_info))21492149+ return;21502150+21512151+ block_group = btrfs_lookup_block_group(fs_info, logical);21522152+ /* It should be called on a previous data relocation block group. */21532153+ ASSERT(block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA));21542154+21552155+ spin_lock(&block_group->lock);21562156+ if (!block_group->zoned_data_reloc_ongoing)21572157+ goto out;21582158+21592159+ /* All relocation extents are written. */21602160+ if (block_group->start + block_group->alloc_offset == logical + length) {21612161+ /* Now, release this block group for further allocations. */21622162+ block_group->zoned_data_reloc_ongoing = 0;21632163+ }21642164+21652165+out:21662166+ spin_unlock(&block_group->lock);21672167+ btrfs_put_block_group(block_group);21682168+}
···58585959 spin_lock(&ses->chan_lock);6060 for (i = 0; i < ses->chan_count; i++) {6161- if (is_server_using_iface(ses->chans[i].server, iface)) {6161+ if (ses->chans[i].iface == iface) {6262 spin_unlock(&ses->chan_lock);6363 return true;6464 }···146146 return CIFS_CHAN_NEEDS_RECONNECT(ses, chan_index);147147}148148149149+bool150150+cifs_chan_is_iface_active(struct cifs_ses *ses,151151+ struct TCP_Server_Info *server)152152+{153153+ unsigned int chan_index = cifs_ses_get_chan_index(ses, server);154154+155155+ return ses->chans[chan_index].iface &&156156+ ses->chans[chan_index].iface->is_active;157157+}158158+149159/* returns number of channels added */150160int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)151161{152162 int old_chan_count, new_chan_count;153163 int left;154154- int i = 0;155164 int rc = 0;156165 int tries = 0;157157- struct cifs_server_iface *ifaces = NULL;158158- size_t iface_count;166166+ struct cifs_server_iface *iface = NULL, *niface = NULL;159167160168 spin_lock(&ses->chan_lock);161169···193185 spin_unlock(&ses->chan_lock);194186195187 /*196196- * Make a copy of the iface list at the time and use that197197- * instead so as to not hold the iface spinlock for opening198198- * channels199199- */200200- spin_lock(&ses->iface_lock);201201- iface_count = ses->iface_count;202202- if (iface_count <= 0) {203203- spin_unlock(&ses->iface_lock);204204- cifs_dbg(VFS, "no iface list available to open channels\n");205205- return 0;206206- }207207- ifaces = kmemdup(ses->iface_list, iface_count*sizeof(*ifaces),208208- GFP_ATOMIC);209209- if (!ifaces) {210210- spin_unlock(&ses->iface_lock);211211- return 0;212212- }213213- spin_unlock(&ses->iface_lock);214214-215215- /*216188 * Keep connecting to same, fastest, iface for all channels as217189 * long as its RSS. 
Try next fastest one if not RSS or channel218190 * creation fails.219191 */192192+ spin_lock(&ses->iface_lock);193193+ iface = list_first_entry(&ses->iface_list, struct cifs_server_iface,194194+ iface_head);195195+ spin_unlock(&ses->iface_lock);196196+220197 while (left > 0) {221221- struct cifs_server_iface *iface;222198223199 tries++;224200 if (tries > 3*ses->chan_max) {···211219 break;212220 }213221214214- iface = &ifaces[i];215215- if (is_ses_using_iface(ses, iface) && !iface->rss_capable) {216216- i = (i+1) % iface_count;217217- continue;222222+ spin_lock(&ses->iface_lock);223223+ if (!ses->iface_count) {224224+ spin_unlock(&ses->iface_lock);225225+ break;218226 }219227220220- rc = cifs_ses_add_channel(cifs_sb, ses, iface);221221- if (rc) {222222- cifs_dbg(FYI, "failed to open extra channel on iface#%d rc=%d\n",223223- i, rc);224224- i = (i+1) % iface_count;225225- continue;226226- }228228+ list_for_each_entry_safe_from(iface, niface, &ses->iface_list,229229+ iface_head) {230230+ /* skip ifaces that are unusable */231231+ if (!iface->is_active ||232232+ (is_ses_using_iface(ses, iface) &&233233+ !iface->rss_capable)) {234234+ continue;235235+ }227236228228- cifs_dbg(FYI, "successfully opened new channel on iface#%d\n",229229- i);237237+ /* take ref before unlock */238238+ kref_get(&iface->refcount);239239+240240+ spin_unlock(&ses->iface_lock);241241+ rc = cifs_ses_add_channel(cifs_sb, ses, iface);242242+ spin_lock(&ses->iface_lock);243243+244244+ if (rc) {245245+ cifs_dbg(VFS, "failed to open extra channel on iface:%pIS rc=%d\n",246246+ &iface->sockaddr,247247+ rc);248248+ kref_put(&iface->refcount, release_iface);249249+ continue;250250+ }251251+252252+ cifs_dbg(FYI, "successfully opened new channel on iface:%pIS\n",253253+ &iface->sockaddr);254254+ break;255255+ }256256+ spin_unlock(&ses->iface_lock);257257+230258 left--;231259 new_chan_count++;232260 }233261234234- kfree(ifaces);235262 return new_chan_count - old_chan_count;263263+}264264+265265+/*266266+ * 
update the iface for the channel if necessary.267267+ * will return 0 when iface is updated, 1 if removed, 2 otherwise268268+ * Must be called with chan_lock held.269269+ */270270+int271271+cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server)272272+{273273+ unsigned int chan_index;274274+ struct cifs_server_iface *iface = NULL;275275+ struct cifs_server_iface *old_iface = NULL;276276+ int rc = 0;277277+278278+ spin_lock(&ses->chan_lock);279279+ chan_index = cifs_ses_get_chan_index(ses, server);280280+ if (!chan_index) {281281+ spin_unlock(&ses->chan_lock);282282+ return 0;283283+ }284284+285285+ if (ses->chans[chan_index].iface) {286286+ old_iface = ses->chans[chan_index].iface;287287+ if (old_iface->is_active) {288288+ spin_unlock(&ses->chan_lock);289289+ return 1;290290+ }291291+ }292292+ spin_unlock(&ses->chan_lock);293293+294294+ spin_lock(&ses->iface_lock);295295+ /* then look for a new one */296296+ list_for_each_entry(iface, &ses->iface_list, iface_head) {297297+ if (!iface->is_active ||298298+ (is_ses_using_iface(ses, iface) &&299299+ !iface->rss_capable)) {300300+ continue;301301+ }302302+ kref_get(&iface->refcount);303303+ }304304+305305+ if (!list_entry_is_head(iface, &ses->iface_list, iface_head)) {306306+ rc = 1;307307+ iface = NULL;308308+ cifs_dbg(FYI, "unable to find a suitable iface\n");309309+ }310310+311311+ /* now drop the ref to the current iface */312312+ if (old_iface && iface) {313313+ kref_put(&old_iface->refcount, release_iface);314314+ cifs_dbg(FYI, "replacing iface: %pIS with %pIS\n",315315+ &old_iface->sockaddr,316316+ &iface->sockaddr);317317+ } else if (old_iface) {318318+ kref_put(&old_iface->refcount, release_iface);319319+ cifs_dbg(FYI, "releasing ref to iface: %pIS\n",320320+ &old_iface->sockaddr);321321+ } else {322322+ WARN_ON(!iface);323323+ cifs_dbg(FYI, "adding new iface: %pIS\n", &iface->sockaddr);324324+ }325325+ spin_unlock(&ses->iface_lock);326326+327327+ spin_lock(&ses->chan_lock);328328+ 
chan_index = cifs_ses_get_chan_index(ses, server);329329+ ses->chans[chan_index].iface = iface;330330+331331+ /* No iface is found. if secondary chan, drop connection */332332+ if (!iface && CIFS_SERVER_IS_CHAN(server))333333+ ses->chans[chan_index].server = NULL;334334+335335+ spin_unlock(&ses->chan_lock);336336+337337+ if (!iface && CIFS_SERVER_IS_CHAN(server))338338+ cifs_put_tcp_session(server, false);339339+340340+ return rc;236341}237342238343/*···444355 spin_unlock(&ses->chan_lock);445356 goto out;446357 }358358+ chan->iface = iface;447359 ses->chan_count++;448360 atomic_set(&ses->chan_seq, 0);449361
+119-112
fs/cifs/smb2ops.c
···512512static int513513parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,514514 size_t buf_len,515515- struct cifs_server_iface **iface_list,516516- size_t *iface_count)515515+ struct cifs_ses *ses)517516{518517 struct network_interface_info_ioctl_rsp *p;519518 struct sockaddr_in *addr4;520519 struct sockaddr_in6 *addr6;521520 struct iface_info_ipv4 *p4;522521 struct iface_info_ipv6 *p6;523523- struct cifs_server_iface *info;522522+ struct cifs_server_iface *info = NULL, *iface = NULL, *niface = NULL;523523+ struct cifs_server_iface tmp_iface;524524 ssize_t bytes_left;525525 size_t next = 0;526526 int nb_iface = 0;527527- int rc = 0;528528-529529- *iface_list = NULL;530530- *iface_count = 0;531531-532532- /*533533- * Fist pass: count and sanity check534534- */527527+ int rc = 0, ret = 0;535528536529 bytes_left = buf_len;537530 p = buf;531531+532532+ spin_lock(&ses->iface_lock);533533+ /*534534+ * Go through iface_list and do kref_put to remove535535+ * any unused ifaces. ifaces in use will be removed536536+ * when the last user calls a kref_put on it537537+ */538538+ list_for_each_entry_safe(iface, niface, &ses->iface_list,539539+ iface_head) {540540+ iface->is_active = 0;541541+ kref_put(&iface->refcount, release_iface);542542+ }543543+ spin_unlock(&ses->iface_lock);544544+538545 while (bytes_left >= sizeof(*p)) {546546+ memset(&tmp_iface, 0, sizeof(tmp_iface));547547+ tmp_iface.speed = le64_to_cpu(p->LinkSpeed);548548+ tmp_iface.rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0;549549+ tmp_iface.rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 
1 : 0;550550+551551+ switch (p->Family) {552552+ /*553553+ * The kernel and wire socket structures have the same554554+ * layout and use network byte order but make the555555+ * conversion explicit in case either one changes.556556+ */557557+ case INTERNETWORK:558558+ addr4 = (struct sockaddr_in *)&tmp_iface.sockaddr;559559+ p4 = (struct iface_info_ipv4 *)p->Buffer;560560+ addr4->sin_family = AF_INET;561561+ memcpy(&addr4->sin_addr, &p4->IPv4Address, 4);562562+563563+ /* [MS-SMB2] 2.2.32.5.1.1 Clients MUST ignore these */564564+ addr4->sin_port = cpu_to_be16(CIFS_PORT);565565+566566+ cifs_dbg(FYI, "%s: ipv4 %pI4\n", __func__,567567+ &addr4->sin_addr);568568+ break;569569+ case INTERNETWORKV6:570570+ addr6 = (struct sockaddr_in6 *)&tmp_iface.sockaddr;571571+ p6 = (struct iface_info_ipv6 *)p->Buffer;572572+ addr6->sin6_family = AF_INET6;573573+ memcpy(&addr6->sin6_addr, &p6->IPv6Address, 16);574574+575575+ /* [MS-SMB2] 2.2.32.5.1.2 Clients MUST ignore these */576576+ addr6->sin6_flowinfo = 0;577577+ addr6->sin6_scope_id = 0;578578+ addr6->sin6_port = cpu_to_be16(CIFS_PORT);579579+580580+ cifs_dbg(FYI, "%s: ipv6 %pI6\n", __func__,581581+ &addr6->sin6_addr);582582+ break;583583+ default:584584+ cifs_dbg(VFS,585585+ "%s: skipping unsupported socket family\n",586586+ __func__);587587+ goto next_iface;588588+ }589589+590590+ /*591591+ * The iface_list is assumed to be sorted by speed.592592+ * Check if the new interface exists in that list.593593+ * NEVER change iface. 
it could be in use.594594+ * Add a new one instead595595+ */596596+ spin_lock(&ses->iface_lock);597597+ iface = niface = NULL;598598+ list_for_each_entry_safe(iface, niface, &ses->iface_list,599599+ iface_head) {600600+ ret = iface_cmp(iface, &tmp_iface);601601+ if (!ret) {602602+ /* just get a ref so that it doesn't get picked/freed */603603+ iface->is_active = 1;604604+ kref_get(&iface->refcount);605605+ spin_unlock(&ses->iface_lock);606606+ goto next_iface;607607+ } else if (ret < 0) {608608+ /* all remaining ifaces are slower */609609+ kref_get(&iface->refcount);610610+ break;611611+ }612612+ }613613+ spin_unlock(&ses->iface_lock);614614+615615+ /* no match. insert the entry in the list */616616+ info = kmalloc(sizeof(struct cifs_server_iface),617617+ GFP_KERNEL);618618+ if (!info) {619619+ rc = -ENOMEM;620620+ goto out;621621+ }622622+ memcpy(info, &tmp_iface, sizeof(tmp_iface));623623+624624+ /* add this new entry to the list */625625+ kref_init(&info->refcount);626626+ info->is_active = 1;627627+628628+ cifs_dbg(FYI, "%s: adding iface %zu\n", __func__, ses->iface_count);629629+ cifs_dbg(FYI, "%s: speed %zu bps\n", __func__, info->speed);630630+ cifs_dbg(FYI, "%s: capabilities 0x%08x\n", __func__,631631+ le32_to_cpu(p->Capability));632632+633633+ spin_lock(&ses->iface_lock);634634+ if (!list_entry_is_head(iface, &ses->iface_list, iface_head)) {635635+ list_add_tail(&info->iface_head, &iface->iface_head);636636+ kref_put(&iface->refcount, release_iface);637637+ } else638638+ list_add_tail(&info->iface_head, &ses->iface_list);639639+ spin_unlock(&ses->iface_lock);640640+641641+ ses->iface_count++;642642+ ses->iface_last_update = jiffies;643643+next_iface:539644 nb_iface++;540645 next = le32_to_cpu(p->Next);541646 if (!next) {···662557 cifs_dbg(VFS, "%s: incomplete interface info\n", __func__);663558664559665665- /*666666- * Second pass: extract info to internal structure667667- */668668-669669- *iface_list = kcalloc(nb_iface, sizeof(**iface_list), 
GFP_KERNEL);670670- if (!*iface_list) {671671- rc = -ENOMEM;672672- goto out;673673- }674674-675675- info = *iface_list;676676- bytes_left = buf_len;677677- p = buf;678678- while (bytes_left >= sizeof(*p)) {679679- info->speed = le64_to_cpu(p->LinkSpeed);680680- info->rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0;681681- info->rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 1 : 0;682682-683683- cifs_dbg(FYI, "%s: adding iface %zu\n", __func__, *iface_count);684684- cifs_dbg(FYI, "%s: speed %zu bps\n", __func__, info->speed);685685- cifs_dbg(FYI, "%s: capabilities 0x%08x\n", __func__,686686- le32_to_cpu(p->Capability));687687-688688- switch (p->Family) {689689- /*690690- * The kernel and wire socket structures have the same691691- * layout and use network byte order but make the692692- * conversion explicit in case either one changes.693693- */694694- case INTERNETWORK:695695- addr4 = (struct sockaddr_in *)&info->sockaddr;696696- p4 = (struct iface_info_ipv4 *)p->Buffer;697697- addr4->sin_family = AF_INET;698698- memcpy(&addr4->sin_addr, &p4->IPv4Address, 4);699699-700700- /* [MS-SMB2] 2.2.32.5.1.1 Clients MUST ignore these */701701- addr4->sin_port = cpu_to_be16(CIFS_PORT);702702-703703- cifs_dbg(FYI, "%s: ipv4 %pI4\n", __func__,704704- &addr4->sin_addr);705705- break;706706- case INTERNETWORKV6:707707- addr6 = (struct sockaddr_in6 *)&info->sockaddr;708708- p6 = (struct iface_info_ipv6 *)p->Buffer;709709- addr6->sin6_family = AF_INET6;710710- memcpy(&addr6->sin6_addr, &p6->IPv6Address, 16);711711-712712- /* [MS-SMB2] 2.2.32.5.1.2 Clients MUST ignore these */713713- addr6->sin6_flowinfo = 0;714714- addr6->sin6_scope_id = 0;715715- addr6->sin6_port = cpu_to_be16(CIFS_PORT);716716-717717- cifs_dbg(FYI, "%s: ipv6 %pI6\n", __func__,718718- &addr6->sin6_addr);719719- break;720720- default:721721- cifs_dbg(VFS,722722- "%s: skipping unsupported socket family\n",723723- __func__);724724- goto next_iface;725725- }726726-727727- 
(*iface_count)++;728728- info++;729729-next_iface:730730- next = le32_to_cpu(p->Next);731731- if (!next)732732- break;733733- p = (struct network_interface_info_ioctl_rsp *)((u8 *)p+next);734734- bytes_left -= next;735735- }736736-737737- if (!*iface_count) {560560+ if (!ses->iface_count) {738561 rc = -EINVAL;739562 goto out;740563 }741564742565out:743743- if (rc) {744744- kfree(*iface_list);745745- *iface_count = 0;746746- *iface_list = NULL;747747- }748566 return rc;749567}750568751751-static int compare_iface(const void *ia, const void *ib)752752-{753753- const struct cifs_server_iface *a = (struct cifs_server_iface *)ia;754754- const struct cifs_server_iface *b = (struct cifs_server_iface *)ib;755755-756756- return a->speed == b->speed ? 0 : (a->speed > b->speed ? -1 : 1);757757-}758758-759759-static int569569+int760570SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon)761571{762572 int rc;763573 unsigned int ret_data_len = 0;764574 struct network_interface_info_ioctl_rsp *out_buf = NULL;765765- struct cifs_server_iface *iface_list;766766- size_t iface_count;767575 struct cifs_ses *ses = tcon->ses;768576769577 rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID,···692674 goto out;693675 }694676695695- rc = parse_server_interfaces(out_buf, ret_data_len,696696- &iface_list, &iface_count);677677+ rc = parse_server_interfaces(out_buf, ret_data_len, ses);697678 if (rc)698679 goto out;699699-700700- /* sort interfaces from fastest to slowest */701701- sort(iface_list, iface_count, sizeof(*iface_list), compare_iface, NULL);702702-703703- spin_lock(&ses->iface_lock);704704- kfree(ses->iface_list);705705- ses->iface_list = iface_list;706706- ses->iface_count = iface_count;707707- ses->iface_last_update = jiffies;708708- spin_unlock(&ses->iface_lock);709680710681out:711682 kfree(out_buf);
+15-6
fs/cifs/smb2pdu.c
···543543 struct TCP_Server_Info *server, unsigned int *total_len)544544{545545 char *pneg_ctxt;546546+ char *hostname = NULL;546547 unsigned int ctxt_len, neg_context_count;547548548549 if (*total_len > 200) {···571570 *total_len += ctxt_len;572571 pneg_ctxt += ctxt_len;573572574574- ctxt_len = build_netname_ctxt((struct smb2_netname_neg_context *)pneg_ctxt,575575- server->hostname);576576- *total_len += ctxt_len;577577- pneg_ctxt += ctxt_len;578578-579573 build_posix_ctxt((struct smb2_posix_neg_context *)pneg_ctxt);580574 *total_len += sizeof(struct smb2_posix_neg_context);581575 pneg_ctxt += sizeof(struct smb2_posix_neg_context);582576583583- neg_context_count = 4;577577+ /*578578+ * secondary channels don't have the hostname field populated579579+ * use the hostname field in the primary channel instead580580+ */581581+ hostname = CIFS_SERVER_IS_CHAN(server) ?582582+ server->primary_server->hostname : server->hostname;583583+ if (hostname && (hostname[0] != 0)) {584584+ ctxt_len = build_netname_ctxt((struct smb2_netname_neg_context *)pneg_ctxt,585585+ hostname);586586+ *total_len += ctxt_len;587587+ pneg_ctxt += ctxt_len;588588+ neg_context_count = 4;589589+ } else /* second channels do not have a hostname */590590+ neg_context_count = 3;584591585592 if (server->compress_algorithm) {586593 build_compression_ctxt((struct smb2_compression_capabilities_context *)
fs/f2fs/iostat.c
···9191 unsigned int cnt;9292 struct f2fs_iostat_latency iostat_lat[MAX_IO_TYPE][NR_PAGE_TYPE];9393 struct iostat_lat_info *io_lat = sbi->iostat_io_lat;9494+ unsigned long flags;94959595- spin_lock_bh(&sbi->iostat_lat_lock);9696+ spin_lock_irqsave(&sbi->iostat_lat_lock, flags);9697 for (idx = 0; idx < MAX_IO_TYPE; idx++) {9798 for (io = 0; io < NR_PAGE_TYPE; io++) {9899 cnt = io_lat->bio_cnt[idx][io];···107106 io_lat->bio_cnt[idx][io] = 0;108107 }109108 }110110- spin_unlock_bh(&sbi->iostat_lat_lock);109109+ spin_unlock_irqrestore(&sbi->iostat_lat_lock, flags);111110112111 trace_f2fs_iostat_latency(sbi, iostat_lat);113112}···116115{117116 unsigned long long iostat_diff[NR_IO_TYPE];118117 int i;118118+ unsigned long flags;119119120120 if (time_is_after_jiffies(sbi->iostat_next_period))121121 return;122122123123 /* Need double check under the lock */124124- spin_lock_bh(&sbi->iostat_lock);124124+ spin_lock_irqsave(&sbi->iostat_lock, flags);125125 if (time_is_after_jiffies(sbi->iostat_next_period)) {126126- spin_unlock_bh(&sbi->iostat_lock);126126+ spin_unlock_irqrestore(&sbi->iostat_lock, flags);127127 return;128128 }129129 sbi->iostat_next_period = jiffies +···135133 sbi->prev_rw_iostat[i];136134 sbi->prev_rw_iostat[i] = sbi->rw_iostat[i];137135 }138138- spin_unlock_bh(&sbi->iostat_lock);136136+ spin_unlock_irqrestore(&sbi->iostat_lock, flags);139137140138 trace_f2fs_iostat(sbi, iostat_diff);141139···147145 struct iostat_lat_info *io_lat = sbi->iostat_io_lat;148146 int i;149147150150- spin_lock_bh(&sbi->iostat_lock);148148+ spin_lock_irq(&sbi->iostat_lock);151149 for (i = 0; i < NR_IO_TYPE; i++) {152150 sbi->rw_iostat[i] = 0;153151 sbi->prev_rw_iostat[i] = 0;154152 }155155- spin_unlock_bh(&sbi->iostat_lock);153153+ spin_unlock_irq(&sbi->iostat_lock);156154157157- spin_lock_bh(&sbi->iostat_lat_lock);155155+ spin_lock_irq(&sbi->iostat_lat_lock);158156 memset(io_lat, 0, sizeof(struct iostat_lat_info));159159- spin_unlock_bh(&sbi->iostat_lat_lock);157157+ 
spin_unlock_irq(&sbi->iostat_lat_lock);160158}161159162160void f2fs_update_iostat(struct f2fs_sb_info *sbi,163161 enum iostat_type type, unsigned long long io_bytes)164162{163163+ unsigned long flags;164164+165165 if (!sbi->iostat_enable)166166 return;167167168168- spin_lock_bh(&sbi->iostat_lock);168168+ spin_lock_irqsave(&sbi->iostat_lock, flags);169169 sbi->rw_iostat[type] += io_bytes;170170171171 if (type == APP_BUFFERED_IO || type == APP_DIRECT_IO)···176172 if (type == APP_BUFFERED_READ_IO || type == APP_DIRECT_READ_IO)177173 sbi->rw_iostat[APP_READ_IO] += io_bytes;178174179179- spin_unlock_bh(&sbi->iostat_lock);175175+ spin_unlock_irqrestore(&sbi->iostat_lock, flags);180176181177 f2fs_record_iostat(sbi);182178}···189185 struct f2fs_sb_info *sbi = iostat_ctx->sbi;190186 struct iostat_lat_info *io_lat = sbi->iostat_io_lat;191187 int idx;188188+ unsigned long flags;192189193190 if (!sbi->iostat_enable)194191 return;···207202 idx = WRITE_ASYNC_IO;208203 }209204210210- spin_lock_bh(&sbi->iostat_lat_lock);205205+ spin_lock_irqsave(&sbi->iostat_lat_lock, flags);211206 io_lat->sum_lat[idx][iotype] += ts_diff;212207 io_lat->bio_cnt[idx][iotype]++;213208 if (ts_diff > io_lat->peak_lat[idx][iotype])214209 io_lat->peak_lat[idx][iotype] = ts_diff;215215- spin_unlock_bh(&sbi->iostat_lat_lock);210210+ spin_unlock_irqrestore(&sbi->iostat_lat_lock, flags);216211}217212218213void iostat_update_and_unbind_ctx(struct bio *bio, int rw)
+11-6
fs/f2fs/namei.c
···8989 if (test_opt(sbi, INLINE_XATTR))9090 set_inode_flag(inode, FI_INLINE_XATTR);91919292- if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode))9393- set_inode_flag(inode, FI_INLINE_DATA);9492 if (f2fs_may_inline_dentry(inode))9593 set_inode_flag(inode, FI_INLINE_DENTRY);9694···105107106108 f2fs_init_extent_tree(inode, NULL);107109108108- stat_inc_inline_xattr(inode);109109- stat_inc_inline_inode(inode);110110- stat_inc_inline_dir(inode);111111-112110 F2FS_I(inode)->i_flags =113111 f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED);114112···120126 f2fs_may_compress(inode))121127 set_compress_context(inode);122128 }129129+130130+ /* Should enable inline_data after compression set */131131+ if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode))132132+ set_inode_flag(inode, FI_INLINE_DATA);133133+134134+ stat_inc_inline_xattr(inode);135135+ stat_inc_inline_inode(inode);136136+ stat_inc_inline_dir(inode);123137124138 f2fs_set_inode_flags(inode);125139···327325 if (!is_extension_exist(name, ext[i], false))328326 continue;329327328328+ /* Do not use inline_data with compression */329329+ stat_dec_inline_inode(inode);330330+ clear_inode_flag(inode, FI_INLINE_DATA);330331 set_compress_context(inode);331332 return;332333 }
+3-1
fs/f2fs/node.c
···14501450out_err:14511451 ClearPageUptodate(page);14521452out_put_err:14531453- f2fs_handle_page_eio(sbi, page->index, NODE);14531453+ /* ENOENT comes from read_node_page which is not an error. */14541454+ if (err != -ENOENT)14551455+ f2fs_handle_page_eio(sbi, page->index, NODE);14541456 f2fs_put_page(page, 1);14551457 return ERR_PTR(err);14561458}
+55-17
fs/hugetlbfs/inode.c
···600600 remove_inode_hugepages(inode, offset, LLONG_MAX);601601}602602603603+static void hugetlbfs_zero_partial_page(struct hstate *h,604604+ struct address_space *mapping,605605+ loff_t start,606606+ loff_t end)607607+{608608+ pgoff_t idx = start >> huge_page_shift(h);609609+ struct folio *folio;610610+611611+ folio = filemap_lock_folio(mapping, idx);612612+ if (!folio)613613+ return;614614+615615+ start = start & ~huge_page_mask(h);616616+ end = end & ~huge_page_mask(h);617617+ if (!end)618618+ end = huge_page_size(h);619619+620620+ folio_zero_segment(folio, (size_t)start, (size_t)end);621621+622622+ folio_unlock(folio);623623+ folio_put(folio);624624+}625625+603626static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len)604627{628628+ struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);629629+ struct address_space *mapping = inode->i_mapping;605630 struct hstate *h = hstate_inode(inode);606631 loff_t hpage_size = huge_page_size(h);607632 loff_t hole_start, hole_end;608633609634 /*610610- * For hole punch round up the beginning offset of the hole and611611- * round down the end.635635+ * hole_start and hole_end indicate the full pages within the hole.612636 */613637 hole_start = round_up(offset, hpage_size);614638 hole_end = round_down(offset + len, hpage_size);615639640640+ inode_lock(inode);641641+642642+ /* protected by i_rwsem */643643+ if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {644644+ inode_unlock(inode);645645+ return -EPERM;646646+ }647647+648648+ i_mmap_lock_write(mapping);649649+650650+ /* If range starts before first full page, zero partial page. */651651+ if (offset < hole_start)652652+ hugetlbfs_zero_partial_page(h, mapping,653653+ offset, min(offset + len, hole_start));654654+655655+ /* Unmap users of full pages in the hole. 
*/616656 if (hole_end > hole_start) {617617- struct address_space *mapping = inode->i_mapping;618618- struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode);619619-620620- inode_lock(inode);621621-622622- /* protected by i_rwsem */623623- if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {624624- inode_unlock(inode);625625- return -EPERM;626626- }627627-628628- i_mmap_lock_write(mapping);629657 if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root))630658 hugetlb_vmdelete_list(&mapping->i_mmap,631659 hole_start >> PAGE_SHIFT,632660 hole_end >> PAGE_SHIFT, 0);633633- i_mmap_unlock_write(mapping);634634- remove_inode_hugepages(inode, hole_start, hole_end);635635- inode_unlock(inode);636661 }662662+663663+ /* If range extends beyond last full page, zero partial page. */664664+ if ((offset + len) > hole_end && (offset + len) > hole_start)665665+ hugetlbfs_zero_partial_page(h, mapping,666666+ hole_end, offset + len);667667+668668+ i_mmap_unlock_write(mapping);669669+670670+ /* Remove full pages from the file. */671671+ if (hole_end > hole_start)672672+ remove_inode_hugepages(inode, hole_start, hole_end);673673+674674+ inode_unlock(inode);637675638676 return 0;639677}
+15-11
fs/io_uring.c
···19751975{19761976 if (!(req->flags & REQ_F_INFLIGHT)) {19771977 req->flags |= REQ_F_INFLIGHT;19781978- atomic_inc(&current->io_uring->inflight_tracked);19781978+ atomic_inc(&req->task->io_uring->inflight_tracked);19791979 }19801980}19811981···34373437 if (unlikely(res != req->cqe.res)) {34383438 if ((res == -EAGAIN || res == -EOPNOTSUPP) &&34393439 io_rw_should_reissue(req)) {34403440- req->flags |= REQ_F_REISSUE;34403440+ req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;34413441 return true;34423442 }34433443 req_set_fail(req);···34873487 kiocb_end_write(req);34883488 if (unlikely(res != req->cqe.res)) {34893489 if (res == -EAGAIN && io_rw_should_reissue(req)) {34903490- req->flags |= REQ_F_REISSUE;34903490+ req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO;34913491 return;34923492 }34933493 req->cqe.res = res;···6077607760786078 if (unlikely(sqe->file_index))60796079 return -EINVAL;60806080- if (unlikely(sqe->addr2 || sqe->file_index))60816081- return -EINVAL;6082608060836081 sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));60846082 sr->len = READ_ONCE(sqe->len);···63126314 struct io_sr_msg *sr = &req->sr_msg;6313631563146316 if (unlikely(sqe->file_index))63156315- return -EINVAL;63166316- if (unlikely(sqe->addr2 || sqe->file_index))63176317 return -EINVAL;6318631863196319 sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr));···69506954 io_req_complete_failed(req, ret);69516955}6952695669536953-static void __io_poll_execute(struct io_kiocb *req, int mask, __poll_t events)69576957+static void __io_poll_execute(struct io_kiocb *req, int mask,69586958+ __poll_t __maybe_unused events)69546959{69556960 req->cqe.res = mask;69566961 /*···69606963 * CPU. 
We want to avoid pulling in req->apoll->events for that69616964 * case.69626965 */69636963- req->apoll_events = events;69646966 if (req->opcode == IORING_OP_POLL_ADD)69656967 req->io_task_work.func = io_poll_task_func;69666968 else···71107114 io_init_poll_iocb(poll, mask, io_poll_wake);71117115 poll->file = req->file;7112711671177117+ req->apoll_events = poll->events;71187118+71137119 ipt->pt._key = mask;71147120 ipt->req = req;71157121 ipt->error = 0;···7142714471437145 if (mask) {71447146 /* can't multishot if failed, just queue the event we've got */71457145- if (unlikely(ipt->error || !ipt->nr_entries))71477147+ if (unlikely(ipt->error || !ipt->nr_entries)) {71467148 poll->events |= EPOLLONESHOT;71497149+ req->apoll_events |= EPOLLONESHOT;71507150+ ipt->error = 0;71517151+ }71477152 __io_poll_execute(req, mask, poll->events);71487153 return 0;71497154 }···72087207 mask |= EPOLLEXCLUSIVE;72097208 if (req->flags & REQ_F_POLLED) {72107209 apoll = req->apoll;72107210+ kfree(apoll->double_poll);72117211 } else if (!(issue_flags & IO_URING_F_UNLOCKED) &&72127212 !list_empty(&ctx->apoll_cache)) {72137213 apoll = list_first_entry(&ctx->apoll_cache, struct async_poll,···73947392 return -EINVAL;7395739373967394 io_req_set_refcount(req);73977397- req->apoll_events = poll->events = io_poll_parse_events(sqe, flags);73957395+ poll->events = io_poll_parse_events(sqe, flags);73987396 return 0;73997397}74007398···74077405 ipt.pt._qproc = io_poll_queue_proc;7408740674097407 ret = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events);74087408+ if (!ret && ipt.error)74097409+ req_set_fail(req);74107410 ret = ret ?: ipt.error;74117411 if (ret)74127412 __io_req_complete(req, issue_flags, ret, 0);
+1-1
fs/tracefs/inode.c
···553553 *554554 * Only one instances directory is allowed.555555 *556556- * The instances directory is special as it allows for mkdir and rmdir to556556+ * The instances directory is special as it allows for mkdir and rmdir557557 * to be done by userspace. When a mkdir or rmdir is performed, the inode558558 * locks are released and the methods passed in (@mkdir and @rmdir) are559559 * called without locks and with the name of the directory being created
+3
include/keys/asymmetric-type.h
···8484 const struct asymmetric_key_id *id_2,8585 bool partial);86868787+int x509_load_certificate_list(const u8 cert_list[], const unsigned long list_size,8888+ const struct key *keyring);8989+8790/*8891 * The payload is at the discretion of the subtype.8992 */
-17
include/linux/console.h
···16161717#include <linux/atomic.h>1818#include <linux/types.h>1919-#include <linux/mutex.h>20192120struct vc_data;2221struct console_font_op;···153154 uint ospeed;154155 u64 seq;155156 unsigned long dropped;156156- struct task_struct *thread;157157- bool blocked;158158-159159- /*160160- * The per-console lock is used by printing kthreads to synchronize161161- * this console with callers of console_lock(). This is necessary in162162- * order to allow printing kthreads to run in parallel to each other,163163- * while each safely accessing the @blocked field and synchronizing164164- * against direct printing via console_lock/console_unlock.165165- *166166- * Note: For synchronizing against direct printing via167167- * console_trylock/console_unlock, see the static global168168- * variable @console_kthreads_active.169169- */170170- struct mutex lock;171171-172157 void *data;173158 struct console *next;174159};
+16-13
include/linux/gpio/driver.h
···167167 */168168 irq_flow_handler_t parent_handler;169169170170- /**171171- * @parent_handler_data:172172- *173173- * If @per_parent_data is false, @parent_handler_data is a single174174- * pointer used as the data associated with every parent interrupt.175175- *176176- * @parent_handler_data_array:177177- *178178- * If @per_parent_data is true, @parent_handler_data_array is179179- * an array of @num_parents pointers, and is used to associate180180- * different data for each parent. This cannot be NULL if181181- * @per_parent_data is true.182182- */183170 union {171171+ /**172172+ * @parent_handler_data:173173+ *174174+ * If @per_parent_data is false, @parent_handler_data is a175175+ * single pointer used as the data associated with every176176+ * parent interrupt.177177+ */184178 void *parent_handler_data;179179+180180+ /**181181+ * @parent_handler_data_array:182182+ *183183+ * If @per_parent_data is true, @parent_handler_data_array is184184+ * an array of @num_parents pointers, and is used to associate185185+ * different data for each parent. This cannot be NULL if186186+ * @per_parent_data is true.187187+ */185188 void **parent_handler_data_array;186189 };187190
+2-1
include/linux/mm.h
···16001600 if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE)16011601 return false;16021602#endif16031603- return !(is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page)));16031603+ return !is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page));16041604}16051605#else16061606static inline bool is_pinnable_page(struct page *page)···32323232 MF_MUST_KILL = 1 << 2,32333233 MF_SOFT_OFFLINE = 1 << 3,32343234 MF_UNPOISON = 1 << 4,32353235+ MF_SW_SIMULATED = 1 << 5,32353236};32363237extern int memory_failure(unsigned long pfn, int flags);32373238extern void memory_failure_queue(unsigned long pfn, int flags);
+5
kernel/bpf/btf.c
···48154815 n = btf_nr_types(btf);48164816 for (i = start_id; i < n; i++) {48174817 const struct btf_type *t;48184818+ int chain_limit = 32;48184819 u32 cur_id = i;4819482048204821 t = btf_type_by_id(btf, i);···4828482748294828 in_tags = btf_type_is_type_tag(t);48304829 while (btf_type_is_modifier(t)) {48304830+ if (!chain_limit--) {48314831+ btf_verifier_log(env, "Max chain length or cycle detected");48324832+ return -ELOOP;48334833+ }48314834 if (btf_type_is_type_tag(t)) {48324835 if (!in_tags) {48334836 btf_verifier_log(env, "Type tags don't precede modifiers");
+2-3
kernel/dma/direct.c
···357357 } else {358358 if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))359359 arch_dma_clear_uncached(cpu_addr, size);360360- if (dma_set_encrypted(dev, cpu_addr, 1 << page_order))360360+ if (dma_set_encrypted(dev, cpu_addr, size))361361 return;362362 }363363···392392 struct page *page, dma_addr_t dma_addr,393393 enum dma_data_direction dir)394394{395395- unsigned int page_order = get_order(size);396395 void *vaddr = page_address(page);397396398397 /* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */···399400 dma_free_from_pool(dev, vaddr, size))400401 return;401402402402- if (dma_set_encrypted(dev, vaddr, 1 << page_order))403403+ if (dma_set_encrypted(dev, vaddr, size))403404 return;404405 __dma_direct_free_pages(dev, page, size);405406}
+1-10
kernel/hung_task.c
···127127 * complain:128128 */129129 if (sysctl_hung_task_warnings) {130130- printk_prefer_direct_enter();131131-132130 if (sysctl_hung_task_warnings > 0)133131 sysctl_hung_task_warnings--;134132 pr_err("INFO: task %s:%d blocked for more than %ld seconds.\n",···142144143145 if (sysctl_hung_task_all_cpu_backtrace)144146 hung_task_show_all_bt = true;145145-146146- printk_prefer_direct_exit();147147 }148148149149 touch_nmi_watchdog();···204208 }205209 unlock:206210 rcu_read_unlock();207207- if (hung_task_show_lock) {208208- printk_prefer_direct_enter();211211+ if (hung_task_show_lock)209212 debug_show_all_locks();210210- printk_prefer_direct_exit();211211- }212213213214 if (hung_task_show_all_bt) {214215 hung_task_show_all_bt = false;215215- printk_prefer_direct_enter();216216 trigger_all_cpu_backtrace();217217- printk_prefer_direct_exit();218217 }219218220219 if (hung_task_call_panic)
+7-7
kernel/kthread.c
···340340341341 self = to_kthread(current);342342343343- /* If user was SIGKILLed, I release the structure. */343343+ /* Release the structure when caller killed by a fatal signal. */344344 done = xchg(&create->done, NULL);345345 if (!done) {346346 kfree(create);···398398 /* We want our own signal handler (we take no signals by default). */399399 pid = kernel_thread(kthread, create, CLONE_FS | CLONE_FILES | SIGCHLD);400400 if (pid < 0) {401401- /* If user was SIGKILLed, I release the structure. */401401+ /* Release the structure when caller killed by a fatal signal. */402402 struct completion *done = xchg(&create->done, NULL);403403404404 if (!done) {···440440 */441441 if (unlikely(wait_for_completion_killable(&done))) {442442 /*443443- * If I was SIGKILLed before kthreadd (or new kernel thread)444444- * calls complete(), leave the cleanup of this structure to445445- * that thread.443443+ * If I was killed by a fatal signal before kthreadd (or new444444+ * kernel thread) calls complete(), leave the cleanup of this445445+ * structure to that thread.446446 */447447 if (xchg(&create->done, NULL))448448 return ERR_PTR(-EINTR);···876876 *877877 * Returns a pointer to the allocated worker on success, ERR_PTR(-ENOMEM)878878 * when the needed structures could not get allocated, and ERR_PTR(-EINTR)879879- * when the worker was SIGKILLed.879879+ * when the caller was killed by a fatal signal.880880 */881881struct kthread_worker *882882kthread_create_worker(unsigned int flags, const char namefmt[], ...)···925925 * Return:926926 * The pointer to the allocated worker on success, ERR_PTR(-ENOMEM)927927 * when the needed structures could not get allocated, and ERR_PTR(-EINTR)928928- * when the worker was SIGKILLed.928928+ * when the caller was killed by a fatal signal.929929 */930930struct kthread_worker *931931kthread_create_worker_on_cpu(int cpu, unsigned int flags,
-6
kernel/panic.c
···297297 * unfortunately means it may not be hardened to work in a298298 * panic situation.299299 */300300- try_block_console_kthreads(10000);301300 smp_send_stop();302301 } else {303302 /*···304305 * kmsg_dump, we will need architecture dependent extra305306 * works in addition to stopping other CPUs.306307 */307307- try_block_console_kthreads(10000);308308 crash_smp_send_stop();309309 }310310···603605{604606 disable_trace_on_warning();605607606606- printk_prefer_direct_enter();607607-608608 if (file)609609 pr_warn("WARNING: CPU: %d PID: %d at %s:%d %pS\n",610610 raw_smp_processor_id(), current->pid, file, line,···632636633637 /* Just a warning, don't kill lockdep. */634638 add_taint(taint, LOCKDEP_STILL_OK);635635-636636- printk_prefer_direct_exit();637639}638640639641#ifndef __WARN_FLAGS
+1-1
kernel/power/hibernate.c
···665665 hibernation_platform_enter();666666 fallthrough;667667 case HIBERNATION_SHUTDOWN:668668- if (pm_power_off)668668+ if (kernel_can_power_off())669669 kernel_power_off();670670 break;671671 }
-2
kernel/printk/internal.h
···2020 LOG_CONT = 8, /* text is a fragment of a continuation line */2121};22222323-extern bool block_console_kthreads;2424-2523__printf(4, 0)2624int vprintk_store(int facility, int level,2725 const struct dev_printk_info *dev_info,
+63-530
kernel/printk/printk.c
···224224static int nr_ext_console_drivers;225225226226/*227227- * Used to synchronize printing kthreads against direct printing via228228- * console_trylock/console_unlock.229229- *230230- * Values:231231- * -1 = console kthreads atomically blocked (via global trylock)232232- * 0 = no kthread printing, console not locked (via trylock)233233- * >0 = kthread(s) actively printing234234- *235235- * Note: For synchronizing against direct printing via236236- * console_lock/console_unlock, see the @lock variable in237237- * struct console.238238- */239239-static atomic_t console_kthreads_active = ATOMIC_INIT(0);240240-241241-#define console_kthreads_atomic_tryblock() \242242- (atomic_cmpxchg(&console_kthreads_active, 0, -1) == 0)243243-#define console_kthreads_atomic_unblock() \244244- atomic_cmpxchg(&console_kthreads_active, -1, 0)245245-#define console_kthreads_atomically_blocked() \246246- (atomic_read(&console_kthreads_active) == -1)247247-248248-#define console_kthread_printing_tryenter() \249249- atomic_inc_unless_negative(&console_kthreads_active)250250-#define console_kthread_printing_exit() \251251- atomic_dec(&console_kthreads_active)252252-253253-/* Block console kthreads to avoid processing new messages. */254254-bool block_console_kthreads;255255-256256-/*257227 * Helper macros to handle lockdep when locking/unlocking console_sem. We use258228 * macros instead of functions so that _RET_IP_ contains useful information.259229 */···271301}272302273303/*274274- * Tracks whether kthread printers are all blocked. A value of true implies275275- * that the console is locked via console_lock() or the console is suspended.276276- * Writing to this variable requires holding @console_sem.304304+ * This is used for debugging the mess that is the VT code by305305+ * keeping track if we have the console semaphore held. 
It's306306+ * definitely not the perfect debug tool (we don't know if _WE_307307+ * hold it and are racing, but it helps tracking those weird code308308+ * paths in the console code where we end up in places I want309309+ * locked without the console semaphore held).277310 */278278-static bool console_kthreads_blocked;279279-280280-/*281281- * Block all kthread printers from a schedulable context.282282- *283283- * Requires holding @console_sem.284284- */285285-static void console_kthreads_block(void)286286-{287287- struct console *con;288288-289289- for_each_console(con) {290290- mutex_lock(&con->lock);291291- con->blocked = true;292292- mutex_unlock(&con->lock);293293- }294294-295295- console_kthreads_blocked = true;296296-}297297-298298-/*299299- * Unblock all kthread printers from a schedulable context.300300- *301301- * Requires holding @console_sem.302302- */303303-static void console_kthreads_unblock(void)304304-{305305- struct console *con;306306-307307- for_each_console(con) {308308- mutex_lock(&con->lock);309309- con->blocked = false;310310- mutex_unlock(&con->lock);311311- }312312-313313- console_kthreads_blocked = false;314314-}315315-316316-static int console_suspended;311311+static int console_locked, console_suspended;317312318313/*319314 * Array of consoles built from command line options (console=)···361426/* syslog_lock protects syslog_* variables and write access to clear_seq. */362427static DEFINE_MUTEX(syslog_lock);363428364364-/*365365- * A flag to signify if printk_activate_kthreads() has already started the366366- * kthread printers. If true, any later registered consoles must start their367367- * own kthread directly. 
The flag is write protected by the console_lock.368368- */369369-static bool printk_kthreads_available;370370-371429#ifdef CONFIG_PRINTK372372-static atomic_t printk_prefer_direct = ATOMIC_INIT(0);373373-374374-/**375375- * printk_prefer_direct_enter - cause printk() calls to attempt direct376376- * printing to all enabled consoles377377- *378378- * Since it is not possible to call into the console printing code from any379379- * context, there is no guarantee that direct printing will occur.380380- *381381- * This globally effects all printk() callers.382382- *383383- * Context: Any context.384384- */385385-void printk_prefer_direct_enter(void)386386-{387387- atomic_inc(&printk_prefer_direct);388388-}389389-390390-/**391391- * printk_prefer_direct_exit - restore printk() behavior392392- *393393- * Context: Any context.394394- */395395-void printk_prefer_direct_exit(void)396396-{397397- WARN_ON(atomic_dec_if_positive(&printk_prefer_direct) < 0);398398-}399399-400400-/*401401- * Calling printk() always wakes kthread printers so that they can402402- * flush the new message to their respective consoles. 
Also, if direct403403- * printing is allowed, printk() tries to flush the messages directly.404404- *405405- * Direct printing is allowed in situations when the kthreads406406- * are not available or the system is in a problematic state.407407- *408408- * See the implementation about possible races.409409- */410410-static inline bool allow_direct_printing(void)411411-{412412- /*413413- * Checking kthread availability is a possible race because the414414- * kthread printers can become permanently disabled during runtime.415415- * However, doing that requires holding the console_lock, so any416416- * pending messages will be direct printed by console_unlock().417417- */418418- if (!printk_kthreads_available)419419- return true;420420-421421- /*422422- * Prefer direct printing when the system is in a problematic state.423423- * The context that sets this state will always see the updated value.424424- * The other contexts do not care. Anyway, direct printing is just a425425- * best effort. The direct output is only possible when console_lock426426- * is not already taken and no kthread printers are actively printing.427427- */428428- return (system_state > SYSTEM_RUNNING ||429429- oops_in_progress ||430430- atomic_read(&printk_prefer_direct));431431-}432432-433430DECLARE_WAIT_QUEUE_HEAD(log_wait);434431/* All 3 protected by @syslog_lock. */435432/* the next printk record to read by syslog(READ) or /proc/kmsg */···22522385 printed_len = vprintk_store(facility, level, dev_info, fmt, args);2253238622542387 /* If called from the scheduler, we can not call up(). */22552255- if (!in_sched && allow_direct_printing()) {23882388+ if (!in_sched) {22562389 /*22572390 * The caller may be holding system-critical or22582258- * timing-sensitive locks. Disable preemption during direct23912391+ * timing-sensitive locks. Disable preemption during22592392 * printing of all remaining records to all consoles so that22602393 * this context can return as soon as possible. 
Hopefully22612394 * another printk() caller will take over the printing.···2298243122992432static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress);2300243323012301-static void printk_start_kthread(struct console *con);23022302-23032434#else /* CONFIG_PRINTK */2304243523052436#define CONSOLE_LOG_MAX 0···23312466}23322467static bool suppress_message_printing(int level) { return false; }23332468static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) { return true; }23342334-static void printk_start_kthread(struct console *con) { }23352335-static bool allow_direct_printing(void) { return true; }2336246923372470#endif /* CONFIG_PRINTK */23382471···25492686 /* If trylock fails, someone else is doing the printing */25502687 if (console_trylock())25512688 console_unlock();25522552- else {25532553- /*25542554- * If a new CPU comes online, the conditions for25552555- * printer_should_wake() may have changed for some25562556- * kthread printer with !CON_ANYTIME.25572557- */25582558- wake_up_klogd();25592559- }25602689 }25612690 return 0;25622691}···25682713 down_console_sem();25692714 if (console_suspended)25702715 return;25712571- console_kthreads_block();27162716+ console_locked = 1;25722717 console_may_schedule = 1;25732718}25742719EXPORT_SYMBOL(console_lock);···25892734 up_console_sem();25902735 return 0;25912736 }25922592- if (!console_kthreads_atomic_tryblock()) {25932593- up_console_sem();25942594- return 0;25952595- }27372737+ console_locked = 1;25962738 console_may_schedule = 0;25972739 return 1;25982740}25992741EXPORT_SYMBOL(console_trylock);2600274226012601-/*26022602- * This is used to help to make sure that certain paths within the VT code are26032603- * running with the console lock held. 
It is definitely not the perfect debug26042604- * tool (it is not known if the VT code is the task holding the console lock),26052605- * but it helps tracking those weird code paths in the console code such as26062606- * when the console is suspended: where the console is not locked but no26072607- * console printing may occur.26082608- *26092609- * Note: This returns true when the console is suspended but is not locked.26102610- * This is intentional because the VT code must consider that situation26112611- * the same as if the console was locked.26122612- */26132743int is_console_locked(void)26142744{26152615- return (console_kthreads_blocked || atomic_read(&console_kthreads_active));27452745+ return console_locked;26162746}26172747EXPORT_SYMBOL(is_console_locked);26182748···26202780 return atomic_read(&panic_cpu) != raw_smp_processor_id();26212781}2622278226232623-static inline bool __console_is_usable(short flags)27832783+/*27842784+ * Check if the given console is currently capable and allowed to print27852785+ * records.27862786+ *27872787+ * Requires the console_lock.27882788+ */27892789+static inline bool console_is_usable(struct console *con)26242790{26252625- if (!(flags & CON_ENABLED))27912791+ if (!(con->flags & CON_ENABLED))27922792+ return false;27932793+27942794+ if (!con->write)26262795 return false;2627279626282797 /*···26402791 * cope (CON_ANYTIME) don't call them until this CPU is officially up.26412792 */26422793 if (!cpu_online(raw_smp_processor_id()) &&26432643- !(flags & CON_ANYTIME))27942794+ !(con->flags & CON_ANYTIME))26442795 return false;2645279626462797 return true;26472798}2648279926492649-/*26502650- * Check if the given console is currently capable and allowed to print26512651- * records.26522652- *26532653- * Requires holding the console_lock.26542654- */26552655-static inline bool console_is_usable(struct console *con)26562656-{26572657- if (!con->write)26582658- return false;26592659-26602660- return 
__console_is_usable(con->flags);26612661-}26622662-26632800static void __console_unlock(void)26642801{26652665- /*26662666- * Depending on whether console_lock() or console_trylock() was used,26672667- * appropriately allow the kthread printers to continue.26682668- */26692669- if (console_kthreads_blocked)26702670- console_kthreads_unblock();26712671- else26722672- console_kthreads_atomic_unblock();26732673-26742674- /*26752675- * New records may have arrived while the console was locked.26762676- * Wake the kthread printers to print them.26772677- */26782678- wake_up_klogd();26792679-28022802+ console_locked = 0;26802803 up_console_sem();26812804}26822805···26662845 *26672846 * @handover will be set to true if a printk waiter has taken over the26682847 * console_lock, in which case the caller is no longer holding the26692669- * console_lock. Otherwise it is set to false. A NULL pointer may be provided26702670- * to disable allowing the console_lock to be taken over by a printk waiter.28482848+ * console_lock. 
Otherwise it is set to false.26712849 *26722850 * Returns false if the given console has no next record to print, otherwise26732851 * true.26742852 *26752675- * Requires the console_lock if @handover is non-NULL.26762676- * Requires con->lock otherwise.28532853+ * Requires the console_lock.26772854 */26782678-static bool __console_emit_next_record(struct console *con, char *text, char *ext_text,26792679- char *dropped_text, bool *handover)28552855+static bool console_emit_next_record(struct console *con, char *text, char *ext_text,28562856+ char *dropped_text, bool *handover)26802857{26812681- static atomic_t panic_console_dropped = ATOMIC_INIT(0);28582858+ static int panic_console_dropped;26822859 struct printk_info info;26832860 struct printk_record r;26842861 unsigned long flags;···2685286626862867 prb_rec_init_rd(&r, &info, text, CONSOLE_LOG_MAX);2687286826882688- if (handover)26892689- *handover = false;28692869+ *handover = false;2690287026912871 if (!prb_read_valid(prb, con->seq, &r))26922872 return false;···26932875 if (con->seq != r.info->seq) {26942876 con->dropped += r.info->seq - con->seq;26952877 con->seq = r.info->seq;26962696- if (panic_in_progress() &&26972697- atomic_fetch_inc_relaxed(&panic_console_dropped) > 10) {28782878+ if (panic_in_progress() && panic_console_dropped++ > 10) {26982879 suppress_panic_printk = 1;26992880 pr_warn_once("Too many dropped messages. Suppress messages on non-panic CPUs to prevent livelock.\n");27002881 }···27152898 len = record_print_text(&r, console_msg_format & MSG_FORMAT_SYSLOG, printk_time);27162899 }2717290027182718- if (handover) {27192719- /*27202720- * While actively printing out messages, if another printk()27212721- * were to occur on another CPU, it may wait for this one to27222722- * finish. 
This task can not be preempted if there is a27232723- * waiter waiting to take over.27242724- *27252725- * Interrupts are disabled because the hand over to a waiter27262726- * must not be interrupted until the hand over is completed27272727- * (@console_waiter is cleared).27282728- */27292729- printk_safe_enter_irqsave(flags);27302730- console_lock_spinning_enable();29012901+ /*29022902+ * While actively printing out messages, if another printk()29032903+ * were to occur on another CPU, it may wait for this one to29042904+ * finish. This task can not be preempted if there is a29052905+ * waiter waiting to take over.29062906+ *29072907+ * Interrupts are disabled because the hand over to a waiter29082908+ * must not be interrupted until the hand over is completed29092909+ * (@console_waiter is cleared).29102910+ */29112911+ printk_safe_enter_irqsave(flags);29122912+ console_lock_spinning_enable();2731291327322732- /* don't trace irqsoff print latency */27332733- stop_critical_timings();27342734- }27352735-29142914+ stop_critical_timings(); /* don't trace print latency */27362915 call_console_driver(con, write_text, len, dropped_text);29162916+ start_critical_timings();2737291727382918 con->seq++;2739291927402740- if (handover) {27412741- start_critical_timings();27422742- *handover = console_lock_spinning_disable_and_check();27432743- printk_safe_exit_irqrestore(flags);27442744- }29202920+ *handover = console_lock_spinning_disable_and_check();29212921+ printk_safe_exit_irqrestore(flags);27452922skip:27462923 return true;27472747-}27482748-27492749-/*27502750- * Print a record for a given console, but allow another printk() caller to27512751- * take over the console_lock and continue printing.27522752- *27532753- * Requires the console_lock, but depending on @handover after the call, the27542754- * caller may no longer have the console_lock.27552755- *27562756- * See __console_emit_next_record() for argument and return details.27572757- */27582758-static bool 
console_emit_next_record_transferable(struct console *con, char *text, char *ext_text,27592759- char *dropped_text, bool *handover)27602760-{27612761- /*27622762- * Handovers are only supported if threaded printers are atomically27632763- * blocked. The context taking over the console_lock may be atomic.27642764- */27652765- if (!console_kthreads_atomically_blocked()) {27662766- *handover = false;27672767- handover = NULL;27682768- }27692769-27702770- return __console_emit_next_record(con, text, ext_text, dropped_text, handover);27712924}2772292527732926/*···27582971 * were flushed to all usable consoles. A returned false informs the caller27592972 * that everything was not flushed (either there were no usable consoles or27602973 * another context has taken over printing or it is a panic situation and this27612761- * is not the panic CPU or direct printing is not preferred). Regardless the27622762- * reason, the caller should assume it is not useful to immediately try again.29742974+ * is not the panic CPU). Regardless the reason, the caller should assume it29752975+ * is not useful to immediately try again.27632976 *27642977 * Requires the console_lock.27652978 */···27762989 *handover = false;2777299027782991 do {27792779- /* Let the kthread printers do the work if they can. */27802780- if (!allow_direct_printing())27812781- return false;27822782-27832992 any_progress = false;2784299327852994 for_each_console(con) {···2787300427883005 if (con->flags & CON_EXTENDED) {27893006 /* Extended consoles do not print "dropped messages". 
*/27902790- progress = console_emit_next_record_transferable(con, &text[0],27912791- &ext_text[0], NULL, handover);30073007+ progress = console_emit_next_record(con, &text[0],30083008+ &ext_text[0], NULL,30093009+ handover);27923010 } else {27932793- progress = console_emit_next_record_transferable(con, &text[0],27942794- NULL, &dropped_text[0], handover);30113011+ progress = console_emit_next_record(con, &text[0],30123012+ NULL, &dropped_text[0],30133013+ handover);27953014 }27963015 if (*handover)27973016 return false;···29083123 if (oops_in_progress) {29093124 if (down_trylock_console_sem() != 0)29103125 return;29112911- if (!console_kthreads_atomic_tryblock()) {29122912- up_console_sem();29132913- return;29142914- }29153126 } else29163127 console_lock();2917312831293129+ console_locked = 1;29183130 console_may_schedule = 0;29193131 for_each_console(c)29203132 if ((c->flags & CON_ENABLED) && c->unblank)···31903408 nr_ext_console_drivers++;3191340931923410 newcon->dropped = 0;31933193- newcon->thread = NULL;31943194- newcon->blocked = true;31953195- mutex_init(&newcon->lock);31963196-31973411 if (newcon->flags & CON_PRINTBUFFER) {31983412 /* Get a consistent copy of @syslog_seq. */31993413 mutex_lock(&syslog_lock);···31993421 /* Begin with next message. */32003422 newcon->seq = prb_next_seq(prb);32013423 }32023202-32033203- if (printk_kthreads_available)32043204- printk_start_kthread(newcon);32053205-32063424 console_unlock();32073425 console_sysfs_notify();32083426···3225345132263452int unregister_console(struct console *console)32273453{32283228- struct task_struct *thd;32293454 struct console *con;32303455 int res;32313456···32653492 console_drivers->flags |= CON_CONSDEV;3266349332673494 console->flags &= ~CON_ENABLED;32683268-32693269- /*32703270- * console->thread can only be cleared under the console lock. But32713271- * stopping the thread must be done without the console lock. 
The32723272- * task that clears @thread is the task that stops the kthread.32733273- */32743274- thd = console->thread;32753275- console->thread = NULL;32763276-32773495 console_unlock();32783278-32793279- if (thd)32803280- kthread_stop(thd);32813281-32823496 console_sysfs_notify();3283349732843498 if (console->exit)···33613601}33623602late_initcall(printk_late_init);3363360333643364-static int __init printk_activate_kthreads(void)33653365-{33663366- struct console *con;33673367-33683368- console_lock();33693369- printk_kthreads_available = true;33703370- for_each_console(con)33713371- printk_start_kthread(con);33723372- console_unlock();33733373-33743374- return 0;33753375-}33763376-early_initcall(printk_activate_kthreads);33773377-33783604#if defined CONFIG_PRINTK33793605/* If @con is specified, only wait for that console. Otherwise wait for all. */33803606static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress)···34353689}34363690EXPORT_SYMBOL(pr_flush);3437369134383438-static void __printk_fallback_preferred_direct(void)34393439-{34403440- printk_prefer_direct_enter();34413441- pr_err("falling back to preferred direct printing\n");34423442- printk_kthreads_available = false;34433443-}34443444-34453445-/*34463446- * Enter preferred direct printing, but never exit. Mark console threads as34473447- * unavailable. The system is then forever in preferred direct printing and34483448- * any printing threads will exit.34493449- *34503450- * Must *not* be called under console_lock. Use34513451- * __printk_fallback_preferred_direct() if already holding console_lock.34523452- */34533453-static void printk_fallback_preferred_direct(void)34543454-{34553455- console_lock();34563456- __printk_fallback_preferred_direct();34573457- console_unlock();34583458-}34593459-34603460-/*34613461- * Print a record for a given console, not allowing another printk() caller34623462- * to take over. 
This is appropriate for contexts that do not have the34633463- * console_lock.34643464- *34653465- * See __console_emit_next_record() for argument and return details.34663466- */34673467-static bool console_emit_next_record(struct console *con, char *text, char *ext_text,34683468- char *dropped_text)34693469-{34703470- return __console_emit_next_record(con, text, ext_text, dropped_text, NULL);34713471-}34723472-34733473-static bool printer_should_wake(struct console *con, u64 seq)34743474-{34753475- short flags;34763476-34773477- if (kthread_should_stop() || !printk_kthreads_available)34783478- return true;34793479-34803480- if (con->blocked ||34813481- console_kthreads_atomically_blocked() ||34823482- block_console_kthreads ||34833483- system_state > SYSTEM_RUNNING ||34843484- oops_in_progress) {34853485- return false;34863486- }34873487-34883488- /*34893489- * This is an unsafe read from con->flags, but a false positive is34903490- * not a problem. Worst case it would allow the printer to wake up34913491- * although it is disabled. 
But the printer will notice that when34923492- * attempting to print and instead go back to sleep.34933493- */34943494- flags = data_race(READ_ONCE(con->flags));34953495-34963496- if (!__console_is_usable(flags))34973497- return false;34983498-34993499- return prb_read_valid(prb, seq, NULL);35003500-}35013501-35023502-static int printk_kthread_func(void *data)35033503-{35043504- struct console *con = data;35053505- char *dropped_text = NULL;35063506- char *ext_text = NULL;35073507- u64 seq = 0;35083508- char *text;35093509- int error;35103510-35113511- text = kmalloc(CONSOLE_LOG_MAX, GFP_KERNEL);35123512- if (!text) {35133513- con_printk(KERN_ERR, con, "failed to allocate text buffer\n");35143514- printk_fallback_preferred_direct();35153515- goto out;35163516- }35173517-35183518- if (con->flags & CON_EXTENDED) {35193519- ext_text = kmalloc(CONSOLE_EXT_LOG_MAX, GFP_KERNEL);35203520- if (!ext_text) {35213521- con_printk(KERN_ERR, con, "failed to allocate ext_text buffer\n");35223522- printk_fallback_preferred_direct();35233523- goto out;35243524- }35253525- } else {35263526- dropped_text = kmalloc(DROPPED_TEXT_MAX, GFP_KERNEL);35273527- if (!dropped_text) {35283528- con_printk(KERN_ERR, con, "failed to allocate dropped_text buffer\n");35293529- printk_fallback_preferred_direct();35303530- goto out;35313531- }35323532- }35333533-35343534- con_printk(KERN_INFO, con, "printing thread started\n");35353535-35363536- for (;;) {35373537- /*35383538- * Guarantee this task is visible on the waitqueue before35393539- * checking the wake condition.35403540- *35413541- * The full memory barrier within set_current_state() of35423542- * prepare_to_wait_event() pairs with the full memory barrier35433543- * within wq_has_sleeper().35443544- *35453545- * This pairs with __wake_up_klogd:A.35463546- */35473547- error = wait_event_interruptible(log_wait,35483548- printer_should_wake(con, seq)); /* LMM(printk_kthread_func:A) */35493549-35503550- if (kthread_should_stop() || 
!printk_kthreads_available)35513551- break;35523552-35533553- if (error)35543554- continue;35553555-35563556- error = mutex_lock_interruptible(&con->lock);35573557- if (error)35583558- continue;35593559-35603560- if (con->blocked ||35613561- !console_kthread_printing_tryenter()) {35623562- /* Another context has locked the console_lock. */35633563- mutex_unlock(&con->lock);35643564- continue;35653565- }35663566-35673567- /*35683568- * Although this context has not locked the console_lock, it35693569- * is known that the console_lock is not locked and it is not35703570- * possible for any other context to lock the console_lock.35713571- * Therefore it is safe to read con->flags.35723572- */35733573-35743574- if (!__console_is_usable(con->flags)) {35753575- console_kthread_printing_exit();35763576- mutex_unlock(&con->lock);35773577- continue;35783578- }35793579-35803580- /*35813581- * Even though the printk kthread is always preemptible, it is35823582- * still not allowed to call cond_resched() from within35833583- * console drivers. The task may become non-preemptible in the35843584- * console driver call chain. For example, vt_console_print()35853585- * takes a spinlock and then can call into fbcon_redraw(),35863586- * which can conditionally invoke cond_resched().35873587- */35883588- console_may_schedule = 0;35893589- console_emit_next_record(con, text, ext_text, dropped_text);35903590-35913591- seq = con->seq;35923592-35933593- console_kthread_printing_exit();35943594-35953595- mutex_unlock(&con->lock);35963596- }35973597-35983598- con_printk(KERN_INFO, con, "printing thread stopped\n");35993599-out:36003600- kfree(dropped_text);36013601- kfree(ext_text);36023602- kfree(text);36033603-36043604- console_lock();36053605- /*36063606- * If this kthread is being stopped by another task, con->thread will36073607- * already be NULL. That is fine. 
The important thing is that it is36083608- * NULL after the kthread exits.36093609- */36103610- con->thread = NULL;36113611- console_unlock();36123612-36133613- return 0;36143614-}36153615-36163616-/* Must be called under console_lock. */36173617-static void printk_start_kthread(struct console *con)36183618-{36193619- /*36203620- * Do not start a kthread if there is no write() callback. The36213621- * kthreads assume the write() callback exists.36223622- */36233623- if (!con->write)36243624- return;36253625-36263626- con->thread = kthread_run(printk_kthread_func, con,36273627- "pr/%s%d", con->name, con->index);36283628- if (IS_ERR(con->thread)) {36293629- con->thread = NULL;36303630- con_printk(KERN_ERR, con, "unable to start printing thread\n");36313631- __printk_fallback_preferred_direct();36323632- return;36333633- }36343634-}36353635-36363692/*36373693 * Delayed printk version, for scheduler-internal messages:36383694 */36393639-#define PRINTK_PENDING_WAKEUP 0x0136403640-#define PRINTK_PENDING_DIRECT_OUTPUT 0x0236953695+#define PRINTK_PENDING_WAKEUP 0x0136963696+#define PRINTK_PENDING_OUTPUT 0x023641369736423698static DEFINE_PER_CPU(int, printk_pending);36433699···34473899{34483900 int pending = this_cpu_xchg(printk_pending, 0);3449390134503450- if (pending & PRINTK_PENDING_DIRECT_OUTPUT) {34513451- printk_prefer_direct_enter();34523452-39023902+ if (pending & PRINTK_PENDING_OUTPUT) {34533903 /* If trylock fails, someone else is doing the printing */34543904 if (console_trylock())34553905 console_unlock();34563456-34573457- printk_prefer_direct_exit();34583906 }3459390734603908 if (pending & PRINTK_PENDING_WAKEUP)···34753931 * prepare_to_wait_event(), which is called after ___wait_event() adds34763932 * the waiter but before it has checked the wait condition.34773933 *34783478- * This pairs with devkmsg_read:A, syslog_print:A, and34793479- * printk_kthread_func:A.39343934+ * This pairs with devkmsg_read:A and syslog_print:A.34803935 */34813936 if 
(wq_has_sleeper(&log_wait) || /* LMM(__wake_up_klogd:A) */34823482- (val & PRINTK_PENDING_DIRECT_OUTPUT)) {39373937+ (val & PRINTK_PENDING_OUTPUT)) {34833938 this_cpu_or(printk_pending, val);34843939 irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));34853940 }···34963953 * New messages may have been added directly to the ringbuffer34973954 * using vprintk_store(), so wake any waiters as well.34983955 */34993499- int val = PRINTK_PENDING_WAKEUP;35003500-35013501- /*35023502- * Make sure that some context will print the messages when direct35033503- * printing is allowed. This happens in situations when the kthreads35043504- * may not be as reliable or perhaps unusable.35053505- */35063506- if (allow_direct_printing())35073507- val |= PRINTK_PENDING_DIRECT_OUTPUT;35083508-35093509- __wake_up_klogd(val);39563956+ __wake_up_klogd(PRINTK_PENDING_WAKEUP | PRINTK_PENDING_OUTPUT);35103957}3511395835123959void printk_trigger_flush(void)
-32
kernel/printk/printk_safe.c
···88#include <linux/smp.h>99#include <linux/cpumask.h>1010#include <linux/printk.h>1111-#include <linux/console.h>1211#include <linux/kprobes.h>1313-#include <linux/delay.h>14121513#include "internal.h"1614···5052 return vprintk_default(fmt, args);5153}5254EXPORT_SYMBOL(vprintk);5353-5454-/**5555- * try_block_console_kthreads() - Try to block console kthreads and5656- * make the global console_lock() avaialble5757- *5858- * @timeout_ms: The maximum time (in ms) to wait.5959- *6060- * Prevent console kthreads from starting processing new messages. Wait6161- * until the global console_lock() become available.6262- *6363- * Context: Can be called in any context.6464- */6565-void try_block_console_kthreads(int timeout_ms)6666-{6767- block_console_kthreads = true;6868-6969- /* Do not wait when the console lock could not be safely taken. */7070- if (this_cpu_read(printk_context) || in_nmi())7171- return;7272-7373- while (timeout_ms > 0) {7474- if (console_trylock()) {7575- console_unlock();7676- return;7777- }7878-7979- udelay(1000);8080- timeout_ms -= 1;8181- }8282-}
-2
kernel/rcu/tree_stall.h
···647647 * See Documentation/RCU/stallwarn.rst for info on how to debug648648 * RCU CPU stall warnings.649649 */650650- printk_prefer_direct_enter();651650 trace_rcu_stall_warning(rcu_state.name, TPS("SelfDetected"));652651 pr_err("INFO: %s self-detected stall on CPU\n", rcu_state.name);653652 raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags);···684685 */685686 set_tsk_need_resched(current);686687 set_preempt_need_resched();687687- printk_prefer_direct_exit();688688}689689690690static void check_cpu_stall(struct rcu_data *rdp)
+1-15
kernel/reboot.c
···8282{8383 blocking_notifier_call_chain(&reboot_notifier_list, SYS_RESTART, cmd);8484 system_state = SYSTEM_RESTART;8585- try_block_console_kthreads(10000);8685 usermodehelper_disable();8786 device_shutdown();8887}···270271 blocking_notifier_call_chain(&reboot_notifier_list,271272 (state == SYSTEM_HALT) ? SYS_HALT : SYS_POWER_OFF, NULL);272273 system_state = state;273273- try_block_console_kthreads(10000);274274 usermodehelper_disable();275275 device_shutdown();276276}···819821 ret = run_cmd(reboot_cmd);820822821823 if (ret) {822822- printk_prefer_direct_enter();823824 pr_warn("Failed to start orderly reboot: forcing the issue\n");824825 emergency_sync();825826 kernel_restart(NULL);826826- printk_prefer_direct_exit();827827 }828828829829 return ret;···834838 ret = run_cmd(poweroff_cmd);835839836840 if (ret && force) {837837- printk_prefer_direct_enter();838841 pr_warn("Failed to start orderly shutdown: forcing the issue\n");839842840843 /*···843848 */844849 emergency_sync();845850 kernel_power_off();846846- printk_prefer_direct_exit();847851 }848852849853 return ret;···900906 */901907static void hw_failure_emergency_poweroff_func(struct work_struct *work)902908{903903- printk_prefer_direct_enter();904904-905909 /*906910 * We have reached here after the emergency shutdown waiting period has907911 * expired. This means orderly_poweroff has not been able to shut off···916924 */917925 pr_emerg("Hardware protection shutdown failed. Trying emergency restart\n");918926 emergency_restart();919919-920920- printk_prefer_direct_exit();921927}922928923929static DECLARE_DELAYED_WORK(hw_failure_emergency_poweroff_work,···954964{955965 static atomic_t allow_proceed = ATOMIC_INIT(1);956966957957- printk_prefer_direct_enter();958958-959967 pr_emerg("HARDWARE PROTECTION shutdown (%s)\n", reason);960968961969 /* Shutdown should be initiated only once. 
*/962970 if (!atomic_dec_and_test(&allow_proceed))963963- goto out;971971+ return;964972965973 /*966974 * Queue a backup emergency shutdown in the event of···966978 */967979 hw_failure_emergency_poweroff(ms_until_forced);968980 orderly_poweroff(true);969969-out:970970- printk_prefer_direct_exit();971981}972982EXPORT_SYMBOL_GPL(hw_protection_shutdown);973983
···154154	if (unlikely(!handler))155155		return NULL;156156157157+	/*158158+	 * This expects the caller will set up a rethook on a function entry.159159+	 * When the function returns, the rethook will eventually be reclaimed160160+	 * or released in the rethook_recycle() with call_rcu().161161+	 * This means the caller must run in an RCU-available context.162162+	 */163163+	if (unlikely(!rcu_is_watching()))164164+		return NULL;165165+157166	fn = freelist_try_get(&rh->pool);158167	if (!fn)159168		return NULL;
···17181718kretprobe_dispatcher(struct kretprobe_instance *ri, struct pt_regs *regs)17191719{17201720	struct kretprobe *rp = get_kretprobe(ri);17211721-	struct trace_kprobe *tk = container_of(rp, struct trace_kprobe, rp);17211721+	struct trace_kprobe *tk;1722172217231723+	/*17241724+	 * There is a small chance that get_kretprobe(ri) returns NULL when17251725+	 * the kretprobe is unregistered on another CPU between kretprobe's17261726+	 * trampoline_handler and this function.17271727+	 */17281728+	if (unlikely(!rp))17291729+		return 0;17301730+17311731+	tk = container_of(rp, struct trace_kprobe, rp);17231732	raw_cpu_inc(*tk->nhit);1724173317251734	if (trace_probe_test_flag(&tk->tp, TP_FLAG_TRACE))
···424424 /* Start period for the next softlockup warning. */425425 update_report_ts();426426427427- printk_prefer_direct_enter();428428-429427 pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",430428 smp_processor_id(), duration,431429 current->comm, task_pid_nr(current));···442444 add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK);443445 if (softlockup_panic)444446 panic("softlockup: hung tasks");445445-446446- printk_prefer_direct_exit();447447 }448448449449 return HRTIMER_RESTART;
-4
kernel/watchdog_hld.c
···135135 if (__this_cpu_read(hard_watchdog_warn) == true)136136 return;137137138138- printk_prefer_direct_enter();139139-140138 pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n",141139 this_cpu);142140 print_modules();···154156155157 if (hardlockup_panic)156158 nmi_panic(regs, "Hard LOCKUP");157157-158158- printk_prefer_direct_exit();159159160160 __this_cpu_write(hard_watchdog_warn, true);161161 return;
+8
mm/damon/reclaim.c
···374374}375375static DECLARE_DELAYED_WORK(damon_reclaim_timer, damon_reclaim_timer_fn);376376377377+static bool damon_reclaim_initialized;378378+377379static int enabled_store(const char *val,378380		const struct kernel_param *kp)379381{380382	int rc = param_set_bool(val, kp);381383382384	if (rc < 0)385385+		return rc;386386+387387+	/* system_wq might not be initialized yet */388388+	if (!damon_reclaim_initialized)383389		return rc;384390385391	if (enabled)···455449	damon_add_target(ctx, target);456450457451	schedule_delayed_work(&damon_reclaim_timer, 0);452452+453453+	damon_reclaim_initialized = true;458454	return 0;459455}460456
+12-3
mm/filemap.c
···23852385 continue;23862386 if (xas.xa_index > max || xa_is_value(folio))23872387 break;23882388+ if (xa_is_sibling(folio))23892389+ break;23882390 if (!folio_try_get_rcu(folio))23892391 goto retry;23902392···26312629 return err;26322630}2633263126322632+static inline bool pos_same_folio(loff_t pos1, loff_t pos2, struct folio *folio)26332633+{26342634+ unsigned int shift = folio_shift(folio);26352635+26362636+ return (pos1 >> shift == pos2 >> shift);26372637+}26382638+26342639/**26352640 * filemap_read - Read data from the page cache.26362641 * @iocb: The iocb to read.···27092700 writably_mapped = mapping_writably_mapped(mapping);2710270127112702 /*27122712- * When a sequential read accesses a page several times, only27032703+ * When a read accesses the same folio several times, only27132704 * mark it as accessed the first time.27142705 */27152715- if (iocb->ki_pos >> PAGE_SHIFT !=27162716- ra->prev_pos >> PAGE_SHIFT)27062706+ if (!pos_same_folio(iocb->ki_pos, ra->prev_pos - 1,27072707+ fbatch.folios[0]))27172708 folio_mark_accessed(fbatch.folios[0]);2718270927192710 for (i = 0; i < folio_batch_count(&fbatch); i++) {
+1
mm/huge_memory.c
···23772377 page_tail);23782378 page_tail->mapping = head->mapping;23792379 page_tail->index = head->index + tail;23802380+ page_tail->private = 0;2380238123812382 /* Page flags must be visible before we make the page non-compound. */23822383 smp_wmb();
···360360 unsigned long flags;361361 struct slab *slab;362362 void *addr;363363+ const bool random_right_allocate = prandom_u32_max(2);364364+ const bool random_fault = CONFIG_KFENCE_STRESS_TEST_FAULTS &&365365+ !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS);363366364367 /* Try to obtain a free object. */365368 raw_spin_lock_irqsave(&kfence_freelist_lock, flags);···407404 * is that the out-of-bounds accesses detected are deterministic for408405 * such allocations.409406 */410410- if (prandom_u32_max(2)) {407407+ if (random_right_allocate) {411408 /* Allocate on the "right" side, re-calculate address. */412409 meta->addr += PAGE_SIZE - size;413410 meta->addr = ALIGN_DOWN(meta->addr, cache->align);···447444 if (cache->ctor)448445 cache->ctor(addr);449446450450- if (CONFIG_KFENCE_STRESS_TEST_FAULTS && !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS))447447+ if (random_fault)451448 kfence_protect(meta->addr); /* Random "faults" by protecting the object. */452449453450 atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCATED]);
+1-1
mm/madvise.c
···11121112 } else {11131113 pr_info("Injecting memory failure for pfn %#lx at process virtual address %#lx\n",11141114 pfn, start);11151115- ret = memory_failure(pfn, MF_COUNT_INCREASED);11151115+ ret = memory_failure(pfn, MF_COUNT_INCREASED | MF_SW_SIMULATED);11161116 if (ret == -EOPNOTSUPP)11171117 ret = 0;11181118 }
···69697070atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);71717272+static bool hw_memory_failure __read_mostly = false;7373+7274static bool __page_handle_poison(struct page *page)7375{7476 int ret;···1770176817711769 mutex_lock(&mf_mutex);1772177017711771+ if (!(flags & MF_SW_SIMULATED))17721772+ hw_memory_failure = true;17731773+17731774 p = pfn_to_online_page(pfn);17741775 if (!p) {17751776 res = arch_memory_failure(pfn, flags);···21072102 page = compound_head(p);2108210321092104 mutex_lock(&mf_mutex);21052105+21062106+ if (hw_memory_failure) {21072107+ unpoison_pr_info("Unpoison: Disabled after HW memory failure %#lx\n",21082108+ pfn, &unpoison_rs);21092109+ ret = -EOPNOTSUPP;21102110+ goto unlock_mutex;21112111+ }2110211221112113 if (!PageHWPoison(p)) {21122114 unpoison_pr_info("Unpoison: Page was already unpoisoned %#lx\n",
+1
mm/migrate.c
···11061106 if (!newpage)11071107 return -ENOMEM;1108110811091109+ newpage->private = 0;11091110 rc = __unmap_and_move(page, newpage, force, mode);11101111 if (rc == MIGRATEPAGE_SUCCESS)11111112 set_page_owner_migrate_reason(newpage, reason);
+2
mm/page_isolation.c
···286286 * @flags:			isolation flags287287 * @gfp_flags:			GFP flags used for migrating pages288288 * @isolate_before:	isolate the pageblock before the boundary_pfn289289+ * @skip_isolation:	the flag to skip the pageblock isolation in the second290290+ *			isolate_single_pageblock()289291 *290292 * Free and in-use pages can be as big as MAX_ORDER-1 and contain more than one291293 * pageblock. When not all pageblocks within a page are isolated at the same
+2
mm/readahead.c
···510510 new_order--;511511 }512512513513+ filemap_invalidate_lock_shared(mapping);513514 while (index <= limit) {514515 unsigned int order = new_order;515516···537536 }538537539538 read_pages(ractl);539539+ filemap_invalidate_unlock_shared(mapping);540540541541 /*542542 * If there were already pages in the page cache, then we may have
···881881 * lru_disable_count = 0 will have exited the critical882882 * section when synchronize_rcu() returns.883883 */884884- synchronize_rcu();884884+ synchronize_rcu_expedited();885885#ifdef CONFIG_SMP886886 __lru_add_drain_all(true);887887#else
+15-10
net/core/dev.c
···397397/* Device list removal398398 * caller must respect a RCU grace period before freeing/reusing dev399399 */400400-static void unlist_netdevice(struct net_device *dev)400400+static void unlist_netdevice(struct net_device *dev, bool lock)401401{402402 ASSERT_RTNL();403403404404 /* Unlink dev from the device chain */405405- write_lock(&dev_base_lock);405405+ if (lock)406406+ write_lock(&dev_base_lock);406407 list_del_rcu(&dev->dev_list);407408 netdev_name_node_del(dev->name_node);408409 hlist_del_rcu(&dev->index_hlist);409409- write_unlock(&dev_base_lock);410410+ if (lock)411411+ write_unlock(&dev_base_lock);410412411413 dev_base_seq_inc(dev_net(dev));412414}···1004510043 goto err_uninit;10046100441004710045 ret = netdev_register_kobject(dev);1004810048- if (ret) {1004910049- dev->reg_state = NETREG_UNREGISTERED;1004610046+ write_lock(&dev_base_lock);1004710047+ dev->reg_state = ret ? NETREG_UNREGISTERED : NETREG_REGISTERED;1004810048+ write_unlock(&dev_base_lock);1004910049+ if (ret)1005010050 goto err_uninit;1005110051- }1005210052- dev->reg_state = NETREG_REGISTERED;10053100511005410052 __netdev_update_features(dev);1005510053···1033110329 continue;1033210330 }10333103311033210332+ write_lock(&dev_base_lock);1033410333 dev->reg_state = NETREG_UNREGISTERED;1033410334+ write_unlock(&dev_base_lock);1033510335 linkwatch_forget_dev(dev);1033610336 }1033710337···10814108101081510811 list_for_each_entry(dev, head, unreg_list) {1081610812 /* And unlink it from device chain. */1081710817- unlist_netdevice(dev);1081810818-1081310813+ write_lock(&dev_base_lock);1081410814+ unlist_netdevice(dev, false);1081910815 dev->reg_state = NETREG_UNREGISTERING;1081610816+ write_unlock(&dev_base_lock);1082010817 }1082110818 flush_all_backlogs();1082210819···1096410959 dev_close(dev);10965109601096610961 /* And unlink it from device chain */1096710967- unlist_netdevice(dev);1096210962+ unlist_netdevice(dev, true);10968109631096910964 synchronize_net();1097010965
+28-6
net/core/filter.c
···65166516 ifindex, proto, netns_id, flags);6517651765186518 if (sk) {65196519- sk = sk_to_full_sk(sk);65206520- if (!sk_fullsock(sk)) {65196519+ struct sock *sk2 = sk_to_full_sk(sk);65206520+65216521+ /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk65226522+ * sock refcnt is decremented to prevent a request_sock leak.65236523+ */65246524+ if (!sk_fullsock(sk2))65256525+ sk2 = NULL;65266526+ if (sk2 != sk) {65216527 sock_gen_put(sk);65226522- return NULL;65286528+ /* Ensure there is no need to bump sk2 refcnt */65296529+ if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {65306530+ WARN_ONCE(1, "Found non-RCU, unreferenced socket!");65316531+ return NULL;65326532+ }65336533+ sk = sk2;65236534 }65246535 }65256536···65646553 flags);6565655465666555 if (sk) {65676567- sk = sk_to_full_sk(sk);65686568- if (!sk_fullsock(sk)) {65566556+ struct sock *sk2 = sk_to_full_sk(sk);65576557+65586558+ /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk65596559+ * sock refcnt is decremented to prevent a request_sock leak.65606560+ */65616561+ if (!sk_fullsock(sk2))65626562+ sk2 = NULL;65636563+ if (sk2 != sk) {65696564 sock_gen_put(sk);65706570- return NULL;65656565+ /* Ensure there is no need to bump sk2 refcnt */65666566+ if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {65676567+ WARN_ONCE(1, "Found non-RCU, unreferenced socket!");65686568+ return NULL;65696569+ }65706570+ sk = sk2;65716571 }65726572 }65736573
···538538 goto out;539539 }540540541541- skb = xsk_build_skb(xs, &desc);542542- if (IS_ERR(skb)) {543543- err = PTR_ERR(skb);544544- goto out;545545- }546546-547541 /* This is the backpressure mechanism for the Tx path.548542 * Reserve space in the completion queue and only proceed549543 * if there is space in it. This avoids having to implement···546552 spin_lock_irqsave(&xs->pool->cq_lock, flags);547553 if (xskq_prod_reserve(xs->pool->cq)) {548554 spin_unlock_irqrestore(&xs->pool->cq_lock, flags);549549- kfree_skb(skb);550555 goto out;551556 }552557 spin_unlock_irqrestore(&xs->pool->cq_lock, flags);558558+559559+ skb = xsk_build_skb(xs, &desc);560560+ if (IS_ERR(skb)) {561561+ err = PTR_ERR(skb);562562+ spin_lock_irqsave(&xs->pool->cq_lock, flags);563563+ xskq_prod_cancel(xs->pool->cq);564564+ spin_unlock_irqrestore(&xs->pool->cq_lock, flags);565565+ goto out;566566+ }553567554568 err = __dev_direct_xmit(skb, xs->queue_id);555569 if (err == NETDEV_TX_BUSY) {
+25-4
samples/fprobe/fprobe_example.c
···2121#define BACKTRACE_DEPTH 162222#define MAX_SYMBOL_LEN 40962323struct fprobe sample_probe;2424+static unsigned long nhit;24252526static char symbol[MAX_SYMBOL_LEN] = "kernel_clone";2627module_param_string(symbol, symbol, sizeof(symbol), 0644);···2928module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644);3029static bool stackdump = true;3130module_param(stackdump, bool, 0644);3131+static bool use_trace = false;3232+module_param(use_trace, bool, 0644);32333334static void show_backtrace(void)3435{···43404441static void sample_entry_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs)4542{4646- pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);4343+ if (use_trace)4444+ /*4545+ * This is just an example, no kernel code should call4646+ * trace_printk() except when actively debugging.4747+ */4848+ trace_printk("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);4949+ else5050+ pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip);5151+ nhit++;4752 if (stackdump)4853 show_backtrace();4954}···6049{6150 unsigned long rip = instruction_pointer(regs);62516363- pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",6464- (void *)ip, (void *)ip, (void *)rip, (void *)rip);5252+ if (use_trace)5353+ /*5454+ * This is just an example, no kernel code should call5555+ * trace_printk() except when actively debugging.5656+ */5757+ trace_printk("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",5858+ (void *)ip, (void *)ip, (void *)rip, (void *)rip);5959+ else6060+ pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n",6161+ (void *)ip, (void *)ip, (void *)rip, (void *)rip);6262+ nhit++;6563 if (stackdump)6664 show_backtrace();6765}···132112{133113 unregister_fprobe(&sample_probe);134114135135- pr_info("fprobe at %s unregistered\n", symbol);115115+ pr_info("fprobe at %s unregistered. %ld times hit, %ld times missed\n",116116+ symbol, nhit, sample_probe.nmissed);136117}137118138119module_init(fprobe_init)
+3
scripts/gen_autoksyms.sh
···5656# point addresses.5757sed -e 's/^\.//' |5858sort -u |5959+# Ignore __this_module. It's not an exported symbol, and will be resolved6060+# when the final .ko's are linked.6161+grep -v '^__this_module$' |5962sed -e 's/\(.*\)/#define __KSYM_\1 1/' >> "$output_file"
+1-1
scripts/mod/modpost.c
···980980},981981/* Do not export init/exit functions or data */982982{983983- .fromsec = { "__ksymtab*", NULL },983983+ .fromsec = { "___ksymtab*", NULL },984984 .bad_tosec = { INIT_SECTIONS, EXIT_SECTIONS, NULL },985985 .mismatch = EXPORT_TO_INIT_EXIT,986986 .symbol_white_list = { DEFAULT_SYMBOL_WHITE_LIST, NULL },
···55555656 /* find max number of channels based on format_configuration */5757 if (fmt_configs->fmt_count) {5858- dev_dbg(dev, "%s: found %d format definitions\n",5959- __func__, fmt_configs->fmt_count);5858+ dev_dbg(dev, "found %d format definitions\n",5959+ fmt_configs->fmt_count);60606161 for (i = 0; i < fmt_configs->fmt_count; i++) {6262 struct wav_fmt_ext *fmt_ext;···6666 if (fmt_ext->fmt.channels > max_ch)6767 max_ch = fmt_ext->fmt.channels;6868 }6969- dev_dbg(dev, "%s: max channels found %d\n", __func__, max_ch);6969+ dev_dbg(dev, "max channels found %d\n", max_ch);7070 } else {7171- dev_dbg(dev, "%s: No format information found\n", __func__);7171+ dev_dbg(dev, "No format information found\n");7272 }73737474 if (cfg->device_config.config_type != NHLT_CONFIG_TYPE_MIC_ARRAY) {···9595 }96969797 if (dmic_geo > 0) {9898- dev_dbg(dev, "%s: Array with %d dmics\n", __func__, dmic_geo);9898+ dev_dbg(dev, "Array with %d dmics\n", dmic_geo);9999 }100100 if (max_ch > dmic_geo) {101101- dev_dbg(dev, "%s: max channels %d exceed dmic number %d\n",102102- __func__, max_ch, dmic_geo);101101+ dev_dbg(dev, "max channels %d exceed dmic number %d\n",102102+ max_ch, dmic_geo);103103 }104104 }105105 }106106107107- dev_dbg(dev, "%s: dmic number %d max_ch %d\n",108108- __func__, dmic_geo, max_ch);107107+ dev_dbg(dev, "dmic number %d max_ch %d\n", dmic_geo, max_ch);109108110109 return dmic_geo;111110}
+4-3
sound/pci/hda/hda_auto_parser.c
···819819 snd_hda_set_pin_ctl_cache(codec, cfg->nid, cfg->val);820820}821821822822-static void apply_fixup(struct hda_codec *codec, int id, int action, int depth)822822+void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth)823823{824824 const char *modelname = codec->fixup_name;825825···829829 if (++depth > 10)830830 break;831831 if (fix->chained_before)832832- apply_fixup(codec, fix->chain_id, action, depth + 1);832832+ __snd_hda_apply_fixup(codec, fix->chain_id, action, depth + 1);833833834834 switch (fix->type) {835835 case HDA_FIXUP_PINS:···870870 id = fix->chain_id;871871 }872872}873873+EXPORT_SYMBOL_GPL(__snd_hda_apply_fixup);873874874875/**875876 * snd_hda_apply_fixup - Apply the fixup chain with the given action···880879void snd_hda_apply_fixup(struct hda_codec *codec, int action)881880{882881 if (codec->fixup_list)883883- apply_fixup(codec, codec->fixup_id, action, 0);882882+ __snd_hda_apply_fixup(codec, codec->fixup_id, action, 0);884883}885884EXPORT_SYMBOL_GPL(snd_hda_apply_fixup);886885
+1
sound/pci/hda/hda_local.h
···348348void snd_hda_apply_pincfgs(struct hda_codec *codec,349349 const struct hda_pintbl *cfg);350350void snd_hda_apply_fixup(struct hda_codec *codec, int action);351351+void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth);351352void snd_hda_pick_fixup(struct hda_codec *codec,352353 const struct hda_model_fixup *models,353354 const struct snd_pci_quirk *quirk,
···
 	I915_MOCS_CACHED,
 };
 
-/*
+/**
+ * enum drm_i915_gem_engine_class - uapi engine type enumeration
+ *
  * Different engines serve different roles, and there may be more than one
- * engine serving each role. enum drm_i915_gem_engine_class provides a
- * classification of the role of the engine, which may be used when requesting
- * operations to be performed on a certain subset of engines, or for providing
- * information about that group.
+ * engine serving each role. This enum provides a classification of the role
+ * of the engine, which may be used when requesting operations to be performed
+ * on a certain subset of engines, or for providing information about that
+ * group.
  */
 enum drm_i915_gem_engine_class {
+	/**
+	 * @I915_ENGINE_CLASS_RENDER:
+	 *
+	 * Render engines support instructions used for 3D, Compute (GPGPU),
+	 * and programmable media workloads. These instructions fetch data and
+	 * dispatch individual work items to threads that operate in parallel.
+	 * The threads run small programs (called "kernels" or "shaders") on
+	 * the GPU's execution units (EUs).
+	 */
 	I915_ENGINE_CLASS_RENDER	= 0,
+
+	/**
+	 * @I915_ENGINE_CLASS_COPY:
+	 *
+	 * Copy engines (also referred to as "blitters") support instructions
+	 * that move blocks of data from one location in memory to another,
+	 * or that fill a specified location of memory with fixed data.
+	 * Copy engines can perform pre-defined logical or bitwise operations
+	 * on the source, destination, or pattern data.
+	 */
 	I915_ENGINE_CLASS_COPY		= 1,
+
+	/**
+	 * @I915_ENGINE_CLASS_VIDEO:
+	 *
+	 * Video engines (also referred to as "bit stream decode" (BSD) or
+	 * "vdbox") support instructions that perform fixed-function media
+	 * decode and encode.
+	 */
 	I915_ENGINE_CLASS_VIDEO		= 2,
+
+	/**
+	 * @I915_ENGINE_CLASS_VIDEO_ENHANCE:
+	 *
+	 * Video enhancement engines (also referred to as "vebox") support
+	 * instructions related to image enhancement.
+	 */
 	I915_ENGINE_CLASS_VIDEO_ENHANCE	= 3,
 
-	/* should be kept compact */
+	/**
+	 * @I915_ENGINE_CLASS_COMPUTE:
+	 *
+	 * Compute engines support a subset of the instructions available
+	 * on render engines: compute engines support Compute (GPGPU) and
+	 * programmable media workloads, but do not support the 3D pipeline.
+	 */
+	I915_ENGINE_CLASS_COMPUTE	= 4,
 
+	/* Values in this enum should be kept compact. */
+
+	/**
+	 * @I915_ENGINE_CLASS_INVALID:
+	 *
+	 * Placeholder value to represent an invalid engine class assignment.
+	 */
 	I915_ENGINE_CLASS_INVALID	= -1
 };
 
-/*
+/**
+ * struct i915_engine_class_instance - Engine class/instance identifier
+ *
  * There may be more than one engine fulfilling any role within the system.
  * Each engine of a class is given a unique instance number and therefore
  * any engine can be specified by its class:instance tuplet. APIs that allow
···
  * for this identification.
  */
 struct i915_engine_class_instance {
-	__u16 engine_class; /* see enum drm_i915_gem_engine_class */
-	__u16 engine_instance;
+	/**
+	 * @engine_class:
+	 *
+	 * Engine class from enum drm_i915_gem_engine_class
+	 */
+	__u16 engine_class;
 #define I915_ENGINE_CLASS_INVALID_NONE -1
 #define I915_ENGINE_CLASS_INVALID_VIRTUAL -2
+
+	/**
+	 * @engine_instance:
+	 *
+	 * Engine instance.
+	 */
+	__u16 engine_instance;
 };
 
 /**
···
 	DRM_I915_PERF_RECORD_MAX /* non-ABI */
 };
 
-/*
+/**
+ * struct drm_i915_perf_oa_config
+ *
  * Structure to upload perf dynamic configuration into the kernel.
  */
 struct drm_i915_perf_oa_config {
-	/** String formatted like "%08x-%04x-%04x-%04x-%012x" */
+	/**
+	 * @uuid:
+	 *
+	 * String formatted like "%\08x-%\04x-%\04x-%\04x-%\012x"
+	 */
 	char uuid[36];
 
+	/**
+	 * @n_mux_regs:
+	 *
+	 * Number of mux regs in &mux_regs_ptr.
+	 */
 	__u32 n_mux_regs;
+
+	/**
+	 * @n_boolean_regs:
+	 *
+	 * Number of boolean regs in &boolean_regs_ptr.
+	 */
 	__u32 n_boolean_regs;
+
+	/**
+	 * @n_flex_regs:
+	 *
+	 * Number of flex regs in &flex_regs_ptr.
+	 */
 	__u32 n_flex_regs;
 
-	/*
-	 * These fields are pointers to tuples of u32 values (register address,
-	 * value). For example the expected length of the buffer pointed by
-	 * mux_regs_ptr is (2 * sizeof(u32) * n_mux_regs).
+	/**
+	 * @mux_regs_ptr:
+	 *
+	 * Pointer to tuples of u32 values (register address, value) for mux
+	 * registers. Expected length of buffer is (2 * sizeof(u32) *
+	 * &n_mux_regs).
 	 */
 	__u64 mux_regs_ptr;
+
+	/**
+	 * @boolean_regs_ptr:
+	 *
+	 * Pointer to tuples of u32 values (register address, value) for mux
+	 * registers. Expected length of buffer is (2 * sizeof(u32) *
+	 * &n_boolean_regs).
+	 */
 	__u64 boolean_regs_ptr;
+
+	/**
+	 * @flex_regs_ptr:
+	 *
+	 * Pointer to tuples of u32 values (register address, value) for mux
+	 * registers. Expected length of buffer is (2 * sizeof(u32) *
+	 * &n_flex_regs).
+	 */
 	__u64 flex_regs_ptr;
 };
···
  * @data_ptr is also depends on the specific @query_id.
  */
 struct drm_i915_query_item {
-	/** @query_id: The id for this query */
+	/**
+	 * @query_id:
+	 *
+	 * The id for this query. Currently accepted query IDs are:
+	 *  - %DRM_I915_QUERY_TOPOLOGY_INFO (see struct drm_i915_query_topology_info)
+	 *  - %DRM_I915_QUERY_ENGINE_INFO (see struct drm_i915_engine_info)
+	 *  - %DRM_I915_QUERY_PERF_CONFIG (see struct drm_i915_query_perf_config)
+	 *  - %DRM_I915_QUERY_MEMORY_REGIONS (see struct drm_i915_query_memory_regions)
+	 *  - %DRM_I915_QUERY_HWCONFIG_BLOB (see `GuC HWCONFIG blob uAPI`)
+	 *  - %DRM_I915_QUERY_GEOMETRY_SUBSLICES (see struct drm_i915_query_topology_info)
+	 */
 	__u64 query_id;
-#define DRM_I915_QUERY_TOPOLOGY_INFO	1
-#define DRM_I915_QUERY_ENGINE_INFO	2
-#define DRM_I915_QUERY_PERF_CONFIG	3
-#define DRM_I915_QUERY_MEMORY_REGIONS	4
+#define DRM_I915_QUERY_TOPOLOGY_INFO		1
+#define DRM_I915_QUERY_ENGINE_INFO		2
+#define DRM_I915_QUERY_PERF_CONFIG		3
+#define DRM_I915_QUERY_MEMORY_REGIONS		4
+#define DRM_I915_QUERY_HWCONFIG_BLOB		5
+#define DRM_I915_QUERY_GEOMETRY_SUBSLICES	6
 /* Must be kept compact -- no holes and well documented */
 
 	/**
···
 	/**
 	 * @flags:
 	 *
-	 * When query_id == DRM_I915_QUERY_TOPOLOGY_INFO, must be 0.
+	 * When &query_id == %DRM_I915_QUERY_TOPOLOGY_INFO, must be 0.
 	 *
-	 * When query_id == DRM_I915_QUERY_PERF_CONFIG, must be one of the
+	 * When &query_id == %DRM_I915_QUERY_PERF_CONFIG, must be one of the
 	 * following:
 	 *
-	 * - DRM_I915_QUERY_PERF_CONFIG_LIST
-	 * - DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID
-	 * - DRM_I915_QUERY_PERF_CONFIG_FOR_UUID
+	 * - %DRM_I915_QUERY_PERF_CONFIG_LIST
+	 * - %DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID
+	 * - %DRM_I915_QUERY_PERF_CONFIG_FOR_UUID
+	 *
+	 * When &query_id == %DRM_I915_QUERY_GEOMETRY_SUBSLICES must contain
+	 * a struct i915_engine_class_instance that references a render engine.
 	 */
 	__u32 flags;
 #define DRM_I915_QUERY_PERF_CONFIG_LIST		1
···
 	__u64 items_ptr;
 };
 
-/*
- * Data written by the kernel with query DRM_I915_QUERY_TOPOLOGY_INFO :
- *
- * data: contains the 3 pieces of information :
- *
- * - the slice mask with one bit per slice telling whether a slice is
- *   available. The availability of slice X can be queried with the following
- *   formula :
- *
- *           (data[X / 8] >> (X % 8)) & 1
- *
- * - the subslice mask for each slice with one bit per subslice telling
- *   whether a subslice is available. Gen12 has dual-subslices, which are
- *   similar to two gen11 subslices. For gen12, this array represents dual-
- *   subslices. The availability of subslice Y in slice X can be queried
- *   with the following formula :
- *
- *           (data[subslice_offset +
- *                 X * subslice_stride +
- *                 Y / 8] >> (Y % 8)) & 1
- *
- * - the EU mask for each subslice in each slice with one bit per EU telling
- *   whether an EU is available. The availability of EU Z in subslice Y in
- *   slice X can be queried with the following formula :
- *
- *           (data[eu_offset +
- *                 (X * max_subslices + Y) * eu_stride +
- *                 Z / 8] >> (Z % 8)) & 1
+/**
+ * struct drm_i915_query_topology_info
+ *
+ * Describes slice/subslice/EU information queried by
+ * %DRM_I915_QUERY_TOPOLOGY_INFO
  */
 struct drm_i915_query_topology_info {
-	/*
+	/**
+	 * @flags:
+	 *
 	 * Unused for now. Must be cleared to zero.
 	 */
 	__u16 flags;
 
+	/**
+	 * @max_slices:
+	 *
+	 * The number of bits used to express the slice mask.
+	 */
 	__u16 max_slices;
+
+	/**
+	 * @max_subslices:
+	 *
+	 * The number of bits used to express the subslice mask.
+	 */
 	__u16 max_subslices;
+
+	/**
+	 * @max_eus_per_subslice:
+	 *
+	 * The number of bits in the EU mask that correspond to a single
+	 * subslice's EUs.
+	 */
 	__u16 max_eus_per_subslice;
 
-	/*
+	/**
+	 * @subslice_offset:
+	 *
 	 * Offset in data[] at which the subslice masks are stored.
 	 */
 	__u16 subslice_offset;
 
-	/*
+	/**
+	 * @subslice_stride:
+	 *
 	 * Stride at which each of the subslice masks for each slice are
 	 * stored.
 	 */
 	__u16 subslice_stride;
 
-	/*
+	/**
+	 * @eu_offset:
+	 *
 	 * Offset in data[] at which the EU masks are stored.
 	 */
 	__u16 eu_offset;
 
-	/*
+	/**
+	 * @eu_stride:
+	 *
 	 * Stride at which each of the EU masks for each subslice are stored.
 	 */
 	__u16 eu_stride;
 
+	/**
+	 * @data:
+	 *
+	 * Contains 3 pieces of information :
+	 *
+	 * - The slice mask with one bit per slice telling whether a slice is
+	 *   available. The availability of slice X can be queried with the
+	 *   following formula :
+	 *
+	 *   .. code:: c
+	 *
+	 *      (data[X / 8] >> (X % 8)) & 1
+	 *
+	 *   Starting with Xe_HP platforms, Intel hardware no longer has
+	 *   traditional slices so i915 will always report a single slice
+	 *   (hardcoded slicemask = 0x1) which contains all of the platform's
+	 *   subslices. I.e., the mask here does not reflect any of the newer
+	 *   hardware concepts such as "gslices" or "cslices" since userspace
+	 *   is capable of inferring those from the subslice mask.
+	 *
+	 * - The subslice mask for each slice with one bit per subslice telling
+	 *   whether a subslice is available. Starting with Gen12 we use the
+	 *   term "subslice" to refer to what the hardware documentation
+	 *   describes as a "dual-subslices." The availability of subslice Y
+	 *   in slice X can be queried with the following formula :
+	 *
+	 *   .. code:: c
+	 *
+	 *      (data[subslice_offset + X * subslice_stride + Y / 8] >> (Y % 8)) & 1
+	 *
+	 * - The EU mask for each subslice in each slice, with one bit per EU
+	 *   telling whether an EU is available. The availability of EU Z in
+	 *   subslice Y in slice X can be queried with the following formula :
+	 *
+	 *   .. code:: c
+	 *
+	 *      (data[eu_offset +
+	 *            (X * max_subslices + Y) * eu_stride +
+	 *            Z / 8
+	 *       ] >> (Z % 8)) & 1
+	 */
 	__u8 data[];
 };
···
 	struct drm_i915_engine_info engines[];
 };
 
-/*
- * Data written by the kernel with query DRM_I915_QUERY_PERF_CONFIG.
+/**
+ * struct drm_i915_query_perf_config
+ *
+ * Data written by the kernel with query %DRM_I915_QUERY_PERF_CONFIG and
+ * %DRM_I915_QUERY_GEOMETRY_SUBSLICES.
  */
 struct drm_i915_query_perf_config {
 	union {
-		/*
-		 * When query_item.flags == DRM_I915_QUERY_PERF_CONFIG_LIST, i915 sets
-		 * this fields to the number of configurations available.
+		/**
+		 * @n_configs:
+		 *
+		 * When &drm_i915_query_item.flags ==
+		 * %DRM_I915_QUERY_PERF_CONFIG_LIST, i915 sets this fields to
+		 * the number of configurations available.
 		 */
 		__u64 n_configs;
 
-		/*
-		 * When query_id == DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_ID,
-		 * i915 will use the value in this field as configuration
-		 * identifier to decide what data to write into config_ptr.
+		/**
+		 * @config:
+		 *
+		 * When &drm_i915_query_item.flags ==
+		 * %DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_ID, i915 will use the
+		 * value in this field as configuration identifier to decide
+		 * what data to write into config_ptr.
 		 */
 		__u64 config;
 
-		/*
-		 * When query_id == DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID,
-		 * i915 will use the value in this field as configuration
-		 * identifier to decide what data to write into config_ptr.
+		/**
+		 * @uuid:
+		 *
+		 * When &drm_i915_query_item.flags ==
+		 * %DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID, i915 will use the
+		 * value in this field as configuration identifier to decide
+		 * what data to write into config_ptr.
 		 *
 		 * String formatted like "%08x-%04x-%04x-%04x-%012x"
 		 */
 		char uuid[36];
 	};
 
-	/*
+	/**
+	 * @flags:
+	 *
 	 * Unused for now. Must be cleared to zero.
 	 */
 	__u32 flags;
 
-	/*
-	 * When query_item.flags == DRM_I915_QUERY_PERF_CONFIG_LIST, i915 will
-	 * write an array of __u64 of configuration identifiers.
+	/**
+	 * @data:
 	 *
-	 * When query_item.flags == DRM_I915_QUERY_PERF_CONFIG_DATA, i915 will
-	 * write a struct drm_i915_perf_oa_config. If the following fields of
-	 * drm_i915_perf_oa_config are set not set to 0, i915 will write into
-	 * the associated pointers the values of submitted when the
+	 * When &drm_i915_query_item.flags == %DRM_I915_QUERY_PERF_CONFIG_LIST,
+	 * i915 will write an array of __u64 of configuration identifiers.
+	 *
+	 * When &drm_i915_query_item.flags == %DRM_I915_QUERY_PERF_CONFIG_DATA,
+	 * i915 will write a struct drm_i915_perf_oa_config. If the following
+	 * fields of struct drm_i915_perf_oa_config are not set to 0, i915 will
+	 * write into the associated pointers the values of submitted when the
 	 * configuration was created :
 	 *
-	 * - n_mux_regs
-	 * - n_boolean_regs
-	 * - n_flex_regs
+	 * - &drm_i915_perf_oa_config.n_mux_regs
+	 * - &drm_i915_perf_oa_config.n_boolean_regs
+	 * - &drm_i915_perf_oa_config.n_flex_regs
 	 */
 	__u8 data[];
 };
···
 	/** @regions: Info about each supported region */
 	struct drm_i915_memory_region_info regions[];
 };
+
+/**
+ * DOC: GuC HWCONFIG blob uAPI
+ *
+ * The GuC produces a blob with information about the current device.
+ * i915 reads this blob from GuC and makes it available via this uAPI.
+ *
+ * The format and meaning of the blob content are documented in the
+ * Programmer's Reference Manual.
+ */
 
 /**
  * struct drm_i915_gem_create_ext - Existing gem_create behaviour, with added
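The three mask formulas in the new @data kernel-doc above can be sanity-checked from userspace with a tiny decoder. The following is an illustrative sketch only: the blob contents, offsets, and strides are made-up values laid out the way the kernel-doc describes, purely to demonstrate the bit arithmetic.

```python
# Hypothetical topology blob: 1 slice, 4 subslices (mask 0b1011),
# up to 8 EUs per subslice, laid out per the i915 kernel-doc.
subslice_offset, subslice_stride = 1, 1   # subslice masks follow the slice mask
eu_offset, eu_stride = 2, 1               # EU masks follow the subslice masks
max_subslices = 4

data = bytes([0b1,      # slice mask: slice 0 available
              0b1011,   # subslice mask for slice 0
              0xFF,     # EU mask, subslice 0
              0x0F,     # EU mask, subslice 1
              0x00,     # EU mask, subslice 2 (fused off)
              0xFF])    # EU mask, subslice 3

def slice_available(X):
    return (data[X // 8] >> (X % 8)) & 1

def subslice_available(X, Y):
    return (data[subslice_offset + X * subslice_stride + Y // 8] >> (Y % 8)) & 1

def eu_available(X, Y, Z):
    return (data[eu_offset + (X * max_subslices + Y) * eu_stride + Z // 8] >> (Z % 8)) & 1

print(slice_available(0))        # -> 1
print(subslice_available(0, 2))  # -> 0 (bit 2 of 0b1011 is clear)
print(eu_available(0, 1, 3))     # -> 1 (bit 3 of 0x0F is set)
```

The indexing mirrors the C expressions verbatim; only the sample blob is invented.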
tools/include/uapi/linux/prctl.h (+9)

···
 # define PR_SCHED_CORE_SCOPE_THREAD_GROUP	1
 # define PR_SCHED_CORE_SCOPE_PROCESS_GROUP	2
 
+/* arm64 Scalable Matrix Extension controls */
+/* Flag values must be in sync with SVE versions */
+#define PR_SME_SET_VL			63	/* set task vector length */
+# define PR_SME_SET_VL_ONEXEC		(1 << 18) /* defer effect until exec */
+#define PR_SME_GET_VL			64	/* get task vector length */
+/* Bits common to PR_SME_SET_VL and PR_SME_GET_VL */
+# define PR_SME_VL_LEN_MASK		0xffff
+# define PR_SME_VL_INHERIT		(1 << 17) /* inherit across exec */
+
 #define PR_SET_VMA		0x53564d41
 # define PR_SET_VMA_ANON_NAME		0
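Per the comments in the hunk above, the new PR_SME_* flags follow the SVE layout: the low 16 bits of the prctl argument carry the vector length, and bits 17/18 carry the inherit/on-exec flags. A sketch of how those bits combine (constants copied from the header; the packing demo itself is an illustration, not a libc API):

```python
# Constants from tools/include/uapi/linux/prctl.h above.
PR_SME_SET_VL        = 63
PR_SME_SET_VL_ONEXEC = 1 << 18   # defer effect until exec
PR_SME_VL_LEN_MASK   = 0xffff
PR_SME_VL_INHERIT    = 1 << 17   # inherit across exec

# Pack a hypothetical 256-byte vector length plus both flags into the
# single word passed as the prctl(PR_SME_SET_VL, ...) argument.
arg = 256 | PR_SME_VL_INHERIT | PR_SME_SET_VL_ONEXEC

# Unpack: the vector length is recovered by masking the same word,
# which is also how a PR_SME_GET_VL return value is interpreted.
vl = arg & PR_SME_VL_LEN_MASK
print(vl)                             # -> 256
print(bool(arg & PR_SME_VL_INHERIT))  # -> True
```

Because length and flags occupy disjoint bits, masking cleanly separates them again.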
tools/include/uapi/linux/vhost.h (+20 -6)

···
 
 /* Set or get vhost backend capability */
 
-/* Use message type V2 */
-#define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1
-/* IOTLB can accept batching hints */
-#define VHOST_BACKEND_F_IOTLB_BATCH  0x2
-
 #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
 #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
···
 /* Get the valid iova range */
 #define VHOST_VDPA_GET_IOVA_RANGE	_IOR(VHOST_VIRTIO, 0x78, \
 					     struct vhost_vdpa_iova_range)
-
 /* Get the config size */
 #define VHOST_VDPA_GET_CONFIG_SIZE	_IOR(VHOST_VIRTIO, 0x79, __u32)
 
 /* Get the count of all virtqueues */
 #define VHOST_VDPA_GET_VQS_COUNT	_IOR(VHOST_VIRTIO, 0x80, __u32)
+
+/* Get the number of virtqueue groups. */
+#define VHOST_VDPA_GET_GROUP_NUM	_IOR(VHOST_VIRTIO, 0x81, __u32)
+
+/* Get the number of address spaces. */
+#define VHOST_VDPA_GET_AS_NUM		_IOR(VHOST_VIRTIO, 0x7A, unsigned int)
+
+/* Get the group for a virtqueue: read index, write group in num,
+ * The virtqueue index is stored in the index field of
+ * vhost_vring_state. The group for this specific virtqueue is
+ * returned via num field of vhost_vring_state.
+ */
+#define VHOST_VDPA_GET_VRING_GROUP	_IOWR(VHOST_VIRTIO, 0x7B, \
+					      struct vhost_vring_state)
+/* Set the ASID for a virtqueue group. The group index is stored in
+ * the index field of vhost_vring_state, the ASID associated with this
+ * group is stored at num field of vhost_vring_state.
+ */
+#define VHOST_VDPA_SET_GROUP_ASID	_IOW(VHOST_VIRTIO, 0x7C, \
+					     struct vhost_vring_state)
 
 #endif
tools/kvm/kvm_stat/kvm_stat (+2 -1)

···
                      .format(values))
         if len(pids) > 1:
             sys.exit('Error: Multiple processes found (pids: {}). Use "-p"'
-                     ' to specify the desired pid'.format(" ".join(pids)))
+                     ' to specify the desired pid'
+                     .format(" ".join(map(str, pids))))
         namespace.pid = pids[0]
 
     argparser = argparse.ArgumentParser(description=description_text,
···
 	if (ret < 0)
 		return ret;
 	pr_debug("%s\n", cmd);
-	return system(cmd);
+	ret = system(cmd);
+	free(cmd);
+	return ret;
 }
 
 static int output_fd(struct perf_inject *inject)
···
 		inject->tool.tracing_data = perf_event__repipe_tracing_data;
 	}
 
-	output_data_offset = session->header.data_offset;
+	output_data_offset = perf_session__data_offset(session->evlist);
 
 	if (inject->build_id_all) {
 		inject->tool.mmap	= perf_event__repipe_buildid_mmap;
tools/perf/builtin-stat.c (+2)

···
 	if (evlist__initialize_ctlfd(evsel_list, stat_config.ctl_fd, stat_config.ctl_fd_ack))
 		goto out;
 
+	/* Enable ignoring missing threads when -p option is defined. */
+	evlist__first(evsel_list)->ignore_missing_thread = target.pid;
 	status = 0;
 	for (run_idx = 0; forever || run_idx < stat_config.run_count; run_idx++) {
 		if (stat_config.run_count != 1 && verbose > 0)
tools/perf/tests/bp_account.c (+14 -2)

···
 static int detect_share(int wp_cnt, int bp_cnt)
 {
 	struct perf_event_attr attr;
-	int i, fd[wp_cnt + bp_cnt], ret;
+	int i, *fd = NULL, ret = -1;
+
+	if (wp_cnt + bp_cnt == 0)
+		return 0;
+
+	fd = malloc(sizeof(int) * (wp_cnt + bp_cnt));
+	if (!fd)
+		return -1;
 
 	for (i = 0; i < wp_cnt; i++) {
 		fd[i] = wp_event((void *)&the_var, &attr);
-		TEST_ASSERT_VAL("failed to create wp\n", fd[i] != -1);
+		if (fd[i] == -1) {
+			pr_err("failed to create wp\n");
+			goto out;
+		}
 	}
 
 	for (; i < (bp_cnt + wp_cnt); i++) {
···
 
 	ret = i != (bp_cnt + wp_cnt);
 
+out:
 	while (i--)
 		close(fd[i]);
 
+	free(fd);
 	return ret;
 }
tools/perf/tests/expr.c (+2)

···
 	ret |= test(ctx, "2.2 > 2.2", 0);
 	ret |= test(ctx, "2.2 < 1.1", 0);
 	ret |= test(ctx, "1.1 > 2.2", 0);
+	ret |= test(ctx, "1.1e10 < 1.1e100", 1);
+	ret |= test(ctx, "1.1e2 > 1.1e-2", 1);
 
 	if (ret) {
 		expr__ctx_free(ctx);
···
-#!/usr/bin/python
-# SPDX-License-Identifier: GPL-2.0
-
-import argparse
-import sys
-
-# Basic sanity check of perf CSV output as specified in the man page.
-# Currently just checks the number of fields per line in output.
-
-ap = argparse.ArgumentParser()
-ap.add_argument('--no-args', action='store_true')
-ap.add_argument('--interval', action='store_true')
-ap.add_argument('--system-wide-no-aggr', action='store_true')
-ap.add_argument('--system-wide', action='store_true')
-ap.add_argument('--event', action='store_true')
-ap.add_argument('--per-core', action='store_true')
-ap.add_argument('--per-thread', action='store_true')
-ap.add_argument('--per-die', action='store_true')
-ap.add_argument('--per-node', action='store_true')
-ap.add_argument('--per-socket', action='store_true')
-ap.add_argument('--separator', default=',', nargs='?')
-args = ap.parse_args()
-
-Lines = sys.stdin.readlines()
-
-def check_csv_output(exp):
-    for line in Lines:
-        if 'failed' not in line:
-            count = line.count(args.separator)
-            if count != exp:
-                sys.stdout.write(''.join(Lines))
-                raise RuntimeError(f'wrong number of fields. expected {exp} in {line}')
-
-try:
-    if args.no_args or args.system_wide or args.event:
-        expected_items = 6
-    elif args.interval or args.per_thread or args.system_wide_no_aggr:
-        expected_items = 7
-    elif args.per_core or args.per_socket or args.per_node or args.per_die:
-        expected_items = 8
-    else:
-        ap.print_help()
-        raise RuntimeError('No checking option specified')
-    check_csv_output(expected_items)
-
-except:
-    sys.stdout.write('Test failed for input: ' + ''.join(Lines))
-    raise
tools/perf/tests/shell/stat+csv_output.sh (+45 -24)

···
 
 set -e
 
-pythonchecker=$(dirname $0)/lib/perf_csv_output_lint.py
-if [ "x$PYTHON" == "x" ]
-then
-	if which python3 > /dev/null
-	then
-		PYTHON=python3
-	elif which python > /dev/null
-	then
-		PYTHON=python
-	else
-		echo Skipping test, python not detected please set environment variable PYTHON.
-		exit 2
-	fi
-fi
+function commachecker()
+{
+	local -i cnt=0 exp=0
+
+	case "$1"
+	in "--no-args")		exp=6
+	;; "--system-wide")	exp=6
+	;; "--event")		exp=6
+	;; "--interval")	exp=7
+	;; "--per-thread")	exp=7
+	;; "--system-wide-no-aggr")	exp=7
+				[ $(uname -m) = "s390x" ] && exp=6
+	;; "--per-core")	exp=8
+	;; "--per-socket")	exp=8
+	;; "--per-node")	exp=8
+	;; "--per-die")		exp=8
+	esac
+
+	while read line
+	do
+		# Check for lines beginning with Failed
+		x=${line:0:6}
+		[ "$x" = "Failed" ] && continue
+
+		# Count the number of commas
+		x=$(echo $line | tr -d -c ',')
+		cnt="${#x}"
+		# echo $line $cnt
+		[ "$cnt" -ne "$exp" ] && {
+			echo "wrong number of fields. expected $exp in $line" 1>&2
+			exit 1;
+		}
+	done
+	return 0
+}
 
 # Return true if perf_event_paranoid is > $1 and not running as root.
 function ParanoidAndNotRoot()
···
 check_no_args()
 {
 	echo -n "Checking CSV output: no args "
-	perf stat -x, true 2>&1 | $PYTHON $pythonchecker --no-args
+	perf stat -x, true 2>&1 | commachecker --no-args
 	echo "[Success]"
 }
···
 		echo "[Skip] paranoid and not root"
 		return
 	fi
-	perf stat -x, -a true 2>&1 | $PYTHON $pythonchecker --system-wide
+	perf stat -x, -a true 2>&1 | commachecker --system-wide
 	echo "[Success]"
 }
···
 		return
 	fi
 	echo -n "Checking CSV output: system wide no aggregation "
-	perf stat -x, -A -a --no-merge true 2>&1 | $PYTHON $pythonchecker --system-wide-no-aggr
+	perf stat -x, -A -a --no-merge true 2>&1 | commachecker --system-wide-no-aggr
 	echo "[Success]"
 }
 
 check_interval()
 {
 	echo -n "Checking CSV output: interval "
-	perf stat -x, -I 1000 true 2>&1 | $PYTHON $pythonchecker --interval
+	perf stat -x, -I 1000 true 2>&1 | commachecker --interval
 	echo "[Success]"
 }
···
 check_event()
 {
 	echo -n "Checking CSV output: event "
-	perf stat -x, -e cpu-clock true 2>&1 | $PYTHON $pythonchecker --event
+	perf stat -x, -e cpu-clock true 2>&1 | commachecker --event
 	echo "[Success]"
 }
···
 		echo "[Skip] paranoid and not root"
 		return
 	fi
-	perf stat -x, --per-core -a true 2>&1 | $PYTHON $pythonchecker --per-core
+	perf stat -x, --per-core -a true 2>&1 | commachecker --per-core
 	echo "[Success]"
 }
···
 		echo "[Skip] paranoid and not root"
 		return
 	fi
-	perf stat -x, --per-thread -a true 2>&1 | $PYTHON $pythonchecker --per-thread
+	perf stat -x, --per-thread -a true 2>&1 | commachecker --per-thread
 	echo "[Success]"
 }
···
 		echo "[Skip] paranoid and not root"
 		return
 	fi
-	perf stat -x, --per-die -a true 2>&1 | $PYTHON $pythonchecker --per-die
+	perf stat -x, --per-die -a true 2>&1 | commachecker --per-die
 	echo "[Success]"
 }
···
 		echo "[Skip] paranoid and not root"
 		return
 	fi
-	perf stat -x, --per-node -a true 2>&1 | $PYTHON $pythonchecker --per-node
+	perf stat -x, --per-node -a true 2>&1 | commachecker --per-node
 	echo "[Success]"
 }
···
 		echo "[Skip] paranoid and not root"
 		return
 	fi
-	perf stat -x, --per-socket -a true 2>&1 | $PYTHON $pythonchecker --per-socket
+	perf stat -x, --per-socket -a true 2>&1 | commachecker --per-socket
 	echo "[Success]"
 }
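The replacement commachecker counts separators with tr and compares against a per-mode expectation. The same check is easy to state compactly in Python (field counts per mode are taken from the case table above; the sample line is an invented stand-in for real perf CSV output):

```python
# Expected comma counts per aggregation mode, per the commachecker case table.
EXPECTED = {"--no-args": 6, "--system-wide": 6, "--event": 6,
            "--interval": 7, "--per-thread": 7, "--system-wide-no-aggr": 7,
            "--per-core": 8, "--per-socket": 8, "--per-node": 8, "--per-die": 8}

def check_csv(lines, mode):
    """True if every line (except 'Failed...' lines) has the expected comma count."""
    exp = EXPECTED[mode]
    return all(line.count(",") == exp
               for line in lines if not line.startswith("Failed"))

# A made-up 7-field (6-comma) line, like `perf stat -x,` might emit.
sample = ["1.23,msec,task-clock,1230000,100.00,0.998,CPUs utilized"]
print(check_csv(sample, "--no-args"))  # -> True
```

Skipping lines that begin with "Failed" mirrors the shell version's early `continue`.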
tools/perf/tests/shell/test_arm_callgraph_fp.sh (+1 -1)

···
 cc $CFLAGS $TEST_PROGRAM_SOURCE -o $TEST_PROGRAM || exit 1
 
 # Add a 1 second delay to skip samples that are not in the leaf() function
-perf record -o $PERF_DATA --call-graph fp -e cycles//u -D 1000 -- $TEST_PROGRAM 2> /dev/null &
+perf record -o $PERF_DATA --call-graph fp -e cycles//u -D 1000 --user-callchains -- $TEST_PROGRAM 2> /dev/null &
 PID=$!
 
 echo " + Recording (PID=$PID)..."
tools/perf/tests/topology.c (+1 -1)

···
 	 * physical_package_id will be set to -1. Hence skip this
 	 * test if physical_package_id returns -1 for cpu from perf_cpu_map.
 	 */
-	if (strncmp(session->header.env.arch, "powerpc", 7)) {
+	if (!strncmp(session->header.env.arch, "ppc64le", 7)) {
 		if (cpu__get_socket_id(perf_cpu_map__cpu(map, 0)) == -1)
 			return TEST_SKIP;
 	}
tools/perf/trace/beauty/arch_errno_names.sh (+2 -12)

···
 	local arch=$(arch_string "$1")
 	local nr name
 
-	cat <<EoFuncBegin
-static const char *errno_to_name__$arch(int err)
-{
-	switch (err) {
-EoFuncBegin
+	printf "static const char *errno_to_name__%s(int err)\n{\n\tswitch (err) {\n" $arch
 
 	while read name nr; do
 		printf '\tcase %d: return "%s";\n' $nr $name
 	done
 
-	cat <<EoFuncEnd
-	default:
-		return "(unknown)";
-	}
-}
-
-EoFuncEnd
+	printf '\tdefault: return "(unknown)";\n\t}\n}\n'
 }
 
 process_arch()
tools/perf/trace/beauty/include/linux/socket.h (+6 -1)

···
 struct msghdr {
 	void		*msg_name;	/* ptr to socket address structure */
 	int		msg_namelen;	/* size of socket address structure */
+
+	int		msg_inq;	/* output, data left in socket */
+
 	struct iov_iter	msg_iter;	/* data */
 
 	/*
···
 		void __user	*msg_control_user;
 	};
 	bool		msg_control_is_user : 1;
-	__kernel_size_t	msg_controllen;	/* ancillary data buffer length */
+	bool		msg_get_inq : 1;/* return INQ after receive */
 	unsigned int	msg_flags;	/* flags on received message */
+	__kernel_size_t	msg_controllen;	/* ancillary data buffer length */
 	struct kiocb	*msg_iocb;	/* ptr to iocb for async requests */
 };
···
 extern int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr,
 			 int __user *upeer_addrlen, int flags);
 extern int __sys_socket(int family, int type, int protocol);
+extern struct file *__sys_socket_file(int family, int type, int protocol);
 extern int __sys_bind(int fd, struct sockaddr __user *umyaddr, int addrlen);
 extern int __sys_connect_file(struct file *file, struct sockaddr_storage *addr,
 			      int addrlen, int file_flags);
tools/perf/util/arm-spe.c (+8 -14)

···
 	return arm_spe_deliver_synth_event(spe, speq, event, &sample);
 }
 
-#define SPE_MEM_TYPE	(ARM_SPE_L1D_ACCESS | ARM_SPE_L1D_MISS | \
-			 ARM_SPE_LLC_ACCESS | ARM_SPE_LLC_MISS | \
-			 ARM_SPE_REMOTE_ACCESS)
-
-static bool arm_spe__is_memory_event(enum arm_spe_sample_type type)
-{
-	if (type & SPE_MEM_TYPE)
-		return true;
-
-	return false;
-}
-
 static u64 arm_spe__synth_data_source(const struct arm_spe_record *record)
 {
 	union perf_mem_data_src	data_src = { 0 };
 
 	if (record->op == ARM_SPE_LD)
 		data_src.mem_op = PERF_MEM_OP_LOAD;
-	else
+	else if (record->op == ARM_SPE_ST)
 		data_src.mem_op = PERF_MEM_OP_STORE;
+	else
+		return 0;
 
 	if (record->type & (ARM_SPE_LLC_ACCESS | ARM_SPE_LLC_MISS)) {
 		data_src.mem_lvl = PERF_MEM_LVL_L3;
···
 		return err;
 	}
 
-	if (spe->sample_memory && arm_spe__is_memory_event(record->type)) {
+	/*
+	 * When data_src is zero it means the record is not a memory operation,
+	 * skip to synthesize memory sample for this case.
+	 */
+	if (spe->sample_memory && data_src) {
 		err = arm_spe__synth_mem_sample(speq, spe->memory_id, data_src);
 		if (err)
 			return err;
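The change above makes arm_spe__synth_data_source() the single gate for memory samples: a zero data_src now means "not a load or store", replacing the separate arm_spe__is_memory_event() type check. A sketch of that contract (the numeric op/constant values are placeholders, not the real perf ABI encodings):

```python
# Placeholder op codes standing in for ARM_SPE_LD / ARM_SPE_ST / other ops.
ARM_SPE_LD, ARM_SPE_ST, ARM_SPE_BR = 1, 2, 3
PERF_MEM_OP_LOAD, PERF_MEM_OP_STORE = 1, 2

def synth_data_source(op):
    """Mirror the patched logic: only loads and stores yield a non-zero data_src."""
    if op == ARM_SPE_LD:
        return PERF_MEM_OP_LOAD
    elif op == ARM_SPE_ST:
        return PERF_MEM_OP_STORE
    return 0  # not a memory operation -> caller skips the memory sample

for op in (ARM_SPE_LD, ARM_SPE_ST, ARM_SPE_BR):
    data_src = synth_data_source(op)
    # Same guard as `if (spe->sample_memory && data_src)` in the C hunk.
    print("synthesize" if data_src else "skip")
```

Folding the test into the return value avoids keeping the SPE_MEM_TYPE mask and helper in sync with the op decoding.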
···
 
 # List of possible paths to pktgen script from kernel tree for performance tests
 PKTGEN_SCRIPT_PATHS="
-	../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
+	../../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
 	pktgen/pktgen_bench_xmit_mode_netif_receive.sh"
 
 # Definition of set types:
tools/testing/selftests/vm/gup_test.c (+2 -2)

···
 	if (write)
 		gup.gup_flags |= FOLL_WRITE;
 
-	gup_fd = open("/sys/kernel/debug/gup_test", O_RDWR);
+	gup_fd = open(GUP_TEST_FILE, O_RDWR);
 	if (gup_fd == -1) {
 		switch (errno) {
 		case EACCES:
···
 			printf("check if CONFIG_GUP_TEST is enabled in kernel config\n");
 			break;
 		default:
-			perror("failed to open /sys/kernel/debug/gup_test");
+			perror("failed to open " GUP_TEST_FILE);
 			break;
 		}
 		exit(KSFT_SKIP);