Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.19-rc4 into usb-next

We need the USB fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4426 -2532
+4
.mailmap
··· 10 10 # Please keep this list dictionary sorted. 11 11 # 12 12 Aaron Durbin <adurbin@google.com> 13 + Abel Vesa <abelvesa@kernel.org> <abel.vesa@nxp.com> 14 + Abel Vesa <abelvesa@kernel.org> <abelvesa@gmail.com> 13 15 Abhinav Kumar <quic_abhinavk@quicinc.com> <abhinavk@codeaurora.org> 14 16 Adam Oldham <oldhamca@gmail.com> 15 17 Adam Radford <aradford@gmail.com> ··· 87 85 Christian Brauner <brauner@kernel.org> <christian@brauner.io> 88 86 Christian Brauner <brauner@kernel.org> <christian.brauner@canonical.com> 89 87 Christian Brauner <brauner@kernel.org> <christian.brauner@ubuntu.com> 88 + Christian Marangi <ansuelsmth@gmail.com> 90 89 Christophe Ricard <christophe.ricard@gmail.com> 91 90 Christoph Hellwig <hch@lst.de> 92 91 Colin Ian King <colin.king@intel.com> <colin.king@canonical.com> ··· 168 165 Jan Glauber <jan.glauber@gmail.com> <jang@linux.vnet.ibm.com> 169 166 Jan Glauber <jan.glauber@gmail.com> <jglauber@cavium.com> 170 167 Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@linux.intel.com> 168 + Jarkko Sakkinen <jarkko@kernel.org> <jarkko@profian.com> 171 169 Jason Gunthorpe <jgg@ziepe.ca> <jgg@mellanox.com> 172 170 Jason Gunthorpe <jgg@ziepe.ca> <jgg@nvidia.com> 173 171 Jason Gunthorpe <jgg@ziepe.ca> <jgunthorpe@obsidianresearch.com>
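For readers unfamiliar with the file, each `.mailmap` line maps an author name/email as recorded in old commits to a contributor's canonical identity; the entries added above follow git's standard mailmap syntax:

```
# Canonical Name <canonical@email> <email-as-recorded-in-commits>
Abel Vesa <abelvesa@kernel.org> <abel.vesa@nxp.com>
Abel Vesa <abelvesa@kernel.org> <abelvesa@gmail.com>
```

Tools such as `git shortlog` and `git log --use-mailmap` consult these mappings, so commits made from either old address are attributed to the kernel.org identity.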
+1 -1
Documentation/ABI/testing/sysfs-bus-iio-vf610
··· 1 - What: /sys/bus/iio/devices/iio:deviceX/conversion_mode 1 + What: /sys/bus/iio/devices/iio:deviceX/in_conversion_mode 2 2 KernelVersion: 4.2 3 3 Contact: linux-iio@vger.kernel.org 4 4 Description:
-1
Documentation/devicetree/bindings/spi/microchip,mpfs-spi.yaml
··· 47 47 clocks = <&clkcfg CLK_SPI0>; 48 48 interrupt-parent = <&plic>; 49 49 interrupts = <54>; 50 - spi-max-frequency = <25000000>; 51 50 }; 52 51 ...
-1
Documentation/devicetree/bindings/spi/qcom,spi-geni-qcom.yaml
··· 110 110 pinctrl-names = "default"; 111 111 pinctrl-0 = <&qup_spi1_default>; 112 112 interrupts = <GIC_SPI 602 IRQ_TYPE_LEVEL_HIGH>; 113 - spi-max-frequency = <50000000>; 114 113 #address-cells = <1>; 115 114 #size-cells = <0>; 116 115 };
+2 -1
Documentation/devicetree/bindings/usb/generic-ehci.yaml
··· 136 136 Phandle of a companion. 137 137 138 138 phys: 139 - maxItems: 1 139 + minItems: 1 140 + maxItems: 3 140 141 141 142 phy-names: 142 143 const: usb
+2 -1
Documentation/devicetree/bindings/usb/generic-ohci.yaml
··· 103 103 Overrides the detected port count 104 104 105 105 phys: 106 - maxItems: 1 106 + minItems: 1 107 + maxItems: 3 107 108 108 109 phy-names: 109 110 const: usb
+1 -1
Documentation/driver-api/gpio/board.rst
··· 6 6 7 7 Note that it only applies to the new descriptor-based interface. For a 8 8 description of the deprecated integer-based GPIO interface please refer to 9 - gpio-legacy.txt (actually, there is no real mapping possible with the old 9 + legacy.rst (actually, there is no real mapping possible with the old 10 10 interface; you just fetch an integer from somewhere and request the 11 11 corresponding GPIO). 12 12
+3 -3
Documentation/driver-api/gpio/consumer.rst
··· 4 4 5 5 This document describes the consumer interface of the GPIO framework. Note that 6 6 it describes the new descriptor-based interface. For a description of the 7 - deprecated integer-based GPIO interface please refer to gpio-legacy.txt. 7 + deprecated integer-based GPIO interface please refer to legacy.rst. 8 8 9 9 10 10 Guidelines for GPIOs consumers ··· 78 78 79 79 The two last flags are used for use cases where open drain is mandatory, such 80 80 as I2C: if the line is not already configured as open drain in the mappings 81 - (see board.txt), then open drain will be enforced anyway and a warning will be 81 + (see board.rst), then open drain will be enforced anyway and a warning will be 82 82 printed that the board configuration needs to be updated to match the use case. 83 83 84 84 Both functions return either a valid GPIO descriptor, or an error code checkable ··· 270 270 The same is applicable for open drain or open source output lines: those do not 271 271 actively drive their output high (open drain) or low (open source), they just 272 272 switch their output to a high impedance value. The consumer should not need to 273 - care. (For details read about open drain in driver.txt.) 273 + care. (For details read about open drain in driver.rst.) 274 274 275 275 With this, all the gpiod_set_(array)_value_xxx() functions interpret the 276 276 parameter "value" as "asserted" ("1") or "de-asserted" ("0"). The physical line
+3 -3
Documentation/driver-api/gpio/intro.rst
··· 14 14 ways to obtain and use GPIOs: 15 15 16 16 - The descriptor-based interface is the preferred way to manipulate GPIOs, 17 - and is described by all the files in this directory excepted gpio-legacy.txt. 17 + and is described by all the files in this directory excepted legacy.rst. 18 18 - The legacy integer-based interface which is considered deprecated (but still 19 - usable for compatibility reasons) is documented in gpio-legacy.txt. 19 + usable for compatibility reasons) is documented in legacy.rst. 20 20 21 21 The remainder of this document applies to the new descriptor-based interface. 22 - gpio-legacy.txt contains the same information applied to the legacy 22 + legacy.rst contains the same information applied to the legacy 23 23 integer-based interface. 24 24 25 25
+13 -3
Documentation/filesystems/btrfs.rst
··· 19 19 * Subvolumes (separate internal filesystem roots) 20 20 * Object level mirroring and striping 21 21 * Checksums on data and metadata (multiple algorithms available) 22 - * Compression 22 + * Compression (multiple algorithms available) 23 + * Reflink, deduplication 24 + * Scrub (on-line checksum verification) 25 + * Hierarchical quota groups (subvolume and snapshot support) 23 26 * Integrated multiple device support, with several raid algorithms 24 27 * Offline filesystem check 25 - * Efficient incremental backup and FS mirroring 28 + * Efficient incremental backup and FS mirroring (send/receive) 29 + * Trim/discard 26 30 * Online filesystem defragmentation 31 + * Swapfile support 32 + * Zoned mode 33 + * Read/write metadata verification 34 + * Online resize (shrink, grow) 27 35 28 - For more information please refer to the wiki 36 + For more information please refer to the documentation site or wiki 37 + 38 + https://btrfs.readthedocs.io 29 39 30 40 https://btrfs.wiki.kernel.org 31 41
+8 -2
Documentation/kbuild/llvm.rst
··· 129 129 * - arm64 130 130 - Supported 131 131 - ``LLVM=1`` 132 + * - hexagon 133 + - Maintained 134 + - ``LLVM=1`` 132 135 * - mips 133 136 - Maintained 134 - - ``CC=clang`` 137 + - ``LLVM=1`` 135 138 * - powerpc 136 139 - Maintained 137 140 - ``CC=clang`` 138 141 * - riscv 139 142 - Maintained 140 - - ``CC=clang`` 143 + - ``LLVM=1`` 141 144 * - s390 142 145 - Maintained 143 146 - ``CC=clang`` 147 + * - um (User Mode) 148 + - Maintained 149 + - ``LLVM=1`` 144 150 * - x86 145 151 - Supported 146 152 - ``LLVM=1``
+2 -1
Documentation/vm/hwpoison.rst
··· 120 120 unpoison-pfn 121 121 Software-unpoison page at PFN echoed into this file. This way 122 122 a page can be reused again. This only works for Linux 123 - injected failures, not for real memory failures. 123 + injected failures, not for real memory failures. Once any hardware 124 + memory failure happens, this feature is disabled. 124 125 125 126 Note these injection interfaces are not stable and might change between 126 127 kernel versions
+89 -38
MAINTAINERS
··· 427 427 M: Jean-Philippe Brucker <jean-philippe@linaro.org> 428 428 L: linux-acpi@vger.kernel.org 429 429 L: iommu@lists.linux-foundation.org 430 + L: iommu@lists.linux.dev 430 431 S: Maintained 431 432 F: drivers/acpi/viot.c 432 433 F: include/linux/acpi_viot.h ··· 961 960 M: Joerg Roedel <joro@8bytes.org> 962 961 R: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com> 963 962 L: iommu@lists.linux-foundation.org 963 + L: iommu@lists.linux.dev 964 964 S: Maintained 965 965 T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git 966 966 F: drivers/iommu/amd/ ··· 2469 2467 M: Chester Lin <clin@suse.com> 2470 2468 R: Andreas Färber <afaerber@suse.de> 2471 2469 R: Matthias Brugger <mbrugger@suse.com> 2470 + R: NXP S32 Linux Team <s32@nxp.com> 2472 2471 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 2473 2472 S: Maintained 2474 2473 F: arch/arm64/boot/dts/freescale/s32g*.dts* ··· 3672 3669 M: Shubham Bansal <illusionist.neo@gmail.com> 3673 3670 L: netdev@vger.kernel.org 3674 3671 L: bpf@vger.kernel.org 3675 - S: Maintained 3672 + S: Odd Fixes 3676 3673 F: arch/arm/net/ 3677 3674 3678 3675 BPF JIT for ARM64 ··· 3696 3693 M: Jakub Kicinski <kuba@kernel.org> 3697 3694 L: netdev@vger.kernel.org 3698 3695 L: bpf@vger.kernel.org 3699 - S: Supported 3696 + S: Odd Fixes 3700 3697 F: drivers/net/ethernet/netronome/nfp/bpf/ 3701 3698 3702 3699 BPF JIT for POWERPC (32-BIT AND 64-BIT) 3703 3700 M: Naveen N. Rao <naveen.n.rao@linux.ibm.com> 3701 + M: Michael Ellerman <mpe@ellerman.id.au> 3704 3702 L: netdev@vger.kernel.org 3705 3703 L: bpf@vger.kernel.org 3706 - S: Maintained 3704 + S: Supported 3707 3705 F: arch/powerpc/net/ 3708 3706 3709 3707 BPF JIT for RISC-V (32-bit) ··· 3730 3726 M: Vasily Gorbik <gor@linux.ibm.com> 3731 3727 L: netdev@vger.kernel.org 3732 3728 L: bpf@vger.kernel.org 3733 - S: Maintained 3729 + S: Supported 3734 3730 F: arch/s390/net/ 3735 3731 X: arch/s390/net/pnet.c 3736 3732 ··· 3738 3734 M: David S. Miller <davem@davemloft.net> 3739 3735 L: netdev@vger.kernel.org 3740 3736 L: bpf@vger.kernel.org 3741 - S: Maintained 3737 + S: Odd Fixes 3742 3738 F: arch/sparc/net/ 3743 3739 3744 3740 BPF JIT for X86 32-BIT 3745 3741 M: Wang YanQing <udknight@gmail.com> 3746 3742 L: netdev@vger.kernel.org 3747 3743 L: bpf@vger.kernel.org 3748 - S: Maintained 3744 + S: Odd Fixes 3749 3745 F: arch/x86/net/bpf_jit_comp32.c 3750 3746 3751 3747 BPF JIT for X86 64-BIT ··· 3767 3763 F: include/linux/bpf_lsm.h 3768 3764 F: kernel/bpf/bpf_lsm.c 3769 3765 F: security/bpf/ 3766 + 3767 + BPF L7 FRAMEWORK 3768 + M: John Fastabend <john.fastabend@gmail.com> 3769 + M: Jakub Sitnicki <jakub@cloudflare.com> 3770 + L: netdev@vger.kernel.org 3771 + L: bpf@vger.kernel.org 3772 + S: Maintained 3773 + F: include/linux/skmsg.h 3774 + F: net/core/skmsg.c 3775 + F: net/core/sock_map.c 3776 + F: net/ipv4/tcp_bpf.c 3777 + F: net/ipv4/udp_bpf.c 3778 + F: net/unix/unix_bpf.c 3770 3779 3771 3780 BPFTOOL 3772 3781 M: Quentin Monnet <quentin@isovalent.com> ··· 3820 3803 N: bcm[9]?47622 3821 3804 3822 3805 BROADCOM BCM2711/BCM2835 ARM ARCHITECTURE 3823 - M: Nicolas Saenz Julienne <nsaenz@kernel.org> 3806 + M: Florian Fainelli <f.fainelli@gmail.com> 3824 3807 R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com> 3825 3808 L: linux-rpi-kernel@lists.infradead.org (moderated for non-subscribers) 3826 3809 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 3827 3810 S: Maintained 3828 - T: git git://git.kernel.org/pub/scm/linux/kernel/git/nsaenz/linux-rpi.git 3811 + T: git git://github.com/broadcom/stblinux.git 3829 3812 F: Documentation/devicetree/bindings/pci/brcm,stb-pcie.yaml 3830 3813 F: drivers/pci/controller/pcie-brcmstb.c 3831 3814 F: drivers/staging/vc04_services ··· 5986 5969 M: Marek Szyprowski <m.szyprowski@samsung.com> 5987 5970 R: Robin Murphy <robin.murphy@arm.com> 5988 5971 L: iommu@lists.linux-foundation.org 5972 + L: iommu@lists.linux.dev 5989 5973 S: Supported 5990 5974 W: http://git.infradead.org/users/hch/dma-mapping.git 5991 5975 T: git git://git.infradead.org/users/hch/dma-mapping.git ··· 5999 5981 DMA MAPPING BENCHMARK 6000 5982 M: Xiang Chen <chenxiang66@hisilicon.com> 6001 5983 L: iommu@lists.linux-foundation.org 5984 + L: iommu@lists.linux.dev 6002 5985 F: kernel/dma/map_benchmark.c 6003 5986 F: tools/testing/selftests/dma/ ··· 7584 7565 EXYNOS SYSMMU (IOMMU) driver 7585 7566 M: Marek Szyprowski <m.szyprowski@samsung.com> 7586 7567 L: iommu@lists.linux-foundation.org 7568 + L: iommu@lists.linux.dev 7587 7569 S: Maintained 7588 7570 F: drivers/iommu/exynos-iommu.c ··· 8506 8486 F: Documentation/driver-api/gpio/ 8507 8487 F: drivers/gpio/ 8508 8488 F: include/asm-generic/gpio.h 8489 + F: include/dt-bindings/gpio/ 8509 8490 F: include/linux/gpio.h 8510 8491 F: include/linux/gpio/ 8511 8492 F: include/linux/of_gpio.h ··· 9160 9139 9161 9140 HWPOISON MEMORY FAILURE HANDLING 9162 9141 M: Naoya Horiguchi <naoya.horiguchi@nec.com> 9142 + R: Miaohe Lin <linmiaohe@huawei.com> 9163 9143 L: linux-mm@kvack.org 9164 9144 S: Maintained 9165 9145 F: mm/hwpoison-inject.c ··· 10006 9984 M: David Woodhouse <dwmw2@infradead.org> 10007 9985 M: Lu Baolu <baolu.lu@linux.intel.com> 10008 9986 L: iommu@lists.linux-foundation.org 9987 + L: iommu@lists.linux.dev 10009 9988 S: Supported 10010 9989 T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git 10011 9990 F: drivers/iommu/intel/ ··· 10386 10363 M: Joerg Roedel <joro@8bytes.org> 10387 10364 M: Will Deacon <will@kernel.org> 10388 10365 L: iommu@lists.linux-foundation.org 10366 + L: iommu@lists.linux.dev 10389 10367 S: Maintained 10390 10368 T: git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git 10391 10369 F: Documentation/devicetree/bindings/iommu/ ··· 10863 10839 R: James Morse <james.morse@arm.com> 10864 10840 R: Alexandru Elisei <alexandru.elisei@arm.com> 10865 10841 R: Suzuki K Poulose <suzuki.poulose@arm.com> 10842 + R: Oliver Upton <oliver.upton@linux.dev> 10866 10843 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 10867 10844 L: kvmarm@lists.cs.columbia.edu (moderated for non-subscribers) 10868 10845 S: Maintained ··· 10930 10905 F: tools/testing/selftests/kvm/s390x/ 10931 10906 10932 10907 KERNEL VIRTUAL MACHINE FOR X86 (KVM/x86) 10908 + M: Sean Christopherson <seanjc@google.com> 10933 10909 M: Paolo Bonzini <pbonzini@redhat.com> 10934 - R: Sean Christopherson <seanjc@google.com> 10935 - R: Vitaly Kuznetsov <vkuznets@redhat.com> 10936 - R: Wanpeng Li <wanpengli@tencent.com> 10937 - R: Jim Mattson <jmattson@google.com> 10938 - R: Joerg Roedel <joro@8bytes.org> 10939 10910 L: kvm@vger.kernel.org 10940 10911 S: Supported 10941 - W: http://www.linux-kvm.org 10942 10912 T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git 10943 10913 F: arch/x86/include/asm/kvm* 10944 - F: arch/x86/include/asm/pvclock-abi.h 10945 10914 F: arch/x86/include/asm/svm.h 10946 10915 F: arch/x86/include/asm/vmx*.h 10947 10916 F: arch/x86/include/uapi/asm/kvm* 10948 10917 F: arch/x86/include/uapi/asm/svm.h 10949 10918 F: arch/x86/include/uapi/asm/vmx.h 10950 - F: arch/x86/kernel/kvm.c 10951 - F: arch/x86/kernel/kvmclock.c 10952 10919 F: arch/x86/kvm/ 10953 10920 F: arch/x86/kvm/*/ 10921 + 10922 + KVM PARAVIRT (KVM/paravirt) 10923 + M: Paolo Bonzini <pbonzini@redhat.com> 10924 + R: Wanpeng Li <wanpengli@tencent.com> 10925 + R: Vitaly Kuznetsov <vkuznets@redhat.com> 10926 + L: kvm@vger.kernel.org 10927 + S: Supported 10928 + T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git 10929 + F: arch/x86/kernel/kvm.c 10930 + F: arch/x86/kernel/kvmclock.c 10931 + F: arch/x86/include/asm/pvclock-abi.h 10932 + F: include/linux/kvm_para.h 10933 + F: include/uapi/linux/kvm_para.h 10934 + F: include/uapi/asm-generic/kvm_para.h 10935 + F: include/asm-generic/kvm_para.h 10936 + F: arch/um/include/asm/kvm_para.h 10937 + F: arch/x86/include/asm/kvm_para.h 10938 + F: arch/x86/include/uapi/asm/kvm_para.h 10939 + 10940 + KVM X86 HYPER-V (KVM/hyper-v) 10941 + M: Vitaly Kuznetsov <vkuznets@redhat.com> 10942 + M: Sean Christopherson <seanjc@google.com> 10943 + M: Paolo Bonzini <pbonzini@redhat.com> 10944 + L: kvm@vger.kernel.org 10945 + S: Supported 10946 + T: git git://git.kernel.org/pub/scm/virt/kvm/kvm.git 10947 + F: arch/x86/kvm/hyperv.* 10948 + F: arch/x86/kvm/kvm_onhyperv.* 10949 + F: arch/x86/kvm/svm/hyperv.* 10950 + F: arch/x86/kvm/svm/svm_onhyperv.* 10951 + F: arch/x86/kvm/vmx/evmcs.* 10954 10952 10955 10953 KERNFS 10956 10954 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> ··· 11152 11104 S: Maintained 11153 11105 F: include/net/l3mdev.h 11154 11106 F: net/l3mdev 11155 - 11156 - L7 BPF FRAMEWORK 11157 - M: John Fastabend <john.fastabend@gmail.com> 11158 - M: Daniel Borkmann <daniel@iogearbox.net> 11159 - M: Jakub Sitnicki <jakub@cloudflare.com> 11160 - L: netdev@vger.kernel.org 11161 - L: bpf@vger.kernel.org 11162 - S: Maintained 11163 - F: include/linux/skmsg.h 11164 - F: net/core/skmsg.c 11165 - F: net/core/sock_map.c 11166 - F: net/ipv4/tcp_bpf.c 11167 - F: net/ipv4/udp_bpf.c 11168 - F: net/unix/unix_bpf.c 11169 11107 11170 11108 LANDLOCK SECURITY MODULE 11171 11109 M: Mickaël Salaün <mic@digikod.net> ··· 11632 11598 LOONGARCH 11633 11599 M: Huacai Chen <chenhuacai@kernel.org> 11634 11600 R: WANG Xuerui <kernel@xen0n.name> 11601 + L: loongarch@lists.linux.dev 11635 11602 S: Maintained 11636 11603 T: git git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson.git 11637 11604 F: arch/loongarch/ ··· 12546 12511 MEDIATEK IOMMU DRIVER 12547 12512 M: Yong Wu <yong.wu@mediatek.com> 12548 12513 L: iommu@lists.linux-foundation.org 12514 + L: iommu@lists.linux.dev 12549 12515 L: linux-mediatek@lists.infradead.org (moderated for non-subscribers) 12550 12516 S: Supported 12551 12517 F: Documentation/devicetree/bindings/iommu/mediatek* ··· 12889 12853 L: linux-mm@kvack.org 12890 12854 S: Maintained 12891 12855 W: http://www.linux-mm.org 12892 - T: quilt https://ozlabs.org/~akpm/mmotm/ 12893 - T: quilt https://ozlabs.org/~akpm/mmots/ 12894 - T: git git://github.com/hnaz/linux-mm.git 12856 + T: git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm 12857 + T: quilt git://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new 12895 12858 F: include/linux/gfp.h 12896 12859 F: include/linux/memory_hotplug.h 12897 12860 F: include/linux/mm.h ··· 12899 12864 F: include/linux/vmalloc.h 12900 12865 F: mm/ 12901 12866 F: tools/testing/selftests/vm/ 12867 + 12868 + MEMORY HOT(UN)PLUG 12869 + M: David Hildenbrand <david@redhat.com> 12870 + M: Oscar Salvador <osalvador@suse.de> 12871 + L: linux-mm@kvack.org 12872 + S: Maintained 12873 + F: Documentation/admin-guide/mm/memory-hotplug.rst 12874 + F: Documentation/core-api/memory-hotplug.rst 12875 + F: drivers/base/memory.c 12876 + F: include/linux/memory_hotplug.h 12877 + F: mm/memory_hotplug.c 12878 + F: tools/testing/selftests/memory-hotplug/ 12902 12879 12903 12880 MEMORY TECHNOLOGY DEVICES (MTD) 12904 12881 M: Miquel Raynal <miquel.raynal@bootlin.com> ··· 14008 13961 NETWORKING [TLS] 14009 13962 M: Boris Pismenny <borisp@nvidia.com> 14010 13963 M: John Fastabend <john.fastabend@gmail.com> 14011 - M: Daniel Borkmann <daniel@iogearbox.net> 14012 13964 M: Jakub Kicinski <kuba@kernel.org> 14013 13965 L: netdev@vger.kernel.org 14014 13966 S: Maintained ··· 14316 14270 F: drivers/iio/gyro/fxas21002c_spi.c 14317 14271 14318 14272 NXP i.MX CLOCK DRIVERS 14319 - M: Abel Vesa <abel.vesa@nxp.com> 14273 + M: Abel Vesa <abelvesa@kernel.org> 14320 14274 L: linux-clk@vger.kernel.org 14321 14275 L: linux-imx@nxp.com 14322 14276 S: Maintained ··· 14924 14878 14925 14879 OPENCOMPUTE PTP CLOCK DRIVER 14926 14880 M: Jonathan Lemon <jonathan.lemon@gmail.com> 14881 + M: Vadim Fedorenko <vadfed@fb.com> 14927 14882 L: netdev@vger.kernel.org 14928 14883 S: Maintained 14929 14884 F: drivers/ptp/ptp_ocp.c ··· 16544 16497 F: drivers/cpufreq/qcom-cpufreq-nvmem.c 16545 16498 16546 16499 QUALCOMM CRYPTO DRIVERS 16547 - M: Thara Gopinath <thara.gopinath@linaro.org> 16500 + M: Thara Gopinath <thara.gopinath@gmail.com> 16548 16501 L: linux-crypto@vger.kernel.org 16549 16502 L: linux-arm-msm@vger.kernel.org 16550 16503 S: Maintained ··· 16599 16552 QUALCOMM IOMMU 16600 16553 M: Rob Clark <robdclark@gmail.com> 16601 16554 L: iommu@lists.linux-foundation.org 16555 + L: iommu@lists.linux.dev 16602 16556 L: linux-arm-msm@vger.kernel.org 16603 16557 S: Maintained 16604 16558 F: drivers/iommu/arm/arm-smmu/qcom_iommu.c ··· 16655 16607 16656 16608 QUALCOMM TSENS THERMAL DRIVER 16657 16609 M: Amit Kucheria <amitk@kernel.org> 16658 - M: Thara Gopinath <thara.gopinath@linaro.org> 16610 + M: Thara Gopinath <thara.gopinath@gmail.com> 16659 16611 L: linux-pm@vger.kernel.org 16660 16612 L: linux-arm-msm@vger.kernel.org 16661 16613 S: Maintained ··· 19226 19178 SWIOTLB SUBSYSTEM 19227 19179 M: Christoph Hellwig <hch@infradead.org> 19228 19180 L: iommu@lists.linux-foundation.org 19181 + L: iommu@lists.linux.dev 19229 19182 S: Supported 19230 19183 W: http://git.infradead.org/users/hch/dma-mapping.git 19231 19184 T: git git://git.infradead.org/users/hch/dma-mapping.git ··· 20771 20722 F: Documentation/devicetree/bindings/usb/ 20772 20723 F: Documentation/usb/ 20773 20724 F: drivers/usb/ 20725 + F: include/dt-bindings/usb/ 20774 20726 F: include/linux/usb.h 20775 20727 F: include/linux/usb/ 20776 20728 ··· 21902 21852 M: Stefano Stabellini <sstabellini@kernel.org> 21903 21853 L: xen-devel@lists.xenproject.org (moderated for non-subscribers) 21904 21854 L: iommu@lists.linux-foundation.org 21855 + L: iommu@lists.linux.dev 21905 21856 S: Supported 21906 21857 F: arch/x86/xen/*swiotlb* 21907 21858 F: drivers/xen/*swiotlb*
+2 -2
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 19 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc3 5 + EXTRAVERSION = -rc4 6 6 NAME = Superb Owl 7 7 8 8 # *DOCUMENTATION* ··· 1141 1141 1142 1142 autoksyms_recursive: descend modules.order 1143 1143 $(Q)$(CONFIG_SHELL) $(srctree)/scripts/adjust_autoksyms.sh \ 1144 - "$(MAKE) -f $(srctree)/Makefile vmlinux" 1144 + "$(MAKE) -f $(srctree)/Makefile autoksyms_recursive" 1145 1145 endif 1146 1146 1147 1147 autoksyms_h := $(if $(CONFIG_TRIM_UNUSED_KSYMS), include/generated/autoksyms.h)
+1 -1
arch/arm/boot/dts/Makefile
··· 1586 1586 aspeed-bmc-lenovo-hr630.dtb \ 1587 1587 aspeed-bmc-lenovo-hr855xg2.dtb \ 1588 1588 aspeed-bmc-microsoft-olympus.dtb \ 1589 - aspeed-bmc-nuvia-dc-scm.dtb \ 1590 1589 aspeed-bmc-opp-lanyang.dtb \ 1591 1590 aspeed-bmc-opp-mihawk.dtb \ 1592 1591 aspeed-bmc-opp-mowgli.dtb \ ··· 1598 1599 aspeed-bmc-opp-witherspoon.dtb \ 1599 1600 aspeed-bmc-opp-zaius.dtb \ 1600 1601 aspeed-bmc-portwell-neptune.dtb \ 1602 + aspeed-bmc-qcom-dc-scm-v1.dtb \ 1601 1603 aspeed-bmc-quanta-q71l.dtb \ 1602 1604 aspeed-bmc-quanta-s6q.dtb \ 1603 1605 aspeed-bmc-supermicro-x11spi.dtb \
+2 -2
arch/arm/boot/dts/aspeed-bmc-nuvia-dc-scm.dts → arch/arm/boot/dts/aspeed-bmc-qcom-dc-scm-v1.dts
··· 6 6 #include "aspeed-g6.dtsi" 7 7 8 8 / { 9 - model = "Nuvia DC-SCM BMC"; 10 - compatible = "nuvia,dc-scm-bmc", "aspeed,ast2600"; 9 + model = "Qualcomm DC-SCM V1 BMC"; 10 + compatible = "qcom,dc-scm-v1-bmc", "aspeed,ast2600"; 11 11 12 12 aliases { 13 13 serial4 = &uart5;
+3 -3
arch/arm/boot/dts/bcm2711-rpi-400.dts
··· 28 28 &expgpio { 29 29 gpio-line-names = "BT_ON", 30 30 "WL_ON", 31 - "", 31 + "PWR_LED_OFF", 32 32 "GLOBAL_RESET", 33 33 "VDD_SD_IO_SEL", 34 - "CAM_GPIO", 34 + "GLOBAL_SHUTDOWN", 35 35 "SD_PWR_ON", 36 - "SD_OC_N"; 36 + "SHUTDOWN_REQUEST"; 37 37 }; 38 38 39 39 &genet_mdio {
+1 -1
arch/arm/boot/dts/imx6qdl-colibri.dtsi
··· 593 593 pinctrl-names = "default"; 594 594 pinctrl-0 = <&pinctrl_atmel_conn>; 595 595 reg = <0x4a>; 596 - reset-gpios = <&gpio1 14 GPIO_ACTIVE_HIGH>; /* SODIMM 106 */ 596 + reset-gpios = <&gpio1 14 GPIO_ACTIVE_LOW>; /* SODIMM 106 */ 597 597 status = "disabled"; 598 598 }; 599 599 };
+1 -1
arch/arm/boot/dts/imx6qdl.dtsi
··· 762 762 regulator-name = "vddpu"; 763 763 regulator-min-microvolt = <725000>; 764 764 regulator-max-microvolt = <1450000>; 765 - regulator-enable-ramp-delay = <150>; 765 + regulator-enable-ramp-delay = <380>; 766 766 anatop-reg-offset = <0x140>; 767 767 anatop-vol-bit-shift = <9>; 768 768 anatop-vol-bit-width = <5>;
+1 -1
arch/arm/boot/dts/imx7s.dtsi
··· 120 120 compatible = "usb-nop-xceiv"; 121 121 clocks = <&clks IMX7D_USB_HSIC_ROOT_CLK>; 122 122 clock-names = "main_clk"; 123 + power-domains = <&pgc_hsic_phy>; 123 124 #phy-cells = <0>; 124 125 }; 125 126 ··· 1154 1153 compatible = "fsl,imx7d-usb", "fsl,imx27-usb"; 1155 1154 reg = <0x30b30000 0x200>; 1156 1155 interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>; 1157 - power-domains = <&pgc_hsic_phy>; 1158 1156 clocks = <&clks IMX7D_USB_CTRL_CLK>; 1159 1157 fsl,usbphy = <&usbphynop3>; 1160 1158 fsl,usbmisc = <&usbmisc3 0>;
+47
arch/arm/boot/dts/stm32mp15-scmi.dtsi
··· 1 + // SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) 2 + /* 3 + * Copyright (C) STMicroelectronics 2022 - All Rights Reserved 4 + * Author: Alexandre Torgue <alexandre.torgue@foss.st.com> for STMicroelectronics. 5 + */ 6 + 7 + / { 8 + firmware { 9 + optee: optee { 10 + compatible = "linaro,optee-tz"; 11 + method = "smc"; 12 + }; 13 + 14 + scmi: scmi { 15 + compatible = "linaro,scmi-optee"; 16 + #address-cells = <1>; 17 + #size-cells = <0>; 18 + linaro,optee-channel-id = <0>; 19 + shmem = <&scmi_shm>; 20 + 21 + scmi_clk: protocol@14 { 22 + reg = <0x14>; 23 + #clock-cells = <1>; 24 + }; 25 + 26 + scmi_reset: protocol@16 { 27 + reg = <0x16>; 28 + #reset-cells = <1>; 29 + }; 30 + }; 31 + }; 32 + 33 + soc { 34 + scmi_sram: sram@2ffff000 { 35 + compatible = "mmio-sram"; 36 + reg = <0x2ffff000 0x1000>; 37 + #address-cells = <1>; 38 + #size-cells = <1>; 39 + ranges = <0 0x2ffff000 0x1000>; 40 + 41 + scmi_shm: scmi-sram@0 { 42 + compatible = "arm,scmi-shmem"; 43 + reg = <0 0x80>; 44 + }; 45 + }; 46 + }; 47 + };
-41
arch/arm/boot/dts/stm32mp151.dtsi
··· 115 115 status = "disabled"; 116 116 }; 117 117 118 - firmware { 119 - optee: optee { 120 - compatible = "linaro,optee-tz"; 121 - method = "smc"; 122 - status = "disabled"; 123 - }; 124 - 125 - scmi: scmi { 126 - compatible = "linaro,scmi-optee"; 127 - #address-cells = <1>; 128 - #size-cells = <0>; 129 - linaro,optee-channel-id = <0>; 130 - shmem = <&scmi_shm>; 131 - status = "disabled"; 132 - 133 - scmi_clk: protocol@14 { 134 - reg = <0x14>; 135 - #clock-cells = <1>; 136 - }; 137 - 138 - scmi_reset: protocol@16 { 139 - reg = <0x16>; 140 - #reset-cells = <1>; 141 - }; 142 - }; 143 - }; 144 - 145 118 soc { 146 119 compatible = "simple-bus"; 147 120 #address-cells = <1>; 148 121 #size-cells = <1>; 149 122 interrupt-parent = <&intc>; 150 123 ranges; 151 - 152 - scmi_sram: sram@2ffff000 { 153 - compatible = "mmio-sram"; 154 - reg = <0x2ffff000 0x1000>; 155 - #address-cells = <1>; 156 - #size-cells = <1>; 157 - ranges = <0 0x2ffff000 0x1000>; 158 - 159 - scmi_shm: scmi-sram@0 { 160 - compatible = "arm,scmi-shmem"; 161 - reg = <0 0x80>; 162 - status = "disabled"; 163 - }; 164 - }; 165 124 166 125 timers2: timer@40000000 { 167 126 #address-cells = <1>;
+1 -12
arch/arm/boot/dts/stm32mp157a-dk1-scmi.dts
··· 7 7 /dts-v1/; 8 8 9 9 #include "stm32mp157a-dk1.dts" 10 + #include "stm32mp15-scmi.dtsi" 10 11 11 12 / { 12 13 model = "STMicroelectronics STM32MP157A-DK1 SCMI Discovery Board"; ··· 55 54 resets = <&scmi_reset RST_SCMI_MCU>; 56 55 }; 57 56 58 - &optee { 59 - status = "okay"; 60 - }; 61 - 62 57 &rcc { 63 58 compatible = "st,stm32mp1-rcc-secure", "syscon"; 64 59 clock-names = "hse", "hsi", "csi", "lse", "lsi"; ··· 72 75 73 76 &rtc { 74 77 clocks = <&scmi_clk CK_SCMI_RTCAPB>, <&scmi_clk CK_SCMI_RTC>; 75 - }; 76 - 77 - &scmi { 78 - status = "okay"; 79 - }; 80 - 81 - &scmi_shm { 82 - status = "okay"; 83 78 };
+1 -12
arch/arm/boot/dts/stm32mp157c-dk2-scmi.dts
··· 7 7 /dts-v1/; 8 8 9 9 #include "stm32mp157c-dk2.dts" 10 + #include "stm32mp15-scmi.dtsi" 10 11 11 12 / { 12 13 model = "STMicroelectronics STM32MP157C-DK2 SCMI Discovery Board"; ··· 64 63 resets = <&scmi_reset RST_SCMI_MCU>; 65 64 }; 66 65 67 - &optee { 68 - status = "okay"; 69 - }; 70 - 71 66 &rcc { 72 67 compatible = "st,stm32mp1-rcc-secure", "syscon"; 73 68 clock-names = "hse", "hsi", "csi", "lse", "lsi"; ··· 81 84 82 85 &rtc { 83 86 clocks = <&scmi_clk CK_SCMI_RTCAPB>, <&scmi_clk CK_SCMI_RTC>; 84 - }; 85 - 86 - &scmi { 87 - status = "okay"; 88 - }; 89 - 90 - &scmi_shm { 91 - status = "okay"; 92 87 };
+1 -12
arch/arm/boot/dts/stm32mp157c-ed1-scmi.dts
··· 7 7 /dts-v1/; 8 8 9 9 #include "stm32mp157c-ed1.dts" 10 + #include "stm32mp15-scmi.dtsi" 10 11 11 12 / { 12 13 model = "STMicroelectronics STM32MP157C-ED1 SCMI eval daughter"; ··· 60 59 resets = <&scmi_reset RST_SCMI_MCU>; 61 60 }; 62 61 63 - &optee { 64 - status = "okay"; 65 - }; 66 - 67 62 &rcc { 68 63 compatible = "st,stm32mp1-rcc-secure", "syscon"; 69 64 clock-names = "hse", "hsi", "csi", "lse", "lsi"; ··· 77 80 78 81 &rtc { 79 82 clocks = <&scmi_clk CK_SCMI_RTCAPB>, <&scmi_clk CK_SCMI_RTC>; 80 - }; 81 - 82 - &scmi { 83 - status = "okay"; 84 - }; 85 - 86 - &scmi_shm { 87 - status = "okay"; 88 83 };
+1 -12
arch/arm/boot/dts/stm32mp157c-ev1-scmi.dts
··· 7 7 /dts-v1/; 8 8 9 9 #include "stm32mp157c-ev1.dts" 10 + #include "stm32mp15-scmi.dtsi" 10 11 11 12 / { 12 13 model = "STMicroelectronics STM32MP157C-EV1 SCMI eval daughter on eval mother"; ··· 69 68 resets = <&scmi_reset RST_SCMI_MCU>; 70 69 }; 71 70 72 - &optee { 73 - status = "okay"; 74 - }; 75 - 76 71 &rcc { 77 72 compatible = "st,stm32mp1-rcc-secure", "syscon"; 78 73 clock-names = "hse", "hsi", "csi", "lse", "lsi"; ··· 86 89 87 90 &rtc { 88 91 clocks = <&scmi_clk CK_SCMI_RTCAPB>, <&scmi_clk CK_SCMI_RTC>; 89 - }; 90 - 91 - &scmi { 92 - status = "okay"; 93 - }; 94 - 95 - &scmi_shm { 96 - status = "okay"; 97 92 };
+1
arch/arm/mach-axxia/platsmp.c
··· 39 39 return -ENOENT; 40 40 41 41 syscon = of_iomap(syscon_np, 0); 42 + of_node_put(syscon_np); 42 43 if (!syscon) 43 44 return -ENOMEM; 44 45
+2
arch/arm/mach-cns3xxx/core.c
··· 372 372 /* De-Asscer SATA Reset */ 373 373 cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SATA)); 374 374 } 375 + of_node_put(dn); 375 376 376 377 dn = of_find_compatible_node(NULL, NULL, "cavium,cns3420-sdhci"); 377 378 if (of_device_is_available(dn)) { ··· 386 385 cns3xxx_pwr_clk_en(CNS3XXX_PWR_CLK_EN(SDIO)); 387 386 cns3xxx_pwr_soft_rst(CNS3XXX_PWR_SOFTWARE_RST(SDIO)); 388 387 } 388 + of_node_put(dn); 389 389 390 390 pm_power_off = cns3xxx_power_off; 391 391
+1
arch/arm/mach-exynos/exynos.c
··· 149 149 np = of_find_matching_node(NULL, exynos_dt_pmu_match); 150 150 if (np) 151 151 pmu_base_addr = of_iomap(np, 0); 152 + of_node_put(np); 152 153 } 153 154 154 155 static void __init exynos_init_irq(void)
+6 -2
arch/arm/mach-spear/time.c
··· 218 218 irq = irq_of_parse_and_map(np, 0); 219 219 if (!irq) { 220 220 pr_err("%s: No irq passed for timer via DT\n", __func__); 221 - return; 221 + goto err_put_np; 222 222 } 223 223 224 224 gpt_base = of_iomap(np, 0); 225 225 if (!gpt_base) { 226 226 pr_err("%s: of iomap failed\n", __func__); 227 - return; 227 + goto err_put_np; 228 228 } 229 229 230 230 gpt_clk = clk_get_sys("gpt0", NULL); ··· 239 239 goto err_prepare_enable_clk; 240 240 } 241 241 242 + of_node_put(np); 243 + 242 244 spear_clockevent_init(irq); 243 245 spear_clocksource_init(); 244 246 ··· 250 248 clk_put(gpt_clk); 251 249 err_iomap: 252 250 iounmap(gpt_base); 251 + err_put_np: 252 + of_node_put(np); 253 253 }
+6 -6
arch/arm64/boot/dts/exynos/exynos7885.dtsi
··· 280 280 interrupts = <GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>; 281 281 pinctrl-names = "default"; 282 282 pinctrl-0 = <&uart0_bus>; 283 - clocks = <&cmu_peri CLK_GOUT_UART0_EXT_UCLK>, 284 - <&cmu_peri CLK_GOUT_UART0_PCLK>; 283 + clocks = <&cmu_peri CLK_GOUT_UART0_PCLK>, 284 + <&cmu_peri CLK_GOUT_UART0_EXT_UCLK>; 285 285 clock-names = "uart", "clk_uart_baud0"; 286 286 samsung,uart-fifosize = <64>; 287 287 status = "disabled"; ··· 293 293 interrupts = <GIC_SPI 247 IRQ_TYPE_LEVEL_HIGH>; 294 294 pinctrl-names = "default"; 295 295 pinctrl-0 = <&uart1_bus>; 296 - clocks = <&cmu_peri CLK_GOUT_UART1_EXT_UCLK>, 297 - <&cmu_peri CLK_GOUT_UART1_PCLK>; 296 + clocks = <&cmu_peri CLK_GOUT_UART1_PCLK>, 297 + <&cmu_peri CLK_GOUT_UART1_EXT_UCLK>; 298 298 clock-names = "uart", "clk_uart_baud0"; 299 299 samsung,uart-fifosize = <256>; 300 300 status = "disabled"; ··· 306 306 interrupts = <GIC_SPI 279 IRQ_TYPE_LEVEL_HIGH>; 307 307 pinctrl-names = "default"; 308 308 pinctrl-0 = <&uart2_bus>; 309 - clocks = <&cmu_peri CLK_GOUT_UART2_EXT_UCLK>, 310 - <&cmu_peri CLK_GOUT_UART2_PCLK>; 309 + clocks = <&cmu_peri CLK_GOUT_UART2_PCLK>, 310 + <&cmu_peri CLK_GOUT_UART2_EXT_UCLK>; 311 311 clock-names = "uart", "clk_uart_baud0"; 312 312 samsung,uart-fifosize = <256>; 313 313 status = "disabled";
+1 -1
arch/arm64/boot/dts/freescale/s32g2.dtsi
··· 79 79 }; 80 80 }; 81 81 82 - soc { 82 + soc@0 { 83 83 compatible = "simple-bus"; 84 84 #address-cells = <1>; 85 85 #size-cells = <1>;
-2
arch/arm64/boot/dts/ti/k3-am64-main.dtsi
··· 456 456 clock-names = "clk_ahb", "clk_xin"; 457 457 mmc-ddr-1_8v; 458 458 mmc-hs200-1_8v; 459 - mmc-hs400-1_8v; 460 459 ti,trm-icp = <0x2>; 461 460 ti,otap-del-sel-legacy = <0x0>; 462 461 ti,otap-del-sel-mmc-hs = <0x0>; 463 462 ti,otap-del-sel-ddr52 = <0x6>; 464 463 ti,otap-del-sel-hs200 = <0x7>; 465 - ti,otap-del-sel-hs400 = <0x4>; 466 464 }; 467 465 468 466 sdhci1: mmc@fa00000 {
+1 -1
arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
··· 33 33 ranges; 34 34 #interrupt-cells = <3>; 35 35 interrupt-controller; 36 - reg = <0x00 0x01800000 0x00 0x200000>, /* GICD */ 36 + reg = <0x00 0x01800000 0x00 0x100000>, /* GICD */ 37 37 <0x00 0x01900000 0x00 0x100000>, /* GICR */ 38 38 <0x00 0x6f000000 0x00 0x2000>, /* GICC */ 39 39 <0x00 0x6f010000 0x00 0x1000>, /* GICH */
+3 -3
arch/arm64/kvm/arm.c
··· 2112 2112 return 0; 2113 2113 2114 2114 /* 2115 - * Exclude HYP BSS from kmemleak so that it doesn't get peeked 2116 - * at, which would end badly once the section is inaccessible. 2117 - * None of other sections should ever be introspected. 2115 + * Exclude HYP sections from kmemleak so that they don't get peeked 2116 + * at, which would end badly once inaccessible. 2118 2117 */ 2119 2118 kmemleak_free_part(__hyp_bss_start, __hyp_bss_end - __hyp_bss_start); 2119 + kmemleak_free_part(__va(hyp_mem_base), hyp_mem_size); 2120 2120 return pkvm_drop_host_privileges(); 2121 2121 } 2122 2122
+1 -2
arch/loongarch/include/asm/branch.h
··· 12 12 return regs->csr_era; 13 13 } 14 14 15 - static inline int compute_return_era(struct pt_regs *regs) 15 + static inline void compute_return_era(struct pt_regs *regs) 16 16 { 17 17 regs->csr_era += 4; 18 - return 0; 19 18 } 20 19 21 20 #endif /* _ASM_BRANCH_H */
+5 -5
arch/loongarch/include/asm/pgtable.h
··· 426 426 427 427 #define kern_addr_valid(addr) (1) 428 428 429 + static inline unsigned long pmd_pfn(pmd_t pmd) 430 + { 431 + return (pmd_val(pmd) & _PFN_MASK) >> _PFN_SHIFT; 432 + } 433 + 429 434 #ifdef CONFIG_TRANSPARENT_HUGEPAGE 430 435 431 436 /* We don't have hardware dirty/accessed bits, generic_pmdp_establish is fine.*/ ··· 500 495 { 501 496 pmd_val(pmd) |= _PAGE_ACCESSED; 502 497 return pmd; 503 - } 504 - 505 - static inline unsigned long pmd_pfn(pmd_t pmd) 506 - { 507 - return (pmd_val(pmd) & _PFN_MASK) >> _PFN_SHIFT; 508 498 } 509 499 510 500 static inline struct page *pmd_page(pmd_t pmd)
+1 -1
arch/loongarch/kernel/cpu-probe.c
··· 263 263 264 264 c->cputype = CPU_UNKNOWN; 265 265 c->processor_id = read_cpucfg(LOONGARCH_CPUCFG0); 266 - c->fpu_vers = (read_cpucfg(LOONGARCH_CPUCFG2) >> 3) & 0x3; 266 + c->fpu_vers = (read_cpucfg(LOONGARCH_CPUCFG2) & CPUCFG2_FPVERS) >> 3; 267 267 268 268 c->fpu_csr0 = FPU_CSR_RN; 269 269 c->fpu_mask = FPU_CSR_RSVD;
-2
arch/loongarch/kernel/head.S
··· 14 14 15 15 __REF 16 16 17 - SYM_ENTRY(_stext, SYM_L_GLOBAL, SYM_A_NONE) 18 - 19 17 SYM_CODE_START(kernel_entry) # kernel entry point 20 18 21 19 /* Config direct window and set PG */
+1 -2
arch/loongarch/kernel/traps.c
··· 475 475 476 476 die_if_kernel("Reserved instruction in kernel code", regs); 477 477 478 - if (unlikely(compute_return_era(regs) < 0)) 479 - goto out; 478 + compute_return_era(regs); 480 479 481 480 if (unlikely(get_user(opcode, era) < 0)) { 482 481 status = SIGSEGV;
+1
arch/loongarch/kernel/vmlinux.lds.S
··· 37 37 HEAD_TEXT_SECTION 38 38 39 39 . = ALIGN(PECOFF_SEGMENT_ALIGN); 40 + _stext = .; 40 41 .text : { 41 42 TEXT_TEXT 42 43 SCHED_TEXT
+4 -3
arch/loongarch/mm/tlb.c
··· 281 281 if (pcpu_handlers[cpu]) 282 282 return; 283 283 284 - page = alloc_pages_node(cpu_to_node(cpu), GFP_KERNEL, get_order(vec_sz)); 284 + page = alloc_pages_node(cpu_to_node(cpu), GFP_ATOMIC, get_order(vec_sz)); 285 285 if (!page) 286 286 return; 287 287 288 288 addr = page_address(page); 289 - pcpu_handlers[cpu] = virt_to_phys(addr); 289 + pcpu_handlers[cpu] = (unsigned long)addr; 290 290 memcpy((void *)addr, (void *)eentry, vec_sz); 291 291 local_flush_icache_range((unsigned long)addr, (unsigned long)addr + vec_sz); 292 - csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_TLBRENTRY); 292 + csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_EENTRY); 293 + csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_MERRENTRY); 293 294 csr_write64(pcpu_handlers[cpu] + 80*VECSIZE, LOONGARCH_CSR_TLBRENTRY); 294 295 } 295 296 #endif
+3 -2
arch/mips/boot/dts/ingenic/x1000.dtsi
··· 111 111 112 112 clocks = <&cgu X1000_CLK_RTCLK>, 113 113 <&cgu X1000_CLK_EXCLK>, 114 - <&cgu X1000_CLK_PCLK>; 115 - clock-names = "rtc", "ext", "pclk"; 114 + <&cgu X1000_CLK_PCLK>, 115 + <&cgu X1000_CLK_TCU>; 116 + clock-names = "rtc", "ext", "pclk", "tcu"; 116 117 117 118 interrupt-controller; 118 119 #interrupt-cells = <1>;
+3 -2
arch/mips/boot/dts/ingenic/x1830.dtsi
··· 104 104 105 105 clocks = <&cgu X1830_CLK_RTCLK>, 106 106 <&cgu X1830_CLK_EXCLK>, 107 - <&cgu X1830_CLK_PCLK>; 108 - clock-names = "rtc", "ext", "pclk"; 107 + <&cgu X1830_CLK_PCLK>, 108 + <&cgu X1830_CLK_TCU>; 109 + clock-names = "rtc", "ext", "pclk", "tcu"; 109 110 110 111 interrupt-controller; 111 112 #interrupt-cells = <1>;
+1
arch/mips/generic/board-ranchu.c
··· 44 44 __func__); 45 45 46 46 rtc_base = of_iomap(np, 0); 47 + of_node_put(np); 47 48 if (!rtc_base) 48 49 panic("%s(): Failed to ioremap Goldfish RTC base!", __func__); 49 50
+6
arch/mips/lantiq/falcon/sysctrl.c
··· 208 208 of_address_to_resource(np_sysgpe, 0, &res_sys[2])) 209 209 panic("Failed to get core resources"); 210 210 211 + of_node_put(np_status); 212 + of_node_put(np_ebu); 213 + of_node_put(np_sys1); 214 + of_node_put(np_syseth); 215 + of_node_put(np_sysgpe); 216 + 211 217 if ((request_mem_region(res_status.start, resource_size(&res_status), 212 218 res_status.name) < 0) || 213 219 (request_mem_region(res_ebu.start, resource_size(&res_ebu),
+1
arch/mips/lantiq/irq.c
··· 408 408 if (!ltq_eiu_membase) 409 409 panic("Failed to remap eiu memory"); 410 410 } 411 + of_node_put(eiu_node); 411 412 412 413 return 0; 413 414 }
+4
arch/mips/lantiq/xway/sysctrl.c
··· 441 441 of_address_to_resource(np_ebu, 0, &res_ebu)) 442 442 panic("Failed to get core resources"); 443 443 444 + of_node_put(np_pmu); 445 + of_node_put(np_cgu); 446 + of_node_put(np_ebu); 447 + 444 448 if (!request_mem_region(res_pmu.start, resource_size(&res_pmu), 445 449 res_pmu.name) || 446 450 !request_mem_region(res_cgu.start, resource_size(&res_cgu),
+2
arch/mips/mti-malta/malta-time.c
··· 214 214 215 215 if (of_update_property(node, &gic_frequency_prop) < 0) 216 216 pr_err("error updating gic frequency property\n"); 217 + 218 + of_node_put(node); 217 219 } 218 220 219 221 #endif
+6 -1
arch/mips/pic32/pic32mzda/init.c
··· 98 98 np = of_find_compatible_node(NULL, NULL, lookup->compatible); 99 99 if (np) { 100 100 lookup->name = (char *)np->name; 101 - if (lookup->phys_addr) 101 + if (lookup->phys_addr) { 102 + of_node_put(np); 102 103 continue; 104 + } 103 105 if (!of_address_to_resource(np, 0, &res)) 104 106 lookup->phys_addr = res.start; 107 + of_node_put(np); 105 108 } 106 109 } 110 + 111 + of_node_put(root); 107 112 108 113 return 0; 109 114 }
+3
arch/mips/pic32/pic32mzda/time.c
··· 32 32 goto default_map; 33 33 34 34 irq = irq_of_parse_and_map(node, 0); 35 + 36 + of_node_put(node); 37 + 35 38 if (!irq) 36 39 goto default_map; 37 40
+2
arch/mips/ralink/of.c
··· 40 40 if (of_address_to_resource(np, 0, &res)) 41 41 panic("Failed to get resource for %s", node); 42 42 43 + of_node_put(np); 44 + 43 45 if (!request_mem_region(res.start, 44 46 resource_size(&res), 45 47 res.name))
-2
arch/mips/vr41xx/common/icu.c
··· 640 640 641 641 printk(KERN_ERR "spurious ICU interrupt: %04x,%04x\n", pend1, pend2); 642 642 643 - atomic_inc(&irq_err_count); 644 - 645 643 return -1; 646 644 } 647 645
+1
arch/parisc/Kconfig
··· 10 10 select ARCH_WANT_FRAME_POINTERS 11 11 select ARCH_HAS_ELF_RANDOMIZE 12 12 select ARCH_HAS_STRICT_KERNEL_RWX 13 + select ARCH_HAS_STRICT_MODULE_RWX 13 14 select ARCH_HAS_UBSAN_SANITIZE_ALL 14 15 select ARCH_HAS_PTE_SPECIAL 15 16 select ARCH_NO_SG_CHAIN
+1 -1
arch/parisc/include/asm/fb.h
··· 12 12 pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE; 13 13 } 14 14 15 - #if defined(CONFIG_STI_CONSOLE) || defined(CONFIG_FB_STI) 15 + #if defined(CONFIG_FB_STI) 16 16 int fb_is_primary_device(struct fb_info *info); 17 17 #else 18 18 static inline int fb_is_primary_device(struct fb_info *info)
+4 -1
arch/parisc/kernel/cache.c
··· 722 722 return; 723 723 724 724 if (parisc_requires_coherency()) { 725 - flush_user_cache_page(vma, vmaddr); 725 + if (vma->vm_flags & VM_SHARED) 726 + flush_data_cache(); 727 + else 728 + flush_user_cache_page(vma, vmaddr); 726 729 return; 727 730 } 728 731
+1 -1
arch/parisc/math-emu/decode_exc.c
··· 102 102 * that happen. Want to keep this overhead low, but still provide 103 103 * some information to the customer. All exits from this routine 104 104 * need to restore Fpu_register[0] 105 - */ 105 + */ 106 106 107 107 bflags=(Fpu_register[0] & 0xf8000000); 108 108 Fpu_register[0] &= 0x07ffffff;
+1 -1
arch/powerpc/kernel/process.c
··· 1855 1855 tm_reclaim_current(0); 1856 1856 #endif 1857 1857 1858 - memset(regs->gpr, 0, sizeof(regs->gpr)); 1858 + memset(&regs->gpr[1], 0, sizeof(regs->gpr) - sizeof(regs->gpr[0])); 1859 1859 regs->ctr = 0; 1860 1860 regs->link = 0; 1861 1861 regs->xer = 0;
+1 -1
arch/powerpc/kernel/prom_init.c
··· 2302 2302 2303 2303 static int __init prom_find_machine_type(void) 2304 2304 { 2305 - char compat[256]; 2305 + static char compat[256] __prombss; 2306 2306 int len, i = 0; 2307 2307 #ifdef CONFIG_PPC64 2308 2308 phandle rtas;
+10 -1
arch/powerpc/kernel/rtas.c
··· 1071 1071 { "get-time-of-day", -1, -1, -1, -1, -1 }, 1072 1072 { "ibm,get-vpd", -1, 0, -1, 1, 2 }, 1073 1073 { "ibm,lpar-perftools", -1, 2, 3, -1, -1 }, 1074 - { "ibm,platform-dump", -1, 4, 5, -1, -1 }, 1074 + { "ibm,platform-dump", -1, 4, 5, -1, -1 }, /* Special cased */ 1075 1075 { "ibm,read-slot-reset-state", -1, -1, -1, -1, -1 }, 1076 1076 { "ibm,scan-log-dump", -1, 0, 1, -1, -1 }, 1077 1077 { "ibm,set-dynamic-indicator", -1, 2, -1, -1, -1 }, ··· 1120 1120 size = 1; 1121 1121 1122 1122 end = base + size - 1; 1123 + 1124 + /* 1125 + * Special case for ibm,platform-dump - NULL buffer 1126 + * address is used to indicate end of dump processing 1127 + */ 1128 + if (!strcmp(f->name, "ibm,platform-dump") && 1129 + base == 0) 1130 + return false; 1131 + 1123 1132 if (!in_rmo_buf(base, end)) 1124 1133 goto err; 1125 1134 }
+7 -6
arch/powerpc/kernel/setup-common.c
··· 935 935 /* Print various info about the machine that has been gathered so far. */ 936 936 print_system_info(); 937 937 938 - /* Reserve large chunks of memory for use by CMA for KVM. */ 939 - kvm_cma_reserve(); 940 - 941 - /* Reserve large chunks of memory for us by CMA for hugetlb */ 942 - gigantic_hugetlb_cma_reserve(); 943 - 944 938 klp_init_thread_info(&init_task); 945 939 946 940 setup_initial_init_mm(_stext, _etext, _edata, _end); ··· 948 954 smp_release_cpus(); 949 955 950 956 initmem_init(); 957 + 958 + /* 959 + * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must 960 + * be called after initmem_init(), so that pageblock_order is initialised. 961 + */ 962 + kvm_cma_reserve(); 963 + gigantic_hugetlb_cma_reserve(); 951 964 952 965 early_memtest(min_low_pfn << PAGE_SHIFT, max_low_pfn << PAGE_SHIFT); 953 966
+7
arch/powerpc/platforms/microwatt/microwatt.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _MICROWATT_H 3 + #define _MICROWATT_H 4 + 5 + void microwatt_rng_init(void); 6 + 7 + #endif /* _MICROWATT_H */
+3 -7
arch/powerpc/platforms/microwatt/rng.c
··· 11 11 #include <asm/archrandom.h> 12 12 #include <asm/cputable.h> 13 13 #include <asm/machdep.h> 14 + #include "microwatt.h" 14 15 15 16 #define DARN_ERR 0xFFFFFFFFFFFFFFFFul 16 17 ··· 30 29 return 1; 31 30 } 32 31 33 - static __init int rng_init(void) 32 + void __init microwatt_rng_init(void) 34 33 { 35 34 unsigned long val; 36 35 int i; ··· 38 37 for (i = 0; i < 10; i++) { 39 38 if (microwatt_get_random_darn(&val)) { 40 39 ppc_md.get_random_seed = microwatt_get_random_darn; 41 - return 0; 40 + return; 42 41 } 43 42 } 44 - 45 - pr_warn("Unable to use DARN for get_random_seed()\n"); 46 - 47 - return -EIO; 48 43 } 49 - machine_subsys_initcall(, rng_init);
+8
arch/powerpc/platforms/microwatt/setup.c
··· 16 16 #include <asm/xics.h> 17 17 #include <asm/udbg.h> 18 18 19 + #include "microwatt.h" 20 + 19 21 static void __init microwatt_init_IRQ(void) 20 22 { 21 23 xics_init(); ··· 34 32 } 35 33 machine_arch_initcall(microwatt, microwatt_populate); 36 34 35 + static void __init microwatt_setup_arch(void) 36 + { 37 + microwatt_rng_init(); 38 + } 39 + 37 40 define_machine(microwatt) { 38 41 .name = "microwatt", 39 42 .probe = microwatt_probe, 40 43 .init_IRQ = microwatt_init_IRQ, 44 + .setup_arch = microwatt_setup_arch, 41 45 .progress = udbg_progress, 42 46 .calibrate_decr = generic_calibrate_decr, 43 47 };
+2
arch/powerpc/platforms/powernv/powernv.h
··· 42 42 u32 __init memcons_get_size(struct memcons *mc); 43 43 struct memcons *__init memcons_init(struct device_node *node, const char *mc_prop_name); 44 44 45 + void pnv_rng_init(void); 46 + 45 47 #endif /* _POWERNV_H */
+36 -16
arch/powerpc/platforms/powernv/rng.c
··· 17 17 #include <asm/prom.h> 18 18 #include <asm/machdep.h> 19 19 #include <asm/smp.h> 20 + #include "powernv.h" 20 21 21 22 #define DARN_ERR 0xFFFFFFFFFFFFFFFFul 22 23 ··· 28 27 }; 29 28 30 29 static DEFINE_PER_CPU(struct powernv_rng *, powernv_rng); 31 - 32 30 33 31 int powernv_hwrng_present(void) 34 32 { ··· 98 98 return 0; 99 99 } 100 100 } 101 - 102 - pr_warn("Unable to use DARN for get_random_seed()\n"); 103 - 104 101 return -EIO; 105 102 } ··· 160 163 161 164 rng_init_per_cpu(rng, dn); 162 165 163 - pr_info_once("Registering arch random hook.\n"); 164 - 165 166 ppc_md.get_random_seed = powernv_get_random_long; 166 167 167 168 return 0; 168 169 } 169 170 170 - static __init int rng_init(void) 171 + static int __init pnv_get_random_long_early(unsigned long *v) 171 172 { 172 173 struct device_node *dn; 173 - int rc; 174 + 175 + if (!slab_is_available()) 176 + return 0; 177 + 178 + if (cmpxchg(&ppc_md.get_random_seed, pnv_get_random_long_early, 179 + NULL) != pnv_get_random_long_early) 180 + return 0; 174 181 175 182 for_each_compatible_node(dn, NULL, "ibm,power-rng") { 176 - rc = rng_create(dn); 177 - if (rc) { 178 - pr_err("Failed creating rng for %pOF (%d).\n", 179 - dn, rc); 183 + if (rng_create(dn)) 180 184 continue; 181 - } 182 - 183 185 /* Create devices for hwrng driver */ 184 186 of_platform_device_create(dn, NULL, NULL); 185 187 } 186 188 187 - initialise_darn(); 189 + if (!ppc_md.get_random_seed) 190 + return 0; 191 + return ppc_md.get_random_seed(v); 192 + } 188 193 194 + void __init pnv_rng_init(void) 195 + { 196 + struct device_node *dn; 197 + 198 + /* Prefer darn over the rest. */
199 + if (!initialise_darn()) 200 + return; 201 + 202 + dn = of_find_compatible_node(NULL, NULL, "ibm,power-rng"); 203 + if (dn) 204 + ppc_md.get_random_seed = pnv_get_random_long_early; 205 + 206 + of_node_put(dn); 207 + } 208 + 209 + static int __init pnv_rng_late_init(void) 210 + { 211 + unsigned long v; 212 + /* In case it wasn't called during init for some other reason. */ 213 + if (ppc_md.get_random_seed == pnv_get_random_long_early) 214 + pnv_get_random_long_early(&v); 189 215 return 0; 190 216 } 191 - machine_subsys_initcall(powernv, rng_init); 217 + machine_subsys_initcall(powernv, pnv_rng_late_init);
+2
arch/powerpc/platforms/powernv/setup.c
··· 203 203 pnv_check_guarded_cores(); 204 204 205 205 /* XXX PMCS */ 206 + 207 + pnv_rng_init(); 206 208 } 207 209 208 210 static void __init pnv_init(void)
+2
arch/powerpc/platforms/pseries/pseries.h
··· 122 122 static inline void pseries_lpar_read_hblkrm_characteristics(void) { } 123 123 #endif 124 124 125 + void pseries_rng_init(void); 126 + 125 127 #endif /* _PSERIES_PSERIES_H */
+3 -8
arch/powerpc/platforms/pseries/rng.c
··· 10 10 #include <asm/archrandom.h> 11 11 #include <asm/machdep.h> 12 12 #include <asm/plpar_wrappers.h> 13 + #include "pseries.h" 13 14 14 15 15 16 static int pseries_get_random_long(unsigned long *v) ··· 25 24 return 0; 26 25 } 27 26 28 - static __init int rng_init(void) 27 + void __init pseries_rng_init(void) 29 28 { 30 29 struct device_node *dn; 31 30 32 31 dn = of_find_compatible_node(NULL, NULL, "ibm,random"); 33 32 if (!dn) 34 - return -ENODEV; 35 - 36 - pr_info("Registering arch random hook.\n"); 37 - 33 + return; 38 34 ppc_md.get_random_seed = pseries_get_random_long; 39 - 40 35 of_node_put(dn); 41 - return 0; 42 36 } 43 - machine_subsys_initcall(pseries, rng_init);
+1
arch/powerpc/platforms/pseries/setup.c
··· 839 839 } 840 840 841 841 ppc_md.pcibios_root_bridge_prepare = pseries_root_bridge_prepare; 842 + pseries_rng_init(); 842 843 } 843 844 844 845 static void pseries_panic(char *str)
+7 -7
arch/riscv/include/asm/errata_list.h
··· 75 75 "nop\n\t" \ 76 76 "nop\n\t" \ 77 77 "nop", \ 78 - "li t3, %2\n\t" \ 79 - "slli t3, t3, %4\n\t" \ 78 + "li t3, %1\n\t" \ 79 + "slli t3, t3, %3\n\t" \ 80 80 "and t3, %0, t3\n\t" \ 81 81 "bne t3, zero, 2f\n\t" \ 82 - "li t3, %3\n\t" \ 83 - "slli t3, t3, %4\n\t" \ 82 + "li t3, %2\n\t" \ 83 + "slli t3, t3, %3\n\t" \ 84 84 "or %0, %0, t3\n\t" \ 85 85 "2:", THEAD_VENDOR_ID, \ 86 86 ERRATA_THEAD_PBMT, CONFIG_ERRATA_THEAD_PBMT) \ 87 87 : "+r"(_val) \ 88 - : "0"(_val), \ 89 - "I"(_PAGE_MTMASK_THEAD >> ALT_THEAD_PBMT_SHIFT), \ 88 + : "I"(_PAGE_MTMASK_THEAD >> ALT_THEAD_PBMT_SHIFT), \ 90 89 "I"(_PAGE_PMA_THEAD >> ALT_THEAD_PBMT_SHIFT), \ 91 - "I"(ALT_THEAD_PBMT_SHIFT)) 90 + "I"(ALT_THEAD_PBMT_SHIFT) \ 91 + : "t3") 92 92 #else 93 93 #define ALT_THEAD_PMA(_val) 94 94 #endif
+9 -1
arch/s390/kernel/crash_dump.c
··· 219 219 unsigned long src; 220 220 int rc; 221 221 222 + if (!(iter_is_iovec(iter) || iov_iter_is_kvec(iter))) 223 + return -EINVAL; 224 + /* Multi-segment iterators are not supported */ 225 + if (iter->nr_segs > 1) 226 + return -EINVAL; 222 227 if (!csize) 223 228 return 0; 224 229 src = pfn_to_phys(pfn) + offset; ··· 233 228 rc = copy_oldmem_user(iter->iov->iov_base, src, csize); 234 229 else 235 230 rc = copy_oldmem_kernel(iter->kvec->iov_base, src, csize); 236 - return rc; 231 + if (rc < 0) 232 + return rc; 233 + iov_iter_advance(iter, csize); 234 + return csize; 237 235 } 238 236 239 237 /*
+21 -1
arch/s390/kernel/perf_cpum_cf.c
··· 516 516 return err; 517 517 } 518 518 519 + /* Events CPU_CYLCES and INSTRUCTIONS can be submitted with two different 520 + * attribute::type values: 521 + * - PERF_TYPE_HARDWARE: 522 + * - pmu->type: 523 + * Handle both type of invocations identical. They address the same hardware. 524 + * The result is different when event modifiers exclude_kernel and/or 525 + * exclude_user are also set. 526 + */ 527 + static int cpumf_pmu_event_type(struct perf_event *event) 528 + { 529 + u64 ev = event->attr.config; 530 + 531 + if (cpumf_generic_events_basic[PERF_COUNT_HW_CPU_CYCLES] == ev || 532 + cpumf_generic_events_basic[PERF_COUNT_HW_INSTRUCTIONS] == ev || 533 + cpumf_generic_events_user[PERF_COUNT_HW_CPU_CYCLES] == ev || 534 + cpumf_generic_events_user[PERF_COUNT_HW_INSTRUCTIONS] == ev) 535 + return PERF_TYPE_HARDWARE; 536 + return PERF_TYPE_RAW; 537 + } 538 + 519 539 static int cpumf_pmu_event_init(struct perf_event *event) 520 540 { 521 541 unsigned int type = event->attr.type; ··· 545 525 err = __hw_perf_event_init(event, type); 546 526 else if (event->pmu->type == type) 547 527 /* Registered as unknown PMU */ 548 - err = __hw_perf_event_init(event, PERF_TYPE_RAW); 528 + err = __hw_perf_event_init(event, cpumf_pmu_event_type(event)); 549 529 else 550 530 return -ENOENT; 551 531
+15 -5
arch/s390/kernel/perf_pai_crypto.c
··· 193 193 /* PAI crypto PMU registered as PERF_TYPE_RAW, check event type */ 194 194 if (a->type != PERF_TYPE_RAW && event->pmu->type != a->type) 195 195 return -ENOENT; 196 - /* PAI crypto event must be valid */ 197 - if (a->config > PAI_CRYPTO_BASE + paicrypt_cnt) 196 + /* PAI crypto event must be in valid range */ 197 + if (a->config < PAI_CRYPTO_BASE || 198 + a->config > PAI_CRYPTO_BASE + paicrypt_cnt) 198 199 return -EINVAL; 199 200 /* Allow only CPU wide operation, no process context for now. */ 200 201 if (event->hw.target || event->cpu == -1) ··· 209 208 if (rc) 210 209 return rc; 211 210 211 + /* Event initialization sets last_tag to 0. When later on the events 212 + * are deleted and re-added, do not reset the event count value to zero. 213 + * Events are added, deleted and re-added when 2 or more events 214 + * are active at the same time. 215 + */ 216 + event->hw.last_tag = 0; 212 217 cpump->event = event; 213 218 event->destroy = paicrypt_event_destroy; 214 219 ··· 249 242 { 250 243 u64 sum; 251 244 252 - sum = paicrypt_getall(event); /* Get current value */ 253 - local64_set(&event->hw.prev_count, sum); 254 - local64_set(&event->count, 0); 245 + if (!event->hw.last_tag) { 246 + event->hw.last_tag = 1; 247 + sum = paicrypt_getall(event); /* Get current value */ 248 + local64_set(&event->count, 0); 249 + local64_set(&event->hw.prev_count, sum); 250 + } 255 251 } 256 252 257 253 static int paicrypt_add(struct perf_event *event, int flags)
+1 -1
arch/x86/include/asm/efi.h
··· 323 323 #define __efi64_argmap_get_memory_space_descriptor(phys, desc) \ 324 324 (__efi64_split(phys), (desc)) 325 325 326 - #define __efi64_argmap_set_memory_space_descriptor(phys, size, flags) \ 326 + #define __efi64_argmap_set_memory_space_attributes(phys, size, flags) \ 327 327 (__efi64_split(phys), __efi64_split(size), __efi64_split(flags)) 328 328 329 329 /*
+50 -28
arch/x86/kvm/svm/sev.c
··· 844 844 845 845 /* If source buffer is not aligned then use an intermediate buffer */ 846 846 if (!IS_ALIGNED((unsigned long)vaddr, 16)) { 847 - src_tpage = alloc_page(GFP_KERNEL); 847 + src_tpage = alloc_page(GFP_KERNEL_ACCOUNT); 848 848 if (!src_tpage) 849 849 return -ENOMEM; 850 850 ··· 865 865 if (!IS_ALIGNED((unsigned long)dst_vaddr, 16) || !IS_ALIGNED(size, 16)) { 866 866 int dst_offset; 867 867 868 - dst_tpage = alloc_page(GFP_KERNEL); 868 + dst_tpage = alloc_page(GFP_KERNEL_ACCOUNT); 869 869 if (!dst_tpage) { 870 870 ret = -ENOMEM; 871 871 goto e_free; ··· 1665 1665 { 1666 1666 struct kvm_sev_info *dst = &to_kvm_svm(dst_kvm)->sev_info; 1667 1667 struct kvm_sev_info *src = &to_kvm_svm(src_kvm)->sev_info; 1668 + struct kvm_vcpu *dst_vcpu, *src_vcpu; 1669 + struct vcpu_svm *dst_svm, *src_svm; 1668 1670 struct kvm_sev_info *mirror; 1671 + unsigned long i; 1669 1672 1670 1673 dst->active = true; 1671 1674 dst->asid = src->asid; 1672 1675 dst->handle = src->handle; 1673 1676 dst->pages_locked = src->pages_locked; 1674 1677 dst->enc_context_owner = src->enc_context_owner; 1678 + dst->es_active = src->es_active; 1675 1679 1676 1680 src->asid = 0; 1677 1681 src->active = false; 1678 1682 src->handle = 0; 1679 1683 src->pages_locked = 0; 1680 1684 src->enc_context_owner = NULL; 1685 + src->es_active = false; 1681 1686 1682 1687 list_cut_before(&dst->regions_list, &src->regions_list, &src->regions_list); 1683 1688 ··· 1709 1704 list_del(&src->mirror_entry); 1710 1705 list_add_tail(&dst->mirror_entry, &owner_sev_info->mirror_vms); 1711 1706 } 1712 - } 1713 1707 1714 - static int sev_es_migrate_from(struct kvm *dst, struct kvm *src) 1715 - { 1716 - unsigned long i; 1717 - struct kvm_vcpu *dst_vcpu, *src_vcpu; 1718 - struct vcpu_svm *dst_svm, *src_svm; 1719 - 1720 - if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus)) 1721 - return -EINVAL; 1722 - 1723 - kvm_for_each_vcpu(i, src_vcpu, src) { 1724 - if (!src_vcpu->arch.guest_state_protected) 1725 - return -EINVAL;
1726 - } 1727 - 1728 - kvm_for_each_vcpu(i, src_vcpu, src) { 1729 - src_svm = to_svm(src_vcpu); 1730 - dst_vcpu = kvm_get_vcpu(dst, i); 1708 + kvm_for_each_vcpu(i, dst_vcpu, dst_kvm) { 1731 1709 dst_svm = to_svm(dst_vcpu); 1710 + 1711 + sev_init_vmcb(dst_svm); 1712 + 1713 + if (!dst->es_active) 1714 + continue; 1715 + 1716 + /* 1717 + * Note, the source is not required to have the same number of 1718 + * vCPUs as the destination when migrating a vanilla SEV VM. 1719 + */ 1720 + src_vcpu = kvm_get_vcpu(src_kvm, i); 1721 + src_svm = to_svm(src_vcpu); 1732 1722 1733 1723 /* 1734 1724 * Transfer VMSA and GHCB state to the destination. Nullify and ··· 1740 1740 src_svm->vmcb->control.vmsa_pa = INVALID_PAGE; 1741 1741 src_vcpu->arch.guest_state_protected = false; 1742 1742 } 1743 - to_kvm_svm(src)->sev_info.es_active = false; 1744 - to_kvm_svm(dst)->sev_info.es_active = true; 1743 + } 1744 + 1745 + static int sev_check_source_vcpus(struct kvm *dst, struct kvm *src) 1746 + { 1747 + struct kvm_vcpu *src_vcpu; 1748 + unsigned long i; 1749 + 1750 + if (!sev_es_guest(src)) 1751 + return 0; 1752 + 1753 + if (atomic_read(&src->online_vcpus) != atomic_read(&dst->online_vcpus)) 1754 + return -EINVAL; 1755 + 1756 + kvm_for_each_vcpu(i, src_vcpu, src) { 1757 + if (!src_vcpu->arch.guest_state_protected) 1758 + return -EINVAL; 1759 + } 1745 1760 1746 1761 return 0; 1747 1762 } ··· 1804 1789 if (ret) 1805 1790 goto out_dst_vcpu; 1806 1791 1807 - if (sev_es_guest(source_kvm)) { 1808 - ret = sev_es_migrate_from(kvm, source_kvm); 1809 - if (ret) 1810 - goto out_source_vcpu; 1811 - } 1792 + ret = sev_check_source_vcpus(kvm, source_kvm); 1793 + if (ret) 1794 + goto out_source_vcpu; 1812 1795 1813 1796 sev_migrate_from(kvm, source_kvm); 1814 1797 kvm_vm_dead(source_kvm); ··· 2927 2914 count, in); 2928 2915 } 2929 2916 2930 - void sev_es_init_vmcb(struct vcpu_svm *svm) 2917 + static void sev_es_init_vmcb(struct vcpu_svm *svm) 2931 2918 { 2932 2919 struct kvm_vcpu *vcpu = &svm->vcpu;
2933 2920 ··· 2978 2965 if (guest_cpuid_has(&svm->vcpu, X86_FEATURE_RDTSCP)) 2979 2966 svm_clr_intercept(svm, INTERCEPT_RDTSCP); 2980 2967 } 2968 + } 2969 + 2970 + void sev_init_vmcb(struct vcpu_svm *svm) 2971 + { 2972 + svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE; 2973 + clr_exception_intercept(svm, UD_VECTOR); 2974 + 2975 + if (sev_es_guest(svm->vcpu.kvm)) 2976 + sev_es_init_vmcb(svm); 2981 2977 } 2982 2978 2983 2979 void sev_es_vcpu_reset(struct vcpu_svm *svm)
+2 -9
arch/x86/kvm/svm/svm.c
··· 1212 1212 svm->vmcb->control.int_ctl |= V_GIF_ENABLE_MASK; 1213 1213 } 1214 1214 1215 - if (sev_guest(vcpu->kvm)) { 1216 - svm->vmcb->control.nested_ctl |= SVM_NESTED_CTL_SEV_ENABLE; 1217 - clr_exception_intercept(svm, UD_VECTOR); 1218 - 1219 - if (sev_es_guest(vcpu->kvm)) { 1220 - /* Perform SEV-ES specific VMCB updates */ 1221 - sev_es_init_vmcb(svm); 1222 - } 1223 - } 1215 + if (sev_guest(vcpu->kvm)) 1216 + sev_init_vmcb(svm); 1224 1217 1225 1218 svm_hv_init_vmcb(vmcb); 1226 1219 init_vmcb_after_set_cpuid(vcpu);
+1 -1
arch/x86/kvm/svm/svm.h
··· 649 649 void __init sev_hardware_setup(void); 650 650 void sev_hardware_unsetup(void); 651 651 int sev_cpu_init(struct svm_cpu_data *sd); 652 + void sev_init_vmcb(struct vcpu_svm *svm); 652 653 void sev_free_vcpu(struct kvm_vcpu *vcpu); 653 654 int sev_handle_vmgexit(struct kvm_vcpu *vcpu); 654 655 int sev_es_string_io(struct vcpu_svm *svm, int size, unsigned int port, int in); 655 - void sev_es_init_vmcb(struct vcpu_svm *svm); 656 656 void sev_es_vcpu_reset(struct vcpu_svm *svm); 657 657 void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector); 658 658 void sev_es_prepare_switch_to_guest(struct sev_es_save_area *hostsa);
+2 -1
arch/x86/net/bpf_jit_comp.c
··· 1420 1420 case BPF_JMP | BPF_CALL: 1421 1421 func = (u8 *) __bpf_call_base + imm32; 1422 1422 if (tail_call_reachable) { 1423 + /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */ 1423 1424 EMIT3_off32(0x48, 0x8B, 0x85, 1424 - -(bpf_prog->aux->stack_depth + 8)); 1425 + -round_up(bpf_prog->aux->stack_depth, 8) - 8); 1425 1426 if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7)) 1426 1427 return -EINVAL; 1427 1428 } else {
+1 -1
arch/xtensa/kernel/entry.S
··· 2173 2173 2174 2174 #ifdef CONFIG_HIBERNATION 2175 2175 2176 - .bss 2176 + .section .bss, "aw" 2177 2177 .align 4 2178 2178 .Lsaved_regs: 2179 2179 #if defined(__XTENSA_WINDOWED_ABI__)
+1
arch/xtensa/kernel/time.c
··· 154 154 cpu = of_find_compatible_node(NULL, NULL, "cdns,xtensa-cpu"); 155 155 if (cpu) { 156 156 clk = of_clk_get(cpu, 0); 157 + of_node_put(cpu); 157 158 if (!IS_ERR(clk)) { 158 159 ccount_freq = clk_get_rate(clk); 159 160 return;
+1
arch/xtensa/platforms/xtfpga/setup.c
··· 133 133 134 134 if ((eth = of_find_compatible_node(eth, NULL, "opencores,ethoc"))) 135 135 update_local_mac(eth); 136 + of_node_put(eth); 136 137 return 0; 137 138 } 138 139 arch_initcall(machine_setup);
-13
block/blk-core.c
··· 322 322 blk_mq_exit_queue(q); 323 323 } 324 324 325 - /* 326 - * In theory, request pool of sched_tags belongs to request queue. 327 - * However, the current implementation requires tag_set for freeing 328 - * requests, so free the pool now. 329 - * 330 - * Queue has become frozen, there can't be any in-queue requests, so 331 - * it is safe to free requests now. 332 - */ 333 - mutex_lock(&q->sysfs_lock); 334 - if (q->elevator) 335 - blk_mq_sched_free_rqs(q); 336 - mutex_unlock(&q->sysfs_lock); 337 - 338 325 /* @q is and will stay empty, shutdown and put */ 339 326 blk_put_queue(q); 340 327 }
-1
block/blk-ia-ranges.c
··· 144 144 } 145 145 146 146 for (i = 0; i < iars->nr_ia_ranges; i++) { 147 - iars->ia_range[i].queue = q; 148 147 ret = kobject_init_and_add(&iars->ia_range[i].kobj, 149 148 &blk_ia_range_ktype, &iars->kobj, 150 149 "%d", i);
+18 -11
block/blk-mq-debugfs.c
··· 711 711 } 712 712 } 713 713 714 - void blk_mq_debugfs_unregister(struct request_queue *q) 715 - { 716 - q->sched_debugfs_dir = NULL; 717 - } 718 - 719 714 static void blk_mq_debugfs_register_ctx(struct blk_mq_hw_ctx *hctx, 720 715 struct blk_mq_ctx *ctx) 721 716 { ··· 741 746 742 747 void blk_mq_debugfs_unregister_hctx(struct blk_mq_hw_ctx *hctx) 743 748 { 749 + if (!hctx->queue->debugfs_dir) 750 + return; 744 751 debugfs_remove_recursive(hctx->debugfs_dir); 745 752 hctx->sched_debugfs_dir = NULL; 746 753 hctx->debugfs_dir = NULL; ··· 770 773 { 771 774 struct elevator_type *e = q->elevator->type; 772 775 776 + lockdep_assert_held(&q->debugfs_mutex); 777 + 773 778 /* 774 779 * If the parent directory has not been created yet, return, we will be 775 780 * called again later on and the directory/files will be created then. ··· 789 790 790 791 void blk_mq_debugfs_unregister_sched(struct request_queue *q) 791 792 { 793 + lockdep_assert_held(&q->debugfs_mutex); 794 + 792 795 debugfs_remove_recursive(q->sched_debugfs_dir); 793 796 q->sched_debugfs_dir = NULL; 794 797 } ··· 812 811 813 812 void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos) 814 813 { 814 + lockdep_assert_held(&rqos->q->debugfs_mutex); 815 + 816 + if (!rqos->q->debugfs_dir) 817 + return; 815 818 debugfs_remove_recursive(rqos->debugfs_dir); 816 819 rqos->debugfs_dir = NULL; 817 820 } ··· 824 819 { 825 820 struct request_queue *q = rqos->q; 826 821 const char *dir_name = rq_qos_id_to_name(rqos->id); 822 + 823 + lockdep_assert_held(&q->debugfs_mutex); 827 824 828 825 if (rqos->debugfs_dir || !rqos->ops->debugfs_attrs) 829 826 return; ··· 840 833 debugfs_create_files(rqos->debugfs_dir, rqos, rqos->ops->debugfs_attrs); 841 834 } 842 835 843 - void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q) 844 - { 845 - debugfs_remove_recursive(q->rqos_debugfs_dir); 846 - q->rqos_debugfs_dir = NULL; 847 - } 848 - 849 836 void blk_mq_debugfs_register_sched_hctx(struct request_queue *q, 850 837 struct blk_mq_hw_ctx *hctx)
851 838 { 852 839 struct elevator_type *e = q->elevator->type; 840 841 + lockdep_assert_held(&q->debugfs_mutex); 853 842 854 843 /* 855 844 * If the parent debugfs directory has not been created yet, return; ··· 866 863 867 864 void blk_mq_debugfs_unregister_sched_hctx(struct blk_mq_hw_ctx *hctx) 868 865 { 866 + lockdep_assert_held(&hctx->queue->debugfs_mutex); 867 + 868 + if (!hctx->queue->debugfs_dir) 869 + return; 869 870 debugfs_remove_recursive(hctx->sched_debugfs_dir); 870 871 hctx->sched_debugfs_dir = NULL; 871 872 }
-10
block/blk-mq-debugfs.h
··· 21 21 int blk_mq_debugfs_rq_show(struct seq_file *m, void *v); 22 22 23 23 void blk_mq_debugfs_register(struct request_queue *q); 24 - void blk_mq_debugfs_unregister(struct request_queue *q); 25 24 void blk_mq_debugfs_register_hctx(struct request_queue *q, 26 25 struct blk_mq_hw_ctx *hctx); 27 26 void blk_mq_debugfs_unregister_hctx(struct blk_mq_hw_ctx *hctx); ··· 35 36 36 37 void blk_mq_debugfs_register_rqos(struct rq_qos *rqos); 37 38 void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos); 38 - void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q); 39 39 #else 40 40 static inline void blk_mq_debugfs_register(struct request_queue *q) 41 - { 42 - } 43 - 44 - static inline void blk_mq_debugfs_unregister(struct request_queue *q) 45 41 { 46 42 } 47 43 ··· 79 85 } 80 86 81 87 static inline void blk_mq_debugfs_unregister_rqos(struct rq_qos *rqos) 82 - { 83 - } 84 - 85 - static inline void blk_mq_debugfs_unregister_queue_rqos(struct request_queue *q) 86 88 { 87 89 } 88 90 #endif
+11
block/blk-mq-sched.c
··· 594 594 if (ret) 595 595 goto err_free_map_and_rqs; 596 596 597 + mutex_lock(&q->debugfs_mutex); 597 598 blk_mq_debugfs_register_sched(q); 599 + mutex_unlock(&q->debugfs_mutex); 598 600 599 601 queue_for_each_hw_ctx(q, hctx, i) { 600 602 if (e->ops.init_hctx) { ··· 609 607 return ret; 610 608 } 611 609 } 610 + mutex_lock(&q->debugfs_mutex); 612 611 blk_mq_debugfs_register_sched_hctx(q, hctx); 612 + mutex_unlock(&q->debugfs_mutex); 613 613 } 614 614 615 615 return 0; ··· 652 648 unsigned int flags = 0; 653 649 654 650 queue_for_each_hw_ctx(q, hctx, i) { 651 + mutex_lock(&q->debugfs_mutex); 655 652 blk_mq_debugfs_unregister_sched_hctx(hctx); 653 + mutex_unlock(&q->debugfs_mutex); 654 + 656 655 if (e->type->ops.exit_hctx && hctx->sched_data) { 657 656 e->type->ops.exit_hctx(hctx, i); 658 657 hctx->sched_data = NULL; 659 658 } 660 659 flags = hctx->flags; 661 660 } 661 + 662 + mutex_lock(&q->debugfs_mutex); 662 663 blk_mq_debugfs_unregister_sched(q); 664 + mutex_unlock(&q->debugfs_mutex); 665 + 663 666 if (e->type->ops.exit_sched) 664 667 e->type->ops.exit_sched(e); 665 668 blk_mq_sched_tags_teardown(q, flags);
+8 -3
block/blk-mq.c
··· 2765 2765 return NULL; 2766 2766 } 2767 2767 2768 - rq_qos_throttle(q, *bio); 2769 - 2770 2768 if (blk_mq_get_hctx_type((*bio)->bi_opf) != rq->mq_hctx->type) 2771 2769 return NULL; 2772 2770 if (op_is_flush(rq->cmd_flags) != op_is_flush((*bio)->bi_opf)) 2773 2771 return NULL; 2774 2772 2775 - rq->cmd_flags = (*bio)->bi_opf; 2773 + /* 2774 + * If any qos ->throttle() end up blocking, we will have flushed the 2775 + * plug and hence killed the cached_rq list as well. Pop this entry 2776 + * before we throttle. 2777 + */ 2776 2778 plug->cached_rq = rq_list_next(rq); 2779 + rq_qos_throttle(q, *bio); 2780 + 2781 + rq->cmd_flags = (*bio)->bi_opf; 2777 2782 INIT_LIST_HEAD(&rq->queuelist); 2778 2783 return rq; 2779 2784 }
-2
block/blk-rq-qos.c
··· 294 294 295 295 void rq_qos_exit(struct request_queue *q) 296 296 { 297 - blk_mq_debugfs_unregister_queue_rqos(q); 298 - 299 297 while (q->rq_qos) { 300 298 struct rq_qos *rqos = q->rq_qos; 301 299 q->rq_qos = rqos->next;
+6 -1
block/blk-rq-qos.h
··· 104 104 105 105 blk_mq_unfreeze_queue(q); 106 106 107 - if (rqos->ops->debugfs_attrs) 107 + if (rqos->ops->debugfs_attrs) { 108 + mutex_lock(&q->debugfs_mutex); 108 109 blk_mq_debugfs_register_rqos(rqos); 110 + mutex_unlock(&q->debugfs_mutex); 111 + } 109 112 } 110 113 111 114 static inline void rq_qos_del(struct request_queue *q, struct rq_qos *rqos) ··· 132 129 133 130 blk_mq_unfreeze_queue(q); 134 131 132 + mutex_lock(&q->debugfs_mutex); 135 133 blk_mq_debugfs_unregister_rqos(rqos); 134 + mutex_unlock(&q->debugfs_mutex); 136 135 } 137 136 138 137 typedef bool (acquire_inflight_cb_t)(struct rq_wait *rqw, void *private_data);
+14 -16
block/blk-sysfs.c
··· 779 779 if (queue_is_mq(q)) 780 780 blk_mq_release(q); 781 781 782 - blk_trace_shutdown(q); 783 - mutex_lock(&q->debugfs_mutex); 784 - debugfs_remove_recursive(q->debugfs_dir); 785 - mutex_unlock(&q->debugfs_mutex); 786 - 787 - if (queue_is_mq(q)) 788 - blk_mq_debugfs_unregister(q); 789 - 790 782 bioset_exit(&q->bio_split); 791 783 792 784 if (blk_queue_has_srcu(q)) ··· 828 836 goto unlock; 829 837 } 830 838 839 + if (queue_is_mq(q)) 840 + __blk_mq_register_dev(dev, q); 841 + mutex_lock(&q->sysfs_lock); 842 + 831 843 mutex_lock(&q->debugfs_mutex); 832 844 q->debugfs_dir = debugfs_create_dir(kobject_name(q->kobj.parent), 833 845 blk_debugfs_root); 834 - mutex_unlock(&q->debugfs_mutex); 835 - 836 - if (queue_is_mq(q)) { 837 - __blk_mq_register_dev(dev, q); 846 + if (queue_is_mq(q)) 838 847 blk_mq_debugfs_register(q); 839 - } 840 - 841 - mutex_lock(&q->sysfs_lock); 848 + mutex_unlock(&q->debugfs_mutex); 842 849 843 850 ret = disk_register_independent_access_ranges(disk, NULL); 844 851 if (ret) ··· 939 948 /* Now that we've deleted all child objects, we can delete the queue. */ 940 949 kobject_uevent(&q->kobj, KOBJ_REMOVE); 941 950 kobject_del(&q->kobj); 942 - 943 951 mutex_unlock(&q->sysfs_dir_lock); 952 + 953 + mutex_lock(&q->debugfs_mutex); 954 + blk_trace_shutdown(q); 955 + debugfs_remove_recursive(q->debugfs_dir); 956 + q->debugfs_dir = NULL; 957 + q->sched_debugfs_dir = NULL; 958 + q->rqos_debugfs_dir = NULL; 959 + mutex_unlock(&q->debugfs_mutex); 944 960 945 961 kobject_put(&disk_to_dev(disk)->kobj); 946 962 }
+12 -30
block/genhd.c
··· 623 623 * Prevent new I/O from crossing bio_queue_enter(). 624 624 */ 625 625 blk_queue_start_drain(q); 626 + blk_mq_freeze_queue_wait(q); 626 627 627 628 if (!(disk->flags & GENHD_FL_HIDDEN)) { 628 629 sysfs_remove_link(&disk_to_dev(disk)->kobj, "bdi"); ··· 647 646 pm_runtime_set_memalloc_noio(disk_to_dev(disk), false); 648 647 device_del(disk_to_dev(disk)); 649 648 650 - blk_mq_freeze_queue_wait(q); 651 - 652 649 blk_throtl_cancel_bios(disk->queue); 653 650 654 651 blk_sync_queue(q); 655 652 blk_flush_integrity(); 653 + blk_mq_cancel_work_sync(q); 654 + 655 + blk_mq_quiesce_queue(q); 656 + if (q->elevator) { 657 + mutex_lock(&q->sysfs_lock); 658 + elevator_exit(q); 659 + mutex_unlock(&q->sysfs_lock); 660 + } 661 + rq_qos_exit(q); 662 + blk_mq_unquiesce_queue(q); 663 + 656 664 /* 657 665 * Allow using passthrough request again after the queue is torn down. 658 666 */ ··· 1130 1120 NULL 1131 1121 }; 1132 1122 1133 - static void disk_release_mq(struct request_queue *q) 1134 - { 1135 - blk_mq_cancel_work_sync(q); 1136 - 1137 - /* 1138 - * There can't be any non non-passthrough bios in flight here, but 1139 - * requests stay around longer, including passthrough ones so we 1140 - * still need to freeze the queue here. 1141 - */ 1142 - blk_mq_freeze_queue(q); 1143 - 1144 - /* 1145 - * Since the I/O scheduler exit code may access cgroup information, 1146 - * perform I/O scheduler exit before disassociating from the block 1147 - * cgroup controller. 
1148 - */ 1149 - if (q->elevator) { 1150 - mutex_lock(&q->sysfs_lock); 1151 - elevator_exit(q); 1152 - mutex_unlock(&q->sysfs_lock); 1153 - } 1154 - rq_qos_exit(q); 1155 - __blk_mq_unfreeze_queue(q, true); 1156 - } 1157 - 1158 1123 /** 1159 1124 * disk_release - releases all allocated resources of the gendisk 1160 1125 * @dev: the device representing this disk ··· 1150 1165 1151 1166 might_sleep(); 1152 1167 WARN_ON_ONCE(disk_live(disk)); 1153 - 1154 - if (queue_is_mq(disk->queue)) 1155 - disk_release_mq(disk->queue); 1156 1168 1157 1169 blkcg_exit_queue(disk->queue); 1158 1170
-4
block/holder.c
··· 79 79 80 80 WARN_ON_ONCE(!bdev->bd_holder); 81 81 82 - /* FIXME: remove the following once add_disk() handles errors */ 83 - if (WARN_ON(!bdev->bd_holder_dir)) 84 - goto out_unlock; 85 - 86 82 holder = bd_find_holder_disk(bdev, disk); 87 83 if (holder) { 88 84 holder->refcnt++;
+2 -2
certs/Makefile
··· 3 3 # Makefile for the linux kernel signature checking certificates. 4 4 # 5 5 6 - obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o common.o 7 - obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o common.o 6 + obj-$(CONFIG_SYSTEM_TRUSTED_KEYRING) += system_keyring.o system_certificates.o 7 + obj-$(CONFIG_SYSTEM_BLACKLIST_KEYRING) += blacklist.o 8 8 obj-$(CONFIG_SYSTEM_REVOCATION_LIST) += revocation_certificates.o 9 9 ifneq ($(CONFIG_SYSTEM_BLACKLIST_HASH_LIST),) 10 10
+4 -4
certs/blacklist.c
··· 15 15 #include <linux/err.h> 16 16 #include <linux/seq_file.h> 17 17 #include <linux/uidgid.h> 18 - #include <linux/verification.h> 18 + #include <keys/asymmetric-type.h> 19 19 #include <keys/system_keyring.h> 20 20 #include "blacklist.h" 21 - #include "common.h" 22 21 23 22 /* 24 23 * According to crypto/asymmetric_keys/x509_cert_parser.c:x509_note_pkey_algo(), ··· 364 365 if (revocation_certificate_list_size) 365 366 pr_notice("Loading compiled-in revocation X.509 certificates\n"); 366 367 367 - return load_certificate_list(revocation_certificate_list, revocation_certificate_list_size, 368 - blacklist_keyring); 368 + return x509_load_certificate_list(revocation_certificate_list, 369 + revocation_certificate_list_size, 370 + blacklist_keyring); 369 371 } 370 372 late_initcall(load_revocation_certificate_list); 371 373 #endif
+4 -4
certs/common.c crypto/asymmetric_keys/x509_loader.c
··· 2 2 3 3 #include <linux/kernel.h> 4 4 #include <linux/key.h> 5 - #include "common.h" 5 + #include <keys/asymmetric-type.h> 6 6 7 - int load_certificate_list(const u8 cert_list[], 8 - const unsigned long list_size, 9 - const struct key *keyring) 7 + int x509_load_certificate_list(const u8 cert_list[], 8 + const unsigned long list_size, 9 + const struct key *keyring) 10 10 { 11 11 key_ref_t key; 12 12 const u8 *p, *end;
-9
certs/common.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - 3 - #ifndef _CERT_COMMON_H 4 - #define _CERT_COMMON_H 5 - 6 - int load_certificate_list(const u8 cert_list[], const unsigned long list_size, 7 - const struct key *keyring); 8 - 9 - #endif
+3 -3
certs/system_keyring.c
··· 16 16 #include <keys/asymmetric-type.h> 17 17 #include <keys/system_keyring.h> 18 18 #include <crypto/pkcs7.h> 19 - #include "common.h" 20 19 21 20 static struct key *builtin_trusted_keys; 22 21 #ifdef CONFIG_SECONDARY_TRUSTED_KEYRING ··· 182 183 183 184 pr_notice("Loading compiled-in module X.509 certificates\n"); 184 185 185 - return load_certificate_list(system_certificate_list, module_cert_size, keyring); 186 + return x509_load_certificate_list(system_certificate_list, 187 + module_cert_size, keyring); 186 188 } 187 189 188 190 /* ··· 204 204 size = system_certificate_list_size - module_cert_size; 205 205 #endif 206 206 207 - return load_certificate_list(p, size, builtin_trusted_keys); 207 + return x509_load_certificate_list(p, size, builtin_trusted_keys); 208 208 } 209 209 late_initcall(load_system_certificate_list); 210 210
+10
crypto/asymmetric_keys/Kconfig
··· 75 75 This option provides support for verifying the signature(s) on a 76 76 signed PE binary. 77 77 78 + config FIPS_SIGNATURE_SELFTEST 79 + bool "Run FIPS selftests on the X.509+PKCS7 signature verification" 80 + help 81 + This option causes some selftests to be run on the signature 82 + verification code, using some built in data. This is required 83 + for FIPS. 84 + depends on KEYS 85 + depends on ASYMMETRIC_KEY_TYPE 86 + depends on PKCS7_MESSAGE_PARSER 87 + 78 88 endif # ASYMMETRIC_KEY_TYPE
+2
crypto/asymmetric_keys/Makefile
··· 20 20 x509.asn1.o \ 21 21 x509_akid.asn1.o \ 22 22 x509_cert_parser.o \ 23 + x509_loader.o \ 23 24 x509_public_key.o 25 + x509_key_parser-$(CONFIG_FIPS_SIGNATURE_SELFTEST) += selftest.o 24 26 25 27 $(obj)/x509_cert_parser.o: \ 26 28 $(obj)/x509.asn1.h \
+224
crypto/asymmetric_keys/selftest.c
··· 1 + /* Self-testing for signature checking. 2 + * 3 + * Copyright (C) 2022 Red Hat, Inc. All Rights Reserved. 4 + * Written by David Howells (dhowells@redhat.com) 5 + */ 6 + 7 + #include <linux/kernel.h> 8 + #include <linux/cred.h> 9 + #include <linux/key.h> 10 + #include <crypto/pkcs7.h> 11 + #include "x509_parser.h" 12 + 13 + struct certs_test { 14 + const u8 *data; 15 + size_t data_len; 16 + const u8 *pkcs7; 17 + size_t pkcs7_len; 18 + }; 19 + 20 + /* 21 + * Set of X.509 certificates to provide public keys for the tests. These will 22 + * be loaded into a temporary keyring for the duration of the testing. 23 + */ 24 + static const __initconst u8 certs_selftest_keys[] = { 25 + "\x30\x82\x05\x55\x30\x82\x03\x3d\xa0\x03\x02\x01\x02\x02\x14\x73" 26 + "\x98\xea\x98\x2d\xd0\x2e\xa8\xb1\xcf\x57\xc7\xf2\x97\xb3\xe6\x1a" 27 + "\xfc\x8c\x0a\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x01\x0b" 28 + "\x05\x00\x30\x34\x31\x32\x30\x30\x06\x03\x55\x04\x03\x0c\x29\x43" 29 + "\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x20\x76\x65\x72\x69\x66" 30 + "\x69\x63\x61\x74\x69\x6f\x6e\x20\x73\x65\x6c\x66\x2d\x74\x65\x73" 31 + "\x74\x69\x6e\x67\x20\x6b\x65\x79\x30\x20\x17\x0d\x32\x32\x30\x35" 32 + "\x31\x38\x32\x32\x33\x32\x34\x31\x5a\x18\x0f\x32\x31\x32\x32\x30" 33 + "\x34\x32\x34\x32\x32\x33\x32\x34\x31\x5a\x30\x34\x31\x32\x30\x30" 34 + "\x06\x03\x55\x04\x03\x0c\x29\x43\x65\x72\x74\x69\x66\x69\x63\x61" 35 + "\x74\x65\x20\x76\x65\x72\x69\x66\x69\x63\x61\x74\x69\x6f\x6e\x20" 36 + "\x73\x65\x6c\x66\x2d\x74\x65\x73\x74\x69\x6e\x67\x20\x6b\x65\x79" 37 + "\x30\x82\x02\x22\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x01" 38 + "\x01\x05\x00\x03\x82\x02\x0f\x00\x30\x82\x02\x0a\x02\x82\x02\x01" 39 + "\x00\xcc\xac\x49\xdd\x3b\xca\xb0\x15\x7e\x84\x6a\xb2\x0a\x69\x5f" 40 + "\x1c\x0a\x61\x82\x3b\x4f\x2c\xa3\x95\x2c\x08\x58\x4b\xb1\x5d\x99" 41 + "\xe0\xc3\xc1\x79\xc2\xb3\xeb\xc0\x1e\x6d\x3e\x54\x1d\xbd\xb7\x92" 42 + "\x7b\x4d\xb5\x95\x58\xb2\x52\x2e\xc6\x24\x4b\x71\x63\x80\x32\x77" 43 + 
"\xa7\x38\x5e\xdb\x72\xae\x6e\x0d\xec\xfb\xb6\x6d\x01\x7f\xe9\x55" 44 + "\x66\xdf\xbf\x1d\x76\x78\x02\x31\xe8\xe5\x07\xf8\xb7\x82\x5c\x0d" 45 + "\xd4\xbb\xfb\xa2\x59\x0d\x2e\x3a\x78\x95\x3a\x8b\x46\x06\x47\x44" 46 + "\x46\xd7\xcd\x06\x6a\x41\x13\xe3\x19\xf6\xbb\x6e\x38\xf4\x83\x01" 47 + "\xa3\xbf\x4a\x39\x4f\xd7\x0a\xe9\x38\xb3\xf5\x94\x14\x4e\xdd\xf7" 48 + "\x43\xfd\x24\xb2\x49\x3c\xa5\xf7\x7a\x7c\xd4\x45\x3d\x97\x75\x68" 49 + "\xf1\xed\x4c\x42\x0b\x70\xca\x85\xf3\xde\xe5\x88\x2c\xc5\xbe\xb6" 50 + "\x97\x34\xba\x24\x02\xcd\x8b\x86\x9f\xa9\x73\xca\x73\xcf\x92\x81" 51 + "\xee\x75\x55\xbb\x18\x67\x5c\xff\x3f\xb5\xdd\x33\x1b\x0c\xe9\x78" 52 + "\xdb\x5c\xcf\xaa\x5c\x43\x42\xdf\x5e\xa9\x6d\xec\xd7\xd7\xff\xe6" 53 + "\xa1\x3a\x92\x1a\xda\xae\xf6\x8c\x6f\x7b\xd5\xb4\x6e\x06\xe9\x8f" 54 + "\xe8\xde\x09\x31\x89\xed\x0e\x11\xa1\xfa\x8a\xe9\xe9\x64\x59\x62" 55 + "\x53\xda\xd1\x70\xbe\x11\xd4\x99\x97\x11\xcf\x99\xde\x0b\x9d\x94" 56 + "\x7e\xaa\xb8\x52\xea\x37\xdb\x90\x7e\x35\xbd\xd9\xfe\x6d\x0a\x48" 57 + "\x70\x28\xdd\xd5\x0d\x7f\x03\x80\x93\x14\x23\x8f\xb9\x22\xcd\x7c" 58 + "\x29\xfe\xf1\x72\xb5\x5c\x0b\x12\xcf\x9c\x15\xf6\x11\x4c\x7a\x45" 59 + "\x25\x8c\x45\x0a\x34\xac\x2d\x9a\x81\xca\x0b\x13\x22\xcd\xeb\x1a" 60 + "\x38\x88\x18\x97\x96\x08\x81\xaa\xcc\x8f\x0f\x8a\x32\x7b\x76\x68" 61 + "\x03\x68\x43\xbf\x11\xba\x55\x60\xfd\x80\x1c\x0d\x9b\x69\xb6\x09" 62 + "\x72\xbc\x0f\x41\x2f\x07\x82\xc6\xe3\xb2\x13\x91\xc4\x6d\x14\x95" 63 + "\x31\xbe\x19\xbd\xbc\xed\xe1\x4c\x74\xa2\xe0\x78\x0b\xbb\x94\xec" 64 + "\x4c\x53\x3a\xa2\xb5\x84\x1d\x4b\x65\x7e\xdc\xf7\xdb\x36\x7d\xbe" 65 + "\x9e\x3b\x36\x66\x42\x66\x76\x35\xbf\xbe\xf0\xc1\x3c\x7c\xe9\x42" 66 + "\x5c\x24\x53\x03\x05\xa8\x67\x24\x50\x02\x75\xff\x24\x46\x3b\x35" 67 + "\x89\x76\xe6\x70\xda\xc5\x51\x8c\x9a\xe5\x05\xb0\x0b\xd0\x2d\xd4" 68 + "\x7d\x57\x75\x94\x6b\xf9\x0a\xad\x0e\x41\x00\x15\xd0\x4f\xc0\x7f" 69 + "\x90\x2d\x18\x48\x8f\x28\xfe\x5d\xa7\xcd\x99\x9e\xbd\x02\x6c\x8a" 70 + 
"\x31\xf3\x1c\xc7\x4b\xe6\x93\xcd\x42\xa2\xe4\x68\x10\x47\x9d\xfc" 71 + "\x21\x02\x03\x01\x00\x01\xa3\x5d\x30\x5b\x30\x0c\x06\x03\x55\x1d" 72 + "\x13\x01\x01\xff\x04\x02\x30\x00\x30\x0b\x06\x03\x55\x1d\x0f\x04" 73 + "\x04\x03\x02\x07\x80\x30\x1d\x06\x03\x55\x1d\x0e\x04\x16\x04\x14" 74 + "\xf5\x87\x03\xbb\x33\xce\x1b\x73\xee\x02\xec\xcd\xee\x5b\x88\x17" 75 + "\x51\x8f\xe3\xdb\x30\x1f\x06\x03\x55\x1d\x23\x04\x18\x30\x16\x80" 76 + "\x14\xf5\x87\x03\xbb\x33\xce\x1b\x73\xee\x02\xec\xcd\xee\x5b\x88" 77 + "\x17\x51\x8f\xe3\xdb\x30\x0d\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01" 78 + "\x01\x0b\x05\x00\x03\x82\x02\x01\x00\xc0\x2e\x12\x41\x7b\x73\x85" 79 + "\x16\xc8\xdb\x86\x79\xe8\xf5\xcd\x44\xf4\xc6\xe2\x81\x23\x5e\x47" 80 + "\xcb\xab\x25\xf1\x1e\x58\x3e\x31\x7f\x78\xad\x85\xeb\xfe\x14\x88" 81 + "\x60\xf7\x7f\xd2\x26\xa2\xf4\x98\x2a\xfd\xba\x05\x0c\x20\x33\x12" 82 + "\xcc\x4d\x14\x61\x64\x81\x93\xd3\x33\xed\xc8\xff\xf1\x78\xcc\x5f" 83 + "\x51\x9f\x09\xd7\xbe\x0d\x5c\x74\xfd\x9b\xdf\x52\x4a\xc9\xa8\x71" 84 + "\x25\x33\x04\x10\x67\x36\xd0\xb3\x0b\xc9\xa1\x40\x72\xae\x41\x7b" 85 + "\x68\xe6\xe4\x7b\xd0\x28\xf7\x6d\xe7\x3f\x50\xfc\x91\x7c\x91\x56" 86 + "\xd4\xdf\xa6\xbb\xe8\x4d\x1b\x58\xaa\x28\xfa\xc1\x19\xeb\x11\x2f" 87 + "\x24\x8b\x7c\xc5\xa9\x86\x26\xaa\x6e\xb7\x9b\xd5\xf8\x06\xfb\x02" 88 + "\x52\x7b\x9c\x9e\xa1\xe0\x07\x8b\x5e\xe4\xb8\x55\x29\xf6\x48\x52" 89 + "\x1c\x1b\x54\x2d\x46\xd8\xe5\x71\xb9\x60\xd1\x45\xb5\x92\x89\x8a" 90 + "\x63\x58\x2a\xb3\xc6\xb2\x76\xe2\x3c\x82\x59\x04\xae\x5a\xc4\x99" 91 + "\x7b\x2e\x4b\x46\x57\xb8\x29\x24\xb2\xfd\xee\x2c\x0d\xa4\x83\xfa" 92 + "\x65\x2a\x07\x35\x8b\x97\xcf\xbd\x96\x2e\xd1\x7e\x6c\xc2\x1e\x87" 93 + "\xb6\x6c\x76\x65\xb5\xb2\x62\xda\x8b\xe9\x73\xe3\xdb\x33\xdd\x13" 94 + "\x3a\x17\x63\x6a\x76\xde\x8d\x8f\xe0\x47\x61\x28\x3a\x83\xff\x8f" 95 + "\xe7\xc7\xe0\x4a\xa3\xe5\x07\xcf\xe9\x8c\x35\x35\x2e\xe7\x80\x66" 96 + "\x31\xbf\x91\x58\x0a\xe1\x25\x3d\x38\xd3\xa4\xf0\x59\x34\x47\x07" 97 + 
"\x62\x0f\xbe\x30\xdd\x81\x88\x58\xf0\x28\xb0\x96\xe5\x82\xf8\x05" 98 + "\xb7\x13\x01\xbc\xfa\xc6\x1f\x86\x72\xcc\xf9\xee\x8e\xd9\xd6\x04" 99 + "\x8c\x24\x6c\xbf\x0f\x5d\x37\x39\xcf\x45\xc1\x93\x3a\xd2\xed\x5c" 100 + "\x58\x79\x74\x86\x62\x30\x7e\x8e\xbb\xdd\x7a\xa9\xed\xca\x40\xcb" 101 + "\x62\x47\xf4\xb4\x9f\x52\x7f\x72\x63\xa8\xf0\x2b\xaf\x45\x2a\x48" 102 + "\x19\x6d\xe3\xfb\xf9\x19\x66\x69\xc8\xcc\x62\x87\x6c\x53\x2b\x2d" 103 + "\x6e\x90\x6c\x54\x3a\x82\x25\x41\xcb\x18\x6a\xa4\x22\xa8\xa1\xc4" 104 + "\x47\xd7\x81\x00\x1c\x15\x51\x0f\x1a\xaf\xef\x9f\xa6\x61\x8c\xbd" 105 + "\x6b\x8b\xed\xe6\xac\x0e\xb6\x3a\x4c\x92\xe6\x0f\x91\x0a\x0f\x71" 106 + "\xc7\xa0\xb9\x0d\x3a\x17\x5a\x6f\x35\xc8\xe7\x50\x4f\x46\xe8\x70" 107 + "\x60\x48\x06\x82\x8b\x66\x58\xe6\x73\x91\x9c\x12\x3d\x35\x8e\x46" 108 + "\xad\x5a\xf5\xb3\xdb\x69\x21\x04\xfd\xd3\x1c\xdf\x94\x9d\x56\xb0" 109 + "\x0a\xd1\x95\x76\x8d\xec\x9e\xdd\x0b\x15\x97\x64\xad\xe5\xf2\x62" 110 + "\x02\xfc\x9e\x5f\x56\x42\x39\x05\xb3" 111 + }; 112 + 113 + /* 114 + * Signed data and detached signature blobs that form the verification tests. 
115 + */ 116 + static const __initconst u8 certs_selftest_1_data[] = { 117 + "\x54\x68\x69\x73\x20\x69\x73\x20\x73\x6f\x6d\x65\x20\x74\x65\x73" 118 + "\x74\x20\x64\x61\x74\x61\x20\x75\x73\x65\x64\x20\x66\x6f\x72\x20" 119 + "\x73\x65\x6c\x66\x2d\x74\x65\x73\x74\x69\x6e\x67\x20\x63\x65\x72" 120 + "\x74\x69\x66\x69\x63\x61\x74\x65\x20\x76\x65\x72\x69\x66\x69\x63" 121 + "\x61\x74\x69\x6f\x6e\x2e\x0a" 122 + }; 123 + 124 + static const __initconst u8 certs_selftest_1_pkcs7[] = { 125 + "\x30\x82\x02\xab\x06\x09\x2a\x86\x48\x86\xf7\x0d\x01\x07\x02\xa0" 126 + "\x82\x02\x9c\x30\x82\x02\x98\x02\x01\x01\x31\x0d\x30\x0b\x06\x09" 127 + "\x60\x86\x48\x01\x65\x03\x04\x02\x01\x30\x0b\x06\x09\x2a\x86\x48" 128 + "\x86\xf7\x0d\x01\x07\x01\x31\x82\x02\x75\x30\x82\x02\x71\x02\x01" 129 + "\x01\x30\x4c\x30\x34\x31\x32\x30\x30\x06\x03\x55\x04\x03\x0c\x29" 130 + "\x43\x65\x72\x74\x69\x66\x69\x63\x61\x74\x65\x20\x76\x65\x72\x69" 131 + "\x66\x69\x63\x61\x74\x69\x6f\x6e\x20\x73\x65\x6c\x66\x2d\x74\x65" 132 + "\x73\x74\x69\x6e\x67\x20\x6b\x65\x79\x02\x14\x73\x98\xea\x98\x2d" 133 + "\xd0\x2e\xa8\xb1\xcf\x57\xc7\xf2\x97\xb3\xe6\x1a\xfc\x8c\x0a\x30" 134 + "\x0b\x06\x09\x60\x86\x48\x01\x65\x03\x04\x02\x01\x30\x0d\x06\x09" 135 + "\x2a\x86\x48\x86\xf7\x0d\x01\x01\x01\x05\x00\x04\x82\x02\x00\xac" 136 + "\xb0\xf2\x07\xd6\x99\x6d\xc0\xc0\xd9\x8d\x31\x0d\x7e\x04\xeb\xc3" 137 + "\x88\x90\xc4\x58\x46\xd4\xe2\xa0\xa3\x25\xe3\x04\x50\x37\x85\x8c" 138 + "\x91\xc6\xfc\xc5\xd4\x92\xfd\x05\xd8\xb8\xa3\xb8\xba\x89\x13\x00" 139 + "\x88\x79\x99\x51\x6b\x5b\x28\x31\xc0\xb3\x1b\x7a\x68\x2c\x00\xdb" 140 + "\x4b\x46\x11\xf3\xfa\x50\x8e\x19\x89\xa2\x4c\xda\x4c\x89\x01\x11" 141 + "\x89\xee\xd3\xc8\xc1\xe7\xa7\xf6\xb2\xa2\xf8\x65\xb8\x35\x20\x33" 142 + "\xba\x12\x62\xd5\xbd\xaa\x71\xe5\x5b\xc0\x6a\x32\xff\x6a\x2e\x23" 143 + "\xef\x2b\xb6\x58\xb1\xfb\x5f\x82\x34\x40\x6d\x9f\xbc\x27\xac\x37" 144 + "\x23\x99\xcf\x7d\x20\xb2\x39\x01\xc0\x12\xce\xd7\x5d\x2f\xb6\xab" 145 + 
"\xb5\x56\x4f\xef\xf4\x72\x07\x58\x65\xa9\xeb\x1f\x75\x1c\x5f\x0c" 146 + "\x88\xe0\xa4\xe2\xcd\x73\x2b\x9e\xb2\x05\x7e\x12\xf8\xd0\x66\x41" 147 + "\xcc\x12\x63\xd4\xd6\xac\x9b\x1d\x14\x77\x8d\x1c\x57\xd5\x27\xc6" 148 + "\x49\xa2\x41\x43\xf3\x59\x29\xe5\xcb\xd1\x75\xbc\x3a\x97\x2a\x72" 149 + "\x22\x66\xc5\x3b\xc1\xba\xfc\x53\x18\x98\xe2\x21\x64\xc6\x52\x87" 150 + "\x13\xd5\x7c\x42\xe8\xfb\x9c\x9a\x45\x32\xd5\xa5\x22\x62\x9d\xd4" 151 + "\xcb\xa4\xfa\x77\xbb\x50\x24\x0b\x8b\x88\x99\x15\x56\xa9\x1e\x92" 152 + "\xbf\x5d\x94\x77\xb6\xf1\x67\x01\x60\x06\x58\x5c\xdf\x18\x52\x79" 153 + "\x37\x30\x93\x7d\x87\x04\xf1\xe0\x55\x59\x52\xf3\xc2\xb1\x1c\x5b" 154 + "\x12\x7c\x49\x87\xfb\xf7\xed\xdd\x95\x71\xec\x4b\x1a\x85\x08\xb0" 155 + "\xa0\x36\xc4\x7b\xab\x40\xe0\xf1\x98\xcc\xaf\x19\x40\x8f\x47\x6f" 156 + "\xf0\x6c\x84\x29\x7f\x7f\x04\x46\xcb\x08\x0f\xe0\xc1\xc9\x70\x6e" 157 + "\x95\x3b\xa4\xbc\x29\x2b\x53\x67\x45\x1b\x0d\xbc\x13\xa5\x76\x31" 158 + "\xaf\xb9\xd0\xe0\x60\x12\xd2\xf4\xb7\x7c\x58\x7e\xf6\x2d\xbb\x24" 159 + "\x14\x5a\x20\x24\xa8\x12\xdf\x25\xbd\x42\xce\x96\x7c\x2e\xba\x14" 160 + "\x1b\x81\x9f\x18\x45\xa4\xc6\x70\x3e\x0e\xf0\xd3\x7b\x9c\x10\xbe" 161 + "\xb8\x7a\x89\xc5\x9e\xd9\x97\xdf\xd7\xe7\xc6\x1d\xc0\x20\x6c\xb8" 162 + "\x1e\x3a\x63\xb8\x39\x8e\x8e\x62\xd5\xd2\xb4\xcd\xff\x46\xfc\x8e" 163 + "\xec\x07\x35\x0c\xff\xb0\x05\xe6\xf4\xe5\xfe\xa2\xe3\x0a\xe6\x36" 164 + "\xa7\x4a\x7e\x62\x1d\xc4\x50\x39\x35\x4e\x28\xcb\x4a\xfb\x9d\xdb" 165 + "\xdd\x23\xd6\x53\xb1\x74\x77\x12\xf7\x9c\xf0\x9a\x6b\xf7\xa9\x64" 166 + "\x2d\x86\x21\x2a\xcf\xc6\x54\xf5\xc9\xad\xfa\xb5\x12\xb4\xf3\x51" 167 + "\x77\x55\x3c\x6f\x0c\x32\xd3\x8c\x44\x39\x71\x25\xfe\x96\xd2" 168 + }; 169 + 170 + /* 171 + * List of tests to be run. 
172 + */ 173 + #define TEST(data, pkcs7) { data, sizeof(data) - 1, pkcs7, sizeof(pkcs7) - 1 } 174 + static const struct certs_test certs_tests[] __initconst = { 175 + TEST(certs_selftest_1_data, certs_selftest_1_pkcs7), 176 + }; 177 + 178 + int __init fips_signature_selftest(void) 179 + { 180 + struct key *keyring; 181 + int ret, i; 182 + 183 + pr_notice("Running certificate verification selftests\n"); 184 + 185 + keyring = keyring_alloc(".certs_selftest", 186 + GLOBAL_ROOT_UID, GLOBAL_ROOT_GID, current_cred(), 187 + (KEY_POS_ALL & ~KEY_POS_SETATTR) | 188 + KEY_USR_VIEW | KEY_USR_READ | 189 + KEY_USR_SEARCH, 190 + KEY_ALLOC_NOT_IN_QUOTA, 191 + NULL, NULL); 192 + if (IS_ERR(keyring)) 193 + panic("Can't allocate certs selftest keyring: %ld\n", 194 + PTR_ERR(keyring)); 195 + 196 + ret = x509_load_certificate_list(certs_selftest_keys, 197 + sizeof(certs_selftest_keys) - 1, keyring); 198 + if (ret < 0) 199 + panic("Can't allocate certs selftest keyring: %d\n", ret); 200 + 201 + for (i = 0; i < ARRAY_SIZE(certs_tests); i++) { 202 + const struct certs_test *test = &certs_tests[i]; 203 + struct pkcs7_message *pkcs7; 204 + 205 + pkcs7 = pkcs7_parse_message(test->pkcs7, test->pkcs7_len); 206 + if (IS_ERR(pkcs7)) 207 + panic("Certs selftest %d: pkcs7_parse_message() = %d\n", i, ret); 208 + 209 + pkcs7_supply_detached_data(pkcs7, test->data, test->data_len); 210 + 211 + ret = pkcs7_verify(pkcs7, VERIFYING_MODULE_SIGNATURE); 212 + if (ret < 0) 213 + panic("Certs selftest %d: pkcs7_verify() = %d\n", i, ret); 214 + 215 + ret = pkcs7_validate_trust(pkcs7, keyring); 216 + if (ret < 0) 217 + panic("Certs selftest %d: pkcs7_validate_trust() = %d\n", i, ret); 218 + 219 + pkcs7_free_message(pkcs7); 220 + } 221 + 222 + key_put(keyring); 223 + return 0; 224 + }
+9
crypto/asymmetric_keys/x509_parser.h
··· 41 41 }; 42 42 43 43 /* 44 + * selftest.c 45 + */ 46 + #ifdef CONFIG_FIPS_SIGNATURE_SELFTEST 47 + extern int __init fips_signature_selftest(void); 48 + #else 49 + static inline int fips_signature_selftest(void) { return 0; } 50 + #endif 51 + 52 + /* 44 53 * x509_cert_parser.c 45 54 */ 46 55 extern void x509_free_certificate(struct x509_certificate *cert);
+7 -1
crypto/asymmetric_keys/x509_public_key.c
··· 244 244 /* 245 245 * Module stuff 246 246 */ 247 + extern int __init certs_selftest(void); 247 248 static int __init x509_key_init(void) 248 249 { 249 - return register_asymmetric_key_parser(&x509_key_parser); 250 + int ret; 251 + 252 + ret = register_asymmetric_key_parser(&x509_key_parser); 253 + if (ret < 0) 254 + return ret; 255 + return fips_signature_selftest(); 250 256 } 251 257 252 258 static void __exit x509_key_exit(void)
+1 -1
drivers/base/memory.c
··· 558 558 if (kstrtoull(buf, 0, &pfn) < 0) 559 559 return -EINVAL; 560 560 pfn >>= PAGE_SHIFT; 561 - ret = memory_failure(pfn, 0); 561 + ret = memory_failure(pfn, MF_SW_SIMULATED); 562 562 if (ret == -EOPNOTSUPP) 563 563 ret = 0; 564 564 return ret ? ret : count;
+5 -3
drivers/base/regmap/regmap-irq.c
··· 252 252 struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data); 253 253 struct regmap *map = d->map; 254 254 const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq); 255 + unsigned int reg = irq_data->reg_offset / map->reg_stride; 255 256 unsigned int mask, type; 256 257 257 258 type = irq_data->type.type_falling_val | irq_data->type.type_rising_val; ··· 269 268 * at the corresponding offset in regmap_irq_set_type(). 270 269 */ 271 270 if (d->chip->type_in_mask && type) 272 - mask = d->type_buf[irq_data->reg_offset / map->reg_stride]; 271 + mask = d->type_buf[reg] & irq_data->mask; 273 272 else 274 273 mask = irq_data->mask; 275 274 276 275 if (d->chip->clear_on_unmask) 277 276 d->clear_status = true; 278 277 279 - d->mask_buf[irq_data->reg_offset / map->reg_stride] &= ~mask; 278 + d->mask_buf[reg] &= ~mask; 280 279 } 281 280 282 281 static void regmap_irq_disable(struct irq_data *data) ··· 387 386 subreg = &chip->sub_reg_offsets[b]; 388 387 for (i = 0; i < subreg->num_regs; i++) { 389 388 unsigned int offset = subreg->offset[i]; 389 + unsigned int index = offset / map->reg_stride; 390 390 391 391 if (chip->not_fixed_stride) 392 392 ret = regmap_read(map, ··· 396 394 else 397 395 ret = regmap_read(map, 398 396 chip->status_base + offset, 399 - &data->status_buf[offset]); 397 + &data->status_buf[index]); 400 398 401 399 if (ret) 402 400 break;
+8 -7
drivers/base/regmap/regmap.c
··· 1880 1880 */ 1881 1881 bool regmap_can_raw_write(struct regmap *map) 1882 1882 { 1883 - return map->bus && map->bus->write && map->format.format_val && 1884 - map->format.format_reg; 1883 + return map->write && map->format.format_val && map->format.format_reg; 1885 1884 } 1886 1885 EXPORT_SYMBOL_GPL(regmap_can_raw_write); 1887 1886 ··· 2154 2155 size_t write_len; 2155 2156 int ret; 2156 2157 2157 - if (!map->bus) 2158 - return -EINVAL; 2159 - if (!map->bus->write) 2158 + if (!map->write) 2160 2159 return -ENOTSUPP; 2160 + 2161 2161 if (val_len % map->format.val_bytes) 2162 2162 return -EINVAL; 2163 2163 if (!IS_ALIGNED(reg, map->reg_stride)) ··· 2276 2278 * Some devices don't support bulk write, for them we have a series of 2277 2279 * single write operations. 2278 2280 */ 2279 - if (!map->bus || !map->format.parse_inplace) { 2281 + if (!map->write || !map->format.parse_inplace) { 2280 2282 map->lock(map->lock_arg); 2281 2283 for (i = 0; i < val_count; i++) { 2282 2284 unsigned int ival; ··· 2902 2904 size_t read_len; 2903 2905 int ret; 2904 2906 2907 + if (!map->read) 2908 + return -ENOTSUPP; 2909 + 2905 2910 if (val_len % map->format.val_bytes) 2906 2911 return -EINVAL; 2907 2912 if (!IS_ALIGNED(reg, map->reg_stride)) ··· 3018 3017 if (val_count == 0) 3019 3018 return -EINVAL; 3020 3019 3021 - if (map->format.parse_inplace && (vol || map->cache_type == REGCACHE_NONE)) { 3020 + if (map->read && map->format.parse_inplace && (vol || map->cache_type == REGCACHE_NONE)) { 3022 3021 ret = regmap_raw_read(map, reg, val, val_bytes * val_count); 3023 3022 if (ret != 0) 3024 3023 return ret;
+12 -7
drivers/block/xen-blkfront.c
··· 2114 2114 return; 2115 2115 2116 2116 /* No more blkif_request(). */ 2117 - blk_mq_stop_hw_queues(info->rq); 2118 - blk_mark_disk_dead(info->gd); 2119 - set_capacity(info->gd, 0); 2117 + if (info->rq && info->gd) { 2118 + blk_mq_stop_hw_queues(info->rq); 2119 + blk_mark_disk_dead(info->gd); 2120 + set_capacity(info->gd, 0); 2121 + } 2120 2122 2121 2123 for_each_rinfo(info, rinfo, i) { 2122 2124 /* No more gnttab callback work. */ ··· 2459 2457 2460 2458 dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename); 2461 2459 2462 - del_gendisk(info->gd); 2460 + if (info->gd) 2461 + del_gendisk(info->gd); 2463 2462 2464 2463 mutex_lock(&blkfront_mutex); 2465 2464 list_del(&info->info_list); 2466 2465 mutex_unlock(&blkfront_mutex); 2467 2466 2468 2467 blkif_free(info, 0); 2469 - xlbd_release_minors(info->gd->first_minor, info->gd->minors); 2470 - blk_cleanup_disk(info->gd); 2471 - blk_mq_free_tag_set(&info->tag_set); 2468 + if (info->gd) { 2469 + xlbd_release_minors(info->gd->first_minor, info->gd->minors); 2470 + blk_cleanup_disk(info->gd); 2471 + blk_mq_free_tag_set(&info->tag_set); 2472 + } 2472 2473 2473 2474 kfree(info); 2474 2475 return 0;
+6 -8
drivers/bus/bt1-apb.c
··· 175 175 int ret; 176 176 177 177 apb->prst = devm_reset_control_get_optional_exclusive(apb->dev, "prst"); 178 - if (IS_ERR(apb->prst)) { 179 - dev_warn(apb->dev, "Couldn't get reset control line\n"); 180 - return PTR_ERR(apb->prst); 181 - } 178 + if (IS_ERR(apb->prst)) 179 + return dev_err_probe(apb->dev, PTR_ERR(apb->prst), 180 + "Couldn't get reset control line\n"); 182 181 183 182 ret = reset_control_deassert(apb->prst); 184 183 if (ret) ··· 198 199 int ret; 199 200 200 201 apb->pclk = devm_clk_get(apb->dev, "pclk"); 201 - if (IS_ERR(apb->pclk)) { 202 - dev_err(apb->dev, "Couldn't get APB clock descriptor\n"); 203 - return PTR_ERR(apb->pclk); 204 - } 202 + if (IS_ERR(apb->pclk)) 203 + return dev_err_probe(apb->dev, PTR_ERR(apb->pclk), 204 + "Couldn't get APB clock descriptor\n"); 205 205 206 206 ret = clk_prepare_enable(apb->pclk); 207 207 if (ret) {
+6 -8
drivers/bus/bt1-axi.c
··· 135 135 int ret; 136 136 137 137 axi->arst = devm_reset_control_get_optional_exclusive(axi->dev, "arst"); 138 - if (IS_ERR(axi->arst)) { 139 - dev_warn(axi->dev, "Couldn't get reset control line\n"); 140 - return PTR_ERR(axi->arst); 141 - } 138 + if (IS_ERR(axi->arst)) 139 + return dev_err_probe(axi->dev, PTR_ERR(axi->arst), 140 + "Couldn't get reset control line\n"); 142 141 143 142 ret = reset_control_deassert(axi->arst); 144 143 if (ret) ··· 158 159 int ret; 159 160 160 161 axi->aclk = devm_clk_get(axi->dev, "aclk"); 161 - if (IS_ERR(axi->aclk)) { 162 - dev_err(axi->dev, "Couldn't get AXI Interconnect clock\n"); 163 - return PTR_ERR(axi->aclk); 164 - } 162 + if (IS_ERR(axi->aclk)) 163 + return dev_err_probe(axi->dev, PTR_ERR(axi->aclk), 164 + "Couldn't get AXI Interconnect clock\n"); 165 165 166 166 ret = clk_prepare_enable(axi->aclk); 167 167 if (ret) {
+3 -3
drivers/char/random.c
··· 87 87 88 88 /* Control how we warn userspace. */ 89 89 static struct ratelimit_state urandom_warning = 90 - RATELIMIT_STATE_INIT("warn_urandom_randomness", HZ, 3); 90 + RATELIMIT_STATE_INIT_FLAGS("urandom_warning", HZ, 3, RATELIMIT_MSG_ON_RELEASE); 91 91 static int ratelimit_disable __read_mostly = 92 92 IS_ENABLED(CONFIG_WARN_ALL_UNSEEDED_RANDOM); 93 93 module_param_named(ratelimit_disable, ratelimit_disable, int, 0644); ··· 408 408 409 409 /* 410 410 * Immediately overwrite the ChaCha key at index 4 with random 411 - * bytes, in case userspace causes copy_to_user() below to sleep 411 + * bytes, in case userspace causes copy_to_iter() below to sleep 412 412 * forever, so that we still retain forward secrecy in that case. 413 413 */ 414 414 crng_make_state(chacha_state, (u8 *)&chacha_state[4], CHACHA_KEY_SIZE); ··· 1009 1009 if (new_count & MIX_INFLIGHT) 1010 1010 return; 1011 1011 1012 - if (new_count < 64 && !time_is_before_jiffies(fast_pool->last + HZ)) 1012 + if (new_count < 1024 && !time_is_before_jiffies(fast_pool->last + HZ)) 1013 1013 return; 1014 1014 1015 1015 if (unlikely(!fast_pool->mix.func))
+4 -1
drivers/dma-buf/udmabuf.c
··· 32 32 { 33 33 struct vm_area_struct *vma = vmf->vma; 34 34 struct udmabuf *ubuf = vma->vm_private_data; 35 + pgoff_t pgoff = vmf->pgoff; 35 36 36 - vmf->page = ubuf->pages[vmf->pgoff]; 37 + if (pgoff >= ubuf->pagecount) 38 + return VM_FAULT_SIGBUS; 39 + vmf->page = ubuf->pages[pgoff]; 37 40 get_page(vmf->page); 38 41 return 0; 39 42 }
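The udmabuf fix above validates the faulting page offset before indexing the pages array, returning `VM_FAULT_SIGBUS` for out-of-range faults. A self-contained sketch of the same guard pattern, with hypothetical stand-ins for the kernel types:

```c
#include <stddef.h>

/* Stand-ins for VM_FAULT_* return codes. */
#define FAULT_OK     0
#define FAULT_SIGBUS 1

struct ubuf {
    size_t pagecount;
    int *pages;          /* stand-in for the struct page * array */
};

/* Mirror of the fixed fault handler: reject offsets past the end of
 * the buffer before touching the pages array. */
static int ubuf_fault(const struct ubuf *b, size_t pgoff, int **page)
{
    if (pgoff >= b->pagecount)
        return FAULT_SIGBUS;
    *page = &b->pages[pgoff];
    return FAULT_OK;
}
```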
+1 -1
drivers/firewire/core-cdev.c
··· 1211 1211 struct fw_cdev_get_cycle_timer2 *a = &arg->get_cycle_timer2; 1212 1212 struct fw_card *card = client->device->card; 1213 1213 struct timespec64 ts = {0, 0}; 1214 - u32 cycle_time; 1214 + u32 cycle_time = 0; 1215 1215 int ret = 0; 1216 1216 1217 1217 local_irq_disable();
+2 -4
drivers/firewire/core-device.c
··· 372 372 struct fw_device *device = fw_device(dev->parent); 373 373 struct fw_unit *unit = fw_unit(dev); 374 374 375 - return snprintf(buf, PAGE_SIZE, "%d\n", 376 - (int)(unit->directory - device->config_rom)); 375 + return sysfs_emit(buf, "%td\n", unit->directory - device->config_rom); 377 376 } 378 377 379 378 static struct device_attribute fw_unit_attributes[] = { ··· 402 403 int ret; 403 404 404 405 down_read(&fw_device_rwsem); 405 - ret = snprintf(buf, PAGE_SIZE, "0x%08x%08x\n", 406 - device->config_rom[3], device->config_rom[4]); 406 + ret = sysfs_emit(buf, "0x%08x%08x\n", device->config_rom[3], device->config_rom[4]); 407 407 up_read(&fw_device_rwsem); 408 408 409 409 return ret;
+15 -9
drivers/firmware/arm_scmi/base.c
··· 36 36 37 37 struct scmi_msg_resp_base_discover_agent { 38 38 __le32 agent_id; 39 - u8 name[SCMI_MAX_STR_SIZE]; 39 + u8 name[SCMI_SHORT_NAME_MAX_SIZE]; 40 40 }; 41 41 42 42 ··· 119 119 120 120 ret = ph->xops->do_xfer(ph, t); 121 121 if (!ret) 122 - memcpy(vendor_id, t->rx.buf, size); 122 + strscpy(vendor_id, t->rx.buf, size); 123 123 124 124 ph->xops->xfer_put(ph, t); 125 125 ··· 221 221 calc_list_sz = (1 + (loop_num_ret - 1) / sizeof(u32)) * 222 222 sizeof(u32); 223 223 if (calc_list_sz != real_list_sz) { 224 - dev_err(dev, 225 - "Malformed reply - real_sz:%zd calc_sz:%u\n", 226 - real_list_sz, calc_list_sz); 227 - ret = -EPROTO; 228 - break; 224 + dev_warn(dev, 225 + "Malformed reply - real_sz:%zd calc_sz:%u (loop_num_ret:%d)\n", 226 + real_list_sz, calc_list_sz, loop_num_ret); 227 + /* 228 + * Bail out if the expected list size is bigger than the 229 + * total payload size of the received reply. 230 + */ 231 + if (calc_list_sz > real_list_sz) { 232 + ret = -EPROTO; 233 + break; 234 + } 229 235 } 230 236 231 237 for (loop = 0; loop < loop_num_ret; loop++) ··· 276 270 ret = ph->xops->do_xfer(ph, t); 277 271 if (!ret) { 278 272 agent_info = t->rx.buf; 279 - strlcpy(name, agent_info->name, SCMI_MAX_STR_SIZE); 273 + strscpy(name, agent_info->name, SCMI_SHORT_NAME_MAX_SIZE); 280 274 } 281 275 282 276 ph->xops->xfer_put(ph, t); ··· 375 369 int id, ret; 376 370 u8 *prot_imp; 377 371 u32 version; 378 - char name[SCMI_MAX_STR_SIZE]; 372 + char name[SCMI_SHORT_NAME_MAX_SIZE]; 379 373 struct device *dev = ph->dev; 380 374 struct scmi_revision_info *rev = scmi_revision_area_get(ph); 381 375
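Several SCMI hunks in this merge replace `strlcpy()`/`memcpy()` with `strscpy()`, which always NUL-terminates the destination and reports truncation instead of returning the source length. A userspace sketch of that contract (the `E2BIG` value here is a stand-in, and `strscpy_sketch` is not the kernel implementation):

```c
#include <string.h>

#define E2BIG 7  /* stand-in errno value */

/* Copy at most size-1 bytes, always NUL-terminate, and return the
 * number of bytes copied, or -E2BIG when the source was truncated. */
static long strscpy_sketch(char *dst, const char *src, size_t size)
{
    size_t len;

    if (size == 0)
        return -E2BIG;
    len = strnlen(src, size);
    if (len == size) {              /* src does not fit: truncate */
        memcpy(dst, src, size - 1);
        dst[size - 1] = '\0';
        return -E2BIG;
    }
    memcpy(dst, src, len + 1);      /* fits, copy including the NUL */
    return (long)len;
}
```

Unlike `strlcpy()`, this never reads the source past the destination size, which matters when the source (here, a fixed-size name field in a firmware reply) is not guaranteed to be NUL-terminated.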
+3 -4
drivers/firmware/arm_scmi/clock.c
··· 153 153 if (!ret) { 154 154 u32 latency = 0; 155 155 attributes = le32_to_cpu(attr->attributes); 156 - strlcpy(clk->name, attr->name, SCMI_MAX_STR_SIZE); 156 + strscpy(clk->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE); 157 157 /* clock_enable_latency field is present only since SCMI v3.1 */ 158 158 if (PROTOCOL_REV_MAJOR(version) >= 0x2) 159 159 latency = le32_to_cpu(attr->clock_enable_latency); ··· 266 266 struct scmi_clock_info *clk) 267 267 { 268 268 int ret; 269 - 270 269 void *iter; 271 - struct scmi_msg_clock_describe_rates *msg; 272 270 struct scmi_iterator_ops ops = { 273 271 .prepare_message = iter_clk_describe_prepare_message, 274 272 .update_state = iter_clk_describe_update_state, ··· 279 281 280 282 iter = ph->hops->iter_response_init(ph, &ops, SCMI_MAX_NUM_RATES, 281 283 CLOCK_DESCRIBE_RATES, 282 - sizeof(*msg), &cpriv); 284 + sizeof(struct scmi_msg_clock_describe_rates), 285 + &cpriv); 283 286 if (IS_ERR(iter)) 284 287 return PTR_ERR(iter); 285 288
+3 -3
drivers/firmware/arm_scmi/perf.c
··· 252 252 dom_info->mult_factor = 253 253 (dom_info->sustained_freq_khz * 1000) / 254 254 dom_info->sustained_perf_level; 255 - strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); 255 + strscpy(dom_info->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE); 256 256 } 257 257 258 258 ph->xops->xfer_put(ph, t); ··· 332 332 { 333 333 int ret; 334 334 void *iter; 335 - struct scmi_msg_perf_describe_levels *msg; 336 335 struct scmi_iterator_ops ops = { 337 336 .prepare_message = iter_perf_levels_prepare_message, 338 337 .update_state = iter_perf_levels_update_state, ··· 344 345 345 346 iter = ph->hops->iter_response_init(ph, &ops, MAX_OPPS, 346 347 PERF_DESCRIBE_LEVELS, 347 - sizeof(*msg), &ppriv); 348 + sizeof(struct scmi_msg_perf_describe_levels), 349 + &ppriv); 348 350 if (IS_ERR(iter)) 349 351 return PTR_ERR(iter); 350 352
+1 -1
drivers/firmware/arm_scmi/power.c
··· 122 122 dom_info->state_set_notify = SUPPORTS_STATE_SET_NOTIFY(flags); 123 123 dom_info->state_set_async = SUPPORTS_STATE_SET_ASYNC(flags); 124 124 dom_info->state_set_sync = SUPPORTS_STATE_SET_SYNC(flags); 125 - strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); 125 + strscpy(dom_info->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE); 126 126 } 127 127 ph->xops->xfer_put(ph, t); 128 128
-2
drivers/firmware/arm_scmi/protocols.h
··· 24 24 25 25 #include <asm/unaligned.h> 26 26 27 - #define SCMI_SHORT_NAME_MAX_SIZE 16 28 - 29 27 #define PROTOCOL_REV_MINOR_MASK GENMASK(15, 0) 30 28 #define PROTOCOL_REV_MAJOR_MASK GENMASK(31, 16) 31 29 #define PROTOCOL_REV_MAJOR(x) ((u16)(FIELD_GET(PROTOCOL_REV_MAJOR_MASK, (x))))
+1 -1
drivers/firmware/arm_scmi/reset.c
··· 116 116 dom_info->latency_us = le32_to_cpu(attr->latency); 117 117 if (dom_info->latency_us == U32_MAX) 118 118 dom_info->latency_us = 0; 119 - strlcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE); 119 + strscpy(dom_info->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE); 120 120 } 121 121 122 122 ph->xops->xfer_put(ph, t);
+52 -16
drivers/firmware/arm_scmi/sensors.c
··· 338 338 struct scmi_sensor_info *s) 339 339 { 340 340 void *iter; 341 - struct scmi_msg_sensor_list_update_intervals *msg; 342 341 struct scmi_iterator_ops ops = { 343 342 .prepare_message = iter_intervals_prepare_message, 344 343 .update_state = iter_intervals_update_state, ··· 350 351 351 352 iter = ph->hops->iter_response_init(ph, &ops, s->intervals.count, 352 353 SENSOR_LIST_UPDATE_INTERVALS, 353 - sizeof(*msg), &upriv); 354 + sizeof(struct scmi_msg_sensor_list_update_intervals), 355 + &upriv); 354 356 if (IS_ERR(iter)) 355 357 return PTR_ERR(iter); 356 358 357 359 return ph->hops->iter_response_run(iter); 358 360 } 359 361 362 + struct scmi_apriv { 363 + bool any_axes_support_extended_names; 364 + struct scmi_sensor_info *s; 365 + }; 366 + 360 367 static void iter_axes_desc_prepare_message(void *message, 361 368 const unsigned int desc_index, 362 369 const void *priv) 363 370 { 364 371 struct scmi_msg_sensor_axis_description_get *msg = message; 365 - const struct scmi_sensor_info *s = priv; 372 + const struct scmi_apriv *apriv = priv; 366 373 367 374 /* Set the number of sensors to be skipped/already read */ 368 - msg->id = cpu_to_le32(s->id); 375 + msg->id = cpu_to_le32(apriv->s->id); 369 376 msg->axis_desc_index = cpu_to_le32(desc_index); 370 377 } 371 378 ··· 398 393 u32 attrh, attrl; 399 394 struct scmi_sensor_axis_info *a; 400 395 size_t dsize = SCMI_MSG_RESP_AXIS_DESCR_BASE_SZ; 401 - struct scmi_sensor_info *s = priv; 396 + struct scmi_apriv *apriv = priv; 402 397 const struct scmi_axis_descriptor *adesc = st->priv; 403 398 404 399 attrl = le32_to_cpu(adesc->attributes_low); 400 + if (SUPPORTS_EXTENDED_AXIS_NAMES(attrl)) 401 + apriv->any_axes_support_extended_names = true; 405 402 406 - a = &s->axis[st->desc_index + st->loop_idx]; 403 + a = &apriv->s->axis[st->desc_index + st->loop_idx]; 407 404 a->id = le32_to_cpu(adesc->id); 408 405 a->extended_attrs = SUPPORTS_EXTEND_ATTRS(attrl); 409 406 410 407 attrh = le32_to_cpu(adesc->attributes_high); 411 408 a->scale = S32_EXT(SENSOR_SCALE(attrh)); 412 409 a->type = SENSOR_TYPE(attrh); 413 - strscpy(a->name, adesc->name, SCMI_MAX_STR_SIZE); 410 + strscpy(a->name, adesc->name, SCMI_SHORT_NAME_MAX_SIZE); 414 411 415 412 if (a->extended_attrs) { 416 413 unsigned int ares = le32_to_cpu(adesc->resolution); ··· 451 444 void *priv) 452 445 { 453 446 struct scmi_sensor_axis_info *a; 454 - const struct scmi_sensor_info *s = priv; 447 + const struct scmi_apriv *apriv = priv; 455 448 struct scmi_sensor_axis_name_descriptor *adesc = st->priv; 449 + u32 axis_id = le32_to_cpu(adesc->axis_id); 456 450 457 - a = &s->axis[st->desc_index + st->loop_idx]; 451 + if (axis_id >= st->max_resources) 452 + return -EPROTO; 453 + 454 + /* 455 + * Pick the corresponding descriptor based on the axis_id embedded 456 + * in the reply since the list of axes supporting extended names 457 + * can be a subset of all the axes. 458 + */ 459 + a = &apriv->s->axis[axis_id]; 458 460 strscpy(a->name, adesc->name, SCMI_MAX_STR_SIZE); 459 461 st->priv = ++adesc; 460 462 ··· 474 458 scmi_sensor_axis_extended_names_get(const struct scmi_protocol_handle *ph, 475 459 struct scmi_sensor_info *s) 476 460 { 461 + int ret; 477 462 void *iter; 478 - struct scmi_msg_sensor_axis_description_get *msg; 479 463 struct scmi_iterator_ops ops = { 480 464 .prepare_message = iter_axes_desc_prepare_message, 481 465 .update_state = iter_axes_extended_name_update_state, 482 466 .process_response = iter_axes_extended_name_process_response, 483 467 }; 468 + struct scmi_apriv apriv = { 469 + .any_axes_support_extended_names = false, 470 + .s = s, 471 + }; 484 472 485 473 iter = ph->hops->iter_response_init(ph, &ops, s->num_axis, 486 474 SENSOR_AXIS_NAME_GET, 487 - sizeof(*msg), s); 475 + sizeof(struct scmi_msg_sensor_axis_description_get), 476 + &apriv); 488 477 if (IS_ERR(iter)) 489 478 return PTR_ERR(iter); 490 479 491 - return ph->hops->iter_response_run(iter); 480 + /* 481 + * Do not cause whole protocol initialization failure when failing to 482 + * get extended names for axes. 483 + */ 484 + ret = ph->hops->iter_response_run(iter); 485 + if (ret) 486 + dev_warn(ph->dev, 487 + "Failed to get axes extended names for %s (ret:%d).\n", 488 + s->name, ret); 489 + 490 + return 0; 492 491 } 493 492 494 493 static int scmi_sensor_axis_description(const struct scmi_protocol_handle *ph, ··· 512 481 { 513 482 int ret; 514 483 void *iter; 515 - struct scmi_msg_sensor_axis_description_get *msg; 516 484 struct scmi_iterator_ops ops = { 517 485 .prepare_message = iter_axes_desc_prepare_message, 518 486 .update_state = iter_axes_desc_update_state, 519 487 .process_response = iter_axes_desc_process_response, 488 + }; 489 + struct scmi_apriv apriv = { 490 + .any_axes_support_extended_names = false, 491 + .s = s, 520 492 }; 521 493 522 494 s->axis = devm_kcalloc(ph->dev, s->num_axis, ··· 529 495 530 496 iter = ph->hops->iter_response_init(ph, &ops, s->num_axis, 531 497 SENSOR_AXIS_DESCRIPTION_GET, 532 - sizeof(*msg), s); 498 + sizeof(struct scmi_msg_sensor_axis_description_get), 499 + &apriv); 533 500 if (IS_ERR(iter)) 534 501 return PTR_ERR(iter); 535 502 ··· 538 503 if (ret) 539 504 return ret; 540 505 541 - if (PROTOCOL_REV_MAJOR(version) >= 0x3) 506 + if (PROTOCOL_REV_MAJOR(version) >= 0x3 && 507 + apriv.any_axes_support_extended_names) 542 508 ret = scmi_sensor_axis_extended_names_get(ph, s); 543 509 544 510 return ret; ··· 634 598 SUPPORTS_AXIS(attrh) ? 635 599 SENSOR_AXIS_NUMBER(attrh) : 0, 636 600 SCMI_MAX_NUM_SENSOR_AXIS); 637 - strscpy(s->name, sdesc->name, SCMI_MAX_STR_SIZE); 601 + strscpy(s->name, sdesc->name, SCMI_SHORT_NAME_MAX_SIZE); 638 602 639 603 /* 640 604 * If supported overwrite short name with the extended
+5 -10
drivers/firmware/arm_scmi/voltage.c
··· 180 180 { 181 181 int ret; 182 182 void *iter; 183 - struct scmi_msg_cmd_describe_levels *msg; 184 183 struct scmi_iterator_ops ops = { 185 184 .prepare_message = iter_volt_levels_prepare_message, 186 185 .update_state = iter_volt_levels_update_state, ··· 192 193 193 194 iter = ph->hops->iter_response_init(ph, &ops, v->num_levels, 194 195 VOLTAGE_DESCRIBE_LEVELS, 195 - sizeof(*msg), &vpriv); 196 + sizeof(struct scmi_msg_cmd_describe_levels), 197 + &vpriv); 196 198 if (IS_ERR(iter)) 197 199 return PTR_ERR(iter); 198 200 ··· 225 225 226 226 /* Retrieve domain attributes at first ... */ 227 227 put_unaligned_le32(dom, td->tx.buf); 228 - ret = ph->xops->do_xfer(ph, td); 229 228 /* Skip domain on comms error */ 230 - if (ret) 229 + if (ph->xops->do_xfer(ph, td)) 231 230 continue; 232 231 233 232 v = vinfo->domains + dom; 234 233 v->id = dom; 235 234 attributes = le32_to_cpu(resp_dom->attr); 236 - strlcpy(v->name, resp_dom->name, SCMI_MAX_STR_SIZE); 235 + strscpy(v->name, resp_dom->name, SCMI_SHORT_NAME_MAX_SIZE); 237 236 238 237 /* 239 238 * If supported overwrite short name with the extended one; ··· 248 249 v->async_level_set = true; 249 250 } 250 251 251 - ret = scmi_voltage_levels_get(ph, v); 252 252 /* Skip invalid voltage descriptors */ 253 - if (ret) 254 - continue; 255 - 256 - ph->xops->reset_rx_to_maxsz(ph, td); 253 + scmi_voltage_levels_get(ph, v); 257 254 } 258 255 259 256 ph->xops->xfer_put(ph, td);
-2
drivers/firmware/efi/sysfb_efi.c
··· 26 26 #include <linux/sysfb.h> 27 27 #include <video/vga.h> 28 28 29 - #include <asm/efi.h> 30 - 31 29 enum { 32 30 OVERRIDE_NONE = 0x0, 33 31 OVERRIDE_BASE = 0x1,
+1 -13
drivers/gpio/gpio-grgpio.c
··· 434 434 static int grgpio_remove(struct platform_device *ofdev) 435 435 { 436 436 struct grgpio_priv *priv = platform_get_drvdata(ofdev); 437 - int i; 438 - int ret = 0; 439 - 440 - if (priv->domain) { 441 - for (i = 0; i < GRGPIO_MAX_NGPIO; i++) { 442 - if (priv->uirqs[i].refcnt != 0) { 443 - ret = -EBUSY; 444 - goto out; 445 - } 446 - } 447 - } 448 437 449 438 gpiochip_remove(&priv->gc); 450 439 451 440 if (priv->domain) 452 441 irq_domain_remove(priv->domain); 453 442 454 - out: 455 - return ret; 443 + return 0; 456 444 } 457 445 458 446 static const struct of_device_id grgpio_match[] = {
+1 -1
drivers/gpio/gpio-mxs.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 2 // 3 - // MXC GPIO support. (c) 2008 Daniel Mack <daniel@caiaq.de> 3 + // MXS GPIO support. (c) 2008 Daniel Mack <daniel@caiaq.de> 4 4 // Copyright 2008 Juergen Beisert, kernel@pengutronix.de 5 5 // 6 6 // Based on code from Freescale,
+8 -2
drivers/gpio/gpio-realtek-otto.c
··· 172 172 unsigned long flags; 173 173 u16 m; 174 174 175 + gpiochip_enable_irq(&ctrl->gc, line); 176 + 175 177 raw_spin_lock_irqsave(&ctrl->lock, flags); 176 178 m = ctrl->intr_mask[port]; 177 179 m |= realtek_gpio_imr_bits(port_pin, REALTEK_GPIO_IMR_LINE_MASK); ··· 197 195 ctrl->intr_mask[port] = m; 198 196 realtek_gpio_write_imr(ctrl, port, ctrl->intr_type[port], m); 199 197 raw_spin_unlock_irqrestore(&ctrl->lock, flags); 198 + 199 + gpiochip_disable_irq(&ctrl->gc, line); 200 200 } 201 201 202 202 static int realtek_gpio_irq_set_type(struct irq_data *data, unsigned int flow_type) ··· 319 315 return 0; 320 316 } 321 317 322 - static struct irq_chip realtek_gpio_irq_chip = { 318 + static const struct irq_chip realtek_gpio_irq_chip = { 323 319 .name = "realtek-otto-gpio", 324 320 .irq_ack = realtek_gpio_irq_ack, 325 321 .irq_mask = realtek_gpio_irq_mask, 326 322 .irq_unmask = realtek_gpio_irq_unmask, 327 323 .irq_set_type = realtek_gpio_irq_set_type, 328 324 .irq_set_affinity = realtek_gpio_irq_set_affinity, 325 + .flags = IRQCHIP_IMMUTABLE, 326 + GPIOCHIP_IRQ_RESOURCE_HELPERS, 329 327 }; 330 328 331 329 static const struct of_device_id realtek_gpio_of_match[] = { ··· 410 404 irq = platform_get_irq_optional(pdev, 0); 411 405 if (!(dev_flags & GPIO_INTERRUPTS_DISABLED) && irq > 0) { 412 406 girq = &ctrl->gc.irq; 413 - girq->chip = &realtek_gpio_irq_chip; 407 + gpio_irq_chip_set_chip(girq, &realtek_gpio_irq_chip); 414 408 girq->default_type = IRQ_TYPE_NONE; 415 409 girq->handler = handle_bad_irq; 416 410 girq->parent_handler = realtek_gpio_irq_handler;
-2
drivers/gpio/gpio-vr41xx.c
··· 217 217 printk(KERN_ERR "spurious GIU interrupt: %04x(%04x),%04x(%04x)\n", 218 218 maskl, pendl, maskh, pendh); 219 219 220 - atomic_inc(&irq_err_count); 221 - 222 220 return -EINVAL; 223 221 } 224 222
+4 -3
drivers/gpio/gpio-winbond.c
··· 385 385 unsigned long *base = gpiochip_get_data(gc); 386 386 const struct winbond_gpio_info *info; 387 387 bool val; 388 + int ret; 388 389 389 390 winbond_gpio_get_info(&offset, &info); 390 391 391 - val = winbond_sio_enter(*base); 392 - if (val) 393 - return val; 392 + ret = winbond_sio_enter(*base); 393 + if (ret) 394 + return ret; 394 395 395 396 winbond_sio_select_logical(*base, info->dev); 396 397
+14 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 1798 1798 DRM_INFO("amdgpu: %uM of VRAM memory ready\n", 1799 1799 (unsigned) (adev->gmc.real_vram_size / (1024 * 1024))); 1800 1800 1801 - /* Compute GTT size, either bsaed on 3/4th the size of RAM size 1801 + /* Compute GTT size, either based on 1/2 the size of RAM size 1802 1802 * or whatever the user passed on module init */ 1803 1803 if (amdgpu_gtt_size == -1) { 1804 1804 struct sysinfo si; 1805 1805 1806 1806 si_meminfo(&si); 1807 - gtt_size = min(max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20), 1808 - adev->gmc.mc_vram_size), 1809 - ((uint64_t)si.totalram * si.mem_unit * 3/4)); 1810 - } 1811 - else 1807 + /* Certain GL unit tests for large textures can cause problems 1808 + * with the OOM killer since there is no way to link this memory 1809 + * to a process. This was originally mitigated (but not necessarily 1810 + * eliminated) by limiting the GTT size. The problem is this limit 1811 + * is often too low for many modern games so just make the limit 1/2 1812 + * of system memory which aligns with TTM. The OOM accounting needs 1813 + * to be addressed, but we shouldn't prevent common 3D applications 1814 + * from being usable just to potentially mitigate that corner case. 1815 + */ 1816 + gtt_size = max((AMDGPU_DEFAULT_GTT_SIZE_MB << 20), 1817 + (u64)si.totalram * si.mem_unit / 2); 1818 + } else { 1812 1819 gtt_size = (uint64_t)amdgpu_gtt_size << 20; 1820 + } 1813 1821 1814 1822 /* Initialize GTT memory pool */ 1815 1823 r = amdgpu_gtt_mgr_init(adev, gtt_size);
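The amdgpu hunk above changes the default GTT size from 3/4 of system RAM (capped by other terms) to simply half of system RAM, floored at the driver default. A sketch of the new sizing rule — the 3072 MB default is an assumption based on the macro name, not taken from this diff:

```c
#include <stdint.h>

#define DEFAULT_GTT_SIZE_MB 3072ULL  /* assumed value of the driver default */

/* New rule: GTT gets half of system RAM, but never less than the
 * built-in default. */
static uint64_t gtt_size_bytes(uint64_t totalram_bytes)
{
    uint64_t def = DEFAULT_GTT_SIZE_MB << 20;
    uint64_t half = totalram_bytes / 2;

    return half > def ? half : def;
}
```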
+1 -1
drivers/gpu/drm/amd/display/dc/clk_mgr/dcn315/dcn315_clk_mgr.c
··· 550 550 if (!bw_params->clk_table.entries[i].dtbclk_mhz) 551 551 bw_params->clk_table.entries[i].dtbclk_mhz = def_max.dtbclk_mhz; 552 552 } 553 - ASSERT(bw_params->clk_table.entries[i].dcfclk_mhz); 553 + ASSERT(bw_params->clk_table.entries[i-1].dcfclk_mhz); 554 554 bw_params->vram_type = bios_info->memory_type; 555 555 bw_params->num_channels = bios_info->ma_channel_number; 556 556 if (!bw_params->num_channels)
+2 -22
drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
··· 1766 1766 break; 1767 1767 } 1768 1768 } 1769 - 1770 - /* 1771 - * TO-DO: So far the code logic below only addresses single eDP case. 1772 - * For dual eDP case, there are a few things that need to be 1773 - * implemented first: 1774 - * 1775 - * 1. Change the fastboot logic above, so eDP link[0 or 1]'s 1776 - * stream[0 or 1] will all be checked. 1777 - * 1778 - * 2. Change keep_edp_vdd_on to an array, and maintain keep_edp_vdd_on 1779 - * for each eDP. 1780 - * 1781 - * Once above 2 things are completed, we can then change the logic below 1782 - * correspondingly, so dual eDP case will be fully covered. 1783 - */ 1784 - 1785 - // We are trying to enable eDP, don't power down VDD if eDP stream is existing 1786 - if ((edp_stream_num == 1 && edp_streams[0] != NULL) || can_apply_edp_fast_boot) { 1769 + // We are trying to enable eDP, don't power down VDD 1770 + if (can_apply_edp_fast_boot) 1787 1771 keep_edp_vdd_on = true; 1788 - DC_LOG_EVENT_LINK_TRAINING("Keep eDP Vdd on\n"); 1789 - } else { 1790 - DC_LOG_EVENT_LINK_TRAINING("No eDP stream enabled, turn eDP Vdd off\n"); 1791 - } 1792 1772 } 1793 1773 1794 1774 // Check seamless boot support
+3
drivers/gpu/drm/amd/display/dc/dcn20/dcn20_dpp.c
··· 212 212 break; 213 213 } 214 214 215 + /* Set default color space based on format if none is given. */ 216 + color_space = input_color_space ? input_color_space : color_space; 217 + 215 218 if (is_2bit == 1 && alpha_2bit_lut != NULL) { 216 219 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0); 217 220 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);
+3
drivers/gpu/drm/amd/display/dc/dcn201/dcn201_dpp.c
··· 153 153 break; 154 154 } 155 155 156 + /* Set default color space based on format if none is given. */ 157 + color_space = input_color_space ? input_color_space : color_space; 158 + 156 159 if (is_2bit == 1 && alpha_2bit_lut != NULL) { 157 160 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0); 158 161 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);
+3
drivers/gpu/drm/amd/display/dc/dcn30/dcn30_dpp.c
··· 294 294 break; 295 295 } 296 296 297 + /* Set default color space based on format if none is given. */ 298 + color_space = input_color_space ? input_color_space : color_space; 299 + 297 300 if (is_2bit == 1 && alpha_2bit_lut != NULL) { 298 301 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT0, alpha_2bit_lut->lut0); 299 302 REG_UPDATE(ALPHA_2BIT_LUT, ALPHA_2BIT_LUT1, alpha_2bit_lut->lut1);
+6
drivers/gpu/drm/drm_panel_orientation_quirks.c
··· 152 152 DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYA NEO 2021"), 153 153 }, 154 154 .driver_data = (void *)&lcd800x1280_rightside_up, 155 + }, { /* AYA NEO NEXT */ 156 + .matches = { 157 + DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AYANEO"), 158 + DMI_MATCH(DMI_BOARD_NAME, "NEXT"), 159 + }, 160 + .driver_data = (void *)&lcd800x1280_rightside_up, 155 161 }, { /* Chuwi HiBook (CWI514) */ 156 162 .matches = { 157 163 DMI_MATCH(DMI_BOARD_VENDOR, "Hampoo"),
+29 -3
drivers/gpu/drm/i915/display/intel_dp.c
··· 388 388 return intel_dp_is_edp(intel_dp) ? 810000 : 1350000; 389 389 } 390 390 391 + static bool is_low_voltage_sku(struct drm_i915_private *i915, enum phy phy) 392 + { 393 + u32 voltage; 394 + 395 + voltage = intel_de_read(i915, ICL_PORT_COMP_DW3(phy)) & VOLTAGE_INFO_MASK; 396 + 397 + return voltage == VOLTAGE_INFO_0_85V; 398 + } 399 + 391 400 static int icl_max_source_rate(struct intel_dp *intel_dp) 392 401 { 393 402 struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 394 403 struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 395 404 enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port); 396 405 397 - if (intel_phy_is_combo(dev_priv, phy) && !intel_dp_is_edp(intel_dp)) 406 + if (intel_phy_is_combo(dev_priv, phy) && 407 + (is_low_voltage_sku(dev_priv, phy) || !intel_dp_is_edp(intel_dp))) 398 408 return 540000; 399 409 400 410 return 810000; ··· 412 402 413 403 static int ehl_max_source_rate(struct intel_dp *intel_dp) 414 404 { 415 - if (intel_dp_is_edp(intel_dp)) 405 + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 406 + struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev); 407 + enum phy phy = intel_port_to_phy(dev_priv, dig_port->base.port); 408 + 409 + if (intel_dp_is_edp(intel_dp) || is_low_voltage_sku(dev_priv, phy)) 410 + return 540000; 411 + 412 + return 810000; 413 + } 414 + 415 + static int dg1_max_source_rate(struct intel_dp *intel_dp) 416 + { 417 + struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp); 418 + struct drm_i915_private *i915 = to_i915(dig_port->base.base.dev); 419 + enum phy phy = intel_port_to_phy(i915, dig_port->base.port); 420 + 421 + if (intel_phy_is_combo(i915, phy) && is_low_voltage_sku(i915, phy)) 416 422 return 540000; 417 423 418 424 return 810000; ··· 471 445 max_rate = dg2_max_source_rate(intel_dp); 472 446 else if (IS_ALDERLAKE_P(dev_priv) || IS_ALDERLAKE_S(dev_priv) || 473 447 IS_DG1(dev_priv) || IS_ROCKETLAKE(dev_priv)) 474 - max_rate = 810000; 448 + max_rate = dg1_max_source_rate(intel_dp); 475 449 else if (IS_JSL_EHL(dev_priv)) 476 450 max_rate = ehl_max_source_rate(intel_dp); 477 451 else

+2 -2
drivers/gpu/drm/i915/display/intel_dpll_mgr.c
··· 2396 2396 } 2397 2397 2398 2398 /* 2399 - * Display WA #22010492432: ehl, tgl, adl-p 2399 + * Display WA #22010492432: ehl, tgl, adl-s, adl-p 2400 2400 * Program half of the nominal DCO divider fraction value. 2401 2401 */ 2402 2402 static bool ··· 2404 2404 { 2405 2405 return ((IS_PLATFORM(i915, INTEL_ELKHARTLAKE) && 2406 2406 IS_JSL_EHL_DISPLAY_STEP(i915, STEP_B0, STEP_FOREVER)) || 2407 - IS_TIGERLAKE(i915) || IS_ALDERLAKE_P(i915)) && 2407 + IS_TIGERLAKE(i915) || IS_ALDERLAKE_S(i915) || IS_ALDERLAKE_P(i915)) && 2408 2408 i915->dpll.ref_clks.nssc == 38400; 2409 2409 } 2410 2410
+3 -2
drivers/gpu/drm/i915/i915_drm_client.c
··· 116 116 total += busy_add(ctx, class); 117 117 rcu_read_unlock(); 118 118 119 - seq_printf(m, "drm-engine-%s:\t%llu ns\n", 120 - uabi_class_names[class], total); 119 + if (capacity) 120 + seq_printf(m, "drm-engine-%s:\t%llu ns\n", 121 + uabi_class_names[class], total); 121 122 122 123 if (capacity > 1) 123 124 seq_printf(m, "drm-engine-capacity-%s:\t%u\n",
+10 -4
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 498 498 499 499 ring->cur = ring->start; 500 500 ring->next = ring->start; 501 - 502 - /* reset completed fence seqno: */ 503 - ring->memptrs->fence = ring->fctx->completed_fence; 504 501 ring->memptrs->rptr = 0; 502 + 503 + /* Detect and clean up an impossible fence, ie. if GPU managed 504 + * to scribble something invalid, we don't want that to confuse 505 + * us into mistakingly believing that submits have completed. 506 + */ 507 + if (fence_before(ring->fctx->last_fence, ring->memptrs->fence)) { 508 + ring->memptrs->fence = ring->fctx->last_fence; 509 + } 505 510 } 506 511 507 512 return 0; ··· 1062 1057 for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++) 1063 1058 release_firmware(adreno_gpu->fw[i]); 1064 1059 1065 - pm_runtime_disable(&priv->gpu_pdev->dev); 1060 + if (pm_runtime_enabled(&priv->gpu_pdev->dev)) 1061 + pm_runtime_disable(&priv->gpu_pdev->dev); 1066 1062 1067 1063 msm_gpu_cleanup(&adreno_gpu->base); 1068 1064 }
+8 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
··· 11 11 struct msm_drm_private *priv = dev->dev_private; 12 12 struct dpu_kms *dpu_kms = to_dpu_kms(priv->kms); 13 13 14 - return drm_add_modes_noedid(connector, dpu_kms->catalog->caps->max_linewidth, 14 + /* 15 + * We should ideally be limiting the modes only to the maxlinewidth but 16 + * on some chipsets this will allow even 4k modes to be added which will 17 + * fail the per SSPP bandwidth checks. So, till we have dual-SSPP support 18 + * and source split support added lets limit the modes based on max_mixer_width 19 + * as 4K modes can then be supported. 20 + */ 21 + return drm_add_modes_noedid(connector, dpu_kms->catalog->caps->max_mixer_width, 15 22 dev->mode_config.max_height); 16 23 } 17 24
+2
drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c
··· 216 216 encoder = mdp4_lcdc_encoder_init(dev, panel_node); 217 217 if (IS_ERR(encoder)) { 218 218 DRM_DEV_ERROR(dev->dev, "failed to construct LCDC encoder\n"); 219 + of_node_put(panel_node); 219 220 return PTR_ERR(encoder); 220 221 } 221 222 ··· 226 225 connector = mdp4_lvds_connector_init(dev, panel_node, encoder); 227 226 if (IS_ERR(connector)) { 228 227 DRM_DEV_ERROR(dev->dev, "failed to initialize LVDS connector\n"); 228 + of_node_put(panel_node); 229 229 return PTR_ERR(connector); 230 230 } 231 231
+25 -8
drivers/gpu/drm/msm/dp/dp_ctrl.c
··· 1534 1534 return ret; 1535 1535 } 1536 1536 1537 + static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl); 1538 + 1537 1539 static int dp_ctrl_process_phy_test_request(struct dp_ctrl_private *ctrl) 1538 1540 { 1539 1541 int ret = 0; ··· 1559 1557 1560 1558 ret = dp_ctrl_on_link(&ctrl->dp_ctrl); 1561 1559 if (!ret) 1562 - ret = dp_ctrl_on_stream(&ctrl->dp_ctrl); 1560 + ret = dp_ctrl_on_stream_phy_test_report(&ctrl->dp_ctrl); 1563 1561 else 1564 1562 DRM_ERROR("failed to enable DP link controller\n"); 1565 1563 ··· 1815 1813 return dp_ctrl_setup_main_link(ctrl, &training_step); 1816 1814 } 1817 1815 1818 - int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl) 1816 + static int dp_ctrl_on_stream_phy_test_report(struct dp_ctrl *dp_ctrl) 1817 + { 1818 + int ret; 1819 + struct dp_ctrl_private *ctrl; 1820 + 1821 + ctrl = container_of(dp_ctrl, struct dp_ctrl_private, dp_ctrl); 1822 + 1823 + ctrl->dp_ctrl.pixel_rate = ctrl->panel->dp_mode.drm_mode.clock; 1824 + 1825 + ret = dp_ctrl_enable_stream_clocks(ctrl); 1826 + if (ret) { 1827 + DRM_ERROR("Failed to start pixel clocks. ret=%d\n", ret); 1828 + return ret; 1829 + } 1830 + 1831 + dp_ctrl_send_phy_test_pattern(ctrl); 1832 + 1833 + return 0; 1834 + } 1835 + 1836 + int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl, bool force_link_train) 1819 1837 { 1820 1838 int ret = 0; 1821 1839 bool mainlink_ready = false; ··· 1871 1849 goto end; 1872 1850 } 1873 1851 1874 - if (ctrl->link->sink_request & DP_TEST_LINK_PHY_TEST_PATTERN) { 1875 - dp_ctrl_send_phy_test_pattern(ctrl); 1876 - return 0; 1877 - } 1878 - 1879 - if (!dp_ctrl_channel_eq_ok(ctrl)) 1852 + if (force_link_train || !dp_ctrl_channel_eq_ok(ctrl)) 1880 1853 dp_ctrl_link_retrain(ctrl); 1881 1854 1882 1855 /* stop txing train pattern to end link training */
+1 -1
drivers/gpu/drm/msm/dp/dp_ctrl.h
··· 21 21 }; 22 22 23 23 int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl); 24 - int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl); 24 + int dp_ctrl_on_stream(struct dp_ctrl *dp_ctrl, bool force_link_train); 25 25 int dp_ctrl_off_link_stream(struct dp_ctrl *dp_ctrl); 26 26 int dp_ctrl_off_link(struct dp_ctrl *dp_ctrl); 27 27 int dp_ctrl_off(struct dp_ctrl *dp_ctrl);
+8 -8
drivers/gpu/drm/msm/dp/dp_display.c
··· 309 309 struct msm_drm_private *priv = dev_get_drvdata(master); 310 310 311 311 /* disable all HPD interrupts */ 312 - dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false); 312 + if (dp->core_initialized) 313 + dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false); 313 314 314 315 kthread_stop(dp->ev_tsk); 315 316 ··· 873 872 return 0; 874 873 } 875 874 876 - rc = dp_ctrl_on_stream(dp->ctrl); 875 + rc = dp_ctrl_on_stream(dp->ctrl, data); 877 876 if (!rc) 878 877 dp_display->power_on = true; 879 878 ··· 1660 1659 int rc = 0; 1661 1660 struct dp_display_private *dp_display; 1662 1661 u32 state; 1662 + bool force_link_train = false; 1663 1663 1664 1664 dp_display = container_of(dp, struct dp_display_private, dp_display); 1665 1665 if (!dp_display->dp_mode.drm_mode.clock) { ··· 1695 1693 1696 1694 state = dp_display->hpd_state; 1697 1695 1698 - if (state == ST_DISPLAY_OFF) 1696 + if (state == ST_DISPLAY_OFF) { 1699 1697 dp_display_host_phy_init(dp_display); 1698 + force_link_train = true; 1699 + } 1700 1700 1701 - dp_display_enable(dp_display, 0); 1701 + dp_display_enable(dp_display, force_link_train); 1702 1702 1703 1703 rc = dp_display_post_enable(dp); 1704 1704 if (rc) { ··· 1708 1704 dp_display_disable(dp_display, 0); 1709 1705 dp_display_unprepare(dp); 1710 1706 } 1711 - 1712 - /* manual kick off plug event to train link */ 1713 - if (state == ST_DISPLAY_OFF) 1714 - dp_add_event(dp_display, EV_IRQ_HPD_INT, 0, 0); 1715 1707 1716 1708 /* completed connection */ 1717 1709 dp_display->hpd_state = ST_CONNECTED;
+1 -1
drivers/gpu/drm/msm/msm_drv.c
··· 964 964 .prime_handle_to_fd = drm_gem_prime_handle_to_fd, 965 965 .prime_fd_to_handle = drm_gem_prime_fd_to_handle, 966 966 .gem_prime_import_sg_table = msm_gem_prime_import_sg_table, 967 - .gem_prime_mmap = drm_gem_prime_mmap, 967 + .gem_prime_mmap = msm_gem_prime_mmap, 968 968 #ifdef CONFIG_DEBUG_FS 969 969 .debugfs_init = msm_debugfs_init, 970 970 #endif
+1
drivers/gpu/drm/msm/msm_drv.h
··· 246 246 void msm_gem_shrinker_init(struct drm_device *dev); 247 247 void msm_gem_shrinker_cleanup(struct drm_device *dev); 248 248 249 + int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); 249 250 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj); 250 251 int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map); 251 252 void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
+5 -3
drivers/gpu/drm/msm/msm_fence.c
··· 46 46 (int32_t)(*fctx->fenceptr - fence) >= 0; 47 47 } 48 48 49 - /* called from workqueue */ 49 + /* called from irq handler and workqueue (in recover path) */ 50 50 void msm_update_fence(struct msm_fence_context *fctx, uint32_t fence) 51 51 { 52 - spin_lock(&fctx->spinlock); 52 + unsigned long flags; 53 + 54 + spin_lock_irqsave(&fctx->spinlock, flags); 53 55 fctx->completed_fence = max(fence, fctx->completed_fence); 54 - spin_unlock(&fctx->spinlock); 56 + spin_unlock_irqrestore(&fctx->spinlock, flags); 55 57 } 56 58 57 59 struct msm_fence {
+3 -4
drivers/gpu/drm/msm/msm_gem.c
··· 439 439 return ret; 440 440 } 441 441 442 - void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma) 442 + void msm_gem_unpin_locked(struct drm_gem_object *obj) 443 443 { 444 444 struct msm_gem_object *msm_obj = to_msm_bo(obj); 445 445 446 446 GEM_WARN_ON(!msm_gem_is_locked(obj)); 447 - 448 - msm_gem_unpin_vma(vma); 449 447 450 448 msm_obj->pin_count--; 451 449 GEM_WARN_ON(msm_obj->pin_count < 0); ··· 584 586 msm_gem_lock(obj); 585 587 vma = lookup_vma(obj, aspace); 586 588 if (!GEM_WARN_ON(!vma)) { 587 - msm_gem_unpin_vma_locked(obj, vma); 589 + msm_gem_unpin_vma(vma); 590 + msm_gem_unpin_locked(obj); 588 591 } 589 592 msm_gem_unlock(obj); 590 593 }
+6 -5
drivers/gpu/drm/msm/msm_gem.h
··· 145 145 146 146 uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); 147 147 int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); 148 - void msm_gem_unpin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); 148 + void msm_gem_unpin_locked(struct drm_gem_object *obj); 149 149 struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, 150 150 struct msm_gem_address_space *aspace); 151 151 int msm_gem_get_iova(struct drm_gem_object *obj, ··· 377 377 } *cmd; /* array of size nr_cmds */ 378 378 struct { 379 379 /* make sure these don't conflict w/ MSM_SUBMIT_BO_x */ 380 - #define BO_VALID 0x8000 /* is current addr in cmdstream correct/valid? */ 381 - #define BO_LOCKED 0x4000 /* obj lock is held */ 382 - #define BO_ACTIVE 0x2000 /* active refcnt is held */ 383 - #define BO_PINNED 0x1000 /* obj is pinned and on active list */ 380 + #define BO_VALID 0x8000 /* is current addr in cmdstream correct/valid? */ 381 + #define BO_LOCKED 0x4000 /* obj lock is held */ 382 + #define BO_ACTIVE 0x2000 /* active refcnt is held */ 383 + #define BO_OBJ_PINNED 0x1000 /* obj (pages) is pinned and on active list */ 384 + #define BO_VMA_PINNED 0x0800 /* vma (virtual address) is pinned */ 384 385 uint32_t flags; 385 386 union { 386 387 struct msm_gem_object *obj;
+15
drivers/gpu/drm/msm/msm_gem_prime.c
··· 11 11 #include "msm_drv.h" 12 12 #include "msm_gem.h" 13 13 14 + int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) 15 + { 16 + int ret; 17 + 18 + /* Ensure the mmap offset is initialized. We lazily initialize it, 19 + * so if it has not been first mmap'd directly as a GEM object, the 20 + * mmap offset will not be already initialized. 21 + */ 22 + ret = drm_gem_create_mmap_offset(obj); 23 + if (ret) 24 + return ret; 25 + 26 + return drm_gem_prime_mmap(obj, vma); 27 + } 28 + 14 29 struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj) 15 30 { 16 31 struct msm_gem_object *msm_obj = to_msm_bo(obj);
+12 -6
drivers/gpu/drm/msm/msm_gem_submit.c
··· 232 232 */ 233 233 submit->bos[i].flags &= ~cleanup_flags; 234 234 235 - if (flags & BO_PINNED) 236 - msm_gem_unpin_vma_locked(obj, submit->bos[i].vma); 235 + if (flags & BO_VMA_PINNED) 236 + msm_gem_unpin_vma(submit->bos[i].vma); 237 + 238 + if (flags & BO_OBJ_PINNED) 239 + msm_gem_unpin_locked(obj); 237 240 238 241 if (flags & BO_ACTIVE) 239 242 msm_gem_active_put(obj); ··· 247 244 248 245 static void submit_unlock_unpin_bo(struct msm_gem_submit *submit, int i) 249 246 { 250 - submit_cleanup_bo(submit, i, BO_PINNED | BO_ACTIVE | BO_LOCKED); 247 + unsigned cleanup_flags = BO_VMA_PINNED | BO_OBJ_PINNED | 248 + BO_ACTIVE | BO_LOCKED; 249 + submit_cleanup_bo(submit, i, cleanup_flags); 251 250 252 251 if (!(submit->bos[i].flags & BO_VALID)) 253 252 submit->bos[i].iova = 0; ··· 380 375 if (ret) 381 376 break; 382 377 383 - submit->bos[i].flags |= BO_PINNED; 378 + submit->bos[i].flags |= BO_OBJ_PINNED | BO_VMA_PINNED; 384 379 submit->bos[i].vma = vma; 385 380 386 381 if (vma->iova == submit->bos[i].iova) { ··· 516 511 unsigned i; 517 512 518 513 if (error) 519 - cleanup_flags |= BO_PINNED | BO_ACTIVE; 514 + cleanup_flags |= BO_VMA_PINNED | BO_OBJ_PINNED | BO_ACTIVE; 520 515 521 516 for (i = 0; i < submit->nr_bos; i++) { 522 517 struct msm_gem_object *msm_obj = submit->bos[i].obj; ··· 534 529 struct drm_gem_object *obj = &submit->bos[i].obj->base; 535 530 536 531 msm_gem_lock(obj); 537 - submit_cleanup_bo(submit, i, BO_PINNED | BO_ACTIVE); 532 + /* Note, VMA already fence-unpinned before submit: */ 533 + submit_cleanup_bo(submit, i, BO_OBJ_PINNED | BO_ACTIVE); 538 534 msm_gem_unlock(obj); 539 535 drm_gem_object_put(obj); 540 536 }
+2 -4
drivers/gpu/drm/msm/msm_gem_vma.c
··· 62 62 unsigned size = vma->node.size; 63 63 64 64 /* Print a message if we try to purge a vma in use */ 65 - if (GEM_WARN_ON(msm_gem_vma_inuse(vma))) 66 - return; 65 + GEM_WARN_ON(msm_gem_vma_inuse(vma)); 67 66 68 67 /* Don't do anything if the memory isn't mapped */ 69 68 if (!vma->mapped) ··· 127 128 void msm_gem_close_vma(struct msm_gem_address_space *aspace, 128 129 struct msm_gem_vma *vma) 129 130 { 130 - if (GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped)) 131 - return; 131 + GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped); 132 132 133 133 spin_lock(&aspace->lock); 134 134 if (vma->iova)
+5 -22
drivers/gpu/drm/msm/msm_gpu.c
··· 164 164 return ret; 165 165 } 166 166 167 - static void update_fences(struct msm_gpu *gpu, struct msm_ringbuffer *ring, 168 - uint32_t fence) 169 - { 170 - struct msm_gem_submit *submit; 171 - unsigned long flags; 172 - 173 - spin_lock_irqsave(&ring->submit_lock, flags); 174 - list_for_each_entry(submit, &ring->submits, node) { 175 - if (fence_after(submit->seqno, fence)) 176 - break; 177 - 178 - msm_update_fence(submit->ring->fctx, 179 - submit->hw_fence->seqno); 180 - dma_fence_signal(submit->hw_fence); 181 - } 182 - spin_unlock_irqrestore(&ring->submit_lock, flags); 183 - } 184 - 185 167 #ifdef CONFIG_DEV_COREDUMP 186 168 static ssize_t msm_gpu_devcoredump_read(char *buffer, loff_t offset, 187 169 size_t count, void *data, size_t datalen) ··· 418 436 * one more to clear the faulting submit 419 437 */ 420 438 if (ring == cur_ring) 421 - fence++; 439 + ring->memptrs->fence = ++fence; 422 440 423 - update_fences(gpu, ring, fence); 441 + msm_update_fence(ring->fctx, fence); 424 442 } 425 443 426 444 if (msm_gpu_active(gpu)) { ··· 654 672 msm_submit_retire(submit); 655 673 656 674 pm_runtime_mark_last_busy(&gpu->pdev->dev); 657 - pm_runtime_put_autosuspend(&gpu->pdev->dev); 658 675 659 676 spin_lock_irqsave(&ring->submit_lock, flags); 660 677 list_del(&submit->node); ··· 666 685 if (!gpu->active_submits) 667 686 msm_devfreq_idle(gpu); 668 687 mutex_unlock(&gpu->active_lock); 688 + 689 + pm_runtime_put_autosuspend(&gpu->pdev->dev); 669 690 670 691 msm_gem_submit_put(submit); 671 692 } ··· 718 735 int i; 719 736 720 737 for (i = 0; i < gpu->nr_rings; i++) 721 - update_fences(gpu, gpu->rb[i], gpu->rb[i]->memptrs->fence); 738 + msm_update_fence(gpu->rb[i]->fctx, gpu->rb[i]->memptrs->fence); 722 739 723 740 kthread_queue_work(gpu->worker, &gpu->retire_work); 724 741 update_sw_cntrs(gpu);
+1 -1
drivers/gpu/drm/msm/msm_iommu.c
··· 58 58 u64 addr = iova; 59 59 unsigned int i; 60 60 61 - for_each_sg(sgt->sgl, sg, sgt->nents, i) { 61 + for_each_sgtable_sg(sgt, sg, i) { 62 62 size_t size = sg->length; 63 63 phys_addr_t phys = sg_phys(sg); 64 64
+1 -1
drivers/gpu/drm/msm/msm_ringbuffer.c
··· 25 25 26 26 msm_gem_lock(obj); 27 27 msm_gem_unpin_vma_fenced(submit->bos[i].vma, fctx); 28 - submit->bos[i].flags &= ~BO_PINNED; 28 + submit->bos[i].flags &= ~BO_VMA_PINNED; 29 29 msm_gem_unlock(obj); 30 30 } 31 31
+11 -1
drivers/gpu/drm/sun4i/sun4i_drv.c
··· 7 7 */ 8 8 9 9 #include <linux/component.h> 10 + #include <linux/dma-mapping.h> 10 11 #include <linux/kfifo.h> 11 12 #include <linux/module.h> 12 13 #include <linux/of_graph.h> ··· 74 73 goto free_drm; 75 74 } 76 75 77 - dev_set_drvdata(dev, drm); 78 76 drm->dev_private = drv; 79 77 INIT_LIST_HEAD(&drv->frontend_list); 80 78 INIT_LIST_HEAD(&drv->engine_list); ··· 114 114 115 115 drm_fbdev_generic_setup(drm, 32); 116 116 117 + dev_set_drvdata(dev, drm); 118 + 117 119 return 0; 118 120 119 121 finish_poll: ··· 132 130 { 133 131 struct drm_device *drm = dev_get_drvdata(dev); 134 132 133 + dev_set_drvdata(dev, NULL); 135 134 drm_dev_unregister(drm); 136 135 drm_kms_helper_poll_fini(drm); 137 136 drm_atomic_helper_shutdown(drm); ··· 369 366 int i, ret, count = 0; 370 367 371 368 INIT_KFIFO(list.fifo); 369 + 370 + /* 371 + * DE2 and DE3 cores actually supports 40-bit addresses, but 372 + * driver does not. 373 + */ 374 + dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32)); 375 + dma_set_max_seg_size(&pdev->dev, UINT_MAX); 372 376 373 377 for (i = 0;; i++) { 374 378 struct device_node *pipeline = of_parse_phandle(np,
+1 -1
drivers/gpu/drm/sun4i/sun4i_layer.c
··· 117 117 struct sun4i_layer *layer = plane_to_sun4i_layer(plane); 118 118 119 119 if (IS_ERR_OR_NULL(layer->backend->frontend)) 120 - sun4i_backend_format_is_supported(format, modifier); 120 + return sun4i_backend_format_is_supported(format, modifier); 121 121 122 122 return sun4i_backend_format_is_supported(format, modifier) || 123 123 sun4i_frontend_format_is_supported(format, modifier);
+4 -50
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.c
··· 93 93 return crtcs; 94 94 } 95 95 96 - static int sun8i_dw_hdmi_find_connector_pdev(struct device *dev, 97 - struct platform_device **pdev_out) 98 - { 99 - struct platform_device *pdev; 100 - struct device_node *remote; 101 - 102 - remote = of_graph_get_remote_node(dev->of_node, 1, -1); 103 - if (!remote) 104 - return -ENODEV; 105 - 106 - if (!of_device_is_compatible(remote, "hdmi-connector")) { 107 - of_node_put(remote); 108 - return -ENODEV; 109 - } 110 - 111 - pdev = of_find_device_by_node(remote); 112 - of_node_put(remote); 113 - if (!pdev) 114 - return -ENODEV; 115 - 116 - *pdev_out = pdev; 117 - return 0; 118 - } 119 - 120 96 static int sun8i_dw_hdmi_bind(struct device *dev, struct device *master, 121 97 void *data) 122 98 { 123 - struct platform_device *pdev = to_platform_device(dev), *connector_pdev; 99 + struct platform_device *pdev = to_platform_device(dev); 124 100 struct dw_hdmi_plat_data *plat_data; 125 101 struct drm_device *drm = data; 126 102 struct device_node *phy_node; ··· 143 167 return dev_err_probe(dev, PTR_ERR(hdmi->regulator), 144 168 "Couldn't get regulator\n"); 145 169 146 - ret = sun8i_dw_hdmi_find_connector_pdev(dev, &connector_pdev); 147 - if (!ret) { 148 - hdmi->ddc_en = gpiod_get_optional(&connector_pdev->dev, 149 - "ddc-en", GPIOD_OUT_HIGH); 150 - platform_device_put(connector_pdev); 151 - 152 - if (IS_ERR(hdmi->ddc_en)) { 153 - dev_err(dev, "Couldn't get ddc-en gpio\n"); 154 - return PTR_ERR(hdmi->ddc_en); 155 - } 156 - } 157 - 158 170 ret = regulator_enable(hdmi->regulator); 159 171 if (ret) { 160 172 dev_err(dev, "Failed to enable regulator\n"); 161 - goto err_unref_ddc_en; 173 + return ret; 162 174 } 163 - 164 - gpiod_set_value(hdmi->ddc_en, 1); 165 175 166 176 ret = reset_control_deassert(hdmi->rst_ctrl); 167 177 if (ret) { 168 178 dev_err(dev, "Could not deassert ctrl reset control\n"); 169 - goto err_disable_ddc_en; 179 + goto err_disable_regulator; 170 180 } 171 181 172 182 ret = clk_prepare_enable(hdmi->clk_tmds); ··· 
207 245 clk_disable_unprepare(hdmi->clk_tmds); 208 246 err_assert_ctrl_reset: 209 247 reset_control_assert(hdmi->rst_ctrl); 210 - err_disable_ddc_en: 211 - gpiod_set_value(hdmi->ddc_en, 0); 248 + err_disable_regulator: 212 249 regulator_disable(hdmi->regulator); 213 - err_unref_ddc_en: 214 - if (hdmi->ddc_en) 215 - gpiod_put(hdmi->ddc_en); 216 250 217 251 return ret; 218 252 } ··· 222 264 sun8i_hdmi_phy_deinit(hdmi->phy); 223 265 clk_disable_unprepare(hdmi->clk_tmds); 224 266 reset_control_assert(hdmi->rst_ctrl); 225 - gpiod_set_value(hdmi->ddc_en, 0); 226 267 regulator_disable(hdmi->regulator); 227 - 228 - if (hdmi->ddc_en) 229 - gpiod_put(hdmi->ddc_en); 230 268 } 231 269 232 270 static const struct component_ops sun8i_dw_hdmi_ops = {
-2
drivers/gpu/drm/sun4i/sun8i_dw_hdmi.h
··· 9 9 #include <drm/bridge/dw_hdmi.h> 10 10 #include <drm/drm_encoder.h> 11 11 #include <linux/clk.h> 12 - #include <linux/gpio/consumer.h> 13 12 #include <linux/regmap.h> 14 13 #include <linux/regulator/consumer.h> 15 14 #include <linux/reset.h> ··· 192 193 struct regulator *regulator; 193 194 const struct sun8i_dw_hdmi_quirks *quirks; 194 195 struct reset_control *rst_ctrl; 195 - struct gpio_desc *ddc_en; 196 196 }; 197 197 198 198 extern struct platform_driver sun8i_hdmi_phy_driver;
+54 -8
drivers/gpu/drm/vc4/vc4_bo.c
··· 248 248 { 249 249 struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev); 250 250 251 + if (WARN_ON_ONCE(vc4->is_vc5)) 252 + return; 253 + 251 254 mutex_lock(&vc4->purgeable.lock); 252 255 list_add_tail(&bo->size_head, &vc4->purgeable.list); 253 256 vc4->purgeable.num++; ··· 261 258 static void vc4_bo_remove_from_purgeable_pool_locked(struct vc4_bo *bo) 262 259 { 263 260 struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev); 261 + 262 + if (WARN_ON_ONCE(vc4->is_vc5)) 263 + return; 264 264 265 265 /* list_del_init() is used here because the caller might release 266 266 * the purgeable lock in order to acquire the madv one and update the ··· 393 387 struct vc4_dev *vc4 = to_vc4_dev(dev); 394 388 struct vc4_bo *bo; 395 389 390 + if (WARN_ON_ONCE(vc4->is_vc5)) 391 + return ERR_PTR(-ENODEV); 392 + 396 393 bo = kzalloc(sizeof(*bo), GFP_KERNEL); 397 394 if (!bo) 398 395 return ERR_PTR(-ENOMEM); ··· 421 412 struct vc4_dev *vc4 = to_vc4_dev(dev); 422 413 struct drm_gem_cma_object *cma_obj; 423 414 struct vc4_bo *bo; 415 + 416 + if (WARN_ON_ONCE(vc4->is_vc5)) 417 + return ERR_PTR(-ENODEV); 424 418 425 419 if (size == 0) 426 420 return ERR_PTR(-EINVAL); ··· 483 471 return bo; 484 472 } 485 473 486 - int vc4_dumb_create(struct drm_file *file_priv, 487 - struct drm_device *dev, 488 - struct drm_mode_create_dumb *args) 474 + int vc4_bo_dumb_create(struct drm_file *file_priv, 475 + struct drm_device *dev, 476 + struct drm_mode_create_dumb *args) 489 477 { 490 - int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8); 478 + struct vc4_dev *vc4 = to_vc4_dev(dev); 491 479 struct vc4_bo *bo = NULL; 492 480 int ret; 493 481 494 - if (args->pitch < min_pitch) 495 - args->pitch = min_pitch; 482 + if (WARN_ON_ONCE(vc4->is_vc5)) 483 + return -ENODEV; 496 484 497 - if (args->size < args->pitch * args->height) 498 - args->size = args->pitch * args->height; 485 + ret = vc4_dumb_fixup_args(args); 486 + if (ret) 487 + return ret; 499 488 500 489 bo = vc4_bo_create(dev, args->size, false, 
VC4_BO_TYPE_DUMB); 501 490 if (IS_ERR(bo)) ··· 614 601 615 602 int vc4_bo_inc_usecnt(struct vc4_bo *bo) 616 603 { 604 + struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev); 617 605 int ret; 606 + 607 + if (WARN_ON_ONCE(vc4->is_vc5)) 608 + return -ENODEV; 618 609 619 610 /* Fast path: if the BO is already retained by someone, no need to 620 611 * check the madv status. ··· 654 637 655 638 void vc4_bo_dec_usecnt(struct vc4_bo *bo) 656 639 { 640 + struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev); 641 + 642 + if (WARN_ON_ONCE(vc4->is_vc5)) 643 + return; 644 + 657 645 /* Fast path: if the BO is still retained by someone, no need to test 658 646 * the madv value. 659 647 */ ··· 778 756 struct vc4_bo *bo = NULL; 779 757 int ret; 780 758 759 + if (WARN_ON_ONCE(vc4->is_vc5)) 760 + return -ENODEV; 761 + 781 762 ret = vc4_grab_bin_bo(vc4, vc4file); 782 763 if (ret) 783 764 return ret; ··· 804 779 int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data, 805 780 struct drm_file *file_priv) 806 781 { 782 + struct vc4_dev *vc4 = to_vc4_dev(dev); 807 783 struct drm_vc4_mmap_bo *args = data; 808 784 struct drm_gem_object *gem_obj; 785 + 786 + if (WARN_ON_ONCE(vc4->is_vc5)) 787 + return -ENODEV; 809 788 810 789 gem_obj = drm_gem_object_lookup(file_priv, args->handle); 811 790 if (!gem_obj) { ··· 833 804 struct vc4_dev *vc4 = to_vc4_dev(dev); 834 805 struct vc4_bo *bo = NULL; 835 806 int ret; 807 + 808 + if (WARN_ON_ONCE(vc4->is_vc5)) 809 + return -ENODEV; 836 810 837 811 if (args->size == 0) 838 812 return -EINVAL; ··· 907 875 int vc4_set_tiling_ioctl(struct drm_device *dev, void *data, 908 876 struct drm_file *file_priv) 909 877 { 878 + struct vc4_dev *vc4 = to_vc4_dev(dev); 910 879 struct drm_vc4_set_tiling *args = data; 911 880 struct drm_gem_object *gem_obj; 912 881 struct vc4_bo *bo; 913 882 bool t_format; 883 + 884 + if (WARN_ON_ONCE(vc4->is_vc5)) 885 + return -ENODEV; 914 886 915 887 if (args->flags != 0) 916 888 return -EINVAL; ··· 954 918 int vc4_get_tiling_ioctl(struct 
drm_device *dev, void *data, 955 919 struct drm_file *file_priv) 956 920 { 921 + struct vc4_dev *vc4 = to_vc4_dev(dev); 957 922 struct drm_vc4_get_tiling *args = data; 958 923 struct drm_gem_object *gem_obj; 959 924 struct vc4_bo *bo; 925 + 926 + if (WARN_ON_ONCE(vc4->is_vc5)) 927 + return -ENODEV; 960 928 961 929 if (args->flags != 0 || args->modifier != 0) 962 930 return -EINVAL; ··· 987 947 { 988 948 struct vc4_dev *vc4 = to_vc4_dev(dev); 989 949 int i; 950 + 951 + if (WARN_ON_ONCE(vc4->is_vc5)) 952 + return -ENODEV; 990 953 991 954 /* Create the initial set of BO labels that the kernel will 992 955 * use. This lets us avoid a bunch of string reallocation in ··· 1049 1006 char *name; 1050 1007 struct drm_gem_object *gem_obj; 1051 1008 int ret = 0, label; 1009 + 1010 + if (WARN_ON_ONCE(vc4->is_vc5)) 1011 + return -ENODEV; 1052 1012 1053 1013 if (!args->len) 1054 1014 return -EINVAL;
+147 -53
drivers/gpu/drm/vc4/vc4_crtc.c
··· 256 256 * Removing 1 from the FIFO full level however 257 257 * seems to completely remove that issue. 258 258 */ 259 - if (!vc4->hvs->hvs5) 259 + if (!vc4->is_vc5) 260 260 return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1; 261 261 262 262 return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX; ··· 389 389 if (is_dsi) 390 390 CRTC_WRITE(PV_HACT_ACT, mode->hdisplay * pixel_rep); 391 391 392 - if (vc4->hvs->hvs5) 392 + if (vc4->is_vc5) 393 393 CRTC_WRITE(PV_MUX_CFG, 394 394 VC4_SET_FIELD(PV_MUX_CFG_RGB_PIXEL_MUX_MODE_NO_SWAP, 395 395 PV_MUX_CFG_RGB_PIXEL_MUX_MODE)); ··· 775 775 struct drm_framebuffer *old_fb; 776 776 struct drm_pending_vblank_event *event; 777 777 778 - struct vc4_seqno_cb cb; 778 + union { 779 + struct dma_fence_cb fence; 780 + struct vc4_seqno_cb seqno; 781 + } cb; 779 782 }; 780 783 781 784 /* Called when the V3D execution for the BO being flipped to is done, so that 782 785 * we can actually update the plane's address to point to it. 783 786 */ 784 787 static void 785 - vc4_async_page_flip_complete(struct vc4_seqno_cb *cb) 788 + vc4_async_page_flip_complete(struct vc4_async_flip_state *flip_state) 786 789 { 787 - struct vc4_async_flip_state *flip_state = 788 - container_of(cb, struct vc4_async_flip_state, cb); 789 790 struct drm_crtc *crtc = flip_state->crtc; 790 791 struct drm_device *dev = crtc->dev; 791 792 struct drm_plane *plane = crtc->primary; ··· 803 802 drm_crtc_vblank_put(crtc); 804 803 drm_framebuffer_put(flip_state->fb); 805 804 806 - /* Decrement the BO usecnt in order to keep the inc/dec calls balanced 807 - * when the planes are updated through the async update path. 808 - * FIXME: we should move to generic async-page-flip when it's 809 - * available, so that we can get rid of this hand-made cleanup_fb() 810 - * logic. 
810 - * logic.
811 - */ 812 - if (flip_state->old_fb) { 813 - struct drm_gem_cma_object *cma_bo; 814 - struct vc4_bo *bo; 815 - 816 - cma_bo = drm_fb_cma_get_gem_obj(flip_state->old_fb, 0); 817 - bo = to_vc4_bo(&cma_bo->base); 818 - vc4_bo_dec_usecnt(bo); 805 + if (flip_state->old_fb) 819 806 drm_framebuffer_put(flip_state->old_fb); 820 - } 821 807 822 808 kfree(flip_state); 823 809 } 824 810 825 - /* Implements async (non-vblank-synced) page flips. 826 - * 827 - * The page flip ioctl needs to return immediately, so we grab the 828 - * modeset semaphore on the pipe, and queue the address update for 829 - * when V3D is done with the BO being flipped to. 830 - */ 831 - static int vc4_async_page_flip(struct drm_crtc *crtc, 832 - struct drm_framebuffer *fb, 833 - struct drm_pending_vblank_event *event, 834 - uint32_t flags) 811 + static void vc4_async_page_flip_seqno_complete(struct vc4_seqno_cb *cb) 835 812 { 836 - struct drm_device *dev = crtc->dev; 837 - struct drm_plane *plane = crtc->primary; 838 - int ret = 0; 839 - struct vc4_async_flip_state *flip_state; 840 - struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0); 841 - struct vc4_bo *bo = to_vc4_bo(&cma_bo->base); 813 + struct vc4_async_flip_state *flip_state = 814 + container_of(cb, struct vc4_async_flip_state, cb.seqno); 815 + struct vc4_bo *bo = NULL; 842 816 843 - /* Increment the BO usecnt here, so that we never end up with an 844 - * unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the 845 - * plane is later updated through the non-async path. 846 - * FIXME: we should move to generic async-page-flip when it's 847 - * available, so that we can get rid of this hand-made prepare_fb() 848 - * logic. 
848 - * logic.
817 + if (flip_state->old_fb) { 818 + struct drm_gem_cma_object *cma_bo = 819 + drm_fb_cma_get_gem_obj(flip_state->old_fb, 0); 820 + bo = to_vc4_bo(&cma_bo->base); 821 + } 822 + 823 + vc4_async_page_flip_complete(flip_state); 824 + 825 + /* 826 + * Decrement the BO usecnt in order to keep the inc/dec 827 + * calls balanced when the planes are updated through 828 + * the async update path. 829 + * 830 + * FIXME: we should move to generic async-page-flip when 831 + * it's available, so that we can get rid of this 832 + * hand-made cleanup_fb() logic. 849 833 */ 850 - ret = vc4_bo_inc_usecnt(bo); 834 + if (bo) 835 + vc4_bo_dec_usecnt(bo); 836 + } 837 + 838 + static void vc4_async_page_flip_fence_complete(struct dma_fence *fence, 839 + struct dma_fence_cb *cb) 840 + { 841 + struct vc4_async_flip_state *flip_state = 842 + container_of(cb, struct vc4_async_flip_state, cb.fence); 843 + 844 + vc4_async_page_flip_complete(flip_state); 845 + dma_fence_put(fence); 846 + } 847 + 848 + static int vc4_async_set_fence_cb(struct drm_device *dev, 849 + struct vc4_async_flip_state *flip_state) 850 + { 851 + struct drm_framebuffer *fb = flip_state->fb; 852 + struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0); 853 + struct vc4_dev *vc4 = to_vc4_dev(dev); 854 + struct dma_fence *fence; 855 + int ret; 856 + 857 + if (!vc4->is_vc5) { 858 + struct vc4_bo *bo = to_vc4_bo(&cma_bo->base); 859 + 860 + return vc4_queue_seqno_cb(dev, &flip_state->cb.seqno, bo->seqno, 861 + vc4_async_page_flip_seqno_complete); 862 + } 863 + 864 + ret = dma_resv_get_singleton(cma_bo->base.resv, DMA_RESV_USAGE_READ, &fence); 851 865 if (ret) 852 866 return ret; 853 867 854 - flip_state = kzalloc(sizeof(*flip_state), GFP_KERNEL); 855 - if (!flip_state) { 856 - vc4_bo_dec_usecnt(bo); 857 - return -ENOMEM; 868 + /* If there's no fence, complete the page flip immediately */ 869 + if (!fence) { 870 + vc4_async_page_flip_fence_complete(fence, &flip_state->cb.fence); 871 + return 0; 858 872 } 873 + 874 + 
/* If the fence has already been completed, complete the page flip */ 875 + if (dma_fence_add_callback(fence, &flip_state->cb.fence, 876 + vc4_async_page_flip_fence_complete)) 877 + vc4_async_page_flip_fence_complete(fence, &flip_state->cb.fence); 878 + 879 + return 0; 880 + } 881 + 882 + static int 883 + vc4_async_page_flip_common(struct drm_crtc *crtc, 884 + struct drm_framebuffer *fb, 885 + struct drm_pending_vblank_event *event, 886 + uint32_t flags) 887 + { 888 + struct drm_device *dev = crtc->dev; 889 + struct drm_plane *plane = crtc->primary; 890 + struct vc4_async_flip_state *flip_state; 891 + 892 + flip_state = kzalloc(sizeof(*flip_state), GFP_KERNEL); 893 + if (!flip_state) 894 + return -ENOMEM; 859 895 860 896 drm_framebuffer_get(fb); 861 897 flip_state->fb = fb; ··· 919 881 */ 920 882 drm_atomic_set_fb_for_plane(plane->state, fb); 921 883 922 - vc4_queue_seqno_cb(dev, &flip_state->cb, bo->seqno, 923 - vc4_async_page_flip_complete); 884 + vc4_async_set_fence_cb(dev, flip_state); 924 885 925 886 /* Driver takes ownership of state on successful async commit. */ 926 887 return 0; 888 + } 889 + 890 + /* Implements async (non-vblank-synced) page flips. 891 + * 892 + * The page flip ioctl needs to return immediately, so we grab the 893 + * modeset semaphore on the pipe, and queue the address update for 894 + * when V3D is done with the BO being flipped to. 
895 + */ 896 + static int vc4_async_page_flip(struct drm_crtc *crtc, 897 + struct drm_framebuffer *fb, 898 + struct drm_pending_vblank_event *event, 899 + uint32_t flags) 900 + { 901 + struct drm_device *dev = crtc->dev; 902 + struct vc4_dev *vc4 = to_vc4_dev(dev); 903 + struct drm_gem_cma_object *cma_bo = drm_fb_cma_get_gem_obj(fb, 0); 904 + struct vc4_bo *bo = to_vc4_bo(&cma_bo->base); 905 + int ret; 906 + 907 + if (WARN_ON_ONCE(vc4->is_vc5)) 908 + return -ENODEV; 909 + 910 + /* 911 + * Increment the BO usecnt here, so that we never end up with an 912 + * unbalanced number of vc4_bo_{dec,inc}_usecnt() calls when the 913 + * plane is later updated through the non-async path. 914 + * 915 + * FIXME: we should move to generic async-page-flip when 916 + * it's available, so that we can get rid of this 917 + * hand-made prepare_fb() logic. 918 + */ 919 + ret = vc4_bo_inc_usecnt(bo); 920 + if (ret) 921 + return ret; 922 + 923 + ret = vc4_async_page_flip_common(crtc, fb, event, flags); 924 + if (ret) { 925 + vc4_bo_dec_usecnt(bo); 926 + return ret; 927 + } 928 + 929 + return 0; 930 + } 931 + 932 + static int vc5_async_page_flip(struct drm_crtc *crtc, 933 + struct drm_framebuffer *fb, 934 + struct drm_pending_vblank_event *event, 935 + uint32_t flags) 936 + { 937 + return vc4_async_page_flip_common(crtc, fb, event, flags); 927 938 } 928 939 929 940 int vc4_page_flip(struct drm_crtc *crtc, ··· 981 894 uint32_t flags, 982 895 struct drm_modeset_acquire_ctx *ctx) 983 896 { 984 - if (flags & DRM_MODE_PAGE_FLIP_ASYNC) 985 - return vc4_async_page_flip(crtc, fb, event, flags); 986 - else 897 + if (flags & DRM_MODE_PAGE_FLIP_ASYNC) { 898 + struct drm_device *dev = crtc->dev; 899 + struct vc4_dev *vc4 = to_vc4_dev(dev); 900 + 901 + if (vc4->is_vc5) 902 + return vc5_async_page_flip(crtc, fb, event, flags); 903 + else 904 + return vc4_async_page_flip(crtc, fb, event, flags); 905 + } else { 987 906 return drm_atomic_helper_page_flip(crtc, fb, event, flags, ctx); 907 + } 988 908 } 989 
} 989
909 990 910 struct drm_crtc_state *vc4_crtc_duplicate_state(struct drm_crtc *crtc) ··· 1243 1149 crtc_funcs, NULL); 1244 1150 drm_crtc_helper_add(crtc, crtc_helper_funcs); 1245 1151 1246 - if (!vc4->hvs->hvs5) { 1152 + if (!vc4->is_vc5) { 1247 1153 drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r)); 1248 1154 1249 1155 drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size);
+81 -16
drivers/gpu/drm/vc4/vc4_drv.c
··· 63 63 return map; 64 64 } 65 65 66 + int vc4_dumb_fixup_args(struct drm_mode_create_dumb *args) 67 + { 68 + int min_pitch = DIV_ROUND_UP(args->width * args->bpp, 8); 69 + 70 + if (args->pitch < min_pitch) 71 + args->pitch = min_pitch; 72 + 73 + if (args->size < args->pitch * args->height) 74 + args->size = args->pitch * args->height; 75 + 76 + return 0; 77 + } 78 + 79 + static int vc5_dumb_create(struct drm_file *file_priv, 80 + struct drm_device *dev, 81 + struct drm_mode_create_dumb *args) 82 + { 83 + int ret; 84 + 85 + ret = vc4_dumb_fixup_args(args); 86 + if (ret) 87 + return ret; 88 + 89 + return drm_gem_cma_dumb_create_internal(file_priv, dev, args); 90 + } 91 + 66 92 static int vc4_get_param_ioctl(struct drm_device *dev, void *data, 67 93 struct drm_file *file_priv) 68 94 { ··· 98 72 99 73 if (args->pad != 0) 100 74 return -EINVAL; 75 + 76 + if (WARN_ON_ONCE(vc4->is_vc5)) 77 + return -ENODEV; 101 78 102 79 if (!vc4->v3d) 103 80 return -ENODEV; ··· 145 116 146 117 static int vc4_open(struct drm_device *dev, struct drm_file *file) 147 118 { 119 + struct vc4_dev *vc4 = to_vc4_dev(dev); 148 120 struct vc4_file *vc4file; 121 + 122 + if (WARN_ON_ONCE(vc4->is_vc5)) 123 + return -ENODEV; 149 124 150 125 vc4file = kzalloc(sizeof(*vc4file), GFP_KERNEL); 151 126 if (!vc4file) 152 127 return -ENOMEM; 128 + vc4file->dev = vc4; 153 129 154 130 vc4_perfmon_open_file(vc4file); 155 131 file->driver_priv = vc4file; ··· 165 131 { 166 132 struct vc4_dev *vc4 = to_vc4_dev(dev); 167 133 struct vc4_file *vc4file = file->driver_priv; 134 + 135 + if (WARN_ON_ONCE(vc4->is_vc5)) 136 + return; 168 137 169 138 if (vc4file->bin_bo_used) 170 139 vc4_v3d_bin_bo_put(vc4); ··· 197 160 DRM_IOCTL_DEF_DRV(VC4_PERFMON_GET_VALUES, vc4_perfmon_get_values_ioctl, DRM_RENDER_ALLOW), 198 161 }; 199 162 200 - static struct drm_driver vc4_drm_driver = { 163 + static const struct drm_driver vc4_drm_driver = { 201 164 .driver_features = (DRIVER_MODESET | 202 165 DRIVER_ATOMIC | 203 166 DRIVER_GEM | 
··· 212 175 213 176 .gem_create_object = vc4_create_object, 214 177 215 - DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_dumb_create), 178 + DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc4_bo_dumb_create), 216 179 217 180 .ioctls = vc4_drm_ioctls, 218 181 .num_ioctls = ARRAY_SIZE(vc4_drm_ioctls), 182 + .fops = &vc4_drm_fops, 183 + 184 + .name = DRIVER_NAME, 185 + .desc = DRIVER_DESC, 186 + .date = DRIVER_DATE, 187 + .major = DRIVER_MAJOR, 188 + .minor = DRIVER_MINOR, 189 + .patchlevel = DRIVER_PATCHLEVEL, 190 + }; 191 + 192 + static const struct drm_driver vc5_drm_driver = { 193 + .driver_features = (DRIVER_MODESET | 194 + DRIVER_ATOMIC | 195 + DRIVER_GEM), 196 + 197 + #if defined(CONFIG_DEBUG_FS) 198 + .debugfs_init = vc4_debugfs_init, 199 + #endif 200 + 201 + DRM_GEM_CMA_DRIVER_OPS_WITH_DUMB_CREATE(vc5_dumb_create), 202 + 219 203 .fops = &vc4_drm_fops, 220 204 221 205 .name = DRIVER_NAME, ··· 270 212 static int vc4_drm_bind(struct device *dev) 271 213 { 272 214 struct platform_device *pdev = to_platform_device(dev); 215 + const struct drm_driver *driver; 273 216 struct rpi_firmware *firmware = NULL; 274 217 struct drm_device *drm; 275 218 struct vc4_dev *vc4; 276 219 struct device_node *node; 277 220 struct drm_crtc *crtc; 221 + bool is_vc5; 278 222 int ret = 0; 279 223 280 224 dev->coherent_dma_mask = DMA_BIT_MASK(32); 281 225 282 - /* If VC4 V3D is missing, don't advertise render nodes. 
*/ 283 - node = of_find_matching_node_and_match(NULL, vc4_v3d_dt_match, NULL); 284 - if (!node || !of_device_is_available(node)) 285 - vc4_drm_driver.driver_features &= ~DRIVER_RENDER; 286 - of_node_put(node); 226 + is_vc5 = of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5"); 227 + if (is_vc5) 228 + driver = &vc5_drm_driver; 229 + else 230 + driver = &vc4_drm_driver; 287 231 288 - vc4 = devm_drm_dev_alloc(dev, &vc4_drm_driver, struct vc4_dev, base); 232 + vc4 = devm_drm_dev_alloc(dev, driver, struct vc4_dev, base); 289 233 if (IS_ERR(vc4)) 290 234 return PTR_ERR(vc4); 235 + vc4->is_vc5 = is_vc5; 291 236 292 237 drm = &vc4->base; 293 238 platform_set_drvdata(pdev, drm); 294 239 INIT_LIST_HEAD(&vc4->debugfs_list); 295 240 296 - mutex_init(&vc4->bin_bo_lock); 241 + if (!is_vc5) { 242 + mutex_init(&vc4->bin_bo_lock); 297 243 298 - ret = vc4_bo_cache_init(drm); 299 - if (ret) 300 - return ret; 244 + ret = vc4_bo_cache_init(drm); 245 + if (ret) 246 + return ret; 247 + } 301 248 302 249 ret = drmm_mode_config_init(drm); 303 250 if (ret) 304 251 return ret; 305 252 306 - ret = vc4_gem_init(drm); 307 - if (ret) 308 - return ret; 253 + if (!is_vc5) { 254 + ret = vc4_gem_init(drm); 255 + if (ret) 256 + return ret; 257 + } 309 258 310 259 node = of_find_compatible_node(NULL, NULL, "raspberrypi,bcm2835-firmware"); 311 260 if (node) { ··· 323 258 return -EPROBE_DEFER; 324 259 } 325 260 326 - ret = drm_aperture_remove_framebuffers(false, &vc4_drm_driver); 261 + ret = drm_aperture_remove_framebuffers(false, driver); 327 262 if (ret) 328 263 return ret; 329 264
+13 -6
drivers/gpu/drm/vc4/vc4_drv.h
··· 48 48 * done. This way, only events related to a specific job will be counted. 49 49 */ 50 50 struct vc4_perfmon { 51 + struct vc4_dev *dev; 52 + 51 53 /* Tracks the number of users of the perfmon, when this counter reaches 52 54 * zero the perfmon is destroyed. 53 55 */ ··· 75 73 76 74 struct vc4_dev { 77 75 struct drm_device base; 76 + 77 + bool is_vc5; 78 78 79 79 unsigned int irq; 80 80 ··· 320 316 }; 321 317 322 318 struct vc4_hvs { 319 + struct vc4_dev *vc4; 323 320 struct platform_device *pdev; 324 321 void __iomem *regs; 325 322 u32 __iomem *dlist; ··· 338 333 struct drm_mm_node mitchell_netravali_filter; 339 334 340 335 struct debugfs_regset32 regset; 341 - 342 - /* HVS version 5 flag, therefore requires updated dlist structures */ 343 - bool hvs5; 344 336 }; 345 337 346 338 struct vc4_plane { ··· 582 580 #define VC4_REG32(reg) { .name = #reg, .offset = reg } 583 581 584 582 struct vc4_exec_info { 583 + struct vc4_dev *dev; 584 + 585 585 /* Sequence number for this bin/render job. */ 586 586 uint64_t seqno; 587 587 ··· 705 701 * released when the DRM file is closed should be placed here. 
706 702 */ 707 703 struct vc4_file { 704 + struct vc4_dev *dev; 705 + 708 706 struct { 709 707 struct idr idr; 710 708 struct mutex lock; ··· 820 814 struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size); 821 815 struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t size, 822 816 bool from_cache, enum vc4_kernel_bo_type type); 823 - int vc4_dumb_create(struct drm_file *file_priv, 824 - struct drm_device *dev, 825 - struct drm_mode_create_dumb *args); 817 + int vc4_bo_dumb_create(struct drm_file *file_priv, 818 + struct drm_device *dev, 819 + struct drm_mode_create_dumb *args); 826 820 int vc4_create_bo_ioctl(struct drm_device *dev, void *data, 827 821 struct drm_file *file_priv); 828 822 int vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data, ··· 891 885 892 886 /* vc4_drv.c */ 893 887 void __iomem *vc4_ioremap_regs(struct platform_device *dev, int index); 888 + int vc4_dumb_fixup_args(struct drm_mode_create_dumb *args); 894 889 895 890 /* vc4_dpi.c */ 896 891 extern struct platform_driver vc4_dpi_driver;
+40
drivers/gpu/drm/vc4/vc4_gem.c
··· 76 76 u32 i; 77 77 int ret = 0; 78 78 79 + if (WARN_ON_ONCE(vc4->is_vc5)) 80 + return -ENODEV; 81 + 79 82 if (!vc4->v3d) { 80 83 DRM_DEBUG("VC4_GET_HANG_STATE with no VC4 V3D probed\n"); 81 84 return -ENODEV; ··· 389 386 unsigned long timeout_expire; 390 387 DEFINE_WAIT(wait); 391 388 389 + if (WARN_ON_ONCE(vc4->is_vc5)) 390 + return -ENODEV; 391 + 392 392 if (vc4->finished_seqno >= seqno) 393 393 return 0; 394 394 ··· 474 468 struct vc4_dev *vc4 = to_vc4_dev(dev); 475 469 struct vc4_exec_info *exec; 476 470 471 + if (WARN_ON_ONCE(vc4->is_vc5)) 472 + return; 473 + 477 474 again: 478 475 exec = vc4_first_bin_job(vc4); 479 476 if (!exec) ··· 522 513 if (!exec) 523 514 return; 524 515 516 + if (WARN_ON_ONCE(vc4->is_vc5)) 517 + return; 518 + 525 519 /* A previous RCL may have written to one of our textures, and 526 520 * our full cache flush at bin time may have occurred before 527 521 * that RCL completed. Flush the texture cache now, but not ··· 542 530 { 543 531 struct vc4_dev *vc4 = to_vc4_dev(dev); 544 532 bool was_empty = list_empty(&vc4->render_job_list); 533 + 534 + if (WARN_ON_ONCE(vc4->is_vc5)) 535 + return; 545 536 546 537 list_move_tail(&exec->head, &vc4->render_job_list); 547 538 if (was_empty) ··· 1012 997 unsigned long irqflags; 1013 998 struct vc4_seqno_cb *cb, *cb_temp; 1014 999 1000 + if (WARN_ON_ONCE(vc4->is_vc5)) 1001 + return; 1002 + 1015 1003 spin_lock_irqsave(&vc4->job_lock, irqflags); 1016 1004 while (!list_empty(&vc4->job_done_list)) { 1017 1005 struct vc4_exec_info *exec = ··· 1050 1032 { 1051 1033 struct vc4_dev *vc4 = to_vc4_dev(dev); 1052 1034 unsigned long irqflags; 1035 + 1036 + if (WARN_ON_ONCE(vc4->is_vc5)) 1037 + return -ENODEV; 1053 1038 1054 1039 cb->func = func; 1055 1040 INIT_WORK(&cb->work, vc4_seqno_cb_work); ··· 1104 1083 vc4_wait_seqno_ioctl(struct drm_device *dev, void *data, 1105 1084 struct drm_file *file_priv) 1106 1085 { 1086 + struct vc4_dev *vc4 = to_vc4_dev(dev); 1107 1087 struct drm_vc4_wait_seqno *args = data; 
1088 + 1089 + if (WARN_ON_ONCE(vc4->is_vc5)) 1090 + return -ENODEV; 1108 1091 1109 1092 return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno, 1110 1093 &args->timeout_ns); ··· 1118 1093 vc4_wait_bo_ioctl(struct drm_device *dev, void *data, 1119 1094 struct drm_file *file_priv) 1120 1095 { 1096 + struct vc4_dev *vc4 = to_vc4_dev(dev); 1121 1097 int ret; 1122 1098 struct drm_vc4_wait_bo *args = data; 1123 1099 struct drm_gem_object *gem_obj; 1124 1100 struct vc4_bo *bo; 1101 + 1102 + if (WARN_ON_ONCE(vc4->is_vc5)) 1103 + return -ENODEV; 1125 1104 1126 1105 if (args->pad != 0) 1127 1106 return -EINVAL; ··· 1173 1144 args->shader_rec_size, 1174 1145 args->bo_handle_count); 1175 1146 1147 + if (WARN_ON_ONCE(vc4->is_vc5)) 1148 + return -ENODEV; 1149 + 1176 1150 if (!vc4->v3d) { 1177 1151 DRM_DEBUG("VC4_SUBMIT_CL with no VC4 V3D probed\n"); 1178 1152 return -ENODEV; ··· 1199 1167 DRM_ERROR("malloc failure on exec struct\n"); 1200 1168 return -ENOMEM; 1201 1169 } 1170 + exec->dev = vc4; 1202 1171 1203 1172 ret = vc4_v3d_pm_get(vc4); 1204 1173 if (ret) { ··· 1309 1276 { 1310 1277 struct vc4_dev *vc4 = to_vc4_dev(dev); 1311 1278 1279 + if (WARN_ON_ONCE(vc4->is_vc5)) 1280 + return -ENODEV; 1281 + 1312 1282 vc4->dma_fence_context = dma_fence_context_alloc(1); 1313 1283 1314 1284 INIT_LIST_HEAD(&vc4->bin_job_list); ··· 1357 1321 int vc4_gem_madvise_ioctl(struct drm_device *dev, void *data, 1358 1322 struct drm_file *file_priv) 1359 1323 { 1324 + struct vc4_dev *vc4 = to_vc4_dev(dev); 1360 1325 struct drm_vc4_gem_madvise *args = data; 1361 1326 struct drm_gem_object *gem_obj; 1362 1327 struct vc4_bo *bo; 1363 1328 int ret; 1329 + 1330 + if (WARN_ON_ONCE(vc4->is_vc5)) 1331 + return -ENODEV; 1364 1332 1365 1333 switch (args->madv) { 1366 1334 case VC4_MADV_DONTNEED:
+1 -1
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 1481 1481 unsigned int bpc, 1482 1482 enum vc4_hdmi_output_format fmt) 1483 1483 { 1484 - unsigned long long clock = mode->clock * 1000; 1484 + unsigned long long clock = mode->clock * 1000ULL; 1485 1485 1486 1486 if (mode->flags & DRM_MODE_FLAG_DBLCLK) 1487 1487 clock = clock * 2;
+9 -9
drivers/gpu/drm/vc4/vc4_hvs.c
··· 220 220 221 221 int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output) 222 222 { 223 + struct vc4_dev *vc4 = hvs->vc4; 223 224 u32 reg; 224 225 int ret; 225 226 226 - if (!hvs->hvs5) 227 + if (!vc4->is_vc5) 227 228 return output; 228 229 229 230 switch (output) { ··· 274 273 static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc, 275 274 struct drm_display_mode *mode, bool oneshot) 276 275 { 276 + struct vc4_dev *vc4 = hvs->vc4; 277 277 struct vc4_crtc *vc4_crtc = to_vc4_crtc(crtc); 278 278 struct vc4_crtc_state *vc4_crtc_state = to_vc4_crtc_state(crtc->state); 279 279 unsigned int chan = vc4_crtc_state->assigned_channel; ··· 293 291 */ 294 292 dispctrl = SCALER_DISPCTRLX_ENABLE; 295 293 296 - if (!hvs->hvs5) 294 + if (!vc4->is_vc5) 297 295 dispctrl |= VC4_SET_FIELD(mode->hdisplay, 298 296 SCALER_DISPCTRLX_WIDTH) | 299 297 VC4_SET_FIELD(mode->vdisplay, ··· 314 312 315 313 HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx | 316 314 SCALER_DISPBKGND_AUTOHS | 317 - ((!hvs->hvs5) ? SCALER_DISPBKGND_GAMMA : 0) | 315 + ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) | 318 316 (interlace ? 
SCALER_DISPBKGND_INTERLACE : 0)); 319 317 320 318 /* Reload the LUT, since the SRAMs would have been disabled if ··· 619 617 if (!hvs) 620 618 return -ENOMEM; 621 619 620 + hvs->vc4 = vc4; 622 621 hvs->pdev = pdev; 623 - 624 - if (of_device_is_compatible(pdev->dev.of_node, "brcm,bcm2711-hvs")) 625 - hvs->hvs5 = true; 626 622 627 623 hvs->regs = vc4_ioremap_regs(pdev, 0); 628 624 if (IS_ERR(hvs->regs)) ··· 630 630 hvs->regset.regs = hvs_regs; 631 631 hvs->regset.nregs = ARRAY_SIZE(hvs_regs); 632 632 633 - if (hvs->hvs5) { 633 + if (vc4->is_vc5) { 634 634 hvs->core_clk = devm_clk_get(&pdev->dev, NULL); 635 635 if (IS_ERR(hvs->core_clk)) { 636 636 dev_err(&pdev->dev, "Couldn't get core clock\n"); ··· 644 644 } 645 645 } 646 646 647 - if (!hvs->hvs5) 647 + if (!vc4->is_vc5) 648 648 hvs->dlist = hvs->regs + SCALER_DLIST_START; 649 649 else 650 650 hvs->dlist = hvs->regs + SCALER5_DLIST_START; ··· 665 665 * between planes when they don't overlap on the screen, but 666 666 * for now we just allocate globally. 667 667 */ 668 - if (!hvs->hvs5) 668 + if (!vc4->is_vc5) 669 669 /* 48k words of 2x12-bit pixels */ 670 670 drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024); 671 671 else
+16
drivers/gpu/drm/vc4/vc4_irq.c
··· 265 265 { 266 266 struct vc4_dev *vc4 = to_vc4_dev(dev); 267 267 268 + if (WARN_ON_ONCE(vc4->is_vc5)) 269 + return; 270 + 268 271 if (!vc4->v3d) 269 272 return; 270 273 ··· 281 278 vc4_irq_disable(struct drm_device *dev) 282 279 { 283 280 struct vc4_dev *vc4 = to_vc4_dev(dev); 281 + 282 + if (WARN_ON_ONCE(vc4->is_vc5)) 283 + return; 284 284 285 285 if (!vc4->v3d) 286 286 return; ··· 302 296 303 297 int vc4_irq_install(struct drm_device *dev, int irq) 304 298 { 299 + struct vc4_dev *vc4 = to_vc4_dev(dev); 305 300 int ret; 301 + 302 + if (WARN_ON_ONCE(vc4->is_vc5)) 303 + return -ENODEV; 306 304 307 305 if (irq == IRQ_NOTCONNECTED) 308 306 return -ENOTCONN; ··· 326 316 { 327 317 struct vc4_dev *vc4 = to_vc4_dev(dev); 328 318 319 + if (WARN_ON_ONCE(vc4->is_vc5)) 320 + return; 321 + 329 322 vc4_irq_disable(dev); 330 323 free_irq(vc4->irq, dev); 331 324 } ··· 338 325 { 339 326 struct vc4_dev *vc4 = to_vc4_dev(dev); 340 327 unsigned long irqflags; 328 + 329 + if (WARN_ON_ONCE(vc4->is_vc5)) 330 + return; 341 331 342 332 /* Acknowledge any stale IRQs. */ 343 333 V3D_WRITE(V3D_INTCTL, V3D_DRIVER_IRQS);
+16 -8
drivers/gpu/drm/vc4/vc4_kms.c
··· 393 393 old_hvs_state->fifo_state[channel].pending_commit = NULL; 394 394 } 395 395 396 - if (vc4->hvs->hvs5) { 396 + if (vc4->is_vc5) { 397 397 unsigned long state_rate = max(old_hvs_state->core_clock_rate, 398 398 new_hvs_state->core_clock_rate); 399 399 unsigned long core_rate = max_t(unsigned long, ··· 412 412 413 413 vc4_ctm_commit(vc4, state); 414 414 415 - if (vc4->hvs->hvs5) 415 + if (vc4->is_vc5) 416 416 vc5_hvs_pv_muxing_commit(vc4, state); 417 417 else 418 418 vc4_hvs_pv_muxing_commit(vc4, state); ··· 430 430 431 431 drm_atomic_helper_cleanup_planes(dev, state); 432 432 433 - if (vc4->hvs->hvs5) { 433 + if (vc4->is_vc5) { 434 434 drm_dbg(dev, "Running the core clock at %lu Hz\n", 435 435 new_hvs_state->core_clock_rate); 436 436 ··· 479 479 struct drm_file *file_priv, 480 480 const struct drm_mode_fb_cmd2 *mode_cmd) 481 481 { 482 + struct vc4_dev *vc4 = to_vc4_dev(dev); 482 483 struct drm_mode_fb_cmd2 mode_cmd_local; 484 + 485 + if (WARN_ON_ONCE(vc4->is_vc5)) 486 + return ERR_PTR(-ENODEV); 483 487 484 488 /* If the user didn't specify a modifier, use the 485 489 * vc4_set_tiling_ioctl() state for the BO. ··· 1001 997 .fb_create = vc4_fb_create, 1002 998 }; 1003 999 1000 + static const struct drm_mode_config_funcs vc5_mode_funcs = { 1001 + .atomic_check = vc4_atomic_check, 1002 + .atomic_commit = drm_atomic_helper_commit, 1003 + .fb_create = drm_gem_fb_create, 1004 + }; 1005 + 1004 1006 int vc4_kms_load(struct drm_device *dev) 1005 1007 { 1006 1008 struct vc4_dev *vc4 = to_vc4_dev(dev); 1007 - bool is_vc5 = of_device_is_compatible(dev->dev->of_node, 1008 - "brcm,bcm2711-vc5"); 1009 1009 int ret; 1010 1010 1011 1011 /* ··· 1017 1009 * the BCM2711, but the load tracker computations are used for 1018 1010 * the core clock rate calculation. 1019 1011 */ 1020 - if (!is_vc5) { 1012 + if (!vc4->is_vc5) { 1021 1013 /* Start with the load tracker enabled. Can be 1022 1014 * disabled through the debugfs load_tracker file. 
1023 1015 */ ··· 1033 1025 return ret; 1034 1026 } 1035 1027 1036 - if (is_vc5) { 1028 + if (vc4->is_vc5) { 1037 1029 dev->mode_config.max_width = 7680; 1038 1030 dev->mode_config.max_height = 7680; 1039 1031 } else { ··· 1041 1033 dev->mode_config.max_height = 2048; 1042 1034 } 1043 1035 1044 - dev->mode_config.funcs = &vc4_mode_funcs; 1036 + dev->mode_config.funcs = vc4->is_vc5 ? &vc5_mode_funcs : &vc4_mode_funcs; 1045 1037 dev->mode_config.helper_private = &vc4_mode_config_helpers; 1046 1038 dev->mode_config.preferred_depth = 24; 1047 1039 dev->mode_config.async_page_flip = true;
+46 -1
drivers/gpu/drm/vc4/vc4_perfmon.c
··· 17 17 18 18 void vc4_perfmon_get(struct vc4_perfmon *perfmon) 19 19 { 20 + struct vc4_dev *vc4 = perfmon->dev; 21 + 22 + if (WARN_ON_ONCE(vc4->is_vc5)) 23 + return; 24 + 20 25 if (perfmon) 21 26 refcount_inc(&perfmon->refcnt); 22 27 } 23 28 24 29 void vc4_perfmon_put(struct vc4_perfmon *perfmon) 25 30 { 26 - if (perfmon && refcount_dec_and_test(&perfmon->refcnt)) 31 + struct vc4_dev *vc4; 32 + 33 + if (!perfmon) 34 + return; 35 + 36 + vc4 = perfmon->dev; 37 + if (WARN_ON_ONCE(vc4->is_vc5)) 38 + return; 39 + 40 + if (refcount_dec_and_test(&perfmon->refcnt)) 27 41 kfree(perfmon); 28 42 } 29 43 ··· 45 31 { 46 32 unsigned int i; 47 33 u32 mask; 34 + 35 + if (WARN_ON_ONCE(vc4->is_vc5)) 36 + return; 48 37 49 38 if (WARN_ON_ONCE(!perfmon || vc4->active_perfmon)) 50 39 return; ··· 66 49 { 67 50 unsigned int i; 68 51 52 + if (WARN_ON_ONCE(vc4->is_vc5)) 53 + return; 54 + 69 55 if (WARN_ON_ONCE(!vc4->active_perfmon || 70 56 perfmon != vc4->active_perfmon)) 71 57 return; ··· 84 64 85 65 struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id) 86 66 { 67 + struct vc4_dev *vc4 = vc4file->dev; 87 68 struct vc4_perfmon *perfmon; 69 + 70 + if (WARN_ON_ONCE(vc4->is_vc5)) 71 + return NULL; 88 72 89 73 mutex_lock(&vc4file->perfmon.lock); 90 74 perfmon = idr_find(&vc4file->perfmon.idr, id); ··· 100 76 101 77 void vc4_perfmon_open_file(struct vc4_file *vc4file) 102 78 { 79 + struct vc4_dev *vc4 = vc4file->dev; 80 + 81 + if (WARN_ON_ONCE(vc4->is_vc5)) 82 + return; 83 + 103 84 mutex_init(&vc4file->perfmon.lock); 104 85 idr_init_base(&vc4file->perfmon.idr, VC4_PERFMONID_MIN); 86 + vc4file->dev = vc4; 105 87 } 106 88 107 89 static int vc4_perfmon_idr_del(int id, void *elem, void *data) ··· 121 91 122 92 void vc4_perfmon_close_file(struct vc4_file *vc4file) 123 93 { 94 + struct vc4_dev *vc4 = vc4file->dev; 95 + 96 + if (WARN_ON_ONCE(vc4->is_vc5)) 97 + return; 98 + 124 99 mutex_lock(&vc4file->perfmon.lock); 125 100 idr_for_each(&vc4file->perfmon.idr, vc4_perfmon_idr_del, 
NULL); 126 101 idr_destroy(&vc4file->perfmon.idr); ··· 141 106 struct vc4_perfmon *perfmon; 142 107 unsigned int i; 143 108 int ret; 109 + 110 + if (WARN_ON_ONCE(vc4->is_vc5)) 111 + return -ENODEV; 144 112 145 113 if (!vc4->v3d) { 146 114 DRM_DEBUG("Creating perfmon no VC4 V3D probed\n"); ··· 165 127 GFP_KERNEL); 166 128 if (!perfmon) 167 129 return -ENOMEM; 130 + perfmon->dev = vc4; 168 131 169 132 for (i = 0; i < req->ncounters; i++) 170 133 perfmon->events[i] = req->events[i]; ··· 196 157 struct drm_vc4_perfmon_destroy *req = data; 197 158 struct vc4_perfmon *perfmon; 198 159 160 + if (WARN_ON_ONCE(vc4->is_vc5)) 161 + return -ENODEV; 162 + 199 163 if (!vc4->v3d) { 200 164 DRM_DEBUG("Destroying perfmon no VC4 V3D probed\n"); 201 165 return -ENODEV; ··· 223 181 struct drm_vc4_perfmon_get_values *req = data; 224 182 struct vc4_perfmon *perfmon; 225 183 int ret; 184 + 185 + if (WARN_ON_ONCE(vc4->is_vc5)) 186 + return -ENODEV; 226 187 227 188 if (!vc4->v3d) { 228 189 DRM_DEBUG("Getting perfmon no VC4 V3D probed\n");
+21 -8
drivers/gpu/drm/vc4/vc4_plane.c
··· 489 489 } 490 490 491 491 /* Align it to 64 or 128 (hvs5) bytes */ 492 - lbm = roundup(lbm, vc4->hvs->hvs5 ? 128 : 64); 492 + lbm = roundup(lbm, vc4->is_vc5 ? 128 : 64); 493 493 494 494 /* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */ 495 - lbm /= vc4->hvs->hvs5 ? 4 : 2; 495 + lbm /= vc4->is_vc5 ? 4 : 2; 496 496 497 497 return lbm; 498 498 } ··· 608 608 ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm, 609 609 &vc4_state->lbm, 610 610 lbm_size, 611 - vc4->hvs->hvs5 ? 64 : 32, 611 + vc4->is_vc5 ? 64 : 32, 612 612 0, 0); 613 613 spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags); 614 614 ··· 917 917 mix_plane_alpha = state->alpha != DRM_BLEND_ALPHA_OPAQUE && 918 918 fb->format->has_alpha; 919 919 920 - if (!vc4->hvs->hvs5) { 920 + if (!vc4->is_vc5) { 921 921 /* Control word */ 922 922 vc4_dlist_write(vc4_state, 923 923 SCALER_CTL0_VALID | ··· 1321 1321 1322 1322 old_vc4_state = to_vc4_plane_state(plane->state); 1323 1323 new_vc4_state = to_vc4_plane_state(new_plane_state); 1324 + 1325 + if (!new_vc4_state->hw_dlist) 1326 + return -EINVAL; 1327 + 1324 1328 if (old_vc4_state->dlist_count != new_vc4_state->dlist_count || 1325 1329 old_vc4_state->pos0_offset != new_vc4_state->pos0_offset || 1326 1330 old_vc4_state->pos2_offset != new_vc4_state->pos2_offset || ··· 1385 1381 .atomic_update = vc4_plane_atomic_update, 1386 1382 .prepare_fb = vc4_prepare_fb, 1387 1383 .cleanup_fb = vc4_cleanup_fb, 1384 + .atomic_async_check = vc4_plane_atomic_async_check, 1385 + .atomic_async_update = vc4_plane_atomic_async_update, 1386 + }; 1387 + 1388 + static const struct drm_plane_helper_funcs vc5_plane_helper_funcs = { 1389 + .atomic_check = vc4_plane_atomic_check, 1390 + .atomic_update = vc4_plane_atomic_update, 1388 1391 .atomic_async_check = vc4_plane_atomic_async_check, 1389 1392 .atomic_async_update = vc4_plane_atomic_async_update, 1390 1393 }; ··· 1464 1453 struct drm_plane *vc4_plane_init(struct drm_device *dev, 1465 1454 enum drm_plane_type type) 1466 
1455 { 1456 + struct vc4_dev *vc4 = to_vc4_dev(dev); 1467 1457 struct drm_plane *plane = NULL; 1468 1458 struct vc4_plane *vc4_plane; 1469 1459 u32 formats[ARRAY_SIZE(hvs_formats)]; 1470 1460 int num_formats = 0; 1471 1461 int ret = 0; 1472 1462 unsigned i; 1473 - bool hvs5 = of_device_is_compatible(dev->dev->of_node, 1474 - "brcm,bcm2711-vc5"); 1475 1463 static const uint64_t modifiers[] = { 1476 1464 DRM_FORMAT_MOD_BROADCOM_VC4_T_TILED, 1477 1465 DRM_FORMAT_MOD_BROADCOM_SAND128, ··· 1486 1476 return ERR_PTR(-ENOMEM); 1487 1477 1488 1478 for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) { 1489 - if (!hvs_formats[i].hvs5_only || hvs5) { 1479 + if (!hvs_formats[i].hvs5_only || vc4->is_vc5) { 1490 1480 formats[num_formats] = hvs_formats[i].drm; 1491 1481 num_formats++; 1492 1482 } ··· 1500 1490 if (ret) 1501 1491 return ERR_PTR(ret); 1502 1492 1503 - drm_plane_helper_add(plane, &vc4_plane_helper_funcs); 1493 + if (vc4->is_vc5) 1494 + drm_plane_helper_add(plane, &vc5_plane_helper_funcs); 1495 + else 1496 + drm_plane_helper_add(plane, &vc4_plane_helper_funcs); 1504 1497 1505 1498 drm_plane_create_alpha_property(plane); 1506 1499 drm_plane_create_rotation_property(plane, DRM_MODE_ROTATE_0,
+4
drivers/gpu/drm/vc4/vc4_render_cl.c
··· 593 593 594 594 int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec) 595 595 { 596 + struct vc4_dev *vc4 = to_vc4_dev(dev); 596 597 struct vc4_rcl_setup setup = {0}; 597 598 struct drm_vc4_submit_cl *args = exec->args; 598 599 bool has_bin = args->bin_cl_size != 0; 599 600 int ret; 601 + 602 + if (WARN_ON_ONCE(vc4->is_vc5)) 603 + return -ENODEV; 600 604 601 605 if (args->min_x_tile > args->max_x_tile || 602 606 args->min_y_tile > args->max_y_tile) {
+15
drivers/gpu/drm/vc4/vc4_v3d.c
··· 127 127 int 128 128 vc4_v3d_pm_get(struct vc4_dev *vc4) 129 129 { 130 + if (WARN_ON_ONCE(vc4->is_vc5)) 131 + return -ENODEV; 132 + 130 133 mutex_lock(&vc4->power_lock); 131 134 if (vc4->power_refcount++ == 0) { 132 135 int ret = pm_runtime_get_sync(&vc4->v3d->pdev->dev); ··· 148 145 void 149 146 vc4_v3d_pm_put(struct vc4_dev *vc4) 150 147 { 148 + if (WARN_ON_ONCE(vc4->is_vc5)) 149 + return; 150 + 151 151 mutex_lock(&vc4->power_lock); 152 152 if (--vc4->power_refcount == 0) { 153 153 pm_runtime_mark_last_busy(&vc4->v3d->pdev->dev); ··· 177 171 int slot; 178 172 uint64_t seqno = 0; 179 173 struct vc4_exec_info *exec; 174 + 175 + if (WARN_ON_ONCE(vc4->is_vc5)) 176 + return -ENODEV; 180 177 181 178 try_again: 182 179 spin_lock_irqsave(&vc4->job_lock, irqflags); ··· 325 316 { 326 317 int ret = 0; 327 318 319 + if (WARN_ON_ONCE(vc4->is_vc5)) 320 + return -ENODEV; 321 + 328 322 mutex_lock(&vc4->bin_bo_lock); 329 323 330 324 if (used && *used) ··· 360 348 361 349 void vc4_v3d_bin_bo_put(struct vc4_dev *vc4) 362 350 { 351 + if (WARN_ON_ONCE(vc4->is_vc5)) 352 + return; 353 + 363 354 mutex_lock(&vc4->bin_bo_lock); 364 355 kref_put(&vc4->bin_bo_kref, bin_bo_release); 365 356 mutex_unlock(&vc4->bin_bo_lock);
+16
drivers/gpu/drm/vc4/vc4_validate.c
··· 105 105 struct drm_gem_cma_object * 106 106 vc4_use_bo(struct vc4_exec_info *exec, uint32_t hindex) 107 107 { 108 + struct vc4_dev *vc4 = exec->dev; 108 109 struct drm_gem_cma_object *obj; 109 110 struct vc4_bo *bo; 111 + 112 + if (WARN_ON_ONCE(vc4->is_vc5)) 113 + return NULL; 110 114 111 115 if (hindex >= exec->bo_count) { 112 116 DRM_DEBUG("BO index %d greater than BO count %d\n", ··· 164 160 uint32_t offset, uint8_t tiling_format, 165 161 uint32_t width, uint32_t height, uint8_t cpp) 166 162 { 163 + struct vc4_dev *vc4 = exec->dev; 167 164 uint32_t aligned_width, aligned_height, stride, size; 168 165 uint32_t utile_w = utile_width(cpp); 169 166 uint32_t utile_h = utile_height(cpp); 167 + 168 + if (WARN_ON_ONCE(vc4->is_vc5)) 169 + return false; 170 170 171 171 /* The shaded vertex format stores signed 12.4 fixed point 172 172 * (-2048,2047) offsets from the viewport center, so we should ··· 490 482 void *unvalidated, 491 483 struct vc4_exec_info *exec) 492 484 { 485 + struct vc4_dev *vc4 = to_vc4_dev(dev); 493 486 uint32_t len = exec->args->bin_cl_size; 494 487 uint32_t dst_offset = 0; 495 488 uint32_t src_offset = 0; 489 + 490 + if (WARN_ON_ONCE(vc4->is_vc5)) 491 + return -ENODEV; 496 492 497 493 while (src_offset < len) { 498 494 void *dst_pkt = validated + dst_offset; ··· 938 926 vc4_validate_shader_recs(struct drm_device *dev, 939 927 struct vc4_exec_info *exec) 940 928 { 929 + struct vc4_dev *vc4 = to_vc4_dev(dev); 941 930 uint32_t i; 942 931 int ret = 0; 932 + 933 + if (WARN_ON_ONCE(vc4->is_vc5)) 934 + return -ENODEV; 943 935 944 936 for (i = 0; i < exec->shader_state_count; i++) { 945 937 ret = validate_gl_shader_rec(dev, exec, &exec->shader_state[i]);
+4
drivers/gpu/drm/vc4/vc4_validate_shaders.c
··· 778 778 struct vc4_validated_shader_info * 779 779 vc4_validate_shader(struct drm_gem_cma_object *shader_obj) 780 780 { 781 + struct vc4_dev *vc4 = to_vc4_dev(shader_obj->base.dev); 781 782 bool found_shader_end = false; 782 783 int shader_end_ip = 0; 783 784 uint32_t last_thread_switch_ip = -3; 784 785 uint32_t ip; 785 786 struct vc4_validated_shader_info *validated_shader = NULL; 786 787 struct vc4_shader_validation_state validation_state; 788 + 789 + if (WARN_ON_ONCE(vc4->is_vc5)) 790 + return NULL; 787 791 788 792 memset(&validation_state, 0, sizeof(validation_state)); 789 793 validation_state.shader = shader_obj->vaddr;
+1 -1
drivers/gpu/drm/xen/xen_drm_front_gem.c
··· 71 71 * the whole buffer. 72 72 */ 73 73 vma->vm_flags &= ~VM_PFNMAP; 74 - vma->vm_flags |= VM_MIXEDMAP; 74 + vma->vm_flags |= VM_MIXEDMAP | VM_DONTEXPAND; 75 75 vma->vm_pgoff = 0; 76 76 77 77 /*
+2 -1
drivers/iio/accel/bma180.c
··· 1006 1006 1007 1007 data->trig->ops = &bma180_trigger_ops; 1008 1008 iio_trigger_set_drvdata(data->trig, indio_dev); 1009 - indio_dev->trig = iio_trigger_get(data->trig); 1010 1009 1011 1010 ret = iio_trigger_register(data->trig); 1012 1011 if (ret) 1013 1012 goto err_trigger_free; 1013 + 1014 + indio_dev->trig = iio_trigger_get(data->trig); 1014 1015 } 1015 1016 1016 1017 ret = iio_triggered_buffer_setup(indio_dev, NULL,
+2 -2
drivers/iio/accel/kxcjk-1013.c
··· 1554 1554 1555 1555 data->dready_trig->ops = &kxcjk1013_trigger_ops; 1556 1556 iio_trigger_set_drvdata(data->dready_trig, indio_dev); 1557 - indio_dev->trig = data->dready_trig; 1558 - iio_trigger_get(indio_dev->trig); 1559 1557 ret = iio_trigger_register(data->dready_trig); 1560 1558 if (ret) 1561 1559 goto err_poweroff; 1560 + 1561 + indio_dev->trig = iio_trigger_get(data->dready_trig); 1562 1562 1563 1563 data->motion_trig->ops = &kxcjk1013_trigger_ops; 1564 1564 iio_trigger_set_drvdata(data->motion_trig, indio_dev);
+14 -8
drivers/iio/accel/mma8452.c
··· 1511 1511 int i;
 1512 1512 int ret;
 1513 1513 
 1514 - ret = i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2,
 1514 + /*
 1515 + * On fxls8471, the chip resets immediately once the reset bit is set
 1516 + * and does not ACK the transfer, so do not check the return value here.
 1517 + * The code below reads back the reset register to check whether the
 1518 + * reset actually took effect.
 1519 + */
 1520 + i2c_smbus_write_byte_data(client, MMA8452_CTRL_REG2,
 1515 1521 MMA8452_CTRL_REG2_RST);
 1516 - if (ret < 0)
 1517 - return ret;
 1518 1522 
 1519 1523 for (i = 0; i < 10; i++) {
 1520 1524 usleep_range(100, 200);
··· 1561 1557 mutex_init(&data->lock);
 1562 1558 
 1563 1559 data->chip_info = device_get_match_data(&client->dev);
 1564 - if (!data->chip_info && id) {
 1565 - data->chip_info = &mma_chip_info_table[id->driver_data];
 1566 - } else {
 1567 - dev_err(&client->dev, "unknown device model\n");
 1568 - return -ENODEV;
 1560 + if (!data->chip_info) {
 1561 + if (id) {
 1562 + data->chip_info = &mma_chip_info_table[id->driver_data];
 1563 + } else {
 1564 + dev_err(&client->dev, "unknown device model\n");
 1565 + return -ENODEV;
 1566 + }
 1569 1567 }
 1570 1568 
 1571 1569 ret = iio_read_mount_matrix(&client->dev, &data->orientation);
+2 -2
drivers/iio/accel/mxc4005.c
··· 456 456 457 457 data->dready_trig->ops = &mxc4005_trigger_ops; 458 458 iio_trigger_set_drvdata(data->dready_trig, indio_dev); 459 - indio_dev->trig = data->dready_trig; 460 - iio_trigger_get(indio_dev->trig); 461 459 ret = devm_iio_trigger_register(&client->dev, 462 460 data->dready_trig); 463 461 if (ret) { ··· 463 465 "failed to register trigger\n"); 464 466 return ret; 465 467 } 468 + 469 + indio_dev->trig = iio_trigger_get(data->dready_trig); 466 470 } 467 471 468 472 return devm_iio_device_register(&client->dev, indio_dev);
+3
drivers/iio/adc/adi-axi-adc.c
··· 322 322 323 323 if (!try_module_get(cl->dev->driver->owner)) { 324 324 mutex_unlock(&registered_clients_lock); 325 + of_node_put(cln); 325 326 return ERR_PTR(-ENODEV); 326 327 } 327 328 328 329 get_device(cl->dev); 329 330 cl->info = info; 330 331 mutex_unlock(&registered_clients_lock); 332 + of_node_put(cln); 331 333 return cl; 332 334 } 333 335 334 336 mutex_unlock(&registered_clients_lock); 337 + of_node_put(cln); 335 338 336 339 return ERR_PTR(-EPROBE_DEFER); 337 340 }
+1
drivers/iio/adc/aspeed_adc.c
··· 186 186 return -EOPNOTSUPP; 187 187 } 188 188 scu = syscon_node_to_regmap(syscon); 189 + of_node_put(syscon); 189 190 if (IS_ERR(scu)) { 190 191 dev_warn(data->dev, "Failed to get syscon regmap\n"); 191 192 return -EOPNOTSUPP;
+8
drivers/iio/adc/axp288_adc.c
··· 196 196 }, 197 197 .driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA, 198 198 }, 199 + { 200 + /* Nuvision Solo 10 Draw */ 201 + .matches = { 202 + DMI_MATCH(DMI_SYS_VENDOR, "TMAX"), 203 + DMI_MATCH(DMI_PRODUCT_NAME, "TM101W610L"), 204 + }, 205 + .driver_data = (void *)(uintptr_t)AXP288_ADC_TS_BIAS_80UA, 206 + }, 199 207 {} 200 208 }; 201 209
+6 -2
drivers/iio/adc/rzg2l_adc.c
··· 334 334 i = 0; 335 335 device_for_each_child_node(&pdev->dev, fwnode) { 336 336 ret = fwnode_property_read_u32(fwnode, "reg", &channel); 337 - if (ret) 337 + if (ret) { 338 + fwnode_handle_put(fwnode); 338 339 return ret; 340 + } 339 341 340 - if (channel >= RZG2L_ADC_MAX_CHANNELS) 342 + if (channel >= RZG2L_ADC_MAX_CHANNELS) { 343 + fwnode_handle_put(fwnode); 341 344 return -EINVAL; 345 + } 342 346 343 347 chan_array[i].type = IIO_VOLTAGE; 344 348 chan_array[i].indexed = 1;
+7 -2
drivers/iio/adc/stm32-adc-core.c
··· 64 64 * @max_clk_rate_hz: maximum analog clock rate (Hz, from datasheet) 65 65 * @has_syscfg: SYSCFG capability flags 66 66 * @num_irqs: number of interrupt lines 67 + * @num_adcs: maximum number of ADC instances in the common registers 67 68 */ 68 69 struct stm32_adc_priv_cfg { 69 70 const struct stm32_adc_common_regs *regs; ··· 72 71 u32 max_clk_rate_hz; 73 72 unsigned int has_syscfg; 74 73 unsigned int num_irqs; 74 + unsigned int num_adcs; 75 75 }; 76 76 77 77 /** ··· 354 352 * before invoking the interrupt handler (e.g. call ISR only for 355 353 * IRQ-enabled ADCs). 356 354 */ 357 - for (i = 0; i < priv->cfg->num_irqs; i++) { 355 + for (i = 0; i < priv->cfg->num_adcs; i++) { 358 356 if ((status & priv->cfg->regs->eoc_msk[i] && 359 357 stm32_adc_eoc_enabled(priv, i)) || 360 358 (status & priv->cfg->regs->ovr_msk[i])) ··· 794 792 .clk_sel = stm32f4_adc_clk_sel, 795 793 .max_clk_rate_hz = 36000000, 796 794 .num_irqs = 1, 795 + .num_adcs = 3, 797 796 }; 798 797 799 798 static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = { ··· 803 800 .max_clk_rate_hz = 36000000, 804 801 .has_syscfg = HAS_VBOOSTER, 805 802 .num_irqs = 1, 803 + .num_adcs = 2, 806 804 }; 807 805 808 806 static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = { 809 807 .regs = &stm32h7_adc_common_regs, 810 808 .clk_sel = stm32h7_adc_clk_sel, 811 - .max_clk_rate_hz = 40000000, 809 + .max_clk_rate_hz = 36000000, 812 810 .has_syscfg = HAS_VBOOSTER | HAS_ANASWVDD, 813 811 .num_irqs = 2, 812 + .num_adcs = 2, 814 813 }; 815 814 816 815 static const struct of_device_id stm32_adc_of_match[] = {
+17 -20
drivers/iio/adc/stm32-adc.c
··· 1365 1365 else 1366 1366 ret = -EINVAL; 1367 1367 1368 - if (mask == IIO_CHAN_INFO_PROCESSED && adc->vrefint.vrefint_cal) 1368 + if (mask == IIO_CHAN_INFO_PROCESSED) 1369 1369 *val = STM32_ADC_VREFINT_VOLTAGE * adc->vrefint.vrefint_cal / *val; 1370 1370 1371 1371 iio_device_release_direct_mode(indio_dev); ··· 1407 1407 struct stm32_adc *adc = iio_priv(indio_dev); 1408 1408 const struct stm32_adc_regspec *regs = adc->cfg->regs; 1409 1409 u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg); 1410 - u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg); 1411 1410 1412 1411 /* Check ovr status right now, as ovr mask should be already disabled */ 1413 1412 if (status & regs->isr_ovr.mask) { ··· 1421 1422 return IRQ_HANDLED; 1422 1423 } 1423 1424 1424 - if (!(status & mask)) 1425 - dev_err_ratelimited(&indio_dev->dev, 1426 - "Unexpected IRQ: IER=0x%08x, ISR=0x%08x\n", 1427 - mask, status); 1428 - 1429 1425 return IRQ_NONE; 1430 1426 } 1431 1427 ··· 1430 1436 struct stm32_adc *adc = iio_priv(indio_dev); 1431 1437 const struct stm32_adc_regspec *regs = adc->cfg->regs; 1432 1438 u32 status = stm32_adc_readl(adc, regs->isr_eoc.reg); 1433 - u32 mask = stm32_adc_readl(adc, regs->ier_eoc.reg); 1434 - 1435 - if (!(status & mask)) 1436 - return IRQ_WAKE_THREAD; 1437 1439 1438 1440 if (status & regs->isr_ovr.mask) { 1439 1441 /* ··· 1969 1979 1970 1980 for (i = 0; i < STM32_ADC_INT_CH_NB; i++) { 1971 1981 if (!strncmp(stm32_adc_ic[i].name, ch_name, STM32_ADC_CH_SZ)) { 1972 - adc->int_ch[i] = chan; 1973 - 1974 - if (stm32_adc_ic[i].idx != STM32_ADC_INT_CH_VREFINT) 1975 - continue; 1982 + if (stm32_adc_ic[i].idx != STM32_ADC_INT_CH_VREFINT) { 1983 + adc->int_ch[i] = chan; 1984 + break; 1985 + } 1976 1986 1977 1987 /* Get calibration data for vrefint channel */ 1978 1988 ret = nvmem_cell_read_u16(&indio_dev->dev, "vrefint", &vrefint); ··· 1980 1990 return dev_err_probe(indio_dev->dev.parent, ret, 1981 1991 "nvmem access error\n"); 1982 1992 } 1983 - if (ret == -ENOENT) 1984 - 
dev_dbg(&indio_dev->dev, "vrefint calibration not found\n"); 1985 - else 1986 - adc->vrefint.vrefint_cal = vrefint; 1993 + if (ret == -ENOENT) { 1994 + dev_dbg(&indio_dev->dev, "vrefint calibration not found. Skip vrefint channel\n"); 1995 + return ret; 1996 + } else if (!vrefint) { 1997 + dev_dbg(&indio_dev->dev, "Null vrefint calibration value. Skip vrefint channel\n"); 1998 + return -ENOENT; 1999 + } 2000 + adc->int_ch[i] = chan; 2001 + adc->vrefint.vrefint_cal = vrefint; 1987 2002 } 1988 2003 } 1989 2004 ··· 2025 2030 } 2026 2031 strncpy(adc->chan_name[val], name, STM32_ADC_CH_SZ); 2027 2032 ret = stm32_adc_populate_int_ch(indio_dev, name, val); 2028 - if (ret) 2033 + if (ret == -ENOENT) 2034 + continue; 2035 + else if (ret) 2029 2036 goto err; 2030 2037 } else if (ret != -EINVAL) { 2031 2038 dev_err(&indio_dev->dev, "Invalid label %d\n", ret);
+7 -3
drivers/iio/adc/ti-ads131e08.c
··· 739 739 device_for_each_child_node(dev, node) { 740 740 ret = fwnode_property_read_u32(node, "reg", &channel); 741 741 if (ret) 742 - return ret; 742 + goto err_child_out; 743 743 744 744 ret = fwnode_property_read_u32(node, "ti,gain", &tmp); 745 745 if (ret) { ··· 747 747 } else { 748 748 ret = ads131e08_pga_gain_to_field_value(st, tmp); 749 749 if (ret < 0) 750 - return ret; 750 + goto err_child_out; 751 751 752 752 channel_config[i].pga_gain = tmp; 753 753 } ··· 758 758 } else { 759 759 ret = ads131e08_validate_channel_mux(st, tmp); 760 760 if (ret) 761 - return ret; 761 + goto err_child_out; 762 762 763 763 channel_config[i].mux = tmp; 764 764 } ··· 784 784 st->channel_config = channel_config; 785 785 786 786 return 0; 787 + 788 + err_child_out: 789 + fwnode_handle_put(node); 790 + return ret; 787 791 } 788 792 789 793 static void ads131e08_regulator_disable(void *data)
+1 -1
drivers/iio/adc/xilinx-ams.c
··· 1409 1409 1410 1410 irq = platform_get_irq(pdev, 0); 1411 1411 if (irq < 0) 1412 - return ret; 1412 + return irq; 1413 1413 1414 1414 ret = devm_request_irq(&pdev->dev, irq, &ams_irq, 0, "ams-irq", 1415 1415 indio_dev);
+1 -1
drivers/iio/afe/iio-rescale.c
··· 277 277 chan->ext_info = rescale->ext_info; 278 278 chan->type = rescale->cfg->type; 279 279 280 - if (iio_channel_has_info(schan, IIO_CHAN_INFO_RAW) || 280 + if (iio_channel_has_info(schan, IIO_CHAN_INFO_RAW) && 281 281 iio_channel_has_info(schan, IIO_CHAN_INFO_SCALE)) { 282 282 dev_info(dev, "using raw+scale source channel\n"); 283 283 } else if (iio_channel_has_info(schan, IIO_CHAN_INFO_PROCESSED)) {
+2 -2
drivers/iio/chemical/ccs811.c
··· 499 499 500 500 data->drdy_trig->ops = &ccs811_trigger_ops; 501 501 iio_trigger_set_drvdata(data->drdy_trig, indio_dev); 502 - indio_dev->trig = data->drdy_trig; 503 - iio_trigger_get(indio_dev->trig); 504 502 ret = iio_trigger_register(data->drdy_trig); 505 503 if (ret) 506 504 goto err_poweroff; 505 + 506 + indio_dev->trig = iio_trigger_get(data->drdy_trig); 507 507 } 508 508 509 509 ret = iio_triggered_buffer_setup(indio_dev, NULL,
+4 -2
drivers/iio/frequency/admv1014.c
··· 700 700 ADMV1014_DET_EN_MSK; 701 701 702 702 enable_reg = FIELD_PREP(ADMV1014_P1DB_COMPENSATION_MSK, st->p1db_comp ? 3 : 0) | 703 - FIELD_PREP(ADMV1014_IF_AMP_PD_MSK, !(st->input_mode)) | 704 - FIELD_PREP(ADMV1014_BB_AMP_PD_MSK, st->input_mode) | 703 + FIELD_PREP(ADMV1014_IF_AMP_PD_MSK, 704 + (st->input_mode == ADMV1014_IF_MODE) ? 0 : 1) | 705 + FIELD_PREP(ADMV1014_BB_AMP_PD_MSK, 706 + (st->input_mode == ADMV1014_IF_MODE) ? 1 : 0) | 705 707 FIELD_PREP(ADMV1014_DET_EN_MSK, st->det_en); 706 708 707 709 return __admv1014_spi_update_bits(st, ADMV1014_REG_ENABLE, enable_reg_msk, enable_reg);
+1
drivers/iio/gyro/mpu3050-core.c
··· 875 875 ret = regmap_update_bits(mpu3050->map, MPU3050_PWR_MGM, 876 876 MPU3050_PWR_MGM_SLEEP, 0); 877 877 if (ret) { 878 + regulator_bulk_disable(ARRAY_SIZE(mpu3050->regs), mpu3050->regs); 878 879 dev_err(mpu3050->dev, "error setting power mode\n"); 879 880 return ret; 880 881 }
+4 -1
drivers/iio/humidity/hts221_buffer.c
··· 135 135 136 136 iio_trigger_set_drvdata(hw->trig, iio_dev); 137 137 hw->trig->ops = &hts221_trigger_ops; 138 + 139 + err = devm_iio_trigger_register(hw->dev, hw->trig); 140 + 138 141 iio_dev->trig = iio_trigger_get(hw->trig); 139 142 140 - return devm_iio_trigger_register(hw->dev, hw->trig); 143 + return err; 141 144 } 142 145 143 146 static int hts221_buffer_preenable(struct iio_dev *iio_dev)
+1
drivers/iio/imu/inv_icm42600/inv_icm42600.h
··· 17 17 #include "inv_icm42600_buffer.h" 18 18 19 19 enum inv_icm42600_chip { 20 + INV_CHIP_INVALID, 20 21 INV_CHIP_ICM42600, 21 22 INV_CHIP_ICM42602, 22 23 INV_CHIP_ICM42605,
+1 -1
drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
··· 565 565 bool open_drain; 566 566 int ret; 567 567 568 - if (chip < 0 || chip >= INV_CHIP_NB) { 568 + if (chip <= INV_CHIP_INVALID || chip >= INV_CHIP_NB) { 569 569 dev_err(dev, "invalid chip = %d\n", chip); 570 570 return -ENODEV; 571 571 }
+1 -1
drivers/iio/magnetometer/yamaha-yas530.c
··· 639 639 dev_dbg(yas5xx->dev, "calibration data: %*ph\n", 14, data); 640 640 641 641 /* Sanity check, is this all zeroes? */ 642 - if (memchr_inv(data, 0x00, 13)) { 642 + if (memchr_inv(data, 0x00, 13) == NULL) { 643 643 if (!(data[13] & BIT(7))) 644 644 dev_warn(yas5xx->dev, "calibration is blank!\n"); 645 645 }
+3
drivers/iio/proximity/sx9324.c
··· 885 885 break; 886 886 ret = device_property_read_u32_array(dev, prop, pin_defs, 887 887 ARRAY_SIZE(pin_defs)); 888 + if (ret) 889 + break; 890 + 888 891 for (pin = 0; pin < SX9324_NUM_PINS; pin++) 889 892 raw |= (pin_defs[pin] << (2 * pin)) & 890 893 SX9324_REG_AFE_PH0_PIN_MASK(pin);
+1 -1
drivers/iio/test/Kconfig
··· 6 6 # Keep in alphabetical order 7 7 config IIO_RESCALE_KUNIT_TEST 8 8 bool "Test IIO rescale conversion functions" 9 - depends on KUNIT=y && !IIO_RESCALE 9 + depends on KUNIT=y && IIO_RESCALE=y 10 10 default KUNIT_ALL_TESTS 11 11 help 12 12 If you want to run tests on the iio-rescale code say Y here.
+1 -1
drivers/iio/test/Makefile
··· 4 4 # 5 5 6 6 # Keep in alphabetical order 7 - obj-$(CONFIG_IIO_RESCALE_KUNIT_TEST) += iio-test-rescale.o ../afe/iio-rescale.o 7 + obj-$(CONFIG_IIO_RESCALE_KUNIT_TEST) += iio-test-rescale.o 8 8 obj-$(CONFIG_IIO_TEST_FORMAT) += iio-test-format.o 9 9 CFLAGS_iio-test-format.o += $(DISABLE_STRUCTLEAK_PLUGIN)
+1
drivers/iio/trigger/iio-trig-sysfs.c
··· 190 190 } 191 191 192 192 iio_trigger_unregister(t->trig); 193 + irq_work_sync(&t->work); 193 194 iio_trigger_free(t->trig); 194 195 195 196 list_del(&t->l);
+1 -1
drivers/iommu/ipmmu-vmsa.c
··· 987 987 .compatible = "renesas,ipmmu-r8a779a0", 988 988 .data = &ipmmu_features_rcar_gen4, 989 989 }, { 990 - .compatible = "renesas,rcar-gen4-ipmmu", 990 + .compatible = "renesas,rcar-gen4-ipmmu-vmsa", 991 991 .data = &ipmmu_features_rcar_gen4, 992 992 }, { 993 993 /* Terminator */
+1
drivers/md/dm-core.h
··· 272 272 atomic_t io_count; 273 273 struct mapped_device *md; 274 274 275 + struct bio *split_bio; 275 276 /* The three fields represent mapped part of original bio */ 276 277 struct bio *orig_bio; 277 278 unsigned int sector_offset; /* offset to end of orig_bio */
+7 -1
drivers/md/dm-era-target.c
··· 1400 1400 static void stop_worker(struct era *era) 1401 1401 { 1402 1402 atomic_set(&era->suspended, 1); 1403 - flush_workqueue(era->wq); 1403 + drain_workqueue(era->wq); 1404 1404 } 1405 1405 1406 1406 /*---------------------------------------------------------------- ··· 1570 1570 } 1571 1571 1572 1572 stop_worker(era); 1573 + 1574 + r = metadata_commit(era->md); 1575 + if (r) { 1576 + DMERR("%s: metadata_commit failed", __func__); 1577 + /* FIXME: fail mode */ 1578 + } 1573 1579 } 1574 1580 1575 1581 static int era_preresume(struct dm_target *ti)
+1 -1
drivers/md/dm-log.c
··· 615 615 log_clear_bit(lc, lc->clean_bits, i); 616 616 617 617 /* clear any old bits -- device has shrunk */ 618 - for (i = lc->region_count; i % (sizeof(*lc->clean_bits) << BYTE_SHIFT); i++) 618 + for (i = lc->region_count; i % BITS_PER_LONG; i++) 619 619 log_clear_bit(lc, lc->clean_bits, i); 620 620 621 621 /* copy clean across to sync */
+10 -5
drivers/md/dm.c
··· 594 594 atomic_set(&io->io_count, 2); 595 595 this_cpu_inc(*md->pending_io); 596 596 io->orig_bio = bio; 597 + io->split_bio = NULL; 597 598 io->md = md; 598 599 spin_lock_init(&io->lock); 599 600 io->start_time = jiffies; ··· 888 887 { 889 888 blk_status_t io_error; 890 889 struct mapped_device *md = io->md; 891 - struct bio *bio = io->orig_bio; 890 + struct bio *bio = io->split_bio ? io->split_bio : io->orig_bio; 892 891 893 892 if (io->status == BLK_STS_DM_REQUEUE) { 894 893 unsigned long flags; ··· 940 939 if (io_error == BLK_STS_AGAIN) { 941 940 /* io_uring doesn't handle BLK_STS_AGAIN (yet) */ 942 941 queue_io(md, bio); 942 + return; 943 943 } 944 944 } 945 - return; 945 + if (io_error == BLK_STS_DM_REQUEUE) 946 + return; 946 947 } 947 948 948 949 if (bio_is_flush_with_data(bio)) { ··· 1694 1691 * Remainder must be passed to submit_bio_noacct() so it gets handled 1695 1692 * *after* bios already submitted have been completely processed. 1696 1693 */ 1697 - bio_trim(bio, io->sectors, ci.sector_count); 1698 - trace_block_split(bio, bio->bi_iter.bi_sector); 1699 - bio_inc_remaining(bio); 1694 + WARN_ON_ONCE(!dm_io_flagged(io, DM_IO_WAS_SPLIT)); 1695 + io->split_bio = bio_split(bio, io->sectors, GFP_NOIO, 1696 + &md->queue->bio_split); 1697 + bio_chain(io->split_bio, bio); 1698 + trace_block_split(io->split_bio, bio->bi_iter.bi_sector); 1700 1699 submit_bio_noacct(bio); 1701 1700 out: 1702 1701 /*
+1
drivers/memory/Kconfig
··· 105 105 config OMAP_GPMC 106 106 tristate "Texas Instruments OMAP SoC GPMC driver" 107 107 depends on OF_ADDRESS 108 + depends on ARCH_OMAP2PLUS || ARCH_KEYSTONE || ARCH_K3 || COMPILE_TEST 108 109 select GPIOLIB 109 110 help 110 111 This driver is for the General Purpose Memory Controller (GPMC)
+4 -1
drivers/memory/mtk-smi.c
··· 404 404 of_node_put(smi_com_node); 405 405 if (smi_com_pdev) { 406 406 /* smi common is the supplier, Make sure it is ready before */ 407 - if (!platform_get_drvdata(smi_com_pdev)) 407 + if (!platform_get_drvdata(smi_com_pdev)) { 408 + put_device(&smi_com_pdev->dev); 408 409 return -EPROBE_DEFER; 410 + } 409 411 smi_com_dev = &smi_com_pdev->dev; 410 412 link = device_link_add(dev, smi_com_dev, 411 413 DL_FLAG_PM_RUNTIME | DL_FLAG_STATELESS); 412 414 if (!link) { 413 415 dev_err(dev, "Unable to link smi-common dev\n"); 416 + put_device(&smi_com_pdev->dev); 414 417 return -ENODEV; 415 418 } 416 419 *com_dev = smi_com_dev;
+18 -11
drivers/memory/samsung/exynos5422-dmc.c
··· 1187 1187 1188 1188 dmc->timing_row = devm_kmalloc_array(dmc->dev, TIMING_COUNT, 1189 1189 sizeof(u32), GFP_KERNEL); 1190 - if (!dmc->timing_row) 1191 - return -ENOMEM; 1190 + if (!dmc->timing_row) { 1191 + ret = -ENOMEM; 1192 + goto put_node; 1193 + } 1192 1194 1193 1195 dmc->timing_data = devm_kmalloc_array(dmc->dev, TIMING_COUNT, 1194 1196 sizeof(u32), GFP_KERNEL); 1195 - if (!dmc->timing_data) 1196 - return -ENOMEM; 1197 + if (!dmc->timing_data) { 1198 + ret = -ENOMEM; 1199 + goto put_node; 1200 + } 1197 1201 1198 1202 dmc->timing_power = devm_kmalloc_array(dmc->dev, TIMING_COUNT, 1199 1203 sizeof(u32), GFP_KERNEL); 1200 - if (!dmc->timing_power) 1201 - return -ENOMEM; 1204 + if (!dmc->timing_power) { 1205 + ret = -ENOMEM; 1206 + goto put_node; 1207 + } 1202 1208 1203 1209 dmc->timings = of_lpddr3_get_ddr_timings(np_ddr, dmc->dev, 1204 1210 DDR_TYPE_LPDDR3, 1205 1211 &dmc->timings_arr_size); 1206 1212 if (!dmc->timings) { 1207 - of_node_put(np_ddr); 1208 1213 dev_warn(dmc->dev, "could not get timings from DT\n"); 1209 - return -EINVAL; 1214 + ret = -EINVAL; 1215 + goto put_node; 1210 1216 } 1211 1217 1212 1218 dmc->min_tck = of_lpddr3_get_min_tck(np_ddr, dmc->dev); 1213 1219 if (!dmc->min_tck) { 1214 - of_node_put(np_ddr); 1215 1220 dev_warn(dmc->dev, "could not get tck from DT\n"); 1216 - return -EINVAL; 1221 + ret = -EINVAL; 1222 + goto put_node; 1217 1223 } 1218 1224 1219 1225 /* Sorted array of OPPs with frequency ascending */ ··· 1233 1227 clk_period_ps); 1234 1228 } 1235 1229 1236 - of_node_put(np_ddr); 1237 1230 1238 1231 /* Take the highest frequency's timings as 'bypass' */ 1239 1232 dmc->bypass_timing_row = dmc->timing_row[idx - 1]; 1240 1233 dmc->bypass_timing_data = dmc->timing_data[idx - 1]; 1241 1234 dmc->bypass_timing_power = dmc->timing_power[idx - 1]; 1242 1235 1236 + put_node: 1237 + of_node_put(np_ddr); 1243 1238 return ret; 1244 1239 } 1245 1240
+12 -8
drivers/mmc/host/mtk-sd.c
··· 1356 1356 msdc_request_done(host, mrq); 1357 1357 } 1358 1358 1359 - static bool msdc_data_xfer_done(struct msdc_host *host, u32 events, 1359 + static void msdc_data_xfer_done(struct msdc_host *host, u32 events, 1360 1360 struct mmc_request *mrq, struct mmc_data *data) 1361 1361 { 1362 1362 struct mmc_command *stop; ··· 1376 1376 spin_unlock_irqrestore(&host->lock, flags); 1377 1377 1378 1378 if (done) 1379 - return true; 1379 + return; 1380 1380 stop = data->stop; 1381 1381 1382 1382 if (check_data || (stop && stop->error)) { ··· 1385 1385 sdr_set_field(host->base + MSDC_DMA_CTRL, MSDC_DMA_CTRL_STOP, 1386 1386 1); 1387 1387 1388 + ret = readl_poll_timeout_atomic(host->base + MSDC_DMA_CTRL, val, 1389 + !(val & MSDC_DMA_CTRL_STOP), 1, 20000); 1390 + if (ret) 1391 + dev_dbg(host->dev, "DMA stop timed out\n"); 1392 + 1388 1393 ret = readl_poll_timeout_atomic(host->base + MSDC_DMA_CFG, val, 1389 1394 !(val & MSDC_DMA_CFG_STS), 1, 20000); 1390 - if (ret) { 1391 - dev_dbg(host->dev, "DMA stop timed out\n"); 1392 - return false; 1393 - } 1395 + if (ret) 1396 + dev_dbg(host->dev, "DMA inactive timed out\n"); 1394 1397 1395 1398 sdr_clr_bits(host->base + MSDC_INTEN, data_ints_mask); 1396 1399 dev_dbg(host->dev, "DMA stop\n"); ··· 1418 1415 } 1419 1416 1420 1417 msdc_data_xfer_next(host, mrq); 1421 - done = true; 1422 1418 } 1423 - return done; 1424 1419 } 1425 1420 1426 1421 static void msdc_set_buswidth(struct msdc_host *host, u32 width) ··· 2417 2416 if (recovery) { 2418 2417 sdr_set_field(host->base + MSDC_DMA_CTRL, 2419 2418 MSDC_DMA_CTRL_STOP, 1); 2419 + if (WARN_ON(readl_poll_timeout(host->base + MSDC_DMA_CTRL, val, 2420 + !(val & MSDC_DMA_CTRL_STOP), 1, 3000))) 2421 + return; 2420 2422 if (WARN_ON(readl_poll_timeout(host->base + MSDC_DMA_CFG, val, 2421 2423 !(val & MSDC_DMA_CFG_STS), 1, 3000))) 2422 2424 return;
+2
drivers/mmc/host/sdhci-pci-o2micro.c
··· 152 152 153 153 if (!(sdhci_readw(host, O2_PLL_DLL_WDT_CONTROL1) & O2_PLL_LOCK_STATUS)) 154 154 sdhci_o2_enable_internal_clock(host); 155 + else 156 + sdhci_o2_wait_card_detect_stable(host); 155 157 156 158 return !!(sdhci_readl(host, SDHCI_PRESENT_STATE) & SDHCI_CARD_PRESENT); 157 159 }
+1 -1
drivers/mtd/nand/raw/gpmi-nand/gpmi-nand.c
··· 890 890 hw->timing0 = BF_GPMI_TIMING0_ADDRESS_SETUP(addr_setup_cycles) | 891 891 BF_GPMI_TIMING0_DATA_HOLD(data_hold_cycles) | 892 892 BF_GPMI_TIMING0_DATA_SETUP(data_setup_cycles); 893 - hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(busy_timeout_cycles * 4096); 893 + hw->timing1 = BF_GPMI_TIMING1_BUSY_TIMEOUT(DIV_ROUND_UP(busy_timeout_cycles, 4096)); 894 894 895 895 /* 896 896 * Derive NFC ideal delay from {3}:
-3
drivers/mtd/nand/raw/nand_ids.c
··· 29 29 {"TC58NVG0S3E 1G 3.3V 8-bit", 30 30 { .id = {0x98, 0xd1, 0x90, 0x15, 0x76, 0x14, 0x01, 0x00} }, 31 31 SZ_2K, SZ_128, SZ_128K, 0, 8, 64, NAND_ECC_INFO(1, SZ_512), }, 32 - {"TC58NVG0S3HTA00 1G 3.3V 8-bit", 33 - { .id = {0x98, 0xf1, 0x80, 0x15} }, 34 - SZ_2K, SZ_128, SZ_128K, 0, 4, 128, NAND_ECC_INFO(8, SZ_512), }, 35 32 {"TC58NVG2S0F 4G 3.3V 8-bit", 36 33 { .id = {0x98, 0xdc, 0x90, 0x26, 0x76, 0x15, 0x01, 0x08} }, 37 34 SZ_4K, SZ_512, SZ_256K, 0, 8, 224, NAND_ECC_INFO(4, SZ_512) },
+3 -1
drivers/net/bonding/bond_main.c
··· 3684 3684 if (!rtnl_trylock()) 3685 3685 return; 3686 3686 3687 - if (should_notify_peers) 3687 + if (should_notify_peers) { 3688 + bond->send_peer_notif--; 3688 3689 call_netdevice_notifiers(NETDEV_NOTIFY_PEERS, 3689 3690 bond->dev); 3691 + } 3690 3692 if (should_notify_rtnl) { 3691 3693 bond_slave_state_notify(bond); 3692 3694 bond_slave_link_notify(bond);
+21 -1
drivers/net/dsa/qca8k.c
··· 2334 2334 qca8k_port_change_mtu(struct dsa_switch *ds, int port, int new_mtu) 2335 2335 { 2336 2336 struct qca8k_priv *priv = ds->priv; 2337 + int ret; 2337 2338 2338 2339 /* We have only have a general MTU setting. 2339 2340 * DSA always set the CPU port's MTU to the largest MTU of the slave ··· 2345 2344 if (!dsa_is_cpu_port(ds, port)) 2346 2345 return 0; 2347 2346 2347 + /* To change the MAX_FRAME_SIZE the cpu ports must be off or 2348 + * the switch panics. 2349 + * Turn off both cpu ports before applying the new value to prevent 2350 + * this. 2351 + */ 2352 + if (priv->port_enabled_map & BIT(0)) 2353 + qca8k_port_set_status(priv, 0, 0); 2354 + 2355 + if (priv->port_enabled_map & BIT(6)) 2356 + qca8k_port_set_status(priv, 6, 0); 2357 + 2348 2358 /* Include L2 header / FCS length */ 2349 - return qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN); 2359 + ret = qca8k_write(priv, QCA8K_MAX_FRAME_SIZE, new_mtu + ETH_HLEN + ETH_FCS_LEN); 2360 + 2361 + if (priv->port_enabled_map & BIT(0)) 2362 + qca8k_port_set_status(priv, 0, 1); 2363 + 2364 + if (priv->port_enabled_map & BIT(6)) 2365 + qca8k_port_set_status(priv, 6, 1); 2366 + 2367 + return ret; 2350 2368 } 2351 2369 2352 2370 static int
+1 -1
drivers/net/dsa/qca8k.h
··· 15 15 16 16 #define QCA8K_ETHERNET_MDIO_PRIORITY 7 17 17 #define QCA8K_ETHERNET_PHY_PRIORITY 6 18 - #define QCA8K_ETHERNET_TIMEOUT 100 18 + #define QCA8K_ETHERNET_TIMEOUT 5 19 19 20 20 #define QCA8K_NUM_PORTS 7 21 21 #define QCA8K_NUM_CPU_PORTS 2
+1 -3
drivers/net/ethernet/huawei/hinic/hinic_devlink.c
··· 43 43 44 44 for (i = 0; i < fw_image->fw_info.fw_section_cnt; i++) { 45 45 len += fw_image->fw_section_info[i].fw_section_len; 46 - memcpy(&host_image->image_section_info[i], 47 - &fw_image->fw_section_info[i], 48 - sizeof(struct fw_section_info_st)); 46 + host_image->image_section_info[i] = fw_image->fw_section_info[i]; 49 47 } 50 48 51 49 if (len != fw_image->fw_len ||
+48 -1
drivers/net/ethernet/intel/ice/ice_ethtool.c
··· 2190 2190 } 2191 2191 2192 2192 /** 2193 + * ice_set_phy_type_from_speed - set phy_types based on speeds 2194 + * and advertised modes 2195 + * @ks: ethtool link ksettings struct 2196 + * @phy_type_low: pointer to the lower part of phy_type 2197 + * @phy_type_high: pointer to the higher part of phy_type 2198 + * @adv_link_speed: targeted link speeds bitmap 2199 + */ 2200 + static void 2201 + ice_set_phy_type_from_speed(const struct ethtool_link_ksettings *ks, 2202 + u64 *phy_type_low, u64 *phy_type_high, 2203 + u16 adv_link_speed) 2204 + { 2205 + /* Handle 1000M speed in a special way because ice_update_phy_type 2206 + * enables all link modes, but having mixed copper and optical 2207 + * standards is not supported. 2208 + */ 2209 + adv_link_speed &= ~ICE_AQ_LINK_SPEED_1000MB; 2210 + 2211 + if (ethtool_link_ksettings_test_link_mode(ks, advertising, 2212 + 1000baseT_Full)) 2213 + *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_T | 2214 + ICE_PHY_TYPE_LOW_1G_SGMII; 2215 + 2216 + if (ethtool_link_ksettings_test_link_mode(ks, advertising, 2217 + 1000baseKX_Full)) 2218 + *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_KX; 2219 + 2220 + if (ethtool_link_ksettings_test_link_mode(ks, advertising, 2221 + 1000baseX_Full)) 2222 + *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_SX | 2223 + ICE_PHY_TYPE_LOW_1000BASE_LX; 2224 + 2225 + ice_update_phy_type(phy_type_low, phy_type_high, adv_link_speed); 2226 + } 2227 + 2228 + /** 2193 2229 * ice_set_link_ksettings - Set Speed and Duplex 2194 2230 * @netdev: network interface device structure 2195 2231 * @ks: ethtool ksettings ··· 2356 2320 adv_link_speed = curr_link_speed; 2357 2321 2358 2322 /* Convert the advertise link speeds to their corresponded PHY_TYPE */ 2359 - ice_update_phy_type(&phy_type_low, &phy_type_high, adv_link_speed); 2323 + ice_set_phy_type_from_speed(ks, &phy_type_low, &phy_type_high, 2324 + adv_link_speed); 2360 2325 2361 2326 if (!autoneg_changed && adv_link_speed == curr_link_speed) { 2362 2327 netdev_info(netdev, "Nothing changed, exiting without setting anything.\n"); ··· 3507 3470 new_rx = ch->combined_count + ch->rx_count; 3508 3471 new_tx = ch->combined_count + ch->tx_count; 3509 3472 3473 + if (new_rx < vsi->tc_cfg.numtc) { 3474 + netdev_err(dev, "Cannot set less Rx channels, than Traffic Classes you have (%u)\n", 3475 + vsi->tc_cfg.numtc); 3476 + return -EINVAL; 3477 + } 3478 + if (new_tx < vsi->tc_cfg.numtc) { 3479 + netdev_err(dev, "Cannot set less Tx channels, than Traffic Classes you have (%u)\n", 3480 + vsi->tc_cfg.numtc); 3481 + return -EINVAL; 3482 + } 3510 3483 if (new_rx > ice_get_max_rxq(pf)) { 3511 3484 netdev_err(dev, "Maximum allowed Rx channels is %d\n", 3512 3485 ice_get_max_rxq(pf));
+37 -5
drivers/net/ethernet/intel/ice/ice_lib.c
··· 909 909 * @vsi: the VSI being configured 910 910 * @ctxt: VSI context structure 911 911 */ 912 - static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt) 912 + static int ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt) 913 913 { 914 914 u16 offset = 0, qmap = 0, tx_count = 0, pow = 0; 915 915 u16 num_txq_per_tc, num_rxq_per_tc; ··· 982 982 else 983 983 vsi->num_rxq = num_rxq_per_tc; 984 984 985 + if (vsi->num_rxq > vsi->alloc_rxq) { 986 + dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n", 987 + vsi->num_rxq, vsi->alloc_rxq); 988 + return -EINVAL; 989 + } 990 + 985 991 vsi->num_txq = tx_count; 992 + if (vsi->num_txq > vsi->alloc_txq) { 993 + dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n", 994 + vsi->num_txq, vsi->alloc_txq); 995 + return -EINVAL; 996 + } 986 997 987 998 if (vsi->type == ICE_VSI_VF && vsi->num_txq != vsi->num_rxq) { 988 999 dev_dbg(ice_pf_to_dev(vsi->back), "VF VSI should have same number of Tx and Rx queues. Hence making them equal\n"); ··· 1011 1000 */ 1012 1001 ctxt->info.q_mapping[0] = cpu_to_le16(vsi->rxq_map[0]); 1013 1002 ctxt->info.q_mapping[1] = cpu_to_le16(vsi->num_rxq); 1003 + 1004 + return 0; 1014 1005 } 1015 1006 1016 1007 /** ··· 1200 1187 if (vsi->type == ICE_VSI_CHNL) { 1201 1188 ice_chnl_vsi_setup_q_map(vsi, ctxt); 1202 1189 } else { 1203 - ice_vsi_setup_q_map(vsi, ctxt); 1190 + ret = ice_vsi_setup_q_map(vsi, ctxt); 1191 + if (ret) 1192 + goto out; 1193 + 1204 1194 if (!init_vsi) /* means VSI being updated */ 1205 1195 /* must to indicate which section of VSI context are 1206 1196 * being modified ··· 3480 3464 * 3481 3465 * Prepares VSI tc_config to have queue configurations based on MQPRIO options. 3482 3466 */ 3483 - static void 3467 + static int 3484 3468 ice_vsi_setup_q_map_mqprio(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt, 3485 3469 u8 ena_tc) ··· 3529 3513 3530 3514 /* Set actual Tx/Rx queue pairs */ 3531 3515 vsi->num_txq = offset + qcount_tx; 3516 + if (vsi->num_txq > vsi->alloc_txq) { 3517 + dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Tx queues (%u), than were allocated (%u)!\n", 3518 + vsi->num_txq, vsi->alloc_txq); 3519 + return -EINVAL; 3520 + } 3521 + 3532 3522 vsi->num_rxq = offset + qcount_rx; 3523 + if (vsi->num_rxq > vsi->alloc_rxq) { 3524 + dev_err(ice_pf_to_dev(vsi->back), "Trying to use more Rx queues (%u), than were allocated (%u)!\n", 3525 + vsi->num_rxq, vsi->alloc_rxq); 3526 + return -EINVAL; 3527 + } 3533 3528 3534 3529 /* Setup queue TC[0].qmap for given VSI context */ 3535 3530 ctxt->info.tc_mapping[0] = cpu_to_le16(qmap); ··· 3558 3531 dev_dbg(ice_pf_to_dev(vsi->back), "vsi->num_rxq = %d\n", vsi->num_rxq); 3559 3532 dev_dbg(ice_pf_to_dev(vsi->back), "all_numtc %u, all_enatc: 0x%04x, tc_cfg.numtc %u\n", 3560 3533 vsi->all_numtc, vsi->all_enatc, vsi->tc_cfg.numtc); 3534 + 3535 + return 0; 3561 3536 } 3562 3537 3563 3538 /** ··· 3609 3580 3610 3581 if (vsi->type == ICE_VSI_PF && 3611 3582 test_bit(ICE_FLAG_TC_MQPRIO, pf->flags)) 3612 - ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc); 3583 + ret = ice_vsi_setup_q_map_mqprio(vsi, ctx, ena_tc); 3613 3584 else 3614 - ice_vsi_setup_q_map(vsi, ctx); 3585 + ret = ice_vsi_setup_q_map(vsi, ctx); 3586 + 3587 + if (ret) 3588 + goto out; 3615 3589 3616 3590 /* must to indicate which section of VSI context are being modified */ 3617 3591 ctx->info.valid_sections = cpu_to_le16(ICE_AQ_VSI_PROP_RXQ_MAP_VALID);
+4 -1
drivers/net/ethernet/intel/ice/ice_tc_lib.c
··· 524 524 */ 525 525 fltr->rid = rule_added.rid; 526 526 fltr->rule_id = rule_added.rule_id; 527 + fltr->dest_id = rule_added.vsi_handle; 527 528 528 529 exit: 529 530 kfree(list); ··· 994 993 n_proto_key = ntohs(match.key->n_proto); 995 994 n_proto_mask = ntohs(match.mask->n_proto); 996 995 997 - if (n_proto_key == ETH_P_ALL || n_proto_key == 0) { 996 + if (n_proto_key == ETH_P_ALL || n_proto_key == 0 || 997 + fltr->tunnel_type == TNL_GTPU || 998 + fltr->tunnel_type == TNL_GTPC) { 998 999 n_proto_key = 0; 999 1000 n_proto_mask = 0; 1000 1001 } else {
+10 -9
drivers/net/ethernet/intel/igb/igb_main.c
··· 4819 4819 while (i != tx_ring->next_to_use) { 4820 4820 union e1000_adv_tx_desc *eop_desc, *tx_desc; 4821 4821 4822 - /* Free all the Tx ring sk_buffs */ 4823 - dev_kfree_skb_any(tx_buffer->skb); 4822 + /* Free all the Tx ring sk_buffs or xdp frames */ 4823 + if (tx_buffer->type == IGB_TYPE_SKB) 4824 + dev_kfree_skb_any(tx_buffer->skb); 4825 + else 4826 + xdp_return_frame(tx_buffer->xdpf); 4824 4827 4825 4828 /* unmap skb header data */ 4826 4829 dma_unmap_single(tx_ring->dev, ··· 9901 9898 struct e1000_hw *hw = &adapter->hw; 9902 9899 u32 dmac_thr; 9903 9900 u16 hwm; 9901 + u32 reg; 9904 9902 9905 9903 if (hw->mac.type > e1000_82580) { 9906 9904 if (adapter->flags & IGB_FLAG_DMAC) { 9907 - u32 reg; 9908 - 9909 9905 /* force threshold to 0. */ 9910 9906 wr32(E1000_DMCTXTH, 0); 9911 9907 ··· 9937 9935 /* Disable BMC-to-OS Watchdog Enable */ 9938 9936 if (hw->mac.type != e1000_i354) 9939 9937 reg &= ~E1000_DMACR_DC_BMC2OSW_EN; 9940 - 9941 9938 wr32(E1000_DMACR, reg); 9942 9939 9943 9940 /* no lower threshold to disable ··· 9953 9952 */ 9954 9953 wr32(E1000_DMCTXTH, (IGB_MIN_TXPBSIZE - 9955 9954 (IGB_TX_BUF_4096 + adapter->max_frame_size)) >> 6); 9955 + } 9956 9956 9957 - /* make low power state decision controlled 9958 - * by DMA coal 9959 - */ 9957 + if (hw->mac.type >= e1000_i210 || 9958 + (adapter->flags & IGB_FLAG_DMAC)) { 9960 9959 reg = rd32(E1000_PCIEMISC); 9961 - reg &= ~E1000_PCIEMISC_LX_DECISION; 9960 + reg |= E1000_PCIEMISC_LX_DECISION; 9962 9961 wr32(E1000_PCIEMISC, reg); 9963 9962 } /* endif adapter->dmac is not disabled */ 9964 9963 } else if (hw->mac.type == e1000_82580) {
+8 -1
drivers/net/hamradio/6pack.c
··· 99 99 100 100 unsigned int rx_count; 101 101 unsigned int rx_count_cooked; 102 + spinlock_t rxlock; 102 103 103 104 int mtu; /* Our mtu (to spot changes!) */ 104 105 int buffsize; /* Max buffers sizes */ ··· 566 565 sp->dev = dev; 567 566 568 567 spin_lock_init(&sp->lock); 568 + spin_lock_init(&sp->rxlock); 569 569 refcount_set(&sp->refcnt, 1); 570 570 init_completion(&sp->dead); 571 571 ··· 915 913 sp->led_state = 0x60; 916 914 /* fill trailing bytes with zeroes */ 917 915 sp->tty->ops->write(sp->tty, &sp->led_state, 1); 916 + spin_lock_bh(&sp->rxlock); 918 917 rest = sp->rx_count; 919 918 if (rest != 0) 920 919 for (i = rest; i <= 3; i++) ··· 933 930 sp_bump(sp, 0); 934 931 } 935 932 sp->rx_count_cooked = 0; 933 + spin_unlock_bh(&sp->rxlock); 936 934 } 937 935 break; 938 936 case SIXP_TX_URUN: printk(KERN_DEBUG "6pack: TX underrun\n"); ··· 963 959 decode_prio_command(sp, inbyte); 964 960 else if ((inbyte & SIXP_STD_CMD_MASK) != 0) 965 961 decode_std_command(sp, inbyte); 966 - else if ((sp->status & SIXP_RX_DCD_MASK) == SIXP_RX_DCD_MASK) 962 + else if ((sp->status & SIXP_RX_DCD_MASK) == SIXP_RX_DCD_MASK) { 963 + spin_lock_bh(&sp->rxlock); 967 964 decode_data(sp, inbyte); 965 + spin_unlock_bh(&sp->rxlock); 966 + } 968 967 } 969 968 } 970 969
+14 -1
drivers/net/phy/aquantia_main.c
··· 34 34 #define MDIO_AN_VEND_PROV 0xc400 35 35 #define MDIO_AN_VEND_PROV_1000BASET_FULL BIT(15) 36 36 #define MDIO_AN_VEND_PROV_1000BASET_HALF BIT(14) 37 + #define MDIO_AN_VEND_PROV_5000BASET_FULL BIT(11) 38 + #define MDIO_AN_VEND_PROV_2500BASET_FULL BIT(10) 37 39 #define MDIO_AN_VEND_PROV_DOWNSHIFT_EN BIT(4) 38 40 #define MDIO_AN_VEND_PROV_DOWNSHIFT_MASK GENMASK(3, 0) 39 41 #define MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT 4 ··· 233 231 phydev->advertising)) 234 232 reg |= MDIO_AN_VEND_PROV_1000BASET_HALF; 235 233 234 + /* Handle the case when the 2.5G and 5G speeds are not advertised */ 235 + if (linkmode_test_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT, 236 + phydev->advertising)) 237 + reg |= MDIO_AN_VEND_PROV_2500BASET_FULL; 238 + 239 + if (linkmode_test_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT, 240 + phydev->advertising)) 241 + reg |= MDIO_AN_VEND_PROV_5000BASET_FULL; 242 + 236 243 ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_VEND_PROV, 237 244 MDIO_AN_VEND_PROV_1000BASET_HALF | 238 - MDIO_AN_VEND_PROV_1000BASET_FULL, reg); 245 + MDIO_AN_VEND_PROV_1000BASET_FULL | 246 + MDIO_AN_VEND_PROV_2500BASET_FULL | 247 + MDIO_AN_VEND_PROV_5000BASET_FULL, reg); 239 248 if (ret < 0) 240 249 return ret; 241 250 if (ret > 0)
+6
drivers/net/phy/at803x.c
··· 2072 2072 /* ATHEROS AR9331 */ 2073 2073 PHY_ID_MATCH_EXACT(ATH9331_PHY_ID), 2074 2074 .name = "Qualcomm Atheros AR9331 built-in PHY", 2075 + .probe = at803x_probe, 2076 + .remove = at803x_remove, 2075 2077 .suspend = at803x_suspend, 2076 2078 .resume = at803x_resume, 2077 2079 .flags = PHY_POLL_CABLE_TEST, ··· 2089 2087 /* Qualcomm Atheros QCA9561 */ 2090 2088 PHY_ID_MATCH_EXACT(QCA9561_PHY_ID), 2091 2089 .name = "Qualcomm Atheros QCA9561 built-in PHY", 2090 + .probe = at803x_probe, 2091 + .remove = at803x_remove, 2092 2092 .suspend = at803x_suspend, 2093 2093 .resume = at803x_resume, 2094 2094 .flags = PHY_POLL_CABLE_TEST, ··· 2155 2151 PHY_ID_MATCH_EXACT(QCA8081_PHY_ID), 2156 2152 .name = "Qualcomm QCA8081", 2157 2153 .flags = PHY_POLL_CABLE_TEST, 2154 + .probe = at803x_probe, 2155 + .remove = at803x_remove, 2158 2156 .config_intr = at803x_config_intr, 2159 2157 .handle_interrupt = at803x_handle_interrupt, 2160 2158 .get_tunable = at803x_get_tunable,
+4 -2
drivers/net/phy/smsc.c
··· 110 110 struct smsc_phy_priv *priv = phydev->priv; 111 111 int rc; 112 112 113 - if (!priv->energy_enable) 113 + if (!priv->energy_enable || phydev->irq != PHY_POLL) 114 114 return 0; 115 115 116 116 rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS); ··· 210 210 * response on link pulses to detect presence of plugged Ethernet cable. 211 211 * The Energy Detect Power-Down mode is enabled again in the end of procedure to 212 212 * save approximately 220 mW of power if cable is unplugged. 213 + * The workaround is only applicable to poll mode. Energy Detect Power-Down may 214 + * not be used in interrupt mode lest link change detection becomes unreliable. 213 215 */ 214 216 static int lan87xx_read_status(struct phy_device *phydev) 215 217 { ··· 219 217 220 218 int err = genphy_read_status(phydev); 221 219 222 - if (!phydev->link && priv->energy_enable) { 220 + if (!phydev->link && priv->energy_enable && phydev->irq == PHY_POLL) { 223 221 /* Disable EDPD to wake up PHY */ 224 222 int rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS); 225 223 if (rc < 0)
+4
drivers/net/veth.c
··· 312 312 static netdev_tx_t veth_xmit(struct sk_buff *skb, struct net_device *dev) 313 313 { 314 314 struct veth_priv *rcv_priv, *priv = netdev_priv(dev); 315 + struct netdev_queue *queue = NULL; 315 316 struct veth_rq *rq = NULL; 316 317 struct net_device *rcv; 317 318 int length = skb->len; ··· 330 329 rxq = skb_get_queue_mapping(skb); 331 330 if (rxq < rcv->real_num_rx_queues) { 332 331 rq = &rcv_priv->rq[rxq]; 332 + queue = netdev_get_tx_queue(dev, rxq); 333 333 334 334 /* The napi pointer is available when an XDP program is 335 335 * attached or when GRO is enabled ··· 342 340 343 341 skb_tx_timestamp(skb); 344 342 if (likely(veth_forward_skb(rcv, skb, rq, use_napi) == NET_RX_SUCCESS)) { 343 + if (queue) 344 + txq_trans_cond_update(queue); 345 345 if (!use_napi) 346 346 dev_lstats_add(dev, length); 347 347 } else {
+6 -19
drivers/net/virtio_net.c
··· 2768 2768 static void virtnet_freeze_down(struct virtio_device *vdev) 2769 2769 { 2770 2770 struct virtnet_info *vi = vdev->priv; 2771 - int i; 2772 2771 2773 2772 /* Make sure no work handler is accessing the device */ 2774 2773 flush_work(&vi->config_work); ··· 2775 2776 netif_tx_lock_bh(vi->dev); 2776 2777 netif_device_detach(vi->dev); 2777 2778 netif_tx_unlock_bh(vi->dev); 2778 - cancel_delayed_work_sync(&vi->refill); 2779 - 2780 - if (netif_running(vi->dev)) { 2781 - for (i = 0; i < vi->max_queue_pairs; i++) { 2782 - napi_disable(&vi->rq[i].napi); 2783 - virtnet_napi_tx_disable(&vi->sq[i].napi); 2784 - } 2785 - } 2779 + if (netif_running(vi->dev)) 2780 + virtnet_close(vi->dev); 2786 2781 } 2787 2782 2788 2783 static int init_vqs(struct virtnet_info *vi); ··· 2784 2791 static int virtnet_restore_up(struct virtio_device *vdev) 2785 2792 { 2786 2793 struct virtnet_info *vi = vdev->priv; 2787 - int err, i; 2794 + int err; 2788 2795 2789 2796 err = init_vqs(vi); 2790 2797 if (err) ··· 2793 2800 virtio_device_ready(vdev); 2794 2801 2795 2802 if (netif_running(vi->dev)) { 2796 - for (i = 0; i < vi->curr_queue_pairs; i++) 2797 - if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL)) 2798 - schedule_delayed_work(&vi->refill, 0); 2799 - 2800 - for (i = 0; i < vi->max_queue_pairs; i++) { 2801 - virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi); 2802 - virtnet_napi_tx_enable(vi, vi->sq[i].vq, 2803 - &vi->sq[i].napi); 2804 - } 2803 + err = virtnet_open(vi->dev); 2804 + if (err) 2805 + return err; 2805 2806 } 2806 2807 2807 2808 netif_tx_lock_bh(vi->dev);
+14
drivers/nvme/host/core.c
··· 2546 2546 .vid = 0x1e0f, 2547 2547 .mn = "KCD6XVUL6T40", 2548 2548 .quirks = NVME_QUIRK_NO_APST, 2549 + }, 2550 + { 2551 + /* 2552 + * The external Samsung X5 SSD fails initialization without a 2553 + * delay before checking if it is ready and has a whole set of 2554 + * other problems. To make this even more interesting, it 2555 + * shares the PCI ID with internal Samsung 970 Evo Plus that 2556 + * does not need or want these quirks. 2557 + */ 2558 + .vid = 0x144d, 2559 + .mn = "Samsung Portable SSD X5", 2560 + .quirks = NVME_QUIRK_DELAY_BEFORE_CHK_RDY | 2561 + NVME_QUIRK_NO_DEEPEST_PS | 2562 + NVME_QUIRK_IGNORE_DEV_SUBNQN, 2549 2563 } 2550 2564 }; 2551 2565
+2 -4
drivers/nvme/host/pci.c
··· 3474 3474 { PCI_DEVICE(0x1cc1, 0x8201), /* ADATA SX8200PNP 512GB */ 3475 3475 .driver_data = NVME_QUIRK_NO_DEEPEST_PS | 3476 3476 NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3477 + { PCI_DEVICE(0x1344, 0x5407), /* Micron Technology Inc NVMe SSD */ 3478 + .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN }, 3477 3479 { PCI_DEVICE(0x1c5c, 0x1504), /* SK Hynix PC400 */ 3478 3480 .driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, }, 3479 3481 { PCI_DEVICE(0x1c5c, 0x174a), /* SK Hynix P31 SSD */ ··· 3526 3524 NVME_QUIRK_128_BYTES_SQES | 3527 3525 NVME_QUIRK_SHARED_TAGS | 3528 3526 NVME_QUIRK_SKIP_CID_GEN }, 3529 - { PCI_DEVICE(0x144d, 0xa808), /* Samsung X5 */ 3530 - .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY| 3531 - NVME_QUIRK_NO_DEEPEST_PS | 3532 - NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3533 3527 { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) }, 3534 3528 { 0, } 3535 3529 };
+4 -4
drivers/regulator/qcom_smd-regulator.c
··· 723 723 724 724 static const struct regulator_desc mp5496_smpa2 = { 725 725 .linear_ranges = (struct linear_range[]) { 726 - REGULATOR_LINEAR_RANGE(725000, 0, 27, 12500), 726 + REGULATOR_LINEAR_RANGE(600000, 0, 127, 12500), 727 727 }, 728 728 .n_linear_ranges = 1, 729 - .n_voltages = 28, 729 + .n_voltages = 128, 730 730 .ops = &rpm_mp5496_ops, 731 731 }; 732 732 733 733 static const struct regulator_desc mp5496_ldoa2 = { 734 734 .linear_ranges = (struct linear_range[]) { 735 - REGULATOR_LINEAR_RANGE(1800000, 0, 60, 25000), 735 + REGULATOR_LINEAR_RANGE(800000, 0, 127, 25000), 736 736 }, 737 737 .n_linear_ranges = 1, 738 - .n_voltages = 61, 738 + .n_voltages = 128, 739 739 .ops = &rpm_mp5496_ops, 740 740 }; 741 741
+64 -18
drivers/scsi/ibmvscsi/ibmvfc.c
··· 160 160 static void ibmvfc_tgt_implicit_logout_and_del(struct ibmvfc_target *); 161 161 static void ibmvfc_tgt_move_login(struct ibmvfc_target *); 162 162 163 - static void ibmvfc_release_sub_crqs(struct ibmvfc_host *); 164 - static void ibmvfc_init_sub_crqs(struct ibmvfc_host *); 163 + static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *); 164 + static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *); 165 165 166 166 static const char *unknown_error = "unknown error"; 167 167 ··· 917 917 struct vio_dev *vdev = to_vio_dev(vhost->dev); 918 918 unsigned long flags; 919 919 920 - ibmvfc_release_sub_crqs(vhost); 920 + ibmvfc_dereg_sub_crqs(vhost); 921 921 922 922 /* Re-enable the CRQ */ 923 923 do { ··· 936 936 spin_unlock(vhost->crq.q_lock); 937 937 spin_unlock_irqrestore(vhost->host->host_lock, flags); 938 938 939 - ibmvfc_init_sub_crqs(vhost); 939 + ibmvfc_reg_sub_crqs(vhost); 940 940 941 941 return rc; 942 942 } ··· 955 955 struct vio_dev *vdev = to_vio_dev(vhost->dev); 956 956 struct ibmvfc_queue *crq = &vhost->crq; 957 957 958 - ibmvfc_release_sub_crqs(vhost); 958 + ibmvfc_dereg_sub_crqs(vhost); 959 959 960 960 /* Close the CRQ */ 961 961 do { ··· 988 988 spin_unlock(vhost->crq.q_lock); 989 989 spin_unlock_irqrestore(vhost->host->host_lock, flags); 990 990 991 - ibmvfc_init_sub_crqs(vhost); 991 + ibmvfc_reg_sub_crqs(vhost); 992 992 993 993 return rc; 994 994 } ··· 5682 5682 queue->cur = 0; 5683 5683 queue->fmt = fmt; 5684 5684 queue->size = PAGE_SIZE / fmt_size; 5685 + 5686 + queue->vhost = vhost; 5685 5687 return 0; 5686 5688 } 5687 5689 ··· 5759 5757 5760 5758 ENTER; 5761 5759 5762 - if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT)) 5763 - return -ENOMEM; 5764 - 5765 5760 rc = h_reg_sub_crq(vdev->unit_address, scrq->msg_token, PAGE_SIZE, 5766 5761 &scrq->cookie, &scrq->hw_irq); 5767 5762 ··· 5789 5790 } 5790 5791 5791 5792 scrq->hwq_id = index; 5792 - scrq->vhost = vhost; 5793 5793 5794 5794 LEAVE; 5795 5795 return 0; ··· 5798 5800 rc = 
plpar_hcall_norets(H_FREE_SUB_CRQ, vdev->unit_address, scrq->cookie); 5799 5801 } while (rtas_busy_delay(rc)); 5800 5802 reg_failed: 5801 - ibmvfc_free_queue(vhost, scrq); 5802 5803 LEAVE; 5803 5804 return rc; 5804 5805 } ··· 5823 5826 if (rc) 5824 5827 dev_err(dev, "Failed to free sub-crq[%d]: rc=%ld\n", index, rc); 5825 5828 5826 - ibmvfc_free_queue(vhost, scrq); 5829 + /* Clean out the queue */ 5830 + memset(scrq->msgs.crq, 0, PAGE_SIZE); 5831 + scrq->cur = 0; 5832 + 5833 + LEAVE; 5834 + } 5835 + 5836 + static void ibmvfc_reg_sub_crqs(struct ibmvfc_host *vhost) 5837 + { 5838 + int i, j; 5839 + 5840 + ENTER; 5841 + if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs) 5842 + return; 5843 + 5844 + for (i = 0; i < nr_scsi_hw_queues; i++) { 5845 + if (ibmvfc_register_scsi_channel(vhost, i)) { 5846 + for (j = i; j > 0; j--) 5847 + ibmvfc_deregister_scsi_channel(vhost, j - 1); 5848 + vhost->do_enquiry = 0; 5849 + return; 5850 + } 5851 + } 5852 + 5853 + LEAVE; 5854 + } 5855 + 5856 + static void ibmvfc_dereg_sub_crqs(struct ibmvfc_host *vhost) 5857 + { 5858 + int i; 5859 + 5860 + ENTER; 5861 + if (!vhost->mq_enabled || !vhost->scsi_scrqs.scrqs) 5862 + return; 5863 + 5864 + for (i = 0; i < nr_scsi_hw_queues; i++) 5865 + ibmvfc_deregister_scsi_channel(vhost, i); 5866 + 5827 5867 LEAVE; 5828 5868 } 5829 5869 5830 5870 static void ibmvfc_init_sub_crqs(struct ibmvfc_host *vhost) 5831 5871 { 5872 + struct ibmvfc_queue *scrq; 5832 5873 int i, j; 5833 5874 5834 5875 ENTER; ··· 5882 5847 } 5883 5848 5884 5849 for (i = 0; i < nr_scsi_hw_queues; i++) { 5885 - if (ibmvfc_register_scsi_channel(vhost, i)) { 5886 - for (j = i; j > 0; j--) 5887 - ibmvfc_deregister_scsi_channel(vhost, j - 1); 5850 + scrq = &vhost->scsi_scrqs.scrqs[i]; 5851 + if (ibmvfc_alloc_queue(vhost, scrq, IBMVFC_SUB_CRQ_FMT)) { 5852 + for (j = i; j > 0; j--) { 5853 + scrq = &vhost->scsi_scrqs.scrqs[j - 1]; 5854 + ibmvfc_free_queue(vhost, scrq); 5855 + } 5888 5856 kfree(vhost->scsi_scrqs.scrqs); 5889 5857 
vhost->scsi_scrqs.scrqs = NULL; 5890 5858 vhost->scsi_scrqs.active_queues = 0; 5891 5859 vhost->do_enquiry = 0; 5892 - break; 5860 + vhost->mq_enabled = 0; 5861 + return; 5893 5862 } 5894 5863 } 5864 + 5865 + ibmvfc_reg_sub_crqs(vhost); 5895 5866 5896 5867 LEAVE; 5897 5868 } 5898 5869 5899 5870 static void ibmvfc_release_sub_crqs(struct ibmvfc_host *vhost) 5900 5871 { 5872 + struct ibmvfc_queue *scrq; 5901 5873 int i; 5902 5874 5903 5875 ENTER; 5904 5876 if (!vhost->scsi_scrqs.scrqs) 5905 5877 return; 5906 5878 5907 - for (i = 0; i < nr_scsi_hw_queues; i++) 5908 - ibmvfc_deregister_scsi_channel(vhost, i); 5879 + ibmvfc_dereg_sub_crqs(vhost); 5880 + 5881 + for (i = 0; i < nr_scsi_hw_queues; i++) { 5882 + scrq = &vhost->scsi_scrqs.scrqs[i]; 5883 + ibmvfc_free_queue(vhost, scrq); 5884 + } 5909 5885 5910 5886 kfree(vhost->scsi_scrqs.scrqs); 5911 5887 vhost->scsi_scrqs.scrqs = NULL;
+1 -1
drivers/scsi/ibmvscsi/ibmvfc.h
··· 789 789 spinlock_t _lock; 790 790 spinlock_t *q_lock; 791 791 792 + struct ibmvfc_host *vhost; 792 793 struct ibmvfc_event_pool evt_pool; 793 794 struct list_head sent; 794 795 struct list_head free; ··· 798 797 union ibmvfc_iu cancel_rsp; 799 798 800 799 /* Sub-CRQ fields */ 801 - struct ibmvfc_host *vhost; 802 800 unsigned long cookie; 803 801 unsigned long vios_cookie; 804 802 unsigned long hw_irq;
+20 -2
drivers/scsi/scsi_debug.c
··· 2826 2826 } 2827 2827 } 2828 2828 2829 + static inline void zbc_set_zone_full(struct sdebug_dev_info *devip, 2830 + struct sdeb_zone_state *zsp) 2831 + { 2832 + switch (zsp->z_cond) { 2833 + case ZC2_IMPLICIT_OPEN: 2834 + devip->nr_imp_open--; 2835 + break; 2836 + case ZC3_EXPLICIT_OPEN: 2837 + devip->nr_exp_open--; 2838 + break; 2839 + default: 2840 + WARN_ONCE(true, "Invalid zone %llu condition %x\n", 2841 + zsp->z_start, zsp->z_cond); 2842 + break; 2843 + } 2844 + zsp->z_cond = ZC5_FULL; 2845 + } 2846 + 2829 2847 static void zbc_inc_wp(struct sdebug_dev_info *devip, 2830 2848 unsigned long long lba, unsigned int num) 2831 2849 { ··· 2856 2838 if (zsp->z_type == ZBC_ZTYPE_SWR) { 2857 2839 zsp->z_wp += num; 2858 2840 if (zsp->z_wp >= zend) 2859 - zsp->z_cond = ZC5_FULL; 2841 + zbc_set_zone_full(devip, zsp); 2860 2842 return; 2861 2843 } 2862 2844 ··· 2875 2857 n = num; 2876 2858 } 2877 2859 if (zsp->z_wp >= zend) 2878 - zsp->z_cond = ZC5_FULL; 2860 + zbc_set_zone_full(devip, zsp); 2879 2861 2880 2862 num -= n; 2881 2863 lba += n;
+6 -1
drivers/scsi/scsi_transport_iscsi.c
··· 212 212 return NULL; 213 213 214 214 mutex_lock(&iscsi_ep_idr_mutex); 215 - id = idr_alloc(&iscsi_ep_idr, ep, 0, -1, GFP_NOIO); 215 + 216 + /* 217 + * First endpoint id should be 1 to comply with user space 218 + * applications (iscsid). 219 + */ 220 + id = idr_alloc(&iscsi_ep_idr, ep, 1, -1, GFP_NOIO); 216 221 if (id < 0) { 217 222 mutex_unlock(&iscsi_ep_idr_mutex); 218 223 printk(KERN_ERR "Could not allocate endpoint ID. Error %d.\n",
+22 -5
drivers/scsi/storvsc_drv.c
··· 1844 1844 .cmd_per_lun = 2048, 1845 1845 .this_id = -1, 1846 1846 /* Ensure there are no gaps in presented sgls */ 1847 - .virt_boundary_mask = PAGE_SIZE-1, 1847 + .virt_boundary_mask = HV_HYP_PAGE_SIZE - 1, 1848 1848 .no_write_same = 1, 1849 1849 .track_queue_depth = 1, 1850 1850 .change_queue_depth = storvsc_change_queue_depth, ··· 1895 1895 int target = 0; 1896 1896 struct storvsc_device *stor_device; 1897 1897 int max_sub_channels = 0; 1898 + u32 max_xfer_bytes; 1898 1899 1899 1900 /* 1900 1901 * We support sub-channels for storage on SCSI and FC controllers. ··· 1969 1968 } 1970 1969 /* max cmd length */ 1971 1970 host->max_cmd_len = STORVSC_MAX_CMD_LEN; 1972 - 1973 1971 /* 1974 - * set the table size based on the info we got 1975 - * from the host. 1972 + * Any reasonable Hyper-V configuration should provide 1973 + * max_transfer_bytes value aligning to HV_HYP_PAGE_SIZE, 1974 + * protecting it from any weird value. 1976 1975 */ 1977 - host->sg_tablesize = (stor_device->max_transfer_bytes >> PAGE_SHIFT); 1976 + max_xfer_bytes = round_down(stor_device->max_transfer_bytes, HV_HYP_PAGE_SIZE); 1977 + /* max_hw_sectors_kb */ 1978 + host->max_sectors = max_xfer_bytes >> 9; 1979 + /* 1980 + * There are 2 requirements for Hyper-V storvsc sgl segments, 1981 + * based on which the below calculation for max segments is 1982 + * done: 1983 + * 1984 + * 1. Except for the first and last sgl segment, all sgl segments 1985 + * should be align to HV_HYP_PAGE_SIZE, that also means the 1986 + * maximum number of segments in a sgl can be calculated by 1987 + * dividing the total max transfer length by HV_HYP_PAGE_SIZE. 1988 + * 1989 + * 2. Except for the first and last, each entry in the SGL must 1990 + * have an offset that is a multiple of HV_HYP_PAGE_SIZE. 1991 + */ 1992 + host->sg_tablesize = (max_xfer_bytes >> HV_HYP_PAGE_SHIFT) + 1; 1978 1993 /* 1979 1994 * For non-IDE disks, the host supports multiple channels. 1980 1995 * Set the number of HW queues we are supporting.
+1
drivers/soc/bcm/brcmstb/pm/pm-arm.c
··· 783 783 } 784 784 785 785 ret = brcmstb_init_sram(dn); 786 + of_node_put(dn); 786 787 if (ret) { 787 788 pr_err("error setting up SRAM for PM\n"); 788 789 return ret;
+1 -1
drivers/soc/imx/imx8m-blk-ctrl.c
··· 667 667 }, 668 668 [IMX8MP_MEDIABLK_PD_LCDIF_2] = { 669 669 .name = "mediablk-lcdif-2", 670 - .clk_names = (const char *[]){ "disp1", "apb", "axi", }, 670 + .clk_names = (const char *[]){ "disp2", "apb", "axi", }, 671 671 .num_clks = 3, 672 672 .gpc_name = "lcdif2", 673 673 .rst_mask = BIT(11) | BIT(12) | BIT(24),
+31 -6
drivers/spi/spi-cadence.c
··· 69 69 #define CDNS_SPI_BAUD_DIV_SHIFT 3 /* Baud rate divisor shift in CR */ 70 70 #define CDNS_SPI_SS_SHIFT 10 /* Slave Select field shift in CR */ 71 71 #define CDNS_SPI_SS0 0x1 /* Slave Select zero */ 72 + #define CDNS_SPI_NOSS 0x3C /* No Slave select */ 72 73 73 74 /* 74 75 * SPI Interrupt Registers bit Masks ··· 93 92 #define CDNS_SPI_ER_ENABLE 0x00000001 /* SPI Enable Bit Mask */ 94 93 #define CDNS_SPI_ER_DISABLE 0x0 /* SPI Disable Bit Mask */ 95 94 96 - /* SPI FIFO depth in bytes */ 97 - #define CDNS_SPI_FIFO_DEPTH 128 98 - 99 95 /* Default number of chip select lines */ 100 96 #define CDNS_SPI_DEFAULT_NUM_CS 4 101 97 ··· 108 110 * @rx_bytes: Number of bytes requested 109 111 * @dev_busy: Device busy flag 110 112 * @is_decoded_cs: Flag for decoder property set or not 113 + * @tx_fifo_depth: Depth of the TX FIFO 111 114 */ 112 115 struct cdns_spi { 113 116 void __iomem *regs; ··· 122 123 int rx_bytes; 123 124 u8 dev_busy; 124 125 u32 is_decoded_cs; 126 + unsigned int tx_fifo_depth; 125 127 }; 126 128 127 129 /* Macros for the SPI controller read/write */ ··· 304 304 { 305 305 unsigned long trans_cnt = 0; 306 306 307 - while ((trans_cnt < CDNS_SPI_FIFO_DEPTH) && 307 + while ((trans_cnt < xspi->tx_fifo_depth) && 308 308 (xspi->tx_bytes > 0)) { 309 309 310 310 /* When xspi in busy condition, bytes may send failed, ··· 450 450 * @master: Pointer to the spi_master structure which provides 451 451 * information about the controller. 452 452 * 453 - * This function disables the SPI master controller. 453 + * This function disables the SPI master controller when no slave selected. 454 454 * 455 455 * Return: 0 always 456 456 */ 457 457 static int cdns_unprepare_transfer_hardware(struct spi_master *master) 458 458 { 459 459 struct cdns_spi *xspi = spi_master_get_devdata(master); 460 + u32 ctrl_reg; 460 461 461 - cdns_spi_write(xspi, CDNS_SPI_ER, CDNS_SPI_ER_DISABLE); 462 + /* Disable the SPI if slave is deselected */ 463 + ctrl_reg = cdns_spi_read(xspi, CDNS_SPI_CR); 464 + ctrl_reg = (ctrl_reg & CDNS_SPI_CR_SSCTRL) >> CDNS_SPI_SS_SHIFT; 465 + if (ctrl_reg == CDNS_SPI_NOSS) 466 + cdns_spi_write(xspi, CDNS_SPI_ER, CDNS_SPI_ER_DISABLE); 462 467 463 468 return 0; 469 + } 470 + 471 + /** 472 + * cdns_spi_detect_fifo_depth - Detect the FIFO depth of the hardware 473 + * @xspi: Pointer to the cdns_spi structure 474 + * 475 + * The depth of the TX FIFO is a synthesis configuration parameter of the SPI 476 + * IP. The FIFO threshold register is sized so that its maximum value can be the 477 + * FIFO size - 1. This is used to detect the size of the FIFO. 478 + */ 479 + static void cdns_spi_detect_fifo_depth(struct cdns_spi *xspi) 480 + { 481 + /* The MSBs will get truncated giving us the size of the FIFO */ 482 + cdns_spi_write(xspi, CDNS_SPI_THLD, 0xffff); 483 + xspi->tx_fifo_depth = cdns_spi_read(xspi, CDNS_SPI_THLD) + 1; 484 + 485 + /* Reset to default */ 486 + cdns_spi_write(xspi, CDNS_SPI_THLD, 0x1); 464 487 } 465 488 466 489 /** ··· 557 534 &xspi->is_decoded_cs); 558 535 if (ret < 0) 559 536 xspi->is_decoded_cs = 0; 537 + 538 + cdns_spi_detect_fifo_depth(xspi); 560 539 561 540 /* SPI controller initializations */ 562 541 cdns_spi_init_hw(xspi);
+1 -1
drivers/spi/spi-mem.c
··· 808 808 op->data.dir != SPI_MEM_DATA_IN) 809 809 return -EINVAL; 810 810 811 - if (ctlr->mem_ops && ctlr->mem_ops->poll_status) { 811 + if (ctlr->mem_ops && ctlr->mem_ops->poll_status && !mem->spi->cs_gpiod) { 812 812 ret = spi_mem_access_start(mem); 813 813 if (ret) 814 814 return ret;
+7 -4
drivers/spi/spi-rockchip.c
··· 381 381 rs->tx_left = rs->tx ? xfer->len / rs->n_bytes : 0; 382 382 rs->rx_left = xfer->len / rs->n_bytes; 383 383 384 - if (rs->cs_inactive) 385 - writel_relaxed(INT_RF_FULL | INT_CS_INACTIVE, rs->regs + ROCKCHIP_SPI_IMR); 386 - else 387 - writel_relaxed(INT_RF_FULL, rs->regs + ROCKCHIP_SPI_IMR); 384 + writel_relaxed(0xffffffff, rs->regs + ROCKCHIP_SPI_ICR); 385 + 388 386 spi_enable_chip(rs, true); 389 387 390 388 if (rs->tx_left) 391 389 rockchip_spi_pio_writer(rs); 390 + 391 + if (rs->cs_inactive) 392 + writel_relaxed(INT_RF_FULL | INT_CS_INACTIVE, rs->regs + ROCKCHIP_SPI_IMR); 393 + else 394 + writel_relaxed(INT_RF_FULL, rs->regs + ROCKCHIP_SPI_IMR); 392 395 393 396 /* 1 means the transfer is in progress */ 394 397 return 1;
-2
drivers/tty/sysrq.c
··· 581 581 582 582 rcu_sysrq_start(); 583 583 rcu_read_lock(); 584 - printk_prefer_direct_enter(); 585 584 /* 586 585 * Raise the apparent loglevel to maximum so that the sysrq header 587 586 * is shown to provide the user with positive feedback. We do not ··· 622 623 pr_cont("\n"); 623 624 console_loglevel = orig_log_level; 624 625 } 625 - printk_prefer_direct_exit(); 626 626 rcu_read_unlock(); 627 627 rcu_sysrq_end(); 628 628
+48 -28
drivers/ufs/core/ufshcd.c
··· 748 748 } 749 749 750 750 /** 751 - * ufshcd_utrl_clear - Clear a bit in UTRLCLR register 751 + * ufshcd_utrl_clear() - Clear requests from the controller request list. 752 752 * @hba: per adapter instance 753 - * @pos: position of the bit to be cleared 753 + * @mask: mask with one bit set for each request to be cleared 754 754 */ 755 - static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 pos) 755 + static inline void ufshcd_utrl_clear(struct ufs_hba *hba, u32 mask) 756 756 { 757 757 if (hba->quirks & UFSHCI_QUIRK_BROKEN_REQ_LIST_CLR) 758 - ufshcd_writel(hba, (1 << pos), REG_UTP_TRANSFER_REQ_LIST_CLEAR); 759 - else 760 - ufshcd_writel(hba, ~(1 << pos), 761 - REG_UTP_TRANSFER_REQ_LIST_CLEAR); 758 + mask = ~mask; 759 + /* 760 + * From the UFSHCI specification: "UTP Transfer Request List CLear 761 + * Register (UTRLCLR): This field is bit significant. Each bit 762 + * corresponds to a slot in the UTP Transfer Request List, where bit 0 763 + * corresponds to request slot 0. A bit in this field is set to ‘0’ 764 + * by host software to indicate to the host controller that a transfer 765 + * request slot is cleared. The host controller 766 + * shall free up any resources associated to the request slot 767 + * immediately, and shall set the associated bit in UTRLDBR to ‘0’. The 768 + * host software indicates no change to request slots by setting the 769 + * associated bits in this field to ‘1’. Bits in this field shall only 770 + * be set ‘1’ or ‘0’ by host software when UTRLRSR is set to ‘1’." 771 + */ 772 + ufshcd_writel(hba, ~mask, REG_UTP_TRANSFER_REQ_LIST_CLEAR); 762 773 } 763 774 764 775 /** ··· 2874 2863 return ufshcd_compose_devman_upiu(hba, lrbp); 2875 2864 } 2876 2865 2877 - static int 2878 - ufshcd_clear_cmd(struct ufs_hba *hba, int tag) 2866 + /* 2867 + * Clear all the requests from the controller for which a bit has been set in 2868 + * @mask and wait until the controller confirms that these requests have been 2869 + * cleared. 
2870 + */ 2871 + static int ufshcd_clear_cmds(struct ufs_hba *hba, u32 mask) 2879 2872 { 2880 - int err = 0; 2881 2873 unsigned long flags; 2882 - u32 mask = 1 << tag; 2883 2874 2884 2875 /* clear outstanding transaction before retry */ 2885 2876 spin_lock_irqsave(hba->host->host_lock, flags); 2886 - ufshcd_utrl_clear(hba, tag); 2877 + ufshcd_utrl_clear(hba, mask); 2887 2878 spin_unlock_irqrestore(hba->host->host_lock, flags); 2888 2879 2889 2880 /* 2890 2881 * wait for h/w to clear corresponding bit in door-bell. 2891 2882 * max. wait is 1 sec. 2892 2883 */ 2893 - err = ufshcd_wait_for_register(hba, 2894 - REG_UTP_TRANSFER_REQ_DOOR_BELL, 2895 - mask, ~mask, 1000, 1000); 2896 - 2897 - return err; 2884 + return ufshcd_wait_for_register(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL, 2885 + mask, ~mask, 1000, 1000); 2898 2886 } 2899 2887 2900 2888 static int ··· 2973 2963 err = -ETIMEDOUT; 2974 2964 dev_dbg(hba->dev, "%s: dev_cmd request timedout, tag %d\n", 2975 2965 __func__, lrbp->task_tag); 2976 - if (!ufshcd_clear_cmd(hba, lrbp->task_tag)) 2966 + if (!ufshcd_clear_cmds(hba, 1U << lrbp->task_tag)) 2977 2967 /* successfully cleared the command, retry if needed */ 2978 2968 err = -EAGAIN; 2979 2969 /* ··· 6968 6958 } 6969 6959 6970 6960 /** 6971 - * ufshcd_eh_device_reset_handler - device reset handler registered to 6972 - * scsi layer. 6961 + * ufshcd_eh_device_reset_handler() - Reset a single logical unit. 
6973 6962 * @cmd: SCSI command pointer 6974 6963 * 6975 6964 * Returns SUCCESS/FAILED 6976 6965 */ 6977 6966 static int ufshcd_eh_device_reset_handler(struct scsi_cmnd *cmd) 6978 6967 { 6968 + unsigned long flags, pending_reqs = 0, not_cleared = 0; 6979 6969 struct Scsi_Host *host; 6980 6970 struct ufs_hba *hba; 6981 6971 u32 pos; ··· 6994 6984 } 6995 6985 6996 6986 /* clear the commands that were pending for corresponding LUN */ 6997 - for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) { 6998 - if (hba->lrb[pos].lun == lun) { 6999 - err = ufshcd_clear_cmd(hba, pos); 7000 - if (err) 7001 - break; 7002 - __ufshcd_transfer_req_compl(hba, 1U << pos); 7003 - } 6987 + spin_lock_irqsave(&hba->outstanding_lock, flags); 6988 + for_each_set_bit(pos, &hba->outstanding_reqs, hba->nutrs) 6989 + if (hba->lrb[pos].lun == lun) 6990 + __set_bit(pos, &pending_reqs); 6991 + hba->outstanding_reqs &= ~pending_reqs; 6992 + spin_unlock_irqrestore(&hba->outstanding_lock, flags); 6993 + 6994 + if (ufshcd_clear_cmds(hba, pending_reqs) < 0) { 6995 + spin_lock_irqsave(&hba->outstanding_lock, flags); 6996 + not_cleared = pending_reqs & 6997 + ufshcd_readl(hba, REG_UTP_TRANSFER_REQ_DOOR_BELL); 6998 + hba->outstanding_reqs |= not_cleared; 6999 + spin_unlock_irqrestore(&hba->outstanding_lock, flags); 7000 + 7001 + dev_err(hba->dev, "%s: failed to clear requests %#lx\n", 7002 + __func__, not_cleared); 7004 7003 } 7004 + __ufshcd_transfer_req_compl(hba, pending_reqs & ~not_cleared); 7005 7005 7006 7006 out: 7007 7007 hba->req_abort_count = 0; ··· 7108 7088 goto out; 7109 7089 } 7110 7090 7111 - err = ufshcd_clear_cmd(hba, tag); 7091 + err = ufshcd_clear_cmds(hba, 1U << tag); 7112 7092 if (err) 7113 7093 dev_err(hba->dev, "%s: Failed clearing cmd at tag %d, err %d\n", 7114 7094 __func__, tag, err);
+3
drivers/usb/chipidea/udc.c
··· 1048 1048 struct ci_hdrc *ci = req->context; 1049 1049 unsigned long flags; 1050 1050 1051 + if (req->status < 0) 1052 + return; 1053 + 1051 1054 if (ci->setaddr) { 1052 1055 hw_usb_set_address(ci, ci->address); 1053 1056 ci->setaddr = false;
+3
drivers/usb/gadget/function/uvc_video.c
··· 428 428 uvcg_queue_cancel(queue, 0); 429 429 break; 430 430 } 431 + 432 + /* Endpoint now owns the request */ 433 + req = NULL; 431 434 video->req_int_count++; 432 435 } 433 436
+47 -16
drivers/usb/gadget/legacy/raw_gadget.c
··· 11 11 #include <linux/ctype.h> 12 12 #include <linux/debugfs.h> 13 13 #include <linux/delay.h> 14 + #include <linux/idr.h> 14 15 #include <linux/kref.h> 15 16 #include <linux/miscdevice.h> 16 17 #include <linux/module.h> ··· 36 35 MODULE_LICENSE("GPL"); 37 36 38 37 /*----------------------------------------------------------------------*/ 38 + 39 + static DEFINE_IDA(driver_id_numbers); 40 + #define DRIVER_DRIVER_NAME_LENGTH_MAX 32 39 41 40 42 #define RAW_EVENT_QUEUE_SIZE 16 41 43 ··· 165 161 /* Reference to misc device: */ 166 162 struct device *dev; 167 163 164 + /* Make driver names unique */ 165 + int driver_id_number; 166 + 168 167 /* Protected by lock: */ 169 168 enum dev_state state; 170 169 bool gadget_registered; ··· 196 189 spin_lock_init(&dev->lock); 197 190 init_completion(&dev->ep0_done); 198 191 raw_event_queue_init(&dev->queue); 192 + dev->driver_id_number = -1; 199 193 return dev; 200 194 } 201 195 ··· 207 199 208 200 kfree(dev->udc_name); 209 201 kfree(dev->driver.udc_name); 202 + kfree(dev->driver.driver.name); 203 + if (dev->driver_id_number >= 0) 204 + ida_free(&driver_id_numbers, dev->driver_id_number); 210 205 if (dev->req) { 211 206 if (dev->ep0_urb_queued) 212 207 usb_ep_dequeue(dev->gadget->ep0, dev->req); ··· 430 419 static int raw_ioctl_init(struct raw_dev *dev, unsigned long value) 431 420 { 432 421 int ret = 0; 422 + int driver_id_number; 433 423 struct usb_raw_init arg; 434 424 char *udc_driver_name; 435 425 char *udc_device_name; 426 + char *driver_driver_name; 436 427 unsigned long flags; 437 428 438 429 if (copy_from_user(&arg, (void __user *)value, sizeof(arg))) ··· 453 440 return -EINVAL; 454 441 } 455 442 443 + driver_id_number = ida_alloc(&driver_id_numbers, GFP_KERNEL); 444 + if (driver_id_number < 0) 445 + return driver_id_number; 446 + 447 + driver_driver_name = kmalloc(DRIVER_DRIVER_NAME_LENGTH_MAX, GFP_KERNEL); 448 + if (!driver_driver_name) { 449 + ret = -ENOMEM; 450 + goto out_free_driver_id_number; 451 + } 452 + 
snprintf(driver_driver_name, DRIVER_DRIVER_NAME_LENGTH_MAX, 453 + DRIVER_NAME ".%d", driver_id_number); 454 + 456 455 udc_driver_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL); 457 - if (!udc_driver_name) 458 - return -ENOMEM; 456 + if (!udc_driver_name) { 457 + ret = -ENOMEM; 458 + goto out_free_driver_driver_name; 459 + } 459 460 ret = strscpy(udc_driver_name, &arg.driver_name[0], 460 461 UDC_NAME_LENGTH_MAX); 461 - if (ret < 0) { 462 - kfree(udc_driver_name); 463 - return ret; 464 - } 462 + if (ret < 0) 463 + goto out_free_udc_driver_name; 465 464 ret = 0; 466 465 467 466 udc_device_name = kmalloc(UDC_NAME_LENGTH_MAX, GFP_KERNEL); 468 467 if (!udc_device_name) { 469 - kfree(udc_driver_name); 470 - return -ENOMEM; 468 + ret = -ENOMEM; 469 + goto out_free_udc_driver_name; 471 470 } 472 471 ret = strscpy(udc_device_name, &arg.device_name[0], 473 472 UDC_NAME_LENGTH_MAX); 474 - if (ret < 0) { 475 - kfree(udc_driver_name); 476 - kfree(udc_device_name); 477 - return ret; 478 - } 473 + if (ret < 0) 474 + goto out_free_udc_device_name; 479 475 ret = 0; 480 476 481 477 spin_lock_irqsave(&dev->lock, flags); 482 478 if (dev->state != STATE_DEV_OPENED) { 483 479 dev_dbg(dev->dev, "fail, device is not opened\n"); 484 - kfree(udc_driver_name); 485 - kfree(udc_device_name); 486 480 ret = -EINVAL; 487 481 goto out_unlock; 488 482 } ··· 504 484 dev->driver.suspend = gadget_suspend; 505 485 dev->driver.resume = gadget_resume; 506 486 dev->driver.reset = gadget_reset; 507 - dev->driver.driver.name = DRIVER_NAME; 487 + dev->driver.driver.name = driver_driver_name; 508 488 dev->driver.udc_name = udc_device_name; 509 489 dev->driver.match_existing_only = 1; 490 + dev->driver_id_number = driver_id_number; 510 491 511 492 dev->state = STATE_DEV_INITIALIZED; 493 + spin_unlock_irqrestore(&dev->lock, flags); 494 + return ret; 512 495 513 496 out_unlock: 514 497 spin_unlock_irqrestore(&dev->lock, flags); 498 + out_free_udc_device_name: 499 + kfree(udc_device_name); 500 + 
out_free_udc_driver_name: 501 + kfree(udc_driver_name); 502 + out_free_driver_driver_name: 503 + kfree(driver_driver_name); 504 + out_free_driver_id_number: 505 + ida_free(&driver_id_numbers, driver_id_number); 515 506 return ret; 516 507 } 517 508
+1 -1
drivers/usb/host/xhci-hub.c
··· 652 652 * It will release and re-aquire the lock while calling ACPI 653 653 * method. 654 654 */ 655 - static void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, 655 + void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, 656 656 u16 index, bool on, unsigned long *flags) 657 657 __must_hold(&xhci->lock) 658 658 {
+5 -1
drivers/usb/host/xhci-pci.c
··· 61 61 #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI 0x461e 62 62 #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI 0x464e 63 63 #define PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI 0x51ed 64 + #define PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI 0xa71e 65 + #define PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI 0x7ec0 64 66 65 67 #define PCI_DEVICE_ID_AMD_RENOIR_XHCI 0x1639 66 68 #define PCI_DEVICE_ID_AMD_PROMONTORYA_4 0x43b9 ··· 271 269 pdev->device == PCI_DEVICE_ID_INTEL_MAPLE_RIDGE_XHCI || 272 270 pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_XHCI || 273 271 pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_N_XHCI || 274 - pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI)) 272 + pdev->device == PCI_DEVICE_ID_INTEL_ALDER_LAKE_PCH_XHCI || 273 + pdev->device == PCI_DEVICE_ID_INTEL_RAPTOR_LAKE_XHCI || 274 + pdev->device == PCI_DEVICE_ID_INTEL_METEOR_LAKE_XHCI)) 275 275 xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW; 276 276 277 277 if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+35 -15
drivers/usb/host/xhci.c
··· 611 611 612 612 static int xhci_run_finished(struct xhci_hcd *xhci) 613 613 { 614 + unsigned long flags; 615 + u32 temp; 616 + 617 + /* 618 + * Enable interrupts before starting the host (xhci 4.2 and 5.5.2). 619 + * Protect the short window before host is running with a lock 620 + */ 621 + spin_lock_irqsave(&xhci->lock, flags); 622 + 623 + xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Enable interrupts"); 624 + temp = readl(&xhci->op_regs->command); 625 + temp |= (CMD_EIE); 626 + writel(temp, &xhci->op_regs->command); 627 + 628 + xhci_dbg_trace(xhci, trace_xhci_dbg_init, "Enable primary interrupter"); 629 + temp = readl(&xhci->ir_set->irq_pending); 630 + writel(ER_IRQ_ENABLE(temp), &xhci->ir_set->irq_pending); 631 + 614 632 if (xhci_start(xhci)) { 615 633 xhci_halt(xhci); 634 + spin_unlock_irqrestore(&xhci->lock, flags); 616 635 return -ENODEV; 617 636 } 637 + 618 638 xhci->cmd_ring_state = CMD_RING_STATE_RUNNING; 619 639 620 640 if (xhci->quirks & XHCI_NEC_HOST) 621 641 xhci_ring_cmd_db(xhci); 642 + 643 + spin_unlock_irqrestore(&xhci->lock, flags); 622 644 623 645 return 0; 624 646 } ··· 689 667 temp &= ~ER_IRQ_INTERVAL_MASK; 690 668 temp |= (xhci->imod_interval / 250) & ER_IRQ_INTERVAL_MASK; 691 669 writel(temp, &xhci->ir_set->irq_control); 692 - 693 - /* Set the HCD state before we enable the irqs */ 694 - temp = readl(&xhci->op_regs->command); 695 - temp |= (CMD_EIE); 696 - xhci_dbg_trace(xhci, trace_xhci_dbg_init, 697 - "// Enable interrupts, cmd = 0x%x.", temp); 698 - writel(temp, &xhci->op_regs->command); 699 - 700 - temp = readl(&xhci->ir_set->irq_pending); 701 - xhci_dbg_trace(xhci, trace_xhci_dbg_init, 702 - "// Enabling event ring interrupter %p by writing 0x%x to irq_pending", 703 - xhci->ir_set, (unsigned int) ER_IRQ_ENABLE(temp)); 704 - writel(ER_IRQ_ENABLE(temp), &xhci->ir_set->irq_pending); 705 670 706 671 if (xhci->quirks & XHCI_NEC_HOST) { 707 672 struct xhci_command *command; ··· 791 782 void xhci_shutdown(struct usb_hcd *hcd) 792 783 { 793 784 struct xhci_hcd *xhci = hcd_to_xhci(hcd); 785 + unsigned long flags; 786 + int i; 794 787 795 788 if (xhci->quirks & XHCI_SPURIOUS_REBOOT) 796 789 usb_disable_xhci_ports(to_pci_dev(hcd->self.sysdev)); ··· 808 797 del_timer_sync(&xhci->shared_hcd->rh_timer); 809 798 } 810 799 811 - spin_lock_irq(&xhci->lock); 800 + spin_lock_irqsave(&xhci->lock, flags); 812 801 xhci_halt(xhci); 802 + 803 + /* Power off USB2 ports*/ 804 + for (i = 0; i < xhci->usb2_rhub.num_ports; i++) 805 + xhci_set_port_power(xhci, xhci->main_hcd, i, false, &flags); 806 + 807 + /* Power off USB3 ports*/ 808 + for (i = 0; i < xhci->usb3_rhub.num_ports; i++) 809 + xhci_set_port_power(xhci, xhci->shared_hcd, i, false, &flags); 810 + 813 811 /* Workaround for spurious wakeups at shutdown with HSW */ 814 812 if (xhci->quirks & XHCI_SPURIOUS_WAKEUP) 815 813 xhci_reset(xhci, XHCI_RESET_SHORT_USEC); 816 - spin_unlock_irq(&xhci->lock); 814 + spin_unlock_irqrestore(&xhci->lock, flags); 817 815 818 816 xhci_cleanup_msix(xhci);
+2
drivers/usb/host/xhci.h
··· 2196 2196 int xhci_hub_status_data(struct usb_hcd *hcd, char *buf); 2197 2197 int xhci_find_raw_port_number(struct usb_hcd *hcd, int port1); 2198 2198 struct xhci_hub *xhci_get_rhub(struct usb_hcd *hcd); 2199 + void xhci_set_port_power(struct xhci_hcd *xhci, struct usb_hcd *hcd, u16 index, 2200 + bool on, unsigned long *flags); 2199 2201 2200 2202 void xhci_hc_died(struct xhci_hcd *xhci); 2201 2203
+6
drivers/usb/serial/option.c
··· 252 252 #define QUECTEL_PRODUCT_EG95 0x0195 253 253 #define QUECTEL_PRODUCT_BG96 0x0296 254 254 #define QUECTEL_PRODUCT_EP06 0x0306 255 + #define QUECTEL_PRODUCT_EM05G 0x030a 255 256 #define QUECTEL_PRODUCT_EM12 0x0512 256 257 #define QUECTEL_PRODUCT_RM500Q 0x0800 257 258 #define QUECTEL_PRODUCT_EC200S_CN 0x6002 258 259 #define QUECTEL_PRODUCT_EC200T 0x6026 260 + #define QUECTEL_PRODUCT_RM500K 0x7001 259 261 260 262 #define CMOTECH_VENDOR_ID 0x16d8 261 263 #define CMOTECH_PRODUCT_6001 0x6001 ··· 1136 1134 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0xff, 0xff), 1137 1135 .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 }, 1138 1136 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EP06, 0xff, 0, 0) }, 1137 + { USB_DEVICE_INTERFACE_CLASS(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM05G, 0xff), 1138 + .driver_info = RSVD(6) | ZLP }, 1139 1139 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0xff, 0xff), 1140 1140 .driver_info = RSVD(1) | RSVD(2) | RSVD(3) | RSVD(4) | NUMEP2 }, 1141 1141 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EM12, 0xff, 0, 0) }, ··· 1151 1147 .driver_info = ZLP }, 1152 1148 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200S_CN, 0xff, 0, 0) }, 1153 1149 { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC200T, 0xff, 0, 0) }, 1150 + { USB_DEVICE_AND_INTERFACE_INFO(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_RM500K, 0xff, 0x00, 0x00) }, 1154 1151 1155 1152 { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) }, 1156 1153 { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) }, ··· 1284 1279 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1285 1280 { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1231, 0xff), /* Telit LE910Cx (RNDIS) */ 1286 1281 .driver_info = NCTRL(2) | RSVD(3) }, 1282 + { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x1250, 0xff, 0x00, 0x00) }, /* Telit LE910Cx (rmnet) */ 1287 1283 { USB_DEVICE(TELIT_VENDOR_ID, 0x1260), 1288 1284 .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) }, 1289 1285 { USB_DEVICE(TELIT_VENDOR_ID, 0x1261),
+17 -12
drivers/usb/serial/pl2303.c
··· 436 436 break; 437 437 case 0x200: 438 438 switch (bcdDevice) { 439 - case 0x100: 439 + case 0x100: /* GC */ 440 440 case 0x105: 441 + return TYPE_HXN; 442 + case 0x300: /* GT / TA */ 443 + if (pl2303_supports_hx_status(serial)) 444 + return TYPE_TA; 445 + fallthrough; 441 446 case 0x305: 447 + case 0x400: /* GL */ 442 448 case 0x405: 449 + return TYPE_HXN; 450 + case 0x500: /* GE / TB */ 451 + if (pl2303_supports_hx_status(serial)) 452 + return TYPE_TB; 453 + fallthrough; 454 + case 0x505: 455 + case 0x600: /* GS */ 443 456 case 0x605: 444 - /* 445 - * Assume it's an HXN-type if the device doesn't 446 - * support the old read request value. 447 - */ 448 - if (!pl2303_supports_hx_status(serial)) 449 - return TYPE_HXN; 450 - break; 451 - case 0x300: 452 - return TYPE_TA; 453 - case 0x500: 454 - return TYPE_TB; 457 + case 0x700: /* GR */ 458 + case 0x705: 459 + return TYPE_HXN; 455 460 } 456 461 break; 457 462 }
-1
drivers/usb/typec/tcpm/Kconfig
··· 56 56 tristate "Intel WhiskeyCove PMIC USB Type-C PHY driver" 57 57 depends on ACPI 58 58 depends on MFD_INTEL_PMC_BXT 59 - depends on INTEL_SOC_PMIC 60 59 depends on BXT_WC_PMIC_OPREGION 61 60 help 62 61 This driver adds support for USB Type-C on Intel Broxton platforms
+2
drivers/video/console/sticore.c
··· 1148 1148 return ret; 1149 1149 } 1150 1150 1151 + #if defined(CONFIG_FB_STI) 1151 1152 /* check if given fb_info is the primary device */ 1152 1153 int fb_is_primary_device(struct fb_info *info) 1153 1154 { ··· 1164 1163 return (sti->info == info); 1165 1164 } 1166 1165 EXPORT_SYMBOL(fb_is_primary_device); 1166 + #endif 1167 1167 1168 1168 MODULE_AUTHOR("Philipp Rumpf, Helge Deller, Thomas Bogendoerfer"); 1169 1169 MODULE_DESCRIPTION("Core STI driver for HP's NGLE series graphics cards in HP PARISC machines");
+2 -4
drivers/video/fbdev/au1100fb.c
··· 560 560 /* Blank the LCD */ 561 561 au1100fb_fb_blank(VESA_POWERDOWN, &fbdev->info); 562 562 563 - if (fbdev->lcdclk) 564 - clk_disable(fbdev->lcdclk); 563 + clk_disable(fbdev->lcdclk); 565 564 566 565 memcpy(&fbregs, fbdev->regs, sizeof(struct au1100fb_regs)); 567 566 ··· 576 577 577 578 memcpy(fbdev->regs, &fbregs, sizeof(struct au1100fb_regs)); 578 579 579 - if (fbdev->lcdclk) 580 - clk_enable(fbdev->lcdclk); 580 + clk_enable(fbdev->lcdclk); 581 581 582 582 /* Unblank the LCD */ 583 583 au1100fb_fb_blank(VESA_NO_BLANKING, &fbdev->info);
-6
drivers/video/fbdev/cirrusfb.c
··· 2184 2184 .id_table = cirrusfb_pci_table, 2185 2185 .probe = cirrusfb_pci_register, 2186 2186 .remove = cirrusfb_pci_unregister, 2187 - #ifdef CONFIG_PM 2188 - #if 0 2189 - .suspend = cirrusfb_pci_suspend, 2190 - .resume = cirrusfb_pci_resume, 2191 - #endif 2192 - #endif 2193 2187 }; 2194 2188 #endif /* CONFIG_PCI */ 2195 2189
+2 -2
drivers/video/fbdev/intelfb/intelfbdrv.c
··· 472 472 struct fb_info *info; 473 473 struct intelfb_info *dinfo; 474 474 int i, err, dvo; 475 - int aperture_size, stolen_size; 475 + int aperture_size, stolen_size = 0; 476 476 struct agp_kern_info gtt_info; 477 477 int agp_memtype; 478 478 const char *s; ··· 571 571 return -ENODEV; 572 572 } 573 573 574 - if (intelfbhw_get_memory(pdev, &aperture_size,&stolen_size)) { 574 + if (intelfbhw_get_memory(pdev, &aperture_size, &stolen_size)) { 575 575 cleanup(dinfo); 576 576 return -ENODEV; 577 577 }
+5 -7
drivers/video/fbdev/intelfb/intelfbhw.c
··· 201 201 case PCI_DEVICE_ID_INTEL_945GME: 202 202 case PCI_DEVICE_ID_INTEL_965G: 203 203 case PCI_DEVICE_ID_INTEL_965GM: 204 - /* 915, 945 and 965 chipsets support a 256MB aperture. 205 - Aperture size is determined by inspected the 206 - base address of the aperture. */ 207 - if (pci_resource_start(pdev, 2) & 0x08000000) 208 - *aperture_size = MB(128); 209 - else 210 - *aperture_size = MB(256); 204 + /* 205 + * 915, 945 and 965 chipsets support 64MB, 128MB or 256MB 206 + * aperture. Determine size from PCI resource length. 207 + */ 208 + *aperture_size = pci_resource_len(pdev, 2); 211 209 break; 212 210 default: 213 211 if ((tmp & INTEL_GMCH_MEM_MASK) == INTEL_GMCH_MEM_64M)
+1 -1
drivers/video/fbdev/omap/sossi.c
··· 359 359 int bus_pick_count, bus_pick_width; 360 360 361 361 /* 362 - * We set explicitly the the bus_pick_count as well, although 362 + * We set explicitly the bus_pick_count as well, although 363 363 * with remapping/reordering disabled it will be calculated by HW 364 364 * as (32 / bus_pick_width). 365 365 */
+1 -1
drivers/video/fbdev/omap2/omapfb/dss/hdmi_phy.c
··· 143 143 /* 144 144 * In OMAP5+, the HFBITCLK must be divided by 2 before issuing the 145 145 * HDMI_PHYPWRCMD_LDOON command. 146 - */ 146 + */ 147 147 if (phy_feat->bist_ctrl) 148 148 REG_FLD_MOD(phy->base, HDMI_TXPHY_BIST_CONTROL, 1, 11, 11); 149 149
+1 -1
drivers/video/fbdev/pxa3xx-gcu.c
··· 381 381 struct pxa3xx_gcu_batch *buffer; 382 382 struct pxa3xx_gcu_priv *priv = to_pxa3xx_gcu_priv(file); 383 383 384 - int words = count / 4; 384 + size_t words = count / 4; 385 385 386 386 /* Does not need to be atomic. There's a lock in user space, 387 387 * but anyhow, this is just for statistics. */
+1 -2
drivers/video/fbdev/simplefb.c
··· 237 237 if (IS_ERR(clock)) { 238 238 if (PTR_ERR(clock) == -EPROBE_DEFER) { 239 239 while (--i >= 0) { 240 - if (par->clks[i]) 241 - clk_put(par->clks[i]); 240 + clk_put(par->clks[i]); 242 241 } 243 242 kfree(par->clks); 244 243 return -EPROBE_DEFER;
+8 -7
drivers/video/fbdev/skeletonfb.c
··· 96 96 97 97 /* 98 98 * Modern graphical hardware not only supports pipelines but some 99 - * also support multiple monitors where each display can have its 99 + * also support multiple monitors where each display can have 100 100 * its own unique data. In this case each display could be 101 101 * represented by a separate framebuffer device thus a separate 102 102 * struct fb_info. Now the struct xxx_par represents the graphics ··· 838 838 * 839 839 * See Documentation/driver-api/pm/devices.rst for more information 840 840 */ 841 - static int xxxfb_suspend(struct pci_dev *dev, pm_message_t msg) 841 + static int xxxfb_suspend(struct device *dev) 842 842 { 843 - struct fb_info *info = pci_get_drvdata(dev); 843 + struct fb_info *info = dev_get_drvdata(dev); 844 844 struct xxxfb_par *par = info->par; 845 845 846 846 /* suspend here */ ··· 853 853 * 854 854 * See Documentation/driver-api/pm/devices.rst for more information 855 855 */ 856 - static int xxxfb_resume(struct pci_dev *dev) 856 + static int xxxfb_resume(struct device *dev) 857 857 { 858 - struct fb_info *info = pci_get_drvdata(dev); 858 + struct fb_info *info = dev_get_drvdata(dev); 859 859 struct xxxfb_par *par = info->par; 860 860 861 861 /* resume here */ ··· 873 873 { 0, } 874 874 }; 875 875 876 + static SIMPLE_DEV_PM_OPS(xxxfb_pm_ops, xxxfb_suspend, xxxfb_resume); 877 + 876 878 /* For PCI drivers */ 877 879 static struct pci_driver xxxfb_driver = { 878 880 .name = "xxxfb", 879 881 .id_table = xxxfb_id_table, 880 882 .probe = xxxfb_probe, 881 883 .remove = xxxfb_remove, 882 - .suspend = xxxfb_suspend, /* optional but recommended */ 883 - .resume = xxxfb_resume, /* optional but recommended */ 884 + .driver.pm = xxxfb_pm_ops, /* optional but recommended */ 884 885 }; 885 886 886 887 MODULE_DEVICE_TABLE(pci, xxxfb_id_table);
+1 -1
drivers/xen/features.c
··· 42 42 if (HYPERVISOR_xen_version(XENVER_get_features, &fi) < 0) 43 43 break; 44 44 for (j = 0; j < 32; j++) 45 - xen_features[i * 32 + j] = !!(fi.submap & 1<<j); 45 + xen_features[i * 32 + j] = !!(fi.submap & 1U << j); 46 46 } 47 47 48 48 if (xen_pv_domain()) {
+7
drivers/xen/gntdev-common.h
··· 16 16 #include <linux/mmu_notifier.h> 17 17 #include <linux/types.h> 18 18 #include <xen/interface/event_channel.h> 19 + #include <xen/grant_table.h> 19 20 20 21 struct gntdev_dmabuf_priv; 21 22 ··· 57 56 struct gnttab_unmap_grant_ref *unmap_ops; 58 57 struct gnttab_map_grant_ref *kmap_ops; 59 58 struct gnttab_unmap_grant_ref *kunmap_ops; 59 + bool *being_removed; 60 60 struct page **pages; 61 61 unsigned long pages_vm_start; 62 62 ··· 75 73 /* Needed to avoid allocation in gnttab_dma_free_pages(). */ 76 74 xen_pfn_t *frames; 77 75 #endif 76 + 77 + /* Number of live grants */ 78 + atomic_t live_grants; 79 + /* Needed to avoid allocation in __unmap_grant_pages */ 80 + struct gntab_unmap_queue_data unmap_data; 78 81 }; 79 82 80 83 struct gntdev_grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count,
+106 -51
drivers/xen/gntdev.c
··· 35 35 #include <linux/slab.h> 36 36 #include <linux/highmem.h> 37 37 #include <linux/refcount.h> 38 + #include <linux/workqueue.h> 38 39 39 40 #include <xen/xen.h> 40 41 #include <xen/grant_table.h> ··· 61 60 MODULE_PARM_DESC(limit, 62 61 "Maximum number of grants that may be mapped by one mapping request"); 63 62 63 + /* True in PV mode, false otherwise */ 64 64 static int use_ptemod; 65 65 66 - static int unmap_grant_pages(struct gntdev_grant_map *map, 67 - int offset, int pages); 66 + static void unmap_grant_pages(struct gntdev_grant_map *map, 67 + int offset, int pages); 68 68 69 69 static struct miscdevice gntdev_miscdev; 70 70 ··· 122 120 kvfree(map->unmap_ops); 123 121 kvfree(map->kmap_ops); 124 122 kvfree(map->kunmap_ops); 123 + kvfree(map->being_removed); 125 124 kfree(map); 126 125 } 127 126 ··· 143 140 add->unmap_ops = kvmalloc_array(count, sizeof(add->unmap_ops[0]), 144 141 GFP_KERNEL); 145 142 add->pages = kvcalloc(count, sizeof(add->pages[0]), GFP_KERNEL); 143 + add->being_removed = 144 + kvcalloc(count, sizeof(add->being_removed[0]), GFP_KERNEL); 146 145 if (NULL == add->grants || 147 146 NULL == add->map_ops || 148 147 NULL == add->unmap_ops || 149 - NULL == add->pages) 148 + NULL == add->pages || 149 + NULL == add->being_removed) 150 150 goto err; 151 151 if (use_ptemod) { 152 152 add->kmap_ops = kvmalloc_array(count, sizeof(add->kmap_ops[0]), ··· 256 250 if (!refcount_dec_and_test(&map->users)) 257 251 return; 258 252 259 - if (map->pages && !use_ptemod) 253 + if (map->pages && !use_ptemod) { 254 + /* 255 + * Increment the reference count. This ensures that the 256 + * subsequent call to unmap_grant_pages() will not wind up 257 + * re-entering itself. It *can* wind up calling 258 + * gntdev_put_map() recursively, but such calls will be with a 259 + * reference count greater than 1, so they will return before 260 + * this code is reached. The recursion depth is thus limited to 261 + * 1. 
Do NOT use refcount_inc() here, as it will detect that
262 + * the reference count is zero and WARN().
263 + */
264 + refcount_set(&map->users, 1);
265 +
266 + /*
267 + * Unmap the grants. This may or may not be asynchronous, so it
268 + * is possible that the reference count is 1 on return, but it
269 + * could also be greater than 1.
270 + */
260 271 unmap_grant_pages(map, 0, map->count);
272 +
273 + /* Check if the memory now needs to be freed */
274 + if (!refcount_dec_and_test(&map->users))
275 + return;
276 +
277 + /*
278 + * All pages have been returned to the hypervisor, so free the
279 + * map.
280 + */
281 + }
261 282
262 283 if (map->notify.flags & UNMAP_NOTIFY_SEND_EVENT) {
263 284 notify_remote_via_evtchn(map->notify.event);
···
316 283
317 284 int gntdev_map_grant_pages(struct gntdev_grant_map *map)
318 285 {
286 + size_t alloced = 0;
319 287 int i, err = 0;
320 288
321 289 if (!use_ptemod) {
···
365 331 map->count);
366 332
367 333 for (i = 0; i < map->count; i++) {
368 - if (map->map_ops[i].status == GNTST_okay)
334 + if (map->map_ops[i].status == GNTST_okay) {
369 335 map->unmap_ops[i].handle = map->map_ops[i].handle;
370 - else if (!err)
336 + if (!use_ptemod)
337 + alloced++;
338 + } else if (!err)
371 339 err = -EINVAL;
372 340
373 341 if (map->flags & GNTMAP_device_map)
374 342 map->unmap_ops[i].dev_bus_addr = map->map_ops[i].dev_bus_addr;
375 343
376 344 if (use_ptemod) {
377 - if (map->kmap_ops[i].status == GNTST_okay)
345 + if (map->kmap_ops[i].status == GNTST_okay) {
346 + if (map->map_ops[i].status == GNTST_okay)
347 + alloced++;
378 348 map->kunmap_ops[i].handle = map->kmap_ops[i].handle;
379 - else if (!err)
349 + } else if (!err)
380 350 err = -EINVAL;
381 351 }
382 352 }
353 + atomic_add(alloced, &map->live_grants);
383 354 return err;
384 355 }
385 356
386 - static int __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
387 - int pages)
357 + static void __unmap_grant_pages_done(int result,
358 + struct gntab_unmap_queue_data *data)
388 359 {
389 - int i, err = 0;
390 - struct gntab_unmap_queue_data unmap_data;
360 + unsigned int i;
361 + struct gntdev_grant_map *map = data->data;
362 + unsigned int offset = data->unmap_ops - map->unmap_ops;
391 363
392 - if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
393 - int pgno = (map->notify.addr >> PAGE_SHIFT);
394 - if (pgno >= offset && pgno < offset + pages) {
395 - /* No need for kmap, pages are in lowmem */
396 - uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
397 - tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
398 - map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
399 - }
400 - }
401 -
402 - unmap_data.unmap_ops = map->unmap_ops + offset;
403 - unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
404 - unmap_data.pages = map->pages + offset;
405 - unmap_data.count = pages;
406 -
407 - err = gnttab_unmap_refs_sync(&unmap_data);
408 - if (err)
409 - return err;
410 -
411 - for (i = 0; i < pages; i++) {
412 - if (map->unmap_ops[offset+i].status)
413 - err = -EINVAL;
364 + for (i = 0; i < data->count; i++) {
365 + WARN_ON(map->unmap_ops[offset+i].status);
414 366 pr_debug("unmap handle=%d st=%d\n",
415 367 map->unmap_ops[offset+i].handle,
416 368 map->unmap_ops[offset+i].status);
417 369 map->unmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
418 370 if (use_ptemod) {
419 - if (map->kunmap_ops[offset+i].status)
420 - err = -EINVAL;
371 + WARN_ON(map->kunmap_ops[offset+i].status);
421 372 pr_debug("kunmap handle=%u st=%d\n",
422 373 map->kunmap_ops[offset+i].handle,
423 374 map->kunmap_ops[offset+i].status);
424 375 map->kunmap_ops[offset+i].handle = INVALID_GRANT_HANDLE;
425 376 }
426 377 }
427 - return err;
378 + /*
379 + * Decrease the live-grant counter. This must happen after the loop to
380 + * prevent premature reuse of the grants by gnttab_mmap().
381 + */
382 + atomic_sub(data->count, &map->live_grants);
383 +
384 + /* Release reference taken by __unmap_grant_pages */
385 + gntdev_put_map(NULL, map);
428 386 }
429 387
430 - static int unmap_grant_pages(struct gntdev_grant_map *map, int offset,
431 - int pages)
388 + static void __unmap_grant_pages(struct gntdev_grant_map *map, int offset,
389 + int pages)
432 390 {
433 - int range, err = 0;
391 + if (map->notify.flags & UNMAP_NOTIFY_CLEAR_BYTE) {
392 + int pgno = (map->notify.addr >> PAGE_SHIFT);
393 +
394 + if (pgno >= offset && pgno < offset + pages) {
395 + /* No need for kmap, pages are in lowmem */
396 + uint8_t *tmp = pfn_to_kaddr(page_to_pfn(map->pages[pgno]));
397 +
398 + tmp[map->notify.addr & (PAGE_SIZE-1)] = 0;
399 + map->notify.flags &= ~UNMAP_NOTIFY_CLEAR_BYTE;
400 + }
401 + }
402 +
403 + map->unmap_data.unmap_ops = map->unmap_ops + offset;
404 + map->unmap_data.kunmap_ops = use_ptemod ? map->kunmap_ops + offset : NULL;
405 + map->unmap_data.pages = map->pages + offset;
406 + map->unmap_data.count = pages;
407 + map->unmap_data.done = __unmap_grant_pages_done;
408 + map->unmap_data.data = map;
409 + refcount_inc(&map->users); /* to keep map alive during async call below */
410 +
411 + gnttab_unmap_refs_async(&map->unmap_data);
412 + }
413 +
414 + static void unmap_grant_pages(struct gntdev_grant_map *map, int offset,
415 + int pages)
416 + {
417 + int range;
418 +
419 + if (atomic_read(&map->live_grants) == 0)
420 + return; /* Nothing to do */
434 421
435 422 pr_debug("unmap %d+%d [%d+%d]\n", map->index, map->count, offset, pages);
436 423
437 424 /* It is possible the requested range will have a "hole" where we
438 425 * already unmapped some of the grants. Only unmap valid ranges.
439 426 */
440 - while (pages && !err) {
441 - while (pages &&
442 - map->unmap_ops[offset].handle == INVALID_GRANT_HANDLE) {
427 + while (pages) {
428 + while (pages && map->being_removed[offset]) {
443 429 offset++;
444 430 pages--;
445 431 }
446 432 range = 0;
447 433 while (range < pages) {
448 - if (map->unmap_ops[offset + range].handle ==
449 - INVALID_GRANT_HANDLE)
434 + if (map->being_removed[offset + range])
450 435 break;
436 + map->being_removed[offset + range] = true;
451 437 range++;
452 438 }
453 - err = __unmap_grant_pages(map, offset, range);
439 + if (range)
440 + __unmap_grant_pages(map, offset, range);
454 441 offset += range;
455 442 pages -= range;
456 443 }
457 -
458 - return err;
459 444 }
460 445
461 446 /* ------------------------------------------------------------------ */
···
526 473 struct gntdev_grant_map *map =
527 474 container_of(mn, struct gntdev_grant_map, notifier);
528 475 unsigned long mstart, mend;
529 - int err;
530 476
531 477 if (!mmu_notifier_range_blockable(range))
532 478 return false;
···
546 494 map->index, map->count,
547 495 map->vma->vm_start, map->vma->vm_end,
548 496 range->start, range->end, mstart, mend);
549 - err = unmap_grant_pages(map,
497 + unmap_grant_pages(map,
550 498 (mstart - map->vma->vm_start) >> PAGE_SHIFT,
551 499 (mend - mstart) >> PAGE_SHIFT);
552 - WARN_ON(err);
553 500
554 501 return true;
555 502 }
···
1036 985 goto unlock_out;
1037 986 if (use_ptemod && map->vma)
1038 987 goto unlock_out;
988 + if (atomic_read(&map->live_grants)) {
989 + err = -EAGAIN;
990 + goto unlock_out;
991 + }
1039 992 refcount_inc(&map->users);
1040 993
1041 994 vma->vm_ops = &gntdev_vmops;
+9 -13
fs/9p/fid.c
··· 152 152 const unsigned char **wnames, *uname; 153 153 int i, n, l, clone, access; 154 154 struct v9fs_session_info *v9ses; 155 - struct p9_fid *fid, *old_fid = NULL; 155 + struct p9_fid *fid, *old_fid; 156 156 157 157 v9ses = v9fs_dentry2v9ses(dentry); 158 158 access = v9ses->flags & V9FS_ACCESS_MASK; ··· 194 194 if (IS_ERR(fid)) 195 195 return fid; 196 196 197 + refcount_inc(&fid->count); 197 198 v9fs_fid_add(dentry->d_sb->s_root, fid); 198 199 } 199 200 /* If we are root ourself just return that */ 200 - if (dentry->d_sb->s_root == dentry) { 201 - refcount_inc(&fid->count); 201 + if (dentry->d_sb->s_root == dentry) 202 202 return fid; 203 - } 204 203 /* 205 204 * Do a multipath walk with attached root. 206 205 * When walking parent we need to make sure we ··· 211 212 fid = ERR_PTR(n); 212 213 goto err_out; 213 214 } 215 + old_fid = fid; 214 216 clone = 1; 215 217 i = 0; 216 218 while (i < n) { ··· 221 221 * walk to ensure none of the patch component change 222 222 */ 223 223 fid = p9_client_walk(fid, l, &wnames[i], clone); 224 + /* non-cloning walk will return the same fid */ 225 + if (fid != old_fid) { 226 + p9_client_clunk(old_fid); 227 + old_fid = fid; 228 + } 224 229 if (IS_ERR(fid)) { 225 - if (old_fid) { 226 - /* 227 - * If we fail, clunk fid which are mapping 228 - * to path component and not the last component 229 - * of the path. 230 - */ 231 - p9_client_clunk(old_fid); 232 - } 233 230 kfree(wnames); 234 231 goto err_out; 235 232 } 236 - old_fid = fid; 237 233 i += l; 238 234 clone = 0; 239 235 }
+13
fs/9p/vfs_addr.c
··· 58 58 */ 59 59 static int v9fs_init_request(struct netfs_io_request *rreq, struct file *file) 60 60 { 61 + struct inode *inode = file_inode(file); 62 + struct v9fs_inode *v9inode = V9FS_I(inode); 61 63 struct p9_fid *fid = file->private_data; 64 + 65 + BUG_ON(!fid); 66 + 67 + /* we might need to read from a fid that was opened write-only 68 + * for read-modify-write of page cache, use the writeback fid 69 + * for that */ 70 + if (rreq->origin == NETFS_READ_FOR_WRITE && 71 + (fid->mode & O_ACCMODE) == O_WRONLY) { 72 + fid = v9inode->writeback_fid; 73 + BUG_ON(!fid); 74 + } 62 75 63 76 refcount_inc(&fid->count); 64 77 rreq->netfs_priv = fid;
+4 -4
fs/9p/vfs_inode.c
··· 1251 1251 return ERR_PTR(-ECHILD); 1252 1252 1253 1253 v9ses = v9fs_dentry2v9ses(dentry); 1254 - fid = v9fs_fid_lookup(dentry); 1254 + if (!v9fs_proto_dotu(v9ses)) 1255 + return ERR_PTR(-EBADF); 1256 + 1255 1257 p9_debug(P9_DEBUG_VFS, "%pd\n", dentry); 1258 + fid = v9fs_fid_lookup(dentry); 1256 1259 1257 1260 if (IS_ERR(fid)) 1258 1261 return ERR_CAST(fid); 1259 - 1260 - if (!v9fs_proto_dotu(v9ses)) 1261 - return ERR_PTR(-EBADF); 1262 1262 1263 1263 st = p9_client_stat(fid); 1264 1264 p9_client_clunk(fid);
+3
fs/9p/vfs_inode_dotl.c
··· 274 274 if (IS_ERR(ofid)) { 275 275 err = PTR_ERR(ofid); 276 276 p9_debug(P9_DEBUG_VFS, "p9_client_walk failed %d\n", err); 277 + p9_client_clunk(dfid); 277 278 goto out; 278 279 } 279 280 ··· 286 285 if (err) { 287 286 p9_debug(P9_DEBUG_VFS, "Failed to get acl values in creat %d\n", 288 287 err); 288 + p9_client_clunk(dfid); 289 289 goto error; 290 290 } 291 291 err = p9_client_create_dotl(ofid, name, v9fs_open_to_dotl_flags(flags), ··· 294 292 if (err < 0) { 295 293 p9_debug(P9_DEBUG_VFS, "p9_client_open_dotl failed in creat %d\n", 296 294 err); 295 + p9_client_clunk(dfid); 297 296 goto error; 298 297 } 299 298 v9fs_invalidate_inode_attr(dir);
+2 -1
fs/afs/inode.c
··· 745 745 746 746 _enter("{ ino=%lu v=%u }", inode->i_ino, inode->i_generation); 747 747 748 - if (!(query_flags & AT_STATX_DONT_SYNC) && 748 + if (vnode->volume && 749 + !(query_flags & AT_STATX_DONT_SYNC) && 749 750 !test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) { 750 751 key = afs_request_key(vnode->volume->cell); 751 752 if (IS_ERR(key))
+1
fs/btrfs/block-group.h
··· 104 104 unsigned int relocating_repair:1; 105 105 unsigned int chunk_item_inserted:1; 106 106 unsigned int zone_is_active:1; 107 + unsigned int zoned_data_reloc_ongoing:1; 107 108 108 109 int disk_cache_state; 109 110
+2
fs/btrfs/ctree.h
··· 1330 1330 * existing extent into a file range. 1331 1331 */ 1332 1332 bool is_new_extent; 1333 + /* Indicate if we should update the inode's mtime and ctime. */ 1334 + bool update_times; 1333 1335 /* Meaningful only if is_new_extent is true. */ 1334 1336 int qgroup_reserved; 1335 1337 /*
+11 -2
fs/btrfs/disk-io.c
··· 4632 4632 int ret; 4633 4633 4634 4634 set_bit(BTRFS_FS_CLOSING_START, &fs_info->flags); 4635 + 4636 + /* 4637 + * We may have the reclaim task running and relocating a data block group, 4638 + * in which case it may create delayed iputs. So stop it before we park 4639 + * the cleaner kthread otherwise we can get new delayed iputs after 4640 + * parking the cleaner, and that can make the async reclaim task to hang 4641 + * if it's waiting for delayed iputs to complete, since the cleaner is 4642 + * parked and can not run delayed iputs - this will make us hang when 4643 + * trying to stop the async reclaim task. 4644 + */ 4645 + cancel_work_sync(&fs_info->reclaim_bgs_work); 4635 4646 /* 4636 4647 * We don't want the cleaner to start new transactions, add more delayed 4637 4648 * iputs, etc. while we're closing. We can't use kthread_stop() yet ··· 4682 4671 cancel_work_sync(&fs_info->async_reclaim_work); 4683 4672 cancel_work_sync(&fs_info->async_data_reclaim_work); 4684 4673 cancel_work_sync(&fs_info->preempt_reclaim_work); 4685 - 4686 - cancel_work_sync(&fs_info->reclaim_bgs_work); 4687 4674 4688 4675 /* Cancel or finish ongoing discard work */ 4689 4676 btrfs_discard_cleanup(fs_info);
+18 -2
fs/btrfs/extent-tree.c
··· 3832 3832 block_group->start == fs_info->data_reloc_bg || 3833 3833 fs_info->data_reloc_bg == 0); 3834 3834 3835 - if (block_group->ro) { 3835 + if (block_group->ro || block_group->zoned_data_reloc_ongoing) { 3836 3836 ret = 1; 3837 3837 goto out; 3838 3838 } ··· 3894 3894 out: 3895 3895 if (ret && ffe_ctl->for_treelog) 3896 3896 fs_info->treelog_bg = 0; 3897 - if (ret && ffe_ctl->for_data_reloc) 3897 + if (ret && ffe_ctl->for_data_reloc && 3898 + fs_info->data_reloc_bg == block_group->start) { 3899 + /* 3900 + * Do not allow further allocations from this block group. 3901 + * Compared to increasing the ->ro, setting the 3902 + * ->zoned_data_reloc_ongoing flag still allows nocow 3903 + * writers to come in. See btrfs_inc_nocow_writers(). 3904 + * 3905 + * We need to disable an allocation to avoid an allocation of 3906 + * regular (non-relocation data) extent. With mix of relocation 3907 + * extents and regular extents, we can dispatch WRITE commands 3908 + * (for relocation extents) and ZONE APPEND commands (for 3909 + * regular extents) at the same time to the same zone, which 3910 + * easily break the write pointer. 3911 + */ 3912 + block_group->zoned_data_reloc_ongoing = 1; 3898 3913 fs_info->data_reloc_bg = 0; 3914 + } 3899 3915 spin_unlock(&fs_info->relocation_bg_lock); 3900 3916 spin_unlock(&fs_info->treelog_bg_lock); 3901 3917 spin_unlock(&block_group->lock);
+2 -1
fs/btrfs/extent_io.c
··· 5241 5241 */ 5242 5242 btrfs_zoned_data_reloc_lock(BTRFS_I(inode)); 5243 5243 ret = extent_write_cache_pages(mapping, wbc, &epd); 5244 - btrfs_zoned_data_reloc_unlock(BTRFS_I(inode)); 5245 5244 ASSERT(ret <= 0); 5246 5245 if (ret < 0) { 5246 + btrfs_zoned_data_reloc_unlock(BTRFS_I(inode)); 5247 5247 end_write_bio(&epd, ret); 5248 5248 return ret; 5249 5249 } 5250 5250 flush_write_bio(&epd); 5251 + btrfs_zoned_data_reloc_unlock(BTRFS_I(inode)); 5251 5252 return ret; 5252 5253 } 5253 5254
+77 -19
fs/btrfs/file.c
··· 2323 2323 */ 2324 2324 btrfs_inode_unlock(inode, BTRFS_ILOCK_MMAP); 2325 2325 2326 - if (ret != BTRFS_NO_LOG_SYNC) { 2327 - if (!ret) { 2328 - ret = btrfs_sync_log(trans, root, &ctx); 2329 - if (!ret) { 2330 - ret = btrfs_end_transaction(trans); 2331 - goto out; 2332 - } 2333 - } 2334 - if (!full_sync) { 2335 - ret = btrfs_wait_ordered_range(inode, start, len); 2336 - if (ret) { 2337 - btrfs_end_transaction(trans); 2338 - goto out; 2339 - } 2340 - } 2341 - ret = btrfs_commit_transaction(trans); 2342 - } else { 2326 + if (ret == BTRFS_NO_LOG_SYNC) { 2343 2327 ret = btrfs_end_transaction(trans); 2328 + goto out; 2344 2329 } 2330 + 2331 + /* We successfully logged the inode, attempt to sync the log. */ 2332 + if (!ret) { 2333 + ret = btrfs_sync_log(trans, root, &ctx); 2334 + if (!ret) { 2335 + ret = btrfs_end_transaction(trans); 2336 + goto out; 2337 + } 2338 + } 2339 + 2340 + /* 2341 + * At this point we need to commit the transaction because we had 2342 + * btrfs_need_log_full_commit() or some other error. 2343 + * 2344 + * If we didn't do a full sync we have to stop the trans handle, wait on 2345 + * the ordered extents, start it again and commit the transaction. If 2346 + * we attempt to wait on the ordered extents here we could deadlock with 2347 + * something like fallocate() that is holding the extent lock trying to 2348 + * start a transaction while some other thread is trying to commit the 2349 + * transaction while we (fsync) are currently holding the transaction 2350 + * open. 2351 + */ 2352 + if (!full_sync) { 2353 + ret = btrfs_end_transaction(trans); 2354 + if (ret) 2355 + goto out; 2356 + ret = btrfs_wait_ordered_range(inode, start, len); 2357 + if (ret) 2358 + goto out; 2359 + 2360 + /* 2361 + * This is safe to use here because we're only interested in 2362 + * making sure the transaction that had the ordered extents is 2363 + * committed. 
We aren't waiting on anything past this point,
2364 + * we're purely getting the transaction and committing it.
2365 + */
2366 + trans = btrfs_attach_transaction_barrier(root);
2367 + if (IS_ERR(trans)) {
2368 + ret = PTR_ERR(trans);
2369 +
2370 + /*
2371 + * We committed the transaction and there's no currently
2372 + * running transaction, this means everything we care
2373 + * about made it to disk and we are done.
2374 + */
2375 + if (ret == -ENOENT)
2376 + ret = 0;
2377 + goto out;
2378 + }
2379 + }
2380 +
2381 + ret = btrfs_commit_transaction(trans);
2345 2382 out:
2346 2383 ASSERT(list_empty(&ctx.list));
2347 2384 err = file_check_and_advance_wb_err(file);
···
2756 2719
2757 2720 ret = btrfs_block_rsv_migrate(&fs_info->trans_block_rsv, rsv,
2758 2721 min_size, false);
2759 - BUG_ON(ret);
2722 + if (WARN_ON(ret))
2723 + goto out_trans;
2760 2724 trans->block_rsv = rsv;
2761 2725
2762 2726 cur_offset = start;
···
2841 2803 extent_info->file_offset += replace_len;
2842 2804 }
2843 2805
2806 + /*
2807 + * We are releasing our handle on the transaction, balance the
2808 + * dirty pages of the btree inode and flush delayed items, and
2809 + * then get a new transaction handle, which may now point to a
2810 + * new transaction in case someone else may have committed the
2811 + * transaction we used to replace/drop file extent items. So
2812 + * bump the inode's iversion and update mtime and ctime except
2813 + * if we are called from a dedupe context. This is because a
2814 + * power failure/crash may happen after the transaction is
2815 + * committed and before we finish replacing/dropping all the
2816 + * file extent items we need.
2817 + */
2818 + inode_inc_iversion(&inode->vfs_inode);
2819 +
2820 + if (!extent_info || extent_info->update_times) {
2821 + inode->vfs_inode.i_mtime = current_time(&inode->vfs_inode);
2822 + inode->vfs_inode.i_ctime = inode->vfs_inode.i_mtime;
2823 + }
2824 +
2844 2825 ret = btrfs_update_inode(trans, root, inode);
2845 2826 if (ret)
2846 2827 break;
···
2876 2819
2877 2820 ret = btrfs_block_rsv_migrate(&fs_info->trans_block_rsv,
2878 2821 rsv, min_size, false);
2879 - BUG_ON(ret); /* shouldn't happen */
2822 + if (WARN_ON(ret))
2823 + break;
2880 2824 trans->block_rsv = rsv;
2881 2825
2882 2826 cur_offset = drop_args.drop_end;
+3
fs/btrfs/inode.c
··· 3195 3195 ordered_extent->file_offset, 3196 3196 ordered_extent->file_offset + 3197 3197 logical_len); 3198 + btrfs_zoned_release_data_reloc_bg(fs_info, ordered_extent->disk_bytenr, 3199 + ordered_extent->disk_num_bytes); 3198 3200 } else { 3199 3201 BUG_ON(root == fs_info->tree_root); 3200 3202 ret = insert_ordered_extent_file_extent(trans, ordered_extent); ··· 9899 9897 extent_info.file_offset = file_offset; 9900 9898 extent_info.extent_buf = (char *)&stack_fi; 9901 9899 extent_info.is_new_extent = true; 9900 + extent_info.update_times = true; 9902 9901 extent_info.qgroup_reserved = qgroup_released; 9903 9902 extent_info.insertions = 0; 9904 9903
-3
fs/btrfs/locking.c
··· 45 45 start_ns = ktime_get_ns(); 46 46 47 47 down_read_nested(&eb->lock, nest); 48 - eb->lock_owner = current->pid; 49 48 trace_btrfs_tree_read_lock(eb, start_ns); 50 49 } 51 50 ··· 61 62 int btrfs_try_tree_read_lock(struct extent_buffer *eb) 62 63 { 63 64 if (down_read_trylock(&eb->lock)) { 64 - eb->lock_owner = current->pid; 65 65 trace_btrfs_try_tree_read_lock(eb); 66 66 return 1; 67 67 } ··· 88 90 void btrfs_tree_read_unlock(struct extent_buffer *eb) 89 91 { 90 92 trace_btrfs_tree_read_unlock(eb); 91 - eb->lock_owner = 0; 92 93 up_read(&eb->lock); 93 94 } 94 95
+12 -4
fs/btrfs/reflink.c
··· 344 344 int ret; 345 345 const u64 len = olen_aligned; 346 346 u64 last_dest_end = destoff; 347 + u64 prev_extent_end = off; 347 348 348 349 ret = -ENOMEM; 349 350 buf = kvmalloc(fs_info->nodesize, GFP_KERNEL); ··· 364 363 key.offset = off; 365 364 366 365 while (1) { 367 - u64 next_key_min_offset = key.offset + 1; 368 366 struct btrfs_file_extent_item *extent; 369 367 u64 extent_gen; 370 368 int type; ··· 431 431 * The first search might have left us at an extent item that 432 432 * ends before our target range's start, can happen if we have 433 433 * holes and NO_HOLES feature enabled. 434 + * 435 + * Subsequent searches may leave us on a file range we have 436 + * processed before - this happens due to a race with ordered 437 + * extent completion for a file range that is outside our source 438 + * range, but that range was part of a file extent item that 439 + * also covered a leading part of our source range. 434 440 */ 435 - if (key.offset + datal <= off) { 441 + if (key.offset + datal <= prev_extent_end) { 436 442 path->slots[0]++; 437 443 goto process_slot; 438 444 } else if (key.offset >= off + len) { 439 445 break; 440 446 } 441 - next_key_min_offset = key.offset + datal; 447 + 448 + prev_extent_end = key.offset + datal; 442 449 size = btrfs_item_size(leaf, slot); 443 450 read_extent_buffer(leaf, buf, btrfs_item_ptr_offset(leaf, slot), 444 451 size); ··· 496 489 clone_info.file_offset = new_key.offset; 497 490 clone_info.extent_buf = buf; 498 491 clone_info.is_new_extent = false; 492 + clone_info.update_times = !no_time_update; 499 493 ret = btrfs_replace_file_extents(BTRFS_I(inode), path, 500 494 drop_start, new_key.offset + datal - 1, 501 495 &clone_info, &trans); ··· 558 550 break; 559 551 560 552 btrfs_release_path(path); 561 - key.offset = next_key_min_offset; 553 + key.offset = prev_extent_end; 562 554 563 555 if (fatal_signal_pending(current)) { 564 556 ret = -EINTR;
+40 -7
fs/btrfs/super.c
··· 763 763 compress_force = false; 764 764 no_compress++; 765 765 } else { 766 + btrfs_err(info, "unrecognized compression value %s", 767 + args[0].from); 766 768 ret = -EINVAL; 767 769 goto out; 768 770 } ··· 823 821 case Opt_thread_pool: 824 822 ret = match_int(&args[0], &intarg); 825 823 if (ret) { 824 + btrfs_err(info, "unrecognized thread_pool value %s", 825 + args[0].from); 826 826 goto out; 827 827 } else if (intarg == 0) { 828 + btrfs_err(info, "invalid value 0 for thread_pool"); 828 829 ret = -EINVAL; 829 830 goto out; 830 831 } ··· 888 883 break; 889 884 case Opt_ratio: 890 885 ret = match_int(&args[0], &intarg); 891 - if (ret) 886 + if (ret) { 887 + btrfs_err(info, "unrecognized metadata_ratio value %s", 888 + args[0].from); 892 889 goto out; 890 + } 893 891 info->metadata_ratio = intarg; 894 892 btrfs_info(info, "metadata ratio %u", 895 893 info->metadata_ratio); ··· 909 901 btrfs_set_and_info(info, DISCARD_ASYNC, 910 902 "turning on async discard"); 911 903 } else { 904 + btrfs_err(info, "unrecognized discard mode value %s", 905 + args[0].from); 912 906 ret = -EINVAL; 913 907 goto out; 914 908 } ··· 943 933 btrfs_set_and_info(info, FREE_SPACE_TREE, 944 934 "enabling free space tree"); 945 935 } else { 936 + btrfs_err(info, "unrecognized space_cache value %s", 937 + args[0].from); 946 938 ret = -EINVAL; 947 939 goto out; 948 940 } ··· 1026 1014 break; 1027 1015 case Opt_check_integrity_print_mask: 1028 1016 ret = match_int(&args[0], &intarg); 1029 - if (ret) 1017 + if (ret) { 1018 + btrfs_err(info, 1019 + "unrecognized check_integrity_print_mask value %s", 1020 + args[0].from); 1030 1021 goto out; 1022 + } 1031 1023 info->check_integrity_print_mask = intarg; 1032 1024 btrfs_info(info, "check_integrity_print_mask 0x%x", 1033 1025 info->check_integrity_print_mask); ··· 1046 1030 goto out; 1047 1031 #endif 1048 1032 case Opt_fatal_errors: 1049 - if (strcmp(args[0].from, "panic") == 0) 1033 + if (strcmp(args[0].from, "panic") == 0) { 1050 1034 
btrfs_set_opt(info->mount_opt, 1051 1035 PANIC_ON_FATAL_ERROR); 1052 - else if (strcmp(args[0].from, "bug") == 0) 1036 + } else if (strcmp(args[0].from, "bug") == 0) { 1053 1037 btrfs_clear_opt(info->mount_opt, 1054 1038 PANIC_ON_FATAL_ERROR); 1055 - else { 1039 + } else { 1040 + btrfs_err(info, "unrecognized fatal_errors value %s", 1041 + args[0].from); 1056 1042 ret = -EINVAL; 1057 1043 goto out; 1058 1044 } ··· 1062 1044 case Opt_commit_interval: 1063 1045 intarg = 0; 1064 1046 ret = match_int(&args[0], &intarg); 1065 - if (ret) 1047 + if (ret) { 1048 + btrfs_err(info, "unrecognized commit_interval value %s", 1049 + args[0].from); 1050 + ret = -EINVAL; 1066 1051 goto out; 1052 + } 1067 1053 if (intarg == 0) { 1068 1054 btrfs_info(info, 1069 1055 "using default commit interval %us", ··· 1081 1059 break; 1082 1060 case Opt_rescue: 1083 1061 ret = parse_rescue_options(info, args[0].from); 1084 - if (ret < 0) 1062 + if (ret < 0) { 1063 + btrfs_err(info, "unrecognized rescue value %s", 1064 + args[0].from); 1085 1065 goto out; 1066 + } 1086 1067 break; 1087 1068 #ifdef CONFIG_BTRFS_DEBUG 1088 1069 case Opt_fragment_all: ··· 2010 1985 if (ret) 2011 1986 goto restore; 2012 1987 1988 + /* V1 cache is not supported for subpage mount. */ 1989 + if (fs_info->sectorsize < PAGE_SIZE && btrfs_test_opt(fs_info, SPACE_CACHE)) { 1990 + btrfs_warn(fs_info, 1991 + "v1 space cache is not supported for page size %lu with sectorsize %u", 1992 + PAGE_SIZE, fs_info->sectorsize); 1993 + ret = -EINVAL; 1994 + goto restore; 1995 + } 2013 1996 btrfs_remount_begin(fs_info, old_opts, *flags); 2014 1997 btrfs_resize_thread_pool(fs_info, 2015 1998 fs_info->thread_pool_size, old_thread_pool_size);
+27
fs/btrfs/zoned.c
··· 2139 2139 factor = div64_u64(used * 100, total); 2140 2140 return factor >= fs_info->bg_reclaim_threshold; 2141 2141 } 2142 + 2143 + void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logical, 2144 + u64 length) 2145 + { 2146 + struct btrfs_block_group *block_group; 2147 + 2148 + if (!btrfs_is_zoned(fs_info)) 2149 + return; 2150 + 2151 + block_group = btrfs_lookup_block_group(fs_info, logical); 2152 + /* It should be called on a previous data relocation block group. */ 2153 + ASSERT(block_group && (block_group->flags & BTRFS_BLOCK_GROUP_DATA)); 2154 + 2155 + spin_lock(&block_group->lock); 2156 + if (!block_group->zoned_data_reloc_ongoing) 2157 + goto out; 2158 + 2159 + /* All relocation extents are written. */ 2160 + if (block_group->start + block_group->alloc_offset == logical + length) { 2161 + /* Now, release this block group for further allocations. */ 2162 + block_group->zoned_data_reloc_ongoing = 0; 2163 + } 2164 + 2165 + out: 2166 + spin_unlock(&block_group->lock); 2167 + btrfs_put_block_group(block_group); 2168 + }
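The new `btrfs_zoned_release_data_reloc_bg()` only clears `zoned_data_reloc_ongoing` once the just-finished write reaches the block group's allocation frontier. A hedged sketch of that completion check, with simplified fields instead of `struct btrfs_block_group`:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in for the zoned block group fields used above. */
struct bg {
	uint64_t start;		/* logical start of the block group */
	uint64_t alloc_offset;	/* bytes allocated so far (write pointer) */
};

/* All relocation extents are written iff this extent ends at the frontier. */
static int reloc_writes_done(const struct bg *bg, uint64_t logical,
			     uint64_t length)
{
	return bg->start + bg->alloc_offset == logical + length;
}
```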
+5
fs/btrfs/zoned.h
··· 77 77 void btrfs_clear_data_reloc_bg(struct btrfs_block_group *bg); 78 78 void btrfs_free_zone_cache(struct btrfs_fs_info *fs_info); 79 79 bool btrfs_zoned_should_reclaim(struct btrfs_fs_info *fs_info); 80 + void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, u64 logical, 81 + u64 length); 80 82 #else /* CONFIG_BLK_DEV_ZONED */ 81 83 static inline int btrfs_get_dev_zone(struct btrfs_device *device, u64 pos, 82 84 struct blk_zone *zone) ··· 245 243 { 246 244 return false; 247 245 } 246 + 247 + static inline void btrfs_zoned_release_data_reloc_bg(struct btrfs_fs_info *fs_info, 248 + u64 logical, u64 length) { } 248 249 #endif 249 250 250 251 static inline bool btrfs_dev_is_sequential(struct btrfs_device *device, u64 pos)
+7 -5
fs/cifs/cifs_debug.c
··· 162 162 seq_printf(m, "\t\tIPv4: %pI4\n", &ipv4->sin_addr); 163 163 else if (iface->sockaddr.ss_family == AF_INET6) 164 164 seq_printf(m, "\t\tIPv6: %pI6\n", &ipv6->sin6_addr); 165 + if (!iface->is_active) 166 + seq_puts(m, "\t\t[for-cleanup]\n"); 165 167 } 166 168 167 169 static int cifs_debug_files_proc_show(struct seq_file *m, void *v) ··· 223 221 struct TCP_Server_Info *server; 224 222 struct cifs_ses *ses; 225 223 struct cifs_tcon *tcon; 224 + struct cifs_server_iface *iface; 226 225 int c, i, j; 227 226 228 227 seq_puts(m, ··· 459 456 if (ses->iface_count) 460 457 seq_printf(m, "\n\n\tServer interfaces: %zu", 461 458 ses->iface_count); 462 - for (j = 0; j < ses->iface_count; j++) { 463 - struct cifs_server_iface *iface; 464 - 465 - iface = &ses->iface_list[j]; 466 - seq_printf(m, "\n\t%d)", j+1); 459 + j = 0; 460 + list_for_each_entry(iface, &ses->iface_list, 461 + iface_head) { 462 + seq_printf(m, "\n\t%d)", ++j); 467 463 cifs_dump_iface(m, iface); 468 464 if (is_ses_using_iface(ses, iface)) 469 465 seq_puts(m, "\t\t[CONNECTED]\n");
+57 -1
fs/cifs/cifsglob.h
··· 80 80 #define SMB_DNS_RESOLVE_INTERVAL_MIN 120 81 81 #define SMB_DNS_RESOLVE_INTERVAL_DEFAULT 600 82 82 83 + /* smb multichannel query server interfaces interval in seconds */ 84 + #define SMB_INTERFACE_POLL_INTERVAL 600 85 + 83 86 /* maximum number of PDUs in one compound */ 84 87 #define MAX_COMPOUND 5 85 88 ··· 936 933 #endif 937 934 938 935 struct cifs_server_iface { 936 + struct list_head iface_head; 937 + struct kref refcount; 939 938 size_t speed; 940 939 unsigned int rdma_capable : 1; 941 940 unsigned int rss_capable : 1; 941 + unsigned int is_active : 1; /* unset if non existent */ 942 942 struct sockaddr_storage sockaddr; 943 943 }; 944 + 945 + /* release iface when last ref is dropped */ 946 + static inline void 947 + release_iface(struct kref *ref) 948 + { 949 + struct cifs_server_iface *iface = container_of(ref, 950 + struct cifs_server_iface, 951 + refcount); 952 + list_del_init(&iface->iface_head); 953 + kfree(iface); 954 + } 955 + 956 + /* 957 + * compare two interfaces a and b 958 + * return 0 if everything matches. 959 + * return 1 if a has higher link speed, or rdma capable, or rss capable 960 + * return -1 otherwise. 
961 + */ 962 + static inline int 963 + iface_cmp(struct cifs_server_iface *a, struct cifs_server_iface *b) 964 + { 965 + int cmp_ret = 0; 966 + 967 + WARN_ON(!a || !b); 968 + if (a->speed == b->speed) { 969 + if (a->rdma_capable == b->rdma_capable) { 970 + if (a->rss_capable == b->rss_capable) { 971 + cmp_ret = memcmp(&a->sockaddr, &b->sockaddr, 972 + sizeof(a->sockaddr)); 973 + if (!cmp_ret) 974 + return 0; 975 + else if (cmp_ret > 0) 976 + return 1; 977 + else 978 + return -1; 979 + } else if (a->rss_capable > b->rss_capable) 980 + return 1; 981 + else 982 + return -1; 983 + } else if (a->rdma_capable > b->rdma_capable) 984 + return 1; 985 + else 986 + return -1; 987 + } else if (a->speed > b->speed) 988 + return 1; 989 + else 990 + return -1; 991 + } 944 992 945 993 struct cifs_chan { 946 994 unsigned int in_reconnect : 1; /* if session setup in progress for this channel */ 947 995 struct TCP_Server_Info *server; 996 + struct cifs_server_iface *iface; /* interface in use */ 948 997 __u8 signkey[SMB3_SIGN_KEY_SIZE]; 949 998 }; 950 999 ··· 1048 993 */ 1049 994 spinlock_t iface_lock; 1050 995 /* ========= begin: protected by iface_lock ======== */ 1051 - struct cifs_server_iface *iface_list; 996 + struct list_head iface_list; 1052 997 size_t iface_count; 1053 998 unsigned long iface_last_update; /* jiffies */ 1054 999 /* ========= end: protected by iface_lock ======== */ ··· 1258 1203 #ifdef CONFIG_CIFS_DFS_UPCALL 1259 1204 struct list_head ulist; /* cache update list */ 1260 1205 #endif 1206 + struct delayed_work query_interfaces; /* query interfaces workqueue job */ 1261 1207 }; 1262 1208 1263 1209 /*
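The new `iface_cmp()` keeps `iface_list` sorted fastest-first: link speed is the primary key, then RDMA capability, then RSS capability, with a `memcmp()` of the socket address as the final tiebreaker. The same ordering can be sketched more compactly (tiebreaker omitted, plain fields instead of the cifs bitfields):

```c
#include <assert.h>

/* Hypothetical flattened view of the fields iface_cmp() compares. */
struct iface {
	unsigned long speed;
	int rdma_capable;
	int rss_capable;
};

/* Same key order as iface_cmp(): speed, then rdma, then rss. */
static int iface_order(const struct iface *a, const struct iface *b)
{
	if (a->speed != b->speed)
		return a->speed > b->speed ? 1 : -1;
	if (a->rdma_capable != b->rdma_capable)
		return a->rdma_capable > b->rdma_capable ? 1 : -1;
	if (a->rss_capable != b->rss_capable)
		return a->rss_capable > b->rss_capable ? 1 : -1;
	return 0;	/* sockaddr memcmp() tiebreak omitted in this sketch */
}
```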
+7
fs/cifs/cifsproto.h
··· 636 636 bool 637 637 cifs_chan_needs_reconnect(struct cifs_ses *ses, 638 638 struct TCP_Server_Info *server); 639 + bool 640 + cifs_chan_is_iface_active(struct cifs_ses *ses, 641 + struct TCP_Server_Info *server); 642 + int 643 + cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server); 644 + int 645 + SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon); 639 646 640 647 void extract_unc_hostname(const char *unc, const char **h, size_t *len); 641 648 int copy_path_name(char *dst, const char *src);
+54 -5
fs/cifs/connect.c
··· 145 145 return rc; 146 146 } 147 147 148 + static void smb2_query_server_interfaces(struct work_struct *work) 149 + { 150 + int rc; 151 + struct cifs_tcon *tcon = container_of(work, 152 + struct cifs_tcon, 153 + query_interfaces.work); 154 + 155 + /* 156 + * query server network interfaces, in case they change 157 + */ 158 + rc = SMB3_request_interfaces(0, tcon); 159 + if (rc) { 160 + cifs_dbg(FYI, "%s: failed to query server interfaces: %d\n", 161 + __func__, rc); 162 + } 163 + 164 + queue_delayed_work(cifsiod_wq, &tcon->query_interfaces, 165 + (SMB_INTERFACE_POLL_INTERVAL * HZ)); 166 + } 148 167 149 168 static void cifs_resolve_server(struct work_struct *work) 150 169 { ··· 236 217 bool mark_smb_session) 237 218 { 238 219 struct TCP_Server_Info *pserver; 239 - struct cifs_ses *ses; 220 + struct cifs_ses *ses, *nses; 240 221 struct cifs_tcon *tcon; 241 222 242 223 /* ··· 250 231 251 232 252 233 spin_lock(&cifs_tcp_ses_lock); 253 - list_for_each_entry(ses, &pserver->smb_ses_list, smb_ses_list) { 234 + list_for_each_entry_safe(ses, nses, &pserver->smb_ses_list, smb_ses_list) { 235 + /* check if iface is still active */ 236 + if (!cifs_chan_is_iface_active(ses, server)) { 237 + /* 238 + * HACK: drop the lock before calling 239 + * cifs_chan_update_iface to avoid deadlock 240 + */ 241 + ses->ses_count++; 242 + spin_unlock(&cifs_tcp_ses_lock); 243 + cifs_chan_update_iface(ses, server); 244 + spin_lock(&cifs_tcp_ses_lock); 245 + ses->ses_count--; 246 + } 247 + 254 248 spin_lock(&ses->chan_lock); 255 249 if (!mark_smb_session && cifs_chan_needs_reconnect(ses, server)) 256 250 goto next_session; ··· 1926 1894 int i; 1927 1895 1928 1896 for (i = 1; i < chan_count; i++) { 1929 - spin_unlock(&ses->chan_lock); 1897 + if (ses->chans[i].iface) { 1898 + kref_put(&ses->chans[i].iface->refcount, release_iface); 1899 + ses->chans[i].iface = NULL; 1900 + } 1930 1901 cifs_put_tcp_session(ses->chans[i].server, 0); 1931 - spin_lock(&ses->chan_lock); 1932 1902 ses->chans[i].server = 
NULL; 1933 1903 } 1934 1904 } ··· 2304 2270 list_del_init(&tcon->tcon_list); 2305 2271 spin_unlock(&cifs_tcp_ses_lock); 2306 2272 2273 + /* cancel polling of interfaces */ 2274 + cancel_delayed_work_sync(&tcon->query_interfaces); 2275 + 2307 2276 if (tcon->use_witness) { 2308 2277 int rc; 2309 2278 ··· 2543 2506 tcon->nodelete = ctx->nodelete; 2544 2507 tcon->local_lease = ctx->local_lease; 2545 2508 INIT_LIST_HEAD(&tcon->pending_opens); 2509 + 2510 + /* schedule query interfaces poll */ 2511 + INIT_DELAYED_WORK(&tcon->query_interfaces, 2512 + smb2_query_server_interfaces); 2513 + queue_delayed_work(cifsiod_wq, &tcon->query_interfaces, 2514 + (SMB_INTERFACE_POLL_INTERVAL * HZ)); 2546 2515 2547 2516 spin_lock(&cifs_tcp_ses_lock); 2548 2517 list_add(&tcon->tcon_list, &ses->tcon_list); ··· 4025 3982 struct nls_table *nls_info) 4026 3983 { 4027 3984 int rc = -ENOSYS; 3985 + struct sockaddr_in6 *addr6 = (struct sockaddr_in6 *)&server->dstaddr; 3986 + struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr; 4028 3987 bool is_binding = false; 4029 3988 4030 - 4031 3989 spin_lock(&cifs_tcp_ses_lock); 3990 + if (server->dstaddr.ss_family == AF_INET6) 3991 + scnprintf(ses->ip_addr, sizeof(ses->ip_addr), "%pI6", &addr6->sin6_addr); 3992 + else 3993 + scnprintf(ses->ip_addr, sizeof(ses->ip_addr), "%pI4", &addr->sin_addr); 3994 + 4032 3995 if (ses->ses_status != SES_GOOD && 4033 3996 ses->ses_status != SES_NEW && 4034 3997 ses->ses_status != SES_NEED_RECON) {
+8 -1
fs/cifs/misc.c
··· 75 75 INIT_LIST_HEAD(&ret_buf->tcon_list); 76 76 mutex_init(&ret_buf->session_mutex); 77 77 spin_lock_init(&ret_buf->iface_lock); 78 + INIT_LIST_HEAD(&ret_buf->iface_list); 78 79 spin_lock_init(&ret_buf->chan_lock); 79 80 } 80 81 return ret_buf; ··· 84 83 void 85 84 sesInfoFree(struct cifs_ses *buf_to_free) 86 85 { 86 + struct cifs_server_iface *iface = NULL, *niface = NULL; 87 + 87 88 if (buf_to_free == NULL) { 88 89 cifs_dbg(FYI, "Null buffer passed to sesInfoFree\n"); 89 90 return; ··· 99 96 kfree(buf_to_free->user_name); 100 97 kfree(buf_to_free->domainName); 101 98 kfree_sensitive(buf_to_free->auth_key.response); 102 - kfree(buf_to_free->iface_list); 99 + spin_lock(&buf_to_free->iface_lock); 100 + list_for_each_entry_safe(iface, niface, &buf_to_free->iface_list, 101 + iface_head) 102 + kref_put(&iface->refcount, release_iface); 103 + spin_unlock(&buf_to_free->iface_lock); 103 104 kfree_sensitive(buf_to_free); 104 105 } 105 106
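`sesInfoFree()` now drops one reference per list entry instead of freeing a flat array; `release_iface()` only unlinks and frees an interface when the last `kref_put()` lands. A userspace sketch of that last-reference-frees pattern (a plain counter standing in for `struct kref`):

```c
#include <assert.h>

/* Toy object: a bare counter stands in for struct kref. */
struct obj {
	int refcount;
	int released;	/* set by the release callback, for observation */
};

static void release(struct obj *o)
{
	/* in the kernel this is where list_del_init() + kfree() happen */
	o->released = 1;
}

/* Minimal kref_put(): decrement, run the release callback on last ref. */
static void obj_put(struct obj *o)
{
	if (--o->refcount == 0)
		release(o);
}
```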
+129 -39
fs/cifs/sess.c
··· 58 58 59 59 spin_lock(&ses->chan_lock); 60 60 for (i = 0; i < ses->chan_count; i++) { 61 - if (is_server_using_iface(ses->chans[i].server, iface)) { 61 + if (ses->chans[i].iface == iface) { 62 62 spin_unlock(&ses->chan_lock); 63 63 return true; 64 64 } ··· 146 146 return CIFS_CHAN_NEEDS_RECONNECT(ses, chan_index); 147 147 } 148 148 149 + bool 150 + cifs_chan_is_iface_active(struct cifs_ses *ses, 151 + struct TCP_Server_Info *server) 152 + { 153 + unsigned int chan_index = cifs_ses_get_chan_index(ses, server); 154 + 155 + return ses->chans[chan_index].iface && 156 + ses->chans[chan_index].iface->is_active; 157 + } 158 + 149 159 /* returns number of channels added */ 150 160 int cifs_try_adding_channels(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses) 151 161 { 152 162 int old_chan_count, new_chan_count; 153 163 int left; 154 - int i = 0; 155 164 int rc = 0; 156 165 int tries = 0; 157 - struct cifs_server_iface *ifaces = NULL; 158 - size_t iface_count; 166 + struct cifs_server_iface *iface = NULL, *niface = NULL; 159 167 160 168 spin_lock(&ses->chan_lock); 161 169 ··· 193 185 spin_unlock(&ses->chan_lock); 194 186 195 187 /* 196 - * Make a copy of the iface list at the time and use that 197 - * instead so as to not hold the iface spinlock for opening 198 - * channels 199 - */ 200 - spin_lock(&ses->iface_lock); 201 - iface_count = ses->iface_count; 202 - if (iface_count <= 0) { 203 - spin_unlock(&ses->iface_lock); 204 - cifs_dbg(VFS, "no iface list available to open channels\n"); 205 - return 0; 206 - } 207 - ifaces = kmemdup(ses->iface_list, iface_count*sizeof(*ifaces), 208 - GFP_ATOMIC); 209 - if (!ifaces) { 210 - spin_unlock(&ses->iface_lock); 211 - return 0; 212 - } 213 - spin_unlock(&ses->iface_lock); 214 - 215 - /* 216 188 * Keep connecting to same, fastest, iface for all channels as 217 189 * long as its RSS. Try next fastest one if not RSS or channel 218 190 * creation fails. 
219 191 */ 192 + spin_lock(&ses->iface_lock); 193 + iface = list_first_entry(&ses->iface_list, struct cifs_server_iface, 194 + iface_head); 195 + spin_unlock(&ses->iface_lock); 196 + 220 197 while (left > 0) { 221 - struct cifs_server_iface *iface; 222 198 223 199 tries++; 224 200 if (tries > 3*ses->chan_max) { ··· 211 219 break; 212 220 } 213 221 214 - iface = &ifaces[i]; 215 - if (is_ses_using_iface(ses, iface) && !iface->rss_capable) { 216 - i = (i+1) % iface_count; 217 - continue; 222 + spin_lock(&ses->iface_lock); 223 + if (!ses->iface_count) { 224 + spin_unlock(&ses->iface_lock); 225 + break; 218 226 } 219 227 220 - rc = cifs_ses_add_channel(cifs_sb, ses, iface); 221 - if (rc) { 222 - cifs_dbg(FYI, "failed to open extra channel on iface#%d rc=%d\n", 223 - i, rc); 224 - i = (i+1) % iface_count; 225 - continue; 226 - } 228 + list_for_each_entry_safe_from(iface, niface, &ses->iface_list, 229 + iface_head) { 230 + /* skip ifaces that are unusable */ 231 + if (!iface->is_active || 232 + (is_ses_using_iface(ses, iface) && 233 + !iface->rss_capable)) { 234 + continue; 235 + } 227 236 228 - cifs_dbg(FYI, "successfully opened new channel on iface#%d\n", 229 - i); 237 + /* take ref before unlock */ 238 + kref_get(&iface->refcount); 239 + 240 + spin_unlock(&ses->iface_lock); 241 + rc = cifs_ses_add_channel(cifs_sb, ses, iface); 242 + spin_lock(&ses->iface_lock); 243 + 244 + if (rc) { 245 + cifs_dbg(VFS, "failed to open extra channel on iface:%pIS rc=%d\n", 246 + &iface->sockaddr, 247 + rc); 248 + kref_put(&iface->refcount, release_iface); 249 + continue; 250 + } 251 + 252 + cifs_dbg(FYI, "successfully opened new channel on iface:%pIS\n", 253 + &iface->sockaddr); 254 + break; 255 + } 256 + spin_unlock(&ses->iface_lock); 257 + 230 258 left--; 231 259 new_chan_count++; 232 260 } 233 261 234 - kfree(ifaces); 235 262 return new_chan_count - old_chan_count; 263 + } 264 + 265 + /* 266 + * update the iface for the channel if necessary. 
267 + * will return 0 when iface is updated, 1 if removed, 2 otherwise 268 + * Must be called with chan_lock held. 269 + */ 270 + int 271 + cifs_chan_update_iface(struct cifs_ses *ses, struct TCP_Server_Info *server) 272 + { 273 + unsigned int chan_index; 274 + struct cifs_server_iface *iface = NULL; 275 + struct cifs_server_iface *old_iface = NULL; 276 + int rc = 0; 277 + 278 + spin_lock(&ses->chan_lock); 279 + chan_index = cifs_ses_get_chan_index(ses, server); 280 + if (!chan_index) { 281 + spin_unlock(&ses->chan_lock); 282 + return 0; 283 + } 284 + 285 + if (ses->chans[chan_index].iface) { 286 + old_iface = ses->chans[chan_index].iface; 287 + if (old_iface->is_active) { 288 + spin_unlock(&ses->chan_lock); 289 + return 1; 290 + } 291 + } 292 + spin_unlock(&ses->chan_lock); 293 + 294 + spin_lock(&ses->iface_lock); 295 + /* then look for a new one */ 296 + list_for_each_entry(iface, &ses->iface_list, iface_head) { 297 + if (!iface->is_active || 298 + (is_ses_using_iface(ses, iface) && 299 + !iface->rss_capable)) { 300 + continue; 301 + } 302 + kref_get(&iface->refcount); 303 + } 304 + 305 + if (!list_entry_is_head(iface, &ses->iface_list, iface_head)) { 306 + rc = 1; 307 + iface = NULL; 308 + cifs_dbg(FYI, "unable to find a suitable iface\n"); 309 + } 310 + 311 + /* now drop the ref to the current iface */ 312 + if (old_iface && iface) { 313 + kref_put(&old_iface->refcount, release_iface); 314 + cifs_dbg(FYI, "replacing iface: %pIS with %pIS\n", 315 + &old_iface->sockaddr, 316 + &iface->sockaddr); 317 + } else if (old_iface) { 318 + kref_put(&old_iface->refcount, release_iface); 319 + cifs_dbg(FYI, "releasing ref to iface: %pIS\n", 320 + &old_iface->sockaddr); 321 + } else { 322 + WARN_ON(!iface); 323 + cifs_dbg(FYI, "adding new iface: %pIS\n", &iface->sockaddr); 324 + } 325 + spin_unlock(&ses->iface_lock); 326 + 327 + spin_lock(&ses->chan_lock); 328 + chan_index = cifs_ses_get_chan_index(ses, server); 329 + ses->chans[chan_index].iface = iface; 330 + 331 + /* No 
iface is found. if secondary chan, drop connection */ 332 + if (!iface && CIFS_SERVER_IS_CHAN(server)) 333 + ses->chans[chan_index].server = NULL; 334 + 335 + spin_unlock(&ses->chan_lock); 336 + 337 + if (!iface && CIFS_SERVER_IS_CHAN(server)) 338 + cifs_put_tcp_session(server, false); 339 + 340 + return rc; 236 341 } 237 342 238 343 /* ··· 444 355 spin_unlock(&ses->chan_lock); 445 356 goto out; 446 357 } 358 + chan->iface = iface; 447 359 ses->chan_count++; 448 360 atomic_set(&ses->chan_seq, 0); 449 361
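Both `cifs_try_adding_channels()` and the new `cifs_chan_update_iface()` skip an interface with the same test: it must be active, and an interface already bound to a channel is only reused when it is RSS-capable. That predicate, isolated as a sketch:

```c
#include <assert.h>

/*
 * Mirrors the skip condition in the list walks above:
 * usable iff active, and either unused or RSS-capable.
 */
static int iface_usable(int is_active, int in_use, int rss_capable)
{
	return is_active && (!in_use || rss_capable);
}
```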
+119 -112
fs/cifs/smb2ops.c
··· 512 512 static int 513 513 parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf, 514 514 size_t buf_len, 515 - struct cifs_server_iface **iface_list, 516 - size_t *iface_count) 515 + struct cifs_ses *ses) 517 516 { 518 517 struct network_interface_info_ioctl_rsp *p; 519 518 struct sockaddr_in *addr4; 520 519 struct sockaddr_in6 *addr6; 521 520 struct iface_info_ipv4 *p4; 522 521 struct iface_info_ipv6 *p6; 523 - struct cifs_server_iface *info; 522 + struct cifs_server_iface *info = NULL, *iface = NULL, *niface = NULL; 523 + struct cifs_server_iface tmp_iface; 524 524 ssize_t bytes_left; 525 525 size_t next = 0; 526 526 int nb_iface = 0; 527 - int rc = 0; 528 - 529 - *iface_list = NULL; 530 - *iface_count = 0; 531 - 532 - /* 533 - * Fist pass: count and sanity check 534 - */ 527 + int rc = 0, ret = 0; 535 528 536 529 bytes_left = buf_len; 537 530 p = buf; 531 + 532 + spin_lock(&ses->iface_lock); 533 + /* 534 + * Go through iface_list and do kref_put to remove 535 + * any unused ifaces. ifaces in use will be removed 536 + * when the last user calls a kref_put on it 537 + */ 538 + list_for_each_entry_safe(iface, niface, &ses->iface_list, 539 + iface_head) { 540 + iface->is_active = 0; 541 + kref_put(&iface->refcount, release_iface); 542 + } 543 + spin_unlock(&ses->iface_lock); 544 + 538 545 while (bytes_left >= sizeof(*p)) { 546 + memset(&tmp_iface, 0, sizeof(tmp_iface)); 547 + tmp_iface.speed = le64_to_cpu(p->LinkSpeed); 548 + tmp_iface.rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0; 549 + tmp_iface.rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 1 : 0; 550 + 551 + switch (p->Family) { 552 + /* 553 + * The kernel and wire socket structures have the same 554 + * layout and use network byte order but make the 555 + * conversion explicit in case either one changes. 
556 + */ 557 + case INTERNETWORK: 558 + addr4 = (struct sockaddr_in *)&tmp_iface.sockaddr; 559 + p4 = (struct iface_info_ipv4 *)p->Buffer; 560 + addr4->sin_family = AF_INET; 561 + memcpy(&addr4->sin_addr, &p4->IPv4Address, 4); 562 + 563 + /* [MS-SMB2] 2.2.32.5.1.1 Clients MUST ignore these */ 564 + addr4->sin_port = cpu_to_be16(CIFS_PORT); 565 + 566 + cifs_dbg(FYI, "%s: ipv4 %pI4\n", __func__, 567 + &addr4->sin_addr); 568 + break; 569 + case INTERNETWORKV6: 570 + addr6 = (struct sockaddr_in6 *)&tmp_iface.sockaddr; 571 + p6 = (struct iface_info_ipv6 *)p->Buffer; 572 + addr6->sin6_family = AF_INET6; 573 + memcpy(&addr6->sin6_addr, &p6->IPv6Address, 16); 574 + 575 + /* [MS-SMB2] 2.2.32.5.1.2 Clients MUST ignore these */ 576 + addr6->sin6_flowinfo = 0; 577 + addr6->sin6_scope_id = 0; 578 + addr6->sin6_port = cpu_to_be16(CIFS_PORT); 579 + 580 + cifs_dbg(FYI, "%s: ipv6 %pI6\n", __func__, 581 + &addr6->sin6_addr); 582 + break; 583 + default: 584 + cifs_dbg(VFS, 585 + "%s: skipping unsupported socket family\n", 586 + __func__); 587 + goto next_iface; 588 + } 589 + 590 + /* 591 + * The iface_list is assumed to be sorted by speed. 592 + * Check if the new interface exists in that list. 593 + * NEVER change iface. it could be in use. 594 + * Add a new one instead 595 + */ 596 + spin_lock(&ses->iface_lock); 597 + iface = niface = NULL; 598 + list_for_each_entry_safe(iface, niface, &ses->iface_list, 599 + iface_head) { 600 + ret = iface_cmp(iface, &tmp_iface); 601 + if (!ret) { 602 + /* just get a ref so that it doesn't get picked/freed */ 603 + iface->is_active = 1; 604 + kref_get(&iface->refcount); 605 + spin_unlock(&ses->iface_lock); 606 + goto next_iface; 607 + } else if (ret < 0) { 608 + /* all remaining ifaces are slower */ 609 + kref_get(&iface->refcount); 610 + break; 611 + } 612 + } 613 + spin_unlock(&ses->iface_lock); 614 + 615 + /* no match. 
insert the entry in the list */ 616 + info = kmalloc(sizeof(struct cifs_server_iface), 617 + GFP_KERNEL); 618 + if (!info) { 619 + rc = -ENOMEM; 620 + goto out; 621 + } 622 + memcpy(info, &tmp_iface, sizeof(tmp_iface)); 623 + 624 + /* add this new entry to the list */ 625 + kref_init(&info->refcount); 626 + info->is_active = 1; 627 + 628 + cifs_dbg(FYI, "%s: adding iface %zu\n", __func__, ses->iface_count); 629 + cifs_dbg(FYI, "%s: speed %zu bps\n", __func__, info->speed); 630 + cifs_dbg(FYI, "%s: capabilities 0x%08x\n", __func__, 631 + le32_to_cpu(p->Capability)); 632 + 633 + spin_lock(&ses->iface_lock); 634 + if (!list_entry_is_head(iface, &ses->iface_list, iface_head)) { 635 + list_add_tail(&info->iface_head, &iface->iface_head); 636 + kref_put(&iface->refcount, release_iface); 637 + } else 638 + list_add_tail(&info->iface_head, &ses->iface_list); 639 + spin_unlock(&ses->iface_lock); 640 + 641 + ses->iface_count++; 642 + ses->iface_last_update = jiffies; 643 + next_iface: 539 644 nb_iface++; 540 645 next = le32_to_cpu(p->Next); 541 646 if (!next) { ··· 662 557 cifs_dbg(VFS, "%s: incomplete interface info\n", __func__); 663 558 664 559 665 - /* 666 - * Second pass: extract info to internal structure 667 - */ 668 - 669 - *iface_list = kcalloc(nb_iface, sizeof(**iface_list), GFP_KERNEL); 670 - if (!*iface_list) { 671 - rc = -ENOMEM; 672 - goto out; 673 - } 674 - 675 - info = *iface_list; 676 - bytes_left = buf_len; 677 - p = buf; 678 - while (bytes_left >= sizeof(*p)) { 679 - info->speed = le64_to_cpu(p->LinkSpeed); 680 - info->rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0; 681 - info->rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 
1 : 0; 682 - 683 - cifs_dbg(FYI, "%s: adding iface %zu\n", __func__, *iface_count); 684 - cifs_dbg(FYI, "%s: speed %zu bps\n", __func__, info->speed); 685 - cifs_dbg(FYI, "%s: capabilities 0x%08x\n", __func__, 686 - le32_to_cpu(p->Capability)); 687 - 688 - switch (p->Family) { 689 - /* 690 - * The kernel and wire socket structures have the same 691 - * layout and use network byte order but make the 692 - * conversion explicit in case either one changes. 693 - */ 694 - case INTERNETWORK: 695 - addr4 = (struct sockaddr_in *)&info->sockaddr; 696 - p4 = (struct iface_info_ipv4 *)p->Buffer; 697 - addr4->sin_family = AF_INET; 698 - memcpy(&addr4->sin_addr, &p4->IPv4Address, 4); 699 - 700 - /* [MS-SMB2] 2.2.32.5.1.1 Clients MUST ignore these */ 701 - addr4->sin_port = cpu_to_be16(CIFS_PORT); 702 - 703 - cifs_dbg(FYI, "%s: ipv4 %pI4\n", __func__, 704 - &addr4->sin_addr); 705 - break; 706 - case INTERNETWORKV6: 707 - addr6 = (struct sockaddr_in6 *)&info->sockaddr; 708 - p6 = (struct iface_info_ipv6 *)p->Buffer; 709 - addr6->sin6_family = AF_INET6; 710 - memcpy(&addr6->sin6_addr, &p6->IPv6Address, 16); 711 - 712 - /* [MS-SMB2] 2.2.32.5.1.2 Clients MUST ignore these */ 713 - addr6->sin6_flowinfo = 0; 714 - addr6->sin6_scope_id = 0; 715 - addr6->sin6_port = cpu_to_be16(CIFS_PORT); 716 - 717 - cifs_dbg(FYI, "%s: ipv6 %pI6\n", __func__, 718 - &addr6->sin6_addr); 719 - break; 720 - default: 721 - cifs_dbg(VFS, 722 - "%s: skipping unsupported socket family\n", 723 - __func__); 724 - goto next_iface; 725 - } 726 - 727 - (*iface_count)++; 728 - info++; 729 - next_iface: 730 - next = le32_to_cpu(p->Next); 731 - if (!next) 732 - break; 733 - p = (struct network_interface_info_ioctl_rsp *)((u8 *)p+next); 734 - bytes_left -= next; 735 - } 736 - 737 - if (!*iface_count) { 560 + if (!ses->iface_count) { 738 561 rc = -EINVAL; 739 562 goto out; 740 563 } 741 564 742 565 out: 743 - if (rc) { 744 - kfree(*iface_list); 745 - *iface_count = 0; 746 - *iface_list = NULL; 747 - } 748 566 return 
rc; 749 567 } 750 568 751 - static int compare_iface(const void *ia, const void *ib) 752 - { 753 - const struct cifs_server_iface *a = (struct cifs_server_iface *)ia; 754 - const struct cifs_server_iface *b = (struct cifs_server_iface *)ib; 755 - 756 - return a->speed == b->speed ? 0 : (a->speed > b->speed ? -1 : 1); 757 - } 758 - 759 - static int 569 + int 760 570 SMB3_request_interfaces(const unsigned int xid, struct cifs_tcon *tcon) 761 571 { 762 572 int rc; 763 573 unsigned int ret_data_len = 0; 764 574 struct network_interface_info_ioctl_rsp *out_buf = NULL; 765 - struct cifs_server_iface *iface_list; 766 - size_t iface_count; 767 575 struct cifs_ses *ses = tcon->ses; 768 576 769 577 rc = SMB2_ioctl(xid, tcon, NO_FILE_ID, NO_FILE_ID, ··· 692 674 goto out; 693 675 } 694 676 695 - rc = parse_server_interfaces(out_buf, ret_data_len, 696 - &iface_list, &iface_count); 677 + rc = parse_server_interfaces(out_buf, ret_data_len, ses); 697 678 if (rc) 698 679 goto out; 699 - 700 - /* sort interfaces from fastest to slowest */ 701 - sort(iface_list, iface_count, sizeof(*iface_list), compare_iface, NULL); 702 - 703 - spin_lock(&ses->iface_lock); 704 - kfree(ses->iface_list); 705 - ses->iface_list = iface_list; 706 - ses->iface_count = iface_count; 707 - ses->iface_last_update = jiffies; 708 - spin_unlock(&ses->iface_lock); 709 680 710 681 out: 711 682 kfree(out_buf);
+15 -6
fs/cifs/smb2pdu.c
··· 543 543 struct TCP_Server_Info *server, unsigned int *total_len) 544 544 { 545 545 char *pneg_ctxt; 546 + char *hostname = NULL; 546 547 unsigned int ctxt_len, neg_context_count; 547 548 548 549 if (*total_len > 200) { ··· 571 570 *total_len += ctxt_len; 572 571 pneg_ctxt += ctxt_len; 573 572 574 - ctxt_len = build_netname_ctxt((struct smb2_netname_neg_context *)pneg_ctxt, 575 - server->hostname); 576 - *total_len += ctxt_len; 577 - pneg_ctxt += ctxt_len; 578 - 579 573 build_posix_ctxt((struct smb2_posix_neg_context *)pneg_ctxt); 580 574 *total_len += sizeof(struct smb2_posix_neg_context); 581 575 pneg_ctxt += sizeof(struct smb2_posix_neg_context); 582 576 583 - neg_context_count = 4; 577 + /* 578 + * secondary channels don't have the hostname field populated 579 + * use the hostname field in the primary channel instead 580 + */ 581 + hostname = CIFS_SERVER_IS_CHAN(server) ? 582 + server->primary_server->hostname : server->hostname; 583 + if (hostname && (hostname[0] != 0)) { 584 + ctxt_len = build_netname_ctxt((struct smb2_netname_neg_context *)pneg_ctxt, 585 + hostname); 586 + *total_len += ctxt_len; 587 + pneg_ctxt += ctxt_len; 588 + neg_context_count = 4; 589 + } else /* second channels do not have a hostname */ 590 + neg_context_count = 3; 584 591 585 592 if (server->compress_algorithm) { 586 593 build_compression_ctxt((struct smb2_compression_capabilities_context *)
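The negotiate-context change only emits the netname context when a hostname is available, and a secondary channel borrows its primary channel's hostname; otherwise only three contexts are built. A sketch of that selection, with simplified fields mirroring the `CIFS_SERVER_IS_CHAN()` logic:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified server: a secondary channel points at its primary. */
struct srv {
	const char *hostname;
	struct srv *primary_server;	/* non-NULL iff this is a channel */
};

/* Pick the hostname the way the negotiate-context assembly now does. */
static const char *neg_hostname(const struct srv *s)
{
	return s->primary_server ? s->primary_server->hostname : s->hostname;
}

/* The netname context (the 4th) is added only for a non-empty hostname. */
static int neg_context_count(const struct srv *s)
{
	const char *h = neg_hostname(s);

	return (h && h[0] != '\0') ? 4 : 3;
}
```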
+3 -1
fs/exfat/namei.c
··· 1198 1198 return -ENOENT; 1199 1199 } 1200 1200 1201 - exfat_chain_dup(&olddir, &ei->dir); 1201 + exfat_chain_set(&olddir, EXFAT_I(old_parent_inode)->start_clu, 1202 + EXFAT_B_TO_CLU_ROUND_UP(i_size_read(old_parent_inode), sbi), 1203 + EXFAT_I(old_parent_inode)->flags); 1202 1204 dentry = ei->entry; 1203 1205 1204 1206 ep = exfat_get_dentry(sb, &olddir, dentry, &old_bh);
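The exfat fix rebuilds the old parent's chain with `exfat_chain_set()`, sizing it as the parent directory's byte size rounded up to whole clusters via `EXFAT_B_TO_CLU_ROUND_UP()`. A sketch of that round-up for a power-of-two cluster size (the exact kernel macro may be written differently):

```c
#include <assert.h>
#include <stdint.h>

/* Round a byte count up to whole clusters; cluster size = 1 << bits. */
static uint64_t b_to_clu_round_up(uint64_t bytes, unsigned int cluster_bits)
{
	return (bytes + (1ULL << cluster_bits) - 1) >> cluster_bits;
}
```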
+18 -13
fs/f2fs/iostat.c
··· 91 91 unsigned int cnt; 92 92 struct f2fs_iostat_latency iostat_lat[MAX_IO_TYPE][NR_PAGE_TYPE]; 93 93 struct iostat_lat_info *io_lat = sbi->iostat_io_lat; 94 + unsigned long flags; 94 95 95 - spin_lock_bh(&sbi->iostat_lat_lock); 96 + spin_lock_irqsave(&sbi->iostat_lat_lock, flags); 96 97 for (idx = 0; idx < MAX_IO_TYPE; idx++) { 97 98 for (io = 0; io < NR_PAGE_TYPE; io++) { 98 99 cnt = io_lat->bio_cnt[idx][io]; ··· 107 106 io_lat->bio_cnt[idx][io] = 0; 108 107 } 109 108 } 110 - spin_unlock_bh(&sbi->iostat_lat_lock); 109 + spin_unlock_irqrestore(&sbi->iostat_lat_lock, flags); 111 110 112 111 trace_f2fs_iostat_latency(sbi, iostat_lat); 113 112 } ··· 116 115 { 117 116 unsigned long long iostat_diff[NR_IO_TYPE]; 118 117 int i; 118 + unsigned long flags; 119 119 120 120 if (time_is_after_jiffies(sbi->iostat_next_period)) 121 121 return; 122 122 123 123 /* Need double check under the lock */ 124 - spin_lock_bh(&sbi->iostat_lock); 124 + spin_lock_irqsave(&sbi->iostat_lock, flags); 125 125 if (time_is_after_jiffies(sbi->iostat_next_period)) { 126 - spin_unlock_bh(&sbi->iostat_lock); 126 + spin_unlock_irqrestore(&sbi->iostat_lock, flags); 127 127 return; 128 128 } 129 129 sbi->iostat_next_period = jiffies + ··· 135 133 sbi->prev_rw_iostat[i]; 136 134 sbi->prev_rw_iostat[i] = sbi->rw_iostat[i]; 137 135 } 138 - spin_unlock_bh(&sbi->iostat_lock); 136 + spin_unlock_irqrestore(&sbi->iostat_lock, flags); 139 137 140 138 trace_f2fs_iostat(sbi, iostat_diff); 141 139 ··· 147 145 struct iostat_lat_info *io_lat = sbi->iostat_io_lat; 148 146 int i; 149 147 150 - spin_lock_bh(&sbi->iostat_lock); 148 + spin_lock_irq(&sbi->iostat_lock); 151 149 for (i = 0; i < NR_IO_TYPE; i++) { 152 150 sbi->rw_iostat[i] = 0; 153 151 sbi->prev_rw_iostat[i] = 0; 154 152 } 155 - spin_unlock_bh(&sbi->iostat_lock); 153 + spin_unlock_irq(&sbi->iostat_lock); 156 154 157 - spin_lock_bh(&sbi->iostat_lat_lock); 155 + spin_lock_irq(&sbi->iostat_lat_lock); 158 156 memset(io_lat, 0, sizeof(struct 
iostat_lat_info)); 159 - spin_unlock_bh(&sbi->iostat_lat_lock); 157 + spin_unlock_irq(&sbi->iostat_lat_lock); 160 158 } 161 159 162 160 void f2fs_update_iostat(struct f2fs_sb_info *sbi, 163 161 enum iostat_type type, unsigned long long io_bytes) 164 162 { 163 + unsigned long flags; 164 + 165 165 if (!sbi->iostat_enable) 166 166 return; 167 167 168 - spin_lock_bh(&sbi->iostat_lock); 168 + spin_lock_irqsave(&sbi->iostat_lock, flags); 169 169 sbi->rw_iostat[type] += io_bytes; 170 170 171 171 if (type == APP_BUFFERED_IO || type == APP_DIRECT_IO) ··· 176 172 if (type == APP_BUFFERED_READ_IO || type == APP_DIRECT_READ_IO) 177 173 sbi->rw_iostat[APP_READ_IO] += io_bytes; 178 174 179 - spin_unlock_bh(&sbi->iostat_lock); 175 + spin_unlock_irqrestore(&sbi->iostat_lock, flags); 180 176 181 177 f2fs_record_iostat(sbi); 182 178 } ··· 189 185 struct f2fs_sb_info *sbi = iostat_ctx->sbi; 190 186 struct iostat_lat_info *io_lat = sbi->iostat_io_lat; 191 187 int idx; 188 + unsigned long flags; 192 189 193 190 if (!sbi->iostat_enable) 194 191 return; ··· 207 202 idx = WRITE_ASYNC_IO; 208 203 } 209 204 210 - spin_lock_bh(&sbi->iostat_lat_lock); 205 + spin_lock_irqsave(&sbi->iostat_lat_lock, flags); 211 206 io_lat->sum_lat[idx][iotype] += ts_diff; 212 207 io_lat->bio_cnt[idx][iotype]++; 213 208 if (ts_diff > io_lat->peak_lat[idx][iotype]) 214 209 io_lat->peak_lat[idx][iotype] = ts_diff; 215 - spin_unlock_bh(&sbi->iostat_lat_lock); 210 + spin_unlock_irqrestore(&sbi->iostat_lat_lock, flags); 216 211 } 217 212 218 213 void iostat_update_and_unbind_ctx(struct bio *bio, int rw)
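The iostat.c hunk only changes the lock flavor (bh-disabling locks to irqsave/irq locks), because these statistics are now also updated from hardirq context, where a `_bh` lock is not sufficient. The surrounding "check, then re-check under the lock" period gating is worth sketching; below is a single-threaded userspace model with illustrative names and an arbitrary period (in the kernel the second check runs under the spinlock to close the race between two callers):

```c
#include <assert.h>

/* Userspace sketch of f2fs_record_iostat()'s period gating: a
 * flush happens at most once per period. The kernel version
 * re-checks the deadline after taking the lock; names and the
 * period value here are illustrative. */
int flushes_over(const unsigned long *times, int n, unsigned long period)
{
    unsigned long next_period = 0;
    int flushes = 0;
    int i;

    for (i = 0; i < n; i++) {
        /* fast path: period not over yet (kernel re-checks this
         * under iostat_lock before committing) */
        if (times[i] < next_period)
            continue;
        next_period = times[i] + period;
        flushes++;
    }
    return flushes;
}
```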
+11 -6
fs/f2fs/namei.c
··· 89 89 if (test_opt(sbi, INLINE_XATTR)) 90 90 set_inode_flag(inode, FI_INLINE_XATTR); 91 91 92 - if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode)) 93 - set_inode_flag(inode, FI_INLINE_DATA); 94 92 if (f2fs_may_inline_dentry(inode)) 95 93 set_inode_flag(inode, FI_INLINE_DENTRY); 96 94 ··· 105 107 106 108 f2fs_init_extent_tree(inode, NULL); 107 109 108 - stat_inc_inline_xattr(inode); 109 - stat_inc_inline_inode(inode); 110 - stat_inc_inline_dir(inode); 111 - 112 110 F2FS_I(inode)->i_flags = 113 111 f2fs_mask_flags(mode, F2FS_I(dir)->i_flags & F2FS_FL_INHERITED); 114 112 ··· 120 126 f2fs_may_compress(inode)) 121 127 set_compress_context(inode); 122 128 } 129 + 130 + /* Should enable inline_data after compression set */ 131 + if (test_opt(sbi, INLINE_DATA) && f2fs_may_inline_data(inode)) 132 + set_inode_flag(inode, FI_INLINE_DATA); 133 + 134 + stat_inc_inline_xattr(inode); 135 + stat_inc_inline_inode(inode); 136 + stat_inc_inline_dir(inode); 123 137 124 138 f2fs_set_inode_flags(inode); 125 139 ··· 327 325 if (!is_extension_exist(name, ext[i], false)) 328 326 continue; 329 327 328 + /* Do not use inline_data with compression */ 329 + stat_dec_inline_inode(inode); 330 + clear_inode_flag(inode, FI_INLINE_DATA); 330 331 set_compress_context(inode); 331 332 return; 332 333 }
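The namei.c hunk is an ordering fix: the inline_data decision moves after compression setup, so an inode that gets a compression context can never also carry `FI_INLINE_DATA`. A toy flag model of the reordered logic (flag names and the helper are illustrative, not the f2fs API):

```c
#include <assert.h>

enum { FL_COMPRESS = 1 << 0, FL_INLINE_DATA = 1 << 1 };

/* Toy model of the reordered f2fs_new_inode() logic: decide
 * compression first, then grant inline_data only when no
 * compression context was set. */
unsigned init_flags(int want_compress, int want_inline)
{
    unsigned flags = 0;

    if (want_compress)
        flags |= FL_COMPRESS;
    /* "Should enable inline_data after compression set" */
    if (want_inline && !(flags & FL_COMPRESS))
        flags |= FL_INLINE_DATA;
    return flags;
}
```

The second hunk handles the same invariant from the other direction: when a rename/extension check turns compression on later, the already-set inline_data flag is cleared.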
+3 -1
fs/f2fs/node.c
··· 1450 1450 out_err: 1451 1451 ClearPageUptodate(page); 1452 1452 out_put_err: 1453 - f2fs_handle_page_eio(sbi, page->index, NODE); 1453 + /* ENOENT comes from read_node_page which is not an error. */ 1454 + if (err != -ENOENT) 1455 + f2fs_handle_page_eio(sbi, page->index, NODE); 1454 1456 f2fs_put_page(page, 1); 1455 1457 return ERR_PTR(err); 1456 1458 }
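The node.c change narrows the error path: `-ENOENT` from `read_node_page()` means the page simply does not exist and should not feed the EIO accounting. A trivial model of the predicate:

```c
#include <assert.h>
#include <errno.h>

/* Sketch of the node.c fix: only a real I/O failure counts as an
 * EIO event (would call f2fs_handle_page_eio()); -ENOENT does not. */
int counts_as_eio(int err)
{
    return err != -ENOENT;
}
```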
+55 -17
fs/hugetlbfs/inode.c
··· 600 600 remove_inode_hugepages(inode, offset, LLONG_MAX); 601 601 } 602 602 603 + static void hugetlbfs_zero_partial_page(struct hstate *h, 604 + struct address_space *mapping, 605 + loff_t start, 606 + loff_t end) 607 + { 608 + pgoff_t idx = start >> huge_page_shift(h); 609 + struct folio *folio; 610 + 611 + folio = filemap_lock_folio(mapping, idx); 612 + if (!folio) 613 + return; 614 + 615 + start = start & ~huge_page_mask(h); 616 + end = end & ~huge_page_mask(h); 617 + if (!end) 618 + end = huge_page_size(h); 619 + 620 + folio_zero_segment(folio, (size_t)start, (size_t)end); 621 + 622 + folio_unlock(folio); 623 + folio_put(folio); 624 + } 625 + 603 626 static long hugetlbfs_punch_hole(struct inode *inode, loff_t offset, loff_t len) 604 627 { 628 + struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode); 629 + struct address_space *mapping = inode->i_mapping; 605 630 struct hstate *h = hstate_inode(inode); 606 631 loff_t hpage_size = huge_page_size(h); 607 632 loff_t hole_start, hole_end; 608 633 609 634 /* 610 - * For hole punch round up the beginning offset of the hole and 611 - * round down the end. 635 + * hole_start and hole_end indicate the full pages within the hole. 612 636 */ 613 637 hole_start = round_up(offset, hpage_size); 614 638 hole_end = round_down(offset + len, hpage_size); 615 639 640 + inode_lock(inode); 641 + 642 + /* protected by i_rwsem */ 643 + if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) { 644 + inode_unlock(inode); 645 + return -EPERM; 646 + } 647 + 648 + i_mmap_lock_write(mapping); 649 + 650 + /* If range starts before first full page, zero partial page. */ 651 + if (offset < hole_start) 652 + hugetlbfs_zero_partial_page(h, mapping, 653 + offset, min(offset + len, hole_start)); 654 + 655 + /* Unmap users of full pages in the hole. 
*/ 616 656 if (hole_end > hole_start) { 617 - struct address_space *mapping = inode->i_mapping; 618 - struct hugetlbfs_inode_info *info = HUGETLBFS_I(inode); 619 - 620 - inode_lock(inode); 621 - 622 - /* protected by i_rwsem */ 623 - if (info->seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) { 624 - inode_unlock(inode); 625 - return -EPERM; 626 - } 627 - 628 - i_mmap_lock_write(mapping); 629 657 if (!RB_EMPTY_ROOT(&mapping->i_mmap.rb_root)) 630 658 hugetlb_vmdelete_list(&mapping->i_mmap, 631 659 hole_start >> PAGE_SHIFT, 632 660 hole_end >> PAGE_SHIFT, 0); 633 - i_mmap_unlock_write(mapping); 634 - remove_inode_hugepages(inode, hole_start, hole_end); 635 - inode_unlock(inode); 636 661 } 662 + 663 + /* If range extends beyond last full page, zero partial page. */ 664 + if ((offset + len) > hole_end && (offset + len) > hole_start) 665 + hugetlbfs_zero_partial_page(h, mapping, 666 + hole_end, offset + len); 667 + 668 + i_mmap_unlock_write(mapping); 669 + 670 + /* Remove full pages from the file. */ 671 + if (hole_end > hole_start) 672 + remove_inode_hugepages(inode, hole_start, hole_end); 673 + 674 + inode_unlock(inode); 637 675 638 676 return 0; 639 677 }
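The hugetlbfs hole-punch rework splits `[offset, offset + len)` into a partial head page, zero or more whole huge pages, and a partial tail page, zeroing the partial pages in place and removing only the whole ones. The boundary arithmetic can be checked in isolation; the helpers below mirror the kernel's power-of-two `round_up`/`round_down`, and the 2 MB huge-page size in the tests is just an example:

```c
#include <assert.h>

typedef unsigned long long u64;

/* Power-of-two rounding, as used by the hugetlbfs_punch_hole()
 * hunk above. align must be a power of two. */
u64 round_up_p2(u64 x, u64 align)   { return (x + align - 1) & ~(align - 1); }
u64 round_down_p2(u64 x, u64 align) { return x & ~(align - 1); }

/* Start of the first whole huge page inside the hole. */
u64 hole_start(u64 offset, u64 hpage)
{
    return round_up_p2(offset, hpage);
}

/* End of the last whole huge page inside the hole. */
u64 hole_end(u64 offset, u64 len, u64 hpage)
{
    return round_down_p2(offset + len, hpage);
}
```

When `hole_end <= hole_start` the requested range lies within one or two huge pages, so there is nothing to unmap or remove and only the partial-page zeroing (the new `hugetlbfs_zero_partial_page()`) runs.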
+15 -11
fs/io_uring.c
··· 1975 1975 { 1976 1976 if (!(req->flags & REQ_F_INFLIGHT)) { 1977 1977 req->flags |= REQ_F_INFLIGHT; 1978 - atomic_inc(&current->io_uring->inflight_tracked); 1978 + atomic_inc(&req->task->io_uring->inflight_tracked); 1979 1979 } 1980 1980 } 1981 1981 ··· 3437 3437 if (unlikely(res != req->cqe.res)) { 3438 3438 if ((res == -EAGAIN || res == -EOPNOTSUPP) && 3439 3439 io_rw_should_reissue(req)) { 3440 - req->flags |= REQ_F_REISSUE; 3440 + req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO; 3441 3441 return true; 3442 3442 } 3443 3443 req_set_fail(req); ··· 3487 3487 kiocb_end_write(req); 3488 3488 if (unlikely(res != req->cqe.res)) { 3489 3489 if (res == -EAGAIN && io_rw_should_reissue(req)) { 3490 - req->flags |= REQ_F_REISSUE; 3490 + req->flags |= REQ_F_REISSUE | REQ_F_PARTIAL_IO; 3491 3491 return; 3492 3492 } 3493 3493 req->cqe.res = res; ··· 6077 6077 6078 6078 if (unlikely(sqe->file_index)) 6079 6079 return -EINVAL; 6080 - if (unlikely(sqe->addr2 || sqe->file_index)) 6081 - return -EINVAL; 6082 6080 6083 6081 sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr)); 6084 6082 sr->len = READ_ONCE(sqe->len); ··· 6312 6314 struct io_sr_msg *sr = &req->sr_msg; 6313 6315 6314 6316 if (unlikely(sqe->file_index)) 6315 - return -EINVAL; 6316 - if (unlikely(sqe->addr2 || sqe->file_index)) 6317 6317 return -EINVAL; 6318 6318 6319 6319 sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr)); ··· 6950 6954 io_req_complete_failed(req, ret); 6951 6955 } 6952 6956 6953 - static void __io_poll_execute(struct io_kiocb *req, int mask, __poll_t events) 6957 + static void __io_poll_execute(struct io_kiocb *req, int mask, 6958 + __poll_t __maybe_unused events) 6954 6959 { 6955 6960 req->cqe.res = mask; 6956 6961 /* ··· 6960 6963 * CPU. We want to avoid pulling in req->apoll->events for that 6961 6964 * case. 
6962 6965 */ 6963 - req->apoll_events = events; 6964 6966 if (req->opcode == IORING_OP_POLL_ADD) 6965 6967 req->io_task_work.func = io_poll_task_func; 6966 6968 else ··· 7110 7114 io_init_poll_iocb(poll, mask, io_poll_wake); 7111 7115 poll->file = req->file; 7112 7116 7117 + req->apoll_events = poll->events; 7118 + 7113 7119 ipt->pt._key = mask; 7114 7120 ipt->req = req; 7115 7121 ipt->error = 0; ··· 7142 7144 7143 7145 if (mask) { 7144 7146 /* can't multishot if failed, just queue the event we've got */ 7145 - if (unlikely(ipt->error || !ipt->nr_entries)) 7147 + if (unlikely(ipt->error || !ipt->nr_entries)) { 7146 7148 poll->events |= EPOLLONESHOT; 7149 + req->apoll_events |= EPOLLONESHOT; 7150 + ipt->error = 0; 7151 + } 7147 7152 __io_poll_execute(req, mask, poll->events); 7148 7153 return 0; 7149 7154 } ··· 7208 7207 mask |= EPOLLEXCLUSIVE; 7209 7208 if (req->flags & REQ_F_POLLED) { 7210 7209 apoll = req->apoll; 7210 + kfree(apoll->double_poll); 7211 7211 } else if (!(issue_flags & IO_URING_F_UNLOCKED) && 7212 7212 !list_empty(&ctx->apoll_cache)) { 7213 7213 apoll = list_first_entry(&ctx->apoll_cache, struct async_poll, ··· 7394 7392 return -EINVAL; 7395 7393 7396 7394 io_req_set_refcount(req); 7397 - req->apoll_events = poll->events = io_poll_parse_events(sqe, flags); 7395 + poll->events = io_poll_parse_events(sqe, flags); 7398 7396 return 0; 7399 7397 } 7400 7398 ··· 7407 7405 ipt.pt._qproc = io_poll_queue_proc; 7408 7406 7409 7407 ret = __io_arm_poll_handler(req, &req->poll, &ipt, poll->events); 7408 + if (!ret && ipt.error) 7409 + req_set_fail(req); 7410 7410 ret = ret ?: ipt.error; 7411 7411 if (ret) 7412 7412 __io_req_complete(req, issue_flags, ret, 0);
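Among the io_uring fixes, the `__io_arm_poll_handler()` hunk downgrades a multishot poll to oneshot when the waitqueue entry could not be armed, so the single event already collected is still delivered, and clears the arm error. A sketch of that decision (the function name is illustrative; `EPOLLONESHOT`'s value matches the Linux uapi definition):

```c
#include <assert.h>

#define EPOLLONESHOT 0x40000000u  /* (1u << 30), as in <sys/epoll.h> */

/* Sketch of the arm-failure fallback: if arming failed or no
 * waitqueue entries were installed, force oneshot so the event
 * we already have is completed instead of lost. */
unsigned arm_events(unsigned events, int arm_error, int nr_entries)
{
    if (arm_error || nr_entries == 0)
        events |= EPOLLONESHOT;
    return events;
}
```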
+1 -1
fs/tracefs/inode.c
··· 553 553 * 554 554 * Only one instances directory is allowed. 555 555 * 556 - * The instances directory is special as it allows for mkdir and rmdir to 556 + * The instances directory is special as it allows for mkdir and rmdir 557 557 * to be done by userspace. When a mkdir or rmdir is performed, the inode 558 558 * locks are released and the methods passed in (@mkdir and @rmdir) are 559 559 * called without locks and with the name of the directory being created
+3
include/keys/asymmetric-type.h
··· 84 84 const struct asymmetric_key_id *id_2, 85 85 bool partial); 86 86 87 + int x509_load_certificate_list(const u8 cert_list[], const unsigned long list_size, 88 + const struct key *keyring); 89 + 87 90 /* 88 91 * The payload is at the discretion of the subtype. 89 92 */
+4 -5
include/linux/blkdev.h
··· 342 342 */ 343 343 struct blk_independent_access_range { 344 344 struct kobject kobj; 345 - struct request_queue *queue; 346 345 sector_t sector; 347 346 sector_t nr_sectors; 348 347 }; ··· 481 482 #endif /* CONFIG_BLK_DEV_ZONED */ 482 483 483 484 int node; 484 - struct mutex debugfs_mutex; 485 485 #ifdef CONFIG_BLK_DEV_IO_TRACE 486 486 struct blk_trace __rcu *blk_trace; 487 487 #endif ··· 524 526 struct bio_set bio_split; 525 527 526 528 struct dentry *debugfs_dir; 527 - 528 - #ifdef CONFIG_BLK_DEBUG_FS 529 529 struct dentry *sched_debugfs_dir; 530 530 struct dentry *rqos_debugfs_dir; 531 - #endif 531 + /* 532 + * Serializes all debugfs metadata operations using the above dentries. 533 + */ 534 + struct mutex debugfs_mutex; 532 535 533 536 bool mq_sysfs_init_done; 534 537
-17
include/linux/console.h
··· 16 16 17 17 #include <linux/atomic.h> 18 18 #include <linux/types.h> 19 - #include <linux/mutex.h> 20 19 21 20 struct vc_data; 22 21 struct console_font_op; ··· 153 154 uint ospeed; 154 155 u64 seq; 155 156 unsigned long dropped; 156 - struct task_struct *thread; 157 - bool blocked; 158 - 159 - /* 160 - * The per-console lock is used by printing kthreads to synchronize 161 - * this console with callers of console_lock(). This is necessary in 162 - * order to allow printing kthreads to run in parallel to each other, 163 - * while each safely accessing the @blocked field and synchronizing 164 - * against direct printing via console_lock/console_unlock. 165 - * 166 - * Note: For synchronizing against direct printing via 167 - * console_trylock/console_unlock, see the static global 168 - * variable @console_kthreads_active. 169 - */ 170 - struct mutex lock; 171 - 172 157 void *data; 173 158 struct console *next; 174 159 };
+16 -13
include/linux/gpio/driver.h
··· 167 167 */ 168 168 irq_flow_handler_t parent_handler; 169 169 170 - /** 171 - * @parent_handler_data: 172 - * 173 - * If @per_parent_data is false, @parent_handler_data is a single 174 - * pointer used as the data associated with every parent interrupt. 175 - * 176 - * @parent_handler_data_array: 177 - * 178 - * If @per_parent_data is true, @parent_handler_data_array is 179 - * an array of @num_parents pointers, and is used to associate 180 - * different data for each parent. This cannot be NULL if 181 - * @per_parent_data is true. 182 - */ 183 170 union { 171 + /** 172 + * @parent_handler_data: 173 + * 174 + * If @per_parent_data is false, @parent_handler_data is a 175 + * single pointer used as the data associated with every 176 + * parent interrupt. 177 + */ 184 178 void *parent_handler_data; 179 + 180 + /** 181 + * @parent_handler_data_array: 182 + * 183 + * If @per_parent_data is true, @parent_handler_data_array is 184 + * an array of @num_parents pointers, and is used to associate 185 + * different data for each parent. This cannot be NULL if 186 + * @per_parent_data is true. 187 + */ 185 188 void **parent_handler_data_array; 186 189 }; 187 190
+2 -1
include/linux/mm.h
··· 1600 1600 if (mt == MIGRATE_CMA || mt == MIGRATE_ISOLATE) 1601 1601 return false; 1602 1602 #endif 1603 - return !(is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page))); 1603 + return !is_zone_movable_page(page) || is_zero_pfn(page_to_pfn(page)); 1604 1604 } 1605 1605 #else 1606 1606 static inline bool is_pinnable_page(struct page *page) ··· 3232 3232 MF_MUST_KILL = 1 << 2, 3233 3233 MF_SOFT_OFFLINE = 1 << 3, 3234 3234 MF_UNPOISON = 1 << 4, 3235 + MF_SW_SIMULATED = 1 << 5, 3235 3236 }; 3236 3237 extern int memory_failure(unsigned long pfn, int flags); 3237 3238 extern void memory_failure_queue(unsigned long pfn, int flags);
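The mm.h hunk changes `!(movable || zero_pfn)` into `!movable || zero_pfn`: with the fix, the shared zero page counts as pinnable even when it sits in a movable zone, whereas the old expression rejected it. A pure boolean model of the two expressions makes the difference explicit:

```c
#include <assert.h>

/* Boolean model of the is_pinnable_page() return expression,
 * before and after the fix. Arguments are the truth values of
 * is_zone_movable_page() and is_zero_pfn(). */
int pinnable_old(int movable, int zero_pfn)
{
    return !(movable || zero_pfn);
}

int pinnable_new(int movable, int zero_pfn)
{
    return !movable || zero_pfn;
}
```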
+2 -2
include/linux/nvme.h
··· 233 233 }; 234 234 235 235 enum { 236 - NVME_CAP_CRMS_CRIMS = 1ULL << 59, 237 - NVME_CAP_CRMS_CRWMS = 1ULL << 60, 236 + NVME_CAP_CRMS_CRWMS = 1ULL << 59, 237 + NVME_CAP_CRMS_CRIMS = 1ULL << 60, 238 238 }; 239 239 240 240 struct nvme_id_power_state {
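The nvme.h hunk swaps the two CAP.CRMS flags so they match the NVMe base specification: CRWMS (Controller Ready With Media Support) is CAP bit 59 and CRIMS (Controller Ready Independent of Media Support) is bit 60, not the other way around. The corrected values and simple accessors (helper names are illustrative):

```c
#include <assert.h>

/* Corrected CAP.CRMS bit positions from the hunk above. */
#define NVME_CAP_CRMS_CRWMS (1ULL << 59)
#define NVME_CAP_CRMS_CRIMS (1ULL << 60)

int cap_has_crwms(unsigned long long cap)
{
    return !!(cap & NVME_CAP_CRMS_CRWMS);
}

int cap_has_crims(unsigned long long cap)
{
    return !!(cap & NVME_CAP_CRMS_CRIMS);
}
```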
-16
include/linux/printk.h
··· 169 169 #define printk_deferred_enter __printk_safe_enter 170 170 #define printk_deferred_exit __printk_safe_exit 171 171 172 - extern void printk_prefer_direct_enter(void); 173 - extern void printk_prefer_direct_exit(void); 174 - 175 172 extern bool pr_flush(int timeout_ms, bool reset_on_progress); 176 - extern void try_block_console_kthreads(int timeout_ms); 177 173 178 174 /* 179 175 * Please don't use printk_ratelimit(), because it shares ratelimiting state ··· 221 225 { 222 226 } 223 227 224 - static inline void printk_prefer_direct_enter(void) 225 - { 226 - } 227 - 228 - static inline void printk_prefer_direct_exit(void) 229 - { 230 - } 231 - 232 228 static inline bool pr_flush(int timeout_ms, bool reset_on_progress) 233 229 { 234 230 return true; 235 - } 236 - 237 - static inline void try_block_console_kthreads(int timeout_ms) 238 - { 239 231 } 240 232 241 233 static inline int printk_ratelimit(void)
+8 -4
include/linux/ratelimit_types.h
··· 23 23 unsigned long flags; 24 24 }; 25 25 26 - #define RATELIMIT_STATE_INIT(name, interval_init, burst_init) { \ 27 - .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \ 28 - .interval = interval_init, \ 29 - .burst = burst_init, \ 26 + #define RATELIMIT_STATE_INIT_FLAGS(name, interval_init, burst_init, flags_init) { \ 27 + .lock = __RAW_SPIN_LOCK_UNLOCKED(name.lock), \ 28 + .interval = interval_init, \ 29 + .burst = burst_init, \ 30 + .flags = flags_init, \ 30 31 } 32 + 33 + #define RATELIMIT_STATE_INIT(name, interval_init, burst_init) \ 34 + RATELIMIT_STATE_INIT_FLAGS(name, interval_init, burst_init, 0) 31 35 32 36 #define RATELIMIT_STATE_INIT_DISABLED \ 33 37 RATELIMIT_STATE_INIT(ratelimit_state, 0, DEFAULT_RATELIMIT_BURST)
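The ratelimit_types.h refactor makes the flags-taking initializer the primary macro and redefines the old one as a wrapper passing `flags = 0`, so existing users are unchanged. A minimal sketch with a trimmed struct (names prefixed `RL_` to mark them as illustrative, not the kernel macros):

```c
#include <assert.h>

struct rl_state {
    int interval;
    int burst;
    unsigned long flags;
};

/* Primary initializer takes flags... */
#define RL_INIT_FLAGS(interval_init, burst_init, flags_init) \
    { .interval = interval_init, .burst = burst_init, .flags = flags_init }

/* ...and the legacy initializer layers on top with flags = 0. */
#define RL_INIT(interval_init, burst_init) \
    RL_INIT_FLAGS(interval_init, burst_init, 0)

struct rl_state make_default(int interval, int burst)
{
    struct rl_state rs = RL_INIT(interval, burst);
    return rs;
}

struct rl_state make_with_flags(int interval, int burst, unsigned long flags)
{
    struct rl_state rs = RL_INIT_FLAGS(interval, burst, flags);
    return rs;
}
```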
+5 -4
include/linux/scmi_protocol.h
··· 13 13 #include <linux/notifier.h> 14 14 #include <linux/types.h> 15 15 16 - #define SCMI_MAX_STR_SIZE 64 17 - #define SCMI_MAX_NUM_RATES 16 16 + #define SCMI_MAX_STR_SIZE 64 17 + #define SCMI_SHORT_NAME_MAX_SIZE 16 18 + #define SCMI_MAX_NUM_RATES 16 18 19 19 20 /** 20 21 * struct scmi_revision_info - version information structure ··· 37 36 u8 num_protocols; 38 37 u8 num_agents; 39 38 u32 impl_ver; 40 - char vendor_id[SCMI_MAX_STR_SIZE]; 41 - char sub_vendor_id[SCMI_MAX_STR_SIZE]; 39 + char vendor_id[SCMI_SHORT_NAME_MAX_SIZE]; 40 + char sub_vendor_id[SCMI_SHORT_NAME_MAX_SIZE]; 42 41 }; 43 42 44 43 struct scmi_clock_info {
+5
include/net/inet_sock.h
··· 253 253 #define IP_CMSG_CHECKSUM BIT(7) 254 254 #define IP_CMSG_RECVFRAGSIZE BIT(8) 255 255 256 + static inline bool sk_is_inet(struct sock *sk) 257 + { 258 + return sk->sk_family == AF_INET || sk->sk_family == AF_INET6; 259 + } 260 + 256 261 /** 257 262 * sk_to_full_sk - Access to a full socket 258 263 * @sk: pointer to a socket
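The new `sk_is_inet()` helper is a one-line family check. A userspace model using a toy sock struct (the real helper takes a `struct sock *`; `AF_INET`/`AF_INET6` come from `<sys/socket.h>`):

```c
#include <assert.h>
#include <sys/socket.h>

struct toy_sock { int sk_family; };

/* Userspace sketch of inet_sock.h's sk_is_inet(): true only for
 * IPv4 and IPv6 sockets. */
int sk_is_inet(const struct toy_sock *sk)
{
    return sk->sk_family == AF_INET || sk->sk_family == AF_INET6;
}
```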
+35 -7
include/trace/events/io_uring.h
··· 158 158 __field( unsigned int, flags ) 159 159 __field( struct io_wq_work *, work ) 160 160 __field( int, rw ) 161 + 162 + __string( op_str, io_uring_get_opcode(opcode) ) 161 163 ), 162 164 163 165 TP_fast_assign( ··· 170 168 __entry->opcode = opcode; 171 169 __entry->work = work; 172 170 __entry->rw = rw; 171 + 172 + __assign_str(op_str, io_uring_get_opcode(opcode)); 173 173 ), 174 174 175 175 TP_printk("ring %p, request %p, user_data 0x%llx, opcode %s, flags 0x%x, %s queue, work %p", 176 176 __entry->ctx, __entry->req, __entry->user_data, 177 - io_uring_get_opcode(__entry->opcode), 177 + __get_str(op_str), 178 178 __entry->flags, __entry->rw ? "hashed" : "normal", __entry->work) 179 179 ); 180 180 ··· 202 198 __field( void *, req ) 203 199 __field( unsigned long long, data ) 204 200 __field( u8, opcode ) 201 + 202 + __string( op_str, io_uring_get_opcode(opcode) ) 205 203 ), 206 204 207 205 TP_fast_assign( ··· 211 205 __entry->req = req; 212 206 __entry->data = user_data; 213 207 __entry->opcode = opcode; 208 + 209 + __assign_str(op_str, io_uring_get_opcode(opcode)); 214 210 ), 215 211 216 212 TP_printk("ring %p, request %p, user_data 0x%llx, opcode %s", 217 213 __entry->ctx, __entry->req, __entry->data, 218 - io_uring_get_opcode(__entry->opcode)) 214 + __get_str(op_str)) 219 215 ); 220 216 221 217 /** ··· 306 298 __field( unsigned long long, user_data ) 307 299 __field( u8, opcode ) 308 300 __field( void *, link ) 301 + 302 + __string( op_str, io_uring_get_opcode(opcode) ) 309 303 ), 310 304 311 305 TP_fast_assign( ··· 316 306 __entry->user_data = user_data; 317 307 __entry->opcode = opcode; 318 308 __entry->link = link; 309 + 310 + __assign_str(op_str, io_uring_get_opcode(opcode)); 319 311 ), 320 312 321 313 TP_printk("ring %p, request %p, user_data 0x%llx, opcode %s, link %p", 322 314 __entry->ctx, __entry->req, __entry->user_data, 323 - io_uring_get_opcode(__entry->opcode), __entry->link) 315 + __get_str(op_str), __entry->link) 324 316 ); 325 317 326 318 
/** ··· 402 390 __field( u32, flags ) 403 391 __field( bool, force_nonblock ) 404 392 __field( bool, sq_thread ) 393 + 394 + __string( op_str, io_uring_get_opcode(opcode) ) 405 395 ), 406 396 407 397 TP_fast_assign( ··· 414 400 __entry->flags = flags; 415 401 __entry->force_nonblock = force_nonblock; 416 402 __entry->sq_thread = sq_thread; 403 + 404 + __assign_str(op_str, io_uring_get_opcode(opcode)); 417 405 ), 418 406 419 407 TP_printk("ring %p, req %p, user_data 0x%llx, opcode %s, flags 0x%x, " 420 408 "non block %d, sq_thread %d", __entry->ctx, __entry->req, 421 - __entry->user_data, io_uring_get_opcode(__entry->opcode), 409 + __entry->user_data, __get_str(op_str), 422 410 __entry->flags, __entry->force_nonblock, __entry->sq_thread) 423 411 ); 424 412 ··· 451 435 __field( u8, opcode ) 452 436 __field( int, mask ) 453 437 __field( int, events ) 438 + 439 + __string( op_str, io_uring_get_opcode(opcode) ) 454 440 ), 455 441 456 442 TP_fast_assign( ··· 462 444 __entry->opcode = opcode; 463 445 __entry->mask = mask; 464 446 __entry->events = events; 447 + 448 + __assign_str(op_str, io_uring_get_opcode(opcode)); 465 449 ), 466 450 467 451 TP_printk("ring %p, req %p, user_data 0x%llx, opcode %s, mask 0x%x, events 0x%x", 468 452 __entry->ctx, __entry->req, __entry->user_data, 469 - io_uring_get_opcode(__entry->opcode), 453 + __get_str(op_str), 470 454 __entry->mask, __entry->events) 471 455 ); 472 456 ··· 494 474 __field( unsigned long long, user_data ) 495 475 __field( u8, opcode ) 496 476 __field( int, mask ) 477 + 478 + __string( op_str, io_uring_get_opcode(opcode) ) 497 479 ), 498 480 499 481 TP_fast_assign( ··· 504 482 __entry->user_data = user_data; 505 483 __entry->opcode = opcode; 506 484 __entry->mask = mask; 485 + 486 + __assign_str(op_str, io_uring_get_opcode(opcode)); 507 487 ), 508 488 509 489 TP_printk("ring %p, req %p, user_data 0x%llx, opcode %s, mask %x", 510 490 __entry->ctx, __entry->req, __entry->user_data, 511 - 
io_uring_get_opcode(__entry->opcode), 491 + __get_str(op_str), 512 492 __entry->mask) 513 493 ); 514 494 ··· 547 523 __field( u64, pad1 ) 548 524 __field( u64, addr3 ) 549 525 __field( int, error ) 526 + 527 + __string( op_str, io_uring_get_opcode(sqe->opcode) ) 550 528 ), 551 529 552 530 TP_fast_assign( ··· 568 542 __entry->pad1 = sqe->__pad2[0]; 569 543 __entry->addr3 = sqe->addr3; 570 544 __entry->error = error; 545 + 546 + __assign_str(op_str, io_uring_get_opcode(sqe->opcode)); 571 547 ), 572 548 573 549 TP_printk("ring %p, req %p, user_data 0x%llx, " ··· 578 550 "personality=%d, file_index=%d, pad=0x%llx, addr3=%llx, " 579 551 "error=%d", 580 552 __entry->ctx, __entry->req, __entry->user_data, 581 - io_uring_get_opcode(__entry->opcode), 553 + __get_str(op_str), 582 554 __entry->flags, __entry->ioprio, 583 555 (unsigned long long)__entry->off, 584 556 (unsigned long long) __entry->addr, __entry->len,
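Throughout these trace events, the opcode name is now copied into the ring buffer with `__string`/`__assign_str` when the event fires, instead of resolving `io_uring_get_opcode()` at print time. The general point — snapshot a string at record time so the record stays valid no matter what happens to the source afterwards — can be sketched in plain C (struct and helper names are illustrative):

```c
#include <assert.h>
#include <string.h>

struct record { char op_str[16]; };

/* Copy the name into the record at event time, like
 * __assign_str() does for a trace event. */
void record_event(struct record *r, const char *name)
{
    strncpy(r->op_str, name, sizeof(r->op_str) - 1);
    r->op_str[sizeof(r->op_str) - 1] = '\0';
}

/* Demonstrates that the snapshot survives later mutation of the
 * source string. */
int snapshot_is_stable(void)
{
    char source[16] = "read";
    struct record r;

    record_event(&r, source);
    strcpy(source, "write");   /* mutate after the event fired */
    return strcmp(r.op_str, "read") == 0;
}
```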
+1
include/trace/events/libata.h
··· 288 288 __entry->hob_feature = qc->result_tf.hob_feature; 289 289 __entry->nsect = qc->result_tf.nsect; 290 290 __entry->hob_nsect = qc->result_tf.hob_nsect; 291 + __entry->flags = qc->flags; 291 292 ), 292 293 293 294 TP_printk("ata_port=%u ata_dev=%u tag=%d flags=%s status=%s " \
+5
kernel/bpf/btf.c
··· 4815 4815 n = btf_nr_types(btf); 4816 4816 for (i = start_id; i < n; i++) { 4817 4817 const struct btf_type *t; 4818 + int chain_limit = 32; 4818 4819 u32 cur_id = i; 4819 4820 4820 4821 t = btf_type_by_id(btf, i); ··· 4828 4827 4829 4828 in_tags = btf_type_is_type_tag(t); 4830 4829 while (btf_type_is_modifier(t)) { 4830 + if (!chain_limit--) { 4831 + btf_verifier_log(env, "Max chain length or cycle detected"); 4832 + return -ELOOP; 4833 + } 4831 4834 if (btf_type_is_type_tag(t)) { 4832 4835 if (!in_tags) { 4833 4836 btf_verifier_log(env, "Type tags don't precede modifiers");
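The btf.c fix bounds the walk over a typedef/modifier chain so a malformed BTF cycle cannot loop forever. A self-contained sketch of the bounded walk, where `next[]` maps a type id to the id it resolves to and `-1` terminates the chain (the real code returns `-ELOOP` and logs the message shown in the hunk):

```c
#include <assert.h>

#define CHAIN_LIMIT 32

/* Follow a resolution chain with a step budget; a cycle or an
 * overlong chain exhausts the budget and returns -1 as the
 * error sentinel ("Max chain length or cycle detected"). */
int resolve_chain(const int *next, int id)
{
    int limit = CHAIN_LIMIT;

    while (next[id] != -1) {
        if (!limit--)
            return -1;
        id = next[id];
    }
    return id;
}
```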
+2 -3
kernel/dma/direct.c
··· 357 357 } else { 358 358 if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED)) 359 359 arch_dma_clear_uncached(cpu_addr, size); 360 - if (dma_set_encrypted(dev, cpu_addr, 1 << page_order)) 360 + if (dma_set_encrypted(dev, cpu_addr, size)) 361 361 return; 362 362 } 363 363 ··· 392 392 struct page *page, dma_addr_t dma_addr, 393 393 enum dma_data_direction dir) 394 394 { 395 - unsigned int page_order = get_order(size); 396 395 void *vaddr = page_address(page); 397 396 398 397 /* If cpu_addr is not from an atomic pool, dma_free_from_pool() fails */ ··· 399 400 dma_free_from_pool(dev, vaddr, size)) 400 401 return; 401 402 402 - if (dma_set_encrypted(dev, vaddr, 1 << page_order)) 403 + if (dma_set_encrypted(dev, vaddr, size)) 403 404 return; 404 405 __dma_direct_free_pages(dev, page, size); 405 406 }
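The dma-direct fix is a units bug: `dma_set_encrypted()` takes a size in bytes, but the old code passed `1 << page_order`, which is a *page count*. The arithmetic is easy to demonstrate with a minimal `get_order()` model, assuming a 4 KB page size (architecture-dependent in reality):

```c
#include <assert.h>

#define PAGE_SHIFT_ 12
#define PAGE_SIZE_  (1UL << PAGE_SHIFT_)

/* Minimal model of get_order(): the smallest order such that
 * PAGE_SIZE << order covers size. */
unsigned get_order_(unsigned long size)
{
    unsigned order = 0;

    while ((PAGE_SIZE_ << order) < size)
        order++;
    return order;
}
```

For an 8 KB allocation, `get_order()` is 1, so `1 << page_order` is 2 — two pages, not 8192 bytes — which is why the fix passes `size` directly.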
+1 -10
kernel/hung_task.c
··· 127 127 * complain: 128 128 */ 129 129 if (sysctl_hung_task_warnings) { 130 - printk_prefer_direct_enter(); 131 - 132 130 if (sysctl_hung_task_warnings > 0) 133 131 sysctl_hung_task_warnings--; 134 132 pr_err("INFO: task %s:%d blocked for more than %ld seconds.\n", ··· 142 144 143 145 if (sysctl_hung_task_all_cpu_backtrace) 144 146 hung_task_show_all_bt = true; 145 - 146 - printk_prefer_direct_exit(); 147 147 } 148 148 149 149 touch_nmi_watchdog(); ··· 204 208 } 205 209 unlock: 206 210 rcu_read_unlock(); 207 - if (hung_task_show_lock) { 208 - printk_prefer_direct_enter(); 211 + if (hung_task_show_lock) 209 212 debug_show_all_locks(); 210 - printk_prefer_direct_exit(); 211 - } 212 213 213 214 if (hung_task_show_all_bt) { 214 215 hung_task_show_all_bt = false; 215 - printk_prefer_direct_enter(); 216 216 trigger_all_cpu_backtrace(); 217 - printk_prefer_direct_exit(); 218 217 } 219 218 220 219 if (hung_task_call_panic)
+7 -7
kernel/kthread.c
··· 340 340 341 341 self = to_kthread(current); 342 342 343 - /* If user was SIGKILLed, I release the structure. */ 343 + /* Release the structure when caller killed by a fatal signal. */ 344 344 done = xchg(&create->done, NULL); 345 345 if (!done) { 346 346 kfree(create); ··· 398 398 /* We want our own signal handler (we take no signals by default). */ 399 399 pid = kernel_thread(kthread, create, CLONE_FS | CLONE_FILES | SIGCHLD); 400 400 if (pid < 0) { 401 - /* If user was SIGKILLed, I release the structure. */ 401 + /* Release the structure when caller killed by a fatal signal. */ 402 402 struct completion *done = xchg(&create->done, NULL); 403 403 404 404 if (!done) { ··· 440 440 */ 441 441 if (unlikely(wait_for_completion_killable(&done))) { 442 442 /* 443 - * If I was SIGKILLed before kthreadd (or new kernel thread) 444 - * calls complete(), leave the cleanup of this structure to 445 - * that thread. 443 + * If I was killed by a fatal signal before kthreadd (or new 444 + * kernel thread) calls complete(), leave the cleanup of this 445 + * structure to that thread. 446 446 */ 447 447 if (xchg(&create->done, NULL)) 448 448 return ERR_PTR(-EINTR); ··· 876 876 * 877 877 * Returns a pointer to the allocated worker on success, ERR_PTR(-ENOMEM) 878 878 * when the needed structures could not get allocated, and ERR_PTR(-EINTR) 879 - * when the worker was SIGKILLed. 879 + * when the caller was killed by a fatal signal. 880 880 */ 881 881 struct kthread_worker * 882 882 kthread_create_worker(unsigned int flags, const char namefmt[], ...) ··· 925 925 * Return: 926 926 * The pointer to the allocated worker on success, ERR_PTR(-ENOMEM) 927 927 * when the needed structures could not get allocated, and ERR_PTR(-EINTR) 928 - * when the worker was SIGKILLed. 928 + * when the caller was killed by a fatal signal. 929 929 */ 930 930 struct kthread_worker * 931 931 kthread_create_worker_on_cpu(int cpu, unsigned int flags,
-6
kernel/panic.c
··· 297 297 * unfortunately means it may not be hardened to work in a 298 298 * panic situation. 299 299 */ 300 - try_block_console_kthreads(10000); 301 300 smp_send_stop(); 302 301 } else { 303 302 /* ··· 304 305 * kmsg_dump, we will need architecture dependent extra 305 306 * works in addition to stopping other CPUs. 306 307 */ 307 - try_block_console_kthreads(10000); 308 308 crash_smp_send_stop(); 309 309 } 310 310 ··· 603 605 { 604 606 disable_trace_on_warning(); 605 607 606 - printk_prefer_direct_enter(); 607 - 608 608 if (file) 609 609 pr_warn("WARNING: CPU: %d PID: %d at %s:%d %pS\n", 610 610 raw_smp_processor_id(), current->pid, file, line, ··· 632 636 633 637 /* Just a warning, don't kill lockdep. */ 634 638 add_taint(taint, LOCKDEP_STILL_OK); 635 - 636 - printk_prefer_direct_exit(); 637 639 } 638 640 639 641 #ifndef __WARN_FLAGS
+1 -1
kernel/power/hibernate.c
··· 665 665 hibernation_platform_enter(); 666 666 fallthrough; 667 667 case HIBERNATION_SHUTDOWN: 668 - if (pm_power_off) 668 + if (kernel_can_power_off()) 669 669 kernel_power_off(); 670 670 break; 671 671 }
-2
kernel/printk/internal.h
··· 20 20 LOG_CONT = 8, /* text is a fragment of a continuation line */ 21 21 }; 22 22 23 - extern bool block_console_kthreads; 24 - 25 23 __printf(4, 0) 26 24 int vprintk_store(int facility, int level, 27 25 const struct dev_printk_info *dev_info,
+63 -530
kernel/printk/printk.c
··· 224 224 static int nr_ext_console_drivers; 225 225 226 226 /* 227 - * Used to synchronize printing kthreads against direct printing via 228 - * console_trylock/console_unlock. 229 - * 230 - * Values: 231 - * -1 = console kthreads atomically blocked (via global trylock) 232 - * 0 = no kthread printing, console not locked (via trylock) 233 - * >0 = kthread(s) actively printing 234 - * 235 - * Note: For synchronizing against direct printing via 236 - * console_lock/console_unlock, see the @lock variable in 237 - * struct console. 238 - */ 239 - static atomic_t console_kthreads_active = ATOMIC_INIT(0); 240 - 241 - #define console_kthreads_atomic_tryblock() \ 242 - (atomic_cmpxchg(&console_kthreads_active, 0, -1) == 0) 243 - #define console_kthreads_atomic_unblock() \ 244 - atomic_cmpxchg(&console_kthreads_active, -1, 0) 245 - #define console_kthreads_atomically_blocked() \ 246 - (atomic_read(&console_kthreads_active) == -1) 247 - 248 - #define console_kthread_printing_tryenter() \ 249 - atomic_inc_unless_negative(&console_kthreads_active) 250 - #define console_kthread_printing_exit() \ 251 - atomic_dec(&console_kthreads_active) 252 - 253 - /* Block console kthreads to avoid processing new messages. */ 254 - bool block_console_kthreads; 255 - 256 - /* 257 227 * Helper macros to handle lockdep when locking/unlocking console_sem. We use 258 228 * macros instead of functions so that _RET_IP_ contains useful information. 259 229 */ ··· 271 301 } 272 302 273 303 /* 274 - * Tracks whether kthread printers are all blocked. A value of true implies 275 - * that the console is locked via console_lock() or the console is suspended. 276 - * Writing to this variable requires holding @console_sem. 304 + * This is used for debugging the mess that is the VT code by 305 + * keeping track if we have the console semaphore held. 
It's 306 + * definitely not the perfect debug tool (we don't know if _WE_ 307 + * hold it and are racing, but it helps tracking those weird code 308 + * paths in the console code where we end up in places I want 309 + * locked without the console semaphore held). 277 310 */ 278 - static bool console_kthreads_blocked; 279 - 280 - /* 281 - * Block all kthread printers from a schedulable context. 282 - * 283 - * Requires holding @console_sem. 284 - */ 285 - static void console_kthreads_block(void) 286 - { 287 - struct console *con; 288 - 289 - for_each_console(con) { 290 - mutex_lock(&con->lock); 291 - con->blocked = true; 292 - mutex_unlock(&con->lock); 293 - } 294 - 295 - console_kthreads_blocked = true; 296 - } 297 - 298 - /* 299 - * Unblock all kthread printers from a schedulable context. 300 - * 301 - * Requires holding @console_sem. 302 - */ 303 - static void console_kthreads_unblock(void) 304 - { 305 - struct console *con; 306 - 307 - for_each_console(con) { 308 - mutex_lock(&con->lock); 309 - con->blocked = false; 310 - mutex_unlock(&con->lock); 311 - } 312 - 313 - console_kthreads_blocked = false; 314 - } 315 - 316 - static int console_suspended; 311 + static int console_locked, console_suspended; 317 312 318 313 /* 319 314 * Array of consoles built from command line options (console=) ··· 361 426 /* syslog_lock protects syslog_* variables and write access to clear_seq. */ 362 427 static DEFINE_MUTEX(syslog_lock); 363 428 364 - /* 365 - * A flag to signify if printk_activate_kthreads() has already started the 366 - * kthread printers. If true, any later registered consoles must start their 367 - * own kthread directly. The flag is write protected by the console_lock. 
368 - */ 369 - static bool printk_kthreads_available; 370 - 371 429 #ifdef CONFIG_PRINTK 372 - static atomic_t printk_prefer_direct = ATOMIC_INIT(0); 373 - 374 - /** 375 - * printk_prefer_direct_enter - cause printk() calls to attempt direct 376 - * printing to all enabled consoles 377 - * 378 - * Since it is not possible to call into the console printing code from any 379 - * context, there is no guarantee that direct printing will occur. 380 - * 381 - * This globally effects all printk() callers. 382 - * 383 - * Context: Any context. 384 - */ 385 - void printk_prefer_direct_enter(void) 386 - { 387 - atomic_inc(&printk_prefer_direct); 388 - } 389 - 390 - /** 391 - * printk_prefer_direct_exit - restore printk() behavior 392 - * 393 - * Context: Any context. 394 - */ 395 - void printk_prefer_direct_exit(void) 396 - { 397 - WARN_ON(atomic_dec_if_positive(&printk_prefer_direct) < 0); 398 - } 399 - 400 - /* 401 - * Calling printk() always wakes kthread printers so that they can 402 - * flush the new message to their respective consoles. Also, if direct 403 - * printing is allowed, printk() tries to flush the messages directly. 404 - * 405 - * Direct printing is allowed in situations when the kthreads 406 - * are not available or the system is in a problematic state. 407 - * 408 - * See the implementation about possible races. 409 - */ 410 - static inline bool allow_direct_printing(void) 411 - { 412 - /* 413 - * Checking kthread availability is a possible race because the 414 - * kthread printers can become permanently disabled during runtime. 415 - * However, doing that requires holding the console_lock, so any 416 - * pending messages will be direct printed by console_unlock(). 417 - */ 418 - if (!printk_kthreads_available) 419 - return true; 420 - 421 - /* 422 - * Prefer direct printing when the system is in a problematic state. 423 - * The context that sets this state will always see the updated value. 424 - * The other contexts do not care. 
Anyway, direct printing is just a 425 - * best effort. The direct output is only possible when console_lock 426 - * is not already taken and no kthread printers are actively printing. 427 - */ 428 - return (system_state > SYSTEM_RUNNING || 429 - oops_in_progress || 430 - atomic_read(&printk_prefer_direct)); 431 - } 432 - 433 430 DECLARE_WAIT_QUEUE_HEAD(log_wait); 434 431 /* All 3 protected by @syslog_lock. */ 435 432 /* the next printk record to read by syslog(READ) or /proc/kmsg */ ··· 2252 2385 printed_len = vprintk_store(facility, level, dev_info, fmt, args); 2253 2386 2254 2387 /* If called from the scheduler, we can not call up(). */ 2255 - if (!in_sched && allow_direct_printing()) { 2388 + if (!in_sched) { 2256 2389 /* 2257 2390 * The caller may be holding system-critical or 2258 - * timing-sensitive locks. Disable preemption during direct 2391 + * timing-sensitive locks. Disable preemption during 2259 2392 * printing of all remaining records to all consoles so that 2260 2393 * this context can return as soon as possible. Hopefully 2261 2394 * another printk() caller will take over the printing. 
··· 2298 2431 2299 2432 static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress); 2300 2433 2301 - static void printk_start_kthread(struct console *con); 2302 - 2303 2434 #else /* CONFIG_PRINTK */ 2304 2435 2305 2436 #define CONSOLE_LOG_MAX 0 ··· 2331 2466 } 2332 2467 static bool suppress_message_printing(int level) { return false; } 2333 2468 static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) { return true; } 2334 - static void printk_start_kthread(struct console *con) { } 2335 - static bool allow_direct_printing(void) { return true; } 2336 2469 2337 2470 #endif /* CONFIG_PRINTK */ 2338 2471 ··· 2549 2686 /* If trylock fails, someone else is doing the printing */ 2550 2687 if (console_trylock()) 2551 2688 console_unlock(); 2552 - else { 2553 - /* 2554 - * If a new CPU comes online, the conditions for 2555 - * printer_should_wake() may have changed for some 2556 - * kthread printer with !CON_ANYTIME. 2557 - */ 2558 - wake_up_klogd(); 2559 - } 2560 2689 } 2561 2690 return 0; 2562 2691 } ··· 2568 2713 down_console_sem(); 2569 2714 if (console_suspended) 2570 2715 return; 2571 - console_kthreads_block(); 2716 + console_locked = 1; 2572 2717 console_may_schedule = 1; 2573 2718 } 2574 2719 EXPORT_SYMBOL(console_lock); ··· 2589 2734 up_console_sem(); 2590 2735 return 0; 2591 2736 } 2592 - if (!console_kthreads_atomic_tryblock()) { 2593 - up_console_sem(); 2594 - return 0; 2595 - } 2737 + console_locked = 1; 2596 2738 console_may_schedule = 0; 2597 2739 return 1; 2598 2740 } 2599 2741 EXPORT_SYMBOL(console_trylock); 2600 2742 2601 - /* 2602 - * This is used to help to make sure that certain paths within the VT code are 2603 - * running with the console lock held. 
It is definitely not the perfect debug 2604 - * tool (it is not known if the VT code is the task holding the console lock), 2605 - * but it helps tracking those weird code paths in the console code such as 2606 - * when the console is suspended: where the console is not locked but no 2607 - * console printing may occur. 2608 - * 2609 - * Note: This returns true when the console is suspended but is not locked. 2610 - * This is intentional because the VT code must consider that situation 2611 - * the same as if the console was locked. 2612 - */ 2613 2743 int is_console_locked(void) 2614 2744 { 2615 - return (console_kthreads_blocked || atomic_read(&console_kthreads_active)); 2745 + return console_locked; 2616 2746 } 2617 2747 EXPORT_SYMBOL(is_console_locked); 2618 2748 ··· 2620 2780 return atomic_read(&panic_cpu) != raw_smp_processor_id(); 2621 2781 } 2622 2782 2623 - static inline bool __console_is_usable(short flags) 2783 + /* 2784 + * Check if the given console is currently capable and allowed to print 2785 + * records. 2786 + * 2787 + * Requires the console_lock. 2788 + */ 2789 + static inline bool console_is_usable(struct console *con) 2624 2790 { 2625 - if (!(flags & CON_ENABLED)) 2791 + if (!(con->flags & CON_ENABLED)) 2792 + return false; 2793 + 2794 + if (!con->write) 2626 2795 return false; 2627 2796 2628 2797 /* ··· 2640 2791 * cope (CON_ANYTIME) don't call them until this CPU is officially up. 2641 2792 */ 2642 2793 if (!cpu_online(raw_smp_processor_id()) && 2643 - !(flags & CON_ANYTIME)) 2794 + !(con->flags & CON_ANYTIME)) 2644 2795 return false; 2645 2796 2646 2797 return true; 2647 2798 } 2648 2799 2649 - /* 2650 - * Check if the given console is currently capable and allowed to print 2651 - * records. 2652 - * 2653 - * Requires holding the console_lock. 
2654 - */ 2655 - static inline bool console_is_usable(struct console *con) 2656 - { 2657 - if (!con->write) 2658 - return false; 2659 - 2660 - return __console_is_usable(con->flags); 2661 - } 2662 - 2663 2800 static void __console_unlock(void) 2664 2801 { 2665 - /* 2666 - * Depending on whether console_lock() or console_trylock() was used, 2667 - * appropriately allow the kthread printers to continue. 2668 - */ 2669 - if (console_kthreads_blocked) 2670 - console_kthreads_unblock(); 2671 - else 2672 - console_kthreads_atomic_unblock(); 2673 - 2674 - /* 2675 - * New records may have arrived while the console was locked. 2676 - * Wake the kthread printers to print them. 2677 - */ 2678 - wake_up_klogd(); 2679 - 2802 + console_locked = 0; 2680 2803 up_console_sem(); 2681 2804 } 2682 2805 ··· 2666 2845 * 2667 2846 * @handover will be set to true if a printk waiter has taken over the 2668 2847 * console_lock, in which case the caller is no longer holding the 2669 - * console_lock. Otherwise it is set to false. A NULL pointer may be provided 2670 - * to disable allowing the console_lock to be taken over by a printk waiter. 2848 + * console_lock. Otherwise it is set to false. 2671 2849 * 2672 2850 * Returns false if the given console has no next record to print, otherwise 2673 2851 * true. 2674 2852 * 2675 - * Requires the console_lock if @handover is non-NULL. 2676 - * Requires con->lock otherwise. 2853 + * Requires the console_lock. 
2677 2854 */ 2678 - static bool __console_emit_next_record(struct console *con, char *text, char *ext_text, 2679 - char *dropped_text, bool *handover) 2855 + static bool console_emit_next_record(struct console *con, char *text, char *ext_text, 2856 + char *dropped_text, bool *handover) 2680 2857 { 2681 - static atomic_t panic_console_dropped = ATOMIC_INIT(0); 2858 + static int panic_console_dropped; 2682 2859 struct printk_info info; 2683 2860 struct printk_record r; 2684 2861 unsigned long flags; ··· 2685 2866 2686 2867 prb_rec_init_rd(&r, &info, text, CONSOLE_LOG_MAX); 2687 2868 2688 - if (handover) 2689 - *handover = false; 2869 + *handover = false; 2690 2870 2691 2871 if (!prb_read_valid(prb, con->seq, &r)) 2692 2872 return false; ··· 2693 2875 if (con->seq != r.info->seq) { 2694 2876 con->dropped += r.info->seq - con->seq; 2695 2877 con->seq = r.info->seq; 2696 - if (panic_in_progress() && 2697 - atomic_fetch_inc_relaxed(&panic_console_dropped) > 10) { 2878 + if (panic_in_progress() && panic_console_dropped++ > 10) { 2698 2879 suppress_panic_printk = 1; 2699 2880 pr_warn_once("Too many dropped messages. Suppress messages on non-panic CPUs to prevent livelock.\n"); 2700 2881 } ··· 2715 2898 len = record_print_text(&r, console_msg_format & MSG_FORMAT_SYSLOG, printk_time); 2716 2899 } 2717 2900 2718 - if (handover) { 2719 - /* 2720 - * While actively printing out messages, if another printk() 2721 - * were to occur on another CPU, it may wait for this one to 2722 - * finish. This task can not be preempted if there is a 2723 - * waiter waiting to take over. 2724 - * 2725 - * Interrupts are disabled because the hand over to a waiter 2726 - * must not be interrupted until the hand over is completed 2727 - * (@console_waiter is cleared). 
2728 - */ 2729 - printk_safe_enter_irqsave(flags); 2730 - console_lock_spinning_enable(); 2901 + /* 2902 + * While actively printing out messages, if another printk() 2903 + * were to occur on another CPU, it may wait for this one to 2904 + * finish. This task can not be preempted if there is a 2905 + * waiter waiting to take over. 2906 + * 2907 + * Interrupts are disabled because the hand over to a waiter 2908 + * must not be interrupted until the hand over is completed 2909 + * (@console_waiter is cleared). 2910 + */ 2911 + printk_safe_enter_irqsave(flags); 2912 + console_lock_spinning_enable(); 2731 2913 2732 - /* don't trace irqsoff print latency */ 2733 - stop_critical_timings(); 2734 - } 2735 - 2914 + stop_critical_timings(); /* don't trace print latency */ 2736 2915 call_console_driver(con, write_text, len, dropped_text); 2916 + start_critical_timings(); 2737 2917 2738 2918 con->seq++; 2739 2919 2740 - if (handover) { 2741 - start_critical_timings(); 2742 - *handover = console_lock_spinning_disable_and_check(); 2743 - printk_safe_exit_irqrestore(flags); 2744 - } 2920 + *handover = console_lock_spinning_disable_and_check(); 2921 + printk_safe_exit_irqrestore(flags); 2745 2922 skip: 2746 2923 return true; 2747 - } 2748 - 2749 - /* 2750 - * Print a record for a given console, but allow another printk() caller to 2751 - * take over the console_lock and continue printing. 2752 - * 2753 - * Requires the console_lock, but depending on @handover after the call, the 2754 - * caller may no longer have the console_lock. 2755 - * 2756 - * See __console_emit_next_record() for argument and return details. 2757 - */ 2758 - static bool console_emit_next_record_transferable(struct console *con, char *text, char *ext_text, 2759 - char *dropped_text, bool *handover) 2760 - { 2761 - /* 2762 - * Handovers are only supported if threaded printers are atomically 2763 - * blocked. The context taking over the console_lock may be atomic. 
2764 - */ 2765 - if (!console_kthreads_atomically_blocked()) { 2766 - *handover = false; 2767 - handover = NULL; 2768 - } 2769 - 2770 - return __console_emit_next_record(con, text, ext_text, dropped_text, handover); 2771 2924 } 2772 2925 2773 2926 /* ··· 2758 2971 * were flushed to all usable consoles. A returned false informs the caller 2759 2972 * that everything was not flushed (either there were no usable consoles or 2760 2973 * another context has taken over printing or it is a panic situation and this 2761 - * is not the panic CPU or direct printing is not preferred). Regardless the 2762 - * reason, the caller should assume it is not useful to immediately try again. 2974 + * is not the panic CPU). Regardless the reason, the caller should assume it 2975 + * is not useful to immediately try again. 2763 2976 * 2764 2977 * Requires the console_lock. 2765 2978 */ ··· 2776 2989 *handover = false; 2777 2990 2778 2991 do { 2779 - /* Let the kthread printers do the work if they can. */ 2780 - if (!allow_direct_printing()) 2781 - return false; 2782 - 2783 2992 any_progress = false; 2784 2993 2785 2994 for_each_console(con) { ··· 2787 3004 2788 3005 if (con->flags & CON_EXTENDED) { 2789 3006 /* Extended consoles do not print "dropped messages". 
*/ 2790 - progress = console_emit_next_record_transferable(con, &text[0], 2791 - &ext_text[0], NULL, handover); 3007 + progress = console_emit_next_record(con, &text[0], 3008 + &ext_text[0], NULL, 3009 + handover); 2792 3010 } else { 2793 - progress = console_emit_next_record_transferable(con, &text[0], 2794 - NULL, &dropped_text[0], handover); 3011 + progress = console_emit_next_record(con, &text[0], 3012 + NULL, &dropped_text[0], 3013 + handover); 2795 3014 } 2796 3015 if (*handover) 2797 3016 return false; ··· 2908 3123 if (oops_in_progress) { 2909 3124 if (down_trylock_console_sem() != 0) 2910 3125 return; 2911 - if (!console_kthreads_atomic_tryblock()) { 2912 - up_console_sem(); 2913 - return; 2914 - } 2915 3126 } else 2916 3127 console_lock(); 2917 3128 3129 + console_locked = 1; 2918 3130 console_may_schedule = 0; 2919 3131 for_each_console(c) 2920 3132 if ((c->flags & CON_ENABLED) && c->unblank) ··· 3190 3408 nr_ext_console_drivers++; 3191 3409 3192 3410 newcon->dropped = 0; 3193 - newcon->thread = NULL; 3194 - newcon->blocked = true; 3195 - mutex_init(&newcon->lock); 3196 - 3197 3411 if (newcon->flags & CON_PRINTBUFFER) { 3198 3412 /* Get a consistent copy of @syslog_seq. */ 3199 3413 mutex_lock(&syslog_lock); ··· 3199 3421 /* Begin with next message. */ 3200 3422 newcon->seq = prb_next_seq(prb); 3201 3423 } 3202 - 3203 - if (printk_kthreads_available) 3204 - printk_start_kthread(newcon); 3205 - 3206 3424 console_unlock(); 3207 3425 console_sysfs_notify(); 3208 3426 ··· 3225 3451 3226 3452 int unregister_console(struct console *console) 3227 3453 { 3228 - struct task_struct *thd; 3229 3454 struct console *con; 3230 3455 int res; 3231 3456 ··· 3265 3492 console_drivers->flags |= CON_CONSDEV; 3266 3493 3267 3494 console->flags &= ~CON_ENABLED; 3268 - 3269 - /* 3270 - * console->thread can only be cleared under the console lock. But 3271 - * stopping the thread must be done without the console lock. 
The 3272 - * task that clears @thread is the task that stops the kthread. 3273 - */ 3274 - thd = console->thread; 3275 - console->thread = NULL; 3276 - 3277 3495 console_unlock(); 3278 - 3279 - if (thd) 3280 - kthread_stop(thd); 3281 - 3282 3496 console_sysfs_notify(); 3283 3497 3284 3498 if (console->exit) ··· 3361 3601 } 3362 3602 late_initcall(printk_late_init); 3363 3603 3364 - static int __init printk_activate_kthreads(void) 3365 - { 3366 - struct console *con; 3367 - 3368 - console_lock(); 3369 - printk_kthreads_available = true; 3370 - for_each_console(con) 3371 - printk_start_kthread(con); 3372 - console_unlock(); 3373 - 3374 - return 0; 3375 - } 3376 - early_initcall(printk_activate_kthreads); 3377 - 3378 3604 #if defined CONFIG_PRINTK 3379 3605 /* If @con is specified, only wait for that console. Otherwise wait for all. */ 3380 3606 static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) ··· 3435 3689 } 3436 3690 EXPORT_SYMBOL(pr_flush); 3437 3691 3438 - static void __printk_fallback_preferred_direct(void) 3439 - { 3440 - printk_prefer_direct_enter(); 3441 - pr_err("falling back to preferred direct printing\n"); 3442 - printk_kthreads_available = false; 3443 - } 3444 - 3445 - /* 3446 - * Enter preferred direct printing, but never exit. Mark console threads as 3447 - * unavailable. The system is then forever in preferred direct printing and 3448 - * any printing threads will exit. 3449 - * 3450 - * Must *not* be called under console_lock. Use 3451 - * __printk_fallback_preferred_direct() if already holding console_lock. 3452 - */ 3453 - static void printk_fallback_preferred_direct(void) 3454 - { 3455 - console_lock(); 3456 - __printk_fallback_preferred_direct(); 3457 - console_unlock(); 3458 - } 3459 - 3460 - /* 3461 - * Print a record for a given console, not allowing another printk() caller 3462 - * to take over. This is appropriate for contexts that do not have the 3463 - * console_lock. 
3464 - * 3465 - * See __console_emit_next_record() for argument and return details. 3466 - */ 3467 - static bool console_emit_next_record(struct console *con, char *text, char *ext_text, 3468 - char *dropped_text) 3469 - { 3470 - return __console_emit_next_record(con, text, ext_text, dropped_text, NULL); 3471 - } 3472 - 3473 - static bool printer_should_wake(struct console *con, u64 seq) 3474 - { 3475 - short flags; 3476 - 3477 - if (kthread_should_stop() || !printk_kthreads_available) 3478 - return true; 3479 - 3480 - if (con->blocked || 3481 - console_kthreads_atomically_blocked() || 3482 - block_console_kthreads || 3483 - system_state > SYSTEM_RUNNING || 3484 - oops_in_progress) { 3485 - return false; 3486 - } 3487 - 3488 - /* 3489 - * This is an unsafe read from con->flags, but a false positive is 3490 - * not a problem. Worst case it would allow the printer to wake up 3491 - * although it is disabled. But the printer will notice that when 3492 - * attempting to print and instead go back to sleep. 
3493 - */ 3494 - flags = data_race(READ_ONCE(con->flags)); 3495 - 3496 - if (!__console_is_usable(flags)) 3497 - return false; 3498 - 3499 - return prb_read_valid(prb, seq, NULL); 3500 - } 3501 - 3502 - static int printk_kthread_func(void *data) 3503 - { 3504 - struct console *con = data; 3505 - char *dropped_text = NULL; 3506 - char *ext_text = NULL; 3507 - u64 seq = 0; 3508 - char *text; 3509 - int error; 3510 - 3511 - text = kmalloc(CONSOLE_LOG_MAX, GFP_KERNEL); 3512 - if (!text) { 3513 - con_printk(KERN_ERR, con, "failed to allocate text buffer\n"); 3514 - printk_fallback_preferred_direct(); 3515 - goto out; 3516 - } 3517 - 3518 - if (con->flags & CON_EXTENDED) { 3519 - ext_text = kmalloc(CONSOLE_EXT_LOG_MAX, GFP_KERNEL); 3520 - if (!ext_text) { 3521 - con_printk(KERN_ERR, con, "failed to allocate ext_text buffer\n"); 3522 - printk_fallback_preferred_direct(); 3523 - goto out; 3524 - } 3525 - } else { 3526 - dropped_text = kmalloc(DROPPED_TEXT_MAX, GFP_KERNEL); 3527 - if (!dropped_text) { 3528 - con_printk(KERN_ERR, con, "failed to allocate dropped_text buffer\n"); 3529 - printk_fallback_preferred_direct(); 3530 - goto out; 3531 - } 3532 - } 3533 - 3534 - con_printk(KERN_INFO, con, "printing thread started\n"); 3535 - 3536 - for (;;) { 3537 - /* 3538 - * Guarantee this task is visible on the waitqueue before 3539 - * checking the wake condition. 3540 - * 3541 - * The full memory barrier within set_current_state() of 3542 - * prepare_to_wait_event() pairs with the full memory barrier 3543 - * within wq_has_sleeper(). 3544 - * 3545 - * This pairs with __wake_up_klogd:A. 
3546 - */ 3547 - error = wait_event_interruptible(log_wait, 3548 - printer_should_wake(con, seq)); /* LMM(printk_kthread_func:A) */ 3549 - 3550 - if (kthread_should_stop() || !printk_kthreads_available) 3551 - break; 3552 - 3553 - if (error) 3554 - continue; 3555 - 3556 - error = mutex_lock_interruptible(&con->lock); 3557 - if (error) 3558 - continue; 3559 - 3560 - if (con->blocked || 3561 - !console_kthread_printing_tryenter()) { 3562 - /* Another context has locked the console_lock. */ 3563 - mutex_unlock(&con->lock); 3564 - continue; 3565 - } 3566 - 3567 - /* 3568 - * Although this context has not locked the console_lock, it 3569 - * is known that the console_lock is not locked and it is not 3570 - * possible for any other context to lock the console_lock. 3571 - * Therefore it is safe to read con->flags. 3572 - */ 3573 - 3574 - if (!__console_is_usable(con->flags)) { 3575 - console_kthread_printing_exit(); 3576 - mutex_unlock(&con->lock); 3577 - continue; 3578 - } 3579 - 3580 - /* 3581 - * Even though the printk kthread is always preemptible, it is 3582 - * still not allowed to call cond_resched() from within 3583 - * console drivers. The task may become non-preemptible in the 3584 - * console driver call chain. For example, vt_console_print() 3585 - * takes a spinlock and then can call into fbcon_redraw(), 3586 - * which can conditionally invoke cond_resched(). 3587 - */ 3588 - console_may_schedule = 0; 3589 - console_emit_next_record(con, text, ext_text, dropped_text); 3590 - 3591 - seq = con->seq; 3592 - 3593 - console_kthread_printing_exit(); 3594 - 3595 - mutex_unlock(&con->lock); 3596 - } 3597 - 3598 - con_printk(KERN_INFO, con, "printing thread stopped\n"); 3599 - out: 3600 - kfree(dropped_text); 3601 - kfree(ext_text); 3602 - kfree(text); 3603 - 3604 - console_lock(); 3605 - /* 3606 - * If this kthread is being stopped by another task, con->thread will 3607 - * already be NULL. That is fine. 
The important thing is that it is 3608 - * NULL after the kthread exits. 3609 - */ 3610 - con->thread = NULL; 3611 - console_unlock(); 3612 - 3613 - return 0; 3614 - } 3615 - 3616 - /* Must be called under console_lock. */ 3617 - static void printk_start_kthread(struct console *con) 3618 - { 3619 - /* 3620 - * Do not start a kthread if there is no write() callback. The 3621 - * kthreads assume the write() callback exists. 3622 - */ 3623 - if (!con->write) 3624 - return; 3625 - 3626 - con->thread = kthread_run(printk_kthread_func, con, 3627 - "pr/%s%d", con->name, con->index); 3628 - if (IS_ERR(con->thread)) { 3629 - con->thread = NULL; 3630 - con_printk(KERN_ERR, con, "unable to start printing thread\n"); 3631 - __printk_fallback_preferred_direct(); 3632 - return; 3633 - } 3634 - } 3635 - 3636 3692 /* 3637 3693 * Delayed printk version, for scheduler-internal messages: 3638 3694 */ 3639 - #define PRINTK_PENDING_WAKEUP 0x01 3640 - #define PRINTK_PENDING_DIRECT_OUTPUT 0x02 3695 + #define PRINTK_PENDING_WAKEUP 0x01 3696 + #define PRINTK_PENDING_OUTPUT 0x02 3641 3697 3642 3698 static DEFINE_PER_CPU(int, printk_pending); 3643 3699 ··· 3447 3899 { 3448 3900 int pending = this_cpu_xchg(printk_pending, 0); 3449 3901 3450 - if (pending & PRINTK_PENDING_DIRECT_OUTPUT) { 3451 - printk_prefer_direct_enter(); 3452 - 3902 + if (pending & PRINTK_PENDING_OUTPUT) { 3453 3903 /* If trylock fails, someone else is doing the printing */ 3454 3904 if (console_trylock()) 3455 3905 console_unlock(); 3456 - 3457 - printk_prefer_direct_exit(); 3458 3906 } 3459 3907 3460 3908 if (pending & PRINTK_PENDING_WAKEUP) ··· 3475 3931 * prepare_to_wait_event(), which is called after ___wait_event() adds 3476 3932 * the waiter but before it has checked the wait condition. 3477 3933 * 3478 - * This pairs with devkmsg_read:A, syslog_print:A, and 3479 - * printk_kthread_func:A. 3934 + * This pairs with devkmsg_read:A and syslog_print:A. 
3480 3935 */ 3481 3936 if (wq_has_sleeper(&log_wait) || /* LMM(__wake_up_klogd:A) */ 3482 - (val & PRINTK_PENDING_DIRECT_OUTPUT)) { 3937 + (val & PRINTK_PENDING_OUTPUT)) { 3483 3938 this_cpu_or(printk_pending, val); 3484 3939 irq_work_queue(this_cpu_ptr(&wake_up_klogd_work)); 3485 3940 } ··· 3496 3953 * New messages may have been added directly to the ringbuffer 3497 3954 * using vprintk_store(), so wake any waiters as well. 3498 3955 */ 3499 - int val = PRINTK_PENDING_WAKEUP; 3500 - 3501 - /* 3502 - * Make sure that some context will print the messages when direct 3503 - * printing is allowed. This happens in situations when the kthreads 3504 - * may not be as reliable or perhaps unusable. 3505 - */ 3506 - if (allow_direct_printing()) 3507 - val |= PRINTK_PENDING_DIRECT_OUTPUT; 3508 - 3509 - __wake_up_klogd(val); 3956 + __wake_up_klogd(PRINTK_PENDING_WAKEUP | PRINTK_PENDING_OUTPUT); 3510 3957 } 3511 3958 3512 3959 void printk_trigger_flush(void)
-32
kernel/printk/printk_safe.c
··· 8 8 #include <linux/smp.h> 9 9 #include <linux/cpumask.h> 10 10 #include <linux/printk.h> 11 - #include <linux/console.h> 12 11 #include <linux/kprobes.h> 13 - #include <linux/delay.h> 14 12 15 13 #include "internal.h" 16 14 ··· 50 52 return vprintk_default(fmt, args); 51 53 } 52 54 EXPORT_SYMBOL(vprintk); 53 - 54 - /** 55 - * try_block_console_kthreads() - Try to block console kthreads and 56 - * make the global console_lock() avaialble 57 - * 58 - * @timeout_ms: The maximum time (in ms) to wait. 59 - * 60 - * Prevent console kthreads from starting processing new messages. Wait 61 - * until the global console_lock() become available. 62 - * 63 - * Context: Can be called in any context. 64 - */ 65 - void try_block_console_kthreads(int timeout_ms) 66 - { 67 - block_console_kthreads = true; 68 - 69 - /* Do not wait when the console lock could not be safely taken. */ 70 - if (this_cpu_read(printk_context) || in_nmi()) 71 - return; 72 - 73 - while (timeout_ms > 0) { 74 - if (console_trylock()) { 75 - console_unlock(); 76 - return; 77 - } 78 - 79 - udelay(1000); 80 - timeout_ms -= 1; 81 - } 82 - }
-2
kernel/rcu/tree_stall.h
··· 647 647 * See Documentation/RCU/stallwarn.rst for info on how to debug 648 648 * RCU CPU stall warnings. 649 649 */ 650 - printk_prefer_direct_enter(); 651 650 trace_rcu_stall_warning(rcu_state.name, TPS("SelfDetected")); 652 651 pr_err("INFO: %s self-detected stall on CPU\n", rcu_state.name); 653 652 raw_spin_lock_irqsave_rcu_node(rdp->mynode, flags); ··· 684 685 */ 685 686 set_tsk_need_resched(current); 686 687 set_preempt_need_resched(); 687 - printk_prefer_direct_exit(); 688 688 } 689 689 690 690 static void check_cpu_stall(struct rcu_data *rdp)
+1 -15
kernel/reboot.c
···
82 82 {
83 83 blocking_notifier_call_chain(&reboot_notifier_list, SYS_RESTART, cmd);
84 84 system_state = SYSTEM_RESTART;
85 - try_block_console_kthreads(10000);
86 85 usermodehelper_disable();
87 86 device_shutdown();
88 87 }
···
270 271 blocking_notifier_call_chain(&reboot_notifier_list,
271 272 (state == SYSTEM_HALT) ? SYS_HALT : SYS_POWER_OFF, NULL);
272 273 system_state = state;
273 - try_block_console_kthreads(10000);
274 274 usermodehelper_disable();
275 275 device_shutdown();
276 276 }
···
819 821 ret = run_cmd(reboot_cmd);
820 822 
821 823 if (ret) {
822 - printk_prefer_direct_enter();
823 824 pr_warn("Failed to start orderly reboot: forcing the issue\n");
824 825 emergency_sync();
825 826 kernel_restart(NULL);
826 - printk_prefer_direct_exit();
827 827 }
828 828 
829 829 return ret;
···
834 838 ret = run_cmd(poweroff_cmd);
835 839 
836 840 if (ret && force) {
837 - printk_prefer_direct_enter();
838 841 pr_warn("Failed to start orderly shutdown: forcing the issue\n");
839 842 
840 843 /*
···
843 848 */
844 849 emergency_sync();
845 850 kernel_power_off();
846 - printk_prefer_direct_exit();
847 851 }
848 852 
849 853 return ret;
···
900 906 */
901 907 static void hw_failure_emergency_poweroff_func(struct work_struct *work)
902 908 {
903 - printk_prefer_direct_enter();
904 - 
905 909 /*
906 910 * We have reached here after the emergency shutdown waiting period has
907 911 * expired. This means orderly_poweroff has not been able to shut off
···
916 924 */
917 925 pr_emerg("Hardware protection shutdown failed. Trying emergency restart\n");
918 926 emergency_restart();
919 - 
920 - printk_prefer_direct_exit();
921 927 }
922 928 
923 929 static DECLARE_DELAYED_WORK(hw_failure_emergency_poweroff_work,
···
954 964 {
955 965 static atomic_t allow_proceed = ATOMIC_INIT(1);
956 966 
957 - printk_prefer_direct_enter();
958 - 
959 967 pr_emerg("HARDWARE PROTECTION shutdown (%s)\n", reason);
960 968 
961 969 /* Shutdown should be initiated only once. */
962 970 if (!atomic_dec_and_test(&allow_proceed))
963 - goto out;
971 + return;
964 972 
965 973 /*
966 974 * Queue a backup emergency shutdown in the event of
···
966 978 */
967 979 hw_failure_emergency_poweroff(ms_until_forced);
968 980 orderly_poweroff(true);
969 - out:
970 - printk_prefer_direct_exit();
971 981 }
972 982 EXPORT_SYMBOL_GPL(hw_protection_shutdown);
973 983 
-3
kernel/trace/blktrace.c
··· 770 770 **/ 771 771 void blk_trace_shutdown(struct request_queue *q) 772 772 { 773 - mutex_lock(&q->debugfs_mutex); 774 773 if (rcu_dereference_protected(q->blk_trace, 775 774 lockdep_is_held(&q->debugfs_mutex))) { 776 775 __blk_trace_startstop(q, 0); 777 776 __blk_trace_remove(q); 778 777 } 779 - 780 - mutex_unlock(&q->debugfs_mutex); 781 778 } 782 779 783 780 #ifdef CONFIG_BLK_CGROUP
+50 -20
kernel/trace/bpf_trace.c
···
2423 2423 kprobe_multi_link_prog_run(link, entry_ip, regs);
2424 2424 }
2425 2425 
2426 - static int symbols_cmp(const void *a, const void *b)
2426 + static int symbols_cmp_r(const void *a, const void *b, const void *priv)
2427 2427 {
2428 2428 const char **str_a = (const char **) a;
2429 2429 const char **str_b = (const char **) b;
2430 2430 
2431 2431 return strcmp(*str_a, *str_b);
2432 + }
2433 + 
2434 + struct multi_symbols_sort {
2435 + const char **funcs;
2436 + u64 *cookies;
2437 + };
2438 + 
2439 + static void symbols_swap_r(void *a, void *b, int size, const void *priv)
2440 + {
2441 + const struct multi_symbols_sort *data = priv;
2442 + const char **name_a = a, **name_b = b;
2443 + 
2444 + swap(*name_a, *name_b);
2445 + 
2446 + /* If defined, swap also related cookies. */
2447 + if (data->cookies) {
2448 + u64 *cookie_a, *cookie_b;
2449 + 
2450 + cookie_a = data->cookies + (name_a - data->funcs);
2451 + cookie_b = data->cookies + (name_b - data->funcs);
2452 + swap(*cookie_a, *cookie_b);
2453 + }
2432 2454 }
2433 2455 
2434 2456 int bpf_kprobe_multi_link_attach(const union bpf_attr *attr, struct bpf_prog *prog)
···
2490 2468 if (!addrs)
2491 2469 return -ENOMEM;
2492 2470 
2493 - if (uaddrs) {
2494 - if (copy_from_user(addrs, uaddrs, size)) {
2495 - err = -EFAULT;
2496 - goto error;
2497 - }
2498 - } else {
2499 - struct user_syms us;
2500 - 
2501 - err = copy_user_syms(&us, usyms, cnt);
2502 - if (err)
2503 - goto error;
2504 - 
2505 - sort(us.syms, cnt, sizeof(*us.syms), symbols_cmp, NULL);
2506 - err = ftrace_lookup_symbols(us.syms, cnt, addrs);
2507 - free_user_syms(&us);
2508 - if (err)
2509 - goto error;
2510 - }
2511 - 
2512 2471 ucookies = u64_to_user_ptr(attr->link_create.kprobe_multi.cookies);
2513 2472 if (ucookies) {
2514 2473 cookies = kvmalloc_array(cnt, sizeof(*addrs), GFP_KERNEL);
···
2501 2498 err = -EFAULT;
2502 2499 goto error;
2503 2500 }
2501 + }
2502 + 
2503 + if (uaddrs) {
2504 + if (copy_from_user(addrs, uaddrs, size)) {
2505 + err = -EFAULT;
2506 + goto error;
2507 + }
2508 + } else {
2509 + struct multi_symbols_sort data = {
2510 + .cookies = cookies,
2511 + };
2512 + struct user_syms us;
2513 + 
2514 + err = copy_user_syms(&us, usyms, cnt);
2515 + if (err)
2516 + goto error;
2517 + 
2518 + if (cookies)
2519 + data.funcs = us.syms;
2520 + 
2521 + sort_r(us.syms, cnt, sizeof(*us.syms), symbols_cmp_r,
2522 + symbols_swap_r, &data);
2523 + 
2524 + err = ftrace_lookup_symbols(us.syms, cnt, addrs);
2525 + free_user_syms(&us);
2526 + if (err)
2527 + goto error;
2528 + }
2529 + 
2530 + link = kzalloc(sizeof(*link), GFP_KERNEL);
+11 -2
kernel/trace/ftrace.c
··· 8029 8029 struct module *mod, unsigned long addr) 8030 8030 { 8031 8031 struct kallsyms_data *args = data; 8032 + const char **sym; 8033 + int idx; 8032 8034 8033 - if (!bsearch(&name, args->syms, args->cnt, sizeof(*args->syms), symbols_cmp)) 8035 + sym = bsearch(&name, args->syms, args->cnt, sizeof(*args->syms), symbols_cmp); 8036 + if (!sym) 8037 + return 0; 8038 + 8039 + idx = sym - args->syms; 8040 + if (args->addrs[idx]) 8034 8041 return 0; 8035 8042 8036 8043 addr = ftrace_location(addr); 8037 8044 if (!addr) 8038 8045 return 0; 8039 8046 8040 - args->addrs[args->found++] = addr; 8047 + args->addrs[idx] = addr; 8048 + args->found++; 8041 8049 return args->found == args->cnt ? 1 : 0; 8042 8050 } 8043 8051 ··· 8070 8062 struct kallsyms_data args; 8071 8063 int err; 8072 8064 8065 + memset(addrs, 0, sizeof(*addrs) * cnt); 8073 8066 args.addrs = addrs; 8074 8067 args.syms = sorted_syms; 8075 8068 args.cnt = cnt;
+9
kernel/trace/rethook.c
···
154 154 if (unlikely(!handler))
155 155 return NULL;
156 156 
157 + /*
158 + * This expects the caller will set up a rethook on a function entry.
159 + * When the function returns, the rethook will eventually be reclaimed
160 + * or released in the rethook_recycle() with call_rcu().
161 + * This means the caller must run in an RCU-available context.
162 + */
163 + if (unlikely(!rcu_is_watching()))
164 + return NULL;
165 + 
157 166 fn = freelist_try_get(&rh->pool);
158 167 if (!fn)
159 168 return NULL;
-2
kernel/trace/trace.c
··· 6424 6424 synchronize_rcu(); 6425 6425 free_snapshot(tr); 6426 6426 } 6427 - #endif 6428 6427 6429 - #ifdef CONFIG_TRACER_MAX_TRACE 6430 6428 if (t->use_max_tr && !had_max_tr) { 6431 6429 ret = tracing_alloc_snapshot_instance(tr); 6432 6430 if (ret < 0)
+10 -1
kernel/trace/trace_kprobe.c
···
1718 1718 kretprobe_dispatcher(struct kretprobe_instance *ri, struct pt_regs *regs)
1719 1719 {
1720 1720 struct kretprobe *rp = get_kretprobe(ri);
1721 - struct trace_kprobe *tk = container_of(rp, struct trace_kprobe, rp);
1721 + struct trace_kprobe *tk;
1722 1722 
1723 + /*
1724 + * There is a small chance that get_kretprobe(ri) returns NULL when
1725 + * the kretprobe is unregistered on another CPU between kretprobe's
1726 + * trampoline_handler and this function.
1727 + */
1728 + if (unlikely(!rp))
1729 + return 0;
1730 + 
1731 + tk = container_of(rp, struct trace_kprobe, rp);
1723 1732 raw_cpu_inc(*tk->nhit);
1724 1733 
1725 1734 if (trace_probe_test_flag(&tk->tp, TP_FLAG_TRACE))
-1
kernel/trace/trace_uprobe.c
··· 546 546 bool is_return = false; 547 547 int i, ret; 548 548 549 - ret = 0; 550 549 ref_ctr_offset = 0; 551 550 552 551 switch (argv[0][0]) {
-4
kernel/watchdog.c
··· 424 424 /* Start period for the next softlockup warning. */ 425 425 update_report_ts(); 426 426 427 - printk_prefer_direct_enter(); 428 - 429 427 pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n", 430 428 smp_processor_id(), duration, 431 429 current->comm, task_pid_nr(current)); ··· 442 444 add_taint(TAINT_SOFTLOCKUP, LOCKDEP_STILL_OK); 443 445 if (softlockup_panic) 444 446 panic("softlockup: hung tasks"); 445 - 446 - printk_prefer_direct_exit(); 447 447 } 448 448 449 449 return HRTIMER_RESTART;
-4
kernel/watchdog_hld.c
··· 135 135 if (__this_cpu_read(hard_watchdog_warn) == true) 136 136 return; 137 137 138 - printk_prefer_direct_enter(); 139 - 140 138 pr_emerg("Watchdog detected hard LOCKUP on cpu %d\n", 141 139 this_cpu); 142 140 print_modules(); ··· 154 156 155 157 if (hardlockup_panic) 156 158 nmi_panic(regs, "Hard LOCKUP"); 157 - 158 - printk_prefer_direct_exit(); 159 159 160 160 __this_cpu_write(hard_watchdog_warn, true); 161 161 return;
+8
mm/damon/reclaim.c
··· 374 374 } 375 375 static DECLARE_DELAYED_WORK(damon_reclaim_timer, damon_reclaim_timer_fn); 376 376 377 + static bool damon_reclaim_initialized; 378 + 377 379 static int enabled_store(const char *val, 378 380 const struct kernel_param *kp) 379 381 { 380 382 int rc = param_set_bool(val, kp); 381 383 382 384 if (rc < 0) 385 + return rc; 386 + 387 + /* system_wq might not be initialized yet */ 388 + if (!damon_reclaim_initialized) 383 389 return rc; 384 390 385 391 if (enabled) ··· 455 449 damon_add_target(ctx, target); 456 450 457 451 schedule_delayed_work(&damon_reclaim_timer, 0); 452 + 453 + damon_reclaim_initialized = true; 458 454 return 0; 459 455 } 460 456
+12 -3
mm/filemap.c
··· 2385 2385 continue; 2386 2386 if (xas.xa_index > max || xa_is_value(folio)) 2387 2387 break; 2388 + if (xa_is_sibling(folio)) 2389 + break; 2388 2390 if (!folio_try_get_rcu(folio)) 2389 2391 goto retry; 2390 2392 ··· 2631 2629 return err; 2632 2630 } 2633 2631 2632 + static inline bool pos_same_folio(loff_t pos1, loff_t pos2, struct folio *folio) 2633 + { 2634 + unsigned int shift = folio_shift(folio); 2635 + 2636 + return (pos1 >> shift == pos2 >> shift); 2637 + } 2638 + 2634 2639 /** 2635 2640 * filemap_read - Read data from the page cache. 2636 2641 * @iocb: The iocb to read. ··· 2709 2700 writably_mapped = mapping_writably_mapped(mapping); 2710 2701 2711 2702 /* 2712 - * When a sequential read accesses a page several times, only 2703 + * When a read accesses the same folio several times, only 2713 2704 * mark it as accessed the first time. 2714 2705 */ 2715 - if (iocb->ki_pos >> PAGE_SHIFT != 2716 - ra->prev_pos >> PAGE_SHIFT) 2706 + if (!pos_same_folio(iocb->ki_pos, ra->prev_pos - 1, 2707 + fbatch.folios[0])) 2717 2708 folio_mark_accessed(fbatch.folios[0]); 2718 2709 2719 2710 for (i = 0; i < folio_batch_count(&fbatch); i++) {
+1
mm/huge_memory.c
··· 2377 2377 page_tail); 2378 2378 page_tail->mapping = head->mapping; 2379 2379 page_tail->index = head->index + tail; 2380 + page_tail->private = 0; 2380 2381 2381 2382 /* Page flags must be visible before we make the page non-compound. */ 2382 2383 smp_wmb();
+1 -1
mm/hwpoison-inject.c
··· 48 48 49 49 inject: 50 50 pr_info("Injecting memory failure at pfn %#lx\n", pfn); 51 - err = memory_failure(pfn, 0); 51 + err = memory_failure(pfn, MF_SW_SIMULATED); 52 52 return (err == -EOPNOTSUPP) ? 0 : err; 53 53 } 54 54
+5 -2
mm/kfence/core.c
··· 360 360 unsigned long flags; 361 361 struct slab *slab; 362 362 void *addr; 363 + const bool random_right_allocate = prandom_u32_max(2); 364 + const bool random_fault = CONFIG_KFENCE_STRESS_TEST_FAULTS && 365 + !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS); 363 366 364 367 /* Try to obtain a free object. */ 365 368 raw_spin_lock_irqsave(&kfence_freelist_lock, flags); ··· 407 404 * is that the out-of-bounds accesses detected are deterministic for 408 405 * such allocations. 409 406 */ 410 - if (prandom_u32_max(2)) { 407 + if (random_right_allocate) { 411 408 /* Allocate on the "right" side, re-calculate address. */ 412 409 meta->addr += PAGE_SIZE - size; 413 410 meta->addr = ALIGN_DOWN(meta->addr, cache->align); ··· 447 444 if (cache->ctor) 448 445 cache->ctor(addr); 449 446 450 - if (CONFIG_KFENCE_STRESS_TEST_FAULTS && !prandom_u32_max(CONFIG_KFENCE_STRESS_TEST_FAULTS)) 447 + if (random_fault) 451 448 kfence_protect(meta->addr); /* Random "faults" by protecting the object. */ 452 449 453 450 atomic_long_inc(&counters[KFENCE_COUNTER_ALLOCATED]);
+1 -1
mm/madvise.c
··· 1112 1112 } else { 1113 1113 pr_info("Injecting memory failure for pfn %#lx at process virtual address %#lx\n", 1114 1114 pfn, start); 1115 - ret = memory_failure(pfn, MF_COUNT_INCREASED); 1115 + ret = memory_failure(pfn, MF_COUNT_INCREASED | MF_SW_SIMULATED); 1116 1116 if (ret == -EOPNOTSUPP) 1117 1117 ret = 0; 1118 1118 }
+1 -1
mm/memcontrol.c
··· 4859 4859 { 4860 4860 /* 4861 4861 * Deprecated. 4862 - * Please, take a look at tools/cgroup/slabinfo.py . 4862 + * Please, take a look at tools/cgroup/memcg_slabinfo.py . 4863 4863 */ 4864 4864 return 0; 4865 4865 }
+12
mm/memory-failure.c
··· 69 69 70 70 atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0); 71 71 72 + static bool hw_memory_failure __read_mostly = false; 73 + 72 74 static bool __page_handle_poison(struct page *page) 73 75 { 74 76 int ret; ··· 1770 1768 1771 1769 mutex_lock(&mf_mutex); 1772 1770 1771 + if (!(flags & MF_SW_SIMULATED)) 1772 + hw_memory_failure = true; 1773 + 1773 1774 p = pfn_to_online_page(pfn); 1774 1775 if (!p) { 1775 1776 res = arch_memory_failure(pfn, flags); ··· 2107 2102 page = compound_head(p); 2108 2103 2109 2104 mutex_lock(&mf_mutex); 2105 + 2106 + if (hw_memory_failure) { 2107 + unpoison_pr_info("Unpoison: Disabled after HW memory failure %#lx\n", 2108 + pfn, &unpoison_rs); 2109 + ret = -EOPNOTSUPP; 2110 + goto unlock_mutex; 2111 + } 2110 2112 2111 2113 if (!PageHWPoison(p)) { 2112 2114 unpoison_pr_info("Unpoison: Page was already unpoisoned %#lx\n",
+1
mm/migrate.c
··· 1106 1106 if (!newpage) 1107 1107 return -ENOMEM; 1108 1108 1109 + newpage->private = 0; 1109 1110 rc = __unmap_and_move(page, newpage, force, mode); 1110 1111 if (rc == MIGRATEPAGE_SUCCESS) 1111 1112 set_page_owner_migrate_reason(newpage, reason);
+2
mm/page_isolation.c
··· 286 286 * @flags: isolation flags 287 287 * @gfp_flags: GFP flags used for migrating pages 288 288 * @isolate_before: isolate the pageblock before the boundary_pfn 289 + * @skip_isolation: the flag to skip the pageblock isolation in second 290 + * isolate_single_pageblock() 289 291 * 290 292 * Free and in-use pages can be as big as MAX_ORDER-1 and contain more than one 291 293 * pageblock. When not all pageblocks within a page are isolated at the same
+2
mm/readahead.c
··· 510 510 new_order--; 511 511 } 512 512 513 + filemap_invalidate_lock_shared(mapping); 513 514 while (index <= limit) { 514 515 unsigned int order = new_order; 515 516 ··· 537 536 } 538 537 539 538 read_pages(ractl); 539 + filemap_invalidate_unlock_shared(mapping); 540 540 541 541 /* 542 542 * If there were already pages in the page cache, then we may have
+36 -7
mm/slub.c
··· 726 726 return kasan_reset_tag(p + alloc); 727 727 } 728 728 729 - static void noinline set_track(struct kmem_cache *s, void *object, 730 - enum track_item alloc, unsigned long addr) 731 - { 732 - struct track *p = get_track(s, object, alloc); 733 - 734 729 #ifdef CONFIG_STACKDEPOT 730 + static noinline depot_stack_handle_t set_track_prepare(void) 731 + { 732 + depot_stack_handle_t handle; 735 733 unsigned long entries[TRACK_ADDRS_COUNT]; 736 734 unsigned int nr_entries; 737 735 738 736 nr_entries = stack_trace_save(entries, ARRAY_SIZE(entries), 3); 739 - p->handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT); 737 + handle = stack_depot_save(entries, nr_entries, GFP_NOWAIT); 738 + 739 + return handle; 740 + } 741 + #else 742 + static inline depot_stack_handle_t set_track_prepare(void) 743 + { 744 + return 0; 745 + } 740 746 #endif 741 747 748 + static void set_track_update(struct kmem_cache *s, void *object, 749 + enum track_item alloc, unsigned long addr, 750 + depot_stack_handle_t handle) 751 + { 752 + struct track *p = get_track(s, object, alloc); 753 + 754 + #ifdef CONFIG_STACKDEPOT 755 + p->handle = handle; 756 + #endif 742 757 p->addr = addr; 743 758 p->cpu = smp_processor_id(); 744 759 p->pid = current->pid; 745 760 p->when = jiffies; 761 + } 762 + 763 + static __always_inline void set_track(struct kmem_cache *s, void *object, 764 + enum track_item alloc, unsigned long addr) 765 + { 766 + depot_stack_handle_t handle = set_track_prepare(); 767 + 768 + set_track_update(s, object, alloc, addr, handle); 746 769 } 747 770 748 771 static void init_tracking(struct kmem_cache *s, void *object) ··· 1396 1373 int cnt = 0; 1397 1374 unsigned long flags, flags2; 1398 1375 int ret = 0; 1376 + depot_stack_handle_t handle = 0; 1377 + 1378 + if (s->flags & SLAB_STORE_USER) 1379 + handle = set_track_prepare(); 1399 1380 1400 1381 spin_lock_irqsave(&n->list_lock, flags); 1401 1382 slab_lock(slab, &flags2); ··· 1418 1391 } 1419 1392 1420 1393 if (s->flags & 
SLAB_STORE_USER) 1421 - set_track(s, object, TRACK_FREE, addr); 1394 + set_track_update(s, object, TRACK_FREE, addr, handle); 1422 1395 trace(s, slab, object, 0); 1423 1396 /* Freepointer not overwritten by init_object(), SLAB_POISON moved it */ 1424 1397 init_object(s, object, SLUB_RED_INACTIVE); ··· 2963 2936 2964 2937 if (!freelist) { 2965 2938 c->slab = NULL; 2939 + c->tid = next_tid(c->tid); 2966 2940 local_unlock_irqrestore(&s->cpu_slab->lock, flags); 2967 2941 stat(s, DEACTIVATE_BYPASS); 2968 2942 goto new_slab; ··· 2996 2968 freelist = c->freelist; 2997 2969 c->slab = NULL; 2998 2970 c->freelist = NULL; 2971 + c->tid = next_tid(c->tid); 2999 2972 local_unlock_irqrestore(&s->cpu_slab->lock, flags); 3000 2973 deactivate_slab(s, slab, freelist); 3001 2974
+1 -1
mm/swap.c
··· 881 881 * lru_disable_count = 0 will have exited the critical 882 882 * section when synchronize_rcu() returns. 883 883 */ 884 - synchronize_rcu(); 884 + synchronize_rcu_expedited(); 885 885 #ifdef CONFIG_SMP 886 886 __lru_add_drain_all(true); 887 887 #else
+15 -10
net/core/dev.c
··· 397 397 /* Device list removal 398 398 * caller must respect a RCU grace period before freeing/reusing dev 399 399 */ 400 - static void unlist_netdevice(struct net_device *dev) 400 + static void unlist_netdevice(struct net_device *dev, bool lock) 401 401 { 402 402 ASSERT_RTNL(); 403 403 404 404 /* Unlink dev from the device chain */ 405 - write_lock(&dev_base_lock); 405 + if (lock) 406 + write_lock(&dev_base_lock); 406 407 list_del_rcu(&dev->dev_list); 407 408 netdev_name_node_del(dev->name_node); 408 409 hlist_del_rcu(&dev->index_hlist); 409 - write_unlock(&dev_base_lock); 410 + if (lock) 411 + write_unlock(&dev_base_lock); 410 412 411 413 dev_base_seq_inc(dev_net(dev)); 412 414 } ··· 10045 10043 goto err_uninit; 10046 10044 10047 10045 ret = netdev_register_kobject(dev); 10048 - if (ret) { 10049 - dev->reg_state = NETREG_UNREGISTERED; 10046 + write_lock(&dev_base_lock); 10047 + dev->reg_state = ret ? NETREG_UNREGISTERED : NETREG_REGISTERED; 10048 + write_unlock(&dev_base_lock); 10049 + if (ret) 10050 10050 goto err_uninit; 10051 - } 10052 - dev->reg_state = NETREG_REGISTERED; 10053 10051 10054 10052 __netdev_update_features(dev); 10055 10053 ··· 10331 10329 continue; 10332 10330 } 10333 10331 10332 + write_lock(&dev_base_lock); 10334 10333 dev->reg_state = NETREG_UNREGISTERED; 10334 + write_unlock(&dev_base_lock); 10335 10335 linkwatch_forget_dev(dev); 10336 10336 } 10337 10337 ··· 10814 10810 10815 10811 list_for_each_entry(dev, head, unreg_list) { 10816 10812 /* And unlink it from device chain. 
*/ 10817 - unlist_netdevice(dev); 10818 - 10813 + write_lock(&dev_base_lock); 10814 + unlist_netdevice(dev, false); 10819 10815 dev->reg_state = NETREG_UNREGISTERING; 10816 + write_unlock(&dev_base_lock); 10820 10817 } 10821 10818 flush_all_backlogs(); 10822 10819 ··· 10964 10959 dev_close(dev); 10965 10960 10966 10961 /* And unlink it from device chain */ 10967 - unlist_netdevice(dev); 10962 + unlist_netdevice(dev, true); 10968 10963 10969 10964 synchronize_net(); 10970 10965
+28 -6
net/core/filter.c
··· 6516 6516 ifindex, proto, netns_id, flags); 6517 6517 6518 6518 if (sk) { 6519 - sk = sk_to_full_sk(sk); 6520 - if (!sk_fullsock(sk)) { 6519 + struct sock *sk2 = sk_to_full_sk(sk); 6520 + 6521 + /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk 6522 + * sock refcnt is decremented to prevent a request_sock leak. 6523 + */ 6524 + if (!sk_fullsock(sk2)) 6525 + sk2 = NULL; 6526 + if (sk2 != sk) { 6521 6527 sock_gen_put(sk); 6522 - return NULL; 6528 + /* Ensure there is no need to bump sk2 refcnt */ 6529 + if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) { 6530 + WARN_ONCE(1, "Found non-RCU, unreferenced socket!"); 6531 + return NULL; 6532 + } 6533 + sk = sk2; 6523 6534 } 6524 6535 } 6525 6536 ··· 6564 6553 flags); 6565 6554 6566 6555 if (sk) { 6567 - sk = sk_to_full_sk(sk); 6568 - if (!sk_fullsock(sk)) { 6556 + struct sock *sk2 = sk_to_full_sk(sk); 6557 + 6558 + /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk 6559 + * sock refcnt is decremented to prevent a request_sock leak. 6560 + */ 6561 + if (!sk_fullsock(sk2)) 6562 + sk2 = NULL; 6563 + if (sk2 != sk) { 6569 6564 sock_gen_put(sk); 6570 - return NULL; 6565 + /* Ensure there is no need to bump sk2 refcnt */ 6566 + if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) { 6567 + WARN_ONCE(1, "Found non-RCU, unreferenced socket!"); 6568 + return NULL; 6569 + } 6570 + sk = sk2; 6571 6571 } 6572 6572 } 6573 6573
+1
net/core/net-sysfs.c
··· 33 33 static const char fmt_ulong[] = "%lu\n"; 34 34 static const char fmt_u64[] = "%llu\n"; 35 35 36 + /* Caller holds RTNL or dev_base_lock */ 36 37 static inline int dev_isalive(const struct net_device *dev) 37 38 { 38 39 return dev->reg_state <= NETREG_REGISTERED;
+5
net/core/skmsg.c
··· 699 699 700 700 write_lock_bh(&sk->sk_callback_lock); 701 701 702 + if (sk_is_inet(sk) && inet_csk_has_ulp(sk)) { 703 + psock = ERR_PTR(-EINVAL); 704 + goto out; 705 + } 706 + 702 707 if (sk->sk_user_data) { 703 708 psock = ERR_PTR(-EBUSY); 704 709 goto out;
+1 -1
net/ethtool/eeprom.c
··· 36 36 if (request->page) 37 37 offset = request->page * ETH_MODULE_EEPROM_PAGE_LEN + offset; 38 38 39 - if (modinfo->type == ETH_MODULE_SFF_8079 && 39 + if (modinfo->type == ETH_MODULE_SFF_8472 && 40 40 request->i2c_address == 0x51) 41 41 offset += ETH_MODULE_EEPROM_PAGE_LEN * 2; 42 42
+10 -5
net/ipv4/ip_gre.c
··· 524 524 int tunnel_hlen; 525 525 int version; 526 526 int nhoff; 527 - int thoff; 528 527 529 528 tun_info = skb_tunnel_info(skb); 530 529 if (unlikely(!tun_info || !(tun_info->mode & IP_TUNNEL_INFO_TX) || ··· 557 558 (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff)) 558 559 truncate = true; 559 560 560 - thoff = skb_transport_header(skb) - skb_mac_header(skb); 561 - if (skb->protocol == htons(ETH_P_IPV6) && 562 - (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)) 563 - truncate = true; 561 + if (skb->protocol == htons(ETH_P_IPV6)) { 562 + int thoff; 563 + 564 + if (skb_transport_header_was_set(skb)) 565 + thoff = skb_transport_header(skb) - skb_mac_header(skb); 566 + else 567 + thoff = nhoff + sizeof(struct ipv6hdr); 568 + if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff) 569 + truncate = true; 570 + } 564 571 565 572 if (version == 1) { 566 573 erspan_build_header(skb, ntohl(tunnel_id_to_key32(key->tun_id)),
+7 -3
net/ipv4/ping.c
··· 319 319 pr_debug("ping_check_bind_addr(sk=%p,addr=%pI4,port=%d)\n", 320 320 sk, &addr->sin_addr.s_addr, ntohs(addr->sin_port)); 321 321 322 + if (addr->sin_addr.s_addr == htonl(INADDR_ANY)) 323 + return 0; 324 + 322 325 tb_id = l3mdev_fib_table_by_index(net, sk->sk_bound_dev_if) ? : tb_id; 323 326 chk_addr_ret = inet_addr_type_table(net, addr->sin_addr.s_addr, tb_id); 324 327 325 - if (!inet_addr_valid_or_nonlocal(net, inet_sk(sk), 326 - addr->sin_addr.s_addr, 327 - chk_addr_ret)) 328 + if (chk_addr_ret == RTN_MULTICAST || 329 + chk_addr_ret == RTN_BROADCAST || 330 + (chk_addr_ret != RTN_LOCAL && 331 + !inet_can_nonlocal_bind(net, isk))) 328 332 return -EADDRNOTAVAIL; 329 333 330 334 #if IS_ENABLED(CONFIG_IPV6)
-3
net/ipv4/tcp_bpf.c
··· 611 611 return 0; 612 612 } 613 613 614 - if (inet_csk_has_ulp(sk)) 615 - return -EINVAL; 616 - 617 614 if (sk->sk_family == AF_INET6) { 618 615 if (tcp_bpf_assert_proto_ops(psock->sk_proto)) 619 616 return -EINVAL;
+10 -5
net/ipv6/ip6_gre.c
··· 939 939 __be16 proto; 940 940 __u32 mtu; 941 941 int nhoff; 942 - int thoff; 943 942 944 943 if (!pskb_inet_may_pull(skb)) 945 944 goto tx_err; ··· 959 960 (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff)) 960 961 truncate = true; 961 962 962 - thoff = skb_transport_header(skb) - skb_mac_header(skb); 963 - if (skb->protocol == htons(ETH_P_IPV6) && 964 - (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)) 965 - truncate = true; 963 + if (skb->protocol == htons(ETH_P_IPV6)) { 964 + int thoff; 965 + 966 + if (skb_transport_header_was_set(skb)) 967 + thoff = skb_transport_header(skb) - skb_mac_header(skb); 968 + else 969 + thoff = nhoff + sizeof(struct ipv6hdr); 970 + if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff) 971 + truncate = true; 972 + } 966 973 967 974 if (skb_cow_head(skb, dev->needed_headroom ?: t->hlen)) 968 975 goto tx_err;
+21 -4
net/netfilter/nf_dup_netdev.c
··· 13 13 #include <net/netfilter/nf_tables_offload.h> 14 14 #include <net/netfilter/nf_dup_netdev.h> 15 15 16 - static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev) 16 + #define NF_RECURSION_LIMIT 2 17 + 18 + static DEFINE_PER_CPU(u8, nf_dup_skb_recursion); 19 + 20 + static void nf_do_netdev_egress(struct sk_buff *skb, struct net_device *dev, 21 + enum nf_dev_hooks hook) 17 22 { 18 - if (skb_mac_header_was_set(skb)) 23 + if (__this_cpu_read(nf_dup_skb_recursion) > NF_RECURSION_LIMIT) 24 + goto err; 25 + 26 + if (hook == NF_NETDEV_INGRESS && skb_mac_header_was_set(skb)) { 27 + if (skb_cow_head(skb, skb->mac_len)) 28 + goto err; 29 + 19 30 skb_push(skb, skb->mac_len); 31 + } 20 32 21 33 skb->dev = dev; 22 34 skb_clear_tstamp(skb); 35 + __this_cpu_inc(nf_dup_skb_recursion); 23 36 dev_queue_xmit(skb); 37 + __this_cpu_dec(nf_dup_skb_recursion); 38 + return; 39 + err: 40 + kfree_skb(skb); 24 41 } 25 42 26 43 void nf_fwd_netdev_egress(const struct nft_pktinfo *pkt, int oif) ··· 50 33 return; 51 34 } 52 35 53 - nf_do_netdev_egress(pkt->skb, dev); 36 + nf_do_netdev_egress(pkt->skb, dev, nft_hook(pkt)); 54 37 } 55 38 EXPORT_SYMBOL_GPL(nf_fwd_netdev_egress); 56 39 ··· 65 48 66 49 skb = skb_clone(pkt->skb, GFP_ATOMIC); 67 50 if (skb) 68 - nf_do_netdev_egress(skb, dev); 51 + nf_do_netdev_egress(skb, dev, nft_hook(pkt)); 69 52 } 70 53 EXPORT_SYMBOL_GPL(nf_dup_netdev_egress); 71 54
+2 -11
net/netfilter/nft_meta.c
··· 14 14 #include <linux/in.h> 15 15 #include <linux/ip.h> 16 16 #include <linux/ipv6.h> 17 + #include <linux/random.h> 17 18 #include <linux/smp.h> 18 19 #include <linux/static_key.h> 19 20 #include <net/dst.h> ··· 32 31 #define NFT_META_SECS_PER_HOUR 3600 33 32 #define NFT_META_SECS_PER_DAY 86400 34 33 #define NFT_META_DAYS_PER_WEEK 7 35 - 36 - static DEFINE_PER_CPU(struct rnd_state, nft_prandom_state); 37 34 38 35 static u8 nft_meta_weekday(void) 39 36 { ··· 270 271 return true; 271 272 } 272 273 273 - static noinline u32 nft_prandom_u32(void) 274 - { 275 - struct rnd_state *state = this_cpu_ptr(&nft_prandom_state); 276 - 277 - return prandom_u32_state(state); 278 - } 279 - 280 274 #ifdef CONFIG_IP_ROUTE_CLASSID 281 275 static noinline bool 282 276 nft_meta_get_eval_rtclassid(const struct sk_buff *skb, u32 *dest) ··· 381 389 break; 382 390 #endif 383 391 case NFT_META_PRANDOM: 384 - *dest = nft_prandom_u32(); 392 + *dest = get_random_u32(); 385 393 break; 386 394 #ifdef CONFIG_XFRM 387 395 case NFT_META_SECPATH: ··· 510 518 len = IFNAMSIZ; 511 519 break; 512 520 case NFT_META_PRANDOM: 513 - prandom_init_once(&nft_prandom_state); 514 521 len = sizeof(u32); 515 522 break; 516 523 #ifdef CONFIG_XFRM
+3 -9
net/netfilter/nft_numgen.c
··· 9 9 #include <linux/netlink.h> 10 10 #include <linux/netfilter.h> 11 11 #include <linux/netfilter/nf_tables.h> 12 + #include <linux/random.h> 12 13 #include <linux/static_key.h> 13 14 #include <net/netfilter/nf_tables.h> 14 15 #include <net/netfilter/nf_tables_core.h> 15 - 16 - static DEFINE_PER_CPU(struct rnd_state, nft_numgen_prandom_state); 17 16 18 17 struct nft_ng_inc { 19 18 u8 dreg; ··· 134 135 u32 offset; 135 136 }; 136 137 137 - static u32 nft_ng_random_gen(struct nft_ng_random *priv) 138 + static u32 nft_ng_random_gen(const struct nft_ng_random *priv) 138 139 { 139 - struct rnd_state *state = this_cpu_ptr(&nft_numgen_prandom_state); 140 - 141 - return reciprocal_scale(prandom_u32_state(state), priv->modulus) + 142 - priv->offset; 140 + return reciprocal_scale(get_random_u32(), priv->modulus) + priv->offset; 143 141 } 144 142 145 143 static void nft_ng_random_eval(const struct nft_expr *expr, ··· 163 167 164 168 if (priv->offset + priv->modulus - 1 < priv->offset) 165 169 return -EOVERFLOW; 166 - 167 - prandom_init_once(&nft_numgen_prandom_state); 168 170 169 171 return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg, 170 172 NULL, NFT_DATA_VALUE, sizeof(u32));
+1 -1
net/openvswitch/flow.c
··· 407 407 if (flags & IP6_FH_F_FRAG) { 408 408 if (frag_off) { 409 409 key->ip.frag = OVS_FRAG_TYPE_LATER; 410 - key->ip.proto = nexthdr; 410 + key->ip.proto = NEXTHDR_FRAGMENT; 411 411 return 0; 412 412 } 413 413 key->ip.frag = OVS_FRAG_TYPE_FIRST;
+2 -2
net/sched/sch_netem.c
··· 1146 1146 struct tc_netem_rate rate; 1147 1147 struct tc_netem_slot slot; 1148 1148 1149 - qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency), 1149 + qopt.latency = min_t(psched_time_t, PSCHED_NS2TICKS(q->latency), 1150 1150 UINT_MAX); 1151 - qopt.jitter = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->jitter), 1151 + qopt.jitter = min_t(psched_time_t, PSCHED_NS2TICKS(q->jitter), 1152 1152 UINT_MAX); 1153 1153 qopt.limit = q->limit; 1154 1154 qopt.loss = q->loss;
+1 -2
net/tipc/core.c
··· 109 109 struct tipc_net *tn = tipc_net(net); 110 110 111 111 tipc_detach_loopback(net); 112 + tipc_net_stop(net); 112 113 /* Make sure the tipc_net_finalize_work() finished */ 113 114 cancel_work_sync(&tn->work); 114 - tipc_net_stop(net); 115 - 116 115 tipc_bcast_stop(net); 117 116 tipc_nametbl_stop(net); 118 117 tipc_sk_rht_destroy(net);
+2
net/tls/tls_main.c
··· 921 921 { 922 922 struct tls_context *ctx; 923 923 924 + WARN_ON_ONCE(sk->sk_prot == p); 925 + 924 926 ctx = tls_get_ctx(sk); 925 927 if (likely(ctx)) { 926 928 ctx->sk_write_space = write_space;
+9 -7
net/xdp/xsk.c
··· 538 538 goto out; 539 539 } 540 540 541 - skb = xsk_build_skb(xs, &desc); 542 - if (IS_ERR(skb)) { 543 - err = PTR_ERR(skb); 544 - goto out; 545 - } 546 - 547 541 /* This is the backpressure mechanism for the Tx path. 548 542 * Reserve space in the completion queue and only proceed 549 543 * if there is space in it. This avoids having to implement ··· 546 552 spin_lock_irqsave(&xs->pool->cq_lock, flags); 547 553 if (xskq_prod_reserve(xs->pool->cq)) { 548 554 spin_unlock_irqrestore(&xs->pool->cq_lock, flags); 549 - kfree_skb(skb); 550 555 goto out; 551 556 } 552 557 spin_unlock_irqrestore(&xs->pool->cq_lock, flags); 558 + 559 + skb = xsk_build_skb(xs, &desc); 560 + if (IS_ERR(skb)) { 561 + err = PTR_ERR(skb); 562 + spin_lock_irqsave(&xs->pool->cq_lock, flags); 563 + xskq_prod_cancel(xs->pool->cq); 564 + spin_unlock_irqrestore(&xs->pool->cq_lock, flags); 565 + goto out; 566 + } 553 567 554 568 err = __dev_direct_xmit(skb, xs->queue_id); 555 569 if (err == NETDEV_TX_BUSY) {
+25 -4
samples/fprobe/fprobe_example.c
··· 21 21 #define BACKTRACE_DEPTH 16 22 22 #define MAX_SYMBOL_LEN 4096 23 23 struct fprobe sample_probe; 24 + static unsigned long nhit; 24 25 25 26 static char symbol[MAX_SYMBOL_LEN] = "kernel_clone"; 26 27 module_param_string(symbol, symbol, sizeof(symbol), 0644); ··· 29 28 module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644); 30 29 static bool stackdump = true; 31 30 module_param(stackdump, bool, 0644); 31 + static bool use_trace = false; 32 + module_param(use_trace, bool, 0644); 32 33 33 34 static void show_backtrace(void) 34 35 { ··· 43 40 44 41 static void sample_entry_handler(struct fprobe *fp, unsigned long ip, struct pt_regs *regs) 45 42 { 46 - pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip); 43 + if (use_trace) 44 + /* 45 + * This is just an example, no kernel code should call 46 + * trace_printk() except when actively debugging. 47 + */ 48 + trace_printk("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip); 49 + else 50 + pr_info("Enter <%pS> ip = 0x%p\n", (void *)ip, (void *)ip); 51 + nhit++; 47 52 if (stackdump) 48 53 show_backtrace(); 49 54 } ··· 60 49 { 61 50 unsigned long rip = instruction_pointer(regs); 62 51 63 - pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n", 64 - (void *)ip, (void *)ip, (void *)rip, (void *)rip); 52 + if (use_trace) 53 + /* 54 + * This is just an example, no kernel code should call 55 + * trace_printk() except when actively debugging. 56 + */ 57 + trace_printk("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n", 58 + (void *)ip, (void *)ip, (void *)rip, (void *)rip); 59 + else 60 + pr_info("Return from <%pS> ip = 0x%p to rip = 0x%p (%pS)\n", 61 + (void *)ip, (void *)ip, (void *)rip, (void *)rip); 62 + nhit++; 65 63 if (stackdump) 66 64 show_backtrace(); 67 65 } ··· 132 112 { 133 113 unregister_fprobe(&sample_probe); 134 114 135 - pr_info("fprobe at %s unregistered\n", symbol); 115 + pr_info("fprobe at %s unregistered. 
%ld times hit, %ld times missed\n", 116 + symbol, nhit, sample_probe.nmissed); 136 117 } 137 118 138 119 module_init(fprobe_init)
+3
scripts/gen_autoksyms.sh
··· 56 56 # point addresses. 57 57 sed -e 's/^\.//' | 58 58 sort -u | 59 + # Ignore __this_module. It's not an exported symbol, and will be resolved 60 + # when the final .ko's are linked. 61 + grep -v '^__this_module$' | 59 62 sed -e 's/\(.*\)/#define __KSYM_\1 1/' >> "$output_file"
+1 -1
scripts/mod/modpost.c
··· 980 980 }, 981 981 /* Do not export init/exit functions or data */ 982 982 { 983 - .fromsec = { "__ksymtab*", NULL }, 983 + .fromsec = { "___ksymtab*", NULL }, 984 984 .bad_tosec = { INIT_SECTIONS, EXIT_SECTIONS, NULL }, 985 985 .mismatch = EXPORT_TO_INIT_EXIT, 986 986 .symbol_white_list = { DEFAULT_SYMBOL_WHITE_LIST, NULL },
+1 -22
sound/core/memalloc.c
··· 431 431 */ 432 432 static void *snd_dma_dev_alloc(struct snd_dma_buffer *dmab, size_t size) 433 433 { 434 - void *p; 435 - 436 - p = dma_alloc_coherent(dmab->dev.dev, size, &dmab->addr, DEFAULT_GFP); 437 - #ifdef CONFIG_X86 438 - if (p && dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC) 439 - set_memory_wc((unsigned long)p, PAGE_ALIGN(size) >> PAGE_SHIFT); 440 - #endif 441 - return p; 434 + return dma_alloc_coherent(dmab->dev.dev, size, &dmab->addr, DEFAULT_GFP); 442 435 } 443 436 444 437 static void snd_dma_dev_free(struct snd_dma_buffer *dmab) 445 438 { 446 - #ifdef CONFIG_X86 447 - if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC) 448 - set_memory_wb((unsigned long)dmab->area, 449 - PAGE_ALIGN(dmab->bytes) >> PAGE_SHIFT); 450 - #endif 451 439 dma_free_coherent(dmab->dev.dev, dmab->bytes, dmab->area, dmab->addr); 452 440 } 453 441 454 442 static int snd_dma_dev_mmap(struct snd_dma_buffer *dmab, 455 443 struct vm_area_struct *area) 456 444 { 457 - #ifdef CONFIG_X86 458 - if (dmab->dev.type == SNDRV_DMA_TYPE_DEV_WC) 459 - area->vm_page_prot = pgprot_writecombine(area->vm_page_prot); 460 - #endif 461 445 return dma_mmap_coherent(dmab->dev.dev, area, 462 446 dmab->area, dmab->addr, dmab->bytes); 463 447 } ··· 455 471 /* 456 472 * Write-combined pages 457 473 */ 458 - #ifdef CONFIG_X86 459 - /* On x86, share the same ops as the standard dev ops */ 460 - #define snd_dma_wc_ops snd_dma_dev_ops 461 - #else /* CONFIG_X86 */ 462 474 static void *snd_dma_wc_alloc(struct snd_dma_buffer *dmab, size_t size) 463 475 { 464 476 return dma_alloc_wc(dmab->dev.dev, size, &dmab->addr, DEFAULT_GFP); ··· 477 497 .free = snd_dma_wc_free, 478 498 .mmap = snd_dma_wc_mmap, 479 499 }; 480 - #endif /* CONFIG_X86 */ 481 500 482 501 #ifdef CONFIG_SND_DMA_SGBUF 483 502 static void *snd_dma_sg_fallback_alloc(struct snd_dma_buffer *dmab, size_t size);
+6 -9
sound/hda/hdac_i915.c
··· 119 119 /* check whether Intel graphics is present and reachable */ 120 120 static int i915_gfx_present(struct pci_dev *hdac_pci) 121 121 { 122 - unsigned int class = PCI_BASE_CLASS_DISPLAY << 16; 123 122 struct pci_dev *display_dev = NULL; 124 - bool match = false; 125 123 126 - do { 127 - display_dev = pci_get_class(class, display_dev); 128 - 129 - if (display_dev && display_dev->vendor == PCI_VENDOR_ID_INTEL && 124 + for_each_pci_dev(display_dev) { 125 + if (display_dev->vendor == PCI_VENDOR_ID_INTEL && 126 + (display_dev->class >> 16) == PCI_BASE_CLASS_DISPLAY && 130 127 connectivity_check(display_dev, hdac_pci)) { 131 128 pci_dev_put(display_dev); 132 - match = true; 129 + return true; 133 130 } 134 - } while (!match && display_dev); 131 + } 135 132 136 - return match; 133 + return false; 137 134 } 138 135 139 136 /**
+12
sound/hda/intel-dsp-config.c
··· 196 196 DMI_MATCH(DMI_SYS_VENDOR, "Google"), 197 197 } 198 198 }, 199 + { 200 + .ident = "UP-WHL", 201 + .matches = { 202 + DMI_MATCH(DMI_SYS_VENDOR, "AAEON"), 203 + } 204 + }, 199 205 {} 200 206 } 201 207 }, ··· 362 356 .ident = "Google Chromebooks", 363 357 .matches = { 364 358 DMI_MATCH(DMI_SYS_VENDOR, "Google"), 359 + } 360 + }, 361 + { 362 + .ident = "UPX-TGL", 363 + .matches = { 364 + DMI_MATCH(DMI_SYS_VENDOR, "AAEON"), 365 365 } 366 366 }, 367 367 {}
+8 -9
sound/hda/intel-nhlt.c
··· 55 55 56 56 /* find max number of channels based on format_configuration */ 57 57 if (fmt_configs->fmt_count) { 58 - dev_dbg(dev, "%s: found %d format definitions\n", 59 - __func__, fmt_configs->fmt_count); 58 + dev_dbg(dev, "found %d format definitions\n", 59 + fmt_configs->fmt_count); 60 60 61 61 for (i = 0; i < fmt_configs->fmt_count; i++) { 62 62 struct wav_fmt_ext *fmt_ext; ··· 66 66 if (fmt_ext->fmt.channels > max_ch) 67 67 max_ch = fmt_ext->fmt.channels; 68 68 } 69 - dev_dbg(dev, "%s: max channels found %d\n", __func__, max_ch); 69 + dev_dbg(dev, "max channels found %d\n", max_ch); 70 70 } else { 71 - dev_dbg(dev, "%s: No format information found\n", __func__); 71 + dev_dbg(dev, "No format information found\n"); 72 72 } 73 73 74 74 if (cfg->device_config.config_type != NHLT_CONFIG_TYPE_MIC_ARRAY) { ··· 95 95 } 96 96 97 97 if (dmic_geo > 0) { 98 - dev_dbg(dev, "%s: Array with %d dmics\n", __func__, dmic_geo); 98 + dev_dbg(dev, "Array with %d dmics\n", dmic_geo); 99 99 } 100 100 if (max_ch > dmic_geo) { 101 - dev_dbg(dev, "%s: max channels %d exceed dmic number %d\n", 102 - __func__, max_ch, dmic_geo); 101 + dev_dbg(dev, "max channels %d exceed dmic number %d\n", 102 + max_ch, dmic_geo); 103 103 } 104 104 } 105 105 } 106 106 107 - dev_dbg(dev, "%s: dmic number %d max_ch %d\n", 108 - __func__, dmic_geo, max_ch); 107 + dev_dbg(dev, "dmic number %d max_ch %d\n", dmic_geo, max_ch); 109 108 110 109 return dmic_geo; 111 110 }
+4 -3
sound/pci/hda/hda_auto_parser.c
··· 819 819 snd_hda_set_pin_ctl_cache(codec, cfg->nid, cfg->val); 820 820 } 821 821 822 - static void apply_fixup(struct hda_codec *codec, int id, int action, int depth) 822 + void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth) 823 823 { 824 824 const char *modelname = codec->fixup_name; 825 825 ··· 829 829 if (++depth > 10) 830 830 break; 831 831 if (fix->chained_before) 832 - apply_fixup(codec, fix->chain_id, action, depth + 1); 832 + __snd_hda_apply_fixup(codec, fix->chain_id, action, depth + 1); 833 833 834 834 switch (fix->type) { 835 835 case HDA_FIXUP_PINS: ··· 870 870 id = fix->chain_id; 871 871 } 872 872 } 873 + EXPORT_SYMBOL_GPL(__snd_hda_apply_fixup); 873 874 874 875 /** 875 876 * snd_hda_apply_fixup - Apply the fixup chain with the given action ··· 880 879 void snd_hda_apply_fixup(struct hda_codec *codec, int action) 881 880 { 882 881 if (codec->fixup_list) 883 - apply_fixup(codec, codec->fixup_id, action, 0); 882 + __snd_hda_apply_fixup(codec, codec->fixup_id, action, 0); 884 883 } 885 884 EXPORT_SYMBOL_GPL(snd_hda_apply_fixup); 886 885
+1
sound/pci/hda/hda_local.h
··· 348 348 void snd_hda_apply_pincfgs(struct hda_codec *codec, 349 349 const struct hda_pintbl *cfg); 350 350 void snd_hda_apply_fixup(struct hda_codec *codec, int action); 351 + void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth); 351 352 void snd_hda_pick_fixup(struct hda_codec *codec, 352 353 const struct hda_model_fixup *models, 353 354 const struct snd_pci_quirk *quirk,
+2 -2
sound/pci/hda/patch_conexant.c
··· 1079 1079 if (err < 0) 1080 1080 goto error; 1081 1081 1082 - err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg); 1082 + err = cx_auto_parse_beep(codec); 1083 1083 if (err < 0) 1084 1084 goto error; 1085 1085 1086 - err = cx_auto_parse_beep(codec); 1086 + err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg); 1087 1087 if (err < 0) 1088 1088 goto error; 1089 1089
+35 -1
sound/pci/hda/patch_realtek.c
··· 2634 2634 SND_PCI_QUIRK(0x1558, 0x67e1, "Clevo PB71[DE][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2635 2635 SND_PCI_QUIRK(0x1558, 0x67e5, "Clevo PC70D[PRS](?:-D|-G)?", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2636 2636 SND_PCI_QUIRK(0x1558, 0x67f1, "Clevo PC70H[PRS]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2637 + SND_PCI_QUIRK(0x1558, 0x67f5, "Clevo PD70PN[NRT]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2637 2638 SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2638 2639 SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170SM", ALC1220_FIXUP_CLEVO_PB51ED_PINS), 2639 2640 SND_PCI_QUIRK(0x1558, 0x7715, "Clevo X170KM-G", ALC1220_FIXUP_CLEVO_PB51ED), ··· 7005 7004 ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS, 7006 7005 ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE, 7007 7006 ALC287_FIXUP_YOGA7_14ITL_SPEAKERS, 7007 + ALC298_FIXUP_LENOVO_C940_DUET7, 7008 7008 ALC287_FIXUP_13S_GEN2_SPEAKERS, 7009 7009 ALC256_FIXUP_SET_COEF_DEFAULTS, 7010 7010 ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE, ··· 7023 7021 ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED, 7024 7022 ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE, 7025 7023 }; 7024 + 7025 + /* A special fixup for Lenovo C940 and Yoga Duet 7; 7026 + * both have the very same PCI SSID, and we need to apply different fixups 7027 + * depending on the codec ID 7028 + */ 7029 + static void alc298_fixup_lenovo_c940_duet7(struct hda_codec *codec, 7030 + const struct hda_fixup *fix, 7031 + int action) 7032 + { 7033 + int id; 7034 + 7035 + if (codec->core.vendor_id == 0x10ec0298) 7036 + id = ALC298_FIXUP_LENOVO_SPK_VOLUME; /* C940 */ 7037 + else 7038 + id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* Duet 7 */ 7039 + __snd_hda_apply_fixup(codec, id, action, 0); 7040 + } 7026 7041 7027 7042 static const struct hda_fixup alc269_fixups[] = { 7028 7043 [ALC269_FIXUP_GPIO2] = { ··· 8740 8721 .chained = true, 8741 8722 .chain_id = ALC269_FIXUP_HEADSET_MODE, 8742 8723 }, 8724 + [ALC298_FIXUP_LENOVO_C940_DUET7] = { 8725 + .type = HDA_FIXUP_FUNC, 8726 + .v.func = 
alc298_fixup_lenovo_c940_duet7, 8727 + }, 8743 8728 [ALC287_FIXUP_13S_GEN2_SPEAKERS] = { 8744 8729 .type = HDA_FIXUP_VERBS, 8745 8730 .v.verbs = (const struct hda_verb[]) { ··· 9045 9022 ALC285_FIXUP_HP_GPIO_AMP_INIT), 9046 9023 SND_PCI_QUIRK(0x103c, 0x8783, "HP ZBook Fury 15 G7 Mobile Workstation", 9047 9024 ALC285_FIXUP_HP_GPIO_AMP_INIT), 9025 + SND_PCI_QUIRK(0x103c, 0x8787, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED), 9048 9026 SND_PCI_QUIRK(0x103c, 0x8788, "HP OMEN 15", ALC285_FIXUP_HP_MUTE_LED), 9049 9027 SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED), 9050 9028 SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED), ··· 9211 9187 SND_PCI_QUIRK(0x1558, 0x70f3, "Clevo NH77DPQ", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9212 9188 SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9213 9189 SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9190 + SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9214 9191 SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9215 9192 SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9216 9193 SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 9298 9273 SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340), 9299 9274 SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 9300 9275 SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS), 9301 - SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940", ALC298_FIXUP_LENOVO_SPK_VOLUME), 9276 + SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7), 9302 9277 SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS), 9303 9278 SND_PCI_QUIRK(0x17aa, 0x3820, "Yoga 
Duet 7 13ITL6", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS), 9304 9279 SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS), ··· 10762 10737 ALC668_FIXUP_MIC_DET_COEF, 10763 10738 ALC897_FIXUP_LENOVO_HEADSET_MIC, 10764 10739 ALC897_FIXUP_HEADSET_MIC_PIN, 10740 + ALC897_FIXUP_HP_HSMIC_VERB, 10765 10741 }; 10766 10742 10767 10743 static const struct hda_fixup alc662_fixups[] = { ··· 11182 11156 .chained = true, 11183 11157 .chain_id = ALC897_FIXUP_LENOVO_HEADSET_MIC 11184 11158 }, 11159 + [ALC897_FIXUP_HP_HSMIC_VERB] = { 11160 + .type = HDA_FIXUP_PINS, 11161 + .v.pins = (const struct hda_pintbl[]) { 11162 + { 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */ 11163 + { } 11164 + }, 11165 + }, 11185 11166 }; 11186 11167 11187 11168 static const struct snd_pci_quirk alc662_fixup_tbl[] = { ··· 11214 11181 SND_PCI_QUIRK(0x1028, 0x0698, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 11215 11182 SND_PCI_QUIRK(0x1028, 0x069f, "Dell", ALC668_FIXUP_DELL_MIC_NO_PRESENCE), 11216 11183 SND_PCI_QUIRK(0x103c, 0x1632, "HP RP5800", ALC662_FIXUP_HP_RP5800), 11184 + SND_PCI_QUIRK(0x103c, 0x8719, "HP", ALC897_FIXUP_HP_HSMIC_VERB), 11217 11185 SND_PCI_QUIRK(0x103c, 0x873e, "HP", ALC671_FIXUP_HP_HEADSET_MIC2), 11218 11186 SND_PCI_QUIRK(0x103c, 0x885f, "HP 288 Pro G8", ALC671_FIXUP_HP_HEADSET_MIC2), 11219 11187 SND_PCI_QUIRK(0x1043, 0x1080, "Asus UX501VW", ALC668_FIXUP_HEADSET_MODE),
+2 -2
sound/pci/hda/patch_via.c
··· 518 518 if (err < 0) 519 519 return err; 520 520 521 - err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg); 521 + err = auto_parse_beep(codec); 522 522 if (err < 0) 523 523 return err; 524 524 525 - err = auto_parse_beep(codec); 525 + err = snd_hda_gen_parse_auto_config(codec, &spec->gen.autocfg); 526 526 if (err < 0) 527 527 return err; 528 528
+3 -3
sound/usb/mixer_us16x08.c
··· 637 637 } 638 638 } else { 639 639 /* skip channels with no compressor active */ 640 - while (!store->comp_store->val[ 640 + while (store->comp_index <= SND_US16X08_MAX_CHANNELS 641 + && !store->comp_store->val[ 641 642 COMP_STORE_IDX(SND_US16X08_ID_COMP_SWITCH)] 642 - [store->comp_index - 1] 643 - && store->comp_index <= SND_US16X08_MAX_CHANNELS) { 643 + [store->comp_index - 1]) { 644 644 store->comp_index++; 645 645 } 646 646 ret = store->comp_index++;
+13 -2
sound/x86/intel_hdmi_audio.c
··· 33 33 #include <drm/intel_lpe_audio.h> 34 34 #include "intel_hdmi_audio.h" 35 35 36 + #define INTEL_HDMI_AUDIO_SUSPEND_DELAY_MS 5000 37 + 36 38 #define for_each_pipe(card_ctx, pipe) \ 37 39 for ((pipe) = 0; (pipe) < (card_ctx)->num_pipes; (pipe)++) 38 40 #define for_each_port(card_ctx, port) \ ··· 1068 1066 intelhaddata = snd_pcm_substream_chip(substream); 1069 1067 runtime = substream->runtime; 1070 1068 1071 - pm_runtime_get_sync(intelhaddata->dev); 1069 + retval = pm_runtime_resume_and_get(intelhaddata->dev); 1070 + if (retval < 0) 1071 + return retval; 1072 1072 1073 1073 /* set the runtime hw parameter with local snd_pcm_hardware struct */ 1074 1074 runtime->hw = had_pcm_hardware; ··· 1538 1534 container_of(work, struct snd_intelhad, hdmi_audio_wq); 1539 1535 struct intel_hdmi_lpe_audio_pdata *pdata = ctx->dev->platform_data; 1540 1536 struct intel_hdmi_lpe_audio_port_pdata *ppdata = &pdata->port[ctx->port]; 1537 + int ret; 1541 1538 1542 - pm_runtime_get_sync(ctx->dev); 1539 + ret = pm_runtime_resume_and_get(ctx->dev); 1540 + if (ret < 0) 1541 + return; 1542 + 1543 1543 mutex_lock(&ctx->mutex); 1544 1544 if (ppdata->pipe < 0) { 1545 1545 dev_dbg(ctx->dev, "%s: Event: HAD_NOTIFY_HOT_UNPLUG : port = %d\n", ··· 1810 1802 pdata->notify_audio_lpe = notify_audio_lpe; 1811 1803 spin_unlock_irq(&pdata->lpe_audio_slock); 1812 1804 1805 + pm_runtime_set_autosuspend_delay(&pdev->dev, INTEL_HDMI_AUDIO_SUSPEND_DELAY_MS); 1813 1806 pm_runtime_use_autosuspend(&pdev->dev); 1807 + pm_runtime_enable(&pdev->dev); 1814 1808 pm_runtime_mark_last_busy(&pdev->dev); 1809 + pm_runtime_idle(&pdev->dev); 1815 1810 1816 1811 dev_dbg(&pdev->dev, "%s: handle pending notification\n", __func__); 1817 1812 for_each_port(card_ctx, port) {
+10 -2
tools/arch/arm64/include/asm/cputype.h
··· 36 36 #define MIDR_VARIANT(midr) \ 37 37 (((midr) & MIDR_VARIANT_MASK) >> MIDR_VARIANT_SHIFT) 38 38 #define MIDR_IMPLEMENTOR_SHIFT 24 39 - #define MIDR_IMPLEMENTOR_MASK (0xff << MIDR_IMPLEMENTOR_SHIFT) 39 + #define MIDR_IMPLEMENTOR_MASK (0xffU << MIDR_IMPLEMENTOR_SHIFT) 40 40 #define MIDR_IMPLEMENTOR(midr) \ 41 41 (((midr) & MIDR_IMPLEMENTOR_MASK) >> MIDR_IMPLEMENTOR_SHIFT) 42 42 ··· 118 118 119 119 #define APPLE_CPU_PART_M1_ICESTORM 0x022 120 120 #define APPLE_CPU_PART_M1_FIRESTORM 0x023 121 + #define APPLE_CPU_PART_M1_ICESTORM_PRO 0x024 122 + #define APPLE_CPU_PART_M1_FIRESTORM_PRO 0x025 123 + #define APPLE_CPU_PART_M1_ICESTORM_MAX 0x028 124 + #define APPLE_CPU_PART_M1_FIRESTORM_MAX 0x029 121 125 122 126 #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53) 123 127 #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57) ··· 168 164 #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110) 169 165 #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM) 170 166 #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM) 167 + #define MIDR_APPLE_M1_ICESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_PRO) 168 + #define MIDR_APPLE_M1_FIRESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_PRO) 169 + #define MIDR_APPLE_M1_ICESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_MAX) 170 + #define MIDR_APPLE_M1_FIRESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_MAX) 171 171 172 172 /* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */ 173 173 #define MIDR_FUJITSU_ERRATUM_010001 MIDR_FUJITSU_A64FX ··· 180 172 181 173 #ifndef __ASSEMBLY__ 182 174 183 - #include "sysreg.h" 175 + #include <asm/sysreg.h> 184 176 185 177 #define read_cpuid(reg) read_sysreg_s(SYS_ ## reg) 186 178
+5 -2
tools/arch/x86/include/asm/cpufeatures.h
··· 201 201 #define X86_FEATURE_INVPCID_SINGLE ( 7*32+ 7) /* Effectively INVPCID && CR4.PCIDE=1 */ 202 202 #define X86_FEATURE_HW_PSTATE ( 7*32+ 8) /* AMD HW-PState */ 203 203 #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */ 204 - /* FREE! ( 7*32+10) */ 204 + #define X86_FEATURE_XCOMPACTED ( 7*32+10) /* "" Use compacted XSTATE (XSAVES or XSAVEC) */ 205 205 #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */ 206 206 #define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */ 207 207 #define X86_FEATURE_RETPOLINE_LFENCE ( 7*32+13) /* "" Use LFENCE for Spectre variant 2 */ ··· 211 211 #define X86_FEATURE_SSBD ( 7*32+17) /* Speculative Store Bypass Disable */ 212 212 #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */ 213 213 #define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* "" Fill RSB on context switches */ 214 - /* FREE! ( 7*32+20) */ 214 + #define X86_FEATURE_PERFMON_V2 ( 7*32+20) /* AMD Performance Monitoring Version 2 */ 215 215 #define X86_FEATURE_USE_IBPB ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */ 216 216 #define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* "" Use IBRS during runtime firmware calls */ 217 217 #define X86_FEATURE_SPEC_STORE_BYPASS_DISABLE ( 7*32+23) /* "" Disable Speculative Store Bypass. 
*/ ··· 238 238 #define X86_FEATURE_VMW_VMMCALL ( 8*32+19) /* "" VMware prefers VMMCALL hypercall instruction */ 239 239 #define X86_FEATURE_PVUNLOCK ( 8*32+20) /* "" PV unlock function */ 240 240 #define X86_FEATURE_VCPUPREEMPT ( 8*32+21) /* "" PV vcpu_is_preempted function */ 241 + #define X86_FEATURE_TDX_GUEST ( 8*32+22) /* Intel Trust Domain Extensions Guest */ 241 242 242 243 /* Intel-defined CPU features, CPUID level 0x00000007:0 (EBX), word 9 */ 243 244 #define X86_FEATURE_FSGSBASE ( 9*32+ 0) /* RDFSBASE, WRFSBASE, RDGSBASE, WRGSBASE instructions*/ ··· 316 315 #define X86_FEATURE_VIRT_SSBD (13*32+25) /* Virtualized Speculative Store Bypass Disable */ 317 316 #define X86_FEATURE_AMD_SSB_NO (13*32+26) /* "" Speculative Store Bypass is fixed in hardware. */ 318 317 #define X86_FEATURE_CPPC (13*32+27) /* Collaborative Processor Performance Control */ 318 + #define X86_FEATURE_BRS (13*32+31) /* Branch Sampling available */ 319 319 320 320 /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */ 321 321 #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */ ··· 407 405 #define X86_FEATURE_SEV (19*32+ 1) /* AMD Secure Encrypted Virtualization */ 408 406 #define X86_FEATURE_VM_PAGE_FLUSH (19*32+ 2) /* "" VM Page Flush MSR is supported */ 409 407 #define X86_FEATURE_SEV_ES (19*32+ 3) /* AMD Secure Encrypted Virtualization - Encrypted State */ 408 + #define X86_FEATURE_V_TSC_AUX (19*32+ 9) /* "" Virtual TSC_AUX */ 410 409 #define X86_FEATURE_SME_COHERENT (19*32+10) /* "" AMD hardware-enforced cache coherency */ 411 410 412 411 /*
+7 -1
tools/arch/x86/include/asm/disabled-features.h
··· 62 62 # define DISABLE_SGX (1 << (X86_FEATURE_SGX & 31)) 63 63 #endif 64 64 65 + #ifdef CONFIG_INTEL_TDX_GUEST 66 + # define DISABLE_TDX_GUEST 0 67 + #else 68 + # define DISABLE_TDX_GUEST (1 << (X86_FEATURE_TDX_GUEST & 31)) 69 + #endif 70 + 65 71 /* 66 72 * Make sure to add features to the correct mask 67 73 */ ··· 79 73 #define DISABLED_MASK5 0 80 74 #define DISABLED_MASK6 0 81 75 #define DISABLED_MASK7 (DISABLE_PTI) 82 - #define DISABLED_MASK8 0 76 + #define DISABLED_MASK8 (DISABLE_TDX_GUEST) 83 77 #define DISABLED_MASK9 (DISABLE_SGX) 84 78 #define DISABLED_MASK10 0 85 79 #define DISABLED_MASK11 0
+6 -5
tools/arch/x86/include/uapi/asm/kvm.h
··· 428 428 struct kvm_vcpu_events events; 429 429 }; 430 430 431 - #define KVM_X86_QUIRK_LINT0_REENABLED (1 << 0) 432 - #define KVM_X86_QUIRK_CD_NW_CLEARED (1 << 1) 433 - #define KVM_X86_QUIRK_LAPIC_MMIO_HOLE (1 << 2) 434 - #define KVM_X86_QUIRK_OUT_7E_INC_RIP (1 << 3) 435 - #define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4) 431 + #define KVM_X86_QUIRK_LINT0_REENABLED (1 << 0) 432 + #define KVM_X86_QUIRK_CD_NW_CLEARED (1 << 1) 433 + #define KVM_X86_QUIRK_LAPIC_MMIO_HOLE (1 << 2) 434 + #define KVM_X86_QUIRK_OUT_7E_INC_RIP (1 << 3) 435 + #define KVM_X86_QUIRK_MISC_ENABLE_NO_MWAIT (1 << 4) 436 + #define KVM_X86_QUIRK_FIX_HYPERCALL_INSN (1 << 5) 436 437 437 438 #define KVM_STATE_NESTED_FORMAT_VMX 0 438 439 #define KVM_STATE_NESTED_FORMAT_SVM 1
+13
tools/arch/x86/include/uapi/asm/svm.h
··· 108 108 #define SVM_VMGEXIT_AP_JUMP_TABLE 0x80000005 109 109 #define SVM_VMGEXIT_SET_AP_JUMP_TABLE 0 110 110 #define SVM_VMGEXIT_GET_AP_JUMP_TABLE 1 111 + #define SVM_VMGEXIT_PSC 0x80000010 112 + #define SVM_VMGEXIT_GUEST_REQUEST 0x80000011 113 + #define SVM_VMGEXIT_EXT_GUEST_REQUEST 0x80000012 114 + #define SVM_VMGEXIT_AP_CREATION 0x80000013 115 + #define SVM_VMGEXIT_AP_CREATE_ON_INIT 0 116 + #define SVM_VMGEXIT_AP_CREATE 1 117 + #define SVM_VMGEXIT_AP_DESTROY 2 118 + #define SVM_VMGEXIT_HV_FEATURES 0x8000fffd 111 119 #define SVM_VMGEXIT_UNSUPPORTED_EVENT 0x8000ffff 112 120 113 121 /* Exit code reserved for hypervisor/software use */ ··· 226 218 { SVM_VMGEXIT_NMI_COMPLETE, "vmgexit_nmi_complete" }, \ 227 219 { SVM_VMGEXIT_AP_HLT_LOOP, "vmgexit_ap_hlt_loop" }, \ 228 220 { SVM_VMGEXIT_AP_JUMP_TABLE, "vmgexit_ap_jump_table" }, \ 221 + { SVM_VMGEXIT_PSC, "vmgexit_page_state_change" }, \ 222 + { SVM_VMGEXIT_GUEST_REQUEST, "vmgexit_guest_request" }, \ 223 + { SVM_VMGEXIT_EXT_GUEST_REQUEST, "vmgexit_ext_guest_request" }, \ 224 + { SVM_VMGEXIT_AP_CREATION, "vmgexit_ap_creation" }, \ 225 + { SVM_VMGEXIT_HV_FEATURES, "vmgexit_hypervisor_feature" }, \ 229 226 { SVM_EXIT_ERR, "invalid_guest_state" } 230 227 231 228
+272 -81
tools/include/uapi/drm/i915_drm.h
··· 154 154 I915_MOCS_CACHED, 155 155 }; 156 156 157 - /* 157 + /** 158 + * enum drm_i915_gem_engine_class - uapi engine type enumeration 159 + * 158 160 * Different engines serve different roles, and there may be more than one 159 - * engine serving each role. enum drm_i915_gem_engine_class provides a 160 - * classification of the role of the engine, which may be used when requesting 161 - * operations to be performed on a certain subset of engines, or for providing 162 - * information about that group. 161 + * engine serving each role. This enum provides a classification of the role 162 + * of the engine, which may be used when requesting operations to be performed 163 + * on a certain subset of engines, or for providing information about that 164 + * group. 163 165 */ 164 166 enum drm_i915_gem_engine_class { 167 + /** 168 + * @I915_ENGINE_CLASS_RENDER: 169 + * 170 + * Render engines support instructions used for 3D, Compute (GPGPU), 171 + * and programmable media workloads. These instructions fetch data and 172 + * dispatch individual work items to threads that operate in parallel. 173 + * The threads run small programs (called "kernels" or "shaders") on 174 + * the GPU's execution units (EUs). 175 + */ 165 176 I915_ENGINE_CLASS_RENDER = 0, 177 + 178 + /** 179 + * @I915_ENGINE_CLASS_COPY: 180 + * 181 + * Copy engines (also referred to as "blitters") support instructions 182 + * that move blocks of data from one location in memory to another, 183 + * or that fill a specified location of memory with fixed data. 184 + * Copy engines can perform pre-defined logical or bitwise operations 185 + * on the source, destination, or pattern data. 186 + */ 166 187 I915_ENGINE_CLASS_COPY = 1, 188 + 189 + /** 190 + * @I915_ENGINE_CLASS_VIDEO: 191 + * 192 + * Video engines (also referred to as "bit stream decode" (BSD) or 193 + * "vdbox") support instructions that perform fixed-function media 194 + * decode and encode. 
195 + */ 167 196 I915_ENGINE_CLASS_VIDEO = 2, 197 + 198 + /** 199 + * @I915_ENGINE_CLASS_VIDEO_ENHANCE: 200 + * 201 + * Video enhancement engines (also referred to as "vebox") support 202 + * instructions related to image enhancement. 203 + */ 168 204 I915_ENGINE_CLASS_VIDEO_ENHANCE = 3, 169 205 170 - /* should be kept compact */ 206 + /** 207 + * @I915_ENGINE_CLASS_COMPUTE: 208 + * 209 + * Compute engines support a subset of the instructions available 210 + * on render engines: compute engines support Compute (GPGPU) and 211 + * programmable media workloads, but do not support the 3D pipeline. 212 + */ 213 + I915_ENGINE_CLASS_COMPUTE = 4, 171 214 215 + /* Values in this enum should be kept compact. */ 216 + 217 + /** 218 + * @I915_ENGINE_CLASS_INVALID: 219 + * 220 + * Placeholder value to represent an invalid engine class assignment. 221 + */ 172 222 I915_ENGINE_CLASS_INVALID = -1 173 223 }; 174 224 175 - /* 225 + /** 226 + * struct i915_engine_class_instance - Engine class/instance identifier 227 + * 176 228 * There may be more than one engine fulfilling any role within the system. 177 229 * Each engine of a class is given a unique instance number and therefore 178 230 * any engine can be specified by its class:instance tuplet. APIs that allow ··· 232 180 * for this identification. 233 181 */ 234 182 struct i915_engine_class_instance { 235 - __u16 engine_class; /* see enum drm_i915_gem_engine_class */ 236 - __u16 engine_instance; 183 + /** 184 + * @engine_class: 185 + * 186 + * Engine class from enum drm_i915_gem_engine_class 187 + */ 188 + __u16 engine_class; 237 189 #define I915_ENGINE_CLASS_INVALID_NONE -1 238 190 #define I915_ENGINE_CLASS_INVALID_VIRTUAL -2 191 + 192 + /** 193 + * @engine_instance: 194 + * 195 + * Engine instance. 
196 + */ 197 + __u16 engine_instance; 239 198 }; 240 199 241 200 /** ··· 2720 2657 DRM_I915_PERF_RECORD_MAX /* non-ABI */ 2721 2658 }; 2722 2659 2723 - /* 2660 + /** 2661 + * struct drm_i915_perf_oa_config 2662 + * 2724 2663 * Structure to upload perf dynamic configuration into the kernel. 2725 2664 */ 2726 2665 struct drm_i915_perf_oa_config { 2727 - /** String formatted like "%08x-%04x-%04x-%04x-%012x" */ 2666 + /** 2667 + * @uuid: 2668 + * 2669 + * String formatted like "%\08x-%\04x-%\04x-%\04x-%\012x" 2670 + */ 2728 2671 char uuid[36]; 2729 2672 2673 + /** 2674 + * @n_mux_regs: 2675 + * 2676 + * Number of mux regs in &mux_regs_ptr. 2677 + */ 2730 2678 __u32 n_mux_regs; 2679 + 2680 + /** 2681 + * @n_boolean_regs: 2682 + * 2683 + * Number of boolean regs in &boolean_regs_ptr. 2684 + */ 2731 2685 __u32 n_boolean_regs; 2686 + 2687 + /** 2688 + * @n_flex_regs: 2689 + * 2690 + * Number of flex regs in &flex_regs_ptr. 2691 + */ 2732 2692 __u32 n_flex_regs; 2733 2693 2734 - /* 2735 - * These fields are pointers to tuples of u32 values (register address, 2736 - * value). For example the expected length of the buffer pointed by 2737 - * mux_regs_ptr is (2 * sizeof(u32) * n_mux_regs). 2694 + /** 2695 + * @mux_regs_ptr: 2696 + * 2697 + * Pointer to tuples of u32 values (register address, value) for mux 2698 + * registers. Expected length of buffer is (2 * sizeof(u32) * 2699 + * &n_mux_regs). 2738 2700 */ 2739 2701 __u64 mux_regs_ptr; 2702 + 2703 + /** 2704 + * @boolean_regs_ptr: 2705 + * 2706 + * Pointer to tuples of u32 values (register address, value) for mux 2707 + * registers. Expected length of buffer is (2 * sizeof(u32) * 2708 + * &n_boolean_regs). 2709 + */ 2740 2710 __u64 boolean_regs_ptr; 2711 + 2712 + /** 2713 + * @flex_regs_ptr: 2714 + * 2715 + * Pointer to tuples of u32 values (register address, value) for mux 2716 + * registers. Expected length of buffer is (2 * sizeof(u32) * 2717 + * &n_flex_regs). 
2718 + */ 2741 2719 __u64 flex_regs_ptr; 2742 2720 }; 2743 2721 ··· 2789 2685 * @data_ptr is also depends on the specific @query_id. 2790 2686 */ 2791 2687 struct drm_i915_query_item { 2792 - /** @query_id: The id for this query */ 2688 + /** 2689 + * @query_id: 2690 + * 2691 + * The id for this query. Currently accepted query IDs are: 2692 + * - %DRM_I915_QUERY_TOPOLOGY_INFO (see struct drm_i915_query_topology_info) 2693 + * - %DRM_I915_QUERY_ENGINE_INFO (see struct drm_i915_engine_info) 2694 + * - %DRM_I915_QUERY_PERF_CONFIG (see struct drm_i915_query_perf_config) 2695 + * - %DRM_I915_QUERY_MEMORY_REGIONS (see struct drm_i915_query_memory_regions) 2696 + * - %DRM_I915_QUERY_HWCONFIG_BLOB (see `GuC HWCONFIG blob uAPI`) 2697 + * - %DRM_I915_QUERY_GEOMETRY_SUBSLICES (see struct drm_i915_query_topology_info) 2698 + */ 2793 2699 __u64 query_id; 2794 - #define DRM_I915_QUERY_TOPOLOGY_INFO 1 2795 - #define DRM_I915_QUERY_ENGINE_INFO 2 2796 - #define DRM_I915_QUERY_PERF_CONFIG 3 2797 - #define DRM_I915_QUERY_MEMORY_REGIONS 4 2700 + #define DRM_I915_QUERY_TOPOLOGY_INFO 1 2701 + #define DRM_I915_QUERY_ENGINE_INFO 2 2702 + #define DRM_I915_QUERY_PERF_CONFIG 3 2703 + #define DRM_I915_QUERY_MEMORY_REGIONS 4 2704 + #define DRM_I915_QUERY_HWCONFIG_BLOB 5 2705 + #define DRM_I915_QUERY_GEOMETRY_SUBSLICES 6 2798 2706 /* Must be kept compact -- no holes and well documented */ 2799 2707 2800 2708 /** ··· 2822 2706 /** 2823 2707 * @flags: 2824 2708 * 2825 - * When query_id == DRM_I915_QUERY_TOPOLOGY_INFO, must be 0. 2709 + * When &query_id == %DRM_I915_QUERY_TOPOLOGY_INFO, must be 0. 
2826 2710 * 2827 - * When query_id == DRM_I915_QUERY_PERF_CONFIG, must be one of the 2711 + * When &query_id == %DRM_I915_QUERY_PERF_CONFIG, must be one of the 2828 2712 * following: 2829 2713 * 2830 - * - DRM_I915_QUERY_PERF_CONFIG_LIST 2831 - * - DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID 2832 - * - DRM_I915_QUERY_PERF_CONFIG_FOR_UUID 2714 + * - %DRM_I915_QUERY_PERF_CONFIG_LIST 2715 + * - %DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID 2716 + * - %DRM_I915_QUERY_PERF_CONFIG_FOR_UUID 2717 + * 2718 + * When &query_id == %DRM_I915_QUERY_GEOMETRY_SUBSLICES must contain 2719 + * a struct i915_engine_class_instance that references a render engine. 2833 2720 */ 2834 2721 __u32 flags; 2835 2722 #define DRM_I915_QUERY_PERF_CONFIG_LIST 1 ··· 2890 2771 __u64 items_ptr; 2891 2772 }; 2892 2773 2893 - /* 2894 - * Data written by the kernel with query DRM_I915_QUERY_TOPOLOGY_INFO : 2774 + /** 2775 + * struct drm_i915_query_topology_info 2895 2776 * 2896 - * data: contains the 3 pieces of information : 2897 - * 2898 - * - the slice mask with one bit per slice telling whether a slice is 2899 - * available. The availability of slice X can be queried with the following 2900 - * formula : 2901 - * 2902 - * (data[X / 8] >> (X % 8)) & 1 2903 - * 2904 - * - the subslice mask for each slice with one bit per subslice telling 2905 - * whether a subslice is available. Gen12 has dual-subslices, which are 2906 - * similar to two gen11 subslices. For gen12, this array represents dual- 2907 - * subslices. The availability of subslice Y in slice X can be queried 2908 - * with the following formula : 2909 - * 2910 - * (data[subslice_offset + 2911 - * X * subslice_stride + 2912 - * Y / 8] >> (Y % 8)) & 1 2913 - * 2914 - * - the EU mask for each subslice in each slice with one bit per EU telling 2915 - * whether an EU is available. 
The availability of EU Z in subslice Y in 2916 - * slice X can be queried with the following formula : 2917 - * 2918 - * (data[eu_offset + 2919 - * (X * max_subslices + Y) * eu_stride + 2920 - * Z / 8] >> (Z % 8)) & 1 2777 + * Describes slice/subslice/EU information queried by 2778 + * %DRM_I915_QUERY_TOPOLOGY_INFO 2921 2779 */ 2922 2780 struct drm_i915_query_topology_info { 2923 - /* 2781 + /** 2782 + * @flags: 2783 + * 2924 2784 * Unused for now. Must be cleared to zero. 2925 2785 */ 2926 2786 __u16 flags; 2927 2787 2788 + /** 2789 + * @max_slices: 2790 + * 2791 + * The number of bits used to express the slice mask. 2792 + */ 2928 2793 __u16 max_slices; 2794 + 2795 + /** 2796 + * @max_subslices: 2797 + * 2798 + * The number of bits used to express the subslice mask. 2799 + */ 2929 2800 __u16 max_subslices; 2801 + 2802 + /** 2803 + * @max_eus_per_subslice: 2804 + * 2805 + * The number of bits in the EU mask that correspond to a single 2806 + * subslice's EUs. 2807 + */ 2930 2808 __u16 max_eus_per_subslice; 2931 2809 2932 - /* 2810 + /** 2811 + * @subslice_offset: 2812 + * 2933 2813 * Offset in data[] at which the subslice masks are stored. 2934 2814 */ 2935 2815 __u16 subslice_offset; 2936 2816 2937 - /* 2817 + /** 2818 + * @subslice_stride: 2819 + * 2938 2820 * Stride at which each of the subslice masks for each slice are 2939 2821 * stored. 2940 2822 */ 2941 2823 __u16 subslice_stride; 2942 2824 2943 - /* 2825 + /** 2826 + * @eu_offset: 2827 + * 2944 2828 * Offset in data[] at which the EU masks are stored. 2945 2829 */ 2946 2830 __u16 eu_offset; 2947 2831 2948 - /* 2832 + /** 2833 + * @eu_stride: 2834 + * 2949 2835 * Stride at which each of the EU masks for each subslice are stored. 2950 2836 */ 2951 2837 __u16 eu_stride; 2952 2838 2839 + /** 2840 + * @data: 2841 + * 2842 + * Contains 3 pieces of information : 2843 + * 2844 + * - The slice mask with one bit per slice telling whether a slice is 2845 + * available. 
The availability of slice X can be queried with the 2846 + * following formula : 2847 + * 2848 + * .. code:: c 2849 + * 2850 + * (data[X / 8] >> (X % 8)) & 1 2851 + * 2852 + * Starting with Xe_HP platforms, Intel hardware no longer has 2853 + * traditional slices so i915 will always report a single slice 2854 + * (hardcoded slicemask = 0x1) which contains all of the platform's 2855 + * subslices. I.e., the mask here does not reflect any of the newer 2856 + * hardware concepts such as "gslices" or "cslices" since userspace 2857 + * is capable of inferring those from the subslice mask. 2858 + * 2859 + * - The subslice mask for each slice with one bit per subslice telling 2860 + * whether a subslice is available. Starting with Gen12 we use the 2861 + * term "subslice" to refer to what the hardware documentation 2862 + * describes as a "dual-subslices." The availability of subslice Y 2863 + * in slice X can be queried with the following formula : 2864 + * 2865 + * .. code:: c 2866 + * 2867 + * (data[subslice_offset + X * subslice_stride + Y / 8] >> (Y % 8)) & 1 2868 + * 2869 + * - The EU mask for each subslice in each slice, with one bit per EU 2870 + * telling whether an EU is available. The availability of EU Z in 2871 + * subslice Y in slice X can be queried with the following formula : 2872 + * 2873 + * .. code:: c 2874 + * 2875 + * (data[eu_offset + 2876 + * (X * max_subslices + Y) * eu_stride + 2877 + * Z / 8 2878 + * ] >> (Z % 8)) & 1 2879 + */ 2953 2880 __u8 data[]; 2954 2881 }; 2955 2882 ··· 3116 2951 struct drm_i915_engine_info engines[]; 3117 2952 }; 3118 2953 3119 - /* 3120 - * Data written by the kernel with query DRM_I915_QUERY_PERF_CONFIG. 2954 + /** 2955 + * struct drm_i915_query_perf_config 2956 + * 2957 + * Data written by the kernel with query %DRM_I915_QUERY_PERF_CONFIG and 2958 + * %DRM_I915_QUERY_GEOMETRY_SUBSLICES. 
3121 2959 */ 3122 2960 struct drm_i915_query_perf_config { 3123 2961 union { 3124 - /* 3125 - * When query_item.flags == DRM_I915_QUERY_PERF_CONFIG_LIST, i915 sets 3126 - * this fields to the number of configurations available. 2962 + /** 2963 + * @n_configs: 2964 + * 2965 + * When &drm_i915_query_item.flags == 2966 + * %DRM_I915_QUERY_PERF_CONFIG_LIST, i915 sets this fields to 2967 + * the number of configurations available. 3127 2968 */ 3128 2969 __u64 n_configs; 3129 2970 3130 - /* 3131 - * When query_id == DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_ID, 3132 - * i915 will use the value in this field as configuration 3133 - * identifier to decide what data to write into config_ptr. 2971 + /** 2972 + * @config: 2973 + * 2974 + * When &drm_i915_query_item.flags == 2975 + * %DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_ID, i915 will use the 2976 + * value in this field as configuration identifier to decide 2977 + * what data to write into config_ptr. 3134 2978 */ 3135 2979 __u64 config; 3136 2980 3137 - /* 3138 - * When query_id == DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID, 3139 - * i915 will use the value in this field as configuration 3140 - * identifier to decide what data to write into config_ptr. 2981 + /** 2982 + * @uuid: 2983 + * 2984 + * When &drm_i915_query_item.flags == 2985 + * %DRM_I915_QUERY_PERF_CONFIG_DATA_FOR_UUID, i915 will use the 2986 + * value in this field as configuration identifier to decide 2987 + * what data to write into config_ptr. 3141 2988 * 3142 2989 * String formatted like "%08x-%04x-%04x-%04x-%012x" 3143 2990 */ 3144 2991 char uuid[36]; 3145 2992 }; 3146 2993 3147 - /* 2994 + /** 2995 + * @flags: 2996 + * 3148 2997 * Unused for now. Must be cleared to zero. 3149 2998 */ 3150 2999 __u32 flags; 3151 3000 3152 - /* 3153 - * When query_item.flags == DRM_I915_QUERY_PERF_CONFIG_LIST, i915 will 3154 - * write an array of __u64 of configuration identifiers. 
3001 + /** 3002 + * @data: 3155 3003 * 3156 - * When query_item.flags == DRM_I915_QUERY_PERF_CONFIG_DATA, i915 will 3157 - * write a struct drm_i915_perf_oa_config. If the following fields of 3158 - * drm_i915_perf_oa_config are set not set to 0, i915 will write into 3159 - * the associated pointers the values of submitted when the 3004 + * When &drm_i915_query_item.flags == %DRM_I915_QUERY_PERF_CONFIG_LIST, 3005 + * i915 will write an array of __u64 of configuration identifiers. 3006 + * 3007 + * When &drm_i915_query_item.flags == %DRM_I915_QUERY_PERF_CONFIG_DATA, 3008 + * i915 will write a struct drm_i915_perf_oa_config. If the following 3009 + * fields of struct drm_i915_perf_oa_config are not set to 0, i915 will 3010 + * write into the associated pointers the values of submitted when the 3160 3011 * configuration was created : 3161 3012 * 3162 - * - n_mux_regs 3163 - * - n_boolean_regs 3164 - * - n_flex_regs 3013 + * - &drm_i915_perf_oa_config.n_mux_regs 3014 + * - &drm_i915_perf_oa_config.n_boolean_regs 3015 + * - &drm_i915_perf_oa_config.n_flex_regs 3165 3016 */ 3166 3017 __u8 data[]; 3167 3018 }; ··· 3314 3133 /** @regions: Info about each supported region */ 3315 3134 struct drm_i915_memory_region_info regions[]; 3316 3135 }; 3136 + 3137 + /** 3138 + * DOC: GuC HWCONFIG blob uAPI 3139 + * 3140 + * The GuC produces a blob with information about the current device. 3141 + * i915 reads this blob from GuC and makes it available via this uAPI. 3142 + * 3143 + * The format and meaning of the blob content are documented in the 3144 + * Programmer's Reference Manual. 3145 + */ 3317 3146 3318 3147 /** 3319 3148 * struct drm_i915_gem_create_ext - Existing gem_create behaviour, with added
+9
tools/include/uapi/linux/prctl.h
··· 272 272 # define PR_SCHED_CORE_SCOPE_THREAD_GROUP 1 273 273 # define PR_SCHED_CORE_SCOPE_PROCESS_GROUP 2 274 274 275 + /* arm64 Scalable Matrix Extension controls */ 276 + /* Flag values must be in sync with SVE versions */ 277 + #define PR_SME_SET_VL 63 /* set task vector length */ 278 + # define PR_SME_SET_VL_ONEXEC (1 << 18) /* defer effect until exec */ 279 + #define PR_SME_GET_VL 64 /* get task vector length */ 280 + /* Bits common to PR_SME_SET_VL and PR_SME_GET_VL */ 281 + # define PR_SME_VL_LEN_MASK 0xffff 282 + # define PR_SME_VL_INHERIT (1 << 17) /* inherit across exec */ 283 + 275 284 #define PR_SET_VMA 0x53564d41 276 285 # define PR_SET_VMA_ANON_NAME 0 277 286
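The new PR_SME_* constants pack a vector length and behaviour flags into one prctl value. A minimal Python sketch (not kernel code) of how the masks above decode a `PR_SME_GET_VL` result:

```python
# Decoding a prctl(PR_SME_GET_VL) status word using the masks
# defined in the hunk above (sketch only, not kernel code).
PR_SME_VL_LEN_MASK = 0xffff       # vector length in bytes
PR_SME_VL_INHERIT = 1 << 17       # inherit across exec
PR_SME_SET_VL_ONEXEC = 1 << 18    # defer effect until exec (set path only)

def decode_sme_vl(status: int) -> dict:
    """Split a PR_SME_GET_VL result into its fields."""
    return {
        "vl": status & PR_SME_VL_LEN_MASK,
        "inherit": bool(status & PR_SME_VL_INHERIT),
    }

print(decode_sme_vl(0x20040))  # {'vl': 64, 'inherit': True}
```

As the comment in the hunk notes, these flag values deliberately mirror the existing SVE prctl layout.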
+20 -6
tools/include/uapi/linux/vhost.h
··· 89 89 90 90 /* Set or get vhost backend capability */ 91 91 92 - /* Use message type V2 */ 93 - #define VHOST_BACKEND_F_IOTLB_MSG_V2 0x1 94 - /* IOTLB can accept batching hints */ 95 - #define VHOST_BACKEND_F_IOTLB_BATCH 0x2 96 - 97 92 #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64) 98 93 #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64) 99 94 ··· 145 150 /* Get the valid iova range */ 146 151 #define VHOST_VDPA_GET_IOVA_RANGE _IOR(VHOST_VIRTIO, 0x78, \ 147 152 struct vhost_vdpa_iova_range) 148 - 149 153 /* Get the config size */ 150 154 #define VHOST_VDPA_GET_CONFIG_SIZE _IOR(VHOST_VIRTIO, 0x79, __u32) 151 155 152 156 /* Get the count of all virtqueues */ 153 157 #define VHOST_VDPA_GET_VQS_COUNT _IOR(VHOST_VIRTIO, 0x80, __u32) 158 + 159 + /* Get the number of virtqueue groups. */ 160 + #define VHOST_VDPA_GET_GROUP_NUM _IOR(VHOST_VIRTIO, 0x81, __u32) 161 + 162 + /* Get the number of address spaces. */ 163 + #define VHOST_VDPA_GET_AS_NUM _IOR(VHOST_VIRTIO, 0x7A, unsigned int) 164 + 165 + /* Get the group for a virtqueue: read index, write group in num, 166 + * The virtqueue index is stored in the index field of 167 + * vhost_vring_state. The group for this specific virtqueue is 168 + * returned via num field of vhost_vring_state. 169 + */ 170 + #define VHOST_VDPA_GET_VRING_GROUP _IOWR(VHOST_VIRTIO, 0x7B, \ 171 + struct vhost_vring_state) 172 + /* Set the ASID for a virtqueue group. The group index is stored in 173 + * the index field of vhost_vring_state, the ASID associated with this 174 + * group is stored at num field of vhost_vring_state. 175 + */ 176 + #define VHOST_VDPA_SET_GROUP_ASID _IOW(VHOST_VIRTIO, 0x7C, \ 177 + struct vhost_vring_state) 154 178 155 179 #endif
+2 -1
tools/kvm/kvm_stat/kvm_stat
··· 1646 1646 .format(values)) 1647 1647 if len(pids) > 1: 1648 1648 sys.exit('Error: Multiple processes found (pids: {}). Use "-p"' 1649 - ' to specify the desired pid'.format(" ".join(pids))) 1649 + ' to specify the desired pid' 1650 + .format(" ".join(map(str, pids)))) 1650 1651 namespace.pid = pids[0] 1651 1652 1652 1653 argparser = argparse.ArgumentParser(description=description_text,
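The kvm_stat fix above wraps the pid list in `map(str, ...)` because `str.join()` only accepts strings; joining integer pids raises `TypeError`. A self-contained illustration:

```python
# Why the fix maps pids through str(): ''.join() requires strings,
# so joining a list of integer pids raises TypeError.
pids = [1234, 5678]
try:
    " ".join(pids)                     # old code path: fails for ints
except TypeError:
    joined = " ".join(map(str, pids))  # fixed code path
print(joined)  # 1234 5678
```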
+12 -5
tools/lib/perf/evsel.c
··· 149 149 int fd, group_fd, *evsel_fd; 150 150 151 151 evsel_fd = FD(evsel, idx, thread); 152 - if (evsel_fd == NULL) 153 - return -EINVAL; 152 + if (evsel_fd == NULL) { 153 + err = -EINVAL; 154 + goto out; 155 + } 154 156 155 157 err = get_group_fd(evsel, idx, thread, &group_fd); 156 158 if (err < 0) 157 - return err; 159 + goto out; 158 160 159 161 fd = sys_perf_event_open(&evsel->attr, 160 162 threads->map[thread].pid, 161 163 cpu, group_fd, 0); 162 164 163 - if (fd < 0) 164 - return -errno; 165 + if (fd < 0) { 166 + err = -errno; 167 + goto out; 168 + } 165 169 166 170 *evsel_fd = fd; 167 171 } 168 172 } 173 + out: 174 + if (err) 175 + perf_evsel__close(evsel); 169 176 170 177 return err; 171 178 }
+4 -2
tools/perf/builtin-inject.c
··· 891 891 if (ret < 0) 892 892 return ret; 893 893 pr_debug("%s\n", cmd); 894 - return system(cmd); 894 + ret = system(cmd); 895 + free(cmd); 896 + return ret; 895 897 } 896 898 897 899 static int output_fd(struct perf_inject *inject) ··· 918 916 inject->tool.tracing_data = perf_event__repipe_tracing_data; 919 917 } 920 918 921 - output_data_offset = session->header.data_offset; 919 + output_data_offset = perf_session__data_offset(session->evlist); 922 920 923 921 if (inject->build_id_all) { 924 922 inject->tool.mmap = perf_event__repipe_buildid_mmap;
+2
tools/perf/builtin-stat.c
··· 2586 2586 if (evlist__initialize_ctlfd(evsel_list, stat_config.ctl_fd, stat_config.ctl_fd_ack)) 2587 2587 goto out; 2588 2588 2589 + /* Enable ignoring missing threads when -p option is defined. */ 2590 + evlist__first(evsel_list)->ignore_missing_thread = target.pid; 2589 2591 status = 0; 2590 2592 for (run_idx = 0; forever || run_idx < stat_config.run_count; run_idx++) { 2591 2593 if (stat_config.run_count != 1 && verbose > 0)
+14 -2
tools/perf/tests/bp_account.c
··· 151 151 static int detect_share(int wp_cnt, int bp_cnt) 152 152 { 153 153 struct perf_event_attr attr; 154 - int i, fd[wp_cnt + bp_cnt], ret; 154 + int i, *fd = NULL, ret = -1; 155 + 156 + if (wp_cnt + bp_cnt == 0) 157 + return 0; 158 + 159 + fd = malloc(sizeof(int) * (wp_cnt + bp_cnt)); 160 + if (!fd) 161 + return -1; 155 162 156 163 for (i = 0; i < wp_cnt; i++) { 157 164 fd[i] = wp_event((void *)&the_var, &attr); 158 - TEST_ASSERT_VAL("failed to create wp\n", fd[i] != -1); 165 + if (fd[i] == -1) { 166 + pr_err("failed to create wp\n"); 167 + goto out; 168 + } 159 169 } 160 170 161 171 for (; i < (bp_cnt + wp_cnt); i++) { ··· 176 166 177 167 ret = i != (bp_cnt + wp_cnt); 178 168 169 + out: 179 170 while (i--) 180 171 close(fd[i]); 181 172 173 + free(fd); 182 174 return ret; 183 175 } 184 176
+2
tools/perf/tests/expr.c
··· 97 97 ret |= test(ctx, "2.2 > 2.2", 0); 98 98 ret |= test(ctx, "2.2 < 1.1", 0); 99 99 ret |= test(ctx, "1.1 > 2.2", 0); 100 + ret |= test(ctx, "1.1e10 < 1.1e100", 1); 101 + ret |= test(ctx, "1.1e2 > 1.1e-2", 1); 100 102 101 103 if (ret) { 102 104 expr__ctx_free(ctx);
-48
tools/perf/tests/shell/lib/perf_csv_output_lint.py
··· 1 - #!/usr/bin/python 2 - # SPDX-License-Identifier: GPL-2.0 3 - 4 - import argparse 5 - import sys 6 - 7 - # Basic sanity check of perf CSV output as specified in the man page. 8 - # Currently just checks the number of fields per line in output. 9 - 10 - ap = argparse.ArgumentParser() 11 - ap.add_argument('--no-args', action='store_true') 12 - ap.add_argument('--interval', action='store_true') 13 - ap.add_argument('--system-wide-no-aggr', action='store_true') 14 - ap.add_argument('--system-wide', action='store_true') 15 - ap.add_argument('--event', action='store_true') 16 - ap.add_argument('--per-core', action='store_true') 17 - ap.add_argument('--per-thread', action='store_true') 18 - ap.add_argument('--per-die', action='store_true') 19 - ap.add_argument('--per-node', action='store_true') 20 - ap.add_argument('--per-socket', action='store_true') 21 - ap.add_argument('--separator', default=',', nargs='?') 22 - args = ap.parse_args() 23 - 24 - Lines = sys.stdin.readlines() 25 - 26 - def check_csv_output(exp): 27 - for line in Lines: 28 - if 'failed' not in line: 29 - count = line.count(args.separator) 30 - if count != exp: 31 - sys.stdout.write(''.join(Lines)) 32 - raise RuntimeError(f'wrong number of fields. expected {exp} in {line}') 33 - 34 - try: 35 - if args.no_args or args.system_wide or args.event: 36 - expected_items = 6 37 - elif args.interval or args.per_thread or args.system_wide_no_aggr: 38 - expected_items = 7 39 - elif args.per_core or args.per_socket or args.per_node or args.per_die: 40 - expected_items = 8 41 - else: 42 - ap.print_help() 43 - raise RuntimeError('No checking option specified') 44 - check_csv_output(expected_items) 45 - 46 - except: 47 - sys.stdout.write('Test failed for input: ' + ''.join(Lines)) 48 - raise
+45 -24
tools/perf/tests/shell/stat+csv_output.sh
··· 6 6 7 7 set -e 8 8 9 - pythonchecker=$(dirname $0)/lib/perf_csv_output_lint.py 10 - if [ "x$PYTHON" == "x" ] 11 - then 12 - if which python3 > /dev/null 13 - then 14 - PYTHON=python3 15 - elif which python > /dev/null 16 - then 17 - PYTHON=python 18 - else 19 - echo Skipping test, python not detected please set environment variable PYTHON. 20 - exit 2 21 - fi 22 - fi 9 + function commachecker() 10 + { 11 + local -i cnt=0 exp=0 12 + 13 + case "$1" 14 + in "--no-args") exp=6 15 + ;; "--system-wide") exp=6 16 + ;; "--event") exp=6 17 + ;; "--interval") exp=7 18 + ;; "--per-thread") exp=7 19 + ;; "--system-wide-no-aggr") exp=7 20 + [ $(uname -m) = "s390x" ] && exp=6 21 + ;; "--per-core") exp=8 22 + ;; "--per-socket") exp=8 23 + ;; "--per-node") exp=8 24 + ;; "--per-die") exp=8 25 + esac 26 + 27 + while read line 28 + do 29 + # Check for lines beginning with Failed 30 + x=${line:0:6} 31 + [ "$x" = "Failed" ] && continue 32 + 33 + # Count the number of commas 34 + x=$(echo $line | tr -d -c ',') 35 + cnt="${#x}" 36 + # echo $line $cnt 37 + [ "$cnt" -ne "$exp" ] && { 38 + echo "wrong number of fields. expected $exp in $line" 1>&2 39 + exit 1; 40 + } 41 + done 42 + return 0 43 + } 23 44 24 45 # Return true if perf_event_paranoid is > $1 and not running as root. 
25 46 function ParanoidAndNotRoot() ··· 51 30 check_no_args() 52 31 { 53 32 echo -n "Checking CSV output: no args " 54 - perf stat -x, true 2>&1 | $PYTHON $pythonchecker --no-args 33 + perf stat -x, true 2>&1 | commachecker --no-args 55 34 echo "[Success]" 56 35 } 57 36 ··· 63 42 echo "[Skip] paranoid and not root" 64 43 return 65 44 fi 66 - perf stat -x, -a true 2>&1 | $PYTHON $pythonchecker --system-wide 45 + perf stat -x, -a true 2>&1 | commachecker --system-wide 67 46 echo "[Success]" 68 47 } 69 48 ··· 76 55 return 77 56 fi 78 57 echo -n "Checking CSV output: system wide no aggregation " 79 - perf stat -x, -A -a --no-merge true 2>&1 | $PYTHON $pythonchecker --system-wide-no-aggr 58 + perf stat -x, -A -a --no-merge true 2>&1 | commachecker --system-wide-no-aggr 80 59 echo "[Success]" 81 60 } 82 61 83 62 check_interval() 84 63 { 85 64 echo -n "Checking CSV output: interval " 86 - perf stat -x, -I 1000 true 2>&1 | $PYTHON $pythonchecker --interval 65 + perf stat -x, -I 1000 true 2>&1 | commachecker --interval 87 66 echo "[Success]" 88 67 } 89 68 ··· 91 70 check_event() 92 71 { 93 72 echo -n "Checking CSV output: event " 94 - perf stat -x, -e cpu-clock true 2>&1 | $PYTHON $pythonchecker --event 73 + perf stat -x, -e cpu-clock true 2>&1 | commachecker --event 95 74 echo "[Success]" 96 75 } 97 76 ··· 103 82 echo "[Skip] paranoid and not root" 104 83 return 105 84 fi 106 - perf stat -x, --per-core -a true 2>&1 | $PYTHON $pythonchecker --per-core 85 + perf stat -x, --per-core -a true 2>&1 | commachecker --per-core 107 86 echo "[Success]" 108 87 } 109 88 ··· 115 94 echo "[Skip] paranoid and not root" 116 95 return 117 96 fi 118 - perf stat -x, --per-thread -a true 2>&1 | $PYTHON $pythonchecker --per-thread 97 + perf stat -x, --per-thread -a true 2>&1 | commachecker --per-thread 119 98 echo "[Success]" 120 99 } 121 100 ··· 127 106 echo "[Skip] paranoid and not root" 128 107 return 129 108 fi 130 - perf stat -x, --per-die -a true 2>&1 | $PYTHON $pythonchecker --per-die 
109 + perf stat -x, --per-die -a true 2>&1 | commachecker --per-die 131 110 echo "[Success]" 132 111 } 133 112 ··· 139 118 echo "[Skip] paranoid and not root" 140 119 return 141 120 fi 142 - perf stat -x, --per-node -a true 2>&1 | $PYTHON $pythonchecker --per-node 121 + perf stat -x, --per-node -a true 2>&1 | commachecker --per-node 143 122 echo "[Success]" 144 123 } 145 124 ··· 151 130 echo "[Skip] paranoid and not root" 152 131 return 153 132 fi 154 - perf stat -x, --per-socket -a true 2>&1 | $PYTHON $pythonchecker --per-socket 133 + perf stat -x, --per-socket -a true 2>&1 | commachecker --per-socket 155 134 echo "[Success]" 156 135 } 157 136
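The shell `commachecker` above replaces the deleted Python linter: it skips lines beginning with "Failed" and counts separators per line against the expected field count for the aggregation mode. The same check can be sketched in Python:

```python
# Sketch of the commachecker logic: each non-"Failed" line of
# perf stat -x, output must contain exactly `expected` separators.
def check_csv(lines, expected, sep=","):
    for line in lines:
        if line.startswith("Failed"):
            continue
        if line.count(sep) != expected:
            raise ValueError(f"wrong number of fields in {line!r}")

# --no-args mode expects 6 separators (7 fields)
check_csv(["1.00,msec,task-clock,1000000,100.00,1.000,CPUs utilized"], 6)
```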
+1 -1
tools/perf/tests/shell/test_arm_callgraph_fp.sh
··· 43 43 cc $CFLAGS $TEST_PROGRAM_SOURCE -o $TEST_PROGRAM || exit 1 44 44 45 45 # Add a 1 second delay to skip samples that are not in the leaf() function 46 - perf record -o $PERF_DATA --call-graph fp -e cycles//u -D 1000 -- $TEST_PROGRAM 2> /dev/null & 46 + perf record -o $PERF_DATA --call-graph fp -e cycles//u -D 1000 --user-callchains -- $TEST_PROGRAM 2> /dev/null & 47 47 PID=$! 48 48 49 49 echo " + Recording (PID=$PID)..."
+1 -1
tools/perf/tests/topology.c
··· 115 115 * physical_package_id will be set to -1. Hence skip this 116 116 * test if physical_package_id returns -1 for cpu from perf_cpu_map. 117 117 */ 118 - if (strncmp(session->header.env.arch, "powerpc", 7)) { 118 + if (!strncmp(session->header.env.arch, "ppc64le", 7)) { 119 119 if (cpu__get_socket_id(perf_cpu_map__cpu(map, 0)) == -1) 120 120 return TEST_SKIP; 121 121 }
+2 -12
tools/perf/trace/beauty/arch_errno_names.sh
··· 33 33 local arch=$(arch_string "$1") 34 34 local nr name 35 35 36 - cat <<EoFuncBegin 37 - static const char *errno_to_name__$arch(int err) 38 - { 39 - switch (err) { 40 - EoFuncBegin 36 + printf "static const char *errno_to_name__%s(int err)\n{\n\tswitch (err) {\n" $arch 41 37 42 38 while read name nr; do 43 39 printf '\tcase %d: return "%s";\n' $nr $name 44 40 done 45 41 46 - cat <<EoFuncEnd 47 - default: 48 - return "(unknown)"; 49 - } 50 - } 51 - 52 - EoFuncEnd 42 + printf '\tdefault: return "(unknown)";\n\t}\n}\n' 53 43 } 54 44 55 45 process_arch()
+6 -1
tools/perf/trace/beauty/include/linux/socket.h
··· 50 50 struct msghdr { 51 51 void *msg_name; /* ptr to socket address structure */ 52 52 int msg_namelen; /* size of socket address structure */ 53 + 54 + int msg_inq; /* output, data left in socket */ 55 + 53 56 struct iov_iter msg_iter; /* data */ 54 57 55 58 /* ··· 65 62 void __user *msg_control_user; 66 63 }; 67 64 bool msg_control_is_user : 1; 68 - __kernel_size_t msg_controllen; /* ancillary data buffer length */ 65 + bool msg_get_inq : 1;/* return INQ after receive */ 69 66 unsigned int msg_flags; /* flags on received message */ 67 + __kernel_size_t msg_controllen; /* ancillary data buffer length */ 70 68 struct kiocb *msg_iocb; /* ptr to iocb for async requests */ 71 69 }; 72 70 ··· 438 434 extern int __sys_accept4(int fd, struct sockaddr __user *upeer_sockaddr, 439 435 int __user *upeer_addrlen, int flags); 440 436 extern int __sys_socket(int family, int type, int protocol); 437 + extern struct file *__sys_socket_file(int family, int type, int protocol); 441 438 extern int __sys_bind(int fd, struct sockaddr __user *umyaddr, int addrlen); 442 439 extern int __sys_connect_file(struct file *file, struct sockaddr_storage *addr, 443 440 int addrlen, int file_flags);
+8 -14
tools/perf/util/arm-spe.c
··· 387 387 return arm_spe_deliver_synth_event(spe, speq, event, &sample); 388 388 } 389 389 390 - #define SPE_MEM_TYPE (ARM_SPE_L1D_ACCESS | ARM_SPE_L1D_MISS | \ 391 - ARM_SPE_LLC_ACCESS | ARM_SPE_LLC_MISS | \ 392 - ARM_SPE_REMOTE_ACCESS) 393 - 394 - static bool arm_spe__is_memory_event(enum arm_spe_sample_type type) 395 - { 396 - if (type & SPE_MEM_TYPE) 397 - return true; 398 - 399 - return false; 400 - } 401 - 402 390 static u64 arm_spe__synth_data_source(const struct arm_spe_record *record) 403 391 { 404 392 union perf_mem_data_src data_src = { 0 }; 405 393 406 394 if (record->op == ARM_SPE_LD) 407 395 data_src.mem_op = PERF_MEM_OP_LOAD; 408 - else 396 + else if (record->op == ARM_SPE_ST) 409 397 data_src.mem_op = PERF_MEM_OP_STORE; 398 + else 399 + return 0; 410 400 411 401 if (record->type & (ARM_SPE_LLC_ACCESS | ARM_SPE_LLC_MISS)) { 412 402 data_src.mem_lvl = PERF_MEM_LVL_L3; ··· 500 510 return err; 501 511 } 502 512 503 - if (spe->sample_memory && arm_spe__is_memory_event(record->type)) { 513 + /* 514 + * When data_src is zero it means the record is not a memory operation, 515 + * skip to synthesize memory sample for this case. 516 + */ 517 + if (spe->sample_memory && data_src) { 504 518 err = arm_spe__synth_mem_sample(speq, spe->memory_id, data_src); 505 519 if (err) 506 520 return err;
+28
tools/perf/util/build-id.c
··· 872 872 return err; 873 873 } 874 874 875 + static int filename__read_build_id_ns(const char *filename, 876 + struct build_id *bid, 877 + struct nsinfo *nsi) 878 + { 879 + struct nscookie nsc; 880 + int ret; 881 + 882 + nsinfo__mountns_enter(nsi, &nsc); 883 + ret = filename__read_build_id(filename, bid); 884 + nsinfo__mountns_exit(&nsc); 885 + 886 + return ret; 887 + } 888 + 889 + static bool dso__build_id_mismatch(struct dso *dso, const char *name) 890 + { 891 + struct build_id bid; 892 + 893 + if (filename__read_build_id_ns(name, &bid, dso->nsinfo) < 0) 894 + return false; 895 + 896 + return !dso__build_id_equal(dso, &bid); 897 + } 898 + 875 899 static int dso__cache_build_id(struct dso *dso, struct machine *machine, 876 900 void *priv __maybe_unused) 877 901 { ··· 910 886 is_kallsyms = true; 911 887 name = machine->mmap_name; 912 888 } 889 + 890 + if (!is_kallsyms && dso__build_id_mismatch(dso, name)) 891 + return 0; 892 + 913 893 return build_id_cache__add_b(&dso->bid, name, dso->nsinfo, 914 894 is_kallsyms, is_vdso); 915 895 }
+1 -1
tools/perf/util/expr.l
··· 91 91 } 92 92 %} 93 93 94 - number ([0-9]+\.?[0-9]*|[0-9]*\.?[0-9]+) 94 + number ([0-9]+\.?[0-9]*|[0-9]*\.?[0-9]+)(e-?[0-9]+)? 95 95 96 96 sch [-,=] 97 97 spec \\{sch}
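The expr.l change extends the lexer's `number` pattern with an optional exponent suffix so metrics can use scientific notation. The same pattern, expressed as a Python regex, behaves as the new expr.c test cases expect:

```python
import re

# The extended "number" pattern from expr.l above; the trailing
# group accepts an optional exponent like e10 or e-2 (no e+2 form).
NUMBER = re.compile(r'([0-9]+\.?[0-9]*|[0-9]*\.?[0-9]+)(e-?[0-9]+)?$')

for tok in ("1.1e10", "1.1e-2", "2.2", ".5", "42"):
    assert NUMBER.match(tok)

assert float("1.1e10") < float("1.1e100")  # the new expr.c test case
```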
+14
tools/perf/util/header.c
··· 3686 3686 return perf_session__do_write_header(session, evlist, fd, at_exit, NULL); 3687 3687 } 3688 3688 3689 + size_t perf_session__data_offset(const struct evlist *evlist) 3690 + { 3691 + struct evsel *evsel; 3692 + size_t data_offset; 3693 + 3694 + data_offset = sizeof(struct perf_file_header); 3695 + evlist__for_each_entry(evlist, evsel) { 3696 + data_offset += evsel->core.ids * sizeof(u64); 3697 + } 3698 + data_offset += evlist->core.nr_entries * sizeof(struct perf_file_attr); 3699 + 3700 + return data_offset; 3701 + } 3702 + 3689 3703 int perf_session__inject_header(struct perf_session *session, 3690 3704 struct evlist *evlist, 3691 3705 int fd,
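`perf_session__data_offset()` recomputes where sample data starts from the evlist alone: the file header, one `u64` per sample id of each evsel, then one file attr per evsel. A sketch of that arithmetic — the struct sizes here are illustrative placeholders, not the real `sizeof()` values:

```python
# Sketch of the perf_session__data_offset() arithmetic above.
# Struct sizes are hypothetical stand-ins for the real sizeof()s.
FILE_HEADER_SIZE = 104   # placeholder for sizeof(struct perf_file_header)
FILE_ATTR_SIZE = 136     # placeholder for sizeof(struct perf_file_attr)

def data_offset(evsel_id_counts):
    off = FILE_HEADER_SIZE
    off += sum(n_ids * 8 for n_ids in evsel_id_counts)  # one u64 per id
    off += len(evsel_id_counts) * FILE_ATTR_SIZE        # one attr per evsel
    return off

print(data_offset([2, 2]))  # two events with two sample ids each
```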
+2
tools/perf/util/header.h
··· 136 136 int fd, 137 137 struct feat_copier *fc); 138 138 139 + size_t perf_session__data_offset(const struct evlist *evlist); 140 + 139 141 void perf_header__set_feat(struct perf_header *header, int feat); 140 142 void perf_header__clear_feat(struct perf_header *header, int feat); 141 143 bool perf_header__has_feat(const struct perf_header *header, int feat);
+9
tools/perf/util/metricgroup.c
··· 1372 1372 1373 1373 *out_evlist = NULL; 1374 1374 if (!metric_no_merge || hashmap__size(ids->ids) == 0) { 1375 + bool added_event = false; 1375 1376 int i; 1376 1377 /* 1377 1378 * We may fail to share events between metrics because a tool ··· 1394 1393 if (!tmp) 1395 1394 return -ENOMEM; 1396 1395 ids__insert(ids->ids, tmp); 1396 + added_event = true; 1397 1397 } 1398 + } 1399 + if (!added_event && hashmap__size(ids->ids) == 0) { 1400 + char *tmp = strdup("duration_time"); 1401 + 1402 + if (!tmp) 1403 + return -ENOMEM; 1404 + ids__insert(ids->ids, tmp); 1398 1405 } 1399 1406 } 1400 1407 ret = metricgroup__build_event_string(&events, ids, modifier,
+1 -1
tools/perf/util/unwind-libunwind-local.c
··· 174 174 Elf *elf; 175 175 GElf_Ehdr ehdr; 176 176 GElf_Shdr shdr; 177 - int ret; 177 + int ret = -1; 178 178 179 179 elf = elf_begin(fd, PERF_ELF_C_READ_MMAP, NULL); 180 180 if (elf == NULL)
+39 -39
tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
··· 121 121 }) 122 122 123 123 GET_ADDR("bpf_fentry_test1", addrs[0]); 124 - GET_ADDR("bpf_fentry_test2", addrs[1]); 125 - GET_ADDR("bpf_fentry_test3", addrs[2]); 126 - GET_ADDR("bpf_fentry_test4", addrs[3]); 127 - GET_ADDR("bpf_fentry_test5", addrs[4]); 128 - GET_ADDR("bpf_fentry_test6", addrs[5]); 129 - GET_ADDR("bpf_fentry_test7", addrs[6]); 124 + GET_ADDR("bpf_fentry_test3", addrs[1]); 125 + GET_ADDR("bpf_fentry_test4", addrs[2]); 126 + GET_ADDR("bpf_fentry_test5", addrs[3]); 127 + GET_ADDR("bpf_fentry_test6", addrs[4]); 128 + GET_ADDR("bpf_fentry_test7", addrs[5]); 129 + GET_ADDR("bpf_fentry_test2", addrs[6]); 130 130 GET_ADDR("bpf_fentry_test8", addrs[7]); 131 131 132 132 #undef GET_ADDR 133 133 134 - cookies[0] = 1; 135 - cookies[1] = 2; 136 - cookies[2] = 3; 137 - cookies[3] = 4; 138 - cookies[4] = 5; 139 - cookies[5] = 6; 140 - cookies[6] = 7; 141 - cookies[7] = 8; 134 + cookies[0] = 1; /* bpf_fentry_test1 */ 135 + cookies[1] = 2; /* bpf_fentry_test3 */ 136 + cookies[2] = 3; /* bpf_fentry_test4 */ 137 + cookies[3] = 4; /* bpf_fentry_test5 */ 138 + cookies[4] = 5; /* bpf_fentry_test6 */ 139 + cookies[5] = 6; /* bpf_fentry_test7 */ 140 + cookies[6] = 7; /* bpf_fentry_test2 */ 141 + cookies[7] = 8; /* bpf_fentry_test8 */ 142 142 143 143 opts.kprobe_multi.addrs = (const unsigned long *) &addrs; 144 144 opts.kprobe_multi.cnt = ARRAY_SIZE(addrs); ··· 149 149 if (!ASSERT_GE(link1_fd, 0, "link1_fd")) 150 150 goto cleanup; 151 151 152 - cookies[0] = 8; 153 - cookies[1] = 7; 154 - cookies[2] = 6; 155 - cookies[3] = 5; 156 - cookies[4] = 4; 157 - cookies[5] = 3; 158 - cookies[6] = 2; 159 - cookies[7] = 1; 152 + cookies[0] = 8; /* bpf_fentry_test1 */ 153 + cookies[1] = 7; /* bpf_fentry_test3 */ 154 + cookies[2] = 6; /* bpf_fentry_test4 */ 155 + cookies[3] = 5; /* bpf_fentry_test5 */ 156 + cookies[4] = 4; /* bpf_fentry_test6 */ 157 + cookies[5] = 3; /* bpf_fentry_test7 */ 158 + cookies[6] = 2; /* bpf_fentry_test2 */ 159 + cookies[7] = 1; /* bpf_fentry_test8 */ 160 160 
161 161 opts.kprobe_multi.flags = BPF_F_KPROBE_MULTI_RETURN; 162 162 prog_fd = bpf_program__fd(skel->progs.test_kretprobe); ··· 181 181 struct kprobe_multi *skel = NULL; 182 182 const char *syms[8] = { 183 183 "bpf_fentry_test1", 184 - "bpf_fentry_test2", 185 184 "bpf_fentry_test3", 186 185 "bpf_fentry_test4", 187 186 "bpf_fentry_test5", 188 187 "bpf_fentry_test6", 189 188 "bpf_fentry_test7", 189 + "bpf_fentry_test2", 190 190 "bpf_fentry_test8", 191 191 }; 192 192 __u64 cookies[8]; ··· 198 198 skel->bss->pid = getpid(); 199 199 skel->bss->test_cookie = true; 200 200 201 - cookies[0] = 1; 202 - cookies[1] = 2; 203 - cookies[2] = 3; 204 - cookies[3] = 4; 205 - cookies[4] = 5; 206 - cookies[5] = 6; 207 - cookies[6] = 7; 208 - cookies[7] = 8; 201 + cookies[0] = 1; /* bpf_fentry_test1 */ 202 + cookies[1] = 2; /* bpf_fentry_test3 */ 203 + cookies[2] = 3; /* bpf_fentry_test4 */ 204 + cookies[3] = 4; /* bpf_fentry_test5 */ 205 + cookies[4] = 5; /* bpf_fentry_test6 */ 206 + cookies[5] = 6; /* bpf_fentry_test7 */ 207 + cookies[6] = 7; /* bpf_fentry_test2 */ 208 + cookies[7] = 8; /* bpf_fentry_test8 */ 209 209 210 210 opts.syms = syms; 211 211 opts.cnt = ARRAY_SIZE(syms); ··· 216 216 if (!ASSERT_OK_PTR(link1, "bpf_program__attach_kprobe_multi_opts")) 217 217 goto cleanup; 218 218 219 - cookies[0] = 8; 220 - cookies[1] = 7; 221 - cookies[2] = 6; 222 - cookies[3] = 5; 223 - cookies[4] = 4; 224 - cookies[5] = 3; 225 - cookies[6] = 2; 226 - cookies[7] = 1; 219 + cookies[0] = 8; /* bpf_fentry_test1 */ 220 + cookies[1] = 7; /* bpf_fentry_test3 */ 221 + cookies[2] = 6; /* bpf_fentry_test4 */ 222 + cookies[3] = 5; /* bpf_fentry_test5 */ 223 + cookies[4] = 4; /* bpf_fentry_test6 */ 224 + cookies[5] = 3; /* bpf_fentry_test7 */ 225 + cookies[6] = 2; /* bpf_fentry_test2 */ 226 + cookies[7] = 1; /* bpf_fentry_test8 */ 227 227 228 228 opts.retprobe = true; 229 229
+3
tools/testing/selftests/bpf/prog_tests/kprobe_multi_test.c
··· 364 364 continue; 365 365 if (!strncmp(name, "rcu_", 4)) 366 366 continue; 367 + if (!strncmp(name, "__ftrace_invalid_address__", 368 + sizeof("__ftrace_invalid_address__") - 1)) 369 + continue; 367 370 err = hashmap__add(map, name, NULL); 368 371 if (err) { 369 372 free(name);
+55
tools/testing/selftests/bpf/prog_tests/tailcalls.c
··· 831 831 bpf_object__close(obj); 832 832 } 833 833 834 + #include "tailcall_bpf2bpf6.skel.h" 835 + 836 + /* Tail call counting works even when there is data on stack which is 837 + * not aligned to 8 bytes. 838 + */ 839 + static void test_tailcall_bpf2bpf_6(void) 840 + { 841 + struct tailcall_bpf2bpf6 *obj; 842 + int err, map_fd, prog_fd, main_fd, data_fd, i, val; 843 + LIBBPF_OPTS(bpf_test_run_opts, topts, 844 + .data_in = &pkt_v4, 845 + .data_size_in = sizeof(pkt_v4), 846 + .repeat = 1, 847 + ); 848 + 849 + obj = tailcall_bpf2bpf6__open_and_load(); 850 + if (!ASSERT_OK_PTR(obj, "open and load")) 851 + return; 852 + 853 + main_fd = bpf_program__fd(obj->progs.entry); 854 + if (!ASSERT_GE(main_fd, 0, "entry prog fd")) 855 + goto out; 856 + 857 + map_fd = bpf_map__fd(obj->maps.jmp_table); 858 + if (!ASSERT_GE(map_fd, 0, "jmp_table map fd")) 859 + goto out; 860 + 861 + prog_fd = bpf_program__fd(obj->progs.classifier_0); 862 + if (!ASSERT_GE(prog_fd, 0, "classifier_0 prog fd")) 863 + goto out; 864 + 865 + i = 0; 866 + err = bpf_map_update_elem(map_fd, &i, &prog_fd, BPF_ANY); 867 + if (!ASSERT_OK(err, "jmp_table map update")) 868 + goto out; 869 + 870 + err = bpf_prog_test_run_opts(main_fd, &topts); 871 + ASSERT_OK(err, "entry prog test run"); 872 + ASSERT_EQ(topts.retval, 0, "tailcall retval"); 873 + 874 + data_fd = bpf_map__fd(obj->maps.bss); 875 + if (!ASSERT_GE(map_fd, 0, "bss map fd")) 876 + goto out; 877 + 878 + i = 0; 879 + err = bpf_map_lookup_elem(data_fd, &i, &val); 880 + ASSERT_OK(err, "bss map lookup"); 881 + ASSERT_EQ(val, 1, "done flag is set"); 882 + 883 + out: 884 + tailcall_bpf2bpf6__destroy(obj); 885 + } 886 + 834 887 void test_tailcalls(void) 835 888 { 836 889 if (test__start_subtest("tailcall_1")) ··· 908 855 test_tailcall_bpf2bpf_4(false); 909 856 if (test__start_subtest("tailcall_bpf2bpf_5")) 910 857 test_tailcall_bpf2bpf_4(true); 858 + if (test__start_subtest("tailcall_bpf2bpf_6")) 859 + test_tailcall_bpf2bpf_6(); 911 860 }
+12 -12
tools/testing/selftests/bpf/progs/kprobe_multi.c
··· 54 54 55 55 if (is_return) { 56 56 SET(kretprobe_test1_result, &bpf_fentry_test1, 8); 57 - SET(kretprobe_test2_result, &bpf_fentry_test2, 7); 58 - SET(kretprobe_test3_result, &bpf_fentry_test3, 6); 59 - SET(kretprobe_test4_result, &bpf_fentry_test4, 5); 60 - SET(kretprobe_test5_result, &bpf_fentry_test5, 4); 61 - SET(kretprobe_test6_result, &bpf_fentry_test6, 3); 62 - SET(kretprobe_test7_result, &bpf_fentry_test7, 2); 57 + SET(kretprobe_test2_result, &bpf_fentry_test2, 2); 58 + SET(kretprobe_test3_result, &bpf_fentry_test3, 7); 59 + SET(kretprobe_test4_result, &bpf_fentry_test4, 6); 60 + SET(kretprobe_test5_result, &bpf_fentry_test5, 5); 61 + SET(kretprobe_test6_result, &bpf_fentry_test6, 4); 62 + SET(kretprobe_test7_result, &bpf_fentry_test7, 3); 63 63 SET(kretprobe_test8_result, &bpf_fentry_test8, 1); 64 64 } else { 65 65 SET(kprobe_test1_result, &bpf_fentry_test1, 1); 66 - SET(kprobe_test2_result, &bpf_fentry_test2, 2); 67 - SET(kprobe_test3_result, &bpf_fentry_test3, 3); 68 - SET(kprobe_test4_result, &bpf_fentry_test4, 4); 69 - SET(kprobe_test5_result, &bpf_fentry_test5, 5); 70 - SET(kprobe_test6_result, &bpf_fentry_test6, 6); 71 - SET(kprobe_test7_result, &bpf_fentry_test7, 7); 66 + SET(kprobe_test2_result, &bpf_fentry_test2, 7); 67 + SET(kprobe_test3_result, &bpf_fentry_test3, 2); 68 + SET(kprobe_test4_result, &bpf_fentry_test4, 3); 69 + SET(kprobe_test5_result, &bpf_fentry_test5, 4); 70 + SET(kprobe_test6_result, &bpf_fentry_test6, 5); 71 + SET(kprobe_test7_result, &bpf_fentry_test7, 6); 72 72 SET(kprobe_test8_result, &bpf_fentry_test8, 8); 73 73 } 74 74
+42
tools/testing/selftests/bpf/progs/tailcall_bpf2bpf6.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/bpf.h> 3 + #include <bpf/bpf_helpers.h> 4 + 5 + #define __unused __attribute__((unused)) 6 + 7 + struct { 8 + __uint(type, BPF_MAP_TYPE_PROG_ARRAY); 9 + __uint(max_entries, 1); 10 + __uint(key_size, sizeof(__u32)); 11 + __uint(value_size, sizeof(__u32)); 12 + } jmp_table SEC(".maps"); 13 + 14 + int done = 0; 15 + 16 + SEC("tc") 17 + int classifier_0(struct __sk_buff *skb __unused) 18 + { 19 + done = 1; 20 + return 0; 21 + } 22 + 23 + static __noinline 24 + int subprog_tail(struct __sk_buff *skb) 25 + { 26 + /* Don't propagate the constant to the caller */ 27 + volatile int ret = 1; 28 + 29 + bpf_tail_call_static(skb, &jmp_table, 0); 30 + return ret; 31 + } 32 + 33 + SEC("tc") 34 + int entry(struct __sk_buff *skb) 35 + { 36 + /* Have data on stack which size is not a multiple of 8 */ 37 + volatile char arr[1] = {}; 38 + 39 + return subprog_tail(skb); 40 + } 41 + 42 + char __license[] SEC("license") = "GPL";
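The test program above deliberately puts a 1-byte array on the stack so the frame size is not a multiple of 8, exercising the alignment the JIT must apply before placing the tail-call counter slot. The generic round-up involved can be sketched as:

```python
# Rounding a stack size up to 8-byte alignment (generic round-up
# sketch, not the JIT's actual code).
def round_up(n, align=8):
    return (n + align - 1) & ~(align - 1)

assert round_up(1) == 8      # the 1-byte array above rounds to 8
assert round_up(9) == 16
```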
+1
tools/testing/selftests/dma/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 CFLAGS += -I../../../../usr/include/ 3 + CFLAGS += -I../../../../include/ 3 4 4 5 TEST_GEN_PROGS := dma_map_benchmark 5 6
+1 -1
tools/testing/selftests/dma/dma_map_benchmark.c
··· 10 10 #include <unistd.h> 11 11 #include <sys/ioctl.h> 12 12 #include <sys/mman.h> 13 - #include <linux/map_benchmark.h> 14 13 #include <linux/types.h> 14 + #include <linux/map_benchmark.h> 15 15 16 16 #define NSEC_PER_MSEC 1000000L 17 17
+4 -5
tools/testing/selftests/kvm/lib/aarch64/ucall.c
··· 73 73 74 74 void ucall(uint64_t cmd, int nargs, ...) 75 75 { 76 - struct ucall uc = { 77 - .cmd = cmd, 78 - }; 76 + struct ucall uc = {}; 79 77 va_list va; 80 78 int i; 81 79 80 + WRITE_ONCE(uc.cmd, cmd); 82 81 nargs = nargs <= UCALL_MAX_ARGS ? nargs : UCALL_MAX_ARGS; 83 82 84 83 va_start(va, nargs); 85 84 for (i = 0; i < nargs; ++i) 86 - uc.args[i] = va_arg(va, uint64_t); 85 + WRITE_ONCE(uc.args[i], va_arg(va, uint64_t)); 87 86 va_end(va); 88 87 89 - *ucall_exit_mmio_addr = (vm_vaddr_t)&uc; 88 + WRITE_ONCE(*ucall_exit_mmio_addr, (vm_vaddr_t)&uc); 90 89 } 91 90 92 91 uint64_t get_ucall(struct kvm_vm *vm, uint32_t vcpu_id, struct ucall *uc)
+23 -2
tools/testing/selftests/lib.mk
··· 7 7 LLVM_SUFFIX := $(LLVM) 8 8 endif 9 9 10 - CC := $(LLVM_PREFIX)clang$(LLVM_SUFFIX) 10 + CLANG_TARGET_FLAGS_arm := arm-linux-gnueabi 11 + CLANG_TARGET_FLAGS_arm64 := aarch64-linux-gnu 12 + CLANG_TARGET_FLAGS_hexagon := hexagon-linux-musl 13 + CLANG_TARGET_FLAGS_m68k := m68k-linux-gnu 14 + CLANG_TARGET_FLAGS_mips := mipsel-linux-gnu 15 + CLANG_TARGET_FLAGS_powerpc := powerpc64le-linux-gnu 16 + CLANG_TARGET_FLAGS_riscv := riscv64-linux-gnu 17 + CLANG_TARGET_FLAGS_s390 := s390x-linux-gnu 18 + CLANG_TARGET_FLAGS_x86 := x86_64-linux-gnu 19 + CLANG_TARGET_FLAGS := $(CLANG_TARGET_FLAGS_$(ARCH)) 20 + 21 + ifeq ($(CROSS_COMPILE),) 22 + ifeq ($(CLANG_TARGET_FLAGS),) 23 + $(error Specify CROSS_COMPILE or add '--target=' option to lib.mk 24 + else 25 + CLANG_FLAGS += --target=$(CLANG_TARGET_FLAGS) 26 + endif # CLANG_TARGET_FLAGS 27 + else 28 + CLANG_FLAGS += --target=$(notdir $(CROSS_COMPILE:%-=%)) 29 + endif # CROSS_COMPILE 30 + 31 + CC := $(LLVM_PREFIX)clang$(LLVM_SUFFIX) $(CLANG_FLAGS) -fintegrated-as 11 32 else 12 33 CC := $(CROSS_COMPILE)gcc 13 - endif 34 + endif # LLVM 14 35 15 36 ifeq (0,$(MAKELEVEL)) 16 37 ifeq ($(OUTPUT),)
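The lib.mk change selects a clang `--target` triple from `ARCH` when no cross compiler is given, and otherwise derives the triple from `CROSS_COMPILE` via `$(notdir $(CROSS_COMPILE:%-=%))` (strip one trailing `-`, then the directory part). A Python rendering of that selection logic (table abbreviated):

```python
import os

# Sketch of the Makefile logic above: pick a clang --target triple
# from ARCH, or derive it from a CROSS_COMPILE prefix.
CLANG_TARGETS = {
    "arm64": "aarch64-linux-gnu",
    "x86": "x86_64-linux-gnu",
    "s390": "s390x-linux-gnu",
    # ... remaining arches as listed in the Makefile
}

def clang_target(arch, cross_compile=""):
    if not cross_compile:
        if arch not in CLANG_TARGETS:
            raise ValueError("Specify CROSS_COMPILE or add --target")
        return CLANG_TARGETS[arch]
    # $(notdir $(CROSS_COMPILE:%-=%)): drop trailing '-' and dirs
    prefix = cross_compile[:-1] if cross_compile.endswith("-") else cross_compile
    return os.path.basename(prefix)

print(clang_target("arm64"))                                  # aarch64-linux-gnu
print(clang_target("", "/opt/cross/bin/riscv64-linux-gnu-"))  # riscv64-linux-gnu
```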
+56 -5
tools/testing/selftests/net/fcnal-test.sh
··· 70 70 NL_IP=172.17.1.1 71 71 NL_IP6=2001:db8:4::1 72 72 73 + # multicast and broadcast addresses 74 + MCAST_IP=224.0.0.1 75 + BCAST_IP=255.255.255.255 76 + 73 77 MD5_PW=abc123 74 78 MD5_WRONG_PW=abc1234 75 79 ··· 311 307 case "$1" in 312 308 127.0.0.1) echo "loopback";; 313 309 ::1) echo "IPv6 loopback";; 310 + 311 + ${BCAST_IP}) echo "broadcast";; 312 + ${MCAST_IP}) echo "multicast";; 314 313 315 314 ${NSA_IP}) echo "ns-A IP";; 316 315 ${NSA_IP6}) echo "ns-A IPv6";; ··· 1800 1793 done 1801 1794 1802 1795 # 1803 - # raw socket with nonlocal bind 1796 + # tests for nonlocal bind 1804 1797 # 1805 1798 a=${NL_IP} 1806 1799 log_start 1807 - run_cmd nettest -s -R -P icmp -f -l ${a} -I ${NSA_DEV} -b 1808 - log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address after device bind" 1800 + run_cmd nettest -s -R -f -l ${a} -b 1801 + log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address" 1802 + 1803 + log_start 1804 + run_cmd nettest -s -f -l ${a} -b 1805 + log_test_addr ${a} $? 0 "TCP socket bind to nonlocal address" 1806 + 1807 + log_start 1808 + run_cmd nettest -s -D -P icmp -f -l ${a} -b 1809 + log_test_addr ${a} $? 0 "ICMP socket bind to nonlocal address" 1810 + 1811 + # 1812 + # check that ICMP sockets cannot bind to broadcast and multicast addresses 1813 + # 1814 + a=${BCAST_IP} 1815 + log_start 1816 + run_cmd nettest -s -D -P icmp -l ${a} -b 1817 + log_test_addr ${a} $? 1 "ICMP socket bind to broadcast address" 1818 + 1819 + a=${MCAST_IP} 1820 + log_start 1821 + run_cmd nettest -s -D -P icmp -l ${a} -b 1822 + log_test_addr ${a} $? 1 "ICMP socket bind to multicast address" 1809 1823 1810 1824 # 1811 1825 # tcp sockets ··· 1878 1850 log_test_addr ${a} $? 1 "Raw socket bind to out of scope address after VRF bind" 1879 1851 1880 1852 # 1881 - # raw socket with nonlocal bind 1853 + # tests for nonlocal bind 1882 1854 # 1883 1855 a=${NL_IP} 1884 1856 log_start 1885 - run_cmd nettest -s -R -P icmp -f -l ${a} -I ${VRF} -b 1857 + run_cmd nettest -s -R -f -l ${a} -I ${VRF} -b 1886 1858 log_test_addr ${a} $? 0 "Raw socket bind to nonlocal address after VRF bind" 1859 + 1860 + log_start 1861 + run_cmd nettest -s -f -l ${a} -I ${VRF} -b 1862 + log_test_addr ${a} $? 0 "TCP socket bind to nonlocal address after VRF bind" 1863 + 1864 + log_start 1865 + run_cmd nettest -s -D -P icmp -f -l ${a} -I ${VRF} -b 1866 + log_test_addr ${a} $? 0 "ICMP socket bind to nonlocal address after VRF bind" 1867 + 1868 + # 1869 + # check that ICMP sockets cannot bind to broadcast and multicast addresses 1870 + # 1871 + a=${BCAST_IP} 1872 + log_start 1873 + run_cmd nettest -s -D -P icmp -l ${a} -I ${VRF} -b 1874 + log_test_addr ${a} $? 1 "ICMP socket bind to broadcast address after VRF bind" 1875 + 1876 + a=${MCAST_IP} 1877 + log_start 1878 + run_cmd nettest -s -D -P icmp -l ${a} -I ${VRF} -b 1879 + log_test_addr ${a} $? 1 "ICMP socket bind to multicast address after VRF bind" 1887 1880 1888 1881 # 1889 1882 # tcp sockets ··· 1938 1889 1939 1890 log_subsection "No VRF" 1940 1891 setup 1892 + set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null 1941 1893 ipv4_addr_bind_novrf 1942 1894 1943 1895 log_subsection "With VRF" 1944 1896 setup "yes" 1897 + set_sysctl net.ipv4.ping_group_range='0 2147483647' 2>/dev/null 1945 1898 ipv4_addr_bind_vrf 1946 1899 } 1947 1900
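The new broadcast/multicast test addresses are also registered in the script's address-naming `case` statement so that `log_test_addr` can print a readable label. A standalone sketch of that lookup (same values as the diff; `addr_name` is a hypothetical name for the anonymous `case` block, and the catch-all arm is added here for illustration):

```shell
# Values defined at the top of fcnal-test.sh in this change.
MCAST_IP=224.0.0.1
BCAST_IP=255.255.255.255

# Map a test address to the human-readable label used in log output.
addr_name()
{
	case "$1" in
	127.0.0.1) echo "loopback";;
	::1) echo "IPv6 loopback";;
	${BCAST_IP}) echo "broadcast";;
	${MCAST_IP}) echo "multicast";;
	*) echo "unknown";;
	esac
}

addr_name 224.0.0.1        # -> multicast
addr_name 255.255.255.255  # -> broadcast
```

Note the ICMP-socket cases pass `1` as the expected status to `log_test_addr`, i.e. the bind is expected to *fail* for broadcast and multicast addresses; the `set_sysctl net.ipv4.ping_group_range` calls are what allow unprivileged ICMP sockets to be created at all.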
+1 -1
tools/testing/selftests/netfilter/nft_concat_range.sh
··· 31 31 32 32 # List of possible paths to pktgen script from kernel tree for performance tests 33 33 PKTGEN_SCRIPT_PATHS=" 34 - ../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh 34 + ../../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh 35 35 pktgen/pktgen_bench_xmit_mode_netif_receive.sh" 36 36 37 37 # Definition of set types:
+2 -2
tools/testing/selftests/vm/gup_test.c
··· 209 209 if (write) 210 210 gup.gup_flags |= FOLL_WRITE; 211 211 212 - gup_fd = open("/sys/kernel/debug/gup_test", O_RDWR); 212 + gup_fd = open(GUP_TEST_FILE, O_RDWR); 213 213 if (gup_fd == -1) { 214 214 switch (errno) { 215 215 case EACCES: ··· 224 224 printf("check if CONFIG_GUP_TEST is enabled in kernel config\n"); 225 225 break; 226 226 default: 227 - perror("failed to open /sys/kernel/debug/gup_test"); 227 + perror("failed to open " GUP_TEST_FILE); 228 228 break; 229 229 } 230 230 exit(KSFT_SKIP);
+2
tools/testing/selftests/vm/ksm_tests.c
··· 54 54 } 55 55 if (fprintf(f, "%lu", val) < 0) { 56 56 perror("fprintf"); 57 + fclose(f); 57 58 return 1; 58 59 } 59 60 fclose(f); ··· 73 72 } 74 73 if (fscanf(f, "%lu", val) != 1) { 75 74 perror("fscanf"); 75 + fclose(f); 76 76 return 1; 77 77 } 78 78 fclose(f);