Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 5.19-rc6 into char-misc-next

We need the misc driver fixes in here as well.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4857 -2485
+3
.mailmap
···
 Bart Van Assche <bvanassche@acm.org> <bart.vanassche@wdc.com>
 Ben Gardner <bgardner@wabtec.com>
 Ben M Cahill <ben.m.cahill@intel.com>
+Ben Widawsky <bwidawsk@kernel.org> <ben@bwidawsk.net>
+Ben Widawsky <bwidawsk@kernel.org> <ben.widawsky@intel.com>
+Ben Widawsky <bwidawsk@kernel.org> <benjamin.widawsky@intel.com>
 Björn Steinbrink <B.Steinbrink@gmx.de>
 Björn Töpel <bjorn@kernel.org> <bjorn.topel@gmail.com>
 Björn Töpel <bjorn@kernel.org> <bjorn.topel@intel.com>
+1 -1
Documentation/devicetree/bindings/dma/allwinner,sun50i-a64-dma.yaml
···
     then:
       properties:
         clocks:
-          maxItems: 2
+          minItems: 2

       required:
         - clock-names
+6
Documentation/driver-api/firmware/other_interfaces.rst
···
 .. kernel-doc:: drivers/firmware/edd.c
    :internal:

+Generic System Framebuffers Interface
+-------------------------------------
+
+.. kernel-doc:: drivers/firmware/sysfb.c
+   :export:
+
 Intel Stratix10 SoC Service Layer
 ---------------------------------
 Some features of the Intel Stratix10 SoC require a level of privilege
+36
Documentation/process/maintainer-netdev.rst
···
 netdev FAQ
 ==========

+tl;dr
+-----
+
+ - designate your patch to a tree - ``[PATCH net]`` or ``[PATCH net-next]``
+ - for fixes the ``Fixes:`` tag is required, regardless of the tree
+ - don't post large series (> 15 patches), break them up
+ - don't repost your patches within one 24h period
+ - reverse xmas tree
+
 What is netdev?
 ---------------
 It is a mailing list for all network-related Linux stuff. This
···
 version that should be applied. If there is any doubt, the maintainer
 will reply and ask what should be done.

+How do I divide my work into patches?
+-------------------------------------
+
+Put yourself in the shoes of the reviewer. Each patch is read separately
+and therefore should constitute a comprehensible step towards your stated
+goal.
+
+Avoid sending series longer than 15 patches. Larger series takes longer
+to review as reviewers will defer looking at it until they find a large
+chunk of time. A small series can be reviewed in a short time, so Maintainers
+just do it. As a result, a sequence of smaller series gets merged quicker and
+with better review coverage. Re-posting large series also increases the mailing
+list traffic.
+
 I made changes to only a few patches in a patch series should I resend only those changed?
 ------------------------------------------------------------------------------------------
 No, please resend the entire patch series and make sure you do number your
···
   /* foobar blah blah blah
    * another line of text
    */
+
+What is "reverse xmas tree"?
+----------------------------
+
+Netdev has a convention for ordering local variables in functions.
+Order the variable declaration lines longest to shortest, e.g.::
+
+  struct scatterlist *sg;
+  struct sk_buff *skb;
+  int err, i;
+
+If there are dependencies between the variables preventing the ordering
+move the initialization out of line.

 I am working in existing code which uses non-standard formatting. Which formatting should I use?
 ------------------------------------------------------------------------------------------------
+109 -38
MAINTAINERS
···
 ACPI VIOT DRIVER
 M:	Jean-Philippe Brucker <jean-philippe@linaro.org>
 L:	linux-acpi@vger.kernel.org
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Maintained
 F:	drivers/acpi/viot.c
···
 AMD IOMMU (AMD-VI)
 M:	Joerg Roedel <joro@8bytes.org>
 R:	Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
···
 ARM/QUALCOMM SUPPORT
 M:	Andy Gross <agross@kernel.org>
 M:	Bjorn Andersson <bjorn.andersson@linaro.org>
+R:	Konrad Dybcio <konrad.dybcio@somainline.org>
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git
···
 F:	Documentation/devicetree/bindings/iio/accel/bosch,bma400.yaml
 F:	drivers/iio/accel/bma400*

-BPF (Safe dynamic programs and tools)
+BPF [GENERAL] (Safe Dynamic Programs and Tools)
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Andrii Nakryiko <andrii@kernel.org>
-R:	Martin KaFai Lau <kafai@fb.com>
-R:	Song Liu <songliubraving@fb.com>
+R:	Martin KaFai Lau <martin.lau@linux.dev>
+R:	Song Liu <song@kernel.org>
 R:	Yonghong Song <yhs@fb.com>
 R:	John Fastabend <john.fastabend@gmail.com>
 R:	KP Singh <kpsingh@kernel.org>
-L:	netdev@vger.kernel.org
+R:	Stanislav Fomichev <sdf@google.com>
+R:	Hao Luo <haoluo@google.com>
+R:	Jiri Olsa <jolsa@kernel.org>
 L:	bpf@vger.kernel.org
 S:	Supported
 W:	https://bpf.io/
···
 F:	tools/bpf/
 F:	tools/lib/bpf/
 F:	tools/testing/selftests/bpf/
-N:	bpf
-K:	bpf

 BPF JIT for ARM
 M:	Shubham Bansal <illusionist.neo@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	arch/arm/net/
···
 M:	Daniel Borkmann <daniel@iogearbox.net>
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Zi Shen Lim <zlim.lnx@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/arm64/net/
···
 BPF JIT for MIPS (32-BIT AND 64-BIT)
 M:	Johan Almbladh <johan.almbladh@anyfinetworks.com>
 M:	Paul Burton <paulburton@kernel.org>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/mips/net/

 BPF JIT for NFP NICs
 M:	Jakub Kicinski <kuba@kernel.org>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	drivers/net/ethernet/netronome/nfp/bpf/
···
 BPF JIT for POWERPC (32-BIT AND 64-BIT)
 M:	Naveen N. Rao <naveen.n.rao@linux.ibm.com>
 M:	Michael Ellerman <mpe@ellerman.id.au>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/powerpc/net/
···
 BPF JIT for RISC-V (32-bit)
 M:	Luke Nelson <luke.r.nels@gmail.com>
 M:	Xi Wang <xi.wang@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/riscv/net/
···
 BPF JIT for RISC-V (64-bit)
 M:	Björn Töpel <bjorn@kernel.org>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	arch/riscv/net/
···
 M:	Ilya Leoshkevich <iii@linux.ibm.com>
 M:	Heiko Carstens <hca@linux.ibm.com>
 M:	Vasily Gorbik <gor@linux.ibm.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/s390/net/
···
 BPF JIT for SPARC (32-BIT AND 64-BIT)
 M:	David S. Miller <davem@davemloft.net>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	arch/sparc/net/

 BPF JIT for X86 32-BIT
 M:	Wang YanQing <udknight@gmail.com>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Odd Fixes
 F:	arch/x86/net/bpf_jit_comp32.c
···
 BPF JIT for X86 64-BIT
 M:	Alexei Starovoitov <ast@kernel.org>
 M:	Daniel Borkmann <daniel@iogearbox.net>
-L:	netdev@vger.kernel.org
 L:	bpf@vger.kernel.org
 S:	Supported
 F:	arch/x86/net/
 X:	arch/x86/net/bpf_jit_comp32.c

-BPF LSM (Security Audit and Enforcement using BPF)
+BPF [CORE]
+M:	Alexei Starovoitov <ast@kernel.org>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+R:	John Fastabend <john.fastabend@gmail.com>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/verifier.c
+F:	kernel/bpf/tnum.c
+F:	kernel/bpf/core.c
+F:	kernel/bpf/syscall.c
+F:	kernel/bpf/dispatcher.c
+F:	kernel/bpf/trampoline.c
+F:	include/linux/bpf*
+F:	include/linux/filter.h
+
+BPF [BTF]
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/btf.c
+F:	include/linux/btf*
+
+BPF [TRACING]
+M:	Song Liu <song@kernel.org>
+R:	Jiri Olsa <jolsa@kernel.org>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/trace/bpf_trace.c
+F:	kernel/bpf/stackmap.c
+
+BPF [NETWORKING] (tc BPF, sock_addr)
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+M:	Daniel Borkmann <daniel@iogearbox.net>
+R:	John Fastabend <john.fastabend@gmail.com>
+L:	bpf@vger.kernel.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	net/core/filter.c
+F:	net/sched/act_bpf.c
+F:	net/sched/cls_bpf.c
+
+BPF [NETWORKING] (struct_ops, reuseport)
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+L:	bpf@vger.kernel.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/bpf_struct*
+
+BPF [SECURITY & LSM] (Security Audit and Enforcement using BPF)
 M:	KP Singh <kpsingh@kernel.org>
 R:	Florent Revest <revest@chromium.org>
 R:	Brendan Jackman <jackmanb@chromium.org>
···
 F:	kernel/bpf/bpf_lsm.c
 F:	security/bpf/

-BPF L7 FRAMEWORK
+BPF [STORAGE & CGROUPS]
+M:	Martin KaFai Lau <martin.lau@linux.dev>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/cgroup.c
+F:	kernel/bpf/*storage.c
+F:	kernel/bpf/bpf_lru*
+
+BPF [RINGBUF]
+M:	Andrii Nakryiko <andrii@kernel.org>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/ringbuf.c
+
+BPF [ITERATOR]
+M:	Yonghong Song <yhs@fb.com>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	kernel/bpf/*iter.c
+
+BPF [L7 FRAMEWORK] (sockmap)
 M:	John Fastabend <john.fastabend@gmail.com>
 M:	Jakub Sitnicki <jakub@cloudflare.com>
 L:	netdev@vger.kernel.org
···
 F:	net/ipv4/udp_bpf.c
 F:	net/unix/unix_bpf.c

-BPFTOOL
+BPF [LIBRARY] (libbpf)
+M:	Andrii Nakryiko <andrii@kernel.org>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	tools/lib/bpf/
+
+BPF [TOOLING] (bpftool)
 M:	Quentin Monnet <quentin@isovalent.com>
 L:	bpf@vger.kernel.org
 S:	Maintained
 F:	kernel/bpf/disasm.*
 F:	tools/bpf/bpftool/
+
+BPF [SELFTESTS] (Test Runners & Infrastructure)
+M:	Andrii Nakryiko <andrii@kernel.org>
+R:	Mykola Lysenko <mykolal@fb.com>
+L:	bpf@vger.kernel.org
+S:	Maintained
+F:	tools/testing/selftests/bpf/
+
+BPF [MISC]
+L:	bpf@vger.kernel.org
+S:	Odd Fixes
+K:	(?:\b|_)bpf(?:\b|_)

 BROADCOM B44 10/100 ETHERNET DRIVER
 M:	Michael Chan <michael.chan@broadcom.com>
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux.git
 F:	Documentation/devicetree/bindings/clock/
 F:	drivers/clk/
+F:	include/dt-bindings/clock/
 F:	include/linux/clk-pr*
 F:	include/linux/clk/
 F:	include/linux/of_clk.h
···
 M:	Alison Schofield <alison.schofield@intel.com>
 M:	Vishal Verma <vishal.l.verma@intel.com>
 M:	Ira Weiny <ira.weiny@intel.com>
-M:	Ben Widawsky <ben.widawsky@intel.com>
+M:	Ben Widawsky <bwidawsk@kernel.org>
 M:	Dan Williams <dan.j.williams@intel.com>
 L:	linux-cxl@vger.kernel.org
 S:	Maintained
···
 M:	Christoph Hellwig <hch@lst.de>
 M:	Marek Szyprowski <m.szyprowski@samsung.com>
 R:	Robin Murphy <robin.murphy@arm.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Supported
 W:	http://git.infradead.org/users/hch/dma-mapping.git
···
 DMA MAPPING BENCHMARK
 M:	Xiang Chen <chenxiang66@hisilicon.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 F:	kernel/dma/map_benchmark.c
 F:	tools/testing/selftests/dma/
···
 EXYNOS SYSMMU (IOMMU) driver
 M:	Marek Szyprowski <m.szyprowski@samsung.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Maintained
 F:	drivers/iommu/exynos-iommu.c
···
 M:	Cezary Rojewski <cezary.rojewski@intel.com>
 M:	Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
 M:	Liam Girdwood <liam.r.girdwood@linux.intel.com>
-M:	Jie Yang <yang.jie@linux.intel.com>
+M:	Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
+M:	Bard Liao <yung-chuan.liao@linux.intel.com>
+M:	Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
+M:	Kai Vehmanen <kai.vehmanen@linux.intel.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/intel/
···
 INTEL IOMMU (VT-d)
 M:	David Woodhouse <dwmw2@infradead.org>
 M:	Lu Baolu <baolu.lu@linux.intel.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
···
 IOMMU DRIVERS
 M:	Joerg Roedel <joro@8bytes.org>
 M:	Will Deacon <will@kernel.org>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Maintained
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu.git
···
 MEDIATEK IOMMU DRIVER
 M:	Yong Wu <yong.wu@mediatek.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 L:	linux-mediatek@lists.infradead.org (moderated for non-subscribers)
 S:	Supported
···
 F:	sound/soc/codecs/tfa989x.c

 NXP-NCI NFC DRIVER
-R:	Charles Gorand <charles.gorand@effinnov.com>
 L:	linux-nfc@lists.01.org (subscribers-only)
-S:	Supported
+S:	Orphan
 F:	Documentation/devicetree/bindings/net/nfc/nxp,nci.yaml
 F:	drivers/nfc/nxp-nci
···
 PIN CONTROLLER - INTEL
 M:	Mika Westerberg <mika.westerberg@linux.intel.com>
 M:	Andy Shevchenko <andy@kernel.org>
-S:	Maintained
+S:	Supported
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git
 F:	drivers/pinctrl/intel/
···
 QCOM AUDIO (ASoC) DRIVERS
 M:	Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
-M:	Banajit Goswami <bgoswami@codeaurora.org>
+M:	Banajit Goswami <bgoswami@quicinc.com>
 L:	alsa-devel@alsa-project.org (moderated for non-subscribers)
 S:	Supported
 F:	sound/soc/codecs/lpass-va-macro.c
···
 QUALCOMM IOMMU
 M:	Rob Clark <robdclark@gmail.com>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 L:	linux-arm-msm@vger.kernel.org
 S:	Maintained
···
 SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS
 M:	Karsten Graul <kgraul@linux.ibm.com>
+M:	Wenjia Zhang <wenjia@linux.ibm.com>
 L:	linux-s390@vger.kernel.org
 S:	Supported
 W:	http://www.ibm.com/developerworks/linux/linux390/
···
 SOUND - SOUND OPEN FIRMWARE (SOF) DRIVERS
 M:	Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
 M:	Liam Girdwood <lgirdwood@gmail.com>
+M:	Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
+M:	Bard Liao <yung-chuan.liao@linux.intel.com>
 M:	Ranjani Sridharan <ranjani.sridharan@linux.intel.com>
-M:	Kai Vehmanen <kai.vehmanen@linux.intel.com>
+R:	Kai Vehmanen <kai.vehmanen@linux.intel.com>
 M:	Daniel Baluta <daniel.baluta@nxp.com>
 L:	sound-open-firmware@alsa-project.org (moderated for non-subscribers)
 S:	Supported
···
 SWIOTLB SUBSYSTEM
 M:	Christoph Hellwig <hch@infradead.org>
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Supported
 W:	http://git.infradead.org/users/hch/dma-mapping.git
···
 M:	Juergen Gross <jgross@suse.com>
 M:	Stefano Stabellini <sstabellini@kernel.org>
 L:	xen-devel@lists.xenproject.org (moderated for non-subscribers)
-L:	iommu@lists.linux-foundation.org
 L:	iommu@lists.linux.dev
 S:	Supported
 F:	arch/x86/xen/*swiotlb*
+1 -1
Makefile
···
 VERSION = 5
 PATCHLEVEL = 19
 SUBLEVEL = 0
-EXTRAVERSION = -rc4
+EXTRAVERSION = -rc6
 NAME = Superb Owl

 # *DOCUMENTATION*
+1 -2
arch/arm/boot/dts/at91-sam9x60ek.dts
···
 	status = "okay";

 	eeprom@53 {
-		compatible = "atmel,24c32";
+		compatible = "atmel,24c02";
 		reg = <0x53>;
 		pagesize = <16>;
-		size = <128>;
 		status = "okay";
 	};
 };
+3 -3
arch/arm/boot/dts/at91-sama5d2_icp.dts
···
 	status = "okay";

 	eeprom@50 {
-		compatible = "atmel,24c32";
+		compatible = "atmel,24c02";
 		reg = <0x50>;
 		pagesize = <16>;
 		status = "okay";
 	};

 	eeprom@52 {
-		compatible = "atmel,24c32";
+		compatible = "atmel,24c02";
 		reg = <0x52>;
 		pagesize = <16>;
 		status = "disabled";
 	};

 	eeprom@53 {
-		compatible = "atmel,24c32";
+		compatible = "atmel,24c02";
 		reg = <0x53>;
 		pagesize = <16>;
 		status = "disabled";
+1 -3
arch/arm/boot/dts/imx7d-smegw01.dts
···
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_usdhc2>;
 	bus-width = <4>;
+	no-1-8-v;
 	non-removable;
-	cap-sd-highspeed;
-	sd-uhs-ddr50;
-	mmc-ddr-1_8v;
 	vmmc-supply = <&reg_wifi>;
 	enable-sdio-wakeup;
 	status = "okay";
+58
arch/arm/boot/dts/stm32mp15-scmi.dtsi
···
 		reg = <0x16>;
 		#reset-cells = <1>;
 	};
+
+	scmi_voltd: protocol@17 {
+		reg = <0x17>;
+
+		scmi_reguls: regulators {
+			#address-cells = <1>;
+			#size-cells = <0>;
+
+			scmi_reg11: reg11@0 {
+				reg = <0>;
+				regulator-name = "reg11";
+				regulator-min-microvolt = <1100000>;
+				regulator-max-microvolt = <1100000>;
+			};
+
+			scmi_reg18: reg18@1 {
+				voltd-name = "reg18";
+				reg = <1>;
+				regulator-name = "reg18";
+				regulator-min-microvolt = <1800000>;
+				regulator-max-microvolt = <1800000>;
+			};
+
+			scmi_usb33: usb33@2 {
+				reg = <2>;
+				regulator-name = "usb33";
+				regulator-min-microvolt = <3300000>;
+				regulator-max-microvolt = <3300000>;
+			};
+		};
+	};
 };
 };
···
 		};
 	};
 };
+
+&reg11 {
+	status = "disabled";
+};
+
+&reg18 {
+	status = "disabled";
+};
+
+&usb33 {
+	status = "disabled";
+};
+
+&usbotg_hs {
+	usb33d-supply = <&scmi_usb33>;
+};
+
+&usbphyc {
+	vdda1v1-supply = <&scmi_reg11>;
+	vdda1v8-supply = <&scmi_reg18>;
+};
+
+/delete-node/ &clk_hse;
+/delete-node/ &clk_hsi;
+/delete-node/ &clk_lse;
+/delete-node/ &clk_lsi;
+/delete-node/ &clk_csi;
+3 -3
arch/arm/boot/dts/stm32mp151.dtsi
···
 	compatible = "st,stm32-cec";
 	reg = <0x40016000 0x400>;
 	interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
-	clocks = <&rcc CEC_K>, <&clk_lse>;
+	clocks = <&rcc CEC_K>, <&rcc CEC>;
 	clock-names = "cec", "hdmi-cec";
 	status = "disabled";
 };
···
 usbh_ohci: usb@5800c000 {
 	compatible = "generic-ohci";
 	reg = <0x5800c000 0x1000>;
-	clocks = <&rcc USBH>, <&usbphyc>;
+	clocks = <&usbphyc>, <&rcc USBH>;
 	resets = <&rcc USBH_R>;
 	interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>;
 	status = "disabled";
···
 usbh_ehci: usb@5800d000 {
 	compatible = "generic-ehci";
 	reg = <0x5800d000 0x1000>;
-	clocks = <&rcc USBH>;
+	clocks = <&usbphyc>, <&rcc USBH>;
 	resets = <&rcc USBH_R>;
 	interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
 	companion = <&usbh_ohci>;
+4
arch/arm/boot/dts/stm32mp157a-dk1-scmi.dts
···
 	clocks = <&scmi_clk CK_SCMI_MPU>;
 };

+&dsi {
+	clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
+};
+
 &gpioz {
 	clocks = <&scmi_clk CK_SCMI_GPIOZ>;
 };
+1
arch/arm/boot/dts/stm32mp157c-dk2-scmi.dts
···
 };

 &dsi {
+	phy-dsi-supply = <&scmi_reg18>;
 	clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
 };

+4
arch/arm/boot/dts/stm32mp157c-ed1-scmi.dts
···
 	resets = <&scmi_reset RST_SCMI_CRYP1>;
 };

+&dsi {
+	clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
+};
+
 &gpioz {
 	clocks = <&scmi_clk CK_SCMI_GPIOZ>;
 };
+1
arch/arm/boot/dts/stm32mp157c-ev1-scmi.dts
···
 };

 &dsi {
+	phy-dsi-supply = <&scmi_reg18>;
 	clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>;
 };

+1
arch/arm/configs/mxs_defconfig
···
 CONFIG_DRM=y
 CONFIG_DRM_PANEL_SEIKO_43WVF1G=y
 CONFIG_DRM_MXSFB=y
+CONFIG_FB=y
 CONFIG_FB_MODE_HELPERS=y
 CONFIG_LCD_CLASS_DEVICE=y
 CONFIG_BACKLIGHT_CLASS_DEVICE=y
+6 -6
arch/arm/mach-at91/pm.c
···
 static const struct of_device_id sama5d2_ws_ids[] = {
 	{ .compatible = "atmel,sama5d2-gem", .data = &ws_info[0] },
-	{ .compatible = "atmel,at91rm9200-rtc", .data = &ws_info[1] },
+	{ .compatible = "atmel,sama5d2-rtc", .data = &ws_info[1] },
 	{ .compatible = "atmel,sama5d3-udc", .data = &ws_info[2] },
 	{ .compatible = "atmel,at91rm9200-ohci", .data = &ws_info[2] },
 	{ .compatible = "usb-ohci", .data = &ws_info[2] },
···
 };

 static const struct of_device_id sam9x60_ws_ids[] = {
-	{ .compatible = "atmel,at91sam9x5-rtc", .data = &ws_info[1] },
+	{ .compatible = "microchip,sam9x60-rtc", .data = &ws_info[1] },
 	{ .compatible = "atmel,at91rm9200-ohci", .data = &ws_info[2] },
 	{ .compatible = "usb-ohci", .data = &ws_info[2] },
 	{ .compatible = "atmel,at91sam9g45-ehci", .data = &ws_info[2] },
 	{ .compatible = "usb-ehci", .data = &ws_info[2] },
-	{ .compatible = "atmel,at91sam9260-rtt", .data = &ws_info[4] },
+	{ .compatible = "microchip,sam9x60-rtt", .data = &ws_info[4] },
 	{ .compatible = "cdns,sam9x60-macb", .data = &ws_info[5] },
 	{ /* sentinel */ }
 };

 static const struct of_device_id sama7g5_ws_ids[] = {
-	{ .compatible = "atmel,at91sam9x5-rtc", .data = &ws_info[1] },
+	{ .compatible = "microchip,sama7g5-rtc", .data = &ws_info[1] },
 	{ .compatible = "microchip,sama7g5-ohci", .data = &ws_info[2] },
 	{ .compatible = "usb-ohci", .data = &ws_info[2] },
 	{ .compatible = "atmel,at91sam9g45-ehci", .data = &ws_info[2] },
 	{ .compatible = "usb-ehci", .data = &ws_info[2] },
 	{ .compatible = "microchip,sama7g5-sdhci", .data = &ws_info[3] },
-	{ .compatible = "atmel,at91sam9260-rtt", .data = &ws_info[4] },
+	{ .compatible = "microchip,sama7g5-rtt", .data = &ws_info[4] },
 	{ /* sentinel */ }
 };
···
 	return ret;
 }

-static void at91_pm_secure_init(void)
+static void __init at91_pm_secure_init(void)
 {
 	int suspend_mode;
 	struct arm_smccc_res res;
+2
arch/arm/mach-meson/platsmp.c
···
 	}

 	sram_base = of_iomap(node, 0);
+	of_node_put(node);
 	if (!sram_base) {
 		pr_err("Couldn't map SRAM registers\n");
 		return;
···
 	}

 	scu_base = of_iomap(node, 0);
+	of_node_put(node);
 	if (!scu_base) {
 		pr_err("Couldn't map SCU registers\n");
 		return;
+4 -2
arch/arm/xen/p2m.c
···
 unsigned long __pfn_to_mfn(unsigned long pfn)
 {
-	struct rb_node *n = phys_to_mach.rb_node;
+	struct rb_node *n;
 	struct xen_p2m_entry *entry;
 	unsigned long irqflags;

 	read_lock_irqsave(&p2m_lock, irqflags);
+	n = phys_to_mach.rb_node;
 	while (n) {
 		entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
 		if (entry->pfn <= pfn &&
···
 	int rc;
 	unsigned long irqflags;
 	struct xen_p2m_entry *p2m_entry;
-	struct rb_node *n = phys_to_mach.rb_node;
+	struct rb_node *n;

 	if (mfn == INVALID_P2M_ENTRY) {
 		write_lock_irqsave(&p2m_lock, irqflags);
+		n = phys_to_mach.rb_node;
 		while (n) {
 			p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys);
 			if (p2m_entry->pfn <= pfn &&
+44 -44
arch/arm64/boot/dts/freescale/imx8mp-evk.dts
···
 &iomuxc {
 	pinctrl_eqos: eqosgrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC	0x3
-			MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO	0x3
-			MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0	0x91
-			MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1	0x91
-			MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2	0x91
-			MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3	0x91
-			MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK	0x91
-			MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL	0x91
-			MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0	0x1f
-			MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1	0x1f
-			MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2	0x1f
-			MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3	0x1f
-			MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL	0x1f
-			MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK	0x1f
-			MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22	0x19
+			MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC	0x2
+			MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO	0x2
+			MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0	0x90
+			MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1	0x90
+			MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2	0x90
+			MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3	0x90
+			MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK	0x90
+			MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL	0x90
+			MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0	0x16
+			MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1	0x16
+			MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2	0x16
+			MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3	0x16
+			MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL	0x16
+			MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK	0x16
+			MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22	0x10
 		>;
 	};

 	pinctrl_fec: fecgrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC	0x3
-			MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO	0x3
-			MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0	0x91
-			MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1	0x91
-			MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2	0x91
-			MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3	0x91
-			MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC	0x91
-			MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL	0x91
-			MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0	0x1f
-			MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1	0x1f
-			MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2	0x1f
-			MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3	0x1f
-			MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL	0x1f
-			MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC	0x1f
-			MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02	0x19
+			MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC	0x2
+			MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO	0x2
+			MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0	0x90
+			MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1	0x90
+			MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2	0x90
+			MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3	0x90
+			MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC	0x90
+			MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL	0x90
+			MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0	0x16
+			MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1	0x16
+			MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2	0x16
+			MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3	0x16
+			MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL	0x16
+			MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC	0x16
+			MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02	0x10
 		>;
 	};
···
 	pinctrl_gpio_led: gpioledgrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16	0x19
+			MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16	0x140
 		>;
 	};

 	pinctrl_i2c1: i2c1grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL	0x400001c3
-			MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA	0x400001c3
+			MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL	0x400001c2
+			MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA	0x400001c2
 		>;
 	};

 	pinctrl_i2c3: i2c3grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL	0x400001c3
-			MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA	0x400001c3
+			MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL	0x400001c2
+			MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA	0x400001c2
 		>;
 	};

 	pinctrl_i2c5: i2c5grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_SPDIF_RX__I2C5_SDA	0x400001c3
-			MX8MP_IOMUXC_SPDIF_TX__I2C5_SCL	0x400001c3
+			MX8MP_IOMUXC_SPDIF_RX__I2C5_SDA	0x400001c2
+			MX8MP_IOMUXC_SPDIF_TX__I2C5_SCL	0x400001c2
 		>;
 	};
···
 	pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
 		fsl,pins = <
-			MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19	0x41
+			MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19	0x40
 		>;
 	};

 	pinctrl_uart2: uart2grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX	0x49
-			MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX	0x49
+			MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX	0x140
+			MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX	0x140
 		>;
 	};

 	pinctrl_usb1_vbus: usb1grp {
 		fsl,pins = <
-			MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR	0x19
+			MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR	0x10
 		>;
 	};
···
 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d0
 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d0
 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d0
-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
+			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
 		>;
 	};
···
 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d4
 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d4
 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d4
-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
+			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
 		>;
 	};
···
 			MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1	0x1d6
 			MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2	0x1d6
 			MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3	0x1d6
-			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc1
+			MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT	0xc0
 		>;
 	};
+20 -20
arch/arm64/boot/dts/freescale/imx8mp-icore-mx8mp-edimm2.2.dts
··· 110 110 &iomuxc {
 111 111 pinctrl_eqos: eqosgrp {
 112 112 fsl,pins = <
 113 - MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3
 114 - MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3
 115 - MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91
 116 - MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91
 117 - MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91
 118 - MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91
 119 - MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91
 120 - MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91
 121 - MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f
 122 - MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f
 123 - MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f
 124 - MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f
 125 - MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f
 126 - MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f
 127 - MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x19
 113 + MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2
 114 + MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2
 115 + MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90
 116 + MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90
 117 + MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90
 118 + MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90
 119 + MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90
 120 + MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90
 121 + MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16
 122 + MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16
 123 + MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16
 124 + MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16
 125 + MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16
 126 + MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16
 127 + MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x10
 128 128 >;
 129 129 };
 130 130
 131 131 pinctrl_uart2: uart2grp {
 132 132 fsl,pins = <
 133 - MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x49
 134 - MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x49
 133 + MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x40
 134 + MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x40
 135 135 >;
 136 136 };
 137 137
 ··· 151 151 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d0
 152 152 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d0
 153 153 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d0
 154 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
 154 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
 155 155 >;
 156 156 };
 157 157
 ··· 163 163
 164 164 pinctrl_reg_usb1: regusb1grp {
 165 165 fsl,pins = <
 166 - MX8MP_IOMUXC_GPIO1_IO14__GPIO1_IO14 0x19
 166 + MX8MP_IOMUXC_GPIO1_IO14__GPIO1_IO14 0x10
 167 167 >;
 168 168 };
 169 169
 170 170 pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
 171 171 fsl,pins = <
 172 - MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x41
 172 + MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40
 173 173 >;
 174 174 };
 175 175 };
+24 -24
arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts
··· 116 116 &iomuxc {
 117 117 pinctrl_eqos: eqosgrp {
 118 118 fsl,pins = <
 119 - MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3
 120 - MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3
 121 - MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91
 122 - MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91
 123 - MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91
 124 - MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91
 125 - MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91
 126 - MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91
 127 - MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f
 128 - MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f
 129 - MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f
 130 - MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f
 131 - MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f
 132 - MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f
 119 + MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2
 120 + MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2
 121 + MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90
 122 + MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90
 123 + MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90
 124 + MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90
 125 + MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90
 126 + MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90
 127 + MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16
 128 + MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16
 129 + MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16
 130 + MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16
 131 + MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16
 132 + MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16
 133 133 MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20 0x10
 134 134 >;
 135 135 };
 136 136
 137 137 pinctrl_i2c2: i2c2grp {
 138 138 fsl,pins = <
 139 - MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c3
 140 - MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c3
 139 + MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c2
 140 + MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c2
 141 141 >;
 142 142 };
 143 143
 144 144 pinctrl_i2c2_gpio: i2c2gpiogrp {
 145 145 fsl,pins = <
 146 - MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16 0x1e3
 147 - MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17 0x1e3
 146 + MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16 0x1e2
 147 + MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17 0x1e2
 148 148 >;
 149 149 };
 150 150
 151 151 pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp {
 152 152 fsl,pins = <
 153 - MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x41
 153 + MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40
 154 154 >;
 155 155 };
 156 156
 157 157 pinctrl_uart1: uart1grp {
 158 158 fsl,pins = <
 159 - MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX 0x49
 160 - MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX 0x49
 159 + MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX 0x40
 160 + MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX 0x40
 161 161 >;
 162 162 };
 163 163
 ··· 175 175 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d0
 176 176 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d0
 177 177 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d0
 178 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
 178 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
 179 179 >;
 180 180 };
 181 181
 ··· 187 187 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4
 188 188 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4
 189 189 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4
 190 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
 190 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
 191 191 >;
 192 192 };
 193 193
 ··· 199 199 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d6
 200 200 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d6
 201 201 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d6
 202 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1
 202 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0
 203 203 >;
 204 204 };
 205 205 };
+58 -58
arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
··· 622 622
 623 623 pinctrl_hog: hoggrp {
 624 624 fsl,pins = <
 625 - MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09 0x40000041 /* DIO0 */
 626 - MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11 0x40000041 /* DIO1 */
 627 - MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14 0x40000041 /* M2SKT_OFF# */
 628 - MX8MP_IOMUXC_SD2_DATA2__GPIO2_IO17 0x40000159 /* PCIE1_WDIS# */
 629 - MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18 0x40000159 /* PCIE2_WDIS# */
 630 - MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14 0x40000159 /* PCIE3_WDIS# */
 631 - MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06 0x40000041 /* M2SKT_RST# */
 632 - MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18 0x40000159 /* M2SKT_WDIS# */
 633 - MX8MP_IOMUXC_NAND_ALE__GPIO3_IO00 0x40000159 /* M2SKT_GDIS# */
 625 + MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09 0x40000040 /* DIO0 */
 626 + MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11 0x40000040 /* DIO1 */
 627 + MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14 0x40000040 /* M2SKT_OFF# */
 628 + MX8MP_IOMUXC_SD2_DATA2__GPIO2_IO17 0x40000150 /* PCIE1_WDIS# */
 629 + MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18 0x40000150 /* PCIE2_WDIS# */
 630 + MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14 0x40000150 /* PCIE3_WDIS# */
 631 + MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06 0x40000040 /* M2SKT_RST# */
 632 + MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18 0x40000150 /* M2SKT_WDIS# */
 633 + MX8MP_IOMUXC_NAND_ALE__GPIO3_IO00 0x40000150 /* M2SKT_GDIS# */
 634 634 MX8MP_IOMUXC_SAI3_TXD__GPIO5_IO01 0x40000104 /* UART_TERM */
 635 635 MX8MP_IOMUXC_SAI3_TXFS__GPIO4_IO31 0x40000104 /* UART_RS485 */
 636 636 MX8MP_IOMUXC_SAI3_TXC__GPIO5_IO00 0x40000104 /* UART_HALF */
 ··· 639 639
 640 640 pinctrl_accel: accelgrp {
 641 641 fsl,pins = <
 642 - MX8MP_IOMUXC_GPIO1_IO07__GPIO1_IO07 0x159
 642 + MX8MP_IOMUXC_GPIO1_IO07__GPIO1_IO07 0x150
 643 643 >;
 644 644 };
 645 645
 646 646 pinctrl_eqos: eqosgrp {
 647 647 fsl,pins = <
 648 - MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3
 649 - MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3
 650 - MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91
 651 - MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91
 652 - MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91
 653 - MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91
 654 - MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91
 655 - MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91
 656 - MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f
 657 - MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f
 658 - MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f
 659 - MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f
 660 - MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f
 661 - MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f
 662 - MX8MP_IOMUXC_SAI3_RXD__GPIO4_IO30 0x141 /* RST# */
 663 - MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28 0x159 /* IRQ# */
 648 + MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2
 649 + MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2
 650 + MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90
 651 + MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90
 652 + MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90
 653 + MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90
 654 + MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90
 655 + MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90
 656 + MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16
 657 + MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16
 658 + MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16
 659 + MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16
 660 + MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16
 661 + MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16
 662 + MX8MP_IOMUXC_SAI3_RXD__GPIO4_IO30 0x140 /* RST# */
 663 + MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28 0x150 /* IRQ# */
 664 664 >;
 665 665 };
 666 666
 667 667 pinctrl_fec: fecgrp {
 668 668 fsl,pins = <
 669 - MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x91
 670 - MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x91
 671 - MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x91
 672 - MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x91
 673 - MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x91
 674 - MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x91
 675 - MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x1f
 676 - MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x1f
 677 - MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x1f
 678 - MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x1f
 679 - MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x1f
 680 - MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x1f
 681 - MX8MP_IOMUXC_SAI1_RXFS__ENET1_1588_EVENT0_IN 0x141
 682 - MX8MP_IOMUXC_SAI1_RXC__ENET1_1588_EVENT0_OUT 0x141
 669 + MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x90
 670 + MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x90
 671 + MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x90
 672 + MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x90
 673 + MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x90
 674 + MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x90
 675 + MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x16
 676 + MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x16
 677 + MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x16
 678 + MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x16
 679 + MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x16
 680 + MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x16
 681 + MX8MP_IOMUXC_SAI1_RXFS__ENET1_1588_EVENT0_IN 0x140
 682 + MX8MP_IOMUXC_SAI1_RXC__ENET1_1588_EVENT0_OUT 0x140
 683 683 >;
 684 684 };
 685 685
 ··· 692 692
 693 693 pinctrl_gsc: gscgrp {
 694 694 fsl,pins = <
 695 - MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20 0x159
 695 + MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20 0x150
 696 696 >;
 697 697 };
 698 698
 699 699 pinctrl_i2c1: i2c1grp {
 700 700 fsl,pins = <
 701 - MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c3
 702 - MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c3
 701 + MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c2
 702 + MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c2
 703 703 >;
 704 704 };
 705 705
 706 706 pinctrl_i2c2: i2c2grp {
 707 707 fsl,pins = <
 708 - MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c3
 709 - MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c3
 708 + MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c2
 709 + MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c2
 710 710 >;
 711 711 };
 712 712
 713 713 pinctrl_i2c3: i2c3grp {
 714 714 fsl,pins = <
 715 - MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c3
 716 - MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c3
 715 + MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c2
 716 + MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c2
 717 717 >;
 718 718 };
 719 719
 720 720 pinctrl_i2c4: i2c4grp {
 721 721 fsl,pins = <
 722 - MX8MP_IOMUXC_I2C4_SCL__I2C4_SCL 0x400001c3
 723 - MX8MP_IOMUXC_I2C4_SDA__I2C4_SDA 0x400001c3
 722 + MX8MP_IOMUXC_I2C4_SCL__I2C4_SCL 0x400001c2
 723 + MX8MP_IOMUXC_I2C4_SDA__I2C4_SDA 0x400001c2
 724 724 >;
 725 725 };
 726 726
 727 727 pinctrl_ksz: kszgrp {
 728 728 fsl,pins = <
 729 - MX8MP_IOMUXC_SAI3_RXC__GPIO4_IO29 0x159 /* IRQ# */
 730 - MX8MP_IOMUXC_SAI3_MCLK__GPIO5_IO02 0x141 /* RST# */
 729 + MX8MP_IOMUXC_SAI3_RXC__GPIO4_IO29 0x150 /* IRQ# */
 730 + MX8MP_IOMUXC_SAI3_MCLK__GPIO5_IO02 0x140 /* RST# */
 731 731 >;
 732 732 };
 733 733
 734 734 pinctrl_gpio_leds: ledgrp {
 735 735 fsl,pins = <
 736 - MX8MP_IOMUXC_SD2_DATA0__GPIO2_IO15 0x19
 737 - MX8MP_IOMUXC_SD2_DATA1__GPIO2_IO16 0x19
 736 + MX8MP_IOMUXC_SD2_DATA0__GPIO2_IO15 0x10
 737 + MX8MP_IOMUXC_SD2_DATA1__GPIO2_IO16 0x10
 738 738 >;
 739 739 };
 740 740
 741 741 pinctrl_pmic: pmicgrp {
 742 742 fsl,pins = <
 743 - MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x141
 743 + MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x140
 744 744 >;
 745 745 };
 746 746
 747 747 pinctrl_pps: ppsgrp {
 748 748 fsl,pins = <
 749 - MX8MP_IOMUXC_GPIO1_IO12__GPIO1_IO12 0x141
 749 + MX8MP_IOMUXC_GPIO1_IO12__GPIO1_IO12 0x140
 750 750 >;
 751 751 };
 752 752
 ··· 758 758
 759 759 pinctrl_reg_usb2: regusb2grp {
 760 760 fsl,pins = <
 761 - MX8MP_IOMUXC_GPIO1_IO06__GPIO1_IO06 0x141
 761 + MX8MP_IOMUXC_GPIO1_IO06__GPIO1_IO06 0x140
 762 762 >;
 763 763 };
 764 764
 765 765 pinctrl_reg_wifi: regwifigrp {
 766 766 fsl,pins = <
 767 - MX8MP_IOMUXC_NAND_DATA03__GPIO3_IO09 0x119
 767 + MX8MP_IOMUXC_NAND_DATA03__GPIO3_IO09 0x110
 768 768 >;
 769 769 };
 770 770
 ··· 811 811
 812 812 pinctrl_uart3_gpio: uart3gpiogrp {
 813 813 fsl,pins = <
 814 - MX8MP_IOMUXC_NAND_DATA02__GPIO3_IO08 0x119
 814 + MX8MP_IOMUXC_NAND_DATA02__GPIO3_IO08 0x110
 815 815 >;
 816 816 };
 817 817
+1 -1
arch/arm64/boot/dts/freescale/imx8mp.dtsi
··· 595 595 pgc_ispdwp: power-domain@18 {
 596 596 #power-domain-cells = <0>;
 597 597 reg = <IMX8MP_POWER_DOMAIN_MEDIAMIX_ISPDWP>;
 598 - clocks = <&clk IMX8MP_CLK_MEDIA_ISP_DIV>;
 598 + clocks = <&clk IMX8MP_CLK_MEDIA_ISP_ROOT>;
 599 599 };
 600 600 };
 601 601 };
+1 -1
arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi
··· 74 74 vdd_l17_29-supply = <&vph_pwr>;
 75 75 vdd_l20_21-supply = <&vph_pwr>;
 76 76 vdd_l25-supply = <&pm8994_s5>;
 77 - vdd_lvs1_2 = <&pm8994_s4>;
 77 + vdd_lvs1_2-supply = <&pm8994_s4>;
 78 78
 79 79 /* S1, S2, S6 and S12 are managed by RPMPD */
 80 80
+1 -1
arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts
··· 171 171 vdd_l17_29-supply = <&vph_pwr>;
 172 172 vdd_l20_21-supply = <&vph_pwr>;
 173 173 vdd_l25-supply = <&pm8994_s5>;
 174 - vdd_lvs1_2 = <&pm8994_s4>;
 174 + vdd_lvs1_2-supply = <&pm8994_s4>;
 175 175
 176 176 /* S1, S2, S6 and S12 are managed by RPMPD */
 177 177
+2 -2
arch/arm64/boot/dts/qcom/msm8994.dtsi
··· 100 100 CPU6: cpu@102 {
 101 101 device_type = "cpu";
 102 102 compatible = "arm,cortex-a57";
 103 - reg = <0x0 0x101>;
 103 + reg = <0x0 0x102>;
 104 104 enable-method = "psci";
 105 105 next-level-cache = <&L2_1>;
 106 106 };
 ··· 108 108 CPU7: cpu@103 {
 109 109 device_type = "cpu";
 110 110 compatible = "arm,cortex-a57";
 111 - reg = <0x0 0x101>;
 111 + reg = <0x0 0x103>;
 112 112 enable-method = "psci";
 113 113 next-level-cache = <&L2_1>;
 114 114 };
+1 -1
arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
··· 5 5 * Copyright 2021 Google LLC.
 6 6 */
 7 7
 8 - #include "sc7180-trogdor.dtsi"
 8 + /* This file must be included after sc7180-trogdor.dtsi */
 9 9
 10 10 / {
 11 11 /* BOARD-SPECIFIC TOP LEVEL NODES */
+1 -1
arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor.dtsi
··· 5 5 * Copyright 2020 Google LLC.
 6 6 */
 7 7
 8 - #include "sc7180-trogdor.dtsi"
 8 + /* This file must be included after sc7180-trogdor.dtsi */
 9 9
 10 10 &ap_sar_sensor {
 11 11 semtech,cs0-ground;
+1 -1
arch/arm64/boot/dts/qcom/sdm845.dtsi
··· 4244 4244
 4245 4245 power-domains = <&dispcc MDSS_GDSC>;
 4246 4246
 4247 - clocks = <&gcc GCC_DISP_AHB_CLK>,
 4247 + clocks = <&dispcc DISP_CC_MDSS_AHB_CLK>,
 4248 4248 <&dispcc DISP_CC_MDSS_MDP_CLK>;
 4249 4249 clock-names = "iface", "core";
 4250 4250
+12 -2
arch/arm64/boot/dts/qcom/sm8450.dtsi
··· 2853 2853 reg = <0x0 0x17100000 0x0 0x10000>, /* GICD */
 2854 2854 <0x0 0x17180000 0x0 0x200000>; /* GICR * 8 */
 2855 2855 interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
 2856 + #address-cells = <2>;
 2857 + #size-cells = <2>;
 2858 + ranges;
 2859 +
 2860 + gic_its: msi-controller@17140000 {
 2861 + compatible = "arm,gic-v3-its";
 2862 + reg = <0x0 0x17140000 0x0 0x20000>;
 2863 + msi-controller;
 2864 + #msi-cells = <1>;
 2865 + };
 2856 2866 };
 2857 2867
 2858 2868 timer@17420000 {
 ··· 3047 3037
 3048 3038 iommus = <&apps_smmu 0xe0 0x0>;
 3049 3039
 3050 - interconnects = <&aggre1_noc MASTER_UFS_MEM &mc_virt SLAVE_EBI1>,
 3051 - <&gem_noc MASTER_APPSS_PROC &config_noc SLAVE_UFS_MEM_CFG>;
 3040 + interconnects = <&aggre1_noc MASTER_UFS_MEM 0 &mc_virt SLAVE_EBI1 0>,
 3041 + <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_UFS_MEM_CFG 0>;
 3052 3042 interconnect-names = "ufs-ddr", "cpu-ufs";
 3053 3043 clock-names =
 3054 3044 "core_clk",
+21 -9
arch/arm64/mm/hugetlbpage.c
··· 214 214 return orig_pte;
 215 215 }
 216 216
 217 + static pte_t get_clear_contig_flush(struct mm_struct *mm,
 218 + unsigned long addr,
 219 + pte_t *ptep,
 220 + unsigned long pgsize,
 221 + unsigned long ncontig)
 222 + {
 223 + pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
 224 + struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
 225 +
 226 + flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
 227 + return orig_pte;
 228 + }
 229 +
 217 230 /*
 218 231 * Changing some bits of contiguous entries requires us to follow a
 219 232 * Break-Before-Make approach, breaking the whole contiguous set
 ··· 460 447 int ncontig, i;
 461 448 size_t pgsize = 0;
 462 449 unsigned long pfn = pte_pfn(pte), dpfn;
 450 + struct mm_struct *mm = vma->vm_mm;
 463 451 pgprot_t hugeprot;
 464 452 pte_t orig_pte;
 465 453
 466 454 if (!pte_cont(pte))
 467 455 return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
 468 456
 469 - ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
 457 + ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 470 458 dpfn = pgsize >> PAGE_SHIFT;
 471 459
 472 460 if (!__cont_access_flags_changed(ptep, pte, ncontig))
 473 461 return 0;
 474 462
 475 - orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
 463 + orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 476 464
 477 465 /* Make sure we don't lose the dirty or young state */
 478 466 if (pte_dirty(orig_pte))
 ··· 484 470
 485 471 hugeprot = pte_pgprot(pte);
 486 472 for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
 487 - set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
 473 + set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
 488 474
 489 475 return 1;
 490 476 }
 ··· 506 492 ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 507 493 dpfn = pgsize >> PAGE_SHIFT;
 508 494
 509 - pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
 495 + pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 510 496 pte = pte_wrprotect(pte);
 511 497
 512 498 hugeprot = pte_pgprot(pte);
 ··· 519 505 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
 520 506 unsigned long addr, pte_t *ptep)
 521 507 {
 508 + struct mm_struct *mm = vma->vm_mm;
 522 509 size_t pgsize;
 523 510 int ncontig;
 524 - pte_t orig_pte;
 525 511
 526 512 if (!pte_cont(READ_ONCE(*ptep)))
 527 513 return ptep_clear_flush(vma, addr, ptep);
 528 514
 529 - ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
 530 - orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
 531 - flush_tlb_range(vma, addr, addr + pgsize * ncontig);
 532 - return orig_pte;
 515 + ncontig = find_num_contig(mm, addr, ptep, &pgsize);
 516 + return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
 533 517 }
 534 518
 535 519 static int __init hugetlbpage_init(void)
-4
arch/loongarch/Kconfig
··· 54 54 select GENERIC_CMOS_UPDATE
 55 55 select GENERIC_CPU_AUTOPROBE
 56 56 select GENERIC_ENTRY
 57 - select GENERIC_FIND_FIRST_BIT
 58 57 select GENERIC_GETTIMEOFDAY
 59 58 select GENERIC_IRQ_MULTI_HANDLER
 60 59 select GENERIC_IRQ_PROBE
 ··· 76 77 select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 77 78 select HAVE_ASM_MODVERSIONS
 78 79 select HAVE_CONTEXT_TRACKING
 79 - select HAVE_COPY_THREAD_TLS
 80 80 select HAVE_DEBUG_STACKOVERFLOW
 81 81 select HAVE_DMA_CONTIGUOUS
 82 82 select HAVE_EXIT_THREAD
 ··· 84 86 select HAVE_IOREMAP_PROT
 85 87 select HAVE_IRQ_EXIT_ON_IRQ_STACK
 86 88 select HAVE_IRQ_TIME_ACCOUNTING
 87 - select HAVE_MEMBLOCK
 88 - select HAVE_MEMBLOCK_NODE_MAP
 89 89 select HAVE_MOD_ARCH_SPECIFIC
 90 90 select HAVE_NMI
 91 91 select HAVE_PERF_EVENTS
-1
arch/loongarch/include/asm/fpregdef.h
··· 48 48 #define fcsr1 $r1
 49 49 #define fcsr2 $r2
 50 50 #define fcsr3 $r3
 51 - #define vcsr16 $r16
 52 51
 53 52 #endif /* _ASM_FPREGDEF_H */
+1
arch/loongarch/include/asm/page.h
··· 6 6 #define _ASM_PAGE_H
 7 7
 8 8 #include <linux/const.h>
 9 + #include <asm/addrspace.h>
 9 10
 10 11 /*
 11 12 * PAGE_SHIFT determines the page size
-2
arch/loongarch/include/asm/processor.h
··· 80 80
 81 81 struct loongarch_fpu {
 82 82 unsigned int fcsr;
 83 - unsigned int vcsr;
 84 83 uint64_t fcc; /* 8x8 */
 85 84 union fpureg fpr[NUM_FPU_REGS];
 86 85 };
 ··· 160 161 */ \
 161 162 .fpu = { \
 162 163 .fcsr = 0, \
 163 - .vcsr = 0, \
 164 164 .fcc = 0, \
 165 165 .fpr = {{{0,},},}, \
 166 166 }, \
-1
arch/loongarch/kernel/asm-offsets.c
··· 166 166
 167 167 OFFSET(THREAD_FCSR, loongarch_fpu, fcsr);
 168 168 OFFSET(THREAD_FCC, loongarch_fpu, fcc);
 169 - OFFSET(THREAD_VCSR, loongarch_fpu, vcsr);
 170 169 BLANK();
 171 170 }
 172 171
-10
arch/loongarch/kernel/fpu.S
··· 146 146 movgr2fcsr fcsr0, \tmp0
 147 147 .endm
 148 148
 149 - .macro sc_save_vcsr base, tmp0
 150 - movfcsr2gr \tmp0, vcsr16
 151 - EX st.w \tmp0, \base, 0
 152 - .endm
 153 -
 154 - .macro sc_restore_vcsr base, tmp0
 155 - EX ld.w \tmp0, \base, 0
 156 - movgr2fcsr vcsr16, \tmp0
 157 - .endm
 158 -
 159 149 /*
 160 150 * Save a thread's fp context.
 161 151 */
-1
arch/loongarch/kernel/numa.c
··· 429 429 return 0;
 430 430 }
 431 431
 432 - EXPORT_SYMBOL(init_numa_memory);
 433 432 #endif
 434 433
 435 434 void __init paging_init(void)
+1
arch/loongarch/vdso/Makefile
··· 21 21 endif
 22 22
 23 23 cflags-vdso := $(ccflags-vdso) \
 24 + -isystem $(shell $(CC) -print-file-name=include) \
 24 25 $(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \
 25 26 -O2 -g -fno-strict-aliasing -fno-common -fno-builtin -G0 \
 26 27 -fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+1 -1
arch/openrisc/kernel/unwinder.c
··· 25 25 /*
 26 26 * Verify a frameinfo structure. The return address should be a valid text
 27 27 * address. The frame pointer may be null if its the last frame, otherwise
 28 - * the frame pointer should point to a location in the stack after the the
 28 + * the frame pointer should point to a location in the stack after the
 29 29 * top of the next frame up.
 30 30 */
 31 31 static inline int or1k_frameinfo_valid(struct or1k_frameinfo *frameinfo)
+5
arch/parisc/kernel/asm-offsets.c
··· 224 224 BLANK();
 225 225 DEFINE(ASM_SIGFRAME_SIZE, PARISC_RT_SIGFRAME_SIZE);
 226 226 DEFINE(SIGFRAME_CONTEXT_REGS, offsetof(struct rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE);
 227 + #ifdef CONFIG_64BIT
 227 228 DEFINE(ASM_SIGFRAME_SIZE32, PARISC_RT_SIGFRAME_SIZE32);
 228 229 DEFINE(SIGFRAME_CONTEXT_REGS32, offsetof(struct compat_rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE32);
 230 + #else
 231 + DEFINE(ASM_SIGFRAME_SIZE32, PARISC_RT_SIGFRAME_SIZE);
 232 + DEFINE(SIGFRAME_CONTEXT_REGS32, offsetof(struct rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE);
 233 + #endif
 229 234 BLANK();
 230 235 DEFINE(ICACHE_BASE, offsetof(struct pdc_cache_info, ic_base));
 231 236 DEFINE(ICACHE_STRIDE, offsetof(struct pdc_cache_info, ic_stride));
+1 -1
arch/parisc/kernel/unaligned.c
··· 146 146 " depw %%r0,31,2,%4\n"
 147 147 "1: ldw 0(%%sr1,%4),%0\n"
 148 148 "2: ldw 4(%%sr1,%4),%3\n"
 149 - " subi 32,%4,%2\n"
 149 + " subi 32,%2,%2\n"
 150 150 " mtctl %2,11\n"
 151 151 " vshd %0,%3,%0\n"
 152 152 "3: \n"
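The unaligned.c hunk above corrects which register feeds the shift-count: the fixed `subi 32,%2,%2` derives the merge shift from the byte-offset register rather than the address register. As a rough userspace sketch (not kernel code; the function name and the big-endian word layout here are illustrative assumptions), an unaligned 32-bit load on a word-aligned machine can be emulated by fetching the two aligned words around the address and merging them by the bit offset:

```python
def unaligned_ldw(mem: bytes, addr: int) -> int:
    """Emulate an unaligned big-endian 32-bit load from two aligned words."""
    base = addr & ~3                              # aligned word containing addr
    w0 = int.from_bytes(mem[base:base + 4], "big")
    w1 = int.from_bytes(mem[base + 4:base + 8], "big")
    shift = 8 * (addr & 3)                        # bit offset of addr within w0
    # Merge: high part comes from w0, low part from the next word.
    # When shift == 0, w1 >> 32 is 0 and the result is just w0.
    return ((w0 << shift) | (w1 >> (32 - shift))) & 0xFFFFFFFF
```

The bug the diff fixes is exactly a wrong `shift` operand: computing `32 - address` instead of `32 - offset` before the double-word shift.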
+4
arch/powerpc/Kconfig
··· 358 358 def_bool y
 359 359 depends on PPC_POWERNV || PPC_PSERIES
 360 360
 361 + config ARCH_HAS_ADD_PAGES
 362 + def_bool y
 363 + depends on ARCH_ENABLE_MEMORY_HOTPLUG
 364 +
 361 365 config PPC_DCR_NATIVE
 362 366 bool
 363 367
+9
arch/powerpc/include/asm/bpf_perf_event.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */
 2 + #ifndef _ASM_POWERPC_BPF_PERF_EVENT_H
 3 + #define _ASM_POWERPC_BPF_PERF_EVENT_H
 4 +
 5 + #include <asm/ptrace.h>
 6 +
 7 + typedef struct user_pt_regs bpf_user_pt_regs_t;
 8 +
 9 + #endif /* _ASM_POWERPC_BPF_PERF_EVENT_H */
-9
arch/powerpc/include/uapi/asm/bpf_perf_event.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
 2 - #ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
 3 - #define _UAPI__ASM_BPF_PERF_EVENT_H__
 4 -
 5 - #include <asm/ptrace.h>
 6 -
 7 - typedef struct user_pt_regs bpf_user_pt_regs_t;
 8 -
 9 - #endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */
+1 -1
arch/powerpc/kernel/prom_init_check.sh
··· 13 13 # If you really need to reference something from prom_init.o add
 14 14 # it to the list below:
 15 15
 16 - grep "^CONFIG_KASAN=y$" .config >/dev/null
 16 + grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null
 17 17 if [ $? -eq 0 ]
 18 18 then
 19 19 MEM_FUNCS="__memcpy __memset"
+32 -1
arch/powerpc/mm/mem.c
··· 105 105 vm_unmap_aliases();
 106 106 }
 107 107
 108 + /*
 109 + * After memory hotplug the variables max_pfn, max_low_pfn and high_memory need
 110 + * updating.
 111 + */
 112 + static void update_end_of_memory_vars(u64 start, u64 size)
 113 + {
 114 + unsigned long end_pfn = PFN_UP(start + size);
 115 +
 116 + if (end_pfn > max_pfn) {
 117 + max_pfn = end_pfn;
 118 + max_low_pfn = end_pfn;
 119 + high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1;
 120 + }
 121 + }
 122 +
 123 + int __ref add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
 124 + struct mhp_params *params)
 125 + {
 126 + int ret;
 127 +
 128 + ret = __add_pages(nid, start_pfn, nr_pages, params);
 129 + if (ret)
 130 + return ret;
 131 +
 132 + /* update max_pfn, max_low_pfn and high_memory */
 133 + update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
 134 + nr_pages << PAGE_SHIFT);
 135 +
 136 + return ret;
 137 + }
 138 +
 108 139 int __ref arch_add_memory(int nid, u64 start, u64 size,
 109 140 struct mhp_params *params)
 110 141 {
 ··· 146 115 rc = arch_create_linear_mapping(nid, start, size, params);
 147 116 if (rc)
 148 117 return rc;
 149 - rc = __add_pages(nid, start_pfn, nr_pages, params);
 118 + rc = add_pages(nid, start_pfn, nr_pages, params);
 150 119 if (rc)
 151 120 arch_remove_linear_mapping(start, size);
 152 121 return rc;
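The bookkeeping the mem.c hunk above adds is simple: after hotplugging the range [start, start + size), the recorded end of memory must grow if the new range extends past it, with the end rounded up to a page frame (PFN_UP). A small sketch of that arithmetic (PAGE_SIZE of 4 KiB is an illustrative assumption; the real code also mirrors the result into max_low_pfn and high_memory):

```python
PAGE_SIZE = 4096  # assumed page size for illustration

def update_end_of_memory_vars(max_pfn: int, start: int, size: int) -> int:
    """Return the new max_pfn after adding the range [start, start + size)."""
    end_pfn = -(-(start + size) // PAGE_SIZE)  # PFN_UP: round up to a full page
    return max(max_pfn, end_pfn)
```

This only ever grows max_pfn, which matches the `if (end_pfn > max_pfn)` guard in the diff.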
+3 -3
arch/powerpc/mm/nohash/book3e_pgtable.c
··· 96 96 pgdp = pgd_offset_k(ea);
 97 97 p4dp = p4d_offset(pgdp, ea);
 98 98 if (p4d_none(*p4dp)) {
 99 - pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
 100 - p4d_populate(&init_mm, p4dp, pmdp);
 99 + pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
 100 + p4d_populate(&init_mm, p4dp, pudp);
 101 101 }
 102 102 pudp = pud_offset(p4dp, ea);
 103 103 if (pud_none(*pudp)) {
 ··· 106 106 }
 107 107 pmdp = pmd_offset(pudp, ea);
 108 108 if (!pmd_present(*pmdp)) {
 109 - ptep = early_alloc_pgtable(PAGE_SIZE);
 109 + ptep = early_alloc_pgtable(PTE_TABLE_SIZE);
 110 110 pmd_populate_kernel(&init_mm, pmdp, ptep);
 111 111 }
 112 112 ptep = pte_offset_kernel(pmdp, ea);
+10 -6
arch/powerpc/platforms/powernv/rng.c
··· 176 176 NULL) != pnv_get_random_long_early)
 177 177 return 0;
 178 178
 179 - for_each_compatible_node(dn, NULL, "ibm,power-rng") {
 180 - if (rng_create(dn))
 181 - continue;
 182 - /* Create devices for hwrng driver */
 183 - of_platform_device_create(dn, NULL, NULL);
 184 - }
 179 + for_each_compatible_node(dn, NULL, "ibm,power-rng")
 180 + rng_create(dn);
 185 181
 186 182 if (!ppc_md.get_random_seed)
 187 183 return 0;
 ··· 201 205
 202 206 static int __init pnv_rng_late_init(void)
 203 207 {
 208 + struct device_node *dn;
 204 209 unsigned long v;
 210 +
 205 211 /* In case it wasn't called during init for some other reason. */
 206 212 if (ppc_md.get_random_seed == pnv_get_random_long_early)
 207 213 pnv_get_random_long_early(&v);
 214 +
 215 + if (ppc_md.get_random_seed == powernv_get_random_long) {
 216 + for_each_compatible_node(dn, NULL, "ibm,power-rng")
 217 + of_platform_device_create(dn, NULL, NULL);
 218 + }
 219 +
 208 220 return 0;
 209 221 }
 210 222 machine_subsys_initcall(powernv, pnv_rng_late_init);
+3 -2
arch/powerpc/sysdev/xive/spapr.c
··· 15 15 #include <linux/of_fdt.h>
 16 16 #include <linux/slab.h>
 17 17 #include <linux/spinlock.h>
 18 + #include <linux/bitmap.h>
 18 19 #include <linux/cpumask.h>
 19 20 #include <linux/mm.h>
 20 21 #include <linux/delay.h>
 ··· 58 57 spin_lock_init(&xibm->lock);
 59 58 xibm->base = base;
 60 59 xibm->count = count;
 61 - xibm->bitmap = kzalloc(xibm->count, GFP_KERNEL);
 60 + xibm->bitmap = bitmap_zalloc(xibm->count, GFP_KERNEL);
 62 61 if (!xibm->bitmap) {
 63 62 kfree(xibm);
 64 63 return -ENOMEM;
 ··· 76 75
 77 76 list_for_each_entry_safe(xibm, tmp, &xive_irq_bitmaps, list) {
 78 77 list_del(&xibm->list);
 79 - kfree(xibm->bitmap);
 78 + bitmap_free(xibm->bitmap);
 80 79 kfree(xibm);
 81 80 }
 82 81 }
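The spapr.c hunk above switches the IRQ bitmap from a raw `kzalloc(count)` to `bitmap_zalloc(count)`, which sizes the allocation from a bit count rather than a byte count. A sketch of the sizing arithmetic (BITS_PER_LONG = 64 is an illustrative assumption; the kernel's BITS_TO_LONGS macro does the same rounding):

```python
BITS_PER_LONG = 64  # assumed word size for illustration

def bits_to_longs(nbits: int) -> int:
    """Number of longs needed to hold nbits bits, rounding up."""
    return (nbits + BITS_PER_LONG - 1) // BITS_PER_LONG

def bitmap_alloc_bytes(nbits: int) -> int:
    """Bytes a correctly sized bitmap allocation occupies."""
    return bits_to_longs(nbits) * (BITS_PER_LONG // 8)
```

The paired `bitmap_free()` in the teardown path keeps the allocation and release APIs symmetric.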
-1
arch/s390/Kconfig
··· 484 484 config KEXEC_FILE
 485 485 bool "kexec file based system call"
 486 486 select KEXEC_CORE
 487 - select BUILD_BIN2C
 488 487 depends on CRYPTO
 489 488 depends on CRYPTO_SHA256
 490 489 depends on CRYPTO_SHA256_S390
-217
arch/s390/crypto/arch_random.c
··· 4 4 *
 5 5 * Copyright IBM Corp. 2017, 2020
 6 6 * Author(s): Harald Freudenberger
 7 - *
 8 - * The s390_arch_random_generate() function may be called from random.c
 9 - * in interrupt context. So this implementation does the best to be very
 10 - * fast. There is a buffer of random data which is asynchronously checked
 11 - * and filled by a workqueue thread.
 12 - * If there are enough bytes in the buffer the s390_arch_random_generate()
 13 - * just delivers these bytes. Otherwise false is returned until the
 14 - * worker thread refills the buffer.
 15 - * The worker fills the rng buffer by pulling fresh entropy from the
 16 - * high quality (but slow) true hardware random generator. This entropy
 17 - * is then spread over the buffer with an pseudo random generator PRNG.
 18 - * As the arch_get_random_seed_long() fetches 8 bytes and the calling
 19 - * function add_interrupt_randomness() counts this as 1 bit entropy the
 20 - * distribution needs to make sure there is in fact 1 bit entropy contained
 21 - * in 8 bytes of the buffer. The current values pull 32 byte entropy
 22 - * and scatter this into a 2048 byte buffer. So 8 byte in the buffer
 23 - * will contain 1 bit of entropy.
 24 - * The worker thread is rescheduled based on the charge level of the
 25 - * buffer but at least with 500 ms delay to avoid too much CPU consumption.
 26 - * So the max. amount of rng data delivered via arch_get_random_seed is
 27 - * limited to 4k bytes per second.
 28 7 */
 29 8
 30 9 #include <linux/kernel.h>
 31 10 #include <linux/atomic.h>
 32 11 #include <linux/random.h>
 33 - #include <linux/slab.h>
 34 12 #include <linux/static_key.h>
 35 - #include <linux/workqueue.h>
 36 - #include <linux/moduleparam.h>
 37 13 #include <asm/cpacf.h>
 38 14
 39 15 DEFINE_STATIC_KEY_FALSE(s390_arch_random_available);
 40 16
 41 17 atomic64_t s390_arch_random_counter = ATOMIC64_INIT(0);
 42 18 EXPORT_SYMBOL(s390_arch_random_counter);
 43 -
 44 - #define ARCH_REFILL_TICKS (HZ/2)
 45 - #define ARCH_PRNG_SEED_SIZE 32
 46 - #define ARCH_RNG_BUF_SIZE 2048
 47 -
 48 - static DEFINE_SPINLOCK(arch_rng_lock);
 49 - static u8 *arch_rng_buf;
 50 - static unsigned int arch_rng_buf_idx;
 51 -
 52 - static void arch_rng_refill_buffer(struct work_struct *);
 53 - static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer);
 54 -
 55 - bool s390_arch_random_generate(u8 *buf, unsigned int nbytes)
 56 - {
 57 - /* max hunk is ARCH_RNG_BUF_SIZE */
 58 - if (nbytes > ARCH_RNG_BUF_SIZE)
 59 - return false;
 60 -
 61 - /* lock rng buffer */
 62 - if (!spin_trylock(&arch_rng_lock))
 63 - return false;
 64 -
 65 - /* try to resolve the requested amount of bytes from the buffer */
 66 - arch_rng_buf_idx -= nbytes;
 67 - if (arch_rng_buf_idx < ARCH_RNG_BUF_SIZE) {
 68 - memcpy(buf, arch_rng_buf + arch_rng_buf_idx, nbytes);
 69 - atomic64_add(nbytes, &s390_arch_random_counter);
 70 - spin_unlock(&arch_rng_lock);
 71 - return true;
 72 - }
 73 -
 74 - /* not enough bytes in rng buffer, refill is done asynchronously */
 75 - spin_unlock(&arch_rng_lock);
 76 -
 77 - return false;
 78 - }
 79 - EXPORT_SYMBOL(s390_arch_random_generate);
 80 -
 81 - static void arch_rng_refill_buffer(struct work_struct *unused)
 82 - {
 83 - unsigned int delay = ARCH_REFILL_TICKS;
 84 -
 85 - spin_lock(&arch_rng_lock);
 86 - if (arch_rng_buf_idx > ARCH_RNG_BUF_SIZE) {
 87 - /* buffer is exhausted and needs refill */
 88 - u8 seed[ARCH_PRNG_SEED_SIZE];
 89 - u8 prng_wa[240];
 90 - /* fetch ARCH_PRNG_SEED_SIZE bytes of entropy */
 91 - cpacf_trng(NULL, 0, seed, sizeof(seed));
 92 - /* blow this entropy up to ARCH_RNG_BUF_SIZE with PRNG */
 93 - memset(prng_wa, 0, sizeof(prng_wa));
 94 - cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
 95 - &prng_wa, NULL, 0, seed, sizeof(seed));
 96 - cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN,
 97 - &prng_wa, arch_rng_buf, ARCH_RNG_BUF_SIZE, NULL, 0);
 98 - arch_rng_buf_idx = ARCH_RNG_BUF_SIZE;
 99 - }
 100 - delay += (ARCH_REFILL_TICKS * arch_rng_buf_idx) / ARCH_RNG_BUF_SIZE;
 101 - spin_unlock(&arch_rng_lock);
 102 -
 103 - /* kick next check */
 104 - queue_delayed_work(system_long_wq, &arch_rng_work, delay);
 105 - }
 106 -
 107 - /*
 108 - * Here follows the implementation of s390_arch_get_random_long().
 109 - *
 110 - * The random longs to be pulled by arch_get_random_long() are
 111 - * prepared in an 4K buffer which is filled from the NIST 800-90
 112 - * compliant s390 drbg. By default the random long buffer is refilled
 113 - * 256 times before the drbg itself needs a reseed. The reseed of the
 114 - * drbg is done with 32 bytes fetched from the high quality (but slow)
 115 - * trng which is assumed to deliver 100% entropy. So the 32 * 8 = 256
 116 - * bits of entropy are spread over 256 * 4KB = 1MB serving 131072
 117 - * arch_get_random_long() invocations before reseeded.
 118 - *
 119 - * How often the 4K random long buffer is refilled with the drbg
 120 - * before the drbg is reseeded can be adjusted. There is a module
 121 - * parameter 's390_arch_rnd_long_drbg_reseed' accessible via
 122 - * /sys/module/arch_random/parameters/rndlong_drbg_reseed
 123 - * or as kernel command line parameter
 124 - * arch_random.rndlong_drbg_reseed=<value>
 125 - * This parameter tells how often the drbg fills the 4K buffer before
 126 - * it is re-seeded by fresh entropy from the trng.
 127 - * A value of 16 results in reseeding the drbg at every 16 * 4 KB = 64
 128 - * KB with 32 bytes of fresh entropy pulled from the trng. So a value
 129 - * of 16 would result in 256 bits entropy per 64 KB.
 130 - * A value of 256 results in 1MB of drbg output before a reseed of the
 131 - * drbg is done. So this would spread the 256 bits of entropy among 1MB.
 132 - * Setting this parameter to 0 forces the reseed to take place every
 133 - * time the 4K buffer is depleted, so the entropy rises to 256 bits
 134 - * entropy per 4K or 0.5 bit entropy per arch_get_random_long(). With
 135 - * setting this parameter to negative values all this effort is
 136 - * disabled, arch_get_random long() returns false and thus indicating
 137 - * that the arch_get_random_long() feature is disabled at all.
 138 - */
 139 -
 140 - static unsigned long rndlong_buf[512];
 141 - static DEFINE_SPINLOCK(rndlong_lock);
 142 - static int rndlong_buf_index;
 143 -
 144 - static int rndlong_drbg_reseed = 256;
 145 - module_param_named(rndlong_drbg_reseed, rndlong_drbg_reseed, int, 0600);
 146 - MODULE_PARM_DESC(rndlong_drbg_reseed, "s390 arch_get_random_long() drbg reseed");
 147 -
 148 - static inline void refill_rndlong_buf(void)
 149 - {
 150 - static u8 prng_ws[240];
 151 - static int drbg_counter;
 152 -
 153 - if (--drbg_counter < 0) {
 154 - /* need to re-seed the drbg */
 155 - u8 seed[32];
 156 -
 157 - /* fetch seed from trng */
 158 - cpacf_trng(NULL, 0, seed, sizeof(seed));
 159 - /* seed drbg */
 160 - memset(prng_ws, 0, sizeof(prng_ws));
 161 - cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED,
 162 - &prng_ws, NULL, 0, seed, sizeof(seed));
 163 - /* re-init counter for drbg */
 164 - drbg_counter = rndlong_drbg_reseed;
 165 - }
 166 -
 167 - /* fill the arch_get_random_long buffer from drbg */
 168 - cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN, &prng_ws,
 169 - (u8 *) rndlong_buf, sizeof(rndlong_buf),
 170 - NULL, 0);
 171 - }
 172 -
 173 - bool s390_arch_get_random_long(unsigned long *v)
 174 - {
 175 - bool rc = false;
 176 - unsigned long flags;
 177 -
 178 - /* arch_get_random_long() disabled ?
*/ 179 - if (rndlong_drbg_reseed < 0) 180 - return false; 181 - 182 - /* try to lock the random long lock */ 183 - if (!spin_trylock_irqsave(&rndlong_lock, flags)) 184 - return false; 185 - 186 - if (--rndlong_buf_index >= 0) { 187 - /* deliver next long value from the buffer */ 188 - *v = rndlong_buf[rndlong_buf_index]; 189 - rc = true; 190 - goto out; 191 - } 192 - 193 - /* buffer is depleted and needs refill */ 194 - if (in_interrupt()) { 195 - /* delay refill in interrupt context to next caller */ 196 - rndlong_buf_index = 0; 197 - goto out; 198 - } 199 - 200 - /* refill random long buffer */ 201 - refill_rndlong_buf(); 202 - rndlong_buf_index = ARRAY_SIZE(rndlong_buf); 203 - 204 - /* and provide one random long */ 205 - *v = rndlong_buf[--rndlong_buf_index]; 206 - rc = true; 207 - 208 - out: 209 - spin_unlock_irqrestore(&rndlong_lock, flags); 210 - return rc; 211 - } 212 - EXPORT_SYMBOL(s390_arch_get_random_long); 213 - 214 - static int __init s390_arch_random_init(void) 215 - { 216 - /* all the needed PRNO subfunctions available ? */ 217 - if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG) && 218 - cpacf_query_func(CPACF_PRNO, CPACF_PRNO_SHA512_DRNG_GEN)) { 219 - 220 - /* alloc arch random working buffer */ 221 - arch_rng_buf = kmalloc(ARCH_RNG_BUF_SIZE, GFP_KERNEL); 222 - if (!arch_rng_buf) 223 - return -ENOMEM; 224 - 225 - /* kick worker queue job to fill the random buffer */ 226 - queue_delayed_work(system_long_wq, 227 - &arch_rng_work, ARCH_REFILL_TICKS); 228 - 229 - /* enable arch random to the outside world */ 230 - static_branch_enable(&s390_arch_random_available); 231 - } 232 - 233 - return 0; 234 - } 235 - arch_initcall(s390_arch_random_init);
+7 -7
arch/s390/include/asm/archrandom.h
··· 15 15 16 16 #include <linux/static_key.h> 17 17 #include <linux/atomic.h> 18 + #include <asm/cpacf.h> 18 19 19 20 DECLARE_STATIC_KEY_FALSE(s390_arch_random_available); 20 21 extern atomic64_t s390_arch_random_counter; 21 22 22 - bool s390_arch_get_random_long(unsigned long *v); 23 - bool s390_arch_random_generate(u8 *buf, unsigned int nbytes); 24 - 25 23 static inline bool __must_check arch_get_random_long(unsigned long *v) 26 24 { 27 - if (static_branch_likely(&s390_arch_random_available)) 28 - return s390_arch_get_random_long(v); 29 25 return false; 30 26 } 31 27 ··· 33 37 static inline bool __must_check arch_get_random_seed_long(unsigned long *v) 34 38 { 35 39 if (static_branch_likely(&s390_arch_random_available)) { 36 - return s390_arch_random_generate((u8 *)v, sizeof(*v)); 40 + cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v)); 41 + atomic64_add(sizeof(*v), &s390_arch_random_counter); 42 + return true; 37 43 } 38 44 return false; 39 45 } ··· 43 45 static inline bool __must_check arch_get_random_seed_int(unsigned int *v) 44 46 { 45 47 if (static_branch_likely(&s390_arch_random_available)) { 46 - return s390_arch_random_generate((u8 *)v, sizeof(*v)); 48 + cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v)); 49 + atomic64_add(sizeof(*v), &s390_arch_random_counter); 50 + return true; 47 51 } 48 52 return false; 49 53 }
+3 -3
arch/s390/include/asm/qdio.h
··· 133 133 * @sb_count: number of storage blocks 134 134 * @sba: storage block element addresses 135 135 * @dcount: size of storage block elements 136 - * @user0: user defineable value 137 - * @res4: reserved paramater 138 - * @user1: user defineable value 136 + * @user0: user definable value 137 + * @res4: reserved parameter 138 + * @user1: user definable value 139 139 */ 140 140 struct qaob { 141 141 u64 res0[6];
+5
arch/s390/kernel/setup.c
··· 875 875 if (stsi(vmms, 3, 2, 2) == 0 && vmms->count) 876 876 add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count); 877 877 memblock_free(vmms, PAGE_SIZE); 878 + 879 + #ifdef CONFIG_ARCH_RANDOM 880 + if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG)) 881 + static_branch_enable(&s390_arch_random_available); 882 + #endif 878 883 } 879 884 880 885 /*
+2 -3
arch/s390/purgatory/Makefile
··· 48 48 $(obj)/purgatory.ro: $(obj)/purgatory $(obj)/purgatory.chk FORCE 49 49 $(call if_changed,objcopy) 50 50 51 - $(obj)/kexec-purgatory.o: $(obj)/kexec-purgatory.S $(obj)/purgatory.ro FORCE 52 - $(call if_changed_rule,as_o_S) 51 + $(obj)/kexec-purgatory.o: $(obj)/purgatory.ro 53 52 54 - obj-$(CONFIG_ARCH_HAS_KEXEC_PURGATORY) += kexec-purgatory.o 53 + obj-y += kexec-purgatory.o
+13
arch/x86/boot/compressed/ident_map_64.c
··· 110 110 void initialize_identity_maps(void *rmode) 111 111 { 112 112 unsigned long cmdline; 113 + struct setup_data *sd; 113 114 114 115 /* Exclude the encryption mask from __PHYSICAL_MASK */ 115 116 physical_mask &= ~sme_me_mask; ··· 163 162 kernel_add_identity_map((unsigned long)boot_params, (unsigned long)(boot_params + 1)); 164 163 cmdline = get_cmd_line_ptr(); 165 164 kernel_add_identity_map(cmdline, cmdline + COMMAND_LINE_SIZE); 165 + 166 + /* 167 + * Also map the setup_data entries passed via boot_params in case they 168 + * need to be accessed by uncompressed kernel via the identity mapping. 169 + */ 170 + sd = (struct setup_data *)boot_params->hdr.setup_data; 171 + while (sd) { 172 + unsigned long sd_addr = (unsigned long)sd; 173 + 174 + kernel_add_identity_map(sd_addr, sd_addr + sizeof(*sd) + sd->len); 175 + sd = (struct setup_data *)sd->next; 176 + } 166 177 167 178 sev_prep_identity_maps(top_level_pgt); 168 179
+3
arch/x86/include/asm/setup.h
··· 120 120 static char __brk_##name[size] 121 121 122 122 extern void probe_roms(void); 123 + 124 + void clear_bss(void); 125 + 123 126 #ifdef __i386__ 124 127 125 128 asmlinkage void __init i386_start_kernel(void);
+1 -1
arch/x86/include/uapi/asm/bootparam.h
··· 15 15 #define SETUP_INDIRECT (1<<31) 16 16 17 17 /* SETUP_INDIRECT | max(SETUP_*) */ 18 - #define SETUP_TYPE_MAX (SETUP_INDIRECT | SETUP_JAILHOUSE) 18 + #define SETUP_TYPE_MAX (SETUP_INDIRECT | SETUP_CC_BLOB) 19 19 20 20 /* ram_size flags */ 21 21 #define RAMDISK_IMAGE_START_MASK 0x07FF
+10
arch/x86/kernel/acpi/cppc.c
··· 11 11 12 12 /* Refer to drivers/acpi/cppc_acpi.c for the description of functions */ 13 13 14 + bool cpc_supported_by_cpu(void) 15 + { 16 + switch (boot_cpu_data.x86_vendor) { 17 + case X86_VENDOR_AMD: 18 + case X86_VENDOR_HYGON: 19 + return boot_cpu_has(X86_FEATURE_CPPC); 20 + } 21 + return false; 22 + } 23 + 14 24 bool cpc_ffh_supported(void) 15 25 { 16 26 return true;
+3 -1
arch/x86/kernel/head64.c
··· 426 426 427 427 /* Don't add a printk in there. printk relies on the PDA which is not initialized 428 428 yet. */ 429 - static void __init clear_bss(void) 429 + void __init clear_bss(void) 430 430 { 431 431 memset(__bss_start, 0, 432 432 (unsigned long) __bss_stop - (unsigned long) __bss_start); 433 + memset(__brk_base, 0, 434 + (unsigned long) __brk_limit - (unsigned long) __brk_base); 433 435 } 434 436 435 437 static unsigned long get_cmd_line_ptr(void)
+1 -1
arch/x86/kernel/vmlinux.lds.S
··· 385 385 __end_of_kernel_reserve = .; 386 386 387 387 . = ALIGN(PAGE_SIZE); 388 - .brk (NOLOAD) : AT(ADDR(.brk) - LOAD_OFFSET) { 388 + .brk : AT(ADDR(.brk) - LOAD_OFFSET) { 389 389 __brk_base = .; 390 390 . += 64 * 1024; /* 64k alignment slop space */ 391 391 *(.bss..brk) /* areas brk users have reserved */
+6 -2
arch/x86/xen/enlighten_pv.c
··· 1183 1183 extern void early_xen_iret_patch(void); 1184 1184 1185 1185 /* First C function to be called on Xen boot */ 1186 - asmlinkage __visible void __init xen_start_kernel(void) 1186 + asmlinkage __visible void __init xen_start_kernel(struct start_info *si) 1187 1187 { 1188 1188 struct physdev_set_iopl set_iopl; 1189 1189 unsigned long initrd_start = 0; 1190 1190 int rc; 1191 1191 1192 - if (!xen_start_info) 1192 + if (!si) 1193 1193 return; 1194 + 1195 + clear_bss(); 1196 + 1197 + xen_start_info = si; 1194 1198 1195 1199 __text_gen_insn(&early_xen_iret_patch, 1196 1200 JMP32_INSN_OPCODE, &early_xen_iret_patch, &xen_iret,
+1 -9
arch/x86/xen/xen-head.S
··· 48 48 ANNOTATE_NOENDBR 49 49 cld 50 50 51 - /* Clear .bss */ 52 - xor %eax,%eax 53 - mov $__bss_start, %rdi 54 - mov $__bss_stop, %rcx 55 - sub %rdi, %rcx 56 - shr $3, %rcx 57 - rep stosq 58 - 59 - mov %rsi, xen_start_info 60 51 mov initial_stack(%rip), %rsp 61 52 62 53 /* Set up %gs. ··· 62 71 cdq 63 72 wrmsr 64 73 74 + mov %rsi, %rdi 65 75 call xen_start_kernel 66 76 SYM_CODE_END(startup_xen) 67 77 __FINIT
+114
crypto/Kconfig
··· 666 666 CRC32c and CRC32 CRC algorithms implemented using mips crypto 667 667 instructions, when available. 668 668 669 + config CRYPTO_CRC32_S390 670 + tristate "CRC-32 algorithms" 671 + depends on S390 672 + select CRYPTO_HASH 673 + select CRC32 674 + help 675 + Select this option if you want to use hardware accelerated 676 + implementations of CRC algorithms. With this option, you 677 + can optimize the computation of CRC-32 (IEEE 802.3 Ethernet) 678 + and CRC-32C (Castagnoli). 679 + 680 + It is available with IBM z13 or later. 669 681 670 682 config CRYPTO_XXHASH 671 683 tristate "xxHash hash algorithm" ··· 910 898 Extensions version 1 (AVX1), or Advanced Vector Extensions 911 899 version 2 (AVX2) instructions, when available. 912 900 901 + config CRYPTO_SHA512_S390 902 + tristate "SHA384 and SHA512 digest algorithm" 903 + depends on S390 904 + select CRYPTO_HASH 905 + help 906 + This is the s390 hardware accelerated implementation of the 907 + SHA512 secure hash standard. 908 + 909 + It is available as of z10. 910 + 913 911 config CRYPTO_SHA1_OCTEON 914 912 tristate "SHA1 digest algorithm (OCTEON)" 915 913 depends on CPU_CAVIUM_OCTEON ··· 951 929 help 952 930 SHA-1 secure hash standard (DFIPS 180-4) implemented 953 931 using powerpc SPE SIMD instruction set. 932 + 933 + config CRYPTO_SHA1_S390 934 + tristate "SHA1 digest algorithm" 935 + depends on S390 936 + select CRYPTO_HASH 937 + help 938 + This is the s390 hardware accelerated implementation of the 939 + SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2). 940 + 941 + It is available as of z990. 954 942 955 943 config CRYPTO_SHA256 956 944 tristate "SHA224 and SHA256 digest algorithm" ··· 1002 970 SHA-256 secure hash standard (DFIPS 180-2) implemented 1003 971 using sparc64 crypto instructions, when available. 
1004 972 973 + config CRYPTO_SHA256_S390 974 + tristate "SHA256 digest algorithm" 975 + depends on S390 976 + select CRYPTO_HASH 977 + help 978 + This is the s390 hardware accelerated implementation of the 979 + SHA256 secure hash standard (DFIPS 180-2). 980 + 981 + It is available as of z9. 982 + 1005 983 config CRYPTO_SHA512 1006 984 tristate "SHA384 and SHA512 digest algorithms" 1007 985 select CRYPTO_HASH ··· 1051 1009 1052 1010 References: 1053 1011 http://keccak.noekeon.org/ 1012 + 1013 + config CRYPTO_SHA3_256_S390 1014 + tristate "SHA3_224 and SHA3_256 digest algorithm" 1015 + depends on S390 1016 + select CRYPTO_HASH 1017 + help 1018 + This is the s390 hardware accelerated implementation of the 1019 + SHA3_256 secure hash standard. 1020 + 1021 + It is available as of z14. 1022 + 1023 + config CRYPTO_SHA3_512_S390 1024 + tristate "SHA3_384 and SHA3_512 digest algorithm" 1025 + depends on S390 1026 + select CRYPTO_HASH 1027 + help 1028 + This is the s390 hardware accelerated implementation of the 1029 + SHA3_512 secure hash standard. 1030 + 1031 + It is available as of z14. 1054 1032 1055 1033 config CRYPTO_SM3 1056 1034 tristate ··· 1131 1069 help 1132 1070 This is the x86_64 CLMUL-NI accelerated implementation of 1133 1071 GHASH, the hash function used in GCM (Galois/Counter mode). 1072 + 1073 + config CRYPTO_GHASH_S390 1074 + tristate "GHASH hash function" 1075 + depends on S390 1076 + select CRYPTO_HASH 1077 + help 1078 + This is the s390 hardware accelerated implementation of GHASH, 1079 + the hash function used in GCM (Galois/Counter mode). 1080 + 1081 + It is available as of z196. 1134 1082 1135 1083 comment "Ciphers" 1136 1084 ··· 1256 1184 timining attacks. Nevertheless it might be not as secure as other 1257 1185 architecture specific assembler implementations that work on 1KB 1258 1186 tables or 256 bytes S-boxes. 
1187 + 1188 + config CRYPTO_AES_S390 1189 + tristate "AES cipher algorithms" 1190 + depends on S390 1191 + select CRYPTO_ALGAPI 1192 + select CRYPTO_SKCIPHER 1193 + help 1194 + This is the s390 hardware accelerated implementation of the 1195 + AES cipher algorithms (FIPS-197). 1196 + 1197 + As of z9 the ECB and CBC modes are hardware accelerated 1198 + for 128 bit keys. 1199 + As of z10 the ECB and CBC modes are hardware accelerated 1200 + for all AES key sizes. 1201 + As of z196 the CTR mode is hardware accelerated for all AES 1202 + key sizes and XTS mode is hardware accelerated for 256 and 1203 + 512 bit keys. 1259 1204 1260 1205 config CRYPTO_ANUBIS 1261 1206 tristate "Anubis cipher algorithm" ··· 1504 1415 algorithm are provided; regular processing one input block and 1505 1416 one that processes three blocks parallel. 1506 1417 1418 + config CRYPTO_DES_S390 1419 + tristate "DES and Triple DES cipher algorithms" 1420 + depends on S390 1421 + select CRYPTO_ALGAPI 1422 + select CRYPTO_SKCIPHER 1423 + select CRYPTO_LIB_DES 1424 + help 1425 + This is the s390 hardware accelerated implementation of the 1426 + DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3). 1427 + 1428 + As of z990 the ECB and CBC mode are hardware accelerated. 1429 + As of z196 the CTR mode is hardware accelerated. 1430 + 1507 1431 config CRYPTO_FCRYPT 1508 1432 tristate "FCrypt cipher algorithm" 1509 1433 select CRYPTO_ALGAPI ··· 1575 1473 depends on CPU_MIPS32_R2 1576 1474 select CRYPTO_SKCIPHER 1577 1475 select CRYPTO_ARCH_HAVE_LIB_CHACHA 1476 + 1477 + config CRYPTO_CHACHA_S390 1478 + tristate "ChaCha20 stream cipher" 1479 + depends on S390 1480 + select CRYPTO_SKCIPHER 1481 + select CRYPTO_LIB_CHACHA_GENERIC 1482 + select CRYPTO_ARCH_HAVE_LIB_CHACHA 1483 + help 1484 + This is the s390 SIMD implementation of the ChaCha20 stream 1485 + cipher (RFC 7539). 1486 + 1487 + It is available as of z13. 1578 1488 1579 1489 config CRYPTO_SEED 1580 1490 tristate "SEED cipher algorithm"
+6 -7
drivers/acpi/acpi_video.c
··· 73 73 static int only_lcd = -1; 74 74 module_param(only_lcd, int, 0444); 75 75 76 + static bool has_backlight; 76 77 static int register_count; 77 78 static DEFINE_MUTEX(register_count_mutex); 78 79 static DEFINE_MUTEX(video_list_lock); ··· 1223 1222 acpi_video_device_bind(video, data); 1224 1223 acpi_video_device_find_cap(data); 1225 1224 1225 + if (data->cap._BCM && data->cap._BCL) 1226 + has_backlight = true; 1227 + 1226 1228 mutex_lock(&video->device_list_lock); 1227 1229 list_add_tail(&data->entry, &video->video_device_list); 1228 1230 mutex_unlock(&video->device_list_lock); ··· 2253 2249 if (register_count) { 2254 2250 acpi_bus_unregister_driver(&acpi_video_bus); 2255 2251 register_count = 0; 2252 + has_backlight = false; 2256 2253 } 2257 2254 mutex_unlock(&register_count_mutex); 2258 2255 } ··· 2275 2270 2276 2271 bool acpi_video_handles_brightness_key_presses(void) 2277 2272 { 2278 - bool have_video_busses; 2279 - 2280 - mutex_lock(&video_list_lock); 2281 - have_video_busses = !list_empty(&video_bus_head); 2282 - mutex_unlock(&video_list_lock); 2283 - 2284 - return have_video_busses && 2273 + return has_backlight && 2285 2274 (report_key_events & REPORT_BRIGHTNESS_KEY_EVENTS); 2286 2275 } 2287 2276 EXPORT_SYMBOL(acpi_video_handles_brightness_key_presses);
+5 -6
drivers/acpi/bus.c
··· 298 298 bool osc_sb_native_usb4_support_confirmed; 299 299 EXPORT_SYMBOL_GPL(osc_sb_native_usb4_support_confirmed); 300 300 301 - bool osc_sb_cppc_not_supported; 301 + bool osc_sb_cppc2_support_acked; 302 302 303 303 static u8 sb_uuid_str[] = "0811B06E-4A27-44F9-8D60-3CBBC22E7B48"; 304 304 static void acpi_bus_osc_negotiate_platform_control(void) ··· 358 358 return; 359 359 } 360 360 361 - #ifdef CONFIG_ACPI_CPPC_LIB 362 - osc_sb_cppc_not_supported = !(capbuf_ret[OSC_SUPPORT_DWORD] & 363 - (OSC_SB_CPC_SUPPORT | OSC_SB_CPCV2_SUPPORT)); 364 - #endif 365 - 366 361 /* 367 362 * Now run _OSC again with query flag clear and with the caps 368 363 * supported by both the OS and the platform. ··· 371 376 372 377 capbuf_ret = context.ret.pointer; 373 378 if (context.ret.length > OSC_SUPPORT_DWORD) { 379 + #ifdef CONFIG_ACPI_CPPC_LIB 380 + osc_sb_cppc2_support_acked = capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_CPCV2_SUPPORT; 381 + #endif 382 + 374 383 osc_sb_apei_support_acked = 375 384 capbuf_ret[OSC_SUPPORT_DWORD] & OSC_SB_APEI_SUPPORT; 376 385 osc_pc_lpi_support_confirmed =
+18 -2
drivers/acpi/cppc_acpi.c
··· 578 578 } 579 579 580 580 /** 581 + * cpc_supported_by_cpu() - check if CPPC is supported by CPU 582 + * 583 + * Check if the architectural support for CPPC is present even 584 + * if the _OSC hasn't prescribed it 585 + * 586 + * Return: true for supported, false for not supported 587 + */ 588 + bool __weak cpc_supported_by_cpu(void) 589 + { 590 + return false; 591 + } 592 + 593 + /** 581 594 * pcc_data_alloc() - Allocate the pcc_data memory for pcc subspace 582 595 * 583 596 * Check and allocate the cppc_pcc_data memory. ··· 697 684 acpi_status status; 698 685 int ret = -ENODATA; 699 686 700 - if (osc_sb_cppc_not_supported) 701 - return -ENODEV; 687 + if (!osc_sb_cppc2_support_acked) { 688 + pr_debug("CPPC v2 _OSC not acked\n"); 689 + if (!cpc_supported_by_cpu()) 690 + return -ENODEV; 691 + } 702 692 703 693 /* Parse the ACPI _CPC table for this CPU. */ 704 694 status = acpi_evaluate_object_typed(handle, "_CPC", NULL, &output,
+2 -2
drivers/ata/pata_cs5535.c
··· 90 90 static const u16 pio_cmd_timings[5] = { 91 91 0xF7F4, 0x53F3, 0x13F1, 0x5131, 0x1131 92 92 }; 93 - u32 reg, dummy; 93 + u32 reg, __maybe_unused dummy; 94 94 struct ata_device *pair = ata_dev_pair(adev); 95 95 96 96 int mode = adev->pio_mode - XFER_PIO_0; ··· 129 129 static const u32 mwdma_timings[3] = { 130 130 0x7F0FFFF3, 0x7F035352, 0x7F024241 131 131 }; 132 - u32 reg, dummy; 132 + u32 reg, __maybe_unused dummy; 133 133 int mode = adev->dma_mode; 134 134 135 135 rdmsr(ATAC_CH0D0_DMA + 2 * adev->devno, reg, dummy);
+12 -1
drivers/base/core.c
··· 486 486 /* Ensure that all references to the link object have been dropped. */ 487 487 device_link_synchronize_removal(); 488 488 489 - pm_runtime_release_supplier(link, true); 489 + pm_runtime_release_supplier(link); 490 + /* 491 + * If supplier_preactivated is set, the link has been dropped between 492 + * the pm_runtime_get_suppliers() and pm_runtime_put_suppliers() calls 493 + * in __driver_probe_device(). In that case, drop the supplier's 494 + * PM-runtime usage counter to remove the reference taken by 495 + * pm_runtime_get_suppliers(). 496 + */ 497 + if (link->supplier_preactivated) 498 + pm_runtime_put_noidle(link->supplier); 499 + 500 + pm_request_idle(link->supplier); 490 501 491 502 put_device(link->consumer); 492 503 put_device(link->supplier);
+10 -24
drivers/base/power/runtime.c
··· 308 308 /** 309 309 * pm_runtime_release_supplier - Drop references to device link's supplier. 310 310 * @link: Target device link. 311 - * @check_idle: Whether or not to check if the supplier device is idle. 312 311 * 313 - * Drop all runtime PM references associated with @link to its supplier device 314 - * and if @check_idle is set, check if that device is idle (and so it can be 315 - * suspended). 312 + * Drop all runtime PM references associated with @link to its supplier device. 316 313 */ 317 - void pm_runtime_release_supplier(struct device_link *link, bool check_idle) 314 + void pm_runtime_release_supplier(struct device_link *link) 318 315 { 319 316 struct device *supplier = link->supplier; 320 317 ··· 324 327 while (refcount_dec_not_one(&link->rpm_active) && 325 328 atomic_read(&supplier->power.usage_count) > 0) 326 329 pm_runtime_put_noidle(supplier); 327 - 328 - if (check_idle) 329 - pm_request_idle(supplier); 330 330 } 331 331 332 332 static void __rpm_put_suppliers(struct device *dev, bool try_to_suspend) ··· 331 337 struct device_link *link; 332 338 333 339 list_for_each_entry_rcu(link, &dev->links.suppliers, c_node, 334 - device_links_read_lock_held()) 335 - pm_runtime_release_supplier(link, try_to_suspend); 340 + device_links_read_lock_held()) { 341 + pm_runtime_release_supplier(link); 342 + if (try_to_suspend) 343 + pm_request_idle(link->supplier); 344 + } 336 345 } 337 346 338 347 static void rpm_put_suppliers(struct device *dev) ··· 1768 1771 if (link->flags & DL_FLAG_PM_RUNTIME) { 1769 1772 link->supplier_preactivated = true; 1770 1773 pm_runtime_get_sync(link->supplier); 1771 - refcount_inc(&link->rpm_active); 1772 1774 } 1773 1775 1774 1776 device_links_read_unlock(idx); ··· 1787 1791 list_for_each_entry_rcu(link, &dev->links.suppliers, c_node, 1788 1792 device_links_read_lock_held()) 1789 1793 if (link->supplier_preactivated) { 1790 - bool put; 1791 - 1792 1794 link->supplier_preactivated = false; 1793 - 1794 - 
spin_lock_irq(&dev->power.lock); 1795 - 1796 - put = pm_runtime_status_suspended(dev) && 1797 - refcount_dec_not_one(&link->rpm_active); 1798 - 1799 - spin_unlock_irq(&dev->power.lock); 1800 - 1801 - if (put) 1802 - pm_runtime_put(link->supplier); 1795 + pm_runtime_put(link->supplier); 1803 1796 } 1804 1797 1805 1798 device_links_read_unlock(idx); ··· 1823 1838 return; 1824 1839 1825 1840 pm_runtime_drop_link_count(link->consumer); 1826 - pm_runtime_release_supplier(link, true); 1841 + pm_runtime_release_supplier(link); 1842 + pm_request_idle(link->supplier); 1827 1843 } 1828 1844 1829 1845 static bool pm_runtime_need_not_resume(struct device *dev)
+37 -17
drivers/block/xen-blkfront.c
··· 152 152 module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, 0444); 153 153 MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring"); 154 154 155 + static bool __read_mostly xen_blkif_trusted = true; 156 + module_param_named(trusted, xen_blkif_trusted, bool, 0644); 157 + MODULE_PARM_DESC(trusted, "Is the backend trusted"); 158 + 155 159 #define BLK_RING_SIZE(info) \ 156 160 __CONST_RING_SIZE(blkif, XEN_PAGE_SIZE * (info)->nr_ring_pages) 157 161 ··· 214 210 unsigned int feature_discard:1; 215 211 unsigned int feature_secdiscard:1; 216 212 unsigned int feature_persistent:1; 213 + unsigned int bounce:1; 217 214 unsigned int discard_granularity; 218 215 unsigned int discard_alignment; 219 216 /* Number of 4KB segments handled */ ··· 315 310 if (!gnt_list_entry) 316 311 goto out_of_memory; 317 312 318 - if (info->feature_persistent) { 319 - granted_page = alloc_page(GFP_NOIO); 313 + if (info->bounce) { 314 + granted_page = alloc_page(GFP_NOIO | __GFP_ZERO); 320 315 if (!granted_page) { 321 316 kfree(gnt_list_entry); 322 317 goto out_of_memory; ··· 335 330 list_for_each_entry_safe(gnt_list_entry, n, 336 331 &rinfo->grants, node) { 337 332 list_del(&gnt_list_entry->node); 338 - if (info->feature_persistent) 333 + if (info->bounce) 339 334 __free_page(gnt_list_entry->page); 340 335 kfree(gnt_list_entry); 341 336 i--; ··· 381 376 /* Assign a gref to this page */ 382 377 gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head); 383 378 BUG_ON(gnt_list_entry->gref == -ENOSPC); 384 - if (info->feature_persistent) 379 + if (info->bounce) 385 380 grant_foreign_access(gnt_list_entry, info); 386 381 else { 387 382 /* Grant access to the GFN passed by the caller */ ··· 405 400 /* Assign a gref to this page */ 406 401 gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head); 407 402 BUG_ON(gnt_list_entry->gref == -ENOSPC); 408 - if (!info->feature_persistent) { 403 + if (!info->bounce) { 409 404 struct page 
*indirect_page; 410 405 411 406 /* Fetch a pre-allocated page to use for indirect grefs */ ··· 708 703 .grant_idx = 0, 709 704 .segments = NULL, 710 705 .rinfo = rinfo, 711 - .need_copy = rq_data_dir(req) && info->feature_persistent, 706 + .need_copy = rq_data_dir(req) && info->bounce, 712 707 }; 713 708 714 709 /* ··· 986 981 { 987 982 blk_queue_write_cache(info->rq, info->feature_flush ? true : false, 988 983 info->feature_fua ? true : false); 989 - pr_info("blkfront: %s: %s %s %s %s %s\n", 984 + pr_info("blkfront: %s: %s %s %s %s %s %s %s\n", 990 985 info->gd->disk_name, flush_info(info), 991 986 "persistent grants:", info->feature_persistent ? 992 987 "enabled;" : "disabled;", "indirect descriptors:", 993 - info->max_indirect_segments ? "enabled;" : "disabled;"); 988 + info->max_indirect_segments ? "enabled;" : "disabled;", 989 + "bounce buffer:", info->bounce ? "enabled" : "disabled;"); 994 990 } 995 991 996 992 static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset) ··· 1213 1207 if (!list_empty(&rinfo->indirect_pages)) { 1214 1208 struct page *indirect_page, *n; 1215 1209 1216 - BUG_ON(info->feature_persistent); 1210 + BUG_ON(info->bounce); 1217 1211 list_for_each_entry_safe(indirect_page, n, &rinfo->indirect_pages, lru) { 1218 1212 list_del(&indirect_page->lru); 1219 1213 __free_page(indirect_page); ··· 1230 1224 NULL); 1231 1225 rinfo->persistent_gnts_c--; 1232 1226 } 1233 - if (info->feature_persistent) 1227 + if (info->bounce) 1234 1228 __free_page(persistent_gnt->page); 1235 1229 kfree(persistent_gnt); 1236 1230 } ··· 1251 1245 for (j = 0; j < segs; j++) { 1252 1246 persistent_gnt = rinfo->shadow[i].grants_used[j]; 1253 1247 gnttab_end_foreign_access(persistent_gnt->gref, NULL); 1254 - if (info->feature_persistent) 1248 + if (info->bounce) 1255 1249 __free_page(persistent_gnt->page); 1256 1250 kfree(persistent_gnt); 1257 1251 } ··· 1434 1428 data.s = s; 1435 1429 num_sg = s->num_sg; 1436 1430 1437 - if (bret->operation == 
BLKIF_OP_READ && info->feature_persistent) { 1431 + if (bret->operation == BLKIF_OP_READ && info->bounce) { 1438 1432 for_each_sg(s->sg, sg, num_sg, i) { 1439 1433 BUG_ON(sg->offset + sg->length > PAGE_SIZE); 1440 1434 ··· 1493 1487 * Add the used indirect page back to the list of 1494 1488 * available pages for indirect grefs. 1495 1489 */ 1496 - if (!info->feature_persistent) { 1490 + if (!info->bounce) { 1497 1491 indirect_page = s->indirect_grants[i]->page; 1498 1492 list_add(&indirect_page->lru, &rinfo->indirect_pages); 1499 1493 } ··· 1769 1763 1770 1764 if (!info) 1771 1765 return -ENODEV; 1766 + 1767 + /* Check if backend is trusted. */ 1768 + info->bounce = !xen_blkif_trusted || 1769 + !xenbus_read_unsigned(dev->nodename, "trusted", 1); 1772 1770 1773 1771 max_page_order = xenbus_read_unsigned(info->xbdev->otherend, 1774 1772 "max-ring-page-order", 0); ··· 2183 2173 if (err) 2184 2174 goto out_of_memory; 2185 2175 2186 - if (!info->feature_persistent && info->max_indirect_segments) { 2176 + if (!info->bounce && info->max_indirect_segments) { 2187 2177 /* 2188 - * We are using indirect descriptors but not persistent 2189 - * grants, we need to allocate a set of pages that can be 2178 + * We are using indirect descriptors but don't have a bounce 2179 + * buffer, we need to allocate a set of pages that can be 2190 2180 * used for mapping indirect grefs 2191 2181 */ 2192 2182 int num = INDIRECT_GREFS(grants) * BLK_RING_SIZE(info); 2193 2183 2194 2184 BUG_ON(!list_empty(&rinfo->indirect_pages)); 2195 2185 for (i = 0; i < num; i++) { 2196 - struct page *indirect_page = alloc_page(GFP_KERNEL); 2186 + struct page *indirect_page = alloc_page(GFP_KERNEL | 2187 + __GFP_ZERO); 2197 2188 if (!indirect_page) 2198 2189 goto out_of_memory; 2199 2190 list_add(&indirect_page->lru, &rinfo->indirect_pages); ··· 2287 2276 info->feature_persistent = 2288 2277 !!xenbus_read_unsigned(info->xbdev->otherend, 2289 2278 "feature-persistent", 0); 2279 + if (info->feature_persistent) 
2280 + info->bounce = true; 2290 2281 2291 2282 indirect_segments = xenbus_read_unsigned(info->xbdev->otherend, 2292 2283 "feature-max-indirect-segments", 0); ··· 2559 2546 { 2560 2547 struct blkfront_info *info; 2561 2548 bool need_schedule_work = false; 2549 + 2550 + /* 2551 + * Note that when using bounce buffers but not persistent grants 2552 + * there's no need to run blkfront_delay_work because grants are 2553 + * revoked in blkif_completion or else an error is reported and the 2554 + * connection is closed. 2555 + */ 2562 2556 2563 2557 mutex_lock(&blkfront_mutex); 2564 2558
+1
drivers/clk/stm32/reset-stm32.c
··· 111 111 if (!reset_data) 112 112 return -ENOMEM; 113 113 114 + spin_lock_init(&reset_data->lock); 114 115 reset_data->membase = base; 115 116 reset_data->rcdev.owner = THIS_MODULE; 116 117 reset_data->rcdev.ops = &stm32_reset_ops;
+24
drivers/cpufreq/amd-pstate.c
··· 566 566 return 0; 567 567 } 568 568 569 + static int amd_pstate_cpu_resume(struct cpufreq_policy *policy) 570 + { 571 + int ret; 572 + 573 + ret = amd_pstate_enable(true); 574 + if (ret) 575 + pr_err("failed to enable amd-pstate during resume, return %d\n", ret); 576 + 577 + return ret; 578 + } 579 + 580 + static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy) 581 + { 582 + int ret; 583 + 584 + ret = amd_pstate_enable(false); 585 + if (ret) 586 + pr_err("failed to disable amd-pstate during suspend, return %d\n", ret); 587 + 588 + return ret; 589 + } 590 + 569 591 /* Sysfs attributes */ 570 592 571 593 /* ··· 658 636 .target = amd_pstate_target, 659 637 .init = amd_pstate_cpu_init, 660 638 .exit = amd_pstate_cpu_exit, 639 + .suspend = amd_pstate_cpu_suspend, 640 + .resume = amd_pstate_cpu_resume, 661 641 .set_boost = amd_pstate_set_boost, 662 642 .name = "amd-pstate", 663 643 .attr = amd_pstate_attr,
+1
drivers/cpufreq/cpufreq-dt-platdev.c
··· 127 127 { .compatible = "mediatek,mt8173", }, 128 128 { .compatible = "mediatek,mt8176", }, 129 129 { .compatible = "mediatek,mt8183", }, 130 + { .compatible = "mediatek,mt8186", }, 130 131 { .compatible = "mediatek,mt8365", }, 131 132 { .compatible = "mediatek,mt8516", }, 132 133
+4
drivers/cpufreq/pmac32-cpufreq.c
··· 470 470 if (slew_done_gpio_np) 471 471 slew_done_gpio = read_gpio(slew_done_gpio_np); 472 472 473 + of_node_put(volt_gpio_np); 474 + of_node_put(freq_gpio_np); 475 + of_node_put(slew_done_gpio_np); 476 + 473 477 /* If we use the frequency GPIOs, calculate the min/max speeds based 474 478 * on the bus frequencies 475 479 */
+6
drivers/cpufreq/qcom-cpufreq-hw.c
··· 442 442 struct platform_device *pdev = cpufreq_get_driver_data(); 443 443 int ret; 444 444 445 + if (data->throttle_irq <= 0) 446 + return 0; 447 + 445 448 ret = irq_set_affinity_hint(data->throttle_irq, policy->cpus); 446 449 if (ret) 447 450 dev_err(&pdev->dev, "Failed to set CPU affinity of %s[%d]\n", ··· 472 469 473 470 static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data) 474 471 { 472 + if (data->throttle_irq <= 0) 473 + return; 474 + 475 475 free_irq(data->throttle_irq, data); 476 476 } 477 477
+1
drivers/cpufreq/qoriq-cpufreq.c
··· 275 275 276 276 np = of_find_matching_node(NULL, qoriq_cpufreq_blacklist); 277 277 if (np) { 278 + of_node_put(np); 278 279 dev_info(&pdev->dev, "Disabling due to erratum A-008083"); 279 280 return -ENODEV; 280 281 }
-115
drivers/crypto/Kconfig
··· 133 133 Select this option if you want to use the paes cipher 134 134 for example to use protected key encrypted devices. 135 135 136 - config CRYPTO_SHA1_S390 137 - tristate "SHA1 digest algorithm" 138 - depends on S390 139 - select CRYPTO_HASH 140 - help 141 - This is the s390 hardware accelerated implementation of the 142 - SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2). 143 - 144 - It is available as of z990. 145 - 146 - config CRYPTO_SHA256_S390 147 - tristate "SHA256 digest algorithm" 148 - depends on S390 149 - select CRYPTO_HASH 150 - help 151 - This is the s390 hardware accelerated implementation of the 152 - SHA256 secure hash standard (DFIPS 180-2). 153 - 154 - It is available as of z9. 155 - 156 - config CRYPTO_SHA512_S390 157 - tristate "SHA384 and SHA512 digest algorithm" 158 - depends on S390 159 - select CRYPTO_HASH 160 - help 161 - This is the s390 hardware accelerated implementation of the 162 - SHA512 secure hash standard. 163 - 164 - It is available as of z10. 165 - 166 - config CRYPTO_SHA3_256_S390 167 - tristate "SHA3_224 and SHA3_256 digest algorithm" 168 - depends on S390 169 - select CRYPTO_HASH 170 - help 171 - This is the s390 hardware accelerated implementation of the 172 - SHA3_256 secure hash standard. 173 - 174 - It is available as of z14. 175 - 176 - config CRYPTO_SHA3_512_S390 177 - tristate "SHA3_384 and SHA3_512 digest algorithm" 178 - depends on S390 179 - select CRYPTO_HASH 180 - help 181 - This is the s390 hardware accelerated implementation of the 182 - SHA3_512 secure hash standard. 183 - 184 - It is available as of z14. 185 - 186 - config CRYPTO_DES_S390 187 - tristate "DES and Triple DES cipher algorithms" 188 - depends on S390 189 - select CRYPTO_ALGAPI 190 - select CRYPTO_SKCIPHER 191 - select CRYPTO_LIB_DES 192 - help 193 - This is the s390 hardware accelerated implementation of the 194 - DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3). 
195 - 196 - As of z990 the ECB and CBC mode are hardware accelerated. 197 - As of z196 the CTR mode is hardware accelerated. 198 - 199 - config CRYPTO_AES_S390 200 - tristate "AES cipher algorithms" 201 - depends on S390 202 - select CRYPTO_ALGAPI 203 - select CRYPTO_SKCIPHER 204 - help 205 - This is the s390 hardware accelerated implementation of the 206 - AES cipher algorithms (FIPS-197). 207 - 208 - As of z9 the ECB and CBC modes are hardware accelerated 209 - for 128 bit keys. 210 - As of z10 the ECB and CBC modes are hardware accelerated 211 - for all AES key sizes. 212 - As of z196 the CTR mode is hardware accelerated for all AES 213 - key sizes and XTS mode is hardware accelerated for 256 and 214 - 512 bit keys. 215 - 216 - config CRYPTO_CHACHA_S390 217 - tristate "ChaCha20 stream cipher" 218 - depends on S390 219 - select CRYPTO_SKCIPHER 220 - select CRYPTO_LIB_CHACHA_GENERIC 221 - select CRYPTO_ARCH_HAVE_LIB_CHACHA 222 - help 223 - This is the s390 SIMD implementation of the ChaCha20 stream 224 - cipher (RFC 7539). 225 - 226 - It is available as of z13. 227 - 228 136 config S390_PRNG 229 137 tristate "Pseudo random number generator device driver" 230 138 depends on S390 ··· 145 237 pseudo-random-number device through the char device /dev/prandom. 146 238 147 239 It is available as of z9. 148 - 149 - config CRYPTO_GHASH_S390 150 - tristate "GHASH hash function" 151 - depends on S390 152 - select CRYPTO_HASH 153 - help 154 - This is the s390 hardware accelerated implementation of GHASH, 155 - the hash function used in GCM (Galois/Counter mode). 156 - 157 - It is available as of z196. 158 - 159 - config CRYPTO_CRC32_S390 160 - tristate "CRC-32 algorithms" 161 - depends on S390 162 - select CRYPTO_HASH 163 - select CRC32 164 - help 165 - Select this option if you want to use hardware accelerated 166 - implementations of CRC algorithms. With this option, you 167 - can optimize the computation of CRC-32 (IEEE 802.3 Ethernet) 168 - and CRC-32C (Castagnoli). 
169 - 170 - It is available with IBM z13 or later. 171 240 172 241 config CRYPTO_DEV_NIAGARA2 173 242 tristate "Niagara2 Stream Processing Unit driver"
+2 -10
drivers/crypto/ccp/sp-platform.c
··· 85 85 struct sp_platform *sp_platform = sp->dev_specific; 86 86 struct device *dev = sp->dev; 87 87 struct platform_device *pdev = to_platform_device(dev); 88 - unsigned int i, count; 89 88 int ret; 90 89 91 - for (i = 0, count = 0; i < pdev->num_resources; i++) { 92 - struct resource *res = &pdev->resource[i]; 93 - 94 - if (resource_type(res) == IORESOURCE_IRQ) 95 - count++; 96 - } 97 - 98 - sp_platform->irq_count = count; 90 + sp_platform->irq_count = platform_irq_count(pdev); 99 91 100 92 ret = platform_get_irq(pdev, 0); 101 93 if (ret < 0) { ··· 96 104 } 97 105 98 106 sp->psp_irq = ret; 99 - if (count == 1) { 107 + if (sp_platform->irq_count == 1) { 100 108 sp->ccp_irq = ret; 101 109 } else { 102 110 ret = platform_get_irq(pdev, 1);
+1 -1
drivers/cxl/core/hdm.c
··· 197 197 else 198 198 cxld->target_type = CXL_DECODER_ACCELERATOR; 199 199 200 - if (is_cxl_endpoint(to_cxl_port(cxld->dev.parent))) 200 + if (is_endpoint_decoder(&cxld->dev)) 201 201 return 0; 202 202 203 203 target_list.value =
+4 -2
drivers/cxl/core/mbox.c
··· 355 355 return -EBUSY; 356 356 357 357 /* Check the input buffer is the expected size */ 358 - if (info->size_in != send_cmd->in.size) 358 + if ((info->size_in != CXL_VARIABLE_PAYLOAD) && 359 + (info->size_in != send_cmd->in.size)) 359 360 return -ENOMEM; 360 361 361 362 /* Check the output buffer is at least large enough */ 362 - if (send_cmd->out.size < info->size_out) 363 + if ((info->size_out != CXL_VARIABLE_PAYLOAD) && 364 + (send_cmd->out.size < info->size_out)) 363 365 return -ENOMEM; 364 366 365 367 *mem_cmd = (struct cxl_mem_command) {
+1 -1
drivers/cxl/core/port.c
··· 272 272 .groups = cxl_decoder_root_attribute_groups, 273 273 }; 274 274 275 - static bool is_endpoint_decoder(struct device *dev) 275 + bool is_endpoint_decoder(struct device *dev) 276 276 { 277 277 return dev->type == &cxl_decoder_endpoint_type; 278 278 }
+1
drivers/cxl/cxl.h
··· 340 340 341 341 struct cxl_decoder *to_cxl_decoder(struct device *dev); 342 342 bool is_root_decoder(struct device *dev); 343 + bool is_endpoint_decoder(struct device *dev); 343 344 bool is_cxl_decoder(struct device *dev); 344 345 struct cxl_decoder *cxl_root_decoder_alloc(struct cxl_port *port, 345 346 unsigned int nr_targets);
+4 -4
drivers/cxl/cxlmem.h
··· 300 300 } __packed; 301 301 302 302 struct cxl_mbox_get_lsa { 303 - u32 offset; 304 - u32 length; 303 + __le32 offset; 304 + __le32 length; 305 305 } __packed; 306 306 307 307 struct cxl_mbox_set_lsa { 308 - u32 offset; 309 - u32 reserved; 308 + __le32 offset; 309 + __le32 reserved; 310 310 u8 data[]; 311 311 } __packed; 312 312
+6 -1
drivers/cxl/mem.c
··· 29 29 { 30 30 struct cxl_dev_state *cxlds = cxlmd->cxlds; 31 31 struct cxl_port *endpoint; 32 + int rc; 32 33 33 34 endpoint = devm_cxl_add_port(&parent_port->dev, &cxlmd->dev, 34 35 cxlds->component_reg_phys, parent_port); ··· 38 37 39 38 dev_dbg(&cxlmd->dev, "add: %s\n", dev_name(&endpoint->dev)); 40 39 40 + rc = cxl_endpoint_autoremove(cxlmd, endpoint); 41 + if (rc) 42 + return rc; 43 + 41 44 if (!endpoint->dev.driver) { 42 45 dev_err(&cxlmd->dev, "%s failed probe\n", 43 46 dev_name(&endpoint->dev)); 44 47 return -ENXIO; 45 48 } 46 49 47 - return cxl_endpoint_autoremove(cxlmd, endpoint); 50 + return 0; 48 51 } 49 52 50 53 static void enable_suspend(void *data)
+3 -3
drivers/cxl/pmem.c
··· 108 108 return -EINVAL; 109 109 110 110 get_lsa = (struct cxl_mbox_get_lsa) { 111 - .offset = cmd->in_offset, 112 - .length = cmd->in_length, 111 + .offset = cpu_to_le32(cmd->in_offset), 112 + .length = cpu_to_le32(cmd->in_length), 113 113 }; 114 114 115 115 rc = cxl_mbox_send_cmd(cxlds, CXL_MBOX_OP_GET_LSA, &get_lsa, ··· 139 139 return -ENOMEM; 140 140 141 141 *set_lsa = (struct cxl_mbox_set_lsa) { 142 - .offset = cmd->in_offset, 142 + .offset = cpu_to_le32(cmd->in_offset), 143 143 }; 144 144 memcpy(set_lsa->data, cmd->in_buf, cmd->in_length); 145 145
+37 -39
drivers/devfreq/devfreq.c
··· 123 123 unsigned long *min_freq, 124 124 unsigned long *max_freq) 125 125 { 126 - unsigned long *freq_table = devfreq->profile->freq_table; 126 + unsigned long *freq_table = devfreq->freq_table; 127 127 s32 qos_min_freq, qos_max_freq; 128 128 129 129 lockdep_assert_held(&devfreq->lock); ··· 133 133 * The devfreq drivers can initialize this in either ascending or 134 134 * descending order and devfreq core supports both. 135 135 */ 136 - if (freq_table[0] < freq_table[devfreq->profile->max_state - 1]) { 136 + if (freq_table[0] < freq_table[devfreq->max_state - 1]) { 137 137 *min_freq = freq_table[0]; 138 - *max_freq = freq_table[devfreq->profile->max_state - 1]; 138 + *max_freq = freq_table[devfreq->max_state - 1]; 139 139 } else { 140 - *min_freq = freq_table[devfreq->profile->max_state - 1]; 140 + *min_freq = freq_table[devfreq->max_state - 1]; 141 141 *max_freq = freq_table[0]; 142 142 } 143 143 ··· 169 169 { 170 170 int lev; 171 171 172 - for (lev = 0; lev < devfreq->profile->max_state; lev++) 173 - if (freq == devfreq->profile->freq_table[lev]) 172 + for (lev = 0; lev < devfreq->max_state; lev++) 173 + if (freq == devfreq->freq_table[lev]) 174 174 return lev; 175 175 176 176 return -EINVAL; ··· 178 178 179 179 static int set_freq_table(struct devfreq *devfreq) 180 180 { 181 - struct devfreq_dev_profile *profile = devfreq->profile; 182 181 struct dev_pm_opp *opp; 183 182 unsigned long freq; 184 183 int i, count; ··· 187 188 if (count <= 0) 188 189 return -EINVAL; 189 190 190 - profile->max_state = count; 191 - profile->freq_table = devm_kcalloc(devfreq->dev.parent, 192 - profile->max_state, 193 - sizeof(*profile->freq_table), 194 - GFP_KERNEL); 195 - if (!profile->freq_table) { 196 - profile->max_state = 0; 191 + devfreq->max_state = count; 192 + devfreq->freq_table = devm_kcalloc(devfreq->dev.parent, 193 + devfreq->max_state, 194 + sizeof(*devfreq->freq_table), 195 + GFP_KERNEL); 196 + if (!devfreq->freq_table) 197 197 return -ENOMEM; 198 - } 199 198 200 - 
for (i = 0, freq = 0; i < profile->max_state; i++, freq++) { 199 + for (i = 0, freq = 0; i < devfreq->max_state; i++, freq++) { 201 200 opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &freq); 202 201 if (IS_ERR(opp)) { 203 - devm_kfree(devfreq->dev.parent, profile->freq_table); 204 - profile->max_state = 0; 202 + devm_kfree(devfreq->dev.parent, devfreq->freq_table); 205 203 return PTR_ERR(opp); 206 204 } 207 205 dev_pm_opp_put(opp); 208 - profile->freq_table[i] = freq; 206 + devfreq->freq_table[i] = freq; 209 207 } 210 208 211 209 return 0; ··· 242 246 243 247 if (lev != prev_lev) { 244 248 devfreq->stats.trans_table[ 245 - (prev_lev * devfreq->profile->max_state) + lev]++; 249 + (prev_lev * devfreq->max_state) + lev]++; 246 250 devfreq->stats.total_trans++; 247 251 } 248 252 ··· 831 835 if (err < 0) 832 836 goto err_dev; 833 837 mutex_lock(&devfreq->lock); 838 + } else { 839 + devfreq->freq_table = devfreq->profile->freq_table; 840 + devfreq->max_state = devfreq->profile->max_state; 834 841 } 835 842 836 843 devfreq->scaling_min_freq = find_available_min_freq(devfreq); ··· 869 870 870 871 devfreq->stats.trans_table = devm_kzalloc(&devfreq->dev, 871 872 array3_size(sizeof(unsigned int), 872 - devfreq->profile->max_state, 873 - devfreq->profile->max_state), 873 + devfreq->max_state, 874 + devfreq->max_state), 874 875 GFP_KERNEL); 875 876 if (!devfreq->stats.trans_table) { 876 877 mutex_unlock(&devfreq->lock); ··· 879 880 } 880 881 881 882 devfreq->stats.time_in_state = devm_kcalloc(&devfreq->dev, 882 - devfreq->profile->max_state, 883 + devfreq->max_state, 883 884 sizeof(*devfreq->stats.time_in_state), 884 885 GFP_KERNEL); 885 886 if (!devfreq->stats.time_in_state) { ··· 931 932 err = devfreq->governor->event_handler(devfreq, DEVFREQ_GOV_START, 932 933 NULL); 933 934 if (err) { 934 - dev_err(dev, "%s: Unable to start governor for the device\n", 935 - __func__); 935 + dev_err_probe(dev, err, 936 + "%s: Unable to start governor for the device\n", 937 + __func__); 
936 938 goto err_init; 937 939 } 938 940 create_sysfs_files(devfreq, devfreq->governor); ··· 1665 1665 1666 1666 mutex_lock(&df->lock); 1667 1667 1668 - for (i = 0; i < df->profile->max_state; i++) 1668 + for (i = 0; i < df->max_state; i++) 1669 1669 count += scnprintf(&buf[count], (PAGE_SIZE - count - 2), 1670 - "%lu ", df->profile->freq_table[i]); 1670 + "%lu ", df->freq_table[i]); 1671 1671 1672 1672 mutex_unlock(&df->lock); 1673 1673 /* Truncate the trailing space */ ··· 1690 1690 1691 1691 if (!df->profile) 1692 1692 return -EINVAL; 1693 - max_state = df->profile->max_state; 1693 + max_state = df->max_state; 1694 1694 1695 1695 if (max_state == 0) 1696 1696 return sprintf(buf, "Not Supported.\n"); ··· 1707 1707 len += sprintf(buf + len, " :"); 1708 1708 for (i = 0; i < max_state; i++) 1709 1709 len += sprintf(buf + len, "%10lu", 1710 - df->profile->freq_table[i]); 1710 + df->freq_table[i]); 1711 1711 1712 1712 len += sprintf(buf + len, " time(ms)\n"); 1713 1713 1714 1714 for (i = 0; i < max_state; i++) { 1715 - if (df->profile->freq_table[i] 1716 - == df->previous_freq) { 1715 + if (df->freq_table[i] == df->previous_freq) 1717 1716 len += sprintf(buf + len, "*"); 1718 - } else { 1717 + else 1719 1718 len += sprintf(buf + len, " "); 1720 - } 1721 - len += sprintf(buf + len, "%10lu:", 1722 - df->profile->freq_table[i]); 1719 + 1720 + len += sprintf(buf + len, "%10lu:", df->freq_table[i]); 1723 1721 for (j = 0; j < max_state; j++) 1724 1722 len += sprintf(buf + len, "%10u", 1725 1723 df->stats.trans_table[(i * max_state) + j]); ··· 1741 1743 if (!df->profile) 1742 1744 return -EINVAL; 1743 1745 1744 - if (df->profile->max_state == 0) 1746 + if (df->max_state == 0) 1745 1747 return count; 1746 1748 1747 1749 err = kstrtoint(buf, 10, &value); ··· 1749 1751 return -EINVAL; 1750 1752 1751 1753 mutex_lock(&df->lock); 1752 - memset(df->stats.time_in_state, 0, (df->profile->max_state * 1754 + memset(df->stats.time_in_state, 0, (df->max_state * 1753 1755 
sizeof(*df->stats.time_in_state))); 1754 1756 memset(df->stats.trans_table, 0, array3_size(sizeof(unsigned int), 1755 - df->profile->max_state, 1756 - df->profile->max_state)); 1757 + df->max_state, 1758 + df->max_state)); 1757 1759 df->stats.total_trans = 0; 1758 1760 df->stats.last_update = get_jiffies_64(); 1759 1761 mutex_unlock(&df->lock);
+6 -2
drivers/devfreq/event/exynos-ppmu.c
··· 519 519 520 520 count = of_get_child_count(events_np); 521 521 desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL); 522 - if (!desc) 522 + if (!desc) { 523 + of_node_put(events_np); 523 524 return -ENOMEM; 525 + } 524 526 info->num_events = count; 525 527 526 528 of_id = of_match_device(exynos_ppmu_id_match, dev); 527 529 if (of_id) 528 530 info->ppmu_type = (enum exynos_ppmu_type)of_id->data; 529 - else 531 + else { 532 + of_node_put(events_np); 530 533 return -EINVAL; 534 + } 531 535 532 536 j = 0; 533 537 for_each_child_of_node(events_np, node) {
+3 -3
drivers/devfreq/exynos-bus.c
··· 447 447 } 448 448 } 449 449 450 - max_state = bus->devfreq->profile->max_state; 451 - min_freq = (bus->devfreq->profile->freq_table[0] / 1000); 452 - max_freq = (bus->devfreq->profile->freq_table[max_state - 1] / 1000); 450 + max_state = bus->devfreq->max_state; 451 + min_freq = (bus->devfreq->freq_table[0] / 1000); 452 + max_freq = (bus->devfreq->freq_table[max_state - 1] / 1000); 453 453 pr_info("exynos-bus: new bus device registered: %s (%6ld KHz ~ %6ld KHz)\n", 454 454 dev_name(dev), min_freq, max_freq); 455 455
+27 -35
drivers/devfreq/governor_passive.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 1 + // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 3 * linux/drivers/devfreq/governor_passive.c 4 4 * ··· 14 14 #include <linux/slab.h> 15 15 #include <linux/device.h> 16 16 #include <linux/devfreq.h> 17 + #include <linux/units.h> 17 18 #include "governor.h" 18 - 19 - #define HZ_PER_KHZ 1000 20 19 21 20 static struct devfreq_cpu_data * 22 21 get_parent_cpu_data(struct devfreq_passive_data *p_data, ··· 31 32 return parent_cpu_data; 32 33 33 34 return NULL; 35 + } 36 + 37 + static void delete_parent_cpu_data(struct devfreq_passive_data *p_data) 38 + { 39 + struct devfreq_cpu_data *parent_cpu_data, *tmp; 40 + 41 + list_for_each_entry_safe(parent_cpu_data, tmp, &p_data->cpu_data_list, node) { 42 + list_del(&parent_cpu_data->node); 43 + 44 + if (parent_cpu_data->opp_table) 45 + dev_pm_opp_put_opp_table(parent_cpu_data->opp_table); 46 + 47 + kfree(parent_cpu_data); 48 + } 34 49 } 35 50 36 51 static unsigned long get_target_freq_by_required_opp(struct device *p_dev, ··· 144 131 goto out; 145 132 146 133 /* Use interpolation if required opps is not available */ 147 - for (i = 0; i < parent_devfreq->profile->max_state; i++) 148 - if (parent_devfreq->profile->freq_table[i] == *freq) 134 + for (i = 0; i < parent_devfreq->max_state; i++) 135 + if (parent_devfreq->freq_table[i] == *freq) 149 136 break; 150 137 151 - if (i == parent_devfreq->profile->max_state) 138 + if (i == parent_devfreq->max_state) 152 139 return -EINVAL; 153 140 154 - if (i < devfreq->profile->max_state) { 155 - child_freq = devfreq->profile->freq_table[i]; 141 + if (i < devfreq->max_state) { 142 + child_freq = devfreq->freq_table[i]; 156 143 } else { 157 - count = devfreq->profile->max_state; 158 - child_freq = devfreq->profile->freq_table[count - 1]; 144 + count = devfreq->max_state; 145 + child_freq = devfreq->freq_table[count - 1]; 159 146 } 160 147 161 148 out: ··· 235 222 { 236 223 struct devfreq_passive_data *p_data 237 224 = (struct 
devfreq_passive_data *)devfreq->data; 238 - struct devfreq_cpu_data *parent_cpu_data; 239 - int cpu, ret = 0; 225 + int ret; 240 226 241 227 if (p_data->nb.notifier_call) { 242 228 ret = cpufreq_unregister_notifier(&p_data->nb, ··· 244 232 return ret; 245 233 } 246 234 247 - for_each_possible_cpu(cpu) { 248 - struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); 249 - if (!policy) { 250 - ret = -EINVAL; 251 - continue; 252 - } 235 + delete_parent_cpu_data(p_data); 253 236 254 - parent_cpu_data = get_parent_cpu_data(p_data, policy); 255 - if (!parent_cpu_data) { 256 - cpufreq_cpu_put(policy); 257 - continue; 258 - } 259 - 260 - list_del(&parent_cpu_data->node); 261 - if (parent_cpu_data->opp_table) 262 - dev_pm_opp_put_opp_table(parent_cpu_data->opp_table); 263 - kfree(parent_cpu_data); 264 - cpufreq_cpu_put(policy); 265 - } 266 - 267 - return ret; 237 + return 0; 268 238 } 269 239 270 240 static int cpufreq_passive_register_notifier(struct devfreq *devfreq) ··· 330 336 err_put_policy: 331 337 cpufreq_cpu_put(policy); 332 338 err: 333 - WARN_ON(cpufreq_passive_unregister_notifier(devfreq)); 334 339 335 340 return ret; 336 341 } ··· 400 407 if (!p_data) 401 408 return -EINVAL; 402 409 403 - if (!p_data->this) 404 - p_data->this = devfreq; 410 + p_data->this = devfreq; 405 411 406 412 switch (event) { 407 413 case DEVFREQ_GOV_START:
+5
drivers/dma/at_xdmac.c
··· 1900 1900 for (i = 0; i < init_nr_desc_per_channel; i++) { 1901 1901 desc = at_xdmac_alloc_desc(chan, GFP_KERNEL); 1902 1902 if (!desc) { 1903 + if (i == 0) { 1904 + dev_warn(chan2dev(chan), 1905 + "can't allocate any descriptors\n"); 1906 + return -EIO; 1907 + } 1903 1908 dev_warn(chan2dev(chan), 1904 1909 "only %d descriptors have been allocated\n", i); 1905 1910 break;
+3 -10
drivers/dma/dmatest.c
··· 675 675 /* 676 676 * src and dst buffers are freed by ourselves below 677 677 */ 678 - if (params->polled) { 678 + if (params->polled) 679 679 flags = DMA_CTRL_ACK; 680 - } else { 681 - if (dma_has_cap(DMA_INTERRUPT, dev->cap_mask)) { 682 - flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT; 683 - } else { 684 - pr_err("Channel does not support interrupt!\n"); 685 - goto err_pq_array; 686 - } 687 - } 680 + else 681 + flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT; 688 682 689 683 ktime = ktime_get(); 690 684 while (!(kthread_should_stop() || ··· 906 912 runtime = ktime_to_us(ktime); 907 913 908 914 ret = 0; 909 - err_pq_array: 910 915 kfree(dma_pq); 911 916 err_srcs_array: 912 917 kfree(srcs);
+5 -3
drivers/dma/dw-axi-dmac/dw-axi-dmac-platform.c
··· 1164 1164 BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT; 1165 1165 axi_dma_iowrite32(chan->chip, DMAC_CHEN, val); 1166 1166 } else { 1167 - val = BIT(chan->id) << DMAC_CHAN_SUSP2_SHIFT | 1168 - BIT(chan->id) << DMAC_CHAN_SUSP2_WE_SHIFT; 1167 + val = axi_dma_ioread32(chan->chip, DMAC_CHSUSPREG); 1168 + val |= BIT(chan->id) << DMAC_CHAN_SUSP2_SHIFT | 1169 + BIT(chan->id) << DMAC_CHAN_SUSP2_WE_SHIFT; 1169 1170 axi_dma_iowrite32(chan->chip, DMAC_CHSUSPREG, val); 1170 1171 } 1171 1172 ··· 1191 1190 { 1192 1191 u32 val; 1193 1192 1194 - val = axi_dma_ioread32(chan->chip, DMAC_CHEN); 1195 1193 if (chan->chip->dw->hdata->reg_map_8_channels) { 1194 + val = axi_dma_ioread32(chan->chip, DMAC_CHEN); 1196 1195 val &= ~(BIT(chan->id) << DMAC_CHAN_SUSP_SHIFT); 1197 1196 val |= (BIT(chan->id) << DMAC_CHAN_SUSP_WE_SHIFT); 1198 1197 axi_dma_iowrite32(chan->chip, DMAC_CHEN, val); 1199 1198 } else { 1199 + val = axi_dma_ioread32(chan->chip, DMAC_CHSUSPREG); 1200 1200 val &= ~(BIT(chan->id) << DMAC_CHAN_SUSP2_SHIFT); 1201 1201 val |= (BIT(chan->id) << DMAC_CHAN_SUSP2_WE_SHIFT); 1202 1202 axi_dma_iowrite32(chan->chip, DMAC_CHSUSPREG, val);
+1 -4
drivers/dma/idxd/device.c
··· 716 716 struct idxd_wq *wq = idxd->wqs[i]; 717 717 718 718 mutex_lock(&wq->wq_lock); 719 - if (wq->state == IDXD_WQ_ENABLED) { 720 - idxd_wq_disable_cleanup(wq); 721 - wq->state = IDXD_WQ_DISABLED; 722 - } 719 + idxd_wq_disable_cleanup(wq); 723 720 idxd_wq_device_reset_cleanup(wq); 724 721 mutex_unlock(&wq->wq_lock); 725 722 }
+7 -6
drivers/dma/idxd/init.c
··· 512 512 dev_dbg(dev, "IDXD reset complete\n"); 513 513 514 514 if (IS_ENABLED(CONFIG_INTEL_IDXD_SVM) && sva) { 515 - if (iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA)) 515 + if (iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_SVA)) { 516 516 dev_warn(dev, "Unable to turn on user SVA feature.\n"); 517 - else 517 + } else { 518 518 set_bit(IDXD_FLAG_USER_PASID_ENABLED, &idxd->flags); 519 519 520 - if (idxd_enable_system_pasid(idxd)) 521 - dev_warn(dev, "No in-kernel DMA with PASID.\n"); 522 - else 523 - set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags); 520 + if (idxd_enable_system_pasid(idxd)) 521 + dev_warn(dev, "No in-kernel DMA with PASID.\n"); 522 + else 523 + set_bit(IDXD_FLAG_PASID_ENABLED, &idxd->flags); 524 + } 524 525 } else if (!sva) { 525 526 dev_warn(dev, "User forced SVA off via module param.\n"); 526 527 }
+2 -2
drivers/dma/imx-sdma.c
··· 891 891 * SDMA stops cyclic channel when DMA request triggers a channel and no SDMA 892 892 * owned buffer is available (i.e. BD_DONE was set too late). 893 893 */ 894 - if (!is_sdma_channel_enabled(sdmac->sdma, sdmac->channel)) { 894 + if (sdmac->desc && !is_sdma_channel_enabled(sdmac->sdma, sdmac->channel)) { 895 895 dev_warn(sdmac->sdma->dev, "restart cyclic channel %d\n", sdmac->channel); 896 896 sdma_enable_channel(sdmac->sdma, sdmac->channel); 897 897 } ··· 2346 2346 #if IS_ENABLED(CONFIG_SOC_IMX6Q) 2347 2347 MODULE_FIRMWARE("imx/sdma/sdma-imx6q.bin"); 2348 2348 #endif 2349 - #if IS_ENABLED(CONFIG_SOC_IMX7D) 2349 + #if IS_ENABLED(CONFIG_SOC_IMX7D) || IS_ENABLED(CONFIG_SOC_IMX8M) 2350 2350 MODULE_FIRMWARE("imx/sdma/sdma-imx7d.bin"); 2351 2351 #endif 2352 2352 MODULE_LICENSE("GPL");
+2 -1
drivers/dma/lgm/lgm-dma.c
··· 1593 1593 d->core_clk = devm_clk_get_optional(dev, NULL); 1594 1594 if (IS_ERR(d->core_clk)) 1595 1595 return PTR_ERR(d->core_clk); 1596 - clk_prepare_enable(d->core_clk); 1597 1596 1598 1597 d->rst = devm_reset_control_get_optional(dev, NULL); 1599 1598 if (IS_ERR(d->rst)) 1600 1599 return PTR_ERR(d->rst); 1600 + 1601 + clk_prepare_enable(d->core_clk); 1601 1602 reset_control_deassert(d->rst); 1602 1603 1603 1604 ret = devm_add_action_or_reset(dev, ldma_clk_disable, d);
+1 -1
drivers/dma/pl330.c
··· 2589 2589 2590 2590 /* If the DMAC pool is empty, alloc new */ 2591 2591 if (!desc) { 2592 - DEFINE_SPINLOCK(lock); 2592 + static DEFINE_SPINLOCK(lock); 2593 2593 LIST_HEAD(pool); 2594 2594 2595 2595 if (!add_desc(&pool, &lock, GFP_ATOMIC, 1))
+11 -28
drivers/dma/qcom/bam_dma.c
··· 558 558 return 0; 559 559 } 560 560 561 - static int bam_pm_runtime_get_sync(struct device *dev) 562 - { 563 - if (pm_runtime_enabled(dev)) 564 - return pm_runtime_get_sync(dev); 565 - 566 - return 0; 567 - } 568 - 569 561 /** 570 562 * bam_free_chan - Frees dma resources associated with specific channel 571 563 * @chan: specified channel ··· 573 581 unsigned long flags; 574 582 int ret; 575 583 576 - ret = bam_pm_runtime_get_sync(bdev->dev); 584 + ret = pm_runtime_get_sync(bdev->dev); 577 585 if (ret < 0) 578 586 return; 579 587 ··· 776 784 unsigned long flag; 777 785 int ret; 778 786 779 - ret = bam_pm_runtime_get_sync(bdev->dev); 787 + ret = pm_runtime_get_sync(bdev->dev); 780 788 if (ret < 0) 781 789 return ret; 782 790 ··· 802 810 unsigned long flag; 803 811 int ret; 804 812 805 - ret = bam_pm_runtime_get_sync(bdev->dev); 813 + ret = pm_runtime_get_sync(bdev->dev); 806 814 if (ret < 0) 807 815 return ret; 808 816 ··· 911 919 if (srcs & P_IRQ) 912 920 tasklet_schedule(&bdev->task); 913 921 914 - ret = bam_pm_runtime_get_sync(bdev->dev); 922 + ret = pm_runtime_get_sync(bdev->dev); 915 923 if (ret < 0) 916 924 return IRQ_NONE; 917 925 ··· 1029 1037 if (!vd) 1030 1038 return; 1031 1039 1032 - ret = bam_pm_runtime_get_sync(bdev->dev); 1040 + ret = pm_runtime_get_sync(bdev->dev); 1033 1041 if (ret < 0) 1034 1042 return; 1035 1043 ··· 1366 1374 if (ret) 1367 1375 goto err_unregister_dma; 1368 1376 1369 - if (!bdev->bamclk) { 1370 - pm_runtime_disable(&pdev->dev); 1371 - return 0; 1372 - } 1373 - 1374 1377 pm_runtime_irq_safe(&pdev->dev); 1375 1378 pm_runtime_set_autosuspend_delay(&pdev->dev, BAM_DMA_AUTOSUSPEND_DELAY); 1376 1379 pm_runtime_use_autosuspend(&pdev->dev); ··· 1449 1462 { 1450 1463 struct bam_device *bdev = dev_get_drvdata(dev); 1451 1464 1452 - if (bdev->bamclk) { 1453 - pm_runtime_force_suspend(dev); 1454 - clk_unprepare(bdev->bamclk); 1455 - } 1465 + pm_runtime_force_suspend(dev); 1466 + clk_unprepare(bdev->bamclk); 1456 1467 1457 1468 return 0; 
1458 1469 } ··· 1460 1475 struct bam_device *bdev = dev_get_drvdata(dev); 1461 1476 int ret; 1462 1477 1463 - if (bdev->bamclk) { 1464 - ret = clk_prepare(bdev->bamclk); 1465 - if (ret) 1466 - return ret; 1478 + ret = clk_prepare(bdev->bamclk); 1479 + if (ret) 1480 + return ret; 1467 1481 1468 - pm_runtime_force_resume(dev); 1469 - } 1482 + pm_runtime_force_resume(dev); 1470 1483 1471 1484 return 0; 1472 1485 }
+5
drivers/dma/ti/dma-crossbar.c
··· 245 245 if (dma_spec->args[0] >= xbar->xbar_requests) { 246 246 dev_err(&pdev->dev, "Invalid XBAR request number: %d\n", 247 247 dma_spec->args[0]); 248 + put_device(&pdev->dev); 248 249 return ERR_PTR(-EINVAL); 249 250 } 250 251 ··· 253 252 dma_spec->np = of_parse_phandle(ofdma->of_node, "dma-masters", 0); 254 253 if (!dma_spec->np) { 255 254 dev_err(&pdev->dev, "Can't get DMA master\n"); 255 + put_device(&pdev->dev); 256 256 return ERR_PTR(-EINVAL); 257 257 } 258 258 259 259 map = kzalloc(sizeof(*map), GFP_KERNEL); 260 260 if (!map) { 261 261 of_node_put(dma_spec->np); 262 + put_device(&pdev->dev); 262 263 return ERR_PTR(-ENOMEM); 263 264 } 264 265 ··· 271 268 mutex_unlock(&xbar->mutex); 272 269 dev_err(&pdev->dev, "Run out of free DMA requests\n"); 273 270 kfree(map); 271 + of_node_put(dma_spec->np); 272 + put_device(&pdev->dev); 274 273 return ERR_PTR(-ENOMEM); 275 274 } 276 275 set_bit(map->xbar_out, xbar->dma_inuse);
+3 -3
drivers/firmware/arm_scmi/bus.c
··· 181 181 return NULL; 182 182 } 183 183 184 - id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL); 184 + id = ida_alloc_min(&scmi_bus_id, 1, GFP_KERNEL); 185 185 if (id < 0) { 186 186 kfree_const(scmi_dev->name); 187 187 kfree(scmi_dev); ··· 204 204 put_dev: 205 205 kfree_const(scmi_dev->name); 206 206 put_device(&scmi_dev->dev); 207 - ida_simple_remove(&scmi_bus_id, id); 207 + ida_free(&scmi_bus_id, id); 208 208 return NULL; 209 209 } 210 210 ··· 212 212 { 213 213 kfree_const(scmi_dev->name); 214 214 scmi_handle_put(scmi_dev->handle); 215 - ida_simple_remove(&scmi_bus_id, scmi_dev->id); 215 + ida_free(&scmi_bus_id, scmi_dev->id); 216 216 device_unregister(&scmi_dev->dev); 217 217 } 218 218
+25 -1
drivers/firmware/arm_scmi/clock.c
··· 194 194 } 195 195 196 196 struct scmi_clk_ipriv { 197 + struct device *dev; 197 198 u32 clk_id; 198 199 struct scmi_clock_info *clk; 199 200 }; ··· 223 222 st->num_remaining = NUM_REMAINING(flags); 224 223 st->num_returned = NUM_RETURNED(flags); 225 224 p->clk->rate_discrete = RATE_DISCRETE(flags); 225 + 226 + /* Warn about out of spec replies ... */ 227 + if (!p->clk->rate_discrete && 228 + (st->num_returned != 3 || st->num_remaining != 0)) { 229 + dev_warn(p->dev, 230 + "Out-of-spec CLOCK_DESCRIBE_RATES reply for %s - returned:%d remaining:%d rx_len:%zd\n", 231 + p->clk->name, st->num_returned, st->num_remaining, 232 + st->rx_len); 233 + 234 + /* 235 + * A known quirk: a triplet is returned but num_returned != 3 236 + * Check for a safe payload size and fix. 237 + */ 238 + if (st->num_returned != 3 && st->num_remaining == 0 && 239 + st->rx_len == sizeof(*r) + sizeof(__le32) * 2 * 3) { 240 + st->num_returned = 3; 241 + st->num_remaining = 0; 242 + } else { 243 + dev_err(p->dev, 244 + "Cannot fix out-of-spec reply !\n"); 245 + return -EPROTO; 246 + } 247 + } 226 248 227 249 return 0; 228 250 } ··· 279 255 280 256 *rate = RATE_TO_U64(r->rate[st->loop_idx]); 281 257 p->clk->list.num_rates++; 282 - //XXX dev_dbg(ph->dev, "Rate %llu Hz\n", *rate); 283 258 } 284 259 285 260 return ret; ··· 298 275 struct scmi_clk_ipriv cpriv = { 299 276 .clk_id = clk_id, 300 277 .clk = clk, 278 + .dev = ph->dev, 301 279 }; 302 280 303 281 iter = ph->hops->iter_response_init(ph, &ops, SCMI_MAX_NUM_RATES,
+1
drivers/firmware/arm_scmi/driver.c
··· 1223 1223 if (ret) 1224 1224 break; 1225 1225 1226 + st->rx_len = i->t->rx.len; 1226 1227 ret = iops->update_state(st, i->resp, i->priv); 1227 1228 if (ret) 1228 1229 break;
+6 -1
drivers/firmware/arm_scmi/optee.c
··· 117 117 u32 channel_id; 118 118 u32 tee_session; 119 119 u32 caps; 120 + u32 rx_len; 120 121 struct mutex mu; 121 122 struct scmi_chan_info *cinfo; 122 123 union { ··· 303 302 return -EIO; 304 303 } 305 304 305 + /* Save response size */ 306 + channel->rx_len = param[2].u.memref.size; 307 + 306 308 return 0; 307 309 } 308 310 ··· 357 353 shbuf = tee_shm_get_va(channel->tee_shm, 0); 358 354 memset(shbuf, 0, msg_size); 359 355 channel->req.msg = shbuf; 356 + channel->rx_len = msg_size; 360 357 361 358 return 0; 362 359 } ··· 513 508 struct scmi_optee_channel *channel = cinfo->transport_info; 514 509 515 510 if (channel->tee_shm) 516 - msg_fetch_response(channel->req.msg, SCMI_OPTEE_MAX_MSG_SIZE, xfer); 511 + msg_fetch_response(channel->req.msg, channel->rx_len, xfer); 517 512 else 518 513 shmem_fetch_response(channel->req.shmem, xfer); 519 514 }
+3
drivers/firmware/arm_scmi/protocols.h
··· 179 179 * @max_resources: Maximum acceptable number of items, configured by the caller 180 180 * depending on the underlying resources that it is querying. 181 181 * @loop_idx: The iterator loop index in the current multi-part reply. 182 + * @rx_len: Size in bytes of the currenly processed message; it can be used by 183 + * the user of the iterator to verify a reply size. 182 184 * @priv: Optional pointer to some additional state-related private data setup 183 185 * by the caller during the iterations. 184 186 */ ··· 190 188 unsigned int num_remaining; 191 189 unsigned int max_resources; 192 190 unsigned int loop_idx; 191 + size_t rx_len; 193 192 void *priv; 194 193 }; 195 194
+50 -8
drivers/firmware/sysfb.c
··· 34 34 #include <linux/screen_info.h> 35 35 #include <linux/sysfb.h> 36 36 37 + static struct platform_device *pd; 38 + static DEFINE_MUTEX(disable_lock); 39 + static bool disabled; 40 + 41 + static bool sysfb_unregister(void) 42 + { 43 + if (IS_ERR_OR_NULL(pd)) 44 + return false; 45 + 46 + platform_device_unregister(pd); 47 + pd = NULL; 48 + 49 + return true; 50 + } 51 + 52 + /** 53 + * sysfb_disable() - disable the Generic System Framebuffers support 54 + * 55 + * This disables the registration of system framebuffer devices that match the 56 + * generic drivers that make use of the system framebuffer set up by firmware. 57 + * 58 + * It also unregisters a device if this was already registered by sysfb_init(). 59 + * 60 + * Context: The function can sleep. A @disable_lock mutex is acquired to serialize 61 + * against sysfb_init(), that registers a system framebuffer device. 62 + */ 63 + void sysfb_disable(void) 64 + { 65 + mutex_lock(&disable_lock); 66 + sysfb_unregister(); 67 + disabled = true; 68 + mutex_unlock(&disable_lock); 69 + } 70 + EXPORT_SYMBOL_GPL(sysfb_disable); 71 + 37 72 static __init int sysfb_init(void) 38 73 { 39 74 struct screen_info *si = &screen_info; 40 75 struct simplefb_platform_data mode; 41 - struct platform_device *pd; 42 76 const char *name; 43 77 bool compatible; 44 - int ret; 78 + int ret = 0; 79 + 80 + mutex_lock(&disable_lock); 81 + if (disabled) 82 + goto unlock_mutex; 45 83 46 84 /* try to create a simple-framebuffer device */ 47 85 compatible = sysfb_parse_mode(si, &mode); 48 86 if (compatible) { 49 - ret = sysfb_create_simplefb(si, &mode); 50 - if (!ret) 51 - return 0; 87 + pd = sysfb_create_simplefb(si, &mode); 88 + if (!IS_ERR(pd)) 89 + goto unlock_mutex; 52 90 } 53 91 54 92 /* if the FB is incompatible, create a legacy framebuffer device */ ··· 98 60 name = "platform-framebuffer"; 99 61 100 62 pd = platform_device_alloc(name, 0); 101 - if (!pd) 102 - return -ENOMEM; 63 + if (!pd) { 64 + ret = -ENOMEM; 65 + goto unlock_mutex; 66 + } 103 67 104 68 sysfb_apply_efi_quirks(pd); 105 69 ··· 113 73 if (ret) 114 74 goto err; 115 75 116 - return 0; 76 + goto unlock_mutex; 117 77 err: 118 78 platform_device_put(pd); 79 + unlock_mutex: 80 + mutex_unlock(&disable_lock); 119 81 return ret; 120 82 } 121 83
+8 -8
drivers/firmware/sysfb_simplefb.c
··· 57 57 return false; 58 58 } 59 59 60 - __init int sysfb_create_simplefb(const struct screen_info *si, 61 - const struct simplefb_platform_data *mode) 60 + __init struct platform_device *sysfb_create_simplefb(const struct screen_info *si, 61 + const struct simplefb_platform_data *mode) 62 62 { 63 63 struct platform_device *pd; 64 64 struct resource res; ··· 76 76 base |= (u64)si->ext_lfb_base << 32; 77 77 if (!base || (u64)(resource_size_t)base != base) { 78 78 printk(KERN_DEBUG "sysfb: inaccessible VRAM base\n"); 79 - return -EINVAL; 79 + return ERR_PTR(-EINVAL); 80 80 } 81 81 82 82 /* ··· 93 93 length = mode->height * mode->stride; 94 94 if (length > size) { 95 95 printk(KERN_WARNING "sysfb: VRAM smaller than advertised\n"); 96 - return -EINVAL; 96 + return ERR_PTR(-EINVAL); 97 97 } 98 98 length = PAGE_ALIGN(length); 99 99 ··· 104 104 res.start = base; 105 105 res.end = res.start + length - 1; 106 106 if (res.end <= res.start) 107 - return -EINVAL; 107 + return ERR_PTR(-EINVAL); 108 108 109 109 pd = platform_device_alloc("simple-framebuffer", 0); 110 110 if (!pd) 111 - return -ENOMEM; 111 + return ERR_PTR(-ENOMEM); 112 112 113 113 sysfb_apply_efi_quirks(pd); 114 114 ··· 124 124 if (ret) 125 125 goto err_put_device; 126 126 127 - return 0; 127 + return pd; 128 128 129 129 err_put_device: 130 130 platform_device_put(pd); 131 131 132 - return ret; 132 + return ERR_PTR(ret); 133 133 }
+1
drivers/gpio/gpio-vf610.c
··· 19 19 #include <linux/of.h> 20 20 #include <linux/of_device.h> 21 21 #include <linux/of_irq.h> 22 + #include <linux/pinctrl/consumer.h> 22 23 23 24 #define VF610_GPIO_PER_PORT 32 24 25
+4 -3
drivers/gpio/gpiolib-cdev.c
··· 1460 1460 static void linereq_free(struct linereq *lr) 1461 1461 { 1462 1462 unsigned int i; 1463 - bool hte; 1463 + bool hte = false; 1464 1464 1465 1465 for (i = 0; i < lr->num_lines; i++) { 1466 - hte = !!test_bit(FLAG_EVENT_CLOCK_HTE, 1467 - &lr->lines[i].desc->flags); 1466 + if (lr->lines[i].desc) 1467 + hte = !!test_bit(FLAG_EVENT_CLOCK_HTE, 1468 + &lr->lines[i].desc->flags); 1468 1469 edge_detector_stop(&lr->lines[i], hte); 1469 1470 if (lr->lines[i].desc) 1470 1471 gpiod_free(lr->lines[i].desc);
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
··· 714 714 { 715 715 bool all_hub = false; 716 716 717 - if (adev->family == AMDGPU_FAMILY_AI) 717 + if (adev->family == AMDGPU_FAMILY_AI || 718 + adev->family == AMDGPU_FAMILY_RV) 718 719 all_hub = true; 719 720 720 721 return amdgpu_gmc_flush_gpu_tlb_pasid(adev, pasid, flush_type, all_hub);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 5164 5164 */ 5165 5165 amdgpu_unregister_gpu_instance(tmp_adev); 5166 5166 5167 - drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, true); 5167 + drm_fb_helper_set_suspend_unlocked(adev_to_drm(tmp_adev)->fb_helper, true); 5168 5168 5169 5169 /* disable ras on ALL IPs */ 5170 5170 if (!need_emergency_restart &&
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
··· 320 320 if (!amdgpu_device_has_dc_support(adev)) { 321 321 if (!adev->enable_virtual_display) 322 322 /* Disable vblank IRQs aggressively for power-saving */ 323 + /* XXX: can this be enabled for DC? */ 323 324 adev_to_drm(adev)->vblank_disable_immediate = true; 324 325 325 326 r = drm_vblank_init(adev_to_drm(adev), adev->mode_info.num_crtc);
-3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4259 4259 } 4260 4260 } 4261 4261 4262 - /* Disable vblank IRQs aggressively for power-saving. */ 4263 - adev_to_drm(adev)->vblank_disable_immediate = true; 4264 - 4265 4262 /* loops over all connectors on the board */ 4266 4263 for (i = 0; i < link_cnt; i++) { 4267 4264 struct dc_link *link = NULL;
+15 -11
drivers/gpu/drm/drm_aperture.c
··· 329 329 const struct drm_driver *req_driver) 330 330 { 331 331 resource_size_t base, size; 332 - int bar, ret = 0; 332 + int bar, ret; 333 + 334 + /* 335 + * WARNING: Apparently we must kick fbdev drivers before vgacon, 336 + * otherwise the vga fbdev driver falls over. 337 + */ 338 + #if IS_REACHABLE(CONFIG_FB) 339 + ret = remove_conflicting_pci_framebuffers(pdev, req_driver->name); 340 + if (ret) 341 + return ret; 342 + #endif 343 + ret = vga_remove_vgacon(pdev); 344 + if (ret) 345 + return ret; 333 346 334 347 for (bar = 0; bar < PCI_STD_NUM_BARS; ++bar) { 335 348 if (!(pci_resource_flags(pdev, bar) & IORESOURCE_MEM)) ··· 352 339 drm_aperture_detach_drivers(base, size); 353 340 } 354 341 355 - /* 356 - * WARNING: Apparently we must kick fbdev drivers before vgacon, 357 - * otherwise the vga fbdev driver falls over. 358 - */ 359 - #if IS_REACHABLE(CONFIG_FB) 360 - ret = remove_conflicting_pci_framebuffers(pdev, req_driver->name); 361 - #endif 362 - if (ret == 0) 363 - ret = vga_remove_vgacon(pdev); 364 - return ret; 342 + return 0; 365 343 } 366 344 EXPORT_SYMBOL(drm_aperture_remove_conflicting_pci_framebuffers);
+3 -2
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 933 933 case I915_CONTEXT_PARAM_PERSISTENCE: 934 934 if (args->size) 935 935 ret = -EINVAL; 936 - ret = proto_context_set_persistence(fpriv->dev_priv, pc, 937 - args->value); 936 + else 937 + ret = proto_context_set_persistence(fpriv->dev_priv, pc, 938 + args->value); 938 939 break; 939 940 940 941 case I915_CONTEXT_PARAM_PROTECTED_CONTENT:
+3 -3
drivers/gpu/drm/i915/gem/i915_gem_domain.c
··· 35 35 if (obj->cache_dirty) 36 36 return false; 37 37 38 - if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE)) 39 - return true; 40 - 41 38 if (IS_DGFX(i915)) 42 39 return false; 40 + 41 + if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE)) 42 + return true; 43 43 44 44 /* Currently in use by HW (display engine)? Keep flushed. */ 45 45 return i915_gem_object_is_framebuffer(obj);
+15 -19
drivers/gpu/drm/i915/i915_driver.c
··· 530 530 static int i915_driver_hw_probe(struct drm_i915_private *dev_priv) 531 531 { 532 532 struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); 533 + struct pci_dev *root_pdev; 533 534 int ret; 534 535 535 536 if (i915_inject_probe_failure(dev_priv)) ··· 642 641 643 642 intel_bw_init_hw(dev_priv); 644 643 644 + /* 645 + * FIXME: Temporary hammer to avoid freezing the machine on our DGFX 646 + * This should be totally removed when we handle the pci states properly 647 + * on runtime PM and on s2idle cases. 648 + */ 649 + root_pdev = pcie_find_root_port(pdev); 650 + if (root_pdev) 651 + pci_d3cold_disable(root_pdev); 652 + 645 653 return 0; 646 654 647 655 err_msi: ··· 674 664 static void i915_driver_hw_remove(struct drm_i915_private *dev_priv) 675 665 { 676 666 struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); 667 + struct pci_dev *root_pdev; 677 668 678 669 i915_perf_fini(dev_priv); 679 670 680 671 if (pdev->msi_enabled) 681 672 pci_disable_msi(pdev); 673 + 674 + root_pdev = pcie_find_root_port(pdev); 675 + if (root_pdev) 676 + pci_d3cold_enable(root_pdev); 682 677 } 683 678 684 679 /** ··· 1208 1193 goto out; 1209 1194 } 1210 1195 1211 - /* 1212 - * FIXME: Temporary hammer to avoid freezing the machine on our DGFX 1213 - * This should be totally removed when we handle the pci states properly 1214 - * on runtime PM and on s2idle cases. 1215 - */ 1216 - if (suspend_to_idle(dev_priv)) 1217 - pci_d3cold_disable(pdev); 1218 - 1219 1196 pci_disable_device(pdev); 1220 1197 /* 1221 1198 * During hibernation on some platforms the BIOS may try to access ··· 1371 1364 return -EIO; 1372 1365 1373 1366 pci_set_master(pdev); 1374 - 1375 - pci_d3cold_enable(pdev); 1376 1367 1377 1368 disable_rpm_wakeref_asserts(&dev_priv->runtime_pm); 1378 1369 ··· 1548 1543 { 1549 1544 struct drm_i915_private *dev_priv = kdev_to_i915(kdev); 1550 1545 struct intel_runtime_pm *rpm = &dev_priv->runtime_pm; 1551 - struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); 1552 1546 int ret; 1553 1547 1554 1548 if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv))) ··· 1593 1589 drm_err(&dev_priv->drm, 1594 1590 "Unclaimed access detected prior to suspending\n"); 1595 1591 1596 - /* 1597 - * FIXME: Temporary hammer to avoid freezing the machine on our DGFX 1598 - * This should be totally removed when we handle the pci states properly 1599 - * on runtime PM and on s2idle cases. 1600 - */ 1601 - pci_d3cold_disable(pdev); 1602 1592 rpm->suspended = true; 1603 1593 1604 1594 /* ··· 1631 1633 { 1632 1634 struct drm_i915_private *dev_priv = kdev_to_i915(kdev); 1633 1635 struct intel_runtime_pm *rpm = &dev_priv->runtime_pm; 1634 - struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); 1635 1636 int ret; 1636 1637 1637 1638 if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv))) ··· 1643 1646 1644 1647 intel_opregion_notify_adapter(dev_priv, PCI_D0); 1645 1648 rpm->suspended = false; 1646 - pci_d3cold_enable(pdev); 1647 1649 if (intel_uncore_unclaimed_mmio(&dev_priv->uncore)) 1648 1650 drm_dbg(&dev_priv->drm, 1649 1651 "Unclaimed access during suspend, bios?\n");
+2 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
··· 1251 1251 DPU_ATRACE_BEGIN("encoder_vblank_callback"); 1252 1252 dpu_enc = to_dpu_encoder_virt(drm_enc); 1253 1253 1254 + atomic_inc(&phy_enc->vsync_cnt); 1255 + 1254 1256 spin_lock_irqsave(&dpu_enc->enc_spinlock, lock_flags); 1255 1257 if (dpu_enc->crtc) 1256 1258 dpu_crtc_vblank_callback(dpu_enc->crtc); 1257 1259 spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags); 1258 1260 1259 - atomic_inc(&phy_enc->vsync_cnt); 1260 1261 DPU_ATRACE_END("encoder_vblank_callback"); 1261 1262 } 1262 1263
+5 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
··· 252 252 DPU_DEBUG("[atomic_check:%d, \"%s\",%d,%d]\n", 253 253 phys_enc->wb_idx, mode->name, mode->hdisplay, mode->vdisplay); 254 254 255 - if (!conn_state->writeback_job || !conn_state->writeback_job->fb) 256 - return 0; 257 - 258 - fb = conn_state->writeback_job->fb; 259 - 260 255 if (!conn_state || !conn_state->connector) { 261 256 DPU_ERROR("invalid connector state\n"); 262 257 return -EINVAL; ··· 261 266 conn_state->connector->status); 262 267 return -EINVAL; 263 268 } 269 + 270 + if (!conn_state->writeback_job || !conn_state->writeback_job->fb) 271 + return 0; 272 + 273 + fb = conn_state->writeback_job->fb; 264 274 265 275 DPU_DEBUG("[fb_id:%u][fb:%u,%u]\n", fb->base.id, 266 276 fb->width, fb->height);
+2
drivers/gpu/drm/msm/dp/dp_display.c
··· 316 316 317 317 dp_power_client_deinit(dp->power); 318 318 dp_aux_unregister(dp->aux); 319 + dp->drm_dev = NULL; 320 + dp->aux->drm_dev = NULL; 319 321 priv->dp[dp->id] = NULL; 320 322 } 321 323
+1 -1
drivers/gpu/drm/msm/msm_gem_submit.c
··· 928 928 INT_MAX, GFP_KERNEL); 929 929 } 930 930 if (submit->fence_id < 0) { 931 - ret = submit->fence_id = 0; 931 + ret = submit->fence_id; 932 932 submit->fence_id = 0; 933 933 } 934 934
+6 -3
drivers/gpu/drm/vc4/vc4_perfmon.c
··· 17 17 18 18 void vc4_perfmon_get(struct vc4_perfmon *perfmon) 19 19 { 20 - struct vc4_dev *vc4 = perfmon->dev; 20 + struct vc4_dev *vc4; 21 21 22 + if (!perfmon) 23 + return; 24 + 25 + vc4 = perfmon->dev; 22 26 if (WARN_ON_ONCE(vc4->is_vc5)) 23 27 return; 24 28 25 - if (perfmon) 26 - refcount_inc(&perfmon->refcnt); 29 + refcount_inc(&perfmon->refcnt); 27 30 } 28 31 29 32 void vc4_perfmon_put(struct vc4_perfmon *perfmon)
+8 -4
drivers/hwmon/ibmaem.c
··· 550 550 551 551 res = platform_device_add(data->pdev); 552 552 if (res) 553 - goto ipmi_err; 553 + goto dev_add_err; 554 554 555 555 platform_set_drvdata(data->pdev, data); 556 556 ··· 598 598 ipmi_destroy_user(data->ipmi.user); 599 599 ipmi_err: 600 600 platform_set_drvdata(data->pdev, NULL); 601 - platform_device_unregister(data->pdev); 601 + platform_device_del(data->pdev); 602 + dev_add_err: 603 + platform_device_put(data->pdev); 602 604 dev_err: 603 605 ida_free(&aem_ida, data->id); 604 606 id_err: ··· 692 690 693 691 res = platform_device_add(data->pdev); 694 692 if (res) 695 - goto ipmi_err; 693 + goto dev_add_err; 696 694 697 695 platform_set_drvdata(data->pdev, data); 698 696 ··· 740 738 ipmi_destroy_user(data->ipmi.user); 741 739 ipmi_err: 742 740 platform_set_drvdata(data->pdev, NULL); 743 - platform_device_unregister(data->pdev); 741 + platform_device_del(data->pdev); 742 + dev_add_err: 743 + platform_device_put(data->pdev); 744 744 dev_err: 745 745 ida_free(&aem_ida, data->id); 746 746 id_err:
+3 -2
drivers/hwmon/occ/common.c
··· 145 145 cmd[6] = 0; /* checksum lsb */ 146 146 147 147 /* mutex should already be locked if necessary */ 148 - rc = occ->send_cmd(occ, cmd, sizeof(cmd)); 148 + rc = occ->send_cmd(occ, cmd, sizeof(cmd), &occ->resp, sizeof(occ->resp)); 149 149 if (rc) { 150 150 occ->last_error = rc; 151 151 if (occ->error_count++ > OCC_ERROR_COUNT_THRESHOLD) ··· 182 182 { 183 183 int rc; 184 184 u8 cmd[8]; 185 + u8 resp[8]; 185 186 __be16 user_power_cap_be = cpu_to_be16(user_power_cap); 186 187 187 188 cmd[0] = 0; /* sequence number */ ··· 199 198 if (rc) 200 199 return rc; 201 200 202 - rc = occ->send_cmd(occ, cmd, sizeof(cmd)); 201 + rc = occ->send_cmd(occ, cmd, sizeof(cmd), resp, sizeof(resp)); 203 202 204 203 mutex_unlock(&occ->lock); 205 204
+2 -1
drivers/hwmon/occ/common.h
··· 96 96 97 97 int powr_sample_time_us; /* average power sample time */ 98 98 u8 poll_cmd_data; /* to perform OCC poll command */ 99 - int (*send_cmd)(struct occ *occ, u8 *cmd, size_t len); 99 + int (*send_cmd)(struct occ *occ, u8 *cmd, size_t len, void *resp, 100 + size_t resp_len); 100 101 101 102 unsigned long next_update; 102 103 struct mutex lock; /* lock OCC access */
+7 -6
drivers/hwmon/occ/p8_i2c.c
··· 111 111 be32_to_cpu(data1)); 112 112 } 113 113 114 - static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len) 114 + static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len, 115 + void *resp, size_t resp_len) 115 116 { 116 117 int i, rc; 117 118 unsigned long start; ··· 121 120 const long wait_time = msecs_to_jiffies(OCC_CMD_IN_PRG_WAIT_MS); 122 121 struct p8_i2c_occ *ctx = to_p8_i2c_occ(occ); 123 122 struct i2c_client *client = ctx->client; 124 - struct occ_response *resp = &occ->resp; 123 + struct occ_response *or = (struct occ_response *)resp; 125 124 126 125 start = jiffies; 127 126 ··· 152 151 return rc; 153 152 154 153 /* wait for OCC */ 155 - if (resp->return_status == OCC_RESP_CMD_IN_PRG) { 154 + if (or->return_status == OCC_RESP_CMD_IN_PRG) { 156 155 rc = -EALREADY; 157 156 158 157 if (time_after(jiffies, start + timeout)) ··· 164 163 } while (rc); 165 164 166 165 /* check the OCC response */ 167 - switch (resp->return_status) { 166 + switch (or->return_status) { 168 167 case OCC_RESP_CMD_IN_PRG: 169 168 rc = -ETIMEDOUT; 170 169 break; ··· 193 192 if (rc < 0) 194 193 return rc; 195 194 196 - data_length = get_unaligned_be16(&resp->data_length); 197 - if (data_length > OCC_RESP_DATA_BYTES) 195 + data_length = get_unaligned_be16(&or->data_length); 196 + if ((data_length + 7) > resp_len) 198 197 return -EMSGSIZE; 199 198 200 199 /* fetch the rest of the response data */
+3 -4
drivers/hwmon/occ/p9_sbe.c
··· 78 78 return notify; 79 79 } 80 80 81 - static int p9_sbe_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len) 81 + static int p9_sbe_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len, 82 + void *resp, size_t resp_len) 82 83 { 83 - struct occ_response *resp = &occ->resp; 84 84 struct p9_sbe_occ *ctx = to_p9_sbe_occ(occ); 85 - size_t resp_len = sizeof(*resp); 86 85 int rc; 87 86 88 87 rc = fsi_occ_submit(ctx->sbe, cmd, len, resp, &resp_len); ··· 95 96 return rc; 96 97 } 97 98 98 - switch (resp->return_status) { 99 + switch (((struct occ_response *)resp)->return_status) { 99 100 case OCC_RESP_CMD_IN_PRG: 100 101 rc = -ETIMEDOUT; 101 102 break;
+1 -1
drivers/hwmon/pmbus/ucd9200.c
··· 148 148 * This only affects the READ_IOUT and READ_TEMPERATURE2 registers. 149 149 * READ_IOUT will return the sum of currents of all phases of a rail, 150 150 * and READ_TEMPERATURE2 will return the maximum temperature detected 151 - * for the the phases of the rail. 151 + * for the phases of the rail. 152 152 */ 153 153 for (i = 0; i < info->pages; i++) { 154 154 /*
+1
drivers/i2c/busses/i2c-cadence.c
··· 1338 1338 return 0; 1339 1339 1340 1340 err_clk_dis: 1341 + clk_notifier_unregister(id->clk, &id->clk_rate_change_nb); 1341 1342 clk_disable_unprepare(id->clk); 1342 1343 pm_runtime_disable(&pdev->dev); 1343 1344 pm_runtime_set_suspended(&pdev->dev);
+7 -9
drivers/i2c/busses/i2c-piix4.c
··· 161 161 162 162 struct sb800_mmio_cfg { 163 163 void __iomem *addr; 164 - struct resource *res; 165 164 bool use_mmio; 166 165 }; 167 166 ··· 178 179 struct sb800_mmio_cfg *mmio_cfg) 179 180 { 180 181 if (mmio_cfg->use_mmio) { 181 - struct resource *res; 182 182 void __iomem *addr; 183 183 184 - res = request_mem_region_muxed(SB800_PIIX4_FCH_PM_ADDR, 185 - SB800_PIIX4_FCH_PM_SIZE, 186 - "sb800_piix4_smb"); 187 - if (!res) { 184 + if (!request_mem_region_muxed(SB800_PIIX4_FCH_PM_ADDR, 185 + SB800_PIIX4_FCH_PM_SIZE, 186 + "sb800_piix4_smb")) { 188 187 dev_err(dev, 189 188 "SMBus base address memory region 0x%x already in use.\n", 190 189 SB800_PIIX4_FCH_PM_ADDR); ··· 192 195 addr = ioremap(SB800_PIIX4_FCH_PM_ADDR, 193 196 SB800_PIIX4_FCH_PM_SIZE); 194 197 if (!addr) { 195 - release_resource(res); 198 + release_mem_region(SB800_PIIX4_FCH_PM_ADDR, 199 + SB800_PIIX4_FCH_PM_SIZE); 196 200 dev_err(dev, "SMBus base address mapping failed.\n"); 197 201 return -ENOMEM; 198 202 } 199 203 200 - mmio_cfg->res = res; 201 204 mmio_cfg->addr = addr; 202 205 203 206 return 0; ··· 219 222 { 220 223 if (mmio_cfg->use_mmio) { 221 224 iounmap(mmio_cfg->addr); 222 - release_resource(mmio_cfg->res); 225 + release_mem_region(SB800_PIIX4_FCH_PM_ADDR, 226 + SB800_PIIX4_FCH_PM_SIZE); 223 227 return; 224 228 } 225 229
+3 -1
drivers/infiniband/core/cm.c
··· 1252 1252 return ERR_CAST(cm_id_priv); 1253 1253 1254 1254 err = cm_init_listen(cm_id_priv, service_id, 0); 1255 - if (err) 1255 + if (err) { 1256 + ib_destroy_cm_id(&cm_id_priv->id); 1256 1257 return ERR_PTR(err); 1258 + } 1257 1259 1258 1260 spin_lock_irq(&cm_id_priv->lock); 1259 1261 listen_id_priv = cm_insert_listen(cm_id_priv, cm_handler);
+1
drivers/infiniband/hw/qedr/qedr.h
··· 418 418 u32 sq_psn; 419 419 u32 qkey; 420 420 u32 dest_qp_num; 421 + u8 timeout; 421 422 422 423 /* Relevant to qps created from kernel space only (ULPs) */ 423 424 u8 prev_wqe_size;
+3 -1
drivers/infiniband/hw/qedr/verbs.c
··· 2613 2613 1 << max_t(int, attr->timeout - 8, 0); 2614 2614 else 2615 2615 qp_params.ack_timeout = 0; 2616 + 2617 + qp->timeout = attr->timeout; 2616 2618 } 2617 2619 2618 2620 if (attr_mask & IB_QP_RETRY_CNT) { ··· 2774 2772 rdma_ah_set_dgid_raw(&qp_attr->ah_attr, &params.dgid.bytes[0]); 2775 2773 rdma_ah_set_port_num(&qp_attr->ah_attr, 1); 2776 2774 rdma_ah_set_sl(&qp_attr->ah_attr, 0); 2777 - qp_attr->timeout = params.timeout; 2775 + qp_attr->timeout = qp->timeout; 2778 2776 qp_attr->rnr_retry = params.rnr_retry; 2779 2777 qp_attr->retry_cnt = params.retry_cnt; 2780 2778 qp_attr->min_rnr_timer = params.min_rnr_nak_timer;
+1 -1
drivers/iommu/intel/dmar.c
··· 382 382 383 383 static struct notifier_block dmar_pci_bus_nb = { 384 384 .notifier_call = dmar_pci_bus_notifier, 385 - .priority = INT_MIN, 385 + .priority = 1, 386 386 }; 387 387 388 388 static struct dmar_drhd_unit *
-24
drivers/iommu/intel/iommu.c
··· 320 320 DEFINE_SPINLOCK(device_domain_lock); 321 321 static LIST_HEAD(device_domain_list); 322 322 323 - /* 324 - * Iterate over elements in device_domain_list and call the specified 325 - * callback @fn against each element. 326 - */ 327 - int for_each_device_domain(int (*fn)(struct device_domain_info *info, 328 - void *data), void *data) 329 - { 330 - int ret = 0; 331 - unsigned long flags; 332 - struct device_domain_info *info; 333 - 334 - spin_lock_irqsave(&device_domain_lock, flags); 335 - list_for_each_entry(info, &device_domain_list, global) { 336 - ret = fn(info, data); 337 - if (ret) { 338 - spin_unlock_irqrestore(&device_domain_lock, flags); 339 - return ret; 340 - } 341 - } 342 - spin_unlock_irqrestore(&device_domain_lock, flags); 343 - 344 - return 0; 345 - } 346 - 347 323 const struct iommu_ops intel_iommu_ops; 348 324 349 325 static bool translation_pre_enabled(struct intel_iommu *iommu)
+3 -66
drivers/iommu/intel/pasid.c
··· 86 86 /* 87 87 * Per device pasid table management: 88 88 */ 89 - static inline void 90 - device_attach_pasid_table(struct device_domain_info *info, 91 - struct pasid_table *pasid_table) 92 - { 93 - info->pasid_table = pasid_table; 94 - list_add(&info->table, &pasid_table->dev); 95 - } 96 - 97 - static inline void 98 - device_detach_pasid_table(struct device_domain_info *info, 99 - struct pasid_table *pasid_table) 100 - { 101 - info->pasid_table = NULL; 102 - list_del(&info->table); 103 - } 104 - 105 - struct pasid_table_opaque { 106 - struct pasid_table **pasid_table; 107 - int segment; 108 - int bus; 109 - int devfn; 110 - }; 111 - 112 - static int search_pasid_table(struct device_domain_info *info, void *opaque) 113 - { 114 - struct pasid_table_opaque *data = opaque; 115 - 116 - if (info->iommu->segment == data->segment && 117 - info->bus == data->bus && 118 - info->devfn == data->devfn && 119 - info->pasid_table) { 120 - *data->pasid_table = info->pasid_table; 121 - return 1; 122 - } 123 - 124 - return 0; 125 - } 126 - 127 - static int get_alias_pasid_table(struct pci_dev *pdev, u16 alias, void *opaque) 128 - { 129 - struct pasid_table_opaque *data = opaque; 130 - 131 - data->segment = pci_domain_nr(pdev->bus); 132 - data->bus = PCI_BUS_NUM(alias); 133 - data->devfn = alias & 0xff; 134 - 135 - return for_each_device_domain(&search_pasid_table, data); 136 - } 137 89 138 90 /* 139 91 * Allocate a pasid table for @dev. It should be called in a ··· 95 143 { 96 144 struct device_domain_info *info; 97 145 struct pasid_table *pasid_table; 98 - struct pasid_table_opaque data; 99 146 struct page *pages; 100 147 u32 max_pasid = 0; 101 - int ret, order; 102 - int size; 148 + int order, size; 103 149 104 150 might_sleep(); 105 151 info = dev_iommu_priv_get(dev); 106 152 if (WARN_ON(!info || !dev_is_pci(dev) || info->pasid_table)) 107 153 return -EINVAL; 108 154 109 - /* DMA alias device already has a pasid table, use it: */ 110 - data.pasid_table = &pasid_table; 111 - ret = pci_for_each_dma_alias(to_pci_dev(dev), 112 - &get_alias_pasid_table, &data); 113 - if (ret) 114 - goto attach_out; 115 - 116 155 pasid_table = kzalloc(sizeof(*pasid_table), GFP_KERNEL); 117 156 if (!pasid_table) 118 157 return -ENOMEM; 119 - INIT_LIST_HEAD(&pasid_table->dev); 120 158 121 159 if (info->pasid_supported) 122 160 max_pasid = min_t(u32, pci_max_pasids(to_pci_dev(dev)), ··· 124 182 pasid_table->table = page_address(pages); 125 183 pasid_table->order = order; 126 184 pasid_table->max_pasid = 1 << (order + PAGE_SHIFT + 3); 127 - 128 - attach_out: 129 - device_attach_pasid_table(info, pasid_table); 185 + info->pasid_table = pasid_table; 130 186 131 187 return 0; 132 188 } ··· 142 202 return; 143 203 144 204 pasid_table = info->pasid_table; 145 - device_detach_pasid_table(info, pasid_table); 146 - 147 - if (!list_empty(&pasid_table->dev)) 148 - return; 205 + info->pasid_table = NULL; 149 206 150 207 /* Free scalable mode PASID directory tables: */ 151 208 dir = pasid_table->table;
-1
drivers/iommu/intel/pasid.h
··· 74 74 void *table; /* pasid table pointer */ 75 75 int order; /* page order of pasid table */ 76 76 u32 max_pasid; /* max pasid */ 77 - struct list_head dev; /* device list */ 78 77 }; 79 78 80 79 /* Get PRESENT bit of a PASID directory entry. */
+1 -1
drivers/irqchip/Kconfig
··· 298 298 299 299 config XILINX_INTC 300 300 bool "Xilinx Interrupt Controller IP" 301 - depends on OF 301 + depends on OF_ADDRESS 302 302 select IRQ_DOMAIN 303 303 help 304 304 Support for the Xilinx Interrupt Controller IP core.
+1 -1
drivers/irqchip/irq-apple-aic.c
··· 228 228 #define AIC_TMR_EL02_PHYS AIC_TMR_GUEST_PHYS 229 229 #define AIC_TMR_EL02_VIRT AIC_TMR_GUEST_VIRT 230 230 231 - DEFINE_STATIC_KEY_TRUE(use_fast_ipi); 231 + static DEFINE_STATIC_KEY_TRUE(use_fast_ipi); 232 232 233 233 struct aic_info { 234 234 int version;
+31 -10
drivers/irqchip/irq-gic-v3.c
··· 2042 2042 vgic_set_kvm_info(&gic_v3_kvm_info); 2043 2043 } 2044 2044 2045 + static void gic_request_region(resource_size_t base, resource_size_t size, 2046 + const char *name) 2047 + { 2048 + if (!request_mem_region(base, size, name)) 2049 + pr_warn_once(FW_BUG "%s region %pa has overlapping address\n", 2050 + name, &base); 2051 + } 2052 + 2053 + static void __iomem *gic_of_iomap(struct device_node *node, int idx, 2054 + const char *name, struct resource *res) 2055 + { 2056 + void __iomem *base; 2057 + int ret; 2058 + 2059 + ret = of_address_to_resource(node, idx, res); 2060 + if (ret) 2061 + return IOMEM_ERR_PTR(ret); 2062 + 2063 + gic_request_region(res->start, resource_size(res), name); 2064 + base = of_iomap(node, idx); 2065 + 2066 + return base ?: IOMEM_ERR_PTR(-ENOMEM); 2067 + } 2068 + 2045 2069 static int __init gic_of_init(struct device_node *node, struct device_node *parent) 2046 2070 { 2047 2071 void __iomem *dist_base; 2048 2072 struct redist_region *rdist_regs; 2073 + struct resource res; 2049 2074 u64 redist_stride; 2050 2075 u32 nr_redist_regions; 2051 2076 int err, i; 2052 2077 2053 - dist_base = of_io_request_and_map(node, 0, "GICD"); 2078 + dist_base = gic_of_iomap(node, 0, "GICD", &res); 2054 2079 if (IS_ERR(dist_base)) { 2055 2080 pr_err("%pOF: unable to map gic dist registers\n", node); 2056 2081 return PTR_ERR(dist_base); ··· 2098 2073 } 2099 2074 2100 2075 for (i = 0; i < nr_redist_regions; i++) { 2101 - struct resource res; 2102 - int ret; 2103 - 2104 - ret = of_address_to_resource(node, 1 + i, &res); 2105 - rdist_regs[i].redist_base = of_io_request_and_map(node, 1 + i, "GICR"); 2106 - if (ret || IS_ERR(rdist_regs[i].redist_base)) { 2076 + rdist_regs[i].redist_base = gic_of_iomap(node, 1 + i, "GICR", &res); 2077 + if (IS_ERR(rdist_regs[i].redist_base)) { 2107 2078 pr_err("%pOF: couldn't map region %d\n", node, i); 2108 2079 err = -ENODEV; 2109 2080 goto out_unmap_rdist; ··· 2172 2151 pr_err("Couldn't map GICR region @%llx\n", redist->base_address); 2173 2152 return -ENOMEM; 2174 2153 } 2175 - request_mem_region(redist->base_address, redist->length, "GICR"); 2154 + gic_request_region(redist->base_address, redist->length, "GICR"); 2176 2155 2177 2156 gic_acpi_register_redist(redist->base_address, redist_base); 2178 2157 return 0; ··· 2195 2174 redist_base = ioremap(gicc->gicr_base_address, size); 2196 2175 if (!redist_base) 2197 2176 return -ENOMEM; 2198 - request_mem_region(gicc->gicr_base_address, size, "GICR"); 2177 + gic_request_region(gicc->gicr_base_address, size, "GICR"); 2199 2178 2200 2179 gic_acpi_register_redist(gicc->gicr_base_address, redist_base); 2201 2180 return 0; ··· 2397 2376 pr_err("Unable to map GICD registers\n"); 2398 2377 return -ENOMEM; 2399 2378 } 2400 - request_mem_region(dist->base_address, ACPI_GICV3_DIST_MEM_SIZE, "GICD"); 2379 + gic_request_region(dist->base_address, ACPI_GICV3_DIST_MEM_SIZE, "GICD"); 2401 2380 2402 2381 err = gic_validate_dist_version(acpi_data.dist_base); 2403 2382 if (err) {
-1
drivers/irqchip/irq-or1k-pic.c
··· 66 66 .name = "or1k-PIC-level", 67 67 .irq_unmask = or1k_pic_unmask, 68 68 .irq_mask = or1k_pic_mask, 69 - .irq_mask_ack = or1k_pic_mask_ack, 70 69 }, 71 70 .handle = handle_level_irq, 72 71 .flags = IRQ_LEVEL | IRQ_NOPROBE,
+18 -16
drivers/md/dm-raid.c
··· 1001 1001 static int validate_raid_redundancy(struct raid_set *rs) 1002 1002 { 1003 1003 unsigned int i, rebuild_cnt = 0; 1004 - unsigned int rebuilds_per_group = 0, copies; 1004 + unsigned int rebuilds_per_group = 0, copies, raid_disks; 1005 1005 unsigned int group_size, last_group_start; 1006 1006 1007 - for (i = 0; i < rs->md.raid_disks; i++) 1008 - if (!test_bit(In_sync, &rs->dev[i].rdev.flags) || 1009 - !rs->dev[i].rdev.sb_page) 1007 + for (i = 0; i < rs->raid_disks; i++) 1008 + if (!test_bit(FirstUse, &rs->dev[i].rdev.flags) && 1009 + ((!test_bit(In_sync, &rs->dev[i].rdev.flags) || 1010 + !rs->dev[i].rdev.sb_page))) 1010 1011 rebuild_cnt++; 1011 1012 1012 1013 switch (rs->md.level) { ··· 1047 1046 * A A B B C 1048 1047 * C D D E E 1049 1048 */ 1049 + raid_disks = min(rs->raid_disks, rs->md.raid_disks); 1050 1050 if (__is_raid10_near(rs->md.new_layout)) { 1051 - for (i = 0; i < rs->md.raid_disks; i++) { 1051 + for (i = 0; i < raid_disks; i++) { 1052 1052 if (!(i % copies)) 1053 1053 rebuilds_per_group = 0; 1054 1054 if ((!rs->dev[i].rdev.sb_page || ··· 1072 1070 * results in the need to treat the last (potentially larger) 1073 1071 * set differently. 
1074 1072 */ 1075 - group_size = (rs->md.raid_disks / copies); 1076 - last_group_start = (rs->md.raid_disks / group_size) - 1; 1073 + group_size = (raid_disks / copies); 1074 + last_group_start = (raid_disks / group_size) - 1; 1077 1075 last_group_start *= group_size; 1078 - for (i = 0; i < rs->md.raid_disks; i++) { 1076 + for (i = 0; i < raid_disks; i++) { 1079 1077 if (!(i % copies) && !(i > last_group_start)) 1080 1078 rebuilds_per_group = 0; 1081 1079 if ((!rs->dev[i].rdev.sb_page || ··· 1590 1588 { 1591 1589 int i; 1592 1590 1593 - for (i = 0; i < rs->md.raid_disks; i++) { 1591 + for (i = 0; i < rs->raid_disks; i++) { 1594 1592 struct md_rdev *rdev = &rs->dev[i].rdev; 1595 1593 1596 1594 if (!test_bit(Journal, &rdev->flags) && ··· 3768 3766 unsigned int i; 3769 3767 int r = 0; 3770 3768 3771 - for (i = 0; !r && i < rs->md.raid_disks; i++) 3772 - if (rs->dev[i].data_dev) 3773 - r = fn(ti, 3774 - rs->dev[i].data_dev, 3775 - 0, /* No offset on data devs */ 3776 - rs->md.dev_sectors, 3777 - data); 3769 + for (i = 0; !r && i < rs->raid_disks; i++) { 3770 + if (rs->dev[i].data_dev) { 3771 + r = fn(ti, rs->dev[i].data_dev, 3772 + 0, /* No offset on data devs */ 3773 + rs->md.dev_sectors, data); 3774 + } 3775 + } 3778 3776 3779 3777 return r; 3780 3778 }
+5 -1
drivers/md/raid5.c
··· 7933 7933 int err = 0; 7934 7934 int number = rdev->raid_disk; 7935 7935 struct md_rdev __rcu **rdevp; 7936 - struct disk_info *p = conf->disks + number; 7936 + struct disk_info *p; 7937 7937 struct md_rdev *tmp; 7938 7938 7939 7939 print_raid5_conf(conf); ··· 7952 7952 log_exit(conf); 7953 7953 return 0; 7954 7954 } 7955 + if (unlikely(number >= conf->pool_size)) 7956 + return 0; 7957 + p = conf->disks + number; 7955 7958 if (rdev == rcu_access_pointer(p->rdev)) 7956 7959 rdevp = &p->rdev; 7957 7960 else if (rdev == rcu_access_pointer(p->replacement)) ··· 8065 8062 */ 8066 8063 if (rdev->saved_raid_disk >= 0 && 8067 8064 rdev->saved_raid_disk >= first && 8065 + rdev->saved_raid_disk <= last && 8068 8066 conf->disks[rdev->saved_raid_disk].rdev == NULL) 8069 8067 first = rdev->saved_raid_disk; 8070 8068
+19 -8
drivers/misc/cardreader/rtsx_usb.c
··· 631 631 632 632 ucr->pusb_dev = usb_dev; 633 633 634 - ucr->iobuf = usb_alloc_coherent(ucr->pusb_dev, IOBUF_SIZE, 635 - GFP_KERNEL, &ucr->iobuf_dma); 636 - if (!ucr->iobuf) 634 + ucr->cmd_buf = kmalloc(IOBUF_SIZE, GFP_KERNEL); 635 + if (!ucr->cmd_buf) 637 636 return -ENOMEM; 637 + 638 + ucr->rsp_buf = kmalloc(IOBUF_SIZE, GFP_KERNEL); 639 + if (!ucr->rsp_buf) { 640 + ret = -ENOMEM; 641 + goto out_free_cmd_buf; 642 + } 638 643 639 644 usb_set_intfdata(intf, ucr); 640 645 641 646 ucr->vendor_id = id->idVendor; 642 647 ucr->product_id = id->idProduct; 643 - ucr->cmd_buf = ucr->rsp_buf = ucr->iobuf; 644 648 645 649 mutex_init(&ucr->dev_mutex); 646 650 ··· 672 668 673 669 out_init_fail: 674 670 usb_set_intfdata(ucr->pusb_intf, NULL); 675 - usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf, 676 - ucr->iobuf_dma); 671 + kfree(ucr->rsp_buf); 672 + ucr->rsp_buf = NULL; 673 + out_free_cmd_buf: 674 + kfree(ucr->cmd_buf); 675 + ucr->cmd_buf = NULL; 677 676 return ret; 678 677 } 679 678 ··· 689 682 mfd_remove_devices(&intf->dev); 690 683 691 684 usb_set_intfdata(ucr->pusb_intf, NULL); 692 - usb_free_coherent(ucr->pusb_dev, IOBUF_SIZE, ucr->iobuf, 693 - ucr->iobuf_dma); 685 + 686 + kfree(ucr->cmd_buf); 687 + ucr->cmd_buf = NULL; 688 + 689 + kfree(ucr->rsp_buf); 690 + ucr->rsp_buf = NULL; 694 691 } 695 692 696 693 #ifdef CONFIG_PM
+12 -14
drivers/misc/eeprom/at25.c
··· 80 80 struct at25_data *at25 = priv; 81 81 char *buf = val; 82 82 size_t max_chunk = spi_max_transfer_size(at25->spi); 83 - size_t num_msgs = DIV_ROUND_UP(count, max_chunk); 84 - size_t nr_bytes = 0; 85 - unsigned int msg_offset; 86 - size_t msg_count; 83 + unsigned int msg_offset = offset; 84 + size_t bytes_left = count; 85 + size_t segment; 87 86 u8 *cp; 88 87 ssize_t status; 89 88 struct spi_transfer t[2]; ··· 96 97 if (unlikely(!count)) 97 98 return -EINVAL; 98 99 99 - msg_offset = (unsigned int)offset; 100 - msg_count = min(count, max_chunk); 101 - while (num_msgs) { 100 + do { 101 + segment = min(bytes_left, max_chunk); 102 102 cp = at25->command; 103 103 104 104 instr = AT25_READ; ··· 129 131 t[0].len = at25->addrlen + 1; 130 132 spi_message_add_tail(&t[0], &m); 131 133 132 - t[1].rx_buf = buf + nr_bytes; 133 - t[1].len = msg_count; 134 + t[1].rx_buf = buf; 135 + t[1].len = segment; 134 136 spi_message_add_tail(&t[1], &m); 135 137 136 138 status = spi_sync(at25->spi, &m); ··· 140 142 if (status) 141 143 return status; 142 144 143 - --num_msgs; 144 - msg_offset += msg_count; 145 - nr_bytes += msg_count; 146 - } 145 + msg_offset += segment; 146 + buf += segment; 147 + bytes_left -= segment; 148 + } while (bytes_left > 0); 147 149 148 150 dev_dbg(&at25->spi->dev, "read %zu bytes at %d\n", 149 151 count, offset); ··· 227 229 do { 228 230 unsigned long timeout, retries; 229 231 unsigned segment; 230 - unsigned offset = (unsigned) off; 232 + unsigned offset = off; 231 233 u8 *cp = bounce; 232 234 int sr; 233 235 u8 instr;
+1
drivers/net/Kconfig
··· 94 94 select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON 95 95 select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2 96 96 select CRYPTO_POLY1305_MIPS if MIPS 97 + select CRYPTO_CHACHA_S390 if S390 97 98 help 98 99 WireGuard is a secure, fast, and easy to use replacement for IPSec 99 100 that uses modern cryptography and clever networking tricks. It's
+2 -1
drivers/net/bonding/bond_3ad.c
··· 2228 2228 temp_aggregator->num_of_ports--; 2229 2229 if (__agg_active_ports(temp_aggregator) == 0) { 2230 2230 select_new_active_agg = temp_aggregator->is_active; 2231 - ad_clear_agg(temp_aggregator); 2231 + if (temp_aggregator->num_of_ports == 0) 2232 + ad_clear_agg(temp_aggregator); 2232 2233 if (select_new_active_agg) { 2233 2234 slave_info(bond->dev, slave->dev, "Removing an active aggregator\n"); 2234 2235 /* select new active aggregator */
+1 -1
drivers/net/bonding/bond_alb.c
··· 1302 1302 return res; 1303 1303 1304 1304 if (rlb_enabled) { 1305 - bond->alb_info.rlb_enabled = 1; 1306 1305 res = rlb_initialize(bond); 1307 1306 if (res) { 1308 1307 tlb_deinitialize(bond); 1309 1308 return res; 1310 1309 } 1310 + bond->alb_info.rlb_enabled = 1; 1311 1311 } else { 1312 1312 bond->alb_info.rlb_enabled = 0; 1313 1313 }
+9 -1
drivers/net/caif/caif_virtio.c
··· 722 722 /* Carrier is off until netdevice is opened */ 723 723 netif_carrier_off(netdev); 724 724 725 + /* serialize netdev register + virtio_device_ready() with ndo_open() */ 726 + rtnl_lock(); 727 + 725 728 /* register Netdev */ 726 - err = register_netdev(netdev); 729 + err = register_netdevice(netdev); 727 730 if (err) { 731 + rtnl_unlock(); 728 732 dev_err(&vdev->dev, "Unable to register netdev (%d)\n", err); 729 733 goto err; 730 734 } 735 + 736 + virtio_device_ready(vdev); 737 + 738 + rtnl_unlock(); 731 739 732 740 debugfs_init(cfv); 733 741
-1
drivers/net/can/grcan.c
··· 1646 1646 */ 1647 1647 sysid_parent = of_find_node_by_path("/ambapp0"); 1648 1648 if (sysid_parent) { 1649 - of_node_get(sysid_parent); 1650 1649 err = of_property_read_u32(sysid_parent, "systemid", &sysid); 1651 1650 if (!err && ((sysid & GRLIB_VERSION_MASK) >= 1652 1651 GRCAN_TXBUG_SAFE_GRLIB_VERSION))
+5 -3
drivers/net/can/m_can/m_can.c
··· 529 529 /* acknowledge rx fifo 0 */ 530 530 m_can_write(cdev, M_CAN_RXF0A, fgi); 531 531 532 - timestamp = FIELD_GET(RX_BUF_RXTS_MASK, fifo_header.dlc); 532 + timestamp = FIELD_GET(RX_BUF_RXTS_MASK, fifo_header.dlc) << 16; 533 533 534 534 m_can_receive_skb(cdev, skb, timestamp); 535 535 ··· 1030 1030 } 1031 1031 1032 1032 msg_mark = FIELD_GET(TX_EVENT_MM_MASK, txe); 1033 - timestamp = FIELD_GET(TX_EVENT_TXTS_MASK, txe); 1033 + timestamp = FIELD_GET(TX_EVENT_TXTS_MASK, txe) << 16; 1034 1034 1035 1035 /* ack txe element */ 1036 1036 m_can_write(cdev, M_CAN_TXEFA, FIELD_PREP(TXEFA_EFAI_MASK, ··· 1351 1351 /* enable internal timestamp generation, with a prescalar of 16. The 1352 1352 * prescalar is applied to the nominal bit timing 1353 1353 */ 1354 - m_can_write(cdev, M_CAN_TSCC, FIELD_PREP(TSCC_TCP_MASK, 0xf)); 1354 + m_can_write(cdev, M_CAN_TSCC, 1355 + FIELD_PREP(TSCC_TCP_MASK, 0xf) | 1356 + FIELD_PREP(TSCC_TSS_MASK, TSCC_TSS_INTERNAL)); 1355 1357 1356 1358 m_can_config_endisable(cdev, false); 1357 1359
+4 -1
drivers/net/can/rcar/rcar_canfd.c
··· 1332 1332 cfg = (RCANFD_DCFG_DTSEG1(gpriv, tseg1) | RCANFD_DCFG_DBRP(brp) | 1333 1333 RCANFD_DCFG_DSJW(sjw) | RCANFD_DCFG_DTSEG2(gpriv, tseg2)); 1334 1334 1335 - rcar_canfd_write(priv->base, RCANFD_F_DCFG(ch), cfg); 1335 + if (is_v3u(gpriv)) 1336 + rcar_canfd_write(priv->base, RCANFD_V3U_DCFG(ch), cfg); 1337 + else 1338 + rcar_canfd_write(priv->base, RCANFD_F_DCFG(ch), cfg); 1336 1339 netdev_dbg(priv->ndev, "drate: brp %u, sjw %u, tseg1 %u, tseg2 %u\n", 1337 1340 brp, sjw, tseg1, tseg2); 1338 1341 } else {
+4 -2
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
··· 12 12 // Copyright (c) 2019 Martin Sperl <kernel@martin.sperl.org> 13 13 // 14 14 15 + #include <asm/unaligned.h> 15 16 #include <linux/bitfield.h> 16 17 #include <linux/clk.h> 17 18 #include <linux/device.h> ··· 1651 1650 netif_stop_queue(ndev); 1652 1651 set_bit(MCP251XFD_FLAGS_DOWN, priv->flags); 1653 1652 hrtimer_cancel(&priv->rx_irq_timer); 1653 + hrtimer_cancel(&priv->tx_irq_timer); 1654 1654 mcp251xfd_chip_interrupts_disable(priv); 1655 1655 free_irq(ndev->irq, priv); 1656 1656 can_rx_offload_disable(&priv->offload); ··· 1779 1777 xfer[0].len = sizeof(buf_tx->cmd); 1780 1778 xfer[0].speed_hz = priv->spi_max_speed_hz_slow; 1781 1779 xfer[1].rx_buf = buf_rx->data; 1782 - xfer[1].len = sizeof(dev_id); 1780 + xfer[1].len = sizeof(*dev_id); 1783 1781 xfer[1].speed_hz = priv->spi_max_speed_hz_fast; 1784 1782 1785 1783 mcp251xfd_spi_cmd_read_nocrc(&buf_tx->cmd, MCP251XFD_REG_DEVID); ··· 1788 1786 if (err) 1789 1787 goto out_kfree_buf_tx; 1790 1788 1791 - *dev_id = be32_to_cpup((__be32 *)buf_rx->data); 1789 + *dev_id = get_unaligned_le32(buf_rx->data); 1792 1790 *effective_speed_hz_slow = xfer[0].effective_speed_hz; 1793 1791 *effective_speed_hz_fast = xfer[1].effective_speed_hz; 1794 1792
+11 -11
drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c
··· 334 334 * register. It increments once per SYS clock tick, 335 335 * which is 20 or 40 MHz. 336 336 * 337 - * Observation shows that if the lowest byte (which is 338 - * transferred first on the SPI bus) of that register 339 - * is 0x00 or 0x80 the calculated CRC doesn't always 340 - * match the transferred one. 337 + * Observation on the mcp2518fd shows that if the 338 + * lowest byte (which is transferred first on the SPI 339 + * bus) of that register is 0x00 or 0x80 the 340 + * calculated CRC doesn't always match the transferred 341 + * one. On the mcp2517fd this problem is not limited 342 + * to the first byte being 0x00 or 0x80. 341 343 * 342 344 * If the highest bit in the lowest byte is flipped 343 345 * the transferred CRC matches the calculated one. We 344 - * assume for now the CRC calculation in the chip 345 - * works on wrong data and the transferred data is 346 - * correct. 346 + * assume for now the CRC operates on the correct 347 + * data. 347 348 */ 348 349 if (reg == MCP251XFD_REG_TBC && 349 - (buf_rx->data[0] == 0x0 || buf_rx->data[0] == 0x80)) { 350 + ((buf_rx->data[0] & 0xf8) == 0x0 || 351 + (buf_rx->data[0] & 0xf8) == 0x80)) { 350 352 /* Flip highest bit in lowest byte of le32 */ 351 353 buf_rx->data[0] ^= 0x80; 352 354 ··· 358 356 val_len); 359 357 if (!err) { 360 358 /* If CRC is now correct, assume 361 - * transferred data was OK, flip bit 362 - * back to original value. 359 + * flipped data is OK. 363 360 */ 364 - buf_rx->data[0] ^= 0x80; 365 361 goto out; 366 362 } 367 363 }
+21 -2
drivers/net/can/usb/gs_usb.c
··· 268 268 269 269 struct usb_anchor tx_submitted; 270 270 atomic_t active_tx_urbs; 271 + void *rxbuf[GS_MAX_RX_URBS]; 272 + dma_addr_t rxbuf_dma[GS_MAX_RX_URBS]; 271 273 }; 272 274 273 275 /* usb interface struct */ ··· 744 742 for (i = 0; i < GS_MAX_RX_URBS; i++) { 745 743 struct urb *urb; 746 744 u8 *buf; 745 + dma_addr_t buf_dma; 747 746 748 747 /* alloc rx urb */ 749 748 urb = usb_alloc_urb(0, GFP_KERNEL); ··· 755 752 buf = usb_alloc_coherent(dev->udev, 756 753 dev->parent->hf_size_rx, 757 754 GFP_KERNEL, 758 - &urb->transfer_dma); 755 + &buf_dma); 759 756 if (!buf) { 760 757 netdev_err(netdev, 761 758 "No memory left for USB buffer\n"); 762 759 usb_free_urb(urb); 763 760 return -ENOMEM; 764 761 } 762 + 763 + urb->transfer_dma = buf_dma; 765 764 766 765 /* fill, anchor, and submit rx urb */ 767 766 usb_fill_bulk_urb(urb, ··· 786 781 "usb_submit failed (err=%d)\n", rc); 787 782 788 783 usb_unanchor_urb(urb); 784 + usb_free_coherent(dev->udev, 785 + sizeof(struct gs_host_frame), 786 + buf, 787 + buf_dma); 789 788 usb_free_urb(urb); 790 789 break; 791 790 } 791 + 792 + dev->rxbuf[i] = buf; 793 + dev->rxbuf_dma[i] = buf_dma; 792 794 793 795 /* Drop reference, 794 796 * USB core will take care of freeing it ··· 854 842 int rc; 855 843 struct gs_can *dev = netdev_priv(netdev); 856 844 struct gs_usb *parent = dev->parent; 845 + unsigned int i; 857 846 858 847 netif_stop_queue(netdev); 859 848 860 849 /* Stop polling */ 861 850 parent->active_channels--; 862 - if (!parent->active_channels) 851 + if (!parent->active_channels) { 863 852 usb_kill_anchored_urbs(&parent->rx_submitted); 853 + for (i = 0; i < GS_MAX_RX_URBS; i++) 854 + usb_free_coherent(dev->udev, 855 + sizeof(struct gs_host_frame), 856 + dev->rxbuf[i], 857 + dev->rxbuf_dma[i]); 858 + } 864 859 865 860 /* Stop sending URBs */ 866 861 usb_kill_anchored_urbs(&dev->tx_submitted);
+15 -10
drivers/net/can/usb/kvaser_usb/kvaser_usb.h
··· 35 35 #define KVASER_USB_RX_BUFFER_SIZE 3072 36 36 #define KVASER_USB_MAX_NET_DEVICES 5 37 37 38 - /* USB devices features */ 39 - #define KVASER_USB_HAS_SILENT_MODE BIT(0) 40 - #define KVASER_USB_HAS_TXRX_ERRORS BIT(1) 38 + /* Kvaser USB device quirks */ 39 + #define KVASER_USB_QUIRK_HAS_SILENT_MODE BIT(0) 40 + #define KVASER_USB_QUIRK_HAS_TXRX_ERRORS BIT(1) 41 + #define KVASER_USB_QUIRK_IGNORE_CLK_FREQ BIT(2) 41 42 42 43 /* Device capabilities */ 43 44 #define KVASER_USB_CAP_BERR_CAP 0x01 ··· 66 65 struct kvaser_usb_dev_card_data { 67 66 u32 ctrlmode_supported; 68 67 u32 capabilities; 69 - union { 70 - struct { 71 - enum kvaser_usb_leaf_family family; 72 - } leaf; 73 - struct kvaser_usb_dev_card_data_hydra hydra; 74 - }; 68 + struct kvaser_usb_dev_card_data_hydra hydra; 75 69 }; 76 70 77 71 /* Context for an outstanding, not yet ACKed, transmission */ ··· 79 83 struct usb_device *udev; 80 84 struct usb_interface *intf; 81 85 struct kvaser_usb_net_priv *nets[KVASER_USB_MAX_NET_DEVICES]; 82 - const struct kvaser_usb_dev_ops *ops; 86 + const struct kvaser_usb_driver_info *driver_info; 83 87 const struct kvaser_usb_dev_cfg *cfg; 84 88 85 89 struct usb_endpoint_descriptor *bulk_in, *bulk_out; ··· 161 165 u16 transid); 162 166 }; 163 167 168 + struct kvaser_usb_driver_info { 169 + u32 quirks; 170 + enum kvaser_usb_leaf_family family; 171 + const struct kvaser_usb_dev_ops *ops; 172 + }; 173 + 164 174 struct kvaser_usb_dev_cfg { 165 175 const struct can_clock clock; 166 176 const unsigned int timestamp_freq; ··· 186 184 int len); 187 185 188 186 int kvaser_usb_can_rx_over_error(struct net_device *netdev); 187 + 188 + extern const struct can_bittiming_const kvaser_usb_flexc_bittiming_const; 189 + 189 190 #endif /* KVASER_USB_H */
+158 -127
drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
··· 61 61 #define USB_USBCAN_R_V2_PRODUCT_ID 294 62 62 #define USB_LEAF_LIGHT_R_V2_PRODUCT_ID 295 63 63 #define USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID 296 64 - #define USB_LEAF_PRODUCT_ID_END \ 65 - USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID 66 64 67 65 /* Kvaser USBCan-II devices product ids */ 68 66 #define USB_USBCAN_REVB_PRODUCT_ID 2 ··· 87 89 #define USB_USBCAN_PRO_4HS_PRODUCT_ID 276 88 90 #define USB_HYBRID_CANLIN_PRODUCT_ID 277 89 91 #define USB_HYBRID_PRO_CANLIN_PRODUCT_ID 278 90 - #define USB_HYDRA_PRODUCT_ID_END \ 91 - USB_HYBRID_PRO_CANLIN_PRODUCT_ID 92 92 93 - static inline bool kvaser_is_leaf(const struct usb_device_id *id) 94 - { 95 - return (id->idProduct >= USB_LEAF_DEVEL_PRODUCT_ID && 96 - id->idProduct <= USB_CAN_R_PRODUCT_ID) || 97 - (id->idProduct >= USB_LEAF_LITE_V2_PRODUCT_ID && 98 - id->idProduct <= USB_LEAF_PRODUCT_ID_END); 99 - } 93 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_hydra = { 94 + .quirks = 0, 95 + .ops = &kvaser_usb_hydra_dev_ops, 96 + }; 100 97 101 - static inline bool kvaser_is_usbcan(const struct usb_device_id *id) 102 - { 103 - return id->idProduct >= USB_USBCAN_REVB_PRODUCT_ID && 104 - id->idProduct <= USB_MEMORATOR_PRODUCT_ID; 105 - } 98 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_usbcan = { 99 + .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS | 100 + KVASER_USB_QUIRK_HAS_SILENT_MODE, 101 + .family = KVASER_USBCAN, 102 + .ops = &kvaser_usb_leaf_dev_ops, 103 + }; 106 104 107 - static inline bool kvaser_is_hydra(const struct usb_device_id *id) 108 - { 109 - return id->idProduct >= USB_BLACKBIRD_V2_PRODUCT_ID && 110 - id->idProduct <= USB_HYDRA_PRODUCT_ID_END; 111 - } 105 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf = { 106 + .quirks = KVASER_USB_QUIRK_IGNORE_CLK_FREQ, 107 + .family = KVASER_LEAF, 108 + .ops = &kvaser_usb_leaf_dev_ops, 109 + }; 110 + 111 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err = { 112 + .quirks = 
KVASER_USB_QUIRK_HAS_TXRX_ERRORS | 113 + KVASER_USB_QUIRK_IGNORE_CLK_FREQ, 114 + .family = KVASER_LEAF, 115 + .ops = &kvaser_usb_leaf_dev_ops, 116 + }; 117 + 118 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err_listen = { 119 + .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS | 120 + KVASER_USB_QUIRK_HAS_SILENT_MODE | 121 + KVASER_USB_QUIRK_IGNORE_CLK_FREQ, 122 + .family = KVASER_LEAF, 123 + .ops = &kvaser_usb_leaf_dev_ops, 124 + }; 125 + 126 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leafimx = { 127 + .quirks = 0, 128 + .ops = &kvaser_usb_leaf_dev_ops, 129 + }; 112 130 113 131 static const struct usb_device_id kvaser_usb_table[] = { 114 - /* Leaf USB product IDs */ 115 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID) }, 116 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID) }, 132 + /* Leaf M32C USB product IDs */ 133 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID), 134 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, 135 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID), 136 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, 117 137 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_PRODUCT_ID), 118 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 119 - KVASER_USB_HAS_SILENT_MODE }, 138 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 120 139 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_PRODUCT_ID), 121 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 122 - KVASER_USB_HAS_SILENT_MODE }, 140 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 123 141 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LS_PRODUCT_ID), 124 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 125 - KVASER_USB_HAS_SILENT_MODE }, 142 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 126 143 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_SWC_PRODUCT_ID), 127 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 128 - 
KVASER_USB_HAS_SILENT_MODE }, 144 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 129 145 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LIN_PRODUCT_ID), 130 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 131 - KVASER_USB_HAS_SILENT_MODE }, 146 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 132 147 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_LS_PRODUCT_ID), 133 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 134 - KVASER_USB_HAS_SILENT_MODE }, 148 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 135 149 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_SWC_PRODUCT_ID), 136 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 137 - KVASER_USB_HAS_SILENT_MODE }, 150 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 138 151 { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_DEVEL_PRODUCT_ID), 139 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 140 - KVASER_USB_HAS_SILENT_MODE }, 152 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 141 153 { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSHS_PRODUCT_ID), 142 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 143 - KVASER_USB_HAS_SILENT_MODE }, 154 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 144 155 { USB_DEVICE(KVASER_VENDOR_ID, USB_UPRO_HSHS_PRODUCT_ID), 145 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 146 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID) }, 156 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 157 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID), 158 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, 147 159 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_OBDII_PRODUCT_ID), 148 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 149 - KVASER_USB_HAS_SILENT_MODE }, 160 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 150 161 { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSLS_PRODUCT_ID), 151 - 
.driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 162 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 152 163 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_CH_PRODUCT_ID), 153 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 164 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 154 165 { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_SPRO_PRODUCT_ID), 155 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 166 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 156 167 { USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_MERCURY_PRODUCT_ID), 157 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 168 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 158 169 { USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_LEAF_PRODUCT_ID), 159 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 170 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 160 171 { USB_DEVICE(KVASER_VENDOR_ID, USB_CAN_R_PRODUCT_ID), 161 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 162 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID) }, 163 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID) }, 164 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID) }, 165 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID) }, 166 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID) }, 167 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_R_V2_PRODUCT_ID) }, 168 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_R_V2_PRODUCT_ID) }, 169 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID) }, 172 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 173 + 174 + /* Leaf i.MX28 USB product IDs */ 175 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID), 176 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 177 + { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID), 178 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 179 + { 
USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID), 180 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 181 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID), 182 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 183 + { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID), 184 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 185 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_R_V2_PRODUCT_ID), 186 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 187 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_R_V2_PRODUCT_ID), 188 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 189 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID), 190 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 170 191 171 192 /* USBCANII USB product IDs */ 172 193 { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN2_PRODUCT_ID), 173 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 194 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, 174 195 { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_REVB_PRODUCT_ID), 175 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 196 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, 176 197 { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMORATOR_PRODUCT_ID), 177 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 198 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, 178 199 { USB_DEVICE(KVASER_VENDOR_ID, USB_VCI2_PRODUCT_ID), 179 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 200 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, 180 201 181 202 /* Minihydra USB product IDs */ 182 - { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID) }, 183 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID) }, 184 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID) }, 185 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID) }, 186 - { 
USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID) }, 187 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID) }, 188 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID) }, 189 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID) }, 190 - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_2CANLIN_PRODUCT_ID) }, 191 - { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID) }, 192 - { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID) }, 193 - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_2CANLIN_PRODUCT_ID) }, 194 - { USB_DEVICE(KVASER_VENDOR_ID, USB_U100_PRODUCT_ID) }, 195 - { USB_DEVICE(KVASER_VENDOR_ID, USB_U100P_PRODUCT_ID) }, 196 - { USB_DEVICE(KVASER_VENDOR_ID, USB_U100S_PRODUCT_ID) }, 197 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_4HS_PRODUCT_ID) }, 198 - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID) }, 199 - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID) }, 203 + { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID), 204 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 205 + { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID), 206 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 207 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID), 208 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 209 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID), 210 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 211 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID), 212 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 213 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID), 214 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 215 + { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID), 216 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 217 + { USB_DEVICE(KVASER_VENDOR_ID, 
USB_MEMO_PRO_2HS_V2_PRODUCT_ID), 218 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 219 + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_2CANLIN_PRODUCT_ID), 220 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 221 + { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID), 222 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 223 + { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID), 224 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 225 + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_2CANLIN_PRODUCT_ID), 226 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 227 + { USB_DEVICE(KVASER_VENDOR_ID, USB_U100_PRODUCT_ID), 228 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 229 + { USB_DEVICE(KVASER_VENDOR_ID, USB_U100P_PRODUCT_ID), 230 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 231 + { USB_DEVICE(KVASER_VENDOR_ID, USB_U100S_PRODUCT_ID), 232 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 233 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_4HS_PRODUCT_ID), 234 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 235 + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID), 236 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 237 + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID), 238 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 200 239 { } 201 240 }; 202 241 MODULE_DEVICE_TABLE(usb, kvaser_usb_table); ··· 320 285 static void kvaser_usb_read_bulk_callback(struct urb *urb) 321 286 { 322 287 struct kvaser_usb *dev = urb->context; 288 + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; 323 289 int err; 324 290 unsigned int i; 325 291 ··· 337 301 goto resubmit_urb; 338 302 } 339 303 340 - dev->ops->dev_read_bulk_callback(dev, urb->transfer_buffer, 341 - urb->actual_length); 304 + ops->dev_read_bulk_callback(dev, 
urb->transfer_buffer, 305 + urb->actual_length); 342 306 343 307 resubmit_urb: 344 308 usb_fill_bulk_urb(urb, dev->udev, ··· 432 396 { 433 397 struct kvaser_usb_net_priv *priv = netdev_priv(netdev); 434 398 struct kvaser_usb *dev = priv->dev; 399 + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; 435 400 int err; 436 401 437 402 err = open_candev(netdev); ··· 443 406 if (err) 444 407 goto error; 445 408 446 - err = dev->ops->dev_set_opt_mode(priv); 409 + err = ops->dev_set_opt_mode(priv); 447 410 if (err) 448 411 goto error; 449 412 450 - err = dev->ops->dev_start_chip(priv); 413 + err = ops->dev_start_chip(priv); 451 414 if (err) { 452 415 netdev_warn(netdev, "Cannot start device, error %d\n", err); 453 416 goto error; ··· 504 467 { 505 468 struct kvaser_usb_net_priv *priv = netdev_priv(netdev); 506 469 struct kvaser_usb *dev = priv->dev; 470 + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; 507 471 int err; 508 472 509 473 netif_stop_queue(netdev); 510 474 511 - err = dev->ops->dev_flush_queue(priv); 475 + err = ops->dev_flush_queue(priv); 512 476 if (err) 513 477 netdev_warn(netdev, "Cannot flush queue, error %d\n", err); 514 478 515 - if (dev->ops->dev_reset_chip) { 516 - err = dev->ops->dev_reset_chip(dev, priv->channel); 479 + if (ops->dev_reset_chip) { 480 + err = ops->dev_reset_chip(dev, priv->channel); 517 481 if (err) 518 482 netdev_warn(netdev, "Cannot reset card, error %d\n", 519 483 err); 520 484 } 521 485 522 - err = dev->ops->dev_stop_chip(priv); 486 + err = ops->dev_stop_chip(priv); 523 487 if (err) 524 488 netdev_warn(netdev, "Cannot stop device, error %d\n", err); 525 489 ··· 559 521 { 560 522 struct kvaser_usb_net_priv *priv = netdev_priv(netdev); 561 523 struct kvaser_usb *dev = priv->dev; 524 + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; 562 525 struct net_device_stats *stats = &netdev->stats; 563 526 struct kvaser_usb_tx_urb_context *context = NULL; 564 527 struct urb *urb; ··· 602 563 goto freeurb; 
603 564 } 604 565 605 - buf = dev->ops->dev_frame_to_cmd(priv, skb, &cmd_len, 606 - context->echo_index); 566 + buf = ops->dev_frame_to_cmd(priv, skb, &cmd_len, context->echo_index); 607 567 if (!buf) { 608 568 stats->tx_dropped++; 609 569 dev_kfree_skb(skb); ··· 686 648 } 687 649 } 688 650 689 - static int kvaser_usb_init_one(struct kvaser_usb *dev, 690 - const struct usb_device_id *id, int channel) 651 + static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel) 691 652 { 692 653 struct net_device *netdev; 693 654 struct kvaser_usb_net_priv *priv; 655 + const struct kvaser_usb_driver_info *driver_info = dev->driver_info; 656 + const struct kvaser_usb_dev_ops *ops = driver_info->ops; 694 657 int err; 695 658 696 - if (dev->ops->dev_reset_chip) { 697 - err = dev->ops->dev_reset_chip(dev, channel); 659 + if (ops->dev_reset_chip) { 660 + err = ops->dev_reset_chip(dev, channel); 698 661 if (err) 699 662 return err; 700 663 } ··· 724 685 priv->can.state = CAN_STATE_STOPPED; 725 686 priv->can.clock.freq = dev->cfg->clock.freq; 726 687 priv->can.bittiming_const = dev->cfg->bittiming_const; 727 - priv->can.do_set_bittiming = dev->ops->dev_set_bittiming; 728 - priv->can.do_set_mode = dev->ops->dev_set_mode; 729 - if ((id->driver_info & KVASER_USB_HAS_TXRX_ERRORS) || 688 + priv->can.do_set_bittiming = ops->dev_set_bittiming; 689 + priv->can.do_set_mode = ops->dev_set_mode; 690 + if ((driver_info->quirks & KVASER_USB_QUIRK_HAS_TXRX_ERRORS) || 730 691 (priv->dev->card_data.capabilities & KVASER_USB_CAP_BERR_CAP)) 731 - priv->can.do_get_berr_counter = dev->ops->dev_get_berr_counter; 732 - if (id->driver_info & KVASER_USB_HAS_SILENT_MODE) 692 + priv->can.do_get_berr_counter = ops->dev_get_berr_counter; 693 + if (driver_info->quirks & KVASER_USB_QUIRK_HAS_SILENT_MODE) 733 694 priv->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY; 734 695 735 696 priv->can.ctrlmode_supported |= dev->card_data.ctrlmode_supported; 736 697 737 698 if (priv->can.ctrlmode_supported & 
CAN_CTRLMODE_FD) { 738 699 priv->can.data_bittiming_const = dev->cfg->data_bittiming_const; 739 - priv->can.do_set_data_bittiming = 740 - dev->ops->dev_set_data_bittiming; 700 + priv->can.do_set_data_bittiming = ops->dev_set_data_bittiming; 741 701 } 742 702 743 703 netdev->flags |= IFF_ECHO; ··· 767 729 struct kvaser_usb *dev; 768 730 int err; 769 731 int i; 732 + const struct kvaser_usb_driver_info *driver_info; 733 + const struct kvaser_usb_dev_ops *ops; 734 + 735 + driver_info = (const struct kvaser_usb_driver_info *)id->driver_info; 736 + if (!driver_info) 737 + return -ENODEV; 770 738 771 739 dev = devm_kzalloc(&intf->dev, sizeof(*dev), GFP_KERNEL); 772 740 if (!dev) 773 741 return -ENOMEM; 774 742 775 - if (kvaser_is_leaf(id)) { 776 - dev->card_data.leaf.family = KVASER_LEAF; 777 - dev->ops = &kvaser_usb_leaf_dev_ops; 778 - } else if (kvaser_is_usbcan(id)) { 779 - dev->card_data.leaf.family = KVASER_USBCAN; 780 - dev->ops = &kvaser_usb_leaf_dev_ops; 781 - } else if (kvaser_is_hydra(id)) { 782 - dev->ops = &kvaser_usb_hydra_dev_ops; 783 - } else { 784 - dev_err(&intf->dev, 785 - "Product ID (%d) is not a supported Kvaser USB device\n", 786 - id->idProduct); 787 - return -ENODEV; 788 - } 789 - 790 743 dev->intf = intf; 744 + dev->driver_info = driver_info; 745 + ops = driver_info->ops; 791 746 792 - err = dev->ops->dev_setup_endpoints(dev); 747 + err = ops->dev_setup_endpoints(dev); 793 748 if (err) { 794 749 dev_err(&intf->dev, "Cannot get usb endpoint(s)"); 795 750 return err; ··· 796 765 797 766 dev->card_data.ctrlmode_supported = 0; 798 767 dev->card_data.capabilities = 0; 799 - err = dev->ops->dev_init_card(dev); 768 + err = ops->dev_init_card(dev); 800 769 if (err) { 801 770 dev_err(&intf->dev, 802 771 "Failed to initialize card, error %d\n", err); 803 772 return err; 804 773 } 805 774 806 - err = dev->ops->dev_get_software_info(dev); 775 + err = ops->dev_get_software_info(dev); 807 776 if (err) { 808 777 dev_err(&intf->dev, 809 778 "Cannot get software 
info, error %d\n", err); 810 779 return err; 811 780 } 812 781 813 - if (dev->ops->dev_get_software_details) { 814 - err = dev->ops->dev_get_software_details(dev); 782 + if (ops->dev_get_software_details) { 783 + err = ops->dev_get_software_details(dev); 815 784 if (err) { 816 785 dev_err(&intf->dev, 817 786 "Cannot get software details, error %d\n", err); ··· 829 798 830 799 dev_dbg(&intf->dev, "Max outstanding tx = %d URBs\n", dev->max_tx_urbs); 831 800 832 - err = dev->ops->dev_get_card_info(dev); 801 + err = ops->dev_get_card_info(dev); 833 802 if (err) { 834 803 dev_err(&intf->dev, "Cannot get card info, error %d\n", err); 835 804 return err; 836 805 } 837 806 838 - if (dev->ops->dev_get_capabilities) { 839 - err = dev->ops->dev_get_capabilities(dev); 807 + if (ops->dev_get_capabilities) { 808 + err = ops->dev_get_capabilities(dev); 840 809 if (err) { 841 810 dev_err(&intf->dev, 842 811 "Cannot get capabilities, error %d\n", err); ··· 846 815 } 847 816 848 817 for (i = 0; i < dev->nchannels; i++) { 849 - err = kvaser_usb_init_one(dev, id, i); 818 + err = kvaser_usb_init_one(dev, i); 850 819 if (err) { 851 820 kvaser_usb_remove_interfaces(dev); 852 821 return err;
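The kvaser_usb_core.c refactor above stops switching on product IDs in probe and instead fetches a per-family descriptor (quirk flags plus an ops vtable) straight from `id->driver_info`. A minimal user-space sketch of that table-driven pattern; all names here are hypothetical stand-ins, not the driver's real symbols:

```c
#include <stddef.h>
#include <stdint.h>

/* Per-family descriptor: quirk bits plus an ops vtable, mirroring
 * struct kvaser_usb_driver_info in spirit (names are invented). */
#define QUIRK_HAS_TXRX_ERRORS (1u << 0)
#define QUIRK_HAS_SILENT_MODE (1u << 1)

struct dev_ops {
	int (*start_chip)(void *priv);
};

struct driver_info {
	uint32_t quirks;
	const struct dev_ops *ops;
};

static int leaf_start_chip(void *priv) { (void)priv; return 0; }

static const struct dev_ops leaf_ops = { .start_chip = leaf_start_chip };

static const struct driver_info leaf_info = {
	.quirks = QUIRK_HAS_TXRX_ERRORS | QUIRK_HAS_SILENT_MODE,
	.ops = &leaf_ops,
};

/* Device-ID table entry carrying the descriptor pointer, like the
 * driver_info field of struct usb_device_id. */
struct usb_id {
	uint16_t product;
	const struct driver_info *info;
};

static const struct usb_id id_table[] = {
	{ 0x0120, &leaf_info },
	{ 0, NULL },              /* sentinel: no descriptor => reject */
};

/* probe-side lookup: a missing descriptor maps to -ENODEV upstream */
static const struct driver_info *probe_info(const struct usb_id *id)
{
	return id->info;
}
```

With the descriptor in hand, probe never needs the `usb_device_id` again, which is what lets `kvaser_usb_init_one()` drop its `id` parameter in the hunk above.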
+2 -2
drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
··· 375 375 .brp_inc = 1, 376 376 }; 377 377 378 - static const struct can_bittiming_const kvaser_usb_hydra_flexc_bittiming_c = { 378 + const struct can_bittiming_const kvaser_usb_flexc_bittiming_const = { 379 379 .name = "kvaser_usb_flex", 380 380 .tseg1_min = 4, 381 381 .tseg1_max = 16, ··· 2052 2052 .freq = 24 * MEGA /* Hz */, 2053 2053 }, 2054 2054 .timestamp_freq = 1, 2055 - .bittiming_const = &kvaser_usb_hydra_flexc_bittiming_c, 2055 + .bittiming_const = &kvaser_usb_flexc_bittiming_const, 2056 2056 }; 2057 2057 2058 2058 static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_rt = {
+68 -51
drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
··· 101 101 #define USBCAN_ERROR_STATE_RX_ERROR BIT(1) 102 102 #define USBCAN_ERROR_STATE_BUSERROR BIT(2) 103 103 104 - /* bittiming parameters */ 105 - #define KVASER_USB_TSEG1_MIN 1 106 - #define KVASER_USB_TSEG1_MAX 16 107 - #define KVASER_USB_TSEG2_MIN 1 108 - #define KVASER_USB_TSEG2_MAX 8 109 - #define KVASER_USB_SJW_MAX 4 110 - #define KVASER_USB_BRP_MIN 1 111 - #define KVASER_USB_BRP_MAX 64 112 - #define KVASER_USB_BRP_INC 1 113 - 114 104 /* ctrl modes */ 115 105 #define KVASER_CTRL_MODE_NORMAL 1 116 106 #define KVASER_CTRL_MODE_SILENT 2 ··· 333 343 }; 334 344 }; 335 345 336 - static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = { 337 - .name = "kvaser_usb", 338 - .tseg1_min = KVASER_USB_TSEG1_MIN, 339 - .tseg1_max = KVASER_USB_TSEG1_MAX, 340 - .tseg2_min = KVASER_USB_TSEG2_MIN, 341 - .tseg2_max = KVASER_USB_TSEG2_MAX, 342 - .sjw_max = KVASER_USB_SJW_MAX, 343 - .brp_min = KVASER_USB_BRP_MIN, 344 - .brp_max = KVASER_USB_BRP_MAX, 345 - .brp_inc = KVASER_USB_BRP_INC, 346 + static const struct can_bittiming_const kvaser_usb_leaf_m16c_bittiming_const = { 347 + .name = "kvaser_usb_ucii", 348 + .tseg1_min = 4, 349 + .tseg1_max = 16, 350 + .tseg2_min = 2, 351 + .tseg2_max = 8, 352 + .sjw_max = 4, 353 + .brp_min = 1, 354 + .brp_max = 16, 355 + .brp_inc = 1, 346 356 }; 347 357 348 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = { 358 + static const struct can_bittiming_const kvaser_usb_leaf_m32c_bittiming_const = { 359 + .name = "kvaser_usb_leaf", 360 + .tseg1_min = 3, 361 + .tseg1_max = 16, 362 + .tseg2_min = 2, 363 + .tseg2_max = 8, 364 + .sjw_max = 4, 365 + .brp_min = 2, 366 + .brp_max = 128, 367 + .brp_inc = 2, 368 + }; 369 + 370 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_usbcan_dev_cfg = { 349 371 .clock = { 350 372 .freq = 8 * MEGA /* Hz */, 351 373 }, 352 374 .timestamp_freq = 1, 353 - .bittiming_const = &kvaser_usb_leaf_bittiming_const, 375 + .bittiming_const = &kvaser_usb_leaf_m16c_bittiming_const, 
354 376 }; 355 377 356 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = { 378 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_m32c_dev_cfg = { 357 379 .clock = { 358 380 .freq = 16 * MEGA /* Hz */, 359 381 }, 360 382 .timestamp_freq = 1, 361 - .bittiming_const = &kvaser_usb_leaf_bittiming_const, 383 + .bittiming_const = &kvaser_usb_leaf_m32c_bittiming_const, 362 384 }; 363 385 364 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = { 386 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_16mhz = { 387 + .clock = { 388 + .freq = 16 * MEGA /* Hz */, 389 + }, 390 + .timestamp_freq = 1, 391 + .bittiming_const = &kvaser_usb_flexc_bittiming_const, 392 + }; 393 + 394 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_24mhz = { 365 395 .clock = { 366 396 .freq = 24 * MEGA /* Hz */, 367 397 }, 368 398 .timestamp_freq = 1, 369 - .bittiming_const = &kvaser_usb_leaf_bittiming_const, 399 + .bittiming_const = &kvaser_usb_flexc_bittiming_const, 370 400 }; 371 401 372 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = { 402 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_32mhz = { 373 403 .clock = { 374 404 .freq = 32 * MEGA /* Hz */, 375 405 }, 376 406 .timestamp_freq = 1, 377 - .bittiming_const = &kvaser_usb_leaf_bittiming_const, 407 + .bittiming_const = &kvaser_usb_flexc_bittiming_const, 378 408 }; 379 409 380 410 static void * ··· 414 404 sizeof(struct kvaser_cmd_tx_can); 415 405 cmd->u.tx_can.channel = priv->channel; 416 406 417 - switch (dev->card_data.leaf.family) { 407 + switch (dev->driver_info->family) { 418 408 case KVASER_LEAF: 419 409 cmd_tx_can_flags = &cmd->u.tx_can.leaf.flags; 420 410 break; ··· 534 524 dev->fw_version = le32_to_cpu(softinfo->fw_version); 535 525 dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx); 536 526 537 - switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) { 538 - case 
KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK: 539 - dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz; 540 - break; 541 - case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK: 542 - dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz; 543 - break; 544 - case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK: 545 - dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz; 546 - break; 527 + if (dev->driver_info->quirks & KVASER_USB_QUIRK_IGNORE_CLK_FREQ) { 528 + /* Firmware expects bittiming parameters calculated for 16MHz 529 + * clock, regardless of the actual clock 530 + */ 531 + dev->cfg = &kvaser_usb_leaf_m32c_dev_cfg; 532 + } else { 533 + switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) { 534 + case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK: 535 + dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_16mhz; 536 + break; 537 + case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK: 538 + dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_24mhz; 539 + break; 540 + case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK: 541 + dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_32mhz; 542 + break; 543 + } 547 544 } 548 545 } 549 546 ··· 567 550 if (err) 568 551 return err; 569 552 570 - switch (dev->card_data.leaf.family) { 553 + switch (dev->driver_info->family) { 571 554 case KVASER_LEAF: 572 555 kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo); 573 556 break; ··· 575 558 dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version); 576 559 dev->max_tx_urbs = 577 560 le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx); 578 - dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz; 561 + dev->cfg = &kvaser_usb_leaf_usbcan_dev_cfg; 579 562 break; 580 563 } 581 564 ··· 614 597 615 598 dev->nchannels = cmd.u.cardinfo.nchannels; 616 599 if (dev->nchannels > KVASER_USB_MAX_NET_DEVICES || 617 - (dev->card_data.leaf.family == KVASER_USBCAN && 600 + (dev->driver_info->family == KVASER_USBCAN && 618 601 dev->nchannels > MAX_USBCAN_NET_DEVICES)) 619 602 return -EINVAL; 620 603 ··· 747 730 new_state < CAN_STATE_BUS_OFF) 748 731 priv->can.can_stats.restarts++; 749 
732 750 - switch (dev->card_data.leaf.family) { 733 + switch (dev->driver_info->family) { 751 734 case KVASER_LEAF: 752 735 if (es->leaf.error_factor) { 753 736 priv->can.can_stats.bus_error++; ··· 826 809 } 827 810 } 828 811 829 - switch (dev->card_data.leaf.family) { 812 + switch (dev->driver_info->family) { 830 813 case KVASER_LEAF: 831 814 if (es->leaf.error_factor) { 832 815 cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_PROT; ··· 1016 999 stats = &priv->netdev->stats; 1017 1000 1018 1001 if ((cmd->u.rx_can_header.flag & MSG_FLAG_ERROR_FRAME) && 1019 - (dev->card_data.leaf.family == KVASER_LEAF && 1002 + (dev->driver_info->family == KVASER_LEAF && 1020 1003 cmd->id == CMD_LEAF_LOG_MESSAGE)) { 1021 1004 kvaser_usb_leaf_leaf_rx_error(dev, cmd); 1022 1005 return; ··· 1032 1015 return; 1033 1016 } 1034 1017 1035 - switch (dev->card_data.leaf.family) { 1018 + switch (dev->driver_info->family) { 1036 1019 case KVASER_LEAF: 1037 1020 rx_data = cmd->u.leaf.rx_can.data; 1038 1021 break; ··· 1047 1030 return; 1048 1031 } 1049 1032 1050 - if (dev->card_data.leaf.family == KVASER_LEAF && cmd->id == 1033 + if (dev->driver_info->family == KVASER_LEAF && cmd->id == 1051 1034 CMD_LEAF_LOG_MESSAGE) { 1052 1035 cf->can_id = le32_to_cpu(cmd->u.leaf.log_message.id); 1053 1036 if (cf->can_id & KVASER_EXTENDED_FRAME) ··· 1145 1128 break; 1146 1129 1147 1130 case CMD_LEAF_LOG_MESSAGE: 1148 - if (dev->card_data.leaf.family != KVASER_LEAF) 1131 + if (dev->driver_info->family != KVASER_LEAF) 1149 1132 goto warn; 1150 1133 kvaser_usb_leaf_rx_can_msg(dev, cmd); 1151 1134 break; 1152 1135 1153 1136 case CMD_CHIP_STATE_EVENT: 1154 1137 case CMD_CAN_ERROR_EVENT: 1155 - if (dev->card_data.leaf.family == KVASER_LEAF) 1138 + if (dev->driver_info->family == KVASER_LEAF) 1156 1139 kvaser_usb_leaf_leaf_rx_error(dev, cmd); 1157 1140 else 1158 1141 kvaser_usb_leaf_usbcan_rx_error(dev, cmd); ··· 1164 1147 1165 1148 /* Ignored commands */ 1166 1149 case CMD_USBCAN_CLOCK_OVERFLOW_EVENT: 1167 - if 
(dev->card_data.leaf.family != KVASER_USBCAN) 1150 + if (dev->driver_info->family != KVASER_USBCAN) 1168 1151 goto warn; 1169 1152 break; 1170 1153 1171 1154 case CMD_FLUSH_QUEUE_REPLY: 1172 - if (dev->card_data.leaf.family != KVASER_LEAF) 1155 + if (dev->driver_info->family != KVASER_LEAF) 1173 1156 goto warn; 1174 1157 break; 1175 1158
+2 -2
drivers/net/can/xilinx_can.c
··· 258 258 .tseg2_min = 1, 259 259 .tseg2_max = 128, 260 260 .sjw_max = 128, 261 - .brp_min = 2, 261 + .brp_min = 1, 262 262 .brp_max = 256, 263 263 .brp_inc = 1, 264 264 }; ··· 271 271 .tseg2_min = 1, 272 272 .tseg2_max = 16, 273 273 .sjw_max = 16, 274 - .brp_min = 2, 274 + .brp_min = 1, 275 275 .brp_max = 256, 276 276 .brp_inc = 1, 277 277 };
+5
drivers/net/dsa/bcm_sf2.c
··· 878 878 if (duplex == DUPLEX_FULL) 879 879 reg |= DUPLX_MODE; 880 880 881 + if (tx_pause) 882 + reg |= TXFLOW_CNTL; 883 + if (rx_pause) 884 + reg |= RXFLOW_CNTL; 885 + 881 886 core_writel(priv, reg, offset); 882 887 } 883 888
+1
drivers/net/dsa/hirschmann/hellcreek_ptp.c
··· 300 300 const char *label, *state; 301 301 int ret = -EINVAL; 302 302 303 + of_node_get(hellcreek->dev->of_node); 303 304 leds = of_find_node_by_name(hellcreek->dev->of_node, "leds"); 304 305 if (!leds) { 305 306 dev_err(hellcreek->dev, "No LEDs specified in device tree!\n");
+4
drivers/net/dsa/ocelot/felix_vsc9959.c
··· 1886 1886 static void vsc9959_psfp_counters_get(struct ocelot *ocelot, u32 index, 1887 1887 struct felix_stream_filter_counters *counters) 1888 1888 { 1889 + mutex_lock(&ocelot->stats_lock); 1890 + 1889 1891 ocelot_rmw(ocelot, SYS_STAT_CFG_STAT_VIEW(index), 1890 1892 SYS_STAT_CFG_STAT_VIEW_M, 1891 1893 SYS_STAT_CFG); ··· 1902 1900 SYS_STAT_CFG_STAT_VIEW(index) | 1903 1901 SYS_STAT_CFG_STAT_CLEAR_SHOT(0x10), 1904 1902 SYS_STAT_CFG); 1903 + 1904 + mutex_unlock(&ocelot->stats_lock); 1905 1905 } 1906 1906 1907 1907 static int vsc9959_psfp_filter_add(struct ocelot *ocelot, int port,
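The felix_vsc9959.c fix serializes the whole select-then-read sequence on the shared STAT_VIEW register. A hedged user-space sketch of the hazard being closed, with a pthread mutex standing in for `ocelot->stats_lock` and all other names invented:

```c
#include <pthread.h>
#include <stdint.h>

/* One shared "view" register selects whose counters the data registers
 * expose; without the lock, a concurrent caller could retarget the view
 * between our select and our reads. */
static pthread_mutex_t stats_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t view_reg;
static uint32_t counters[16][4];   /* per-view counter banks */

static void psfp_counters_get(uint32_t index, uint32_t out[4])
{
	pthread_mutex_lock(&stats_lock);
	view_reg = index;              /* like SYS_STAT_CFG_STAT_VIEW(index) */
	for (int i = 0; i < 4; i++)
		out[i] = counters[view_reg][i];
	pthread_mutex_unlock(&stats_lock);
}
```

The design point is that the critical section covers the register *selecting* state, not just each individual read.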
+9
drivers/net/ethernet/ibm/ibmvnic.c
··· 5981 5981 release_sub_crqs(adapter, 0); 5982 5982 rc = init_sub_crqs(adapter); 5983 5983 } else { 5984 + /* no need to reinitialize completely, but we do 5985 + * need to clean up transmits that were in flight 5986 + * when we processed the reset. Failure to do so 5987 + * will confound the upper layer, usually TCP, by 5988 + * creating the illusion of transmits that are 5989 + * awaiting completion. 5990 + */ 5991 + clean_tx_pools(adapter); 5992 + 5984 5993 rc = reset_sub_crq_queues(adapter); 5985 5994 } 5986 5995 } else {
+16
drivers/net/ethernet/intel/i40e/i40e.h
··· 37 37 #include <net/tc_act/tc_mirred.h> 38 38 #include <net/udp_tunnel.h> 39 39 #include <net/xdp_sock.h> 40 + #include <linux/bitfield.h> 40 41 #include "i40e_type.h" 41 42 #include "i40e_prototype.h" 42 43 #include <linux/net/intel/i40e_client.h> ··· 1091 1090 (u32)(val >> 32)); 1092 1091 i40e_write_rx_ctl(&pf->hw, I40E_PRTQF_FD_INSET(addr, 0), 1093 1092 (u32)(val & 0xFFFFFFFFULL)); 1093 + } 1094 + 1095 + /** 1096 + * i40e_get_pf_count - get PCI PF count. 1097 + * @hw: pointer to a hw. 1098 + * 1099 + * Reports the function number of the highest PCI physical 1100 + * function plus 1 as it is loaded from the NVM. 1101 + * 1102 + * Return: PCI PF count. 1103 + **/ 1104 + static inline u32 i40e_get_pf_count(struct i40e_hw *hw) 1105 + { 1106 + return FIELD_GET(I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK, 1107 + rd32(hw, I40E_GLGEN_PCIFCNCNT)); 1094 1108 } 1095 1109 1096 1110 /* needed by i40e_ethtool.c */
+73
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 551 551 } 552 552 553 553 /** 554 + * i40e_compute_pci_to_hw_id - compute index from PCI function. 555 + * @vsi: ptr to the VSI to read from. 556 + * @hw: ptr to the hardware info. 557 + **/ 558 + static u32 i40e_compute_pci_to_hw_id(struct i40e_vsi *vsi, struct i40e_hw *hw) 559 + { 560 + int pf_count = i40e_get_pf_count(hw); 561 + 562 + if (vsi->type == I40E_VSI_SRIOV) 563 + return (hw->port * BIT(7)) / pf_count + vsi->vf_id; 564 + 565 + return hw->port + BIT(7); 566 + } 567 + 568 + /** 569 + * i40e_stat_update64 - read and update a 64 bit stat from the chip. 570 + * @hw: ptr to the hardware info. 571 + * @hireg: the high 32 bit reg to read. 572 + * @loreg: the low 32 bit reg to read. 573 + * @offset_loaded: has the initial offset been loaded yet. 574 + * @offset: ptr to current offset value. 575 + * @stat: ptr to the stat. 576 + * 577 + * Since the device stats are not reset at PFReset, they will not 578 + * be zeroed when the driver starts. We'll save the first values read 579 + * and use them as offsets to be subtracted from the raw values in order 580 + * to report stats that count from zero. 581 + **/ 582 + static void i40e_stat_update64(struct i40e_hw *hw, u32 hireg, u32 loreg, 583 + bool offset_loaded, u64 *offset, u64 *stat) 584 + { 585 + u64 new_data; 586 + 587 + new_data = rd64(hw, loreg); 588 + 589 + if (!offset_loaded || new_data < *offset) 590 + *offset = new_data; 591 + *stat = new_data - *offset; 592 + } 593 + 594 + /** 554 595 * i40e_stat_update48 - read and update a 48 bit stat from the chip 555 596 * @hw: ptr to the hardware info 556 597 * @hireg: the high 32 bit reg to read ··· 663 622 } 664 623 665 624 /** 625 + * i40e_stats_update_rx_discards - update rx_discards. 626 + * @vsi: ptr to the VSI to be updated. 627 + * @hw: ptr to the hardware info. 628 + * @stat_idx: VSI's stat_counter_idx. 629 + * @offset_loaded: ptr to the VSI's stat_offsets_loaded. 630 + * @stat_offset: ptr to stat_offset to store first read of specific register.
631 + * @stat: ptr to VSI's stat to be updated. 632 + **/ 633 + static void 634 + i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw, 635 + int stat_idx, bool offset_loaded, 636 + struct i40e_eth_stats *stat_offset, 637 + struct i40e_eth_stats *stat) 638 + { 639 + u64 rx_rdpc, rx_rxerr; 640 + 641 + i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded, 642 + &stat_offset->rx_discards, &rx_rdpc); 643 + i40e_stat_update64(hw, 644 + I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)), 645 + I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)), 646 + offset_loaded, &stat_offset->rx_discards_other, 647 + &rx_rxerr); 648 + 649 + stat->rx_discards = rx_rdpc + rx_rxerr; 650 + } 651 + 652 + /** 666 653 * i40e_update_eth_stats - Update VSI-specific ethernet statistics counters. 667 654 * @vsi: the VSI to be updated 668 655 **/ ··· 749 680 I40E_GLV_BPTCL(stat_idx), 750 681 vsi->stat_offsets_loaded, 751 682 &oes->tx_broadcast, &es->tx_broadcast); 683 + 684 + i40e_stats_update_rx_discards(vsi, hw, stat_idx, 685 + vsi->stat_offsets_loaded, oes, es); 686 + 752 687 vsi->stat_offsets_loaded = true; 753 688 } 754 689
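The new `i40e_stat_update64()` follows the driver's existing offset convention: device counters survive a PF reset, so the first value read becomes a baseline that is subtracted from every subsequent read. A simplified, testable version of that logic (register access removed and the offset-loaded flag folded into the helper for brevity):

```c
#include <stdint.h>

/* Counters are not cleared by PFReset: snapshot the first value read
 * and report the delta so stats count from zero. If the raw value ever
 * drops below the baseline (e.g. a core reset cleared the register),
 * rebaseline rather than report a huge bogus delta. */
static void stat_update(uint64_t new_data, int *offset_loaded,
			uint64_t *offset, uint64_t *stat)
{
	if (!*offset_loaded || new_data < *offset)
		*offset = new_data;     /* (re)baseline */
	*stat = new_data - *offset;
	*offset_loaded = 1;
}
```

In the real driver the loaded flag lives in `vsi->stat_offsets_loaded` and is flipped once after all counters have been sampled.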
+13
drivers/net/ethernet/intel/i40e/i40e_register.h
··· 211 211 #define I40E_GLGEN_MSRWD_MDIWRDATA_SHIFT 0 212 212 #define I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT 16 213 213 #define I40E_GLGEN_MSRWD_MDIRDDATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT) 214 + #define I40E_GLGEN_PCIFCNCNT 0x001C0AB4 /* Reset: PCIR */ 215 + #define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT 0 216 + #define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK I40E_MASK(0x1F, I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT) 217 + #define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT 16 218 + #define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_MASK I40E_MASK(0xFF, I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT) 214 219 #define I40E_GLGEN_RSTAT 0x000B8188 /* Reset: POR */ 215 220 #define I40E_GLGEN_RSTAT_DEVSTATE_SHIFT 0 216 221 #define I40E_GLGEN_RSTAT_DEVSTATE_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_DEVSTATE_SHIFT) ··· 648 643 #define I40E_VFQF_HKEY1_MAX_INDEX 12 649 644 #define I40E_VFQF_HLUT1(_i, _VF) (0x00220000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...15, _VF=0...127 */ /* Reset: CORER */ 650 645 #define I40E_VFQF_HLUT1_MAX_INDEX 15 646 + #define I40E_GL_RXERR1H(_i) (0x00318004 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ 647 + #define I40E_GL_RXERR1H_MAX_INDEX 143 648 + #define I40E_GL_RXERR1H_RXERR1H_SHIFT 0 649 + #define I40E_GL_RXERR1H_RXERR1H_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1H_RXERR1H_SHIFT) 650 + #define I40E_GL_RXERR1L(_i) (0x00318000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ 651 + #define I40E_GL_RXERR1L_MAX_INDEX 143 652 + #define I40E_GL_RXERR1L_RXERR1L_SHIFT 0 653 + #define I40E_GL_RXERR1L_RXERR1L_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1L_RXERR1L_SHIFT) 651 654 #define I40E_GLPRT_BPRCH(_i) (0x003005E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ 652 655 #define I40E_GLPRT_BPRCL(_i) (0x003005E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ 653 656 #define I40E_GLPRT_BPTCH(_i) (0x00300A04 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
+1
drivers/net/ethernet/intel/i40e/i40e_type.h
··· 1172 1172 u64 tx_broadcast; /* bptc */ 1173 1173 u64 tx_discards; /* tdpc */ 1174 1174 u64 tx_errors; /* tepc */ 1175 + u64 rx_discards_other; /* rxerr1 */ 1175 1176 }; 1176 1177 1177 1178 /* Statistics collected per VEB per TC */
+4
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 2147 2147 /* VFs only use TC 0 */ 2148 2148 vfres->vsi_res[0].qset_handle 2149 2149 = le16_to_cpu(vsi->info.qs_handle[0]); 2150 + if (!(vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO) && !vf->pf_set_mac) { 2151 + i40e_del_mac_filter(vsi, vf->default_lan_addr.addr); 2152 + eth_zero_addr(vf->default_lan_addr.addr); 2153 + } 2150 2154 ether_addr_copy(vfres->vsi_res[0].default_mac_addr, 2151 2155 vf->default_lan_addr.addr); 2152 2156 }
+1 -1
drivers/net/ethernet/marvell/octeon_ep/octep_regs_cn9k_pf.h
··· 52 52 53 53 #define CN93_SDP_EPF_RINFO_SRN(val) ((val) & 0xFF) 54 54 #define CN93_SDP_EPF_RINFO_RPVF(val) (((val) >> 32) & 0xF) 55 - #define CN93_SDP_EPF_RINFO_NVFS(val) (((val) >> 48) && 0xFF) 55 + #define CN93_SDP_EPF_RINFO_NVFS(val) (((val) >> 48) & 0xFF) 56 56 57 57 /* SDP Function select */ 58 58 #define CN93_SDP_FUNC_SEL_EPF_BIT_POS 8
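The octep change is a one-character fix worth spelling out: inside a field-extraction macro, logical `&&` collapses any non-zero operand to 1 instead of masking bits. A demonstration of both forms:

```c
#include <stdint.h>

/* Buggy form: (x >> 48) && 0xFF is a boolean test, yielding 0 or 1.
 * Fixed form: (x >> 48) & 0xFF extracts the low 8 bits of the field. */
#define NVFS_BUGGY(val)  ((((uint64_t)(val)) >> 48) && 0xFF)
#define NVFS_FIXED(val)  ((((uint64_t)(val)) >> 48) & 0xFF)
```

With 0x24 in bits 55:48, the buggy macro reports 1 VF where the fixed one reports 36.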
+6 -7
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 4529 4529 return -EOPNOTSUPP; 4530 4530 } 4531 4531 4532 - if (act->police.notexceed.act_id != FLOW_ACTION_PIPE && 4533 - act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) { 4534 - NL_SET_ERR_MSG_MOD(extack, 4535 - "Offload not supported when conform action is not pipe or ok"); 4536 - return -EOPNOTSUPP; 4537 - } 4538 - 4539 4532 if (act->police.notexceed.act_id == FLOW_ACTION_ACCEPT && 4540 4533 !flow_action_is_last_entry(action, act)) { 4541 4534 NL_SET_ERR_MSG_MOD(extack, ··· 4579 4586 flow_action_for_each(i, act, flow_action) { 4580 4587 switch (act->id) { 4581 4588 case FLOW_ACTION_POLICE: 4589 + if (act->police.notexceed.act_id != FLOW_ACTION_CONTINUE) { 4590 + NL_SET_ERR_MSG_MOD(extack, 4591 + "Offload not supported when conform action is not continue"); 4592 + return -EOPNOTSUPP; 4593 + } 4594 + 4582 4595 err = mlx5e_policer_validate(flow_action, act, extack); 4583 4596 if (err) 4584 4597 return err;
+13 -1
drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
··· 4415 4415 return 0; 4416 4416 4417 4417 err_nexthop_neigh_init: 4418 + list_del(&nh->router_list_node); 4419 + mlxsw_sp_nexthop_counter_free(mlxsw_sp, nh); 4418 4420 mlxsw_sp_nexthop_remove(mlxsw_sp, nh); 4419 4421 return err; 4420 4422 } ··· 6742 6740 const struct fib6_info *rt) 6743 6741 { 6744 6742 struct net_device *dev = rt->fib6_nh->fib_nh_dev; 6743 + int err; 6745 6744 6746 6745 nh->nhgi = nh_grp->nhgi; 6747 6746 nh->nh_weight = rt->fib6_nh->fib_nh_weight; ··· 6758 6755 return 0; 6759 6756 nh->ifindex = dev->ifindex; 6760 6757 6761 - return mlxsw_sp_nexthop_type_init(mlxsw_sp, nh, dev); 6758 + err = mlxsw_sp_nexthop_type_init(mlxsw_sp, nh, dev); 6759 + if (err) 6760 + goto err_nexthop_type_init; 6761 + 6762 + return 0; 6763 + 6764 + err_nexthop_type_init: 6765 + list_del(&nh->router_list_node); 6766 + mlxsw_sp_nexthop_counter_free(mlxsw_sp, nh); 6767 + return err; 6762 6768 } 6763 6769 6764 6770 static void mlxsw_sp_nexthop6_fini(struct mlxsw_sp *mlxsw_sp,
+2 -6
drivers/net/ethernet/microchip/lan966x/lan966x_main.c
··· 994 994 struct fwnode_handle *ports, *portnp; 995 995 struct lan966x *lan966x; 996 996 u8 mac_addr[ETH_ALEN]; 997 - int err, i; 997 + int err; 998 998 999 999 lan966x = devm_kzalloc(&pdev->dev, sizeof(*lan966x), GFP_KERNEL); 1000 1000 if (!lan966x) ··· 1025 1025 if (err) 1026 1026 return dev_err_probe(&pdev->dev, err, "Reset failed"); 1027 1027 1028 - i = 0; 1029 - fwnode_for_each_available_child_node(ports, portnp) 1030 - ++i; 1031 - 1032 - lan966x->num_phys_ports = i; 1028 + lan966x->num_phys_ports = NUM_PHYS_PORTS; 1033 1029 lan966x->ports = devm_kcalloc(&pdev->dev, lan966x->num_phys_ports, 1034 1030 sizeof(struct lan966x_port *), 1035 1031 GFP_KERNEL);
+1
drivers/net/ethernet/microchip/lan966x/lan966x_main.h
··· 34 34 /* Reserved amount for (SRC, PRIO) at index 8*SRC + PRIO */ 35 35 #define QSYS_Q_RSRV 95 36 36 37 + #define NUM_PHYS_PORTS 8 37 38 #define CPU_PORT 8 38 39 39 40 /* Reserved PGIDs */
+6
drivers/net/ethernet/microchip/sparx5/sparx5_switchdev.c
··· 396 396 u32 mact_entry; 397 397 int res, err; 398 398 399 + if (!sparx5_netdevice_check(dev)) 400 + return -EOPNOTSUPP; 401 + 399 402 if (netif_is_bridge_master(v->obj.orig_dev)) { 400 403 sparx5_mact_learn(spx5, PGID_CPU, v->addr, v->vid); 401 404 return 0; ··· 468 465 u16 pgid_idx, vid; 469 466 u32 mact_entry, res, pgid_entry[3]; 470 467 int err; 468 + 469 + if (!sparx5_netdevice_check(dev)) 470 + return -EOPNOTSUPP; 471 471 472 472 if (netif_is_bridge_master(v->obj.orig_dev)) { 473 473 sparx5_mact_forget(spx5, v->addr, v->vid);
+4 -6
drivers/net/ethernet/realtek/r8169_main.c
··· 4190 4190 static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp, 4191 4191 struct sk_buff *skb, u32 *opts) 4192 4192 { 4193 - u32 transport_offset = (u32)skb_transport_offset(skb); 4194 4193 struct skb_shared_info *shinfo = skb_shinfo(skb); 4195 4194 u32 mss = shinfo->gso_size; 4196 4195 ··· 4206 4207 WARN_ON_ONCE(1); 4207 4208 } 4208 4209 4209 - opts[0] |= transport_offset << GTTCPHO_SHIFT; 4210 + opts[0] |= skb_transport_offset(skb) << GTTCPHO_SHIFT; 4210 4211 opts[1] |= mss << TD1_MSS_SHIFT; 4211 4212 } else if (skb->ip_summed == CHECKSUM_PARTIAL) { 4212 4213 u8 ip_protocol; ··· 4234 4235 else 4235 4236 WARN_ON_ONCE(1); 4236 4237 4237 - opts[1] |= transport_offset << TCPHO_SHIFT; 4238 + opts[1] |= skb_transport_offset(skb) << TCPHO_SHIFT; 4238 4239 } else { 4239 4240 unsigned int padto = rtl_quirk_packet_padto(tp, skb); 4240 4241 ··· 4401 4402 struct net_device *dev, 4402 4403 netdev_features_t features) 4403 4404 { 4404 - int transport_offset = skb_transport_offset(skb); 4405 4405 struct rtl8169_private *tp = netdev_priv(dev); 4406 4406 4407 4407 if (skb_is_gso(skb)) { 4408 4408 if (tp->mac_version == RTL_GIGA_MAC_VER_34) 4409 4409 features = rtl8168evl_fix_tso(skb, features); 4410 4410 4411 - if (transport_offset > GTTCPHO_MAX && 4411 + if (skb_transport_offset(skb) > GTTCPHO_MAX && 4412 4412 rtl_chip_supports_csum_v2(tp)) 4413 4413 features &= ~NETIF_F_ALL_TSO; 4414 4414 } else if (skb->ip_summed == CHECKSUM_PARTIAL) { ··· 4418 4420 if (rtl_quirk_packet_padto(tp, skb)) 4419 4421 features &= ~NETIF_F_CSUM_MASK; 4420 4422 4421 - if (transport_offset > TCPHO_MAX && 4423 + if (skb_transport_offset(skb) > TCPHO_MAX && 4422 4424 rtl_chip_supports_csum_v2(tp)) 4423 4425 features &= ~NETIF_F_CSUM_MASK; 4424 4426 }
+2 -2
drivers/net/ethernet/smsc/epic100.c
··· 1515 1515 struct net_device *dev = pci_get_drvdata(pdev); 1516 1516 struct epic_private *ep = netdev_priv(dev); 1517 1517 1518 + unregister_netdev(dev); 1518 1519 dma_free_coherent(&pdev->dev, TX_TOTAL_SIZE, ep->tx_ring, 1519 1520 ep->tx_ring_dma); 1520 1521 dma_free_coherent(&pdev->dev, RX_TOTAL_SIZE, ep->rx_ring, 1521 1522 ep->rx_ring_dma); 1522 - unregister_netdev(dev); 1523 1523 pci_iounmap(pdev, ep->ioaddr); 1524 - pci_release_regions(pdev); 1525 1524 free_netdev(dev); 1525 + pci_release_regions(pdev); 1526 1526 pci_disable_device(pdev); 1527 1527 /* pci_power_off(pdev, -1); */ 1528 1528 }
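The epic100 reordering restores the rule that teardown mirrors probe in reverse: unregister the netdev first (so no I/O can still be in flight) before its DMA rings and mapping go away, and free the netdev before releasing the PCI regions it hangs off. A stub-based sketch of the corrected ordering; the stub names are hypothetical and only record call order:

```c
#include <string.h>

/* Record the order of teardown steps so it can be checked; each stub
 * stands in for the corresponding driver call in epic_remove_one(). */
static char order[8];

static void unregister_netdev_stub(void) { strcat(order, "U"); }
static void free_dma_rings_stub(void)    { strcat(order, "D"); }
static void free_netdev_stub(void)       { strcat(order, "N"); }
static void release_regions_stub(void)   { strcat(order, "R"); }

/* Corrected ordering: stop the user of a resource before freeing it. */
static void remove_one(void)
{
	unregister_netdev_stub();  /* no further I/O touches the rings */
	free_dma_rings_stub();
	free_netdev_stub();
	release_regions_stub();    /* last: nothing references the BARs now */
}
```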
+4 -2
drivers/net/phy/ax88796b.c
··· 88 88 /* Reset PHY, otherwise MII_LPA will provide outdated information. 89 89 * This issue is reproducible only with some link partner PHYs 90 90 */ 91 - if (phydev->state == PHY_NOLINK && phydev->drv->soft_reset) 92 - phydev->drv->soft_reset(phydev); 91 + if (phydev->state == PHY_NOLINK) { 92 + phy_init_hw(phydev); 93 + phy_start_aneg(phydev); 94 + } 93 95 } 94 96 95 97 static struct phy_driver asix_driver[] = {
+1 -3
drivers/net/phy/dp83822.c
··· 229 229 if (misr_status < 0) 230 230 return misr_status; 231 231 232 - misr_status |= (DP83822_RX_ERR_HF_INT_EN | 233 - DP83822_FALSE_CARRIER_HF_INT_EN | 234 - DP83822_LINK_STAT_INT_EN | 232 + misr_status |= (DP83822_LINK_STAT_INT_EN | 235 233 DP83822_ENERGY_DET_INT_EN | 236 234 DP83822_LINK_QUAL_INT_EN); 237 235
+23
drivers/net/phy/phy.c
··· 31 31 #include <linux/io.h> 32 32 #include <linux/uaccess.h> 33 33 #include <linux/atomic.h> 34 + #include <linux/suspend.h> 34 35 #include <net/netlink.h> 35 36 #include <net/genetlink.h> 36 37 #include <net/sock.h> ··· 976 975 struct phy_device *phydev = phy_dat; 977 976 struct phy_driver *drv = phydev->drv; 978 977 irqreturn_t ret; 978 + 979 + /* Wakeup interrupts may occur during a system sleep transition. 980 + * Postpone handling until the PHY has resumed. 981 + */ 982 + if (IS_ENABLED(CONFIG_PM_SLEEP) && phydev->irq_suspended) { 983 + struct net_device *netdev = phydev->attached_dev; 984 + 985 + if (netdev) { 986 + struct device *parent = netdev->dev.parent; 987 + 988 + if (netdev->wol_enabled) 989 + pm_system_wakeup(); 990 + else if (device_may_wakeup(&netdev->dev)) 991 + pm_wakeup_dev_event(&netdev->dev, 0, true); 992 + else if (parent && device_may_wakeup(parent)) 993 + pm_wakeup_dev_event(parent, 0, true); 994 + } 995 + 996 + phydev->irq_rerun = 1; 997 + disable_irq_nosync(irq); 998 + return IRQ_HANDLED; 999 + } 979 1000 980 1001 mutex_lock(&phydev->lock); 981 1002 ret = drv->handle_interrupt(phydev);
+23
drivers/net/phy/phy_device.c
··· 278 278 if (phydev->mac_managed_pm) 279 279 return 0; 280 280 281 + /* Wakeup interrupts may occur during the system sleep transition when 282 + * the PHY is inaccessible. Set flag to postpone handling until the PHY 283 + * has resumed. Wait for concurrent interrupt handler to complete. 284 + */ 285 + if (phy_interrupt_is_valid(phydev)) { 286 + phydev->irq_suspended = 1; 287 + synchronize_irq(phydev->irq); 288 + } 289 + 281 290 /* We must stop the state machine manually, otherwise it stops out of 282 291 * control, possibly with the phydev->lock held. Upon resume, netdev 283 292 * may call phy routines that try to grab the same lock, and that may ··· 324 315 if (ret < 0) 325 316 return ret; 326 317 no_resume: 318 + if (phy_interrupt_is_valid(phydev)) { 319 + phydev->irq_suspended = 0; 320 + synchronize_irq(phydev->irq); 321 + 322 + /* Rerun interrupts which were postponed by phy_interrupt() 323 + * because they occurred during the system sleep transition. 324 + */ 325 + if (phydev->irq_rerun) { 326 + phydev->irq_rerun = 0; 327 + enable_irq(phydev->irq); 328 + irq_wake_thread(phydev->irq, phydev); 329 + } 330 + } 331 + 327 332 if (phydev->attached_dev && phydev->adjust_link) 328 333 phy_start_machine(phydev); 329 334
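Taken together, the phy.c and phy_device.c hunks implement a deferred-interrupt pattern: while the PHY is suspended its handler only flags that work is pending, and resume replays it. A single-threaded sketch of that state machine (all names hypothetical, no real IRQ plumbing):

```c
/* Sketch of the suspend-window IRQ deferral: while suspended, the
 * handler records that work is pending instead of touching the
 * (inaccessible) device; resume clears the flag and replays. */
static int irq_suspended, irq_rerun, work_done;

static void handle_work(void) { work_done++; }

static void phy_interrupt_sketch(void)
{
	if (irq_suspended) {
		irq_rerun = 1;          /* postpone until resume */
		return;
	}
	handle_work();
}

static void phy_suspend_sketch(void) { irq_suspended = 1; }

static void phy_resume_sketch(void)
{
	irq_suspended = 0;
	if (irq_rerun) {                /* replay the postponed interrupt */
		irq_rerun = 0;
		phy_interrupt_sketch();
	}
}
```

The real driver additionally uses `synchronize_irq()` around the flag flips so a handler already running sees a consistent value, and `disable_irq_nosync()`/`irq_wake_thread()` to hand the replay back to the threaded handler.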
+1 -1
drivers/net/phy/sfp.c
··· 2516 2516 2517 2517 platform_set_drvdata(pdev, sfp); 2518 2518 2519 - err = devm_add_action(sfp->dev, sfp_cleanup, sfp); 2519 + err = devm_add_action_or_reset(sfp->dev, sfp_cleanup, sfp); 2520 2520 if (err < 0) 2521 2521 return err; 2522 2522
+13 -2
drivers/net/tun.c
··· 273 273 } 274 274 } 275 275 276 + static void tun_napi_enable(struct tun_file *tfile) 277 + { 278 + if (tfile->napi_enabled) 279 + napi_enable(&tfile->napi); 280 + } 281 + 276 282 static void tun_napi_disable(struct tun_file *tfile) 277 283 { 278 284 if (tfile->napi_enabled) ··· 640 634 tun = rtnl_dereference(tfile->tun); 641 635 642 636 if (tun && clean) { 643 - tun_napi_disable(tfile); 637 + if (!tfile->detached) 638 + tun_napi_disable(tfile); 644 639 tun_napi_del(tfile); 645 640 } 646 641 ··· 660 653 if (clean) { 661 654 RCU_INIT_POINTER(tfile->tun, NULL); 662 655 sock_put(&tfile->sk); 663 - } else 656 + } else { 664 657 tun_disable_queue(tun, tfile); 658 + tun_napi_disable(tfile); 659 + } 665 660 666 661 synchronize_net(); 667 662 tun_flow_delete_by_queue(tun, tun->numqueues + 1); ··· 736 727 sock_put(&tfile->sk); 737 728 } 738 729 list_for_each_entry_safe(tfile, tmp, &tun->disabled, next) { 730 + tun_napi_del(tfile); 739 731 tun_enable_queue(tfile); 740 732 tun_queue_purge(tfile); 741 733 xdp_rxq_info_unreg(&tfile->xdp_rxq); ··· 817 807 818 808 if (tfile->detached) { 819 809 tun_enable_queue(tfile); 810 + tun_napi_enable(tfile); 820 811 } else { 821 812 sock_hold(&tfile->sk); 822 813 tun_napi_init(tun, tfile, napi, napi_frags);
+1 -2
drivers/net/usb/asix.h
··· 126 126 AX_MEDIUM_RE) 127 127 128 128 #define AX88772_MEDIUM_DEFAULT \ 129 - (AX_MEDIUM_FD | AX_MEDIUM_RFC | \ 130 - AX_MEDIUM_TFC | AX_MEDIUM_PS | \ 129 + (AX_MEDIUM_FD | AX_MEDIUM_PS | \ 131 130 AX_MEDIUM_AC | AX_MEDIUM_RE) 132 131 133 132 /* AX88772 & AX88178 RX_CTL values */
+1
drivers/net/usb/asix_common.c
··· 431 431 432 432 asix_write_medium_mode(dev, mode, 0); 433 433 phy_print_status(phydev); 434 + usbnet_link_change(dev, phydev->link, 0); 434 435 } 435 436 436 437 int asix_write_gpio(struct usbnet *dev, u16 value, int sleep, int in_pm)
+78 -27
drivers/net/usb/ax88179_178a.c
··· 1472 1472 * are bundled into this buffer and where we can find an array of
1473 1473 * per-packet metadata (which contains elements encoded into u16).
1474 1474 */
1475 +
1476 + /* SKB contents for current firmware:
1477 + * <packet 1> <padding>
1478 + * ...
1479 + * <packet N> <padding>
1480 + * <per-packet metadata entry 1> <dummy header>
1481 + * ...
1482 + * <per-packet metadata entry N> <dummy header>
1483 + * <padding2> <rx_hdr>
1484 + *
1485 + * where:
1486 + * <packet N> contains pkt_len bytes:
1487 + * 2 bytes of IP alignment pseudo header
1488 + * packet received
1489 + * <per-packet metadata entry N> contains 4 bytes:
1490 + * pkt_len and fields AX_RXHDR_*
1491 + * <padding> 0-7 bytes to terminate at
1492 + * 8 bytes boundary (64-bit).
1493 + * <padding2> 4 bytes to make rx_hdr terminate at
1494 + * 8 bytes boundary (64-bit)
1495 + * <dummy-header> contains 4 bytes:
1496 + * pkt_len=0 and AX_RXHDR_DROP_ERR
1497 + * <rx-hdr> contains 4 bytes:
1498 + * pkt_cnt and hdr_off (offset of
1499 + * <per-packet metadata entry 1>)
1500 + *
1501 + * pkt_cnt is the number of entries in the per-packet metadata.
1502 + * In the current firmware there are 2 entries per packet.
1503 + * The first points to the packet and the
1504 + * second is a dummy header.
1505 + * This was probably done to align fields in 64-bit and
1506 + * maintain compatibility with old firmware.
1507 + * This code assumes that <dummy header> and <padding2> are
1508 + * optional.
1509 + */
1510 +
1475 1511 if (skb->len < 4)
1476 1512 return 0;
1477 1513 skb_trim(skb, skb->len - 4);
··· 1521 1485 /* Make sure that the bounds of the metadata array are inside the SKB
1522 1486 * (and in front of the counter at the end).
1523 1487 */ 1524 - if (pkt_cnt * 2 + hdr_off > skb->len) 1488 + if (pkt_cnt * 4 + hdr_off > skb->len) 1525 1489 return 0; 1526 1490 pkt_hdr = (u32 *)(skb->data + hdr_off); 1527 1491 1528 1492 /* Packets must not overlap the metadata array */ 1529 1493 skb_trim(skb, hdr_off); 1530 1494 1531 - for (; ; pkt_cnt--, pkt_hdr++) { 1495 + for (; pkt_cnt > 0; pkt_cnt--, pkt_hdr++) { 1496 + u16 pkt_len_plus_padd; 1532 1497 u16 pkt_len; 1533 1498 1534 1499 le32_to_cpus(pkt_hdr); 1535 1500 pkt_len = (*pkt_hdr >> 16) & 0x1fff; 1501 + pkt_len_plus_padd = (pkt_len + 7) & 0xfff8; 1536 1502 1537 - if (pkt_len > skb->len) 1503 + /* Skip dummy header used for alignment 1504 + */ 1505 + if (pkt_len == 0) 1506 + continue; 1507 + 1508 + if (pkt_len_plus_padd > skb->len) 1538 1509 return 0; 1539 1510 1540 1511 /* Check CRC or runt packet */ 1541 - if (((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) == 0) && 1542 - pkt_len >= 2 + ETH_HLEN) { 1543 - bool last = (pkt_cnt == 0); 1544 - 1545 - if (last) { 1546 - ax_skb = skb; 1547 - } else { 1548 - ax_skb = skb_clone(skb, GFP_ATOMIC); 1549 - if (!ax_skb) 1550 - return 0; 1551 - } 1552 - ax_skb->len = pkt_len; 1553 - /* Skip IP alignment pseudo header */ 1554 - skb_pull(ax_skb, 2); 1555 - skb_set_tail_pointer(ax_skb, ax_skb->len); 1556 - ax_skb->truesize = pkt_len + sizeof(struct sk_buff); 1557 - ax88179_rx_checksum(ax_skb, pkt_hdr); 1558 - 1559 - if (last) 1560 - return 1; 1561 - 1562 - usbnet_skb_return(dev, ax_skb); 1512 + if ((*pkt_hdr & (AX_RXHDR_CRC_ERR | AX_RXHDR_DROP_ERR)) || 1513 + pkt_len < 2 + ETH_HLEN) { 1514 + dev->net->stats.rx_errors++; 1515 + skb_pull(skb, pkt_len_plus_padd); 1516 + continue; 1563 1517 } 1564 1518 1565 - /* Trim this packet away from the SKB */ 1566 - if (!skb_pull(skb, (pkt_len + 7) & 0xFFF8)) 1519 + /* last packet */ 1520 + if (pkt_len_plus_padd == skb->len) { 1521 + skb_trim(skb, pkt_len); 1522 + 1523 + /* Skip IP alignment pseudo header */ 1524 + skb_pull(skb, 2); 1525 + 1526 + skb->truesize = 
SKB_TRUESIZE(pkt_len_plus_padd); 1527 + ax88179_rx_checksum(skb, pkt_hdr); 1528 + return 1; 1529 + } 1530 + 1531 + ax_skb = skb_clone(skb, GFP_ATOMIC); 1532 + if (!ax_skb) 1567 1533 return 0; 1534 + skb_trim(ax_skb, pkt_len); 1535 + 1536 + /* Skip IP alignment pseudo header */ 1537 + skb_pull(ax_skb, 2); 1538 + 1539 + skb->truesize = pkt_len_plus_padd + 1540 + SKB_DATA_ALIGN(sizeof(struct sk_buff)); 1541 + ax88179_rx_checksum(ax_skb, pkt_hdr); 1542 + usbnet_skb_return(dev, ax_skb); 1543 + 1544 + skb_pull(skb, pkt_len_plus_padd); 1568 1545 } 1546 + 1547 + return 0; 1569 1548 } 1570 1549 1571 1550 static struct sk_buff *
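Editorial note: the rewritten ax88179 rx_fixup above steps through the URB buffer by rounding each packet out to the next 8-byte boundary with `(pkt_len + 7) & 0xfff8` (`pkt_len` is at most 0x1fff, so the mask never truncates). A quick standalone sketch of that rounding:

```c
#include <stdint.h>

/* Round a packet length up to the next 64-bit (8-byte) boundary,
 * mirroring the pkt_len_plus_padd computation in ax88179_rx_fixup(). */
static uint16_t round_to_8(uint16_t pkt_len)
{
	return (pkt_len + 7) & 0xfff8;
}
```

The loop treats `pkt_len_plus_padd == skb->len` as the last packet and bails out when the padded length would overrun the remaining buffer.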
+1 -1
drivers/net/usb/catc.c
··· 781 781 intf->altsetting->desc.bInterfaceNumber, 1)) { 782 782 dev_err(dev, "Can't set altsetting 1.\n"); 783 783 ret = -EIO; 784 - goto fail_mem;; 784 + goto fail_mem; 785 785 } 786 786 787 787 netdev = alloc_etherdev(sizeof(struct catc));
+14 -7
drivers/net/usb/usbnet.c
··· 2004 2004 cmd, reqtype, value, index, size); 2005 2005 2006 2006 if (size) { 2007 - buf = kmalloc(size, GFP_KERNEL); 2007 + buf = kmalloc(size, GFP_NOIO); 2008 2008 if (!buf) 2009 2009 goto out; 2010 2010 } ··· 2036 2036 cmd, reqtype, value, index, size); 2037 2037 2038 2038 if (data) { 2039 - buf = kmemdup(data, size, GFP_KERNEL); 2039 + buf = kmemdup(data, size, GFP_NOIO); 2040 2040 if (!buf) 2041 2041 goto out; 2042 2042 } else { ··· 2137 2137 int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype, 2138 2138 u16 value, u16 index, const void *data, u16 size) 2139 2139 { 2140 - struct usb_ctrlrequest *req = NULL; 2140 + struct usb_ctrlrequest *req; 2141 2141 struct urb *urb; 2142 2142 int err = -ENOMEM; 2143 2143 void *buf = NULL; ··· 2155 2155 if (!buf) { 2156 2156 netdev_err(dev->net, "Error allocating buffer" 2157 2157 " in %s!\n", __func__); 2158 - goto fail_free; 2158 + goto fail_free_urb; 2159 2159 } 2160 2160 } 2161 2161 ··· 2179 2179 if (err < 0) { 2180 2180 netdev_err(dev->net, "Error submitting the control" 2181 2181 " message: status=%d\n", err); 2182 - goto fail_free; 2182 + goto fail_free_all; 2183 2183 } 2184 2184 return 0; 2185 2185 2186 + fail_free_all: 2187 + kfree(req); 2186 2188 fail_free_buf: 2187 2189 kfree(buf); 2188 - fail_free: 2189 - kfree(req); 2190 + /* 2191 + * avoid a double free 2192 + * needed because the flag can be set only 2193 + * after filling the URB 2194 + */ 2195 + urb->transfer_flags = 0; 2196 + fail_free_urb: 2190 2197 usb_free_urb(urb); 2191 2198 fail: 2192 2199 return err;
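Editorial note: the usbnet_write_cmd_async() fix above splits the error labels so each failure point unwinds exactly what was already acquired, in reverse order (and clears `transfer_flags` so usb_free_urb() does not free the buffer a second time). A generic sketch of that goto-unwind shape, using hypothetical resources and test-hook globals rather than the real USB objects:

```c
#include <stdlib.h>

int alloc_fail_at;	/* test hook: which allocation fails (0 = none) */
int unwind_frees;	/* test hook: frees performed on the error path */

static void *try_alloc(int step)
{
	if (step == alloc_fail_at)
		return NULL;
	return malloc(16);
}

/* Acquire three resources; on failure, free only what was acquired,
 * newest first - the same shape as the fail_free_all / fail_free_buf /
 * fail_free_urb labels in usbnet_write_cmd_async(). */
static int acquire_chain(void)
{
	void *a, *b, *c;

	a = try_alloc(1);
	if (!a)
		goto fail;
	b = try_alloc(2);
	if (!b)
		goto fail_free_a;
	c = try_alloc(3);
	if (!c)
		goto fail_free_b;

	free(c);
	free(b);
	free(a);
	return 0;

fail_free_b:
	free(b);
	unwind_frees++;
fail_free_a:
	free(a);
	unwind_frees++;
fail:
	return -1;
}
```

Falling through the labels is what makes the ordering correct: a later failure point releases everything an earlier one would, plus its own extras.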
+7 -1
drivers/net/virtio_net.c
··· 3642 3642 if (vi->has_rss || vi->has_rss_hash_report) 3643 3643 virtnet_init_default_rss(vi); 3644 3644 3645 - err = register_netdev(dev); 3645 + /* serialize netdev register + virtio_device_ready() with ndo_open() */ 3646 + rtnl_lock(); 3647 + 3648 + err = register_netdevice(dev); 3646 3649 if (err) { 3647 3650 pr_debug("virtio_net: registering device failed\n"); 3651 + rtnl_unlock(); 3648 3652 goto free_failover; 3649 3653 } 3650 3654 3651 3655 virtio_device_ready(vdev); 3656 + 3657 + rtnl_unlock(); 3652 3658 3653 3659 err = virtnet_cpu_notif_add(vi); 3654 3660 if (err) {
+52 -4
drivers/net/xen-netfront.c
··· 66 66 MODULE_PARM_DESC(max_queues, 67 67 "Maximum number of queues per virtual interface"); 68 68 69 + static bool __read_mostly xennet_trusted = true; 70 + module_param_named(trusted, xennet_trusted, bool, 0644); 71 + MODULE_PARM_DESC(trusted, "Is the backend trusted"); 72 + 69 73 #define XENNET_TIMEOUT (5 * HZ) 70 74 71 75 static const struct ethtool_ops xennet_ethtool_ops; ··· 177 173 /* Is device behaving sane? */ 178 174 bool broken; 179 175 176 + /* Should skbs be bounced into a zeroed buffer? */ 177 + bool bounce; 178 + 180 179 atomic_t rx_gso_checksum_fixup; 181 180 }; 182 181 ··· 278 271 if (unlikely(!skb)) 279 272 return NULL; 280 273 281 - page = page_pool_dev_alloc_pages(queue->page_pool); 274 + page = page_pool_alloc_pages(queue->page_pool, 275 + GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO); 282 276 if (unlikely(!page)) { 283 277 kfree_skb(skb); 284 278 return NULL; ··· 673 665 return nxmit; 674 666 } 675 667 668 + struct sk_buff *bounce_skb(const struct sk_buff *skb) 669 + { 670 + unsigned int headerlen = skb_headroom(skb); 671 + /* Align size to allocate full pages and avoid contiguous data leaks */ 672 + unsigned int size = ALIGN(skb_end_offset(skb) + skb->data_len, 673 + XEN_PAGE_SIZE); 674 + struct sk_buff *n = alloc_skb(size, GFP_ATOMIC | __GFP_ZERO); 675 + 676 + if (!n) 677 + return NULL; 678 + 679 + if (!IS_ALIGNED((uintptr_t)n->head, XEN_PAGE_SIZE)) { 680 + WARN_ONCE(1, "misaligned skb allocated\n"); 681 + kfree_skb(n); 682 + return NULL; 683 + } 684 + 685 + /* Set the data pointer */ 686 + skb_reserve(n, headerlen); 687 + /* Set the tail pointer and length */ 688 + skb_put(n, skb->len); 689 + 690 + BUG_ON(skb_copy_bits(skb, -headerlen, n->head, headerlen + skb->len)); 691 + 692 + skb_copy_header(n, skb); 693 + return n; 694 + } 676 695 677 696 #define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1) 678 697 ··· 753 718 754 719 /* The first req should be at least ETH_HLEN size or the packet will be 755 720 * dropped by netback. 
721 + * 722 + * If the backend is not trusted bounce all data to zeroed pages to 723 + * avoid exposing contiguous data on the granted page not belonging to 724 + * the skb. 756 725 */ 757 - if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) { 758 - nskb = skb_copy(skb, GFP_ATOMIC); 726 + if (np->bounce || unlikely(PAGE_SIZE - offset < ETH_HLEN)) { 727 + nskb = bounce_skb(skb); 759 728 if (!nskb) 760 729 goto drop; 761 730 dev_consume_skb_any(skb); ··· 1092 1053 } 1093 1054 } 1094 1055 rcu_read_unlock(); 1095 - next: 1056 + 1096 1057 __skb_queue_tail(list, skb); 1058 + 1059 + next: 1097 1060 if (!(rx->flags & XEN_NETRXF_more_data)) 1098 1061 break; 1099 1062 ··· 2255 2214 2256 2215 info->netdev->irq = 0; 2257 2216 2217 + /* Check if backend is trusted. */ 2218 + info->bounce = !xennet_trusted || 2219 + !xenbus_read_unsigned(dev->nodename, "trusted", 1); 2220 + 2258 2221 /* Check if backend supports multiple queues */ 2259 2222 max_queues = xenbus_read_unsigned(info->xbdev->otherend, 2260 2223 "multi-queue-max-queues", 1); ··· 2426 2381 return err; 2427 2382 if (np->netback_has_xdp_headroom) 2428 2383 pr_info("backend supports XDP headroom\n"); 2384 + if (np->bounce) 2385 + dev_info(&np->xbdev->dev, 2386 + "bouncing transmitted data to zeroed pages\n"); 2429 2387 2430 2388 /* talk_to_netback() sets the correct number of queues */ 2431 2389 num_queues = dev->real_num_tx_queues;
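Editorial note: bounce_skb() above sizes its zeroed copy with `ALIGN(..., XEN_PAGE_SIZE)` so only whole pages are granted to the backend and no adjacent allocation leaks through a partially used page. A sketch of the kernel-style power-of-two round-up it relies on (standalone macro; the 4 KiB page size is an assumption for illustration):

```c
#include <stddef.h>

/* Kernel-style ALIGN(): round x up to the next multiple of a,
 * where a must be a power of two. */
#define ALIGN_UP(x, a)	(((x) + ((a) - 1)) & ~((size_t)(a) - 1))
#define XEN_PAGE_SIZE	4096u	/* assumption: 4 KiB Xen pages */

static size_t bounce_size(size_t end_offset, size_t data_len)
{
	/* Whole pages only: the unused tail of the final page stays
	 * zeroed instead of exposing neighbouring data. */
	return ALIGN_UP(end_offset + data_len, XEN_PAGE_SIZE);
}
```

The driver additionally WARNs and drops the copy if the allocation comes back misaligned, since a grant must cover exactly the pages it names.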
+3 -3
drivers/nfc/nfcmrvl/i2c.c
··· 167 167 pdata->irq_polarity = IRQF_TRIGGER_RISING; 168 168 169 169 ret = irq_of_parse_and_map(node, 0); 170 - if (ret < 0) { 171 - pr_err("Unable to get irq, error: %d\n", ret); 172 - return ret; 170 + if (!ret) { 171 + pr_err("Unable to get irq\n"); 172 + return -EINVAL; 173 173 } 174 174 pdata->irq = ret; 175 175
+3 -3
drivers/nfc/nfcmrvl/spi.c
··· 115 115 } 116 116 117 117 ret = irq_of_parse_and_map(node, 0); 118 - if (ret < 0) { 119 - pr_err("Unable to get irq, error: %d\n", ret); 120 - return ret; 118 + if (!ret) { 119 + pr_err("Unable to get irq\n"); 120 + return -EINVAL; 121 121 } 122 122 pdata->irq = ret; 123 123
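Editorial note: both nfcmrvl hunks above fix the same bug: irq_of_parse_and_map() returns an unsigned virq number, with 0 (not a negative errno) meaning failure, so `if (ret < 0)` could never fire. A sketch of the zero-means-failure convention; the mapping function here is a hypothetical stand-in, not the real OF API:

```c
#include <errno.h>

/* Hypothetical mapping table mimicking the irq_of_parse_and_map()
 * return convention: a virq number on success, 0 on failure. */
static unsigned int fake_parse_and_map(int index)
{
	static const unsigned int virqs[] = { 37, 52 };

	if (index < 0 || index >= 2)
		return 0;	/* failure is 0, never negative */
	return virqs[index];
}

static int get_irq(int index)
{
	unsigned int ret = fake_parse_and_map(index);

	if (!ret)		/* correct check: !ret, not ret < 0 */
		return -EINVAL;
	return (int)ret;
}
```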
+9 -2
drivers/nfc/nxp-nci/i2c.c
··· 122 122 skb_put_data(*skb, &header, NXP_NCI_FW_HDR_LEN); 123 123 124 124 r = i2c_master_recv(client, skb_put(*skb, frame_len), frame_len); 125 - if (r != frame_len) { 125 + if (r < 0) { 126 + goto fw_read_exit_free_skb; 127 + } else if (r != frame_len) { 126 128 nfc_err(&client->dev, 127 129 "Invalid frame length: %u (expected %zu)\n", 128 130 r, frame_len); ··· 164 162 165 163 skb_put_data(*skb, (void *)&header, NCI_CTRL_HDR_SIZE); 166 164 165 + if (!header.plen) 166 + return 0; 167 + 167 168 r = i2c_master_recv(client, skb_put(*skb, header.plen), header.plen); 168 - if (r != header.plen) { 169 + if (r < 0) { 170 + goto nci_read_exit_free_skb; 171 + } else if (r != header.plen) { 169 172 nfc_err(&client->dev, 170 173 "Invalid frame payload length: %u (expected %u)\n", 171 174 r, header.plen);
+2 -2
drivers/nvdimm/bus.c
··· 176 176 ndr_end = nd_region->ndr_start + nd_region->ndr_size - 1; 177 177 178 178 /* make sure we are in the region */ 179 - if (ctx->phys < nd_region->ndr_start 180 - || (ctx->phys + ctx->cleared) > ndr_end) 179 + if (ctx->phys < nd_region->ndr_start || 180 + (ctx->phys + ctx->cleared - 1) > ndr_end) 181 181 return 0; 182 182 183 183 sector = (ctx->phys - nd_region->ndr_start) / 512;
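Editorial note: the nvdimm hunk above fixes an off-by-one. The last cleared byte sits at `phys + cleared - 1`, so comparing `phys + cleared` against the inclusive region end wrongly rejected a clear that runs exactly to the end of the region. A sketch of the corrected inclusive bounds check (hypothetical helper):

```c
#include <stdbool.h>
#include <stdint.h>

/* Does [phys, phys + cleared - 1] lie inside [start, start + size - 1]?
 * Mirrors the corrected region check in drivers/nvdimm/bus.c. */
static bool range_in_region(uint64_t start, uint64_t size,
			    uint64_t phys, uint64_t cleared)
{
	uint64_t ndr_end = start + size - 1;

	return phys >= start && (phys + cleared - 1) <= ndr_end;
}
```

With the old `(phys + cleared) > ndr_end` form, clearing the full region (`phys == start`, `cleared == size`) was rejected even though every touched byte is in bounds.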
+2
drivers/nvme/host/core.c
··· 4595 4595 nvme_stop_failfast_work(ctrl); 4596 4596 flush_work(&ctrl->async_event_work); 4597 4597 cancel_work_sync(&ctrl->fw_act_work); 4598 + if (ctrl->ops->stop_ctrl) 4599 + ctrl->ops->stop_ctrl(ctrl); 4598 4600 } 4599 4601 EXPORT_SYMBOL_GPL(nvme_stop_ctrl); 4600 4602
+1
drivers/nvme/host/nvme.h
··· 502 502 void (*free_ctrl)(struct nvme_ctrl *ctrl); 503 503 void (*submit_async_event)(struct nvme_ctrl *ctrl); 504 504 void (*delete_ctrl)(struct nvme_ctrl *ctrl); 505 + void (*stop_ctrl)(struct nvme_ctrl *ctrl); 505 506 int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size); 506 507 void (*print_device_info)(struct nvme_ctrl *ctrl); 507 508 };
+6 -2
drivers/nvme/host/pci.c
··· 3465 3465 { PCI_DEVICE(0x1987, 0x5012), /* Phison E12 */ 3466 3466 .driver_data = NVME_QUIRK_BOGUS_NID, }, 3467 3467 { PCI_DEVICE(0x1987, 0x5016), /* Phison E16 */ 3468 - .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3468 + .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN | 3469 + NVME_QUIRK_BOGUS_NID, }, 3469 3470 { PCI_DEVICE(0x1b4b, 0x1092), /* Lexar 256 GB SSD */ 3470 3471 .driver_data = NVME_QUIRK_NO_NS_DESC_LIST | 3471 3472 NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3473 + { PCI_DEVICE(0x1cc1, 0x33f8), /* ADATA IM2P33F8ABR1 1 TB */ 3474 + .driver_data = NVME_QUIRK_BOGUS_NID, }, 3472 3475 { PCI_DEVICE(0x10ec, 0x5762), /* ADATA SX6000LNP */ 3473 - .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3476 + .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN | 3477 + NVME_QUIRK_BOGUS_NID, }, 3474 3478 { PCI_DEVICE(0x1cc1, 0x8201), /* ADATA SX8200PNP 512GB */ 3475 3479 .driver_data = NVME_QUIRK_NO_DEEPEST_PS | 3476 3480 NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+9 -3
drivers/nvme/host/rdma.c
··· 1048 1048 } 1049 1049 } 1050 1050 1051 + static void nvme_rdma_stop_ctrl(struct nvme_ctrl *nctrl) 1052 + { 1053 + struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl); 1054 + 1055 + cancel_work_sync(&ctrl->err_work); 1056 + cancel_delayed_work_sync(&ctrl->reconnect_work); 1057 + } 1058 + 1051 1059 static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl) 1052 1060 { 1053 1061 struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl); ··· 2260 2252 2261 2253 static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown) 2262 2254 { 2263 - cancel_work_sync(&ctrl->err_work); 2264 - cancel_delayed_work_sync(&ctrl->reconnect_work); 2265 - 2266 2255 nvme_rdma_teardown_io_queues(ctrl, shutdown); 2267 2256 nvme_stop_admin_queue(&ctrl->ctrl); 2268 2257 if (shutdown) ··· 2309 2304 .submit_async_event = nvme_rdma_submit_async_event, 2310 2305 .delete_ctrl = nvme_rdma_delete_ctrl, 2311 2306 .get_address = nvmf_get_address, 2307 + .stop_ctrl = nvme_rdma_stop_ctrl, 2312 2308 }; 2313 2309 2314 2310 /*
+8 -5
drivers/nvme/host/tcp.c
··· 1180 1180 } else if (ret < 0) { 1181 1181 dev_err(queue->ctrl->ctrl.device, 1182 1182 "failed to send request %d\n", ret); 1183 - if (ret != -EPIPE && ret != -ECONNRESET) 1184 - nvme_tcp_fail_request(queue->request); 1183 + nvme_tcp_fail_request(queue->request); 1185 1184 nvme_tcp_done_send_req(queue); 1186 1185 } 1187 1186 return ret; ··· 2193 2194 2194 2195 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown) 2195 2196 { 2196 - cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work); 2197 - cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work); 2198 - 2199 2197 nvme_tcp_teardown_io_queues(ctrl, shutdown); 2200 2198 nvme_stop_admin_queue(ctrl); 2201 2199 if (shutdown) ··· 2230 2234 out_fail: 2231 2235 ++ctrl->nr_reconnects; 2232 2236 nvme_tcp_reconnect_or_remove(ctrl); 2237 + } 2238 + 2239 + static void nvme_tcp_stop_ctrl(struct nvme_ctrl *ctrl) 2240 + { 2241 + cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work); 2242 + cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work); 2233 2243 } 2234 2244 2235 2245 static void nvme_tcp_free_ctrl(struct nvme_ctrl *nctrl) ··· 2559 2557 .submit_async_event = nvme_tcp_submit_async_event, 2560 2558 .delete_ctrl = nvme_tcp_delete_ctrl, 2561 2559 .get_address = nvmf_get_address, 2560 + .stop_ctrl = nvme_tcp_stop_ctrl, 2562 2561 }; 2563 2562 2564 2563 static bool
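Editorial note: the nvme hunks above add an optional `->stop_ctrl` callback so transport-specific work cancellation (err_work, reconnect_work) moves out of the shared shutdown path; the core invokes the hook only when a transport fills it in. A sketch of that optional-ops pattern with hypothetical types standing in for `struct nvme_ctrl_ops`:

```c
#include <stddef.h>

struct fake_ctrl;

/* Per-transport operations; stop_ctrl is optional, as in nvme_ctrl_ops. */
struct fake_ctrl_ops {
	void (*stop_ctrl)(struct fake_ctrl *ctrl);
};

struct fake_ctrl {
	const struct fake_ctrl_ops *ops;
	int stops;
};

/* Shared core path: call the hook only if the transport provides one. */
static void fake_stop_ctrl(struct fake_ctrl *ctrl)
{
	if (ctrl->ops->stop_ctrl)
		ctrl->ops->stop_ctrl(ctrl);
}

static void rdma_like_stop(struct fake_ctrl *ctrl)
{
	ctrl->stops++;	/* real code cancels err/reconnect work here */
}

static const struct fake_ctrl_ops with_stop = { .stop_ctrl = rdma_like_stop };
static const struct fake_ctrl_ops without_stop = { .stop_ctrl = NULL };
```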
+1 -1
drivers/nvme/host/trace.h
··· 69 69 __entry->metadata = !!blk_integrity_rq(req); 70 70 __entry->fctype = cmd->fabrics.fctype; 71 71 __assign_disk_name(__entry->disk, req->q->disk); 72 - memcpy(__entry->cdw10, &cmd->common.cdw10, 72 + memcpy(__entry->cdw10, &cmd->common.cdws, 73 73 sizeof(__entry->cdw10)); 74 74 ), 75 75 TP_printk("nvme%d: %sqid=%d, cmdid=%u, nsid=%u, flags=0x%x, meta=0x%x, cmd=(%s %s)",
+20
drivers/nvme/target/configfs.c
··· 773 773 } 774 774 CONFIGFS_ATTR(nvmet_passthru_, io_timeout); 775 775 776 + static ssize_t nvmet_passthru_clear_ids_show(struct config_item *item, 777 + char *page) 778 + { 779 + return sprintf(page, "%u\n", to_subsys(item->ci_parent)->clear_ids); 780 + } 781 + 782 + static ssize_t nvmet_passthru_clear_ids_store(struct config_item *item, 783 + const char *page, size_t count) 784 + { 785 + struct nvmet_subsys *subsys = to_subsys(item->ci_parent); 786 + unsigned int clear_ids; 787 + 788 + if (kstrtouint(page, 0, &clear_ids)) 789 + return -EINVAL; 790 + subsys->clear_ids = clear_ids; 791 + return count; 792 + } 793 + CONFIGFS_ATTR(nvmet_passthru_, clear_ids); 794 + 776 795 static struct configfs_attribute *nvmet_passthru_attrs[] = { 777 796 &nvmet_passthru_attr_device_path, 778 797 &nvmet_passthru_attr_enable, 779 798 &nvmet_passthru_attr_admin_timeout, 780 799 &nvmet_passthru_attr_io_timeout, 800 + &nvmet_passthru_attr_clear_ids, 781 801 NULL, 782 802 }; 783 803
+6
drivers/nvme/target/core.c
··· 1374 1374 ctrl->port = req->port;
1375 1375 ctrl->ops = req->ops;
1376 1376
1377 + #ifdef CONFIG_NVME_TARGET_PASSTHRU
1378 + /* Loop targets clear the IDs by default */
1379 + if (ctrl->port->disc_addr.trtype == NVMF_TRTYPE_LOOP)
1380 + subsys->clear_ids = 1;
1381 + #endif
1382 +
1377 1383 INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work);
1378 1384 INIT_LIST_HEAD(&ctrl->async_events);
1379 1385 INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL);
+1
drivers/nvme/target/nvmet.h
··· 249 249 struct config_group passthru_group; 250 250 unsigned int admin_timeout; 251 251 unsigned int io_timeout; 252 + unsigned int clear_ids; 252 253 #endif /* CONFIG_NVME_TARGET_PASSTHRU */ 253 254 254 255 #ifdef CONFIG_BLK_DEV_ZONED
+55
drivers/nvme/target/passthru.c
··· 30 30 ctrl->cap &= ~(1ULL << 43); 31 31 } 32 32 33 + static u16 nvmet_passthru_override_id_descs(struct nvmet_req *req) 34 + { 35 + struct nvmet_ctrl *ctrl = req->sq->ctrl; 36 + u16 status = NVME_SC_SUCCESS; 37 + int pos, len; 38 + bool csi_seen = false; 39 + void *data; 40 + u8 csi; 41 + 42 + if (!ctrl->subsys->clear_ids) 43 + return status; 44 + 45 + data = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL); 46 + if (!data) 47 + return NVME_SC_INTERNAL; 48 + 49 + status = nvmet_copy_from_sgl(req, 0, data, NVME_IDENTIFY_DATA_SIZE); 50 + if (status) 51 + goto out_free; 52 + 53 + for (pos = 0; pos < NVME_IDENTIFY_DATA_SIZE; pos += len) { 54 + struct nvme_ns_id_desc *cur = data + pos; 55 + 56 + if (cur->nidl == 0) 57 + break; 58 + if (cur->nidt == NVME_NIDT_CSI) { 59 + memcpy(&csi, cur + 1, NVME_NIDT_CSI_LEN); 60 + csi_seen = true; 61 + break; 62 + } 63 + len = sizeof(struct nvme_ns_id_desc) + cur->nidl; 64 + } 65 + 66 + memset(data, 0, NVME_IDENTIFY_DATA_SIZE); 67 + if (csi_seen) { 68 + struct nvme_ns_id_desc *cur = data; 69 + 70 + cur->nidt = NVME_NIDT_CSI; 71 + cur->nidl = NVME_NIDT_CSI_LEN; 72 + memcpy(cur + 1, &csi, NVME_NIDT_CSI_LEN); 73 + } 74 + status = nvmet_copy_to_sgl(req, 0, data, NVME_IDENTIFY_DATA_SIZE); 75 + out_free: 76 + kfree(data); 77 + return status; 78 + } 79 + 33 80 static u16 nvmet_passthru_override_id_ctrl(struct nvmet_req *req) 34 81 { 35 82 struct nvmet_ctrl *ctrl = req->sq->ctrl; ··· 199 152 */ 200 153 id->mc = 0; 201 154 155 + if (req->sq->ctrl->subsys->clear_ids) { 156 + memset(id->nguid, 0, NVME_NIDT_NGUID_LEN); 157 + memset(id->eui64, 0, NVME_NIDT_EUI64_LEN); 158 + } 159 + 202 160 status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id)); 203 161 204 162 out_free: ··· 227 175 break; 228 176 case NVME_ID_CNS_NS: 229 177 nvmet_passthru_override_id_ns(req); 178 + break; 179 + case NVME_ID_CNS_NS_DESC_LIST: 180 + nvmet_passthru_override_id_descs(req); 230 181 break; 231 182 } 232 183 } else if (status < 0)
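Editorial note: nvmet_passthru_override_id_descs() above walks the 4 KiB Identify Namespace Identification Descriptor list, stopping on a zero `nidl` and advancing by the 4-byte header plus the payload length. A compact sketch of that walk over a plain byte buffer (descriptor header layout per the NVMe spec; the search helper itself is hypothetical):

```c
#include <stddef.h>
#include <stdint.h>

/* NVMe Namespace Identification Descriptor header (4 bytes), followed
 * by nidl bytes of payload. */
struct ns_id_desc {
	uint8_t nidt;		/* descriptor type */
	uint8_t nidl;		/* payload length */
	uint8_t rsvd[2];
};

/* Find a descriptor of the given type; return its payload or NULL.
 * A zero nidl terminates the list, as in the passthru override. */
static const uint8_t *find_desc(const uint8_t *data, size_t size,
				uint8_t type)
{
	size_t pos = 0;

	while (pos + sizeof(struct ns_id_desc) <= size) {
		const struct ns_id_desc *cur =
			(const struct ns_id_desc *)(data + pos);

		if (cur->nidl == 0)
			break;
		if (cur->nidt == type)
			return data + pos + sizeof(*cur);
		pos += sizeof(*cur) + cur->nidl;
	}
	return NULL;
}
```

The kernel code uses the same traversal to pick out the CSI descriptor, then rewrites the buffer with only that entry so host-visible unique IDs are cleared.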
+3 -20
drivers/nvme/target/tcp.c
··· 405 405 return NVME_SC_INTERNAL; 406 406 } 407 407 408 - static void nvmet_tcp_send_ddgst(struct ahash_request *hash, 408 + static void nvmet_tcp_calc_ddgst(struct ahash_request *hash, 409 409 struct nvmet_tcp_cmd *cmd) 410 410 { 411 411 ahash_request_set_crypt(hash, cmd->req.sg, 412 412 (void *)&cmd->exp_ddgst, cmd->req.transfer_len); 413 413 crypto_ahash_digest(hash); 414 - } 415 - 416 - static void nvmet_tcp_recv_ddgst(struct ahash_request *hash, 417 - struct nvmet_tcp_cmd *cmd) 418 - { 419 - struct scatterlist sg; 420 - struct kvec *iov; 421 - int i; 422 - 423 - crypto_ahash_init(hash); 424 - for (i = 0, iov = cmd->iov; i < cmd->nr_mapped; i++, iov++) { 425 - sg_init_one(&sg, iov->iov_base, iov->iov_len); 426 - ahash_request_set_crypt(hash, &sg, NULL, iov->iov_len); 427 - crypto_ahash_update(hash); 428 - } 429 - ahash_request_set_crypt(hash, NULL, (void *)&cmd->exp_ddgst, 0); 430 - crypto_ahash_final(hash); 431 414 } 432 415 433 416 static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd) ··· 437 454 438 455 if (queue->data_digest) { 439 456 pdu->hdr.flags |= NVME_TCP_F_DDGST; 440 - nvmet_tcp_send_ddgst(queue->snd_hash, cmd); 457 + nvmet_tcp_calc_ddgst(queue->snd_hash, cmd); 441 458 } 442 459 443 460 if (cmd->queue->hdr_digest) { ··· 1120 1137 { 1121 1138 struct nvmet_tcp_queue *queue = cmd->queue; 1122 1139 1123 - nvmet_tcp_recv_ddgst(queue->rcv_hash, cmd); 1140 + nvmet_tcp_calc_ddgst(queue->rcv_hash, cmd); 1124 1141 queue->offset = 0; 1125 1142 queue->left = NVME_TCP_DIGEST_LENGTH; 1126 1143 queue->rcv_state = NVMET_TCP_RECV_DDGST;
+2 -2
drivers/pinctrl/aspeed/pinctrl-aspeed.c
··· 236 236 const struct aspeed_sig_expr **funcs; 237 237 const struct aspeed_sig_expr ***prios; 238 238 239 - pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name); 240 - 241 239 if (!pdesc) 242 240 return -EINVAL; 241 + 242 + pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name); 243 243 244 244 prios = pdesc->prios; 245 245
+1
drivers/pinctrl/freescale/pinctrl-imx93.c
··· 239 239 static const struct imx_pinctrl_soc_info imx93_pinctrl_info = { 240 240 .pins = imx93_pinctrl_pads, 241 241 .npins = ARRAY_SIZE(imx93_pinctrl_pads), 242 + .flags = ZERO_OFFSET_VALID, 242 243 .gpr_compatible = "fsl,imx93-iomuxc-gpr", 243 244 }; 244 245
+12 -8
drivers/pinctrl/stm32/pinctrl-stm32.c
··· 1338 1338 bank->secure_control = pctl->match_data->secure_control; 1339 1339 spin_lock_init(&bank->lock); 1340 1340 1341 - /* create irq hierarchical domain */ 1342 - bank->fwnode = fwnode; 1341 + if (pctl->domain) { 1342 + /* create irq hierarchical domain */ 1343 + bank->fwnode = fwnode; 1343 1344 1344 - bank->domain = irq_domain_create_hierarchy(pctl->domain, 0, 1345 - STM32_GPIO_IRQ_LINE, bank->fwnode, 1346 - &stm32_gpio_domain_ops, bank); 1345 + bank->domain = irq_domain_create_hierarchy(pctl->domain, 0, STM32_GPIO_IRQ_LINE, 1346 + bank->fwnode, &stm32_gpio_domain_ops, 1347 + bank); 1347 1348 1348 - if (!bank->domain) { 1349 - err = -ENODEV; 1350 - goto err_clk; 1349 + if (!bank->domain) { 1350 + err = -ENODEV; 1351 + goto err_clk; 1352 + } 1351 1353 } 1352 1354 1353 1355 err = gpiochip_add_data(&bank->gpio_chip, bank); ··· 1512 1510 pctl->domain = stm32_pctrl_get_irq_domain(pdev); 1513 1511 if (IS_ERR(pctl->domain)) 1514 1512 return PTR_ERR(pctl->domain); 1513 + if (!pctl->domain) 1514 + dev_warn(dev, "pinctrl without interrupt support\n"); 1515 1515 1516 1516 /* hwspinlock is optional */ 1517 1517 hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
+5 -5
drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c
··· 158 158 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 14), 159 159 SUNXI_FUNCTION(0x0, "gpio_in"), 160 160 SUNXI_FUNCTION(0x1, "gpio_out"), 161 - SUNXI_FUNCTION(0x2, "nand"), /* DQ6 */ 161 + SUNXI_FUNCTION(0x2, "nand0"), /* DQ6 */ 162 162 SUNXI_FUNCTION(0x3, "mmc2")), /* D6 */ 163 163 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 15), 164 164 SUNXI_FUNCTION(0x0, "gpio_in"), 165 165 SUNXI_FUNCTION(0x1, "gpio_out"), 166 - SUNXI_FUNCTION(0x2, "nand"), /* DQ7 */ 166 + SUNXI_FUNCTION(0x2, "nand0"), /* DQ7 */ 167 167 SUNXI_FUNCTION(0x3, "mmc2")), /* D7 */ 168 168 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 16), 169 169 SUNXI_FUNCTION(0x0, "gpio_in"), 170 170 SUNXI_FUNCTION(0x1, "gpio_out"), 171 - SUNXI_FUNCTION(0x2, "nand"), /* DQS */ 171 + SUNXI_FUNCTION(0x2, "nand0"), /* DQS */ 172 172 SUNXI_FUNCTION(0x3, "mmc2")), /* RST */ 173 173 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 17), 174 174 SUNXI_FUNCTION(0x0, "gpio_in"), 175 175 SUNXI_FUNCTION(0x1, "gpio_out"), 176 - SUNXI_FUNCTION(0x2, "nand")), /* CE2 */ 176 + SUNXI_FUNCTION(0x2, "nand0")), /* CE2 */ 177 177 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 18), 178 178 SUNXI_FUNCTION(0x0, "gpio_in"), 179 179 SUNXI_FUNCTION(0x1, "gpio_out"), 180 - SUNXI_FUNCTION(0x2, "nand")), /* CE3 */ 180 + SUNXI_FUNCTION(0x2, "nand0")), /* CE3 */ 181 181 /* Hole */ 182 182 SUNXI_PIN(SUNXI_PINCTRL_PIN(D, 2), 183 183 SUNXI_FUNCTION(0x0, "gpio_in"),
+2
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 544 544 struct sunxi_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev); 545 545 int i; 546 546 547 + pin -= pctl->desc->pin_base; 548 + 547 549 for (i = 0; i < num_configs; i++) { 548 550 enum pin_config_param param; 549 551 unsigned long flags;
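Editorial note: the sunxi hunk above subtracts `pctl->desc->pin_base` so a controller-global pin number can index the driver's local config tables. A small sketch of that global-to-local translation (hypothetical descriptor type):

```c
/* A pin controller whose pins start at a non-zero global number,
 * as on the sunxi "R" pin banks. */
struct fake_pinctrl_desc {
	unsigned int pin_base;	/* first global pin number owned here */
	unsigned int npins;
};

/* Translate a global pin number to a local table index, or -1. */
static int pin_to_local(const struct fake_pinctrl_desc *desc,
			unsigned int pin)
{
	if (pin < desc->pin_base || pin >= desc->pin_base + desc->npins)
		return -1;
	return (int)(pin - desc->pin_base);
}
```

Without the subtraction, a non-zero `pin_base` makes every table lookup land past the end of the per-controller arrays.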
+3 -1
drivers/platform/mellanox/nvsw-sn2201.c
··· 890 890 int size) 891 891 { 892 892 struct mlxreg_hotplug_device *dev = devs; 893 + int ret; 893 894 int i; 894 895 895 896 /* Create I2C static devices. */ ··· 902 901 dev->nr, dev->brdinfo->addr); 903 902 904 903 dev->adapter = NULL; 904 + ret = PTR_ERR(dev->client); 905 905 goto fail_create_static_devices; 906 906 } 907 907 } ··· 916 914 dev->client = NULL; 917 915 dev->adapter = NULL; 918 916 } 919 - return IS_ERR(dev->client); 917 + return ret; 920 918 } 921 919 922 920 static void nvsw_sn2201_destroy_static_devices(struct nvsw_sn2201 *nvsw_sn2201,
+2
drivers/platform/x86/Kconfig
··· 945 945 tristate "Panasonic Laptop Extras" 946 946 depends on INPUT && ACPI 947 947 depends on BACKLIGHT_CLASS_DEVICE 948 + depends on ACPI_VIDEO=n || ACPI_VIDEO 949 + depends on SERIO_I8042 || SERIO_I8042 = n 948 950 select INPUT_SPARSEKMAP 949 951 help 950 952 This driver adds support for access to backlight control and hotkeys
+3
drivers/platform/x86/hp-wmi.c
··· 89 89 HPWMI_BACKLIT_KB_BRIGHTNESS = 0x0D, 90 90 HPWMI_PEAKSHIFT_PERIOD = 0x0F, 91 91 HPWMI_BATTERY_CHARGE_PERIOD = 0x10, 92 + HPWMI_SANITIZATION_MODE = 0x17, 92 93 }; 93 94 94 95 /* ··· 853 852 case HPWMI_PEAKSHIFT_PERIOD: 854 853 break; 855 854 case HPWMI_BATTERY_CHARGE_PERIOD: 855 + break; 856 + case HPWMI_SANITIZATION_MODE: 856 857 break; 857 858 default: 858 859 pr_info("Unknown event_id - %d - 0x%x\n", event_id, event_data);
+21 -8
drivers/platform/x86/ideapad-laptop.c
··· 152 152 module_param(no_bt_rfkill, bool, 0444); 153 153 MODULE_PARM_DESC(no_bt_rfkill, "No rfkill for bluetooth."); 154 154 155 + static bool allow_v4_dytc; 156 + module_param(allow_v4_dytc, bool, 0444); 157 + MODULE_PARM_DESC(allow_v4_dytc, "Enable DYTC version 4 platform-profile support."); 158 + 155 159 /* 156 160 * ACPI Helpers 157 161 */ ··· 875 871 static const struct dmi_system_id ideapad_dytc_v4_allow_table[] = { 876 872 { 877 873 /* Ideapad 5 Pro 16ACH6 */ 878 - .ident = "LENOVO 82L5", 879 874 .matches = { 880 875 DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 881 876 DMI_MATCH(DMI_PRODUCT_NAME, "82L5") 877 + } 878 + }, 879 + { 880 + /* Ideapad 5 15ITL05 */ 881 + .matches = { 882 + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), 883 + DMI_MATCH(DMI_PRODUCT_VERSION, "IdeaPad 5 15ITL05") 882 884 } 883 885 }, 884 886 {} ··· 911 901 912 902 dytc_version = (output >> DYTC_QUERY_REV_BIT) & 0xF; 913 903 914 - if (dytc_version < 5) { 915 - if (dytc_version < 4 || !dmi_check_system(ideapad_dytc_v4_allow_table)) { 916 - dev_info(&priv->platform_device->dev, 917 - "DYTC_VERSION is less than 4 or is not allowed: %d\n", 918 - dytc_version); 919 - return -ENODEV; 920 - } 904 + if (dytc_version < 4) { 905 + dev_info(&priv->platform_device->dev, "DYTC_VERSION < 4 is not supported\n"); 906 + return -ENODEV; 907 + } 908 + 909 + if (dytc_version < 5 && 910 + !(allow_v4_dytc || dmi_check_system(ideapad_dytc_v4_allow_table))) { 911 + dev_info(&priv->platform_device->dev, 912 + "DYTC_VERSION 4 support may not work. Pass ideapad_laptop.allow_v4_dytc=Y on the kernel commandline to enable\n"); 913 + return -ENODEV; 921 914 } 922 915 923 916 priv->dytc = kzalloc(sizeof(*priv->dytc), GFP_KERNEL);
+1
drivers/platform/x86/intel/pmc/core.c
··· 1911 1911 X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, &icl_reg_map), 1912 1912 X86_MATCH_INTEL_FAM6_MODEL(ROCKETLAKE, &tgl_reg_map), 1913 1913 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, &tgl_reg_map), 1914 + X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_N, &tgl_reg_map), 1914 1915 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &adl_reg_map), 1915 1916 X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE_P, &tgl_reg_map), 1916 1917 {}
+67 -17
drivers/platform/x86/panasonic-laptop.c
··· 119 119 * - v0.1 start from toshiba_acpi driver written by John Belmonte 120 120 */ 121 121 122 - #include <linux/kernel.h> 123 - #include <linux/module.h> 124 - #include <linux/init.h> 125 - #include <linux/types.h> 122 + #include <linux/acpi.h> 126 123 #include <linux/backlight.h> 127 124 #include <linux/ctype.h> 128 - #include <linux/seq_file.h> 129 - #include <linux/uaccess.h> 130 - #include <linux/slab.h> 131 - #include <linux/acpi.h> 125 + #include <linux/i8042.h> 126 + #include <linux/init.h> 132 127 #include <linux/input.h> 133 128 #include <linux/input/sparse-keymap.h> 129 + #include <linux/kernel.h> 130 + #include <linux/module.h> 134 131 #include <linux/platform_device.h> 135 - 132 + #include <linux/seq_file.h> 133 + #include <linux/serio.h> 134 + #include <linux/slab.h> 135 + #include <linux/types.h> 136 + #include <linux/uaccess.h> 137 + #include <acpi/video.h> 136 138 137 139 MODULE_AUTHOR("Hiroshi Miura <miura@da-cha.org>"); 138 140 MODULE_AUTHOR("David Bronaugh <dbronaugh@linuxboxen.org>"); ··· 242 240 struct backlight_device *backlight; 243 241 struct platform_device *platform; 244 242 }; 243 + 244 + /* 245 + * On some Panasonic models the volume up / down / mute keys send duplicate 246 + * keypress events over the PS/2 kbd interface, filter these out. 
247 + */ 248 + static bool panasonic_i8042_filter(unsigned char data, unsigned char str, 249 + struct serio *port) 250 + { 251 + static bool extended; 252 + 253 + if (str & I8042_STR_AUXDATA) 254 + return false; 255 + 256 + if (data == 0xe0) { 257 + extended = true; 258 + return true; 259 + } else if (extended) { 260 + extended = false; 261 + 262 + switch (data & 0x7f) { 263 + case 0x20: /* e0 20 / e0 a0, Volume Mute press / release */ 264 + case 0x2e: /* e0 2e / e0 ae, Volume Down press / release */ 265 + case 0x30: /* e0 30 / e0 b0, Volume Up press / release */ 266 + return true; 267 + default: 268 + /* 269 + * Report the previously filtered e0 before continuing 270 + * with the next non-filtered byte. 271 + */ 272 + serio_interrupt(port, 0xe0, 0); 273 + return false; 274 + } 275 + } 276 + 277 + return false; 278 + } 245 279 246 280 /* method access functions */ 247 281 static int acpi_pcc_write_sset(struct pcc_acpi *pcc, int func, int val) ··· 800 762 struct input_dev *hotk_input_dev = pcc->input_dev; 801 763 int rc; 802 764 unsigned long long result; 765 + unsigned int key; 766 + unsigned int updown; 803 767 804 768 rc = acpi_evaluate_integer(pcc->handle, METHOD_HKEY_QUERY, 805 769 NULL, &result); ··· 810 770 return; 811 771 } 812 772 773 + key = result & 0xf; 774 + updown = result & 0x80; /* 0x80 == key down; 0x00 = key up */ 775 + 813 776 /* hack: some firmware sends no key down for sleep / hibernate */ 814 - if ((result & 0xf) == 0x7 || (result & 0xf) == 0xa) { 815 - if (result & 0x80) 777 + if (key == 7 || key == 10) { 778 + if (updown) 816 779 sleep_keydown_seen = 1; 817 780 if (!sleep_keydown_seen) 818 781 sparse_keymap_report_event(hotk_input_dev, 819 - result & 0xf, 0x80, false); 782 + key, 0x80, false); 820 783 } 821 784 822 - if ((result & 0xf) == 0x7 || (result & 0xf) == 0x9 || (result & 0xf) == 0xa) { 823 - if (!sparse_keymap_report_event(hotk_input_dev, 824 - result & 0xf, result & 0x80, false)) 825 - pr_err("Unknown hotkey event: 0x%04llx\n", 
result); 826 - } 785 + /* 786 + * Don't report brightness key-presses if they are also reported 787 + * by the ACPI video bus. 788 + */ 789 + if ((key == 1 || key == 2) && acpi_video_handles_brightness_key_presses()) 790 + return; 791 + 792 + if (!sparse_keymap_report_event(hotk_input_dev, key, updown, false)) 793 + pr_err("Unknown hotkey event: 0x%04llx\n", result); 827 794 } 828 795 829 796 static void acpi_pcc_hotkey_notify(struct acpi_device *device, u32 event) ··· 1044 997 pcc->platform = NULL; 1045 998 } 1046 999 1000 + i8042_install_filter(panasonic_i8042_filter); 1047 1001 return 0; 1048 1002 1049 1003 out_platform: ··· 1067 1019 1068 1020 if (!device || !pcc) 1069 1021 return -EINVAL; 1022 + 1023 + i8042_remove_filter(panasonic_i8042_filter); 1070 1024 1071 1025 if (pcc->platform) { 1072 1026 device_remove_file(&pcc->platform->dev, &dev_attr_cdpower);
+24 -27
drivers/platform/x86/thinkpad_acpi.c
··· 4529 4529 iounmap(addr); 4530 4530 cleanup_resource: 4531 4531 release_resource(res); 4532 + kfree(res); 4532 4533 } 4533 4534 4534 4535 static struct acpi_s2idle_dev_ops thinkpad_acpi_s2idle_dev_ops = { ··· 10300 10299 #define DYTC_DISABLE_CQL DYTC_SET_COMMAND(DYTC_FUNCTION_CQL, DYTC_MODE_MMC_BALANCE, 0) 10301 10300 #define DYTC_ENABLE_CQL DYTC_SET_COMMAND(DYTC_FUNCTION_CQL, DYTC_MODE_MMC_BALANCE, 1) 10302 10301 10303 - enum dytc_profile_funcmode { 10304 - DYTC_FUNCMODE_NONE = 0, 10305 - DYTC_FUNCMODE_MMC, 10306 - DYTC_FUNCMODE_PSC, 10307 - }; 10308 - 10309 - static enum dytc_profile_funcmode dytc_profile_available; 10310 10302 static enum platform_profile_option dytc_current_profile; 10311 10303 static atomic_t dytc_ignore_event = ATOMIC_INIT(0); 10312 10304 static DEFINE_MUTEX(dytc_mutex); 10305 + static int dytc_capabilities; 10313 10306 static bool dytc_mmc_get_available; 10314 10307 10315 10308 static int convert_dytc_to_profile(int dytcmode, enum platform_profile_option *profile) 10316 10309 { 10317 - if (dytc_profile_available == DYTC_FUNCMODE_MMC) { 10310 + if (dytc_capabilities & BIT(DYTC_FC_MMC)) { 10318 10311 switch (dytcmode) { 10319 10312 case DYTC_MODE_MMC_LOWPOWER: 10320 10313 *profile = PLATFORM_PROFILE_LOW_POWER; ··· 10325 10330 } 10326 10331 return 0; 10327 10332 } 10328 - if (dytc_profile_available == DYTC_FUNCMODE_PSC) { 10333 + if (dytc_capabilities & BIT(DYTC_FC_PSC)) { 10329 10334 switch (dytcmode) { 10330 10335 case DYTC_MODE_PSC_LOWPOWER: 10331 10336 *profile = PLATFORM_PROFILE_LOW_POWER; ··· 10347 10352 { 10348 10353 switch (profile) { 10349 10354 case PLATFORM_PROFILE_LOW_POWER: 10350 - if (dytc_profile_available == DYTC_FUNCMODE_MMC) 10355 + if (dytc_capabilities & BIT(DYTC_FC_MMC)) 10351 10356 *perfmode = DYTC_MODE_MMC_LOWPOWER; 10352 - else if (dytc_profile_available == DYTC_FUNCMODE_PSC) 10357 + else if (dytc_capabilities & BIT(DYTC_FC_PSC)) 10353 10358 *perfmode = DYTC_MODE_PSC_LOWPOWER; 10354 10359 break; 10355 10360 case 
PLATFORM_PROFILE_BALANCED: 10356 - if (dytc_profile_available == DYTC_FUNCMODE_MMC) 10361 + if (dytc_capabilities & BIT(DYTC_FC_MMC)) 10357 10362 *perfmode = DYTC_MODE_MMC_BALANCE; 10358 - else if (dytc_profile_available == DYTC_FUNCMODE_PSC) 10363 + else if (dytc_capabilities & BIT(DYTC_FC_PSC)) 10359 10364 *perfmode = DYTC_MODE_PSC_BALANCE; 10360 10365 break; 10361 10366 case PLATFORM_PROFILE_PERFORMANCE: 10362 - if (dytc_profile_available == DYTC_FUNCMODE_MMC) 10367 + if (dytc_capabilities & BIT(DYTC_FC_MMC)) 10363 10368 *perfmode = DYTC_MODE_MMC_PERFORM; 10364 - else if (dytc_profile_available == DYTC_FUNCMODE_PSC) 10369 + else if (dytc_capabilities & BIT(DYTC_FC_PSC)) 10365 10370 *perfmode = DYTC_MODE_PSC_PERFORM; 10366 10371 break; 10367 10372 default: /* Unknown profile */ ··· 10440 10445 if (err) 10441 10446 goto unlock; 10442 10447 10443 - if (dytc_profile_available == DYTC_FUNCMODE_MMC) { 10448 + if (dytc_capabilities & BIT(DYTC_FC_MMC)) { 10444 10449 if (profile == PLATFORM_PROFILE_BALANCED) { 10445 10450 /* 10446 10451 * To get back to balanced mode we need to issue a reset command. 
··· 10459 10464 goto unlock; 10460 10465 } 10461 10466 } 10462 - if (dytc_profile_available == DYTC_FUNCMODE_PSC) { 10467 + if (dytc_capabilities & BIT(DYTC_FC_PSC)) { 10463 10468 err = dytc_command(DYTC_SET_COMMAND(DYTC_FUNCTION_PSC, perfmode, 1), &output); 10464 10469 if (err) 10465 10470 goto unlock; ··· 10478 10483 int perfmode; 10479 10484 10480 10485 mutex_lock(&dytc_mutex); 10481 - if (dytc_profile_available == DYTC_FUNCMODE_MMC) { 10486 + if (dytc_capabilities & BIT(DYTC_FC_MMC)) { 10482 10487 if (dytc_mmc_get_available) 10483 10488 err = dytc_command(DYTC_CMD_MMC_GET, &output); 10484 10489 else 10485 10490 err = dytc_cql_command(DYTC_CMD_GET, &output); 10486 - } else if (dytc_profile_available == DYTC_FUNCMODE_PSC) 10491 + } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) 10487 10492 err = dytc_command(DYTC_CMD_GET, &output); 10488 10493 10489 10494 mutex_unlock(&dytc_mutex); ··· 10512 10517 set_bit(PLATFORM_PROFILE_BALANCED, dytc_profile.choices); 10513 10518 set_bit(PLATFORM_PROFILE_PERFORMANCE, dytc_profile.choices); 10514 10519 10515 - dytc_profile_available = DYTC_FUNCMODE_NONE; 10516 10520 err = dytc_command(DYTC_CMD_QUERY, &output); 10517 10521 if (err) 10518 10522 return err; ··· 10524 10530 return -ENODEV; 10525 10531 10526 10532 /* Check what capabilities are supported */ 10527 - err = dytc_command(DYTC_CMD_FUNC_CAP, &output); 10533 + err = dytc_command(DYTC_CMD_FUNC_CAP, &dytc_capabilities); 10528 10534 if (err) 10529 10535 return err; 10530 10536 10531 - if (output & BIT(DYTC_FC_MMC)) { /* MMC MODE */ 10532 - dytc_profile_available = DYTC_FUNCMODE_MMC; 10533 - 10537 + if (dytc_capabilities & BIT(DYTC_FC_MMC)) { /* MMC MODE */ 10538 + pr_debug("MMC is supported\n"); 10534 10539 /* 10535 10540 * Check if MMC_GET functionality available 10536 10541 * Version > 6 and return success from MMC_GET command ··· 10540 10547 if (!err && ((output & DYTC_ERR_MASK) == DYTC_ERR_SUCCESS)) 10541 10548 dytc_mmc_get_available = true; 10542 10549 } 10543 - } else 
if (output & BIT(DYTC_FC_PSC)) { /* PSC MODE */ 10544 - dytc_profile_available = DYTC_FUNCMODE_PSC; 10550 + } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { /* PSC MODE */ 10551 + /* Support for this only works on AMD platforms */ 10552 + if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) { 10553 + dbg_printk(TPACPI_DBG_INIT, "PSC not supported on Intel platforms\n"); 10554 + return -ENODEV; 10555 + } 10556 + pr_debug("PSC is supported\n"); 10545 10557 } else { 10546 10558 dbg_printk(TPACPI_DBG_INIT, "No DYTC support available\n"); 10547 10559 return -ENODEV; ··· 10572 10574 10573 10575 static void dytc_profile_exit(void) 10574 10576 { 10575 - dytc_profile_available = DYTC_FUNCMODE_NONE; 10576 10577 platform_profile_remove(); 10577 10578
+1 -1
drivers/s390/char/sclp.c
··· 60 60 /* List of queued requests. */ 61 61 static LIST_HEAD(sclp_req_queue); 62 62 63 - /* Data for read and and init requests. */ 63 + /* Data for read and init requests. */ 64 64 static struct sclp_req sclp_read_req; 65 65 static struct sclp_req sclp_init_req; 66 66 static void *sclp_read_sccb;
+8 -1
drivers/s390/virtio/virtio_ccw.c
··· 1136 1136 vcdev->err = -EIO; 1137 1137 } 1138 1138 virtio_ccw_check_activity(vcdev, activity); 1139 - /* Interrupts are disabled here */ 1139 + #ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION 1140 + /* 1141 + * Paired with virtio_ccw_synchronize_cbs() and interrupts are 1142 + * disabled here. 1143 + */ 1140 1144 read_lock(&vcdev->irq_lock); 1145 + #endif 1141 1146 for_each_set_bit(i, indicators(vcdev), 1142 1147 sizeof(*indicators(vcdev)) * BITS_PER_BYTE) { 1143 1148 /* The bit clear must happen before the vring kick. */ ··· 1151 1146 vq = virtio_ccw_vq_by_ind(vcdev, i); 1152 1147 vring_interrupt(0, vq); 1153 1148 } 1149 + #ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION 1154 1150 read_unlock(&vcdev->irq_lock); 1151 + #endif 1155 1152 if (test_bit(0, indicators2(vcdev))) { 1156 1153 virtio_config_changed(&vcdev->vdev); 1157 1154 clear_bit(0, indicators2(vcdev));
+7
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 2782 2782 struct hisi_hba *hisi_hba = shost_priv(shost); 2783 2783 struct device *dev = hisi_hba->dev; 2784 2784 int ret = sas_slave_configure(sdev); 2785 + unsigned int max_sectors; 2785 2786 2786 2787 if (ret) 2787 2788 return ret; ··· 2799 2798 pm_runtime_disable(dev); 2800 2799 } 2801 2800 } 2801 + 2802 + /* Set according to IOMMU IOVA caching limit */ 2803 + max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue), 2804 + (PAGE_SIZE * 32) >> SECTOR_SHIFT); 2805 + 2806 + blk_queue_max_hw_sectors(sdev->request_queue, max_sectors); 2802 2807 2803 2808 return 0; 2804 2809 }
+6 -6
drivers/soc/atmel/soc.c
··· 91 91 AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 92 92 AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 93 93 "sam9x60", "sam9x60"), 94 - AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D5M_EXID_MATCH, 95 - AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 94 + AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 95 + AT91_CIDR_VERSION_MASK, SAM9X60_D5M_EXID_MATCH, 96 96 "sam9x60 64MiB DDR2 SiP", "sam9x60"), 97 - AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D1G_EXID_MATCH, 98 - AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 97 + AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 98 + AT91_CIDR_VERSION_MASK, SAM9X60_D1G_EXID_MATCH, 99 99 "sam9x60 128MiB DDR2 SiP", "sam9x60"), 100 - AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D6K_EXID_MATCH, 101 - AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 100 + AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 101 + AT91_CIDR_VERSION_MASK, SAM9X60_D6K_EXID_MATCH, 102 102 "sam9x60 8MiB SDRAM SiP", "sam9x60"), 103 103 #endif 104 104 #ifdef CONFIG_SOC_SAMA5
+1 -1
drivers/soc/ixp4xx/ixp4xx-npe.c
··· 758 758 static struct platform_driver ixp4xx_npe_driver = { 759 759 .driver = { 760 760 .name = "ixp4xx-npe", 761 - .of_match_table = of_match_ptr(ixp4xx_npe_of_match), 761 + .of_match_table = ixp4xx_npe_of_match, 762 762 }, 763 763 .probe = ixp4xx_npe_probe, 764 764 .remove = ixp4xx_npe_remove,
+3 -3
drivers/soc/qcom/smem.c
··· 926 926 struct smem_partition_header *header; 927 927 struct smem_ptable_entry *entry; 928 928 struct smem_ptable *ptable; 929 - unsigned int remote_host; 929 + u16 remote_host; 930 930 u16 host0, host1; 931 931 int i; 932 932 ··· 951 951 continue; 952 952 953 953 if (remote_host >= SMEM_HOST_COUNT) { 954 - dev_err(smem->dev, "bad host %hu\n", remote_host); 954 + dev_err(smem->dev, "bad host %u\n", remote_host); 955 955 return -EINVAL; 956 956 } 957 957 958 958 if (smem->partitions[remote_host].virt_base) { 959 - dev_err(smem->dev, "duplicate host %hu\n", remote_host); 959 + dev_err(smem->dev, "duplicate host %u\n", remote_host); 960 960 return -EINVAL; 961 961 } 962 962
+1 -1
drivers/staging/wlan-ng/hfa384x_usb.c
··· 2632 2632 */ 2633 2633 static void hfa384x_usbctlx_completion_task(struct work_struct *work) 2634 2634 { 2635 - struct hfa384x *hw = container_of(work, struct hfa384x, reaper_bh); 2635 + struct hfa384x *hw = container_of(work, struct hfa384x, completion_bh); 2636 2636 struct hfa384x_usbctlx *ctlx, *temp; 2637 2637 unsigned long flags; 2638 2638
+1
drivers/thermal/intel/intel_tcc_cooling.c
··· 81 81 X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, NULL), 82 82 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, NULL), 83 83 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, NULL), 84 + X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, NULL), 84 85 {} 85 86 }; 86 87
+22 -11
drivers/vdpa/mlx5/net/mlx5_vnet.c
··· 1962 1962 struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev); 1963 1963 1964 1964 ndev->event_cbs[idx] = *cb; 1965 + if (is_ctrl_vq_idx(mvdev, idx)) 1966 + mvdev->cvq.event_cb = *cb; 1965 1967 } 1966 1968 1967 1969 static void mlx5_cvq_notify(struct vringh *vring) ··· 2176 2174 static int setup_virtqueues(struct mlx5_vdpa_dev *mvdev) 2177 2175 { 2178 2176 struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev); 2179 - struct mlx5_control_vq *cvq = &mvdev->cvq; 2180 2177 int err; 2181 2178 int i; 2182 2179 2183 2180 for (i = 0; i < mvdev->max_vqs; i++) { 2184 2181 err = setup_vq(ndev, &ndev->vqs[i]); 2185 - if (err) 2186 - goto err_vq; 2187 - } 2188 - 2189 - if (mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)) { 2190 - err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features, 2191 - MLX5_CVQ_MAX_ENT, false, 2192 - (struct vring_desc *)(uintptr_t)cvq->desc_addr, 2193 - (struct vring_avail *)(uintptr_t)cvq->driver_addr, 2194 - (struct vring_used *)(uintptr_t)cvq->device_addr); 2195 2182 if (err) 2196 2183 goto err_vq; 2197 2184 } ··· 2457 2466 ndev->mvdev.cvq.ready = false; 2458 2467 } 2459 2468 2469 + static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev) 2470 + { 2471 + struct mlx5_control_vq *cvq = &mvdev->cvq; 2472 + int err = 0; 2473 + 2474 + if (mvdev->actual_features & BIT_ULL(VIRTIO_NET_F_CTRL_VQ)) 2475 + err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features, 2476 + MLX5_CVQ_MAX_ENT, false, 2477 + (struct vring_desc *)(uintptr_t)cvq->desc_addr, 2478 + (struct vring_avail *)(uintptr_t)cvq->driver_addr, 2479 + (struct vring_used *)(uintptr_t)cvq->device_addr); 2480 + 2481 + return err; 2482 + } 2483 + 2460 2484 static void mlx5_vdpa_set_status(struct vdpa_device *vdev, u8 status) 2461 2485 { 2462 2486 struct mlx5_vdpa_dev *mvdev = to_mvdev(vdev); ··· 2484 2478 2485 2479 if ((status ^ ndev->mvdev.status) & VIRTIO_CONFIG_S_DRIVER_OK) { 2486 2480 if (status & VIRTIO_CONFIG_S_DRIVER_OK) { 2481 + err = setup_cvq_vring(mvdev); 2482 + if (err) { 
2483 + mlx5_vdpa_warn(mvdev, "failed to setup control VQ vring\n"); 2484 + goto err_setup; 2485 + } 2487 2486 err = setup_driver(mvdev); 2488 2487 if (err) { 2489 2488 mlx5_vdpa_warn(mvdev, "failed to setup driver\n");
+37 -23
drivers/vdpa/vdpa_user/vduse_dev.c
··· 1476 1476 return kasprintf(GFP_KERNEL, "vduse/%s", dev_name(dev)); 1477 1477 } 1478 1478 1479 - static void vduse_mgmtdev_release(struct device *dev) 1480 - { 1481 - } 1482 - 1483 - static struct device vduse_mgmtdev = { 1484 - .init_name = "vduse", 1485 - .release = vduse_mgmtdev_release, 1479 + struct vduse_mgmt_dev { 1480 + struct vdpa_mgmt_dev mgmt_dev; 1481 + struct device dev; 1486 1482 }; 1487 1483 1488 - static struct vdpa_mgmt_dev mgmt_dev; 1484 + static struct vduse_mgmt_dev *vduse_mgmt; 1489 1485 1490 1486 static int vduse_dev_init_vdpa(struct vduse_dev *dev, const char *name) 1491 1487 { ··· 1506 1510 } 1507 1511 set_dma_ops(&vdev->vdpa.dev, &vduse_dev_dma_ops); 1508 1512 vdev->vdpa.dma_dev = &vdev->vdpa.dev; 1509 - vdev->vdpa.mdev = &mgmt_dev; 1513 + vdev->vdpa.mdev = &vduse_mgmt->mgmt_dev; 1510 1514 1511 1515 return 0; 1512 1516 } ··· 1552 1556 { 0 }, 1553 1557 }; 1554 1558 1555 - static struct vdpa_mgmt_dev mgmt_dev = { 1556 - .device = &vduse_mgmtdev, 1557 - .id_table = id_table, 1558 - .ops = &vdpa_dev_mgmtdev_ops, 1559 - }; 1559 + static void vduse_mgmtdev_release(struct device *dev) 1560 + { 1561 + struct vduse_mgmt_dev *mgmt_dev; 1562 + 1563 + mgmt_dev = container_of(dev, struct vduse_mgmt_dev, dev); 1564 + kfree(mgmt_dev); 1565 + } 1560 1566 1561 1567 static int vduse_mgmtdev_init(void) 1562 1568 { 1563 1569 int ret; 1564 1570 1565 - ret = device_register(&vduse_mgmtdev); 1566 - if (ret) 1571 + vduse_mgmt = kzalloc(sizeof(*vduse_mgmt), GFP_KERNEL); 1572 + if (!vduse_mgmt) 1573 + return -ENOMEM; 1574 + 1575 + ret = dev_set_name(&vduse_mgmt->dev, "vduse"); 1576 + if (ret) { 1577 + kfree(vduse_mgmt); 1567 1578 return ret; 1579 + } 1568 1580 1569 - ret = vdpa_mgmtdev_register(&mgmt_dev); 1581 + vduse_mgmt->dev.release = vduse_mgmtdev_release; 1582 + 1583 + ret = device_register(&vduse_mgmt->dev); 1570 1584 if (ret) 1571 - goto err; 1585 + goto dev_reg_err; 1572 1586 1573 - return 0; 1574 - err: 1575 - device_unregister(&vduse_mgmtdev); 1587 + 
vduse_mgmt->mgmt_dev.id_table = id_table; 1588 + vduse_mgmt->mgmt_dev.ops = &vdpa_dev_mgmtdev_ops; 1589 + vduse_mgmt->mgmt_dev.device = &vduse_mgmt->dev; 1590 + ret = vdpa_mgmtdev_register(&vduse_mgmt->mgmt_dev); 1591 + if (ret) 1592 + device_unregister(&vduse_mgmt->dev); 1593 + 1594 + return ret; 1595 + 1596 + dev_reg_err: 1597 + put_device(&vduse_mgmt->dev); 1576 1598 return ret; 1577 1599 } 1578 1600 1579 1601 static void vduse_mgmtdev_exit(void) 1580 1602 { 1581 - vdpa_mgmtdev_unregister(&mgmt_dev); 1582 - device_unregister(&vduse_mgmtdev); 1603 + vdpa_mgmtdev_unregister(&vduse_mgmt->mgmt_dev); 1604 + device_unregister(&vduse_mgmt->dev); 1583 1605 } 1584 1606 1585 1607 static int vduse_init(void)
+1 -1
drivers/vhost/vdpa.c
··· 1209 1209 vhost_dev_stop(&v->vdev); 1210 1210 vhost_vdpa_free_domain(v); 1211 1211 vhost_vdpa_config_put(v); 1212 - vhost_dev_cleanup(&v->vdev); 1212 + vhost_vdpa_cleanup(v); 1213 1213 mutex_unlock(&d->mutex); 1214 1214 1215 1215 atomic_dec(&v->opened);
+33
drivers/video/fbdev/core/fbcon.c
··· 2469 2469 if (charcount != 256 && charcount != 512) 2470 2470 return -EINVAL; 2471 2471 2472 + /* font bigger than screen resolution ? */ 2473 + if (w > FBCON_SWAP(info->var.rotate, info->var.xres, info->var.yres) || 2474 + h > FBCON_SWAP(info->var.rotate, info->var.yres, info->var.xres)) 2475 + return -EINVAL; 2476 + 2472 2477 /* Make sure drawing engine can handle the font */ 2473 2478 if (!(info->pixmap.blit_x & (1 << (font->width - 1))) || 2474 2479 !(info->pixmap.blit_y & (1 << (font->height - 1)))) ··· 2735 2730 fbcon_modechanged(info); 2736 2731 } 2737 2732 EXPORT_SYMBOL(fbcon_update_vcs); 2733 + 2734 + /* let fbcon check if it supports a new screen resolution */ 2735 + int fbcon_modechange_possible(struct fb_info *info, struct fb_var_screeninfo *var) 2736 + { 2737 + struct fbcon_ops *ops = info->fbcon_par; 2738 + struct vc_data *vc; 2739 + unsigned int i; 2740 + 2741 + WARN_CONSOLE_UNLOCKED(); 2742 + 2743 + if (!ops) 2744 + return 0; 2745 + 2746 + /* prevent setting a screen size which is smaller than font size */ 2747 + for (i = first_fb_vc; i <= last_fb_vc; i++) { 2748 + vc = vc_cons[i].d; 2749 + if (!vc || vc->vc_mode != KD_TEXT || 2750 + fbcon_info_from_console(i) != info) 2751 + continue; 2752 + 2753 + if (vc->vc_font.width > FBCON_SWAP(var->rotate, var->xres, var->yres) || 2754 + vc->vc_font.height > FBCON_SWAP(var->rotate, var->yres, var->xres)) 2755 + return -EINVAL; 2756 + } 2757 + 2758 + return 0; 2759 + } 2760 + EXPORT_SYMBOL_GPL(fbcon_modechange_possible); 2738 2761 2739 2762 int fbcon_mode_deleted(struct fb_info *info, 2740 2763 struct fb_videomode *mode)
+26 -2
drivers/video/fbdev/core/fbmem.c
··· 19 19 #include <linux/kernel.h> 20 20 #include <linux/major.h> 21 21 #include <linux/slab.h> 22 + #include <linux/sysfb.h> 22 23 #include <linux/mm.h> 23 24 #include <linux/mman.h> 24 25 #include <linux/vt.h> ··· 511 510 512 511 while (n && (n * (logo->width + 8) - 8 > xres)) 513 512 --n; 514 - image.dx = (xres - n * (logo->width + 8) - 8) / 2; 513 + image.dx = (xres - (n * (logo->width + 8) - 8)) / 2; 515 514 image.dy = y ?: (yres - logo->height) / 2; 516 515 } else { 517 516 image.dx = 0; ··· 1017 1016 if (ret) 1018 1017 return ret; 1019 1018 1019 + /* verify that virtual resolution >= physical resolution */ 1020 + if (var->xres_virtual < var->xres || 1021 + var->yres_virtual < var->yres) { 1022 + pr_warn("WARNING: fbcon: Driver '%s' missed to adjust virtual screen size (%ux%u vs. %ux%u)\n", 1023 + info->fix.id, 1024 + var->xres_virtual, var->yres_virtual, 1025 + var->xres, var->yres); 1026 + return -EINVAL; 1027 + } 1028 + 1020 1029 if ((var->activate & FB_ACTIVATE_MASK) != FB_ACTIVATE_NOW) 1021 1030 return 0; 1022 1031 ··· 1117 1106 return -EFAULT; 1118 1107 console_lock(); 1119 1108 lock_fb_info(info); 1120 - ret = fb_set_var(info, &var); 1109 + ret = fbcon_modechange_possible(info, &var); 1110 + if (!ret) 1111 + ret = fb_set_var(info, &var); 1121 1112 if (!ret) 1122 1113 fbcon_update_vcs(info, var.activate & FB_ACTIVATE_ALL); 1123 1114 unlock_fb_info(info); ··· 1764 1751 a->ranges[0].size = ~0; 1765 1752 do_free = true; 1766 1753 } 1754 + 1755 + /* 1756 + * If a driver asked to unregister a platform device registered by 1757 + * sysfb, then can be assumed that this is a driver for a display 1758 + * that is set up by the system firmware and has a generic driver. 1759 + * 1760 + * Drivers for devices that don't have a generic driver will never 1761 + * ask for this, so let's assume that a real driver for the display 1762 + * was already probed and prevent sysfb to register devices later. 
1763 + */ 1764 + sysfb_disable(); 1767 1765 1768 1766 mutex_lock(&registration_lock); 1769 1767 do_remove_conflicting_framebuffers(a, name, primary);
+13
drivers/virtio/Kconfig
··· 29 29 30 30 if VIRTIO_MENU 31 31 32 + config VIRTIO_HARDEN_NOTIFICATION 33 + bool "Harden virtio notification" 34 + help 35 + Enable this to harden the device notifications and suppress 36 + those that happen at a time where notifications are illegal. 37 + 38 + Experimental: Note that several drivers still have bugs that 39 + may cause crashes or hangs when correct handling of 40 + notifications is enforced; depending on the subset of 41 + drivers and devices you use, this may or may not work. 42 + 43 + If unsure, say N. 44 + 32 45 config VIRTIO_PCI 33 46 tristate "PCI driver for virtio devices" 34 47 depends on PCI
+2
drivers/virtio/virtio.c
··· 219 219 * */ 220 220 void virtio_reset_device(struct virtio_device *dev) 221 221 { 222 + #ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION 222 223 /* 223 224 * The below virtio_synchronize_cbs() guarantees that any 224 225 * interrupt for this line arriving after ··· 228 227 */ 229 228 virtio_break_device(dev); 230 229 virtio_synchronize_cbs(dev); 230 + #endif 231 231 232 232 dev->config->reset(dev); 233 233 }
+26
drivers/virtio/virtio_mmio.c
··· 62 62 #include <linux/list.h> 63 63 #include <linux/module.h> 64 64 #include <linux/platform_device.h> 65 + #include <linux/pm.h> 65 66 #include <linux/slab.h> 66 67 #include <linux/spinlock.h> 67 68 #include <linux/virtio.h> ··· 557 556 .synchronize_cbs = vm_synchronize_cbs, 558 557 }; 559 558 559 + #ifdef CONFIG_PM_SLEEP 560 + static int virtio_mmio_freeze(struct device *dev) 561 + { 562 + struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev); 563 + 564 + return virtio_device_freeze(&vm_dev->vdev); 565 + } 566 + 567 + static int virtio_mmio_restore(struct device *dev) 568 + { 569 + struct virtio_mmio_device *vm_dev = dev_get_drvdata(dev); 570 + 571 + if (vm_dev->version == 1) 572 + writel(PAGE_SIZE, vm_dev->base + VIRTIO_MMIO_GUEST_PAGE_SIZE); 573 + 574 + return virtio_device_restore(&vm_dev->vdev); 575 + } 576 + 577 + static const struct dev_pm_ops virtio_mmio_pm_ops = { 578 + SET_SYSTEM_SLEEP_PM_OPS(virtio_mmio_freeze, virtio_mmio_restore) 579 + }; 580 + #endif 560 581 561 582 static void virtio_mmio_release_dev(struct device *_d) 562 583 { ··· 822 799 .name = "virtio-mmio", 823 800 .of_match_table = virtio_mmio_match, 824 801 .acpi_match_table = ACPI_PTR(virtio_mmio_acpi_match), 802 + #ifdef CONFIG_PM_SLEEP 803 + .pm = &virtio_mmio_pm_ops, 804 + #endif 825 805 }, 826 806 }; 827 807
-2
drivers/virtio/virtio_pci_modern_dev.c
··· 220 220 221 221 check_offsets(); 222 222 223 - mdev->pci_dev = pci_dev; 224 - 225 223 /* We only own devices >= 0x1000 and <= 0x107f: leave the rest. */ 226 224 if (pci_dev->device < 0x1000 || pci_dev->device > 0x107f) 227 225 return -ENODEV;
+60 -29
drivers/virtio/virtio_ring.c
··· 111 111 /* Number we've added since last sync. */ 112 112 unsigned int num_added; 113 113 114 - /* Last used index we've seen. */ 114 + /* Last used index we've seen. 115 + * for split ring, it just contains last used index 116 + * for packed ring: 117 + * bits up to VRING_PACKED_EVENT_F_WRAP_CTR include the last used index. 118 + * bits from VRING_PACKED_EVENT_F_WRAP_CTR include the used wrap counter. 119 + */ 115 120 u16 last_used_idx; 116 121 117 122 /* Hint for event idx: already triggered no need to disable. */ ··· 158 153 159 154 /* Driver ring wrap counter. */ 160 155 bool avail_wrap_counter; 161 - 162 - /* Device ring wrap counter. */ 163 - bool used_wrap_counter; 164 156 165 157 /* Avail used flags. */ 166 158 u16 avail_used_flags; ··· 935 933 for (; num && vring_size(num, vring_align) > PAGE_SIZE; num /= 2) { 936 934 queue = vring_alloc_queue(vdev, vring_size(num, vring_align), 937 935 &dma_addr, 938 - GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO); 936 + GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO); 939 937 if (queue) 940 938 break; 941 939 if (!may_reduce_num) ··· 975 973 /* 976 974 * Packed ring specific functions - *_packed(). 
977 975 */ 976 + static inline bool packed_used_wrap_counter(u16 last_used_idx) 977 + { 978 + return !!(last_used_idx & (1 << VRING_PACKED_EVENT_F_WRAP_CTR)); 979 + } 980 + 981 + static inline u16 packed_last_used(u16 last_used_idx) 982 + { 983 + return last_used_idx & ~(-(1 << VRING_PACKED_EVENT_F_WRAP_CTR)); 984 + } 978 985 979 986 static void vring_unmap_extra_packed(const struct vring_virtqueue *vq, 980 987 struct vring_desc_extra *extra) ··· 1417 1406 1418 1407 static inline bool more_used_packed(const struct vring_virtqueue *vq) 1419 1408 { 1420 - return is_used_desc_packed(vq, vq->last_used_idx, 1421 - vq->packed.used_wrap_counter); 1409 + u16 last_used; 1410 + u16 last_used_idx; 1411 + bool used_wrap_counter; 1412 + 1413 + last_used_idx = READ_ONCE(vq->last_used_idx); 1414 + last_used = packed_last_used(last_used_idx); 1415 + used_wrap_counter = packed_used_wrap_counter(last_used_idx); 1416 + return is_used_desc_packed(vq, last_used, used_wrap_counter); 1422 1417 } 1423 1418 1424 1419 static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq, ··· 1432 1415 void **ctx) 1433 1416 { 1434 1417 struct vring_virtqueue *vq = to_vvq(_vq); 1435 - u16 last_used, id; 1418 + u16 last_used, id, last_used_idx; 1419 + bool used_wrap_counter; 1436 1420 void *ret; 1437 1421 1438 1422 START_USE(vq); ··· 1452 1434 /* Only get used elements after they have been exposed by host. 
*/ 1453 1435 virtio_rmb(vq->weak_barriers); 1454 1436 1455 - last_used = vq->last_used_idx; 1437 + last_used_idx = READ_ONCE(vq->last_used_idx); 1438 + used_wrap_counter = packed_used_wrap_counter(last_used_idx); 1439 + last_used = packed_last_used(last_used_idx); 1456 1440 id = le16_to_cpu(vq->packed.vring.desc[last_used].id); 1457 1441 *len = le32_to_cpu(vq->packed.vring.desc[last_used].len); 1458 1442 ··· 1471 1451 ret = vq->packed.desc_state[id].data; 1472 1452 detach_buf_packed(vq, id, ctx); 1473 1453 1474 - vq->last_used_idx += vq->packed.desc_state[id].num; 1475 - if (unlikely(vq->last_used_idx >= vq->packed.vring.num)) { 1476 - vq->last_used_idx -= vq->packed.vring.num; 1477 - vq->packed.used_wrap_counter ^= 1; 1454 + last_used += vq->packed.desc_state[id].num; 1455 + if (unlikely(last_used >= vq->packed.vring.num)) { 1456 + last_used -= vq->packed.vring.num; 1457 + used_wrap_counter ^= 1; 1478 1458 } 1459 + 1460 + last_used = (last_used | (used_wrap_counter << VRING_PACKED_EVENT_F_WRAP_CTR)); 1461 + WRITE_ONCE(vq->last_used_idx, last_used); 1479 1462 1480 1463 /* 1481 1464 * If we expect an interrupt for the next entry, tell host ··· 1488 1465 if (vq->packed.event_flags_shadow == VRING_PACKED_EVENT_FLAG_DESC) 1489 1466 virtio_store_mb(vq->weak_barriers, 1490 1467 &vq->packed.vring.driver->off_wrap, 1491 - cpu_to_le16(vq->last_used_idx | 1492 - (vq->packed.used_wrap_counter << 1493 - VRING_PACKED_EVENT_F_WRAP_CTR))); 1468 + cpu_to_le16(vq->last_used_idx)); 1494 1469 1495 1470 LAST_ADD_TIME_INVALID(vq); 1496 1471 ··· 1520 1499 1521 1500 if (vq->event) { 1522 1501 vq->packed.vring.driver->off_wrap = 1523 - cpu_to_le16(vq->last_used_idx | 1524 - (vq->packed.used_wrap_counter << 1525 - VRING_PACKED_EVENT_F_WRAP_CTR)); 1502 + cpu_to_le16(vq->last_used_idx); 1526 1503 /* 1527 1504 * We need to update event offset and event wrap 1528 1505 * counter first before updating event flags. 
··· 1537 1518 } 1538 1519 1539 1520 END_USE(vq); 1540 - return vq->last_used_idx | ((u16)vq->packed.used_wrap_counter << 1541 - VRING_PACKED_EVENT_F_WRAP_CTR); 1521 + return vq->last_used_idx; 1542 1522 } 1543 1523 1544 1524 static bool virtqueue_poll_packed(struct virtqueue *_vq, u16 off_wrap) ··· 1555 1537 static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq) 1556 1538 { 1557 1539 struct vring_virtqueue *vq = to_vvq(_vq); 1558 - u16 used_idx, wrap_counter; 1540 + u16 used_idx, wrap_counter, last_used_idx; 1559 1541 u16 bufs; 1560 1542 1561 1543 START_USE(vq); ··· 1568 1550 if (vq->event) { 1569 1551 /* TODO: tune this threshold */ 1570 1552 bufs = (vq->packed.vring.num - vq->vq.num_free) * 3 / 4; 1571 - wrap_counter = vq->packed.used_wrap_counter; 1553 + last_used_idx = READ_ONCE(vq->last_used_idx); 1554 + wrap_counter = packed_used_wrap_counter(last_used_idx); 1572 1555 1573 - used_idx = vq->last_used_idx + bufs; 1556 + used_idx = packed_last_used(last_used_idx) + bufs; 1574 1557 if (used_idx >= vq->packed.vring.num) { 1575 1558 used_idx -= vq->packed.vring.num; 1576 1559 wrap_counter ^= 1; ··· 1601 1582 */ 1602 1583 virtio_mb(vq->weak_barriers); 1603 1584 1604 - if (is_used_desc_packed(vq, 1605 - vq->last_used_idx, 1606 - vq->packed.used_wrap_counter)) { 1585 + last_used_idx = READ_ONCE(vq->last_used_idx); 1586 + wrap_counter = packed_used_wrap_counter(last_used_idx); 1587 + used_idx = packed_last_used(last_used_idx); 1588 + if (is_used_desc_packed(vq, used_idx, wrap_counter)) { 1607 1589 END_USE(vq); 1608 1590 return false; 1609 1591 } ··· 1708 1688 vq->we_own_ring = true; 1709 1689 vq->notify = notify; 1710 1690 vq->weak_barriers = weak_barriers; 1691 + #ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION 1711 1692 vq->broken = true; 1712 - vq->last_used_idx = 0; 1693 + #else 1694 + vq->broken = false; 1695 + #endif 1696 + vq->last_used_idx = 0 | (1 << VRING_PACKED_EVENT_F_WRAP_CTR); 1713 1697 vq->event_triggered = false; 1714 1698 vq->num_added = 0; 1715 
1699 vq->packed_ring = true; ··· 1744 1720 1745 1721 vq->packed.next_avail_idx = 0; 1746 1722 vq->packed.avail_wrap_counter = 1; 1747 - vq->packed.used_wrap_counter = 1; 1748 1723 vq->packed.event_flags_shadow = 0; 1749 1724 vq->packed.avail_used_flags = 1 << VRING_PACKED_DESC_F_AVAIL; 1750 1725 ··· 2158 2135 } 2159 2136 2160 2137 if (unlikely(vq->broken)) { 2138 + #ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION 2161 2139 dev_warn_once(&vq->vq.vdev->dev, 2162 2140 "virtio vring IRQ raised before DRIVER_OK"); 2163 2141 return IRQ_NONE; 2142 + #else 2143 + return IRQ_HANDLED; 2144 + #endif 2164 2145 } 2165 2146 2166 2147 /* Just a hint for performance: so it's ok that this can be racy! */ ··· 2207 2180 vq->we_own_ring = false; 2208 2181 vq->notify = notify; 2209 2182 vq->weak_barriers = weak_barriers; 2183 + #ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION 2210 2184 vq->broken = true; 2185 + #else 2186 + vq->broken = false; 2187 + #endif 2211 2188 vq->last_used_idx = 0; 2212 2189 vq->event_triggered = false; 2213 2190 vq->num_added = 0;
+2 -1
fs/cachefiles/ondemand.c
··· 21 21 * anon_fd. 22 22 */ 23 23 xas_for_each(&xas, req, ULONG_MAX) { 24 - if (req->msg.opcode == CACHEFILES_OP_READ) { 24 + if (req->msg.object_id == object_id && 25 + req->msg.opcode == CACHEFILES_OP_READ) { 25 26 req->error = -EIO; 26 27 complete(&req->done); 27 28 xas_store(&xas, NULL);
+1
fs/ceph/caps.c
··· 4377 4377 ihold(inode); 4378 4378 dout("flush_dirty_caps %llx.%llx\n", ceph_vinop(inode)); 4379 4379 spin_unlock(&mdsc->cap_dirty_lock); 4380 + ceph_wait_on_async_create(inode); 4380 4381 ceph_check_caps(ci, CHECK_CAPS_FLUSH, NULL); 4381 4382 iput(inode); 4382 4383 spin_lock(&mdsc->cap_dirty_lock);
+22 -4
fs/fscache/cookie.c
··· 372 372 return NULL; 373 373 } 374 374 375 + static inline bool fscache_cookie_is_dropped(struct fscache_cookie *cookie) 376 + { 377 + return READ_ONCE(cookie->state) == FSCACHE_COOKIE_STATE_DROPPED; 378 + } 379 + 375 380 static void fscache_wait_on_collision(struct fscache_cookie *candidate, 376 381 struct fscache_cookie *wait_for) 377 382 { 378 383 enum fscache_cookie_state *statep = &wait_for->state; 379 384 380 - wait_var_event_timeout(statep, READ_ONCE(*statep) == FSCACHE_COOKIE_STATE_DROPPED, 385 + wait_var_event_timeout(statep, fscache_cookie_is_dropped(wait_for), 381 386 20 * HZ); 382 - if (READ_ONCE(*statep) != FSCACHE_COOKIE_STATE_DROPPED) { 387 + if (!fscache_cookie_is_dropped(wait_for)) { 383 388 pr_notice("Potential collision c=%08x old: c=%08x", 384 389 candidate->debug_id, wait_for->debug_id); 385 - wait_var_event(statep, READ_ONCE(*statep) == FSCACHE_COOKIE_STATE_DROPPED); 390 + wait_var_event(statep, fscache_cookie_is_dropped(wait_for)); 386 391 } 387 392 } 388 393 ··· 522 517 } 523 518 524 519 fscache_see_cookie(cookie, fscache_cookie_see_active); 525 - fscache_set_cookie_state(cookie, FSCACHE_COOKIE_STATE_ACTIVE); 520 + spin_lock(&cookie->lock); 521 + if (test_and_clear_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags)) 522 + __fscache_set_cookie_state(cookie, 523 + FSCACHE_COOKIE_STATE_INVALIDATING); 524 + else 525 + __fscache_set_cookie_state(cookie, FSCACHE_COOKIE_STATE_ACTIVE); 526 + spin_unlock(&cookie->lock); 527 + wake_up_cookie_state(cookie); 526 528 trace = fscache_access_lookup_cookie_end; 527 529 528 530 out: ··· 763 751 cookie->volume->cache->ops->withdraw_cookie(cookie); 764 752 spin_lock(&cookie->lock); 765 753 } 754 + 755 + if (test_and_clear_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags)) 756 + fscache_end_cookie_access(cookie, fscache_access_invalidate_cookie_end); 766 757 767 758 switch (state) { 768 759 case FSCACHE_COOKIE_STATE_RELINQUISHING: ··· 1063 1048 return; 1064 1049 1065 1050 case 
FSCACHE_COOKIE_STATE_LOOKING_UP: 1051 + __fscache_begin_cookie_access(cookie, fscache_access_invalidate_cookie); 1052 + set_bit(FSCACHE_COOKIE_DO_INVALIDATE, &cookie->flags); 1053 + fallthrough; 1066 1054 case FSCACHE_COOKIE_STATE_CREATING: 1067 1055 spin_unlock(&cookie->lock); 1068 1056 _leave(" [look %x]", cookie->inval_counter);
+2 -2
fs/fscache/volume.c
··· 143 143 { 144 144 wait_var_event_timeout(&candidate->flags, 145 145 !fscache_is_acquire_pending(candidate), 20 * HZ); 146 - if (!fscache_is_acquire_pending(candidate)) { 146 + if (fscache_is_acquire_pending(candidate)) { 147 147 pr_notice("Potential volume collision new=%08x old=%08x", 148 148 candidate->debug_id, collidee_debug_id); 149 149 fscache_stat(&fscache_n_volumes_collision); ··· 182 182 hlist_bl_add_head(&candidate->hash_link, h); 183 183 hlist_bl_unlock(h); 184 184 185 - if (test_bit(FSCACHE_VOLUME_ACQUIRE_PENDING, &candidate->flags)) 185 + if (fscache_is_acquire_pending(candidate)) 186 186 fscache_wait_on_volume_collision(candidate, collidee_debug_id); 187 187 return true; 188 188
+16 -8
fs/io_uring.c
··· 1183 1183 .unbound_nonreg_file = 1, 1184 1184 .pollout = 1, 1185 1185 .needs_async_setup = 1, 1186 + .ioprio = 1, 1186 1187 .async_size = sizeof(struct io_async_msghdr), 1187 1188 }, 1188 1189 [IORING_OP_RECVMSG] = { ··· 1192 1191 .pollin = 1, 1193 1192 .buffer_select = 1, 1194 1193 .needs_async_setup = 1, 1194 + .ioprio = 1, 1195 1195 .async_size = sizeof(struct io_async_msghdr), 1196 1196 }, 1197 1197 [IORING_OP_TIMEOUT] = { ··· 1268 1266 .unbound_nonreg_file = 1, 1269 1267 .pollout = 1, 1270 1268 .audit_skip = 1, 1269 + .ioprio = 1, 1271 1270 }, 1272 1271 [IORING_OP_RECV] = { 1273 1272 .needs_file = 1, ··· 1276 1273 .pollin = 1, 1277 1274 .buffer_select = 1, 1278 1275 .audit_skip = 1, 1276 + .ioprio = 1, 1279 1277 }, 1280 1278 [IORING_OP_OPENAT2] = { 1281 1279 }, ··· 4318 4314 if (unlikely(ret < 0)) 4319 4315 return ret; 4320 4316 } else { 4317 + rw = req->async_data; 4318 + s = &rw->s; 4319 + 4321 4320 /* 4322 4321 * Safe and required to re-import if we're using provided 4323 4322 * buffers, as we dropped the selected one before retry. 4324 4323 */ 4325 - if (req->flags & REQ_F_BUFFER_SELECT) { 4324 + if (io_do_buffer_select(req)) { 4326 4325 ret = io_import_iovec(READ, req, &iovec, s, issue_flags); 4327 4326 if (unlikely(ret < 0)) 4328 4327 return ret; 4329 4328 } 4330 4329 4331 - rw = req->async_data; 4332 - s = &rw->s; 4333 4330 /* 4334 4331 * We come here from an earlier attempt, restore our state to 4335 4332 * match in case it doesn't. 
It's cheap enough that we don't ··· 5066 5061 { 5067 5062 struct io_uring_cmd *ioucmd = &req->uring_cmd; 5068 5063 5069 - if (sqe->rw_flags) 5064 + if (sqe->rw_flags || sqe->__pad1) 5070 5065 return -EINVAL; 5071 5066 ioucmd->cmd = sqe->cmd; 5072 5067 ioucmd->cmd_op = READ_ONCE(sqe->cmd_op); ··· 6080 6075 { 6081 6076 struct io_sr_msg *sr = &req->sr_msg; 6082 6077 6083 - if (unlikely(sqe->file_index)) 6078 + if (unlikely(sqe->file_index || sqe->addr2)) 6084 6079 return -EINVAL; 6085 6080 6086 6081 sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr)); 6087 6082 sr->len = READ_ONCE(sqe->len); 6088 - sr->flags = READ_ONCE(sqe->addr2); 6083 + sr->flags = READ_ONCE(sqe->ioprio); 6089 6084 if (sr->flags & ~IORING_RECVSEND_POLL_FIRST) 6090 6085 return -EINVAL; 6091 6086 sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL; ··· 6316 6311 { 6317 6312 struct io_sr_msg *sr = &req->sr_msg; 6318 6313 6319 - if (unlikely(sqe->file_index)) 6314 + if (unlikely(sqe->file_index || sqe->addr2)) 6320 6315 return -EINVAL; 6321 6316 6322 6317 sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr)); 6323 6318 sr->len = READ_ONCE(sqe->len); 6324 - sr->flags = READ_ONCE(sqe->addr2); 6319 + sr->flags = READ_ONCE(sqe->ioprio); 6325 6320 if (sr->flags & ~IORING_RECVSEND_POLL_FIRST) 6326 6321 return -EINVAL; 6327 6322 sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL; ··· 7972 7967 unsigned int done; 7973 7968 struct file *file; 7974 7969 int ret, fd; 7970 + 7971 + if (!req->ctx->file_data) 7972 + return -ENXIO; 7975 7973 7976 7974 for (done = 0; done < req->rsrc_update.nr_args; done++) { 7977 7975 if (copy_from_user(&fd, &fds[done], sizeof(fd))) {
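Several io_uring hunks above tighten SQE validation: fields a given opcode does not use (`sqe->__pad1`, `sqe->addr2`, `sqe->file_index`) must now be zero, so they can be repurposed by future opcodes without silently breaking old binaries. The pattern in miniature, using a hypothetical request struct rather than the real SQE layout:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical submission entry: this op uses 'addr' and 'len'; the rest
 * is reserved and must be zero, mirroring the io_uring validation style. */
struct sketch_sqe {
	uint8_t  opcode;
	uint64_t addr;
	uint32_t len;
	uint64_t reserved1;   /* analogous to sqe->__pad1 / an unused addr2 */
	uint32_t reserved2;
};

static int sketch_prep(const struct sketch_sqe *sqe)
{
	/* Reject nonzero reserved fields up front, before doing any work. */
	if (sqe->reserved1 || sqe->reserved2)
		return -EINVAL;
	return 0;
}
```

The struct and function names here are illustrative only; the point is the up-front `-EINVAL` on any reserved bits, which is what the `sqe->rw_flags || sqe->__pad1` and `sqe->file_index || sqe->addr2` checks above implement.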
+31 -17
fs/ksmbd/smb2pdu.c
··· 6490 6490 goto out; 6491 6491 } 6492 6492 6493 + ksmbd_debug(SMB, "flags %u\n", le32_to_cpu(req->Flags)); 6493 6494 if (le32_to_cpu(req->Flags) & SMB2_WRITEFLAG_WRITE_THROUGH) 6494 6495 writethrough = true; 6495 6496 ··· 6505 6504 } 6506 6505 data_buf = (char *)(((char *)&req->hdr.ProtocolId) + 6507 6506 le16_to_cpu(req->DataOffset)); 6508 - 6509 - ksmbd_debug(SMB, "flags %u\n", le32_to_cpu(req->Flags)); 6510 - if (le32_to_cpu(req->Flags) & SMB2_WRITEFLAG_WRITE_THROUGH) 6511 - writethrough = true; 6512 6507 6513 6508 ksmbd_debug(SMB, "filename %pd, offset %lld, len %zu\n", 6514 6509 fp->filp->f_path.dentry, offset, length); ··· 7700 7703 { 7701 7704 struct file_zero_data_information *zero_data; 7702 7705 struct ksmbd_file *fp; 7703 - loff_t off, len; 7706 + loff_t off, len, bfz; 7704 7707 7705 7708 if (!test_tree_conn_flag(work->tcon, KSMBD_TREE_CONN_FLAG_WRITABLE)) { 7706 7709 ksmbd_debug(SMB, ··· 7717 7720 zero_data = 7718 7721 (struct file_zero_data_information *)&req->Buffer[0]; 7719 7722 7720 - fp = ksmbd_lookup_fd_fast(work, id); 7721 - if (!fp) { 7722 - ret = -ENOENT; 7723 + off = le64_to_cpu(zero_data->FileOffset); 7724 + bfz = le64_to_cpu(zero_data->BeyondFinalZero); 7725 + if (off > bfz) { 7726 + ret = -EINVAL; 7723 7727 goto out; 7724 7728 } 7725 7729 7726 - off = le64_to_cpu(zero_data->FileOffset); 7727 - len = le64_to_cpu(zero_data->BeyondFinalZero) - off; 7730 + len = bfz - off; 7731 + if (len) { 7732 + fp = ksmbd_lookup_fd_fast(work, id); 7733 + if (!fp) { 7734 + ret = -ENOENT; 7735 + goto out; 7736 + } 7728 7737 7729 - ret = ksmbd_vfs_zero_data(work, fp, off, len); 7730 - ksmbd_fd_put(work, fp); 7731 - if (ret < 0) 7732 - goto out; 7738 + ret = ksmbd_vfs_zero_data(work, fp, off, len); 7739 + ksmbd_fd_put(work, fp); 7740 + if (ret < 0) 7741 + goto out; 7742 + } 7733 7743 break; 7734 7744 } 7735 7745 case FSCTL_QUERY_ALLOCATED_RANGES: ··· 7810 7806 src_off = le64_to_cpu(dup_ext->SourceFileOffset); 7811 7807 dst_off = 
le64_to_cpu(dup_ext->TargetFileOffset); 7812 7808 length = le64_to_cpu(dup_ext->ByteCount); 7813 - cloned = vfs_clone_file_range(fp_in->filp, src_off, fp_out->filp, 7814 - dst_off, length, 0); 7809 + /* 7810 + * XXX: It is not clear if FSCTL_DUPLICATE_EXTENTS_TO_FILE 7811 + * should fall back to vfs_copy_file_range(). This could be 7812 + * beneficial when re-exporting nfs/smb mount, but note that 7813 + * this can result in partial copy that returns an error status. 7814 + * If/when FSCTL_DUPLICATE_EXTENTS_TO_FILE_EX is implemented, 7815 + * fall back to vfs_copy_file_range(), should be avoided when 7816 + * the flag DUPLICATE_EXTENTS_DATA_EX_SOURCE_ATOMIC is set. 7817 + */ 7818 + cloned = vfs_clone_file_range(fp_in->filp, src_off, 7819 + fp_out->filp, dst_off, length, 0); 7815 7820 if (cloned == -EXDEV || cloned == -EOPNOTSUPP) { 7816 7821 ret = -EOPNOTSUPP; 7817 7822 goto dup_ext_out; 7818 7823 } else if (cloned != length) { 7819 7824 cloned = vfs_copy_file_range(fp_in->filp, src_off, 7820 - fp_out->filp, dst_off, length, 0); 7825 + fp_out->filp, dst_off, 7826 + length, 0); 7821 7827 if (cloned != length) { 7822 7828 if (cloned < 0) 7823 7829 ret = cloned;
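The FSCTL_SET_ZERO_DATA change above rejects requests whose `FileOffset` lies past `BeyondFinalZero` and skips the fd lookup entirely when the resulting length is zero. A standalone sketch of just that validation step (function name and calling convention are illustrative, not the kernel's):

```c
#include <assert.h>
#include <errno.h>

/* Illustrative stand-in for the server-side check: a zero-data request
 * carries a start offset and a "beyond final zero" end offset. */
static int check_zero_data_range(long long off, long long bfz,
				 long long *len_out)
{
	if (off > bfz)		/* inverted range: reject, as the fix does */
		return -EINVAL;
	*len_out = bfz - off;	/* may legitimately be 0: nothing to zero */
	return 0;
}
```

A zero-length result is not an error; the caller simply has no work to do, which is why the patch only looks up the file when `len` is nonzero.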
-10
fs/ksmbd/transport_rdma.c
··· 5 5 * 6 6 * Author(s): Long Li <longli@microsoft.com>, 7 7 * Hyunchul Lee <hyc.lee@gmail.com> 8 - * 9 - * This program is free software; you can redistribute it and/or modify 10 - * it under the terms of the GNU General Public License as published by 11 - * the Free Software Foundation; either version 2 of the License, or 12 - * (at your option) any later version. 13 - * 14 - * This program is distributed in the hope that it will be useful, 15 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 16 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See 17 - * the GNU General Public License for more details. 18 8 */ 19 9 20 10 #define SUBMOD_NAME "smb_direct"
+1 -1
fs/ksmbd/transport_tcp.c
··· 230 230 break; 231 231 } 232 232 ret = kernel_accept(iface->ksmbd_socket, &client_sk, 233 - O_NONBLOCK); 233 + SOCK_NONBLOCK); 234 234 mutex_unlock(&iface->sock_release_lock); 235 235 if (ret) { 236 236 if (ret == -EAGAIN)
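The transport_tcp.c fix passes `SOCK_NONBLOCK` rather than `O_NONBLOCK` to `kernel_accept()`: the accept path takes socket-type flags (as `accept4(2)` does in userspace), and `O_NONBLOCK` only happens to share a numeric value with `SOCK_NONBLOCK` on some architectures. A userspace sketch of the same semantics (assumes Linux):

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Create a socket with SOCK_NONBLOCK and report whether the resulting
 * descriptor is actually in non-blocking mode (O_NONBLOCK in F_GETFL). */
static int nonblocking_socket_is_nonblocking(void)
{
	int fd = socket(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0);
	if (fd < 0)
		return -1;
	int fl = fcntl(fd, F_GETFL);
	close(fd);
	return (fl >= 0 && (fl & O_NONBLOCK)) ? 1 : 0;
}
```

On architectures where the two constants differ, passing `O_NONBLOCK` where a socket-type flag is expected would be silently ignored or misinterpreted, which is exactly the portability bug the one-line change fixes.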
+9 -3
fs/ksmbd/vfs.c
··· 1015 1015 FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 1016 1016 off, len); 1017 1017 1018 - return vfs_fallocate(fp->filp, FALLOC_FL_ZERO_RANGE, off, len); 1018 + return vfs_fallocate(fp->filp, 1019 + FALLOC_FL_ZERO_RANGE | FALLOC_FL_KEEP_SIZE, 1020 + off, len); 1019 1021 } 1020 1022 1021 1023 int ksmbd_vfs_fqar_lseek(struct ksmbd_file *fp, loff_t start, loff_t length, ··· 1048 1046 *out_count = 0; 1049 1047 end = start + length; 1050 1048 while (start < end && *out_count < in_count) { 1051 - extent_start = f->f_op->llseek(f, start, SEEK_DATA); 1049 + extent_start = vfs_llseek(f, start, SEEK_DATA); 1052 1050 if (extent_start < 0) { 1053 1051 if (extent_start != -ENXIO) 1054 1052 ret = (int)extent_start; ··· 1058 1056 if (extent_start >= end) 1059 1057 break; 1060 1058 1061 - extent_end = f->f_op->llseek(f, extent_start, SEEK_HOLE); 1059 + extent_end = vfs_llseek(f, extent_start, SEEK_HOLE); 1062 1060 if (extent_end < 0) { 1063 1061 if (extent_end != -ENXIO) 1064 1062 ret = (int)extent_end; ··· 1779 1777 1780 1778 ret = vfs_copy_file_range(src_fp->filp, src_off, 1781 1779 dst_fp->filp, dst_off, len, 0); 1780 + if (ret == -EOPNOTSUPP || ret == -EXDEV) 1781 + ret = generic_copy_file_range(src_fp->filp, src_off, 1782 + dst_fp->filp, dst_off, 1783 + len, 0); 1782 1784 if (ret < 0) 1783 1785 return ret; 1784 1786
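The ksmbd vfs.c change calls `vfs_llseek()` instead of dereferencing `f_op->llseek` directly (which may be NULL). The allocated-range scan it performs maps directly onto userspace `lseek(2)` with `SEEK_DATA`/`SEEK_HOLE`; a hedged sketch follows (hole reporting depends on filesystem support, so this only assumes the generic fallback in which the whole file counts as data):

```c
#define _GNU_SOURCE	/* for SEEK_DATA / SEEK_HOLE */
#include <assert.h>
#include <stdio.h>
#include <unistd.h>

/* Find the first data extent at or after 'start'; on success fills
 * [*ext_start, *ext_end) and returns 0, or -1 when no data remains. */
static int first_extent(int fd, off_t start, off_t *ext_start, off_t *ext_end)
{
	off_t ds = lseek(fd, start, SEEK_DATA);
	if (ds < 0)
		return -1;		/* e.g. ENXIO: no data past 'start' */
	off_t de = lseek(fd, ds, SEEK_HOLE);
	if (de < 0)
		return -1;
	*ext_start = ds;
	*ext_end = de;
	return 0;
}
```

The kernel loop above iterates this pair of seeks until it reaches the requested end offset or runs out of output slots, which is how FSCTL_QUERY_ALLOCATED_RANGES is answered.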
+13 -6
fs/nfs/nfs4proc.c
··· 4012 4012 } 4013 4013 4014 4014 page = alloc_page(GFP_KERNEL); 4015 + if (!page) 4016 + return -ENOMEM; 4015 4017 locations = kmalloc(sizeof(struct nfs4_fs_locations), GFP_KERNEL); 4016 - if (page == NULL || locations == NULL) 4017 - goto out; 4018 + if (!locations) 4019 + goto out_free; 4020 + locations->fattr = nfs_alloc_fattr(); 4021 + if (!locations->fattr) 4022 + goto out_free_2; 4018 4023 4019 4024 status = nfs4_proc_get_locations(server, fhandle, locations, page, 4020 4025 cred); 4021 4026 if (status) 4022 - goto out; 4027 + goto out_free_3; 4023 4028 4024 4029 for (i = 0; i < locations->nlocations; i++) 4025 4030 test_fs_location_for_trunking(&locations->locations[i], clp, 4026 4031 server); 4027 - out: 4028 - if (page) 4029 - __free_page(page); 4032 + out_free_3: 4033 + kfree(locations->fattr); 4034 + out_free_2: 4030 4035 kfree(locations); 4036 + out_free: 4037 + __free_page(page); 4031 4038 return status; 4032 4039 } 4033 4040
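The nfs4proc.c hunk replaces one mixed cleanup label with the classic staged-unwind pattern: each allocation gets its own label, and a failure at step N frees exactly steps N-1..1 in reverse order. A standalone sketch of the pattern, with counted fake allocations standing in for the page/locations/fattr triple (all names illustrative):

```c
#include <assert.h>
#include <stdlib.h>

static int live_allocs;		/* outstanding fake allocations */

static void *fake_alloc(int fail)
{
	if (fail)
		return NULL;
	live_allocs++;
	return malloc(1);
}

static void fake_free(void *p)
{
	live_allocs--;
	free(p);
}

/* Staged unwind: whichever step fails, only earlier steps are freed. */
static int do_three_step(int fail_at)
{
	void *a, *b, *c;

	a = fake_alloc(fail_at == 1);
	if (!a)
		return -1;
	b = fake_alloc(fail_at == 2);
	if (!b)
		goto out_free_a;
	c = fake_alloc(fail_at == 3);
	if (!c)
		goto out_free_b;

	fake_free(c);		/* success path also releases everything */
out_free_b:
	fake_free(b);
out_free_a:
	fake_free(a);
	return fail_at ? -1 : 0;
}
```

Whatever the failure point, `live_allocs` returns to zero, which is the leak-freedom property the `out_free_3`/`out_free_2`/`out_free` chain in the patch restores.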
+1
fs/nfs/nfs4state.c
··· 2753 2753 goto again; 2754 2754 2755 2755 nfs_put_client(clp); 2756 + module_put_and_kthread_exit(0); 2756 2757 return 0; 2757 2758 }
+9 -2
fs/nfsd/vfs.c
··· 577 577 ssize_t nfsd_copy_file_range(struct file *src, u64 src_pos, struct file *dst, 578 578 u64 dst_pos, u64 count) 579 579 { 580 + ssize_t ret; 580 581 581 582 /* 582 583 * Limit copy to 4MB to prevent indefinitely blocking an nfsd ··· 588 587 * limit like this and pipeline multiple COPY requests. 589 588 */ 590 589 count = min_t(u64, count, 1 << 22); 591 - return vfs_copy_file_range(src, src_pos, dst, dst_pos, count, 0); 590 + ret = vfs_copy_file_range(src, src_pos, dst, dst_pos, count, 0); 591 + 592 + if (ret == -EOPNOTSUPP || ret == -EXDEV) 593 + ret = generic_copy_file_range(src, src_pos, dst, dst_pos, 594 + count, 0); 595 + return ret; 592 596 } 593 597 594 598 __be32 nfsd4_vfs_fallocate(struct svc_rqst *rqstp, struct svc_fh *fhp, ··· 1179 1173 nfsd_copy_write_verifier(verf, nn); 1180 1174 err2 = filemap_check_wb_err(nf->nf_file->f_mapping, 1181 1175 since); 1176 + err = nfserrno(err2); 1182 1177 break; 1183 1178 case -EINVAL: 1184 1179 err = nfserr_notsupp; ··· 1187 1180 default: 1188 1181 nfsd_reset_write_verifier(nn); 1189 1182 trace_nfsd_writeverf_reset(nn, rqstp, err2); 1183 + err = nfserrno(err2); 1190 1184 } 1191 - err = nfserrno(err2); 1192 1185 } else 1193 1186 nfsd_copy_write_verifier(verf, nn); 1194 1187
+19 -15
fs/notify/fanotify/fanotify_user.c
··· 1513 1513 return 0; 1514 1514 } 1515 1515 1516 - static int fanotify_events_supported(struct path *path, __u64 mask) 1516 + static int fanotify_events_supported(struct fsnotify_group *group, 1517 + struct path *path, __u64 mask, 1518 + unsigned int flags) 1517 1519 { 1520 + unsigned int mark_type = flags & FANOTIFY_MARK_TYPE_BITS; 1521 + /* Strict validation of events in non-dir inode mask with v5.17+ APIs */ 1522 + bool strict_dir_events = FAN_GROUP_FLAG(group, FAN_REPORT_TARGET_FID) || 1523 + (mask & FAN_RENAME); 1524 + 1518 1525 /* 1519 1526 * Some filesystems such as 'proc' acquire unusual locks when opening 1520 1527 * files. For them fanotify permission events have high chances of ··· 1533 1526 if (mask & FANOTIFY_PERM_EVENTS && 1534 1527 path->mnt->mnt_sb->s_type->fs_flags & FS_DISALLOW_NOTIFY_PERM) 1535 1528 return -EINVAL; 1529 + 1530 + /* 1531 + * We shouldn't have allowed setting dirent events and the directory 1532 + * flags FAN_ONDIR and FAN_EVENT_ON_CHILD in mask of non-dir inode, 1533 + * but because we always allowed it, error only when using new APIs. 1534 + */ 1535 + if (strict_dir_events && mark_type == FAN_MARK_INODE && 1536 + !d_is_dir(path->dentry) && (mask & FANOTIFY_DIRONLY_EVENT_BITS)) 1537 + return -ENOTDIR; 1538 + 1536 1539 return 0; 1537 1540 } 1538 1541 ··· 1689 1672 goto fput_and_out; 1690 1673 1691 1674 if (flags & FAN_MARK_ADD) { 1692 - ret = fanotify_events_supported(&path, mask); 1675 + ret = fanotify_events_supported(group, &path, mask, flags); 1693 1676 if (ret) 1694 1677 goto path_put_and_out; 1695 1678 } ··· 1711 1694 inode = path.dentry->d_inode; 1712 1695 else 1713 1696 mnt = path.mnt; 1714 - 1715 - /* 1716 - * FAN_RENAME is not allowed on non-dir (for now). 1717 - * We shouldn't have allowed setting any dirent events in mask of 1718 - * non-dir, but because we always allowed it, error only if group 1719 - * was initialized with the new flag FAN_REPORT_TARGET_FID. 
1720 - */ 1721 - ret = -ENOTDIR; 1722 - if (inode && !S_ISDIR(inode->i_mode) && 1723 - ((mask & FAN_RENAME) || 1724 - ((mask & FANOTIFY_DIRENT_EVENTS) && 1725 - FAN_GROUP_FLAG(group, FAN_REPORT_TARGET_FID)))) 1726 - goto path_put_and_out; 1727 1697 1728 1698 /* Mask out FAN_EVENT_ON_CHILD flag for sb/mount/non-dir marks */ 1729 1699 if (mnt || !S_ISDIR(inode->i_mode)) {
+44 -33
fs/read_write.c
··· 1397 1397 } 1398 1398 EXPORT_SYMBOL(generic_copy_file_range); 1399 1399 1400 - static ssize_t do_copy_file_range(struct file *file_in, loff_t pos_in, 1401 - struct file *file_out, loff_t pos_out, 1402 - size_t len, unsigned int flags) 1403 - { 1404 - /* 1405 - * Although we now allow filesystems to handle cross sb copy, passing 1406 - * a file of the wrong filesystem type to filesystem driver can result 1407 - * in an attempt to dereference the wrong type of ->private_data, so 1408 - * avoid doing that until we really have a good reason. NFS defines 1409 - * several different file_system_type structures, but they all end up 1410 - * using the same ->copy_file_range() function pointer. 1411 - */ 1412 - if (file_out->f_op->copy_file_range && 1413 - file_out->f_op->copy_file_range == file_in->f_op->copy_file_range) 1414 - return file_out->f_op->copy_file_range(file_in, pos_in, 1415 - file_out, pos_out, 1416 - len, flags); 1417 - 1418 - return generic_copy_file_range(file_in, pos_in, file_out, pos_out, len, 1419 - flags); 1420 - } 1421 - 1422 1400 /* 1423 1401 * Performs necessary checks before doing a file copy 1424 1402 * ··· 1417 1439 ret = generic_file_rw_checks(file_in, file_out); 1418 1440 if (ret) 1419 1441 return ret; 1442 + 1443 + /* 1444 + * We allow some filesystems to handle cross sb copy, but passing 1445 + * a file of the wrong filesystem type to filesystem driver can result 1446 + * in an attempt to dereference the wrong type of ->private_data, so 1447 + * avoid doing that until we really have a good reason. 1448 + * 1449 + * nfs and cifs define several different file_system_type structures 1450 + * and several different sets of file_operations, but they all end up 1451 + * using the same ->copy_file_range() function pointer. 
1452 + */ 1453 + if (file_out->f_op->copy_file_range) { 1454 + if (file_in->f_op->copy_file_range != 1455 + file_out->f_op->copy_file_range) 1456 + return -EXDEV; 1457 + } else if (file_inode(file_in)->i_sb != file_inode(file_out)->i_sb) { 1458 + return -EXDEV; 1459 + } 1420 1460 1421 1461 /* Don't touch certain kinds of inodes */ 1422 1462 if (IS_IMMUTABLE(inode_out)) ··· 1501 1505 file_start_write(file_out); 1502 1506 1503 1507 /* 1504 - * Try cloning first, this is supported by more file systems, and 1505 - * more efficient if both clone and copy are supported (e.g. NFS). 1508 + * Cloning is supported by more file systems, so we implement copy on 1509 + * same sb using clone, but for filesystems where both clone and copy 1510 + * are supported (e.g. nfs,cifs), we only call the copy method. 1506 1511 */ 1512 + if (file_out->f_op->copy_file_range) { 1513 + ret = file_out->f_op->copy_file_range(file_in, pos_in, 1514 + file_out, pos_out, 1515 + len, flags); 1516 + goto done; 1517 + } 1518 + 1507 1519 if (file_in->f_op->remap_file_range && 1508 1520 file_inode(file_in)->i_sb == file_inode(file_out)->i_sb) { 1509 - loff_t cloned; 1510 - 1511 - cloned = file_in->f_op->remap_file_range(file_in, pos_in, 1521 + ret = file_in->f_op->remap_file_range(file_in, pos_in, 1512 1522 file_out, pos_out, 1513 1523 min_t(loff_t, MAX_RW_COUNT, len), 1514 1524 REMAP_FILE_CAN_SHORTEN); 1515 - if (cloned > 0) { 1516 - ret = cloned; 1525 + if (ret > 0) 1517 1526 goto done; 1518 - } 1519 1527 } 1520 1528 1521 - ret = do_copy_file_range(file_in, pos_in, file_out, pos_out, len, 1522 - flags); 1523 - WARN_ON_ONCE(ret == -EOPNOTSUPP); 1529 + /* 1530 + * We can get here for same sb copy of filesystems that do not implement 1531 + * ->copy_file_range() in case filesystem does not support clone or in 1532 + * case filesystem supports clone but rejected the clone request (e.g. 1533 + * because it was not block aligned). 
1534 + * 1535 + * In both cases, fall back to kernel copy so we are able to maintain a 1536 + * consistent story about which filesystems support copy_file_range() 1537 + * and which filesystems do not, that will allow userspace tools to 1538 + * make consistent decisions w.r.t. using copy_file_range(). 1539 + */ 1540 + ret = generic_copy_file_range(file_in, pos_in, file_out, pos_out, len, 1541 + flags); 1542 + 1524 1543 done: 1525 1544 if (ret > 0) { 1526 1545 fsnotify_access(file_in);
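With the read_write.c change above, `copy_file_range(2)` returns `EXDEV` for cross-filesystem copies between filesystems that do not share a `->copy_file_range()` method, so userspace is expected to provide its own fallback. A hedged sketch of that fallback (a plain read/write loop; real tools would also handle short copies, retries, and sparse files):

```c
#define _GNU_SOURCE	/* copy_file_range() glibc wrapper, glibc >= 2.27 */
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Copy up to len bytes from fd_in's offset to fd_out's offset, preferring
 * copy_file_range(2) and falling back when the pairing is unsupported. */
static ssize_t copy_with_fallback(int fd_in, int fd_out, size_t len)
{
	ssize_t n = copy_file_range(fd_in, NULL, fd_out, NULL, len, 0);
	if (n >= 0)
		return n;
	if (errno != EXDEV && errno != ENOSYS && errno != EOPNOTSUPP)
		return -1;

	char buf[4096];
	size_t done = 0;
	while (done < len) {
		size_t want = len - done < sizeof(buf) ? len - done : sizeof(buf);
		ssize_t r = read(fd_in, buf, want);
		if (r < 0)
			return -1;
		if (r == 0)
			break;			/* EOF before len: short copy */
		ssize_t w = write(fd_out, buf, (size_t)r);
		if (w != r)
			return -1;
		done += (size_t)w;
	}
	return (ssize_t)done;
}
```

The consistent `-EXDEV` behaviour the patch introduces is what lets userspace keep a single fallback path like this instead of guessing per filesystem.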
+9 -29
fs/xfs/libxfs/xfs_attr.c
··· 50 50 STATIC int xfs_attr_leaf_get(xfs_da_args_t *args); 51 51 STATIC int xfs_attr_leaf_removename(xfs_da_args_t *args); 52 52 STATIC int xfs_attr_leaf_hasname(struct xfs_da_args *args, struct xfs_buf **bp); 53 - STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args, struct xfs_buf *bp); 53 + STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args); 54 54 55 55 /* 56 56 * Internal routines when attribute list is more than one block. ··· 393 393 * It won't fit in the shortform, transform to a leaf block. GROT: 394 394 * another possible req'mt for a double-split btree op. 395 395 */ 396 - error = xfs_attr_shortform_to_leaf(args, &attr->xattri_leaf_bp); 396 + error = xfs_attr_shortform_to_leaf(args); 397 397 if (error) 398 398 return error; 399 399 400 - /* 401 - * Prevent the leaf buffer from being unlocked so that a concurrent AIL 402 - * push cannot grab the half-baked leaf buffer and run into problems 403 - * with the write verifier. 404 - */ 405 - xfs_trans_bhold(args->trans, attr->xattri_leaf_bp); 406 400 attr->xattri_dela_state = XFS_DAS_LEAF_ADD; 407 401 out: 408 402 trace_xfs_attr_sf_addname_return(attr->xattri_dela_state, args->dp); ··· 441 447 442 448 /* 443 449 * Use the leaf buffer we may already hold locked as a result of 444 - * a sf-to-leaf conversion. The held buffer is no longer valid 445 - * after this call, regardless of the result. 450 + * a sf-to-leaf conversion. 
446 451 */ 447 - error = xfs_attr_leaf_try_add(args, attr->xattri_leaf_bp); 448 - attr->xattri_leaf_bp = NULL; 452 + error = xfs_attr_leaf_try_add(args); 449 453 450 454 if (error == -ENOSPC) { 451 455 error = xfs_attr3_leaf_to_node(args); ··· 488 496 { 489 497 struct xfs_da_args *args = attr->xattri_da_args; 490 498 int error; 491 - 492 - ASSERT(!attr->xattri_leaf_bp); 493 499 494 500 error = xfs_attr_node_addname_find_attr(attr); 495 501 if (error) ··· 1205 1215 */ 1206 1216 STATIC int 1207 1217 xfs_attr_leaf_try_add( 1208 - struct xfs_da_args *args, 1209 - struct xfs_buf *bp) 1218 + struct xfs_da_args *args) 1210 1219 { 1220 + struct xfs_buf *bp; 1211 1221 int error; 1212 1222 1213 - /* 1214 - * If the caller provided a buffer to us, it is locked and held in 1215 - * the transaction because it just did a shortform to leaf conversion. 1216 - * Hence we don't need to read it again. Otherwise read in the leaf 1217 - * buffer. 1218 - */ 1219 - if (bp) { 1220 - xfs_trans_bhold_release(args->trans, bp); 1221 - } else { 1222 - error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp); 1223 - if (error) 1224 - return error; 1225 - } 1223 + error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp); 1224 + if (error) 1225 + return error; 1226 1226 1227 1227 /* 1228 1228 * Look up the xattr name to set the insertion point for the new xattr.
-5
fs/xfs/libxfs/xfs_attr.h
··· 515 515 */ 516 516 struct xfs_attri_log_nameval *xattri_nameval; 517 517 518 - /* 519 - * Used by xfs_attr_set to hold a leaf buffer across a transaction roll 520 - */ 521 - struct xfs_buf *xattri_leaf_bp; 522 - 523 518 /* Used to keep track of current state of delayed operation */ 524 519 enum xfs_delattr_state xattri_dela_state; 525 520
+19 -16
fs/xfs/libxfs/xfs_attr_leaf.c
··· 289 289 return NULL; 290 290 } 291 291 292 + /* 293 + * Validate an attribute leaf block. 294 + * 295 + * Empty leaf blocks can occur under the following circumstances: 296 + * 297 + * 1. setxattr adds a new extended attribute to a file; 298 + * 2. The file has zero existing attributes; 299 + * 3. The attribute is too large to fit in the attribute fork; 300 + * 4. The attribute is small enough to fit in a leaf block; 301 + * 5. A log flush occurs after committing the transaction that creates 302 + * the (empty) leaf block; and 303 + * 6. The filesystem goes down after the log flush but before the new 304 + * attribute can be committed to the leaf block. 305 + * 306 + * Hence we need to ensure that we don't fail the validation purely 307 + * because the leaf is empty. 308 + */ 292 309 static xfs_failaddr_t 293 310 xfs_attr3_leaf_verify( 294 311 struct xfs_buf *bp) ··· 326 309 fa = xfs_da3_blkinfo_verify(bp, bp->b_addr); 327 310 if (fa) 328 311 return fa; 329 - 330 - /* 331 - * Empty leaf blocks should never occur; they imply the existence of a 332 - * software bug that needs fixing. xfs_repair also flags them as a 333 - * corruption that needs fixing, so we should never let these go to 334 - * disk. 335 - */ 336 - if (ichdr.count == 0) 337 - return __this_address; 338 312 339 313 /* 340 314 * firstused is the block offset of the first name info structure. ··· 930 922 return -ENOATTR; 931 923 } 932 924 933 - /* 934 - * Convert from using the shortform to the leaf. On success, return the 935 - * buffer so that we can keep it locked until we're totally done with it. 936 - */ 925 + /* Convert from using the shortform to the leaf format. 
*/ 937 926 int 938 927 xfs_attr_shortform_to_leaf( 939 - struct xfs_da_args *args, 940 - struct xfs_buf **leaf_bp) 928 + struct xfs_da_args *args) 941 929 { 942 930 struct xfs_inode *dp; 943 931 struct xfs_attr_shortform *sf; ··· 995 991 sfe = xfs_attr_sf_nextentry(sfe); 996 992 } 997 993 error = 0; 998 - *leaf_bp = bp; 999 994 out: 1000 995 kmem_free(tmpbuffer); 1001 996 return error;
+1 -2
fs/xfs/libxfs/xfs_attr_leaf.h
··· 49 49 void xfs_attr_shortform_add(struct xfs_da_args *args, int forkoff); 50 50 int xfs_attr_shortform_lookup(struct xfs_da_args *args); 51 51 int xfs_attr_shortform_getvalue(struct xfs_da_args *args); 52 - int xfs_attr_shortform_to_leaf(struct xfs_da_args *args, 53 - struct xfs_buf **leaf_bp); 52 + int xfs_attr_shortform_to_leaf(struct xfs_da_args *args); 54 53 int xfs_attr_sf_removename(struct xfs_da_args *args); 55 54 int xfs_attr_sf_findname(struct xfs_da_args *args, 56 55 struct xfs_attr_sf_entry **sfep,
+15 -12
fs/xfs/xfs_attr_item.c
··· 576 576 struct xfs_trans_res tres; 577 577 struct xfs_attri_log_format *attrp; 578 578 struct xfs_attri_log_nameval *nv = attrip->attri_nameval; 579 - int error, ret = 0; 579 + int error; 580 580 int total; 581 581 int local; 582 582 struct xfs_attrd_log_item *done_item = NULL; ··· 655 655 xfs_ilock(ip, XFS_ILOCK_EXCL); 656 656 xfs_trans_ijoin(tp, ip, 0); 657 657 658 - ret = xfs_xattri_finish_update(attr, done_item); 659 - if (ret == -EAGAIN) { 660 - /* There's more work to do, so add it to this transaction */ 658 + error = xfs_xattri_finish_update(attr, done_item); 659 + if (error == -EAGAIN) { 660 + /* 661 + * There's more work to do, so add the intent item to this 662 + * transaction so that we can continue it later. 663 + */ 661 664 xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_ATTR, &attr->xattri_list); 662 - } else 663 - error = ret; 665 + error = xfs_defer_ops_capture_and_commit(tp, capture_list); 666 + if (error) 667 + goto out_unlock; 664 668 669 + xfs_iunlock(ip, XFS_ILOCK_EXCL); 670 + xfs_irele(ip); 671 + return 0; 672 + } 665 673 if (error) { 666 674 xfs_trans_cancel(tp); 667 675 goto out_unlock; 668 676 } 669 677 670 678 error = xfs_defer_ops_capture_and_commit(tp, capture_list); 671 - 672 679 out_unlock: 673 - if (attr->xattri_leaf_bp) 674 - xfs_buf_relse(attr->xattri_leaf_bp); 675 - 676 680 xfs_iunlock(ip, XFS_ILOCK_EXCL); 677 681 xfs_irele(ip); 678 682 out: 679 - if (ret != -EAGAIN) 680 - xfs_attr_free_item(attr); 683 + xfs_attr_free_item(attr); 681 684 return error; 682 685 } 683 686
+2
fs/xfs/xfs_bmap_util.c
··· 686 686 * forever. 687 687 */ 688 688 end_fsb = XFS_B_TO_FSB(mp, (xfs_ufsize_t)XFS_ISIZE(ip)); 689 + if (XFS_IS_REALTIME_INODE(ip) && mp->m_sb.sb_rextsize > 1) 690 + end_fsb = roundup_64(end_fsb, mp->m_sb.sb_rextsize); 689 691 last_fsb = XFS_B_TO_FSB(mp, mp->m_super->s_maxbytes); 690 692 if (last_fsb <= end_fsb) 691 693 return false;
+37 -19
fs/xfs/xfs_icache.c
··· 440 440 for_each_online_cpu(cpu) { 441 441 gc = per_cpu_ptr(mp->m_inodegc, cpu); 442 442 if (!llist_empty(&gc->list)) 443 - queue_work_on(cpu, mp->m_inodegc_wq, &gc->work); 443 + mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0); 444 444 } 445 445 } 446 446 ··· 1841 1841 xfs_inodegc_worker( 1842 1842 struct work_struct *work) 1843 1843 { 1844 - struct xfs_inodegc *gc = container_of(work, struct xfs_inodegc, 1845 - work); 1844 + struct xfs_inodegc *gc = container_of(to_delayed_work(work), 1845 + struct xfs_inodegc, work); 1846 1846 struct llist_node *node = llist_del_all(&gc->list); 1847 1847 struct xfs_inode *ip, *n; 1848 1848 ··· 1862 1862 } 1863 1863 1864 1864 /* 1865 + * Expedite all pending inodegc work to run immediately. This does not wait for 1866 + * completion of the work. 1867 + */ 1868 + void 1869 + xfs_inodegc_push( 1870 + struct xfs_mount *mp) 1871 + { 1872 + if (!xfs_is_inodegc_enabled(mp)) 1873 + return; 1874 + trace_xfs_inodegc_push(mp, __return_address); 1875 + xfs_inodegc_queue_all(mp); 1876 + } 1877 + 1878 + /* 1865 1879 * Force all currently queued inode inactivation work to run immediately and 1866 1880 * wait for the work to finish. 
1867 1881 */ ··· 1883 1869 xfs_inodegc_flush( 1884 1870 struct xfs_mount *mp) 1885 1871 { 1886 - if (!xfs_is_inodegc_enabled(mp)) 1887 - return; 1888 - 1872 + xfs_inodegc_push(mp); 1889 1873 trace_xfs_inodegc_flush(mp, __return_address); 1890 - 1891 - xfs_inodegc_queue_all(mp); 1892 1874 flush_workqueue(mp->m_inodegc_wq); 1893 1875 } 1894 1876 ··· 2024 2014 struct xfs_inodegc *gc; 2025 2015 int items; 2026 2016 unsigned int shrinker_hits; 2017 + unsigned long queue_delay = 1; 2027 2018 2028 2019 trace_xfs_inode_set_need_inactive(ip); 2029 2020 spin_lock(&ip->i_flags_lock); ··· 2036 2025 items = READ_ONCE(gc->items); 2037 2026 WRITE_ONCE(gc->items, items + 1); 2038 2027 shrinker_hits = READ_ONCE(gc->shrinker_hits); 2039 - put_cpu_ptr(gc); 2040 2028 2041 - if (!xfs_is_inodegc_enabled(mp)) 2029 + /* 2030 + * We queue the work while holding the current CPU so that the work 2031 + * is scheduled to run on this CPU. 2032 + */ 2033 + if (!xfs_is_inodegc_enabled(mp)) { 2034 + put_cpu_ptr(gc); 2042 2035 return; 2043 - 2044 - if (xfs_inodegc_want_queue_work(ip, items)) { 2045 - trace_xfs_inodegc_queue(mp, __return_address); 2046 - queue_work(mp->m_inodegc_wq, &gc->work); 2047 2036 } 2037 + 2038 + if (xfs_inodegc_want_queue_work(ip, items)) 2039 + queue_delay = 0; 2040 + 2041 + trace_xfs_inodegc_queue(mp, __return_address); 2042 + mod_delayed_work(mp->m_inodegc_wq, &gc->work, queue_delay); 2043 + put_cpu_ptr(gc); 2048 2044 2049 2045 if (xfs_inodegc_want_flush_work(ip, items, shrinker_hits)) { 2050 2046 trace_xfs_inodegc_throttle(mp, __return_address); 2051 - flush_work(&gc->work); 2047 + flush_delayed_work(&gc->work); 2052 2048 } 2053 2049 } 2054 2050 ··· 2072 2054 unsigned int count = 0; 2073 2055 2074 2056 dead_gc = per_cpu_ptr(mp->m_inodegc, dead_cpu); 2075 - cancel_work_sync(&dead_gc->work); 2057 + cancel_delayed_work_sync(&dead_gc->work); 2076 2058 2077 2059 if (llist_empty(&dead_gc->list)) 2078 2060 return; ··· 2091 2073 llist_add_batch(first, last, &gc->list); 2092 
2074 count += READ_ONCE(gc->items); 2093 2075 WRITE_ONCE(gc->items, count); 2094 - put_cpu_ptr(gc); 2095 2076 2096 2077 if (xfs_is_inodegc_enabled(mp)) { 2097 2078 trace_xfs_inodegc_queue(mp, __return_address); 2098 - queue_work(mp->m_inodegc_wq, &gc->work); 2079 + mod_delayed_work(mp->m_inodegc_wq, &gc->work, 0); 2099 2080 } 2081 + put_cpu_ptr(gc); 2100 2082 } 2101 2083 2102 2084 /* ··· 2191 2173 unsigned int h = READ_ONCE(gc->shrinker_hits); 2192 2174 2193 2175 WRITE_ONCE(gc->shrinker_hits, h + 1); 2194 - queue_work_on(cpu, mp->m_inodegc_wq, &gc->work); 2176 + mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0); 2195 2177 no_items = false; 2196 2178 } 2197 2179 }
+1
fs/xfs/xfs_icache.h
··· 76 76 void xfs_blockgc_start(struct xfs_mount *mp); 77 77 78 78 void xfs_inodegc_worker(struct work_struct *work); 79 + void xfs_inodegc_push(struct xfs_mount *mp); 79 80 void xfs_inodegc_flush(struct xfs_mount *mp); 80 81 void xfs_inodegc_stop(struct xfs_mount *mp); 81 82 void xfs_inodegc_start(struct xfs_mount *mp);
+25 -39
fs/xfs/xfs_inode.c
··· 132 132 } 133 133 134 134 /* 135 + * You can't set both SHARED and EXCL for the same lock, 136 + * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_MMAPLOCK_SHARED, 137 + * XFS_MMAPLOCK_EXCL, XFS_ILOCK_SHARED, XFS_ILOCK_EXCL are valid values 138 + * to set in lock_flags. 139 + */ 140 + static inline void 141 + xfs_lock_flags_assert( 142 + uint lock_flags) 143 + { 144 + ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) != 145 + (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)); 146 + ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) != 147 + (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)); 148 + ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) != 149 + (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); 150 + ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0); 151 + ASSERT(lock_flags != 0); 152 + } 153 + 154 + /* 135 155 * In addition to i_rwsem in the VFS inode, the xfs inode contains 2 136 156 * multi-reader locks: invalidate_lock and the i_lock. This routine allows 137 157 * various combinations of the locks to be obtained. ··· 188 168 { 189 169 trace_xfs_ilock(ip, lock_flags, _RET_IP_); 190 170 191 - /* 192 - * You can't set both SHARED and EXCL for the same lock, 193 - * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED, 194 - * and XFS_ILOCK_EXCL are valid values to set in lock_flags. 
195 - */ 196 - ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) != 197 - (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)); 198 - ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) != 199 - (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)); 200 - ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) != 201 - (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); 202 - ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0); 171 + xfs_lock_flags_assert(lock_flags); 203 172 204 173 if (lock_flags & XFS_IOLOCK_EXCL) { 205 174 down_write_nested(&VFS_I(ip)->i_rwsem, ··· 231 222 { 232 223 trace_xfs_ilock_nowait(ip, lock_flags, _RET_IP_); 233 224 234 - /* 235 - * You can't set both SHARED and EXCL for the same lock, 236 - * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED, 237 - * and XFS_ILOCK_EXCL are valid values to set in lock_flags. 238 - */ 239 - ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) != 240 - (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)); 241 - ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) != 242 - (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)); 243 - ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) != 244 - (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); 245 - ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0); 225 + xfs_lock_flags_assert(lock_flags); 246 226 247 227 if (lock_flags & XFS_IOLOCK_EXCL) { 248 228 if (!down_write_trylock(&VFS_I(ip)->i_rwsem)) ··· 289 291 xfs_inode_t *ip, 290 292 uint lock_flags) 291 293 { 292 - /* 293 - * You can't set both SHARED and EXCL for the same lock, 294 - * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED, 295 - * and XFS_ILOCK_EXCL are valid values to set in lock_flags. 
296 - */ 297 - ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) != 298 - (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)); 299 - ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) != 300 - (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)); 301 - ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) != 302 - (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); 303 - ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0); 304 - ASSERT(lock_flags != 0); 294 + xfs_lock_flags_assert(lock_flags); 305 295 306 296 if (lock_flags & XFS_IOLOCK_EXCL) 307 297 up_write(&VFS_I(ip)->i_rwsem); ··· 365 379 } 366 380 367 381 if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) { 368 - return __xfs_rwsem_islocked(&VFS_I(ip)->i_rwsem, 369 - (lock_flags & XFS_IOLOCK_SHARED)); 382 + return __xfs_rwsem_islocked(&VFS_I(ip)->i_mapping->invalidate_lock, 383 + (lock_flags & XFS_MMAPLOCK_SHARED)); 370 384 } 371 385 372 386 if (lock_flags & (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED)) {
+7 -2
fs/xfs/xfs_log.c
··· 2092 2092 xlog_in_core_t *iclog, *next_iclog; 2093 2093 int i; 2094 2094 2095 - xlog_cil_destroy(log); 2096 - 2097 2095 /* 2098 2096 * Cycle all the iclogbuf locks to make sure all log IO completion 2099 2097 * is done before we tear down these buffers. ··· 2102 2104 up(&iclog->ic_sema); 2103 2105 iclog = iclog->ic_next; 2104 2106 } 2107 + 2108 + /* 2109 + * Destroy the CIL after waiting for iclog IO completion because an 2110 + * iclog EIO error will try to shut down the log, which accesses the 2111 + * CIL to wake up the waiters. 2112 + */ 2113 + xlog_cil_destroy(log); 2105 2114 2106 2115 iclog = log->l_iclog; 2107 2116 for (i = 0; i < log->l_iclog_bufs; i++) {
+1 -1
fs/xfs/xfs_mount.h
··· 61 61 */ 62 62 struct xfs_inodegc { 63 63 struct llist_head list; 64 - struct work_struct work; 64 + struct delayed_work work; 65 65 66 66 /* approximate count of inodes in the list */ 67 67 unsigned int items;
+6 -3
fs/xfs/xfs_qm_syscalls.c
··· 454 454 struct xfs_dquot *dqp; 455 455 int error; 456 456 457 - /* Flush inodegc work at the start of a quota reporting scan. */ 457 + /* 458 + * Expedite pending inodegc work at the start of a quota reporting 459 + * scan but don't block waiting for it to complete. 460 + */ 458 461 if (id == 0) 459 - xfs_inodegc_flush(mp); 462 + xfs_inodegc_push(mp); 460 463 461 464 /* 462 465 * Try to get the dquot. We don't want it allocated on disk, so don't ··· 501 498 502 499 /* Flush inodegc work at the start of a quota reporting scan. */ 503 500 if (*id == 0) 504 - xfs_inodegc_flush(mp); 501 + xfs_inodegc_push(mp); 505 502 506 503 error = xfs_qm_dqget_next(mp, *id, type, &dqp); 507 504 if (error)
+6 -3
fs/xfs/xfs_super.c
··· 797 797 xfs_extlen_t lsize; 798 798 int64_t ffree; 799 799 800 - /* Wait for whatever inactivations are in progress. */ 801 - xfs_inodegc_flush(mp); 800 + /* 801 + * Expedite background inodegc but don't wait. We do not want to block 802 + * here waiting hours for a billion extent file to be truncated. 803 + */ 804 + xfs_inodegc_push(mp); 802 805 803 806 statp->f_type = XFS_SUPER_MAGIC; 804 807 statp->f_namelen = MAXNAMELEN - 1; ··· 1077 1074 gc = per_cpu_ptr(mp->m_inodegc, cpu); 1078 1075 init_llist_head(&gc->list); 1079 1076 gc->items = 0; 1080 - INIT_WORK(&gc->work, xfs_inodegc_worker); 1077 + INIT_DELAYED_WORK(&gc->work, xfs_inodegc_worker); 1081 1078 } 1082 1079 return 0; 1083 1080 }
+1
fs/xfs/xfs_trace.h
··· 240 240 TP_PROTO(struct xfs_mount *mp, void *caller_ip), \ 241 241 TP_ARGS(mp, caller_ip)) 242 242 DEFINE_FS_EVENT(xfs_inodegc_flush); 243 + DEFINE_FS_EVENT(xfs_inodegc_push); 243 244 DEFINE_FS_EVENT(xfs_inodegc_start); 244 245 DEFINE_FS_EVENT(xfs_inodegc_stop); 245 246 DEFINE_FS_EVENT(xfs_inodegc_queue);
+1
include/acpi/cppc_acpi.h
··· 145 145 extern int acpi_get_psd_map(unsigned int cpu, struct cppc_cpudata *cpu_data); 146 146 extern unsigned int cppc_get_transition_latency(int cpu); 147 147 extern bool cpc_ffh_supported(void); 148 + extern bool cpc_supported_by_cpu(void); 148 149 extern int cpc_read_ffh(int cpunum, struct cpc_reg *reg, u64 *val); 149 150 extern int cpc_write_ffh(int cpunum, struct cpc_reg *reg, u64 val); 150 151 #else /* !CONFIG_ACPI_CPPC_LIB */
+1 -1
include/linux/acpi.h
··· 584 584 extern bool osc_sb_apei_support_acked; 585 585 extern bool osc_pc_lpi_support_confirmed; 586 586 extern bool osc_sb_native_usb4_support_confirmed; 587 - extern bool osc_sb_cppc_not_supported; 587 + extern bool osc_sb_cppc2_support_acked; 588 588 extern bool osc_cpc_flexible_adr_space_confirmed; 589 589 590 590 /* USB4 Capabilities */
+2
include/linux/compiler_types.h
··· 24 24 /* context/locking */ 25 25 # define __must_hold(x) __attribute__((context(x,1,1))) 26 26 # define __acquires(x) __attribute__((context(x,0,1))) 27 + # define __cond_acquires(x) __attribute__((context(x,0,-1))) 27 28 # define __releases(x) __attribute__((context(x,1,0))) 28 29 # define __acquire(x) __context__(x,1) 29 30 # define __release(x) __context__(x,-1) ··· 51 50 /* context/locking */ 52 51 # define __must_hold(x) 53 52 # define __acquires(x) 53 + # define __cond_acquires(x) 54 54 # define __releases(x) 55 55 # define __acquire(x) (void)0 56 56 # define __release(x) (void)0
+5
include/linux/devfreq.h
··· 148 148 * reevaluate operable frequencies. Devfreq users may use 149 149 * devfreq.nb to the corresponding register notifier call chain. 150 150 * @work: delayed work for load monitoring. 151 + * @freq_table: current frequency table used by the devfreq driver. 152 + * @max_state: count of entry present in the frequency table. 151 153 * @previous_freq: previously configured frequency value. 152 154 * @last_status: devfreq user device info, performance statistics 153 155 * @data: Private data of the governor. The devfreq framework does not ··· 186 184 struct opp_table *opp_table; 187 185 struct notifier_block nb; 188 186 struct delayed_work work; 187 + 188 + unsigned long *freq_table; 189 + unsigned int max_state; 189 190 190 191 unsigned long previous_freq; 191 192 struct devfreq_dev_status last_status;
+1 -1
include/linux/dim.h
··· 21 21 * We consider 10% difference as significant. 22 22 */ 23 23 #define IS_SIGNIFICANT_DIFF(val, ref) \ 24 - (((100UL * abs((val) - (ref))) / (ref)) > 10) 24 + ((ref) && (((100UL * abs((val) - (ref))) / (ref)) > 10)) 25 25 26 26 /* 27 27 * Calculate the gap between two values.
+4
include/linux/fanotify.h
··· 111 111 FANOTIFY_PERM_EVENTS | \ 112 112 FAN_Q_OVERFLOW | FAN_ONDIR) 113 113 114 + /* Events and flags relevant only for directories */ 115 + #define FANOTIFY_DIRONLY_EVENT_BITS (FANOTIFY_DIRENT_EVENTS | \ 116 + FAN_EVENT_ON_CHILD | FAN_ONDIR) 117 + 114 118 #define ALL_FANOTIFY_EVENT_BITS (FANOTIFY_OUTGOING_EVENTS | \ 115 119 FANOTIFY_EVENT_FLAGS) 116 120
+4
include/linux/fbcon.h
··· 15 15 void fbcon_get_requirement(struct fb_info *info, 16 16 struct fb_blit_caps *caps); 17 17 void fbcon_fb_blanked(struct fb_info *info, int blank); 18 + int fbcon_modechange_possible(struct fb_info *info, 19 + struct fb_var_screeninfo *var); 18 20 void fbcon_update_vcs(struct fb_info *info, bool all); 19 21 void fbcon_remap_all(struct fb_info *info); 20 22 int fbcon_set_con2fb_map_ioctl(void __user *argp); ··· 35 33 static inline void fbcon_get_requirement(struct fb_info *info, 36 34 struct fb_blit_caps *caps) {} 37 35 static inline void fbcon_fb_blanked(struct fb_info *info, int blank) {} 36 + static inline int fbcon_modechange_possible(struct fb_info *info, 37 + struct fb_var_screeninfo *var) { return 0; } 38 38 static inline void fbcon_update_vcs(struct fb_info *info, bool all) {} 39 39 static inline void fbcon_remap_all(struct fb_info *info) {} 40 40 static inline int fbcon_set_con2fb_map_ioctl(void __user *argp) { return 0; }
+1
include/linux/fscache.h
··· 130 130 #define FSCACHE_COOKIE_DO_PREP_TO_WRITE 12 /* T if cookie needs write preparation */ 131 131 #define FSCACHE_COOKIE_HAVE_DATA 13 /* T if this cookie has data stored */ 132 132 #define FSCACHE_COOKIE_IS_HASHED 14 /* T if this cookie is hashed */ 133 + #define FSCACHE_COOKIE_DO_INVALIDATE 15 /* T if cookie needs invalidation */ 133 134 134 135 enum fscache_cookie_state state; 135 136 u8 advice; /* FSCACHE_ADV_* */
-3
include/linux/intel-iommu.h
··· 612 612 struct device_domain_info { 613 613 struct list_head link; /* link to domain siblings */ 614 614 struct list_head global; /* link to global list */ 615 - struct list_head table; /* link to pasid table */ 616 615 u32 segment; /* PCI segment number */ 617 616 u8 bus; /* PCI bus number */ 618 617 u8 devfn; /* PCI devfn number */ ··· 728 729 void *alloc_pgtable_page(int node); 729 730 void free_pgtable_page(void *vaddr); 730 731 struct intel_iommu *domain_get_iommu(struct dmar_domain *domain); 731 - int for_each_device_domain(int (*fn)(struct device_domain_info *info, 732 - void *data), void *data); 733 732 void iommu_flush_write_buffer(struct intel_iommu *iommu); 734 733 int intel_iommu_enable_pasid(struct intel_iommu *iommu, struct device *dev); 735 734 struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devfn);
-1
include/linux/lockref.h
··· 38 38 extern int lockref_put_return(struct lockref *); 39 39 extern int lockref_get_not_zero(struct lockref *); 40 40 extern int lockref_put_not_zero(struct lockref *); 41 - extern int lockref_get_or_lock(struct lockref *); 42 41 extern int lockref_put_or_lock(struct lockref *); 43 42 44 43 extern void lockref_mark_dead(struct lockref *);
+1 -1
include/linux/memregion.h
··· 16 16 { 17 17 return -ENOMEM; 18 18 } 19 - void memregion_free(int id) 19 + static inline void memregion_free(int id) 20 20 { 21 21 } 22 22 #endif
+1 -1
include/linux/netdevice.h
··· 1671 1671 IFF_FAILOVER_SLAVE = 1<<28, 1672 1672 IFF_L3MDEV_RX_HANDLER = 1<<29, 1673 1673 IFF_LIVE_RENAME_OK = 1<<30, 1674 - IFF_TX_SKB_NO_LINEAR = 1<<31, 1674 + IFF_TX_SKB_NO_LINEAR = BIT_ULL(31), 1675 1675 IFF_CHANGE_PROTO_DOWN = BIT_ULL(32), 1676 1676 }; 1677 1677
+2
include/linux/nvme.h
··· 906 906 __le32 cdw2[2]; 907 907 __le64 metadata; 908 908 union nvme_data_ptr dptr; 909 + struct_group(cdws, 909 910 __le32 cdw10; 910 911 __le32 cdw11; 911 912 __le32 cdw12; 912 913 __le32 cdw13; 913 914 __le32 cdw14; 914 915 __le32 cdw15; 916 + ); 915 917 }; 916 918 917 919 struct nvme_rw_command {
+6
include/linux/phy.h
··· 572 572 * @mdix_ctrl: User setting of crossover 573 573 * @pma_extable: Cached value of PMA/PMD Extended Abilities Register 574 574 * @interrupts: Flag interrupts have been enabled 575 + * @irq_suspended: Flag indicating PHY is suspended and therefore interrupt 576 + * handling shall be postponed until PHY has resumed 577 + * @irq_rerun: Flag indicating interrupts occurred while PHY was suspended, 578 + * requiring a rerun of the interrupt handler after resume 575 579 * @interface: enum phy_interface_t value 576 580 * @skb: Netlink message for cable diagnostics 577 581 * @nest: Netlink nest used for cable diagnostics ··· 630 626 631 627 /* Interrupts are enabled */ 632 628 unsigned interrupts:1; 629 + unsigned irq_suspended:1; 630 + unsigned irq_rerun:1; 633 631 634 632 enum phy_state state; 635 633
+2 -3
include/linux/pm_runtime.h
··· 88 88 extern void pm_runtime_put_suppliers(struct device *dev); 89 89 extern void pm_runtime_new_link(struct device *dev); 90 90 extern void pm_runtime_drop_link(struct device_link *link); 91 - extern void pm_runtime_release_supplier(struct device_link *link, bool check_idle); 91 + extern void pm_runtime_release_supplier(struct device_link *link); 92 92 93 93 extern int devm_pm_runtime_enable(struct device *dev); 94 94 ··· 314 314 static inline void pm_runtime_put_suppliers(struct device *dev) {} 315 315 static inline void pm_runtime_new_link(struct device *dev) {} 316 316 static inline void pm_runtime_drop_link(struct device_link *link) {} 317 - static inline void pm_runtime_release_supplier(struct device_link *link, 318 - bool check_idle) {} 317 + static inline void pm_runtime_release_supplier(struct device_link *link) {} 319 318 320 319 #endif /* !CONFIG_PM */ 321 320
+3 -3
include/linux/refcount.h
··· 361 361 362 362 extern __must_check bool refcount_dec_if_one(refcount_t *r); 363 363 extern __must_check bool refcount_dec_not_one(refcount_t *r); 364 - extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock); 365 - extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock); 364 + extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock); 365 + extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock); 366 366 extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r, 367 367 spinlock_t *lock, 368 - unsigned long *flags); 368 + unsigned long *flags) __cond_acquires(lock); 369 369 #endif /* _LINUX_REFCOUNT_H */
-2
include/linux/rtsx_usb.h
··· 54 54 struct usb_device *pusb_dev; 55 55 struct usb_interface *pusb_intf; 56 56 struct usb_sg_request current_sg; 57 - unsigned char *iobuf; 58 - dma_addr_t iobuf_dma; 59 57 60 58 struct timer_list sg_timer; 61 59 struct mutex dev_mutex;
+17 -5
include/linux/sysfb.h
··· 55 55 int flags; 56 56 }; 57 57 58 + #ifdef CONFIG_SYSFB 59 + 60 + void sysfb_disable(void); 61 + 62 + #else /* CONFIG_SYSFB */ 63 + 64 + static inline void sysfb_disable(void) 65 + { 66 + } 67 + 68 + #endif /* CONFIG_SYSFB */ 69 + 58 70 #ifdef CONFIG_EFI 59 71 60 72 extern struct efifb_dmi_info efifb_dmi_list[]; ··· 84 72 85 73 bool sysfb_parse_mode(const struct screen_info *si, 86 74 struct simplefb_platform_data *mode); 87 - int sysfb_create_simplefb(const struct screen_info *si, 88 - const struct simplefb_platform_data *mode); 75 + struct platform_device *sysfb_create_simplefb(const struct screen_info *si, 76 + const struct simplefb_platform_data *mode); 89 77 90 78 #else /* CONFIG_SYSFB_SIMPLE */ 91 79 ··· 95 83 return false; 96 84 } 97 85 98 - static inline int sysfb_create_simplefb(const struct screen_info *si, 99 - const struct simplefb_platform_data *mode) 86 + static inline struct platform_device *sysfb_create_simplefb(const struct screen_info *si, 87 + const struct simplefb_platform_data *mode) 100 88 { 101 - return -EINVAL; 89 + return ERR_PTR(-EINVAL); 102 90 } 103 91 104 92 #endif /* CONFIG_SYSFB_SIMPLE */
+2
include/linux/virtio_config.h
··· 257 257 258 258 WARN_ON(status & VIRTIO_CONFIG_S_DRIVER_OK); 259 259 260 + #ifdef CONFIG_VIRTIO_HARDEN_NOTIFICATION 260 261 /* 261 262 * The virtio_synchronize_cbs() makes sure vring_interrupt() 262 263 * will see the driver specific setup if it sees vq->broken ··· 265 264 */ 266 265 virtio_synchronize_cbs(dev); 267 266 __virtio_unbreak_device(dev); 267 + #endif 268 268 /* 269 269 * The transport should ensure the visibility of vq->broken 270 270 * before setting DRIVER_OK. See the comments for the transport
+1
include/net/flow_offload.h
··· 152 152 FLOW_ACTION_PIPE, 153 153 FLOW_ACTION_VLAN_PUSH_ETH, 154 154 FLOW_ACTION_VLAN_POP_ETH, 155 + FLOW_ACTION_CONTINUE, 155 156 NUM_FLOW_ACTIONS, 156 157 }; 157 158
+10 -6
include/net/netfilter/nf_tables.h
··· 1338 1338 /** 1339 1339 * struct nft_traceinfo - nft tracing information and state 1340 1340 * 1341 + * @trace: other struct members are initialised 1342 + * @nf_trace: copy of skb->nf_trace before rule evaluation 1343 + * @type: event type (enum nft_trace_types) 1344 + * @skbid: hash of skb to be used as trace id 1345 + * @packet_dumped: packet headers sent in a previous traceinfo message 1341 1346 * @pkt: pktinfo currently processed 1342 1347 * @basechain: base chain currently processed 1343 1348 * @chain: chain currently processed 1344 1349 * @rule: rule that was evaluated 1345 1350 * @verdict: verdict given by rule 1346 - * @type: event type (enum nft_trace_types) 1347 - * @packet_dumped: packet headers sent in a previous traceinfo message 1348 - * @trace: other struct members are initialised 1349 1351 */ 1350 1352 struct nft_traceinfo { 1353 + bool trace; 1354 + bool nf_trace; 1355 + bool packet_dumped; 1356 + enum nft_trace_types type:8; 1357 + u32 skbid; 1351 1358 const struct nft_pktinfo *pkt; 1352 1359 const struct nft_base_chain *basechain; 1353 1360 const struct nft_chain *chain; 1354 1361 const struct nft_rule_dp *rule; 1355 1362 const struct nft_verdict *verdict; 1356 - enum nft_trace_types type; 1357 - bool packet_dumped; 1358 - bool trace; 1359 1363 }; 1360 1364 1361 1365 void nft_trace_init(struct nft_traceinfo *info, const struct nft_pktinfo *pkt,
-2
include/sound/soc.h
··· 408 408 409 409 struct snd_soc_jack_gpio; 410 410 411 - typedef int (*hw_write_t)(void *,const char* ,int); 412 - 413 411 enum snd_soc_pcm_subclass { 414 412 SND_SOC_PCM_CLASS_PCM = 0, 415 413 SND_SOC_PCM_CLASS_BE = 1,
+2 -2
include/uapi/drm/drm_fourcc.h
··· 1444 1444 #define AMD_FMT_MOD_PIPE_MASK 0x7 1445 1445 1446 1446 #define AMD_FMT_MOD_SET(field, value) \ 1447 - ((uint64_t)(value) << AMD_FMT_MOD_##field##_SHIFT) 1447 + ((__u64)(value) << AMD_FMT_MOD_##field##_SHIFT) 1448 1448 #define AMD_FMT_MOD_GET(field, value) \ 1449 1449 (((value) >> AMD_FMT_MOD_##field##_SHIFT) & AMD_FMT_MOD_##field##_MASK) 1450 1450 #define AMD_FMT_MOD_CLEAR(field) \ 1451 - (~((uint64_t)AMD_FMT_MOD_##field##_MASK << AMD_FMT_MOD_##field##_SHIFT)) 1451 + (~((__u64)AMD_FMT_MOD_##field##_MASK << AMD_FMT_MOD_##field##_SHIFT)) 1452 1452 1453 1453 #if defined(__cplusplus) 1454 1454 }
+5 -2
include/uapi/linux/io_uring.h
··· 22 22 union { 23 23 __u64 off; /* offset into file */ 24 24 __u64 addr2; 25 - __u32 cmd_op; 25 + struct { 26 + __u32 cmd_op; 27 + __u32 __pad1; 28 + }; 26 29 }; 27 30 union { 28 31 __u64 addr; /* pointer to buffer or iovecs */ ··· 247 244 #define IORING_ASYNC_CANCEL_ANY (1U << 2) 248 245 249 246 /* 250 - * send/sendmsg and recv/recvmsg flags (sqe->addr2) 247 + * send/sendmsg and recv/recvmsg flags (sqe->ioprio) 251 248 * 252 249 * IORING_RECVSEND_POLL_FIRST If set, instead of first attempting to send 253 250 * or receive and arm poll if that yields an
+5 -4
include/uapi/linux/mptcp.h
··· 2 2 #ifndef _UAPI_MPTCP_H 3 3 #define _UAPI_MPTCP_H 4 4 5 + #ifndef __KERNEL__ 6 + #include <netinet/in.h> /* for sockaddr_in and sockaddr_in6 */ 7 + #include <sys/socket.h> /* for struct sockaddr */ 8 + #endif 9 + 5 10 #include <linux/const.h> 6 11 #include <linux/types.h> 7 12 #include <linux/in.h> /* for sockaddr_in */ 8 13 #include <linux/in6.h> /* for sockaddr_in6 */ 9 14 #include <linux/socket.h> /* for sockaddr_storage and sa_family */ 10 - 11 - #ifndef __KERNEL__ 12 - #include <sys/socket.h> /* for struct sockaddr */ 13 - #endif 14 15 15 16 #define MPTCP_SUBFLOW_FLAG_MCAP_REM _BITUL(0) 16 17 #define MPTCP_SUBFLOW_FLAG_MCAP_LOC _BITUL(1)
+2
include/video/of_display_timing.h
··· 8 8 #ifndef __LINUX_OF_DISPLAY_TIMING_H 9 9 #define __LINUX_OF_DISPLAY_TIMING_H 10 10 11 + #include <linux/errno.h> 12 + 11 13 struct device_node; 12 14 struct display_timing; 13 15 struct display_timings;
+48 -67
kernel/bpf/verifier.c
··· 1562 1562 reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off); 1563 1563 } 1564 1564 1565 + static void reg_bounds_sync(struct bpf_reg_state *reg) 1566 + { 1567 + /* We might have learned new bounds from the var_off. */ 1568 + __update_reg_bounds(reg); 1569 + /* We might have learned something about the sign bit. */ 1570 + __reg_deduce_bounds(reg); 1571 + /* We might have learned some bits from the bounds. */ 1572 + __reg_bound_offset(reg); 1573 + /* Intersecting with the old var_off might have improved our bounds 1574 + * slightly, e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc), 1575 + * then new var_off is (0; 0x7f...fc) which improves our umax. 1576 + */ 1577 + __update_reg_bounds(reg); 1578 + } 1579 + 1565 1580 static bool __reg32_bound_s64(s32 a) 1566 1581 { 1567 1582 return a >= 0 && a <= S32_MAX; ··· 1618 1603 * so they do not impact tnum bounds calculation. 1619 1604 */ 1620 1605 __mark_reg64_unbounded(reg); 1621 - __update_reg_bounds(reg); 1622 1606 } 1623 - 1624 - /* Intersecting with the old var_off might have improved our bounds 1625 - * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc), 1626 - * then new var_off is (0; 0x7f...fc) which improves our umax. 1627 - */ 1628 - __reg_deduce_bounds(reg); 1629 - __reg_bound_offset(reg); 1630 - __update_reg_bounds(reg); 1607 + reg_bounds_sync(reg); 1631 1608 } 1632 1609 1633 1610 static bool __reg64_bound_s32(s64 a) ··· 1635 1628 static void __reg_combine_64_into_32(struct bpf_reg_state *reg) 1636 1629 { 1637 1630 __mark_reg32_unbounded(reg); 1638 - 1639 1631 if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) { 1640 1632 reg->s32_min_value = (s32)reg->smin_value; 1641 1633 reg->s32_max_value = (s32)reg->smax_value; ··· 1643 1637 reg->u32_min_value = (u32)reg->umin_value; 1644 1638 reg->u32_max_value = (u32)reg->umax_value; 1645 1639 } 1646 - 1647 - /* Intersecting with the old var_off might have improved our bounds 1648 - * slightly. e.g. 
if umax was 0x7f...f and var_off was (0; 0xf...fc), 1649 - * then new var_off is (0; 0x7f...fc) which improves our umax. 1650 - */ 1651 - __reg_deduce_bounds(reg); 1652 - __reg_bound_offset(reg); 1653 - __update_reg_bounds(reg); 1640 + reg_bounds_sync(reg); 1654 1641 } 1655 1642 1656 1643 /* Mark a register as having a completely unknown (scalar) value. */ ··· 6942 6943 ret_reg->s32_max_value = meta->msize_max_value; 6943 6944 ret_reg->smin_value = -MAX_ERRNO; 6944 6945 ret_reg->s32_min_value = -MAX_ERRNO; 6945 - __reg_deduce_bounds(ret_reg); 6946 - __reg_bound_offset(ret_reg); 6947 - __update_reg_bounds(ret_reg); 6946 + reg_bounds_sync(ret_reg); 6948 6947 } 6949 6948 6950 6949 static int ··· 8199 8202 8200 8203 if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type)) 8201 8204 return -EINVAL; 8202 - 8203 - __update_reg_bounds(dst_reg); 8204 - __reg_deduce_bounds(dst_reg); 8205 - __reg_bound_offset(dst_reg); 8206 - 8205 + reg_bounds_sync(dst_reg); 8207 8206 if (sanitize_check_bounds(env, insn, dst_reg) < 0) 8208 8207 return -EACCES; 8209 8208 if (sanitize_needed(opcode)) { ··· 8937 8944 /* ALU32 ops are zero extended into 64bit register */ 8938 8945 if (alu32) 8939 8946 zext_32_to_64(dst_reg); 8940 - 8941 - __update_reg_bounds(dst_reg); 8942 - __reg_deduce_bounds(dst_reg); 8943 - __reg_bound_offset(dst_reg); 8947 + reg_bounds_sync(dst_reg); 8944 8948 return 0; 8945 8949 } 8946 8950 ··· 9126 9136 insn->dst_reg); 9127 9137 } 9128 9138 zext_32_to_64(dst_reg); 9129 - 9130 - __update_reg_bounds(dst_reg); 9131 - __reg_deduce_bounds(dst_reg); 9132 - __reg_bound_offset(dst_reg); 9139 + reg_bounds_sync(dst_reg); 9133 9140 } 9134 9141 } else { 9135 9142 /* case: R = imm ··· 9564 9577 return; 9565 9578 9566 9579 switch (opcode) { 9580 + /* JEQ/JNE comparison doesn't change the register equivalence. 9581 + * 9582 + * r1 = r2; 9583 + * if (r1 == 42) goto label; 9584 + * ... 9585 + * label: // here both r1 and r2 are known to be 42. 
9586 + * 9587 + * Hence when marking register as known preserve it's ID. 9588 + */ 9567 9589 case BPF_JEQ: 9568 - case BPF_JNE: 9569 - { 9570 - struct bpf_reg_state *reg = 9571 - opcode == BPF_JEQ ? true_reg : false_reg; 9572 - 9573 - /* JEQ/JNE comparison doesn't change the register equivalence. 9574 - * r1 = r2; 9575 - * if (r1 == 42) goto label; 9576 - * ... 9577 - * label: // here both r1 and r2 are known to be 42. 9578 - * 9579 - * Hence when marking register as known preserve it's ID. 9580 - */ 9581 - if (is_jmp32) 9582 - __mark_reg32_known(reg, val32); 9583 - else 9584 - ___mark_reg_known(reg, val); 9590 + if (is_jmp32) { 9591 + __mark_reg32_known(true_reg, val32); 9592 + true_32off = tnum_subreg(true_reg->var_off); 9593 + } else { 9594 + ___mark_reg_known(true_reg, val); 9595 + true_64off = true_reg->var_off; 9596 + } 9585 9597 break; 9586 - } 9598 + case BPF_JNE: 9599 + if (is_jmp32) { 9600 + __mark_reg32_known(false_reg, val32); 9601 + false_32off = tnum_subreg(false_reg->var_off); 9602 + } else { 9603 + ___mark_reg_known(false_reg, val); 9604 + false_64off = false_reg->var_off; 9605 + } 9606 + break; 9587 9607 case BPF_JSET: 9588 9608 if (is_jmp32) { 9589 9609 false_32off = tnum_and(false_32off, tnum_const(~val32)); ··· 9729 9735 dst_reg->smax_value); 9730 9736 src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off, 9731 9737 dst_reg->var_off); 9732 - /* We might have learned new bounds from the var_off. */ 9733 - __update_reg_bounds(src_reg); 9734 - __update_reg_bounds(dst_reg); 9735 - /* We might have learned something about the sign bit. */ 9736 - __reg_deduce_bounds(src_reg); 9737 - __reg_deduce_bounds(dst_reg); 9738 - /* We might have learned some bits from the bounds. */ 9739 - __reg_bound_offset(src_reg); 9740 - __reg_bound_offset(dst_reg); 9741 - /* Intersecting with the old var_off might have improved our bounds 9742 - * slightly. e.g. 
if umax was 0x7f...f and var_off was (0; 0xf...fc), 9743 - * then new var_off is (0; 0x7f...fc) which improves our umax. 9744 - */ 9745 - __update_reg_bounds(src_reg); 9746 - __update_reg_bounds(dst_reg); 9738 + reg_bounds_sync(src_reg); 9739 + reg_bounds_sync(dst_reg); 9747 9740 } 9748 9741 9749 9742 static void reg_combine_min_max(struct bpf_reg_state *true_src,
+1 -1
kernel/ptrace.c
··· 222 222 if (lock_task_sighand(task, &flags)) { 223 223 task->jobctl &= ~JOBCTL_PTRACE_FROZEN; 224 224 if (__fatal_signal_pending(task)) { 225 - task->jobctl &= ~TASK_TRACED; 225 + task->jobctl &= ~JOBCTL_TRACED; 226 226 wake_up_state(task, __TASK_TRACED); 227 227 } 228 228 unlock_task_sighand(task, &flags);
+4 -4
kernel/signal.c
··· 2029 2029 bool autoreap = false; 2030 2030 u64 utime, stime; 2031 2031 2032 - BUG_ON(sig == -1); 2032 + WARN_ON_ONCE(sig == -1); 2033 2033 2034 - /* do_notify_parent_cldstop should have been called instead. */ 2035 - BUG_ON(task_is_stopped_or_traced(tsk)); 2034 + /* do_notify_parent_cldstop should have been called instead. */ 2035 + WARN_ON_ONCE(task_is_stopped_or_traced(tsk)); 2036 2036 2037 - BUG_ON(!tsk->ptrace && 2037 + WARN_ON_ONCE(!tsk->ptrace && 2038 2038 (tsk->group_leader != tsk || !thread_group_empty(tsk))); 2039 2039 2040 2040 /* Wake up all pidfd waiters */
-1
kernel/time/tick-sched.c
··· 526 526 cpumask_copy(tick_nohz_full_mask, cpumask); 527 527 tick_nohz_full_running = true; 528 528 } 529 - EXPORT_SYMBOL_GPL(tick_nohz_full_setup); 530 529 531 530 static int tick_nohz_cpu_down(unsigned int cpu) 532 531 {
+2 -1
lib/idr.c
··· 491 491 struct ida_bitmap *bitmap; 492 492 unsigned long flags; 493 493 494 - BUG_ON((int)id < 0); 494 + if ((int)id < 0) 495 + return; 495 496 496 497 xas_lock_irqsave(&xas, flags); 497 498 bitmap = xas_load(&xas);
-25
lib/lockref.c
··· 111 111 EXPORT_SYMBOL(lockref_put_not_zero); 112 112 113 113 /** 114 - * lockref_get_or_lock - Increments count unless the count is 0 or dead 115 - * @lockref: pointer to lockref structure 116 - * Return: 1 if count updated successfully or 0 if count was zero 117 - * and we got the lock instead. 118 - */ 119 - int lockref_get_or_lock(struct lockref *lockref) 120 - { 121 - CMPXCHG_LOOP( 122 - new.count++; 123 - if (old.count <= 0) 124 - break; 125 - , 126 - return 1; 127 - ); 128 - 129 - spin_lock(&lockref->lock); 130 - if (lockref->count <= 0) 131 - return 0; 132 - lockref->count++; 133 - spin_unlock(&lockref->lock); 134 - return 1; 135 - } 136 - EXPORT_SYMBOL(lockref_get_or_lock); 137 - 138 - /** 139 114 * lockref_put_return - Decrement reference count if possible 140 115 * @lockref: pointer to lockref structure 141 116 *
+4 -1
lib/sbitmap.c
··· 528 528 529 529 sbitmap_deferred_clear(map); 530 530 if (map->word == (1UL << (map_depth - 1)) - 1) 531 - continue; 531 + goto next; 532 532 533 533 nr = find_first_zero_bit(&map->word, map_depth); 534 534 if (nr + nr_tags <= map_depth) { ··· 539 539 get_mask = ((1UL << map_tags) - 1) << nr; 540 540 do { 541 541 val = READ_ONCE(map->word); 542 + if ((val & ~get_mask) != val) 543 + goto next; 542 544 ret = atomic_long_cmpxchg(ptr, val, get_mask | val); 543 545 } while (ret != val); 544 546 get_mask = (get_mask & ~ret) >> nr; ··· 551 549 return get_mask; 552 550 } 553 551 } 552 + next: 554 553 /* Jump to next index. */ 555 554 if (++index >= sb->map_nr) 556 555 index = 0;
+3
net/bluetooth/hci_core.c
··· 571 571 goto done; 572 572 } 573 573 574 + cancel_work_sync(&hdev->power_on); 574 575 if (hci_dev_test_and_clear_flag(hdev, HCI_AUTO_OFF)) 575 576 cancel_delayed_work(&hdev->power_off); 576 577 ··· 2675 2674 write_lock(&hci_dev_list_lock); 2676 2675 list_del(&hdev->list); 2677 2676 write_unlock(&hci_dev_list_lock); 2677 + 2678 + cancel_work_sync(&hdev->power_on); 2678 2679 2679 2680 hci_cmd_sync_clear(hdev); 2680 2681
-1
net/bluetooth/hci_sync.c
··· 4088 4088 4089 4089 bt_dev_dbg(hdev, ""); 4090 4090 4091 - cancel_work_sync(&hdev->power_on); 4092 4091 cancel_delayed_work(&hdev->power_off); 4093 4092 cancel_delayed_work(&hdev->ncmd_timer); 4094 4093
+18 -3
net/bridge/br_netfilter_hooks.c
··· 1012 1012 return okfn(net, sk, skb); 1013 1013 1014 1014 ops = nf_hook_entries_get_hook_ops(e); 1015 - for (i = 0; i < e->num_hook_entries && 1016 - ops[i]->priority <= NF_BR_PRI_BRNF; i++) 1017 - ; 1015 + for (i = 0; i < e->num_hook_entries; i++) { 1016 + /* These hooks have already been called */ 1017 + if (ops[i]->priority < NF_BR_PRI_BRNF) 1018 + continue; 1019 + 1020 + /* These hooks have not been called yet, run them. */ 1021 + if (ops[i]->priority > NF_BR_PRI_BRNF) 1022 + break; 1023 + 1024 + /* take a closer look at NF_BR_PRI_BRNF. */ 1025 + if (ops[i]->hook == br_nf_pre_routing) { 1026 + /* This hook diverted the skb to this function, 1027 + * hooks after this have not been run yet. 1028 + */ 1029 + i++; 1030 + break; 1031 + } 1032 + } 1018 1033 1019 1034 nf_hook_state_init(&state, hook, NFPROTO_BRIDGE, indev, outdev, 1020 1035 sk, net, okfn);
+14 -4
net/can/bcm.c
··· 100 100 101 101 struct bcm_op { 102 102 struct list_head list; 103 + struct rcu_head rcu; 103 104 int ifindex; 104 105 canid_t can_id; 105 106 u32 flags; ··· 719 718 return NULL; 720 719 } 721 720 722 - static void bcm_remove_op(struct bcm_op *op) 721 + static void bcm_free_op_rcu(struct rcu_head *rcu_head) 723 722 { 724 - hrtimer_cancel(&op->timer); 725 - hrtimer_cancel(&op->thrtimer); 723 + struct bcm_op *op = container_of(rcu_head, struct bcm_op, rcu); 726 724 727 725 if ((op->frames) && (op->frames != &op->sframe)) 728 726 kfree(op->frames); ··· 730 730 kfree(op->last_frames); 731 731 732 732 kfree(op); 733 + } 734 + 735 + static void bcm_remove_op(struct bcm_op *op) 736 + { 737 + hrtimer_cancel(&op->timer); 738 + hrtimer_cancel(&op->thrtimer); 739 + 740 + call_rcu(&op->rcu, bcm_free_op_rcu); 733 741 } 734 742 735 743 static void bcm_rx_unreg(struct net_device *dev, struct bcm_op *op) ··· 764 756 list_for_each_entry_safe(op, n, ops, list) { 765 757 if ((op->can_id == mh->can_id) && (op->ifindex == ifindex) && 766 758 (op->flags & CAN_FD_FRAME) == (mh->flags & CAN_FD_FRAME)) { 759 + 760 + /* disable automatic timer on frame reception */ 761 + op->flags |= RX_NO_AUTOTIMER; 767 762 768 763 /* 769 764 * Don't care if we're bound or not (due to netdev ··· 796 785 bcm_rx_handler, op); 797 786 798 787 list_del(&op->list); 799 - synchronize_rcu(); 800 788 bcm_remove_op(op); 801 789 return 1; /* done */ 802 790 }
+1 -1
net/ipv4/ip_tunnel_core.c
··· 410 410 u32 mtu = dst_mtu(encap_dst) - headroom; 411 411 412 412 if ((skb_is_gso(skb) && skb_gso_validate_network_len(skb, mtu)) || 413 - (!skb_is_gso(skb) && (skb->len - skb_mac_header_len(skb)) <= mtu)) 413 + (!skb_is_gso(skb) && (skb->len - skb_network_offset(skb)) <= mtu)) 414 414 return 0; 415 415 416 416 skb_dst_update_pmtu_no_confirm(skb, mtu);
+5 -1
net/ipv4/tcp_ipv4.c
··· 1964 1964 struct sock *nsk; 1965 1965 1966 1966 sk = req->rsk_listener; 1967 - drop_reason = tcp_inbound_md5_hash(sk, skb, 1967 + if (!xfrm4_policy_check(sk, XFRM_POLICY_IN, skb)) 1968 + drop_reason = SKB_DROP_REASON_XFRM_POLICY; 1969 + else 1970 + drop_reason = tcp_inbound_md5_hash(sk, skb, 1968 1971 &iph->saddr, &iph->daddr, 1969 1972 AF_INET, dif, sdif); 1970 1973 if (unlikely(drop_reason)) { ··· 2019 2016 } 2020 2017 goto discard_and_relse; 2021 2018 } 2019 + nf_reset_ct(skb); 2022 2020 if (nsk == sk) { 2023 2021 reqsk_put(req); 2024 2022 tcp_v4_restore_cb(skb);
+2 -6
net/ipv6/addrconf.c
··· 1109 1109 goto out; 1110 1110 } 1111 1111 1112 - if (net->ipv6.devconf_all->disable_policy || 1113 - idev->cnf.disable_policy) 1114 - f6i->dst_nopolicy = true; 1115 - 1116 1112 neigh_parms_data_state_setall(idev->nd_parms); 1117 1113 1118 1114 ifa->addr = *cfg->pfx; ··· 5168 5172 fillargs->event = RTM_GETMULTICAST; 5169 5173 5170 5174 /* multicast address */ 5171 - for (ifmca = rcu_dereference(idev->mc_list); 5175 + for (ifmca = rtnl_dereference(idev->mc_list); 5172 5176 ifmca; 5173 - ifmca = rcu_dereference(ifmca->next), ip_idx++) { 5177 + ifmca = rtnl_dereference(ifmca->next), ip_idx++) { 5174 5178 if (ip_idx < s_ip_idx) 5175 5179 continue; 5176 5180 err = inet6_fill_ifmcaddr(skb, ifmca, fillargs);
+8 -1
net/ipv6/route.c
··· 4569 4569 } 4570 4570 4571 4571 f6i = ip6_route_info_create(&cfg, gfp_flags, NULL); 4572 - if (!IS_ERR(f6i)) 4572 + if (!IS_ERR(f6i)) { 4573 4573 f6i->dst_nocount = true; 4574 + 4575 + if (!anycast && 4576 + (net->ipv6.devconf_all->disable_policy || 4577 + idev->cnf.disable_policy)) 4578 + f6i->dst_nopolicy = true; 4579 + } 4580 + 4574 4581 return f6i; 4575 4582 } 4576 4583
-1
net/ipv6/seg6_hmac.c
··· 406 406 407 407 return rhashtable_init(&sdata->hmac_infos, &rht_params); 408 408 } 409 - EXPORT_SYMBOL(seg6_hmac_net_init); 410 409 411 410 void seg6_hmac_exit(void) 412 411 {
+3 -5
net/ipv6/sit.c
··· 323 323 kcalloc(cmax, sizeof(*kp), GFP_KERNEL_ACCOUNT | __GFP_NOWARN) : 324 324 NULL; 325 325 326 - rcu_read_lock(); 327 - 328 326 ca = min(t->prl_count, cmax); 329 327 330 328 if (!kp) { ··· 339 341 } 340 342 } 341 343 342 - c = 0; 344 + rcu_read_lock(); 343 345 for_each_prl_rcu(t->prl) { 344 346 if (c >= cmax) 345 347 break; ··· 351 353 if (kprl.addr != htonl(INADDR_ANY)) 352 354 break; 353 355 } 354 - out: 356 + 355 357 rcu_read_unlock(); 356 358 357 359 len = sizeof(*kp) * c; ··· 360 362 ret = -EFAULT; 361 363 362 364 kfree(kp); 363 - 365 + out: 364 366 return ret; 365 367 } 366 368
+7 -3
net/mptcp/options.c
··· 765 765 opts->suboptions |= OPTION_MPTCP_RST; 766 766 opts->reset_transient = subflow->reset_transient; 767 767 opts->reset_reason = subflow->reset_reason; 768 + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPRSTTX); 768 769 769 770 return true; 770 771 } ··· 789 788 opts->rcvr_key = msk->remote_key; 790 789 791 790 pr_debug("FASTCLOSE key=%llu", opts->rcvr_key); 791 + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFASTCLOSETX); 792 792 return true; 793 793 } 794 794 ··· 811 809 opts->fail_seq = subflow->map_seq; 812 810 813 811 pr_debug("MP_FAIL fail_seq=%llu", opts->fail_seq); 812 + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFAILTX); 814 813 815 814 return true; 816 815 } ··· 836 833 mptcp_established_options_mp_fail(sk, &opt_size, remaining, opts)) { 837 834 *size += opt_size; 838 835 remaining -= opt_size; 839 - MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFASTCLOSETX); 840 836 } 841 837 /* MP_RST can be used with MP_FASTCLOSE and MP_FAIL if there is room */ 842 838 if (mptcp_established_options_rst(sk, skb, &opt_size, remaining, opts)) { 843 839 *size += opt_size; 844 840 remaining -= opt_size; 845 - MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPRSTTX); 846 841 } 847 842 return true; 848 843 } ··· 967 966 goto reset; 968 967 subflow->mp_capable = 0; 969 968 pr_fallback(msk); 970 - __mptcp_do_fallback(msk); 969 + mptcp_do_fallback(ssk); 971 970 return false; 972 971 } 973 972 ··· 1584 1583 *ptr++ = mptcp_option(MPTCPOPT_MP_PRIO, 1585 1584 TCPOLEN_MPTCP_PRIO, 1586 1585 opts->backup, TCPOPT_NOP); 1586 + 1587 + MPTCP_INC_STATS(sock_net((const struct sock *)tp), 1588 + MPTCP_MIB_MPPRIOTX); 1587 1589 } 1588 1590 1589 1591 mp_capable_done:
+4 -6
net/mptcp/pm.c
··· 299 299 { 300 300 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); 301 301 struct mptcp_sock *msk = mptcp_sk(subflow->conn); 302 - struct sock *s = (struct sock *)msk; 303 302 304 303 pr_debug("fail_seq=%llu", fail_seq); 305 304 306 305 if (!READ_ONCE(msk->allow_infinite_fallback)) 307 306 return; 308 307 309 - if (!READ_ONCE(subflow->mp_fail_response_expect)) { 308 + if (!subflow->fail_tout) { 310 309 pr_debug("send MP_FAIL response and infinite map"); 311 310 312 311 subflow->send_mp_fail = 1; 313 - MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPFAILTX); 314 312 subflow->send_infinite_map = 1; 315 - } else if (!sock_flag(sk, SOCK_DEAD)) { 313 + tcp_send_ack(sk); 314 + } else { 316 315 pr_debug("MP_FAIL response received"); 317 - 318 - sk_stop_timer(s, &s->sk_timer); 316 + WRITE_ONCE(subflow->fail_tout, 0); 319 317 } 320 318 } 321 319
+33 -13
net/mptcp/pm_netlink.c
··· 717 717 } 718 718 } 719 719 720 - static int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk, 721 - struct mptcp_addr_info *addr, 722 - u8 bkup) 720 + int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk, 721 + struct mptcp_addr_info *addr, 722 + struct mptcp_addr_info *rem, 723 + u8 bkup) 723 724 { 724 725 struct mptcp_subflow_context *subflow; 725 726 ··· 728 727 729 728 mptcp_for_each_subflow(msk, subflow) { 730 729 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 731 - struct sock *sk = (struct sock *)msk; 732 - struct mptcp_addr_info local; 730 + struct mptcp_addr_info local, remote; 731 + bool slow; 733 732 734 733 local_address((struct sock_common *)ssk, &local); 735 734 if (!mptcp_addresses_equal(&local, addr, addr->port)) 736 735 continue; 737 736 737 + if (rem && rem->family != AF_UNSPEC) { 738 + remote_address((struct sock_common *)ssk, &remote); 739 + if (!mptcp_addresses_equal(&remote, rem, rem->port)) 740 + continue; 741 + } 742 + 743 + slow = lock_sock_fast(ssk); 738 744 if (subflow->backup != bkup) 739 745 msk->last_snd = NULL; 740 746 subflow->backup = bkup; 741 747 subflow->send_mp_prio = 1; 742 748 subflow->request_bkup = bkup; 743 - __MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPPRIOTX); 744 749 745 - spin_unlock_bh(&msk->pm.lock); 746 750 pr_debug("send ack for mp_prio"); 747 - mptcp_subflow_send_ack(ssk); 748 - spin_lock_bh(&msk->pm.lock); 751 + __mptcp_subflow_send_ack(ssk); 752 + unlock_sock_fast(ssk, slow); 749 753 750 754 return 0; 751 755 } ··· 807 801 removed = true; 808 802 __MPTCP_INC_STATS(sock_net(sk), rm_type); 809 803 } 810 - __set_bit(rm_list->ids[i], msk->pm.id_avail_bitmap); 804 + if (rm_type == MPTCP_MIB_RMSUBFLOW) 805 + __set_bit(rm_list->ids[i], msk->pm.id_avail_bitmap); 811 806 if (!removed) 812 807 continue; 813 808 ··· 1823 1816 1824 1817 list.ids[list.nr++] = addr->id; 1825 1818 1819 + spin_lock_bh(&msk->pm.lock); 1826 1820 mptcp_pm_nl_rm_subflow_received(msk, &list); 1827 1821 mptcp_pm_create_subflow_or_signal_addr(msk); 1822 + spin_unlock_bh(&msk->pm.lock); 1828 1823 } 1829 1824 1830 1825 static int mptcp_nl_set_flags(struct net *net, ··· 1844 1835 goto next; 1845 1836 1846 1837 lock_sock(sk); 1847 - spin_lock_bh(&msk->pm.lock); 1848 1838 if (changed & MPTCP_PM_ADDR_FLAG_BACKUP) 1849 - ret = mptcp_pm_nl_mp_prio_send_ack(msk, addr, bkup); 1839 + ret = mptcp_pm_nl_mp_prio_send_ack(msk, addr, NULL, bkup); 1850 1840 if (changed & MPTCP_PM_ADDR_FLAG_FULLMESH) 1851 1841 mptcp_pm_nl_fullmesh(msk, addr); 1852 - spin_unlock_bh(&msk->pm.lock); 1853 1842 release_sock(sk); 1854 1843 1855 1844 next: ··· 1861 1854 static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info) 1862 1855 { 1863 1856 struct mptcp_pm_addr_entry addr = { .addr = { .family = AF_UNSPEC }, }, *entry; 1857 + struct mptcp_pm_addr_entry remote = { .addr = { .family = AF_UNSPEC }, }; 1858 + struct nlattr *attr_rem = info->attrs[MPTCP_PM_ATTR_ADDR_REMOTE]; 1859 + struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN]; 1864 1860 struct nlattr *attr = info->attrs[MPTCP_PM_ATTR_ADDR]; 1865 1861 struct pm_nl_pernet *pernet = genl_info_pm_nl(info); 1866 1862 u8 changed, mask = MPTCP_PM_ADDR_FLAG_BACKUP | ··· 1876 1866 if (ret < 0) 1877 1867 return ret; 1878 1868 1869 + if (attr_rem) { 1870 + ret = mptcp_pm_parse_entry(attr_rem, info, false, &remote); 1871 + if (ret < 0) 1872 + return ret; 1873 + } 1874 + 1879 1875 if (addr.flags & MPTCP_PM_ADDR_FLAG_BACKUP) 1880 1876 bkup = 1; 1881 1877 if (addr.addr.family == AF_UNSPEC) { ··· 1889 1873 if (!addr.addr.id) 1890 1874 return -EOPNOTSUPP; 1891 1875 } 1876 + 1877 + if (token) 1878 + return mptcp_userspace_pm_set_flags(sock_net(skb->sk), 1879 + token, &addr, &remote, bkup); 1892 1880 1893 1881 spin_lock_bh(&pernet->lock); 1894 1882 entry = __lookup_addr(pernet, &addr.addr, lookup_by_id);
+38 -13
net/mptcp/pm_userspace.c
··· 5 5 */ 6 6 7 7 #include "protocol.h" 8 + #include "mib.h" 8 9 9 10 void mptcp_free_local_addr_list(struct mptcp_sock *msk) 10 11 { ··· 307 306 const struct mptcp_addr_info *local, 308 307 const struct mptcp_addr_info *remote) 309 308 { 310 - struct sock *sk = &msk->sk.icsk_inet.sk; 311 309 struct mptcp_subflow_context *subflow; 312 - struct sock *found = NULL; 313 310 314 311 if (local->family != remote->family) 315 312 return NULL; 316 - 317 - lock_sock(sk); 318 313 319 314 mptcp_for_each_subflow(msk, subflow) { 320 315 const struct inet_sock *issk; ··· 344 347 } 345 348 346 349 if (issk->inet_sport == local->port && 347 - issk->inet_dport == remote->port) { 348 - found = ssk; 349 - goto found; 350 - } 350 + issk->inet_dport == remote->port) 351 + return ssk; 351 352 } 352 353 353 - found: 354 - release_sock(sk); 355 - 356 - return found; 354 + return NULL; 357 355 } 358 356 359 357 int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info) ··· 404 412 } 405 413 406 414 sk = &msk->sk.icsk_inet.sk; 415 + lock_sock(sk); 407 416 ssk = mptcp_nl_find_ssk(msk, &addr_l, &addr_r); 408 417 if (ssk) { 409 418 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); 410 419 411 420 mptcp_subflow_shutdown(sk, ssk, RCV_SHUTDOWN | SEND_SHUTDOWN); 412 421 mptcp_close_ssk(sk, ssk, subflow); 422 + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMSUBFLOW); 413 423 err = 0; 414 424 } else { 415 425 err = -ESRCH; 416 426 } 427 + release_sock(sk); 417 428 418 - destroy_err: 429 + destroy_err: 419 430 sock_put((struct sock *)msk); 420 431 return err; 432 + } 433 + 434 + int mptcp_userspace_pm_set_flags(struct net *net, struct nlattr *token, 435 + struct mptcp_pm_addr_entry *loc, 436 + struct mptcp_pm_addr_entry *rem, u8 bkup) 437 + { 438 + struct mptcp_sock *msk; 439 + int ret = -EINVAL; 440 + u32 token_val; 441 + 442 + token_val = nla_get_u32(token); 443 + 444 + msk = mptcp_token_get_sock(net, token_val); 445 + if (!msk) 446 + return ret; 447 + 448 + if (!mptcp_pm_is_userspace(msk)) 449 + goto set_flags_err; 450 + 451 + if (loc->addr.family == AF_UNSPEC || 452 + rem->addr.family == AF_UNSPEC) 453 + goto set_flags_err; 454 + 455 + lock_sock((struct sock *)msk); 456 + ret = mptcp_pm_nl_mp_prio_send_ack(msk, &loc->addr, &rem->addr, bkup); 457 + release_sock((struct sock *)msk); 458 + 459 + set_flags_err: 460 + sock_put((struct sock *)msk); 461 + return ret; 421 462 }
+60 -33
net/mptcp/protocol.c
··· 500 500 __mptcp_set_timeout(sk, tout); 501 501 } 502 502 503 - static bool tcp_can_send_ack(const struct sock *ssk) 503 + static inline bool tcp_can_send_ack(const struct sock *ssk) 504 504 { 505 505 return !((1 << inet_sk_state_load(ssk)) & 506 506 (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN)); 507 507 } 508 + 509 + void __mptcp_subflow_send_ack(struct sock *ssk) 510 + { 511 + if (tcp_can_send_ack(ssk)) 512 + tcp_send_ack(ssk); 507 513 } 508 514 509 515 void mptcp_subflow_send_ack(struct sock *ssk) ··· 517 511 bool slow; 518 512 519 513 slow = lock_sock_fast(ssk); 520 - if (tcp_can_send_ack(ssk)) 521 - tcp_send_ack(ssk); 514 + __mptcp_subflow_send_ack(ssk); 522 515 unlock_sock_fast(ssk, slow); 523 516 } 524 517 ··· 1250 1245 MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPTX); 1251 1246 mptcp_subflow_ctx(ssk)->send_infinite_map = 0; 1252 1247 pr_fallback(msk); 1253 - __mptcp_do_fallback(msk); 1248 + mptcp_do_fallback(ssk); 1254 1249 } 1255 1250 1256 1251 static int mptcp_sendmsg_frag(struct sock *sk, struct sock *ssk, ··· 2180 2175 sock_put(sk); 2181 2176 } 2182 2177 2183 - static struct mptcp_subflow_context * 2184 - mp_fail_response_expect_subflow(struct mptcp_sock *msk) 2185 - { 2186 - struct mptcp_subflow_context *subflow, *ret = NULL; 2187 - 2188 - mptcp_for_each_subflow(msk, subflow) { 2189 - if (READ_ONCE(subflow->mp_fail_response_expect)) { 2190 - ret = subflow; 2191 - break; 2192 - } 2193 - } 2194 - 2195 - return ret; 2196 - } 2197 - 2198 2178 static void mptcp_timeout_timer(struct timer_list *t) 2199 2179 { 2200 2180 struct sock *sk = from_timer(sk, t, sk_timer); ··· 2336 2346 kfree_rcu(subflow, rcu); 2337 2347 } else { 2338 2348 /* otherwise tcp will dispose of the ssk and subflow ctx */ 2349 + if (ssk->sk_state == TCP_LISTEN) { 2350 + tcp_set_state(ssk, TCP_CLOSE); 2351 + mptcp_subflow_queue_clean(ssk); 2352 + inet_csk_listen_stop(ssk); 2353 + } 2339 2354 __tcp_close(ssk, 0); 2340 2355 2341 2356 /* close acquired an extra ref */ ··· 2513 2518 mptcp_reset_timer(sk); 2514 2519 } 2515 2520 2521 + /* schedule the timeout timer for the relevant event: either close timeout 2522 + * or mp_fail timeout. The close timeout takes precedence on the mp_fail one 2523 + */ 2524 + void mptcp_reset_timeout(struct mptcp_sock *msk, unsigned long fail_tout) 2525 + { 2526 + struct sock *sk = (struct sock *)msk; 2527 + unsigned long timeout, close_timeout; 2528 + 2529 + if (!fail_tout && !sock_flag(sk, SOCK_DEAD)) 2530 + return; 2531 + 2532 + close_timeout = inet_csk(sk)->icsk_mtup.probe_timestamp - tcp_jiffies32 + jiffies + TCP_TIMEWAIT_LEN; 2533 + 2534 + /* the close timeout takes precedence on the fail one, and here at least one of 2535 + * them is active 2536 + */ 2537 + timeout = sock_flag(sk, SOCK_DEAD) ? close_timeout : fail_tout; 2538 + 2539 + sk_reset_timer(sk, &sk->sk_timer, timeout); 2540 + } 2541 + 2516 2542 static void mptcp_mp_fail_no_response(struct mptcp_sock *msk) 2517 2543 { 2518 - struct mptcp_subflow_context *subflow; 2519 - struct sock *ssk; 2544 + struct sock *ssk = msk->first; 2520 2545 bool slow; 2521 2546 2522 - subflow = mp_fail_response_expect_subflow(msk); 2523 - if (subflow) { 2524 - pr_debug("MP_FAIL doesn't respond, reset the subflow"); 2547 + if (!ssk) 2548 + return; 2525 2549 2526 - ssk = mptcp_subflow_tcp_sock(subflow); 2527 - slow = lock_sock_fast(ssk); 2528 - mptcp_subflow_reset(ssk); 2529 - unlock_sock_fast(ssk, slow); 2530 - } 2550 + pr_debug("MP_FAIL doesn't respond, reset the subflow"); 2551 + 2552 + slow = lock_sock_fast(ssk); 2553 + mptcp_subflow_reset(ssk); 2554 + WRITE_ONCE(mptcp_subflow_ctx(ssk)->fail_tout, 0); 2555 + unlock_sock_fast(ssk, slow); 2556 + 2557 + mptcp_reset_timeout(msk, 0); 2531 2558 } 2532 2559 2533 2560 static void mptcp_worker(struct work_struct *work) 2534 2561 { 2535 2562 struct mptcp_sock *msk = container_of(work, struct mptcp_sock, work); 2536 2563 struct sock *sk = &msk->sk.icsk_inet.sk; 2564 + unsigned long fail_tout; 2537 2565 int state; 2538 2566 2539 2567 lock_sock(sk); ··· 2593 2575 if (test_and_clear_bit(MPTCP_WORK_RTX, &msk->flags)) 2594 2576 __mptcp_retrans(sk); 2595 2577 2596 - mptcp_mp_fail_no_response(msk); 2578 + fail_tout = msk->first ? READ_ONCE(mptcp_subflow_ctx(msk->first)->fail_tout) : 0; 2579 + if (fail_tout && time_after(jiffies, fail_tout)) 2580 + mptcp_mp_fail_no_response(msk); 2597 2581 2598 2582 unlock: 2599 2583 release_sock(sk); ··· 2842 2822 static void mptcp_close(struct sock *sk, long timeout) 2843 2823 { 2844 2824 struct mptcp_subflow_context *subflow; 2825 + struct mptcp_sock *msk = mptcp_sk(sk); 2845 2826 bool do_cancel_work = false; 2846 2827 2847 2828 lock_sock(sk); ··· 2861 2840 cleanup: 2862 2841 /* orphan all the subflows */ 2863 2842 inet_csk(sk)->icsk_mtup.probe_timestamp = tcp_jiffies32; 2864 - mptcp_for_each_subflow(mptcp_sk(sk), subflow) { 2843 + mptcp_for_each_subflow(msk, subflow) { 2865 2844 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 2866 2845 bool slow = lock_sock_fast_nested(ssk); 2846 + 2847 + /* since the close timeout takes precedence on the fail one, 2848 + * cancel the latter 2849 + */ 2850 + if (ssk == msk->first) 2851 + subflow->fail_tout = 0; 2867 2852 2868 2853 sock_orphan(ssk); 2869 2854 unlock_sock_fast(ssk, slow); ··· 2879 2852 sock_hold(sk); 2880 2853 pr_debug("msk=%p state=%d", sk, sk->sk_state); 2881 2854 if (mptcp_sk(sk)->token) 2882 - mptcp_event(MPTCP_EVENT_CLOSED, mptcp_sk(sk), NULL, GFP_KERNEL); 2855 + mptcp_event(MPTCP_EVENT_CLOSED, msk, NULL, GFP_KERNEL); 2883 2856 2884 2857 if (sk->sk_state == TCP_CLOSE) { 2885 2858 __mptcp_destroy_sock(sk); 2886 2859 do_cancel_work = true; 2887 2860 } else { 2888 - sk_reset_timer(sk, &sk->sk_timer, jiffies + TCP_TIMEWAIT_LEN); 2861 + mptcp_reset_timeout(msk, 0); 2889 2862 } 2890 2863 release_sock(sk); 2891 2864 if (do_cancel_work)
+28 -5
net/mptcp/protocol.h
··· 306 306 307 307 u32 setsockopt_seq; 308 308 char ca_name[TCP_CA_NAME_MAX]; 309 + struct mptcp_sock *dl_next; 309 310 }; 310 311 311 312 #define mptcp_data_lock(sk) spin_lock_bh(&(sk)->sk_lock.slock) ··· 469 468 local_id_valid : 1, /* local_id is correctly initialized */ 470 469 valid_csum_seen : 1; /* at least one csum validated */ 471 470 enum mptcp_data_avail data_avail; 472 - bool mp_fail_response_expect; 473 471 u32 remote_nonce; 474 472 u64 thmac; 475 473 u32 local_nonce; ··· 482 482 u8 stale_count; 483 483 484 484 long delegated_status; 485 + unsigned long fail_tout; 485 486 486 487 ); 487 488 ··· 607 606 void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how); 608 607 void mptcp_close_ssk(struct sock *sk, struct sock *ssk, 609 608 struct mptcp_subflow_context *subflow); 609 + void __mptcp_subflow_send_ack(struct sock *ssk); 610 610 void mptcp_subflow_send_ack(struct sock *ssk); 611 611 void mptcp_subflow_reset(struct sock *ssk); 612 + void mptcp_subflow_queue_clean(struct sock *ssk); 612 613 void mptcp_sock_graft(struct sock *sk, struct socket *parent); 613 614 struct socket *__mptcp_nmpc_socket(const struct mptcp_sock *msk); 614 615 ··· 665 662 666 663 void mptcp_finish_connect(struct sock *sk); 667 664 void __mptcp_set_connected(struct sock *sk); 665 + void mptcp_reset_timeout(struct mptcp_sock *msk, unsigned long fail_tout); 668 666 static inline bool mptcp_is_fully_established(struct sock *sk) 669 667 { 670 668 return inet_sk_state_load(sk) == TCP_ESTABLISHED && ··· 772 768 const struct mptcp_rm_list *rm_list); 773 769 void mptcp_pm_mp_prio_received(struct sock *sk, u8 bkup); 774 770 void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq); 771 + int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk, 772 + struct mptcp_addr_info *addr, 773 + struct mptcp_addr_info *rem, 774 + u8 bkup); 775 775 bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk, 776 776 const struct mptcp_pm_addr_entry *entry); 777 777 void mptcp_pm_free_anno_list(struct mptcp_sock *msk); ··· 792 784 int mptcp_userspace_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, 793 785 unsigned int id, 794 786 u8 *flags, int *ifindex); 795 - 787 + int mptcp_userspace_pm_set_flags(struct net *net, struct nlattr *token, 788 + struct mptcp_pm_addr_entry *loc, 789 + struct mptcp_pm_addr_entry *rem, u8 bkup); 796 790 int mptcp_pm_announce_addr(struct mptcp_sock *msk, 797 791 const struct mptcp_addr_info *addr, 798 792 bool echo); ··· 936 926 set_bit(MPTCP_FALLBACK_DONE, &msk->flags); 937 927 } 938 928 939 - static inline void mptcp_do_fallback(struct sock *sk) 929 + static inline void mptcp_do_fallback(struct sock *ssk) 940 930 { 941 - struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(sk); 942 - struct mptcp_sock *msk = mptcp_sk(subflow->conn); 931 + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); 932 + struct sock *sk = subflow->conn; 933 + struct mptcp_sock *msk; 943 934 935 + msk = mptcp_sk(sk); 944 936 __mptcp_do_fallback(msk); 937 + if (READ_ONCE(msk->snd_data_fin_enable) && !(ssk->sk_shutdown & SEND_SHUTDOWN)) { 938 + gfp_t saved_allocation = ssk->sk_allocation; 939 + 940 + /* we are in an atomic (BH) scope, override ssk default for data 941 + * fin allocation 942 + */ 943 + ssk->sk_allocation = GFP_ATOMIC; 944 + ssk->sk_shutdown |= SEND_SHUTDOWN; 945 + tcp_shutdown(ssk, SEND_SHUTDOWN); 946 + ssk->sk_allocation = saved_allocation; 947 + } 945 948 } 946 949 947 950 #define pr_fallback(a) pr_debug("%s:fallback to TCP (msk=%p)", __func__, a)
+98 -29
net/mptcp/subflow.c
··· 843 843 MAPPING_INVALID, 844 844 MAPPING_EMPTY, 845 845 MAPPING_DATA_FIN, 846 - MAPPING_DUMMY 846 + MAPPING_DUMMY, 847 + MAPPING_BAD_CSUM 847 848 }; 848 849 849 850 static void dbg_bad_map(struct mptcp_subflow_context *subflow, u32 ssn) ··· 959 958 subflow->map_data_csum); 960 959 if (unlikely(csum)) { 961 960 MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_DATACSUMERR); 962 - if (subflow->mp_join || subflow->valid_csum_seen) { 963 - subflow->send_mp_fail = 1; 964 - MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPFAILTX); 965 - } 966 - return subflow->mp_join ? MAPPING_INVALID : MAPPING_DUMMY; 961 + return MAPPING_BAD_CSUM; 967 962 } 968 963 969 964 subflow->valid_csum_seen = 1; ··· 971 974 { 972 975 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); 973 976 bool csum_reqd = READ_ONCE(msk->csum_enabled); 974 - struct sock *sk = (struct sock *)msk; 975 977 struct mptcp_ext *mpext; 976 978 struct sk_buff *skb; 977 979 u16 data_len; ··· 1012 1016 pr_debug("infinite mapping received"); 1013 1017 MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX); 1014 1018 subflow->map_data_len = 0; 1015 - if (!sock_flag(ssk, SOCK_DEAD)) 1016 - sk_stop_timer(sk, &sk->sk_timer); 1017 - 1018 1019 return MAPPING_INVALID; 1019 1020 } 1020 1021 ··· 1158 1165 return !subflow->fully_established; 1159 1166 } 1160 1167 1168 + static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk) 1169 + { 1170 + struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); 1171 + unsigned long fail_tout; 1172 + 1173 + /* graceful failure can happen only on the MPC subflow */ 1174 + if (WARN_ON_ONCE(ssk != READ_ONCE(msk->first))) 1175 + return; 1176 + 1177 + /* since the close timeout takes precedence on the fail one, 1178 + * no need to start the latter when the first is already set 1179 + */ 1180 + if (sock_flag((struct sock *)msk, SOCK_DEAD)) 1181 + return; 1182 + 1183 + /* we don't need extreme accuracy here, use a zero fail_tout as special 1184 + * value meaning no fail timeout at all; 1185 + */ 1186 + fail_tout = jiffies + TCP_RTO_MAX; 1187 + if (!fail_tout) 1188 + fail_tout = 1; 1189 + WRITE_ONCE(subflow->fail_tout, fail_tout); 1190 + tcp_send_ack(ssk); 1191 + 1192 + mptcp_reset_timeout(msk, subflow->fail_tout); 1193 + } 1194 + 1161 1195 static bool subflow_check_data_avail(struct sock *ssk) 1162 1196 { 1163 1197 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); ··· 1204 1184 1205 1185 status = get_mapping_status(ssk, msk); 1206 1186 trace_subflow_check_data_avail(status, skb_peek(&ssk->sk_receive_queue)); 1207 - if (unlikely(status == MAPPING_INVALID)) 1208 - goto fallback; 1209 - 1210 - if (unlikely(status == MAPPING_DUMMY)) 1187 + if (unlikely(status == MAPPING_INVALID || status == MAPPING_DUMMY || 1188 + status == MAPPING_BAD_CSUM)) 1211 1189 goto fallback; 1212 1190 1213 1191 if (status != MAPPING_OK) ··· 1247 1229 fallback: 1248 1230 if (!__mptcp_check_fallback(msk)) { 1249 1231 /* RFC 8684 section 3.7. */ 1250 - if (subflow->send_mp_fail) { 1232 + if (status == MAPPING_BAD_CSUM && 1233 + (subflow->mp_join || subflow->valid_csum_seen)) { 1234 + subflow->send_mp_fail = 1; 1235 + 1251 1236 if (!READ_ONCE(msk->allow_infinite_fallback)) { 1252 - ssk->sk_err = EBADMSG; 1253 - tcp_set_state(ssk, TCP_CLOSE); 1254 1237 subflow->reset_transient = 0; 1255 1238 subflow->reset_reason = MPTCP_RST_EMIDDLEBOX; 1256 - tcp_send_active_reset(ssk, GFP_ATOMIC); 1257 - while ((skb = skb_peek(&ssk->sk_receive_queue))) 1258 - sk_eat_skb(ssk, skb); 1259 - } else if (!sock_flag(ssk, SOCK_DEAD)) { 1260 - WRITE_ONCE(subflow->mp_fail_response_expect, true); 1261 - sk_reset_timer((struct sock *)msk, 1262 - &((struct sock *)msk)->sk_timer, 1263 - jiffies + TCP_RTO_MAX); 1239 + goto reset; 1264 1240 } 1265 - WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA); 1241 + mptcp_subflow_fail(msk, ssk); 1242 + WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_DATA_AVAIL); 1266 1243 return true; 1267 1244 } 1268 1245 ··· 1265 1252 /* fatal protocol error, close the socket. 1266 1253 * subflow_error_report() will introduce the appropriate barriers 1267 1254 */ 1268 - ssk->sk_err = EBADMSG; 1269 - tcp_set_state(ssk, TCP_CLOSE); 1270 1255 subflow->reset_transient = 0; 1271 1256 subflow->reset_reason = MPTCP_RST_EMPTCP; 1257 + 1258 + reset: 1259 + ssk->sk_err = EBADMSG; 1260 + tcp_set_state(ssk, TCP_CLOSE); 1261 + while ((skb = skb_peek(&ssk->sk_receive_queue))) 1262 + sk_eat_skb(ssk, skb); 1272 1263 tcp_send_active_reset(ssk, GFP_ATOMIC); 1273 1264 WRITE_ONCE(subflow->data_avail, MPTCP_SUBFLOW_NODATA); 1274 1265 return false; 1275 1266 } 1276 1267 1277 - __mptcp_do_fallback(msk); 1268 + mptcp_do_fallback(ssk); 1278 1269 } 1279 1270 1280 1271 skb = skb_peek(&ssk->sk_receive_queue); ··· 1721 1704 subflow->rx_eof = 1; 1722 1705 mptcp_subflow_eof(parent); 1723 1706 } 1707 + } 1708 + 1709 + void mptcp_subflow_queue_clean(struct sock *listener_ssk) 1710 + { 1711 + struct request_sock_queue *queue = &inet_csk(listener_ssk)->icsk_accept_queue; 1712 + struct mptcp_sock *msk, *next, *head = NULL; 1713 + struct request_sock *req; 1714 + 1715 + /* build a list of all unaccepted mptcp sockets */ 1716 + spin_lock_bh(&queue->rskq_lock); 1717 + for (req = queue->rskq_accept_head; req; req = req->dl_next) { 1718 + struct mptcp_subflow_context *subflow; 1719 + struct sock *ssk = req->sk; 1720 + struct mptcp_sock *msk; 1721 + 1722 + if (!sk_is_mptcp(ssk)) 1723 + continue; 1724 + 1725 + subflow = mptcp_subflow_ctx(ssk); 1726 + if (!subflow || !subflow->conn) 1727 + continue; 1728 + 1729 + /* skip if already in list */ 1730 + msk = mptcp_sk(subflow->conn); 1731 + if (msk->dl_next || msk == head) 1732 + continue; 1733 + 1734 + msk->dl_next = head; 1735 + head = msk; 1736 + } 1737 + spin_unlock_bh(&queue->rskq_lock); 1738 + if (!head) 1739 + return; 1740 + 1741 + /* can't acquire the msk socket lock under the subflow one, 1742 + * or will cause ABBA deadlock 1743 + */ 1744 + release_sock(listener_ssk); 1745 + 1746 + for (msk = head; msk; msk = next) { 1747 + struct sock *sk = (struct sock *)msk; 1748 + bool slow; 1749 + 1750 + slow = lock_sock_fast_nested(sk); 1751 + next = msk->dl_next; 1752 + msk->first = NULL; 1753 + msk->dl_next = NULL; 1754 + unlock_sock_fast(sk, slow); 1755 + } 1756 + 1757 + /* we are still under the listener msk socket lock */ 1758 + lock_sock_nested(listener_ssk, SINGLE_DEPTH_NESTING); 1724 1759 } 1725 1760 1726 1761 static int subflow_ulp_init(struct sock *sk)
+2 -1
net/ncsi/ncsi-manage.c
··· 1803 1803 pdev = to_platform_device(dev->dev.parent); 1804 1804 if (pdev) { 1805 1805 np = pdev->dev.of_node; 1806 - if (np && of_get_property(np, "mlx,multi-host", NULL)) 1806 + if (np && (of_get_property(np, "mellanox,multi-host", NULL) || 1807 + of_get_property(np, "mlx,multi-host", NULL))) 1807 1808 ndp->mlx_multi_host = true; 1808 1809 } 1809 1810
+8 -1
net/netfilter/nf_tables_api.c
··· 5213 5213 struct nft_data *data, 5214 5214 struct nlattr *attr) 5215 5215 { 5216 + u32 dtype; 5216 5217 int err; 5217 5218 5218 5219 err = nft_data_init(ctx, data, NFT_DATA_VALUE_MAXLEN, desc, attr); 5219 5220 if (err < 0) 5220 5221 return err; 5221 5222 5222 - if (desc->type != NFT_DATA_VERDICT && desc->len != set->dlen) { 5223 + if (set->dtype == NFT_DATA_VERDICT) 5224 + dtype = NFT_DATA_VERDICT; 5225 + else 5226 + dtype = NFT_DATA_VALUE; 5227 + 5228 + if (dtype != desc->type || 5229 + set->dlen != desc->len) { 5223 5230 nft_data_release(data, desc->type); 5224 5231 return -EINVAL; 5225 5232 }
+21 -3
net/netfilter/nf_tables_core.c
··· 25 25 const struct nft_chain *chain, 26 26 enum nft_trace_types type) 27 27 { 28 - const struct nft_pktinfo *pkt = info->pkt; 29 - 30 - if (!info->trace || !pkt->skb->nf_trace) 28 + if (!info->trace || !info->nf_trace) 31 29 return; 32 30 33 31 info->chain = chain; ··· 40 42 enum nft_trace_types type) 41 43 { 42 44 if (static_branch_unlikely(&nft_trace_enabled)) { 45 + const struct nft_pktinfo *pkt = info->pkt; 46 + 47 + info->nf_trace = pkt->skb->nf_trace; 43 48 info->rule = rule; 44 49 __nft_trace_packet(info, chain, type); 50 + } 51 + } 52 + 53 + static inline void nft_trace_copy_nftrace(struct nft_traceinfo *info) 54 + { 55 + if (static_branch_unlikely(&nft_trace_enabled)) { 56 + const struct nft_pktinfo *pkt = info->pkt; 57 + 58 + if (info->trace) 59 + info->nf_trace = pkt->skb->nf_trace; 45 60 } 46 61 } 47 62 ··· 96 85 const struct nft_chain *chain, 97 86 const struct nft_regs *regs) 98 87 { 88 + const struct nft_pktinfo *pkt = info->pkt; 99 89 enum nft_trace_types type; 100 90 101 91 switch (regs->verdict.code) { ··· 104 92 case NFT_RETURN: 105 93 type = NFT_TRACETYPE_RETURN; 106 94 break; 95 + case NF_STOLEN: 96 + type = NFT_TRACETYPE_RULE; 97 + /* can't access skb->nf_trace; use copy */ 98 + break; 107 99 default: 108 100 type = NFT_TRACETYPE_RULE; 101 + info->nf_trace = pkt->skb->nf_trace; 109 102 break; 110 103 } 111 104 ··· 271 254 switch (regs.verdict.code) { 272 255 case NFT_BREAK: 273 256 regs.verdict.code = NFT_CONTINUE; 257 + nft_trace_copy_nftrace(&info); 274 258 continue; 275 259 case NFT_CONTINUE: 276 260 nft_trace_packet(&info, chain, rule,
+24 -20
net/netfilter/nf_tables_trace.c
··· 7 7 #include <linux/module.h> 8 8 #include <linux/static_key.h> 9 9 #include <linux/hash.h> 10 - #include <linux/jhash.h> 10 + #include <linux/siphash.h> 11 11 #include <linux/if_vlan.h> 12 12 #include <linux/init.h> 13 13 #include <linux/skbuff.h> ··· 24 24 25 25 DEFINE_STATIC_KEY_FALSE(nft_trace_enabled); 26 26 EXPORT_SYMBOL_GPL(nft_trace_enabled); 27 - 28 - static int trace_fill_id(struct sk_buff *nlskb, struct sk_buff *skb) 29 - { 30 - __be32 id; 31 - 32 - /* using skb address as ID results in a limited number of 33 - * values (and quick reuse). 34 - * 35 - * So we attempt to use as many skb members that will not 36 - * change while skb is with netfilter. 37 - */ 38 - id = (__be32)jhash_2words(hash32_ptr(skb), skb_get_hash(skb), 39 - skb->skb_iif); 40 - 41 - return nla_put_be32(nlskb, NFTA_TRACE_ID, id); 42 - } 43 27 44 28 static int trace_fill_header(struct sk_buff *nlskb, u16 type, 45 29 const struct sk_buff *skb, ··· 170 186 struct nlmsghdr *nlh; 171 187 struct sk_buff *skb; 172 188 unsigned int size; 189 + u32 mark = 0; 173 190 u16 event; 174 191 175 192 if (!nfnetlink_has_listeners(nft_net(pkt), NFNLGRP_NFTRACE)) ··· 214 229 if (nla_put_be32(skb, NFTA_TRACE_TYPE, htonl(info->type))) 215 230 goto nla_put_failure; 216 231 217 - if (trace_fill_id(skb, pkt->skb)) 232 + if (nla_put_u32(skb, NFTA_TRACE_ID, info->skbid)) 218 233 goto nla_put_failure; 219 234 220 235 if (nla_put_string(skb, NFTA_TRACE_CHAIN, info->chain->name)) ··· 234 249 case NFT_TRACETYPE_RULE: 235 250 if (nft_verdict_dump(skb, NFTA_TRACE_VERDICT, info->verdict)) 236 251 goto nla_put_failure; 252 + 253 + /* pkt->skb undefined iff NF_STOLEN, disable dump */ 254 + if (info->verdict->code == NF_STOLEN) 255 + info->packet_dumped = true; 256 + else 257 + mark = pkt->skb->mark; 258 + 237 259 break; 238 260 case NFT_TRACETYPE_POLICY: 261 + mark = pkt->skb->mark; 262 + 239 263 if (nla_put_be32(skb, NFTA_TRACE_POLICY, 240 264 htonl(info->basechain->policy))) 241 265 goto nla_put_failure; 242 266 break; 243 267 } 244 268 245 - if (pkt->skb->mark && 246 - nla_put_be32(skb, NFTA_TRACE_MARK, htonl(pkt->skb->mark))) 269 + if (mark && nla_put_be32(skb, NFTA_TRACE_MARK, htonl(mark))) 247 270 goto nla_put_failure; 248 271 249 272 if (!info->packet_dumped) { ··· 276 283 const struct nft_verdict *verdict, 277 284 const struct nft_chain *chain) 278 285 { 286 + static siphash_key_t trace_key __read_mostly; 287 + struct sk_buff *skb = pkt->skb; 288 + 279 289 info->basechain = nft_base_chain(chain); 280 290 info->trace = true; 291 + info->nf_trace = pkt->skb->nf_trace; 281 292 info->packet_dumped = false; 282 293 info->pkt = pkt; 283 294 info->verdict = verdict; 295 + 296 + net_get_random_once(&trace_key, sizeof(trace_key)); 297 + 298 + info->skbid = (u32)siphash_3u32(hash32_ptr(skb), 299 + skb_get_hash(skb), 300 + skb->skb_iif, 301 + &trace_key); 284 302 }
+2
net/netfilter/nft_set_hash.c
··· 143 143 /* Another cpu may race to insert the element with the same key */ 144 144 if (prev) { 145 145 nft_set_elem_destroy(set, he, true); 146 + atomic_dec(&set->nelems); 146 147 he = prev; 147 148 } 148 149 ··· 153 152 154 153 err2: 155 154 nft_set_elem_destroy(set, he, true); 155 + atomic_dec(&set->nelems); 156 156 err1: 157 157 return false; 158 158 }
+33 -15
net/netfilter/nft_set_pipapo.c
··· 2125 2125 } 2126 2126 2127 2127 /** 2128 + * nft_set_pipapo_match_destroy() - Destroy elements from key mapping array 2129 + * @set: nftables API set representation 2130 + * @m: matching data pointing to key mapping array 2131 + */ 2132 + static void nft_set_pipapo_match_destroy(const struct nft_set *set, 2133 + struct nft_pipapo_match *m) 2134 + { 2135 + struct nft_pipapo_field *f; 2136 + int i, r; 2137 + 2138 + for (i = 0, f = m->f; i < m->field_count - 1; i++, f++) 2139 + ; 2140 + 2141 + for (r = 0; r < f->rules; r++) { 2142 + struct nft_pipapo_elem *e; 2143 + 2144 + if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e) 2145 + continue; 2146 + 2147 + e = f->mt[r].e; 2148 + 2149 + nft_set_elem_destroy(set, e, true); 2150 + } 2151 + } 2152 + 2153 + /** 2128 2154 * nft_pipapo_destroy() - Free private data for set and all committed elements 2129 2155 * @set: nftables API set representation 2130 2156 */ ··· 2158 2132 { 2159 2133 struct nft_pipapo *priv = nft_set_priv(set); 2160 2134 struct nft_pipapo_match *m; 2161 - struct nft_pipapo_field *f; 2162 - int i, r, cpu; 2135 + int cpu; 2163 2136 2164 2137 m = rcu_dereference_protected(priv->match, true); 2165 2138 if (m) { 2166 2139 rcu_barrier(); 2167 2140 2168 - for (i = 0, f = m->f; i < m->field_count - 1; i++, f++) 2169 - ; 2170 - 2171 - for (r = 0; r < f->rules; r++) { 2172 - struct nft_pipapo_elem *e; 2173 - 2174 - if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e) 2175 - continue; 2176 - 2177 - e = f->mt[r].e; 2178 - 2179 - nft_set_elem_destroy(set, e, true); 2180 - } 2141 + nft_set_pipapo_match_destroy(set, m); 2181 2142 2182 2143 #ifdef NFT_PIPAPO_ALIGN 2183 2144 free_percpu(m->scratch_aligned); ··· 2178 2165 } 2179 2166 2180 2167 if (priv->clone) { 2168 + m = priv->clone; 2169 + 2170 + if (priv->dirty) 2171 + nft_set_pipapo_match_destroy(set, m); 2172 + 2181 2173 #ifdef NFT_PIPAPO_ALIGN 2182 2174 free_percpu(priv->clone->scratch_aligned); 2183 2175 #endif
+2 -2
net/rose/rose_route.c
··· 227 227 { 228 228 struct rose_neigh *s; 229 229 230 - rose_stop_ftimer(rose_neigh); 231 - rose_stop_t0timer(rose_neigh); 230 + del_timer_sync(&rose_neigh->ftimer); 231 + del_timer_sync(&rose_neigh->t0timer); 232 232 233 233 skb_queue_purge(&rose_neigh->queue); 234 234
+19 -15
net/rose/rose_timer.c
··· 31 31 32 32 void rose_start_heartbeat(struct sock *sk) 33 33 { 34 - del_timer(&sk->sk_timer); 34 + sk_stop_timer(sk, &sk->sk_timer); 35 35 36 36 sk->sk_timer.function = rose_heartbeat_expiry; 37 37 sk->sk_timer.expires = jiffies + 5 * HZ; 38 38 39 - add_timer(&sk->sk_timer); 39 + sk_reset_timer(sk, &sk->sk_timer, sk->sk_timer.expires); 40 40 } 41 41 42 42 void rose_start_t1timer(struct sock *sk) 43 43 { 44 44 struct rose_sock *rose = rose_sk(sk); 45 45 46 - del_timer(&rose->timer); 46 + sk_stop_timer(sk, &rose->timer); 47 47 48 48 rose->timer.function = rose_timer_expiry; 49 49 rose->timer.expires = jiffies + rose->t1; 50 50 51 - add_timer(&rose->timer); 51 + sk_reset_timer(sk, &rose->timer, rose->timer.expires); 52 52 } 53 53 54 54 void rose_start_t2timer(struct sock *sk) 55 55 { 56 56 struct rose_sock *rose = rose_sk(sk); 57 57 58 - del_timer(&rose->timer); 58 + sk_stop_timer(sk, &rose->timer); 59 59 60 60 rose->timer.function = rose_timer_expiry; 61 61 rose->timer.expires = jiffies + rose->t2; 62 62 63 - add_timer(&rose->timer); 63 + sk_reset_timer(sk, &rose->timer, rose->timer.expires); 64 64 } 65 65 66 66 void rose_start_t3timer(struct sock *sk) 67 67 { 68 68 struct rose_sock *rose = rose_sk(sk); 69 69 70 - del_timer(&rose->timer); 70 + sk_stop_timer(sk, &rose->timer); 71 71 72 72 rose->timer.function = rose_timer_expiry; 73 73 rose->timer.expires = jiffies + rose->t3; 74 74 75 - add_timer(&rose->timer); 75 + sk_reset_timer(sk, &rose->timer, rose->timer.expires); 76 76 } 77 77 78 78 void rose_start_hbtimer(struct sock *sk) 79 79 { 80 80 struct rose_sock *rose = rose_sk(sk); 81 81 82 - del_timer(&rose->timer); 82 + sk_stop_timer(sk, &rose->timer); 83 83 84 84 rose->timer.function = rose_timer_expiry; 85 85 rose->timer.expires = jiffies + rose->hb; 86 86 87 - add_timer(&rose->timer); 87 + sk_reset_timer(sk, &rose->timer, rose->timer.expires); 88 88 } 89 89 90 90 void rose_start_idletimer(struct sock *sk) 91 91 { 92 92 struct rose_sock *rose = rose_sk(sk); 93 
93 94 - del_timer(&rose->idletimer); 94 + sk_stop_timer(sk, &rose->idletimer); 95 95 96 96 if (rose->idle > 0) { 97 97 rose->idletimer.function = rose_idletimer_expiry; 98 98 rose->idletimer.expires = jiffies + rose->idle; 99 99 100 - add_timer(&rose->idletimer); 100 + sk_reset_timer(sk, &rose->idletimer, rose->idletimer.expires); 101 101 } 102 102 } 103 103 104 104 void rose_stop_heartbeat(struct sock *sk) 105 105 { 106 - del_timer(&sk->sk_timer); 106 + sk_stop_timer(sk, &sk->sk_timer); 107 107 } 108 108 109 109 void rose_stop_timer(struct sock *sk) 110 110 { 111 - del_timer(&rose_sk(sk)->timer); 111 + sk_stop_timer(sk, &rose_sk(sk)->timer); 112 112 } 113 113 114 114 void rose_stop_idletimer(struct sock *sk) 115 115 { 116 - del_timer(&rose_sk(sk)->idletimer); 116 + sk_stop_timer(sk, &rose_sk(sk)->idletimer); 117 117 } 118 118 119 119 static void rose_heartbeat_expiry(struct timer_list *t) ··· 130 130 (sk->sk_state == TCP_LISTEN && sock_flag(sk, SOCK_DEAD))) { 131 131 bh_unlock_sock(sk); 132 132 rose_destroy_socket(sk); 133 + sock_put(sk); 133 134 return; 134 135 } 135 136 break; ··· 153 152 154 153 rose_start_heartbeat(sk); 155 154 bh_unlock_sock(sk); 155 + sock_put(sk); 156 156 } 157 157 158 158 static void rose_timer_expiry(struct timer_list *t) ··· 183 181 break; 184 182 } 185 183 bh_unlock_sock(sk); 184 + sock_put(sk); 186 185 } 187 186 188 187 static void rose_idletimer_expiry(struct timer_list *t) ··· 208 205 sock_set_flag(sk, SOCK_DEAD); 209 206 } 210 207 bh_unlock_sock(sk); 208 + sock_put(sk); 211 209 }
+14 -8
net/sched/act_api.c
··· 588 588 } 589 589 590 590 static int tcf_del_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb, 591 - const struct tc_action_ops *ops) 591 + const struct tc_action_ops *ops, 592 + struct netlink_ext_ack *extack) 592 593 { 593 594 struct nlattr *nest; 594 595 int n_i = 0; ··· 605 604 if (nla_put_string(skb, TCA_KIND, ops->kind)) 606 605 goto nla_put_failure; 607 606 607 + ret = 0; 608 608 mutex_lock(&idrinfo->lock); 609 609 idr_for_each_entry_ul(idr, p, tmp, id) { 610 610 if (IS_ERR(p)) 611 611 continue; 612 612 ret = tcf_idr_release_unsafe(p); 613 - if (ret == ACT_P_DELETED) { 613 + if (ret == ACT_P_DELETED) 614 614 module_put(ops->owner); 615 - n_i++; 616 - } else if (ret < 0) { 617 - mutex_unlock(&idrinfo->lock); 618 - goto nla_put_failure; 619 - } 615 + else if (ret < 0) 616 + break; 617 + n_i++; 620 618 } 621 619 mutex_unlock(&idrinfo->lock); 620 + if (ret < 0) { 621 + if (n_i) 622 + NL_SET_ERR_MSG(extack, "Unable to flush all TC actions"); 623 + else 624 + goto nla_put_failure; 625 + } 622 626 623 627 ret = nla_put_u32(skb, TCA_FCNT, n_i); 624 628 if (ret) ··· 644 638 struct tcf_idrinfo *idrinfo = tn->idrinfo; 645 639 646 640 if (type == RTM_DELACTION) { 647 - return tcf_del_walker(idrinfo, skb, ops); 641 + return tcf_del_walker(idrinfo, skb, ops, extack); 648 642 } else if (type == RTM_GETACTION) { 649 643 return tcf_dump_walker(idrinfo, skb, cb); 650 644 } else {
+1 -1
net/sched/act_police.c
··· 442 442 act_id = FLOW_ACTION_JUMP; 443 443 *extval = tc_act & TC_ACT_EXT_VAL_MASK; 444 444 } else if (tc_act == TC_ACT_UNSPEC) { 445 - NL_SET_ERR_MSG_MOD(extack, "Offload not supported when conform/exceed action is \"continue\""); 445 + act_id = FLOW_ACTION_CONTINUE; 446 446 } else { 447 447 NL_SET_ERR_MSG_MOD(extack, "Unsupported conform/exceed action offload"); 448 448 }
+6 -10
net/socket.c
··· 2149 2149 int __sys_recvfrom(int fd, void __user *ubuf, size_t size, unsigned int flags, 2150 2150 struct sockaddr __user *addr, int __user *addr_len) 2151 2151 { 2152 + struct sockaddr_storage address; 2153 + struct msghdr msg = { 2154 + /* Save some cycles and don't copy the address if not needed */ 2155 + .msg_name = addr ? (struct sockaddr *)&address : NULL, 2156 + }; 2152 2157 struct socket *sock; 2153 2158 struct iovec iov; 2154 - struct msghdr msg; 2155 - struct sockaddr_storage address; 2156 2159 int err, err2; 2157 2160 int fput_needed; 2158 2161 ··· 2166 2163 if (!sock) 2167 2164 goto out; 2168 2165 2169 - msg.msg_control = NULL; 2170 - msg.msg_controllen = 0; 2171 - /* Save some cycles and don't copy the address if not needed */ 2172 - msg.msg_name = addr ? (struct sockaddr *)&address : NULL; 2173 - /* We assume all kernel code knows the size of sockaddr_storage */ 2174 - msg.msg_namelen = 0; 2175 - msg.msg_iocb = NULL; 2176 - msg.msg_flags = 0; 2177 2166 if (sock->file->f_flags & O_NONBLOCK) 2178 2167 flags |= MSG_DONTWAIT; 2179 2168 err = sock_recvmsg(sock, &msg, flags); ··· 2370 2375 return -EFAULT; 2371 2376 2372 2377 kmsg->msg_control_is_user = true; 2378 + kmsg->msg_get_inq = 0; 2373 2379 kmsg->msg_control_user = msg.msg_control; 2374 2380 kmsg->msg_controllen = msg.msg_controllen; 2375 2381 kmsg->msg_flags = msg.msg_flags;
+1 -1
net/sunrpc/xdr.c
··· 984 984 p = page_address(*xdr->page_ptr); 985 985 xdr->p = p + frag2bytes; 986 986 space_left = xdr->buf->buflen - xdr->buf->len; 987 - if (space_left - nbytes >= PAGE_SIZE) 987 + if (space_left - frag1bytes >= PAGE_SIZE) 988 988 xdr->end = p + PAGE_SIZE; 989 989 else 990 990 xdr->end = p + space_left - frag1bytes;
+22 -19
net/tipc/node.c
··· 472 472 bool preliminary) 473 473 { 474 474 struct tipc_net *tn = net_generic(net, tipc_net_id); 475 + struct tipc_link *l, *snd_l = tipc_bc_sndlink(net); 475 476 struct tipc_node *n, *temp_node; 476 - struct tipc_link *l; 477 477 unsigned long intv; 478 478 int bearer_id; 479 479 int i; ··· 488 488 goto exit; 489 489 /* A preliminary node becomes "real" now, refresh its data */ 490 490 tipc_node_write_lock(n); 491 + if (!tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX, 492 + tipc_link_min_win(snd_l), tipc_link_max_win(snd_l), 493 + n->capabilities, &n->bc_entry.inputq1, 494 + &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) { 495 + pr_warn("Broadcast rcv link refresh failed, no memory\n"); 496 + tipc_node_write_unlock_fast(n); 497 + tipc_node_put(n); 498 + n = NULL; 499 + goto exit; 500 + } 491 501 n->preliminary = false; 492 502 n->addr = addr; 493 503 hlist_del_rcu(&n->hash); ··· 577 567 n->signature = INVALID_NODE_SIG; 578 568 n->active_links[0] = INVALID_BEARER_ID; 579 569 n->active_links[1] = INVALID_BEARER_ID; 580 - n->bc_entry.link = NULL; 570 + if (!preliminary && 571 + !tipc_link_bc_create(net, tipc_own_addr(net), addr, peer_id, U16_MAX, 572 + tipc_link_min_win(snd_l), tipc_link_max_win(snd_l), 573 + n->capabilities, &n->bc_entry.inputq1, 574 + &n->bc_entry.namedq, snd_l, &n->bc_entry.link)) { 575 + pr_warn("Broadcast rcv link creation failed, no memory\n"); 576 + kfree(n); 577 + n = NULL; 578 + goto exit; 579 + } 581 580 tipc_node_get(n); 582 581 timer_setup(&n->timer, tipc_node_timeout, 0); 583 582 /* Start a slow timer anyway, crypto needs it */ ··· 1174 1155 bool *respond, bool *dupl_addr) 1175 1156 { 1176 1157 struct tipc_node *n; 1177 - struct tipc_link *l, *snd_l; 1158 + struct tipc_link *l; 1178 1159 struct tipc_link_entry *le; 1179 1160 bool addr_match = false; 1180 1161 bool sign_match = false; ··· 1194 1175 return; 1195 1176 1196 1177 tipc_node_write_lock(n); 1197 - if (unlikely(!n->bc_entry.link)) { 1198 - snd_l = 
tipc_bc_sndlink(net); 1199 - if (!tipc_link_bc_create(net, tipc_own_addr(net), 1200 - addr, peer_id, U16_MAX, 1201 - tipc_link_min_win(snd_l), 1202 - tipc_link_max_win(snd_l), 1203 - n->capabilities, 1204 - &n->bc_entry.inputq1, 1205 - &n->bc_entry.namedq, snd_l, 1206 - &n->bc_entry.link)) { 1207 - pr_warn("Broadcast rcv link creation failed, no mem\n"); 1208 - tipc_node_write_unlock_fast(n); 1209 - tipc_node_put(n); 1210 - return; 1211 - } 1212 - } 1213 1178 1214 1179 le = &n->links[b->identity]; 1215 1180
+1
net/tipc/socket.c
··· 502 502 sock_init_data(sock, sk); 503 503 tipc_set_sk_state(sk, TIPC_OPEN); 504 504 if (tipc_sk_insert(tsk)) { 505 + sk_free(sk); 505 506 pr_warn("Socket create failed; port number exhausted\n"); 506 507 return -EINVAL; 507 508 }
+4 -4
net/tls/tls_sw.c
··· 267 267 } 268 268 darg->async = false; 269 269 270 - if (ret == -EBADMSG) 271 - TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR); 272 - 273 270 return ret; 274 271 } 275 272 ··· 1576 1579 } 1577 1580 1578 1581 err = decrypt_internal(sk, skb, dest, NULL, darg); 1579 - if (err < 0) 1582 + if (err < 0) { 1583 + if (err == -EBADMSG) 1584 + TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR); 1580 1585 return err; 1586 + } 1581 1587 if (darg->async) 1582 1588 goto decrypt_next; 1583 1589
+1
net/xdp/xsk_buff_pool.c
··· 332 332 for (i = 0; i < dma_map->dma_pages_cnt; i++) { 333 333 dma = &dma_map->dma_pages[i]; 334 334 if (*dma) { 335 + *dma &= ~XSK_NEXT_PG_CONTIG_MASK; 335 336 dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE, 336 337 DMA_BIDIRECTIONAL, attrs); 337 338 *dma = 0;
+7
samples/fprobe/fprobe_example.c
··· 25 25 26 26 static char symbol[MAX_SYMBOL_LEN] = "kernel_clone"; 27 27 module_param_string(symbol, symbol, sizeof(symbol), 0644); 28 + MODULE_PARM_DESC(symbol, "Probed symbol(s), given by comma separated symbols or a wildcard pattern."); 29 + 28 30 static char nosymbol[MAX_SYMBOL_LEN] = ""; 29 31 module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644); 32 + MODULE_PARM_DESC(nosymbol, "Not-probed symbols, given by a wildcard pattern."); 33 + 30 34 static bool stackdump = true; 31 35 module_param(stackdump, bool, 0644); 36 + MODULE_PARM_DESC(stackdump, "Enable stackdump."); 37 + 32 38 static bool use_trace = false; 33 39 module_param(use_trace, bool, 0644); 40 + MODULE_PARM_DESC(use_trace, "Use trace_printk instead of printk. This is only for debugging."); 34 41 35 42 static void show_backtrace(void) 36 43 {
-3
scripts/Makefile.modinst
··· 28 28 __modinst: $(modules) 29 29 @: 30 30 31 - quiet_cmd_none = 32 - cmd_none = : 33 - 34 31 # 35 32 # Installation 36 33 #
+3 -3
scripts/clang-tools/gen_compile_commands.py
··· 157 157 if ext != '.ko': 158 158 sys.exit('{}: module path must end with .ko'.format(ko)) 159 159 mod = base + '.mod' 160 - # The first line of *.mod lists the objects that compose the module. 160 + # Read from *.mod, to get a list of objects that compose the module. 161 161 with open(mod) as m: 162 - for obj in m.readline().split(): 163 - yield to_cmdfile(obj) 162 + for mod_line in m: 163 + yield to_cmdfile(mod_line.rstrip()) 164 164 165 165 166 166 def process_line(root_directory, command_prefix, file_path):
+13 -9
sound/pci/cs46xx/cs46xx.c
··· 74 74 err = snd_cs46xx_create(card, pci, 75 75 external_amp[dev], thinkpad[dev]); 76 76 if (err < 0) 77 - return err; 77 + goto error; 78 78 card->private_data = chip; 79 79 chip->accept_valid = mmap_valid[dev]; 80 80 err = snd_cs46xx_pcm(chip, 0); 81 81 if (err < 0) 82 - return err; 82 + goto error; 83 83 #ifdef CONFIG_SND_CS46XX_NEW_DSP 84 84 err = snd_cs46xx_pcm_rear(chip, 1); 85 85 if (err < 0) 86 - return err; 86 + goto error; 87 87 err = snd_cs46xx_pcm_iec958(chip, 2); 88 88 if (err < 0) 89 - return err; 89 + goto error; 90 90 #endif 91 91 err = snd_cs46xx_mixer(chip, 2); 92 92 if (err < 0) 93 - return err; 93 + goto error; 94 94 #ifdef CONFIG_SND_CS46XX_NEW_DSP 95 95 if (chip->nr_ac97_codecs ==2) { 96 96 err = snd_cs46xx_pcm_center_lfe(chip, 3); 97 97 if (err < 0) 98 - return err; 98 + goto error; 99 99 } 100 100 #endif 101 101 err = snd_cs46xx_midi(chip, 0); 102 102 if (err < 0) 103 - return err; 103 + goto error; 104 104 err = snd_cs46xx_start_dsp(chip); 105 105 if (err < 0) 106 - return err; 106 + goto error; 107 107 108 108 snd_cs46xx_gameport(chip); 109 109 ··· 117 117 118 118 err = snd_card_register(card); 119 119 if (err < 0) 120 - return err; 120 + goto error; 121 121 122 122 pci_set_drvdata(pci, card); 123 123 dev++; 124 124 return 0; 125 + 126 + error: 127 + snd_card_free(card); 128 + return err; 125 129 } 126 130 127 131 static struct pci_driver cs46xx_driver = {
+1
sound/pci/hda/patch_realtek.c
··· 9212 9212 SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9213 9213 SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9214 9214 SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9215 + SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9215 9216 SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9216 9217 SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9217 9218 SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+4 -2
sound/soc/codecs/ak4613.c
··· 868 868 869 869 /* 870 870 * connected STDI 871 + * TDM support is assuming it is probed via Audio-Graph-Card style here. 872 + * Default is SDTIx1 if it was probed via Simple-Audio-Card for now. 871 873 */ 872 874 sdti_num = of_graph_get_endpoint_count(np); 873 - if (WARN_ON((sdti_num > 3) || (sdti_num < 1))) 874 - return; 875 + if ((sdti_num >= SDTx_MAX) || (sdti_num < 1)) 876 + sdti_num = 1; 875 877 876 878 AK4613_CONFIG_SDTI_set(priv, sdti_num); 877 879 }
+8 -2
sound/soc/codecs/cs35l41-lib.c
··· 37 37 { CS35L41_DAC_PCM1_SRC, 0x00000008 }, 38 38 { CS35L41_ASP_TX1_SRC, 0x00000018 }, 39 39 { CS35L41_ASP_TX2_SRC, 0x00000019 }, 40 - { CS35L41_ASP_TX3_SRC, 0x00000020 }, 41 - { CS35L41_ASP_TX4_SRC, 0x00000021 }, 40 + { CS35L41_ASP_TX3_SRC, 0x00000000 }, 41 + { CS35L41_ASP_TX4_SRC, 0x00000000 }, 42 42 { CS35L41_DSP1_RX1_SRC, 0x00000008 }, 43 43 { CS35L41_DSP1_RX2_SRC, 0x00000009 }, 44 44 { CS35L41_DSP1_RX3_SRC, 0x00000018 }, ··· 644 644 { CS35L41_DSP1_XM_ACCEL_PL0_PRI, 0x00000000 }, 645 645 { CS35L41_PWR_CTRL2, 0x00000000 }, 646 646 { CS35L41_AMP_GAIN_CTRL, 0x00000000 }, 647 + { CS35L41_ASP_TX3_SRC, 0x00000000 }, 648 + { CS35L41_ASP_TX4_SRC, 0x00000000 }, 647 649 }; 648 650 649 651 static const struct reg_sequence cs35l41_revb0_errata_patch[] = { ··· 657 655 { CS35L41_DSP1_XM_ACCEL_PL0_PRI, 0x00000000 }, 658 656 { CS35L41_PWR_CTRL2, 0x00000000 }, 659 657 { CS35L41_AMP_GAIN_CTRL, 0x00000000 }, 658 + { CS35L41_ASP_TX3_SRC, 0x00000000 }, 659 + { CS35L41_ASP_TX4_SRC, 0x00000000 }, 660 660 }; 661 661 662 662 static const struct reg_sequence cs35l41_revb2_errata_patch[] = { ··· 670 666 { CS35L41_DSP1_XM_ACCEL_PL0_PRI, 0x00000000 }, 671 667 { CS35L41_PWR_CTRL2, 0x00000000 }, 672 668 { CS35L41_AMP_GAIN_CTRL, 0x00000000 }, 669 + { CS35L41_ASP_TX3_SRC, 0x00000000 }, 670 + { CS35L41_ASP_TX4_SRC, 0x00000000 }, 673 671 }; 674 672 675 673 static const struct reg_sequence cs35l41_fs_errata_patch[] = {
+6 -6
sound/soc/codecs/cs35l41.c
··· 333 333 SOC_SINGLE("HW Noise Gate Enable", CS35L41_NG_CFG, 8, 63, 0), 334 334 SOC_SINGLE("HW Noise Gate Delay", CS35L41_NG_CFG, 4, 7, 0), 335 335 SOC_SINGLE("HW Noise Gate Threshold", CS35L41_NG_CFG, 0, 7, 0), 336 - SOC_SINGLE("Aux Noise Gate CH1 Enable", 336 + SOC_SINGLE("Aux Noise Gate CH1 Switch", 337 337 CS35L41_MIXER_NGATE_CH1_CFG, 16, 1, 0), 338 338 SOC_SINGLE("Aux Noise Gate CH1 Entry Delay", 339 339 CS35L41_MIXER_NGATE_CH1_CFG, 8, 15, 0), ··· 341 341 CS35L41_MIXER_NGATE_CH1_CFG, 0, 7, 0), 342 342 SOC_SINGLE("Aux Noise Gate CH2 Entry Delay", 343 343 CS35L41_MIXER_NGATE_CH2_CFG, 8, 15, 0), 344 - SOC_SINGLE("Aux Noise Gate CH2 Enable", 344 + SOC_SINGLE("Aux Noise Gate CH2 Switch", 345 345 CS35L41_MIXER_NGATE_CH2_CFG, 16, 1, 0), 346 346 SOC_SINGLE("Aux Noise Gate CH2 Threshold", 347 347 CS35L41_MIXER_NGATE_CH2_CFG, 0, 7, 0), 348 - SOC_SINGLE("SCLK Force", CS35L41_SP_FORMAT, CS35L41_SCLK_FRC_SHIFT, 1, 0), 349 - SOC_SINGLE("LRCLK Force", CS35L41_SP_FORMAT, CS35L41_LRCLK_FRC_SHIFT, 1, 0), 350 - SOC_SINGLE("Invert Class D", CS35L41_AMP_DIG_VOL_CTRL, 348 + SOC_SINGLE("SCLK Force Switch", CS35L41_SP_FORMAT, CS35L41_SCLK_FRC_SHIFT, 1, 0), 349 + SOC_SINGLE("LRCLK Force Switch", CS35L41_SP_FORMAT, CS35L41_LRCLK_FRC_SHIFT, 1, 0), 350 + SOC_SINGLE("Invert Class D Switch", CS35L41_AMP_DIG_VOL_CTRL, 351 351 CS35L41_AMP_INV_PCM_SHIFT, 1, 0), 352 - SOC_SINGLE("Amp Gain ZC", CS35L41_AMP_GAIN_CTRL, 352 + SOC_SINGLE("Amp Gain ZC Switch", CS35L41_AMP_GAIN_CTRL, 353 353 CS35L41_AMP_GAIN_ZC_SHIFT, 1, 0), 354 354 WM_ADSP2_PRELOAD_SWITCH("DSP1", 1), 355 355 WM_ADSP_FW_CONTROL("DSP1", 0),
+4 -1
sound/soc/codecs/cs47l15.c
··· 122 122 snd_soc_kcontrol_component(kcontrol); 123 123 struct cs47l15 *cs47l15 = snd_soc_component_get_drvdata(component); 124 124 125 + if (!!ucontrol->value.integer.value[0] == cs47l15->in1_lp_mode) 126 + return 0; 127 + 125 128 switch (ucontrol->value.integer.value[0]) { 126 129 case 0: 127 130 /* Set IN1 to normal mode */ ··· 153 150 break; 154 151 } 155 152 156 - return 0; 153 + return 1; 157 154 } 158 155 159 156 static const struct snd_kcontrol_new cs47l15_snd_controls[] = {
+10 -4
sound/soc/codecs/madera.c
··· 618 618 end: 619 619 snd_soc_dapm_mutex_unlock(dapm); 620 620 621 - return snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL); 621 + ret = snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL); 622 + if (ret < 0) { 623 + dev_err(madera->dev, "Failed to update demux power state: %d\n", ret); 624 + return ret; 625 + } 626 + 627 + return change; 622 628 } 623 629 EXPORT_SYMBOL_GPL(madera_out1_demux_put); 624 630 ··· 899 893 struct soc_enum *e = (struct soc_enum *)kcontrol->private_value; 900 894 const int adsp_num = e->shift_l; 901 895 const unsigned int item = ucontrol->value.enumerated.item[0]; 902 - int ret; 896 + int ret = 0; 903 897 904 898 if (item >= e->items) 905 899 return -EINVAL; ··· 916 910 "Cannot change '%s' while in use by active audio paths\n", 917 911 kcontrol->id.name); 918 912 ret = -EBUSY; 919 - } else { 913 + } else if (priv->adsp_rate_cache[adsp_num] != e->values[item]) { 920 914 /* Volatile register so defer until the codec is powered up */ 921 915 priv->adsp_rate_cache[adsp_num] = e->values[item]; 922 - ret = 0; 916 + ret = 1; 923 917 } 924 918 925 919 mutex_unlock(&priv->rate_lock);
+11 -1
sound/soc/codecs/max98373-sdw.c
··· 862 862 return max98373_init(slave, regmap); 863 863 } 864 864 865 + static int max98373_sdw_remove(struct sdw_slave *slave) 866 + { 867 + struct max98373_priv *max98373 = dev_get_drvdata(&slave->dev); 868 + 869 + if (max98373->first_hw_init) 870 + pm_runtime_disable(&slave->dev); 871 + 872 + return 0; 873 + } 874 + 865 875 #if defined(CONFIG_OF) 866 876 static const struct of_device_id max98373_of_match[] = { 867 877 { .compatible = "maxim,max98373", }, ··· 903 893 .pm = &max98373_pm, 904 894 }, 905 895 .probe = max98373_sdw_probe, 906 - .remove = NULL, 896 + .remove = max98373_sdw_remove, 907 897 .ops = &max98373_slave_ops, 908 898 .id_table = max98373_id, 909 899 };
+11
sound/soc/codecs/rt1308-sdw.c
··· 691 691 return 0; 692 692 } 693 693 694 + static int rt1308_sdw_remove(struct sdw_slave *slave) 695 + { 696 + struct rt1308_sdw_priv *rt1308 = dev_get_drvdata(&slave->dev); 697 + 698 + if (rt1308->first_hw_init) 699 + pm_runtime_disable(&slave->dev); 700 + 701 + return 0; 702 + } 703 + 694 704 static const struct sdw_device_id rt1308_id[] = { 695 705 SDW_SLAVE_ENTRY_EXT(0x025d, 0x1308, 0x2, 0, 0), 696 706 {}, ··· 760 750 .pm = &rt1308_pm, 761 751 }, 762 752 .probe = rt1308_sdw_probe, 753 + .remove = rt1308_sdw_remove, 763 754 .ops = &rt1308_slave_ops, 764 755 .id_table = rt1308_id, 765 756 };
+11
sound/soc/codecs/rt1316-sdw.c
··· 676 676 return rt1316_sdw_init(&slave->dev, regmap, slave); 677 677 } 678 678 679 + static int rt1316_sdw_remove(struct sdw_slave *slave) 680 + { 681 + struct rt1316_sdw_priv *rt1316 = dev_get_drvdata(&slave->dev); 682 + 683 + if (rt1316->first_hw_init) 684 + pm_runtime_disable(&slave->dev); 685 + 686 + return 0; 687 + } 688 + 679 689 static const struct sdw_device_id rt1316_id[] = { 680 690 SDW_SLAVE_ENTRY_EXT(0x025d, 0x1316, 0x3, 0x1, 0), 681 691 {}, ··· 745 735 .pm = &rt1316_pm, 746 736 }, 747 737 .probe = rt1316_sdw_probe, 738 + .remove = rt1316_sdw_remove, 748 739 .ops = &rt1316_slave_ops, 749 740 .id_table = rt1316_id, 750 741 };
+4 -1
sound/soc/codecs/rt5682-sdw.c
··· 719 719 { 720 720 struct rt5682_priv *rt5682 = dev_get_drvdata(&slave->dev); 721 721 722 - if (rt5682 && rt5682->hw_init) 722 + if (rt5682->hw_init) 723 723 cancel_delayed_work_sync(&rt5682->jack_detect_work); 724 + 725 + if (rt5682->first_hw_init) 726 + pm_runtime_disable(&slave->dev); 724 727 725 728 return 0; 726 729 }
+5 -1
sound/soc/codecs/rt700-sdw.c
··· 13 13 #include <linux/soundwire/sdw_type.h> 14 14 #include <linux/soundwire/sdw_registers.h> 15 15 #include <linux/module.h> 16 + #include <linux/pm_runtime.h> 16 17 #include <linux/regmap.h> 17 18 #include <sound/soc.h> 18 19 #include "rt700.h" ··· 464 463 { 465 464 struct rt700_priv *rt700 = dev_get_drvdata(&slave->dev); 466 465 467 - if (rt700 && rt700->hw_init) { 466 + if (rt700->hw_init) { 468 467 cancel_delayed_work_sync(&rt700->jack_detect_work); 469 468 cancel_delayed_work_sync(&rt700->jack_btn_check_work); 470 469 } 470 + 471 + if (rt700->first_hw_init) 472 + pm_runtime_disable(&slave->dev); 471 473 472 474 return 0; 473 475 }
+19 -11
sound/soc/codecs/rt700.c
··· 162 162 if (!rt700->hs_jack) 163 163 return; 164 164 165 - if (!rt700->component->card->instantiated) 165 + if (!rt700->component->card || !rt700->component->card->instantiated) 166 166 return; 167 167 168 168 reg = RT700_VERB_GET_PIN_SENSE | RT700_HP_OUT; ··· 315 315 struct snd_soc_jack *hs_jack, void *data) 316 316 { 317 317 struct rt700_priv *rt700 = snd_soc_component_get_drvdata(component); 318 + int ret; 318 319 319 320 rt700->hs_jack = hs_jack; 320 321 321 - if (!rt700->hw_init) { 322 - dev_dbg(&rt700->slave->dev, 323 - "%s hw_init not ready yet\n", __func__); 322 + ret = pm_runtime_resume_and_get(component->dev); 323 + if (ret < 0) { 324 + if (ret != -EACCES) { 325 + dev_err(component->dev, "%s: failed to resume %d\n", __func__, ret); 326 + return ret; 327 + } 328 + 329 + /* pm_runtime not enabled yet */ 330 + dev_dbg(component->dev, "%s: skipping jack init for now\n", __func__); 324 331 return 0; 325 332 } 326 333 327 334 rt700_jack_init(rt700); 335 + 336 + pm_runtime_mark_last_busy(component->dev); 337 + pm_runtime_put_autosuspend(component->dev); 328 338 329 339 return 0; 330 340 } ··· 1125 1115 1126 1116 mutex_init(&rt700->disable_irq_lock); 1127 1117 1118 + INIT_DELAYED_WORK(&rt700->jack_detect_work, 1119 + rt700_jack_detect_handler); 1120 + INIT_DELAYED_WORK(&rt700->jack_btn_check_work, 1121 + rt700_btn_check_handler); 1122 + 1128 1123 /* 1129 1124 * Mark hw_init to false 1130 1125 * HW init will be performed when device reports present ··· 1223 1208 1224 1209 /* Finish Initial Settings, set power to D3 */ 1225 1210 regmap_write(rt700->regmap, RT700_SET_AUDIO_POWER_STATE, AC_PWRST_D3); 1226 - 1227 - if (!rt700->first_hw_init) { 1228 - INIT_DELAYED_WORK(&rt700->jack_detect_work, 1229 - rt700_jack_detect_handler); 1230 - INIT_DELAYED_WORK(&rt700->jack_btn_check_work, 1231 - rt700_btn_check_handler); 1232 - } 1233 1211 1234 1212 /* 1235 1213 * if set_jack callback occurred early than io_init,
+8 -1
sound/soc/codecs/rt711-sdca-sdw.c
··· 11 11 #include <linux/mod_devicetable.h> 12 12 #include <linux/soundwire/sdw_registers.h> 13 13 #include <linux/module.h> 14 + #include <linux/pm_runtime.h> 14 15 15 16 #include "rt711-sdca.h" 16 17 #include "rt711-sdca-sdw.h" ··· 365 364 { 366 365 struct rt711_sdca_priv *rt711 = dev_get_drvdata(&slave->dev); 367 366 368 - if (rt711 && rt711->hw_init) { 367 + if (rt711->hw_init) { 369 368 cancel_delayed_work_sync(&rt711->jack_detect_work); 370 369 cancel_delayed_work_sync(&rt711->jack_btn_check_work); 371 370 } 371 + 372 + if (rt711->first_hw_init) 373 + pm_runtime_disable(&slave->dev); 374 + 375 + mutex_destroy(&rt711->calibrate_mutex); 376 + mutex_destroy(&rt711->disable_irq_lock); 372 377 373 378 return 0; 374 379 }
+21 -23
sound/soc/codecs/rt711-sdca.c
··· 34 34 35 35 ret = regmap_write(regmap, addr, value); 36 36 if (ret < 0) 37 - dev_err(rt711->component->dev, 37 + dev_err(&rt711->slave->dev, 38 38 "Failed to set private value: %06x <= %04x ret=%d\n", 39 39 addr, value, ret); 40 40 ··· 50 50 51 51 ret = regmap_read(regmap, addr, value); 52 52 if (ret < 0) 53 - dev_err(rt711->component->dev, 53 + dev_err(&rt711->slave->dev, 54 54 "Failed to get private value: %06x => %04x ret=%d\n", 55 55 addr, *value, ret); 56 56 ··· 294 294 if (!rt711->hs_jack) 295 295 return; 296 296 297 - if (!rt711->component->card->instantiated) 297 + if (!rt711->component->card || !rt711->component->card->instantiated) 298 298 return; 299 299 300 300 /* SDW_SCP_SDCA_INT_SDCA_0 is used for jack detection */ ··· 487 487 struct snd_soc_jack *hs_jack, void *data) 488 488 { 489 489 struct rt711_sdca_priv *rt711 = snd_soc_component_get_drvdata(component); 490 + int ret; 490 491 491 492 rt711->hs_jack = hs_jack; 492 493 493 - if (!rt711->hw_init) { 494 - dev_dbg(&rt711->slave->dev, 495 - "%s hw_init not ready yet\n", __func__); 494 + ret = pm_runtime_resume_and_get(component->dev); 495 + if (ret < 0) { 496 + if (ret != -EACCES) { 497 + dev_err(component->dev, "%s: failed to resume %d\n", __func__, ret); 498 + return ret; 499 + } 500 + 501 + /* pm_runtime not enabled yet */ 502 + dev_dbg(component->dev, "%s: skipping jack init for now\n", __func__); 496 503 return 0; 497 504 } 498 505 499 506 rt711_sdca_jack_init(rt711); 507 + 508 + pm_runtime_mark_last_busy(component->dev); 509 + pm_runtime_put_autosuspend(component->dev); 510 + 500 511 return 0; 501 512 } 502 513 ··· 1201 1190 return 0; 1202 1191 } 1203 1192 1204 - static void rt711_sdca_remove(struct snd_soc_component *component) 1205 - { 1206 - struct rt711_sdca_priv *rt711 = snd_soc_component_get_drvdata(component); 1207 - 1208 - regcache_cache_only(rt711->regmap, true); 1209 - regcache_cache_only(rt711->mbq_regmap, true); 1210 - } 1211 - 1212 1193 static const struct 
snd_soc_component_driver soc_sdca_dev_rt711 = { 1213 1194 .probe = rt711_sdca_probe, 1214 1195 .controls = rt711_sdca_snd_controls, ··· 1210 1207 .dapm_routes = rt711_sdca_audio_map, 1211 1208 .num_dapm_routes = ARRAY_SIZE(rt711_sdca_audio_map), 1212 1209 .set_jack = rt711_sdca_set_jack_detect, 1213 - .remove = rt711_sdca_remove, 1214 1210 .endianness = 1, 1215 1211 }; 1216 1212 ··· 1414 1412 rt711->regmap = regmap; 1415 1413 rt711->mbq_regmap = mbq_regmap; 1416 1414 1415 + mutex_init(&rt711->calibrate_mutex); 1417 1416 mutex_init(&rt711->disable_irq_lock); 1417 + 1418 + INIT_DELAYED_WORK(&rt711->jack_detect_work, rt711_sdca_jack_detect_handler); 1419 + INIT_DELAYED_WORK(&rt711->jack_btn_check_work, rt711_sdca_btn_check_handler); 1418 1420 1419 1421 /* 1420 1422 * Mark hw_init to false ··· 1550 1544 /* ge_exclusive_inbox_en disable */ 1551 1545 rt711_sdca_index_update_bits(rt711, RT711_VENDOR_HDA_CTL, 1552 1546 RT711_PUSH_BTN_INT_CTL0, 0x20, 0x00); 1553 - 1554 - if (!rt711->first_hw_init) { 1555 - INIT_DELAYED_WORK(&rt711->jack_detect_work, 1556 - rt711_sdca_jack_detect_handler); 1557 - INIT_DELAYED_WORK(&rt711->jack_btn_check_work, 1558 - rt711_sdca_btn_check_handler); 1559 - mutex_init(&rt711->calibrate_mutex); 1560 - } 1561 1547 1562 1548 /* calibration */ 1563 1549 ret = rt711_sdca_calibration(rt711);
+8 -1
sound/soc/codecs/rt711-sdw.c
··· 13 13 #include <linux/soundwire/sdw_type.h> 14 14 #include <linux/soundwire/sdw_registers.h> 15 15 #include <linux/module.h> 16 + #include <linux/pm_runtime.h> 16 17 #include <linux/regmap.h> 17 18 #include <sound/soc.h> 18 19 #include "rt711.h" ··· 465 464 { 466 465 struct rt711_priv *rt711 = dev_get_drvdata(&slave->dev); 467 466 468 - if (rt711 && rt711->hw_init) { 467 + if (rt711->hw_init) { 469 468 cancel_delayed_work_sync(&rt711->jack_detect_work); 470 469 cancel_delayed_work_sync(&rt711->jack_btn_check_work); 471 470 cancel_work_sync(&rt711->calibration_work); 472 471 } 472 + 473 + if (rt711->first_hw_init) 474 + pm_runtime_disable(&slave->dev); 475 + 476 + mutex_destroy(&rt711->calibrate_mutex); 477 + mutex_destroy(&rt711->disable_irq_lock); 473 478 474 479 return 0; 475 480 }
+20 -20
sound/soc/codecs/rt711.c
··· 242 242 if (!rt711->hs_jack) 243 243 return; 244 244 245 - if (!rt711->component->card->instantiated) 245 + if (!rt711->component->card || !rt711->component->card->instantiated) 246 246 return; 247 247 248 248 if (pm_runtime_status_suspended(rt711->slave->dev.parent)) { ··· 457 457 struct snd_soc_jack *hs_jack, void *data) 458 458 { 459 459 struct rt711_priv *rt711 = snd_soc_component_get_drvdata(component); 460 + int ret; 460 461 461 462 rt711->hs_jack = hs_jack; 462 463 463 - if (!rt711->hw_init) { 464 - dev_dbg(&rt711->slave->dev, 465 - "%s hw_init not ready yet\n", __func__); 464 + ret = pm_runtime_resume_and_get(component->dev); 465 + if (ret < 0) { 466 + if (ret != -EACCES) { 467 + dev_err(component->dev, "%s: failed to resume %d\n", __func__, ret); 468 + return ret; 469 + } 470 + 471 + /* pm_runtime not enabled yet */ 472 + dev_dbg(component->dev, "%s: skipping jack init for now\n", __func__); 466 473 return 0; 467 474 } 468 475 469 476 rt711_jack_init(rt711); 477 + 478 + pm_runtime_mark_last_busy(component->dev); 479 + pm_runtime_put_autosuspend(component->dev); 470 480 471 481 return 0; 472 482 } ··· 942 932 return 0; 943 933 } 944 934 945 - static void rt711_remove(struct snd_soc_component *component) 946 - { 947 - struct rt711_priv *rt711 = snd_soc_component_get_drvdata(component); 948 - 949 - regcache_cache_only(rt711->regmap, true); 950 - } 951 - 952 935 static const struct snd_soc_component_driver soc_codec_dev_rt711 = { 953 936 .probe = rt711_probe, 954 937 .set_bias_level = rt711_set_bias_level, ··· 952 949 .dapm_routes = rt711_audio_map, 953 950 .num_dapm_routes = ARRAY_SIZE(rt711_audio_map), 954 951 .set_jack = rt711_set_jack_detect, 955 - .remove = rt711_remove, 956 952 .endianness = 1, 957 953 }; 958 954 ··· 1206 1204 rt711->sdw_regmap = sdw_regmap; 1207 1205 rt711->regmap = regmap; 1208 1206 1207 + mutex_init(&rt711->calibrate_mutex); 1209 1208 mutex_init(&rt711->disable_irq_lock); 1209 + 1210 + INIT_DELAYED_WORK(&rt711->jack_detect_work, 
rt711_jack_detect_handler); 1211 + INIT_DELAYED_WORK(&rt711->jack_btn_check_work, rt711_btn_check_handler); 1212 + INIT_WORK(&rt711->calibration_work, rt711_calibration_work); 1210 1213 1211 1214 /* 1212 1215 * Mark hw_init to false ··· 1320 1313 1321 1314 if (rt711->first_hw_init) 1322 1315 rt711_calibration(rt711); 1323 - else { 1324 - INIT_DELAYED_WORK(&rt711->jack_detect_work, 1325 - rt711_jack_detect_handler); 1326 - INIT_DELAYED_WORK(&rt711->jack_btn_check_work, 1327 - rt711_btn_check_handler); 1328 - mutex_init(&rt711->calibrate_mutex); 1329 - INIT_WORK(&rt711->calibration_work, rt711_calibration_work); 1316 + else 1330 1317 schedule_work(&rt711->calibration_work); 1331 - } 1332 1318 1333 1319 /* 1334 1320 * if set_jack callback occurred early than io_init,
+12
sound/soc/codecs/rt715-sdca-sdw.c
··· 13 13 #include <linux/soundwire/sdw_type.h> 14 14 #include <linux/soundwire/sdw_registers.h> 15 15 #include <linux/module.h> 16 + #include <linux/pm_runtime.h> 16 17 #include <linux/regmap.h> 17 18 #include <sound/soc.h> 18 19 #include "rt715-sdca.h" ··· 194 193 return rt715_sdca_init(&slave->dev, mbq_regmap, regmap, slave); 195 194 } 196 195 196 + static int rt715_sdca_sdw_remove(struct sdw_slave *slave) 197 + { 198 + struct rt715_sdca_priv *rt715 = dev_get_drvdata(&slave->dev); 199 + 200 + if (rt715->first_hw_init) 201 + pm_runtime_disable(&slave->dev); 202 + 203 + return 0; 204 + } 205 + 197 206 static const struct sdw_device_id rt715_sdca_id[] = { 198 207 SDW_SLAVE_ENTRY_EXT(0x025d, 0x715, 0x3, 0x1, 0), 199 208 SDW_SLAVE_ENTRY_EXT(0x025d, 0x714, 0x3, 0x1, 0), ··· 278 267 .pm = &rt715_pm, 279 268 }, 280 269 .probe = rt715_sdca_sdw_probe, 270 + .remove = rt715_sdca_sdw_remove, 281 271 .ops = &rt715_sdca_slave_ops, 282 272 .id_table = rt715_sdca_id, 283 273 };
+12
sound/soc/codecs/rt715-sdw.c
··· 14 14 #include <linux/soundwire/sdw_type.h> 15 15 #include <linux/soundwire/sdw_registers.h> 16 16 #include <linux/module.h> 17 + #include <linux/pm_runtime.h> 17 18 #include <linux/of.h> 18 19 #include <linux/regmap.h> 19 20 #include <sound/soc.h> ··· 515 514 return 0; 516 515 } 517 516 517 + static int rt715_sdw_remove(struct sdw_slave *slave) 518 + { 519 + struct rt715_priv *rt715 = dev_get_drvdata(&slave->dev); 520 + 521 + if (rt715->first_hw_init) 522 + pm_runtime_disable(&slave->dev); 523 + 524 + return 0; 525 + } 526 + 518 527 static const struct sdw_device_id rt715_id[] = { 519 528 SDW_SLAVE_ENTRY_EXT(0x025d, 0x714, 0x2, 0, 0), 520 529 SDW_SLAVE_ENTRY_EXT(0x025d, 0x715, 0x2, 0, 0), ··· 586 575 .pm = &rt715_pm, 587 576 }, 588 577 .probe = rt715_sdw_probe, 578 + .remove = rt715_sdw_remove, 589 579 .ops = &rt715_slave_ops, 590 580 .id_table = rt715_id, 591 581 };
+7 -1
sound/soc/codecs/wcd9335.c
··· 1287 1287 struct snd_soc_dapm_update *update = NULL; 1288 1288 u32 port_id = w->shift; 1289 1289 1290 + if (wcd->rx_port_value[port_id] == ucontrol->value.enumerated.item[0]) 1291 + return 0; 1292 + 1290 1293 wcd->rx_port_value[port_id] = ucontrol->value.enumerated.item[0]; 1294 + 1295 + /* Remove channel from any list it's in before adding it to a new one */ 1296 + list_del_init(&wcd->rx_chs[port_id].list); 1291 1297 1292 1298 switch (wcd->rx_port_value[port_id]) { 1293 1299 case 0: 1294 - list_del_init(&wcd->rx_chs[port_id].list); 1300 + /* Channel already removed from lists. Nothing to do here */ 1295 1301 break; 1296 1302 case 1: 1297 1303 list_add_tail(&wcd->rx_chs[port_id].list,
+12
sound/soc/codecs/wcd938x.c
··· 2519 2519 struct soc_enum *e = (struct soc_enum *)kcontrol->private_value; 2520 2520 int path = e->shift_l; 2521 2521 2522 + if (wcd938x->tx_mode[path] == ucontrol->value.enumerated.item[0]) 2523 + return 0; 2524 + 2522 2525 wcd938x->tx_mode[path] = ucontrol->value.enumerated.item[0]; 2523 2526 2524 2527 return 1; ··· 2543 2540 { 2544 2541 struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol); 2545 2542 struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component); 2543 + 2544 + if (wcd938x->hph_mode == ucontrol->value.enumerated.item[0]) 2545 + return 0; 2546 2546 2547 2547 wcd938x->hph_mode = ucontrol->value.enumerated.item[0]; 2548 2548 ··· 2638 2632 struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol); 2639 2633 struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component); 2640 2634 2635 + if (wcd938x->ldoh == ucontrol->value.integer.value[0]) 2636 + return 0; 2637 + 2641 2638 wcd938x->ldoh = ucontrol->value.integer.value[0]; 2642 2639 2643 2640 return 1; ··· 2662 2653 { 2663 2654 struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol); 2664 2655 struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component); 2656 + 2657 + if (wcd938x->bcs_dis == ucontrol->value.integer.value[0]) 2658 + return 0; 2665 2659 2666 2660 wcd938x->bcs_dis = ucontrol->value.integer.value[0]; 2667 2661
+6 -2
sound/soc/codecs/wm5110.c
··· 413 413 unsigned int rnew = (!!ucontrol->value.integer.value[1]) << mc->rshift; 414 414 unsigned int lold, rold; 415 415 unsigned int lena, rena; 416 + bool change = false; 416 417 int ret; 417 418 418 419 snd_soc_dapm_mutex_lock(dapm); ··· 441 440 goto err; 442 441 } 443 442 444 - ret = regmap_update_bits(arizona->regmap, ARIZONA_DRE_ENABLE, 445 - mask, lnew | rnew); 443 + ret = regmap_update_bits_check(arizona->regmap, ARIZONA_DRE_ENABLE, 444 + mask, lnew | rnew, &change); 446 445 if (ret) { 447 446 dev_err(arizona->dev, "Failed to set DRE: %d\n", ret); 448 447 goto err; ··· 454 453 455 454 if (!rnew && rold) 456 455 wm5110_clear_pga_volume(arizona, mc->rshift); 456 + 457 + if (change) 458 + ret = 1; 457 459 458 460 err: 459 461 snd_soc_dapm_mutex_unlock(dapm);
+1 -1
sound/soc/codecs/wm_adsp.c
··· 997 997 snd_soc_dapm_sync(dapm); 998 998 } 999 999 1000 - return 0; 1000 + return 1; 1001 1001 } 1002 1002 EXPORT_SYMBOL_GPL(wm_adsp2_preloader_put); 1003 1003
+2 -2
sound/soc/intel/avs/topology.c
··· 128 128 static int 129 129 avs_parse_uuid_token(struct snd_soc_component *comp, void *elem, void *object, u32 offset) 130 130 { 131 - struct snd_soc_tplg_vendor_value_elem *tuple = elem; 131 + struct snd_soc_tplg_vendor_uuid_elem *tuple = elem; 132 132 guid_t *val = (guid_t *)((u8 *)object + offset); 133 133 134 - guid_copy((guid_t *)val, (const guid_t *)&tuple->value); 134 + guid_copy((guid_t *)val, (const guid_t *)&tuple->uuid); 135 135 136 136 return 0; 137 137 }
+11 -2
sound/soc/intel/boards/bytcr_wm5102.c
··· 421 421 priv->spkvdd_en_gpio = gpiod_get(codec_dev, "wlf,spkvdd-ena", GPIOD_OUT_LOW); 422 422 put_device(codec_dev); 423 423 424 - if (IS_ERR(priv->spkvdd_en_gpio)) 425 - return dev_err_probe(dev, PTR_ERR(priv->spkvdd_en_gpio), "getting spkvdd-GPIO\n"); 424 + if (IS_ERR(priv->spkvdd_en_gpio)) { 425 + ret = PTR_ERR(priv->spkvdd_en_gpio); 426 + /* 427 + * The spkvdd gpio-lookup is registered by: drivers/mfd/arizona-spi.c, 428 + * so -ENOENT means that arizona-spi hasn't probed yet. 429 + */ 430 + if (ret == -ENOENT) 431 + ret = -EPROBE_DEFER; 432 + 433 + return dev_err_probe(dev, ret, "getting spkvdd-GPIO\n"); 434 + } 426 435 427 436 /* override platform name, if required */ 428 437 byt_wm5102_card.dev = dev;
+29 -22
sound/soc/intel/boards/sof_sdw.c
··· 1398 1398 .late_probe = sof_sdw_card_late_probe, 1399 1399 }; 1400 1400 1401 + static void mc_dailink_exit_loop(struct snd_soc_card *card) 1402 + { 1403 + struct snd_soc_dai_link *link; 1404 + int ret; 1405 + int i, j; 1406 + 1407 + for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) { 1408 + if (!codec_info_list[i].exit) 1409 + continue; 1410 + /* 1411 + * We don't need to call .exit function if there is no matched 1412 + * dai link found. 1413 + */ 1414 + for_each_card_prelinks(card, j, link) { 1415 + if (!strcmp(link->codecs[0].dai_name, 1416 + codec_info_list[i].dai_name)) { 1417 + ret = codec_info_list[i].exit(card, link); 1418 + if (ret) 1419 + dev_warn(card->dev, 1420 + "codec exit failed %d\n", 1421 + ret); 1422 + break; 1423 + } 1424 + } 1425 + } 1426 + } 1427 + 1401 1428 static int mc_probe(struct platform_device *pdev) 1402 1429 { 1403 1430 struct snd_soc_card *card = &card_sof_sdw; ··· 1489 1462 ret = devm_snd_soc_register_card(&pdev->dev, card); 1490 1463 if (ret) { 1491 1464 dev_err(card->dev, "snd_soc_register_card failed %d\n", ret); 1465 + mc_dailink_exit_loop(card); 1492 1466 return ret; 1493 1467 } 1494 1468 ··· 1501 1473 static int mc_remove(struct platform_device *pdev) 1502 1474 { 1503 1475 struct snd_soc_card *card = platform_get_drvdata(pdev); 1504 - struct snd_soc_dai_link *link; 1505 - int ret; 1506 - int i, j; 1507 1476 1508 - for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) { 1509 - if (!codec_info_list[i].exit) 1510 - continue; 1511 - /* 1512 - * We don't need to call .exit function if there is no matched 1513 - * dai link found. 1514 - */ 1515 - for_each_card_prelinks(card, j, link) { 1516 - if (!strcmp(link->codecs[0].dai_name, 1517 - codec_info_list[i].dai_name)) { 1518 - ret = codec_info_list[i].exit(card, link); 1519 - if (ret) 1520 - dev_warn(&pdev->dev, 1521 - "codec exit failed %d\n", 1522 - ret); 1523 - break; 1524 - } 1525 - } 1526 - } 1477 + mc_dailink_exit_loop(card); 1527 1478 1528 1479 return 0; 1529 1480 }
+6
sound/soc/qcom/qdsp6/q6apm-dai.c
··· 147 147 cfg.num_channels = runtime->channels; 148 148 cfg.bit_width = prtd->bits_per_sample; 149 149 150 + if (prtd->state) { 151 + /* clear the previous setup if any */ 152 + q6apm_graph_stop(prtd->graph); 153 + q6apm_unmap_memory_regions(prtd->graph, substream->stream); 154 + } 155 + 150 156 prtd->pcm_count = snd_pcm_lib_period_bytes(substream); 151 157 prtd->pos = 0; 152 158 /* rate and channels are sent to audio driver */
+129 -31
sound/soc/rockchip/rockchip_i2s.c
··· 13 13 #include <linux/of_gpio.h> 14 14 #include <linux/of_device.h> 15 15 #include <linux/clk.h> 16 + #include <linux/pinctrl/consumer.h> 16 17 #include <linux/pm_runtime.h> 17 18 #include <linux/regmap.h> 18 19 #include <linux/spinlock.h> ··· 55 54 const struct rk_i2s_pins *pins; 56 55 unsigned int bclk_ratio; 57 56 spinlock_t lock; /* tx/rx lock */ 57 + struct pinctrl *pinctrl; 58 + struct pinctrl_state *bclk_on; 59 + struct pinctrl_state *bclk_off; 58 60 }; 61 + 62 + static int i2s_pinctrl_select_bclk_on(struct rk_i2s_dev *i2s) 63 + { 64 + int ret = 0; 65 + 66 + if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_on)) 67 + ret = pinctrl_select_state(i2s->pinctrl, 68 + i2s->bclk_on); 69 + 70 + if (ret) 71 + dev_err(i2s->dev, "bclk enable failed %d\n", ret); 72 + 73 + return ret; 74 + } 75 + 76 + static int i2s_pinctrl_select_bclk_off(struct rk_i2s_dev *i2s) 77 + { 78 + 79 + int ret = 0; 80 + 81 + if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_off)) 82 + ret = pinctrl_select_state(i2s->pinctrl, 83 + i2s->bclk_off); 84 + 85 + if (ret) 86 + dev_err(i2s->dev, "bclk disable failed %d\n", ret); 87 + 88 + return ret; 89 + } 59 90 60 91 static int i2s_runtime_suspend(struct device *dev) 61 92 { ··· 125 92 return snd_soc_dai_get_drvdata(dai); 126 93 } 127 94 128 - static void rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on) 95 + static int rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on) 129 96 { 130 97 unsigned int val = 0; 131 98 int retry = 10; 99 + int ret = 0; 132 100 133 101 spin_lock(&i2s->lock); 134 102 if (on) { 135 - regmap_update_bits(i2s->regmap, I2S_DMACR, 136 - I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE); 103 + ret = regmap_update_bits(i2s->regmap, I2S_DMACR, 104 + I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE); 105 + if (ret < 0) 106 + goto end; 137 107 138 - regmap_update_bits(i2s->regmap, I2S_XFER, 139 - I2S_XFER_TXS_START | I2S_XFER_RXS_START, 140 - I2S_XFER_TXS_START | I2S_XFER_RXS_START); 108 + ret = regmap_update_bits(i2s->regmap, 
I2S_XFER, 109 + I2S_XFER_TXS_START | I2S_XFER_RXS_START, 110 + I2S_XFER_TXS_START | I2S_XFER_RXS_START); 111 + if (ret < 0) 112 + goto end; 141 113 142 114 i2s->tx_start = true; 143 115 } else { 144 116 i2s->tx_start = false; 145 117 146 - regmap_update_bits(i2s->regmap, I2S_DMACR, 147 - I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE); 118 + ret = regmap_update_bits(i2s->regmap, I2S_DMACR, 119 + I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE); 120 + if (ret < 0) 121 + goto end; 148 122 149 123 if (!i2s->rx_start) { 150 - regmap_update_bits(i2s->regmap, I2S_XFER, 151 - I2S_XFER_TXS_START | 152 - I2S_XFER_RXS_START, 153 - I2S_XFER_TXS_STOP | 154 - I2S_XFER_RXS_STOP); 124 + ret = regmap_update_bits(i2s->regmap, I2S_XFER, 125 + I2S_XFER_TXS_START | 126 + I2S_XFER_RXS_START, 127 + I2S_XFER_TXS_STOP | 128 + I2S_XFER_RXS_STOP); 129 + if (ret < 0) 130 + goto end; 155 131 156 132 udelay(150); 157 - regmap_update_bits(i2s->regmap, I2S_CLR, 158 - I2S_CLR_TXC | I2S_CLR_RXC, 159 - I2S_CLR_TXC | I2S_CLR_RXC); 133 + ret = regmap_update_bits(i2s->regmap, I2S_CLR, 134 + I2S_CLR_TXC | I2S_CLR_RXC, 135 + I2S_CLR_TXC | I2S_CLR_RXC); 136 + if (ret < 0) 137 + goto end; 160 138 161 139 regmap_read(i2s->regmap, I2S_CLR, &val); 162 140 ··· 182 138 } 183 139 } 184 140 } 141 + end: 185 142 spin_unlock(&i2s->lock); 143 + if (ret < 0) 144 + dev_err(i2s->dev, "lrclk update failed\n"); 145 + 146 + return ret; 186 147 } 187 148 188 - static void rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on) 149 + static int rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on) 189 150 { 190 151 unsigned int val = 0; 191 152 int retry = 10; 153 + int ret = 0; 192 154 193 155 spin_lock(&i2s->lock); 194 156 if (on) { 195 - regmap_update_bits(i2s->regmap, I2S_DMACR, 157 + ret = regmap_update_bits(i2s->regmap, I2S_DMACR, 196 158 I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_ENABLE); 159 + if (ret < 0) 160 + goto end; 197 161 198 - regmap_update_bits(i2s->regmap, I2S_XFER, 162 + ret = regmap_update_bits(i2s->regmap, I2S_XFER, 199 
163 I2S_XFER_TXS_START | I2S_XFER_RXS_START, 200 164 I2S_XFER_TXS_START | I2S_XFER_RXS_START); 165 + if (ret < 0) 166 + goto end; 201 167 202 168 i2s->rx_start = true; 203 169 } else { 204 170 i2s->rx_start = false; 205 171 206 - regmap_update_bits(i2s->regmap, I2S_DMACR, 172 + ret = regmap_update_bits(i2s->regmap, I2S_DMACR, 207 173 I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_DISABLE); 174 + if (ret < 0) 175 + goto end; 208 176 209 177 if (!i2s->tx_start) { 210 - regmap_update_bits(i2s->regmap, I2S_XFER, 178 + ret = regmap_update_bits(i2s->regmap, I2S_XFER, 211 179 I2S_XFER_TXS_START | 212 180 I2S_XFER_RXS_START, 213 181 I2S_XFER_TXS_STOP | 214 182 I2S_XFER_RXS_STOP); 215 - 183 + if (ret < 0) 184 + goto end; 216 185 udelay(150); 217 - regmap_update_bits(i2s->regmap, I2S_CLR, 186 + ret = regmap_update_bits(i2s->regmap, I2S_CLR, 218 187 I2S_CLR_TXC | I2S_CLR_RXC, 219 188 I2S_CLR_TXC | I2S_CLR_RXC); 220 - 189 + if (ret < 0) 190 + goto end; 221 191 regmap_read(i2s->regmap, I2S_CLR, &val); 222 - 223 192 /* Should wait for clear operation to finish */ 224 193 while (val) { 225 194 regmap_read(i2s->regmap, I2S_CLR, &val); ··· 244 187 } 245 188 } 246 189 } 190 + end: 247 191 spin_unlock(&i2s->lock); 192 + if (ret < 0) 193 + dev_err(i2s->dev, "lrclk update failed\n"); 194 + 195 + return ret; 248 196 } 249 197 250 198 static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai, ··· 487 425 case SNDRV_PCM_TRIGGER_RESUME: 488 426 case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 489 427 if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) 490 - rockchip_snd_rxctrl(i2s, 1); 428 + ret = rockchip_snd_rxctrl(i2s, 1); 491 429 else 492 - rockchip_snd_txctrl(i2s, 1); 430 + ret = rockchip_snd_txctrl(i2s, 1); 431 + /* Do not turn on bclk if lrclk open fails. 
*/ 432 + if (ret < 0) 433 + return ret; 434 + i2s_pinctrl_select_bclk_on(i2s); 493 435 break; 494 436 case SNDRV_PCM_TRIGGER_SUSPEND: 495 437 case SNDRV_PCM_TRIGGER_STOP: 496 438 case SNDRV_PCM_TRIGGER_PAUSE_PUSH: 497 - if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) 498 - rockchip_snd_rxctrl(i2s, 0); 499 - else 500 - rockchip_snd_txctrl(i2s, 0); 439 + if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) { 440 + if (!i2s->tx_start) 441 + i2s_pinctrl_select_bclk_off(i2s); 442 + ret = rockchip_snd_rxctrl(i2s, 0); 443 + } else { 444 + if (!i2s->rx_start) 445 + i2s_pinctrl_select_bclk_off(i2s); 446 + ret = rockchip_snd_txctrl(i2s, 0); 447 + } 501 448 break; 502 449 default: 503 450 ret = -EINVAL; ··· 807 736 } 808 737 809 738 i2s->bclk_ratio = 64; 739 + i2s->pinctrl = devm_pinctrl_get(&pdev->dev); 740 + if (IS_ERR(i2s->pinctrl)) 741 + dev_err(&pdev->dev, "failed to find i2s pinctrl\n"); 742 + 743 + i2s->bclk_on = pinctrl_lookup_state(i2s->pinctrl, 744 + "bclk_on"); 745 + if (IS_ERR_OR_NULL(i2s->bclk_on)) 746 + dev_err(&pdev->dev, "failed to find i2s default state\n"); 747 + else 748 + dev_dbg(&pdev->dev, "find i2s bclk state\n"); 749 + 750 + i2s->bclk_off = pinctrl_lookup_state(i2s->pinctrl, 751 + "bclk_off"); 752 + if (IS_ERR_OR_NULL(i2s->bclk_off)) 753 + dev_err(&pdev->dev, "failed to find i2s gpio state\n"); 754 + else 755 + dev_dbg(&pdev->dev, "find i2s bclk_off state\n"); 756 + 757 + i2s_pinctrl_select_bclk_off(i2s); 758 + 759 + i2s->playback_dma_data.addr = res->start + I2S_TXDR; 760 + i2s->playback_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 761 + i2s->playback_dma_data.maxburst = 4; 762 + 763 + i2s->capture_dma_data.addr = res->start + I2S_RXDR; 764 + i2s->capture_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 765 + i2s->capture_dma_data.maxburst = 4; 810 766 811 767 dev_set_drvdata(&pdev->dev, i2s); 812 768
+5
sound/soc/soc-dapm.c
··· 62 62 snd_soc_dapm_new_control_unlocked(struct snd_soc_dapm_context *dapm, 63 63 const struct snd_soc_dapm_widget *widget); 64 64 65 + static unsigned int soc_dapm_read(struct snd_soc_dapm_context *dapm, int reg); 66 + 65 67 /* dapm power sequences - make this per codec in the future */ 66 68 static int dapm_up_seq[] = { 67 69 [snd_soc_dapm_pre] = 1, ··· 444 442 445 443 snd_soc_dapm_add_path(widget->dapm, data->widget, 446 444 widget, NULL, NULL); 445 + } else if (e->reg != SND_SOC_NOPM) { 446 + data->value = soc_dapm_read(widget->dapm, e->reg) & 447 + (e->mask << e->shift_l); 447 448 } 448 449 break; 449 450 default:
+2 -2
sound/soc/soc-ops.c
··· 526 526 return -EINVAL; 527 527 if (mc->platform_max && tmp > mc->platform_max) 528 528 return -EINVAL; 529 - if (tmp > mc->max - mc->min + 1) 529 + if (tmp > mc->max - mc->min) 530 530 return -EINVAL; 531 531 532 532 if (invert) ··· 547 547 return -EINVAL; 548 548 if (mc->platform_max && tmp > mc->platform_max) 549 549 return -EINVAL; 550 - if (tmp > mc->max - mc->min + 1) 550 + if (tmp > mc->max - mc->min) 551 551 return -EINVAL; 552 552 553 553 if (invert)
+9 -1
sound/soc/sof/intel/hda-dsp.c
··· 181 181 * Power Management. 182 182 */ 183 183 184 - static int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask) 184 + int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask) 185 185 { 186 + struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata; 187 + const struct sof_intel_dsp_desc *chip = hda->desc; 186 188 unsigned int cpa; 187 189 u32 adspcs; 188 190 int ret; 191 + 192 + /* restrict core_mask to host managed cores mask */ 193 + core_mask &= chip->host_managed_cores_mask; 194 + /* return if core_mask is not valid */ 195 + if (!core_mask) 196 + return 0; 189 197 190 198 /* update bits */ 191 199 snd_sof_dsp_update_bits(sdev, HDA_DSP_BAR, HDA_DSP_REG_ADSPCS,
+7 -6
sound/soc/sof/intel/hda-loader.c
··· 95 95 } 96 96 97 97 /* 98 - * first boot sequence has some extra steps. core 0 waits for power 99 - * status on core 1, so power up core 1 also momentarily, keep it in 100 - * reset/stall and then turn it off 98 + * first boot sequence has some extra steps: 99 + * power on all host-managed cores, unstall/run only the boot core to boot the 100 + * DSP, then power off any non-boot cores that were powered on. 101 101 */ 102 102 static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag, bool imr_boot) 103 103 { ··· 110 110 int ret; 111 111 112 112 /* step 1: power up corex */ 113 - ret = hda_dsp_enable_core(sdev, chip->host_managed_cores_mask); 113 + ret = hda_dsp_core_power_up(sdev, chip->host_managed_cores_mask); 114 114 if (ret < 0) { 115 115 if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS) 116 116 dev_err(sdev->dev, "error: dsp core 0/1 power up failed\n"); ··· 127 127 snd_sof_dsp_write(sdev, HDA_DSP_BAR, chip->ipc_req, ipc_hdr); 128 128 129 129 /* step 3: unset core 0 reset state & unstall/run core 0 */ 130 - ret = hda_dsp_core_run(sdev, BIT(0)); 130 + ret = hda_dsp_core_run(sdev, chip->init_core_mask); 131 131 if (ret < 0) { 132 132 if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS) 133 133 dev_err(sdev->dev, ··· 389 389 struct snd_dma_buffer dmab; 390 390 int ret, ret1, i; 391 391 392 - if (hda->imrboot_supported && !sdev->first_boot) { 392 + if (sdev->system_suspend_target < SOF_SUSPEND_S4 && 393 + hda->imrboot_supported && !sdev->first_boot) { 393 394 dev_dbg(sdev->dev, "IMR restore supported, booting from IMR directly\n"); 394 395 hda->boot_iteration = 0; 395 396 ret = hda_dsp_boot_imr(sdev);
+1 -73
sound/soc/sof/intel/hda-pcm.c
··· 192 192 goto found; 193 193 } 194 194 195 - switch (sof_hda_position_quirk) { 196 - case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY: 197 - /* 198 - * This legacy code, inherited from the Skylake driver, 199 - * mixes DPIB registers and DPIB DDR updates and 200 - * does not seem to follow any known hardware recommendations. 201 - * It's not clear e.g. why there is a different flow 202 - * for capture and playback, the only information that matters is 203 - * what traffic class is used, and on all SOF-enabled platforms 204 - * only VC0 is supported so the work-around was likely not necessary 205 - * and quite possibly wrong. 206 - */ 207 - 208 - /* DPIB/posbuf position mode: 209 - * For Playback, Use DPIB register from HDA space which 210 - * reflects the actual data transferred. 211 - * For Capture, Use the position buffer for pointer, as DPIB 212 - * is not accurate enough, its update may be completed 213 - * earlier than the data written to DDR. 214 - */ 215 - if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 216 - pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 217 - AZX_REG_VS_SDXDPIB_XBASE + 218 - (AZX_REG_VS_SDXDPIB_XINTERVAL * 219 - hstream->index)); 220 - } else { 221 - /* 222 - * For capture stream, we need more workaround to fix the 223 - * position incorrect issue: 224 - * 225 - * 1. Wait at least 20us before reading position buffer after 226 - * the interrupt generated(IOC), to make sure position update 227 - * happens on frame boundary i.e. 20.833uSec for 48KHz. 228 - * 2. Perform a dummy Read to DPIB register to flush DMA 229 - * position value. 230 - * 3. Read the DMA Position from posbuf. Now the readback 231 - * value should be >= period boundary. 
232 - */ 233 - usleep_range(20, 21); 234 - snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 235 - AZX_REG_VS_SDXDPIB_XBASE + 236 - (AZX_REG_VS_SDXDPIB_XINTERVAL * 237 - hstream->index)); 238 - pos = snd_hdac_stream_get_pos_posbuf(hstream); 239 - } 240 - break; 241 - case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS: 242 - /* 243 - * In case VC1 traffic is disabled this is the recommended option 244 - */ 245 - pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 246 - AZX_REG_VS_SDXDPIB_XBASE + 247 - (AZX_REG_VS_SDXDPIB_XINTERVAL * 248 - hstream->index)); 249 - break; 250 - case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE: 251 - /* 252 - * This is the recommended option when VC1 is enabled. 253 - * While this isn't needed for SOF platforms it's added for 254 - * consistency and debug. 255 - */ 256 - pos = snd_hdac_stream_get_pos_posbuf(hstream); 257 - break; 258 - default: 259 - dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n", 260 - sof_hda_position_quirk); 261 - pos = 0; 262 - break; 263 - } 264 - 265 - if (pos >= hstream->bufsize) 266 - pos = 0; 267 - 195 + pos = hda_dsp_stream_get_position(hstream, substream->stream, true); 268 196 found: 269 197 pos = bytes_to_frames(substream->runtime, pos); 270 198
+90 -4
sound/soc/sof/intel/hda-stream.c
··· 707 707 } 708 708 709 709 static void 710 - hda_dsp_set_bytes_transferred(struct hdac_stream *hstream, u64 buffer_size) 710 + hda_dsp_compr_bytes_transferred(struct hdac_stream *hstream, int direction) 711 711 { 712 + u64 buffer_size = hstream->bufsize; 712 713 u64 prev_pos, pos, num_bytes; 713 714 714 715 div64_u64_rem(hstream->curr_pos, buffer_size, &prev_pos); 715 - pos = snd_hdac_stream_get_pos_posbuf(hstream); 716 + pos = hda_dsp_stream_get_position(hstream, direction, false); 716 717 717 718 if (pos < prev_pos) 718 719 num_bytes = (buffer_size - prev_pos) + pos; ··· 749 748 if (s->substream && sof_hda->no_ipc_position) { 750 749 snd_sof_pcm_period_elapsed(s->substream); 751 750 } else if (s->cstream) { 752 - hda_dsp_set_bytes_transferred(s, 753 - s->cstream->runtime->buffer_size); 751 + hda_dsp_compr_bytes_transferred(s, s->cstream->direction); 754 752 snd_compr_fragment_elapsed(s->cstream); 755 753 } 756 754 } ··· 1008 1008 hext_stream); 1009 1009 devm_kfree(sdev->dev, hda_stream); 1010 1010 } 1011 + } 1012 + 1013 + snd_pcm_uframes_t hda_dsp_stream_get_position(struct hdac_stream *hstream, 1014 + int direction, bool can_sleep) 1015 + { 1016 + struct hdac_ext_stream *hext_stream = stream_to_hdac_ext_stream(hstream); 1017 + struct sof_intel_hda_stream *hda_stream = hstream_to_sof_hda_stream(hext_stream); 1018 + struct snd_sof_dev *sdev = hda_stream->sdev; 1019 + snd_pcm_uframes_t pos; 1020 + 1021 + switch (sof_hda_position_quirk) { 1022 + case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY: 1023 + /* 1024 + * This legacy code, inherited from the Skylake driver, 1025 + * mixes DPIB registers and DPIB DDR updates and 1026 + * does not seem to follow any known hardware recommendations. 1027 + * It's not clear e.g. 
why there is a different flow 1028 + * for capture and playback, the only information that matters is 1029 + * what traffic class is used, and on all SOF-enabled platforms 1030 + * only VC0 is supported so the work-around was likely not necessary 1031 + * and quite possibly wrong. 1032 + */ 1033 + 1034 + /* DPIB/posbuf position mode: 1035 + * For Playback, Use DPIB register from HDA space which 1036 + * reflects the actual data transferred. 1037 + * For Capture, Use the position buffer for pointer, as DPIB 1038 + * is not accurate enough, its update may be completed 1039 + * earlier than the data written to DDR. 1040 + */ 1041 + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { 1042 + pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 1043 + AZX_REG_VS_SDXDPIB_XBASE + 1044 + (AZX_REG_VS_SDXDPIB_XINTERVAL * 1045 + hstream->index)); 1046 + } else { 1047 + /* 1048 + * For capture stream, we need more workaround to fix the 1049 + * position incorrect issue: 1050 + * 1051 + * 1. Wait at least 20us before reading position buffer after 1052 + * the interrupt generated(IOC), to make sure position update 1053 + * happens on frame boundary i.e. 20.833uSec for 48KHz. 1054 + * 2. Perform a dummy Read to DPIB register to flush DMA 1055 + * position value. 1056 + * 3. Read the DMA Position from posbuf. Now the readback 1057 + * value should be >= period boundary. 
1058 + */ 1059 + if (can_sleep) 1060 + usleep_range(20, 21); 1061 + 1062 + snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 1063 + AZX_REG_VS_SDXDPIB_XBASE + 1064 + (AZX_REG_VS_SDXDPIB_XINTERVAL * 1065 + hstream->index)); 1066 + pos = snd_hdac_stream_get_pos_posbuf(hstream); 1067 + } 1068 + break; 1069 + case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS: 1070 + /* 1071 + * In case VC1 traffic is disabled this is the recommended option 1072 + */ 1073 + pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 1074 + AZX_REG_VS_SDXDPIB_XBASE + 1075 + (AZX_REG_VS_SDXDPIB_XINTERVAL * 1076 + hstream->index)); 1077 + break; 1078 + case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE: 1079 + /* 1080 + * This is the recommended option when VC1 is enabled. 1081 + * While this isn't needed for SOF platforms it's added for 1082 + * consistency and debug. 1083 + */ 1084 + pos = snd_hdac_stream_get_pos_posbuf(hstream); 1085 + break; 1086 + default: 1087 + dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n", 1088 + sof_hda_position_quirk); 1089 + pos = 0; 1090 + break; 1091 + } 1092 + 1093 + if (pos >= hstream->bufsize) 1094 + pos = 0; 1095 + 1096 + return pos; 1011 1097 }
+4
sound/soc/sof/intel/hda.h
··· 497 497 */ 498 498 int hda_dsp_probe(struct snd_sof_dev *sdev); 499 499 int hda_dsp_remove(struct snd_sof_dev *sdev); 500 + int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask); 500 501 int hda_dsp_core_run(struct snd_sof_dev *sdev, unsigned int core_mask); 501 502 int hda_dsp_enable_core(struct snd_sof_dev *sdev, unsigned int core_mask); 502 503 int hda_dsp_core_reset_power_down(struct snd_sof_dev *sdev, ··· 564 563 struct hdac_stream *hstream); 565 564 bool hda_dsp_check_ipc_irq(struct snd_sof_dev *sdev); 566 565 bool hda_dsp_check_stream_irq(struct snd_sof_dev *sdev); 566 + 567 + snd_pcm_uframes_t hda_dsp_stream_get_position(struct hdac_stream *hstream, 568 + int direction, bool can_sleep); 567 569 568 570 struct hdac_ext_stream * 569 571 hda_dsp_stream_get(struct snd_sof_dev *sdev, int direction, u32 flags);
+13 -14
sound/soc/sof/ipc3-topology.c
··· 1577 1577 struct sof_ipc_ctrl_data *cdata; 1578 1578 int ret; 1579 1579 1580 + if (scontrol->max_size < (sizeof(*cdata) + sizeof(struct sof_abi_hdr))) { 1581 + dev_err(sdev->dev, "%s: insufficient size for a bytes control: %zu.\n", 1582 + __func__, scontrol->max_size); 1583 + return -EINVAL; 1584 + } 1585 + 1586 + if (scontrol->priv_size > scontrol->max_size - sizeof(*cdata)) { 1587 + dev_err(sdev->dev, 1588 + "%s: bytes data size %zu exceeds max %zu.\n", __func__, 1589 + scontrol->priv_size, scontrol->max_size - sizeof(*cdata)); 1590 + return -EINVAL; 1591 + } 1592 + 1580 1593 scontrol->ipc_control_data = kzalloc(scontrol->max_size, GFP_KERNEL); 1581 1594 if (!scontrol->ipc_control_data) 1582 1595 return -ENOMEM; 1583 - 1584 - if (scontrol->max_size < sizeof(*cdata) || 1585 - scontrol->max_size < sizeof(struct sof_abi_hdr)) { 1586 - ret = -EINVAL; 1587 - goto err; 1588 - } 1589 - 1590 - /* init the get/put bytes data */ 1591 - if (scontrol->priv_size > scontrol->max_size - sizeof(*cdata)) { 1592 - dev_err(sdev->dev, "err: bytes data size %zu exceeds max %zu.\n", 1593 - scontrol->priv_size, scontrol->max_size - sizeof(*cdata)); 1594 - ret = -EINVAL; 1595 - goto err; 1596 - } 1597 1596 1598 1597 scontrol->size = sizeof(struct sof_ipc_ctrl_data) + scontrol->priv_size; 1599 1598
+1 -1
sound/soc/sof/mediatek/mt8186/mt8186.c
··· 392 392 PLATFORM_DEVID_NONE, 393 393 pdev, sizeof(*pdev)); 394 394 if (IS_ERR(priv->ipc_dev)) { 395 - ret = IS_ERR(priv->ipc_dev); 395 + ret = PTR_ERR(priv->ipc_dev); 396 396 dev_err(sdev->dev, "failed to create mtk-adsp-ipc device\n"); 397 397 goto err_adsp_off; 398 398 }
+20 -1
sound/soc/sof/pm.c
··· 23 23 u32 target_dsp_state; 24 24 25 25 switch (sdev->system_suspend_target) { 26 + case SOF_SUSPEND_S5: 27 + case SOF_SUSPEND_S4: 28 + /* DSP should be in D3 if the system is suspending to S3+ */ 26 29 case SOF_SUSPEND_S3: 27 30 /* DSP should be in D3 if the system is suspending to S3 */ 28 31 target_dsp_state = SOF_DSP_PM_D3; ··· 338 335 return 0; 339 336 340 337 #if defined(CONFIG_ACPI) 341 - if (acpi_target_system_state() == ACPI_STATE_S0) 338 + switch (acpi_target_system_state()) { 339 + case ACPI_STATE_S0: 342 340 sdev->system_suspend_target = SOF_SUSPEND_S0IX; 341 + break; 342 + case ACPI_STATE_S1: 343 + case ACPI_STATE_S2: 344 + case ACPI_STATE_S3: 345 + sdev->system_suspend_target = SOF_SUSPEND_S3; 346 + break; 347 + case ACPI_STATE_S4: 348 + sdev->system_suspend_target = SOF_SUSPEND_S4; 349 + break; 350 + case ACPI_STATE_S5: 351 + sdev->system_suspend_target = SOF_SUSPEND_S5; 352 + break; 353 + default: 354 + break; 355 + } 343 356 #endif 344 357 345 358 return 0;
+2
sound/soc/sof/sof-priv.h
··· 85 85 SOF_SUSPEND_NONE = 0, 86 86 SOF_SUSPEND_S0IX, 87 87 SOF_SUSPEND_S3, 88 + SOF_SUSPEND_S4, 89 + SOF_SUSPEND_S5, 88 90 }; 89 91 90 92 enum sof_dfsentry_type {
+248
sound/usb/quirks-table.h
··· 3803 3803 }, 3804 3804 3805 3805 /* 3806 + * MacroSilicon MS2100/MS2106 based AV capture cards 3807 + * 3808 + * These claim 96kHz 1ch in the descriptors, but are actually 48kHz 2ch. 3809 + * They also need QUIRK_FLAG_ALIGN_TRANSFER, which makes one wonder if 3810 + * they pretend to be 96kHz mono as a workaround for stereo being broken 3811 + * by that... 3812 + * 3813 + * They also have an issue with initial stream alignment that causes the 3814 + * channels to be swapped and out of phase, which is dealt with in quirks.c. 3815 + */ 3816 + { 3817 + USB_AUDIO_DEVICE(0x534d, 0x0021), 3818 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 3819 + .vendor_name = "MacroSilicon", 3820 + .product_name = "MS210x", 3821 + .ifnum = QUIRK_ANY_INTERFACE, 3822 + .type = QUIRK_COMPOSITE, 3823 + .data = &(const struct snd_usb_audio_quirk[]) { 3824 + { 3825 + .ifnum = 2, 3826 + .type = QUIRK_AUDIO_STANDARD_MIXER, 3827 + }, 3828 + { 3829 + .ifnum = 3, 3830 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 3831 + .data = &(const struct audioformat) { 3832 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 3833 + .channels = 2, 3834 + .iface = 3, 3835 + .altsetting = 1, 3836 + .altset_idx = 1, 3837 + .attributes = 0, 3838 + .endpoint = 0x82, 3839 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 3840 + USB_ENDPOINT_SYNC_ASYNC, 3841 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 3842 + .rate_min = 48000, 3843 + .rate_max = 48000, 3844 + } 3845 + }, 3846 + { 3847 + .ifnum = -1 3848 + } 3849 + } 3850 + } 3851 + }, 3852 + 3853 + /* 3806 3854 * MacroSilicon MS2109 based HDMI capture cards 3807 3855 * 3808 3856 * These claim 96kHz 1ch in the descriptors, but are actually 48kHz 2ch. 
··· 4160 4112 { 4161 4113 .ifnum = 1, 4162 4114 .type = QUIRK_AUDIO_STANDARD_INTERFACE 4115 + }, 4116 + { 4117 + .ifnum = -1 4118 + } 4119 + } 4120 + } 4121 + }, 4122 + { 4123 + /* 4124 + * Fiero SC-01 (firmware v1.0.0 @ 48 kHz) 4125 + */ 4126 + USB_DEVICE(0x2b53, 0x0023), 4127 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 4128 + .vendor_name = "Fiero", 4129 + .product_name = "SC-01", 4130 + .ifnum = QUIRK_ANY_INTERFACE, 4131 + .type = QUIRK_COMPOSITE, 4132 + .data = &(const struct snd_usb_audio_quirk[]) { 4133 + { 4134 + .ifnum = 0, 4135 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 4136 + }, 4137 + /* Playback */ 4138 + { 4139 + .ifnum = 1, 4140 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4141 + .data = &(const struct audioformat) { 4142 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4143 + .channels = 2, 4144 + .fmt_bits = 24, 4145 + .iface = 1, 4146 + .altsetting = 1, 4147 + .altset_idx = 1, 4148 + .endpoint = 0x01, 4149 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4150 + USB_ENDPOINT_SYNC_ASYNC, 4151 + .rates = SNDRV_PCM_RATE_48000, 4152 + .rate_min = 48000, 4153 + .rate_max = 48000, 4154 + .nr_rates = 1, 4155 + .rate_table = (unsigned int[]) { 48000 }, 4156 + .clock = 0x29 4157 + } 4158 + }, 4159 + /* Capture */ 4160 + { 4161 + .ifnum = 2, 4162 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4163 + .data = &(const struct audioformat) { 4164 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4165 + .channels = 2, 4166 + .fmt_bits = 24, 4167 + .iface = 2, 4168 + .altsetting = 1, 4169 + .altset_idx = 1, 4170 + .endpoint = 0x82, 4171 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4172 + USB_ENDPOINT_SYNC_ASYNC | 4173 + USB_ENDPOINT_USAGE_IMPLICIT_FB, 4174 + .rates = SNDRV_PCM_RATE_48000, 4175 + .rate_min = 48000, 4176 + .rate_max = 48000, 4177 + .nr_rates = 1, 4178 + .rate_table = (unsigned int[]) { 48000 }, 4179 + .clock = 0x29 4180 + } 4181 + }, 4182 + { 4183 + .ifnum = -1 4184 + } 4185 + } 4186 + } 4187 + }, 4188 + { 4189 + /* 4190 + * Fiero SC-01 (firmware v1.0.0 @ 96 kHz) 4191 + */ 4192 + 
USB_DEVICE(0x2b53, 0x0024), 4193 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 4194 + .vendor_name = "Fiero", 4195 + .product_name = "SC-01", 4196 + .ifnum = QUIRK_ANY_INTERFACE, 4197 + .type = QUIRK_COMPOSITE, 4198 + .data = &(const struct snd_usb_audio_quirk[]) { 4199 + { 4200 + .ifnum = 0, 4201 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 4202 + }, 4203 + /* Playback */ 4204 + { 4205 + .ifnum = 1, 4206 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4207 + .data = &(const struct audioformat) { 4208 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4209 + .channels = 2, 4210 + .fmt_bits = 24, 4211 + .iface = 1, 4212 + .altsetting = 1, 4213 + .altset_idx = 1, 4214 + .endpoint = 0x01, 4215 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4216 + USB_ENDPOINT_SYNC_ASYNC, 4217 + .rates = SNDRV_PCM_RATE_96000, 4218 + .rate_min = 96000, 4219 + .rate_max = 96000, 4220 + .nr_rates = 1, 4221 + .rate_table = (unsigned int[]) { 96000 }, 4222 + .clock = 0x29 4223 + } 4224 + }, 4225 + /* Capture */ 4226 + { 4227 + .ifnum = 2, 4228 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4229 + .data = &(const struct audioformat) { 4230 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4231 + .channels = 2, 4232 + .fmt_bits = 24, 4233 + .iface = 2, 4234 + .altsetting = 1, 4235 + .altset_idx = 1, 4236 + .endpoint = 0x82, 4237 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4238 + USB_ENDPOINT_SYNC_ASYNC | 4239 + USB_ENDPOINT_USAGE_IMPLICIT_FB, 4240 + .rates = SNDRV_PCM_RATE_96000, 4241 + .rate_min = 96000, 4242 + .rate_max = 96000, 4243 + .nr_rates = 1, 4244 + .rate_table = (unsigned int[]) { 96000 }, 4245 + .clock = 0x29 4246 + } 4247 + }, 4248 + { 4249 + .ifnum = -1 4250 + } 4251 + } 4252 + } 4253 + }, 4254 + { 4255 + /* 4256 + * Fiero SC-01 (firmware v1.1.0) 4257 + */ 4258 + USB_DEVICE(0x2b53, 0x0031), 4259 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 4260 + .vendor_name = "Fiero", 4261 + .product_name = "SC-01", 4262 + .ifnum = QUIRK_ANY_INTERFACE, 4263 + .type = QUIRK_COMPOSITE, 4264 + .data = 
&(const struct snd_usb_audio_quirk[]) { 4265 + { 4266 + .ifnum = 0, 4267 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 4268 + }, 4269 + /* Playback */ 4270 + { 4271 + .ifnum = 1, 4272 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4273 + .data = &(const struct audioformat) { 4274 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4275 + .channels = 2, 4276 + .fmt_bits = 24, 4277 + .iface = 1, 4278 + .altsetting = 1, 4279 + .altset_idx = 1, 4280 + .endpoint = 0x01, 4281 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4282 + USB_ENDPOINT_SYNC_ASYNC, 4283 + .rates = SNDRV_PCM_RATE_48000 | 4284 + SNDRV_PCM_RATE_96000, 4285 + .rate_min = 48000, 4286 + .rate_max = 96000, 4287 + .nr_rates = 2, 4288 + .rate_table = (unsigned int[]) { 48000, 96000 }, 4289 + .clock = 0x29 4290 + } 4291 + }, 4292 + /* Capture */ 4293 + { 4294 + .ifnum = 2, 4295 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4296 + .data = &(const struct audioformat) { 4297 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4298 + .channels = 2, 4299 + .fmt_bits = 24, 4300 + .iface = 2, 4301 + .altsetting = 1, 4302 + .altset_idx = 1, 4303 + .endpoint = 0x82, 4304 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4305 + USB_ENDPOINT_SYNC_ASYNC | 4306 + USB_ENDPOINT_USAGE_IMPLICIT_FB, 4307 + .rates = SNDRV_PCM_RATE_48000 | 4308 + SNDRV_PCM_RATE_96000, 4309 + .rate_min = 48000, 4310 + .rate_max = 96000, 4311 + .nr_rates = 2, 4312 + .rate_table = (unsigned int[]) { 48000, 96000 }, 4313 + .clock = 0x29 4314 + } 4163 4315 }, 4164 4316 { 4165 4317 .ifnum = -1
+13
sound/usb/quirks.c
··· 1478 1478 case USB_ID(0x041e, 0x3f19): /* E-Mu 0204 USB */ 1479 1479 set_format_emu_quirk(subs, fmt); 1480 1480 break; 1481 + case USB_ID(0x534d, 0x0021): /* MacroSilicon MS2100/MS2106 */ 1481 1482 case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */ 1482 1483 subs->stream_offset_adj = 2; 1483 1484 break; ··· 1843 1842 QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER), 1844 1843 DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */ 1845 1844 QUIRK_FLAG_GET_SAMPLE_RATE), 1845 + DEVICE_FLG(0x1397, 0x0508, /* Behringer UMC204HD */ 1846 + QUIRK_FLAG_PLAYBACK_FIRST | QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1847 + DEVICE_FLG(0x1397, 0x0509, /* Behringer UMC404HD */ 1848 + QUIRK_FLAG_PLAYBACK_FIRST | QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1846 1849 DEVICE_FLG(0x13e5, 0x0001, /* Serato Phono */ 1847 1850 QUIRK_FLAG_IGNORE_CTL_ERROR), 1848 1851 DEVICE_FLG(0x154e, 0x1002, /* Denon DCD-1500RE */ ··· 1909 1904 QUIRK_FLAG_IGNORE_CTL_ERROR), 1910 1905 DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */ 1911 1906 QUIRK_FLAG_GET_SAMPLE_RATE), 1907 + DEVICE_FLG(0x534d, 0x0021, /* MacroSilicon MS2100/MS2106 */ 1908 + QUIRK_FLAG_ALIGN_TRANSFER), 1912 1909 DEVICE_FLG(0x534d, 0x2109, /* MacroSilicon MS2109 */ 1913 1910 QUIRK_FLAG_ALIGN_TRANSFER), 1914 1911 DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */ 1915 1912 QUIRK_FLAG_GET_SAMPLE_RATE), 1913 + DEVICE_FLG(0x2b53, 0x0023, /* Fiero SC-01 (firmware v1.0.0 @ 48 kHz) */ 1914 + QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1915 + DEVICE_FLG(0x2b53, 0x0024, /* Fiero SC-01 (firmware v1.0.0 @ 96 kHz) */ 1916 + QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1917 + DEVICE_FLG(0x2b53, 0x0031, /* Fiero SC-01 (firmware v1.1.0) */ 1918 + QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1916 1919 1917 1920 /* Vendor matches */ 1918 1921 VENDOR_FLG(0x045e, /* MS Lifecam */
+36
tools/arch/arm64/include/uapi/asm/kvm.h
··· 139 139 __u64 dbg_wvr[KVM_ARM_MAX_DBG_REGS]; 140 140 }; 141 141 142 + #define KVM_DEBUG_ARCH_HSR_HIGH_VALID (1 << 0) 142 143 struct kvm_debug_exit_arch { 143 144 __u32 hsr; 145 + __u32 hsr_high; /* ESR_EL2[61:32] */ 144 146 __u64 far; /* used for watchpoints */ 145 147 }; 146 148 ··· 333 331 KVM_REG_SIZE_U512 | 0xffff) 334 332 #define KVM_ARM64_SVE_VLS_WORDS \ 335 333 ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1) 334 + 335 + /* Bitmap feature firmware registers */ 336 + #define KVM_REG_ARM_FW_FEAT_BMAP (0x0016 << KVM_REG_ARM_COPROC_SHIFT) 337 + #define KVM_REG_ARM_FW_FEAT_BMAP_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \ 338 + KVM_REG_ARM_FW_FEAT_BMAP | \ 339 + ((r) & 0xffff)) 340 + 341 + #define KVM_REG_ARM_STD_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(0) 342 + 343 + enum { 344 + KVM_REG_ARM_STD_BIT_TRNG_V1_0 = 0, 345 + #ifdef __KERNEL__ 346 + KVM_REG_ARM_STD_BMAP_BIT_COUNT, 347 + #endif 348 + }; 349 + 350 + #define KVM_REG_ARM_STD_HYP_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(1) 351 + 352 + enum { 353 + KVM_REG_ARM_STD_HYP_BIT_PV_TIME = 0, 354 + #ifdef __KERNEL__ 355 + KVM_REG_ARM_STD_HYP_BMAP_BIT_COUNT, 356 + #endif 357 + }; 358 + 359 + #define KVM_REG_ARM_VENDOR_HYP_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(2) 360 + 361 + enum { 362 + KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT = 0, 363 + KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1, 364 + #ifdef __KERNEL__ 365 + KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT, 366 + #endif 367 + }; 336 368 337 369 /* Device Control API: ARM VGIC */ 338 370 #define KVM_DEV_ARM_VGIC_GRP_ADDR 0
+52 -2
tools/include/uapi/linux/kvm.h
··· 444 444 #define KVM_SYSTEM_EVENT_SHUTDOWN 1 445 445 #define KVM_SYSTEM_EVENT_RESET 2 446 446 #define KVM_SYSTEM_EVENT_CRASH 3 447 + #define KVM_SYSTEM_EVENT_WAKEUP 4 448 + #define KVM_SYSTEM_EVENT_SUSPEND 5 449 + #define KVM_SYSTEM_EVENT_SEV_TERM 6 447 450 __u32 type; 448 451 __u32 ndata; 449 452 union { ··· 649 646 #define KVM_MP_STATE_OPERATING 7 650 647 #define KVM_MP_STATE_LOAD 8 651 648 #define KVM_MP_STATE_AP_RESET_HOLD 9 649 + #define KVM_MP_STATE_SUSPENDED 10 652 650 653 651 struct kvm_mp_state { 654 652 __u32 mp_state; ··· 1154 1150 #define KVM_CAP_S390_MEM_OP_EXTENSION 211 1155 1151 #define KVM_CAP_PMU_CAPABILITY 212 1156 1152 #define KVM_CAP_DISABLE_QUIRKS2 213 1157 - /* #define KVM_CAP_VM_TSC_CONTROL 214 */ 1153 + #define KVM_CAP_VM_TSC_CONTROL 214 1158 1154 #define KVM_CAP_SYSTEM_EVENT_DATA 215 1155 + #define KVM_CAP_ARM_SYSTEM_SUSPEND 216 1159 1156 1160 1157 #ifdef KVM_CAP_IRQ_ROUTING 1161 1158 ··· 1245 1240 #define KVM_XEN_HVM_CONFIG_SHARED_INFO (1 << 2) 1246 1241 #define KVM_XEN_HVM_CONFIG_RUNSTATE (1 << 3) 1247 1242 #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL (1 << 4) 1243 + #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND (1 << 5) 1248 1244 1249 1245 struct kvm_xen_hvm_config { 1250 1246 __u32 flags; ··· 1484 1478 #define KVM_SET_PIT2 _IOW(KVMIO, 0xa0, struct kvm_pit_state2) 1485 1479 /* Available with KVM_CAP_PPC_GET_PVINFO */ 1486 1480 #define KVM_PPC_GET_PVINFO _IOW(KVMIO, 0xa1, struct kvm_ppc_pvinfo) 1487 - /* Available with KVM_CAP_TSC_CONTROL */ 1481 + /* Available with KVM_CAP_TSC_CONTROL for a vCPU, or with 1482 + * KVM_CAP_VM_TSC_CONTROL to set defaults for a VM */ 1488 1483 #define KVM_SET_TSC_KHZ _IO(KVMIO, 0xa2) 1489 1484 #define KVM_GET_TSC_KHZ _IO(KVMIO, 0xa3) 1490 1485 /* Available with KVM_CAP_PCI_2_3 */ ··· 1701 1694 struct { 1702 1695 __u64 gfn; 1703 1696 } shared_info; 1697 + struct { 1698 + __u32 send_port; 1699 + __u32 type; /* EVTCHNSTAT_ipi / EVTCHNSTAT_interdomain */ 1700 + __u32 flags; 1701 + #define KVM_XEN_EVTCHN_DEASSIGN (1 << 
0) 1702 + #define KVM_XEN_EVTCHN_UPDATE (1 << 1) 1703 + #define KVM_XEN_EVTCHN_RESET (1 << 2) 1704 + /* 1705 + * Events sent by the guest are either looped back to 1706 + * the guest itself (potentially on a different port#) 1707 + * or signalled via an eventfd. 1708 + */ 1709 + union { 1710 + struct { 1711 + __u32 port; 1712 + __u32 vcpu; 1713 + __u32 priority; 1714 + } port; 1715 + struct { 1716 + __u32 port; /* Zero for eventfd */ 1717 + __s32 fd; 1718 + } eventfd; 1719 + __u32 padding[4]; 1720 + } deliver; 1721 + } evtchn; 1722 + __u32 xen_version; 1704 1723 __u64 pad[8]; 1705 1724 } u; 1706 1725 }; ··· 1735 1702 #define KVM_XEN_ATTR_TYPE_LONG_MODE 0x0 1736 1703 #define KVM_XEN_ATTR_TYPE_SHARED_INFO 0x1 1737 1704 #define KVM_XEN_ATTR_TYPE_UPCALL_VECTOR 0x2 1705 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 1706 + #define KVM_XEN_ATTR_TYPE_EVTCHN 0x3 1707 + #define KVM_XEN_ATTR_TYPE_XEN_VERSION 0x4 1738 1708 1739 1709 /* Per-vCPU Xen attributes */ 1740 1710 #define KVM_XEN_VCPU_GET_ATTR _IOWR(KVMIO, 0xca, struct kvm_xen_vcpu_attr) 1741 1711 #define KVM_XEN_VCPU_SET_ATTR _IOW(KVMIO, 0xcb, struct kvm_xen_vcpu_attr) 1712 + 1713 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 1714 + #define KVM_XEN_HVM_EVTCHN_SEND _IOW(KVMIO, 0xd0, struct kvm_irq_routing_xen_evtchn) 1742 1715 1743 1716 #define KVM_GET_SREGS2 _IOR(KVMIO, 0xcc, struct kvm_sregs2) 1744 1717 #define KVM_SET_SREGS2 _IOW(KVMIO, 0xcd, struct kvm_sregs2) ··· 1763 1724 __u64 time_blocked; 1764 1725 __u64 time_offline; 1765 1726 } runstate; 1727 + __u32 vcpu_id; 1728 + struct { 1729 + __u32 port; 1730 + __u32 priority; 1731 + __u64 expires_ns; 1732 + } timer; 1733 + __u8 vector; 1766 1734 } u; 1767 1735 }; 1768 1736 ··· 1780 1734 #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT 0x3 1781 1735 #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA 0x4 1782 1736 #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST 0x5 1737 + /* Available with KVM_CAP_XEN_HVM / 
KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 1738 + #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID 0x6 1739 + #define KVM_XEN_VCPU_ATTR_TYPE_TIMER 0x7 1740 + #define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR 0x8 1783 1741 1784 1742 /* Secure Encrypted Virtualization command */ 1785 1743 enum sev_cmd_id {
+1 -2
tools/objtool/check.c
··· 3826 3826 !strcmp(sec->name, "__bug_table") || 3827 3827 !strcmp(sec->name, "__ex_table") || 3828 3828 !strcmp(sec->name, "__jump_table") || 3829 - !strcmp(sec->name, "__mcount_loc") || 3830 - !strcmp(sec->name, "__tracepoints")) 3829 + !strcmp(sec->name, "__mcount_loc")) 3831 3830 continue; 3832 3831 3833 3832 list_for_each_entry(reloc, &sec->reloc->reloc_list, list)
+2 -3
tools/perf/util/bpf-utils.c
··· 149 149 count = bpf_prog_info_read_offset_u32(&info, desc->count_offset); 150 150 size = bpf_prog_info_read_offset_u32(&info, desc->size_offset); 151 151 152 - data_len += count * size; 152 + data_len += roundup(count * size, sizeof(__u64)); 153 153 } 154 154 155 155 /* step 3: allocate continuous memory */ 156 - data_len = roundup(data_len, sizeof(__u64)); 157 156 info_linear = malloc(sizeof(struct perf_bpil) + data_len); 158 157 if (!info_linear) 159 158 return ERR_PTR(-ENOMEM); ··· 179 180 bpf_prog_info_set_offset_u64(&info_linear->info, 180 181 desc->array_offset, 181 182 ptr_to_u64(ptr)); 182 - ptr += count * size; 183 + ptr += roundup(count * size, sizeof(__u64)); 183 184 } 184 185 185 186 /* step 5: call syscall again to get required arrays */
+6 -1
tools/perf/util/bpf_off_cpu.c
··· 265 265 266 266 sample_type = evsel->core.attr.sample_type; 267 267 268 + if (sample_type & ~OFFCPU_SAMPLE_TYPES) { 269 + pr_err("not supported sample type: %llx\n", 270 + (unsigned long long)sample_type); 271 + return -1; 272 + } 273 + 268 274 if (sample_type & (PERF_SAMPLE_ID | PERF_SAMPLE_IDENTIFIER)) { 269 275 if (evsel->core.id) 270 276 sid = evsel->core.id[0]; ··· 325 319 } 326 320 if (sample_type & PERF_SAMPLE_CGROUP) 327 321 data.array[n++] = key.cgroup_id; 328 - /* TODO: handle more sample types */ 329 322 330 323 size = n * sizeof(u64); 331 324 data.hdr.size = size;
+14 -6
tools/perf/util/bpf_skel/off_cpu.bpf.c
··· 71 71 __uint(max_entries, 1); 72 72 } cgroup_filter SEC(".maps"); 73 73 74 + /* new kernel task_struct definition */ 75 + struct task_struct___new { 76 + long __state; 77 + } __attribute__((preserve_access_index)); 78 + 74 79 /* old kernel task_struct definition */ 75 80 struct task_struct___old { 76 81 long state; ··· 98 93 */ 99 94 static inline int get_task_state(struct task_struct *t) 100 95 { 101 - if (bpf_core_field_exists(t->__state)) 102 - return BPF_CORE_READ(t, __state); 96 + /* recast pointer to capture new type for compiler */ 97 + struct task_struct___new *t_new = (void *)t; 103 98 104 - /* recast pointer to capture task_struct___old type for compiler */ 105 - struct task_struct___old *t_old = (void *)t; 99 + if (bpf_core_field_exists(t_new->__state)) { 100 + return BPF_CORE_READ(t_new, __state); 101 + } else { 102 + /* recast pointer to capture old type for compiler */ 103 + struct task_struct___old *t_old = (void *)t; 106 104 107 - /* now use old "state" name of the field */ 108 - return BPF_CORE_READ(t_old, state); 105 + return BPF_CORE_READ(t_old, state); 106 + } 109 107 } 110 108 111 109 static inline __u64 get_cgroup_id(struct task_struct *t)
+9
tools/perf/util/evsel.c
··· 48 48 #include "util.h" 49 49 #include "hashmap.h" 50 50 #include "pmu-hybrid.h" 51 + #include "off_cpu.h" 51 52 #include "../perf-sys.h" 52 53 #include "util/parse-branch-options.h" 53 54 #include <internal/xyarray.h> ··· 1103 1102 } 1104 1103 } 1105 1104 1105 + static bool evsel__is_offcpu_event(struct evsel *evsel) 1106 + { 1107 + return evsel__is_bpf_output(evsel) && !strcmp(evsel->name, OFFCPU_EVENT); 1108 + } 1109 + 1106 1110 /* 1107 1111 * The enable_on_exec/disabled value strategy: 1108 1112 * ··· 1372 1366 */ 1373 1367 if (evsel__is_dummy_event(evsel)) 1374 1368 evsel__reset_sample_bit(evsel, BRANCH_STACK); 1369 + 1370 + if (evsel__is_offcpu_event(evsel)) 1371 + evsel->core.attr.sample_type &= OFFCPU_SAMPLE_TYPES; 1375 1372 } 1376 1373 1377 1374 int evsel__set_filter(struct evsel *evsel, const char *filter)
+9
tools/perf/util/off_cpu.h
··· 1 1 #ifndef PERF_UTIL_OFF_CPU_H 2 2 #define PERF_UTIL_OFF_CPU_H 3 3 4 + #include <linux/perf_event.h> 5 + 4 6 struct evlist; 5 7 struct target; 6 8 struct perf_session; 7 9 struct record_opts; 8 10 9 11 #define OFFCPU_EVENT "offcpu-time" 12 + 13 + #define OFFCPU_SAMPLE_TYPES (PERF_SAMPLE_IDENTIFIER | PERF_SAMPLE_IP | \ 14 + PERF_SAMPLE_TID | PERF_SAMPLE_TIME | \ 15 + PERF_SAMPLE_ID | PERF_SAMPLE_CPU | \ 16 + PERF_SAMPLE_PERIOD | PERF_SAMPLE_CALLCHAIN | \ 17 + PERF_SAMPLE_CGROUP) 18 + 10 19 11 20 #ifdef HAVE_BPF_SKEL 12 21 int off_cpu_prepare(struct evlist *evlist, struct target *target,
+5 -4
tools/perf/util/synthetic-events.c
··· 754 754 snprintf(filename, sizeof(filename), "%s/proc/%d/task", 755 755 machine->root_dir, pid); 756 756 757 - n = scandir(filename, &dirent, filter_task, alphasort); 757 + n = scandir(filename, &dirent, filter_task, NULL); 758 758 if (n < 0) 759 759 return n; 760 760 ··· 767 767 if (*end) 768 768 continue; 769 769 770 - rc = -1; 770 + /* some threads may exit just after scan, ignore it */ 771 771 if (perf_event__prepare_comm(comm_event, pid, _pid, machine, 772 772 &tgid, &ppid, &kernel_thread) != 0) 773 - break; 773 + continue; 774 774 775 + rc = -1; 775 776 if (perf_event__synthesize_fork(tool, fork_event, _pid, tgid, 776 777 ppid, process, machine) < 0) 777 778 break; ··· 988 987 return 0; 989 988 990 989 snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir); 991 - n = scandir(proc_path, &dirent, filter_task, alphasort); 990 + n = scandir(proc_path, &dirent, filter_task, NULL); 992 991 if (n < 0) 993 992 return err; 994 993
+1 -1
tools/perf/util/unwind-libunwind-local.c
··· 197 197 #ifndef NO_LIBUNWIND_DEBUG_FRAME 198 198 static u64 elf_section_offset(int fd, const char *name) 199 199 { 200 - u64 address, offset; 200 + u64 address, offset = 0; 201 201 202 202 if (elf_section_address_and_offset(fd, name, &address, &offset)) 203 203 return 0;
+75 -9
tools/testing/selftests/bpf/prog_tests/sockmap_ktls.c
··· 4 4 * Tests for sockmap/sockhash holding kTLS sockets. 5 5 */ 6 6 7 + #include <netinet/tcp.h> 7 8 #include "test_progs.h" 8 9 9 10 #define MAX_TEST_NAME 80 ··· 93 92 close(srv); 94 93 } 95 94 95 + static void test_sockmap_ktls_update_fails_when_sock_has_ulp(int family, int map) 96 + { 97 + struct sockaddr_storage addr = {}; 98 + socklen_t len = sizeof(addr); 99 + struct sockaddr_in6 *v6; 100 + struct sockaddr_in *v4; 101 + int err, s, zero = 0; 102 + 103 + switch (family) { 104 + case AF_INET: 105 + v4 = (struct sockaddr_in *)&addr; 106 + v4->sin_family = AF_INET; 107 + break; 108 + case AF_INET6: 109 + v6 = (struct sockaddr_in6 *)&addr; 110 + v6->sin6_family = AF_INET6; 111 + break; 112 + default: 113 + PRINT_FAIL("unsupported socket family %d", family); 114 + return; 115 + } 116 + 117 + s = socket(family, SOCK_STREAM, 0); 118 + if (!ASSERT_GE(s, 0, "socket")) 119 + return; 120 + 121 + err = bind(s, (struct sockaddr *)&addr, len); 122 + if (!ASSERT_OK(err, "bind")) 123 + goto close; 124 + 125 + err = getsockname(s, (struct sockaddr *)&addr, &len); 126 + if (!ASSERT_OK(err, "getsockname")) 127 + goto close; 128 + 129 + err = connect(s, (struct sockaddr *)&addr, len); 130 + if (!ASSERT_OK(err, "connect")) 131 + goto close; 132 + 133 + /* save sk->sk_prot and set it to tls_prots */ 134 + err = setsockopt(s, IPPROTO_TCP, TCP_ULP, "tls", strlen("tls")); 135 + if (!ASSERT_OK(err, "setsockopt(TCP_ULP)")) 136 + goto close; 137 + 138 + /* sockmap update should not affect saved sk_prot */ 139 + err = bpf_map_update_elem(map, &zero, &s, BPF_ANY); 140 + if (!ASSERT_ERR(err, "sockmap update elem")) 141 + goto close; 142 + 143 + /* call sk->sk_prot->setsockopt to dispatch to saved sk_prot */ 144 + err = setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &zero, sizeof(zero)); 145 + ASSERT_OK(err, "setsockopt(TCP_NODELAY)"); 146 + 147 + close: 148 + close(s); 149 + } 150 + 151 + static const char *fmt_test_name(const char *subtest_name, int family, 152 + enum bpf_map_type map_type) 153 
+ { 154 + const char *map_type_str = map_type == BPF_MAP_TYPE_SOCKMAP ? "SOCKMAP" : "SOCKHASH"; 155 + const char *family_str = family == AF_INET ? "IPv4" : "IPv6"; 156 + static char test_name[MAX_TEST_NAME]; 157 + 158 + snprintf(test_name, MAX_TEST_NAME, 159 + "sockmap_ktls %s %s %s", 160 + subtest_name, family_str, map_type_str); 161 + 162 + return test_name; 163 + } 164 + 96 165 static void run_tests(int family, enum bpf_map_type map_type) 97 166 { 98 - char test_name[MAX_TEST_NAME]; 99 167 int map; 100 168 101 169 map = bpf_map_create(map_type, NULL, sizeof(int), sizeof(int), 1, NULL); ··· 173 103 return; 174 104 } 175 105 176 - snprintf(test_name, MAX_TEST_NAME, 177 - "sockmap_ktls disconnect_after_delete %s %s", 178 - family == AF_INET ? "IPv4" : "IPv6", 179 - map_type == BPF_MAP_TYPE_SOCKMAP ? "SOCKMAP" : "SOCKHASH"); 180 - if (!test__start_subtest(test_name)) 181 - return; 182 - 183 - test_sockmap_ktls_disconnect_after_delete(family, map); 106 + if (test__start_subtest(fmt_test_name("disconnect_after_delete", family, map_type))) 107 + test_sockmap_ktls_disconnect_after_delete(family, map); 108 + if (test__start_subtest(fmt_test_name("update_fails_when_sock_has_ulp", family, map_type))) 109 + test_sockmap_ktls_update_fails_when_sock_has_ulp(family, map); 184 110 185 111 close(map); 186 112 }
+21
tools/testing/selftests/bpf/verifier/jmp32.c
··· 864 864 .result = ACCEPT, 865 865 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 866 866 }, 867 + { 868 + "jeq32/jne32: bounds checking", 869 + .insns = { 870 + BPF_MOV64_IMM(BPF_REG_6, 563), 871 + BPF_MOV64_IMM(BPF_REG_2, 0), 872 + BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), 873 + BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), 874 + BPF_ALU32_REG(BPF_OR, BPF_REG_2, BPF_REG_6), 875 + BPF_JMP32_IMM(BPF_JNE, BPF_REG_2, 8, 5), 876 + BPF_JMP_IMM(BPF_JSGE, BPF_REG_2, 500, 2), 877 + BPF_MOV64_IMM(BPF_REG_0, 2), 878 + BPF_EXIT_INSN(), 879 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), 880 + BPF_EXIT_INSN(), 881 + BPF_MOV64_IMM(BPF_REG_0, 1), 882 + BPF_EXIT_INSN(), 883 + }, 884 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 885 + .result = ACCEPT, 886 + .retval = 1, 887 + },
+22
tools/testing/selftests/bpf/verifier/jump.c
··· 373 373 .result = ACCEPT, 374 374 .retval = 3, 375 375 }, 376 + { 377 + "jump & dead code elimination", 378 + .insns = { 379 + BPF_MOV64_IMM(BPF_REG_0, 1), 380 + BPF_MOV64_IMM(BPF_REG_3, 0), 381 + BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0), 382 + BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0), 383 + BPF_ALU64_IMM(BPF_OR, BPF_REG_3, 32767), 384 + BPF_JMP_IMM(BPF_JSGE, BPF_REG_3, 0, 1), 385 + BPF_EXIT_INSN(), 386 + BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 0x8000, 1), 387 + BPF_EXIT_INSN(), 388 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -32767), 389 + BPF_MOV64_IMM(BPF_REG_0, 2), 390 + BPF_JMP_IMM(BPF_JLE, BPF_REG_3, 0, 1), 391 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), 392 + BPF_EXIT_INSN(), 393 + }, 394 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 395 + .result = ACCEPT, 396 + .retval = 2, 397 + },
+1 -1
tools/testing/selftests/net/Makefile
··· 54 54 TEST_GEN_FILES += ioam6_parser 55 55 TEST_GEN_FILES += gro 56 56 TEST_GEN_PROGS = reuseport_bpf reuseport_bpf_cpu reuseport_bpf_numa 57 - TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls 57 + TEST_GEN_PROGS += reuseport_dualstack reuseaddr_conflict tls tun 58 58 TEST_GEN_FILES += toeplitz 59 59 TEST_GEN_FILES += cmsg_sender 60 60 TEST_GEN_FILES += stress_reuseport_listen
+1 -1
tools/testing/selftests/net/bpf/Makefile
··· 2 2 3 3 CLANG ?= clang 4 4 CCINCLUDE += -I../../bpf 5 - CCINCLUDE += -I../../../lib 5 + CCINCLUDE += -I../../../../lib 6 6 CCINCLUDE += -I../../../../../usr/include/ 7 7 8 8 TEST_CUSTOM_PROGS = $(OUTPUT)/bpf/nat6to4.o
+5 -1
tools/testing/selftests/net/forwarding/lib.sh
··· 1240 1240 # FDB entry was installed. 1241 1241 bridge link set dev $br_port1 flood off 1242 1242 1243 + ip link set $host1_if promisc on 1243 1244 tc qdisc add dev $host1_if ingress 1244 1245 tc filter add dev $host1_if ingress protocol ip pref 1 handle 101 \ 1245 1246 flower dst_mac $mac action drop ··· 1251 1250 tc -j -s filter show dev $host1_if ingress \ 1252 1251 | jq -e ".[] | select(.options.handle == 101) \ 1253 1252 | select(.options.actions[0].stats.packets == 1)" &> /dev/null 1254 - check_fail $? "Packet reached second host when should not" 1253 + check_fail $? "Packet reached first host when should not" 1255 1254 1256 1255 $MZ $host1_if -c 1 -p 64 -a $mac -t ip -q 1257 1256 sleep 1 ··· 1290 1289 1291 1290 tc filter del dev $host1_if ingress protocol ip pref 1 handle 101 flower 1292 1291 tc qdisc del dev $host1_if ingress 1292 + ip link set $host1_if promisc off 1293 1293 1294 1294 bridge link set dev $br_port1 flood on 1295 1295 ··· 1308 1306 1309 1307 # Add an ACL on `host2_if` which will tell us whether the packet 1310 1308 # was flooded to it or not. 1309 + ip link set $host2_if promisc on 1311 1310 tc qdisc add dev $host2_if ingress 1312 1311 tc filter add dev $host2_if ingress protocol ip pref 1 handle 101 \ 1313 1312 flower dst_mac $mac action drop ··· 1326 1323 1327 1324 tc filter del dev $host2_if ingress protocol ip pref 1 handle 101 flower 1328 1325 tc qdisc del dev $host2_if ingress 1326 + ip link set $host2_if promisc off 1329 1327 1330 1328 return $err 1331 1329 }
+40 -8
tools/testing/selftests/net/mptcp/diag.sh
··· 61 61 __chk_nr "grep -c token:" $* 62 62 } 63 63 64 + wait_msk_nr() 65 + { 66 + local condition="grep -c token:" 67 + local expected=$1 68 + local timeout=20 69 + local msg nr 70 + local max=0 71 + local i=0 72 + 73 + shift 1 74 + msg=$* 75 + 76 + while [ $i -lt $timeout ]; do 77 + nr=$(ss -inmHMN $ns | $condition) 78 + [ $nr == $expected ] && break; 79 + [ $nr -gt $max ] && max=$nr 80 + i=$((i + 1)) 81 + sleep 1 82 + done 83 + 84 + printf "%-50s" "$msg" 85 + if [ $i -ge $timeout ]; then 86 + echo "[ fail ] timeout while expecting $expected max $max last $nr" 87 + ret=$test_cnt 88 + elif [ $nr != $expected ]; then 89 + echo "[ fail ] expected $expected found $nr" 90 + ret=$test_cnt 91 + else 92 + echo "[ ok ]" 93 + fi 94 + test_cnt=$((test_cnt+1)) 95 + } 96 + 64 97 chk_msk_fallback_nr() 65 98 { 66 99 __chk_nr "grep -c fallback" $* ··· 179 146 echo "a" | \ 180 147 timeout ${timeout_test} \ 181 148 ip netns exec $ns \ 182 - ./mptcp_connect -p 10000 -l -t ${timeout_poll} \ 149 + ./mptcp_connect -p 10000 -l -t ${timeout_poll} -w 20 \ 183 150 0.0.0.0 >/dev/null & 184 151 wait_local_port_listen $ns 10000 185 152 chk_msk_nr 0 "no msk on netns creation" ··· 188 155 echo "b" | \ 189 156 timeout ${timeout_test} \ 190 157 ip netns exec $ns \ 191 - ./mptcp_connect -p 10000 -r 0 -t ${timeout_poll} \ 158 + ./mptcp_connect -p 10000 -r 0 -t ${timeout_poll} -w 20 \ 192 159 127.0.0.1 >/dev/null & 193 160 wait_connected $ns 10000 194 161 chk_msk_nr 2 "after MPC handshake " ··· 200 167 echo "a" | \ 201 168 timeout ${timeout_test} \ 202 169 ip netns exec $ns \ 203 - ./mptcp_connect -p 10001 -l -s TCP -t ${timeout_poll} \ 170 + ./mptcp_connect -p 10001 -l -s TCP -t ${timeout_poll} -w 20 \ 204 171 0.0.0.0 >/dev/null & 205 172 wait_local_port_listen $ns 10001 206 173 echo "b" | \ 207 174 timeout ${timeout_test} \ 208 175 ip netns exec $ns \ 209 - ./mptcp_connect -p 10001 -r 0 -t ${timeout_poll} \ 176 + ./mptcp_connect -p 10001 -r 0 -t ${timeout_poll} -w 20 \ 210 177 127.0.0.1 
>/dev/null & 211 178 wait_connected $ns 10001 212 179 chk_msk_fallback_nr 1 "check fallback" ··· 217 184 echo "a" | \ 218 185 timeout ${timeout_test} \ 219 186 ip netns exec $ns \ 220 - ./mptcp_connect -p $((I+10001)) -l -w 10 \ 187 + ./mptcp_connect -p $((I+10001)) -l -w 20 \ 221 188 -t ${timeout_poll} 0.0.0.0 >/dev/null & 222 189 done 223 190 wait_local_port_listen $ns $((NR_CLIENTS + 10001)) ··· 226 193 echo "b" | \ 227 194 timeout ${timeout_test} \ 228 195 ip netns exec $ns \ 229 - ./mptcp_connect -p $((I+10001)) -w 10 \ 196 + ./mptcp_connect -p $((I+10001)) -w 20 \ 230 197 -t ${timeout_poll} 127.0.0.1 >/dev/null & 231 198 done 232 - sleep 1.5 233 199 234 - chk_msk_nr $((NR_CLIENTS*2)) "many msk socket present" 200 + wait_msk_nr $((NR_CLIENTS*2)) "many msk socket present" 235 201 flush_pids 236 202 237 203 exit $ret
+1 -1
tools/testing/selftests/net/mptcp/mptcp_connect.c
··· 265 265 static int sock_listen_mptcp(const char * const listenaddr, 266 266 const char * const port) 267 267 { 268 - int sock; 268 + int sock = -1; 269 269 struct addrinfo hints = { 270 270 .ai_protocol = IPPROTO_TCP, 271 271 .ai_socktype = SOCK_STREAM,
+1 -1
tools/testing/selftests/net/mptcp/mptcp_inq.c
··· 88 88 static int sock_listen_mptcp(const char * const listenaddr, 89 89 const char * const port) 90 90 { 91 - int sock; 91 + int sock = -1; 92 92 struct addrinfo hints = { 93 93 .ai_protocol = IPPROTO_TCP, 94 94 .ai_socktype = SOCK_STREAM,
+1 -1
tools/testing/selftests/net/mptcp/mptcp_sockopt.c
··· 136 136 static int sock_listen_mptcp(const char * const listenaddr, 137 137 const char * const port) 138 138 { 139 - int sock; 139 + int sock = -1; 140 140 struct addrinfo hints = { 141 141 .ai_protocol = IPPROTO_TCP, 142 142 .ai_socktype = SOCK_STREAM,
+71 -2
tools/testing/selftests/net/mptcp/pm_nl_ctl.c
··· 39 39 fprintf(stderr, "\tdsf lip <local-ip> lport <local-port> rip <remote-ip> rport <remote-port> token <token>\n"); 40 40 fprintf(stderr, "\tdel <id> [<ip>]\n"); 41 41 fprintf(stderr, "\tget <id>\n"); 42 - fprintf(stderr, "\tset [<ip>] [id <nr>] flags [no]backup|[no]fullmesh [port <nr>]\n"); 42 + fprintf(stderr, "\tset [<ip>] [id <nr>] flags [no]backup|[no]fullmesh [port <nr>] [token <token>] [rip <ip>] [rport <port>]\n"); 43 43 fprintf(stderr, "\tflush\n"); 44 44 fprintf(stderr, "\tdump\n"); 45 45 fprintf(stderr, "\tlimits [<rcv addr max> <subflow max>]\n"); ··· 1279 1279 struct rtattr *rta, *nest; 1280 1280 struct nlmsghdr *nh; 1281 1281 u_int32_t flags = 0; 1282 + u_int32_t token = 0; 1283 + u_int16_t rport = 0; 1282 1284 u_int16_t family; 1285 + void *rip = NULL; 1283 1286 int nest_start; 1284 1287 int use_id = 0; 1285 1288 u_int8_t id; ··· 1342 1339 error(1, 0, " missing flags keyword"); 1343 1340 1344 1341 for (; arg < argc; arg++) { 1345 - if (!strcmp(argv[arg], "flags")) { 1342 + if (!strcmp(argv[arg], "token")) { 1343 + if (++arg >= argc) 1344 + error(1, 0, " missing token value"); 1345 + 1346 + /* token */ 1347 + token = atoi(argv[arg]); 1348 + } else if (!strcmp(argv[arg], "flags")) { 1346 1349 char *tok, *str; 1347 1350 1348 1351 /* flags */ ··· 1387 1378 rta->rta_len = RTA_LENGTH(2); 1388 1379 memcpy(RTA_DATA(rta), &port, 2); 1389 1380 off += NLMSG_ALIGN(rta->rta_len); 1381 + } else if (!strcmp(argv[arg], "rport")) { 1382 + if (++arg >= argc) 1383 + error(1, 0, " missing remote port"); 1384 + 1385 + rport = atoi(argv[arg]); 1386 + } else if (!strcmp(argv[arg], "rip")) { 1387 + if (++arg >= argc) 1388 + error(1, 0, " missing remote ip"); 1389 + 1390 + rip = argv[arg]; 1390 1391 } else { 1391 1392 error(1, 0, "unknown keyword %s", argv[arg]); 1392 1393 } 1393 1394 } 1394 1395 nest->rta_len = off - nest_start; 1396 + 1397 + /* token */ 1398 + if (token) { 1399 + rta = (void *)(data + off); 1400 + rta->rta_type = MPTCP_PM_ATTR_TOKEN; 1401 + rta->rta_len = RTA_LENGTH(4); 1402 + memcpy(RTA_DATA(rta), &token, 4); 1403 + off += NLMSG_ALIGN(rta->rta_len); 1404 + } 1405 + 1406 + /* remote addr/port */ 1407 + if (rip) { 1408 + nest_start = off; 1409 + nest = (void *)(data + off); 1410 + nest->rta_type = NLA_F_NESTED | MPTCP_PM_ATTR_ADDR_REMOTE; 1411 + nest->rta_len = RTA_LENGTH(0); 1412 + off += NLMSG_ALIGN(nest->rta_len); 1413 + 1414 + /* addr data */ 1415 + rta = (void *)(data + off); 1416 + if (inet_pton(AF_INET, rip, RTA_DATA(rta))) { 1417 + family = AF_INET; 1418 + rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR4; 1419 + rta->rta_len = RTA_LENGTH(4); 1420 + } else if (inet_pton(AF_INET6, rip, RTA_DATA(rta))) { 1421 + family = AF_INET6; 1422 + rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR6; 1423 + rta->rta_len = RTA_LENGTH(16); 1424 + } else { 1425 + error(1, errno, "can't parse ip %s", (char *)rip); 1426 + } 1427 + off += NLMSG_ALIGN(rta->rta_len); 1428 + 1429 + /* family */ 1430 + rta = (void *)(data + off); 1431 + rta->rta_type = MPTCP_PM_ADDR_ATTR_FAMILY; 1432 + rta->rta_len = RTA_LENGTH(2); 1433 + memcpy(RTA_DATA(rta), &family, 2); 1434 + off += NLMSG_ALIGN(rta->rta_len); 1435 + 1436 + if (rport) { 1437 + rta = (void *)(data + off); 1438 + rta->rta_type = MPTCP_PM_ADDR_ATTR_PORT; 1439 + rta->rta_len = RTA_LENGTH(2); 1440 + memcpy(RTA_DATA(rta), &rport, 2); 1441 + off += NLMSG_ALIGN(rta->rta_len); 1442 + } 1443 + 1444 + nest->rta_len = off - nest_start; 1445 + } 1395 1446 1396 1447 do_nl_req(fd, nh, off, 0); 1397 1448 return 0;
+32
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 770 770 rm -f "$evts" 771 771 } 772 772 773 + test_prio() 774 + { 775 + local count 776 + 777 + # Send MP_PRIO signal from client to server machine 778 + ip netns exec "$ns2" ./pm_nl_ctl set 10.0.1.2 port "$client4_port" flags backup token "$client4_token" rip 10.0.1.1 rport "$server4_port" 779 + sleep 0.5 780 + 781 + # Check TX 782 + stdbuf -o0 -e0 printf "MP_PRIO TX \t" 783 + count=$(ip netns exec "$ns2" nstat -as | grep MPTcpExtMPPrioTx | awk '{print $2}') 784 + [ -z "$count" ] && count=0 785 + if [ $count != 1 ]; then 786 + stdbuf -o0 -e0 printf "[FAIL]\n" 787 + exit 1 788 + else 789 + stdbuf -o0 -e0 printf "[OK]\n" 790 + fi 791 + 792 + # Check RX 793 + stdbuf -o0 -e0 printf "MP_PRIO RX \t" 794 + count=$(ip netns exec "$ns1" nstat -as | grep MPTcpExtMPPrioRx | awk '{print $2}') 795 + [ -z "$count" ] && count=0 796 + if [ $count != 1 ]; then 797 + stdbuf -o0 -e0 printf "[FAIL]\n" 798 + exit 1 799 + else 800 + stdbuf -o0 -e0 printf "[OK]\n" 801 + fi 802 + } 803 + 773 804 make_connection 774 805 make_connection "v6" 775 806 test_announce 776 807 test_remove 777 808 test_subflows 809 + test_prio 778 810 779 811 exit 0
+162
tools/testing/selftests/net/tun.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #define _GNU_SOURCE 4 + 5 + #include <errno.h> 6 + #include <fcntl.h> 7 + #include <stdio.h> 8 + #include <stdlib.h> 9 + #include <string.h> 10 + #include <unistd.h> 11 + #include <linux/if.h> 12 + #include <linux/if_tun.h> 13 + #include <linux/netlink.h> 14 + #include <linux/rtnetlink.h> 15 + #include <sys/ioctl.h> 16 + #include <sys/socket.h> 17 + 18 + #include "../kselftest_harness.h" 19 + 20 + static int tun_attach(int fd, char *dev) 21 + { 22 + struct ifreq ifr; 23 + 24 + memset(&ifr, 0, sizeof(ifr)); 25 + strcpy(ifr.ifr_name, dev); 26 + ifr.ifr_flags = IFF_ATTACH_QUEUE; 27 + 28 + return ioctl(fd, TUNSETQUEUE, (void *) &ifr); 29 + } 30 + 31 + static int tun_detach(int fd, char *dev) 32 + { 33 + struct ifreq ifr; 34 + 35 + memset(&ifr, 0, sizeof(ifr)); 36 + strcpy(ifr.ifr_name, dev); 37 + ifr.ifr_flags = IFF_DETACH_QUEUE; 38 + 39 + return ioctl(fd, TUNSETQUEUE, (void *) &ifr); 40 + } 41 + 42 + static int tun_alloc(char *dev) 43 + { 44 + struct ifreq ifr; 45 + int fd, err; 46 + 47 + fd = open("/dev/net/tun", O_RDWR); 48 + if (fd < 0) { 49 + fprintf(stderr, "can't open tun: %s\n", strerror(errno)); 50 + return fd; 51 + } 52 + 53 + memset(&ifr, 0, sizeof(ifr)); 54 + strcpy(ifr.ifr_name, dev); 55 + ifr.ifr_flags = IFF_TAP | IFF_NAPI | IFF_MULTI_QUEUE; 56 + 57 + err = ioctl(fd, TUNSETIFF, (void *) &ifr); 58 + if (err < 0) { 59 + fprintf(stderr, "can't TUNSETIFF: %s\n", strerror(errno)); 60 + close(fd); 61 + return err; 62 + } 63 + strcpy(dev, ifr.ifr_name); 64 + return fd; 65 + } 66 + 67 + static int tun_delete(char *dev) 68 + { 69 + struct { 70 + struct nlmsghdr nh; 71 + struct ifinfomsg ifm; 72 + unsigned char data[64]; 73 + } req; 74 + struct rtattr *rta; 75 + int ret, rtnl; 76 + 77 + rtnl = socket(AF_NETLINK, SOCK_DGRAM, NETLINK_ROUTE); 78 + if (rtnl < 0) { 79 + fprintf(stderr, "can't open rtnl: %s\n", strerror(errno)); 80 + return 1; 81 + } 82 + 83 + memset(&req, 0, sizeof(req)); 84 + req.nh.nlmsg_len = NLMSG_ALIGN(NLMSG_LENGTH(sizeof(req.ifm))); 85 + req.nh.nlmsg_flags = NLM_F_REQUEST; 86 + req.nh.nlmsg_type = RTM_DELLINK; 87 + 88 + req.ifm.ifi_family = AF_UNSPEC; 89 + 90 + rta = (struct rtattr *)(((char *)&req) + NLMSG_ALIGN(req.nh.nlmsg_len)); 91 + rta->rta_type = IFLA_IFNAME; 92 + rta->rta_len = RTA_LENGTH(IFNAMSIZ); 93 + req.nh.nlmsg_len += rta->rta_len; 94 + memcpy(RTA_DATA(rta), dev, IFNAMSIZ); 95 + 96 + ret = send(rtnl, &req, req.nh.nlmsg_len, 0); 97 + if (ret < 0) 98 + fprintf(stderr, "can't send: %s\n", strerror(errno)); 99 + ret = (unsigned int)ret != req.nh.nlmsg_len; 100 + 101 + close(rtnl); 102 + return ret; 103 + } 104 + 105 + FIXTURE(tun) 106 + { 107 + char ifname[IFNAMSIZ]; 108 + int fd, fd2; 109 + }; 110 + 111 + FIXTURE_SETUP(tun) 112 + { 113 + memset(self->ifname, 0, sizeof(self->ifname)); 114 + 115 + self->fd = tun_alloc(self->ifname); 116 + ASSERT_GE(self->fd, 0); 117 + 118 + self->fd2 = tun_alloc(self->ifname); 119 + ASSERT_GE(self->fd2, 0); 120 + } 121 + 122 + FIXTURE_TEARDOWN(tun) 123 + { 124 + if (self->fd >= 0) 125 + close(self->fd); 126 + if (self->fd2 >= 0) 127 + close(self->fd2); 128 + } 129 + 130 + TEST_F(tun, delete_detach_close) { 131 + EXPECT_EQ(tun_delete(self->ifname), 0); 132 + EXPECT_EQ(tun_detach(self->fd, self->ifname), -1); 133 + EXPECT_EQ(errno, 22); 134 + } 135 + 136 + TEST_F(tun, detach_delete_close) { 137 + EXPECT_EQ(tun_detach(self->fd, self->ifname), 0); 138 + EXPECT_EQ(tun_delete(self->ifname), 0); 139 + } 140 + 141 + TEST_F(tun, detach_close_delete) { 142 + EXPECT_EQ(tun_detach(self->fd, self->ifname), 0); 143 + close(self->fd); 144 + self->fd = -1; 145 + EXPECT_EQ(tun_delete(self->ifname), 0); 146 + } 147 + 148 + TEST_F(tun, reattach_delete_close) { 149 + EXPECT_EQ(tun_detach(self->fd, self->ifname), 0); 150 + EXPECT_EQ(tun_attach(self->fd, self->ifname), 0); 151 + EXPECT_EQ(tun_delete(self->ifname), 0); 152 + } 153 + 154 + TEST_F(tun, reattach_close_delete) { 155 + EXPECT_EQ(tun_detach(self->fd, self->ifname), 0);
156 + EXPECT_EQ(tun_attach(self->fd, self->ifname), 0); 157 + close(self->fd); 158 + self->fd = -1; 159 + EXPECT_EQ(tun_delete(self->ifname), 0); 160 + } 161 + 162 + TEST_HARNESS_MAIN
+1 -1
tools/testing/selftests/net/udpgro.sh
··· 34 34 ip -netns "${PEER_NS}" addr add dev veth1 192.168.1.1/24 35 35 ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad 36 36 ip -netns "${PEER_NS}" link set dev veth1 up 37 - ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy 37 + ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp 38 38 } 39 39 40 40 run_one() {
+1 -1
tools/testing/selftests/net/udpgro_bench.sh
··· 34 34 ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad 35 35 ip -netns "${PEER_NS}" link set dev veth1 up 36 36 37 - ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy 37 + ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp 38 38 ip netns exec "${PEER_NS}" ./udpgso_bench_rx ${rx_args} -r & 39 39 ip netns exec "${PEER_NS}" ./udpgso_bench_rx -t ${rx_args} -r & 40 40
+1 -1
tools/testing/selftests/net/udpgro_frglist.sh
··· 36 36 ip netns exec "${PEER_NS}" ethtool -K veth1 rx-gro-list on 37 37 38 38 39 - ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy 39 + ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp 40 40 tc -n "${PEER_NS}" qdisc add dev veth1 clsact 41 41 tc -n "${PEER_NS}" filter add dev veth1 ingress prio 4 protocol ipv6 bpf object-file ../bpf/nat6to4.o section schedcls/ingress6/nat_6 direct-action 42 42 tc -n "${PEER_NS}" filter add dev veth1 egress prio 4 protocol ip bpf object-file ../bpf/nat6to4.o section schedcls/egress4/snat4 direct-action
+1 -1
tools/testing/selftests/net/udpgro_fwd.sh
··· 46 46 ip -n $BASE$ns addr add dev veth$ns $BM_NET_V4$ns/24 47 47 ip -n $BASE$ns addr add dev veth$ns $BM_NET_V6$ns/64 nodad 48 48 done 49 - ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp_dummy 2>/dev/null 49 + ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null 50 50 } 51 51 52 52 create_vxlan_endpoint() {
+1 -1
tools/testing/selftests/net/udpgso_bench.sh
··· 120 120 run_udp "${ipv4_args}" 121 121 122 122 echo "ipv6" 123 - run_tcp "${ipv4_args}" 123 + run_tcp "${ipv6_args}" 124 124 run_udp "${ipv6_args}" 125 125 } 126 126
+3 -3
tools/testing/selftests/net/veth.sh
··· 289 289 ip netns exec $NS_SRC ethtool -L veth$SRC rx 1 tx 2 2>/dev/null 290 290 printf "%-60s" "bad setting: XDP with RX nr less than TX" 291 291 ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o \ 292 - section xdp_dummy 2>/dev/null &&\ 292 + section xdp 2>/dev/null &&\ 293 293 echo "fail - set operation successful ?!?" || echo " ok " 294 294 295 295 # the following tests will run with multiple channels active 296 296 ip netns exec $NS_SRC ethtool -L veth$SRC rx 2 297 297 ip netns exec $NS_DST ethtool -L veth$DST rx 2 298 298 ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o \ 299 - section xdp_dummy 2>/dev/null 299 + section xdp 2>/dev/null 300 300 printf "%-60s" "bad setting: reducing RX nr below peer TX with XDP set" 301 301 ip netns exec $NS_DST ethtool -L veth$DST rx 1 2>/dev/null &&\ 302 302 echo "fail - set operation successful ?!?" || echo " ok " ··· 311 311 chk_channels "setting invalid channels nr" $DST 2 2 312 312 fi 313 313 314 - ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o section xdp_dummy 2>/dev/null 314 + ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null 315 315 chk_gro_flag "with xdp attached - gro flag" $DST on 316 316 chk_gro_flag " - peer gro flag" $SRC off 317 317 chk_tso_flag " - tso flag" $SRC off
+77
tools/testing/selftests/tc-testing/tc-tests/actions/gact.json
··· 609 609 "teardown": [ 610 610 "$TC actions flush action gact" 611 611 ] 612 + }, 613 + { 614 + "id": "7f52", 615 + "name": "Try to flush action which is referenced by filter", 616 + "category": [ 617 + "actions", 618 + "gact" 619 + ], 620 + "plugins": { 621 + "requires": "nsPlugin" 622 + }, 623 + "setup": [ 624 + [ 625 + "$TC actions flush action gact", 626 + 0, 627 + 1, 628 + 255 629 + ], 630 + "$TC qdisc add dev $DEV1 ingress", 631 + "$TC actions add action pass index 1", 632 + "$TC filter add dev $DEV1 protocol all ingress prio 1 handle 0x1234 matchall action gact index 1" 633 + ], 634 + "cmdUnderTest": "$TC actions flush action gact", 635 + "expExitCode": "1", 636 + "verifyCmd": "$TC actions ls action gact", 637 + "matchPattern": "total acts 1.*action order [0-9]*: gact action pass.*index 1 ref 2 bind 1", 638 + "matchCount": "1", 639 + "teardown": [ 640 + "$TC qdisc del dev $DEV1 ingress", 641 + [ 642 + "sleep 1; $TC actions flush action gact", 643 + 0, 644 + 1 645 + ] 646 + ] 647 + }, 648 + { 649 + "id": "ae1e", 650 + "name": "Try to flush actions when last one is referenced by filter", 651 + "category": [ 652 + "actions", 653 + "gact" 654 + ], 655 + "plugins": { 656 + "requires": "nsPlugin" 657 + }, 658 + "setup": [ 659 + [ 660 + "$TC actions flush action gact", 661 + 0, 662 + 1, 663 + 255 664 + ], 665 + "$TC qdisc add dev $DEV1 ingress", 666 + [ 667 + "$TC actions add action pass index 1", 668 + 0, 669 + 1, 670 + 255 671 + ], 672 + "$TC actions add action reclassify index 2", 673 + "$TC actions add action drop index 3", 674 + "$TC filter add dev $DEV1 protocol all ingress prio 1 handle 0x1234 matchall action gact index 3" 675 + ], 676 + "cmdUnderTest": "$TC actions flush action gact", 677 + "expExitCode": "0", 678 + "verifyCmd": "$TC actions ls action gact", 679 + "matchPattern": "total acts 1.*action order [0-9]*: gact action drop.*index 3 ref 2 bind 1", 680 + "matchCount": "1", 681 + "teardown": [ 682 + "$TC qdisc del dev $DEV1 ingress", 683 + [ 684 + "sleep 1; $TC actions flush action gact", 685 + 0, 686 + 1 687 + ] 688 + ] 612 689 } 613 690 ]
+11 -9
tools/testing/selftests/wireguard/qemu/Makefile
··· 19 19 MIRROR := https://download.wireguard.com/qemu-test/distfiles/ 20 20 21 21 KERNEL_BUILD_PATH := $(BUILD_PATH)/kernel$(if $(findstring yes,$(DEBUG_KERNEL)),-debug) 22 - rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d)) 23 - WIREGUARD_SOURCES := $(call rwildcard,$(KERNEL_PATH)/drivers/net/wireguard/,*) 24 22 25 23 default: qemu 26 24 ··· 107 109 QEMU_ARCH := x86_64 108 110 KERNEL_ARCH := x86_64 109 111 KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/arch/x86/boot/bzImage 112 + QEMU_VPORT_RESULT := virtio-serial-device 110 113 ifeq ($(HOST_ARCH),$(ARCH)) 111 - QEMU_MACHINE := -cpu host -machine q35,accel=kvm 114 + QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off -no-acpi 112 115 else 113 - QEMU_MACHINE := -cpu max -machine q35 116 + QEMU_MACHINE := -cpu max -machine microvm -no-acpi 114 117 endif 115 118 else ifeq ($(ARCH),i686) 116 119 CHOST := i686-linux-musl 117 120 QEMU_ARCH := i386 118 121 KERNEL_ARCH := x86 119 122 KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/arch/x86/boot/bzImage 123 + QEMU_VPORT_RESULT := virtio-serial-device 120 124 ifeq ($(subst x86_64,i686,$(HOST_ARCH)),$(ARCH)) 121 - QEMU_MACHINE := -cpu host -machine q35,accel=kvm 125 + QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off -no-acpi 122 126 else 123 - QEMU_MACHINE := -cpu max -machine q35 127 + QEMU_MACHINE := -cpu coreduo -machine microvm -no-acpi 124 128 endif 125 129 else ifeq ($(ARCH),mips64) 126 130 CHOST := mips64-linux-musl ··· 208 208 KERNEL_ARCH := m68k 209 209 KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/vmlinux 210 210 KERNEL_CMDLINE := $(shell sed -n 's/CONFIG_CMDLINE=\(.*\)/\1/p' arch/m68k.config) 211 + QEMU_VPORT_RESULT := virtio-serial-device 211 212 ifeq ($(HOST_ARCH),$(ARCH)) 212 - QEMU_MACHINE := -cpu host,accel=kvm -machine q800 -append $(KERNEL_CMDLINE) 213 + QEMU_MACHINE := -cpu host,accel=kvm -machine virt -append $(KERNEL_CMDLINE) 213 214 else 214 - QEMU_MACHINE := -machine q800 -smp 1 -append $(KERNEL_CMDLINE) 215 + QEMU_MACHINE := -machine virt -smp 1 -append $(KERNEL_CMDLINE) 215 216 endif 216 217 else ifeq ($(ARCH),riscv64) 217 218 CHOST := riscv64-linux-musl ··· 323 322 cd $(KERNEL_BUILD_PATH) && ARCH=$(KERNEL_ARCH) $(KERNEL_PATH)/scripts/kconfig/merge_config.sh -n $(KERNEL_BUILD_PATH)/.config $(KERNEL_BUILD_PATH)/minimal.config 324 323 $(if $(findstring yes,$(DEBUG_KERNEL)),cp debug.config $(KERNEL_BUILD_PATH) && cd $(KERNEL_BUILD_PATH) && ARCH=$(KERNEL_ARCH) $(KERNEL_PATH)/scripts/kconfig/merge_config.sh -n $(KERNEL_BUILD_PATH)/.config debug.config,) 325 324 326 - $(KERNEL_BZIMAGE): $(TOOLCHAIN_PATH)/.installed $(KERNEL_BUILD_PATH)/.config $(BUILD_PATH)/init-cpio-spec.txt $(IPERF_PATH)/src/iperf3 $(IPUTILS_PATH)/ping $(BASH_PATH)/bash $(IPROUTE2_PATH)/misc/ss $(IPROUTE2_PATH)/ip/ip $(IPTABLES_PATH)/iptables/xtables-legacy-multi $(NMAP_PATH)/ncat/ncat $(WIREGUARD_TOOLS_PATH)/src/wg $(BUILD_PATH)/init ../netns.sh $(WIREGUARD_SOURCES) 325 + $(KERNEL_BZIMAGE): $(TOOLCHAIN_PATH)/.installed $(KERNEL_BUILD_PATH)/.config $(BUILD_PATH)/init-cpio-spec.txt $(IPERF_PATH)/src/iperf3 $(IPUTILS_PATH)/ping $(BASH_PATH)/bash $(IPROUTE2_PATH)/misc/ss $(IPROUTE2_PATH)/ip/ip $(IPTABLES_PATH)/iptables/xtables-legacy-multi $(NMAP_PATH)/ncat/ncat $(WIREGUARD_TOOLS_PATH)/src/wg $(BUILD_PATH)/init 327 326 $(MAKE) -C $(KERNEL_PATH) O=$(KERNEL_BUILD_PATH) ARCH=$(KERNEL_ARCH) CROSS_COMPILE=$(CROSS_COMPILE) 327 + .PHONY: $(KERNEL_BZIMAGE) 328 328 329 329 $(TOOLCHAIN_PATH)/$(CHOST)/include/linux/.installed: | $(KERNEL_BUILD_PATH)/.config $(TOOLCHAIN_PATH)/.installed 330 330 rm -rf $(TOOLCHAIN_PATH)/$(CHOST)/include/linux
+1
tools/testing/selftests/wireguard/qemu/arch/arm.config
··· 7 7 CONFIG_VIRTIO_MENU=y 8 8 CONFIG_VIRTIO_MMIO=y 9 9 CONFIG_VIRTIO_CONSOLE=y 10 + CONFIG_COMPAT_32BIT_TIME=y 10 11 CONFIG_CMDLINE_BOOL=y 11 12 CONFIG_CMDLINE="console=ttyAMA0 wg.success=vport0p1 panic_on_warn=1" 12 13 CONFIG_FRAME_WARN=1024
+1
tools/testing/selftests/wireguard/qemu/arch/armeb.config
··· 7 7 CONFIG_VIRTIO_MENU=y 8 8 CONFIG_VIRTIO_MMIO=y 9 9 CONFIG_VIRTIO_CONSOLE=y 10 + CONFIG_COMPAT_32BIT_TIME=y 10 11 CONFIG_CMDLINE_BOOL=y 11 12 CONFIG_CMDLINE="console=ttyAMA0 wg.success=vport0p1 panic_on_warn=1" 12 13 CONFIG_CPU_BIG_ENDIAN=y
+6 -2
tools/testing/selftests/wireguard/qemu/arch/i686.config
··· 1 - CONFIG_ACPI=y 2 1 CONFIG_SERIAL_8250=y 3 2 CONFIG_SERIAL_8250_CONSOLE=y 3 + CONFIG_VIRTIO_MENU=y 4 + CONFIG_VIRTIO_MMIO=y 5 + CONFIG_VIRTIO_CONSOLE=y 6 + CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y 7 + CONFIG_COMPAT_32BIT_TIME=y 4 8 CONFIG_CMDLINE_BOOL=y 5 - CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 9 + CONFIG_CMDLINE="console=ttyS0 wg.success=vport0p1 panic_on_warn=1 reboot=t" 6 10 CONFIG_FRAME_WARN=1024
+4 -6
tools/testing/selftests/wireguard/qemu/arch/m68k.config
··· 1 1 CONFIG_MMU=y 2 + CONFIG_VIRT=y 2 3 CONFIG_M68KCLASSIC=y 3 - CONFIG_M68040=y 4 - CONFIG_MAC=y 5 - CONFIG_SERIAL_PMACZILOG=y 6 - CONFIG_SERIAL_PMACZILOG_TTYS=y 7 - CONFIG_SERIAL_PMACZILOG_CONSOLE=y 8 - CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 4 + CONFIG_VIRTIO_CONSOLE=y 5 + CONFIG_COMPAT_32BIT_TIME=y 6 + CONFIG_CMDLINE="console=ttyGF0 wg.success=vport0p1 panic_on_warn=1" 9 7 CONFIG_FRAME_WARN=1024
+1
tools/testing/selftests/wireguard/qemu/arch/mips.config
··· 6 6 CONFIG_POWER_RESET_SYSCON=y 7 7 CONFIG_SERIAL_8250=y 8 8 CONFIG_SERIAL_8250_CONSOLE=y 9 + CONFIG_COMPAT_32BIT_TIME=y 9 10 CONFIG_CMDLINE_BOOL=y 10 11 CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 11 12 CONFIG_FRAME_WARN=1024
+1
tools/testing/selftests/wireguard/qemu/arch/mipsel.config
··· 7 7 CONFIG_POWER_RESET_SYSCON=y 8 8 CONFIG_SERIAL_8250=y 9 9 CONFIG_SERIAL_8250_CONSOLE=y 10 + CONFIG_COMPAT_32BIT_TIME=y 10 11 CONFIG_CMDLINE_BOOL=y 11 12 CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 12 13 CONFIG_FRAME_WARN=1024
+1
tools/testing/selftests/wireguard/qemu/arch/powerpc.config
··· 4 4 CONFIG_PHYS_64BIT=y 5 5 CONFIG_SERIAL_8250=y 6 6 CONFIG_SERIAL_8250_CONSOLE=y 7 + CONFIG_COMPAT_32BIT_TIME=y 7 8 CONFIG_MATH_EMULATION=y 8 9 CONFIG_CMDLINE_BOOL=y 9 10 CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1"
+5 -2
tools/testing/selftests/wireguard/qemu/arch/x86_64.config
··· 1 - CONFIG_ACPI=y 2 1 CONFIG_SERIAL_8250=y 3 2 CONFIG_SERIAL_8250_CONSOLE=y 3 + CONFIG_VIRTIO_MENU=y 4 + CONFIG_VIRTIO_MMIO=y 5 + CONFIG_VIRTIO_CONSOLE=y 6 + CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y 4 7 CONFIG_CMDLINE_BOOL=y 5 - CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 8 + CONFIG_CMDLINE="console=ttyS0 wg.success=vport0p1 panic_on_warn=1 reboot=t" 6 9 CONFIG_FRAME_WARN=1280
+11
tools/testing/selftests/wireguard/qemu/init.c
··· 11 11 #include <stdlib.h> 12 12 #include <stdbool.h> 13 13 #include <fcntl.h> 14 + #include <time.h> 14 15 #include <sys/wait.h> 15 16 #include <sys/mount.h> 16 17 #include <sys/stat.h> ··· 69 68 if (ioctl(fd, RNDADDTOENTCNT, &bits) < 0) 70 69 panic("ioctl(RNDADDTOENTCNT)"); 71 70 close(fd); 71 + } 72 + 73 + static void set_time(void) 74 + { 75 + if (time(NULL)) 76 + return; 77 + pretty_message("[+] Setting fake time..."); 78 + if (stime(&(time_t){1433512680}) < 0) 79 + panic("settimeofday()"); 72 80 } 73 81 74 82 static void mount_filesystems(void) ··· 269 259 print_banner(); 270 260 mount_filesystems(); 271 261 seed_rng(); 262 + set_time(); 272 263 kmod_selftests(); 273 264 enable_logging(); 274 265 clear_leaks();