Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

No conflicts.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+3144 -1668
+6
Documentation/driver-api/firmware/other_interfaces.rst
··· 13 13 .. kernel-doc:: drivers/firmware/edd.c 14 14 :internal: 15 15 16 + Generic System Framebuffers Interface 17 + ------------------------------------- 18 + 19 + .. kernel-doc:: drivers/firmware/sysfb.c 20 + :export: 21 + 16 22 Intel Stratix10 SoC Service Layer 17 23 --------------------------------- 18 24 Some features of the Intel Stratix10 SoC require a level of privilege
+36
Documentation/process/maintainer-netdev.rst
··· 6 6 netdev FAQ 7 7 ========== 8 8 9 + tl;dr 10 + ----- 11 + 12 + - designate your patch to a tree - ``[PATCH net]`` or ``[PATCH net-next]`` 13 + - for fixes the ``Fixes:`` tag is required, regardless of the tree 14 + - don't post large series (> 15 patches), break them up 15 + - don't repost your patches within one 24h period 16 + - reverse xmas tree 17 + 9 18 What is netdev? 10 19 --------------- 11 20 It is a mailing list for all network-related Linux stuff. This ··· 145 136 version that should be applied. If there is any doubt, the maintainer 146 137 will reply and ask what should be done. 147 138 139 + How do I divide my work into patches? 140 + ------------------------------------- 141 + 142 + Put yourself in the shoes of the reviewer. Each patch is read separately 143 + and therefore should constitute a comprehensible step towards your stated 144 + goal. 145 + 146 + Avoid sending series longer than 15 patches. Larger series takes longer 147 + to review as reviewers will defer looking at it until they find a large 148 + chunk of time. A small series can be reviewed in a short time, so Maintainers 149 + just do it. As a result, a sequence of smaller series gets merged quicker and 150 + with better review coverage. Re-posting large series also increases the mailing 151 + list traffic. 152 + 148 153 I made changes to only a few patches in a patch series should I resend only those changed? 149 154 ------------------------------------------------------------------------------------------ 150 155 No, please resend the entire patch series and make sure you do number your ··· 205 182 /* foobar blah blah blah 206 183 * another line of text 207 184 */ 185 + 186 + What is "reverse xmas tree"? 187 + ---------------------------- 188 + 189 + Netdev has a convention for ordering local variables in functions. 
190 + Order the variable declaration lines longest to shortest, e.g.:: 191 + 192 + struct scatterlist *sg; 193 + struct sk_buff *skb; 194 + int err, i; 195 + 196 + If there are dependencies between the variables preventing the ordering 197 + move the initialization out of line. 208 198 209 199 I am working in existing code which uses non-standard formatting. Which formatting should I use? 210 200 ------------------------------------------------------------------------------------------------
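The convention added above can be sketched in plain C. The function and variable names below are illustrative only (not from the kernel tree): declaration lines run longest to shortest, and because `len` depends on `name`, its initialization is moved out of the declaration, as the FAQ entry recommends.

```c
#include <assert.h>
#include <string.h>

/* Illustrative "reverse xmas tree": declarations ordered longest line
 * first.  `len` depends on `name`, so its initialization is moved out
 * of line rather than breaking the ordering. */
static int name_length(const char *name)
{
	const char *fallback = "eth0";	/* longest declaration first */
	size_t len;			/* shortest declaration last  */

	if (!name)
		name = fallback;
	len = strlen(name);		/* dependent init, out of line */

	return (int)len;
}
```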
+107 -24
MAINTAINERS
··· 2539 2539 ARM/QUALCOMM SUPPORT 2540 2540 M: Andy Gross <agross@kernel.org> 2541 2541 M: Bjorn Andersson <bjorn.andersson@linaro.org> 2542 + R: Konrad Dybcio <konrad.dybcio@somainline.org> 2542 2543 L: linux-arm-msm@vger.kernel.org 2543 2544 S: Maintained 2544 2545 T: git git://git.kernel.org/pub/scm/linux/kernel/git/qcom/linux.git ··· 3617 3616 F: Documentation/devicetree/bindings/iio/accel/bosch,bma400.yaml 3618 3617 F: drivers/iio/accel/bma400* 3619 3618 3620 - BPF (Safe dynamic programs and tools) 3619 + BPF [GENERAL] (Safe Dynamic Programs and Tools) 3621 3620 M: Alexei Starovoitov <ast@kernel.org> 3622 3621 M: Daniel Borkmann <daniel@iogearbox.net> 3623 3622 M: Andrii Nakryiko <andrii@kernel.org> 3624 - R: Martin KaFai Lau <kafai@fb.com> 3625 - R: Song Liu <songliubraving@fb.com> 3623 + R: Martin KaFai Lau <martin.lau@linux.dev> 3624 + R: Song Liu <song@kernel.org> 3626 3625 R: Yonghong Song <yhs@fb.com> 3627 3626 R: John Fastabend <john.fastabend@gmail.com> 3628 3627 R: KP Singh <kpsingh@kernel.org> 3629 - L: netdev@vger.kernel.org 3628 + R: Stanislav Fomichev <sdf@google.com> 3629 + R: Hao Luo <haoluo@google.com> 3630 + R: Jiri Olsa <jolsa@kernel.org> 3630 3631 L: bpf@vger.kernel.org 3631 3632 S: Supported 3632 3633 W: https://bpf.io/ ··· 3660 3657 F: tools/bpf/ 3661 3658 F: tools/lib/bpf/ 3662 3659 F: tools/testing/selftests/bpf/ 3663 - N: bpf 3664 - K: bpf 3665 3660 3666 3661 BPF JIT for ARM 3667 3662 M: Shubham Bansal <illusionist.neo@gmail.com> 3668 - L: netdev@vger.kernel.org 3669 3663 L: bpf@vger.kernel.org 3670 3664 S: Odd Fixes 3671 3665 F: arch/arm/net/ ··· 3671 3671 M: Daniel Borkmann <daniel@iogearbox.net> 3672 3672 M: Alexei Starovoitov <ast@kernel.org> 3673 3673 M: Zi Shen Lim <zlim.lnx@gmail.com> 3674 - L: netdev@vger.kernel.org 3675 3674 L: bpf@vger.kernel.org 3676 3675 S: Supported 3677 3676 F: arch/arm64/net/ ··· 3678 3679 BPF JIT for MIPS (32-BIT AND 64-BIT) 3679 3680 M: Johan Almbladh <johan.almbladh@anyfinetworks.com> 3680 3681 M: 
Paul Burton <paulburton@kernel.org> 3681 - L: netdev@vger.kernel.org 3682 3682 L: bpf@vger.kernel.org 3683 3683 S: Maintained 3684 3684 F: arch/mips/net/ 3685 3685 3686 3686 BPF JIT for NFP NICs 3687 3687 M: Jakub Kicinski <kuba@kernel.org> 3688 - L: netdev@vger.kernel.org 3689 3688 L: bpf@vger.kernel.org 3690 3689 S: Odd Fixes 3691 3690 F: drivers/net/ethernet/netronome/nfp/bpf/ ··· 3691 3694 BPF JIT for POWERPC (32-BIT AND 64-BIT) 3692 3695 M: Naveen N. Rao <naveen.n.rao@linux.ibm.com> 3693 3696 M: Michael Ellerman <mpe@ellerman.id.au> 3694 - L: netdev@vger.kernel.org 3695 3697 L: bpf@vger.kernel.org 3696 3698 S: Supported 3697 3699 F: arch/powerpc/net/ ··· 3698 3702 BPF JIT for RISC-V (32-bit) 3699 3703 M: Luke Nelson <luke.r.nels@gmail.com> 3700 3704 M: Xi Wang <xi.wang@gmail.com> 3701 - L: netdev@vger.kernel.org 3702 3705 L: bpf@vger.kernel.org 3703 3706 S: Maintained 3704 3707 F: arch/riscv/net/ ··· 3705 3710 3706 3711 BPF JIT for RISC-V (64-bit) 3707 3712 M: Björn Töpel <bjorn@kernel.org> 3708 - L: netdev@vger.kernel.org 3709 3713 L: bpf@vger.kernel.org 3710 3714 S: Maintained 3711 3715 F: arch/riscv/net/ ··· 3714 3720 M: Ilya Leoshkevich <iii@linux.ibm.com> 3715 3721 M: Heiko Carstens <hca@linux.ibm.com> 3716 3722 M: Vasily Gorbik <gor@linux.ibm.com> 3717 - L: netdev@vger.kernel.org 3718 3723 L: bpf@vger.kernel.org 3719 3724 S: Supported 3720 3725 F: arch/s390/net/ ··· 3721 3728 3722 3729 BPF JIT for SPARC (32-BIT AND 64-BIT) 3723 3730 M: David S. 
Miller <davem@davemloft.net> 3724 - L: netdev@vger.kernel.org 3725 3731 L: bpf@vger.kernel.org 3726 3732 S: Odd Fixes 3727 3733 F: arch/sparc/net/ 3728 3734 3729 3735 BPF JIT for X86 32-BIT 3730 3736 M: Wang YanQing <udknight@gmail.com> 3731 - L: netdev@vger.kernel.org 3732 3737 L: bpf@vger.kernel.org 3733 3738 S: Odd Fixes 3734 3739 F: arch/x86/net/bpf_jit_comp32.c ··· 3734 3743 BPF JIT for X86 64-BIT 3735 3744 M: Alexei Starovoitov <ast@kernel.org> 3736 3745 M: Daniel Borkmann <daniel@iogearbox.net> 3737 - L: netdev@vger.kernel.org 3738 3746 L: bpf@vger.kernel.org 3739 3747 S: Supported 3740 3748 F: arch/x86/net/ 3741 3749 X: arch/x86/net/bpf_jit_comp32.c 3742 3750 3743 - BPF LSM (Security Audit and Enforcement using BPF) 3751 + BPF [CORE] 3752 + M: Alexei Starovoitov <ast@kernel.org> 3753 + M: Daniel Borkmann <daniel@iogearbox.net> 3754 + R: John Fastabend <john.fastabend@gmail.com> 3755 + L: bpf@vger.kernel.org 3756 + S: Maintained 3757 + F: kernel/bpf/verifier.c 3758 + F: kernel/bpf/tnum.c 3759 + F: kernel/bpf/core.c 3760 + F: kernel/bpf/syscall.c 3761 + F: kernel/bpf/dispatcher.c 3762 + F: kernel/bpf/trampoline.c 3763 + F: include/linux/bpf* 3764 + F: include/linux/filter.h 3765 + 3766 + BPF [BTF] 3767 + M: Martin KaFai Lau <martin.lau@linux.dev> 3768 + L: bpf@vger.kernel.org 3769 + S: Maintained 3770 + F: kernel/bpf/btf.c 3771 + F: include/linux/btf* 3772 + 3773 + BPF [TRACING] 3774 + M: Song Liu <song@kernel.org> 3775 + R: Jiri Olsa <jolsa@kernel.org> 3776 + L: bpf@vger.kernel.org 3777 + S: Maintained 3778 + F: kernel/trace/bpf_trace.c 3779 + F: kernel/bpf/stackmap.c 3780 + 3781 + BPF [NETWORKING] (tc BPF, sock_addr) 3782 + M: Martin KaFai Lau <martin.lau@linux.dev> 3783 + M: Daniel Borkmann <daniel@iogearbox.net> 3784 + R: John Fastabend <john.fastabend@gmail.com> 3785 + L: bpf@vger.kernel.org 3786 + L: netdev@vger.kernel.org 3787 + S: Maintained 3788 + F: net/core/filter.c 3789 + F: net/sched/act_bpf.c 3790 + F: net/sched/cls_bpf.c 3791 + 3792 + BPF 
[NETWORKING] (struct_ops, reuseport) 3793 + M: Martin KaFai Lau <martin.lau@linux.dev> 3794 + L: bpf@vger.kernel.org 3795 + L: netdev@vger.kernel.org 3796 + S: Maintained 3797 + F: kernel/bpf/bpf_struct* 3798 + 3799 + BPF [SECURITY & LSM] (Security Audit and Enforcement using BPF) 3744 3800 M: KP Singh <kpsingh@kernel.org> 3745 3801 R: Florent Revest <revest@chromium.org> 3746 3802 R: Brendan Jackman <jackmanb@chromium.org> ··· 3798 3760 F: kernel/bpf/bpf_lsm.c 3799 3761 F: security/bpf/ 3800 3762 3801 - BPF L7 FRAMEWORK 3763 + BPF [STORAGE & CGROUPS] 3764 + M: Martin KaFai Lau <martin.lau@linux.dev> 3765 + L: bpf@vger.kernel.org 3766 + S: Maintained 3767 + F: kernel/bpf/cgroup.c 3768 + F: kernel/bpf/*storage.c 3769 + F: kernel/bpf/bpf_lru* 3770 + 3771 + BPF [RINGBUF] 3772 + M: Andrii Nakryiko <andrii@kernel.org> 3773 + L: bpf@vger.kernel.org 3774 + S: Maintained 3775 + F: kernel/bpf/ringbuf.c 3776 + 3777 + BPF [ITERATOR] 3778 + M: Yonghong Song <yhs@fb.com> 3779 + L: bpf@vger.kernel.org 3780 + S: Maintained 3781 + F: kernel/bpf/*iter.c 3782 + 3783 + BPF [L7 FRAMEWORK] (sockmap) 3802 3784 M: John Fastabend <john.fastabend@gmail.com> 3803 3785 M: Jakub Sitnicki <jakub@cloudflare.com> 3804 3786 L: netdev@vger.kernel.org ··· 3831 3773 F: net/ipv4/udp_bpf.c 3832 3774 F: net/unix/unix_bpf.c 3833 3775 3834 - BPFTOOL 3776 + BPF [LIBRARY] (libbpf) 3777 + M: Andrii Nakryiko <andrii@kernel.org> 3778 + L: bpf@vger.kernel.org 3779 + S: Maintained 3780 + F: tools/lib/bpf/ 3781 + 3782 + BPF [TOOLING] (bpftool) 3835 3783 M: Quentin Monnet <quentin@isovalent.com> 3836 3784 L: bpf@vger.kernel.org 3837 3785 S: Maintained 3838 3786 F: kernel/bpf/disasm.* 3839 3787 F: tools/bpf/bpftool/ 3788 + 3789 + BPF [SELFTESTS] (Test Runners & Infrastructure) 3790 + M: Andrii Nakryiko <andrii@kernel.org> 3791 + R: Mykola Lysenko <mykolal@fb.com> 3792 + L: bpf@vger.kernel.org 3793 + S: Maintained 3794 + F: tools/testing/selftests/bpf/ 3795 + 3796 + BPF [MISC] 3797 + L: bpf@vger.kernel.org 3798 + 
S: Odd Fixes 3799 + K: (?:\b|_)bpf(?:\b|_) 3840 3800 3841 3801 BROADCOM B44 10/100 ETHERNET DRIVER 3842 3802 M: Michael Chan <michael.chan@broadcom.com> ··· 5051 4975 T: git git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux.git 5052 4976 F: Documentation/devicetree/bindings/clock/ 5053 4977 F: drivers/clk/ 4978 + F: include/dt-bindings/clock/ 5054 4979 F: include/linux/clk-pr* 5055 4980 F: include/linux/clk/ 5056 4981 F: include/linux/of_clk.h ··· 9925 9848 M: Cezary Rojewski <cezary.rojewski@intel.com> 9926 9849 M: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> 9927 9850 M: Liam Girdwood <liam.r.girdwood@linux.intel.com> 9928 - M: Jie Yang <yang.jie@linux.intel.com> 9851 + M: Peter Ujfalusi <peter.ujfalusi@linux.intel.com> 9852 + M: Bard Liao <yung-chuan.liao@linux.intel.com> 9853 + M: Ranjani Sridharan <ranjani.sridharan@linux.intel.com> 9854 + M: Kai Vehmanen <kai.vehmanen@linux.intel.com> 9929 9855 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 9930 9856 S: Supported 9931 9857 F: sound/soc/intel/ ··· 15880 15800 PIN CONTROLLER - INTEL 15881 15801 M: Mika Westerberg <mika.westerberg@linux.intel.com> 15882 15802 M: Andy Shevchenko <andy@kernel.org> 15883 - S: Maintained 15803 + S: Supported 15884 15804 T: git git://git.kernel.org/pub/scm/linux/kernel/git/pinctrl/intel.git 15885 15805 F: drivers/pinctrl/intel/ 15886 15806 ··· 16402 16322 16403 16323 QCOM AUDIO (ASoC) DRIVERS 16404 16324 M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org> 16405 - M: Banajit Goswami <bgoswami@codeaurora.org> 16325 + M: Banajit Goswami <bgoswami@quicinc.com> 16406 16326 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 16407 16327 S: Supported 16408 16328 F: sound/soc/codecs/lpass-va-macro.c ··· 18210 18130 18211 18131 SHARED MEMORY COMMUNICATIONS (SMC) SOCKETS 18212 18132 M: Karsten Graul <kgraul@linux.ibm.com> 18133 + M: Wenjia Zhang <wenjia@linux.ibm.com> 18213 18134 L: linux-s390@vger.kernel.org 18214 18135 S: Supported 18215 
18136 W: http://www.ibm.com/developerworks/linux/linux390/ ··· 18843 18762 SOUND - SOUND OPEN FIRMWARE (SOF) DRIVERS 18844 18763 M: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> 18845 18764 M: Liam Girdwood <lgirdwood@gmail.com> 18765 + M: Peter Ujfalusi <peter.ujfalusi@linux.intel.com> 18766 + M: Bard Liao <yung-chuan.liao@linux.intel.com> 18846 18767 M: Ranjani Sridharan <ranjani.sridharan@linux.intel.com> 18847 - M: Kai Vehmanen <kai.vehmanen@linux.intel.com> 18768 + R: Kai Vehmanen <kai.vehmanen@linux.intel.com> 18848 18769 M: Daniel Baluta <daniel.baluta@nxp.com> 18849 18770 L: sound-open-firmware@alsa-project.org (moderated for non-subscribers) 18850 18771 S: Supported
+1 -1
Makefile
··· 2 2 VERSION = 5 3 3 PATCHLEVEL = 19 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Superb Owl 7 7 8 8 # *DOCUMENTATION*
+1 -2
arch/arm/boot/dts/at91-sam9x60ek.dts
··· 233 233 status = "okay"; 234 234 235 235 eeprom@53 { 236 - compatible = "atmel,24c32"; 236 + compatible = "atmel,24c02"; 237 237 reg = <0x53>; 238 238 pagesize = <16>; 239 - size = <128>; 240 239 status = "okay"; 241 240 }; 242 241 };
+3 -3
arch/arm/boot/dts/at91-sama5d2_icp.dts
··· 329 329 status = "okay"; 330 330 331 331 eeprom@50 { 332 - compatible = "atmel,24c32"; 332 + compatible = "atmel,24c02"; 333 333 reg = <0x50>; 334 334 pagesize = <16>; 335 335 status = "okay"; 336 336 }; 337 337 338 338 eeprom@52 { 339 - compatible = "atmel,24c32"; 339 + compatible = "atmel,24c02"; 340 340 reg = <0x52>; 341 341 pagesize = <16>; 342 342 status = "disabled"; 343 343 }; 344 344 345 345 eeprom@53 { 346 - compatible = "atmel,24c32"; 346 + compatible = "atmel,24c02"; 347 347 reg = <0x53>; 348 348 pagesize = <16>; 349 349 status = "disabled";
+1 -3
arch/arm/boot/dts/imx7d-smegw01.dts
··· 216 216 pinctrl-names = "default"; 217 217 pinctrl-0 = <&pinctrl_usdhc2>; 218 218 bus-width = <4>; 219 + no-1-8-v; 219 220 non-removable; 220 - cap-sd-highspeed; 221 - sd-uhs-ddr50; 222 - mmc-ddr-1_8v; 223 221 vmmc-supply = <&reg_wifi>; 224 222 enable-sdio-wakeup; 225 223 status = "okay";
+58
arch/arm/boot/dts/stm32mp15-scmi.dtsi
··· 27 27 reg = <0x16>; 28 28 #reset-cells = <1>; 29 29 }; 30 + 31 + scmi_voltd: protocol@17 { 32 + reg = <0x17>; 33 + 34 + scmi_reguls: regulators { 35 + #address-cells = <1>; 36 + #size-cells = <0>; 37 + 38 + scmi_reg11: reg11@0 { 39 + reg = <0>; 40 + regulator-name = "reg11"; 41 + regulator-min-microvolt = <1100000>; 42 + regulator-max-microvolt = <1100000>; 43 + }; 44 + 45 + scmi_reg18: reg18@1 { 46 + voltd-name = "reg18"; 47 + reg = <1>; 48 + regulator-name = "reg18"; 49 + regulator-min-microvolt = <1800000>; 50 + regulator-max-microvolt = <1800000>; 51 + }; 52 + 53 + scmi_usb33: usb33@2 { 54 + reg = <2>; 55 + regulator-name = "usb33"; 56 + regulator-min-microvolt = <3300000>; 57 + regulator-max-microvolt = <3300000>; 58 + }; 59 + }; 60 + }; 30 61 }; 31 62 }; 32 63 ··· 76 45 }; 77 46 }; 78 47 }; 48 + 49 + &reg11 { 50 + status = "disabled"; 51 + }; 52 + 53 + &reg18 { 54 + status = "disabled"; 55 + }; 56 + 57 + &usb33 { 58 + status = "disabled"; 59 + }; 60 + 61 + &usbotg_hs { 62 + usb33d-supply = <&scmi_usb33>; 63 + }; 64 + 65 + &usbphyc { 66 + vdda1v1-supply = <&scmi_reg11>; 67 + vdda1v8-supply = <&scmi_reg18>; 68 + }; 69 + 70 + /delete-node/ &clk_hse; 71 + /delete-node/ &clk_hsi; 72 + /delete-node/ &clk_lse; 73 + /delete-node/ &clk_lsi; 74 + /delete-node/ &clk_csi;
+3 -3
arch/arm/boot/dts/stm32mp151.dtsi
··· 565 565 compatible = "st,stm32-cec"; 566 566 reg = <0x40016000 0x400>; 567 567 interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>; 568 - clocks = <&rcc CEC_K>, <&clk_lse>; 568 + clocks = <&rcc CEC_K>, <&rcc CEC>; 569 569 clock-names = "cec", "hdmi-cec"; 570 570 status = "disabled"; 571 571 }; ··· 1474 1474 usbh_ohci: usb@5800c000 { 1475 1475 compatible = "generic-ohci"; 1476 1476 reg = <0x5800c000 0x1000>; 1477 - clocks = <&rcc USBH>, <&usbphyc>; 1477 + clocks = <&usbphyc>, <&rcc USBH>; 1478 1478 resets = <&rcc USBH_R>; 1479 1479 interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>; 1480 1480 status = "disabled"; ··· 1483 1483 usbh_ehci: usb@5800d000 { 1484 1484 compatible = "generic-ehci"; 1485 1485 reg = <0x5800d000 0x1000>; 1486 - clocks = <&rcc USBH>; 1486 + clocks = <&usbphyc>, <&rcc USBH>; 1487 1487 resets = <&rcc USBH_R>; 1488 1488 interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>; 1489 1489 companion = <&usbh_ohci>;
+4
arch/arm/boot/dts/stm32mp157a-dk1-scmi.dts
··· 29 29 clocks = <&scmi_clk CK_SCMI_MPU>; 30 30 }; 31 31 32 + &dsi { 33 + clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>; 34 + }; 35 + 32 36 &gpioz { 33 37 clocks = <&scmi_clk CK_SCMI_GPIOZ>; 34 38 };
+1
arch/arm/boot/dts/stm32mp157c-dk2-scmi.dts
··· 35 35 }; 36 36 37 37 &dsi { 38 + phy-dsi-supply = <&scmi_reg18>; 38 39 clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>; 39 40 }; 40 41
+4
arch/arm/boot/dts/stm32mp157c-ed1-scmi.dts
··· 34 34 resets = <&scmi_reset RST_SCMI_CRYP1>; 35 35 }; 36 36 37 + &dsi { 38 + clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>; 39 + }; 40 + 37 41 &gpioz { 38 42 clocks = <&scmi_clk CK_SCMI_GPIOZ>; 39 43 };
+1
arch/arm/boot/dts/stm32mp157c-ev1-scmi.dts
··· 36 36 }; 37 37 38 38 &dsi { 39 + phy-dsi-supply = <&scmi_reg18>; 39 40 clocks = <&rcc DSI_K>, <&scmi_clk CK_SCMI_HSE>, <&rcc DSI_PX>; 40 41 }; 41 42
+1
arch/arm/configs/mxs_defconfig
··· 93 93 CONFIG_DRM=y 94 94 CONFIG_DRM_PANEL_SEIKO_43WVF1G=y 95 95 CONFIG_DRM_MXSFB=y 96 + CONFIG_FB=y 96 97 CONFIG_FB_MODE_HELPERS=y 97 98 CONFIG_LCD_CLASS_DEVICE=y 98 99 CONFIG_BACKLIGHT_CLASS_DEVICE=y
+6 -6
arch/arm/mach-at91/pm.c
··· 202 202 203 203 static const struct of_device_id sama5d2_ws_ids[] = { 204 204 { .compatible = "atmel,sama5d2-gem", .data = &ws_info[0] }, 205 - { .compatible = "atmel,at91rm9200-rtc", .data = &ws_info[1] }, 205 + { .compatible = "atmel,sama5d2-rtc", .data = &ws_info[1] }, 206 206 { .compatible = "atmel,sama5d3-udc", .data = &ws_info[2] }, 207 207 { .compatible = "atmel,at91rm9200-ohci", .data = &ws_info[2] }, 208 208 { .compatible = "usb-ohci", .data = &ws_info[2] }, ··· 213 213 }; 214 214 215 215 static const struct of_device_id sam9x60_ws_ids[] = { 216 - { .compatible = "atmel,at91sam9x5-rtc", .data = &ws_info[1] }, 216 + { .compatible = "microchip,sam9x60-rtc", .data = &ws_info[1] }, 217 217 { .compatible = "atmel,at91rm9200-ohci", .data = &ws_info[2] }, 218 218 { .compatible = "usb-ohci", .data = &ws_info[2] }, 219 219 { .compatible = "atmel,at91sam9g45-ehci", .data = &ws_info[2] }, 220 220 { .compatible = "usb-ehci", .data = &ws_info[2] }, 221 - { .compatible = "atmel,at91sam9260-rtt", .data = &ws_info[4] }, 221 + { .compatible = "microchip,sam9x60-rtt", .data = &ws_info[4] }, 222 222 { .compatible = "cdns,sam9x60-macb", .data = &ws_info[5] }, 223 223 { /* sentinel */ } 224 224 }; 225 225 226 226 static const struct of_device_id sama7g5_ws_ids[] = { 227 - { .compatible = "atmel,at91sam9x5-rtc", .data = &ws_info[1] }, 227 + { .compatible = "microchip,sama7g5-rtc", .data = &ws_info[1] }, 228 228 { .compatible = "microchip,sama7g5-ohci", .data = &ws_info[2] }, 229 229 { .compatible = "usb-ohci", .data = &ws_info[2] }, 230 230 { .compatible = "atmel,at91sam9g45-ehci", .data = &ws_info[2] }, 231 231 { .compatible = "usb-ehci", .data = &ws_info[2] }, 232 232 { .compatible = "microchip,sama7g5-sdhci", .data = &ws_info[3] }, 233 - { .compatible = "atmel,at91sam9260-rtt", .data = &ws_info[4] }, 233 + { .compatible = "microchip,sama7g5-rtt", .data = &ws_info[4] }, 234 234 { /* sentinel */ } 235 235 }; 236 236 ··· 1079 1079 return ret; 1080 1080 } 1081 1081 1082 - 
static void at91_pm_secure_init(void) 1082 + static void __init at91_pm_secure_init(void) 1083 1083 { 1084 1084 int suspend_mode; 1085 1085 struct arm_smccc_res res;
+2
arch/arm/mach-meson/platsmp.c
··· 71 71 } 72 72 73 73 sram_base = of_iomap(node, 0); 74 + of_node_put(node); 74 75 if (!sram_base) { 75 76 pr_err("Couldn't map SRAM registers\n"); 76 77 return; ··· 92 91 } 93 92 94 93 scu_base = of_iomap(node, 0); 94 + of_node_put(node); 95 95 if (!scu_base) { 96 96 pr_err("Couldn't map SCU registers\n"); 97 97 return;
+4 -2
arch/arm/xen/p2m.c
··· 63 63 64 64 unsigned long __pfn_to_mfn(unsigned long pfn) 65 65 { 66 - struct rb_node *n = phys_to_mach.rb_node; 66 + struct rb_node *n; 67 67 struct xen_p2m_entry *entry; 68 68 unsigned long irqflags; 69 69 70 70 read_lock_irqsave(&p2m_lock, irqflags); 71 + n = phys_to_mach.rb_node; 71 72 while (n) { 72 73 entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys); 73 74 if (entry->pfn <= pfn && ··· 153 152 int rc; 154 153 unsigned long irqflags; 155 154 struct xen_p2m_entry *p2m_entry; 156 - struct rb_node *n = phys_to_mach.rb_node; 155 + struct rb_node *n; 157 156 158 157 if (mfn == INVALID_P2M_ENTRY) { 159 158 write_lock_irqsave(&p2m_lock, irqflags); 159 + n = phys_to_mach.rb_node; 160 160 while (n) { 161 161 p2m_entry = rb_entry(n, struct xen_p2m_entry, rbnode_phys); 162 162 if (p2m_entry->pfn <= pfn &&
+44 -44
arch/arm64/boot/dts/freescale/imx8mp-evk.dts
··· 395 395 &iomuxc { 396 396 pinctrl_eqos: eqosgrp { 397 397 fsl,pins = < 398 - MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3 399 - MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3 400 - MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91 401 - MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91 402 - MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91 403 - MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91 404 - MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91 405 - MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91 406 - MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f 407 - MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f 408 - MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f 409 - MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f 410 - MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f 411 - MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f 412 - MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22 0x19 398 + MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2 399 + MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2 400 + MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90 401 + MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90 402 + MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90 403 + MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90 404 + MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90 405 + MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90 406 + MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16 407 + MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16 408 + MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16 409 + MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16 410 + MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16 411 + MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16 412 + MX8MP_IOMUXC_SAI2_RXC__GPIO4_IO22 0x10 413 413 >; 414 414 }; 415 415 416 416 pinctrl_fec: fecgrp { 417 417 fsl,pins = < 418 - MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC 0x3 419 - MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO 0x3 420 - MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x91 421 - MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x91 422 - 
MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x91 423 - MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x91 424 - MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x91 425 - MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x91 426 - MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x1f 427 - MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x1f 428 - MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x1f 429 - MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x1f 430 - MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x1f 431 - MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x1f 432 - MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02 0x19 418 + MX8MP_IOMUXC_SAI1_RXD2__ENET1_MDC 0x2 419 + MX8MP_IOMUXC_SAI1_RXD3__ENET1_MDIO 0x2 420 + MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x90 421 + MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x90 422 + MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x90 423 + MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x90 424 + MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x90 425 + MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x90 426 + MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x16 427 + MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x16 428 + MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x16 429 + MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x16 430 + MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x16 431 + MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x16 432 + MX8MP_IOMUXC_SAI1_RXD0__GPIO4_IO02 0x10 433 433 >; 434 434 }; 435 435 ··· 461 461 462 462 pinctrl_gpio_led: gpioledgrp { 463 463 fsl,pins = < 464 - MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16 0x19 464 + MX8MP_IOMUXC_NAND_READY_B__GPIO3_IO16 0x140 465 465 >; 466 466 }; 467 467 468 468 pinctrl_i2c1: i2c1grp { 469 469 fsl,pins = < 470 - MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c3 471 - MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c3 470 + MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c2 471 + MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c2 472 472 >; 473 473 }; 474 474 475 475 pinctrl_i2c3: i2c3grp { 476 476 fsl,pins = < 477 - MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c3 478 - MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c3 477 + 
MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c2 478 + MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c2 479 479 >; 480 480 }; 481 481 482 482 pinctrl_i2c5: i2c5grp { 483 483 fsl,pins = < 484 - MX8MP_IOMUXC_SPDIF_RX__I2C5_SDA 0x400001c3 485 - MX8MP_IOMUXC_SPDIF_TX__I2C5_SCL 0x400001c3 484 + MX8MP_IOMUXC_SPDIF_RX__I2C5_SDA 0x400001c2 485 + MX8MP_IOMUXC_SPDIF_TX__I2C5_SCL 0x400001c2 486 486 >; 487 487 }; 488 488 ··· 500 500 501 501 pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp { 502 502 fsl,pins = < 503 - MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x41 503 + MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40 504 504 >; 505 505 }; 506 506 507 507 pinctrl_uart2: uart2grp { 508 508 fsl,pins = < 509 - MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x49 510 - MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x49 509 + MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x140 510 + MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x140 511 511 >; 512 512 }; 513 513 514 514 pinctrl_usb1_vbus: usb1grp { 515 515 fsl,pins = < 516 - MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR 0x19 516 + MX8MP_IOMUXC_GPIO1_IO14__USB2_OTG_PWR 0x10 517 517 >; 518 518 }; 519 519 ··· 525 525 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d0 526 526 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d0 527 527 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d0 528 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 528 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 529 529 >; 530 530 }; 531 531 ··· 537 537 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4 538 538 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4 539 539 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4 540 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 540 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 541 541 >; 542 542 }; 543 543 ··· 549 549 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d6 550 550 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d6 551 551 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d6 552 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 552 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 553 553 >; 554 554 }; 555 555
+20 -20
arch/arm64/boot/dts/freescale/imx8mp-icore-mx8mp-edimm2.2.dts
··· 110 110 &iomuxc { 111 111 pinctrl_eqos: eqosgrp { 112 112 fsl,pins = < 113 - MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3 114 - MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3 115 - MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91 116 - MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91 117 - MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91 118 - MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91 119 - MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91 120 - MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91 121 - MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f 122 - MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f 123 - MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f 124 - MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f 125 - MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f 126 - MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f 127 - MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x19 113 + MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2 114 + MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2 115 + MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90 116 + MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90 117 + MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90 118 + MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90 119 + MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90 120 + MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90 121 + MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16 122 + MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16 123 + MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16 124 + MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16 125 + MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16 126 + MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16 127 + MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x10 128 128 >; 129 129 }; 130 130 131 131 pinctrl_uart2: uart2grp { 132 132 fsl,pins = < 133 - MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x49 134 - MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x49 133 + MX8MP_IOMUXC_UART2_RXD__UART2_DCE_RX 0x40 134 + MX8MP_IOMUXC_UART2_TXD__UART2_DCE_TX 0x40 135 
135 >; 136 136 }; 137 137 ··· 151 151 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d0 152 152 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d0 153 153 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d0 154 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 154 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 155 155 >; 156 156 }; 157 157 ··· 163 163 164 164 pinctrl_reg_usb1: regusb1grp { 165 165 fsl,pins = < 166 - MX8MP_IOMUXC_GPIO1_IO14__GPIO1_IO14 0x19 166 + MX8MP_IOMUXC_GPIO1_IO14__GPIO1_IO14 0x10 167 167 >; 168 168 }; 169 169 170 170 pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp { 171 171 fsl,pins = < 172 - MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x41 172 + MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40 173 173 >; 174 174 }; 175 175 };
+24 -24
arch/arm64/boot/dts/freescale/imx8mp-phyboard-pollux-rdk.dts
··· 116 116 &iomuxc { 117 117 pinctrl_eqos: eqosgrp { 118 118 fsl,pins = < 119 - MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3 120 - MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3 121 - MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91 122 - MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91 123 - MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91 124 - MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91 125 - MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91 126 - MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91 127 - MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f 128 - MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f 129 - MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f 130 - MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f 131 - MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f 132 - MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f 119 + MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2 120 + MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2 121 + MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90 122 + MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90 123 + MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90 124 + MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90 125 + MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90 126 + MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90 127 + MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16 128 + MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16 129 + MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16 130 + MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16 131 + MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16 132 + MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16 133 133 MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20 0x10 134 134 >; 135 135 }; 136 136 137 137 pinctrl_i2c2: i2c2grp { 138 138 fsl,pins = < 139 - MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c3 140 - MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c3 139 + MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c2 140 + MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c2 141 141 >; 142 142 }; 143 143 144 144 
pinctrl_i2c2_gpio: i2c2gpiogrp { 145 145 fsl,pins = < 146 - MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16 0x1e3 147 - MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17 0x1e3 146 + MX8MP_IOMUXC_I2C2_SCL__GPIO5_IO16 0x1e2 147 + MX8MP_IOMUXC_I2C2_SDA__GPIO5_IO17 0x1e2 148 148 >; 149 149 }; 150 150 151 151 pinctrl_reg_usdhc2_vmmc: regusdhc2vmmcgrp { 152 152 fsl,pins = < 153 - MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x41 153 + MX8MP_IOMUXC_SD2_RESET_B__GPIO2_IO19 0x40 154 154 >; 155 155 }; 156 156 157 157 pinctrl_uart1: uart1grp { 158 158 fsl,pins = < 159 - MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX 0x49 160 - MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX 0x49 159 + MX8MP_IOMUXC_UART1_RXD__UART1_DCE_RX 0x40 160 + MX8MP_IOMUXC_UART1_TXD__UART1_DCE_TX 0x40 161 161 >; 162 162 }; 163 163 ··· 175 175 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d0 176 176 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d0 177 177 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d0 178 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 178 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 179 179 >; 180 180 }; 181 181 ··· 187 187 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d4 188 188 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d4 189 189 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d4 190 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 190 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 191 191 >; 192 192 }; 193 193 ··· 199 199 MX8MP_IOMUXC_SD2_DATA1__USDHC2_DATA1 0x1d6 200 200 MX8MP_IOMUXC_SD2_DATA2__USDHC2_DATA2 0x1d6 201 201 MX8MP_IOMUXC_SD2_DATA3__USDHC2_DATA3 0x1d6 202 - MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc1 202 + MX8MP_IOMUXC_GPIO1_IO04__USDHC2_VSELECT 0xc0 203 203 >; 204 204 }; 205 205 };
+58 -58
arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
··· 622 622 623 623 pinctrl_hog: hoggrp { 624 624 fsl,pins = < 625 - MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09 0x40000041 /* DIO0 */ 626 - MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11 0x40000041 /* DIO1 */ 627 - MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14 0x40000041 /* M2SKT_OFF# */ 628 - MX8MP_IOMUXC_SD2_DATA2__GPIO2_IO17 0x40000159 /* PCIE1_WDIS# */ 629 - MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18 0x40000159 /* PCIE2_WDIS# */ 630 - MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14 0x40000159 /* PCIE3_WDIS# */ 631 - MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06 0x40000041 /* M2SKT_RST# */ 632 - MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18 0x40000159 /* M2SKT_WDIS# */ 633 - MX8MP_IOMUXC_NAND_ALE__GPIO3_IO00 0x40000159 /* M2SKT_GDIS# */ 625 + MX8MP_IOMUXC_GPIO1_IO09__GPIO1_IO09 0x40000040 /* DIO0 */ 626 + MX8MP_IOMUXC_GPIO1_IO11__GPIO1_IO11 0x40000040 /* DIO1 */ 627 + MX8MP_IOMUXC_NAND_DQS__GPIO3_IO14 0x40000040 /* M2SKT_OFF# */ 628 + MX8MP_IOMUXC_SD2_DATA2__GPIO2_IO17 0x40000150 /* PCIE1_WDIS# */ 629 + MX8MP_IOMUXC_SD2_DATA3__GPIO2_IO18 0x40000150 /* PCIE2_WDIS# */ 630 + MX8MP_IOMUXC_SD2_CMD__GPIO2_IO14 0x40000150 /* PCIE3_WDIS# */ 631 + MX8MP_IOMUXC_NAND_DATA00__GPIO3_IO06 0x40000040 /* M2SKT_RST# */ 632 + MX8MP_IOMUXC_SAI1_TXD6__GPIO4_IO18 0x40000150 /* M2SKT_WDIS# */ 633 + MX8MP_IOMUXC_NAND_ALE__GPIO3_IO00 0x40000150 /* M2SKT_GDIS# */ 634 634 MX8MP_IOMUXC_SAI3_TXD__GPIO5_IO01 0x40000104 /* UART_TERM */ 635 635 MX8MP_IOMUXC_SAI3_TXFS__GPIO4_IO31 0x40000104 /* UART_RS485 */ 636 636 MX8MP_IOMUXC_SAI3_TXC__GPIO5_IO00 0x40000104 /* UART_HALF */ ··· 639 639 640 640 pinctrl_accel: accelgrp { 641 641 fsl,pins = < 642 - MX8MP_IOMUXC_GPIO1_IO07__GPIO1_IO07 0x159 642 + MX8MP_IOMUXC_GPIO1_IO07__GPIO1_IO07 0x150 643 643 >; 644 644 }; 645 645 646 646 pinctrl_eqos: eqosgrp { 647 647 fsl,pins = < 648 - MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x3 649 - MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x3 650 - MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x91 651 - MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x91 652 - MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x91 
653 - MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x91 654 - MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x91 655 - MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x91 656 - MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x1f 657 - MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x1f 658 - MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x1f 659 - MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x1f 660 - MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x1f 661 - MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x1f 662 - MX8MP_IOMUXC_SAI3_RXD__GPIO4_IO30 0x141 /* RST# */ 663 - MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28 0x159 /* IRQ# */ 648 + MX8MP_IOMUXC_ENET_MDC__ENET_QOS_MDC 0x2 649 + MX8MP_IOMUXC_ENET_MDIO__ENET_QOS_MDIO 0x2 650 + MX8MP_IOMUXC_ENET_RD0__ENET_QOS_RGMII_RD0 0x90 651 + MX8MP_IOMUXC_ENET_RD1__ENET_QOS_RGMII_RD1 0x90 652 + MX8MP_IOMUXC_ENET_RD2__ENET_QOS_RGMII_RD2 0x90 653 + MX8MP_IOMUXC_ENET_RD3__ENET_QOS_RGMII_RD3 0x90 654 + MX8MP_IOMUXC_ENET_RXC__CCM_ENET_QOS_CLOCK_GENERATE_RX_CLK 0x90 655 + MX8MP_IOMUXC_ENET_RX_CTL__ENET_QOS_RGMII_RX_CTL 0x90 656 + MX8MP_IOMUXC_ENET_TD0__ENET_QOS_RGMII_TD0 0x16 657 + MX8MP_IOMUXC_ENET_TD1__ENET_QOS_RGMII_TD1 0x16 658 + MX8MP_IOMUXC_ENET_TD2__ENET_QOS_RGMII_TD2 0x16 659 + MX8MP_IOMUXC_ENET_TD3__ENET_QOS_RGMII_TD3 0x16 660 + MX8MP_IOMUXC_ENET_TX_CTL__ENET_QOS_RGMII_TX_CTL 0x16 661 + MX8MP_IOMUXC_ENET_TXC__CCM_ENET_QOS_CLOCK_GENERATE_TX_CLK 0x16 662 + MX8MP_IOMUXC_SAI3_RXD__GPIO4_IO30 0x140 /* RST# */ 663 + MX8MP_IOMUXC_SAI3_RXFS__GPIO4_IO28 0x150 /* IRQ# */ 664 664 >; 665 665 }; 666 666 667 667 pinctrl_fec: fecgrp { 668 668 fsl,pins = < 669 - MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x91 670 - MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x91 671 - MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x91 672 - MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x91 673 - MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x91 674 - MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x91 675 - MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x1f 676 - 
MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x1f 677 - MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x1f 678 - MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x1f 679 - MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x1f 680 - MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x1f 681 - MX8MP_IOMUXC_SAI1_RXFS__ENET1_1588_EVENT0_IN 0x141 682 - MX8MP_IOMUXC_SAI1_RXC__ENET1_1588_EVENT0_OUT 0x141 669 + MX8MP_IOMUXC_SAI1_RXD4__ENET1_RGMII_RD0 0x90 670 + MX8MP_IOMUXC_SAI1_RXD5__ENET1_RGMII_RD1 0x90 671 + MX8MP_IOMUXC_SAI1_RXD6__ENET1_RGMII_RD2 0x90 672 + MX8MP_IOMUXC_SAI1_RXD7__ENET1_RGMII_RD3 0x90 673 + MX8MP_IOMUXC_SAI1_TXC__ENET1_RGMII_RXC 0x90 674 + MX8MP_IOMUXC_SAI1_TXFS__ENET1_RGMII_RX_CTL 0x90 675 + MX8MP_IOMUXC_SAI1_TXD0__ENET1_RGMII_TD0 0x16 676 + MX8MP_IOMUXC_SAI1_TXD1__ENET1_RGMII_TD1 0x16 677 + MX8MP_IOMUXC_SAI1_TXD2__ENET1_RGMII_TD2 0x16 678 + MX8MP_IOMUXC_SAI1_TXD3__ENET1_RGMII_TD3 0x16 679 + MX8MP_IOMUXC_SAI1_TXD4__ENET1_RGMII_TX_CTL 0x16 680 + MX8MP_IOMUXC_SAI1_TXD5__ENET1_RGMII_TXC 0x16 681 + MX8MP_IOMUXC_SAI1_RXFS__ENET1_1588_EVENT0_IN 0x140 682 + MX8MP_IOMUXC_SAI1_RXC__ENET1_1588_EVENT0_OUT 0x140 683 683 >; 684 684 }; 685 685 ··· 692 692 693 693 pinctrl_gsc: gscgrp { 694 694 fsl,pins = < 695 - MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20 0x159 695 + MX8MP_IOMUXC_SAI1_MCLK__GPIO4_IO20 0x150 696 696 >; 697 697 }; 698 698 699 699 pinctrl_i2c1: i2c1grp { 700 700 fsl,pins = < 701 - MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c3 702 - MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c3 701 + MX8MP_IOMUXC_I2C1_SCL__I2C1_SCL 0x400001c2 702 + MX8MP_IOMUXC_I2C1_SDA__I2C1_SDA 0x400001c2 703 703 >; 704 704 }; 705 705 706 706 pinctrl_i2c2: i2c2grp { 707 707 fsl,pins = < 708 - MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c3 709 - MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c3 708 + MX8MP_IOMUXC_I2C2_SCL__I2C2_SCL 0x400001c2 709 + MX8MP_IOMUXC_I2C2_SDA__I2C2_SDA 0x400001c2 710 710 >; 711 711 }; 712 712 713 713 pinctrl_i2c3: i2c3grp { 714 714 fsl,pins = < 715 - MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c3 716 - 
MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c3 715 + MX8MP_IOMUXC_I2C3_SCL__I2C3_SCL 0x400001c2 716 + MX8MP_IOMUXC_I2C3_SDA__I2C3_SDA 0x400001c2 717 717 >; 718 718 }; 719 719 720 720 pinctrl_i2c4: i2c4grp { 721 721 fsl,pins = < 722 - MX8MP_IOMUXC_I2C4_SCL__I2C4_SCL 0x400001c3 723 - MX8MP_IOMUXC_I2C4_SDA__I2C4_SDA 0x400001c3 722 + MX8MP_IOMUXC_I2C4_SCL__I2C4_SCL 0x400001c2 723 + MX8MP_IOMUXC_I2C4_SDA__I2C4_SDA 0x400001c2 724 724 >; 725 725 }; 726 726 727 727 pinctrl_ksz: kszgrp { 728 728 fsl,pins = < 729 - MX8MP_IOMUXC_SAI3_RXC__GPIO4_IO29 0x159 /* IRQ# */ 730 - MX8MP_IOMUXC_SAI3_MCLK__GPIO5_IO02 0x141 /* RST# */ 729 + MX8MP_IOMUXC_SAI3_RXC__GPIO4_IO29 0x150 /* IRQ# */ 730 + MX8MP_IOMUXC_SAI3_MCLK__GPIO5_IO02 0x140 /* RST# */ 731 731 >; 732 732 }; 733 733 734 734 pinctrl_gpio_leds: ledgrp { 735 735 fsl,pins = < 736 - MX8MP_IOMUXC_SD2_DATA0__GPIO2_IO15 0x19 737 - MX8MP_IOMUXC_SD2_DATA1__GPIO2_IO16 0x19 736 + MX8MP_IOMUXC_SD2_DATA0__GPIO2_IO15 0x10 737 + MX8MP_IOMUXC_SD2_DATA1__GPIO2_IO16 0x10 738 738 >; 739 739 }; 740 740 741 741 pinctrl_pmic: pmicgrp { 742 742 fsl,pins = < 743 - MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x141 743 + MX8MP_IOMUXC_NAND_DATA01__GPIO3_IO07 0x140 744 744 >; 745 745 }; 746 746 747 747 pinctrl_pps: ppsgrp { 748 748 fsl,pins = < 749 - MX8MP_IOMUXC_GPIO1_IO12__GPIO1_IO12 0x141 749 + MX8MP_IOMUXC_GPIO1_IO12__GPIO1_IO12 0x140 750 750 >; 751 751 }; 752 752 ··· 758 758 759 759 pinctrl_reg_usb2: regusb2grp { 760 760 fsl,pins = < 761 - MX8MP_IOMUXC_GPIO1_IO06__GPIO1_IO06 0x141 761 + MX8MP_IOMUXC_GPIO1_IO06__GPIO1_IO06 0x140 762 762 >; 763 763 }; 764 764 765 765 pinctrl_reg_wifi: regwifigrp { 766 766 fsl,pins = < 767 - MX8MP_IOMUXC_NAND_DATA03__GPIO3_IO09 0x119 767 + MX8MP_IOMUXC_NAND_DATA03__GPIO3_IO09 0x110 768 768 >; 769 769 }; 770 770 ··· 811 811 812 812 pinctrl_uart3_gpio: uart3gpiogrp { 813 813 fsl,pins = < 814 - MX8MP_IOMUXC_NAND_DATA02__GPIO3_IO08 0x119 814 + MX8MP_IOMUXC_NAND_DATA02__GPIO3_IO08 0x110 815 815 >; 816 816 }; 817 817
+1 -1
arch/arm64/boot/dts/freescale/imx8mp.dtsi
··· 595 595 pgc_ispdwp: power-domain@18 {
596 596 #power-domain-cells = <0>;
597 597 reg = <IMX8MP_POWER_DOMAIN_MEDIAMIX_ISPDWP>;
598 - clocks = <&clk IMX8MP_CLK_MEDIA_ISP_DIV>;
598 + clocks = <&clk IMX8MP_CLK_MEDIA_ISP_ROOT>;
599 599 };
600 600 };
601 601 };
+1 -1
arch/arm64/boot/dts/qcom/msm8992-lg-bullhead.dtsi
··· 74 74 vdd_l17_29-supply = <&vph_pwr>;
75 75 vdd_l20_21-supply = <&vph_pwr>;
76 76 vdd_l25-supply = <&pm8994_s5>;
77 - vdd_lvs1_2 = <&pm8994_s4>;
77 + vdd_lvs1_2-supply = <&pm8994_s4>;
78 78 
79 79 /* S1, S2, S6 and S12 are managed by RPMPD */
80 80 
+1 -1
arch/arm64/boot/dts/qcom/msm8992-xiaomi-libra.dts
··· 171 171 vdd_l17_29-supply = <&vph_pwr>;
172 172 vdd_l20_21-supply = <&vph_pwr>;
173 173 vdd_l25-supply = <&pm8994_s5>;
174 - vdd_lvs1_2 = <&pm8994_s4>;
174 + vdd_lvs1_2-supply = <&pm8994_s4>;
175 175 
176 176 /* S1, S2, S6 and S12 are managed by RPMPD */
177 177 
+2 -2
arch/arm64/boot/dts/qcom/msm8994.dtsi
··· 100 100 CPU6: cpu@102 {
101 101 device_type = "cpu";
102 102 compatible = "arm,cortex-a57";
103 - reg = <0x0 0x101>;
103 + reg = <0x0 0x102>;
104 104 enable-method = "psci";
105 105 next-level-cache = <&L2_1>;
106 106 };
··· 108 108 CPU7: cpu@103 {
109 109 device_type = "cpu";
110 110 compatible = "arm,cortex-a57";
111 - reg = <0x0 0x101>;
111 + reg = <0x0 0x103>;
112 112 enable-method = "psci";
113 113 next-level-cache = <&L2_1>;
114 114 };
+1 -1
arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
··· 5 5 * Copyright 2021 Google LLC.
6 6 */
7 7 
8 - #include "sc7180-trogdor.dtsi"
8 + /* This file must be included after sc7180-trogdor.dtsi */
9 9 
10 10 / {
11 11 /* BOARD-SPECIFIC TOP LEVEL NODES */
+1 -1
arch/arm64/boot/dts/qcom/sc7180-trogdor-lazor.dtsi
··· 5 5 * Copyright 2020 Google LLC.
6 6 */
7 7 
8 - #include "sc7180-trogdor.dtsi"
8 + /* This file must be included after sc7180-trogdor.dtsi */
9 9 
10 10 &ap_sar_sensor {
11 11 semtech,cs0-ground;
+1 -1
arch/arm64/boot/dts/qcom/sdm845.dtsi
··· 4244 4244 
4245 4245 power-domains = <&dispcc MDSS_GDSC>;
4246 4246 
4247 - clocks = <&gcc GCC_DISP_AHB_CLK>,
4247 + clocks = <&dispcc DISP_CC_MDSS_AHB_CLK>,
4248 4248 <&dispcc DISP_CC_MDSS_MDP_CLK>;
4249 4249 clock-names = "iface", "core";
4250 4250 
+12 -2
arch/arm64/boot/dts/qcom/sm8450.dtsi
··· 2853 2853 reg = <0x0 0x17100000 0x0 0x10000>, /* GICD */
2854 2854 <0x0 0x17180000 0x0 0x200000>; /* GICR * 8 */
2855 2855 interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
2856 + #address-cells = <2>;
2857 + #size-cells = <2>;
2858 + ranges;
2859 + 
2860 + gic_its: msi-controller@17140000 {
2861 + compatible = "arm,gic-v3-its";
2862 + reg = <0x0 0x17140000 0x0 0x20000>;
2863 + msi-controller;
2864 + #msi-cells = <1>;
2865 + };
2856 2866 };
2857 2867 
2858 2868 timer@17420000 {
··· 3047 3037 
3048 3038 iommus = <&apps_smmu 0xe0 0x0>;
3049 3039 
3050 - interconnects = <&aggre1_noc MASTER_UFS_MEM &mc_virt SLAVE_EBI1>,
3051 - <&gem_noc MASTER_APPSS_PROC &config_noc SLAVE_UFS_MEM_CFG>;
3040 + interconnects = <&aggre1_noc MASTER_UFS_MEM 0 &mc_virt SLAVE_EBI1 0>,
3041 + <&gem_noc MASTER_APPSS_PROC 0 &config_noc SLAVE_UFS_MEM_CFG 0>;
3052 3042 interconnect-names = "ufs-ddr", "cpu-ufs";
3053 3043 clock-names =
3054 3044 "core_clk",
+21 -9
arch/arm64/mm/hugetlbpage.c
··· 214 214 return orig_pte;
215 215 }
216 216 
217 + static pte_t get_clear_contig_flush(struct mm_struct *mm,
218 + unsigned long addr,
219 + pte_t *ptep,
220 + unsigned long pgsize,
221 + unsigned long ncontig)
222 + {
223 + pte_t orig_pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
224 + struct vm_area_struct vma = TLB_FLUSH_VMA(mm, 0);
225 + 
226 + flush_tlb_range(&vma, addr, addr + (pgsize * ncontig));
227 + return orig_pte;
228 + }
229 + 
217 230 /*
218 231 * Changing some bits of contiguous entries requires us to follow a
219 232 * Break-Before-Make approach, breaking the whole contiguous set
··· 460 447 int ncontig, i;
461 448 size_t pgsize = 0;
462 449 unsigned long pfn = pte_pfn(pte), dpfn;
450 + struct mm_struct *mm = vma->vm_mm;
463 451 pgprot_t hugeprot;
464 452 pte_t orig_pte;
465 453 
466 454 if (!pte_cont(pte))
467 455 return ptep_set_access_flags(vma, addr, ptep, pte, dirty);
468 456 
469 - ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
457 + ncontig = find_num_contig(mm, addr, ptep, &pgsize);
470 458 dpfn = pgsize >> PAGE_SHIFT;
471 459 
472 460 if (!__cont_access_flags_changed(ptep, pte, ncontig))
473 461 return 0;
474 462 
475 - orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
463 + orig_pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
476 464 
477 465 /* Make sure we don't lose the dirty or young state */
478 466 if (pte_dirty(orig_pte))
··· 484 470 
485 471 hugeprot = pte_pgprot(pte);
486 472 for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
487 - set_pte_at(vma->vm_mm, addr, ptep, pfn_pte(pfn, hugeprot));
473 + set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
488 474 
489 475 return 1;
490 476 }
··· 506 492 ncontig = find_num_contig(mm, addr, ptep, &pgsize);
507 493 dpfn = pgsize >> PAGE_SHIFT;
508 494 
509 - pte = get_clear_contig(mm, addr, ptep, pgsize, ncontig);
495 + pte = get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
510 496 pte = pte_wrprotect(pte);
511 497 
512 498 hugeprot = pte_pgprot(pte);
··· 519 505 pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
520 506 unsigned long addr, pte_t *ptep)
521 507 {
508 + struct mm_struct *mm = vma->vm_mm;
522 509 size_t pgsize;
523 510 int ncontig;
524 - pte_t orig_pte;
525 511 
526 512 if (!pte_cont(READ_ONCE(*ptep)))
527 513 return ptep_clear_flush(vma, addr, ptep);
528 514 
529 - ncontig = find_num_contig(vma->vm_mm, addr, ptep, &pgsize);
530 - orig_pte = get_clear_contig(vma->vm_mm, addr, ptep, pgsize, ncontig);
531 - flush_tlb_range(vma, addr, addr + pgsize * ncontig);
532 - return orig_pte;
515 + ncontig = find_num_contig(mm, addr, ptep, &pgsize);
516 + return get_clear_contig_flush(mm, addr, ptep, pgsize, ncontig);
533 517 }
534 518 
535 519 static int __init hugetlbpage_init(void)
+1 -1
arch/openrisc/kernel/unwinder.c
··· 25 25 /*
26 26 * Verify a frameinfo structure. The return address should be a valid text
27 27 * address. The frame pointer may be null if its the last frame, otherwise
28 - * the frame pointer should point to a location in the stack after the the
28 + * the frame pointer should point to a location in the stack after the
29 29 * top of the next frame up.
30 30 */
31 31 static inline int or1k_frameinfo_valid(struct or1k_frameinfo *frameinfo)
+5
arch/parisc/kernel/asm-offsets.c
··· 224 224 BLANK(); 225 225 DEFINE(ASM_SIGFRAME_SIZE, PARISC_RT_SIGFRAME_SIZE); 226 226 DEFINE(SIGFRAME_CONTEXT_REGS, offsetof(struct rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE); 227 + #ifdef CONFIG_64BIT 227 228 DEFINE(ASM_SIGFRAME_SIZE32, PARISC_RT_SIGFRAME_SIZE32); 228 229 DEFINE(SIGFRAME_CONTEXT_REGS32, offsetof(struct compat_rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE32); 230 + #else 231 + DEFINE(ASM_SIGFRAME_SIZE32, PARISC_RT_SIGFRAME_SIZE); 232 + DEFINE(SIGFRAME_CONTEXT_REGS32, offsetof(struct rt_sigframe, uc.uc_mcontext) - PARISC_RT_SIGFRAME_SIZE); 233 + #endif 229 234 BLANK(); 230 235 DEFINE(ICACHE_BASE, offsetof(struct pdc_cache_info, ic_base)); 231 236 DEFINE(ICACHE_STRIDE, offsetof(struct pdc_cache_info, ic_stride));
+1 -1
arch/parisc/kernel/unaligned.c
··· 146 146 " depw %%r0,31,2,%4\n"
147 147 "1: ldw 0(%%sr1,%4),%0\n"
148 148 "2: ldw 4(%%sr1,%4),%3\n"
149 - " subi 32,%4,%2\n"
149 + " subi 32,%2,%2\n"
150 150 " mtctl %2,11\n"
151 151 " vshd %0,%3,%0\n"
152 152 "3: \n"
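The hunk above corrects the SAR shift count: it must be derived from the low bits of the original (unaligned) address in `%2`, not from the already word-aligned copy in `%4`. As a rough userspace sketch — plain C, not the parisc fixup itself, with helper names invented for the demo — the technique being emulated is an unaligned big-endian 32-bit load built from two aligned loads plus a funnel shift:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace sketch (NOT the parisc kernel code) of the fixup's
 * technique: two aligned word loads merged by a shift. The shift
 * count comes from the LOW bits of the original address; taking it
 * from the aligned address is exactly the bug fixed above.
 */
static uint32_t load_be32(const uint8_t *p)
{
	return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
	       ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}

static uint32_t unaligned_be32(const uint8_t *base, size_t addr)
{
	size_t aligned = addr & ~(size_t)3;          /* depw %%r0,31,2 */
	uint32_t hi = load_be32(base + aligned);     /* 1: ldw 0(...)  */
	uint32_t lo = load_be32(base + aligned + 4); /* 2: ldw 4(...)  */
	unsigned int shift = (addr & 3) * 8;         /* low address bits */

	if (shift == 0)
		return hi;
	return (hi << shift) | (lo >> (32 - shift)); /* vshd-style merge */
}
```

The `shift == 0` special case also avoids the undefined 32-bit shift that the hardware `vshd` handles natively.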
+4
arch/powerpc/Kconfig
··· 358 358 def_bool y
359 359 depends on PPC_POWERNV || PPC_PSERIES
360 360 
361 + config ARCH_HAS_ADD_PAGES
362 + def_bool y
363 + depends on ARCH_ENABLE_MEMORY_HOTPLUG
364 + 
361 365 config PPC_DCR_NATIVE
362 366 bool
363 367 
+9
arch/powerpc/include/asm/bpf_perf_event.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */
2 + #ifndef _ASM_POWERPC_BPF_PERF_EVENT_H
3 + #define _ASM_POWERPC_BPF_PERF_EVENT_H
4 + 
5 + #include <asm/ptrace.h>
6 + 
7 + typedef struct user_pt_regs bpf_user_pt_regs_t;
8 + 
9 + #endif /* _ASM_POWERPC_BPF_PERF_EVENT_H */
-9
arch/powerpc/include/uapi/asm/bpf_perf_event.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
2 - #ifndef _UAPI__ASM_BPF_PERF_EVENT_H__
3 - #define _UAPI__ASM_BPF_PERF_EVENT_H__
4 - 
5 - #include <asm/ptrace.h>
6 - 
7 - typedef struct user_pt_regs bpf_user_pt_regs_t;
8 - 
9 - #endif /* _UAPI__ASM_BPF_PERF_EVENT_H__ */
+1 -1
arch/powerpc/kernel/prom_init_check.sh
··· 13 13 # If you really need to reference something from prom_init.o add
14 14 # it to the list below:
15 15 
16 - grep "^CONFIG_KASAN=y$" .config >/dev/null
16 + grep "^CONFIG_KASAN=y$" ${KCONFIG_CONFIG} >/dev/null
17 17 if [ $? -eq 0 ]
18 18 then
19 19 MEM_FUNCS="__memcpy __memset"
+32 -1
arch/powerpc/mm/mem.c
··· 105 105 vm_unmap_aliases(); 106 106 } 107 107 108 + /* 109 + * After memory hotplug the variables max_pfn, max_low_pfn and high_memory need 110 + * updating. 111 + */ 112 + static void update_end_of_memory_vars(u64 start, u64 size) 113 + { 114 + unsigned long end_pfn = PFN_UP(start + size); 115 + 116 + if (end_pfn > max_pfn) { 117 + max_pfn = end_pfn; 118 + max_low_pfn = end_pfn; 119 + high_memory = (void *)__va(max_pfn * PAGE_SIZE - 1) + 1; 120 + } 121 + } 122 + 123 + int __ref add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages, 124 + struct mhp_params *params) 125 + { 126 + int ret; 127 + 128 + ret = __add_pages(nid, start_pfn, nr_pages, params); 129 + if (ret) 130 + return ret; 131 + 132 + /* update max_pfn, max_low_pfn and high_memory */ 133 + update_end_of_memory_vars(start_pfn << PAGE_SHIFT, 134 + nr_pages << PAGE_SHIFT); 135 + 136 + return ret; 137 + } 138 + 108 139 int __ref arch_add_memory(int nid, u64 start, u64 size, 109 140 struct mhp_params *params) 110 141 { ··· 146 115 rc = arch_create_linear_mapping(nid, start, size, params); 147 116 if (rc) 148 117 return rc; 149 - rc = __add_pages(nid, start_pfn, nr_pages, params); 118 + rc = add_pages(nid, start_pfn, nr_pages, params); 150 119 if (rc) 151 120 arch_remove_linear_mapping(start, size); 152 121 return rc;
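The new `add_pages()`/`update_end_of_memory_vars()` pair above rounds the hot-plugged region's end address up to a page frame number before bumping `max_pfn`, so a region ending partway into a page still advances past that page. A small userspace sketch of that `PFN_UP` arithmetic — all `DEMO_`/`demo_` names and the 4 KiB page size are assumptions for illustration, not the kernel's definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Demo of the rounding used by update_end_of_memory_vars() above.
 * PFN_UP converts a byte address to a page frame number, rounding up.
 * PAGE_SHIFT = 12 (4 KiB pages) is assumed for the demo.
 */
#define DEMO_PAGE_SHIFT 12
#define DEMO_PAGE_SIZE  (1ULL << DEMO_PAGE_SHIFT)
#define DEMO_PFN_UP(x)  (((x) + DEMO_PAGE_SIZE - 1) >> DEMO_PAGE_SHIFT)

static uint64_t demo_max_pfn = 0x100000; /* hypothetical pre-hotplug max */

static void demo_update_end_of_memory(uint64_t start, uint64_t size)
{
	uint64_t end_pfn = DEMO_PFN_UP(start + size);

	if (end_pfn > demo_max_pfn)
		demo_max_pfn = end_pfn; /* max_low_pfn/high_memory follow suit */
}
```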
+3 -3
arch/powerpc/mm/nohash/book3e_pgtable.c
··· 96 96 pgdp = pgd_offset_k(ea);
97 97 p4dp = p4d_offset(pgdp, ea);
98 98 if (p4d_none(*p4dp)) {
99 - pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
100 - p4d_populate(&init_mm, p4dp, pmdp);
99 + pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
100 + p4d_populate(&init_mm, p4dp, pudp);
101 101 }
102 102 pudp = pud_offset(p4dp, ea);
103 103 if (pud_none(*pudp)) {
··· 106 106 }
107 107 pmdp = pmd_offset(pudp, ea);
108 108 if (!pmd_present(*pmdp)) {
109 - ptep = early_alloc_pgtable(PAGE_SIZE);
109 + ptep = early_alloc_pgtable(PTE_TABLE_SIZE);
110 110 pmd_populate_kernel(&init_mm, pmdp, ptep);
111 111 }
112 112 ptep = pte_offset_kernel(pmdp, ea);
+3 -2
arch/powerpc/sysdev/xive/spapr.c
··· 15 15 #include <linux/of_fdt.h>
16 16 #include <linux/slab.h>
17 17 #include <linux/spinlock.h>
18 + #include <linux/bitmap.h>
18 19 #include <linux/cpumask.h>
19 20 #include <linux/mm.h>
20 21 #include <linux/delay.h>
··· 58 57 spin_lock_init(&xibm->lock);
59 58 xibm->base = base;
60 59 xibm->count = count;
61 - xibm->bitmap = kzalloc(xibm->count, GFP_KERNEL);
60 + xibm->bitmap = bitmap_zalloc(xibm->count, GFP_KERNEL);
62 61 if (!xibm->bitmap) {
63 62 kfree(xibm);
64 63 return -ENOMEM;
··· 76 75 
77 76 list_for_each_entry_safe(xibm, tmp, &xive_irq_bitmaps, list) {
78 77 list_del(&xibm->list);
79 - kfree(xibm->bitmap);
78 + bitmap_free(xibm->bitmap);
80 79 kfree(xibm);
81 80 }
82 81 }
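The xive hunk above switches the irq bitmap from `kzalloc(xibm->count, ...)` to `bitmap_zalloc(xibm->count, ...)` because kzalloc's size argument is in bytes, while a bitmap of `count` interrupts needs `count` bits rounded up to whole unsigned longs. A userspace sketch of that sizing (mirroring, not reusing, the kernel's `BITS_TO_LONGS`/`bitmap_zalloc`; no GFP flags in userspace):

```c
#include <assert.h>
#include <limits.h>
#include <stdlib.h>

/* Demo versions of the kernel's bitmap sizing helpers. */
#define BITS_PER_LONG (CHAR_BIT * sizeof(unsigned long))
#define BITS_TO_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/* Allocate a zeroed bitmap of nbits bits, bitmap_zalloc()-style. */
static unsigned long *demo_bitmap_zalloc(unsigned int nbits)
{
	return calloc(BITS_TO_LONGS(nbits), sizeof(unsigned long));
}
```

With 64-bit longs, a 100-bit map needs two longs (16 bytes), where the old `kzalloc(100)` call allocated 100 bytes — harmlessly oversized here, but wrong in units.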
-1
arch/s390/Kconfig
··· 484 484 config KEXEC_FILE
485 485 bool "kexec file based system call"
486 486 select KEXEC_CORE
487 - select BUILD_BIN2C
488 487 depends on CRYPTO
489 488 depends on CRYPTO_SHA256
490 489 depends on CRYPTO_SHA256_S390
-217
arch/s390/crypto/arch_random.c
··· 4 4 * 5 5 * Copyright IBM Corp. 2017, 2020 6 6 * Author(s): Harald Freudenberger 7 - * 8 - * The s390_arch_random_generate() function may be called from random.c 9 - * in interrupt context. So this implementation does the best to be very 10 - * fast. There is a buffer of random data which is asynchronously checked 11 - * and filled by a workqueue thread. 12 - * If there are enough bytes in the buffer the s390_arch_random_generate() 13 - * just delivers these bytes. Otherwise false is returned until the 14 - * worker thread refills the buffer. 15 - * The worker fills the rng buffer by pulling fresh entropy from the 16 - * high quality (but slow) true hardware random generator. This entropy 17 - * is then spread over the buffer with an pseudo random generator PRNG. 18 - * As the arch_get_random_seed_long() fetches 8 bytes and the calling 19 - * function add_interrupt_randomness() counts this as 1 bit entropy the 20 - * distribution needs to make sure there is in fact 1 bit entropy contained 21 - * in 8 bytes of the buffer. The current values pull 32 byte entropy 22 - * and scatter this into a 2048 byte buffer. So 8 byte in the buffer 23 - * will contain 1 bit of entropy. 24 - * The worker thread is rescheduled based on the charge level of the 25 - * buffer but at least with 500 ms delay to avoid too much CPU consumption. 26 - * So the max. amount of rng data delivered via arch_get_random_seed is 27 - * limited to 4k bytes per second. 
28 7 */ 29 8 30 9 #include <linux/kernel.h> 31 10 #include <linux/atomic.h> 32 11 #include <linux/random.h> 33 - #include <linux/slab.h> 34 12 #include <linux/static_key.h> 35 - #include <linux/workqueue.h> 36 - #include <linux/moduleparam.h> 37 13 #include <asm/cpacf.h> 38 14 39 15 DEFINE_STATIC_KEY_FALSE(s390_arch_random_available); 40 16 41 17 atomic64_t s390_arch_random_counter = ATOMIC64_INIT(0); 42 18 EXPORT_SYMBOL(s390_arch_random_counter); 43 - 44 - #define ARCH_REFILL_TICKS (HZ/2) 45 - #define ARCH_PRNG_SEED_SIZE 32 46 - #define ARCH_RNG_BUF_SIZE 2048 47 - 48 - static DEFINE_SPINLOCK(arch_rng_lock); 49 - static u8 *arch_rng_buf; 50 - static unsigned int arch_rng_buf_idx; 51 - 52 - static void arch_rng_refill_buffer(struct work_struct *); 53 - static DECLARE_DELAYED_WORK(arch_rng_work, arch_rng_refill_buffer); 54 - 55 - bool s390_arch_random_generate(u8 *buf, unsigned int nbytes) 56 - { 57 - /* max hunk is ARCH_RNG_BUF_SIZE */ 58 - if (nbytes > ARCH_RNG_BUF_SIZE) 59 - return false; 60 - 61 - /* lock rng buffer */ 62 - if (!spin_trylock(&arch_rng_lock)) 63 - return false; 64 - 65 - /* try to resolve the requested amount of bytes from the buffer */ 66 - arch_rng_buf_idx -= nbytes; 67 - if (arch_rng_buf_idx < ARCH_RNG_BUF_SIZE) { 68 - memcpy(buf, arch_rng_buf + arch_rng_buf_idx, nbytes); 69 - atomic64_add(nbytes, &s390_arch_random_counter); 70 - spin_unlock(&arch_rng_lock); 71 - return true; 72 - } 73 - 74 - /* not enough bytes in rng buffer, refill is done asynchronously */ 75 - spin_unlock(&arch_rng_lock); 76 - 77 - return false; 78 - } 79 - EXPORT_SYMBOL(s390_arch_random_generate); 80 - 81 - static void arch_rng_refill_buffer(struct work_struct *unused) 82 - { 83 - unsigned int delay = ARCH_REFILL_TICKS; 84 - 85 - spin_lock(&arch_rng_lock); 86 - if (arch_rng_buf_idx > ARCH_RNG_BUF_SIZE) { 87 - /* buffer is exhausted and needs refill */ 88 - u8 seed[ARCH_PRNG_SEED_SIZE]; 89 - u8 prng_wa[240]; 90 - /* fetch ARCH_PRNG_SEED_SIZE bytes of entropy */ 91 - 
cpacf_trng(NULL, 0, seed, sizeof(seed)); 92 - /* blow this entropy up to ARCH_RNG_BUF_SIZE with PRNG */ 93 - memset(prng_wa, 0, sizeof(prng_wa)); 94 - cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED, 95 - &prng_wa, NULL, 0, seed, sizeof(seed)); 96 - cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN, 97 - &prng_wa, arch_rng_buf, ARCH_RNG_BUF_SIZE, NULL, 0); 98 - arch_rng_buf_idx = ARCH_RNG_BUF_SIZE; 99 - } 100 - delay += (ARCH_REFILL_TICKS * arch_rng_buf_idx) / ARCH_RNG_BUF_SIZE; 101 - spin_unlock(&arch_rng_lock); 102 - 103 - /* kick next check */ 104 - queue_delayed_work(system_long_wq, &arch_rng_work, delay); 105 - } 106 - 107 - /* 108 - * Here follows the implementation of s390_arch_get_random_long(). 109 - * 110 - * The random longs to be pulled by arch_get_random_long() are 111 - * prepared in an 4K buffer which is filled from the NIST 800-90 112 - * compliant s390 drbg. By default the random long buffer is refilled 113 - * 256 times before the drbg itself needs a reseed. The reseed of the 114 - * drbg is done with 32 bytes fetched from the high quality (but slow) 115 - * trng which is assumed to deliver 100% entropy. So the 32 * 8 = 256 116 - * bits of entropy are spread over 256 * 4KB = 1MB serving 131072 117 - * arch_get_random_long() invocations before reseeded. 118 - * 119 - * How often the 4K random long buffer is refilled with the drbg 120 - * before the drbg is reseeded can be adjusted. There is a module 121 - * parameter 's390_arch_rnd_long_drbg_reseed' accessible via 122 - * /sys/module/arch_random/parameters/rndlong_drbg_reseed 123 - * or as kernel command line parameter 124 - * arch_random.rndlong_drbg_reseed=<value> 125 - * This parameter tells how often the drbg fills the 4K buffer before 126 - * it is re-seeded by fresh entropy from the trng. 127 - * A value of 16 results in reseeding the drbg at every 16 * 4 KB = 64 128 - * KB with 32 bytes of fresh entropy pulled from the trng. So a value 129 - * of 16 would result in 256 bits entropy per 64 KB. 
130 - * A value of 256 results in 1MB of drbg output before a reseed of the 131 - * drbg is done. So this would spread the 256 bits of entropy among 1MB. 132 - * Setting this parameter to 0 forces the reseed to take place every 133 - * time the 4K buffer is depleted, so the entropy rises to 256 bits 134 - * entropy per 4K or 0.5 bit entropy per arch_get_random_long(). With 135 - * setting this parameter to negative values all this effort is 136 - * disabled, arch_get_random long() returns false and thus indicating 137 - * that the arch_get_random_long() feature is disabled at all. 138 - */ 139 - 140 - static unsigned long rndlong_buf[512]; 141 - static DEFINE_SPINLOCK(rndlong_lock); 142 - static int rndlong_buf_index; 143 - 144 - static int rndlong_drbg_reseed = 256; 145 - module_param_named(rndlong_drbg_reseed, rndlong_drbg_reseed, int, 0600); 146 - MODULE_PARM_DESC(rndlong_drbg_reseed, "s390 arch_get_random_long() drbg reseed"); 147 - 148 - static inline void refill_rndlong_buf(void) 149 - { 150 - static u8 prng_ws[240]; 151 - static int drbg_counter; 152 - 153 - if (--drbg_counter < 0) { 154 - /* need to re-seed the drbg */ 155 - u8 seed[32]; 156 - 157 - /* fetch seed from trng */ 158 - cpacf_trng(NULL, 0, seed, sizeof(seed)); 159 - /* seed drbg */ 160 - memset(prng_ws, 0, sizeof(prng_ws)); 161 - cpacf_prno(CPACF_PRNO_SHA512_DRNG_SEED, 162 - &prng_ws, NULL, 0, seed, sizeof(seed)); 163 - /* re-init counter for drbg */ 164 - drbg_counter = rndlong_drbg_reseed; 165 - } 166 - 167 - /* fill the arch_get_random_long buffer from drbg */ 168 - cpacf_prno(CPACF_PRNO_SHA512_DRNG_GEN, &prng_ws, 169 - (u8 *) rndlong_buf, sizeof(rndlong_buf), 170 - NULL, 0); 171 - } 172 - 173 - bool s390_arch_get_random_long(unsigned long *v) 174 - { 175 - bool rc = false; 176 - unsigned long flags; 177 - 178 - /* arch_get_random_long() disabled ? 
*/ 179 - if (rndlong_drbg_reseed < 0) 180 - return false; 181 - 182 - /* try to lock the random long lock */ 183 - if (!spin_trylock_irqsave(&rndlong_lock, flags)) 184 - return false; 185 - 186 - if (--rndlong_buf_index >= 0) { 187 - /* deliver next long value from the buffer */ 188 - *v = rndlong_buf[rndlong_buf_index]; 189 - rc = true; 190 - goto out; 191 - } 192 - 193 - /* buffer is depleted and needs refill */ 194 - if (in_interrupt()) { 195 - /* delay refill in interrupt context to next caller */ 196 - rndlong_buf_index = 0; 197 - goto out; 198 - } 199 - 200 - /* refill random long buffer */ 201 - refill_rndlong_buf(); 202 - rndlong_buf_index = ARRAY_SIZE(rndlong_buf); 203 - 204 - /* and provide one random long */ 205 - *v = rndlong_buf[--rndlong_buf_index]; 206 - rc = true; 207 - 208 - out: 209 - spin_unlock_irqrestore(&rndlong_lock, flags); 210 - return rc; 211 - } 212 - EXPORT_SYMBOL(s390_arch_get_random_long); 213 - 214 - static int __init s390_arch_random_init(void) 215 - { 216 - /* all the needed PRNO subfunctions available ? */ 217 - if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG) && 218 - cpacf_query_func(CPACF_PRNO, CPACF_PRNO_SHA512_DRNG_GEN)) { 219 - 220 - /* alloc arch random working buffer */ 221 - arch_rng_buf = kmalloc(ARCH_RNG_BUF_SIZE, GFP_KERNEL); 222 - if (!arch_rng_buf) 223 - return -ENOMEM; 224 - 225 - /* kick worker queue job to fill the random buffer */ 226 - queue_delayed_work(system_long_wq, 227 - &arch_rng_work, ARCH_REFILL_TICKS); 228 - 229 - /* enable arch random to the outside world */ 230 - static_branch_enable(&s390_arch_random_available); 231 - } 232 - 233 - return 0; 234 - } 235 - arch_initcall(s390_arch_random_init);
+7 -7
arch/s390/include/asm/archrandom.h
··· 15 15 
16 16 #include <linux/static_key.h>
17 17 #include <linux/atomic.h>
18 + #include <asm/cpacf.h>
18 19 
19 20 DECLARE_STATIC_KEY_FALSE(s390_arch_random_available);
20 21 extern atomic64_t s390_arch_random_counter;
21 22 
22 - bool s390_arch_get_random_long(unsigned long *v);
23 - bool s390_arch_random_generate(u8 *buf, unsigned int nbytes);
24 - 
25 23 static inline bool __must_check arch_get_random_long(unsigned long *v)
26 24 {
27 - if (static_branch_likely(&s390_arch_random_available))
28 - return s390_arch_get_random_long(v);
29 25 return false;
30 26 }
··· 33 37 static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
34 38 {
35 39 if (static_branch_likely(&s390_arch_random_available)) {
36 - return s390_arch_random_generate((u8 *)v, sizeof(*v));
40 + cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
41 + atomic64_add(sizeof(*v), &s390_arch_random_counter);
42 + return true;
37 43 }
38 44 return false;
39 45 }
··· 43 45 static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
44 46 {
45 47 if (static_branch_likely(&s390_arch_random_available)) {
46 - return s390_arch_random_generate((u8 *)v, sizeof(*v));
48 + cpacf_trng(NULL, 0, (u8 *)v, sizeof(*v));
49 + atomic64_add(sizeof(*v), &s390_arch_random_counter);
50 + return true;
47 51 }
48 52 return false;
49 53 }
+3 -3
arch/s390/include/asm/qdio.h
··· 133 133 * @sb_count: number of storage blocks
134 134 * @sba: storage block element addresses
135 135 * @dcount: size of storage block elements
136 - * @user0: user defineable value
137 - * @res4: reserved paramater
138 - * @user1: user defineable value
136 + * @user0: user definable value
137 + * @res4: reserved parameter
138 + * @user1: user definable value
139 139 */
140 140 struct qaob {
141 141 u64 res0[6];
+5
arch/s390/kernel/setup.c
··· 875 875 if (stsi(vmms, 3, 2, 2) == 0 && vmms->count) 876 876 add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count); 877 877 memblock_free(vmms, PAGE_SIZE); 878 + 879 + #ifdef CONFIG_ARCH_RANDOM 880 + if (cpacf_query_func(CPACF_PRNO, CPACF_PRNO_TRNG)) 881 + static_branch_enable(&s390_arch_random_available); 882 + #endif 878 883 } 879 884 880 885 /*
+2 -3
arch/s390/purgatory/Makefile
··· 48 48 $(obj)/purgatory.ro: $(obj)/purgatory $(obj)/purgatory.chk FORCE 49 49 $(call if_changed,objcopy) 50 50 51 - $(obj)/kexec-purgatory.o: $(obj)/kexec-purgatory.S $(obj)/purgatory.ro FORCE 52 - $(call if_changed_rule,as_o_S) 51 + $(obj)/kexec-purgatory.o: $(obj)/purgatory.ro 53 52 54 - obj-$(CONFIG_ARCH_HAS_KEXEC_PURGATORY) += kexec-purgatory.o 53 + obj-y += kexec-purgatory.o
+114
crypto/Kconfig
··· 666 666 CRC32c and CRC32 CRC algorithms implemented using mips crypto 667 667 instructions, when available. 668 668 669 + config CRYPTO_CRC32_S390 670 + tristate "CRC-32 algorithms" 671 + depends on S390 672 + select CRYPTO_HASH 673 + select CRC32 674 + help 675 + Select this option if you want to use hardware accelerated 676 + implementations of CRC algorithms. With this option, you 677 + can optimize the computation of CRC-32 (IEEE 802.3 Ethernet) 678 + and CRC-32C (Castagnoli). 679 + 680 + It is available with IBM z13 or later. 669 681 670 682 config CRYPTO_XXHASH 671 683 tristate "xxHash hash algorithm" ··· 910 898 Extensions version 1 (AVX1), or Advanced Vector Extensions 911 899 version 2 (AVX2) instructions, when available. 912 900 901 + config CRYPTO_SHA512_S390 902 + tristate "SHA384 and SHA512 digest algorithm" 903 + depends on S390 904 + select CRYPTO_HASH 905 + help 906 + This is the s390 hardware accelerated implementation of the 907 + SHA512 secure hash standard. 908 + 909 + It is available as of z10. 910 + 913 911 config CRYPTO_SHA1_OCTEON 914 912 tristate "SHA1 digest algorithm (OCTEON)" 915 913 depends on CPU_CAVIUM_OCTEON ··· 951 929 help 952 930 SHA-1 secure hash standard (DFIPS 180-4) implemented 953 931 using powerpc SPE SIMD instruction set. 932 + 933 + config CRYPTO_SHA1_S390 934 + tristate "SHA1 digest algorithm" 935 + depends on S390 936 + select CRYPTO_HASH 937 + help 938 + This is the s390 hardware accelerated implementation of the 939 + SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2). 940 + 941 + It is available as of z990. 954 942 955 943 config CRYPTO_SHA256 956 944 tristate "SHA224 and SHA256 digest algorithm" ··· 1002 970 SHA-256 secure hash standard (DFIPS 180-2) implemented 1003 971 using sparc64 crypto instructions, when available. 
1004 972 973 + config CRYPTO_SHA256_S390 974 + tristate "SHA256 digest algorithm" 975 + depends on S390 976 + select CRYPTO_HASH 977 + help 978 + This is the s390 hardware accelerated implementation of the 979 + SHA256 secure hash standard (DFIPS 180-2). 980 + 981 + It is available as of z9. 982 + 1005 983 config CRYPTO_SHA512 1006 984 tristate "SHA384 and SHA512 digest algorithms" 1007 985 select CRYPTO_HASH ··· 1051 1009 1052 1010 References: 1053 1011 http://keccak.noekeon.org/ 1012 + 1013 + config CRYPTO_SHA3_256_S390 1014 + tristate "SHA3_224 and SHA3_256 digest algorithm" 1015 + depends on S390 1016 + select CRYPTO_HASH 1017 + help 1018 + This is the s390 hardware accelerated implementation of the 1019 + SHA3_256 secure hash standard. 1020 + 1021 + It is available as of z14. 1022 + 1023 + config CRYPTO_SHA3_512_S390 1024 + tristate "SHA3_384 and SHA3_512 digest algorithm" 1025 + depends on S390 1026 + select CRYPTO_HASH 1027 + help 1028 + This is the s390 hardware accelerated implementation of the 1029 + SHA3_512 secure hash standard. 1030 + 1031 + It is available as of z14. 1054 1032 1055 1033 config CRYPTO_SM3 1056 1034 tristate ··· 1131 1069 help 1132 1070 This is the x86_64 CLMUL-NI accelerated implementation of 1133 1071 GHASH, the hash function used in GCM (Galois/Counter mode). 1072 + 1073 + config CRYPTO_GHASH_S390 1074 + tristate "GHASH hash function" 1075 + depends on S390 1076 + select CRYPTO_HASH 1077 + help 1078 + This is the s390 hardware accelerated implementation of GHASH, 1079 + the hash function used in GCM (Galois/Counter mode). 1080 + 1081 + It is available as of z196. 1134 1082 1135 1083 comment "Ciphers" 1136 1084 ··· 1256 1184 timining attacks. Nevertheless it might be not as secure as other 1257 1185 architecture specific assembler implementations that work on 1KB 1258 1186 tables or 256 bytes S-boxes. 
1187 + 1188 + config CRYPTO_AES_S390 1189 + tristate "AES cipher algorithms" 1190 + depends on S390 1191 + select CRYPTO_ALGAPI 1192 + select CRYPTO_SKCIPHER 1193 + help 1194 + This is the s390 hardware accelerated implementation of the 1195 + AES cipher algorithms (FIPS-197). 1196 + 1197 + As of z9 the ECB and CBC modes are hardware accelerated 1198 + for 128 bit keys. 1199 + As of z10 the ECB and CBC modes are hardware accelerated 1200 + for all AES key sizes. 1201 + As of z196 the CTR mode is hardware accelerated for all AES 1202 + key sizes and XTS mode is hardware accelerated for 256 and 1203 + 512 bit keys. 1259 1204 1260 1205 config CRYPTO_ANUBIS 1261 1206 tristate "Anubis cipher algorithm" ··· 1504 1415 algorithm are provided; regular processing one input block and 1505 1416 one that processes three blocks parallel. 1506 1417 1418 + config CRYPTO_DES_S390 1419 + tristate "DES and Triple DES cipher algorithms" 1420 + depends on S390 1421 + select CRYPTO_ALGAPI 1422 + select CRYPTO_SKCIPHER 1423 + select CRYPTO_LIB_DES 1424 + help 1425 + This is the s390 hardware accelerated implementation of the 1426 + DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3). 1427 + 1428 + As of z990 the ECB and CBC mode are hardware accelerated. 1429 + As of z196 the CTR mode is hardware accelerated. 1430 + 1507 1431 config CRYPTO_FCRYPT 1508 1432 tristate "FCrypt cipher algorithm" 1509 1433 select CRYPTO_ALGAPI ··· 1575 1473 depends on CPU_MIPS32_R2 1576 1474 select CRYPTO_SKCIPHER 1577 1475 select CRYPTO_ARCH_HAVE_LIB_CHACHA 1476 + 1477 + config CRYPTO_CHACHA_S390 1478 + tristate "ChaCha20 stream cipher" 1479 + depends on S390 1480 + select CRYPTO_SKCIPHER 1481 + select CRYPTO_LIB_CHACHA_GENERIC 1482 + select CRYPTO_ARCH_HAVE_LIB_CHACHA 1483 + help 1484 + This is the s390 SIMD implementation of the ChaCha20 stream 1485 + cipher (RFC 7539). 1486 + 1487 + It is available as of z13. 1578 1488 1579 1489 config CRYPTO_SEED 1580 1490 tristate "SEED cipher algorithm"
+2 -2
drivers/ata/pata_cs5535.c
··· 90 90 static const u16 pio_cmd_timings[5] = { 91 91 0xF7F4, 0x53F3, 0x13F1, 0x5131, 0x1131 92 92 }; 93 - u32 reg, dummy; 93 + u32 reg, __maybe_unused dummy; 94 94 struct ata_device *pair = ata_dev_pair(adev); 95 95 96 96 int mode = adev->pio_mode - XFER_PIO_0; ··· 129 129 static const u32 mwdma_timings[3] = { 130 130 0x7F0FFFF3, 0x7F035352, 0x7F024241 131 131 }; 132 - u32 reg, dummy; 132 + u32 reg, __maybe_unused dummy; 133 133 int mode = adev->dma_mode; 134 134 135 135 rdmsr(ATAC_CH0D0_DMA + 2 * adev->devno, reg, dummy);
+37 -17
drivers/block/xen-blkfront.c
··· 152 152 module_param_named(max_ring_page_order, xen_blkif_max_ring_order, int, 0444); 153 153 MODULE_PARM_DESC(max_ring_page_order, "Maximum order of pages to be used for the shared ring"); 154 154 155 + static bool __read_mostly xen_blkif_trusted = true; 156 + module_param_named(trusted, xen_blkif_trusted, bool, 0644); 157 + MODULE_PARM_DESC(trusted, "Is the backend trusted"); 158 + 155 159 #define BLK_RING_SIZE(info) \ 156 160 __CONST_RING_SIZE(blkif, XEN_PAGE_SIZE * (info)->nr_ring_pages) 157 161 ··· 214 210 unsigned int feature_discard:1; 215 211 unsigned int feature_secdiscard:1; 216 212 unsigned int feature_persistent:1; 213 + unsigned int bounce:1; 217 214 unsigned int discard_granularity; 218 215 unsigned int discard_alignment; 219 216 /* Number of 4KB segments handled */ ··· 315 310 if (!gnt_list_entry) 316 311 goto out_of_memory; 317 312 318 - if (info->feature_persistent) { 319 - granted_page = alloc_page(GFP_NOIO); 313 + if (info->bounce) { 314 + granted_page = alloc_page(GFP_NOIO | __GFP_ZERO); 320 315 if (!granted_page) { 321 316 kfree(gnt_list_entry); 322 317 goto out_of_memory; ··· 335 330 list_for_each_entry_safe(gnt_list_entry, n, 336 331 &rinfo->grants, node) { 337 332 list_del(&gnt_list_entry->node); 338 - if (info->feature_persistent) 333 + if (info->bounce) 339 334 __free_page(gnt_list_entry->page); 340 335 kfree(gnt_list_entry); 341 336 i--; ··· 381 376 /* Assign a gref to this page */ 382 377 gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head); 383 378 BUG_ON(gnt_list_entry->gref == -ENOSPC); 384 - if (info->feature_persistent) 379 + if (info->bounce) 385 380 grant_foreign_access(gnt_list_entry, info); 386 381 else { 387 382 /* Grant access to the GFN passed by the caller */ ··· 405 400 /* Assign a gref to this page */ 406 401 gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head); 407 402 BUG_ON(gnt_list_entry->gref == -ENOSPC); 408 - if (!info->feature_persistent) { 403 + if (!info->bounce) { 409 404 struct page 
*indirect_page; 410 405 411 406 /* Fetch a pre-allocated page to use for indirect grefs */ ··· 708 703 .grant_idx = 0, 709 704 .segments = NULL, 710 705 .rinfo = rinfo, 711 - .need_copy = rq_data_dir(req) && info->feature_persistent, 706 + .need_copy = rq_data_dir(req) && info->bounce, 712 707 }; 713 708 714 709 /* ··· 986 981 { 987 982 blk_queue_write_cache(info->rq, info->feature_flush ? true : false, 988 983 info->feature_fua ? true : false); 989 - pr_info("blkfront: %s: %s %s %s %s %s\n", 984 + pr_info("blkfront: %s: %s %s %s %s %s %s %s\n", 990 985 info->gd->disk_name, flush_info(info), 991 986 "persistent grants:", info->feature_persistent ? 992 987 "enabled;" : "disabled;", "indirect descriptors:", 993 - info->max_indirect_segments ? "enabled;" : "disabled;"); 988 + info->max_indirect_segments ? "enabled;" : "disabled;", 989 + "bounce buffer:", info->bounce ? "enabled" : "disabled;"); 994 990 } 995 991 996 992 static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset) ··· 1213 1207 if (!list_empty(&rinfo->indirect_pages)) { 1214 1208 struct page *indirect_page, *n; 1215 1209 1216 - BUG_ON(info->feature_persistent); 1210 + BUG_ON(info->bounce); 1217 1211 list_for_each_entry_safe(indirect_page, n, &rinfo->indirect_pages, lru) { 1218 1212 list_del(&indirect_page->lru); 1219 1213 __free_page(indirect_page); ··· 1230 1224 NULL); 1231 1225 rinfo->persistent_gnts_c--; 1232 1226 } 1233 - if (info->feature_persistent) 1227 + if (info->bounce) 1234 1228 __free_page(persistent_gnt->page); 1235 1229 kfree(persistent_gnt); 1236 1230 } ··· 1251 1245 for (j = 0; j < segs; j++) { 1252 1246 persistent_gnt = rinfo->shadow[i].grants_used[j]; 1253 1247 gnttab_end_foreign_access(persistent_gnt->gref, NULL); 1254 - if (info->feature_persistent) 1248 + if (info->bounce) 1255 1249 __free_page(persistent_gnt->page); 1256 1250 kfree(persistent_gnt); 1257 1251 } ··· 1434 1428 data.s = s; 1435 1429 num_sg = s->num_sg; 1436 1430 1437 - if (bret->operation == 
BLKIF_OP_READ && info->feature_persistent) { 1431 + if (bret->operation == BLKIF_OP_READ && info->bounce) { 1438 1432 for_each_sg(s->sg, sg, num_sg, i) { 1439 1433 BUG_ON(sg->offset + sg->length > PAGE_SIZE); 1440 1434 ··· 1493 1487 * Add the used indirect page back to the list of 1494 1488 * available pages for indirect grefs. 1495 1489 */ 1496 - if (!info->feature_persistent) { 1490 + if (!info->bounce) { 1497 1491 indirect_page = s->indirect_grants[i]->page; 1498 1492 list_add(&indirect_page->lru, &rinfo->indirect_pages); 1499 1493 } ··· 1769 1763 1770 1764 if (!info) 1771 1765 return -ENODEV; 1766 + 1767 + /* Check if backend is trusted. */ 1768 + info->bounce = !xen_blkif_trusted || 1769 + !xenbus_read_unsigned(dev->nodename, "trusted", 1); 1772 1770 1773 1771 max_page_order = xenbus_read_unsigned(info->xbdev->otherend, 1774 1772 "max-ring-page-order", 0); ··· 2183 2173 if (err) 2184 2174 goto out_of_memory; 2185 2175 2186 - if (!info->feature_persistent && info->max_indirect_segments) { 2176 + if (!info->bounce && info->max_indirect_segments) { 2187 2177 /* 2188 - * We are using indirect descriptors but not persistent 2189 - * grants, we need to allocate a set of pages that can be 2178 + * We are using indirect descriptors but don't have a bounce 2179 + * buffer, we need to allocate a set of pages that can be 2190 2180 * used for mapping indirect grefs 2191 2181 */ 2192 2182 int num = INDIRECT_GREFS(grants) * BLK_RING_SIZE(info); 2193 2183 2194 2184 BUG_ON(!list_empty(&rinfo->indirect_pages)); 2195 2185 for (i = 0; i < num; i++) { 2196 - struct page *indirect_page = alloc_page(GFP_KERNEL); 2186 + struct page *indirect_page = alloc_page(GFP_KERNEL | 2187 + __GFP_ZERO); 2197 2188 if (!indirect_page) 2198 2189 goto out_of_memory; 2199 2190 list_add(&indirect_page->lru, &rinfo->indirect_pages); ··· 2287 2276 info->feature_persistent = 2288 2277 !!xenbus_read_unsigned(info->xbdev->otherend, 2289 2278 "feature-persistent", 0); 2279 + if (info->feature_persistent) 
2280 + info->bounce = true; 2290 2281 2291 2282 indirect_segments = xenbus_read_unsigned(info->xbdev->otherend, 2292 2283 "feature-max-indirect-segments", 0); ··· 2559 2546 { 2560 2547 struct blkfront_info *info; 2561 2548 bool need_schedule_work = false; 2549 + 2550 + /* 2551 + * Note that when using bounce buffers but not persistent grants 2552 + * there's no need to run blkfront_delay_work because grants are 2553 + * revoked in blkif_completion or else an error is reported and the 2554 + * connection is closed. 2555 + */ 2562 2556 2563 2557 mutex_lock(&blkfront_mutex); 2564 2558
+1
drivers/clk/stm32/reset-stm32.c
··· 111 111 if (!reset_data) 112 112 return -ENOMEM; 113 113 114 + spin_lock_init(&reset_data->lock); 114 115 reset_data->membase = base; 115 116 reset_data->rcdev.owner = THIS_MODULE; 116 117 reset_data->rcdev.ops = &stm32_reset_ops;
+24
drivers/cpufreq/amd-pstate.c
··· 566 566 return 0; 567 567 } 568 568 569 + static int amd_pstate_cpu_resume(struct cpufreq_policy *policy) 570 + { 571 + int ret; 572 + 573 + ret = amd_pstate_enable(true); 574 + if (ret) 575 + pr_err("failed to enable amd-pstate during resume, return %d\n", ret); 576 + 577 + return ret; 578 + } 579 + 580 + static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy) 581 + { 582 + int ret; 583 + 584 + ret = amd_pstate_enable(false); 585 + if (ret) 586 + pr_err("failed to disable amd-pstate during suspend, return %d\n", ret); 587 + 588 + return ret; 589 + } 590 + 569 591 /* Sysfs attributes */ 570 592 571 593 /* ··· 658 636 .target = amd_pstate_target, 659 637 .init = amd_pstate_cpu_init, 660 638 .exit = amd_pstate_cpu_exit, 639 + .suspend = amd_pstate_cpu_suspend, 640 + .resume = amd_pstate_cpu_resume, 661 641 .set_boost = amd_pstate_set_boost, 662 642 .name = "amd-pstate", 663 643 .attr = amd_pstate_attr,
+1
drivers/cpufreq/cpufreq-dt-platdev.c
··· 127 127 { .compatible = "mediatek,mt8173", }, 128 128 { .compatible = "mediatek,mt8176", }, 129 129 { .compatible = "mediatek,mt8183", }, 130 + { .compatible = "mediatek,mt8186", }, 130 131 { .compatible = "mediatek,mt8365", }, 131 132 { .compatible = "mediatek,mt8516", }, 132 133
+4
drivers/cpufreq/pmac32-cpufreq.c
··· 470 470 if (slew_done_gpio_np) 471 471 slew_done_gpio = read_gpio(slew_done_gpio_np); 472 472 473 + of_node_put(volt_gpio_np); 474 + of_node_put(freq_gpio_np); 475 + of_node_put(slew_done_gpio_np); 476 + 473 477 /* If we use the frequency GPIOs, calculate the min/max speeds based 474 478 * on the bus frequencies 475 479 */
+6
drivers/cpufreq/qcom-cpufreq-hw.c
··· 442 442 struct platform_device *pdev = cpufreq_get_driver_data(); 443 443 int ret; 444 444 445 + if (data->throttle_irq <= 0) 446 + return 0; 447 + 445 448 ret = irq_set_affinity_hint(data->throttle_irq, policy->cpus); 446 449 if (ret) 447 450 dev_err(&pdev->dev, "Failed to set CPU affinity of %s[%d]\n", ··· 472 469 473 470 static void qcom_cpufreq_hw_lmh_exit(struct qcom_cpufreq_data *data) 474 471 { 472 + if (data->throttle_irq <= 0) 473 + return; 474 + 475 475 free_irq(data->throttle_irq, data); 476 476 } 477 477
+1
drivers/cpufreq/qoriq-cpufreq.c
··· 275 275 276 276 np = of_find_matching_node(NULL, qoriq_cpufreq_blacklist); 277 277 if (np) { 278 + of_node_put(np); 278 279 dev_info(&pdev->dev, "Disabling due to erratum A-008083"); 279 280 return -ENODEV; 280 281 }
-115
drivers/crypto/Kconfig
··· 133 133 Select this option if you want to use the paes cipher 134 134 for example to use protected key encrypted devices. 135 135 136 - config CRYPTO_SHA1_S390 137 - tristate "SHA1 digest algorithm" 138 - depends on S390 139 - select CRYPTO_HASH 140 - help 141 - This is the s390 hardware accelerated implementation of the 142 - SHA-1 secure hash standard (FIPS 180-1/DFIPS 180-2). 143 - 144 - It is available as of z990. 145 - 146 - config CRYPTO_SHA256_S390 147 - tristate "SHA256 digest algorithm" 148 - depends on S390 149 - select CRYPTO_HASH 150 - help 151 - This is the s390 hardware accelerated implementation of the 152 - SHA256 secure hash standard (DFIPS 180-2). 153 - 154 - It is available as of z9. 155 - 156 - config CRYPTO_SHA512_S390 157 - tristate "SHA384 and SHA512 digest algorithm" 158 - depends on S390 159 - select CRYPTO_HASH 160 - help 161 - This is the s390 hardware accelerated implementation of the 162 - SHA512 secure hash standard. 163 - 164 - It is available as of z10. 165 - 166 - config CRYPTO_SHA3_256_S390 167 - tristate "SHA3_224 and SHA3_256 digest algorithm" 168 - depends on S390 169 - select CRYPTO_HASH 170 - help 171 - This is the s390 hardware accelerated implementation of the 172 - SHA3_256 secure hash standard. 173 - 174 - It is available as of z14. 175 - 176 - config CRYPTO_SHA3_512_S390 177 - tristate "SHA3_384 and SHA3_512 digest algorithm" 178 - depends on S390 179 - select CRYPTO_HASH 180 - help 181 - This is the s390 hardware accelerated implementation of the 182 - SHA3_512 secure hash standard. 183 - 184 - It is available as of z14. 185 - 186 - config CRYPTO_DES_S390 187 - tristate "DES and Triple DES cipher algorithms" 188 - depends on S390 189 - select CRYPTO_ALGAPI 190 - select CRYPTO_SKCIPHER 191 - select CRYPTO_LIB_DES 192 - help 193 - This is the s390 hardware accelerated implementation of the 194 - DES cipher algorithm (FIPS 46-2), and Triple DES EDE (FIPS 46-3). 
195 - 196 - As of z990 the ECB and CBC mode are hardware accelerated. 197 - As of z196 the CTR mode is hardware accelerated. 198 - 199 - config CRYPTO_AES_S390 200 - tristate "AES cipher algorithms" 201 - depends on S390 202 - select CRYPTO_ALGAPI 203 - select CRYPTO_SKCIPHER 204 - help 205 - This is the s390 hardware accelerated implementation of the 206 - AES cipher algorithms (FIPS-197). 207 - 208 - As of z9 the ECB and CBC modes are hardware accelerated 209 - for 128 bit keys. 210 - As of z10 the ECB and CBC modes are hardware accelerated 211 - for all AES key sizes. 212 - As of z196 the CTR mode is hardware accelerated for all AES 213 - key sizes and XTS mode is hardware accelerated for 256 and 214 - 512 bit keys. 215 - 216 - config CRYPTO_CHACHA_S390 217 - tristate "ChaCha20 stream cipher" 218 - depends on S390 219 - select CRYPTO_SKCIPHER 220 - select CRYPTO_LIB_CHACHA_GENERIC 221 - select CRYPTO_ARCH_HAVE_LIB_CHACHA 222 - help 223 - This is the s390 SIMD implementation of the ChaCha20 stream 224 - cipher (RFC 7539). 225 - 226 - It is available as of z13. 227 - 228 136 config S390_PRNG 229 137 tristate "Pseudo random number generator device driver" 230 138 depends on S390 ··· 145 237 pseudo-random-number device through the char device /dev/prandom. 146 238 147 239 It is available as of z9. 148 - 149 - config CRYPTO_GHASH_S390 150 - tristate "GHASH hash function" 151 - depends on S390 152 - select CRYPTO_HASH 153 - help 154 - This is the s390 hardware accelerated implementation of GHASH, 155 - the hash function used in GCM (Galois/Counter mode). 156 - 157 - It is available as of z196. 158 - 159 - config CRYPTO_CRC32_S390 160 - tristate "CRC-32 algorithms" 161 - depends on S390 162 - select CRYPTO_HASH 163 - select CRC32 164 - help 165 - Select this option if you want to use hardware accelerated 166 - implementations of CRC algorithms. With this option, you 167 - can optimize the computation of CRC-32 (IEEE 802.3 Ethernet) 168 - and CRC-32C (Castagnoli). 
169 - 170 - It is available with IBM z13 or later. 171 240 172 241 config CRYPTO_DEV_NIAGARA2 173 242 tristate "Niagara2 Stream Processing Unit driver"
+37 -39
drivers/devfreq/devfreq.c
··· 123 123 unsigned long *min_freq, 124 124 unsigned long *max_freq) 125 125 { 126 - unsigned long *freq_table = devfreq->profile->freq_table; 126 + unsigned long *freq_table = devfreq->freq_table; 127 127 s32 qos_min_freq, qos_max_freq; 128 128 129 129 lockdep_assert_held(&devfreq->lock); ··· 133 133 * The devfreq drivers can initialize this in either ascending or 134 134 * descending order and devfreq core supports both. 135 135 */ 136 - if (freq_table[0] < freq_table[devfreq->profile->max_state - 1]) { 136 + if (freq_table[0] < freq_table[devfreq->max_state - 1]) { 137 137 *min_freq = freq_table[0]; 138 - *max_freq = freq_table[devfreq->profile->max_state - 1]; 138 + *max_freq = freq_table[devfreq->max_state - 1]; 139 139 } else { 140 - *min_freq = freq_table[devfreq->profile->max_state - 1]; 140 + *min_freq = freq_table[devfreq->max_state - 1]; 141 141 *max_freq = freq_table[0]; 142 142 } 143 143 ··· 169 169 { 170 170 int lev; 171 171 172 - for (lev = 0; lev < devfreq->profile->max_state; lev++) 173 - if (freq == devfreq->profile->freq_table[lev]) 172 + for (lev = 0; lev < devfreq->max_state; lev++) 173 + if (freq == devfreq->freq_table[lev]) 174 174 return lev; 175 175 176 176 return -EINVAL; ··· 178 178 179 179 static int set_freq_table(struct devfreq *devfreq) 180 180 { 181 - struct devfreq_dev_profile *profile = devfreq->profile; 182 181 struct dev_pm_opp *opp; 183 182 unsigned long freq; 184 183 int i, count; ··· 187 188 if (count <= 0) 188 189 return -EINVAL; 189 190 190 - profile->max_state = count; 191 - profile->freq_table = devm_kcalloc(devfreq->dev.parent, 192 - profile->max_state, 193 - sizeof(*profile->freq_table), 194 - GFP_KERNEL); 195 - if (!profile->freq_table) { 196 - profile->max_state = 0; 191 + devfreq->max_state = count; 192 + devfreq->freq_table = devm_kcalloc(devfreq->dev.parent, 193 + devfreq->max_state, 194 + sizeof(*devfreq->freq_table), 195 + GFP_KERNEL); 196 + if (!devfreq->freq_table) 197 197 return -ENOMEM; 198 - } 199 198 200 - 
for (i = 0, freq = 0; i < profile->max_state; i++, freq++) { 199 + for (i = 0, freq = 0; i < devfreq->max_state; i++, freq++) { 201 200 opp = dev_pm_opp_find_freq_ceil(devfreq->dev.parent, &freq); 202 201 if (IS_ERR(opp)) { 203 - devm_kfree(devfreq->dev.parent, profile->freq_table); 204 - profile->max_state = 0; 202 + devm_kfree(devfreq->dev.parent, devfreq->freq_table); 205 203 return PTR_ERR(opp); 206 204 } 207 205 dev_pm_opp_put(opp); 208 - profile->freq_table[i] = freq; 206 + devfreq->freq_table[i] = freq; 209 207 } 210 208 211 209 return 0; ··· 242 246 243 247 if (lev != prev_lev) { 244 248 devfreq->stats.trans_table[ 245 - (prev_lev * devfreq->profile->max_state) + lev]++; 249 + (prev_lev * devfreq->max_state) + lev]++; 246 250 devfreq->stats.total_trans++; 247 251 } 248 252 ··· 831 835 if (err < 0) 832 836 goto err_dev; 833 837 mutex_lock(&devfreq->lock); 838 + } else { 839 + devfreq->freq_table = devfreq->profile->freq_table; 840 + devfreq->max_state = devfreq->profile->max_state; 834 841 } 835 842 836 843 devfreq->scaling_min_freq = find_available_min_freq(devfreq); ··· 869 870 870 871 devfreq->stats.trans_table = devm_kzalloc(&devfreq->dev, 871 872 array3_size(sizeof(unsigned int), 872 - devfreq->profile->max_state, 873 - devfreq->profile->max_state), 873 + devfreq->max_state, 874 + devfreq->max_state), 874 875 GFP_KERNEL); 875 876 if (!devfreq->stats.trans_table) { 876 877 mutex_unlock(&devfreq->lock); ··· 879 880 } 880 881 881 882 devfreq->stats.time_in_state = devm_kcalloc(&devfreq->dev, 882 - devfreq->profile->max_state, 883 + devfreq->max_state, 883 884 sizeof(*devfreq->stats.time_in_state), 884 885 GFP_KERNEL); 885 886 if (!devfreq->stats.time_in_state) { ··· 931 932 err = devfreq->governor->event_handler(devfreq, DEVFREQ_GOV_START, 932 933 NULL); 933 934 if (err) { 934 - dev_err(dev, "%s: Unable to start governor for the device\n", 935 - __func__); 935 + dev_err_probe(dev, err, 936 + "%s: Unable to start governor for the device\n", 937 + __func__); 
936 938 goto err_init; 937 939 } 938 940 create_sysfs_files(devfreq, devfreq->governor); ··· 1665 1665 1666 1666 mutex_lock(&df->lock); 1667 1667 1668 - for (i = 0; i < df->profile->max_state; i++) 1668 + for (i = 0; i < df->max_state; i++) 1669 1669 count += scnprintf(&buf[count], (PAGE_SIZE - count - 2), 1670 - "%lu ", df->profile->freq_table[i]); 1670 + "%lu ", df->freq_table[i]); 1671 1671 1672 1672 mutex_unlock(&df->lock); 1673 1673 /* Truncate the trailing space */ ··· 1690 1690 1691 1691 if (!df->profile) 1692 1692 return -EINVAL; 1693 - max_state = df->profile->max_state; 1693 + max_state = df->max_state; 1694 1694 1695 1695 if (max_state == 0) 1696 1696 return sprintf(buf, "Not Supported.\n"); ··· 1707 1707 len += sprintf(buf + len, " :"); 1708 1708 for (i = 0; i < max_state; i++) 1709 1709 len += sprintf(buf + len, "%10lu", 1710 - df->profile->freq_table[i]); 1710 + df->freq_table[i]); 1711 1711 1712 1712 len += sprintf(buf + len, " time(ms)\n"); 1713 1713 1714 1714 for (i = 0; i < max_state; i++) { 1715 - if (df->profile->freq_table[i] 1716 - == df->previous_freq) { 1715 + if (df->freq_table[i] == df->previous_freq) 1717 1716 len += sprintf(buf + len, "*"); 1718 - } else { 1717 + else 1719 1718 len += sprintf(buf + len, " "); 1720 - } 1721 - len += sprintf(buf + len, "%10lu:", 1722 - df->profile->freq_table[i]); 1719 + 1720 + len += sprintf(buf + len, "%10lu:", df->freq_table[i]); 1723 1721 for (j = 0; j < max_state; j++) 1724 1722 len += sprintf(buf + len, "%10u", 1725 1723 df->stats.trans_table[(i * max_state) + j]); ··· 1741 1743 if (!df->profile) 1742 1744 return -EINVAL; 1743 1745 1744 - if (df->profile->max_state == 0) 1746 + if (df->max_state == 0) 1745 1747 return count; 1746 1748 1747 1749 err = kstrtoint(buf, 10, &value); ··· 1749 1751 return -EINVAL; 1750 1752 1751 1753 mutex_lock(&df->lock); 1752 - memset(df->stats.time_in_state, 0, (df->profile->max_state * 1754 + memset(df->stats.time_in_state, 0, (df->max_state * 1753 1755 
sizeof(*df->stats.time_in_state))); 1754 1756 memset(df->stats.trans_table, 0, array3_size(sizeof(unsigned int), 1755 - df->profile->max_state, 1756 - df->profile->max_state)); 1757 + df->max_state, 1758 + df->max_state)); 1757 1759 df->stats.total_trans = 0; 1758 1760 df->stats.last_update = get_jiffies_64(); 1759 1761 mutex_unlock(&df->lock);
+6 -2
drivers/devfreq/event/exynos-ppmu.c
··· 519 519 520 520 count = of_get_child_count(events_np); 521 521 desc = devm_kcalloc(dev, count, sizeof(*desc), GFP_KERNEL); 522 - if (!desc) 522 + if (!desc) { 523 + of_node_put(events_np); 523 524 return -ENOMEM; 525 + } 524 526 info->num_events = count; 525 527 526 528 of_id = of_match_device(exynos_ppmu_id_match, dev); 527 529 if (of_id) 528 530 info->ppmu_type = (enum exynos_ppmu_type)of_id->data; 529 - else 531 + else { 532 + of_node_put(events_np); 530 533 return -EINVAL; 534 + } 531 535 532 536 j = 0; 533 537 for_each_child_of_node(events_np, node) {
+27 -35
drivers/devfreq/governor_passive.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 1 + // SPDX-License-Identifier: GPL-2.0-only 2 2 /* 3 3 * linux/drivers/devfreq/governor_passive.c 4 4 * ··· 14 14 #include <linux/slab.h> 15 15 #include <linux/device.h> 16 16 #include <linux/devfreq.h> 17 + #include <linux/units.h> 17 18 #include "governor.h" 18 - 19 - #define HZ_PER_KHZ 1000 20 19 21 20 static struct devfreq_cpu_data * 22 21 get_parent_cpu_data(struct devfreq_passive_data *p_data, ··· 31 32 return parent_cpu_data; 32 33 33 34 return NULL; 35 + } 36 + 37 + static void delete_parent_cpu_data(struct devfreq_passive_data *p_data) 38 + { 39 + struct devfreq_cpu_data *parent_cpu_data, *tmp; 40 + 41 + list_for_each_entry_safe(parent_cpu_data, tmp, &p_data->cpu_data_list, node) { 42 + list_del(&parent_cpu_data->node); 43 + 44 + if (parent_cpu_data->opp_table) 45 + dev_pm_opp_put_opp_table(parent_cpu_data->opp_table); 46 + 47 + kfree(parent_cpu_data); 48 + } 34 49 } 35 50 36 51 static unsigned long get_target_freq_by_required_opp(struct device *p_dev, ··· 144 131 goto out; 145 132 146 133 /* Use interpolation if required opps is not available */ 147 - for (i = 0; i < parent_devfreq->profile->max_state; i++) 148 - if (parent_devfreq->profile->freq_table[i] == *freq) 134 + for (i = 0; i < parent_devfreq->max_state; i++) 135 + if (parent_devfreq->freq_table[i] == *freq) 149 136 break; 150 137 151 - if (i == parent_devfreq->profile->max_state) 138 + if (i == parent_devfreq->max_state) 152 139 return -EINVAL; 153 140 154 - if (i < devfreq->profile->max_state) { 155 - child_freq = devfreq->profile->freq_table[i]; 141 + if (i < devfreq->max_state) { 142 + child_freq = devfreq->freq_table[i]; 156 143 } else { 157 - count = devfreq->profile->max_state; 158 - child_freq = devfreq->profile->freq_table[count - 1]; 144 + count = devfreq->max_state; 145 + child_freq = devfreq->freq_table[count - 1]; 159 146 } 160 147 161 148 out: ··· 235 222 { 236 223 struct devfreq_passive_data *p_data 237 224 = (struct 
devfreq_passive_data *)devfreq->data; 238 - struct devfreq_cpu_data *parent_cpu_data; 239 - int cpu, ret = 0; 225 + int ret; 240 226 241 227 if (p_data->nb.notifier_call) { 242 228 ret = cpufreq_unregister_notifier(&p_data->nb, ··· 244 232 return ret; 245 233 } 246 234 247 - for_each_possible_cpu(cpu) { 248 - struct cpufreq_policy *policy = cpufreq_cpu_get(cpu); 249 - if (!policy) { 250 - ret = -EINVAL; 251 - continue; 252 - } 235 + delete_parent_cpu_data(p_data); 253 236 254 - parent_cpu_data = get_parent_cpu_data(p_data, policy); 255 - if (!parent_cpu_data) { 256 - cpufreq_cpu_put(policy); 257 - continue; 258 - } 259 - 260 - list_del(&parent_cpu_data->node); 261 - if (parent_cpu_data->opp_table) 262 - dev_pm_opp_put_opp_table(parent_cpu_data->opp_table); 263 - kfree(parent_cpu_data); 264 - cpufreq_cpu_put(policy); 265 - } 266 - 267 - return ret; 237 + return 0; 268 238 } 269 239 270 240 static int cpufreq_passive_register_notifier(struct devfreq *devfreq) ··· 330 336 err_put_policy: 331 337 cpufreq_cpu_put(policy); 332 338 err: 333 - WARN_ON(cpufreq_passive_unregister_notifier(devfreq)); 334 339 335 340 return ret; 336 341 } ··· 400 407 if (!p_data) 401 408 return -EINVAL; 402 409 403 - if (!p_data->this) 404 - p_data->this = devfreq; 410 + p_data->this = devfreq; 405 411 406 412 switch (event) { 407 413 case DEVFREQ_GOV_START:
+3 -3
drivers/firmware/arm_scmi/bus.c
··· 181 181 return NULL; 182 182 } 183 183 184 - id = ida_simple_get(&scmi_bus_id, 1, 0, GFP_KERNEL); 184 + id = ida_alloc_min(&scmi_bus_id, 1, GFP_KERNEL); 185 185 if (id < 0) { 186 186 kfree_const(scmi_dev->name); 187 187 kfree(scmi_dev); ··· 204 204 put_dev: 205 205 kfree_const(scmi_dev->name); 206 206 put_device(&scmi_dev->dev); 207 - ida_simple_remove(&scmi_bus_id, id); 207 + ida_free(&scmi_bus_id, id); 208 208 return NULL; 209 209 } 210 210 ··· 212 212 { 213 213 kfree_const(scmi_dev->name); 214 214 scmi_handle_put(scmi_dev->handle); 215 - ida_simple_remove(&scmi_bus_id, scmi_dev->id); 215 + ida_free(&scmi_bus_id, scmi_dev->id); 216 216 device_unregister(&scmi_dev->dev); 217 217 } 218 218
+25 -1
drivers/firmware/arm_scmi/clock.c
··· 194 194 } 195 195 196 196 struct scmi_clk_ipriv { 197 + struct device *dev; 197 198 u32 clk_id; 198 199 struct scmi_clock_info *clk; 199 200 }; ··· 223 222 st->num_remaining = NUM_REMAINING(flags); 224 223 st->num_returned = NUM_RETURNED(flags); 225 224 p->clk->rate_discrete = RATE_DISCRETE(flags); 225 + 226 + /* Warn about out of spec replies ... */ 227 + if (!p->clk->rate_discrete && 228 + (st->num_returned != 3 || st->num_remaining != 0)) { 229 + dev_warn(p->dev, 230 + "Out-of-spec CLOCK_DESCRIBE_RATES reply for %s - returned:%d remaining:%d rx_len:%zd\n", 231 + p->clk->name, st->num_returned, st->num_remaining, 232 + st->rx_len); 233 + 234 + /* 235 + * A known quirk: a triplet is returned but num_returned != 3 236 + * Check for a safe payload size and fix. 237 + */ 238 + if (st->num_returned != 3 && st->num_remaining == 0 && 239 + st->rx_len == sizeof(*r) + sizeof(__le32) * 2 * 3) { 240 + st->num_returned = 3; 241 + st->num_remaining = 0; 242 + } else { 243 + dev_err(p->dev, 244 + "Cannot fix out-of-spec reply !\n"); 245 + return -EPROTO; 246 + } 247 + } 226 248 227 249 return 0; 228 250 } ··· 279 255 280 256 *rate = RATE_TO_U64(r->rate[st->loop_idx]); 281 257 p->clk->list.num_rates++; 282 - //XXX dev_dbg(ph->dev, "Rate %llu Hz\n", *rate); 283 258 } 284 259 285 260 return ret; ··· 298 275 struct scmi_clk_ipriv cpriv = { 299 276 .clk_id = clk_id, 300 277 .clk = clk, 278 + .dev = ph->dev, 301 279 }; 302 280 303 281 iter = ph->hops->iter_response_init(ph, &ops, SCMI_MAX_NUM_RATES,
+1
drivers/firmware/arm_scmi/driver.c
··· 1223 1223 if (ret) 1224 1224 break; 1225 1225 1226 + st->rx_len = i->t->rx.len; 1226 1227 ret = iops->update_state(st, i->resp, i->priv); 1227 1228 if (ret) 1228 1229 break;
+6 -1
drivers/firmware/arm_scmi/optee.c
··· 117 117 u32 channel_id; 118 118 u32 tee_session; 119 119 u32 caps; 120 + u32 rx_len; 120 121 struct mutex mu; 121 122 struct scmi_chan_info *cinfo; 122 123 union { ··· 303 302 return -EIO; 304 303 } 305 304 305 + /* Save response size */ 306 + channel->rx_len = param[2].u.memref.size; 307 + 306 308 return 0; 307 309 } 308 310 ··· 357 353 shbuf = tee_shm_get_va(channel->tee_shm, 0); 358 354 memset(shbuf, 0, msg_size); 359 355 channel->req.msg = shbuf; 356 + channel->rx_len = msg_size; 360 357 361 358 return 0; 362 359 } ··· 513 508 struct scmi_optee_channel *channel = cinfo->transport_info; 514 509 515 510 if (channel->tee_shm) 516 - msg_fetch_response(channel->req.msg, SCMI_OPTEE_MAX_MSG_SIZE, xfer); 511 + msg_fetch_response(channel->req.msg, channel->rx_len, xfer); 517 512 else 518 513 shmem_fetch_response(channel->req.shmem, xfer); 519 514 }
+3
drivers/firmware/arm_scmi/protocols.h
··· 179 179 * @max_resources: Maximum acceptable number of items, configured by the caller 180 180 * depending on the underlying resources that it is querying. 181 181 * @loop_idx: The iterator loop index in the current multi-part reply. 182 + * @rx_len: Size in bytes of the currently processed message; it can be used by 183 + * the user of the iterator to verify a reply size. 182 184 * @priv: Optional pointer to some additional state-related private data setup 183 185 * by the caller during the iterations. 184 186 */ ··· 190 188 unsigned int num_remaining; 191 189 unsigned int max_resources; 192 190 unsigned int loop_idx; 193 191 size_t rx_len; 194 192 void *priv; 195 193 }; 196 194
+50 -8
drivers/firmware/sysfb.c
··· 34 34 #include <linux/screen_info.h> 35 35 #include <linux/sysfb.h> 36 36 37 + static struct platform_device *pd; 38 + static DEFINE_MUTEX(disable_lock); 39 + static bool disabled; 40 + 41 + static bool sysfb_unregister(void) 42 + { 43 + if (IS_ERR_OR_NULL(pd)) 44 + return false; 45 + 46 + platform_device_unregister(pd); 47 + pd = NULL; 48 + 49 + return true; 50 + } 51 + 52 + /** 53 + * sysfb_disable() - disable the Generic System Framebuffers support 54 + * 55 + * This disables the registration of system framebuffer devices that match the 56 + * generic drivers that make use of the system framebuffer set up by firmware. 57 + * 58 + * It also unregisters a device if this was already registered by sysfb_init(). 59 + * 60 + * Context: The function can sleep. A @disable_lock mutex is acquired to serialize 61 + * against sysfb_init(), that registers a system framebuffer device. 62 + */ 63 + void sysfb_disable(void) 64 + { 65 + mutex_lock(&disable_lock); 66 + sysfb_unregister(); 67 + disabled = true; 68 + mutex_unlock(&disable_lock); 69 + } 70 + EXPORT_SYMBOL_GPL(sysfb_disable); 71 + 37 72 static __init int sysfb_init(void) 38 73 { 39 74 struct screen_info *si = &screen_info; 40 75 struct simplefb_platform_data mode; 41 - struct platform_device *pd; 42 76 const char *name; 43 77 bool compatible; 44 - int ret; 78 + int ret = 0; 79 + 80 + mutex_lock(&disable_lock); 81 + if (disabled) 82 + goto unlock_mutex; 45 83 46 84 /* try to create a simple-framebuffer device */ 47 85 compatible = sysfb_parse_mode(si, &mode); 48 86 if (compatible) { 49 - ret = sysfb_create_simplefb(si, &mode); 50 - if (!ret) 51 - return 0; 87 + pd = sysfb_create_simplefb(si, &mode); 88 + if (!IS_ERR(pd)) 89 + goto unlock_mutex; 52 90 } 53 91 54 92 /* if the FB is incompatible, create a legacy framebuffer device */ ··· 98 60 name = "platform-framebuffer"; 99 61 100 62 pd = platform_device_alloc(name, 0); 101 - if (!pd) 102 - return -ENOMEM; 63 + if (!pd) { 64 + ret = -ENOMEM; 65 + goto 
unlock_mutex; 66 + } 103 67 104 68 sysfb_apply_efi_quirks(pd); 105 69 ··· 113 73 if (ret) 114 74 goto err; 115 75 116 - return 0; 76 + goto unlock_mutex; 117 77 err: 118 78 platform_device_put(pd); 79 + unlock_mutex: 80 + mutex_unlock(&disable_lock); 119 81 return ret; 120 82 } 121 83
+8 -8
drivers/firmware/sysfb_simplefb.c
··· 57 57 return false; 58 58 } 59 59 60 - __init int sysfb_create_simplefb(const struct screen_info *si, 61 - const struct simplefb_platform_data *mode) 60 + __init struct platform_device *sysfb_create_simplefb(const struct screen_info *si, 61 + const struct simplefb_platform_data *mode) 62 62 { 63 63 struct platform_device *pd; 64 64 struct resource res; ··· 76 76 base |= (u64)si->ext_lfb_base << 32; 77 77 if (!base || (u64)(resource_size_t)base != base) { 78 78 printk(KERN_DEBUG "sysfb: inaccessible VRAM base\n"); 79 - return -EINVAL; 79 + return ERR_PTR(-EINVAL); 80 80 } 81 81 82 82 /* ··· 93 93 length = mode->height * mode->stride; 94 94 if (length > size) { 95 95 printk(KERN_WARNING "sysfb: VRAM smaller than advertised\n"); 96 - return -EINVAL; 96 + return ERR_PTR(-EINVAL); 97 97 } 98 98 length = PAGE_ALIGN(length); 99 99 ··· 104 104 res.start = base; 105 105 res.end = res.start + length - 1; 106 106 if (res.end <= res.start) 107 - return -EINVAL; 107 + return ERR_PTR(-EINVAL); 108 108 109 109 pd = platform_device_alloc("simple-framebuffer", 0); 110 110 if (!pd) 111 - return -ENOMEM; 111 + return ERR_PTR(-ENOMEM); 112 112 113 113 sysfb_apply_efi_quirks(pd); 114 114 ··· 124 124 if (ret) 125 125 goto err_put_device; 126 126 127 - return 0; 127 + return pd; 128 128 129 129 err_put_device: 130 130 platform_device_put(pd); 131 131 132 - return ret; 132 + return ERR_PTR(ret); 133 133 }
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
··· 714 714 { 715 715 bool all_hub = false; 716 716 717 - if (adev->family == AMDGPU_FAMILY_AI) 717 + if (adev->family == AMDGPU_FAMILY_AI || 718 + adev->family == AMDGPU_FAMILY_RV) 718 719 all_hub = true; 719 720 720 721 return amdgpu_gmc_flush_gpu_tlb_pasid(adev, pasid, flush_type, all_hub);
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
··· 5164 5164 */ 5165 5165 amdgpu_unregister_gpu_instance(tmp_adev); 5166 5166 5167 - drm_fb_helper_set_suspend_unlocked(adev_to_drm(adev)->fb_helper, true); 5167 + drm_fb_helper_set_suspend_unlocked(adev_to_drm(tmp_adev)->fb_helper, true); 5168 5168 5169 5169 /* disable ras on ALL IPs */ 5170 5170 if (!need_emergency_restart &&
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
··· 320 320 if (!amdgpu_device_has_dc_support(adev)) { 321 321 if (!adev->enable_virtual_display) 322 322 /* Disable vblank IRQs aggressively for power-saving */ 323 + /* XXX: can this be enabled for DC? */ 323 324 adev_to_drm(adev)->vblank_disable_immediate = true; 324 325 325 326 r = drm_vblank_init(adev_to_drm(adev), adev->mode_info.num_crtc);
-3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 4259 4259 } 4260 4260 } 4261 4261 4262 - /* Disable vblank IRQs aggressively for power-saving. */ 4263 - adev_to_drm(adev)->vblank_disable_immediate = true; 4264 - 4265 4262 /* loops over all connectors on the board */ 4266 4263 for (i = 0; i < link_cnt; i++) { 4267 4264 struct dc_link *link = NULL;
+3 -2
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 933 933 case I915_CONTEXT_PARAM_PERSISTENCE: 934 934 if (args->size) 935 935 ret = -EINVAL; 936 - ret = proto_context_set_persistence(fpriv->dev_priv, pc, 937 - args->value); 936 + else 937 + ret = proto_context_set_persistence(fpriv->dev_priv, pc, 938 + args->value); 938 939 break; 939 940 940 941 case I915_CONTEXT_PARAM_PROTECTED_CONTENT:
+3 -3
drivers/gpu/drm/i915/gem/i915_gem_domain.c
··· 35 35 if (obj->cache_dirty) 36 36 return false; 37 37 38 - if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE)) 39 - return true; 40 - 41 38 if (IS_DGFX(i915)) 42 39 return false; 40 + 41 + if (!(obj->cache_coherent & I915_BO_CACHE_COHERENT_FOR_WRITE)) 42 + return true; 43 43 44 44 /* Currently in use by HW (display engine)? Keep flushed. */ 45 45 return i915_gem_object_is_framebuffer(obj);
+15 -19
drivers/gpu/drm/i915/i915_driver.c
··· 530 530 static int i915_driver_hw_probe(struct drm_i915_private *dev_priv) 531 531 { 532 532 struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); 533 + struct pci_dev *root_pdev; 533 534 int ret; 534 535 535 536 if (i915_inject_probe_failure(dev_priv)) ··· 642 641 643 642 intel_bw_init_hw(dev_priv); 644 643 644 + /* 645 + * FIXME: Temporary hammer to avoid freezing the machine on our DGFX 646 + * This should be totally removed when we handle the pci states properly 647 + * on runtime PM and on s2idle cases. 648 + */ 649 + root_pdev = pcie_find_root_port(pdev); 650 + if (root_pdev) 651 + pci_d3cold_disable(root_pdev); 652 + 645 653 return 0; 646 654 647 655 err_msi: ··· 674 664 static void i915_driver_hw_remove(struct drm_i915_private *dev_priv) 675 665 { 676 666 struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); 667 + struct pci_dev *root_pdev; 677 668 678 669 i915_perf_fini(dev_priv); 679 670 680 671 if (pdev->msi_enabled) 681 672 pci_disable_msi(pdev); 673 + 674 + root_pdev = pcie_find_root_port(pdev); 675 + if (root_pdev) 676 + pci_d3cold_enable(root_pdev); 682 677 } 683 678 684 679 /** ··· 1208 1193 goto out; 1209 1194 } 1210 1195 1211 - /* 1212 - * FIXME: Temporary hammer to avoid freezing the machine on our DGFX 1213 - * This should be totally removed when we handle the pci states properly 1214 - * on runtime PM and on s2idle cases. 
1215 - */ 1216 - if (suspend_to_idle(dev_priv)) 1217 - pci_d3cold_disable(pdev); 1218 - 1219 1196 pci_disable_device(pdev); 1220 1197 /* 1221 1198 * During hibernation on some platforms the BIOS may try to access ··· 1371 1364 return -EIO; 1372 1365 1373 1366 pci_set_master(pdev); 1374 - 1375 - pci_d3cold_enable(pdev); 1376 1367 1377 1368 disable_rpm_wakeref_asserts(&dev_priv->runtime_pm); 1378 1369 ··· 1548 1543 { 1549 1544 struct drm_i915_private *dev_priv = kdev_to_i915(kdev); 1550 1545 struct intel_runtime_pm *rpm = &dev_priv->runtime_pm; 1551 - struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); 1552 1546 int ret; 1553 1547 1554 1548 if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv))) ··· 1593 1589 drm_err(&dev_priv->drm, 1594 1590 "Unclaimed access detected prior to suspending\n"); 1595 1591 1596 - /* 1597 - * FIXME: Temporary hammer to avoid freezing the machine on our DGFX 1598 - * This should be totally removed when we handle the pci states properly 1599 - * on runtime PM and on s2idle cases. 1600 - */ 1601 - pci_d3cold_disable(pdev); 1602 1592 rpm->suspended = true; 1603 1593 1604 1594 /* ··· 1631 1633 { 1632 1634 struct drm_i915_private *dev_priv = kdev_to_i915(kdev); 1633 1635 struct intel_runtime_pm *rpm = &dev_priv->runtime_pm; 1634 - struct pci_dev *pdev = to_pci_dev(dev_priv->drm.dev); 1635 1636 int ret; 1636 1637 1637 1638 if (drm_WARN_ON_ONCE(&dev_priv->drm, !HAS_RUNTIME_PM(dev_priv))) ··· 1643 1646 1644 1647 intel_opregion_notify_adapter(dev_priv, PCI_D0); 1645 1648 rpm->suspended = false; 1646 - pci_d3cold_enable(pdev); 1647 1649 if (intel_uncore_unclaimed_mmio(&dev_priv->uncore)) 1648 1650 drm_dbg(&dev_priv->drm, 1649 1651 "Unclaimed access during suspend, bios?\n");
+2 -1
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
··· 1251 1251 DPU_ATRACE_BEGIN("encoder_vblank_callback"); 1252 1252 dpu_enc = to_dpu_encoder_virt(drm_enc); 1253 1253 1254 + atomic_inc(&phy_enc->vsync_cnt); 1255 + 1254 1256 spin_lock_irqsave(&dpu_enc->enc_spinlock, lock_flags); 1255 1257 if (dpu_enc->crtc) 1256 1258 dpu_crtc_vblank_callback(dpu_enc->crtc); 1257 1259 spin_unlock_irqrestore(&dpu_enc->enc_spinlock, lock_flags); 1258 1260 1259 - atomic_inc(&phy_enc->vsync_cnt); 1260 1261 DPU_ATRACE_END("encoder_vblank_callback"); 1261 1262 } 1262 1263
+5 -5
drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c
··· 252 252 DPU_DEBUG("[atomic_check:%d, \"%s\",%d,%d]\n", 253 253 phys_enc->wb_idx, mode->name, mode->hdisplay, mode->vdisplay); 254 254 255 - if (!conn_state->writeback_job || !conn_state->writeback_job->fb) 256 - return 0; 257 - 258 - fb = conn_state->writeback_job->fb; 259 - 260 255 if (!conn_state || !conn_state->connector) { 261 256 DPU_ERROR("invalid connector state\n"); 262 257 return -EINVAL; ··· 261 266 conn_state->connector->status); 262 267 return -EINVAL; 263 268 } 269 + 270 + if (!conn_state->writeback_job || !conn_state->writeback_job->fb) 271 + return 0; 272 + 273 + fb = conn_state->writeback_job->fb; 264 274 265 275 DPU_DEBUG("[fb_id:%u][fb:%u,%u]\n", fb->base.id, 266 276 fb->width, fb->height);
+2
drivers/gpu/drm/msm/dp/dp_display.c
··· 316 316 317 317 dp_power_client_deinit(dp->power); 318 318 dp_aux_unregister(dp->aux); 319 + dp->drm_dev = NULL; 320 + dp->aux->drm_dev = NULL; 319 321 priv->dp[dp->id] = NULL; 320 322 } 321 323
+1 -1
drivers/gpu/drm/msm/msm_gem_submit.c
··· 928 928 INT_MAX, GFP_KERNEL); 929 929 } 930 930 if (submit->fence_id < 0) { 931 - ret = submit->fence_id = 0; 931 + ret = submit->fence_id; 932 932 submit->fence_id = 0; 933 933 } 934 934
+6 -3
drivers/gpu/drm/vc4/vc4_perfmon.c
··· 17 17 18 18 void vc4_perfmon_get(struct vc4_perfmon *perfmon) 19 19 { 20 - struct vc4_dev *vc4 = perfmon->dev; 20 + struct vc4_dev *vc4; 21 21 22 + if (!perfmon) 23 + return; 24 + 25 + vc4 = perfmon->dev; 22 26 if (WARN_ON_ONCE(vc4->is_vc5)) 23 27 return; 24 28 25 - if (perfmon) 26 - refcount_inc(&perfmon->refcnt); 29 + refcount_inc(&perfmon->refcnt); 27 30 } 28 31 29 32 void vc4_perfmon_put(struct vc4_perfmon *perfmon)
+8 -4
drivers/hwmon/ibmaem.c
··· 550 550 551 551 res = platform_device_add(data->pdev); 552 552 if (res) 553 - goto ipmi_err; 553 + goto dev_add_err; 554 554 555 555 platform_set_drvdata(data->pdev, data); 556 556 ··· 598 598 ipmi_destroy_user(data->ipmi.user); 599 599 ipmi_err: 600 600 platform_set_drvdata(data->pdev, NULL); 601 - platform_device_unregister(data->pdev); 601 + platform_device_del(data->pdev); 602 + dev_add_err: 603 + platform_device_put(data->pdev); 602 604 dev_err: 603 605 ida_free(&aem_ida, data->id); 604 606 id_err: ··· 692 690 693 691 res = platform_device_add(data->pdev); 694 692 if (res) 695 - goto ipmi_err; 693 + goto dev_add_err; 696 694 697 695 platform_set_drvdata(data->pdev, data); 698 696 ··· 740 738 ipmi_destroy_user(data->ipmi.user); 741 739 ipmi_err: 742 740 platform_set_drvdata(data->pdev, NULL); 743 - platform_device_unregister(data->pdev); 741 + platform_device_del(data->pdev); 742 + dev_add_err: 743 + platform_device_put(data->pdev); 744 744 dev_err: 745 745 ida_free(&aem_ida, data->id); 746 746 id_err:
+3 -2
drivers/hwmon/occ/common.c
··· 145 145 cmd[6] = 0; /* checksum lsb */ 146 146 147 147 /* mutex should already be locked if necessary */ 148 - rc = occ->send_cmd(occ, cmd, sizeof(cmd)); 148 + rc = occ->send_cmd(occ, cmd, sizeof(cmd), &occ->resp, sizeof(occ->resp)); 149 149 if (rc) { 150 150 occ->last_error = rc; 151 151 if (occ->error_count++ > OCC_ERROR_COUNT_THRESHOLD) ··· 182 182 { 183 183 int rc; 184 184 u8 cmd[8]; 185 + u8 resp[8]; 185 186 __be16 user_power_cap_be = cpu_to_be16(user_power_cap); 186 187 187 188 cmd[0] = 0; /* sequence number */ ··· 199 198 if (rc) 200 199 return rc; 201 200 202 - rc = occ->send_cmd(occ, cmd, sizeof(cmd)); 201 + rc = occ->send_cmd(occ, cmd, sizeof(cmd), resp, sizeof(resp)); 203 202 204 203 mutex_unlock(&occ->lock); 205 204
+2 -1
drivers/hwmon/occ/common.h
··· 96 96 97 97 int powr_sample_time_us; /* average power sample time */ 98 98 u8 poll_cmd_data; /* to perform OCC poll command */ 99 - int (*send_cmd)(struct occ *occ, u8 *cmd, size_t len); 99 + int (*send_cmd)(struct occ *occ, u8 *cmd, size_t len, void *resp, 100 + size_t resp_len); 100 101 101 102 unsigned long next_update; 102 103 struct mutex lock; /* lock OCC access */
+7 -6
drivers/hwmon/occ/p8_i2c.c
··· 111 111 be32_to_cpu(data1)); 112 112 } 113 113 114 - static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len) 114 + static int p8_i2c_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len, 115 + void *resp, size_t resp_len) 115 116 { 116 117 int i, rc; 117 118 unsigned long start; ··· 121 120 const long wait_time = msecs_to_jiffies(OCC_CMD_IN_PRG_WAIT_MS); 122 121 struct p8_i2c_occ *ctx = to_p8_i2c_occ(occ); 123 122 struct i2c_client *client = ctx->client; 124 - struct occ_response *resp = &occ->resp; 123 + struct occ_response *or = (struct occ_response *)resp; 125 124 126 125 start = jiffies; 127 126 ··· 152 151 return rc; 153 152 154 153 /* wait for OCC */ 155 - if (resp->return_status == OCC_RESP_CMD_IN_PRG) { 154 + if (or->return_status == OCC_RESP_CMD_IN_PRG) { 156 155 rc = -EALREADY; 157 156 158 157 if (time_after(jiffies, start + timeout)) ··· 164 163 } while (rc); 165 164 166 165 /* check the OCC response */ 167 - switch (resp->return_status) { 166 + switch (or->return_status) { 168 167 case OCC_RESP_CMD_IN_PRG: 169 168 rc = -ETIMEDOUT; 170 169 break; ··· 193 192 if (rc < 0) 194 193 return rc; 195 194 196 - data_length = get_unaligned_be16(&resp->data_length); 197 - if (data_length > OCC_RESP_DATA_BYTES) 195 + data_length = get_unaligned_be16(&or->data_length); 196 + if ((data_length + 7) > resp_len) 198 197 return -EMSGSIZE; 199 198 200 199 /* fetch the rest of the response data */
+3 -4
drivers/hwmon/occ/p9_sbe.c
··· 78 78 return notify; 79 79 } 80 80 81 - static int p9_sbe_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len) 81 + static int p9_sbe_occ_send_cmd(struct occ *occ, u8 *cmd, size_t len, 82 + void *resp, size_t resp_len) 82 83 { 83 - struct occ_response *resp = &occ->resp; 84 84 struct p9_sbe_occ *ctx = to_p9_sbe_occ(occ); 85 - size_t resp_len = sizeof(*resp); 86 85 int rc; 87 86 88 87 rc = fsi_occ_submit(ctx->sbe, cmd, len, resp, &resp_len); ··· 95 96 return rc; 96 97 } 97 98 98 - switch (resp->return_status) { 99 + switch (((struct occ_response *)resp)->return_status) { 99 100 case OCC_RESP_CMD_IN_PRG: 100 101 rc = -ETIMEDOUT; 101 102 break;
+1 -1
drivers/hwmon/pmbus/ucd9200.c
··· 148 148 * This only affects the READ_IOUT and READ_TEMPERATURE2 registers. 149 149 * READ_IOUT will return the sum of currents of all phases of a rail, 150 150 * and READ_TEMPERATURE2 will return the maximum temperature detected 151 - * for the the phases of the rail. 151 + * for the phases of the rail. 152 152 */ 153 153 for (i = 0; i < info->pages; i++) { 154 154 /*
-1
drivers/irqchip/irq-or1k-pic.c
··· 66 66 .name = "or1k-PIC-level", 67 67 .irq_unmask = or1k_pic_unmask, 68 68 .irq_mask = or1k_pic_mask, 69 - .irq_mask_ack = or1k_pic_mask_ack, 70 69 }, 71 70 .handle = handle_level_irq, 72 71 .flags = IRQ_LEVEL | IRQ_NOPROBE,
+18 -16
drivers/md/dm-raid.c
··· 1001 1001 static int validate_raid_redundancy(struct raid_set *rs) 1002 1002 { 1003 1003 unsigned int i, rebuild_cnt = 0; 1004 - unsigned int rebuilds_per_group = 0, copies; 1004 + unsigned int rebuilds_per_group = 0, copies, raid_disks; 1005 1005 unsigned int group_size, last_group_start; 1006 1006 1007 - for (i = 0; i < rs->md.raid_disks; i++) 1008 - if (!test_bit(In_sync, &rs->dev[i].rdev.flags) || 1009 - !rs->dev[i].rdev.sb_page) 1007 + for (i = 0; i < rs->raid_disks; i++) 1008 + if (!test_bit(FirstUse, &rs->dev[i].rdev.flags) && 1009 + ((!test_bit(In_sync, &rs->dev[i].rdev.flags) || 1010 + !rs->dev[i].rdev.sb_page))) 1010 1011 rebuild_cnt++; 1011 1012 1012 1013 switch (rs->md.level) { ··· 1047 1046 * A A B B C 1048 1047 * C D D E E 1049 1048 */ 1049 + raid_disks = min(rs->raid_disks, rs->md.raid_disks); 1050 1050 if (__is_raid10_near(rs->md.new_layout)) { 1051 - for (i = 0; i < rs->md.raid_disks; i++) { 1051 + for (i = 0; i < raid_disks; i++) { 1052 1052 if (!(i % copies)) 1053 1053 rebuilds_per_group = 0; 1054 1054 if ((!rs->dev[i].rdev.sb_page || ··· 1072 1070 * results in the need to treat the last (potentially larger) 1073 1071 * set differently. 
1074 1072 */ 1075 - group_size = (rs->md.raid_disks / copies); 1076 - last_group_start = (rs->md.raid_disks / group_size) - 1; 1073 + group_size = (raid_disks / copies); 1074 + last_group_start = (raid_disks / group_size) - 1; 1077 1075 last_group_start *= group_size; 1078 - for (i = 0; i < rs->md.raid_disks; i++) { 1076 + for (i = 0; i < raid_disks; i++) { 1079 1077 if (!(i % copies) && !(i > last_group_start)) 1080 1078 rebuilds_per_group = 0; 1081 1079 if ((!rs->dev[i].rdev.sb_page || ··· 1590 1588 { 1591 1589 int i; 1592 1590 1593 - for (i = 0; i < rs->md.raid_disks; i++) { 1591 + for (i = 0; i < rs->raid_disks; i++) { 1594 1592 struct md_rdev *rdev = &rs->dev[i].rdev; 1595 1593 1596 1594 if (!test_bit(Journal, &rdev->flags) && ··· 3768 3766 unsigned int i; 3769 3767 int r = 0; 3770 3768 3771 - for (i = 0; !r && i < rs->md.raid_disks; i++) 3772 - if (rs->dev[i].data_dev) 3773 - r = fn(ti, 3774 - rs->dev[i].data_dev, 3775 - 0, /* No offset on data devs */ 3776 - rs->md.dev_sectors, 3777 - data); 3769 + for (i = 0; !r && i < rs->raid_disks; i++) { 3770 + if (rs->dev[i].data_dev) { 3771 + r = fn(ti, rs->dev[i].data_dev, 3772 + 0, /* No offset on data devs */ 3773 + rs->md.dev_sectors, data); 3774 + } 3775 + } 3778 3776 3779 3777 return r; 3780 3778 }
+5 -1
drivers/md/raid5.c
··· 7933 7933 int err = 0; 7934 7934 int number = rdev->raid_disk; 7935 7935 struct md_rdev __rcu **rdevp; 7936 - struct disk_info *p = conf->disks + number; 7936 + struct disk_info *p; 7937 7937 struct md_rdev *tmp; 7938 7938 7939 7939 print_raid5_conf(conf); ··· 7952 7952 log_exit(conf); 7953 7953 return 0; 7954 7954 } 7955 + if (unlikely(number >= conf->pool_size)) 7956 + return 0; 7957 + p = conf->disks + number; 7955 7958 if (rdev == rcu_access_pointer(p->rdev)) 7956 7959 rdevp = &p->rdev; 7957 7960 else if (rdev == rcu_access_pointer(p->replacement)) ··· 8065 8062 */ 8066 8063 if (rdev->saved_raid_disk >= 0 && 8067 8064 rdev->saved_raid_disk >= first && 8065 + rdev->saved_raid_disk <= last && 8068 8066 conf->disks[rdev->saved_raid_disk].rdev == NULL) 8069 8067 first = rdev->saved_raid_disk; 8070 8068
+1
drivers/net/Kconfig
··· 94 94 select CRYPTO_CURVE25519_NEON if ARM && KERNEL_MODE_NEON 95 95 select CRYPTO_CHACHA_MIPS if CPU_MIPS32_R2 96 96 select CRYPTO_POLY1305_MIPS if MIPS 97 + select CRYPTO_CHACHA_S390 if S390 97 98 help 98 99 WireGuard is a secure, fast, and easy to use replacement for IPSec 99 100 that uses modern cryptography and clever networking tricks. It's
-1
drivers/net/can/grcan.c
··· 1646 1646 */ 1647 1647 sysid_parent = of_find_node_by_path("/ambapp0"); 1648 1648 if (sysid_parent) { 1649 - of_node_get(sysid_parent); 1650 1649 err = of_property_read_u32(sysid_parent, "systemid", &sysid); 1651 1650 if (!err && ((sysid & GRLIB_VERSION_MASK) >= 1652 1651 GRCAN_TXBUG_SAFE_GRLIB_VERSION))
+5 -3
drivers/net/can/m_can/m_can.c
··· 529 529 /* acknowledge rx fifo 0 */ 530 530 m_can_write(cdev, M_CAN_RXF0A, fgi); 531 531 532 - timestamp = FIELD_GET(RX_BUF_RXTS_MASK, fifo_header.dlc); 532 + timestamp = FIELD_GET(RX_BUF_RXTS_MASK, fifo_header.dlc) << 16; 533 533 534 534 m_can_receive_skb(cdev, skb, timestamp); 535 535 ··· 1030 1030 } 1031 1031 1032 1032 msg_mark = FIELD_GET(TX_EVENT_MM_MASK, txe); 1033 - timestamp = FIELD_GET(TX_EVENT_TXTS_MASK, txe); 1033 + timestamp = FIELD_GET(TX_EVENT_TXTS_MASK, txe) << 16; 1034 1034 1035 1035 /* ack txe element */ 1036 1036 m_can_write(cdev, M_CAN_TXEFA, FIELD_PREP(TXEFA_EFAI_MASK, ··· 1351 1351 /* enable internal timestamp generation, with a prescaler of 16. The 1352 1352 * prescaler is applied to the nominal bit timing 1353 1353 */ 1354 - m_can_write(cdev, M_CAN_TSCC, FIELD_PREP(TSCC_TCP_MASK, 0xf)); 1354 + m_can_write(cdev, M_CAN_TSCC, 1355 + FIELD_PREP(TSCC_TCP_MASK, 0xf) | 1356 + FIELD_PREP(TSCC_TSS_MASK, TSCC_TSS_INTERNAL)); 1355 1357 1356 1358 m_can_config_endisable(cdev, false); 1357 1359
+4 -1
drivers/net/can/rcar/rcar_canfd.c
··· 1332 1332 cfg = (RCANFD_DCFG_DTSEG1(gpriv, tseg1) | RCANFD_DCFG_DBRP(brp) | 1333 1333 RCANFD_DCFG_DSJW(sjw) | RCANFD_DCFG_DTSEG2(gpriv, tseg2)); 1334 1334 1335 - rcar_canfd_write(priv->base, RCANFD_F_DCFG(ch), cfg); 1335 + if (is_v3u(gpriv)) 1336 + rcar_canfd_write(priv->base, RCANFD_V3U_DCFG(ch), cfg); 1337 + else 1338 + rcar_canfd_write(priv->base, RCANFD_F_DCFG(ch), cfg); 1336 1339 netdev_dbg(priv->ndev, "drate: brp %u, sjw %u, tseg1 %u, tseg2 %u\n", 1337 1340 brp, sjw, tseg1, tseg2); 1338 1341 } else {
+4 -2
drivers/net/can/spi/mcp251xfd/mcp251xfd-core.c
··· 12 12 // Copyright (c) 2019 Martin Sperl <kernel@martin.sperl.org> 13 13 // 14 14 15 + #include <asm/unaligned.h> 15 16 #include <linux/bitfield.h> 16 17 #include <linux/clk.h> 17 18 #include <linux/device.h> ··· 1651 1650 netif_stop_queue(ndev); 1652 1651 set_bit(MCP251XFD_FLAGS_DOWN, priv->flags); 1653 1652 hrtimer_cancel(&priv->rx_irq_timer); 1653 + hrtimer_cancel(&priv->tx_irq_timer); 1654 1654 mcp251xfd_chip_interrupts_disable(priv); 1655 1655 free_irq(ndev->irq, priv); 1656 1656 can_rx_offload_disable(&priv->offload); ··· 1779 1777 xfer[0].len = sizeof(buf_tx->cmd); 1780 1778 xfer[0].speed_hz = priv->spi_max_speed_hz_slow; 1781 1779 xfer[1].rx_buf = buf_rx->data; 1782 - xfer[1].len = sizeof(dev_id); 1780 + xfer[1].len = sizeof(*dev_id); 1783 1781 xfer[1].speed_hz = priv->spi_max_speed_hz_fast; 1784 1782 1785 1783 mcp251xfd_spi_cmd_read_nocrc(&buf_tx->cmd, MCP251XFD_REG_DEVID); ··· 1788 1786 if (err) 1789 1787 goto out_kfree_buf_tx; 1790 1788 1791 - *dev_id = be32_to_cpup((__be32 *)buf_rx->data); 1789 + *dev_id = get_unaligned_le32(buf_rx->data); 1792 1790 *effective_speed_hz_slow = xfer[0].effective_speed_hz; 1793 1791 *effective_speed_hz_fast = xfer[1].effective_speed_hz; 1794 1792
+11 -11
drivers/net/can/spi/mcp251xfd/mcp251xfd-regmap.c
··· 334 334 * register. It increments once per SYS clock tick, 335 335 * which is 20 or 40 MHz. 336 336 * 337 - * Observation shows that if the lowest byte (which is 338 - * transferred first on the SPI bus) of that register 339 - * is 0x00 or 0x80 the calculated CRC doesn't always 340 - * match the transferred one. 337 + * Observation on the mcp2518fd shows that if the 338 + * lowest byte (which is transferred first on the SPI 339 + * bus) of that register is 0x00 or 0x80 the 340 + * calculated CRC doesn't always match the transferred 341 + * one. On the mcp2517fd this problem is not limited 342 + * to the first byte being 0x00 or 0x80. 341 343 * 342 344 * If the highest bit in the lowest byte is flipped 343 345 * the transferred CRC matches the calculated one. We 344 - * assume for now the CRC calculation in the chip 345 - * works on wrong data and the transferred data is 346 - * correct. 346 + * assume for now the CRC operates on the correct 347 + * data. 347 348 */ 348 349 if (reg == MCP251XFD_REG_TBC && 349 - (buf_rx->data[0] == 0x0 || buf_rx->data[0] == 0x80)) { 350 + ((buf_rx->data[0] & 0xf8) == 0x0 || 351 + (buf_rx->data[0] & 0xf8) == 0x80)) { 350 352 /* Flip highest bit in lowest byte of le32 */ 351 353 buf_rx->data[0] ^= 0x80; 352 354 ··· 358 356 val_len); 359 357 if (!err) { 360 358 /* If CRC is now correct, assume 361 - * transferred data was OK, flip bit 362 - * back to original value. 359 + * flipped data is OK. 363 360 */ 364 - buf_rx->data[0] ^= 0x80; 365 361 goto out; 366 362 } 367 363 }
+21 -2
drivers/net/can/usb/gs_usb.c
··· 268 268 269 269 struct usb_anchor tx_submitted; 270 270 atomic_t active_tx_urbs; 271 + void *rxbuf[GS_MAX_RX_URBS]; 272 + dma_addr_t rxbuf_dma[GS_MAX_RX_URBS]; 271 273 }; 272 274 273 275 /* usb interface struct */ ··· 744 742 for (i = 0; i < GS_MAX_RX_URBS; i++) { 745 743 struct urb *urb; 746 744 u8 *buf; 745 + dma_addr_t buf_dma; 747 746 748 747 /* alloc rx urb */ 749 748 urb = usb_alloc_urb(0, GFP_KERNEL); ··· 755 752 buf = usb_alloc_coherent(dev->udev, 756 753 dev->parent->hf_size_rx, 757 754 GFP_KERNEL, 758 - &urb->transfer_dma); 755 + &buf_dma); 759 756 if (!buf) { 760 757 netdev_err(netdev, 761 758 "No memory left for USB buffer\n"); 762 759 usb_free_urb(urb); 763 760 return -ENOMEM; 764 761 } 762 + 763 + urb->transfer_dma = buf_dma; 765 764 766 765 /* fill, anchor, and submit rx urb */ 767 766 usb_fill_bulk_urb(urb, ··· 786 781 "usb_submit failed (err=%d)\n", rc); 787 782 788 783 usb_unanchor_urb(urb); 784 + usb_free_coherent(dev->udev, 785 + sizeof(struct gs_host_frame), 786 + buf, 787 + buf_dma); 789 788 usb_free_urb(urb); 790 789 break; 791 790 } 791 + 792 + dev->rxbuf[i] = buf; 793 + dev->rxbuf_dma[i] = buf_dma; 792 794 793 795 /* Drop reference, 794 796 * USB core will take care of freeing it ··· 854 842 int rc; 855 843 struct gs_can *dev = netdev_priv(netdev); 856 844 struct gs_usb *parent = dev->parent; 845 + unsigned int i; 857 846 858 847 netif_stop_queue(netdev); 859 848 860 849 /* Stop polling */ 861 850 parent->active_channels--; 862 - if (!parent->active_channels) 851 + if (!parent->active_channels) { 863 852 usb_kill_anchored_urbs(&parent->rx_submitted); 853 + for (i = 0; i < GS_MAX_RX_URBS; i++) 854 + usb_free_coherent(dev->udev, 855 + sizeof(struct gs_host_frame), 856 + dev->rxbuf[i], 857 + dev->rxbuf_dma[i]); 858 + } 864 859 865 860 /* Stop sending URBs */ 866 861 usb_kill_anchored_urbs(&dev->tx_submitted);
+15 -10
drivers/net/can/usb/kvaser_usb/kvaser_usb.h
··· 35 35 #define KVASER_USB_RX_BUFFER_SIZE 3072 36 36 #define KVASER_USB_MAX_NET_DEVICES 5 37 37 38 - /* USB devices features */ 39 - #define KVASER_USB_HAS_SILENT_MODE BIT(0) 40 - #define KVASER_USB_HAS_TXRX_ERRORS BIT(1) 38 + /* Kvaser USB device quirks */ 39 + #define KVASER_USB_QUIRK_HAS_SILENT_MODE BIT(0) 40 + #define KVASER_USB_QUIRK_HAS_TXRX_ERRORS BIT(1) 41 + #define KVASER_USB_QUIRK_IGNORE_CLK_FREQ BIT(2) 41 42 42 43 /* Device capabilities */ 43 44 #define KVASER_USB_CAP_BERR_CAP 0x01 ··· 66 65 struct kvaser_usb_dev_card_data { 67 66 u32 ctrlmode_supported; 68 67 u32 capabilities; 69 - union { 70 - struct { 71 - enum kvaser_usb_leaf_family family; 72 - } leaf; 73 - struct kvaser_usb_dev_card_data_hydra hydra; 74 - }; 68 + struct kvaser_usb_dev_card_data_hydra hydra; 75 69 }; 76 70 77 71 /* Context for an outstanding, not yet ACKed, transmission */ ··· 79 83 struct usb_device *udev; 80 84 struct usb_interface *intf; 81 85 struct kvaser_usb_net_priv *nets[KVASER_USB_MAX_NET_DEVICES]; 82 - const struct kvaser_usb_dev_ops *ops; 86 + const struct kvaser_usb_driver_info *driver_info; 83 87 const struct kvaser_usb_dev_cfg *cfg; 84 88 85 89 struct usb_endpoint_descriptor *bulk_in, *bulk_out; ··· 161 165 u16 transid); 162 166 }; 163 167 168 + struct kvaser_usb_driver_info { 169 + u32 quirks; 170 + enum kvaser_usb_leaf_family family; 171 + const struct kvaser_usb_dev_ops *ops; 172 + }; 173 + 164 174 struct kvaser_usb_dev_cfg { 165 175 const struct can_clock clock; 166 176 const unsigned int timestamp_freq; ··· 186 184 int len); 187 185 188 186 int kvaser_usb_can_rx_over_error(struct net_device *netdev); 187 + 188 + extern const struct can_bittiming_const kvaser_usb_flexc_bittiming_const; 189 + 189 190 #endif /* KVASER_USB_H */
+158 -127
drivers/net/can/usb/kvaser_usb/kvaser_usb_core.c
··· 61 61 #define USB_USBCAN_R_V2_PRODUCT_ID 294 62 62 #define USB_LEAF_LIGHT_R_V2_PRODUCT_ID 295 63 63 #define USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID 296 64 - #define USB_LEAF_PRODUCT_ID_END \ 65 - USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID 66 64 67 65 /* Kvaser USBCan-II devices product ids */ 68 66 #define USB_USBCAN_REVB_PRODUCT_ID 2 ··· 87 89 #define USB_USBCAN_PRO_4HS_PRODUCT_ID 276 88 90 #define USB_HYBRID_CANLIN_PRODUCT_ID 277 89 91 #define USB_HYBRID_PRO_CANLIN_PRODUCT_ID 278 90 - #define USB_HYDRA_PRODUCT_ID_END \ 91 - USB_HYBRID_PRO_CANLIN_PRODUCT_ID 92 92 93 - static inline bool kvaser_is_leaf(const struct usb_device_id *id) 94 - { 95 - return (id->idProduct >= USB_LEAF_DEVEL_PRODUCT_ID && 96 - id->idProduct <= USB_CAN_R_PRODUCT_ID) || 97 - (id->idProduct >= USB_LEAF_LITE_V2_PRODUCT_ID && 98 - id->idProduct <= USB_LEAF_PRODUCT_ID_END); 99 - } 93 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_hydra = { 94 + .quirks = 0, 95 + .ops = &kvaser_usb_hydra_dev_ops, 96 + }; 100 97 101 - static inline bool kvaser_is_usbcan(const struct usb_device_id *id) 102 - { 103 - return id->idProduct >= USB_USBCAN_REVB_PRODUCT_ID && 104 - id->idProduct <= USB_MEMORATOR_PRODUCT_ID; 105 - } 98 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_usbcan = { 99 + .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS | 100 + KVASER_USB_QUIRK_HAS_SILENT_MODE, 101 + .family = KVASER_USBCAN, 102 + .ops = &kvaser_usb_leaf_dev_ops, 103 + }; 106 104 107 - static inline bool kvaser_is_hydra(const struct usb_device_id *id) 108 - { 109 - return id->idProduct >= USB_BLACKBIRD_V2_PRODUCT_ID && 110 - id->idProduct <= USB_HYDRA_PRODUCT_ID_END; 111 - } 105 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf = { 106 + .quirks = KVASER_USB_QUIRK_IGNORE_CLK_FREQ, 107 + .family = KVASER_LEAF, 108 + .ops = &kvaser_usb_leaf_dev_ops, 109 + }; 110 + 111 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err = { 112 + .quirks = 
KVASER_USB_QUIRK_HAS_TXRX_ERRORS | 113 + KVASER_USB_QUIRK_IGNORE_CLK_FREQ, 114 + .family = KVASER_LEAF, 115 + .ops = &kvaser_usb_leaf_dev_ops, 116 + }; 117 + 118 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leaf_err_listen = { 119 + .quirks = KVASER_USB_QUIRK_HAS_TXRX_ERRORS | 120 + KVASER_USB_QUIRK_HAS_SILENT_MODE | 121 + KVASER_USB_QUIRK_IGNORE_CLK_FREQ, 122 + .family = KVASER_LEAF, 123 + .ops = &kvaser_usb_leaf_dev_ops, 124 + }; 125 + 126 + static const struct kvaser_usb_driver_info kvaser_usb_driver_info_leafimx = { 127 + .quirks = 0, 128 + .ops = &kvaser_usb_leaf_dev_ops, 129 + }; 112 130 113 131 static const struct usb_device_id kvaser_usb_table[] = { 114 - /* Leaf USB product IDs */ 115 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID) }, 116 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID) }, 132 + /* Leaf M32C USB product IDs */ 133 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_DEVEL_PRODUCT_ID), 134 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, 135 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_PRODUCT_ID), 136 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, 117 137 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_PRODUCT_ID), 118 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 119 - KVASER_USB_HAS_SILENT_MODE }, 138 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 120 139 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_PRODUCT_ID), 121 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 122 - KVASER_USB_HAS_SILENT_MODE }, 140 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 123 141 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LS_PRODUCT_ID), 124 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 125 - KVASER_USB_HAS_SILENT_MODE }, 142 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 126 143 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_SWC_PRODUCT_ID), 127 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 128 - 
KVASER_USB_HAS_SILENT_MODE }, 144 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 129 145 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_LIN_PRODUCT_ID), 130 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 131 - KVASER_USB_HAS_SILENT_MODE }, 146 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 132 147 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_LS_PRODUCT_ID), 133 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 134 - KVASER_USB_HAS_SILENT_MODE }, 148 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 135 149 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_SPRO_SWC_PRODUCT_ID), 136 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 137 - KVASER_USB_HAS_SILENT_MODE }, 150 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 138 151 { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_DEVEL_PRODUCT_ID), 139 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 140 - KVASER_USB_HAS_SILENT_MODE }, 152 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 141 153 { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSHS_PRODUCT_ID), 142 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 143 - KVASER_USB_HAS_SILENT_MODE }, 154 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 144 155 { USB_DEVICE(KVASER_VENDOR_ID, USB_UPRO_HSHS_PRODUCT_ID), 145 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 146 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID) }, 156 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 157 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_GI_PRODUCT_ID), 158 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf }, 147 159 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_OBDII_PRODUCT_ID), 148 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS | 149 - KVASER_USB_HAS_SILENT_MODE }, 160 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err_listen }, 150 161 { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO2_HSLS_PRODUCT_ID), 151 - 
.driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 162 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 152 163 { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_CH_PRODUCT_ID), 153 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 164 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 154 165 { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_SPRO_PRODUCT_ID), 155 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 166 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 156 167 { USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_MERCURY_PRODUCT_ID), 157 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 168 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 158 169 { USB_DEVICE(KVASER_VENDOR_ID, USB_OEM_LEAF_PRODUCT_ID), 159 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 170 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 160 171 { USB_DEVICE(KVASER_VENDOR_ID, USB_CAN_R_PRODUCT_ID), 161 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 162 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID) }, 163 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID) }, 164 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID) }, 165 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID) }, 166 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID) }, 167 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_R_V2_PRODUCT_ID) }, 168 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_R_V2_PRODUCT_ID) }, 169 - { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID) }, 172 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leaf_err }, 173 + 174 + /* Leaf i.MX28 USB product IDs */ 175 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LITE_V2_PRODUCT_ID), 176 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 177 + { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_HS_PRODUCT_ID), 178 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 179 + { 
USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM_PRODUCT_ID), 180 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 181 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_2HS_PRODUCT_ID), 182 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 183 + { USB_DEVICE(KVASER_VENDOR_ID, USB_MINI_PCIE_2HS_PRODUCT_ID), 184 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 185 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_R_V2_PRODUCT_ID), 186 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 187 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_R_V2_PRODUCT_ID), 188 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 189 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_LIGHT_HS_V2_OEM2_PRODUCT_ID), 190 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_leafimx }, 170 191 171 192 /* USBCANII USB product IDs */ 172 193 { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN2_PRODUCT_ID), 173 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 194 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, 174 195 { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_REVB_PRODUCT_ID), 175 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 196 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, 176 197 { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMORATOR_PRODUCT_ID), 177 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 198 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, 178 199 { USB_DEVICE(KVASER_VENDOR_ID, USB_VCI2_PRODUCT_ID), 179 - .driver_info = KVASER_USB_HAS_TXRX_ERRORS }, 200 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_usbcan }, 180 201 181 202 /* Minihydra USB product IDs */ 182 - { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID) }, 183 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID) }, 184 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID) }, 185 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID) }, 186 - { 
USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID) }, 187 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID) }, 188 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID) }, 189 - { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_2HS_V2_PRODUCT_ID) }, 190 - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_2CANLIN_PRODUCT_ID) }, 191 - { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID) }, 192 - { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID) }, 193 - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_2CANLIN_PRODUCT_ID) }, 194 - { USB_DEVICE(KVASER_VENDOR_ID, USB_U100_PRODUCT_ID) }, 195 - { USB_DEVICE(KVASER_VENDOR_ID, USB_U100P_PRODUCT_ID) }, 196 - { USB_DEVICE(KVASER_VENDOR_ID, USB_U100S_PRODUCT_ID) }, 197 - { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_4HS_PRODUCT_ID) }, 198 - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID) }, 199 - { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID) }, 203 + { USB_DEVICE(KVASER_VENDOR_ID, USB_BLACKBIRD_V2_PRODUCT_ID), 204 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 205 + { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_PRO_5HS_PRODUCT_ID), 206 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 207 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_5HS_PRODUCT_ID), 208 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 209 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_LIGHT_4HS_PRODUCT_ID), 210 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 211 + { USB_DEVICE(KVASER_VENDOR_ID, USB_LEAF_PRO_HS_V2_PRODUCT_ID), 212 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 213 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_2HS_V2_PRODUCT_ID), 214 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 215 + { USB_DEVICE(KVASER_VENDOR_ID, USB_MEMO_2HS_PRODUCT_ID), 216 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 217 + { USB_DEVICE(KVASER_VENDOR_ID, 
USB_MEMO_PRO_2HS_V2_PRODUCT_ID), 218 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 219 + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_2CANLIN_PRODUCT_ID), 220 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 221 + { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_USBCAN_PRO_2HS_V2_PRODUCT_ID), 222 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 223 + { USB_DEVICE(KVASER_VENDOR_ID, USB_ATI_MEMO_PRO_2HS_V2_PRODUCT_ID), 224 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 225 + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_2CANLIN_PRODUCT_ID), 226 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 227 + { USB_DEVICE(KVASER_VENDOR_ID, USB_U100_PRODUCT_ID), 228 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 229 + { USB_DEVICE(KVASER_VENDOR_ID, USB_U100P_PRODUCT_ID), 230 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 231 + { USB_DEVICE(KVASER_VENDOR_ID, USB_U100S_PRODUCT_ID), 232 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 233 + { USB_DEVICE(KVASER_VENDOR_ID, USB_USBCAN_PRO_4HS_PRODUCT_ID), 234 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 235 + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_CANLIN_PRODUCT_ID), 236 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 237 + { USB_DEVICE(KVASER_VENDOR_ID, USB_HYBRID_PRO_CANLIN_PRODUCT_ID), 238 + .driver_info = (kernel_ulong_t)&kvaser_usb_driver_info_hydra }, 200 239 { } 201 240 }; 202 241 MODULE_DEVICE_TABLE(usb, kvaser_usb_table); ··· 320 285 static void kvaser_usb_read_bulk_callback(struct urb *urb) 321 286 { 322 287 struct kvaser_usb *dev = urb->context; 288 + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; 323 289 int err; 324 290 unsigned int i; 325 291 ··· 337 301 goto resubmit_urb; 338 302 } 339 303 340 - dev->ops->dev_read_bulk_callback(dev, urb->transfer_buffer, 341 - urb->actual_length); 304 + ops->dev_read_bulk_callback(dev, 
urb->transfer_buffer, 305 + urb->actual_length); 342 306 343 307 resubmit_urb: 344 308 usb_fill_bulk_urb(urb, dev->udev, ··· 432 396 { 433 397 struct kvaser_usb_net_priv *priv = netdev_priv(netdev); 434 398 struct kvaser_usb *dev = priv->dev; 399 + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; 435 400 int err; 436 401 437 402 err = open_candev(netdev); ··· 443 406 if (err) 444 407 goto error; 445 408 446 - err = dev->ops->dev_set_opt_mode(priv); 409 + err = ops->dev_set_opt_mode(priv); 447 410 if (err) 448 411 goto error; 449 412 450 - err = dev->ops->dev_start_chip(priv); 413 + err = ops->dev_start_chip(priv); 451 414 if (err) { 452 415 netdev_warn(netdev, "Cannot start device, error %d\n", err); 453 416 goto error; ··· 504 467 { 505 468 struct kvaser_usb_net_priv *priv = netdev_priv(netdev); 506 469 struct kvaser_usb *dev = priv->dev; 470 + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; 507 471 int err; 508 472 509 473 netif_stop_queue(netdev); 510 474 511 - err = dev->ops->dev_flush_queue(priv); 475 + err = ops->dev_flush_queue(priv); 512 476 if (err) 513 477 netdev_warn(netdev, "Cannot flush queue, error %d\n", err); 514 478 515 - if (dev->ops->dev_reset_chip) { 516 - err = dev->ops->dev_reset_chip(dev, priv->channel); 479 + if (ops->dev_reset_chip) { 480 + err = ops->dev_reset_chip(dev, priv->channel); 517 481 if (err) 518 482 netdev_warn(netdev, "Cannot reset card, error %d\n", 519 483 err); 520 484 } 521 485 522 - err = dev->ops->dev_stop_chip(priv); 486 + err = ops->dev_stop_chip(priv); 523 487 if (err) 524 488 netdev_warn(netdev, "Cannot stop device, error %d\n", err); 525 489 ··· 559 521 { 560 522 struct kvaser_usb_net_priv *priv = netdev_priv(netdev); 561 523 struct kvaser_usb *dev = priv->dev; 524 + const struct kvaser_usb_dev_ops *ops = dev->driver_info->ops; 562 525 struct net_device_stats *stats = &netdev->stats; 563 526 struct kvaser_usb_tx_urb_context *context = NULL; 564 527 struct urb *urb; ··· 602 563 goto freeurb; 
603 564 } 604 565 605 - buf = dev->ops->dev_frame_to_cmd(priv, skb, &cmd_len, 606 - context->echo_index); 566 + buf = ops->dev_frame_to_cmd(priv, skb, &cmd_len, context->echo_index); 607 567 if (!buf) { 608 568 stats->tx_dropped++; 609 569 dev_kfree_skb(skb); ··· 686 648 } 687 649 } 688 650 689 - static int kvaser_usb_init_one(struct kvaser_usb *dev, 690 - const struct usb_device_id *id, int channel) 651 + static int kvaser_usb_init_one(struct kvaser_usb *dev, int channel) 691 652 { 692 653 struct net_device *netdev; 693 654 struct kvaser_usb_net_priv *priv; 655 + const struct kvaser_usb_driver_info *driver_info = dev->driver_info; 656 + const struct kvaser_usb_dev_ops *ops = driver_info->ops; 694 657 int err; 695 658 696 - if (dev->ops->dev_reset_chip) { 697 - err = dev->ops->dev_reset_chip(dev, channel); 659 + if (ops->dev_reset_chip) { 660 + err = ops->dev_reset_chip(dev, channel); 698 661 if (err) 699 662 return err; 700 663 } ··· 724 685 priv->can.state = CAN_STATE_STOPPED; 725 686 priv->can.clock.freq = dev->cfg->clock.freq; 726 687 priv->can.bittiming_const = dev->cfg->bittiming_const; 727 - priv->can.do_set_bittiming = dev->ops->dev_set_bittiming; 728 - priv->can.do_set_mode = dev->ops->dev_set_mode; 729 - if ((id->driver_info & KVASER_USB_HAS_TXRX_ERRORS) || 688 + priv->can.do_set_bittiming = ops->dev_set_bittiming; 689 + priv->can.do_set_mode = ops->dev_set_mode; 690 + if ((driver_info->quirks & KVASER_USB_QUIRK_HAS_TXRX_ERRORS) || 730 691 (priv->dev->card_data.capabilities & KVASER_USB_CAP_BERR_CAP)) 731 - priv->can.do_get_berr_counter = dev->ops->dev_get_berr_counter; 732 - if (id->driver_info & KVASER_USB_HAS_SILENT_MODE) 692 + priv->can.do_get_berr_counter = ops->dev_get_berr_counter; 693 + if (driver_info->quirks & KVASER_USB_QUIRK_HAS_SILENT_MODE) 733 694 priv->can.ctrlmode_supported |= CAN_CTRLMODE_LISTENONLY; 734 695 735 696 priv->can.ctrlmode_supported |= dev->card_data.ctrlmode_supported; 736 697 737 698 if (priv->can.ctrlmode_supported & 
CAN_CTRLMODE_FD) { 738 699 priv->can.data_bittiming_const = dev->cfg->data_bittiming_const; 739 - priv->can.do_set_data_bittiming = 740 - dev->ops->dev_set_data_bittiming; 700 + priv->can.do_set_data_bittiming = ops->dev_set_data_bittiming; 741 701 } 742 702 743 703 netdev->flags |= IFF_ECHO; ··· 767 729 struct kvaser_usb *dev; 768 730 int err; 769 731 int i; 732 + const struct kvaser_usb_driver_info *driver_info; 733 + const struct kvaser_usb_dev_ops *ops; 734 + 735 + driver_info = (const struct kvaser_usb_driver_info *)id->driver_info; 736 + if (!driver_info) 737 + return -ENODEV; 770 738 771 739 dev = devm_kzalloc(&intf->dev, sizeof(*dev), GFP_KERNEL); 772 740 if (!dev) 773 741 return -ENOMEM; 774 742 775 - if (kvaser_is_leaf(id)) { 776 - dev->card_data.leaf.family = KVASER_LEAF; 777 - dev->ops = &kvaser_usb_leaf_dev_ops; 778 - } else if (kvaser_is_usbcan(id)) { 779 - dev->card_data.leaf.family = KVASER_USBCAN; 780 - dev->ops = &kvaser_usb_leaf_dev_ops; 781 - } else if (kvaser_is_hydra(id)) { 782 - dev->ops = &kvaser_usb_hydra_dev_ops; 783 - } else { 784 - dev_err(&intf->dev, 785 - "Product ID (%d) is not a supported Kvaser USB device\n", 786 - id->idProduct); 787 - return -ENODEV; 788 - } 789 - 790 743 dev->intf = intf; 744 + dev->driver_info = driver_info; 745 + ops = driver_info->ops; 791 746 792 - err = dev->ops->dev_setup_endpoints(dev); 747 + err = ops->dev_setup_endpoints(dev); 793 748 if (err) { 794 749 dev_err(&intf->dev, "Cannot get usb endpoint(s)"); 795 750 return err; ··· 796 765 797 766 dev->card_data.ctrlmode_supported = 0; 798 767 dev->card_data.capabilities = 0; 799 - err = dev->ops->dev_init_card(dev); 768 + err = ops->dev_init_card(dev); 800 769 if (err) { 801 770 dev_err(&intf->dev, 802 771 "Failed to initialize card, error %d\n", err); 803 772 return err; 804 773 } 805 774 806 - err = dev->ops->dev_get_software_info(dev); 775 + err = ops->dev_get_software_info(dev); 807 776 if (err) { 808 777 dev_err(&intf->dev, 809 778 "Cannot get software 
info, error %d\n", err); 810 779 return err; 811 780 } 812 781 813 - if (dev->ops->dev_get_software_details) { 814 - err = dev->ops->dev_get_software_details(dev); 782 + if (ops->dev_get_software_details) { 783 + err = ops->dev_get_software_details(dev); 815 784 if (err) { 816 785 dev_err(&intf->dev, 817 786 "Cannot get software details, error %d\n", err); ··· 829 798 830 799 dev_dbg(&intf->dev, "Max outstanding tx = %d URBs\n", dev->max_tx_urbs); 831 800 832 - err = dev->ops->dev_get_card_info(dev); 801 + err = ops->dev_get_card_info(dev); 833 802 if (err) { 834 803 dev_err(&intf->dev, "Cannot get card info, error %d\n", err); 835 804 return err; 836 805 } 837 806 838 - if (dev->ops->dev_get_capabilities) { 839 - err = dev->ops->dev_get_capabilities(dev); 807 + if (ops->dev_get_capabilities) { 808 + err = ops->dev_get_capabilities(dev); 840 809 if (err) { 841 810 dev_err(&intf->dev, 842 811 "Cannot get capabilities, error %d\n", err); ··· 846 815 } 847 816 848 817 for (i = 0; i < dev->nchannels; i++) { 849 - err = kvaser_usb_init_one(dev, id, i); 818 + err = kvaser_usb_init_one(dev, i); 850 819 if (err) { 851 820 kvaser_usb_remove_interfaces(dev); 852 821 return err;
+2 -2
drivers/net/can/usb/kvaser_usb/kvaser_usb_hydra.c
··· 375 375 .brp_inc = 1, 376 376 }; 377 377 378 - static const struct can_bittiming_const kvaser_usb_hydra_flexc_bittiming_c = { 378 + const struct can_bittiming_const kvaser_usb_flexc_bittiming_const = { 379 379 .name = "kvaser_usb_flex", 380 380 .tseg1_min = 4, 381 381 .tseg1_max = 16, ··· 2052 2052 .freq = 24 * MEGA /* Hz */, 2053 2053 }, 2054 2054 .timestamp_freq = 1, 2055 - .bittiming_const = &kvaser_usb_hydra_flexc_bittiming_c, 2055 + .bittiming_const = &kvaser_usb_flexc_bittiming_const, 2056 2056 }; 2057 2057 2058 2058 static const struct kvaser_usb_dev_cfg kvaser_usb_hydra_dev_cfg_rt = {
+68 -51
drivers/net/can/usb/kvaser_usb/kvaser_usb_leaf.c
··· 101 101 #define USBCAN_ERROR_STATE_RX_ERROR BIT(1) 102 102 #define USBCAN_ERROR_STATE_BUSERROR BIT(2) 103 103 104 - /* bittiming parameters */ 105 - #define KVASER_USB_TSEG1_MIN 1 106 - #define KVASER_USB_TSEG1_MAX 16 107 - #define KVASER_USB_TSEG2_MIN 1 108 - #define KVASER_USB_TSEG2_MAX 8 109 - #define KVASER_USB_SJW_MAX 4 110 - #define KVASER_USB_BRP_MIN 1 111 - #define KVASER_USB_BRP_MAX 64 112 - #define KVASER_USB_BRP_INC 1 113 - 114 104 /* ctrl modes */ 115 105 #define KVASER_CTRL_MODE_NORMAL 1 116 106 #define KVASER_CTRL_MODE_SILENT 2 ··· 333 343 }; 334 344 }; 335 345 336 - static const struct can_bittiming_const kvaser_usb_leaf_bittiming_const = { 337 - .name = "kvaser_usb", 338 - .tseg1_min = KVASER_USB_TSEG1_MIN, 339 - .tseg1_max = KVASER_USB_TSEG1_MAX, 340 - .tseg2_min = KVASER_USB_TSEG2_MIN, 341 - .tseg2_max = KVASER_USB_TSEG2_MAX, 342 - .sjw_max = KVASER_USB_SJW_MAX, 343 - .brp_min = KVASER_USB_BRP_MIN, 344 - .brp_max = KVASER_USB_BRP_MAX, 345 - .brp_inc = KVASER_USB_BRP_INC, 346 + static const struct can_bittiming_const kvaser_usb_leaf_m16c_bittiming_const = { 347 + .name = "kvaser_usb_ucii", 348 + .tseg1_min = 4, 349 + .tseg1_max = 16, 350 + .tseg2_min = 2, 351 + .tseg2_max = 8, 352 + .sjw_max = 4, 353 + .brp_min = 1, 354 + .brp_max = 16, 355 + .brp_inc = 1, 346 356 }; 347 357 348 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_8mhz = { 358 + static const struct can_bittiming_const kvaser_usb_leaf_m32c_bittiming_const = { 359 + .name = "kvaser_usb_leaf", 360 + .tseg1_min = 3, 361 + .tseg1_max = 16, 362 + .tseg2_min = 2, 363 + .tseg2_max = 8, 364 + .sjw_max = 4, 365 + .brp_min = 2, 366 + .brp_max = 128, 367 + .brp_inc = 2, 368 + }; 369 + 370 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_usbcan_dev_cfg = { 349 371 .clock = { 350 372 .freq = 8 * MEGA /* Hz */, 351 373 }, 352 374 .timestamp_freq = 1, 353 - .bittiming_const = &kvaser_usb_leaf_bittiming_const, 375 + .bittiming_const = &kvaser_usb_leaf_m16c_bittiming_const, 
354 376 }; 355 377 356 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_16mhz = { 378 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_m32c_dev_cfg = { 357 379 .clock = { 358 380 .freq = 16 * MEGA /* Hz */, 359 381 }, 360 382 .timestamp_freq = 1, 361 - .bittiming_const = &kvaser_usb_leaf_bittiming_const, 383 + .bittiming_const = &kvaser_usb_leaf_m32c_bittiming_const, 362 384 }; 363 385 364 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_24mhz = { 386 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_16mhz = { 387 + .clock = { 388 + .freq = 16 * MEGA /* Hz */, 389 + }, 390 + .timestamp_freq = 1, 391 + .bittiming_const = &kvaser_usb_flexc_bittiming_const, 392 + }; 393 + 394 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_24mhz = { 365 395 .clock = { 366 396 .freq = 24 * MEGA /* Hz */, 367 397 }, 368 398 .timestamp_freq = 1, 369 - .bittiming_const = &kvaser_usb_leaf_bittiming_const, 399 + .bittiming_const = &kvaser_usb_flexc_bittiming_const, 370 400 }; 371 401 372 - static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_dev_cfg_32mhz = { 402 + static const struct kvaser_usb_dev_cfg kvaser_usb_leaf_imx_dev_cfg_32mhz = { 373 403 .clock = { 374 404 .freq = 32 * MEGA /* Hz */, 375 405 }, 376 406 .timestamp_freq = 1, 377 - .bittiming_const = &kvaser_usb_leaf_bittiming_const, 407 + .bittiming_const = &kvaser_usb_flexc_bittiming_const, 378 408 }; 379 409 380 410 static void * ··· 414 404 sizeof(struct kvaser_cmd_tx_can); 415 405 cmd->u.tx_can.channel = priv->channel; 416 406 417 - switch (dev->card_data.leaf.family) { 407 + switch (dev->driver_info->family) { 418 408 case KVASER_LEAF: 419 409 cmd_tx_can_flags = &cmd->u.tx_can.leaf.flags; 420 410 break; ··· 534 524 dev->fw_version = le32_to_cpu(softinfo->fw_version); 535 525 dev->max_tx_urbs = le16_to_cpu(softinfo->max_outstanding_tx); 536 526 537 - switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) { 538 - case 
KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK: 539 - dev->cfg = &kvaser_usb_leaf_dev_cfg_16mhz; 540 - break; 541 - case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK: 542 - dev->cfg = &kvaser_usb_leaf_dev_cfg_24mhz; 543 - break; 544 - case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK: 545 - dev->cfg = &kvaser_usb_leaf_dev_cfg_32mhz; 546 - break; 527 + if (dev->driver_info->quirks & KVASER_USB_QUIRK_IGNORE_CLK_FREQ) { 528 + /* Firmware expects bittiming parameters calculated for 16MHz 529 + * clock, regardless of the actual clock 530 + */ 531 + dev->cfg = &kvaser_usb_leaf_m32c_dev_cfg; 532 + } else { 533 + switch (sw_options & KVASER_USB_LEAF_SWOPTION_FREQ_MASK) { 534 + case KVASER_USB_LEAF_SWOPTION_FREQ_16_MHZ_CLK: 535 + dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_16mhz; 536 + break; 537 + case KVASER_USB_LEAF_SWOPTION_FREQ_24_MHZ_CLK: 538 + dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_24mhz; 539 + break; 540 + case KVASER_USB_LEAF_SWOPTION_FREQ_32_MHZ_CLK: 541 + dev->cfg = &kvaser_usb_leaf_imx_dev_cfg_32mhz; 542 + break; 543 + } 547 544 } 548 545 } 549 546 ··· 567 550 if (err) 568 551 return err; 569 552 570 - switch (dev->card_data.leaf.family) { 553 + switch (dev->driver_info->family) { 571 554 case KVASER_LEAF: 572 555 kvaser_usb_leaf_get_software_info_leaf(dev, &cmd.u.leaf.softinfo); 573 556 break; ··· 575 558 dev->fw_version = le32_to_cpu(cmd.u.usbcan.softinfo.fw_version); 576 559 dev->max_tx_urbs = 577 560 le16_to_cpu(cmd.u.usbcan.softinfo.max_outstanding_tx); 578 - dev->cfg = &kvaser_usb_leaf_dev_cfg_8mhz; 561 + dev->cfg = &kvaser_usb_leaf_usbcan_dev_cfg; 579 562 break; 580 563 } 581 564 ··· 614 597 615 598 dev->nchannels = cmd.u.cardinfo.nchannels; 616 599 if (dev->nchannels > KVASER_USB_MAX_NET_DEVICES || 617 - (dev->card_data.leaf.family == KVASER_USBCAN && 600 + (dev->driver_info->family == KVASER_USBCAN && 618 601 dev->nchannels > MAX_USBCAN_NET_DEVICES)) 619 602 return -EINVAL; 620 603 ··· 747 730 new_state < CAN_STATE_BUS_OFF) 748 731 priv->can.can_stats.restarts++; 749 
732 750 - switch (dev->card_data.leaf.family) { 733 + switch (dev->driver_info->family) { 751 734 case KVASER_LEAF: 752 735 if (es->leaf.error_factor) { 753 736 priv->can.can_stats.bus_error++; ··· 826 809 } 827 810 } 828 811 829 - switch (dev->card_data.leaf.family) { 812 + switch (dev->driver_info->family) { 830 813 case KVASER_LEAF: 831 814 if (es->leaf.error_factor) { 832 815 cf->can_id |= CAN_ERR_BUSERROR | CAN_ERR_PROT; ··· 1016 999 stats = &priv->netdev->stats; 1017 1000 1018 1001 if ((cmd->u.rx_can_header.flag & MSG_FLAG_ERROR_FRAME) && 1019 - (dev->card_data.leaf.family == KVASER_LEAF && 1002 + (dev->driver_info->family == KVASER_LEAF && 1020 1003 cmd->id == CMD_LEAF_LOG_MESSAGE)) { 1021 1004 kvaser_usb_leaf_leaf_rx_error(dev, cmd); 1022 1005 return; ··· 1032 1015 return; 1033 1016 } 1034 1017 1035 - switch (dev->card_data.leaf.family) { 1018 + switch (dev->driver_info->family) { 1036 1019 case KVASER_LEAF: 1037 1020 rx_data = cmd->u.leaf.rx_can.data; 1038 1021 break; ··· 1047 1030 return; 1048 1031 } 1049 1032 1050 - if (dev->card_data.leaf.family == KVASER_LEAF && cmd->id == 1033 + if (dev->driver_info->family == KVASER_LEAF && cmd->id == 1051 1034 CMD_LEAF_LOG_MESSAGE) { 1052 1035 cf->can_id = le32_to_cpu(cmd->u.leaf.log_message.id); 1053 1036 if (cf->can_id & KVASER_EXTENDED_FRAME) ··· 1145 1128 break; 1146 1129 1147 1130 case CMD_LEAF_LOG_MESSAGE: 1148 - if (dev->card_data.leaf.family != KVASER_LEAF) 1131 + if (dev->driver_info->family != KVASER_LEAF) 1149 1132 goto warn; 1150 1133 kvaser_usb_leaf_rx_can_msg(dev, cmd); 1151 1134 break; 1152 1135 1153 1136 case CMD_CHIP_STATE_EVENT: 1154 1137 case CMD_CAN_ERROR_EVENT: 1155 - if (dev->card_data.leaf.family == KVASER_LEAF) 1138 + if (dev->driver_info->family == KVASER_LEAF) 1156 1139 kvaser_usb_leaf_leaf_rx_error(dev, cmd); 1157 1140 else 1158 1141 kvaser_usb_leaf_usbcan_rx_error(dev, cmd); ··· 1164 1147 1165 1148 /* Ignored commands */ 1166 1149 case CMD_USBCAN_CLOCK_OVERFLOW_EVENT: 1167 - if 
(dev->card_data.leaf.family != KVASER_USBCAN) 1150 + if (dev->driver_info->family != KVASER_USBCAN) 1168 1151 goto warn; 1169 1152 break; 1170 1153 1171 1154 case CMD_FLUSH_QUEUE_REPLY: 1172 - if (dev->card_data.leaf.family != KVASER_LEAF) 1155 + if (dev->driver_info->family != KVASER_LEAF) 1173 1156 goto warn; 1174 1157 break; 1175 1158
+2 -2
drivers/net/can/xilinx_can.c
··· 263 263 .tseg2_min = 1, 264 264 .tseg2_max = 128, 265 265 .sjw_max = 128, 266 - .brp_min = 2, 266 + .brp_min = 1, 267 267 .brp_max = 256, 268 268 .brp_inc = 1, 269 269 }; ··· 276 276 .tseg2_min = 1, 277 277 .tseg2_max = 16, 278 278 .sjw_max = 16, 279 - .brp_min = 2, 279 + .brp_min = 1, 280 280 .brp_max = 256, 281 281 .brp_inc = 1, 282 282 };
+9
drivers/net/ethernet/ibm/ibmvnic.c
··· 5981 5981 release_sub_crqs(adapter, 0); 5982 5982 rc = init_sub_crqs(adapter); 5983 5983 } else { 5984 + /* no need to reinitialize completely, but we do 5985 + * need to clean up transmits that were in flight 5986 + * when we processed the reset. Failure to do so 5987 + * will confound the upper layer, usually TCP, by 5988 + * creating the illusion of transmits that are 5989 + * awaiting completion. 5990 + */ 5991 + clean_tx_pools(adapter); 5992 + 5984 5993 rc = reset_sub_crq_queues(adapter); 5985 5994 } 5986 5995 } else {
+16
drivers/net/ethernet/intel/i40e/i40e.h
··· 37 37 #include <net/tc_act/tc_mirred.h> 38 38 #include <net/udp_tunnel.h> 39 39 #include <net/xdp_sock.h> 40 + #include <linux/bitfield.h> 40 41 #include "i40e_type.h" 41 42 #include "i40e_prototype.h" 42 43 #include <linux/net/intel/i40e_client.h> ··· 1092 1091 (u32)(val >> 32)); 1093 1092 i40e_write_rx_ctl(&pf->hw, I40E_PRTQF_FD_INSET(addr, 0), 1094 1093 (u32)(val & 0xFFFFFFFFULL)); 1094 + } 1095 + 1096 + /** 1097 + * i40e_get_pf_count - get PCI PF count. 1098 + * @hw: pointer to a hw. 1099 + * 1100 + * Reports the function number of the highest PCI physical 1101 + * function plus 1 as it is loaded from the NVM. 1102 + * 1103 + * Return: PCI PF count. 1104 + **/ 1105 + static inline u32 i40e_get_pf_count(struct i40e_hw *hw) 1106 + { 1107 + return FIELD_GET(I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK, 1108 + rd32(hw, I40E_GLGEN_PCIFCNCNT)); 1095 1109 } 1096 1110 1097 1111 /* needed by i40e_ethtool.c */
+73
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 551 551 } 552 552 553 553 /** 554 + * i40e_compute_pci_to_hw_id - compute index form PCI function. 555 + * @vsi: ptr to the VSI to read from. 556 + * @hw: ptr to the hardware info. 557 + **/ 558 + static u32 i40e_compute_pci_to_hw_id(struct i40e_vsi *vsi, struct i40e_hw *hw) 559 + { 560 + int pf_count = i40e_get_pf_count(hw); 561 + 562 + if (vsi->type == I40E_VSI_SRIOV) 563 + return (hw->port * BIT(7)) / pf_count + vsi->vf_id; 564 + 565 + return hw->port + BIT(7); 566 + } 567 + 568 + /** 569 + * i40e_stat_update64 - read and update a 64 bit stat from the chip. 570 + * @hw: ptr to the hardware info. 571 + * @hireg: the high 32 bit reg to read. 572 + * @loreg: the low 32 bit reg to read. 573 + * @offset_loaded: has the initial offset been loaded yet. 574 + * @offset: ptr to current offset value. 575 + * @stat: ptr to the stat. 576 + * 577 + * Since the device stats are not reset at PFReset, they will not 578 + * be zeroed when the driver starts. We'll save the first values read 579 + * and use them as offsets to be subtracted from the raw values in order 580 + * to report stats that count from zero. 581 + **/ 582 + static void i40e_stat_update64(struct i40e_hw *hw, u32 hireg, u32 loreg, 583 + bool offset_loaded, u64 *offset, u64 *stat) 584 + { 585 + u64 new_data; 586 + 587 + new_data = rd64(hw, loreg); 588 + 589 + if (!offset_loaded || new_data < *offset) 590 + *offset = new_data; 591 + *stat = new_data - *offset; 592 + } 593 + 594 + /** 554 595 * i40e_stat_update48 - read and update a 48 bit stat from the chip 555 596 * @hw: ptr to the hardware info 556 597 * @hireg: the high 32 bit reg to read ··· 663 622 } 664 623 665 624 /** 625 + * i40e_stats_update_rx_discards - update rx_discards. 626 + * @vsi: ptr to the VSI to be updated. 627 + * @hw: ptr to the hardware info. 628 + * @stat_idx: VSI's stat_counter_idx. 629 + * @offset_loaded: ptr to the VSI's stat_offsets_loaded. 630 + * @stat_offset: ptr to stat_offset to store first read of specific register. 
631 + * @stat: ptr to VSI's stat to be updated. 632 + **/ 633 + static void 634 + i40e_stats_update_rx_discards(struct i40e_vsi *vsi, struct i40e_hw *hw, 635 + int stat_idx, bool offset_loaded, 636 + struct i40e_eth_stats *stat_offset, 637 + struct i40e_eth_stats *stat) 638 + { 639 + u64 rx_rdpc, rx_rxerr; 640 + 641 + i40e_stat_update32(hw, I40E_GLV_RDPC(stat_idx), offset_loaded, 642 + &stat_offset->rx_discards, &rx_rdpc); 643 + i40e_stat_update64(hw, 644 + I40E_GL_RXERR1H(i40e_compute_pci_to_hw_id(vsi, hw)), 645 + I40E_GL_RXERR1L(i40e_compute_pci_to_hw_id(vsi, hw)), 646 + offset_loaded, &stat_offset->rx_discards_other, 647 + &rx_rxerr); 648 + 649 + stat->rx_discards = rx_rdpc + rx_rxerr; 650 + } 651 + 652 + /** 666 653 * i40e_update_eth_stats - Update VSI-specific ethernet statistics counters. 667 654 * @vsi: the VSI to be updated 668 655 **/ ··· 749 680 I40E_GLV_BPTCL(stat_idx), 750 681 vsi->stat_offsets_loaded, 751 682 &oes->tx_broadcast, &es->tx_broadcast); 683 + 684 + i40e_stats_update_rx_discards(vsi, hw, stat_idx, 685 + vsi->stat_offsets_loaded, oes, es); 686 + 752 687 vsi->stat_offsets_loaded = true; 753 688 } 754 689
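The offset-based counting scheme that `i40e_stat_update64` adds above is worth spelling out: the hardware counters are not cleared across a PF reset, so the first value read becomes a baseline that is subtracted from every later read. A minimal standalone sketch (illustrative names, not the driver's API, and with the register read replaced by a plain argument):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the i40e offset-based stat update: the first raw read
 * (or a read below the saved baseline, e.g. after a counter reset)
 * re-seeds the offset, so the reported stat always counts from zero. */
static void stat_update64(uint64_t new_data, bool offset_loaded,
			  uint64_t *offset, uint64_t *stat)
{
	if (!offset_loaded || new_data < *offset)
		*offset = new_data;
	*stat = new_data - *offset;
}
```

With a raw counter already at 1000 on the first read, the reported stat starts at 0; a later read of 1250 reports 250.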
+13
drivers/net/ethernet/intel/i40e/i40e_register.h
··· 211 211 #define I40E_GLGEN_MSRWD_MDIWRDATA_SHIFT 0 212 212 #define I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT 16 213 213 #define I40E_GLGEN_MSRWD_MDIRDDATA_MASK I40E_MASK(0xFFFF, I40E_GLGEN_MSRWD_MDIRDDATA_SHIFT) 214 + #define I40E_GLGEN_PCIFCNCNT 0x001C0AB4 /* Reset: PCIR */ 215 + #define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT 0 216 + #define I40E_GLGEN_PCIFCNCNT_PCIPFCNT_MASK I40E_MASK(0x1F, I40E_GLGEN_PCIFCNCNT_PCIPFCNT_SHIFT) 217 + #define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT 16 218 + #define I40E_GLGEN_PCIFCNCNT_PCIVFCNT_MASK I40E_MASK(0xFF, I40E_GLGEN_PCIFCNCNT_PCIVFCNT_SHIFT) 214 219 #define I40E_GLGEN_RSTAT 0x000B8188 /* Reset: POR */ 215 220 #define I40E_GLGEN_RSTAT_DEVSTATE_SHIFT 0 216 221 #define I40E_GLGEN_RSTAT_DEVSTATE_MASK I40E_MASK(0x3, I40E_GLGEN_RSTAT_DEVSTATE_SHIFT) ··· 648 643 #define I40E_VFQF_HKEY1_MAX_INDEX 12 649 644 #define I40E_VFQF_HLUT1(_i, _VF) (0x00220000 + ((_i) * 1024 + (_VF) * 4)) /* _i=0...15, _VF=0...127 */ /* Reset: CORER */ 650 645 #define I40E_VFQF_HLUT1_MAX_INDEX 15 646 + #define I40E_GL_RXERR1H(_i) (0x00318004 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ 647 + #define I40E_GL_RXERR1H_MAX_INDEX 143 648 + #define I40E_GL_RXERR1H_RXERR1H_SHIFT 0 649 + #define I40E_GL_RXERR1H_RXERR1H_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1H_RXERR1H_SHIFT) 650 + #define I40E_GL_RXERR1L(_i) (0x00318000 + ((_i) * 8)) /* _i=0...143 */ /* Reset: CORER */ 651 + #define I40E_GL_RXERR1L_MAX_INDEX 143 652 + #define I40E_GL_RXERR1L_RXERR1L_SHIFT 0 653 + #define I40E_GL_RXERR1L_RXERR1L_MASK I40E_MASK(0xFFFFFFFF, I40E_GL_RXERR1L_RXERR1L_SHIFT) 651 654 #define I40E_GLPRT_BPRCH(_i) (0x003005E4 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ 652 655 #define I40E_GLPRT_BPRCL(_i) (0x003005E0 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */ 653 656 #define I40E_GLPRT_BPTCH(_i) (0x00300A04 + ((_i) * 8)) /* _i=0...3 */ /* Reset: CORER */
+1
drivers/net/ethernet/intel/i40e/i40e_type.h
··· 1172 1172 u64 tx_broadcast; /* bptc */ 1173 1173 u64 tx_discards; /* tdpc */ 1174 1174 u64 tx_errors; /* tepc */ 1175 + u64 rx_discards_other; /* rxerr1 */ 1175 1176 }; 1176 1177 1177 1178 /* Statistics collected per VEB per TC */
+4
drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
··· 2147 2147 /* VFs only use TC 0 */ 2148 2148 vfres->vsi_res[0].qset_handle 2149 2149 = le16_to_cpu(vsi->info.qs_handle[0]); 2150 + if (!(vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO) && !vf->pf_set_mac) { 2151 + i40e_del_mac_filter(vsi, vf->default_lan_addr.addr); 2152 + eth_zero_addr(vf->default_lan_addr.addr); 2153 + } 2150 2154 ether_addr_copy(vfres->vsi_res[0].default_mac_addr, 2151 2155 vf->default_lan_addr.addr); 2152 2156 }
+6 -7
drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
··· 4591 4591 return -EOPNOTSUPP; 4592 4592 } 4593 4593 4594 - if (act->police.notexceed.act_id != FLOW_ACTION_PIPE && 4595 - act->police.notexceed.act_id != FLOW_ACTION_ACCEPT) { 4596 - NL_SET_ERR_MSG_MOD(extack, 4597 - "Offload not supported when conform action is not pipe or ok"); 4598 - return -EOPNOTSUPP; 4599 - } 4600 - 4601 4594 if (act->police.notexceed.act_id == FLOW_ACTION_ACCEPT && 4602 4595 !flow_action_is_last_entry(action, act)) { 4603 4596 NL_SET_ERR_MSG_MOD(extack, ··· 4641 4648 flow_action_for_each(i, act, flow_action) { 4642 4649 switch (act->id) { 4643 4650 case FLOW_ACTION_POLICE: 4651 + if (act->police.notexceed.act_id != FLOW_ACTION_CONTINUE) { 4652 + NL_SET_ERR_MSG_MOD(extack, 4653 + "Offload not supported when conform action is not continue"); 4654 + return -EOPNOTSUPP; 4655 + } 4656 + 4644 4657 err = mlx5e_policer_validate(flow_action, act, extack); 4645 4658 if (err) 4646 4659 return err;
+2 -6
drivers/net/ethernet/microchip/lan966x/lan966x_main.c
··· 994 994 struct fwnode_handle *ports, *portnp; 995 995 struct lan966x *lan966x; 996 996 u8 mac_addr[ETH_ALEN]; 997 - int err, i; 997 + int err; 998 998 999 999 lan966x = devm_kzalloc(&pdev->dev, sizeof(*lan966x), GFP_KERNEL); 1000 1000 if (!lan966x) ··· 1025 1025 if (err) 1026 1026 return dev_err_probe(&pdev->dev, err, "Reset failed"); 1027 1027 1028 - i = 0; 1029 - fwnode_for_each_available_child_node(ports, portnp) 1030 - ++i; 1031 - 1032 - lan966x->num_phys_ports = i; 1028 + lan966x->num_phys_ports = NUM_PHYS_PORTS; 1033 1029 lan966x->ports = devm_kcalloc(&pdev->dev, lan966x->num_phys_ports, 1034 1030 sizeof(struct lan966x_port *), 1035 1031 GFP_KERNEL);
+1
drivers/net/ethernet/microchip/lan966x/lan966x_main.h
··· 34 34 /* Reserved amount for (SRC, PRIO) at index 8*SRC + PRIO */ 35 35 #define QSYS_Q_RSRV 95 36 36 37 + #define NUM_PHYS_PORTS 8 37 38 #define CPU_PORT 8 38 39 39 40 /* Reserved PGIDs */
+4 -6
drivers/net/ethernet/realtek/r8169_main.c
··· 4190 4190 static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp, 4191 4191 struct sk_buff *skb, u32 *opts) 4192 4192 { 4193 - u32 transport_offset = (u32)skb_transport_offset(skb); 4194 4193 struct skb_shared_info *shinfo = skb_shinfo(skb); 4195 4194 u32 mss = shinfo->gso_size; 4196 4195 ··· 4206 4207 WARN_ON_ONCE(1); 4207 4208 } 4208 4209 4209 - opts[0] |= transport_offset << GTTCPHO_SHIFT; 4210 + opts[0] |= skb_transport_offset(skb) << GTTCPHO_SHIFT; 4210 4211 opts[1] |= mss << TD1_MSS_SHIFT; 4211 4212 } else if (skb->ip_summed == CHECKSUM_PARTIAL) { 4212 4213 u8 ip_protocol; ··· 4234 4235 else 4235 4236 WARN_ON_ONCE(1); 4236 4237 4237 - opts[1] |= transport_offset << TCPHO_SHIFT; 4238 + opts[1] |= skb_transport_offset(skb) << TCPHO_SHIFT; 4238 4239 } else { 4239 4240 unsigned int padto = rtl_quirk_packet_padto(tp, skb); 4240 4241 ··· 4401 4402 struct net_device *dev, 4402 4403 netdev_features_t features) 4403 4404 { 4404 - int transport_offset = skb_transport_offset(skb); 4405 4405 struct rtl8169_private *tp = netdev_priv(dev); 4406 4406 4407 4407 if (skb_is_gso(skb)) { 4408 4408 if (tp->mac_version == RTL_GIGA_MAC_VER_34) 4409 4409 features = rtl8168evl_fix_tso(skb, features); 4410 4410 4411 - if (transport_offset > GTTCPHO_MAX && 4411 + if (skb_transport_offset(skb) > GTTCPHO_MAX && 4412 4412 rtl_chip_supports_csum_v2(tp)) 4413 4413 features &= ~NETIF_F_ALL_TSO; 4414 4414 } else if (skb->ip_summed == CHECKSUM_PARTIAL) { ··· 4418 4420 if (rtl_quirk_packet_padto(tp, skb)) 4419 4421 features &= ~NETIF_F_CSUM_MASK; 4420 4422 4421 - if (transport_offset > TCPHO_MAX && 4423 + if (skb_transport_offset(skb) > TCPHO_MAX && 4422 4424 rtl_chip_supports_csum_v2(tp)) 4423 4425 features &= ~NETIF_F_CSUM_MASK; 4424 4426 }
+1 -1
drivers/net/usb/catc.c
··· 781 781 intf->altsetting->desc.bInterfaceNumber, 1)) { 782 782 dev_err(dev, "Can't set altsetting 1.\n"); 783 783 ret = -EIO; 784 - goto fail_mem;; 784 + goto fail_mem; 785 785 } 786 786 787 787 netdev = alloc_etherdev(sizeof(struct catc));
+12 -5
drivers/net/usb/usbnet.c
··· 2130 2130 int usbnet_write_cmd_async(struct usbnet *dev, u8 cmd, u8 reqtype, 2131 2131 u16 value, u16 index, const void *data, u16 size) 2132 2132 { 2133 - struct usb_ctrlrequest *req = NULL; 2133 + struct usb_ctrlrequest *req; 2134 2134 struct urb *urb; 2135 2135 int err = -ENOMEM; 2136 2136 void *buf = NULL; ··· 2148 2148 if (!buf) { 2149 2149 netdev_err(dev->net, "Error allocating buffer" 2150 2150 " in %s!\n", __func__); 2151 - goto fail_free; 2151 + goto fail_free_urb; 2152 2152 } 2153 2153 } 2154 2154 ··· 2172 2172 if (err < 0) { 2173 2173 netdev_err(dev->net, "Error submitting the control" 2174 2174 " message: status=%d\n", err); 2175 - goto fail_free; 2175 + goto fail_free_all; 2176 2176 } 2177 2177 return 0; 2178 2178 2179 + fail_free_all: 2180 + kfree(req); 2179 2181 fail_free_buf: 2180 2182 kfree(buf); 2181 - fail_free: 2182 - kfree(req); 2183 + /* 2184 + * avoid a double free 2185 + * needed because the flag can be set only 2186 + * after filling the URB 2187 + */ 2188 + urb->transfer_flags = 0; 2189 + fail_free_urb: 2183 2190 usb_free_urb(urb); 2184 2191 fail: 2185 2192 return err;
+52 -4
drivers/net/xen-netfront.c
··· 66 66 MODULE_PARM_DESC(max_queues, 67 67 "Maximum number of queues per virtual interface"); 68 68 69 + static bool __read_mostly xennet_trusted = true; 70 + module_param_named(trusted, xennet_trusted, bool, 0644); 71 + MODULE_PARM_DESC(trusted, "Is the backend trusted"); 72 + 69 73 #define XENNET_TIMEOUT (5 * HZ) 70 74 71 75 static const struct ethtool_ops xennet_ethtool_ops; ··· 177 173 /* Is device behaving sane? */ 178 174 bool broken; 179 175 176 + /* Should skbs be bounced into a zeroed buffer? */ 177 + bool bounce; 178 + 180 179 atomic_t rx_gso_checksum_fixup; 181 180 }; 182 181 ··· 278 271 if (unlikely(!skb)) 279 272 return NULL; 280 273 281 - page = page_pool_dev_alloc_pages(queue->page_pool); 274 + page = page_pool_alloc_pages(queue->page_pool, 275 + GFP_ATOMIC | __GFP_NOWARN | __GFP_ZERO); 282 276 if (unlikely(!page)) { 283 277 kfree_skb(skb); 284 278 return NULL; ··· 673 665 return nxmit; 674 666 } 675 667 668 + struct sk_buff *bounce_skb(const struct sk_buff *skb) 669 + { 670 + unsigned int headerlen = skb_headroom(skb); 671 + /* Align size to allocate full pages and avoid contiguous data leaks */ 672 + unsigned int size = ALIGN(skb_end_offset(skb) + skb->data_len, 673 + XEN_PAGE_SIZE); 674 + struct sk_buff *n = alloc_skb(size, GFP_ATOMIC | __GFP_ZERO); 675 + 676 + if (!n) 677 + return NULL; 678 + 679 + if (!IS_ALIGNED((uintptr_t)n->head, XEN_PAGE_SIZE)) { 680 + WARN_ONCE(1, "misaligned skb allocated\n"); 681 + kfree_skb(n); 682 + return NULL; 683 + } 684 + 685 + /* Set the data pointer */ 686 + skb_reserve(n, headerlen); 687 + /* Set the tail pointer and length */ 688 + skb_put(n, skb->len); 689 + 690 + BUG_ON(skb_copy_bits(skb, -headerlen, n->head, headerlen + skb->len)); 691 + 692 + skb_copy_header(n, skb); 693 + return n; 694 + } 676 695 677 696 #define MAX_XEN_SKB_FRAGS (65536 / XEN_PAGE_SIZE + 1) 678 697 ··· 753 718 754 719 /* The first req should be at least ETH_HLEN size or the packet will be 755 720 * dropped by netback. 
721 + * 722 + * If the backend is not trusted bounce all data to zeroed pages to 723 + * avoid exposing contiguous data on the granted page not belonging to 724 + * the skb. 756 725 */ 757 - if (unlikely(PAGE_SIZE - offset < ETH_HLEN)) { 758 - nskb = skb_copy(skb, GFP_ATOMIC); 726 + if (np->bounce || unlikely(PAGE_SIZE - offset < ETH_HLEN)) { 727 + nskb = bounce_skb(skb); 759 728 if (!nskb) 760 729 goto drop; 761 730 dev_consume_skb_any(skb); ··· 1092 1053 } 1093 1054 } 1094 1055 rcu_read_unlock(); 1095 - next: 1056 + 1096 1057 __skb_queue_tail(list, skb); 1058 + 1059 + next: 1097 1060 if (!(rx->flags & XEN_NETRXF_more_data)) 1098 1061 break; 1099 1062 ··· 2255 2214 2256 2215 info->netdev->irq = 0; 2257 2216 2217 + /* Check if backend is trusted. */ 2218 + info->bounce = !xennet_trusted || 2219 + !xenbus_read_unsigned(dev->nodename, "trusted", 1); 2220 + 2258 2221 /* Check if backend supports multiple queues */ 2259 2222 max_queues = xenbus_read_unsigned(info->xbdev->otherend, 2260 2223 "multi-queue-max-queues", 1); ··· 2426 2381 return err; 2427 2382 if (np->netback_has_xdp_headroom) 2428 2383 pr_info("backend supports XDP headroom\n"); 2384 + if (np->bounce) 2385 + dev_info(&np->xbdev->dev, 2386 + "bouncing transmitted data to zeroed pages\n"); 2429 2387 2430 2388 /* talk_to_netback() sets the correct number of queues */ 2431 2389 num_queues = dev->real_num_tx_queues;
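The xen-netfront change above bounces transmitted data into freshly zeroed, page-aligned buffers when the backend is untrusted, so a granted page never exposes stale bytes beyond the packet itself. A userspace model of that idea (illustrative, using `calloc` in place of the kernel's `alloc_skb` with `__GFP_ZERO`, and a hard-coded 4 KiB page size):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SZ 4096u

/* Round the allocation up to whole pages, zero it, then copy only
 * the packet bytes in: everything past `len` in the shared buffer
 * is guaranteed-zero instead of leftover heap contents. */
static unsigned char *bounce_buf(const unsigned char *data, size_t len,
				 size_t *out_size)
{
	size_t size = (len + PAGE_SZ - 1) & ~((size_t)PAGE_SZ - 1);
	unsigned char *buf = calloc(1, size);

	if (!buf)
		return NULL;
	memcpy(buf, data, len);
	*out_size = size;
	return buf;
}
```

The kernel version additionally checks that the allocation is page-aligned before granting it, since the alignment (not just the size) is what keeps unrelated data off the shared page.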
+2 -2
drivers/nvdimm/bus.c
··· 176 176 ndr_end = nd_region->ndr_start + nd_region->ndr_size - 1; 177 177 178 178 /* make sure we are in the region */ 179 - if (ctx->phys < nd_region->ndr_start 180 - || (ctx->phys + ctx->cleared) > ndr_end) 179 + if (ctx->phys < nd_region->ndr_start || 180 + (ctx->phys + ctx->cleared - 1) > ndr_end) 181 181 return 0; 182 182 183 183 sector = (ctx->phys - nd_region->ndr_start) / 512;
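The nvdimm fix above is a classic inclusive-range off-by-one: `ndr_end` is the region's last valid byte, so a span of `cleared` bytes starting at `phys` ends at `phys + cleared - 1`, and that byte, not `phys + cleared`, must be compared against the end. A small sketch of the corrected containment check (names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Inclusive-range containment: end_incl is the last byte of the
 * region, so the span's last byte (phys + len - 1) is what must
 * fall at or below it. Comparing phys + len would wrongly reject
 * a span that ends exactly on the region's last byte. */
static bool span_in_region(uint64_t phys, uint64_t len,
			   uint64_t start, uint64_t end_incl)
{
	return len > 0 && phys >= start && phys + len - 1 <= end_incl;
}
```

For a region covering bytes 0..4095, an 8-byte span at 4088 ends on byte 4095 and is inside; the pre-fix comparison would have rejected it.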
+2
drivers/nvme/host/core.c
··· 4595 4595 nvme_stop_failfast_work(ctrl); 4596 4596 flush_work(&ctrl->async_event_work); 4597 4597 cancel_work_sync(&ctrl->fw_act_work); 4598 + if (ctrl->ops->stop_ctrl) 4599 + ctrl->ops->stop_ctrl(ctrl); 4598 4600 } 4599 4601 EXPORT_SYMBOL_GPL(nvme_stop_ctrl); 4600 4602
+1
drivers/nvme/host/nvme.h
··· 502 502 void (*free_ctrl)(struct nvme_ctrl *ctrl); 503 503 void (*submit_async_event)(struct nvme_ctrl *ctrl); 504 504 void (*delete_ctrl)(struct nvme_ctrl *ctrl); 505 + void (*stop_ctrl)(struct nvme_ctrl *ctrl); 505 506 int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size); 506 507 void (*print_device_info)(struct nvme_ctrl *ctrl); 507 508 };
+4 -1
drivers/nvme/host/pci.c
··· 3469 3469 { PCI_DEVICE(0x1b4b, 0x1092), /* Lexar 256 GB SSD */ 3470 3470 .driver_data = NVME_QUIRK_NO_NS_DESC_LIST | 3471 3471 NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3472 + { PCI_DEVICE(0x1cc1, 0x33f8), /* ADATA IM2P33F8ABR1 1 TB */ 3473 + .driver_data = NVME_QUIRK_BOGUS_NID, }, 3472 3474 { PCI_DEVICE(0x10ec, 0x5762), /* ADATA SX6000LNP */ 3473 - .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, }, 3475 + .driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN | 3476 + NVME_QUIRK_BOGUS_NID, }, 3474 3477 { PCI_DEVICE(0x1cc1, 0x8201), /* ADATA SX8200PNP 512GB */ 3475 3478 .driver_data = NVME_QUIRK_NO_DEEPEST_PS | 3476 3479 NVME_QUIRK_IGNORE_DEV_SUBNQN, },
+9 -3
drivers/nvme/host/rdma.c
··· 1048 1048 } 1049 1049 } 1050 1050 1051 + static void nvme_rdma_stop_ctrl(struct nvme_ctrl *nctrl) 1052 + { 1053 + struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl); 1054 + 1055 + cancel_work_sync(&ctrl->err_work); 1056 + cancel_delayed_work_sync(&ctrl->reconnect_work); 1057 + } 1058 + 1051 1059 static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl) 1052 1060 { 1053 1061 struct nvme_rdma_ctrl *ctrl = to_rdma_ctrl(nctrl); ··· 2260 2252 2261 2253 static void nvme_rdma_shutdown_ctrl(struct nvme_rdma_ctrl *ctrl, bool shutdown) 2262 2254 { 2263 - cancel_work_sync(&ctrl->err_work); 2264 - cancel_delayed_work_sync(&ctrl->reconnect_work); 2265 - 2266 2255 nvme_rdma_teardown_io_queues(ctrl, shutdown); 2267 2256 nvme_stop_admin_queue(&ctrl->ctrl); 2268 2257 if (shutdown) ··· 2309 2304 .submit_async_event = nvme_rdma_submit_async_event, 2310 2305 .delete_ctrl = nvme_rdma_delete_ctrl, 2311 2306 .get_address = nvmf_get_address, 2307 + .stop_ctrl = nvme_rdma_stop_ctrl, 2312 2308 }; 2313 2309 2314 2310 /*
+8 -5
drivers/nvme/host/tcp.c
··· 1180 1180 } else if (ret < 0) { 1181 1181 dev_err(queue->ctrl->ctrl.device, 1182 1182 "failed to send request %d\n", ret); 1183 - if (ret != -EPIPE && ret != -ECONNRESET) 1184 - nvme_tcp_fail_request(queue->request); 1183 + nvme_tcp_fail_request(queue->request); 1185 1184 nvme_tcp_done_send_req(queue); 1186 1185 } 1187 1186 return ret; ··· 2193 2194 2194 2195 static void nvme_tcp_teardown_ctrl(struct nvme_ctrl *ctrl, bool shutdown) 2195 2196 { 2196 - cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work); 2197 - cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work); 2198 - 2199 2197 nvme_tcp_teardown_io_queues(ctrl, shutdown); 2200 2198 nvme_stop_admin_queue(ctrl); 2201 2199 if (shutdown) ··· 2230 2234 out_fail: 2231 2235 ++ctrl->nr_reconnects; 2232 2236 nvme_tcp_reconnect_or_remove(ctrl); 2237 + } 2238 + 2239 + static void nvme_tcp_stop_ctrl(struct nvme_ctrl *ctrl) 2240 + { 2241 + cancel_work_sync(&to_tcp_ctrl(ctrl)->err_work); 2242 + cancel_delayed_work_sync(&to_tcp_ctrl(ctrl)->connect_work); 2233 2243 } 2234 2244 2235 2245 static void nvme_tcp_free_ctrl(struct nvme_ctrl *nctrl) ··· 2559 2557 .submit_async_event = nvme_tcp_submit_async_event, 2560 2558 .delete_ctrl = nvme_tcp_delete_ctrl, 2561 2559 .get_address = nvmf_get_address, 2560 + .stop_ctrl = nvme_tcp_stop_ctrl, 2562 2561 }; 2563 2562 2564 2563 static bool
+20
drivers/nvme/target/configfs.c
··· 773 773 } 774 774 CONFIGFS_ATTR(nvmet_passthru_, io_timeout); 775 775 776 + static ssize_t nvmet_passthru_clear_ids_show(struct config_item *item, 777 + char *page) 778 + { 779 + return sprintf(page, "%u\n", to_subsys(item->ci_parent)->clear_ids); 780 + } 781 + 782 + static ssize_t nvmet_passthru_clear_ids_store(struct config_item *item, 783 + const char *page, size_t count) 784 + { 785 + struct nvmet_subsys *subsys = to_subsys(item->ci_parent); 786 + unsigned int clear_ids; 787 + 788 + if (kstrtouint(page, 0, &clear_ids)) 789 + return -EINVAL; 790 + subsys->clear_ids = clear_ids; 791 + return count; 792 + } 793 + CONFIGFS_ATTR(nvmet_passthru_, clear_ids); 794 + 776 795 static struct configfs_attribute *nvmet_passthru_attrs[] = { 777 796 &nvmet_passthru_attr_device_path, 778 797 &nvmet_passthru_attr_enable, 779 798 &nvmet_passthru_attr_admin_timeout, 780 799 &nvmet_passthru_attr_io_timeout, 800 + &nvmet_passthru_attr_clear_ids, 781 801 NULL, 782 802 }; 783 803
+6
drivers/nvme/target/core.c
··· 1374 1374 ctrl->port = req->port; 1375 1375 ctrl->ops = req->ops; 1376 1376 1377 + #ifdef CONFIG_NVME_TARGET_PASSTHRU 1378 + /* Set loop targets to clear IDs by default */ 1379 + if (ctrl->port->disc_addr.trtype == NVMF_TRTYPE_LOOP) 1380 + subsys->clear_ids = 1; 1381 + #endif 1382 + 1377 1383 INIT_WORK(&ctrl->async_event_work, nvmet_async_event_work); 1378 1384 INIT_LIST_HEAD(&ctrl->async_events); 1379 1385 INIT_RADIX_TREE(&ctrl->p2p_ns_map, GFP_KERNEL);
+1
drivers/nvme/target/nvmet.h
··· 249 249 struct config_group passthru_group; 250 250 unsigned int admin_timeout; 251 251 unsigned int io_timeout; 252 + unsigned int clear_ids; 252 253 #endif /* CONFIG_NVME_TARGET_PASSTHRU */ 253 254 254 255 #ifdef CONFIG_BLK_DEV_ZONED
+55
drivers/nvme/target/passthru.c
··· 30 30 ctrl->cap &= ~(1ULL << 43); 31 31 } 32 32 33 + static u16 nvmet_passthru_override_id_descs(struct nvmet_req *req) 34 + { 35 + struct nvmet_ctrl *ctrl = req->sq->ctrl; 36 + u16 status = NVME_SC_SUCCESS; 37 + int pos, len; 38 + bool csi_seen = false; 39 + void *data; 40 + u8 csi; 41 + 42 + if (!ctrl->subsys->clear_ids) 43 + return status; 44 + 45 + data = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL); 46 + if (!data) 47 + return NVME_SC_INTERNAL; 48 + 49 + status = nvmet_copy_from_sgl(req, 0, data, NVME_IDENTIFY_DATA_SIZE); 50 + if (status) 51 + goto out_free; 52 + 53 + for (pos = 0; pos < NVME_IDENTIFY_DATA_SIZE; pos += len) { 54 + struct nvme_ns_id_desc *cur = data + pos; 55 + 56 + if (cur->nidl == 0) 57 + break; 58 + if (cur->nidt == NVME_NIDT_CSI) { 59 + memcpy(&csi, cur + 1, NVME_NIDT_CSI_LEN); 60 + csi_seen = true; 61 + break; 62 + } 63 + len = sizeof(struct nvme_ns_id_desc) + cur->nidl; 64 + } 65 + 66 + memset(data, 0, NVME_IDENTIFY_DATA_SIZE); 67 + if (csi_seen) { 68 + struct nvme_ns_id_desc *cur = data; 69 + 70 + cur->nidt = NVME_NIDT_CSI; 71 + cur->nidl = NVME_NIDT_CSI_LEN; 72 + memcpy(cur + 1, &csi, NVME_NIDT_CSI_LEN); 73 + } 74 + status = nvmet_copy_to_sgl(req, 0, data, NVME_IDENTIFY_DATA_SIZE); 75 + out_free: 76 + kfree(data); 77 + return status; 78 + } 79 + 33 80 static u16 nvmet_passthru_override_id_ctrl(struct nvmet_req *req) 34 81 { 35 82 struct nvmet_ctrl *ctrl = req->sq->ctrl; ··· 199 152 */ 200 153 id->mc = 0; 201 154 155 + if (req->sq->ctrl->subsys->clear_ids) { 156 + memset(id->nguid, 0, NVME_NIDT_NGUID_LEN); 157 + memset(id->eui64, 0, NVME_NIDT_EUI64_LEN); 158 + } 159 + 202 160 status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id)); 203 161 204 162 out_free: ··· 227 175 break; 228 176 case NVME_ID_CNS_NS: 229 177 nvmet_passthru_override_id_ns(req); 178 + break; 179 + case NVME_ID_CNS_NS_DESC_LIST: 180 + nvmet_passthru_override_id_descs(req); 230 181 break; 231 182 } 232 183 } else if (status < 0)
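The descriptor-list walk in `nvmet_passthru_override_id_descs` above follows the generic NVMe namespace-ID descriptor layout: each entry is a small header carrying a type and a payload length, followed by the payload, with a zero length terminating the list. A toy model of that scan (the two-byte header here is a simplification; the real `struct nvme_ns_id_desc` also carries reserved bytes):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative descriptor header: type + payload length. */
struct id_desc {
	uint8_t nidt;	/* descriptor type */
	uint8_t nidl;	/* payload length in bytes */
};

/* Walk the packed list looking for a descriptor of type `want`;
 * a zero-length entry marks the end of the list. Returns 0 and
 * points *payload at the descriptor's payload on success. */
static int find_desc(const uint8_t *data, size_t size, uint8_t want,
		     const uint8_t **payload)
{
	size_t pos = 0;

	while (pos + sizeof(struct id_desc) <= size) {
		const struct id_desc *cur = (const void *)(data + pos);

		if (cur->nidl == 0)
			break;			/* end of list */
		if (cur->nidt == want) {
			*payload = data + pos + sizeof(*cur);
			return 0;
		}
		pos += sizeof(*cur) + cur->nidl;
	}
	return -1;
}
```

The kernel code above does the same scan but keeps only the CSI descriptor, zeroing the rest of the buffer so passthru targets never leak host-side NGUID/EUI64 identifiers.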
+3 -20
drivers/nvme/target/tcp.c
··· 405 405 return NVME_SC_INTERNAL; 406 406 } 407 407 408 - static void nvmet_tcp_send_ddgst(struct ahash_request *hash, 408 + static void nvmet_tcp_calc_ddgst(struct ahash_request *hash, 409 409 struct nvmet_tcp_cmd *cmd) 410 410 { 411 411 ahash_request_set_crypt(hash, cmd->req.sg, 412 412 (void *)&cmd->exp_ddgst, cmd->req.transfer_len); 413 413 crypto_ahash_digest(hash); 414 - } 415 - 416 - static void nvmet_tcp_recv_ddgst(struct ahash_request *hash, 417 - struct nvmet_tcp_cmd *cmd) 418 - { 419 - struct scatterlist sg; 420 - struct kvec *iov; 421 - int i; 422 - 423 - crypto_ahash_init(hash); 424 - for (i = 0, iov = cmd->iov; i < cmd->nr_mapped; i++, iov++) { 425 - sg_init_one(&sg, iov->iov_base, iov->iov_len); 426 - ahash_request_set_crypt(hash, &sg, NULL, iov->iov_len); 427 - crypto_ahash_update(hash); 428 - } 429 - ahash_request_set_crypt(hash, NULL, (void *)&cmd->exp_ddgst, 0); 430 - crypto_ahash_final(hash); 431 414 } 432 415 433 416 static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd) ··· 437 454 438 455 if (queue->data_digest) { 439 456 pdu->hdr.flags |= NVME_TCP_F_DDGST; 440 - nvmet_tcp_send_ddgst(queue->snd_hash, cmd); 457 + nvmet_tcp_calc_ddgst(queue->snd_hash, cmd); 441 458 } 442 459 443 460 if (cmd->queue->hdr_digest) { ··· 1120 1137 { 1121 1138 struct nvmet_tcp_queue *queue = cmd->queue; 1122 1139 1123 - nvmet_tcp_recv_ddgst(queue->rcv_hash, cmd); 1140 + nvmet_tcp_calc_ddgst(queue->rcv_hash, cmd); 1124 1141 queue->offset = 0; 1125 1142 queue->left = NVME_TCP_DIGEST_LENGTH; 1126 1143 queue->rcv_state = NVMET_TCP_RECV_DDGST;
+2 -2
drivers/pinctrl/aspeed/pinctrl-aspeed.c
··· 236 236 const struct aspeed_sig_expr **funcs; 237 237 const struct aspeed_sig_expr ***prios; 238 238 239 - pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name); 240 - 241 239 if (!pdesc) 242 240 return -EINVAL; 241 + 242 + pr_debug("Muxing pin %s for %s\n", pdesc->name, pfunc->name); 243 243 244 244 prios = pdesc->prios; 245 245
+1
drivers/pinctrl/freescale/pinctrl-imx93.c
··· 239 239 static const struct imx_pinctrl_soc_info imx93_pinctrl_info = { 240 240 .pins = imx93_pinctrl_pads, 241 241 .npins = ARRAY_SIZE(imx93_pinctrl_pads), 242 + .flags = ZERO_OFFSET_VALID, 242 243 .gpr_compatible = "fsl,imx93-iomuxc-gpr", 243 244 }; 244 245
+12 -8
drivers/pinctrl/stm32/pinctrl-stm32.c
··· 1338 1338 bank->secure_control = pctl->match_data->secure_control; 1339 1339 spin_lock_init(&bank->lock); 1340 1340 1341 - /* create irq hierarchical domain */ 1342 - bank->fwnode = fwnode; 1341 + if (pctl->domain) { 1342 + /* create irq hierarchical domain */ 1343 + bank->fwnode = fwnode; 1343 1344 1344 - bank->domain = irq_domain_create_hierarchy(pctl->domain, 0, 1345 - STM32_GPIO_IRQ_LINE, bank->fwnode, 1346 - &stm32_gpio_domain_ops, bank); 1345 + bank->domain = irq_domain_create_hierarchy(pctl->domain, 0, STM32_GPIO_IRQ_LINE, 1346 + bank->fwnode, &stm32_gpio_domain_ops, 1347 + bank); 1347 1348 1348 - if (!bank->domain) { 1349 - err = -ENODEV; 1350 - goto err_clk; 1349 + if (!bank->domain) { 1350 + err = -ENODEV; 1351 + goto err_clk; 1352 + } 1351 1353 } 1352 1354 1353 1355 err = gpiochip_add_data(&bank->gpio_chip, bank); ··· 1512 1510 pctl->domain = stm32_pctrl_get_irq_domain(pdev); 1513 1511 if (IS_ERR(pctl->domain)) 1514 1512 return PTR_ERR(pctl->domain); 1513 + if (!pctl->domain) 1514 + dev_warn(dev, "pinctrl without interrupt support\n"); 1515 1515 1516 1516 /* hwspinlock is optional */ 1517 1517 hwlock_id = of_hwspin_lock_get_id(pdev->dev.of_node, 0);
+5 -5
drivers/pinctrl/sunxi/pinctrl-sun8i-a83t.c
··· 158 158 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 14), 159 159 SUNXI_FUNCTION(0x0, "gpio_in"), 160 160 SUNXI_FUNCTION(0x1, "gpio_out"), 161 - SUNXI_FUNCTION(0x2, "nand"), /* DQ6 */ 161 + SUNXI_FUNCTION(0x2, "nand0"), /* DQ6 */ 162 162 SUNXI_FUNCTION(0x3, "mmc2")), /* D6 */ 163 163 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 15), 164 164 SUNXI_FUNCTION(0x0, "gpio_in"), 165 165 SUNXI_FUNCTION(0x1, "gpio_out"), 166 - SUNXI_FUNCTION(0x2, "nand"), /* DQ7 */ 166 + SUNXI_FUNCTION(0x2, "nand0"), /* DQ7 */ 167 167 SUNXI_FUNCTION(0x3, "mmc2")), /* D7 */ 168 168 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 16), 169 169 SUNXI_FUNCTION(0x0, "gpio_in"), 170 170 SUNXI_FUNCTION(0x1, "gpio_out"), 171 - SUNXI_FUNCTION(0x2, "nand"), /* DQS */ 171 + SUNXI_FUNCTION(0x2, "nand0"), /* DQS */ 172 172 SUNXI_FUNCTION(0x3, "mmc2")), /* RST */ 173 173 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 17), 174 174 SUNXI_FUNCTION(0x0, "gpio_in"), 175 175 SUNXI_FUNCTION(0x1, "gpio_out"), 176 - SUNXI_FUNCTION(0x2, "nand")), /* CE2 */ 176 + SUNXI_FUNCTION(0x2, "nand0")), /* CE2 */ 177 177 SUNXI_PIN(SUNXI_PINCTRL_PIN(C, 18), 178 178 SUNXI_FUNCTION(0x0, "gpio_in"), 179 179 SUNXI_FUNCTION(0x1, "gpio_out"), 180 - SUNXI_FUNCTION(0x2, "nand")), /* CE3 */ 180 + SUNXI_FUNCTION(0x2, "nand0")), /* CE3 */ 181 181 /* Hole */ 182 182 SUNXI_PIN(SUNXI_PINCTRL_PIN(D, 2), 183 183 SUNXI_FUNCTION(0x0, "gpio_in"),
+2
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 544 544 struct sunxi_pinctrl *pctl = pinctrl_dev_get_drvdata(pctldev); 545 545 int i; 546 546 547 + pin -= pctl->desc->pin_base; 548 + 547 549 for (i = 0; i < num_configs; i++) { 548 550 enum pin_config_param param; 549 551 unsigned long flags;
+1 -1
drivers/s390/char/sclp.c
··· 60 60 /* List of queued requests. */ 61 61 static LIST_HEAD(sclp_req_queue); 62 62 63 - /* Data for read and and init requests. */ 63 + /* Data for read and init requests. */ 64 64 static struct sclp_req sclp_read_req; 65 65 static struct sclp_req sclp_init_req; 66 66 static void *sclp_read_sccb;
+7
drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
··· 2782 2782 struct hisi_hba *hisi_hba = shost_priv(shost); 2783 2783 struct device *dev = hisi_hba->dev; 2784 2784 int ret = sas_slave_configure(sdev); 2785 + unsigned int max_sectors; 2785 2786 2786 2787 if (ret) 2787 2788 return ret; ··· 2799 2798 pm_runtime_disable(dev); 2800 2799 } 2801 2800 } 2801 + 2802 + /* Set according to IOMMU IOVA caching limit */ 2803 + max_sectors = min_t(size_t, queue_max_hw_sectors(sdev->request_queue), 2804 + (PAGE_SIZE * 32) >> SECTOR_SHIFT); 2805 + 2806 + blk_queue_max_hw_sectors(sdev->request_queue, max_sectors); 2802 2807 2803 2808 return 0; 2804 2809 }
+6 -6
drivers/soc/atmel/soc.c
··· 91 91 AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 92 92 AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 93 93 "sam9x60", "sam9x60"), 94 - AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D5M_EXID_MATCH, 95 - AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 94 + AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 95 + AT91_CIDR_VERSION_MASK, SAM9X60_D5M_EXID_MATCH, 96 96 "sam9x60 64MiB DDR2 SiP", "sam9x60"), 97 - AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D1G_EXID_MATCH, 98 - AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 97 + AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 98 + AT91_CIDR_VERSION_MASK, SAM9X60_D1G_EXID_MATCH, 99 99 "sam9x60 128MiB DDR2 SiP", "sam9x60"), 100 - AT91_SOC(SAM9X60_CIDR_MATCH, SAM9X60_D6K_EXID_MATCH, 101 - AT91_CIDR_VERSION_MASK, SAM9X60_EXID_MATCH, 100 + AT91_SOC(SAM9X60_CIDR_MATCH, AT91_CIDR_MATCH_MASK, 101 + AT91_CIDR_VERSION_MASK, SAM9X60_D6K_EXID_MATCH, 102 102 "sam9x60 8MiB SDRAM SiP", "sam9x60"), 103 103 #endif 104 104 #ifdef CONFIG_SOC_SAMA5
+1 -1
drivers/soc/ixp4xx/ixp4xx-npe.c
··· 758 758 static struct platform_driver ixp4xx_npe_driver = { 759 759 .driver = { 760 760 .name = "ixp4xx-npe", 761 - .of_match_table = of_match_ptr(ixp4xx_npe_of_match), 761 + .of_match_table = ixp4xx_npe_of_match, 762 762 }, 763 763 .probe = ixp4xx_npe_probe, 764 764 .remove = ixp4xx_npe_remove,
+3 -3
drivers/soc/qcom/smem.c
··· 926 926 struct smem_partition_header *header; 927 927 struct smem_ptable_entry *entry; 928 928 struct smem_ptable *ptable; 929 - unsigned int remote_host; 929 + u16 remote_host; 930 930 u16 host0, host1; 931 931 int i; 932 932 ··· 951 951 continue; 952 952 953 953 if (remote_host >= SMEM_HOST_COUNT) { 954 - dev_err(smem->dev, "bad host %hu\n", remote_host); 954 + dev_err(smem->dev, "bad host %u\n", remote_host); 955 955 return -EINVAL; 956 956 } 957 957 958 958 if (smem->partitions[remote_host].virt_base) { 959 - dev_err(smem->dev, "duplicate host %hu\n", remote_host); 959 + dev_err(smem->dev, "duplicate host %u\n", remote_host); 960 960 return -EINVAL; 961 961 } 962 962
+1
drivers/thermal/intel/intel_tcc_cooling.c
··· 81 81 X86_MATCH_INTEL_FAM6_MODEL(COMETLAKE, NULL), 82 82 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, NULL), 83 83 X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, NULL), 84 + X86_MATCH_INTEL_FAM6_MODEL(RAPTORLAKE, NULL), 84 85 {} 85 86 }; 86 87
+12
drivers/video/fbdev/core/fbmem.c
··· 19 19 #include <linux/kernel.h> 20 20 #include <linux/major.h> 21 21 #include <linux/slab.h> 22 + #include <linux/sysfb.h> 22 23 #include <linux/mm.h> 23 24 #include <linux/mman.h> 24 25 #include <linux/vt.h> ··· 1752 1751 a->ranges[0].size = ~0; 1753 1752 do_free = true; 1754 1753 } 1754 + 1755 + /* 1756 + * If a driver asked to unregister a platform device registered by 1757 + * sysfb, then it can be assumed that this is a driver for a display 1758 + * that is set up by the system firmware and has a generic driver. 1759 + * 1760 + * Drivers for devices that don't have a generic driver will never 1761 + * ask for this, so let's assume that a real driver for the display 1762 + * was already probed and prevent sysfb from registering devices later. 1763 + */ 1764 + sysfb_disable(); 1755 1765 1756 1766 mutex_lock(&registration_lock); 1757 1767 do_remove_conflicting_framebuffers(a, name, primary);
+1
fs/ceph/caps.c
··· 4377 4377 ihold(inode); 4378 4378 dout("flush_dirty_caps %llx.%llx\n", ceph_vinop(inode)); 4379 4379 spin_unlock(&mdsc->cap_dirty_lock); 4380 + ceph_wait_on_async_create(inode); 4380 4381 ceph_check_caps(ci, CHECK_CAPS_FLUSH, NULL); 4381 4382 iput(inode); 4382 4383 spin_lock(&mdsc->cap_dirty_lock);
+12 -7
fs/io_uring.c
··· 1183 1183 .unbound_nonreg_file = 1, 1184 1184 .pollout = 1, 1185 1185 .needs_async_setup = 1, 1186 + .ioprio = 1, 1186 1187 .async_size = sizeof(struct io_async_msghdr), 1187 1188 }, 1188 1189 [IORING_OP_RECVMSG] = { ··· 1192 1191 .pollin = 1, 1193 1192 .buffer_select = 1, 1194 1193 .needs_async_setup = 1, 1194 + .ioprio = 1, 1195 1195 .async_size = sizeof(struct io_async_msghdr), 1196 1196 }, 1197 1197 [IORING_OP_TIMEOUT] = { ··· 1268 1266 .unbound_nonreg_file = 1, 1269 1267 .pollout = 1, 1270 1268 .audit_skip = 1, 1269 + .ioprio = 1, 1271 1270 }, 1272 1271 [IORING_OP_RECV] = { 1273 1272 .needs_file = 1, ··· 1276 1273 .pollin = 1, 1277 1274 .buffer_select = 1, 1278 1275 .audit_skip = 1, 1276 + .ioprio = 1, 1279 1277 }, 1280 1278 [IORING_OP_OPENAT2] = { 1281 1279 }, ··· 4318 4314 if (unlikely(ret < 0)) 4319 4315 return ret; 4320 4316 } else { 4317 + rw = req->async_data; 4318 + s = &rw->s; 4319 + 4321 4320 /* 4322 4321 * Safe and required to re-import if we're using provided 4323 4322 * buffers, as we dropped the selected one before retry. 4324 4323 */ 4325 - if (req->flags & REQ_F_BUFFER_SELECT) { 4324 + if (io_do_buffer_select(req)) { 4326 4325 ret = io_import_iovec(READ, req, &iovec, s, issue_flags); 4327 4326 if (unlikely(ret < 0)) 4328 4327 return ret; 4329 4328 } 4330 4329 4331 - rw = req->async_data; 4332 - s = &rw->s; 4333 4330 /* 4334 4331 * We come here from an earlier attempt, restore our state to 4335 4332 * match in case it doesn't. 
It's cheap enough that we don't ··· 6080 6075 { 6081 6076 struct io_sr_msg *sr = &req->sr_msg; 6082 6077 6083 - if (unlikely(sqe->file_index)) 6078 + if (unlikely(sqe->file_index || sqe->addr2)) 6084 6079 return -EINVAL; 6085 6080 6086 6081 sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr)); 6087 6082 sr->len = READ_ONCE(sqe->len); 6088 - sr->flags = READ_ONCE(sqe->addr2); 6083 + sr->flags = READ_ONCE(sqe->ioprio); 6089 6084 if (sr->flags & ~IORING_RECVSEND_POLL_FIRST) 6090 6085 return -EINVAL; 6091 6086 sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL; ··· 6316 6311 { 6317 6312 struct io_sr_msg *sr = &req->sr_msg; 6318 6313 6319 - if (unlikely(sqe->file_index)) 6314 + if (unlikely(sqe->file_index || sqe->addr2)) 6320 6315 return -EINVAL; 6321 6316 6322 6317 sr->umsg = u64_to_user_ptr(READ_ONCE(sqe->addr)); 6323 6318 sr->len = READ_ONCE(sqe->len); 6324 - sr->flags = READ_ONCE(sqe->addr2); 6319 + sr->flags = READ_ONCE(sqe->ioprio); 6325 6320 if (sr->flags & ~IORING_RECVSEND_POLL_FIRST) 6326 6321 return -EINVAL; 6327 6322 sr->msg_flags = READ_ONCE(sqe->msg_flags) | MSG_NOSIGNAL;
+13 -6
fs/nfs/nfs4proc.c
··· 4012 4012 } 4013 4013 4014 4014 page = alloc_page(GFP_KERNEL); 4015 + if (!page) 4016 + return -ENOMEM; 4015 4017 locations = kmalloc(sizeof(struct nfs4_fs_locations), GFP_KERNEL); 4016 - if (page == NULL || locations == NULL) 4017 - goto out; 4018 + if (!locations) 4019 + goto out_free; 4020 + locations->fattr = nfs_alloc_fattr(); 4021 + if (!locations->fattr) 4022 + goto out_free_2; 4018 4023 4019 4024 status = nfs4_proc_get_locations(server, fhandle, locations, page, 4020 4025 cred); 4021 4026 if (status) 4022 - goto out; 4027 + goto out_free_3; 4023 4028 4024 4029 for (i = 0; i < locations->nlocations; i++) 4025 4030 test_fs_location_for_trunking(&locations->locations[i], clp, 4026 4031 server); 4027 - out: 4028 - if (page) 4029 - __free_page(page); 4032 + out_free_3: 4033 + kfree(locations->fattr); 4034 + out_free_2: 4030 4035 kfree(locations); 4036 + out_free: 4037 + __free_page(page); 4031 4038 return status; 4032 4039 } 4033 4040
+1
fs/nfs/nfs4state.c
··· 2753 2753 goto again; 2754 2754 2755 2755 nfs_put_client(clp); 2756 + module_put_and_kthread_exit(0); 2756 2757 return 0; 2757 2758 }
+2 -1
fs/nfsd/vfs.c
··· 1179 1179 nfsd_copy_write_verifier(verf, nn); 1180 1180 err2 = filemap_check_wb_err(nf->nf_file->f_mapping, 1181 1181 since); 1182 + err = nfserrno(err2); 1182 1183 break; 1183 1184 case -EINVAL: 1184 1185 err = nfserr_notsupp; ··· 1187 1186 default: 1188 1187 nfsd_reset_write_verifier(nn); 1189 1188 trace_nfsd_writeverf_reset(nn, rqstp, err2); 1189 + err = nfserrno(err2); 1190 1190 } 1191 - err = nfserrno(err2); 1192 1191 } else 1193 1192 nfsd_copy_write_verifier(verf, nn); 1194 1193
+9 -29
fs/xfs/libxfs/xfs_attr.c
··· 50 50 STATIC int xfs_attr_leaf_get(xfs_da_args_t *args); 51 51 STATIC int xfs_attr_leaf_removename(xfs_da_args_t *args); 52 52 STATIC int xfs_attr_leaf_hasname(struct xfs_da_args *args, struct xfs_buf **bp); 53 - STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args, struct xfs_buf *bp); 53 + STATIC int xfs_attr_leaf_try_add(struct xfs_da_args *args); 54 54 55 55 /* 56 56 * Internal routines when attribute list is more than one block. ··· 393 393 * It won't fit in the shortform, transform to a leaf block. GROT: 394 394 * another possible req'mt for a double-split btree op. 395 395 */ 396 - error = xfs_attr_shortform_to_leaf(args, &attr->xattri_leaf_bp); 396 + error = xfs_attr_shortform_to_leaf(args); 397 397 if (error) 398 398 return error; 399 399 400 - /* 401 - * Prevent the leaf buffer from being unlocked so that a concurrent AIL 402 - * push cannot grab the half-baked leaf buffer and run into problems 403 - * with the write verifier. 404 - */ 405 - xfs_trans_bhold(args->trans, attr->xattri_leaf_bp); 406 400 attr->xattri_dela_state = XFS_DAS_LEAF_ADD; 407 401 out: 408 402 trace_xfs_attr_sf_addname_return(attr->xattri_dela_state, args->dp); ··· 441 447 442 448 /* 443 449 * Use the leaf buffer we may already hold locked as a result of 444 - * a sf-to-leaf conversion. The held buffer is no longer valid 445 - * after this call, regardless of the result. 450 + * a sf-to-leaf conversion. 
446 451 */ 447 - error = xfs_attr_leaf_try_add(args, attr->xattri_leaf_bp); 448 - attr->xattri_leaf_bp = NULL; 452 + error = xfs_attr_leaf_try_add(args); 449 453 450 454 if (error == -ENOSPC) { 451 455 error = xfs_attr3_leaf_to_node(args); ··· 488 496 { 489 497 struct xfs_da_args *args = attr->xattri_da_args; 490 498 int error; 491 - 492 - ASSERT(!attr->xattri_leaf_bp); 493 499 494 500 error = xfs_attr_node_addname_find_attr(attr); 495 501 if (error) ··· 1205 1215 */ 1206 1216 STATIC int 1207 1217 xfs_attr_leaf_try_add( 1208 - struct xfs_da_args *args, 1209 - struct xfs_buf *bp) 1218 + struct xfs_da_args *args) 1210 1219 { 1220 + struct xfs_buf *bp; 1211 1221 int error; 1212 1222 1213 - /* 1214 - * If the caller provided a buffer to us, it is locked and held in 1215 - * the transaction because it just did a shortform to leaf conversion. 1216 - * Hence we don't need to read it again. Otherwise read in the leaf 1217 - * buffer. 1218 - */ 1219 - if (bp) { 1220 - xfs_trans_bhold_release(args->trans, bp); 1221 - } else { 1222 - error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp); 1223 - if (error) 1224 - return error; 1225 - } 1223 + error = xfs_attr3_leaf_read(args->trans, args->dp, 0, &bp); 1224 + if (error) 1225 + return error; 1226 1226 1227 1227 /* 1228 1228 * Look up the xattr name to set the insertion point for the new xattr.
-5
fs/xfs/libxfs/xfs_attr.h
··· 515 515 */ 516 516 struct xfs_attri_log_nameval *xattri_nameval; 517 517 518 - /* 519 - * Used by xfs_attr_set to hold a leaf buffer across a transaction roll 520 - */ 521 - struct xfs_buf *xattri_leaf_bp; 522 - 523 518 /* Used to keep track of current state of delayed operation */ 524 519 enum xfs_delattr_state xattri_dela_state; 525 520
+19 -16
fs/xfs/libxfs/xfs_attr_leaf.c
··· 289 289 return NULL; 290 290 } 291 291 292 + /* 293 + * Validate an attribute leaf block. 294 + * 295 + * Empty leaf blocks can occur under the following circumstances: 296 + * 297 + * 1. setxattr adds a new extended attribute to a file; 298 + * 2. The file has zero existing attributes; 299 + * 3. The attribute is too large to fit in the attribute fork; 300 + * 4. The attribute is small enough to fit in a leaf block; 301 + * 5. A log flush occurs after committing the transaction that creates 302 + * the (empty) leaf block; and 303 + * 6. The filesystem goes down after the log flush but before the new 304 + * attribute can be committed to the leaf block. 305 + * 306 + * Hence we need to ensure that we don't fail the validation purely 307 + * because the leaf is empty. 308 + */ 292 309 static xfs_failaddr_t 293 310 xfs_attr3_leaf_verify( 294 311 struct xfs_buf *bp) ··· 326 309 fa = xfs_da3_blkinfo_verify(bp, bp->b_addr); 327 310 if (fa) 328 311 return fa; 329 - 330 - /* 331 - * Empty leaf blocks should never occur; they imply the existence of a 332 - * software bug that needs fixing. xfs_repair also flags them as a 333 - * corruption that needs fixing, so we should never let these go to 334 - * disk. 335 - */ 336 - if (ichdr.count == 0) 337 - return __this_address; 338 312 339 313 /* 340 314 * firstused is the block offset of the first name info structure. ··· 930 922 return -ENOATTR; 931 923 } 932 924 933 - /* 934 - * Convert from using the shortform to the leaf. On success, return the 935 - * buffer so that we can keep it locked until we're totally done with it. 936 - */ 925 + /* Convert from using the shortform to the leaf format. 
*/ 937 926 int 938 927 xfs_attr_shortform_to_leaf( 939 - struct xfs_da_args *args, 940 - struct xfs_buf **leaf_bp) 928 + struct xfs_da_args *args) 941 929 { 942 930 struct xfs_inode *dp; 943 931 struct xfs_attr_shortform *sf; ··· 995 991 sfe = xfs_attr_sf_nextentry(sfe); 996 992 } 997 993 error = 0; 998 - *leaf_bp = bp; 999 994 out: 1000 995 kmem_free(tmpbuffer); 1001 996 return error;
+1 -2
fs/xfs/libxfs/xfs_attr_leaf.h
··· 49 49 void xfs_attr_shortform_add(struct xfs_da_args *args, int forkoff); 50 50 int xfs_attr_shortform_lookup(struct xfs_da_args *args); 51 51 int xfs_attr_shortform_getvalue(struct xfs_da_args *args); 52 - int xfs_attr_shortform_to_leaf(struct xfs_da_args *args, 53 - struct xfs_buf **leaf_bp); 52 + int xfs_attr_shortform_to_leaf(struct xfs_da_args *args); 54 53 int xfs_attr_sf_removename(struct xfs_da_args *args); 55 54 int xfs_attr_sf_findname(struct xfs_da_args *args, 56 55 struct xfs_attr_sf_entry **sfep,
+15 -12
fs/xfs/xfs_attr_item.c
··· 576 576 struct xfs_trans_res tres; 577 577 struct xfs_attri_log_format *attrp; 578 578 struct xfs_attri_log_nameval *nv = attrip->attri_nameval; 579 - int error, ret = 0; 579 + int error; 580 580 int total; 581 581 int local; 582 582 struct xfs_attrd_log_item *done_item = NULL; ··· 655 655 xfs_ilock(ip, XFS_ILOCK_EXCL); 656 656 xfs_trans_ijoin(tp, ip, 0); 657 657 658 - ret = xfs_xattri_finish_update(attr, done_item); 659 - if (ret == -EAGAIN) { 660 - /* There's more work to do, so add it to this transaction */ 658 + error = xfs_xattri_finish_update(attr, done_item); 659 + if (error == -EAGAIN) { 660 + /* 661 + * There's more work to do, so add the intent item to this 662 + * transaction so that we can continue it later. 663 + */ 661 664 xfs_defer_add(tp, XFS_DEFER_OPS_TYPE_ATTR, &attr->xattri_list); 662 - } else 663 - error = ret; 665 + error = xfs_defer_ops_capture_and_commit(tp, capture_list); 666 + if (error) 667 + goto out_unlock; 664 668 669 + xfs_iunlock(ip, XFS_ILOCK_EXCL); 670 + xfs_irele(ip); 671 + return 0; 672 + } 665 673 if (error) { 666 674 xfs_trans_cancel(tp); 667 675 goto out_unlock; 668 676 } 669 677 670 678 error = xfs_defer_ops_capture_and_commit(tp, capture_list); 671 - 672 679 out_unlock: 673 - if (attr->xattri_leaf_bp) 674 - xfs_buf_relse(attr->xattri_leaf_bp); 675 - 676 680 xfs_iunlock(ip, XFS_ILOCK_EXCL); 677 681 xfs_irele(ip); 678 682 out: 679 - if (ret != -EAGAIN) 680 - xfs_attr_free_item(attr); 683 + xfs_attr_free_item(attr); 681 684 return error; 682 685 } 683 686
+2
fs/xfs/xfs_bmap_util.c
··· 686 686 * forever. 687 687 */ 688 688 end_fsb = XFS_B_TO_FSB(mp, (xfs_ufsize_t)XFS_ISIZE(ip)); 689 + if (XFS_IS_REALTIME_INODE(ip) && mp->m_sb.sb_rextsize > 1) 690 + end_fsb = roundup_64(end_fsb, mp->m_sb.sb_rextsize); 689 691 last_fsb = XFS_B_TO_FSB(mp, mp->m_super->s_maxbytes); 690 692 if (last_fsb <= end_fsb) 691 693 return false;
+37 -19
fs/xfs/xfs_icache.c
··· 440 440 for_each_online_cpu(cpu) { 441 441 gc = per_cpu_ptr(mp->m_inodegc, cpu); 442 442 if (!llist_empty(&gc->list)) 443 - queue_work_on(cpu, mp->m_inodegc_wq, &gc->work); 443 + mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0); 444 444 } 445 445 } 446 446 ··· 1841 1841 xfs_inodegc_worker( 1842 1842 struct work_struct *work) 1843 1843 { 1844 - struct xfs_inodegc *gc = container_of(work, struct xfs_inodegc, 1845 - work); 1844 + struct xfs_inodegc *gc = container_of(to_delayed_work(work), 1845 + struct xfs_inodegc, work); 1846 1846 struct llist_node *node = llist_del_all(&gc->list); 1847 1847 struct xfs_inode *ip, *n; 1848 1848 ··· 1862 1862 } 1863 1863 1864 1864 /* 1865 + * Expedite all pending inodegc work to run immediately. This does not wait for 1866 + * completion of the work. 1867 + */ 1868 + void 1869 + xfs_inodegc_push( 1870 + struct xfs_mount *mp) 1871 + { 1872 + if (!xfs_is_inodegc_enabled(mp)) 1873 + return; 1874 + trace_xfs_inodegc_push(mp, __return_address); 1875 + xfs_inodegc_queue_all(mp); 1876 + } 1877 + 1878 + /* 1865 1879 * Force all currently queued inode inactivation work to run immediately and 1866 1880 * wait for the work to finish. 
1867 1881 */ ··· 1883 1869 xfs_inodegc_flush( 1884 1870 struct xfs_mount *mp) 1885 1871 { 1886 - if (!xfs_is_inodegc_enabled(mp)) 1887 - return; 1888 - 1872 + xfs_inodegc_push(mp); 1889 1873 trace_xfs_inodegc_flush(mp, __return_address); 1890 - 1891 - xfs_inodegc_queue_all(mp); 1892 1874 flush_workqueue(mp->m_inodegc_wq); 1893 1875 } 1894 1876 ··· 2024 2014 struct xfs_inodegc *gc; 2025 2015 int items; 2026 2016 unsigned int shrinker_hits; 2017 + unsigned long queue_delay = 1; 2027 2018 2028 2019 trace_xfs_inode_set_need_inactive(ip); 2029 2020 spin_lock(&ip->i_flags_lock); ··· 2036 2025 items = READ_ONCE(gc->items); 2037 2026 WRITE_ONCE(gc->items, items + 1); 2038 2027 shrinker_hits = READ_ONCE(gc->shrinker_hits); 2039 - put_cpu_ptr(gc); 2040 2028 2041 - if (!xfs_is_inodegc_enabled(mp)) 2029 + /* 2030 + * We queue the work while holding the current CPU so that the work 2031 + * is scheduled to run on this CPU. 2032 + */ 2033 + if (!xfs_is_inodegc_enabled(mp)) { 2034 + put_cpu_ptr(gc); 2042 2035 return; 2043 - 2044 - if (xfs_inodegc_want_queue_work(ip, items)) { 2045 - trace_xfs_inodegc_queue(mp, __return_address); 2046 - queue_work(mp->m_inodegc_wq, &gc->work); 2047 2036 } 2037 + 2038 + if (xfs_inodegc_want_queue_work(ip, items)) 2039 + queue_delay = 0; 2040 + 2041 + trace_xfs_inodegc_queue(mp, __return_address); 2042 + mod_delayed_work(mp->m_inodegc_wq, &gc->work, queue_delay); 2043 + put_cpu_ptr(gc); 2048 2044 2049 2045 if (xfs_inodegc_want_flush_work(ip, items, shrinker_hits)) { 2050 2046 trace_xfs_inodegc_throttle(mp, __return_address); 2051 - flush_work(&gc->work); 2047 + flush_delayed_work(&gc->work); 2052 2048 } 2053 2049 } 2054 2050 ··· 2072 2054 unsigned int count = 0; 2073 2055 2074 2056 dead_gc = per_cpu_ptr(mp->m_inodegc, dead_cpu); 2075 - cancel_work_sync(&dead_gc->work); 2057 + cancel_delayed_work_sync(&dead_gc->work); 2076 2058 2077 2059 if (llist_empty(&dead_gc->list)) 2078 2060 return; ··· 2091 2073 llist_add_batch(first, last, &gc->list); 2092 
2074 count += READ_ONCE(gc->items); 2093 2075 WRITE_ONCE(gc->items, count); 2094 - put_cpu_ptr(gc); 2095 2076 2096 2077 if (xfs_is_inodegc_enabled(mp)) { 2097 2078 trace_xfs_inodegc_queue(mp, __return_address); 2098 - queue_work(mp->m_inodegc_wq, &gc->work); 2079 + mod_delayed_work(mp->m_inodegc_wq, &gc->work, 0); 2099 2080 } 2081 + put_cpu_ptr(gc); 2100 2082 } 2101 2083 2102 2084 /* ··· 2191 2173 unsigned int h = READ_ONCE(gc->shrinker_hits); 2192 2174 2193 2175 WRITE_ONCE(gc->shrinker_hits, h + 1); 2194 - queue_work_on(cpu, mp->m_inodegc_wq, &gc->work); 2176 + mod_delayed_work_on(cpu, mp->m_inodegc_wq, &gc->work, 0); 2195 2177 no_items = false; 2196 2178 } 2197 2179 }
+1
fs/xfs/xfs_icache.h
··· 76 76 void xfs_blockgc_start(struct xfs_mount *mp); 77 77 78 78 void xfs_inodegc_worker(struct work_struct *work); 79 + void xfs_inodegc_push(struct xfs_mount *mp); 79 80 void xfs_inodegc_flush(struct xfs_mount *mp); 80 81 void xfs_inodegc_stop(struct xfs_mount *mp); 81 82 void xfs_inodegc_start(struct xfs_mount *mp);
+25 -39
fs/xfs/xfs_inode.c
··· 132 132 } 133 133 134 134 /* 135 + * You can't set both SHARED and EXCL for the same lock, 136 + * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_MMAPLOCK_SHARED, 137 + * XFS_MMAPLOCK_EXCL, XFS_ILOCK_SHARED, XFS_ILOCK_EXCL are valid values 138 + * to set in lock_flags. 139 + */ 140 + static inline void 141 + xfs_lock_flags_assert( 142 + uint lock_flags) 143 + { 144 + ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) != 145 + (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)); 146 + ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) != 147 + (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)); 148 + ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) != 149 + (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); 150 + ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0); 151 + ASSERT(lock_flags != 0); 152 + } 153 + 154 + /* 135 155 * In addition to i_rwsem in the VFS inode, the xfs inode contains 2 136 156 * multi-reader locks: invalidate_lock and the i_lock. This routine allows 137 157 * various combinations of the locks to be obtained. ··· 188 168 { 189 169 trace_xfs_ilock(ip, lock_flags, _RET_IP_); 190 170 191 - /* 192 - * You can't set both SHARED and EXCL for the same lock, 193 - * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED, 194 - * and XFS_ILOCK_EXCL are valid values to set in lock_flags. 
195 - */ 196 - ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) != 197 - (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)); 198 - ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) != 199 - (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)); 200 - ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) != 201 - (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); 202 - ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0); 171 + xfs_lock_flags_assert(lock_flags); 203 172 204 173 if (lock_flags & XFS_IOLOCK_EXCL) { 205 174 down_write_nested(&VFS_I(ip)->i_rwsem, ··· 231 222 { 232 223 trace_xfs_ilock_nowait(ip, lock_flags, _RET_IP_); 233 224 234 - /* 235 - * You can't set both SHARED and EXCL for the same lock, 236 - * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED, 237 - * and XFS_ILOCK_EXCL are valid values to set in lock_flags. 238 - */ 239 - ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) != 240 - (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)); 241 - ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) != 242 - (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)); 243 - ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) != 244 - (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); 245 - ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0); 225 + xfs_lock_flags_assert(lock_flags); 246 226 247 227 if (lock_flags & XFS_IOLOCK_EXCL) { 248 228 if (!down_write_trylock(&VFS_I(ip)->i_rwsem)) ··· 289 291 xfs_inode_t *ip, 290 292 uint lock_flags) 291 293 { 292 - /* 293 - * You can't set both SHARED and EXCL for the same lock, 294 - * and only XFS_IOLOCK_SHARED, XFS_IOLOCK_EXCL, XFS_ILOCK_SHARED, 295 - * and XFS_ILOCK_EXCL are valid values to set in lock_flags. 
296 - */ 297 - ASSERT((lock_flags & (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)) != 298 - (XFS_IOLOCK_SHARED | XFS_IOLOCK_EXCL)); 299 - ASSERT((lock_flags & (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)) != 300 - (XFS_MMAPLOCK_SHARED | XFS_MMAPLOCK_EXCL)); 301 - ASSERT((lock_flags & (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)) != 302 - (XFS_ILOCK_SHARED | XFS_ILOCK_EXCL)); 303 - ASSERT((lock_flags & ~(XFS_LOCK_MASK | XFS_LOCK_SUBCLASS_MASK)) == 0); 304 - ASSERT(lock_flags != 0); 294 + xfs_lock_flags_assert(lock_flags); 305 295 306 296 if (lock_flags & XFS_IOLOCK_EXCL) 307 297 up_write(&VFS_I(ip)->i_rwsem); ··· 365 379 } 366 380 367 381 if (lock_flags & (XFS_MMAPLOCK_EXCL|XFS_MMAPLOCK_SHARED)) { 368 - return __xfs_rwsem_islocked(&VFS_I(ip)->i_rwsem, 369 - (lock_flags & XFS_IOLOCK_SHARED)); 382 + return __xfs_rwsem_islocked(&VFS_I(ip)->i_mapping->invalidate_lock, 383 + (lock_flags & XFS_MMAPLOCK_SHARED)); 370 384 } 371 385 372 386 if (lock_flags & (XFS_IOLOCK_EXCL | XFS_IOLOCK_SHARED)) {
+7 -2
fs/xfs/xfs_log.c
··· 2092 2092 xlog_in_core_t *iclog, *next_iclog; 2093 2093 int i; 2094 2094 2095 - xlog_cil_destroy(log); 2096 - 2097 2095 /* 2098 2096 * Cycle all the iclogbuf locks to make sure all log IO completion 2099 2097 * is done before we tear down these buffers. ··· 2102 2104 up(&iclog->ic_sema); 2103 2105 iclog = iclog->ic_next; 2104 2106 } 2107 + 2108 + /* 2109 + * Destroy the CIL after waiting for iclog IO completion because an 2110 + * iclog EIO error will try to shut down the log, which accesses the 2111 + * CIL to wake up the waiters. 2112 + */ 2113 + xlog_cil_destroy(log); 2105 2114 2106 2115 iclog = log->l_iclog; 2107 2116 for (i = 0; i < log->l_iclog_bufs; i++) {
+1 -1
fs/xfs/xfs_mount.h
··· 61 61 */ 62 62 struct xfs_inodegc { 63 63 struct llist_head list; 64 - struct work_struct work; 64 + struct delayed_work work; 65 65 66 66 /* approximate count of inodes in the list */ 67 67 unsigned int items;
+6 -3
fs/xfs/xfs_qm_syscalls.c
··· 454 454 struct xfs_dquot *dqp; 455 455 int error; 456 456 457 - /* Flush inodegc work at the start of a quota reporting scan. */ 457 + /* 458 + * Expedite pending inodegc work at the start of a quota reporting 459 + * scan but don't block waiting for it to complete. 460 + */ 458 461 if (id == 0) 459 - xfs_inodegc_flush(mp); 462 + xfs_inodegc_push(mp); 460 463 461 464 /* 462 465 * Try to get the dquot. We don't want it allocated on disk, so don't ··· 501 498 502 499 /* Flush inodegc work at the start of a quota reporting scan. */ 503 500 if (*id == 0) 504 - xfs_inodegc_flush(mp); 501 + xfs_inodegc_push(mp); 505 502 506 503 error = xfs_qm_dqget_next(mp, *id, type, &dqp); 507 504 if (error)
+6 -3
fs/xfs/xfs_super.c
··· 797 797 xfs_extlen_t lsize; 798 798 int64_t ffree; 799 799 800 - /* Wait for whatever inactivations are in progress. */ 801 - xfs_inodegc_flush(mp); 800 + /* 801 + * Expedite background inodegc but don't wait. We do not want to block 802 + * here waiting hours for a billion extent file to be truncated. 803 + */ 804 + xfs_inodegc_push(mp); 802 805 803 806 statp->f_type = XFS_SUPER_MAGIC; 804 807 statp->f_namelen = MAXNAMELEN - 1; ··· 1077 1074 gc = per_cpu_ptr(mp->m_inodegc, cpu); 1078 1075 init_llist_head(&gc->list); 1079 1076 gc->items = 0; 1080 - INIT_WORK(&gc->work, xfs_inodegc_worker); 1077 + INIT_DELAYED_WORK(&gc->work, xfs_inodegc_worker); 1081 1078 } 1082 1079 return 0; 1083 1080 }
+1
fs/xfs/xfs_trace.h
··· 240 240 TP_PROTO(struct xfs_mount *mp, void *caller_ip), \ 241 241 TP_ARGS(mp, caller_ip)) 242 242 DEFINE_FS_EVENT(xfs_inodegc_flush); 243 + DEFINE_FS_EVENT(xfs_inodegc_push); 243 244 DEFINE_FS_EVENT(xfs_inodegc_start); 244 245 DEFINE_FS_EVENT(xfs_inodegc_stop); 245 246 DEFINE_FS_EVENT(xfs_inodegc_queue);
+2
include/linux/compiler_types.h
··· 24 24 /* context/locking */ 25 25 # define __must_hold(x) __attribute__((context(x,1,1))) 26 26 # define __acquires(x) __attribute__((context(x,0,1))) 27 + # define __cond_acquires(x) __attribute__((context(x,0,-1))) 27 28 # define __releases(x) __attribute__((context(x,1,0))) 28 29 # define __acquire(x) __context__(x,1) 29 30 # define __release(x) __context__(x,-1) ··· 51 50 /* context/locking */ 52 51 # define __must_hold(x) 53 52 # define __acquires(x) 53 + # define __cond_acquires(x) 54 54 # define __releases(x) 55 55 # define __acquire(x) (void)0 56 56 # define __release(x) (void)0
+5
include/linux/devfreq.h
··· 148 148 * reevaluate operable frequencies. Devfreq users may use 149 149 * devfreq.nb to the corresponding register notifier call chain. 150 150 * @work: delayed work for load monitoring. 151 + * @freq_table: current frequency table used by the devfreq driver. 152 + * @max_state: count of entry present in the frequency table. 151 153 * @previous_freq: previously configured frequency value. 152 154 * @last_status: devfreq user device info, performance statistics 153 155 * @data: Private data of the governor. The devfreq framework does not ··· 186 184 struct opp_table *opp_table; 187 185 struct notifier_block nb; 188 186 struct delayed_work work; 187 + 188 + unsigned long *freq_table; 189 + unsigned int max_state; 189 190 190 191 unsigned long previous_freq; 191 192 struct devfreq_dev_status last_status;
-1
include/linux/lockref.h
··· 38 38 extern int lockref_put_return(struct lockref *); 39 39 extern int lockref_get_not_zero(struct lockref *); 40 40 extern int lockref_put_not_zero(struct lockref *); 41 - extern int lockref_get_or_lock(struct lockref *); 42 41 extern int lockref_put_or_lock(struct lockref *); 43 42 44 43 extern void lockref_mark_dead(struct lockref *);
+3 -3
include/linux/refcount.h
··· 361 361 362 362 extern __must_check bool refcount_dec_if_one(refcount_t *r); 363 363 extern __must_check bool refcount_dec_not_one(refcount_t *r); 364 - extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock); 365 - extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock); 364 + extern __must_check bool refcount_dec_and_mutex_lock(refcount_t *r, struct mutex *lock) __cond_acquires(lock); 365 + extern __must_check bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock) __cond_acquires(lock); 366 366 extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r, 367 367 spinlock_t *lock, 368 - unsigned long *flags); 368 + unsigned long *flags) __cond_acquires(lock); 369 369 #endif /* _LINUX_REFCOUNT_H */
+17 -5
include/linux/sysfb.h
··· 55 55 int flags; 56 56 }; 57 57 58 + #ifdef CONFIG_SYSFB 59 + 60 + void sysfb_disable(void); 61 + 62 + #else /* CONFIG_SYSFB */ 63 + 64 + static inline void sysfb_disable(void) 65 + { 66 + } 67 + 68 + #endif /* CONFIG_SYSFB */ 69 + 58 70 #ifdef CONFIG_EFI 59 71 60 72 extern struct efifb_dmi_info efifb_dmi_list[]; ··· 84 72 85 73 bool sysfb_parse_mode(const struct screen_info *si, 86 74 struct simplefb_platform_data *mode); 87 - int sysfb_create_simplefb(const struct screen_info *si, 88 - const struct simplefb_platform_data *mode); 75 + struct platform_device *sysfb_create_simplefb(const struct screen_info *si, 76 + const struct simplefb_platform_data *mode); 89 77 90 78 #else /* CONFIG_SYSFB_SIMPLE */ 91 79 ··· 95 83 return false; 96 84 } 97 85 98 - static inline int sysfb_create_simplefb(const struct screen_info *si, 99 - const struct simplefb_platform_data *mode) 86 + static inline struct platform_device *sysfb_create_simplefb(const struct screen_info *si, 87 + const struct simplefb_platform_data *mode) 100 88 { 101 - return -EINVAL; 89 + return ERR_PTR(-EINVAL); 102 90 } 103 91 104 92 #endif /* CONFIG_SYSFB_SIMPLE */
+1
include/net/flow_offload.h
··· 152 152 FLOW_ACTION_PIPE, 153 153 FLOW_ACTION_VLAN_PUSH_ETH, 154 154 FLOW_ACTION_VLAN_POP_ETH, 155 + FLOW_ACTION_CONTINUE, 155 156 NUM_FLOW_ACTIONS, 156 157 }; 157 158
-2
include/sound/soc.h
··· 408 408 409 409 struct snd_soc_jack_gpio; 410 410 411 - typedef int (*hw_write_t)(void *,const char* ,int); 412 - 413 411 enum snd_soc_pcm_subclass { 414 412 SND_SOC_PCM_CLASS_PCM = 0, 415 413 SND_SOC_PCM_CLASS_BE = 1,
+2 -2
include/uapi/drm/drm_fourcc.h
··· 1444 1444 #define AMD_FMT_MOD_PIPE_MASK 0x7 1445 1445 1446 1446 #define AMD_FMT_MOD_SET(field, value) \ 1447 - ((uint64_t)(value) << AMD_FMT_MOD_##field##_SHIFT) 1447 + ((__u64)(value) << AMD_FMT_MOD_##field##_SHIFT) 1448 1448 #define AMD_FMT_MOD_GET(field, value) \ 1449 1449 (((value) >> AMD_FMT_MOD_##field##_SHIFT) & AMD_FMT_MOD_##field##_MASK) 1450 1450 #define AMD_FMT_MOD_CLEAR(field) \ 1451 - (~((uint64_t)AMD_FMT_MOD_##field##_MASK << AMD_FMT_MOD_##field##_SHIFT)) 1451 + (~((__u64)AMD_FMT_MOD_##field##_MASK << AMD_FMT_MOD_##field##_SHIFT)) 1452 1452 1453 1453 #if defined(__cplusplus) 1454 1454 }
+1 -1
include/uapi/linux/io_uring.h
··· 244 244 #define IORING_ASYNC_CANCEL_ANY (1U << 2) 245 245 246 246 /* 247 - * send/sendmsg and recv/recvmsg flags (sqe->addr2) 247 + * send/sendmsg and recv/recvmsg flags (sqe->ioprio) 248 248 * 249 249 * IORING_RECVSEND_POLL_FIRST If set, instead of first attempting to send 250 250 * or receive and arm poll if that yields an
+48 -67
kernel/bpf/verifier.c
··· 1562 1562 reg->var_off = tnum_or(tnum_clear_subreg(var64_off), var32_off); 1563 1563 } 1564 1564 1565 + static void reg_bounds_sync(struct bpf_reg_state *reg) 1566 + { 1567 + /* We might have learned new bounds from the var_off. */ 1568 + __update_reg_bounds(reg); 1569 + /* We might have learned something about the sign bit. */ 1570 + __reg_deduce_bounds(reg); 1571 + /* We might have learned some bits from the bounds. */ 1572 + __reg_bound_offset(reg); 1573 + /* Intersecting with the old var_off might have improved our bounds 1574 + * slightly, e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc), 1575 + * then new var_off is (0; 0x7f...fc) which improves our umax. 1576 + */ 1577 + __update_reg_bounds(reg); 1578 + } 1579 + 1565 1580 static bool __reg32_bound_s64(s32 a) 1566 1581 { 1567 1582 return a >= 0 && a <= S32_MAX; ··· 1618 1603 * so they do not impact tnum bounds calculation. 1619 1604 */ 1620 1605 __mark_reg64_unbounded(reg); 1621 - __update_reg_bounds(reg); 1622 1606 } 1623 - 1624 - /* Intersecting with the old var_off might have improved our bounds 1625 - * slightly. e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc), 1626 - * then new var_off is (0; 0x7f...fc) which improves our umax. 1627 - */ 1628 - __reg_deduce_bounds(reg); 1629 - __reg_bound_offset(reg); 1630 - __update_reg_bounds(reg); 1607 + reg_bounds_sync(reg); 1631 1608 } 1632 1609 1633 1610 static bool __reg64_bound_s32(s64 a) ··· 1635 1628 static void __reg_combine_64_into_32(struct bpf_reg_state *reg) 1636 1629 { 1637 1630 __mark_reg32_unbounded(reg); 1638 - 1639 1631 if (__reg64_bound_s32(reg->smin_value) && __reg64_bound_s32(reg->smax_value)) { 1640 1632 reg->s32_min_value = (s32)reg->smin_value; 1641 1633 reg->s32_max_value = (s32)reg->smax_value; ··· 1643 1637 reg->u32_min_value = (u32)reg->umin_value; 1644 1638 reg->u32_max_value = (u32)reg->umax_value; 1645 1639 } 1646 - 1647 - /* Intersecting with the old var_off might have improved our bounds 1648 - * slightly. e.g. 
if umax was 0x7f...f and var_off was (0; 0xf...fc), 1649 - * then new var_off is (0; 0x7f...fc) which improves our umax. 1650 - */ 1651 - __reg_deduce_bounds(reg); 1652 - __reg_bound_offset(reg); 1653 - __update_reg_bounds(reg); 1640 + reg_bounds_sync(reg); 1654 1641 } 1655 1642 1656 1643 /* Mark a register as having a completely unknown (scalar) value. */ ··· 6963 6964 ret_reg->s32_max_value = meta->msize_max_value; 6964 6965 ret_reg->smin_value = -MAX_ERRNO; 6965 6966 ret_reg->s32_min_value = -MAX_ERRNO; 6966 - __reg_deduce_bounds(ret_reg); 6967 - __reg_bound_offset(ret_reg); 6968 - __update_reg_bounds(ret_reg); 6967 + reg_bounds_sync(ret_reg); 6969 6968 } 6970 6969 6971 6970 static int ··· 8220 8223 8221 8224 if (!check_reg_sane_offset(env, dst_reg, ptr_reg->type)) 8222 8225 return -EINVAL; 8223 - 8224 - __update_reg_bounds(dst_reg); 8225 - __reg_deduce_bounds(dst_reg); 8226 - __reg_bound_offset(dst_reg); 8227 - 8226 + reg_bounds_sync(dst_reg); 8228 8227 if (sanitize_check_bounds(env, insn, dst_reg) < 0) 8229 8228 return -EACCES; 8230 8229 if (sanitize_needed(opcode)) { ··· 8958 8965 /* ALU32 ops are zero extended into 64bit register */ 8959 8966 if (alu32) 8960 8967 zext_32_to_64(dst_reg); 8961 - 8962 - __update_reg_bounds(dst_reg); 8963 - __reg_deduce_bounds(dst_reg); 8964 - __reg_bound_offset(dst_reg); 8968 + reg_bounds_sync(dst_reg); 8965 8969 return 0; 8966 8970 } 8967 8971 ··· 9147 9157 insn->dst_reg); 9148 9158 } 9149 9159 zext_32_to_64(dst_reg); 9150 - 9151 - __update_reg_bounds(dst_reg); 9152 - __reg_deduce_bounds(dst_reg); 9153 - __reg_bound_offset(dst_reg); 9160 + reg_bounds_sync(dst_reg); 9154 9161 } 9155 9162 } else { 9156 9163 /* case: R = imm ··· 9585 9598 return; 9586 9599 9587 9600 switch (opcode) { 9601 + /* JEQ/JNE comparison doesn't change the register equivalence. 9602 + * 9603 + * r1 = r2; 9604 + * if (r1 == 42) goto label; 9605 + * ... 9606 + * label: // here both r1 and r2 are known to be 42. 
9607 + * 9608 + * Hence when marking register as known preserve it's ID. 9609 + */ 9588 9610 case BPF_JEQ: 9589 - case BPF_JNE: 9590 - { 9591 - struct bpf_reg_state *reg = 9592 - opcode == BPF_JEQ ? true_reg : false_reg; 9593 - 9594 - /* JEQ/JNE comparison doesn't change the register equivalence. 9595 - * r1 = r2; 9596 - * if (r1 == 42) goto label; 9597 - * ... 9598 - * label: // here both r1 and r2 are known to be 42. 9599 - * 9600 - * Hence when marking register as known preserve it's ID. 9601 - */ 9602 - if (is_jmp32) 9603 - __mark_reg32_known(reg, val32); 9604 - else 9605 - ___mark_reg_known(reg, val); 9611 + if (is_jmp32) { 9612 + __mark_reg32_known(true_reg, val32); 9613 + true_32off = tnum_subreg(true_reg->var_off); 9614 + } else { 9615 + ___mark_reg_known(true_reg, val); 9616 + true_64off = true_reg->var_off; 9617 + } 9606 9618 break; 9607 - } 9619 + case BPF_JNE: 9620 + if (is_jmp32) { 9621 + __mark_reg32_known(false_reg, val32); 9622 + false_32off = tnum_subreg(false_reg->var_off); 9623 + } else { 9624 + ___mark_reg_known(false_reg, val); 9625 + false_64off = false_reg->var_off; 9626 + } 9627 + break; 9608 9628 case BPF_JSET: 9609 9629 if (is_jmp32) { 9610 9630 false_32off = tnum_and(false_32off, tnum_const(~val32)); ··· 9750 9756 dst_reg->smax_value); 9751 9757 src_reg->var_off = dst_reg->var_off = tnum_intersect(src_reg->var_off, 9752 9758 dst_reg->var_off); 9753 - /* We might have learned new bounds from the var_off. */ 9754 - __update_reg_bounds(src_reg); 9755 - __update_reg_bounds(dst_reg); 9756 - /* We might have learned something about the sign bit. */ 9757 - __reg_deduce_bounds(src_reg); 9758 - __reg_deduce_bounds(dst_reg); 9759 - /* We might have learned some bits from the bounds. */ 9760 - __reg_bound_offset(src_reg); 9761 - __reg_bound_offset(dst_reg); 9762 - /* Intersecting with the old var_off might have improved our bounds 9763 - * slightly. e.g. 
if umax was 0x7f...f and var_off was (0; 0xf...fc), 9764 - * then new var_off is (0; 0x7f...fc) which improves our umax. 9765 - */ 9766 - __update_reg_bounds(src_reg); 9767 - __update_reg_bounds(dst_reg); 9759 + reg_bounds_sync(src_reg); 9760 + reg_bounds_sync(dst_reg); 9768 9761 } 9769 9762 9770 9763 static void reg_combine_min_max(struct bpf_reg_state *true_src,
+4 -4
kernel/signal.c
··· 2029 2029 bool autoreap = false; 2030 2030 u64 utime, stime; 2031 2031 2032 - BUG_ON(sig == -1); 2032 + WARN_ON_ONCE(sig == -1); 2033 2033 2034 - /* do_notify_parent_cldstop should have been called instead. */ 2035 - BUG_ON(task_is_stopped_or_traced(tsk)); 2034 + /* do_notify_parent_cldstop should have been called instead. */ 2035 + WARN_ON_ONCE(task_is_stopped_or_traced(tsk)); 2036 2036 2037 - BUG_ON(!tsk->ptrace && 2037 + WARN_ON_ONCE(!tsk->ptrace && 2038 2038 (tsk->group_leader != tsk || !thread_group_empty(tsk))); 2039 2039 2040 2040 /* Wake up all pidfd waiters */
-25
lib/lockref.c
··· 111 111 EXPORT_SYMBOL(lockref_put_not_zero); 112 112 113 113 /** 114 - * lockref_get_or_lock - Increments count unless the count is 0 or dead 115 - * @lockref: pointer to lockref structure 116 - * Return: 1 if count updated successfully or 0 if count was zero 117 - * and we got the lock instead. 118 - */ 119 - int lockref_get_or_lock(struct lockref *lockref) 120 - { 121 - CMPXCHG_LOOP( 122 - new.count++; 123 - if (old.count <= 0) 124 - break; 125 - , 126 - return 1; 127 - ); 128 - 129 - spin_lock(&lockref->lock); 130 - if (lockref->count <= 0) 131 - return 0; 132 - lockref->count++; 133 - spin_unlock(&lockref->lock); 134 - return 1; 135 - } 136 - EXPORT_SYMBOL(lockref_get_or_lock); 137 - 138 - /** 139 114 * lockref_put_return - Decrement reference count if possible 140 115 * @lockref: pointer to lockref structure 141 116 *
+4 -1
lib/sbitmap.c
··· 528 528 529 529 sbitmap_deferred_clear(map); 530 530 if (map->word == (1UL << (map_depth - 1)) - 1) 531 - continue; 531 + goto next; 532 532 533 533 nr = find_first_zero_bit(&map->word, map_depth); 534 534 if (nr + nr_tags <= map_depth) { ··· 539 539 get_mask = ((1UL << map_tags) - 1) << nr; 540 540 do { 541 541 val = READ_ONCE(map->word); 542 + if ((val & ~get_mask) != val) 543 + goto next; 542 544 ret = atomic_long_cmpxchg(ptr, val, get_mask | val); 543 545 } while (ret != val); 544 546 get_mask = (get_mask & ~ret) >> nr; ··· 551 549 return get_mask; 552 550 } 553 551 } 552 + next: 554 553 /* Jump to next index. */ 555 554 if (++index >= sb->map_nr) 556 555 index = 0;
+3
net/bluetooth/hci_core.c
··· 571 571 goto done; 572 572 } 573 573 574 + cancel_work_sync(&hdev->power_on); 574 575 if (hci_dev_test_and_clear_flag(hdev, HCI_AUTO_OFF)) 575 576 cancel_delayed_work(&hdev->power_off); 576 577 ··· 2675 2674 write_lock(&hci_dev_list_lock); 2676 2675 list_del(&hdev->list); 2677 2676 write_unlock(&hci_dev_list_lock); 2677 + 2678 + cancel_work_sync(&hdev->power_on); 2678 2679 2679 2680 hci_cmd_sync_clear(hdev); 2680 2681
-1
net/bluetooth/hci_sync.c
··· 4088 4088 4089 4089 bt_dev_dbg(hdev, ""); 4090 4090 4091 - cancel_work_sync(&hdev->power_on); 4092 4091 cancel_delayed_work(&hdev->power_off); 4093 4092 cancel_delayed_work(&hdev->ncmd_timer); 4094 4093
+14 -4
net/can/bcm.c
··· 100 100 101 101 struct bcm_op { 102 102 struct list_head list; 103 + struct rcu_head rcu; 103 104 int ifindex; 104 105 canid_t can_id; 105 106 u32 flags; ··· 719 718 return NULL; 720 719 } 721 720 722 - static void bcm_remove_op(struct bcm_op *op) 721 + static void bcm_free_op_rcu(struct rcu_head *rcu_head) 723 722 { 724 - hrtimer_cancel(&op->timer); 725 - hrtimer_cancel(&op->thrtimer); 723 + struct bcm_op *op = container_of(rcu_head, struct bcm_op, rcu); 726 724 727 725 if ((op->frames) && (op->frames != &op->sframe)) 728 726 kfree(op->frames); ··· 730 730 kfree(op->last_frames); 731 731 732 732 kfree(op); 733 + } 734 + 735 + static void bcm_remove_op(struct bcm_op *op) 736 + { 737 + hrtimer_cancel(&op->timer); 738 + hrtimer_cancel(&op->thrtimer); 739 + 740 + call_rcu(&op->rcu, bcm_free_op_rcu); 733 741 } 734 742 735 743 static void bcm_rx_unreg(struct net_device *dev, struct bcm_op *op) ··· 764 756 list_for_each_entry_safe(op, n, ops, list) { 765 757 if ((op->can_id == mh->can_id) && (op->ifindex == ifindex) && 766 758 (op->flags & CAN_FD_FRAME) == (mh->flags & CAN_FD_FRAME)) { 759 + 760 + /* disable automatic timer on frame reception */ 761 + op->flags |= RX_NO_AUTOTIMER; 767 762 768 763 /* 769 764 * Don't care if we're bound or not (due to netdev ··· 796 785 bcm_rx_handler, op); 797 786 798 787 list_del(&op->list); 799 - synchronize_rcu(); 800 788 bcm_remove_op(op); 801 789 return 1; /* done */ 802 790 }
+3
net/mptcp/options.c
··· 1584 1584 *ptr++ = mptcp_option(MPTCPOPT_MP_PRIO, 1585 1585 TCPOLEN_MPTCP_PRIO, 1586 1586 opts->backup, TCPOPT_NOP); 1587 + 1588 + MPTCP_INC_STATS(sock_net((const struct sock *)tp), 1589 + MPTCP_MIB_MPPRIOTX); 1587 1590 } 1588 1591 1589 1592 mp_capable_done:
+33 -13
net/mptcp/pm_netlink.c
··· 717 717 } 718 718 } 719 719 720 - static int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk, 721 - struct mptcp_addr_info *addr, 722 - u8 bkup) 720 + int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk, 721 + struct mptcp_addr_info *addr, 722 + struct mptcp_addr_info *rem, 723 + u8 bkup) 723 724 { 724 725 struct mptcp_subflow_context *subflow; 725 726 ··· 728 727 729 728 mptcp_for_each_subflow(msk, subflow) { 730 729 struct sock *ssk = mptcp_subflow_tcp_sock(subflow); 731 - struct sock *sk = (struct sock *)msk; 732 - struct mptcp_addr_info local; 730 + struct mptcp_addr_info local, remote; 731 + bool slow; 733 732 734 733 local_address((struct sock_common *)ssk, &local); 735 734 if (!mptcp_addresses_equal(&local, addr, addr->port)) 736 735 continue; 737 736 737 + if (rem && rem->family != AF_UNSPEC) { 738 + remote_address((struct sock_common *)ssk, &remote); 739 + if (!mptcp_addresses_equal(&remote, rem, rem->port)) 740 + continue; 741 + } 742 + 743 + slow = lock_sock_fast(ssk); 738 744 if (subflow->backup != bkup) 739 745 msk->last_snd = NULL; 740 746 subflow->backup = bkup; 741 747 subflow->send_mp_prio = 1; 742 748 subflow->request_bkup = bkup; 743 - __MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_MPPRIOTX); 744 749 745 - spin_unlock_bh(&msk->pm.lock); 746 750 pr_debug("send ack for mp_prio"); 747 - mptcp_subflow_send_ack(ssk); 748 - spin_lock_bh(&msk->pm.lock); 751 + __mptcp_subflow_send_ack(ssk); 752 + unlock_sock_fast(ssk, slow); 749 753 750 754 return 0; 751 755 } ··· 807 801 removed = true; 808 802 __MPTCP_INC_STATS(sock_net(sk), rm_type); 809 803 } 810 - __set_bit(rm_list->ids[i], msk->pm.id_avail_bitmap); 804 + if (rm_type == MPTCP_MIB_RMSUBFLOW) 805 + __set_bit(rm_list->ids[i], msk->pm.id_avail_bitmap); 811 806 if (!removed) 812 807 continue; 813 808 ··· 1823 1816 1824 1817 list.ids[list.nr++] = addr->id; 1825 1818 1819 + spin_lock_bh(&msk->pm.lock); 1826 1820 mptcp_pm_nl_rm_subflow_received(msk, &list); 1827 1821 
mptcp_pm_create_subflow_or_signal_addr(msk); 1822 + spin_unlock_bh(&msk->pm.lock); 1828 1823 } 1829 1824 1830 1825 static int mptcp_nl_set_flags(struct net *net, ··· 1844 1835 goto next; 1845 1836 1846 1837 lock_sock(sk); 1847 - spin_lock_bh(&msk->pm.lock); 1848 1838 if (changed & MPTCP_PM_ADDR_FLAG_BACKUP) 1849 - ret = mptcp_pm_nl_mp_prio_send_ack(msk, addr, bkup); 1839 + ret = mptcp_pm_nl_mp_prio_send_ack(msk, addr, NULL, bkup); 1850 1840 if (changed & MPTCP_PM_ADDR_FLAG_FULLMESH) 1851 1841 mptcp_pm_nl_fullmesh(msk, addr); 1852 - spin_unlock_bh(&msk->pm.lock); 1853 1842 release_sock(sk); 1854 1843 1855 1844 next: ··· 1861 1854 static int mptcp_nl_cmd_set_flags(struct sk_buff *skb, struct genl_info *info) 1862 1855 { 1863 1856 struct mptcp_pm_addr_entry addr = { .addr = { .family = AF_UNSPEC }, }, *entry; 1857 + struct mptcp_pm_addr_entry remote = { .addr = { .family = AF_UNSPEC }, }; 1858 + struct nlattr *attr_rem = info->attrs[MPTCP_PM_ATTR_ADDR_REMOTE]; 1859 + struct nlattr *token = info->attrs[MPTCP_PM_ATTR_TOKEN]; 1864 1860 struct nlattr *attr = info->attrs[MPTCP_PM_ATTR_ADDR]; 1865 1861 struct pm_nl_pernet *pernet = genl_info_pm_nl(info); 1866 1862 u8 changed, mask = MPTCP_PM_ADDR_FLAG_BACKUP | ··· 1876 1866 if (ret < 0) 1877 1867 return ret; 1878 1868 1869 + if (attr_rem) { 1870 + ret = mptcp_pm_parse_entry(attr_rem, info, false, &remote); 1871 + if (ret < 0) 1872 + return ret; 1873 + } 1874 + 1879 1875 if (addr.flags & MPTCP_PM_ADDR_FLAG_BACKUP) 1880 1876 bkup = 1; 1881 1877 if (addr.addr.family == AF_UNSPEC) { ··· 1889 1873 if (!addr.addr.id) 1890 1874 return -EOPNOTSUPP; 1891 1875 } 1876 + 1877 + if (token) 1878 + return mptcp_userspace_pm_set_flags(sock_net(skb->sk), 1879 + token, &addr, &remote, bkup); 1892 1880 1893 1881 spin_lock_bh(&pernet->lock); 1894 1882 entry = __lookup_addr(pernet, &addr.addr, lookup_by_id);
+38 -13
net/mptcp/pm_userspace.c
··· 5 5 */ 6 6 7 7 #include "protocol.h" 8 + #include "mib.h" 8 9 9 10 void mptcp_free_local_addr_list(struct mptcp_sock *msk) 10 11 { ··· 307 306 const struct mptcp_addr_info *local, 308 307 const struct mptcp_addr_info *remote) 309 308 { 310 - struct sock *sk = &msk->sk.icsk_inet.sk; 311 309 struct mptcp_subflow_context *subflow; 312 - struct sock *found = NULL; 313 310 314 311 if (local->family != remote->family) 315 312 return NULL; 316 - 317 - lock_sock(sk); 318 313 319 314 mptcp_for_each_subflow(msk, subflow) { 320 315 const struct inet_sock *issk; ··· 344 347 } 345 348 346 349 if (issk->inet_sport == local->port && 347 - issk->inet_dport == remote->port) { 348 - found = ssk; 349 - goto found; 350 - } 350 + issk->inet_dport == remote->port) 351 + return ssk; 351 352 } 352 353 353 - found: 354 - release_sock(sk); 355 - 356 - return found; 354 + return NULL; 357 355 } 358 356 359 357 int mptcp_nl_cmd_sf_destroy(struct sk_buff *skb, struct genl_info *info) ··· 404 412 } 405 413 406 414 sk = &msk->sk.icsk_inet.sk; 415 + lock_sock(sk); 407 416 ssk = mptcp_nl_find_ssk(msk, &addr_l, &addr_r); 408 417 if (ssk) { 409 418 struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk); 410 419 411 420 mptcp_subflow_shutdown(sk, ssk, RCV_SHUTDOWN | SEND_SHUTDOWN); 412 421 mptcp_close_ssk(sk, ssk, subflow); 422 + MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_RMSUBFLOW); 413 423 err = 0; 414 424 } else { 415 425 err = -ESRCH; 416 426 } 427 + release_sock(sk); 417 428 418 - destroy_err: 429 + destroy_err: 419 430 sock_put((struct sock *)msk); 420 431 return err; 432 + } 433 + 434 + int mptcp_userspace_pm_set_flags(struct net *net, struct nlattr *token, 435 + struct mptcp_pm_addr_entry *loc, 436 + struct mptcp_pm_addr_entry *rem, u8 bkup) 437 + { 438 + struct mptcp_sock *msk; 439 + int ret = -EINVAL; 440 + u32 token_val; 441 + 442 + token_val = nla_get_u32(token); 443 + 444 + msk = mptcp_token_get_sock(net, token_val); 445 + if (!msk) 446 + return ret; 447 + 448 + if 
(!mptcp_pm_is_userspace(msk)) 449 + goto set_flags_err; 450 + 451 + if (loc->addr.family == AF_UNSPEC || 452 + rem->addr.family == AF_UNSPEC) 453 + goto set_flags_err; 454 + 455 + lock_sock((struct sock *)msk); 456 + ret = mptcp_pm_nl_mp_prio_send_ack(msk, &loc->addr, &rem->addr, bkup); 457 + release_sock((struct sock *)msk); 458 + 459 + set_flags_err: 460 + sock_put((struct sock *)msk); 461 + return ret; 421 462 }
+7 -2
net/mptcp/protocol.c
··· 502 502 (TCPF_SYN_SENT | TCPF_SYN_RECV | TCPF_TIME_WAIT | TCPF_CLOSE | TCPF_LISTEN)); 503 503 } 504 504 505 + void __mptcp_subflow_send_ack(struct sock *ssk) 506 + { 507 + if (tcp_can_send_ack(ssk)) 508 + tcp_send_ack(ssk); 509 + } 510 + 505 511 void mptcp_subflow_send_ack(struct sock *ssk) 506 512 { 507 513 bool slow; 508 514 509 515 slow = lock_sock_fast(ssk); 510 - if (tcp_can_send_ack(ssk)) 511 - tcp_send_ack(ssk); 516 + __mptcp_subflow_send_ack(ssk); 512 517 unlock_sock_fast(ssk, slow); 513 518 } 514 519
+8 -1
net/mptcp/protocol.h
··· 607 607 void mptcp_subflow_shutdown(struct sock *sk, struct sock *ssk, int how); 608 608 void mptcp_close_ssk(struct sock *sk, struct sock *ssk, 609 609 struct mptcp_subflow_context *subflow); 610 + void __mptcp_subflow_send_ack(struct sock *ssk); 610 611 void mptcp_subflow_send_ack(struct sock *ssk); 611 612 void mptcp_subflow_reset(struct sock *ssk); 612 613 void mptcp_subflow_queue_clean(struct sock *ssk); ··· 772 771 const struct mptcp_rm_list *rm_list); 773 772 void mptcp_pm_mp_prio_received(struct sock *sk, u8 bkup); 774 773 void mptcp_pm_mp_fail_received(struct sock *sk, u64 fail_seq); 774 + int mptcp_pm_nl_mp_prio_send_ack(struct mptcp_sock *msk, 775 + struct mptcp_addr_info *addr, 776 + struct mptcp_addr_info *rem, 777 + u8 bkup); 775 778 bool mptcp_pm_alloc_anno_list(struct mptcp_sock *msk, 776 779 const struct mptcp_pm_addr_entry *entry); 777 780 void mptcp_pm_free_anno_list(struct mptcp_sock *msk); ··· 792 787 int mptcp_userspace_pm_get_flags_and_ifindex_by_id(struct mptcp_sock *msk, 793 788 unsigned int id, 794 789 u8 *flags, int *ifindex); 795 - 790 + int mptcp_userspace_pm_set_flags(struct net *net, struct nlattr *token, 791 + struct mptcp_pm_addr_entry *loc, 792 + struct mptcp_pm_addr_entry *rem, u8 bkup); 796 793 int mptcp_pm_announce_addr(struct mptcp_sock *msk, 797 794 const struct mptcp_addr_info *addr, 798 795 bool echo);
+8 -1
net/netfilter/nf_tables_api.c
··· 5213 5213 struct nft_data *data, 5214 5214 struct nlattr *attr) 5215 5215 { 5216 + u32 dtype; 5216 5217 int err; 5217 5218 5218 5219 err = nft_data_init(ctx, data, NFT_DATA_VALUE_MAXLEN, desc, attr); 5219 5220 if (err < 0) 5220 5221 return err; 5221 5222 5222 - if (desc->type != NFT_DATA_VERDICT && desc->len != set->dlen) { 5223 + if (set->dtype == NFT_DATA_VERDICT) 5224 + dtype = NFT_DATA_VERDICT; 5225 + else 5226 + dtype = NFT_DATA_VALUE; 5227 + 5228 + if (dtype != desc->type || 5229 + set->dlen != desc->len) { 5223 5230 nft_data_release(data, desc->type); 5224 5231 return -EINVAL; 5225 5232 }
+33 -15
net/netfilter/nft_set_pipapo.c
··· 2125 2125 } 2126 2126 2127 2127 /** 2128 + * nft_set_pipapo_match_destroy() - Destroy elements from key mapping array 2129 + * @set: nftables API set representation 2130 + * @m: matching data pointing to key mapping array 2131 + */ 2132 + static void nft_set_pipapo_match_destroy(const struct nft_set *set, 2133 + struct nft_pipapo_match *m) 2134 + { 2135 + struct nft_pipapo_field *f; 2136 + int i, r; 2137 + 2138 + for (i = 0, f = m->f; i < m->field_count - 1; i++, f++) 2139 + ; 2140 + 2141 + for (r = 0; r < f->rules; r++) { 2142 + struct nft_pipapo_elem *e; 2143 + 2144 + if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e) 2145 + continue; 2146 + 2147 + e = f->mt[r].e; 2148 + 2149 + nft_set_elem_destroy(set, e, true); 2150 + } 2151 + } 2152 + 2153 + /** 2128 2154 * nft_pipapo_destroy() - Free private data for set and all committed elements 2129 2155 * @set: nftables API set representation 2130 2156 */ ··· 2158 2132 { 2159 2133 struct nft_pipapo *priv = nft_set_priv(set); 2160 2134 struct nft_pipapo_match *m; 2161 - struct nft_pipapo_field *f; 2162 - int i, r, cpu; 2135 + int cpu; 2163 2136 2164 2137 m = rcu_dereference_protected(priv->match, true); 2165 2138 if (m) { 2166 2139 rcu_barrier(); 2167 2140 2168 - for (i = 0, f = m->f; i < m->field_count - 1; i++, f++) 2169 - ; 2170 - 2171 - for (r = 0; r < f->rules; r++) { 2172 - struct nft_pipapo_elem *e; 2173 - 2174 - if (r < f->rules - 1 && f->mt[r + 1].e == f->mt[r].e) 2175 - continue; 2176 - 2177 - e = f->mt[r].e; 2178 - 2179 - nft_set_elem_destroy(set, e, true); 2180 - } 2141 + nft_set_pipapo_match_destroy(set, m); 2181 2142 2182 2143 #ifdef NFT_PIPAPO_ALIGN 2183 2144 free_percpu(m->scratch_aligned); ··· 2178 2165 } 2179 2166 2180 2167 if (priv->clone) { 2168 + m = priv->clone; 2169 + 2170 + if (priv->dirty) 2171 + nft_set_pipapo_match_destroy(set, m); 2172 + 2181 2173 #ifdef NFT_PIPAPO_ALIGN 2182 2174 free_percpu(priv->clone->scratch_aligned); 2183 2175 #endif
+2 -2
net/rose/rose_route.c
··· 227 227 { 228 228 struct rose_neigh *s; 229 229 230 - rose_stop_ftimer(rose_neigh); 231 - rose_stop_t0timer(rose_neigh); 230 + del_timer_sync(&rose_neigh->ftimer); 231 + del_timer_sync(&rose_neigh->t0timer); 232 232 233 233 skb_queue_purge(&rose_neigh->queue); 234 234
+1 -1
net/sched/act_police.c
··· 442 442 act_id = FLOW_ACTION_JUMP; 443 443 *extval = tc_act & TC_ACT_EXT_VAL_MASK; 444 444 } else if (tc_act == TC_ACT_UNSPEC) { 445 - NL_SET_ERR_MSG_MOD(extack, "Offload not supported when conform/exceed action is \"continue\""); 445 + act_id = FLOW_ACTION_CONTINUE; 446 446 } else { 447 447 NL_SET_ERR_MSG_MOD(extack, "Unsupported conform/exceed action offload"); 448 448 }
+1 -1
net/sunrpc/xdr.c
··· 984 984 p = page_address(*xdr->page_ptr); 985 985 xdr->p = p + frag2bytes; 986 986 space_left = xdr->buf->buflen - xdr->buf->len; 987 - if (space_left - nbytes >= PAGE_SIZE) 987 + if (space_left - frag1bytes >= PAGE_SIZE) 988 988 xdr->end = p + PAGE_SIZE; 989 989 else 990 990 xdr->end = p + space_left - frag1bytes;
+4 -4
net/tls/tls_sw.c
··· 269 269 } 270 270 darg->async = false; 271 271 272 - if (ret == -EBADMSG) 273 - TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR); 274 - 275 272 return ret; 276 273 } 277 274 ··· 1590 1593 } 1591 1594 1592 1595 err = decrypt_internal(sk, skb, dest, NULL, darg); 1593 - if (err < 0) 1596 + if (err < 0) { 1597 + if (err == -EBADMSG) 1598 + TLS_INC_STATS(sock_net(sk), LINUX_MIB_TLSDECRYPTERROR); 1594 1599 return err; 1600 + } 1595 1601 if (darg->async) 1596 1602 goto decrypt_next; 1597 1603 /* If opportunistic TLS 1.3 ZC failed retry without ZC */
+1
net/xdp/xsk_buff_pool.c
··· 332 332 for (i = 0; i < dma_map->dma_pages_cnt; i++) { 333 333 dma = &dma_map->dma_pages[i]; 334 334 if (*dma) { 335 + *dma &= ~XSK_NEXT_PG_CONTIG_MASK; 335 336 dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE, 336 337 DMA_BIDIRECTIONAL, attrs); 337 338 *dma = 0;
+7
samples/fprobe/fprobe_example.c
··· 25 25 26 26 static char symbol[MAX_SYMBOL_LEN] = "kernel_clone"; 27 27 module_param_string(symbol, symbol, sizeof(symbol), 0644); 28 + MODULE_PARM_DESC(symbol, "Probed symbol(s), given by comma separated symbols or a wildcard pattern."); 29 + 28 30 static char nosymbol[MAX_SYMBOL_LEN] = ""; 29 31 module_param_string(nosymbol, nosymbol, sizeof(nosymbol), 0644); 32 + MODULE_PARM_DESC(nosymbol, "Not-probed symbols, given by a wildcard pattern."); 33 + 30 34 static bool stackdump = true; 31 35 module_param(stackdump, bool, 0644); 36 + MODULE_PARM_DESC(stackdump, "Enable stackdump."); 37 + 32 38 static bool use_trace = false; 33 39 module_param(use_trace, bool, 0644); 40 + MODULE_PARM_DESC(use_trace, "Use trace_printk instead of printk. This is only for debugging."); 34 41 35 42 static void show_backtrace(void) 36 43 {
+13 -9
sound/pci/cs46xx/cs46xx.c
··· 74 74 err = snd_cs46xx_create(card, pci, 75 75 external_amp[dev], thinkpad[dev]); 76 76 if (err < 0) 77 - return err; 77 + goto error; 78 78 card->private_data = chip; 79 79 chip->accept_valid = mmap_valid[dev]; 80 80 err = snd_cs46xx_pcm(chip, 0); 81 81 if (err < 0) 82 - return err; 82 + goto error; 83 83 #ifdef CONFIG_SND_CS46XX_NEW_DSP 84 84 err = snd_cs46xx_pcm_rear(chip, 1); 85 85 if (err < 0) 86 - return err; 86 + goto error; 87 87 err = snd_cs46xx_pcm_iec958(chip, 2); 88 88 if (err < 0) 89 - return err; 89 + goto error; 90 90 #endif 91 91 err = snd_cs46xx_mixer(chip, 2); 92 92 if (err < 0) 93 - return err; 93 + goto error; 94 94 #ifdef CONFIG_SND_CS46XX_NEW_DSP 95 95 if (chip->nr_ac97_codecs ==2) { 96 96 err = snd_cs46xx_pcm_center_lfe(chip, 3); 97 97 if (err < 0) 98 - return err; 98 + goto error; 99 99 } 100 100 #endif 101 101 err = snd_cs46xx_midi(chip, 0); 102 102 if (err < 0) 103 - return err; 103 + goto error; 104 104 err = snd_cs46xx_start_dsp(chip); 105 105 if (err < 0) 106 - return err; 106 + goto error; 107 107 108 108 snd_cs46xx_gameport(chip); 109 109 ··· 117 117 118 118 err = snd_card_register(card); 119 119 if (err < 0) 120 - return err; 120 + goto error; 121 121 122 122 pci_set_drvdata(pci, card); 123 123 dev++; 124 124 return 0; 125 + 126 + error: 127 + snd_card_free(card); 128 + return err; 125 129 } 126 130 127 131 static struct pci_driver cs46xx_driver = {
+1
sound/pci/hda/patch_realtek.c
··· 9212 9212 SND_PCI_QUIRK(0x1558, 0x70f4, "Clevo NH77EPY", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9213 9213 SND_PCI_QUIRK(0x1558, 0x70f6, "Clevo NH77DPQ-Y", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9214 9214 SND_PCI_QUIRK(0x1558, 0x7716, "Clevo NS50PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9215 + SND_PCI_QUIRK(0x1558, 0x7718, "Clevo L140PU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9215 9216 SND_PCI_QUIRK(0x1558, 0x8228, "Clevo NR40BU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9216 9217 SND_PCI_QUIRK(0x1558, 0x8520, "Clevo NH50D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 9217 9218 SND_PCI_QUIRK(0x1558, 0x8521, "Clevo NH77D[CD]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE),
+4 -2
sound/soc/codecs/ak4613.c
··· 868 868 869 869 /* 870 870 * connected STDI 871 + * TDM support is assuming it is probed via Audio-Graph-Card style here. 872 + * Default is SDTIx1 if it was probed via Simple-Audio-Card for now. 871 873 */ 872 874 sdti_num = of_graph_get_endpoint_count(np); 873 - if (WARN_ON((sdti_num > 3) || (sdti_num < 1))) 874 - return; 875 + if ((sdti_num >= SDTx_MAX) || (sdti_num < 1)) 876 + sdti_num = 1; 875 877 876 878 AK4613_CONFIG_SDTI_set(priv, sdti_num); 877 879 }
+8 -2
sound/soc/codecs/cs35l41-lib.c
··· 37 37 { CS35L41_DAC_PCM1_SRC, 0x00000008 }, 38 38 { CS35L41_ASP_TX1_SRC, 0x00000018 }, 39 39 { CS35L41_ASP_TX2_SRC, 0x00000019 }, 40 - { CS35L41_ASP_TX3_SRC, 0x00000020 }, 41 - { CS35L41_ASP_TX4_SRC, 0x00000021 }, 40 + { CS35L41_ASP_TX3_SRC, 0x00000000 }, 41 + { CS35L41_ASP_TX4_SRC, 0x00000000 }, 42 42 { CS35L41_DSP1_RX1_SRC, 0x00000008 }, 43 43 { CS35L41_DSP1_RX2_SRC, 0x00000009 }, 44 44 { CS35L41_DSP1_RX3_SRC, 0x00000018 }, ··· 644 644 { CS35L41_DSP1_XM_ACCEL_PL0_PRI, 0x00000000 }, 645 645 { CS35L41_PWR_CTRL2, 0x00000000 }, 646 646 { CS35L41_AMP_GAIN_CTRL, 0x00000000 }, 647 + { CS35L41_ASP_TX3_SRC, 0x00000000 }, 648 + { CS35L41_ASP_TX4_SRC, 0x00000000 }, 647 649 }; 648 650 649 651 static const struct reg_sequence cs35l41_revb0_errata_patch[] = { ··· 657 655 { CS35L41_DSP1_XM_ACCEL_PL0_PRI, 0x00000000 }, 658 656 { CS35L41_PWR_CTRL2, 0x00000000 }, 659 657 { CS35L41_AMP_GAIN_CTRL, 0x00000000 }, 658 + { CS35L41_ASP_TX3_SRC, 0x00000000 }, 659 + { CS35L41_ASP_TX4_SRC, 0x00000000 }, 660 660 }; 661 661 662 662 static const struct reg_sequence cs35l41_revb2_errata_patch[] = { ··· 670 666 { CS35L41_DSP1_XM_ACCEL_PL0_PRI, 0x00000000 }, 671 667 { CS35L41_PWR_CTRL2, 0x00000000 }, 672 668 { CS35L41_AMP_GAIN_CTRL, 0x00000000 }, 669 + { CS35L41_ASP_TX3_SRC, 0x00000000 }, 670 + { CS35L41_ASP_TX4_SRC, 0x00000000 }, 673 671 }; 674 672 675 673 static const struct reg_sequence cs35l41_fs_errata_patch[] = {
+6 -6
sound/soc/codecs/cs35l41.c
··· 333 333 SOC_SINGLE("HW Noise Gate Enable", CS35L41_NG_CFG, 8, 63, 0), 334 334 SOC_SINGLE("HW Noise Gate Delay", CS35L41_NG_CFG, 4, 7, 0), 335 335 SOC_SINGLE("HW Noise Gate Threshold", CS35L41_NG_CFG, 0, 7, 0), 336 - SOC_SINGLE("Aux Noise Gate CH1 Enable", 336 + SOC_SINGLE("Aux Noise Gate CH1 Switch", 337 337 CS35L41_MIXER_NGATE_CH1_CFG, 16, 1, 0), 338 338 SOC_SINGLE("Aux Noise Gate CH1 Entry Delay", 339 339 CS35L41_MIXER_NGATE_CH1_CFG, 8, 15, 0), ··· 341 341 CS35L41_MIXER_NGATE_CH1_CFG, 0, 7, 0), 342 342 SOC_SINGLE("Aux Noise Gate CH2 Entry Delay", 343 343 CS35L41_MIXER_NGATE_CH2_CFG, 8, 15, 0), 344 - SOC_SINGLE("Aux Noise Gate CH2 Enable", 344 + SOC_SINGLE("Aux Noise Gate CH2 Switch", 345 345 CS35L41_MIXER_NGATE_CH2_CFG, 16, 1, 0), 346 346 SOC_SINGLE("Aux Noise Gate CH2 Threshold", 347 347 CS35L41_MIXER_NGATE_CH2_CFG, 0, 7, 0), 348 - SOC_SINGLE("SCLK Force", CS35L41_SP_FORMAT, CS35L41_SCLK_FRC_SHIFT, 1, 0), 349 - SOC_SINGLE("LRCLK Force", CS35L41_SP_FORMAT, CS35L41_LRCLK_FRC_SHIFT, 1, 0), 350 - SOC_SINGLE("Invert Class D", CS35L41_AMP_DIG_VOL_CTRL, 348 + SOC_SINGLE("SCLK Force Switch", CS35L41_SP_FORMAT, CS35L41_SCLK_FRC_SHIFT, 1, 0), 349 + SOC_SINGLE("LRCLK Force Switch", CS35L41_SP_FORMAT, CS35L41_LRCLK_FRC_SHIFT, 1, 0), 350 + SOC_SINGLE("Invert Class D Switch", CS35L41_AMP_DIG_VOL_CTRL, 351 351 CS35L41_AMP_INV_PCM_SHIFT, 1, 0), 352 - SOC_SINGLE("Amp Gain ZC", CS35L41_AMP_GAIN_CTRL, 352 + SOC_SINGLE("Amp Gain ZC Switch", CS35L41_AMP_GAIN_CTRL, 353 353 CS35L41_AMP_GAIN_ZC_SHIFT, 1, 0), 354 354 WM_ADSP2_PRELOAD_SWITCH("DSP1", 1), 355 355 WM_ADSP_FW_CONTROL("DSP1", 0),
+4 -1
sound/soc/codecs/cs47l15.c
··· 122 122 snd_soc_kcontrol_component(kcontrol); 123 123 struct cs47l15 *cs47l15 = snd_soc_component_get_drvdata(component); 124 124 125 + if (!!ucontrol->value.integer.value[0] == cs47l15->in1_lp_mode) 126 + return 0; 127 + 125 128 switch (ucontrol->value.integer.value[0]) { 126 129 case 0: 127 130 /* Set IN1 to normal mode */ ··· 153 150 break; 154 151 } 155 152 156 - return 0; 153 + return 1; 157 154 } 158 155 159 156 static const struct snd_kcontrol_new cs47l15_snd_controls[] = {
+10 -4
sound/soc/codecs/madera.c
··· 618 618 end: 619 619 snd_soc_dapm_mutex_unlock(dapm); 620 620 621 - return snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL); 621 + ret = snd_soc_dapm_mux_update_power(dapm, kcontrol, mux, e, NULL); 622 + if (ret < 0) { 623 + dev_err(madera->dev, "Failed to update demux power state: %d\n", ret); 624 + return ret; 625 + } 626 + 627 + return change; 622 628 } 623 629 EXPORT_SYMBOL_GPL(madera_out1_demux_put); 624 630 ··· 899 893 struct soc_enum *e = (struct soc_enum *)kcontrol->private_value; 900 894 const int adsp_num = e->shift_l; 901 895 const unsigned int item = ucontrol->value.enumerated.item[0]; 902 - int ret; 896 + int ret = 0; 903 897 904 898 if (item >= e->items) 905 899 return -EINVAL; ··· 916 910 "Cannot change '%s' while in use by active audio paths\n", 917 911 kcontrol->id.name); 918 912 ret = -EBUSY; 919 - } else { 913 + } else if (priv->adsp_rate_cache[adsp_num] != e->values[item]) { 920 914 /* Volatile register so defer until the codec is powered up */ 921 915 priv->adsp_rate_cache[adsp_num] = e->values[item]; 922 - ret = 0; 916 + ret = 1; 923 917 } 924 918 925 919 mutex_unlock(&priv->rate_lock);
+11 -1
sound/soc/codecs/max98373-sdw.c
··· 862 862 return max98373_init(slave, regmap); 863 863 } 864 864 865 + static int max98373_sdw_remove(struct sdw_slave *slave) 866 + { 867 + struct max98373_priv *max98373 = dev_get_drvdata(&slave->dev); 868 + 869 + if (max98373->first_hw_init) 870 + pm_runtime_disable(&slave->dev); 871 + 872 + return 0; 873 + } 874 + 865 875 #if defined(CONFIG_OF) 866 876 static const struct of_device_id max98373_of_match[] = { 867 877 { .compatible = "maxim,max98373", }, ··· 903 893 .pm = &max98373_pm, 904 894 }, 905 895 .probe = max98373_sdw_probe, 906 - .remove = NULL, 896 + .remove = max98373_sdw_remove, 907 897 .ops = &max98373_slave_ops, 908 898 .id_table = max98373_id, 909 899 };
+11
sound/soc/codecs/rt1308-sdw.c
··· 691 691 return 0; 692 692 } 693 693 694 + static int rt1308_sdw_remove(struct sdw_slave *slave) 695 + { 696 + struct rt1308_sdw_priv *rt1308 = dev_get_drvdata(&slave->dev); 697 + 698 + if (rt1308->first_hw_init) 699 + pm_runtime_disable(&slave->dev); 700 + 701 + return 0; 702 + } 703 + 694 704 static const struct sdw_device_id rt1308_id[] = { 695 705 SDW_SLAVE_ENTRY_EXT(0x025d, 0x1308, 0x2, 0, 0), 696 706 {}, ··· 760 750 .pm = &rt1308_pm, 761 751 }, 762 752 .probe = rt1308_sdw_probe, 753 + .remove = rt1308_sdw_remove, 763 754 .ops = &rt1308_slave_ops, 764 755 .id_table = rt1308_id, 765 756 };
+11
sound/soc/codecs/rt1316-sdw.c
··· 676 676 return rt1316_sdw_init(&slave->dev, regmap, slave); 677 677 } 678 678 679 + static int rt1316_sdw_remove(struct sdw_slave *slave) 680 + { 681 + struct rt1316_sdw_priv *rt1316 = dev_get_drvdata(&slave->dev); 682 + 683 + if (rt1316->first_hw_init) 684 + pm_runtime_disable(&slave->dev); 685 + 686 + return 0; 687 + } 688 + 679 689 static const struct sdw_device_id rt1316_id[] = { 680 690 SDW_SLAVE_ENTRY_EXT(0x025d, 0x1316, 0x3, 0x1, 0), 681 691 {}, ··· 745 735 .pm = &rt1316_pm, 746 736 }, 747 737 .probe = rt1316_sdw_probe, 738 + .remove = rt1316_sdw_remove, 748 739 .ops = &rt1316_slave_ops, 749 740 .id_table = rt1316_id, 750 741 };
+4 -1
sound/soc/codecs/rt5682-sdw.c
··· 719 719 { 720 720 struct rt5682_priv *rt5682 = dev_get_drvdata(&slave->dev); 721 721 722 - if (rt5682 && rt5682->hw_init) 722 + if (rt5682->hw_init) 723 723 cancel_delayed_work_sync(&rt5682->jack_detect_work); 724 + 725 + if (rt5682->first_hw_init) 726 + pm_runtime_disable(&slave->dev); 724 727 725 728 return 0; 726 729 }
+5 -1
sound/soc/codecs/rt700-sdw.c
··· 13 13 #include <linux/soundwire/sdw_type.h> 14 14 #include <linux/soundwire/sdw_registers.h> 15 15 #include <linux/module.h> 16 + #include <linux/pm_runtime.h> 16 17 #include <linux/regmap.h> 17 18 #include <sound/soc.h> 18 19 #include "rt700.h" ··· 464 463 { 465 464 struct rt700_priv *rt700 = dev_get_drvdata(&slave->dev); 466 465 467 - if (rt700 && rt700->hw_init) { 466 + if (rt700->hw_init) { 468 467 cancel_delayed_work_sync(&rt700->jack_detect_work); 469 468 cancel_delayed_work_sync(&rt700->jack_btn_check_work); 470 469 } 470 + 471 + if (rt700->first_hw_init) 472 + pm_runtime_disable(&slave->dev); 471 473 472 474 return 0; 473 475 }
+19 -11
sound/soc/codecs/rt700.c
··· 162 162 if (!rt700->hs_jack) 163 163 return; 164 164 165 - if (!rt700->component->card->instantiated) 165 + if (!rt700->component->card || !rt700->component->card->instantiated) 166 166 return; 167 167 168 168 reg = RT700_VERB_GET_PIN_SENSE | RT700_HP_OUT; ··· 315 315 struct snd_soc_jack *hs_jack, void *data) 316 316 { 317 317 struct rt700_priv *rt700 = snd_soc_component_get_drvdata(component); 318 + int ret; 318 319 319 320 rt700->hs_jack = hs_jack; 320 321 321 - if (!rt700->hw_init) { 322 - dev_dbg(&rt700->slave->dev, 323 - "%s hw_init not ready yet\n", __func__); 322 + ret = pm_runtime_resume_and_get(component->dev); 323 + if (ret < 0) { 324 + if (ret != -EACCES) { 325 + dev_err(component->dev, "%s: failed to resume %d\n", __func__, ret); 326 + return ret; 327 + } 328 + 329 + /* pm_runtime not enabled yet */ 330 + dev_dbg(component->dev, "%s: skipping jack init for now\n", __func__); 324 331 return 0; 325 332 } 326 333 327 334 rt700_jack_init(rt700); 335 + 336 + pm_runtime_mark_last_busy(component->dev); 337 + pm_runtime_put_autosuspend(component->dev); 328 338 329 339 return 0; 330 340 } ··· 1125 1115 1126 1116 mutex_init(&rt700->disable_irq_lock); 1127 1117 1118 + INIT_DELAYED_WORK(&rt700->jack_detect_work, 1119 + rt700_jack_detect_handler); 1120 + INIT_DELAYED_WORK(&rt700->jack_btn_check_work, 1121 + rt700_btn_check_handler); 1122 + 1128 1123 /* 1129 1124 * Mark hw_init to false 1130 1125 * HW init will be performed when device reports present ··· 1223 1208 1224 1209 /* Finish Initial Settings, set power to D3 */ 1225 1210 regmap_write(rt700->regmap, RT700_SET_AUDIO_POWER_STATE, AC_PWRST_D3); 1226 - 1227 - if (!rt700->first_hw_init) { 1228 - INIT_DELAYED_WORK(&rt700->jack_detect_work, 1229 - rt700_jack_detect_handler); 1230 - INIT_DELAYED_WORK(&rt700->jack_btn_check_work, 1231 - rt700_btn_check_handler); 1232 - } 1233 1211 1234 1212 /* 1235 1213 * if set_jack callback occurred early than io_init,
+8 -1
sound/soc/codecs/rt711-sdca-sdw.c
··· 11 11 #include <linux/mod_devicetable.h> 12 12 #include <linux/soundwire/sdw_registers.h> 13 13 #include <linux/module.h> 14 + #include <linux/pm_runtime.h> 14 15 15 16 #include "rt711-sdca.h" 16 17 #include "rt711-sdca-sdw.h" ··· 365 364 { 366 365 struct rt711_sdca_priv *rt711 = dev_get_drvdata(&slave->dev); 367 366 368 - if (rt711 && rt711->hw_init) { 367 + if (rt711->hw_init) { 369 368 cancel_delayed_work_sync(&rt711->jack_detect_work); 370 369 cancel_delayed_work_sync(&rt711->jack_btn_check_work); 371 370 } 371 + 372 + if (rt711->first_hw_init) 373 + pm_runtime_disable(&slave->dev); 374 + 375 + mutex_destroy(&rt711->calibrate_mutex); 376 + mutex_destroy(&rt711->disable_irq_lock); 372 377 373 378 return 0; 374 379 }
+21 -23
sound/soc/codecs/rt711-sdca.c
··· 34 34 35 35 ret = regmap_write(regmap, addr, value); 36 36 if (ret < 0) 37 - dev_err(rt711->component->dev, 37 + dev_err(&rt711->slave->dev, 38 38 "Failed to set private value: %06x <= %04x ret=%d\n", 39 39 addr, value, ret); 40 40 ··· 50 50 51 51 ret = regmap_read(regmap, addr, value); 52 52 if (ret < 0) 53 - dev_err(rt711->component->dev, 53 + dev_err(&rt711->slave->dev, 54 54 "Failed to get private value: %06x => %04x ret=%d\n", 55 55 addr, *value, ret); 56 56 ··· 294 294 if (!rt711->hs_jack) 295 295 return; 296 296 297 - if (!rt711->component->card->instantiated) 297 + if (!rt711->component->card || !rt711->component->card->instantiated) 298 298 return; 299 299 300 300 /* SDW_SCP_SDCA_INT_SDCA_0 is used for jack detection */ ··· 487 487 struct snd_soc_jack *hs_jack, void *data) 488 488 { 489 489 struct rt711_sdca_priv *rt711 = snd_soc_component_get_drvdata(component); 490 + int ret; 490 491 491 492 rt711->hs_jack = hs_jack; 492 493 493 - if (!rt711->hw_init) { 494 - dev_dbg(&rt711->slave->dev, 495 - "%s hw_init not ready yet\n", __func__); 494 + ret = pm_runtime_resume_and_get(component->dev); 495 + if (ret < 0) { 496 + if (ret != -EACCES) { 497 + dev_err(component->dev, "%s: failed to resume %d\n", __func__, ret); 498 + return ret; 499 + } 500 + 501 + /* pm_runtime not enabled yet */ 502 + dev_dbg(component->dev, "%s: skipping jack init for now\n", __func__); 496 503 return 0; 497 504 } 498 505 499 506 rt711_sdca_jack_init(rt711); 507 + 508 + pm_runtime_mark_last_busy(component->dev); 509 + pm_runtime_put_autosuspend(component->dev); 510 + 500 511 return 0; 501 512 } 502 513 ··· 1201 1190 return 0; 1202 1191 } 1203 1192 1204 - static void rt711_sdca_remove(struct snd_soc_component *component) 1205 - { 1206 - struct rt711_sdca_priv *rt711 = snd_soc_component_get_drvdata(component); 1207 - 1208 - regcache_cache_only(rt711->regmap, true); 1209 - regcache_cache_only(rt711->mbq_regmap, true); 1210 - } 1211 - 1212 1193 static const struct 
snd_soc_component_driver soc_sdca_dev_rt711 = { 1213 1194 .probe = rt711_sdca_probe, 1214 1195 .controls = rt711_sdca_snd_controls, ··· 1210 1207 .dapm_routes = rt711_sdca_audio_map, 1211 1208 .num_dapm_routes = ARRAY_SIZE(rt711_sdca_audio_map), 1212 1209 .set_jack = rt711_sdca_set_jack_detect, 1213 - .remove = rt711_sdca_remove, 1214 1210 .endianness = 1, 1215 1211 }; 1216 1212 ··· 1414 1412 rt711->regmap = regmap; 1415 1413 rt711->mbq_regmap = mbq_regmap; 1416 1414 1415 + mutex_init(&rt711->calibrate_mutex); 1417 1416 mutex_init(&rt711->disable_irq_lock); 1417 + 1418 + INIT_DELAYED_WORK(&rt711->jack_detect_work, rt711_sdca_jack_detect_handler); 1419 + INIT_DELAYED_WORK(&rt711->jack_btn_check_work, rt711_sdca_btn_check_handler); 1418 1420 1419 1421 /* 1420 1422 * Mark hw_init to false ··· 1550 1544 /* ge_exclusive_inbox_en disable */ 1551 1545 rt711_sdca_index_update_bits(rt711, RT711_VENDOR_HDA_CTL, 1552 1546 RT711_PUSH_BTN_INT_CTL0, 0x20, 0x00); 1553 - 1554 - if (!rt711->first_hw_init) { 1555 - INIT_DELAYED_WORK(&rt711->jack_detect_work, 1556 - rt711_sdca_jack_detect_handler); 1557 - INIT_DELAYED_WORK(&rt711->jack_btn_check_work, 1558 - rt711_sdca_btn_check_handler); 1559 - mutex_init(&rt711->calibrate_mutex); 1560 - } 1561 1547 1562 1548 /* calibration */ 1563 1549 ret = rt711_sdca_calibration(rt711);
+8 -1
sound/soc/codecs/rt711-sdw.c
··· 13 13 #include <linux/soundwire/sdw_type.h> 14 14 #include <linux/soundwire/sdw_registers.h> 15 15 #include <linux/module.h> 16 + #include <linux/pm_runtime.h> 16 17 #include <linux/regmap.h> 17 18 #include <sound/soc.h> 18 19 #include "rt711.h" ··· 465 464 { 466 465 struct rt711_priv *rt711 = dev_get_drvdata(&slave->dev); 467 466 468 - if (rt711 && rt711->hw_init) { 467 + if (rt711->hw_init) { 469 468 cancel_delayed_work_sync(&rt711->jack_detect_work); 470 469 cancel_delayed_work_sync(&rt711->jack_btn_check_work); 471 470 cancel_work_sync(&rt711->calibration_work); 472 471 } 472 + 473 + if (rt711->first_hw_init) 474 + pm_runtime_disable(&slave->dev); 475 + 476 + mutex_destroy(&rt711->calibrate_mutex); 477 + mutex_destroy(&rt711->disable_irq_lock); 473 478 474 479 return 0; 475 480 }
+20 -20
sound/soc/codecs/rt711.c
··· 242 242 if (!rt711->hs_jack) 243 243 return; 244 244 245 - if (!rt711->component->card->instantiated) 245 + if (!rt711->component->card || !rt711->component->card->instantiated) 246 246 return; 247 247 248 248 if (pm_runtime_status_suspended(rt711->slave->dev.parent)) { ··· 457 457 struct snd_soc_jack *hs_jack, void *data) 458 458 { 459 459 struct rt711_priv *rt711 = snd_soc_component_get_drvdata(component); 460 + int ret; 460 461 461 462 rt711->hs_jack = hs_jack; 462 463 463 - if (!rt711->hw_init) { 464 - dev_dbg(&rt711->slave->dev, 465 - "%s hw_init not ready yet\n", __func__); 464 + ret = pm_runtime_resume_and_get(component->dev); 465 + if (ret < 0) { 466 + if (ret != -EACCES) { 467 + dev_err(component->dev, "%s: failed to resume %d\n", __func__, ret); 468 + return ret; 469 + } 470 + 471 + /* pm_runtime not enabled yet */ 472 + dev_dbg(component->dev, "%s: skipping jack init for now\n", __func__); 466 473 return 0; 467 474 } 468 475 469 476 rt711_jack_init(rt711); 477 + 478 + pm_runtime_mark_last_busy(component->dev); 479 + pm_runtime_put_autosuspend(component->dev); 470 480 471 481 return 0; 472 482 } ··· 942 932 return 0; 943 933 } 944 934 945 - static void rt711_remove(struct snd_soc_component *component) 946 - { 947 - struct rt711_priv *rt711 = snd_soc_component_get_drvdata(component); 948 - 949 - regcache_cache_only(rt711->regmap, true); 950 - } 951 - 952 935 static const struct snd_soc_component_driver soc_codec_dev_rt711 = { 953 936 .probe = rt711_probe, 954 937 .set_bias_level = rt711_set_bias_level, ··· 952 949 .dapm_routes = rt711_audio_map, 953 950 .num_dapm_routes = ARRAY_SIZE(rt711_audio_map), 954 951 .set_jack = rt711_set_jack_detect, 955 - .remove = rt711_remove, 956 952 .endianness = 1, 957 953 }; 958 954 ··· 1206 1204 rt711->sdw_regmap = sdw_regmap; 1207 1205 rt711->regmap = regmap; 1208 1206 1207 + mutex_init(&rt711->calibrate_mutex); 1209 1208 mutex_init(&rt711->disable_irq_lock); 1209 + 1210 + INIT_DELAYED_WORK(&rt711->jack_detect_work, 
rt711_jack_detect_handler); 1211 + INIT_DELAYED_WORK(&rt711->jack_btn_check_work, rt711_btn_check_handler); 1212 + INIT_WORK(&rt711->calibration_work, rt711_calibration_work); 1210 1213 1211 1214 /* 1212 1215 * Mark hw_init to false ··· 1320 1313 1321 1314 if (rt711->first_hw_init) 1322 1315 rt711_calibration(rt711); 1323 - else { 1324 - INIT_DELAYED_WORK(&rt711->jack_detect_work, 1325 - rt711_jack_detect_handler); 1326 - INIT_DELAYED_WORK(&rt711->jack_btn_check_work, 1327 - rt711_btn_check_handler); 1328 - mutex_init(&rt711->calibrate_mutex); 1329 - INIT_WORK(&rt711->calibration_work, rt711_calibration_work); 1316 + else 1330 1317 schedule_work(&rt711->calibration_work); 1331 - } 1332 1318 1333 1319 /* 1334 1320 * if set_jack callback occurred early than io_init,
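The rt711 and rt711-sdca set_jack hunks above share one pattern: resume the device with `pm_runtime_resume_and_get()` before touching hardware, propagate real resume failures, but treat `-EACCES` (runtime PM not enabled yet) as "defer jack init, return success". A minimal userspace sketch of that branching, with hypothetical `fake_resume_and_get`/`set_jack` names standing in for the driver code:

```c
#include <assert.h>
#include <errno.h>

/*
 * Hypothetical stand-in: pm_runtime_resume_and_get() returns -EACCES
 * while runtime PM has not been enabled for the device yet.
 */
static int fake_resume_and_get(int pm_enabled)
{
	return pm_enabled ? 0 : -EACCES;
}

/*
 * Mirrors the branching added to rt711_set_jack_detect(): only a real
 * resume failure propagates; -EACCES means "not yet", so skip the
 * hardware init and report success.
 */
static int set_jack(int pm_enabled, int *jack_initialized)
{
	int ret = fake_resume_and_get(pm_enabled);

	if (ret < 0) {
		if (ret != -EACCES)
			return ret;	/* genuine resume failure */
		return 0;		/* pm_runtime not enabled yet: skip */
	}

	*jack_initialized = 1;		/* rt711_jack_init() would run here */
	return 0;
}
```

This replaces the old `hw_init` flag check: instead of peeking at driver state, the runtime-PM return code itself tells the callback whether the hardware is reachable.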
+12
sound/soc/codecs/rt715-sdca-sdw.c
··· 13 13 #include <linux/soundwire/sdw_type.h> 14 14 #include <linux/soundwire/sdw_registers.h> 15 15 #include <linux/module.h> 16 + #include <linux/pm_runtime.h> 16 17 #include <linux/regmap.h> 17 18 #include <sound/soc.h> 18 19 #include "rt715-sdca.h" ··· 194 193 return rt715_sdca_init(&slave->dev, mbq_regmap, regmap, slave); 195 194 } 196 195 196 + static int rt715_sdca_sdw_remove(struct sdw_slave *slave) 197 + { 198 + struct rt715_sdca_priv *rt715 = dev_get_drvdata(&slave->dev); 199 + 200 + if (rt715->first_hw_init) 201 + pm_runtime_disable(&slave->dev); 202 + 203 + return 0; 204 + } 205 + 197 206 static const struct sdw_device_id rt715_sdca_id[] = { 198 207 SDW_SLAVE_ENTRY_EXT(0x025d, 0x715, 0x3, 0x1, 0), 199 208 SDW_SLAVE_ENTRY_EXT(0x025d, 0x714, 0x3, 0x1, 0), ··· 278 267 .pm = &rt715_pm, 279 268 }, 280 269 .probe = rt715_sdca_sdw_probe, 270 + .remove = rt715_sdca_sdw_remove, 281 271 .ops = &rt715_sdca_slave_ops, 282 272 .id_table = rt715_sdca_id, 283 273 };
+12
sound/soc/codecs/rt715-sdw.c
··· 14 14 #include <linux/soundwire/sdw_type.h> 15 15 #include <linux/soundwire/sdw_registers.h> 16 16 #include <linux/module.h> 17 + #include <linux/pm_runtime.h> 17 18 #include <linux/of.h> 18 19 #include <linux/regmap.h> 19 20 #include <sound/soc.h> ··· 515 514 return 0; 516 515 } 517 516 517 + static int rt715_sdw_remove(struct sdw_slave *slave) 518 + { 519 + struct rt715_priv *rt715 = dev_get_drvdata(&slave->dev); 520 + 521 + if (rt715->first_hw_init) 522 + pm_runtime_disable(&slave->dev); 523 + 524 + return 0; 525 + } 526 + 518 527 static const struct sdw_device_id rt715_id[] = { 519 528 SDW_SLAVE_ENTRY_EXT(0x025d, 0x714, 0x2, 0, 0), 520 529 SDW_SLAVE_ENTRY_EXT(0x025d, 0x715, 0x2, 0, 0), ··· 586 575 .pm = &rt715_pm, 587 576 }, 588 577 .probe = rt715_sdw_probe, 578 + .remove = rt715_sdw_remove, 589 579 .ops = &rt715_slave_ops, 590 580 .id_table = rt715_id, 591 581 };
+7 -1
sound/soc/codecs/wcd9335.c
··· 1287 1287 struct snd_soc_dapm_update *update = NULL; 1288 1288 u32 port_id = w->shift; 1289 1289 1290 + if (wcd->rx_port_value[port_id] == ucontrol->value.enumerated.item[0]) 1291 + return 0; 1292 + 1290 1293 wcd->rx_port_value[port_id] = ucontrol->value.enumerated.item[0]; 1294 + 1295 + /* Remove channel from any list it's in before adding it to a new one */ 1296 + list_del_init(&wcd->rx_chs[port_id].list); 1291 1297 1292 1298 switch (wcd->rx_port_value[port_id]) { 1293 1299 case 0: 1294 - list_del_init(&wcd->rx_chs[port_id].list); 1300 + /* Channel already removed from lists. Nothing to do here */ 1295 1301 break; 1296 1302 case 1: 1297 1303 list_add_tail(&wcd->rx_chs[port_id].list,
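The wcd9335 fix relies on a property of the kernel's list API: `list_del_init()` leaves the node as a valid empty list, so calling it unconditionally on a channel that may or may not be on a mux list is safe before re-adding the channel elsewhere. A minimal userspace re-implementation of just the primitives involved (not the kernel headers) illustrates the move-between-lists pattern:

```c
#include <assert.h>

/* Minimal userspace version of the kernel list primitives the hunk uses. */
struct list_head {
	struct list_head *next, *prev;
};

static void INIT_LIST_HEAD(struct list_head *h)
{
	h->next = h;
	h->prev = h;
}

static void list_add_tail(struct list_head *n, struct list_head *head)
{
	n->prev = head->prev;
	n->next = head;
	head->prev->next = n;
	head->prev = n;
}

/*
 * Key property: after unlinking, the node is re-initialized as an empty
 * list, so list_del_init() is idempotent on a detached node - unlike
 * plain list_del(), which would leave dangling pointers.
 */
static void list_del_init(struct list_head *n)
{
	n->prev->next = n->next;
	n->next->prev = n->prev;
	INIT_LIST_HEAD(n);
}

static int list_empty(const struct list_head *h)
{
	return h->next == h;
}
```

This is why the patch can hoist the delete above the `switch`: removing the channel first is correct regardless of which list (if any) it was on.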
+12
sound/soc/codecs/wcd938x.c
··· 2519 2519 struct soc_enum *e = (struct soc_enum *)kcontrol->private_value; 2520 2520 int path = e->shift_l; 2521 2521 2522 + if (wcd938x->tx_mode[path] == ucontrol->value.enumerated.item[0]) 2523 + return 0; 2524 + 2522 2525 wcd938x->tx_mode[path] = ucontrol->value.enumerated.item[0]; 2523 2526 2524 2527 return 1; ··· 2543 2540 { 2544 2541 struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol); 2545 2542 struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component); 2543 + 2544 + if (wcd938x->hph_mode == ucontrol->value.enumerated.item[0]) 2545 + return 0; 2546 2546 2547 2547 wcd938x->hph_mode = ucontrol->value.enumerated.item[0]; 2548 2548 ··· 2638 2632 struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol); 2639 2633 struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component); 2640 2634 2635 + if (wcd938x->ldoh == ucontrol->value.integer.value[0]) 2636 + return 0; 2637 + 2641 2638 wcd938x->ldoh = ucontrol->value.integer.value[0]; 2642 2639 2643 2640 return 1; ··· 2662 2653 { 2663 2654 struct snd_soc_component *component = snd_soc_kcontrol_component(kcontrol); 2664 2655 struct wcd938x_priv *wcd938x = snd_soc_component_get_drvdata(component); 2656 + 2657 + if (wcd938x->bcs_dis == ucontrol->value.integer.value[0]) 2658 + return 0; 2665 2659 2666 2660 wcd938x->bcs_dis = ucontrol->value.integer.value[0]; 2667 2661
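All four wcd938x hunks apply the same ALSA kcontrol convention: a put callback returns 0 when the written value equals the current one and 1 when it actually changed state, so spurious control-change notifications are suppressed. A userspace sketch of that early-return shape, with a hypothetical `set_mode` setter standing in for the driver callbacks:

```c
#include <assert.h>

/* Hypothetical stand-in for the driver's private state. */
struct codec_state {
	int tx_mode;
};

/*
 * Mirrors the kcontrol put convention the patch enforces:
 * return 0 when nothing changed, 1 when the value was updated
 * (which is what triggers the ALSA control notification).
 */
static int set_mode(struct codec_state *st, int new_mode)
{
	if (st->tx_mode == new_mode)
		return 0;	/* no change: skip the notification */

	st->tx_mode = new_mode;
	return 1;		/* changed: notify listeners */
}
```

The same convention is what the wm_adsp one-liner further down fixes from the other direction: that callback really does change state, so it must return 1, not 0.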
+6 -2
sound/soc/codecs/wm5110.c
··· 413 413 unsigned int rnew = (!!ucontrol->value.integer.value[1]) << mc->rshift; 414 414 unsigned int lold, rold; 415 415 unsigned int lena, rena; 416 + bool change = false; 416 417 int ret; 417 418 418 419 snd_soc_dapm_mutex_lock(dapm); ··· 441 440 goto err; 442 441 } 443 442 444 - ret = regmap_update_bits(arizona->regmap, ARIZONA_DRE_ENABLE, 445 - mask, lnew | rnew); 443 + ret = regmap_update_bits_check(arizona->regmap, ARIZONA_DRE_ENABLE, 444 + mask, lnew | rnew, &change); 446 445 if (ret) { 447 446 dev_err(arizona->dev, "Failed to set DRE: %d\n", ret); 448 447 goto err; ··· 454 453 455 454 if (!rnew && rold) 456 455 wm5110_clear_pga_volume(arizona, mc->rshift); 456 + 457 + if (change) 458 + ret = 1; 457 459 458 460 err: 459 461 snd_soc_dapm_mutex_unlock(dapm);
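The wm5110 hunk swaps `regmap_update_bits()` for `regmap_update_bits_check()` so the control can learn whether the register write was a real change and report `1` back to ALSA. A userspace sketch of the read-modify-write-with-change-flag it performs (a plain variable stands in for the regmap, and locking is omitted):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch of what regmap_update_bits_check() does for the DRE control
 * above: apply `val` under `mask`, and report via *change whether the
 * stored value was actually modified.
 */
static int update_bits_check(uint32_t *reg, uint32_t mask, uint32_t val,
			     bool *change)
{
	uint32_t old = *reg;
	uint32_t new = (old & ~mask) | (val & mask);

	*change = new != old;
	*reg = new;
	return 0;	/* a real regmap write could also fail here */
}
```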
+1 -1
sound/soc/codecs/wm_adsp.c
··· 997 997 snd_soc_dapm_sync(dapm); 998 998 } 999 999 1000 - return 0; 1000 + return 1; 1001 1001 } 1002 1002 EXPORT_SYMBOL_GPL(wm_adsp2_preloader_put); 1003 1003
+2 -2
sound/soc/intel/avs/topology.c
··· 128 128 static int 129 129 avs_parse_uuid_token(struct snd_soc_component *comp, void *elem, void *object, u32 offset) 130 130 { 131 - struct snd_soc_tplg_vendor_value_elem *tuple = elem; 131 + struct snd_soc_tplg_vendor_uuid_elem *tuple = elem; 132 132 guid_t *val = (guid_t *)((u8 *)object + offset); 133 133 134 - guid_copy((guid_t *)val, (const guid_t *)&tuple->value); 134 + guid_copy((guid_t *)val, (const guid_t *)&tuple->uuid); 135 135 136 136 return 0; 137 137 }
+11 -2
sound/soc/intel/boards/bytcr_wm5102.c
··· 421 421 priv->spkvdd_en_gpio = gpiod_get(codec_dev, "wlf,spkvdd-ena", GPIOD_OUT_LOW); 422 422 put_device(codec_dev); 423 423 424 - if (IS_ERR(priv->spkvdd_en_gpio)) 425 - return dev_err_probe(dev, PTR_ERR(priv->spkvdd_en_gpio), "getting spkvdd-GPIO\n"); 424 + if (IS_ERR(priv->spkvdd_en_gpio)) { 425 + ret = PTR_ERR(priv->spkvdd_en_gpio); 426 + /* 427 + * The spkvdd gpio-lookup is registered by: drivers/mfd/arizona-spi.c, 428 + * so -ENOENT means that arizona-spi hasn't probed yet. 429 + */ 430 + if (ret == -ENOENT) 431 + ret = -EPROBE_DEFER; 432 + 433 + return dev_err_probe(dev, ret, "getting spkvdd-GPIO\n"); 434 + } 426 435 427 436 /* override platform name, if required */ 428 437 byt_wm5102_card.dev = dev;
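The bytcr_wm5102 change remaps `-ENOENT` to `-EPROBE_DEFER` because the GPIO lookup table only exists once arizona-spi has probed: at this point "not found" means "not yet", and deferring makes the driver core retry the probe later. The remapping itself, sketched standalone (`EPROBE_DEFER` is the kernel's value 517, defined here since it is not in userspace errno.h):

```c
#include <assert.h>
#include <errno.h>

#define EPROBE_DEFER	517	/* kernel-internal errno, not in userspace */

/*
 * Mirrors the fix: -ENOENT from the GPIO lookup means the provider
 * (arizona-spi) has not registered the lookup table yet, so convert it
 * into a probe deferral; any other error passes through unchanged.
 */
static int remap_gpio_err(int err)
{
	if (err == -ENOENT)
		return -EPROBE_DEFER;
	return err;
}
```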
+29 -22
sound/soc/intel/boards/sof_sdw.c
··· 1398 1398 .late_probe = sof_sdw_card_late_probe, 1399 1399 }; 1400 1400 1401 + static void mc_dailink_exit_loop(struct snd_soc_card *card) 1402 + { 1403 + struct snd_soc_dai_link *link; 1404 + int ret; 1405 + int i, j; 1406 + 1407 + for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) { 1408 + if (!codec_info_list[i].exit) 1409 + continue; 1410 + /* 1411 + * We don't need to call .exit function if there is no matched 1412 + * dai link found. 1413 + */ 1414 + for_each_card_prelinks(card, j, link) { 1415 + if (!strcmp(link->codecs[0].dai_name, 1416 + codec_info_list[i].dai_name)) { 1417 + ret = codec_info_list[i].exit(card, link); 1418 + if (ret) 1419 + dev_warn(card->dev, 1420 + "codec exit failed %d\n", 1421 + ret); 1422 + break; 1423 + } 1424 + } 1425 + } 1426 + } 1427 + 1401 1428 static int mc_probe(struct platform_device *pdev) 1402 1429 { 1403 1430 struct snd_soc_card *card = &card_sof_sdw; ··· 1489 1462 ret = devm_snd_soc_register_card(&pdev->dev, card); 1490 1463 if (ret) { 1491 1464 dev_err(card->dev, "snd_soc_register_card failed %d\n", ret); 1465 + mc_dailink_exit_loop(card); 1492 1466 return ret; 1493 1467 } 1494 1468 ··· 1501 1473 static int mc_remove(struct platform_device *pdev) 1502 1474 { 1503 1475 struct snd_soc_card *card = platform_get_drvdata(pdev); 1504 - struct snd_soc_dai_link *link; 1505 - int ret; 1506 - int i, j; 1507 1476 1508 - for (i = 0; i < ARRAY_SIZE(codec_info_list); i++) { 1509 - if (!codec_info_list[i].exit) 1510 - continue; 1511 - /* 1512 - * We don't need to call .exit function if there is no matched 1513 - * dai link found. 1514 - */ 1515 - for_each_card_prelinks(card, j, link) { 1516 - if (!strcmp(link->codecs[0].dai_name, 1517 - codec_info_list[i].dai_name)) { 1518 - ret = codec_info_list[i].exit(card, link); 1519 - if (ret) 1520 - dev_warn(&pdev->dev, 1521 - "codec exit failed %d\n", 1522 - ret); 1523 - break; 1524 - } 1525 - } 1526 - } 1477 + mc_dailink_exit_loop(card); 1527 1478 1528 1479 return 0; 1529 1480 }
+6
sound/soc/qcom/qdsp6/q6apm-dai.c
··· 147 147 cfg.num_channels = runtime->channels; 148 148 cfg.bit_width = prtd->bits_per_sample; 149 149 150 + if (prtd->state) { 151 + /* clear the previous setup if any */ 152 + q6apm_graph_stop(prtd->graph); 153 + q6apm_unmap_memory_regions(prtd->graph, substream->stream); 154 + } 155 + 150 156 prtd->pcm_count = snd_pcm_lib_period_bytes(substream); 151 157 prtd->pos = 0; 152 158 /* rate and channels are sent to audio driver */
+129 -31
sound/soc/rockchip/rockchip_i2s.c
··· 13 13 #include <linux/of_gpio.h> 14 14 #include <linux/of_device.h> 15 15 #include <linux/clk.h> 16 + #include <linux/pinctrl/consumer.h> 16 17 #include <linux/pm_runtime.h> 17 18 #include <linux/regmap.h> 18 19 #include <linux/spinlock.h> ··· 55 54 const struct rk_i2s_pins *pins; 56 55 unsigned int bclk_ratio; 57 56 spinlock_t lock; /* tx/rx lock */ 57 + struct pinctrl *pinctrl; 58 + struct pinctrl_state *bclk_on; 59 + struct pinctrl_state *bclk_off; 58 60 }; 61 + 62 + static int i2s_pinctrl_select_bclk_on(struct rk_i2s_dev *i2s) 63 + { 64 + int ret = 0; 65 + 66 + if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_on)) 67 + ret = pinctrl_select_state(i2s->pinctrl, 68 + i2s->bclk_on); 69 + 70 + if (ret) 71 + dev_err(i2s->dev, "bclk enable failed %d\n", ret); 72 + 73 + return ret; 74 + } 75 + 76 + static int i2s_pinctrl_select_bclk_off(struct rk_i2s_dev *i2s) 77 + { 78 + 79 + int ret = 0; 80 + 81 + if (!IS_ERR(i2s->pinctrl) && !IS_ERR_OR_NULL(i2s->bclk_off)) 82 + ret = pinctrl_select_state(i2s->pinctrl, 83 + i2s->bclk_off); 84 + 85 + if (ret) 86 + dev_err(i2s->dev, "bclk disable failed %d\n", ret); 87 + 88 + return ret; 89 + } 59 90 60 91 static int i2s_runtime_suspend(struct device *dev) 61 92 { ··· 125 92 return snd_soc_dai_get_drvdata(dai); 126 93 } 127 94 128 - static void rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on) 95 + static int rockchip_snd_txctrl(struct rk_i2s_dev *i2s, int on) 129 96 { 130 97 unsigned int val = 0; 131 98 int retry = 10; 99 + int ret = 0; 132 100 133 101 spin_lock(&i2s->lock); 134 102 if (on) { 135 - regmap_update_bits(i2s->regmap, I2S_DMACR, 136 - I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE); 103 + ret = regmap_update_bits(i2s->regmap, I2S_DMACR, 104 + I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_ENABLE); 105 + if (ret < 0) 106 + goto end; 137 107 138 - regmap_update_bits(i2s->regmap, I2S_XFER, 139 - I2S_XFER_TXS_START | I2S_XFER_RXS_START, 140 - I2S_XFER_TXS_START | I2S_XFER_RXS_START); 108 + ret = regmap_update_bits(i2s->regmap, 
I2S_XFER, 109 + I2S_XFER_TXS_START | I2S_XFER_RXS_START, 110 + I2S_XFER_TXS_START | I2S_XFER_RXS_START); 111 + if (ret < 0) 112 + goto end; 141 113 142 114 i2s->tx_start = true; 143 115 } else { 144 116 i2s->tx_start = false; 145 117 146 - regmap_update_bits(i2s->regmap, I2S_DMACR, 147 - I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE); 118 + ret = regmap_update_bits(i2s->regmap, I2S_DMACR, 119 + I2S_DMACR_TDE_ENABLE, I2S_DMACR_TDE_DISABLE); 120 + if (ret < 0) 121 + goto end; 148 122 149 123 if (!i2s->rx_start) { 150 - regmap_update_bits(i2s->regmap, I2S_XFER, 151 - I2S_XFER_TXS_START | 152 - I2S_XFER_RXS_START, 153 - I2S_XFER_TXS_STOP | 154 - I2S_XFER_RXS_STOP); 124 + ret = regmap_update_bits(i2s->regmap, I2S_XFER, 125 + I2S_XFER_TXS_START | 126 + I2S_XFER_RXS_START, 127 + I2S_XFER_TXS_STOP | 128 + I2S_XFER_RXS_STOP); 129 + if (ret < 0) 130 + goto end; 155 131 156 132 udelay(150); 157 - regmap_update_bits(i2s->regmap, I2S_CLR, 158 - I2S_CLR_TXC | I2S_CLR_RXC, 159 - I2S_CLR_TXC | I2S_CLR_RXC); 133 + ret = regmap_update_bits(i2s->regmap, I2S_CLR, 134 + I2S_CLR_TXC | I2S_CLR_RXC, 135 + I2S_CLR_TXC | I2S_CLR_RXC); 136 + if (ret < 0) 137 + goto end; 160 138 161 139 regmap_read(i2s->regmap, I2S_CLR, &val); 162 140 ··· 182 138 } 183 139 } 184 140 } 141 + end: 185 142 spin_unlock(&i2s->lock); 143 + if (ret < 0) 144 + dev_err(i2s->dev, "lrclk update failed\n"); 145 + 146 + return ret; 186 147 } 187 148 188 - static void rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on) 149 + static int rockchip_snd_rxctrl(struct rk_i2s_dev *i2s, int on) 189 150 { 190 151 unsigned int val = 0; 191 152 int retry = 10; 153 + int ret = 0; 192 154 193 155 spin_lock(&i2s->lock); 194 156 if (on) { 195 - regmap_update_bits(i2s->regmap, I2S_DMACR, 157 + ret = regmap_update_bits(i2s->regmap, I2S_DMACR, 196 158 I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_ENABLE); 159 + if (ret < 0) 160 + goto end; 197 161 198 - regmap_update_bits(i2s->regmap, I2S_XFER, 162 + ret = regmap_update_bits(i2s->regmap, I2S_XFER, 199 
163 I2S_XFER_TXS_START | I2S_XFER_RXS_START, 200 164 I2S_XFER_TXS_START | I2S_XFER_RXS_START); 165 + if (ret < 0) 166 + goto end; 201 167 202 168 i2s->rx_start = true; 203 169 } else { 204 170 i2s->rx_start = false; 205 171 206 - regmap_update_bits(i2s->regmap, I2S_DMACR, 172 + ret = regmap_update_bits(i2s->regmap, I2S_DMACR, 207 173 I2S_DMACR_RDE_ENABLE, I2S_DMACR_RDE_DISABLE); 174 + if (ret < 0) 175 + goto end; 208 176 209 177 if (!i2s->tx_start) { 210 - regmap_update_bits(i2s->regmap, I2S_XFER, 178 + ret = regmap_update_bits(i2s->regmap, I2S_XFER, 211 179 I2S_XFER_TXS_START | 212 180 I2S_XFER_RXS_START, 213 181 I2S_XFER_TXS_STOP | 214 182 I2S_XFER_RXS_STOP); 215 - 183 + if (ret < 0) 184 + goto end; 216 185 udelay(150); 217 - regmap_update_bits(i2s->regmap, I2S_CLR, 186 + ret = regmap_update_bits(i2s->regmap, I2S_CLR, 218 187 I2S_CLR_TXC | I2S_CLR_RXC, 219 188 I2S_CLR_TXC | I2S_CLR_RXC); 220 - 189 + if (ret < 0) 190 + goto end; 221 191 regmap_read(i2s->regmap, I2S_CLR, &val); 222 - 223 192 /* Should wait for clear operation to finish */ 224 193 while (val) { 225 194 regmap_read(i2s->regmap, I2S_CLR, &val); ··· 244 187 } 245 188 } 246 189 } 190 + end: 247 191 spin_unlock(&i2s->lock); 192 + if (ret < 0) 193 + dev_err(i2s->dev, "lrclk update failed\n"); 194 + 195 + return ret; 248 196 } 249 197 250 198 static int rockchip_i2s_set_fmt(struct snd_soc_dai *cpu_dai, ··· 487 425 case SNDRV_PCM_TRIGGER_RESUME: 488 426 case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: 489 427 if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) 490 - rockchip_snd_rxctrl(i2s, 1); 428 + ret = rockchip_snd_rxctrl(i2s, 1); 491 429 else 492 - rockchip_snd_txctrl(i2s, 1); 430 + ret = rockchip_snd_txctrl(i2s, 1); 431 + /* Do not turn on bclk if lrclk open fails. 
*/ 432 + if (ret < 0) 433 + return ret; 434 + i2s_pinctrl_select_bclk_on(i2s); 493 435 break; 494 436 case SNDRV_PCM_TRIGGER_SUSPEND: 495 437 case SNDRV_PCM_TRIGGER_STOP: 496 438 case SNDRV_PCM_TRIGGER_PAUSE_PUSH: 497 - if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) 498 - rockchip_snd_rxctrl(i2s, 0); 499 - else 500 - rockchip_snd_txctrl(i2s, 0); 439 + if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) { 440 + if (!i2s->tx_start) 441 + i2s_pinctrl_select_bclk_off(i2s); 442 + ret = rockchip_snd_rxctrl(i2s, 0); 443 + } else { 444 + if (!i2s->rx_start) 445 + i2s_pinctrl_select_bclk_off(i2s); 446 + ret = rockchip_snd_txctrl(i2s, 0); 447 + } 501 448 break; 502 449 default: 503 450 ret = -EINVAL; ··· 807 736 } 808 737 809 738 i2s->bclk_ratio = 64; 739 + i2s->pinctrl = devm_pinctrl_get(&pdev->dev); 740 + if (IS_ERR(i2s->pinctrl)) 741 + dev_err(&pdev->dev, "failed to find i2s pinctrl\n"); 742 + 743 + i2s->bclk_on = pinctrl_lookup_state(i2s->pinctrl, 744 + "bclk_on"); 745 + if (IS_ERR_OR_NULL(i2s->bclk_on)) 746 + dev_err(&pdev->dev, "failed to find i2s default state\n"); 747 + else 748 + dev_dbg(&pdev->dev, "find i2s bclk state\n"); 749 + 750 + i2s->bclk_off = pinctrl_lookup_state(i2s->pinctrl, 751 + "bclk_off"); 752 + if (IS_ERR_OR_NULL(i2s->bclk_off)) 753 + dev_err(&pdev->dev, "failed to find i2s gpio state\n"); 754 + else 755 + dev_dbg(&pdev->dev, "find i2s bclk_off state\n"); 756 + 757 + i2s_pinctrl_select_bclk_off(i2s); 758 + 759 + i2s->playback_dma_data.addr = res->start + I2S_TXDR; 760 + i2s->playback_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 761 + i2s->playback_dma_data.maxburst = 4; 762 + 763 + i2s->capture_dma_data.addr = res->start + I2S_RXDR; 764 + i2s->capture_dma_data.addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES; 765 + i2s->capture_dma_data.maxburst = 4; 810 766 811 767 dev_set_drvdata(&pdev->dev, i2s); 812 768
+5
sound/soc/soc-dapm.c
··· 62 62 snd_soc_dapm_new_control_unlocked(struct snd_soc_dapm_context *dapm, 63 63 const struct snd_soc_dapm_widget *widget); 64 64 65 + static unsigned int soc_dapm_read(struct snd_soc_dapm_context *dapm, int reg); 66 + 65 67 /* dapm power sequences - make this per codec in the future */ 66 68 static int dapm_up_seq[] = { 67 69 [snd_soc_dapm_pre] = 1, ··· 444 442 445 443 snd_soc_dapm_add_path(widget->dapm, data->widget, 446 444 widget, NULL, NULL); 445 + } else if (e->reg != SND_SOC_NOPM) { 446 + data->value = soc_dapm_read(widget->dapm, e->reg) & 447 + (e->mask << e->shift_l); 447 448 } 448 449 break; 449 450 default:
+2 -2
sound/soc/soc-ops.c
··· 526 526 return -EINVAL; 527 527 if (mc->platform_max && tmp > mc->platform_max) 528 528 return -EINVAL; 529 - if (tmp > mc->max - mc->min + 1) 529 + if (tmp > mc->max - mc->min) 530 530 return -EINVAL; 531 531 532 532 if (invert) ··· 547 547 return -EINVAL; 548 548 if (mc->platform_max && tmp > mc->platform_max) 549 549 return -EINVAL; 550 - if (tmp > mc->max - mc->min + 1) 550 + if (tmp > mc->max - mc->min) 551 551 return -EINVAL; 552 552 553 553 if (invert)
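The soc-ops change fixes an off-by-one in range validation: a control spanning `min..max` accepts offsets `0..max - min` inclusive, so the old `tmp > mc->max - mc->min + 1` bound let one out-of-range value through. A standalone sketch of the corrected check (the struct fields mirror `struct soc_mixer_control`, but this is not the kernel code; `-1` stands in for `-EINVAL`):

```c
#include <assert.h>

/* Hypothetical mixer-control bounds, modeled on struct soc_mixer_control. */
struct mixer_ctl {
	int min;
	int max;
	int platform_max;
};

/*
 * Validate a userspace value expressed as an offset from ctl->min.
 * Valid offsets run 0 .. (max - min) inclusive; the fixed comparison
 * rejects (max - min) + 1, which the old "+ 1" bound accepted.
 */
static int check_value(const struct mixer_ctl *ctl, long tmp)
{
	if (tmp < 0)
		return -1;
	if (ctl->platform_max && tmp > ctl->platform_max)
		return -1;
	if (tmp > ctl->max - ctl->min)	/* was: max - min + 1 */
		return -1;
	return 0;
}
```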
+9 -1
sound/soc/sof/intel/hda-dsp.c
··· 181 181 * Power Management. 182 182 */ 183 183 184 - static int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask) 184 + int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask) 185 185 { 186 + struct sof_intel_hda_dev *hda = sdev->pdata->hw_pdata; 187 + const struct sof_intel_dsp_desc *chip = hda->desc; 186 188 unsigned int cpa; 187 189 u32 adspcs; 188 190 int ret; 191 + 192 + /* restrict core_mask to host managed cores mask */ 193 + core_mask &= chip->host_managed_cores_mask; 194 + /* return if core_mask is not valid */ 195 + if (!core_mask) 196 + return 0; 189 197 190 198 /* update bits */ 191 199 snd_sof_dsp_update_bits(sdev, HDA_DSP_BAR, HDA_DSP_REG_ADSPCS,
+7 -6
sound/soc/sof/intel/hda-loader.c
··· 95 95 } 96 96 97 97 /* 98 - * first boot sequence has some extra steps. core 0 waits for power 99 - * status on core 1, so power up core 1 also momentarily, keep it in 100 - * reset/stall and then turn it off 98 + * first boot sequence has some extra steps. 99 + * power on all host managed cores and only unstall/run the boot core to boot the 100 + * DSP, then turn off any non-boot cores that were powered on. 101 101 */ 102 102 static int cl_dsp_init(struct snd_sof_dev *sdev, int stream_tag, bool imr_boot) 103 103 { ··· 110 110 int ret; 111 111 112 112 /* step 1: power up corex */ 113 - ret = hda_dsp_enable_core(sdev, chip->host_managed_cores_mask); 113 + ret = hda_dsp_core_power_up(sdev, chip->host_managed_cores_mask); 114 114 if (ret < 0) { 115 115 if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS) 116 116 dev_err(sdev->dev, "error: dsp core 0/1 power up failed\n"); ··· 127 127 snd_sof_dsp_write(sdev, HDA_DSP_BAR, chip->ipc_req, ipc_hdr); 128 128 129 129 /* step 3: unset core 0 reset state & unstall/run core 0 */ 130 - ret = hda_dsp_core_run(sdev, BIT(0)); 130 + ret = hda_dsp_core_run(sdev, chip->init_core_mask); 131 131 if (ret < 0) { 132 132 if (hda->boot_iteration == HDA_FW_BOOT_ATTEMPTS) 133 133 dev_err(sdev->dev, ··· 389 389 struct snd_dma_buffer dmab; 390 390 int ret, ret1, i; 391 391 392 - if (hda->imrboot_supported && !sdev->first_boot) { 392 + if (sdev->system_suspend_target < SOF_SUSPEND_S4 && 393 + hda->imrboot_supported && !sdev->first_boot) { 393 394 dev_dbg(sdev->dev, "IMR restore supported, booting from IMR directly\n"); 394 395 hda->boot_iteration = 0; 395 396 ret = hda_dsp_boot_imr(sdev);
+1 -73
sound/soc/sof/intel/hda-pcm.c
··· 192 192 goto found; 193 193 } 194 194 195 - switch (sof_hda_position_quirk) { 196 - case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY: 197 - /* 198 - * This legacy code, inherited from the Skylake driver, 199 - * mixes DPIB registers and DPIB DDR updates and 200 - * does not seem to follow any known hardware recommendations. 201 - * It's not clear e.g. why there is a different flow 202 - * for capture and playback, the only information that matters is 203 - * what traffic class is used, and on all SOF-enabled platforms 204 - * only VC0 is supported so the work-around was likely not necessary 205 - * and quite possibly wrong. 206 - */ 207 - 208 - /* DPIB/posbuf position mode: 209 - * For Playback, Use DPIB register from HDA space which 210 - * reflects the actual data transferred. 211 - * For Capture, Use the position buffer for pointer, as DPIB 212 - * is not accurate enough, its update may be completed 213 - * earlier than the data written to DDR. 214 - */ 215 - if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) { 216 - pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 217 - AZX_REG_VS_SDXDPIB_XBASE + 218 - (AZX_REG_VS_SDXDPIB_XINTERVAL * 219 - hstream->index)); 220 - } else { 221 - /* 222 - * For capture stream, we need more workaround to fix the 223 - * position incorrect issue: 224 - * 225 - * 1. Wait at least 20us before reading position buffer after 226 - * the interrupt generated(IOC), to make sure position update 227 - * happens on frame boundary i.e. 20.833uSec for 48KHz. 228 - * 2. Perform a dummy Read to DPIB register to flush DMA 229 - * position value. 230 - * 3. Read the DMA Position from posbuf. Now the readback 231 - * value should be >= period boundary. 
232 - */ 233 - usleep_range(20, 21); 234 - snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 235 - AZX_REG_VS_SDXDPIB_XBASE + 236 - (AZX_REG_VS_SDXDPIB_XINTERVAL * 237 - hstream->index)); 238 - pos = snd_hdac_stream_get_pos_posbuf(hstream); 239 - } 240 - break; 241 - case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS: 242 - /* 243 - * In case VC1 traffic is disabled this is the recommended option 244 - */ 245 - pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 246 - AZX_REG_VS_SDXDPIB_XBASE + 247 - (AZX_REG_VS_SDXDPIB_XINTERVAL * 248 - hstream->index)); 249 - break; 250 - case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE: 251 - /* 252 - * This is the recommended option when VC1 is enabled. 253 - * While this isn't needed for SOF platforms it's added for 254 - * consistency and debug. 255 - */ 256 - pos = snd_hdac_stream_get_pos_posbuf(hstream); 257 - break; 258 - default: 259 - dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n", 260 - sof_hda_position_quirk); 261 - pos = 0; 262 - break; 263 - } 264 - 265 - if (pos >= hstream->bufsize) 266 - pos = 0; 267 - 195 + pos = hda_dsp_stream_get_position(hstream, substream->stream, true); 268 196 found: 269 197 pos = bytes_to_frames(substream->runtime, pos); 270 198
+90 -4
sound/soc/sof/intel/hda-stream.c
··· 707 707 } 708 708 709 709 static void 710 - hda_dsp_set_bytes_transferred(struct hdac_stream *hstream, u64 buffer_size) 710 + hda_dsp_compr_bytes_transferred(struct hdac_stream *hstream, int direction) 711 711 { 712 + u64 buffer_size = hstream->bufsize; 712 713 u64 prev_pos, pos, num_bytes; 713 714 714 715 div64_u64_rem(hstream->curr_pos, buffer_size, &prev_pos); 715 - pos = snd_hdac_stream_get_pos_posbuf(hstream); 716 + pos = hda_dsp_stream_get_position(hstream, direction, false); 716 717 717 718 if (pos < prev_pos) 718 719 num_bytes = (buffer_size - prev_pos) + pos; ··· 749 748 if (s->substream && sof_hda->no_ipc_position) { 750 749 snd_sof_pcm_period_elapsed(s->substream); 751 750 } else if (s->cstream) { 752 - hda_dsp_set_bytes_transferred(s, 753 - s->cstream->runtime->buffer_size); 751 + hda_dsp_compr_bytes_transferred(s, s->cstream->direction); 754 752 snd_compr_fragment_elapsed(s->cstream); 755 753 } 756 754 } ··· 1008 1008 hext_stream); 1009 1009 devm_kfree(sdev->dev, hda_stream); 1010 1010 } 1011 + } 1012 + 1013 + snd_pcm_uframes_t hda_dsp_stream_get_position(struct hdac_stream *hstream, 1014 + int direction, bool can_sleep) 1015 + { 1016 + struct hdac_ext_stream *hext_stream = stream_to_hdac_ext_stream(hstream); 1017 + struct sof_intel_hda_stream *hda_stream = hstream_to_sof_hda_stream(hext_stream); 1018 + struct snd_sof_dev *sdev = hda_stream->sdev; 1019 + snd_pcm_uframes_t pos; 1020 + 1021 + switch (sof_hda_position_quirk) { 1022 + case SOF_HDA_POSITION_QUIRK_USE_SKYLAKE_LEGACY: 1023 + /* 1024 + * This legacy code, inherited from the Skylake driver, 1025 + * mixes DPIB registers and DPIB DDR updates and 1026 + * does not seem to follow any known hardware recommendations. 1027 + * It's not clear e.g. 
why there is a different flow 1028 + * for capture and playback, the only information that matters is 1029 + * what traffic class is used, and on all SOF-enabled platforms 1030 + * only VC0 is supported so the work-around was likely not necessary 1031 + * and quite possibly wrong. 1032 + */ 1033 + 1034 + /* DPIB/posbuf position mode: 1035 + * For Playback, Use DPIB register from HDA space which 1036 + * reflects the actual data transferred. 1037 + * For Capture, Use the position buffer for pointer, as DPIB 1038 + * is not accurate enough, its update may be completed 1039 + * earlier than the data written to DDR. 1040 + */ 1041 + if (direction == SNDRV_PCM_STREAM_PLAYBACK) { 1042 + pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 1043 + AZX_REG_VS_SDXDPIB_XBASE + 1044 + (AZX_REG_VS_SDXDPIB_XINTERVAL * 1045 + hstream->index)); 1046 + } else { 1047 + /* 1048 + * For capture stream, we need more workaround to fix the 1049 + * position incorrect issue: 1050 + * 1051 + * 1. Wait at least 20us before reading position buffer after 1052 + * the interrupt generated(IOC), to make sure position update 1053 + * happens on frame boundary i.e. 20.833uSec for 48KHz. 1054 + * 2. Perform a dummy Read to DPIB register to flush DMA 1055 + * position value. 1056 + * 3. Read the DMA Position from posbuf. Now the readback 1057 + * value should be >= period boundary. 
1058 + */ 1059 + if (can_sleep) 1060 + usleep_range(20, 21); 1061 + 1062 + snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 1063 + AZX_REG_VS_SDXDPIB_XBASE + 1064 + (AZX_REG_VS_SDXDPIB_XINTERVAL * 1065 + hstream->index)); 1066 + pos = snd_hdac_stream_get_pos_posbuf(hstream); 1067 + } 1068 + break; 1069 + case SOF_HDA_POSITION_QUIRK_USE_DPIB_REGISTERS: 1070 + /* 1071 + * In case VC1 traffic is disabled this is the recommended option 1072 + */ 1073 + pos = snd_sof_dsp_read(sdev, HDA_DSP_HDA_BAR, 1074 + AZX_REG_VS_SDXDPIB_XBASE + 1075 + (AZX_REG_VS_SDXDPIB_XINTERVAL * 1076 + hstream->index)); 1077 + break; 1078 + case SOF_HDA_POSITION_QUIRK_USE_DPIB_DDR_UPDATE: 1079 + /* 1080 + * This is the recommended option when VC1 is enabled. 1081 + * While this isn't needed for SOF platforms it's added for 1082 + * consistency and debug. 1083 + */ 1084 + pos = snd_hdac_stream_get_pos_posbuf(hstream); 1085 + break; 1086 + default: 1087 + dev_err_once(sdev->dev, "hda_position_quirk value %d not supported\n", 1088 + sof_hda_position_quirk); 1089 + pos = 0; 1090 + break; 1091 + } 1092 + 1093 + if (pos >= hstream->bufsize) 1094 + pos = 0; 1095 + 1096 + return pos; 1011 1097 }
+4
sound/soc/sof/intel/hda.h
··· 497 497 */ 498 498 int hda_dsp_probe(struct snd_sof_dev *sdev); 499 499 int hda_dsp_remove(struct snd_sof_dev *sdev); 500 + int hda_dsp_core_power_up(struct snd_sof_dev *sdev, unsigned int core_mask); 500 501 int hda_dsp_core_run(struct snd_sof_dev *sdev, unsigned int core_mask); 501 502 int hda_dsp_enable_core(struct snd_sof_dev *sdev, unsigned int core_mask); 502 503 int hda_dsp_core_reset_power_down(struct snd_sof_dev *sdev, ··· 564 563 struct hdac_stream *hstream); 565 564 bool hda_dsp_check_ipc_irq(struct snd_sof_dev *sdev); 566 565 bool hda_dsp_check_stream_irq(struct snd_sof_dev *sdev); 566 + 567 + snd_pcm_uframes_t hda_dsp_stream_get_position(struct hdac_stream *hstream, 568 + int direction, bool can_sleep); 567 569 568 570 struct hdac_ext_stream * 569 571 hda_dsp_stream_get(struct snd_sof_dev *sdev, int direction, u32 flags);
+13 -14
sound/soc/sof/ipc3-topology.c
··· 1577 1577 struct sof_ipc_ctrl_data *cdata; 1578 1578 int ret; 1579 1579 1580 + if (scontrol->max_size < (sizeof(*cdata) + sizeof(struct sof_abi_hdr))) { 1581 + dev_err(sdev->dev, "%s: insufficient size for a bytes control: %zu.\n", 1582 + __func__, scontrol->max_size); 1583 + return -EINVAL; 1584 + } 1585 + 1586 + if (scontrol->priv_size > scontrol->max_size - sizeof(*cdata)) { 1587 + dev_err(sdev->dev, 1588 + "%s: bytes data size %zu exceeds max %zu.\n", __func__, 1589 + scontrol->priv_size, scontrol->max_size - sizeof(*cdata)); 1590 + return -EINVAL; 1591 + } 1592 + 1580 1593 scontrol->ipc_control_data = kzalloc(scontrol->max_size, GFP_KERNEL); 1581 1594 if (!scontrol->ipc_control_data) 1582 1595 return -ENOMEM; 1583 - 1584 - if (scontrol->max_size < sizeof(*cdata) || 1585 - scontrol->max_size < sizeof(struct sof_abi_hdr)) { 1586 - ret = -EINVAL; 1587 - goto err; 1588 - } 1589 - 1590 - /* init the get/put bytes data */ 1591 - if (scontrol->priv_size > scontrol->max_size - sizeof(*cdata)) { 1592 - dev_err(sdev->dev, "err: bytes data size %zu exceeds max %zu.\n", 1593 - scontrol->priv_size, scontrol->max_size - sizeof(*cdata)); 1594 - ret = -EINVAL; 1595 - goto err; 1596 - } 1597 1596 1598 1597 scontrol->size = sizeof(struct sof_ipc_ctrl_data) + scontrol->priv_size; 1599 1598
+1 -1
sound/soc/sof/mediatek/mt8186/mt8186.c
··· 392 392 PLATFORM_DEVID_NONE, 393 393 pdev, sizeof(*pdev)); 394 394 if (IS_ERR(priv->ipc_dev)) { 395 - ret = IS_ERR(priv->ipc_dev); 395 + ret = PTR_ERR(priv->ipc_dev); 396 396 dev_err(sdev->dev, "failed to create mtk-adsp-ipc device\n"); 397 397 goto err_adsp_off; 398 398 }
+20 -1
sound/soc/sof/pm.c
··· 23 23 u32 target_dsp_state; 24 24 25 25 switch (sdev->system_suspend_target) { 26 + case SOF_SUSPEND_S5: 27 + case SOF_SUSPEND_S4: 28 + /* DSP should be in D3 if the system is suspending to S3+ */ 26 29 case SOF_SUSPEND_S3: 27 30 /* DSP should be in D3 if the system is suspending to S3 */ 28 31 target_dsp_state = SOF_DSP_PM_D3; ··· 338 335 return 0; 339 336 340 337 #if defined(CONFIG_ACPI) 341 - if (acpi_target_system_state() == ACPI_STATE_S0) 338 + switch (acpi_target_system_state()) { 339 + case ACPI_STATE_S0: 342 340 sdev->system_suspend_target = SOF_SUSPEND_S0IX; 341 + break; 342 + case ACPI_STATE_S1: 343 + case ACPI_STATE_S2: 344 + case ACPI_STATE_S3: 345 + sdev->system_suspend_target = SOF_SUSPEND_S3; 346 + break; 347 + case ACPI_STATE_S4: 348 + sdev->system_suspend_target = SOF_SUSPEND_S4; 349 + break; 350 + case ACPI_STATE_S5: 351 + sdev->system_suspend_target = SOF_SUSPEND_S5; 352 + break; 353 + default: 354 + break; 355 + } 343 356 #endif 344 357 345 358 return 0;
+2
sound/soc/sof/sof-priv.h
··· 85 85 SOF_SUSPEND_NONE = 0, 86 86 SOF_SUSPEND_S0IX, 87 87 SOF_SUSPEND_S3, 88 + SOF_SUSPEND_S4, 89 + SOF_SUSPEND_S5, 88 90 }; 89 91 90 92 enum sof_dfsentry_type {
+248
sound/usb/quirks-table.h
··· 3803 3803 }, 3804 3804 3805 3805 /* 3806 + * MacroSilicon MS2100/MS2106 based AV capture cards 3807 + * 3808 + * These claim 96kHz 1ch in the descriptors, but are actually 48kHz 2ch. 3809 + * They also need QUIRK_FLAG_ALIGN_TRANSFER, which makes one wonder if 3810 + * they pretend to be 96kHz mono as a workaround for stereo being broken 3811 + * by that... 3812 + * 3813 + * They also have an issue with initial stream alignment that causes the 3814 + * channels to be swapped and out of phase, which is dealt with in quirks.c. 3815 + */ 3816 + { 3817 + USB_AUDIO_DEVICE(0x534d, 0x0021), 3818 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 3819 + .vendor_name = "MacroSilicon", 3820 + .product_name = "MS210x", 3821 + .ifnum = QUIRK_ANY_INTERFACE, 3822 + .type = QUIRK_COMPOSITE, 3823 + .data = &(const struct snd_usb_audio_quirk[]) { 3824 + { 3825 + .ifnum = 2, 3826 + .type = QUIRK_AUDIO_STANDARD_MIXER, 3827 + }, 3828 + { 3829 + .ifnum = 3, 3830 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 3831 + .data = &(const struct audioformat) { 3832 + .formats = SNDRV_PCM_FMTBIT_S16_LE, 3833 + .channels = 2, 3834 + .iface = 3, 3835 + .altsetting = 1, 3836 + .altset_idx = 1, 3837 + .attributes = 0, 3838 + .endpoint = 0x82, 3839 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 3840 + USB_ENDPOINT_SYNC_ASYNC, 3841 + .rates = SNDRV_PCM_RATE_CONTINUOUS, 3842 + .rate_min = 48000, 3843 + .rate_max = 48000, 3844 + } 3845 + }, 3846 + { 3847 + .ifnum = -1 3848 + } 3849 + } 3850 + } 3851 + }, 3852 + 3853 + /* 3806 3854 * MacroSilicon MS2109 based HDMI capture cards 3807 3855 * 3808 3856 * These claim 96kHz 1ch in the descriptors, but are actually 48kHz 2ch. 
··· 4160 4112 { 4161 4113 .ifnum = 1, 4162 4114 .type = QUIRK_AUDIO_STANDARD_INTERFACE 4115 + }, 4116 + { 4117 + .ifnum = -1 4118 + } 4119 + } 4120 + } 4121 + }, 4122 + { 4123 + /* 4124 + * Fiero SC-01 (firmware v1.0.0 @ 48 kHz) 4125 + */ 4126 + USB_DEVICE(0x2b53, 0x0023), 4127 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 4128 + .vendor_name = "Fiero", 4129 + .product_name = "SC-01", 4130 + .ifnum = QUIRK_ANY_INTERFACE, 4131 + .type = QUIRK_COMPOSITE, 4132 + .data = &(const struct snd_usb_audio_quirk[]) { 4133 + { 4134 + .ifnum = 0, 4135 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 4136 + }, 4137 + /* Playback */ 4138 + { 4139 + .ifnum = 1, 4140 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4141 + .data = &(const struct audioformat) { 4142 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4143 + .channels = 2, 4144 + .fmt_bits = 24, 4145 + .iface = 1, 4146 + .altsetting = 1, 4147 + .altset_idx = 1, 4148 + .endpoint = 0x01, 4149 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4150 + USB_ENDPOINT_SYNC_ASYNC, 4151 + .rates = SNDRV_PCM_RATE_48000, 4152 + .rate_min = 48000, 4153 + .rate_max = 48000, 4154 + .nr_rates = 1, 4155 + .rate_table = (unsigned int[]) { 48000 }, 4156 + .clock = 0x29 4157 + } 4158 + }, 4159 + /* Capture */ 4160 + { 4161 + .ifnum = 2, 4162 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4163 + .data = &(const struct audioformat) { 4164 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4165 + .channels = 2, 4166 + .fmt_bits = 24, 4167 + .iface = 2, 4168 + .altsetting = 1, 4169 + .altset_idx = 1, 4170 + .endpoint = 0x82, 4171 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4172 + USB_ENDPOINT_SYNC_ASYNC | 4173 + USB_ENDPOINT_USAGE_IMPLICIT_FB, 4174 + .rates = SNDRV_PCM_RATE_48000, 4175 + .rate_min = 48000, 4176 + .rate_max = 48000, 4177 + .nr_rates = 1, 4178 + .rate_table = (unsigned int[]) { 48000 }, 4179 + .clock = 0x29 4180 + } 4181 + }, 4182 + { 4183 + .ifnum = -1 4184 + } 4185 + } 4186 + } 4187 + }, 4188 + { 4189 + /* 4190 + * Fiero SC-01 (firmware v1.0.0 @ 96 kHz) 4191 + */ 4192 + 
USB_DEVICE(0x2b53, 0x0024), 4193 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 4194 + .vendor_name = "Fiero", 4195 + .product_name = "SC-01", 4196 + .ifnum = QUIRK_ANY_INTERFACE, 4197 + .type = QUIRK_COMPOSITE, 4198 + .data = &(const struct snd_usb_audio_quirk[]) { 4199 + { 4200 + .ifnum = 0, 4201 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 4202 + }, 4203 + /* Playback */ 4204 + { 4205 + .ifnum = 1, 4206 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4207 + .data = &(const struct audioformat) { 4208 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4209 + .channels = 2, 4210 + .fmt_bits = 24, 4211 + .iface = 1, 4212 + .altsetting = 1, 4213 + .altset_idx = 1, 4214 + .endpoint = 0x01, 4215 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4216 + USB_ENDPOINT_SYNC_ASYNC, 4217 + .rates = SNDRV_PCM_RATE_96000, 4218 + .rate_min = 96000, 4219 + .rate_max = 96000, 4220 + .nr_rates = 1, 4221 + .rate_table = (unsigned int[]) { 96000 }, 4222 + .clock = 0x29 4223 + } 4224 + }, 4225 + /* Capture */ 4226 + { 4227 + .ifnum = 2, 4228 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4229 + .data = &(const struct audioformat) { 4230 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4231 + .channels = 2, 4232 + .fmt_bits = 24, 4233 + .iface = 2, 4234 + .altsetting = 1, 4235 + .altset_idx = 1, 4236 + .endpoint = 0x82, 4237 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4238 + USB_ENDPOINT_SYNC_ASYNC | 4239 + USB_ENDPOINT_USAGE_IMPLICIT_FB, 4240 + .rates = SNDRV_PCM_RATE_96000, 4241 + .rate_min = 96000, 4242 + .rate_max = 96000, 4243 + .nr_rates = 1, 4244 + .rate_table = (unsigned int[]) { 96000 }, 4245 + .clock = 0x29 4246 + } 4247 + }, 4248 + { 4249 + .ifnum = -1 4250 + } 4251 + } 4252 + } 4253 + }, 4254 + { 4255 + /* 4256 + * Fiero SC-01 (firmware v1.1.0) 4257 + */ 4258 + USB_DEVICE(0x2b53, 0x0031), 4259 + .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { 4260 + .vendor_name = "Fiero", 4261 + .product_name = "SC-01", 4262 + .ifnum = QUIRK_ANY_INTERFACE, 4263 + .type = QUIRK_COMPOSITE, 4264 + .data = 
&(const struct snd_usb_audio_quirk[]) { 4265 + { 4266 + .ifnum = 0, 4267 + .type = QUIRK_AUDIO_STANDARD_INTERFACE 4268 + }, 4269 + /* Playback */ 4270 + { 4271 + .ifnum = 1, 4272 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4273 + .data = &(const struct audioformat) { 4274 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4275 + .channels = 2, 4276 + .fmt_bits = 24, 4277 + .iface = 1, 4278 + .altsetting = 1, 4279 + .altset_idx = 1, 4280 + .endpoint = 0x01, 4281 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4282 + USB_ENDPOINT_SYNC_ASYNC, 4283 + .rates = SNDRV_PCM_RATE_48000 | 4284 + SNDRV_PCM_RATE_96000, 4285 + .rate_min = 48000, 4286 + .rate_max = 96000, 4287 + .nr_rates = 2, 4288 + .rate_table = (unsigned int[]) { 48000, 96000 }, 4289 + .clock = 0x29 4290 + } 4291 + }, 4292 + /* Capture */ 4293 + { 4294 + .ifnum = 2, 4295 + .type = QUIRK_AUDIO_FIXED_ENDPOINT, 4296 + .data = &(const struct audioformat) { 4297 + .formats = SNDRV_PCM_FMTBIT_S32_LE, 4298 + .channels = 2, 4299 + .fmt_bits = 24, 4300 + .iface = 2, 4301 + .altsetting = 1, 4302 + .altset_idx = 1, 4303 + .endpoint = 0x82, 4304 + .ep_attr = USB_ENDPOINT_XFER_ISOC | 4305 + USB_ENDPOINT_SYNC_ASYNC | 4306 + USB_ENDPOINT_USAGE_IMPLICIT_FB, 4307 + .rates = SNDRV_PCM_RATE_48000 | 4308 + SNDRV_PCM_RATE_96000, 4309 + .rate_min = 48000, 4310 + .rate_max = 96000, 4311 + .nr_rates = 2, 4312 + .rate_table = (unsigned int[]) { 48000, 96000 }, 4313 + .clock = 0x29 4314 + } 4163 4315 }, 4164 4316 { 4165 4317 .ifnum = -1
+13
sound/usb/quirks.c
··· 1478 1478 case USB_ID(0x041e, 0x3f19): /* E-Mu 0204 USB */ 1479 1479 set_format_emu_quirk(subs, fmt); 1480 1480 break; 1481 + case USB_ID(0x534d, 0x0021): /* MacroSilicon MS2100/MS2106 */ 1481 1482 case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */ 1482 1483 subs->stream_offset_adj = 2; 1483 1484 break; ··· 1843 1842 QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER), 1844 1843 DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */ 1845 1844 QUIRK_FLAG_GET_SAMPLE_RATE), 1845 + DEVICE_FLG(0x1397, 0x0508, /* Behringer UMC204HD */ 1846 + QUIRK_FLAG_PLAYBACK_FIRST | QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1847 + DEVICE_FLG(0x1397, 0x0509, /* Behringer UMC404HD */ 1848 + QUIRK_FLAG_PLAYBACK_FIRST | QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1846 1849 DEVICE_FLG(0x13e5, 0x0001, /* Serato Phono */ 1847 1850 QUIRK_FLAG_IGNORE_CTL_ERROR), 1848 1851 DEVICE_FLG(0x154e, 0x1002, /* Denon DCD-1500RE */ ··· 1909 1904 QUIRK_FLAG_IGNORE_CTL_ERROR), 1910 1905 DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */ 1911 1906 QUIRK_FLAG_GET_SAMPLE_RATE), 1907 + DEVICE_FLG(0x534d, 0x0021, /* MacroSilicon MS2100/MS2106 */ 1908 + QUIRK_FLAG_ALIGN_TRANSFER), 1912 1909 DEVICE_FLG(0x534d, 0x2109, /* MacroSilicon MS2109 */ 1913 1910 QUIRK_FLAG_ALIGN_TRANSFER), 1914 1911 DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */ 1915 1912 QUIRK_FLAG_GET_SAMPLE_RATE), 1913 + DEVICE_FLG(0x2b53, 0x0023, /* Fiero SC-01 (firmware v1.0.0 @ 48 kHz) */ 1914 + QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1915 + DEVICE_FLG(0x2b53, 0x0024, /* Fiero SC-01 (firmware v1.0.0 @ 96 kHz) */ 1916 + QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1917 + DEVICE_FLG(0x2b53, 0x0031, /* Fiero SC-01 (firmware v1.1.0) */ 1918 + QUIRK_FLAG_GENERIC_IMPLICIT_FB), 1916 1919 1917 1920 /* Vendor matches */ 1918 1921 VENDOR_FLG(0x045e, /* MS Lifecam */
+36
tools/arch/arm64/include/uapi/asm/kvm.h
··· 139 139 __u64 dbg_wvr[KVM_ARM_MAX_DBG_REGS]; 140 140 }; 141 141 142 + #define KVM_DEBUG_ARCH_HSR_HIGH_VALID (1 << 0) 142 143 struct kvm_debug_exit_arch { 143 144 __u32 hsr; 145 + __u32 hsr_high; /* ESR_EL2[61:32] */ 144 146 __u64 far; /* used for watchpoints */ 145 147 }; 146 148 ··· 333 331 KVM_REG_SIZE_U512 | 0xffff) 334 332 #define KVM_ARM64_SVE_VLS_WORDS \ 335 333 ((KVM_ARM64_SVE_VQ_MAX - KVM_ARM64_SVE_VQ_MIN) / 64 + 1) 334 + 335 + /* Bitmap feature firmware registers */ 336 + #define KVM_REG_ARM_FW_FEAT_BMAP (0x0016 << KVM_REG_ARM_COPROC_SHIFT) 337 + #define KVM_REG_ARM_FW_FEAT_BMAP_REG(r) (KVM_REG_ARM64 | KVM_REG_SIZE_U64 | \ 338 + KVM_REG_ARM_FW_FEAT_BMAP | \ 339 + ((r) & 0xffff)) 340 + 341 + #define KVM_REG_ARM_STD_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(0) 342 + 343 + enum { 344 + KVM_REG_ARM_STD_BIT_TRNG_V1_0 = 0, 345 + #ifdef __KERNEL__ 346 + KVM_REG_ARM_STD_BMAP_BIT_COUNT, 347 + #endif 348 + }; 349 + 350 + #define KVM_REG_ARM_STD_HYP_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(1) 351 + 352 + enum { 353 + KVM_REG_ARM_STD_HYP_BIT_PV_TIME = 0, 354 + #ifdef __KERNEL__ 355 + KVM_REG_ARM_STD_HYP_BMAP_BIT_COUNT, 356 + #endif 357 + }; 358 + 359 + #define KVM_REG_ARM_VENDOR_HYP_BMAP KVM_REG_ARM_FW_FEAT_BMAP_REG(2) 360 + 361 + enum { 362 + KVM_REG_ARM_VENDOR_HYP_BIT_FUNC_FEAT = 0, 363 + KVM_REG_ARM_VENDOR_HYP_BIT_PTP = 1, 364 + #ifdef __KERNEL__ 365 + KVM_REG_ARM_VENDOR_HYP_BMAP_BIT_COUNT, 366 + #endif 367 + }; 336 368 337 369 /* Device Control API: ARM VGIC */ 338 370 #define KVM_DEV_ARM_VGIC_GRP_ADDR 0
+52 -2
tools/include/uapi/linux/kvm.h
··· 444 444 #define KVM_SYSTEM_EVENT_SHUTDOWN 1 445 445 #define KVM_SYSTEM_EVENT_RESET 2 446 446 #define KVM_SYSTEM_EVENT_CRASH 3 447 + #define KVM_SYSTEM_EVENT_WAKEUP 4 448 + #define KVM_SYSTEM_EVENT_SUSPEND 5 449 + #define KVM_SYSTEM_EVENT_SEV_TERM 6 447 450 __u32 type; 448 451 __u32 ndata; 449 452 union { ··· 649 646 #define KVM_MP_STATE_OPERATING 7 650 647 #define KVM_MP_STATE_LOAD 8 651 648 #define KVM_MP_STATE_AP_RESET_HOLD 9 649 + #define KVM_MP_STATE_SUSPENDED 10 652 650 653 651 struct kvm_mp_state { 654 652 __u32 mp_state; ··· 1154 1150 #define KVM_CAP_S390_MEM_OP_EXTENSION 211 1155 1151 #define KVM_CAP_PMU_CAPABILITY 212 1156 1152 #define KVM_CAP_DISABLE_QUIRKS2 213 1157 - /* #define KVM_CAP_VM_TSC_CONTROL 214 */ 1153 + #define KVM_CAP_VM_TSC_CONTROL 214 1158 1154 #define KVM_CAP_SYSTEM_EVENT_DATA 215 1155 + #define KVM_CAP_ARM_SYSTEM_SUSPEND 216 1159 1156 1160 1157 #ifdef KVM_CAP_IRQ_ROUTING 1161 1158 ··· 1245 1240 #define KVM_XEN_HVM_CONFIG_SHARED_INFO (1 << 2) 1246 1241 #define KVM_XEN_HVM_CONFIG_RUNSTATE (1 << 3) 1247 1242 #define KVM_XEN_HVM_CONFIG_EVTCHN_2LEVEL (1 << 4) 1243 + #define KVM_XEN_HVM_CONFIG_EVTCHN_SEND (1 << 5) 1248 1244 1249 1245 struct kvm_xen_hvm_config { 1250 1246 __u32 flags; ··· 1484 1478 #define KVM_SET_PIT2 _IOW(KVMIO, 0xa0, struct kvm_pit_state2) 1485 1479 /* Available with KVM_CAP_PPC_GET_PVINFO */ 1486 1480 #define KVM_PPC_GET_PVINFO _IOW(KVMIO, 0xa1, struct kvm_ppc_pvinfo) 1487 - /* Available with KVM_CAP_TSC_CONTROL */ 1481 + /* Available with KVM_CAP_TSC_CONTROL for a vCPU, or with 1482 + * KVM_CAP_VM_TSC_CONTROL to set defaults for a VM */ 1488 1483 #define KVM_SET_TSC_KHZ _IO(KVMIO, 0xa2) 1489 1484 #define KVM_GET_TSC_KHZ _IO(KVMIO, 0xa3) 1490 1485 /* Available with KVM_CAP_PCI_2_3 */ ··· 1701 1694 struct { 1702 1695 __u64 gfn; 1703 1696 } shared_info; 1697 + struct { 1698 + __u32 send_port; 1699 + __u32 type; /* EVTCHNSTAT_ipi / EVTCHNSTAT_interdomain */ 1700 + __u32 flags; 1701 + #define KVM_XEN_EVTCHN_DEASSIGN (1 << 
0) 1702 + #define KVM_XEN_EVTCHN_UPDATE (1 << 1) 1703 + #define KVM_XEN_EVTCHN_RESET (1 << 2) 1704 + /* 1705 + * Events sent by the guest are either looped back to 1706 + * the guest itself (potentially on a different port#) 1707 + * or signalled via an eventfd. 1708 + */ 1709 + union { 1710 + struct { 1711 + __u32 port; 1712 + __u32 vcpu; 1713 + __u32 priority; 1714 + } port; 1715 + struct { 1716 + __u32 port; /* Zero for eventfd */ 1717 + __s32 fd; 1718 + } eventfd; 1719 + __u32 padding[4]; 1720 + } deliver; 1721 + } evtchn; 1722 + __u32 xen_version; 1704 1723 __u64 pad[8]; 1705 1724 } u; 1706 1725 }; ··· 1735 1702 #define KVM_XEN_ATTR_TYPE_LONG_MODE 0x0 1736 1703 #define KVM_XEN_ATTR_TYPE_SHARED_INFO 0x1 1737 1704 #define KVM_XEN_ATTR_TYPE_UPCALL_VECTOR 0x2 1705 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 1706 + #define KVM_XEN_ATTR_TYPE_EVTCHN 0x3 1707 + #define KVM_XEN_ATTR_TYPE_XEN_VERSION 0x4 1738 1708 1739 1709 /* Per-vCPU Xen attributes */ 1740 1710 #define KVM_XEN_VCPU_GET_ATTR _IOWR(KVMIO, 0xca, struct kvm_xen_vcpu_attr) 1741 1711 #define KVM_XEN_VCPU_SET_ATTR _IOW(KVMIO, 0xcb, struct kvm_xen_vcpu_attr) 1712 + 1713 + /* Available with KVM_CAP_XEN_HVM / KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 1714 + #define KVM_XEN_HVM_EVTCHN_SEND _IOW(KVMIO, 0xd0, struct kvm_irq_routing_xen_evtchn) 1742 1715 1743 1716 #define KVM_GET_SREGS2 _IOR(KVMIO, 0xcc, struct kvm_sregs2) 1744 1717 #define KVM_SET_SREGS2 _IOW(KVMIO, 0xcd, struct kvm_sregs2) ··· 1763 1724 __u64 time_blocked; 1764 1725 __u64 time_offline; 1765 1726 } runstate; 1727 + __u32 vcpu_id; 1728 + struct { 1729 + __u32 port; 1730 + __u32 priority; 1731 + __u64 expires_ns; 1732 + } timer; 1733 + __u8 vector; 1766 1734 } u; 1767 1735 }; 1768 1736 ··· 1780 1734 #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_CURRENT 0x3 1781 1735 #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_DATA 0x4 1782 1736 #define KVM_XEN_VCPU_ATTR_TYPE_RUNSTATE_ADJUST 0x5 1737 + /* Available with KVM_CAP_XEN_HVM / 
KVM_XEN_HVM_CONFIG_EVTCHN_SEND */ 1738 + #define KVM_XEN_VCPU_ATTR_TYPE_VCPU_ID 0x6 1739 + #define KVM_XEN_VCPU_ATTR_TYPE_TIMER 0x7 1740 + #define KVM_XEN_VCPU_ATTR_TYPE_UPCALL_VECTOR 0x8 1783 1741 1784 1742 /* Secure Encrypted Virtualization command */ 1785 1743 enum sev_cmd_id {
+2 -3
tools/perf/util/bpf-utils.c
··· 149 149 count = bpf_prog_info_read_offset_u32(&info, desc->count_offset); 150 150 size = bpf_prog_info_read_offset_u32(&info, desc->size_offset); 151 151 152 - data_len += count * size; 152 + data_len += roundup(count * size, sizeof(__u64)); 153 153 } 154 154 155 155 /* step 3: allocate continuous memory */ 156 - data_len = roundup(data_len, sizeof(__u64)); 157 156 info_linear = malloc(sizeof(struct perf_bpil) + data_len); 158 157 if (!info_linear) 159 158 return ERR_PTR(-ENOMEM); ··· 179 180 bpf_prog_info_set_offset_u64(&info_linear->info, 180 181 desc->array_offset, 181 182 ptr_to_u64(ptr)); 182 - ptr += count * size; 183 + ptr += roundup(count * size, sizeof(__u64)); 183 184 } 184 185 185 186 /* step 5: call syscall again to get required arrays */
+6 -1
tools/perf/util/bpf_off_cpu.c
··· 265 265 266 266 sample_type = evsel->core.attr.sample_type; 267 267 268 + if (sample_type & ~OFFCPU_SAMPLE_TYPES) { 269 + pr_err("not supported sample type: %llx\n", 270 + (unsigned long long)sample_type); 271 + return -1; 272 + } 273 + 268 274 if (sample_type & (PERF_SAMPLE_ID | PERF_SAMPLE_IDENTIFIER)) { 269 275 if (evsel->core.id) 270 276 sid = evsel->core.id[0]; ··· 325 319 } 326 320 if (sample_type & PERF_SAMPLE_CGROUP) 327 321 data.array[n++] = key.cgroup_id; 328 - /* TODO: handle more sample types */ 329 322 330 323 size = n * sizeof(u64); 331 324 data.hdr.size = size;
+14 -6
tools/perf/util/bpf_skel/off_cpu.bpf.c
··· 71 71 __uint(max_entries, 1); 72 72 } cgroup_filter SEC(".maps"); 73 73 74 + /* new kernel task_struct definition */ 75 + struct task_struct___new { 76 + long __state; 77 + } __attribute__((preserve_access_index)); 78 + 74 79 /* old kernel task_struct definition */ 75 80 struct task_struct___old { 76 81 long state; ··· 98 93 */ 99 94 static inline int get_task_state(struct task_struct *t) 100 95 { 101 - if (bpf_core_field_exists(t->__state)) 102 - return BPF_CORE_READ(t, __state); 96 + /* recast pointer to capture new type for compiler */ 97 + struct task_struct___new *t_new = (void *)t; 103 98 104 - /* recast pointer to capture task_struct___old type for compiler */ 105 - struct task_struct___old *t_old = (void *)t; 99 + if (bpf_core_field_exists(t_new->__state)) { 100 + return BPF_CORE_READ(t_new, __state); 101 + } else { 102 + /* recast pointer to capture old type for compiler */ 103 + struct task_struct___old *t_old = (void *)t; 106 104 107 - /* now use old "state" name of the field */ 108 - return BPF_CORE_READ(t_old, state); 105 + return BPF_CORE_READ(t_old, state); 106 + } 109 107 } 110 108 111 109 static inline __u64 get_cgroup_id(struct task_struct *t)
+9
tools/perf/util/evsel.c
··· 48 48 #include "util.h" 49 49 #include "hashmap.h" 50 50 #include "pmu-hybrid.h" 51 + #include "off_cpu.h" 51 52 #include "../perf-sys.h" 52 53 #include "util/parse-branch-options.h" 53 54 #include <internal/xyarray.h> ··· 1103 1102 } 1104 1103 } 1105 1104 1105 + static bool evsel__is_offcpu_event(struct evsel *evsel) 1106 + { 1107 + return evsel__is_bpf_output(evsel) && !strcmp(evsel->name, OFFCPU_EVENT); 1108 + } 1109 + 1106 1110 /* 1107 1111 * The enable_on_exec/disabled value strategy: 1108 1112 * ··· 1372 1366 */ 1373 1367 if (evsel__is_dummy_event(evsel)) 1374 1368 evsel__reset_sample_bit(evsel, BRANCH_STACK); 1369 + 1370 + if (evsel__is_offcpu_event(evsel)) 1371 + evsel->core.attr.sample_type &= OFFCPU_SAMPLE_TYPES; 1375 1372 } 1376 1373 1377 1374 int evsel__set_filter(struct evsel *evsel, const char *filter)
+9
tools/perf/util/off_cpu.h
··· 1 1 #ifndef PERF_UTIL_OFF_CPU_H 2 2 #define PERF_UTIL_OFF_CPU_H 3 3 4 + #include <linux/perf_event.h> 5 + 4 6 struct evlist; 5 7 struct target; 6 8 struct perf_session; 7 9 struct record_opts; 8 10 9 11 #define OFFCPU_EVENT "offcpu-time" 12 + 13 + #define OFFCPU_SAMPLE_TYPES (PERF_SAMPLE_IDENTIFIER | PERF_SAMPLE_IP | \ 14 + PERF_SAMPLE_TID | PERF_SAMPLE_TIME | \ 15 + PERF_SAMPLE_ID | PERF_SAMPLE_CPU | \ 16 + PERF_SAMPLE_PERIOD | PERF_SAMPLE_CALLCHAIN | \ 17 + PERF_SAMPLE_CGROUP) 18 + 10 19 11 20 #ifdef HAVE_BPF_SKEL 12 21 int off_cpu_prepare(struct evlist *evlist, struct target *target,
+5 -4
tools/perf/util/synthetic-events.c
··· 754 754 snprintf(filename, sizeof(filename), "%s/proc/%d/task", 755 755 machine->root_dir, pid); 756 756 757 - n = scandir(filename, &dirent, filter_task, alphasort); 757 + n = scandir(filename, &dirent, filter_task, NULL); 758 758 if (n < 0) 759 759 return n; 760 760 ··· 767 767 if (*end) 768 768 continue; 769 769 770 - rc = -1; 770 + /* some threads may exit just after scan, ignore it */ 771 771 if (perf_event__prepare_comm(comm_event, pid, _pid, machine, 772 772 &tgid, &ppid, &kernel_thread) != 0) 773 - break; 773 + continue; 774 774 775 + rc = -1; 775 776 if (perf_event__synthesize_fork(tool, fork_event, _pid, tgid, 776 777 ppid, process, machine) < 0) 777 778 break; ··· 988 987 return 0; 989 988 990 989 snprintf(proc_path, sizeof(proc_path), "%s/proc", machine->root_dir); 991 - n = scandir(proc_path, &dirent, filter_task, alphasort); 990 + n = scandir(proc_path, &dirent, filter_task, NULL); 992 991 if (n < 0) 993 992 return err; 994 993
+1 -1
tools/perf/util/unwind-libunwind-local.c
··· 197 197 #ifndef NO_LIBUNWIND_DEBUG_FRAME 198 198 static u64 elf_section_offset(int fd, const char *name) 199 199 { 200 - u64 address, offset; 200 + u64 address, offset = 0; 201 201 202 202 if (elf_section_address_and_offset(fd, name, &address, &offset)) 203 203 return 0;
+21
tools/testing/selftests/bpf/verifier/jmp32.c
··· 864 864 .result = ACCEPT, 865 865 .flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS, 866 866 }, 867 + { 868 + "jeq32/jne32: bounds checking", 869 + .insns = { 870 + BPF_MOV64_IMM(BPF_REG_6, 563), 871 + BPF_MOV64_IMM(BPF_REG_2, 0), 872 + BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), 873 + BPF_ALU64_IMM(BPF_NEG, BPF_REG_2, 0), 874 + BPF_ALU32_REG(BPF_OR, BPF_REG_2, BPF_REG_6), 875 + BPF_JMP32_IMM(BPF_JNE, BPF_REG_2, 8, 5), 876 + BPF_JMP_IMM(BPF_JSGE, BPF_REG_2, 500, 2), 877 + BPF_MOV64_IMM(BPF_REG_0, 2), 878 + BPF_EXIT_INSN(), 879 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), 880 + BPF_EXIT_INSN(), 881 + BPF_MOV64_IMM(BPF_REG_0, 1), 882 + BPF_EXIT_INSN(), 883 + }, 884 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 885 + .result = ACCEPT, 886 + .retval = 1, 887 + },
+22
tools/testing/selftests/bpf/verifier/jump.c
··· 373 373 .result = ACCEPT, 374 374 .retval = 3, 375 375 }, 376 + { 377 + "jump & dead code elimination", 378 + .insns = { 379 + BPF_MOV64_IMM(BPF_REG_0, 1), 380 + BPF_MOV64_IMM(BPF_REG_3, 0), 381 + BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0), 382 + BPF_ALU64_IMM(BPF_NEG, BPF_REG_3, 0), 383 + BPF_ALU64_IMM(BPF_OR, BPF_REG_3, 32767), 384 + BPF_JMP_IMM(BPF_JSGE, BPF_REG_3, 0, 1), 385 + BPF_EXIT_INSN(), 386 + BPF_JMP_IMM(BPF_JSLE, BPF_REG_3, 0x8000, 1), 387 + BPF_EXIT_INSN(), 388 + BPF_ALU64_IMM(BPF_ADD, BPF_REG_3, -32767), 389 + BPF_MOV64_IMM(BPF_REG_0, 2), 390 + BPF_JMP_IMM(BPF_JLE, BPF_REG_3, 0, 1), 391 + BPF_MOV64_REG(BPF_REG_0, BPF_REG_4), 392 + BPF_EXIT_INSN(), 393 + }, 394 + .prog_type = BPF_PROG_TYPE_SCHED_CLS, 395 + .result = ACCEPT, 396 + .retval = 2, 397 + },
+5 -1
tools/testing/selftests/net/forwarding/lib.sh
··· 1240 1240 # FDB entry was installed. 1241 1241 bridge link set dev $br_port1 flood off 1242 1242 1243 + ip link set $host1_if promisc on 1243 1244 tc qdisc add dev $host1_if ingress 1244 1245 tc filter add dev $host1_if ingress protocol ip pref 1 handle 101 \ 1245 1246 flower dst_mac $mac action drop ··· 1251 1250 tc -j -s filter show dev $host1_if ingress \ 1252 1251 | jq -e ".[] | select(.options.handle == 101) \ 1253 1252 | select(.options.actions[0].stats.packets == 1)" &> /dev/null 1254 - check_fail $? "Packet reached second host when should not" 1253 + check_fail $? "Packet reached first host when should not" 1255 1254 1256 1255 $MZ $host1_if -c 1 -p 64 -a $mac -t ip -q 1257 1256 sleep 1 ··· 1290 1289 1291 1290 tc filter del dev $host1_if ingress protocol ip pref 1 handle 101 flower 1292 1291 tc qdisc del dev $host1_if ingress 1292 + ip link set $host1_if promisc off 1293 1293 1294 1294 bridge link set dev $br_port1 flood on 1295 1295 ··· 1308 1306 1309 1307 # Add an ACL on `host2_if` which will tell us whether the packet 1310 1308 # was flooded to it or not. 1309 + ip link set $host2_if promisc on 1311 1310 tc qdisc add dev $host2_if ingress 1312 1311 tc filter add dev $host2_if ingress protocol ip pref 1 handle 101 \ 1313 1312 flower dst_mac $mac action drop ··· 1326 1323 1327 1324 tc filter del dev $host2_if ingress protocol ip pref 1 handle 101 flower 1328 1325 tc qdisc del dev $host2_if ingress 1326 + ip link set $host2_if promisc off 1329 1327 1330 1328 return $err 1331 1329 }
+71 -2
tools/testing/selftests/net/mptcp/pm_nl_ctl.c
··· 39 39 fprintf(stderr, "\tdsf lip <local-ip> lport <local-port> rip <remote-ip> rport <remote-port> token <token>\n"); 40 40 fprintf(stderr, "\tdel <id> [<ip>]\n"); 41 41 fprintf(stderr, "\tget <id>\n"); 42 - fprintf(stderr, "\tset [<ip>] [id <nr>] flags [no]backup|[no]fullmesh [port <nr>]\n"); 42 + fprintf(stderr, "\tset [<ip>] [id <nr>] flags [no]backup|[no]fullmesh [port <nr>] [token <token>] [rip <ip>] [rport <port>]\n"); 43 43 fprintf(stderr, "\tflush\n"); 44 44 fprintf(stderr, "\tdump\n"); 45 45 fprintf(stderr, "\tlimits [<rcv addr max> <subflow max>]\n"); ··· 1279 1279 struct rtattr *rta, *nest; 1280 1280 struct nlmsghdr *nh; 1281 1281 u_int32_t flags = 0; 1282 + u_int32_t token = 0; 1283 + u_int16_t rport = 0; 1282 1284 u_int16_t family; 1285 + void *rip = NULL; 1283 1286 int nest_start; 1284 1287 int use_id = 0; 1285 1288 u_int8_t id; ··· 1342 1339 error(1, 0, " missing flags keyword"); 1343 1340 1344 1341 for (; arg < argc; arg++) { 1345 - if (!strcmp(argv[arg], "flags")) { 1342 + if (!strcmp(argv[arg], "token")) { 1343 + if (++arg >= argc) 1344 + error(1, 0, " missing token value"); 1345 + 1346 + /* token */ 1347 + token = atoi(argv[arg]); 1348 + } else if (!strcmp(argv[arg], "flags")) { 1346 1349 char *tok, *str; 1347 1350 1348 1351 /* flags */ ··· 1387 1378 rta->rta_len = RTA_LENGTH(2); 1388 1379 memcpy(RTA_DATA(rta), &port, 2); 1389 1380 off += NLMSG_ALIGN(rta->rta_len); 1381 + } else if (!strcmp(argv[arg], "rport")) { 1382 + if (++arg >= argc) 1383 + error(1, 0, " missing remote port"); 1384 + 1385 + rport = atoi(argv[arg]); 1386 + } else if (!strcmp(argv[arg], "rip")) { 1387 + if (++arg >= argc) 1388 + error(1, 0, " missing remote ip"); 1389 + 1390 + rip = argv[arg]; 1390 1391 } else { 1391 1392 error(1, 0, "unknown keyword %s", argv[arg]); 1392 1393 } 1393 1394 } 1394 1395 nest->rta_len = off - nest_start; 1396 + 1397 + /* token */ 1398 + if (token) { 1399 + rta = (void *)(data + off); 1400 + rta->rta_type = MPTCP_PM_ATTR_TOKEN; 1401 + 
rta->rta_len = RTA_LENGTH(4); 1402 + memcpy(RTA_DATA(rta), &token, 4); 1403 + off += NLMSG_ALIGN(rta->rta_len); 1404 + } 1405 + 1406 + /* remote addr/port */ 1407 + if (rip) { 1408 + nest_start = off; 1409 + nest = (void *)(data + off); 1410 + nest->rta_type = NLA_F_NESTED | MPTCP_PM_ATTR_ADDR_REMOTE; 1411 + nest->rta_len = RTA_LENGTH(0); 1412 + off += NLMSG_ALIGN(nest->rta_len); 1413 + 1414 + /* addr data */ 1415 + rta = (void *)(data + off); 1416 + if (inet_pton(AF_INET, rip, RTA_DATA(rta))) { 1417 + family = AF_INET; 1418 + rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR4; 1419 + rta->rta_len = RTA_LENGTH(4); 1420 + } else if (inet_pton(AF_INET6, rip, RTA_DATA(rta))) { 1421 + family = AF_INET6; 1422 + rta->rta_type = MPTCP_PM_ADDR_ATTR_ADDR6; 1423 + rta->rta_len = RTA_LENGTH(16); 1424 + } else { 1425 + error(1, errno, "can't parse ip %s", (char *)rip); 1426 + } 1427 + off += NLMSG_ALIGN(rta->rta_len); 1428 + 1429 + /* family */ 1430 + rta = (void *)(data + off); 1431 + rta->rta_type = MPTCP_PM_ADDR_ATTR_FAMILY; 1432 + rta->rta_len = RTA_LENGTH(2); 1433 + memcpy(RTA_DATA(rta), &family, 2); 1434 + off += NLMSG_ALIGN(rta->rta_len); 1435 + 1436 + if (rport) { 1437 + rta = (void *)(data + off); 1438 + rta->rta_type = MPTCP_PM_ADDR_ATTR_PORT; 1439 + rta->rta_len = RTA_LENGTH(2); 1440 + memcpy(RTA_DATA(rta), &rport, 2); 1441 + off += NLMSG_ALIGN(rta->rta_len); 1442 + } 1443 + 1444 + nest->rta_len = off - nest_start; 1445 + } 1395 1446 1396 1447 do_nl_req(fd, nh, off, 0); 1397 1448 return 0;
+32
tools/testing/selftests/net/mptcp/userspace_pm.sh
··· 770 770 rm -f "$evts" 771 771 } 772 772 773 + test_prio() 774 + { 775 + local count 776 + 777 + # Send MP_PRIO signal from client to server machine 778 + ip netns exec "$ns2" ./pm_nl_ctl set 10.0.1.2 port "$client4_port" flags backup token "$client4_token" rip 10.0.1.1 rport "$server4_port" 779 + sleep 0.5 780 + 781 + # Check TX 782 + stdbuf -o0 -e0 printf "MP_PRIO TX \t" 783 + count=$(ip netns exec "$ns2" nstat -as | grep MPTcpExtMPPrioTx | awk '{print $2}') 784 + [ -z "$count" ] && count=0 785 + if [ $count != 1 ]; then 786 + stdbuf -o0 -e0 printf "[FAIL]\n" 787 + exit 1 788 + else 789 + stdbuf -o0 -e0 printf "[OK]\n" 790 + fi 791 + 792 + # Check RX 793 + stdbuf -o0 -e0 printf "MP_PRIO RX \t" 794 + count=$(ip netns exec "$ns1" nstat -as | grep MPTcpExtMPPrioRx | awk '{print $2}') 795 + [ -z "$count" ] && count=0 796 + if [ $count != 1 ]; then 797 + stdbuf -o0 -e0 printf "[FAIL]\n" 798 + exit 1 799 + else 800 + stdbuf -o0 -e0 printf "[OK]\n" 801 + fi 802 + } 803 + 773 804 make_connection 774 805 make_connection "v6" 775 806 test_announce 776 807 test_remove 777 808 test_subflows 809 + test_prio 778 810 779 811 exit 0
+1 -1
tools/testing/selftests/net/udpgro.sh
··· 34 34 ip -netns "${PEER_NS}" addr add dev veth1 192.168.1.1/24 35 35 ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad 36 36 ip -netns "${PEER_NS}" link set dev veth1 up 37 - ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy 37 + ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp 38 38 } 39 39 40 40 run_one() {
+1 -1
tools/testing/selftests/net/udpgro_bench.sh
··· 34 34 ip -netns "${PEER_NS}" addr add dev veth1 2001:db8::1/64 nodad 35 35 ip -netns "${PEER_NS}" link set dev veth1 up 36 36 37 - ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy 37 + ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp 38 38 ip netns exec "${PEER_NS}" ./udpgso_bench_rx ${rx_args} -r & 39 39 ip netns exec "${PEER_NS}" ./udpgso_bench_rx -t ${rx_args} -r & 40 40
+1 -1
tools/testing/selftests/net/udpgro_frglist.sh
··· 36 36 ip netns exec "${PEER_NS}" ethtool -K veth1 rx-gro-list on 37 37 38 38 39 - ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp_dummy 39 + ip -n "${PEER_NS}" link set veth1 xdp object ../bpf/xdp_dummy.o section xdp 40 40 tc -n "${PEER_NS}" qdisc add dev veth1 clsact 41 41 tc -n "${PEER_NS}" filter add dev veth1 ingress prio 4 protocol ipv6 bpf object-file ../bpf/nat6to4.o section schedcls/ingress6/nat_6 direct-action 42 42 tc -n "${PEER_NS}" filter add dev veth1 egress prio 4 protocol ip bpf object-file ../bpf/nat6to4.o section schedcls/egress4/snat4 direct-action
+1 -1
tools/testing/selftests/net/udpgro_fwd.sh
··· 46 46 ip -n $BASE$ns addr add dev veth$ns $BM_NET_V4$ns/24 47 47 ip -n $BASE$ns addr add dev veth$ns $BM_NET_V6$ns/64 nodad 48 48 done 49 - ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp_dummy 2>/dev/null 49 + ip -n $NS_DST link set veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null 50 50 } 51 51 52 52 create_vxlan_endpoint() {
+3 -3
tools/testing/selftests/net/veth.sh
··· 289 289 ip netns exec $NS_SRC ethtool -L veth$SRC rx 1 tx 2 2>/dev/null 290 290 printf "%-60s" "bad setting: XDP with RX nr less than TX" 291 291 ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o \ 292 - section xdp_dummy 2>/dev/null &&\ 292 + section xdp 2>/dev/null &&\ 293 293 echo "fail - set operation successful ?!?" || echo " ok " 294 294 295 295 # the following tests will run with multiple channels active 296 296 ip netns exec $NS_SRC ethtool -L veth$SRC rx 2 297 297 ip netns exec $NS_DST ethtool -L veth$DST rx 2 298 298 ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o \ 299 - section xdp_dummy 2>/dev/null 299 + section xdp 2>/dev/null 300 300 printf "%-60s" "bad setting: reducing RX nr below peer TX with XDP set" 301 301 ip netns exec $NS_DST ethtool -L veth$DST rx 1 2>/dev/null &&\ 302 302 echo "fail - set operation successful ?!?" || echo " ok " ··· 311 311 chk_channels "setting invalid channels nr" $DST 2 2 312 312 fi 313 313 314 - ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o section xdp_dummy 2>/dev/null 314 + ip -n $NS_DST link set dev veth$DST xdp object ../bpf/xdp_dummy.o section xdp 2>/dev/null 315 315 chk_gro_flag "with xdp attached - gro flag" $DST on 316 316 chk_gro_flag " - peer gro flag" $SRC off 317 317 chk_tso_flag " - tso flag" $SRC off
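The repeated `section xdp_dummy` → `section xdp` renames in the selftest scripts above track a change in the BPF object itself: libbpf 1.0 dropped support for free-form ELF section names, so a program must live in a section named after its program type. Assuming ../bpf/xdp_dummy.c has the usual shape of a pass-through program, the renamed object looks roughly like this:

```c
/* Sketch of the minimal XDP pass-through program these scripts attach.
 * The SEC() name is what the `ip link set ... section xdp` argument
 * selects; libbpf 1.0 rejects custom suffixes such as "xdp_dummy". */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_dummy_prog(struct xdp_md *ctx)
{
	return XDP_PASS;		/* let every packet through */
}

char _license[] SEC("license") = "GPL";
```

Because the section name is the attach key, the object rename and every script that references it had to change in lockstep, which is why the same one-line edit appears across udpgro.sh, udpgro_bench.sh, udpgro_frglist.sh, udpgro_fwd.sh, and veth.sh.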
+11 -9
tools/testing/selftests/wireguard/qemu/Makefile
··· 19 19 MIRROR := https://download.wireguard.com/qemu-test/distfiles/ 20 20 21 21 KERNEL_BUILD_PATH := $(BUILD_PATH)/kernel$(if $(findstring yes,$(DEBUG_KERNEL)),-debug) 22 - rwildcard=$(foreach d,$(wildcard $1*),$(call rwildcard,$d/,$2) $(filter $(subst *,%,$2),$d)) 23 - WIREGUARD_SOURCES := $(call rwildcard,$(KERNEL_PATH)/drivers/net/wireguard/,*) 24 22 25 23 default: qemu 26 24 ··· 107 109 QEMU_ARCH := x86_64 108 110 KERNEL_ARCH := x86_64 109 111 KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/arch/x86/boot/bzImage 112 + QEMU_VPORT_RESULT := virtio-serial-device 110 113 ifeq ($(HOST_ARCH),$(ARCH)) 111 - QEMU_MACHINE := -cpu host -machine q35,accel=kvm 114 + QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off -no-acpi 112 115 else 113 - QEMU_MACHINE := -cpu max -machine q35 116 + QEMU_MACHINE := -cpu max -machine microvm -no-acpi 114 117 endif 115 118 else ifeq ($(ARCH),i686) 116 119 CHOST := i686-linux-musl 117 120 QEMU_ARCH := i386 118 121 KERNEL_ARCH := x86 119 122 KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/arch/x86/boot/bzImage 123 + QEMU_VPORT_RESULT := virtio-serial-device 120 124 ifeq ($(subst x86_64,i686,$(HOST_ARCH)),$(ARCH)) 121 - QEMU_MACHINE := -cpu host -machine q35,accel=kvm 125 + QEMU_MACHINE := -cpu host -machine microvm,accel=kvm,pit=off,pic=off,rtc=off -no-acpi 122 126 else 123 - QEMU_MACHINE := -cpu max -machine q35 127 + QEMU_MACHINE := -cpu coreduo -machine microvm -no-acpi 124 128 endif 125 129 else ifeq ($(ARCH),mips64) 126 130 CHOST := mips64-linux-musl ··· 208 208 KERNEL_ARCH := m68k 209 209 KERNEL_BZIMAGE := $(KERNEL_BUILD_PATH)/vmlinux 210 210 KERNEL_CMDLINE := $(shell sed -n 's/CONFIG_CMDLINE=\(.*\)/\1/p' arch/m68k.config) 211 + QEMU_VPORT_RESULT := virtio-serial-device 211 212 ifeq ($(HOST_ARCH),$(ARCH)) 212 - QEMU_MACHINE := -cpu host,accel=kvm -machine q800 -append $(KERNEL_CMDLINE) 213 + QEMU_MACHINE := -cpu host,accel=kvm -machine virt -append $(KERNEL_CMDLINE) 213 214 else 214 - QEMU_MACHINE := -machine q800 -smp 1 
-append $(KERNEL_CMDLINE) 215 + QEMU_MACHINE := -machine virt -smp 1 -append $(KERNEL_CMDLINE) 215 216 endif 216 217 else ifeq ($(ARCH),riscv64) 217 218 CHOST := riscv64-linux-musl ··· 323 322 cd $(KERNEL_BUILD_PATH) && ARCH=$(KERNEL_ARCH) $(KERNEL_PATH)/scripts/kconfig/merge_config.sh -n $(KERNEL_BUILD_PATH)/.config $(KERNEL_BUILD_PATH)/minimal.config 324 323 $(if $(findstring yes,$(DEBUG_KERNEL)),cp debug.config $(KERNEL_BUILD_PATH) && cd $(KERNEL_BUILD_PATH) && ARCH=$(KERNEL_ARCH) $(KERNEL_PATH)/scripts/kconfig/merge_config.sh -n $(KERNEL_BUILD_PATH)/.config debug.config,) 325 324 326 - $(KERNEL_BZIMAGE): $(TOOLCHAIN_PATH)/.installed $(KERNEL_BUILD_PATH)/.config $(BUILD_PATH)/init-cpio-spec.txt $(IPERF_PATH)/src/iperf3 $(IPUTILS_PATH)/ping $(BASH_PATH)/bash $(IPROUTE2_PATH)/misc/ss $(IPROUTE2_PATH)/ip/ip $(IPTABLES_PATH)/iptables/xtables-legacy-multi $(NMAP_PATH)/ncat/ncat $(WIREGUARD_TOOLS_PATH)/src/wg $(BUILD_PATH)/init ../netns.sh $(WIREGUARD_SOURCES) 325 + $(KERNEL_BZIMAGE): $(TOOLCHAIN_PATH)/.installed $(KERNEL_BUILD_PATH)/.config $(BUILD_PATH)/init-cpio-spec.txt $(IPERF_PATH)/src/iperf3 $(IPUTILS_PATH)/ping $(BASH_PATH)/bash $(IPROUTE2_PATH)/misc/ss $(IPROUTE2_PATH)/ip/ip $(IPTABLES_PATH)/iptables/xtables-legacy-multi $(NMAP_PATH)/ncat/ncat $(WIREGUARD_TOOLS_PATH)/src/wg $(BUILD_PATH)/init 327 326 $(MAKE) -C $(KERNEL_PATH) O=$(KERNEL_BUILD_PATH) ARCH=$(KERNEL_ARCH) CROSS_COMPILE=$(CROSS_COMPILE) 327 + .PHONY: $(KERNEL_BZIMAGE) 328 328 329 329 $(TOOLCHAIN_PATH)/$(CHOST)/include/linux/.installed: | $(KERNEL_BUILD_PATH)/.config $(TOOLCHAIN_PATH)/.installed 330 330 rm -rf $(TOOLCHAIN_PATH)/$(CHOST)/include/linux
+1
tools/testing/selftests/wireguard/qemu/arch/arm.config
··· 7 7 CONFIG_VIRTIO_MENU=y 8 8 CONFIG_VIRTIO_MMIO=y 9 9 CONFIG_VIRTIO_CONSOLE=y 10 + CONFIG_COMPAT_32BIT_TIME=y 10 11 CONFIG_CMDLINE_BOOL=y 11 12 CONFIG_CMDLINE="console=ttyAMA0 wg.success=vport0p1 panic_on_warn=1" 12 13 CONFIG_FRAME_WARN=1024
+1
tools/testing/selftests/wireguard/qemu/arch/armeb.config
··· 7 7 CONFIG_VIRTIO_MENU=y 8 8 CONFIG_VIRTIO_MMIO=y 9 9 CONFIG_VIRTIO_CONSOLE=y 10 + CONFIG_COMPAT_32BIT_TIME=y 10 11 CONFIG_CMDLINE_BOOL=y 11 12 CONFIG_CMDLINE="console=ttyAMA0 wg.success=vport0p1 panic_on_warn=1" 12 13 CONFIG_CPU_BIG_ENDIAN=y
+6 -2
tools/testing/selftests/wireguard/qemu/arch/i686.config
··· 1 - CONFIG_ACPI=y 2 1 CONFIG_SERIAL_8250=y 3 2 CONFIG_SERIAL_8250_CONSOLE=y 3 + CONFIG_VIRTIO_MENU=y 4 + CONFIG_VIRTIO_MMIO=y 5 + CONFIG_VIRTIO_CONSOLE=y 6 + CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y 7 + CONFIG_COMPAT_32BIT_TIME=y 4 8 CONFIG_CMDLINE_BOOL=y 5 - CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 9 + CONFIG_CMDLINE="console=ttyS0 wg.success=vport0p1 panic_on_warn=1 reboot=t" 6 10 CONFIG_FRAME_WARN=1024
+4 -6
tools/testing/selftests/wireguard/qemu/arch/m68k.config
··· 1 1 CONFIG_MMU=y 2 + CONFIG_VIRT=y 2 3 CONFIG_M68KCLASSIC=y 3 - CONFIG_M68040=y 4 - CONFIG_MAC=y 5 - CONFIG_SERIAL_PMACZILOG=y 6 - CONFIG_SERIAL_PMACZILOG_TTYS=y 7 - CONFIG_SERIAL_PMACZILOG_CONSOLE=y 8 - CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 4 + CONFIG_VIRTIO_CONSOLE=y 5 + CONFIG_COMPAT_32BIT_TIME=y 6 + CONFIG_CMDLINE="console=ttyGF0 wg.success=vport0p1 panic_on_warn=1" 9 7 CONFIG_FRAME_WARN=1024
+1
tools/testing/selftests/wireguard/qemu/arch/mips.config
··· 6 6 CONFIG_POWER_RESET_SYSCON=y 7 7 CONFIG_SERIAL_8250=y 8 8 CONFIG_SERIAL_8250_CONSOLE=y 9 + CONFIG_COMPAT_32BIT_TIME=y 9 10 CONFIG_CMDLINE_BOOL=y 10 11 CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 11 12 CONFIG_FRAME_WARN=1024
+1
tools/testing/selftests/wireguard/qemu/arch/mipsel.config
··· 7 7 CONFIG_POWER_RESET_SYSCON=y 8 8 CONFIG_SERIAL_8250=y 9 9 CONFIG_SERIAL_8250_CONSOLE=y 10 + CONFIG_COMPAT_32BIT_TIME=y 10 11 CONFIG_CMDLINE_BOOL=y 11 12 CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 12 13 CONFIG_FRAME_WARN=1024
+1
tools/testing/selftests/wireguard/qemu/arch/powerpc.config
··· 4 4 CONFIG_PHYS_64BIT=y 5 5 CONFIG_SERIAL_8250=y 6 6 CONFIG_SERIAL_8250_CONSOLE=y 7 + CONFIG_COMPAT_32BIT_TIME=y 7 8 CONFIG_MATH_EMULATION=y 8 9 CONFIG_CMDLINE_BOOL=y 9 10 CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1"
+5 -2
tools/testing/selftests/wireguard/qemu/arch/x86_64.config
··· 1 - CONFIG_ACPI=y 2 1 CONFIG_SERIAL_8250=y 3 2 CONFIG_SERIAL_8250_CONSOLE=y 3 + CONFIG_VIRTIO_MENU=y 4 + CONFIG_VIRTIO_MMIO=y 5 + CONFIG_VIRTIO_CONSOLE=y 6 + CONFIG_VIRTIO_MMIO_CMDLINE_DEVICES=y 4 7 CONFIG_CMDLINE_BOOL=y 5 - CONFIG_CMDLINE="console=ttyS0 wg.success=ttyS1 panic_on_warn=1" 8 + CONFIG_CMDLINE="console=ttyS0 wg.success=vport0p1 panic_on_warn=1 reboot=t" 6 9 CONFIG_FRAME_WARN=1280
+11
tools/testing/selftests/wireguard/qemu/init.c
··· 11 11 #include <stdlib.h> 12 12 #include <stdbool.h> 13 13 #include <fcntl.h> 14 + #include <time.h> 14 15 #include <sys/wait.h> 15 16 #include <sys/mount.h> 16 17 #include <sys/stat.h> ··· 69 68 if (ioctl(fd, RNDADDTOENTCNT, &bits) < 0) 70 69 panic("ioctl(RNDADDTOENTCNT)"); 71 70 close(fd); 71 + } 72 + 73 + static void set_time(void) 74 + { 75 + if (time(NULL)) 76 + return; 77 + pretty_message("[+] Setting fake time..."); 78 + if (stime(&(time_t){1433512680}) < 0) 79 + panic("settimeofday()"); 72 80 } 73 81 74 82 static void mount_filesystems(void) ··· 269 259 print_banner(); 270 260 mount_filesystems(); 271 261 seed_rng(); 262 + set_time(); 272 263 kmod_selftests(); 273 264 enable_logging(); 274 265 clear_leaks();