Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 4.20-rc5 into usb-next

We need the USB fixes in usb-next.

Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+4686 -2325
+60 -3
Documentation/admin-guide/kernel-parameters.txt
··· 856 856 causing system reset or hang due to sending 857 857 INIT from AP to BSP. 858 858 859 - disable_counter_freezing [HW] 859 + perf_v4_pmi= [X86,INTEL] 860 + Format: <bool> 860 861 Disable Intel PMU counter freezing feature. 861 862 The feature only exists starting from 862 863 Arch Perfmon v4 (Skylake and newer). ··· 3505 3504 before loading. 3506 3505 See Documentation/blockdev/ramdisk.txt. 3507 3506 3507 + psi= [KNL] Enable or disable pressure stall information 3508 + tracking. 3509 + Format: <bool> 3510 + 3508 3511 psmouse.proto= [HW,MOUSE] Highest PS2 mouse protocol extension to 3509 3512 probe for; one of (bare|imps|exps|lifebook|any). 3510 3513 psmouse.rate= [HW,MOUSE] Set desired mouse report rate, in reports ··· 4199 4194 4200 4195 spectre_v2= [X86] Control mitigation of Spectre variant 2 4201 4196 (indirect branch speculation) vulnerability. 4197 + The default operation protects the kernel from 4198 + user space attacks. 4202 4199 4203 - on - unconditionally enable 4204 - off - unconditionally disable 4200 + on - unconditionally enable, implies 4201 + spectre_v2_user=on 4202 + off - unconditionally disable, implies 4203 + spectre_v2_user=off 4205 4204 auto - kernel detects whether your CPU model is 4206 4205 vulnerable 4207 4206 ··· 4215 4206 CONFIG_RETPOLINE configuration option, and the 4216 4207 compiler with which the kernel was built. 4217 4208 4209 + Selecting 'on' will also enable the mitigation 4210 + against user space to user space task attacks. 4211 + 4212 + Selecting 'off' will disable both the kernel and 4213 + the user space protections. 4214 + 4218 4215 Specific mitigations can also be selected manually: 4219 4216 4220 4217 retpoline - replace indirect branches ··· 4229 4214 4230 4215 Not specifying this option is equivalent to 4231 4216 spectre_v2=auto. 
4217 + 4218 + spectre_v2_user= 4219 + [X86] Control mitigation of Spectre variant 2 4220 + (indirect branch speculation) vulnerability between 4221 + user space tasks 4222 + 4223 + on - Unconditionally enable mitigations. Is 4224 + enforced by spectre_v2=on 4225 + 4226 + off - Unconditionally disable mitigations. Is 4227 + enforced by spectre_v2=off 4228 + 4229 + prctl - Indirect branch speculation is enabled, 4230 + but mitigation can be enabled via prctl 4231 + per thread. The mitigation control state 4232 + is inherited on fork. 4233 + 4234 + prctl,ibpb 4235 + - Like "prctl" above, but only STIBP is 4236 + controlled per thread. IBPB is issued 4237 + always when switching between different user 4238 + space processes. 4239 + 4240 + seccomp 4241 + - Same as "prctl" above, but all seccomp 4242 + threads will enable the mitigation unless 4243 + they explicitly opt out. 4244 + 4245 + seccomp,ibpb 4246 + - Like "seccomp" above, but only STIBP is 4247 + controlled per thread. IBPB is issued 4248 + always when switching between different 4249 + user space processes. 4250 + 4251 + auto - Kernel selects the mitigation depending on 4252 + the available CPU features and vulnerability. 4253 + 4254 + Default mitigation: 4255 + If CONFIG_SECCOMP=y then "seccomp", otherwise "prctl" 4256 + 4257 + Not specifying this option is equivalent to 4258 + spectre_v2_user=auto. 4232 4259 4233 4260 spec_store_bypass_disable= 4234 4261 [HW] Control Speculative Store Bypass (SSB) Disable mitigation
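The boot-time choice made by `spectre_v2=` / `spectre_v2_user=` is reported at run time through sysfs. A small POSIX-shell sketch of checking it; the `vulnerabilities` directory is standard on x86 kernels of this era, but the files may be absent on other architectures or older kernels, so the helper falls back gracefully rather than assuming they exist:

```shell
#!/bin/sh
# Report a mitigation status from /sys/devices/system/cpu/vulnerabilities/,
# or a fallback string if this kernel does not expose that file.
vuln_status() {
    path="/sys/devices/system/cpu/vulnerabilities/$1"
    if [ -r "$path" ]; then
        cat "$path"
    else
        echo "not reported by this kernel"
    fi
}

for v in spectre_v2 spec_store_bypass; do
    printf '%s: %s\n' "$v" "$(vuln_status "$v")"
done
```

On an affected x86 machine this typically prints a line such as `spectre_v2: Mitigation: Full generic retpoline`; the exact wording depends on kernel version and hardware.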
+1
Documentation/arm64/silicon-errata.txt
··· 57 57 | ARM | Cortex-A73 | #858921 | ARM64_ERRATUM_858921 | 58 58 | ARM | Cortex-A55 | #1024718 | ARM64_ERRATUM_1024718 | 59 59 | ARM | Cortex-A76 | #1188873 | ARM64_ERRATUM_1188873 | 60 + | ARM | Cortex-A76 | #1286807 | ARM64_ERRATUM_1286807 | 60 61 | ARM | MMU-500 | #841119,#826419 | N/A | 61 62 | | | | | 62 63 | Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 |
+24 -9
Documentation/devicetree/bindings/phy/qcom-qmp-phy.txt
··· 40 40 "ref" for 19.2 MHz ref clk, 41 41 "com_aux" for phy common block aux clock, 42 42 "ref_aux" for phy reference aux clock, 43 + 44 + For "qcom,ipq8074-qmp-pcie-phy": no clocks are listed. 43 45 For "qcom,msm8996-qmp-pcie-phy" must contain: 44 46 "aux", "cfg_ahb", "ref". 45 47 For "qcom,msm8996-qmp-usb3-phy" must contain: 46 48 "aux", "cfg_ahb", "ref". 47 - For "qcom,qmp-v3-usb3-phy" must contain: 49 + For "qcom,sdm845-qmp-usb3-phy" must contain: 48 50 "aux", "cfg_ahb", "ref", "com_aux". 51 + For "qcom,sdm845-qmp-usb3-uni-phy" must contain: 52 + "aux", "cfg_ahb", "ref", "com_aux". 53 + For "qcom,sdm845-qmp-ufs-phy" must contain: 54 + "ref", "ref_aux". 49 55 50 56 - resets: a list of phandles and reset controller specifier pairs, 51 57 one for each entry in reset-names. 52 58 - reset-names: "phy" for reset of phy block, 53 59 "common" for phy common block reset, 54 - "cfg" for phy's ahb cfg block reset (Optional). 55 - For "qcom,msm8996-qmp-pcie-phy" must contain: 56 - "phy", "common", "cfg". 57 - For "qcom,msm8996-qmp-usb3-phy" must contain 58 - "phy", "common". 60 + "cfg" for phy's ahb cfg block reset. 61 + 59 62 For "qcom,ipq8074-qmp-pcie-phy" must contain: 60 - "phy", "common". 63 + "phy", "common". 64 + For "qcom,msm8996-qmp-pcie-phy" must contain: 65 + "phy", "common", "cfg". 66 + For "qcom,msm8996-qmp-usb3-phy" must contain 67 + "phy", "common". 68 + For "qcom,sdm845-qmp-usb3-phy" must contain: 69 + "phy", "common". 70 + For "qcom,sdm845-qmp-usb3-uni-phy" must contain: 71 + "phy", "common". 72 + For "qcom,sdm845-qmp-ufs-phy": no resets are listed. 61 73 62 74 - vdda-phy-supply: Phandle to a regulator supply to PHY core block. 63 75 - vdda-pll-supply: Phandle to 1.8V regulator supply to PHY refclk pll block. ··· 91 79 92 80 - #phy-cells: must be 0 93 81 82 + Required properties child node of pcie and usb3 qmp phys: 94 83 - clocks: a list of phandles and clock-specifier pairs, 95 84 one for each entry in clock-names. 
96 - - clock-names: Must contain following for pcie and usb qmp phys: 85 + - clock-names: Must contain following: 97 86 "pipe<lane-number>" for pipe clock specific to each lane. 98 87 - clock-output-names: Name of the PHY clock that will be the parent for 99 88 the above pipe clock. ··· 104 91 (or) 105 92 "pcie20_phy1_pipe_clk" 106 93 94 + Required properties for child node of PHYs with lane reset, AKA: 95 + "qcom,msm8996-qmp-pcie-phy" 107 96 - resets: a list of phandles and reset controller specifier pairs, 108 97 one for each entry in reset-names. 109 - - reset-names: Must contain following for pcie qmp phys: 98 + - reset-names: Must contain following: 110 99 "lane<lane-number>" for reset specific to each lane. 111 100 112 101 Example:
+8 -6
Documentation/devicetree/bindings/spi/spi-uniphier.txt
··· 5 5 Required properties: 6 6 - compatible: should be "socionext,uniphier-scssi" 7 7 - reg: address and length of the spi master registers 8 - - #address-cells: must be <1>, see spi-bus.txt 9 - - #size-cells: must be <0>, see spi-bus.txt 10 - - clocks: A phandle to the clock for the device. 11 - - resets: A phandle to the reset control for the device. 8 + - interrupts: a single interrupt specifier 9 + - pinctrl-names: should be "default" 10 + - pinctrl-0: pin control state for the default mode 11 + - clocks: a phandle to the clock for the device 12 + - resets: a phandle to the reset control for the device 12 13 13 14 Example: 14 15 15 16 spi0: spi@54006000 { 16 17 compatible = "socionext,uniphier-scssi"; 17 18 reg = <0x54006000 0x100>; 18 - #address-cells = <1>; 19 - #size-cells = <0>; 19 + interrupts = <0 39 4>; 20 + pinctrl-names = "default"; 21 + pinctrl-0 = <&pinctrl_spi0>; 20 22 clocks = <&peri_clk 11>; 21 23 resets = <&peri_rst 11>; 22 24 };
+9
Documentation/userspace-api/spec_ctrl.rst
··· 92 92 * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_ENABLE, 0, 0); 93 93 * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_DISABLE, 0, 0); 94 94 * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, PR_SPEC_FORCE_DISABLE, 0, 0); 95 + 96 + - PR_SPEC_INDIR_BRANCH: Indirect Branch Speculation in User Processes 97 + (Mitigate Spectre V2 style attacks against user processes) 98 + 99 + Invocations: 100 + * prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0); 101 + * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_ENABLE, 0, 0); 102 + * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_DISABLE, 0, 0); 103 + * prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, PR_SPEC_FORCE_DISABLE, 0, 0);
+1 -31
Documentation/x86/boot.txt
··· 61 61 to struct boot_params for loading bzImage and ramdisk 62 62 above 4G in 64bit. 63 63 64 - Protocol 2.13: (Kernel 3.14) Support 32- and 64-bit flags being set in 65 - xloadflags to support booting a 64-bit kernel from 32-bit 66 - EFI 67 - 68 - Protocol 2.14: (Kernel 4.20) Added acpi_rsdp_addr holding the physical 69 - address of the ACPI RSDP table. 70 - The bootloader updates version with: 71 - 0x8000 | min(kernel-version, bootloader-version) 72 - kernel-version being the protocol version supported by 73 - the kernel and bootloader-version the protocol version 74 - supported by the bootloader. 75 - 76 64 **** MEMORY LAYOUT 77 65 78 66 The traditional memory map for the kernel loader, used for Image or ··· 197 209 0258/8 2.10+ pref_address Preferred loading address 198 210 0260/4 2.10+ init_size Linear memory required during initialization 199 211 0264/4 2.11+ handover_offset Offset of handover entry point 200 - 0268/8 2.14+ acpi_rsdp_addr Physical address of RSDP table 201 212 202 213 (1) For backwards compatibility, if the setup_sects field contains 0, the 203 214 real value is 4. ··· 309 322 Contains the magic number "HdrS" (0x53726448). 310 323 311 324 Field name: version 312 - Type: modify 325 + Type: read 313 326 Offset/size: 0x206/2 314 327 Protocol: 2.00+ 315 328 316 329 Contains the boot protocol version, in (major << 8)+minor format, 317 330 e.g. 0x0204 for version 2.04, and 0x0a11 for a hypothetical version 318 331 10.17. 319 - 320 - Up to protocol version 2.13 this information is only read by the 321 - bootloader. From protocol version 2.14 onwards the bootloader will 322 - write the used protocol version or-ed with 0x8000 to the field. The 323 - used protocol version will be the minimum of the supported protocol 324 - versions of the bootloader and the kernel. 325 332 326 333 Field name: realmode_swtch 327 334 Type: modify (optional) ··· 743 762 handover protocol to boot the kernel should jump to this offset. 
744 763 745 764 See EFI HANDOVER PROTOCOL below for more details. 746 - 747 - Field name: acpi_rsdp_addr 748 - Type: write 749 - Offset/size: 0x268/8 750 - Protocol: 2.14+ 751 - 752 - This field can be set by the boot loader to tell the kernel the 753 - physical address of the ACPI RSDP table. 754 - 755 - A value of 0 indicates the kernel should fall back to the standard 756 - methods to locate the RSDP. 757 765 758 766 759 767 **** THE IMAGE CHECKSUM
+93 -33
MAINTAINERS
··· 1923 1923 M: Andy Gross <andy.gross@linaro.org> 1924 1924 M: David Brown <david.brown@linaro.org> 1925 1925 L: linux-arm-msm@vger.kernel.org 1926 - L: linux-soc@vger.kernel.org 1927 1926 S: Maintained 1928 1927 F: Documentation/devicetree/bindings/soc/qcom/ 1929 1928 F: arch/arm/boot/dts/qcom-*.dts ··· 2490 2491 ATHEROS ATH5K WIRELESS DRIVER 2491 2492 M: Jiri Slaby <jirislaby@gmail.com> 2492 2493 M: Nick Kossifidis <mickflemm@gmail.com> 2493 - M: "Luis R. Rodriguez" <mcgrof@do-not-panic.com> 2494 + M: Luis Chamberlain <mcgrof@kernel.org> 2494 2495 L: linux-wireless@vger.kernel.org 2495 2496 W: http://wireless.kernel.org/en/users/Drivers/ath5k 2496 2497 S: Maintained ··· 2800 2801 T: git git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git 2801 2802 Q: https://patchwork.ozlabs.org/project/netdev/list/?delegate=77147 2802 2803 S: Supported 2803 - F: arch/x86/net/bpf_jit* 2804 + F: arch/*/net/* 2804 2805 F: Documentation/networking/filter.txt 2805 2806 F: Documentation/bpf/ 2806 2807 F: include/linux/bpf* ··· 2819 2820 F: tools/bpf/ 2820 2821 F: tools/lib/bpf/ 2821 2822 F: tools/testing/selftests/bpf/ 2823 + 2824 + BPF JIT for ARM 2825 + M: Shubham Bansal <illusionist.neo@gmail.com> 2826 + L: netdev@vger.kernel.org 2827 + S: Maintained 2828 + F: arch/arm/net/ 2829 + 2830 + BPF JIT for ARM64 2831 + M: Daniel Borkmann <daniel@iogearbox.net> 2832 + M: Alexei Starovoitov <ast@kernel.org> 2833 + M: Zi Shen Lim <zlim.lnx@gmail.com> 2834 + L: netdev@vger.kernel.org 2835 + S: Supported 2836 + F: arch/arm64/net/ 2837 + 2838 + BPF JIT for MIPS (32-BIT AND 64-BIT) 2839 + M: Paul Burton <paul.burton@mips.com> 2840 + L: netdev@vger.kernel.org 2841 + S: Maintained 2842 + F: arch/mips/net/ 2843 + 2844 + BPF JIT for NFP NICs 2845 + M: Jakub Kicinski <jakub.kicinski@netronome.com> 2846 + L: netdev@vger.kernel.org 2847 + S: Supported 2848 + F: drivers/net/ethernet/netronome/nfp/bpf/ 2849 + 2850 + BPF JIT for POWERPC (32-BIT AND 64-BIT) 2851 + M: Naveen N. 
Rao <naveen.n.rao@linux.ibm.com> 2852 + M: Sandipan Das <sandipan@linux.ibm.com> 2853 + L: netdev@vger.kernel.org 2854 + S: Maintained 2855 + F: arch/powerpc/net/ 2856 + 2857 + BPF JIT for S390 2858 + M: Martin Schwidefsky <schwidefsky@de.ibm.com> 2859 + M: Heiko Carstens <heiko.carstens@de.ibm.com> 2860 + L: netdev@vger.kernel.org 2861 + S: Maintained 2862 + F: arch/s390/net/ 2863 + X: arch/s390/net/pnet.c 2864 + 2865 + BPF JIT for SPARC (32-BIT AND 64-BIT) 2866 + M: David S. Miller <davem@davemloft.net> 2867 + L: netdev@vger.kernel.org 2868 + S: Maintained 2869 + F: arch/sparc/net/ 2870 + 2871 + BPF JIT for X86 32-BIT 2872 + M: Wang YanQing <udknight@gmail.com> 2873 + L: netdev@vger.kernel.org 2874 + S: Maintained 2875 + F: arch/x86/net/bpf_jit_comp32.c 2876 + 2877 + BPF JIT for X86 64-BIT 2878 + M: Alexei Starovoitov <ast@kernel.org> 2879 + M: Daniel Borkmann <daniel@iogearbox.net> 2880 + L: netdev@vger.kernel.org 2881 + S: Supported 2882 + F: arch/x86/net/ 2883 + X: arch/x86/net/bpf_jit_comp32.c 2822 2884 2823 2885 BROADCOM B44 10/100 ETHERNET DRIVER 2824 2886 M: Michael Chan <michael.chan@broadcom.com> ··· 2921 2861 BROADCOM BCM47XX MIPS ARCHITECTURE 2922 2862 M: Hauke Mehrtens <hauke@hauke-m.de> 2923 2863 M: Rafał Miłecki <zajec5@gmail.com> 2924 - L: linux-mips@linux-mips.org 2864 + L: linux-mips@vger.kernel.org 2925 2865 S: Maintained 2926 2866 F: Documentation/devicetree/bindings/mips/brcm/ 2927 2867 F: arch/mips/bcm47xx/* ··· 2930 2870 BROADCOM BCM5301X ARM ARCHITECTURE 2931 2871 M: Hauke Mehrtens <hauke@hauke-m.de> 2932 2872 M: Rafał Miłecki <zajec5@gmail.com> 2933 - M: Jon Mason <jonmason@broadcom.com> 2934 2873 M: bcm-kernel-feedback-list@broadcom.com 2935 2874 L: linux-arm-kernel@lists.infradead.org 2936 2875 S: Maintained ··· 2984 2925 BROADCOM BMIPS MIPS ARCHITECTURE 2985 2926 M: Kevin Cernekee <cernekee@gmail.com> 2986 2927 M: Florian Fainelli <f.fainelli@gmail.com> 2987 - L: linux-mips@linux-mips.org 2928 + L: linux-mips@vger.kernel.org 2988 2929 
T: git git://github.com/broadcom/stblinux.git 2989 2930 S: Maintained 2990 2931 F: arch/mips/bmips/* ··· 3075 3016 BROADCOM IPROC ARM ARCHITECTURE 3076 3017 M: Ray Jui <rjui@broadcom.com> 3077 3018 M: Scott Branden <sbranden@broadcom.com> 3078 - M: Jon Mason <jonmason@broadcom.com> 3079 3019 M: bcm-kernel-feedback-list@broadcom.com 3080 3020 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 3081 3021 T: git git://github.com/broadcom/cygnus-linux.git ··· 3121 3063 3122 3064 BROADCOM NVRAM DRIVER 3123 3065 M: Rafał Miłecki <zajec5@gmail.com> 3124 - L: linux-mips@linux-mips.org 3066 + L: linux-mips@vger.kernel.org 3125 3067 S: Maintained 3126 3068 F: drivers/firmware/broadcom/* 3127 3069 ··· 4223 4165 4224 4166 DECSTATION PLATFORM SUPPORT 4225 4167 M: "Maciej W. Rozycki" <macro@linux-mips.org> 4226 - L: linux-mips@linux-mips.org 4168 + L: linux-mips@vger.kernel.org 4227 4169 W: http://www.linux-mips.org/wiki/DECstation 4228 4170 S: Maintained 4229 4171 F: arch/mips/dec/ ··· 5314 5256 M: Ralf Baechle <ralf@linux-mips.org> 5315 5257 M: David Daney <david.daney@cavium.com> 5316 5258 L: linux-edac@vger.kernel.org 5317 - L: linux-mips@linux-mips.org 5259 + L: linux-mips@vger.kernel.org 5318 5260 S: Supported 5319 5261 F: drivers/edac/octeon_edac* 5320 5262 ··· 5832 5774 F: tools/firewire/ 5833 5775 5834 5776 FIRMWARE LOADER (request_firmware) 5835 - M: Luis R. Rodriguez <mcgrof@kernel.org> 5777 + M: Luis Chamberlain <mcgrof@kernel.org> 5836 5778 L: linux-kernel@vger.kernel.org 5837 5779 S: Maintained 5838 5780 F: Documentation/firmware_class/ ··· 7761 7703 7762 7704 IOC3 ETHERNET DRIVER 7763 7705 M: Ralf Baechle <ralf@linux-mips.org> 7764 - L: linux-mips@linux-mips.org 7706 + L: linux-mips@vger.kernel.org 7765 7707 S: Maintained 7766 7708 F: drivers/net/ethernet/sgi/ioc3-eth.c 7767 7709 ··· 8132 8074 F: Documentation/dev-tools/kselftest* 8133 8075 8134 8076 KERNEL USERMODE HELPER 8135 - M: "Luis R. 
Rodriguez" <mcgrof@kernel.org> 8077 + M: Luis Chamberlain <mcgrof@kernel.org> 8136 8078 L: linux-kernel@vger.kernel.org 8137 8079 S: Maintained 8138 8080 F: kernel/umh.c ··· 8189 8131 8190 8132 KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips) 8191 8133 M: James Hogan <jhogan@kernel.org> 8192 - L: linux-mips@linux-mips.org 8134 + L: linux-mips@vger.kernel.org 8193 8135 S: Supported 8194 8136 F: arch/mips/include/uapi/asm/kvm* 8195 8137 F: arch/mips/include/asm/kvm* ··· 8308 8250 F: mm/kmemleak-test.c 8309 8251 8310 8252 KMOD KERNEL MODULE LOADER - USERMODE HELPER 8311 - M: "Luis R. Rodriguez" <mcgrof@kernel.org> 8253 + M: Luis Chamberlain <mcgrof@kernel.org> 8312 8254 L: linux-kernel@vger.kernel.org 8313 8255 S: Maintained 8314 8256 F: kernel/kmod.c ··· 8362 8304 8363 8305 LANTIQ MIPS ARCHITECTURE 8364 8306 M: John Crispin <john@phrozen.org> 8365 - L: linux-mips@linux-mips.org 8307 + L: linux-mips@vger.kernel.org 8366 8308 S: Maintained 8367 8309 F: arch/mips/lantiq 8368 8310 F: drivers/soc/lantiq ··· 8925 8867 8926 8868 MARDUK (CREATOR CI40) DEVICE TREE SUPPORT 8927 8869 M: Rahul Bedarkar <rahulbedarkar89@gmail.com> 8928 - L: linux-mips@linux-mips.org 8870 + L: linux-mips@vger.kernel.org 8929 8871 S: Maintained 8930 8872 F: arch/mips/boot/dts/img/pistachio_marduk.dts 8931 8873 ··· 9884 9826 9885 9827 MICROSEMI MIPS SOCS 9886 9828 M: Alexandre Belloni <alexandre.belloni@bootlin.com> 9887 - L: linux-mips@linux-mips.org 9829 + L: linux-mips@vger.kernel.org 9888 9830 S: Maintained 9889 9831 F: arch/mips/generic/board-ocelot.c 9890 9832 F: arch/mips/configs/generic/board-ocelot.config ··· 9924 9866 M: Ralf Baechle <ralf@linux-mips.org> 9925 9867 M: Paul Burton <paul.burton@mips.com> 9926 9868 M: James Hogan <jhogan@kernel.org> 9927 - L: linux-mips@linux-mips.org 9869 + L: linux-mips@vger.kernel.org 9928 9870 W: http://www.linux-mips.org/ 9929 9871 T: git git://git.linux-mips.org/pub/scm/ralf/linux.git 9930 9872 T: git 
git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux.git ··· 9937 9879 9938 9880 MIPS BOSTON DEVELOPMENT BOARD 9939 9881 M: Paul Burton <paul.burton@mips.com> 9940 - L: linux-mips@linux-mips.org 9882 + L: linux-mips@vger.kernel.org 9941 9883 S: Maintained 9942 9884 F: Documentation/devicetree/bindings/clock/img,boston-clock.txt 9943 9885 F: arch/mips/boot/dts/img/boston.dts ··· 9947 9889 9948 9890 MIPS GENERIC PLATFORM 9949 9891 M: Paul Burton <paul.burton@mips.com> 9950 - L: linux-mips@linux-mips.org 9892 + L: linux-mips@vger.kernel.org 9951 9893 S: Supported 9952 9894 F: Documentation/devicetree/bindings/power/mti,mips-cpc.txt 9953 9895 F: arch/mips/generic/ ··· 9955 9897 9956 9898 MIPS/LOONGSON1 ARCHITECTURE 9957 9899 M: Keguang Zhang <keguang.zhang@gmail.com> 9958 - L: linux-mips@linux-mips.org 9900 + L: linux-mips@vger.kernel.org 9959 9901 S: Maintained 9960 9902 F: arch/mips/loongson32/ 9961 9903 F: arch/mips/include/asm/mach-loongson32/ ··· 9964 9906 9965 9907 MIPS/LOONGSON2 ARCHITECTURE 9966 9908 M: Jiaxun Yang <jiaxun.yang@flygoat.com> 9967 - L: linux-mips@linux-mips.org 9909 + L: linux-mips@vger.kernel.org 9968 9910 S: Maintained 9969 9911 F: arch/mips/loongson64/fuloong-2e/ 9970 9912 F: arch/mips/loongson64/lemote-2f/ ··· 9974 9916 9975 9917 MIPS/LOONGSON3 ARCHITECTURE 9976 9918 M: Huacai Chen <chenhc@lemote.com> 9977 - L: linux-mips@linux-mips.org 9919 + L: linux-mips@vger.kernel.org 9978 9920 S: Maintained 9979 9921 F: arch/mips/loongson64/ 9980 9922 F: arch/mips/include/asm/mach-loongson64/ ··· 9984 9926 9985 9927 MIPS RINT INSTRUCTION EMULATION 9986 9928 M: Aleksandar Markovic <aleksandar.markovic@mips.com> 9987 - L: linux-mips@linux-mips.org 9929 + L: linux-mips@vger.kernel.org 9988 9930 S: Supported 9989 9931 F: arch/mips/math-emu/sp_rint.c 9990 9932 F: arch/mips/math-emu/dp_rint.c ··· 10969 10911 10970 10912 ONION OMEGA2+ BOARD 10971 10913 M: Harvey Hunt <harveyhuntnexus@gmail.com> 10972 - L: linux-mips@linux-mips.org 10914 + L: 
linux-mips@vger.kernel.org 10973 10915 S: Maintained 10974 10916 F: arch/mips/boot/dts/ralink/omega2p.dts 10975 10917 ··· 11878 11820 11879 11821 PISTACHIO SOC SUPPORT 11880 11822 M: James Hartley <james.hartley@sondrel.com> 11881 - L: linux-mips@linux-mips.org 11823 + L: linux-mips@vger.kernel.org 11882 11824 S: Odd Fixes 11883 11825 F: arch/mips/pistachio/ 11884 11826 F: arch/mips/include/asm/mach-pistachio/ ··· 12058 12000 F: include/linux/printk.h 12059 12001 12060 12002 PRISM54 WIRELESS DRIVER 12061 - M: "Luis R. Rodriguez" <mcgrof@gmail.com> 12003 + M: Luis Chamberlain <mcgrof@kernel.org> 12062 12004 L: linux-wireless@vger.kernel.org 12063 12005 W: http://wireless.kernel.org/en/users/Drivers/p54 12064 12006 S: Obsolete ··· 12072 12014 F: fs/proc/ 12073 12015 F: include/linux/proc_fs.h 12074 12016 F: tools/testing/selftests/proc/ 12017 + F: Documentation/filesystems/proc.txt 12075 12018 12076 12019 PROC SYSCTL 12077 - M: "Luis R. Rodriguez" <mcgrof@kernel.org> 12020 + M: Luis Chamberlain <mcgrof@kernel.org> 12078 12021 M: Kees Cook <keescook@chromium.org> 12079 12022 L: linux-kernel@vger.kernel.org 12080 12023 L: linux-fsdevel@vger.kernel.org ··· 12538 12479 12539 12480 RALINK MIPS ARCHITECTURE 12540 12481 M: John Crispin <john@phrozen.org> 12541 - L: linux-mips@linux-mips.org 12482 + L: linux-mips@vger.kernel.org 12542 12483 S: Maintained 12543 12484 F: arch/mips/ralink 12544 12485 ··· 12558 12499 12559 12500 RANCHU VIRTUAL BOARD FOR MIPS 12560 12501 M: Miodrag Dinic <miodrag.dinic@mips.com> 12561 - L: linux-mips@linux-mips.org 12502 + L: linux-mips@vger.kernel.org 12562 12503 S: Supported 12563 12504 F: arch/mips/generic/board-ranchu.c 12564 12505 F: arch/mips/configs/generic/board-ranchu.config ··· 14008 13949 F: Documentation/devicetree/bindings/sound/ 14009 13950 F: Documentation/sound/soc/ 14010 13951 F: sound/soc/ 13952 + F: include/dt-bindings/sound/ 14011 13953 F: include/sound/soc* 14012 13954 14013 13955 SOUNDWIRE SUBSYSTEM ··· 15294 15234 
TURBOCHANNEL SUBSYSTEM 15295 15235 M: "Maciej W. Rozycki" <macro@linux-mips.org> 15296 15236 M: Ralf Baechle <ralf@linux-mips.org> 15297 - L: linux-mips@linux-mips.org 15237 + L: linux-mips@vger.kernel.org 15298 15238 Q: http://patchwork.linux-mips.org/project/linux-mips/list/ 15299 15239 S: Maintained 15300 15240 F: drivers/tc/ ··· 16115 16055 16116 16056 VOCORE VOCORE2 BOARD 16117 16057 M: Harvey Hunt <harveyhuntnexus@gmail.com> 16118 - L: linux-mips@linux-mips.org 16058 + L: linux-mips@vger.kernel.org 16119 16059 S: Maintained 16120 16060 F: arch/mips/boot/dts/ralink/vocore2.dts 16121 16061
+1 -1
Makefile
··· 2 2 VERSION = 4 3 3 PATCHLEVEL = 20 4 4 SUBLEVEL = 0 5 - EXTRAVERSION = -rc4 5 + EXTRAVERSION = -rc5 6 6 NAME = Shy Crocodile 7 7 8 8 # *DOCUMENTATION*
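For reference, the four fields above are what the top-level Makefile concatenates into KERNELVERSION (the string `make kernelversion` prints); the composition, sketched in shell:

```shell
#!/bin/sh
# Mirrors the Makefile's definition:
#   KERNELVERSION = $(VERSION)$(if $(PATCHLEVEL),.$(PATCHLEVEL)$(if $(SUBLEVEL),.$(SUBLEVEL)))$(EXTRAVERSION)
VERSION=4
PATCHLEVEL=20
SUBLEVEL=0
EXTRAVERSION=-rc5

kernelversion="$VERSION.$PATCHLEVEL.$SUBLEVEL$EXTRAVERSION"
echo "$kernelversion"   # -> 4.20.0-rc5
```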
+1 -1
arch/arm/boot/dts/am3517-evm.dts
··· 228 228 vmmc-supply = <&vmmc_fixed>; 229 229 bus-width = <4>; 230 230 wp-gpios = <&gpio4 30 GPIO_ACTIVE_HIGH>; /* gpio_126 */ 231 - cd-gpios = <&gpio4 31 GPIO_ACTIVE_HIGH>; /* gpio_127 */ 231 + cd-gpios = <&gpio4 31 GPIO_ACTIVE_LOW>; /* gpio_127 */ 232 232 }; 233 233 234 234 &mmc3 {
+1 -1
arch/arm/boot/dts/am3517-som.dtsi
··· 163 163 compatible = "ti,wl1271"; 164 164 reg = <2>; 165 165 interrupt-parent = <&gpio6>; 166 - interrupts = <10 IRQ_TYPE_LEVEL_HIGH>; /* gpio_170 */ 166 + interrupts = <10 IRQ_TYPE_EDGE_RISING>; /* gpio_170 */ 167 167 ref-clock-frequency = <26000000>; 168 168 tcxo-clock-frequency = <26000000>; 169 169 };
-6
arch/arm/boot/dts/imx51-zii-rdu1.dts
··· 492 492 pinctrl-0 = <&pinctrl_i2c2>; 493 493 status = "okay"; 494 494 495 - eeprom@50 { 496 - compatible = "atmel,24c04"; 497 - pagesize = <16>; 498 - reg = <0x50>; 499 - }; 500 - 501 495 hpa1: amp@60 { 502 496 compatible = "ti,tpa6130a2"; 503 497 reg = <0x60>;
+1 -1
arch/arm/boot/dts/logicpd-som-lv.dtsi
··· 129 129 }; 130 130 131 131 &mmc3 { 132 - interrupts-extended = <&intc 94 &omap3_pmx_core2 0x46>; 132 + interrupts-extended = <&intc 94 &omap3_pmx_core 0x136>; 133 133 pinctrl-0 = <&mmc3_pins &wl127x_gpio>; 134 134 pinctrl-names = "default"; 135 135 vmmc-supply = <&wl12xx_vmmc>;
+1 -1
arch/arm/boot/dts/logicpd-torpedo-37xx-devkit.dts
··· 35 35 * jumpering combinations for the long run. 36 36 */ 37 37 &mmc3 { 38 - interrupts-extended = <&intc 94 &omap3_pmx_core2 0x46>; 38 + interrupts-extended = <&intc 94 &omap3_pmx_core 0x136>; 39 39 pinctrl-0 = <&mmc3_pins &mmc3_core2_pins>; 40 40 pinctrl-names = "default"; 41 41 vmmc-supply = <&wl12xx_vmmc>;
+5 -1
arch/arm/boot/dts/rk3288-veyron.dtsi
··· 10 10 #include "rk3288.dtsi" 11 11 12 12 / { 13 - memory@0 { 13 + /* 14 + * The default coreboot on veyron devices ignores memory@0 nodes 15 + * and would instead create another memory node. 16 + */ 17 + memory { 14 18 device_type = "memory"; 15 19 reg = <0x0 0x0 0x0 0x80000000>; 16 20 };
+1 -1
arch/arm/boot/dts/sama5d2.dtsi
··· 314 314 0x1 0x0 0x60000000 0x10000000 315 315 0x2 0x0 0x70000000 0x10000000 316 316 0x3 0x0 0x80000000 0x10000000>; 317 - clocks = <&mck>; 317 + clocks = <&h32ck>; 318 318 status = "disabled"; 319 319 320 320 nand_controller: nand-controller {
+1 -16
arch/arm/kernel/ftrace.c
··· 183 183 unsigned long frame_pointer) 184 184 { 185 185 unsigned long return_hooker = (unsigned long) &return_to_handler; 186 - struct ftrace_graph_ent trace; 187 186 unsigned long old; 188 - int err; 189 187 190 188 if (unlikely(atomic_read(&current->tracing_graph_pause))) 191 189 return; ··· 191 193 old = *parent; 192 194 *parent = return_hooker; 193 195 194 - trace.func = self_addr; 195 - trace.depth = current->curr_ret_stack + 1; 196 - 197 - /* Only trace if the calling function expects to */ 198 - if (!ftrace_graph_entry(&trace)) { 196 + if (function_graph_enter(old, self_addr, frame_pointer, NULL)) 199 197 *parent = old; 200 - return; 201 - } 202 - 203 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 204 - frame_pointer, NULL); 205 - if (err == -EBUSY) { 206 - *parent = old; 207 - return; 208 - } 209 198 } 210 199 211 200 #ifdef CONFIG_DYNAMIC_FTRACE
+3 -1
arch/arm/mach-davinci/da830.c
··· 759 759 }; 760 760 761 761 static struct davinci_gpio_platform_data da830_gpio_platform_data = { 762 - .ngpio = 128, 762 + .no_auto_base = true, 763 + .base = 0, 764 + .ngpio = 128, 763 765 }; 764 766 765 767 int __init da830_register_gpio(void)
+3 -1
arch/arm/mach-davinci/da850.c
··· 719 719 } 720 720 721 721 static struct davinci_gpio_platform_data da850_gpio_platform_data = { 722 - .ngpio = 144, 722 + .no_auto_base = true, 723 + .base = 0, 724 + .ngpio = 144, 723 725 }; 724 726 725 727 int __init da850_register_gpio(void)
+40
arch/arm/mach-davinci/devices-da8xx.c
··· 701 701 }, 702 702 { /* interrupt */ 703 703 .start = IRQ_DA8XX_GPIO0, 704 + .end = IRQ_DA8XX_GPIO0, 705 + .flags = IORESOURCE_IRQ, 706 + }, 707 + { 708 + .start = IRQ_DA8XX_GPIO1, 709 + .end = IRQ_DA8XX_GPIO1, 710 + .flags = IORESOURCE_IRQ, 711 + }, 712 + { 713 + .start = IRQ_DA8XX_GPIO2, 714 + .end = IRQ_DA8XX_GPIO2, 715 + .flags = IORESOURCE_IRQ, 716 + }, 717 + { 718 + .start = IRQ_DA8XX_GPIO3, 719 + .end = IRQ_DA8XX_GPIO3, 720 + .flags = IORESOURCE_IRQ, 721 + }, 722 + { 723 + .start = IRQ_DA8XX_GPIO4, 724 + .end = IRQ_DA8XX_GPIO4, 725 + .flags = IORESOURCE_IRQ, 726 + }, 727 + { 728 + .start = IRQ_DA8XX_GPIO5, 729 + .end = IRQ_DA8XX_GPIO5, 730 + .flags = IORESOURCE_IRQ, 731 + }, 732 + { 733 + .start = IRQ_DA8XX_GPIO6, 734 + .end = IRQ_DA8XX_GPIO6, 735 + .flags = IORESOURCE_IRQ, 736 + }, 737 + { 738 + .start = IRQ_DA8XX_GPIO7, 739 + .end = IRQ_DA8XX_GPIO7, 740 + .flags = IORESOURCE_IRQ, 741 + }, 742 + { 743 + .start = IRQ_DA8XX_GPIO8, 704 744 .end = IRQ_DA8XX_GPIO8, 705 745 .flags = IORESOURCE_IRQ, 706 746 },
+32
arch/arm/mach-davinci/dm355.c
··· 548 548 }, 549 549 { /* interrupt */ 550 550 .start = IRQ_DM355_GPIOBNK0, 551 + .end = IRQ_DM355_GPIOBNK0, 552 + .flags = IORESOURCE_IRQ, 553 + }, 554 + { 555 + .start = IRQ_DM355_GPIOBNK1, 556 + .end = IRQ_DM355_GPIOBNK1, 557 + .flags = IORESOURCE_IRQ, 558 + }, 559 + { 560 + .start = IRQ_DM355_GPIOBNK2, 561 + .end = IRQ_DM355_GPIOBNK2, 562 + .flags = IORESOURCE_IRQ, 563 + }, 564 + { 565 + .start = IRQ_DM355_GPIOBNK3, 566 + .end = IRQ_DM355_GPIOBNK3, 567 + .flags = IORESOURCE_IRQ, 568 + }, 569 + { 570 + .start = IRQ_DM355_GPIOBNK4, 571 + .end = IRQ_DM355_GPIOBNK4, 572 + .flags = IORESOURCE_IRQ, 573 + }, 574 + { 575 + .start = IRQ_DM355_GPIOBNK5, 576 + .end = IRQ_DM355_GPIOBNK5, 577 + .flags = IORESOURCE_IRQ, 578 + }, 579 + { 580 + .start = IRQ_DM355_GPIOBNK6, 551 581 .end = IRQ_DM355_GPIOBNK6, 552 582 .flags = IORESOURCE_IRQ, 553 583 }, 554 584 }; 555 585 556 586 static struct davinci_gpio_platform_data dm355_gpio_platform_data = { 587 + .no_auto_base = true, 588 + .base = 0, 557 589 .ngpio = 104, 558 590 }; 559 591
+37
arch/arm/mach-davinci/dm365.c
··· 267 267 }, 268 268 { /* interrupt */ 269 269 .start = IRQ_DM365_GPIO0, 270 + .end = IRQ_DM365_GPIO0, 271 + .flags = IORESOURCE_IRQ, 272 + }, 273 + { 274 + .start = IRQ_DM365_GPIO1, 275 + .end = IRQ_DM365_GPIO1, 276 + .flags = IORESOURCE_IRQ, 277 + }, 278 + { 279 + .start = IRQ_DM365_GPIO2, 280 + .end = IRQ_DM365_GPIO2, 281 + .flags = IORESOURCE_IRQ, 282 + }, 283 + { 284 + .start = IRQ_DM365_GPIO3, 285 + .end = IRQ_DM365_GPIO3, 286 + .flags = IORESOURCE_IRQ, 287 + }, 288 + { 289 + .start = IRQ_DM365_GPIO4, 290 + .end = IRQ_DM365_GPIO4, 291 + .flags = IORESOURCE_IRQ, 292 + }, 293 + { 294 + .start = IRQ_DM365_GPIO5, 295 + .end = IRQ_DM365_GPIO5, 296 + .flags = IORESOURCE_IRQ, 297 + }, 298 + { 299 + .start = IRQ_DM365_GPIO6, 300 + .end = IRQ_DM365_GPIO6, 301 + .flags = IORESOURCE_IRQ, 302 + }, 303 + { 304 + .start = IRQ_DM365_GPIO7, 270 305 .end = IRQ_DM365_GPIO7, 271 306 .flags = IORESOURCE_IRQ, 272 307 }, 273 308 }; 274 309 275 310 static struct davinci_gpio_platform_data dm365_gpio_platform_data = { 311 + .no_auto_base = true, 312 + .base = 0, 276 313 .ngpio = 104, 277 314 .gpio_unbanked = 8, 278 315 };
+22
arch/arm/mach-davinci/dm644x.c
··· 492 492 }, 493 493 { /* interrupt */ 494 494 .start = IRQ_GPIOBNK0, 495 + .end = IRQ_GPIOBNK0, 496 + .flags = IORESOURCE_IRQ, 497 + }, 498 + { 499 + .start = IRQ_GPIOBNK1, 500 + .end = IRQ_GPIOBNK1, 501 + .flags = IORESOURCE_IRQ, 502 + }, 503 + { 504 + .start = IRQ_GPIOBNK2, 505 + .end = IRQ_GPIOBNK2, 506 + .flags = IORESOURCE_IRQ, 507 + }, 508 + { 509 + .start = IRQ_GPIOBNK3, 510 + .end = IRQ_GPIOBNK3, 511 + .flags = IORESOURCE_IRQ, 512 + }, 513 + { 514 + .start = IRQ_GPIOBNK4, 495 515 .end = IRQ_GPIOBNK4, 496 516 .flags = IORESOURCE_IRQ, 497 517 }, 498 518 }; 499 519 500 520 static struct davinci_gpio_platform_data dm644_gpio_platform_data = { 521 + .no_auto_base = true, 522 + .base = 0, 501 523 .ngpio = 71, 502 524 }; 503 525
+12
arch/arm/mach-davinci/dm646x.c
··· 442 442 }, 443 443 { /* interrupt */ 444 444 .start = IRQ_DM646X_GPIOBNK0, 445 + .end = IRQ_DM646X_GPIOBNK0, 446 + .flags = IORESOURCE_IRQ, 447 + }, 448 + { 449 + .start = IRQ_DM646X_GPIOBNK1, 450 + .end = IRQ_DM646X_GPIOBNK1, 451 + .flags = IORESOURCE_IRQ, 452 + }, 453 + { 454 + .start = IRQ_DM646X_GPIOBNK2, 445 455 .end = IRQ_DM646X_GPIOBNK2, 446 456 .flags = IORESOURCE_IRQ, 447 457 }, 448 458 }; 449 459 450 460 static struct davinci_gpio_platform_data dm646x_gpio_platform_data = { 461 + .no_auto_base = true, 462 + .base = 0, 451 463 .ngpio = 43, 452 464 }; 453 465
+3
arch/arm/mach-omap1/board-ams-delta.c
··· 750 750 struct modem_private_data *priv = port->private_data; 751 751 int ret; 752 752 753 + if (!priv) 754 + return; 755 + 753 756 if (IS_ERR(priv->regulator)) 754 757 return; 755 758
+1 -1
arch/arm/mach-omap2/prm44xx.c
··· 351 351 * to occur, WAKEUPENABLE bits must be set in the pad mux registers, and 352 352 * omap44xx_prm_reconfigure_io_chain() must be called. No return value. 353 353 */ 354 - static void __init omap44xx_prm_enable_io_wakeup(void) 354 + static void omap44xx_prm_enable_io_wakeup(void) 355 355 { 356 356 s32 inst = omap4_prmst_get_prm_dev_inst(); 357 357
+25
arch/arm64/Kconfig
··· 497 497 498 498 If unsure, say Y. 499 499 500 + config ARM64_ERRATUM_1286807 501 + bool "Cortex-A76: Modification of the translation table for a virtual address might lead to read-after-read ordering violation" 502 + default y 503 + select ARM64_WORKAROUND_REPEAT_TLBI 504 + help 505 + This option adds workaround for ARM Cortex-A76 erratum 1286807 506 + 507 + On the affected Cortex-A76 cores (r0p0 to r3p0), if a virtual 508 + address for a cacheable mapping of a location is being 509 + accessed by a core while another core is remapping the virtual 510 + address to a new physical page using the recommended 511 + break-before-make sequence, then under very rare circumstances 512 + TLBI+DSB completes before a read using the translation being 513 + invalidated has been observed by other observers. The 514 + workaround repeats the TLBI+DSB operation. 515 + 516 + If unsure, say Y. 517 + 500 518 config CAVIUM_ERRATUM_22375 501 519 bool "Cavium erratum 22375, 24313" 502 520 default y ··· 584 566 is unchanged. Work around the erratum by invalidating the walk cache 585 567 entries for the trampoline before entering the kernel proper. 586 568 569 + config ARM64_WORKAROUND_REPEAT_TLBI 570 + bool 571 + help 572 + Enable the repeat TLBI workaround for Falkor erratum 1009 and 573 + Cortex-A76 erratum 1286807. 574 + 587 575 config QCOM_FALKOR_ERRATUM_1009 588 576 bool "Falkor E1009: Prematurely complete a DSB after a TLBI" 589 577 default y 578 + select ARM64_WORKAROUND_REPEAT_TLBI 590 579 help 591 580 On Falkor v1, the CPU may prematurely complete a DSB following a 592 581 TLBI xxIS invalidate maintenance operation. Repeat the TLBI operation
+4
arch/arm64/boot/dts/qcom/msm8998-mtp.dtsi
··· 241 241 }; 242 242 }; 243 243 }; 244 + 245 + &tlmm { 246 + gpio-reserved-ranges = <0 4>, <81 4>; 247 + };
+4
arch/arm64/boot/dts/qcom/sdm845-mtp.dts
··· 352 352 status = "okay"; 353 353 }; 354 354 355 + &tlmm { 356 + gpio-reserved-ranges = <0 4>, <81 4>; 357 + }; 358 + 355 359 &uart9 { 356 360 status = "okay"; 357 361 };
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-puma-haikou.dts
··· 153 153 }; 154 154 155 155 &pcie0 { 156 - ep-gpios = <&gpio4 RK_PC6 GPIO_ACTIVE_LOW>; 156 + ep-gpios = <&gpio4 RK_PC6 GPIO_ACTIVE_HIGH>; 157 157 num-lanes = <4>; 158 158 pinctrl-names = "default"; 159 159 pinctrl-0 = <&pcie_clkreqn_cpm>;
-12
arch/arm64/boot/dts/rockchip/rk3399-rock960.dtsi
··· 57 57 regulator-always-on; 58 58 vin-supply = <&vcc_sys>; 59 59 }; 60 - 61 - vdd_log: vdd-log { 62 - compatible = "pwm-regulator"; 63 - pwms = <&pwm2 0 25000 0>; 64 - regulator-name = "vdd_log"; 65 - regulator-min-microvolt = <800000>; 66 - regulator-max-microvolt = <1400000>; 67 - regulator-always-on; 68 - regulator-boot-on; 69 - vin-supply = <&vcc_sys>; 70 - }; 71 - 72 60 }; 73 61 74 62 &cpu_l0 {
+1 -1
arch/arm64/boot/dts/ti/k3-am65-wakeup.dtsi
··· 36 36 37 37 wkup_uart0: serial@42300000 { 38 38 compatible = "ti,am654-uart"; 39 - reg = <0x00 0x42300000 0x00 0x100>; 39 + reg = <0x42300000 0x100>; 40 40 reg-shift = <2>; 41 41 reg-io-width = <4>; 42 42 interrupts = <GIC_SPI 697 IRQ_TYPE_LEVEL_HIGH>;
+13
arch/arm64/include/asm/ftrace.h
··· 56 56 { 57 57 return is_compat_task(); 58 58 } 59 + 60 + #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME 61 + 62 + static inline bool arch_syscall_match_sym_name(const char *sym, 63 + const char *name) 64 + { 65 + /* 66 + * Since all syscall functions have __arm64_ prefix, we must skip it. 67 + * However, as we described above, we decided to ignore compat 68 + * syscalls, so we don't care about __arm64_compat_ prefix here. 69 + */ 70 + return !strcmp(sym + 8, name); 71 + } 59 72 #endif /* ifndef __ASSEMBLY__ */ 60 73 61 74 #endif /* __ASM_FTRACE_H */
+2 -2
arch/arm64/include/asm/tlbflush.h
··· 41 41 ALTERNATIVE("nop\n nop", \ 42 42 "dsb ish\n tlbi " #op, \ 43 43 ARM64_WORKAROUND_REPEAT_TLBI, \ 44 - CONFIG_QCOM_FALKOR_ERRATUM_1009) \ 44 + CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \ 45 45 : : ) 46 46 47 47 #define __TLBI_1(op, arg) asm ("tlbi " #op ", %0\n" \ 48 48 ALTERNATIVE("nop\n nop", \ 49 49 "dsb ish\n tlbi " #op ", %0", \ 50 50 ARM64_WORKAROUND_REPEAT_TLBI, \ 51 - CONFIG_QCOM_FALKOR_ERRATUM_1009) \ 51 + CONFIG_ARM64_WORKAROUND_REPEAT_TLBI) \ 52 52 : : "r" (arg)) 53 53 54 54 #define __TLBI_N(op, arg, n, ...) __TLBI_##n(op, arg)
+17 -3
arch/arm64/kernel/cpu_errata.c
··· 570 570 571 571 #endif 572 572 573 + #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI 574 + 575 + static const struct midr_range arm64_repeat_tlbi_cpus[] = { 576 + #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1009 577 + MIDR_RANGE(MIDR_QCOM_FALKOR_V1, 0, 0, 0, 0), 578 + #endif 579 + #ifdef CONFIG_ARM64_ERRATUM_1286807 580 + MIDR_RANGE(MIDR_CORTEX_A76, 0, 0, 3, 0), 581 + #endif 582 + {}, 583 + }; 584 + 585 + #endif 586 + 573 587 const struct arm64_cpu_capabilities arm64_errata[] = { 574 588 #if defined(CONFIG_ARM64_ERRATUM_826319) || \ 575 589 defined(CONFIG_ARM64_ERRATUM_827319) || \ ··· 709 695 .matches = is_kryo_midr, 710 696 }, 711 697 #endif 712 - #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1009 698 + #ifdef CONFIG_ARM64_WORKAROUND_REPEAT_TLBI 713 699 { 714 - .desc = "Qualcomm Technologies Falkor erratum 1009", 700 + .desc = "Qualcomm erratum 1009, ARM erratum 1286807", 715 701 .capability = ARM64_WORKAROUND_REPEAT_TLBI, 716 - ERRATA_MIDR_REV(MIDR_QCOM_FALKOR_V1, 0, 0), 702 + ERRATA_MIDR_RANGE_LIST(arm64_repeat_tlbi_cpus), 717 703 }, 718 704 #endif 719 705 #ifdef CONFIG_ARM64_ERRATUM_858921
+1 -14
arch/arm64/kernel/ftrace.c
··· 216 216 { 217 217 unsigned long return_hooker = (unsigned long)&return_to_handler; 218 218 unsigned long old; 219 - struct ftrace_graph_ent trace; 220 - int err; 221 219 222 220 if (unlikely(atomic_read(&current->tracing_graph_pause))) 223 221 return; ··· 227 229 */ 228 230 old = *parent; 229 231 230 - trace.func = self_addr; 231 - trace.depth = current->curr_ret_stack + 1; 232 - 233 - /* Only trace if the calling function expects to */ 234 - if (!ftrace_graph_entry(&trace)) 235 - return; 236 - 237 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 238 - frame_pointer, NULL); 239 - if (err == -EBUSY) 240 - return; 241 - else 232 + if (!function_graph_enter(old, self_addr, frame_pointer, NULL)) 242 233 *parent = return_hooker; 243 234 } 244 235
+17 -9
arch/arm64/net/bpf_jit_comp.c
··· 351 351 * >0 - successfully JITed a 16-byte eBPF instruction. 352 352 * <0 - failed to JIT. 353 353 */ 354 - static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx) 354 + static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, 355 + bool extra_pass) 355 356 { 356 357 const u8 code = insn->code; 357 358 const u8 dst = bpf2a64[insn->dst_reg]; ··· 626 625 case BPF_JMP | BPF_CALL: 627 626 { 628 627 const u8 r0 = bpf2a64[BPF_REG_0]; 629 - const u64 func = (u64)__bpf_call_base + imm; 628 + bool func_addr_fixed; 629 + u64 func_addr; 630 + int ret; 630 631 631 - if (ctx->prog->is_func) 632 - emit_addr_mov_i64(tmp, func, ctx); 632 + ret = bpf_jit_get_func_addr(ctx->prog, insn, extra_pass, 633 + &func_addr, &func_addr_fixed); 634 + if (ret < 0) 635 + return ret; 636 + if (func_addr_fixed) 637 + /* We can use optimized emission here. */ 638 + emit_a64_mov_i64(tmp, func_addr, ctx); 633 639 else 634 - emit_a64_mov_i64(tmp, func, ctx); 640 + emit_addr_mov_i64(tmp, func_addr, ctx); 635 641 emit(A64_BLR(tmp), ctx); 636 642 emit(A64_MOV(1, r0, A64_R(0)), ctx); 637 643 break; ··· 761 753 return 0; 762 754 } 763 755 764 - static int build_body(struct jit_ctx *ctx) 756 + static int build_body(struct jit_ctx *ctx, bool extra_pass) 765 757 { 766 758 const struct bpf_prog *prog = ctx->prog; 767 759 int i; ··· 770 762 const struct bpf_insn *insn = &prog->insnsi[i]; 771 763 int ret; 772 764 773 - ret = build_insn(insn, ctx); 765 + ret = build_insn(insn, ctx, extra_pass); 774 766 if (ret > 0) { 775 767 i++; 776 768 if (ctx->image == NULL) ··· 866 858 /* 1. Initial fake pass to compute ctx->idx. */ 867 859 868 860 /* Fake pass to fill in ctx->offset. */ 869 - if (build_body(&ctx)) { 861 + if (build_body(&ctx, extra_pass)) { 870 862 prog = orig_prog; 871 863 goto out_off; 872 864 } ··· 896 888 897 889 build_prologue(&ctx, was_classic); 898 890 899 - if (build_body(&ctx)) { 891 + if (build_body(&ctx, extra_pass)) { 900 892 bpf_jit_binary_free(header); 901 893 prog = orig_prog; 902 894 goto out_off;
+3 -1
arch/ia64/include/asm/numa.h
··· 59 59 */ 60 60 61 61 extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES]; 62 - #define node_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)]) 62 + #define slit_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)]) 63 + extern int __node_distance(int from, int to); 64 + #define node_distance(from,to) __node_distance(from, to) 63 65 64 66 extern int paddr_to_nid(unsigned long paddr); 65 67
+3 -3
arch/ia64/kernel/acpi.c
··· 578 578 if (!slit_table) { 579 579 for (i = 0; i < MAX_NUMNODES; i++) 580 580 for (j = 0; j < MAX_NUMNODES; j++) 581 - node_distance(i, j) = i == j ? LOCAL_DISTANCE : 582 - REMOTE_DISTANCE; 581 + slit_distance(i, j) = i == j ? 582 + LOCAL_DISTANCE : REMOTE_DISTANCE; 583 583 return; 584 584 } 585 585 ··· 592 592 if (!pxm_bit_test(j)) 593 593 continue; 594 594 node_to = pxm_to_node(j); 595 - node_distance(node_from, node_to) = 595 + slit_distance(node_from, node_to) = 596 596 slit_table->entry[i * slit_table->locality_count + j]; 597 597 } 598 598 }
+6
arch/ia64/mm/numa.c
··· 36 36 */ 37 37 u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES]; 38 38 39 + int __node_distance(int from, int to) 40 + { 41 + return slit_distance(from, to); 42 + } 43 + EXPORT_SYMBOL(__node_distance); 44 + 39 45 /* Identify which cnode a physical address resides on */ 40 46 int 41 47 paddr_to_nid(unsigned long paddr)
+2 -13
arch/microblaze/kernel/ftrace.c
··· 22 22 void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr) 23 23 { 24 24 unsigned long old; 25 - int faulted, err; 26 - struct ftrace_graph_ent trace; 25 + int faulted; 27 26 unsigned long return_hooker = (unsigned long) 28 27 &return_to_handler; 29 28 ··· 62 63 return; 63 64 } 64 65 65 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL); 66 - if (err == -EBUSY) { 66 + if (function_graph_enter(old, self_addr, 0, NULL)) 67 67 *parent = old; 68 - return; 69 - } 70 - 71 - trace.func = self_addr; 72 - /* Only trace if the calling function expects to */ 73 - if (!ftrace_graph_entry(&trace)) { 74 - current->curr_ret_stack--; 75 - *parent = old; 76 - } 77 68 } 78 69 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 79 70
+1 -1
arch/mips/include/asm/syscall.h
··· 73 73 #ifdef CONFIG_64BIT 74 74 case 4: case 5: case 6: case 7: 75 75 #ifdef CONFIG_MIPS32_O32 76 - if (test_thread_flag(TIF_32BIT_REGS)) 76 + if (test_tsk_thread_flag(task, TIF_32BIT_REGS)) 77 77 return get_user(*arg, (int *)usp + n); 78 78 else 79 79 #endif
+2 -12
arch/mips/kernel/ftrace.c
··· 322 322 unsigned long fp) 323 323 { 324 324 unsigned long old_parent_ra; 325 - struct ftrace_graph_ent trace; 326 325 unsigned long return_hooker = (unsigned long) 327 326 &return_to_handler; 328 327 int faulted, insns; ··· 368 369 if (unlikely(faulted)) 369 370 goto out; 370 371 371 - if (ftrace_push_return_trace(old_parent_ra, self_ra, &trace.depth, fp, 372 - NULL) == -EBUSY) { 373 - *parent_ra_addr = old_parent_ra; 374 - return; 375 - } 376 - 377 372 /* 378 373 * Get the recorded ip of the current mcount calling site in the 379 374 * __mcount_loc section, which will be used to filter the function ··· 375 382 */ 376 383 377 384 insns = core_kernel_text(self_ra) ? 2 : MCOUNT_OFFSET_INSNS + 1; 378 - trace.func = self_ra - (MCOUNT_INSN_SIZE * insns); 385 + self_ra -= (MCOUNT_INSN_SIZE * insns); 379 386 380 - /* Only trace if the calling function expects to */ 381 - if (!ftrace_graph_entry(&trace)) { 382 - current->curr_ret_stack--; 387 + if (function_graph_enter(old_parent_ra, self_ra, fp, NULL)) 383 388 *parent_ra_addr = old_parent_ra; 384 - } 385 389 return; 386 390 out: 387 391 ftrace_graph_stop();
+1 -1
arch/mips/ralink/mt7620.c
··· 84 84 }; 85 85 static struct rt2880_pmx_func nd_sd_grp[] = { 86 86 FUNC("nand", MT7620_GPIO_MODE_NAND, 45, 15), 87 - FUNC("sd", MT7620_GPIO_MODE_SD, 45, 15) 87 + FUNC("sd", MT7620_GPIO_MODE_SD, 47, 13) 88 88 }; 89 89 90 90 static struct rt2880_pmx_group mt7620a_pinmux_data[] = {
+2 -16
arch/nds32/kernel/ftrace.c
··· 211 211 unsigned long frame_pointer) 212 212 { 213 213 unsigned long return_hooker = (unsigned long)&return_to_handler; 214 - struct ftrace_graph_ent trace; 215 214 unsigned long old; 216 - int err; 217 215 218 216 if (unlikely(atomic_read(&current->tracing_graph_pause))) 219 217 return; 220 218 221 219 old = *parent; 222 220 223 - trace.func = self_addr; 224 - trace.depth = current->curr_ret_stack + 1; 225 - 226 - /* Only trace if the calling function expects to */ 227 - if (!ftrace_graph_entry(&trace)) 228 - return; 229 - 230 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 231 - frame_pointer, NULL); 232 - 233 - if (err == -EBUSY) 234 - return; 235 - 236 - *parent = return_hooker; 221 + if (!function_graph_enter(old, self_addr, frame_pointer, NULL)) 222 + *parent = return_hooker; 237 223 } 238 224 239 225 noinline void ftrace_graph_caller(void)
+3 -14
arch/parisc/kernel/ftrace.c
··· 30 30 unsigned long self_addr) 31 31 { 32 32 unsigned long old; 33 - struct ftrace_graph_ent trace; 34 33 extern int parisc_return_to_handler; 35 34 36 35 if (unlikely(ftrace_graph_is_dead())) ··· 40 41 41 42 old = *parent; 42 43 43 - trace.func = self_addr; 44 - trace.depth = current->curr_ret_stack + 1; 45 - 46 - /* Only trace if the calling function expects to */ 47 - if (!ftrace_graph_entry(&trace)) 48 - return; 49 - 50 - if (ftrace_push_return_trace(old, self_addr, &trace.depth, 51 - 0, NULL) == -EBUSY) 52 - return; 53 - 54 - /* activate parisc_return_to_handler() as return point */ 55 - *parent = (unsigned long) &parisc_return_to_handler; 44 + if (!function_graph_enter(old, self_addr, 0, NULL)) 45 + /* activate parisc_return_to_handler() as return point */ 46 + *parent = (unsigned long) &parisc_return_to_handler; 56 47 } 57 48 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */ 58 49
+2 -13
arch/powerpc/kernel/trace/ftrace.c
··· 950 950 */ 951 951 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip) 952 952 { 953 - struct ftrace_graph_ent trace; 954 953 unsigned long return_hooker; 955 954 956 955 if (unlikely(ftrace_graph_is_dead())) ··· 960 961 961 962 return_hooker = ppc_function_entry(return_to_handler); 962 963 963 - trace.func = ip; 964 - trace.depth = current->curr_ret_stack + 1; 965 - 966 - /* Only trace if the calling function expects to */ 967 - if (!ftrace_graph_entry(&trace)) 968 - goto out; 969 - 970 - if (ftrace_push_return_trace(parent, ip, &trace.depth, 0, 971 - NULL) == -EBUSY) 972 - goto out; 973 - 974 - parent = return_hooker; 964 + if (!function_graph_enter(parent, ip, 0, NULL)) 965 + parent = return_hooker; 975 966 out: 976 967 return parent; 977 968 }
+1
arch/powerpc/kvm/book3s_hv.c
··· 983 983 ret = kvmhv_enter_nested_guest(vcpu); 984 984 if (ret == H_INTERRUPT) { 985 985 kvmppc_set_gpr(vcpu, 3, 0); 986 + vcpu->arch.hcall_needed = 0; 986 987 return -EINTR; 987 988 } 988 989 break;
+38 -19
arch/powerpc/net/bpf_jit_comp64.c
··· 166 166 PPC_BLR(); 167 167 } 168 168 169 - static void bpf_jit_emit_func_call(u32 *image, struct codegen_context *ctx, u64 func) 169 + static void bpf_jit_emit_func_call_hlp(u32 *image, struct codegen_context *ctx, 170 + u64 func) 171 + { 172 + #ifdef PPC64_ELF_ABI_v1 173 + /* func points to the function descriptor */ 174 + PPC_LI64(b2p[TMP_REG_2], func); 175 + /* Load actual entry point from function descriptor */ 176 + PPC_BPF_LL(b2p[TMP_REG_1], b2p[TMP_REG_2], 0); 177 + /* ... and move it to LR */ 178 + PPC_MTLR(b2p[TMP_REG_1]); 179 + /* 180 + * Load TOC from function descriptor at offset 8. 181 + * We can clobber r2 since we get called through a 182 + * function pointer (so caller will save/restore r2) 183 + * and since we don't use a TOC ourself. 184 + */ 185 + PPC_BPF_LL(2, b2p[TMP_REG_2], 8); 186 + #else 187 + /* We can clobber r12 */ 188 + PPC_FUNC_ADDR(12, func); 189 + PPC_MTLR(12); 190 + #endif 191 + PPC_BLRL(); 192 + } 193 + 194 + static void bpf_jit_emit_func_call_rel(u32 *image, struct codegen_context *ctx, 195 + u64 func) 170 196 { 171 197 unsigned int i, ctx_idx = ctx->idx; 172 198 ··· 299 273 { 300 274 const struct bpf_insn *insn = fp->insnsi; 301 275 int flen = fp->len; 302 - int i; 276 + int i, ret; 303 277 304 278 /* Start of epilogue code - will only be valid 2nd pass onwards */ 305 279 u32 exit_addr = addrs[flen]; ··· 310 284 u32 src_reg = b2p[insn[i].src_reg]; 311 285 s16 off = insn[i].off; 312 286 s32 imm = insn[i].imm; 287 + bool func_addr_fixed; 288 + u64 func_addr; 313 289 u64 imm64; 314 - u8 *func; 315 290 u32 true_cond; 316 291 u32 tmp_idx; 317 292 ··· 738 711 case BPF_JMP | BPF_CALL: 739 712 ctx->seen |= SEEN_FUNC; 740 713 741 - /* bpf function call */ 742 - if (insn[i].src_reg == BPF_PSEUDO_CALL) 743 - if (!extra_pass) 744 - func = NULL; 745 - else if (fp->aux->func && off < fp->aux->func_cnt) 746 - /* use the subprog id from the off 747 - * field to lookup the callee address 748 - */ 749 - func = (u8 *) fp->aux->func[off]->bpf_func; 750 - else 751 - return -EINVAL; 752 - /* kernel helper call */ 714 + ret = bpf_jit_get_func_addr(fp, &insn[i], extra_pass, 715 + &func_addr, &func_addr_fixed); 716 + if (ret < 0) 717 + return ret; 718 + 719 + if (func_addr_fixed) 720 + bpf_jit_emit_func_call_hlp(image, ctx, func_addr); 753 721 else 754 - func = (u8 *) __bpf_call_base + imm; 755 - 756 - bpf_jit_emit_func_call(image, ctx, (u64)func); 757 - 722 + bpf_jit_emit_func_call_rel(image, ctx, func_addr); 758 723 /* move return value from r3 to BPF_REG_0 */ 759 724 PPC_MR(b2p[BPF_REG_0], 3); 760 725 break;
+2 -12
arch/riscv/kernel/ftrace.c
··· 132 132 { 133 133 unsigned long return_hooker = (unsigned long)&return_to_handler; 134 134 unsigned long old; 135 - struct ftrace_graph_ent trace; 136 135 int err; 137 136 138 137 if (unlikely(atomic_read(&current->tracing_graph_pause))) ··· 143 144 */ 144 145 old = *parent; 145 146 146 - trace.func = self_addr; 147 - trace.depth = current->curr_ret_stack + 1; 148 - 149 - if (!ftrace_graph_entry(&trace)) 150 - return; 151 - 152 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 153 - frame_pointer, parent); 154 - if (err == -EBUSY) 155 - return; 156 - *parent = return_hooker; 147 + if (function_graph_enter(old, self_addr, frame_pointer, parent)) 148 + *parent = return_hooker; 157 149 } 158 150 159 151 #ifdef CONFIG_DYNAMIC_FTRACE
+2 -11
arch/s390/kernel/ftrace.c
··· 203 203 */ 204 204 unsigned long prepare_ftrace_return(unsigned long parent, unsigned long ip) 205 205 { 206 - struct ftrace_graph_ent trace; 207 - 208 206 if (unlikely(ftrace_graph_is_dead())) 209 207 goto out; 210 208 if (unlikely(atomic_read(&current->tracing_graph_pause))) 211 209 goto out; 212 210 ip -= MCOUNT_INSN_SIZE; 213 - trace.func = ip; 214 - trace.depth = current->curr_ret_stack + 1; 215 - /* Only trace if the calling function expects to. */ 216 - if (!ftrace_graph_entry(&trace)) 217 - goto out; 218 - if (ftrace_push_return_trace(parent, ip, &trace.depth, 0, 219 - NULL) == -EBUSY) 220 - goto out; 221 - parent = (unsigned long) return_to_handler; 211 + if (!function_graph_enter(parent, ip, 0, NULL)) 212 + parent = (unsigned long) return_to_handler; 222 213 out: 223 214 return parent; 224 215 }
+2
arch/s390/kernel/perf_cpum_cf.c
··· 346 346 break; 347 347 348 348 case PERF_TYPE_HARDWARE: 349 + if (is_sampling_event(event)) /* No sampling support */ 350 + return -ENOENT; 349 351 ev = attr->config; 350 352 /* Count user space (problem-state) only */ 351 353 if (!attr->exclude_user && attr->exclude_kernel) {
+1
arch/s390/mm/pgalloc.c
··· 131 131 } 132 132 133 133 pgd = mm->pgd; 134 + mm_dec_nr_pmds(mm); 134 135 mm->pgd = (pgd_t *) (pgd_val(*pgd) & _REGION_ENTRY_ORIGIN); 135 136 mm->context.asce_limit = _REGION3_SIZE; 136 137 mm->context.asce = __pa(mm->pgd) | _ASCE_TABLE_LENGTH |
+2 -14
arch/sh/kernel/ftrace.c
··· 321 321 void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr) 322 322 { 323 323 unsigned long old; 324 - int faulted, err; 325 - struct ftrace_graph_ent trace; 324 + int faulted; 326 325 unsigned long return_hooker = (unsigned long)&return_to_handler; 327 326 328 327 if (unlikely(ftrace_graph_is_dead())) ··· 364 365 return; 365 366 } 366 367 367 - err = ftrace_push_return_trace(old, self_addr, &trace.depth, 0, NULL); 368 - if (err == -EBUSY) { 368 + if (function_graph_enter(old, self_addr, 0, NULL)) 369 369 __raw_writel(old, parent); 370 - return; 371 - } 372 - 373 - trace.func = self_addr; 374 - 375 - /* Only trace if the calling function expects to */ 376 - if (!ftrace_graph_entry(&trace)) { 377 - current->curr_ret_stack--; 378 - __raw_writel(old, parent); 379 - } 380 370 } 381 371 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+1 -10
arch/sparc/kernel/ftrace.c
··· 126 126 unsigned long frame_pointer) 127 127 { 128 128 unsigned long return_hooker = (unsigned long) &return_to_handler; 129 - struct ftrace_graph_ent trace; 130 129 131 130 if (unlikely(atomic_read(&current->tracing_graph_pause))) 132 131 return parent + 8UL; 133 132 134 - trace.func = self_addr; 135 - trace.depth = current->curr_ret_stack + 1; 136 - 137 - /* Only trace if the calling function expects to */ 138 - if (!ftrace_graph_entry(&trace)) 139 - return parent + 8UL; 140 - 141 - if (ftrace_push_return_trace(parent, self_addr, &trace.depth, 142 - frame_pointer, NULL) == -EBUSY) 133 + if (function_graph_enter(parent, self_addr, frame_pointer, NULL)) 143 134 return parent + 8UL; 144 135 145 136 return return_hooker;
+68 -29
arch/sparc/net/bpf_jit_comp_64.c
··· 791 791 } 792 792 793 793 /* Just skip the save instruction and the ctx register move. */ 794 - #define BPF_TAILCALL_PROLOGUE_SKIP 16 794 + #define BPF_TAILCALL_PROLOGUE_SKIP 32 795 795 #define BPF_TAILCALL_CNT_SP_OFF (STACK_BIAS + 128) 796 796 797 797 static void build_prologue(struct jit_ctx *ctx) ··· 824 824 const u8 vfp = bpf2sparc[BPF_REG_FP]; 825 825 826 826 emit(ADD | IMMED | RS1(FP) | S13(STACK_BIAS) | RD(vfp), ctx); 827 + } else { 828 + emit_nop(ctx); 827 829 } 828 830 829 831 emit_reg_move(I0, O0, ctx); 832 + emit_reg_move(I1, O1, ctx); 833 + emit_reg_move(I2, O2, ctx); 834 + emit_reg_move(I3, O3, ctx); 835 + emit_reg_move(I4, O4, ctx); 830 836 /* If you add anything here, adjust BPF_TAILCALL_PROLOGUE_SKIP above. */ 831 837 } 832 838 ··· 1276 1270 const u8 tmp2 = bpf2sparc[TMP_REG_2]; 1277 1271 u32 opcode = 0, rs2; 1278 1272 1273 + if (insn->dst_reg == BPF_REG_FP) 1274 + ctx->saw_frame_pointer = true; 1275 + 1279 1276 ctx->tmp_2_used = true; 1280 1277 emit_loadimm(imm, tmp2, ctx); 1281 1278 ··· 1317 1308 const u8 tmp = bpf2sparc[TMP_REG_1]; 1318 1309 u32 opcode = 0, rs2; 1319 1310 1311 + if (insn->dst_reg == BPF_REG_FP) 1312 + ctx->saw_frame_pointer = true; 1313 + 1320 1314 switch (BPF_SIZE(code)) { 1321 1315 case BPF_W: 1322 1316 opcode = ST32; ··· 1352 1340 const u8 tmp2 = bpf2sparc[TMP_REG_2]; 1353 1341 const u8 tmp3 = bpf2sparc[TMP_REG_3]; 1354 1342 1343 + if (insn->dst_reg == BPF_REG_FP) 1344 + ctx->saw_frame_pointer = true; 1345 + 1355 1346 ctx->tmp_1_used = true; 1356 1347 ctx->tmp_2_used = true; 1357 1348 ctx->tmp_3_used = true; ··· 1374 1359 const u8 tmp = bpf2sparc[TMP_REG_1]; 1375 1360 const u8 tmp2 = bpf2sparc[TMP_REG_2]; 1376 1361 const u8 tmp3 = bpf2sparc[TMP_REG_3]; 1362 + 1363 + if (insn->dst_reg == BPF_REG_FP) 1364 + ctx->saw_frame_pointer = true; 1377 1365 1378 1366 ctx->tmp_1_used = true; 1379 1367 ctx->tmp_2_used = true; ··· 1443 1425 struct bpf_prog *tmp, *orig_prog = prog; 1444 1426 struct sparc64_jit_data *jit_data; 1445 1427 struct bpf_binary_header *header; 1428 + u32 prev_image_size, image_size; 1446 1429 bool tmp_blinded = false; 1447 1430 bool extra_pass = false; 1448 1431 struct jit_ctx ctx; 1449 - u32 image_size; 1450 1432 u8 *image_ptr; 1451 - int pass; 1433 + int pass, i; 1452 1434 1453 1435 if (!prog->jit_requested) 1454 1436 return orig_prog; ··· 1479 1461 header = jit_data->header; 1480 1462 extra_pass = true; 1481 1463 image_size = sizeof(u32) * ctx.idx; 1464 + prev_image_size = image_size; 1465 + pass = 1; 1482 1466 goto skip_init_ctx; 1483 1467 } 1484 1468 1485 1469 memset(&ctx, 0, sizeof(ctx)); 1486 1470 ctx.prog = prog; 1487 1471 1488 - ctx.offset = kcalloc(prog->len, sizeof(unsigned int), GFP_KERNEL); 1472 + ctx.offset = kmalloc_array(prog->len, sizeof(unsigned int), GFP_KERNEL); 1489 1473 if (ctx.offset == NULL) { 1490 1474 prog = orig_prog; 1491 1475 goto out_off; 1492 1476 } 1493 1477 1494 - /* Fake pass to detect features used, and get an accurate assessment 1495 - * of what the final image size will be. 1478 + /* Longest sequence emitted is for bswap32, 12 instructions. Pre-cook 1479 + * the offset array so that we converge faster. 1496 1480 */ 1497 - if (build_body(&ctx)) { 1498 - prog = orig_prog; 1499 - goto out_off; 1481 + for (i = 0; i < prog->len; i++) 1482 + ctx.offset[i] = i * (12 * 4); 1483 + 1484 + prev_image_size = ~0U; 1485 + for (pass = 1; pass < 40; pass++) { 1486 + ctx.idx = 0; 1487 + 1488 + build_prologue(&ctx); 1489 + if (build_body(&ctx)) { 1490 + prog = orig_prog; 1491 + goto out_off; 1492 + } 1493 + build_epilogue(&ctx); 1494 + 1495 + if (bpf_jit_enable > 1) 1496 + pr_info("Pass %d: size = %u, seen = [%c%c%c%c%c%c]\n", pass, 1497 + ctx.idx * 4, 1498 + ctx.tmp_1_used ? '1' : ' ', 1499 + ctx.tmp_2_used ? '2' : ' ', 1500 + ctx.tmp_3_used ? '3' : ' ', 1501 + ctx.saw_frame_pointer ? 'F' : ' ', 1502 + ctx.saw_call ? 'C' : ' ', 1503 + ctx.saw_tail_call ? 'T' : ' '); 1504 + 1505 + if (ctx.idx * 4 == prev_image_size) 1506 + break; 1507 + prev_image_size = ctx.idx * 4; 1508 + cond_resched(); 1500 1509 } 1501 - build_prologue(&ctx); 1502 - build_epilogue(&ctx); 1503 1510 1504 1511 /* Now we know the actual image size. */ 1505 1512 image_size = sizeof(u32) * ctx.idx; ··· 1537 1494 1538 1495 ctx.image = (u32 *)image_ptr; 1539 1496 skip_init_ctx: 1540 - for (pass = 1; pass < 3; pass++) { 1541 - ctx.idx = 0; 1497 + ctx.idx = 0; 1542 1498 1543 - build_prologue(&ctx); 1499 + build_prologue(&ctx); 1544 1500 1545 - if (build_body(&ctx)) { 1546 - bpf_jit_binary_free(header); 1547 - prog = orig_prog; 1548 - goto out_off; 1549 - } 1501 + if (build_body(&ctx)) { 1502 + bpf_jit_binary_free(header); 1503 + prog = orig_prog; 1504 + goto out_off; 1505 + } 1550 1506 1551 - build_epilogue(&ctx); 1507 + build_epilogue(&ctx); 1552 1508 1553 - if (bpf_jit_enable > 1) 1554 - pr_info("Pass %d: shrink = %d, seen = [%c%c%c%c%c%c]\n", pass, 1555 - image_size - (ctx.idx * 4), 1556 - ctx.tmp_1_used ? '1' : ' ', 1557 - ctx.tmp_2_used ? '2' : ' ', 1558 - ctx.tmp_3_used ? '3' : ' ', 1559 - ctx.saw_frame_pointer ? 'F' : ' ', 1560 - ctx.saw_call ? 'C' : ' ', 1561 - ctx.saw_tail_call ? 'T' : ' ') ; 1509 + if (ctx.idx * 4 != prev_image_size) { 1510 + pr_err("bpf_jit: Failed to converge, prev_size=%u size=%d\n", 1511 + prev_image_size, ctx.idx * 4); 1512 + bpf_jit_binary_free(header); 1513 + prog = orig_prog; 1514 + goto out_off; 1562 1515 } 1563 1516 1564 1517 if (bpf_jit_enable > 1)
+1 -11
arch/x86/Kconfig
··· 444 444 branches. Requires a compiler with -mindirect-branch=thunk-extern 445 445 support for full protection. The kernel may run slower. 446 446 447 - Without compiler support, at least indirect branches in assembler 448 - code are eliminated. Since this includes the syscall entry path, 449 - it is not entirely pointless. 450 - 451 447 config INTEL_RDT 452 448 bool "Intel Resource Director Technology support" 453 449 depends on X86 && CPU_SUP_INTEL ··· 1000 1004 to the kernel image. 1001 1005 1002 1006 config SCHED_SMT 1003 - bool "SMT (Hyperthreading) scheduler support" 1004 - depends on SMP 1005 - ---help--- 1006 - SMT scheduler support improves the CPU scheduler's decision making 1007 - when dealing with Intel Pentium 4 chips with HyperThreading at a 1008 - cost of slightly increased overhead in some places. If unsure say 1009 - N here. 1007 + def_bool y if SMP 1010 1008 1011 1009 config SCHED_MC 1012 1010 def_bool y
+3 -2
arch/x86/Makefile
··· 220 220 221 221 # Avoid indirect branches in kernel to deal with Spectre 222 222 ifdef CONFIG_RETPOLINE 223 - ifneq ($(RETPOLINE_CFLAGS),) 224 - KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) -DRETPOLINE 223 + ifeq ($(RETPOLINE_CFLAGS),) 224 + $(error You are building kernel with non-retpoline compiler, please update your compiler.) 225 225 endif 226 + KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) 226 227 endif 227 228 228 229 archscripts: scripts_basic
+1 -5
arch/x86/boot/header.S
··· 300 300 # Part 2 of the header, from the old setup.S 301 301 302 302 .ascii "HdrS" # header signature 303 - .word 0x020e # header version number (>= 0x0105) 303 + .word 0x020d # header version number (>= 0x0105) 304 304 # or else old loadlin-1.5 will fail) 305 305 .globl realmode_swtch 306 306 realmode_swtch: .word 0, 0 # default_switch, SETUPSEG ··· 557 557 558 558 init_size: .long INIT_SIZE # kernel initialization size 559 559 handover_offset: .long 0 # Filled in by build.c 560 - 561 - acpi_rsdp_addr: .quad 0 # 64-bit physical pointer to the 562 - # ACPI RSDP table, added with 563 - # version 2.14 564 560 565 561 # End of setup header ##################################################### 566 562
-20
arch/x86/events/core.c
··· 438 438 if (config == -1LL) 439 439 return -EINVAL; 440 440 441 - /* 442 - * Branch tracing: 443 - */ 444 - if (attr->config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS && 445 - !attr->freq && hwc->sample_period == 1) { 446 - /* BTS is not supported by this architecture. */ 447 - if (!x86_pmu.bts_active) 448 - return -EOPNOTSUPP; 449 - 450 - /* BTS is currently only allowed for user-mode. */ 451 - if (!attr->exclude_kernel) 452 - return -EOPNOTSUPP; 453 - 454 - /* disallow bts if conflicting events are present */ 455 - if (x86_add_exclusive(x86_lbr_exclusive_lbr)) 456 - return -EBUSY; 457 - 458 - event->destroy = hw_perf_lbr_event_destroy; 459 - } 460 - 461 441 hwc->config |= config; 462 442 463 443 return 0;
+52 -16
arch/x86/events/intel/core.c
··· 2306 2306 return handled; 2307 2307 } 2308 2308 2309 - static bool disable_counter_freezing; 2309 + static bool disable_counter_freezing = true; 2310 2310 static int __init intel_perf_counter_freezing_setup(char *s) 2311 2311 { 2312 - disable_counter_freezing = true; 2313 - pr_info("Intel PMU Counter freezing feature disabled\n"); 2312 + bool res; 2313 + 2314 + if (kstrtobool(s, &res)) 2315 + return -EINVAL; 2316 + 2317 + disable_counter_freezing = !res; 2314 2318 return 1; 2315 2319 } 2316 - __setup("disable_counter_freezing", intel_perf_counter_freezing_setup); 2320 + __setup("perf_v4_pmi=", intel_perf_counter_freezing_setup); 2317 2321 2318 2322 /* 2319 2323 * Simplified handler for Arch Perfmon v4: ··· 2474 2470 static struct event_constraint * 2475 2471 intel_bts_constraints(struct perf_event *event) 2476 2472 { 2477 - struct hw_perf_event *hwc = &event->hw; 2478 - unsigned int hw_event, bts_event; 2479 - 2480 - if (event->attr.freq) 2481 - return NULL; 2482 - 2483 - hw_event = hwc->config & INTEL_ARCH_EVENT_MASK; 2484 - bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS); 2485 - 2486 - if (unlikely(hw_event == bts_event && hwc->sample_period == 1)) 2473 + if (unlikely(intel_pmu_has_bts(event))) 2487 2474 return &bts_constraint; 2488 2475 2489 2476 return NULL; ··· 3093 3098 return flags; 3094 3099 } 3095 3100 3101 + static int intel_pmu_bts_config(struct perf_event *event) 3102 + { 3103 + struct perf_event_attr *attr = &event->attr; 3104 + 3105 + if (unlikely(intel_pmu_has_bts(event))) { 3106 + /* BTS is not supported by this architecture. */ 3107 + if (!x86_pmu.bts_active) 3108 + return -EOPNOTSUPP; 3109 + 3110 + /* BTS is currently only allowed for user-mode. */ 3111 + if (!attr->exclude_kernel) 3112 + return -EOPNOTSUPP; 3113 + 3114 + /* BTS is not allowed for precise events. */ 3115 + if (attr->precise_ip) 3116 + return -EOPNOTSUPP; 3117 + 3118 + /* disallow bts if conflicting events are present */ 3119 + if (x86_add_exclusive(x86_lbr_exclusive_lbr)) 3120 + return -EBUSY; 3121 + 3122 + event->destroy = hw_perf_lbr_event_destroy; 3123 + } 3124 + 3125 + return 0; 3126 + } 3127 + 3128 + static int core_pmu_hw_config(struct perf_event *event) 3129 + { 3130 + int ret = x86_pmu_hw_config(event); 3131 + 3132 + if (ret) 3133 + return ret; 3134 + 3135 + return intel_pmu_bts_config(event); 3136 + } 3137 + 3096 3138 static int intel_pmu_hw_config(struct perf_event *event) 3097 3139 { 3098 3140 int ret = x86_pmu_hw_config(event); 3099 3141 3142 + if (ret) 3143 + return ret; 3144 + 3145 + ret = intel_pmu_bts_config(event); 3100 3146 if (ret) 3101 3147 return ret; 3102 3148 ··· 3163 3127 /* 3164 3128 * BTS is set up earlier in this path, so don't account twice 3165 3129 */ 3166 - if (!intel_pmu_has_bts(event)) { 3130 + if (!unlikely(intel_pmu_has_bts(event))) { 3167 3131 /* disallow lbr if conflicting events are present */ 3168 3132 if (x86_add_exclusive(x86_lbr_exclusive_lbr)) 3169 3133 return -EBUSY; ··· 3632 3596 .enable_all = core_pmu_enable_all, 3633 3597 .enable = core_pmu_enable_event, 3634 3598 .disable = x86_pmu_disable_event, 3635 - .hw_config = x86_pmu_hw_config, 3599 + .hw_config = core_pmu_hw_config, 3636 3600 .schedule_events = x86_schedule_events, 3637 3601 .eventsel = MSR_ARCH_PERFMON_EVENTSEL0, 3638 3602 .perfctr = MSR_ARCH_PERFMON_PERFCTR0,
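The hunk above switches the `perf_v4_pmi=` handler from a presence-only flag to `kstrtobool()` parsing, so the parameter now takes an explicit boolean value. A minimal userspace sketch of the accepted spellings, mirroring `kstrtobool()` semantics ("y"/"t"/"1"/"on" true, "n"/"f"/"0"/"off" false); `parse_bool` is a hypothetical stand-in, returning -1 where the kernel returns -EINVAL:

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace sketch of kstrtobool()-style parsing as used by the
 * perf_v4_pmi= handler above.  Returns 0 on success, -1 on
 * unrecognized input. */
static int parse_bool(const char *s, bool *res)
{
	if (!s || !s[0])
		return -1;
	switch (s[0]) {
	case 'y': case 'Y': case 't': case 'T': case '1':
		*res = true;
		return 0;
	case 'n': case 'N': case 'f': case 'F': case '0':
		*res = false;
		return 0;
	case 'o': case 'O':
		/* "on" / "off" are distinguished by the second char */
		if (s[1] == 'n' || s[1] == 'N') { *res = true;  return 0; }
		if (s[1] == 'f' || s[1] == 'F') { *res = false; return 0; }
		return -1;
	}
	return -1;
}
```

With this scheme `perf_v4_pmi=0` disables counter freezing and `perf_v4_pmi=1` keeps it, replacing the old bare `disable_counter_freezing` switch.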
+9 -4
arch/x86/events/perf_event.h
··· 859 859 860 860 static inline bool intel_pmu_has_bts(struct perf_event *event) 861 861 { 862 - if (event->attr.config == PERF_COUNT_HW_BRANCH_INSTRUCTIONS && 863 - !event->attr.freq && event->hw.sample_period == 1) 864 - return true; 862 + struct hw_perf_event *hwc = &event->hw; 863 + unsigned int hw_event, bts_event; 865 864 866 - return false; 865 + if (event->attr.freq) 866 + return false; 867 + 868 + hw_event = hwc->config & INTEL_ARCH_EVENT_MASK; 869 + bts_event = x86_pmu.event_map(PERF_COUNT_HW_BRANCH_INSTRUCTIONS); 870 + 871 + return hw_event == bts_event && hwc->sample_period == 1; 867 872 } 868 873 869 874 int intel_pmu_save_and_restart(struct perf_event *event);
+1 -1
arch/x86/include/asm/fpu/internal.h
··· 226 226 "3: movl $-2,%[err]\n\t" \ 227 227 "jmp 2b\n\t" \ 228 228 ".popsection\n\t" \ 229 - _ASM_EXTABLE_UA(1b, 3b) \ 229 + _ASM_EXTABLE(1b, 3b) \ 230 230 : [err] "=r" (err) \ 231 231 : "D" (st), "m" (*st), "a" (lmask), "d" (hmask) \ 232 232 : "memory")
+2 -1
arch/x86/include/asm/kvm_host.h
··· 1094 1094 bool (*has_wbinvd_exit)(void); 1095 1095 1096 1096 u64 (*read_l1_tsc_offset)(struct kvm_vcpu *vcpu); 1097 - void (*write_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset); 1097 + /* Returns actual tsc_offset set in active VMCS */ 1098 + u64 (*write_l1_tsc_offset)(struct kvm_vcpu *vcpu, u64 offset); 1098 1099 1099 1100 void (*get_exit_info)(struct kvm_vcpu *vcpu, u64 *info1, u64 *info2); 1100 1101
+3 -2
arch/x86/include/asm/msr-index.h
··· 41 41 42 42 #define MSR_IA32_SPEC_CTRL 0x00000048 /* Speculation Control */ 43 43 #define SPEC_CTRL_IBRS (1 << 0) /* Indirect Branch Restricted Speculation */ 44 - #define SPEC_CTRL_STIBP (1 << 1) /* Single Thread Indirect Branch Predictors */ 44 + #define SPEC_CTRL_STIBP_SHIFT 1 /* Single Thread Indirect Branch Predictor (STIBP) bit */ 45 + #define SPEC_CTRL_STIBP (1 << SPEC_CTRL_STIBP_SHIFT) /* STIBP mask */ 45 46 #define SPEC_CTRL_SSBD_SHIFT 2 /* Speculative Store Bypass Disable bit */ 46 - #define SPEC_CTRL_SSBD (1 << SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */ 47 + #define SPEC_CTRL_SSBD (1 << SPEC_CTRL_SSBD_SHIFT) /* Speculative Store Bypass Disable */ 47 48 48 49 #define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */ 49 50 #define PRED_CMD_IBPB (1 << 0) /* Indirect Branch Prediction Barrier */
+20 -6
arch/x86/include/asm/nospec-branch.h
··· 3 3 #ifndef _ASM_X86_NOSPEC_BRANCH_H_ 4 4 #define _ASM_X86_NOSPEC_BRANCH_H_ 5 5 6 + #include <linux/static_key.h> 7 + 6 8 #include <asm/alternative.h> 7 9 #include <asm/alternative-asm.h> 8 10 #include <asm/cpufeatures.h> ··· 164 162 _ASM_PTR " 999b\n\t" \ 165 163 ".popsection\n\t" 166 164 167 - #if defined(CONFIG_X86_64) && defined(RETPOLINE) 165 + #ifdef CONFIG_RETPOLINE 166 + #ifdef CONFIG_X86_64 168 167 169 168 /* 170 - * Since the inline asm uses the %V modifier which is only in newer GCC, 171 - * the 64-bit one is dependent on RETPOLINE not CONFIG_RETPOLINE. 169 + * Inline asm uses the %V modifier which is only in newer GCC 170 + * which is ensured when CONFIG_RETPOLINE is defined. 172 171 */ 173 172 # define CALL_NOSPEC \ 174 173 ANNOTATE_NOSPEC_ALTERNATIVE \ ··· 184 181 X86_FEATURE_RETPOLINE_AMD) 185 182 # define THUNK_TARGET(addr) [thunk_target] "r" (addr) 186 183 187 - #elif defined(CONFIG_X86_32) && defined(CONFIG_RETPOLINE) 184 + #else /* CONFIG_X86_32 */ 188 185 /* 189 186 * For i386 we use the original ret-equivalent retpoline, because 190 187 * otherwise we'll run out of registers. We don't care about CET ··· 214 211 X86_FEATURE_RETPOLINE_AMD) 215 212 216 213 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr) 214 + #endif 217 215 #else /* No retpoline for C / inline asm */ 218 216 # define CALL_NOSPEC "call *%[thunk_target]\n" 219 217 # define THUNK_TARGET(addr) [thunk_target] "rm" (addr) ··· 223 219 /* The Spectre V2 mitigation variants */ 224 220 enum spectre_v2_mitigation { 225 221 SPECTRE_V2_NONE, 226 - SPECTRE_V2_RETPOLINE_MINIMAL, 227 - SPECTRE_V2_RETPOLINE_MINIMAL_AMD, 228 222 SPECTRE_V2_RETPOLINE_GENERIC, 229 223 SPECTRE_V2_RETPOLINE_AMD, 230 224 SPECTRE_V2_IBRS_ENHANCED, 225 + }; 226 + 227 + /* The indirect branch speculation control variants */ 228 + enum spectre_v2_user_mitigation { 229 + SPECTRE_V2_USER_NONE, 230 + SPECTRE_V2_USER_STRICT, 231 + SPECTRE_V2_USER_PRCTL, 232 + SPECTRE_V2_USER_SECCOMP, 231 233 }; 232 234 233 235 /* The Speculative Store Bypass disable variants */ ··· 312 302 X86_FEATURE_USE_IBRS_FW); \ 313 303 preempt_enable(); \ 314 304 } while (0) 305 + 306 + DECLARE_STATIC_KEY_FALSE(switch_to_cond_stibp); 307 + DECLARE_STATIC_KEY_FALSE(switch_mm_cond_ibpb); 308 + DECLARE_STATIC_KEY_FALSE(switch_mm_always_ibpb); 315 309 316 310 #endif /* __ASSEMBLY__ */ 317 311
+14 -6
arch/x86/include/asm/spec-ctrl.h
··· 53 53 return (tifn & _TIF_SSBD) >> (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT); 54 54 } 55 55 56 + static inline u64 stibp_tif_to_spec_ctrl(u64 tifn) 57 + { 58 + BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT); 59 + return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT); 60 + } 61 + 56 62 static inline unsigned long ssbd_spec_ctrl_to_tif(u64 spec_ctrl) 57 63 { 58 64 BUILD_BUG_ON(TIF_SSBD < SPEC_CTRL_SSBD_SHIFT); 59 65 return (spec_ctrl & SPEC_CTRL_SSBD) << (TIF_SSBD - SPEC_CTRL_SSBD_SHIFT); 66 + } 67 + 68 + static inline unsigned long stibp_spec_ctrl_to_tif(u64 spec_ctrl) 69 + { 70 + BUILD_BUG_ON(TIF_SPEC_IB < SPEC_CTRL_STIBP_SHIFT); 71 + return (spec_ctrl & SPEC_CTRL_STIBP) << (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT); 60 72 } 61 73 62 74 static inline u64 ssbd_tif_to_amd_ls_cfg(u64 tifn) ··· 82 70 static inline void speculative_store_bypass_ht_init(void) { } 83 71 #endif 84 72 85 - extern void speculative_store_bypass_update(unsigned long tif); 86 - 87 - static inline void speculative_store_bypass_update_current(void) 88 - { 89 - speculative_store_bypass_update(current_thread_info()->flags); 90 - } 73 + extern void speculation_ctrl_update(unsigned long tif); 74 + extern void speculation_ctrl_update_current(void); 91 75 92 76 #endif
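The new `stibp_tif_to_spec_ctrl()` / `stibp_spec_ctrl_to_tif()` helpers above rely on a single shift to move a flag between its thread-info position and its MSR position. A self-contained sketch of that bit-relocation trick, using the bit positions visible in the hunks above (SPEC_CTRL_STIBP is MSR bit 1, TIF_SPEC_IB is thread-flag bit 9); the standalone function below is ours, not kernel code:

```c
#include <assert.h>
#include <stdint.h>

/* Bit positions taken from the msr-index.h and thread_info.h hunks. */
#define SPEC_CTRL_STIBP_SHIFT 1
#define SPEC_CTRL_STIBP       (1ULL << SPEC_CTRL_STIBP_SHIFT)
#define TIF_SPEC_IB           9
#define _TIF_SPEC_IB          (1ULL << TIF_SPEC_IB)

/* Move the TIF_SPEC_IB flag (bit 9) down into the STIBP position
 * (bit 1) of the SPEC_CTRL value with one shift.  This only works
 * because TIF_SPEC_IB >= SPEC_CTRL_STIBP_SHIFT, which the kernel
 * guards with a BUILD_BUG_ON(). */
static uint64_t stibp_tif_to_spec_ctrl(uint64_t tifn)
{
	return (tifn & _TIF_SPEC_IB) >> (TIF_SPEC_IB - SPEC_CTRL_STIBP_SHIFT);
}
```

Because the transformation is a pure mask-and-shift, it can be applied on every context switch without branches, which is the point of encoding both sides as fixed bit positions.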
-3
arch/x86/include/asm/switch_to.h
··· 11 11 12 12 __visible struct task_struct *__switch_to(struct task_struct *prev, 13 13 struct task_struct *next); 14 - struct tss_struct; 15 - void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p, 16 - struct tss_struct *tss); 17 14 18 15 /* This runs runs on the previous thread's stack. */ 19 16 static inline void prepare_switch_to(struct task_struct *next)
+17 -3
arch/x86/include/asm/thread_info.h
··· 79 79 #define TIF_SIGPENDING 2 /* signal pending */ 80 80 #define TIF_NEED_RESCHED 3 /* rescheduling necessary */ 81 81 #define TIF_SINGLESTEP 4 /* reenable singlestep on user return*/ 82 - #define TIF_SSBD 5 /* Reduced data speculation */ 82 + #define TIF_SSBD 5 /* Speculative store bypass disable */ 83 83 #define TIF_SYSCALL_EMU 6 /* syscall emulation active */ 84 84 #define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */ 85 85 #define TIF_SECCOMP 8 /* secure computing */ 86 + #define TIF_SPEC_IB 9 /* Indirect branch speculation mitigation */ 87 + #define TIF_SPEC_FORCE_UPDATE 10 /* Force speculation MSR update in context switch */ 86 88 #define TIF_USER_RETURN_NOTIFY 11 /* notify kernel of userspace return */ 87 89 #define TIF_UPROBE 12 /* breakpointed or singlestepping */ 88 90 #define TIF_PATCH_PENDING 13 /* pending live patching update */ ··· 112 110 #define _TIF_SYSCALL_EMU (1 << TIF_SYSCALL_EMU) 113 111 #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT) 114 112 #define _TIF_SECCOMP (1 << TIF_SECCOMP) 113 + #define _TIF_SPEC_IB (1 << TIF_SPEC_IB) 114 + #define _TIF_SPEC_FORCE_UPDATE (1 << TIF_SPEC_FORCE_UPDATE) 115 115 #define _TIF_USER_RETURN_NOTIFY (1 << TIF_USER_RETURN_NOTIFY) 116 116 #define _TIF_UPROBE (1 << TIF_UPROBE) 117 117 #define _TIF_PATCH_PENDING (1 << TIF_PATCH_PENDING) ··· 149 145 _TIF_FSCHECK) 150 146 151 147 /* flags to check in __switch_to() */ 152 - #define _TIF_WORK_CTXSW \ 153 - (_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP|_TIF_SSBD) 148 + #define _TIF_WORK_CTXSW_BASE \ 149 + (_TIF_IO_BITMAP|_TIF_NOCPUID|_TIF_NOTSC|_TIF_BLOCKSTEP| \ 150 + _TIF_SSBD | _TIF_SPEC_FORCE_UPDATE) 151 + 152 + /* 153 + * Avoid calls to __switch_to_xtra() on UP as STIBP is not evaluated. 154 + */ 155 + #ifdef CONFIG_SMP 156 + # define _TIF_WORK_CTXSW (_TIF_WORK_CTXSW_BASE | _TIF_SPEC_IB) 157 + #else 158 + # define _TIF_WORK_CTXSW (_TIF_WORK_CTXSW_BASE) 159 + #endif 154 160 155 161 #define _TIF_WORK_CTXSW_PREV (_TIF_WORK_CTXSW|_TIF_USER_RETURN_NOTIFY) 156 162 #define _TIF_WORK_CTXSW_NEXT (_TIF_WORK_CTXSW)
+6 -2
arch/x86/include/asm/tlbflush.h
··· 169 169 170 170 #define LOADED_MM_SWITCHING ((struct mm_struct *)1) 171 171 172 + /* Last user mm for optimizing IBPB */ 173 + union { 174 + struct mm_struct *last_user_mm; 175 + unsigned long last_user_mm_ibpb; 176 + }; 177 + 172 178 u16 loaded_mm_asid; 173 179 u16 next_asid; 174 - /* last user mm's ctx id */ 175 - u64 last_ctx_id; 176 180 177 181 /* 178 182 * We can be in one of several states:
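The tlbflush.h hunk above replaces the `last_ctx_id` field with a union of `last_user_mm` and `last_user_mm_ibpb`, i.e. the mm pointer and the "IBPB needed" state share one word. This works because `struct mm_struct` pointers are word aligned, so bit 0 is always free to carry a flag. A userspace sketch of the pointer-tagging idea; the flag name follows the series' `LAST_USER_MM_IBPB` (defined in the follow-up switch_mm code, not shown in the hunk), while the helper names here are ours:

```c
#include <assert.h>
#include <stdint.h>

/* Flag stored in the spare low bit of the aligned mm pointer. */
#define LAST_USER_MM_IBPB 0x1UL

/* Fold the "needs IBPB" state into the pointer's free bit 0. */
static uintptr_t mm_tag_ibpb(uintptr_t mm, int ibpb_needed)
{
	return mm | (ibpb_needed ? LAST_USER_MM_IBPB : 0);
}

/* Recover the plain pointer by stripping the flag bit. */
static uintptr_t mm_untag(uintptr_t tagged)
{
	return tagged & ~LAST_USER_MM_IBPB;
}
```

Storing both in one word lets `switch_mm()` compare the incoming mm and the IBPB decision against the previous one with a single load and compare.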
-2
arch/x86/include/asm/x86_init.h
··· 303 303 extern void x86_init_uint_noop(unsigned int unused); 304 304 extern bool x86_pnpbios_disabled(void); 305 305 306 - void x86_verify_bootdata_version(void); 307 - 308 306 #endif
+2 -5
arch/x86/include/uapi/asm/bootparam.h
··· 16 16 #define RAMDISK_PROMPT_FLAG 0x8000 17 17 #define RAMDISK_LOAD_FLAG 0x4000 18 18 19 - /* version flags */ 20 - #define VERSION_WRITTEN 0x8000 21 - 22 19 /* loadflags */ 23 20 #define LOADED_HIGH (1<<0) 24 21 #define KASLR_FLAG (1<<1) ··· 86 89 __u64 pref_address; 87 90 __u32 init_size; 88 91 __u32 handover_offset; 89 - __u64 acpi_rsdp_addr; 90 92 } __attribute__((packed)); 91 93 92 94 struct sys_desc_table { ··· 155 159 __u8 _pad2[4]; /* 0x054 */ 156 160 __u64 tboot_addr; /* 0x058 */ 157 161 struct ist_info ist_info; /* 0x060 */ 158 - __u8 _pad3[16]; /* 0x070 */ 162 + __u64 acpi_rsdp_addr; /* 0x070 */ 163 + __u8 _pad3[8]; /* 0x078 */ 159 164 __u8 hd0_info[16]; /* obsolete! */ /* 0x080 */ 160 165 __u8 hd1_info[16]; /* obsolete! */ /* 0x090 */ 161 166 struct sys_desc_table sys_desc_table; /* obsolete! */ /* 0x0a0 */
+1 -1
arch/x86/kernel/acpi/boot.c
··· 1776 1776 1777 1777 u64 x86_default_get_root_pointer(void) 1778 1778 { 1779 - return boot_params.hdr.acpi_rsdp_addr; 1779 + return boot_params.acpi_rsdp_addr; 1780 1780 }
+394 -143
arch/x86/kernel/cpu/bugs.c
··· 14 14 #include <linux/module.h> 15 15 #include <linux/nospec.h> 16 16 #include <linux/prctl.h> 17 + #include <linux/sched/smt.h> 17 18 18 19 #include <asm/spec-ctrl.h> 19 20 #include <asm/cmdline.h> ··· 53 52 */ 54 53 u64 __ro_after_init x86_amd_ls_cfg_base; 55 54 u64 __ro_after_init x86_amd_ls_cfg_ssbd_mask; 55 + 56 + /* Control conditional STIPB in switch_to() */ 57 + DEFINE_STATIC_KEY_FALSE(switch_to_cond_stibp); 58 + /* Control conditional IBPB in switch_mm() */ 59 + DEFINE_STATIC_KEY_FALSE(switch_mm_cond_ibpb); 60 + /* Control unconditional IBPB in switch_mm() */ 61 + DEFINE_STATIC_KEY_FALSE(switch_mm_always_ibpb); 56 62 57 63 void __init check_bugs(void) 58 64 { ··· 131 123 #endif 132 124 } 133 125 134 - /* The kernel command line selection */ 135 - enum spectre_v2_mitigation_cmd { 136 - SPECTRE_V2_CMD_NONE, 137 - SPECTRE_V2_CMD_AUTO, 138 - SPECTRE_V2_CMD_FORCE, 139 - SPECTRE_V2_CMD_RETPOLINE, 140 - SPECTRE_V2_CMD_RETPOLINE_GENERIC, 141 - SPECTRE_V2_CMD_RETPOLINE_AMD, 142 - }; 143 - 144 - static const char *spectre_v2_strings[] = { 145 - [SPECTRE_V2_NONE] = "Vulnerable", 146 - [SPECTRE_V2_RETPOLINE_MINIMAL] = "Vulnerable: Minimal generic ASM retpoline", 147 - [SPECTRE_V2_RETPOLINE_MINIMAL_AMD] = "Vulnerable: Minimal AMD ASM retpoline", 148 - [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline", 149 - [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline", 150 - [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", 151 - }; 152 - 153 - #undef pr_fmt 154 - #define pr_fmt(fmt) "Spectre V2 : " fmt 155 - 156 - static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = 157 - SPECTRE_V2_NONE; 158 - 159 126 void 160 127 x86_virt_spec_ctrl(u64 guest_spec_ctrl, u64 guest_virt_spec_ctrl, bool setguest) 161 128 { ··· 151 168 if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || 152 169 static_cpu_has(X86_FEATURE_AMD_SSBD)) 153 170 hostval |= ssbd_tif_to_spec_ctrl(ti->flags); 171 + 172 + /* Conditional STIBP enabled? */ 173 + if (static_branch_unlikely(&switch_to_cond_stibp)) 174 + hostval |= stibp_tif_to_spec_ctrl(ti->flags); 154 175 155 176 if (hostval != guestval) { 156 177 msrval = setguest ? guestval : hostval; ··· 189 202 tif = setguest ? ssbd_spec_ctrl_to_tif(guestval) : 190 203 ssbd_spec_ctrl_to_tif(hostval); 191 204 192 - speculative_store_bypass_update(tif); 205 + speculation_ctrl_update(tif); 193 206 } 194 207 } 195 208 EXPORT_SYMBOL_GPL(x86_virt_spec_ctrl); ··· 203 216 else if (boot_cpu_has(X86_FEATURE_LS_CFG_SSBD)) 204 217 wrmsrl(MSR_AMD64_LS_CFG, msrval); 205 218 } 219 + 220 + #undef pr_fmt 221 + #define pr_fmt(fmt) "Spectre V2 : " fmt 222 + 223 + static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init = 224 + SPECTRE_V2_NONE; 225 + 226 + static enum spectre_v2_user_mitigation spectre_v2_user __ro_after_init = 227 + SPECTRE_V2_USER_NONE; 206 228 207 229 #ifdef RETPOLINE 208 230 static bool spectre_v2_bad_module; ··· 234 238 static inline const char *spectre_v2_module_string(void) { return ""; } 235 239 #endif 236 240 237 - static void __init spec2_print_if_insecure(const char *reason) 238 - { 239 - if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2)) 240 - pr_info("%s selected on command line.\n", reason); 241 - } 242 - 243 - static void __init spec2_print_if_secure(const char *reason) 244 - { 245 - if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2)) 246 - pr_info("%s selected on command line.\n", reason); 247 - } 248 - 249 - static inline bool retp_compiler(void) 250 - { 251 - return __is_defined(RETPOLINE); 252 - } 253 - 254 241 static inline bool match_option(const char *arg, int arglen, const char *opt) 255 242 { 256 243 int len = strlen(opt); ··· 241 262 return len == arglen && !strncmp(arg, opt, len); 242 263 } 243 264 265 + /* The kernel command line selection for spectre v2 */ 266 + enum spectre_v2_mitigation_cmd { 267 + SPECTRE_V2_CMD_NONE, 268 + SPECTRE_V2_CMD_AUTO, 269 + SPECTRE_V2_CMD_FORCE, 270 + SPECTRE_V2_CMD_RETPOLINE, 271 + SPECTRE_V2_CMD_RETPOLINE_GENERIC, 272 + SPECTRE_V2_CMD_RETPOLINE_AMD, 273 + }; 274 + 275 + enum spectre_v2_user_cmd { 276 + SPECTRE_V2_USER_CMD_NONE, 277 + SPECTRE_V2_USER_CMD_AUTO, 278 + SPECTRE_V2_USER_CMD_FORCE, 279 + SPECTRE_V2_USER_CMD_PRCTL, 280 + SPECTRE_V2_USER_CMD_PRCTL_IBPB, 281 + SPECTRE_V2_USER_CMD_SECCOMP, 282 + SPECTRE_V2_USER_CMD_SECCOMP_IBPB, 283 + }; 284 + 285 + static const char * const spectre_v2_user_strings[] = { 286 + [SPECTRE_V2_USER_NONE] = "User space: Vulnerable", 287 + [SPECTRE_V2_USER_STRICT] = "User space: Mitigation: STIBP protection", 288 + [SPECTRE_V2_USER_PRCTL] = "User space: Mitigation: STIBP via prctl", 289 + [SPECTRE_V2_USER_SECCOMP] = "User space: Mitigation: STIBP via seccomp and prctl", 290 + }; 291 + 292 + static const struct { 293 + const char *option; 294 + enum spectre_v2_user_cmd cmd; 295 + bool secure; 296 + } v2_user_options[] __initdata = { 297 + { "auto", SPECTRE_V2_USER_CMD_AUTO, false }, 298 + { "off", SPECTRE_V2_USER_CMD_NONE, false }, 299 + { "on", SPECTRE_V2_USER_CMD_FORCE, true }, 300 + { "prctl", SPECTRE_V2_USER_CMD_PRCTL, false }, 301 + { "prctl,ibpb", SPECTRE_V2_USER_CMD_PRCTL_IBPB, false }, 302 + { "seccomp", SPECTRE_V2_USER_CMD_SECCOMP, false }, 303 + { "seccomp,ibpb", SPECTRE_V2_USER_CMD_SECCOMP_IBPB, false }, 304 + }; 305 + 306 + static void __init spec_v2_user_print_cond(const char *reason, bool secure) 307 + { 308 + if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure) 309 + pr_info("spectre_v2_user=%s forced on command line.\n", reason); 310 + } 311 + 312 + static enum spectre_v2_user_cmd __init 313 + spectre_v2_parse_user_cmdline(enum spectre_v2_mitigation_cmd v2_cmd) 314 + { 315 + char arg[20]; 316 + int ret, i; 317 + 318 + switch (v2_cmd) { 319 + case SPECTRE_V2_CMD_NONE: 320 + return SPECTRE_V2_USER_CMD_NONE; 321 + case SPECTRE_V2_CMD_FORCE: 322 + return SPECTRE_V2_USER_CMD_FORCE; 323 + default: 324 + break; 325 + } 326 + 327 + ret = cmdline_find_option(boot_command_line, "spectre_v2_user", 328 + arg, sizeof(arg)); 329 + if (ret < 0) 330 + return SPECTRE_V2_USER_CMD_AUTO; 331 + 332 + for (i = 0; i < ARRAY_SIZE(v2_user_options); i++) { 333 + if (match_option(arg, ret, v2_user_options[i].option)) { 334 + spec_v2_user_print_cond(v2_user_options[i].option, 335 + v2_user_options[i].secure); 336 + return v2_user_options[i].cmd; 337 + } 338 + } 339 + 340 + pr_err("Unknown user space protection option (%s). Switching to AUTO select\n", arg); 341 + return SPECTRE_V2_USER_CMD_AUTO; 342 + } 343 + 344 + static void __init 345 + spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd) 346 + { 347 + enum spectre_v2_user_mitigation mode = SPECTRE_V2_USER_NONE; 348 + bool smt_possible = IS_ENABLED(CONFIG_SMP); 349 + enum spectre_v2_user_cmd cmd; 350 + 351 + if (!boot_cpu_has(X86_FEATURE_IBPB) && !boot_cpu_has(X86_FEATURE_STIBP)) 352 + return; 353 + 354 + if (cpu_smt_control == CPU_SMT_FORCE_DISABLED || 355 + cpu_smt_control == CPU_SMT_NOT_SUPPORTED) 356 + smt_possible = false; 357 + 358 + cmd = spectre_v2_parse_user_cmdline(v2_cmd); 359 + switch (cmd) { 360 + case SPECTRE_V2_USER_CMD_NONE: 361 + goto set_mode; 362 + case SPECTRE_V2_USER_CMD_FORCE: 363 + mode = SPECTRE_V2_USER_STRICT; 364 + break; 365 + case SPECTRE_V2_USER_CMD_PRCTL: 366 + case SPECTRE_V2_USER_CMD_PRCTL_IBPB: 367 + mode = SPECTRE_V2_USER_PRCTL; 368 + break; 369 + case SPECTRE_V2_USER_CMD_AUTO: 370 + case SPECTRE_V2_USER_CMD_SECCOMP: 371 + case SPECTRE_V2_USER_CMD_SECCOMP_IBPB: 372 + if (IS_ENABLED(CONFIG_SECCOMP)) 373 + mode = SPECTRE_V2_USER_SECCOMP; 374 + else 375 + mode = SPECTRE_V2_USER_PRCTL; 376 + break; 377 + } 378 + 379 + /* Initialize Indirect Branch Prediction Barrier */ 380 + if (boot_cpu_has(X86_FEATURE_IBPB)) { 381 + setup_force_cpu_cap(X86_FEATURE_USE_IBPB); 382 + 383 + switch (cmd) { 384 + case SPECTRE_V2_USER_CMD_FORCE: 385 + case SPECTRE_V2_USER_CMD_PRCTL_IBPB: 386 + case SPECTRE_V2_USER_CMD_SECCOMP_IBPB: 387 + static_branch_enable(&switch_mm_always_ibpb); 388 + break; 389 + case SPECTRE_V2_USER_CMD_PRCTL: 390 + case SPECTRE_V2_USER_CMD_AUTO: 391 + case SPECTRE_V2_USER_CMD_SECCOMP: 392 + static_branch_enable(&switch_mm_cond_ibpb); 393 + break; 394 + default: 395 + break; 396 + } 397 + 398 + pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n", 399 + static_key_enabled(&switch_mm_always_ibpb) ? 400 + "always-on" : "conditional"); 401 + } 402 + 403 + /* If enhanced IBRS is enabled no STIPB required */ 404 + if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) 405 + return; 406 + 407 + /* 408 + * If SMT is not possible or STIBP is not available clear the STIPB 409 + * mode. 410 + */ 411 + if (!smt_possible || !boot_cpu_has(X86_FEATURE_STIBP)) 412 + mode = SPECTRE_V2_USER_NONE; 413 + set_mode: 414 + spectre_v2_user = mode; 415 + /* Only print the STIBP mode when SMT possible */ 416 + if (smt_possible) 417 + pr_info("%s\n", spectre_v2_user_strings[mode]); 418 + } 419 + 420 + static const char * const spectre_v2_strings[] = { 421 + [SPECTRE_V2_NONE] = "Vulnerable", 422 + [SPECTRE_V2_RETPOLINE_GENERIC] = "Mitigation: Full generic retpoline", 423 + [SPECTRE_V2_RETPOLINE_AMD] = "Mitigation: Full AMD retpoline", 424 + [SPECTRE_V2_IBRS_ENHANCED] = "Mitigation: Enhanced IBRS", 425 + }; 426 + 244 427 static const struct { 245 428 const char *option; 246 429 enum spectre_v2_mitigation_cmd cmd; 247 430 bool secure; 248 - } mitigation_options[] = { 249 - { "off", SPECTRE_V2_CMD_NONE, false }, 250 - { "on", SPECTRE_V2_CMD_FORCE, true }, 251 - { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false }, 252 - { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false }, 253 - { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false }, 254 - { "auto", SPECTRE_V2_CMD_AUTO, false }, 431 + } mitigation_options[] __initdata = { 432 + { "off", SPECTRE_V2_CMD_NONE, false }, 433 + { "on", SPECTRE_V2_CMD_FORCE, true }, 434 + { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false }, 435 + { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false }, 436 + { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false }, 437 + { "auto", SPECTRE_V2_CMD_AUTO, false }, 255 438 }; 439 + 440 + static void __init spec_v2_print_cond(const char *reason, bool secure) 441 + { 442 + if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2) != secure) 443 + pr_info("%s selected on command line.\n", reason); 444 + } 256 445 257 446 static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void) 258 447 { 448 + enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO; 259 449 char arg[20]; 260 450 int ret, i; 261 - enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO; 262 451 263 452 if (cmdline_find_option_bool(boot_command_line, "nospectre_v2")) 264 453 return SPECTRE_V2_CMD_NONE; 265 - else { 266 - ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg)); 267 - if (ret < 0) 268 - return SPECTRE_V2_CMD_AUTO; 269 454 270 - for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) { 271 - if (!match_option(arg, ret, mitigation_options[i].option)) 272 - continue; 273 - cmd = mitigation_options[i].cmd; 274 - break; 275 - } 455 + ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg)); 456 + if (ret < 0) 457 + return SPECTRE_V2_CMD_AUTO; 276 458 277 - if (i >= ARRAY_SIZE(mitigation_options)) { 278 - pr_err("unknown option (%s). Switching to AUTO select\n", arg); 279 - return SPECTRE_V2_CMD_AUTO; 280 - } 459 + for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) { 460 + if (!match_option(arg, ret, mitigation_options[i].option)) 461 + continue; 462 + cmd = mitigation_options[i].cmd; 463 + break; 464 + } 465 + 466 + if (i >= ARRAY_SIZE(mitigation_options)) { 467 + pr_err("unknown option (%s). Switching to AUTO select\n", arg); 468 + return SPECTRE_V2_CMD_AUTO; 281 469 } 282 470 283 471 if ((cmd == SPECTRE_V2_CMD_RETPOLINE || ··· 462 316 return SPECTRE_V2_CMD_AUTO; 463 317 } 464 318 465 - if (mitigation_options[i].secure) 466 - spec2_print_if_secure(mitigation_options[i].option); 467 - else 468 - spec2_print_if_insecure(mitigation_options[i].option); 469 - 319 + spec_v2_print_cond(mitigation_options[i].option, 320 + mitigation_options[i].secure); 470 321 return cmd; 471 - } 472 - 473 - static bool stibp_needed(void) 474 - { 475 - if (spectre_v2_enabled == SPECTRE_V2_NONE) 476 - return false; 477 - 478 - if (!boot_cpu_has(X86_FEATURE_STIBP)) 479 - return false; 480 - 481 - return true; 482 - } 483 - 484 - static void update_stibp_msr(void *info) 485 - { 486 - wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); 487 - } 488 - 489 - void arch_smt_update(void) 490 - { 491 - u64 mask; 492 - 493 - if (!stibp_needed()) 494 - return; 495 - 496 - mutex_lock(&spec_ctrl_mutex); 497 - mask = x86_spec_ctrl_base; 498 - if (cpu_smt_control == CPU_SMT_ENABLED) 499 - mask |= SPEC_CTRL_STIBP; 500 - else 501 - mask &= ~SPEC_CTRL_STIBP; 502 - 503 - if (mask != x86_spec_ctrl_base) { 504 - pr_info("Spectre v2 cross-process SMT mitigation: %s STIBP\n", 505 - cpu_smt_control == CPU_SMT_ENABLED ? 506 - "Enabling" : "Disabling"); 507 - x86_spec_ctrl_base = mask; 508 - on_each_cpu(update_stibp_msr, NULL, 1); 509 - } 510 - mutex_unlock(&spec_ctrl_mutex); 511 322 } 512 323 513 324 static void __init spectre_v2_select_mitigation(void) ··· 520 417 pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n"); 521 418 goto retpoline_generic; 522 419 } 523 - mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD : 524 - SPECTRE_V2_RETPOLINE_MINIMAL_AMD; 420 + mode = SPECTRE_V2_RETPOLINE_AMD; 525 421 setup_force_cpu_cap(X86_FEATURE_RETPOLINE_AMD); 526 422 setup_force_cpu_cap(X86_FEATURE_RETPOLINE); 527 423 } else { 528 424 retpoline_generic: 529 - mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_GENERIC : 530 - SPECTRE_V2_RETPOLINE_MINIMAL; 425 + mode = SPECTRE_V2_RETPOLINE_GENERIC; 531 426 setup_force_cpu_cap(X86_FEATURE_RETPOLINE); 532 427 } ··· 544 443 setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW); 545 444 pr_info("Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch\n"); 546 445 547 - /* Initialize Indirect Branch Prediction Barrier if supported */ 548 - if (boot_cpu_has(X86_FEATURE_IBPB)) { 549 - setup_force_cpu_cap(X86_FEATURE_USE_IBPB); 550 - pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n"); 551 - } 552 - 553 446 /* 554 447 * Retpoline means the kernel is safe because it has no indirect 555 448 * branches. Enhanced IBRS protects firmware too, so, enable restricted ··· 560 465 pr_info("Enabling Restricted Speculation for firmware calls\n"); 561 466 } 562 467 468 + /* Set up IBPB and STIBP depending on the general spectre V2 command */ 469 + spectre_v2_user_select_mitigation(cmd); 470 + 563 471 /* Enable STIBP if appropriate */ 564 472 arch_smt_update(); 473 + } 474 + 475 + static void update_stibp_msr(void * __unused) 476 + { 477 + wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base); 478 + } 479 + 480 + /* Update x86_spec_ctrl_base in case SMT state changed. */ 481 + static void update_stibp_strict(void) 482 + { 483 + u64 mask = x86_spec_ctrl_base & ~SPEC_CTRL_STIBP; 484 + 485 + if (sched_smt_active()) 486 + mask |= SPEC_CTRL_STIBP; 487 + 488 + if (mask == x86_spec_ctrl_base) 489 + return; 490 + 491 + pr_info("Update user space SMT mitigation: STIBP %s\n", 492 + mask & SPEC_CTRL_STIBP ? "always-on" : "off"); 493 + x86_spec_ctrl_base = mask; 494 + on_each_cpu(update_stibp_msr, NULL, 1); 495 + } 496 + 497 + /* Update the static key controlling the evaluation of TIF_SPEC_IB */ 498 + static void update_indir_branch_cond(void) 499 + { 500 + if (sched_smt_active()) 501 + static_branch_enable(&switch_to_cond_stibp); 502 + else 503 + static_branch_disable(&switch_to_cond_stibp); 504 + } 505 + 506 + void arch_smt_update(void) 507 + { 508 + /* Enhanced IBRS implies STIBP. No update required. */ 509 + if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) 510 + return; 511 + 512 + mutex_lock(&spec_ctrl_mutex); 513 + 514 + switch (spectre_v2_user) { 515 + case SPECTRE_V2_USER_NONE: 516 + break; 517 + case SPECTRE_V2_USER_STRICT: 518 + update_stibp_strict(); 519 + break; 520 + case SPECTRE_V2_USER_PRCTL: 521 + case SPECTRE_V2_USER_SECCOMP: 522 + update_indir_branch_cond(); 523 + break; 524 + } 525 + 526 + mutex_unlock(&spec_ctrl_mutex); 565 527 } 566 528 567 529 #undef pr_fmt ··· 635 483 SPEC_STORE_BYPASS_CMD_SECCOMP, 636 484 }; 637 485 638 - static const char *ssb_strings[] = { 486 + static const char * const ssb_strings[] = { 639 487 [SPEC_STORE_BYPASS_NONE] = "Vulnerable", 640 488 [SPEC_STORE_BYPASS_DISABLE] = "Mitigation: Speculative Store Bypass disabled", 641 489 [SPEC_STORE_BYPASS_PRCTL] = "Mitigation: Speculative Store Bypass disabled via prctl", ··· 645 493 static const struct { 646 494 const char *option; 647 495 enum ssb_mitigation_cmd cmd; 648 - } ssb_mitigation_options[] = { 496 + } ssb_mitigation_options[] __initdata = { 649 497 { "auto", SPEC_STORE_BYPASS_CMD_AUTO }, /* Platform decides */ 650 498 { "on", SPEC_STORE_BYPASS_CMD_ON }, /* Disable Speculative Store Bypass */ 651 499 { "off", SPEC_STORE_BYPASS_CMD_NONE }, /* Don't touch Speculative Store Bypass */ ··· 756 604 #undef pr_fmt 757 605 #define pr_fmt(fmt) "Speculation prctl: " fmt 758 606 607 + static void task_update_spec_tif(struct task_struct *tsk) 608 + { 609 + /* Force the update of the real TIF bits */ 610 + set_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE); 611 + 612 + /* 613 + * Immediately update the speculation control MSRs for the current 614 + * task, but for a non-current task delay setting the CPU 615 + * mitigation until it is scheduled next. 616 + * 617 + * This can only happen for SECCOMP mitigation. For PRCTL it's 618 + * always the current task. 619 + */ 620 + if (tsk == current) 621 + speculation_ctrl_update_current(); 622 + } 623 + 759 624 static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl) 760 625 { 761 - bool update; 762 - 763 626 if (ssb_mode != SPEC_STORE_BYPASS_PRCTL && 764 627 ssb_mode != SPEC_STORE_BYPASS_SECCOMP) 765 628 return -ENXIO; ··· 785 618 if (task_spec_ssb_force_disable(task)) 786 619 return -EPERM; 787 620 task_clear_spec_ssb_disable(task); 788 - update = test_and_clear_tsk_thread_flag(task, TIF_SSBD); 621 + task_update_spec_tif(task); 789 622 break; 790 623 case PR_SPEC_DISABLE: 791 624 task_set_spec_ssb_disable(task); 792 - update = !test_and_set_tsk_thread_flag(task, TIF_SSBD); 625 + task_update_spec_tif(task); 793 626 break; 794 627 case PR_SPEC_FORCE_DISABLE: 795 628 task_set_spec_ssb_disable(task); 796 629 task_set_spec_ssb_force_disable(task); 797 - update = !test_and_set_tsk_thread_flag(task, TIF_SSBD); 630 + task_update_spec_tif(task); 798 631 break; 799 632 default: 800 633 return -ERANGE; 801 634 } 635 + return 0; 636 + } 802 637 803 - /* 804 - * If being set on non-current task, delay setting the CPU 805 - * mitigation until it is next scheduled. 806 - */ 807 - if (task == current && update) 808 - speculative_store_bypass_update_current(); 809 - 638 + static int ib_prctl_set(struct task_struct *task, unsigned long ctrl) 639 + { 640 + switch (ctrl) { 641 + case PR_SPEC_ENABLE: 642 + if (spectre_v2_user == SPECTRE_V2_USER_NONE) 643 + return 0; 644 + /* 645 + * Indirect branch speculation is always disabled in strict 646 + * mode. 647 + */ 648 + if (spectre_v2_user == SPECTRE_V2_USER_STRICT) 649 + return -EPERM; 650 + task_clear_spec_ib_disable(task); 651 + task_update_spec_tif(task); 652 + break; 653 + case PR_SPEC_DISABLE: 654 + case PR_SPEC_FORCE_DISABLE: 655 + /* 656 + * Indirect branch speculation is always allowed when 657 + * mitigation is force disabled. 658 + */ 659 + if (spectre_v2_user == SPECTRE_V2_USER_NONE) 660 + return -EPERM; 661 + if (spectre_v2_user == SPECTRE_V2_USER_STRICT) 662 + return 0; 663 + task_set_spec_ib_disable(task); 664 + if (ctrl == PR_SPEC_FORCE_DISABLE) 665 + task_set_spec_ib_force_disable(task); 666 + task_update_spec_tif(task); 667 + break; 668 + default: 669 + return -ERANGE; 670 + } 810 671 return 0; 811 672 } ··· 844 649 switch (which) { 845 650 case PR_SPEC_STORE_BYPASS: 846 651 return ssb_prctl_set(task, ctrl); 652 + case PR_SPEC_INDIRECT_BRANCH: 653 + return ib_prctl_set(task, ctrl); 847 654 default: 848 655 return -ENODEV; 849 656 } ··· 856 659 { 857 660 if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP) 858 661 ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE); 662 + if (spectre_v2_user == SPECTRE_V2_USER_SECCOMP) 663 + ib_prctl_set(task, PR_SPEC_FORCE_DISABLE); 859 664 } 860 665 #endif ··· 880 681 } 881 682 } 882 683 684 + static int ib_prctl_get(struct task_struct *task) 685 + { 686 + if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2)) 687 + return PR_SPEC_NOT_AFFECTED; 688 + 689 + switch (spectre_v2_user) { 690 + case SPECTRE_V2_USER_NONE: 691 + return PR_SPEC_ENABLE; 692 + case SPECTRE_V2_USER_PRCTL: 693 + case SPECTRE_V2_USER_SECCOMP: 694 + if (task_spec_ib_force_disable(task)) 695 + return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE; 696 + if (task_spec_ib_disable(task)) 697 + return PR_SPEC_PRCTL | PR_SPEC_DISABLE; 698 + return PR_SPEC_PRCTL | PR_SPEC_ENABLE; 699 + case SPECTRE_V2_USER_STRICT: 700 + return PR_SPEC_DISABLE; 701 + default: 702 + return PR_SPEC_NOT_AFFECTED; 703 + } 704 + } 705 + 883 706 int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which) 884 707 { 885 708 switch (which) { 886 709 case PR_SPEC_STORE_BYPASS: 887 710 return ssb_prctl_get(task); 711 + case PR_SPEC_INDIRECT_BRANCH: 712 + return ib_prctl_get(task); 888 713 default: 889 714 return -ENODEV; 890 715 } ··· 1046 823 #define L1TF_DEFAULT_MSG "Mitigation: PTE Inversion" 1047 824 1048 825 #if IS_ENABLED(CONFIG_KVM_INTEL) 1049 - static const char *l1tf_vmx_states[] = { 826 + static const char * const l1tf_vmx_states[] = { 1050 827 [VMENTER_L1D_FLUSH_AUTO] = "auto", 1051 828 [VMENTER_L1D_FLUSH_NEVER] = "vulnerable", 1052 829 [VMENTER_L1D_FLUSH_COND] = "conditional cache flushes", ··· 1062 839 1063 840 if (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_EPT_DISABLED || 1064 841 (l1tf_vmx_mitigation == VMENTER_L1D_FLUSH_NEVER && 1065 - cpu_smt_control == CPU_SMT_ENABLED)) 842 + sched_smt_active())) { 1066 843 return sprintf(buf, "%s; VMX: %s\n", L1TF_DEFAULT_MSG, 1067 844 l1tf_vmx_states[l1tf_vmx_mitigation]); 845 + } 1068 846 1069 847 return sprintf(buf, "%s; VMX: %s, SMT %s\n", L1TF_DEFAULT_MSG, 1070 848 l1tf_vmx_states[l1tf_vmx_mitigation], 1071 - cpu_smt_control == CPU_SMT_ENABLED ? "vulnerable" : "disabled"); 849 + sched_smt_active() ? "vulnerable" : "disabled"); 1072 850 } 1073 851 #else 1074 852 static ssize_t l1tf_show_state(char *buf) ··· 1078 854 } 1079 855 #endif 1080 856 857 + static char *stibp_state(void) 858 + { 859 + if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED) 860 + return ""; 861 + 862 + switch (spectre_v2_user) { 863 + case SPECTRE_V2_USER_NONE: 864 + return ", STIBP: disabled"; 865 + case SPECTRE_V2_USER_STRICT: 866 + return ", STIBP: forced"; 867 + case SPECTRE_V2_USER_PRCTL: 868 + case SPECTRE_V2_USER_SECCOMP: 869 + if (static_key_enabled(&switch_to_cond_stibp)) 870 + return ", STIBP: conditional"; 871 + } 872 + return ""; 873 + } 874 + 875 + static char *ibpb_state(void) 876 + { 877 + if (boot_cpu_has(X86_FEATURE_IBPB)) { 878 + if (static_key_enabled(&switch_mm_always_ibpb)) 879 + return ", IBPB: always-on"; 880 + if (static_key_enabled(&switch_mm_cond_ibpb)) 881 + return ", IBPB: conditional"; 882 + return ", IBPB: disabled"; 883 + } 884 + return ""; 885 + } 886 + 1081 887 static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr, 1082 888 char *buf, unsigned int bug) 1083 889 { 1084 - int ret; 1085 - 1086 890 if (!boot_cpu_has_bug(bug)) 1087 891 return sprintf(buf, "Not affected\n"); 1088 892 ··· 1128 876 return sprintf(buf, "Mitigation: __user pointer sanitization\n"); 1129 877 1130 878 case X86_BUG_SPECTRE_V2: 1131 - ret = sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled], 1132 - boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "", 879 + return sprintf(buf, "%s%s%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled], 880 + ibpb_state(), 1133 881 boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "", 1134 - (x86_spec_ctrl_base & SPEC_CTRL_STIBP) ? ", STIBP" : "", 882 + stibp_state(), 1135 883 boot_cpu_has(X86_FEATURE_RSB_CTXSW) ? ", RSB filling" : "", 1136 884 spectre_v2_module_string()); 1137 - return ret; 1138 885 1139 886 case X86_BUG_SPEC_STORE_BYPASS: 1140 887 return sprintf(buf, "%s\n", ssb_strings[ssb_mode])
+6 -13
arch/x86/kernel/cpu/mcheck/mce_amd.c
··· 56 56 /* Threshold LVT offset is at MSR0xC0000410[15:12] */ 57 57 #define SMCA_THR_LVT_OFF 0xF000 58 58 59 - static bool thresholding_en; 59 + static bool thresholding_irq_en; 60 60 61 61 static const char * const th_names[] = { 62 62 "load_store", ··· 534 534 535 535 set_offset: 536 536 offset = setup_APIC_mce_threshold(offset, new); 537 - 538 - if ((offset == new) && (mce_threshold_vector != amd_threshold_interrupt)) 539 - mce_threshold_vector = amd_threshold_interrupt; 537 + if (offset == new) 538 + thresholding_irq_en = true; 540 539 541 540 done: 542 541 mce_threshold_block_init(&b, offset); ··· 1356 1357 { 1357 1358 unsigned int bank; 1358 1359 1359 - if (!thresholding_en) 1360 - return 0; 1361 - 1362 1360 for (bank = 0; bank < mca_cfg.banks; ++bank) { 1363 1361 if (!(per_cpu(bank_map, cpu) & (1 << bank))) 1364 1362 continue; ··· 1372 1376 unsigned int bank; 1373 1377 struct threshold_bank **bp; 1374 1378 int err = 0; 1375 - 1376 - if (!thresholding_en) 1377 - return 0; 1378 1379 1379 1380 bp = per_cpu(threshold_banks, cpu); 1380 1381 if (bp) ··· 1401 1408 { 1402 1409 unsigned lcpu = 0; 1403 1410 1404 - if (mce_threshold_vector == amd_threshold_interrupt) 1405 - thresholding_en = true; 1406 - 1407 1411 /* to hit CPUs online before the notifier is up */ 1408 1412 for_each_online_cpu(lcpu) { 1409 1413 int err = mce_threshold_create_device(lcpu); ··· 1408 1418 if (err) 1409 1419 return err; 1410 1420 } 1421 + 1422 + if (thresholding_irq_en) 1423 + mce_threshold_vector = amd_threshold_interrupt; 1411 1424 1412 1425 return 0; 1413 1426 }
+2 -2
arch/x86/kernel/fpu/signal.c
··· 344 344 sanitize_restored_xstate(tsk, &env, xfeatures, fx_only); 345 345 } 346 346 347 + local_bh_disable(); 347 348 fpu->initialized = 1; 348 - preempt_disable(); 349 349 fpu__restore(fpu); 350 - preempt_enable(); 350 + local_bh_enable(); 351 351 352 352 return err; 353 353 } else {
+1 -14
arch/x86/kernel/ftrace.c
··· 994 994 { 995 995 unsigned long old; 996 996 int faulted; 997 - struct ftrace_graph_ent trace; 998 997 unsigned long return_hooker = (unsigned long) 999 998 &return_to_handler; 1000 999 ··· 1045 1046 return; 1046 1047 } 1047 1048 1048 - trace.func = self_addr; 1049 - trace.depth = current->curr_ret_stack + 1; 1050 - 1051 - /* Only trace if the calling function expects to */ 1052 - if (!ftrace_graph_entry(&trace)) { 1049 + if (function_graph_enter(old, self_addr, frame_pointer, parent)) 1053 1050 *parent = old; 1054 - return; 1055 - } 1056 - 1057 - if (ftrace_push_return_trace(old, self_addr, &trace.depth, 1058 - frame_pointer, parent) == -EBUSY) { 1059 - *parent = old; 1060 - return; 1061 - } 1062 1051 } 1063 1052 #endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-1
arch/x86/kernel/head32.c
··· 37 37 cr4_init_shadow(); 38 38 39 39 sanitize_boot_params(&boot_params); 40 - x86_verify_bootdata_version(); 41 40 42 41 x86_early_init_platform_quirks(); 43 42
-2
arch/x86/kernel/head64.c
··· 457 457 if (!boot_params.hdr.version) 458 458 copy_bootdata(__va(real_mode_data)); 459 459 460 - x86_verify_bootdata_version(); 461 - 462 460 x86_early_init_platform_quirks(); 463 461 464 462 switch (boot_params.hdr.hardware_subarch) {
+82 -19
arch/x86/kernel/process.c
··· 40 40 #include <asm/prctl.h> 41 41 #include <asm/spec-ctrl.h> 42 42 43 + #include "process.h" 44 + 43 45 /* 44 46 * per-CPU TSS segments. Threads are completely 'soft' on Linux, 45 47 * no more per-task TSS's. The TSS size is kept cacheline-aligned ··· 254 252 enable_cpuid(); 255 253 } 256 254 257 - static inline void switch_to_bitmap(struct tss_struct *tss, 258 - struct thread_struct *prev, 255 + static inline void switch_to_bitmap(struct thread_struct *prev, 259 256 struct thread_struct *next, 260 257 unsigned long tifp, unsigned long tifn) 261 258 { 259 + struct tss_struct *tss = this_cpu_ptr(&cpu_tss_rw); 260 + 262 261 if (tifn & _TIF_IO_BITMAP) { 263 262 /* 264 263 * Copy the relevant range of the IO bitmap. ··· 398 395 wrmsrl(MSR_AMD64_VIRT_SPEC_CTRL, ssbd_tif_to_spec_ctrl(tifn)); 399 396 } 400 397 401 - static __always_inline void intel_set_ssb_state(unsigned long tifn) 398 + /* 399 + * Update the MSRs managing speculation control during context switch. 400 + * 401 + * tifp: Previous task's thread flags 402 + * tifn: Next task's thread flags 403 + */ 404 + static __always_inline void __speculation_ctrl_update(unsigned long tifp, 405 + unsigned long tifn) 402 406 { 403 - u64 msr = x86_spec_ctrl_base | ssbd_tif_to_spec_ctrl(tifn); 407 + unsigned long tif_diff = tifp ^ tifn; 408 + u64 msr = x86_spec_ctrl_base; 409 + bool updmsr = false; 404 410 405 - wrmsrl(MSR_IA32_SPEC_CTRL, msr); 411 + /* 412 + * If TIF_SSBD is different, select the proper mitigation 413 + * method. Note that if SSBD mitigation is disabled or permanently 414 + * enabled this branch can't be taken because nothing can set 415 + * TIF_SSBD. 
416 + */ 417 + if (tif_diff & _TIF_SSBD) { 418 + if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) { 419 + amd_set_ssb_virt_state(tifn); 420 + } else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) { 421 + amd_set_core_ssb_state(tifn); 422 + } else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) || 423 + static_cpu_has(X86_FEATURE_AMD_SSBD)) { 424 + msr |= ssbd_tif_to_spec_ctrl(tifn); 425 + updmsr = true; 426 + } 427 + } 428 + 429 + /* 430 + * Only evaluate TIF_SPEC_IB if conditional STIBP is enabled, 431 + * otherwise avoid the MSR write. 432 + */ 433 + if (IS_ENABLED(CONFIG_SMP) && 434 + static_branch_unlikely(&switch_to_cond_stibp)) { 435 + updmsr |= !!(tif_diff & _TIF_SPEC_IB); 436 + msr |= stibp_tif_to_spec_ctrl(tifn); 437 + } 438 + 439 + if (updmsr) 440 + wrmsrl(MSR_IA32_SPEC_CTRL, msr); 406 441 } 407 442 408 - static __always_inline void __speculative_store_bypass_update(unsigned long tifn) 443 + static unsigned long speculation_ctrl_update_tif(struct task_struct *tsk) 409 444 { 410 - if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) 411 - amd_set_ssb_virt_state(tifn); 412 - else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) 413 - amd_set_core_ssb_state(tifn); 414 - else 415 - intel_set_ssb_state(tifn); 445 + if (test_and_clear_tsk_thread_flag(tsk, TIF_SPEC_FORCE_UPDATE)) { 446 + if (task_spec_ssb_disable(tsk)) 447 + set_tsk_thread_flag(tsk, TIF_SSBD); 448 + else 449 + clear_tsk_thread_flag(tsk, TIF_SSBD); 450 + 451 + if (task_spec_ib_disable(tsk)) 452 + set_tsk_thread_flag(tsk, TIF_SPEC_IB); 453 + else 454 + clear_tsk_thread_flag(tsk, TIF_SPEC_IB); 455 + } 456 + /* Return the updated threadinfo flags */ 457 + return task_thread_info(tsk)->flags; 416 458 } 417 459 418 - void speculative_store_bypass_update(unsigned long tif) 460 + void speculation_ctrl_update(unsigned long tif) 419 461 { 462 + /* Forced update. 
Make sure all relevant TIF flags are different */ 420 463 preempt_disable(); 421 - __speculative_store_bypass_update(tif); 464 + __speculation_ctrl_update(~tif, tif); 422 465 preempt_enable(); 423 466 } 424 467 425 - void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p, 426 - struct tss_struct *tss) 468 + /* Called from seccomp/prctl update */ 469 + void speculation_ctrl_update_current(void) 470 + { 471 + preempt_disable(); 472 + speculation_ctrl_update(speculation_ctrl_update_tif(current)); 473 + preempt_enable(); 474 + } 475 + 476 + void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p) 427 477 { 428 478 struct thread_struct *prev, *next; 429 479 unsigned long tifp, tifn; ··· 486 430 487 431 tifn = READ_ONCE(task_thread_info(next_p)->flags); 488 432 tifp = READ_ONCE(task_thread_info(prev_p)->flags); 489 - switch_to_bitmap(tss, prev, next, tifp, tifn); 433 + switch_to_bitmap(prev, next, tifp, tifn); 490 434 491 435 propagate_user_return_notify(prev_p, next_p); 492 436 ··· 507 451 if ((tifp ^ tifn) & _TIF_NOCPUID) 508 452 set_cpuid_faulting(!!(tifn & _TIF_NOCPUID)); 509 453 510 - if ((tifp ^ tifn) & _TIF_SSBD) 511 - __speculative_store_bypass_update(tifn); 454 + if (likely(!((tifp | tifn) & _TIF_SPEC_FORCE_UPDATE))) { 455 + __speculation_ctrl_update(tifp, tifn); 456 + } else { 457 + speculation_ctrl_update_tif(prev_p); 458 + tifn = speculation_ctrl_update_tif(next_p); 459 + 460 + /* Enforce MSR update to ensure consistent state */ 461 + __speculation_ctrl_update(~tifn, tifn); 462 + } 512 463 } 513 464 514 465 /*
+39
arch/x86/kernel/process.h
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // 3 + // Code shared between 32 and 64 bit 4 + 5 + #include <asm/spec-ctrl.h> 6 + 7 + void __switch_to_xtra(struct task_struct *prev_p, struct task_struct *next_p); 8 + 9 + /* 10 + * This needs to be inline to optimize for the common case where no extra 11 + * work needs to be done. 12 + */ 13 + static inline void switch_to_extra(struct task_struct *prev, 14 + struct task_struct *next) 15 + { 16 + unsigned long next_tif = task_thread_info(next)->flags; 17 + unsigned long prev_tif = task_thread_info(prev)->flags; 18 + 19 + if (IS_ENABLED(CONFIG_SMP)) { 20 + /* 21 + * Avoid __switch_to_xtra() invocation when conditional 22 + * STIBP is disabled and the only different bit is 23 + * TIF_SPEC_IB. For CONFIG_SMP=n TIF_SPEC_IB is not 24 + * in the TIF_WORK_CTXSW masks. 25 + */ 26 + if (!static_branch_likely(&switch_to_cond_stibp)) { 27 + prev_tif &= ~_TIF_SPEC_IB; 28 + next_tif &= ~_TIF_SPEC_IB; 29 + } 30 + } 31 + 32 + /* 33 + * __switch_to_xtra() handles debug registers, i/o bitmaps, 34 + * speculation mitigations etc. 35 + */ 36 + if (unlikely(next_tif & _TIF_WORK_CTXSW_NEXT || 37 + prev_tif & _TIF_WORK_CTXSW_PREV)) 38 + __switch_to_xtra(prev, next); 39 + }
+3 -7
arch/x86/kernel/process_32.c
··· 59 59 #include <asm/intel_rdt_sched.h> 60 60 #include <asm/proto.h> 61 61 62 + #include "process.h" 63 + 62 64 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode) 63 65 { 64 66 unsigned long cr0 = 0L, cr2 = 0L, cr3 = 0L, cr4 = 0L; ··· 234 232 struct fpu *prev_fpu = &prev->fpu; 235 233 struct fpu *next_fpu = &next->fpu; 236 234 int cpu = smp_processor_id(); 237 - struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu); 238 235 239 236 /* never put a printk in __switch_to... printk() calls wake_up*() indirectly */ 240 237 ··· 265 264 if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl)) 266 265 set_iopl_mask(next->iopl); 267 266 268 - /* 269 - * Now maybe handle debug registers and/or IO bitmaps 270 - */ 271 - if (unlikely(task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV || 272 - task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT)) 273 - __switch_to_xtra(prev_p, next_p, tss); 267 + switch_to_extra(prev_p, next_p); 274 268 275 269 /* 276 270 * Leave lazy mode, flushing any hypercalls made here.
+3 -7
arch/x86/kernel/process_64.c
··· 60 60 #include <asm/unistd_32_ia32.h> 61 61 #endif 62 62 63 + #include "process.h" 64 + 63 65 /* Prints also some state that isn't saved in the pt_regs */ 64 66 void __show_regs(struct pt_regs *regs, enum show_regs_mode mode) 65 67 { ··· 555 553 struct fpu *prev_fpu = &prev->fpu; 556 554 struct fpu *next_fpu = &next->fpu; 557 555 int cpu = smp_processor_id(); 558 - struct tss_struct *tss = &per_cpu(cpu_tss_rw, cpu); 559 556 560 557 WARN_ON_ONCE(IS_ENABLED(CONFIG_DEBUG_ENTRY) && 561 558 this_cpu_read(irq_count) != -1); ··· 618 617 /* Reload sp0. */ 619 618 update_task_stack(next_p); 620 619 621 - /* 622 - * Now maybe reload the debug registers and handle I/O bitmaps 623 - */ 624 - if (unlikely(task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT || 625 - task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV)) 626 - __switch_to_xtra(prev_p, next_p, tss); 620 + switch_to_extra(prev_p, next_p); 627 621 628 622 #ifdef CONFIG_XEN_PV 629 623 /*
-17
arch/x86/kernel/setup.c
··· 1280 1280 unwind_init(); 1281 1281 } 1282 1282 1283 - /* 1284 - * From boot protocol 2.14 onwards we expect the bootloader to set the 1285 - * version to "0x8000 | <used version>". In case we find a version >= 2.14 1286 - * without the 0x8000 we assume the boot loader supports 2.13 only and 1287 - * reset the version accordingly. The 0x8000 flag is removed in any case. 1288 - */ 1289 - void __init x86_verify_bootdata_version(void) 1290 - { 1291 - if (boot_params.hdr.version & VERSION_WRITTEN) 1292 - boot_params.hdr.version &= ~VERSION_WRITTEN; 1293 - else if (boot_params.hdr.version >= 0x020e) 1294 - boot_params.hdr.version = 0x020d; 1295 - 1296 - if (boot_params.hdr.version < 0x020e) 1297 - boot_params.hdr.acpi_rsdp_addr = 0; 1298 - } 1299 - 1300 1283 #ifdef CONFIG_X86_32 1301 1284 1302 1285 static struct resource video_ram_resource = {
+6 -1
arch/x86/kvm/lapic.c
··· 55 55 #define PRIo64 "o" 56 56 57 57 /* #define apic_debug(fmt,arg...) printk(KERN_WARNING fmt,##arg) */ 58 - #define apic_debug(fmt, arg...) 58 + #define apic_debug(fmt, arg...) do {} while (0) 59 59 60 60 /* 14 is the version for Xeon and Pentium 8.4.8*/ 61 61 #define APIC_VERSION (0x14UL | ((KVM_APIC_LVT_NUM - 1) << 16)) ··· 575 575 576 576 rcu_read_lock(); 577 577 map = rcu_dereference(kvm->arch.apic_map); 578 + 579 + if (unlikely(!map)) { 580 + count = -EOPNOTSUPP; 581 + goto out; 582 + } 578 583 579 584 if (min > map->max_apic_id) 580 585 goto out;
+9 -18
arch/x86/kvm/mmu.c
··· 5074 5074 } 5075 5075 5076 5076 static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa, 5077 - const u8 *new, int *bytes) 5077 + int *bytes) 5078 5078 { 5079 - u64 gentry; 5079 + u64 gentry = 0; 5080 5080 int r; 5081 5081 5082 5082 /* ··· 5088 5088 /* Handle a 32-bit guest writing two halves of a 64-bit gpte */ 5089 5089 *gpa &= ~(gpa_t)7; 5090 5090 *bytes = 8; 5091 - r = kvm_vcpu_read_guest(vcpu, *gpa, &gentry, 8); 5092 - if (r) 5093 - gentry = 0; 5094 - new = (const u8 *)&gentry; 5095 5091 } 5096 5092 5097 - switch (*bytes) { 5098 - case 4: 5099 - gentry = *(const u32 *)new; 5100 - break; 5101 - case 8: 5102 - gentry = *(const u64 *)new; 5103 - break; 5104 - default: 5105 - gentry = 0; 5106 - break; 5093 + if (*bytes == 4 || *bytes == 8) { 5094 + r = kvm_vcpu_read_guest_atomic(vcpu, *gpa, &gentry, *bytes); 5095 + if (r) 5096 + gentry = 0; 5107 5097 } 5108 5098 5109 5099 return gentry; ··· 5197 5207 5198 5208 pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes); 5199 5209 5200 - gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, new, &bytes); 5201 - 5202 5210 /* 5203 5211 * No need to care whether memory allocation is successful 5204 5212 * or not since pte prefetch is skipped if it does not have ··· 5205 5217 mmu_topup_memory_caches(vcpu); 5206 5218 5207 5219 spin_lock(&vcpu->kvm->mmu_lock); 5220 + 5221 + gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes); 5222 + 5208 5223 ++vcpu->kvm->stat.mmu_pte_write; 5209 5224 kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE); 5210 5225
+29 -15
arch/x86/kvm/svm.c
··· 1446 1446 return vcpu->arch.tsc_offset; 1447 1447 } 1448 1448 1449 - static void svm_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 1449 + static u64 svm_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 1450 1450 { 1451 1451 struct vcpu_svm *svm = to_svm(vcpu); 1452 1452 u64 g_tsc_offset = 0; ··· 1464 1464 svm->vmcb->control.tsc_offset = offset + g_tsc_offset; 1465 1465 1466 1466 mark_dirty(svm->vmcb, VMCB_INTERCEPTS); 1467 + return svm->vmcb->control.tsc_offset; 1467 1468 } 1468 1469 1469 1470 static void avic_init_vmcb(struct vcpu_svm *svm) ··· 1665 1664 static int avic_init_access_page(struct kvm_vcpu *vcpu) 1666 1665 { 1667 1666 struct kvm *kvm = vcpu->kvm; 1668 - int ret; 1667 + int ret = 0; 1669 1668 1669 + mutex_lock(&kvm->slots_lock); 1670 1670 if (kvm->arch.apic_access_page_done) 1671 - return 0; 1671 + goto out; 1672 1672 1673 - ret = x86_set_memory_region(kvm, 1674 - APIC_ACCESS_PAGE_PRIVATE_MEMSLOT, 1675 - APIC_DEFAULT_PHYS_BASE, 1676 - PAGE_SIZE); 1673 + ret = __x86_set_memory_region(kvm, 1674 + APIC_ACCESS_PAGE_PRIVATE_MEMSLOT, 1675 + APIC_DEFAULT_PHYS_BASE, 1676 + PAGE_SIZE); 1677 1677 if (ret) 1678 - return ret; 1678 + goto out; 1679 1679 1680 1680 kvm->arch.apic_access_page_done = true; 1681 - return 0; 1681 + out: 1682 + mutex_unlock(&kvm->slots_lock); 1683 + return ret; 1682 1684 } 1683 1685 1684 1686 static int avic_init_backing_page(struct kvm_vcpu *vcpu) ··· 2193 2189 return ERR_PTR(err); 2194 2190 } 2195 2191 2192 + static void svm_clear_current_vmcb(struct vmcb *vmcb) 2193 + { 2194 + int i; 2195 + 2196 + for_each_online_cpu(i) 2197 + cmpxchg(&per_cpu(svm_data, i)->current_vmcb, vmcb, NULL); 2198 + } 2199 + 2196 2200 static void svm_free_vcpu(struct kvm_vcpu *vcpu) 2197 2201 { 2198 2202 struct vcpu_svm *svm = to_svm(vcpu); 2203 + 2204 + /* 2205 + * The vmcb page can be recycled, causing a false negative in 2206 + * svm_vcpu_load(). So, ensure that no logical CPU has this 2207 + * vmcb page recorded as its current vmcb. 
2208 + */ 2209 + svm_clear_current_vmcb(svm->vmcb); 2199 2210 2200 2211 __free_page(pfn_to_page(__sme_clr(svm->vmcb_pa) >> PAGE_SHIFT)); 2201 2212 __free_pages(virt_to_page(svm->msrpm), MSRPM_ALLOC_ORDER); ··· 2218 2199 __free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER); 2219 2200 kvm_vcpu_uninit(vcpu); 2220 2201 kmem_cache_free(kvm_vcpu_cache, svm); 2221 - /* 2222 - * The vmcb page can be recycled, causing a false negative in 2223 - * svm_vcpu_load(). So do a full IBPB now. 2224 - */ 2225 - indirect_branch_prediction_barrier(); 2226 2202 } 2227 2203 2228 2204 static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu) ··· 7163 7149 .has_wbinvd_exit = svm_has_wbinvd_exit, 7164 7150 7165 7151 .read_l1_tsc_offset = svm_read_l1_tsc_offset, 7166 - .write_tsc_offset = svm_write_tsc_offset, 7152 + .write_l1_tsc_offset = svm_write_l1_tsc_offset, 7167 7153 7168 7154 .set_tdp_cr3 = set_tdp_cr3, 7169 7155
+65 -33
arch/x86/kvm/vmx.c
··· 174 174 * refer SDM volume 3b section 21.6.13 & 22.1.3. 175 175 */ 176 176 static unsigned int ple_gap = KVM_DEFAULT_PLE_GAP; 177 + module_param(ple_gap, uint, 0444); 177 178 178 179 static unsigned int ple_window = KVM_VMX_DEFAULT_PLE_WINDOW; 179 180 module_param(ple_window, uint, 0444); ··· 985 984 struct shared_msr_entry *guest_msrs; 986 985 int nmsrs; 987 986 int save_nmsrs; 987 + bool guest_msrs_dirty; 988 988 unsigned long host_idt_base; 989 989 #ifdef CONFIG_X86_64 990 990 u64 msr_host_kernel_gs_base; ··· 1308 1306 static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12, 1309 1307 u16 error_code); 1310 1308 static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu); 1311 - static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap, 1309 + static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap, 1312 1310 u32 msr, int type); 1313 1311 1314 1312 static DEFINE_PER_CPU(struct vmcs *, vmxarea); ··· 1612 1610 { 1613 1611 struct vcpu_vmx *vmx = to_vmx(vcpu); 1614 1612 1615 - /* We don't support disabling the feature for simplicity. */ 1616 - if (vmx->nested.enlightened_vmcs_enabled) 1617 - return 0; 1618 - 1619 - vmx->nested.enlightened_vmcs_enabled = true; 1620 - 1621 1613 /* 1622 1614 * vmcs_version represents the range of supported Enlightened VMCS 1623 1615 * versions: lower 8 bits is the minimal version, higher 8 bits is the ··· 1620 1624 */ 1621 1625 if (vmcs_version) 1622 1626 *vmcs_version = (KVM_EVMCS_VERSION << 8) | 1; 1627 + 1628 + /* We don't support disabling the feature for simplicity. 
*/ 1629 + if (vmx->nested.enlightened_vmcs_enabled) 1630 + return 0; 1631 + 1632 + vmx->nested.enlightened_vmcs_enabled = true; 1623 1633 1624 1634 vmx->nested.msrs.pinbased_ctls_high &= ~EVMCS1_UNSUPPORTED_PINCTRL; 1625 1635 vmx->nested.msrs.entry_ctls_high &= ~EVMCS1_UNSUPPORTED_VMENTRY_CTRL; ··· 2899 2897 2900 2898 vmx->req_immediate_exit = false; 2901 2899 2900 + /* 2901 + * Note that guest MSRs to be saved/restored can also be changed 2902 + * when guest state is loaded. This happens when guest transitions 2903 + * to/from long-mode by setting MSR_EFER.LMA. 2904 + */ 2905 + if (!vmx->loaded_cpu_state || vmx->guest_msrs_dirty) { 2906 + vmx->guest_msrs_dirty = false; 2907 + for (i = 0; i < vmx->save_nmsrs; ++i) 2908 + kvm_set_shared_msr(vmx->guest_msrs[i].index, 2909 + vmx->guest_msrs[i].data, 2910 + vmx->guest_msrs[i].mask); 2911 + 2912 + } 2913 + 2902 2914 if (vmx->loaded_cpu_state) 2903 2915 return; 2904 2916 ··· 2973 2957 vmcs_writel(HOST_GS_BASE, gs_base); 2974 2958 host_state->gs_base = gs_base; 2975 2959 } 2976 - 2977 - for (i = 0; i < vmx->save_nmsrs; ++i) 2978 - kvm_set_shared_msr(vmx->guest_msrs[i].index, 2979 - vmx->guest_msrs[i].data, 2980 - vmx->guest_msrs[i].mask); 2981 2960 } 2982 2961 2983 2962 static void vmx_prepare_switch_to_host(struct vcpu_vmx *vmx) ··· 3447 3436 move_msr_up(vmx, index, save_nmsrs++); 3448 3437 3449 3438 vmx->save_nmsrs = save_nmsrs; 3439 + vmx->guest_msrs_dirty = true; 3450 3440 3451 3441 if (cpu_has_vmx_msr_bitmap()) 3452 3442 vmx_update_msr_bitmap(&vmx->vcpu); ··· 3464 3452 return vcpu->arch.tsc_offset; 3465 3453 } 3466 3454 3467 - /* 3468 - * writes 'offset' into guest's timestamp counter offset register 3469 - */ 3470 - static void vmx_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 3455 + static u64 vmx_write_l1_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 3471 3456 { 3457 + u64 active_offset = offset; 3472 3458 if (is_guest_mode(vcpu)) { 3473 3459 /* 3474 3460 * We're here if L1 chose not to trap WRMSR to TSC. 
According ··· 3474 3464 * set for L2 remains unchanged, and still needs to be added 3475 3465 * to the newly set TSC to get L2's TSC. 3476 3466 */ 3477 - struct vmcs12 *vmcs12; 3478 - /* recalculate vmcs02.TSC_OFFSET: */ 3479 - vmcs12 = get_vmcs12(vcpu); 3480 - vmcs_write64(TSC_OFFSET, offset + 3481 - (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING) ? 3482 - vmcs12->tsc_offset : 0)); 3467 + struct vmcs12 *vmcs12 = get_vmcs12(vcpu); 3468 + if (nested_cpu_has(vmcs12, CPU_BASED_USE_TSC_OFFSETING)) 3469 + active_offset += vmcs12->tsc_offset; 3483 3470 } else { 3484 3471 trace_kvm_write_tsc_offset(vcpu->vcpu_id, 3485 3472 vmcs_read64(TSC_OFFSET), offset); 3486 - vmcs_write64(TSC_OFFSET, offset); 3487 3473 } 3474 + 3475 + vmcs_write64(TSC_OFFSET, active_offset); 3476 + return active_offset; 3488 3477 } 3489 3478 3490 3479 /* ··· 5953 5944 spin_unlock(&vmx_vpid_lock); 5954 5945 } 5955 5946 5956 - static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap, 5947 + static __always_inline void vmx_disable_intercept_for_msr(unsigned long *msr_bitmap, 5957 5948 u32 msr, int type) 5958 5949 { 5959 5950 int f = sizeof(unsigned long); ··· 5991 5982 } 5992 5983 } 5993 5984 5994 - static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitmap, 5985 + static __always_inline void vmx_enable_intercept_for_msr(unsigned long *msr_bitmap, 5995 5986 u32 msr, int type) 5996 5987 { 5997 5988 int f = sizeof(unsigned long); ··· 6029 6020 } 6030 6021 } 6031 6022 6032 - static void __always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap, 6023 + static __always_inline void vmx_set_intercept_for_msr(unsigned long *msr_bitmap, 6033 6024 u32 msr, int type, bool value) 6034 6025 { 6035 6026 if (value) ··· 8673 8664 struct vmcs12 *vmcs12 = vmx->nested.cached_vmcs12; 8674 8665 struct hv_enlightened_vmcs *evmcs = vmx->nested.hv_evmcs; 8675 8666 8676 - vmcs12->hdr.revision_id = evmcs->revision_id; 8677 - 8678 8667 /* 
HV_VMX_ENLIGHTENED_CLEAN_FIELD_NONE */ 8679 8668 vmcs12->tpr_threshold = evmcs->tpr_threshold; 8680 8669 vmcs12->guest_rip = evmcs->guest_rip; ··· 9376 9369 9377 9370 vmx->nested.hv_evmcs = kmap(vmx->nested.hv_evmcs_page); 9378 9371 9379 - if (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION) { 9372 + /* 9373 + * Currently, KVM only supports eVMCS version 1 9374 + * (== KVM_EVMCS_VERSION) and thus we expect the guest to set this 9375 + * value in the first u32 field of the eVMCS, which should specify 9376 + * the eVMCS VersionNumber. 9377 + * 9378 + * The guest should learn the eVMCS versions supported by the host 9379 + * by examining CPUID.0x4000000A.EAX[0:15]. The host userspace VMM is 9380 + * expected to set this CPUID leaf according to the value 9381 + * returned in vmcs_version from nested_enable_evmcs(). 9382 + * 9383 + * However, it turns out that Microsoft Hyper-V fails to comply 9384 + * with its own invented interface: when Hyper-V uses eVMCS, it 9385 + * just sets the first u32 field of the eVMCS to the revision_id 9386 + * specified in MSR_IA32_VMX_BASIC, instead of the eVMCS version 9387 + * number, which is one of the supported versions specified in 9388 + * CPUID.0x4000000A.EAX[0:15]. 9389 + * 9390 + * To work around this Hyper-V bug, we accept here either a supported 9391 + * eVMCS version or the VMCS12 revision_id as valid values for the 9392 + * first u32 field of the eVMCS. 9393 + */ 9394 + if ((vmx->nested.hv_evmcs->revision_id != KVM_EVMCS_VERSION) && 9395 + (vmx->nested.hv_evmcs->revision_id != VMCS12_REVISION)) { 9380 9396 nested_release_evmcs(vcpu); 9381 9397 return 0; 9382 9398 } ··· 9420 9390 * present in struct hv_enlightened_vmcs, ...). Make sure there 9421 9391 * are no leftovers. 
9422 9392 */ 9423 - if (from_launch) 9424 - memset(vmx->nested.cached_vmcs12, 0, 9425 - sizeof(*vmx->nested.cached_vmcs12)); 9393 + if (from_launch) { 9394 + struct vmcs12 *vmcs12 = get_vmcs12(vcpu); 9395 + memset(vmcs12, 0, sizeof(*vmcs12)); 9396 + vmcs12->hdr.revision_id = VMCS12_REVISION; 9397 + } 9426 9398 9427 9399 } 9428 9400 return 1; ··· 15094 15062 .has_wbinvd_exit = cpu_has_vmx_wbinvd_exit, 15095 15063 15096 15064 .read_l1_tsc_offset = vmx_read_l1_tsc_offset, 15097 - .write_tsc_offset = vmx_write_tsc_offset, 15065 + .write_l1_tsc_offset = vmx_write_l1_tsc_offset, 15098 15066 15099 15067 .set_tdp_cr3 = vmx_set_cr3, 15100 15068
+6 -4
arch/x86/kvm/x86.c
··· 1665 1665 1666 1666 static void kvm_vcpu_write_tsc_offset(struct kvm_vcpu *vcpu, u64 offset) 1667 1667 { 1668 - kvm_x86_ops->write_tsc_offset(vcpu, offset); 1669 - vcpu->arch.tsc_offset = offset; 1668 + vcpu->arch.tsc_offset = kvm_x86_ops->write_l1_tsc_offset(vcpu, offset); 1670 1669 } 1671 1670 1672 1671 static inline bool kvm_check_tsc_unstable(void) ··· 1793 1794 static inline void adjust_tsc_offset_guest(struct kvm_vcpu *vcpu, 1794 1795 s64 adjustment) 1795 1796 { 1796 - kvm_vcpu_write_tsc_offset(vcpu, vcpu->arch.tsc_offset + adjustment); 1797 + u64 tsc_offset = kvm_x86_ops->read_l1_tsc_offset(vcpu); 1798 + kvm_vcpu_write_tsc_offset(vcpu, tsc_offset + adjustment); 1797 1799 } 1798 1800 1799 1801 static inline void adjust_tsc_offset_host(struct kvm_vcpu *vcpu, s64 adjustment) ··· 6918 6918 clock_pairing.nsec = ts.tv_nsec; 6919 6919 clock_pairing.tsc = kvm_read_l1_tsc(vcpu, cycle); 6920 6920 clock_pairing.flags = 0; 6921 + memset(&clock_pairing.pad, 0, sizeof(clock_pairing.pad)); 6921 6922 6922 6923 ret = 0; 6923 6924 if (kvm_write_guest(vcpu->kvm, paddr, &clock_pairing, ··· 7456 7455 else { 7457 7456 if (vcpu->arch.apicv_active) 7458 7457 kvm_x86_ops->sync_pir_to_irr(vcpu); 7459 - kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors); 7458 + if (ioapic_in_kernel(vcpu->kvm)) 7459 + kvm_ioapic_scan_entry(vcpu, vcpu->arch.ioapic_handled_vectors); 7460 7460 } 7461 7461 7462 7462 if (is_guest_mode(vcpu))
+86 -29
arch/x86/mm/tlb.c
··· 7 7 #include <linux/export.h> 8 8 #include <linux/cpu.h> 9 9 #include <linux/debugfs.h> 10 - #include <linux/ptrace.h> 11 10 12 11 #include <asm/tlbflush.h> 13 12 #include <asm/mmu_context.h> ··· 28 29 * 29 30 * Implement flush IPI by CALL_FUNCTION_VECTOR, Alex Shi 30 31 */ 32 + 33 + /* 34 + * Use bit 0 to mangle the TIF_SPEC_IB state into the mm pointer which is 35 + * stored in cpu_tlbstate.last_user_mm_ibpb. 36 + */ 37 + #define LAST_USER_MM_IBPB 0x1UL 31 38 32 39 /* 33 40 * We get here when we do something requiring a TLB invalidation ··· 186 181 } 187 182 } 188 183 189 - static bool ibpb_needed(struct task_struct *tsk, u64 last_ctx_id) 184 + static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next) 190 185 { 186 + unsigned long next_tif = task_thread_info(next)->flags; 187 + unsigned long ibpb = (next_tif >> TIF_SPEC_IB) & LAST_USER_MM_IBPB; 188 + 189 + return (unsigned long)next->mm | ibpb; 190 + } 191 + 192 + static void cond_ibpb(struct task_struct *next) 193 + { 194 + if (!next || !next->mm) 195 + return; 196 + 191 197 /* 192 - * Check if the current (previous) task has access to the memory 193 - * of the @tsk (next) task. If access is denied, make sure to 194 - * issue a IBPB to stop user->user Spectre-v2 attacks. 195 - * 196 - * Note: __ptrace_may_access() returns 0 or -ERRNO. 198 + * Both the conditional and the always-on IBPB modes use the mm 199 + * pointer to avoid the IBPB when switching between tasks of the 200 + * same process. Using the mm pointer instead of mm->context.ctx_id 201 + * opens a hypothetical hole vs. mm_struct reuse, which is more or 202 + * less impossible to control by an attacker. Aside from that, it 203 + * would only affect the first schedule, so the theoretically 204 + * exposed data is not really interesting. 
197 205 */ 198 - return (tsk && tsk->mm && tsk->mm->context.ctx_id != last_ctx_id && 199 - ptrace_may_access_sched(tsk, PTRACE_MODE_SPEC_IBPB)); 206 + if (static_branch_likely(&switch_mm_cond_ibpb)) { 207 + unsigned long prev_mm, next_mm; 208 + 209 + /* 210 + * This is a bit more complex than the always mode because 211 + * it has to handle two cases: 212 + * 213 + * 1) Switch from a user space task (potential attacker) 214 + * which has TIF_SPEC_IB set to a user space task 215 + * (potential victim) which has TIF_SPEC_IB not set. 216 + * 217 + * 2) Switch from a user space task (potential attacker) 218 + * which has TIF_SPEC_IB not set to a user space task 219 + * (potential victim) which has TIF_SPEC_IB set. 220 + * 221 + * This could be done by unconditionally issuing IBPB when 222 + * a task which has TIF_SPEC_IB set is either scheduled in 223 + * or out. Though that results in two flushes when: 224 + * 225 + * - the same user space task is scheduled out and later 226 + * scheduled in again and only a kernel thread ran in 227 + * between. 228 + * 229 + * - a user space task belonging to the same process is 230 + * scheduled in after a kernel thread ran in between 231 + * 232 + * - a user space task belonging to the same process is 233 + * scheduled in immediately. 234 + * 235 + * Optimize this with reasonably small overhead for the 236 + * above cases. Mangle the TIF_SPEC_IB bit into the mm 237 + * pointer of the incoming task which is stored in 238 + * cpu_tlbstate.last_user_mm_ibpb for comparison. 239 + */ 240 + next_mm = mm_mangle_tif_spec_ib(next); 241 + prev_mm = this_cpu_read(cpu_tlbstate.last_user_mm_ibpb); 242 + 243 + /* 244 + * Issue IBPB only if the mm's are different and one or 245 + * both have the IBPB bit set. 
246 + */ 247 + if (next_mm != prev_mm && 248 + (next_mm | prev_mm) & LAST_USER_MM_IBPB) 249 + indirect_branch_prediction_barrier(); 250 + 251 + this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, next_mm); 252 + } 253 + 254 + if (static_branch_unlikely(&switch_mm_always_ibpb)) { 255 + /* 256 + * Only flush when switching to a user space task with a 257 + * different context than the user space task which ran 258 + * last on this CPU. 259 + */ 260 + if (this_cpu_read(cpu_tlbstate.last_user_mm) != next->mm) { 261 + indirect_branch_prediction_barrier(); 262 + this_cpu_write(cpu_tlbstate.last_user_mm, next->mm); 263 + } 264 + } 200 265 } 201 266 202 267 void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next, ··· 367 292 new_asid = prev_asid; 368 293 need_flush = true; 369 294 } else { 370 - u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id); 371 - 372 295 /* 373 296 * Avoid user/user BTB poisoning by flushing the branch 374 297 * predictor when switching between processes. This stops 375 298 * one process from doing Spectre-v2 attacks on another. 376 - * 377 - * As an optimization, flush indirect branches only when 378 - * switching into a processes that can't be ptrace by the 379 - * current one (as in such case, attacker has much more 380 - * convenient way how to tamper with the next process than 381 - * branch buffer poisoning). 382 299 */ 383 - if (static_cpu_has(X86_FEATURE_USE_IBPB) && 384 - ibpb_needed(tsk, last_ctx_id)) 385 - indirect_branch_prediction_barrier(); 300 + cond_ibpb(tsk); 386 301 387 302 if (IS_ENABLED(CONFIG_VMAP_STACK)) { 388 303 /* ··· 429 364 /* See above wrt _rcuidle. */ 430 365 trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0); 431 366 } 432 - 433 - /* 434 - * Record last user mm's context id, so we can avoid 435 - * flushing branch buffer with IBPB if we switch back 436 - * to the same user. 
437 - */ 438 - if (next != &init_mm) 439 - this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id); 440 367 441 368 /* Make sure we write CR3 before loaded_mm. */ 442 369 barrier(); ··· 498 441 write_cr3(build_cr3(mm->pgd, 0)); 499 442 500 443 /* Reinitialize tlbstate. */ 501 - this_cpu_write(cpu_tlbstate.last_ctx_id, mm->context.ctx_id); 444 + this_cpu_write(cpu_tlbstate.last_user_mm_ibpb, LAST_USER_MM_IBPB); 502 445 this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0); 503 446 this_cpu_write(cpu_tlbstate.next_asid, 1); 504 447 this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);
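The core trick in the tlb.c hunks above can be sketched in isolation: fold the per-task IBPB flag into bit 0 of the mm pointer (which is free, since `mm_struct` is word-aligned), so a single compare-and-OR decides whether a branch-prediction barrier is needed. A user-space sketch with illustrative names, not the kernel's exact helpers:

```c
#include <assert.h>
#include <stdint.h>

/* Bit 0 of an mm pointer is always clear (the struct is word-aligned),
 * so it can carry the task's "IBPB wanted" flag. */
#define LAST_USER_MM_IBPB 0x1UL

static uintptr_t mangle_mm(const void *mm, int spec_ib_set)
{
	return (uintptr_t)mm | (spec_ib_set ? LAST_USER_MM_IBPB : 0);
}

/* A barrier is needed only when the mangled values differ AND at least
 * one side has the IBPB bit set -- switches between threads of the same
 * process and flag-free switches are filtered out with one comparison. */
static int ibpb_needed(uintptr_t prev_mm, uintptr_t next_mm)
{
	return next_mm != prev_mm &&
	       ((next_mm | prev_mm) & LAST_USER_MM_IBPB) != 0;
}
```

This is why the comment above notes that both the conditional and the always-IBPB modes key off the mm pointer rather than `mm->context.ctx_id`: the pointer comparison alone already suppresses redundant barriers within one process.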
-78
arch/x86/xen/enlighten.c
··· 10 10 #include <xen/xen.h> 11 11 #include <xen/features.h> 12 12 #include <xen/page.h> 13 - #include <xen/interface/memory.h> 14 13 15 14 #include <asm/xen/hypercall.h> 16 15 #include <asm/xen/hypervisor.h> ··· 345 346 } 346 347 EXPORT_SYMBOL(xen_arch_unregister_cpu); 347 348 #endif 348 - 349 - #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG 350 - void __init arch_xen_balloon_init(struct resource *hostmem_resource) 351 - { 352 - struct xen_memory_map memmap; 353 - int rc; 354 - unsigned int i, last_guest_ram; 355 - phys_addr_t max_addr = PFN_PHYS(max_pfn); 356 - struct e820_table *xen_e820_table; 357 - const struct e820_entry *entry; 358 - struct resource *res; 359 - 360 - if (!xen_initial_domain()) 361 - return; 362 - 363 - xen_e820_table = kmalloc(sizeof(*xen_e820_table), GFP_KERNEL); 364 - if (!xen_e820_table) 365 - return; 366 - 367 - memmap.nr_entries = ARRAY_SIZE(xen_e820_table->entries); 368 - set_xen_guest_handle(memmap.buffer, xen_e820_table->entries); 369 - rc = HYPERVISOR_memory_op(XENMEM_machine_memory_map, &memmap); 370 - if (rc) { 371 - pr_warn("%s: Can't read host e820 (%d)\n", __func__, rc); 372 - goto out; 373 - } 374 - 375 - last_guest_ram = 0; 376 - for (i = 0; i < memmap.nr_entries; i++) { 377 - if (xen_e820_table->entries[i].addr >= max_addr) 378 - break; 379 - if (xen_e820_table->entries[i].type == E820_TYPE_RAM) 380 - last_guest_ram = i; 381 - } 382 - 383 - entry = &xen_e820_table->entries[last_guest_ram]; 384 - if (max_addr >= entry->addr + entry->size) 385 - goto out; /* No unallocated host RAM. */ 386 - 387 - hostmem_resource->start = max_addr; 388 - hostmem_resource->end = entry->addr + entry->size; 389 - 390 - /* 391 - * Mark non-RAM regions between the end of dom0 RAM and end of host RAM 392 - * as unavailable. The rest of that region can be used for hotplug-based 393 - * ballooning. 
394 - */ 395 - for (; i < memmap.nr_entries; i++) { 396 - entry = &xen_e820_table->entries[i]; 397 - 398 - if (entry->type == E820_TYPE_RAM) 399 - continue; 400 - 401 - if (entry->addr >= hostmem_resource->end) 402 - break; 403 - 404 - res = kzalloc(sizeof(*res), GFP_KERNEL); 405 - if (!res) 406 - goto out; 407 - 408 - res->name = "Unavailable host RAM"; 409 - res->start = entry->addr; 410 - res->end = (entry->addr + entry->size < hostmem_resource->end) ? 411 - entry->addr + entry->size : hostmem_resource->end; 412 - rc = insert_resource(hostmem_resource, res); 413 - if (rc) { 414 - pr_warn("%s: Can't insert [%llx - %llx) (%d)\n", 415 - __func__, res->start, res->end, rc); 416 - kfree(res); 417 - goto out; 418 - } 419 - } 420 - 421 - out: 422 - kfree(xen_e820_table); 423 - } 424 - #endif /* CONFIG_XEN_BALLOON_MEMORY_HOTPLUG */
+20 -15
arch/x86/xen/multicalls.c
··· 69 69 70 70 trace_xen_mc_flush(b->mcidx, b->argidx, b->cbidx); 71 71 72 + #if MC_DEBUG 73 + memcpy(b->debug, b->entries, 74 + b->mcidx * sizeof(struct multicall_entry)); 75 + #endif 76 + 72 77 switch (b->mcidx) { 73 78 case 0: 74 79 /* no-op */ ··· 92 87 break; 93 88 94 89 default: 95 - #if MC_DEBUG 96 - memcpy(b->debug, b->entries, 97 - b->mcidx * sizeof(struct multicall_entry)); 98 - #endif 99 - 100 90 if (HYPERVISOR_multicall(b->entries, b->mcidx) != 0) 101 91 BUG(); 102 92 for (i = 0; i < b->mcidx; i++) 103 93 if (b->entries[i].result < 0) 104 94 ret++; 95 + } 105 96 97 + if (WARN_ON(ret)) { 98 + pr_err("%d of %d multicall(s) failed: cpu %d\n", 99 + ret, b->mcidx, smp_processor_id()); 100 + for (i = 0; i < b->mcidx; i++) { 101 + if (b->entries[i].result < 0) { 106 102 #if MC_DEBUG 107 - if (ret) { 108 - printk(KERN_ERR "%d multicall(s) failed: cpu %d\n", 109 - ret, smp_processor_id()); 110 - dump_stack(); 111 - for (i = 0; i < b->mcidx; i++) { 112 - printk(KERN_DEBUG " call %2d/%d: op=%lu arg=[%lx] result=%ld\t%pF\n", 113 - i+1, b->mcidx, 103 + pr_err(" call %2d: op=%lu arg=[%lx] result=%ld\t%pF\n", 104 + i + 1, 114 105 b->debug[i].op, 115 106 b->debug[i].args[0], 116 107 b->entries[i].result, 117 108 b->caller[i]); 109 + #else 110 + pr_err(" call %2d: op=%lu arg=[%lx] result=%ld\n", 111 + i + 1, 112 + b->entries[i].op, 113 + b->entries[i].args[0], 114 + b->entries[i].result); 115 + #endif 118 116 } 119 117 } 120 - #endif 121 118 } 122 119 123 120 b->mcidx = 0; ··· 133 126 b->cbidx = 0; 134 127 135 128 local_irq_restore(flags); 136 - 137 - WARN_ON(ret); 138 129 } 139 130 140 131 struct multicall_space __xen_mc_entry(size_t args)
+4 -2
arch/x86/xen/setup.c
··· 808 808 addr = xen_e820_table.entries[0].addr; 809 809 size = xen_e820_table.entries[0].size; 810 810 while (i < xen_e820_table.nr_entries) { 811 + bool discard = false; 811 812 812 813 chunk_size = size; 813 814 type = xen_e820_table.entries[i].type; ··· 824 823 xen_add_extra_mem(pfn_s, n_pfns); 825 824 xen_max_p2m_pfn = pfn_s + n_pfns; 826 825 } else 827 - type = E820_TYPE_UNUSABLE; 826 + discard = true; 828 827 } 829 828 830 - xen_align_and_add_e820_region(addr, chunk_size, type); 829 + if (!discard) 830 + xen_align_and_add_e820_region(addr, chunk_size, type); 831 831 832 832 addr += chunk_size; 833 833 size -= chunk_size;
+1 -6
arch/x86/xen/spinlock.c
··· 3 3 * Split spinlock implementation out into its own file, so it can be 4 4 * compiled in a FTRACE-compatible way. 5 5 */ 6 - #include <linux/kernel_stat.h> 6 + #include <linux/kernel.h> 7 7 #include <linux/spinlock.h> 8 - #include <linux/debugfs.h> 9 - #include <linux/log2.h> 10 - #include <linux/gfp.h> 11 8 #include <linux/slab.h> 12 9 #include <linux/atomic.h> 13 10 14 11 #include <asm/paravirt.h> 15 12 #include <asm/qspinlock.h> 16 13 17 - #include <xen/interface/xen.h> 18 14 #include <xen/events.h> 19 15 20 16 #include "xen-ops.h" 21 - #include "debugfs.h" 22 17 23 18 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1; 24 19 static DEFINE_PER_CPU(char *, irq_name);
+8 -8
arch/xtensa/kernel/asm-offsets.c
··· 94 94 DEFINE(THREAD_SP, offsetof (struct task_struct, thread.sp)); 95 95 DEFINE(THREAD_CPENABLE, offsetof (struct thread_info, cpenable)); 96 96 #if XTENSA_HAVE_COPROCESSORS 97 - DEFINE(THREAD_XTREGS_CP0, offsetof (struct thread_info, xtregs_cp)); 98 - DEFINE(THREAD_XTREGS_CP1, offsetof (struct thread_info, xtregs_cp)); 99 - DEFINE(THREAD_XTREGS_CP2, offsetof (struct thread_info, xtregs_cp)); 100 - DEFINE(THREAD_XTREGS_CP3, offsetof (struct thread_info, xtregs_cp)); 101 - DEFINE(THREAD_XTREGS_CP4, offsetof (struct thread_info, xtregs_cp)); 102 - DEFINE(THREAD_XTREGS_CP5, offsetof (struct thread_info, xtregs_cp)); 103 - DEFINE(THREAD_XTREGS_CP6, offsetof (struct thread_info, xtregs_cp)); 104 - DEFINE(THREAD_XTREGS_CP7, offsetof (struct thread_info, xtregs_cp)); 97 + DEFINE(THREAD_XTREGS_CP0, offsetof(struct thread_info, xtregs_cp.cp0)); 98 + DEFINE(THREAD_XTREGS_CP1, offsetof(struct thread_info, xtregs_cp.cp1)); 99 + DEFINE(THREAD_XTREGS_CP2, offsetof(struct thread_info, xtregs_cp.cp2)); 100 + DEFINE(THREAD_XTREGS_CP3, offsetof(struct thread_info, xtregs_cp.cp3)); 101 + DEFINE(THREAD_XTREGS_CP4, offsetof(struct thread_info, xtregs_cp.cp4)); 102 + DEFINE(THREAD_XTREGS_CP5, offsetof(struct thread_info, xtregs_cp.cp5)); 103 + DEFINE(THREAD_XTREGS_CP6, offsetof(struct thread_info, xtregs_cp.cp6)); 104 + DEFINE(THREAD_XTREGS_CP7, offsetof(struct thread_info, xtregs_cp.cp7)); 105 105 #endif 106 106 DEFINE(THREAD_XTREGS_USER, offsetof (struct thread_info, xtregs_user)); 107 107 DEFINE(XTREGS_USER_SIZE, sizeof(xtregs_user_t));
+4 -1
arch/xtensa/kernel/process.c
··· 94 94 95 95 void coprocessor_flush_all(struct thread_info *ti) 96 96 { 97 - unsigned long cpenable; 97 + unsigned long cpenable, old_cpenable; 98 98 int i; 99 99 100 100 preempt_disable(); 101 101 102 + RSR_CPENABLE(old_cpenable); 102 103 cpenable = ti->cpenable; 104 + WSR_CPENABLE(cpenable); 103 105 104 106 for (i = 0; i < XCHAL_CP_MAX; i++) { 105 107 if ((cpenable & 1) != 0 && coprocessor_owner[i] == ti) 106 108 coprocessor_flush(ti, i); 107 109 cpenable >>= 1; 108 110 } 111 + WSR_CPENABLE(old_cpenable); 109 112 110 113 preempt_enable(); 111 114 }
+38 -4
arch/xtensa/kernel/ptrace.c
··· 127 127 } 128 128 129 129 130 + #if XTENSA_HAVE_COPROCESSORS 131 + #define CP_OFFSETS(cp) \ 132 + { \ 133 + .elf_xtregs_offset = offsetof(elf_xtregs_t, cp), \ 134 + .ti_offset = offsetof(struct thread_info, xtregs_cp.cp), \ 135 + .sz = sizeof(xtregs_ ## cp ## _t), \ 136 + } 137 + 138 + static const struct { 139 + size_t elf_xtregs_offset; 140 + size_t ti_offset; 141 + size_t sz; 142 + } cp_offsets[] = { 143 + CP_OFFSETS(cp0), 144 + CP_OFFSETS(cp1), 145 + CP_OFFSETS(cp2), 146 + CP_OFFSETS(cp3), 147 + CP_OFFSETS(cp4), 148 + CP_OFFSETS(cp5), 149 + CP_OFFSETS(cp6), 150 + CP_OFFSETS(cp7), 151 + }; 152 + #endif 153 + 130 154 static int ptrace_getxregs(struct task_struct *child, void __user *uregs) 131 155 { 132 156 struct pt_regs *regs = task_pt_regs(child); 133 157 struct thread_info *ti = task_thread_info(child); 134 158 elf_xtregs_t __user *xtregs = uregs; 135 159 int ret = 0; 160 + int i __maybe_unused; 136 161 137 162 if (!access_ok(VERIFY_WRITE, uregs, sizeof(elf_xtregs_t))) 138 163 return -EIO; ··· 165 140 #if XTENSA_HAVE_COPROCESSORS 166 141 /* Flush all coprocessor registers to memory. 
*/ 167 142 coprocessor_flush_all(ti); 168 - ret |= __copy_to_user(&xtregs->cp0, &ti->xtregs_cp, 169 - sizeof(xtregs_coprocessor_t)); 143 + 144 + for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i) 145 + ret |= __copy_to_user((char __user *)xtregs + 146 + cp_offsets[i].elf_xtregs_offset, 147 + (const char *)ti + 148 + cp_offsets[i].ti_offset, 149 + cp_offsets[i].sz); 170 150 #endif 171 151 ret |= __copy_to_user(&xtregs->opt, &regs->xtregs_opt, 172 152 sizeof(xtregs->opt)); ··· 187 157 struct pt_regs *regs = task_pt_regs(child); 188 158 elf_xtregs_t *xtregs = uregs; 189 159 int ret = 0; 160 + int i __maybe_unused; 190 161 191 162 if (!access_ok(VERIFY_READ, uregs, sizeof(elf_xtregs_t))) 192 163 return -EFAULT; ··· 197 166 coprocessor_flush_all(ti); 198 167 coprocessor_release_all(ti); 199 168 200 - ret |= __copy_from_user(&ti->xtregs_cp, &xtregs->cp0, 201 - sizeof(xtregs_coprocessor_t)); 169 + for (i = 0; i < ARRAY_SIZE(cp_offsets); ++i) 170 + ret |= __copy_from_user((char *)ti + cp_offsets[i].ti_offset, 171 + (const char __user *)xtregs + 172 + cp_offsets[i].elf_xtregs_offset, 173 + cp_offsets[i].sz); 202 174 #endif 203 175 ret |= __copy_from_user(&regs->xtregs_opt, &xtregs->opt, 204 176 sizeof(xtregs->opt));
+1 -1
block/blk-merge.c
··· 820 820 821 821 req->__data_len += blk_rq_bytes(next); 822 822 823 - if (req_op(req) != REQ_OP_DISCARD) 823 + if (!blk_discard_mergable(req)) 824 824 elv_merge_requests(q, req, next); 825 825 826 826 /*
+2 -19
drivers/acpi/acpica/exserial.c
··· 244 244 { 245 245 acpi_status status; 246 246 u32 buffer_length; 247 - u32 data_length; 248 247 void *buffer; 249 248 union acpi_operand_object *buffer_desc; 250 249 u32 function; ··· 281 282 case ACPI_ADR_SPACE_SMBUS: 282 283 283 284 buffer_length = ACPI_SMBUS_BUFFER_SIZE; 284 - data_length = ACPI_SMBUS_DATA_SIZE; 285 285 function = ACPI_WRITE | (obj_desc->field.attribute << 16); 286 286 break; 287 287 288 288 case ACPI_ADR_SPACE_IPMI: 289 289 290 290 buffer_length = ACPI_IPMI_BUFFER_SIZE; 291 - data_length = ACPI_IPMI_DATA_SIZE; 292 291 function = ACPI_WRITE; 293 292 break; 294 293 ··· 307 310 /* Add header length to get the full size of the buffer */ 308 311 309 312 buffer_length += ACPI_SERIAL_HEADER_SIZE; 310 - data_length = source_desc->buffer.pointer[1]; 311 313 function = ACPI_WRITE | (accessor_type << 16); 312 314 break; 313 315 314 316 default: 315 317 return_ACPI_STATUS(AE_AML_INVALID_SPACE_ID); 316 318 } 317 - 318 - #if 0 319 - OBSOLETE ? 320 - /* Check for possible buffer overflow */ 321 - if (data_length > source_desc->buffer.length) { 322 - ACPI_ERROR((AE_INFO, 323 - "Length in buffer header (%u)(%u) is greater than " 324 - "the physical buffer length (%u) and will overflow", 325 - data_length, buffer_length, 326 - source_desc->buffer.length)); 327 - 328 - return_ACPI_STATUS(AE_AML_BUFFER_LIMIT); 329 - } 330 - #endif 331 319 332 320 /* Create the transfer/bidirectional/return buffer */ 333 321 ··· 324 342 /* Copy the input buffer data to the transfer buffer */ 325 343 326 344 buffer = buffer_desc->buffer.pointer; 327 - memcpy(buffer, source_desc->buffer.pointer, data_length); 345 + memcpy(buffer, source_desc->buffer.pointer, 346 + min(buffer_length, source_desc->buffer.length)); 328 347 329 348 /* Lock entire transaction if requested */ 330 349
+1 -1
drivers/acpi/arm64/iort.c
··· 700 700 */ 701 701 static struct irq_domain *iort_get_platform_device_domain(struct device *dev) 702 702 { 703 - struct acpi_iort_node *node, *msi_parent; 703 + struct acpi_iort_node *node, *msi_parent = NULL; 704 704 struct fwnode_handle *iort_fwnode; 705 705 struct acpi_iort_its_group *its; 706 706 int i;
+12 -9
drivers/android/binder.c
··· 2974 2974 t->buffer = NULL; 2975 2975 goto err_binder_alloc_buf_failed; 2976 2976 } 2977 - t->buffer->allow_user_free = 0; 2978 2977 t->buffer->debug_id = t->debug_id; 2979 2978 t->buffer->transaction = t; 2980 2979 t->buffer->target_node = target_node; ··· 3509 3510 3510 3511 buffer = binder_alloc_prepare_to_free(&proc->alloc, 3511 3512 data_ptr); 3512 - if (buffer == NULL) { 3513 - binder_user_error("%d:%d BC_FREE_BUFFER u%016llx no match\n", 3514 - proc->pid, thread->pid, (u64)data_ptr); 3515 - break; 3516 - } 3517 - if (!buffer->allow_user_free) { 3518 - binder_user_error("%d:%d BC_FREE_BUFFER u%016llx matched unreturned buffer\n", 3519 - proc->pid, thread->pid, (u64)data_ptr); 3513 + if (IS_ERR_OR_NULL(buffer)) { 3514 + if (PTR_ERR(buffer) == -EPERM) { 3515 + binder_user_error( 3516 + "%d:%d BC_FREE_BUFFER u%016llx matched unreturned or currently freeing buffer\n", 3517 + proc->pid, thread->pid, 3518 + (u64)data_ptr); 3519 + } else { 3520 + binder_user_error( 3521 + "%d:%d BC_FREE_BUFFER u%016llx no match\n", 3522 + proc->pid, thread->pid, 3523 + (u64)data_ptr); 3524 + } 3520 3525 break; 3521 3526 } 3522 3527 binder_debug(BINDER_DEBUG_FREE_BUFFER,
+6 -10
drivers/android/binder_alloc.c
··· 151 151 else { 152 152 /* 153 153 * Guard against user threads attempting to 154 - * free the buffer twice 154 + * free the buffer when in use by kernel or 155 + * after it's already been freed. 155 156 */ 156 - if (buffer->free_in_progress) { 157 - binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 158 - "%d:%d FREE_BUFFER u%016llx user freed buffer twice\n", 159 - alloc->pid, current->pid, 160 - (u64)user_ptr); 161 - return NULL; 162 - } 163 - buffer->free_in_progress = 1; 157 + if (!buffer->allow_user_free) 158 + return ERR_PTR(-EPERM); 159 + buffer->allow_user_free = 0; 164 160 return buffer; 165 161 } 166 162 } ··· 496 500 497 501 rb_erase(best_fit, &alloc->free_buffers); 498 502 buffer->free = 0; 499 - buffer->free_in_progress = 0; 503 + buffer->allow_user_free = 0; 500 504 binder_insert_allocated_buffer_locked(alloc, buffer); 501 505 binder_alloc_debug(BINDER_DEBUG_BUFFER_ALLOC, 502 506 "%d: binder_alloc_buf size %zd got %pK\n",
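The binder_alloc change above switches the return contract from NULL-or-pointer to the kernel's ERR_PTR convention, so callers can tell "no such buffer" (NULL) from "buffer exists but may not be freed" (`ERR_PTR(-EPERM)`). A minimal user-space re-implementation of that convention (the kernel's real helpers live in `include/linux/err.h`):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Error codes are small negative ints, so cast into a pointer they land
 * in the top MAX_ERRNO addresses -- a range no real allocation occupies. */
#define MAX_ERRNO 4095

static void *ERR_PTR(long error)
{
	return (void *)error;
}

static long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static int IS_ERR_OR_NULL(const void *ptr)
{
	return !ptr || (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

This is what lets the binder.c hunk above collapse its two error branches into one `IS_ERR_OR_NULL()` check and then distinguish the `-EPERM` case for the more specific log message.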
+1 -2
drivers/android/binder_alloc.h
··· 50 50 unsigned free:1; 51 51 unsigned allow_user_free:1; 52 52 unsigned async_transaction:1; 53 - unsigned free_in_progress:1; 54 - unsigned debug_id:28; 53 + unsigned debug_id:29; 55 54 56 55 struct binder_transaction *transaction; 57 56
+2 -2
drivers/atm/firestream.c
··· 1410 1410 1411 1411 func_enter (); 1412 1412 1413 - fs_dprintk (FS_DEBUG_INIT, "Inititing queue at %x: %d entries:\n", 1413 + fs_dprintk (FS_DEBUG_INIT, "Initializing queue at %x: %d entries:\n", 1414 1414 queue, nentries); 1415 1415 1416 1416 p = aligned_kmalloc (sz, GFP_KERNEL, 0x10); ··· 1443 1443 { 1444 1444 func_enter (); 1445 1445 1446 - fs_dprintk (FS_DEBUG_INIT, "Inititing free pool at %x:\n", queue); 1446 + fs_dprintk (FS_DEBUG_INIT, "Initializing free pool at %x:\n", queue); 1447 1447 1448 1448 write_fs (dev, FP_CNF(queue), (bufsize * RBFP_RBS) | RBFP_RBSVAL | RBFP_CME); 1449 1449 write_fs (dev, FP_SA(queue), 0);
+8 -2
drivers/base/devres.c
··· 26 26 27 27 struct devres { 28 28 struct devres_node node; 29 - /* -- 3 pointers */ 30 - unsigned long long data[]; /* guarantee ull alignment */ 29 + /* 30 + * Some archs want to perform DMA into kmalloc caches 31 + * and need a guaranteed alignment larger than 32 + * the alignment of a 64-bit integer. 33 + * Thus we use ARCH_KMALLOC_MINALIGN here and get exactly the same 34 + * buffer alignment as if it was allocated by plain kmalloc(). 35 + */ 36 + u8 __aligned(ARCH_KMALLOC_MINALIGN) data[]; 31 37 }; 32 38 33 39 struct devres_group {
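The devres fix above can be sketched in portable C11: giving the flexible array member an explicit alignment (a stand-in for `ARCH_KMALLOC_MINALIGN`, assumed 16 here; the real value is architecture-dependent) pads the header so the payload starts exactly as aligned as a plain `kmalloc()` buffer would:

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>

/* Illustrative stand-in for ARCH_KMALLOC_MINALIGN; on some archs this
 * is what makes kmalloc() buffers safe to DMA into. */
#define KMALLOC_MINALIGN 16

struct devres_like {
	void *node[3];	/* header, analogous to struct devres_node */
	/* Aligning the flexible payload itself forces its offset up to
	 * the next KMALLOC_MINALIGN boundary. */
	alignas(KMALLOC_MINALIGN) unsigned char data[];
};
```

With three pointers of header the old `unsigned long long data[]` only guaranteed 8-byte alignment of the payload; the `alignas` version rounds `offsetof(struct devres_like, data)` up to a multiple of the minimum kmalloc alignment.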
+9 -1
drivers/dma/at_hdmac.c
··· 1641 1641 atchan->descs_allocated = 0; 1642 1642 atchan->status = 0; 1643 1643 1644 + /* 1645 + * Free atslave allocated in at_dma_xlate() 1646 + */ 1647 + kfree(chan->private); 1648 + chan->private = NULL; 1649 + 1644 1650 dev_vdbg(chan2dev(chan), "free_chan_resources: done\n"); 1645 1651 } 1646 1652 ··· 1681 1675 dma_cap_zero(mask); 1682 1676 dma_cap_set(DMA_SLAVE, mask); 1683 1677 1684 - atslave = devm_kzalloc(&dmac_pdev->dev, sizeof(*atslave), GFP_KERNEL); 1678 + atslave = kzalloc(sizeof(*atslave), GFP_KERNEL); 1685 1679 if (!atslave) 1686 1680 return NULL; 1687 1681 ··· 2006 2000 struct resource *io; 2007 2001 2008 2002 at_dma_off(atdma); 2003 + if (pdev->dev.of_node) 2004 + of_dma_controller_free(pdev->dev.of_node); 2009 2005 dma_async_device_unregister(&atdma->dma_common); 2010 2006 2011 2007 dma_pool_destroy(atdma->memset_pool);
+26 -10
drivers/firmware/efi/efi.c
··· 969 969 static DEFINE_SPINLOCK(efi_mem_reserve_persistent_lock); 970 970 static struct linux_efi_memreserve *efi_memreserve_root __ro_after_init; 971 971 972 - int efi_mem_reserve_persistent(phys_addr_t addr, u64 size) 972 + static int __init efi_memreserve_map_root(void) 973 + { 974 + if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR) 975 + return -ENODEV; 976 + 977 + efi_memreserve_root = memremap(efi.mem_reserve, 978 + sizeof(*efi_memreserve_root), 979 + MEMREMAP_WB); 980 + if (WARN_ON_ONCE(!efi_memreserve_root)) 981 + return -ENOMEM; 982 + return 0; 983 + } 984 + 985 + int __ref efi_mem_reserve_persistent(phys_addr_t addr, u64 size) 973 986 { 974 987 struct linux_efi_memreserve *rsv; 988 + int rc; 975 989 976 - if (!efi_memreserve_root) 990 + if (efi_memreserve_root == (void *)ULONG_MAX) 977 991 return -ENODEV; 992 + 993 + if (!efi_memreserve_root) { 994 + rc = efi_memreserve_map_root(); 995 + if (rc) 996 + return rc; 997 + } 978 998 979 999 rsv = kmalloc(sizeof(*rsv), GFP_ATOMIC); 980 1000 if (!rsv) ··· 1013 993 1014 994 static int __init efi_memreserve_root_init(void) 1015 995 { 1016 - if (efi.mem_reserve == EFI_INVALID_TABLE_ADDR) 1017 - return -ENODEV; 1018 - 1019 - efi_memreserve_root = memremap(efi.mem_reserve, 1020 - sizeof(*efi_memreserve_root), 1021 - MEMREMAP_WB); 1022 - if (!efi_memreserve_root) 1023 - return -ENOMEM; 996 + if (efi_memreserve_root) 997 + return 0; 998 + if (efi_memreserve_map_root()) 999 + efi_memreserve_root = (void *)ULONG_MAX; 1024 1000 return 0; 1025 1001 } 1026 1002 early_initcall(efi_memreserve_root_init);
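The EFI hunk above turns the memreserve root into a lazily mapped, tri-state cache: NULL means "not mapped yet", a sentinel means "mapping failed, don't retry", anything else is usable. A hedged sketch of that pattern, where `map_fn` stands in for `memremap()` and the demo mappers are hypothetical:

```c
#include <assert.h>
#include <stddef.h>

/* Three states in one cached pointer: NULL = not tried yet,
 * MAP_FAILED_SENTINEL = tried and failed permanently, else = mapped. */
#define MAP_FAILED_SENTINEL ((void *)(unsigned long)-1)

static void *cached_root;

static void *get_root(void *(*map_fn)(void))
{
	if (cached_root == MAP_FAILED_SENTINEL)
		return NULL;			/* known-bad: fail fast */
	if (!cached_root)
		cached_root = map_fn();		/* first use: map lazily */
	if (!cached_root) {
		cached_root = MAP_FAILED_SENTINEL;
		return NULL;
	}
	return cached_root;
}

/* Demo mappers (hypothetical): one that always fails, one that works. */
static int backing;
static void *map_fails(void) { return NULL; }
static void *map_works(void) { return &backing; }
```

Note how a failure is sticky: once the sentinel is cached, later callers get an immediate error without re-attempting the mapping, mirroring the `(void *)ULONG_MAX` check in `efi_mem_reserve_persistent()`.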
+1
drivers/fsi/Kconfig
··· 46 46 tristate "FSI master based on Aspeed ColdFire coprocessor" 47 47 depends on GPIOLIB 48 48 depends on GPIO_ASPEED 49 + select GENERIC_ALLOCATOR 49 50 ---help--- 50 51 This option enables a FSI master using the AST2400 and AST2500 GPIO 51 52 lines driven by the internal ColdFire coprocessor. This requires
-1
drivers/fsi/fsi-scom.c
··· 20 20 #include <linux/fs.h> 21 21 #include <linux/uaccess.h> 22 22 #include <linux/slab.h> 23 - #include <linux/cdev.h> 24 23 #include <linux/list.h> 25 24 26 25 #include <uapi/linux/fsi.h>
+1 -1
drivers/gpio/gpio-davinci.c
··· 258 258 chips->chip.set = davinci_gpio_set; 259 259 260 260 chips->chip.ngpio = ngpio; 261 - chips->chip.base = -1; 261 + chips->chip.base = pdata->no_auto_base ? pdata->base : -1; 262 262 263 263 #ifdef CONFIG_OF_GPIO 264 264 chips->chip.of_gpio_n_cells = 2;
+8 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
··· 181 181 182 182 if (level == adev->vm_manager.root_level) 183 183 /* For the root directory */ 184 - return round_up(adev->vm_manager.max_pfn, 1 << shift) >> shift; 184 + return round_up(adev->vm_manager.max_pfn, 1ULL << shift) >> shift; 185 185 else if (level != AMDGPU_VM_PTB) 186 186 /* Everything in between */ 187 187 return 512; ··· 1656 1656 if (!amdgpu_vm_pt_descendant(adev, &cursor)) 1657 1657 return -ENOENT; 1658 1658 continue; 1659 - } else if (frag >= parent_shift) { 1659 + } else if (frag >= parent_shift && 1660 + cursor.level - 1 != adev->vm_manager.root_level) { 1660 1661 /* If the fragment size is even larger than the parent 1661 - * shift we should go up one level and check it again. 1662 + * shift we should go up one level and check it again 1663 + * unless one level up is the root level. 1662 1664 */ 1663 1665 if (!amdgpu_vm_pt_ancestor(&cursor)) 1664 1666 return -ENOENT; ··· 1668 1666 } 1669 1667 1670 1668 /* Looks good so far, calculate parameters for the update */ 1671 - incr = AMDGPU_GPU_PAGE_SIZE << shift; 1669 + incr = (uint64_t)AMDGPU_GPU_PAGE_SIZE << shift; 1672 1670 mask = amdgpu_vm_entries_mask(adev, cursor.level); 1673 1671 pe_start = ((cursor.pfn >> shift) & mask) * 8; 1674 - entry_end = (mask + 1) << shift; 1672 + entry_end = (uint64_t)(mask + 1) << shift; 1675 1673 entry_end += cursor.pfn & ~(entry_end - 1); 1676 1674 entry_end = min(entry_end, end); 1677 1675 ··· 1684 1682 flags | AMDGPU_PTE_FRAG(frag)); 1685 1683 1686 1684 pe_start += nptes * 8; 1687 - dst += nptes * AMDGPU_GPU_PAGE_SIZE << shift; 1685 + dst += (uint64_t)nptes * AMDGPU_GPU_PAGE_SIZE << shift; 1688 1686 1689 1687 frag_start = upd_end; 1690 1688 if (frag_start >= frag_end) {
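The amdgpu_vm.c hunks above repeatedly replace `x << shift` with `(uint64_t)x << shift` for the same reason: when `x` is a 32-bit quantity, the shift is performed in 32-bit arithmetic and wraps before the result is ever widened to 64 bits. A minimal demonstration of the difference (names illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Shift happens in 32-bit arithmetic; high bits are silently lost
 * before the implicit widening on return. */
static uint64_t entry_end_bad(uint32_t mask, unsigned int shift)
{
	return (mask + 1) << shift;
}

/* Widen first, then shift -- the full 64-bit result survives. */
static uint64_t entry_end_good(uint32_t mask, unsigned int shift)
{
	return (uint64_t)(mask + 1) << shift;
}
```

For a 9-bit mask and a 30-bit shift, the correct result is 2^39, which does not fit in 32 bits; the un-widened version wraps to 0. The unsigned types keep the wraparound well-defined, which is why the bug is silent rather than a crash.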
+4 -3
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
··· 2440 2440 #endif 2441 2441 2442 2442 WREG32_FIELD15(GC, 0, RLC_CNTL, RLC_ENABLE_F32, 1); 2443 + udelay(50); 2443 2444 2444 2445 /* carrizo do enable cp interrupt after cp inited */ 2445 - if (!(adev->flags & AMD_IS_APU)) 2446 + if (!(adev->flags & AMD_IS_APU)) { 2446 2447 gfx_v9_0_enable_gui_idle_interrupt(adev, true); 2447 - 2448 - udelay(50); 2448 + udelay(50); 2449 + } 2449 2450 2450 2451 #ifdef AMDGPU_RLC_DEBUG_RETRY 2451 2452 /* RLC_GPM_GENERAL_6 : RLC Ucode version */
+2 -3
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
··· 342 342 master->connector_id); 343 343 344 344 aconnector->mst_encoder = dm_dp_create_fake_mst_encoder(master); 345 + drm_connector_attach_encoder(&aconnector->base, 346 + &aconnector->mst_encoder->base); 345 347 346 - /* 347 - * TODO: understand why this one is needed 348 - */ 349 348 drm_object_attach_property( 350 349 &connector->base, 351 350 dev->mode_config.path_property,
+2 -1
drivers/gpu/drm/ast/ast_main.c
··· 583 583 drm_mode_config_cleanup(dev); 584 584 585 585 ast_mm_fini(ast); 586 - pci_iounmap(dev->pdev, ast->ioregs); 586 + if (ast->ioregs != ast->regs + AST_IO_MM_OFFSET) 587 + pci_iounmap(dev->pdev, ast->ioregs); 587 588 pci_iounmap(dev->pdev, ast->regs); 588 589 kfree(ast); 589 590 }
+30 -6
drivers/gpu/drm/ast/ast_mode.c
··· 973 973 { 974 974 struct ast_i2c_chan *i2c = i2c_priv; 975 975 struct ast_private *ast = i2c->dev->dev_private; 976 - uint32_t val; 976 + uint32_t val, val2, count, pass; 977 977 978 - val = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4; 978 + count = 0; 979 + pass = 0; 980 + val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01; 981 + do { 982 + val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01; 983 + if (val == val2) { 984 + pass++; 985 + } else { 986 + pass = 0; 987 + val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x10) >> 4) & 0x01; 988 + } 989 + } while ((pass < 5) && (count++ < 0x10000)); 990 + 979 991 return val & 1 ? 1 : 0; 980 992 } 981 993 ··· 995 983 { 996 984 struct ast_i2c_chan *i2c = i2c_priv; 997 985 struct ast_private *ast = i2c->dev->dev_private; 998 - uint32_t val; 986 + uint32_t val, val2, count, pass; 999 987 1000 - val = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5; 988 + count = 0; 989 + pass = 0; 990 + val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01; 991 + do { 992 + val2 = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01; 993 + if (val == val2) { 994 + pass++; 995 + } else { 996 + pass = 0; 997 + val = (ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x20) >> 5) & 0x01; 998 + } 999 + } while ((pass < 5) && (count++ < 0x10000)); 1000 + 1001 1001 return val & 1 ? 1 : 0; 1002 1002 } 1003 1003 ··· 1022 998 1023 999 for (i = 0; i < 0x10000; i++) { 1024 1000 ujcrb7 = ((clock & 0x01) ? 0 : 1); 1025 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xfe, ujcrb7); 1001 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf4, ujcrb7); 1026 1002 jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x01); 1027 1003 if (ujcrb7 == jtemp) 1028 1004 break; ··· 1038 1014 1039 1015 for (i = 0; i < 0x10000; i++) { 1040 1016 ujcrb7 = ((data & 0x01) ? 
0 : 1) << 2; 1041 - ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xfb, ujcrb7); 1017 + ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0xf1, ujcrb7); 1042 1018 jtemp = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb7, 0x04); 1043 1019 if (ujcrb7 == jtemp) 1044 1020 break;
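The ast_mode.c change above adds a software debounce to the bit-banged i2c lines: the pin is re-sampled until five consecutive reads agree, within a bounded retry budget. A sketch of that loop with the register read abstracted behind a caller-supplied sampler (`noisy_line` below is a hypothetical stand-in, not the driver's code):

```c
#include <assert.h>

/* Poll a raw 1-bit sampler until five consecutive reads agree, or a
 * retry budget of 0x10000 iterations runs out; returns the settled bit. */
static int debounced_read(int (*read_fn)(void *), void *ctx)
{
	unsigned int count = 0, pass = 0;
	int val = read_fn(ctx) & 1;

	do {
		int val2 = read_fn(ctx) & 1;

		if (val == val2) {
			pass++;
		} else {
			pass = 0;
			val = read_fn(ctx) & 1;	/* restart from a fresh sample */
		}
	} while (pass < 5 && count++ < 0x10000);

	return val;
}

/* Demo sampler: three glitchy reads (1, 0, 1), then a steady 1. */
static int calls;
static int noisy_line(void *ctx)
{
	(void)ctx;
	calls++;
	return calls <= 3 ? (calls & 1) : 1;
}
```

The retry budget matters: without it, a line stuck oscillating would hang the getscl/getsda callbacks instead of returning a best-effort value.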
+2
drivers/gpu/drm/drm_auth.c
··· 142 142 143 143 lockdep_assert_held_once(&dev->master_mutex); 144 144 145 + WARN_ON(fpriv->is_master); 145 146 old_master = fpriv->master; 146 147 fpriv->master = drm_master_create(dev); 147 148 if (!fpriv->master) { ··· 171 170 /* drop references and restore old master on failure */ 172 171 drm_master_put(&fpriv->master); 173 172 fpriv->master = old_master; 173 + fpriv->is_master = 0; 174 174 175 175 return ret; 176 176 }
+2
drivers/gpu/drm/i915/gvt/aperture_gm.c
··· 61 61 } 62 62 63 63 mutex_lock(&dev_priv->drm.struct_mutex); 64 + mmio_hw_access_pre(dev_priv); 64 65 ret = i915_gem_gtt_insert(&dev_priv->ggtt.vm, node, 65 66 size, I915_GTT_PAGE_SIZE, 66 67 I915_COLOR_UNEVICTABLE, 67 68 start, end, flags); 69 + mmio_hw_access_post(dev_priv); 68 70 mutex_unlock(&dev_priv->drm.struct_mutex); 69 71 if (ret) 70 72 gvt_err("fail to alloc %s gm space from host\n",
+4 -3
drivers/gpu/drm/i915/gvt/gtt.c
··· 2447 2447 2448 2448 static void intel_vgpu_destroy_ggtt_mm(struct intel_vgpu *vgpu) 2449 2449 { 2450 - struct intel_gvt_partial_pte *pos; 2450 + struct intel_gvt_partial_pte *pos, *next; 2451 2451 2452 - list_for_each_entry(pos, 2453 - &vgpu->gtt.ggtt_mm->ggtt_mm.partial_pte_list, list) { 2452 + list_for_each_entry_safe(pos, next, 2453 + &vgpu->gtt.ggtt_mm->ggtt_mm.partial_pte_list, 2454 + list) { 2454 2455 gvt_dbg_mm("partial PTE update on hold 0x%lx : 0x%llx\n", 2455 2456 pos->offset, pos->data); 2456 2457 kfree(pos);
+2
drivers/gpu/drm/i915/gvt/mmio_context.c
··· 158 158 int ring_id, i; 159 159 160 160 for (ring_id = 0; ring_id < ARRAY_SIZE(regs); ring_id++) { 161 + if (!HAS_ENGINE(dev_priv, ring_id)) 162 + continue; 161 163 offset.reg = regs[ring_id]; 162 164 for (i = 0; i < GEN9_MOCS_SIZE; i++) { 163 165 gen9_render_mocs.control_table[ring_id][i] =
+25 -2
drivers/gpu/drm/meson/meson_crtc.c
··· 45 45 struct drm_crtc base; 46 46 struct drm_pending_vblank_event *event; 47 47 struct meson_drm *priv; 48 + bool enabled; 48 49 }; 49 50 #define to_meson_crtc(x) container_of(x, struct meson_crtc, base) 50 51 ··· 81 80 82 81 }; 83 82 84 - static void meson_crtc_atomic_enable(struct drm_crtc *crtc, 85 - struct drm_crtc_state *old_state) 83 + static void meson_crtc_enable(struct drm_crtc *crtc) 86 84 { 87 85 struct meson_crtc *meson_crtc = to_meson_crtc(crtc); 88 86 struct drm_crtc_state *crtc_state = crtc->state; ··· 101 101 writel_bits_relaxed(VPP_POSTBLEND_ENABLE, VPP_POSTBLEND_ENABLE, 102 102 priv->io_base + _REG(VPP_MISC)); 103 103 104 + drm_crtc_vblank_on(crtc); 105 + 106 + meson_crtc->enabled = true; 107 + } 108 + 109 + static void meson_crtc_atomic_enable(struct drm_crtc *crtc, 110 + struct drm_crtc_state *old_state) 111 + { 112 + struct meson_crtc *meson_crtc = to_meson_crtc(crtc); 113 + struct meson_drm *priv = meson_crtc->priv; 114 + 115 + DRM_DEBUG_DRIVER("\n"); 116 + 117 + if (!meson_crtc->enabled) 118 + meson_crtc_enable(crtc); 119 + 104 120 priv->viu.osd1_enabled = true; 105 121 } 106 122 ··· 125 109 { 126 110 struct meson_crtc *meson_crtc = to_meson_crtc(crtc); 127 111 struct meson_drm *priv = meson_crtc->priv; 112 + 113 + drm_crtc_vblank_off(crtc); 128 114 129 115 priv->viu.osd1_enabled = false; 130 116 priv->viu.osd1_commit = false; ··· 142 124 143 125 crtc->state->event = NULL; 144 126 } 127 + 128 + meson_crtc->enabled = false; 145 129 } 146 130 147 131 static void meson_crtc_atomic_begin(struct drm_crtc *crtc, ··· 151 131 { 152 132 struct meson_crtc *meson_crtc = to_meson_crtc(crtc); 153 133 unsigned long flags; 134 + 135 + if (crtc->state->enable && !meson_crtc->enabled) 136 + meson_crtc_enable(crtc); 154 137 155 138 if (crtc->state->event) { 156 139 WARN_ON(drm_crtc_vblank_get(crtc) != 0);
+1
drivers/gpu/drm/meson/meson_dw_hdmi.c
··· 706 706 .reg_read = meson_dw_hdmi_reg_read, 707 707 .reg_write = meson_dw_hdmi_reg_write, 708 708 .max_register = 0x10000, 709 + .fast_io = true, 709 710 }; 710 711 711 712 static bool meson_hdmi_connector_is_available(struct device *dev)
+4
drivers/gpu/drm/meson/meson_venc.c
··· 71 71 */ 72 72 73 73 /* HHI Registers */ 74 + #define HHI_GCLK_MPEG2 0x148 /* 0x52 offset in data sheet */ 74 75 #define HHI_VDAC_CNTL0 0x2F4 /* 0xbd offset in data sheet */ 75 76 #define HHI_VDAC_CNTL1 0x2F8 /* 0xbe offset in data sheet */ 76 77 #define HHI_HDMI_PHY_CNTL0 0x3a0 /* 0xe8 offset in data sheet */ ··· 715 714 { 5, &meson_hdmi_encp_mode_1080i60 }, 716 715 { 20, &meson_hdmi_encp_mode_1080i50 }, 717 716 { 32, &meson_hdmi_encp_mode_1080p24 }, 717 + { 33, &meson_hdmi_encp_mode_1080p50 }, 718 718 { 34, &meson_hdmi_encp_mode_1080p30 }, 719 719 { 31, &meson_hdmi_encp_mode_1080p50 }, 720 720 { 16, &meson_hdmi_encp_mode_1080p60 }, ··· 1532 1530 void meson_venc_enable_vsync(struct meson_drm *priv) 1533 1531 { 1534 1532 writel_relaxed(2, priv->io_base + _REG(VENC_INTCTRL)); 1533 + regmap_update_bits(priv->hhi, HHI_GCLK_MPEG2, BIT(25), BIT(25)); 1535 1534 } 1536 1535 1537 1536 void meson_venc_disable_vsync(struct meson_drm *priv) 1538 1537 { 1538 + regmap_update_bits(priv->hhi, HHI_GCLK_MPEG2, BIT(25), 0); 1539 1539 writel_relaxed(0, priv->io_base + _REG(VENC_INTCTRL)); 1540 1540 } 1541 1541
+6 -6
drivers/gpu/drm/meson/meson_viu.c
··· 184 184 if (lut_sel == VIU_LUT_OSD_OETF) { 185 185 writel(0, priv->io_base + _REG(addr_port)); 186 186 187 - for (i = 0; i < 20; i++) 187 + for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++) 188 188 writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16), 189 189 priv->io_base + _REG(data_port)); 190 190 191 191 writel(r_map[OSD_OETF_LUT_SIZE - 1] | (g_map[0] << 16), 192 192 priv->io_base + _REG(data_port)); 193 193 194 - for (i = 0; i < 20; i++) 194 + for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++) 195 195 writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16), 196 196 priv->io_base + _REG(data_port)); 197 197 198 - for (i = 0; i < 20; i++) 198 + for (i = 0; i < (OSD_OETF_LUT_SIZE / 2); i++) 199 199 writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16), 200 200 priv->io_base + _REG(data_port)); 201 201 ··· 211 211 } else if (lut_sel == VIU_LUT_OSD_EOTF) { 212 212 writel(0, priv->io_base + _REG(addr_port)); 213 213 214 - for (i = 0; i < 20; i++) 214 + for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++) 215 215 writel(r_map[i * 2] | (r_map[i * 2 + 1] << 16), 216 216 priv->io_base + _REG(data_port)); 217 217 218 218 writel(r_map[OSD_EOTF_LUT_SIZE - 1] | (g_map[0] << 16), 219 219 priv->io_base + _REG(data_port)); 220 220 221 - for (i = 0; i < 20; i++) 221 + for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++) 222 222 writel(g_map[i * 2 + 1] | (g_map[i * 2 + 2] << 16), 223 223 priv->io_base + _REG(data_port)); 224 224 225 - for (i = 0; i < 20; i++) 225 + for (i = 0; i < (OSD_EOTF_LUT_SIZE / 2); i++) 226 226 writel(b_map[i * 2] | (b_map[i * 2 + 1] << 16), 227 227 priv->io_base + _REG(data_port)); 228 228
+18 -3
drivers/gpu/drm/rcar-du/rcar_du_group.c
··· 202 202 203 203 static void __rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start) 204 204 { 205 - struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2]; 205 + struct rcar_du_device *rcdu = rgrp->dev; 206 206 207 - rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN, 208 - start ? DSYSR_DEN : DSYSR_DRES); 207 + /* 208 + * Group start/stop is controlled by the DRES and DEN bits of DSYSR0 209 + * for the first group and DSYSR2 for the second group. On most DU 210 + * instances, this maps to the first CRTC of the group, and we can just 211 + * use rcar_du_crtc_dsysr_clr_set() to access the correct DSYSR. On 212 + * M3-N, however, DU2 doesn't exist, but DSYSR2 does. We thus need to 213 + * access the register directly using group read/write. 214 + */ 215 + if (rcdu->info->channels_mask & BIT(rgrp->index * 2)) { 216 + struct rcar_du_crtc *rcrtc = &rgrp->dev->crtcs[rgrp->index * 2]; 217 + 218 + rcar_du_crtc_dsysr_clr_set(rcrtc, DSYSR_DRES | DSYSR_DEN, 219 + start ? DSYSR_DEN : DSYSR_DRES); 220 + } else { 221 + rcar_du_group_write(rgrp, DSYSR, 222 + start ? DSYSR_DEN : DSYSR_DRES); 223 + } 209 224 } 210 225 211 226 void rcar_du_group_start_stop(struct rcar_du_group *rgrp, bool start)
+1 -1
drivers/hid/hid-sensor-custom.c
··· 358 358 sensor_inst->hsdev, 359 359 sensor_inst->hsdev->usage, 360 360 usage, report_id, 361 - SENSOR_HUB_SYNC); 361 + SENSOR_HUB_SYNC, false); 362 362 } else if (!strncmp(name, "units", strlen("units"))) 363 363 value = sensor_inst->fields[field_index].attribute.units; 364 364 else if (!strncmp(name, "unit-expo", strlen("unit-expo")))
+10 -3
drivers/hid/hid-sensor-hub.c
··· 299 299 int sensor_hub_input_attr_get_raw_value(struct hid_sensor_hub_device *hsdev, 300 300 u32 usage_id, 301 301 u32 attr_usage_id, u32 report_id, 302 - enum sensor_hub_read_flags flag) 302 + enum sensor_hub_read_flags flag, 303 + bool is_signed) 303 304 { 304 305 struct sensor_hub_data *data = hid_get_drvdata(hsdev->hdev); 305 306 unsigned long flags; ··· 332 331 &hsdev->pending.ready, HZ*5); 333 332 switch (hsdev->pending.raw_size) { 334 333 case 1: 335 - ret_val = *(u8 *)hsdev->pending.raw_data; 334 + if (is_signed) 335 + ret_val = *(s8 *)hsdev->pending.raw_data; 336 + else 337 + ret_val = *(u8 *)hsdev->pending.raw_data; 336 338 break; 337 339 case 2: 338 - ret_val = *(u16 *)hsdev->pending.raw_data; 340 + if (is_signed) 341 + ret_val = *(s16 *)hsdev->pending.raw_data; 342 + else 343 + ret_val = *(u16 *)hsdev->pending.raw_data; 339 344 break; 340 345 case 4: 341 346 ret_val = *(u32 *)hsdev->pending.raw_data;
+8
drivers/hv/channel.c
··· 516 516 } 517 517 wait_for_completion(&msginfo->waitevent); 518 518 519 + if (msginfo->response.gpadl_created.creation_status != 0) { 520 + pr_err("Failed to establish GPADL: err = 0x%x\n", 521 + msginfo->response.gpadl_created.creation_status); 522 + 523 + ret = -EDQUOT; 524 + goto cleanup; 525 + } 526 + 519 527 if (channel->rescind) { 520 528 ret = -ENODEV; 521 529 goto cleanup;
+3 -3
drivers/hwmon/ina2xx.c
··· 274 274 break; 275 275 case INA2XX_CURRENT: 276 276 /* signed register, result in mA */ 277 - val = regval * data->current_lsb_uA; 277 + val = (s16)regval * data->current_lsb_uA; 278 278 val = DIV_ROUND_CLOSEST(val, 1000); 279 279 break; 280 280 case INA2XX_CALIBRATION: ··· 491 491 } 492 492 493 493 data->groups[group++] = &ina2xx_group; 494 - if (id->driver_data == ina226) 494 + if (chip == ina226) 495 495 data->groups[group++] = &ina226_group; 496 496 497 497 hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name, ··· 500 500 return PTR_ERR(hwmon_dev); 501 501 502 502 dev_info(dev, "power monitor %s (Rshunt = %li uOhm)\n", 503 - id->name, data->rshunt); 503 + client->name, data->rshunt); 504 504 505 505 return 0; 506 506 }
+1 -1
drivers/hwmon/mlxreg-fan.c
··· 51 51 */ 52 52 #define MLXREG_FAN_GET_RPM(rval, d, s) (DIV_ROUND_CLOSEST(15000000 * 100, \ 53 53 ((rval) + (s)) * (d))) 54 - #define MLXREG_FAN_GET_FAULT(val, mask) (!!((val) ^ (mask))) 54 + #define MLXREG_FAN_GET_FAULT(val, mask) (!((val) ^ (mask))) 55 55 #define MLXREG_FAN_PWM_DUTY2STATE(duty) (DIV_ROUND_CLOSEST((duty) * \ 56 56 MLXREG_FAN_MAX_STATE, \ 57 57 MLXREG_FAN_MAX_DUTY))
-6
drivers/hwmon/raspberrypi-hwmon.c
··· 115 115 { 116 116 struct device *dev = &pdev->dev; 117 117 struct rpi_hwmon_data *data; 118 - int ret; 119 118 120 119 data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 121 120 if (!data) ··· 122 123 123 124 /* Parent driver assure that firmware is correct */ 124 125 data->fw = dev_get_drvdata(dev->parent); 125 - 126 - /* Init throttled */ 127 - ret = rpi_firmware_property(data->fw, RPI_FIRMWARE_GET_THROTTLED, 128 - &data->last_throttled, 129 - sizeof(data->last_throttled)); 130 126 131 127 data->hwmon_dev = devm_hwmon_device_register_with_info(dev, "rpi_volt", 132 128 data,
+1 -1
drivers/hwmon/w83795.c
··· 1691 1691 * somewhere else in the code 1692 1692 */ 1693 1693 #define SENSOR_ATTR_TEMP(index) { \ 1694 - SENSOR_ATTR_2(temp##index##_type, S_IRUGO | (index < 4 ? S_IWUSR : 0), \ 1694 + SENSOR_ATTR_2(temp##index##_type, S_IRUGO | (index < 5 ? S_IWUSR : 0), \ 1695 1695 show_temp_mode, store_temp_mode, NOT_USED, index - 1), \ 1696 1696 SENSOR_ATTR_2(temp##index##_input, S_IRUGO, show_temp, \ 1697 1697 NULL, TEMP_READ, index - 1), \
+4 -1
drivers/iio/accel/hid-sensor-accel-3d.c
··· 149 149 int report_id = -1; 150 150 u32 address; 151 151 int ret_type; 152 + s32 min; 152 153 struct hid_sensor_hub_device *hsdev = 153 154 accel_state->common_attributes.hsdev; 154 155 ··· 159 158 case IIO_CHAN_INFO_RAW: 160 159 hid_sensor_power_state(&accel_state->common_attributes, true); 161 160 report_id = accel_state->accel[chan->scan_index].report_id; 161 + min = accel_state->accel[chan->scan_index].logical_minimum; 162 162 address = accel_3d_addresses[chan->scan_index]; 163 163 if (report_id >= 0) 164 164 *val = sensor_hub_input_attr_get_raw_value( 165 165 accel_state->common_attributes.hsdev, 166 166 hsdev->usage, address, report_id, 167 - SENSOR_HUB_SYNC); 167 + SENSOR_HUB_SYNC, 168 + min < 0); 168 169 else { 169 170 *val = 0; 170 171 hid_sensor_power_state(&accel_state->common_attributes,
+4 -1
drivers/iio/gyro/hid-sensor-gyro-3d.c
··· 111 111 int report_id = -1; 112 112 u32 address; 113 113 int ret_type; 114 + s32 min; 114 115 115 116 *val = 0; 116 117 *val2 = 0; ··· 119 118 case IIO_CHAN_INFO_RAW: 120 119 hid_sensor_power_state(&gyro_state->common_attributes, true); 121 120 report_id = gyro_state->gyro[chan->scan_index].report_id; 121 + min = gyro_state->gyro[chan->scan_index].logical_minimum; 122 122 address = gyro_3d_addresses[chan->scan_index]; 123 123 if (report_id >= 0) 124 124 *val = sensor_hub_input_attr_get_raw_value( 125 125 gyro_state->common_attributes.hsdev, 126 126 HID_USAGE_SENSOR_GYRO_3D, address, 127 127 report_id, 128 - SENSOR_HUB_SYNC); 128 + SENSOR_HUB_SYNC, 129 + min < 0); 129 130 else { 130 131 *val = 0; 131 132 hid_sensor_power_state(&gyro_state->common_attributes,
+2 -1
drivers/iio/humidity/hid-sensor-humidity.c
··· 75 75 HID_USAGE_SENSOR_HUMIDITY, 76 76 HID_USAGE_SENSOR_ATMOSPHERIC_HUMIDITY, 77 77 humid_st->humidity_attr.report_id, 78 - SENSOR_HUB_SYNC); 78 + SENSOR_HUB_SYNC, 79 + humid_st->humidity_attr.logical_minimum < 0); 79 80 hid_sensor_power_state(&humid_st->common_attributes, false); 80 81 81 82 return IIO_VAL_INT;
+5 -3
drivers/iio/light/hid-sensor-als.c
··· 93 93 int report_id = -1; 94 94 u32 address; 95 95 int ret_type; 96 + s32 min; 96 97 97 98 *val = 0; 98 99 *val2 = 0; ··· 103 102 case CHANNEL_SCAN_INDEX_INTENSITY: 104 103 case CHANNEL_SCAN_INDEX_ILLUM: 105 104 report_id = als_state->als_illum.report_id; 106 - address = 107 - HID_USAGE_SENSOR_LIGHT_ILLUM; 105 + min = als_state->als_illum.logical_minimum; 106 + address = HID_USAGE_SENSOR_LIGHT_ILLUM; 108 107 break; 109 108 default: 110 109 report_id = -1; ··· 117 116 als_state->common_attributes.hsdev, 118 117 HID_USAGE_SENSOR_ALS, address, 119 118 report_id, 120 - SENSOR_HUB_SYNC); 119 + SENSOR_HUB_SYNC, 120 + min < 0); 121 121 hid_sensor_power_state(&als_state->common_attributes, 122 122 false); 123 123 } else {
+5 -3
drivers/iio/light/hid-sensor-prox.c
··· 73 73 int report_id = -1; 74 74 u32 address; 75 75 int ret_type; 76 + s32 min; 76 77 77 78 *val = 0; 78 79 *val2 = 0; ··· 82 81 switch (chan->scan_index) { 83 82 case CHANNEL_SCAN_INDEX_PRESENCE: 84 83 report_id = prox_state->prox_attr.report_id; 85 - address = 86 - HID_USAGE_SENSOR_HUMAN_PRESENCE; 84 + min = prox_state->prox_attr.logical_minimum; 85 + address = HID_USAGE_SENSOR_HUMAN_PRESENCE; 87 86 break; 88 87 default: 89 88 report_id = -1; ··· 96 95 prox_state->common_attributes.hsdev, 97 96 HID_USAGE_SENSOR_PROX, address, 98 97 report_id, 99 - SENSOR_HUB_SYNC); 98 + SENSOR_HUB_SYNC, 99 + min < 0); 100 100 hid_sensor_power_state(&prox_state->common_attributes, 101 101 false); 102 102 } else {
+5 -3
drivers/iio/magnetometer/hid-sensor-magn-3d.c
··· 163 163 int report_id = -1; 164 164 u32 address; 165 165 int ret_type; 166 + s32 min; 166 167 167 168 *val = 0; 168 169 *val2 = 0; 169 170 switch (mask) { 170 171 case IIO_CHAN_INFO_RAW: 171 172 hid_sensor_power_state(&magn_state->magn_flux_attributes, true); 172 - report_id = 173 - magn_state->magn[chan->address].report_id; 173 + report_id = magn_state->magn[chan->address].report_id; 174 + min = magn_state->magn[chan->address].logical_minimum; 174 175 address = magn_3d_addresses[chan->address]; 175 176 if (report_id >= 0) 176 177 *val = sensor_hub_input_attr_get_raw_value( 177 178 magn_state->magn_flux_attributes.hsdev, 178 179 HID_USAGE_SENSOR_COMPASS_3D, address, 179 180 report_id, 180 - SENSOR_HUB_SYNC); 181 + SENSOR_HUB_SYNC, 182 + min < 0); 181 183 else { 182 184 *val = 0; 183 185 hid_sensor_power_state(
+3 -9
drivers/iio/magnetometer/st_magn_buffer.c
··· 30 30 return st_sensors_set_dataready_irq(indio_dev, state); 31 31 } 32 32 33 - static int st_magn_buffer_preenable(struct iio_dev *indio_dev) 34 - { 35 - return st_sensors_set_enable(indio_dev, true); 36 - } 37 - 38 33 static int st_magn_buffer_postenable(struct iio_dev *indio_dev) 39 34 { 40 35 int err; ··· 45 50 if (err < 0) 46 51 goto st_magn_buffer_postenable_error; 47 52 48 - return err; 53 + return st_sensors_set_enable(indio_dev, true); 49 54 50 55 st_magn_buffer_postenable_error: 51 56 kfree(mdata->buffer_data); ··· 58 63 int err; 59 64 struct st_sensor_data *mdata = iio_priv(indio_dev); 60 65 61 - err = iio_triggered_buffer_predisable(indio_dev); 66 + err = st_sensors_set_enable(indio_dev, false); 62 67 if (err < 0) 63 68 goto st_magn_buffer_predisable_error; 64 69 65 - err = st_sensors_set_enable(indio_dev, false); 70 + err = iio_triggered_buffer_predisable(indio_dev); 66 71 67 72 st_magn_buffer_predisable_error: 68 73 kfree(mdata->buffer_data); ··· 70 75 } 71 76 72 77 static const struct iio_buffer_setup_ops st_magn_buffer_setup_ops = { 73 - .preenable = &st_magn_buffer_preenable, 74 78 .postenable = &st_magn_buffer_postenable, 75 79 .predisable = &st_magn_buffer_predisable, 76 80 };
+5 -3
drivers/iio/orientation/hid-sensor-incl-3d.c
··· 111 111 int report_id = -1; 112 112 u32 address; 113 113 int ret_type; 114 + s32 min; 114 115 115 116 *val = 0; 116 117 *val2 = 0; 117 118 switch (mask) { 118 119 case IIO_CHAN_INFO_RAW: 119 120 hid_sensor_power_state(&incl_state->common_attributes, true); 120 - report_id = 121 - incl_state->incl[chan->scan_index].report_id; 121 + report_id = incl_state->incl[chan->scan_index].report_id; 122 + min = incl_state->incl[chan->scan_index].logical_minimum; 122 123 address = incl_3d_addresses[chan->scan_index]; 123 124 if (report_id >= 0) 124 125 *val = sensor_hub_input_attr_get_raw_value( 125 126 incl_state->common_attributes.hsdev, 126 127 HID_USAGE_SENSOR_INCLINOMETER_3D, address, 127 128 report_id, 128 - SENSOR_HUB_SYNC); 129 + SENSOR_HUB_SYNC, 130 + min < 0); 129 131 else { 130 132 hid_sensor_power_state(&incl_state->common_attributes, 131 133 false);
+5 -3
drivers/iio/pressure/hid-sensor-press.c
··· 77 77 int report_id = -1; 78 78 u32 address; 79 79 int ret_type; 80 + s32 min; 80 81 81 82 *val = 0; 82 83 *val2 = 0; ··· 86 85 switch (chan->scan_index) { 87 86 case CHANNEL_SCAN_INDEX_PRESSURE: 88 87 report_id = press_state->press_attr.report_id; 89 - address = 90 - HID_USAGE_SENSOR_ATMOSPHERIC_PRESSURE; 88 + min = press_state->press_attr.logical_minimum; 89 + address = HID_USAGE_SENSOR_ATMOSPHERIC_PRESSURE; 91 90 break; 92 91 default: 93 92 report_id = -1; ··· 100 99 press_state->common_attributes.hsdev, 101 100 HID_USAGE_SENSOR_PRESSURE, address, 102 101 report_id, 103 - SENSOR_HUB_SYNC); 102 + SENSOR_HUB_SYNC, 103 + min < 0); 104 104 hid_sensor_power_state(&press_state->common_attributes, 105 105 false); 106 106 } else {
+2 -1
drivers/iio/temperature/hid-sensor-temperature.c
··· 76 76 HID_USAGE_SENSOR_TEMPERATURE, 77 77 HID_USAGE_SENSOR_DATA_ENVIRONMENTAL_TEMPERATURE, 78 78 temp_st->temperature_attr.report_id, 79 - SENSOR_HUB_SYNC); 79 + SENSOR_HUB_SYNC, 80 + temp_st->temperature_attr.logical_minimum < 0); 80 81 hid_sensor_power_state( 81 82 &temp_st->common_attributes, 82 83 false);
+4 -2
drivers/infiniband/core/roce_gid_mgmt.c
··· 767 767 768 768 case NETDEV_CHANGEADDR: 769 769 cmds[0] = netdev_del_cmd; 770 - cmds[1] = add_default_gid_cmd; 771 - cmds[2] = add_cmd; 770 + if (ndev->reg_state == NETREG_REGISTERED) { 771 + cmds[1] = add_default_gid_cmd; 772 + cmds[2] = add_cmd; 773 + } 772 774 break; 773 775 774 776 case NETDEV_CHANGEUPPER:
+6 -14
drivers/infiniband/core/umem_odp.c
··· 137 137 up_read(&per_mm->umem_rwsem); 138 138 } 139 139 140 - static int invalidate_page_trampoline(struct ib_umem_odp *item, u64 start, 141 - u64 end, void *cookie) 142 - { 143 - ib_umem_notifier_start_account(item); 144 - item->umem.context->invalidate_range(item, start, start + PAGE_SIZE); 145 - ib_umem_notifier_end_account(item); 146 - return 0; 147 - } 148 - 149 140 static int invalidate_range_start_trampoline(struct ib_umem_odp *item, 150 141 u64 start, u64 end, void *cookie) 151 142 { ··· 544 553 put_page(page); 545 554 546 555 if (remove_existing_mapping && umem->context->invalidate_range) { 547 - invalidate_page_trampoline( 556 + ib_umem_notifier_start_account(umem_odp); 557 + umem->context->invalidate_range( 548 558 umem_odp, 549 - ib_umem_start(umem) + (page_index >> umem->page_shift), 550 - ib_umem_start(umem) + ((page_index + 1) >> 551 - umem->page_shift), 552 - NULL); 559 + ib_umem_start(umem) + (page_index << umem->page_shift), 560 + ib_umem_start(umem) + 561 + ((page_index + 1) << umem->page_shift)); 562 + ib_umem_notifier_end_account(umem_odp); 553 563 ret = -EAGAIN; 554 564 } 555 565
+3
drivers/infiniband/hw/bnxt_re/main.c
··· 1268 1268 /* Registered a new RoCE device instance to netdev */ 1269 1269 rc = bnxt_re_register_netdev(rdev); 1270 1270 if (rc) { 1271 + rtnl_unlock(); 1271 1272 pr_err("Failed to register with netedev: %#x\n", rc); 1272 1273 return -EINVAL; 1273 1274 } ··· 1467 1466 "Failed to register with IB: %#x", rc); 1468 1467 bnxt_re_remove_one(rdev); 1469 1468 bnxt_re_dev_unreg(rdev); 1469 + goto exit; 1470 1470 } 1471 1471 break; 1472 1472 case NETDEV_UP: ··· 1491 1489 } 1492 1490 smp_mb__before_atomic(); 1493 1491 atomic_dec(&rdev->sched_count); 1492 + exit: 1494 1493 kfree(re_work); 1495 1494 } 1496 1495
+60 -68
drivers/infiniband/hw/hns/hns_roce_hw_v2.c
··· 1756 1756 return hns_roce_cmq_send(hr_dev, &desc, 1); 1757 1757 } 1758 1758 1759 - static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr, 1760 - unsigned long mtpt_idx) 1759 + static int set_mtpt_pbl(struct hns_roce_v2_mpt_entry *mpt_entry, 1760 + struct hns_roce_mr *mr) 1761 1761 { 1762 - struct hns_roce_v2_mpt_entry *mpt_entry; 1763 1762 struct scatterlist *sg; 1764 1763 u64 page_addr; 1765 1764 u64 *pages; 1766 1765 int i, j; 1767 1766 int len; 1768 1767 int entry; 1768 + 1769 + mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size); 1770 + mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3)); 1771 + roce_set_field(mpt_entry->byte_48_mode_ba, 1772 + V2_MPT_BYTE_48_PBL_BA_H_M, V2_MPT_BYTE_48_PBL_BA_H_S, 1773 + upper_32_bits(mr->pbl_ba >> 3)); 1774 + 1775 + pages = (u64 *)__get_free_page(GFP_KERNEL); 1776 + if (!pages) 1777 + return -ENOMEM; 1778 + 1779 + i = 0; 1780 + for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) { 1781 + len = sg_dma_len(sg) >> PAGE_SHIFT; 1782 + for (j = 0; j < len; ++j) { 1783 + page_addr = sg_dma_address(sg) + 1784 + (j << mr->umem->page_shift); 1785 + pages[i] = page_addr >> 6; 1786 + /* Record the first 2 entry directly to MTPT table */ 1787 + if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1) 1788 + goto found; 1789 + i++; 1790 + } 1791 + } 1792 + found: 1793 + mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0])); 1794 + roce_set_field(mpt_entry->byte_56_pa0_h, V2_MPT_BYTE_56_PA0_H_M, 1795 + V2_MPT_BYTE_56_PA0_H_S, upper_32_bits(pages[0])); 1796 + 1797 + mpt_entry->pa1_l = cpu_to_le32(lower_32_bits(pages[1])); 1798 + roce_set_field(mpt_entry->byte_64_buf_pa1, V2_MPT_BYTE_64_PA1_H_M, 1799 + V2_MPT_BYTE_64_PA1_H_S, upper_32_bits(pages[1])); 1800 + roce_set_field(mpt_entry->byte_64_buf_pa1, 1801 + V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M, 1802 + V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S, 1803 + mr->pbl_buf_pg_sz + PG_SHIFT_OFFSET); 1804 + 1805 + free_page((unsigned long)pages); 1806 + 1807 + return 0; 1808 + } 1809 + 1810 + static int hns_roce_v2_write_mtpt(void *mb_buf, struct hns_roce_mr *mr, 1811 + unsigned long mtpt_idx) 1812 + { 1813 + struct hns_roce_v2_mpt_entry *mpt_entry; 1814 + int ret; 1769 1815 1770 1816 mpt_entry = mb_buf; 1771 1817 memset(mpt_entry, 0, sizeof(*mpt_entry)); ··· 1827 1781 mr->pbl_ba_pg_sz + PG_SHIFT_OFFSET); 1828 1782 roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M, 1829 1783 V2_MPT_BYTE_4_PD_S, mr->pd); 1830 - mpt_entry->byte_4_pd_hop_st = cpu_to_le32(mpt_entry->byte_4_pd_hop_st); 1831 1784 1832 1785 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RA_EN_S, 0); 1833 1786 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_R_INV_EN_S, 1); ··· 1841 1796 (mr->access & IB_ACCESS_REMOTE_WRITE ? 1 : 0)); 1842 1797 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S, 1843 1798 (mr->access & IB_ACCESS_LOCAL_WRITE ? 1 : 0)); 1844 - mpt_entry->byte_8_mw_cnt_en = cpu_to_le32(mpt_entry->byte_8_mw_cnt_en); 1845 1799 1846 1800 roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_PA_S, 1847 1801 mr->type == MR_TYPE_MR ? 0 : 1); 1848 1802 roce_set_bit(mpt_entry->byte_12_mw_pa, V2_MPT_BYTE_12_INNER_PA_VLD_S, 1849 1803 1); 1850 - mpt_entry->byte_12_mw_pa = cpu_to_le32(mpt_entry->byte_12_mw_pa); 1851 1804 1852 1805 mpt_entry->len_l = cpu_to_le32(lower_32_bits(mr->size)); 1853 1806 mpt_entry->len_h = cpu_to_le32(upper_32_bits(mr->size)); ··· 1856 1813 if (mr->type == MR_TYPE_DMA) 1857 1814 return 0; 1858 1815 1859 - mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size); 1816 + ret = set_mtpt_pbl(mpt_entry, mr); 1860 1817 1861 - mpt_entry->pbl_ba_l = cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3)); 1862 - roce_set_field(mpt_entry->byte_48_mode_ba, V2_MPT_BYTE_48_PBL_BA_H_M, 1863 - V2_MPT_BYTE_48_PBL_BA_H_S, 1864 - upper_32_bits(mr->pbl_ba >> 3)); 1865 - mpt_entry->byte_48_mode_ba = cpu_to_le32(mpt_entry->byte_48_mode_ba); 1866 - 1867 - pages = (u64 *)__get_free_page(GFP_KERNEL); 1868 - if (!pages) 1869 - return -ENOMEM; 1870 - 1871 - i = 0; 1872 - for_each_sg(mr->umem->sg_head.sgl, sg, mr->umem->nmap, entry) { 1873 - len = sg_dma_len(sg) >> PAGE_SHIFT; 1874 - for (j = 0; j < len; ++j) { 1875 - page_addr = sg_dma_address(sg) + 1876 - (j << mr->umem->page_shift); 1877 - pages[i] = page_addr >> 6; 1878 - 1879 - /* Record the first 2 entry directly to MTPT table */ 1880 - if (i >= HNS_ROCE_V2_MAX_INNER_MTPT_NUM - 1) 1881 - goto found; 1882 - i++; 1883 - } 1884 - } 1885 - 1886 - found: 1887 - mpt_entry->pa0_l = cpu_to_le32(lower_32_bits(pages[0])); 1888 - roce_set_field(mpt_entry->byte_56_pa0_h, V2_MPT_BYTE_56_PA0_H_M, 1889 - V2_MPT_BYTE_56_PA0_H_S, 1890 - upper_32_bits(pages[0])); 1891 - mpt_entry->byte_56_pa0_h = cpu_to_le32(mpt_entry->byte_56_pa0_h); 1892 - 1893 - mpt_entry->pa1_l = cpu_to_le32(lower_32_bits(pages[1])); 1894 - roce_set_field(mpt_entry->byte_64_buf_pa1, V2_MPT_BYTE_64_PA1_H_M, 1895 - V2_MPT_BYTE_64_PA1_H_S, upper_32_bits(pages[1])); 1896 - 1897 - free_page((unsigned long)pages); 1898 - 1899 - roce_set_field(mpt_entry->byte_64_buf_pa1, 1900 - V2_MPT_BYTE_64_PBL_BUF_PG_SZ_M, 1901 - V2_MPT_BYTE_64_PBL_BUF_PG_SZ_S, 1902 - mr->pbl_buf_pg_sz + PG_SHIFT_OFFSET); 1903 - mpt_entry->byte_64_buf_pa1 = cpu_to_le32(mpt_entry->byte_64_buf_pa1); 1904 - 1905 - return 0; 1818 + return ret; 1906 1819 } 1907 1820 1908 1821 static int hns_roce_v2_rereg_write_mtpt(struct hns_roce_dev *hr_dev, ··· 1867 1868 u64 size, void *mb_buf) 1868 1869 { 1869 1870 struct hns_roce_v2_mpt_entry *mpt_entry = mb_buf; 1871 + int ret = 0; 1870 1872 1871 1873 if (flags & IB_MR_REREG_PD) { 1872 1874 roce_set_field(mpt_entry->byte_4_pd_hop_st, V2_MPT_BYTE_4_PD_M, ··· 1880 1880 V2_MPT_BYTE_8_BIND_EN_S, 1881 1881 (mr_access_flags & IB_ACCESS_MW_BIND ? 1 : 0)); 1882 1882 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, 1883 - V2_MPT_BYTE_8_ATOMIC_EN_S, 1884 - (mr_access_flags & IB_ACCESS_REMOTE_ATOMIC ? 1 : 0)); 1883 + V2_MPT_BYTE_8_ATOMIC_EN_S, 1884 + mr_access_flags & IB_ACCESS_REMOTE_ATOMIC ? 1 : 0); 1885 1885 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RR_EN_S, 1886 - (mr_access_flags & IB_ACCESS_REMOTE_READ ? 1 : 0)); 1886 + mr_access_flags & IB_ACCESS_REMOTE_READ ? 1 : 0); 1887 1887 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_RW_EN_S, 1888 - (mr_access_flags & IB_ACCESS_REMOTE_WRITE ? 1 : 0)); 1888 + mr_access_flags & IB_ACCESS_REMOTE_WRITE ? 1 : 0); 1889 1889 roce_set_bit(mpt_entry->byte_8_mw_cnt_en, V2_MPT_BYTE_8_LW_EN_S, 1890 - (mr_access_flags & IB_ACCESS_LOCAL_WRITE ? 1 : 0)); 1890 + mr_access_flags & IB_ACCESS_LOCAL_WRITE ? 1 : 0); 1891 1891 } 1892 1892 1893 1893 if (flags & IB_MR_REREG_TRANS) { ··· 1896 1896 mpt_entry->len_l = cpu_to_le32(lower_32_bits(size)); 1897 1897 mpt_entry->len_h = cpu_to_le32(upper_32_bits(size)); 1898 1898 1899 - mpt_entry->pbl_size = cpu_to_le32(mr->pbl_size); 1900 - mpt_entry->pbl_ba_l = 1901 - cpu_to_le32(lower_32_bits(mr->pbl_ba >> 3)); 1902 - roce_set_field(mpt_entry->byte_48_mode_ba, 1903 - V2_MPT_BYTE_48_PBL_BA_H_M, 1904 - V2_MPT_BYTE_48_PBL_BA_H_S, 1905 - upper_32_bits(mr->pbl_ba >> 3)); 1906 - mpt_entry->byte_48_mode_ba = 1907 - cpu_to_le32(mpt_entry->byte_48_mode_ba); 1908 - 1909 1899 mr->iova = iova; 1910 1900 mr->size = size; 1901 + 1902 + ret = set_mtpt_pbl(mpt_entry, mr); 1911 1903 } 1912 1904 1913 - return 0; 1905 + return ret; 1914 1906 } 1915 1908 static int hns_roce_v2_frmr_write_mtpt(void *mb_buf, struct hns_roce_mr *mr)
+11 -18
drivers/infiniband/hw/mlx5/main.c
··· 1094 1094 MLX5_IB_WIDTH_12X = 1 << 4 1095 1095 }; 1096 1096 1097 - static int translate_active_width(struct ib_device *ibdev, u8 active_width, 1097 + static void translate_active_width(struct ib_device *ibdev, u8 active_width, 1098 1098 u8 *ib_width) 1099 1099 { 1100 1100 struct mlx5_ib_dev *dev = to_mdev(ibdev); 1101 - int err = 0; 1102 1101 1103 - if (active_width & MLX5_IB_WIDTH_1X) { 1102 + if (active_width & MLX5_IB_WIDTH_1X) 1104 1103 *ib_width = IB_WIDTH_1X; 1105 - } else if (active_width & MLX5_IB_WIDTH_2X) { 1106 - mlx5_ib_dbg(dev, "active_width %d is not supported by IB spec\n", 1107 - (int)active_width); 1108 - err = -EINVAL; 1109 - } else if (active_width & MLX5_IB_WIDTH_4X) { 1104 + else if (active_width & MLX5_IB_WIDTH_4X) 1110 1105 *ib_width = IB_WIDTH_4X; 1111 - } else if (active_width & MLX5_IB_WIDTH_8X) { 1106 + else if (active_width & MLX5_IB_WIDTH_8X) 1112 1107 *ib_width = IB_WIDTH_8X; 1113 - } else if (active_width & MLX5_IB_WIDTH_12X) { 1108 + else if (active_width & MLX5_IB_WIDTH_12X) 1114 1109 *ib_width = IB_WIDTH_12X; 1115 - } else { 1116 - mlx5_ib_dbg(dev, "Invalid active_width %d\n", 1110 + else { 1111 + mlx5_ib_dbg(dev, "Invalid active_width %d, setting width to default value: 4x\n", 1117 1112 (int)active_width); 1118 - err = -EINVAL; 1113 + *ib_width = IB_WIDTH_4X; 1119 1114 } 1120 1115 1121 - return err; 1116 + return; 1122 1117 } 1123 1118 1124 1119 static int mlx5_mtu_to_ib_mtu(int mtu) ··· 1220 1225 if (err) 1221 1226 goto out; 1222 1227 1223 - err = translate_active_width(ibdev, ib_link_width_oper, 1224 - &props->active_width); 1225 - if (err) 1226 - goto out; 1228 + translate_active_width(ibdev, ib_link_width_oper, &props->active_width); 1229 + 1227 1230 err = mlx5_query_port_ib_proto_oper(mdev, &props->active_speed, port); 1228 1231 if (err) 1229 1232 goto out;
+10
drivers/infiniband/hw/mlx5/odp.c
··· 674 674 goto srcu_unlock; 675 675 } 676 676 677 + if (!mr->umem->is_odp) { 678 + mlx5_ib_dbg(dev, "skipping non ODP MR (lkey=0x%06x) in page fault handler.\n", 679 + key); 680 + if (bytes_mapped) 681 + *bytes_mapped += bcnt; 682 + ret = 0; 683 + goto srcu_unlock; 684 + } 685 + 677 686 ret = pagefault_mr(dev, mr, io_virt, bcnt, bytes_mapped); 678 687 if (ret < 0) 679 688 goto srcu_unlock; ··· 744 735 head = frame; 745 736 746 737 bcnt -= frame->bcnt; 738 + offset = 0; 747 739 } 748 740 break; 749 741
+11 -11
drivers/infiniband/hw/mlx5/qp.c
··· 2633 2633 2634 2634 if (access_flags & IB_ACCESS_REMOTE_READ) 2635 2635 *hw_access_flags |= MLX5_QP_BIT_RRE; 2636 - if ((access_flags & IB_ACCESS_REMOTE_ATOMIC) && 2637 - qp->ibqp.qp_type == IB_QPT_RC) { 2636 + if (access_flags & IB_ACCESS_REMOTE_ATOMIC) { 2638 2637 int atomic_mode; 2639 2638 2640 2639 atomic_mode = get_atomic_mode(dev, qp->ibqp.qp_type); ··· 4677 4678 goto out; 4678 4679 } 4679 4680 4680 - if (wr->opcode == IB_WR_LOCAL_INV || 4681 - wr->opcode == IB_WR_REG_MR) { 4681 + if (wr->opcode == IB_WR_REG_MR) { 4682 4682 fence = dev->umr_fence; 4683 4683 next_fence = MLX5_FENCE_MODE_INITIATOR_SMALL; 4684 - } else if (wr->send_flags & IB_SEND_FENCE) { 4685 - if (qp->next_fence) 4686 - fence = MLX5_FENCE_MODE_SMALL_AND_FENCE; 4687 - else 4688 - fence = MLX5_FENCE_MODE_FENCE; 4689 - } else { 4690 - fence = qp->next_fence; 4684 + } else { 4685 + if (wr->send_flags & IB_SEND_FENCE) { 4686 + if (qp->next_fence) 4687 + fence = MLX5_FENCE_MODE_SMALL_AND_FENCE; 4688 + else 4689 + fence = MLX5_FENCE_MODE_FENCE; 4690 + } else { 4691 + fence = qp->next_fence; 4692 + } 4691 4693 } 4692 4694 4693 4695 switch (ibqp->qp_type) {
+3 -1
drivers/infiniband/sw/rdmavt/ah.c
··· 91 91 * rvt_create_ah - create an address handle 92 92 * @pd: the protection domain 93 93 * @ah_attr: the attributes of the AH 94 + * @udata: pointer to user's input output buffer information. 94 95 * 95 96 * This may be called from interrupt context. 96 97 * 97 98 * Return: newly allocated ah 98 99 */ 99 100 struct ib_ah *rvt_create_ah(struct ib_pd *pd, 100 - struct rdma_ah_attr *ah_attr) 101 + struct rdma_ah_attr *ah_attr, 102 + struct ib_udata *udata) 101 103 { 102 104 struct rvt_ah *ah; 103 105 struct rvt_dev_info *dev = ib_to_rvt(pd->device);
+2 -1
drivers/infiniband/sw/rdmavt/ah.h
··· 51 51 #include <rdma/rdma_vt.h> 52 52 53 53 struct ib_ah *rvt_create_ah(struct ib_pd *pd, 54 - struct rdma_ah_attr *ah_attr); 54 + struct rdma_ah_attr *ah_attr, 55 + struct ib_udata *udata); 55 56 int rvt_destroy_ah(struct ib_ah *ibah); 56 57 int rvt_modify_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr); 57 58 int rvt_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr);
+3 -4
drivers/infiniband/ulp/iser/iser_verbs.c
··· 1124 1124 IB_MR_CHECK_SIG_STATUS, &mr_status); 1125 1125 if (ret) { 1126 1126 pr_err("ib_check_mr_status failed, ret %d\n", ret); 1127 - goto err; 1127 + /* Not a lot we can do, return ambiguous guard error */ 1128 + *sector = 0; 1129 + return 0x1; 1128 1130 } 1129 1131 1130 1132 if (mr_status.fail_status & IB_MR_CHECK_SIG_STATUS) { ··· 1154 1152 } 1155 1153 1156 1154 return 0; 1157 - err: 1158 - /* Not alot we can do here, return ambiguous guard error */ 1159 - return 0x1; 1160 1155 } 1161 1156 1162 1157 void iser_err_comp(struct ib_wc *wc, const char *type)
+1 -1
drivers/misc/mic/scif/scif_rma.c
··· 416 416 if (err) 417 417 goto error_window; 418 418 err = scif_map_page(&window->num_pages_lookup.lookup[j], 419 - vmalloc_dma_phys ? 419 + vmalloc_num_pages ? 420 420 vmalloc_to_page(&window->num_pages[i]) : 421 421 virt_to_page(&window->num_pages[i]), 422 422 remote_dev);
+2 -1
drivers/mtd/nand/bbt.c
··· 27 27 unsigned int nwords = DIV_ROUND_UP(nblocks * bits_per_block, 28 28 BITS_PER_LONG); 29 29 30 - nand->bbt.cache = kzalloc(nwords, GFP_KERNEL); 30 + nand->bbt.cache = kcalloc(nwords, sizeof(*nand->bbt.cache), 31 + GFP_KERNEL); 31 32 if (!nand->bbt.cache) 32 33 return -ENOMEM; 33 34
+29 -2
drivers/mtd/spi-nor/spi-nor.c
··· 2995 2995 const u32 *smpt) 2996 2996 { 2997 2997 struct spi_nor_erase_map *map = &nor->erase_map; 2998 - const struct spi_nor_erase_type *erase = map->erase_type; 2998 + struct spi_nor_erase_type *erase = map->erase_type; 2999 2999 struct spi_nor_erase_region *region; 3000 3000 u64 offset; 3001 3001 u32 region_count; 3002 3002 int i, j; 3003 - u8 erase_type, uniform_erase_type; 3003 + u8 uniform_erase_type, save_uniform_erase_type; 3004 + u8 erase_type, regions_erase_type; 3004 3005 3005 3006 region_count = SMPT_MAP_REGION_COUNT(*smpt); 3006 3007 /* ··· 3015 3014 map->regions = region; 3016 3015 3017 3016 uniform_erase_type = 0xff; 3017 + regions_erase_type = 0; 3018 3018 offset = 0; 3019 3019 /* Populate regions. */ 3020 3020 for (i = 0; i < region_count; i++) { ··· 3032 3030 */ 3033 3031 uniform_erase_type &= erase_type; 3034 3032 3033 + /* 3034 + * regions_erase_type mask will indicate all the erase types 3035 + * supported in this configuration map. 3036 + */ 3037 + regions_erase_type |= erase_type; 3038 + 3035 3039 offset = (region[i].offset & ~SNOR_ERASE_FLAGS_MASK) + 3036 3040 region[i].size; 3037 3041 } 3038 3042 3043 + save_uniform_erase_type = map->uniform_erase_type; 3039 3044 map->uniform_erase_type = spi_nor_sort_erase_mask(map, 3040 3045 uniform_erase_type); 3046 + 3047 + if (!regions_erase_type) { 3048 + /* 3049 + * Roll back to the previous uniform_erase_type mask, SMPT is 3050 + * broken. 3051 + */ 3052 + map->uniform_erase_type = save_uniform_erase_type; 3053 + return -EINVAL; 3054 + } 3055 + 3056 + /* 3057 + * BFPT advertises all the erase types supported by all the possible 3058 + * map configurations. Mask out the erase types that are not supported 3059 + * by the current map configuration. 3060 + */ 3061 + for (i = 0; i < SNOR_ERASE_TYPE_MAX; i++) 3062 + if (!(regions_erase_type & BIT(erase[i].idx))) 3063 + spi_nor_set_erase_type(&erase[i], 0, 0xFF); 3041 3064 3042 3065 spi_nor_region_mark_end(&region[i - 1]); 3043 3066
+3
drivers/net/ethernet/cavium/thunder/nic_main.c
··· 1441 1441 { 1442 1442 struct nicpf *nic = pci_get_drvdata(pdev); 1443 1443 1444 + if (!nic) 1445 + return; 1446 + 1444 1447 if (nic->flags & NIC_SRIOV_ENABLED) 1445 1448 pci_disable_sriov(pdev); 1446 1449
+1 -3
drivers/net/ethernet/hisilicon/hip04_eth.c
··· 915 915 } 916 916 917 917 ret = register_netdev(ndev); 918 - if (ret) { 919 - free_netdev(ndev); 918 + if (ret) 920 919 goto alloc_fail; 921 - } 922 920 923 921 return 0; 924 922
+1 -1
drivers/net/ethernet/intel/i40e/i40e_main.c
··· 1413 1413 } 1414 1414 1415 1415 vsi->flags |= I40E_VSI_FLAG_FILTER_CHANGED; 1416 - set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->state); 1416 + set_bit(__I40E_MACVLAN_SYNC_PENDING, vsi->back->state); 1417 1417 } 1418 1418 1419 1419 /**
+7 -7
drivers/net/ethernet/intel/i40e/i40e_xsk.c
··· 33 33 } 34 34 35 35 /** 36 - * i40e_add_xsk_umem - Store an UMEM for a certain ring/qid 36 + * i40e_add_xsk_umem - Store a UMEM for a certain ring/qid 37 37 * @vsi: Current VSI 38 38 * @umem: UMEM to store 39 39 * @qid: Ring/qid to associate with the UMEM ··· 56 56 } 57 57 58 58 /** 59 - * i40e_remove_xsk_umem - Remove an UMEM for a certain ring/qid 59 + * i40e_remove_xsk_umem - Remove a UMEM for a certain ring/qid 60 60 * @vsi: Current VSI 61 61 * @qid: Ring/qid associated with the UMEM 62 62 **/ ··· 130 130 } 131 131 132 132 /** 133 - * i40e_xsk_umem_enable - Enable/associate an UMEM to a certain ring/qid 133 + * i40e_xsk_umem_enable - Enable/associate a UMEM to a certain ring/qid 134 134 * @vsi: Current VSI 135 135 * @umem: UMEM 136 136 * @qid: Rx ring to associate UMEM to ··· 189 189 } 190 190 191 191 /** 192 - * i40e_xsk_umem_disable - Diassociate an UMEM from a certain ring/qid 192 + * i40e_xsk_umem_disable - Disassociate a UMEM from a certain ring/qid 193 193 * @vsi: Current VSI 194 194 * @qid: Rx ring to associate UMEM to 195 195 * ··· 255 255 } 256 256 257 257 /** 258 - * i40e_xsk_umem_query - Queries a certain ring/qid for its UMEM 258 + * i40e_xsk_umem_setup - Enable/disassociate a UMEM to/from a ring/qid 259 259 * @vsi: Current VSI 260 260 * @umem: UMEM to enable/associate to a ring, or NULL to disable 261 261 * @qid: Rx ring to (dis)associate UMEM (from)to 262 262 * 263 - * This function enables or disables an UMEM to a certain ring. 263 + * This function enables or disables a UMEM to a certain ring. 264 264 * 265 265 * Returns 0 on success, <0 on failure 266 266 **/ ··· 276 276 * @rx_ring: Rx ring 277 277 * @xdp: xdp_buff used as input to the XDP program 278 278 * 279 - * This function enables or disables an UMEM to a certain ring. 279 + * This function enables or disables a UMEM to a certain ring. 280 280 * 281 281 * Returns any of I40E_XDP_{PASS, CONSUMED, TX, REDIR} 282 282 **/
+1
drivers/net/ethernet/intel/igb/e1000_i210.c
··· 842 842 nvm_word = E1000_INVM_DEFAULT_AL; 843 843 tmp_nvm = nvm_word | E1000_INVM_PLL_WO_VAL; 844 844 igb_write_phy_reg_82580(hw, I347AT4_PAGE_SELECT, E1000_PHY_PLL_FREQ_PAGE); 845 + phy_word = E1000_PHY_PLL_UNCONF; 845 846 for (i = 0; i < E1000_MAX_PLL_TRIES; i++) { 846 847 /* check current state directly from internal PHY */ 847 848 igb_read_phy_reg_82580(hw, E1000_PHY_PLL_FREQ_REG, &phy_word);
+3 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
··· 2262 2262 *autoneg = false; 2263 2263 2264 2264 if (hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core0 || 2265 - hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1) { 2265 + hw->phy.sfp_type == ixgbe_sfp_type_1g_sx_core1 || 2266 + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core0 || 2267 + hw->phy.sfp_type == ixgbe_sfp_type_1g_lx_core1) { 2266 2268 *speed = IXGBE_LINK_SPEED_1GB_FULL; 2267 2269 return 0; 2268 2270 }
+6 -5
drivers/net/ethernet/microchip/lan743x_main.c
··· 1672 1672 netif_wake_queue(adapter->netdev); 1673 1673 } 1674 1674 1675 - if (!napi_complete_done(napi, weight)) 1675 + if (!napi_complete(napi)) 1676 1676 goto done; 1677 1677 1678 1678 /* enable isr */ ··· 1681 1681 lan743x_csr_read(adapter, INT_STS); 1682 1682 1683 1683 done: 1684 - return weight; 1684 + return 0; 1685 1685 } 1686 1686 1687 1687 static void lan743x_tx_ring_cleanup(struct lan743x_tx *tx) ··· 1870 1870 tx->vector_flags = lan743x_intr_get_vector_flags(adapter, 1871 1871 INT_BIT_DMA_TX_ 1872 1872 (tx->channel_number)); 1873 - netif_napi_add(adapter->netdev, 1874 - &tx->napi, lan743x_tx_napi_poll, 1875 - tx->ring_size - 1); 1873 + netif_tx_napi_add(adapter->netdev, 1874 + &tx->napi, lan743x_tx_napi_poll, 1875 + tx->ring_size - 1); 1876 1876 napi_enable(&tx->napi); 1877 1877 1878 1878 data = 0; ··· 3017 3017 3018 3018 static const struct pci_device_id lan743x_pcidev_tbl[] = { 3019 3019 { PCI_DEVICE(PCI_VENDOR_ID_SMSC, PCI_DEVICE_ID_SMSC_LAN7430) }, 3020 + { PCI_DEVICE(PCI_VENDOR_ID_SMSC, PCI_DEVICE_ID_SMSC_LAN7431) }, 3020 3021 { 0, } 3021 3022 }; 3022 3023
+1
drivers/net/ethernet/microchip/lan743x_main.h
··· 548 548 /* SMSC acquired EFAR late 1990's, MCHP acquired SMSC 2012 */ 549 549 #define PCI_VENDOR_ID_SMSC PCI_VENDOR_ID_EFAR 550 550 #define PCI_DEVICE_ID_SMSC_LAN7430 (0x7430) 551 + #define PCI_DEVICE_ID_SMSC_LAN7431 (0x7431) 551 552 552 553 #define PCI_CONFIG_LENGTH (0x1000) 553 554
+1 -1
drivers/net/ethernet/qlogic/qed/qed_debug.c
··· 6071 6071 "no error", 6072 6072 "length error", 6073 6073 "function disabled", 6074 - "VF sent command to attnetion address", 6074 + "VF sent command to attention address", 6075 6075 "host sent prod update command", 6076 6076 "read of during interrupt register while in MIMD mode", 6077 6077 "access to PXP BAR reserved address",
+1 -1
drivers/net/ethernet/via/via-velocity.c
··· 3605 3605 "tx_jumbo", 3606 3606 "rx_mac_control_frames", 3607 3607 "tx_mac_control_frames", 3608 - "rx_frame_alignement_errors", 3608 + "rx_frame_alignment_errors", 3609 3609 "rx_long_ok", 3610 3610 "rx_long_err", 3611 3611 "tx_sqe_errors",
+8
drivers/net/phy/phy_device.c
··· 2197 2197 new_driver->mdiodrv.driver.remove = phy_remove; 2198 2198 new_driver->mdiodrv.driver.owner = owner; 2199 2199 2200 + /* The following works around an issue where the PHY driver doesn't bind 2201 + * to the device, resulting in the genphy driver being used instead of 2202 + * the dedicated driver. The root cause of the issue isn't known yet 2203 + * and seems to be in the base driver core. Once this is fixed we may 2204 + * remove this workaround. 2205 + */ 2206 + new_driver->mdiodrv.driver.probe_type = PROBE_FORCE_SYNCHRONOUS; 2207 + 2200 2208 retval = driver_register(&new_driver->mdiodrv.driver); 2201 2209 if (retval) { 2202 2210 pr_err("%s: Error %d in registering driver\n",
+1 -1
drivers/net/rionet.c
··· 216 216 * it just report sending a packet to the target 217 217 * (without actual packet transfer). 218 218 */ 219 - dev_kfree_skb_any(skb); 220 219 ndev->stats.tx_packets++; 221 220 ndev->stats.tx_bytes += skb->len; 221 + dev_kfree_skb_any(skb); 222 222 } 223 223 } 224 224
+4 -6
drivers/net/usb/ipheth.c
··· 140 140 struct usb_device *udev; 141 141 struct usb_interface *intf; 142 142 struct net_device *net; 143 - struct sk_buff *tx_skb; 144 143 struct urb *tx_urb; 145 144 struct urb *rx_urb; 146 145 unsigned char *tx_buf; ··· 229 230 case -ENOENT: 230 231 case -ECONNRESET: 231 232 case -ESHUTDOWN: 233 + case -EPROTO: 232 234 return; 233 235 case 0: 234 236 break; ··· 281 281 dev_err(&dev->intf->dev, "%s: urb status: %d\n", 282 282 __func__, status); 283 283 284 - dev_kfree_skb_irq(dev->tx_skb); 285 284 if (status == 0) 286 285 netif_wake_queue(dev->net); 287 286 else ··· 422 423 if (skb->len > IPHETH_BUF_SIZE) { 423 424 WARN(1, "%s: skb too large: %d bytes\n", __func__, skb->len); 424 425 dev->net->stats.tx_dropped++; 425 - dev_kfree_skb_irq(skb); 426 + dev_kfree_skb_any(skb); 426 427 return NETDEV_TX_OK; 427 428 } 428 429 ··· 442 443 dev_err(&dev->intf->dev, "%s: usb_submit_urb: %d\n", 443 444 __func__, retval); 444 445 dev->net->stats.tx_errors++; 445 - dev_kfree_skb_irq(skb); 446 + dev_kfree_skb_any(skb); 446 447 } else { 447 - dev->tx_skb = skb; 448 - 449 448 dev->net->stats.tx_packets++; 450 449 dev->net->stats.tx_bytes += skb->len; 450 + dev_consume_skb_any(skb); 451 451 netif_stop_queue(net); 452 452 } 453 453
+5 -3
drivers/nvme/host/core.c
··· 3314 3314 struct nvme_ns *ns, *next; 3315 3315 LIST_HEAD(ns_list); 3316 3316 3317 + /* prevent racing with ns scanning */ 3318 + flush_work(&ctrl->scan_work); 3319 + 3317 3320 /* 3318 3321 * The dead states indicates the controller was not gracefully 3319 3322 * disconnected. In that case, we won't be able to flush any data while ··· 3479 3476 nvme_mpath_stop(ctrl); 3480 3477 nvme_stop_keep_alive(ctrl); 3481 3478 flush_work(&ctrl->async_event_work); 3482 - flush_work(&ctrl->scan_work); 3483 3479 cancel_work_sync(&ctrl->fw_act_work); 3484 3480 if (ctrl->ops->stop_ctrl) 3485 3481 ctrl->ops->stop_ctrl(ctrl); ··· 3587 3585 3588 3586 return 0; 3589 3587 out_free_name: 3590 - kfree_const(dev->kobj.name); 3588 + kfree_const(ctrl->device->kobj.name); 3591 3589 out_release_instance: 3592 3590 ida_simple_remove(&nvme_instance_ida, ctrl->instance); 3593 3591 out: ··· 3609 3607 down_read(&ctrl->namespaces_rwsem); 3610 3608 3611 3609 /* Forcibly unquiesce queues to avoid blocking dispatch */ 3612 - if (ctrl->admin_q) 3610 + if (ctrl->admin_q && !blk_queue_dying(ctrl->admin_q)) 3613 3611 blk_mq_unquiesce_queue(ctrl->admin_q); 3614 3612 3615 3613 list_for_each_entry(ns, &ctrl->namespaces, list)
+1 -1
drivers/nvme/host/fc.c
··· 1752 1752 struct nvme_fc_queue *queue = &ctrl->queues[queue_idx]; 1753 1753 int res; 1754 1754 1755 - nvme_req(rq)->ctrl = &ctrl->ctrl; 1756 1755 res = __nvme_fc_init_request(ctrl, queue, &op->op, rq, queue->rqcnt++); 1757 1756 if (res) 1758 1757 return res; 1759 1758 op->op.fcp_req.first_sgl = &op->sgl[0]; 1760 1759 op->op.fcp_req.private = &op->priv[0]; 1760 + nvme_req(rq)->ctrl = &ctrl->ctrl; 1761 1761 return res; 1762 1762 } 1763 1763
+3
drivers/nvme/host/nvme.h
··· 531 531 static inline int nvme_mpath_init(struct nvme_ctrl *ctrl, 532 532 struct nvme_id_ctrl *id) 533 533 { 534 + if (ctrl->subsys->cmic & (1 << 3)) 535 + dev_warn(ctrl->device, 536 + "Please enable CONFIG_NVME_MULTIPATH for full support of multi-port devices.\n"); 534 537 return 0; 535 538 } 536 539 static inline void nvme_mpath_uninit(struct nvme_ctrl *ctrl)
+2
drivers/nvme/host/rdma.c
··· 184 184 qe->dma = ib_dma_map_single(ibdev, qe->data, capsule_size, dir); 185 185 if (ib_dma_mapping_error(ibdev, qe->dma)) { 186 186 kfree(qe->data); 187 + qe->data = NULL; 187 188 return -ENOMEM; 188 189 } 189 190 ··· 824 823 out_free_async_qe: 825 824 nvme_rdma_free_qe(ctrl->device->dev, &ctrl->async_event_sqe, 826 825 sizeof(struct nvme_command), DMA_TO_DEVICE); 826 + ctrl->async_event_sqe.data = NULL; 827 827 out_free_queue: 828 828 nvme_rdma_free_queue(&ctrl->queues[0]); 829 829 return error;
+2 -4
drivers/opp/of.c
··· 579 579 */ 580 580 count = of_count_phandle_with_args(dev->of_node, 581 581 "operating-points-v2", NULL); 582 - if (count != 1) 583 - return -ENODEV; 584 - 585 - index = 0; 582 + if (count == 1) 583 + index = 0; 586 584 } 587 585 588 586 opp_table = dev_pm_opp_get_opp_table_indexed(dev, index);
-1
drivers/opp/ti-opp-supply.c
··· 417 417 .probe = ti_opp_supply_probe, 418 418 .driver = { 419 419 .name = "ti_opp_supply", 420 - .owner = THIS_MODULE, 421 420 .of_match_table = of_match_ptr(ti_opp_supply_of_match), 422 421 }, 423 422 };
+1 -9
drivers/pci/controller/dwc/pci-imx6.c
··· 81 81 #define PCIE_PL_PFLR_FORCE_LINK (1 << 15) 82 82 #define PCIE_PHY_DEBUG_R0 (PL_OFFSET + 0x28) 83 83 #define PCIE_PHY_DEBUG_R1 (PL_OFFSET + 0x2c) 84 - #define PCIE_PHY_DEBUG_R1_XMLH_LINK_IN_TRAINING (1 << 29) 85 - #define PCIE_PHY_DEBUG_R1_XMLH_LINK_UP (1 << 4) 86 84 87 85 #define PCIE_PHY_CTRL (PL_OFFSET + 0x114) 88 86 #define PCIE_PHY_CTRL_DATA_LOC 0 ··· 709 711 return 0; 710 712 } 711 713 712 - static int imx6_pcie_link_up(struct dw_pcie *pci) 713 - { 714 - return dw_pcie_readl_dbi(pci, PCIE_PHY_DEBUG_R1) & 715 - PCIE_PHY_DEBUG_R1_XMLH_LINK_UP; 716 - } 717 - 718 714 static const struct dw_pcie_host_ops imx6_pcie_host_ops = { 719 715 .host_init = imx6_pcie_host_init, 720 716 }; ··· 741 749 } 742 750 743 751 static const struct dw_pcie_ops dw_pcie_ops = { 744 - .link_up = imx6_pcie_link_up, 752 + /* No special ops needed, but pcie-designware still expects this struct */ 745 753 }; 746 754 747 755 #ifdef CONFIG_PM_SLEEP
+1 -1
drivers/pci/controller/dwc/pci-layerscape.c
··· 88 88 int i; 89 89 90 90 for (i = 0; i < PCIE_IATU_NUM; i++) 91 - dw_pcie_disable_atu(pcie->pci, DW_PCIE_REGION_OUTBOUND, i); 91 + dw_pcie_disable_atu(pcie->pci, i, DW_PCIE_REGION_OUTBOUND); 92 92 } 93 93 94 94 static int ls1021_pcie_link_up(struct dw_pcie *pci)
-1
drivers/pci/controller/dwc/pcie-designware-ep.c
··· 440 440 tbl_offset = dw_pcie_readl_dbi(pci, reg); 441 441 bir = (tbl_offset & PCI_MSIX_TABLE_BIR); 442 442 tbl_offset &= PCI_MSIX_TABLE_OFFSET; 443 - tbl_offset >>= 3; 444 443 445 444 reg = PCI_BASE_ADDRESS_0 + (4 * bir); 446 445 bar_addr_upper = 0;
+11 -13
drivers/pci/pci.c
··· 5556 5556 u32 lnkcap2, lnkcap; 5557 5557 5558 5558 /* 5559 - * PCIe r4.0 sec 7.5.3.18 recommends using the Supported Link 5560 - * Speeds Vector in Link Capabilities 2 when supported, falling 5561 - * back to Max Link Speed in Link Capabilities otherwise. 5559 + * Link Capabilities 2 was added in PCIe r3.0, sec 7.8.18. The 5560 + * implementation note there recommends using the Supported Link 5561 + * Speeds Vector in Link Capabilities 2 when supported. 5562 + * 5563 + * Without Link Capabilities 2, i.e., prior to PCIe r3.0, software 5564 + * should use the Supported Link Speeds field in Link Capabilities, 5565 + * where only 2.5 GT/s and 5.0 GT/s speeds were defined. 5562 5566 */ 5563 5567 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP2, &lnkcap2); 5564 5568 if (lnkcap2) { /* PCIe r3.0-compliant */ ··· 5578 5574 } 5579 5575 5580 5576 pcie_capability_read_dword(dev, PCI_EXP_LNKCAP, &lnkcap); 5581 - if (lnkcap) { 5582 - if (lnkcap & PCI_EXP_LNKCAP_SLS_16_0GB) 5583 - return PCIE_SPEED_16_0GT; 5584 - else if (lnkcap & PCI_EXP_LNKCAP_SLS_8_0GB) 5585 - return PCIE_SPEED_8_0GT; 5586 - else if (lnkcap & PCI_EXP_LNKCAP_SLS_5_0GB) 5587 - return PCIE_SPEED_5_0GT; 5588 - else if (lnkcap & PCI_EXP_LNKCAP_SLS_2_5GB) 5589 - return PCIE_SPEED_2_5GT; 5590 - } 5577 + if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_5_0GB) 5578 + return PCIE_SPEED_5_0GT; 5579 + else if ((lnkcap & PCI_EXP_LNKCAP_SLS) == PCI_EXP_LNKCAP_SLS_2_5GB) 5580 + return PCIE_SPEED_2_5GT; 5591 5581 5592 5582 return PCI_SPEED_UNKNOWN; 5593 5583 }
+11 -9
drivers/phy/qualcomm/phy-qcom-qusb2.c
··· 231 231 .mask_core_ready = CORE_READY_STATUS, 232 232 .has_pll_override = true, 233 233 .autoresume_en = BIT(0), 234 + .update_tune1_with_efuse = true, 234 235 }; 235 236 236 237 static const char * const qusb2_phy_vreg_names[] = { ··· 403 402 404 403 /* 405 404 * Read efuse register having TUNE2/1 parameter's high nibble. 406 - * If efuse register shows value as 0x0, or if we fail to find 407 - * a valid efuse register settings, then use default value 408 - * as 0xB for high nibble that we have already set while 409 - * configuring phy. 405 + * If efuse register shows value as 0x0 (indicating value is not 406 + * fused), or if we fail to find a valid efuse register setting, 407 + * then use default value for high nibble that we have already 408 + * set while configuring the phy. 410 409 */ 411 410 val = nvmem_cell_read(qphy->cell, NULL); 412 411 if (IS_ERR(val) || !val[0]) { ··· 416 415 417 416 /* Fused TUNE1/2 value is the higher nibble only */ 418 417 if (cfg->update_tune1_with_efuse) 419 - qusb2_setbits(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1], 420 - val[0] << 0x4); 418 + qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE1], 419 + val[0] << HSTX_TRIM_SHIFT, 420 + HSTX_TRIM_MASK); 421 421 else 422 - qusb2_setbits(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2], 423 - val[0] << 0x4); 424 - 422 + qusb2_write_mask(qphy->base, cfg->regs[QUSB2PHY_PORT_TUNE2], 423 + val[0] << HSTX_TRIM_SHIFT, 424 + HSTX_TRIM_MASK); 425 425 } 426 426 427 427 static int qusb2_phy_set_mode(struct phy *phy, enum phy_mode mode)
+2 -1
drivers/phy/socionext/Kconfig
··· 26 26 27 27 config PHY_UNIPHIER_PCIE 28 28 tristate "Uniphier PHY driver for PCIe controller" 29 - depends on (ARCH_UNIPHIER || COMPILE_TEST) && OF 29 + depends on ARCH_UNIPHIER || COMPILE_TEST 30 + depends on OF && HAS_IOMEM 30 31 default PCIE_UNIPHIER 31 32 select GENERIC_PHY 32 33 help
+1 -1
drivers/rtc/rtc-hid-sensor-time.c
··· 213 213 /* get a report with all values through requesting one value */ 214 214 sensor_hub_input_attr_get_raw_value(time_state->common_attributes.hsdev, 215 215 HID_USAGE_SENSOR_TIME, hid_time_addresses[0], 216 - time_state->info[0].report_id, SENSOR_HUB_SYNC); 216 + time_state->info[0].report_id, SENSOR_HUB_SYNC, false); 217 217 /* wait for all values (event) */ 218 218 ret = wait_for_completion_killable_timeout( 219 219 &time_state->comp_last_time, HZ*6);
+4 -2
drivers/s390/cio/vfio_ccw_cp.c
··· 387 387 * orb specified one of the unsupported formats, we defer 388 388 * checking for IDAWs in unsupported formats to here. 389 389 */ 390 - if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw)) 390 + if ((!cp->orb.cmd.c64 || cp->orb.cmd.i2k) && ccw_is_idal(ccw)) { 391 + kfree(p); 391 392 return -EOPNOTSUPP; 393 + } 392 394 393 395 if ((!ccw_is_chain(ccw)) && (!ccw_is_tic(ccw))) 394 396 break; ··· 530 528 531 529 ret = pfn_array_alloc_pin(pat->pat_pa, cp->mdev, ccw->cda, ccw->count); 532 530 if (ret < 0) 533 - goto out_init; 531 + goto out_unpin; 534 532 535 533 /* Translate this direct ccw to a idal ccw. */ 536 534 idaws = kcalloc(ret, sizeof(*idaws), GFP_DMA | GFP_KERNEL);
+5 -5
drivers/s390/cio/vfio_ccw_drv.c
··· 22 22 #include "vfio_ccw_private.h" 23 23 24 24 struct workqueue_struct *vfio_ccw_work_q; 25 - struct kmem_cache *vfio_ccw_io_region; 25 + static struct kmem_cache *vfio_ccw_io_region; 26 26 27 27 /* 28 28 * Helpers ··· 134 134 if (ret) 135 135 goto out_free; 136 136 137 - ret = vfio_ccw_mdev_reg(sch); 138 - if (ret) 139 - goto out_disable; 140 - 141 137 INIT_WORK(&private->io_work, vfio_ccw_sch_io_todo); 142 138 atomic_set(&private->avail, 1); 143 139 private->state = VFIO_CCW_STATE_STANDBY; 140 + 141 + ret = vfio_ccw_mdev_reg(sch); 142 + if (ret) 143 + goto out_disable; 144 144 145 145 return 0; 146 146
+4 -4
drivers/s390/crypto/ap_bus.c
··· 775 775 drvres = ap_drv->flags & AP_DRIVER_FLAG_DEFAULT; 776 776 if (!!devres != !!drvres) 777 777 return -ENODEV; 778 + /* (re-)init queue's state machine */ 779 + ap_queue_reinit_state(to_ap_queue(dev)); 778 780 } 779 781 780 782 /* Add queue/card to list of active queues/cards */ ··· 809 807 struct ap_device *ap_dev = to_ap_dev(dev); 810 808 struct ap_driver *ap_drv = ap_dev->drv; 811 809 810 + if (is_queue_dev(dev)) 811 + ap_queue_remove(to_ap_queue(dev)); 812 812 if (ap_drv->remove) 813 813 ap_drv->remove(ap_dev); 814 814 ··· 1448 1444 aq->ap_dev.device.parent = &ac->ap_dev.device; 1449 1445 dev_set_name(&aq->ap_dev.device, 1450 1446 "%02x.%04x", id, dom); 1451 - /* Start with a device reset */ 1452 - spin_lock_bh(&aq->lock); 1453 - ap_wait(ap_sm_event(aq, AP_EVENT_POLL)); 1454 - spin_unlock_bh(&aq->lock); 1455 1447 /* Register device */ 1456 1448 rc = device_register(&aq->ap_dev.device); 1457 1449 if (rc) {
+1
drivers/s390/crypto/ap_bus.h
··· 254 254 void ap_queue_remove(struct ap_queue *aq); 255 255 void ap_queue_suspend(struct ap_device *ap_dev); 256 256 void ap_queue_resume(struct ap_device *ap_dev); 257 + void ap_queue_reinit_state(struct ap_queue *aq); 257 258 258 259 struct ap_card *ap_card_create(int id, int queue_depth, int raw_device_type, 259 260 int comp_device_type, unsigned int functions);
+15
drivers/s390/crypto/ap_queue.c
··· 718 718 { 719 719 ap_flush_queue(aq); 720 720 del_timer_sync(&aq->timeout); 721 + 722 + /* reset with zero, also clears irq registration */ 723 + spin_lock_bh(&aq->lock); 724 + ap_zapq(aq->qid); 725 + aq->state = AP_STATE_BORKED; 726 + spin_unlock_bh(&aq->lock); 721 727 } 722 728 EXPORT_SYMBOL(ap_queue_remove); 729 + 730 + void ap_queue_reinit_state(struct ap_queue *aq) 731 + { 732 + spin_lock_bh(&aq->lock); 733 + aq->state = AP_STATE_RESET_START; 734 + ap_wait(ap_sm_event(aq, AP_EVENT_POLL)); 735 + spin_unlock_bh(&aq->lock); 736 + } 737 + EXPORT_SYMBOL(ap_queue_reinit_state);
-1
drivers/s390/crypto/zcrypt_cex2a.c
··· 196 196 struct ap_queue *aq = to_ap_queue(&ap_dev->device); 197 197 struct zcrypt_queue *zq = aq->private; 198 198 199 - ap_queue_remove(aq); 200 199 if (zq) 201 200 zcrypt_queue_unregister(zq); 202 201 }
-1
drivers/s390/crypto/zcrypt_cex2c.c
··· 251 251 struct ap_queue *aq = to_ap_queue(&ap_dev->device); 252 252 struct zcrypt_queue *zq = aq->private; 253 253 254 - ap_queue_remove(aq); 255 254 if (zq) 256 255 zcrypt_queue_unregister(zq); 257 256 }
-1
drivers/s390/crypto/zcrypt_cex4.c
··· 275 275 struct ap_queue *aq = to_ap_queue(&ap_dev->device); 276 276 struct zcrypt_queue *zq = aq->private; 277 277 278 - ap_queue_remove(aq); 279 278 if (zq) 280 279 zcrypt_queue_unregister(zq); 281 280 }
+12 -15
drivers/s390/net/qeth_core_main.c
··· 4518 4518 { 4519 4519 struct qeth_ipa_cmd *cmd; 4520 4520 struct qeth_arp_query_info *qinfo; 4521 - struct qeth_snmp_cmd *snmp; 4522 4521 unsigned char *data; 4522 + void *snmp_data; 4523 4523 __u16 data_len; 4524 4524 4525 4525 QETH_CARD_TEXT(card, 3, "snpcmdcb"); ··· 4527 4527 cmd = (struct qeth_ipa_cmd *) sdata; 4528 4528 data = (unsigned char *)((char *)cmd - reply->offset); 4529 4529 qinfo = (struct qeth_arp_query_info *) reply->param; 4530 - snmp = &cmd->data.setadapterparms.data.snmp; 4531 4530 4532 4531 if (cmd->hdr.return_code) { 4533 4532 QETH_CARD_TEXT_(card, 4, "scer1%x", cmd->hdr.return_code); ··· 4539 4540 return 0; 4540 4541 } 4541 4542 data_len = *((__u16 *)QETH_IPA_PDU_LEN_PDU1(data)); 4542 - if (cmd->data.setadapterparms.hdr.seq_no == 1) 4543 - data_len -= (__u16)((char *)&snmp->data - (char *)cmd); 4544 - else 4545 - data_len -= (__u16)((char *)&snmp->request - (char *)cmd); 4543 + if (cmd->data.setadapterparms.hdr.seq_no == 1) { 4544 + snmp_data = &cmd->data.setadapterparms.data.snmp; 4545 + data_len -= offsetof(struct qeth_ipa_cmd, 4546 + data.setadapterparms.data.snmp); 4547 + } else { 4548 + snmp_data = &cmd->data.setadapterparms.data.snmp.request; 4549 + data_len -= offsetof(struct qeth_ipa_cmd, 4550 + data.setadapterparms.data.snmp.request); 4551 + } 4546 4552 4547 4553 /* check if there is enough room in userspace */ 4548 4554 if ((qinfo->udata_len - qinfo->udata_offset) < data_len) { ··· 4560 4556 QETH_CARD_TEXT_(card, 4, "sseqn%i", 4561 4557 cmd->data.setadapterparms.hdr.seq_no); 4562 4558 /*copy entries to user buffer*/ 4563 4559 - if (cmd->data.setadapterparms.hdr.seq_no == 1) { 4564 4560 - memcpy(qinfo->udata + qinfo->udata_offset, 4565 - (char *)snmp, 4566 - data_len + offsetof(struct qeth_snmp_cmd, data)); 4567 - qinfo->udata_offset += offsetof(struct qeth_snmp_cmd, data); 4568 - } else { 4569 - memcpy(qinfo->udata + qinfo->udata_offset, 4570 - (char *)&snmp->request, data_len); 4571 - } 4559 + memcpy(qinfo->udata + qinfo->udata_offset, snmp_data, data_len); 4572 4560 qinfo->udata_offset += data_len; 4561 + 4573 4562 /* check if all replies received ... */ 4574 4563 QETH_CARD_TEXT_(card, 4, "srtot%i", 4575 4564 cmd->data.setadapterparms.hdr.used_total);
+2 -2
drivers/spi/spi-mt65xx.c
··· 522 522 mdata->xfer_len = min(MTK_SPI_MAX_FIFO_SIZE, len); 523 523 mtk_spi_setup_packet(master); 524 524 525 - cnt = len / 4; 525 + cnt = mdata->xfer_len / 4; 526 526 iowrite32_rep(mdata->base + SPI_TX_DATA_REG, 527 527 trans->tx_buf + mdata->num_xfered, cnt); 528 528 529 - remainder = len % 4; 529 + remainder = mdata->xfer_len % 4; 530 530 if (remainder > 0) { 531 531 reg_val = 0; 532 532 memcpy(&reg_val,
+25 -12
drivers/spi/spi-omap2-mcspi.c
··· 1540 1540 /* work with hotplug and coldplug */ 1541 1541 MODULE_ALIAS("platform:omap2_mcspi"); 1542 1542 1543 - #ifdef CONFIG_SUSPEND 1544 - static int omap2_mcspi_suspend_noirq(struct device *dev) 1543 + static int __maybe_unused omap2_mcspi_suspend(struct device *dev) 1545 1544 { 1546 - return pinctrl_pm_select_sleep_state(dev); 1545 + struct spi_master *master = dev_get_drvdata(dev); 1546 + struct omap2_mcspi *mcspi = spi_master_get_devdata(master); 1547 + int error; 1548 + 1549 + error = pinctrl_pm_select_sleep_state(dev); 1550 + if (error) 1551 + dev_warn(mcspi->dev, "%s: failed to set pins: %i\n", 1552 + __func__, error); 1553 + 1554 + error = spi_master_suspend(master); 1555 + if (error) 1556 + dev_warn(mcspi->dev, "%s: master suspend failed: %i\n", 1557 + __func__, error); 1558 + 1559 + return pm_runtime_force_suspend(dev); 1547 1560 } 1548 1561 1549 - static int omap2_mcspi_resume_noirq(struct device *dev) 1562 + static int __maybe_unused omap2_mcspi_resume(struct device *dev) 1550 1563 { 1551 1564 struct spi_master *master = dev_get_drvdata(dev); 1552 1565 struct omap2_mcspi *mcspi = spi_master_get_devdata(master); ··· 1570 1557 dev_warn(mcspi->dev, "%s: failed to set pins: %i\n", 1571 1558 __func__, error); 1572 1559 1573 - return 0; 1560 + error = spi_master_resume(master); 1561 + if (error) 1562 + dev_warn(mcspi->dev, "%s: master resume failed: %i\n", 1563 + __func__, error); 1564 + 1565 + return pm_runtime_force_resume(dev); 1574 1566 } 1575 1567 1576 - #else 1577 - #define omap2_mcspi_suspend_noirq NULL 1578 - #define omap2_mcspi_resume_noirq NULL 1579 - #endif 1580 - 1581 1568 static const struct dev_pm_ops omap2_mcspi_pm_ops = { 1582 - .suspend_noirq = omap2_mcspi_suspend_noirq, 1583 - .resume_noirq = omap2_mcspi_resume_noirq, 1569 + SET_SYSTEM_SLEEP_PM_OPS(omap2_mcspi_suspend, 1570 + omap2_mcspi_resume) 1584 1571 .runtime_resume = omap_mcspi_runtime_resume, 1585 1572 }; 1586 1573
+21 -18
drivers/staging/comedi/comedi.h
··· 1005 1005 * and INSN_DEVICE_CONFIG_GET_ROUTES. 1006 1006 */ 1007 1007 #define NI_NAMES_BASE 0x8000u 1008 + 1009 + #define _TERM_N(base, n, x) ((base) + ((x) & ((n) - 1))) 1010 + 1008 1011 /* 1009 1012 * not necessarily all allowed 64 PFIs are valid--certainly not for all devices 1010 1013 */ 1011 - #define NI_PFI(x) (NI_NAMES_BASE + ((x) & 0x3f)) 1014 + #define NI_PFI(x) _TERM_N(NI_NAMES_BASE, 64, x) 1012 1015 /* 8 trigger lines by standard, Some devices cannot talk to all eight. */ 1013 - #define TRIGGER_LINE(x) (NI_PFI(-1) + 1 + ((x) & 0x7)) 1016 + #define TRIGGER_LINE(x) _TERM_N(NI_PFI(-1) + 1, 8, x) 1014 1017 /* 4 RTSI shared MUXes to route signals to/from TRIGGER_LINES on NI hardware */ 1015 - #define NI_RTSI_BRD(x) (TRIGGER_LINE(-1) + 1 + ((x) & 0x3)) 1018 + #define NI_RTSI_BRD(x) _TERM_N(TRIGGER_LINE(-1) + 1, 4, x) 1016 1019 1017 1020 /* *** Counter/timer names : 8 counters max *** */ 1018 - #define NI_COUNTER_NAMES_BASE (NI_RTSI_BRD(-1) + 1) 1019 - #define NI_MAX_COUNTERS 7 1020 - #define NI_CtrSource(x) (NI_COUNTER_NAMES_BASE + ((x) & NI_MAX_COUNTERS)) 1021 + #define NI_MAX_COUNTERS 8 1022 + #define NI_COUNTER_NAMES_BASE (NI_RTSI_BRD(-1) + 1) 1023 + #define NI_CtrSource(x) _TERM_N(NI_COUNTER_NAMES_BASE, NI_MAX_COUNTERS, x) 1021 1024 /* Gate, Aux, A,B,Z are all treated, at times as gates */ 1022 - #define NI_GATES_NAMES_BASE (NI_CtrSource(-1) + 1) 1023 - #define NI_CtrGate(x) (NI_GATES_NAMES_BASE + ((x) & NI_MAX_COUNTERS)) 1024 - #define NI_CtrAux(x) (NI_CtrGate(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1025 - #define NI_CtrA(x) (NI_CtrAux(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1026 - #define NI_CtrB(x) (NI_CtrA(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1027 - #define NI_CtrZ(x) (NI_CtrB(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1028 - #define NI_GATES_NAMES_MAX NI_CtrZ(-1) 1029 - #define NI_CtrArmStartTrigger(x) (NI_CtrZ(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1025 + #define NI_GATES_NAMES_BASE (NI_CtrSource(-1) + 1) 1026 + #define NI_CtrGate(x) _TERM_N(NI_GATES_NAMES_BASE, NI_MAX_COUNTERS, x) 1027 + #define NI_CtrAux(x) _TERM_N(NI_CtrGate(-1) + 1, NI_MAX_COUNTERS, x) 1028 + #define NI_CtrA(x) _TERM_N(NI_CtrAux(-1) + 1, NI_MAX_COUNTERS, x) 1029 + #define NI_CtrB(x) _TERM_N(NI_CtrA(-1) + 1, NI_MAX_COUNTERS, x) 1030 + #define NI_CtrZ(x) _TERM_N(NI_CtrB(-1) + 1, NI_MAX_COUNTERS, x) 1031 + #define NI_GATES_NAMES_MAX NI_CtrZ(-1) 1032 + #define NI_CtrArmStartTrigger(x) _TERM_N(NI_CtrZ(-1) + 1, NI_MAX_COUNTERS, x) 1030 1033 #define NI_CtrInternalOutput(x) \ 1031 - (NI_CtrArmStartTrigger(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1034 + _TERM_N(NI_CtrArmStartTrigger(-1) + 1, NI_MAX_COUNTERS, x) 1032 1035 /** external pin(s) labeled conveniently as Ctr<i>Out. */ 1033 1036 - #define NI_CtrOut(x) (NI_CtrInternalOutput(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1037 + #define NI_CtrOut(x) _TERM_N(NI_CtrInternalOutput(-1) + 1, NI_MAX_COUNTERS, x) 1034 1038 /** For Buffered sampling of ctr -- x series capability. */ 1035 - #define NI_CtrSampleClock(x) (NI_CtrOut(-1) + 1 + ((x) & NI_MAX_COUNTERS)) 1039 + #define NI_CtrSampleClock(x) _TERM_N(NI_CtrOut(-1) + 1, NI_MAX_COUNTERS, x) 1036 - #define NI_COUNTER_NAMES_MAX NI_CtrSampleClock(-1) 1039 + #define NI_COUNTER_NAMES_MAX NI_CtrSampleClock(-1) 1037 1040 1038 1041 enum ni_common_signal_names { 1039 1042 /* PXI_Star: this is a non-NI-specific signal */
+2 -1
drivers/staging/comedi/drivers/ni_mio_common.c
··· 2843 2843 return ni_ao_arm(dev, s); 2844 2844 case INSN_CONFIG_GET_CMD_TIMING_CONSTRAINTS: 2845 2845 /* we don't care about actual channels */ 2846 - data[1] = board->ao_speed; 2846 + /* data[3] : chanlist_len */ 2847 + data[1] = board->ao_speed * data[3]; 2847 2848 data[2] = 0; 2848 2849 return 0; 2849 2850 default:
+11 -11
drivers/staging/media/sunxi/cedrus/cedrus.c
··· 108 108 unsigned int count; 109 109 unsigned int i; 110 110 111 - count = vb2_request_buffer_cnt(req); 112 - if (!count) { 113 - v4l2_info(&ctx->dev->v4l2_dev, 114 - "No buffer was provided with the request\n"); 115 - return -ENOENT; 116 - } else if (count > 1) { 117 - v4l2_info(&ctx->dev->v4l2_dev, 118 - "More than one buffer was provided with the request\n"); 119 - return -EINVAL; 120 - } 121 - 122 111 list_for_each_entry(obj, &req->objects, list) { 123 112 struct vb2_buffer *vb; 124 113 ··· 121 132 122 133 if (!ctx) 123 134 return -ENOENT; 135 + 136 + count = vb2_request_buffer_cnt(req); 137 + if (!count) { 138 + v4l2_info(&ctx->dev->v4l2_dev, 139 + "No buffer was provided with the request\n"); 140 + return -ENOENT; 141 + } else if (count > 1) { 142 + v4l2_info(&ctx->dev->v4l2_dev, 143 + "More than one buffer was provided with the request\n"); 144 + return -EINVAL; 145 + } 124 146 125 147 parent_hdl = &ctx->hdl; 126 148
+1 -1
drivers/staging/most/core.c
··· 351 351 352 352 for (i = 0; i < ARRAY_SIZE(ch_data_type); i++) { 353 353 if (c->cfg.data_type & ch_data_type[i].most_ch_data_type) 354 - return snprintf(buf, PAGE_SIZE, ch_data_type[i].name); 354 + return snprintf(buf, PAGE_SIZE, "%s", ch_data_type[i].name); 355 355 } 356 356 return snprintf(buf, PAGE_SIZE, "unconfigured\n"); 357 357 }
+2 -1
drivers/staging/mt7621-dma/mtk-hsdma.c
··· 335 335 /* tx desc */ 336 336 src = sg->src_addr; 337 337 for (i = 0; i < chan->desc->num_sgs; i++) { 338 + tx_desc = &chan->tx_ring[chan->tx_idx]; 339 + 338 340 if (len > HSDMA_MAX_PLEN) 339 341 tlen = HSDMA_MAX_PLEN; 340 342 else ··· 346 344 tx_desc->addr1 = src; 347 345 tx_desc->flags |= HSDMA_DESC_PLEN1(tlen); 348 346 } else { 349 - tx_desc = &chan->tx_ring[chan->tx_idx]; 350 347 tx_desc->addr0 = src; 351 348 tx_desc->flags = HSDMA_DESC_PLEN0(tlen); 352 349
+1 -1
drivers/staging/mt7621-pinctrl/pinctrl-rt2880.c
··· 82 82 struct property *prop; 83 83 const char *function_name, *group_name; 84 84 int ret; 85 - int ngroups; 85 + int ngroups = 0; 86 86 unsigned int reserved_maps = 0; 87 87 88 88 for_each_node_with_property(np_config, "group")
+2 -2
drivers/staging/rtl8723bs/hal/rtl8723bs_recv.c
··· 109 109 rx_bssid = get_hdr_bssid(wlanhdr); 110 110 pkt_info.bssid_match = ((!IsFrameTypeCtrl(wlanhdr)) && 111 111 !pattrib->icv_err && !pattrib->crc_err && 112 - !ether_addr_equal(rx_bssid, my_bssid)); 112 + ether_addr_equal(rx_bssid, my_bssid)); 113 113 114 114 rx_ra = get_ra(wlanhdr); 115 115 my_hwaddr = myid(&padapter->eeprompriv); 116 116 pkt_info.to_self = pkt_info.bssid_match && 117 - !ether_addr_equal(rx_ra, my_hwaddr); 117 + ether_addr_equal(rx_ra, my_hwaddr); 118 118 119 119 120 120 pkt_info.is_beacon = pkt_info.bssid_match &&
+1 -1
drivers/staging/rtl8723bs/os_dep/ioctl_cfg80211.c
··· 1277 1277 1278 1278 sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_PACKETS); 1279 1279 sinfo->tx_packets = psta->sta_stats.tx_pkts; 1280 - 1280 + sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_FAILED); 1281 1281 } 1282 1282 1283 1283 /* for Ad-Hoc/AP mode */
+1 -1
drivers/staging/rtl8723bs/os_dep/ioctl_linux.c
··· 2289 2289 exit: 2290 2290 kfree(ptmp); 2291 2291 2292 - return 0; 2292 + return ret; 2293 2293 } 2294 2294 2295 2295 static int rtw_wx_write32(struct net_device *dev,
+6 -1
drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
··· 1795 1795 struct vchiq_await_completion32 args32; 1796 1796 struct vchiq_completion_data32 completion32; 1797 1797 unsigned int *msgbufcount32; 1798 + unsigned int msgbufcount_native; 1798 1799 compat_uptr_t msgbuf32; 1799 1800 void *msgbuf; 1800 1801 void **msgbufptr; ··· 1907 1906 sizeof(completion32))) 1908 1907 return -EFAULT; 1909 1908 1910 - args32.msgbufcount--; 1909 + if (get_user(msgbufcount_native, &args->msgbufcount)) 1910 + return -EFAULT; 1911 + 1912 + if (!msgbufcount_native) 1913 + args32.msgbufcount--; 1911 1914 1912 1915 msgbufcount32 = 1913 1916 &((struct vchiq_await_completion32 __user *)arg)->msgbufcount;
+38 -2
drivers/thunderbolt/switch.c
··· 863 863 } 864 864 static DEVICE_ATTR(key, 0600, key_show, key_store); 865 865 866 + static void nvm_authenticate_start(struct tb_switch *sw) 867 + { 868 + struct pci_dev *root_port; 869 + 870 + /* 871 + * During host router NVM upgrade we should not allow root port to 872 + * go into D3cold because some root ports cannot trigger PME 873 + * itself. To be on the safe side keep the root port in D0 during 874 + * the whole upgrade process. 875 + */ 876 + root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev); 877 + if (root_port) 878 + pm_runtime_get_noresume(&root_port->dev); 879 + } 880 + 881 + static void nvm_authenticate_complete(struct tb_switch *sw) 882 + { 883 + struct pci_dev *root_port; 884 + 885 + root_port = pci_find_pcie_root_port(sw->tb->nhi->pdev); 886 + if (root_port) 887 + pm_runtime_put(&root_port->dev); 888 + } 889 + 866 890 static ssize_t nvm_authenticate_show(struct device *dev, 867 891 struct device_attribute *attr, char *buf) 868 892 { ··· 936 912 937 913 sw->nvm->authenticating = true; 938 914 939 - if (!tb_route(sw)) 915 + if (!tb_route(sw)) { 916 + /* 917 + * Keep root port from suspending as long as the 918 + * NVM upgrade process is running. 919 + */ 920 + nvm_authenticate_start(sw); 940 921 ret = nvm_authenticate_host(sw); 941 - else 922 + if (ret) 923 + nvm_authenticate_complete(sw); 924 + } else { 942 925 ret = nvm_authenticate_device(sw); 926 + } 943 927 pm_runtime_mark_last_busy(&sw->dev); 944 928 pm_runtime_put_autosuspend(&sw->dev); 945 929 } ··· 1365 1333 ret = dma_port_flash_update_auth_status(sw->dma_port, &status); 1366 1334 if (ret <= 0) 1367 1335 return ret; 1336 + 1337 + /* Now we can allow root port to suspend again */ 1338 + if (!tb_route(sw)) 1339 + nvm_authenticate_complete(sw); 1368 1340 1369 1341 if (status) { 1370 1342 tb_sw_info(sw, "switch flash authentication failed\n");
+3
drivers/usb/core/quirks.c
··· 209 209 /* Microsoft LifeCam-VX700 v2.0 */ 210 210 { USB_DEVICE(0x045e, 0x0770), .driver_info = USB_QUIRK_RESET_RESUME }, 211 211 212 + /* Cherry Stream G230 2.0 (G85-231) and 3.0 (G85-232) */ 213 + { USB_DEVICE(0x046a, 0x0023), .driver_info = USB_QUIRK_RESET_RESUME }, 214 + 212 215 /* Logitech HD Pro Webcams C920, C920-C, C925e and C930e */ 213 216 { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT }, 214 217 { USB_DEVICE(0x046d, 0x0841), .driver_info = USB_QUIRK_DELAY_INIT },
-5
drivers/usb/dwc3/gadget.c
··· 1470 1470 unsigned transfer_in_flight; 1471 1471 unsigned started; 1472 1472 1473 - if (dep->flags & DWC3_EP_STALL) 1474 - return 0; 1475 - 1476 1473 if (dep->number > 1) 1477 1474 trb = dwc3_ep_prev_trb(dep, dep->trb_enqueue); 1478 1475 else ··· 1491 1494 else 1492 1495 dep->flags |= DWC3_EP_STALL; 1493 1496 } else { 1494 - if (!(dep->flags & DWC3_EP_STALL)) 1495 - return 0; 1496 1497 1497 1498 ret = dwc3_send_clear_stall_ep_cmd(dep); 1498 1499 if (ret)
+6 -5
drivers/usb/gadget/function/u_ether.c
··· 401 401 static void rx_fill(struct eth_dev *dev, gfp_t gfp_flags) 402 402 { 403 403 struct usb_request *req; 404 - struct usb_request *tmp; 405 404 unsigned long flags; 406 405 407 406 /* fill unused rxq slots with some skb */ 408 407 spin_lock_irqsave(&dev->req_lock, flags); 409 - list_for_each_entry_safe(req, tmp, &dev->rx_reqs, list) { 408 + while (!list_empty(&dev->rx_reqs)) { 409 + req = list_first_entry(&dev->rx_reqs, struct usb_request, list); 410 410 list_del_init(&req->list); 411 411 spin_unlock_irqrestore(&dev->req_lock, flags); 412 412 ··· 1125 1125 { 1126 1126 struct eth_dev *dev = link->ioport; 1127 1127 struct usb_request *req; 1128 - struct usb_request *tmp; 1129 1128 1130 1129 WARN_ON(!dev); 1131 1130 if (!dev) ··· 1141 1142 */ 1142 1143 usb_ep_disable(link->in_ep); 1143 1144 spin_lock(&dev->req_lock); 1144 - list_for_each_entry_safe(req, tmp, &dev->tx_reqs, list) { 1145 + while (!list_empty(&dev->tx_reqs)) { 1146 + req = list_first_entry(&dev->tx_reqs, struct usb_request, list); 1145 1147 list_del(&req->list); 1146 1148 1147 1149 spin_unlock(&dev->req_lock); ··· 1154 1154 1155 1155 usb_ep_disable(link->out_ep); 1156 1156 spin_lock(&dev->req_lock); 1157 - list_for_each_entry_safe(req, tmp, &dev->rx_reqs, list) { 1157 + while (!list_empty(&dev->rx_reqs)) { 1158 + req = list_first_entry(&dev->rx_reqs, struct usb_request, list); 1158 1159 list_del(&req->list); 1159 1160 1160 1161 spin_unlock(&dev->req_lock);
+31 -57
drivers/usb/gadget/udc/omap_udc.c
··· 2033 2033 { 2034 2034 return machine_is_omap_innovator() 2035 2035 || machine_is_omap_osk() 2036 + || machine_is_omap_palmte() 2036 2037 || machine_is_sx1() 2037 2038 /* No known omap7xx boards with vbus sense */ 2038 2039 || cpu_is_omap7xx(); ··· 2042 2041 static int omap_udc_start(struct usb_gadget *g, 2043 2042 struct usb_gadget_driver *driver) 2044 2043 { 2045 - int status = -ENODEV; 2044 + int status; 2046 2045 struct omap_ep *ep; 2047 2046 unsigned long flags; 2048 2047 ··· 2080 2079 goto done; 2081 2080 } 2082 2081 } else { 2082 + status = 0; 2083 2083 if (can_pullup(udc)) 2084 2084 pullup_enable(udc); 2085 2085 else ··· 2595 2593 2596 2594 static void omap_udc_release(struct device *dev) 2597 2595 { 2598 - complete(udc->done); 2596 + pullup_disable(udc); 2597 + if (!IS_ERR_OR_NULL(udc->transceiver)) { 2598 + usb_put_phy(udc->transceiver); 2599 + udc->transceiver = NULL; 2600 + } 2601 + omap_writew(0, UDC_SYSCON1); 2602 + remove_proc_file(); 2603 + if (udc->dc_clk) { 2604 + if (udc->clk_requested) 2605 + omap_udc_enable_clock(0); 2606 + clk_put(udc->hhc_clk); 2607 + clk_put(udc->dc_clk); 2608 + } 2609 + if (udc->done) 2610 + complete(udc->done); 2599 2611 kfree(udc); 2600 - udc = NULL; 2601 2612 } 2602 2613 2603 2614 static int ··· 2642 2627 udc->gadget.speed = USB_SPEED_UNKNOWN; 2643 2628 udc->gadget.max_speed = USB_SPEED_FULL; 2644 2629 udc->gadget.name = driver_name; 2630 + udc->gadget.quirk_ep_out_aligned_size = 1; 2645 2631 udc->transceiver = xceiv; 2646 2632 2647 2633 /* ep0 is special; put it right after the SETUP buffer */ ··· 2883 2867 udc->clr_halt = UDC_RESET_EP; 2884 2868 2885 2869 /* USB general purpose IRQ: ep0, state changes, dma, etc */ 2886 - status = request_irq(pdev->resource[1].start, omap_udc_irq, 2887 - 0, driver_name, udc); 2870 + status = devm_request_irq(&pdev->dev, pdev->resource[1].start, 2871 + omap_udc_irq, 0, driver_name, udc); 2888 2872 if (status != 0) { 2889 2873 ERR("can't get irq %d, err %d\n", 2890 2874 (int) pdev->resource[1].start, status); ··· 2892 2876 } 2893 2877 2894 2878 /* USB "non-iso" IRQ (PIO for all but ep0) */ 2895 - status = request_irq(pdev->resource[2].start, omap_udc_pio_irq, 2896 - 0, "omap_udc pio", udc); 2879 + status = devm_request_irq(&pdev->dev, pdev->resource[2].start, 2880 + omap_udc_pio_irq, 0, "omap_udc pio", udc); 2897 2881 if (status != 0) { 2898 2882 ERR("can't get irq %d, err %d\n", 2899 2883 (int) pdev->resource[2].start, status); 2900 - goto cleanup2; 2884 + goto cleanup1; 2901 2885 } 2902 2886 #ifdef USE_ISO 2903 - status = request_irq(pdev->resource[3].start, omap_udc_iso_irq, 2904 - 0, "omap_udc iso", udc); 2887 + status = devm_request_irq(&pdev->dev, pdev->resource[3].start, 2888 + omap_udc_iso_irq, 0, "omap_udc iso", udc); 2905 2889 if (status != 0) { 2906 2890 ERR("can't get irq %d, err %d\n", 2907 2891 (int) pdev->resource[3].start, status); 2908 - goto cleanup3; 2892 + goto cleanup1; 2909 2893 } 2910 2894 #endif 2911 2895 if (cpu_is_omap16xx() || cpu_is_omap7xx()) { ··· 2916 2900 } 2917 2901 2918 2902 create_proc_file(); 2919 - status = usb_add_gadget_udc_release(&pdev->dev, &udc->gadget, 2920 - omap_udc_release); 2921 - if (status) 2922 - goto cleanup4; 2923 - 2924 - return 0; 2925 - 2926 - cleanup4: 2927 - remove_proc_file(); 2928 - 2929 - #ifdef USE_ISO 2930 - cleanup3: 2931 - free_irq(pdev->resource[2].start, udc); 2932 - #endif 2933 - 2934 - cleanup2: 2935 - free_irq(pdev->resource[1].start, udc); 2903 + return usb_add_gadget_udc_release(&pdev->dev, &udc->gadget, 2904 + omap_udc_release); 2936 2905 2937 2906 cleanup1: 2938 2907 kfree(udc); ··· 2944 2943 { 2945 2944 DECLARE_COMPLETION_ONSTACK(done); 2946 2945 2947 - if (!udc) 2948 - return -ENODEV; 2949 - 2950 - usb_del_gadget_udc(&udc->gadget); 2951 - if (udc->driver) 2952 - return -EBUSY; 2953 - 2954 2946 udc->done = &done; 2955 2947 2956 - pullup_disable(udc); 2957 - if (!IS_ERR_OR_NULL(udc->transceiver)) { 2958 - usb_put_phy(udc->transceiver); 2959 - udc->transceiver = NULL; 2960 - } 2961 - omap_writew(0, UDC_SYSCON1); 2948 + usb_del_gadget_udc(&udc->gadget); 2962 2949 2963 - remove_proc_file(); 2964 - 2965 - #ifdef USE_ISO 2966 - free_irq(pdev->resource[3].start, udc); 2967 - #endif 2968 - free_irq(pdev->resource[2].start, udc); 2969 - free_irq(pdev->resource[1].start, udc); 2970 - 2971 - if (udc->dc_clk) { 2972 - if (udc->clk_requested) 2973 - omap_udc_enable_clock(0); 2974 - clk_put(udc->hhc_clk); 2975 - clk_put(udc->dc_clk); 2976 - } 2950 + wait_for_completion(&done); 2977 2951 2978 2952 release_mem_region(pdev->resource[0].start, 2979 2953 pdev->resource[0].end - pdev->resource[0].start + 1); 2980 - 2981 - wait_for_completion(&done); 2982 2954 2983 2955 return 0; 2984 2956 }
+10
drivers/usb/storage/unusual_realtek.h
··· 27 27 "USB Card Reader", 28 28 USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0), 29 29 30 + UNUSUAL_DEV(0x0bda, 0x0177, 0x0000, 0x9999, 31 + "Realtek", 32 + "USB Card Reader", 33 + USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0), 34 + 35 + UNUSUAL_DEV(0x0bda, 0x0184, 0x0000, 0x9999, 36 + "Realtek", 37 + "USB Card Reader", 38 + USB_SC_DEVICE, USB_PR_DEVICE, init_realtek_cr, 0), 39 + 30 40 #endif /* defined(CONFIG_USB_STORAGE_REALTEK) || ... */
+9 -56
drivers/xen/balloon.c
··· 251 251 kfree(resource); 252 252 } 253 253 254 - /* 255 - * Host memory not allocated to dom0. We can use this range for hotplug-based 256 - * ballooning. 257 - * 258 - * It's a type-less resource. Setting IORESOURCE_MEM will make resource 259 - * management algorithms (arch_remove_reservations()) look into guest e820, 260 - * which we don't want. 261 - */ 262 - static struct resource hostmem_resource = { 263 - .name = "Host RAM", 264 - }; 265 - 266 - void __attribute__((weak)) __init arch_xen_balloon_init(struct resource *res) 267 - {} 268 - 269 254 static struct resource *additional_memory_resource(phys_addr_t size) 270 255 { 271 - struct resource *res, *res_hostmem; 272 - int ret = -ENOMEM; 256 + struct resource *res; 257 + int ret; 273 258 274 259 res = kzalloc(sizeof(*res), GFP_KERNEL); 275 260 if (!res) ··· 263 278 res->name = "System RAM"; 264 279 res->flags = IORESOURCE_SYSTEM_RAM | IORESOURCE_BUSY; 265 280 266 - res_hostmem = kzalloc(sizeof(*res), GFP_KERNEL); 267 - if (res_hostmem) { 268 - /* Try to grab a range from hostmem */ 269 - res_hostmem->name = "Host memory"; 270 - ret = allocate_resource(&hostmem_resource, res_hostmem, 271 - size, 0, -1, 272 - PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL); 273 - } 274 - 275 - if (!ret) { 276 - /* 277 - * Insert this resource into iomem. Because hostmem_resource 278 - * tracks portion of guest e820 marked as UNUSABLE noone else 279 - * should try to use it. 280 - */ 281 - res->start = res_hostmem->start; 282 - res->end = res_hostmem->end; 283 - ret = insert_resource(&iomem_resource, res); 284 - if (ret < 0) { 285 - pr_err("Can't insert iomem_resource [%llx - %llx]\n", 286 - res->start, res->end); 287 - release_memory_resource(res_hostmem); 288 - res_hostmem = NULL; 289 - res->start = res->end = 0; 290 - } 291 - } 292 - 293 - if (ret) { 294 - ret = allocate_resource(&iomem_resource, res, 295 - size, 0, -1, 296 - PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL); 297 - if (ret < 0) { 298 - pr_err("Cannot allocate new System RAM resource\n"); 299 - kfree(res); 300 - return NULL; 301 - } 281 + ret = allocate_resource(&iomem_resource, res, 282 + size, 0, -1, 283 + PAGES_PER_SECTION * PAGE_SIZE, NULL, NULL); 284 + if (ret < 0) { 285 + pr_err("Cannot allocate new System RAM resource\n"); 286 + kfree(res); 287 + return NULL; 288 + } 302 288 } 303 289 304 290 #ifdef CONFIG_SPARSEMEM ··· 281 325 pr_err("New System RAM resource outside addressable RAM (%lu > %lu)\n", 282 326 pfn, limit); 283 327 release_memory_resource(res); 284 - release_memory_resource(res_hostmem); 285 328 return NULL; 286 329 } 287 330 } ··· 705 750 set_online_page_callback(&xen_online_page); 706 751 register_memory_notifier(&xen_memory_nb); 707 752 register_sysctl_table(xen_root); 708 - 709 - arch_xen_balloon_init(&hostmem_resource); 710 753 #endif 711 754 712 755 #ifdef CONFIG_XEN_PV
+2 -2
drivers/xen/pvcalls-front.c
··· 385 385 out_error: 386 386 if (*evtchn >= 0) 387 387 xenbus_free_evtchn(pvcalls_front_dev, *evtchn); 388 - kfree(map->active.data.in); 389 - kfree(map->active.ring); 388 + free_pages((unsigned long)map->active.data.in, PVCALLS_RING_ORDER); 389 + free_page((unsigned long)map->active.ring); 390 390 return ret; 391 391 } 392 392
+1
drivers/xen/xlate_mmu.c
··· 36 36 #include <asm/xen/hypervisor.h> 37 37 38 38 #include <xen/xen.h> 39 + #include <xen/xen-ops.h> 39 40 #include <xen/page.h> 40 41 #include <xen/interface/xen.h> 41 42 #include <xen/interface/memory.h>
+1 -3
fs/afs/dir.c
··· 1075 1075 if (fc->ac.error < 0) 1076 1076 return; 1077 1077 1078 - d_drop(new_dentry); 1079 - 1080 1078 inode = afs_iget(fc->vnode->vfs_inode.i_sb, fc->key, 1081 1079 newfid, newstatus, newcb, fc->cbi); 1082 1080 if (IS_ERR(inode)) { ··· 1088 1090 vnode = AFS_FS_I(inode); 1089 1091 set_bit(AFS_VNODE_NEW_CONTENT, &vnode->flags); 1090 1092 afs_vnode_commit_status(fc, vnode, 0); 1091 - d_add(new_dentry, inode); 1093 + d_instantiate(new_dentry, inode); 1092 1094 } 1093 1095 1094 1096 /*
+24 -15
fs/afs/fs_probe.c
··· 61 61 afs_io_error(call, afs_io_error_fs_probe_fail); 62 62 goto out; 63 63 case -ECONNRESET: /* Responded, but call expired. */ 64 + case -ERFKILL: 65 + case -EADDRNOTAVAIL: 64 66 case -ENETUNREACH: 65 67 case -EHOSTUNREACH: 68 + case -EHOSTDOWN: 66 69 case -ECONNREFUSED: 67 70 case -ETIMEDOUT: 68 71 case -ETIME: ··· 135 132 static int afs_do_probe_fileserver(struct afs_net *net, 136 133 struct afs_server *server, 137 134 struct key *key, 138 - unsigned int server_index) 135 + unsigned int server_index, 136 + struct afs_error *_e) 139 137 { 140 138 struct afs_addr_cursor ac = { 141 139 .index = 0, 142 140 }; 143 - int ret; 141 + bool in_progress = false; 142 + int err; 144 143 145 144 _enter("%pU", &server->uuid); 146 145 ··· 156 151 server->probe.rtt = UINT_MAX; 157 152 158 153 for (ac.index = 0; ac.index < ac.alist->nr_addrs; ac.index++) { 159 - ret = afs_fs_get_capabilities(net, server, &ac, key, server_index, 154 + err = afs_fs_get_capabilities(net, server, &ac, key, server_index, 160 155 true); 161 - if (ret != -EINPROGRESS) { 162 - afs_fs_probe_done(server); 163 - return ret; 164 - } 156 + if (err == -EINPROGRESS) 157 + in_progress = true; 158 + else 159 + afs_prioritise_error(_e, err, ac.abort_code); 165 160 } 166 161 167 - return 0; 162 + if (!in_progress) 163 + afs_fs_probe_done(server); 164 + return in_progress; 168 165 } 169 166 170 167 /* ··· 176 169 struct afs_server_list *list) 177 170 { 178 171 struct afs_server *server; 179 - int i, ret; 172 + struct afs_error e; 173 + bool in_progress = false; 174 + int i; 180 175 176 + e.error = 0; 177 + e.responded = false; 181 178 for (i = 0; i < list->nr_servers; i++) { 182 179 server = list->servers[i].server; 183 180 if (test_bit(AFS_SERVER_FL_PROBED, &server->flags)) 184 181 continue; 185 182 186 - if (!test_and_set_bit_lock(AFS_SERVER_FL_PROBING, &server->flags)) { 187 - ret = afs_do_probe_fileserver(net, server, key, i); 188 - if (ret) 189 - return ret; 190 - } 183 + if (!test_and_set_bit_lock(AFS_SERVER_FL_PROBING, &server->flags) && 184 + afs_do_probe_fileserver(net, server, key, i, &e)) 185 + in_progress = true; 191 186 } 192 187 193 - return 0; 188 + return in_progress ? 0 : e.error; 194 189 } 195 190 196 191 /*
+12 -6
fs/afs/inode.c
··· 382 382 int afs_validate(struct afs_vnode *vnode, struct key *key) 383 383 { 384 384 time64_t now = ktime_get_real_seconds(); 385 - bool valid = false; 385 + bool valid; 386 386 int ret; 387 387 388 388 _enter("{v={%llx:%llu} fl=%lx},%x", ··· 402 402 vnode->cb_v_break = vnode->volume->cb_v_break; 403 403 valid = false; 404 404 } else if (vnode->status.type == AFS_FTYPE_DIR && 405 - test_bit(AFS_VNODE_DIR_VALID, &vnode->flags) && 406 - vnode->cb_expires_at - 10 > now) { 407 - valid = true; 408 - } else if (!test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags) && 409 - vnode->cb_expires_at - 10 > now) { 405 + (!test_bit(AFS_VNODE_DIR_VALID, &vnode->flags) || 406 + vnode->cb_expires_at - 10 <= now)) { 407 + valid = false; 408 + } else if (test_bit(AFS_VNODE_ZAP_DATA, &vnode->flags) || 409 + vnode->cb_expires_at - 10 <= now) { 410 + valid = false; 411 + } else { 410 412 valid = true; 411 413 } 412 414 } else if (test_bit(AFS_VNODE_DELETED, &vnode->flags)) { 413 415 valid = true; 416 + } else { 417 + vnode->cb_s_break = vnode->cb_interest->server->cb_s_break; 418 + vnode->cb_v_break = vnode->volume->cb_v_break; 419 + valid = false; 414 420 } 415 421 416 422 read_sequnlock_excl(&vnode->cb_lock);
+9
fs/afs/internal.h
··· 696 696 }; 697 697 698 698 /* 699 + * Error prioritisation and accumulation. 700 + */ 701 + struct afs_error { 702 + short error; /* Accumulated error */ 703 + bool responded; /* T if server responded */ 704 + }; 705 + 706 + /* 699 707 * Cursor for iterating over a server's address list. 700 708 */ 701 709 struct afs_addr_cursor { ··· 1023 1015 * misc.c 1024 1016 */ 1025 1017 extern int afs_abort_to_error(u32); 1018 + extern void afs_prioritise_error(struct afs_error *, int, u32); 1026 1019 1027 1020 /* 1028 1021 * mntpt.c
+52
fs/afs/misc.c
··· 118 118 default: return -EREMOTEIO; 119 119 } 120 120 } 121 + 122 + /* 123 + * Select the error to report from a set of errors. 124 + */ 125 + void afs_prioritise_error(struct afs_error *e, int error, u32 abort_code) 126 + { 127 + switch (error) { 128 + case 0: 129 + return; 130 + default: 131 + if (e->error == -ETIMEDOUT || 132 + e->error == -ETIME) 133 + return; 134 + case -ETIMEDOUT: 135 + case -ETIME: 136 + if (e->error == -ENOMEM || 137 + e->error == -ENONET) 138 + return; 139 + case -ENOMEM: 140 + case -ENONET: 141 + if (e->error == -ERFKILL) 142 + return; 143 + case -ERFKILL: 144 + if (e->error == -EADDRNOTAVAIL) 145 + return; 146 + case -EADDRNOTAVAIL: 147 + if (e->error == -ENETUNREACH) 148 + return; 149 + case -ENETUNREACH: 150 + if (e->error == -EHOSTUNREACH) 151 + return; 152 + case -EHOSTUNREACH: 153 + if (e->error == -EHOSTDOWN) 154 + return; 155 + case -EHOSTDOWN: 156 + if (e->error == -ECONNREFUSED) 157 + return; 158 + case -ECONNREFUSED: 159 + if (e->error == -ECONNRESET) 160 + return; 161 + case -ECONNRESET: /* Responded, but call expired. */ 162 + if (e->responded) 163 + return; 164 + e->error = error; 165 + return; 166 + 167 + case -ECONNABORTED: 168 + e->responded = true; 169 + e->error = afs_abort_to_error(abort_code); 170 + return; 171 + } 172 + }
+13 -40
fs/afs/rotate.c
··· 136 136 struct afs_addr_list *alist; 137 137 struct afs_server *server; 138 138 struct afs_vnode *vnode = fc->vnode; 139 - u32 rtt, abort_code; 139 + struct afs_error e; 140 + u32 rtt; 140 141 int error = fc->ac.error, i; 141 142 142 143 _enter("%lx[%d],%lx[%d],%d,%d", ··· 307 306 if (fc->error != -EDESTADDRREQ) 308 307 goto iterate_address; 309 308 /* Fall through */ 309 + case -ERFKILL: 310 + case -EADDRNOTAVAIL: 310 311 case -ENETUNREACH: 311 312 case -EHOSTUNREACH: 313 + case -EHOSTDOWN: 312 314 case -ECONNREFUSED: 313 315 _debug("no conn"); 314 316 fc->error = error; ··· 450 446 if (fc->flags & AFS_FS_CURSOR_VBUSY) 451 447 goto restart_from_beginning; 452 448 453 - abort_code = 0; 454 - error = -EDESTADDRREQ; 449 + e.error = -EDESTADDRREQ; 450 + e.responded = false; 455 451 for (i = 0; i < fc->server_list->nr_servers; i++) { 456 452 struct afs_server *s = fc->server_list->servers[i].server; 457 - int probe_error = READ_ONCE(s->probe.error); 458 453 459 - switch (probe_error) { 460 - case 0: 461 - continue; 462 - default: 463 - if (error == -ETIMEDOUT || 464 - error == -ETIME) 465 - continue; 466 - case -ETIMEDOUT: 467 - case -ETIME: 468 - if (error == -ENOMEM || 469 - error == -ENONET) 470 - continue; 471 - case -ENOMEM: 472 - case -ENONET: 473 - if (error == -ENETUNREACH) 474 - continue; 475 - case -ENETUNREACH: 476 - if (error == -EHOSTUNREACH) 477 - continue; 478 - case -EHOSTUNREACH: 479 - if (error == -ECONNREFUSED) 480 - continue; 481 - case -ECONNREFUSED: 482 - if (error == -ECONNRESET) 483 - continue; 484 - case -ECONNRESET: /* Responded, but call expired. */ 485 - if (error == -ECONNABORTED) 486 - continue; 487 - case -ECONNABORTED: 488 - abort_code = s->probe.abort_code; 489 - error = probe_error; 490 - continue; 491 - } 454 + afs_prioritise_error(&e, READ_ONCE(s->probe.error), 455 + s->probe.abort_code); 492 456 } 493 - 494 - if (error == -ECONNABORTED) 495 - error = afs_abort_to_error(abort_code); 496 457 497 458 failed_set_error: 498 459 fc->error = error; ··· 522 553 _leave(" = f [abort]"); 523 554 return false; 524 555 556 + case -ERFKILL: 557 + case -EADDRNOTAVAIL: 525 558 case -ENETUNREACH: 526 559 case -EHOSTUNREACH: 560 + case -EHOSTDOWN: 527 561 case -ECONNREFUSED: 528 562 case -ETIMEDOUT: 529 563 case -ETIME: ··· 605 633 struct afs_net *net = afs_v2net(fc->vnode); 606 634 607 635 if (fc->error == -EDESTADDRREQ || 636 + fc->error == -EADDRNOTAVAIL || 608 637 fc->error == -ENETUNREACH || 609 638 fc->error == -EHOSTUNREACH) 610 639 afs_dump_edestaddrreq(fc);
+27 -18
fs/afs/vl_probe.c
··· 61 61 afs_io_error(call, afs_io_error_vl_probe_fail); 62 62 goto out; 63 63 case -ECONNRESET: /* Responded, but call expired. */ 64 + case -ERFKILL: 65 + case -EADDRNOTAVAIL: 64 66 case -ENETUNREACH: 65 67 case -EHOSTUNREACH: 68 + case -EHOSTDOWN: 66 69 case -ECONNREFUSED: 67 70 case -ETIMEDOUT: 68 71 case -ETIME: ··· 132 129 * Probe all of a vlserver's addresses to find out the best route and to 133 130 * query its capabilities. 134 131 */ 135 - static int afs_do_probe_vlserver(struct afs_net *net, 136 - struct afs_vlserver *server, 137 - struct key *key, 138 - unsigned int server_index) 132 + static bool afs_do_probe_vlserver(struct afs_net *net, 133 + struct afs_vlserver *server, 134 + struct key *key, 135 + unsigned int server_index, 136 + struct afs_error *_e) 139 137 { 140 138 struct afs_addr_cursor ac = { 141 139 .index = 0, 142 140 }; 143 - int ret; 141 + bool in_progress = false; 142 + int err; 144 143 145 144 _enter("%s", server->name); 146 145 ··· 156 151 server->probe.rtt = UINT_MAX; 157 152 158 153 for (ac.index = 0; ac.index < ac.alist->nr_addrs; ac.index++) { 159 - ret = afs_vl_get_capabilities(net, &ac, key, server, 154 + err = afs_vl_get_capabilities(net, &ac, key, server, 160 155 server_index, true); 161 - if (ret != -EINPROGRESS) { 162 - afs_vl_probe_done(server); 163 - return ret; 164 - } 156 + if (err == -EINPROGRESS) 157 + in_progress = true; 158 + else 159 + afs_prioritise_error(_e, err, ac.abort_code); 165 160 } 166 161 167 - return 0; 162 + if (!in_progress) 163 + afs_vl_probe_done(server); 164 + return in_progress; 168 165 } 169 166 170 167 /* ··· 176 169 struct afs_vlserver_list *vllist) 177 170 { 178 171 struct afs_vlserver *server; 179 - int i, ret; 172 + struct afs_error e; 173 + bool in_progress = false; 174 + int i; 180 175 176 + e.error = 0; 177 + e.responded = false; 181 178 for (i = 0; i < vllist->nr_servers; i++) { 182 179 server = vllist->servers[i].server; 183 180 if (test_bit(AFS_VLSERVER_FL_PROBED, &server->flags)) 184 181 continue; 185 182 186 - if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags)) { 187 - ret = afs_do_probe_vlserver(net, server, key, i); 188 - if (ret) 189 - return ret; 190 - } 183 + if (!test_and_set_bit_lock(AFS_VLSERVER_FL_PROBING, &server->flags) && 184 + afs_do_probe_vlserver(net, server, key, i, &e)) 185 + in_progress = true; 191 186 } 192 187 193 - return 0; 188 + return in_progress ? 0 : e.error; 194 189 } 195 190 196 191 /*
+10 -40
fs/afs/vl_rotate.c
··· 71 71 { 72 72 struct afs_addr_list *alist; 73 73 struct afs_vlserver *vlserver; 74 + struct afs_error e; 74 75 u32 rtt; 75 - int error = vc->ac.error, abort_code, i; 76 + int error = vc->ac.error, i; 76 77 77 78 _enter("%lx[%d],%lx[%d],%d,%d", 78 79 vc->untried, vc->index, ··· 120 119 goto failed; 121 120 } 122 121 122 + case -ERFKILL: 123 + case -EADDRNOTAVAIL: 123 124 case -ENETUNREACH: 124 125 case -EHOSTUNREACH: 126 + case -EHOSTDOWN: 125 127 case -ECONNREFUSED: 126 128 case -ETIMEDOUT: 127 129 case -ETIME: ··· 239 235 if (vc->flags & AFS_VL_CURSOR_RETRY) 240 236 goto restart_from_beginning; 241 237 242 - abort_code = 0; 243 - error = -EDESTADDRREQ; 238 + e.error = -EDESTADDRREQ; 239 + e.responded = false; 244 240 for (i = 0; i < vc->server_list->nr_servers; i++) { 245 241 struct afs_vlserver *s = vc->server_list->servers[i].server; 246 - int probe_error = READ_ONCE(s->probe.error); 247 242 248 - switch (probe_error) { 249 - case 0: 250 - continue; 251 - default: 252 - if (error == -ETIMEDOUT || 253 - error == -ETIME) 254 - continue; 255 - case -ETIMEDOUT: 256 - case -ETIME: 257 - if (error == -ENOMEM || 258 - error == -ENONET) 259 - continue; 260 - case -ENOMEM: 261 - case -ENONET: 262 - if (error == -ENETUNREACH) 263 - continue; 264 - case -ENETUNREACH: 265 - if (error == -EHOSTUNREACH) 266 - continue; 267 - case -EHOSTUNREACH: 268 - if (error == -ECONNREFUSED) 269 - continue; 270 - case -ECONNREFUSED: 271 - if (error == -ECONNRESET) 272 - continue; 273 - case -ECONNRESET: /* Responded, but call expired. */ 274 - if (error == -ECONNABORTED) 275 - continue; 276 - case -ECONNABORTED: 277 - abort_code = s->probe.abort_code; 278 - error = probe_error; 279 - continue; 280 - } 243 + afs_prioritise_error(&e, READ_ONCE(s->probe.error), 244 + s->probe.abort_code); 281 245 } 282 - 283 - if (error == -ECONNABORTED) 284 - error = afs_abort_to_error(abort_code); 285 246 286 247 failed_set_error: 287 248 vc->error = error; ··· 310 341 struct afs_net *net = vc->cell->net; 311 342 312 343 if (vc->error == -EDESTADDRREQ || 344 + vc->error == -EADDRNOTAVAIL || 313 345 vc->error == -ENETUNREACH || 314 346 vc->error == -EHOSTUNREACH) 315 347 afs_vl_dump_edestaddrreq(vc);
+1
fs/aio.c
··· 1436 1436 ret = ioprio_check_cap(iocb->aio_reqprio); 1437 1437 if (ret) { 1438 1438 pr_debug("aio ioprio check cap error: %d\n", ret); 1439 + fput(req->ki_filp); 1439 1440 return ret; 1440 1441 } 1441 1442
+1 -10
fs/btrfs/disk-io.c
··· 477 477 int mirror_num = 0; 478 478 int failed_mirror = 0; 479 479 480 - clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags); 481 480 io_tree = &BTRFS_I(fs_info->btree_inode)->io_tree; 482 481 while (1) { 482 + clear_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags); 483 483 ret = read_extent_buffer_pages(io_tree, eb, WAIT_COMPLETE, 484 484 mirror_num); 485 485 if (!ret) { ··· 492 492 else 493 493 break; 494 494 } 495 - 496 - /* 497 - * This buffer's crc is fine, but its contents are corrupted, so 498 - * there is no reason to read the other copies, they won't be 499 - * any less wrong. 500 - */ 501 - if (test_bit(EXTENT_BUFFER_CORRUPT, &eb->bflags) || 502 - ret == -EUCLEAN) 503 - break; 504 495 505 496 num_copies = btrfs_num_copies(fs_info, 506 497 eb->start, eb->len);
+24
fs/btrfs/file.c
··· 2089 2089 atomic_inc(&root->log_batch); 2090 2090 2091 2091 /* 2092 + * Before we acquired the inode's lock, someone may have dirtied more 2093 + * pages in the target range. We need to make sure that writeback for 2094 + * any such pages does not start while we are logging the inode, because 2095 + * if it does, any of the following might happen when we are not doing a 2096 + * full inode sync: 2097 + * 2098 + * 1) We log an extent after its writeback finishes but before its 2099 + * checksums are added to the csum tree, leading to -EIO errors 2100 + * when attempting to read the extent after a log replay. 2101 + * 2102 + * 2) We can end up logging an extent before its writeback finishes. 2103 + * Therefore after the log replay we will have a file extent item 2104 + * pointing to an unwritten extent (and no data checksums as well). 2105 + * 2106 + * So trigger writeback for any eventual new dirty pages and then we 2107 + * wait for all ordered extents to complete below. 2108 + */ 2109 + ret = start_ordered_ops(inode, start, end); 2110 + if (ret) { 2111 + inode_unlock(inode); 2112 + goto out; 2113 + } 2114 + 2115 + /* 2092 2116 * We have to do this here to avoid the priority inversion of waiting on 2093 2117 * IO of a lower priority task while holding a transaciton open. 2094 2118 */
+2 -1
fs/btrfs/qgroup.c
··· 2659 2659 int i; 2660 2660 u64 *i_qgroups; 2661 2661 struct btrfs_fs_info *fs_info = trans->fs_info; 2662 - struct btrfs_root *quota_root = fs_info->quota_root; 2662 + struct btrfs_root *quota_root; 2663 2663 struct btrfs_qgroup *srcgroup; 2664 2664 struct btrfs_qgroup *dstgroup; 2665 2665 u32 level_size = 0; ··· 2669 2669 if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) 2670 2670 goto out; 2671 2671 2672 + quota_root = fs_info->quota_root; 2672 2673 if (!quota_root) { 2673 2674 ret = -EINVAL; 2674 2675 goto out;
+1
fs/btrfs/relocation.c
··· 3959 3959 restart: 3960 3960 if (update_backref_cache(trans, &rc->backref_cache)) { 3961 3961 btrfs_end_transaction(trans); 3962 + trans = NULL; 3962 3963 continue; 3963 3964 } 3964 3965
+8 -3
fs/btrfs/send.c
··· 3340 3340 kfree(m); 3341 3341 } 3342 3342 3343 - static void tail_append_pending_moves(struct pending_dir_move *moves, 3343 + static void tail_append_pending_moves(struct send_ctx *sctx, 3344 + struct pending_dir_move *moves, 3344 3345 struct list_head *stack) 3345 3346 { 3346 3347 if (list_empty(&moves->list)) { ··· 3351 3350 list_splice_init(&moves->list, &list); 3352 3351 list_add_tail(&moves->list, stack); 3353 3352 list_splice_tail(&list, stack); 3353 + } 3354 + if (!RB_EMPTY_NODE(&moves->node)) { 3355 + rb_erase(&moves->node, &sctx->pending_dir_moves); 3356 + RB_CLEAR_NODE(&moves->node); 3354 3357 } 3355 3358 } 3356 3359 ··· 3370 3365 return 0; 3371 3366 3372 3367 INIT_LIST_HEAD(&stack); 3373 - tail_append_pending_moves(pm, &stack); 3368 + tail_append_pending_moves(sctx, pm, &stack); 3374 3369 3375 3370 while (!list_empty(&stack)) { 3376 3371 pm = list_first_entry(&stack, struct pending_dir_move, list); ··· 3381 3376 goto out; 3382 3377 pm = get_pending_dir_moves(sctx, parent_ino); 3383 3378 if (pm) 3384 - tail_append_pending_moves(pm, &stack); 3379 + tail_append_pending_moves(sctx, pm, &stack); 3385 3380 } 3386 3381 return 0; 3387 3382
+1
fs/btrfs/super.c
··· 2237 2237 vol = memdup_user((void __user *)arg, sizeof(*vol)); 2238 2238 if (IS_ERR(vol)) 2239 2239 return PTR_ERR(vol); 2240 + vol->name[BTRFS_PATH_NAME_MAX] = '\0'; 2240 2241 2241 2242 switch (cmd) { 2242 2243 case BTRFS_IOC_SCAN_DEV:
+5 -3
fs/cachefiles/namei.c
··· 244 244 245 245 ASSERT(!test_bit(CACHEFILES_OBJECT_ACTIVE, &xobject->flags)); 246 246 247 - cache->cache.ops->put_object(&xobject->fscache, cachefiles_obj_put_wait_retry); 247 + cache->cache.ops->put_object(&xobject->fscache, 248 + (enum fscache_obj_ref_trace)cachefiles_obj_put_wait_retry); 248 249 goto try_again; 249 250 250 251 requeue: 251 - cache->cache.ops->put_object(&xobject->fscache, cachefiles_obj_put_wait_timeo); 252 + cache->cache.ops->put_object(&xobject->fscache, 253 + (enum fscache_obj_ref_trace)cachefiles_obj_put_wait_timeo); 252 254 _leave(" = -ETIMEDOUT"); 253 255 return -ETIMEDOUT; 254 256 } ··· 338 336 try_again: 339 337 /* first step is to make up a grave dentry in the graveyard */ 340 338 sprintf(nbuffer, "%08x%08x", 341 - (uint32_t) get_seconds(), 339 + (uint32_t) ktime_get_real_seconds(), 342 340 (uint32_t) atomic_inc_return(&cache->gravecounter)); 343 341 344 342 /* do the multiway lock magic */
+6 -3
fs/cachefiles/rdwr.c
··· 535 535 netpage->index, cachefiles_gfp); 536 536 if (ret < 0) { 537 537 if (ret == -EEXIST) { 538 + put_page(backpage); 539 + backpage = NULL; 538 540 put_page(netpage); 541 + netpage = NULL; 539 542 fscache_retrieval_complete(op, 1); 540 543 continue; 541 544 } ··· 611 608 netpage->index, cachefiles_gfp); 612 609 if (ret < 0) { 613 610 if (ret == -EEXIST) { 611 + put_page(backpage); 612 + backpage = NULL; 614 613 put_page(netpage); 614 + netpage = NULL; 615 615 fscache_retrieval_complete(op, 1); 616 616 continue; 617 617 } ··· 968 962 __releases(&object->fscache.cookie->lock) 969 963 { 970 964 struct cachefiles_object *object; 971 - struct cachefiles_cache *cache; 972 965 973 966 object = container_of(_object, struct cachefiles_object, fscache); 974 - cache = container_of(object->fscache.cache, 975 - struct cachefiles_cache, cache); 976 967 977 968 _enter("%p,{%lu}", object, page->index); 978 969
+2 -1
fs/cachefiles/xattr.c
··· 135 135 struct dentry *dentry = object->dentry; 136 136 int ret; 137 137 138 - ASSERT(dentry); 138 + if (!dentry) 139 + return -ESTALE; 139 140 140 141 _enter("%p,#%d", object, auxdata->len); 141 142
+2 -2
fs/direct-io.c
··· 325 325 */ 326 326 dio->iocb->ki_pos += transferred; 327 327 328 - if (dio->op == REQ_OP_WRITE) 329 - ret = generic_write_sync(dio->iocb, transferred); 328 + if (ret > 0 && dio->op == REQ_OP_WRITE) 329 + ret = generic_write_sync(dio->iocb, ret); 330 330 dio->iocb->ki_complete(dio->iocb, ret, 0); 331 331 } 332 332
+2 -1
fs/exportfs/expfs.c
··· 77 77 struct dentry *parent = dget_parent(dentry); 78 78 79 79 dput(dentry); 80 - if (IS_ROOT(dentry)) { 80 + if (dentry == parent) { 81 81 dput(parent); 82 82 return false; 83 83 } ··· 147 147 tmp = lookup_one_len_unlocked(nbuf, parent, strlen(nbuf)); 148 148 if (IS_ERR(tmp)) { 149 149 dprintk("%s: lookup failed: %d\n", __func__, PTR_ERR(tmp)); 150 + err = PTR_ERR(tmp); 150 151 goto out_err; 151 152 } 152 153 if (tmp != dentry) {
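The `err = PTR_ERR(tmp);` addition in the second hunk fixes a classic error-path bug: the code jumped to `out_err` without storing the failure code, so whatever stale value `err` happened to hold was returned. A minimal userspace sketch of the old and fixed shapes (names hypothetical):

```c
#include <errno.h>
#include <stddef.h>

/* Old shape: err is never assigned before the goto, so the caller sees
 * the stale initial value (0, i.e. "success") even though lookup failed. */
static int lookup_old(int lookup_fails)
{
	int err = 0;
	void *tmp = lookup_fails ? NULL : (void *)1;

	if (!tmp)
		goto out_err;	/* BUG: err is still 0 here */
	return 1;
out_err:
	return err;
}

/* Fixed shape, mirroring the expfs.c change: capture the error code
 * before taking the error path. */
static int lookup_new(int lookup_fails)
{
	int err = 0;
	void *tmp = lookup_fails ? NULL : (void *)1;

	if (!tmp) {
		err = -ENOENT;	/* stand-in for PTR_ERR(tmp) */
		goto out_err;
	}
	return 1;
out_err:
	return err;
}
```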
+1
fs/ext2/super.c
··· 892 892 if (sb->s_magic != EXT2_SUPER_MAGIC) 893 893 goto cantfind_ext2; 894 894 895 + opts.s_mount_opt = 0; 895 896 /* Set defaults before we parse the mount options */ 896 897 def_mount_opts = le32_to_cpu(es->s_default_mount_opts); 897 898 if (def_mount_opts & EXT2_DEFM_DEBUG)
+1 -1
fs/ext2/xattr.c
··· 612 612 } 613 613 614 614 cleanup: 615 - brelse(bh); 616 615 if (!(bh && header == HDR(bh))) 617 616 kfree(header); 617 + brelse(bh); 618 618 up_write(&EXT2_I(inode)->xattr_sem); 619 619 620 620 return error;
+3
fs/fscache/object.c
··· 730 730 731 731 if (awaken) 732 732 wake_up_bit(&cookie->flags, FSCACHE_COOKIE_INVALIDATING); 733 + if (test_and_clear_bit(FSCACHE_COOKIE_LOOKING_UP, &cookie->flags)) 734 + wake_up_bit(&cookie->flags, FSCACHE_COOKIE_LOOKING_UP); 735 + 733 736 734 737 /* Prevent a race with our last child, which has to signal EV_CLEARED 735 738 * before dropping our spinlock.
+2 -1
fs/hfs/btree.c
··· 338 338 339 339 nidx -= len * 8; 340 340 i = node->next; 341 - hfs_bnode_put(node); 342 341 if (!i) { 343 342 /* panic */; 344 343 pr_crit("unable to free bnode %u. bmap not found!\n", 345 344 node->this); 345 + hfs_bnode_put(node); 346 346 return; 347 347 } 348 + hfs_bnode_put(node); 348 349 node = hfs_bnode_find(tree, i); 349 350 if (IS_ERR(node)) 350 351 return;
+2 -1
fs/hfsplus/btree.c
··· 466 466 467 467 nidx -= len * 8; 468 468 i = node->next; 469 - hfs_bnode_put(node); 470 469 if (!i) { 471 470 /* panic */; 472 471 pr_crit("unable to free bnode %u. " 473 472 "bmap not found!\n", 474 473 node->this); 474 + hfs_bnode_put(node); 475 475 return; 476 476 } 477 + hfs_bnode_put(node); 477 478 node = hfs_bnode_find(tree, i); 478 479 if (IS_ERR(node)) 479 480 return;
+1 -1
fs/ocfs2/export.c
··· 125 125 126 126 check_gen: 127 127 if (handle->ih_generation != inode->i_generation) { 128 - iput(inode); 129 128 trace_ocfs2_get_dentry_generation((unsigned long long)blkno, 130 129 handle->ih_generation, 131 130 inode->i_generation); 131 + iput(inode); 132 132 result = ERR_PTR(-ESTALE); 133 133 goto bail; 134 134 }
+26 -21
fs/ocfs2/move_extents.c
··· 157 157 } 158 158 159 159 /* 160 - * lock allocators, and reserving appropriate number of bits for 161 - * meta blocks and data clusters. 162 - * 163 - * in some cases, we don't need to reserve clusters, just let data_ac 164 - * be NULL. 160 + * lock allocator, and reserve appropriate number of bits for 161 + * meta blocks. 165 162 */ 166 - static int ocfs2_lock_allocators_move_extents(struct inode *inode, 163 + static int ocfs2_lock_meta_allocator_move_extents(struct inode *inode, 167 164 struct ocfs2_extent_tree *et, 168 165 u32 clusters_to_move, 169 166 u32 extents_to_split, 170 167 struct ocfs2_alloc_context **meta_ac, 171 - struct ocfs2_alloc_context **data_ac, 172 168 int extra_blocks, 173 169 int *credits) 174 170 { ··· 189 193 goto out; 190 194 } 191 195 192 - if (data_ac) { 193 - ret = ocfs2_reserve_clusters(osb, clusters_to_move, data_ac); 194 - if (ret) { 195 - mlog_errno(ret); 196 - goto out; 197 - } 198 - } 199 196 200 197 *credits += ocfs2_calc_extend_credits(osb->sb, et->et_root_el); 201 198 ··· 248 259 } 249 260 } 250 261 251 - ret = ocfs2_lock_allocators_move_extents(inode, &context->et, *len, 1, 252 - &context->meta_ac, 253 - &context->data_ac, 254 - extra_blocks, &credits); 262 + ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et, 263 + *len, 1, 264 + &context->meta_ac, 265 + extra_blocks, &credits); 255 266 if (ret) { 256 267 mlog_errno(ret); 257 268 goto out; ··· 272 283 mlog_errno(ret); 273 284 goto out_unlock_mutex; 274 285 } 286 + } 287 + 288 + /* 289 + * Make sure ocfs2_reserve_cluster is called after 290 + * __ocfs2_flush_truncate_log, otherwise, dead lock may happen. 291 + * 292 + * If ocfs2_reserve_cluster is called 293 + * before __ocfs2_flush_truncate_log, dead lock on global bitmap 294 + * may happen. 295 + * 296 + */ 297 + ret = ocfs2_reserve_clusters(osb, *len, &context->data_ac); 298 + if (ret) { 299 + mlog_errno(ret); 300 + goto out_unlock_mutex; 275 301 } 276 302 277 303 handle = ocfs2_start_trans(osb, credits); ··· 621 617 } 622 618 } 623 619 624 - ret = ocfs2_lock_allocators_move_extents(inode, &context->et, len, 1, 625 - &context->meta_ac, 626 - NULL, extra_blocks, &credits); 620 + ret = ocfs2_lock_meta_allocator_move_extents(inode, &context->et, 621 + len, 1, 622 + &context->meta_ac, 623 + extra_blocks, &credits); 627 624 if (ret) { 628 625 mlog_errno(ret); 629 626 goto out;
+6 -9
fs/pstore/ram.c
··· 816 816 817 817 cxt->pstore.data = cxt; 818 818 /* 819 - * Console can handle any buffer size, so prefer LOG_LINE_MAX. If we 820 - * have to handle dumps, we must have at least record_size buffer. And 821 - * for ftrace, bufsize is irrelevant (if bufsize is 0, buf will be 822 - * ZERO_SIZE_PTR). 819 + * Since bufsize is only used for dmesg crash dumps, it 820 + * must match the size of the dprz record (after PRZ header 821 + * and ECC bytes have been accounted for). 823 822 */ 824 - if (cxt->console_size) 825 - cxt->pstore.bufsize = 1024; /* LOG_LINE_MAX */ 826 - cxt->pstore.bufsize = max(cxt->record_size, cxt->pstore.bufsize); 827 - cxt->pstore.buf = kmalloc(cxt->pstore.bufsize, GFP_KERNEL); 823 + cxt->pstore.bufsize = cxt->dprzs[0]->buffer_size; 824 + cxt->pstore.buf = kzalloc(cxt->pstore.bufsize, GFP_KERNEL); 828 825 if (!cxt->pstore.buf) { 829 - pr_err("cannot allocate pstore buffer\n"); 826 + pr_err("cannot allocate pstore crash dump buffer\n"); 830 827 err = -ENOMEM; 831 828 goto fail_clear; 832 829 }
+1 -1
fs/sysv/inode.c
··· 275 275 } 276 276 } 277 277 brelse(bh); 278 - return 0; 278 + return err; 279 279 } 280 280 281 281 int sysv_write_inode(struct inode *inode, struct writeback_control *wbc)
+10 -6
fs/udf/super.c
··· 827 827 828 828 829 829 ret = udf_dstrCS0toChar(sb, outstr, 31, pvoldesc->volIdent, 32); 830 - if (ret < 0) 831 - goto out_bh; 832 - 833 - strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret); 830 + if (ret < 0) { 831 + strcpy(UDF_SB(sb)->s_volume_ident, "InvalidName"); 832 + pr_warn("incorrect volume identification, setting to " 833 + "'InvalidName'\n"); 834 + } else { 835 + strncpy(UDF_SB(sb)->s_volume_ident, outstr, ret); 836 + } 834 837 udf_debug("volIdent[] = '%s'\n", UDF_SB(sb)->s_volume_ident); 835 838 836 839 ret = udf_dstrCS0toChar(sb, outstr, 127, pvoldesc->volSetIdent, 128); 837 - if (ret < 0) 840 + if (ret < 0) { 841 + ret = 0; 838 842 goto out_bh; 839 - 843 + } 840 844 outstr[ret] = 0; 841 845 udf_debug("volSetIdent[] = '%s'\n", outstr); 842 846
+11 -3
fs/udf/unicode.c
··· 351 351 return u_len; 352 352 } 353 353 354 + /* 355 + * Convert CS0 dstring to output charset. Warning: This function may truncate 356 + * input string if it is too long as it is used for informational strings only 357 + * and it is better to truncate the string than to refuse mounting a media. 358 + */ 354 359 int udf_dstrCS0toChar(struct super_block *sb, uint8_t *utf_o, int o_len, 355 360 const uint8_t *ocu_i, int i_len) 356 361 { ··· 364 359 if (i_len > 0) { 365 360 s_len = ocu_i[i_len - 1]; 366 361 if (s_len >= i_len) { 367 - pr_err("incorrect dstring lengths (%d/%d)\n", 368 - s_len, i_len); 369 - return -EINVAL; 362 + pr_warn("incorrect dstring lengths (%d/%d)," 363 + " truncating\n", s_len, i_len); 364 + s_len = i_len - 1; 365 + /* 2-byte encoding? Need to round properly... */ 366 + if (ocu_i[0] == 16) 367 + s_len -= (s_len - 1) & 1; 370 368 } 371 369 } 372 370
+15
fs/userfaultfd.c
··· 1361 1361 ret = -EINVAL; 1362 1362 if (!vma_can_userfault(cur)) 1363 1363 goto out_unlock; 1364 + 1365 + /* 1366 + * UFFDIO_COPY will fill file holes even without 1367 + * PROT_WRITE. This check enforces that if this is a 1368 + * MAP_SHARED, the process has write permission to the backing 1369 + * file. If VM_MAYWRITE is set it also enforces that on a 1370 + * MAP_SHARED vma: there is no F_WRITE_SEAL and no further 1371 + * F_WRITE_SEAL can be taken until the vma is destroyed. 1372 + */ 1373 + ret = -EPERM; 1374 + if (unlikely(!(cur->vm_flags & VM_MAYWRITE))) 1375 + goto out_unlock; 1376 + 1364 1377 /* 1365 1378 * If this vma contains ending address, and huge pages 1366 1379 * check alignment. ··· 1419 1406 BUG_ON(!vma_can_userfault(vma)); 1420 1407 BUG_ON(vma->vm_userfaultfd_ctx.ctx && 1421 1408 vma->vm_userfaultfd_ctx.ctx != ctx); 1409 + WARN_ON(!(vma->vm_flags & VM_MAYWRITE)); 1422 1410 1423 1411 /* 1424 1412 * Nothing to do: this vma is already registered into this ··· 1566 1552 cond_resched(); 1567 1553 1568 1554 BUG_ON(!vma_can_userfault(vma)); 1555 + WARN_ON(!(vma->vm_flags & VM_MAYWRITE)); 1569 1556 1570 1557 /* 1571 1558 * Nothing to do: this vma is already registered into this
+4
include/linux/filter.h
··· 866 866 867 867 void bpf_jit_free(struct bpf_prog *fp); 868 868 869 + int bpf_jit_get_func_addr(const struct bpf_prog *prog, 870 + const struct bpf_insn *insn, bool extra_pass, 871 + u64 *func_addr, bool *func_addr_fixed); 872 + 869 873 struct bpf_prog *bpf_jit_blind_constants(struct bpf_prog *fp); 870 874 void bpf_jit_prog_release_other(struct bpf_prog *fp, struct bpf_prog *fp_other); 871 875
+1 -2
include/linux/fscache-cache.h
··· 196 196 static inline void fscache_retrieval_complete(struct fscache_retrieval *op, 197 197 int n_pages) 198 198 { 199 - atomic_sub(n_pages, &op->n_pages); 200 - if (atomic_read(&op->n_pages) <= 0) 199 + if (atomic_sub_return_relaxed(n_pages, &op->n_pages) <= 0) 201 200 fscache_op_complete(&op->op, false); 202 201 } 203 202
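The `fscache_retrieval_complete()` change matters because `atomic_sub()` followed by a separate `atomic_read()` is two operations: two CPUs finishing their pages concurrently can both observe the counter at or below zero and both call `fscache_op_complete()`. A single `atomic_sub_return()` makes the decrement and the observed result one atomic step, so exactly one caller sees the counter cross zero. A userspace sketch with C11 atomics, replaying the bad interleaving deterministically (sequential calls stand in for two CPUs):

```c
#include <stdatomic.h>

/* Two "CPUs" each complete one page of a 2-page retrieval. With
 * sub-then-read, both decrements can land before either read, so both
 * callers see 0 and "complete" the op twice. */
static int completions_sub_then_read(void)
{
	atomic_int n_pages;
	int completions = 0;

	atomic_init(&n_pages, 2);
	atomic_fetch_sub(&n_pages, 1);		/* CPU A decrements */
	atomic_fetch_sub(&n_pages, 1);		/* CPU B decrements */
	if (atomic_load(&n_pages) <= 0)		/* CPU A reads 0 */
		completions++;
	if (atomic_load(&n_pages) <= 0)		/* CPU B also reads 0 */
		completions++;
	return completions;
}

/* With a single RMW that yields the new value (the kernel's
 * atomic_sub_return), only the caller whose decrement crosses zero
 * completes the op. fetch_sub returns the old value, so old - 1 is
 * the value sub_return would report. */
static int completions_sub_return(void)
{
	atomic_int n_pages;
	int completions = 0;

	atomic_init(&n_pages, 2);
	if (atomic_fetch_sub(&n_pages, 1) - 1 <= 0)	/* CPU A: new value 1 */
		completions++;
	if (atomic_fetch_sub(&n_pages, 1) - 1 <= 0)	/* CPU B: new value 0 */
		completions++;
	return completions;
}
```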
+2 -2
include/linux/ftrace.h
··· 777 777 extern void return_to_handler(void); 778 778 779 779 extern int 780 - ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth, 781 - unsigned long frame_pointer, unsigned long *retp); 780 + function_graph_enter(unsigned long ret, unsigned long func, 781 + unsigned long frame_pointer, unsigned long *retp); 782 782 783 783 unsigned long ftrace_graph_ret_addr(struct task_struct *task, int *idx, 784 784 unsigned long ret, unsigned long *retp);
+3 -1
include/linux/hid-sensor-hub.h
··· 177 177 * @attr_usage_id: Attribute usage id as per spec 178 178 * @report_id: Report id to look for 179 179 * @flag: Synchronous or asynchronous read 180 + * @is_signed: If true then fields < 32 bits will be sign-extended 180 181 * 181 182 * Issues a synchronous or asynchronous read request for an input attribute. 182 183 * Returns data upto 32 bits. ··· 191 190 int sensor_hub_input_attr_get_raw_value(struct hid_sensor_hub_device *hsdev, 192 191 u32 usage_id, 193 192 u32 attr_usage_id, u32 report_id, 194 - enum sensor_hub_read_flags flag 193 + enum sensor_hub_read_flags flag, 194 + bool is_signed 195 195 ); 196 196 197 197 /**
+8 -4
include/linux/mlx5/mlx5_ifc.h
··· 2473 2473 2474 2474 u8 wq_signature[0x1]; 2475 2475 u8 cont_srq[0x1]; 2476 - u8 dbr_umem_valid[0x1]; 2476 + u8 reserved_at_22[0x1]; 2477 2477 u8 rlky[0x1]; 2478 2478 u8 basic_cyclic_rcv_wqe[0x1]; 2479 2479 u8 log_rq_stride[0x3]; 2480 2480 u8 xrcd[0x18]; 2481 2481 2482 2482 u8 page_offset[0x6]; 2483 - u8 reserved_at_46[0x2]; 2483 + u8 reserved_at_46[0x1]; 2484 + u8 dbr_umem_valid[0x1]; 2484 2485 u8 cqn[0x18]; 2485 2486 2486 2487 u8 reserved_at_60[0x20]; ··· 6690 6689 6691 6690 struct mlx5_ifc_xrc_srqc_bits xrc_srq_context_entry; 6692 6691 6693 - u8 reserved_at_280[0x40]; 6692 + u8 reserved_at_280[0x60]; 6693 + 6694 6694 u8 xrc_srq_umem_valid[0x1]; 6695 - u8 reserved_at_2c1[0x5bf]; 6695 + u8 reserved_at_2e1[0x1f]; 6696 + 6697 + u8 reserved_at_300[0x580]; 6696 6698 6697 6699 u8 pas[0][0x40]; 6698 6700 };
+13
include/linux/netfilter/nf_conntrack_proto_gre.h
··· 21 21 struct nf_conntrack_tuple tuple; 22 22 }; 23 23 24 + enum grep_conntrack { 25 + GRE_CT_UNREPLIED, 26 + GRE_CT_REPLIED, 27 + GRE_CT_MAX 28 + }; 29 + 30 + struct netns_proto_gre { 31 + struct nf_proto_net nf; 32 + rwlock_t keymap_lock; 33 + struct list_head keymap_list; 34 + unsigned int gre_timeouts[GRE_CT_MAX]; 35 + }; 36 + 24 37 /* add new tuple->key_reply pair to keymap */ 25 38 int nf_ct_gre_keymap_add(struct nf_conn *ct, enum ip_conntrack_dir dir, 26 39 struct nf_conntrack_tuple *t);
+2
include/linux/platform_data/gpio-davinci.h
··· 17 17 #define __DAVINCI_GPIO_PLATFORM_H 18 18 19 19 struct davinci_gpio_platform_data { 20 + bool no_auto_base; 21 + u32 base; 20 22 u32 ngpio; 21 23 u32 gpio_unbanked; 22 24 };
+2 -1
include/linux/psi.h
··· 1 1 #ifndef _LINUX_PSI_H 2 2 #define _LINUX_PSI_H 3 3 4 + #include <linux/jump_label.h> 4 5 #include <linux/psi_types.h> 5 6 #include <linux/sched.h> 6 7 ··· 10 9 11 10 #ifdef CONFIG_PSI 12 11 13 - extern bool psi_disabled; 12 + extern struct static_key_false psi_disabled; 14 13 15 14 void psi_init(void); 16 15
+4 -1
include/linux/pstore.h
··· 90 90 * 91 91 * @buf_lock: spinlock to serialize access to @buf 92 92 * @buf: preallocated crash dump buffer 93 - * @bufsize: size of @buf available for crash dump writes 93 + * @bufsize: size of @buf available for crash dump bytes (must match 94 + * smallest number of bytes available for writing to a 95 + * backend entry, since compressed bytes don't take kindly 96 + * to being truncated) 94 97 * 95 98 * @read_mutex: serializes @open, @read, @close, and @erase callbacks 96 99 * @flags: bitfield of frontends the backend can accept writes for
-17
include/linux/ptrace.h
··· 64 64 #define PTRACE_MODE_NOAUDIT 0x04 65 65 #define PTRACE_MODE_FSCREDS 0x08 66 66 #define PTRACE_MODE_REALCREDS 0x10 67 - #define PTRACE_MODE_SCHED 0x20 68 - #define PTRACE_MODE_IBPB 0x40 69 67 70 68 /* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */ 71 69 #define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS) 72 70 #define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS) 73 71 #define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS) 74 72 #define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS) 75 - #define PTRACE_MODE_SPEC_IBPB (PTRACE_MODE_ATTACH_REALCREDS | PTRACE_MODE_IBPB) 76 73 77 74 /** 78 75 * ptrace_may_access - check whether the caller is permitted to access ··· 86 89 * process_vm_writev or ptrace (and should use the real credentials). 87 90 */ 88 91 extern bool ptrace_may_access(struct task_struct *task, unsigned int mode); 89 - 90 - /** 91 - * ptrace_may_access - check whether the caller is permitted to access 92 - * a target task. 93 - * @task: target task 94 - * @mode: selects type of access and caller credentials 95 - * 96 - * Returns true on success, false on denial. 97 - * 98 - * Similar to ptrace_may_access(). Only to be called from context switch 99 - * code. Does not call into audit and the regular LSM hooks due to locking 100 - * constraints. 101 - */ 102 - extern bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode); 103 92 104 93 static inline int ptrace_reparented(struct task_struct *child) 105 94 {
+10
include/linux/sched.h
··· 1116 1116 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 1117 1117 /* Index of current stored address in ret_stack: */ 1118 1118 int curr_ret_stack; 1119 + int curr_ret_depth; 1119 1120 1120 1121 /* Stack of return addresses for return function tracing: */ 1121 1122 struct ftrace_ret_stack *ret_stack; ··· 1454 1453 #define PFA_SPREAD_SLAB 2 /* Spread some slab caches over cpuset */ 1455 1454 #define PFA_SPEC_SSB_DISABLE 3 /* Speculative Store Bypass disabled */ 1456 1455 #define PFA_SPEC_SSB_FORCE_DISABLE 4 /* Speculative Store Bypass force disabled*/ 1456 + #define PFA_SPEC_IB_DISABLE 5 /* Indirect branch speculation restricted */ 1457 + #define PFA_SPEC_IB_FORCE_DISABLE 6 /* Indirect branch speculation permanently restricted */ 1457 1458 1458 1459 #define TASK_PFA_TEST(name, func) \ 1459 1460 static inline bool task_##func(struct task_struct *p) \ ··· 1486 1483 1487 1484 TASK_PFA_TEST(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable) 1488 1485 TASK_PFA_SET(SPEC_SSB_FORCE_DISABLE, spec_ssb_force_disable) 1486 + 1487 + TASK_PFA_TEST(SPEC_IB_DISABLE, spec_ib_disable) 1488 + TASK_PFA_SET(SPEC_IB_DISABLE, spec_ib_disable) 1489 + TASK_PFA_CLEAR(SPEC_IB_DISABLE, spec_ib_disable) 1490 + 1491 + TASK_PFA_TEST(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable) 1492 + TASK_PFA_SET(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable) 1489 1493 1490 1494 static inline void 1491 1495 current_restore_flags(unsigned long orig_flags, unsigned long flags)
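The new `PFA_SPEC_IB_*` entries are consumed by the `TASK_PFA_TEST/SET/CLEAR` macro family, which stamps out one accessor per flag; note that the force-disable flag deliberately gets no CLEAR helper, so once set it can never be unset. A userspace sketch of the same token-pasting pattern, operating on a plain flags word instead of `task_struct::atomic_flags`:

```c
/* Flag bit positions, as in the PFA_* list above. */
#define PFA_SPEC_IB_DISABLE		5
#define PFA_SPEC_IB_FORCE_DISABLE	6

/* Generate test/set/clear helpers per flag, mirroring the
 * TASK_PFA_TEST/SET/CLEAR shape (non-atomic here for brevity). */
#define FLAG_TEST(name, func) \
	static int task_##func(unsigned long *flags) \
	{ return !!(*flags & (1UL << PFA_##name)); }
#define FLAG_SET(name, func) \
	static void task_set_##func(unsigned long *flags) \
	{ *flags |= 1UL << PFA_##name; }
#define FLAG_CLEAR(name, func) \
	static void task_clear_##func(unsigned long *flags) \
	{ *flags &= ~(1UL << PFA_##name); }

FLAG_TEST(SPEC_IB_DISABLE, spec_ib_disable)
FLAG_SET(SPEC_IB_DISABLE, spec_ib_disable)
FLAG_CLEAR(SPEC_IB_DISABLE, spec_ib_disable)

/* As in the hunk: no CLEAR for the force-disable flag. */
FLAG_TEST(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)
FLAG_SET(SPEC_IB_FORCE_DISABLE, spec_ib_force_disable)
```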
+20
include/linux/sched/smt.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef _LINUX_SCHED_SMT_H 3 + #define _LINUX_SCHED_SMT_H 4 + 5 + #include <linux/static_key.h> 6 + 7 + #ifdef CONFIG_SCHED_SMT 8 + extern struct static_key_false sched_smt_present; 9 + 10 + static __always_inline bool sched_smt_active(void) 11 + { 12 + return static_branch_likely(&sched_smt_present); 13 + } 14 + #else 15 + static inline bool sched_smt_active(void) { return false; } 16 + #endif 17 + 18 + void arch_smt_update(void); 19 + 20 + #endif
+2 -2
include/linux/tracehook.h
··· 83 83 * tracehook_report_syscall_entry - task is about to attempt a system call 84 84 * @regs: user register state of current task 85 85 * 86 - * This will be called if %TIF_SYSCALL_TRACE has been set, when the 87 - * current task has just entered the kernel for a system call. 86 + * This will be called if %TIF_SYSCALL_TRACE or %TIF_SYSCALL_EMU have been set, 87 + * when the current task has just entered the kernel for a system call. 88 88 * Full user register state is available here. Changing the values 89 89 * in @regs can affect the system call number and arguments to be tried. 90 90 * It is safe to block here, preventing the system call from beginning.
+3 -3
include/linux/tracepoint.h
··· 166 166 struct tracepoint_func *it_func_ptr; \ 167 167 void *it_func; \ 168 168 void *__data; \ 169 - int __maybe_unused idx = 0; \ 169 + int __maybe_unused __idx = 0; \ 170 170 \ 171 171 if (!(cond)) \ 172 172 return; \ ··· 182 182 * doesn't work from the idle path. \ 183 183 */ \ 184 184 if (rcuidle) { \ 185 - idx = srcu_read_lock_notrace(&tracepoint_srcu); \ 185 + __idx = srcu_read_lock_notrace(&tracepoint_srcu);\ 186 186 rcu_irq_enter_irqson(); \ 187 187 } \ 188 188 \ ··· 198 198 \ 199 199 if (rcuidle) { \ 200 200 rcu_irq_exit_irqson(); \ 201 - srcu_read_unlock_notrace(&tracepoint_srcu, idx);\ 201 + srcu_read_unlock_notrace(&tracepoint_srcu, __idx);\ 202 202 } \ 203 203 \ 204 204 preempt_enable_notrace(); \
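Renaming the tracepoint macro's local from `idx` to `__idx` avoids name capture: a tracepoint argument expression that mentions the caller's own `idx` would silently bind to the macro's local instead. A small userspace illustration of the hazard (macros invented, not the kernel ones):

```c
/* A macro that declares a helper local named 'idx': any argument
 * expression mentioning the caller's 'idx' gets captured by it. */
#define READ_BAD(out, expr) \
	do { int idx = 0; (void)idx; (out) = (expr); } while (0)

/* The fix's approach: a reserved-style name callers won't collide with. */
#define READ_GOOD(out, expr) \
	do { int __idx = 0; (void)__idx; (out) = (expr); } while (0)

static int bad_capture(void)
{
	int idx = 42;
	int got;

	(void)idx;		/* the caller expects this to be read below... */
	READ_BAD(got, idx);	/* ...but the macro's idx (0) shadows it */
	return got;
}

static int good_capture(void)
{
	int idx = 42;
	int got;

	READ_GOOD(got, idx);	/* caller's idx stays visible: got == 42 */
	return got;
}
```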
+1 -1
include/net/netfilter/ipv4/nf_nat_masquerade.h
··· 9 9 const struct nf_nat_range2 *range, 10 10 const struct net_device *out); 11 11 12 - void nf_nat_masquerade_ipv4_register_notifier(void); 12 + int nf_nat_masquerade_ipv4_register_notifier(void); 13 13 void nf_nat_masquerade_ipv4_unregister_notifier(void); 14 14 15 15 #endif /*_NF_NAT_MASQUERADE_IPV4_H_ */
+1 -1
include/net/netfilter/ipv6/nf_nat_masquerade.h
··· 5 5 unsigned int 6 6 nf_nat_masquerade_ipv6(struct sk_buff *skb, const struct nf_nat_range2 *range, 7 7 const struct net_device *out); 8 - void nf_nat_masquerade_ipv6_register_notifier(void); 8 + int nf_nat_masquerade_ipv6_register_notifier(void); 9 9 void nf_nat_masquerade_ipv6_unregister_notifier(void); 10 10 11 11 #endif /* _NF_NAT_MASQUERADE_IPV6_H_ */
+1 -1
include/sound/soc.h
··· 1192 1192 ((i) < rtd->num_codecs) && ((dai) = rtd->codec_dais[i]); \ 1193 1193 (i)++) 1194 1194 #define for_each_rtd_codec_dai_rollback(rtd, i, dai) \ 1195 - for (; ((i--) >= 0) && ((dai) = rtd->codec_dais[i]);) 1195 + for (; ((--i) >= 0) && ((dai) = rtd->codec_dais[i]);) 1196 1196 1197 1197 1198 1198 /* mixer control */
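The `for_each_rtd_codec_dai_rollback()` fix swaps `(i--)` for `(--i)`: with the post-decrement form the test still passes when `i` is 0, and the body then runs with `i == -1`, indexing one element before `codec_dais[]`. A sketch that records the visited indices instead of dereferencing anything, so the off-by-one is observable rather than undefined behavior (the real macro also stops early when the dai lookup yields NULL):

```c
/* Old condition ((i--) >= 0): tests the pre-decrement value, so the
 * body executes once more with i already decremented to -1. */
static int lowest_index_post(int n)
{
	int i = n, lowest = n;

	while ((i--) >= 0) {
		if (i < lowest)
			lowest = i;	/* reaches -1: arr[-1] in the macro */
	}
	return lowest;
}

/* Fixed condition ((--i) >= 0): decrement first, so the body never
 * sees a negative index and the rollback stops at element 0. */
static int lowest_index_pre(int n)
{
	int i = n, lowest = n;

	while ((--i) >= 0) {
		if (i < lowest)
			lowest = i;
	}
	return lowest;
}
```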
+11 -1
include/trace/events/sched.h
··· 107 107 #ifdef CREATE_TRACE_POINTS 108 108 static inline long __trace_sched_switch_state(bool preempt, struct task_struct *p) 109 109 { 110 + unsigned int state; 111 + 110 112 #ifdef CONFIG_SCHED_DEBUG 111 113 BUG_ON(p != current); 112 114 #endif /* CONFIG_SCHED_DEBUG */ ··· 120 118 if (preempt) 121 119 return TASK_REPORT_MAX; 122 120 123 - return 1 << task_state_index(p); 121 + /* 122 + * task_state_index() uses fls() and returns a value from 0-8 range. 123 + * Decrement it by 1 (except TASK_RUNNING state i.e 0) before using 124 + * it for left shift operation to get the correct task->state 125 + * mapping. 126 + */ 127 + state = task_state_index(p); 128 + 129 + return state ? (1 << (state - 1)) : state; 124 130 } 125 131 #endif /* CREATE_TRACE_POINTS */ 126 132
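As the added comment says, `task_state_index()` is `fls()`-based: it returns 0 for TASK_RUNNING and the 1-based bit position for any other reported state. The old `1 << index` therefore reported RUNNING as 1 and shifted every real state one bit too far. A userspace sketch of both mappings (`my_fls` stands in for the kernel's `fls()`; the TASK_REPORT masking is omitted):

```c
/* Minimal fls() (find last set): fls(0) == 0, fls(1) == 1, fls(8) == 4. */
static int my_fls(unsigned int x)
{
	int r = 0;

	while (x) {
		x >>= 1;
		r++;
	}
	return r;
}

/* Old mapping: off by one bit, and turns RUNNING (0) into 1. */
static unsigned int state_mask_old(unsigned int state)
{
	return 1u << my_fls(state);
}

/* Fixed mapping from the hunk above: decrement the index before
 * shifting, except for the RUNNING state (index 0). */
static unsigned int state_mask_new(unsigned int state)
{
	unsigned int idx = my_fls(state);

	return idx ? (1u << (idx - 1)) : idx;
}
```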
+1
include/uapi/linux/prctl.h
··· 212 212 #define PR_SET_SPECULATION_CTRL 53 213 213 /* Speculation control variants */ 214 214 # define PR_SPEC_STORE_BYPASS 0 215 + # define PR_SPEC_INDIRECT_BRANCH 1 215 216 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */ 216 217 # define PR_SPEC_NOT_AFFECTED 0 217 218 # define PR_SPEC_PRCTL (1UL << 0)
-5
include/xen/balloon.h
··· 44 44 { 45 45 } 46 46 #endif 47 - 48 - #ifdef CONFIG_XEN_BALLOON_MEMORY_HOTPLUG 49 - struct resource; 50 - void arch_xen_balloon_init(struct resource *hostmem_resource); 51 - #endif
+9
init/Kconfig
··· 509 509 510 510 Say N if unsure. 511 511 512 + config PSI_DEFAULT_DISABLED 513 + bool "Require boot parameter to enable pressure stall information tracking" 514 + default n 515 + depends on PSI 516 + help 517 + If set, pressure stall information tracking will be disabled 518 + per default but can be enabled through passing psi_enable=1 519 + on the kernel commandline during boot. 520 + 512 521 endmenu # "CPU/Task time and stats accounting" 513 522 514 523 config CPU_ISOLATION
+12 -10
init/initramfs.c
··· 291 291 return 1; 292 292 } 293 293 294 - static int __init maybe_link(void) 295 - { 296 - if (nlink >= 2) { 297 - char *old = find_link(major, minor, ino, mode, collected); 298 - if (old) 299 - return (ksys_link(old, collected) < 0) ? -1 : 1; 300 - } 301 - return 0; 302 - } 303 - 304 294 static void __init clean_path(char *path, umode_t fmode) 305 295 { 306 296 struct kstat st; ··· 301 311 else 302 312 ksys_unlink(path); 303 313 } 314 + } 315 + 316 + static int __init maybe_link(void) 317 + { 318 + if (nlink >= 2) { 319 + char *old = find_link(major, minor, ino, mode, collected); 320 + if (old) { 321 + clean_path(collected, 0); 322 + return (ksys_link(old, collected) < 0) ? -1 : 1; 323 + } 324 + } 325 + return 0; 304 326 } 305 327 306 328 static __initdata int wfd;
+34
kernel/bpf/core.c
··· 672 672 bpf_prog_unlock_free(fp); 673 673 } 674 674 675 + int bpf_jit_get_func_addr(const struct bpf_prog *prog, 676 + const struct bpf_insn *insn, bool extra_pass, 677 + u64 *func_addr, bool *func_addr_fixed) 678 + { 679 + s16 off = insn->off; 680 + s32 imm = insn->imm; 681 + u8 *addr; 682 + 683 + *func_addr_fixed = insn->src_reg != BPF_PSEUDO_CALL; 684 + if (!*func_addr_fixed) { 685 + /* Place-holder address till the last pass has collected 686 + * all addresses for JITed subprograms in which case we 687 + * can pick them up from prog->aux. 688 + */ 689 + if (!extra_pass) 690 + addr = NULL; 691 + else if (prog->aux->func && 692 + off >= 0 && off < prog->aux->func_cnt) 693 + addr = (u8 *)prog->aux->func[off]->bpf_func; 694 + else 695 + return -EINVAL; 696 + } else { 697 + /* Address of a BPF helper call. Since part of the core 698 + * kernel, it's always at a fixed location. __bpf_call_base 699 + * and the helper with imm relative to it are both in core 700 + * kernel. 701 + */ 702 + addr = (u8 *)__bpf_call_base + imm; 703 + } 704 + 705 + *func_addr = (unsigned long)addr; 706 + return 0; 707 + } 708 + 675 709 static int bpf_jit_blind_insn(const struct bpf_insn *from, 676 710 const struct bpf_insn *aux, 677 711 struct bpf_insn *to_buff)
+2 -1
kernel/bpf/local_storage.c
··· 139 139 return -ENOENT; 140 140 141 141 new = kmalloc_node(sizeof(struct bpf_storage_buffer) + 142 - map->value_size, __GFP_ZERO | GFP_USER, 142 + map->value_size, 143 + __GFP_ZERO | GFP_ATOMIC | __GFP_NOWARN, 143 144 map->numa_node); 144 145 if (!new) 145 146 return -ENOMEM;
+8 -8
kernel/bpf/queue_stack_maps.c
··· 7 7 #include <linux/bpf.h> 8 8 #include <linux/list.h> 9 9 #include <linux/slab.h> 10 + #include <linux/capability.h> 10 11 #include "percpu_freelist.h" 11 12 12 13 #define QUEUE_STACK_CREATE_FLAG_MASK \ ··· 46 45 /* Called from syscall */ 47 46 static int queue_stack_map_alloc_check(union bpf_attr *attr) 48 47 { 48 + if (!capable(CAP_SYS_ADMIN)) 49 + return -EPERM; 50 + 49 51 /* check sanity of attributes */ 50 52 if (attr->max_entries == 0 || attr->key_size != 0 || 53 + attr->value_size == 0 || 51 54 attr->map_flags & ~QUEUE_STACK_CREATE_FLAG_MASK) 52 55 return -EINVAL; 53 56 ··· 68 63 { 69 64 int ret, numa_node = bpf_map_attr_numa_node(attr); 70 65 struct bpf_queue_stack *qs; 71 - u32 size, value_size; 72 - u64 queue_size, cost; 66 + u64 size, queue_size, cost; 73 67 74 - size = attr->max_entries + 1; 75 - value_size = attr->value_size; 76 - 77 - queue_size = sizeof(*qs) + (u64) value_size * size; 78 - 79 - cost = queue_size; 68 + size = (u64) attr->max_entries + 1; 69 + cost = queue_size = sizeof(*qs) + size * attr->value_size; 80 70 if (cost >= U32_MAX - PAGE_SIZE) 81 71 return ERR_PTR(-E2BIG); 82 72
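The sizing change widens to u64 *before* any arithmetic: the old `u32 size = attr->max_entries + 1` wraps to 0 when `max_entries` is UINT32_MAX, so an enormous allocation request could slip past the `cost >= U32_MAX - PAGE_SIZE` check. A sketch of both computations (ignoring the `sizeof(*qs)` header the real code adds):

```c
#include <stdint.h>

/* Old sizing: the 32-bit addition wraps before the width is extended,
 * so the later u64 multiply starts from a corrupted element count. */
static uint64_t queue_size_old(uint32_t max_entries, uint32_t value_size)
{
	uint32_t size = max_entries + 1;	/* wraps to 0 at UINT32_MAX */

	return (uint64_t)value_size * size;
}

/* Fixed sizing, as in the hunk: widen first, then add and multiply. */
static uint64_t queue_size_new(uint32_t max_entries, uint32_t value_size)
{
	uint64_t size = (uint64_t)max_entries + 1;

	return size * value_size;
}
```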
+1 -1
kernel/bpf/verifier.c
··· 5650 5650 return; 5651 5651 /* NOTE: fake 'exit' subprog should be updated as well. */ 5652 5652 for (i = 0; i <= env->subprog_cnt; i++) { 5653 - if (env->subprog_info[i].start < off) 5653 + if (env->subprog_info[i].start <= off) 5654 5654 continue; 5655 5655 env->subprog_info[i].start += len - 1; 5656 5656 }
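The `<` to `<=` change fixes an off-by-one in subprog start adjustment: when the single instruction at `off` is patched into `len` instructions, the replacement occupies the same starting offset, so a subprogram whose first instruction is the patched one still begins at `off` and must not be shifted. A sketch with the condition made selectable (a plain array stands in for `env->subprog_info`):

```c
/* Adjust subprog start offsets after patching the instruction at 'off'
 * into 'len' instructions. 'inclusive' selects the fixed condition
 * (start <= off stays put) versus the old one (start < off). */
static void adjust_starts(int *starts, int n, int off, int len, int inclusive)
{
	int i;

	for (i = 0; i < n; i++) {
		if (inclusive ? starts[i] <= off : starts[i] < off)
			continue;	/* subprog begins at or before the patch */
		starts[i] += len - 1;	/* shifted by the inserted insns */
	}
}
```

For a subprog starting exactly at the patched offset, the old condition wrongly shifts it while the fixed one leaves it in place; later subprogs shift under both.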
+9 -6
kernel/cpu.c
··· 10 10 #include <linux/sched/signal.h> 11 11 #include <linux/sched/hotplug.h> 12 12 #include <linux/sched/task.h> 13 + #include <linux/sched/smt.h> 13 14 #include <linux/unistd.h> 14 15 #include <linux/cpu.h> 15 16 #include <linux/oom.h> ··· 367 366 } 368 367 369 368 #endif /* CONFIG_HOTPLUG_CPU */ 369 + 370 + /* 371 + * Architectures that need SMT-specific errata handling during SMT hotplug 372 + * should override this. 373 + */ 374 + void __weak arch_smt_update(void) { } 370 375 371 376 #ifdef CONFIG_HOTPLUG_SMT 372 377 enum cpuhp_smt_control cpu_smt_control __read_mostly = CPU_SMT_ENABLED; ··· 1018 1011 * concurrent CPU hotplug via cpu_add_remove_lock. 1019 1012 */ 1020 1013 lockup_detector_cleanup(); 1014 + arch_smt_update(); 1021 1015 return ret; 1022 1016 } 1023 1017 ··· 1147 1139 ret = cpuhp_up_callbacks(cpu, st, target); 1148 1140 out: 1149 1141 cpus_write_unlock(); 1142 + arch_smt_update(); 1150 1143 return ret; 1151 1144 } 1152 1145 ··· 2063 2054 /* Tell user space about the state change */ 2064 2055 kobject_uevent(&dev->kobj, KOBJ_ONLINE); 2065 2056 } 2066 - 2067 - /* 2068 - * Architectures that need SMT-specific errata handling during SMT hotplug 2069 - * should override this. 2070 - */ 2071 - void __weak arch_smt_update(void) { }; 2072 2057 2073 2058 static int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval) 2074 2059 {
+10 -2
kernel/events/uprobes.c
··· 829 829 BUG_ON((uprobe->offset & ~PAGE_MASK) + 830 830 UPROBE_SWBP_INSN_SIZE > PAGE_SIZE); 831 831 832 - smp_wmb(); /* pairs with rmb() in find_active_uprobe() */ 832 + smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */ 833 833 set_bit(UPROBE_COPY_INSN, &uprobe->flags); 834 834 835 835 out: ··· 2178 2178 * After we hit the bp, _unregister + _register can install the 2179 2179 * new and not-yet-analyzed uprobe at the same address, restart. 2180 2180 */ 2181 - smp_rmb(); /* pairs with wmb() in install_breakpoint() */ 2182 2181 if (unlikely(!test_bit(UPROBE_COPY_INSN, &uprobe->flags))) 2183 2182 goto out; 2183 + 2184 + /* 2185 + * Pairs with the smp_wmb() in prepare_uprobe(). 2186 + * 2187 + * Guarantees that if we see the UPROBE_COPY_INSN bit set, then 2188 + * we must also see the stores to &uprobe->arch performed by the 2189 + * prepare_uprobe() call. 2190 + */ 2191 + smp_rmb(); 2184 2192 2185 2193 /* Tracing handlers use ->utask to communicate with fetch methods */ 2186 2194 if (!get_utask())
+2 -2
kernel/kcov.c
··· 56 56 struct task_struct *t; 57 57 }; 58 58 59 - static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t) 59 + static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t) 60 60 { 61 61 unsigned int mode; 62 62 ··· 78 78 return mode == needed_mode; 79 79 } 80 80 81 - static unsigned long canonicalize_ip(unsigned long ip) 81 + static notrace unsigned long canonicalize_ip(unsigned long ip) 82 82 { 83 83 #ifdef CONFIG_RANDOMIZE_BASE 84 84 ip -= kaslr_offset();
-10
kernel/ptrace.c
··· 261 261 262 262 static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode) 263 263 { 264 - if (mode & PTRACE_MODE_SCHED) 265 - return false; 266 - 267 264 if (mode & PTRACE_MODE_NOAUDIT) 268 265 return has_ns_capability_noaudit(current, ns, CAP_SYS_PTRACE); 269 266 else ··· 328 331 !ptrace_has_cap(mm->user_ns, mode))) 329 332 return -EPERM; 330 333 331 - if (mode & PTRACE_MODE_SCHED) 332 - return 0; 333 334 return security_ptrace_access_check(task, mode); 334 - } 335 - 336 - bool ptrace_may_access_sched(struct task_struct *task, unsigned int mode) 337 - { 338 - return __ptrace_may_access(task, mode | PTRACE_MODE_SCHED); 339 335 } 340 336 341 337 bool ptrace_may_access(struct task_struct *task, unsigned int mode)
+11 -8
kernel/sched/core.c
··· 5738 5738 5739 5739 #ifdef CONFIG_SCHED_SMT 5740 5740 /* 5741 - * The sched_smt_present static key needs to be evaluated on every 5742 - * hotplug event because at boot time SMT might be disabled when 5743 - * the number of booted CPUs is limited. 5744 - * 5745 - * If then later a sibling gets hotplugged, then the key would stay 5746 - * off and SMT scheduling would never be functional. 5741 + * When going up, increment the number of cores with SMT present. 5747 5742 */ 5748 - if (cpumask_weight(cpu_smt_mask(cpu)) > 1) 5749 - static_branch_enable_cpuslocked(&sched_smt_present); 5743 + if (cpumask_weight(cpu_smt_mask(cpu)) == 2) 5744 + static_branch_inc_cpuslocked(&sched_smt_present); 5750 5745 #endif 5751 5746 set_cpu_active(cpu, true); 5752 5747 ··· 5784 5789 * Do sync before park smpboot threads to take care the rcu boost case. 5785 5790 */ 5786 5791 synchronize_rcu_mult(call_rcu, call_rcu_sched); 5792 + 5793 + #ifdef CONFIG_SCHED_SMT 5794 + /* 5795 + * When going down, decrement the number of cores with SMT present. 5796 + */ 5797 + if (cpumask_weight(cpu_smt_mask(cpu)) == 2) 5798 + static_branch_dec_cpuslocked(&sched_smt_present); 5799 + #endif 5787 5800 5788 5801 if (!sched_smp_initialized) 5789 5802 return 0;
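The scheduler hunks replace a one-way boolean (`static_branch_enable`) with a reference count (`static_branch_inc`/`_dec`): each core whose sibling count reaches 2 on the way up takes a reference and drops it on the way down, so SMT scheduling can actually be switched off again once the last sibling leaves. A userspace model with a plain counter standing in for the static key (like the real code's `== 2` test, only 2-way SMT is modeled):

```c
/* smt_count plays the role of the sched_smt_present key's refcount;
 * each int pointer models one core's online-sibling count. */
static int smt_count;

static void cpu_up_core(int *core_siblings)
{
	if (++*core_siblings == 2)	/* core just became SMT */
		smt_count++;
}

static void cpu_down_core(int *core_siblings)
{
	if ((*core_siblings)-- == 2)	/* core just lost its second sibling */
		smt_count--;
}

static int smt_active(void)
{
	return smt_count > 0;
}
```

With the old boolean scheme, `smt_active()` would stay true forever after the first SMT core appeared; the counted version goes false when no core has two online siblings left.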
+21 -9
kernel/sched/psi.c
··· 136 136 137 137 static int psi_bug __read_mostly; 138 138 139 - bool psi_disabled __read_mostly; 140 - core_param(psi_disabled, psi_disabled, bool, 0644); 139 + DEFINE_STATIC_KEY_FALSE(psi_disabled); 140 + 141 + #ifdef CONFIG_PSI_DEFAULT_DISABLED 142 + bool psi_enable; 143 + #else 144 + bool psi_enable = true; 145 + #endif 146 + static int __init setup_psi(char *str) 147 + { 148 + return kstrtobool(str, &psi_enable) == 0; 149 + } 150 + __setup("psi=", setup_psi); 141 151 142 152 /* Running averages - we need to be higher-res than loadavg */ 143 153 #define PSI_FREQ (2*HZ+1) /* 2 sec intervals */ ··· 179 169 180 170 void __init psi_init(void) 181 171 { 182 - if (psi_disabled) 172 + if (!psi_enable) { 173 + static_branch_enable(&psi_disabled); 183 174 return; 175 + } 184 176 185 177 psi_period = jiffies_to_nsecs(PSI_FREQ); 186 178 group_init(&psi_system); ··· 561 549 struct rq_flags rf; 562 550 struct rq *rq; 563 551 564 - if (psi_disabled) 552 + if (static_branch_likely(&psi_disabled)) 565 553 return; 566 554 567 555 *flags = current->flags & PF_MEMSTALL; ··· 591 579 struct rq_flags rf; 592 580 struct rq *rq; 593 581 594 - if (psi_disabled) 582 + if (static_branch_likely(&psi_disabled)) 595 583 return; 596 584 597 585 if (*flags) ··· 612 600 #ifdef CONFIG_CGROUPS 613 601 int psi_cgroup_alloc(struct cgroup *cgroup) 614 602 { 615 - if (psi_disabled) 603 + if (static_branch_likely(&psi_disabled)) 616 604 return 0; 617 605 618 606 cgroup->psi.pcpu = alloc_percpu(struct psi_group_cpu); ··· 624 612 625 613 void psi_cgroup_free(struct cgroup *cgroup) 626 614 { 627 - if (psi_disabled) 615 + if (static_branch_likely(&psi_disabled)) 628 616 return; 629 617 630 618 cancel_delayed_work_sync(&cgroup->psi.clock_work); ··· 649 637 struct rq_flags rf; 650 638 struct rq *rq; 651 639 652 - if (psi_disabled) { 640 + if (static_branch_likely(&psi_disabled)) { 653 641 /* 654 642 * Lame to do this here, but the scheduler cannot be locked 655 643 * from the outside, so we move cgroups from inside sched/. ··· 685 673 { 686 674 int full; 687 675 688 - if (psi_disabled) 676 + if (static_branch_likely(&psi_disabled)) 689 677 return -EOPNOTSUPP; 690 678 691 679 update_stats(group);
+1 -3
kernel/sched/sched.h
··· 23 23 #include <linux/sched/prio.h> 24 24 #include <linux/sched/rt.h> 25 25 #include <linux/sched/signal.h> 26 + #include <linux/sched/smt.h> 26 27 #include <linux/sched/stat.h> 27 28 #include <linux/sched/sysctl.h> 28 29 #include <linux/sched/task.h> ··· 937 936 938 937 939 938 #ifdef CONFIG_SCHED_SMT 940 - 941 - extern struct static_key_false sched_smt_present; 942 - 943 939 extern void __update_idle_core(struct rq *rq); 944 940 945 941 static inline void update_idle_core(struct rq *rq)
+4 -4
kernel/sched/stats.h
··· 66 66 { 67 67 int clear = 0, set = TSK_RUNNING; 68 68 69 - if (psi_disabled) 69 + if (static_branch_likely(&psi_disabled)) 70 70 return; 71 71 72 72 if (!wakeup || p->sched_psi_wake_requeue) { ··· 86 86 { 87 87 int clear = TSK_RUNNING, set = 0; 88 88 89 - if (psi_disabled) 89 + if (static_branch_likely(&psi_disabled)) 90 90 return; 91 91 92 92 if (!sleep) { ··· 102 102 103 103 static inline void psi_ttwu_dequeue(struct task_struct *p) 104 104 { 105 - if (psi_disabled) 105 + if (static_branch_likely(&psi_disabled)) 106 106 return; 107 107 /* 108 108 * Is the task being migrated during a wakeup? Make sure to ··· 128 128 129 129 static inline void psi_task_tick(struct rq *rq) 130 130 { 131 - if (psi_disabled) 131 + if (static_branch_likely(&psi_disabled)) 132 132 return; 133 133 134 134 if (unlikely(rq->curr->flags & PF_MEMSTALL))
+3 -1
kernel/stackleak.c
··· 11 11 */ 12 12 13 13 #include <linux/stackleak.h> 14 + #include <linux/kprobes.h> 14 15 15 16 #ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE 16 17 #include <linux/jump_label.h> ··· 48 47 #define skip_erasing() false 49 48 #endif /* CONFIG_STACKLEAK_RUNTIME_DISABLE */ 50 49 51 - asmlinkage void stackleak_erase(void) 50 + asmlinkage void notrace stackleak_erase(void) 52 51 { 53 52 /* It would be nice not to have 'kstack_ptr' and 'boundary' on stack */ 54 53 unsigned long kstack_ptr = current->lowest_stack; ··· 102 101 /* Reset the 'lowest_stack' value for the next syscall */ 103 102 current->lowest_stack = current_top_of_stack() - THREAD_SIZE/64; 104 103 } 104 + NOKPROBE_SYMBOL(stackleak_erase); 105 105 106 106 void __used stackleak_track_stack(void) 107 107 {
+5 -3
kernel/trace/bpf_trace.c
··· 196 196 i++; 197 197 } else if (fmt[i] == 'p' || fmt[i] == 's') { 198 198 mod[fmt_cnt]++; 199 - i++; 200 - if (!isspace(fmt[i]) && !ispunct(fmt[i]) && fmt[i] != 0) 199 + /* disallow any further format extensions */ 200 + if (fmt[i + 1] != 0 && 201 + !isspace(fmt[i + 1]) && 202 + !ispunct(fmt[i + 1])) 201 203 return -EINVAL; 202 204 fmt_cnt++; 203 - if (fmt[i - 1] == 's') { 205 + if (fmt[i] == 's') { 204 206 if (str_seen) 205 207 /* allow only one '%s' per fmt string */ 206 208 return -EINVAL;
+5 -2
kernel/trace/ftrace.c
··· 817 817 #ifdef CONFIG_FUNCTION_GRAPH_TRACER 818 818 static int profile_graph_entry(struct ftrace_graph_ent *trace) 819 819 { 820 - int index = trace->depth; 820 + int index = current->curr_ret_stack; 821 821 822 822 function_profile_call(trace->func, 0, NULL, NULL); 823 823 ··· 852 852 if (!fgraph_graph_time) { 853 853 int index; 854 854 855 - index = trace->depth; 855 + index = current->curr_ret_stack; 856 856 857 857 /* Append this call time to the parent time to subtract */ 858 858 if (index) ··· 6814 6814 atomic_set(&t->tracing_graph_pause, 0); 6815 6815 atomic_set(&t->trace_overrun, 0); 6816 6816 t->curr_ret_stack = -1; 6817 + t->curr_ret_depth = -1; 6817 6818 /* Make sure the tasks see the -1 first: */ 6818 6819 smp_wmb(); 6819 6820 t->ret_stack = ret_stack_list[start++]; ··· 7039 7038 void ftrace_graph_init_idle_task(struct task_struct *t, int cpu) 7040 7039 { 7041 7040 t->curr_ret_stack = -1; 7041 + t->curr_ret_depth = -1; 7042 7042 /* 7043 7043 * The idle task has no parent, it either has its own 7044 7044 * stack or no stack at all. ··· 7070 7068 /* Make sure we do not use the parent ret_stack */ 7071 7069 t->ret_stack = NULL; 7072 7070 t->curr_ret_stack = -1; 7071 + t->curr_ret_depth = -1; 7073 7072 7074 7073 if (ftrace_graph_active) { 7075 7074 struct ftrace_ret_stack *ret_stack;
+54 -3
kernel/trace/trace.h
··· 512 512 * can only be modified by current, we can reuse trace_recursion. 513 513 */ 514 514 TRACE_IRQ_BIT, 515 + 516 + /* Set if the function is in the set_graph_function file */ 517 + TRACE_GRAPH_BIT, 518 + 519 + /* 520 + * In the very unlikely case that an interrupt came in 521 + * at a start of graph tracing, and we want to trace 522 + * the function in that interrupt, the depth can be greater 523 + * than zero, because of the preempted start of a previous 524 + * trace. In an even more unlikely case, depth could be 2 525 + * if a softirq interrupted the start of graph tracing, 526 + * followed by an interrupt preempting a start of graph 527 + * tracing in the softirq, and depth can even be 3 528 + * if an NMI came in at the start of an interrupt function 529 + * that preempted a softirq start of a function that 530 + * preempted normal context!!!! Luckily, it can't be 531 + * greater than 3, so the next two bits are a mask 532 + * of what the depth is when we set TRACE_GRAPH_BIT 533 + */ 534 + 535 + TRACE_GRAPH_DEPTH_START_BIT, 536 + TRACE_GRAPH_DEPTH_END_BIT, 515 537 }; 516 538 517 539 #define trace_recursion_set(bit) do { (current)->trace_recursion |= (1<<(bit)); } while (0) 518 540 #define trace_recursion_clear(bit) do { (current)->trace_recursion &= ~(1<<(bit)); } while (0) 519 541 #define trace_recursion_test(bit) ((current)->trace_recursion & (1<<(bit))) 542 + 543 + #define trace_recursion_depth() \ 544 + (((current)->trace_recursion >> TRACE_GRAPH_DEPTH_START_BIT) & 3) 545 + #define trace_recursion_set_depth(depth) \ 546 + do { \ 547 + current->trace_recursion &= \ 548 + ~(3 << TRACE_GRAPH_DEPTH_START_BIT); \ 549 + current->trace_recursion |= \ 550 + ((depth) & 3) << TRACE_GRAPH_DEPTH_START_BIT; \ 551 + } while (0) 520 552 521 553 #define TRACE_CONTEXT_BITS 4 522 554 ··· 875 843 extern struct ftrace_hash *ftrace_graph_hash; 876 844 extern struct ftrace_hash *ftrace_graph_notrace_hash; 877 845 878 - static inline int ftrace_graph_addr(unsigned long addr) 846 + static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) 879 847 { 848 + unsigned long addr = trace->func; 880 849 int ret = 0; 881 850 882 851 preempt_disable_notrace(); ··· 888 855 } 889 856 890 857 if (ftrace_lookup_ip(ftrace_graph_hash, addr)) { 858 + 859 + /* 860 + * This needs to be cleared on the return functions 861 + * when the depth is zero. 862 + */ 863 + trace_recursion_set(TRACE_GRAPH_BIT); 864 + trace_recursion_set_depth(trace->depth); 865 + 891 866 /* 892 867 * If no irqs are to be traced, but a set_graph_function 893 868 * is set, and called by an interrupt handler, we still ··· 913 872 return ret; 914 873 } 915 874 875 + static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace) 876 + { 877 + if (trace_recursion_test(TRACE_GRAPH_BIT) && 878 + trace->depth == trace_recursion_depth()) 879 + trace_recursion_clear(TRACE_GRAPH_BIT); 880 + } 881 + 916 882 static inline int ftrace_graph_notrace_addr(unsigned long addr) 917 883 { 918 884 int ret = 0; ··· 933 885 return ret; 934 886 } 935 887 #else 936 - static inline int ftrace_graph_addr(unsigned long addr) 888 + static inline int ftrace_graph_addr(struct ftrace_graph_ent *trace) 937 889 { 938 890 return 1; 939 891 } ··· 942 894 { 943 895 return 0; 944 896 } 897 + static inline void ftrace_graph_addr_finish(struct ftrace_graph_ret *trace) 898 + { } 945 899 #endif /* CONFIG_DYNAMIC_FTRACE */ 946 900 947 901 extern unsigned int fgraph_max_depth; ··· 951 901 static inline bool ftrace_graph_ignore_func(struct ftrace_graph_ent *trace) 952 902 { 953 903 /* trace it when it is-nested-in or is a function enabled. */ 954 - return !(trace->depth || ftrace_graph_addr(trace->func)) || 904 + return !(trace_recursion_test(TRACE_GRAPH_BIT) || 905 + ftrace_graph_addr(trace)) || 955 906 (trace->depth < 0) || 956 907 (fgraph_max_depth && trace->depth >= fgraph_max_depth); 957 908 }
+42 -11
kernel/trace/trace_functions_graph.c
··· 118 118 struct trace_seq *s, u32 flags); 119 119 120 120 /* Add a function return address to the trace stack on thread info.*/ 121 - int 122 - ftrace_push_return_trace(unsigned long ret, unsigned long func, int *depth, 121 + static int 122 + ftrace_push_return_trace(unsigned long ret, unsigned long func, 123 123 unsigned long frame_pointer, unsigned long *retp) 124 124 { 125 125 unsigned long long calltime; ··· 177 177 #ifdef HAVE_FUNCTION_GRAPH_RET_ADDR_PTR 178 178 current->ret_stack[index].retp = retp; 179 179 #endif 180 - *depth = current->curr_ret_stack; 180 + return 0; 181 + } 182 + 183 + int function_graph_enter(unsigned long ret, unsigned long func, 184 + unsigned long frame_pointer, unsigned long *retp) 185 + { 186 + struct ftrace_graph_ent trace; 187 + 188 + trace.func = func; 189 + trace.depth = ++current->curr_ret_depth; 190 + 191 + if (ftrace_push_return_trace(ret, func, 192 + frame_pointer, retp)) 193 + goto out; 194 + 195 + /* Only trace if the calling function expects to */ 196 + if (!ftrace_graph_entry(&trace)) 197 + goto out_ret; 181 198 182 199 return 0; 200 + out_ret: 201 + current->curr_ret_stack--; 202 + out: 203 + current->curr_ret_depth--; 204 + return -EBUSY; 183 205 } 184 206 185 207 /* Retrieve a function return address to the trace stack on thread info.*/ ··· 263 241 trace->func = current->ret_stack[index].func; 264 242 trace->calltime = current->ret_stack[index].calltime; 265 243 trace->overrun = atomic_read(&current->trace_overrun); 266 - trace->depth = index; 244 + trace->depth = current->curr_ret_depth--; 245 + /* 246 + * We still want to trace interrupts coming in if 247 + * max_depth is set to 1. Make sure the decrement is 248 + * seen before ftrace_graph_return. 249 + */ 250 + barrier(); 267 251 } 268 252 269 253 /* ··· 283 255 284 256 ftrace_pop_return_trace(&trace, &ret, frame_pointer); 285 257 trace.rettime = trace_clock_local(); 258 + ftrace_graph_return(&trace); 259 + /* 260 + * The ftrace_graph_return() may still access the current 261 + * ret_stack structure, we need to make sure the update of 262 + * curr_ret_stack is after that. 263 + */ 286 264 barrier(); 287 265 current->curr_ret_stack--; 288 266 /* ··· 300 266 current->curr_ret_stack += FTRACE_NOTRACE_DEPTH; 301 267 return ret; 302 268 } 303 - 304 - /* 305 - * The trace should run after decrementing the ret counter 306 - * in case an interrupt were to come in. We don't want to 307 - * lose the interrupt if max_depth is set. 308 - */ 309 - ftrace_graph_return(&trace); 310 269 311 270 if (unlikely(!ret)) { 312 271 ftrace_graph_stop(); ··· 509 482 int cpu; 510 483 int pc; 511 484 485 + ftrace_graph_addr_finish(trace); 486 + 512 487 local_irq_save(flags); 513 488 cpu = raw_smp_processor_id(); 514 489 data = per_cpu_ptr(tr->trace_buffer.data, cpu); ··· 534 505 535 506 static void trace_graph_thresh_return(struct ftrace_graph_ret *trace) 536 507 { 508 + ftrace_graph_addr_finish(trace); 509 + 537 510 if (tracing_thresh && 538 511 (trace->rettime - trace->calltime < tracing_thresh)) 539 512 return;
+2
kernel/trace/trace_irqsoff.c
··· 208 208 unsigned long flags; 209 209 int pc; 210 210 211 + ftrace_graph_addr_finish(trace); 212 + 211 213 if (!func_prolog_dec(tr, &data, &flags)) 212 214 return; 213 215
+2
kernel/trace/trace_sched_wakeup.c
··· 270 270 unsigned long flags; 271 271 int pc; 272 272 273 + ftrace_graph_addr_finish(trace); 274 + 273 275 if (!func_prolog_preempt_disable(tr, &data, &pc)) 274 276 return; 275 277
+2 -3
lib/debugobjects.c
··· 135 135 if (!new) 136 136 return; 137 137 138 - kmemleak_ignore(new); 139 138 raw_spin_lock_irqsave(&pool_lock, flags); 140 139 hlist_add_head(&new->node, &obj_pool); 141 140 debug_objects_allocated++; ··· 1127 1128 obj = kmem_cache_zalloc(obj_cache, GFP_KERNEL); 1128 1129 if (!obj) 1129 1130 goto free; 1130 - kmemleak_ignore(obj); 1131 1131 hlist_add_head(&obj->node, &objects); 1132 1132 } 1133 1133 ··· 1182 1184 1183 1185 obj_cache = kmem_cache_create("debug_objects_cache", 1184 1186 sizeof (struct debug_obj), 0, 1185 - SLAB_DEBUG_OBJECTS, NULL); 1187 + SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE, 1188 + NULL); 1186 1189 1187 1190 if (!obj_cache || debug_objects_replace_static_objects()) { 1188 1191 debug_objects_enabled = 0;
+37 -1
lib/iov_iter.c
··· 560 560 return bytes; 561 561 } 562 562 563 + static size_t csum_and_copy_to_pipe_iter(const void *addr, size_t bytes, 564 + __wsum *csum, struct iov_iter *i) 565 + { 566 + struct pipe_inode_info *pipe = i->pipe; 567 + size_t n, r; 568 + size_t off = 0; 569 + __wsum sum = *csum, next; 570 + int idx; 571 + 572 + if (!sanity(i)) 573 + return 0; 574 + 575 + bytes = n = push_pipe(i, bytes, &idx, &r); 576 + if (unlikely(!n)) 577 + return 0; 578 + for ( ; n; idx = next_idx(idx, pipe), r = 0) { 579 + size_t chunk = min_t(size_t, n, PAGE_SIZE - r); 580 + char *p = kmap_atomic(pipe->bufs[idx].page); 581 + next = csum_partial_copy_nocheck(addr, p + r, chunk, 0); 582 + sum = csum_block_add(sum, next, off); 583 + kunmap_atomic(p); 584 + i->idx = idx; 585 + i->iov_offset = r + chunk; 586 + n -= chunk; 587 + off += chunk; 588 + addr += chunk; 589 + } 590 + i->count -= bytes; 591 + *csum = sum; 592 + return bytes; 593 + } 594 + 563 595 size_t _copy_to_iter(const void *addr, size_t bytes, struct iov_iter *i) 564 596 { 565 597 const char *from = addr; ··· 1470 1438 const char *from = addr; 1471 1439 __wsum sum, next; 1472 1440 size_t off = 0; 1441 + 1442 + if (unlikely(iov_iter_is_pipe(i))) 1443 + return csum_and_copy_to_pipe_iter(addr, bytes, csum, i); 1444 + 1473 1445 sum = *csum; 1474 - if (unlikely(iov_iter_is_pipe(i) || iov_iter_is_discard(i))) { 1446 + if (unlikely(iov_iter_is_discard(i))) { 1475 1447 WARN_ON(1); /* for now */ 1476 1448 return 0; 1477 1449 }
+1 -1
lib/test_hexdump.c
··· 99 99 const char *q = *result++; 100 100 size_t amount = strlen(q); 101 101 102 - strncpy(p, q, amount); 102 + memcpy(p, q, amount); 103 103 p += amount; 104 104 105 105 *p++ = ' ';
-1
lib/test_kmod.c
··· 1214 1214 1215 1215 dev_info(test_dev->dev, "removing interface\n"); 1216 1216 misc_deregister(&test_dev->misc_dev); 1217 - kfree(&test_dev->misc_dev.name); 1218 1217 1219 1218 mutex_unlock(&test_dev->config_mutex); 1220 1219 mutex_unlock(&test_dev->trigger_mutex);
+1 -2
mm/gup.c
··· 702 702 if (!vma || start >= vma->vm_end) { 703 703 vma = find_extend_vma(mm, start); 704 704 if (!vma && in_gate_area(mm, start)) { 705 - int ret; 706 705 ret = get_gate_page(mm, start & PAGE_MASK, 707 706 gup_flags, &vma, 708 707 pages ? &pages[i] : NULL); 709 708 if (ret) 710 - return i ? : ret; 709 + goto out; 711 710 ctx.page_mask = 0; 712 711 goto next_page; 713 712 }
+25 -18
mm/huge_memory.c
··· 2350 2350 } 2351 2351 } 2352 2352 2353 - static void freeze_page(struct page *page) 2353 + static void unmap_page(struct page *page) 2354 2354 { 2355 2355 enum ttu_flags ttu_flags = TTU_IGNORE_MLOCK | TTU_IGNORE_ACCESS | 2356 2356 TTU_RMAP_LOCKED | TTU_SPLIT_HUGE_PMD; ··· 2365 2365 VM_BUG_ON_PAGE(!unmap_success, page); 2366 2366 } 2367 2367 2368 - static void unfreeze_page(struct page *page) 2368 + static void remap_page(struct page *page) 2369 2369 { 2370 2370 int i; 2371 2371 if (PageTransHuge(page)) { ··· 2402 2402 (1L << PG_unevictable) | 2403 2403 (1L << PG_dirty))); 2404 2404 2405 + /* ->mapping in first tail page is compound_mapcount */ 2406 + VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING, 2407 + page_tail); 2408 + page_tail->mapping = head->mapping; 2409 + page_tail->index = head->index + tail; 2410 + 2405 2411 /* Page flags must be visible before we make the page non-compound. */ 2406 2412 smp_wmb(); 2407 2413 ··· 2428 2422 if (page_is_idle(head)) 2429 2423 set_page_idle(page_tail); 2430 2424 2431 - /* ->mapping in first tail page is compound_mapcount */ 2432 - VM_BUG_ON_PAGE(tail > 2 && page_tail->mapping != TAIL_MAPPING, 2433 - page_tail); 2434 - page_tail->mapping = head->mapping; 2435 - 2436 - page_tail->index = head->index + tail; 2437 2425 page_cpupid_xchg_last(page_tail, page_cpupid_last(head)); 2438 2426 2439 2427 /* ··· 2439 2439 } 2440 2440 2441 2441 static void __split_huge_page(struct page *page, struct list_head *list, 2442 - unsigned long flags) 2442 + pgoff_t end, unsigned long flags) 2443 2443 { 2444 2444 struct page *head = compound_head(page); 2445 2445 struct zone *zone = page_zone(head); 2446 2446 struct lruvec *lruvec; 2447 - pgoff_t end = -1; 2448 2447 int i; 2449 2448 2450 2449 lruvec = mem_cgroup_page_lruvec(head, zone->zone_pgdat); 2451 2450 2452 2451 /* complete memcg works before add pages to LRU */ 2453 2452 mem_cgroup_split_huge_fixup(head); 2454 - 2455 - if (!PageAnon(page)) 2456 - end = DIV_ROUND_UP(i_size_read(head->mapping->host), PAGE_SIZE); 2457 2453 2458 2454 for (i = HPAGE_PMD_NR - 1; i >= 1; i--) { 2459 2455 __split_huge_page_tail(head, i, lruvec, list); ··· 2479 2483 2480 2484 spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags); 2481 2485 2482 - unfreeze_page(head); 2486 + remap_page(head); 2483 2487 2484 2488 for (i = 0; i < HPAGE_PMD_NR; i++) { 2485 2489 struct page *subpage = head + i; ··· 2622 2626 int count, mapcount, extra_pins, ret; 2623 2627 bool mlocked; 2624 2628 unsigned long flags; 2629 + pgoff_t end; 2625 2630 2626 2631 VM_BUG_ON_PAGE(is_huge_zero_page(page), page); 2627 2632 VM_BUG_ON_PAGE(!PageLocked(page), page); ··· 2645 2648 ret = -EBUSY; 2646 2649 goto out; 2647 2650 } 2651 + end = -1; 2648 2652 mapping = NULL; 2649 2653 anon_vma_lock_write(anon_vma); 2650 2654 } else { ··· 2659 2661 2660 2662 anon_vma = NULL; 2661 2663 i_mmap_lock_read(mapping); 2664 + 2665 + /* 2666 + *__split_huge_page() may need to trim off pages beyond EOF: 2667 + * but on 32-bit, i_size_read() takes an irq-unsafe seqlock, 2668 + * which cannot be nested inside the page tree lock. So note 2669 + * end now: i_size itself may be changed at any moment, but 2670 + * head page lock is good enough to serialize the trimming. 2671 + */ 2672 + end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE); 2662 2673 } 2663 2674 2664 2675 /* 2665 - * Racy check if we can split the page, before freeze_page() will 2676 + * Racy check if we can split the page, before unmap_page() will 2666 2677 * split PMDs 2667 2678 */ 2668 2679 if (!can_split_huge_page(head, &extra_pins)) { ··· 2680 2673 } 2681 2674 2682 2675 mlocked = PageMlocked(page); 2683 - freeze_page(head); 2676 + unmap_page(head); 2684 2677 VM_BUG_ON_PAGE(compound_mapcount(head), head); 2685 2678 2686 2679 /* Make sure the page is not on per-CPU pagevec as it takes pin */ ··· 2714 2707 if (mapping) 2715 2708 __dec_node_page_state(page, NR_SHMEM_THPS); 2716 2709 spin_unlock(&pgdata->split_queue_lock); 2717 - __split_huge_page(page, list, flags); 2710 + __split_huge_page(page, list, end, flags); 2718 2711 if (PageSwapCache(head)) { 2719 2712 swp_entry_t entry = { .val = page_private(head) }; 2720 2713 ··· 2734 2727 fail: if (mapping) 2735 2728 xa_unlock(&mapping->i_pages); 2736 2729 spin_unlock_irqrestore(zone_lru_lock(page_zone(head)), flags); 2737 - unfreeze_page(head); 2730 + remap_page(head); 2738 2731 ret = -EBUSY; 2739 2732 }
+1 -1
mm/hugetlb.c
··· 4080 4080 4081 4081 /* fallback to copy_from_user outside mmap_sem */ 4082 4082 if (unlikely(ret)) { 4083 - ret = -EFAULT; 4083 + ret = -ENOENT; 4084 4084 *pagep = page; 4085 4085 /* don't free the page */ 4086 4086 goto out;
+82 -60
mm/khugepaged.c
··· 1287 1287 * collapse_shmem - collapse small tmpfs/shmem pages into huge one. 1288 1288 * 1289 1289 * Basic scheme is simple, details are more complex: 1290 - * - allocate and freeze a new huge page; 1290 + * - allocate and lock a new huge page; 1291 1291 * - scan page cache replacing old pages with the new one 1292 1292 * + swap in pages if necessary; 1293 1293 * + fill in gaps; ··· 1295 1295 * - if replacing succeeds: 1296 1296 * + copy data over; 1297 1297 * + free old pages; 1298 - * + unfreeze huge page; 1298 + * + unlock huge page; 1299 1299 * - if replacing failed; 1300 1300 * + put all pages back and unfreeze them; 1301 1301 * + restore gaps in the page cache; 1302 - * + free huge page; 1302 + * + unlock and free huge page; 1303 1303 */ 1304 1304 static void collapse_shmem(struct mm_struct *mm, 1305 1305 struct address_space *mapping, pgoff_t start, ··· 1329 1329 goto out; 1330 1330 } 1331 1331 1332 - new_page->index = start; 1333 - new_page->mapping = mapping; 1334 - __SetPageSwapBacked(new_page); 1335 - __SetPageLocked(new_page); 1336 - BUG_ON(!page_ref_freeze(new_page, 1)); 1337 - 1338 - /* 1339 - * At this point the new_page is 'frozen' (page_count() is zero), 1340 - * locked and not up-to-date. It's safe to insert it into the page 1341 - * cache, because nobody would be able to map it or use it in other 1342 - * way until we unfreeze it. 1343 - */ 1344 - 1345 1332 /* This will be less messy when we use multi-index entries */ 1346 1333 do { 1347 1334 xas_lock_irq(&xas); ··· 1336 1349 if (!xas_error(&xas)) 1337 1350 break; 1338 1351 xas_unlock_irq(&xas); 1339 - if (!xas_nomem(&xas, GFP_KERNEL)) 1352 + if (!xas_nomem(&xas, GFP_KERNEL)) { 1353 + mem_cgroup_cancel_charge(new_page, memcg, true); 1354 + result = SCAN_FAIL; 1340 1355 goto out; 1356 + } 1341 1357 } while (1); 1358 + 1359 + __SetPageLocked(new_page); 1360 + __SetPageSwapBacked(new_page); 1361 + new_page->index = start; 1362 + new_page->mapping = mapping; 1363 + 1364 + /* 1365 + * At this point the new_page is locked and not up-to-date. 1366 + * It's safe to insert it into the page cache, because nobody would 1367 + * be able to map it or use it in another way until we unlock it. 1368 + */ 1342 1369 1343 1370 xas_set(&xas, start); 1344 1371 for (index = start; index < end; index++) { ··· 1360 1359 1361 1360 VM_BUG_ON(index != xas.xa_index); 1362 1361 if (!page) { 1362 + /* 1363 + * Stop if extent has been truncated or hole-punched, 1364 + * and is now completely empty. 1365 + */ 1366 + if (index == start) { 1367 + if (!xas_next_entry(&xas, end - 1)) { 1368 + result = SCAN_TRUNCATED; 1369 + goto xa_locked; 1370 + } 1371 + xas_set(&xas, index); 1372 + } 1363 1373 if (!shmem_charge(mapping->host, 1)) { 1364 1374 result = SCAN_FAIL; 1365 - break; 1375 + goto xa_locked; 1366 1376 } 1367 1377 xas_store(&xas, new_page + (index % HPAGE_PMD_NR)); 1368 1378 nr_none++; ··· 1388 1376 result = SCAN_FAIL; 1389 1377 goto xa_unlocked; 1390 1378 } 1391 - xas_lock_irq(&xas); 1392 - xas_set(&xas, index); 1393 1379 } else if (trylock_page(page)) { 1394 1380 get_page(page); 1381 + xas_unlock_irq(&xas); 1395 1382 } else { 1396 1383 result = SCAN_PAGE_LOCK; 1397 - break; 1384 + goto xa_locked; 1398 1385 } 1399 1386 1400 1387 /* ··· 1402 1391 */ 1403 1392 VM_BUG_ON_PAGE(!PageLocked(page), page); 1404 1393 VM_BUG_ON_PAGE(!PageUptodate(page), page); 1405 - VM_BUG_ON_PAGE(PageTransCompound(page), page); 1394 + 1395 + /* 1396 + * If file was truncated then extended, or hole-punched, before 1397 + * we locked the first page, then a THP might be there already. 1398 + */ 1399 + if (PageTransCompound(page)) { 1400 + result = SCAN_PAGE_COMPOUND; 1401 + goto out_unlock; 1402 + } 1406 1403 1407 1404 if (page_mapping(page) != mapping) { 1408 1405 result = SCAN_TRUNCATED; 1409 1406 goto out_unlock; 1410 1407 } 1411 - xas_unlock_irq(&xas); 1412 1408 1413 1409 if (isolate_lru_page(page)) { 1414 1410 result = SCAN_DEL_PAGE_LRU; 1415 - goto out_isolate_failed; 1411 + goto out_unlock; 1416 1412 } 1417 1413 1418 1414 if (page_mapped(page)) ··· 1439 1421 */ 1440 1422 if (!page_ref_freeze(page, 3)) { 1441 1423 result = SCAN_PAGE_COUNT; 1442 - goto out_lru; 1424 + xas_unlock_irq(&xas); 1425 + putback_lru_page(page); 1426 + goto out_unlock; 1443 1427 } 1444 1428 1445 1429 /* ··· 1453 1433 /* Finally, replace with the new page. */ 1454 1434 xas_store(&xas, new_page + (index % HPAGE_PMD_NR)); 1455 1435 continue; 1456 - out_lru: 1457 - xas_unlock_irq(&xas); 1458 - putback_lru_page(page); 1459 - out_isolate_failed: 1460 - unlock_page(page); 1461 - put_page(page); 1462 - goto xa_unlocked; 1463 1436 out_unlock: 1464 1437 unlock_page(page); 1465 1438 put_page(page); 1466 - break; 1439 + goto xa_unlocked; 1467 1440 } 1468 - xas_unlock_irq(&xas); 1469 1441 1442 + __inc_node_page_state(new_page, NR_SHMEM_THPS); 1443 + if (nr_none) { 1444 + struct zone *zone = page_zone(new_page); 1445 + 1446 + __mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none); 1447 + __mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none); 1448 + } 1449 + 1450 + xa_locked: 1451 + xas_unlock_irq(&xas); 1470 1452 xa_unlocked: 1453 + 1471 1454 if (result == SCAN_SUCCEED) { 1472 1455 struct page *page, *tmp; 1473 - struct zone *zone = page_zone(new_page); 1474 1456 1475 1457 /* 1476 1458 * Replacing old pages with new one has succeeded, now we 1477 1459 * need to copy the content and free the old pages. 1478 1460 */ 1461 + index = start; 1479 1462 list_for_each_entry_safe(page, tmp, &pagelist, lru) { 1463 + while (index < page->index) { 1464 + clear_highpage(new_page + (index % HPAGE_PMD_NR)); 1465 + index++; 1466 + } 1480 1467 copy_highpage(new_page + (page->index % HPAGE_PMD_NR), 1481 1468 page); 1482 1469 list_del(&page->lru); 1483 - unlock_page(page); 1484 - page_ref_unfreeze(page, 1); 1485 1470 page->mapping = NULL; 1471 + page_ref_unfreeze(page, 1); 1486 1472 ClearPageActive(page); 1487 1473 ClearPageUnevictable(page); 1474 + unlock_page(page); 1488 1475 put_page(page); 1476 + index++; 1477 + } 1478 + while (index < end) { 1479 + clear_highpage(new_page + (index % HPAGE_PMD_NR)); 1480 + index++; 1489 1481 } 1490 1482 1491 - local_irq_disable(); 1492 - __inc_node_page_state(new_page, NR_SHMEM_THPS); 1493 - if (nr_none) { 1494 - __mod_node_page_state(zone->zone_pgdat, NR_FILE_PAGES, nr_none); 1495 - __mod_node_page_state(zone->zone_pgdat, NR_SHMEM, nr_none); 1496 - } 1497 - local_irq_enable(); 1498 - 1499 - /* 1500 - * Remove pte page tables, so we can re-fault 1501 - * the page as huge. 1502 - */ 1503 - retract_page_tables(mapping, start); 1504 - 1505 - /* Everything is ready, let's unfreeze the new_page */ 1506 - set_page_dirty(new_page); 1507 1483 SetPageUptodate(new_page); 1508 - page_ref_unfreeze(new_page, HPAGE_PMD_NR); 1484 + page_ref_add(new_page, HPAGE_PMD_NR - 1); 1485 + set_page_dirty(new_page); 1509 1486 mem_cgroup_commit_charge(new_page, memcg, false, true); 1510 1487 lru_cache_add_anon(new_page); 1511 - unlock_page(new_page); 1512 1488 1489 + /* 1490 + * Remove pte page tables, so we can re-fault the page as huge. 1491 + */ 1492 + retract_page_tables(mapping, start); 1513 1493 *hpage = NULL; 1514 1494 1515 1495 khugepaged_pages_collapsed++; 1516 1496 } else { 1517 1497 struct page *page; 1498 + 1518 1499 /* Something went wrong: roll back page cache changes */ 1519 - shmem_uncharge(mapping->host, nr_none); 1520 1500 xas_lock_irq(&xas); 1501 + mapping->nrpages -= nr_none; 1502 + shmem_uncharge(mapping->host, nr_none); 1503 + 1521 1504 xas_set(&xas, start); 1522 1505 xas_for_each(&xas, page, end - 1) { 1523 1506 page = list_first_entry_or_null(&pagelist, ··· 1542 1519 xas_store(&xas, page); 1543 1520 xas_pause(&xas); 1544 1521 xas_unlock_irq(&xas); 1545 - putback_lru_page(page); 1546 1522 unlock_page(page); 1523 + putback_lru_page(page); 1547 1524 xas_lock_irq(&xas); 1548 1525 } 1549 1526 VM_BUG_ON(nr_none); 1550 1527 xas_unlock_irq(&xas); 1551 1528 1552 - /* Unfreeze new_page, caller would take care about freeing it */ 1553 - page_ref_unfreeze(new_page, 1); 1554 1529 mem_cgroup_cancel_charge(new_page, memcg, true); 1555 1530 unlock_page(new_page); 1556 - 1557 1531 new_page->mapping = NULL; 1558 1532 } 1533 + 1534 + unlock_page(new_page); 1558 1534 out: 1559 1535 VM_BUG_ON(!list_empty(&pagelist)); 1560 1536 /* TODO: tracepoints */
+3 -1
mm/page_alloc.c
··· 5813 5813 unsigned long size) 5814 5814 { 5815 5815 struct pglist_data *pgdat = zone->zone_pgdat; 5816 + int zone_idx = zone_idx(zone) + 1; 5816 5817 5817 - pgdat->nr_zones = zone_idx(zone) + 1; 5818 + if (zone_idx > pgdat->nr_zones) 5819 + pgdat->nr_zones = zone_idx; 5818 5820 5819 5821 zone->zone_start_pfn = zone_start_pfn; 5820 5822
+3 -10
mm/rmap.c
··· 1627 1627 address + PAGE_SIZE); 1628 1628 } else { 1629 1629 /* 1630 - * We should not need to notify here as we reach this 1631 - * case only from freeze_page() itself only call from 1632 - * split_huge_page_to_list() so everything below must 1633 - * be true: 1634 - * - page is not anonymous 1635 - * - page is locked 1636 - * 1637 - * So as it is a locked file back page thus it can not 1638 - * be remove from the page cache and replace by a new 1639 - * page before mmu_notifier_invalidate_range_end so no 1630 + * This is a locked file-backed page, thus it cannot 1631 + * be removed from the page cache and replaced by a new 1632 + * page before mmu_notifier_invalidate_range_end, so no 1640 1633 * concurrent thread might update its page table to 1641 1634 * point at new page while a device still is using this 1642 1635 * page.
+37 -6
mm/shmem.c
··· 297 297 if (!shmem_inode_acct_block(inode, pages)) 298 298 return false; 299 299 300 + /* nrpages adjustment first, then shmem_recalc_inode() when balanced */ 301 + inode->i_mapping->nrpages += pages; 302 + 300 303 spin_lock_irqsave(&info->lock, flags); 301 304 info->alloced += pages; 302 305 inode->i_blocks += pages * BLOCKS_PER_PAGE; 303 306 shmem_recalc_inode(inode); 304 307 spin_unlock_irqrestore(&info->lock, flags); 305 - inode->i_mapping->nrpages += pages; 306 308 307 309 return true; 308 310 } ··· 313 311 { 314 312 struct shmem_inode_info *info = SHMEM_I(inode); 315 313 unsigned long flags; 314 + 315 + /* nrpages adjustment done by __delete_from_page_cache() or caller */ 316 316 317 317 spin_lock_irqsave(&info->lock, flags); 318 318 info->alloced -= pages; ··· 1513 1509 { 1514 1510 struct page *oldpage, *newpage; 1515 1511 struct address_space *swap_mapping; 1512 + swp_entry_t entry; 1516 1513 pgoff_t swap_index; 1517 1514 int error; 1518 1515 1519 1516 oldpage = *pagep; 1520 - swap_index = page_private(oldpage); 1517 + entry.val = page_private(oldpage); 1518 + swap_index = swp_offset(entry); 1521 1519 swap_mapping = page_mapping(oldpage); 1522 1520 1523 1521 /* ··· 1538 1532 __SetPageLocked(newpage); 1539 1533 __SetPageSwapBacked(newpage); 1540 1534 SetPageUptodate(newpage); 1541 - set_page_private(newpage, swap_index); 1535 + set_page_private(newpage, entry.val); 1542 1536 SetPageSwapCache(newpage); 1543 1537 1544 1538 /* ··· 2220 2214 struct page *page; 2221 2215 pte_t _dst_pte, *dst_pte; 2222 2216 int ret; 2217 + pgoff_t offset, max_off; 2223 2218 2224 2219 ret = -ENOMEM; 2225 2220 if (!shmem_inode_acct_block(inode, 1)) ··· 2243 2236 *pagep = page; 2244 2237 shmem_inode_unacct_blocks(inode, 1); 2245 2238 /* don't free the page */ 2246 - return -EFAULT; 2239 + return -ENOENT; 2247 2240 } 2248 2241 } else { /* mfill_zeropage_atomic */ 2249 2242 clear_highpage(page); ··· 2257 2250 __SetPageLocked(page); 2258 2251 __SetPageSwapBacked(page); 2259 2252 __SetPageUptodate(page); 2253 + 2254 + ret = -EFAULT; 2255 + offset = linear_page_index(dst_vma, dst_addr); 2256 + max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); 2257 + if (unlikely(offset >= max_off)) 2258 + goto out_release; 2260 2259 2261 2260 ret = mem_cgroup_try_charge_delay(page, dst_mm, gfp, &memcg, false); 2262 2261 if (ret) ··· 2278 2265 _dst_pte = mk_pte(page, dst_vma->vm_page_prot); 2279 2266 if (dst_vma->vm_flags & VM_WRITE) 2280 2267 _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte)); 2268 + else { 2269 + /* 2270 + * We don't set the pte dirty if the vma has no 2271 + * VM_WRITE permission, so mark the page dirty or it 2272 + * could be freed from under us. We could do it 2273 + * unconditionally before unlock_page(), but doing it 2274 + * only if VM_WRITE is not set is faster. 2275 + */ 2276 + set_page_dirty(page); 2277 + } 2278 + 2279 + dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl); 2280 + 2281 + ret = -EFAULT; 2282 + max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); 2283 + if (unlikely(offset >= max_off)) 2284 + goto out_release_uncharge_unlock; 2281 2285 2282 2286 ret = -EEXIST; 2283 - dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl); 2284 2287 if (!pte_none(*dst_pte)) 2285 2288 goto out_release_uncharge_unlock; 2286 2289 ··· 2314 2285 2315 2286 /* No need to invalidate - it was non-present before */ 2316 2287 update_mmu_cache(dst_vma, dst_addr, dst_pte); 2317 - unlock_page(page); 2318 2288 pte_unmap_unlock(dst_pte, ptl); 2289 + unlock_page(page); 2319 2290 ret = 0; 2320 2291 out: 2321 2292 return ret; 2322 2293 out_release_uncharge_unlock: 2323 2294 pte_unmap_unlock(dst_pte, ptl); 2295 + ClearPageDirty(page); 2296 + delete_from_page_cache(page); 2324 2297 out_release_uncharge: 2325 2298 mem_cgroup_cancel_charge(page, memcg, false); 2326 2299 out_release:
+6 -2
mm/truncate.c
··· 517 517 */ 518 518 xa_lock_irq(&mapping->i_pages); 519 519 xa_unlock_irq(&mapping->i_pages); 520 - 521 - truncate_inode_pages(mapping, 0); 522 520 } 521 + 522 + /* 523 + * Cleancache needs notification even if there are no pages or shadow 524 + * entries. 525 + */ 526 + truncate_inode_pages(mapping, 0); 523 527 } 524 528 EXPORT_SYMBOL(truncate_inode_pages_final); 525 529
+46 -16
mm/userfaultfd.c
··· 33 33 void *page_kaddr; 34 34 int ret; 35 35 struct page *page; 36 + pgoff_t offset, max_off; 37 + struct inode *inode; 36 38 37 39 if (!*pagep) { 38 40 ret = -ENOMEM; ··· 50 48 51 49 /* fallback to copy_from_user outside mmap_sem */ 52 50 if (unlikely(ret)) { 53 - ret = -EFAULT; 51 + ret = -ENOENT; 54 52 *pagep = page; 55 53 /* don't free the page */ 56 54 goto out; ··· 75 73 if (dst_vma->vm_flags & VM_WRITE) 76 74 _dst_pte = pte_mkwrite(pte_mkdirty(_dst_pte)); 77 75 78 - ret = -EEXIST; 79 76 dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl); 77 + if (dst_vma->vm_file) { 78 + /* the shmem MAP_PRIVATE case requires checking the i_size */ 79 + inode = dst_vma->vm_file->f_inode; 80 + offset = linear_page_index(dst_vma, dst_addr); 81 + max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); 82 + ret = -EFAULT; 83 + if (unlikely(offset >= max_off)) 84 + goto out_release_uncharge_unlock; 85 + } 86 + ret = -EEXIST; 80 87 if (!pte_none(*dst_pte)) 81 88 goto out_release_uncharge_unlock; 82 89 ··· 119 108 pte_t _dst_pte, *dst_pte; 120 109 spinlock_t *ptl; 121 110 int ret; 111 + pgoff_t offset, max_off; 112 + struct inode *inode; 122 113 123 114 _dst_pte = pte_mkspecial(pfn_pte(my_zero_pfn(dst_addr), 124 115 dst_vma->vm_page_prot)); 125 - ret = -EEXIST; 126 116 dst_pte = pte_offset_map_lock(dst_mm, dst_pmd, dst_addr, &ptl); 117 + if (dst_vma->vm_file) { 118 + /* the shmem MAP_PRIVATE case requires checking the i_size */ 119 + inode = dst_vma->vm_file->f_inode; 120 + offset = linear_page_index(dst_vma, dst_addr); 121 + max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE); 122 + ret = -EFAULT; 123 + if (unlikely(offset >= max_off)) 124 + goto out_unlock; 125 + } 126 + ret = -EEXIST; 127 127 if (!pte_none(*dst_pte)) 128 128 goto out_unlock; 129 129 set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte); ··· 227 205 if (!dst_vma || !is_vm_hugetlb_page(dst_vma)) 228 206 goto out_unlock; 229 207 /* 230 - * Only allow __mcopy_atomic_hugetlb on userfaultfd 231 - * 
registered ranges. 208 + * Check the vma is registered in uffd, this is 209 + * required to enforce the VM_MAYWRITE check done at 210 + * uffd registration time. 232 211 */ 233 212 if (!dst_vma->vm_userfaultfd_ctx.ctx) 234 213 goto out_unlock; ··· 297 274 298 275 cond_resched(); 299 276 300 - if (unlikely(err == -EFAULT)) { 277 + if (unlikely(err == -ENOENT)) { 301 278 up_read(&dst_mm->mmap_sem); 302 279 BUG_ON(!page); 303 280 ··· 403 380 { 404 381 ssize_t err; 405 382 406 - if (vma_is_anonymous(dst_vma)) { 383 + /* 384 + * The normal page fault path for a shmem will invoke the 385 + * fault, fill the hole in the file and COW it right away. The 386 + * result generates plain anonymous memory. So when we are 387 + * asked to fill an hole in a MAP_PRIVATE shmem mapping, we'll 388 + * generate anonymous memory directly without actually filling 389 + * the hole. For the MAP_PRIVATE case the robustness check 390 + * only happens in the pagetable (to verify it's still none) 391 + * and not in the radix tree. 392 + */ 393 + if (!(dst_vma->vm_flags & VM_SHARED)) { 407 394 if (!zeropage) 408 395 err = mcopy_atomic_pte(dst_mm, dst_pmd, dst_vma, 409 396 dst_addr, src_addr, page); ··· 482 449 if (!dst_vma) 483 450 goto out_unlock; 484 451 /* 485 - * Be strict and only allow __mcopy_atomic on userfaultfd 486 - * registered ranges to prevent userland errors going 487 - * unnoticed. As far as the VM consistency is concerned, it 488 - * would be perfectly safe to remove this check, but there's 489 - * no useful usage for __mcopy_atomic ouside of userfaultfd 490 - * registered ranges. This is after all why these are ioctls 491 - * belonging to the userfaultfd and not syscalls. 452 + * Check the vma is registered in uffd, this is required to 453 + * enforce the VM_MAYWRITE check done at uffd registration 454 + * time. 492 455 */ 493 456 if (!dst_vma->vm_userfaultfd_ctx.ctx) 494 457 goto out_unlock; ··· 518 489 * dst_vma. 
519 490 */ 520 491 err = -ENOMEM; 521 - if (vma_is_anonymous(dst_vma) && unlikely(anon_vma_prepare(dst_vma))) 492 + if (!(dst_vma->vm_flags & VM_SHARED) && 493 + unlikely(anon_vma_prepare(dst_vma))) 522 494 goto out_unlock; 523 495 524 496 while (src_addr < src_start + len) { ··· 560 530 src_addr, &page, zeropage); 561 531 cond_resched(); 562 532 563 - if (unlikely(err == -EFAULT)) { 533 + if (unlikely(err == -ENOENT)) { 564 534 void *page_kaddr; 565 535 566 536 up_read(&dst_mm->mmap_sem);
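The i_size checks added above (in both mm/shmem.c and mm/userfaultfd.c) compute `max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE)` and fail with -EFAULT when the faulting page index is at or past it. A minimal user-space sketch of that rounding and bound check (PAGE_SIZE hardcoded for illustration, helper name hypothetical):

```c
#include <stdint.h>

#define PAGE_SIZE 4096u
/* Same rounding the kernel's DIV_ROUND_UP macro performs. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Return nonzero if page index 'offset' lies beyond a file of 'size'
 * bytes -- the condition that now raises -EFAULT in the patch. */
static int page_beyond_eof(uint64_t size, uint64_t offset)
{
    uint64_t max_off = DIV_ROUND_UP(size, PAGE_SIZE);
    return offset >= max_off;
}
```

A 4096-byte file has exactly one valid page index (0); a zero-byte file has none, so even index 0 is rejected.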
+2 -3
net/core/filter.c
··· 4852 4852 } else { 4853 4853 struct in6_addr *src6 = (struct in6_addr *)&tuple->ipv6.saddr; 4854 4854 struct in6_addr *dst6 = (struct in6_addr *)&tuple->ipv6.daddr; 4855 - u16 hnum = ntohs(tuple->ipv6.dport); 4856 4855 int sdif = inet6_sdif(skb); 4857 4856 4858 4857 if (proto == IPPROTO_TCP) 4859 4858 sk = __inet6_lookup(net, &tcp_hashinfo, skb, 0, 4860 4859 src6, tuple->ipv6.sport, 4861 - dst6, hnum, 4860 + dst6, ntohs(tuple->ipv6.dport), 4862 4861 dif, sdif, &refcounted); 4863 4862 else if (likely(ipv6_bpf_stub)) 4864 4863 sk = ipv6_bpf_stub->udp6_lib_lookup(net, 4865 4864 src6, tuple->ipv6.sport, 4866 - dst6, hnum, 4865 + dst6, tuple->ipv6.dport, 4867 4866 dif, sdif, 4868 4867 &udp_table, skb); 4869 4868 #endif
+2 -1
net/ipv4/ip_output.c
··· 939 939 unsigned int fraglen; 940 940 unsigned int fraggap; 941 941 unsigned int alloclen; 942 - unsigned int pagedlen = 0; 942 + unsigned int pagedlen; 943 943 struct sk_buff *skb_prev; 944 944 alloc_new_skb: 945 945 skb_prev = skb; ··· 956 956 if (datalen > mtu - fragheaderlen) 957 957 datalen = maxfraglen - fragheaderlen; 958 958 fraglen = datalen + fragheaderlen; 959 + pagedlen = 0; 959 960 960 961 if ((flags & MSG_MORE) && 961 962 !(rt->dst.dev->features&NETIF_F_SG))
+5 -2
net/ipv4/netfilter/ipt_MASQUERADE.c
··· 81 81 int ret; 82 82 83 83 ret = xt_register_target(&masquerade_tg_reg); 84 + if (ret) 85 + return ret; 84 86 85 - if (ret == 0) 86 - nf_nat_masquerade_ipv4_register_notifier(); 87 + ret = nf_nat_masquerade_ipv4_register_notifier(); 88 + if (ret) 89 + xt_unregister_target(&masquerade_tg_reg); 87 90 88 91 return ret; 89 92 }
+30 -8
net/ipv4/netfilter/nf_nat_masquerade_ipv4.c
··· 147 147 .notifier_call = masq_inet_event, 148 148 }; 149 149 150 - static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0); 150 + static int masq_refcnt; 151 + static DEFINE_MUTEX(masq_mutex); 151 152 152 - void nf_nat_masquerade_ipv4_register_notifier(void) 153 + int nf_nat_masquerade_ipv4_register_notifier(void) 153 154 { 155 + int ret = 0; 156 + 157 + mutex_lock(&masq_mutex); 154 158 /* check if the notifier was already set */ 155 - if (atomic_inc_return(&masquerade_notifier_refcount) > 1) 156 - return; 159 + if (++masq_refcnt > 1) 160 + goto out_unlock; 157 161 158 162 /* Register for device down reports */ 159 - register_netdevice_notifier(&masq_dev_notifier); 163 + ret = register_netdevice_notifier(&masq_dev_notifier); 164 + if (ret) 165 + goto err_dec; 160 166 /* Register IP address change reports */ 161 - register_inetaddr_notifier(&masq_inet_notifier); 167 + ret = register_inetaddr_notifier(&masq_inet_notifier); 168 + if (ret) 169 + goto err_unregister; 170 + 171 + mutex_unlock(&masq_mutex); 172 + return ret; 173 + 174 + err_unregister: 175 + unregister_netdevice_notifier(&masq_dev_notifier); 176 + err_dec: 177 + masq_refcnt--; 178 + out_unlock: 179 + mutex_unlock(&masq_mutex); 180 + return ret; 162 181 } 163 182 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_register_notifier); 164 183 165 184 void nf_nat_masquerade_ipv4_unregister_notifier(void) 166 185 { 186 + mutex_lock(&masq_mutex); 167 187 /* check if the notifier still has clients */ 168 - if (atomic_dec_return(&masquerade_notifier_refcount) > 0) 169 - return; 188 + if (--masq_refcnt > 0) 189 + goto out_unlock; 170 190 171 191 unregister_netdevice_notifier(&masq_dev_notifier); 172 192 unregister_inetaddr_notifier(&masq_inet_notifier); 193 + out_unlock: 194 + mutex_unlock(&masq_mutex); 173 195 } 174 196 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv4_unregister_notifier);
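The conversion above from `atomic_t` to a plain int guarded by `masq_mutex` exists so that a failed `register_netdevice_notifier()` can decrement the count and report the error without racing another caller. A user-space sketch of the same register-once/unregister-last shape (all names hypothetical, notifier state reduced to a flag):

```c
#include <pthread.h>

static int refcnt;
static int registered;   /* stands in for the real notifier state */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* First caller performs the registration; a failure rolls the
 * refcount back so a later caller can retry, as in the patch. */
static int register_once(int (*do_register)(void))
{
    int ret = 0;

    pthread_mutex_lock(&lock);
    if (++refcnt > 1)
        goto out;            /* already registered */
    ret = do_register();
    if (ret)
        refcnt--;            /* undo the bump on failure */
out:
    pthread_mutex_unlock(&lock);
    return ret;
}

static void unregister_last(void)
{
    pthread_mutex_lock(&lock);
    if (--refcnt == 0)
        registered = 0;      /* last user tears it down */
    pthread_mutex_unlock(&lock);
}

static int fake_register(void) { registered = 1; return 0; }
```

The mutex also serializes the partial-failure unwinding (unregister the netdevice notifier if the inetaddr one fails), which the old lock-free counter could not do safely.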
+3 -1
net/ipv4/netfilter/nft_masq_ipv4.c
··· 69 69 if (ret < 0) 70 70 return ret; 71 71 72 - nf_nat_masquerade_ipv4_register_notifier(); 72 + ret = nf_nat_masquerade_ipv4_register_notifier(); 73 + if (ret) 74 + nft_unregister_expr(&nft_masq_ipv4_type); 73 75 74 76 return ret; 75 77 }
+10 -6
net/ipv4/tcp_input.c
··· 579 579 u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr; 580 580 u32 delta_us; 581 581 582 - if (!delta) 583 - delta = 1; 584 - delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 585 - tcp_rcv_rtt_update(tp, delta_us, 0); 582 + if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) { 583 + if (!delta) 584 + delta = 1; 585 + delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 586 + tcp_rcv_rtt_update(tp, delta_us, 0); 587 + } 586 588 } 587 589 } 588 590 ··· 2912 2910 if (seq_rtt_us < 0 && tp->rx_opt.saw_tstamp && tp->rx_opt.rcv_tsecr && 2913 2911 flag & FLAG_ACKED) { 2914 2912 u32 delta = tcp_time_stamp(tp) - tp->rx_opt.rcv_tsecr; 2915 - u32 delta_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 2916 2913 2917 - seq_rtt_us = ca_rtt_us = delta_us; 2914 + if (likely(delta < INT_MAX / (USEC_PER_SEC / TCP_TS_HZ))) { 2915 + seq_rtt_us = delta * (USEC_PER_SEC / TCP_TS_HZ); 2916 + ca_rtt_us = seq_rtt_us; 2917 + } 2918 2918 } 2919 2919 rs->rtt_us = ca_rtt_us; /* RTT of last (S)ACKed packet (or -1) */ 2920 2920 if (seq_rtt_us < 0)
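Both tcp_input.c hunks wrap the `delta * (USEC_PER_SEC / TCP_TS_HZ)` conversion in a `delta < INT_MAX / scale` guard, the standard check before an integer multiply. A sketch with the scale factor written as an assumed constant:

```c
#include <limits.h>
#include <stdint.h>

#define SCALE 1000u   /* stands in for USEC_PER_SEC / TCP_TS_HZ */

/* Scale a timestamp delta to microseconds, refusing values whose
 * product would overflow; mirrors the likely() guard in the patch.
 * Returns -1 when the delta is too large to convert safely. */
static int64_t delta_to_us(uint32_t delta)
{
    if (delta >= INT_MAX / SCALE)
        return -1;
    if (!delta)
        delta = 1;        /* the patch keeps the minimum of one tick */
    return (int64_t)delta * SCALE;
}
```

Dividing the limit rather than multiplying the operand is the key: the comparison itself can never overflow.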
+6 -4
net/ipv4/tcp_timer.c
··· 40 40 { 41 41 struct inet_connection_sock *icsk = inet_csk(sk); 42 42 u32 elapsed, start_ts; 43 + s32 remaining; 43 44 44 45 start_ts = tcp_retransmit_stamp(sk); 45 46 if (!icsk->icsk_user_timeout || !start_ts) 46 47 return icsk->icsk_rto; 47 48 elapsed = tcp_time_stamp(tcp_sk(sk)) - start_ts; 48 - if (elapsed >= icsk->icsk_user_timeout) 49 + remaining = icsk->icsk_user_timeout - elapsed; 50 + if (remaining <= 0) 49 51 return 1; /* user timeout has passed; fire ASAP */ 50 - else 51 - return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(icsk->icsk_user_timeout - elapsed)); 52 + 53 + return min_t(u32, icsk->icsk_rto, msecs_to_jiffies(remaining)); 52 54 } 53 55 54 56 /** ··· 211 209 (boundary - linear_backoff_thresh) * TCP_RTO_MAX; 212 210 timeout = jiffies_to_msecs(timeout); 213 211 } 214 - return (tcp_time_stamp(tcp_sk(sk)) - start_ts) >= timeout; 212 + return (s32)(tcp_time_stamp(tcp_sk(sk)) - start_ts - timeout) >= 0; 215 213 } 216 214 217 215 /* A write timeout has occurred. Process the after effects. */
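The tcp_timer.c timeout test becomes `(s32)(tcp_time_stamp - start_ts - timeout) >= 0`: the subtraction is done in unsigned arithmetic and only the sign of the result is examined, so a start timestamp slightly in the future (or a wrapped 32-bit clock) yields a small negative value instead of a huge unsigned one. The idiom in isolation:

```c
#include <stdint.h>

/* Wraparound-safe "have 'timeout' ticks elapsed since 'start'?",
 * the comparison idiom adopted in the tcp_timer.c hunk above. */
static int timed_out(uint32_t now, uint32_t start, uint32_t timeout)
{
    return (int32_t)(now - start - timeout) >= 0;
}
```

With a naive `now - start >= timeout`, a start value just ahead of `now` wraps to an enormous elapsed time and fires the timeout immediately; the signed cast keeps it negative instead.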
+2 -1
net/ipv6/ip6_output.c
··· 1354 1354 unsigned int fraglen; 1355 1355 unsigned int fraggap; 1356 1356 unsigned int alloclen; 1357 - unsigned int pagedlen = 0; 1357 + unsigned int pagedlen; 1358 1358 alloc_new_skb: 1359 1359 /* There's no room in the current skb */ 1360 1360 if (skb) ··· 1378 1378 if (datalen > (cork->length <= mtu && !(cork->flags & IPCORK_ALLFRAG) ? mtu : maxfraglen) - fragheaderlen) 1379 1379 datalen = maxfraglen - fragheaderlen - rt->dst.trailer_len; 1380 1380 fraglen = datalen + fragheaderlen; 1381 + pagedlen = 0; 1381 1382 1382 1383 if ((flags & MSG_MORE) && 1383 1384 !(rt->dst.dev->features&NETIF_F_SG))
+2 -1
net/ipv6/netfilter.c
··· 24 24 unsigned int hh_len; 25 25 struct dst_entry *dst; 26 26 struct flowi6 fl6 = { 27 - .flowi6_oif = sk ? sk->sk_bound_dev_if : 0, 27 + .flowi6_oif = sk && sk->sk_bound_dev_if ? sk->sk_bound_dev_if : 28 + rt6_need_strict(&iph->daddr) ? skb_dst(skb)->dev->ifindex : 0, 28 29 .flowi6_mark = skb->mark, 29 30 .flowi6_uid = sock_net_uid(net, sk), 30 31 .daddr = iph->daddr,
+6 -2
net/ipv6/netfilter/ip6t_MASQUERADE.c
··· 58 58 int err; 59 59 60 60 err = xt_register_target(&masquerade_tg6_reg); 61 - if (err == 0) 62 - nf_nat_masquerade_ipv6_register_notifier(); 61 + if (err) 62 + return err; 63 + 64 + err = nf_nat_masquerade_ipv6_register_notifier(); 65 + if (err) 66 + xt_unregister_target(&masquerade_tg6_reg); 63 67 64 68 return err; 65 69 }
+37 -14
net/ipv6/netfilter/nf_nat_masquerade_ipv6.c
··· 132 132 * of ipv6 addresses being deleted), we also need to add an upper 133 133 * limit to the number of queued work items. 134 134 */ 135 - static int masq_inet_event(struct notifier_block *this, 136 - unsigned long event, void *ptr) 135 + static int masq_inet6_event(struct notifier_block *this, 136 + unsigned long event, void *ptr) 137 137 { 138 138 struct inet6_ifaddr *ifa = ptr; 139 139 const struct net_device *dev; ··· 171 171 return NOTIFY_DONE; 172 172 } 173 173 174 - static struct notifier_block masq_inet_notifier = { 175 - .notifier_call = masq_inet_event, 174 + static struct notifier_block masq_inet6_notifier = { 175 + .notifier_call = masq_inet6_event, 176 176 }; 177 177 178 - static atomic_t masquerade_notifier_refcount = ATOMIC_INIT(0); 178 + static int masq_refcnt; 179 + static DEFINE_MUTEX(masq_mutex); 179 180 180 - void nf_nat_masquerade_ipv6_register_notifier(void) 181 + int nf_nat_masquerade_ipv6_register_notifier(void) 181 182 { 182 - /* check if the notifier is already set */ 183 - if (atomic_inc_return(&masquerade_notifier_refcount) > 1) 184 - return; 183 + int ret = 0; 185 184 186 - register_netdevice_notifier(&masq_dev_notifier); 187 - register_inet6addr_notifier(&masq_inet_notifier); 185 + mutex_lock(&masq_mutex); 186 + /* check if the notifier is already set */ 187 + if (++masq_refcnt > 1) 188 + goto out_unlock; 189 + 190 + ret = register_netdevice_notifier(&masq_dev_notifier); 191 + if (ret) 192 + goto err_dec; 193 + 194 + ret = register_inet6addr_notifier(&masq_inet6_notifier); 195 + if (ret) 196 + goto err_unregister; 197 + 198 + mutex_unlock(&masq_mutex); 199 + return ret; 200 + 201 + err_unregister: 202 + unregister_netdevice_notifier(&masq_dev_notifier); 203 + err_dec: 204 + masq_refcnt--; 205 + out_unlock: 206 + mutex_unlock(&masq_mutex); 207 + return ret; 188 208 } 189 209 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_register_notifier); 190 210 191 211 void nf_nat_masquerade_ipv6_unregister_notifier(void) 192 212 { 213 + 
mutex_lock(&masq_mutex); 193 214 /* check if the notifier still has clients */ 194 - if (atomic_dec_return(&masquerade_notifier_refcount) > 0) 195 - return; 215 + if (--masq_refcnt > 0) 216 + goto out_unlock; 196 217 197 - unregister_inet6addr_notifier(&masq_inet_notifier); 218 + unregister_inet6addr_notifier(&masq_inet6_notifier); 198 219 unregister_netdevice_notifier(&masq_dev_notifier); 220 + out_unlock: 221 + mutex_unlock(&masq_mutex); 199 222 } 200 223 EXPORT_SYMBOL_GPL(nf_nat_masquerade_ipv6_unregister_notifier);
+3 -1
net/ipv6/netfilter/nft_masq_ipv6.c
··· 70 70 if (ret < 0) 71 71 return ret; 72 72 73 - nf_nat_masquerade_ipv6_register_notifier(); 73 + ret = nf_nat_masquerade_ipv6_register_notifier(); 74 + if (ret) 75 + nft_unregister_expr(&nft_masq_ipv6_type); 74 76 75 77 return ret; 76 78 }
+3
net/netfilter/ipvs/ip_vs_ctl.c
··· 3980 3980 3981 3981 static struct notifier_block ip_vs_dst_notifier = { 3982 3982 .notifier_call = ip_vs_dst_event, 3983 + #ifdef CONFIG_IP_VS_IPV6 3984 + .priority = ADDRCONF_NOTIFY_PRIORITY + 5, 3985 + #endif 3983 3986 }; 3984 3987 3985 3988 int __net_init ip_vs_control_net_init(struct netns_ipvs *ipvs)
+28 -16
net/netfilter/nf_conncount.c
··· 49 49 struct nf_conntrack_zone zone; 50 50 int cpu; 51 51 u32 jiffies32; 52 + bool dead; 52 53 struct rcu_head rcu_head; 53 54 }; 54 55 ··· 107 106 conn->zone = *zone; 108 107 conn->cpu = raw_smp_processor_id(); 109 108 conn->jiffies32 = (u32)jiffies; 110 - spin_lock(&list->list_lock); 109 + conn->dead = false; 110 + spin_lock_bh(&list->list_lock); 111 111 if (list->dead == true) { 112 112 kmem_cache_free(conncount_conn_cachep, conn); 113 - spin_unlock(&list->list_lock); 113 + spin_unlock_bh(&list->list_lock); 114 114 return NF_CONNCOUNT_SKIP; 115 115 } 116 116 list_add_tail(&conn->node, &list->head); 117 117 list->count++; 118 - spin_unlock(&list->list_lock); 118 + spin_unlock_bh(&list->list_lock); 119 119 return NF_CONNCOUNT_ADDED; 120 120 } 121 121 EXPORT_SYMBOL_GPL(nf_conncount_add); ··· 134 132 { 135 133 bool free_entry = false; 136 134 137 - spin_lock(&list->list_lock); 135 + spin_lock_bh(&list->list_lock); 138 136 139 - if (list->count == 0) { 140 - spin_unlock(&list->list_lock); 141 - return free_entry; 137 + if (conn->dead) { 138 + spin_unlock_bh(&list->list_lock); 139 + return free_entry; 142 140 } 143 141 144 142 list->count--; 143 + conn->dead = true; 145 144 list_del_rcu(&conn->node); 146 - if (list->count == 0) 145 + if (list->count == 0) { 146 + list->dead = true; 147 147 free_entry = true; 148 + } 148 149 149 - spin_unlock(&list->list_lock); 150 + spin_unlock_bh(&list->list_lock); 150 151 call_rcu(&conn->rcu_head, __conn_free); 151 152 return free_entry; 152 153 } ··· 250 245 { 251 246 spin_lock_init(&list->list_lock); 252 247 INIT_LIST_HEAD(&list->head); 253 - list->count = 1; 248 + list->count = 0; 254 249 list->dead = false; 255 250 } 256 251 EXPORT_SYMBOL_GPL(nf_conncount_list_init); ··· 264 259 struct nf_conn *found_ct; 265 260 unsigned int collected = 0; 266 261 bool free_entry = false; 262 + bool ret = false; 267 263 268 264 list_for_each_entry_safe(conn, conn_n, &list->head, node) { 269 265 found = find_or_evict(net, list, conn, 
&free_entry); ··· 294 288 if (collected > CONNCOUNT_GC_MAX_NODES) 295 289 return false; 296 290 } 297 - return false; 291 + 292 + spin_lock_bh(&list->list_lock); 293 + if (!list->count) { 294 + list->dead = true; 295 + ret = true; 296 + } 297 + spin_unlock_bh(&list->list_lock); 298 + 299 + return ret; 298 300 } 299 301 EXPORT_SYMBOL_GPL(nf_conncount_gc_list); 300 302 ··· 323 309 while (gc_count) { 324 310 rbconn = gc_nodes[--gc_count]; 325 311 spin_lock(&rbconn->list.list_lock); 326 - if (rbconn->list.count == 0 && rbconn->list.dead == false) { 327 - rbconn->list.dead = true; 328 - rb_erase(&rbconn->node, root); 329 - call_rcu(&rbconn->rcu_head, __tree_nodes_free); 330 - } 312 + rb_erase(&rbconn->node, root); 313 + call_rcu(&rbconn->rcu_head, __tree_nodes_free); 331 314 spin_unlock(&rbconn->list.list_lock); 332 315 } 333 316 } ··· 425 414 nf_conncount_list_init(&rbconn->list); 426 415 list_add(&conn->node, &rbconn->list.head); 427 416 count = 1; 417 + rbconn->list.count = count; 428 418 429 419 rb_link_node(&rbconn->node, parent, rbnode); 430 420 rb_insert_color(&rbconn->node, root);
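The new `conn->dead` and `list->dead` flags in nf_conncount implement mark-then-free under `list_lock`: whichever path takes the lock first marks the node dead, and a racing caller sees the flag and backs off instead of double-freeing. The shape in isolation (hypothetical node type, list bookkeeping omitted):

```c
#include <pthread.h>

struct node {
    int dead;                /* set once, only under the lock */
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Return 1 if this caller won the right to free 'n'; a second,
 * racing caller observes dead == 1 and returns 0. */
static int try_kill(struct node *n)
{
    int won = 0;

    pthread_mutex_lock(&list_lock);
    if (!n->dead) {
        n->dead = 1;
        won = 1;
    }
    pthread_mutex_unlock(&list_lock);
    return won;
}
```

The switch to `spin_lock_bh()` in the same hunks serves the related purpose of keeping softirq-context users from deadlocking against this critical section.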
+2 -12
net/netfilter/nf_conntrack_proto_gre.c
··· 43 43 #include <linux/netfilter/nf_conntrack_proto_gre.h> 44 44 #include <linux/netfilter/nf_conntrack_pptp.h> 45 45 46 - enum grep_conntrack { 47 - GRE_CT_UNREPLIED, 48 - GRE_CT_REPLIED, 49 - GRE_CT_MAX 50 - }; 51 - 52 46 static const unsigned int gre_timeouts[GRE_CT_MAX] = { 53 47 [GRE_CT_UNREPLIED] = 30*HZ, 54 48 [GRE_CT_REPLIED] = 180*HZ, 55 49 }; 56 50 57 51 static unsigned int proto_gre_net_id __read_mostly; 58 - struct netns_proto_gre { 59 - struct nf_proto_net nf; 60 - rwlock_t keymap_lock; 61 - struct list_head keymap_list; 62 - unsigned int gre_timeouts[GRE_CT_MAX]; 63 - }; 64 52 65 53 static inline struct netns_proto_gre *gre_pernet(struct net *net) 66 54 { ··· 389 401 static int __init nf_ct_proto_gre_init(void) 390 402 { 391 403 int ret; 404 + 405 + BUILD_BUG_ON(offsetof(struct netns_proto_gre, nf) != 0); 392 406 393 407 ret = register_pernet_subsys(&proto_gre_net_ops); 394 408 if (ret < 0)
+17 -29
net/netfilter/nf_tables_api.c
··· 2457 2457 static void nf_tables_rule_destroy(const struct nft_ctx *ctx, 2458 2458 struct nft_rule *rule) 2459 2459 { 2460 - struct nft_expr *expr; 2460 + struct nft_expr *expr, *next; 2461 2461 2462 2462 /* 2463 2463 * Careful: some expressions might not be initialized in case this ··· 2465 2465 */ 2466 2466 expr = nft_expr_first(rule); 2467 2467 while (expr != nft_expr_last(rule) && expr->ops) { 2468 + next = nft_expr_next(expr); 2468 2469 nf_tables_expr_destroy(ctx, expr); 2469 - expr = nft_expr_next(expr); 2470 + expr = next; 2470 2471 } 2471 2472 kfree(rule); 2472 2473 } ··· 2590 2589 2591 2590 if (chain->use == UINT_MAX) 2592 2591 return -EOVERFLOW; 2593 - } 2594 2592 2595 - if (nla[NFTA_RULE_POSITION]) { 2596 - if (!(nlh->nlmsg_flags & NLM_F_CREATE)) 2597 - return -EOPNOTSUPP; 2598 - 2599 - pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION])); 2600 - old_rule = __nft_rule_lookup(chain, pos_handle); 2601 - if (IS_ERR(old_rule)) { 2602 - NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]); 2603 - return PTR_ERR(old_rule); 2593 + if (nla[NFTA_RULE_POSITION]) { 2594 + pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION])); 2595 + old_rule = __nft_rule_lookup(chain, pos_handle); 2596 + if (IS_ERR(old_rule)) { 2597 + NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]); 2598 + return PTR_ERR(old_rule); 2599 + } 2604 2600 } 2605 2601 } 2606 2602 ··· 2667 2669 } 2668 2670 2669 2671 if (nlh->nlmsg_flags & NLM_F_REPLACE) { 2670 - if (!nft_is_active_next(net, old_rule)) { 2671 - err = -ENOENT; 2672 - goto err2; 2673 - } 2674 - trans = nft_trans_rule_add(&ctx, NFT_MSG_DELRULE, 2675 - old_rule); 2672 + trans = nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule); 2676 2673 if (trans == NULL) { 2677 2674 err = -ENOMEM; 2678 2675 goto err2; 2679 2676 } 2680 - nft_deactivate_next(net, old_rule); 2681 - chain->use--; 2682 - 2683 - if (nft_trans_rule_add(&ctx, NFT_MSG_NEWRULE, rule) == NULL) { 2684 - err = -ENOMEM; 2677 + err = nft_delrule(&ctx, old_rule); 2678 + 
if (err < 0) { 2679 + nft_trans_destroy(trans); 2685 2680 goto err2; 2686 2681 } 2687 2682 ··· 6315 6324 call_rcu(&old->h, __nf_tables_commit_chain_free_rules_old); 6316 6325 } 6317 6326 6318 - static void nf_tables_commit_chain_active(struct net *net, struct nft_chain *chain) 6327 + static void nf_tables_commit_chain(struct net *net, struct nft_chain *chain) 6319 6328 { 6320 6329 struct nft_rule **g0, **g1; 6321 6330 bool next_genbit; ··· 6432 6441 6433 6442 /* step 2. Make rules_gen_X visible to packet path */ 6434 6443 list_for_each_entry(table, &net->nft.tables, list) { 6435 - list_for_each_entry(chain, &table->chains, list) { 6436 - if (!nft_is_active_next(net, chain)) 6437 - continue; 6438 - nf_tables_commit_chain_active(net, chain); 6439 - } 6444 + list_for_each_entry(chain, &table->chains, list) 6445 + nf_tables_commit_chain(net, chain); 6440 6446 } 6441 6447 6442 6448 /*
+2 -1
net/netfilter/nft_compat.c
··· 520 520 void *info) 521 521 { 522 522 struct xt_match *match = expr->ops->data; 523 + struct module *me = match->me; 523 524 struct xt_mtdtor_param par; 524 525 525 526 par.net = ctx->net; ··· 531 530 par.match->destroy(&par); 532 531 533 532 if (nft_xt_put(container_of(expr->ops, struct nft_xt, ops))) 534 - module_put(match->me); 533 + module_put(me); 535 534 } 536 535 537 536 static void
+4 -1
net/netfilter/nft_flow_offload.c
··· 214 214 { 215 215 int err; 216 216 217 - register_netdevice_notifier(&flow_offload_netdev_notifier); 217 + err = register_netdevice_notifier(&flow_offload_netdev_notifier); 218 + if (err) 219 + goto err; 218 220 219 221 err = nft_register_expr(&nft_flow_offload_type); 220 222 if (err < 0) ··· 226 224 227 225 register_expr: 228 226 unregister_netdevice_notifier(&flow_offload_netdev_notifier); 227 + err: 229 228 return err; 230 229 } 231 230
-10
net/netfilter/xt_RATEEST.c
··· 201 201 return 0; 202 202 } 203 203 204 - static void __net_exit xt_rateest_net_exit(struct net *net) 205 - { 206 - struct xt_rateest_net *xn = net_generic(net, xt_rateest_id); 207 - int i; 208 - 209 - for (i = 0; i < ARRAY_SIZE(xn->hash); i++) 210 - WARN_ON_ONCE(!hlist_empty(&xn->hash[i])); 211 - } 212 - 213 204 static struct pernet_operations xt_rateest_net_ops = { 214 205 .init = xt_rateest_net_init, 215 - .exit = xt_rateest_net_exit, 216 206 .id = &xt_rateest_id, 217 207 .size = sizeof(struct xt_rateest_net), 218 208 };
+3 -6
net/netfilter/xt_hashlimit.c
··· 295 295 296 296 /* copy match config into hashtable config */ 297 297 ret = cfg_copy(&hinfo->cfg, (void *)cfg, 3); 298 - 299 - if (ret) 298 + if (ret) { 299 + vfree(hinfo); 300 300 return ret; 301 + } 301 302 302 303 hinfo->cfg.size = size; 303 304 if (hinfo->cfg.max == 0) ··· 815 814 int ret; 816 815 817 816 ret = cfg_copy(&cfg, (void *)&info->cfg, 1); 818 - 819 817 if (ret) 820 818 return ret; 821 819 ··· 830 830 int ret; 831 831 832 832 ret = cfg_copy(&cfg, (void *)&info->cfg, 2); 833 - 834 833 if (ret) 835 834 return ret; 836 835 ··· 920 921 return ret; 921 922 922 923 ret = cfg_copy(&cfg, (void *)&info->cfg, 1); 923 - 924 924 if (ret) 925 925 return ret; 926 926 ··· 938 940 return ret; 939 941 940 942 ret = cfg_copy(&cfg, (void *)&info->cfg, 2); 941 - 942 943 if (ret) 943 944 return ret; 944 945
+1
net/sctp/output.c
··· 410 410 head->truesize += skb->truesize; 411 411 head->data_len += skb->len; 412 412 head->len += skb->len; 413 + refcount_add(skb->truesize, &head->sk->sk_wmem_alloc); 413 414 414 415 __skb_header_release(skb); 415 416 }
+5 -2
net/tipc/node.c
··· 584 584 /* tipc_node_cleanup - delete nodes that does not 585 585 * have active links for NODE_CLEANUP_AFTER time 586 586 */ 587 - static int tipc_node_cleanup(struct tipc_node *peer) 587 + static bool tipc_node_cleanup(struct tipc_node *peer) 588 588 { 589 589 struct tipc_net *tn = tipc_net(peer->net); 590 590 bool deleted = false; 591 591 592 - spin_lock_bh(&tn->node_list_lock); 592 + /* If lock held by tipc_node_stop() the node will be deleted anyway */ 593 + if (!spin_trylock_bh(&tn->node_list_lock)) 594 + return false; 595 + 593 596 tipc_node_write_lock(peer); 594 597 595 598 if (!node_is_up(peer) && time_after(jiffies, peer->delete_at)) {
-2
scripts/Makefile.build
··· 236 236 objtool_args += --no-unreachable 237 237 endif 238 238 ifdef CONFIG_RETPOLINE 239 - ifneq ($(RETPOLINE_CFLAGS),) 240 239 objtool_args += --retpoline 241 - endif 242 240 endif 243 241 244 242
+2 -2
scripts/unifdef.c
··· 395 395 * When we have processed a group that starts off with a known-false 396 396 * #if/#elif sequence (which has therefore been deleted) followed by a 397 397 * #elif that we don't understand and therefore must keep, we edit the 398 - * latter into a #if to keep the nesting correct. We use strncpy() to 398 + * latter into a #if to keep the nesting correct. We use memcpy() to 399 399 * overwrite the 4 byte token "elif" with "if " without a '\0' byte. 400 400 * 401 401 * When we find a true #elif in a group, the following block will ··· 450 450 static void Itrue (void) { Ftrue(); ignoreon(); } 451 451 static void Ifalse(void) { Ffalse(); ignoreon(); } 452 452 /* modify this line */ 453 - static void Mpass (void) { strncpy(keyword, "if ", 4); Pelif(); } 453 + static void Mpass (void) { memcpy(keyword, "if ", 4); Pelif(); } 454 454 static void Mtrue (void) { keywordedit("else"); state(IS_TRUE_MIDDLE); } 455 455 static void Melif (void) { keywordedit("endif"); state(IS_FALSE_TRAILER); } 456 456 static void Melse (void) { keywordedit("endif"); state(IS_FALSE_ELSE); }
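The unifdef change swaps `strncpy()` for `memcpy()` when overwriting the 4-byte keyword in place: the copy length is fixed and intentionally carries no terminator, which is exactly what `memcpy` expresses (and what newer GCC string-truncation warnings object to with `strncpy`). A sketch of that in-place token edit, assuming a two-space `"if  "` padding so the replacement stays four bytes with no NUL written:

```c
#include <string.h>

/* Overwrite the first four bytes of 'keyword' in place, turning
 * "elif" into "if  " without writing a terminating NUL -- the rest
 * of the line buffer must stay intact. */
static void edit_elif_to_if(char *keyword)
{
    memcpy(keyword, "if  ", 4);
}
```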
+12 -1
security/selinux/nlmsgtab.c
··· 80 80 { RTM_NEWSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ }, 81 81 { RTM_GETSTATS, NETLINK_ROUTE_SOCKET__NLMSG_READ }, 82 82 { RTM_NEWCACHEREPORT, NETLINK_ROUTE_SOCKET__NLMSG_READ }, 83 + { RTM_NEWCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, 84 + { RTM_DELCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_WRITE }, 85 + { RTM_GETCHAIN, NETLINK_ROUTE_SOCKET__NLMSG_READ }, 83 86 }; 84 87 85 88 static const struct nlmsg_perm nlmsg_tcpdiag_perms[] = ··· 161 158 162 159 switch (sclass) { 163 160 case SECCLASS_NETLINK_ROUTE_SOCKET: 164 - /* RTM_MAX always point to RTM_SETxxxx, ie RTM_NEWxxx + 3 */ 161 + /* RTM_MAX always points to RTM_SETxxxx, ie RTM_NEWxxx + 3. 162 + * If the BUILD_BUG_ON() below fails you must update the 163 + * structures at the top of this file with the new mappings 164 + * before updating the BUILD_BUG_ON() macro! 165 + */ 165 166 BUILD_BUG_ON(RTM_MAX != (RTM_NEWCHAIN + 3)); 166 167 err = nlmsg_perm(nlmsg_type, perm, nlmsg_route_perms, 167 168 sizeof(nlmsg_route_perms)); ··· 177 170 break; 178 171 179 172 case SECCLASS_NETLINK_XFRM_SOCKET: 173 + /* If the BUILD_BUG_ON() below fails you must update the 174 + * structures at the top of this file with the new mappings 175 + * before updating the BUILD_BUG_ON() macro! 176 + */ 180 177 BUILD_BUG_ON(XFRM_MSG_MAX != XFRM_MSG_MAPPING); 181 178 err = nlmsg_perm(nlmsg_type, perm, nlmsg_xfrm_perms, 182 179 sizeof(nlmsg_xfrm_perms));
+45 -35
sound/core/control.c
··· 348 348 return 0; 349 349 } 350 350 351 + /* add a new kcontrol object; call with card->controls_rwsem locked */ 352 + static int __snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol) 353 + { 354 + struct snd_ctl_elem_id id; 355 + unsigned int idx; 356 + unsigned int count; 357 + 358 + id = kcontrol->id; 359 + if (id.index > UINT_MAX - kcontrol->count) 360 + return -EINVAL; 361 + 362 + if (snd_ctl_find_id(card, &id)) { 363 + dev_err(card->dev, 364 + "control %i:%i:%i:%s:%i is already present\n", 365 + id.iface, id.device, id.subdevice, id.name, id.index); 366 + return -EBUSY; 367 + } 368 + 369 + if (snd_ctl_find_hole(card, kcontrol->count) < 0) 370 + return -ENOMEM; 371 + 372 + list_add_tail(&kcontrol->list, &card->controls); 373 + card->controls_count += kcontrol->count; 374 + kcontrol->id.numid = card->last_numid + 1; 375 + card->last_numid += kcontrol->count; 376 + 377 + id = kcontrol->id; 378 + count = kcontrol->count; 379 + for (idx = 0; idx < count; idx++, id.index++, id.numid++) 380 + snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id); 381 + 382 + return 0; 383 + } 384 + 351 385 /** 352 386 * snd_ctl_add - add the control instance to the card 353 387 * @card: the card instance ··· 398 364 */ 399 365 int snd_ctl_add(struct snd_card *card, struct snd_kcontrol *kcontrol) 400 366 { 401 - struct snd_ctl_elem_id id; 402 - unsigned int idx; 403 - unsigned int count; 404 367 int err = -EINVAL; 405 368 406 369 if (! 
kcontrol) 407 370 return err; 408 371 if (snd_BUG_ON(!card || !kcontrol->info)) 409 372 goto error; 410 - id = kcontrol->id; 411 - if (id.index > UINT_MAX - kcontrol->count) 412 - goto error; 413 373 414 374 down_write(&card->controls_rwsem); 415 - if (snd_ctl_find_id(card, &id)) { 416 - up_write(&card->controls_rwsem); 417 - dev_err(card->dev, "control %i:%i:%i:%s:%i is already present\n", 418 - id.iface, 419 - id.device, 420 - id.subdevice, 421 - id.name, 422 - id.index); 423 - err = -EBUSY; 424 - goto error; 425 - } 426 - if (snd_ctl_find_hole(card, kcontrol->count) < 0) { 427 - up_write(&card->controls_rwsem); 428 - err = -ENOMEM; 429 - goto error; 430 - } 431 - list_add_tail(&kcontrol->list, &card->controls); 432 - card->controls_count += kcontrol->count; 433 - kcontrol->id.numid = card->last_numid + 1; 434 - card->last_numid += kcontrol->count; 435 - id = kcontrol->id; 436 - count = kcontrol->count; 375 + err = __snd_ctl_add(card, kcontrol); 437 376 up_write(&card->controls_rwsem); 438 - for (idx = 0; idx < count; idx++, id.index++, id.numid++) 439 - snd_ctl_notify(card, SNDRV_CTL_EVENT_MASK_ADD, &id); 377 + if (err < 0) 378 + goto error; 440 379 return 0; 441 380 442 381 error: ··· 1368 1361 kctl->tlv.c = snd_ctl_elem_user_tlv; 1369 1362 1370 1363 /* This function manage to free the instance on failure. */ 1371 - err = snd_ctl_add(card, kctl); 1372 - if (err < 0) 1373 - return err; 1364 + down_write(&card->controls_rwsem); 1365 + err = __snd_ctl_add(card, kctl); 1366 + if (err < 0) { 1367 + snd_ctl_free_one(kctl); 1368 + goto unlock; 1369 + } 1374 1370 offset = snd_ctl_get_ioff(kctl, &info->id); 1375 1371 snd_ctl_build_ioff(&info->id, kctl, offset); 1376 1372 /* ··· 1384 1374 * which locks the element. 1385 1375 */ 1386 1376 1387 - down_write(&card->controls_rwsem); 1388 1377 card->user_ctl_count++; 1389 - up_write(&card->controls_rwsem); 1390 1378 1379 + unlock: 1380 + up_write(&card->controls_rwsem); 1391 1381 return 0; 1392 1382 } 1393 1383
-2
sound/isa/wss/wss_lib.c
··· 1531 1531 if (err < 0) { 1532 1532 if (chip->release_dma) 1533 1533 chip->release_dma(chip, chip->dma_private_data, chip->dma1); 1534 - snd_free_pages(runtime->dma_area, runtime->dma_bytes); 1535 1534 return err; 1536 1535 } 1537 1536 chip->playback_substream = substream; ··· 1571 1572 if (err < 0) { 1572 1573 if (chip->release_dma) 1573 1574 chip->release_dma(chip, chip->dma_private_data, chip->dma2); 1574 - snd_free_pages(runtime->dma_area, runtime->dma_bytes); 1575 1575 return err; 1576 1576 } 1577 1577 chip->capture_substream = substream;
+1 -1
sound/pci/ac97/ac97_codec.c
··· 824 824 { 825 825 struct snd_ac97 *ac97 = snd_kcontrol_chip(kcontrol); 826 826 int reg = kcontrol->private_value & 0xff; 827 - int shift = (kcontrol->private_value >> 8) & 0xff; 827 + int shift = (kcontrol->private_value >> 8) & 0x0f; 828 828 int mask = (kcontrol->private_value >> 16) & 0xff; 829 829 // int invert = (kcontrol->private_value >> 24) & 0xff; 830 830 unsigned short value, old, new;
+2
sound/pci/hda/hda_intel.c
··· 2169 2169 /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ 2170 2170 SND_PCI_QUIRK(0x1849, 0xc892, "Asrock B85M-ITX", 0), 2171 2171 /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ 2172 + SND_PCI_QUIRK(0x1849, 0x0397, "Asrock N68C-S UCC", 0), 2173 + /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ 2172 2174 SND_PCI_QUIRK(0x1849, 0x7662, "Asrock H81M-HDS", 0), 2173 2175 /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */ 2174 2176 SND_PCI_QUIRK(0x1043, 0x8733, "Asus Prime X370-Pro", 0),
+36
sound/pci/hda/patch_realtek.c
··· 388 388 case 0x10ec0285: 389 389 case 0x10ec0298: 390 390 case 0x10ec0289: 391 + case 0x10ec0300: 391 392 alc_update_coef_idx(codec, 0x10, 1<<9, 0); 392 393 break; 393 394 case 0x10ec0275: ··· 2831 2830 ALC269_TYPE_ALC215, 2832 2831 ALC269_TYPE_ALC225, 2833 2832 ALC269_TYPE_ALC294, 2833 + ALC269_TYPE_ALC300, 2834 2834 ALC269_TYPE_ALC700, 2835 2835 }; 2836 2836 ··· 2866 2864 case ALC269_TYPE_ALC215: 2867 2865 case ALC269_TYPE_ALC225: 2868 2866 case ALC269_TYPE_ALC294: 2867 + case ALC269_TYPE_ALC300: 2869 2868 case ALC269_TYPE_ALC700: 2870 2869 ssids = alc269_ssids; 2871 2870 break; ··· 5361 5358 spec->gen.preferred_dacs = preferred_pairs; 5362 5359 } 5363 5360 5361 + /* The DAC of NID 0x3 will introduce click/pop noise on headphones, so invalidate it */ 5362 + static void alc285_fixup_invalidate_dacs(struct hda_codec *codec, 5363 + const struct hda_fixup *fix, int action) 5364 + { 5365 + if (action != HDA_FIXUP_ACT_PRE_PROBE) 5366 + return; 5367 + 5368 + snd_hda_override_wcaps(codec, 0x03, 0); 5369 + } 5370 + 5364 5371 /* for hda_fixup_thinkpad_acpi() */ 5365 5372 #include "thinkpad_helper.c" 5366 5373 ··· 5508 5495 ALC255_FIXUP_DELL_HEADSET_MIC, 5509 5496 ALC295_FIXUP_HP_X360, 5510 5497 ALC221_FIXUP_HP_HEADSET_MIC, 5498 + ALC285_FIXUP_LENOVO_HEADPHONE_NOISE, 5499 + ALC295_FIXUP_HP_AUTO_MUTE, 5511 5500 }; 5512 5501 5513 5502 static const struct hda_fixup alc269_fixups[] = { ··· 5674 5659 [ALC269_FIXUP_HP_MUTE_LED_MIC3] = { 5675 5660 .type = HDA_FIXUP_FUNC, 5676 5661 .v.func = alc269_fixup_hp_mute_led_mic3, 5662 + .chained = true, 5663 + .chain_id = ALC295_FIXUP_HP_AUTO_MUTE 5677 5664 }, 5678 5665 [ALC269_FIXUP_HP_GPIO_LED] = { 5679 5666 .type = HDA_FIXUP_FUNC, ··· 6379 6362 .chained = true, 6380 6363 .chain_id = ALC269_FIXUP_HEADSET_MIC 6381 6364 }, 6365 + [ALC285_FIXUP_LENOVO_HEADPHONE_NOISE] = { 6366 + .type = HDA_FIXUP_FUNC, 6367 + .v.func = alc285_fixup_invalidate_dacs, 6368 + }, 6369 + [ALC295_FIXUP_HP_AUTO_MUTE] = { 6370 + .type = HDA_FIXUP_FUNC, 6371 + 
.v.func = alc_fixup_auto_mute_via_amp, 6372 + }, 6382 6373 }; 6383 6374 6384 6375 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 6557 6532 SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8), 6558 6533 SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC), 6559 6534 SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC), 6535 + SND_PCI_QUIRK(0x1462, 0xb171, "Cubi N 8GL (MS-B171)", ALC283_FIXUP_HEADSET_MIC), 6560 6536 SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS), 6561 6537 SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE), 6562 6538 SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE), ··· 7060 7034 {0x12, 0x90a60130}, 7061 7035 {0x19, 0x03a11020}, 7062 7036 {0x21, 0x0321101f}), 7037 + SND_HDA_PIN_QUIRK(0x10ec0285, 0x17aa, "Lenovo", ALC285_FIXUP_LENOVO_HEADPHONE_NOISE, 7038 + {0x12, 0x90a60130}, 7039 + {0x14, 0x90170110}, 7040 + {0x19, 0x04a11040}, 7041 + {0x21, 0x04211020}), 7063 7042 SND_HDA_PIN_QUIRK(0x10ec0288, 0x1028, "Dell", ALC288_FIXUP_DELL1_MIC_NO_PRESENCE, 7064 7043 {0x12, 0x90a60120}, 7065 7044 {0x14, 0x90170110}, ··· 7325 7294 spec->codec_variant = ALC269_TYPE_ALC294; 7326 7295 spec->gen.mixer_nid = 0; /* ALC2x4 does not have any loopback mixer path */ 7327 7296 alc_update_coef_idx(codec, 0x6b, 0x0018, (1<<4) | (1<<3)); /* UAJ MIC Vref control by verb */ 7297 + break; 7298 + case 0x10ec0300: 7299 + spec->codec_variant = ALC269_TYPE_ALC300; 7300 + spec->gen.mixer_nid = 0; /* no loopback on ALC300 */ 7328 7301 break; 7329 7302 case 0x10ec0700: 7330 7303 case 0x10ec0701: ··· 8440 8405 HDA_CODEC_ENTRY(0x10ec0295, "ALC295", patch_alc269), 8441 8406 HDA_CODEC_ENTRY(0x10ec0298, "ALC298", patch_alc269), 8442 8407 HDA_CODEC_ENTRY(0x10ec0299, "ALC299", patch_alc269), 8408 + HDA_CODEC_ENTRY(0x10ec0300, "ALC300", patch_alc269), 8443 8409 HDA_CODEC_REV_ENTRY(0x10ec0861, 0x100340, "ALC660", 
patch_alc861), 8444 8410 HDA_CODEC_ENTRY(0x10ec0660, "ALC660-VD", patch_alc861vd), 8445 8411 HDA_CODEC_ENTRY(0x10ec0861, "ALC861", patch_alc861),
+5 -6
sound/soc/codecs/hdac_hdmi.c
··· 2187 2187 */ 2188 2188 snd_hdac_codec_read(hdev, hdev->afg, 0, AC_VERB_SET_POWER_STATE, 2189 2189 AC_PWRST_D3); 2190 - err = snd_hdac_display_power(bus, false); 2191 - if (err < 0) { 2192 - dev_err(dev, "Cannot turn on display power on i915\n"); 2193 - return err; 2194 - } 2195 2190 2196 2191 hlink = snd_hdac_ext_bus_get_link(bus, dev_name(dev)); 2197 2192 if (!hlink) { ··· 2196 2201 2197 2202 snd_hdac_ext_bus_link_put(bus, hlink); 2198 2203 2199 - return 0; 2204 + err = snd_hdac_display_power(bus, false); 2205 + if (err < 0) 2206 + dev_err(dev, "Cannot turn off display power on i915\n"); 2207 + 2208 + return err; 2200 2209 } 2201 2210 2202 2211 static int hdac_hdmi_runtime_resume(struct device *dev)
+1 -1
sound/soc/codecs/pcm186x.h
··· 139 139 #define PCM186X_MAX_REGISTER PCM186X_CURR_TRIM_CTRL 140 140 141 141 /* PCM186X_PAGE */ 142 - #define PCM186X_RESET 0xff 142 + #define PCM186X_RESET 0xfe 143 143 144 144 /* PCM186X_ADCX_INPUT_SEL_X */ 145 145 #define PCM186X_ADC_INPUT_SEL_POL BIT(7)
+4 -8
sound/soc/codecs/pcm3060.c
··· 198 198 }; 199 199 200 200 static const struct snd_soc_dapm_widget pcm3060_dapm_widgets[] = { 201 - SND_SOC_DAPM_OUTPUT("OUTL+"), 202 - SND_SOC_DAPM_OUTPUT("OUTR+"), 203 - SND_SOC_DAPM_OUTPUT("OUTL-"), 204 - SND_SOC_DAPM_OUTPUT("OUTR-"), 201 + SND_SOC_DAPM_OUTPUT("OUTL"), 202 + SND_SOC_DAPM_OUTPUT("OUTR"), 205 203 206 204 SND_SOC_DAPM_INPUT("INL"), 207 205 SND_SOC_DAPM_INPUT("INR"), 208 206 }; 209 207 210 208 static const struct snd_soc_dapm_route pcm3060_dapm_map[] = { 211 - { "OUTL+", NULL, "Playback" }, 212 - { "OUTR+", NULL, "Playback" }, 213 - { "OUTL-", NULL, "Playback" }, 214 - { "OUTR-", NULL, "Playback" }, 209 + { "OUTL", NULL, "Playback" }, 210 + { "OUTR", NULL, "Playback" }, 215 211 216 212 { "Capture", NULL, "INL" }, 217 213 { "Capture", NULL, "INR" },
+20 -17
sound/soc/codecs/wm_adsp.c
··· 765 765 766 766 static void wm_adsp2_show_fw_status(struct wm_adsp *dsp) 767 767 { 768 - u16 scratch[4]; 768 + unsigned int scratch[4]; 769 + unsigned int addr = dsp->base + ADSP2_SCRATCH0; 770 + unsigned int i; 769 771 int ret; 770 772 771 - ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2_SCRATCH0, 772 - scratch, sizeof(scratch)); 773 - if (ret) { 774 - adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret); 775 - return; 773 + for (i = 0; i < ARRAY_SIZE(scratch); ++i) { 774 + ret = regmap_read(dsp->regmap, addr + i, &scratch[i]); 775 + if (ret) { 776 + adsp_err(dsp, "Failed to read SCRATCH%u: %d\n", i, ret); 777 + return; 778 + } 776 779 } 777 780 778 781 adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n", 779 - be16_to_cpu(scratch[0]), 780 - be16_to_cpu(scratch[1]), 781 - be16_to_cpu(scratch[2]), 782 - be16_to_cpu(scratch[3])); 782 + scratch[0], scratch[1], scratch[2], scratch[3]); 783 783 } 784 784 785 785 static void wm_adsp2v2_show_fw_status(struct wm_adsp *dsp) 786 786 { 787 - u32 scratch[2]; 787 + unsigned int scratch[2]; 788 788 int ret; 789 789 790 - ret = regmap_raw_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1, 791 - scratch, sizeof(scratch)); 792 - 790 + ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH0_1, 791 + &scratch[0]); 793 792 if (ret) { 794 - adsp_err(dsp, "Failed to read SCRATCH regs: %d\n", ret); 793 + adsp_err(dsp, "Failed to read SCRATCH0_1: %d\n", ret); 795 794 return; 796 795 } 797 796 798 - scratch[0] = be32_to_cpu(scratch[0]); 799 - scratch[1] = be32_to_cpu(scratch[1]); 797 + ret = regmap_read(dsp->regmap, dsp->base + ADSP2V2_SCRATCH2_3, 798 + &scratch[1]); 799 + if (ret) { 800 + adsp_err(dsp, "Failed to read SCRATCH2_3: %d\n", ret); 801 + return; 802 + } 800 803 801 804 adsp_dbg(dsp, "FW SCRATCH 0:0x%x 1:0x%x 2:0x%x 3:0x%x\n", 802 805 scratch[0] & 0xFFFF,
+23 -3
sound/soc/intel/Kconfig
··· 101 101 codec, then enable this option by saying Y or m. This is a 102 102 recommended option 103 103 104 - config SND_SOC_INTEL_SKYLAKE_SSP_CLK 105 - tristate 106 - 107 104 config SND_SOC_INTEL_SKYLAKE 108 105 tristate "SKL/BXT/KBL/GLK/CNL... Platforms" 109 106 depends on PCI && ACPI 107 + select SND_SOC_INTEL_SKYLAKE_COMMON 108 + help 109 + If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/ 110 + GeminiLake or CannonLake platform with the DSP enabled in the BIOS 111 + then enable this option by saying Y or m. 112 + 113 + if SND_SOC_INTEL_SKYLAKE 114 + 115 + config SND_SOC_INTEL_SKYLAKE_SSP_CLK 116 + tristate 117 + 118 + config SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC 119 + bool "HDAudio codec support" 120 + help 121 + If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/ 122 + GeminiLake or CannonLake platform with an HDaudio codec 123 + then enable this option by saying Y 124 + 125 + config SND_SOC_INTEL_SKYLAKE_COMMON 126 + tristate 110 127 select SND_HDA_EXT_CORE 111 128 select SND_HDA_DSP_LOADER 112 129 select SND_SOC_TOPOLOGY 113 130 select SND_SOC_INTEL_SST 131 + select SND_SOC_HDAC_HDA if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC 114 132 select SND_SOC_ACPI_INTEL_MATCH 115 133 help 116 134 If you have a Intel Skylake/Broxton/ApolloLake/KabyLake/ 117 135 GeminiLake or CannonLake platform with the DSP enabled in the BIOS 118 136 then enable this option by saying Y or m. 137 + 138 + endif ## SND_SOC_INTEL_SKYLAKE 119 139 120 140 config SND_SOC_ACPI_INTEL_MATCH 121 141 tristate
+14 -10
sound/soc/intel/boards/Kconfig
··· 293 293 Say Y if you have such a device. 294 294 If unsure select "N". 295 295 296 - config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH 297 - tristate "SKL/KBL/BXT/APL with HDA Codecs" 298 - select SND_SOC_HDAC_HDMI 299 - select SND_SOC_HDAC_HDA 300 - help 301 - This adds support for ASoC machine driver for Intel platforms 302 - SKL/KBL/BXT/APL with iDisp, HDA audio codecs. 303 - Say Y or m if you have such a device. This is a recommended option. 304 - If unsure select "N". 305 - 306 296 config SND_SOC_INTEL_GLK_RT5682_MAX98357A_MACH 307 297 tristate "GLK with RT5682 and MAX98357A in I2S Mode" 308 298 depends on MFD_INTEL_LPSS && I2C && ACPI ··· 308 318 If unsure select "N". 309 319 310 320 endif ## SND_SOC_INTEL_SKYLAKE 321 + 322 + if SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC 323 + 324 + config SND_SOC_INTEL_SKL_HDA_DSP_GENERIC_MACH 325 + tristate "SKL/KBL/BXT/APL with HDA Codecs" 326 + select SND_SOC_HDAC_HDMI 327 + # SND_SOC_HDAC_HDA is already selected 328 + help 329 + This adds support for ASoC machine driver for Intel platforms 330 + SKL/KBL/BXT/APL with iDisp, HDA audio codecs. 331 + Say Y or m if you have such a device. This is a recommended option. 332 + If unsure select "N". 333 + 334 + endif ## SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC 311 335 312 336 endif ## SND_SOC_INTEL_MACH
+29 -3
sound/soc/intel/boards/cht_bsw_max98090_ti.c
··· 19 19 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 20 20 */ 21 21 22 + #include <linux/dmi.h> 22 23 #include <linux/module.h> 23 24 #include <linux/platform_device.h> 24 25 #include <linux/slab.h> ··· 35 34 36 35 #define CHT_PLAT_CLK_3_HZ 19200000 37 36 #define CHT_CODEC_DAI "HiFi" 37 + 38 + #define QUIRK_PMC_PLT_CLK_0 0x01 38 39 39 40 struct cht_mc_private { 40 41 struct clk *mclk; ··· 388 385 .num_controls = ARRAY_SIZE(cht_mc_controls), 389 386 }; 390 387 388 + static const struct dmi_system_id cht_max98090_quirk_table[] = { 389 + { 390 + /* Swanky model Chromebook (Toshiba Chromebook 2) */ 391 + .matches = { 392 + DMI_MATCH(DMI_PRODUCT_NAME, "Swanky"), 393 + }, 394 + .driver_data = (void *)QUIRK_PMC_PLT_CLK_0, 395 + }, 396 + {} 397 + }; 398 + 391 399 static int snd_cht_mc_probe(struct platform_device *pdev) 392 400 { 401 + const struct dmi_system_id *dmi_id; 393 402 struct device *dev = &pdev->dev; 394 403 int ret_val = 0; 395 404 struct cht_mc_private *drv; 405 + const char *mclk_name; 406 + int quirks = 0; 407 + 408 + dmi_id = dmi_first_match(cht_max98090_quirk_table); 409 + if (dmi_id) 410 + quirks = (unsigned long)dmi_id->driver_data; 396 411 397 412 drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_KERNEL); 398 413 if (!drv) ··· 432 411 snd_soc_card_cht.dev = &pdev->dev; 433 412 snd_soc_card_set_drvdata(&snd_soc_card_cht, drv); 434 413 435 - drv->mclk = devm_clk_get(&pdev->dev, "pmc_plt_clk_3"); 414 + if (quirks & QUIRK_PMC_PLT_CLK_0) 415 + mclk_name = "pmc_plt_clk_0"; 416 + else 417 + mclk_name = "pmc_plt_clk_3"; 418 + 419 + drv->mclk = devm_clk_get(&pdev->dev, mclk_name); 436 420 if (IS_ERR(drv->mclk)) { 437 421 dev_err(&pdev->dev, 438 - "Failed to get MCLK from pmc_plt_clk_3: %ld\n", 439 - PTR_ERR(drv->mclk)); 422 + "Failed to get MCLK from %s: %ld\n", 423 + mclk_name, PTR_ERR(drv->mclk)); 440 424 return PTR_ERR(drv->mclk); 441 425 } 442 426
+24 -8
sound/soc/intel/skylake/skl.c
··· 37 37 #include "skl.h" 38 38 #include "skl-sst-dsp.h" 39 39 #include "skl-sst-ipc.h" 40 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 40 41 #include "../../../soc/codecs/hdac_hda.h" 42 + #endif 41 43 42 44 /* 43 45 * initialize the PCI registers ··· 660 658 platform_device_unregister(skl->clk_dev); 661 659 } 662 660 661 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 662 + 663 663 #define IDISP_INTEL_VENDOR_ID 0x80860000 664 664 665 665 /* ··· 680 676 #endif 681 677 } 682 678 679 + #endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */ 680 + 683 681 /* 684 682 * Probe the given codec address 685 683 */ ··· 691 685 (AC_VERB_PARAMETERS << 8) | AC_PAR_VENDOR_ID; 692 686 unsigned int res = -1; 693 687 struct skl *skl = bus_to_skl(bus); 688 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 694 689 struct hdac_hda_priv *hda_codec; 695 - struct hdac_device *hdev; 696 690 int err; 691 + #endif 692 + struct hdac_device *hdev; 697 693 698 694 mutex_lock(&bus->cmd_mutex); 699 695 snd_hdac_bus_send_cmd(bus, cmd); ··· 705 697 return -EIO; 706 698 dev_dbg(bus->dev, "codec #%d probed OK: %x\n", addr, res); 707 699 700 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 708 701 hda_codec = devm_kzalloc(&skl->pci->dev, sizeof(*hda_codec), 709 702 GFP_KERNEL); 710 703 if (!hda_codec) ··· 724 715 load_codec_module(&hda_codec->codec); 725 716 } 726 717 return 0; 718 + #else 719 + hdev = devm_kzalloc(&skl->pci->dev, sizeof(*hdev), GFP_KERNEL); 720 + if (!hdev) 721 + return -ENOMEM; 722 + 723 + return snd_hdac_ext_bus_device_init(bus, addr, hdev); 724 + #endif /* CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC */ 727 725 } 728 726 729 727 /* Codec initialization */ ··· 831 815 } 832 816 } 833 817 818 + /* 819 + * we are done probing so decrement link counts 820 + */ 821 + list_for_each_entry(hlink, &bus->hlink_list, list) 822 + snd_hdac_ext_bus_link_put(bus, hlink); 823 + 834 824 if (IS_ENABLED(CONFIG_SND_SOC_HDAC_HDMI)) { 835 825 err = 
snd_hdac_display_power(bus, false); 836 826 if (err < 0) { ··· 845 823 return; 846 824 } 847 825 } 848 - 849 - /* 850 - * we are done probing so decrement link counts 851 - */ 852 - list_for_each_entry(hlink, &bus->hlink_list, list) 853 - snd_hdac_ext_bus_link_put(bus, hlink); 854 826 855 827 /* configure PM */ 856 828 pm_runtime_put_noidle(bus->dev); ··· 886 870 hbus = skl_to_hbus(skl); 887 871 bus = skl_to_bus(skl); 888 872 889 - #if IS_ENABLED(CONFIG_SND_SOC_HDAC_HDA) 873 + #if IS_ENABLED(CONFIG_SND_SOC_INTEL_SKYLAKE_HDAUDIO_CODEC) 890 874 ext_ops = snd_soc_hdac_hda_get_ops(); 891 875 #endif 892 876 snd_hdac_ext_bus_init(bus, &pci->dev, &bus_core_ops, io_ops, ext_ops);
+29 -38
sound/soc/omap/omap-abe-twl6040.c
··· 36 36 #include "../codecs/twl6040.h" 37 37 38 38 struct abe_twl6040 { 39 + struct snd_soc_card card; 40 + struct snd_soc_dai_link dai_links[2]; 39 41 int jack_detection; /* board can detect jack events */ 40 42 int mclk_freq; /* MCLK frequency speed for twl6040 */ 41 43 }; ··· 210 208 ARRAY_SIZE(dmic_audio_map)); 211 209 } 212 210 213 - /* Digital audio interface glue - connects codec <--> CPU */ 214 - static struct snd_soc_dai_link abe_twl6040_dai_links[] = { 215 - { 216 - .name = "TWL6040", 217 - .stream_name = "TWL6040", 218 - .codec_dai_name = "twl6040-legacy", 219 - .codec_name = "twl6040-codec", 220 - .init = omap_abe_twl6040_init, 221 - .ops = &omap_abe_ops, 222 - }, 223 - { 224 - .name = "DMIC", 225 - .stream_name = "DMIC Capture", 226 - .codec_dai_name = "dmic-hifi", 227 - .codec_name = "dmic-codec", 228 - .init = omap_abe_dmic_init, 229 - .ops = &omap_abe_dmic_ops, 230 - }, 231 - }; 232 - 233 - /* Audio machine driver */ 234 - static struct snd_soc_card omap_abe_card = { 235 - .owner = THIS_MODULE, 236 - 237 - .dapm_widgets = twl6040_dapm_widgets, 238 - .num_dapm_widgets = ARRAY_SIZE(twl6040_dapm_widgets), 239 - .dapm_routes = audio_map, 240 - .num_dapm_routes = ARRAY_SIZE(audio_map), 241 - }; 242 - 243 211 static int omap_abe_probe(struct platform_device *pdev) 244 212 { 245 213 struct device_node *node = pdev->dev.of_node; 246 - struct snd_soc_card *card = &omap_abe_card; 214 + struct snd_soc_card *card; 247 215 struct device_node *dai_node; 248 216 struct abe_twl6040 *priv; 249 217 int num_links = 0; ··· 224 252 return -ENODEV; 225 253 } 226 254 227 - card->dev = &pdev->dev; 228 - 229 255 priv = devm_kzalloc(&pdev->dev, sizeof(struct abe_twl6040), GFP_KERNEL); 230 256 if (priv == NULL) 231 257 return -ENOMEM; 258 + 259 + card = &priv->card; 260 + card->dev = &pdev->dev; 261 + card->owner = THIS_MODULE; 262 + card->dapm_widgets = twl6040_dapm_widgets; 263 + card->num_dapm_widgets = ARRAY_SIZE(twl6040_dapm_widgets); 264 + card->dapm_routes = 
audio_map; 265 + card->num_dapm_routes = ARRAY_SIZE(audio_map); 232 266 233 267 if (snd_soc_of_parse_card_name(card, "ti,model")) { 234 268 dev_err(&pdev->dev, "Card name is not provided\n"); ··· 252 274 dev_err(&pdev->dev, "McPDM node is not provided\n"); 253 275 return -EINVAL; 254 276 } 255 - abe_twl6040_dai_links[0].cpu_of_node = dai_node; 256 - abe_twl6040_dai_links[0].platform_of_node = dai_node; 277 + 278 + priv->dai_links[0].name = "DMIC"; 279 + priv->dai_links[0].stream_name = "TWL6040"; 280 + priv->dai_links[0].cpu_of_node = dai_node; 281 + priv->dai_links[0].platform_of_node = dai_node; 282 + priv->dai_links[0].codec_dai_name = "twl6040-legacy"; 283 + priv->dai_links[0].codec_name = "twl6040-codec"; 284 + priv->dai_links[0].init = omap_abe_twl6040_init; 285 + priv->dai_links[0].ops = &omap_abe_ops; 257 286 258 287 dai_node = of_parse_phandle(node, "ti,dmic", 0); 259 288 if (dai_node) { 260 289 num_links = 2; 261 - abe_twl6040_dai_links[1].cpu_of_node = dai_node; 262 - abe_twl6040_dai_links[1].platform_of_node = dai_node; 290 + priv->dai_links[1].name = "TWL6040"; 291 + priv->dai_links[1].stream_name = "DMIC Capture"; 292 + priv->dai_links[1].cpu_of_node = dai_node; 293 + priv->dai_links[1].platform_of_node = dai_node; 294 + priv->dai_links[1].codec_dai_name = "dmic-hifi"; 295 + priv->dai_links[1].codec_name = "dmic-codec"; 296 + priv->dai_links[1].init = omap_abe_dmic_init; 297 + priv->dai_links[1].ops = &omap_abe_dmic_ops; 263 298 } else { 264 299 num_links = 1; 265 300 } ··· 291 300 return -ENODEV; 292 301 } 293 302 294 - card->dai_link = abe_twl6040_dai_links; 303 + card->dai_link = priv->dai_links; 295 304 card->num_links = num_links; 296 305 297 306 snd_soc_card_set_drvdata(card, priv);
+9
sound/soc/omap/omap-dmic.c
··· 48 48 struct device *dev; 49 49 void __iomem *io_base; 50 50 struct clk *fclk; 51 + struct pm_qos_request pm_qos_req; 52 + int latency; 51 53 int fclk_freq; 52 54 int out_freq; 53 55 int clk_div; ··· 125 123 struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai); 126 124 127 125 mutex_lock(&dmic->mutex); 126 + 127 + pm_qos_remove_request(&dmic->pm_qos_req); 128 128 129 129 if (!dai->active) 130 130 dmic->active = 0; ··· 232 228 /* packet size is threshold * channels */ 233 229 dma_data = snd_soc_dai_get_dma_data(dai, substream); 234 230 dma_data->maxburst = dmic->threshold * channels; 231 + dmic->latency = (OMAP_DMIC_THRES_MAX - dmic->threshold) * USEC_PER_SEC / 232 + params_rate(params); 235 233 236 234 return 0; 237 235 } ··· 243 237 { 244 238 struct omap_dmic *dmic = snd_soc_dai_get_drvdata(dai); 245 239 u32 ctrl; 240 + 241 + if (pm_qos_request_active(&dmic->pm_qos_req)) 242 + pm_qos_update_request(&dmic->pm_qos_req, dmic->latency); 246 243 247 244 /* Configure uplink threshold */ 248 245 omap_dmic_write(dmic, OMAP_DMIC_FIFO_CTRL_REG, dmic->threshold);
+3 -3
sound/soc/omap/omap-mcbsp.c
··· 308 308 pkt_size = channels; 309 309 } 310 310 311 - latency = ((((buffer_size - pkt_size) / channels) * 1000) 312 - / (params->rate_num / params->rate_den)); 313 - 311 + latency = (buffer_size - pkt_size) / channels; 312 + latency = latency * USEC_PER_SEC / 313 + (params->rate_num / params->rate_den); 314 314 mcbsp->latency[substream->stream] = latency; 315 315 316 316 omap_mcbsp_set_threshold(substream, pkt_size);
+42 -1
sound/soc/omap/omap-mcpdm.c
··· 54 54 unsigned long phys_base; 55 55 void __iomem *io_base; 56 56 int irq; 57 + struct pm_qos_request pm_qos_req; 58 + int latency[2]; 57 59 58 60 struct mutex mutex; 59 61 ··· 279 277 struct snd_soc_dai *dai) 280 278 { 281 279 struct omap_mcpdm *mcpdm = snd_soc_dai_get_drvdata(dai); 280 + int tx = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK); 281 + int stream1 = tx ? SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE; 282 + int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; 282 283 283 284 mutex_lock(&mcpdm->mutex); 284 285 ··· 294 289 } 295 290 } 296 291 292 + if (mcpdm->latency[stream2]) 293 + pm_qos_update_request(&mcpdm->pm_qos_req, 294 + mcpdm->latency[stream2]); 295 + else if (mcpdm->latency[stream1]) 296 + pm_qos_remove_request(&mcpdm->pm_qos_req); 297 + 298 + mcpdm->latency[stream1] = 0; 299 + 297 300 mutex_unlock(&mcpdm->mutex); 298 301 } 299 302 ··· 313 300 int stream = substream->stream; 314 301 struct snd_dmaengine_dai_dma_data *dma_data; 315 302 u32 threshold; 316 - int channels; 303 + int channels, latency; 317 304 int link_mask = 0; 318 305 319 306 channels = params_channels(params); ··· 357 344 358 345 dma_data->maxburst = 359 346 (MCPDM_DN_THRES_MAX - threshold) * channels; 347 + latency = threshold; 360 348 } else { 361 349 /* If playback is not running assume a stereo stream to come */ 362 350 if (!mcpdm->config[!stream].link_mask) 363 351 mcpdm->config[!stream].link_mask = (0x3 << 3); 364 352 365 353 dma_data->maxburst = threshold * channels; 354 + latency = (MCPDM_DN_THRES_MAX - threshold); 366 355 } 356 + 357 + /* 358 + * The DMA must act to a DMA request within latency time (usec) to avoid 359 + * under/overflow 360 + */ 361 + mcpdm->latency[stream] = latency * USEC_PER_SEC / params_rate(params); 362 + 363 + if (!mcpdm->latency[stream]) 364 + mcpdm->latency[stream] = 10; 367 365 368 366 /* Check if we need to restart McPDM with this stream */ 369 367 if (mcpdm->config[stream].link_mask && ··· 390 366 struct 
snd_soc_dai *dai) 391 367 { 392 368 struct omap_mcpdm *mcpdm = snd_soc_dai_get_drvdata(dai); 369 + struct pm_qos_request *pm_qos_req = &mcpdm->pm_qos_req; 370 + int tx = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK); 371 + int stream1 = tx ? SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE; 372 + int stream2 = tx ? SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK; 373 + int latency = mcpdm->latency[stream2]; 374 + 375 + /* Prevent omap hardware from hitting off between FIFO fills */ 376 + if (!latency || mcpdm->latency[stream1] < latency) 377 + latency = mcpdm->latency[stream1]; 378 + 379 + if (pm_qos_request_active(pm_qos_req)) 380 + pm_qos_update_request(pm_qos_req, latency); 381 + else if (latency) 382 + pm_qos_add_request(pm_qos_req, PM_QOS_CPU_DMA_LATENCY, latency); 393 383 394 384 if (!omap_mcpdm_active(mcpdm)) { 395 385 omap_mcpdm_start(mcpdm); ··· 464 426 465 427 free_irq(mcpdm->irq, (void *)mcpdm); 466 428 pm_runtime_disable(mcpdm->dev); 429 + 430 + if (pm_qos_request_active(&mcpdm->pm_qos_req)) 431 + pm_qos_remove_request(&mcpdm->pm_qos_req); 467 432 468 433 return 0; 469 434 }
+6 -3
sound/soc/qcom/common.c
··· 13 13 struct device_node *cpu = NULL; 14 14 struct device *dev = card->dev; 15 15 struct snd_soc_dai_link *link; 16 + struct of_phandle_args args; 16 17 int ret, num_links; 17 18 18 19 ret = snd_soc_of_parse_card_name(card, "model"); ··· 48 47 goto err; 49 48 } 50 49 51 - link->cpu_of_node = of_parse_phandle(cpu, "sound-dai", 0); 52 - if (!link->cpu_of_node) { 50 + ret = of_parse_phandle_with_args(cpu, "sound-dai", 51 + "#sound-dai-cells", 0, &args); 52 + if (ret) { 53 53 dev_err(card->dev, "error getting cpu phandle\n"); 54 - ret = -EINVAL; 55 54 goto err; 56 55 } 56 + link->cpu_of_node = args.np; 57 + link->id = args.args[0]; 57 58 58 59 ret = snd_soc_of_get_dai_name(cpu, &link->cpu_dai_name); 59 60 if (ret) {
+138 -138
sound/soc/qcom/qdsp6/q6afe-dai.c
··· 1112 1112 } 1113 1113 1114 1114 static const struct snd_soc_dapm_widget q6afe_dai_widgets[] = { 1115 - SND_SOC_DAPM_AIF_OUT("HDMI_RX", "HDMI Playback", 0, 0, 0, 0), 1116 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_RX", "Slimbus Playback", 0, 0, 0, 0), 1117 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_RX", "Slimbus1 Playback", 0, 0, 0, 0), 1118 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_RX", "Slimbus2 Playback", 0, 0, 0, 0), 1119 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_RX", "Slimbus3 Playback", 0, 0, 0, 0), 1120 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_RX", "Slimbus4 Playback", 0, 0, 0, 0), 1121 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_RX", "Slimbus5 Playback", 0, 0, 0, 0), 1122 - SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_RX", "Slimbus6 Playback", 0, 0, 0, 0), 1123 - SND_SOC_DAPM_AIF_IN("SLIMBUS_0_TX", "Slimbus Capture", 0, 0, 0, 0), 1124 - SND_SOC_DAPM_AIF_IN("SLIMBUS_1_TX", "Slimbus1 Capture", 0, 0, 0, 0), 1125 - SND_SOC_DAPM_AIF_IN("SLIMBUS_2_TX", "Slimbus2 Capture", 0, 0, 0, 0), 1126 - SND_SOC_DAPM_AIF_IN("SLIMBUS_3_TX", "Slimbus3 Capture", 0, 0, 0, 0), 1127 - SND_SOC_DAPM_AIF_IN("SLIMBUS_4_TX", "Slimbus4 Capture", 0, 0, 0, 0), 1128 - SND_SOC_DAPM_AIF_IN("SLIMBUS_5_TX", "Slimbus5 Capture", 0, 0, 0, 0), 1129 - SND_SOC_DAPM_AIF_IN("SLIMBUS_6_TX", "Slimbus6 Capture", 0, 0, 0, 0), 1130 - SND_SOC_DAPM_AIF_OUT("QUAT_MI2S_RX", "Quaternary MI2S Playback", 1115 + SND_SOC_DAPM_AIF_IN("HDMI_RX", NULL, 0, 0, 0, 0), 1116 + SND_SOC_DAPM_AIF_IN("SLIMBUS_0_RX", NULL, 0, 0, 0, 0), 1117 + SND_SOC_DAPM_AIF_IN("SLIMBUS_1_RX", NULL, 0, 0, 0, 0), 1118 + SND_SOC_DAPM_AIF_IN("SLIMBUS_2_RX", NULL, 0, 0, 0, 0), 1119 + SND_SOC_DAPM_AIF_IN("SLIMBUS_3_RX", NULL, 0, 0, 0, 0), 1120 + SND_SOC_DAPM_AIF_IN("SLIMBUS_4_RX", NULL, 0, 0, 0, 0), 1121 + SND_SOC_DAPM_AIF_IN("SLIMBUS_5_RX", NULL, 0, 0, 0, 0), 1122 + SND_SOC_DAPM_AIF_IN("SLIMBUS_6_RX", NULL, 0, 0, 0, 0), 1123 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_0_TX", NULL, 0, 0, 0, 0), 1124 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_1_TX", NULL, 0, 0, 0, 0), 1125 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_2_TX", NULL, 0, 0, 0, 0), 
1126 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_3_TX", NULL, 0, 0, 0, 0), 1127 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_4_TX", NULL, 0, 0, 0, 0), 1128 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_5_TX", NULL, 0, 0, 0, 0), 1129 + SND_SOC_DAPM_AIF_OUT("SLIMBUS_6_TX", NULL, 0, 0, 0, 0), 1130 + SND_SOC_DAPM_AIF_IN("QUAT_MI2S_RX", NULL, 1131 1131 0, 0, 0, 0), 1132 - SND_SOC_DAPM_AIF_IN("QUAT_MI2S_TX", "Quaternary MI2S Capture", 1132 + SND_SOC_DAPM_AIF_OUT("QUAT_MI2S_TX", NULL, 1133 1133 0, 0, 0, 0), 1134 - SND_SOC_DAPM_AIF_OUT("TERT_MI2S_RX", "Tertiary MI2S Playback", 1134 + SND_SOC_DAPM_AIF_IN("TERT_MI2S_RX", NULL, 1135 1135 0, 0, 0, 0), 1136 - SND_SOC_DAPM_AIF_IN("TERT_MI2S_TX", "Tertiary MI2S Capture", 1136 + SND_SOC_DAPM_AIF_OUT("TERT_MI2S_TX", NULL, 1137 1137 0, 0, 0, 0), 1138 - SND_SOC_DAPM_AIF_OUT("SEC_MI2S_RX", "Secondary MI2S Playback", 1138 + SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX", NULL, 1139 1139 0, 0, 0, 0), 1140 - SND_SOC_DAPM_AIF_IN("SEC_MI2S_TX", "Secondary MI2S Capture", 1140 + SND_SOC_DAPM_AIF_OUT("SEC_MI2S_TX", NULL, 1141 1141 0, 0, 0, 0), 1142 - SND_SOC_DAPM_AIF_OUT("SEC_MI2S_RX_SD1", 1142 + SND_SOC_DAPM_AIF_IN("SEC_MI2S_RX_SD1", 1143 1143 "Secondary MI2S Playback SD1", 1144 1144 0, 0, 0, 0), 1145 - SND_SOC_DAPM_AIF_OUT("PRI_MI2S_RX", "Primary MI2S Playback", 1145 + SND_SOC_DAPM_AIF_IN("PRI_MI2S_RX", NULL, 1146 1146 0, 0, 0, 0), 1147 - SND_SOC_DAPM_AIF_IN("PRI_MI2S_TX", "Primary MI2S Capture", 1147 + SND_SOC_DAPM_AIF_OUT("PRI_MI2S_TX", NULL, 1148 1148 0, 0, 0, 0), 1149 1149 1150 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_0", "Primary TDM0 Playback", 1150 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_0", NULL, 1151 1151 0, 0, 0, 0), 1152 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_1", "Primary TDM1 Playback", 1152 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_1", NULL, 1153 1153 0, 0, 0, 0), 1154 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_2", "Primary TDM2 Playback", 1154 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_2", NULL, 1155 1155 0, 0, 0, 0), 1156 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_3", "Primary TDM3 Playback", 1156 
+ SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_3", NULL, 1157 1157 0, 0, 0, 0), 1158 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_4", "Primary TDM4 Playback", 1158 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_4", NULL, 1159 1159 0, 0, 0, 0), 1160 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_5", "Primary TDM5 Playback", 1160 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_5", NULL, 1161 1161 0, 0, 0, 0), 1162 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_6", "Primary TDM6 Playback", 1162 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_6", NULL, 1163 1163 0, 0, 0, 0), 1164 - SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_RX_7", "Primary TDM7 Playback", 1164 + SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_RX_7", NULL, 1165 1165 0, 0, 0, 0), 1166 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_0", "Primary TDM0 Capture", 1166 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_0", NULL, 1167 1167 0, 0, 0, 0), 1168 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_1", "Primary TDM1 Capture", 1168 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_1", NULL, 1169 1169 0, 0, 0, 0), 1170 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_2", "Primary TDM2 Capture", 1170 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_2", NULL, 1171 1171 0, 0, 0, 0), 1172 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_3", "Primary TDM3 Capture", 1172 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_3", NULL, 1173 1173 0, 0, 0, 0), 1174 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_4", "Primary TDM4 Capture", 1174 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_4", NULL, 1175 1175 0, 0, 0, 0), 1176 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_5", "Primary TDM5 Capture", 1176 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_5", NULL, 1177 1177 0, 0, 0, 0), 1178 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_6", "Primary TDM6 Capture", 1178 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_6", NULL, 1179 1179 0, 0, 0, 0), 1180 - SND_SOC_DAPM_AIF_IN("PRIMARY_TDM_TX_7", "Primary TDM7 Capture", 1181 - 0, 0, 0, 0), 1182 - 1183 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_0", "Secondary TDM0 Playback", 1184 - 0, 0, 0, 0), 1185 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_1", "Secondary TDM1 Playback", 1186 - 0, 0, 0, 0), 1187 - 
SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_2", "Secondary TDM2 Playback", 1188 - 0, 0, 0, 0), 1189 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_3", "Secondary TDM3 Playback", 1190 - 0, 0, 0, 0), 1191 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_4", "Secondary TDM4 Playback", 1192 - 0, 0, 0, 0), 1193 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_5", "Secondary TDM5 Playback", 1194 - 0, 0, 0, 0), 1195 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_6", "Secondary TDM6 Playback", 1196 - 0, 0, 0, 0), 1197 - SND_SOC_DAPM_AIF_OUT("SEC_TDM_RX_7", "Secondary TDM7 Playback", 1198 - 0, 0, 0, 0), 1199 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_0", "Secondary TDM0 Capture", 1200 - 0, 0, 0, 0), 1201 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_1", "Secondary TDM1 Capture", 1202 - 0, 0, 0, 0), 1203 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_2", "Secondary TDM2 Capture", 1204 - 0, 0, 0, 0), 1205 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_3", "Secondary TDM3 Capture", 1206 - 0, 0, 0, 0), 1207 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_4", "Secondary TDM4 Capture", 1208 - 0, 0, 0, 0), 1209 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_5", "Secondary TDM5 Capture", 1210 - 0, 0, 0, 0), 1211 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_6", "Secondary TDM6 Capture", 1212 - 0, 0, 0, 0), 1213 - SND_SOC_DAPM_AIF_IN("SEC_TDM_TX_7", "Secondary TDM7 Capture", 1180 + SND_SOC_DAPM_AIF_OUT("PRIMARY_TDM_TX_7", NULL, 1214 1181 0, 0, 0, 0), 1215 1182 1216 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_0", "Tertiary TDM0 Playback", 1183 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_0", NULL, 1217 1184 0, 0, 0, 0), 1218 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_1", "Tertiary TDM1 Playback", 1185 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_1", NULL, 1219 1186 0, 0, 0, 0), 1220 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_2", "Tertiary TDM2 Playback", 1187 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_2", NULL, 1221 1188 0, 0, 0, 0), 1222 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_3", "Tertiary TDM3 Playback", 1189 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_3", NULL, 1223 1190 0, 0, 0, 0), 1224 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_4", "Tertiary TDM4 Playback", 1191 + 
SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_4", NULL, 1225 1192 0, 0, 0, 0), 1226 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_5", "Tertiary TDM5 Playback", 1193 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_5", NULL, 1227 1194 0, 0, 0, 0), 1228 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_6", "Tertiary TDM6 Playback", 1195 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_6", NULL, 1229 1196 0, 0, 0, 0), 1230 - SND_SOC_DAPM_AIF_OUT("TERT_TDM_RX_7", "Tertiary TDM7 Playback", 1197 + SND_SOC_DAPM_AIF_IN("SEC_TDM_RX_7", NULL, 1231 1198 0, 0, 0, 0), 1232 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_0", "Tertiary TDM0 Capture", 1199 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_0", NULL, 1233 1200 0, 0, 0, 0), 1234 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_1", "Tertiary TDM1 Capture", 1201 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_1", NULL, 1235 1202 0, 0, 0, 0), 1236 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_2", "Tertiary TDM2 Capture", 1203 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_2", NULL, 1237 1204 0, 0, 0, 0), 1238 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_3", "Tertiary TDM3 Capture", 1205 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_3", NULL, 1239 1206 0, 0, 0, 0), 1240 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_4", "Tertiary TDM4 Capture", 1207 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_4", NULL, 1241 1208 0, 0, 0, 0), 1242 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_5", "Tertiary TDM5 Capture", 1209 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_5", NULL, 1243 1210 0, 0, 0, 0), 1244 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_6", "Tertiary TDM6 Capture", 1211 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_6", NULL, 1245 1212 0, 0, 0, 0), 1246 - SND_SOC_DAPM_AIF_IN("TERT_TDM_TX_7", "Tertiary TDM7 Capture", 1247 - 0, 0, 0, 0), 1248 - 1249 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_0", "Quaternary TDM0 Playback", 1250 - 0, 0, 0, 0), 1251 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_1", "Quaternary TDM1 Playback", 1252 - 0, 0, 0, 0), 1253 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_2", "Quaternary TDM2 Playback", 1254 - 0, 0, 0, 0), 1255 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_3", "Quaternary TDM3 Playback", 1256 - 0, 0, 0, 0), 1257 - 
SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_4", "Quaternary TDM4 Playback", 1258 - 0, 0, 0, 0), 1259 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_5", "Quaternary TDM5 Playback", 1260 - 0, 0, 0, 0), 1261 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_6", "Quaternary TDM6 Playback", 1262 - 0, 0, 0, 0), 1263 - SND_SOC_DAPM_AIF_OUT("QUAT_TDM_RX_7", "Quaternary TDM7 Playback", 1264 - 0, 0, 0, 0), 1265 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_0", "Quaternary TDM0 Capture", 1266 - 0, 0, 0, 0), 1267 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_1", "Quaternary TDM1 Capture", 1268 - 0, 0, 0, 0), 1269 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_2", "Quaternary TDM2 Capture", 1270 - 0, 0, 0, 0), 1271 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_3", "Quaternary TDM3 Capture", 1272 - 0, 0, 0, 0), 1273 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_4", "Quaternary TDM4 Capture", 1274 - 0, 0, 0, 0), 1275 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_5", "Quaternary TDM5 Capture", 1276 - 0, 0, 0, 0), 1277 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_6", "Quaternary TDM6 Capture", 1278 - 0, 0, 0, 0), 1279 - SND_SOC_DAPM_AIF_IN("QUAT_TDM_TX_7", "Quaternary TDM7 Capture", 1213 + SND_SOC_DAPM_AIF_OUT("SEC_TDM_TX_7", NULL, 1280 1214 0, 0, 0, 0), 1281 1215 1282 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_0", "Quinary TDM0 Playback", 1216 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_0", NULL, 1283 1217 0, 0, 0, 0), 1284 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_1", "Quinary TDM1 Playback", 1218 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_1", NULL, 1285 1219 0, 0, 0, 0), 1286 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_2", "Quinary TDM2 Playback", 1220 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_2", NULL, 1287 1221 0, 0, 0, 0), 1288 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_3", "Quinary TDM3 Playback", 1222 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_3", NULL, 1289 1223 0, 0, 0, 0), 1290 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_4", "Quinary TDM4 Playback", 1224 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_4", NULL, 1291 1225 0, 0, 0, 0), 1292 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_5", "Quinary TDM5 Playback", 1226 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_5", NULL, 1293 
1227 0, 0, 0, 0), 1294 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_6", "Quinary TDM6 Playback", 1228 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_6", NULL, 1295 1229 0, 0, 0, 0), 1296 - SND_SOC_DAPM_AIF_OUT("QUIN_TDM_RX_7", "Quinary TDM7 Playback", 1230 + SND_SOC_DAPM_AIF_IN("TERT_TDM_RX_7", NULL, 1297 1231 0, 0, 0, 0), 1298 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_0", "Quinary TDM0 Capture", 1232 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_0", NULL, 1299 1233 0, 0, 0, 0), 1300 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_1", "Quinary TDM1 Capture", 1234 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_1", NULL, 1301 1235 0, 0, 0, 0), 1302 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_2", "Quinary TDM2 Capture", 1236 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_2", NULL, 1303 1237 0, 0, 0, 0), 1304 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_3", "Quinary TDM3 Capture", 1238 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_3", NULL, 1305 1239 0, 0, 0, 0), 1306 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_4", "Quinary TDM4 Capture", 1240 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_4", NULL, 1307 1241 0, 0, 0, 0), 1308 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_5", "Quinary TDM5 Capture", 1242 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_5", NULL, 1309 1243 0, 0, 0, 0), 1310 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_6", "Quinary TDM6 Capture", 1244 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_6", NULL, 1311 1245 0, 0, 0, 0), 1312 - SND_SOC_DAPM_AIF_IN("QUIN_TDM_TX_7", "Quinary TDM7 Capture", 1246 + SND_SOC_DAPM_AIF_OUT("TERT_TDM_TX_7", NULL, 1247 + 0, 0, 0, 0), 1248 + 1249 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_0", NULL, 1250 + 0, 0, 0, 0), 1251 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_1", NULL, 1252 + 0, 0, 0, 0), 1253 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_2", NULL, 1254 + 0, 0, 0, 0), 1255 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_3", NULL, 1256 + 0, 0, 0, 0), 1257 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_4", NULL, 1258 + 0, 0, 0, 0), 1259 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_5", NULL, 1260 + 0, 0, 0, 0), 1261 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_6", NULL, 1262 + 0, 0, 0, 0), 1263 + SND_SOC_DAPM_AIF_IN("QUAT_TDM_RX_7", NULL, 1264 + 0, 0, 0, 
0), 1265 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_0", NULL, 1266 + 0, 0, 0, 0), 1267 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_1", NULL, 1268 + 0, 0, 0, 0), 1269 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_2", NULL, 1270 + 0, 0, 0, 0), 1271 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_3", NULL, 1272 + 0, 0, 0, 0), 1273 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_4", NULL, 1274 + 0, 0, 0, 0), 1275 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_5", NULL, 1276 + 0, 0, 0, 0), 1277 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_6", NULL, 1278 + 0, 0, 0, 0), 1279 + SND_SOC_DAPM_AIF_OUT("QUAT_TDM_TX_7", NULL, 1280 + 0, 0, 0, 0), 1281 + 1282 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_0", NULL, 1283 + 0, 0, 0, 0), 1284 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_1", NULL, 1285 + 0, 0, 0, 0), 1286 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_2", NULL, 1287 + 0, 0, 0, 0), 1288 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_3", NULL, 1289 + 0, 0, 0, 0), 1290 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_4", NULL, 1291 + 0, 0, 0, 0), 1292 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_5", NULL, 1293 + 0, 0, 0, 0), 1294 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_6", NULL, 1295 + 0, 0, 0, 0), 1296 + SND_SOC_DAPM_AIF_IN("QUIN_TDM_RX_7", NULL, 1297 + 0, 0, 0, 0), 1298 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_0", NULL, 1299 + 0, 0, 0, 0), 1300 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_1", NULL, 1301 + 0, 0, 0, 0), 1302 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_2", NULL, 1303 + 0, 0, 0, 0), 1304 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_3", NULL, 1305 + 0, 0, 0, 0), 1306 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_4", NULL, 1307 + 0, 0, 0, 0), 1308 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_5", NULL, 1309 + 0, 0, 0, 0), 1310 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_6", NULL, 1311 + 0, 0, 0, 0), 1312 + SND_SOC_DAPM_AIF_OUT("QUIN_TDM_TX_7", NULL, 1313 1313 0, 0, 0, 0), 1314 1314 }; 1315 1315
+8 -8
sound/soc/qcom/qdsp6/q6afe.c
··· 49 49 #define AFE_PORT_I2S_SD1 0x2 50 50 #define AFE_PORT_I2S_SD2 0x3 51 51 #define AFE_PORT_I2S_SD3 0x4 52 - #define AFE_PORT_I2S_SD0_MASK BIT(0x1) 53 - #define AFE_PORT_I2S_SD1_MASK BIT(0x2) 54 - #define AFE_PORT_I2S_SD2_MASK BIT(0x3) 55 - #define AFE_PORT_I2S_SD3_MASK BIT(0x4) 56 - #define AFE_PORT_I2S_SD0_1_MASK GENMASK(2, 1) 57 - #define AFE_PORT_I2S_SD2_3_MASK GENMASK(4, 3) 58 - #define AFE_PORT_I2S_SD0_1_2_MASK GENMASK(3, 1) 59 - #define AFE_PORT_I2S_SD0_1_2_3_MASK GENMASK(4, 1) 52 + #define AFE_PORT_I2S_SD0_MASK BIT(0x0) 53 + #define AFE_PORT_I2S_SD1_MASK BIT(0x1) 54 + #define AFE_PORT_I2S_SD2_MASK BIT(0x2) 55 + #define AFE_PORT_I2S_SD3_MASK BIT(0x3) 56 + #define AFE_PORT_I2S_SD0_1_MASK GENMASK(1, 0) 57 + #define AFE_PORT_I2S_SD2_3_MASK GENMASK(3, 2) 58 + #define AFE_PORT_I2S_SD0_1_2_MASK GENMASK(2, 0) 59 + #define AFE_PORT_I2S_SD0_1_2_3_MASK GENMASK(3, 0) 60 60 #define AFE_PORT_I2S_QUAD01 0x5 61 61 #define AFE_PORT_I2S_QUAD23 0x6 62 62 #define AFE_PORT_I2S_6CHS 0x7
-33
sound/soc/qcom/qdsp6/q6asm-dai.c
··· 122 122 .rate_max = 48000, \ 123 123 }, \ 124 124 .name = "MultiMedia"#num, \ 125 - .probe = fe_dai_probe, \ 126 125 .id = MSM_FRONTEND_DAI_MULTIMEDIA##num, \ 127 126 } 128 127 ··· 509 510 } 510 511 } 511 512 } 512 - 513 - static const struct snd_soc_dapm_route afe_pcm_routes[] = { 514 - {"MM_DL1", NULL, "MultiMedia1 Playback" }, 515 - {"MM_DL2", NULL, "MultiMedia2 Playback" }, 516 - {"MM_DL3", NULL, "MultiMedia3 Playback" }, 517 - {"MM_DL4", NULL, "MultiMedia4 Playback" }, 518 - {"MM_DL5", NULL, "MultiMedia5 Playback" }, 519 - {"MM_DL6", NULL, "MultiMedia6 Playback" }, 520 - {"MM_DL7", NULL, "MultiMedia7 Playback" }, 521 - {"MM_DL7", NULL, "MultiMedia8 Playback" }, 522 - {"MultiMedia1 Capture", NULL, "MM_UL1"}, 523 - {"MultiMedia2 Capture", NULL, "MM_UL2"}, 524 - {"MultiMedia3 Capture", NULL, "MM_UL3"}, 525 - {"MultiMedia4 Capture", NULL, "MM_UL4"}, 526 - {"MultiMedia5 Capture", NULL, "MM_UL5"}, 527 - {"MultiMedia6 Capture", NULL, "MM_UL6"}, 528 - {"MultiMedia7 Capture", NULL, "MM_UL7"}, 529 - {"MultiMedia8 Capture", NULL, "MM_UL8"}, 530 - 531 - }; 532 - 533 - static int fe_dai_probe(struct snd_soc_dai *dai) 534 - { 535 - struct snd_soc_dapm_context *dapm; 536 - 537 - dapm = snd_soc_component_get_dapm(dai->component); 538 - snd_soc_dapm_add_routes(dapm, afe_pcm_routes, 539 - ARRAY_SIZE(afe_pcm_routes)); 540 - 541 - return 0; 542 - } 543 - 544 513 545 514 static const struct snd_soc_component_driver q6asm_fe_dai_component = { 546 515 .name = DRV_NAME,
+19
sound/soc/qcom/qdsp6/q6routing.c
··· 909 909 {"MM_UL6", NULL, "MultiMedia6 Mixer"}, 910 910 {"MM_UL7", NULL, "MultiMedia7 Mixer"}, 911 911 {"MM_UL8", NULL, "MultiMedia8 Mixer"}, 912 + 913 + {"MM_DL1", NULL, "MultiMedia1 Playback" }, 914 + {"MM_DL2", NULL, "MultiMedia2 Playback" }, 915 + {"MM_DL3", NULL, "MultiMedia3 Playback" }, 916 + {"MM_DL4", NULL, "MultiMedia4 Playback" }, 917 + {"MM_DL5", NULL, "MultiMedia5 Playback" }, 918 + {"MM_DL6", NULL, "MultiMedia6 Playback" }, 919 + {"MM_DL7", NULL, "MultiMedia7 Playback" }, 920 + {"MM_DL8", NULL, "MultiMedia8 Playback" }, 921 + 922 + {"MultiMedia1 Capture", NULL, "MM_UL1"}, 923 + {"MultiMedia2 Capture", NULL, "MM_UL2"}, 924 + {"MultiMedia3 Capture", NULL, "MM_UL3"}, 925 + {"MultiMedia4 Capture", NULL, "MM_UL4"}, 926 + {"MultiMedia5 Capture", NULL, "MM_UL5"}, 927 + {"MultiMedia6 Capture", NULL, "MM_UL6"}, 928 + {"MultiMedia7 Capture", NULL, "MM_UL7"}, 929 + {"MultiMedia8 Capture", NULL, "MM_UL8"}, 930 + 912 931 }; 913 932 914 933 static int routing_hw_params(struct snd_pcm_substream *substream,
+1
sound/soc/rockchip/rockchip_pcm.c
··· 33 33 34 34 static const struct snd_dmaengine_pcm_config rk_dmaengine_pcm_config = { 35 35 .pcm_hardware = &snd_rockchip_hardware, 36 + .prepare_slave_config = snd_dmaengine_pcm_prepare_slave_config, 36 37 .prealloc_buffer_size = 32 * 1024, 37 38 }; 38 39
+1 -1
sound/soc/sh/rcar/ssi.c
··· 306 306 if (rsnd_ssi_is_multi_slave(mod, io)) 307 307 return 0; 308 308 309 - if (ssi->rate) { 309 + if (ssi->usrcnt > 1) { 310 310 if (ssi->rate != rate) { 311 311 dev_err(dev, "SSI parent/child should use same rate\n"); 312 312 return -EINVAL;
+8 -2
sound/soc/soc-acpi.c
··· 10 10 snd_soc_acpi_find_machine(struct snd_soc_acpi_mach *machines) 11 11 { 12 12 struct snd_soc_acpi_mach *mach; 13 + struct snd_soc_acpi_mach *mach_alt; 13 14 14 15 for (mach = machines; mach->id[0]; mach++) { 15 16 if (acpi_dev_present(mach->id, NULL, -1)) { 16 - if (mach->machine_quirk) 17 - mach = mach->machine_quirk(mach); 17 + if (mach->machine_quirk) { 18 + mach_alt = mach->machine_quirk(mach); 19 + if (!mach_alt) 20 + continue; /* not full match, ignore */ 21 + mach = mach_alt; 22 + } 23 + 18 24 return mach; 19 25 } 20 26 }
+1
sound/soc/soc-core.c
··· 2131 2131 } 2132 2132 2133 2133 card->instantiated = 1; 2134 + dapm_mark_endpoints_dirty(card); 2134 2135 snd_soc_dapm_sync(&card->dapm); 2135 2136 mutex_unlock(&card->mutex); 2136 2137 mutex_unlock(&client_mutex);
+1 -1
sound/soc/stm/stm32_sai_sub.c
··· 390 390 char *mclk_name, *p, *s = (char *)pname; 391 391 int ret, i = 0; 392 392 393 - mclk = devm_kzalloc(dev, sizeof(mclk), GFP_KERNEL); 393 + mclk = devm_kzalloc(dev, sizeof(*mclk), GFP_KERNEL); 394 394 if (!mclk) 395 395 return -ENOMEM; 396 396
+1 -1
sound/soc/sunxi/Kconfig
··· 31 31 config SND_SUN50I_CODEC_ANALOG 32 32 tristate "Allwinner sun50i Codec Analog Controls Support" 33 33 depends on (ARM64 && ARCH_SUNXI) || COMPILE_TEST 34 - select SND_SUNXI_ADDA_PR_REGMAP 34 + select SND_SUN8I_ADDA_PR_REGMAP 35 35 help 36 36 Say Y or M if you want to add support for the analog controls for 37 37 the codec embedded in Allwinner A64 SoC.
+5 -7
sound/soc/sunxi/sun8i-codec.c
··· 481 481 { "Right Digital DAC Mixer", "AIF1 Slot 0 Digital DAC Playback Switch", 482 482 "AIF1 Slot 0 Right"}, 483 483 484 - /* ADC routes */ 484 + /* ADC Routes */ 485 + { "AIF1 Slot 0 Right ADC", NULL, "ADC" }, 486 + { "AIF1 Slot 0 Left ADC", NULL, "ADC" }, 487 + 488 + /* ADC Mixer Routes */ 485 489 { "Left Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch", 486 490 "AIF1 Slot 0 Left ADC" }, 487 491 { "Right Digital ADC Mixer", "AIF1 Data Digital ADC Capture Switch", ··· 609 605 610 606 static int sun8i_codec_remove(struct platform_device *pdev) 611 607 { 612 - struct snd_soc_card *card = platform_get_drvdata(pdev); 613 - struct sun8i_codec *scodec = snd_soc_card_get_drvdata(card); 614 - 615 608 pm_runtime_disable(&pdev->dev); 616 609 if (!pm_runtime_status_suspended(&pdev->dev)) 617 610 sun8i_codec_runtime_suspend(&pdev->dev); 618 - 619 - clk_disable_unprepare(scodec->clk_module); 620 - clk_disable_unprepare(scodec->clk_bus); 621 611 622 612 return 0; 623 613 }
+2 -6
sound/sparc/cs4231.c
··· 1146 1146 runtime->hw = snd_cs4231_playback; 1147 1147 1148 1148 err = snd_cs4231_open(chip, CS4231_MODE_PLAY); 1149 - if (err < 0) { 1150 - snd_free_pages(runtime->dma_area, runtime->dma_bytes); 1149 + if (err < 0) 1151 1150 return err; 1152 - } 1153 1151 chip->playback_substream = substream; 1154 1152 chip->p_periods_sent = 0; 1155 1153 snd_pcm_set_sync(substream); ··· 1165 1167 runtime->hw = snd_cs4231_capture; 1166 1168 1167 1169 err = snd_cs4231_open(chip, CS4231_MODE_RECORD); 1168 - if (err < 0) { 1169 - snd_free_pages(runtime->dma_area, runtime->dma_bytes); 1170 + if (err < 0) 1170 1171 return err; 1171 - } 1172 1172 chip->capture_substream = substream; 1173 1173 chip->c_periods_sent = 0; 1174 1174 snd_pcm_set_sync(substream);
+10
sound/usb/quirks-table.h
··· 3382 3382 .ifnum = QUIRK_NO_INTERFACE 3383 3383 } 3384 3384 }, 3385 + /* Dell WD19 Dock */ 3386 + { 3387 + USB_DEVICE(0x0bda, 0x402e), 3388 + .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { 3389 + .vendor_name = "Dell", 3390 + .product_name = "WD19 Dock", 3391 + .profile_name = "Dell-WD15-Dock", 3392 + .ifnum = QUIRK_NO_INTERFACE 3393 + } 3394 + }, 3385 3395 3386 3396 #undef USB_DEVICE_VENDOR_SPEC
+2
tools/arch/x86/include/asm/cpufeatures.h
··· 331 331 #define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */ 332 332 #define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */ 333 333 #define X86_FEATURE_CLDEMOTE (16*32+25) /* CLDEMOTE instruction */ 334 + #define X86_FEATURE_MOVDIRI (16*32+27) /* MOVDIRI instruction */ 335 + #define X86_FEATURE_MOVDIR64B (16*32+28) /* MOVDIR64B instruction */ 334 336 335 337 /* AMD-defined CPU features, CPUID level 0x80000007 (EBX), word 17 */ 336 338 #define X86_FEATURE_OVERFLOW_RECOV (17*32+ 0) /* MCA overflow recovery support */
+7 -1
tools/bpf/bpftool/Documentation/bpftool-cgroup.rst
··· 137 137 138 138 SEE ALSO 139 139 ======== 140 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8) 140 + **bpf**\ (2), 141 + **bpf-helpers**\ (7), 142 + **bpftool**\ (8), 143 + **bpftool-prog**\ (8), 144 + **bpftool-map**\ (8), 145 + **bpftool-net**\ (8), 146 + **bpftool-perf**\ (8)
+7 -1
tools/bpf/bpftool/Documentation/bpftool-map.rst
··· 171 171 172 172 SEE ALSO 173 173 ======== 174 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-cgroup**\ (8) 174 + **bpf**\ (2), 175 + **bpf-helpers**\ (7), 176 + **bpftool**\ (8), 177 + **bpftool-prog**\ (8), 178 + **bpftool-cgroup**\ (8), 179 + **bpftool-net**\ (8), 180 + **bpftool-perf**\ (8)
+7 -1
tools/bpf/bpftool/Documentation/bpftool-net.rst
··· 136 136 137 137 SEE ALSO 138 138 ======== 139 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8) 139 + **bpf**\ (2), 140 + **bpf-helpers**\ (7), 141 + **bpftool**\ (8), 142 + **bpftool-prog**\ (8), 143 + **bpftool-map**\ (8), 144 + **bpftool-cgroup**\ (8), 145 + **bpftool-perf**\ (8)
+7 -1
tools/bpf/bpftool/Documentation/bpftool-perf.rst
··· 78 78 79 79 SEE ALSO 80 80 ======== 81 - **bpftool**\ (8), **bpftool-prog**\ (8), **bpftool-map**\ (8) 81 + **bpf**\ (2), 82 + **bpf-helpers**\ (7), 83 + **bpftool**\ (8), 84 + **bpftool-prog**\ (8), 85 + **bpftool-map**\ (8), 86 + **bpftool-cgroup**\ (8), 87 + **bpftool-net**\ (8)
+9 -2
tools/bpf/bpftool/Documentation/bpftool-prog.rst
··· 124 124 Generate human-readable JSON output. Implies **-j**. 125 125 126 126 -f, --bpffs 127 - Show file names of pinned programs. 127 + When showing BPF programs, show file names of pinned 128 + programs. 128 129 129 130 EXAMPLES 130 131 ======== ··· 207 206 208 207 SEE ALSO 209 208 ======== 210 - **bpftool**\ (8), **bpftool-map**\ (8), **bpftool-cgroup**\ (8) 209 + **bpf**\ (2), 210 + **bpf-helpers**\ (7), 211 + **bpftool**\ (8), 212 + **bpftool-map**\ (8), 213 + **bpftool-cgroup**\ (8), 214 + **bpftool-net**\ (8), 215 + **bpftool-perf**\ (8)
+7 -2
tools/bpf/bpftool/Documentation/bpftool.rst
··· 63 63 64 64 SEE ALSO 65 65 ======== 66 - **bpftool-map**\ (8), **bpftool-prog**\ (8), **bpftool-cgroup**\ (8) 67 - **bpftool-perf**\ (8), **bpftool-net**\ (8) 66 + **bpf**\ (2), 67 + **bpf-helpers**\ (7), 68 + **bpftool-prog**\ (8), 69 + **bpftool-map**\ (8), 70 + **bpftool-cgroup**\ (8), 71 + **bpftool-net**\ (8), 72 + **bpftool-perf**\ (8)
+9 -8
tools/bpf/bpftool/common.c
··· 130 130 return 0; 131 131 } 132 132 133 - int open_obj_pinned(char *path) 133 + int open_obj_pinned(char *path, bool quiet) 134 134 { 135 135 int fd; 136 136 137 137 fd = bpf_obj_get(path); 138 138 if (fd < 0) { 139 - p_err("bpf obj get (%s): %s", path, 140 - errno == EACCES && !is_bpffs(dirname(path)) ? 141 - "directory not in bpf file system (bpffs)" : 142 - strerror(errno)); 139 + if (!quiet) 140 + p_err("bpf obj get (%s): %s", path, 141 + errno == EACCES && !is_bpffs(dirname(path)) ? 142 + "directory not in bpf file system (bpffs)" : 143 + strerror(errno)); 143 144 return -1; 144 145 } 145 146 ··· 152 151 enum bpf_obj_type type; 153 152 int fd; 154 153 155 - fd = open_obj_pinned(path); 154 + fd = open_obj_pinned(path, false); 156 155 if (fd < 0) 157 156 return -1; 158 157 ··· 305 304 return NULL; 306 305 } 307 306 308 - while ((n = getline(&line, &line_n, fdi))) { 307 + while ((n = getline(&line, &line_n, fdi)) > 0) { 309 308 char *value; 310 309 int len; 311 310 ··· 385 384 while ((ftse = fts_read(fts))) { 386 385 if (!(ftse->fts_info & FTS_F)) 387 386 continue; 388 - fd = open_obj_pinned(ftse->fts_path); 387 + fd = open_obj_pinned(ftse->fts_path, true); 389 388 if (fd < 0) 390 389 continue; 391 390
+1 -1
tools/bpf/bpftool/main.h
··· 127 127 int get_fd_type(int fd); 128 128 const char *get_fd_type_name(enum bpf_obj_type type); 129 129 char *get_fdinfo(int fd, const char *key); 130 - int open_obj_pinned(char *path); 130 + int open_obj_pinned(char *path, bool quiet); 131 131 int open_obj_pinned_any(char *path, enum bpf_obj_type exp_type); 132 132 int do_pin_any(int argc, char **argv, int (*get_fd_by_id)(__u32)); 133 133 int do_pin_fd(int fd, const char *name);
+8 -5
tools/bpf/bpftool/prog.c
··· 357 357 if (!hash_empty(prog_table.table)) { 358 358 struct pinned_obj *obj; 359 359 360 - printf("\n"); 361 360 hash_for_each_possible(prog_table.table, obj, hash, info->id) { 362 361 if (obj->id == info->id) 363 - printf("\tpinned %s\n", obj->path); 362 + printf("\n\tpinned %s", obj->path); 364 363 } 365 364 } 366 365 ··· 844 845 } 845 846 NEXT_ARG(); 846 847 } else if (is_prefix(*argv, "map")) { 848 + void *new_map_replace; 847 849 char *endptr, *name; 848 850 int fd; 849 851 ··· 878 878 if (fd < 0) 879 879 goto err_free_reuse_maps; 880 880 881 - map_replace = reallocarray(map_replace, old_map_fds + 1, 882 - sizeof(*map_replace)); 883 - if (!map_replace) { 881 + new_map_replace = reallocarray(map_replace, 882 + old_map_fds + 1, 883 + sizeof(*map_replace)); 884 + if (!new_map_replace) { 884 885 p_err("mem alloc failed"); 885 886 goto err_free_reuse_maps; 886 887 } 888 + map_replace = new_map_replace; 889 + 887 890 map_replace[old_map_fds].idx = idx; 888 891 map_replace[old_map_fds].name = name; 889 892 map_replace[old_map_fds].fd = fd;
+1
tools/build/Makefile.feature
··· 33 33 dwarf_getlocations \ 34 34 fortify-source \ 35 35 sync-compare-and-swap \ 36 + get_current_dir_name \ 36 37 glibc \ 37 38 gtk2 \ 38 39 gtk2-infobar \
+4
tools/build/feature/Makefile
··· 7 7 test-dwarf_getlocations.bin \ 8 8 test-fortify-source.bin \ 9 9 test-sync-compare-and-swap.bin \ 10 + test-get_current_dir_name.bin \ 10 11 test-glibc.bin \ 11 12 test-gtk2.bin \ 12 13 test-gtk2-infobar.bin \ ··· 101 100 102 101 $(OUTPUT)test-libelf.bin: 103 102 $(BUILD) -lelf 103 + 104 + $(OUTPUT)test-get_current_dir_name.bin: 105 + $(BUILD) 104 106 105 107 $(OUTPUT)test-glibc.bin: 106 108 $(BUILD)
+5
tools/build/feature/test-all.c
··· 34 34 # include "test-libelf-mmap.c" 35 35 #undef main 36 36 37 + #define main main_test_get_current_dir_name 38 + # include "test-get_current_dir_name.c" 39 + #undef main 40 + 37 41 #define main main_test_glibc 38 42 # include "test-glibc.c" 39 43 #undef main ··· 178 174 main_test_hello(); 179 175 main_test_libelf(); 180 176 main_test_libelf_mmap(); 177 + main_test_get_current_dir_name(); 181 178 main_test_glibc(); 182 179 main_test_dwarf(); 183 180 main_test_dwarf_getlocations();
+10
tools/build/feature/test-get_current_dir_name.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #define _GNU_SOURCE 3 + #include <unistd.h> 4 + #include <stdlib.h> 5 + 6 + int main(void) 7 + { 8 + free(get_current_dir_name()); 9 + return 0; 10 + }
+2
tools/include/uapi/asm-generic/ioctls.h
··· 79 79 #define TIOCGPTLCK _IOR('T', 0x39, int) /* Get Pty lock state */ 80 80 #define TIOCGEXCL _IOR('T', 0x40, int) /* Get exclusive mode state */ 81 81 #define TIOCGPTPEER _IO('T', 0x41) /* Safely open the slave */ 82 + #define TIOCGISO7816 _IOR('T', 0x42, struct serial_iso7816) 83 + #define TIOCSISO7816 _IOWR('T', 0x43, struct serial_iso7816) 82 84 83 85 #define FIONCLEX 0x5450 84 86 #define FIOCLEX 0x5451
+22
tools/include/uapi/drm/i915_drm.h
··· 529 529 */ 530 530 #define I915_PARAM_CS_TIMESTAMP_FREQUENCY 51 531 531 532 + /* 533 + * Once upon a time we supposed that writes through the GGTT would be 534 + * immediately in physical memory (once flushed out of the CPU path). However, 535 + * on a few different processors and chipsets, this is not necessarily the case 536 + * as the writes appear to be buffered internally. Thus a read of the backing 537 + * storage (physical memory) via a different path (with different physical tags 538 + * to the indirect write via the GGTT) will see stale values from before 539 + * the GGTT write. Inside the kernel, we can for the most part keep track of 540 + * the different read/write domains in use (e.g. set-domain), but the assumption 541 + * of coherency is baked into the ABI, hence reporting its true state in this 542 + * parameter. 543 + * 544 + * Reports true when writes via mmap_gtt are immediately visible following an 545 + * lfence to flush the WCB. 546 + * 547 + * Reports false when writes via mmap_gtt are indeterminately delayed in an in 548 + * internal buffer and are _not_ immediately visible to third parties accessing 549 + * directly via mmap_cpu/mmap_wc. Use of mmap_gtt as part of an IPC 550 + * communications channel when reporting false is strongly disadvised. 551 + */ 552 + #define I915_PARAM_MMAP_GTT_COHERENT 52 553 + 532 554 typedef struct drm_i915_getparam { 533 555 __s32 param; 534 556 /*
+612
tools/include/uapi/linux/pkt_cls.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + #ifndef __LINUX_PKT_CLS_H 3 + #define __LINUX_PKT_CLS_H 4 + 5 + #include <linux/types.h> 6 + #include <linux/pkt_sched.h> 7 + 8 + #define TC_COOKIE_MAX_SIZE 16 9 + 10 + /* Action attributes */ 11 + enum { 12 + TCA_ACT_UNSPEC, 13 + TCA_ACT_KIND, 14 + TCA_ACT_OPTIONS, 15 + TCA_ACT_INDEX, 16 + TCA_ACT_STATS, 17 + TCA_ACT_PAD, 18 + TCA_ACT_COOKIE, 19 + __TCA_ACT_MAX 20 + }; 21 + 22 + #define TCA_ACT_MAX __TCA_ACT_MAX 23 + #define TCA_OLD_COMPAT (TCA_ACT_MAX+1) 24 + #define TCA_ACT_MAX_PRIO 32 25 + #define TCA_ACT_BIND 1 26 + #define TCA_ACT_NOBIND 0 27 + #define TCA_ACT_UNBIND 1 28 + #define TCA_ACT_NOUNBIND 0 29 + #define TCA_ACT_REPLACE 1 30 + #define TCA_ACT_NOREPLACE 0 31 + 32 + #define TC_ACT_UNSPEC (-1) 33 + #define TC_ACT_OK 0 34 + #define TC_ACT_RECLASSIFY 1 35 + #define TC_ACT_SHOT 2 36 + #define TC_ACT_PIPE 3 37 + #define TC_ACT_STOLEN 4 38 + #define TC_ACT_QUEUED 5 39 + #define TC_ACT_REPEAT 6 40 + #define TC_ACT_REDIRECT 7 41 + #define TC_ACT_TRAP 8 /* For hw path, this means "trap to cpu" 42 + * and don't further process the frame 43 + * in hardware. For sw path, this is 44 + * equivalent of TC_ACT_STOLEN - drop 45 + * the skb and act like everything 46 + * is alright. 47 + */ 48 + #define TC_ACT_VALUE_MAX TC_ACT_TRAP 49 + 50 + /* There is a special kind of actions called "extended actions", 51 + * which need a value parameter. These have a local opcode located in 52 + * the highest nibble, starting from 1. The rest of the bits 53 + * are used to carry the value. These two parts together make 54 + * a combined opcode. 
55 + */ 56 + #define __TC_ACT_EXT_SHIFT 28 57 + #define __TC_ACT_EXT(local) ((local) << __TC_ACT_EXT_SHIFT) 58 + #define TC_ACT_EXT_VAL_MASK ((1 << __TC_ACT_EXT_SHIFT) - 1) 59 + #define TC_ACT_EXT_OPCODE(combined) ((combined) & (~TC_ACT_EXT_VAL_MASK)) 60 + #define TC_ACT_EXT_CMP(combined, opcode) (TC_ACT_EXT_OPCODE(combined) == opcode) 61 + 62 + #define TC_ACT_JUMP __TC_ACT_EXT(1) 63 + #define TC_ACT_GOTO_CHAIN __TC_ACT_EXT(2) 64 + #define TC_ACT_EXT_OPCODE_MAX TC_ACT_GOTO_CHAIN 65 + 66 + /* Action type identifiers*/ 67 + enum { 68 + TCA_ID_UNSPEC=0, 69 + TCA_ID_POLICE=1, 70 + /* other actions go here */ 71 + __TCA_ID_MAX=255 72 + }; 73 + 74 + #define TCA_ID_MAX __TCA_ID_MAX 75 + 76 + struct tc_police { 77 + __u32 index; 78 + int action; 79 + #define TC_POLICE_UNSPEC TC_ACT_UNSPEC 80 + #define TC_POLICE_OK TC_ACT_OK 81 + #define TC_POLICE_RECLASSIFY TC_ACT_RECLASSIFY 82 + #define TC_POLICE_SHOT TC_ACT_SHOT 83 + #define TC_POLICE_PIPE TC_ACT_PIPE 84 + 85 + __u32 limit; 86 + __u32 burst; 87 + __u32 mtu; 88 + struct tc_ratespec rate; 89 + struct tc_ratespec peakrate; 90 + int refcnt; 91 + int bindcnt; 92 + __u32 capab; 93 + }; 94 + 95 + struct tcf_t { 96 + __u64 install; 97 + __u64 lastuse; 98 + __u64 expires; 99 + __u64 firstuse; 100 + }; 101 + 102 + struct tc_cnt { 103 + int refcnt; 104 + int bindcnt; 105 + }; 106 + 107 + #define tc_gen \ 108 + __u32 index; \ 109 + __u32 capab; \ 110 + int action; \ 111 + int refcnt; \ 112 + int bindcnt 113 + 114 + enum { 115 + TCA_POLICE_UNSPEC, 116 + TCA_POLICE_TBF, 117 + TCA_POLICE_RATE, 118 + TCA_POLICE_PEAKRATE, 119 + TCA_POLICE_AVRATE, 120 + TCA_POLICE_RESULT, 121 + TCA_POLICE_TM, 122 + TCA_POLICE_PAD, 123 + __TCA_POLICE_MAX 124 + #define TCA_POLICE_RESULT TCA_POLICE_RESULT 125 + }; 126 + 127 + #define TCA_POLICE_MAX (__TCA_POLICE_MAX - 1) 128 + 129 + /* tca flags definitions */ 130 + #define TCA_CLS_FLAGS_SKIP_HW (1 << 0) /* don't offload filter to HW */ 131 + #define TCA_CLS_FLAGS_SKIP_SW (1 << 1) /* don't use filter in SW 
*/ 132 + #define TCA_CLS_FLAGS_IN_HW (1 << 2) /* filter is offloaded to HW */ 133 + #define TCA_CLS_FLAGS_NOT_IN_HW (1 << 3) /* filter isn't offloaded to HW */ 134 + #define TCA_CLS_FLAGS_VERBOSE (1 << 4) /* verbose logging */ 135 + 136 + /* U32 filters */ 137 + 138 + #define TC_U32_HTID(h) ((h)&0xFFF00000) 139 + #define TC_U32_USERHTID(h) (TC_U32_HTID(h)>>20) 140 + #define TC_U32_HASH(h) (((h)>>12)&0xFF) 141 + #define TC_U32_NODE(h) ((h)&0xFFF) 142 + #define TC_U32_KEY(h) ((h)&0xFFFFF) 143 + #define TC_U32_UNSPEC 0 144 + #define TC_U32_ROOT (0xFFF00000) 145 + 146 + enum { 147 + TCA_U32_UNSPEC, 148 + TCA_U32_CLASSID, 149 + TCA_U32_HASH, 150 + TCA_U32_LINK, 151 + TCA_U32_DIVISOR, 152 + TCA_U32_SEL, 153 + TCA_U32_POLICE, 154 + TCA_U32_ACT, 155 + TCA_U32_INDEV, 156 + TCA_U32_PCNT, 157 + TCA_U32_MARK, 158 + TCA_U32_FLAGS, 159 + TCA_U32_PAD, 160 + __TCA_U32_MAX 161 + }; 162 + 163 + #define TCA_U32_MAX (__TCA_U32_MAX - 1) 164 + 165 + struct tc_u32_key { 166 + __be32 mask; 167 + __be32 val; 168 + int off; 169 + int offmask; 170 + }; 171 + 172 + struct tc_u32_sel { 173 + unsigned char flags; 174 + unsigned char offshift; 175 + unsigned char nkeys; 176 + 177 + __be16 offmask; 178 + __u16 off; 179 + short offoff; 180 + 181 + short hoff; 182 + __be32 hmask; 183 + struct tc_u32_key keys[0]; 184 + }; 185 + 186 + struct tc_u32_mark { 187 + __u32 val; 188 + __u32 mask; 189 + __u32 success; 190 + }; 191 + 192 + struct tc_u32_pcnt { 193 + __u64 rcnt; 194 + __u64 rhit; 195 + __u64 kcnts[0]; 196 + }; 197 + 198 + /* Flags */ 199 + 200 + #define TC_U32_TERMINAL 1 201 + #define TC_U32_OFFSET 2 202 + #define TC_U32_VAROFFSET 4 203 + #define TC_U32_EAT 8 204 + 205 + #define TC_U32_MAXDEPTH 8 206 + 207 + 208 + /* RSVP filter */ 209 + 210 + enum { 211 + TCA_RSVP_UNSPEC, 212 + TCA_RSVP_CLASSID, 213 + TCA_RSVP_DST, 214 + TCA_RSVP_SRC, 215 + TCA_RSVP_PINFO, 216 + TCA_RSVP_POLICE, 217 + TCA_RSVP_ACT, 218 + __TCA_RSVP_MAX 219 + }; 220 + 221 + #define TCA_RSVP_MAX (__TCA_RSVP_MAX - 1 ) 222 + 223 
+ struct tc_rsvp_gpi { 224 + __u32 key; 225 + __u32 mask; 226 + int offset; 227 + }; 228 + 229 + struct tc_rsvp_pinfo { 230 + struct tc_rsvp_gpi dpi; 231 + struct tc_rsvp_gpi spi; 232 + __u8 protocol; 233 + __u8 tunnelid; 234 + __u8 tunnelhdr; 235 + __u8 pad; 236 + }; 237 + 238 + /* ROUTE filter */ 239 + 240 + enum { 241 + TCA_ROUTE4_UNSPEC, 242 + TCA_ROUTE4_CLASSID, 243 + TCA_ROUTE4_TO, 244 + TCA_ROUTE4_FROM, 245 + TCA_ROUTE4_IIF, 246 + TCA_ROUTE4_POLICE, 247 + TCA_ROUTE4_ACT, 248 + __TCA_ROUTE4_MAX 249 + }; 250 + 251 + #define TCA_ROUTE4_MAX (__TCA_ROUTE4_MAX - 1) 252 + 253 + 254 + /* FW filter */ 255 + 256 + enum { 257 + TCA_FW_UNSPEC, 258 + TCA_FW_CLASSID, 259 + TCA_FW_POLICE, 260 + TCA_FW_INDEV, /* used by CONFIG_NET_CLS_IND */ 261 + TCA_FW_ACT, /* used by CONFIG_NET_CLS_ACT */ 262 + TCA_FW_MASK, 263 + __TCA_FW_MAX 264 + }; 265 + 266 + #define TCA_FW_MAX (__TCA_FW_MAX - 1) 267 + 268 + /* TC index filter */ 269 + 270 + enum { 271 + TCA_TCINDEX_UNSPEC, 272 + TCA_TCINDEX_HASH, 273 + TCA_TCINDEX_MASK, 274 + TCA_TCINDEX_SHIFT, 275 + TCA_TCINDEX_FALL_THROUGH, 276 + TCA_TCINDEX_CLASSID, 277 + TCA_TCINDEX_POLICE, 278 + TCA_TCINDEX_ACT, 279 + __TCA_TCINDEX_MAX 280 + }; 281 + 282 + #define TCA_TCINDEX_MAX (__TCA_TCINDEX_MAX - 1) 283 + 284 + /* Flow filter */ 285 + 286 + enum { 287 + FLOW_KEY_SRC, 288 + FLOW_KEY_DST, 289 + FLOW_KEY_PROTO, 290 + FLOW_KEY_PROTO_SRC, 291 + FLOW_KEY_PROTO_DST, 292 + FLOW_KEY_IIF, 293 + FLOW_KEY_PRIORITY, 294 + FLOW_KEY_MARK, 295 + FLOW_KEY_NFCT, 296 + FLOW_KEY_NFCT_SRC, 297 + FLOW_KEY_NFCT_DST, 298 + FLOW_KEY_NFCT_PROTO_SRC, 299 + FLOW_KEY_NFCT_PROTO_DST, 300 + FLOW_KEY_RTCLASSID, 301 + FLOW_KEY_SKUID, 302 + FLOW_KEY_SKGID, 303 + FLOW_KEY_VLAN_TAG, 304 + FLOW_KEY_RXHASH, 305 + __FLOW_KEY_MAX, 306 + }; 307 + 308 + #define FLOW_KEY_MAX (__FLOW_KEY_MAX - 1) 309 + 310 + enum { 311 + FLOW_MODE_MAP, 312 + FLOW_MODE_HASH, 313 + }; 314 + 315 + enum { 316 + TCA_FLOW_UNSPEC, 317 + TCA_FLOW_KEYS, 318 + TCA_FLOW_MODE, 319 + TCA_FLOW_BASECLASS, 320 + 
TCA_FLOW_RSHIFT, 321 + TCA_FLOW_ADDEND, 322 + TCA_FLOW_MASK, 323 + TCA_FLOW_XOR, 324 + TCA_FLOW_DIVISOR, 325 + TCA_FLOW_ACT, 326 + TCA_FLOW_POLICE, 327 + TCA_FLOW_EMATCHES, 328 + TCA_FLOW_PERTURB, 329 + __TCA_FLOW_MAX 330 + }; 331 + 332 + #define TCA_FLOW_MAX (__TCA_FLOW_MAX - 1) 333 + 334 + /* Basic filter */ 335 + 336 + enum { 337 + TCA_BASIC_UNSPEC, 338 + TCA_BASIC_CLASSID, 339 + TCA_BASIC_EMATCHES, 340 + TCA_BASIC_ACT, 341 + TCA_BASIC_POLICE, 342 + __TCA_BASIC_MAX 343 + }; 344 + 345 + #define TCA_BASIC_MAX (__TCA_BASIC_MAX - 1) 346 + 347 + 348 + /* Cgroup classifier */ 349 + 350 + enum { 351 + TCA_CGROUP_UNSPEC, 352 + TCA_CGROUP_ACT, 353 + TCA_CGROUP_POLICE, 354 + TCA_CGROUP_EMATCHES, 355 + __TCA_CGROUP_MAX, 356 + }; 357 + 358 + #define TCA_CGROUP_MAX (__TCA_CGROUP_MAX - 1) 359 + 360 + /* BPF classifier */ 361 + 362 + #define TCA_BPF_FLAG_ACT_DIRECT (1 << 0) 363 + 364 + enum { 365 + TCA_BPF_UNSPEC, 366 + TCA_BPF_ACT, 367 + TCA_BPF_POLICE, 368 + TCA_BPF_CLASSID, 369 + TCA_BPF_OPS_LEN, 370 + TCA_BPF_OPS, 371 + TCA_BPF_FD, 372 + TCA_BPF_NAME, 373 + TCA_BPF_FLAGS, 374 + TCA_BPF_FLAGS_GEN, 375 + TCA_BPF_TAG, 376 + TCA_BPF_ID, 377 + __TCA_BPF_MAX, 378 + }; 379 + 380 + #define TCA_BPF_MAX (__TCA_BPF_MAX - 1) 381 + 382 + /* Flower classifier */ 383 + 384 + enum { 385 + TCA_FLOWER_UNSPEC, 386 + TCA_FLOWER_CLASSID, 387 + TCA_FLOWER_INDEV, 388 + TCA_FLOWER_ACT, 389 + TCA_FLOWER_KEY_ETH_DST, /* ETH_ALEN */ 390 + TCA_FLOWER_KEY_ETH_DST_MASK, /* ETH_ALEN */ 391 + TCA_FLOWER_KEY_ETH_SRC, /* ETH_ALEN */ 392 + TCA_FLOWER_KEY_ETH_SRC_MASK, /* ETH_ALEN */ 393 + TCA_FLOWER_KEY_ETH_TYPE, /* be16 */ 394 + TCA_FLOWER_KEY_IP_PROTO, /* u8 */ 395 + TCA_FLOWER_KEY_IPV4_SRC, /* be32 */ 396 + TCA_FLOWER_KEY_IPV4_SRC_MASK, /* be32 */ 397 + TCA_FLOWER_KEY_IPV4_DST, /* be32 */ 398 + TCA_FLOWER_KEY_IPV4_DST_MASK, /* be32 */ 399 + TCA_FLOWER_KEY_IPV6_SRC, /* struct in6_addr */ 400 + TCA_FLOWER_KEY_IPV6_SRC_MASK, /* struct in6_addr */ 401 + TCA_FLOWER_KEY_IPV6_DST, /* struct in6_addr */ 402 + 
TCA_FLOWER_KEY_IPV6_DST_MASK, /* struct in6_addr */ 403 + TCA_FLOWER_KEY_TCP_SRC, /* be16 */ 404 + TCA_FLOWER_KEY_TCP_DST, /* be16 */ 405 + TCA_FLOWER_KEY_UDP_SRC, /* be16 */ 406 + TCA_FLOWER_KEY_UDP_DST, /* be16 */ 407 + 408 + TCA_FLOWER_FLAGS, 409 + TCA_FLOWER_KEY_VLAN_ID, /* be16 */ 410 + TCA_FLOWER_KEY_VLAN_PRIO, /* u8 */ 411 + TCA_FLOWER_KEY_VLAN_ETH_TYPE, /* be16 */ 412 + 413 + TCA_FLOWER_KEY_ENC_KEY_ID, /* be32 */ 414 + TCA_FLOWER_KEY_ENC_IPV4_SRC, /* be32 */ 415 + TCA_FLOWER_KEY_ENC_IPV4_SRC_MASK,/* be32 */ 416 + TCA_FLOWER_KEY_ENC_IPV4_DST, /* be32 */ 417 + TCA_FLOWER_KEY_ENC_IPV4_DST_MASK,/* be32 */ 418 + TCA_FLOWER_KEY_ENC_IPV6_SRC, /* struct in6_addr */ 419 + TCA_FLOWER_KEY_ENC_IPV6_SRC_MASK,/* struct in6_addr */ 420 + TCA_FLOWER_KEY_ENC_IPV6_DST, /* struct in6_addr */ 421 + TCA_FLOWER_KEY_ENC_IPV6_DST_MASK,/* struct in6_addr */ 422 + 423 + TCA_FLOWER_KEY_TCP_SRC_MASK, /* be16 */ 424 + TCA_FLOWER_KEY_TCP_DST_MASK, /* be16 */ 425 + TCA_FLOWER_KEY_UDP_SRC_MASK, /* be16 */ 426 + TCA_FLOWER_KEY_UDP_DST_MASK, /* be16 */ 427 + TCA_FLOWER_KEY_SCTP_SRC_MASK, /* be16 */ 428 + TCA_FLOWER_KEY_SCTP_DST_MASK, /* be16 */ 429 + 430 + TCA_FLOWER_KEY_SCTP_SRC, /* be16 */ 431 + TCA_FLOWER_KEY_SCTP_DST, /* be16 */ 432 + 433 + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT, /* be16 */ 434 + TCA_FLOWER_KEY_ENC_UDP_SRC_PORT_MASK, /* be16 */ 435 + TCA_FLOWER_KEY_ENC_UDP_DST_PORT, /* be16 */ 436 + TCA_FLOWER_KEY_ENC_UDP_DST_PORT_MASK, /* be16 */ 437 + 438 + TCA_FLOWER_KEY_FLAGS, /* be32 */ 439 + TCA_FLOWER_KEY_FLAGS_MASK, /* be32 */ 440 + 441 + TCA_FLOWER_KEY_ICMPV4_CODE, /* u8 */ 442 + TCA_FLOWER_KEY_ICMPV4_CODE_MASK,/* u8 */ 443 + TCA_FLOWER_KEY_ICMPV4_TYPE, /* u8 */ 444 + TCA_FLOWER_KEY_ICMPV4_TYPE_MASK,/* u8 */ 445 + TCA_FLOWER_KEY_ICMPV6_CODE, /* u8 */ 446 + TCA_FLOWER_KEY_ICMPV6_CODE_MASK,/* u8 */ 447 + TCA_FLOWER_KEY_ICMPV6_TYPE, /* u8 */ 448 + TCA_FLOWER_KEY_ICMPV6_TYPE_MASK,/* u8 */ 449 + 450 + TCA_FLOWER_KEY_ARP_SIP, /* be32 */ 451 + TCA_FLOWER_KEY_ARP_SIP_MASK, /* be32 */ 452 + 
TCA_FLOWER_KEY_ARP_TIP, /* be32 */ 453 + TCA_FLOWER_KEY_ARP_TIP_MASK, /* be32 */ 454 + TCA_FLOWER_KEY_ARP_OP, /* u8 */ 455 + TCA_FLOWER_KEY_ARP_OP_MASK, /* u8 */ 456 + TCA_FLOWER_KEY_ARP_SHA, /* ETH_ALEN */ 457 + TCA_FLOWER_KEY_ARP_SHA_MASK, /* ETH_ALEN */ 458 + TCA_FLOWER_KEY_ARP_THA, /* ETH_ALEN */ 459 + TCA_FLOWER_KEY_ARP_THA_MASK, /* ETH_ALEN */ 460 + 461 + TCA_FLOWER_KEY_MPLS_TTL, /* u8 - 8 bits */ 462 + TCA_FLOWER_KEY_MPLS_BOS, /* u8 - 1 bit */ 463 + TCA_FLOWER_KEY_MPLS_TC, /* u8 - 3 bits */ 464 + TCA_FLOWER_KEY_MPLS_LABEL, /* be32 - 20 bits */ 465 + 466 + TCA_FLOWER_KEY_TCP_FLAGS, /* be16 */ 467 + TCA_FLOWER_KEY_TCP_FLAGS_MASK, /* be16 */ 468 + 469 + TCA_FLOWER_KEY_IP_TOS, /* u8 */ 470 + TCA_FLOWER_KEY_IP_TOS_MASK, /* u8 */ 471 + TCA_FLOWER_KEY_IP_TTL, /* u8 */ 472 + TCA_FLOWER_KEY_IP_TTL_MASK, /* u8 */ 473 + 474 + TCA_FLOWER_KEY_CVLAN_ID, /* be16 */ 475 + TCA_FLOWER_KEY_CVLAN_PRIO, /* u8 */ 476 + TCA_FLOWER_KEY_CVLAN_ETH_TYPE, /* be16 */ 477 + 478 + TCA_FLOWER_KEY_ENC_IP_TOS, /* u8 */ 479 + TCA_FLOWER_KEY_ENC_IP_TOS_MASK, /* u8 */ 480 + TCA_FLOWER_KEY_ENC_IP_TTL, /* u8 */ 481 + TCA_FLOWER_KEY_ENC_IP_TTL_MASK, /* u8 */ 482 + 483 + TCA_FLOWER_KEY_ENC_OPTS, 484 + TCA_FLOWER_KEY_ENC_OPTS_MASK, 485 + 486 + TCA_FLOWER_IN_HW_COUNT, 487 + 488 + __TCA_FLOWER_MAX, 489 + }; 490 + 491 + #define TCA_FLOWER_MAX (__TCA_FLOWER_MAX - 1) 492 + 493 + enum { 494 + TCA_FLOWER_KEY_ENC_OPTS_UNSPEC, 495 + TCA_FLOWER_KEY_ENC_OPTS_GENEVE, /* Nested 496 + * TCA_FLOWER_KEY_ENC_OPT_GENEVE_ 497 + * attributes 498 + */ 499 + __TCA_FLOWER_KEY_ENC_OPTS_MAX, 500 + }; 501 + 502 + #define TCA_FLOWER_KEY_ENC_OPTS_MAX (__TCA_FLOWER_KEY_ENC_OPTS_MAX - 1) 503 + 504 + enum { 505 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_UNSPEC, 506 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_CLASS, /* u16 */ 507 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_TYPE, /* u8 */ 508 + TCA_FLOWER_KEY_ENC_OPT_GENEVE_DATA, /* 4 to 128 bytes */ 509 + 510 + __TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX, 511 + }; 512 + 513 + #define TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX \ 514 
+ (__TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX - 1) 515 + 516 + enum { 517 + TCA_FLOWER_KEY_FLAGS_IS_FRAGMENT = (1 << 0), 518 + TCA_FLOWER_KEY_FLAGS_FRAG_IS_FIRST = (1 << 1), 519 + }; 520 + 521 + /* Match-all classifier */ 522 + 523 + enum { 524 + TCA_MATCHALL_UNSPEC, 525 + TCA_MATCHALL_CLASSID, 526 + TCA_MATCHALL_ACT, 527 + TCA_MATCHALL_FLAGS, 528 + __TCA_MATCHALL_MAX, 529 + }; 530 + 531 + #define TCA_MATCHALL_MAX (__TCA_MATCHALL_MAX - 1) 532 + 533 + /* Extended Matches */ 534 + 535 + struct tcf_ematch_tree_hdr { 536 + __u16 nmatches; 537 + __u16 progid; 538 + }; 539 + 540 + enum { 541 + TCA_EMATCH_TREE_UNSPEC, 542 + TCA_EMATCH_TREE_HDR, 543 + TCA_EMATCH_TREE_LIST, 544 + __TCA_EMATCH_TREE_MAX 545 + }; 546 + #define TCA_EMATCH_TREE_MAX (__TCA_EMATCH_TREE_MAX - 1) 547 + 548 + struct tcf_ematch_hdr { 549 + __u16 matchid; 550 + __u16 kind; 551 + __u16 flags; 552 + __u16 pad; /* currently unused */ 553 + }; 554 + 555 + /* 0 1 556 + * 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 557 + * +-----------------------+-+-+---+ 558 + * | Unused |S|I| R | 559 + * +-----------------------+-+-+---+ 560 + * 561 + * R(2) ::= relation to next ematch 562 + * where: 0 0 END (last ematch) 563 + * 0 1 AND 564 + * 1 0 OR 565 + * 1 1 Unused (invalid) 566 + * I(1) ::= invert result 567 + * S(1) ::= simple payload 568 + */ 569 + #define TCF_EM_REL_END 0 570 + #define TCF_EM_REL_AND (1<<0) 571 + #define TCF_EM_REL_OR (1<<1) 572 + #define TCF_EM_INVERT (1<<2) 573 + #define TCF_EM_SIMPLE (1<<3) 574 + 575 + #define TCF_EM_REL_MASK 3 576 + #define TCF_EM_REL_VALID(v) (((v) & TCF_EM_REL_MASK) != TCF_EM_REL_MASK) 577 + 578 + enum { 579 + TCF_LAYER_LINK, 580 + TCF_LAYER_NETWORK, 581 + TCF_LAYER_TRANSPORT, 582 + __TCF_LAYER_MAX 583 + }; 584 + #define TCF_LAYER_MAX (__TCF_LAYER_MAX - 1) 585 + 586 + /* Ematch type assignments 587 + * 1..32767 Reserved for ematches inside kernel tree 588 + * 32768..65535 Free to use, not reliable 589 + */ 590 + #define TCF_EM_CONTAINER 0 591 + #define TCF_EM_CMP 1 592 + #define TCF_EM_NBYTE 
2 593 + #define TCF_EM_U32 3 594 + #define TCF_EM_META 4 595 + #define TCF_EM_TEXT 5 596 + #define TCF_EM_VLAN 6 597 + #define TCF_EM_CANID 7 598 + #define TCF_EM_IPSET 8 599 + #define TCF_EM_IPT 9 600 + #define TCF_EM_MAX 9 601 + 602 + enum { 603 + TCF_EM_PROG_TC 604 + }; 605 + 606 + enum { 607 + TCF_EM_OPND_EQ, 608 + TCF_EM_OPND_GT, 609 + TCF_EM_OPND_LT 610 + }; 611 + 612 + #endif
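The pkt_cls.h hunk above defines how the u32 classifier packs a filter handle into a single 32-bit value: hash-table id in bits 31..20, bucket in bits 19..12, node id in bits 11..0. A minimal standalone sketch of decomposing and recomposing such a handle (the `tc_u32_mkhandle()` helper is hypothetical, not part of the UAPI header; only the extraction macros are copied from it):

```c
#include <assert.h>
#include <stdint.h>

/* Handle layout used by the u32 classifier, copied from the header above:
 * bits 31..20 hash table id, bits 19..12 bucket, bits 11..0 node id. */
#define TC_U32_HTID(h)     ((h) & 0xFFF00000)
#define TC_U32_USERHTID(h) (TC_U32_HTID(h) >> 20)
#define TC_U32_HASH(h)     (((h) >> 12) & 0xFF)
#define TC_U32_NODE(h)     ((h) & 0xFFF)

/* Compose a handle from its three fields (hypothetical helper for
 * illustration; the kernel header only provides the extractors). */
static uint32_t tc_u32_mkhandle(uint32_t htid, uint32_t hash, uint32_t node)
{
	return (htid << 20) | ((hash & 0xFF) << 12) | (node & 0xFFF);
}
```

For example, the handle a user would write as `800:12:5` in tc(8) notation round-trips through `tc_u32_mkhandle(0x800, 0x12, 0x5)` to `0x80012005`.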
+1
tools/include/uapi/linux/prctl.h
··· 212 212 #define PR_SET_SPECULATION_CTRL 53 213 213 /* Speculation control variants */ 214 214 # define PR_SPEC_STORE_BYPASS 0 215 + # define PR_SPEC_INDIRECT_BRANCH 1 215 216 /* Return and control values for PR_SET/GET_SPECULATION_CTRL */ 216 217 # define PR_SPEC_NOT_AFFECTED 0 217 218 # define PR_SPEC_PRCTL (1UL << 0)
+37
tools/include/uapi/linux/tc_act/tc_bpf.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ 2 + /* 3 + * Copyright (c) 2015 Jiri Pirko <jiri@resnulli.us> 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License as published by 7 + * the Free Software Foundation; either version 2 of the License, or 8 + * (at your option) any later version. 9 + */ 10 + 11 + #ifndef __LINUX_TC_BPF_H 12 + #define __LINUX_TC_BPF_H 13 + 14 + #include <linux/pkt_cls.h> 15 + 16 + #define TCA_ACT_BPF 13 17 + 18 + struct tc_act_bpf { 19 + tc_gen; 20 + }; 21 + 22 + enum { 23 + TCA_ACT_BPF_UNSPEC, 24 + TCA_ACT_BPF_TM, 25 + TCA_ACT_BPF_PARMS, 26 + TCA_ACT_BPF_OPS_LEN, 27 + TCA_ACT_BPF_OPS, 28 + TCA_ACT_BPF_FD, 29 + TCA_ACT_BPF_NAME, 30 + TCA_ACT_BPF_PAD, 31 + TCA_ACT_BPF_TAG, 32 + TCA_ACT_BPF_ID, 33 + __TCA_ACT_BPF_MAX, 34 + }; 35 + #define TCA_ACT_BPF_MAX (__TCA_ACT_BPF_MAX - 1) 36 + 37 + #endif
+15 -4
tools/objtool/elf.c
··· 31 31 #include "elf.h" 32 32 #include "warn.h" 33 33 34 + #define MAX_NAME_LEN 128 35 + 34 36 struct section *find_section_by_name(struct elf *elf, const char *name) 35 37 { 36 38 struct section *sec; ··· 300 298 /* Create parent/child links for any cold subfunctions */ 301 299 list_for_each_entry(sec, &elf->sections, list) { 302 300 list_for_each_entry(sym, &sec->symbol_list, list) { 301 + char pname[MAX_NAME_LEN + 1]; 302 + size_t pnamelen; 303 303 if (sym->type != STT_FUNC) 304 304 continue; 305 305 sym->pfunc = sym->cfunc = sym; ··· 309 305 if (!coldstr) 310 306 continue; 311 307 312 - coldstr[0] = '\0'; 313 - pfunc = find_symbol_by_name(elf, sym->name); 314 - coldstr[0] = '.'; 308 + pnamelen = coldstr - sym->name; 309 + if (pnamelen > MAX_NAME_LEN) { 310 + WARN("%s(): parent function name exceeds maximum length of %d characters", 311 + sym->name, MAX_NAME_LEN); 312 + return -1; 313 + } 314 + 315 + strncpy(pname, sym->name, pnamelen); 316 + pname[pnamelen] = '\0'; 317 + pfunc = find_symbol_by_name(elf, pname); 315 318 316 319 if (!pfunc) { 317 320 WARN("%s(): can't find parent function", 318 321 sym->name); 319 - goto err; 322 + return -1; 320 323 } 321 324 322 325 sym->pfunc = pfunc;
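The elf.c change above stops objtool from temporarily writing a NUL byte into `sym->name` (which mutates shared string-table data) and instead copies the parent-function prefix into a bounded local buffer before the lookup. A standalone sketch of that bounded copy, assuming the `".cold."` suffix convention objtool matched at the time (`parent_name()` is a hypothetical helper; objtool does this inline):

```c
#include <stddef.h>
#include <string.h>

#define MAX_NAME_LEN 128

/* Extract the parent function name from a ".cold."-suffixed symbol name,
 * mirroring the bounded strncpy the patch adds. Returns 0 on success,
 * -1 if there is no suffix or the prefix would overflow the buffer. */
static int parent_name(const char *symname, char *pname, size_t bufsz)
{
	const char *coldstr = strstr(symname, ".cold.");
	size_t pnamelen;

	if (!coldstr)
		return -1;

	pnamelen = coldstr - symname;
	if (pnamelen > MAX_NAME_LEN || pnamelen >= bufsz)
		return -1;

	memcpy(pname, symname, pnamelen);
	pname[pnamelen] = '\0';
	return 0;
}
```

The design point is the same as in the patch: the copy fails loudly on oversized names instead of silently truncating, so `find_symbol_by_name()` is never asked to match a mangled prefix.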
+5
tools/perf/Makefile.config
··· 299 299 endif 300 300 endif 301 301 302 + ifeq ($(feature-get_current_dir_name), 1) 303 + CFLAGS += -DHAVE_GET_CURRENT_DIR_NAME 304 + endif 305 + 306 + 302 307 ifdef NO_LIBELF 303 308 NO_DWARF := 1 304 309 NO_DEMANGLE := 1
+1 -1
tools/perf/tests/attr/base-record
··· 9 9 config=0 10 10 sample_period=* 11 11 sample_type=263 12 - read_format=0 12 + read_format=0|4 13 13 disabled=1 14 14 inherit=1 15 15 pinned=0
+1
tools/perf/trace/beauty/ioctl.c
··· 31 31 "TCSETSW2", "TCSETSF2", "TIOCGRS485", "TIOCSRS485", "TIOCGPTN", "TIOCSPTLCK", 32 32 "TIOCGDEV", "TCSETX", "TCSETXF", "TCSETXW", "TIOCSIG", "TIOCVHANGUP", "TIOCGPKT", 33 33 "TIOCGPTLCK", [_IOC_NR(TIOCGEXCL)] = "TIOCGEXCL", "TIOCGPTPEER", 34 + "TIOCGISO7816", "TIOCSISO7816", 34 35 [_IOC_NR(FIONCLEX)] = "FIONCLEX", "FIOCLEX", "FIOASYNC", "TIOCSERCONFIG", 35 36 "TIOCSERGWILD", "TIOCSERSWILD", "TIOCGLCKTRMIOS", "TIOCSLCKTRMIOS", 36 37 "TIOCSERGSTRUCT", "TIOCSERGETLSR", "TIOCSERGETMULTI", "TIOCSERSETMULTI",
+1
tools/perf/util/Build
··· 10 10 libperf-y += evsel.o 11 11 libperf-y += evsel_fprintf.o 12 12 libperf-y += find_bit.o 13 + libperf-y += get_current_dir_name.o 13 14 libperf-y += kallsyms.o 14 15 libperf-y += levenshtein.o 15 16 libperf-y += llvm-utils.o
+1 -1
tools/perf/util/evsel.c
··· 1092 1092 attr->exclude_user = 1; 1093 1093 } 1094 1094 1095 - if (evsel->own_cpus) 1095 + if (evsel->own_cpus || evsel->unit) 1096 1096 evsel->attr.read_format |= PERF_FORMAT_ID; 1097 1097 1098 1098 /*
+18
tools/perf/util/get_current_dir_name.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (C) 2018, Red Hat Inc, Arnaldo Carvalho de Melo <acme@redhat.com> 3 + // 4 + #ifndef HAVE_GET_CURRENT_DIR_NAME 5 + #include "util.h" 6 + #include <unistd.h> 7 + #include <stdlib.h> 8 + #include <string.h> 9 + 10 + /* Android's 'bionic' library, for one, doesn't have this */ 11 + 12 + char *get_current_dir_name(void) 13 + { 14 + char pwd[PATH_MAX]; 15 + 16 + return getcwd(pwd, sizeof(pwd)) == NULL ? NULL : strdup(pwd); 17 + } 18 + #endif // HAVE_GET_CURRENT_DIR_NAME
+15 -2
tools/perf/util/namespaces.c
··· 18 18 #include <stdio.h> 19 19 #include <string.h> 20 20 #include <unistd.h> 21 + #include <asm/bug.h> 21 22 22 23 struct namespaces *namespaces__new(struct namespaces_event *event) 23 24 { ··· 187 186 char curpath[PATH_MAX]; 188 187 int oldns = -1; 189 188 int newns = -1; 189 + char *oldcwd = NULL; 190 190 191 191 if (nc == NULL) 192 192 return; ··· 201 199 if (snprintf(curpath, PATH_MAX, "/proc/self/ns/mnt") >= PATH_MAX) 202 200 return; 203 201 202 + oldcwd = get_current_dir_name(); 203 + if (!oldcwd) 204 + return; 205 + 204 206 oldns = open(curpath, O_RDONLY); 205 207 if (oldns < 0) 206 - return; 208 + goto errout; 207 209 208 210 newns = open(nsi->mntns_path, O_RDONLY); 209 211 if (newns < 0) ··· 216 210 if (setns(newns, CLONE_NEWNS) < 0) 217 211 goto errout; 218 212 213 + nc->oldcwd = oldcwd; 219 214 nc->oldns = oldns; 220 215 nc->newns = newns; 221 216 return; 222 217 223 218 errout: 219 + free(oldcwd); 224 220 if (oldns > -1) 225 221 close(oldns); 226 222 if (newns > -1) ··· 231 223 232 224 void nsinfo__mountns_exit(struct nscookie *nc) 233 225 { 234 - if (nc == NULL || nc->oldns == -1 || nc->newns == -1) 226 + if (nc == NULL || nc->oldns == -1 || nc->newns == -1 || !nc->oldcwd) 235 227 return; 236 228 237 229 setns(nc->oldns, CLONE_NEWNS); 230 + 231 + if (nc->oldcwd) { 232 + WARN_ON_ONCE(chdir(nc->oldcwd)); 233 + zfree(&nc->oldcwd); 234 + } 238 235 239 236 if (nc->oldns > -1) { 240 237 close(nc->oldns);
+1
tools/perf/util/namespaces.h
··· 38 38 struct nscookie { 39 39 int oldns; 40 40 int newns; 41 + char *oldcwd; 41 42 }; 42 43 43 44 int nsinfo__init(struct nsinfo *nsi);
+4
tools/perf/util/util.h
··· 59 59 60 60 const char *perf_tip(const char *dirpath); 61 61 62 + #ifndef HAVE_GET_CURRENT_DIR_NAME 63 + char *get_current_dir_name(void); 64 + #endif 65 + 62 66 #ifndef HAVE_SCHED_GETCPU_SUPPORT 63 67 int sched_getcpu(void); 64 68 #endif
+1
tools/testing/selftests/Makefile
··· 24 24 TARGETS += mount 25 25 TARGETS += mqueue 26 26 TARGETS += net 27 + TARGETS += netfilter 27 28 TARGETS += nsfs 28 29 TARGETS += powerpc 29 30 TARGETS += proc
+4 -1
tools/testing/selftests/bpf/test_netcnt.c
··· 81 81 goto err; 82 82 } 83 83 84 - assert(system("ping localhost -6 -c 10000 -f -q > /dev/null") == 0); 84 + if (system("which ping6 > /dev/null 2>&1") == 0) 85 + assert(!system("ping6 localhost -c 10000 -f -q > /dev/null")); 86 + else 87 + assert(!system("ping -6 localhost -c 10000 -f -q > /dev/null")); 85 88 86 89 if (bpf_prog_query(cgroup_fd, BPF_CGROUP_INET_EGRESS, 0, NULL, NULL, 87 90 &prog_cnt)) {
+19
tools/testing/selftests/bpf/test_verifier.c
··· 13896 13896 .prog_type = BPF_PROG_TYPE_SCHED_CLS, 13897 13897 .result = ACCEPT, 13898 13898 }, 13899 + { 13900 + "calls: ctx read at start of subprog", 13901 + .insns = { 13902 + BPF_MOV64_REG(BPF_REG_6, BPF_REG_1), 13903 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 5), 13904 + BPF_JMP_REG(BPF_JSGT, BPF_REG_0, BPF_REG_0, 0), 13905 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_6), 13906 + BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 1, 0, 2), 13907 + BPF_MOV64_REG(BPF_REG_1, BPF_REG_0), 13908 + BPF_EXIT_INSN(), 13909 + BPF_LDX_MEM(BPF_B, BPF_REG_9, BPF_REG_1, 0), 13910 + BPF_MOV64_IMM(BPF_REG_0, 0), 13911 + BPF_EXIT_INSN(), 13912 + }, 13913 + .prog_type = BPF_PROG_TYPE_SOCKET_FILTER, 13914 + .errstr_unpriv = "function calls to other bpf functions are allowed for root only", 13915 + .result_unpriv = REJECT, 13916 + .result = ACCEPT, 13917 + }, 13899 13918 }; 13900 13919 13901 13920 static int probe_filter_length(const struct bpf_insn *fp)
+6
tools/testing/selftests/netfilter/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # Makefile for netfilter selftests 3 + 4 + TEST_PROGS := nft_trans_stress.sh 5 + 6 + include ../lib.mk
+2
tools/testing/selftests/netfilter/config
··· 1 + CONFIG_NET_NS=y 2 + CONFIG_NF_TABLES_INET=y
+78
tools/testing/selftests/netfilter/nft_trans_stress.sh
··· 1 + #!/bin/bash 2 + # 3 + # This test is for stress-testing the nf_tables config plane path vs. 4 + # packet path processing: Make sure we never release rules that are 5 + # still visible to other cpus. 6 + # 7 + # set -e 8 + 9 + # Kselftest framework requirement - SKIP code is 4. 10 + ksft_skip=4 11 + 12 + testns=testns1 13 + tables="foo bar baz quux" 14 + 15 + nft --version > /dev/null 2>&1 16 + if [ $? -ne 0 ];then 17 + echo "SKIP: Could not run test without nft tool" 18 + exit $ksft_skip 19 + fi 20 + 21 + ip -Version > /dev/null 2>&1 22 + if [ $? -ne 0 ];then 23 + echo "SKIP: Could not run test without ip tool" 24 + exit $ksft_skip 25 + fi 26 + 27 + tmp=$(mktemp) 28 + 29 + for table in $tables; do 30 + echo add table inet "$table" >> "$tmp" 31 + echo flush table inet "$table" >> "$tmp" 32 + 33 + echo "add chain inet $table INPUT { type filter hook input priority 0; }" >> "$tmp" 34 + echo "add chain inet $table OUTPUT { type filter hook output priority 0; }" >> "$tmp" 35 + for c in $(seq 1 400); do 36 + chain=$(printf "chain%03u" "$c") 37 + echo "add chain inet $table $chain" >> "$tmp" 38 + done 39 + 40 + for c in $(seq 1 400); do 41 + chain=$(printf "chain%03u" "$c") 42 + for BASE in INPUT OUTPUT; do 43 + echo "add rule inet $table $BASE counter jump $chain" >> "$tmp" 44 + done 45 + echo "add rule inet $table $chain counter return" >> "$tmp" 46 + done 47 + done 48 + 49 + ip netns add "$testns" 50 + ip -netns "$testns" link set lo up 51 + 52 + lscpu | grep ^CPU\(s\): | ( read cpu cpunum ; 53 + cpunum=$((cpunum-1)) 54 + for i in $(seq 0 $cpunum);do 55 + mask=$(printf 0x%x $((1<<$i))) 56 + ip netns exec "$testns" taskset $mask ping -4 127.0.0.1 -fq > /dev/null & 57 + ip netns exec "$testns" taskset $mask ping -6 ::1 -fq > /dev/null & 58 + done) 59 + 60 + sleep 1 61 + 62 + for i in $(seq 1 10) ; do ip netns exec "$testns" nft -f "$tmp" & done 63 + 64 + for table in $tables;do 65 + randsleep=$((RANDOM%10)) 66 + sleep $randsleep 67 + ip netns exec "$testns" nft 
delete table inet $table 2>/dev/null 68 + done 69 + 70 + randsleep=$((RANDOM%10)) 71 + sleep $randsleep 72 + 73 + pkill -9 ping 74 + 75 + wait 76 + 77 + rm -f "$tmp" 78 + ip netns del "$testns"
+7 -2
tools/testing/selftests/proc/proc-self-map-files-002.c
··· 13 13 * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 14 14 * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 15 15 */ 16 - /* Test readlink /proc/self/map_files/... with address 0. */ 16 + /* Test readlink /proc/self/map_files/... with minimum address. */ 17 17 #include <errno.h> 18 18 #include <sys/types.h> 19 19 #include <sys/stat.h> ··· 47 47 int main(void) 48 48 { 49 49 const unsigned int PAGE_SIZE = sysconf(_SC_PAGESIZE); 50 + #ifdef __arm__ 51 + unsigned long va = 2 * PAGE_SIZE; 52 + #else 53 + unsigned long va = 0; 54 + #endif 50 55 void *p; 51 56 int fd; 52 57 unsigned long a, b; ··· 60 55 if (fd == -1) 61 56 return 1; 62 57 63 - p = mmap(NULL, PAGE_SIZE, PROT_NONE, MAP_PRIVATE|MAP_FILE|MAP_FIXED, fd, 0); 58 + p = mmap((void *)va, PAGE_SIZE, PROT_NONE, MAP_PRIVATE|MAP_FILE|MAP_FIXED, fd, 0); 64 59 if (p == MAP_FAILED) { 65 60 if (errno == EPERM) 66 61 return 2;