Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Cross-merge networking fixes after downstream PR.

No conflicts or adjacent changes.

Signed-off-by: Jakub Kicinski <kuba@kernel.org>

+4294 -2315
+13
CREDITS
···
  E: mike.kravetz@oracle.com
  D: Maintenance and development of the hugetlb subsystem

+ N: Seth Jennings
+ E: sjenning@redhat.com
+ D: Creation and maintenance of zswap
+
+ N: Dan Streetman
+ E: ddstreet@ieee.org
+ D: Maintenance and development of zswap
+ D: Creation and maintenance of the zpool API
+
+ N: Vitaly Wool
+ E: vitaly.wool@konsulko.com
+ D: Maintenance and development of zswap
+
  N: Andreas S. Krebs
  E: akrebs@altavista.net
  D: CYPRESS CY82C693 chipset IDE, Digital's PC-Alpha 164SX boards
+11 -11
Documentation/ABI/testing/sysfs-class-net-queues
···
- What: /sys/class/<iface>/queues/rx-<queue>/rps_cpus
+ What: /sys/class/net/<iface>/queues/rx-<queue>/rps_cpus
  Date: March 2010
  KernelVersion: 2.6.35
  Contact: netdev@vger.kernel.org
···
  network device queue. Possible values depend on the number
  of available CPU(s) in the system.

- What: /sys/class/<iface>/queues/rx-<queue>/rps_flow_cnt
+ What: /sys/class/net/<iface>/queues/rx-<queue>/rps_flow_cnt
  Date: April 2010
  KernelVersion: 2.6.35
  Contact: netdev@vger.kernel.org
···
  Number of Receive Packet Steering flows being currently
  processed by this particular network device receive queue.

- What: /sys/class/<iface>/queues/tx-<queue>/tx_timeout
+ What: /sys/class/net/<iface>/queues/tx-<queue>/tx_timeout
  Date: November 2011
  KernelVersion: 3.3
  Contact: netdev@vger.kernel.org
···
  Indicates the number of transmit timeout events seen by this
  network interface transmit queue.

- What: /sys/class/<iface>/queues/tx-<queue>/tx_maxrate
+ What: /sys/class/net/<iface>/queues/tx-<queue>/tx_maxrate
  Date: March 2015
  KernelVersion: 4.1
  Contact: netdev@vger.kernel.org
···
  A Mbps max-rate set for the queue, a value of zero means disabled,
  default is disabled.

- What: /sys/class/<iface>/queues/tx-<queue>/xps_cpus
+ What: /sys/class/net/<iface>/queues/tx-<queue>/xps_cpus
  Date: November 2010
  KernelVersion: 2.6.38
  Contact: netdev@vger.kernel.org
···
  network device transmit queue. Possible values depend on the
  number of available CPU(s) in the system.

- What: /sys/class/<iface>/queues/tx-<queue>/xps_rxqs
+ What: /sys/class/net/<iface>/queues/tx-<queue>/xps_rxqs
  Date: June 2018
  KernelVersion: 4.18.0
  Contact: netdev@vger.kernel.org
···
  number of available receive queue(s) in the network device.
  Default is disabled.

- What: /sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
+ What: /sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/hold_time
  Date: November 2011
  KernelVersion: 3.3
  Contact: netdev@vger.kernel.org
···
  of this particular network device transmit queue.
  Default value is 1000.

- What: /sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
+ What: /sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/inflight
  Date: November 2011
  KernelVersion: 3.3
  Contact: netdev@vger.kernel.org
···
  Indicates the number of bytes (objects) in flight on this
  network device transmit queue.

- What: /sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit
+ What: /sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit
  Date: November 2011
  KernelVersion: 3.3
  Contact: netdev@vger.kernel.org
···
  on this network device transmit queue. This value is clamped
  to be within the bounds defined by limit_max and limit_min.

- What: /sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
+ What: /sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_max
  Date: November 2011
  KernelVersion: 3.3
  Contact: netdev@vger.kernel.org
···
  queued on this network device transmit queue. See
  include/linux/dynamic_queue_limits.h for the default value.

- What: /sys/class/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
+ What: /sys/class/net/<iface>/queues/tx-<queue>/byte_queue_limits/limit_min
  Date: November 2011
  KernelVersion: 3.3
  Contact: netdev@vger.kernel.org
+1
Documentation/ABI/testing/sysfs-platform-silicom
···
  Date: November 2023
  KernelVersion: 6.7
  Contact: Henry Shi <henrys@silicom-usa.com>
+ Description:
  This file allow user to power cycle the platform.
  Default value is 0; when set to 1, it powers down
  the platform, waits 5 seconds, then powers on the
+2 -2
Documentation/accel/introduction.rst
···
  email threads
  -------------

- * `Initial discussion on the New subsystem for acceleration devices <https://lkml.org/lkml/2022/7/31/83>`_ - Oded Gabbay (2022)
- * `patch-set to add the new subsystem <https://lkml.org/lkml/2022/10/22/544>`_ - Oded Gabbay (2022)
+ * `Initial discussion on the New subsystem for acceleration devices <https://lore.kernel.org/lkml/CAFCwf11=9qpNAepL7NL+YAV_QO=Wv6pnWPhKHKAepK3fNn+2Dg@mail.gmail.com/>`_ - Oded Gabbay (2022)
+ * `patch-set to add the new subsystem <https://lore.kernel.org/lkml/20221022214622.18042-1-ogabbay@kernel.org/>`_ - Oded Gabbay (2022)

  Conference talks
  ----------------
-5
Documentation/admin-guide/kernel-parameters.rst
···

  .. include:: kernel-parameters.txt
     :literal:
-
- Todo
- ----
-
- Add more DRM drivers.
+6 -10
Documentation/admin-guide/kernel-per-CPU-kthreads.rst
···
  3. Do any of the following needed to avoid jitter that your
     application cannot tolerate:

-    a. Build your kernel with CONFIG_SLUB=y rather than
-       CONFIG_SLAB=y, thus avoiding the slab allocator's periodic
-       use of each CPU's workqueues to run its cache_reap()
-       function.
-    b. Avoid using oprofile, thus avoiding OS jitter from
+    a. Avoid using oprofile, thus avoiding OS jitter from
        wq_sync_buffer().
-    c. Limit your CPU frequency so that a CPU-frequency
+    b. Limit your CPU frequency so that a CPU-frequency
        governor is not required, possibly enlisting the aid of
        special heatsinks or other cooling technologies. If done
        correctly, and if you CPU architecture permits, you should
···
        WARNING: Please check your CPU specifications to
        make sure that this is safe on your particular system.
-    d. As of v3.18, Christoph Lameter's on-demand vmstat workers
+    c. As of v3.18, Christoph Lameter's on-demand vmstat workers
        commit prevents OS jitter due to vmstat_update() on
        CONFIG_SMP=y systems. Before v3.18, is not possible
        to entirely get rid of the OS jitter, but you can
···
        (based on an earlier one from Gilad Ben-Yossef) that
        reduces or even eliminates vmstat overhead for some
        workloads at https://lore.kernel.org/r/00000140e9dfd6bd-40db3d4f-c1be-434f-8132-7820f81bb586-000000@email.amazonses.com.
-    e. If running on high-end powerpc servers, build with
+    d. If running on high-end powerpc servers, build with
        CONFIG_PPC_RTAS_DAEMON=n. This prevents the RTAS
        daemon from running on each CPU every second or so.
        (This will require editing Kconfig files and will defeat
···
        due to the rtas_event_scan() function.
        WARNING: Please check your CPU specifications to
        make sure that this is safe on your particular system.
-    f. If running on Cell Processor, build your kernel with
+    e. If running on Cell Processor, build your kernel with
        CBE_CPUFREQ_SPU_GOVERNOR=n to avoid OS jitter from
        spu_gov_work().
        WARNING: Please check your CPU specifications to
        make sure that this is safe on your particular system.
-    g. If running on PowerMAC, build your kernel with
+    f. If running on PowerMAC, build your kernel with
        CONFIG_PMAC_RACKMETER=n to disable the CPU-meter,
        avoiding OS jitter from rackmeter_do_timer().
+17 -2
Documentation/dev-tools/kunit/usage.rst
···
  ------------------------

  If we do not want to expose functions or variables for testing, one option is to
- conditionally ``#include`` the test file at the end of your .c file. For
- example:
+ conditionally export the used symbol. For example:
+
+ .. code-block:: c
+
+    /* In my_file.c */
+
+    VISIBLE_IF_KUNIT int do_interesting_thing();
+    EXPORT_SYMBOL_IF_KUNIT(do_interesting_thing);
+
+    /* In my_file.h */
+
+    #if IS_ENABLED(CONFIG_KUNIT)
+    int do_interesting_thing(void);
+    #endif
+
+ Alternatively, you could conditionally ``#include`` the test file at the end of
+ your .c file. For example:

  .. code-block:: c
+3 -3
Documentation/devicetree/bindings/display/samsung/samsung,exynos-mixer.yaml
···
    clocks:
      minItems: 6
      maxItems: 6
-   regs:
+   reg:
      minItems: 2
      maxItems: 2
···
    clocks:
      minItems: 4
      maxItems: 4
-   regs:
+   reg:
      minItems: 2
      maxItems: 2
···
    clocks:
      minItems: 3
      maxItems: 3
-   regs:
+   reg:
      minItems: 1
      maxItems: 1
+2 -2
Documentation/devicetree/bindings/media/cnm,wave521c.yaml
···
  compatible:
    items:
      - enum:
-         - ti,k3-j721s2-wave521c
+         - ti,j721s2-wave521c
      - const: cnm,wave521c

  reg:
···
  examples:
    - |
      vpu: video-codec@12345678 {
-         compatible = "ti,k3-j721s2-wave521c", "cnm,wave521c";
+         compatible = "ti,j721s2-wave521c", "cnm,wave521c";
          reg = <0x12345678 0x1000>;
          clocks = <&clks 42>;
          interrupts = <42>;
+10
Documentation/netlink/specs/rt_link.yaml
···
      -
        name: gro-ipv4-max-size
        type: u32
+     -
+       name: dpll-pin
+       type: nest
+       nested-attributes: link-dpll-pin-attrs
      -
        name: af-spec-attrs
        attributes:
···
      -
        name: used
        type: u8
+ -
+   name: link-dpll-pin-attrs
+   attributes:
+     -
+       name: id
+       type: u32

  sub-messages:
    -
+3 -1
Documentation/sphinx/templates/kernel-toc.html
···
  <script type="text/javascript"> <!--
      var sbar = document.getElementsByClassName("sphinxsidebar")[0];
      let currents = document.getElementsByClassName("current")
-     sbar.scrollTop = currents[currents.length - 1].offsetTop;
+     if (currents.length) {
+         sbar.scrollTop = currents[currents.length - 1].offsetTop;
+     }
  --> </script>
+11 -12
MAINTAINERS
···
  ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS
  M: Corentin Chary <corentin.chary@gmail.com>
- L: acpi4asus-user@lists.sourceforge.net
+ M: Luke D. Jones <luke@ljones.dev>
  L: platform-driver-x86@vger.kernel.org
  S: Maintained
- W: http://acpi4asus.sf.net
+ W: https://asus-linux.org/
  F: drivers/platform/x86/asus*.c
  F: drivers/platform/x86/eeepc*.c
···
  F: drivers/platform/x86/dell/dell-wmi-descriptor.c

  DELL WMI HARDWARE PRIVACY SUPPORT
- M: Perry Yuan <Perry.Yuan@dell.com>
  L: Dell.Client.Kernel@dell.com
  L: platform-driver-x86@vger.kernel.org
  S: Maintained
···
  F: include/scsi/viosrp.h

  IBM Power Virtual SCSI Device Target Driver
- M: Michael Cyr <mikecyr@linux.ibm.com>
+ M: Tyrel Datwyler <tyreld@linux.ibm.com>
  L: linux-scsi@vger.kernel.org
  L: target-devel@vger.kernel.org
  S: Supported
···
  KERNEL UNIT TESTING FRAMEWORK (KUnit)
  M: Brendan Higgins <brendanhiggins@google.com>
  M: David Gow <davidgow@google.com>
+ R: Rae Moar <rmoar@google.com>
  L: linux-kselftest@vger.kernel.org
  L: kunit-dev@googlegroups.com
  S: Maintained
···
  L: linux-man@vger.kernel.org
  S: Maintained
  W: http://www.kernel.org/doc/man-pages
+ T: git git://git.kernel.org/pub/scm/docs/man-pages/man-pages.git
+ T: git git://www.alejandro-colomar.es/src/alx/linux/man-pages/man-pages.git

  MANAGEMENT COMPONENT TRANSPORT PROTOCOL (MCTP)
  M: Jeremy Kerr <jk@codeconstruct.com.au>
···
  F: drivers/connector/
  F: drivers/net/
  F: include/dt-bindings/net/
+ F: include/linux/cn_proc.h
  F: include/linux/etherdevice.h
  F: include/linux/fcdevice.h
  F: include/linux/fddidevice.h
···
  F: include/linux/if_*
  F: include/linux/inetdevice.h
  F: include/linux/netdevice.h
+ F: include/uapi/linux/cn_proc.h
  F: include/uapi/linux/if_*
  F: include/uapi/linux/netdevice.h
  X: drivers/net/wireless/
···
  QUALCOMM ETHQOS ETHERNET DRIVER
  M: Vinod Koul <vkoul@kernel.org>
- R: Bhupesh Sharma <bhupesh.sharma@linaro.org>
  L: netdev@vger.kernel.org
  L: linux-arm-msm@vger.kernel.org
  S: Maintained
···
  SPARC + UltraSPARC (sparc/sparc64)
  M: "David S. Miller" <davem@davemloft.net>
+ M: Andreas Larsson <andreas@gaisler.com>
  L: sparclinux@vger.kernel.org
  S: Maintained
  Q: http://patchwork.ozlabs.org/project/sparclinux/list/
···
  F: Documentation/filesystems/zonefs.rst
  F: fs/zonefs/

- ZPOOL COMPRESSED PAGE STORAGE API
- M: Dan Streetman <ddstreet@ieee.org>
- L: linux-mm@kvack.org
- S: Maintained
- F: include/linux/zpool.h
- F: mm/zpool.c
-
  ZR36067 VIDEO FOR LINUX DRIVER
  M: Corentin Labbe <clabbe@baylibre.com>
  L: mjpeg-users@lists.sourceforge.net
···
  L: linux-mm@kvack.org
  S: Maintained
  F: Documentation/admin-guide/mm/zswap.rst
+ F: include/linux/zpool.h
  F: include/linux/zswap.h
+ F: mm/zpool.c
  F: mm/zswap.c

  THE REST
+8 -8
Makefile
···
  VERSION = 6
  PATCHLEVEL = 8
  SUBLEVEL = 0
- EXTRAVERSION = -rc1
+ EXTRAVERSION = -rc2
  NAME = Hurr durr I'ma ninja sloth

  # *DOCUMENTATION*
···
  single-build :=

  ifneq ($(filter $(no-dot-config-targets), $(MAKECMDGOALS)),)
- ifeq ($(filter-out $(no-dot-config-targets), $(MAKECMDGOALS)),)
+   ifeq ($(filter-out $(no-dot-config-targets), $(MAKECMDGOALS)),)
      need-config :=
- endif
+   endif
  endif

  ifneq ($(filter $(no-sync-config-targets), $(MAKECMDGOALS)),)
- ifeq ($(filter-out $(no-sync-config-targets), $(MAKECMDGOALS)),)
+   ifeq ($(filter-out $(no-sync-config-targets), $(MAKECMDGOALS)),)
      may-sync-config :=
- endif
+   endif
  endif

  need-compiler := $(may-sync-config)
···
  # We cannot build single targets and the others at the same time
  ifneq ($(filter $(single-targets), $(MAKECMDGOALS)),)
      single-build := 1
- ifneq ($(filter-out $(single-targets), $(MAKECMDGOALS)),)
+   ifneq ($(filter-out $(single-targets), $(MAKECMDGOALS)),)
      mixed-build := 1
- endif
+   endif
  endif

  # For "make -j clean all", "make -j mrproper defconfig all", etc.
···
  @echo ' (sparse by default)'
  @echo ' make C=2 [targets] Force check of all c source with $$CHECK'
  @echo ' make RECORDMCOUNT_WARN=1 [targets] Warn about ignored mcount sections'
- @echo ' make W=n [targets] Enable extra build checks, n=1,2,3 where'
+ @echo ' make W=n [targets] Enable extra build checks, n=1,2,3,c,e where'
  @echo ' 1: warnings which may be relevant and do not occur too often'
  @echo ' 2: warnings which occur quite often but may still be relevant'
  @echo ' 3: more obscure warnings, can most likely be ignored'
+1
arch/Kconfig
···
  bool "Shadow Call Stack"
  depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
  depends on DYNAMIC_FTRACE_WITH_ARGS || DYNAMIC_FTRACE_WITH_REGS || !FUNCTION_GRAPH_TRACER
+ depends on MMU
  help
    This option enables the compiler's Shadow Call Stack, which
    uses a shadow stack to protect function return addresses from
+2 -2
arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-bletchley.dts
···
  num-chipselects = <1>;
  cs-gpios = <&gpio0 ASPEED_GPIO(Z, 0) GPIO_ACTIVE_LOW>;

- tpmdev@0 {
-     compatible = "tcg,tpm_tis-spi";
+ tpm@0 {
+     compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
      spi-max-frequency = <33000000>;
      reg = <0>;
  };
+2 -2
arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-wedge400.dts
···
  gpio-miso = <&gpio ASPEED_GPIO(R, 5) GPIO_ACTIVE_HIGH>;
  num-chipselects = <1>;

- tpmdev@0 {
-     compatible = "tcg,tpm_tis-spi";
+ tpm@0 {
+     compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
      spi-max-frequency = <33000000>;
      reg = <0>;
  };
+1 -1
arch/arm/boot/dts/aspeed/aspeed-bmc-opp-tacoma.dts
···
  status = "okay";

  tpm: tpm@2e {
-     compatible = "tcg,tpm-tis-i2c";
+     compatible = "nuvoton,npct75x", "tcg,tpm-tis-i2c";
      reg = <0x2e>;
  };
  };
+2 -2
arch/arm/boot/dts/aspeed/ast2600-facebook-netbmc-common.dtsi
···
  gpio-mosi = <&gpio0 ASPEED_GPIO(X, 4) GPIO_ACTIVE_HIGH>;
  gpio-miso = <&gpio0 ASPEED_GPIO(X, 5) GPIO_ACTIVE_HIGH>;

- tpmdev@0 {
-     compatible = "tcg,tpm_tis-spi";
+ tpm@0 {
+     compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
      spi-max-frequency = <33000000>;
      reg = <0>;
  };
+1 -1
arch/arm/boot/dts/nxp/imx/imx6ull-phytec-tauri.dtsi
···
  tpm_tis: tpm@1 {
      pinctrl-names = "default";
      pinctrl-0 = <&pinctrl_tpm>;
-     compatible = "tcg,tpm_tis-spi";
+     compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
      reg = <1>;
      spi-max-frequency = <20000000>;
      interrupt-parent = <&gpio5>;
+1 -1
arch/arm/boot/dts/nxp/imx/imx7d-flex-concentrator.dts
···
   * TCG specification - Section 6.4.1 Clocking:
   * TPM shall support a SPI clock frequency range of 10-24 MHz.
   */
- st33htph: tpm-tis@0 {
+ st33htph: tpm@0 {
      compatible = "st,st33htpm-spi", "tcg,tpm_tis-spi";
      reg = <0>;
      spi-max-frequency = <24000000>;
+1
arch/arm/boot/dts/samsung/exynos4212-tab3.dtsi
···
  };

  &fimd {
+     samsung,invert-vclk;
      status = "okay";
  };
+1 -1
arch/arm/boot/dts/ti/omap/am335x-moxa-uc-2100-common.dtsi
···
  pinctrl-names = "default";
  pinctrl-0 = <&spi1_pins>;

- tpm_spi_tis@0 {
+ tpm@0 {
      compatible = "tcg,tpm_tis-spi";
      reg = <0>;
      spi-max-frequency = <500000>;
+1 -1
arch/arm64/boot/dts/exynos/google/gs101.dtsi
···
  #clock-cells = <1>;
  clocks = <&cmu_top CLK_DOUT_CMU_MISC_BUS>,
           <&cmu_top CLK_DOUT_CMU_MISC_SSS>;
- clock-names = "dout_cmu_misc_bus", "dout_cmu_misc_sss";
+ clock-names = "bus", "sss";
  };

  watchdog_cl0: watchdog@10060000 {
+1 -1
arch/arm64/boot/dts/freescale/imx8mm-phygate-tauri-l.dts
···
  };

  tpm: tpm@1 {
-     compatible = "tcg,tpm_tis-spi";
+     compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
      interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
      interrupt-parent = <&gpio2>;
      pinctrl-names = "default";
+1 -1
arch/arm64/boot/dts/freescale/imx8mm-venice-gw72xx.dtsi
···
  status = "okay";

  tpm@1 {
-     compatible = "tcg,tpm_tis-spi";
+     compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
      reg = <0x1>;
      spi-max-frequency = <36000000>;
  };
+1 -1
arch/arm64/boot/dts/freescale/imx8mm-venice-gw73xx.dtsi
···
  status = "okay";

  tpm@1 {
-     compatible = "tcg,tpm_tis-spi";
+     compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
      reg = <0x1>;
      spi-max-frequency = <36000000>;
  };
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-beacon-kit.dts
···
  status = "okay";

  tpm: tpm@0 {
-     compatible = "infineon,slb9670";
+     compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
      reg = <0>;
      pinctrl-names = "default";
      pinctrl-0 = <&pinctrl_tpm>;
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-venice-gw72xx.dtsi
···
  status = "okay";

  tpm@1 {
-     compatible = "tcg,tpm_tis-spi";
+     compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
      reg = <0x1>;
      spi-max-frequency = <36000000>;
  };
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-venice-gw73xx.dtsi
···
  status = "okay";

  tpm@1 {
-     compatible = "tcg,tpm_tis-spi";
+     compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
      reg = <0x1>;
      spi-max-frequency = <36000000>;
  };
+1 -1
arch/arm64/boot/dts/freescale/imx8mp-venice-gw74xx.dts
···
  status = "okay";

  tpm@0 {
-     compatible = "tcg,tpm_tis-spi";
+     compatible = "atmel,attpm20p", "tcg,tpm_tis-spi";
      reg = <0x0>;
      spi-max-frequency = <36000000>;
  };
+1 -1
arch/arm64/boot/dts/freescale/imx8mq-kontron-pitx-imx8m.dts
···
  status = "okay";

  tpm@0 {
-     compatible = "infineon,slb9670";
+     compatible = "infineon,slb9670", "tcg,tpm_tis-spi";
      reg = <0>;
      spi-max-frequency = <43000000>;
  };
+1 -1
arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
···
  status = "okay";
  cs-gpios = <&pio 86 GPIO_ACTIVE_LOW>;

- cr50@0 {
+ tpm@0 {
      compatible = "google,cr50";
      reg = <0>;
      spi-max-frequency = <1000000>;
+1 -1
arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
···
  pinctrl-names = "default";
  pinctrl-0 = <&spi5_pins>;

- cr50@0 {
+ tpm@0 {
      compatible = "google,cr50";
      reg = <0>;
      interrupts-extended = <&pio 171 IRQ_TYPE_EDGE_RISING>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-gru-bob.dts
···
  &spi0 {
  status = "okay";

- cr50@0 {
+ tpm@0 {
      compatible = "google,cr50";
      reg = <0>;
      interrupt-parent = <&gpio0>;
+1 -1
arch/arm64/boot/dts/rockchip/rk3399-gru-scarlet.dtsi
···
  &spi2 {
  status = "okay";

- cr50@0 {
+ tpm@0 {
      compatible = "google,cr50";
      reg = <0>;
      interrupt-parent = <&gpio1>;
+2 -2
arch/loongarch/include/asm/kvm_vcpu.h
···
  void kvm_save_lsx(struct loongarch_fpu *fpu);
  void kvm_restore_lsx(struct loongarch_fpu *fpu);
  #else
- static inline int kvm_own_lsx(struct kvm_vcpu *vcpu) { }
+ static inline int kvm_own_lsx(struct kvm_vcpu *vcpu) { return -EINVAL; }
  static inline void kvm_save_lsx(struct loongarch_fpu *fpu) { }
  static inline void kvm_restore_lsx(struct loongarch_fpu *fpu) { }
  #endif
···
  void kvm_save_lasx(struct loongarch_fpu *fpu);
  void kvm_restore_lasx(struct loongarch_fpu *fpu);
  #else
- static inline int kvm_own_lasx(struct kvm_vcpu *vcpu) { }
+ static inline int kvm_own_lasx(struct kvm_vcpu *vcpu) { return -EINVAL; }
  static inline void kvm_save_lasx(struct loongarch_fpu *fpu) { }
  static inline void kvm_restore_lasx(struct loongarch_fpu *fpu) { }
  #endif
-1
arch/loongarch/kernel/smp.c
···
  sync_counter();
  cpu = raw_smp_processor_id();
  set_my_cpu_offset(per_cpu_offset(cpu));
- rcutree_report_cpu_starting(cpu);

  cpu_probe();
  constant_clockevent_init();
+2 -2
arch/loongarch/kvm/mmu.c
···
   *
   * There are several ways to safely use this helper:
   *
-  * - Check mmu_invalidate_retry_hva() after grabbing the mapping level, before
+  * - Check mmu_invalidate_retry_gfn() after grabbing the mapping level, before
   *   consuming it. In this case, mmu_lock doesn't need to be held during the
   *   lookup, but it does need to be held while checking the MMU notifier.
   *
···
  /* Check if an invalidation has taken place since we got pfn */
  spin_lock(&kvm->mmu_lock);
- if (mmu_invalidate_retry_hva(kvm, mmu_seq, hva)) {
+ if (mmu_invalidate_retry_gfn(kvm, mmu_seq, gfn)) {
      /*
       * This can happen when mappings are changed asynchronously, but
       * also synchronously if a COW is triggered by
+10 -6
arch/loongarch/mm/tlb.c
···
      set_handler(EXCCODE_TLBNR * VECSIZE, handle_tlb_protect, VECSIZE);
      set_handler(EXCCODE_TLBNX * VECSIZE, handle_tlb_protect, VECSIZE);
      set_handler(EXCCODE_TLBPE * VECSIZE, handle_tlb_protect, VECSIZE);
- }
+ } else {
+     int vec_sz __maybe_unused;
+     void *addr __maybe_unused;
+     struct page *page __maybe_unused;
+
+     /* Avoid lockdep warning */
+     rcutree_report_cpu_starting(cpu);
+
  #ifdef CONFIG_NUMA
- else {
-     void *addr;
-     struct page *page;
-     const int vec_sz = sizeof(exception_handlers);
+     vec_sz = sizeof(exception_handlers);

      if (pcpu_handlers[cpu])
          return;
···
      csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_EENTRY);
      csr_write64(pcpu_handlers[cpu], LOONGARCH_CSR_MERRENTRY);
      csr_write64(pcpu_handlers[cpu] + 80*VECSIZE, LOONGARCH_CSR_TLBRENTRY);
- }
  #endif
+ }
  }

  void tlb_init(int cpu)
+2 -2
arch/m68k/Makefile
···
  KBUILD_DEFCONFIG := multi_defconfig

  ifdef cross_compiling
- ifeq ($(CROSS_COMPILE),)
+   ifeq ($(CROSS_COMPILE),)
      CROSS_COMPILE := $(call cc-cross-prefix, \
          m68k-linux-gnu- m68k-linux- m68k-unknown-linux-gnu-)
- endif
+   endif
  endif

  #
+1
arch/mips/alchemy/common/prom.c
···
  #include <linux/string.h>

  #include <asm/bootinfo.h>
+ #include <prom.h>

  int prom_argc;
  char **prom_argv;
+1 -3
arch/mips/alchemy/common/setup.c
···
  #include <linux/mm.h>
  #include <linux/dma-map-ops.h> /* for dma_default_coherent */

+ #include <asm/bootinfo.h>
  #include <asm/mipsregs.h>

  #include <au1000.h>
-
- extern void __init board_setup(void);
- extern void __init alchemy_set_lpj(void);

  static bool alchemy_dma_coherent(void)
  {
+1 -1
arch/mips/bcm63xx/boards/board_bcm963xx.c
···
  .boardflags_hi = 0x0000,
  };

- int bcm63xx_get_fallback_sprom(struct ssb_bus *bus, struct ssb_sprom *out)
+ static int bcm63xx_get_fallback_sprom(struct ssb_bus *bus, struct ssb_sprom *out)
  {
      if (bus->bustype == SSB_BUSTYPE_PCI) {
          memcpy(out, &bcm63xx_sprom, sizeof(struct ssb_sprom));
+1 -1
arch/mips/bcm63xx/dev-rng.c
···
  .resource = rng_resources,
  };

- int __init bcm63xx_rng_register(void)
+ static int __init bcm63xx_rng_register(void)
  {
      if (!BCMCPU_IS_6368())
          return -ENODEV;
+1
arch/mips/bcm63xx/dev-uart.c
···
  #include <linux/kernel.h>
  #include <linux/platform_device.h>
  #include <bcm63xx_cpu.h>
+ #include <bcm63xx_dev_uart.h>

  static struct resource uart0_resources[] = {
  {
+1 -1
arch/mips/bcm63xx/dev-wdt.c
···
  },
  };

- int __init bcm63xx_wdt_register(void)
+ static int __init bcm63xx_wdt_register(void)
  {
      wdt_resources[0].start = bcm63xx_regset_address(RSET_WDT);
      wdt_resources[0].end = wdt_resources[0].start;
+1 -1
arch/mips/bcm63xx/irq.c
···
   */

  #define BUILD_IPIC_INTERNAL(width) \
- void __dispatch_internal_##width(int cpu) \
+ static void __dispatch_internal_##width(int cpu) \
  { \
      u32 pending[width / 32]; \
      unsigned int src, tgt; \
+1 -1
arch/mips/bcm63xx/setup.c
···
  board_setup();
  }

- int __init bcm63xx_register_devices(void)
+ static int __init bcm63xx_register_devices(void)
  {
      /* register gpiochip */
      bcm63xx_gpio_init();
+1 -1
arch/mips/bcm63xx/timer.c
···

  EXPORT_SYMBOL(bcm63xx_timer_set);

- int bcm63xx_timer_init(void)
+ static int bcm63xx_timer_init(void)
  {
      int ret, irq;
      u32 reg;
-3
arch/mips/cobalt/setup.c
···

  #include <cobalt.h>

- extern void cobalt_machine_restart(char *command);
- extern void cobalt_machine_halt(void);
-
  const char *get_system_type(void)
  {
      switch (cobalt_board_id) {
+1 -1
arch/mips/fw/arc/memory.c
···
   */
  #define ARC_PAGE_SHIFT 12

- struct linux_mdesc * __init ArcGetMemoryDescriptor(struct linux_mdesc *Current)
+ static struct linux_mdesc * __init ArcGetMemoryDescriptor(struct linux_mdesc *Current)
  {
      return (struct linux_mdesc *) ARC_CALL1(get_mdesc, Current);
  }
+3
arch/mips/include/asm/mach-au1x00/au1000.h
···

  #include <asm/cpu.h>

+ void alchemy_set_lpj(void);
+ void board_setup(void);
+
  /* helpers to access the SYS_* registers */
  static inline unsigned long alchemy_rdsys(int regofs)
  {
+3
arch/mips/include/asm/mach-cobalt/cobalt.h
···
  #define COBALT_BRD_ID_QUBE2 0x5
  #define COBALT_BRD_ID_RAQ2 0x6

+ void cobalt_machine_halt(void);
+ void cobalt_machine_restart(char *command);
+
  #endif /* __ASM_COBALT_H */
+6
arch/mips/kernel/elf.c
···

  #include <asm/cpu-features.h>
  #include <asm/cpu-info.h>
+ #include <asm/fpu.h>

  #ifdef CONFIG_MIPS_FP_SUPPORT
···
  {
      struct cpuinfo_mips *c = &boot_cpu_data;
      struct task_struct *t = current;
+
+     /* Do this early so t->thread.fpu.fcr31 won't be clobbered in case
+      * we are preempted before the lose_fpu(0) in start_thread.
+      */
+     lose_fpu(0);

      t->thread.fpu.fcr31 = c->fpu_csr31;
      switch (state->nan_2008) {
+7 -1
arch/mips/kernel/traps.c
···

  void reserve_exception_space(phys_addr_t addr, unsigned long size)
  {
-     memblock_reserve(addr, size);
+     /*
+      * reserve exception space on CPUs other than CPU0
+      * is too late, since memblock is unavailable when APs
+      * up
+      */
+     if (smp_processor_id() == 0)
+         memblock_reserve(addr, size);
  }

  void __init *set_except_vector(int n, void *addr)
+3 -4
arch/mips/lantiq/prom.c
···
  prom_init_cmdline();

  #if defined(CONFIG_MIPS_MT_SMP)
- if (cpu_has_mipsmt) {
-     lantiq_smp_ops = vsmp_smp_ops;
+ lantiq_smp_ops = vsmp_smp_ops;
+ if (cpu_has_mipsmt)
      lantiq_smp_ops.init_secondary = lantiq_init_secondary;
-     register_smp_ops(&lantiq_smp_ops);
- }
+ register_smp_ops(&lantiq_smp_ops);
  #endif
  }
+3
arch/mips/loongson64/init.c
···
  if (loongson_sysconf.vgabios_addr)
      memblock_reserve(virt_to_phys((void *)loongson_sysconf.vgabios_addr),
          SZ_256K);
+ /* set nid for reserved memory */
+ memblock_set_node((u64)node << 44, (u64)(node + 1) << 44,
+     &memblock.reserved, node);
  }

  #ifndef CONFIG_NUMA
+2
arch/mips/loongson64/numa.c
···

  /* Reserve pfn range 0~node[0]->node_start_pfn */
  memblock_reserve(0, PAGE_SIZE * start_pfn);
+ /* set nid for reserved memory on node 0 */
+ memblock_set_node(0, 1ULL << 44, &memblock.reserved, 0);
  }
  }
+1 -1
arch/mips/sgi-ip27/Makefile
··· 5 5 6 6 obj-y := ip27-berr.o ip27-irq.o ip27-init.o ip27-klconfig.o \ 7 7 ip27-klnuma.o ip27-memory.o ip27-nmi.o ip27-reset.o ip27-timer.o \ 8 - ip27-hubio.o ip27-xtalk.o 8 + ip27-xtalk.o 9 9 10 10 obj-$(CONFIG_EARLY_PRINTK) += ip27-console.o 11 11 obj-$(CONFIG_SMP) += ip27-smp.o
+3 -1
arch/mips/sgi-ip27/ip27-berr.c
··· 22 22 #include <asm/traps.h> 23 23 #include <linux/uaccess.h> 24 24 25 + #include "ip27-common.h" 26 + 25 27 static void dump_hub_information(unsigned long errst0, unsigned long errst1) 26 28 { 27 29 static char *err_type[2][8] = { ··· 59 57 [st0.pi_stat0_fmt.s0_err_type] ? : "invalid"); 60 58 } 61 59 62 - int ip27_be_handler(struct pt_regs *regs, int is_fixup) 60 + static int ip27_be_handler(struct pt_regs *regs, int is_fixup) 63 61 { 64 62 unsigned long errst0, errst1; 65 63 int data = regs->cp0_cause & 4;
+2
arch/mips/sgi-ip27/ip27-common.h
··· 10 10 extern void hub_rtc_init(nasid_t nasid); 11 11 extern void install_cpu_nmi_handler(int slice); 12 12 extern void install_ipi(void); 13 + extern void ip27_be_init(void); 13 14 extern void ip27_reboot_setup(void); 14 15 extern const struct plat_smp_ops ip27_smp_ops; 15 16 extern unsigned long node_getfirstfree(nasid_t nasid); 16 17 extern void per_cpu_init(void); 17 18 extern void replicate_kernel_text(void); 18 19 extern void setup_replication_mask(void); 20 + 19 21 20 22 #endif /* __IP27_COMMON_H */
-185
arch/mips/sgi-ip27/ip27-hubio.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. 4 - * Copyright (C) 2004 Christoph Hellwig. 5 - * 6 - * Support functions for the HUB ASIC - mostly PIO mapping related. 7 - */ 8 - 9 - #include <linux/bitops.h> 10 - #include <linux/string.h> 11 - #include <linux/mmzone.h> 12 - #include <asm/sn/addrs.h> 13 - #include <asm/sn/arch.h> 14 - #include <asm/sn/agent.h> 15 - #include <asm/sn/io.h> 16 - #include <asm/xtalk/xtalk.h> 17 - 18 - 19 - static int force_fire_and_forget = 1; 20 - 21 - /** 22 - * hub_pio_map - establish a HUB PIO mapping 23 - * 24 - * @nasid: nasid to perform PIO mapping on 25 - * @widget: widget ID to perform PIO mapping for 26 - * @xtalk_addr: xtalk_address that needs to be mapped 27 - * @size: size of the PIO mapping 28 - * 29 - **/ 30 - unsigned long hub_pio_map(nasid_t nasid, xwidgetnum_t widget, 31 - unsigned long xtalk_addr, size_t size) 32 - { 33 - unsigned i; 34 - 35 - /* use small-window mapping if possible */ 36 - if ((xtalk_addr % SWIN_SIZE) + size <= SWIN_SIZE) 37 - return NODE_SWIN_BASE(nasid, widget) + (xtalk_addr % SWIN_SIZE); 38 - 39 - if ((xtalk_addr % BWIN_SIZE) + size > BWIN_SIZE) { 40 - printk(KERN_WARNING "PIO mapping at hub %d widget %d addr 0x%lx" 41 - " too big (%ld)\n", 42 - nasid, widget, xtalk_addr, size); 43 - return 0; 44 - } 45 - 46 - xtalk_addr &= ~(BWIN_SIZE-1); 47 - for (i = 0; i < HUB_NUM_BIG_WINDOW; i++) { 48 - if (test_and_set_bit(i, hub_data(nasid)->h_bigwin_used)) 49 - continue; 50 - 51 - /* 52 - * The code below does a PIO write to setup an ITTE entry. 53 - * 54 - * We need to prevent other CPUs from seeing our updated 55 - * memory shadow of the ITTE (in the piomap) until the ITTE 56 - * entry is actually set up; otherwise, another CPU might 57 - * attempt a PIO prematurely. 
58 - * 59 - * Also, the only way we can know that an entry has been 60 - * received by the hub and can be used by future PIO reads/ 61 - * writes is by reading back the ITTE entry after writing it. 62 - * 63 - * For these two reasons, we PIO read back the ITTE entry 64 - * after we write it. 65 - */ 66 - IIO_ITTE_PUT(nasid, i, HUB_PIO_MAP_TO_MEM, widget, xtalk_addr); 67 - __raw_readq(IIO_ITTE_GET(nasid, i)); 68 - 69 - return NODE_BWIN_BASE(nasid, widget) + (xtalk_addr % BWIN_SIZE); 70 - } 71 - 72 - printk(KERN_WARNING "unable to establish PIO mapping for at" 73 - " hub %d widget %d addr 0x%lx\n", 74 - nasid, widget, xtalk_addr); 75 - return 0; 76 - } 77 - 78 - 79 - /* 80 - * hub_setup_prb(nasid, prbnum, credits, conveyor) 81 - * 82 - * Put a PRB into fire-and-forget mode if conveyor isn't set. Otherwise, 83 - * put it into conveyor belt mode with the specified number of credits. 84 - */ 85 - static void hub_setup_prb(nasid_t nasid, int prbnum, int credits) 86 - { 87 - union iprb_u prb; 88 - int prb_offset; 89 - 90 - /* 91 - * Get the current register value. 92 - */ 93 - prb_offset = IIO_IOPRB(prbnum); 94 - prb.iprb_regval = REMOTE_HUB_L(nasid, prb_offset); 95 - 96 - /* 97 - * Clear out some fields. 98 - */ 99 - prb.iprb_ovflow = 1; 100 - prb.iprb_bnakctr = 0; 101 - prb.iprb_anakctr = 0; 102 - 103 - /* 104 - * Enable or disable fire-and-forget mode. 105 - */ 106 - prb.iprb_ff = force_fire_and_forget ? 1 : 0; 107 - 108 - /* 109 - * Set the appropriate number of PIO credits for the widget. 110 - */ 111 - prb.iprb_xtalkctr = credits; 112 - 113 - /* 114 - * Store the new value to the register. 115 - */ 116 - REMOTE_HUB_S(nasid, prb_offset, prb.iprb_regval); 117 - } 118 - 119 - /** 120 - * hub_set_piomode - set pio mode for a given hub 121 - * 122 - * @nasid: physical node ID for the hub in question 123 - * 124 - * Put the hub into either "PIO conveyor belt" mode or "fire-and-forget" mode. 
125 - * To do this, we have to make absolutely sure that no PIOs are in progress 126 - * so we turn off access to all widgets for the duration of the function. 127 - * 128 - * XXX - This code should really check what kind of widget we're talking 129 - * to. Bridges can only handle three requests, but XG will do more. 130 - * How many can crossbow handle to widget 0? We're assuming 1. 131 - * 132 - * XXX - There is a bug in the crossbow that link reset PIOs do not 133 - * return write responses. The easiest solution to this problem is to 134 - * leave widget 0 (xbow) in fire-and-forget mode at all times. This 135 - * only affects pio's to xbow registers, which should be rare. 136 - **/ 137 - static void hub_set_piomode(nasid_t nasid) 138 - { 139 - u64 ii_iowa; 140 - union hubii_wcr_u ii_wcr; 141 - unsigned i; 142 - 143 - ii_iowa = REMOTE_HUB_L(nasid, IIO_OUTWIDGET_ACCESS); 144 - REMOTE_HUB_S(nasid, IIO_OUTWIDGET_ACCESS, 0); 145 - 146 - ii_wcr.wcr_reg_value = REMOTE_HUB_L(nasid, IIO_WCR); 147 - 148 - if (ii_wcr.iwcr_dir_con) { 149 - /* 150 - * Assume a bridge here. 151 - */ 152 - hub_setup_prb(nasid, 0, 3); 153 - } else { 154 - /* 155 - * Assume a crossbow here. 156 - */ 157 - hub_setup_prb(nasid, 0, 1); 158 - } 159 - 160 - /* 161 - * XXX - Here's where we should take the widget type into 162 - * when account assigning credits. 163 - */ 164 - for (i = HUB_WIDGET_ID_MIN; i <= HUB_WIDGET_ID_MAX; i++) 165 - hub_setup_prb(nasid, i, 3); 166 - 167 - REMOTE_HUB_S(nasid, IIO_OUTWIDGET_ACCESS, ii_iowa); 168 - } 169 - 170 - /* 171 - * hub_pio_init - PIO-related hub initialization 172 - * 173 - * @hub: hubinfo structure for our hub 174 - */ 175 - void hub_pio_init(nasid_t nasid) 176 - { 177 - unsigned i; 178 - 179 - /* initialize big window piomaps for this hub */ 180 - bitmap_zero(hub_data(nasid)->h_bigwin_used, HUB_NUM_BIG_WINDOW); 181 - for (i = 0; i < HUB_NUM_BIG_WINDOW; i++) 182 - IIO_ITTE_DISABLE(nasid, i); 183 - 184 - hub_set_piomode(nasid); 185 - }
+2
arch/mips/sgi-ip27/ip27-irq.c
··· 23 23 #include <asm/sn/intr.h> 24 24 #include <asm/sn/irq_alloc.h> 25 25 26 + #include "ip27-common.h" 27 + 26 28 struct hub_irq_data { 27 29 u64 *irq_mask[2]; 28 30 cpuid_t cpu;
+1
arch/mips/sgi-ip27/ip27-memory.c
··· 23 23 #include <asm/page.h> 24 24 #include <asm/pgalloc.h> 25 25 #include <asm/sections.h> 26 + #include <asm/sgialib.h> 26 27 27 28 #include <asm/sn/arch.h> 28 29 #include <asm/sn/agent.h>
+8 -17
arch/mips/sgi-ip27/ip27-nmi.c
··· 11 11 #include <asm/sn/arch.h> 12 12 #include <asm/sn/agent.h> 13 13 14 + #include "ip27-common.h" 15 + 14 16 #if 0 15 17 #define NODE_NUM_CPUS(n) CNODE_NUM_CPUS(n) 16 18 #else ··· 25 23 typedef unsigned long machreg_t; 26 24 27 25 static arch_spinlock_t nmi_lock = __ARCH_SPIN_LOCK_UNLOCKED; 28 - 29 - /* 30 - * Let's see what else we need to do here. Set up sp, gp? 31 - */ 32 - void nmi_dump(void) 33 - { 34 - void cont_nmi_dump(void); 35 - 36 - cont_nmi_dump(); 37 - } 26 + static void nmi_dump(void); 38 27 39 28 void install_cpu_nmi_handler(int slice) 40 29 { ··· 46 53 * into the eframe format for the node under consideration. 47 54 */ 48 55 49 - void nmi_cpu_eframe_save(nasid_t nasid, int slice) 56 + static void nmi_cpu_eframe_save(nasid_t nasid, int slice) 50 57 { 51 58 struct reg_struct *nr; 52 59 int i; ··· 122 129 pr_emerg("\n"); 123 130 } 124 131 125 - void nmi_dump_hub_irq(nasid_t nasid, int slice) 132 + static void nmi_dump_hub_irq(nasid_t nasid, int slice) 126 133 { 127 134 u64 mask0, mask1, pend0, pend1; 128 135 ··· 146 153 * Copy the cpu registers which have been saved in the IP27prom format 147 154 * into the eframe format for the node under consideration. 148 155 */ 149 - void nmi_node_eframe_save(nasid_t nasid) 156 + static void nmi_node_eframe_save(nasid_t nasid) 150 157 { 151 158 int slice; 152 159 ··· 163 170 /* 164 171 * Save the nmi cpu registers for all cpus in the system. 165 172 */ 166 - void 167 - nmi_eframes_save(void) 173 + static void nmi_eframes_save(void) 168 174 { 169 175 nasid_t nasid; 170 176 ··· 171 179 nmi_node_eframe_save(nasid); 172 180 } 173 181 174 - void 175 - cont_nmi_dump(void) 182 + static void nmi_dump(void) 176 183 { 177 184 #ifndef REAL_NMI_SIGNAL 178 185 static atomic_t nmied_cpus = ATOMIC_INIT(0);
+1
arch/mips/sgi-ip30/ip30-console.c
··· 3 3 #include <linux/io.h> 4 4 5 5 #include <asm/sn/ioc3.h> 6 + #include <asm/setup.h> 6 7 7 8 static inline struct ioc3_uartregs *console_uart(void) 8 9 {
+1
arch/mips/sgi-ip30/ip30-setup.c
··· 14 14 #include <linux/percpu.h> 15 15 #include <linux/memblock.h> 16 16 17 + #include <asm/bootinfo.h> 17 18 #include <asm/smp-ops.h> 18 19 #include <asm/sgialib.h> 19 20 #include <asm/time.h>
+4 -2
arch/mips/sgi-ip32/crime.c
··· 18 18 #include <asm/ip32/crime.h> 19 19 #include <asm/ip32/mace.h> 20 20 21 + #include "ip32-common.h" 22 + 21 23 struct sgi_crime __iomem *crime; 22 24 struct sgi_mace __iomem *mace; 23 25 ··· 41 39 id, rev, field, (unsigned long) CRIME_BASE); 42 40 } 43 41 44 - irqreturn_t crime_memerr_intr(unsigned int irq, void *dev_id) 42 + irqreturn_t crime_memerr_intr(int irq, void *dev_id) 45 43 { 46 44 unsigned long stat, addr; 47 45 int fatal = 0; ··· 92 90 return IRQ_HANDLED; 93 91 } 94 92 95 - irqreturn_t crime_cpuerr_intr(unsigned int irq, void *dev_id) 93 + irqreturn_t crime_cpuerr_intr(int irq, void *dev_id) 96 94 { 97 95 unsigned long stat = crime->cpu_error_stat & CRIME_CPU_ERROR_MASK; 98 96 unsigned long addr = crime->cpu_error_addr & CRIME_CPU_ERROR_ADDR_MASK;
+2
arch/mips/sgi-ip32/ip32-berr.c
··· 18 18 #include <asm/ptrace.h> 19 19 #include <asm/tlbdebug.h> 20 20 21 + #include "ip32-common.h" 22 + 21 23 static int ip32_be_handler(struct pt_regs *regs, int is_fixup) 22 24 { 23 25 int data = regs->cp0_cause & 4;
+15
arch/mips/sgi-ip32/ip32-common.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef __IP32_COMMON_H 4 + #define __IP32_COMMON_H 5 + 6 + #include <linux/init.h> 7 + #include <linux/interrupt.h> 8 + 9 + void __init crime_init(void); 10 + irqreturn_t crime_memerr_intr(int irq, void *dev_id); 11 + irqreturn_t crime_cpuerr_intr(int irq, void *dev_id); 12 + void __init ip32_be_init(void); 13 + void ip32_prepare_poweroff(void); 14 + 15 + #endif /* __IP32_COMMON_H */
+2 -4
arch/mips/sgi-ip32/ip32-irq.c
··· 28 28 #include <asm/ip32/mace.h> 29 29 #include <asm/ip32/ip32_ints.h> 30 30 31 + #include "ip32-common.h" 32 + 31 33 /* issue a PIO read to make sure no PIO writes are pending */ 32 34 static inline void flush_crime_bus(void) 33 35 { ··· 108 106 * different IRQ map than IRIX uses, but that's OK as Linux irq handling 109 107 * is quite different anyway. 110 108 */ 111 - 112 - /* Some initial interrupts to set up */ 113 - extern irqreturn_t crime_memerr_intr(int irq, void *dev_id); 114 - extern irqreturn_t crime_cpuerr_intr(int irq, void *dev_id); 115 109 116 110 /* 117 111 * This is for pure CRIME interrupts - ie not MACE. The advantage?
+1
arch/mips/sgi-ip32/ip32-memory.c
··· 15 15 #include <asm/ip32/crime.h> 16 16 #include <asm/bootinfo.h> 17 17 #include <asm/page.h> 18 + #include <asm/sgialib.h> 18 19 19 20 extern void crime_init(void); 20 21
+2
arch/mips/sgi-ip32/ip32-reset.c
··· 29 29 #include <asm/ip32/crime.h> 30 30 #include <asm/ip32/ip32_ints.h> 31 31 32 + #include "ip32-common.h" 33 + 32 34 #define POWERDOWN_TIMEOUT 120 33 35 /* 34 36 * Blink frequency during reboot grace period and when panicked.
+1 -2
arch/mips/sgi-ip32/ip32-setup.c
··· 26 26 #include <asm/ip32/mace.h> 27 27 #include <asm/ip32/ip32_ints.h> 28 28 29 - extern void ip32_be_init(void); 30 - extern void crime_init(void); 29 + #include "ip32-common.h" 31 30 32 31 #ifdef CONFIG_SGI_O2MACE_ETH 33 32 /*
-1
arch/parisc/Kconfig
··· 25 25 select RTC_DRV_GENERIC 26 26 select INIT_ALL_POSSIBLE 27 27 select BUG 28 - select BUILDTIME_TABLE_SORT 29 28 select HAVE_KERNEL_UNCOMPRESSED 30 29 select HAVE_PCI 31 30 select HAVE_PERF_EVENTS
+2 -2
arch/parisc/Makefile
··· 50 50 51 51 # Set default cross compiler for kernel build 52 52 ifdef cross_compiling 53 - ifeq ($(CROSS_COMPILE),) 53 + ifeq ($(CROSS_COMPILE),) 54 54 CC_SUFFIXES = linux linux-gnu unknown-linux-gnu suse-linux 55 55 CROSS_COMPILE := $(call cc-cross-prefix, \ 56 56 $(foreach a,$(CC_ARCHES), \ 57 57 $(foreach s,$(CC_SUFFIXES),$(a)-$(s)-))) 58 - endif 58 + endif 59 59 endif 60 60 61 61 ifdef CONFIG_DYNAMIC_FTRACE
+1
arch/parisc/include/asm/assembly.h
··· 576 576 .section __ex_table,"aw" ! \ 577 577 .align 4 ! \ 578 578 .word (fault_addr - .), (except_addr - .) ! \ 579 + or %r0,%r0,%r0 ! \ 579 580 .previous 580 581 581 582
+64
arch/parisc/include/asm/extable.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + #ifndef __PARISC_EXTABLE_H 3 + #define __PARISC_EXTABLE_H 4 + 5 + #include <asm/ptrace.h> 6 + #include <linux/compiler.h> 7 + 8 + /* 9 + * The exception table consists of three addresses: 10 + * 11 + * - A relative address to the instruction that is allowed to fault. 12 + * - A relative address at which the program should continue (fixup routine) 13 + * - An asm statement which specifies which CPU register will 14 + * receive -EFAULT when an exception happens if the lowest bit in 15 + * the fixup address is set. 16 + * 17 + * Note: The register specified in the err_opcode instruction will be 18 + * modified at runtime if a fault happens. Register %r0 will be ignored. 19 + * 20 + * Since relative addresses are used, 32bit values are sufficient even on 21 + * 64bit kernel. 22 + */ 23 + 24 + struct pt_regs; 25 + int fixup_exception(struct pt_regs *regs); 26 + 27 + #define ARCH_HAS_RELATIVE_EXTABLE 28 + struct exception_table_entry { 29 + int insn; /* relative address of insn that is allowed to fault. */ 30 + int fixup; /* relative address of fixup routine */ 31 + int err_opcode; /* sample opcode with register which holds error code */ 32 + }; 33 + 34 + #define ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr, opcode )\ 35 + ".section __ex_table,\"aw\"\n" \ 36 + ".align 4\n" \ 37 + ".word (" #fault_addr " - .), (" #except_addr " - .)\n" \ 38 + opcode "\n" \ 39 + ".previous\n" 40 + 41 + /* 42 + * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() creates a special exception table entry 43 + * (with lowest bit set) for which the fault handler in fixup_exception() will 44 + * load -EFAULT on fault into the register specified by the err_opcode instruction, 45 + * and zeroes the target register in case of a read fault in get_user(). 
46 + */ 47 + #define ASM_EXCEPTIONTABLE_VAR(__err_var) \ 48 + int __err_var = 0 49 + #define ASM_EXCEPTIONTABLE_ENTRY_EFAULT( fault_addr, except_addr, register )\ 50 + ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr + 1, "or %%r0,%%r0," register) 51 + 52 + static inline void swap_ex_entry_fixup(struct exception_table_entry *a, 53 + struct exception_table_entry *b, 54 + struct exception_table_entry tmp, 55 + int delta) 56 + { 57 + a->fixup = b->fixup + delta; 58 + b->fixup = tmp.fixup - delta; 59 + a->err_opcode = b->err_opcode; 60 + b->err_opcode = tmp.err_opcode; 61 + } 62 + #define swap_ex_entry_fixup swap_ex_entry_fixup 63 + 64 + #endif
+4 -2
arch/parisc/include/asm/special_insns.h
··· 8 8 "copy %%r0,%0\n" \ 9 9 "8:\tlpa %%r0(%1),%0\n" \ 10 10 "9:\n" \ 11 - ASM_EXCEPTIONTABLE_ENTRY(8b, 9b) \ 11 + ASM_EXCEPTIONTABLE_ENTRY(8b, 9b, \ 12 + "or %%r0,%%r0,%%r0") \ 12 13 : "=&r" (pa) \ 13 14 : "r" (va) \ 14 15 : "memory" \ ··· 23 22 "copy %%r0,%0\n" \ 24 23 "8:\tlpa %%r0(%%sr3,%1),%0\n" \ 25 24 "9:\n" \ 26 - ASM_EXCEPTIONTABLE_ENTRY(8b, 9b) \ 25 + ASM_EXCEPTIONTABLE_ENTRY(8b, 9b, \ 26 + "or %%r0,%%r0,%%r0") \ 27 27 : "=&r" (pa) \ 28 28 : "r" (va) \ 29 29 : "memory" \
+7 -41
arch/parisc/include/asm/uaccess.h
··· 7 7 */ 8 8 #include <asm/page.h> 9 9 #include <asm/cache.h> 10 + #include <asm/extable.h> 10 11 11 12 #include <linux/bug.h> 12 13 #include <linux/string.h> ··· 26 25 #define LDD_USER(sr, val, ptr) __get_user_asm(sr, val, "ldd", ptr) 27 26 #define STD_USER(sr, x, ptr) __put_user_asm(sr, "std", x, ptr) 28 27 #endif 29 - 30 - /* 31 - * The exception table contains two values: the first is the relative offset to 32 - * the address of the instruction that is allowed to fault, and the second is 33 - * the relative offset to the address of the fixup routine. Since relative 34 - * addresses are used, 32bit values are sufficient even on 64bit kernel. 35 - */ 36 - 37 - #define ARCH_HAS_RELATIVE_EXTABLE 38 - struct exception_table_entry { 39 - int insn; /* relative address of insn that is allowed to fault. */ 40 - int fixup; /* relative address of fixup routine */ 41 - }; 42 - 43 - #define ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr )\ 44 - ".section __ex_table,\"aw\"\n" \ 45 - ".align 4\n" \ 46 - ".word (" #fault_addr " - .), (" #except_addr " - .)\n\t" \ 47 - ".previous\n" 48 - 49 - /* 50 - * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() creates a special exception table entry 51 - * (with lowest bit set) for which the fault handler in fixup_exception() will 52 - * load -EFAULT into %r29 for a read or write fault, and zeroes the target 53 - * register in case of a read fault in get_user(). 
54 - */ 55 - #define ASM_EXCEPTIONTABLE_REG 29 56 - #define ASM_EXCEPTIONTABLE_VAR(__variable) \ 57 - register long __variable __asm__ ("r29") = 0 58 - #define ASM_EXCEPTIONTABLE_ENTRY_EFAULT( fault_addr, except_addr )\ 59 - ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr + 1) 60 28 61 29 #define __get_user_internal(sr, val, ptr) \ 62 30 ({ \ ··· 53 83 \ 54 84 __asm__("1: " ldx " 0(%%sr%2,%3),%0\n" \ 55 85 "9:\n" \ 56 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \ 86 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b, "%1") \ 57 87 : "=r"(__gu_val), "+r"(__gu_err) \ 58 88 : "i"(sr), "r"(ptr)); \ 59 89 \ ··· 85 115 "1: ldw 0(%%sr%2,%3),%0\n" \ 86 116 "2: ldw 4(%%sr%2,%3),%R0\n" \ 87 117 "9:\n" \ 88 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \ 89 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b) \ 118 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b, "%1") \ 119 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b, "%1") \ 90 120 : "=&r"(__gu_tmp.l), "+r"(__gu_err) \ 91 121 : "i"(sr), "r"(ptr)); \ 92 122 \ ··· 144 174 __asm__ __volatile__ ( \ 145 175 "1: " stx " %1,0(%%sr%2,%3)\n" \ 146 176 "9:\n" \ 147 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \ 177 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b, "%0") \ 148 178 : "+r"(__pu_err) \ 149 179 : "r"(x), "i"(sr), "r"(ptr)) 150 180 ··· 156 186 "1: stw %1,0(%%sr%2,%3)\n" \ 157 187 "2: stw %R1,4(%%sr%2,%3)\n" \ 158 188 "9:\n" \ 159 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \ 160 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b) \ 189 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b, "%0") \ 190 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b, "%0") \ 161 191 : "+r"(__pu_err) \ 162 192 : "r"(__val), "i"(sr), "r"(ptr)); \ 163 193 } while (0) 164 194 165 195 #endif /* !defined(CONFIG_64BIT) */ 166 - 167 196 168 197 /* 169 198 * Complex access routines -- external declarations ··· 184 215 unsigned long len); 185 216 #define INLINE_COPY_TO_USER 186 217 #define INLINE_COPY_FROM_USER 187 - 188 - struct pt_regs; 189 - int fixup_exception(struct pt_regs *regs); 190 218 191 219 #endif /* 
__PARISC_UACCESS_H */
+7 -3
arch/parisc/kernel/cache.c
··· 58 58 59 59 struct pdc_cache_info cache_info __ro_after_init; 60 60 #ifndef CONFIG_PA20 61 - struct pdc_btlb_info btlb_info __ro_after_init; 61 + struct pdc_btlb_info btlb_info; 62 62 #endif 63 63 64 64 DEFINE_STATIC_KEY_TRUE(parisc_has_cache); ··· 263 263 dcache_stride = CAFL_STRIDE(cache_info.dc_conf); 264 264 icache_stride = CAFL_STRIDE(cache_info.ic_conf); 265 265 #undef CAFL_STRIDE 266 + 267 + /* stride needs to be non-zero, otherwise cache flushes will not work */ 268 + WARN_ON(cache_info.dc_size && dcache_stride == 0); 269 + WARN_ON(cache_info.ic_size && icache_stride == 0); 266 270 267 271 if ((boot_cpu_data.pdc.capabilities & PDC_MODEL_NVA_MASK) == 268 272 PDC_MODEL_NVA_UNSUPPORTED) { ··· 854 850 #endif 855 851 " fic,m %3(%4,%0)\n" 856 852 "2: sync\n" 857 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b) 853 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1") 858 854 : "+r" (start), "+r" (error) 859 855 : "r" (end), "r" (dcache_stride), "i" (SR_USER)); 860 856 } ··· 869 865 #endif 870 866 " fdc,m %3(%4,%0)\n" 871 867 "2: sync\n" 872 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b) 868 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 2b, "%1") 873 869 : "+r" (start), "+r" (error) 874 870 : "r" (end), "r" (icache_stride), "i" (SR_USER)); 875 871 }
+4 -1
arch/parisc/kernel/drivers.c
··· 742 742 }; 743 743 744 744 if (device_for_each_child(parent, &recurse_data, descend_children)) 745 - { /* nothing */ }; 745 + { /* nothing */ } 746 746 747 747 return d.dev; 748 748 } ··· 1003 1003 } 1004 1004 1005 1005 pr_info("\n"); 1006 + 1007 + /* Prevent hung task messages when printing on serial console */ 1008 + cond_resched(); 1006 1009 1007 1010 pr_info("#define HPA_%08lx_DESCRIPTION \"%s\"\n", 1008 1011 hpa, parisc_hardware_description(&dev->id));
+22 -22
arch/parisc/kernel/unaligned.c
··· 120 120 "2: ldbs 1(%%sr1,%3), %0\n" 121 121 " depw %2, 23, 24, %0\n" 122 122 "3: \n" 123 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b) 124 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b) 123 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%1") 124 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%1") 125 125 : "+r" (val), "+r" (ret), "=&r" (temp1) 126 126 : "r" (saddr), "r" (regs->isr) ); 127 127 ··· 152 152 " mtctl %2,11\n" 153 153 " vshd %0,%3,%0\n" 154 154 "3: \n" 155 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b) 156 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b) 155 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%1") 156 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%1") 157 157 : "+r" (val), "+r" (ret), "=&r" (temp1), "=&r" (temp2) 158 158 : "r" (saddr), "r" (regs->isr) ); 159 159 ··· 189 189 " mtsar %%r19\n" 190 190 " shrpd %0,%%r20,%%sar,%0\n" 191 191 "3: \n" 192 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b) 193 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b) 192 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%1") 193 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%1") 194 194 : "=r" (val), "+r" (ret) 195 195 : "0" (val), "r" (saddr), "r" (regs->isr) 196 196 : "r19", "r20" ); ··· 209 209 " vshd %0,%R0,%0\n" 210 210 " vshd %R0,%4,%R0\n" 211 211 "4: \n" 212 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 4b) 213 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 4b) 214 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 4b) 212 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 4b, "%1") 213 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 4b, "%1") 214 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 4b, "%1") 215 215 : "+r" (val), "+r" (ret), "+r" (saddr), "=&r" (shift), "=&r" (temp1) 216 216 : "r" (regs->isr) ); 217 217 } ··· 244 244 "1: stb %1, 0(%%sr1, %3)\n" 245 245 "2: stb %2, 1(%%sr1, %3)\n" 246 246 "3: \n" 247 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b) 248 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b) 247 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%0") 248 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%0") 249 249 : "+r" (ret), "=&r" (temp1) 250 250 : "r" (val), "r" (regs->ior), 
"r" (regs->isr) ); 251 251 ··· 285 285 " stw %%r20,0(%%sr1,%2)\n" 286 286 " stw %%r21,4(%%sr1,%2)\n" 287 287 "3: \n" 288 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b) 289 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b) 288 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 3b, "%0") 289 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 3b, "%0") 290 290 : "+r" (ret) 291 291 : "r" (val), "r" (regs->ior), "r" (regs->isr) 292 292 : "r19", "r20", "r21", "r22", "r1" ); ··· 329 329 "3: std %%r20,0(%%sr1,%2)\n" 330 330 "4: std %%r21,8(%%sr1,%2)\n" 331 331 "5: \n" 332 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 5b) 333 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 5b) 334 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 5b) 335 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(4b, 5b) 332 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 5b, "%0") 333 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 5b, "%0") 334 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 5b, "%0") 335 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(4b, 5b, "%0") 336 336 : "+r" (ret) 337 337 : "r" (val), "r" (regs->ior), "r" (regs->isr) 338 338 : "r19", "r20", "r21", "r22", "r1" ); ··· 357 357 "4: stw %%r1,4(%%sr1,%2)\n" 358 358 "5: stw %R1,8(%%sr1,%2)\n" 359 359 "6: \n" 360 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 6b) 361 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 6b) 362 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 6b) 363 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(4b, 6b) 364 - ASM_EXCEPTIONTABLE_ENTRY_EFAULT(5b, 6b) 360 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 6b, "%0") 361 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 6b, "%0") 362 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(3b, 6b, "%0") 363 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(4b, 6b, "%0") 364 + ASM_EXCEPTIONTABLE_ENTRY_EFAULT(5b, 6b, "%0") 365 365 : "+r" (ret) 366 366 : "r" (val), "r" (regs->ior), "r" (regs->isr) 367 367 : "r19", "r20", "r21", "r1" );
+1 -1
arch/parisc/kernel/vmlinux.lds.S
··· 127 127 } 128 128 #endif 129 129 130 - RO_DATA(8) 130 + RO_DATA(PAGE_SIZE) 131 131 132 132 /* unwind info */ 133 133 . = ALIGN(4);
+8 -3
arch/parisc/mm/fault.c
··· 150 150 * Fix up get_user() and put_user(). 151 151 * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() sets the least-significant 152 152 * bit in the relative address of the fixup routine to indicate 153 - * that gr[ASM_EXCEPTIONTABLE_REG] should be loaded with 154 - * -EFAULT to report a userspace access error. 153 + * that the register encoded in the "or %r0,%r0,register" 154 + * opcode should be loaded with -EFAULT to report a userspace 155 + * access error. 155 156 */ 156 157 if (fix->fixup & 1) { 157 - regs->gr[ASM_EXCEPTIONTABLE_REG] = -EFAULT; 158 + int fault_error_reg = fix->err_opcode & 0x1f; 159 + if (!WARN_ON(!fault_error_reg)) 160 + regs->gr[fault_error_reg] = -EFAULT; 161 + pr_debug("Unalignment fixup of register %d at %pS\n", 162 + fault_error_reg, (void*)regs->iaoq[0]); 158 163 159 164 /* zero target register for get_user() */ 160 165 if (parisc_acctyp(0, regs->iir) == VM_READ) {
+48 -32
arch/riscv/boot/dts/sophgo/sg2042.dtsi
··· 93 93 <&cpu63_intc 3>; 94 94 }; 95 95 96 - clint_mtimer0: timer@70ac000000 { 96 + clint_mtimer0: timer@70ac004000 { 97 97 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 98 - reg = <0x00000070 0xac000000 0x00000000 0x00007ff8>; 98 + reg = <0x00000070 0xac004000 0x00000000 0x0000c000>; 99 + reg-names = "mtimecmp"; 99 100 interrupts-extended = <&cpu0_intc 7>, 100 101 <&cpu1_intc 7>, 101 102 <&cpu2_intc 7>, 102 103 <&cpu3_intc 7>; 103 104 }; 104 105 105 - clint_mtimer1: timer@70ac010000 { 106 + clint_mtimer1: timer@70ac014000 { 106 107 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 107 - reg = <0x00000070 0xac010000 0x00000000 0x00007ff8>; 108 + reg = <0x00000070 0xac014000 0x00000000 0x0000c000>; 109 + reg-names = "mtimecmp"; 108 110 interrupts-extended = <&cpu4_intc 7>, 109 111 <&cpu5_intc 7>, 110 112 <&cpu6_intc 7>, 111 113 <&cpu7_intc 7>; 112 114 }; 113 115 114 - clint_mtimer2: timer@70ac020000 { 116 + clint_mtimer2: timer@70ac024000 { 115 117 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 116 - reg = <0x00000070 0xac020000 0x00000000 0x00007ff8>; 118 + reg = <0x00000070 0xac024000 0x00000000 0x0000c000>; 119 + reg-names = "mtimecmp"; 117 120 interrupts-extended = <&cpu8_intc 7>, 118 121 <&cpu9_intc 7>, 119 122 <&cpu10_intc 7>, 120 123 <&cpu11_intc 7>; 121 124 }; 122 125 123 - clint_mtimer3: timer@70ac030000 { 126 + clint_mtimer3: timer@70ac034000 { 124 127 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 125 - reg = <0x00000070 0xac030000 0x00000000 0x00007ff8>; 128 + reg = <0x00000070 0xac034000 0x00000000 0x0000c000>; 129 + reg-names = "mtimecmp"; 126 130 interrupts-extended = <&cpu12_intc 7>, 127 131 <&cpu13_intc 7>, 128 132 <&cpu14_intc 7>, 129 133 <&cpu15_intc 7>; 130 134 }; 131 135 132 - clint_mtimer4: timer@70ac040000 { 136 + clint_mtimer4: timer@70ac044000 { 133 137 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 134 - reg = 
<0x00000070 0xac040000 0x00000000 0x00007ff8>; 138 + reg = <0x00000070 0xac044000 0x00000000 0x0000c000>; 139 + reg-names = "mtimecmp"; 135 140 interrupts-extended = <&cpu16_intc 7>, 136 141 <&cpu17_intc 7>, 137 142 <&cpu18_intc 7>, 138 143 <&cpu19_intc 7>; 139 144 }; 140 145 141 - clint_mtimer5: timer@70ac050000 { 146 + clint_mtimer5: timer@70ac054000 { 142 147 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 143 - reg = <0x00000070 0xac050000 0x00000000 0x00007ff8>; 148 + reg = <0x00000070 0xac054000 0x00000000 0x0000c000>; 149 + reg-names = "mtimecmp"; 144 150 interrupts-extended = <&cpu20_intc 7>, 145 151 <&cpu21_intc 7>, 146 152 <&cpu22_intc 7>, 147 153 <&cpu23_intc 7>; 148 154 }; 149 155 150 - clint_mtimer6: timer@70ac060000 { 156 + clint_mtimer6: timer@70ac064000 { 151 157 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 152 - reg = <0x00000070 0xac060000 0x00000000 0x00007ff8>; 158 + reg = <0x00000070 0xac064000 0x00000000 0x0000c000>; 159 + reg-names = "mtimecmp"; 153 160 interrupts-extended = <&cpu24_intc 7>, 154 161 <&cpu25_intc 7>, 155 162 <&cpu26_intc 7>, 156 163 <&cpu27_intc 7>; 157 164 }; 158 165 159 - clint_mtimer7: timer@70ac070000 { 166 + clint_mtimer7: timer@70ac074000 { 160 167 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 161 - reg = <0x00000070 0xac070000 0x00000000 0x00007ff8>; 168 + reg = <0x00000070 0xac074000 0x00000000 0x0000c000>; 169 + reg-names = "mtimecmp"; 162 170 interrupts-extended = <&cpu28_intc 7>, 163 171 <&cpu29_intc 7>, 164 172 <&cpu30_intc 7>, 165 173 <&cpu31_intc 7>; 166 174 }; 167 175 168 - clint_mtimer8: timer@70ac080000 { 176 + clint_mtimer8: timer@70ac084000 { 169 177 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 170 - reg = <0x00000070 0xac080000 0x00000000 0x00007ff8>; 178 + reg = <0x00000070 0xac084000 0x00000000 0x0000c000>; 179 + reg-names = "mtimecmp"; 171 180 interrupts-extended = <&cpu32_intc 7>, 172 181 <&cpu33_intc 
7>, 173 182 <&cpu34_intc 7>, 174 183 <&cpu35_intc 7>; 175 184 }; 176 185 177 - clint_mtimer9: timer@70ac090000 { 186 + clint_mtimer9: timer@70ac094000 { 178 187 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 179 - reg = <0x00000070 0xac090000 0x00000000 0x00007ff8>; 188 + reg = <0x00000070 0xac094000 0x00000000 0x0000c000>; 189 + reg-names = "mtimecmp"; 180 190 interrupts-extended = <&cpu36_intc 7>, 181 191 <&cpu37_intc 7>, 182 192 <&cpu38_intc 7>, 183 193 <&cpu39_intc 7>; 184 194 }; 185 195 186 - clint_mtimer10: timer@70ac0a0000 { 196 + clint_mtimer10: timer@70ac0a4000 { 187 197 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 188 - reg = <0x00000070 0xac0a0000 0x00000000 0x00007ff8>; 198 + reg = <0x00000070 0xac0a4000 0x00000000 0x0000c000>; 199 + reg-names = "mtimecmp"; 189 200 interrupts-extended = <&cpu40_intc 7>, 190 201 <&cpu41_intc 7>, 191 202 <&cpu42_intc 7>, 192 203 <&cpu43_intc 7>; 193 204 }; 194 205 195 - clint_mtimer11: timer@70ac0b0000 { 206 + clint_mtimer11: timer@70ac0b4000 { 196 207 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 197 - reg = <0x00000070 0xac0b0000 0x00000000 0x00007ff8>; 208 + reg = <0x00000070 0xac0b4000 0x00000000 0x0000c000>; 209 + reg-names = "mtimecmp"; 198 210 interrupts-extended = <&cpu44_intc 7>, 199 211 <&cpu45_intc 7>, 200 212 <&cpu46_intc 7>, 201 213 <&cpu47_intc 7>; 202 214 }; 203 215 204 - clint_mtimer12: timer@70ac0c0000 { 216 + clint_mtimer12: timer@70ac0c4000 { 205 217 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 206 - reg = <0x00000070 0xac0c0000 0x00000000 0x00007ff8>; 218 + reg = <0x00000070 0xac0c4000 0x00000000 0x0000c000>; 219 + reg-names = "mtimecmp"; 207 220 interrupts-extended = <&cpu48_intc 7>, 208 221 <&cpu49_intc 7>, 209 222 <&cpu50_intc 7>, 210 223 <&cpu51_intc 7>; 211 224 }; 212 225 213 - clint_mtimer13: timer@70ac0d0000 { 226 + clint_mtimer13: timer@70ac0d4000 { 214 227 compatible = 
"sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 215 - reg = <0x00000070 0xac0d0000 0x00000000 0x00007ff8>; 228 + reg = <0x00000070 0xac0d4000 0x00000000 0x0000c000>; 229 + reg-names = "mtimecmp"; 216 230 interrupts-extended = <&cpu52_intc 7>, 217 231 <&cpu53_intc 7>, 218 232 <&cpu54_intc 7>, 219 233 <&cpu55_intc 7>; 220 234 }; 221 235 222 - clint_mtimer14: timer@70ac0e0000 { 236 + clint_mtimer14: timer@70ac0e4000 { 223 237 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 224 - reg = <0x00000070 0xac0e0000 0x00000000 0x00007ff8>; 238 + reg = <0x00000070 0xac0e4000 0x00000000 0x0000c000>; 239 + reg-names = "mtimecmp"; 225 240 interrupts-extended = <&cpu56_intc 7>, 226 241 <&cpu57_intc 7>, 227 242 <&cpu58_intc 7>, 228 243 <&cpu59_intc 7>; 229 244 }; 230 245 231 - clint_mtimer15: timer@70ac0f0000 { 246 + clint_mtimer15: timer@70ac0f4000 { 232 247 compatible = "sophgo,sg2042-aclint-mtimer", "thead,c900-aclint-mtimer"; 233 - reg = <0x00000070 0xac0f0000 0x00000000 0x00007ff8>; 248 + reg = <0x00000070 0xac0f4000 0x00000000 0x0000c000>; 249 + reg-names = "mtimecmp"; 234 250 interrupts-extended = <&cpu60_intc 7>, 235 251 <&cpu61_intc 7>, 236 252 <&cpu62_intc 7>,
+3 -1
arch/um/Makefile
··· 115 115 $(Q)$(MAKE) $(build)=$(HOST_DIR)/um include/generated/user_constants.h 116 116 117 117 LINK-$(CONFIG_LD_SCRIPT_STATIC) += -static 118 - LINK-$(CONFIG_LD_SCRIPT_DYN) += $(call cc-option, -no-pie) 118 + ifdef CONFIG_LD_SCRIPT_DYN 119 + LINK-$(call gcc-min-version, 60100)$(CONFIG_CC_IS_CLANG) += -no-pie 120 + endif 119 121 LINK-$(CONFIG_LD_SCRIPT_DYN_RPATH) += -Wl,-rpath,/lib 120 122 121 123 CFLAGS_NO_HARDENING := $(call cc-option, -fno-PIC,) $(call cc-option, -fno-pic,) \
+5 -5
arch/x86/Makefile
··· 112 112 # temporary until string.h is fixed 113 113 KBUILD_CFLAGS += -ffreestanding 114 114 115 - ifeq ($(CONFIG_STACKPROTECTOR),y) 116 - ifeq ($(CONFIG_SMP),y) 115 + ifeq ($(CONFIG_STACKPROTECTOR),y) 116 + ifeq ($(CONFIG_SMP),y) 117 117 KBUILD_CFLAGS += -mstack-protector-guard-reg=fs -mstack-protector-guard-symbol=__stack_chk_guard 118 - else 118 + else 119 119 KBUILD_CFLAGS += -mstack-protector-guard=global 120 - endif 121 - endif 120 + endif 121 + endif 122 122 else 123 123 BITS := 64 124 124 UTS_MACHINE := x86_64
+1 -3
arch/x86/include/asm/cpufeatures.h
··· 81 81 #define X86_FEATURE_K6_MTRR ( 3*32+ 1) /* AMD K6 nonstandard MTRRs */ 82 82 #define X86_FEATURE_CYRIX_ARR ( 3*32+ 2) /* Cyrix ARRs (= MTRRs) */ 83 83 #define X86_FEATURE_CENTAUR_MCR ( 3*32+ 3) /* Centaur MCRs (= MTRRs) */ 84 - 85 - /* CPU types for specific tunings: */ 86 84 #define X86_FEATURE_K8 ( 3*32+ 4) /* "" Opteron, Athlon64 */ 87 - /* FREE, was #define X86_FEATURE_K7 ( 3*32+ 5) "" Athlon */ 85 + #define X86_FEATURE_ZEN5 ( 3*32+ 5) /* "" CPU based on Zen5 microarchitecture */ 88 86 #define X86_FEATURE_P3 ( 3*32+ 6) /* "" P3 */ 89 87 #define X86_FEATURE_P4 ( 3*32+ 7) /* "" P4 */ 90 88 #define X86_FEATURE_CONSTANT_TSC ( 3*32+ 8) /* TSC ticks at a constant rate */
+2
arch/x86/include/asm/intel-family.h
··· 162 162 #define INTEL_FAM6_ATOM_CRESTMONT_X 0xAF /* Sierra Forest */ 163 163 #define INTEL_FAM6_ATOM_CRESTMONT 0xB6 /* Grand Ridge */ 164 164 165 + #define INTEL_FAM6_ATOM_DARKMONT_X 0xDD /* Clearwater Forest */ 166 + 165 167 /* Xeon Phi */ 166 168 167 169 #define INTEL_FAM6_XEON_PHI_KNL 0x57 /* Knights Landing */
+16 -1
arch/x86/include/asm/kmsan.h
··· 64 64 { 65 65 unsigned long x = (unsigned long)addr; 66 66 unsigned long y = x - __START_KERNEL_map; 67 + bool ret; 67 68 68 69 /* use the carry flag to determine if x was < __START_KERNEL_map */ 69 70 if (unlikely(x > y)) { ··· 80 79 return false; 81 80 } 82 81 83 - return pfn_valid(x >> PAGE_SHIFT); 82 + /* 83 + * pfn_valid() relies on RCU, and may call into the scheduler on exiting 84 + * the critical section. However, this would result in recursion with 85 + * KMSAN. Therefore, disable preemption here, and re-enable preemption 86 + * below while suppressing reschedules to avoid recursion. 87 + * 88 + * Note, this sacrifices occasionally breaking scheduling guarantees. 89 + * Although, a kernel compiled with KMSAN has already given up on any 90 + * performance guarantees due to being heavily instrumented. 91 + */ 92 + preempt_disable(); 93 + ret = pfn_valid(x >> PAGE_SHIFT); 94 + preempt_enable_no_resched(); 95 + 96 + return ret; 84 97 } 85 98 86 99 #endif /* !MODULE */
+21 -4
arch/x86/include/asm/syscall_wrapper.h
··· 58 58 ,,regs->di,,regs->si,,regs->dx \ 59 59 ,,regs->r10,,regs->r8,,regs->r9) \ 60 60 61 + 62 + /* SYSCALL_PT_ARGS is Adapted from s390x */ 63 + #define SYSCALL_PT_ARG6(m, t1, t2, t3, t4, t5, t6) \ 64 + SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5), m(t6, (regs->bp)) 65 + #define SYSCALL_PT_ARG5(m, t1, t2, t3, t4, t5) \ 66 + SYSCALL_PT_ARG4(m, t1, t2, t3, t4), m(t5, (regs->di)) 67 + #define SYSCALL_PT_ARG4(m, t1, t2, t3, t4) \ 68 + SYSCALL_PT_ARG3(m, t1, t2, t3), m(t4, (regs->si)) 69 + #define SYSCALL_PT_ARG3(m, t1, t2, t3) \ 70 + SYSCALL_PT_ARG2(m, t1, t2), m(t3, (regs->dx)) 71 + #define SYSCALL_PT_ARG2(m, t1, t2) \ 72 + SYSCALL_PT_ARG1(m, t1), m(t2, (regs->cx)) 73 + #define SYSCALL_PT_ARG1(m, t1) m(t1, (regs->bx)) 74 + #define SYSCALL_PT_ARGS(x, ...) SYSCALL_PT_ARG##x(__VA_ARGS__) 75 + 76 + #define __SC_COMPAT_CAST(t, a) \ 77 + (__typeof(__builtin_choose_expr(__TYPE_IS_L(t), 0, 0U))) \ 78 + (unsigned int)a 79 + 61 80 /* Mapping of registers to parameters for syscalls on i386 */ 62 81 #define SC_IA32_REGS_TO_ARGS(x, ...) \ 63 - __MAP(x,__SC_ARGS \ 64 - ,,(unsigned int)regs->bx,,(unsigned int)regs->cx \ 65 - ,,(unsigned int)regs->dx,,(unsigned int)regs->si \ 66 - ,,(unsigned int)regs->di,,(unsigned int)regs->bp) 82 + SYSCALL_PT_ARGS(x, __SC_COMPAT_CAST, \ 83 + __MAP(x, __SC_TYPE, __VA_ARGS__)) \ 67 84 68 85 #define __SYS_STUB0(abi, name) \ 69 86 long __##abi##_##name(const struct pt_regs *regs); \
+1 -1
arch/x86/kernel/alternative.c
··· 403 403 { 404 404 BUG(); 405 405 } 406 - EXPORT_SYMBOL_GPL(BUG_func); 406 + EXPORT_SYMBOL(BUG_func); 407 407 408 408 #define CALL_RIP_REL_OPCODE 0xff 409 409 #define CALL_RIP_REL_MODRM 0x15
+24 -4
arch/x86/kernel/cpu/amd.c
··· 538 538 539 539 /* Figure out Zen generations: */ 540 540 switch (c->x86) { 541 - case 0x17: { 541 + case 0x17: 542 542 switch (c->x86_model) { 543 543 case 0x00 ... 0x2f: 544 544 case 0x50 ... 0x5f: ··· 554 554 goto warn; 555 555 } 556 556 break; 557 - } 558 - case 0x19: { 557 + 558 + case 0x19: 559 559 switch (c->x86_model) { 560 560 case 0x00 ... 0x0f: 561 561 case 0x20 ... 0x5f: ··· 569 569 goto warn; 570 570 } 571 571 break; 572 - } 572 + 573 + case 0x1a: 574 + switch (c->x86_model) { 575 + case 0x00 ... 0x0f: 576 + case 0x20 ... 0x2f: 577 + case 0x40 ... 0x4f: 578 + case 0x70 ... 0x7f: 579 + setup_force_cpu_cap(X86_FEATURE_ZEN5); 580 + break; 581 + default: 582 + goto warn; 583 + } 584 + break; 585 + 573 586 default: 574 587 break; 575 588 } ··· 1052 1039 msr_set_bit(MSR_ZEN4_BP_CFG, MSR_ZEN4_BP_CFG_SHARED_BTB_FIX_BIT); 1053 1040 } 1054 1041 1042 + static void init_amd_zen5(struct cpuinfo_x86 *c) 1043 + { 1044 + init_amd_zen_common(); 1045 + } 1046 + 1055 1047 static void init_amd(struct cpuinfo_x86 *c) 1056 1048 { 1057 1049 u64 vm_cr; ··· 1102 1084 init_amd_zen3(c); 1103 1085 else if (boot_cpu_has(X86_FEATURE_ZEN4)) 1104 1086 init_amd_zen4(c); 1087 + else if (boot_cpu_has(X86_FEATURE_ZEN5)) 1088 + init_amd_zen5(c); 1105 1089 1106 1090 /* 1107 1091 * Enable workaround for FXSAVE leak on CPUs
+10 -3
block/blk-map.c
··· 205 205 /* 206 206 * success 207 207 */ 208 - if ((iov_iter_rw(iter) == WRITE && 209 - (!map_data || !map_data->null_mapped)) || 210 - (map_data && map_data->from_user)) { 208 + if (iov_iter_rw(iter) == WRITE && 209 + (!map_data || !map_data->null_mapped)) { 211 210 ret = bio_copy_from_iter(bio, iter); 211 + if (ret) 212 + goto cleanup; 213 + } else if (map_data && map_data->from_user) { 214 + struct iov_iter iter2 = *iter; 215 + 216 + /* This is the copy-in part of SG_DXFER_TO_FROM_DEV. */ 217 + iter2.data_source = ITER_SOURCE; 218 + ret = bio_copy_from_iter(bio, &iter2); 212 219 if (ret) 213 220 goto cleanup; 214 221 } else {
-2
block/ioctl.c
··· 20 20 struct blkpg_partition p; 21 21 sector_t start, length; 22 22 23 - if (disk->flags & GENHD_FL_NO_PART) 24 - return -EINVAL; 25 23 if (!capable(CAP_SYS_ADMIN)) 26 24 return -EACCES; 27 25 if (copy_from_user(&p, upart, sizeof(struct blkpg_partition)))
+5
block/partitions/core.c
··· 439 439 goto out; 440 440 } 441 441 442 + if (disk->flags & GENHD_FL_NO_PART) { 443 + ret = -EINVAL; 444 + goto out; 445 + } 446 + 442 447 if (partition_overlaps(disk, start, length, -1)) { 443 448 ret = -EBUSY; 444 449 goto out;
+16 -4
drivers/accel/ivpu/ivpu_debugfs.c
··· 102 102 { 103 103 struct ivpu_device *vdev = seq_to_ivpu(s); 104 104 105 - seq_printf(s, "%d\n", atomic_read(&vdev->pm->in_reset)); 105 + seq_printf(s, "%d\n", atomic_read(&vdev->pm->reset_pending)); 106 106 return 0; 107 107 } 108 108 ··· 130 130 131 131 fw->dvfs_mode = dvfs_mode; 132 132 133 - ivpu_pm_schedule_recovery(vdev); 133 + ret = pci_try_reset_function(to_pci_dev(vdev->drm.dev)); 134 + if (ret) 135 + return ret; 134 136 135 137 return size; 136 138 } ··· 192 190 return ret; 193 191 194 192 ivpu_hw_profiling_freq_drive(vdev, enable); 195 - ivpu_pm_schedule_recovery(vdev); 193 + 194 + ret = pci_try_reset_function(to_pci_dev(vdev->drm.dev)); 195 + if (ret) 196 + return ret; 196 197 197 198 return size; 198 199 } ··· 306 301 ivpu_force_recovery_fn(struct file *file, const char __user *user_buf, size_t size, loff_t *pos) 307 302 { 308 303 struct ivpu_device *vdev = file->private_data; 304 + int ret; 309 305 310 306 if (!size) 311 307 return -EINVAL; 312 308 313 - ivpu_pm_schedule_recovery(vdev); 309 + ret = ivpu_rpm_get(vdev); 310 + if (ret) 311 + return ret; 312 + 313 + ivpu_pm_trigger_recovery(vdev, "debugfs"); 314 + flush_work(&vdev->pm->recovery_work); 315 + ivpu_rpm_put(vdev); 314 316 return size; 315 317 } 316 318
+70 -54
drivers/accel/ivpu/ivpu_drv.c
··· 6 6 #include <linux/firmware.h> 7 7 #include <linux/module.h> 8 8 #include <linux/pci.h> 9 + #include <linux/pm_runtime.h> 9 10 10 11 #include <drm/drm_accel.h> 11 12 #include <drm/drm_file.h> ··· 18 17 #include "ivpu_debugfs.h" 19 18 #include "ivpu_drv.h" 20 19 #include "ivpu_fw.h" 20 + #include "ivpu_fw_log.h" 21 21 #include "ivpu_gem.h" 22 22 #include "ivpu_hw.h" 23 23 #include "ivpu_ipc.h" ··· 67 65 return file_priv; 68 66 } 69 67 70 - struct ivpu_file_priv *ivpu_file_priv_get_by_ctx_id(struct ivpu_device *vdev, unsigned long id) 68 + static void file_priv_unbind(struct ivpu_device *vdev, struct ivpu_file_priv *file_priv) 71 69 { 72 - struct ivpu_file_priv *file_priv; 70 + mutex_lock(&file_priv->lock); 71 + if (file_priv->bound) { 72 + ivpu_dbg(vdev, FILE, "file_priv unbind: ctx %u\n", file_priv->ctx.id); 73 73 74 - xa_lock_irq(&vdev->context_xa); 75 - file_priv = xa_load(&vdev->context_xa, id); 76 - /* file_priv may still be in context_xa during file_priv_release() */ 77 - if (file_priv && !kref_get_unless_zero(&file_priv->ref)) 78 - file_priv = NULL; 79 - xa_unlock_irq(&vdev->context_xa); 80 - 81 - if (file_priv) 82 - ivpu_dbg(vdev, KREF, "file_priv get by id: ctx %u refcount %u\n", 83 - file_priv->ctx.id, kref_read(&file_priv->ref)); 84 - 85 - return file_priv; 74 + ivpu_cmdq_release_all_locked(file_priv); 75 + ivpu_jsm_context_release(vdev, file_priv->ctx.id); 76 + ivpu_bo_unbind_all_bos_from_context(vdev, &file_priv->ctx); 77 + ivpu_mmu_user_context_fini(vdev, &file_priv->ctx); 78 + file_priv->bound = false; 79 + drm_WARN_ON(&vdev->drm, !xa_erase_irq(&vdev->context_xa, file_priv->ctx.id)); 80 + } 81 + mutex_unlock(&file_priv->lock); 86 82 } 87 83 88 84 static void file_priv_release(struct kref *ref) ··· 88 88 struct ivpu_file_priv *file_priv = container_of(ref, struct ivpu_file_priv, ref); 89 89 struct ivpu_device *vdev = file_priv->vdev; 90 90 91 - ivpu_dbg(vdev, FILE, "file_priv release: ctx %u\n", file_priv->ctx.id); 91 + ivpu_dbg(vdev, FILE, 
"file_priv release: ctx %u bound %d\n", 92 + file_priv->ctx.id, (bool)file_priv->bound); 92 93 93 - ivpu_cmdq_release_all(file_priv); 94 - ivpu_jsm_context_release(vdev, file_priv->ctx.id); 95 - ivpu_bo_remove_all_bos_from_context(vdev, &file_priv->ctx); 96 - ivpu_mmu_user_context_fini(vdev, &file_priv->ctx); 97 - drm_WARN_ON(&vdev->drm, xa_erase_irq(&vdev->context_xa, file_priv->ctx.id) != file_priv); 94 + pm_runtime_get_sync(vdev->drm.dev); 95 + mutex_lock(&vdev->context_list_lock); 96 + file_priv_unbind(vdev, file_priv); 97 + mutex_unlock(&vdev->context_list_lock); 98 + pm_runtime_put_autosuspend(vdev->drm.dev); 99 + 98 100 mutex_destroy(&file_priv->lock); 99 101 kfree(file_priv); 100 102 } ··· 178 176 case DRM_IVPU_PARAM_CONTEXT_BASE_ADDRESS: 179 177 args->value = vdev->hw->ranges.user.start; 180 178 break; 181 - case DRM_IVPU_PARAM_CONTEXT_PRIORITY: 182 - args->value = file_priv->priority; 183 - break; 184 179 case DRM_IVPU_PARAM_CONTEXT_ID: 185 180 args->value = file_priv->ctx.id; 186 181 break; ··· 217 218 218 219 static int ivpu_set_param_ioctl(struct drm_device *dev, void *data, struct drm_file *file) 219 220 { 220 - struct ivpu_file_priv *file_priv = file->driver_priv; 221 221 struct drm_ivpu_param *args = data; 222 222 int ret = 0; 223 223 224 224 switch (args->param) { 225 - case DRM_IVPU_PARAM_CONTEXT_PRIORITY: 226 - if (args->value <= DRM_IVPU_CONTEXT_PRIORITY_REALTIME) 227 - file_priv->priority = args->value; 228 - else 229 - ret = -EINVAL; 230 - break; 231 225 default: 232 226 ret = -EINVAL; 233 227 } ··· 233 241 struct ivpu_device *vdev = to_ivpu_device(dev); 234 242 struct ivpu_file_priv *file_priv; 235 243 u32 ctx_id; 236 - void *old; 237 - int ret; 244 + int idx, ret; 238 245 239 - ret = xa_alloc_irq(&vdev->context_xa, &ctx_id, NULL, vdev->context_xa_limit, GFP_KERNEL); 240 - if (ret) { 241 - ivpu_err(vdev, "Failed to allocate context id: %d\n", ret); 242 - return ret; 243 - } 246 + if (!drm_dev_enter(dev, &idx)) 247 + return -ENODEV; 244 248 
245 249 file_priv = kzalloc(sizeof(*file_priv), GFP_KERNEL); 246 250 if (!file_priv) { 247 251 ret = -ENOMEM; 248 - goto err_xa_erase; 252 + goto err_dev_exit; 249 253 } 250 254 251 255 file_priv->vdev = vdev; 252 - file_priv->priority = DRM_IVPU_CONTEXT_PRIORITY_NORMAL; 256 + file_priv->bound = true; 253 257 kref_init(&file_priv->ref); 254 258 mutex_init(&file_priv->lock); 255 259 260 + mutex_lock(&vdev->context_list_lock); 261 + 262 + ret = xa_alloc_irq(&vdev->context_xa, &ctx_id, file_priv, 263 + vdev->context_xa_limit, GFP_KERNEL); 264 + if (ret) { 265 + ivpu_err(vdev, "Failed to allocate context id: %d\n", ret); 266 + goto err_unlock; 267 + } 268 + 256 269 ret = ivpu_mmu_user_context_init(vdev, &file_priv->ctx, ctx_id); 257 270 if (ret) 258 - goto err_mutex_destroy; 271 + goto err_xa_erase; 259 272 260 - old = xa_store_irq(&vdev->context_xa, ctx_id, file_priv, GFP_KERNEL); 261 - if (xa_is_err(old)) { 262 - ret = xa_err(old); 263 - ivpu_err(vdev, "Failed to store context %u: %d\n", ctx_id, ret); 264 - goto err_ctx_fini; 265 - } 273 + mutex_unlock(&vdev->context_list_lock); 274 + drm_dev_exit(idx); 275 + 276 + file->driver_priv = file_priv; 266 277 267 278 ivpu_dbg(vdev, FILE, "file_priv create: ctx %u process %s pid %d\n", 268 279 ctx_id, current->comm, task_pid_nr(current)); 269 280 270 - file->driver_priv = file_priv; 271 281 return 0; 272 282 273 - err_ctx_fini: 274 - ivpu_mmu_user_context_fini(vdev, &file_priv->ctx); 275 - err_mutex_destroy: 276 - mutex_destroy(&file_priv->lock); 277 - kfree(file_priv); 278 283 err_xa_erase: 279 284 xa_erase_irq(&vdev->context_xa, ctx_id); 285 + err_unlock: 286 + mutex_unlock(&vdev->context_list_lock); 287 + mutex_destroy(&file_priv->lock); 288 + kfree(file_priv); 289 + err_dev_exit: 290 + drm_dev_exit(idx); 280 291 return ret; 281 292 } 282 293 ··· 335 340 336 341 if (!ret) 337 342 ivpu_dbg(vdev, PM, "VPU ready message received successfully\n"); 338 - else 339 - ivpu_hw_diagnose_failure(vdev); 340 343 341 344 return ret; 
342 345 } ··· 362 369 ret = ivpu_wait_for_ready(vdev); 363 370 if (ret) { 364 371 ivpu_err(vdev, "Failed to boot the firmware: %d\n", ret); 372 + ivpu_hw_diagnose_failure(vdev); 373 + ivpu_mmu_evtq_dump(vdev); 374 + ivpu_fw_log_dump(vdev); 365 375 return ret; 366 376 } 367 377 ··· 536 540 lockdep_set_class(&vdev->submitted_jobs_xa.xa_lock, &submitted_jobs_xa_lock_class_key); 537 541 INIT_LIST_HEAD(&vdev->bo_list); 538 542 543 + ret = drmm_mutex_init(&vdev->drm, &vdev->context_list_lock); 544 + if (ret) 545 + goto err_xa_destroy; 546 + 539 547 ret = drmm_mutex_init(&vdev->drm, &vdev->bo_list_lock); 540 548 if (ret) 541 549 goto err_xa_destroy; ··· 611 611 return ret; 612 612 } 613 613 614 + static void ivpu_bo_unbind_all_user_contexts(struct ivpu_device *vdev) 615 + { 616 + struct ivpu_file_priv *file_priv; 617 + unsigned long ctx_id; 618 + 619 + mutex_lock(&vdev->context_list_lock); 620 + 621 + xa_for_each(&vdev->context_xa, ctx_id, file_priv) 622 + file_priv_unbind(vdev, file_priv); 623 + 624 + mutex_unlock(&vdev->context_list_lock); 625 + } 626 + 614 627 static void ivpu_dev_fini(struct ivpu_device *vdev) 615 628 { 616 629 ivpu_pm_disable(vdev); 617 630 ivpu_shutdown(vdev); 618 631 if (IVPU_WA(d3hot_after_power_off)) 619 632 pci_set_power_state(to_pci_dev(vdev->drm.dev), PCI_D3hot); 633 + 634 + ivpu_jobs_abort_all(vdev); 620 635 ivpu_job_done_consumer_fini(vdev); 621 636 ivpu_pm_cancel_recovery(vdev); 637 + ivpu_bo_unbind_all_user_contexts(vdev); 622 638 623 639 ivpu_ipc_fini(vdev); 624 640 ivpu_fw_fini(vdev);
+3 -2
drivers/accel/ivpu/ivpu_drv.h
··· 56 56 #define IVPU_DBG_JSM BIT(10) 57 57 #define IVPU_DBG_KREF BIT(11) 58 58 #define IVPU_DBG_RPM BIT(12) 59 + #define IVPU_DBG_MMU_MAP BIT(13) 59 60 60 61 #define ivpu_err(vdev, fmt, ...) \ 61 62 drm_err(&(vdev)->drm, "%s(): " fmt, __func__, ##__VA_ARGS__) ··· 115 114 116 115 struct ivpu_mmu_context gctx; 117 116 struct ivpu_mmu_context rctx; 117 + struct mutex context_list_lock; /* Protects user context addition/removal */ 118 118 struct xarray context_xa; 119 119 struct xa_limit context_xa_limit; 120 120 ··· 147 145 struct mutex lock; /* Protects cmdq */ 148 146 struct ivpu_cmdq *cmdq[IVPU_NUM_ENGINES]; 149 147 struct ivpu_mmu_context ctx; 150 - u32 priority; 151 148 bool has_mmu_faults; 149 + bool bound; 152 150 }; 153 151 154 152 extern int ivpu_dbg_mask; ··· 164 162 extern int ivpu_test_mode; 165 163 166 164 struct ivpu_file_priv *ivpu_file_priv_get(struct ivpu_file_priv *file_priv); 167 - struct ivpu_file_priv *ivpu_file_priv_get_by_ctx_id(struct ivpu_device *vdev, unsigned long id); 168 165 void ivpu_file_priv_put(struct ivpu_file_priv **link); 169 166 170 167 int ivpu_boot(struct ivpu_device *vdev);
+53 -91
drivers/accel/ivpu/ivpu_gem.c
··· 24 24 25 25 static inline void ivpu_dbg_bo(struct ivpu_device *vdev, struct ivpu_bo *bo, const char *action) 26 26 { 27 - if (bo->ctx) 28 - ivpu_dbg(vdev, BO, "%6s: size %zu has_pages %d dma_mapped %d handle %u ctx %d vpu_addr 0x%llx mmu_mapped %d\n", 29 - action, ivpu_bo_size(bo), (bool)bo->base.pages, (bool)bo->base.sgt, 30 - bo->handle, bo->ctx->id, bo->vpu_addr, bo->mmu_mapped); 31 - else 32 - ivpu_dbg(vdev, BO, "%6s: size %zu has_pages %d dma_mapped %d handle %u (not added to context)\n", 33 - action, ivpu_bo_size(bo), (bool)bo->base.pages, (bool)bo->base.sgt, 34 - bo->handle); 27 + ivpu_dbg(vdev, BO, 28 + "%6s: bo %8p vpu_addr %9llx size %8zu ctx %d has_pages %d dma_mapped %d mmu_mapped %d wc %d imported %d\n", 29 + action, bo, bo->vpu_addr, ivpu_bo_size(bo), bo->ctx ? bo->ctx->id : 0, 30 + (bool)bo->base.pages, (bool)bo->base.sgt, bo->mmu_mapped, bo->base.map_wc, 31 + (bool)bo->base.base.import_attach); 35 32 } 36 33 37 34 /* ··· 46 49 mutex_lock(&bo->lock); 47 50 48 51 ivpu_dbg_bo(vdev, bo, "pin"); 49 - 50 - if (!bo->ctx) { 51 - ivpu_err(vdev, "vpu_addr not allocated for BO %d\n", bo->handle); 52 - ret = -EINVAL; 53 - goto unlock; 54 - } 52 + drm_WARN_ON(&vdev->drm, !bo->ctx); 55 53 56 54 if (!bo->mmu_mapped) { 57 55 struct sg_table *sgt = drm_gem_shmem_get_pages_sgt(&bo->base); ··· 77 85 const struct ivpu_addr_range *range) 78 86 { 79 87 struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); 80 - int ret; 88 + int idx, ret; 89 + 90 + if (!drm_dev_enter(&vdev->drm, &idx)) 91 + return -ENODEV; 81 92 82 93 mutex_lock(&bo->lock); 83 94 ··· 96 101 97 102 mutex_unlock(&bo->lock); 98 103 104 + drm_dev_exit(idx); 105 + 99 106 return ret; 100 107 } 101 108 ··· 105 108 { 106 109 struct ivpu_device *vdev = ivpu_bo_to_vdev(bo); 107 110 108 - lockdep_assert_held(&bo->lock); 109 - 110 - ivpu_dbg_bo(vdev, bo, "unbind"); 111 - 112 - /* TODO: dma_unmap */ 111 + lockdep_assert(lockdep_is_held(&bo->lock) || !kref_read(&bo->base.base.refcount)); 113 112 114 113 if 
(bo->mmu_mapped) { 115 114 drm_WARN_ON(&vdev->drm, !bo->ctx); ··· 117 124 118 125 if (bo->ctx) { 119 126 ivpu_mmu_context_remove_node(bo->ctx, &bo->mm_node); 120 - bo->vpu_addr = 0; 121 127 bo->ctx = NULL; 122 128 } 129 + 130 + if (bo->base.base.import_attach) 131 + return; 132 + 133 + dma_resv_lock(bo->base.base.resv, NULL); 134 + if (bo->base.sgt) { 135 + dma_unmap_sgtable(vdev->drm.dev, bo->base.sgt, DMA_BIDIRECTIONAL, 0); 136 + sg_free_table(bo->base.sgt); 137 + kfree(bo->base.sgt); 138 + bo->base.sgt = NULL; 139 + } 140 + dma_resv_unlock(bo->base.base.resv); 123 141 } 124 142 125 - static void ivpu_bo_unbind(struct ivpu_bo *bo) 126 - { 127 - mutex_lock(&bo->lock); 128 - ivpu_bo_unbind_locked(bo); 129 - mutex_unlock(&bo->lock); 130 - } 131 - 132 - void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx) 143 + void ivpu_bo_unbind_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx) 133 144 { 134 145 struct ivpu_bo *bo; 135 146 ··· 143 146 mutex_lock(&vdev->bo_list_lock); 144 147 list_for_each_entry(bo, &vdev->bo_list, bo_list_node) { 145 148 mutex_lock(&bo->lock); 146 - if (bo->ctx == ctx) 149 + if (bo->ctx == ctx) { 150 + ivpu_dbg_bo(vdev, bo, "unbind"); 147 151 ivpu_bo_unbind_locked(bo); 152 + } 148 153 mutex_unlock(&bo->lock); 149 154 } 150 155 mutex_unlock(&vdev->bo_list_lock); ··· 198 199 list_add_tail(&bo->bo_list_node, &vdev->bo_list); 199 200 mutex_unlock(&vdev->bo_list_lock); 200 201 201 - ivpu_dbg(vdev, BO, "create: vpu_addr 0x%llx size %zu flags 0x%x\n", 202 - bo->vpu_addr, bo->base.base.size, flags); 203 - 204 202 return bo; 205 203 } 206 204 ··· 207 211 struct ivpu_device *vdev = file_priv->vdev; 208 212 struct ivpu_bo *bo = to_ivpu_bo(obj); 209 213 struct ivpu_addr_range *range; 214 + 215 + if (bo->ctx) { 216 + ivpu_warn(vdev, "Can't add BO to ctx %u: already in ctx %u\n", 217 + file_priv->ctx.id, bo->ctx->id); 218 + return -EALREADY; 219 + } 210 220 211 221 if (bo->flags & 
DRM_IVPU_BO_SHAVE_MEM) 212 222 range = &vdev->hw->ranges.shave; ··· 229 227 struct ivpu_device *vdev = to_ivpu_device(obj->dev); 230 228 struct ivpu_bo *bo = to_ivpu_bo(obj); 231 229 230 + ivpu_dbg_bo(vdev, bo, "free"); 231 + 232 232 mutex_lock(&vdev->bo_list_lock); 233 233 list_del(&bo->bo_list_node); 234 234 mutex_unlock(&vdev->bo_list_lock); 235 235 236 236 drm_WARN_ON(&vdev->drm, !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ)); 237 237 238 - ivpu_dbg_bo(vdev, bo, "free"); 239 - 240 - ivpu_bo_unbind(bo); 238 + ivpu_bo_unbind_locked(bo); 241 239 mutex_destroy(&bo->lock); 242 240 243 241 drm_WARN_ON(obj->dev, bo->base.pages_use_count > 1); 244 242 drm_gem_shmem_free(&bo->base); 245 243 } 246 244 247 - static const struct dma_buf_ops ivpu_bo_dmabuf_ops = { 248 - .cache_sgt_mapping = true, 249 - .attach = drm_gem_map_attach, 250 - .detach = drm_gem_map_detach, 251 - .map_dma_buf = drm_gem_map_dma_buf, 252 - .unmap_dma_buf = drm_gem_unmap_dma_buf, 253 - .release = drm_gem_dmabuf_release, 254 - .mmap = drm_gem_dmabuf_mmap, 255 - .vmap = drm_gem_dmabuf_vmap, 256 - .vunmap = drm_gem_dmabuf_vunmap, 257 - }; 258 - 259 - static struct dma_buf *ivpu_bo_export(struct drm_gem_object *obj, int flags) 260 - { 261 - struct drm_device *dev = obj->dev; 262 - struct dma_buf_export_info exp_info = { 263 - .exp_name = KBUILD_MODNAME, 264 - .owner = dev->driver->fops->owner, 265 - .ops = &ivpu_bo_dmabuf_ops, 266 - .size = obj->size, 267 - .flags = flags, 268 - .priv = obj, 269 - .resv = obj->resv, 270 - }; 271 - void *sgt; 272 - 273 - /* 274 - * Make sure that pages are allocated and dma-mapped before exporting the bo. 275 - * DMA-mapping is required if the bo will be imported to the same device. 
276 - */ 277 - sgt = drm_gem_shmem_get_pages_sgt(to_drm_gem_shmem_obj(obj)); 278 - if (IS_ERR(sgt)) 279 - return sgt; 280 - 281 - return drm_gem_dmabuf_export(dev, &exp_info); 282 - } 283 - 284 245 static const struct drm_gem_object_funcs ivpu_gem_funcs = { 285 246 .free = ivpu_bo_free, 286 247 .open = ivpu_bo_open, 287 - .export = ivpu_bo_export, 288 248 .print_info = drm_gem_shmem_object_print_info, 289 249 .pin = drm_gem_shmem_object_pin, 290 250 .unpin = drm_gem_shmem_object_unpin, ··· 279 315 return PTR_ERR(bo); 280 316 } 281 317 282 - ret = drm_gem_handle_create(file, &bo->base.base, &bo->handle); 283 - if (!ret) { 318 + ret = drm_gem_handle_create(file, &bo->base.base, &args->handle); 319 + if (!ret) 284 320 args->vpu_addr = bo->vpu_addr; 285 - args->handle = bo->handle; 286 - } 287 321 288 322 drm_gem_object_put(&bo->base.base); 289 323 ··· 323 361 if (ret) 324 362 goto err_put; 325 363 364 + dma_resv_lock(bo->base.base.resv, NULL); 326 365 ret = drm_gem_shmem_vmap(&bo->base, &map); 366 + dma_resv_unlock(bo->base.base.resv); 327 367 if (ret) 328 368 goto err_put; 329 369 ··· 340 376 { 341 377 struct iosys_map map = IOSYS_MAP_INIT_VADDR(bo->base.vaddr); 342 378 379 + dma_resv_lock(bo->base.base.resv, NULL); 343 380 drm_gem_shmem_vunmap(&bo->base, &map); 381 + dma_resv_unlock(bo->base.base.resv); 382 + 344 383 drm_gem_object_put(&bo->base.base); 345 384 } 346 385 ··· 399 432 400 433 static void ivpu_bo_print_info(struct ivpu_bo *bo, struct drm_printer *p) 401 434 { 402 - unsigned long dma_refcount = 0; 403 - 404 435 mutex_lock(&bo->lock); 405 436 406 - if (bo->base.base.dma_buf && bo->base.base.dma_buf->file) 407 - dma_refcount = atomic_long_read(&bo->base.base.dma_buf->file->f_count); 408 - 409 - drm_printf(p, "%-3u %-6d 0x%-12llx %-10lu 0x%-8x %-4u %-8lu", 410 - bo->ctx->id, bo->handle, bo->vpu_addr, bo->base.base.size, 411 - bo->flags, kref_read(&bo->base.base.refcount), dma_refcount); 412 - 413 - if (bo->base.base.import_attach) 414 - drm_printf(p, " 
imported"); 437 + drm_printf(p, "%-9p %-3u 0x%-12llx %-10lu 0x%-8x %-4u", 438 + bo, bo->ctx->id, bo->vpu_addr, bo->base.base.size, 439 + bo->flags, kref_read(&bo->base.base.refcount)); 415 440 416 441 if (bo->base.pages) 417 442 drm_printf(p, " has_pages"); 418 443 419 444 if (bo->mmu_mapped) 420 445 drm_printf(p, " mmu_mapped"); 446 + 447 + if (bo->base.base.import_attach) 448 + drm_printf(p, " imported"); 421 449 422 450 drm_printf(p, "\n"); 423 451 ··· 424 462 struct ivpu_device *vdev = to_ivpu_device(dev); 425 463 struct ivpu_bo *bo; 426 464 427 - drm_printf(p, "%-3s %-6s %-14s %-10s %-10s %-4s %-8s %s\n", 428 - "ctx", "handle", "vpu_addr", "size", "flags", "refs", "dma_refs", "attribs"); 465 + drm_printf(p, "%-9s %-3s %-14s %-10s %-10s %-4s %s\n", 466 + "bo", "ctx", "vpu_addr", "size", "flags", "refs", "attribs"); 429 467 430 468 mutex_lock(&vdev->bo_list_lock); 431 469 list_for_each_entry(bo, &vdev->bo_list, bo_list_node)
+1 -2
drivers/accel/ivpu/ivpu_gem.h
··· 19 19 20 20 struct mutex lock; /* Protects: ctx, mmu_mapped, vpu_addr */ 21 21 u64 vpu_addr; 22 - u32 handle; 23 22 u32 flags; 24 23 u32 job_status; /* Valid only for command buffer */ 25 24 bool mmu_mapped; 26 25 }; 27 26 28 27 int ivpu_bo_pin(struct ivpu_bo *bo); 29 - void ivpu_bo_remove_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx); 28 + void ivpu_bo_unbind_all_bos_from_context(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx); 30 29 31 30 struct drm_gem_object *ivpu_gem_create_object(struct drm_device *dev, size_t size); 32 31 struct ivpu_bo *ivpu_bo_alloc_internal(struct ivpu_device *vdev, u64 vpu_addr, u64 size, u32 flags);
+4 -10
drivers/accel/ivpu/ivpu_hw_37xx.c
··· 875 875 876 876 static void ivpu_hw_37xx_irq_wdt_nce_handler(struct ivpu_device *vdev) 877 877 { 878 - ivpu_err_ratelimited(vdev, "WDT NCE irq\n"); 879 - 880 - ivpu_pm_schedule_recovery(vdev); 878 + ivpu_pm_trigger_recovery(vdev, "WDT NCE IRQ"); 881 879 } 882 880 883 881 static void ivpu_hw_37xx_irq_wdt_mss_handler(struct ivpu_device *vdev) 884 882 { 885 - ivpu_err_ratelimited(vdev, "WDT MSS irq\n"); 886 - 887 883 ivpu_hw_wdt_disable(vdev); 888 - ivpu_pm_schedule_recovery(vdev); 884 + ivpu_pm_trigger_recovery(vdev, "WDT MSS IRQ"); 889 885 } 890 886 891 887 static void ivpu_hw_37xx_irq_noc_firewall_handler(struct ivpu_device *vdev) 892 888 { 893 - ivpu_err_ratelimited(vdev, "NOC Firewall irq\n"); 894 - 895 - ivpu_pm_schedule_recovery(vdev); 889 + ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ"); 896 890 } 897 891 898 892 /* Handler for IRQs from VPU core (irqV) */ ··· 964 970 REGB_WR32(VPU_37XX_BUTTRESS_INTERRUPT_STAT, status); 965 971 966 972 if (schedule_recovery) 967 - ivpu_pm_schedule_recovery(vdev); 973 + ivpu_pm_trigger_recovery(vdev, "Buttress IRQ"); 968 974 969 975 return true; 970 976 }
+23 -6
drivers/accel/ivpu/ivpu_hw_40xx.c
··· 746 746 return 0; 747 747 } 748 748 749 - static int ivpu_hw_40xx_reset(struct ivpu_device *vdev) 749 + static int ivpu_hw_40xx_ip_reset(struct ivpu_device *vdev) 750 750 { 751 751 int ret; 752 752 u32 val; ··· 764 764 ret = REGB_POLL_FLD(VPU_40XX_BUTTRESS_IP_RESET, TRIGGER, 0, TIMEOUT_US); 765 765 if (ret) 766 766 ivpu_err(vdev, "Timed out waiting for RESET completion\n"); 767 + 768 + return ret; 769 + } 770 + 771 + static int ivpu_hw_40xx_reset(struct ivpu_device *vdev) 772 + { 773 + int ret = 0; 774 + 775 + if (ivpu_hw_40xx_ip_reset(vdev)) { 776 + ivpu_err(vdev, "Failed to reset VPU IP\n"); 777 + ret = -EIO; 778 + } 779 + 780 + if (ivpu_pll_disable(vdev)) { 781 + ivpu_err(vdev, "Failed to disable PLL\n"); 782 + ret = -EIO; 783 + } 767 784 768 785 return ret; 769 786 } ··· 930 913 931 914 ivpu_hw_40xx_save_d0i3_entry_timestamp(vdev); 932 915 933 - if (!ivpu_hw_40xx_is_idle(vdev) && ivpu_hw_40xx_reset(vdev)) 916 + if (!ivpu_hw_40xx_is_idle(vdev) && ivpu_hw_40xx_ip_reset(vdev)) 934 917 ivpu_warn(vdev, "Failed to reset the VPU\n"); 935 918 936 919 if (ivpu_pll_disable(vdev)) { ··· 1049 1032 static void ivpu_hw_40xx_irq_wdt_nce_handler(struct ivpu_device *vdev) 1050 1033 { 1051 1034 /* TODO: For LNN hang consider engine reset instead of full recovery */ 1052 - ivpu_pm_schedule_recovery(vdev); 1035 + ivpu_pm_trigger_recovery(vdev, "WDT NCE IRQ"); 1053 1036 } 1054 1037 1055 1038 static void ivpu_hw_40xx_irq_wdt_mss_handler(struct ivpu_device *vdev) 1056 1039 { 1057 1040 ivpu_hw_wdt_disable(vdev); 1058 - ivpu_pm_schedule_recovery(vdev); 1041 + ivpu_pm_trigger_recovery(vdev, "WDT MSS IRQ"); 1059 1042 } 1060 1043 1061 1044 static void ivpu_hw_40xx_irq_noc_firewall_handler(struct ivpu_device *vdev) 1062 1045 { 1063 - ivpu_pm_schedule_recovery(vdev); 1046 + ivpu_pm_trigger_recovery(vdev, "NOC Firewall IRQ"); 1064 1047 } 1065 1048 1066 1049 /* Handler for IRQs from VPU core (irqV) */ ··· 1154 1137 REGB_WR32(VPU_40XX_BUTTRESS_INTERRUPT_STAT, status); 1155 1138 1156 1139 
if (schedule_recovery) 1157 - ivpu_pm_schedule_recovery(vdev); 1140 + ivpu_pm_trigger_recovery(vdev, "Buttress IRQ"); 1158 1141 1159 1142 return true; 1160 1143 }
+2 -4
drivers/accel/ivpu/ivpu_ipc.c
··· 343 343 hb_ret = ivpu_ipc_send_receive_internal(vdev, &hb_req, VPU_JSM_MSG_QUERY_ENGINE_HB_DONE, 344 344 &hb_resp, VPU_IPC_CHAN_ASYNC_CMD, 345 345 vdev->timeout.jsm); 346 - if (hb_ret == -ETIMEDOUT) { 347 - ivpu_hw_diagnose_failure(vdev); 348 - ivpu_pm_schedule_recovery(vdev); 349 - } 346 + if (hb_ret == -ETIMEDOUT) 347 + ivpu_pm_trigger_recovery(vdev, "IPC timeout"); 350 348 351 349 return ret; 352 350 }
+73 -87
drivers/accel/ivpu/ivpu_job.c
··· 112 112 } 113 113 } 114 114 115 - void ivpu_cmdq_release_all(struct ivpu_file_priv *file_priv) 115 + void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv) 116 116 { 117 117 int i; 118 118 119 - mutex_lock(&file_priv->lock); 119 + lockdep_assert_held(&file_priv->lock); 120 120 121 121 for (i = 0; i < IVPU_NUM_ENGINES; i++) 122 122 ivpu_cmdq_release_locked(file_priv, i); 123 - 124 - mutex_unlock(&file_priv->lock); 125 123 } 126 124 127 125 /* 128 126 * Mark the doorbell as unregistered and reset job queue pointers. 129 127 * This function needs to be called when the VPU hardware is restarted 130 - * and FW looses job queue state. The next time job queue is used it 128 + * and FW loses job queue state. The next time job queue is used it 131 129 * will be registered again. 132 130 */ 133 131 static void ivpu_cmdq_reset_locked(struct ivpu_file_priv *file_priv, u16 engine) ··· 159 161 struct ivpu_file_priv *file_priv; 160 162 unsigned long ctx_id; 161 163 162 - xa_for_each(&vdev->context_xa, ctx_id, file_priv) { 163 - file_priv = ivpu_file_priv_get_by_ctx_id(vdev, ctx_id); 164 - if (!file_priv) 165 - continue; 164 + mutex_lock(&vdev->context_list_lock); 166 165 166 + xa_for_each(&vdev->context_xa, ctx_id, file_priv) 167 167 ivpu_cmdq_reset_all(file_priv); 168 168 169 - ivpu_file_priv_put(&file_priv); 170 - } 169 + mutex_unlock(&vdev->context_list_lock); 170 + 171 171 } 172 172 173 173 static int ivpu_cmdq_push_job(struct ivpu_cmdq *cmdq, struct ivpu_job *job) ··· 239 243 return &fence->base; 240 244 } 241 245 242 - static void job_get(struct ivpu_job *job, struct ivpu_job **link) 246 + static void ivpu_job_destroy(struct ivpu_job *job) 243 247 { 244 - struct ivpu_device *vdev = job->vdev; 245 - 246 - kref_get(&job->ref); 247 - *link = job; 248 - 249 - ivpu_dbg(vdev, KREF, "Job get: id %u refcount %u\n", job->job_id, kref_read(&job->ref)); 250 - } 251 - 252 - static void job_release(struct kref *ref) 253 - { 254 - struct ivpu_job *job = container_of(ref, 
struct ivpu_job, ref); 255 248 struct ivpu_device *vdev = job->vdev; 256 249 u32 i; 250 + 251 + ivpu_dbg(vdev, JOB, "Job destroyed: id %3u ctx %2d engine %d", 252 + job->job_id, job->file_priv->ctx.id, job->engine_idx); 257 253 258 254 for (i = 0; i < job->bo_count; i++) 259 255 if (job->bos[i]) ··· 253 265 254 266 dma_fence_put(job->done_fence); 255 267 ivpu_file_priv_put(&job->file_priv); 256 - 257 - ivpu_dbg(vdev, KREF, "Job released: id %u\n", job->job_id); 258 268 kfree(job); 259 - 260 - /* Allow the VPU to get suspended, must be called after ivpu_file_priv_put() */ 261 - ivpu_rpm_put(vdev); 262 - } 263 - 264 - static void job_put(struct ivpu_job *job) 265 - { 266 - struct ivpu_device *vdev = job->vdev; 267 - 268 - ivpu_dbg(vdev, KREF, "Job put: id %u refcount %u\n", job->job_id, kref_read(&job->ref)); 269 - kref_put(&job->ref, job_release); 270 269 } 271 270 272 271 static struct ivpu_job * 273 - ivpu_create_job(struct ivpu_file_priv *file_priv, u32 engine_idx, u32 bo_count) 272 + ivpu_job_create(struct ivpu_file_priv *file_priv, u32 engine_idx, u32 bo_count) 274 273 { 275 274 struct ivpu_device *vdev = file_priv->vdev; 276 275 struct ivpu_job *job; 277 - int ret; 278 - 279 - ret = ivpu_rpm_get(vdev); 280 - if (ret < 0) 281 - return NULL; 282 276 283 277 job = kzalloc(struct_size(job, bos, bo_count), GFP_KERNEL); 284 278 if (!job) 285 - goto err_rpm_put; 286 - 287 - kref_init(&job->ref); 279 + return NULL; 288 280 289 281 job->vdev = vdev; 290 282 job->engine_idx = engine_idx; ··· 278 310 job->file_priv = ivpu_file_priv_get(file_priv); 279 311 280 312 ivpu_dbg(vdev, JOB, "Job created: ctx %2d engine %d", file_priv->ctx.id, job->engine_idx); 281 - 282 313 return job; 283 314 284 315 err_free_job: 285 316 kfree(job); 286 - err_rpm_put: 287 - ivpu_rpm_put(vdev); 288 317 return NULL; 289 318 } 290 319 291 - static int ivpu_job_done(struct ivpu_device *vdev, u32 job_id, u32 job_status) 320 + static int ivpu_job_signal_and_destroy(struct ivpu_device *vdev, u32 
job_id, u32 job_status) 292 321 { 293 322 struct ivpu_job *job; 294 323 ··· 302 337 ivpu_dbg(vdev, JOB, "Job complete: id %3u ctx %2d engine %d status 0x%x\n", 303 338 job->job_id, job->file_priv->ctx.id, job->engine_idx, job_status); 304 339 340 + ivpu_job_destroy(job); 305 341 ivpu_stop_job_timeout_detection(vdev); 306 342 307 - job_put(job); 343 + ivpu_rpm_put(vdev); 308 344 return 0; 309 345 } 310 346 ··· 315 349 unsigned long id; 316 350 317 351 xa_for_each(&vdev->submitted_jobs_xa, id, job) 318 - ivpu_job_done(vdev, id, VPU_JSM_STATUS_ABORTED); 352 + ivpu_job_signal_and_destroy(vdev, id, VPU_JSM_STATUS_ABORTED); 319 353 } 320 354 321 - static int ivpu_direct_job_submission(struct ivpu_job *job) 355 + static int ivpu_job_submit(struct ivpu_job *job) 322 356 { 323 357 struct ivpu_file_priv *file_priv = job->file_priv; 324 358 struct ivpu_device *vdev = job->vdev; ··· 326 360 struct ivpu_cmdq *cmdq; 327 361 int ret; 328 362 363 + ret = ivpu_rpm_get(vdev); 364 + if (ret < 0) 365 + return ret; 366 + 329 367 mutex_lock(&file_priv->lock); 330 368 331 369 cmdq = ivpu_cmdq_acquire(job->file_priv, job->engine_idx); 332 370 if (!cmdq) { 333 - ivpu_warn(vdev, "Failed get job queue, ctx %d engine %d\n", 334 - file_priv->ctx.id, job->engine_idx); 371 + ivpu_warn_ratelimited(vdev, "Failed get job queue, ctx %d engine %d\n", 372 + file_priv->ctx.id, job->engine_idx); 335 373 ret = -EINVAL; 336 - goto err_unlock; 374 + goto err_unlock_file_priv; 337 375 } 338 376 339 377 job_id_range.min = FIELD_PREP(JOB_ID_CONTEXT_MASK, (file_priv->ctx.id - 1)); 340 378 job_id_range.max = job_id_range.min | JOB_ID_JOB_MASK; 341 379 342 - job_get(job, &job); 343 - ret = xa_alloc(&vdev->submitted_jobs_xa, &job->job_id, job, job_id_range, GFP_KERNEL); 380 + xa_lock(&vdev->submitted_jobs_xa); 381 + ret = __xa_alloc(&vdev->submitted_jobs_xa, &job->job_id, job, job_id_range, GFP_KERNEL); 344 382 if (ret) { 345 - ivpu_warn_ratelimited(vdev, "Failed to allocate job id: %d\n", ret); 346 - goto 
err_job_put; 383 + ivpu_dbg(vdev, JOB, "Too many active jobs in ctx %d\n", 384 + file_priv->ctx.id); 385 + ret = -EBUSY; 386 + goto err_unlock_submitted_jobs_xa; 347 387 } 348 388 349 389 ret = ivpu_cmdq_push_job(cmdq, job); 350 390 if (ret) 351 - goto err_xa_erase; 391 + goto err_erase_xa; 352 392 353 393 ivpu_start_job_timeout_detection(vdev); 354 394 355 - ivpu_dbg(vdev, JOB, "Job submitted: id %3u addr 0x%llx ctx %2d engine %d next %d\n", 356 - job->job_id, job->cmd_buf_vpu_addr, file_priv->ctx.id, 357 - job->engine_idx, cmdq->jobq->header.tail); 358 - 359 - if (ivpu_test_mode & IVPU_TEST_MODE_NULL_HW) { 360 - ivpu_job_done(vdev, job->job_id, VPU_JSM_STATUS_SUCCESS); 395 + if (unlikely(ivpu_test_mode & IVPU_TEST_MODE_NULL_HW)) { 361 396 cmdq->jobq->header.head = cmdq->jobq->header.tail; 362 397 wmb(); /* Flush WC buffer for jobq header */ 363 398 } else { 364 399 ivpu_cmdq_ring_db(vdev, cmdq); 365 400 } 366 401 402 + ivpu_dbg(vdev, JOB, "Job submitted: id %3u ctx %2d engine %d addr 0x%llx next %d\n", 403 + job->job_id, file_priv->ctx.id, job->engine_idx, 404 + job->cmd_buf_vpu_addr, cmdq->jobq->header.tail); 405 + 406 + xa_unlock(&vdev->submitted_jobs_xa); 407 + 367 408 mutex_unlock(&file_priv->lock); 409 + 410 + if (unlikely(ivpu_test_mode & IVPU_TEST_MODE_NULL_HW)) 411 + ivpu_job_signal_and_destroy(vdev, job->job_id, VPU_JSM_STATUS_SUCCESS); 412 + 368 413 return 0; 369 414 370 - err_xa_erase: 371 - xa_erase(&vdev->submitted_jobs_xa, job->job_id); 372 - err_job_put: 373 - job_put(job); 374 - err_unlock: 415 + err_erase_xa: 416 + __xa_erase(&vdev->submitted_jobs_xa, job->job_id); 417 + err_unlock_submitted_jobs_xa: 418 + xa_unlock(&vdev->submitted_jobs_xa); 419 + err_unlock_file_priv: 375 420 mutex_unlock(&file_priv->lock); 421 + ivpu_rpm_put(vdev); 376 422 return ret; 377 423 } 378 424 ··· 466 488 if (params->engine > DRM_IVPU_ENGINE_COPY) 467 489 return -EINVAL; 468 490 491 + if (params->priority > DRM_IVPU_JOB_PRIORITY_REALTIME) 492 + return -EINVAL; 493 + 
469 494 if (params->buffer_count == 0 || params->buffer_count > JOB_MAX_BUFFER_COUNT) 470 495 return -EINVAL; 471 496 ··· 490 509 params->buffer_count * sizeof(u32)); 491 510 if (ret) { 492 511 ret = -EFAULT; 493 - goto free_handles; 512 + goto err_free_handles; 494 513 } 495 514 496 515 if (!drm_dev_enter(&vdev->drm, &idx)) { 497 516 ret = -ENODEV; 498 - goto free_handles; 517 + goto err_free_handles; 499 518 } 500 519 501 520 ivpu_dbg(vdev, JOB, "Submit ioctl: ctx %u buf_count %u\n", 502 521 file_priv->ctx.id, params->buffer_count); 503 522 504 - job = ivpu_create_job(file_priv, params->engine, params->buffer_count); 523 + job = ivpu_job_create(file_priv, params->engine, params->buffer_count); 505 524 if (!job) { 506 525 ivpu_err(vdev, "Failed to create job\n"); 507 526 ret = -ENOMEM; 508 - goto dev_exit; 527 + goto err_exit_dev; 509 528 } 510 529 511 530 ret = ivpu_job_prepare_bos_for_submit(file, job, buf_handles, params->buffer_count, 512 531 params->commands_offset); 513 532 if (ret) { 514 - ivpu_err(vdev, "Failed to prepare job, ret %d\n", ret); 515 - goto job_put; 533 + ivpu_err(vdev, "Failed to prepare job: %d\n", ret); 534 + goto err_destroy_job; 516 535 } 517 536 518 - ret = ivpu_direct_job_submission(job); 519 - if (ret) { 520 - dma_fence_signal(job->done_fence); 521 - ivpu_err(vdev, "Failed to submit job to the HW, ret %d\n", ret); 522 - } 537 + down_read(&vdev->pm->reset_lock); 538 + ret = ivpu_job_submit(job); 539 + up_read(&vdev->pm->reset_lock); 540 + if (ret) 541 + goto err_signal_fence; 523 542 524 - job_put: 525 - job_put(job); 526 - dev_exit: 527 543 drm_dev_exit(idx); 528 - free_handles: 529 544 kfree(buf_handles); 545 + return ret; 530 546 547 + err_signal_fence: 548 + dma_fence_signal(job->done_fence); 549 + err_destroy_job: 550 + ivpu_job_destroy(job); 551 + err_exit_dev: 552 + drm_dev_exit(idx); 553 + err_free_handles: 554 + kfree(buf_handles); 531 555 return ret; 532 556 } 533 557 ··· 554 568 } 555 569 556 570 payload = (struct 
vpu_ipc_msg_payload_job_done *)&jsm_msg->payload; 557 - ret = ivpu_job_done(vdev, payload->job_id, payload->job_status); 571 + ret = ivpu_job_signal_and_destroy(vdev, payload->job_id, payload->job_status); 558 572 if (!ret && !xa_empty(&vdev->submitted_jobs_xa)) 559 573 ivpu_start_job_timeout_detection(vdev); 560 574 }
+1 -2
drivers/accel/ivpu/ivpu_job.h
··· 43 43 will update the job status 44 44 */ 45 45 struct ivpu_job { 46 - struct kref ref; 47 46 struct ivpu_device *vdev; 48 47 struct ivpu_file_priv *file_priv; 49 48 struct dma_fence *done_fence; ··· 55 56 56 57 int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file); 57 58 58 - void ivpu_cmdq_release_all(struct ivpu_file_priv *file_priv); 59 + void ivpu_cmdq_release_all_locked(struct ivpu_file_priv *file_priv); 59 60 void ivpu_cmdq_reset_all_contexts(struct ivpu_device *vdev); 60 61 61 62 void ivpu_job_done_consumer_init(struct ivpu_device *vdev);
+16 -8
drivers/accel/ivpu/ivpu_mmu.c
··· 7 7 #include <linux/highmem.h> 8 8 9 9 #include "ivpu_drv.h" 10 + #include "ivpu_hw.h" 10 11 #include "ivpu_hw_reg_io.h" 11 12 #include "ivpu_mmu.h" 12 13 #include "ivpu_mmu_context.h" ··· 519 518 520 519 ivpu_err(vdev, "Timed out waiting for MMU consumer: %d, error: %s\n", ret, 521 520 ivpu_mmu_cmdq_err_to_str(err)); 521 + ivpu_hw_diagnose_failure(vdev); 522 522 } 523 523 524 524 return ret; ··· 887 885 888 886 void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev) 889 887 { 890 - bool schedule_recovery = false; 891 888 u32 *event; 892 889 u32 ssid; 893 890 ··· 896 895 ivpu_mmu_dump_event(vdev, event); 897 896 898 897 ssid = FIELD_GET(IVPU_MMU_EVT_SSID_MASK, event[0]); 899 - if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID) 900 - schedule_recovery = true; 901 - else 902 - ivpu_mmu_user_context_mark_invalid(vdev, ssid); 903 - } 898 + if (ssid == IVPU_GLOBAL_CONTEXT_MMU_SSID) { 899 + ivpu_pm_trigger_recovery(vdev, "MMU event"); 900 + return; 901 + } 904 902 905 - if (schedule_recovery) 906 - ivpu_pm_schedule_recovery(vdev); 903 + ivpu_mmu_user_context_mark_invalid(vdev, ssid); 904 + } 905 + } 906 + 907 + void ivpu_mmu_evtq_dump(struct ivpu_device *vdev) 908 + { 909 + u32 *event; 910 + 911 + while ((event = ivpu_mmu_get_event(vdev)) != NULL) 912 + ivpu_mmu_dump_event(vdev, event); 907 913 } 908 914 909 915 void ivpu_mmu_irq_gerr_handler(struct ivpu_device *vdev)
+1
drivers/accel/ivpu/ivpu_mmu.h
··· 46 46 47 47 void ivpu_mmu_irq_evtq_handler(struct ivpu_device *vdev); 48 48 void ivpu_mmu_irq_gerr_handler(struct ivpu_device *vdev); 49 + void ivpu_mmu_evtq_dump(struct ivpu_device *vdev); 49 50 50 51 #endif /* __IVPU_MMU_H__ */
+9
drivers/accel/ivpu/ivpu_mmu_context.c
··· 355 355 dma_addr_t dma_addr = sg_dma_address(sg) - sg->offset; 356 356 size_t size = sg_dma_len(sg) + sg->offset; 357 357 358 + ivpu_dbg(vdev, MMU_MAP, "Map ctx: %u dma_addr: 0x%llx vpu_addr: 0x%llx size: %lu\n", 359 + ctx->id, dma_addr, vpu_addr, size); 360 + 358 361 ret = ivpu_mmu_context_map_pages(vdev, ctx, vpu_addr, dma_addr, size, prot); 359 362 if (ret) { 360 363 ivpu_err(vdev, "Failed to map context pages\n"); ··· 369 366 370 367 /* Ensure page table modifications are flushed from wc buffers to memory */ 371 368 wmb(); 369 + 372 370 mutex_unlock(&ctx->lock); 373 371 374 372 ret = ivpu_mmu_invalidate_tlb(vdev, ctx->id); ··· 392 388 mutex_lock(&ctx->lock); 393 389 394 390 for_each_sgtable_dma_sg(sgt, sg, i) { 391 + dma_addr_t dma_addr = sg_dma_address(sg) - sg->offset; 395 392 size_t size = sg_dma_len(sg) + sg->offset; 393 + 394 + ivpu_dbg(vdev, MMU_MAP, "Unmap ctx: %u dma_addr: 0x%llx vpu_addr: 0x%llx size: %lu\n", 395 + ctx->id, dma_addr, vpu_addr, size); 396 396 397 397 ivpu_mmu_context_unmap_pages(ctx, vpu_addr, size); 398 398 vpu_addr += size; ··· 404 396 405 397 /* Ensure page table modifications are flushed from wc buffers to memory */ 406 398 wmb(); 399 + 407 400 mutex_unlock(&ctx->lock); 408 401 409 402 ret = ivpu_mmu_invalidate_tlb(vdev, ctx->id);
+35 -17
drivers/accel/ivpu/ivpu_pm.c
··· 13 13 #include "ivpu_drv.h" 14 14 #include "ivpu_hw.h" 15 15 #include "ivpu_fw.h" 16 + #include "ivpu_fw_log.h" 16 17 #include "ivpu_ipc.h" 17 18 #include "ivpu_job.h" 18 19 #include "ivpu_jsm_msg.h" ··· 112 111 char *evt[2] = {"IVPU_PM_EVENT=IVPU_RECOVER", NULL}; 113 112 int ret; 114 113 114 + ivpu_err(vdev, "Recovering the VPU (reset #%d)\n", atomic_read(&vdev->pm->reset_counter)); 115 + 116 + ret = pm_runtime_resume_and_get(vdev->drm.dev); 117 + if (ret) 118 + ivpu_err(vdev, "Failed to resume VPU: %d\n", ret); 119 + 120 + ivpu_fw_log_dump(vdev); 121 + 115 122 retry: 116 123 ret = pci_try_reset_function(to_pci_dev(vdev->drm.dev)); 117 124 if (ret == -EAGAIN && !drm_dev_is_unplugged(&vdev->drm)) { ··· 131 122 ivpu_err(vdev, "Failed to reset VPU: %d\n", ret); 132 123 133 124 kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt); 125 + pm_runtime_mark_last_busy(vdev->drm.dev); 126 + pm_runtime_put_autosuspend(vdev->drm.dev); 134 127 } 135 128 136 - void ivpu_pm_schedule_recovery(struct ivpu_device *vdev) 129 + void ivpu_pm_trigger_recovery(struct ivpu_device *vdev, const char *reason) 137 130 { 138 - struct ivpu_pm_info *pm = vdev->pm; 131 + ivpu_err(vdev, "Recovery triggered by %s\n", reason); 139 132 140 133 if (ivpu_disable_recovery) { 141 134 ivpu_err(vdev, "Recovery not available when disable_recovery param is set\n"); ··· 149 138 return; 150 139 } 151 140 152 - /* Schedule recovery if it's not in progress */ 153 - if (atomic_cmpxchg(&pm->in_reset, 0, 1) == 0) { 154 - ivpu_hw_irq_disable(vdev); 155 - queue_work(system_long_wq, &pm->recovery_work); 141 + /* Trigger recovery if it's not in progress */ 142 + if (atomic_cmpxchg(&vdev->pm->reset_pending, 0, 1) == 0) { 143 + ivpu_hw_diagnose_failure(vdev); 144 + ivpu_hw_irq_disable(vdev); /* Disable IRQ early to protect from IRQ storm */ 145 + queue_work(system_long_wq, &vdev->pm->recovery_work); 156 146 } 157 147 } 158 148 ··· 161 149 { 162 150 struct ivpu_pm_info *pm = container_of(work, struct 
ivpu_pm_info, job_timeout_work.work); 163 151 struct ivpu_device *vdev = pm->vdev; 164 - unsigned long timeout_ms = ivpu_tdr_timeout_ms ? ivpu_tdr_timeout_ms : vdev->timeout.tdr; 165 152 166 - ivpu_err(vdev, "TDR detected, timeout %lu ms", timeout_ms); 167 - ivpu_hw_diagnose_failure(vdev); 168 - 169 - ivpu_pm_schedule_recovery(vdev); 153 + ivpu_pm_trigger_recovery(vdev, "TDR"); 170 154 } 171 155 172 156 void ivpu_start_job_timeout_detection(struct ivpu_device *vdev) ··· 235 227 bool hw_is_idle = true; 236 228 int ret; 237 229 230 + drm_WARN_ON(&vdev->drm, !xa_empty(&vdev->submitted_jobs_xa)); 231 + drm_WARN_ON(&vdev->drm, work_pending(&vdev->pm->recovery_work)); 232 + 238 233 ivpu_dbg(vdev, PM, "Runtime suspend..\n"); 239 234 240 235 if (!ivpu_hw_is_idle(vdev) && vdev->pm->suspend_reschedule_counter) { ··· 258 247 ivpu_err(vdev, "Failed to set suspend VPU: %d\n", ret); 259 248 260 249 if (!hw_is_idle) { 261 - ivpu_warn(vdev, "VPU failed to enter idle, force suspended.\n"); 250 + ivpu_err(vdev, "VPU failed to enter idle, force suspended.\n"); 251 + ivpu_fw_log_dump(vdev); 262 252 ivpu_pm_prepare_cold_boot(vdev); 263 253 } else { 264 254 ivpu_pm_prepare_warm_boot(vdev); ··· 320 308 { 321 309 struct ivpu_device *vdev = pci_get_drvdata(pdev); 322 310 323 - pm_runtime_get_sync(vdev->drm.dev); 324 - 325 311 ivpu_dbg(vdev, PM, "Pre-reset..\n"); 326 312 atomic_inc(&vdev->pm->reset_counter); 327 - atomic_set(&vdev->pm->in_reset, 1); 313 + atomic_set(&vdev->pm->reset_pending, 1); 314 + 315 + pm_runtime_get_sync(vdev->drm.dev); 316 + down_write(&vdev->pm->reset_lock); 328 317 ivpu_prepare_for_reset(vdev); 329 318 ivpu_hw_reset(vdev); 330 319 ivpu_pm_prepare_cold_boot(vdev); ··· 342 329 ret = ivpu_resume(vdev); 343 330 if (ret) 344 331 ivpu_err(vdev, "Failed to set RESUME state: %d\n", ret); 345 - atomic_set(&vdev->pm->in_reset, 0); 332 + up_write(&vdev->pm->reset_lock); 333 + atomic_set(&vdev->pm->reset_pending, 0); 346 334 ivpu_dbg(vdev, PM, "Post-reset done.\n"); 347 335 
336 + pm_runtime_mark_last_busy(vdev->drm.dev); 348 337 pm_runtime_put_autosuspend(vdev->drm.dev); 349 338 } 350 339 ··· 359 344 pm->vdev = vdev; 360 345 pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT; 361 346 362 - atomic_set(&pm->in_reset, 0); 347 + init_rwsem(&pm->reset_lock); 348 + atomic_set(&pm->reset_pending, 0); 349 + atomic_set(&pm->reset_counter, 0); 350 + 363 351 INIT_WORK(&pm->recovery_work, ivpu_pm_recovery_work); 364 352 INIT_DELAYED_WORK(&pm->job_timeout_work, ivpu_job_timeout_work); 365 353
+4 -2
drivers/accel/ivpu/ivpu_pm.h
··· 6 6 #ifndef __IVPU_PM_H__ 7 7 #define __IVPU_PM_H__ 8 8 9 + #include <linux/rwsem.h> 9 10 #include <linux/types.h> 10 11 11 12 struct ivpu_device; ··· 15 14 struct ivpu_device *vdev; 16 15 struct delayed_work job_timeout_work; 17 16 struct work_struct recovery_work; 18 - atomic_t in_reset; 17 + struct rw_semaphore reset_lock; 19 18 atomic_t reset_counter; 19 + atomic_t reset_pending; 20 20 bool is_warmboot; 21 21 u32 suspend_reschedule_counter; 22 22 }; ··· 39 37 int __must_check ivpu_rpm_get_if_active(struct ivpu_device *vdev); 40 38 void ivpu_rpm_put(struct ivpu_device *vdev); 41 39 42 - void ivpu_pm_schedule_recovery(struct ivpu_device *vdev); 40 + void ivpu_pm_trigger_recovery(struct ivpu_device *vdev, const char *reason); 43 41 void ivpu_start_job_timeout_detection(struct ivpu_device *vdev); 44 42 void ivpu_stop_job_timeout_detection(struct ivpu_device *vdev); 45 43
+28 -6
drivers/ata/ahci.c
··· 48 48 enum board_ids { 49 49 /* board IDs by feature in alphabetical order */ 50 50 board_ahci, 51 + board_ahci_43bit_dma, 51 52 board_ahci_ign_iferr, 52 53 board_ahci_low_power, 53 54 board_ahci_no_debounce_delay, ··· 124 123 static const struct ata_port_info ahci_port_info[] = { 125 124 /* by features */ 126 125 [board_ahci] = { 126 + .flags = AHCI_FLAG_COMMON, 127 + .pio_mask = ATA_PIO4, 128 + .udma_mask = ATA_UDMA6, 129 + .port_ops = &ahci_ops, 130 + }, 131 + [board_ahci_43bit_dma] = { 132 + AHCI_HFLAGS (AHCI_HFLAG_43BIT_ONLY), 127 133 .flags = AHCI_FLAG_COMMON, 128 134 .pio_mask = ATA_PIO4, 129 135 .udma_mask = ATA_UDMA6, ··· 605 597 { PCI_VDEVICE(PROMISE, 0x3f20), board_ahci }, /* PDC42819 */ 606 598 { PCI_VDEVICE(PROMISE, 0x3781), board_ahci }, /* FastTrak TX8660 ahci-mode */ 607 599 608 - /* Asmedia */ 600 + /* ASMedia */ 609 601 { PCI_VDEVICE(ASMEDIA, 0x0601), board_ahci }, /* ASM1060 */ 610 602 { PCI_VDEVICE(ASMEDIA, 0x0602), board_ahci }, /* ASM1060 */ 611 - { PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci }, /* ASM1061 */ 612 - { PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci }, /* ASM1062 */ 603 + { PCI_VDEVICE(ASMEDIA, 0x0611), board_ahci_43bit_dma }, /* ASM1061 */ 604 + { PCI_VDEVICE(ASMEDIA, 0x0612), board_ahci_43bit_dma }, /* ASM1061/1062 */ 613 605 { PCI_VDEVICE(ASMEDIA, 0x0621), board_ahci }, /* ASM1061R */ 614 606 { PCI_VDEVICE(ASMEDIA, 0x0622), board_ahci }, /* ASM1062R */ 615 607 { PCI_VDEVICE(ASMEDIA, 0x0624), board_ahci }, /* ASM1062+JMB575 */ ··· 671 663 static void ahci_pci_save_initial_config(struct pci_dev *pdev, 672 664 struct ahci_host_priv *hpriv) 673 665 { 666 + if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA && pdev->device == 0x1166) { 667 + dev_info(&pdev->dev, "ASM1166 has only six ports\n"); 668 + hpriv->saved_port_map = 0x3f; 669 + } 670 + 674 671 if (pdev->vendor == PCI_VENDOR_ID_JMICRON && pdev->device == 0x2361) { 675 672 dev_info(&pdev->dev, "JMB361 has only one port\n"); 676 673 hpriv->saved_port_map = 1; ··· 962 949 963 950 #endif /* 
CONFIG_PM */ 964 951 965 - static int ahci_configure_dma_masks(struct pci_dev *pdev, int using_dac) 952 + static int ahci_configure_dma_masks(struct pci_dev *pdev, 953 + struct ahci_host_priv *hpriv) 966 954 { 967 - const int dma_bits = using_dac ? 64 : 32; 955 + int dma_bits; 968 956 int rc; 957 + 958 + if (hpriv->cap & HOST_CAP_64) { 959 + dma_bits = 64; 960 + if (hpriv->flags & AHCI_HFLAG_43BIT_ONLY) 961 + dma_bits = 43; 962 + } else { 963 + dma_bits = 32; 964 + } 969 965 970 966 /* 971 967 * If the device fixup already set the dma_mask to some non-standard ··· 1948 1926 ahci_gtf_filter_workaround(host); 1949 1927 1950 1928 /* initialize adapter */ 1951 - rc = ahci_configure_dma_masks(pdev, hpriv->cap & HOST_CAP_64); 1929 + rc = ahci_configure_dma_masks(pdev, hpriv); 1952 1930 if (rc) 1953 1931 return rc; 1954 1932
+1
drivers/ata/ahci.h
··· 247 247 AHCI_HFLAG_SUSPEND_PHYS = BIT(26), /* handle PHYs during 248 248 suspend/resume */ 249 249 AHCI_HFLAG_NO_SXS = BIT(28), /* SXS not supported */ 250 + AHCI_HFLAG_43BIT_ONLY = BIT(29), /* 43bit DMA addr limit */ 250 251 251 252 /* ap->flags bits */ 252 253
+1 -1
drivers/ata/libata-sata.c
··· 784 784 EXPORT_SYMBOL_GPL(sata_lpm_ignore_phy_events); 785 785 786 786 static const char *ata_lpm_policy_names[] = { 787 - [ATA_LPM_UNKNOWN] = "max_performance", 787 + [ATA_LPM_UNKNOWN] = "keep_firmware_settings", 788 788 [ATA_LPM_MAX_POWER] = "max_performance", 789 789 [ATA_LPM_MED_POWER] = "medium_power", 790 790 [ATA_LPM_MED_POWER_WITH_DIPM] = "med_power_with_dipm",
+4 -1
drivers/block/aoe/aoeblk.c
··· 333 333 struct gendisk *gd; 334 334 mempool_t *mp; 335 335 struct blk_mq_tag_set *set; 336 + sector_t ssize; 336 337 ulong flags; 337 338 int late = 0; 338 339 int err; ··· 397 396 gd->minors = AOE_PARTITIONS; 398 397 gd->fops = &aoe_bdops; 399 398 gd->private_data = d; 400 - set_capacity(gd, d->ssize); 399 + ssize = d->ssize; 401 400 snprintf(gd->disk_name, sizeof gd->disk_name, "etherd/e%ld.%d", 402 401 d->aoemajor, d->aoeminor); 403 402 ··· 405 404 d->flags |= DEVFL_UP; 406 405 407 406 spin_unlock_irqrestore(&d->lock, flags); 407 + 408 + set_capacity(gd, ssize); 408 409 409 410 err = device_add_disk(NULL, gd, aoe_attr_groups); 410 411 if (err)
+3 -4
drivers/cpufreq/amd-pstate.c
··· 1232 1232 max_limit_perf = div_u64(policy->max * cpudata->highest_perf, cpudata->max_freq); 1233 1233 min_limit_perf = div_u64(policy->min * cpudata->highest_perf, cpudata->max_freq); 1234 1234 1235 + WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf); 1236 + WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf); 1237 + 1235 1238 max_perf = clamp_t(unsigned long, max_perf, cpudata->min_limit_perf, 1236 1239 cpudata->max_limit_perf); 1237 1240 min_perf = clamp_t(unsigned long, min_perf, cpudata->min_limit_perf, 1238 1241 cpudata->max_limit_perf); 1239 - 1240 - WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf); 1241 - WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf); 1242 - 1243 1242 value = READ_ONCE(cpudata->cppc_req_cached); 1244 1243 1245 1244 if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+34 -21
drivers/cpufreq/intel_pstate.c
··· 529 529 } 530 530 #endif /* CONFIG_ACPI_CPPC_LIB */ 531 531 532 + static int intel_pstate_freq_to_hwp_rel(struct cpudata *cpu, int freq, 533 + unsigned int relation) 534 + { 535 + if (freq == cpu->pstate.turbo_freq) 536 + return cpu->pstate.turbo_pstate; 537 + 538 + if (freq == cpu->pstate.max_freq) 539 + return cpu->pstate.max_pstate; 540 + 541 + switch (relation) { 542 + case CPUFREQ_RELATION_H: 543 + return freq / cpu->pstate.scaling; 544 + case CPUFREQ_RELATION_C: 545 + return DIV_ROUND_CLOSEST(freq, cpu->pstate.scaling); 546 + } 547 + 548 + return DIV_ROUND_UP(freq, cpu->pstate.scaling); 549 + } 550 + 551 + static int intel_pstate_freq_to_hwp(struct cpudata *cpu, int freq) 552 + { 553 + return intel_pstate_freq_to_hwp_rel(cpu, freq, CPUFREQ_RELATION_L); 554 + } 555 + 532 556 /** 533 557 * intel_pstate_hybrid_hwp_adjust - Calibrate HWP performance levels. 534 558 * @cpu: Target CPU. ··· 570 546 int perf_ctl_scaling = cpu->pstate.perf_ctl_scaling; 571 547 int perf_ctl_turbo = pstate_funcs.get_turbo(cpu->cpu); 572 548 int scaling = cpu->pstate.scaling; 549 + int freq; 573 550 574 551 pr_debug("CPU%d: perf_ctl_max_phys = %d\n", cpu->cpu, perf_ctl_max_phys); 575 552 pr_debug("CPU%d: perf_ctl_turbo = %d\n", cpu->cpu, perf_ctl_turbo); ··· 584 559 cpu->pstate.max_freq = rounddown(cpu->pstate.max_pstate * scaling, 585 560 perf_ctl_scaling); 586 561 587 - cpu->pstate.max_pstate_physical = 588 - DIV_ROUND_UP(perf_ctl_max_phys * perf_ctl_scaling, 589 - scaling); 562 + freq = perf_ctl_max_phys * perf_ctl_scaling; 563 + cpu->pstate.max_pstate_physical = intel_pstate_freq_to_hwp(cpu, freq); 590 564 591 - cpu->pstate.min_freq = cpu->pstate.min_pstate * perf_ctl_scaling; 565 + freq = cpu->pstate.min_pstate * perf_ctl_scaling; 566 + cpu->pstate.min_freq = freq; 592 567 /* 593 568 * Cast the min P-state value retrieved via pstate_funcs.get_min() to 594 569 * the effective range of HWP performance levels. 
595 570 */ 596 - cpu->pstate.min_pstate = DIV_ROUND_UP(cpu->pstate.min_freq, scaling); 571 + cpu->pstate.min_pstate = intel_pstate_freq_to_hwp(cpu, freq); 597 572 } 598 573 599 574 static inline void update_turbo_state(void) ··· 2553 2528 * abstract values to represent performance rather than pure ratios. 2554 2529 */ 2555 2530 if (hwp_active && cpu->pstate.scaling != perf_ctl_scaling) { 2556 - int scaling = cpu->pstate.scaling; 2557 2531 int freq; 2558 2532 2559 2533 freq = max_policy_perf * perf_ctl_scaling; 2560 - max_policy_perf = DIV_ROUND_UP(freq, scaling); 2534 + max_policy_perf = intel_pstate_freq_to_hwp(cpu, freq); 2561 2535 freq = min_policy_perf * perf_ctl_scaling; 2562 - min_policy_perf = DIV_ROUND_UP(freq, scaling); 2536 + min_policy_perf = intel_pstate_freq_to_hwp(cpu, freq); 2563 2537 } 2564 2538 2565 2539 pr_debug("cpu:%d min_policy_perf:%d max_policy_perf:%d\n", ··· 2932 2908 2933 2909 cpufreq_freq_transition_begin(policy, &freqs); 2934 2910 2935 - switch (relation) { 2936 - case CPUFREQ_RELATION_L: 2937 - target_pstate = DIV_ROUND_UP(freqs.new, cpu->pstate.scaling); 2938 - break; 2939 - case CPUFREQ_RELATION_H: 2940 - target_pstate = freqs.new / cpu->pstate.scaling; 2941 - break; 2942 - default: 2943 - target_pstate = DIV_ROUND_CLOSEST(freqs.new, cpu->pstate.scaling); 2944 - break; 2945 - } 2946 - 2911 + target_pstate = intel_pstate_freq_to_hwp_rel(cpu, freqs.new, relation); 2947 2912 target_pstate = intel_cpufreq_update_pstate(policy, target_pstate, false); 2948 2913 2949 2914 freqs.new = target_pstate * cpu->pstate.scaling; ··· 2950 2937 2951 2938 update_turbo_state(); 2952 2939 2953 - target_pstate = DIV_ROUND_UP(target_freq, cpu->pstate.scaling); 2940 + target_pstate = intel_pstate_freq_to_hwp(cpu, target_freq); 2954 2941 2955 2942 target_pstate = intel_cpufreq_update_pstate(policy, target_pstate, true); 2956 2943
+5 -2
drivers/crypto/caam/caamalg_qi2.c
··· 4545 4545 struct list_head entry; 4546 4546 struct device *dev; 4547 4547 int alg_type; 4548 + bool is_hmac; 4548 4549 struct ahash_alg ahash_alg; 4549 4550 }; 4550 4551 ··· 4572 4571 4573 4572 ctx->dev = caam_hash->dev; 4574 4573 4575 - if (alg->setkey) { 4574 + if (caam_hash->is_hmac) { 4576 4575 ctx->adata.key_dma = dma_map_single_attrs(ctx->dev, ctx->key, 4577 4576 ARRAY_SIZE(ctx->key), 4578 4577 DMA_TO_DEVICE, ··· 4612 4611 * For keyed hash algorithms shared descriptors 4613 4612 * will be created later in setkey() callback 4614 4613 */ 4615 - return alg->setkey ? 0 : ahash_set_sh_desc(ahash); 4614 + return caam_hash->is_hmac ? 0 : ahash_set_sh_desc(ahash); 4616 4615 } 4617 4616 4618 4617 static void caam_hash_cra_exit(struct crypto_tfm *tfm) ··· 4647 4646 template->hmac_name); 4648 4647 snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", 4649 4648 template->hmac_driver_name); 4649 + t_alg->is_hmac = true; 4650 4650 } else { 4651 4651 snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", 4652 4652 template->name); 4653 4653 snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", 4654 4654 template->driver_name); 4655 4655 t_alg->ahash_alg.setkey = NULL; 4656 + t_alg->is_hmac = false; 4656 4657 } 4657 4658 alg->cra_module = THIS_MODULE; 4658 4659 alg->cra_init = caam_hash_cra_init;
+5 -2
drivers/crypto/caam/caamhash.c
··· 1753 1753 struct caam_hash_alg { 1754 1754 struct list_head entry; 1755 1755 int alg_type; 1756 + bool is_hmac; 1756 1757 struct ahash_engine_alg ahash_alg; 1757 1758 }; 1758 1759 ··· 1805 1804 } else { 1806 1805 if (priv->era >= 6) { 1807 1806 ctx->dir = DMA_BIDIRECTIONAL; 1808 - ctx->key_dir = alg->setkey ? DMA_TO_DEVICE : DMA_NONE; 1807 + ctx->key_dir = caam_hash->is_hmac ? DMA_TO_DEVICE : DMA_NONE; 1809 1808 } else { 1810 1809 ctx->dir = DMA_TO_DEVICE; 1811 1810 ctx->key_dir = DMA_NONE; ··· 1863 1862 * For keyed hash algorithms shared descriptors 1864 1863 * will be created later in setkey() callback 1865 1864 */ 1866 - return alg->setkey ? 0 : ahash_set_sh_desc(ahash); 1865 + return caam_hash->is_hmac ? 0 : ahash_set_sh_desc(ahash); 1867 1866 } 1868 1867 1869 1868 static void caam_hash_cra_exit(struct crypto_tfm *tfm) ··· 1916 1915 template->hmac_name); 1917 1916 snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", 1918 1917 template->hmac_driver_name); 1918 + t_alg->is_hmac = true; 1919 1919 } else { 1920 1920 snprintf(alg->cra_name, CRYPTO_MAX_ALG_NAME, "%s", 1921 1921 template->name); 1922 1922 snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s", 1923 1923 template->driver_name); 1924 1924 halg->setkey = NULL; 1925 + t_alg->is_hmac = false; 1925 1926 } 1926 1927 alg->cra_module = THIS_MODULE; 1927 1928 alg->cra_init = caam_hash_cra_init;
+1
drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
··· 463 463 hw_data->fw_name = ADF_402XX_FW; 464 464 hw_data->fw_mmp_name = ADF_402XX_MMP; 465 465 hw_data->uof_get_name = uof_get_name_402xx; 466 + hw_data->get_ena_thd_mask = get_ena_thd_mask; 466 467 break; 467 468 case ADF_401XX_PCI_DEVICE_ID: 468 469 hw_data->fw_name = ADF_4XXX_FW;
+2 -2
drivers/cxl/core/region.c
··· 525 525 struct cxl_root_decoder *cxlrd = to_cxl_root_decoder(cxlr->dev.parent); 526 526 struct cxl_region_params *p = &cxlr->params; 527 527 struct resource *res; 528 - u32 remainder = 0; 528 + u64 remainder = 0; 529 529 530 530 lockdep_assert_held_write(&cxl_region_rwsem); 531 531 ··· 545 545 (cxlr->mode == CXL_DECODER_PMEM && uuid_is_null(&p->uuid))) 546 546 return -ENXIO; 547 547 548 - div_u64_rem(size, SZ_256M * p->interleave_ways, &remainder); 548 + div64_u64_rem(size, (u64)SZ_256M * p->interleave_ways, &remainder); 549 549 if (remainder) 550 550 return -EINVAL; 551 551
+15 -11
drivers/cxl/pci.c
··· 382 382 return rc; 383 383 } 384 384 385 - static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds) 385 + static int cxl_pci_setup_mailbox(struct cxl_memdev_state *mds, bool irq_avail) 386 386 { 387 387 struct cxl_dev_state *cxlds = &mds->cxlds; 388 388 const int cap = readl(cxlds->regs.mbox + CXLDEV_MBOX_CAPS_OFFSET); ··· 441 441 INIT_DELAYED_WORK(&mds->security.poll_dwork, cxl_mbox_sanitize_work); 442 442 443 443 /* background command interrupts are optional */ 444 - if (!(cap & CXLDEV_MBOX_CAP_BG_CMD_IRQ)) 444 + if (!(cap & CXLDEV_MBOX_CAP_BG_CMD_IRQ) || !irq_avail) 445 445 return 0; 446 446 447 447 msgnum = FIELD_GET(CXLDEV_MBOX_CAP_IRQ_MSGNUM_MASK, cap); ··· 588 588 return devm_add_action_or_reset(mds->cxlds.dev, free_event_buf, buf); 589 589 } 590 590 591 - static int cxl_alloc_irq_vectors(struct pci_dev *pdev) 591 + static bool cxl_alloc_irq_vectors(struct pci_dev *pdev) 592 592 { 593 593 int nvecs; 594 594 ··· 605 605 PCI_IRQ_MSIX | PCI_IRQ_MSI); 606 606 if (nvecs < 1) { 607 607 dev_dbg(&pdev->dev, "Failed to alloc irq vectors: %d\n", nvecs); 608 - return -ENXIO; 608 + return false; 609 609 } 610 - return 0; 610 + return true; 611 611 } 612 612 613 613 static irqreturn_t cxl_event_thread(int irq, void *id) ··· 743 743 } 744 744 745 745 static int cxl_event_config(struct pci_host_bridge *host_bridge, 746 - struct cxl_memdev_state *mds) 746 + struct cxl_memdev_state *mds, bool irq_avail) 747 747 { 748 748 struct cxl_event_interrupt_policy policy; 749 749 int rc; ··· 754 754 */ 755 755 if (!host_bridge->native_cxl_error) 756 756 return 0; 757 + 758 + if (!irq_avail) { 759 + dev_info(mds->cxlds.dev, "No interrupt support, disable event processing.\n"); 760 + return 0; 761 + } 757 762 758 763 rc = cxl_mem_alloc_event_buf(mds); 759 764 if (rc) ··· 794 789 struct cxl_register_map map; 795 790 struct cxl_memdev *cxlmd; 796 791 int i, rc, pmu_count; 792 + bool irq_avail; 797 793 798 794 /* 799 795 * Double check the anonymous union trickery in struct 
cxl_regs ··· 852 846 else 853 847 dev_warn(&pdev->dev, "Media not active (%d)\n", rc); 854 848 855 - rc = cxl_alloc_irq_vectors(pdev); 856 - if (rc) 857 - return rc; 849 + irq_avail = cxl_alloc_irq_vectors(pdev); 858 850 859 - rc = cxl_pci_setup_mailbox(mds); 851 + rc = cxl_pci_setup_mailbox(mds, irq_avail); 860 852 if (rc) 861 853 return rc; 862 854 ··· 913 909 } 914 910 } 915 911 916 - rc = cxl_event_config(host_bridge, mds); 912 + rc = cxl_event_config(host_bridge, mds, irq_avail); 917 913 if (rc) 918 914 return rc; 919 915
+13 -5
drivers/firewire/core-device.c
··· 118 118 * @buf: where to put the string 119 119 * @size: size of @buf, in bytes 120 120 * 121 - * The string is taken from a minimal ASCII text descriptor leaf after 122 - * the immediate entry with @key. The string is zero-terminated. 123 - * An overlong string is silently truncated such that it and the 124 - * zero byte fit into @size. 121 + * The string is taken from a minimal ASCII text descriptor leaf just after the entry with the 122 + * @key. The string is zero-terminated. An overlong string is silently truncated such that it 123 + * and the zero byte fit into @size. 125 124 * 126 125 * Returns strlen(buf) or a negative error code. 127 126 */ ··· 367 368 for (i = 0; i < ARRAY_SIZE(directories) && !!directories[i]; ++i) { 368 369 int result = fw_csr_string(directories[i], attr->key, buf, bufsize); 369 370 // Detected. 370 - if (result >= 0) 371 + if (result >= 0) { 371 372 ret = result; 373 + } else if (i == 0 && attr->key == CSR_VENDOR) { 374 + // Sony DVMC-DA1 has configuration ROM such that the descriptor leaf entry 375 + // in the root directory follows to the directory entry for vendor ID 376 + // instead of the immediate value for vendor ID. 377 + result = fw_csr_string(directories[i], CSR_DIRECTORY | attr->key, buf, 378 + bufsize); 379 + if (result >= 0) 380 + ret = result; 381 + } 372 382 } 373 383 374 384 if (ret >= 0) {
+57 -28
drivers/firmware/arm_ffa/driver.c
··· 107 107 struct work_struct notif_pcpu_work; 108 108 struct work_struct irq_work; 109 109 struct xarray partition_info; 110 - unsigned int partition_count; 111 110 DECLARE_HASHTABLE(notifier_hash, ilog2(FFA_MAX_NOTIFICATIONS)); 112 111 struct mutex notify_lock; /* lock to protect notifier hashtable */ 113 112 }; 114 113 115 114 static struct ffa_drv_info *drv_info; 115 + static void ffa_partitions_cleanup(void); 116 116 117 117 /* 118 118 * The driver must be able to support all the versions from the earliest ··· 733 733 void *cb_data; 734 734 735 735 partition = xa_load(&drv_info->partition_info, part_id); 736 + if (!partition) { 737 + pr_err("%s: Invalid partition ID 0x%x\n", __func__, part_id); 738 + return; 739 + } 740 + 736 741 read_lock(&partition->rw_lock); 737 742 callback = partition->callback; 738 743 cb_data = partition->cb_data; ··· 920 915 return -EOPNOTSUPP; 921 916 922 917 partition = xa_load(&drv_info->partition_info, part_id); 918 + if (!partition) { 919 + pr_err("%s: Invalid partition ID 0x%x\n", __func__, part_id); 920 + return -EINVAL; 921 + } 922 + 923 923 write_lock(&partition->rw_lock); 924 924 925 925 cb_valid = !!partition->callback; ··· 1196 1186 kfree(pbuf); 1197 1187 } 1198 1188 1199 - static void ffa_setup_partitions(void) 1189 + static int ffa_setup_partitions(void) 1200 1190 { 1201 - int count, idx; 1191 + int count, idx, ret; 1202 1192 uuid_t uuid; 1203 1193 struct ffa_device *ffa_dev; 1204 1194 struct ffa_dev_part_info *info; ··· 1207 1197 count = ffa_partition_probe(&uuid_null, &pbuf); 1208 1198 if (count <= 0) { 1209 1199 pr_info("%s: No partitions found, error %d\n", __func__, count); 1210 - return; 1200 + return -EINVAL; 1211 1201 } 1212 1202 1213 1203 xa_init(&drv_info->partition_info); ··· 1236 1226 ffa_device_unregister(ffa_dev); 1237 1227 continue; 1238 1228 } 1239 - xa_store(&drv_info->partition_info, tpbuf->id, info, GFP_KERNEL); 1229 + rwlock_init(&info->rw_lock); 1230 + ret = xa_insert(&drv_info->partition_info, 
tpbuf->id, 1231 + info, GFP_KERNEL); 1232 + if (ret) { 1233 + pr_err("%s: failed to save partition ID 0x%x - ret:%d\n", 1234 + __func__, tpbuf->id, ret); 1235 + ffa_device_unregister(ffa_dev); 1236 + kfree(info); 1237 + } 1240 1238 } 1241 - drv_info->partition_count = count; 1242 1239 1243 1240 kfree(pbuf); 1244 1241 1245 1242 /* Allocate for the host */ 1246 1243 info = kzalloc(sizeof(*info), GFP_KERNEL); 1247 - if (!info) 1248 - return; 1249 - xa_store(&drv_info->partition_info, drv_info->vm_id, info, GFP_KERNEL); 1250 - drv_info->partition_count++; 1244 + if (!info) { 1245 + pr_err("%s: failed to alloc Host partition ID 0x%x. Abort.\n", 1246 + __func__, drv_info->vm_id); 1247 + /* Already registered devices are freed on bus_exit */ 1248 + ffa_partitions_cleanup(); 1249 + return -ENOMEM; 1250 + } 1251 + 1252 + rwlock_init(&info->rw_lock); 1253 + ret = xa_insert(&drv_info->partition_info, drv_info->vm_id, 1254 + info, GFP_KERNEL); 1255 + if (ret) { 1256 + pr_err("%s: failed to save Host partition ID 0x%x - ret:%d. 
Abort.\n", 1257 + __func__, drv_info->vm_id, ret); 1258 + kfree(info); 1259 + /* Already registered devices are freed on bus_exit */ 1260 + ffa_partitions_cleanup(); 1261 + } 1262 + 1263 + return ret; 1251 1264 } 1252 1265 1253 1266 static void ffa_partitions_cleanup(void) 1254 1267 { 1255 - struct ffa_dev_part_info **info; 1256 - int idx, count = drv_info->partition_count; 1268 + struct ffa_dev_part_info *info; 1269 + unsigned long idx; 1257 1270 1258 - if (!count) 1259 - return; 1271 + xa_for_each(&drv_info->partition_info, idx, info) { 1272 + xa_erase(&drv_info->partition_info, idx); 1273 + kfree(info); 1274 + } 1260 1275 1261 - info = kcalloc(count, sizeof(*info), GFP_KERNEL); 1262 - if (!info) 1263 - return; 1264 - 1265 - xa_extract(&drv_info->partition_info, (void **)info, 0, VM_ID_MASK, 1266 - count, XA_PRESENT); 1267 - 1268 - for (idx = 0; idx < count; idx++) 1269 - kfree(info[idx]); 1270 - kfree(info); 1271 - 1272 - drv_info->partition_count = 0; 1273 1276 xa_destroy(&drv_info->partition_info); 1274 1277 } 1275 1278 ··· 1531 1508 1532 1509 ffa_notifications_setup(); 1533 1510 1534 - ffa_setup_partitions(); 1511 + ret = ffa_setup_partitions(); 1512 + if (ret) { 1513 + pr_err("failed to setup partitions\n"); 1514 + goto cleanup_notifs; 1515 + } 1535 1516 1536 1517 ret = ffa_sched_recv_cb_update(drv_info->vm_id, ffa_self_notif_handle, 1537 1518 drv_info, true); ··· 1543 1516 pr_info("Failed to register driver sched callback %d\n", ret); 1544 1517 1545 1518 return 0; 1519 + 1520 + cleanup_notifs: 1521 + ffa_notifications_cleanup(); 1546 1522 free_pages: 1547 1523 if (drv_info->tx_buffer) 1548 1524 free_pages_exact(drv_info->tx_buffer, RXTX_BUFFER_SIZE); ··· 1565 1535 ffa_rxtx_unmap(drv_info->vm_id); 1566 1536 free_pages_exact(drv_info->tx_buffer, RXTX_BUFFER_SIZE); 1567 1537 free_pages_exact(drv_info->rx_buffer, RXTX_BUFFER_SIZE); 1568 - xa_destroy(&drv_info->partition_info); 1569 1538 kfree(drv_info); 1570 1539 arm_ffa_bus_exit(); 1571 1540 }
+2 -3
drivers/firmware/arm_scmi/clock.c
··· 13 13 #include "notify.h" 14 14 15 15 /* Updated only after ALL the mandatory features for that version are merged */ 16 - #define SCMI_PROTOCOL_SUPPORTED_VERSION 0x20001 16 + #define SCMI_PROTOCOL_SUPPORTED_VERSION 0x20000 17 17 18 18 enum scmi_clock_protocol_cmd { 19 19 CLOCK_ATTRIBUTES = 0x3, ··· 954 954 scmi_clock_describe_rates_get(ph, clkid, clk); 955 955 } 956 956 957 - if (PROTOCOL_REV_MAJOR(version) >= 0x2 && 958 - PROTOCOL_REV_MINOR(version) >= 0x1) { 957 + if (PROTOCOL_REV_MAJOR(version) >= 0x3) { 959 958 cinfo->clock_config_set = scmi_clock_config_set_v2; 960 959 cinfo->clock_config_get = scmi_clock_config_get_v2; 961 960 } else {
+1
drivers/firmware/arm_scmi/common.h
··· 314 314 void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem); 315 315 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem, 316 316 struct scmi_xfer *xfer); 317 + bool shmem_channel_free(struct scmi_shared_mem __iomem *shmem); 317 318 318 319 /* declarations for message passing transports */ 319 320 struct scmi_msg_payld;
+14
drivers/firmware/arm_scmi/mailbox.c
··· 45 45 { 46 46 struct scmi_mailbox *smbox = client_to_scmi_mailbox(cl); 47 47 48 + /* 49 + * An A2P IRQ is NOT valid when received while the platform still has 50 + * the ownership of the channel, because the platform at first releases 51 + * the SMT channel and then sends the completion interrupt. 52 + * 53 + * This addresses a possible race condition in which a spurious IRQ from 54 + * a previous timed-out reply which arrived late could be wrongly 55 + * associated with the next pending transaction. 56 + */ 57 + if (cl->knows_txdone && !shmem_channel_free(smbox->shmem)) { 58 + dev_warn(smbox->cinfo->dev, "Ignoring spurious A2P IRQ !\n"); 59 + return; 60 + } 61 + 48 62 scmi_rx_callback(smbox->cinfo, shmem_read_header(smbox->shmem), NULL); 49 63 } 50 64
+18 -5
drivers/firmware/arm_scmi/perf.c
··· 350 350 } 351 351 352 352 static inline void 353 - process_response_opp_v4(struct perf_dom_info *dom, struct scmi_opp *opp, 354 - unsigned int loop_idx, 353 + process_response_opp_v4(struct device *dev, struct perf_dom_info *dom, 354 + struct scmi_opp *opp, unsigned int loop_idx, 355 355 const struct scmi_msg_resp_perf_describe_levels_v4 *r) 356 356 { 357 357 opp->perf = le32_to_cpu(r->opp[loop_idx].perf_val); ··· 362 362 /* Note that PERF v4 reports always five 32-bit words */ 363 363 opp->indicative_freq = le32_to_cpu(r->opp[loop_idx].indicative_freq); 364 364 if (dom->level_indexing_mode) { 365 + int ret; 366 + 365 367 opp->level_index = le32_to_cpu(r->opp[loop_idx].level_index); 366 368 367 - xa_store(&dom->opps_by_idx, opp->level_index, opp, GFP_KERNEL); 368 - xa_store(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL); 369 + ret = xa_insert(&dom->opps_by_idx, opp->level_index, opp, 370 + GFP_KERNEL); 371 + if (ret) 372 + dev_warn(dev, 373 + "Failed to add opps_by_idx at %d - ret:%d\n", 374 + opp->level_index, ret); 375 + 376 + ret = xa_insert(&dom->opps_by_lvl, opp->perf, opp, GFP_KERNEL); 377 + if (ret) 378 + dev_warn(dev, 379 + "Failed to add opps_by_lvl at %d - ret:%d\n", 380 + opp->perf, ret); 381 + 369 382 hash_add(dom->opps_by_freq, &opp->hash, opp->indicative_freq); 370 383 } 371 384 } ··· 395 382 if (PROTOCOL_REV_MAJOR(p->version) <= 0x3) 396 383 process_response_opp(opp, st->loop_idx, response); 397 384 else 398 - process_response_opp_v4(p->perf_dom, opp, st->loop_idx, 385 + process_response_opp_v4(ph->dev, p->perf_dom, opp, st->loop_idx, 399 386 response); 400 387 p->perf_dom->opp_count++; 401 388
+8 -4
drivers/firmware/arm_scmi/raw_mode.c
··· 1111 1111 int i; 1112 1112 1113 1113 for (i = 0; i < num_chans; i++) { 1114 - void *xret; 1115 1114 struct scmi_raw_queue *q; 1116 1115 1117 1116 q = scmi_raw_queue_init(raw); ··· 1119 1120 goto err_xa; 1120 1121 } 1121 1122 1122 - xret = xa_store(&raw->chans_q, channels[i], q, 1123 + ret = xa_insert(&raw->chans_q, channels[i], q, 1123 1124 GFP_KERNEL); 1124 - if (xa_err(xret)) { 1125 + if (ret) { 1125 1126 dev_err(dev, 1126 1127 "Fail to allocate Raw queue 0x%02X\n", 1127 1128 channels[i]); 1128 - ret = xa_err(xret); 1129 1129 goto err_xa; 1130 1130 } 1131 1131 } ··· 1320 1322 dev = raw->handle->dev; 1321 1323 q = scmi_raw_queue_select(raw, idx, 1322 1324 SCMI_XFER_IS_CHAN_SET(xfer) ? chan_id : 0); 1325 + if (!q) { 1326 + dev_warn(dev, 1327 + "RAW[%d] - NO queue for chan 0x%X. Dropping report.\n", 1328 + idx, chan_id); 1329 + return; 1330 + } 1323 1331 1324 1332 /* 1325 1333 * Grab the msg_q_lock upfront to avoid a possible race between
+7 -1
drivers/firmware/arm_scmi/shmem.c
··· 10 10 #include <linux/processor.h> 11 11 #include <linux/types.h> 12 12 13 - #include <asm-generic/bug.h> 13 + #include <linux/bug.h> 14 14 15 15 #include "common.h" 16 16 ··· 121 121 return ioread32(&shmem->channel_status) & 122 122 (SCMI_SHMEM_CHAN_STAT_CHANNEL_ERROR | 123 123 SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE); 124 + } 125 + 126 + bool shmem_channel_free(struct scmi_shared_mem __iomem *shmem) 127 + { 128 + return (ioread32(&shmem->channel_status) & 129 + SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE); 124 130 }
+1 -1
drivers/firmware/sysfb.c
··· 128 128 } 129 129 130 130 /* must execute after PCI subsystem for EFI quirks */ 131 - subsys_initcall_sync(sysfb_init); 131 + device_initcall(sysfb_init);
+28 -4
drivers/gpio/gpio-eic-sprd.c
··· 330 330 switch (flow_type) { 331 331 case IRQ_TYPE_LEVEL_HIGH: 332 332 sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IEV, 1); 333 + sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IC, 1); 333 334 break; 334 335 case IRQ_TYPE_LEVEL_LOW: 335 336 sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IEV, 0); 337 + sprd_eic_update(chip, offset, SPRD_EIC_DBNC_IC, 1); 336 338 break; 337 339 case IRQ_TYPE_EDGE_RISING: 338 340 case IRQ_TYPE_EDGE_FALLING: 339 341 case IRQ_TYPE_EDGE_BOTH: 340 342 state = sprd_eic_get(chip, offset); 341 - if (state) 343 + if (state) { 342 344 sprd_eic_update(chip, offset, 343 345 SPRD_EIC_DBNC_IEV, 0); 344 - else 346 + sprd_eic_update(chip, offset, 347 + SPRD_EIC_DBNC_IC, 1); 348 + } else { 345 349 sprd_eic_update(chip, offset, 346 350 SPRD_EIC_DBNC_IEV, 1); 351 + sprd_eic_update(chip, offset, 352 + SPRD_EIC_DBNC_IC, 1); 353 + } 347 354 break; 348 355 default: 349 356 return -ENOTSUPP; ··· 362 355 switch (flow_type) { 363 356 case IRQ_TYPE_LEVEL_HIGH: 364 357 sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTPOL, 0); 358 + sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTCLR, 1); 365 359 break; 366 360 case IRQ_TYPE_LEVEL_LOW: 367 361 sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTPOL, 1); 362 + sprd_eic_update(chip, offset, SPRD_EIC_LATCH_INTCLR, 1); 368 363 break; 369 364 case IRQ_TYPE_EDGE_RISING: 370 365 case IRQ_TYPE_EDGE_FALLING: 371 366 case IRQ_TYPE_EDGE_BOTH: 372 367 state = sprd_eic_get(chip, offset); 373 - if (state) 368 + if (state) { 374 369 sprd_eic_update(chip, offset, 375 370 SPRD_EIC_LATCH_INTPOL, 0); 376 - else 371 + sprd_eic_update(chip, offset, 372 + SPRD_EIC_LATCH_INTCLR, 1); 373 + } else { 377 374 sprd_eic_update(chip, offset, 378 375 SPRD_EIC_LATCH_INTPOL, 1); 376 + sprd_eic_update(chip, offset, 377 + SPRD_EIC_LATCH_INTCLR, 1); 378 + } 379 379 break; 380 380 default: 381 381 return -ENOTSUPP; ··· 396 382 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0); 397 383 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0); 398 384 
sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 1); 385 + sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1); 399 386 irq_set_handler_locked(data, handle_edge_irq); 400 387 break; 401 388 case IRQ_TYPE_EDGE_FALLING: 402 389 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0); 403 390 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0); 404 391 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 0); 392 + sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1); 405 393 irq_set_handler_locked(data, handle_edge_irq); 406 394 break; 407 395 case IRQ_TYPE_EDGE_BOTH: 408 396 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 0); 409 397 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 1); 398 + sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1); 410 399 irq_set_handler_locked(data, handle_edge_irq); 411 400 break; 412 401 case IRQ_TYPE_LEVEL_HIGH: 413 402 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0); 414 403 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 1); 415 404 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 1); 405 + sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1); 416 406 irq_set_handler_locked(data, handle_level_irq); 417 407 break; 418 408 case IRQ_TYPE_LEVEL_LOW: 419 409 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTBOTH, 0); 420 410 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTMODE, 1); 421 411 sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTPOL, 0); 412 + sprd_eic_update(chip, offset, SPRD_EIC_ASYNC_INTCLR, 1); 422 413 irq_set_handler_locked(data, handle_level_irq); 423 414 break; 424 415 default: ··· 436 417 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0); 437 418 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0); 438 419 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 1); 420 + sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1); 439 421 irq_set_handler_locked(data, handle_edge_irq); 440 422 break; 441 423 case IRQ_TYPE_EDGE_FALLING: 442 424 sprd_eic_update(chip, 
offset, SPRD_EIC_SYNC_INTBOTH, 0); 443 425 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0); 444 426 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 0); 427 + sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1); 445 428 irq_set_handler_locked(data, handle_edge_irq); 446 429 break; 447 430 case IRQ_TYPE_EDGE_BOTH: 448 431 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 0); 449 432 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 1); 433 + sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1); 450 434 irq_set_handler_locked(data, handle_edge_irq); 451 435 break; 452 436 case IRQ_TYPE_LEVEL_HIGH: 453 437 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0); 454 438 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 1); 455 439 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 1); 440 + sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1); 456 441 irq_set_handler_locked(data, handle_level_irq); 457 442 break; 458 443 case IRQ_TYPE_LEVEL_LOW: 459 444 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTBOTH, 0); 460 445 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTMODE, 1); 461 446 sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTPOL, 0); 447 + sprd_eic_update(chip, offset, SPRD_EIC_SYNC_INTCLR, 1); 462 448 irq_set_handler_locked(data, handle_level_irq); 463 449 break; 464 450 default:
+14
drivers/gpio/gpiolib-acpi.c
··· 1651 1651 .ignore_interrupt = "INT33FC:00@3", 1652 1652 }, 1653 1653 }, 1654 + { 1655 + /* 1656 + * Spurious wakeups from TP_ATTN# pin 1657 + * Found in BIOS 0.35 1658 + * https://gitlab.freedesktop.org/drm/amd/-/issues/3073 1659 + */ 1660 + .matches = { 1661 + DMI_MATCH(DMI_SYS_VENDOR, "GPD"), 1662 + DMI_MATCH(DMI_PRODUCT_NAME, "G1619-04"), 1663 + }, 1664 + .driver_data = &(struct acpi_gpiolib_dmi_quirk) { 1665 + .ignore_wake = "PNP0C50:00@8", 1666 + }, 1667 + }, 1654 1668 {} /* Terminating entry */ 1655 1669 }; 1656 1670
+8
drivers/gpu/drm/amd/amdgpu/amdgpu_gart.c
··· 121 121 struct amdgpu_bo_param bp; 122 122 dma_addr_t dma_addr; 123 123 struct page *p; 124 + unsigned long x; 124 125 int ret; 125 126 126 127 if (adev->gart.bo != NULL) ··· 130 129 p = alloc_pages(gfp_flags, order); 131 130 if (!p) 132 131 return -ENOMEM; 132 + 133 + /* assign pages to this device */ 134 + for (x = 0; x < (1UL << order); x++) 135 + p[x].mapping = adev->mman.bdev.dev_mapping; 133 136 134 137 /* If the hardware does not support UTCL2 snooping of the CPU caches 135 138 * then set_memory_wc() could be used as a workaround to mark the pages ··· 228 223 unsigned int order = get_order(adev->gart.table_size); 229 224 struct sg_table *sg = adev->gart.bo->tbo.sg; 230 225 struct page *p; 226 + unsigned long x; 231 227 int ret; 232 228 233 229 ret = amdgpu_bo_reserve(adev->gart.bo, false); ··· 240 234 sg_free_table(sg); 241 235 kfree(sg); 242 236 p = virt_to_page(adev->gart.ptr); 237 + for (x = 0; x < (1UL << order); x++) 238 + p[x].mapping = NULL; 243 239 __free_pages(p, order); 244 240 245 241 adev->gart.ptr = NULL;
+16 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
··· 221 221 NULL 222 222 }; 223 223 224 + static umode_t amdgpu_vram_attrs_is_visible(struct kobject *kobj, 225 + struct attribute *attr, int i) 226 + { 227 + struct device *dev = kobj_to_dev(kobj); 228 + struct drm_device *ddev = dev_get_drvdata(dev); 229 + struct amdgpu_device *adev = drm_to_adev(ddev); 230 + 231 + if (attr == &dev_attr_mem_info_vram_vendor.attr && 232 + !adev->gmc.vram_vendor) 233 + return 0; 234 + 235 + return attr->mode; 236 + } 237 + 224 238 const struct attribute_group amdgpu_vram_mgr_attr_group = { 225 - .attrs = amdgpu_vram_mgr_attributes 239 + .attrs = amdgpu_vram_mgr_attributes, 240 + .is_visible = amdgpu_vram_attrs_is_visible 226 241 }; 227 242 228 243 /**
+1 -1
drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
··· 6589 6589 #ifdef __BIG_ENDIAN 6590 6590 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, ENDIAN_SWAP, 1); 6591 6591 #endif 6592 - tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0); 6592 + tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1); 6593 6593 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 6594 6594 prop->allow_tunneling); 6595 6595 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+1 -1
drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
··· 3846 3846 (order_base_2(prop->queue_size / 4) - 1)); 3847 3847 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, RPTR_BLOCK_SIZE, 3848 3848 (order_base_2(AMDGPU_GPU_PAGE_SIZE / 4) - 1)); 3849 - tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 0); 3849 + tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, UNORD_DISPATCH, 1); 3850 3850 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, TUNNEL_DISPATCH, 3851 3851 prop->allow_tunneling); 3852 3852 tmp = REG_SET_FIELD(tmp, CP_HQD_PQ_CONTROL, PRIV_STATE, 1);
+2 -1
drivers/gpu/drm/amd/amdgpu/gmc_v9_0.c
··· 1950 1950 static const u32 regBIF_BIOS_SCRATCH_4 = 0x50; 1951 1951 u32 vram_info; 1952 1952 1953 - if (!amdgpu_sriov_vf(adev)) { 1953 + /* Only for dGPU, vendor information is reliable */ 1954 + if (!amdgpu_sriov_vf(adev) && !(adev->flags & AMD_IS_APU)) { 1954 1955 vram_info = RREG32(regBIF_BIOS_SCRATCH_4); 1955 1956 adev->gmc.vram_vendor = vram_info & 0xF; 1956 1957 }
+1
drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
··· 170 170 m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT; 171 171 m->cp_hqd_pq_control |= 172 172 ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1; 173 + m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK; 173 174 pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control); 174 175 175 176 m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+1
drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
··· 224 224 m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT; 225 225 m->cp_hqd_pq_control |= 226 226 ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1; 227 + m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK; 227 228 pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control); 228 229 229 230 m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+10 -11
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 272 272 { 273 273 u32 v_blank_start, v_blank_end, h_position, v_position; 274 274 struct amdgpu_crtc *acrtc = NULL; 275 + struct dc *dc = adev->dm.dc; 275 276 276 277 if ((crtc < 0) || (crtc >= adev->mode_info.num_crtc)) 277 278 return -EINVAL; ··· 284 283 crtc); 285 284 return 0; 286 285 } 286 + 287 + if (dc && dc->caps.ips_support && dc->idle_optimizations_allowed) 288 + dc_allow_idle_optimizations(dc, false); 287 289 288 290 /* 289 291 * TODO rework base driver to use values directly. ··· 1719 1715 init_data.nbio_reg_offsets = adev->reg_offset[NBIO_HWIP][0]; 1720 1716 init_data.clk_reg_offsets = adev->reg_offset[CLK_HWIP][0]; 1721 1717 1722 - init_data.flags.disable_ips = DMUB_IPS_DISABLE_ALL; 1718 + if (amdgpu_dc_debug_mask & DC_DISABLE_IPS) 1719 + init_data.flags.disable_ips = DMUB_IPS_DISABLE_ALL; 1720 + 1721 + init_data.flags.disable_ips_in_vpb = 1; 1723 1722 1724 1723 /* Enable DWB for tested platforms only */ 1725 1724 if (amdgpu_ip_version(adev, DCE_HWIP, 0) >= IP_VERSION(3, 0, 0)) ··· 8983 8976 8984 8977 trace_amdgpu_dm_atomic_commit_tail_begin(state); 8985 8978 8986 - if (dm->dc->caps.ips_support) { 8987 - for_each_oldnew_connector_in_state(state, connector, old_con_state, new_con_state, i) { 8988 - if (new_con_state->crtc && 8989 - new_con_state->crtc->state->active && 8990 - drm_atomic_crtc_needs_modeset(new_con_state->crtc->state)) { 8991 - dc_dmub_srv_apply_idle_power_optimizations(dm->dc, false); 8992 - break; 8993 - } 8994 - } 8995 - } 8979 + if (dm->dc->caps.ips_support && dm->dc->idle_optimizations_allowed) 8980 + dc_allow_idle_optimizations(dm->dc, false); 8996 8981 8997 8982 drm_atomic_helper_update_legacy_modeset_state(dev, state); 8998 8983 drm_dp_mst_atomic_wait_for_dependencies(state);
+4 -1
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
··· 711 711 { 712 712 bool st; 713 713 enum dc_irq_source irq_source; 714 - 714 + struct dc *dc = adev->dm.dc; 715 715 struct amdgpu_crtc *acrtc = adev->mode_info.crtcs[crtc_id]; 716 716 717 717 if (!acrtc) { ··· 728 728 irq_source = dal_irq_type + acrtc->otg_inst; 729 729 730 730 st = (state == AMDGPU_IRQ_STATE_ENABLE); 731 + 732 + if (dc && dc->caps.ips_support && dc->idle_optimizations_allowed) 733 + dc_allow_idle_optimizations(dc, false); 731 734 732 735 dc_interrupt_set(adev->dm.dc, irq_source, st); 733 736 return 0;
+1
drivers/gpu/drm/amd/display/dc/dc.h
··· 434 434 bool EnableMinDispClkODM; 435 435 bool enable_auto_dpm_test_logs; 436 436 unsigned int disable_ips; 437 + unsigned int disable_ips_in_vpb; 437 438 }; 438 439 439 440 enum visual_confirm {
+5
drivers/gpu/drm/amd/display/dc/dc_types.h
··· 1034 1034 Replay_Msg_Not_Support = -1, 1035 1035 Replay_Set_Timing_Sync_Supported, 1036 1036 Replay_Set_Residency_Frameupdate_Timer, 1037 + Replay_Set_Pseudo_VTotal, 1037 1038 }; 1038 1039 1039 1040 union replay_error_status { ··· 1090 1089 uint16_t coasting_vtotal_table[PR_COASTING_TYPE_NUM]; 1091 1090 /* Maximum link off frame count */ 1092 1091 enum replay_link_off_frame_count_level link_off_frame_count_level; 1092 + /* Replay pseudo vtotal for abm + ips on full screen video which can improve ips residency */ 1093 + uint16_t abm_with_ips_on_full_screen_video_pseudo_vtotal; 1094 + /* Replay last pseudo vtotal set to DMUB */ 1095 + uint16_t last_pseudo_vtotal; 1093 1096 }; 1094 1097 1095 1098 /* To split out "global" and "per-panel" config settings.
+8 -1
drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
··· 680 680 bool dcn35_apply_idle_power_optimizations(struct dc *dc, bool enable) 681 681 { 682 682 struct dc_link *edp_links[MAX_NUM_EDP]; 683 - int edp_num; 683 + int i, edp_num; 684 684 if (dc->debug.dmcub_emulation) 685 685 return true; 686 686 ··· 688 688 dc_get_edp_links(dc, edp_links, &edp_num); 689 689 if (edp_num == 0 || edp_num > 1) 690 690 return false; 691 + 692 + for (i = 0; i < dc->current_state->stream_count; ++i) { 693 + struct dc_stream_state *stream = dc->current_state->streams[i]; 694 + 695 + if (!stream->dpms_off && !dc_is_embedded_signal(stream->signal)) 696 + return false; 697 + } 691 698 } 692 699 693 700 // TODO: review other cases when idle optimization is allowed
+47
drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
··· 2832 2832 #define REPLAY_RESIDENCY_MODE_MASK (0x1 << REPLAY_RESIDENCY_MODE_SHIFT) 2833 2833 # define REPLAY_RESIDENCY_MODE_PHY (0x0 << REPLAY_RESIDENCY_MODE_SHIFT) 2834 2834 # define REPLAY_RESIDENCY_MODE_ALPM (0x1 << REPLAY_RESIDENCY_MODE_SHIFT) 2835 + # define REPLAY_RESIDENCY_MODE_IPS 0x10 2835 2836 2836 2837 #define REPLAY_RESIDENCY_ENABLE_MASK (0x1 << REPLAY_RESIDENCY_ENABLE_SHIFT) 2837 2838 # define REPLAY_RESIDENCY_DISABLE (0x0 << REPLAY_RESIDENCY_ENABLE_SHIFT) ··· 2895 2894 * Set Residency Frameupdate Timer. 2896 2895 */ 2897 2896 DMUB_CMD__REPLAY_SET_RESIDENCY_FRAMEUPDATE_TIMER = 6, 2897 + /** 2898 + * Set pseudo vtotal 2899 + */ 2900 + DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL = 7, 2898 2901 }; 2899 2902 2900 2903 /** ··· 3082 3077 }; 3083 3078 3084 3079 /** 3080 + * Data passed from driver to FW in a DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command. 3081 + */ 3082 + struct dmub_cmd_replay_set_pseudo_vtotal { 3083 + /** 3084 + * Panel Instance. 3085 + * Panel isntance to identify which replay_state to use 3086 + * Currently the support is only for 0 or 1 3087 + */ 3088 + uint8_t panel_inst; 3089 + /** 3090 + * Source Vtotal that Replay + IPS + ABM full screen video src vtotal 3091 + */ 3092 + uint16_t vtotal; 3093 + /** 3094 + * Explicit padding to 4 byte boundary. 3095 + */ 3096 + uint8_t pad; 3097 + }; 3098 + 3099 + /** 3085 3100 * Definition of a DMUB_CMD__SET_REPLAY_POWER_OPT command. 3086 3101 */ 3087 3102 struct dmub_rb_cmd_replay_set_power_opt { ··· 3182 3157 }; 3183 3158 3184 3159 /** 3160 + * Definition of a DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command. 3161 + */ 3162 + struct dmub_rb_cmd_replay_set_pseudo_vtotal { 3163 + /** 3164 + * Command header. 3165 + */ 3166 + struct dmub_cmd_header header; 3167 + /** 3168 + * Definition of DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command. 
3169 + */ 3170 + struct dmub_cmd_replay_set_pseudo_vtotal data; 3171 + }; 3172 + 3173 + /** 3185 3174 * Data passed from driver to FW in DMUB_CMD__REPLAY_SET_RESIDENCY_FRAMEUPDATE_TIMER command. 3186 3175 */ 3187 3176 struct dmub_cmd_replay_frameupdate_timer_data { ··· 3246 3207 * Definition of DMUB_CMD__REPLAY_SET_RESIDENCY_FRAMEUPDATE_TIMER command data. 3247 3208 */ 3248 3209 struct dmub_cmd_replay_frameupdate_timer_data timer_data; 3210 + /** 3211 + * Definition of DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command data. 3212 + */ 3213 + struct dmub_cmd_replay_set_pseudo_vtotal pseudo_vtotal_data; 3249 3214 }; 3250 3215 3251 3216 /** ··· 4401 4358 * Definition of a DMUB_CMD__REPLAY_SET_RESIDENCY_FRAMEUPDATE_TIMER command. 4402 4359 */ 4403 4360 struct dmub_rb_cmd_replay_set_frameupdate_timer replay_set_frameupdate_timer; 4361 + /** 4362 + * Definition of a DMUB_CMD__REPLAY_SET_PSEUDO_VTOTAL command. 4363 + */ 4364 + struct dmub_rb_cmd_replay_set_pseudo_vtotal replay_set_pseudo_vtotal; 4404 4365 }; 4405 4366 4406 4367 /**
+5
drivers/gpu/drm/amd/display/modules/power/power_helpers.c
··· 980 980 link->replay_settings.coasting_vtotal_table[type] = vtotal; 981 981 } 982 982 983 + void set_replay_ips_full_screen_video_src_vtotal(struct dc_link *link, uint16_t vtotal) 984 + { 985 + link->replay_settings.abm_with_ips_on_full_screen_video_pseudo_vtotal = vtotal; 986 + } 987 + 983 988 void calculate_replay_link_off_frame_count(struct dc_link *link, 984 989 uint16_t vtotal, uint16_t htotal) 985 990 {
+1
drivers/gpu/drm/amd/display/modules/power/power_helpers.h
··· 57 57 void set_replay_coasting_vtotal(struct dc_link *link, 58 58 enum replay_coasting_vtotal_type type, 59 59 uint16_t vtotal); 60 + void set_replay_ips_full_screen_video_src_vtotal(struct dc_link *link, uint16_t vtotal); 60 61 void calculate_replay_link_off_frame_count(struct dc_link *link, 61 62 uint16_t vtotal, uint16_t htotal); 62 63
+1
drivers/gpu/drm/amd/include/amd_shared.h
··· 258 258 DC_ENABLE_DML2 = 0x100, 259 259 DC_DISABLE_PSR_SU = 0x200, 260 260 DC_DISABLE_REPLAY = 0x400, 261 + DC_DISABLE_IPS = 0x800, 261 262 }; 262 263 263 264 enum amd_dpm_forced_level;
+1 -1
drivers/gpu/drm/amd/include/amdgpu_reg_state.h
··· 138 138 } 139 139 140 140 #define amdgpu_asic_get_reg_state_supported(adev) \ 141 - ((adev)->asic_funcs->get_reg_state ? 1 : 0) 141 + (((adev)->asic_funcs && (adev)->asic_funcs->get_reg_state) ? 1 : 0) 142 142 143 143 #define amdgpu_asic_get_reg_state(adev, state, buf, size) \ 144 144 ((adev)->asic_funcs->get_reg_state ? \
+4 -10
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 24 24 25 25 #include <linux/firmware.h> 26 26 #include <linux/pci.h> 27 + #include <linux/power_supply.h> 27 28 #include <linux/reboot.h> 28 29 29 30 #include "amdgpu.h" ··· 818 817 * handle the switch automatically. Driver involvement 819 818 * is unnecessary. 820 819 */ 821 - if (!smu->dc_controlled_by_gpio) { 822 - ret = smu_set_power_source(smu, 823 - adev->pm.ac_power ? SMU_POWER_SOURCE_AC : 824 - SMU_POWER_SOURCE_DC); 825 - if (ret) { 826 - dev_err(adev->dev, "Failed to switch to %s mode!\n", 827 - adev->pm.ac_power ? "AC" : "DC"); 828 - return ret; 829 - } 830 - } 820 + adev->pm.ac_power = power_supply_is_system_supplied() > 0; 821 + smu_set_ac_dc(smu); 831 822 832 823 if ((amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 1)) || 833 824 (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 3))) ··· 2703 2710 case SMU_PPT_LIMIT_CURRENT: 2704 2711 switch (amdgpu_ip_version(adev, MP1_HWIP, 0)) { 2705 2712 case IP_VERSION(13, 0, 2): 2713 + case IP_VERSION(13, 0, 6): 2706 2714 case IP_VERSION(11, 0, 7): 2707 2715 case IP_VERSION(11, 0, 11): 2708 2716 case IP_VERSION(11, 0, 12):
+2
drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
··· 1442 1442 case 0x3: 1443 1443 dev_dbg(adev->dev, "Switched to AC mode!\n"); 1444 1444 schedule_work(&smu->interrupt_work); 1445 + adev->pm.ac_power = true; 1445 1446 break; 1446 1447 case 0x4: 1447 1448 dev_dbg(adev->dev, "Switched to DC mode!\n"); 1448 1449 schedule_work(&smu->interrupt_work); 1450 + adev->pm.ac_power = false; 1449 1451 break; 1450 1452 case 0x7: 1451 1453 /*
+2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
··· 1379 1379 case 0x3: 1380 1380 dev_dbg(adev->dev, "Switched to AC mode!\n"); 1381 1381 smu_v13_0_ack_ac_dc_interrupt(smu); 1382 + adev->pm.ac_power = true; 1382 1383 break; 1383 1384 case 0x4: 1384 1385 dev_dbg(adev->dev, "Switched to DC mode!\n"); 1385 1386 smu_v13_0_ack_ac_dc_interrupt(smu); 1387 + adev->pm.ac_power = false; 1386 1388 break; 1387 1389 case 0x7: 1388 1390 /*
+52 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
··· 2357 2357 PPTable_t *pptable = table_context->driver_pptable; 2358 2358 SkuTable_t *skutable = &pptable->SkuTable; 2359 2359 uint32_t power_limit, od_percent_upper, od_percent_lower; 2360 + uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC]; 2360 2361 2361 2362 if (smu_v13_0_get_current_power_limit(smu, &power_limit)) 2362 2363 power_limit = smu->adev->pm.ac_power ? ··· 2381 2380 od_percent_upper, od_percent_lower, power_limit); 2382 2381 2383 2382 if (max_power_limit) { 2384 - *max_power_limit = power_limit * (100 + od_percent_upper); 2383 + *max_power_limit = msg_limit * (100 + od_percent_upper); 2385 2384 *max_power_limit /= 100; 2386 2385 } 2387 2386 ··· 2960 2959 } 2961 2960 } 2962 2961 2962 + static int smu_v13_0_0_set_power_limit(struct smu_context *smu, 2963 + enum smu_ppt_limit_type limit_type, 2964 + uint32_t limit) 2965 + { 2966 + PPTable_t *pptable = smu->smu_table.driver_pptable; 2967 + SkuTable_t *skutable = &pptable->SkuTable; 2968 + uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC]; 2969 + struct smu_table_context *table_context = &smu->smu_table; 2970 + OverDriveTableExternal_t *od_table = 2971 + (OverDriveTableExternal_t *)table_context->overdrive_table; 2972 + int ret = 0; 2973 + 2974 + if (limit_type != SMU_DEFAULT_PPT_LIMIT) 2975 + return -EINVAL; 2976 + 2977 + if (limit <= msg_limit) { 2978 + if (smu->current_power_limit > msg_limit) { 2979 + od_table->OverDriveTable.Ppt = 0; 2980 + od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_PPT_BIT; 2981 + 2982 + ret = smu_v13_0_0_upload_overdrive_table(smu, od_table); 2983 + if (ret) { 2984 + dev_err(smu->adev->dev, "Failed to upload overdrive table!\n"); 2985 + return ret; 2986 + } 2987 + } 2988 + return smu_v13_0_set_power_limit(smu, limit_type, limit); 2989 + } else if (smu->od_enabled) { 2990 + ret = smu_v13_0_set_power_limit(smu, limit_type, msg_limit); 2991 + if (ret) 2992 + return ret; 2993 + 2994 + od_table->OverDriveTable.Ppt = (limit * 100) / msg_limit - 100; 2995 + od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_PPT_BIT; 2996 + 2997 + ret = smu_v13_0_0_upload_overdrive_table(smu, od_table); 2998 + if (ret) { 2999 + dev_err(smu->adev->dev, "Failed to upload overdrive table!\n"); 3000 + return ret; 3001 + } 3002 + 3003 + smu->current_power_limit = limit; 3004 + } else { 3005 + return -EINVAL; 3006 + } 3007 + 3008 + return 0; 3009 + } 3010 + 2963 3011 static const struct pptable_funcs smu_v13_0_0_ppt_funcs = { 2964 3012 .get_allowed_feature_mask = smu_v13_0_0_get_allowed_feature_mask, 2965 3013 .set_default_dpm_table = smu_v13_0_0_set_default_dpm_table, ··· 3063 3013 .set_fan_control_mode = smu_v13_0_set_fan_control_mode, 3064 3014 .enable_mgpu_fan_boost = smu_v13_0_0_enable_mgpu_fan_boost, 3065 3015 .get_power_limit = smu_v13_0_0_get_power_limit, 3066 - .set_power_limit = smu_v13_0_set_power_limit, 3016 + .set_power_limit = smu_v13_0_0_set_power_limit, 3067 3017 .set_power_source = smu_v13_0_set_power_source, 3068 3018 .get_power_profile_mode = smu_v13_0_0_get_power_profile_mode, 3069 3019 .set_power_profile_mode = smu_v13_0_0_set_power_profile_mode,
+2 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_6_ppt.c
··· 160 160 MSG_MAP(GfxDriverResetRecovery, PPSMC_MSG_GfxDriverResetRecovery, 0), 161 161 MSG_MAP(GetMinGfxclkFrequency, PPSMC_MSG_GetMinGfxDpmFreq, 1), 162 162 MSG_MAP(GetMaxGfxclkFrequency, PPSMC_MSG_GetMaxGfxDpmFreq, 1), 163 - MSG_MAP(SetSoftMinGfxclk, PPSMC_MSG_SetSoftMinGfxClk, 0), 164 - MSG_MAP(SetSoftMaxGfxClk, PPSMC_MSG_SetSoftMaxGfxClk, 0), 163 + MSG_MAP(SetSoftMinGfxclk, PPSMC_MSG_SetSoftMinGfxClk, 1), 164 + MSG_MAP(SetSoftMaxGfxClk, PPSMC_MSG_SetSoftMaxGfxClk, 1), 165 165 MSG_MAP(PrepareMp1ForUnload, PPSMC_MSG_PrepareForDriverUnload, 0), 166 166 MSG_MAP(GetCTFLimit, PPSMC_MSG_GetCTFLimit, 0), 167 167 MSG_MAP(GetThermalLimit, PPSMC_MSG_ReadThrottlerLimit, 0),
+52 -2
drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
··· 2321 2321 PPTable_t *pptable = table_context->driver_pptable; 2322 2322 SkuTable_t *skutable = &pptable->SkuTable; 2323 2323 uint32_t power_limit, od_percent_upper, od_percent_lower; 2324 + uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC]; 2324 2325 2325 2326 if (smu_v13_0_get_current_power_limit(smu, &power_limit)) 2326 2327 power_limit = smu->adev->pm.ac_power ? ··· 2345 2344 od_percent_upper, od_percent_lower, power_limit); 2346 2345 2347 2346 if (max_power_limit) { 2348 - *max_power_limit = power_limit * (100 + od_percent_upper); 2347 + *max_power_limit = msg_limit * (100 + od_percent_upper); 2349 2348 *max_power_limit /= 100; 2350 2349 } 2351 2350 ··· 2546 2545 return smu->smc_fw_version > 0x00524600; 2547 2546 } 2548 2547 2548 + static int smu_v13_0_7_set_power_limit(struct smu_context *smu, 2549 + enum smu_ppt_limit_type limit_type, 2550 + uint32_t limit) 2551 + { 2552 + PPTable_t *pptable = smu->smu_table.driver_pptable; 2553 + SkuTable_t *skutable = &pptable->SkuTable; 2554 + uint32_t msg_limit = skutable->MsgLimits.Power[PPT_THROTTLER_PPT0][POWER_SOURCE_AC]; 2555 + struct smu_table_context *table_context = &smu->smu_table; 2556 + OverDriveTableExternal_t *od_table = 2557 + (OverDriveTableExternal_t *)table_context->overdrive_table; 2558 + int ret = 0; 2559 + 2560 + if (limit_type != SMU_DEFAULT_PPT_LIMIT) 2561 + return -EINVAL; 2562 + 2563 + if (limit <= msg_limit) { 2564 + if (smu->current_power_limit > msg_limit) { 2565 + od_table->OverDriveTable.Ppt = 0; 2566 + od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_PPT_BIT; 2567 + 2568 + ret = smu_v13_0_7_upload_overdrive_table(smu, od_table); 2569 + if (ret) { 2570 + dev_err(smu->adev->dev, "Failed to upload overdrive table!\n"); 2571 + return ret; 2572 + } 2573 + } 2574 + return smu_v13_0_set_power_limit(smu, limit_type, limit); 2575 + } else if (smu->od_enabled) { 2576 + ret = smu_v13_0_set_power_limit(smu, limit_type, msg_limit); 2577 + if (ret) 2578 + return ret; 2579 + 2580 + od_table->OverDriveTable.Ppt = (limit * 100) / msg_limit - 100; 2581 + od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_PPT_BIT; 2582 + 2583 + ret = smu_v13_0_7_upload_overdrive_table(smu, od_table); 2584 + if (ret) { 2585 + dev_err(smu->adev->dev, "Failed to upload overdrive table!\n"); 2586 + return ret; 2587 + } 2588 + 2589 + smu->current_power_limit = limit; 2590 + } else { 2591 + return -EINVAL; 2592 + } 2593 + 2594 + return 0; 2595 + } 2596 + 2549 2597 static const struct pptable_funcs smu_v13_0_7_ppt_funcs = { 2550 2598 .get_allowed_feature_mask = smu_v13_0_7_get_allowed_feature_mask, 2551 2599 .set_default_dpm_table = smu_v13_0_7_set_default_dpm_table, ··· 2646 2596 .set_fan_control_mode = smu_v13_0_set_fan_control_mode, 2647 2597 .enable_mgpu_fan_boost = smu_v13_0_7_enable_mgpu_fan_boost, 2648 2598 .get_power_limit = smu_v13_0_7_get_power_limit, 2649 - .set_power_limit = smu_v13_0_set_power_limit, 2599 + .set_power_limit = smu_v13_0_7_set_power_limit, 2650 2600 .set_power_source = smu_v13_0_set_power_source, 2651 2601 .get_power_profile_mode = smu_v13_0_7_get_power_profile_mode, 2652 2602 .set_power_profile_mode = smu_v13_0_7_set_power_profile_mode,
+6 -1
drivers/gpu/drm/bridge/analogix/anx7625.c
··· 1762 1762 u8 request = msg->request & ~DP_AUX_I2C_MOT; 1763 1763 int ret = 0; 1764 1764 1765 + mutex_lock(&ctx->aux_lock); 1765 1766 pm_runtime_get_sync(dev); 1766 1767 msg->reply = 0; 1767 1768 switch (request) { ··· 1779 1778 msg->size, msg->buffer); 1780 1779 pm_runtime_mark_last_busy(dev); 1781 1780 pm_runtime_put_autosuspend(dev); 1781 + mutex_unlock(&ctx->aux_lock); 1782 1782 1783 1783 return ret; 1784 1784 } ··· 2476 2474 ctx->connector = NULL; 2477 2475 anx7625_dp_stop(ctx); 2478 2476 2479 - pm_runtime_put_sync(dev); 2477 + mutex_lock(&ctx->aux_lock); 2478 + pm_runtime_put_sync_suspend(dev); 2479 + mutex_unlock(&ctx->aux_lock); 2480 2480 } 2481 2481 2482 2482 static enum drm_connector_status ··· 2672 2668 2673 2669 mutex_init(&platform->lock); 2674 2670 mutex_init(&platform->hdcp_wq_lock); 2671 + mutex_init(&platform->aux_lock); 2675 2672 2676 2673 INIT_DELAYED_WORK(&platform->hdcp_work, hdcp_check_work_func); 2677 2674 platform->hdcp_workqueue = create_workqueue("hdcp workqueue");
+2
drivers/gpu/drm/bridge/analogix/anx7625.h
··· 475 475 struct workqueue_struct *hdcp_workqueue; 476 476 /* Lock for hdcp work queue */ 477 477 struct mutex hdcp_wq_lock; 478 + /* Lock for aux transfer and disable */ 479 + struct mutex aux_lock; 478 480 char edid_block; 479 481 struct display_timing dt; 480 482 u8 display_timing_valid;
+23
drivers/gpu/drm/bridge/parade-ps8640.c
··· 107 107 struct device_link *link; 108 108 bool pre_enabled; 109 109 bool need_post_hpd_delay; 110 + struct mutex aux_lock; 110 111 }; 111 112 112 113 static const struct regmap_config ps8640_regmap_config[] = { ··· 346 345 struct device *dev = &ps_bridge->page[PAGE0_DP_CNTL]->dev; 347 346 int ret; 348 347 348 + mutex_lock(&ps_bridge->aux_lock); 349 349 pm_runtime_get_sync(dev); 350 + ret = _ps8640_wait_hpd_asserted(ps_bridge, 200 * 1000); 351 + if (ret) { 352 + pm_runtime_put_sync_suspend(dev); 353 + goto exit; 354 + } 350 355 ret = ps8640_aux_transfer_msg(aux, msg); 351 356 pm_runtime_mark_last_busy(dev); 352 357 pm_runtime_put_autosuspend(dev); 358 + 359 + exit: 360 + mutex_unlock(&ps_bridge->aux_lock); 353 361 354 362 return ret; 355 363 } ··· 480 470 ps_bridge->pre_enabled = false; 481 471 482 472 ps8640_bridge_vdo_control(ps_bridge, DISABLE); 473 + 474 + /* 475 + * The bridge seems to expect everything to be power cycled at the 476 + * disable process, so grab a lock here to make sure 477 + * ps8640_aux_transfer() is not holding a runtime PM reference and 478 + * preventing the bridge from suspend. 479 + */ 480 + mutex_lock(&ps_bridge->aux_lock); 481 + 483 482 pm_runtime_put_sync_suspend(&ps_bridge->page[PAGE0_DP_CNTL]->dev); 483 + 484 + mutex_unlock(&ps_bridge->aux_lock); 484 485 } 485 486 486 487 static int ps8640_bridge_attach(struct drm_bridge *bridge, ··· 639 618 ps_bridge = devm_kzalloc(dev, sizeof(*ps_bridge), GFP_KERNEL); 640 619 if (!ps_bridge) 641 620 return -ENOMEM; 621 + 622 + mutex_init(&ps_bridge->aux_lock); 642 623 643 624 ps_bridge->supplies[0].supply = "vdd12"; 644 625 ps_bridge->supplies[1].supply = "vdd33";
+2 -30
drivers/gpu/drm/bridge/samsung-dsim.c
··· 969 969 reg = samsung_dsim_read(dsi, DSIM_ESCMODE_REG); 970 970 reg &= ~DSIM_STOP_STATE_CNT_MASK; 971 971 reg |= DSIM_STOP_STATE_CNT(driver_data->reg_values[STOP_STATE_CNT]); 972 - 973 - if (!samsung_dsim_hw_is_exynos(dsi->plat_data->hw_type)) 974 - reg |= DSIM_FORCE_STOP_STATE; 975 - 976 972 samsung_dsim_write(dsi, DSIM_ESCMODE_REG, reg); 977 973 978 974 reg = DSIM_BTA_TIMEOUT(0xff) | DSIM_LPDR_TIMEOUT(0xffff); ··· 1427 1431 disable_irq(dsi->irq); 1428 1432 } 1429 1433 1430 - static void samsung_dsim_set_stop_state(struct samsung_dsim *dsi, bool enable) 1431 - { 1432 - u32 reg = samsung_dsim_read(dsi, DSIM_ESCMODE_REG); 1433 - 1434 - if (enable) 1435 - reg |= DSIM_FORCE_STOP_STATE; 1436 - else 1437 - reg &= ~DSIM_FORCE_STOP_STATE; 1438 - 1439 - samsung_dsim_write(dsi, DSIM_ESCMODE_REG, reg); 1440 - } 1441 - 1442 1434 static int samsung_dsim_init(struct samsung_dsim *dsi) 1443 1435 { 1444 1436 const struct samsung_dsim_driver_data *driver_data = dsi->driver_data; ··· 1476 1492 ret = samsung_dsim_init(dsi); 1477 1493 if (ret) 1478 1494 return; 1479 - 1480 - samsung_dsim_set_display_mode(dsi); 1481 - samsung_dsim_set_display_enable(dsi, true); 1482 1495 } 1483 1496 } 1484 1497 ··· 1484 1503 { 1485 1504 struct samsung_dsim *dsi = bridge_to_dsi(bridge); 1486 1505 1487 - if (samsung_dsim_hw_is_exynos(dsi->plat_data->hw_type)) { 1488 - samsung_dsim_set_display_mode(dsi); 1489 - samsung_dsim_set_display_enable(dsi, true); 1490 - } else { 1491 - samsung_dsim_set_stop_state(dsi, false); 1492 - } 1506 + samsung_dsim_set_display_mode(dsi); 1507 + samsung_dsim_set_display_enable(dsi, true); 1493 1508 1494 1509 dsi->state |= DSIM_STATE_VIDOUT_AVAILABLE; 1495 1510 } ··· 1497 1520 1498 1521 if (!(dsi->state & DSIM_STATE_ENABLED)) 1499 1522 return; 1500 - 1501 - if (!samsung_dsim_hw_is_exynos(dsi->plat_data->hw_type)) 1502 - samsung_dsim_set_stop_state(dsi, true); 1503 1523 1504 1524 dsi->state &= ~DSIM_STATE_VIDOUT_AVAILABLE; 1505 1525 } ··· 1801 1827 ret = samsung_dsim_init(dsi); 1802 1828 if (ret) 1803 1829 return ret; 1804 - 1805 - samsung_dsim_set_stop_state(dsi, false); 1806 1830 1807 1831 ret = mipi_dsi_create_packet(&xfer.packet, msg); 1808 1832 if (ret < 0)
+29 -13
drivers/gpu/drm/bridge/sii902x.c
··· 1080 1080 return ret; 1081 1081 } 1082 1082 1083 + ret = sii902x_audio_codec_init(sii902x, dev); 1084 + if (ret) 1085 + return ret; 1086 + 1087 + i2c_set_clientdata(sii902x->i2c, sii902x); 1088 + 1089 + sii902x->i2cmux = i2c_mux_alloc(sii902x->i2c->adapter, dev, 1090 + 1, 0, I2C_MUX_GATE, 1091 + sii902x_i2c_bypass_select, 1092 + sii902x_i2c_bypass_deselect); 1093 + if (!sii902x->i2cmux) { 1094 + ret = -ENOMEM; 1095 + goto err_unreg_audio; 1096 + } 1097 + 1098 + sii902x->i2cmux->priv = sii902x; 1099 + ret = i2c_mux_add_adapter(sii902x->i2cmux, 0, 0, 0); 1100 + if (ret) 1101 + goto err_unreg_audio; 1102 + 1083 1103 sii902x->bridge.funcs = &sii902x_bridge_funcs; 1084 1104 sii902x->bridge.of_node = dev->of_node; 1085 1105 sii902x->bridge.timings = &default_sii902x_timings; ··· 1110 1090 1111 1091 drm_bridge_add(&sii902x->bridge); 1112 1092 1113 - sii902x_audio_codec_init(sii902x, dev); 1093 + return 0; 1114 1094 1115 - i2c_set_clientdata(sii902x->i2c, sii902x); 1095 + err_unreg_audio: 1096 + if (!PTR_ERR_OR_ZERO(sii902x->audio.pdev)) 1097 + platform_device_unregister(sii902x->audio.pdev); 1116 1098 1117 - sii902x->i2cmux = i2c_mux_alloc(sii902x->i2c->adapter, dev, 1118 - 1, 0, I2C_MUX_GATE, 1119 - sii902x_i2c_bypass_select, 1120 - sii902x_i2c_bypass_deselect); 1121 - if (!sii902x->i2cmux) 1122 - return -ENOMEM; 1123 - 1124 - sii902x->i2cmux->priv = sii902x; 1125 - return i2c_mux_add_adapter(sii902x->i2cmux, 0, 0, 0); 1099 + return ret; 1126 1100 } 1127 1101 1128 1102 static int sii902x_probe(struct i2c_client *client) ··· 1184 1170 } 1185 1171 1186 1172 static void sii902x_remove(struct i2c_client *client) 1187 - 1188 1173 { 1189 1174 struct sii902x *sii902x = i2c_get_clientdata(client); 1190 1175 1191 - i2c_mux_del_adapters(sii902x->i2cmux); 1192 1176 drm_bridge_remove(&sii902x->bridge); 1193 1177 i2c_mux_del_adapters(sii902x->i2cmux); 1178 + 1179 + if (!PTR_ERR_OR_ZERO(sii902x->audio.pdev)) 1180 + platform_device_unregister(sii902x->audio.pdev); 1193 1181 } 1194 1182 1195 1183 static const struct of_device_id sii902x_dt_ids[] = {
+2
drivers/gpu/drm/display/drm_dp_mst_topology.c
··· 5491 5491 * - 0 if the new state is valid 5492 5492 * - %-ENOSPC, if the new state is invalid, because of BW limitation 5493 5493 * @failing_port is set to: 5494 + * 5494 5495 * - The non-root port where a BW limit check failed 5495 5496 * with all the ports downstream of @failing_port passing 5496 5497 * the BW limit check. ··· 5500 5499 * - %NULL if the BW limit check failed at the root port 5501 5500 * with all the ports downstream of the root port passing 5502 5501 * the BW limit check. 5502 + * 5503 5503 * - %-EINVAL, if the new state is invalid, because the root port has 5504 5504 * too many payloads. 5505 5505 */
+2 -2
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
··· 319 319 static void decon_win_set_pixfmt(struct decon_context *ctx, unsigned int win, 320 320 struct drm_framebuffer *fb) 321 321 { 322 - struct exynos_drm_plane plane = ctx->planes[win]; 322 + struct exynos_drm_plane *plane = &ctx->planes[win]; 323 323 struct exynos_drm_plane_state *state = 324 - to_exynos_plane_state(plane.base.state); 324 + to_exynos_plane_state(plane->base.state); 325 325 unsigned int alpha = state->base.alpha; 326 326 unsigned int pixel_alpha; 327 327 unsigned long val;
+3 -3
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 480 480 struct fimd_context *ctx = crtc->ctx; 481 481 struct drm_display_mode *mode = &crtc->base.state->adjusted_mode; 482 482 const struct fimd_driver_data *driver_data = ctx->driver_data; 483 - void *timing_base = ctx->regs + driver_data->timing_base; 483 + void __iomem *timing_base = ctx->regs + driver_data->timing_base; 484 484 u32 val; 485 485 486 486 if (ctx->suspended) ··· 661 661 static void fimd_win_set_pixfmt(struct fimd_context *ctx, unsigned int win, 662 662 struct drm_framebuffer *fb, int width) 663 663 { 664 - struct exynos_drm_plane plane = ctx->planes[win]; 664 + struct exynos_drm_plane *plane = &ctx->planes[win]; 665 665 struct exynos_drm_plane_state *state = 666 - to_exynos_plane_state(plane.base.state); 666 + to_exynos_plane_state(plane->base.state); 667 667 uint32_t pixel_format = fb->format->format; 668 668 unsigned int alpha = state->base.alpha; 669 669 u32 val = WINCONx_ENWIN;
+1 -1
drivers/gpu/drm/exynos/exynos_drm_gsc.c
··· 1341 1341 for (i = 0; i < ctx->num_clocks; i++) { 1342 1342 ret = clk_prepare_enable(ctx->clocks[i]); 1343 1343 if (ret) { 1344 - while (--i > 0) 1344 + while (--i >= 0) 1345 1345 clk_disable_unprepare(ctx->clocks[i]); 1346 1346 return ret; 1347 1347 }
-1
drivers/gpu/drm/i915/Makefile
··· 17 17 subdir-ccflags-y += $(call cc-option, -Wpacked-not-aligned) 18 18 subdir-ccflags-y += $(call cc-option, -Wformat-overflow) 19 19 subdir-ccflags-y += $(call cc-option, -Wformat-truncation) 20 - subdir-ccflags-y += $(call cc-option, -Wstringop-overflow) 21 20 subdir-ccflags-y += $(call cc-option, -Wstringop-truncation) 22 21 # The following turn off the warnings enabled by -Wextra 23 22 ifeq ($(findstring 2, $(KBUILD_EXTRA_WARN)),)
+1 -2
drivers/gpu/drm/i915/display/icl_dsi.c
··· 1155 1155 } 1156 1156 1157 1157 intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_INIT_OTP); 1158 + intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON); 1158 1159 1159 1160 /* ensure all panel commands dispatched before enabling transcoder */ 1160 1161 wait_for_cmds_dispatched_to_panel(encoder); ··· 1255 1254 1256 1255 /* step6d: enable dsi transcoder */ 1257 1256 gen11_dsi_enable_transcoder(encoder); 1258 - 1259 - intel_dsi_vbt_exec_sequence(intel_dsi, MIPI_SEQ_DISPLAY_ON); 1260 1257 1261 1258 /* step7: enable backlight */ 1262 1259 intel_backlight_enable(crtc_state, conn_state);
+12 -2
drivers/gpu/drm/i915/display/intel_psr.c
··· 1525 1525 * can rely on frontbuffer tracking. 1526 1526 */ 1527 1527 mask = EDP_PSR_DEBUG_MASK_MEMUP | 1528 - EDP_PSR_DEBUG_MASK_HPD | 1529 - EDP_PSR_DEBUG_MASK_LPSP; 1528 + EDP_PSR_DEBUG_MASK_HPD; 1529 + 1530 + /* 1531 + * For some unknown reason on HSW non-ULT (or at least on 1532 + * Dell Latitude E6540) external displays start to flicker 1533 + * when PSR is enabled on the eDP. SR/PC6 residency is much 1534 + * higher than should be possible with an external display. 1535 + * As a workaround leave LPSP unmasked to prevent PSR entry 1536 + * when external displays are active. 1537 + */ 1538 + if (DISPLAY_VER(dev_priv) >= 8 || IS_HASWELL_ULT(dev_priv)) 1539 + mask |= EDP_PSR_DEBUG_MASK_LPSP; 1530 1540 1531 1541 if (DISPLAY_VER(dev_priv) < 20) 1532 1542 mask |= EDP_PSR_DEBUG_MASK_MAX_SLEEP;
+5 -23
drivers/gpu/drm/nouveau/nouveau_fence.c
··· 62 62 if (test_bit(DMA_FENCE_FLAG_USER_BITS, &fence->base.flags)) { 63 63 struct nouveau_fence_chan *fctx = nouveau_fctx(fence); 64 64 65 - if (atomic_dec_and_test(&fctx->notify_ref)) 65 + if (!--fctx->notify_ref) 66 66 drop = 1; 67 67 } 68 68 ··· 103 103 void 104 104 nouveau_fence_context_del(struct nouveau_fence_chan *fctx) 105 105 { 106 - cancel_work_sync(&fctx->allow_block_work); 107 106 nouveau_fence_context_kill(fctx, 0); 108 107 nvif_event_dtor(&fctx->event); 109 108 fctx->dead = 1; ··· 167 168 return ret; 168 169 } 169 170 170 - static void 171 - nouveau_fence_work_allow_block(struct work_struct *work) 172 - { 173 - struct nouveau_fence_chan *fctx = container_of(work, struct nouveau_fence_chan, 174 - allow_block_work); 175 - 176 - if (atomic_read(&fctx->notify_ref) == 0) 177 - nvif_event_block(&fctx->event); 178 - else 179 - nvif_event_allow(&fctx->event); 180 - } 181 - 182 171 void 183 172 nouveau_fence_context_new(struct nouveau_channel *chan, struct nouveau_fence_chan *fctx) 184 173 { ··· 178 191 } args; 179 192 int ret; 180 193 181 - INIT_WORK(&fctx->allow_block_work, nouveau_fence_work_allow_block); 182 194 INIT_LIST_HEAD(&fctx->flip); 183 195 INIT_LIST_HEAD(&fctx->pending); 184 196 spin_lock_init(&fctx->lock); ··· 521 535 struct nouveau_fence *fence = from_fence(f); 522 536 struct nouveau_fence_chan *fctx = nouveau_fctx(fence); 523 537 bool ret; 524 - bool do_work; 525 538 526 - if (atomic_inc_return(&fctx->notify_ref) == 0) 527 - do_work = true; 539 + if (!fctx->notify_ref++) 540 + nvif_event_allow(&fctx->event); 528 541 529 542 ret = nouveau_fence_no_signaling(f); 530 543 if (ret) 531 544 set_bit(DMA_FENCE_FLAG_USER_BITS, &fence->base.flags); 532 - else if (atomic_dec_and_test(&fctx->notify_ref)) 533 - do_work = true; 534 - 535 - if (do_work) 536 - schedule_work(&fctx->allow_block_work); 545 + else if (!--fctx->notify_ref) 546 + nvif_event_block(&fctx->event); 537 547 538 548 return ret; 539 549 }
+1 -4
drivers/gpu/drm/nouveau/nouveau_fence.h
··· 3 3 #define __NOUVEAU_FENCE_H__ 4 4 5 5 #include <linux/dma-fence.h> 6 - #include <linux/workqueue.h> 7 6 #include <nvif/event.h> 8 7 9 8 struct nouveau_drm; ··· 45 46 char name[32]; 46 47 47 48 struct nvif_event event; 48 - struct work_struct allow_block_work; 49 - atomic_t notify_ref; 50 - int dead, killed; 49 + int notify_ref, dead, killed; 51 50 }; 52 51 53 52 struct nouveau_fence_priv {
+2
drivers/gpu/drm/panel/Kconfig
··· 539 539 depends on OF 540 540 depends on DRM_MIPI_DSI 541 541 depends on BACKLIGHT_CLASS_DEVICE 542 + select DRM_DISPLAY_DP_HELPER 543 + select DRM_DISPLAY_HELPER 542 544 help 543 545 Say Y here if you want to enable support for Raydium RM692E5-based 544 546 display panels, such as the one found in the Fairphone 5 smartphone.
+1 -1
drivers/gpu/drm/panel/panel-samsung-s6d7aa0.c
··· 309 309 .off_func = s6d7aa0_lsl080al02_off, 310 310 .drm_mode = &s6d7aa0_lsl080al02_mode, 311 311 .mode_flags = MIPI_DSI_MODE_VSYNC_FLUSH | MIPI_DSI_MODE_VIDEO_NO_HFP, 312 - .bus_flags = DRM_BUS_FLAG_DE_HIGH, 312 + .bus_flags = 0, 313 313 314 314 .has_backlight = false, 315 315 .use_passwd3 = false,
+2
drivers/gpu/drm/panel/panel-simple.c
··· 3948 3948 }, 3949 3949 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 3950 3950 .connector_type = DRM_MODE_CONNECTOR_LVDS, 3951 + .bus_flags = DRM_BUS_FLAG_DE_HIGH, 3951 3952 }; 3952 3953 3953 3954 static const struct panel_desc tianma_tm070jvhg33 = { ··· 3961 3960 }, 3962 3961 .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_SPWG, 3963 3962 .connector_type = DRM_MODE_CONNECTOR_LVDS, 3963 + .bus_flags = DRM_BUS_FLAG_DE_HIGH, 3964 3964 }; 3965 3965 3966 3966 static const struct display_timing tianma_tm070rvhg71_timing = {
+8 -9
drivers/gpu/drm/scheduler/sched_main.c
··· 1178 1178 struct drm_sched_entity *entity; 1179 1179 struct dma_fence *fence; 1180 1180 struct drm_sched_fence *s_fence; 1181 - struct drm_sched_job *sched_job; 1181 + struct drm_sched_job *sched_job = NULL; 1182 1182 int r; 1183 1183 1184 1184 if (READ_ONCE(sched->pause_submit)) 1185 1185 return; 1186 1186 1187 - entity = drm_sched_select_entity(sched); 1188 - if (!entity) 1189 - return; 1190 - 1191 - sched_job = drm_sched_entity_pop_job(entity); 1192 - if (!sched_job) { 1193 - complete_all(&entity->entity_idle); 1194 - return; /* No more work */ 1187 + /* Find entity with a ready job */ 1188 + while (!sched_job && (entity = drm_sched_select_entity(sched))) { 1189 + sched_job = drm_sched_entity_pop_job(entity); 1190 + if (!sched_job) 1191 + complete_all(&entity->entity_idle); 1195 1192 } 1193 + if (!entity) 1194 + return; /* No more work */ 1196 1195 1197 1196 s_fence = sched_job->s_fence; 1198 1197
+4 -1
drivers/gpu/drm/tests/drm_mm_test.c
··· 188 188 189 189 static void drm_test_mm_debug(struct kunit *test) 190 190 { 191 + struct drm_printer p = drm_debug_printer(test->name); 191 192 struct drm_mm mm; 192 193 struct drm_mm_node nodes[2]; 193 194 194 195 /* Create a small drm_mm with a couple of nodes and a few holes, and 195 196 * check that the debug iterator doesn't explode over a trivial drm_mm. 196 197 */ 197 - 198 198 drm_mm_init(&mm, 0, 4096); 199 199 200 200 memset(nodes, 0, sizeof(nodes)); ··· 209 209 KUNIT_ASSERT_FALSE_MSG(test, drm_mm_reserve_node(&mm, &nodes[1]), 210 210 "failed to reserve node[0] {start=%lld, size=%lld)\n", 211 211 nodes[0].start, nodes[0].size); 212 + 213 + drm_mm_print(&mm, &p); 214 + KUNIT_SUCCEED(test); 212 215 } 213 216 214 217 static bool expect_insert(struct kunit *test, struct drm_mm *mm,
+9 -3
drivers/gpu/drm/ttm/ttm_device.c
··· 95 95 ttm_pool_mgr_init(num_pages); 96 96 ttm_tt_mgr_init(num_pages, num_dma32); 97 97 98 - glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32); 98 + glob->dummy_read_page = alloc_page(__GFP_ZERO | GFP_DMA32 | 99 + __GFP_NOWARN); 99 100 101 + /* Retry without GFP_DMA32 for platforms DMA32 is not available */ 100 102 if (unlikely(glob->dummy_read_page == NULL)) { 101 - ret = -ENOMEM; 102 - goto out; 103 + glob->dummy_read_page = alloc_page(__GFP_ZERO); 104 + if (unlikely(glob->dummy_read_page == NULL)) { 105 + ret = -ENOMEM; 106 + goto out; 107 + } 108 + pr_warn("Using GFP_DMA32 fallback for dummy_read_page\n"); 103 109 } 104 110 105 111 INIT_LIST_HEAD(&glob->device_list);
+28 -7
drivers/gpu/drm/v3d/v3d_submit.c
··· 147 147 return 0; 148 148 } 149 149 150 + static void 151 + v3d_job_deallocate(void **container) 152 + { 153 + kfree(*container); 154 + *container = NULL; 155 + } 156 + 150 157 static int 151 158 v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv, 152 159 struct v3d_job *job, void (*free)(struct kref *ref), ··· 280 273 281 274 ret = v3d_job_init(v3d, file_priv, &(*job)->base, 282 275 v3d_job_free, args->in_sync, se, V3D_CSD); 283 - if (ret) 276 + if (ret) { 277 + v3d_job_deallocate((void *)job); 284 278 return ret; 279 + } 285 280 286 281 ret = v3d_job_allocate((void *)clean_job, sizeof(**clean_job)); 287 282 if (ret) ··· 291 282 292 283 ret = v3d_job_init(v3d, file_priv, *clean_job, 293 284 v3d_job_free, 0, NULL, V3D_CACHE_CLEAN); 294 - if (ret) 285 + if (ret) { 286 + v3d_job_deallocate((void *)clean_job); 295 287 return ret; 288 + } 296 289 297 290 (*job)->args = *args; 298 291 ··· 871 860 872 861 ret = v3d_job_init(v3d, file_priv, &render->base, 873 862 v3d_render_job_free, args->in_sync_rcl, &se, V3D_RENDER); 874 - if (ret) 863 + if (ret) { 864 + v3d_job_deallocate((void *)&render); 875 865 goto fail; 866 + } 876 867 877 868 render->start = args->rcl_start; 878 869 render->end = args->rcl_end; ··· 887 874 888 875 ret = v3d_job_init(v3d, file_priv, &bin->base, 889 876 v3d_job_free, args->in_sync_bcl, &se, V3D_BIN); 890 - if (ret) 877 + if (ret) { 878 + v3d_job_deallocate((void *)&bin); 891 879 goto fail; 880 + } 892 881 893 882 bin->start = args->bcl_start; 894 883 bin->end = args->bcl_end; ··· 907 892 908 893 ret = v3d_job_init(v3d, file_priv, clean_job, 909 894 v3d_job_free, 0, NULL, V3D_CACHE_CLEAN); 910 - if (ret) 895 + if (ret) { 896 + v3d_job_deallocate((void *)&clean_job); 911 897 goto fail; 898 + } 912 899 913 900 last_job = clean_job; 914 901 } else { ··· 1032 1015 1033 1016 ret = v3d_job_init(v3d, file_priv, &job->base, 1034 1017 v3d_job_free, args->in_sync, &se, V3D_TFU); 1035 - if (ret) 1018 + if (ret) { 1019 + v3d_job_deallocate((void *)&job); 1036 1020 goto fail; 1021 + } 1037 1022 1038 1023 job->base.bo = kcalloc(ARRAY_SIZE(args->bo_handles), 1039 1024 sizeof(*job->base.bo), GFP_KERNEL); ··· 1252 1233 1253 1234 ret = v3d_job_init(v3d, file_priv, &cpu_job->base, 1254 1235 v3d_job_free, 0, &se, V3D_CPU); 1255 - if (ret) 1236 + if (ret) { 1237 + v3d_job_deallocate((void *)&cpu_job); 1256 1238 goto fail; 1239 + } 1257 1240 1258 1241 clean_job = cpu_job->indirect_csd.clean_job; 1259 1242 csd_job = cpu_job->indirect_csd.job;
+5 -6
drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_object.h
··· 35 35 u32 ofs, u64 *ptr, u32 size) 36 36 { 37 37 struct ttm_bo_kmap_obj map; 38 - void *virtual; 38 + void *src; 39 39 bool is_iomem; 40 40 int ret; 41 - 42 - XE_WARN_ON(size != 8); 43 41 44 42 ret = xe_bo_lock(bo, true); 45 43 if (ret) ··· 48 50 goto out_unlock; 49 51 50 52 ofs &= ~PAGE_MASK; 51 - virtual = ttm_kmap_obj_virtual(&map, &is_iomem); 53 + src = ttm_kmap_obj_virtual(&map, &is_iomem); 54 + src += ofs; 52 55 if (is_iomem) 53 - *ptr = readq((void __iomem *)(virtual + ofs)); 56 + memcpy_fromio(ptr, (void __iomem *)src, size); 54 57 else 55 - *ptr = *(u64 *)(virtual + ofs); 58 + memcpy(ptr, src, size); 56 59 57 60 ttm_bo_kunmap(&map); 58 61 out_unlock:
-3
drivers/gpu/drm/xe/tests/xe_wa_test.c
··· 74 74 SUBPLATFORM_CASE(DG2, G11, B1), 75 75 SUBPLATFORM_CASE(DG2, G12, A0), 76 76 SUBPLATFORM_CASE(DG2, G12, A1), 77 - PLATFORM_CASE(PVC, B0), 78 - PLATFORM_CASE(PVC, B1), 79 - PLATFORM_CASE(PVC, C0), 80 77 GMDID_CASE(METEORLAKE, 1270, A0, 1300, A0), 81 78 GMDID_CASE(METEORLAKE, 1271, A0, 1300, A0), 82 79 GMDID_CASE(LUNARLAKE, 2004, A0, 2000, A0),
+1 -1
drivers/gpu/drm/xe/xe_device.c
··· 613 613 u32 xe_device_ccs_bytes(struct xe_device *xe, u64 size) 614 614 { 615 615 return xe_device_has_flat_ccs(xe) ? 616 - DIV_ROUND_UP(size, NUM_BYTES_PER_CCS_BYTE(xe)) : 0; 616 + DIV_ROUND_UP_ULL(size, NUM_BYTES_PER_CCS_BYTE(xe)) : 0; 617 617 } 618 618 619 619 bool xe_device_mem_access_ongoing(struct xe_device *xe)
+1 -1
drivers/gpu/drm/xe/xe_dma_buf.c
··· 175 175 return 0; 176 176 } 177 177 178 - const struct dma_buf_ops xe_dmabuf_ops = { 178 + static const struct dma_buf_ops xe_dmabuf_ops = { 179 179 .attach = xe_dma_buf_attach, 180 180 .detach = xe_dma_buf_detach, 181 181 .pin = xe_dma_buf_pin,
+1 -1
drivers/gpu/drm/xe/xe_hwmon.c
··· 419 419 420 420 return xe_pcode_read(gt, PCODE_MBOX(PCODE_POWER_SETUP, 421 421 POWER_SETUP_SUBCOMMAND_READ_I1, 0), 422 - uval, 0); 422 + uval, NULL); 423 423 } 424 424 425 425 static int xe_hwmon_pcode_write_i1(struct xe_gt *gt, u32 uval)
+7 -7
drivers/gpu/drm/xe/xe_migrate.c
··· 472 472 /* Indirect access needs compression enabled uncached PAT index */ 473 473 if (GRAPHICS_VERx100(xe) >= 2000) 474 474 pat_index = is_comp_pte ? xe->pat.idx[XE_CACHE_NONE_COMPRESSION] : 475 - xe->pat.idx[XE_CACHE_NONE]; 475 + xe->pat.idx[XE_CACHE_WB]; 476 476 else 477 477 pat_index = xe->pat.idx[XE_CACHE_WB]; 478 478 ··· 760 760 if (src_is_vram && xe_migrate_allow_identity(src_L0, &src_it)) 761 761 xe_res_next(&src_it, src_L0); 762 762 else 763 - emit_pte(m, bb, src_L0_pt, src_is_vram, true, &src_it, src_L0, 764 - src); 763 + emit_pte(m, bb, src_L0_pt, src_is_vram, copy_system_ccs, 764 + &src_it, src_L0, src); 765 765 766 766 if (dst_is_vram && xe_migrate_allow_identity(src_L0, &dst_it)) 767 767 xe_res_next(&dst_it, src_L0); 768 768 else 769 - emit_pte(m, bb, dst_L0_pt, dst_is_vram, true, &dst_it, src_L0, 770 - dst); 769 + emit_pte(m, bb, dst_L0_pt, dst_is_vram, copy_system_ccs, 770 + &dst_it, src_L0, dst); 771 771 772 772 if (copy_system_ccs) 773 773 emit_pte(m, bb, ccs_pt, false, false, &ccs_it, ccs_size, src); ··· 1009 1009 if (clear_vram && xe_migrate_allow_identity(clear_L0, &src_it)) 1010 1010 xe_res_next(&src_it, clear_L0); 1011 1011 else 1012 - emit_pte(m, bb, clear_L0_pt, clear_vram, true, &src_it, clear_L0, 1013 - dst); 1012 + emit_pte(m, bb, clear_L0_pt, clear_vram, clear_system_ccs, 1013 + &src_it, clear_L0, dst); 1014 1014 1015 1015 bb->cs[bb->len++] = MI_BATCH_BUFFER_END; 1016 1016 update_idx = bb->len;
+2 -2
drivers/gpu/drm/xe/xe_mmio.c
··· 272 272 drm_info(&xe->drm, "VRAM[%u, %u]: Actual physical size %pa, usable size exclude stolen %pa, CPU accessible size %pa\n", id, 273 273 tile->id, &tile->mem.vram.actual_physical_size, &tile->mem.vram.usable_size, &tile->mem.vram.io_size); 274 274 drm_info(&xe->drm, "VRAM[%u, %u]: DPA range: [%pa-%llx], io range: [%pa-%llx]\n", id, tile->id, 275 - &tile->mem.vram.dpa_base, tile->mem.vram.dpa_base + tile->mem.vram.actual_physical_size, 276 - &tile->mem.vram.io_start, tile->mem.vram.io_start + tile->mem.vram.io_size); 275 + &tile->mem.vram.dpa_base, tile->mem.vram.dpa_base + (u64)tile->mem.vram.actual_physical_size, 276 + &tile->mem.vram.io_start, tile->mem.vram.io_start + (u64)tile->mem.vram.io_size); 277 277 278 278 /* calculate total size using tile size to get the correct HW sizing */ 279 279 total_size += tile_size;
+14 -9
drivers/gpu/drm/xe/xe_vm.c
··· 1855 1855 mutex_lock(&xef->vm.lock); 1856 1856 err = xa_alloc(&xef->vm.xa, &id, vm, xa_limit_32b, GFP_KERNEL); 1857 1857 mutex_unlock(&xef->vm.lock); 1858 - if (err) { 1859 - xe_vm_close_and_put(vm); 1860 - return err; 1861 - } 1858 + if (err) 1859 + goto err_close_and_put; 1862 1860 1863 1861 if (xe->info.has_asid) { 1864 1862 mutex_lock(&xe->usm.lock); ··· 1864 1866 XA_LIMIT(1, XE_MAX_ASID - 1), 1865 1867 &xe->usm.next_asid, GFP_KERNEL); 1866 1868 mutex_unlock(&xe->usm.lock); 1867 - if (err < 0) { 1868 - xe_vm_close_and_put(vm); 1869 - return err; 1870 - } 1871 - err = 0; 1869 + if (err < 0) 1870 + goto err_free_id; 1871 + 1872 1872 vm->usm.asid = asid; 1873 1873 } 1874 1874 ··· 1884 1888 #endif 1885 1889 1886 1890 return 0; 1891 + 1892 + err_free_id: 1893 + mutex_lock(&xef->vm.lock); 1894 + xa_erase(&xef->vm.xa, id); 1895 + mutex_unlock(&xef->vm.lock); 1896 + err_close_and_put: 1897 + xe_vm_close_and_put(vm); 1898 + 1899 + return err; 1887 1900 } 1888 1901 1889 1902 int xe_vm_destroy_ioctl(struct drm_device *dev, void *data,
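The xe_vm.c hunk above converts two ad-hoc failure exits into a proper goto ladder, so that a late ASID-allocation failure also erases the already-allocated xarray id before closing the VM. A minimal userspace sketch of that unwind pattern (names invented, plain C rather than kernel code, with booleans standing in for the real resources):

```c
#include <stdbool.h>

/* Two-step setup mirroring the xe_vm_create_ioctl() fix: each failure
 * exit unwinds exactly the steps that already succeeded, in reverse
 * order of acquisition. All names here are illustrative. */
struct demo_vm {
    bool id_allocated;
    bool asid_allocated;
    bool closed;
};

static int alloc_id(struct demo_vm *vm, bool fail)
{
    if (fail)
        return -1;
    vm->id_allocated = true;
    return 0;
}

static int alloc_asid(struct demo_vm *vm, bool fail)
{
    if (fail)
        return -1;
    vm->asid_allocated = true;
    return 0;
}

static void free_id(struct demo_vm *vm)       { vm->id_allocated = false; }
static void close_and_put(struct demo_vm *vm) { vm->closed = true; }

/* Returns 0 on success; on error, everything acquired so far is undone. */
int demo_vm_create(struct demo_vm *vm, bool fail_id, bool fail_asid)
{
    int err;

    err = alloc_id(vm, fail_id);
    if (err)
        goto err_close_and_put;   /* nothing but the vm itself to undo */

    err = alloc_asid(vm, fail_asid);
    if (err)
        goto err_free_id;         /* undo the id, then the vm */

    return 0;

err_free_id:
    free_id(vm);
err_close_and_put:
    close_and_put(vm);
    return err;
}
```

The point of the ladder is that each label undoes exactly one step, so adding a third setup step only means adding one label, not auditing every early return.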
+82 -35
drivers/hid/bpf/hid_bpf_dispatch.c
··· 143 143 } 144 144 EXPORT_SYMBOL_GPL(call_hid_bpf_rdesc_fixup); 145 145 146 + /* Disables missing prototype warnings */ 147 + __bpf_kfunc_start_defs(); 148 + 146 149 /** 147 150 * hid_bpf_get_data - Get the kernel memory pointer associated with the context @ctx 148 151 * ··· 155 152 * 156 153 * @returns %NULL on error, an %__u8 memory pointer on success 157 154 */ 158 - noinline __u8 * 155 + __bpf_kfunc __u8 * 159 156 hid_bpf_get_data(struct hid_bpf_ctx *ctx, unsigned int offset, const size_t rdwr_buf_size) 160 157 { 161 158 struct hid_bpf_ctx_kern *ctx_kern; ··· 170 167 171 168 return ctx_kern->data + offset; 172 169 } 170 + __bpf_kfunc_end_defs(); 173 171 174 172 /* 175 173 * The following set contains all functions we agree BPF programs ··· 245 241 return 0; 246 242 } 247 243 248 - /** 249 - * hid_bpf_attach_prog - Attach the given @prog_fd to the given HID device 250 - * 251 - * @hid_id: the system unique identifier of the HID device 252 - * @prog_fd: an fd in the user process representing the program to attach 253 - * @flags: any logical OR combination of &enum hid_bpf_attach_flags 254 - * 255 - * @returns an fd of a bpf_link object on success (> %0), an error code otherwise. 256 - * Closing this fd will detach the program from the HID device (unless the bpf_link 257 - * is pinned to the BPF file system). 
258 - */ 259 - /* called from syscall */ 260 - noinline int 261 - hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags) 244 + static int do_hid_bpf_attach_prog(struct hid_device *hdev, int prog_fd, struct bpf_prog *prog, 245 + __u32 flags) 262 246 { 263 - struct hid_device *hdev; 264 - struct device *dev; 265 - int fd, err, prog_type = hid_bpf_get_prog_attach_type(prog_fd); 247 + int fd, err, prog_type; 266 248 267 - if (!hid_bpf_ops) 268 - return -EINVAL; 269 - 249 + prog_type = hid_bpf_get_prog_attach_type(prog); 270 250 if (prog_type < 0) 271 251 return prog_type; 272 252 273 253 if (prog_type >= HID_BPF_PROG_TYPE_MAX) 274 254 return -EINVAL; 275 - 276 - if ((flags & ~HID_BPF_FLAG_MASK)) 277 - return -EINVAL; 278 - 279 - dev = bus_find_device(hid_bpf_ops->bus_type, NULL, &hid_id, device_match_id); 280 - if (!dev) 281 - return -EINVAL; 282 - 283 - hdev = to_hid_device(dev); 284 255 285 256 if (prog_type == HID_BPF_PROG_TYPE_DEVICE_EVENT) { 286 257 err = hid_bpf_allocate_event_data(hdev); ··· 263 284 return err; 264 285 } 265 286 266 - fd = __hid_bpf_attach_prog(hdev, prog_type, prog_fd, flags); 287 + fd = __hid_bpf_attach_prog(hdev, prog_type, prog_fd, prog, flags); 267 288 if (fd < 0) 268 289 return fd; 269 290 ··· 278 299 return fd; 279 300 } 280 301 302 + /* Disables missing prototype warnings */ 303 + __bpf_kfunc_start_defs(); 304 + 305 + /** 306 + * hid_bpf_attach_prog - Attach the given @prog_fd to the given HID device 307 + * 308 + * @hid_id: the system unique identifier of the HID device 309 + * @prog_fd: an fd in the user process representing the program to attach 310 + * @flags: any logical OR combination of &enum hid_bpf_attach_flags 311 + * 312 + * @returns an fd of a bpf_link object on success (> %0), an error code otherwise. 313 + * Closing this fd will detach the program from the HID device (unless the bpf_link 314 + * is pinned to the BPF file system). 
315 + */ 316 + /* called from syscall */ 317 + __bpf_kfunc int 318 + hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags) 319 + { 320 + struct hid_device *hdev; 321 + struct bpf_prog *prog; 322 + struct device *dev; 323 + int err, fd; 324 + 325 + if (!hid_bpf_ops) 326 + return -EINVAL; 327 + 328 + if ((flags & ~HID_BPF_FLAG_MASK)) 329 + return -EINVAL; 330 + 331 + dev = bus_find_device(hid_bpf_ops->bus_type, NULL, &hid_id, device_match_id); 332 + if (!dev) 333 + return -EINVAL; 334 + 335 + hdev = to_hid_device(dev); 336 + 337 + /* 338 + * take a ref on the prog itself, it will be released 339 + * on errors or when it'll be detached 340 + */ 341 + prog = bpf_prog_get(prog_fd); 342 + if (IS_ERR(prog)) { 343 + err = PTR_ERR(prog); 344 + goto out_dev_put; 345 + } 346 + 347 + fd = do_hid_bpf_attach_prog(hdev, prog_fd, prog, flags); 348 + if (fd < 0) { 349 + err = fd; 350 + goto out_prog_put; 351 + } 352 + 353 + return fd; 354 + 355 + out_prog_put: 356 + bpf_prog_put(prog); 357 + out_dev_put: 358 + put_device(dev); 359 + return err; 360 + } 361 + 281 362 /** 282 363 * hid_bpf_allocate_context - Allocate a context to the given HID device 283 364 * ··· 345 306 * 346 307 * @returns A pointer to &struct hid_bpf_ctx on success, %NULL on error. 
347 308 */ 348 - noinline struct hid_bpf_ctx * 309 + __bpf_kfunc struct hid_bpf_ctx * 349 310 hid_bpf_allocate_context(unsigned int hid_id) 350 311 { 351 312 struct hid_device *hdev; ··· 362 323 hdev = to_hid_device(dev); 363 324 364 325 ctx_kern = kzalloc(sizeof(*ctx_kern), GFP_KERNEL); 365 - if (!ctx_kern) 326 + if (!ctx_kern) { 327 + put_device(dev); 366 328 return NULL; 329 + } 367 330 368 331 ctx_kern->ctx.hid = hdev; 369 332 ··· 378 337 * @ctx: the HID-BPF context to release 379 338 * 380 339 */ 381 - noinline void 340 + __bpf_kfunc void 382 341 hid_bpf_release_context(struct hid_bpf_ctx *ctx) 383 342 { 384 343 struct hid_bpf_ctx_kern *ctx_kern; 344 + struct hid_device *hid; 385 345 386 346 ctx_kern = container_of(ctx, struct hid_bpf_ctx_kern, ctx); 347 + hid = (struct hid_device *)ctx_kern->ctx.hid; /* ignore const */ 387 348 388 349 kfree(ctx_kern); 350 + 351 + /* get_device() is called by bus_find_device() */ 352 + put_device(&hid->dev); 389 353 } 390 354 391 355 /** ··· 404 358 * 405 359 * @returns %0 on success, a negative error code otherwise. 406 360 */ 407 - noinline int 361 + __bpf_kfunc int 408 362 hid_bpf_hw_request(struct hid_bpf_ctx *ctx, __u8 *buf, size_t buf__sz, 409 363 enum hid_report_type rtype, enum hid_class_request reqtype) 410 364 { ··· 472 426 kfree(dma_data); 473 427 return ret; 474 428 } 429 + __bpf_kfunc_end_defs(); 475 430 476 431 /* our HID-BPF entrypoints */ 477 432 BTF_SET8_START(hid_bpf_fmodret_ids)
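Two of the hid_bpf_dispatch.c fixes above are about reference balance: `bus_find_device()` takes a device reference, so `hid_bpf_allocate_context()` must drop it when its own allocation fails, and `hid_bpf_release_context()` must drop it on the normal path. A userspace sketch of that discipline (names and the `dev_refs` counter are invented stand-ins, not the kernel API):

```c
#include <stdbool.h>
#include <stdlib.h>

/* dev_refs stands in for the struct device refcount. */
static int dev_refs;

static void get_device(void) { dev_refs++; }
static void put_device(void) { dev_refs--; }

struct demo_ctx { int dummy; };

/* Every exit path after the lookup either hands the reference to the
 * returned context or drops it -- the bug fixed above was a failure
 * path that did neither. */
struct demo_ctx *demo_allocate_context(bool alloc_fails)
{
    struct demo_ctx *ctx;

    get_device();                 /* what bus_find_device() does implicitly */

    ctx = alloc_fails ? NULL : malloc(sizeof(*ctx));
    if (!ctx) {
        put_device();             /* the fix: drop the ref before bailing */
        return NULL;
    }
    return ctx;
}

void demo_release_context(struct demo_ctx *ctx)
{
    free(ctx);
    put_device();                 /* matches hid_bpf_release_context() */
}
```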
+2 -2
drivers/hid/bpf/hid_bpf_dispatch.h
··· 12 12 13 13 int hid_bpf_preload_skel(void); 14 14 void hid_bpf_free_links_and_skel(void); 15 - int hid_bpf_get_prog_attach_type(int prog_fd); 15 + int hid_bpf_get_prog_attach_type(struct bpf_prog *prog); 16 16 int __hid_bpf_attach_prog(struct hid_device *hdev, enum hid_bpf_prog_type prog_type, int prog_fd, 17 - __u32 flags); 17 + struct bpf_prog *prog, __u32 flags); 18 18 void __hid_bpf_destroy_device(struct hid_device *hdev); 19 19 int hid_bpf_prog_run(struct hid_device *hdev, enum hid_bpf_prog_type type, 20 20 struct hid_bpf_ctx_kern *ctx_kern);
+20 -20
drivers/hid/bpf/hid_bpf_jmp_table.c
··· 196 196 static void hid_bpf_release_progs(struct work_struct *work) 197 197 { 198 198 int i, j, n, map_fd = -1; 199 + bool hdev_destroyed; 199 200 200 201 if (!jmp_table.map) 201 202 return; ··· 221 220 if (entry->hdev) { 222 221 hdev = entry->hdev; 223 222 type = entry->type; 223 + /* 224 + * hdev is still valid, even if we are called after hid_destroy_device(): 225 + * when hid_bpf_attach() gets called, it takes a ref on the dev through 226 + * bus_find_device() 227 + */ 228 + hdev_destroyed = hdev->bpf.destroyed; 224 229 225 230 hid_bpf_populate_hdev(hdev, type); 226 231 ··· 239 232 if (test_bit(next->idx, jmp_table.enabled)) 240 233 continue; 241 234 242 - if (next->hdev == hdev && next->type == type) 235 + if (next->hdev == hdev && next->type == type) { 236 + /* 237 + * clear the hdev reference and decrement the device ref 238 + * that was taken during bus_find_device() while calling 239 + * hid_bpf_attach() 240 + */ 243 241 next->hdev = NULL; 242 + put_device(&hdev->dev); 243 + } 244 244 } 245 245 246 - /* if type was rdesc fixup, reconnect device */ 247 - if (type == HID_BPF_PROG_TYPE_RDESC_FIXUP) 246 + /* if type was rdesc fixup and the device is not gone, reconnect device */ 247 + if (type == HID_BPF_PROG_TYPE_RDESC_FIXUP && !hdev_destroyed) 248 248 hid_bpf_reconnect(hdev); 249 249 } 250 250 } ··· 347 333 return err; 348 334 } 349 335 350 - int hid_bpf_get_prog_attach_type(int prog_fd) 336 + int hid_bpf_get_prog_attach_type(struct bpf_prog *prog) 351 337 { 352 - struct bpf_prog *prog = NULL; 353 - int i; 354 338 int prog_type = HID_BPF_PROG_TYPE_UNDEF; 355 - 356 - prog = bpf_prog_get(prog_fd); 357 - if (IS_ERR(prog)) 358 - return PTR_ERR(prog); 339 + int i; 359 340 360 341 for (i = 0; i < HID_BPF_PROG_TYPE_MAX; i++) { 361 342 if (hid_bpf_btf_ids[i] == prog->aux->attach_btf_id) { ··· 358 349 break; 359 350 } 360 351 } 361 - 362 - bpf_prog_put(prog); 363 352 364 353 return prog_type; 365 354 } ··· 395 388 /* called from syscall */ 396 389 noinline int 
397 390 __hid_bpf_attach_prog(struct hid_device *hdev, enum hid_bpf_prog_type prog_type, 398 - int prog_fd, __u32 flags) 391 + int prog_fd, struct bpf_prog *prog, __u32 flags) 399 392 { 400 393 struct bpf_link_primer link_primer; 401 394 struct hid_bpf_link *link; 402 - struct bpf_prog *prog = NULL; 403 395 struct hid_bpf_prog_entry *prog_entry; 404 396 int cnt, err = -EINVAL, prog_table_idx = -1; 405 - 406 - /* take a ref on the prog itself */ 407 - prog = bpf_prog_get(prog_fd); 408 - if (IS_ERR(prog)) 409 - return PTR_ERR(prog); 410 397 411 398 mutex_lock(&hid_bpf_attach_lock); 412 399 ··· 468 467 err_unlock: 469 468 mutex_unlock(&hid_bpf_attach_lock); 470 469 471 - bpf_prog_put(prog); 472 470 kfree(link); 473 471 474 472 return err;
+3
drivers/hid/hid-ids.h
··· 298 298 299 299 #define USB_VENDOR_ID_CIDC 0x1677 300 300 301 + #define I2C_VENDOR_ID_CIRQUE 0x0488 302 + #define I2C_PRODUCT_ID_CIRQUE_1063 0x1063 303 + 301 304 #define USB_VENDOR_ID_CJTOUCH 0x24b8 302 305 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0020 0x0020 303 306 #define USB_DEVICE_ID_CJTOUCH_MULTI_TOUCH_0040 0x0040
+2
drivers/hid/hid-logitech-hidpp.c
··· 4610 4610 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC088) }, 4611 4611 { /* Logitech G Pro X Superlight Gaming Mouse over USB */ 4612 4612 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC094) }, 4613 + { /* Logitech G Pro X Superlight 2 Gaming Mouse over USB */ 4614 + HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0xC09b) }, 4613 4615 4614 4616 { /* G935 Gaming Headset */ 4615 4617 HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH, 0x0a87),
+4
drivers/hid/hid-nvidia-shield.c
··· 800 800 801 801 led->name = devm_kasprintf(&ts->base.hdev->dev, GFP_KERNEL, 802 802 "thunderstrike%d:blue:led", ts->id); 803 + if (!led->name) 804 + return -ENOMEM; 803 805 led->max_brightness = 1; 804 806 led->flags = LED_CORE_SUSPENDRESUME | LED_RETAIN_AT_SHUTDOWN; 805 807 led->brightness_get = &thunderstrike_led_get_brightness; ··· 833 831 shield_dev->battery_dev.desc.name = 834 832 devm_kasprintf(&ts->base.hdev->dev, GFP_KERNEL, 835 833 "thunderstrike_%d", ts->id); 834 + if (!shield_dev->battery_dev.desc.name) 835 + return -ENOMEM; 836 836 837 837 shield_dev->battery_dev.psy = power_supply_register( 838 838 &hdev->dev, &shield_dev->battery_dev.desc, &psy_cfg);
+18 -18
drivers/hid/hid-steam.c
··· 1109 1109 return hid_hw_start(hdev, HID_CONNECT_DEFAULT); 1110 1110 1111 1111 steam = devm_kzalloc(&hdev->dev, sizeof(*steam), GFP_KERNEL); 1112 - if (!steam) { 1113 - ret = -ENOMEM; 1114 - goto steam_alloc_fail; 1115 - } 1112 + if (!steam) 1113 + return -ENOMEM; 1114 + 1116 1115 steam->hdev = hdev; 1117 1116 hid_set_drvdata(hdev, steam); 1118 1117 spin_lock_init(&steam->lock); ··· 1128 1129 */ 1129 1130 ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT & ~HID_CONNECT_HIDRAW); 1130 1131 if (ret) 1131 - goto hid_hw_start_fail; 1132 + goto err_cancel_work; 1132 1133 1133 1134 ret = hid_hw_open(hdev); 1134 1135 if (ret) { 1135 1136 hid_err(hdev, 1136 1137 "%s:hid_hw_open\n", 1137 1138 __func__); 1138 - goto hid_hw_open_fail; 1139 + goto err_hw_stop; 1139 1140 } 1140 1141 1141 1142 if (steam->quirks & STEAM_QUIRK_WIRELESS) { ··· 1151 1152 hid_err(hdev, 1152 1153 "%s:steam_register failed with error %d\n", 1153 1154 __func__, ret); 1154 - goto input_register_fail; 1155 + goto err_hw_close; 1155 1156 } 1156 1157 } 1157 1158 1158 1159 steam->client_hdev = steam_create_client_hid(hdev); 1159 1160 if (IS_ERR(steam->client_hdev)) { 1160 1161 ret = PTR_ERR(steam->client_hdev); 1161 - goto client_hdev_fail; 1162 + goto err_stream_unregister; 1162 1163 } 1163 1164 steam->client_hdev->driver_data = steam; 1164 1165 1165 1166 ret = hid_add_device(steam->client_hdev); 1166 1167 if (ret) 1167 - goto client_hdev_add_fail; 1168 + goto err_destroy; 1168 1169 1169 1170 return 0; 1170 1171 1171 - client_hdev_add_fail: 1172 - hid_hw_stop(hdev); 1173 - client_hdev_fail: 1172 + err_destroy: 1174 1173 hid_destroy_device(steam->client_hdev); 1175 - input_register_fail: 1176 - hid_hw_open_fail: 1177 - hid_hw_start_fail: 1174 + err_stream_unregister: 1175 + if (steam->connected) 1176 + steam_unregister(steam); 1177 + err_hw_close: 1178 + hid_hw_close(hdev); 1179 + err_hw_stop: 1180 + hid_hw_stop(hdev); 1181 + err_cancel_work: 1178 1182 cancel_work_sync(&steam->work_connect); 1179 1183 
cancel_delayed_work_sync(&steam->mode_switch); 1180 1184 cancel_work_sync(&steam->rumble_work); 1181 - steam_alloc_fail: 1182 - hid_err(hdev, "%s: failed with error %d\n", 1183 - __func__, ret); 1185 + 1184 1186 return ret; 1185 1187 } 1186 1188
+5 -2
drivers/hid/hidraw.c
··· 357 357 down_write(&minors_rwsem); 358 358 359 359 spin_lock_irqsave(&hidraw_table[minor]->list_lock, flags); 360 - for (int i = list->tail; i < list->head; i++) 361 - kfree(list->buffer[i].value); 360 + while (list->tail != list->head) { 361 + kfree(list->buffer[list->tail].value); 362 + list->buffer[list->tail].value = NULL; 363 + list->tail = (list->tail + 1) & (HIDRAW_BUFFER_SIZE - 1); 364 + } 362 365 list_del(&list->node); 363 366 spin_unlock_irqrestore(&hidraw_table[minor]->list_lock, flags); 364 367 kfree(list);
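The hidraw.c fix above matters because the report list is a power-of-two ring buffer: once it has wrapped, `head` can be numerically smaller than `tail`, so the old `for (i = tail; i < head; i++)` loop frees nothing and leaks every queued report. A self-contained sketch of the corrected drain (small buffer size chosen here for the test; the real `HIDRAW_BUFFER_SIZE` is 64):

```c
#include <stdlib.h>

#define BUFSIZE 8  /* must be a power of two, like HIDRAW_BUFFER_SIZE */

struct report { char *value; };

struct report_list {
    struct report buffer[BUFSIZE];
    int head;   /* next slot the writer fills */
    int tail;   /* next slot the reader consumes */
};

/* Walk tail toward head with the power-of-two mask: this visits exactly
 * the occupied slots whether or not the ring has wrapped. */
void drain(struct report_list *list)
{
    while (list->tail != list->head) {
        free(list->buffer[list->tail].value);
        list->buffer[list->tail].value = NULL;
        list->tail = (list->tail + 1) & (BUFSIZE - 1);
    }
}
```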
+5 -1
drivers/hid/i2c-hid/i2c-hid-core.c
··· 49 49 #define I2C_HID_QUIRK_RESET_ON_RESUME BIT(2) 50 50 #define I2C_HID_QUIRK_BAD_INPUT_SIZE BIT(3) 51 51 #define I2C_HID_QUIRK_NO_WAKEUP_AFTER_RESET BIT(4) 52 + #define I2C_HID_QUIRK_NO_SLEEP_ON_SUSPEND BIT(5) 52 53 53 54 /* Command opcodes */ 54 55 #define I2C_HID_OPCODE_RESET 0x01 ··· 132 131 I2C_HID_QUIRK_RESET_ON_RESUME }, 133 132 { USB_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720, 134 133 I2C_HID_QUIRK_BAD_INPUT_SIZE }, 134 + { I2C_VENDOR_ID_CIRQUE, I2C_PRODUCT_ID_CIRQUE_1063, 135 + I2C_HID_QUIRK_NO_SLEEP_ON_SUSPEND }, 135 136 /* 136 137 * Sending the wakeup after reset actually break ELAN touchscreen controller 137 138 */ ··· 959 956 return ret; 960 957 961 958 /* Save some power */ 962 - i2c_hid_set_power(ihid, I2C_HID_PWR_SLEEP); 959 + if (!(ihid->quirks & I2C_HID_QUIRK_NO_SLEEP_ON_SUSPEND)) 960 + i2c_hid_set_power(ihid, I2C_HID_PWR_SLEEP); 963 961 964 962 disable_irq(client->irq); 965 963
+1
drivers/hid/i2c-hid/i2c-hid-of.c
··· 87 87 if (!ihid_of) 88 88 return -ENOMEM; 89 89 90 + ihid_of->client = client; 90 91 ihid_of->ops.power_up = i2c_hid_of_power_up; 91 92 ihid_of->ops.power_down = i2c_hid_of_power_down; 92 93
+1 -1
drivers/md/raid1.c
··· 2262 2262 int sectors = r1_bio->sectors; 2263 2263 int read_disk = r1_bio->read_disk; 2264 2264 struct mddev *mddev = conf->mddev; 2265 - struct md_rdev *rdev = rcu_dereference(conf->mirrors[read_disk].rdev); 2265 + struct md_rdev *rdev = conf->mirrors[read_disk].rdev; 2266 2266 2267 2267 if (exceed_read_errors(mddev, rdev)) { 2268 2268 r1_bio->bios[r1_bio->read_disk] = IO_BLOCKED;
+1 -1
drivers/media/common/videobuf2/videobuf2-core.c
··· 989 989 bool no_previous_buffers = !q_num_bufs; 990 990 int ret = 0; 991 991 992 - if (q->num_buffers == q->max_num_buffers) { 992 + if (q_num_bufs == q->max_num_buffers) { 993 993 dprintk(q, 1, "maximum number of buffers already allocated\n"); 994 994 return -ENOBUFS; 995 995 }
+26 -29
drivers/media/common/videobuf2/videobuf2-v4l2.c
··· 671 671 } 672 672 EXPORT_SYMBOL(vb2_querybuf); 673 673 674 - static void fill_buf_caps(struct vb2_queue *q, u32 *caps) 674 + static void vb2_set_flags_and_caps(struct vb2_queue *q, u32 memory, 675 + u32 *flags, u32 *caps, u32 *max_num_bufs) 675 676 { 677 + if (!q->allow_cache_hints || memory != V4L2_MEMORY_MMAP) { 678 + /* 679 + * This needs to clear V4L2_MEMORY_FLAG_NON_COHERENT only, 680 + * but in order to avoid bugs we zero out all bits. 681 + */ 682 + *flags = 0; 683 + } else { 684 + /* Clear all unknown flags. */ 685 + *flags &= V4L2_MEMORY_FLAG_NON_COHERENT; 686 + } 687 + 676 688 *caps = V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS; 677 689 if (q->io_modes & VB2_MMAP) 678 690 *caps |= V4L2_BUF_CAP_SUPPORTS_MMAP; ··· 698 686 *caps |= V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS; 699 687 if (q->supports_requests) 700 688 *caps |= V4L2_BUF_CAP_SUPPORTS_REQUESTS; 701 - } 702 - 703 - static void validate_memory_flags(struct vb2_queue *q, 704 - int memory, 705 - u32 *flags) 706 - { 707 - if (!q->allow_cache_hints || memory != V4L2_MEMORY_MMAP) { 708 - /* 709 - * This needs to clear V4L2_MEMORY_FLAG_NON_COHERENT only, 710 - * but in order to avoid bugs we zero out all bits. 711 - */ 712 - *flags = 0; 713 - } else { 714 - /* Clear all unknown flags. */ 715 - *flags &= V4L2_MEMORY_FLAG_NON_COHERENT; 689 + if (max_num_bufs) { 690 + *max_num_bufs = q->max_num_buffers; 691 + *caps |= V4L2_BUF_CAP_SUPPORTS_MAX_NUM_BUFFERS; 716 692 } 717 693 } 718 694 ··· 709 709 int ret = vb2_verify_memory_type(q, req->memory, req->type); 710 710 u32 flags = req->flags; 711 711 712 - fill_buf_caps(q, &req->capabilities); 713 - validate_memory_flags(q, req->memory, &flags); 712 + vb2_set_flags_and_caps(q, req->memory, &flags, 713 + &req->capabilities, NULL); 714 714 req->flags = flags; 715 715 return ret ? 
ret : vb2_core_reqbufs(q, req->memory, 716 716 req->flags, &req->count); ··· 751 751 int ret = vb2_verify_memory_type(q, create->memory, f->type); 752 752 unsigned i; 753 753 754 - fill_buf_caps(q, &create->capabilities); 755 - validate_memory_flags(q, create->memory, &create->flags); 756 754 create->index = vb2_get_num_buffers(q); 757 - create->max_num_buffers = q->max_num_buffers; 758 - create->capabilities |= V4L2_BUF_CAP_SUPPORTS_MAX_NUM_BUFFERS; 755 + vb2_set_flags_and_caps(q, create->memory, &create->flags, 756 + &create->capabilities, &create->max_num_buffers); 759 757 if (create->count == 0) 760 758 return ret != -EBUSY ? ret : 0; 761 759 ··· 1004 1006 int res = vb2_verify_memory_type(vdev->queue, p->memory, p->type); 1005 1007 u32 flags = p->flags; 1006 1008 1007 - fill_buf_caps(vdev->queue, &p->capabilities); 1008 - validate_memory_flags(vdev->queue, p->memory, &flags); 1009 + vb2_set_flags_and_caps(vdev->queue, p->memory, &flags, 1010 + &p->capabilities, NULL); 1009 1011 p->flags = flags; 1010 1012 if (res) 1011 1013 return res; ··· 1024 1026 struct v4l2_create_buffers *p) 1025 1027 { 1026 1028 struct video_device *vdev = video_devdata(file); 1027 - int res = vb2_verify_memory_type(vdev->queue, p->memory, 1028 - p->format.type); 1029 + int res = vb2_verify_memory_type(vdev->queue, p->memory, p->format.type); 1029 1030 1030 - p->index = vdev->queue->num_buffers; 1031 - fill_buf_caps(vdev->queue, &p->capabilities); 1032 - validate_memory_flags(vdev->queue, p->memory, &p->flags); 1031 + p->index = vb2_get_num_buffers(vdev->queue); 1032 + vb2_set_flags_and_caps(vdev->queue, p->memory, &p->flags, 1033 + &p->capabilities, &p->max_num_buffers); 1033 1034 /* 1034 1035 * If count == 0, then just check if memory and type are valid. 1035 1036 * Any -EBUSY result from vb2_verify_memory_type can be mapped to 0.
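The videobuf2-v4l2.c hunk above folds `fill_buf_caps()` and `validate_memory_flags()` into one `vb2_set_flags_and_caps()` helper, so a caller can no longer sanitize the flags without also reporting capabilities (the `v4l_create_bufs` path had drifted out of sync). A rough userspace sketch of the consolidated shape, with invented constants standing in for the V4L2 flag and capability bits:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in bits; the real ones are V4L2_MEMORY_FLAG_* / V4L2_BUF_CAP_*. */
#define MEM_FLAG_NON_COHERENT  0x1u
#define CAP_MMAP               0x10u
#define CAP_MAX_NUM_BUFFERS    0x20u

/* One helper both sanitizes the userspace flags and reports queue
 * capabilities; max_num_bufs is optional, mirroring the NULL argument
 * the reqbufs path passes. */
void set_flags_and_caps(bool allow_cache_hints, bool is_mmap,
                        uint32_t *flags, uint32_t *caps,
                        uint32_t *max_num_bufs, uint32_t queue_max)
{
    if (!allow_cache_hints || !is_mmap)
        *flags = 0;                       /* zero all bits, to be safe */
    else
        *flags &= MEM_FLAG_NON_COHERENT;  /* clear all unknown flags */

    *caps = CAP_MMAP;
    if (max_num_bufs) {
        *max_num_bufs = queue_max;
        *caps |= CAP_MAX_NUM_BUFFERS;     /* only advertised when asked */
    }
}
```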
+1 -1
drivers/media/platform/chips-media/wave5/wave5-vpu.c
··· 272 272 }; 273 273 274 274 static const struct of_device_id wave5_dt_ids[] = { 275 - { .compatible = "ti,k3-j721s2-wave521c", .data = &ti_wave521c_data }, 275 + { .compatible = "ti,j721s2-wave521c", .data = &ti_wave521c_data }, 276 276 { /* sentinel */ } 277 277 }; 278 278 MODULE_DEVICE_TABLE(of, wave5_dt_ids);
+1 -2
drivers/net/dsa/mt7530.c
··· 2840 2840 /* MT753x MAC works in 1G full duplex mode for all up-clocked 2841 2841 * variants. 2842 2842 */ 2843 - if (interface == PHY_INTERFACE_MODE_INTERNAL || 2844 - interface == PHY_INTERFACE_MODE_TRGMII || 2843 + if (interface == PHY_INTERFACE_MODE_TRGMII || 2845 2844 (phy_interface_mode_is_8023z(interface))) { 2846 2845 speed = SPEED_1000; 2847 2846 duplex = DUPLEX_FULL;
+1 -1
drivers/net/dsa/mv88e6xxx/chip.c
··· 3659 3659 int err; 3660 3660 3661 3661 if (!chip->info->ops->phy_read_c45) 3662 - return -EOPNOTSUPP; 3662 + return 0xffff; 3663 3663 3664 3664 mv88e6xxx_reg_lock(chip); 3665 3665 err = chip->info->ops->phy_read_c45(chip, bus, phy, devad, reg, &val);
+1 -2
drivers/net/dsa/qca/qca8k-8xxx.c
··· 2051 2051 priv->info = of_device_get_match_data(priv->dev); 2052 2052 2053 2053 priv->reset_gpio = devm_gpiod_get_optional(priv->dev, "reset", 2054 - GPIOD_ASIS); 2054 + GPIOD_OUT_HIGH); 2055 2055 if (IS_ERR(priv->reset_gpio)) 2056 2056 return PTR_ERR(priv->reset_gpio); 2057 2057 2058 2058 if (priv->reset_gpio) { 2059 - gpiod_set_value_cansleep(priv->reset_gpio, 1); 2060 2059 /* The active low duration must be greater than 10 ms 2061 2060 * and checkpatch.pl wants 20 ms. 2062 2061 */
+46 -14
drivers/net/ethernet/amd/pds_core/adminq.c
··· 63 63 return nq_work; 64 64 } 65 65 66 + static bool pdsc_adminq_inc_if_up(struct pdsc *pdsc) 67 + { 68 + if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER) || 69 + pdsc->state & BIT_ULL(PDSC_S_FW_DEAD)) 70 + return false; 71 + 72 + return refcount_inc_not_zero(&pdsc->adminq_refcnt); 73 + } 74 + 66 75 void pdsc_process_adminq(struct pdsc_qcq *qcq) 67 76 { 68 77 union pds_core_adminq_comp *comp; ··· 84 75 int aq_work = 0; 85 76 int credits; 86 77 87 - /* Don't process AdminQ when shutting down */ 88 - if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) { 89 - dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n", 78 + /* Don't process AdminQ when it's not up */ 79 + if (!pdsc_adminq_inc_if_up(pdsc)) { 80 + dev_err(pdsc->dev, "%s: called while adminq is unavailable\n", 90 81 __func__); 91 82 return; 92 83 } ··· 133 124 pds_core_intr_credits(&pdsc->intr_ctrl[qcq->intx], 134 125 credits, 135 126 PDS_CORE_INTR_CRED_REARM); 127 + refcount_dec(&pdsc->adminq_refcnt); 136 128 } 137 129 138 130 void pdsc_work_thread(struct work_struct *work) ··· 145 135 146 136 irqreturn_t pdsc_adminq_isr(int irq, void *data) 147 137 { 148 - struct pdsc_qcq *qcq = data; 149 - struct pdsc *pdsc = qcq->pdsc; 138 + struct pdsc *pdsc = data; 139 + struct pdsc_qcq *qcq; 150 140 151 - /* Don't process AdminQ when shutting down */ 152 - if (pdsc->state & BIT_ULL(PDSC_S_STOPPING_DRIVER)) { 153 - dev_err(pdsc->dev, "%s: called while PDSC_S_STOPPING_DRIVER\n", 141 + /* Don't process AdminQ when it's not up */ 142 + if (!pdsc_adminq_inc_if_up(pdsc)) { 143 + dev_err(pdsc->dev, "%s: called while adminq is unavailable\n", 154 144 __func__); 155 145 return IRQ_HANDLED; 156 146 } 157 147 148 + qcq = &pdsc->adminqcq; 158 149 queue_work(pdsc->wq, &qcq->work); 159 150 pds_core_intr_mask(&pdsc->intr_ctrl[qcq->intx], PDS_CORE_INTR_MASK_CLEAR); 151 + refcount_dec(&pdsc->adminq_refcnt); 160 152 161 153 return IRQ_HANDLED; 162 154 } ··· 191 179 192 180 /* Check that the FW is running */ 193 181 if 
(!pdsc_is_fw_running(pdsc)) { 194 - u8 fw_status = ioread8(&pdsc->info_regs->fw_status); 182 + if (pdsc->info_regs) { 183 + u8 fw_status = 184 + ioread8(&pdsc->info_regs->fw_status); 195 185 196 - dev_info(pdsc->dev, "%s: post failed - fw not running %#02x:\n", 197 - __func__, fw_status); 186 + dev_info(pdsc->dev, "%s: post failed - fw not running %#02x:\n", 187 + __func__, fw_status); 188 + } else { 189 + dev_info(pdsc->dev, "%s: post failed - BARs not setup\n", 190 + __func__); 191 + } 198 192 ret = -ENXIO; 199 193 200 194 goto err_out_unlock; ··· 248 230 int err = 0; 249 231 int index; 250 232 233 + if (!pdsc_adminq_inc_if_up(pdsc)) { 234 + dev_dbg(pdsc->dev, "%s: preventing adminq cmd %u\n", 235 + __func__, cmd->opcode); 236 + return -ENXIO; 237 + } 238 + 251 239 wc.qcq = &pdsc->adminqcq; 252 240 index = __pdsc_adminq_post(pdsc, &pdsc->adminqcq, cmd, comp, &wc); 253 241 if (index < 0) { ··· 272 248 break; 273 249 274 250 if (!pdsc_is_fw_running(pdsc)) { 275 - u8 fw_status = ioread8(&pdsc->info_regs->fw_status); 251 + if (pdsc->info_regs) { 252 + u8 fw_status = 253 + ioread8(&pdsc->info_regs->fw_status); 276 254 277 - dev_dbg(pdsc->dev, "%s: post wait failed - fw not running %#02x:\n", 278 - __func__, fw_status); 255 + dev_dbg(pdsc->dev, "%s: post wait failed - fw not running %#02x:\n", 256 + __func__, fw_status); 257 + } else { 258 + dev_dbg(pdsc->dev, "%s: post wait failed - BARs not setup\n", 259 + __func__); 260 + } 279 261 err = -ENXIO; 280 262 break; 281 263 } ··· 314 284 if (err == -ENXIO || err == -ETIMEDOUT) 315 285 queue_work(pdsc->wq, &pdsc->health_work); 316 286 } 287 + 288 + refcount_dec(&pdsc->adminq_refcnt); 317 289 318 290 return err; 319 291 }
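The pds_core adminq changes above gate every entry point on `pdsc_adminq_inc_if_up()`, which ultimately relies on `refcount_inc_not_zero()`: a user may take a reference only while the count is still nonzero, i.e. while teardown has not begun. A userspace sketch of that compare-and-swap guard using C11 atomics (the kernel primitive additionally saturates and warns, which this sketch omits):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Take a reference only if the object is still live (count != 0).
 * Returns true with the count incremented, or false if the owner has
 * already dropped the last reference and teardown may be in progress. */
static bool inc_not_zero(atomic_int *ref)
{
    int old = atomic_load(ref);

    while (old != 0) {
        /* On failure, atomic_compare_exchange_weak reloads old for us. */
        if (atomic_compare_exchange_weak(ref, &old, old + 1))
            return true;      /* ref taken; caller must drop it later */
    }
    return false;             /* object already going away */
}
```

This is why the driver can replace the racy `PDSC_S_STOPPING_DRIVER` state check: once the count hits zero, no new user can sneak in.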
+36 -10
drivers/net/ethernet/amd/pds_core/core.c
··· 125 125 126 126 snprintf(name, sizeof(name), "%s-%d-%s", 127 127 PDS_CORE_DRV_NAME, pdsc->pdev->bus->number, qcq->q.name); 128 - index = pdsc_intr_alloc(pdsc, name, pdsc_adminq_isr, qcq); 128 + index = pdsc_intr_alloc(pdsc, name, pdsc_adminq_isr, pdsc); 129 129 if (index < 0) 130 130 return index; 131 131 qcq->intx = index; ··· 404 404 int numdescs; 405 405 int err; 406 406 407 - if (init) 408 - err = pdsc_dev_init(pdsc); 409 - else 410 - err = pdsc_dev_reinit(pdsc); 407 + err = pdsc_dev_init(pdsc); 411 408 if (err) 412 409 return err; 413 410 ··· 447 450 pdsc_debugfs_add_viftype(pdsc); 448 451 } 449 452 453 + refcount_set(&pdsc->adminq_refcnt, 1); 450 454 clear_bit(PDSC_S_FW_DEAD, &pdsc->state); 451 455 return 0; 452 456 ··· 462 464 463 465 if (!pdsc->pdev->is_virtfn) 464 466 pdsc_devcmd_reset(pdsc); 467 + if (pdsc->adminqcq.work.func) 468 + cancel_work_sync(&pdsc->adminqcq.work); 465 469 pdsc_qcq_free(pdsc, &pdsc->notifyqcq); 466 470 pdsc_qcq_free(pdsc, &pdsc->adminqcq); 467 471 ··· 476 476 for (i = 0; i < pdsc->nintrs; i++) 477 477 pdsc_intr_free(pdsc, i); 478 478 479 - if (removing) { 480 - kfree(pdsc->intr_info); 481 - pdsc->intr_info = NULL; 482 - } 479 + kfree(pdsc->intr_info); 480 + pdsc->intr_info = NULL; 481 + pdsc->nintrs = 0; 483 482 } 484 483 485 484 if (pdsc->kern_dbpage) { ··· 486 487 pdsc->kern_dbpage = NULL; 487 488 } 488 489 490 + pci_free_irq_vectors(pdsc->pdev); 489 491 set_bit(PDSC_S_FW_DEAD, &pdsc->state); 490 492 } 491 493 ··· 512 512 PDS_CORE_INTR_MASK_SET); 513 513 } 514 514 515 + static void pdsc_adminq_wait_and_dec_once_unused(struct pdsc *pdsc) 516 + { 517 + /* The driver initializes the adminq_refcnt to 1 when the adminq is 518 + * allocated and ready for use. Other users/requesters will increment 519 + * the refcnt while in use. If the refcnt is down to 1 then the adminq 520 + * is not in use and the refcnt can be cleared and adminq freed. 
Before 521 + * calling this function the driver will set PDSC_S_FW_DEAD, which 522 + * prevent subsequent attempts to use the adminq and increment the 523 + * refcnt to fail. This guarantees that this function will eventually 524 + * exit. 525 + */ 526 + while (!refcount_dec_if_one(&pdsc->adminq_refcnt)) { 527 + dev_dbg_ratelimited(pdsc->dev, "%s: adminq in use\n", 528 + __func__); 529 + cpu_relax(); 530 + } 531 + } 532 + 515 533 void pdsc_fw_down(struct pdsc *pdsc) 516 534 { 517 535 union pds_core_notifyq_comp reset_event = { ··· 544 526 545 527 if (pdsc->pdev->is_virtfn) 546 528 return; 529 + 530 + pdsc_adminq_wait_and_dec_once_unused(pdsc); 547 531 548 532 /* Notify clients of fw_down */ 549 533 if (pdsc->fw_reporter) ··· 597 577 598 578 static void pdsc_check_pci_health(struct pdsc *pdsc) 599 579 { 600 - u8 fw_status = ioread8(&pdsc->info_regs->fw_status); 580 + u8 fw_status; 581 + 582 + /* some sort of teardown already in progress */ 583 + if (!pdsc->info_regs) 584 + return; 585 + 586 + fw_status = ioread8(&pdsc->info_regs->fw_status); 601 587 602 588 /* is PCI broken? */ 603 589 if (fw_status != PDS_RC_BAD_PCI)
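The teardown half of the same scheme is `pdsc_adminq_wait_and_dec_once_unused()`, built on `refcount_dec_if_one()`: the owner's initial reference is 1, so the count falls back to 1 only when no user holds the adminq, and decrementing 1 to 0 atomically claims exclusive teardown. A minimal sketch with C11 atomics (the busy-wait stands in for the driver's `cpu_relax()` loop; real code would also ratelimit a debug message):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Succeeds only when we are the sole holder: 1 -> 0, claiming teardown. */
static bool dec_if_one(atomic_int *ref)
{
    int expected = 1;
    return atomic_compare_exchange_strong(ref, &expected, 0);
}

/* Spin until the last user drops out.  This terminates only because the
 * owner has already blocked new users (PDSC_S_FW_DEAD in the driver). */
static void wait_until_unused(atomic_int *ref)
{
    while (!dec_if_one(ref))
        ;   /* cpu_relax() in the kernel version */
}
```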
+1 -1
drivers/net/ethernet/amd/pds_core/core.h
··· 184 184 struct mutex devcmd_lock; /* lock for dev_cmd operations */ 185 185 struct mutex config_lock; /* lock for configuration operations */ 186 186 spinlock_t adminq_lock; /* lock for adminq operations */ 187 + refcount_t adminq_refcnt; 187 188 struct pds_core_dev_info_regs __iomem *info_regs; 188 189 struct pds_core_dev_cmd_regs __iomem *cmd_regs; 189 190 struct pds_core_intr __iomem *intr_ctrl; ··· 281 280 union pds_core_dev_comp *comp, int max_seconds); 282 281 int pdsc_devcmd_init(struct pdsc *pdsc); 283 282 int pdsc_devcmd_reset(struct pdsc *pdsc); 284 - int pdsc_dev_reinit(struct pdsc *pdsc); 285 283 int pdsc_dev_init(struct pdsc *pdsc); 286 284 287 285 void pdsc_reset_prepare(struct pci_dev *pdev);
+4
drivers/net/ethernet/amd/pds_core/debugfs.c
··· 64 64 65 65 void pdsc_debugfs_add_ident(struct pdsc *pdsc) 66 66 { 67 + /* This file will already exist in the reset flow */ 68 + if (debugfs_lookup("identity", pdsc->dentry)) 69 + return; 70 + 67 71 debugfs_create_file("identity", 0400, pdsc->dentry, 68 72 pdsc, &identity_fops); 69 73 }
+8 -8
drivers/net/ethernet/amd/pds_core/dev.c
··· 57 57 58 58 bool pdsc_is_fw_running(struct pdsc *pdsc) 59 59 { 60 + if (!pdsc->info_regs) 61 + return false; 62 + 60 63 pdsc->fw_status = ioread8(&pdsc->info_regs->fw_status); 61 64 pdsc->last_fw_time = jiffies; 62 65 pdsc->last_hb = ioread32(&pdsc->info_regs->fw_heartbeat); ··· 185 182 { 186 183 int err; 187 184 185 + if (!pdsc->cmd_regs) 186 + return -ENXIO; 187 + 188 188 memcpy_toio(&pdsc->cmd_regs->cmd, cmd, sizeof(*cmd)); 189 189 pdsc_devcmd_dbell(pdsc); 190 190 err = pdsc_devcmd_wait(pdsc, cmd->opcode, max_seconds); 191 - memcpy_fromio(comp, &pdsc->cmd_regs->comp, sizeof(*comp)); 192 191 193 192 if ((err == -ENXIO || err == -ETIMEDOUT) && pdsc->wq) 194 193 queue_work(pdsc->wq, &pdsc->health_work); 194 + else 195 + memcpy_fromio(comp, &pdsc->cmd_regs->comp, sizeof(*comp)); 195 196 196 197 return err; 197 198 } ··· 314 307 (u8)pdsc->dev_info.fw_version[3]); 315 308 316 309 return 0; 317 - } 318 - 319 - int pdsc_dev_reinit(struct pdsc *pdsc) 320 - { 321 - pdsc_init_devinfo(pdsc); 322 - 323 - return pdsc_identify(pdsc); 324 310 } 325 311 326 312 int pdsc_dev_init(struct pdsc *pdsc)
+2 -1
drivers/net/ethernet/amd/pds_core/devlink.c
··· 111 111 112 112 mutex_lock(&pdsc->devcmd_lock); 113 113 err = pdsc_devcmd_locked(pdsc, &cmd, &comp, pdsc->devcmd_timeout * 2); 114 - memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list)); 114 + if (!err) 115 + memcpy_fromio(&fw_list, pdsc->cmd_regs->data, sizeof(fw_list)); 115 116 mutex_unlock(&pdsc->devcmd_lock); 116 117 if (err && err != -EIO) 117 118 return err;
+3
drivers/net/ethernet/amd/pds_core/fw.c
··· 107 107 108 108 dev_info(pdsc->dev, "Installing firmware\n"); 109 109 110 + if (!pdsc->cmd_regs) 111 + return -ENXIO; 112 + 110 113 dl = priv_to_devlink(pdsc); 111 114 devlink_flash_update_status_notify(dl, "Preparing to flash", 112 115 NULL, 0, 0);
+22 -4
drivers/net/ethernet/amd/pds_core/main.c
··· 37 37 struct pdsc_dev_bar *bars = pdsc->bars; 38 38 unsigned int i; 39 39 40 + pdsc->info_regs = NULL; 41 + pdsc->cmd_regs = NULL; 42 + pdsc->intr_status = NULL; 43 + pdsc->intr_ctrl = NULL; 44 + 40 45 for (i = 0; i < PDS_CORE_BARS_MAX; i++) { 41 46 if (bars[i].vaddr) 42 47 pci_iounmap(pdsc->pdev, bars[i].vaddr); ··· 298 293 err_out_teardown: 299 294 pdsc_teardown(pdsc, PDSC_TEARDOWN_REMOVING); 300 295 err_out_unmap_bars: 301 - del_timer_sync(&pdsc->wdtimer); 296 + timer_shutdown_sync(&pdsc->wdtimer); 302 297 if (pdsc->wq) 303 298 destroy_workqueue(pdsc->wq); 304 299 mutex_destroy(&pdsc->config_lock); ··· 425 420 */ 426 421 pdsc_sriov_configure(pdev, 0); 427 422 428 - del_timer_sync(&pdsc->wdtimer); 423 + timer_shutdown_sync(&pdsc->wdtimer); 429 424 if (pdsc->wq) 430 425 destroy_workqueue(pdsc->wq); 431 426 ··· 438 433 mutex_destroy(&pdsc->config_lock); 439 434 mutex_destroy(&pdsc->devcmd_lock); 440 435 441 - pci_free_irq_vectors(pdev); 442 436 pdsc_unmap_bars(pdsc); 443 437 pci_release_regions(pdev); 444 438 } ··· 449 445 devlink_free(dl); 450 446 } 451 447 448 + static void pdsc_stop_health_thread(struct pdsc *pdsc) 449 + { 450 + timer_shutdown_sync(&pdsc->wdtimer); 451 + if (pdsc->health_work.func) 452 + cancel_work_sync(&pdsc->health_work); 453 + } 454 + 455 + static void pdsc_restart_health_thread(struct pdsc *pdsc) 456 + { 457 + timer_setup(&pdsc->wdtimer, pdsc_wdtimer_cb, 0); 458 + mod_timer(&pdsc->wdtimer, jiffies + 1); 459 + } 460 + 452 461 void pdsc_reset_prepare(struct pci_dev *pdev) 453 462 { 454 463 struct pdsc *pdsc = pci_get_drvdata(pdev); 455 464 465 + pdsc_stop_health_thread(pdsc); 456 466 pdsc_fw_down(pdsc); 457 467 458 - pci_free_irq_vectors(pdev); 459 468 pdsc_unmap_bars(pdsc); 460 469 pci_release_regions(pdev); 461 470 pci_disable_device(pdev); ··· 503 486 } 504 487 505 488 pdsc_fw_up(pdsc); 489 + pdsc_restart_health_thread(pdsc); 506 490 } 507 491 508 492 static const struct pci_error_handlers pdsc_err_handler = {
+1 -1
drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
··· 684 684 timestamp.hwtstamp = ns_to_ktime(ns); 685 685 skb_tstamp_tx(ptp->tx_skb, &timestamp); 686 686 } else { 687 - netdev_WARN_ONCE(bp->dev, 687 + netdev_warn_once(bp->dev, 688 688 "TS query for TX timer failed rc = %x\n", rc); 689 689 } 690 690
+4 -4
drivers/net/ethernet/google/gve/gve_rx.c
··· 402 402 403 403 static struct sk_buff *gve_rx_add_frags(struct napi_struct *napi, 404 404 struct gve_rx_slot_page_info *page_info, 405 - u16 packet_buffer_size, u16 len, 405 + unsigned int truesize, u16 len, 406 406 struct gve_rx_ctx *ctx) 407 407 { 408 408 u32 offset = page_info->page_offset + page_info->pad; ··· 435 435 if (skb != ctx->skb_head) { 436 436 ctx->skb_head->len += len; 437 437 ctx->skb_head->data_len += len; 438 - ctx->skb_head->truesize += packet_buffer_size; 438 + ctx->skb_head->truesize += truesize; 439 439 } 440 440 skb_add_rx_frag(skb, num_frags, page_info->page, 441 - offset, len, packet_buffer_size); 441 + offset, len, truesize); 442 442 443 443 return ctx->skb_head; 444 444 } ··· 532 532 533 533 memcpy(alloc_page_info.page_address, src, page_info->pad + len); 534 534 skb = gve_rx_add_frags(napi, &alloc_page_info, 535 - rx->packet_buffer_size, 535 + PAGE_SIZE, 536 536 len, ctx); 537 537 538 538 u64_stats_update_begin(&rx->statss);
+20
drivers/net/ethernet/intel/e1000e/e1000.h
··· 360 360 * As a result, a shift of INCVALUE_SHIFT_n is used to fit a value of 361 361 * INCVALUE_n into the TIMINCA register allowing 32+8+(24-INCVALUE_SHIFT_n) 362 362 * bits to count nanoseconds leaving the rest for fractional nanoseconds. 363 + * 364 + * Any given INCVALUE also has an associated maximum adjustment value. This 365 + * maximum adjustment value is the largest increase (or decrease) which can be 366 + * safely applied without overflowing the INCVALUE. Since INCVALUE has 367 + * a maximum range of 24 bits, its largest value is 0xFFFFFF. 368 + * 369 + * To understand where the maximum value comes from, consider the following 370 + * equation: 371 + * 372 + * new_incval = base_incval + (base_incval * adjustment) / 1billion 373 + * 374 + * To avoid overflow that means: 375 + * max_incval = base_incval + (base_incval * max_adj) / 1billion 376 + * 377 + * Re-arranging: 378 + * max_adj = floor(((max_incval - base_incval) * 1billion) / base_incval) 363 379 */ 364 380 #define INCVALUE_96MHZ 125 365 381 #define INCVALUE_SHIFT_96MHZ 17 366 382 #define INCPERIOD_SHIFT_96MHZ 2 367 383 #define INCPERIOD_96MHZ (12 >> INCPERIOD_SHIFT_96MHZ) 384 + #define MAX_PPB_96MHZ 23999900 /* 23,999,900 ppb */ 368 385 369 386 #define INCVALUE_25MHZ 40 370 387 #define INCVALUE_SHIFT_25MHZ 18 371 388 #define INCPERIOD_25MHZ 1 389 + #define MAX_PPB_25MHZ 599999900 /* 599,999,900 ppb */ 372 390 373 391 #define INCVALUE_24MHZ 125 374 392 #define INCVALUE_SHIFT_24MHZ 14 375 393 #define INCPERIOD_24MHZ 3 394 + #define MAX_PPB_24MHZ 999999999 /* 999,999,999 ppb */ 376 395 377 396 #define INCVALUE_38400KHZ 26 378 397 #define INCVALUE_SHIFT_38400KHZ 19 379 398 #define INCPERIOD_38400KHZ 1 399 + #define MAX_PPB_38400KHZ 230769100 /* 230,769,100 ppb */ 380 400 381 401 /* Another drawback of scaling the incvalue by a large factor is the 382 402 * 64-bit SYSTIM register overflows more quickly. This is dealt with
+15 -7
drivers/net/ethernet/intel/e1000e/ptp.c
··· 280 280 281 281 switch (hw->mac.type) { 282 282 case e1000_pch2lan: 283 + adapter->ptp_clock_info.max_adj = MAX_PPB_96MHZ; 284 + break; 283 285 case e1000_pch_lpt: 286 + if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) 287 + adapter->ptp_clock_info.max_adj = MAX_PPB_96MHZ; 288 + else 289 + adapter->ptp_clock_info.max_adj = MAX_PPB_25MHZ; 290 + break; 284 291 case e1000_pch_spt: 292 + adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ; 293 + break; 285 294 case e1000_pch_cnp: 286 295 case e1000_pch_tgp: 287 296 case e1000_pch_adp: ··· 298 289 case e1000_pch_lnp: 299 290 case e1000_pch_ptp: 300 291 case e1000_pch_nvp: 301 - if ((hw->mac.type < e1000_pch_lpt) || 302 - (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI)) { 303 - adapter->ptp_clock_info.max_adj = 24000000 - 1; 304 - break; 305 - } 306 - fallthrough; 292 + if (er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_SYSCFI) 293 + adapter->ptp_clock_info.max_adj = MAX_PPB_24MHZ; 294 + else 295 + adapter->ptp_clock_info.max_adj = MAX_PPB_38400KHZ; 296 + break; 307 297 case e1000_82574: 308 298 case e1000_82583: 309 - adapter->ptp_clock_info.max_adj = 600000000 - 1; 299 + adapter->ptp_clock_info.max_adj = MAX_PPB_25MHZ; 310 300 break; 311 301 default: 312 302 break;
+1 -1
drivers/net/ethernet/intel/idpf/virtchnl2.h
··· 978 978 u8 proto_id_count; 979 979 __le16 pad; 980 980 __le16 proto_id[]; 981 - }; 981 + } __packed __aligned(2); 982 982 VIRTCHNL2_CHECK_STRUCT_LEN(6, virtchnl2_ptype); 983 983 984 984 /**
+2 -1
drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
··· 716 716 if ((command & IXGBE_SB_IOSF_CTRL_RESP_STAT_MASK) != 0) { 717 717 error = FIELD_GET(IXGBE_SB_IOSF_CTRL_CMPL_ERR_MASK, command); 718 718 hw_dbg(hw, "Failed to read, error %x\n", error); 719 - return -EIO; 719 + ret = -EIO; 720 + goto out; 720 721 } 721 722 722 723 if (!ret)
-1
drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
··· 314 314 pfvf->hw.tx_queues = channel->tx_count; 315 315 if (pfvf->xdp_prog) 316 316 pfvf->hw.xdp_queues = channel->rx_count; 317 - pfvf->hw.non_qos_queues = pfvf->hw.tx_queues + pfvf->hw.xdp_queues; 318 317 319 318 if (if_up) 320 319 err = dev->netdev_ops->ndo_open(dev);
+1 -2
drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
··· 1744 1744 /* RQ and SQs are mapped to different CQs, 1745 1745 * so find out max CQ IRQs (i.e CINTs) needed. 1746 1746 */ 1747 + pf->hw.non_qos_queues = pf->hw.tx_queues + pf->hw.xdp_queues; 1747 1748 pf->hw.cint_cnt = max3(pf->hw.rx_queues, pf->hw.tx_queues, 1748 1749 pf->hw.tc_tx_queues); 1749 1750 ··· 2643 2642 pf->hw.xdp_queues = 0; 2644 2643 xdp_features_clear_redirect_target(dev); 2645 2644 } 2646 - 2647 - pf->hw.non_qos_queues += pf->hw.xdp_queues; 2648 2645 2649 2646 if (if_up) 2650 2647 otx2_open(pf->netdev);
+3 -4
drivers/net/ethernet/marvell/octeontx2/nic/otx2_txrx.c
··· 1403 1403 struct otx2_cq_queue *cq, 1404 1404 bool *need_xdp_flush) 1405 1405 { 1406 - unsigned char *hard_start, *data; 1406 + unsigned char *hard_start; 1407 1407 int qidx = cq->cq_idx; 1408 1408 struct xdp_buff xdp; 1409 1409 struct page *page; ··· 1417 1417 1418 1418 xdp_init_buff(&xdp, pfvf->rbsize, &cq->xdp_rxq); 1419 1419 1420 - data = (unsigned char *)phys_to_virt(pa); 1421 - hard_start = page_address(page); 1422 - xdp_prepare_buff(&xdp, hard_start, data - hard_start, 1420 + hard_start = (unsigned char *)phys_to_virt(pa); 1421 + xdp_prepare_buff(&xdp, hard_start, OTX2_HEAD_ROOM, 1423 1422 cqe->sg.seg_size, false); 1424 1423 1425 1424 act = bpf_prog_run_xdp(prog, &xdp);
+4 -1
drivers/net/ethernet/mediatek/mtk_eth_soc.c
··· 4761 4761 } 4762 4762 4763 4763 if (MTK_HAS_CAPS(eth->soc->caps, MTK_36BIT_DMA)) { 4764 - err = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(36)); 4764 + err = dma_set_mask(&pdev->dev, DMA_BIT_MASK(36)); 4765 + if (!err) 4766 + err = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 4767 + 4765 4768 if (err) { 4766 4769 dev_err(&pdev->dev, "Wrong DMA config\n"); 4767 4770 return -EINVAL;
+3 -2
drivers/net/ethernet/microchip/lan966x/lan966x_port.c
··· 168 168 lan966x_taprio_speed_set(port, config->speed); 169 169 170 170 /* Also the GIGA_MODE_ENA(1) needs to be set regardless of the 171 - * port speed for QSGMII ports. 171 + * port speed for QSGMII or SGMII ports. 172 172 */ 173 - if (phy_interface_num_ports(config->portmode) == 4) 173 + if (phy_interface_num_ports(config->portmode) == 4 || 174 + config->portmode == PHY_INTERFACE_MODE_SGMII) 174 175 mode = DEV_MAC_MODE_CFG_GIGA_MODE_ENA_SET(1); 175 176 176 177 lan_wr(config->duplex | mode,
+43 -3
drivers/net/ethernet/netronome/nfp/flower/conntrack.c
··· 1424 1424 mangle_action->mangle.mask = (__force u32)cpu_to_be32(mangle_action->mangle.mask); 1425 1425 return; 1426 1426 1427 + /* Both struct tcphdr and struct udphdr start with 1428 + * __be16 source; 1429 + * __be16 dest; 1430 + * so we can use the same code for both. 1431 + */ 1427 1432 case FLOW_ACT_MANGLE_HDR_TYPE_TCP: 1428 1433 case FLOW_ACT_MANGLE_HDR_TYPE_UDP: 1429 - mangle_action->mangle.val = (__force u16)cpu_to_be16(mangle_action->mangle.val); 1430 - mangle_action->mangle.mask = (__force u16)cpu_to_be16(mangle_action->mangle.mask); 1434 + if (mangle_action->mangle.offset == offsetof(struct tcphdr, source)) { 1435 + mangle_action->mangle.val = 1436 + (__force u32)cpu_to_be32(mangle_action->mangle.val << 16); 1437 + /* The mangle action's mask is an inverse mask, 1438 + * so set the dest tp port bits to 0xFFFF 1439 + * instead of using a rotate-left operation. 1440 + */ 1441 + mangle_action->mangle.mask = 1442 + (__force u32)cpu_to_be32(mangle_action->mangle.mask << 16 | 0xFFFF); 1443 + } 1444 + if (mangle_action->mangle.offset == offsetof(struct tcphdr, dest)) { 1445 + mangle_action->mangle.offset = 0; 1446 + mangle_action->mangle.val = 1447 + (__force u32)cpu_to_be32(mangle_action->mangle.val); 1448 + mangle_action->mangle.mask = 1449 + (__force u32)cpu_to_be32(mangle_action->mangle.mask); 1450 + } 1451 + return; 1432 1452 1433 1453 default: ··· 1884 1864 { 1885 1865 struct flow_rule *rule = flow_cls_offload_flow_rule(flow); 1886 1866 struct nfp_fl_ct_flow_entry *ct_entry; 1867 + struct flow_action_entry *ct_goto; 1888 1868 struct nfp_fl_ct_zone_entry *zt; 1869 + struct flow_action_entry *act; 1889 1871 bool wildcarded = false; 1890 1872 struct flow_match_ct ct; 1890 - struct flow_action_entry *ct_goto; 1872 + int i; 1873 + 1874 + flow_action_for_each(i, act, &rule->action) { 1875 + switch (act->id) { 1876 + case FLOW_ACTION_REDIRECT: 1877 + case FLOW_ACTION_REDIRECT_INGRESS: 1878 + case FLOW_ACTION_MIRRED: 1879 + case FLOW_ACTION_MIRRED_INGRESS: 1880 + 
if (act->dev->rtnl_link_ops && 1881 + !strcmp(act->dev->rtnl_link_ops->kind, "openvswitch")) { 1882 + NL_SET_ERR_MSG_MOD(extack, 1883 + "unsupported offload: out port is openvswitch internal port"); 1884 + return -EOPNOTSUPP; 1885 + } 1886 + break; 1887 + default: 1888 + break; 1889 + } 1890 + } 1891 1891 1892 1892 flow_rule_match_ct(rule, &ct); 1893 1893 if (!ct.mask->ct_zone) {
+4
drivers/net/ethernet/stmicro/stmmac/dwmac-imx.c
··· 353 353 if (data->flags & STMMAC_FLAG_HWTSTAMP_CORRECT_LATENCY) 354 354 plat_dat->flags |= STMMAC_FLAG_HWTSTAMP_CORRECT_LATENCY; 355 355 356 + /* Default TX Q0 to use TSO and rest TXQ for TBS */ 357 + for (int i = 1; i < plat_dat->tx_queues_to_use; i++) 358 + plat_dat->tx_queues_cfg[i].tbs_en = 1; 359 + 356 360 plat_dat->host_dma_width = dwmac->ops->addr_width; 357 361 plat_dat->init = imx_dwmac_init; 358 362 plat_dat->exit = imx_dwmac_exit;
+3
drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
··· 3939 3939 priv->rx_copybreak = STMMAC_RX_COPYBREAK; 3940 3940 3941 3941 buf_sz = dma_conf->dma_buf_sz; 3942 + for (int i = 0; i < MTL_MAX_TX_QUEUES; i++) 3943 + if (priv->dma_conf.tx_queue[i].tbs & STMMAC_TBS_EN) 3944 + dma_conf->tx_queue[i].tbs = priv->dma_conf.tx_queue[i].tbs; 3942 3945 memcpy(&priv->dma_conf, dma_conf, sizeof(*dma_conf)); 3943 3946 3944 3947 stmmac_reset_queues_param(priv);
+4 -1
drivers/net/hyperv/netvsc.c
··· 708 708 /* Disable NAPI and disassociate its context from the device. */ 709 709 for (i = 0; i < net_device->num_chn; i++) { 710 710 /* See also vmbus_reset_channel_cb(). */ 711 - napi_disable(&net_device->chan_table[i].napi); 711 + /* only disable enabled NAPI channel */ 712 + if (i < ndev->real_num_rx_queues) 713 + napi_disable(&net_device->chan_table[i].napi); 714 + 712 715 netif_napi_del(&net_device->chan_table[i].napi); 713 716 } 714 717
+81 -66
drivers/net/phy/mediatek-ge-soc.c
··· 489 489 u16 reg, val; 490 490 491 491 if (phydev->drv->phy_id == MTK_GPHY_ID_MT7988) 492 - bias = -2; 492 + bias = -1; 493 493 494 494 val = clamp_val(bias + tx_r50_cal_val, 0, 63); 495 495 ··· 705 705 static void mt798x_phy_common_finetune(struct phy_device *phydev) 706 706 { 707 707 phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_52B5); 708 + /* SlvDSPreadyTime = 24, MasDSPreadyTime = 24 */ 709 + __phy_write(phydev, 0x11, 0xc71); 710 + __phy_write(phydev, 0x12, 0xc); 711 + __phy_write(phydev, 0x10, 0x8fae); 712 + 708 713 /* EnabRandUpdTrig = 1 */ 709 714 __phy_write(phydev, 0x11, 0x2f00); 710 715 __phy_write(phydev, 0x12, 0xe); ··· 720 715 __phy_write(phydev, 0x12, 0x0); 721 716 __phy_write(phydev, 0x10, 0x83aa); 722 717 723 - /* TrFreeze = 0 */ 718 + /* FfeUpdGainForce = 1(Enable), FfeUpdGainForceVal = 4 */ 719 + __phy_write(phydev, 0x11, 0x240); 720 + __phy_write(phydev, 0x12, 0x0); 721 + __phy_write(phydev, 0x10, 0x9680); 722 + 723 + /* TrFreeze = 0 (mt7988 default) */ 724 724 __phy_write(phydev, 0x11, 0x0); 725 725 __phy_write(phydev, 0x12, 0x0); 726 726 __phy_write(phydev, 0x10, 0x9686); 727 727 728 + /* SSTrKp100 = 5 */ 729 + /* SSTrKf100 = 6 */ 730 + /* SSTrKp1000Mas = 5 */ 731 + /* SSTrKf1000Mas = 6 */ 728 732 /* SSTrKp1000Slv = 5 */ 733 + /* SSTrKf1000Slv = 6 */ 729 734 __phy_write(phydev, 0x11, 0xbaef); 730 735 __phy_write(phydev, 0x12, 0x2e); 731 736 __phy_write(phydev, 0x10, 0x968c); 737 + phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0); 738 + } 739 + 740 + static void mt7981_phy_finetune(struct phy_device *phydev) 741 + { 742 + u16 val[8] = { 0x01ce, 0x01c1, 743 + 0x020f, 0x0202, 744 + 0x03d0, 0x03c0, 745 + 0x0013, 0x0005 }; 746 + int i, k; 747 + 748 + /* 100M eye finetune: 749 + * Keep middle level of TX MLT3 shapper as default. 750 + * Only change TX MLT3 overshoot level here. 
751 + */ 752 + for (k = 0, i = 1; i < 12; i++) { 753 + if (i % 3 == 0) 754 + continue; 755 + phy_write_mmd(phydev, MDIO_MMD_VEND1, i, val[k++]); 756 + } 757 + 758 + phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_52B5); 759 + /* ResetSyncOffset = 6 */ 760 + __phy_write(phydev, 0x11, 0x600); 761 + __phy_write(phydev, 0x12, 0x0); 762 + __phy_write(phydev, 0x10, 0x8fc0); 763 + 764 + /* VgaDecRate = 1 */ 765 + __phy_write(phydev, 0x11, 0x4c2a); 766 + __phy_write(phydev, 0x12, 0x3e); 767 + __phy_write(phydev, 0x10, 0x8fa4); 732 768 733 769 /* MrvlTrFix100Kp = 3, MrvlTrFix100Kf = 2, 734 770 * MrvlTrFix1000Kp = 3, MrvlTrFix1000Kf = 2 ··· 784 738 __phy_write(phydev, 0x10, 0x8ec0); 785 739 phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0); 786 740 787 - /* TR_OPEN_LOOP_EN = 1, lpf_x_average = 9*/ 741 + /* TR_OPEN_LOOP_EN = 1, lpf_x_average = 9 */ 788 742 phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG234, 789 743 MTK_PHY_TR_OPEN_LOOP_EN_MASK | MTK_PHY_LPF_X_AVERAGE_MASK, 790 744 BIT(0) | FIELD_PREP(MTK_PHY_LPF_X_AVERAGE_MASK, 0x9)); ··· 817 771 phy_write_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_LDO_OUTPUT_V, 0x2222); 818 772 } 819 773 820 - static void mt7981_phy_finetune(struct phy_device *phydev) 821 - { 822 - u16 val[8] = { 0x01ce, 0x01c1, 823 - 0x020f, 0x0202, 824 - 0x03d0, 0x03c0, 825 - 0x0013, 0x0005 }; 826 - int i, k; 827 - 828 - /* 100M eye finetune: 829 - * Keep middle level of TX MLT3 shapper as default. 830 - * Only change TX MLT3 overshoot level here. 
831 - */ 832 - for (k = 0, i = 1; i < 12; i++) { 833 - if (i % 3 == 0) 834 - continue; 835 - phy_write_mmd(phydev, MDIO_MMD_VEND1, i, val[k++]); 836 - } 837 - 838 - phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_52B5); 839 - /* SlvDSPreadyTime = 24, MasDSPreadyTime = 24 */ 840 - __phy_write(phydev, 0x11, 0xc71); 841 - __phy_write(phydev, 0x12, 0xc); 842 - __phy_write(phydev, 0x10, 0x8fae); 843 - 844 - /* ResetSyncOffset = 6 */ 845 - __phy_write(phydev, 0x11, 0x600); 846 - __phy_write(phydev, 0x12, 0x0); 847 - __phy_write(phydev, 0x10, 0x8fc0); 848 - 849 - /* VgaDecRate = 1 */ 850 - __phy_write(phydev, 0x11, 0x4c2a); 851 - __phy_write(phydev, 0x12, 0x3e); 852 - __phy_write(phydev, 0x10, 0x8fa4); 853 - 854 - /* FfeUpdGainForce = 4 */ 855 - __phy_write(phydev, 0x11, 0x240); 856 - __phy_write(phydev, 0x12, 0x0); 857 - __phy_write(phydev, 0x10, 0x9680); 858 - 859 - phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0); 860 - } 861 - 862 774 static void mt7988_phy_finetune(struct phy_device *phydev) 863 775 { 864 776 u16 val[12] = { 0x0187, 0x01cd, 0x01c8, 0x0182, ··· 831 827 /* TCT finetune */ 832 828 phy_write_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_TX_FILTER, 0x5); 833 829 834 - /* Disable TX power saving */ 835 - phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RXADC_CTRL_RG7, 836 - MTK_PHY_DA_AD_BUF_BIAS_LP_MASK, 0x3 << 8); 837 - 838 830 phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_52B5); 839 - 840 - /* SlvDSPreadyTime = 24, MasDSPreadyTime = 12 */ 841 - __phy_write(phydev, 0x11, 0x671); 842 - __phy_write(phydev, 0x12, 0xc); 843 - __phy_write(phydev, 0x10, 0x8fae); 844 - 845 831 /* ResetSyncOffset = 5 */ 846 832 __phy_write(phydev, 0x11, 0x500); 847 833 __phy_write(phydev, 0x12, 0x0); ··· 839 845 840 846 /* VgaDecRate is 1 at default on mt7988 */ 841 847 848 + /* MrvlTrFix100Kp = 6, MrvlTrFix100Kf = 7, 849 + * MrvlTrFix1000Kp = 6, MrvlTrFix1000Kf = 7 850 + */ 851 + __phy_write(phydev, 0x11, 0xb90a); 852 + __phy_write(phydev, 0x12, 0x6f); 853 + __phy_write(phydev, 0x10, 
0x8f82); 854 + 855 + /* RemAckCntLimitCtrl = 1 */ 856 + __phy_write(phydev, 0x11, 0xfbba); 857 + __phy_write(phydev, 0x12, 0xc3); 858 + __phy_write(phydev, 0x10, 0x87f8); 859 + 842 860 phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0); 843 861 844 - phy_select_page(phydev, MTK_PHY_PAGE_EXTENDED_2A30); 845 - /* TxClkOffset = 2 */ 846 - __phy_modify(phydev, MTK_PHY_ANARG_RG, MTK_PHY_TCLKOFFSET_MASK, 847 - FIELD_PREP(MTK_PHY_TCLKOFFSET_MASK, 0x2)); 848 - phy_restore_page(phydev, MTK_PHY_PAGE_STANDARD, 0); 862 + /* TR_OPEN_LOOP_EN = 1, lpf_x_average = 10 */ 863 + phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG234, 864 + MTK_PHY_TR_OPEN_LOOP_EN_MASK | MTK_PHY_LPF_X_AVERAGE_MASK, 865 + BIT(0) | FIELD_PREP(MTK_PHY_LPF_X_AVERAGE_MASK, 0xa)); 866 + 867 + /* rg_tr_lpf_cnt_val = 1023 */ 868 + phy_write_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_LPF_CNT_VAL, 0x3ff); 849 869 } 850 870 851 871 static void mt798x_phy_eee(struct phy_device *phydev) ··· 892 884 MTK_PHY_LPI_SLV_SEND_TX_EN, 893 885 FIELD_PREP(MTK_PHY_LPI_SLV_SEND_TX_TIMER_MASK, 0x120)); 894 886 895 - phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG239, 896 - MTK_PHY_LPI_SEND_LOC_TIMER_MASK | 897 - MTK_PHY_LPI_TXPCS_LOC_RCV, 898 - FIELD_PREP(MTK_PHY_LPI_SEND_LOC_TIMER_MASK, 0x117)); 887 + /* Keep MTK_PHY_LPI_SEND_LOC_TIMER as 375 */ 888 + phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG239, 889 + MTK_PHY_LPI_TXPCS_LOC_RCV); 899 890 891 + /* This also fixes some IoT issues, such as CH340 */ 900 892 phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RG_DEV1E_REG2C7, 901 893 MTK_PHY_MAX_GAIN_MASK | MTK_PHY_MIN_GAIN_MASK, 902 894 FIELD_PREP(MTK_PHY_MAX_GAIN_MASK, 0x8) | ··· 930 922 __phy_write(phydev, 0x12, 0x0); 931 923 __phy_write(phydev, 0x10, 0x9690); 932 924 933 - /* REG_EEE_st2TrKf1000 = 3 */ 925 + /* REG_EEE_st2TrKf1000 = 2 */ 934 926 __phy_write(phydev, 0x11, 0x114f); 935 927 __phy_write(phydev, 0x12, 0x2); 936 928 __phy_write(phydev, 0x10, 0x969a); ··· 955 947 __phy_write(phydev, 
0x12, 0x0); 956 948 __phy_write(phydev, 0x10, 0x96b8); 957 949 958 - /* REGEEE_wake_slv_tr_wait_dfesigdet_en = 1 */ 950 + /* REGEEE_wake_slv_tr_wait_dfesigdet_en = 0 */ 959 951 __phy_write(phydev, 0x11, 0x1463); 960 952 __phy_write(phydev, 0x12, 0x0); 961 953 __phy_write(phydev, 0x10, 0x96ca); ··· 1466 1458 err = mt7988_phy_fix_leds_polarities(phydev); 1467 1459 if (err) 1468 1460 return err; 1461 + 1462 + /* Disable TX power saving at probing to: 1463 + * 1. Meet common mode compliance test criteria 1464 + * 2. Make sure that TX-VCM calibration works fine 1465 + */ 1466 + phy_modify_mmd(phydev, MDIO_MMD_VEND1, MTK_PHY_RXADC_CTRL_RG7, 1467 + MTK_PHY_DA_AD_BUF_BIAS_LP_MASK, 0x3 << 8); 1469 1468 1470 1469 return mt798x_phy_calibration(phydev); 1471 1470 }
+48 -52
drivers/net/xen-netback/netback.c
··· 104 104 module_param(provides_xdp_headroom, bool, 0644); 105 105 106 106 static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx, 107 - u8 status); 107 + s8 status); 108 108 109 109 static void make_tx_response(struct xenvif_queue *queue, 110 - struct xen_netif_tx_request *txp, 110 + const struct xen_netif_tx_request *txp, 111 111 unsigned int extra_count, 112 - s8 st); 113 - static void push_tx_responses(struct xenvif_queue *queue); 112 + s8 status); 114 113 115 114 static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx); 116 115 ··· 207 208 unsigned int extra_count, RING_IDX end) 208 209 { 209 210 RING_IDX cons = queue->tx.req_cons; 210 - unsigned long flags; 211 211 212 212 do { 213 - spin_lock_irqsave(&queue->response_lock, flags); 214 213 make_tx_response(queue, txp, extra_count, XEN_NETIF_RSP_ERROR); 215 - push_tx_responses(queue); 216 - spin_unlock_irqrestore(&queue->response_lock, flags); 217 214 if (cons == end) 218 215 break; 219 216 RING_COPY_REQUEST(&queue->tx, cons++, txp); ··· 460 465 for (shinfo->nr_frags = 0; nr_slots > 0 && shinfo->nr_frags < MAX_SKB_FRAGS; 461 466 nr_slots--) { 462 467 if (unlikely(!txp->size)) { 463 - unsigned long flags; 464 - 465 - spin_lock_irqsave(&queue->response_lock, flags); 466 468 make_tx_response(queue, txp, 0, XEN_NETIF_RSP_OKAY); 467 - push_tx_responses(queue); 468 - spin_unlock_irqrestore(&queue->response_lock, flags); 469 469 ++txp; 470 470 continue; 471 471 } ··· 486 496 487 497 for (shinfo->nr_frags = 0; shinfo->nr_frags < nr_slots; ++txp) { 488 498 if (unlikely(!txp->size)) { 489 - unsigned long flags; 490 - 491 - spin_lock_irqsave(&queue->response_lock, flags); 492 499 make_tx_response(queue, txp, 0, 493 500 XEN_NETIF_RSP_OKAY); 494 - push_tx_responses(queue); 495 - spin_unlock_irqrestore(&queue->response_lock, 496 - flags); 497 501 continue; 498 502 } 499 503 ··· 979 995 (ret == 0) ? 
980 996 XEN_NETIF_RSP_OKAY : 981 997 XEN_NETIF_RSP_ERROR); 982 - push_tx_responses(queue); 983 998 continue; 984 999 } 985 1000 ··· 990 1007 991 1008 make_tx_response(queue, &txreq, extra_count, 992 1009 XEN_NETIF_RSP_OKAY); 993 - push_tx_responses(queue); 994 1010 continue; 995 1011 } 996 1012 ··· 1415 1433 return work_done; 1416 1434 } 1417 1435 1418 - static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx, 1419 - u8 status) 1420 - { 1421 - struct pending_tx_info *pending_tx_info; 1422 - pending_ring_idx_t index; 1423 - unsigned long flags; 1424 - 1425 - pending_tx_info = &queue->pending_tx_info[pending_idx]; 1426 - 1427 - spin_lock_irqsave(&queue->response_lock, flags); 1428 - 1429 - make_tx_response(queue, &pending_tx_info->req, 1430 - pending_tx_info->extra_count, status); 1431 - 1432 - /* Release the pending index before pusing the Tx response so 1433 - * its available before a new Tx request is pushed by the 1434 - * frontend. 1435 - */ 1436 - index = pending_index(queue->pending_prod++); 1437 - queue->pending_ring[index] = pending_idx; 1438 - 1439 - push_tx_responses(queue); 1440 - 1441 - spin_unlock_irqrestore(&queue->response_lock, flags); 1442 - } 1443 - 1444 - 1445 - static void make_tx_response(struct xenvif_queue *queue, 1446 - struct xen_netif_tx_request *txp, 1436 + static void _make_tx_response(struct xenvif_queue *queue, 1437 + const struct xen_netif_tx_request *txp, 1447 1438 unsigned int extra_count, 1448 - s8 st) 1439 + s8 status) 1449 1440 { 1450 1441 RING_IDX i = queue->tx.rsp_prod_pvt; 1451 1442 struct xen_netif_tx_response *resp; 1452 1443 1453 1444 resp = RING_GET_RESPONSE(&queue->tx, i); 1454 1445 resp->id = txp->id; 1455 - resp->status = st; 1446 + resp->status = status; 1456 1447 1457 1448 while (extra_count-- != 0) 1458 1449 RING_GET_RESPONSE(&queue->tx, ++i)->status = XEN_NETIF_RSP_NULL; ··· 1440 1485 RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&queue->tx, notify); 1441 1486 if (notify) 1442 1487 
notify_remote_via_irq(queue->tx_irq); 1488 + } 1489 + 1490 + static void xenvif_idx_release(struct xenvif_queue *queue, u16 pending_idx, 1491 + s8 status) 1492 + { 1493 + struct pending_tx_info *pending_tx_info; 1494 + pending_ring_idx_t index; 1495 + unsigned long flags; 1496 + 1497 + pending_tx_info = &queue->pending_tx_info[pending_idx]; 1498 + 1499 + spin_lock_irqsave(&queue->response_lock, flags); 1500 + 1501 + _make_tx_response(queue, &pending_tx_info->req, 1502 + pending_tx_info->extra_count, status); 1503 + 1504 + /* Release the pending index before pushing the Tx response so 1505 + * it's available before a new Tx request is pushed by the 1506 + * frontend. 1507 + */ 1508 + index = pending_index(queue->pending_prod++); 1509 + queue->pending_ring[index] = pending_idx; 1510 + 1511 + push_tx_responses(queue); 1512 + 1513 + spin_unlock_irqrestore(&queue->response_lock, flags); 1514 + } 1515 + 1516 + static void make_tx_response(struct xenvif_queue *queue, 1517 + const struct xen_netif_tx_request *txp, 1518 + unsigned int extra_count, 1519 + s8 status) 1520 + { 1521 + unsigned long flags; 1522 + 1523 + spin_lock_irqsave(&queue->response_lock, flags); 1524 + 1525 + _make_tx_response(queue, txp, extra_count, status); 1526 + push_tx_responses(queue); 1527 + 1528 + spin_unlock_irqrestore(&queue->response_lock, flags); 1443 1529 } 1444 1530 1445 1531 static void xenvif_idx_unmap(struct xenvif_queue *queue, u16 pending_idx)
+2 -2
drivers/platform/mellanox/mlxbf-pmc.c
··· 1170 1170 int ret; 1171 1171 1172 1172 addr = pmc->block[blk_num].mmio_base + 1173 - (rounddown(cnt_num, 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ); 1173 + ((cnt_num / 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ); 1174 1174 ret = mlxbf_pmc_readl(addr, &word); 1175 1175 if (ret) 1176 1176 return ret; ··· 1413 1413 int ret; 1414 1414 1415 1415 addr = pmc->block[blk_num].mmio_base + 1416 - (rounddown(cnt_num, 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ); 1416 + ((cnt_num / 2) * MLXBF_PMC_CRSPACE_PERFSEL_SZ); 1417 1417 ret = mlxbf_pmc_readl(addr, &word); 1418 1418 if (ret) 1419 1419 return ret;
+67
drivers/platform/mellanox/mlxbf-tmfifo.c
··· 47 47 /* Message with data needs at least two words (for header & data). */ 48 48 #define MLXBF_TMFIFO_DATA_MIN_WORDS 2 49 49 50 + /* Tx timeout in milliseconds. */ 51 + #define TMFIFO_TX_TIMEOUT 2000 52 + 50 53 /* ACPI UID for BlueField-3. */ 51 54 #define TMFIFO_BF3_UID 1 52 55 ··· 65 62 * @drop_desc: dummy desc for packet dropping 66 63 * @cur_len: processed length of the current descriptor 67 64 * @rem_len: remaining length of the pending packet 65 + * @rem_padding: remaining bytes to send as paddings 68 66 * @pkt_len: total length of the pending packet 69 67 * @next_avail: next avail descriptor id 70 68 * @num: vring size (number of descriptors) 71 69 * @align: vring alignment size 72 70 * @index: vring index 73 71 * @vdev_id: vring virtio id (VIRTIO_ID_xxx) 72 + * @tx_timeout: expire time of last tx packet 74 73 * @fifo: pointer to the tmfifo structure 75 74 */ 76 75 struct mlxbf_tmfifo_vring { ··· 84 79 struct vring_desc drop_desc; 85 80 int cur_len; 86 81 int rem_len; 82 + int rem_padding; 87 83 u32 pkt_len; 88 84 u16 next_avail; 89 85 int num; 90 86 int align; 91 87 int index; 92 88 int vdev_id; 89 + unsigned long tx_timeout; 93 90 struct mlxbf_tmfifo *fifo; 94 91 }; 95 92 ··· 826 819 return true; 827 820 } 828 821 822 + static void mlxbf_tmfifo_check_tx_timeout(struct mlxbf_tmfifo_vring *vring) 823 + { 824 + unsigned long flags; 825 + 826 + /* Only handle Tx timeout for network vdev. */ 827 + if (vring->vdev_id != VIRTIO_ID_NET) 828 + return; 829 + 830 + /* Initialize the timeout or return if not expired. */ 831 + if (!vring->tx_timeout) { 832 + /* Initialize the timeout. */ 833 + vring->tx_timeout = jiffies + 834 + msecs_to_jiffies(TMFIFO_TX_TIMEOUT); 835 + return; 836 + } else if (time_before(jiffies, vring->tx_timeout)) { 837 + /* Return if not timeout yet. */ 838 + return; 839 + } 840 + 841 + /* 842 + * Drop the packet after timeout. 
The outstanding packet is 843 + * released and the remaining bytes will be sent with padding byte 0x00 844 + * as a recovery. On the peer (host) side, the padding bytes 0x00 will be 845 + * either dropped directly, or appended to an existing outstanding packet 846 + * and thus dropped as a corrupted network packet. 847 + */ 848 + vring->rem_padding = round_up(vring->rem_len, sizeof(u64)); 849 + mlxbf_tmfifo_release_pkt(vring); 850 + vring->cur_len = 0; 851 + vring->rem_len = 0; 852 + vring->fifo->vring[0] = NULL; 853 + 854 + /* 855 + * Make sure the load/store are in order before 856 + * returning back to virtio. 857 + */ 858 + virtio_mb(false); 859 + 860 + /* Notify upper layer. */ 861 + spin_lock_irqsave(&vring->fifo->spin_lock[0], flags); 862 + vring_interrupt(0, vring->vq); 863 + spin_unlock_irqrestore(&vring->fifo->spin_lock[0], flags); 864 + } 865 + 829 866 /* Rx & Tx processing of a queue. */ 830 867 static void mlxbf_tmfifo_rxtx(struct mlxbf_tmfifo_vring *vring, bool is_rx) 831 868 { ··· 892 841 return; 893 842 894 843 do { 844 + retry: 895 845 /* Get available FIFO space. */ 896 846 if (avail == 0) { 897 847 if (is_rx) ··· 901 849 avail = mlxbf_tmfifo_get_tx_avail(fifo, devid); 902 850 if (avail <= 0) 903 851 break; 852 + } 853 + 854 + /* Insert paddings for discarded Tx packet. */ 855 + if (!is_rx) { 856 + vring->tx_timeout = 0; 857 + while (vring->rem_padding >= sizeof(u64)) { 858 + writeq(0, vring->fifo->tx.data); 859 + vring->rem_padding -= sizeof(u64); 860 + if (--avail == 0) 861 + goto retry; 862 + } 863 + } 904 864 905 865 /* Console output always comes from the Tx buffer. */ ··· 923 860 /* Handle one descriptor. */ 924 861 more = mlxbf_tmfifo_rxtx_one_desc(vring, is_rx, &avail); 925 862 } while (more); 863 + 864 + /* Check Tx timeout. */ 865 + if (avail <= 0 && !is_rx) 866 + mlxbf_tmfifo_check_tx_timeout(vring); 926 867 927 868 /* Handle Rx or Tx queues. */
+1
drivers/platform/x86/amd/pmf/Kconfig
··· 10 10 depends on AMD_NB 11 11 select ACPI_PLATFORM_PROFILE 12 12 depends on TEE && AMDTEE 13 + depends on AMD_SFH_HID 13 14 help 14 15 This driver provides support for the AMD Platform Management Framework. 15 16 The goal is to enhance end user experience by making AMD PCs smarter,
+36
drivers/platform/x86/amd/pmf/spc.c
··· 10 10 */ 11 11 12 12 #include <acpi/button.h> 13 + #include <linux/amd-pmf-io.h> 13 14 #include <linux/power_supply.h> 14 15 #include <linux/units.h> 15 16 #include "pmf.h" ··· 45 44 dev_dbg(dev->dev, "Max C0 Residency: %u\n", in->ev_info.max_c0residency); 46 45 dev_dbg(dev->dev, "GFX Busy: %u\n", in->ev_info.gfx_busy); 47 46 dev_dbg(dev->dev, "LID State: %s\n", in->ev_info.lid_state ? "close" : "open"); 47 + dev_dbg(dev->dev, "User Presence: %s\n", in->ev_info.user_present ? "Present" : "Away"); 48 + dev_dbg(dev->dev, "Ambient Light: %d\n", in->ev_info.ambient_light); 48 49 dev_dbg(dev->dev, "==== TA inputs END ====\n"); 49 50 } 50 51 #else ··· 150 147 return 0; 151 148 } 152 149 150 + static int amd_pmf_get_sensor_info(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in) 151 + { 152 + struct amd_sfh_info sfh_info; 153 + int ret; 154 + 155 + /* Get ALS data */ 156 + ret = amd_get_sfh_info(&sfh_info, MT_ALS); 157 + if (!ret) 158 + in->ev_info.ambient_light = sfh_info.ambient_light; 159 + else 160 + return ret; 161 + 162 + /* get HPD data */ 163 + ret = amd_get_sfh_info(&sfh_info, MT_HPD); 164 + if (ret) 165 + return ret; 166 + 167 + switch (sfh_info.user_present) { 168 + case SFH_NOT_DETECTED: 169 + in->ev_info.user_present = 0xff; /* assume no sensors connected */ 170 + break; 171 + case SFH_USER_PRESENT: 172 + in->ev_info.user_present = 1; 173 + break; 174 + case SFH_USER_AWAY: 175 + in->ev_info.user_present = 0; 176 + break; 177 + } 178 + 179 + return 0; 180 + } 181 + 153 182 void amd_pmf_populate_ta_inputs(struct amd_pmf_dev *dev, struct ta_pmf_enact_table *in) 154 183 { 155 184 /* TA side lid open is 1 and close is 0, hence the ! here */ ··· 190 155 amd_pmf_get_smu_info(dev, in); 191 156 amd_pmf_get_battery_info(dev, in); 192 157 amd_pmf_get_slider_info(dev, in); 158 + amd_pmf_get_sensor_info(dev, in); 193 159 }
+3 -1
drivers/platform/x86/amd/pmf/tee-if.c
··· 298 298 if (!new_policy_buf) 299 299 return -ENOMEM; 300 300 301 - if (copy_from_user(new_policy_buf, buf, length)) 301 + if (copy_from_user(new_policy_buf, buf, length)) { 302 + kfree(new_policy_buf); 302 303 return -EFAULT; 304 + } 303 305 304 306 kfree(dev->policy_buf); 305 307 dev->policy_buf = new_policy_buf;
+2 -1
drivers/platform/x86/intel/ifs/load.c
··· 399 399 if (fw->size != expected_size) { 400 400 dev_err(dev, "File size mismatch (expected %u, actual %zu). Corrupted IFS image.\n", 401 401 expected_size, fw->size); 402 - return -EINVAL; 402 + ret = -EINVAL; 403 + goto release; 403 404 } 404 405 405 406 ret = image_sanity_check(dev, (struct microcode_header_intel *)fw->data);
+41 -41
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.c
··· 23 23 static int (*uncore_write)(struct uncore_data *data, unsigned int input, unsigned int min_max); 24 24 static int (*uncore_read_freq)(struct uncore_data *data, unsigned int *freq); 25 25 26 - static ssize_t show_domain_id(struct device *dev, struct device_attribute *attr, char *buf) 26 + static ssize_t show_domain_id(struct kobject *kobj, struct kobj_attribute *attr, char *buf) 27 27 { 28 - struct uncore_data *data = container_of(attr, struct uncore_data, domain_id_dev_attr); 28 + struct uncore_data *data = container_of(attr, struct uncore_data, domain_id_kobj_attr); 29 29 30 30 return sprintf(buf, "%u\n", data->domain_id); 31 31 } 32 32 33 - static ssize_t show_fabric_cluster_id(struct device *dev, struct device_attribute *attr, char *buf) 33 + static ssize_t show_fabric_cluster_id(struct kobject *kobj, struct kobj_attribute *attr, char *buf) 34 34 { 35 - struct uncore_data *data = container_of(attr, struct uncore_data, fabric_cluster_id_dev_attr); 35 + struct uncore_data *data = container_of(attr, struct uncore_data, fabric_cluster_id_kobj_attr); 36 36 37 37 return sprintf(buf, "%u\n", data->cluster_id); 38 38 } 39 39 40 - static ssize_t show_package_id(struct device *dev, struct device_attribute *attr, char *buf) 40 + static ssize_t show_package_id(struct kobject *kobj, struct kobj_attribute *attr, char *buf) 41 41 { 42 - struct uncore_data *data = container_of(attr, struct uncore_data, package_id_dev_attr); 42 + struct uncore_data *data = container_of(attr, struct uncore_data, package_id_kobj_attr); 43 43 44 44 return sprintf(buf, "%u\n", data->package_id); 45 45 } ··· 97 97 } 98 98 99 99 #define store_uncore_min_max(name, min_max) \ 100 - static ssize_t store_##name(struct device *dev, \ 101 - struct device_attribute *attr, \ 100 + static ssize_t store_##name(struct kobject *kobj, \ 101 + struct kobj_attribute *attr, \ 102 102 const char *buf, size_t count) \ 103 103 { \ 104 - struct uncore_data *data = container_of(attr, struct uncore_data, name##_dev_attr);\ 104 + struct uncore_data *data = container_of(attr, struct uncore_data, name##_kobj_attr);\ 105 105 \ 106 106 return store_min_max_freq_khz(data, buf, count, \ 107 107 min_max); \ 108 108 } 109 109 110 110 #define show_uncore_min_max(name, min_max) \ 111 - static ssize_t show_##name(struct device *dev, \ 112 - struct device_attribute *attr, char *buf)\ 111 + static ssize_t show_##name(struct kobject *kobj, \ 112 + struct kobj_attribute *attr, char *buf)\ 113 113 { \ 114 - struct uncore_data *data = container_of(attr, struct uncore_data, name##_dev_attr);\ 114 + struct uncore_data *data = container_of(attr, struct uncore_data, name##_kobj_attr);\ 115 115 \ 116 116 return show_min_max_freq_khz(data, buf, min_max); \ 117 117 } 118 118 119 119 #define show_uncore_perf_status(name) \ 120 - static ssize_t show_##name(struct device *dev, \ 121 - struct device_attribute *attr, char *buf)\ 120 + static ssize_t show_##name(struct kobject *kobj, \ 121 + struct kobj_attribute *attr, char *buf)\ 122 122 { \ 123 - struct uncore_data *data = container_of(attr, struct uncore_data, name##_dev_attr);\ 123 + struct uncore_data *data = container_of(attr, struct uncore_data, name##_kobj_attr);\ 124 124 \ 125 125 return show_perf_status_freq_khz(data, buf); \ 126 126 } ··· 134 134 show_uncore_perf_status(current_freq_khz); 135 135 136 136 #define show_uncore_data(member_name) \ 137 - static ssize_t show_##member_name(struct device *dev, \ 138 - struct device_attribute *attr, char *buf)\ 137 + static ssize_t show_##member_name(struct kobject *kobj, \ 138 + struct kobj_attribute *attr, char *buf)\ 139 139 { \ 140 140 struct uncore_data *data = container_of(attr, struct uncore_data,\ 141 - member_name##_dev_attr);\ 141 + member_name##_kobj_attr);\ 142 142 \ 143 143 return sysfs_emit(buf, "%u\n", \ 144 144 data->member_name); \ ··· 149 149 150 150 #define init_attribute_rw(_name) \ 151 151 do { \ 152 - sysfs_attr_init(&data->_name##_dev_attr.attr); \ 153 - data->_name##_dev_attr.show = show_##_name; \ 154 - data->_name##_dev_attr.store = store_##_name; \ 155 - data->_name##_dev_attr.attr.name = #_name; \ 156 - data->_name##_dev_attr.attr.mode = 0644; \ 152 + sysfs_attr_init(&data->_name##_kobj_attr.attr); \ 153 + data->_name##_kobj_attr.show = show_##_name; \ 154 + data->_name##_kobj_attr.store = store_##_name; \ 155 + data->_name##_kobj_attr.attr.name = #_name; \ 156 + data->_name##_kobj_attr.attr.mode = 0644; \ 157 157 } while (0) 158 158 159 159 #define init_attribute_ro(_name) \ 160 160 do { \ 161 - sysfs_attr_init(&data->_name##_dev_attr.attr); \ 162 - data->_name##_dev_attr.show = show_##_name; \ 163 - data->_name##_dev_attr.store = NULL; \ 164 - data->_name##_dev_attr.attr.name = #_name; \ 165 - data->_name##_dev_attr.attr.mode = 0444; \ 161 + sysfs_attr_init(&data->_name##_kobj_attr.attr); \ 162 + data->_name##_kobj_attr.show = show_##_name; \ 163 + data->_name##_kobj_attr.store = NULL; \ 164 + data->_name##_kobj_attr.attr.name = #_name; \ 165 + data->_name##_kobj_attr.attr.mode = 0444; \ 166 166 } while (0) 167 167 168 168 #define init_attribute_root_ro(_name) \ 169 169 do { \ 170 - sysfs_attr_init(&data->_name##_dev_attr.attr); \ 171 - data->_name##_dev_attr.show = show_##_name; \ 172 - data->_name##_dev_attr.store = NULL; \ 173 - data->_name##_dev_attr.attr.name = #_name; \ 174 - data->_name##_dev_attr.attr.mode = 0400; \ 170 + sysfs_attr_init(&data->_name##_kobj_attr.attr); \ 171 + data->_name##_kobj_attr.show = show_##_name; \ 172 + data->_name##_kobj_attr.store = NULL; \ 173 + data->_name##_kobj_attr.attr.name = #_name; \ 174 + data->_name##_kobj_attr.attr.mode = 0400; \ 175 175 } while (0) 176 176 177 177 static int create_attr_group(struct uncore_data *data, char *name) ··· 186 186 187 187 if (data->domain_id != UNCORE_DOMAIN_ID_INVALID) { 188 188 init_attribute_root_ro(domain_id); 189 - data->uncore_attrs[index++] = &data->domain_id_dev_attr.attr; 189 + data->uncore_attrs[index++] = &data->domain_id_kobj_attr.attr; 190 190 init_attribute_root_ro(fabric_cluster_id); 191 - data->uncore_attrs[index++] = &data->fabric_cluster_id_dev_attr.attr; 191 + data->uncore_attrs[index++] = &data->fabric_cluster_id_kobj_attr.attr; 192 192 init_attribute_root_ro(package_id); 193 - data->uncore_attrs[index++] = &data->package_id_dev_attr.attr; 193 + data->uncore_attrs[index++] = &data->package_id_kobj_attr.attr; 194 194 } 195 195 196 - data->uncore_attrs[index++] = &data->max_freq_khz_dev_attr.attr; 197 - data->uncore_attrs[index++] = &data->min_freq_khz_dev_attr.attr; 198 - data->uncore_attrs[index++] = &data->initial_min_freq_khz_dev_attr.attr; 199 - data->uncore_attrs[index++] = &data->initial_max_freq_khz_dev_attr.attr; 196 + data->uncore_attrs[index++] = &data->max_freq_khz_kobj_attr.attr; 197 + data->uncore_attrs[index++] = &data->min_freq_khz_kobj_attr.attr; 198 + data->uncore_attrs[index++] = &data->initial_min_freq_khz_kobj_attr.attr; 199 + data->uncore_attrs[index++] = &data->initial_max_freq_khz_kobj_attr.attr; 200 200 201 201 ret = uncore_read_freq(data, &freq); 202 202 if (!ret) 203 - data->uncore_attrs[index++] = &data->current_freq_khz_dev_attr.attr; 203 + data->uncore_attrs[index++] = &data->current_freq_khz_kobj_attr.attr; 204 204 205 205 data->uncore_attrs[index] = NULL; 206 206
+16 -16
drivers/platform/x86/intel/uncore-frequency/uncore-frequency-common.h
··· 26 26 * @instance_id: Unique instance id to append to directory name 27 27 * @name: Sysfs entry name for this instance 28 28 * @uncore_attr_group: Attribute group storage 29 - * @max_freq_khz_dev_attr: Storage for device attribute max_freq_khz 30 - * @mix_freq_khz_dev_attr: Storage for device attribute min_freq_khz 31 - * @initial_max_freq_khz_dev_attr: Storage for device attribute initial_max_freq_khz 32 - * @initial_min_freq_khz_dev_attr: Storage for device attribute initial_min_freq_khz 33 - * @current_freq_khz_dev_attr: Storage for device attribute current_freq_khz 34 - * @domain_id_dev_attr: Storage for device attribute domain_id 35 - * @fabric_cluster_id_dev_attr: Storage for device attribute fabric_cluster_id 36 - * @package_id_dev_attr: Storage for device attribute package_id 29 + * @max_freq_khz_kobj_attr: Storage for kobject attribute max_freq_khz 30 + * @mix_freq_khz_kobj_attr: Storage for kobject attribute min_freq_khz 31 + * @initial_max_freq_khz_kobj_attr: Storage for kobject attribute initial_max_freq_khz 32 + * @initial_min_freq_khz_kobj_attr: Storage for kobject attribute initial_min_freq_khz 33 + * @current_freq_khz_kobj_attr: Storage for kobject attribute current_freq_khz 34 + * @domain_id_kobj_attr: Storage for kobject attribute domain_id 35 + * @fabric_cluster_id_kobj_attr: Storage for kobject attribute fabric_cluster_id 36 + * @package_id_kobj_attr: Storage for kobject attribute package_id 37 37 * @uncore_attrs: Attribute storage for group creation 38 38 * 39 39 * This structure is used to encapsulate all data related to uncore sysfs ··· 53 53 char name[32]; 54 54 55 55 struct attribute_group uncore_attr_group; 56 - struct device_attribute max_freq_khz_dev_attr; 57 - struct device_attribute min_freq_khz_dev_attr; 58 - struct device_attribute initial_max_freq_khz_dev_attr; 59 - struct device_attribute initial_min_freq_khz_dev_attr; 60 - struct device_attribute current_freq_khz_dev_attr; 61 - struct device_attribute domain_id_dev_attr; 62 - struct device_attribute fabric_cluster_id_dev_attr; 63 - struct device_attribute package_id_dev_attr; 56 + struct kobj_attribute max_freq_khz_kobj_attr; 57 + struct kobj_attribute min_freq_khz_kobj_attr; 58 + struct kobj_attribute initial_max_freq_khz_kobj_attr; 59 + struct kobj_attribute initial_min_freq_khz_kobj_attr; 60 + struct kobj_attribute current_freq_khz_kobj_attr; 61 + struct kobj_attribute domain_id_kobj_attr; 62 + struct kobj_attribute fabric_cluster_id_kobj_attr; 63 + struct kobj_attribute package_id_kobj_attr; 64 64 struct attribute *uncore_attrs[9]; 65 65 }; 66 66
+2 -2
drivers/platform/x86/intel/wmi/sbl-fw-update.c
··· 32 32 return -ENODEV; 33 33 34 34 if (obj->type != ACPI_TYPE_INTEGER) { 35 - dev_warn(dev, "wmi_query_block returned invalid value\n"); 35 + dev_warn(dev, "wmidev_block_query returned invalid value\n"); 36 36 kfree(obj); 37 37 return -EINVAL; 38 38 } ··· 55 55 56 56 status = wmidev_block_set(to_wmi_device(dev), 0, &input); 57 57 if (ACPI_FAILURE(status)) { 58 - dev_err(dev, "wmi_set_block failed\n"); 58 + dev_err(dev, "wmidev_block_set failed\n"); 59 59 return -ENODEV; 60 60 } 61 61
+152 -54
drivers/platform/x86/p2sb.c
··· 26 26 {} 27 27 }; 28 28 29 + /* 30 + * Cache BAR0 of P2SB device functions 0 to 7. 31 + * TODO: The constant 8 is the number of functions that PCI specification 32 + * defines. Same definitions exist tree-wide. Unify this definition and 33 + * the other definitions then move to include/uapi/linux/pci.h. 34 + */ 35 + #define NR_P2SB_RES_CACHE 8 36 + 37 + struct p2sb_res_cache { 38 + u32 bus_dev_id; 39 + struct resource res; 40 + }; 41 + 42 + static struct p2sb_res_cache p2sb_resources[NR_P2SB_RES_CACHE]; 43 + 29 44 static int p2sb_get_devfn(unsigned int *devfn) 30 45 { 31 46 unsigned int fn = P2SB_DEVFN_DEFAULT; ··· 54 39 return 0; 55 40 } 56 41 57 - /* Copy resource from the first BAR of the device in question */ 58 - static int p2sb_read_bar0(struct pci_dev *pdev, struct resource *mem) 42 + static bool p2sb_valid_resource(struct resource *res) 59 43 { 60 - struct resource *bar0 = &pdev->resource[0]; 44 + if (res->flags) 45 + return true; 46 + 47 + return false; 48 + } 49 + 50 + /* Copy resource from the first BAR of the device in question */ 51 + static void p2sb_read_bar0(struct pci_dev *pdev, struct resource *mem) 52 + { 53 + struct resource *bar0 = pci_resource_n(pdev, 0); 61 54 62 55 /* Make sure we have no dangling pointers in the output */ 63 56 memset(mem, 0, sizeof(*mem)); ··· 79 56 mem->end = bar0->end; 80 57 mem->flags = bar0->flags; 81 58 mem->desc = bar0->desc; 59 + } 60 + 61 + static void p2sb_scan_and_cache_devfn(struct pci_bus *bus, unsigned int devfn) 62 + { 63 + struct p2sb_res_cache *cache = &p2sb_resources[PCI_FUNC(devfn)]; 64 + struct pci_dev *pdev; 65 + 66 + pdev = pci_scan_single_device(bus, devfn); 67 + if (!pdev) 68 + return; 69 + 70 + p2sb_read_bar0(pdev, &cache->res); 71 + cache->bus_dev_id = bus->dev.id; 72 + 73 + pci_stop_and_remove_bus_device(pdev); 74 + } 75 + 76 + static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn) 77 + { 78 + unsigned int slot, fn; 79 + 80 + if (PCI_FUNC(devfn) == 0) { 81 + /* 82 + * When function number of the P2SB device is zero, scan it and 83 + * other function numbers, and if devices are available, cache 84 + * their BAR0s. 85 + */ 86 + slot = PCI_SLOT(devfn); 87 + for (fn = 0; fn < NR_P2SB_RES_CACHE; fn++) 88 + p2sb_scan_and_cache_devfn(bus, PCI_DEVFN(slot, fn)); 89 + } else { 90 + /* Scan the P2SB device and cache its BAR0 */ 91 + p2sb_scan_and_cache_devfn(bus, devfn); 92 + } 93 + 94 + if (!p2sb_valid_resource(&p2sb_resources[PCI_FUNC(devfn)].res)) 95 + return -ENOENT; 82 96 83 97 return 0; 84 98 } 85 99 86 - static int p2sb_scan_and_read(struct pci_bus *bus, unsigned int devfn, struct resource *mem) 100 + static struct pci_bus *p2sb_get_bus(struct pci_bus *bus) 87 101 { 88 - struct pci_dev *pdev; 102 + static struct pci_bus *p2sb_bus; 103 + 104 + bus = bus ?: p2sb_bus; 105 + if (bus) 106 + return bus; 107 + 108 + /* Assume P2SB is on the bus 0 in domain 0 */ 109 + p2sb_bus = pci_find_bus(0, 0); 110 + return p2sb_bus; 111 + } 112 + 113 + static int p2sb_cache_resources(void) 114 + { 115 + unsigned int devfn_p2sb; 116 + u32 value = P2SBC_HIDE; 117 + struct pci_bus *bus; 118 + u16 class; 89 119 int ret; 90 120 91 - pdev = pci_scan_single_device(bus, devfn); 92 - if (!pdev) 121 + /* Get devfn for P2SB device itself */ 122 + ret = p2sb_get_devfn(&devfn_p2sb); 123 + if (ret) 124 + return ret; 125 + 126 + bus = p2sb_get_bus(NULL); 127 + if (!bus) 93 128 return -ENODEV; 94 129 95 - ret = p2sb_read_bar0(pdev, mem); 130 + /* 131 + * When a device with same devfn exists and its device class is not 132 + * PCI_CLASS_MEMORY_OTHER for P2SB, do not touch it. 133 + */ 134 + pci_bus_read_config_word(bus, devfn_p2sb, PCI_CLASS_DEVICE, &class); 135 + if (!PCI_POSSIBLE_ERROR(class) && class != PCI_CLASS_MEMORY_OTHER) 136 + return -ENODEV; 96 137 97 - pci_stop_and_remove_bus_device(pdev); 138 + /* 139 + * Prevent concurrent PCI bus scan from seeing the P2SB device and 140 + * removing via sysfs while it is temporarily exposed. 141 + */ 142 + pci_lock_rescan_remove(); 143 + 144 + /* 145 + * The BIOS prevents the P2SB device from being enumerated by the PCI 146 + * subsystem, so we need to unhide and hide it back to lookup the BAR. 147 + * Unhide the P2SB device here, if needed. 148 + */ 149 + pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value); 150 + if (value & P2SBC_HIDE) 151 + pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, 0); 152 + 153 + ret = p2sb_scan_and_cache(bus, devfn_p2sb); 154 + 155 + /* Hide the P2SB device, if it was hidden */ 156 + if (value & P2SBC_HIDE) 157 + pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, P2SBC_HIDE); 158 + 159 + pci_unlock_rescan_remove(); 160 + 98 161 return ret; 99 162 } 100 163 ··· 190 81 * @devfn: PCI slot and function to communicate with 191 82 * @mem: memory resource to be filled in 192 83 * 193 - * The BIOS prevents the P2SB device from being enumerated by the PCI 194 - * subsystem, so we need to unhide and hide it back to lookup the BAR. 195 - * 196 - * if @bus is NULL, the bus 0 in domain 0 will be used. 84 + * If @bus is NULL, the bus 0 in domain 0 will be used. 197 85 * If @devfn is 0, it will be replaced by devfn of the P2SB device. 198 86 * 199 87 * Caller must provide a valid pointer to @mem. 200 - * 201 - * Locking is handled by pci_rescan_remove_lock mutex. 202 88 * 203 89 * Return: 204 90 * 0 on success or appropriate errno value on error. 205 91 */ 206 92 int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem) 207 93 { 208 - struct pci_dev *pdev_p2sb; 209 - unsigned int devfn_p2sb; 210 - u32 value = P2SBC_HIDE; 94 + struct p2sb_res_cache *cache; 211 95 int ret; 212 96 213 - /* Get devfn for P2SB device itself */ 214 - ret = p2sb_get_devfn(&devfn_p2sb); 215 - if (ret) 216 - return ret; 217 - 218 - /* if @bus is NULL, use bus 0 in domain 0 */ 219 - bus = bus ?: pci_find_bus(0, 0); 220 - 221 - /* 222 - * Prevent concurrent PCI bus scan from seeing the P2SB device and 223 - * removing via sysfs while it is temporarily exposed. 224 - */ 225 - pci_lock_rescan_remove(); 226 - 227 - /* Unhide the P2SB device, if needed */ 228 - pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value); 229 - if (value & P2SBC_HIDE) 230 - pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, 0); 231 - 232 - pdev_p2sb = pci_scan_single_device(bus, devfn_p2sb); 233 - if (devfn) 234 - ret = p2sb_scan_and_read(bus, devfn, mem); 235 - else 236 - ret = p2sb_read_bar0(pdev_p2sb, mem); 237 - pci_stop_and_remove_bus_device(pdev_p2sb); 238 - 239 - /* Hide the P2SB device, if it was hidden */ 240 - if (value & P2SBC_HIDE) 241 - pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, P2SBC_HIDE); 242 - 243 - pci_unlock_rescan_remove(); 244 - 245 - if (ret) 246 - return ret; 247 - 248 - if (mem->flags == 0) 97 + bus = p2sb_get_bus(bus); 98 + if (!bus) 249 99 return -ENODEV; 250 100 101 + if (!devfn) { 102 + ret = p2sb_get_devfn(&devfn); 103 + if (ret) 104 + return ret; 105 + } 106 + 107 + cache = &p2sb_resources[PCI_FUNC(devfn)]; 108 + if (cache->bus_dev_id != bus->dev.id) 109 + return -ENODEV; 110 + 111 + if (!p2sb_valid_resource(&cache->res)) 112 + return -ENOENT; 113 + 114 + memcpy(mem, &cache->res, sizeof(*mem)); 251 115 return 0; 252 116 } 253 117 EXPORT_SYMBOL_GPL(p2sb_bar); 118 + 119 + static int __init p2sb_fs_init(void) 120 + { 121 + p2sb_cache_resources(); 122 + return 0; 123 + } 124 + 125 + /* 126 + * pci_rescan_remove_lock to avoid access to unhidden P2SB devices can 127 + * not be locked in sysfs pci bus rescan path because of deadlock. To 128 + * avoid the deadlock, access to P2SB devices with the lock at an early 129 + * step in kernel initialization and cache required resources. This 130 + * should happen after subsys_initcall which initializes PCI subsystem 131 + * and before device_initcall which requires P2SB resources. 132 + */ 133 + fs_initcall(p2sb_fs_init);
+35
drivers/platform/x86/touchscreen_dmi.c
··· 944 944 .properties = teclast_tbook11_props, 945 945 }; 946 946 947 + static const struct property_entry teclast_x16_plus_props[] = { 948 + PROPERTY_ENTRY_U32("touchscreen-min-x", 8), 949 + PROPERTY_ENTRY_U32("touchscreen-min-y", 14), 950 + PROPERTY_ENTRY_U32("touchscreen-size-x", 1916), 951 + PROPERTY_ENTRY_U32("touchscreen-size-y", 1264), 952 + PROPERTY_ENTRY_BOOL("touchscreen-inverted-y"), 953 + PROPERTY_ENTRY_STRING("firmware-name", "gsl3692-teclast-x16-plus.fw"), 954 + PROPERTY_ENTRY_U32("silead,max-fingers", 10), 955 + PROPERTY_ENTRY_BOOL("silead,home-button"), 956 + { } 957 + }; 958 + 959 + static const struct ts_dmi_data teclast_x16_plus_data = { 960 + .embedded_fw = { 961 + .name = "silead/gsl3692-teclast-x16-plus.fw", 962 + .prefix = { 0xf0, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00 }, 963 + .length = 43560, 964 + .sha256 = { 0x9d, 0xb0, 0x3d, 0xf1, 0x00, 0x3c, 0xb5, 0x25, 965 + 0x62, 0x8a, 0xa0, 0x93, 0x4b, 0xe0, 0x4e, 0x75, 966 + 0xd1, 0x27, 0xb1, 0x65, 0x3c, 0xba, 0xa5, 0x0f, 967 + 0xcd, 0xb4, 0xbe, 0x00, 0xbb, 0xf6, 0x43, 0x29 }, 968 + }, 969 + .acpi_name = "MSSL1680:00", 970 + .properties = teclast_x16_plus_props, 971 + }; 972 + 947 973 static const struct property_entry teclast_x3_plus_props[] = { 948 974 PROPERTY_ENTRY_U32("touchscreen-size-x", 1980), 949 975 PROPERTY_ENTRY_U32("touchscreen-size-y", 1500), ··· 1636 1610 DMI_MATCH(DMI_SYS_VENDOR, "TECLAST"), 1637 1611 DMI_MATCH(DMI_PRODUCT_NAME, "TbooK 11"), 1638 1612 DMI_MATCH(DMI_PRODUCT_SKU, "E5A6_A1"), 1613 + }, 1614 + }, 1615 + { 1616 + /* Teclast X16 Plus */ 1617 + .driver_data = (void *)&teclast_x16_plus_data, 1618 + .matches = { 1619 + DMI_MATCH(DMI_SYS_VENDOR, "TECLAST"), 1620 + DMI_MATCH(DMI_PRODUCT_NAME, "Default string"), 1621 + DMI_MATCH(DMI_PRODUCT_SKU, "D3A5_A1"), 1639 1622 }, 1640 1623 }, 1641 1624 {
+112 -69
drivers/platform/x86/wmi.c
··· 25 25 #include <linux/list.h> 26 26 #include <linux/module.h> 27 27 #include <linux/platform_device.h> 28 + #include <linux/rwsem.h> 28 29 #include <linux/slab.h> 29 30 #include <linux/sysfs.h> 30 31 #include <linux/types.h> ··· 57 56 58 57 enum { /* wmi_block flags */ 59 58 WMI_READ_TAKES_NO_ARGS, 60 - WMI_PROBED, 61 59 }; 62 60 63 61 struct wmi_block { ··· 64 64 struct list_head list; 65 65 struct guid_block gblock; 66 66 struct acpi_device *acpi_device; 67 + struct rw_semaphore notify_lock; /* Protects notify callback add/remove */ 67 68 wmi_notify_handler handler; 68 69 void *handler_data; 70 + bool driver_ready; 69 71 unsigned long flags; 70 72 }; 71 73 ··· 221 219 return 0; 222 220 } 223 221 222 + static int wmidev_match_notify_id(struct device *dev, const void *data) 223 + { 224 + struct wmi_block *wblock = dev_to_wblock(dev); 225 + const u32 *notify_id = data; 226 + 227 + if (wblock->gblock.flags & ACPI_WMI_EVENT && wblock->gblock.notify_id == *notify_id) 228 + return 1; 229 + 230 + return 0; 231 + } 232 + 224 233 static struct bus_type wmi_bus_type; 225 234 226 235 static struct wmi_device *wmi_find_device_by_guid(const char *guid_string) ··· 249 236 return ERR_PTR(-ENODEV); 250 237 251 238 return dev_to_wdev(dev); 239 + } 240 + 241 + static struct wmi_device *wmi_find_event_by_notify_id(const u32 notify_id) 242 + { 243 + struct device *dev; 244 + 245 + dev = bus_find_device(&wmi_bus_type, NULL, &notify_id, wmidev_match_notify_id); 246 + if (!dev) 247 + return ERR_PTR(-ENODEV); 248 + 249 + return to_wmi_device(dev); 252 250 } 253 251 254 252 static void wmi_device_put(struct wmi_device *wdev) ··· 596 572 wmi_notify_handler handler, 597 573 void *data) 598 574 { 599 - struct wmi_block *block; 600 - acpi_status status = AE_NOT_EXIST; 601 - guid_t guid_input; 575 + struct wmi_block *wblock; 576 + struct wmi_device *wdev; 577 + acpi_status status; 602 578 603 - if (!guid || !handler) 604 - return AE_BAD_PARAMETER; 579 + wdev = wmi_find_device_by_guid(guid); 580 + if (IS_ERR(wdev)) 581 + return AE_ERROR; 605 582 606 - if (guid_parse(guid, &guid_input)) 607 - return AE_BAD_PARAMETER; 583 + wblock = container_of(wdev, struct wmi_block, dev); 608 584 609 - list_for_each_entry(block, &wmi_block_list, list) { 610 - acpi_status wmi_status; 585 + down_write(&wblock->notify_lock); 586 + if (wblock->handler) { 587 + status = AE_ALREADY_ACQUIRED; 588 + } else { 589 + wblock->handler = handler; 590 + wblock->handler_data = data; 611 591 612 - if (guid_equal(&block->gblock.guid, &guid_input)) { 613 - if (block->handler) 614 - return AE_ALREADY_ACQUIRED; 592 + if (ACPI_FAILURE(wmi_method_enable(wblock, true))) 593 + dev_warn(&wblock->dev.dev, "Failed to enable device\n"); 615 594 616 - block->handler = handler; 617 - block->handler_data = data; 618 - 619 - wmi_status = wmi_method_enable(block, true); 620 - if ((wmi_status != AE_OK) || 621 - ((wmi_status == AE_OK) && (status == AE_NOT_EXIST))) 622 - status = wmi_status; 623 - } 595 + status = AE_OK; 624 596 } 597 + up_write(&wblock->notify_lock); 598 + 599 + wmi_device_put(wdev); 625 600 626 601 return status; 627 602 } ··· 636 613 */ 637 614 acpi_status wmi_remove_notify_handler(const char *guid) 638 615 { 639 - struct wmi_block *block; 640 - acpi_status status = AE_NOT_EXIST; 641 - guid_t guid_input; 616 + struct wmi_block *wblock; 617 + struct wmi_device *wdev; 618 + acpi_status status; 642 619 643 - if (!guid) 644 - return AE_BAD_PARAMETER; 620 + wdev = wmi_find_device_by_guid(guid); 621 + if (IS_ERR(wdev)) 622 + return AE_ERROR; 645 623 646 - if (guid_parse(guid, &guid_input)) 647 - return AE_BAD_PARAMETER; 624 + wblock = container_of(wdev, struct wmi_block, dev); 648 625 649 - list_for_each_entry(block, &wmi_block_list, list) { 650 - acpi_status wmi_status; 626 + down_write(&wblock->notify_lock); 627 + if (!wblock->handler) { 628 + status = AE_NULL_ENTRY; 629 + } else { 630 + if (ACPI_FAILURE(wmi_method_enable(wblock, false))) 631 + dev_warn(&wblock->dev.dev, "Failed to disable device\n"); 651 632 652 - if (guid_equal(&block->gblock.guid, &guid_input)) { 653 - if (!block->handler) 654 - return AE_NULL_ENTRY; 633 + wblock->handler = NULL; 634 + wblock->handler_data = NULL; 655 635 656 - wmi_status = wmi_method_enable(block, false); 657 - block->handler = NULL; 658 - block->handler_data = NULL; 659 - if (wmi_status != AE_OK || (wmi_status == AE_OK && status == AE_NOT_EXIST)) 660 - status = wmi_status; 661 - } 636 + status = AE_OK; 662 637 } 638 + up_write(&wblock->notify_lock); 639 + 640 + wmi_device_put(wdev); 663 641 664 642 return status; 665 643 } ··· 679 655 acpi_status wmi_get_event_data(u32 event, struct acpi_buffer *out) 680 656 { 681 657 struct wmi_block *wblock; 658 + struct wmi_device *wdev; 659 + acpi_status status; 682 660 683 - list_for_each_entry(wblock, &wmi_block_list, list) { 684 - struct guid_block *gblock = &wblock->gblock; 661 + wdev = wmi_find_event_by_notify_id(event); 662 + if (IS_ERR(wdev)) 663 + return AE_NOT_FOUND; 685 664 686 - if ((gblock->flags & ACPI_WMI_EVENT) && gblock->notify_id == event) 687 - return get_event_data(wblock, out); 688 - } 665 + wblock = container_of(wdev, struct wmi_block, dev); 666 + status = get_event_data(wblock, out); 689 667 690 - return AE_NOT_FOUND; 668 + wmi_device_put(wdev); 669 + 670 + return status; 691 671 } 692 672 EXPORT_SYMBOL_GPL(wmi_get_event_data); 693 673 ··· 896 868 if (wdriver->probe) { 897 869 ret = wdriver->probe(dev_to_wdev(dev), 898 870 find_guid_context(wblock, wdriver)); 899 - if (!ret) { 871 + if (ret) { 900 872 if (ACPI_FAILURE(wmi_method_enable(wblock, false))) 901 873 dev_warn(dev, "Failed to disable device\n"); 902 874 ··· 904 876 } 905 877 } 906 878 907 - set_bit(WMI_PROBED, &wblock->flags); 879 + down_write(&wblock->notify_lock); 880 + wblock->driver_ready = true; 881 + up_write(&wblock->notify_lock); 908 882 909 883 return 0; 910 884 } ··· 916 886 struct wmi_block *wblock = dev_to_wblock(dev); 917 887 struct wmi_driver *wdriver = drv_to_wdrv(dev->driver); 918 888 919 - clear_bit(WMI_PROBED, &wblock->flags); 889 + down_write(&wblock->notify_lock); 890 + wblock->driver_ready = false; 891 + up_write(&wblock->notify_lock); 920 892 921 893 if (wdriver->remove) 922 894 wdriver->remove(dev_to_wdev(dev)); ··· 1031 999 wblock->dev.setable = true; 1032 1000 1033 1001 out_init: 1002 + init_rwsem(&wblock->notify_lock); 1003 + wblock->driver_ready = false; 1034 1004 wblock->dev.dev.bus = &wmi_bus_type; 1035 1005 wblock->dev.dev.parent = wmi_bus_dev; 1036 1006 ··· 1205 1171 } 1206 1172 } 1207 1173 1174 + static void wmi_notify_driver(struct wmi_block *wblock) 1175 + { 1176 + struct wmi_driver *driver = drv_to_wdrv(wblock->dev.dev.driver); 1177 + struct acpi_buffer data = { ACPI_ALLOCATE_BUFFER, NULL }; 1178 + acpi_status status; 1179 + 1180 + if (!driver->no_notify_data) { 1181 + status = get_event_data(wblock, &data); 1182 + if (ACPI_FAILURE(status)) { 1183 + dev_warn(&wblock->dev.dev, "Failed to get event data\n"); 1184 + return; 1185 + } 1186 + } 1187 + 1188 + if (driver->notify) 1189 + driver->notify(&wblock->dev, data.pointer); 1190 + 1191 + kfree(data.pointer); 1192 + } 1193 + 1208 1194 static int wmi_notify_device(struct device *dev, void *data) 1209 1195 { 1210 1196 struct wmi_block *wblock = dev_to_wblock(dev); ··· 1233 1179 if (!(wblock->gblock.flags & ACPI_WMI_EVENT && wblock->gblock.notify_id == *event)) 1234 1180 return 0; 1235 1181 1236 - /* If a driver is bound, then notify the driver. */ 1237 - if (test_bit(WMI_PROBED, &wblock->flags) && wblock->dev.dev.driver) { 1238 - struct wmi_driver *driver = drv_to_wdrv(wblock->dev.dev.driver); 1239 - struct acpi_buffer evdata = { ACPI_ALLOCATE_BUFFER, NULL }; 1240 - acpi_status status; 1241 - 1242 - if (!driver->no_notify_data) { 1243 - status = get_event_data(wblock, &evdata); 1244 - if (ACPI_FAILURE(status)) { 1245 - dev_warn(&wblock->dev.dev, "failed to get event data\n"); 1246 - return -EIO; 1247 - } 1248 - } 1249 - 1250 - if (driver->notify) 1251 - driver->notify(&wblock->dev, evdata.pointer); 1252 - 1253 - kfree(evdata.pointer); 1254 - } else if (wblock->handler) { 1255 - /* Legacy handler */ 1256 - wblock->handler(*event, wblock->handler_data); 1182 + down_read(&wblock->notify_lock); 1183 + /* The WMI driver notify handler conflicts with the legacy WMI handler. 1184 + * Because of this the WMI driver notify handler takes precedence. 1185 + */ 1186 + if (wblock->dev.dev.driver && wblock->driver_ready) { 1187 + wmi_notify_driver(wblock); 1188 + } else { 1189 + if (wblock->handler) 1190 + wblock->handler(*event, wblock->handler_data); 1257 1191 } 1192 + up_read(&wblock->notify_lock); 1258 1193 1259 1194 acpi_bus_generate_netlink_event(wblock->acpi_device->pnp.device_class, 1260 1195 dev_name(&wblock->dev.dev), *event, 0);
+1 -1
drivers/regulator/max5970-regulator.c
··· 392 392 return ret; 393 393 394 394 if (*val) 395 - return regmap_write(map, reg, *val); 395 + return regmap_write(map, reg, 0); 396 396 397 397 return 0; 398 398 }
+43
drivers/regulator/pwm-regulator.c
··· 157 157 158 158 pwm_get_state(drvdata->pwm, &pstate); 159 159 160 + if (!pstate.enabled) { 161 + if (pstate.polarity == PWM_POLARITY_INVERSED) 162 + pstate.duty_cycle = pstate.period; 163 + else 164 + pstate.duty_cycle = 0; 165 + } 166 + 160 167 voltage = pwm_get_relative_duty_cycle(&pstate, duty_unit); 168 + if (voltage < min(max_uV_duty, min_uV_duty) || 169 + voltage > max(max_uV_duty, min_uV_duty)) 170 + return -ENOTRECOVERABLE; 161 171 162 172 /* 163 173 * The dutycycle for min_uV might be greater than the one for max_uV. ··· 323 313 return 0; 324 314 } 325 315 316 + static int pwm_regulator_init_boot_on(struct platform_device *pdev, 317 + struct pwm_regulator_data *drvdata, 318 + const struct regulator_init_data *init_data) 319 + { 320 + struct pwm_state pstate; 321 + 322 + if (!init_data->constraints.boot_on || drvdata->enb_gpio) 323 + return 0; 324 + 325 + pwm_get_state(drvdata->pwm, &pstate); 326 + if (pstate.enabled) 327 + return 0; 328 + 329 + /* 330 + * Update the duty cycle so the output does not change 331 + * when the regulator core enables the regulator (and 332 + * thus the PWM channel). 333 + */ 334 + if (pstate.polarity == PWM_POLARITY_INVERSED) 335 + pstate.duty_cycle = pstate.period; 336 + else 337 + pstate.duty_cycle = 0; 338 + 339 + return pwm_apply_might_sleep(drvdata->pwm, &pstate); 340 + } 341 + 326 342 static int pwm_regulator_probe(struct platform_device *pdev) 327 343 { 328 344 const struct regulator_init_data *init_data; ··· 407 371 ret = pwm_adjust_config(drvdata->pwm); 408 372 if (ret) 409 373 return ret; 374 + 375 + ret = pwm_regulator_init_boot_on(pdev, drvdata, init_data); 376 + if (ret) { 377 + dev_err(&pdev->dev, "Failed to apply boot_on settings: %d\n", 378 + ret); 379 + return ret; 380 + } 410 381 411 382 regulator = devm_regulator_register(&pdev->dev, 412 383 &drvdata->desc, &config);
+19 -3
drivers/regulator/ti-abb-regulator.c
··· 726 726 return PTR_ERR(abb->setup_reg); 727 727 } 728 728 729 - abb->int_base = devm_platform_ioremap_resource_byname(pdev, "int-address"); 730 - if (IS_ERR(abb->int_base)) 731 - return PTR_ERR(abb->int_base); 729 + pname = "int-address"; 730 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, pname); 731 + if (!res) { 732 + dev_err(dev, "Missing '%s' IO resource\n", pname); 733 + return -ENODEV; 734 + } 735 + /* 736 + * The MPU interrupt status register (PRM_IRQSTATUS_MPU) is 737 + * shared between regulator-abb-{ivahd,dspeve,gpu} driver 738 + * instances. Therefore use devm_ioremap() rather than 739 + * devm_platform_ioremap_resource_byname() to avoid busy 740 + * resource region conflicts. 741 + */ 742 + abb->int_base = devm_ioremap(dev, res->start, 743 + resource_size(res)); 744 + if (!abb->int_base) { 745 + dev_err(dev, "Unable to map '%s'\n", pname); 746 + return -ENOMEM; 747 + } 732 748 733 749 /* Map Optional resources */ 734 750 pname = "efuse-address";
+1 -2
drivers/scsi/initio.c
··· 371 371 */ 372 372 static void initio_se2_wr(unsigned long base, u8 addr, u16 val) 373 373 { 374 - u8 rb; 375 374 u8 instr; 376 375 int i; 377 376 ··· 399 400 udelay(30); 400 401 outb(SE2CS, base + TUL_NVRAM); /* -CLK */ 401 402 udelay(30); 402 - if ((rb = inb(base + TUL_NVRAM)) & SE2DI) 403 + if (inb(base + TUL_NVRAM) & SE2DI) 403 404 break; /* write complete */ 404 405 } 405 406 outb(0, base + TUL_NVRAM); /* -CS */
+1 -1
drivers/scsi/isci/request.c
··· 3387 3387 return SCI_FAILURE; 3388 3388 } 3389 3389 3390 - return SCI_SUCCESS; 3390 + return status; 3391 3391 } 3392 3392 3393 3393 static struct isci_request *isci_request_from_tag(struct isci_host *ihost, u16 tag)
+4 -4
drivers/scsi/scsi_error.c
··· 61 61 static enum scsi_disposition scsi_try_to_abort_cmd(const struct scsi_host_template *, 62 62 struct scsi_cmnd *); 63 63 64 - void scsi_eh_wakeup(struct Scsi_Host *shost) 64 + void scsi_eh_wakeup(struct Scsi_Host *shost, unsigned int busy) 65 65 { 66 66 lockdep_assert_held(shost->host_lock); 67 67 68 - if (scsi_host_busy(shost) == shost->host_failed) { 68 + if (busy == shost->host_failed) { 69 69 trace_scsi_eh_wakeup(shost); 70 70 wake_up_process(shost->ehandler); 71 71 SCSI_LOG_ERROR_RECOVERY(5, shost_printk(KERN_INFO, shost, ··· 88 88 if (scsi_host_set_state(shost, SHOST_RECOVERY) == 0 || 89 89 scsi_host_set_state(shost, SHOST_CANCEL_RECOVERY) == 0) { 90 90 shost->host_eh_scheduled++; 91 - scsi_eh_wakeup(shost); 91 + scsi_eh_wakeup(shost, scsi_host_busy(shost)); 92 92 } 93 93 94 94 spin_unlock_irqrestore(shost->host_lock, flags); ··· 286 286 287 287 spin_lock_irqsave(shost->host_lock, flags); 288 288 shost->host_failed++; 289 - scsi_eh_wakeup(shost); 289 + scsi_eh_wakeup(shost, scsi_host_busy(shost)); 290 290 spin_unlock_irqrestore(shost->host_lock, flags); 291 291 } 292 292
+1 -1
drivers/scsi/scsi_lib.c
··· 280 280 if (unlikely(scsi_host_in_recovery(shost))) { 281 281 spin_lock_irqsave(shost->host_lock, flags); 282 282 if (shost->host_failed || shost->host_eh_scheduled) 283 - scsi_eh_wakeup(shost); 283 + scsi_eh_wakeup(shost, scsi_host_busy(shost)); 284 284 spin_unlock_irqrestore(shost->host_lock, flags); 285 285 } 286 286 rcu_read_unlock();
+1 -1
drivers/scsi/scsi_priv.h
··· 92 92 extern enum blk_eh_timer_return scsi_timeout(struct request *req); 93 93 extern int scsi_error_handler(void *host); 94 94 extern enum scsi_disposition scsi_decide_disposition(struct scsi_cmnd *cmd); 95 - extern void scsi_eh_wakeup(struct Scsi_Host *shost); 95 + extern void scsi_eh_wakeup(struct Scsi_Host *shost, unsigned int busy); 96 96 extern void scsi_eh_scmd_add(struct scsi_cmnd *); 97 97 void scsi_eh_ready_devs(struct Scsi_Host *shost, 98 98 struct list_head *work_q,
+7 -5
drivers/scsi/storvsc_drv.c
··· 330 330 */ 331 331 332 332 static int storvsc_ringbuffer_size = (128 * 1024); 333 + static int aligned_ringbuffer_size; 333 334 static u32 max_outstanding_req_per_channel; 334 335 static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth); 335 336 ··· 688 687 new_sc->next_request_id_callback = storvsc_next_request_id; 689 688 690 689 ret = vmbus_open(new_sc, 691 - storvsc_ringbuffer_size, 692 - storvsc_ringbuffer_size, 690 + aligned_ringbuffer_size, 691 + aligned_ringbuffer_size, 693 692 (void *)&props, 694 693 sizeof(struct vmstorage_channel_properties), 695 694 storvsc_on_channel_callback, new_sc); ··· 1974 1973 dma_set_min_align_mask(&device->device, HV_HYP_PAGE_SIZE - 1); 1975 1974 1976 1975 stor_device->port_number = host->host_no; 1977 - ret = storvsc_connect_to_vsp(device, storvsc_ringbuffer_size, is_fc); 1976 + ret = storvsc_connect_to_vsp(device, aligned_ringbuffer_size, is_fc); 1978 1977 if (ret) 1979 1978 goto err_out1; 1980 1979 ··· 2165 2164 { 2166 2165 int ret; 2167 2166 2168 - ret = storvsc_connect_to_vsp(hv_dev, storvsc_ringbuffer_size, 2167 + ret = storvsc_connect_to_vsp(hv_dev, aligned_ringbuffer_size, 2169 2168 hv_dev_is_fc(hv_dev)); 2170 2169 return ret; 2171 2170 } ··· 2199 2198 * the ring buffer indices) by the max request size (which is 2200 2199 * vmbus_channel_packet_multipage_buffer + struct vstor_packet + u64) 2201 2200 */ 2201 + aligned_ringbuffer_size = VMBUS_RING_SIZE(storvsc_ringbuffer_size); 2202 2202 max_outstanding_req_per_channel = 2203 - ((storvsc_ringbuffer_size - PAGE_SIZE) / 2203 + ((aligned_ringbuffer_size - PAGE_SIZE) / 2204 2204 ALIGN(MAX_MULTIPAGE_BUFFER_PACKET + 2205 2205 sizeof(struct vstor_packet) + sizeof(u64), 2206 2206 sizeof(u64)));
-2
drivers/scsi/virtio_scsi.c
··· 188 188 while ((buf = virtqueue_get_buf(vq, &len)) != NULL) 189 189 fn(vscsi, buf); 190 190 191 - if (unlikely(virtqueue_is_broken(vq))) 192 - break; 193 191 } while (!virtqueue_enable_cb(vq)); 194 192 spin_unlock_irqrestore(&virtscsi_vq->vq_lock, flags); 195 193 }
+3 -3
drivers/soc/apple/mailbox.c
··· 296 296 of_node_put(args.np); 297 297 298 298 if (!pdev) 299 - return ERR_PTR(EPROBE_DEFER); 299 + return ERR_PTR(-EPROBE_DEFER); 300 300 301 301 mbox = platform_get_drvdata(pdev); 302 302 if (!mbox) 303 - return ERR_PTR(EPROBE_DEFER); 303 + return ERR_PTR(-EPROBE_DEFER); 304 304 305 305 if (!device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_CONSUMER)) 306 - return ERR_PTR(ENODEV); 306 + return ERR_PTR(-ENODEV); 307 307 308 308 return mbox; 309 309 }
+2 -2
drivers/spi/spi-bcm-qspi.c
··· 19 19 #include <linux/platform_device.h> 20 20 #include <linux/slab.h> 21 21 #include <linux/spi/spi.h> 22 - #include <linux/spi/spi-mem.h> 22 + #include <linux/mtd/spi-nor.h> 23 23 #include <linux/sysfs.h> 24 24 #include <linux/types.h> 25 25 #include "spi-bcm-qspi.h" ··· 1221 1221 1222 1222 /* non-aligned and very short transfers are handled by MSPI */ 1223 1223 if (!IS_ALIGNED((uintptr_t)addr, 4) || !IS_ALIGNED((uintptr_t)buf, 4) || 1224 - len < 4) 1224 + len < 4 || op->cmd.opcode == SPINOR_OP_RDSFDP) 1225 1225 mspi_read = true; 1226 1226 1227 1227 if (!has_bspi(qspi) || mspi_read)
+9 -8
drivers/spi/spi-cadence.c
··· 317 317 xspi->rx_bytes -= nrx; 318 318 319 319 while (ntx || nrx) { 320 + if (nrx) { 321 + u8 data = cdns_spi_read(xspi, CDNS_SPI_RXD); 322 + 323 + if (xspi->rxbuf) 324 + *xspi->rxbuf++ = data; 325 + 326 + nrx--; 327 + } 328 + 320 329 if (ntx) { 321 330 if (xspi->txbuf) 322 331 cdns_spi_write(xspi, CDNS_SPI_TXD, *xspi->txbuf++); ··· 335 326 ntx--; 336 327 } 337 328 338 - if (nrx) { 339 - u8 data = cdns_spi_read(xspi, CDNS_SPI_RXD); 340 - 341 - if (xspi->rxbuf) 342 - *xspi->rxbuf++ = data; 343 - 344 - nrx--; 345 - } 346 329 } 347 330 } 348 331
+4 -1
drivers/spi/spi-cs42l43.c
··· 244 244 priv->ctlr->use_gpio_descriptors = true; 245 245 priv->ctlr->auto_runtime_pm = true; 246 246 247 - devm_pm_runtime_enable(priv->dev); 247 + ret = devm_pm_runtime_enable(priv->dev); 248 + if (ret) 249 + return ret; 250 + 248 251 pm_runtime_idle(priv->dev); 249 252 250 253 regmap_write(priv->regmap, CS42L43_TRAN_CONFIG6, CS42L43_FIFO_SIZE - 1);
+5
drivers/spi/spi-hisi-sfc-v3xx.c
··· 377 377 static irqreturn_t hisi_sfc_v3xx_isr(int irq, void *data) 378 378 { 379 379 struct hisi_sfc_v3xx_host *host = data; 380 + u32 reg; 381 + 382 + reg = readl(host->regbase + HISI_SFC_V3XX_INT_STAT); 383 + if (!reg) 384 + return IRQ_NONE; 380 385 381 386 hisi_sfc_v3xx_disable_int(host); 382 387
+2 -2
drivers/spi/spi-imx.c
··· 1344 1344 controller->dma_tx = dma_request_chan(dev, "tx"); 1345 1345 if (IS_ERR(controller->dma_tx)) { 1346 1346 ret = PTR_ERR(controller->dma_tx); 1347 - dev_dbg(dev, "can't get the TX DMA channel, error %d!\n", ret); 1347 + dev_err_probe(dev, ret, "can't get the TX DMA channel!\n"); 1348 1348 controller->dma_tx = NULL; 1349 1349 goto err; 1350 1350 } ··· 1353 1353 controller->dma_rx = dma_request_chan(dev, "rx"); 1354 1354 if (IS_ERR(controller->dma_rx)) { 1355 1355 ret = PTR_ERR(controller->dma_rx); 1356 - dev_dbg(dev, "can't get the RX DMA channel, error %d\n", ret); 1356 + dev_err_probe(dev, ret, "can't get the RX DMA channel!\n"); 1357 1357 controller->dma_rx = NULL; 1358 1358 goto err; 1359 1359 }
+1 -1
drivers/spi/spi-intel-pci.c
··· 76 76 { PCI_VDEVICE(INTEL, 0x7a24), (unsigned long)&cnl_info }, 77 77 { PCI_VDEVICE(INTEL, 0x7aa4), (unsigned long)&cnl_info }, 78 78 { PCI_VDEVICE(INTEL, 0x7e23), (unsigned long)&cnl_info }, 79 + { PCI_VDEVICE(INTEL, 0x7f24), (unsigned long)&cnl_info }, 79 80 { PCI_VDEVICE(INTEL, 0x9d24), (unsigned long)&cnl_info }, 80 81 { PCI_VDEVICE(INTEL, 0x9da4), (unsigned long)&cnl_info }, 81 82 { PCI_VDEVICE(INTEL, 0xa0a4), (unsigned long)&cnl_info }, ··· 85 84 { PCI_VDEVICE(INTEL, 0xa2a4), (unsigned long)&cnl_info }, 86 85 { PCI_VDEVICE(INTEL, 0xa324), (unsigned long)&cnl_info }, 87 86 { PCI_VDEVICE(INTEL, 0xa3a4), (unsigned long)&cnl_info }, 88 - { PCI_VDEVICE(INTEL, 0xae23), (unsigned long)&cnl_info }, 89 87 { }, 90 88 }; 91 89 MODULE_DEVICE_TABLE(pci, intel_spi_pci_ids);
+8 -8
drivers/spi/spi-sh-msiof.c
··· 136 136 137 137 /* SIFCTR */ 138 138 #define SIFCTR_TFWM_MASK GENMASK(31, 29) /* Transmit FIFO Watermark */ 139 - #define SIFCTR_TFWM_64 (0 << 29) /* Transfer Request when 64 empty stages */ 140 - #define SIFCTR_TFWM_32 (1 << 29) /* Transfer Request when 32 empty stages */ 141 - #define SIFCTR_TFWM_24 (2 << 29) /* Transfer Request when 24 empty stages */ 142 - #define SIFCTR_TFWM_16 (3 << 29) /* Transfer Request when 16 empty stages */ 143 - #define SIFCTR_TFWM_12 (4 << 29) /* Transfer Request when 12 empty stages */ 144 - #define SIFCTR_TFWM_8 (5 << 29) /* Transfer Request when 8 empty stages */ 145 - #define SIFCTR_TFWM_4 (6 << 29) /* Transfer Request when 4 empty stages */ 146 - #define SIFCTR_TFWM_1 (7 << 29) /* Transfer Request when 1 empty stage */ 139 + #define SIFCTR_TFWM_64 (0UL << 29) /* Transfer Request when 64 empty stages */ 140 + #define SIFCTR_TFWM_32 (1UL << 29) /* Transfer Request when 32 empty stages */ 141 + #define SIFCTR_TFWM_24 (2UL << 29) /* Transfer Request when 24 empty stages */ 142 + #define SIFCTR_TFWM_16 (3UL << 29) /* Transfer Request when 16 empty stages */ 143 + #define SIFCTR_TFWM_12 (4UL << 29) /* Transfer Request when 12 empty stages */ 144 + #define SIFCTR_TFWM_8 (5UL << 29) /* Transfer Request when 8 empty stages */ 145 + #define SIFCTR_TFWM_4 (6UL << 29) /* Transfer Request when 4 empty stages */ 146 + #define SIFCTR_TFWM_1 (7UL << 29) /* Transfer Request when 1 empty stage */ 147 147 #define SIFCTR_TFUA_MASK GENMASK(26, 20) /* Transmit FIFO Usable Area */ 148 148 #define SIFCTR_TFUA_SHIFT 20 149 149 #define SIFCTR_TFUA(i) ((i) << SIFCTR_TFUA_SHIFT)
+4
drivers/spi/spi.c
··· 1717 1717 pm_runtime_put_noidle(ctlr->dev.parent); 1718 1718 dev_err(&ctlr->dev, "Failed to power device: %d\n", 1719 1719 ret); 1720 + 1721 + msg->status = ret; 1722 + spi_finalize_current_message(ctlr); 1723 + 1720 1724 return ret; 1721 1725 } 1722 1726 }
-32
drivers/thermal/intel/intel_powerclamp.c
··· 49 49 */ 50 50 #define DEFAULT_DURATION_JIFFIES (6) 51 51 52 - static unsigned int target_mwait; 53 52 static struct dentry *debug_dir; 54 53 static bool poll_pkg_cstate_enable; 55 54 ··· 310 311 "\tpowerclamp controls idle ratio within this window. larger\n" 311 312 "\twindow size results in slower response time but more smooth\n" 312 313 "\tclamping results. default to 2."); 313 - 314 - static void find_target_mwait(void) 315 - { 316 - unsigned int eax, ebx, ecx, edx; 317 - unsigned int highest_cstate = 0; 318 - unsigned int highest_subcstate = 0; 319 - int i; 320 - 321 - if (boot_cpu_data.cpuid_level < CPUID_MWAIT_LEAF) 322 - return; 323 - 324 - cpuid(CPUID_MWAIT_LEAF, &eax, &ebx, &ecx, &edx); 325 - 326 - if (!(ecx & CPUID5_ECX_EXTENSIONS_SUPPORTED) || 327 - !(ecx & CPUID5_ECX_INTERRUPT_BREAK)) 328 - return; 329 - 330 - edx >>= MWAIT_SUBSTATE_SIZE; 331 - for (i = 0; i < 7 && edx; i++, edx >>= MWAIT_SUBSTATE_SIZE) { 332 - if (edx & MWAIT_SUBSTATE_MASK) { 333 - highest_cstate = i; 334 - highest_subcstate = edx & MWAIT_SUBSTATE_MASK; 335 - } 336 - } 337 - target_mwait = (highest_cstate << MWAIT_SUBSTATE_SIZE) | 338 - (highest_subcstate - 1); 339 - 340 - } 341 314 342 315 struct pkg_cstate_info { 343 316 bool skip; ··· 729 758 pr_info("No package C-state available\n"); 730 759 return -ENODEV; 731 760 } 732 - 733 - /* find the deepest mwait value */ 734 - find_target_mwait(); 735 761 736 762 return 0; 737 763 }
+1 -1
fs/bcachefs/alloc_background.c
··· 1715 1715 * This works without any other locks because this is the only 1716 1716 * thread that removes items from the need_discard tree 1717 1717 */ 1718 - bch2_trans_unlock(trans); 1718 + bch2_trans_unlock_long(trans); 1719 1719 blkdev_issue_discard(ca->disk_sb.bdev, 1720 1720 k.k->p.offset * ca->mi.bucket_size, 1721 1721 ca->mi.bucket_size,
+2 -2
fs/bcachefs/btree_locking.c
··· 92 92 continue; 93 93 94 94 bch2_btree_trans_to_text(out, i->trans); 95 - bch2_prt_task_backtrace(out, task, i == g->g ? 5 : 1); 95 + bch2_prt_task_backtrace(out, task, i == g->g ? 5 : 1, GFP_NOWAIT); 96 96 } 97 97 } 98 98 ··· 227 227 prt_printf(&buf, "backtrace:"); 228 228 prt_newline(&buf); 229 229 printbuf_indent_add(&buf, 2); 230 - bch2_prt_task_backtrace(&buf, trans->locking_wait.task, 2); 230 + bch2_prt_task_backtrace(&buf, trans->locking_wait.task, 2, GFP_NOWAIT); 231 231 printbuf_indent_sub(&buf, 2); 232 232 prt_newline(&buf); 233 233 }
+1 -1
fs/bcachefs/debug.c
··· 627 627 prt_printf(&i->buf, "backtrace:"); 628 628 prt_newline(&i->buf); 629 629 printbuf_indent_add(&i->buf, 2); 630 - bch2_prt_task_backtrace(&i->buf, task, 0); 630 + bch2_prt_task_backtrace(&i->buf, task, 0, GFP_KERNEL); 631 631 printbuf_indent_sub(&i->buf, 2); 632 632 prt_newline(&i->buf); 633 633
+1 -1
fs/bcachefs/fs-io.c
··· 79 79 continue; 80 80 81 81 bio = container_of(bio_alloc_bioset(ca->disk_sb.bdev, 0, 82 - REQ_OP_FLUSH, 82 + REQ_OP_WRITE|REQ_PREFLUSH, 83 83 GFP_KERNEL, 84 84 &c->nocow_flush_bioset), 85 85 struct nocow_flush, bio);
+12 -11
fs/bcachefs/fsck.c
··· 119 119 if (!ret) 120 120 *snapshot = iter.pos.snapshot; 121 121 err: 122 - bch_err_msg(trans->c, ret, "fetching inode %llu:%u", inode_nr, *snapshot); 123 122 bch2_trans_iter_exit(trans, &iter); 124 123 return ret; 125 124 } 126 125 127 - static int __lookup_dirent(struct btree_trans *trans, 126 + static int lookup_dirent_in_snapshot(struct btree_trans *trans, 128 127 struct bch_hash_info hash_info, 129 128 subvol_inum dir, struct qstr *name, 130 - u64 *target, unsigned *type) 129 + u64 *target, unsigned *type, u32 snapshot) 131 130 { 132 131 struct btree_iter iter; 133 132 struct bkey_s_c_dirent d; 134 - int ret; 135 - 136 - ret = bch2_hash_lookup(trans, &iter, bch2_dirent_hash_desc, 137 - &hash_info, dir, name, 0); 133 + int ret = bch2_hash_lookup_in_snapshot(trans, &iter, bch2_dirent_hash_desc, 134 + &hash_info, dir, name, 0, snapshot); 138 135 if (ret) 139 136 return ret; 140 137
··· 222 225 223 226 struct bch_inode_unpacked root_inode; 224 227 struct bch_hash_info root_hash_info; 225 - ret = lookup_inode(trans, root_inum.inum, &root_inode, &snapshot); 228 + u32 root_inode_snapshot = snapshot; 229 + ret = lookup_inode(trans, root_inum.inum, &root_inode, &root_inode_snapshot); 226 230 bch_err_msg(c, ret, "looking up root inode"); 227 231 if (ret) 228 232 return ret; 229 233 230 234 root_hash_info = bch2_hash_info_init(c, &root_inode); 231 235 232 - ret = __lookup_dirent(trans, root_hash_info, root_inum, 233 - &lostfound_str, &inum, &d_type); 236 + ret = lookup_dirent_in_snapshot(trans, root_hash_info, root_inum, 237 + &lostfound_str, &inum, &d_type, snapshot); 234 238 if (bch2_err_matches(ret, ENOENT)) 235 239 goto create_lostfound; 236 240
··· 248 250 * The bch2_check_dirents pass has already run, dangling dirents 249 251 * shouldn't exist here: 250 252 */ 251 - return lookup_inode(trans, inum, lostfound, &snapshot); 253 + ret = lookup_inode(trans, inum, lostfound, &snapshot); 254 + bch_err_msg(c, ret, "looking up lost+found %llu:%u in (root inode %llu, snapshot root %u)", 255 + inum, snapshot, root_inum.inum, bch2_snapshot_root(c, snapshot)); 256 + return ret; 252 257 253 258 create_lostfound: 254 259 /*
+1 -1
fs/bcachefs/journal.c
··· 233 233 prt_str(&pbuf, "entry size: "); 234 234 prt_human_readable_u64(&pbuf, vstruct_bytes(buf->data)); 235 235 prt_newline(&pbuf); 236 - bch2_prt_task_backtrace(&pbuf, current, 1); 236 + bch2_prt_task_backtrace(&pbuf, current, 1, GFP_NOWAIT); 237 237 trace_journal_entry_close(c, pbuf.buf); 238 238 printbuf_exit(&pbuf); 239 239 }
+2 -1
fs/bcachefs/journal_io.c
··· 1988 1988 percpu_ref_get(&ca->io_ref); 1989 1989 1990 1990 bio = ca->journal.bio; 1991 - bio_reset(bio, ca->disk_sb.bdev, REQ_OP_FLUSH); 1991 + bio_reset(bio, ca->disk_sb.bdev, 1992 + REQ_OP_WRITE|REQ_PREFLUSH); 1992 1993 bio->bi_end_io = journal_write_endio; 1993 1994 bio->bi_private = ca; 1994 1995 closure_bio_submit(bio, cl);
+15 -7
fs/bcachefs/str_hash.h
··· 160 160 } 161 161 162 162 static __always_inline int 163 - bch2_hash_lookup(struct btree_trans *trans, 163 + bch2_hash_lookup_in_snapshot(struct btree_trans *trans, 164 164 struct btree_iter *iter, 165 165 const struct bch_hash_desc desc, 166 166 const struct bch_hash_info *info, 167 167 subvol_inum inum, const void *key, 168 - unsigned flags) 168 + unsigned flags, u32 snapshot) 169 169 { 170 170 struct bkey_s_c k; 171 - u32 snapshot; 172 171 int ret; 173 - 174 - ret = bch2_subvolume_get_snapshot(trans, inum.subvol, &snapshot); 175 - if (ret) 176 - return ret; 177 172 178 173 for_each_btree_key_upto_norestart(trans, *iter, desc.btree_id, 179 174 SPOS(inum.inum, desc.hash_key(info, key), snapshot), ··· 187 192 bch2_trans_iter_exit(trans, iter); 188 193 189 194 return ret ?: -BCH_ERR_ENOENT_str_hash_lookup; 195 + } 196 + 197 + static __always_inline int 198 + bch2_hash_lookup(struct btree_trans *trans, 199 + struct btree_iter *iter, 200 + const struct bch_hash_desc desc, 201 + const struct bch_hash_info *info, 202 + subvol_inum inum, const void *key, 203 + unsigned flags) 204 + { 205 + u32 snapshot; 206 + return bch2_subvolume_get_snapshot(trans, inum.subvol, &snapshot) ?: 207 + bch2_hash_lookup_in_snapshot(trans, iter, desc, info, inum, key, flags, snapshot); 190 208 } 191 209 192 210 static __always_inline int
+5 -5
fs/bcachefs/util.c
··· 272 272 console_unlock(); 273 273 } 274 274 275 - int bch2_save_backtrace(bch_stacktrace *stack, struct task_struct *task, unsigned skipnr) 275 + int bch2_save_backtrace(bch_stacktrace *stack, struct task_struct *task, unsigned skipnr, 276 + gfp_t gfp) 276 277 { 277 278 #ifdef CONFIG_STACKTRACE 278 279 unsigned nr_entries = 0; 279 - int ret = 0; 280 280 281 281 stack->nr = 0; 282 - ret = darray_make_room(stack, 32); 282 + int ret = darray_make_room_gfp(stack, 32, gfp); 283 283 if (ret) 284 284 return ret; 285 285 ··· 308 308 } 309 309 } 310 310 311 - int bch2_prt_task_backtrace(struct printbuf *out, struct task_struct *task, unsigned skipnr) 311 + int bch2_prt_task_backtrace(struct printbuf *out, struct task_struct *task, unsigned skipnr, gfp_t gfp) 312 312 { 313 313 bch_stacktrace stack = { 0 }; 314 - int ret = bch2_save_backtrace(&stack, task, skipnr + 1); 314 + int ret = bch2_save_backtrace(&stack, task, skipnr + 1, gfp); 315 315 316 316 bch2_prt_backtrace(out, &stack); 317 317 darray_exit(&stack);
+2 -2
fs/bcachefs/util.h
··· 348 348 void bch2_print_string_as_lines(const char *prefix, const char *lines); 349 349 350 350 typedef DARRAY(unsigned long) bch_stacktrace; 351 - int bch2_save_backtrace(bch_stacktrace *stack, struct task_struct *, unsigned); 351 + int bch2_save_backtrace(bch_stacktrace *stack, struct task_struct *, unsigned, gfp_t); 352 352 void bch2_prt_backtrace(struct printbuf *, bch_stacktrace *); 353 - int bch2_prt_task_backtrace(struct printbuf *, struct task_struct *, unsigned); 353 + int bch2_prt_task_backtrace(struct printbuf *, struct task_struct *, unsigned, gfp_t); 354 354 355 355 static inline void prt_bdevname(struct printbuf *out, struct block_device *bdev) 356 356 {
+2 -3
fs/erofs/compress.h
··· 11 11 struct z_erofs_decompress_req { 12 12 struct super_block *sb; 13 13 struct page **in, **out; 14 - 15 14 unsigned short pageofs_in, pageofs_out; 16 15 unsigned int inputsize, outputsize; 17 16 18 - /* indicate the algorithm will be used for decompression */ 19 - unsigned int alg; 17 + unsigned int alg; /* the algorithm for decompression */ 20 18 bool inplace_io, partial_decoding, fillgaps; 19 + gfp_t gfp; /* allocation flags for extra temporary buffers */ 21 20 }; 22 21 23 22 struct z_erofs_decompressor {
+3 -2
fs/erofs/decompressor.c
··· 111 111 victim = availables[--top]; 112 112 get_page(victim); 113 113 } else { 114 - victim = erofs_allocpage(pagepool, 115 - GFP_KERNEL | __GFP_NOFAIL); 114 + victim = erofs_allocpage(pagepool, rq->gfp); 115 + if (!victim) 116 + return -ENOMEM; 116 117 set_page_private(victim, Z_EROFS_SHORTLIVED_PAGE); 117 118 } 118 119 rq->out[i] = victim;
+13 -6
fs/erofs/decompressor_deflate.c
··· 95 95 } 96 96 97 97 int z_erofs_deflate_decompress(struct z_erofs_decompress_req *rq, 98 - struct page **pagepool) 98 + struct page **pgpl) 99 99 { 100 100 const unsigned int nrpages_out = 101 101 PAGE_ALIGN(rq->pageofs_out + rq->outputsize) >> PAGE_SHIFT; ··· 158 158 strm->z.avail_out = min_t(u32, outsz, PAGE_SIZE - pofs); 159 159 outsz -= strm->z.avail_out; 160 160 if (!rq->out[no]) { 161 - rq->out[no] = erofs_allocpage(pagepool, 162 - GFP_KERNEL | __GFP_NOFAIL); 161 + rq->out[no] = erofs_allocpage(pgpl, rq->gfp); 162 + if (!rq->out[no]) { 163 + kout = NULL; 164 + err = -ENOMEM; 165 + break; 166 + } 163 167 set_page_private(rq->out[no], 164 168 Z_EROFS_SHORTLIVED_PAGE); 165 169 } ··· 215 211 216 212 DBG_BUGON(erofs_page_is_managed(EROFS_SB(sb), 217 213 rq->in[j])); 218 - tmppage = erofs_allocpage(pagepool, 219 - GFP_KERNEL | __GFP_NOFAIL); 214 + tmppage = erofs_allocpage(pgpl, rq->gfp); 215 + if (!tmppage) { 216 + err = -ENOMEM; 217 + goto failed; 218 + } 220 219 set_page_private(tmppage, Z_EROFS_SHORTLIVED_PAGE); 221 220 copy_highpage(tmppage, rq->in[j]); 222 221 rq->in[j] = tmppage; ··· 237 230 break; 238 231 } 239 232 } 240 - 233 + failed: 241 234 if (zlib_inflateEnd(&strm->z) != Z_OK && !err) 242 235 err = -EIO; 243 236 if (kout)
+12 -5
fs/erofs/decompressor_lzma.c
··· 148 148 } 149 149 150 150 int z_erofs_lzma_decompress(struct z_erofs_decompress_req *rq, 151 - struct page **pagepool) 151 + struct page **pgpl) 152 152 { 153 153 const unsigned int nrpages_out = 154 154 PAGE_ALIGN(rq->pageofs_out + rq->outputsize) >> PAGE_SHIFT; ··· 215 215 PAGE_SIZE - pageofs); 216 216 outlen -= strm->buf.out_size; 217 217 if (!rq->out[no] && rq->fillgaps) { /* deduped */ 218 - rq->out[no] = erofs_allocpage(pagepool, 219 - GFP_KERNEL | __GFP_NOFAIL); 218 + rq->out[no] = erofs_allocpage(pgpl, rq->gfp); 219 + if (!rq->out[no]) { 220 + err = -ENOMEM; 221 + break; 222 + } 220 223 set_page_private(rq->out[no], 221 224 Z_EROFS_SHORTLIVED_PAGE); 222 225 } ··· 261 258 262 259 DBG_BUGON(erofs_page_is_managed(EROFS_SB(rq->sb), 263 260 rq->in[j])); 264 - tmppage = erofs_allocpage(pagepool, 265 - GFP_KERNEL | __GFP_NOFAIL); 261 + tmppage = erofs_allocpage(pgpl, rq->gfp); 262 + if (!tmppage) { 263 + err = -ENOMEM; 264 + goto failed; 265 + } 266 266 set_page_private(tmppage, Z_EROFS_SHORTLIVED_PAGE); 267 267 copy_highpage(tmppage, rq->in[j]); 268 268 rq->in[j] = tmppage; ··· 283 277 break; 284 278 } 285 279 } 280 + failed: 286 281 if (no < nrpages_out && strm->buf.out) 287 282 kunmap(rq->out[no]); 288 283 if (ni < nrpages_in)
+1 -1
fs/erofs/fscache.c
··· 459 459 460 460 inode->i_size = OFFSET_MAX; 461 461 inode->i_mapping->a_ops = &erofs_fscache_meta_aops; 462 - mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS); 462 + mapping_set_gfp_mask(inode->i_mapping, GFP_KERNEL); 463 463 inode->i_blkbits = EROFS_SB(sb)->blkszbits; 464 464 inode->i_private = ctx; 465 465
+1 -1
fs/erofs/inode.c
··· 60 60 } else { 61 61 const unsigned int gotten = sb->s_blocksize - *ofs; 62 62 63 - copied = kmalloc(vi->inode_isize, GFP_NOFS); 63 + copied = kmalloc(vi->inode_isize, GFP_KERNEL); 64 64 if (!copied) { 65 65 err = -ENOMEM; 66 66 goto err_out;
+1 -1
fs/erofs/utils.c
··· 81 81 repeat: 82 82 xa_lock(&sbi->managed_pslots); 83 83 pre = __xa_cmpxchg(&sbi->managed_pslots, grp->index, 84 - NULL, grp, GFP_NOFS); 84 + NULL, grp, GFP_KERNEL); 85 85 if (pre) { 86 86 if (xa_is_err(pre)) { 87 87 pre = ERR_PTR(xa_err(pre));
+54 -44
fs/erofs/zdata.c
··· 82 82 /* L: indicate several pageofs_outs or not */ 83 83 bool multibases; 84 84 85 + /* L: whether extra buffer allocations are best-effort */ 86 + bool besteffort; 87 + 85 88 /* A: compressed bvecs (can be cached or inplaced pages) */ 86 89 struct z_erofs_bvec compressed_bvecs[]; 87 90 };
··· 233 230 struct page *nextpage = *candidate_bvpage; 234 231 235 232 if (!nextpage) { 236 - nextpage = erofs_allocpage(pagepool, GFP_NOFS); 233 + nextpage = erofs_allocpage(pagepool, GFP_KERNEL); 237 234 if (!nextpage) 238 235 return -ENOMEM; 239 236 set_page_private(nextpage, Z_EROFS_SHORTLIVED_PAGE);
··· 305 302 if (nrpages > pcs->maxpages) 306 303 continue; 307 304 308 - pcl = kmem_cache_zalloc(pcs->slab, GFP_NOFS); 305 + pcl = kmem_cache_zalloc(pcs->slab, GFP_KERNEL); 309 306 if (!pcl) 310 307 return ERR_PTR(-ENOMEM); 311 308 pcl->pclustersize = size;
··· 566 563 __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN; 567 564 unsigned int i; 568 565 569 - if (i_blocksize(fe->inode) != PAGE_SIZE) 570 - return; 571 - if (fe->mode < Z_EROFS_PCLUSTER_FOLLOWED) 566 + if (i_blocksize(fe->inode) != PAGE_SIZE || 567 + fe->mode < Z_EROFS_PCLUSTER_FOLLOWED) 572 568 return; 573 569 574 570 for (i = 0; i < pclusterpages; ++i) { 575 571 struct page *page, *newpage; 576 572 void *t; /* mark pages just found for debugging */ 577 573 578 - /* the compressed page was loaded before */ 574 + /* Inaccurate check w/o locking to avoid unneeded lookups */ 579 575 if (READ_ONCE(pcl->compressed_bvecs[i].page)) 580 576 continue; 581 577 582 578 page = find_get_page(mc, pcl->obj.index + i); 583 - 584 579 if (page) { 585 580 t = (void *)((unsigned long)page | 1); 586 581 newpage = NULL;
··· 598 597 set_page_private(newpage, Z_EROFS_PREALLOCATED_PAGE); 599 598 t = (void *)((unsigned long)newpage | 1); 600 599 } 601 - 602 - if (!cmpxchg_relaxed(&pcl->compressed_bvecs[i].page, NULL, t)) 600 + spin_lock(&pcl->obj.lockref.lock); 601 + if (!pcl->compressed_bvecs[i].page) { 602 + pcl->compressed_bvecs[i].page = t; 603 + spin_unlock(&pcl->obj.lockref.lock); 603 604 continue; 605 + } 606 + spin_unlock(&pcl->obj.lockref.lock); 604 607 605 608 if (page) 606 609 put_page(page);
··· 699 694 DBG_BUGON(stop > folio_size(folio) || stop < length); 700 695 701 696 if (offset == 0 && stop == folio_size(folio)) 702 - while (!z_erofs_cache_release_folio(folio, GFP_NOFS)) 697 + while (!z_erofs_cache_release_folio(folio, 0)) 703 698 cond_resched(); 704 699 }
··· 718 713 set_nlink(inode, 1); 719 714 inode->i_size = OFFSET_MAX; 720 715 inode->i_mapping->a_ops = &z_erofs_cache_aops; 721 - mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS); 716 + mapping_set_gfp_mask(inode->i_mapping, GFP_KERNEL); 722 717 EROFS_SB(sb)->managed_cache = inode; 723 718 return 0; 724 - } 725 - 726 - static bool z_erofs_try_inplace_io(struct z_erofs_decompress_frontend *fe, 727 - struct z_erofs_bvec *bvec) 728 - { 729 - struct z_erofs_pcluster *const pcl = fe->pcl; 730 - 731 - while (fe->icur > 0) { 732 - if (!cmpxchg(&pcl->compressed_bvecs[--fe->icur].page, 733 - NULL, bvec->page)) { 734 - pcl->compressed_bvecs[fe->icur] = *bvec; 735 - return true; 736 - } 737 - } 738 - return false; 739 719 } 740 720 741 721 /* callers must be with pcluster lock held */ 742 722 static int z_erofs_attach_page(struct z_erofs_decompress_frontend *fe, 743 723 struct z_erofs_bvec *bvec, bool exclusive) 744 724 { 725 + struct z_erofs_pcluster *pcl = fe->pcl; 745 726 int ret; 746 727 747 728 if (exclusive) { 748 729 /* give priority for inplaceio to use file pages first */ 749 - if (z_erofs_try_inplace_io(fe, bvec)) 730 + spin_lock(&pcl->obj.lockref.lock); 731 + while (fe->icur > 0) { 732 + if (pcl->compressed_bvecs[--fe->icur].page) 733 + continue; 734 + pcl->compressed_bvecs[fe->icur] = *bvec; 735 + spin_unlock(&pcl->obj.lockref.lock); 750 736 return 0; 737 + } 738 + spin_unlock(&pcl->obj.lockref.lock); 739 + 751 740 /* otherwise, check if it can be used as a bvpage */ 752 741 if (fe->mode >= Z_EROFS_PCLUSTER_FOLLOWED && 753 742 !fe->candidate_bvpage)
··· 963 964 } 964 965 965 966 static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe, 966 - struct page *page) 967 + struct page *page, bool ra) 967 968 { 968 969 struct inode *const inode = fe->inode; 969 970 struct erofs_map_blocks *const map = &fe->map;
··· 1013 1014 err = z_erofs_pcluster_begin(fe); 1014 1015 if (err) 1015 1016 goto out; 1017 + fe->pcl->besteffort |= !ra; 1016 1018 } 1017 1019 1018 1020 /*
··· 1280 1280 .inplace_io = overlapped, 1281 1281 .partial_decoding = pcl->partial, 1282 1282 .fillgaps = pcl->multibases, 1283 + .gfp = pcl->besteffort ? 1284 + GFP_KERNEL | __GFP_NOFAIL : 1285 + GFP_NOWAIT | __GFP_NORETRY 1283 1286 }, be->pagepool); 1284 1287 1285 1288 /* must handle all compressed pages before actual file pages */
··· 1325 1322 pcl->length = 0; 1326 1323 pcl->partial = true; 1327 1324 pcl->multibases = false; 1325 + pcl->besteffort = false; 1328 1326 pcl->bvset.nextpage = NULL; 1329 1327 pcl->vcnt = 0; 1330 1328
··· 1427 1423 { 1428 1424 gfp_t gfp = mapping_gfp_mask(mc); 1429 1425 bool tocache = false; 1430 - struct z_erofs_bvec *zbv = pcl->compressed_bvecs + nr; 1426 + struct z_erofs_bvec zbv; 1431 1427 struct address_space *mapping; 1432 - struct page *page, *oldpage; 1428 + struct page *page; 1433 1429 int justfound, bs = i_blocksize(f->inode); 1434 1430 1435 1431 /* Except for inplace pages, the entire page can be used for I/Os */ 1436 1432 bvec->bv_offset = 0; 1437 1433 bvec->bv_len = PAGE_SIZE; 1438 1434 repeat: 1439 - oldpage = READ_ONCE(zbv->page); 1440 - if (!oldpage) 1435 + spin_lock(&pcl->obj.lockref.lock); 1436 + zbv = pcl->compressed_bvecs[nr]; 1437 + page = zbv.page; 1438 + justfound = (unsigned long)page & 1UL; 1439 + page = (struct page *)((unsigned long)page & ~1UL); 1440 + pcl->compressed_bvecs[nr].page = page; 1441 + spin_unlock(&pcl->obj.lockref.lock); 1442 + if (!page) 1441 1443 goto out_allocpage; 1442 1444 1443 - justfound = (unsigned long)oldpage & 1UL; 1444 - page = (struct page *)((unsigned long)oldpage & ~1UL); 1445 1445 bvec->bv_page = page; 1446 - 1447 1446 DBG_BUGON(z_erofs_is_shortlived_page(page)); 1448 1447 /* 1449 1448 * Handle preallocated cached pages. We tried to allocate such pages
··· 1455 1448 */ 1456 1449 if (page->private == Z_EROFS_PREALLOCATED_PAGE) { 1457 1450 set_page_private(page, 0); 1458 - WRITE_ONCE(zbv->page, page); 1459 1451 tocache = true; 1460 1452 goto out_tocache; 1461 1453 }
··· 1465 1459 * therefore it is impossible for `mapping` to be NULL. 1466 1460 */ 1467 1461 if (mapping && mapping != mc) { 1468 - if (zbv->offset < 0) 1469 - bvec->bv_offset = round_up(-zbv->offset, bs); 1470 - bvec->bv_len = round_up(zbv->end, bs) - bvec->bv_offset; 1462 + if (zbv.offset < 0) 1463 + bvec->bv_offset = round_up(-zbv.offset, bs); 1464 + bvec->bv_len = round_up(zbv.end, bs) - bvec->bv_offset; 1471 1465 return; 1472 1466 }
··· 1477 1471 1478 1472 /* the cached page is still in managed cache */ 1479 1473 if (page->mapping == mc) { 1480 - WRITE_ONCE(zbv->page, page); 1481 1474 /* 1482 1475 * The cached page is still available but without a valid 1483 1476 * `->private` pcluster hint. Let's reconnect them.
··· 1508 1503 put_page(page); 1509 1504 out_allocpage: 1510 1505 page = erofs_allocpage(&f->pagepool, gfp | __GFP_NOFAIL); 1511 - if (oldpage != cmpxchg(&zbv->page, oldpage, page)) { 1506 + spin_lock(&pcl->obj.lockref.lock); 1507 + if (pcl->compressed_bvecs[nr].page) { 1512 1508 erofs_pagepool_add(&f->pagepool, page); 1509 + spin_unlock(&pcl->obj.lockref.lock); 1513 1510 cond_resched(); 1514 1511 goto repeat; 1515 1512 } 1513 + pcl->compressed_bvecs[nr].page = page; 1514 + spin_unlock(&pcl->obj.lockref.lock); 1516 1515 bvec->bv_page = page; 1517 1516 out_tocache: 1518 1517 if (!tocache || bs != PAGE_SIZE || ··· 1694 1685 1695 1686 if (cur + bvec.bv_len > end) 1696 1687 bvec.bv_len = end - cur; 1688 + DBG_BUGON(bvec.bv_len < sb->s_blocksize); 1697 1689 if (!bio_add_page(bio, bvec.bv_page, bvec.bv_len, 1698 1690 bvec.bv_offset)) 1699 1691 goto submit_bio_retry; ··· 1795 1785 if (PageUptodate(page)) 1796 1786 unlock_page(page); 1797 1787 else 1798 - (void)z_erofs_do_read_page(f, page); 1788 + (void)z_erofs_do_read_page(f, page, !!rac); 1799 1789 put_page(page); 1800 1790 } 1801 1791 ··· 1816 1806 f.headoffset = (erofs_off_t)folio->index << PAGE_SHIFT; 1817 1807 1818 1808 z_erofs_pcluster_readmore(&f, NULL, true); 1819 - err = z_erofs_do_read_page(&f, &folio->page); 1809 + err = z_erofs_do_read_page(&f, &folio->page, false); 1820 1810 z_erofs_pcluster_readmore(&f, NULL, false); 1821 1811 z_erofs_pcluster_end(&f); 1822 1812 ··· 1857 1847 folio = head; 1858 1848 head = folio_get_private(folio); 1859 1849 1860 - err = z_erofs_do_read_page(&f, &folio->page); 1850 + err = z_erofs_do_read_page(&f, &folio->page, true); 1861 1851 if (err && err != -EINTR) 1862 1852 erofs_err(inode->i_sb, "readahead error at folio %lu @ nid %llu", 1863 1853 folio->index, EROFS_I(inode)->nid);
+3 -4
fs/exfat/inode.c
··· 501 501 struct inode *inode = mapping->host; 502 502 struct exfat_inode_info *ei = EXFAT_I(inode); 503 503 loff_t pos = iocb->ki_pos; 504 - loff_t size = iocb->ki_pos + iov_iter_count(iter); 504 + loff_t size = pos + iov_iter_count(iter); 505 505 int rw = iov_iter_rw(iter); 506 506 ssize_t ret; 507 507 ··· 525 525 */ 526 526 ret = blockdev_direct_IO(iocb, inode, iter, exfat_get_block); 527 527 if (ret < 0) { 528 - if (rw == WRITE) 528 + if (rw == WRITE && ret != -EIOCBQUEUED) 529 529 exfat_write_failed(mapping, size); 530 530 531 - if (ret != -EIOCBQUEUED) 532 - return ret; 531 + return ret; 533 532 } else 534 533 size = pos + ret; 535 534
+1 -1
fs/hugetlbfs/inode.c
··· 340 340 } else { 341 341 folio_unlock(folio); 342 342 343 - if (!folio_test_has_hwpoisoned(folio)) 343 + if (!folio_test_hwpoison(folio)) 344 344 want = nr; 345 345 else { 346 346 /*
+1 -7
fs/jfs/jfs_dmap.c
··· 2763 2763 * leafno - the number of the leaf to be updated. 2764 2764 * newval - the new value for the leaf. 2765 2765 * 2766 - * RETURN VALUES: 2767 - * 0 - success 2768 - * -EIO - i/o error 2766 + * RETURN VALUES: none 2769 2767 */ 2770 2768 static int dbJoin(dmtree_t *tp, int leafno, int newval, bool is_ctl) 2771 2769 { ··· 2790 2792 * get the buddy size (number of words covered) of 2791 2793 * the new value. 2792 2794 */ 2793 - 2794 - if ((newval - tp->dmt_budmin) > BUDMIN) 2795 - return -EIO; 2796 - 2797 2795 budsz = BUDSIZE(newval, tp->dmt_budmin); 2798 2796 2799 2797 /* try to join.
+20 -4
fs/smb/client/cached_dir.c
··· 145 145 struct cached_fid *cfid; 146 146 struct cached_fids *cfids; 147 147 const char *npath; 148 + int retries = 0, cur_sleep = 1; 148 149 149 150 if (tcon == NULL || tcon->cfids == NULL || tcon->nohandlecache || 150 151 is_smb1_server(tcon->ses->server) || (dir_cache_timeout == 0)) 151 152 return -EOPNOTSUPP; 152 153 153 154 ses = tcon->ses; 154 - server = cifs_pick_channel(ses); 155 155 cfids = tcon->cfids; 156 - 157 - if (!server->ops->new_lease_key) 158 - return -EIO; 159 156 160 157 if (cifs_sb->root == NULL) 161 158 return -ENOENT; 159 + 160 + replay_again: 161 + /* reinitialize for possible replay */ 162 + flags = 0; 163 + oplock = SMB2_OPLOCK_LEVEL_II; 164 + server = cifs_pick_channel(ses); 165 + 166 + if (!server->ops->new_lease_key) 167 + return -EIO; 162 168 163 169 utf16_path = cifs_convert_path_to_utf16(path, cifs_sb); 164 170 if (!utf16_path) ··· 274 268 */ 275 269 cfid->has_lease = true; 276 270 271 + if (retries) { 272 + smb2_set_replay(server, &rqst[0]); 273 + smb2_set_replay(server, &rqst[1]); 274 + } 275 + 277 276 rc = compound_send_recv(xid, ses, server, 278 277 flags, 2, rqst, 279 278 resp_buftype, rsp_iov); ··· 378 367 atomic_inc(&tcon->num_remote_opens); 379 368 } 380 369 kfree(utf16_path); 370 + 371 + if (is_replayable_error(rc) && 372 + smb2_should_replay(tcon, &retries, &cur_sleep)) 373 + goto replay_again; 374 + 381 375 return rc; 382 376 } 383 377
+1 -1
fs/smb/client/cifsencrypt.c
··· 572 572 len = cifs_strtoUTF16(user, ses->user_name, len, nls_cp); 573 573 UniStrupr(user); 574 574 } else { 575 - memset(user, '\0', 2); 575 + *(u16 *)user = 0; 576 576 } 577 577 578 578 rc = crypto_shash_update(ses->server->secmech.hmacmd5,
+14 -3
fs/smb/client/cifsfs.c
··· 396 396 spin_lock_init(&cifs_inode->writers_lock); 397 397 cifs_inode->writers = 0; 398 398 cifs_inode->netfs.inode.i_blkbits = 14; /* 2**14 = CIFS_MAX_MSGSIZE */ 399 - cifs_inode->server_eof = 0; 399 + cifs_inode->netfs.remote_i_size = 0; 400 400 cifs_inode->uniqueid = 0; 401 401 cifs_inode->createtime = 0; 402 402 cifs_inode->epoch = 0; ··· 1380 1380 struct inode *src_inode = file_inode(src_file); 1381 1381 struct inode *target_inode = file_inode(dst_file); 1382 1382 struct cifsInodeInfo *src_cifsi = CIFS_I(src_inode); 1383 + struct cifsInodeInfo *target_cifsi = CIFS_I(target_inode); 1383 1384 struct cifsFileInfo *smb_file_src; 1384 1385 struct cifsFileInfo *smb_file_target; 1385 1386 struct cifs_tcon *src_tcon; ··· 1429 1428 * Advance the EOF marker after the flush above to the end of the range 1430 1429 * if it's short of that. 1431 1430 */ 1432 - if (src_cifsi->server_eof < off + len) { 1431 + if (src_cifsi->netfs.remote_i_size < off + len) { 1433 1432 rc = cifs_precopy_set_eof(src_inode, src_cifsi, src_tcon, xid, off + len); 1434 1433 if (rc < 0) 1435 1434 goto unlock; ··· 1453 1452 /* Discard all the folios that overlap the destination region. 
*/ 1454 1453 truncate_inode_pages_range(&target_inode->i_data, fstart, fend); 1455 1454 1455 + fscache_invalidate(cifs_inode_cookie(target_inode), NULL, 1456 + i_size_read(target_inode), 0); 1457 + 1456 1458 rc = file_modified(dst_file); 1457 1459 if (!rc) { 1458 1460 rc = target_tcon->ses->server->ops->copychunk_range(xid, 1459 1461 smb_file_src, smb_file_target, off, len, destoff); 1460 - if (rc > 0 && destoff + rc > i_size_read(target_inode)) 1462 + if (rc > 0 && destoff + rc > i_size_read(target_inode)) { 1461 1463 truncate_setsize(target_inode, destoff + rc); 1464 + netfs_resize_file(&target_cifsi->netfs, 1465 + i_size_read(target_inode), true); 1466 + fscache_resize_cookie(cifs_inode_cookie(target_inode), 1467 + i_size_read(target_inode)); 1468 + } 1469 + if (rc > 0 && destoff + rc > target_cifsi->netfs.zero_point) 1470 + target_cifsi->netfs.zero_point = destoff + rc; 1462 1471 } 1463 1472 1464 1473 file_accessed(src_file);
+13 -1
fs/smb/client/cifsglob.h
··· 50 50 #define CIFS_DEF_ACTIMEO (1 * HZ) 51 51 52 52 /* 53 + * max sleep time before retry to server 54 + */ 55 + #define CIFS_MAX_SLEEP 2000 56 + 57 + /* 53 58 * max attribute cache timeout (jiffies) - 2^30 54 59 */ 55 60 #define CIFS_MAX_ACTIMEO (1 << 30) ··· 1506 1501 struct smbd_mr *mr; 1507 1502 #endif 1508 1503 struct cifs_credits credits; 1504 + bool replay; 1509 1505 }; 1510 1506 1511 1507 /* ··· 1567 1561 spinlock_t writers_lock; 1568 1562 unsigned int writers; /* Number of writers on this inode */ 1569 1563 unsigned long time; /* jiffies of last update of inode */ 1570 - u64 server_eof; /* current file size on server -- protected by i_lock */ 1571 1564 u64 uniqueid; /* server inode number */ 1572 1565 u64 createtime; /* creation time on server */ 1573 1566 __u8 lease_key[SMB2_LEASE_KEY_SIZE]; /* lease key for this inode */ ··· 1832 1827 static inline bool is_retryable_error(int error) 1833 1828 { 1834 1829 if (is_interrupt_error(error) || error == -EAGAIN) 1830 + return true; 1831 + return false; 1832 + } 1833 + 1834 + static inline bool is_replayable_error(int error) 1835 + { 1836 + if (error == -EAGAIN || error == -ECONNABORTED) 1835 1837 return true; 1836 1838 return false; 1837 1839 }
+5 -4
fs/smb/client/file.c
··· 2120 2120 { 2121 2121 loff_t end_of_write = offset + bytes_written; 2122 2122 2123 - if (end_of_write > cifsi->server_eof) 2124 - cifsi->server_eof = end_of_write; 2123 + if (end_of_write > cifsi->netfs.remote_i_size) 2124 + netfs_resize_file(&cifsi->netfs, end_of_write, true); 2125 2125 } 2126 2126 2127 2127 static ssize_t ··· 3247 3247 3248 3248 spin_lock(&inode->i_lock); 3249 3249 cifs_update_eof(cifsi, wdata->offset, wdata->bytes); 3250 - if (cifsi->server_eof > inode->i_size) 3251 - i_size_write(inode, cifsi->server_eof); 3250 + if (cifsi->netfs.remote_i_size > inode->i_size) 3251 + i_size_write(inode, cifsi->netfs.remote_i_size); 3252 3252 spin_unlock(&inode->i_lock); 3253 3253 3254 3254 complete(&wdata->done); ··· 3300 3300 if (wdata->cfile->invalidHandle) 3301 3301 rc = -EAGAIN; 3302 3302 else { 3303 + wdata->replay = true; 3303 3304 #ifdef CONFIG_CIFS_SMB_DIRECT 3304 3305 if (wdata->mr) { 3305 3306 wdata->mr->need_invalidate = true;
+5 -3
fs/smb/client/inode.c
··· 104 104 fattr->cf_mtime = timestamp_truncate(fattr->cf_mtime, inode); 105 105 mtime = inode_get_mtime(inode); 106 106 if (timespec64_equal(&mtime, &fattr->cf_mtime) && 107 - cifs_i->server_eof == fattr->cf_eof) { 107 + cifs_i->netfs.remote_i_size == fattr->cf_eof) { 108 108 cifs_dbg(FYI, "%s: inode %llu is unchanged\n", 109 109 __func__, cifs_i->uniqueid); 110 110 return; ··· 194 194 else 195 195 clear_bit(CIFS_INO_DELETE_PENDING, &cifs_i->flags); 196 196 197 - cifs_i->server_eof = fattr->cf_eof; 197 + cifs_i->netfs.remote_i_size = fattr->cf_eof; 198 198 /* 199 199 * Can't safely change the file size here if the client is writing to 200 200 * it due to potential races. ··· 2858 2858 2859 2859 set_size_out: 2860 2860 if (rc == 0) { 2861 - cifsInode->server_eof = attrs->ia_size; 2861 + netfs_resize_file(&cifsInode->netfs, attrs->ia_size, true); 2862 2862 cifs_setsize(inode, attrs->ia_size); 2863 2863 /* 2864 2864 * i_blocks is not related to (i_size / i_blksize), but instead ··· 3011 3011 if ((attrs->ia_valid & ATTR_SIZE) && 3012 3012 attrs->ia_size != i_size_read(inode)) { 3013 3013 truncate_setsize(inode, attrs->ia_size); 3014 + netfs_resize_file(&cifsInode->netfs, attrs->ia_size, true); 3014 3015 fscache_resize_cookie(cifs_inode_cookie(inode), attrs->ia_size); 3015 3016 } 3016 3017 ··· 3211 3210 if ((attrs->ia_valid & ATTR_SIZE) && 3212 3211 attrs->ia_size != i_size_read(inode)) { 3213 3212 truncate_setsize(inode, attrs->ia_size); 3213 + netfs_resize_file(&cifsInode->netfs, attrs->ia_size, true); 3214 3214 fscache_resize_cookie(cifs_inode_cookie(inode), attrs->ia_size); 3215 3215 } 3216 3216
+1 -1
fs/smb/client/readdir.c
··· 141 141 if (likely(reparse_inode_match(inode, fattr))) { 142 142 fattr->cf_mode = inode->i_mode; 143 143 fattr->cf_rdev = inode->i_rdev; 144 - fattr->cf_eof = CIFS_I(inode)->server_eof; 144 + fattr->cf_eof = CIFS_I(inode)->netfs.remote_i_size; 145 145 fattr->cf_symlink_target = NULL; 146 146 } else { 147 147 CIFS_I(inode)->time = 0;
+27 -6
fs/smb/client/smb2inode.c
··· 120 120 unsigned int size[2]; 121 121 void *data[2]; 122 122 int len; 123 + int retries = 0, cur_sleep = 1; 124 + 125 + replay_again: 126 + /* reinitialize for possible replay */ 127 + flags = 0; 128 + oplock = SMB2_OPLOCK_LEVEL_NONE; 129 + num_rqst = 0; 130 + server = cifs_pick_channel(ses); 123 131 124 132 vars = kzalloc(sizeof(*vars), GFP_ATOMIC); 125 133 if (vars == NULL) 126 134 return -ENOMEM; 127 135 rqst = &vars->rqst[0]; 128 136 rsp_iov = &vars->rsp_iov[0]; 129 - 130 - server = cifs_pick_channel(ses); 131 137 132 138 if (smb3_encryption_required(tcon)) 133 139 flags |= CIFS_TRANSFORM_REQ; ··· 469 463 num_rqst++; 470 464 471 465 if (cfile) { 466 + if (retries) 467 + for (i = 1; i < num_rqst - 2; i++) 468 + smb2_set_replay(server, &rqst[i]); 469 + 472 470 rc = compound_send_recv(xid, ses, server, 473 471 flags, num_rqst - 2, 474 472 &rqst[1], &resp_buftype[1], 475 473 &rsp_iov[1]); 476 - } else 474 + } else { 475 + if (retries) 476 + for (i = 0; i < num_rqst; i++) 477 + smb2_set_replay(server, &rqst[i]); 478 + 477 479 rc = compound_send_recv(xid, ses, server, 478 480 flags, num_rqst, 479 481 rqst, resp_buftype, 480 482 rsp_iov); 483 + } 481 484 482 485 finished: 483 486 num_rqst = 0; ··· 635 620 } 636 621 SMB2_close_free(&rqst[num_rqst]); 637 622 638 - if (cfile) 639 - cifsFileInfo_put(cfile); 640 - 641 623 num_cmds += 2; 642 624 if (out_iov && out_buftype) { 643 625 memcpy(out_iov, rsp_iov, num_cmds * sizeof(*out_iov)); ··· 644 632 for (i = 0; i < num_cmds; i++) 645 633 free_rsp_buf(resp_buftype[i], rsp_iov[i].iov_base); 646 634 } 635 + num_cmds -= 2; /* correct num_cmds as there could be a retry */ 647 636 kfree(vars); 637 + 638 + if (is_replayable_error(rc) && 639 + smb2_should_replay(tcon, &retries, &cur_sleep)) 640 + goto replay_again; 641 + 642 + if (cfile) 643 + cifsFileInfo_put(cfile); 644 + 648 645 return rc; 649 646 } 650 647
+127 -14
fs/smb/client/smb2ops.c
··· 1108 1108 { 1109 1109 struct smb2_compound_vars *vars; 1110 1110 struct cifs_ses *ses = tcon->ses; 1111 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 1111 + struct TCP_Server_Info *server; 1112 1112 struct smb_rqst *rqst; 1113 1113 struct kvec *rsp_iov; 1114 1114 __le16 *utf16_path = NULL; ··· 1124 1124 struct smb2_file_full_ea_info *ea = NULL; 1125 1125 struct smb2_query_info_rsp *rsp; 1126 1126 int rc, used_len = 0; 1127 + int retries = 0, cur_sleep = 1; 1128 + 1129 + replay_again: 1130 + /* reinitialize for possible replay */ 1131 + flags = CIFS_CP_CREATE_CLOSE_OP; 1132 + oplock = SMB2_OPLOCK_LEVEL_NONE; 1133 + server = cifs_pick_channel(ses); 1127 1134 1128 1135 if (smb3_encryption_required(tcon)) 1129 1136 flags |= CIFS_TRANSFORM_REQ; ··· 1251 1244 goto sea_exit; 1252 1245 smb2_set_related(&rqst[2]); 1253 1246 1247 + if (retries) { 1248 + smb2_set_replay(server, &rqst[0]); 1249 + smb2_set_replay(server, &rqst[1]); 1250 + smb2_set_replay(server, &rqst[2]); 1251 + } 1252 + 1254 1253 rc = compound_send_recv(xid, ses, server, 1255 1254 flags, 3, rqst, 1256 1255 resp_buftype, rsp_iov); ··· 1273 1260 kfree(vars); 1274 1261 out_free_path: 1275 1262 kfree(utf16_path); 1263 + 1264 + if (is_replayable_error(rc) && 1265 + smb2_should_replay(tcon, &retries, &cur_sleep)) 1266 + goto replay_again; 1267 + 1276 1268 return rc; 1277 1269 } 1278 1270 #endif ··· 1502 1484 struct smb_rqst *rqst; 1503 1485 struct kvec *rsp_iov; 1504 1486 struct cifs_ses *ses = tcon->ses; 1505 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 1487 + struct TCP_Server_Info *server; 1506 1488 char __user *arg = (char __user *)p; 1507 1489 struct smb_query_info qi; 1508 1490 struct smb_query_info __user *pqi; ··· 1519 1501 void *data[2]; 1520 1502 int create_options = is_dir ? 
CREATE_NOT_FILE : CREATE_NOT_DIR; 1521 1503 void (*free_req1_func)(struct smb_rqst *r); 1504 + int retries = 0, cur_sleep = 1; 1505 + 1506 + replay_again: 1507 + /* reinitialize for possible replay */ 1508 + flags = CIFS_CP_CREATE_CLOSE_OP; 1509 + oplock = SMB2_OPLOCK_LEVEL_NONE; 1510 + server = cifs_pick_channel(ses); 1522 1511 1523 1512 vars = kzalloc(sizeof(*vars), GFP_ATOMIC); 1524 1513 if (vars == NULL) ··· 1666 1641 goto free_req_1; 1667 1642 smb2_set_related(&rqst[2]); 1668 1643 1644 + if (retries) { 1645 + smb2_set_replay(server, &rqst[0]); 1646 + smb2_set_replay(server, &rqst[1]); 1647 + smb2_set_replay(server, &rqst[2]); 1648 + } 1649 + 1669 1650 rc = compound_send_recv(xid, ses, server, 1670 1651 flags, 3, rqst, 1671 1652 resp_buftype, rsp_iov); ··· 1732 1701 kfree(buffer); 1733 1702 free_vars: 1734 1703 kfree(vars); 1704 + 1705 + if (is_replayable_error(rc) && 1706 + smb2_should_replay(tcon, &retries, &cur_sleep)) 1707 + goto replay_again; 1708 + 1735 1709 return rc; 1736 1710 } 1737 1711 ··· 2263 2227 struct cifs_open_parms oparms; 2264 2228 struct smb2_query_directory_rsp *qd_rsp = NULL; 2265 2229 struct smb2_create_rsp *op_rsp = NULL; 2266 - struct TCP_Server_Info *server = cifs_pick_channel(tcon->ses); 2267 - int retry_count = 0; 2230 + struct TCP_Server_Info *server; 2231 + int retries = 0, cur_sleep = 1; 2232 + 2233 + replay_again: 2234 + /* reinitialize for possible replay */ 2235 + flags = 0; 2236 + oplock = SMB2_OPLOCK_LEVEL_NONE; 2237 + server = cifs_pick_channel(tcon->ses); 2268 2238 2269 2239 utf16_path = cifs_convert_path_to_utf16(path, cifs_sb); 2270 2240 if (!utf16_path) ··· 2320 2278 2321 2279 smb2_set_related(&rqst[1]); 2322 2280 2323 - again: 2281 + if (retries) { 2282 + smb2_set_replay(server, &rqst[0]); 2283 + smb2_set_replay(server, &rqst[1]); 2284 + } 2285 + 2324 2286 rc = compound_send_recv(xid, tcon->ses, server, 2325 2287 flags, 2, rqst, 2326 2288 resp_buftype, rsp_iov); 2327 - 2328 - if (rc == -EAGAIN && retry_count++ < 10) 
2329 - goto again; 2330 2289 2331 2290 /* If the open failed there is nothing to do */ 2332 2291 op_rsp = (struct smb2_create_rsp *)rsp_iov[0].iov_base; ··· 2376 2333 SMB2_query_directory_free(&rqst[1]); 2377 2334 free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base); 2378 2335 free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base); 2336 + 2337 + if (is_replayable_error(rc) && 2338 + smb2_should_replay(tcon, &retries, &cur_sleep)) 2339 + goto replay_again; 2340 + 2379 2341 return rc; 2380 2342 } 2381 2343 ··· 2506 2458 } 2507 2459 2508 2460 void 2461 + smb2_set_replay(struct TCP_Server_Info *server, struct smb_rqst *rqst) 2462 + { 2463 + struct smb2_hdr *shdr; 2464 + 2465 + if (server->dialect < SMB30_PROT_ID) 2466 + return; 2467 + 2468 + shdr = (struct smb2_hdr *)(rqst->rq_iov[0].iov_base); 2469 + if (shdr == NULL) { 2470 + cifs_dbg(FYI, "shdr NULL in smb2_set_related\n"); 2471 + return; 2472 + } 2473 + shdr->Flags |= SMB2_FLAGS_REPLAY_OPERATION; 2474 + } 2475 + 2476 + void 2509 2477 smb2_set_related(struct smb_rqst *rqst) 2510 2478 { 2511 2479 struct smb2_hdr *shdr; ··· 2594 2530 } 2595 2531 2596 2532 /* 2533 + * helper function for exponential backoff and check if replayable 2534 + */ 2535 + bool smb2_should_replay(struct cifs_tcon *tcon, 2536 + int *pretries, 2537 + int *pcur_sleep) 2538 + { 2539 + if (!pretries || !pcur_sleep) 2540 + return false; 2541 + 2542 + if (tcon->retry || (*pretries)++ < tcon->ses->server->retrans) { 2543 + msleep(*pcur_sleep); 2544 + (*pcur_sleep) = ((*pcur_sleep) << 1); 2545 + if ((*pcur_sleep) > CIFS_MAX_SLEEP) 2546 + (*pcur_sleep) = CIFS_MAX_SLEEP; 2547 + return true; 2548 + } 2549 + 2550 + return false; 2551 + } 2552 + 2553 + /* 2597 2554 * Passes the query info response back to the caller on success. 2598 2555 * Caller need to free this with free_rsp_buf(). 
2599 2556 */ ··· 2627 2542 { 2628 2543 struct smb2_compound_vars *vars; 2629 2544 struct cifs_ses *ses = tcon->ses; 2630 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 2545 + struct TCP_Server_Info *server; 2631 2546 int flags = CIFS_CP_CREATE_CLOSE_OP; 2632 2547 struct smb_rqst *rqst; 2633 2548 int resp_buftype[3]; ··· 2638 2553 int rc; 2639 2554 __le16 *utf16_path; 2640 2555 struct cached_fid *cfid = NULL; 2556 + int retries = 0, cur_sleep = 1; 2557 + 2558 + replay_again: 2559 + /* reinitialize for possible replay */ 2560 + flags = CIFS_CP_CREATE_CLOSE_OP; 2561 + oplock = SMB2_OPLOCK_LEVEL_NONE; 2562 + server = cifs_pick_channel(ses); 2641 2563 2642 2564 if (!path) 2643 2565 path = ""; ··· 2725 2633 goto qic_exit; 2726 2634 smb2_set_related(&rqst[2]); 2727 2635 2636 + if (retries) { 2637 + if (!cfid) { 2638 + smb2_set_replay(server, &rqst[0]); 2639 + smb2_set_replay(server, &rqst[2]); 2640 + } 2641 + smb2_set_replay(server, &rqst[1]); 2642 + } 2643 + 2728 2644 if (cfid) { 2729 2645 rc = compound_send_recv(xid, ses, server, 2730 2646 flags, 1, &rqst[1], ··· 2765 2665 kfree(vars); 2766 2666 out_free_path: 2767 2667 kfree(utf16_path); 2668 + 2669 + if (is_replayable_error(rc) && 2670 + smb2_should_replay(tcon, &retries, &cur_sleep)) 2671 + goto replay_again; 2672 + 2768 2673 return rc; 2769 2674 } 2770 2675 ··· 3318 3213 cfile->fid.volatile_fid, cfile->pid, new_size); 3319 3214 if (rc >= 0) { 3320 3215 truncate_setsize(inode, new_size); 3216 + netfs_resize_file(&cifsi->netfs, new_size, true); 3217 + if (offset < cifsi->netfs.zero_point) 3218 + cifsi->netfs.zero_point = offset; 3321 3219 fscache_resize_cookie(cifs_inode_cookie(inode), new_size); 3322 3220 } 3323 3221 } ··· 3544 3436 rc = SMB2_set_eof(xid, tcon, cfile->fid.persistent_fid, 3545 3437 cfile->fid.volatile_fid, cfile->pid, new_eof); 3546 3438 if (rc == 0) { 3547 - cifsi->server_eof = new_eof; 3439 + netfs_resize_file(&cifsi->netfs, new_eof, true); 3548 3440 cifs_setsize(inode, new_eof); 3549 
3441 cifs_truncate_page(inode->i_mapping, inode->i_size); 3550 3442 truncate_setsize(inode, new_eof); ··· 3636 3528 int rc; 3637 3529 unsigned int xid; 3638 3530 struct inode *inode = file_inode(file); 3639 - struct cifsFileInfo *cfile = file->private_data; 3640 3531 struct cifsInodeInfo *cifsi = CIFS_I(inode); 3532 + struct cifsFileInfo *cfile = file->private_data; 3533 + struct netfs_inode *ictx = &cifsi->netfs; 3641 3534 loff_t old_eof, new_eof; 3642 3535 3643 3536 xid = get_xid(); ··· 3658 3549 goto out_2; 3659 3550 3660 3551 truncate_pagecache_range(inode, off, old_eof); 3552 + ictx->zero_point = old_eof; 3661 3553 3662 3554 rc = smb2_copychunk_range(xid, cfile, cfile, off + len, 3663 3555 old_eof - off - len, off); ··· 3673 3563 3674 3564 rc = 0; 3675 3565 3676 - cifsi->server_eof = i_size_read(inode) - len; 3677 - truncate_setsize(inode, cifsi->server_eof); 3678 - fscache_resize_cookie(cifs_inode_cookie(inode), cifsi->server_eof); 3566 + truncate_setsize(inode, new_eof); 3567 + netfs_resize_file(&cifsi->netfs, new_eof, true); 3568 + ictx->zero_point = new_eof; 3569 + fscache_resize_cookie(cifs_inode_cookie(inode), new_eof); 3679 3570 out_2: 3680 3571 filemap_invalidate_unlock(inode->i_mapping); 3681 3572 out: ··· 3692 3581 unsigned int xid; 3693 3582 struct cifsFileInfo *cfile = file->private_data; 3694 3583 struct inode *inode = file_inode(file); 3584 + struct cifsInodeInfo *cifsi = CIFS_I(inode); 3695 3585 __u64 count, old_eof, new_eof; 3696 3586 3697 3587 xid = get_xid(); ··· 3720 3608 goto out_2; 3721 3609 3722 3610 truncate_setsize(inode, new_eof); 3611 + netfs_resize_file(&cifsi->netfs, i_size_read(inode), true); 3723 3612 fscache_resize_cookie(cifs_inode_cookie(inode), i_size_read(inode)); 3724 3613 3725 3614 rc = smb2_copychunk_range(xid, cfile, cfile, off, count, off + len);
+239 -30
fs/smb/client/smb2pdu.c
··· 195 195 pserver = server->primary_server; 196 196 cifs_signal_cifsd_for_reconnect(pserver, false); 197 197 skip_terminate: 198 - mutex_unlock(&ses->session_mutex); 199 198 return -EHOSTDOWN; 200 199 } 201 200 ··· 2764 2765 int flags = 0; 2765 2766 unsigned int total_len; 2766 2767 __le16 *utf16_path = NULL; 2767 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 2768 + struct TCP_Server_Info *server; 2769 + int retries = 0, cur_sleep = 1; 2770 + 2771 + replay_again: 2772 + /* reinitialize for possible replay */ 2773 + flags = 0; 2774 + n_iov = 2; 2775 + server = cifs_pick_channel(ses); 2768 2776 2769 2777 cifs_dbg(FYI, "mkdir\n"); 2770 2778 ··· 2875 2869 /* no need to inc num_remote_opens because we close it just below */ 2876 2870 trace_smb3_posix_mkdir_enter(xid, tcon->tid, ses->Suid, full_path, CREATE_NOT_FILE, 2877 2871 FILE_WRITE_ATTRIBUTES); 2872 + 2873 + if (retries) 2874 + smb2_set_replay(server, &rqst); 2875 + 2878 2876 /* resource #4: response buffer */ 2879 2877 rc = cifs_send_recv(xid, ses, server, 2880 2878 &rqst, &resp_buftype, flags, &rsp_iov); ··· 2916 2906 cifs_small_buf_release(req); 2917 2907 err_free_path: 2918 2908 kfree(utf16_path); 2909 + 2910 + if (is_replayable_error(rc) && 2911 + smb2_should_replay(tcon, &retries, &cur_sleep)) 2912 + goto replay_again; 2913 + 2919 2914 return rc; 2920 2915 } 2921 2916 ··· 3116 3101 struct smb2_create_rsp *rsp = NULL; 3117 3102 struct cifs_tcon *tcon = oparms->tcon; 3118 3103 struct cifs_ses *ses = tcon->ses; 3119 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 3104 + struct TCP_Server_Info *server; 3120 3105 struct kvec iov[SMB2_CREATE_IOV_SIZE]; 3121 3106 struct kvec rsp_iov = {NULL, 0}; 3122 3107 int resp_buftype = CIFS_NO_BUFFER; 3123 3108 int rc = 0; 3124 3109 int flags = 0; 3110 + int retries = 0, cur_sleep = 1; 3111 + 3112 + replay_again: 3113 + /* reinitialize for possible replay */ 3114 + flags = 0; 3115 + server = cifs_pick_channel(ses); 3125 3116 3126 3117 cifs_dbg(FYI, 
"create/open\n"); 3127 3118 if (!ses || !server) ··· 3148 3127 3149 3128 trace_smb3_open_enter(xid, tcon->tid, tcon->ses->Suid, oparms->path, 3150 3129 oparms->create_options, oparms->desired_access); 3130 + 3131 + if (retries) 3132 + smb2_set_replay(server, &rqst); 3151 3133 3152 3134 rc = cifs_send_recv(xid, ses, server, 3153 3135 &rqst, &resp_buftype, flags, ··· 3205 3181 creat_exit: 3206 3182 SMB2_open_free(&rqst); 3207 3183 free_rsp_buf(resp_buftype, rsp); 3184 + 3185 + if (is_replayable_error(rc) && 3186 + smb2_should_replay(tcon, &retries, &cur_sleep)) 3187 + goto replay_again; 3188 + 3208 3189 return rc; 3209 3190 } 3210 3191 ··· 3334 3305 int resp_buftype = CIFS_NO_BUFFER; 3335 3306 int rc = 0; 3336 3307 int flags = 0; 3308 + int retries = 0, cur_sleep = 1; 3309 + 3310 + if (!tcon) 3311 + return -EIO; 3312 + 3313 + ses = tcon->ses; 3314 + if (!ses) 3315 + return -EIO; 3316 + 3317 + replay_again: 3318 + /* reinitialize for possible replay */ 3319 + flags = 0; 3320 + server = cifs_pick_channel(ses); 3321 + 3322 + if (!server) 3323 + return -EIO; 3337 3324 3338 3325 cifs_dbg(FYI, "SMB2 IOCTL\n"); 3339 3326 ··· 3359 3314 /* zero out returned data len, in case of error */ 3360 3315 if (plen) 3361 3316 *plen = 0; 3362 - 3363 - if (!tcon) 3364 - return -EIO; 3365 - 3366 - ses = tcon->ses; 3367 - if (!ses) 3368 - return -EIO; 3369 - 3370 - server = cifs_pick_channel(ses); 3371 - if (!server) 3372 - return -EIO; 3373 3317 3374 3318 if (smb3_encryption_required(tcon)) 3375 3319 flags |= CIFS_TRANSFORM_REQ; ··· 3373 3339 in_data, indatalen, max_out_data_len); 3374 3340 if (rc) 3375 3341 goto ioctl_exit; 3342 + 3343 + if (retries) 3344 + smb2_set_replay(server, &rqst); 3376 3345 3377 3346 rc = cifs_send_recv(xid, ses, server, 3378 3347 &rqst, &resp_buftype, flags, ··· 3446 3409 ioctl_exit: 3447 3410 SMB2_ioctl_free(&rqst); 3448 3411 free_rsp_buf(resp_buftype, rsp); 3412 + 3413 + if (is_replayable_error(rc) && 3414 + smb2_should_replay(tcon, &retries, &cur_sleep)) 3415 
+ goto replay_again; 3416 + 3449 3417 return rc; 3450 3418 } 3451 3419 ··· 3522 3480 struct smb_rqst rqst; 3523 3481 struct smb2_close_rsp *rsp = NULL; 3524 3482 struct cifs_ses *ses = tcon->ses; 3525 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 3483 + struct TCP_Server_Info *server; 3526 3484 struct kvec iov[1]; 3527 3485 struct kvec rsp_iov; 3528 3486 int resp_buftype = CIFS_NO_BUFFER; 3529 3487 int rc = 0; 3530 3488 int flags = 0; 3531 3489 bool query_attrs = false; 3490 + int retries = 0, cur_sleep = 1; 3491 + 3492 + replay_again: 3493 + /* reinitialize for possible replay */ 3494 + flags = 0; 3495 + query_attrs = false; 3496 + server = cifs_pick_channel(ses); 3532 3497 3533 3498 cifs_dbg(FYI, "Close\n"); 3534 3499 ··· 3560 3511 query_attrs); 3561 3512 if (rc) 3562 3513 goto close_exit; 3514 + 3515 + if (retries) 3516 + smb2_set_replay(server, &rqst); 3563 3517 3564 3518 rc = cifs_send_recv(xid, ses, server, 3565 3519 &rqst, &resp_buftype, flags, &rsp_iov); ··· 3597 3545 cifs_dbg(VFS, "handle cancelled close fid 0x%llx returned error %d\n", 3598 3546 persistent_fid, tmp_rc); 3599 3547 } 3548 + 3549 + if (is_replayable_error(rc) && 3550 + smb2_should_replay(tcon, &retries, &cur_sleep)) 3551 + goto replay_again; 3552 + 3600 3553 return rc; 3601 3554 } 3602 3555 ··· 3732 3675 struct TCP_Server_Info *server; 3733 3676 int flags = 0; 3734 3677 bool allocated = false; 3678 + int retries = 0, cur_sleep = 1; 3735 3679 3736 3680 cifs_dbg(FYI, "Query Info\n"); 3737 3681 3738 3682 if (!ses) 3739 3683 return -EIO; 3684 + 3685 + replay_again: 3686 + /* reinitialize for possible replay */ 3687 + flags = 0; 3688 + allocated = false; 3740 3689 server = cifs_pick_channel(ses); 3690 + 3741 3691 if (!server) 3742 3692 return -EIO; 3743 3693 ··· 3765 3701 3766 3702 trace_smb3_query_info_enter(xid, persistent_fid, tcon->tid, 3767 3703 ses->Suid, info_class, (__u32)info_type); 3704 + 3705 + if (retries) 3706 + smb2_set_replay(server, &rqst); 3768 3707 3769 3708 rc = 
cifs_send_recv(xid, ses, server, 3770 3709 &rqst, &resp_buftype, flags, &rsp_iov); ··· 3811 3744 qinf_exit: 3812 3745 SMB2_query_info_free(&rqst); 3813 3746 free_rsp_buf(resp_buftype, rsp); 3747 + 3748 + if (is_replayable_error(rc) && 3749 + smb2_should_replay(tcon, &retries, &cur_sleep)) 3750 + goto replay_again; 3751 + 3814 3752 return rc; 3815 3753 } 3816 3754 ··· 3916 3844 u32 *plen /* returned data len */) 3917 3845 { 3918 3846 struct cifs_ses *ses = tcon->ses; 3919 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 3847 + struct TCP_Server_Info *server; 3920 3848 struct smb_rqst rqst; 3921 3849 struct smb2_change_notify_rsp *smb_rsp; 3922 3850 struct kvec iov[1]; ··· 3924 3852 int resp_buftype = CIFS_NO_BUFFER; 3925 3853 int flags = 0; 3926 3854 int rc = 0; 3855 + int retries = 0, cur_sleep = 1; 3856 + 3857 + replay_again: 3858 + /* reinitialize for possible replay */ 3859 + flags = 0; 3860 + server = cifs_pick_channel(ses); 3927 3861 3928 3862 cifs_dbg(FYI, "change notify\n"); 3929 3863 if (!ses || !server) ··· 3954 3876 3955 3877 trace_smb3_notify_enter(xid, persistent_fid, tcon->tid, ses->Suid, 3956 3878 (u8)watch_tree, completion_filter); 3879 + 3880 + if (retries) 3881 + smb2_set_replay(server, &rqst); 3882 + 3957 3883 rc = cifs_send_recv(xid, ses, server, 3958 3884 &rqst, &resp_buftype, flags, &rsp_iov); 3959 3885 ··· 3992 3910 if (rqst.rq_iov) 3993 3911 cifs_small_buf_release(rqst.rq_iov[0].iov_base); /* request */ 3994 3912 free_rsp_buf(resp_buftype, rsp_iov.iov_base); 3913 + 3914 + if (is_replayable_error(rc) && 3915 + smb2_should_replay(tcon, &retries, &cur_sleep)) 3916 + goto replay_again; 3917 + 3995 3918 return rc; 3996 3919 } 3997 3920 ··· 4239 4152 struct smb_rqst rqst; 4240 4153 struct kvec iov[1]; 4241 4154 struct kvec rsp_iov = {NULL, 0}; 4242 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 4155 + struct TCP_Server_Info *server; 4243 4156 int resp_buftype = CIFS_NO_BUFFER; 4244 4157 int flags = 0; 4245 4158 int rc = 0; 
4159 + int retries = 0, cur_sleep = 1; 4160 + 4161 + replay_again: 4162 + /* reinitialize for possible replay */ 4163 + flags = 0; 4164 + server = cifs_pick_channel(ses); 4246 4165 4247 4166 cifs_dbg(FYI, "flush\n"); 4248 4167 if (!ses || !(ses->server)) ··· 4268 4175 goto flush_exit; 4269 4176 4270 4177 trace_smb3_flush_enter(xid, persistent_fid, tcon->tid, ses->Suid); 4178 + 4179 + if (retries) 4180 + smb2_set_replay(server, &rqst); 4181 + 4271 4182 rc = cifs_send_recv(xid, ses, server, 4272 4183 &rqst, &resp_buftype, flags, &rsp_iov); 4273 4184 ··· 4286 4189 flush_exit: 4287 4190 SMB2_flush_free(&rqst); 4288 4191 free_rsp_buf(resp_buftype, rsp_iov.iov_base); 4192 + 4193 + if (is_replayable_error(rc) && 4194 + smb2_should_replay(tcon, &retries, &cur_sleep)) 4195 + goto replay_again; 4196 + 4289 4197 return rc; 4290 4198 } 4291 4199 ··· 4770 4668 struct cifs_io_parms *io_parms = NULL; 4771 4669 int credit_request; 4772 4670 4773 - if (!wdata->server) 4671 + if (!wdata->server || wdata->replay) 4774 4672 server = wdata->server = cifs_pick_channel(tcon->ses); 4775 4673 4776 4674 /* ··· 4855 4753 rqst.rq_nvec = 1; 4856 4754 rqst.rq_iter = wdata->iter; 4857 4755 rqst.rq_iter_size = iov_iter_count(&rqst.rq_iter); 4756 + if (wdata->replay) 4757 + smb2_set_replay(server, &rqst); 4858 4758 #ifdef CONFIG_CIFS_SMB_DIRECT 4859 4759 if (wdata->mr) 4860 4760 iov[0].iov_len += sizeof(struct smbd_buffer_descriptor_v1); ··· 4930 4826 int flags = 0; 4931 4827 unsigned int total_len; 4932 4828 struct TCP_Server_Info *server; 4829 + int retries = 0, cur_sleep = 1; 4933 4830 4831 + replay_again: 4832 + /* reinitialize for possible replay */ 4833 + flags = 0; 4934 4834 *nbytes = 0; 4935 - 4936 - if (n_vec < 1) 4937 - return rc; 4938 - 4939 4835 if (!io_parms->server) 4940 4836 io_parms->server = cifs_pick_channel(io_parms->tcon->ses); 4941 4837 server = io_parms->server; 4942 4838 if (server == NULL) 4943 4839 return -ECONNABORTED; 4840 + 4841 + if (n_vec < 1) 4842 + return rc; 4944 
4843 4945 4844 rc = smb2_plain_req_init(SMB2_WRITE, io_parms->tcon, server, 4946 4845 (void **) &req, &total_len); ··· 4978 4871 rqst.rq_iov = iov; 4979 4872 rqst.rq_nvec = n_vec + 1; 4980 4873 4874 + if (retries) 4875 + smb2_set_replay(server, &rqst); 4876 + 4981 4877 rc = cifs_send_recv(xid, io_parms->tcon->ses, server, 4982 4878 &rqst, 4983 4879 &resp_buftype, flags, &rsp_iov); ··· 5005 4895 5006 4896 cifs_small_buf_release(req); 5007 4897 free_rsp_buf(resp_buftype, rsp); 4898 + 4899 + if (is_replayable_error(rc) && 4900 + smb2_should_replay(io_parms->tcon, &retries, &cur_sleep)) 4901 + goto replay_again; 4902 + 5008 4903 return rc; 5009 4904 } 5010 4905 ··· 5321 5206 struct kvec rsp_iov; 5322 5207 int rc = 0; 5323 5208 struct cifs_ses *ses = tcon->ses; 5324 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 5209 + struct TCP_Server_Info *server; 5325 5210 int flags = 0; 5211 + int retries = 0, cur_sleep = 1; 5212 + 5213 + replay_again: 5214 + /* reinitialize for possible replay */ 5215 + flags = 0; 5216 + server = cifs_pick_channel(ses); 5326 5217 5327 5218 if (!ses || !(ses->server)) 5328 5219 return -EIO; ··· 5347 5226 srch_inf->info_level); 5348 5227 if (rc) 5349 5228 goto qdir_exit; 5229 + 5230 + if (retries) 5231 + smb2_set_replay(server, &rqst); 5350 5232 5351 5233 rc = cifs_send_recv(xid, ses, server, 5352 5234 &rqst, &resp_buftype, flags, &rsp_iov); ··· 5385 5261 qdir_exit: 5386 5262 SMB2_query_directory_free(&rqst); 5387 5263 free_rsp_buf(resp_buftype, rsp); 5264 + 5265 + if (is_replayable_error(rc) && 5266 + smb2_should_replay(tcon, &retries, &cur_sleep)) 5267 + goto replay_again; 5268 + 5388 5269 return rc; 5389 5270 } 5390 5271 ··· 5456 5327 int rc = 0; 5457 5328 int resp_buftype; 5458 5329 struct cifs_ses *ses = tcon->ses; 5459 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 5330 + struct TCP_Server_Info *server; 5460 5331 int flags = 0; 5332 + int retries = 0, cur_sleep = 1; 5333 + 5334 + replay_again: 5335 + /* reinitialize for 
possible replay */ 5336 + flags = 0; 5337 + server = cifs_pick_channel(ses); 5461 5338 5462 5339 if (!ses || !server) 5463 5340 return -EIO; ··· 5491 5356 return rc; 5492 5357 } 5493 5358 5359 + if (retries) 5360 + smb2_set_replay(server, &rqst); 5494 5361 5495 5362 rc = cifs_send_recv(xid, ses, server, 5496 5363 &rqst, &resp_buftype, flags, ··· 5508 5371 5509 5372 free_rsp_buf(resp_buftype, rsp); 5510 5373 kfree(iov); 5374 + 5375 + if (is_replayable_error(rc) && 5376 + smb2_should_replay(tcon, &retries, &cur_sleep)) 5377 + goto replay_again; 5378 + 5511 5379 return rc; 5512 5380 } 5513 5381 ··· 5565 5423 int rc; 5566 5424 struct smb2_oplock_break *req = NULL; 5567 5425 struct cifs_ses *ses = tcon->ses; 5568 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 5426 + struct TCP_Server_Info *server; 5569 5427 int flags = CIFS_OBREAK_OP; 5570 5428 unsigned int total_len; 5571 5429 struct kvec iov[1]; 5572 5430 struct kvec rsp_iov; 5573 5431 int resp_buf_type; 5432 + int retries = 0, cur_sleep = 1; 5433 + 5434 + replay_again: 5435 + /* reinitialize for possible replay */ 5436 + flags = CIFS_OBREAK_OP; 5437 + server = cifs_pick_channel(ses); 5574 5438 5575 5439 cifs_dbg(FYI, "SMB2_oplock_break\n"); 5576 5440 rc = smb2_plain_req_init(SMB2_OPLOCK_BREAK, tcon, server, ··· 5601 5453 rqst.rq_iov = iov; 5602 5454 rqst.rq_nvec = 1; 5603 5455 5456 + if (retries) 5457 + smb2_set_replay(server, &rqst); 5458 + 5604 5459 rc = cifs_send_recv(xid, ses, server, 5605 5460 &rqst, &resp_buf_type, flags, &rsp_iov); 5606 5461 cifs_small_buf_release(req); 5607 - 5608 5462 if (rc) { 5609 5463 cifs_stats_fail_inc(tcon, SMB2_OPLOCK_BREAK_HE); 5610 5464 cifs_dbg(FYI, "Send error in Oplock Break = %d\n", rc); 5611 5465 } 5466 + 5467 + if (is_replayable_error(rc) && 5468 + smb2_should_replay(tcon, &retries, &cur_sleep)) 5469 + goto replay_again; 5612 5470 5613 5471 return rc; 5614 5472 } ··· 5701 5547 int rc = 0; 5702 5548 int resp_buftype; 5703 5549 struct cifs_ses *ses = tcon->ses; 5704 
- struct TCP_Server_Info *server = cifs_pick_channel(ses); 5550 + struct TCP_Server_Info *server; 5705 5551 FILE_SYSTEM_POSIX_INFO *info = NULL; 5706 5552 int flags = 0; 5553 + int retries = 0, cur_sleep = 1; 5554 + 5555 + replay_again: 5556 + /* reinitialize for possible replay */ 5557 + flags = 0; 5558 + server = cifs_pick_channel(ses); 5707 5559 5708 5560 rc = build_qfs_info_req(&iov, tcon, server, 5709 5561 FS_POSIX_INFORMATION, ··· 5724 5564 memset(&rqst, 0, sizeof(struct smb_rqst)); 5725 5565 rqst.rq_iov = &iov; 5726 5566 rqst.rq_nvec = 1; 5567 + 5568 + if (retries) 5569 + smb2_set_replay(server, &rqst); 5727 5570 5728 5571 rc = cifs_send_recv(xid, ses, server, 5729 5572 &rqst, &resp_buftype, flags, &rsp_iov); ··· 5747 5584 5748 5585 posix_qfsinf_exit: 5749 5586 free_rsp_buf(resp_buftype, rsp_iov.iov_base); 5587 + 5588 + if (is_replayable_error(rc) && 5589 + smb2_should_replay(tcon, &retries, &cur_sleep)) 5590 + goto replay_again; 5591 + 5750 5592 return rc; 5751 5593 } 5752 5594 ··· 5766 5598 int rc = 0; 5767 5599 int resp_buftype; 5768 5600 struct cifs_ses *ses = tcon->ses; 5769 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 5601 + struct TCP_Server_Info *server; 5770 5602 struct smb2_fs_full_size_info *info = NULL; 5771 5603 int flags = 0; 5604 + int retries = 0, cur_sleep = 1; 5605 + 5606 + replay_again: 5607 + /* reinitialize for possible replay */ 5608 + flags = 0; 5609 + server = cifs_pick_channel(ses); 5772 5610 5773 5611 rc = build_qfs_info_req(&iov, tcon, server, 5774 5612 FS_FULL_SIZE_INFORMATION, ··· 5789 5615 memset(&rqst, 0, sizeof(struct smb_rqst)); 5790 5616 rqst.rq_iov = &iov; 5791 5617 rqst.rq_nvec = 1; 5618 + 5619 + if (retries) 5620 + smb2_set_replay(server, &rqst); 5792 5621 5793 5622 rc = cifs_send_recv(xid, ses, server, 5794 5623 &rqst, &resp_buftype, flags, &rsp_iov); ··· 5812 5635 5813 5636 qfsinf_exit: 5814 5637 free_rsp_buf(resp_buftype, rsp_iov.iov_base); 5638 + 5639 + if (is_replayable_error(rc) && 5640 + 
smb2_should_replay(tcon, &retries, &cur_sleep)) 5641 + goto replay_again; 5642 + 5815 5643 return rc; 5816 5644 } 5817 5645 ··· 5831 5649 int rc = 0; 5832 5650 int resp_buftype, max_len, min_len; 5833 5651 struct cifs_ses *ses = tcon->ses; 5834 - struct TCP_Server_Info *server = cifs_pick_channel(ses); 5652 + struct TCP_Server_Info *server; 5835 5653 unsigned int rsp_len, offset; 5836 5654 int flags = 0; 5655 + int retries = 0, cur_sleep = 1; 5656 + 5657 + replay_again: 5658 + /* reinitialize for possible replay */ 5659 + flags = 0; 5660 + server = cifs_pick_channel(ses); 5837 5661 5838 5662 if (level == FS_DEVICE_INFORMATION) { 5839 5663 max_len = sizeof(FILE_SYSTEM_DEVICE_INFO); ··· 5870 5682 memset(&rqst, 0, sizeof(struct smb_rqst)); 5871 5683 rqst.rq_iov = &iov; 5872 5684 rqst.rq_nvec = 1; 5685 + 5686 + if (retries) 5687 + smb2_set_replay(server, &rqst); 5873 5688 5874 5689 rc = cifs_send_recv(xid, ses, server, 5875 5690 &rqst, &resp_buftype, flags, &rsp_iov); ··· 5911 5720 5912 5721 qfsattr_exit: 5913 5722 free_rsp_buf(resp_buftype, rsp_iov.iov_base); 5723 + 5724 + if (is_replayable_error(rc) && 5725 + smb2_should_replay(tcon, &retries, &cur_sleep)) 5726 + goto replay_again; 5727 + 5914 5728 return rc; 5915 5729 } 5916 5730 ··· 5933 5737 unsigned int count; 5934 5738 int flags = CIFS_NO_RSP_BUF; 5935 5739 unsigned int total_len; 5936 - struct TCP_Server_Info *server = cifs_pick_channel(tcon->ses); 5740 + struct TCP_Server_Info *server; 5741 + int retries = 0, cur_sleep = 1; 5742 + 5743 + replay_again: 5744 + /* reinitialize for possible replay */ 5745 + flags = CIFS_NO_RSP_BUF; 5746 + server = cifs_pick_channel(tcon->ses); 5937 5747 5938 5748 cifs_dbg(FYI, "smb2_lockv num lock %d\n", num_lock); 5939 5749 ··· 5970 5768 rqst.rq_iov = iov; 5971 5769 rqst.rq_nvec = 2; 5972 5770 5771 + if (retries) 5772 + smb2_set_replay(server, &rqst); 5773 + 5973 5774 rc = cifs_send_recv(xid, tcon->ses, server, 5974 5775 &rqst, &resp_buf_type, flags, 5975 5776 &rsp_iov); ··· 5983 
5778 trace_smb3_lock_err(xid, persist_fid, tcon->tid, 5984 5779 tcon->ses->Suid, rc); 5985 5780 } 5781 + 5782 + if (is_replayable_error(rc) && 5783 + smb2_should_replay(tcon, &retries, &cur_sleep)) 5784 + goto replay_again; 5986 5785 5987 5786 return rc; 5988 5787 }
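The fs/smb/client/smb2pdu.c hunks above all apply the same transformation: per-attempt state is reinitialized under a `replay_again:` label, `smb2_set_replay()` stamps the request on retries, and a replayable error jumps back with exponential backoff tracked in `cur_sleep`. A minimal userspace sketch of that control flow — the helper names, error value, and retry cap here are illustrative assumptions, not the kernel's API:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_RETRIES 3

/* Hypothetical server that fails transiently on the first two attempts. */
static int attempts_seen;
static int send_request(void)
{
	attempts_seen++;
	return (attempts_seen < 3) ? -11 /* -EAGAIN */ : 0;
}

static bool is_replayable_error(int rc)
{
	return rc == -11;
}

static bool should_replay(int *retries, int *cur_sleep)
{
	if (++(*retries) > MAX_RETRIES)
		return false;
	*cur_sleep *= 2;	/* exponential backoff; actual sleep elided */
	return true;
}

static int do_op(void)
{
	int rc, retries = 0, cur_sleep = 1;

replay_again:
	/* reinitialize any per-attempt state here, as each hunk above does */
	rc = send_request();
	if (is_replayable_error(rc) && should_replay(&retries, &cur_sleep))
		goto replay_again;
	return rc;
}
```

Note the reinitialization at the label: the kernel hunks re-pick the channel and reset `flags` there precisely because a retried attempt must not inherit state from the failed one.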
+5
fs/smb/client/smb2proto.h
··· 122 122 extern void smb2_set_next_command(struct cifs_tcon *tcon, 123 123 struct smb_rqst *rqst); 124 124 extern void smb2_set_related(struct smb_rqst *rqst); 125 + extern void smb2_set_replay(struct TCP_Server_Info *server, 126 + struct smb_rqst *rqst); 127 + extern bool smb2_should_replay(struct cifs_tcon *tcon, 128 + int *pretries, 129 + int *pcur_sleep); 125 130 126 131 /* 127 132 * SMB2 Worker functions - most of protocol specific implementation details
-7
fs/smb/client/smbencrypt.c
··· 26 26 #include "cifsproto.h" 27 27 #include "../common/md4.h" 28 28 29 - #ifndef false 30 - #define false 0 31 - #endif 32 - #ifndef true 33 - #define true 1 34 - #endif 35 - 36 29 /* following came from the other byteorder.h to avoid include conflicts */ 37 30 #define CVAL(buf,pos) (((unsigned char *)(buf))[pos]) 38 31 #define SSVALX(buf,pos,val) (CVAL(buf,pos)=(val)&0xFF,CVAL(buf,pos+1)=(val)>>8)
+12 -2
fs/smb/client/transport.c
··· 400 400 server->conn_id, server->hostname); 401 401 } 402 402 smbd_done: 403 - if (rc < 0 && rc != -EINTR) 403 + /* 404 + * there's hardly any use for the layers above to know the 405 + * actual error code here. All they should do at this point is 406 + * to retry the connection and hope it goes away. 407 + */ 408 + if (rc < 0 && rc != -EINTR && rc != -EAGAIN) { 404 409 cifs_server_dbg(VFS, "Error %d sending data on socket to server\n", 405 410 rc); 406 - else if (rc > 0) 411 + rc = -ECONNABORTED; 412 + cifs_signal_cifsd_for_reconnect(server, false); 413 + } else if (rc > 0) 407 414 rc = 0; 408 415 out: 409 416 cifs_in_send_dec(server); ··· 1031 1024 for (i = 0; i < ses->chan_count; i++) { 1032 1025 server = ses->chans[i].server; 1033 1026 if (!server || server->terminate) 1027 + continue; 1028 + 1029 + if (CIFS_CHAN_NEEDS_RECONNECT(ses, i)) 1034 1030 continue; 1035 1031 1036 1032 /*
+2 -1
fs/smb/server/ksmbd_netlink.h
··· 304 304 KSMBD_EVENT_SPNEGO_AUTHEN_REQUEST, 305 305 KSMBD_EVENT_SPNEGO_AUTHEN_RESPONSE = 15, 306 306 307 - KSMBD_EVENT_MAX 307 + __KSMBD_EVENT_MAX, 308 + KSMBD_EVENT_MAX = __KSMBD_EVENT_MAX - 1 308 309 }; 309 310 310 311 /*
+2 -2
fs/smb/server/transport_ipc.c
··· 74 74 static int handle_generic_event(struct sk_buff *skb, struct genl_info *info); 75 75 static int ksmbd_ipc_heartbeat_request(void); 76 76 77 - static const struct nla_policy ksmbd_nl_policy[KSMBD_EVENT_MAX] = { 77 + static const struct nla_policy ksmbd_nl_policy[KSMBD_EVENT_MAX + 1] = { 78 78 [KSMBD_EVENT_UNSPEC] = { 79 79 .len = 0, 80 80 }, ··· 403 403 return -EPERM; 404 404 #endif 405 405 406 - if (type >= KSMBD_EVENT_MAX) { 406 + if (type > KSMBD_EVENT_MAX) { 407 407 WARN_ON(1); 408 408 return -EINVAL; 409 409 }
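The two ksmbd hunks above pair an enum fix with its users: `KSMBD_EVENT_MAX` becomes the highest *valid* value (via a `__KSMBD_EVENT_MAX` sentinel), so the netlink policy array must be sized `MAX + 1` and the range check becomes `type > MAX` rather than `type >= MAX`. A self-contained sketch of the idiom, with invented names:

```c
#include <assert.h>

enum demo_event {
	DEMO_EVENT_UNSPEC,
	DEMO_EVENT_A,
	DEMO_EVENT_B,

	__DEMO_EVENT_MAX,
	DEMO_EVENT_MAX = __DEMO_EVENT_MAX - 1,	/* highest valid value */
};

/* One policy slot per valid event, indices 0..DEMO_EVENT_MAX inclusive. */
static int demo_policy[DEMO_EVENT_MAX + 1];

static int validate(int type)
{
	if (type > DEMO_EVENT_MAX)	/* not >=: MAX itself is valid */
		return -22;		/* -EINVAL */
	return 0;
}
```

Before the fix, the array had no slot for the last enumerator and the `>=` check rejected it — exactly the off-by-one the hunks correct.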
+2
fs/smb/server/transport_tcp.c
··· 365 365 * @t: TCP transport instance 366 366 * @buf: buffer to store read data from socket 367 367 * @to_read: number of bytes to read from socket 368 + * @max_retries: number of retries if reading from socket fails 368 369 * 369 370 * Return: on success return number of bytes read from socket, 370 371 * otherwise return error number ··· 417 416 418 417 /** 419 418 * create_socket - create socket for ksmbd/0 419 + * @iface: interface to bind the created socket to 420 420 * 421 421 * Return: 0 on success, error number otherwise 422 422 */
-38
fs/tracefs/event_inode.c
··· 281 281 inode->i_gid = attr->gid; 282 282 } 283 283 284 - static void update_gid(struct eventfs_inode *ei, kgid_t gid, int level) 285 - { 286 - struct eventfs_inode *ei_child; 287 - 288 - /* at most we have events/system/event */ 289 - if (WARN_ON_ONCE(level > 3)) 290 - return; 291 - 292 - ei->attr.gid = gid; 293 - 294 - if (ei->entry_attrs) { 295 - for (int i = 0; i < ei->nr_entries; i++) { 296 - ei->entry_attrs[i].gid = gid; 297 - } 298 - } 299 - 300 - /* 301 - * Only eventfs_inode with dentries are updated, make sure 302 - * all eventfs_inodes are updated. If one of the children 303 - * do not have a dentry, this function must traverse it. 304 - */ 305 - list_for_each_entry_srcu(ei_child, &ei->children, list, 306 - srcu_read_lock_held(&eventfs_srcu)) { 307 - if (!ei_child->dentry) 308 - update_gid(ei_child, gid, level + 1); 309 - } 310 - } 311 - 312 - void eventfs_update_gid(struct dentry *dentry, kgid_t gid) 313 - { 314 - struct eventfs_inode *ei = dentry->d_fsdata; 315 - int idx; 316 - 317 - idx = srcu_read_lock(&eventfs_srcu); 318 - update_gid(ei, gid, 0); 319 - srcu_read_unlock(&eventfs_srcu, idx); 320 - } 321 - 322 284 /** 323 285 * create_file - create a file in the tracefs filesystem 324 286 * @name: the name of the file to create.
-1
fs/tracefs/internal.h
··· 82 82 struct dentry *eventfs_start_creating(const char *name, struct dentry *parent); 83 83 struct dentry *eventfs_failed_creating(struct dentry *dentry); 84 84 struct dentry *eventfs_end_creating(struct dentry *dentry); 85 - void eventfs_update_gid(struct dentry *dentry, kgid_t gid); 86 85 void eventfs_set_ei_status_free(struct tracefs_inode *ti, struct dentry *dentry); 87 86 88 87 #endif /* _TRACEFS_INTERNAL_H */
+17 -10
fs/xfs/xfs_super.c
··· 1496 1496 1497 1497 mp->m_super = sb; 1498 1498 1499 + /* 1500 + * Copy VFS mount flags from the context now that all parameter parsing 1501 + * is guaranteed to have been completed by either the old mount API or 1502 + * the newer fsopen/fsconfig API. 1503 + */ 1504 + if (fc->sb_flags & SB_RDONLY) 1505 + set_bit(XFS_OPSTATE_READONLY, &mp->m_opstate); 1506 + if (fc->sb_flags & SB_DIRSYNC) 1507 + mp->m_features |= XFS_FEAT_DIRSYNC; 1508 + if (fc->sb_flags & SB_SYNCHRONOUS) 1509 + mp->m_features |= XFS_FEAT_WSYNC; 1510 + 1499 1511 error = xfs_fs_validate_params(mp); 1500 1512 if (error) 1501 1513 return error; ··· 1977 1965 .free = xfs_fs_free, 1978 1966 }; 1979 1967 1968 + /* 1969 + * WARNING: do not initialise any parameters in this function that depend on 1970 + * mount option parsing having already been performed as this can be called from 1971 + * fsopen() before any parameters have been set. 1972 + */ 1980 1973 static int xfs_init_fs_context( 1981 1974 struct fs_context *fc) 1982 1975 { ··· 2012 1995 mp->m_logbufs = -1; 2013 1996 mp->m_logbsize = -1; 2014 1997 mp->m_allocsize_log = 16; /* 64k */ 2015 - 2016 - /* 2017 - * Copy binary VFS mount flags we are interested in. 2018 - */ 2019 - if (fc->sb_flags & SB_RDONLY) 2020 - set_bit(XFS_OPSTATE_READONLY, &mp->m_opstate); 2021 - if (fc->sb_flags & SB_DIRSYNC) 2022 - mp->m_features |= XFS_FEAT_DIRSYNC; 2023 - if (fc->sb_flags & SB_SYNCHRONOUS) 2024 - mp->m_features |= XFS_FEAT_WSYNC; 2025 1998 2026 1999 fc->s_fs_info = mp; 2027 2000 fc->ops = &xfs_context_ops;
-11
include/linux/hid_bpf.h
··· 77 77 int hid_bpf_device_event(struct hid_bpf_ctx *ctx); 78 78 int hid_bpf_rdesc_fixup(struct hid_bpf_ctx *ctx); 79 79 80 - /* Following functions are kfunc that we export to BPF programs */ 81 - /* available everywhere in HID-BPF */ 82 - __u8 *hid_bpf_get_data(struct hid_bpf_ctx *ctx, unsigned int offset, const size_t __sz); 83 - 84 - /* only available in syscall */ 85 - int hid_bpf_attach_prog(unsigned int hid_id, int prog_fd, __u32 flags); 86 - int hid_bpf_hw_request(struct hid_bpf_ctx *ctx, __u8 *buf, size_t buf__sz, 87 - enum hid_report_type rtype, enum hid_class_request reqtype); 88 - struct hid_bpf_ctx *hid_bpf_allocate_context(unsigned int hid_id); 89 - void hid_bpf_release_context(struct hid_bpf_ctx *ctx); 90 - 91 80 /* 92 81 * Below is HID internal 93 82 */
+1 -1
include/linux/libata.h
··· 471 471 472 472 /* 473 473 * Link power management policy: If you alter this, you also need to 474 - * alter libata-scsi.c (for the ascii descriptions) 474 + * alter libata-sata.c (for the ascii descriptions) 475 475 */ 476 476 enum ata_lpm_policy { 477 477 ATA_LPM_UNKNOWN,
+2 -2
include/linux/lsm_hook_defs.h
··· 315 315 LSM_HOOK(int, 0, socket_setsockopt, struct socket *sock, int level, int optname) 316 316 LSM_HOOK(int, 0, socket_shutdown, struct socket *sock, int how) 317 317 LSM_HOOK(int, 0, socket_sock_rcv_skb, struct sock *sk, struct sk_buff *skb) 318 - LSM_HOOK(int, 0, socket_getpeersec_stream, struct socket *sock, 318 + LSM_HOOK(int, -ENOPROTOOPT, socket_getpeersec_stream, struct socket *sock, 319 319 sockptr_t optval, sockptr_t optlen, unsigned int len) 320 - LSM_HOOK(int, 0, socket_getpeersec_dgram, struct socket *sock, 320 + LSM_HOOK(int, -ENOPROTOOPT, socket_getpeersec_dgram, struct socket *sock, 321 321 struct sk_buff *skb, u32 *secid) 322 322 LSM_HOOK(int, 0, sk_alloc_security, struct sock *sk, int family, gfp_t priority) 323 323 LSM_HOOK(void, LSM_RET_VOID, sk_free_security, struct sock *sk)
+1
include/linux/mman.h
··· 156 156 return _calc_vm_trans(flags, MAP_GROWSDOWN, VM_GROWSDOWN ) | 157 157 _calc_vm_trans(flags, MAP_LOCKED, VM_LOCKED ) | 158 158 _calc_vm_trans(flags, MAP_SYNC, VM_SYNC ) | 159 + _calc_vm_trans(flags, MAP_STACK, VM_NOHUGEPAGE) | 159 160 arch_calc_vm_flag_bits(flags); 160 161 } 161 162
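The mman.h hunk above adds a `MAP_STACK` → `VM_NOHUGEPAGE` entry to the table of mmap-flag translations. `_calc_vm_trans()` converts one flag bit into another by pure multiplication/division on the bit values; a userspace sketch of the same helper — the flag constants below are made up for illustration, not the kernel's values:

```c
#include <assert.h>

#define MAP_STACK_DEMO     0x20000ul
#define VM_NOHUGEPAGE_DEMO 0x40000000ul

/*
 * Translate bit 'bit1' in x to 'bit2', scaling up or down depending on
 * which bit is higher -- the same idea as the kernel's _calc_vm_trans().
 */
#define calc_vm_trans(x, bit1, bit2) \
	((!(bit1)) ? 0 : \
	 ((bit1) <= (bit2)) ? ((x) & (bit1)) * ((bit2) / (bit1)) \
			    : ((x) & (bit1)) / ((bit1) / (bit2)))

static unsigned long calc_vm_flag_bits(unsigned long flags)
{
	return calc_vm_trans(flags, MAP_STACK_DEMO, VM_NOHUGEPAGE_DEMO);
}
```

The trick is branch-free per flag: when the bit is absent the masked value is zero, so the scale factor is harmless.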
+3 -3
include/linux/mmzone.h
··· 2013 2013 if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS) 2014 2014 return 0; 2015 2015 ms = __pfn_to_section(pfn); 2016 - rcu_read_lock(); 2016 + rcu_read_lock_sched(); 2017 2017 if (!valid_section(ms)) { 2018 - rcu_read_unlock(); 2018 + rcu_read_unlock_sched(); 2019 2019 return 0; 2020 2020 } 2021 2021 /* ··· 2023 2023 * the entire section-sized span. 2024 2024 */ 2025 2025 ret = early_section(ms) || pfn_section_valid(ms, pfn); 2026 - rcu_read_unlock(); 2026 + rcu_read_unlock_sched(); 2027 2027 2028 2028 return ret; 2029 2029 }
+4
include/linux/netfilter/ipset/ip_set.h
··· 186 186 /* Return true if "b" set is the same as "a" 187 187 * according to the create set parameters */ 188 188 bool (*same_set)(const struct ip_set *a, const struct ip_set *b); 189 + /* Cancel ongoing garbage collectors before destroying the set */ 190 + void (*cancel_gc)(struct ip_set *set); 189 191 /* Region-locking is used */ 190 192 bool region_lock; 191 193 }; ··· 244 242 245 243 /* A generic IP set */ 246 244 struct ip_set { 245 + /* For call_rcu in destroy */ 246 + struct rcu_head rcu; 247 247 /* The name of the set */ 248 248 char name[IPSET_MAXNAMELEN]; 249 249 /* Lock protecting the set data */
+1 -1
include/linux/spi/spi.h
··· 21 21 #include <uapi/linux/spi/spi.h> 22 22 23 23 /* Max no. of CS supported per spi device */ 24 - #define SPI_CS_CNT_MAX 4 24 + #define SPI_CS_CNT_MAX 16 25 25 26 26 struct dma_chan; 27 27 struct software_node;
+1
include/linux/syscalls.h
··· 128 128 #define __TYPE_IS_LL(t) (__TYPE_AS(t, 0LL) || __TYPE_AS(t, 0ULL)) 129 129 #define __SC_LONG(t, a) __typeof(__builtin_choose_expr(__TYPE_IS_LL(t), 0LL, 0L)) a 130 130 #define __SC_CAST(t, a) (__force t) a 131 + #define __SC_TYPE(t, a) t 131 132 #define __SC_ARGS(t, a) a 132 133 #define __SC_TEST(t, a) (void)BUILD_BUG_ON_ZERO(!__TYPE_IS_LL(t) && sizeof(t) > sizeof(long)) 133 134
+14 -6
include/net/af_unix.h
··· 54 54 55 55 #define UNIXCB(skb) (*(struct unix_skb_parms *)&((skb)->cb)) 56 56 57 - #define unix_state_lock(s) spin_lock(&unix_sk(s)->lock) 58 - #define unix_state_unlock(s) spin_unlock(&unix_sk(s)->lock) 59 - #define unix_state_lock_nested(s) \ 60 - spin_lock_nested(&unix_sk(s)->lock, \ 61 - SINGLE_DEPTH_NESTING) 62 - 63 57 /* The AF_UNIX socket */ 64 58 struct unix_sock { 65 59 /* WARNING: sk has to be the first member */ ··· 78 84 79 85 #define unix_sk(ptr) container_of_const(ptr, struct unix_sock, sk) 80 86 #define unix_peer(sk) (unix_sk(sk)->peer) 87 + 88 + #define unix_state_lock(s) spin_lock(&unix_sk(s)->lock) 89 + #define unix_state_unlock(s) spin_unlock(&unix_sk(s)->lock) 90 + enum unix_socket_lock_class { 91 + U_LOCK_NORMAL, 92 + U_LOCK_SECOND, /* for double locking, see unix_state_double_lock(). */ 93 + U_LOCK_DIAG, /* used while dumping icons, see sk_diag_dump_icons(). */ 94 + }; 95 + 96 + static inline void unix_state_lock_nested(struct sock *sk, 97 + enum unix_socket_lock_class subclass) 98 + { 99 + spin_lock_nested(&unix_sk(sk)->lock, subclass); 100 + } 81 101 82 102 #define peer_wait peer_wq.wait 83 103
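The af_unix.h hunk above replaces the single `SINGLE_DEPTH_NESTING` wrapper with an explicit enum of lockdep subclasses, so double-locking and the diag dumper get distinct annotations. Lockdep itself has no userspace equivalent, but the discipline the `U_LOCK_SECOND` annotation describes — acquiring two peer locks in a stable (address) order so concurrent double-locks cannot deadlock — can be sketched with pthreads; all names here are invented:

```c
#include <assert.h>
#include <pthread.h>

struct demo_sock {
	pthread_mutex_t lock;
	int state;
};

/*
 * Lock two sockets in address order, as unix_state_double_lock() does,
 * so two threads double-locking the same pair can never deadlock.
 */
static void demo_state_double_lock(struct demo_sock *a, struct demo_sock *b)
{
	if (a > b) {
		struct demo_sock *t = a; a = b; b = t;
	}
	pthread_mutex_lock(&a->lock);
	if (a != b)
		pthread_mutex_lock(&b->lock);
}

static void demo_state_double_unlock(struct demo_sock *a, struct demo_sock *b)
{
	pthread_mutex_unlock(&a->lock);
	if (a != b)
		pthread_mutex_unlock(&b->lock);
}
```

In the kernel the address-ordering alone is not enough to silence lockdep — it sees "two locks of the same class" — which is what the `U_LOCK_SECOND` subclass exists to annotate.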
+1 -1
include/net/ip.h
··· 767 767 * Functions provided by ip_sockglue.c 768 768 */ 769 769 770 - void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb); 770 + void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb, bool drop_dst); 771 771 void ip_cmsg_recv_offset(struct msghdr *msg, struct sock *sk, 772 772 struct sk_buff *skb, int tlen, int offset); 773 773 int ip_cmsg_send(struct sock *sk, struct msghdr *msg,
+2
include/net/netfilter/nf_tables.h
··· 1357 1357 * @type: stateful object numeric type 1358 1358 * @owner: module owner 1359 1359 * @maxattr: maximum netlink attribute 1360 + * @family: address family for AF-specific object types 1360 1361 * @policy: netlink attribute policy 1361 1362 */ 1362 1363 struct nft_object_type { ··· 1367 1366 struct list_head list; 1368 1367 u32 type; 1369 1368 unsigned int maxattr; 1369 + u8 family; 1370 1370 struct module *owner; 1371 1371 const struct nla_policy *policy; 1372 1372 };
+20 -5
include/uapi/drm/ivpu_accel.h
··· 53 53 #define DRM_IVPU_PARAM_CORE_CLOCK_RATE 3 54 54 #define DRM_IVPU_PARAM_NUM_CONTEXTS 4 55 55 #define DRM_IVPU_PARAM_CONTEXT_BASE_ADDRESS 5 56 - #define DRM_IVPU_PARAM_CONTEXT_PRIORITY 6 56 + #define DRM_IVPU_PARAM_CONTEXT_PRIORITY 6 /* Deprecated */ 57 57 #define DRM_IVPU_PARAM_CONTEXT_ID 7 58 58 #define DRM_IVPU_PARAM_FW_API_VERSION 8 59 59 #define DRM_IVPU_PARAM_ENGINE_HEARTBEAT 9 ··· 64 64 65 65 #define DRM_IVPU_PLATFORM_TYPE_SILICON 0 66 66 67 + /* Deprecated, use DRM_IVPU_JOB_PRIORITY */ 67 68 #define DRM_IVPU_CONTEXT_PRIORITY_IDLE 0 68 69 #define DRM_IVPU_CONTEXT_PRIORITY_NORMAL 1 69 70 #define DRM_IVPU_CONTEXT_PRIORITY_FOCUS 2 70 71 #define DRM_IVPU_CONTEXT_PRIORITY_REALTIME 3 72 + 73 + #define DRM_IVPU_JOB_PRIORITY_DEFAULT 0 74 + #define DRM_IVPU_JOB_PRIORITY_IDLE 1 75 + #define DRM_IVPU_JOB_PRIORITY_NORMAL 2 76 + #define DRM_IVPU_JOB_PRIORITY_FOCUS 3 77 + #define DRM_IVPU_JOB_PRIORITY_REALTIME 4 71 78 72 79 /** 73 80 * DRM_IVPU_CAP_METRIC_STREAMER ··· 118 111 * 119 112 * %DRM_IVPU_PARAM_CONTEXT_BASE_ADDRESS: 120 113 * Lowest VPU virtual address available in the current context (read-only) 121 - * 122 - * %DRM_IVPU_PARAM_CONTEXT_PRIORITY: 123 - * Value of current context scheduling priority (read-write). 124 - * See DRM_IVPU_CONTEXT_PRIORITY_* for possible values. 125 114 * 126 115 * %DRM_IVPU_PARAM_CONTEXT_ID: 127 116 * Current context ID, always greater than 0 (read-only) ··· 289 286 * to be executed. The offset has to be 8-byte aligned. 290 287 */ 291 288 __u32 commands_offset; 289 + 290 + /** 291 + * @priority: 292 + * 293 + * Priority to be set for related job command queue, can be one of the following: 294 + * %DRM_IVPU_JOB_PRIORITY_DEFAULT 295 + * %DRM_IVPU_JOB_PRIORITY_IDLE 296 + * %DRM_IVPU_JOB_PRIORITY_NORMAL 297 + * %DRM_IVPU_JOB_PRIORITY_FOCUS 298 + * %DRM_IVPU_JOB_PRIORITY_REALTIME 299 + */ 300 + __u32 priority; 292 301 }; 293 302 294 303 /* drm_ivpu_bo_wait job status codes */
-1
io_uring/opdef.c
··· 471 471 }, 472 472 [IORING_OP_FIXED_FD_INSTALL] = { 473 473 .needs_file = 1, 474 - .audit_skip = 1, 475 474 .prep = io_install_fixed_fd_prep, 476 475 .issue = io_install_fixed_fd, 477 476 },
+4
io_uring/openclose.c
··· 277 277 if (flags & ~IORING_FIXED_FD_NO_CLOEXEC) 278 278 return -EINVAL; 279 279 280 + /* ensure the task's creds are used when installing/receiving fds */ 281 + if (req->flags & REQ_F_CREDS) 282 + return -EPERM; 283 + 280 284 /* default to O_CLOEXEC, disable if IORING_FIXED_FD_NO_CLOEXEC is set */ 281 285 ifi = io_kiocb_to_cmd(req, struct io_fixed_install); 282 286 ifi->o_flags = O_CLOEXEC;
+1 -1
kernel/events/uprobes.c
··· 537 537 } 538 538 } 539 539 540 - ret = __replace_page(vma, vaddr, old_page, new_page); 540 + ret = __replace_page(vma, vaddr & PAGE_MASK, old_page, new_page); 541 541 if (new_page) 542 542 put_page(new_page); 543 543 put_old:
+12 -3
kernel/futex/core.c
··· 627 627 } 628 628 629 629 /* 630 - * PI futexes can not be requeued and must remove themselves from the 631 - * hash bucket. The hash bucket lock (i.e. lock_ptr) is held. 630 + * PI futexes can not be requeued and must remove themselves from the hash 631 + * bucket. The hash bucket lock (i.e. lock_ptr) is held. 632 632 */ 633 633 void futex_unqueue_pi(struct futex_q *q) 634 634 { 635 - __futex_unqueue(q); 635 + /* 636 + * If the lock was not acquired (due to timeout or signal) then the 637 + * rt_waiter is removed before futex_q is. If this is observed by 638 + * an unlocker after dropping the rtmutex wait lock and before 639 + * acquiring the hash bucket lock, then the unlocker dequeues the 640 + * futex_q from the hash bucket list to guarantee consistent state 641 + * vs. userspace. Therefore the dequeue here must be conditional. 642 + */ 643 + if (!plist_node_empty(&q->list)) 644 + __futex_unqueue(q); 636 645 637 646 BUG_ON(!q->pi_state); 638 647 put_pi_state(q->pi_state);
+8 -3
kernel/futex/pi.c
··· 1135 1135 1136 1136 hb = futex_hash(&key); 1137 1137 spin_lock(&hb->lock); 1138 + retry_hb: 1138 1139 1139 1140 /* 1140 1141 * Check waiters first. We do not trust user space values at ··· 1178 1177 /* 1179 1178 * Futex vs rt_mutex waiter state -- if there are no rt_mutex 1180 1179 * waiters even though futex thinks there are, then the waiter 1181 - * is leaving and the uncontended path is safe to take. 1180 + * is leaving. The entry needs to be removed from the list so a 1181 + * new futex_lock_pi() is not using this stale PI-state while 1182 + * the futex is available in user space again. 1183 + * There can be more than one task on its way out so it needs 1184 + * to retry. 1182 1185 */ 1183 1186 rt_waiter = rt_mutex_top_waiter(&pi_state->pi_mutex); 1184 1187 if (!rt_waiter) { 1188 + __futex_unqueue(top_waiter); 1185 1189 raw_spin_unlock_irq(&pi_state->pi_mutex.wait_lock); 1186 - goto do_uncontended; 1190 + goto retry_hb; 1187 1191 } 1188 1192 1189 1193 get_pi_state(pi_state); ··· 1223 1217 return ret; 1224 1218 } 1225 1219 1226 - do_uncontended: 1227 1220 /* 1228 1221 * We have no kernel internal state, i.e. no waiters in the 1229 1222 * kernel. Waiters which are about to queue themselves are stuck
+1 -1
kernel/irq/irqdesc.c
··· 600 600 mutex_init(&desc[i].request_mutex); 601 601 init_waitqueue_head(&desc[i].wait_for_threads); 602 602 desc_set_defaults(i, &desc[i], node, NULL, NULL); 603 - irq_resend_init(desc); 603 + irq_resend_init(&desc[i]); 604 604 } 605 605 return arch_early_irq_init(); 606 606 }
+24 -1
kernel/time/clocksource.c
··· 99 99 * Interval: 0.5sec. 100 100 */ 101 101 #define WATCHDOG_INTERVAL (HZ >> 1) 102 + #define WATCHDOG_INTERVAL_MAX_NS ((2 * WATCHDOG_INTERVAL) * (NSEC_PER_SEC / HZ)) 102 103 103 104 /* 104 105 * Threshold: 0.0312s, when doubled: 0.0625s. ··· 135 134 static DEFINE_SPINLOCK(watchdog_lock); 136 135 static int watchdog_running; 137 136 static atomic_t watchdog_reset_pending; 137 + static int64_t watchdog_max_interval; 138 138 139 139 static inline void clocksource_watchdog_lock(unsigned long *flags) 140 140 { ··· 401 399 static void clocksource_watchdog(struct timer_list *unused) 402 400 { 403 401 u64 csnow, wdnow, cslast, wdlast, delta; 402 + int64_t wd_nsec, cs_nsec, interval; 404 403 int next_cpu, reset_pending; 405 - int64_t wd_nsec, cs_nsec; 406 404 struct clocksource *cs; 407 405 enum wd_read_status read_ret; 408 406 unsigned long extra_wait = 0; ··· 471 469 472 470 if (atomic_read(&watchdog_reset_pending)) 473 471 continue; 472 + 473 + /* 474 + * The processing of timer softirqs can get delayed (usually 475 + * on account of ksoftirqd not getting to run in a timely 476 + * manner), which causes the watchdog interval to stretch. 477 + * Skew detection may fail for longer watchdog intervals 478 + * on account of fixed margins being used. 479 + * Some clocksources, e.g. acpi_pm, cannot tolerate 480 + * watchdog intervals longer than a few seconds. 481 + */ 482 + interval = max(cs_nsec, wd_nsec); 483 + if (unlikely(interval > WATCHDOG_INTERVAL_MAX_NS)) { 484 + if (system_state > SYSTEM_SCHEDULING && 485 + interval > 2 * watchdog_max_interval) { 486 + watchdog_max_interval = interval; 487 + pr_warn("Long readout interval, skipping watchdog check: cs_nsec: %lld wd_nsec: %lld\n", 488 + cs_nsec, wd_nsec); 489 + } 490 + watchdog_timer.expires = jiffies; 491 + continue; 492 + } 474 493 475 494 /* Check the deviation from the watchdog clocksource. */ 476 495 md = cs->uncertainty_margin + watchdog->uncertainty_margin;
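The clocksource hunk above skips the skew check whenever the measured readout interval stretched past `WATCHDOG_INTERVAL_MAX_NS` (two watchdog periods), because the fixed uncertainty margins produce false positives over long intervals. The decision logic in isolation — constants simplified here, with the kernel's `HZ` arithmetic elided:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NSEC_PER_SEC 1000000000LL
/* Two watchdog periods of 0.5s each, mirroring the hunk above. */
#define WATCHDOG_INTERVAL_MAX_NS (2 * (NSEC_PER_SEC / 2))

/* Return true when this sample's skew check should be skipped. */
static bool skip_skew_check(int64_t cs_nsec, int64_t wd_nsec)
{
	int64_t interval = cs_nsec > wd_nsec ? cs_nsec : wd_nsec;

	return interval > WATCHDOG_INTERVAL_MAX_NS;
}
```

Taking the max of the two deltas matters: either the clocksource or the watchdog side may be the one that stretched while softirqs were delayed.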
+5
kernel/time/tick-sched.c
··· 1577 1577 { 1578 1578 struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu); 1579 1579 ktime_t idle_sleeptime, iowait_sleeptime; 1580 + unsigned long idle_calls, idle_sleeps; 1580 1581 1581 1582 # ifdef CONFIG_HIGH_RES_TIMERS 1582 1583 if (ts->sched_timer.base) ··· 1586 1585 1587 1586 idle_sleeptime = ts->idle_sleeptime; 1588 1587 iowait_sleeptime = ts->iowait_sleeptime; 1588 + idle_calls = ts->idle_calls; 1589 + idle_sleeps = ts->idle_sleeps; 1589 1590 memset(ts, 0, sizeof(*ts)); 1590 1591 ts->idle_sleeptime = idle_sleeptime; 1591 1592 ts->iowait_sleeptime = iowait_sleeptime; 1593 + ts->idle_calls = idle_calls; 1594 + ts->idle_sleeps = idle_sleeps; 1592 1595 } 1593 1596 #endif 1594 1597
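The tick-sched hunk above extends an existing save/memset/restore pattern: the per-CPU state is wiped on hotplug, but monotonic accounting fields (`idle_calls`, `idle_sleeps`, now joining the two sleeptime fields) must survive so userspace-visible counters never go backwards. The pattern itself, with an invented struct:

```c
#include <assert.h>
#include <string.h>

struct demo_sched {
	long long idle_sleeptime;
	long long iowait_sleeptime;
	unsigned long idle_calls;
	unsigned long idle_sleeps;
	int transient_state;	/* everything else must be zeroed */
};

/* Reset transient state but keep the monotonic accounting fields. */
static void demo_clear_sched(struct demo_sched *ts)
{
	long long idle_sleeptime = ts->idle_sleeptime;
	long long iowait_sleeptime = ts->iowait_sleeptime;
	unsigned long idle_calls = ts->idle_calls;
	unsigned long idle_sleeps = ts->idle_sleeps;

	memset(ts, 0, sizeof(*ts));
	ts->idle_sleeptime = idle_sleeptime;
	ts->iowait_sleeptime = iowait_sleeptime;
	ts->idle_calls = idle_calls;
	ts->idle_sleeps = idle_sleeps;
}
```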
+4 -2
kernel/trace/trace_events_trigger.c
··· 1470 1470 struct event_trigger_data *data, 1471 1471 struct trace_event_file *file) 1472 1472 { 1473 - if (tracing_alloc_snapshot_instance(file->tr) != 0) 1474 - return 0; 1473 + int ret = tracing_alloc_snapshot_instance(file->tr); 1474 + 1475 + if (ret < 0) 1476 + return ret; 1475 1477 1476 1478 return register_trigger(glob, data, file); 1477 1479 }
+2 -2
lib/kunit/device.c
··· 45 45 int error; 46 46 47 47 kunit_bus_device = root_device_register("kunit"); 48 - if (!kunit_bus_device) 49 - return -ENOMEM; 48 + if (IS_ERR(kunit_bus_device)) 49 + return PTR_ERR(kunit_bus_device); 50 50 51 51 error = bus_register(&kunit_bus_type); 52 52 if (error)
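The kunit fix above matters because `root_device_register()` reports failure with an ERR_PTR-encoded pointer, never NULL, so the old `!kunit_bus_device` check could not fire. The encoding stores a small negative errno in the top page of the address space; a userspace sketch of the idiom (the kernel's real helpers live in `include/linux/err.h`):

```c
#include <assert.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long error)
{
	return (void *)error;	/* -errno lands in the top 4095 addresses */
}

static long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

Because the error range is disjoint from both NULL and valid addresses, a caller must pick the matching check: `IS_ERR()` for APIs that encode errnos, `!ptr` for APIs that return NULL — mixing them up is exactly the bug being fixed.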
+4
lib/kunit/executor.c
··· 146 146 kfree(suite_set.start); 147 147 } 148 148 149 + /* 150 + * Filter and reallocate test suites. Must return the filtered test suites set 151 + * allocated at a valid virtual address or NULL in case of error. 152 + */ 149 153 struct kunit_suite_set 150 154 kunit_filter_suites(const struct kunit_suite_set *suite_set, 151 155 const char *filter_glob,
+1 -1
lib/kunit/kunit-test.c
··· 720 720 long action_was_run = 0; 721 721 722 722 test_device = kunit_device_register(test, "my_device"); 723 - KUNIT_ASSERT_NOT_NULL(test, test_device); 723 + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_device); 724 724 725 725 /* Add an action to verify cleanup. */ 726 726 devm_add_action(test_device, test_dev_action, &action_was_run);
+11 -3
lib/kunit/test.c
··· 17 17 #include <linux/panic.h> 18 18 #include <linux/sched/debug.h> 19 19 #include <linux/sched.h> 20 + #include <linux/mm.h> 20 21 21 22 #include "debugfs.h" 22 23 #include "device-impl.h" ··· 802 801 }; 803 802 const char *action = kunit_action(); 804 803 804 + /* 805 + * Check if the start address is a valid virtual address to detect 806 + * if the module load sequence has failed and the suite set has not 807 + * been initialized and filtered. 808 + */ 809 + if (!suite_set.start || !virt_addr_valid(suite_set.start)) 810 + return; 811 + 805 812 if (!action) 806 813 __kunit_test_suites_exit(mod->kunit_suites, 807 814 mod->num_kunit_suites); 808 815 809 - if (suite_set.start) 810 - kunit_free_suite_set(suite_set); 816 + kunit_free_suite_set(suite_set); 811 817 } 812 818 813 819 static int kunit_module_notify(struct notifier_block *nb, unsigned long val, ··· 824 816 825 817 switch (val) { 826 818 case MODULE_STATE_LIVE: 819 + kunit_module_init(mod); 827 820 break; 828 821 case MODULE_STATE_GOING: 829 822 kunit_module_exit(mod); 830 823 break; 831 824 case MODULE_STATE_COMING: 832 - kunit_module_init(mod); 833 825 break; 834 826 case MODULE_STATE_UNFORMED: 835 827 break;
+263 -110
lib/stackdepot.c
··· 14 14 15 15 #define pr_fmt(fmt) "stackdepot: " fmt 16 16 17 + #include <linux/debugfs.h> 17 18 #include <linux/gfp.h> 18 19 #include <linux/jhash.h> 19 20 #include <linux/kernel.h> ··· 22 21 #include <linux/list.h> 23 22 #include <linux/mm.h> 24 23 #include <linux/mutex.h> 25 - #include <linux/percpu.h> 26 24 #include <linux/printk.h> 25 + #include <linux/rculist.h> 26 + #include <linux/rcupdate.h> 27 27 #include <linux/refcount.h> 28 28 #include <linux/slab.h> 29 29 #include <linux/spinlock.h> ··· 69 67 }; 70 68 71 69 struct stack_record { 72 - struct list_head list; /* Links in hash table or freelist */ 70 + struct list_head hash_list; /* Links in the hash table */ 73 71 u32 hash; /* Hash in hash table */ 74 72 u32 size; /* Number of stored frames */ 75 - union handle_parts handle; 73 + union handle_parts handle; /* Constant after initialization */ 76 74 refcount_t count; 77 - unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES]; /* Frames */ 75 + union { 76 + unsigned long entries[CONFIG_STACKDEPOT_MAX_FRAMES]; /* Frames */ 77 + struct { 78 + /* 79 + * An important invariant of the implementation is to 80 + * only place a stack record onto the freelist iff its 81 + * refcount is zero. Because stack records with a zero 82 + * refcount are never considered as valid, it is safe to 83 + * union @entries and freelist management state below. 84 + * Conversely, as soon as an entry is off the freelist 85 + * and its refcount becomes non-zero, the below must not 86 + * be accessed until being placed back on the freelist. 87 + */ 88 + struct list_head free_list; /* Links in the freelist */ 89 + unsigned long rcu_state; /* RCU cookie */ 90 + }; 91 + }; 78 92 }; 79 93 80 94 #define DEPOT_STACK_RECORD_SIZE \ ··· 130 112 * yet allocated or if the limit on the number of pools is reached. 131 113 */ 132 114 static bool new_pool_required = true; 133 - /* Lock that protects the variables above. 
*/ 134 - static DEFINE_RWLOCK(pool_rwlock); 115 + /* The lock must be held when performing pool or freelist modifications. */ 116 + static DEFINE_RAW_SPINLOCK(pool_lock); 117 + 118 + /* Statistics counters for debugfs. */ 119 + enum depot_counter_id { 120 + DEPOT_COUNTER_ALLOCS, 121 + DEPOT_COUNTER_FREES, 122 + DEPOT_COUNTER_INUSE, 123 + DEPOT_COUNTER_FREELIST_SIZE, 124 + DEPOT_COUNTER_COUNT, 125 + }; 126 + static long counters[DEPOT_COUNTER_COUNT]; 127 + static const char *const counter_names[] = { 128 + [DEPOT_COUNTER_ALLOCS] = "allocations", 129 + [DEPOT_COUNTER_FREES] = "frees", 130 + [DEPOT_COUNTER_INUSE] = "in_use", 131 + [DEPOT_COUNTER_FREELIST_SIZE] = "freelist_size", 132 + }; 133 + static_assert(ARRAY_SIZE(counter_names) == DEPOT_COUNTER_COUNT); 135 134 136 135 static int __init disable_stack_depot(char *str) 137 136 { ··· 293 258 } 294 259 EXPORT_SYMBOL_GPL(stack_depot_init); 295 260 296 - /* Initializes a stack depol pool. */ 261 + /* 262 + * Initializes new stack depot @pool, release all its entries to the freelist, 263 + * and update the list of pools. 264 + */ 297 265 static void depot_init_pool(void *pool) 298 266 { 299 267 int offset; 300 268 301 - lockdep_assert_held_write(&pool_rwlock); 302 - 303 - WARN_ON(!list_empty(&free_stacks)); 269 + lockdep_assert_held(&pool_lock); 304 270 305 271 /* Initialize handles and link stack records into the freelist. */ 306 272 for (offset = 0; offset <= DEPOT_POOL_SIZE - DEPOT_STACK_RECORD_SIZE; ··· 312 276 stack->handle.offset = offset >> DEPOT_STACK_ALIGN; 313 277 stack->handle.extra = 0; 314 278 315 - list_add(&stack->list, &free_stacks); 279 + /* 280 + * Stack traces of size 0 are never saved, and we can simply use 281 + * the size field as an indicator if this is a new unused stack 282 + * record in the freelist. 
283 + */ 284 + stack->size = 0; 285 + 286 + INIT_LIST_HEAD(&stack->hash_list); 287 + /* 288 + * Add to the freelist front to prioritize never-used entries: 289 + * required in case there are entries in the freelist, but their 290 + * RCU cookie still belongs to the current RCU grace period 291 + * (there can still be concurrent readers). 292 + */ 293 + list_add(&stack->free_list, &free_stacks); 294 + counters[DEPOT_COUNTER_FREELIST_SIZE]++; 316 295 } 317 296 318 297 /* Save reference to the pool to be used by depot_fetch_stack(). */ 319 298 stack_pools[pools_num] = pool; 320 - pools_num++; 299 + 300 + /* Pairs with concurrent READ_ONCE() in depot_fetch_stack(). */ 301 + WRITE_ONCE(pools_num, pools_num + 1); 302 + ASSERT_EXCLUSIVE_WRITER(pools_num); 321 303 } 322 304 323 305 /* Keeps the preallocated memory to be used for a new stack depot pool. */ 324 306 static void depot_keep_new_pool(void **prealloc) 325 307 { 326 - lockdep_assert_held_write(&pool_rwlock); 308 + lockdep_assert_held(&pool_lock); 327 309 328 310 /* 329 311 * If a new pool is already saved or the maximum number of ··· 364 310 * number of pools is reached. In either case, take note that 365 311 * keeping another pool is not required. 366 312 */ 367 - new_pool_required = false; 313 + WRITE_ONCE(new_pool_required, false); 368 314 } 369 315 370 - /* Updates references to the current and the next stack depot pools. */ 371 - static bool depot_update_pools(void **prealloc) 316 + /* 317 + * Try to initialize a new stack depot pool from either a previous or the 318 + * current pre-allocation, and release all its entries to the freelist. 319 + */ 320 + static bool depot_try_init_pool(void **prealloc) 372 321 { 373 - lockdep_assert_held_write(&pool_rwlock); 374 - 375 - /* Check if we still have objects in the freelist. */ 376 - if (!list_empty(&free_stacks)) 377 - goto out_keep_prealloc; 322 + lockdep_assert_held(&pool_lock); 378 323 379 324 /* Check if we have a new pool saved and use it. 
*/ 380 325 if (new_pool) { ··· 382 329 383 330 /* Take note that we might need a new new_pool. */ 384 331 if (pools_num < DEPOT_MAX_POOLS) 385 - new_pool_required = true; 332 + WRITE_ONCE(new_pool_required, true); 386 333 387 - /* Try keeping the preallocated memory for new_pool. */ 388 - goto out_keep_prealloc; 334 + return true; 389 335 } 390 336 391 337 /* Bail out if we reached the pool limit. */ ··· 401 349 } 402 350 403 351 return false; 352 + } 404 353 405 - out_keep_prealloc: 406 - /* Keep the preallocated memory for a new pool if required. */ 407 - if (*prealloc) 408 - depot_keep_new_pool(prealloc); 409 - return true; 354 + /* Try to find next free usable entry. */ 355 + static struct stack_record *depot_pop_free(void) 356 + { 357 + struct stack_record *stack; 358 + 359 + lockdep_assert_held(&pool_lock); 360 + 361 + if (list_empty(&free_stacks)) 362 + return NULL; 363 + 364 + /* 365 + * We maintain the invariant that the elements in front are least 366 + * recently used, and are therefore more likely to be associated with an 367 + * RCU grace period in the past. Consequently it is sufficient to only 368 + * check the first entry. 369 + */ 370 + stack = list_first_entry(&free_stacks, struct stack_record, free_list); 371 + if (stack->size && !poll_state_synchronize_rcu(stack->rcu_state)) 372 + return NULL; 373 + 374 + list_del(&stack->free_list); 375 + counters[DEPOT_COUNTER_FREELIST_SIZE]--; 376 + 377 + return stack; 410 378 } 411 379 412 380 /* Allocates a new stack in a stack depot pool. */ ··· 435 363 { 436 364 struct stack_record *stack; 437 365 438 - lockdep_assert_held_write(&pool_rwlock); 366 + lockdep_assert_held(&pool_lock); 439 367 440 - /* Update current and new pools if required and possible. */ 441 - if (!depot_update_pools(prealloc)) 368 + /* This should already be checked by public API entry points. */ 369 + if (WARN_ON_ONCE(!size)) 442 370 return NULL; 443 371 444 372 /* Check if we have a stack record to save the stack trace. 
*/ 445 - if (list_empty(&free_stacks)) 446 - return NULL; 447 - 448 - /* Get and unlink the first entry from the freelist. */ 449 - stack = list_first_entry(&free_stacks, struct stack_record, list); 450 - list_del(&stack->list); 373 + stack = depot_pop_free(); 374 + if (!stack) { 375 + /* No usable entries on the freelist - try to refill the freelist. */ 376 + if (!depot_try_init_pool(prealloc)) 377 + return NULL; 378 + stack = depot_pop_free(); 379 + if (WARN_ON(!stack)) 380 + return NULL; 381 + } 451 382 452 383 /* Limit number of saved frames to CONFIG_STACKDEPOT_MAX_FRAMES. */ 453 384 if (size > CONFIG_STACKDEPOT_MAX_FRAMES) ··· 469 394 */ 470 395 kmsan_unpoison_memory(stack, DEPOT_STACK_RECORD_SIZE); 471 396 397 + counters[DEPOT_COUNTER_ALLOCS]++; 398 + counters[DEPOT_COUNTER_INUSE]++; 472 399 return stack; 473 400 } 474 401 475 402 static struct stack_record *depot_fetch_stack(depot_stack_handle_t handle) 476 403 { 404 + const int pools_num_cached = READ_ONCE(pools_num); 477 405 union handle_parts parts = { .handle = handle }; 478 406 void *pool; 479 407 size_t offset = parts.offset << DEPOT_STACK_ALIGN; 480 408 struct stack_record *stack; 481 409 482 - lockdep_assert_held(&pool_rwlock); 410 + lockdep_assert_not_held(&pool_lock); 483 411 484 - if (parts.pool_index > pools_num) { 412 + if (parts.pool_index > pools_num_cached) { 485 413 WARN(1, "pool index %d out of bounds (%d) for stack id %08x\n", 486 - parts.pool_index, pools_num, handle); 414 + parts.pool_index, pools_num_cached, handle); 487 415 return NULL; 488 416 } 489 417 490 418 pool = stack_pools[parts.pool_index]; 491 - if (!pool) 419 + if (WARN_ON(!pool)) 492 420 return NULL; 493 421 494 422 stack = pool + offset; 423 + if (WARN_ON(!refcount_read(&stack->count))) 424 + return NULL; 425 + 495 426 return stack; 496 427 } 497 428 498 429 /* Links stack into the freelist. 
*/ 499 430 static void depot_free_stack(struct stack_record *stack) 500 431 { 501 - lockdep_assert_held_write(&pool_rwlock); 432 + unsigned long flags; 502 433 503 - list_add(&stack->list, &free_stacks); 434 + lockdep_assert_not_held(&pool_lock); 435 + 436 + raw_spin_lock_irqsave(&pool_lock, flags); 437 + printk_deferred_enter(); 438 + 439 + /* 440 + * Remove the entry from the hash list. Concurrent list traversal may 441 + * still observe the entry, but since the refcount is zero, this entry 442 + * will no longer be considered as valid. 443 + */ 444 + list_del_rcu(&stack->hash_list); 445 + 446 + /* 447 + * Due to being used from constrained contexts such as the allocators, 448 + * NMI, or even RCU itself, stack depot cannot rely on primitives that 449 + * would sleep (such as synchronize_rcu()) or recursively call into 450 + * stack depot again (such as call_rcu()). 451 + * 452 + * Instead, get an RCU cookie, so that we can ensure this entry isn't 453 + * moved onto another list until the next grace period, and concurrent 454 + * RCU list traversal remains safe. 455 + */ 456 + stack->rcu_state = get_state_synchronize_rcu(); 457 + 458 + /* 459 + * Add the entry to the freelist tail, so that older entries are 460 + * considered first - their RCU cookie is more likely to no longer be 461 + * associated with the current grace period. 462 + */ 463 + list_add_tail(&stack->free_list, &free_stacks); 464 + 465 + counters[DEPOT_COUNTER_FREELIST_SIZE]++; 466 + counters[DEPOT_COUNTER_FREES]++; 467 + counters[DEPOT_COUNTER_INUSE]--; 468 + 469 + printk_deferred_exit(); 470 + raw_spin_unlock_irqrestore(&pool_lock, flags); 504 471 } 505 472 506 473 /* Calculates the hash for a stack. */ ··· 570 453 571 454 /* Finds a stack in a bucket of the hash table. 
*/ 572 455 static inline struct stack_record *find_stack(struct list_head *bucket, 573 - unsigned long *entries, int size, 574 - u32 hash) 456 + unsigned long *entries, int size, 457 + u32 hash, depot_flags_t flags) 575 458 { 576 - struct list_head *pos; 577 - struct stack_record *found; 459 + struct stack_record *stack, *ret = NULL; 578 460 579 - lockdep_assert_held(&pool_rwlock); 461 + /* 462 + * Stack depot may be used from instrumentation that instruments RCU or 463 + * tracing itself; use variant that does not call into RCU and cannot be 464 + * traced. 465 + * 466 + * Note: Such use cases must take care when using refcounting to evict 467 + * unused entries, because the stack record free-then-reuse code paths 468 + * do call into RCU. 469 + */ 470 + rcu_read_lock_sched_notrace(); 580 471 581 - list_for_each(pos, bucket) { 582 - found = list_entry(pos, struct stack_record, list); 583 - if (found->hash == hash && 584 - found->size == size && 585 - !stackdepot_memcmp(entries, found->entries, size)) 586 - return found; 472 + list_for_each_entry_rcu(stack, bucket, hash_list) { 473 + if (stack->hash != hash || stack->size != size) 474 + continue; 475 + 476 + /* 477 + * This may race with depot_free_stack() accessing the freelist 478 + * management state unioned with @entries. The refcount is zero 479 + * in that case and the below refcount_inc_not_zero() will fail. 480 + */ 481 + if (data_race(stackdepot_memcmp(entries, stack->entries, size))) 482 + continue; 483 + 484 + /* 485 + * Try to increment refcount. If this succeeds, the stack record 486 + * is valid and has not yet been freed. 487 + * 488 + * If STACK_DEPOT_FLAG_GET is not used, it is undefined behavior 489 + * to then call stack_depot_put() later, and we can assume that 490 + * a stack record is never placed back on the freelist. 
491 + */ 492 + if ((flags & STACK_DEPOT_FLAG_GET) && !refcount_inc_not_zero(&stack->count)) 493 + continue; 494 + 495 + ret = stack; 496 + break; 587 497 } 588 - return NULL; 498 + 499 + rcu_read_unlock_sched_notrace(); 500 + 501 + return ret; 589 502 } 590 503 591 504 depot_stack_handle_t stack_depot_save_flags(unsigned long *entries, ··· 629 482 struct page *page = NULL; 630 483 void *prealloc = NULL; 631 484 bool can_alloc = depot_flags & STACK_DEPOT_FLAG_CAN_ALLOC; 632 - bool need_alloc = false; 633 485 unsigned long flags; 634 486 u32 hash; 635 487 ··· 651 505 hash = hash_stack(entries, nr_entries); 652 506 bucket = &stack_table[hash & stack_hash_mask]; 653 507 654 - read_lock_irqsave(&pool_rwlock, flags); 655 - printk_deferred_enter(); 656 - 657 - /* Fast path: look the stack trace up without full locking. */ 658 - found = find_stack(bucket, entries, nr_entries, hash); 659 - if (found) { 660 - if (depot_flags & STACK_DEPOT_FLAG_GET) 661 - refcount_inc(&found->count); 662 - printk_deferred_exit(); 663 - read_unlock_irqrestore(&pool_rwlock, flags); 508 + /* Fast path: look the stack trace up without locking. */ 509 + found = find_stack(bucket, entries, nr_entries, hash, depot_flags); 510 + if (found) 664 511 goto exit; 665 - } 666 - 667 - /* Take note if another stack pool needs to be allocated. */ 668 - if (new_pool_required) 669 - need_alloc = true; 670 - 671 - printk_deferred_exit(); 672 - read_unlock_irqrestore(&pool_rwlock, flags); 673 512 674 513 /* 675 514 * Allocate memory for a new pool if required now: 676 515 * we won't be able to do that under the lock. 677 516 */ 678 - if (unlikely(can_alloc && need_alloc)) { 517 + if (unlikely(can_alloc && READ_ONCE(new_pool_required))) { 679 518 /* 680 519 * Zero out zone modifiers, as we don't have specific zone 681 520 * requirements. 
Keep the flags related to allocation in atomic ··· 674 543 prealloc = page_address(page); 675 544 } 676 545 677 - write_lock_irqsave(&pool_rwlock, flags); 546 + raw_spin_lock_irqsave(&pool_lock, flags); 678 547 printk_deferred_enter(); 679 548 680 - found = find_stack(bucket, entries, nr_entries, hash); 549 + /* Try to find again, to avoid concurrently inserting duplicates. */ 550 + found = find_stack(bucket, entries, nr_entries, hash, depot_flags); 681 551 if (!found) { 682 552 struct stack_record *new = 683 553 depot_alloc_stack(entries, nr_entries, hash, &prealloc); 684 554 685 555 if (new) { 686 - list_add(&new->list, bucket); 556 + /* 557 + * This releases the stack record into the bucket and 558 + * makes it visible to readers in find_stack(). 559 + */ 560 + list_add_rcu(&new->hash_list, bucket); 687 561 found = new; 688 562 } 689 - } else { 690 - if (depot_flags & STACK_DEPOT_FLAG_GET) 691 - refcount_inc(&found->count); 563 + } 564 + 565 + if (prealloc) { 692 566 /* 693 - * Stack depot already contains this stack trace, but let's 694 - * keep the preallocated memory for future. 567 + * Either stack depot already contains this stack trace, or 568 + * depot_alloc_stack() did not consume the preallocated memory. 569 + * Try to keep the preallocated memory for future. 695 570 */ 696 - if (prealloc) 697 - depot_keep_new_pool(&prealloc); 571 + depot_keep_new_pool(&prealloc); 698 572 } 699 573 700 574 printk_deferred_exit(); 701 - write_unlock_irqrestore(&pool_rwlock, flags); 575 + raw_spin_unlock_irqrestore(&pool_lock, flags); 702 576 exit: 703 577 if (prealloc) { 704 578 /* Stack depot didn't use this memory, free it. 
*/ ··· 728 592 unsigned long **entries) 729 593 { 730 594 struct stack_record *stack; 731 - unsigned long flags; 732 595 733 596 *entries = NULL; 734 597 /* ··· 739 604 if (!handle || stack_depot_disabled) 740 605 return 0; 741 606 742 - read_lock_irqsave(&pool_rwlock, flags); 743 - printk_deferred_enter(); 744 - 745 607 stack = depot_fetch_stack(handle); 746 - 747 - printk_deferred_exit(); 748 - read_unlock_irqrestore(&pool_rwlock, flags); 608 + /* 609 + * Should never be NULL, otherwise this is a use-after-put (or just a 610 + * corrupt handle). 611 + */ 612 + if (WARN(!stack, "corrupt handle or use after stack_depot_put()")) 613 + return 0; 749 614 750 615 *entries = stack->entries; 751 616 return stack->size; ··· 755 620 void stack_depot_put(depot_stack_handle_t handle) 756 621 { 757 622 struct stack_record *stack; 758 - unsigned long flags; 759 623 760 624 if (!handle || stack_depot_disabled) 761 625 return; 762 626 763 - write_lock_irqsave(&pool_rwlock, flags); 764 - printk_deferred_enter(); 765 - 766 627 stack = depot_fetch_stack(handle); 767 - if (WARN_ON(!stack)) 768 - goto out; 628 + /* 629 + * Should always be able to find the stack record, otherwise this is an 630 + * unbalanced put attempt (or corrupt handle). 631 + */ 632 + if (WARN(!stack, "corrupt handle or unbalanced stack_depot_put()")) 633 + return; 769 634 770 - if (refcount_dec_and_test(&stack->count)) { 771 - /* Unlink stack from the hash table. */ 772 - list_del(&stack->list); 773 - 774 - /* Free stack. 
*/ 635 + if (refcount_dec_and_test(&stack->count)) 775 636 depot_free_stack(stack); 776 - } 777 - 778 - out: 779 - printk_deferred_exit(); 780 - write_unlock_irqrestore(&pool_rwlock, flags); 781 637 } 782 638 EXPORT_SYMBOL_GPL(stack_depot_put); 783 639 ··· 816 690 return parts.extra; 817 691 } 818 692 EXPORT_SYMBOL(stack_depot_get_extra_bits); 693 + 694 + static int stats_show(struct seq_file *seq, void *v) 695 + { 696 + /* 697 + * data race ok: These are just statistics counters, and approximate 698 + * statistics are ok for debugging. 699 + */ 700 + seq_printf(seq, "pools: %d\n", data_race(pools_num)); 701 + for (int i = 0; i < DEPOT_COUNTER_COUNT; i++) 702 + seq_printf(seq, "%s: %ld\n", counter_names[i], data_race(counters[i])); 703 + 704 + return 0; 705 + } 706 + DEFINE_SHOW_ATTRIBUTE(stats); 707 + 708 + static int depot_debugfs_init(void) 709 + { 710 + struct dentry *dir; 711 + 712 + if (stack_depot_disabled) 713 + return 0; 714 + 715 + dir = debugfs_create_dir("stackdepot", NULL); 716 + debugfs_create_file("stats", 0444, dir, NULL, &stats_fops); 717 + return 0; 718 + } 719 + late_initcall(depot_debugfs_init);
+14 -4
mm/huge_memory.c
··· 37 37 #include <linux/page_owner.h> 38 38 #include <linux/sched/sysctl.h> 39 39 #include <linux/memory-tiers.h> 40 + #include <linux/compat.h> 40 41 41 42 #include <asm/tlb.h> 42 43 #include <asm/pgalloc.h> ··· 810 809 { 811 810 loff_t off_end = off + len; 812 811 loff_t off_align = round_up(off, size); 813 - unsigned long len_pad, ret; 812 + unsigned long len_pad, ret, off_sub; 813 + 814 + if (IS_ENABLED(CONFIG_32BIT) || in_compat_syscall()) 815 + return 0; 814 816 815 817 if (off_end <= off_align || (off_end - off_align) < size) 816 818 return 0; ··· 839 835 if (ret == addr) 840 836 return addr; 841 837 842 - ret += (off - ret) & (size - 1); 838 + off_sub = (off - ret) & (size - 1); 839 + 840 + if (current->mm->get_unmapped_area == arch_get_unmapped_area_topdown && 841 + !off_sub) 842 + return ret + size; 843 + 844 + ret += off_sub; 843 845 return ret; 844 846 } 845 847 ··· 2447 2437 page = pmd_page(old_pmd); 2448 2438 folio = page_folio(page); 2449 2439 if (!folio_test_dirty(folio) && pmd_dirty(old_pmd)) 2450 - folio_set_dirty(folio); 2440 + folio_mark_dirty(folio); 2451 2441 if (!folio_test_referenced(folio) && pmd_young(old_pmd)) 2452 2442 folio_set_referenced(folio); 2453 2443 folio_remove_rmap_pmd(folio, page, vma); ··· 3573 3563 } 3574 3564 3575 3565 if (pmd_dirty(pmdval)) 3576 - folio_set_dirty(folio); 3566 + folio_mark_dirty(folio); 3577 3567 if (pmd_write(pmdval)) 3578 3568 entry = make_writable_migration_entry(page_to_pfn(page)); 3579 3569 else if (anon_exclusive)
+3
mm/memblock.c
··· 2176 2176 start = region->base; 2177 2177 end = start + region->size; 2178 2178 2179 + if (nid == NUMA_NO_NODE || nid >= MAX_NUMNODES) 2180 + nid = early_pfn_to_nid(PFN_DOWN(start)); 2181 + 2179 2182 reserve_bootmem_region(start, end, nid); 2180 2183 } 2181 2184 }
+25 -4
mm/memcontrol.c
··· 2623 2623 } 2624 2624 2625 2625 /* 2626 - * Scheduled by try_charge() to be executed from the userland return path 2627 - * and reclaims memory over the high limit. 2626 + * Reclaims memory over the high limit. Called directly from 2627 + * try_charge() (context permitting), as well as from the userland 2628 + * return path where reclaim is always able to block. 2628 2629 */ 2629 2630 void mem_cgroup_handle_over_high(gfp_t gfp_mask) 2630 2631 { ··· 2644 2643 current->memcg_nr_pages_over_high = 0; 2645 2644 2646 2645 retry_reclaim: 2646 + /* 2647 + * Bail if the task is already exiting. Unlike memory.max, 2648 + * memory.high enforcement isn't as strict, and there is no 2649 + * OOM killer involved, which means the excess could already 2650 + * be much bigger (and still growing) than it could for 2651 + * memory.max; the dying task could get stuck in fruitless 2652 + * reclaim for a long time, which isn't desirable. 2653 + */ 2654 + if (task_is_dying()) 2655 + goto out; 2656 + 2647 2657 /* 2648 2658 * The allocating task should reclaim at least the batch size, but for 2649 2659 * subsequent retries we only want to do what's necessary to prevent oom ··· 2705 2693 } 2706 2694 2707 2695 /* 2696 + * Reclaim didn't manage to push usage below the limit, slow 2697 + * this allocating task down. 2698 + * 2708 2699 * If we exit early, we're guaranteed to die (since 2709 2700 * schedule_timeout_killable sets TASK_KILLABLE). This means we don't 2710 2701 * need to account for any ill-begotten jiffies to pay them off later. ··· 2902 2887 } 2903 2888 } while ((memcg = parent_mem_cgroup(memcg))); 2904 2889 2890 + /* 2891 + * Reclaim is set up above to be called from the userland 2892 + * return path. But also attempt synchronous reclaim to avoid 2893 + * excessive overrun while the task is still inside the 2894 + * kernel. If this is successful, the return path will see it 2895 + * when it rechecks the overage and simply bail out. 
2896 + */ 2905 2897 if (current->memcg_nr_pages_over_high > MEMCG_CHARGE_BATCH && 2906 2898 !(current->flags & PF_MEMALLOC) && 2907 - gfpflags_allow_blocking(gfp_mask)) { 2899 + gfpflags_allow_blocking(gfp_mask)) 2908 2900 mem_cgroup_handle_over_high(gfp_mask); 2909 - } 2910 2901 return 0; 2911 2902 } 2912 2903
+1 -1
mm/memory-failure.c
··· 982 982 int count = page_count(p) - 1; 983 983 984 984 if (extra_pins) 985 - count -= 1; 985 + count -= folio_nr_pages(page_folio(p)); 986 986 987 987 if (count > 0) { 988 988 pr_err("%#lx: %s still referenced by %d users\n",
+1 -1
mm/memory.c
··· 1464 1464 delay_rmap = 0; 1465 1465 if (!folio_test_anon(folio)) { 1466 1466 if (pte_dirty(ptent)) { 1467 - folio_set_dirty(folio); 1467 + folio_mark_dirty(folio); 1468 1468 if (tlb_delay_rmap(tlb)) { 1469 1469 delay_rmap = 1; 1470 1470 force_flush = 1;
+4 -2
mm/mmap.c
··· 1825 1825 /* 1826 1826 * mmap_region() will call shmem_zero_setup() to create a file, 1827 1827 * so use shmem's get_unmapped_area in case it can be huge. 1828 - * do_mmap() will clear pgoff, so match alignment. 1829 1828 */ 1830 - pgoff = 0; 1831 1829 get_area = shmem_get_unmapped_area; 1832 1830 } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) { 1833 1831 /* Ensures that larger anonymous mappings are THP aligned. */ 1834 1832 get_area = thp_get_unmapped_area; 1835 1833 } 1834 + 1835 + /* Always treat pgoff as zero for anonymous memory. */ 1836 + if (!file) 1837 + pgoff = 0; 1836 1838 1837 1839 addr = get_area(file, addr, len, pgoff, flags); 1838 1840 if (IS_ERR_VALUE(addr))
+1 -1
mm/page-writeback.c
··· 1638 1638 */ 1639 1639 dtc->wb_thresh = __wb_calc_thresh(dtc); 1640 1640 dtc->wb_bg_thresh = dtc->thresh ? 1641 - div_u64((u64)dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0; 1641 + div64_u64(dtc->wb_thresh * dtc->bg_thresh, dtc->thresh) : 0; 1642 1642 1643 1643 /* 1644 1644 * In order to avoid the stacked BDI deadlock we need
+2 -2
mm/readahead.c
··· 469 469 470 470 if (!folio) 471 471 return -ENOMEM; 472 - mark = round_up(mark, 1UL << order); 472 + mark = round_down(mark, 1UL << order); 473 473 if (index == mark) 474 474 folio_set_readahead(folio); 475 475 err = filemap_add_folio(ractl->mapping, folio, index, gfp); ··· 575 575 * It's the expected callback index, assume sequential access. 576 576 * Ramp up sizes, and push forward the readahead window. 577 577 */ 578 - expected = round_up(ra->start + ra->size - ra->async_size, 578 + expected = round_down(ra->start + ra->size - ra->async_size, 579 579 1UL << order); 580 580 if (index == expected || index == (ra->start + ra->size)) { 581 581 ra->start += ra->size;
+13 -2
mm/userfaultfd.c
··· 357 357 unsigned long dst_start, 358 358 unsigned long src_start, 359 359 unsigned long len, 360 + atomic_t *mmap_changing, 360 361 uffd_flags_t flags) 361 362 { 362 363 struct mm_struct *dst_mm = dst_vma->vm_mm; ··· 473 472 goto out; 474 473 } 475 474 mmap_read_lock(dst_mm); 475 + /* 476 + * If memory mappings are changing because of non-cooperative 477 + * operation (e.g. mremap) running in parallel, bail out and 478 + * request the user to retry later 479 + */ 480 + if (mmap_changing && atomic_read(mmap_changing)) { 481 + err = -EAGAIN; 482 + break; 483 + } 476 484 477 485 dst_vma = NULL; 478 486 goto retry; ··· 516 506 unsigned long dst_start, 517 507 unsigned long src_start, 518 508 unsigned long len, 509 + atomic_t *mmap_changing, 519 510 uffd_flags_t flags); 520 511 #endif /* CONFIG_HUGETLB_PAGE */ 521 512 ··· 633 622 * If this is a HUGETLB vma, pass off to appropriate routine 634 623 */ 635 624 if (is_vm_hugetlb_page(dst_vma)) 636 - return mfill_atomic_hugetlb(dst_vma, dst_start, 637 - src_start, len, flags); 625 + return mfill_atomic_hugetlb(dst_vma, dst_start, src_start, 626 + len, mmap_changing, flags); 638 627 639 628 if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma)) 640 629 goto out_unlock;
+3
net/batman-adv/multicast.c
··· 2175 2175 cancel_delayed_work_sync(&bat_priv->mcast.work); 2176 2176 2177 2177 batadv_tvlv_container_unregister(bat_priv, BATADV_TVLV_MCAST, 2); 2178 + batadv_tvlv_handler_unregister(bat_priv, BATADV_TVLV_MCAST_TRACKER, 1); 2178 2179 batadv_tvlv_handler_unregister(bat_priv, BATADV_TVLV_MCAST, 2); 2179 2180 2180 2181 /* safely calling outside of worker, as worker was canceled above */ ··· 2199 2198 BATADV_MCAST_WANT_NO_RTR4); 2200 2199 batadv_mcast_want_rtr6_update(bat_priv, orig, 2201 2200 BATADV_MCAST_WANT_NO_RTR6); 2201 + batadv_mcast_have_mc_ptype_update(bat_priv, orig, 2202 + BATADV_MCAST_HAVE_MC_PTYPE_CAPA); 2202 2203 2203 2204 spin_unlock_bh(&orig->mcast_handler_lock); 2204 2205 }
+15 -5
net/bridge/br_multicast.c
··· 1762 1762 } 1763 1763 #endif 1764 1764 1765 + static void br_multicast_query_delay_expired(struct timer_list *t) 1766 + { 1767 + } 1768 + 1765 1769 static void br_multicast_select_own_querier(struct net_bridge_mcast *brmctx, 1766 1770 struct br_ip *ip, 1767 1771 struct sk_buff *skb) ··· 3202 3198 unsigned long max_delay) 3203 3199 { 3204 3200 if (!timer_pending(&query->timer)) 3205 - query->delay_time = jiffies + max_delay; 3201 + mod_timer(&query->delay_timer, jiffies + max_delay); 3206 3202 3207 3203 mod_timer(&query->timer, jiffies + brmctx->multicast_querier_interval); 3208 3204 } ··· 4045 4041 brmctx->multicast_querier_interval = 255 * HZ; 4046 4042 brmctx->multicast_membership_interval = 260 * HZ; 4047 4043 4048 - brmctx->ip4_other_query.delay_time = 0; 4049 4044 brmctx->ip4_querier.port_ifidx = 0; 4050 4045 seqcount_spinlock_init(&brmctx->ip4_querier.seq, &br->multicast_lock); 4051 4046 brmctx->multicast_igmp_version = 2; 4052 4047 #if IS_ENABLED(CONFIG_IPV6) 4053 4048 brmctx->multicast_mld_version = 1; 4054 - brmctx->ip6_other_query.delay_time = 0; 4055 4049 brmctx->ip6_querier.port_ifidx = 0; 4056 4050 seqcount_spinlock_init(&brmctx->ip6_querier.seq, &br->multicast_lock); 4057 4051 #endif ··· 4058 4056 br_ip4_multicast_local_router_expired, 0); 4059 4057 timer_setup(&brmctx->ip4_other_query.timer, 4060 4058 br_ip4_multicast_querier_expired, 0); 4059 + timer_setup(&brmctx->ip4_other_query.delay_timer, 4060 + br_multicast_query_delay_expired, 0); 4061 4061 timer_setup(&brmctx->ip4_own_query.timer, 4062 4062 br_ip4_multicast_query_expired, 0); 4063 4063 #if IS_ENABLED(CONFIG_IPV6) ··· 4067 4063 br_ip6_multicast_local_router_expired, 0); 4068 4064 timer_setup(&brmctx->ip6_other_query.timer, 4069 4065 br_ip6_multicast_querier_expired, 0); 4066 + timer_setup(&brmctx->ip6_other_query.delay_timer, 4067 + br_multicast_query_delay_expired, 0); 4070 4068 timer_setup(&brmctx->ip6_own_query.timer, 4071 4069 br_ip6_multicast_query_expired, 0); 4072 4070 #endif ··· 
4203 4197 { 4204 4198 del_timer_sync(&brmctx->ip4_mc_router_timer); 4205 4199 del_timer_sync(&brmctx->ip4_other_query.timer); 4200 + del_timer_sync(&brmctx->ip4_other_query.delay_timer); 4206 4201 del_timer_sync(&brmctx->ip4_own_query.timer); 4207 4202 #if IS_ENABLED(CONFIG_IPV6) 4208 4203 del_timer_sync(&brmctx->ip6_mc_router_timer); 4209 4204 del_timer_sync(&brmctx->ip6_other_query.timer); 4205 + del_timer_sync(&brmctx->ip6_other_query.delay_timer); 4210 4206 del_timer_sync(&brmctx->ip6_own_query.timer); 4211 4207 #endif 4212 4208 } ··· 4651 4643 max_delay = brmctx->multicast_query_response_interval; 4652 4644 4653 4645 if (!timer_pending(&brmctx->ip4_other_query.timer)) 4654 - brmctx->ip4_other_query.delay_time = jiffies + max_delay; 4646 + mod_timer(&brmctx->ip4_other_query.delay_timer, 4647 + jiffies + max_delay); 4655 4648 4656 4649 br_multicast_start_querier(brmctx, &brmctx->ip4_own_query); 4657 4650 4658 4651 #if IS_ENABLED(CONFIG_IPV6) 4659 4652 if (!timer_pending(&brmctx->ip6_other_query.timer)) 4660 - brmctx->ip6_other_query.delay_time = jiffies + max_delay; 4653 + mod_timer(&brmctx->ip6_other_query.delay_timer, 4654 + jiffies + max_delay); 4661 4655 4662 4656 br_multicast_start_querier(brmctx, &brmctx->ip6_own_query); 4663 4657 #endif
+2 -2
net/bridge/br_private.h
··· 78 78 /* other querier */ 79 79 struct bridge_mcast_other_query { 80 80 struct timer_list timer; 81 - unsigned long delay_time; 81 + struct timer_list delay_timer; 82 82 }; 83 83 84 84 /* selected querier */ ··· 1159 1159 own_querier_enabled = false; 1160 1160 } 1161 1161 1162 - return time_is_before_jiffies(querier->delay_time) && 1162 + return !timer_pending(&querier->delay_timer) && 1163 1163 (own_querier_enabled || timer_pending(&querier->timer)); 1164 1164 } 1165 1165
+1 -1
net/devlink/port.c
··· 674 674 return -EOPNOTSUPP; 675 675 } 676 676 if (tb[DEVLINK_PORT_FN_ATTR_STATE] && !ops->port_fn_state_set) { 677 - NL_SET_ERR_MSG_ATTR(extack, tb[DEVLINK_PORT_FUNCTION_ATTR_HW_ADDR], 677 + NL_SET_ERR_MSG_ATTR(extack, tb[DEVLINK_PORT_FN_ATTR_STATE], 678 678 "Function does not support state setting"); 679 679 return -EOPNOTSUPP; 680 680 }
+2 -2
net/hsr/hsr_device.c
··· 308 308 309 309 skb = hsr_init_skb(master); 310 310 if (!skb) { 311 - WARN_ONCE(1, "HSR: Could not send supervision frame\n"); 311 + netdev_warn_once(master->dev, "HSR: Could not send supervision frame\n"); 312 312 return; 313 313 } 314 314 ··· 355 355 356 356 skb = hsr_init_skb(master); 357 357 if (!skb) { 358 - WARN_ONCE(1, "PRP: Could not send supervision frame\n"); 358 + netdev_warn_once(master->dev, "PRP: Could not send supervision frame\n"); 359 359 return; 360 360 } 361 361
+6 -6
net/ipv4/ip_output.c
··· 1287 1287 if (unlikely(!rt)) 1288 1288 return -EFAULT; 1289 1289 1290 + cork->fragsize = ip_sk_use_pmtu(sk) ? 1291 + dst_mtu(&rt->dst) : READ_ONCE(rt->dst.dev->mtu); 1292 + 1293 + if (!inetdev_valid_mtu(cork->fragsize)) 1294 + return -ENETUNREACH; 1295 + 1290 1296 /* 1291 1297 * setup for corking. 1292 1298 */ ··· 1308 1302 cork->flags |= IPCORK_OPT; 1309 1303 cork->addr = ipc->addr; 1310 1304 } 1311 - 1312 - cork->fragsize = ip_sk_use_pmtu(sk) ? 1313 - dst_mtu(&rt->dst) : READ_ONCE(rt->dst.dev->mtu); 1314 - 1315 - if (!inetdev_valid_mtu(cork->fragsize)) 1316 - return -ENETUNREACH; 1317 1305 1318 1306 cork->gso_size = ipc->gso_size; 1319 1307
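The ip_output.c hunk moves the fragsize computation and the `inetdev_valid_mtu()` check ahead of the cork setup, so the function can return `-ENETUNREACH` before any cork state is committed. A minimal userspace sketch of that validate-before-mutate ordering (the `struct cork`, `setup_cork()`, and the 68-byte minimum are illustrative stand-ins, not the kernel's types):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel's cork setup; only the ordering
 * of the check relative to the mutation is the point. */
struct cork {
    unsigned int fragsize;
    bool opt_set;
};

static bool mtu_valid(unsigned int mtu)
{
    return mtu >= 68; /* roughly what inetdev_valid_mtu() enforces */
}

/* Validate the derived MTU *before* touching the cork, mirroring the fix:
 * on failure the caller sees an untouched structure and owes no cleanup. */
static int setup_cork(struct cork *c, unsigned int mtu)
{
    if (!mtu_valid(mtu))
        return -1;          /* -ENETUNREACH in the kernel */

    c->fragsize = mtu;      /* only now start committing state */
    c->opt_set = true;
    return 0;
}
```

With the old ordering, a failed check left a half-initialized cork behind; hoisting it makes the error path trivially safe.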
+4 -2
net/ipv4/ip_sockglue.c
··· 1363 1363 * ipv4_pktinfo_prepare - transfer some info from rtable to skb 1364 1364 * @sk: socket 1365 1365 * @skb: buffer 1366 + * @drop_dst: if true, drops skb dst 1366 1367 * 1367 1368 * To support IP_CMSG_PKTINFO option, we store rt_iif and specific 1368 1369 * destination in skb->cb[] before dst drop. 1369 1370 * This way, receiver doesn't make cache line misses to read rtable. 1370 1371 */ 1371 - void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb) 1372 + void ipv4_pktinfo_prepare(const struct sock *sk, struct sk_buff *skb, bool drop_dst) 1372 1373 { 1373 1374 struct in_pktinfo *pktinfo = PKTINFO_SKB_CB(skb); 1374 1375 bool prepare = inet_test_bit(PKTINFO, sk) || ··· 1398 1397 pktinfo->ipi_ifindex = 0; 1399 1398 pktinfo->ipi_spec_dst.s_addr = 0; 1400 1399 } 1401 - skb_dst_drop(skb); 1400 + if (drop_dst) 1401 + skb_dst_drop(skb); 1402 1402 } 1403 1403 1404 1404 int ip_setsockopt(struct sock *sk, int level, int optname, sockptr_t optval,
+1 -1
net/ipv4/ipmr.c
··· 1073 1073 msg = (struct igmpmsg *)skb_network_header(skb); 1074 1074 msg->im_vif = vifi; 1075 1075 msg->im_vif_hi = vifi >> 8; 1076 - ipv4_pktinfo_prepare(mroute_sk, pkt); 1076 + ipv4_pktinfo_prepare(mroute_sk, pkt, false); 1077 1077 memcpy(skb->cb, pkt->cb, sizeof(skb->cb)); 1078 1078 /* Add our header */ 1079 1079 igmp = skb_put(skb, sizeof(struct igmphdr));
+1 -1
net/ipv4/raw.c
··· 292 292 293 293 /* Charge it to the socket. */ 294 294 295 - ipv4_pktinfo_prepare(sk, skb); 295 + ipv4_pktinfo_prepare(sk, skb, true); 296 296 if (sock_queue_rcv_skb_reason(sk, skb, &reason) < 0) { 297 297 kfree_skb_reason(skb, reason); 298 298 return NET_RX_DROP;
+11 -1
net/ipv4/tcp.c
··· 1786 1786 1787 1787 static bool can_map_frag(const skb_frag_t *frag) 1788 1788 { 1789 - return skb_frag_size(frag) == PAGE_SIZE && !skb_frag_off(frag); 1789 + struct page *page; 1790 + 1791 + if (skb_frag_size(frag) != PAGE_SIZE || skb_frag_off(frag)) 1792 + return false; 1793 + 1794 + page = skb_frag_page(frag); 1795 + 1796 + if (PageCompound(page) || page->mapping) 1797 + return false; 1798 + 1799 + return true; 1790 1800 } 1791 1801 1792 1802 static int find_next_mappable_frag(const skb_frag_t *frag,
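The tcp.c hunk grows `can_map_frag()` from one boolean expression into guard clauses so the two new page checks stay readable. A compilable sketch of the same shape (the `struct frag` fields model `skb_frag_size()`, `skb_frag_off()`, `PageCompound()`, and `page->mapping`, but are not the kernel's layout):

```c
#include <stdbool.h>
#include <stddef.h>

#define FRAG_PAGE_SIZE 4096   /* illustrative, not the kernel PAGE_SIZE */

struct frag {
    unsigned int size;
    unsigned int off;
    bool compound;      /* models PageCompound(page) */
    void *mapping;      /* models page->mapping      */
};

/* Guard-clause style: each disqualifying condition gets its own early
 * return, so adding checks does not produce one long boolean chain. */
static bool can_map_frag(const struct frag *f)
{
    if (f->size != FRAG_PAGE_SIZE || f->off)
        return false;

    if (f->compound || f->mapping)
        return false;

    return true;
}
```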
+1 -1
net/ipv4/udp.c
··· 2169 2169 2170 2170 udp_csum_pull_header(skb); 2171 2171 2172 - ipv4_pktinfo_prepare(sk, skb); 2172 + ipv4_pktinfo_prepare(sk, skb, true); 2173 2173 return __udp_queue_rcv_skb(sk, skb); 2174 2174 2175 2175 csum_error:
+14 -7
net/ipv6/addrconf_core.c
··· 220 220 EXPORT_SYMBOL_GPL(ipv6_stub); 221 221 222 222 /* IPv6 Wildcard Address and Loopback Address defined by RFC2553 */ 223 - const struct in6_addr in6addr_loopback = IN6ADDR_LOOPBACK_INIT; 223 + const struct in6_addr in6addr_loopback __aligned(BITS_PER_LONG/8) 224 + = IN6ADDR_LOOPBACK_INIT; 224 225 EXPORT_SYMBOL(in6addr_loopback); 225 - const struct in6_addr in6addr_any = IN6ADDR_ANY_INIT; 226 + const struct in6_addr in6addr_any __aligned(BITS_PER_LONG/8) 227 + = IN6ADDR_ANY_INIT; 226 228 EXPORT_SYMBOL(in6addr_any); 227 - const struct in6_addr in6addr_linklocal_allnodes = IN6ADDR_LINKLOCAL_ALLNODES_INIT; 229 + const struct in6_addr in6addr_linklocal_allnodes __aligned(BITS_PER_LONG/8) 230 + = IN6ADDR_LINKLOCAL_ALLNODES_INIT; 228 231 EXPORT_SYMBOL(in6addr_linklocal_allnodes); 229 - const struct in6_addr in6addr_linklocal_allrouters = IN6ADDR_LINKLOCAL_ALLROUTERS_INIT; 232 + const struct in6_addr in6addr_linklocal_allrouters __aligned(BITS_PER_LONG/8) 233 + = IN6ADDR_LINKLOCAL_ALLROUTERS_INIT; 230 234 EXPORT_SYMBOL(in6addr_linklocal_allrouters); 231 - const struct in6_addr in6addr_interfacelocal_allnodes = IN6ADDR_INTERFACELOCAL_ALLNODES_INIT; 235 + const struct in6_addr in6addr_interfacelocal_allnodes __aligned(BITS_PER_LONG/8) 236 + = IN6ADDR_INTERFACELOCAL_ALLNODES_INIT; 232 237 EXPORT_SYMBOL(in6addr_interfacelocal_allnodes); 233 - const struct in6_addr in6addr_interfacelocal_allrouters = IN6ADDR_INTERFACELOCAL_ALLROUTERS_INIT; 238 + const struct in6_addr in6addr_interfacelocal_allrouters __aligned(BITS_PER_LONG/8) 239 + = IN6ADDR_INTERFACELOCAL_ALLROUTERS_INIT; 234 240 EXPORT_SYMBOL(in6addr_interfacelocal_allrouters); 235 - const struct in6_addr in6addr_sitelocal_allrouters = IN6ADDR_SITELOCAL_ALLROUTERS_INIT; 241 + const struct in6_addr in6addr_sitelocal_allrouters __aligned(BITS_PER_LONG/8) 242 + = IN6ADDR_SITELOCAL_ALLROUTERS_INIT; 236 243 EXPORT_SYMBOL(in6addr_sitelocal_allrouters); 237 244 238 245 static void snmp6_free_dev(struct inet6_dev *idev)
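The addrconf_core.c change forces the well-known IPv6 address constants to native word alignment so code that reads them a `long` at a time cannot fault or take an unaligned-access penalty on strict-alignment targets. The same effect in plain GCC C, with `struct in6_addr_like` standing in for `struct in6_addr`:

```c
#include <stdint.h>

struct in6_addr_like {
    unsigned char s6_addr[16];
};

/* A struct containing only a char array needs just byte alignment; the
 * attribute guarantees word alignment, matching the kernel's
 * __aligned(BITS_PER_LONG/8) annotation on the in6addr_* constants. */
static const struct in6_addr_like any_addr
    __attribute__((aligned(sizeof(long)))) = { { 0 } };

static int addr_is_word_aligned(const struct in6_addr_like *a)
{
    return (uintptr_t)a % sizeof(long) == 0;
}
```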
+18 -3
net/ipv6/ip6_tunnel.c
··· 796 796 struct sk_buff *skb), 797 797 bool log_ecn_err) 798 798 { 799 - const struct ipv6hdr *ipv6h = ipv6_hdr(skb); 800 - int err; 799 + const struct ipv6hdr *ipv6h; 800 + int nh, err; 801 801 802 802 if ((!(tpi->flags & TUNNEL_CSUM) && 803 803 (tunnel->parms.i_flags & TUNNEL_CSUM)) || ··· 829 829 goto drop; 830 830 } 831 831 832 - ipv6h = ipv6_hdr(skb); 833 832 skb->protocol = eth_type_trans(skb, tunnel->dev); 834 833 skb_postpull_rcsum(skb, eth_hdr(skb), ETH_HLEN); 835 834 } else { ··· 836 837 skb_reset_mac_header(skb); 837 838 } 838 839 840 + /* Save offset of outer header relative to skb->head, 841 + * because we are going to reset the network header to the inner header 842 + * and might change skb->head. 843 + */ 844 + nh = skb_network_header(skb) - skb->head; 845 + 839 846 skb_reset_network_header(skb); 847 + 848 + if (!pskb_inet_may_pull(skb)) { 849 + DEV_STATS_INC(tunnel->dev, rx_length_errors); 850 + DEV_STATS_INC(tunnel->dev, rx_errors); 851 + goto drop; 852 + } 853 + 854 + /* Get the outer header. */ 855 + ipv6h = (struct ipv6hdr *)(skb->head + nh); 856 + 840 857 memset(skb->cb, 0, sizeof(struct inet6_skb_parm)); 841 858 842 859 __skb_tunnel_rx(skb, tunnel->dev, tunnel->net);
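The ip6_tunnel.c fix is a classic pointer-invalidation repair: `pskb_inet_may_pull()` may reallocate the skb head, so the patch saves the outer header as an offset from `skb->head` and recomputes the pointer afterwards. Userspace `realloc()` has the exact same hazard; a sketch (function name and signature are invented for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Grow a buffer while keeping a pointer into it valid: save the pointer
 * as an offset before realloc() (which may move the allocation), then
 * recompute it from the new base afterwards. */
static char *grow_and_keep_header(char *buf, size_t oldsz, size_t newsz,
                                  char **hdr /* in: old ptr, out: new */)
{
    size_t off = (size_t)(*hdr - buf);  /* save offset, not the raw pointer */
    char *nbuf = realloc(buf, newsz);

    if (!nbuf)
        return NULL;
    memset(nbuf + oldsz, 0, newsz - oldsz);
    *hdr = nbuf + off;                  /* recompute after possible move */
    return nbuf;
}
```

Dereferencing the pre-realloc pointer after a move is undefined behavior in userspace and a use-after-free in the kernel; stashing the offset is the standard cure.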
+2
net/llc/af_llc.c
··· 226 226 } 227 227 netdev_put(llc->dev, &llc->dev_tracker); 228 228 sock_put(sk); 229 + sock_orphan(sk); 230 + sock->sk = NULL; 229 231 llc_sk_free(sk); 230 232 out: 231 233 return 0;
-3
net/mptcp/protocol.c
··· 2314 2314 if (__mptcp_check_fallback(msk)) 2315 2315 return false; 2316 2316 2317 - if (tcp_rtx_and_write_queues_empty(sk)) 2318 - return false; 2319 - 2320 2317 /* the closing socket has some data untransmitted and/or unacked: 2321 2318 * some data in the mptcp rtx queue has not really xmitted yet. 2322 2319 * keep it simple and re-inject the whole mptcp level rtx queue
+11 -3
net/netfilter/ipset/ip_set_bitmap_gen.h
··· 30 30 #define mtype_del IPSET_TOKEN(MTYPE, _del) 31 31 #define mtype_list IPSET_TOKEN(MTYPE, _list) 32 32 #define mtype_gc IPSET_TOKEN(MTYPE, _gc) 33 + #define mtype_cancel_gc IPSET_TOKEN(MTYPE, _cancel_gc) 33 34 #define mtype MTYPE 34 35 35 36 #define get_ext(set, map, id) ((map)->extensions + ((set)->dsize * (id))) ··· 59 58 mtype_destroy(struct ip_set *set) 60 59 { 61 60 struct mtype *map = set->data; 62 - 63 - if (SET_WITH_TIMEOUT(set)) 64 - del_timer_sync(&map->gc); 65 61 66 62 if (set->dsize && set->extensions & IPSET_EXT_DESTROY) 67 63 mtype_ext_cleanup(set); ··· 288 290 add_timer(&map->gc); 289 291 } 290 292 293 + static void 294 + mtype_cancel_gc(struct ip_set *set) 295 + { 296 + struct mtype *map = set->data; 297 + 298 + if (SET_WITH_TIMEOUT(set)) 299 + del_timer_sync(&map->gc); 300 + } 301 + 291 302 static const struct ip_set_type_variant mtype = { 292 303 .kadt = mtype_kadt, 293 304 .uadt = mtype_uadt, ··· 310 303 .head = mtype_head, 311 304 .list = mtype_list, 312 305 .same_set = mtype_same_set, 306 + .cancel_gc = mtype_cancel_gc, 313 307 }; 314 308 315 309 #endif /* __IP_SET_BITMAP_IP_GEN_H */
+28 -9
net/netfilter/ipset/ip_set_core.c
··· 1182 1182 kfree(set); 1183 1183 } 1184 1184 1185 + static void 1186 + ip_set_destroy_set_rcu(struct rcu_head *head) 1187 + { 1188 + struct ip_set *set = container_of(head, struct ip_set, rcu); 1189 + 1190 + ip_set_destroy_set(set); 1191 + } 1192 + 1185 1193 static int ip_set_destroy(struct sk_buff *skb, const struct nfnl_info *info, 1186 1194 const struct nlattr * const attr[]) 1187 1195 { ··· 1201 1193 if (unlikely(protocol_min_failed(attr))) 1202 1194 return -IPSET_ERR_PROTOCOL; 1203 1195 1204 - /* Must wait for flush to be really finished in list:set */ 1205 - rcu_barrier(); 1206 1196 1207 1197 /* Commands are serialized and references are 1208 1198 * protected by the ip_set_ref_lock. ··· 1212 1206 * counter, so if it's already zero, we can proceed 1213 1207 * without holding the lock. 1214 1208 */ 1215 - read_lock_bh(&ip_set_ref_lock); 1216 1209 if (!attr[IPSET_ATTR_SETNAME]) { 1210 + /* Must wait for flush to be really finished in list:set */ 1211 + rcu_barrier(); 1212 + read_lock_bh(&ip_set_ref_lock); 1217 1213 for (i = 0; i < inst->ip_set_max; i++) { 1218 1214 s = ip_set(inst, i); 1219 1215 if (s && (s->ref || s->ref_netlink)) { ··· 1229 1221 s = ip_set(inst, i); 1230 1222 if (s) { 1231 1223 ip_set(inst, i) = NULL; 1224 + /* Must cancel garbage collectors */ 1225 + s->variant->cancel_gc(s); 1232 1226 ip_set_destroy_set(s); 1233 1227 } 1234 1228 } ··· 1238 1228 inst->is_destroyed = false; 1239 1229 } else { 1240 1230 u32 flags = flag_exist(info->nlh); 1231 + u16 features = 0; 1232 + 1233 + read_lock_bh(&ip_set_ref_lock); 1241 1234 s = find_set_and_id(inst, nla_data(attr[IPSET_ATTR_SETNAME]), 1242 1235 &i); 1243 1236 if (!s) { ··· 1251 1238 ret = -IPSET_ERR_BUSY; 1252 1239 goto out; 1253 1240 } 1241 + features = s->type->features; 1254 1242 ip_set(inst, i) = NULL; 1255 1243 read_unlock_bh(&ip_set_ref_lock); 1256 - 1257 - ip_set_destroy_set(s); 1244 + if (features & IPSET_TYPE_NAME) { 1245 + /* Must wait for flush to be really finished */ 1246 + 
rcu_barrier(); 1247 + } 1248 + /* Must cancel garbage collectors */ 1249 + s->variant->cancel_gc(s); 1250 + call_rcu(&s->rcu, ip_set_destroy_set_rcu); 1258 1251 } 1259 1252 return 0; 1260 1253 out: ··· 1412 1393 ip_set(inst, from_id) = to; 1413 1394 ip_set(inst, to_id) = from; 1414 1395 write_unlock_bh(&ip_set_ref_lock); 1415 - 1416 - /* Make sure all readers of the old set pointers are completed. */ 1417 - synchronize_rcu(); 1418 1396 1419 1397 return 0; 1420 1398 } ··· 2425 2409 { 2426 2410 nf_unregister_sockopt(&so_set); 2427 2411 nfnetlink_subsys_unregister(&ip_set_netlink_subsys); 2428 - 2429 2412 unregister_pernet_subsys(&ip_set_net_ops); 2413 + 2414 + /* Wait for call_rcu() in destroy */ 2415 + rcu_barrier(); 2416 + 2430 2417 pr_debug("these are the famous last words\n"); 2431 2418 } 2432 2419
+12 -3
net/netfilter/ipset/ip_set_hash_gen.h
··· 222 222 #undef mtype_gc_do 223 223 #undef mtype_gc 224 224 #undef mtype_gc_init 225 + #undef mtype_cancel_gc 225 226 #undef mtype_variant 226 227 #undef mtype_data_match 227 228 ··· 267 266 #define mtype_gc_do IPSET_TOKEN(MTYPE, _gc_do) 268 267 #define mtype_gc IPSET_TOKEN(MTYPE, _gc) 269 268 #define mtype_gc_init IPSET_TOKEN(MTYPE, _gc_init) 269 + #define mtype_cancel_gc IPSET_TOKEN(MTYPE, _cancel_gc) 270 270 #define mtype_variant IPSET_TOKEN(MTYPE, _variant) 271 271 #define mtype_data_match IPSET_TOKEN(MTYPE, _data_match) 272 272 ··· 452 450 struct htype *h = set->data; 453 451 struct list_head *l, *lt; 454 452 455 - if (SET_WITH_TIMEOUT(set)) 456 - cancel_delayed_work_sync(&h->gc.dwork); 457 - 458 453 mtype_ahash_destroy(set, ipset_dereference_nfnl(h->table), true); 459 454 list_for_each_safe(l, lt, &h->ad) { 460 455 list_del(l); ··· 596 597 { 597 598 INIT_DEFERRABLE_WORK(&gc->dwork, mtype_gc); 598 599 queue_delayed_work(system_power_efficient_wq, &gc->dwork, HZ); 600 + } 601 + 602 + static void 603 + mtype_cancel_gc(struct ip_set *set) 604 + { 605 + struct htype *h = set->data; 606 + 607 + if (SET_WITH_TIMEOUT(set)) 608 + cancel_delayed_work_sync(&h->gc.dwork); 599 609 } 600 610 601 611 static int ··· 1449 1441 .uref = mtype_uref, 1450 1442 .resize = mtype_resize, 1451 1443 .same_set = mtype_same_set, 1444 + .cancel_gc = mtype_cancel_gc, 1452 1445 .region_lock = true, 1453 1446 }; 1454 1447
+10 -3
net/netfilter/ipset/ip_set_list_set.c
··· 426 426 struct list_set *map = set->data; 427 427 struct set_elem *e, *n; 428 428 429 - if (SET_WITH_TIMEOUT(set)) 430 - timer_shutdown_sync(&map->gc); 431 - 432 429 list_for_each_entry_safe(e, n, &map->members, list) { 433 430 list_del(&e->list); 434 431 ip_set_put_byindex(map->net, e->id); ··· 542 545 a->extensions == b->extensions; 543 546 } 544 547 548 + static void 549 + list_set_cancel_gc(struct ip_set *set) 550 + { 551 + struct list_set *map = set->data; 552 + 553 + if (SET_WITH_TIMEOUT(set)) 554 + timer_shutdown_sync(&map->gc); 555 + } 556 + 545 557 static const struct ip_set_type_variant set_variant = { 546 558 .kadt = list_set_kadt, 547 559 .uadt = list_set_uadt, ··· 564 558 .head = list_set_head, 565 559 .list = list_set_list, 566 560 .same_set = list_set_same_set, 561 + .cancel_gc = list_set_cancel_gc, 567 562 }; 568 563 569 564 static void
+1 -1
net/netfilter/nf_conntrack_proto_sctp.c
··· 283 283 pr_debug("Setting vtag %x for secondary conntrack\n", 284 284 sh->vtag); 285 285 ct->proto.sctp.vtag[IP_CT_DIR_ORIGINAL] = sh->vtag; 286 - } else { 286 + } else if (sch->type == SCTP_CID_SHUTDOWN_ACK) { 287 287 /* If it is a shutdown ack OOTB packet, we expect a return 288 288 shutdown complete, otherwise an ABORT Sec 8.4 (5) and (8) */ 289 289 pr_debug("Setting vtag %x for new conn OOTB\n",
+6 -4
net/netfilter/nf_conntrack_proto_tcp.c
··· 457 457 const struct sk_buff *skb, 458 458 unsigned int dataoff, 459 459 const struct tcphdr *tcph, 460 - u32 end, u32 win) 460 + u32 end, u32 win, 461 + enum ip_conntrack_dir dir) 461 462 { 462 463 /* SYN-ACK in reply to a SYN 463 464 * or SYN from reply direction in simultaneous open. ··· 472 471 * Both sides must send the Window Scale option 473 472 * to enable window scaling in either direction. 474 473 */ 475 - if (!(sender->flags & IP_CT_TCP_FLAG_WINDOW_SCALE && 474 + if (dir == IP_CT_DIR_REPLY && 475 + !(sender->flags & IP_CT_TCP_FLAG_WINDOW_SCALE && 476 476 receiver->flags & IP_CT_TCP_FLAG_WINDOW_SCALE)) { 477 477 sender->td_scale = 0; 478 478 receiver->td_scale = 0; ··· 544 542 if (tcph->syn) { 545 543 tcp_init_sender(sender, receiver, 546 544 skb, dataoff, tcph, 547 - end, win); 545 + end, win, dir); 548 546 if (!tcph->ack) 549 547 /* Simultaneous open */ 550 548 return NFCT_TCP_ACCEPT; ··· 587 585 */ 588 586 tcp_init_sender(sender, receiver, 589 587 skb, dataoff, tcph, 590 - end, win); 588 + end, win, dir); 591 589 592 590 if (dir == IP_CT_DIR_REPLY && !tcph->ack) 593 591 return NFCT_TCP_ACCEPT;
+4 -3
net/netfilter/nf_log.c
··· 193 193 return; 194 194 } 195 195 196 - BUG_ON(loggers[pf][type] == NULL); 197 - 198 196 rcu_read_lock(); 199 197 logger = rcu_dereference(loggers[pf][type]); 200 - module_put(logger->me); 198 + if (!logger) 199 + WARN_ON_ONCE(1); 200 + else 201 + module_put(logger->me); 201 202 rcu_read_unlock(); 202 203 } 203 204 EXPORT_SYMBOL_GPL(nf_logger_put);
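The nf_log.c hunk downgrades a `BUG_ON()` to a warn-and-continue NULL check: an unbalanced `nf_logger_put()` now trips `WARN_ON_ONCE()` instead of halting the machine. A userspace analog of that defensive shape (the `struct logger` and `logger_put()` names are illustrative):

```c
#include <stdbool.h>
#include <stdio.h>

struct logger { int refs; };

/* Instead of crashing on a NULL logger (the old BUG_ON), warn and carry
 * on, so a bogus double-put cannot take the whole process down. */
static bool logger_put(struct logger *l)
{
    if (!l) {
        fprintf(stderr, "warning: logger_put on NULL logger\n");
        return false;       /* survived the bad call */
    }
    l->refs--;
    return true;
}
```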
+9 -5
net/netfilter/nf_tables_api.c
··· 7558 7558 return -1; 7559 7559 } 7560 7560 7561 - static const struct nft_object_type *__nft_obj_type_get(u32 objtype) 7561 + static const struct nft_object_type *__nft_obj_type_get(u32 objtype, u8 family) 7562 7562 { 7563 7563 const struct nft_object_type *type; 7564 7564 7565 7565 list_for_each_entry(type, &nf_tables_objects, list) { 7566 + if (type->family != NFPROTO_UNSPEC && 7567 + type->family != family) 7568 + continue; 7569 + 7566 7570 if (objtype == type->type) 7567 7571 return type; 7568 7572 } ··· 7574 7570 } 7575 7571 7576 7572 static const struct nft_object_type * 7577 - nft_obj_type_get(struct net *net, u32 objtype) 7573 + nft_obj_type_get(struct net *net, u32 objtype, u8 family) 7578 7574 { 7579 7575 const struct nft_object_type *type; 7580 7576 7581 - type = __nft_obj_type_get(objtype); 7577 + type = __nft_obj_type_get(objtype, family); 7582 7578 if (type != NULL && try_module_get(type->owner)) 7583 7579 return type; 7584 7580 ··· 7671 7667 if (info->nlh->nlmsg_flags & NLM_F_REPLACE) 7672 7668 return -EOPNOTSUPP; 7673 7669 7674 - type = __nft_obj_type_get(objtype); 7670 + type = __nft_obj_type_get(objtype, family); 7675 7671 if (WARN_ON_ONCE(!type)) 7676 7672 return -ENOENT; 7677 7673 ··· 7685 7681 if (!nft_use_inc(&table->use)) 7686 7682 return -EMFILE; 7687 7683 7688 - type = nft_obj_type_get(net, objtype); 7684 + type = nft_obj_type_get(net, objtype, family); 7689 7685 if (IS_ERR(type)) { 7690 7686 err = PTR_ERR(type); 7691 7687 goto err_type;
+24
net/netfilter/nft_ct.c
··· 1250 1250 if (tb[NFTA_CT_EXPECT_L3PROTO]) 1251 1251 priv->l3num = ntohs(nla_get_be16(tb[NFTA_CT_EXPECT_L3PROTO])); 1252 1252 1253 + switch (priv->l3num) { 1254 + case NFPROTO_IPV4: 1255 + case NFPROTO_IPV6: 1256 + if (priv->l3num != ctx->family) 1257 + return -EINVAL; 1258 + 1259 + fallthrough; 1260 + case NFPROTO_INET: 1261 + break; 1262 + default: 1263 + return -EOPNOTSUPP; 1264 + } 1265 + 1253 1266 priv->l4proto = nla_get_u8(tb[NFTA_CT_EXPECT_L4PROTO]); 1267 + switch (priv->l4proto) { 1268 + case IPPROTO_TCP: 1269 + case IPPROTO_UDP: 1270 + case IPPROTO_UDPLITE: 1271 + case IPPROTO_DCCP: 1272 + case IPPROTO_SCTP: 1273 + break; 1274 + default: 1275 + return -EOPNOTSUPP; 1276 + } 1277 + 1254 1278 priv->dport = nla_get_be16(tb[NFTA_CT_EXPECT_DPORT]); 1255 1279 priv->timeout = nla_get_u32(tb[NFTA_CT_EXPECT_TIMEOUT]); 1256 1280 priv->size = nla_get_u8(tb[NFTA_CT_EXPECT_SIZE]);
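The nft_ct.c hunk validates the expectation's address family and transport protocol with an allow-list: IPv4/IPv6 must match the table family and then fall through into the shared accept case, everything else is rejected. A compilable sketch of that switch shape, trimmed to two transport protocols (the constants are illustrative, not the kernel's `NFPROTO_*`/`IPPROTO_*` values):

```c
/* Illustrative family/protocol constants. */
enum { FAM_INET = 1, FAM_IPV4 = 2, FAM_IPV6 = 10 };
enum { PROTO_ICMP = 1, PROTO_TCP = 6, PROTO_UDP = 17 };

/* Mirrors the nft_ct_expect validation shape: an address-family switch
 * where the IPv4/IPv6 cases check consistency with the table family and
 * then fall through into the "accepted" case, followed by an allow-list
 * switch on the transport protocol. */
static int validate_expect(int l3num, int table_family, int l4proto)
{
    switch (l3num) {
    case FAM_IPV4:
    case FAM_IPV6:
        if (l3num != table_family)
            return -1;          /* -EINVAL */
        /* fall through */
    case FAM_INET:
        break;
    default:
        return -2;              /* -EOPNOTSUPP */
    }

    switch (l4proto) {
    case PROTO_TCP:
    case PROTO_UDP:
        return 0;
    default:
        return -2;
    }
}
```

The deliberate `fallthrough` lets the matching-family cases share the `FAM_INET` accept path without duplicating it.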
+1
net/netfilter/nft_tunnel.c
··· 713 713 714 714 static struct nft_object_type nft_tunnel_obj_type __read_mostly = { 715 715 .type = NFT_OBJECT_TUNNEL, 716 + .family = NFPROTO_NETDEV, 716 717 .ops = &nft_tunnel_obj_ops, 717 718 .maxattr = NFTA_TUNNEL_KEY_MAX, 718 719 .policy = nft_tunnel_key_policy,
+4
net/nfc/nci/core.c
··· 1208 1208 { 1209 1209 nfc_free_device(ndev->nfc_dev); 1210 1210 nci_hci_deallocate(ndev); 1211 + 1212 + /* drop partial rx data packet if present */ 1213 + if (ndev->rx_data_reassembly) 1214 + kfree_skb(ndev->rx_data_reassembly); 1211 1215 kfree(ndev); 1212 1216 } 1213 1217 EXPORT_SYMBOL(nci_free_device);
+9 -3
net/smc/smc_core.c
··· 1877 1877 struct smcd_dev *smcismdev, 1878 1878 struct smcd_gid *peer_gid) 1879 1879 { 1880 - return lgr->peer_gid.gid == peer_gid->gid && lgr->smcd == smcismdev && 1881 - smc_ism_is_virtual(smcismdev) ? 1882 - (lgr->peer_gid.gid_ext == peer_gid->gid_ext) : 1; 1880 + if (lgr->peer_gid.gid != peer_gid->gid || 1881 + lgr->smcd != smcismdev) 1882 + return false; 1883 + 1884 + if (smc_ism_is_virtual(smcismdev) && 1885 + lgr->peer_gid.gid_ext != peer_gid->gid_ext) 1886 + return false; 1887 + 1888 + return true; 1883 1889 } 1884 1890 1885 1891 /* create a new SMC connection (and a new link group if necessary) */
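The smc_core.c bug is an operator-precedence trap: the old `return a && b && c ? x : 1;` parses as `(a && b && c) ? x : 1`, so whenever the virtual-ISM test was false the whole match collapsed to the constant, ignoring the GID and device comparisons. A self-contained demonstration of the parse and the rewritten logic (boolean parameters stand in for the real GID/device comparisons):

```c
#include <stdbool.h>

/* The buggy predicate's shape: because && binds tighter than ?:, this is
 * (gid_ok && dev_ok && is_virtual) ? ext_ok : 1 -- when is_virtual is
 * false, the result is unconditionally true. */
static bool match_buggy(bool gid_ok, bool dev_ok, bool is_virtual,
                        bool ext_ok)
{
    return gid_ok && dev_ok && is_virtual ? ext_ok : 1;
}

/* The fix: spell the logic out with early returns. */
static bool match_fixed(bool gid_ok, bool dev_ok, bool is_virtual,
                        bool ext_ok)
{
    if (!gid_ok || !dev_ok)
        return false;
    if (is_virtual && !ext_ok)
        return false;
    return true;
}
```

The early-return form both fixes the precedence bug and documents the intended condition: GID and device must always match, and the extended GID only matters for virtual ISM devices.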
+2 -2
net/sunrpc/svc.c
··· 1598 1598 /* Finally, send the reply synchronously */ 1599 1599 if (rqstp->bc_to_initval > 0) { 1600 1600 timeout.to_initval = rqstp->bc_to_initval; 1601 - timeout.to_retries = rqstp->bc_to_initval; 1601 + timeout.to_retries = rqstp->bc_to_retries; 1602 1602 } else { 1603 1603 timeout.to_initval = req->rq_xprt->timeout->to_initval; 1604 - timeout.to_initval = req->rq_xprt->timeout->to_retries; 1604 + timeout.to_retries = req->rq_xprt->timeout->to_retries; 1605 1605 } 1606 1606 memcpy(&req->rq_snd_buf, &rqstp->rq_res, sizeof(req->rq_snd_buf)); 1607 1607 task = rpc_run_bc_task(req, &timeout);
+6 -8
net/unix/af_unix.c
··· 1342 1342 unix_state_lock(sk1); 1343 1343 return; 1344 1344 } 1345 - if (sk1 < sk2) { 1346 - unix_state_lock(sk1); 1347 - unix_state_lock_nested(sk2); 1348 - } else { 1349 - unix_state_lock(sk2); 1350 - unix_state_lock_nested(sk1); 1351 - } 1345 + if (sk1 > sk2) 1346 + swap(sk1, sk2); 1347 + 1348 + unix_state_lock(sk1); 1349 + unix_state_lock_nested(sk2, U_LOCK_SECOND); 1352 1350 } 1353 1351 1354 1352 static void unix_state_double_unlock(struct sock *sk1, struct sock *sk2) ··· 1587 1589 goto out_unlock; 1588 1590 } 1589 1591 1590 - unix_state_lock_nested(sk); 1592 + unix_state_lock_nested(sk, U_LOCK_SECOND); 1591 1593 1592 1594 if (sk->sk_state != st) { 1593 1595 unix_state_unlock(sk);
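The af_unix.c hunk replaces the open-coded if/else with `swap()`: whichever order the caller passes the sockets in, the lower address is locked first, so two threads locking the same pair from opposite directions cannot deadlock. A POSIX-threads sketch of the idiom (note that comparing unrelated pointers with `>` is formally undefined in ISO C, but it is the conventional lock-ordering idiom and what the kernel does):

```c
#include <pthread.h>

/* Lock two mutexes in canonical (address) order; handles the aliased
 * case the way unix_state_double_lock() handles sk1 == sk2. */
static void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    if (a == b) {
        pthread_mutex_lock(a);
        return;
    }
    if (a > b) {                 /* normalize: lower address first */
        pthread_mutex_t *tmp = a;
        a = b;
        b = tmp;
    }
    pthread_mutex_lock(a);
    pthread_mutex_lock(b);
}

static void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b)
{
    pthread_mutex_unlock(a);
    if (a != b)
        pthread_mutex_unlock(b);
}
```

The kernel fix also passes an explicit lockdep subclass (`U_LOCK_SECOND`) for the inner lock, which the pthread sketch has no analog for.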
+1 -1
net/unix/diag.c
··· 84 84 * queue lock. With the other's queue locked it's 85 85 * OK to lock the state. 86 86 */ 87 - unix_state_lock_nested(req); 87 + unix_state_lock_nested(req, U_LOCK_DIAG); 88 88 peer = unix_sk(req)->peer; 89 89 buf[i++] = (peer ? sock_i_ino(peer) : 0); 90 90 unix_state_unlock(req);
+4 -4
scripts/Makefile.defconf
··· 9 9 # Input config fragments without '.config' suffix 10 10 define merge_into_defconfig 11 11 $(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh \ 12 - -m -O $(objtree) $(srctree)/arch/$(ARCH)/configs/$(1) \ 13 - $(foreach config,$(2),$(srctree)/arch/$(ARCH)/configs/$(config).config) 12 + -m -O $(objtree) $(srctree)/arch/$(SRCARCH)/configs/$(1) \ 13 + $(foreach config,$(2),$(srctree)/arch/$(SRCARCH)/configs/$(config).config) 14 14 +$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig 15 15 endef 16 16 ··· 23 23 # Input config fragments without '.config' suffix 24 24 define merge_into_defconfig_override 25 25 $(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh \ 26 - -Q -m -O $(objtree) $(srctree)/arch/$(ARCH)/configs/$(1) \ 27 - $(foreach config,$(2),$(srctree)/arch/$(ARCH)/configs/$(config).config) 26 + -Q -m -O $(objtree) $(srctree)/arch/$(SRCARCH)/configs/$(1) \ 27 + $(foreach config,$(2),$(srctree)/arch/$(SRCARCH)/configs/$(config).config) 28 28 +$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig 29 29 endef
+3 -1
scripts/kconfig/symbol.c
··· 345 345 346 346 oldval = sym->curr; 347 347 348 + newval.tri = no; 349 + 348 350 switch (sym->type) { 349 351 case S_INT: 350 352 newval.val = "0"; ··· 359 357 break; 360 358 case S_BOOLEAN: 361 359 case S_TRISTATE: 362 - newval = symbol_no.curr; 360 + newval.val = "n"; 363 361 break; 364 362 default: 365 363 sym->curr.val = sym->name;
+3 -12
scripts/mod/modpost.c
··· 70 70 break; 71 71 case LOG_ERROR: 72 72 fprintf(stderr, "ERROR: "); 73 - break; 74 - case LOG_FATAL: 75 - fprintf(stderr, "FATAL: "); 73 + error_occurred = true; 76 74 break; 77 75 default: /* invalid loglevel, ignore */ 78 76 break; ··· 81 83 va_start(arglist, fmt); 82 84 vfprintf(stderr, fmt, arglist); 83 85 va_end(arglist); 84 - 85 - if (loglevel == LOG_FATAL) 86 - exit(1); 87 - if (loglevel == LOG_ERROR) 88 - error_occurred = true; 89 86 } 90 - 91 - void __attribute__((alias("modpost_log"))) 92 - modpost_log_noret(enum loglevel loglevel, const char *fmt, ...); 93 87 94 88 static inline bool strends(const char *str, const char *postfix) 95 89 { ··· 796 806 797 807 #define DATA_SECTIONS ".data", ".data.rel" 798 808 #define TEXT_SECTIONS ".text", ".text.*", ".sched.text", \ 799 - ".kprobes.text", ".cpuidle.text", ".noinstr.text" 809 + ".kprobes.text", ".cpuidle.text", ".noinstr.text", \ 810 + ".ltext", ".ltext.*" 800 811 #define OTHER_TEXT_SECTIONS ".ref.text", ".head.text", ".spinlock.text", \ 801 812 ".fixup", ".entry.text", ".exception.text", \ 802 813 ".coldtext", ".softirqentry.text"
+1 -5
scripts/mod/modpost.h
··· 194 194 enum loglevel { 195 195 LOG_WARN, 196 196 LOG_ERROR, 197 - LOG_FATAL 198 197 }; 199 198 200 199 void __attribute__((format(printf, 2, 3))) 201 200 modpost_log(enum loglevel loglevel, const char *fmt, ...); 202 - 203 - void __attribute__((format(printf, 2, 3), noreturn)) 204 - modpost_log_noret(enum loglevel loglevel, const char *fmt, ...); 205 201 206 202 /* 207 203 * warn - show the given message, then let modpost continue running, still ··· 214 218 */ 215 219 #define warn(fmt, args...) modpost_log(LOG_WARN, fmt, ##args) 216 220 #define error(fmt, args...) modpost_log(LOG_ERROR, fmt, ##args) 217 - #define fatal(fmt, args...) modpost_log_noret(LOG_FATAL, fmt, ##args) 221 + #define fatal(fmt, args...) do { error(fmt, ##args); exit(1); } while (1)
+11 -11
scripts/package/kernel.spec
··· 55 55 %{make} %{makeflags} KERNELRELEASE=%{KERNELRELEASE} KBUILD_BUILD_VERSION=%{release} 56 56 57 57 %install 58 - mkdir -p %{buildroot}/boot 59 - cp $(%{make} %{makeflags} -s image_name) %{buildroot}/boot/vmlinuz-%{KERNELRELEASE} 58 + mkdir -p %{buildroot}/lib/modules/%{KERNELRELEASE} 59 + cp $(%{make} %{makeflags} -s image_name) %{buildroot}/lib/modules/%{KERNELRELEASE}/vmlinuz 60 60 %{make} %{makeflags} INSTALL_MOD_PATH=%{buildroot} modules_install 61 61 %{make} %{makeflags} INSTALL_HDR_PATH=%{buildroot}/usr headers_install 62 - cp System.map %{buildroot}/boot/System.map-%{KERNELRELEASE} 63 - cp .config %{buildroot}/boot/config-%{KERNELRELEASE} 62 + cp System.map %{buildroot}/lib/modules/%{KERNELRELEASE} 63 + cp .config %{buildroot}/lib/modules/%{KERNELRELEASE}/config 64 64 ln -fns /usr/src/kernels/%{KERNELRELEASE} %{buildroot}/lib/modules/%{KERNELRELEASE}/build 65 65 %if %{with_devel} 66 66 %{make} %{makeflags} run-command KBUILD_RUN_COMMAND='${srctree}/scripts/package/install-extmod-build %{buildroot}/usr/src/kernels/%{KERNELRELEASE}' ··· 70 70 rm -rf %{buildroot} 71 71 72 72 %post 73 - if [ -x /sbin/installkernel -a -r /boot/vmlinuz-%{KERNELRELEASE} -a -r /boot/System.map-%{KERNELRELEASE} ]; then 74 - cp /boot/vmlinuz-%{KERNELRELEASE} /boot/.vmlinuz-%{KERNELRELEASE}-rpm 75 - cp /boot/System.map-%{KERNELRELEASE} /boot/.System.map-%{KERNELRELEASE}-rpm 76 - rm -f /boot/vmlinuz-%{KERNELRELEASE} /boot/System.map-%{KERNELRELEASE} 77 - /sbin/installkernel %{KERNELRELEASE} /boot/.vmlinuz-%{KERNELRELEASE}-rpm /boot/.System.map-%{KERNELRELEASE}-rpm 78 - rm -f /boot/.vmlinuz-%{KERNELRELEASE}-rpm /boot/.System.map-%{KERNELRELEASE}-rpm 73 + if [ -x /usr/bin/kernel-install ]; then 74 + /usr/bin/kernel-install add %{KERNELRELEASE} /lib/modules/%{KERNELRELEASE}/vmlinuz 79 75 fi 76 + for file in vmlinuz System.map config; do 77 + if ! 
cmp --silent "/lib/modules/%{KERNELRELEASE}/${file}" "/boot/${file}-%{KERNELRELEASE}"; then 78 + cp "/lib/modules/%{KERNELRELEASE}/${file}" "/boot/${file}-%{KERNELRELEASE}" 79 + fi 80 + done 80 81 81 82 %preun 82 83 if [ -x /sbin/new-kernel-pkg ]; then ··· 95 94 %defattr (-, root, root) 96 95 /lib/modules/%{KERNELRELEASE} 97 96 %exclude /lib/modules/%{KERNELRELEASE}/build 98 - /boot/* 99 97 100 98 %files headers 101 99 %defattr (-, root, root)
+40 -5
security/security.c
··· 4255 4255 */ 4256 4256 int security_inode_getsecctx(struct inode *inode, void **ctx, u32 *ctxlen) 4257 4257 { 4258 - return call_int_hook(inode_getsecctx, -EOPNOTSUPP, inode, ctx, ctxlen); 4258 + struct security_hook_list *hp; 4259 + int rc; 4260 + 4261 + /* 4262 + * Only one module will provide a security context. 4263 + */ 4264 + hlist_for_each_entry(hp, &security_hook_heads.inode_getsecctx, list) { 4265 + rc = hp->hook.inode_getsecctx(inode, ctx, ctxlen); 4266 + if (rc != LSM_RET_DEFAULT(inode_getsecctx)) 4267 + return rc; 4268 + } 4269 + 4270 + return LSM_RET_DEFAULT(inode_getsecctx); 4259 4271 } 4260 4272 EXPORT_SYMBOL(security_inode_getsecctx); 4261 4273 ··· 4624 4612 int security_socket_getpeersec_stream(struct socket *sock, sockptr_t optval, 4625 4613 sockptr_t optlen, unsigned int len) 4626 4614 { 4627 - return call_int_hook(socket_getpeersec_stream, -ENOPROTOOPT, sock, 4628 - optval, optlen, len); 4615 + struct security_hook_list *hp; 4616 + int rc; 4617 + 4618 + /* 4619 + * Only one module will provide a security context. 4620 + */ 4621 + hlist_for_each_entry(hp, &security_hook_heads.socket_getpeersec_stream, 4622 + list) { 4623 + rc = hp->hook.socket_getpeersec_stream(sock, optval, optlen, 4624 + len); 4625 + if (rc != LSM_RET_DEFAULT(socket_getpeersec_stream)) 4626 + return rc; 4627 + } 4628 + return LSM_RET_DEFAULT(socket_getpeersec_stream); 4629 4629 } 4630 4630 4631 4631 /** ··· 4657 4633 int security_socket_getpeersec_dgram(struct socket *sock, 4658 4634 struct sk_buff *skb, u32 *secid) 4659 4635 { 4660 - return call_int_hook(socket_getpeersec_dgram, -ENOPROTOOPT, sock, 4661 - skb, secid); 4636 + struct security_hook_list *hp; 4637 + int rc; 4638 + 4639 + /* 4640 + * Only one module will provide a security context. 
4641 + */ 4642 + hlist_for_each_entry(hp, &security_hook_heads.socket_getpeersec_dgram, 4643 + list) { 4644 + rc = hp->hook.socket_getpeersec_dgram(sock, skb, secid); 4645 + if (rc != LSM_RET_DEFAULT(socket_getpeersec_dgram)) 4646 + return rc; 4647 + } 4648 + return LSM_RET_DEFAULT(socket_getpeersec_dgram); 4662 4649 } 4663 4650 EXPORT_SYMBOL(security_socket_getpeersec_dgram); 4664 4651
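The security.c hunks all follow one pattern: instead of `call_int_hook()` with a baked-in error default, walk the registered hooks and return the first answer that differs from the default sentinel, so stacked LSMs where only one module provides a security context behave correctly. A userspace sketch of that dispatch shape (`DEFAULT_RC`, the hook names, and the function-pointer table are illustrative stand-ins for `LSM_RET_DEFAULT()` and the hlist walk):

```c
#include <stddef.h>

#define DEFAULT_RC (-95)   /* stand-in for LSM_RET_DEFAULT() / -EOPNOTSUPP */

typedef int (*hook_fn)(void *ctx);

/* Return the first non-default answer: only one module is expected to
 * provide a security context, the rest pass by returning the default. */
static int call_first_non_default(const hook_fn *hooks, size_t n, void *ctx)
{
    for (size_t i = 0; i < n; i++) {
        int rc = hooks[i](ctx);
        if (rc != DEFAULT_RC)
            return rc;
    }
    return DEFAULT_RC;
}

/* Sample hooks: one abstains, one answers. */
static int hook_pass(void *ctx)   { (void)ctx; return DEFAULT_RC; }
static int hook_answer(void *ctx) { (void)ctx; return 42; }
```

The key difference from the old `call_int_hook()` form is that an abstaining module no longer short-circuits the walk with its error default.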
+1 -1
tools/power/cpupower/bench/Makefile
··· 15 15 OBJS = $(OUTPUT)main.o $(OUTPUT)parse.o $(OUTPUT)system.o $(OUTPUT)benchmark.o 16 16 endif 17 17 18 - CFLAGS += -D_GNU_SOURCE -I../lib -DDEFAULT_CONFIG_FILE=\"$(confdir)/cpufreq-bench.conf\" 18 + override CFLAGS += -D_GNU_SOURCE -I../lib -DDEFAULT_CONFIG_FILE=\"$(confdir)/cpufreq-bench.conf\" 19 19 20 20 $(OUTPUT)%.o : %.c 21 21 $(ECHO) " CC " $@
+2
tools/testing/cxl/Kbuild
··· 65 65 cxl_core-y += cxl_core_test.o 66 66 cxl_core-y += cxl_core_exports.o 67 67 68 + KBUILD_CFLAGS := $(filter-out -Wmissing-prototypes -Wmissing-declarations, $(KBUILD_CFLAGS)) 69 + 68 70 obj-m += test/
+2
tools/testing/cxl/test/Kbuild
··· 8 8 cxl_test-y := cxl.o 9 9 cxl_mock-y := mock.o 10 10 cxl_mock_mem-y := mem.o 11 + 12 + KBUILD_CFLAGS := $(filter-out -Wmissing-prototypes -Wmissing-declarations, $(KBUILD_CFLAGS))
+2
tools/testing/nvdimm/Kbuild
··· 82 82 libnvdimm-y += libnvdimm_test.o 83 83 libnvdimm-y += config_check.o 84 84 85 + KBUILD_CFLAGS := $(filter-out -Wmissing-prototypes -Wmissing-declarations, $(KBUILD_CFLAGS)) 86 + 85 87 obj-m += test/
+11
tools/testing/selftests/drivers/net/bonding/lag_lib.sh
··· 48 48 ip link add mv0 link "$name" up address "$ucaddr" type macvlan 49 49 # Used to test dev->mc handling 50 50 ip address add "$addr6" dev "$name" 51 + 52 + # Check that addresses were added as expected 53 + (grep_bridge_fdb "$ucaddr" bridge fdb show dev dummy1 || 54 + grep_bridge_fdb "$ucaddr" bridge fdb show dev dummy2) >/dev/null 55 + check_err $? "macvlan unicast address not found on a slave" 56 + 57 + # mcaddr is added asynchronously by addrconf_dad_work(), use busywait 58 + (busywait 10000 grep_bridge_fdb "$mcaddr" bridge fdb show dev dummy1 || 59 + grep_bridge_fdb "$mcaddr" bridge fdb show dev dummy2) >/dev/null 60 + check_err $? "IPv6 solicited-node multicast mac address not found on a slave" 61 + 51 62 ip link set dev "$name" down 52 63 ip link del "$name" 53 64
+3 -1
tools/testing/selftests/drivers/net/team/config
··· 1 + CONFIG_DUMMY=y 2 + CONFIG_IPV6=y 3 + CONFIG_MACVLAN=y 1 4 CONFIG_NET_TEAM=y 2 5 CONFIG_NET_TEAM_MODE_LOADBALANCE=y 3 - CONFIG_MACVLAN=y
+4 -4
tools/testing/selftests/hid/tests/test_wacom_generic.py
··· 880 880 does not overlap with other contacts. The value of `t` may be 881 881 incremented over time to move the point along a linear path. 882 882 """ 883 - x = 50 + 10 * contact_id + t 884 - y = 100 + 100 * contact_id + t 883 + x = 50 + 10 * contact_id + t * 11 884 + y = 100 + 100 * contact_id + t * 11 885 885 return test_multitouch.Touch(contact_id, x, y) 886 886 887 887 def make_contacts(self, n, t=0): ··· 902 902 tracking_id = contact_ids.tracking_id 903 903 slot_num = contact_ids.slot_num 904 904 905 - x = 50 + 10 * contact_id + t 906 - y = 100 + 100 * contact_id + t 905 + x = 50 + 10 * contact_id + t * 11 906 + y = 100 + 100 * contact_id + t * 11 907 907 908 908 # If the data isn't supposed to be stored in any slots, there is 909 909 # nothing we can check for in the evdev stream.
+17 -20
tools/testing/selftests/livepatch/functions.sh
··· 42 42 exit 1 43 43 } 44 44 45 - # save existing dmesg so we can detect new content 46 - function save_dmesg() { 47 - SAVED_DMESG=$(mktemp --tmpdir -t klp-dmesg-XXXXXX) 48 - dmesg > "$SAVED_DMESG" 49 - } 50 - 51 - # cleanup temporary dmesg file from save_dmesg() 52 - function cleanup_dmesg_file() { 53 - rm -f "$SAVED_DMESG" 54 - } 55 - 56 45 function push_config() { 57 46 DYNAMIC_DEBUG=$(grep '^kernel/livepatch' /sys/kernel/debug/dynamic_debug/control | \ 58 47 awk -F'[: ]' '{print "file " $1 " line " $2 " " $4}') ··· 88 99 89 100 function cleanup() { 90 101 pop_config 91 - cleanup_dmesg_file 92 102 } 93 103 94 104 # setup_config - save the current config and set a script exit trap that ··· 268 280 function start_test { 269 281 local test="$1" 270 282 271 - save_dmesg 283 + # Dump something unique into the dmesg log, then stash the entry 284 + # in LAST_DMESG. The check_result() function will use it to 285 + # find new kernel messages since the test started. 286 + local last_dmesg_msg="livepatch kselftest timestamp: $(date --rfc-3339=ns)" 287 + log "$last_dmesg_msg" 288 + loop_until 'dmesg | grep -q "$last_dmesg_msg"' || 289 + die "buffer busy? can't find canary dmesg message: $last_dmesg_msg" 290 + LAST_DMESG=$(dmesg | grep "$last_dmesg_msg") 291 + 272 292 echo -n "TEST: $test ... " 273 293 log "===== TEST: $test =====" 274 294 } ··· 287 291 local expect="$*" 288 292 local result 289 293 290 - # Note: when comparing dmesg output, the kernel log timestamps 291 - # help differentiate repeated testing runs. Remove them with a 292 - # post-comparison sed filter. 
293 - 294 - result=$(dmesg | comm --nocheck-order -13 "$SAVED_DMESG" - | \ 294 + # Test results include any new dmesg entry since LAST_DMESG, then: 295 + # - include lines matching keywords 296 + # - exclude lines matching keywords 297 + # - filter out dmesg timestamp prefixes 298 + result=$(dmesg | awk -v last_dmesg="$LAST_DMESG" 'p; $0 == last_dmesg { p=1 }' | \ 295 299 grep -e 'livepatch:' -e 'test_klp' | \ 296 300 grep -v '\(tainting\|taints\) kernel' | \ 297 301 sed 's/^\[[ 0-9.]*\] //') 298 302 299 303 if [[ "$expect" == "$result" ]] ; then 300 304 echo "ok" 305 + elif [[ "$result" == "" ]] ; then 306 + echo -e "not ok\n\nbuffer overrun? can't find canary dmesg entry: $LAST_DMESG\n" 307 + die "livepatch kselftest(s) failed" 301 308 else 302 309 echo -e "not ok\n\n$(diff -upr --label expected --label result <(echo "$expect") <(echo "$result"))\n" 303 310 die "livepatch kselftest(s) failed" 304 311 fi 305 - 306 - cleanup_dmesg_file 307 312 } 308 313 309 314 # check_sysfs_rights(modname, rel_path, expected_rights) - check sysfs
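The rewritten check_result() selects everything dmesg printed after the canary entry with `awk 'p; $0 == last_dmesg { p=1 }'`: the bare `p;` rule prints a line only once the flag is set, and the flag is set on the exact line matching the marker (which is itself not printed). A standalone sketch of the same idiom:

```shell
#!/bin/sh
# Print only the lines that follow an exact marker line, as the new
# check_result() does with the LAST_DMESG canary entry.
print_after_marker() {
	marker="$1"
	awk -v last_dmesg="$marker" 'p; $0 == last_dmesg { p=1 }'
}

printf '%s\n' "old message" "CANARY" "new message 1" "new message 2" |
	print_after_marker "CANARY"
# prints:
# new message 1
# new message 2
```

Unlike the old `comm --nocheck-order` against a saved dmesg file, this needs no temporary file and cannot be confused by a ring buffer that rotated between save and compare (the overrun case is now detected explicitly by the empty-result branch).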
+1 -1
tools/testing/selftests/mm/charge_reserved_hugetlb.sh
··· 1 - #!/bin/sh 1 + #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 4 # Kselftest framework requirement - SKIP code is 4.
+1 -1
tools/testing/selftests/mm/ksm_tests.c
··· 566 566 if (map_ptr_orig == MAP_FAILED) 567 567 err(2, "initial mmap"); 568 568 569 - if (madvise(map_ptr, len + HPAGE_SIZE, MADV_HUGEPAGE)) 569 + if (madvise(map_ptr, len, MADV_HUGEPAGE)) 570 570 err(2, "MADV_HUGEPAGE"); 571 571 572 572 pagemap_fd = open("/proc/self/pagemap", O_RDONLY);
+7
tools/testing/selftests/mm/map_hugetlb.c
··· 15 15 #include <unistd.h> 16 16 #include <sys/mman.h> 17 17 #include <fcntl.h> 18 + #include "vm_util.h" 18 19 19 20 #define LENGTH (256UL*1024*1024) 20 21 #define PROTECTION (PROT_READ | PROT_WRITE) ··· 59 58 { 60 59 void *addr; 61 60 int ret; 61 + size_t hugepage_size; 62 62 size_t length = LENGTH; 63 63 int flags = FLAGS; 64 64 int shift = 0; 65 + 66 + hugepage_size = default_huge_page_size(); 67 + /* munmap will fail if the length is not page aligned */ 68 + if (hugepage_size > length) 69 + length = hugepage_size; 65 70 66 71 if (argc > 1) 67 72 length = atol(argv[1]) << 20;
+14 -13
tools/testing/selftests/mm/mremap_test.c
··· 360 360 char pattern_seed) 361 361 { 362 362 void *addr, *src_addr, *dest_addr, *dest_preamble_addr; 363 - unsigned long long i; 363 + int d; 364 + unsigned long long t; 364 365 struct timespec t_start = {0, 0}, t_end = {0, 0}; 365 366 long long start_ns, end_ns, align_mask, ret, offset; 366 367 unsigned long long threshold; ··· 379 378 380 379 /* Set byte pattern for source block. */ 381 380 srand(pattern_seed); 382 - for (i = 0; i < threshold; i++) 383 - memset((char *) src_addr + i, (char) rand(), 1); 381 + for (t = 0; t < threshold; t++) 382 + memset((char *) src_addr + t, (char) rand(), 1); 384 383 385 384 /* Mask to zero out lower bits of address for alignment */ 386 385 align_mask = ~(c.dest_alignment - 1); ··· 421 420 422 421 /* Set byte pattern for the dest preamble block. */ 423 422 srand(pattern_seed); 424 - for (i = 0; i < c.dest_preamble_size; i++) 425 - memset((char *) dest_preamble_addr + i, (char) rand(), 1); 423 + for (d = 0; d < c.dest_preamble_size; d++) 424 + memset((char *) dest_preamble_addr + d, (char) rand(), 1); 426 425 } 427 426 428 427 clock_gettime(CLOCK_MONOTONIC, &t_start); ··· 438 437 439 438 /* Verify byte pattern after remapping */ 440 439 srand(pattern_seed); 441 - for (i = 0; i < threshold; i++) { 440 + for (t = 0; t < threshold; t++) { 442 441 char c = (char) rand(); 443 442 444 - if (((char *) dest_addr)[i] != c) { 443 + if (((char *) dest_addr)[t] != c) { 445 444 ksft_print_msg("Data after remap doesn't match at offset %llu\n", 446 - i); 445 + t); 447 446 ksft_print_msg("Expected: %#x\t Got: %#x\n", c & 0xff, 448 - ((char *) dest_addr)[i] & 0xff); 447 + ((char *) dest_addr)[t] & 0xff); 449 448 ret = -1; 450 449 goto clean_up_dest; 451 450 } ··· 454 453 /* Verify the dest preamble byte pattern after remapping */ 455 454 if (c.dest_preamble_size) { 456 455 srand(pattern_seed); 457 - for (i = 0; i < c.dest_preamble_size; i++) { 456 + for (d = 0; d < c.dest_preamble_size; d++) { 458 457 char c = (char) rand(); 459 458 460 - if 
(((char *) dest_preamble_addr)[i] != c) { 459 + if (((char *) dest_preamble_addr)[d] != c) { 461 460 ksft_print_msg("Preamble data after remap doesn't match at offset %d\n", 462 - i); 461 + d); 463 462 ksft_print_msg("Expected: %#x\t Got: %#x\n", c & 0xff, 464 - ((char *) dest_preamble_addr)[i] & 0xff); 463 + ((char *) dest_preamble_addr)[d] & 0xff); 465 464 ret = -1; 466 465 goto clean_up_dest; 467 466 }
+6
tools/testing/selftests/mm/va_high_addr_switch.sh
··· 29 29 # See man 1 gzip under '-f'. 30 30 local pg_table_levels=$(gzip -dcfq "${config}" | grep PGTABLE_LEVELS | cut -d'=' -f 2) 31 31 32 + local cpu_supports_pl5=$(awk '/^flags/ {if (/la57/) {print 0;} 33 + else {print 1}; exit}' /proc/cpuinfo 2>/dev/null) 34 + 32 35 if [[ "${pg_table_levels}" -lt 5 ]]; then 33 36 echo "$0: PGTABLE_LEVELS=${pg_table_levels}, must be >= 5 to run this test" 37 + exit $ksft_skip 38 + elif [[ "${cpu_supports_pl5}" -ne 0 ]]; then 39 + echo "$0: CPU does not have the necessary la57 flag to support page table level 5" 34 40 exit $ksft_skip 35 41 fi 36 42 }
+1 -1
tools/testing/selftests/mm/write_hugetlb_memory.sh
··· 1 - #!/bin/sh 1 + #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 4 set -e
+5 -4
tools/testing/selftests/net/Makefile
··· 53 53 TEST_PROGS += ip_local_port_range.sh 54 54 TEST_PROGS += rps_default_mask.sh 55 55 TEST_PROGS += big_tcp.sh 56 - TEST_PROGS_EXTENDED := in_netns.sh setup_loopback.sh setup_veth.sh 57 - TEST_PROGS_EXTENDED += toeplitz_client.sh toeplitz.sh lib.sh 56 + TEST_PROGS_EXTENDED := toeplitz_client.sh toeplitz.sh 58 57 TEST_GEN_FILES = socket nettest 59 58 TEST_GEN_FILES += psock_fanout psock_tpacket msg_zerocopy reuseport_addr_any 60 59 TEST_GEN_FILES += tcp_mmap tcp_inq psock_snd txring_overwrite ··· 83 84 TEST_GEN_FILES += sctp_hello 84 85 TEST_GEN_FILES += csum 85 86 TEST_GEN_FILES += nat6to4.o 87 + TEST_GEN_FILES += xdp_dummy.o 86 88 TEST_GEN_FILES += ip_local_port_range 87 89 TEST_GEN_FILES += bind_wildcard 88 90 TEST_PROGS += test_vxlan_mdb.sh ··· 95 95 TEST_PROGS += vlan_hw_filter.sh 96 96 97 97 TEST_FILES := settings 98 + TEST_FILES += in_netns.sh lib.sh net_helper.sh setup_loopback.sh setup_veth.sh 98 99 99 100 include ../lib.mk 100 101 ··· 105 104 $(OUTPUT)/bind_bhash: LDLIBS += -lpthread 106 105 $(OUTPUT)/io_uring_zerocopy_tx: CFLAGS += -I../../../include/ 107 106 108 - # Rules to generate bpf obj nat6to4.o 107 + # Rules to generate bpf objs 109 108 CLANG ?= clang 110 109 SCRATCH_DIR := $(OUTPUT)/tools 111 110 BUILD_DIR := $(SCRATCH_DIR)/build ··· 140 139 141 140 CLANG_SYS_INCLUDES = $(call get_sys_includes,$(CLANG),$(CLANG_TARGET_ARCH)) 142 141 143 - $(OUTPUT)/nat6to4.o: nat6to4.c $(BPFOBJ) | $(MAKE_DIRS) 142 + $(OUTPUT)/nat6to4.o $(OUTPUT)/xdp_dummy.o: $(OUTPUT)/%.o : %.c $(BPFOBJ) | $(MAKE_DIRS) 144 143 $(CLANG) -O2 --target=bpf -c $< $(CCINCLUDE) $(CLANG_SYS_INCLUDES) -o $@ 145 144 146 145 $(BPFOBJ): $(wildcard $(BPFDIR)/*.[ch] $(BPFDIR)/Makefile) \
+16
tools/testing/selftests/net/config
··· 19 19 CONFIG_BRIDGE=y 20 20 CONFIG_CRYPTO_CHACHA20POLY1305=m 21 21 CONFIG_VLAN_8021Q=y 22 + CONFIG_GENEVE=m 22 23 CONFIG_IFB=y 23 24 CONFIG_INET_DIAG=y 25 + CONFIG_INET_ESP=y 26 + CONFIG_INET_ESP_OFFLOAD=y 24 27 CONFIG_IP_GRE=m 25 28 CONFIG_NETFILTER=y 26 29 CONFIG_NETFILTER_ADVANCED=y ··· 32 29 CONFIG_IP6_NF_IPTABLES=m 33 30 CONFIG_IP_NF_IPTABLES=m 34 31 CONFIG_IP6_NF_NAT=m 32 + CONFIG_IP6_NF_RAW=m 35 33 CONFIG_IP_NF_NAT=m 34 + CONFIG_IP_NF_RAW=m 35 + CONFIG_IP_NF_TARGET_TTL=m 36 36 CONFIG_IPV6_GRE=m 37 37 CONFIG_IPV6_SEG6_LWTUNNEL=y 38 38 CONFIG_L2TP_ETH=m ··· 51 45 CONFIG_NF_TABLES_IPV6=y 52 46 CONFIG_NF_TABLES_IPV4=y 53 47 CONFIG_NFT_NAT=m 48 + CONFIG_NETFILTER_XT_MATCH_LENGTH=m 49 + CONFIG_NET_ACT_CSUM=m 50 + CONFIG_NET_ACT_CT=m 54 51 CONFIG_NET_ACT_GACT=m 52 + CONFIG_NET_ACT_PEDIT=m 55 53 CONFIG_NET_CLS_BASIC=m 54 + CONFIG_NET_CLS_BPF=m 55 + CONFIG_NET_CLS_MATCHALL=m 56 56 CONFIG_NET_CLS_U32=m 57 57 CONFIG_NET_IPGRE_DEMUX=m 58 58 CONFIG_NET_IPGRE=m ··· 67 55 CONFIG_NET_SCH_FQ=m 68 56 CONFIG_NET_SCH_ETF=m 69 57 CONFIG_NET_SCH_NETEM=y 58 + CONFIG_NET_SCH_PRIO=m 59 + CONFIG_NFT_COMPAT=m 60 + CONFIG_NF_FLOW_TABLE=m 70 61 CONFIG_PSAMPLE=m 71 62 CONFIG_TCP_MD5SIG=y 72 63 CONFIG_TEST_BLACKHOLE_DEV=m ··· 95 80 CONFIG_NETFILTER_XT_MATCH_POLICY=m 96 81 CONFIG_CRYPTO_ARIA=y 97 82 CONFIG_XFRM_INTERFACE=m 83 + CONFIG_XFRM_USER=m
+1 -1
tools/testing/selftests/net/forwarding/Makefile
··· 112 112 vxlan_symmetric_ipv6.sh \ 113 113 vxlan_symmetric.sh 114 114 115 - TEST_PROGS_EXTENDED := devlink_lib.sh \ 115 + TEST_FILES := devlink_lib.sh \ 116 116 ethtool_lib.sh \ 117 117 fib_offload_lib.sh \ 118 118 forwarding.config.sample \
+4 -1
tools/testing/selftests/net/lib.sh
··· 4 4 ############################################################################## 5 5 # Defines 6 6 7 + WAIT_TIMEOUT=${WAIT_TIMEOUT:=20} 8 + BUSYWAIT_TIMEOUT=$((WAIT_TIMEOUT * 1000)) # ms 9 + 7 10 # Kselftest framework requirement - SKIP code is 4. 8 11 ksft_skip=4 9 12 # namespace list created by setup_ns ··· 51 48 52 49 for ns in "$@"; do 53 50 ip netns delete "${ns}" &> /dev/null 54 - if ! busywait 2 ip netns list \| grep -vq "^$ns$" &> /dev/null; then 51 + if ! busywait $BUSYWAIT_TIMEOUT ip netns list \| grep -vq "^$ns$" &> /dev/null; then 55 52 echo "Warn: Failed to remove namespace $ns" 56 53 ret=1 57 54 fi
+3
tools/testing/selftests/net/mptcp/config
··· 22 22 CONFIG_NFT_SOCKET=m 23 23 CONFIG_IP_ADVANCED_ROUTER=y 24 24 CONFIG_IP_MULTIPLE_TABLES=y 25 + CONFIG_IP_NF_FILTER=m 26 + CONFIG_IP_NF_MANGLE=m 25 27 CONFIG_IP_NF_TARGET_REJECT=m 26 28 CONFIG_IPV6_MULTIPLE_TABLES=y 29 + CONFIG_IP6_NF_FILTER=m 27 30 CONFIG_NET_ACT_CSUM=m 28 31 CONFIG_NET_ACT_PEDIT=m 29 32 CONFIG_NET_CLS_ACT=y
+11 -16
tools/testing/selftests/net/mptcp/mptcp_join.sh
··· 643 643 mptcp_lib_kill_wait $evts_ns2_pid 644 644 } 645 645 646 - kill_tests_wait() 647 - { 648 - #shellcheck disable=SC2046 649 - kill -SIGUSR1 $(ip netns pids $ns2) $(ip netns pids $ns1) 650 - wait 651 - } 652 - 653 646 pm_nl_set_limits() 654 647 { 655 648 local ns=$1 ··· 3446 3453 chk_mptcp_info subflows 0 subflows 0 3447 3454 chk_subflows_total 1 1 3448 3455 kill_events_pids 3449 - wait $tests_pid 3456 + mptcp_lib_kill_wait $tests_pid 3450 3457 fi 3451 3458 3452 3459 # userspace pm create destroy subflow ··· 3468 3475 chk_mptcp_info subflows 0 subflows 0 3469 3476 chk_subflows_total 1 1 3470 3477 kill_events_pids 3471 - wait $tests_pid 3478 + mptcp_lib_kill_wait $tests_pid 3472 3479 fi 3473 3480 3474 3481 # userspace pm create id 0 subflow ··· 3487 3494 chk_mptcp_info subflows 1 subflows 1 3488 3495 chk_subflows_total 2 2 3489 3496 kill_events_pids 3490 - wait $tests_pid 3497 + mptcp_lib_kill_wait $tests_pid 3491 3498 fi 3492 3499 3493 3500 # userspace pm remove initial subflow ··· 3511 3518 chk_mptcp_info subflows 1 subflows 1 3512 3519 chk_subflows_total 1 1 3513 3520 kill_events_pids 3514 - wait $tests_pid 3521 + mptcp_lib_kill_wait $tests_pid 3515 3522 fi 3516 3523 3517 3524 # userspace pm send RM_ADDR for ID 0 ··· 3537 3544 chk_mptcp_info subflows 1 subflows 1 3538 3545 chk_subflows_total 1 1 3539 3546 kill_events_pids 3540 - wait $tests_pid 3547 + mptcp_lib_kill_wait $tests_pid 3541 3548 fi 3542 3549 } 3543 3550 ··· 3551 3558 pm_nl_set_limits $ns2 2 2 3552 3559 pm_nl_add_endpoint $ns1 10.0.2.1 flags signal 3553 3560 speed=slow \ 3554 - run_tests $ns1 $ns2 10.0.1.1 2>/dev/null & 3561 + run_tests $ns1 $ns2 10.0.1.1 & 3562 + local tests_pid=$! 
3555 3563 3556 3564 wait_mpj $ns1 3557 3565 pm_nl_check_endpoint "creation" \ ··· 3567 3573 pm_nl_add_endpoint $ns2 10.0.2.2 flags signal 3568 3574 pm_nl_check_endpoint "modif is allowed" \ 3569 3575 $ns2 10.0.2.2 id 1 flags signal 3570 - kill_tests_wait 3576 + mptcp_lib_kill_wait $tests_pid 3571 3577 fi 3572 3578 3573 3579 if reset "delete and re-add" && ··· 3576 3582 pm_nl_set_limits $ns2 1 1 3577 3583 pm_nl_add_endpoint $ns2 10.0.2.2 id 2 dev ns2eth2 flags subflow 3578 3584 test_linkfail=4 speed=20 \ 3579 - run_tests $ns1 $ns2 10.0.1.1 2>/dev/null & 3585 + run_tests $ns1 $ns2 10.0.1.1 & 3586 + local tests_pid=$! 3580 3587 3581 3588 wait_mpj $ns2 3582 3589 chk_subflow_nr "before delete" 2 ··· 3592 3597 wait_mpj $ns2 3593 3598 chk_subflow_nr "after re-add" 2 3594 3599 chk_mptcp_info subflows 1 subflows 1 3595 - kill_tests_wait 3600 + mptcp_lib_kill_wait $tests_pid 3596 3601 fi 3597 3602 } 3598 3603
+1 -1
tools/testing/selftests/net/mptcp/mptcp_lib.sh
··· 6 6 readonly KSFT_SKIP=4 7 7 8 8 # shellcheck disable=SC2155 # declare and assign separately 9 - readonly KSFT_TEST=$(basename "${0}" | sed 's/\.sh$//g') 9 + readonly KSFT_TEST="${MPTCP_LIB_KSFT_TEST:-$(basename "${0}" .sh)}" 10 10 11 11 MPTCP_LIB_SUBTESTS=() 12 12
+1 -1
tools/testing/selftests/net/mptcp/settings
··· 1 - timeout=1200 1 + timeout=1800
+4 -4
tools/testing/selftests/net/mptcp/simult_flows.sh
··· 284 284 285 285 setup 286 286 run_test 10 10 0 0 "balanced bwidth" 287 - run_test 10 10 1 50 "balanced bwidth with unbalanced delay" 287 + run_test 10 10 1 25 "balanced bwidth with unbalanced delay" 288 288 289 289 # we still need some additional infrastructure to pass the following test-cases 290 - run_test 30 10 0 0 "unbalanced bwidth" 291 - run_test 30 10 1 50 "unbalanced bwidth with unbalanced delay" 292 - run_test 30 10 50 1 "unbalanced bwidth with opposed, unbalanced delay" 290 + run_test 10 3 0 0 "unbalanced bwidth" 291 + run_test 10 3 1 25 "unbalanced bwidth with unbalanced delay" 292 + run_test 10 3 25 1 "unbalanced bwidth with opposed, unbalanced delay" 293 293 294 294 mptcp_lib_result_print_all_tap 295 295 exit $ret
tools/testing/selftests/net/net_helper.sh
+9 -9
tools/testing/selftests/net/pmtu.sh
··· 707 707 } 708 708 709 709 setup_xfrm4udp() { 710 - setup_xfrm 4 ${veth4_a_addr} ${veth4_b_addr} "encap espinudp 4500 4500 0.0.0.0" 711 - setup_nettest_xfrm 4 4500 710 + setup_xfrm 4 ${veth4_a_addr} ${veth4_b_addr} "encap espinudp 4500 4500 0.0.0.0" && \ 711 + setup_nettest_xfrm 4 4500 712 712 } 713 713 714 714 setup_xfrm6udp() { 715 - setup_xfrm 6 ${veth6_a_addr} ${veth6_b_addr} "encap espinudp 4500 4500 0.0.0.0" 716 - setup_nettest_xfrm 6 4500 715 + setup_xfrm 6 ${veth6_a_addr} ${veth6_b_addr} "encap espinudp 4500 4500 0.0.0.0" && \ 716 + setup_nettest_xfrm 6 4500 717 717 } 718 718 719 719 setup_xfrm4udprouted() { 720 - setup_xfrm 4 ${prefix4}.${a_r1}.1 ${prefix4}.${b_r1}.1 "encap espinudp 4500 4500 0.0.0.0" 721 - setup_nettest_xfrm 4 4500 720 + setup_xfrm 4 ${prefix4}.${a_r1}.1 ${prefix4}.${b_r1}.1 "encap espinudp 4500 4500 0.0.0.0" && \ 721 + setup_nettest_xfrm 4 4500 722 722 } 723 723 724 724 setup_xfrm6udprouted() { 725 - setup_xfrm 6 ${prefix6}:${a_r1}::1 ${prefix6}:${b_r1}::1 "encap espinudp 4500 4500 0.0.0.0" 726 - setup_nettest_xfrm 6 4500 725 + setup_xfrm 6 ${prefix6}:${a_r1}::1 ${prefix6}:${b_r1}::1 "encap espinudp 4500 4500 0.0.0.0" && \ 726 + setup_nettest_xfrm 6 4500 727 727 } 728 728 729 729 setup_routing_old() { ··· 1339 1339 1340 1340 sleep 1 1341 1341 1342 - dd if=/dev/zero of=/dev/stdout status=none bs=1M count=1 | ${target} socat -T 3 -u STDIN $TCPDST,connect-timeout=3 1342 + dd if=/dev/zero status=none bs=1M count=1 | ${target} socat -T 3 -u STDIN $TCPDST,connect-timeout=3 1343 1343 1344 1344 size=$(du -sb $tmpoutfile) 1345 1345 size=${size%%/tmp/*}
tools/testing/selftests/net/setup_loopback.sh
+1 -1
tools/testing/selftests/net/setup_veth.sh
··· 11 11 local -r ns_mac="$4" 12 12 13 13 [[ -e /var/run/netns/"${ns_name}" ]] || ip netns add "${ns_name}" 14 - echo 100000 > "/sys/class/net/${ns_dev}/gro_flush_timeout" 14 + echo 1000000 > "/sys/class/net/${ns_dev}/gro_flush_timeout" 15 15 ip link set dev "${ns_dev}" netns "${ns_name}" mtu 65535 16 16 ip -netns "${ns_name}" link set dev "${ns_dev}" up 17 17
+10
tools/testing/selftests/net/tcp_ao/config
··· 1 + CONFIG_CRYPTO_HMAC=y 2 + CONFIG_CRYPTO_RMD160=y 3 + CONFIG_CRYPTO_SHA1=y 4 + CONFIG_IPV6_MULTIPLE_TABLES=y 5 + CONFIG_IPV6=y 6 + CONFIG_NET_L3_MASTER_DEV=y 7 + CONFIG_NET_VRF=y 8 + CONFIG_TCP_AO=y 9 + CONFIG_TCP_MD5SIG=y 10 + CONFIG_VETH=m
+26 -20
tools/testing/selftests/net/tcp_ao/key-management.c
··· 417 417 matches_vrf : 1, 418 418 is_current : 1, 419 419 is_rnext : 1, 420 - used_on_handshake : 1, 421 - used_after_accept : 1, 422 - used_on_client : 1; 420 + used_on_server_tx : 1, 421 + used_on_client_tx : 1, 422 + skip_counters_checks : 1; 423 423 }; 424 424 425 425 struct key_collection { ··· 609 609 addr = &this_ip_dest; 610 610 sndid = key->client_keyid; 611 611 rcvid = key->server_keyid; 612 - set_current = key->is_current; 613 - set_rnext = key->is_rnext; 612 + key->used_on_client_tx = set_current = key->is_current; 613 + key->used_on_server_tx = set_rnext = key->is_rnext; 614 614 } 615 615 616 616 if (test_add_key_cr(sk, key->password, key->len, 617 617 *addr, vrf, sndid, rcvid, key->maclen, 618 618 key->alg, set_current, set_rnext)) 619 619 test_key_error("setsockopt(TCP_AO_ADD_KEY)", key); 620 - if (set_current || set_rnext) 621 - key->used_on_handshake = 1; 622 620 #ifdef DEBUG 623 621 test_print("%s [%u/%u] key: { %s, %u:%u, %u, %u:%u:%u:%u (%u)}", 624 622 server ? "server" : "client", i, collection.nr_keys, ··· 638 640 for (i = 0; i < collection.nr_keys; i++) { 639 641 struct test_key *key = &collection.keys[i]; 640 642 uint8_t sndid, rcvid; 641 - bool was_used; 643 + bool rx_cnt_expected; 642 644 645 + if (key->skip_counters_checks) 646 + continue; 643 647 if (server) { 644 648 sndid = key->server_keyid; 645 649 rcvid = key->client_keyid; 646 - if (is_listen_sk) 647 - was_used = key->used_on_handshake; 648 - else 649 - was_used = key->used_after_accept; 650 + rx_cnt_expected = key->used_on_client_tx; 650 651 } else { 651 652 sndid = key->client_keyid; 652 653 rcvid = key->server_keyid; 653 - was_used = key->used_on_client; 654 + rx_cnt_expected = key->used_on_server_tx; 654 655 } 655 656 656 - test_tcp_ao_key_counters_cmp(tst_name, a, b, was_used, 657 + test_tcp_ao_key_counters_cmp(tst_name, a, b, 658 + rx_cnt_expected ? 
TEST_CNT_KEY_GOOD : 0, 657 659 sndid, rcvid); 658 660 } 659 661 test_tcp_ao_counters_free(a); ··· 841 843 synchronize_threads(); /* 4: verified => closed */ 842 844 close(sk); 843 845 844 - verify_counters(tst_name, true, false, begin, &end); 846 + verify_counters(tst_name, false, true, begin, &end); 845 847 synchronize_threads(); /* 5: counters */ 846 848 } 847 849 ··· 914 916 current_index = nr_keys - 1; 915 917 if (rnext_index < 0) 916 918 rnext_index = nr_keys - 1; 917 - collection.keys[current_index].used_on_handshake = 1; 918 - collection.keys[rnext_index].used_after_accept = 1; 919 - collection.keys[rnext_index].used_on_client = 1; 919 + collection.keys[current_index].used_on_client_tx = 1; 920 + collection.keys[rnext_index].used_on_server_tx = 1; 920 921 921 922 synchronize_threads(); /* 3: accepted => send data */ 922 923 if (test_client_verify(sk, msg_sz, msg_nr, TEST_TIMEOUT_SEC)) { ··· 1056 1059 test_error("Can't change the current key"); 1057 1060 if (test_client_verify(sk, msg_len, nr_packets, TEST_TIMEOUT_SEC)) 1058 1061 test_fail("verify failed"); 1059 - collection.keys[rotate_to_index].used_after_accept = 1; 1062 + /* There is a race here: between setting the current_key with 1063 + * setsockopt(TCP_AO_INFO) and starting to send some data - there 1064 + * might have been a segment received with the desired 1065 + * RNext_key set. In turn that would mean that the first outgoing 1066 + * segment will have the desired current_key (flipped back). 1067 + * Which is what the user/test wants. As it's racy, skip checking 1068 + * the counters, yet check what are the resulting current/rnext 1069 + * keys on both sides. 
1070 + */ 1071 + collection.keys[rotate_to_index].skip_counters_checks = 1; 1060 1072 1061 1073 end_client(tst_name, sk, nr_keys, current_index, rnext_index, &tmp); 1062 1074 } ··· 1095 1089 } 1096 1090 verify_current_rnext(tst_name, sk, -1, 1097 1091 collection.keys[i].server_keyid); 1098 - collection.keys[i].used_on_client = 1; 1092 + collection.keys[i].used_on_server_tx = 1; 1099 1093 synchronize_threads(); /* verify current/rnext */ 1100 1094 } 1101 1095 end_client(tst_name, sk, nr_keys, current_index, rnext_index, &tmp);
+8 -4
tools/testing/selftests/net/tcp_ao/lib/sock.c
··· 62 62 return -ETIMEDOUT; 63 63 } 64 64 65 - if (getsockopt(sk, SOL_SOCKET, SO_ERROR, &ret, &slen) || ret) 65 + if (getsockopt(sk, SOL_SOCKET, SO_ERROR, &ret, &slen)) 66 + return -errno; 67 + if (ret) 66 68 return -ret; 67 69 return 0; 68 70 } ··· 586 584 { 587 585 size_t buf_sz = msg_len * nr; 588 586 char *buf = alloca(buf_sz); 587 + ssize_t ret; 589 588 590 589 randomize_buffer(buf, buf_sz); 591 - if (test_client_loop(sk, buf, buf_sz, msg_len, timeout_sec) != buf_sz) 592 - return -1; 593 - return 0; 590 + ret = test_client_loop(sk, buf, buf_sz, msg_len, timeout_sec); 591 + if (ret < 0) 592 + return (int)ret; 593 + return ret != buf_sz ? -1 : 0; 594 594 }
+90 -48
tools/testing/selftests/net/tcp_ao/rst.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 - /* Author: Dmitry Safonov <dima@arista.com> */ 2 + /* 3 + * The test checks that both active and passive reset have correct TCP-AO 4 + * signature. An "active" reset (abort) here is procured from closing 5 + * listen() socket with non-accepted connections in the queue: 6 + * inet_csk_listen_stop() => inet_child_forget() => 7 + * => tcp_disconnect() => tcp_send_active_reset() 8 + * 9 + * The passive reset is quite hard to get on established TCP connections. 10 + * It could be procured from non-established states, but the synchronization 11 + * part from userspace in order to reliably get RST seems uneasy. 12 + * So, instead it's procured by corrupting SEQ number on TIMED-WAIT state. 13 + * 14 + * It's important to test both passive and active RST as they go through 15 + * different code-paths: 16 + * - tcp_send_active_reset() makes no-data skb, sends it with tcp_transmit_skb() 17 + * - tcp_v*_send_reset() create their reply skbs and send them with 18 + * ip_send_unicast_reply() 19 + * 20 + * In both cases TCP-AO signatures have to be correct, which is verified by 21 + * (1) checking that the TCP-AO connection was reset and (2) TCP-AO counters. 
22 + * 23 + * Author: Dmitry Safonov <dima@arista.com> 24 + */ 3 25 #include <inttypes.h> 4 26 #include "../../../../include/linux/kernel.h" 5 27 #include "aolib.h" 6 28 7 29 const size_t quota = 1000; 30 + const size_t packet_sz = 100; 8 31 /* 9 32 * Backlog == 0 means 1 connection in queue, see: 10 33 * commit 64a146513f8f ("[NET]: Revert incorrect accept queue...") ··· 80 57 if (setsockopt(sk, SOL_SOCKET, SO_LINGER, &sl, sizeof(sl))) 81 58 test_error("setsockopt(SO_LINGER)"); 82 59 close(sk); 83 - } 84 - 85 - static int test_wait_for_exception(int sk, time_t sec) 86 - { 87 - struct timeval tv = { .tv_sec = sec }; 88 - struct timeval *ptv = NULL; 89 - fd_set efds; 90 - int ret; 91 - 92 - FD_ZERO(&efds); 93 - FD_SET(sk, &efds); 94 - 95 - if (sec) 96 - ptv = &tv; 97 - 98 - errno = 0; 99 - ret = select(sk + 1, NULL, NULL, &efds, ptv); 100 - if (ret < 0) 101 - return -errno; 102 - return ret ? sk : 0; 103 60 } 104 61 105 62 static void test_server_active_rst(unsigned int port) ··· 158 155 test_fail("server returned %zd", bytes); 159 156 } 160 157 161 - synchronize_threads(); /* 3: chekpoint/restore the connection */ 158 + synchronize_threads(); /* 3: checkpoint the client */ 159 + synchronize_threads(); /* 4: close the server, creating twsk */ 162 160 if (test_get_tcp_ao_counters(sk, &ao2)) 163 161 test_error("test_get_tcp_ao_counters()"); 164 - 165 - synchronize_threads(); /* 4: terminate server + send more on client */ 166 - bytes = test_server_run(sk, quota, TEST_RETRANSMIT_SEC); 167 162 close(sk); 163 + 164 + synchronize_threads(); /* 5: restore the socket, send more data */ 168 165 test_tcp_ao_counters_cmp("passive RST server", &ao1, &ao2, TEST_CNT_GOOD); 169 166 170 - synchronize_threads(); /* 5: verified => closed */ 171 - close(sk); 167 + synchronize_threads(); /* 6: server exits */ 172 168 } 173 169 174 170 static void *server_fn(void *arg) ··· 286 284 test_error("test_wait_fds(): %d", err); 287 285 288 286 synchronize_threads(); /* 3: close listen socket */ 
289 - if (test_client_verify(sk[0], 100, quota / 100, TEST_TIMEOUT_SEC)) 287 + if (test_client_verify(sk[0], packet_sz, quota / packet_sz, TEST_TIMEOUT_SEC)) 290 288 test_fail("Failed to send data on connected socket"); 291 289 else 292 290 test_ok("Verified established tcp connection"); ··· 325 323 struct tcp_sock_state img; 326 324 sockaddr_af saddr; 327 325 int sk, err; 328 - socklen_t slen = sizeof(err); 329 326 330 327 sk = socket(test_family, SOCK_STREAM, IPPROTO_TCP); 331 328 if (sk < 0) ··· 338 337 test_error("failed to connect()"); 339 338 340 339 synchronize_threads(); /* 2: accepted => send data */ 341 - if (test_client_verify(sk, 100, quota / 100, TEST_TIMEOUT_SEC)) 340 + if (test_client_verify(sk, packet_sz, quota / packet_sz, TEST_TIMEOUT_SEC)) 342 341 test_fail("Failed to send data on connected socket"); 343 342 else 344 343 test_ok("Verified established tcp connection"); 345 344 346 - synchronize_threads(); /* 3: chekpoint/restore the connection */ 345 + synchronize_threads(); /* 3: checkpoint the client */ 347 346 test_enable_repair(sk); 348 347 test_sock_checkpoint(sk, &img, &saddr); 349 348 test_ao_checkpoint(sk, &ao_img); 350 - test_kill_sk(sk); 349 + test_disable_repair(sk); 351 350 352 - img.out.seq += quota; 351 + synchronize_threads(); /* 4: close the server, creating twsk */ 352 + 353 + /* 354 + * The "corruption" in SEQ has to be small enough to fit into TCP 355 + * window, see tcp_timewait_state_process() for out-of-window 356 + * segments. 357 + */ 358 + img.out.seq += 5; /* 5 is more noticeable in tcpdump than 1 */ 359 + 360 + /* 361 + * FIXME: This is kind-of ugly and dirty, but it works. 362 + * 363 + * At this moment, the server has close'ed(sk). 
364 + * The passive RST that is being targeted here is new data after 365 + * half-duplex close, see tcp_timewait_state_process() => TCP_TW_RST 366 + * 367 + * What is needed here is: 368 + * (1) wait for FIN from the server 369 + * (2) make sure that the ACK from the client went out 370 + * (3) make sure that the ACK was received and processed by the server 371 + * 372 + * Otherwise, the data that will be sent from "repaired" socket 373 + * post SEQ corruption may get to the server before it's in 374 + * TCP_FIN_WAIT2. 375 + * 376 + * (1) is easy with select()/poll() 377 + * (2) is possible by polling tcpi_state from TCP_INFO 378 + * (3) is quite complex: as server's socket was already closed, 379 + * probably the way to do it would be tcp-diag. 380 + */ 381 + sleep(TEST_RETRANSMIT_SEC); 382 + 383 + synchronize_threads(); /* 5: restore the socket, send more data */ 384 + test_kill_sk(sk); 353 385 354 386 sk = socket(test_family, SOCK_STREAM, IPPROTO_TCP); 355 387 if (sk < 0) ··· 400 366 test_disable_repair(sk); 401 367 test_sock_state_free(&img); 402 368 403 - synchronize_threads(); /* 4: terminate server + send more on client */ 404 - if (test_client_verify(sk, 100, quota / 100, 2 * TEST_TIMEOUT_SEC)) 405 - test_ok("client connection broken post-seq-adjust"); 369 + /* 370 + * This is how "passive reset" is acquired in this test from TCP_TW_RST: 371 + * 372 + * IP 10.0.254.1.7011 > 10.0.1.1.59772: Flags [P.], seq 901:1001, ack 1001, win 249, 373 + * options [tcp-ao keyid 100 rnextkeyid 100 mac 0x10217d6c36a22379086ef3b1], length 100 374 + * IP 10.0.254.1.7011 > 10.0.1.1.59772: Flags [F.], seq 1001, ack 1001, win 249, 375 + * options [tcp-ao keyid 100 rnextkeyid 100 mac 0x104ffc99b98c10a5298cc268], length 0 376 + * IP 10.0.1.1.59772 > 10.0.254.1.7011: Flags [.], ack 1002, win 251, 377 + * options [tcp-ao keyid 100 rnextkeyid 100 mac 0xe496dd4f7f5a8a66873c6f93,nop,nop,sack 1 {1001:1002}], length 0 378 + * IP 10.0.1.1.59772 > 10.0.254.1.7011: Flags [P.], seq 
1006:1106, ack 1001, win 251, 379 + * options [tcp-ao keyid 100 rnextkeyid 100 mac 0x1b5f3330fb23fbcd0c77d0ca], length 100 380 + * IP 10.0.254.1.7011 > 10.0.1.1.59772: Flags [R], seq 3215596252, win 0, 381 + * options [tcp-ao keyid 100 rnextkeyid 100 mac 0x0bcfbbf497bce844312304b2], length 0 382 + */ 383 + err = test_client_verify(sk, packet_sz, quota / packet_sz, 2 * TEST_TIMEOUT_SEC); 384 + /* Make sure that the connection was reset, not timed out */ 385 + if (err && err == -ECONNRESET) 386 + test_ok("client sock was passively reset post-seq-adjust"); 387 + else if (err) 388 + test_fail("client sock was not reset post-seq-adjust: %d", err); 406 389 else 407 - test_fail("client connection still works post-seq-adjust"); 408 - 409 - test_wait_for_exception(sk, TEST_TIMEOUT_SEC); 410 - 411 - if (getsockopt(sk, SOL_SOCKET, SO_ERROR, &err, &slen)) 412 - test_error("getsockopt()"); 413 - if (err != ECONNRESET && err != EPIPE) 414 - test_fail("client connection was not reset: %d", err); 415 - else 416 - test_ok("client connection was reset"); 390 + test_fail("client sock is still connected post-seq-adjust"); 417 391 418 392 if (test_get_tcp_ao_counters(sk, &ao2)) 419 393 test_error("test_get_tcp_ao_counters()"); 420 394 421 395 synchronize_threads(); /* 6: server exits */ 422 396 close(sk); 423 397 test_tcp_ao_counters_cmp("client passive RST", &ao1, &ao2, TEST_CNT_GOOD); 424 398 } ··· 452 410 453 411 int main(int argc, char *argv[]) 454 412 { 455 - test_init(15, server_fn, client_fn); 413 + test_init(14, server_fn, client_fn); 456 414 return 0; 457 415 }
+1
tools/testing/selftests/net/tcp_ao/settings
··· 1 + timeout=120
+2 -2
tools/testing/selftests/net/udpgro.sh
··· 7 7 8 8 readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)" 9 9 10 - BPF_FILE="../bpf/xdp_dummy.bpf.o" 10 + BPF_FILE="xdp_dummy.o" 11 11 12 12 # set global exit status, but never reset nonzero one. 13 13 check_err() ··· 197 197 } 198 198 199 199 if [ ! -f ${BPF_FILE} ]; then 200 - echo "Missing ${BPF_FILE}. Build bpf selftest first" 200 + echo "Missing ${BPF_FILE}. Run 'make' first" 201 201 exit -1 202 202 fi 203 203
+2 -2
tools/testing/selftests/net/udpgro_bench.sh
··· 7 7 8 8 readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)" 9 9 10 - BPF_FILE="../bpf/xdp_dummy.bpf.o" 10 + BPF_FILE="xdp_dummy.o" 11 11 12 12 cleanup() { 13 13 local -r jobs="$(jobs -p)" ··· 84 84 } 85 85 86 86 if [ ! -f ${BPF_FILE} ]; then 87 - echo "Missing ${BPF_FILE}. Build bpf selftest first" 87 + echo "Missing ${BPF_FILE}. Run 'make' first" 88 88 exit -1 89 89 fi 90 90
+3 -3
tools/testing/selftests/net/udpgro_frglist.sh
··· 7 7 8 8 readonly PEER_NS="ns-peer-$(mktemp -u XXXXXX)" 9 9 10 - BPF_FILE="../bpf/xdp_dummy.bpf.o" 10 + BPF_FILE="xdp_dummy.o" 11 11 12 12 cleanup() { 13 13 local -r jobs="$(jobs -p)" ··· 85 85 } 86 86 87 87 if [ ! -f ${BPF_FILE} ]; then 88 - echo "Missing ${BPF_FILE}. Build bpf selftest first" 88 + echo "Missing ${BPF_FILE}. Run 'make' first" 89 89 exit -1 90 90 fi 91 91 92 92 if [ ! -f nat6to4.o ]; then 93 - echo "Missing nat6to4 helper. Build bpf nat6to4.o selftest first" 93 + echo "Missing nat6to4 helper. Run 'make' first" 94 94 exit -1 95 95 fi 96 96
+5 -3
tools/testing/selftests/net/udpgro_fwd.sh
··· 1 1 #!/bin/bash 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - BPF_FILE="../bpf/xdp_dummy.bpf.o" 4 + source net_helper.sh 5 + 6 + BPF_FILE="xdp_dummy.o" 5 7 readonly BASE="ns-$(mktemp -u XXXXXX)" 6 8 readonly SRC=2 7 9 readonly DST=1 ··· 121 119 ip netns exec $NS_DST $ipt -A INPUT -p udp --dport 8000 122 120 ip netns exec $NS_DST ./udpgso_bench_rx -C 1000 -R 10 -n 10 -l 1300 $rx_args & 123 121 local spid=$! 124 - sleep 0.1 122 + wait_local_port_listen "$NS_DST" 8000 udp 125 123 ip netns exec $NS_SRC ./udpgso_bench_tx $family -M 1 -s 13000 -S 1300 -D $dst 126 124 local retc=$? 127 125 wait $spid ··· 170 168 ip netns exec $NS_DST bash -c "echo 2 > /sys/class/net/veth$DST/queues/rx-0/rps_cpus" 171 169 ip netns exec $NS_DST taskset 0x2 ./udpgso_bench_rx -C 1000 -R 10 & 172 170 local spid=$! 173 - sleep 0.1 171 + wait_local_port_listen "$NS_DST" 8000 udp 174 172 ip netns exec $NS_SRC taskset 0x1 ./udpgso_bench_tx $family -l 3 -S 1300 -D $dst 175 173 local retc=$? 176 174 wait $spid
+2 -2
tools/testing/selftests/net/veth.sh
··· 1 1 #!/bin/sh 2 2 # SPDX-License-Identifier: GPL-2.0 3 3 4 - BPF_FILE="../bpf/xdp_dummy.bpf.o" 4 + BPF_FILE="xdp_dummy.o" 5 5 readonly STATS="$(mktemp -p /tmp ns-XXXXXX)" 6 6 readonly BASE=`basename $STATS` 7 7 readonly SRC=2 ··· 218 218 done 219 219 220 220 if [ ! -f ${BPF_FILE} ]; then 221 - echo "Missing ${BPF_FILE}. Build bpf selftest first" 221 + echo "Missing ${BPF_FILE}. Run 'make' first" 222 222 exit 1 223 223 fi 224 224
+13
tools/testing/selftests/net/xdp_dummy.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #define KBUILD_MODNAME "xdp_dummy" 4 + #include <linux/bpf.h> 5 + #include <bpf/bpf_helpers.h> 6 + 7 + SEC("xdp") 8 + int xdp_dummy_prog(struct xdp_md *ctx) 9 + { 10 + return XDP_PASS; 11 + } 12 + 13 + char _license[] SEC("license") = "GPL";
+12 -2
tools/testing/selftests/rseq/basic_percpu_ops_test.c
··· 24 24 { 25 25 return rseq_mm_cid_available(); 26 26 } 27 + static 28 + bool rseq_use_cpu_index(void) 29 + { 30 + return false; /* Use mm_cid */ 31 + } 27 32 #else 28 33 # define RSEQ_PERCPU RSEQ_PERCPU_CPU_ID 29 34 static ··· 40 35 bool rseq_validate_cpu_id(void) 41 36 { 42 37 return rseq_current_cpu_raw() >= 0; 38 + } 39 + static 40 + bool rseq_use_cpu_index(void) 41 + { 42 + return true; /* Use cpu_id as index. */ 43 43 } 44 44 #endif 45 45 ··· 284 274 /* Generate list entries for every usable cpu. */ 285 275 sched_getaffinity(0, sizeof(allowed_cpus), &allowed_cpus); 286 276 for (i = 0; i < CPU_SETSIZE; i++) { 287 - if (!CPU_ISSET(i, &allowed_cpus)) 277 + if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus)) 288 278 continue; 289 279 for (j = 1; j <= 100; j++) { 290 280 struct percpu_list_node *node; ··· 309 299 for (i = 0; i < CPU_SETSIZE; i++) { 310 300 struct percpu_list_node *node; 311 301 312 - if (!CPU_ISSET(i, &allowed_cpus)) 302 + if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus)) 313 303 continue; 314 304 315 305 while ((node = __percpu_list_pop(&list, i))) {
+16 -6
tools/testing/selftests/rseq/param_test.c
··· 288 288 { 289 289 return rseq_mm_cid_available(); 290 290 } 291 + static 292 + bool rseq_use_cpu_index(void) 293 + { 294 + return false; /* Use mm_cid */ 295 + } 291 296 # ifdef TEST_MEMBARRIER 292 297 /* 293 298 * Membarrier does not currently support targeting a mm_cid, so ··· 316 311 bool rseq_validate_cpu_id(void) 317 312 { 318 313 return rseq_current_cpu_raw() >= 0; 314 + } 315 + static 316 + bool rseq_use_cpu_index(void) 317 + { 318 + return true; /* Use cpu_id as index. */ 319 319 } 320 320 # ifdef TEST_MEMBARRIER 321 321 static ··· 725 715 /* Generate list entries for every usable cpu. */ 726 716 sched_getaffinity(0, sizeof(allowed_cpus), &allowed_cpus); 727 717 for (i = 0; i < CPU_SETSIZE; i++) { 728 - if (!CPU_ISSET(i, &allowed_cpus)) 718 + if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus)) 729 719 continue; 730 720 for (j = 1; j <= 100; j++) { 731 721 struct percpu_list_node *node; ··· 762 752 for (i = 0; i < CPU_SETSIZE; i++) { 763 753 struct percpu_list_node *node; 764 754 765 - if (!CPU_ISSET(i, &allowed_cpus)) 755 + if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus)) 766 756 continue; 767 757 768 758 while ((node = __percpu_list_pop(&list, i))) { ··· 912 902 /* Generate list entries for every usable cpu. */ 913 903 sched_getaffinity(0, sizeof(allowed_cpus), &allowed_cpus); 914 904 for (i = 0; i < CPU_SETSIZE; i++) { 915 - if (!CPU_ISSET(i, &allowed_cpus)) 905 + if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus)) 916 906 continue; 917 907 /* Worse-case is every item in same CPU. */ 918 908 buffer.c[i].array = ··· 962 952 for (i = 0; i < CPU_SETSIZE; i++) { 963 953 struct percpu_buffer_node *node; 964 954 965 - if (!CPU_ISSET(i, &allowed_cpus)) 955 + if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus)) 966 956 continue; 967 957 968 958 while ((node = __percpu_buffer_pop(&buffer, i))) { ··· 1123 1113 /* Generate list entries for every usable cpu. 
*/ 1124 1114 sched_getaffinity(0, sizeof(allowed_cpus), &allowed_cpus); 1125 1115 for (i = 0; i < CPU_SETSIZE; i++) { 1126 - if (!CPU_ISSET(i, &allowed_cpus)) 1116 + if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus)) 1127 1117 continue; 1128 1118 /* Worse-case is every item in same CPU. */ 1129 1119 buffer.c[i].array = ··· 1170 1160 for (i = 0; i < CPU_SETSIZE; i++) { 1171 1161 struct percpu_memcpy_buffer_node item; 1172 1162 1173 - if (!CPU_ISSET(i, &allowed_cpus)) 1163 + if (rseq_use_cpu_index() && !CPU_ISSET(i, &allowed_cpus)) 1174 1164 continue; 1175 1165 1176 1166 while (__percpu_memcpy_buffer_pop(&buffer, &item, i)) {
+64 -40
tools/testing/selftests/seccomp/seccomp_benchmark.c
··· 38 38 i *= 1000000000ULL; 39 39 i += finish.tv_nsec - start.tv_nsec; 40 40 41 - printf("%lu.%09lu - %lu.%09lu = %llu (%.1fs)\n", 42 - finish.tv_sec, finish.tv_nsec, 43 - start.tv_sec, start.tv_nsec, 44 - i, (double)i / 1000000000.0); 41 + ksft_print_msg("%lu.%09lu - %lu.%09lu = %llu (%.1fs)\n", 42 + finish.tv_sec, finish.tv_nsec, 43 + start.tv_sec, start.tv_nsec, 44 + i, (double)i / 1000000000.0); 45 45 46 46 return i; 47 47 } ··· 53 53 pid_t pid, ret; 54 54 int seconds = 15; 55 55 56 - printf("Calibrating sample size for %d seconds worth of syscalls ...\n", seconds); 56 + ksft_print_msg("Calibrating sample size for %d seconds worth of syscalls ...\n", seconds); 57 57 58 58 samples = 0; 59 59 pid = getpid(); ··· 98 98 } 99 99 100 100 long compare(const char *name_one, const char *name_eval, const char *name_two, 101 - unsigned long long one, bool (*eval)(int, int), unsigned long long two) 101 + unsigned long long one, bool (*eval)(int, int), unsigned long long two, 102 + bool skip) 102 103 { 103 104 bool good; 104 105 105 - printf("\t%s %s %s (%lld %s %lld): ", name_one, name_eval, name_two, 106 - (long long)one, name_eval, (long long)two); 106 + if (skip) { 107 + ksft_test_result_skip("%s %s %s\n", name_one, name_eval, 108 + name_two); 109 + return 0; 110 + } 111 + 112 + ksft_print_msg("\t%s %s %s (%lld %s %lld): ", name_one, name_eval, name_two, 113 + (long long)one, name_eval, (long long)two); 107 114 if (one > INT_MAX) { 108 - printf("Miscalculation! Measurement went negative: %lld\n", (long long)one); 109 - return 1; 115 + ksft_print_msg("Miscalculation! Measurement went negative: %lld\n", (long long)one); 116 + good = false; 117 + goto out; 110 118 } 111 119 if (two > INT_MAX) { 112 - printf("Miscalculation! Measurement went negative: %lld\n", (long long)two); 113 - return 1; 120 + ksft_print_msg("Miscalculation! 
Measurement went negative: %lld\n", (long long)two); 121 + good = false; 122 + goto out; 114 123 } 115 124 116 125 good = eval(one, two); 117 126 printf("%s\n", good ? "✔️" : "❌"); 127 + 128 + out: 129 + ksft_test_result(good, "%s %s %s\n", name_one, name_eval, name_two); 118 130 119 131 return good ? 0 : 1; 120 132 } ··· 154 142 unsigned long long samples, calc; 155 143 unsigned long long native, filter1, filter2, bitmap1, bitmap2; 156 144 unsigned long long entry, per_filter1, per_filter2; 145 + bool skip = false; 157 146 158 147 setbuf(stdout, NULL); 159 148 160 - printf("Running on:\n"); 149 + ksft_print_header(); 150 + ksft_set_plan(7); 151 + 152 + ksft_print_msg("Running on:\n"); 153 + ksft_print_msg(""); 161 154 system("uname -a"); 162 155 163 - printf("Current BPF sysctl settings:\n"); 156 + ksft_print_msg("Current BPF sysctl settings:\n"); 164 157 /* Avoid using "sysctl" which may not be installed. */ 158 + ksft_print_msg(""); 165 159 system("grep -H . /proc/sys/net/core/bpf_jit_enable"); 160 + ksft_print_msg(""); 166 161 system("grep -H . 
/proc/sys/net/core/bpf_jit_harden"); 167 162 168 163 if (argc > 1) ··· 177 158 else 178 159 samples = calibrate(); 179 160 180 - printf("Benchmarking %llu syscalls...\n", samples); 161 + ksft_print_msg("Benchmarking %llu syscalls...\n", samples); 181 162 182 163 /* Native call */ 183 164 native = timing(CLOCK_PROCESS_CPUTIME_ID, samples) / samples; 184 - printf("getpid native: %llu ns\n", native); 165 + ksft_print_msg("getpid native: %llu ns\n", native); 185 166 186 167 ret = prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0); 187 168 assert(ret == 0); ··· 191 172 assert(ret == 0); 192 173 193 174 bitmap1 = timing(CLOCK_PROCESS_CPUTIME_ID, samples) / samples; 194 - printf("getpid RET_ALLOW 1 filter (bitmap): %llu ns\n", bitmap1); 175 + ksft_print_msg("getpid RET_ALLOW 1 filter (bitmap): %llu ns\n", bitmap1); 195 176 196 177 /* Second filter resulting in a bitmap */ 197 178 ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &bitmap_prog); 198 179 assert(ret == 0); 199 180 200 181 bitmap2 = timing(CLOCK_PROCESS_CPUTIME_ID, samples) / samples; 201 - printf("getpid RET_ALLOW 2 filters (bitmap): %llu ns\n", bitmap2); 182 + ksft_print_msg("getpid RET_ALLOW 2 filters (bitmap): %llu ns\n", bitmap2); 202 183 203 184 /* Third filter, can no longer be converted to bitmap */ 204 185 ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog); 205 186 assert(ret == 0); 206 187 207 188 filter1 = timing(CLOCK_PROCESS_CPUTIME_ID, samples) / samples; 208 - printf("getpid RET_ALLOW 3 filters (full): %llu ns\n", filter1); 189 + ksft_print_msg("getpid RET_ALLOW 3 filters (full): %llu ns\n", filter1); 209 190 210 191 /* Fourth filter, can not be converted to bitmap because of filter 3 */ 211 192 ret = prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &bitmap_prog); 212 193 assert(ret == 0); 213 194 214 195 filter2 = timing(CLOCK_PROCESS_CPUTIME_ID, samples) / samples; 215 - printf("getpid RET_ALLOW 4 filters (full): %llu ns\n", filter2); 196 + ksft_print_msg("getpid RET_ALLOW 4 filters (full): %llu ns\n", 
filter2); 216 197 217 198 /* Estimations */ 218 199 #define ESTIMATE(fmt, var, what) do { \ 219 200 var = (what); \ 220 - printf("Estimated " fmt ": %llu ns\n", var); \ 221 - if (var > INT_MAX) \ 222 - goto more_samples; \ 201 + ksft_print_msg("Estimated " fmt ": %llu ns\n", var); \ 202 + if (var > INT_MAX) { \ 203 + skip = true; \ 204 + ret |= 1; \ 205 + } \ 223 206 } while (0) 224 207 225 208 ESTIMATE("total seccomp overhead for 1 bitmapped filter", calc, ··· 239 218 ESTIMATE("seccomp per-filter overhead (filters / 4)", per_filter2, 240 219 (filter2 - native - entry) / 4); 241 220 242 - printf("Expectations:\n"); 243 - ret |= compare("native", "≤", "1 bitmap", native, le, bitmap1); 244 - bits = compare("native", "≤", "1 filter", native, le, filter1); 221 + ksft_print_msg("Expectations:\n"); 222 + ret |= compare("native", "≤", "1 bitmap", native, le, bitmap1, 223 + skip); 224 + bits = compare("native", "≤", "1 filter", native, le, filter1, 225 + skip); 245 226 if (bits) 246 - goto more_samples; 227 + skip = true; 247 228 248 229 ret |= compare("per-filter (last 2 diff)", "≈", "per-filter (filters / 4)", 249 - per_filter1, approx, per_filter2); 230 + per_filter1, approx, per_filter2, skip); 250 231 251 232 bits = compare("1 bitmapped", "≈", "2 bitmapped", 252 - bitmap1 - native, approx, bitmap2 - native); 233 + bitmap1 - native, approx, bitmap2 - native, skip); 253 234 if (bits) { 254 - printf("Skipping constant action bitmap expectations: they appear unsupported.\n"); 255 - goto out; 235 + ksft_print_msg("Skipping constant action bitmap expectations: they appear unsupported.\n"); 236 + skip = true; 256 237 } 257 238 258 - ret |= compare("entry", "≈", "1 bitmapped", entry, approx, bitmap1 - native); 259 - ret |= compare("entry", "≈", "2 bitmapped", entry, approx, bitmap2 - native); 239 + ret |= compare("entry", "≈", "1 bitmapped", entry, approx, 240 + bitmap1 - native, skip); 241 + ret |= compare("entry", "≈", "2 bitmapped", entry, approx, 242 + bitmap2 - native, 
skip); 260 243 ret |= compare("native + entry + (per filter * 4)", "≈", "4 filters total", 261 - entry + (per_filter1 * 4) + native, approx, filter2); 262 - if (ret == 0) 263 - goto out; 244 + entry + (per_filter1 * 4) + native, approx, filter2, 245 + skip); 264 246 265 - more_samples: 266 - printf("Saw unexpected benchmark result. Try running again with more samples?\n"); 267 - out: 268 - return 0; 247 + if (ret) 248 + ksft_print_msg("Saw unexpected benchmark result. Try running again with more samples?\n"); 249 + 250 + ksft_finished(); 269 251 }