Linux kernel mirror (for testing): https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge 6.11-rc7 into usb-next

We need the USB fixes in here as well, and this also resolves the merge
conflict in:
drivers/usb/typec/ucsi/ucsi.c

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

+3189 -1911
+2
.mailmap
···
60 60 Andi Kleen <ak@linux.intel.com> <ak@suse.de>
61 61 Andi Shyti <andi@etezian.org> <andi.shyti@samsung.com>
62 62 Andreas Herrmann <aherrman@de.ibm.com>
63 + Andreas Hindborg <a.hindborg@kernel.org> <a.hindborg@samsung.com>
63 64 Andrej Shadura <andrew.shadura@collabora.co.uk>
64 65 Andrej Shadura <andrew@shadura.me> <andrew@beldisplaytech.com>
65 66 Andrew Morton <akpm@linux-foundation.org>
···
270 269 Jan Glauber <jan.glauber@gmail.com> <jang@de.ibm.com>
271 270 Jan Glauber <jan.glauber@gmail.com> <jang@linux.vnet.ibm.com>
272 271 Jan Glauber <jan.glauber@gmail.com> <jglauber@cavium.com>
272 + Jan Kuliga <jtkuliga.kdev@gmail.com> <jankul@alatek.krakow.pl>
273 273 Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@linux.intel.com>
274 274 Jarkko Sakkinen <jarkko@kernel.org> <jarkko@profian.com>
275 275 Jarkko Sakkinen <jarkko@kernel.org> <jarkko.sakkinen@tuni.fi>
+19 -14
Documentation/ABI/testing/sysfs-timecard
···
258 258 the estimated point where the FPGA latches the PHC time. This
259 259 value may be changed by writing an unsigned integer.
260 260
261 - What: /sys/class/timecard/ocpN/ttyGNSS
262 - What: /sys/class/timecard/ocpN/ttyGNSS2
263 - Date: September 2021
264 - Contact: Jonathan Lemon <jonathan.lemon@gmail.com>
265 - Description: These optional attributes link to the TTY serial ports
266 - associated with the GNSS devices.
261 + What: /sys/class/timecard/ocpN/tty
262 + Date: August 2024
263 + Contact: Vadim Fedorenko <vadim.fedorenko@linux.dev>
264 + Description: (RO) Directory containing the sysfs nodes for TTY attributes
267 265
268 - What: /sys/class/timecard/ocpN/ttyMAC
269 - Date: September 2021
266 + What: /sys/class/timecard/ocpN/tty/ttyGNSS
267 + What: /sys/class/timecard/ocpN/tty/ttyGNSS2
268 + Date: August 2024
270 269 Contact: Jonathan Lemon <jonathan.lemon@gmail.com>
271 - Description: This optional attribute links to the TTY serial port
272 - associated with the Miniature Atomic Clock.
270 + Description: (RO) These optional attributes contain names of the TTY serial
271 + ports associated with the GNSS devices.
273 272
274 - What: /sys/class/timecard/ocpN/ttyNMEA
275 - Date: September 2021
273 + What: /sys/class/timecard/ocpN/tty/ttyMAC
274 + Date: August 2024
276 275 Contact: Jonathan Lemon <jonathan.lemon@gmail.com>
277 - Description: This optional attribute links to the TTY serial port
278 - which outputs the PHC time in NMEA ZDA format.
276 + Description: (RO) This optional attribute contains name of the TTY serial
277 + port associated with the Miniature Atomic Clock.
278 +
279 + What: /sys/class/timecard/ocpN/tty/ttyNMEA
280 + Date: August 2024
281 + Contact: Jonathan Lemon <jonathan.lemon@gmail.com>
282 + Description: (RO) This optional attribute contains name of the TTY serial
283 + port which outputs the PHC time in NMEA ZDA format.
279 284
280 285 What: /sys/class/timecard/ocpN/utc_tai_offset
281 286 Date: September 2021
+4 -3
Documentation/admin-guide/cgroup-v2.rst
···
1717 1717 entries fault back in or are written out to disk.
1718 1718
1719 1719 memory.zswap.writeback
1720 - A read-write single value file. The default value is "1". The
1721 - initial value of the root cgroup is 1, and when a new cgroup is
1722 - created, it inherits the current value of its parent.
1720 + A read-write single value file. The default value is "1".
1721 + Note that this setting is hierarchical, i.e. the writeback would be
1722 + implicitly disabled for child cgroups if the upper hierarchy
1723 + does so.
1723 1724
1724 1725 When this is set to 0, all swapping attempts to swapping devices
1725 1726 are disabled. This included both zswap writebacks, and swapping due
-16
Documentation/arch/riscv/vm-layout.rst
···
134 134 ffffffff00000000 | -4 GB | ffffffff7fffffff | 2 GB | modules, BPF
135 135 ffffffff80000000 | -2 GB | ffffffffffffffff | 2 GB | kernel
136 136 __________________|____________|__________________|_________|____________________________________________________________
137 -
138 -
139 - Userspace VAs
140 - --------------------
141 - To maintain compatibility with software that relies on the VA space with a
142 - maximum of 48 bits the kernel will, by default, return virtual addresses to
143 - userspace from a 48-bit range (sv48). This default behavior is achieved by
144 - passing 0 into the hint address parameter of mmap. On CPUs with an address space
145 - smaller than sv48, the CPU maximum supported address space will be the default.
146 -
147 - Software can "opt-in" to receiving VAs from another VA space by providing
148 - a hint address to mmap. When a hint address is passed to mmap, the returned
149 - address will never use more bits than the hint address. For example, if a hint
150 - address of `1 << 40` is passed to mmap, a valid returned address will never use
151 - bits 41 through 63. If no mappable addresses are available in that range, mmap
152 - will return `MAP_FAILED`.
+11 -4
Documentation/devicetree/bindings/display/panel/wl-355608-a8.yaml → Documentation/devicetree/bindings/display/panel/anbernic,rg35xx-plus-panel.yaml
···
1 1 # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
2 2 %YAML 1.2
3 3 ---
4 - $id: http://devicetree.org/schemas/display/panel/wl-355608-a8.yaml#
4 + $id: http://devicetree.org/schemas/display/panel/anbernic,rg35xx-plus-panel.yaml#
5 5 $schema: http://devicetree.org/meta-schemas/core.yaml#
6 6
7 - title: WL-355608-A8 3.5" (640x480 pixels) 24-bit IPS LCD panel
7 + title: Anbernic RG35XX series (WL-355608-A8) 3.5" 640x480 24-bit IPS LCD panel
8 8
9 9 maintainers:
10 10 - Ryan Walklin <ryan@testtoast.com>
···
15 15
16 16 properties:
17 17 compatible:
18 - const: wl-355608-a8
18 + oneOf:
19 + - const: anbernic,rg35xx-plus-panel
20 + - items:
21 + - enum:
22 + - anbernic,rg35xx-2024-panel
23 + - anbernic,rg35xx-h-panel
24 + - anbernic,rg35xx-sp-panel
25 + - const: anbernic,rg35xx-plus-panel
19 26
20 27 reg:
21 28 maxItems: 1
···
47 40 #size-cells = <0>;
48 41
49 42 panel@0 {
50 - compatible = "wl-355608-a8";
43 + compatible = "anbernic,rg35xx-plus-panel";
51 44 reg = <0>;
52 45
53 46 spi-3wire;
+1 -1
Documentation/devicetree/bindings/nvmem/xlnx,zynqmp-nvmem.yaml
···
28 28
29 29 examples:
30 30 - |
31 - nvmem {
31 + soc-nvmem {
32 32 compatible = "xlnx,zynqmp-nvmem-fw";
33 33 nvmem-layout {
34 34 compatible = "fixed-layout";
+16
Documentation/process/maintainer-netdev.rst
···
375 375 your code follow the most recent guidelines, so that eventually all code
376 376 in the domain of netdev is in the preferred format.
377 377
378 + Using device-managed and cleanup.h constructs
379 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
380 +
381 + Netdev remains skeptical about promises of all "auto-cleanup" APIs,
382 + including even ``devm_`` helpers, historically. They are not the preferred
383 + style of implementation, merely an acceptable one.
384 +
385 + Use of ``guard()`` is discouraged within any function longer than 20 lines,
386 + ``scoped_guard()`` is considered more readable. Using normal lock/unlock is
387 + still (weakly) preferred.
388 +
389 + Low level cleanup constructs (such as ``__free()``) can be used when building
390 + APIs and helpers, especially scoped iterators. However, direct use of
391 + ``__free()`` within networking core and drivers is discouraged.
392 + Similar guidance applies to declaring variables mid-function.
393 +
378 394 Resending after review
379 395 ~~~~~~~~~~~~~~~~~~~~~~
380 396
+19 -19
Documentation/rust/coding-guidelines.rst
···
145 145 This example showcases a few ``rustdoc`` features and some conventions followed
146 146 in the kernel:
147 147
148 - - The first paragraph must be a single sentence briefly describing what
149 - the documented item does. Further explanations must go in extra paragraphs.
148 + - The first paragraph must be a single sentence briefly describing what
149 + the documented item does. Further explanations must go in extra paragraphs.
150 150
151 - - Unsafe functions must document their safety preconditions under
152 - a ``# Safety`` section.
151 + - Unsafe functions must document their safety preconditions under
152 + a ``# Safety`` section.
153 153
154 - - While not shown here, if a function may panic, the conditions under which
155 - that happens must be described under a ``# Panics`` section.
154 + - While not shown here, if a function may panic, the conditions under which
155 + that happens must be described under a ``# Panics`` section.
156 156
157 - Please note that panicking should be very rare and used only with a good
158 - reason. In almost all cases, a fallible approach should be used, typically
159 - returning a ``Result``.
157 + Please note that panicking should be very rare and used only with a good
158 + reason. In almost all cases, a fallible approach should be used, typically
159 + returning a ``Result``.
160 160
161 - - If providing examples of usage would help readers, they must be written in
162 - a section called ``# Examples``.
161 + - If providing examples of usage would help readers, they must be written in
162 + a section called ``# Examples``.
163 163
164 - - Rust items (functions, types, constants...) must be linked appropriately
165 - (``rustdoc`` will create a link automatically).
164 + - Rust items (functions, types, constants...) must be linked appropriately
165 + (``rustdoc`` will create a link automatically).
166 166
167 - - Any ``unsafe`` block must be preceded by a ``// SAFETY:`` comment
168 - describing why the code inside is sound.
167 + - Any ``unsafe`` block must be preceded by a ``// SAFETY:`` comment
168 + describing why the code inside is sound.
169 169
170 - While sometimes the reason might look trivial and therefore unneeded,
171 - writing these comments is not just a good way of documenting what has been
172 - taken into account, but most importantly, it provides a way to know that
173 - there are no *extra* implicit constraints.
170 + While sometimes the reason might look trivial and therefore unneeded,
171 + writing these comments is not just a good way of documenting what has been
172 + taken into account, but most importantly, it provides a way to know that
173 + there are no *extra* implicit constraints.
174 174
175 175 To learn more about how to write documentation for Rust and extra features,
176 176 please take a look at the ``rustdoc`` book at:
+3 -3
Documentation/rust/quick-start.rst
···
305 305 is the toolchain does not support Rust's new v0 mangling scheme yet.
306 306 There are a few ways out:
307 307
308 - - Install a newer release (GDB >= 10.2, Binutils >= 2.36).
308 + - Install a newer release (GDB >= 10.2, Binutils >= 2.36).
309 309
310 - - Some versions of GDB (e.g. vanilla GDB 10.1) are able to use
311 - the pre-demangled names embedded in the debug info (``CONFIG_DEBUG_INFO``).
310 + - Some versions of GDB (e.g. vanilla GDB 10.1) are able to use
311 + the pre-demangled names embedded in the debug info (``CONFIG_DEBUG_INFO``).
+18 -5
MAINTAINERS
···
3868 3868 F: lib/sbitmap.c
3869 3869
3870 3870 BLOCK LAYER DEVICE DRIVER API [RUST]
3871 - M: Andreas Hindborg <a.hindborg@samsung.com>
3871 + M: Andreas Hindborg <a.hindborg@kernel.org>
3872 3872 R: Boqun Feng <boqun.feng@gmail.com>
3873 3873 L: linux-block@vger.kernel.org
3874 3874 L: rust-for-linux@vger.kernel.org
···
5956 5956 CW1200 WLAN driver
5957 5957 S: Orphan
5958 5958 F: drivers/net/wireless/st/cw1200/
5959 + F: include/linux/platform_data/net-cw1200.h
5959 5960
5960 5961 CX18 VIDEO4LINUX DRIVER
5961 5962 M: Andy Walls <awalls@md.metrocast.net>
···
7458 7457 T: git https://gitlab.freedesktop.org/drm/misc/kernel.git
7459 7458 F: Documentation/devicetree/bindings/display/bridge/
7460 7459 F: drivers/gpu/drm/bridge/
7460 + F: drivers/gpu/drm/display/drm_bridge_connector.c
7461 7461 F: drivers/gpu/drm/drm_bridge.c
7462 - F: drivers/gpu/drm/drm_bridge_connector.c
7463 7462 F: include/drm/drm_bridge.h
7464 7463 F: include/drm/drm_bridge_connector.h
7465 7464
···
8865 8864 FREESCALE DSPI DRIVER
8866 8865 M: Vladimir Oltean <olteanv@gmail.com>
8867 8866 L: linux-spi@vger.kernel.org
8867 + L: imx@lists.linux.dev
8868 8868 S: Maintained
8869 8869 F: Documentation/devicetree/bindings/spi/fsl,dspi*.yaml
8870 8870 F: drivers/spi/spi-fsl-dspi.c
···
8950 8948 F: Documentation/devicetree/bindings/i2c/i2c-imx-lpi2c.yaml
8951 8949 F: drivers/i2c/busses/i2c-imx-lpi2c.c
8952 8950
8951 + FREESCALE IMX LPSPI DRIVER
8952 + M: Frank Li <Frank.Li@nxp.com>
8953 + L: linux-spi@vger.kernel.org
8954 + L: imx@lists.linux.dev
8955 + S: Maintained
8956 + F: Documentation/devicetree/bindings/spi/spi-fsl-lpspi.yaml
8957 + F: drivers/spi/spi-fsl-lpspi.c
8958 +
8953 8959 FREESCALE MPC I2C DRIVER
8954 8960 M: Chris Packham <chris.packham@alliedtelesis.co.nz>
8955 8961 L: linux-i2c@vger.kernel.org
···
8994 8984 FREESCALE QUAD SPI DRIVER
8995 8985 M: Han Xu <han.xu@nxp.com>
8996 8986 L: linux-spi@vger.kernel.org
8987 + L: imx@lists.linux.dev
8997 8988 S: Maintained
8998 8989 F: Documentation/devicetree/bindings/spi/fsl,spi-fsl-qspi.yaml
8999 8990 F: drivers/spi/spi-fsl-qspi.c
···
15906 15895 F: include/uapi/linux/if_*
15907 15896 F: include/uapi/linux/netdev*
15908 15897 F: tools/testing/selftests/drivers/net/
15898 + X: Documentation/devicetree/bindings/net/bluetooth/
15899 + X: Documentation/devicetree/bindings/net/wireless/
15909 15900 X: drivers/net/wireless/
15910 15901
15911 15902 NETWORKING DRIVERS (WIRELESS)
···
16421 16408 M: Haibo Chen <haibo.chen@nxp.com>
16422 16409 R: Yogesh Gaur <yogeshgaur.83@gmail.com>
16423 16410 L: linux-spi@vger.kernel.org
16411 + L: imx@lists.linux.dev
16424 16412 S: Maintained
16425 16413 F: Documentation/devicetree/bindings/spi/spi-nxp-fspi.yaml
16426 16414 F: drivers/spi/spi-nxp-fspi.c
···
17133 17119
17134 17120 OPENCOMPUTE PTP CLOCK DRIVER
17135 17121 M: Jonathan Lemon <jonathan.lemon@gmail.com>
17136 - M: Vadim Fedorenko <vadfed@linux.dev>
17122 + M: Vadim Fedorenko <vadim.fedorenko@linux.dev>
17137 17123 L: netdev@vger.kernel.org
17138 17124 S: Maintained
17139 17125 F: drivers/ptp/ptp_ocp.c
···
19946 19932 RUST
19947 19933 M: Miguel Ojeda <ojeda@kernel.org>
19948 19934 M: Alex Gaynor <alex.gaynor@gmail.com>
19949 - M: Wedson Almeida Filho <wedsonaf@gmail.com>
19950 19935 R: Boqun Feng <boqun.feng@gmail.com>
19951 19936 R: Gary Guo <gary@garyguo.net>
19952 19937 R: Björn Roy Baron <bjorn3_gh@protonmail.com>
19953 19938 R: Benno Lossin <benno.lossin@proton.me>
19954 - R: Andreas Hindborg <a.hindborg@samsung.com>
19939 + R: Andreas Hindborg <a.hindborg@kernel.org>
19955 19940 R: Alice Ryhl <aliceryhl@google.com>
19956 19941 L: rust-for-linux@vger.kernel.org
19957 19942 S: Supported
+2 -1
Makefile
···
2 2 VERSION = 6
3 3 PATCHLEVEL = 11
4 4 SUBLEVEL = 0
5 - EXTRAVERSION = -rc6
5 + EXTRAVERSION = -rc7
6 6 NAME = Baby Opossum Posse
7 7
8 8 # *DOCUMENTATION*
···
445 445 # host programs.
446 446 export rust_common_flags := --edition=2021 \
447 447 -Zbinary_dep_depinfo=y \
448 + -Astable_features \
448 449 -Dunsafe_op_in_unsafe_fn \
449 450 -Dnon_ascii_idents \
450 451 -Wrust_2018_idioms \
+1 -1
arch/arm/Kconfig
···
117 117 select HAVE_KERNEL_XZ
118 118 select HAVE_KPROBES if !XIP_KERNEL && !CPU_ENDIAN_BE32 && !CPU_V7M
119 119 select HAVE_KRETPROBES if HAVE_KPROBES
120 - select HAVE_LD_DEAD_CODE_DATA_ELIMINATION
120 + select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if (LD_VERSION >= 23600 || LD_IS_LLD)
121 121 select HAVE_MOD_ARCH_SPECIFIC
122 122 select HAVE_NMI
123 123 select HAVE_OPTPROBES if !THUMB2_KERNEL
+9 -3
arch/arm/kernel/entry-armv.S
···
29 29 #include "entry-header.S"
30 30 #include <asm/probes.h>
31 31
32 + #ifdef CONFIG_HAVE_LD_DEAD_CODE_DATA_ELIMINATION
33 + #define RELOC_TEXT_NONE .reloc .text, R_ARM_NONE, .
34 + #else
35 + #define RELOC_TEXT_NONE
36 + #endif
37 +
32 38 /*
33 39 * Interrupt handling.
34 40 */
···
1071 1065 .globl vector_fiq
1072 1066
1073 1067 .section .vectors, "ax", %progbits
1074 - .reloc .text, R_ARM_NONE, .
1068 + RELOC_TEXT_NONE
1075 1069 W(b) vector_rst
1076 1070 W(b) vector_und
1077 1071 ARM( .reloc ., R_ARM_LDR_PC_G0, .L__vector_swi )
···
1085 1079
1086 1080 #ifdef CONFIG_HARDEN_BRANCH_HISTORY
1087 1081 .section .vectors.bhb.loop8, "ax", %progbits
1088 - .reloc .text, R_ARM_NONE, .
1082 + RELOC_TEXT_NONE
1089 1083 W(b) vector_rst
1090 1084 W(b) vector_bhb_loop8_und
1091 1085 ARM( .reloc ., R_ARM_LDR_PC_G0, .L__vector_bhb_loop8_swi )
···
1098 1092 W(b) vector_bhb_loop8_fiq
1099 1093
1100 1094 .section .vectors.bhb.bpiall, "ax", %progbits
1101 - .reloc .text, R_ARM_NONE, .
1095 + RELOC_TEXT_NONE
1102 1096 W(b) vector_rst
1103 1097 W(b) vector_bhb_bpiall_und
1104 1098 ARM( .reloc ., R_ARM_LDR_PC_G0, .L__vector_bhb_bpiall_swi )
+3 -1
arch/arm64/kernel/stacktrace.c
···
25 25 *
26 26 * @common: Common unwind state.
27 27 * @task: The task being unwound.
28 + * @graph_idx: Used by ftrace_graph_ret_addr() for optimized stack unwinding.
28 29 * @kr_cur: When KRETPROBES is selected, holds the kretprobe instance
29 30 * associated with the most recently encountered replacement lr
30 31 * value.
···
33 32 struct kunwind_state {
34 33 struct unwind_state common;
35 34 struct task_struct *task;
35 + int graph_idx;
36 36 #ifdef CONFIG_KRETPROBES
37 37 struct llist_node *kr_cur;
38 38 #endif
···
108 106 if (state->task->ret_stack &&
109 107 (state->common.pc == (unsigned long)return_to_handler)) {
110 108 unsigned long orig_pc;
111 - orig_pc = ftrace_graph_ret_addr(state->task, NULL,
109 + orig_pc = ftrace_graph_ret_addr(state->task, &state->graph_idx,
112 110 state->common.pc,
113 111 (void *)state->common.fp);
114 112 if (WARN_ON_ONCE(state->common.pc == orig_pc))
+11 -5
arch/parisc/mm/init.c
···
459 459 unsigned long kernel_end = (unsigned long)&_end;
460 460
461 461 /* Remap kernel text and data, but do not touch init section yet. */
462 - kernel_set_to_readonly = true;
463 462 map_pages(init_end, __pa(init_end), kernel_end - init_end,
464 463 PAGE_KERNEL, 0);
465 464
···
492 493 #ifdef CONFIG_STRICT_KERNEL_RWX
493 494 void mark_rodata_ro(void)
494 495 {
495 - /* rodata memory was already mapped with KERNEL_RO access rights by
496 - pagetable_init() and map_pages(). No need to do additional stuff here */
497 - unsigned long roai_size = __end_ro_after_init - __start_ro_after_init;
496 + unsigned long start = (unsigned long) &__start_rodata;
497 + unsigned long end = (unsigned long) &__end_rodata;
498 498
499 - pr_info("Write protected read-only-after-init data: %luk\n", roai_size >> 10);
499 + pr_info("Write protecting the kernel read-only data: %luk\n",
500 + (end - start) >> 10);
501 +
502 + kernel_set_to_readonly = true;
503 + map_pages(start, __pa(start), end - start, PAGE_KERNEL, 0);
504 +
505 + /* force the kernel to see the new page table entries */
506 + flush_cache_all();
507 + flush_tlb_all();
500 508 }
501 509
502 510 #endif
+2 -2
arch/powerpc/include/asm/nohash/32/pgtable.h
···
52 52 #define USER_PTRS_PER_PGD (TASK_SIZE / PGDIR_SIZE)
53 53
54 54 #define pgd_ERROR(e) \
55 - pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
55 + pr_err("%s:%d: bad pgd %08llx.\n", __FILE__, __LINE__, (unsigned long long)pgd_val(e))
56 56
57 57 /*
58 58 * This is the bottom of the PKMAP area with HIGHMEM or an arbitrary
···
170 170 #define pmd_pfn(pmd) (pmd_val(pmd) >> PAGE_SHIFT)
171 171 #else
172 172 #define pmd_page_vaddr(pmd) \
173 - ((const void *)(pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1)))
173 + ((const void *)((unsigned long)pmd_val(pmd) & ~(PTE_TABLE_SIZE - 1)))
174 174 #define pmd_pfn(pmd) (__pa(pmd_val(pmd)) >> PAGE_SHIFT)
175 175 #endif
176 176
+9 -3
arch/powerpc/include/asm/pgtable-types.h
···
49 49 #endif /* CONFIG_PPC64 */
50 50
51 51 /* PGD level */
52 - #if defined(CONFIG_PPC_E500) && defined(CONFIG_PTE_64BIT)
52 + #if defined(CONFIG_PPC_85xx) && defined(CONFIG_PTE_64BIT)
53 53 typedef struct { unsigned long long pgd; } pgd_t;
54 +
55 + static inline unsigned long long pgd_val(pgd_t x)
56 + {
57 + return x.pgd;
58 + }
54 59 #else
55 60 typedef struct { unsigned long pgd; } pgd_t;
56 - #endif
57 - #define __pgd(x) ((pgd_t) { (x) })
61 +
58 62 static inline unsigned long pgd_val(pgd_t x)
59 63 {
60 64 return x.pgd;
61 65 }
66 + #endif
67 + #define __pgd(x) ((pgd_t) { (x) })
62 68
63 69 /* Page protection bits */
64 70 typedef struct { unsigned long pgprot; } pgprot_t;
+3 -1
arch/powerpc/kernel/vdso/vdso32.lds.S
···
74 74 .got : { *(.got) } :text
75 75 .plt : { *(.plt) }
76 76
77 + .rela.dyn : { *(.rela .rela*) }
78 +
77 79 _end = .;
78 80 __end = .;
79 81 PROVIDE(end = .);
···
89 87 *(.branch_lt)
90 88 *(.data .data.* .gnu.linkonce.d.* .sdata*)
91 89 *(.bss .sbss .dynbss .dynsbss)
92 - *(.got1 .glink .iplt .rela*)
90 + *(.got1 .glink .iplt)
93 91 }
94 92 }
95 93
+2 -2
arch/powerpc/kernel/vdso/vdso64.lds.S
···
69 69 .eh_frame_hdr : { *(.eh_frame_hdr) } :text :eh_frame_hdr
70 70 .eh_frame : { KEEP (*(.eh_frame)) } :text
71 71 .gcc_except_table : { *(.gcc_except_table) }
72 - .rela.dyn ALIGN(8) : { *(.rela.dyn) }
72 + .rela.dyn ALIGN(8) : { *(.rela .rela*) }
73 73
74 74 .got ALIGN(8) : { *(.got .toc) }
75 75
···
86 86 *(.data .data.* .gnu.linkonce.d.* .sdata*)
87 87 *(.bss .sbss .dynbss .dynsbss)
88 88 *(.opd)
89 - *(.glink .iplt .plt .rela*)
89 + *(.glink .iplt .plt)
90 90 }
91 91 }
92 92
+9 -1
arch/powerpc/lib/qspinlock.c
···
697 697 }
698 698
699 699 release:
700 - qnodesp->count--; /* release the node */
700 + /*
701 + * Clear the lock before releasing the node, as another CPU might see stale
702 + * values if an interrupt occurs after we increment qnodesp->count
703 + * but before node->lock is initialized. The barrier ensures that
704 + * there are no further stores to the node after it has been released.
705 + */
706 + node->lock = NULL;
707 + barrier();
708 + qnodesp->count--;
701 709 }
702 710
703 711 void queued_spin_lock_slowpath(struct qspinlock *lock)
+1 -1
arch/powerpc/mm/nohash/tlb_64e.c
···
33 33 * though this will probably be made common with other nohash
34 34 * implementations at some point
35 35 */
36 - int mmu_pte_psize; /* Page size used for PTE pages */
36 + static int mmu_pte_psize; /* Page size used for PTE pages */
37 37 int mmu_vmemmap_psize; /* Page size used for the virtual mem map */
38 38 int book3e_htw_mode; /* HW tablewalk? Value is PPC_HTW_* */
39 39 unsigned long linear_map_top; /* Top of linear mapping */
+2 -2
arch/riscv/Kconfig
···
552 552 config TOOLCHAIN_HAS_V
553 553 bool
554 554 default y
555 - depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64iv)
556 - depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32iv)
555 + depends on !64BIT || $(cc-option,-mabi=lp64 -march=rv64imv)
556 + depends on !32BIT || $(cc-option,-mabi=ilp32 -march=rv32imv)
557 557 depends on LLD_VERSION >= 140000 || LD_VERSION >= 23800
558 558 depends on AS_HAS_OPTION_ARCH
559 559
+2 -24
arch/riscv/include/asm/processor.h
···
14 14
15 15
16 16 #include <asm/ptrace.h>
17 -
18 - /*
19 - * addr is a hint to the maximum userspace address that mmap should provide, so
20 - * this macro needs to return the largest address space available so that
21 - * mmap_end < addr, being mmap_end the top of that address space.
22 - * See Documentation/arch/riscv/vm-layout.rst for more details.
23 - */
23 17 #define arch_get_mmap_end(addr, len, flags) \
24 18 ({ \
25 - unsigned long mmap_end; \
26 - typeof(addr) _addr = (addr); \
27 - if ((_addr) == 0 || is_compat_task() || \
28 - ((_addr + len) > BIT(VA_BITS - 1))) \
29 - mmap_end = STACK_TOP_MAX; \
30 - else \
31 - mmap_end = (_addr + len); \
32 - mmap_end; \
19 + STACK_TOP_MAX; \
33 20 })
34 21
35 22 #define arch_get_mmap_base(addr, base) \
36 23 ({ \
37 - unsigned long mmap_base; \
38 - typeof(addr) _addr = (addr); \
39 - typeof(base) _base = (base); \
40 - unsigned long rnd_gap = DEFAULT_MAP_WINDOW - (_base); \
41 - if ((_addr) == 0 || is_compat_task() || \
42 - ((_addr + len) > BIT(VA_BITS - 1))) \
43 - mmap_base = (_base); \
44 - else \
45 - mmap_base = (_addr + len) - rnd_gap; \
46 - mmap_base; \
24 + base; \
47 25 })
48 26
49 27 #ifdef CONFIG_64BIT
+19 -1
arch/riscv/include/asm/sbi.h
···
9 9
10 10 #include <linux/types.h>
11 11 #include <linux/cpumask.h>
12 + #include <linux/jump_label.h>
12 13
13 14 #ifdef CONFIG_RISCV_SBI
14 15 enum sbi_ext_id {
···
305 304 };
306 305
307 306 void sbi_init(void);
307 + long __sbi_base_ecall(int fid);
308 308 struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
309 309 unsigned long arg2, unsigned long arg3,
310 310 unsigned long arg4, unsigned long arg5,
···
375 373 | (minor & SBI_SPEC_VERSION_MINOR_MASK);
376 374 }
377 375
378 - int sbi_err_map_linux_errno(int err);
376 + static inline int sbi_err_map_linux_errno(int err)
377 + {
378 + switch (err) {
379 + case SBI_SUCCESS:
380 + return 0;
381 + case SBI_ERR_DENIED:
382 + return -EPERM;
383 + case SBI_ERR_INVALID_PARAM:
384 + return -EINVAL;
385 + case SBI_ERR_INVALID_ADDRESS:
386 + return -EFAULT;
387 + case SBI_ERR_NOT_SUPPORTED:
388 + case SBI_ERR_FAILURE:
389 + default:
390 + return -ENOTSUPP;
391 + };
392 + }
379 393
380 394 extern bool sbi_debug_console_available;
381 395 int sbi_debug_console_write(const char *bytes, unsigned int num_bytes);
+5 -1
arch/riscv/kernel/Makefile
···
20 20 ifdef CONFIG_RISCV_ALTERNATIVE_EARLY
21 21 CFLAGS_alternative.o := -mcmodel=medany
22 22 CFLAGS_cpufeature.o := -mcmodel=medany
23 + CFLAGS_sbi_ecall.o := -mcmodel=medany
23 24 ifdef CONFIG_FTRACE
24 25 CFLAGS_REMOVE_alternative.o = $(CC_FLAGS_FTRACE)
25 26 CFLAGS_REMOVE_cpufeature.o = $(CC_FLAGS_FTRACE)
27 + CFLAGS_REMOVE_sbi_ecall.o = $(CC_FLAGS_FTRACE)
26 28 endif
27 29 ifdef CONFIG_RELOCATABLE
28 30 CFLAGS_alternative.o += -fno-pie
29 31 CFLAGS_cpufeature.o += -fno-pie
32 + CFLAGS_sbi_ecall.o += -fno-pie
30 33 endif
31 34 ifdef CONFIG_KASAN
32 35 KASAN_SANITIZE_alternative.o := n
33 36 KASAN_SANITIZE_cpufeature.o := n
37 + KASAN_SANITIZE_sbi_ecall.o := n
34 38 endif
35 39 endif
36 40
···
92 88
93 89 obj-$(CONFIG_PERF_EVENTS) += perf_callchain.o
94 90 obj-$(CONFIG_HAVE_PERF_REGS) += perf_regs.o
95 - obj-$(CONFIG_RISCV_SBI) += sbi.o
91 + obj-$(CONFIG_RISCV_SBI) += sbi.o sbi_ecall.o
96 92 ifeq ($(CONFIG_RISCV_SBI), y)
97 93 obj-$(CONFIG_SMP) += sbi-ipi.o
98 94 obj-$(CONFIG_SMP) += cpu_ops_sbi.o
-63
arch/riscv/kernel/sbi.c
···
14 14 #include <asm/smp.h>
15 15 #include <asm/tlbflush.h>
16 16
17 - #define CREATE_TRACE_POINTS
18 - #include <asm/trace.h>
19 -
20 17 /* default SBI version is 0.1 */
21 18 unsigned long sbi_spec_version __ro_after_init = SBI_SPEC_VERSION_DEFAULT;
22 19 EXPORT_SYMBOL(sbi_spec_version);
···
23 26 static int (*__sbi_rfence)(int fid, const struct cpumask *cpu_mask,
24 27 unsigned long start, unsigned long size,
25 28 unsigned long arg4, unsigned long arg5) __ro_after_init;
26 -
27 - struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
28 - unsigned long arg2, unsigned long arg3,
29 - unsigned long arg4, unsigned long arg5,
30 - int fid, int ext)
31 - {
32 - struct sbiret ret;
33 -
34 - trace_sbi_call(ext, fid);
35 -
36 - register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);
37 - register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);
38 - register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);
39 - register uintptr_t a3 asm ("a3") = (uintptr_t)(arg3);
40 - register uintptr_t a4 asm ("a4") = (uintptr_t)(arg4);
41 - register uintptr_t a5 asm ("a5") = (uintptr_t)(arg5);
42 - register uintptr_t a6 asm ("a6") = (uintptr_t)(fid);
43 - register uintptr_t a7 asm ("a7") = (uintptr_t)(ext);
44 - asm volatile ("ecall"
45 - : "+r" (a0), "+r" (a1)
46 - : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
47 - : "memory");
48 - ret.error = a0;
49 - ret.value = a1;
50 -
51 - trace_sbi_return(ext, ret.error, ret.value);
52 -
53 - return ret;
54 - }
55 - EXPORT_SYMBOL(__sbi_ecall);
56 -
57 - int sbi_err_map_linux_errno(int err)
58 - {
59 - switch (err) {
60 - case SBI_SUCCESS:
61 - return 0;
62 - case SBI_ERR_DENIED:
63 - return -EPERM;
64 - case SBI_ERR_INVALID_PARAM:
65 - return -EINVAL;
66 - case SBI_ERR_INVALID_ADDRESS:
67 - return -EFAULT;
68 - case SBI_ERR_NOT_SUPPORTED:
69 - case SBI_ERR_FAILURE:
70 - default:
71 - return -ENOTSUPP;
72 - };
73 - }
74 - EXPORT_SYMBOL(sbi_err_map_linux_errno);
75 29
76 30 #ifdef CONFIG_RISCV_SBI_V01
77 31 static unsigned long __sbi_v01_cpumask_to_hartmask(const struct cpumask *cpu_mask)
···
482 534 return 0;
483 535 }
484 536 EXPORT_SYMBOL(sbi_probe_extension);
485 -
486 - static long __sbi_base_ecall(int fid)
487 - {
488 - struct sbiret ret;
489 -
490 - ret = sbi_ecall(SBI_EXT_BASE, fid, 0, 0, 0, 0, 0, 0);
491 - if (!ret.error)
492 - return ret.value;
493 - else
494 - return sbi_err_map_linux_errno(ret.error);
495 - }
496 537
497 538 static inline long sbi_get_spec_version(void)
+48
arch/riscv/kernel/sbi_ecall.c
···
1 + // SPDX-License-Identifier: GPL-2.0
2 + /* Copyright (c) 2024 Rivos Inc. */
3 +
4 + #include <asm/sbi.h>
5 + #define CREATE_TRACE_POINTS
6 + #include <asm/trace.h>
7 +
8 + long __sbi_base_ecall(int fid)
9 + {
10 + struct sbiret ret;
11 +
12 + ret = sbi_ecall(SBI_EXT_BASE, fid, 0, 0, 0, 0, 0, 0);
13 + if (!ret.error)
14 + return ret.value;
15 + else
16 + return sbi_err_map_linux_errno(ret.error);
17 + }
18 + EXPORT_SYMBOL(__sbi_base_ecall);
19 +
20 + struct sbiret __sbi_ecall(unsigned long arg0, unsigned long arg1,
21 + unsigned long arg2, unsigned long arg3,
22 + unsigned long arg4, unsigned long arg5,
23 + int fid, int ext)
24 + {
25 + struct sbiret ret;
26 +
27 + trace_sbi_call(ext, fid);
28 +
29 + register uintptr_t a0 asm ("a0") = (uintptr_t)(arg0);
30 + register uintptr_t a1 asm ("a1") = (uintptr_t)(arg1);
31 + register uintptr_t a2 asm ("a2") = (uintptr_t)(arg2);
32 + register uintptr_t a3 asm ("a3") = (uintptr_t)(arg3);
33 + register uintptr_t a4 asm ("a4") = (uintptr_t)(arg4);
34 + register uintptr_t a5 asm ("a5") = (uintptr_t)(arg5);
35 + register uintptr_t a6 asm ("a6") = (uintptr_t)(fid);
36 + register uintptr_t a7 asm ("a7") = (uintptr_t)(ext);
37 + asm volatile ("ecall"
38 + : "+r" (a0), "+r" (a1)
39 + : "r" (a2), "r" (a3), "r" (a4), "r" (a5), "r" (a6), "r" (a7)
40 + : "memory");
41 + ret.error = a0;
42 + ret.value = a1;
43 +
44 + trace_sbi_return(ext, ret.error, ret.value);
45 +
46 + return ret;
47 + }
48 + EXPORT_SYMBOL(__sbi_ecall);
+2 -2
arch/riscv/kernel/traps_misaligned.c
···
417 417
418 418 val.data_u64 = 0;
419 419 if (user_mode(regs)) {
420 - if (raw_copy_from_user(&val, (u8 __user *)addr, len))
420 + if (copy_from_user(&val, (u8 __user *)addr, len))
421 421 return -1;
422 422 } else {
423 423 memcpy(&val, (u8 *)addr, len);
···
515 515 return -EOPNOTSUPP;
516 516
517 517 if (user_mode(regs)) {
518 - if (raw_copy_to_user((u8 __user *)addr, &val, len))
518 + if (copy_to_user((u8 __user *)addr, &val, len))
519 519 return -1;
520 520 } else {
521 521 memcpy((u8 *)addr, &val, len);
+1 -1
arch/riscv/mm/init.c
···
252 252 * The size of the linear page mapping may restrict the amount of
253 253 * usable RAM.
254 254 */
255 - if (IS_ENABLED(CONFIG_64BIT)) {
255 + if (IS_ENABLED(CONFIG_64BIT) && IS_ENABLED(CONFIG_MMU)) {
256 256 max_mapped_addr = __pa(PAGE_OFFSET) + KERN_VIRT_SIZE;
257 257 memblock_cap_memory_range(phys_ram_base,
258 258 max_mapped_addr - phys_ram_base);
-1
arch/x86/coco/tdx/tdx.c
···
389 389 .r12 = size,
390 390 .r13 = EPT_READ,
391 391 .r14 = addr,
392 - .r15 = *val,
393 392 };
394 393
395 394 if (__tdx_hypercall(&args))
+21 -2
arch/x86/events/intel/core.c
···
4589 4589 return HYBRID_INTEL_CORE;
4590 4590 }
4591 4591
4592 + static inline bool erratum_hsw11(struct perf_event *event)
4593 + {
4594 + return (event->hw.config & INTEL_ARCH_EVENT_MASK) ==
4595 + X86_CONFIG(.event=0xc0, .umask=0x01);
4596 + }
4597 +
4598 + /*
4599 + * The HSW11 requires a period larger than 100 which is the same as the BDM11.
4600 + * A minimum period of 128 is enforced as well for the INST_RETIRED.ALL.
4601 + *
4602 + * The message 'interrupt took too long' can be observed on any counter which
4603 + * was armed with a period < 32 and two events expired in the same NMI.
4604 + * A minimum period of 32 is enforced for the rest of the events.
4605 + */
4606 + static void hsw_limit_period(struct perf_event *event, s64 *left)
4607 + {
4608 + *left = max(*left, erratum_hsw11(event) ? 128 : 32);
4609 + }
4610 +
4592 4611 /*
4593 4612 * Broadwell:
4594 4613 *
···
4625 4606 */
4626 4607 static void bdw_limit_period(struct perf_event *event, s64 *left)
4627 4608 {
4628 - if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
4629 - X86_CONFIG(.event=0xc0, .umask=0x01)) {
4609 + if (erratum_hsw11(event)) {
4630 4610 if (*left < 128)
4631 4611 *left = 128;
4632 4612 *left &= ~0x3fULL;
···
6784 6766
6785 6767 x86_pmu.hw_config = hsw_hw_config;
6786 6768 x86_pmu.get_event_constraints = hsw_get_event_constraints;
6769 + x86_pmu.limit_period = hsw_limit_period;
6787 6770 x86_pmu.lbr_double_abort = true;
6788 6771 extra_attr = boot_cpu_has(X86_FEATURE_RTM) ?
6789 6772 hsw_format_attr : nhm_format_attr;
+7
arch/x86/include/asm/fpu/types.h
··· 591 591 * even without XSAVE support, i.e. legacy features FP + SSE 592 592 */ 593 593 u64 legacy_features; 594 + /* 595 + * @independent_features: 596 + * 597 + * Features that are supported by XSAVES, but not managed as part of 598 + * the FPU core, such as LBR 599 + */ 600 + u64 independent_features; 594 601 }; 595 602 596 603 /* FPU state configuration information */
+1
arch/x86/include/asm/page_64.h
··· 17 17 extern unsigned long page_offset_base; 18 18 extern unsigned long vmalloc_base; 19 19 extern unsigned long vmemmap_base; 20 + extern unsigned long physmem_end; 20 21 21 22 static __always_inline unsigned long __phys_addr_nodebug(unsigned long x) 22 23 {
+4
arch/x86/include/asm/pgtable_64_types.h
··· 140 140 # define VMEMMAP_START __VMEMMAP_BASE_L4 141 141 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */ 142 142 143 + #ifdef CONFIG_RANDOMIZE_MEMORY 144 + # define PHYSMEM_END physmem_end 145 + #endif 146 + 143 147 /* 144 148 * End of the region for which vmalloc page tables are pre-allocated. 145 149 * For non-KMSAN builds, this is the same as VMALLOC_END.
-6
arch/x86/include/asm/resctrl.h
··· 156 156 __resctrl_sched_in(tsk); 157 157 } 158 158 159 - static inline u32 resctrl_arch_system_num_rmid_idx(void) 160 - { 161 - /* RMID are independent numbers for x86. num_rmid_idx == num_rmid */ 162 - return boot_cpu_data.x86_cache_max_rmid + 1; 163 - } 164 - 165 159 static inline void resctrl_arch_rmid_idx_decode(u32 idx, u32 *closid, u32 *rmid) 166 160 { 167 161 *rmid = idx;
+6 -5
arch/x86/kernel/apic/apic.c
··· 1775 1775 1776 1776 static __init void x2apic_disable(void) 1777 1777 { 1778 - u32 x2apic_id, state = x2apic_state; 1778 + u32 x2apic_id; 1779 1779 1780 - x2apic_mode = 0; 1781 - x2apic_state = X2APIC_DISABLED; 1782 - 1783 - if (state != X2APIC_ON) 1780 + if (x2apic_state < X2APIC_ON) 1784 1781 return; 1785 1782 1786 1783 x2apic_id = read_apic_id(); ··· 1790 1793 } 1791 1794 1792 1795 __x2apic_disable(); 1796 + 1797 + x2apic_mode = 0; 1798 + x2apic_state = X2APIC_DISABLED; 1799 + 1793 1800 /* 1794 1801 * Don't reread the APIC ID as it was already done from 1795 1802 * check_x2apic() and the APIC driver still is a x2APIC variant,
+8
arch/x86/kernel/cpu/resctrl/core.c
··· 119 119 }, 120 120 }; 121 121 122 + u32 resctrl_arch_system_num_rmid_idx(void) 123 + { 124 + struct rdt_resource *r = &rdt_resources_all[RDT_RESOURCE_L3].r_resctrl; 125 + 126 + /* RMID are independent numbers for x86. num_rmid_idx == num_rmid */ 127 + return r->num_rmid; 128 + } 129 + 122 130 /* 123 131 * cache_alloc_hsw_probe() - Have to probe for Intel haswell server CPUs 124 132 * as they do not have CPUID enumeration support for Cache allocation.
+3
arch/x86/kernel/fpu/xstate.c
··· 788 788 goto out_disable; 789 789 } 790 790 791 + fpu_kernel_cfg.independent_features = fpu_kernel_cfg.max_features & 792 + XFEATURE_MASK_INDEPENDENT; 793 + 791 794 /* 792 795 * Clear XSAVE features that are disabled in the normal CPUID. 793 796 */
+2 -2
arch/x86/kernel/fpu/xstate.h
··· 62 62 static inline u64 xfeatures_mask_independent(void) 63 63 { 64 64 if (!cpu_feature_enabled(X86_FEATURE_ARCH_LBR)) 65 - return XFEATURE_MASK_INDEPENDENT & ~XFEATURE_MASK_LBR; 65 + return fpu_kernel_cfg.independent_features & ~XFEATURE_MASK_LBR; 66 66 67 - return XFEATURE_MASK_INDEPENDENT; 67 + return fpu_kernel_cfg.independent_features; 68 68 } 69 69 70 70 /* XSAVE/XRSTOR wrapper functions */
+4 -3
arch/x86/kvm/Kconfig
··· 19 19 20 20 config KVM 21 21 tristate "Kernel-based Virtual Machine (KVM) support" 22 - depends on HIGH_RES_TIMERS 23 22 depends on X86_LOCAL_APIC 24 23 select KVM_COMMON 25 24 select KVM_GENERIC_MMU_NOTIFIER ··· 143 144 select HAVE_KVM_ARCH_GMEM_PREPARE 144 145 select HAVE_KVM_ARCH_GMEM_INVALIDATE 145 146 help 146 - Provides support for launching Encrypted VMs (SEV) and Encrypted VMs 147 - with Encrypted State (SEV-ES) on AMD processors. 147 + Provides support for launching encrypted VMs which use Secure 148 + Encrypted Virtualization (SEV), Secure Encrypted Virtualization with 149 + Encrypted State (SEV-ES), and Secure Encrypted Virtualization with 150 + Secure Nested Paging (SEV-SNP) technologies on AMD processors. 148 151 149 152 config KVM_SMM 150 153 bool "System Management Mode emulation"
+3 -1
arch/x86/kvm/mmu/mmu.c
··· 4750 4750 * reload is efficient when called repeatedly, so we can do it on 4751 4751 * every iteration. 4752 4752 */ 4753 - kvm_mmu_reload(vcpu); 4753 + r = kvm_mmu_reload(vcpu); 4754 + if (r) 4755 + return r; 4754 4756 4755 4757 if (kvm_arch_has_private_mem(vcpu->kvm) && 4756 4758 kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa)))
+3 -3
arch/x86/kvm/mmu/spte.c
··· 391 391 mmio_value = 0; 392 392 393 393 /* 394 - * The masked MMIO value must obviously match itself and a removed SPTE 395 - * must not get a false positive. Removed SPTEs and MMIO SPTEs should 396 - * never collide as MMIO must set some RWX bits, and removed SPTEs must 394 + * The masked MMIO value must obviously match itself and a frozen SPTE 395 + * must not get a false positive. Frozen SPTEs and MMIO SPTEs should 396 + * never collide as MMIO must set some RWX bits, and frozen SPTEs must 397 397 * not set any RWX bits. 398 398 */ 399 399 if (WARN_ON((mmio_value & mmio_mask) != mmio_value) ||
+1 -1
arch/x86/kvm/mmu/spte.h
··· 214 214 */ 215 215 #define FROZEN_SPTE (SHADOW_NONPRESENT_VALUE | 0x5a0ULL) 216 216 217 - /* Removed SPTEs must not be misconstrued as shadow present PTEs. */ 217 + /* Frozen SPTEs must not be misconstrued as shadow present PTEs. */ 218 218 static_assert(!(FROZEN_SPTE & SPTE_MMU_PRESENT_MASK)); 219 219 220 220 static inline bool is_frozen_spte(u64 spte)
+4 -4
arch/x86/kvm/mmu/tdp_mmu.c
··· 359 359 /* 360 360 * Set the SPTE to a nonpresent value that other 361 361 * threads will not overwrite. If the SPTE was 362 - * already marked as removed then another thread 362 + * already marked as frozen then another thread 363 363 * handling a page fault could overwrite it, so 364 364 * set the SPTE until it is set from some other 365 - * value to the removed SPTE value. 365 + * value to the frozen SPTE value. 366 366 */ 367 367 for (;;) { 368 368 old_spte = kvm_tdp_mmu_write_spte_atomic(sptep, FROZEN_SPTE); ··· 536 536 u64 *sptep = rcu_dereference(iter->sptep); 537 537 538 538 /* 539 - * The caller is responsible for ensuring the old SPTE is not a REMOVED 540 - * SPTE. KVM should never attempt to zap or manipulate a REMOVED SPTE, 539 + * The caller is responsible for ensuring the old SPTE is not a FROZEN 540 + * SPTE. KVM should never attempt to zap or manipulate a FROZEN SPTE, 541 541 * and pre-checking before inserting a new SPTE is advantageous as it 542 542 * avoids unnecessary work. 543 543 */
+15
arch/x86/kvm/svm/svm.c
··· 2876 2876 case MSR_CSTAR: 2877 2877 msr_info->data = svm->vmcb01.ptr->save.cstar; 2878 2878 break; 2879 + case MSR_GS_BASE: 2880 + msr_info->data = svm->vmcb01.ptr->save.gs.base; 2881 + break; 2882 + case MSR_FS_BASE: 2883 + msr_info->data = svm->vmcb01.ptr->save.fs.base; 2884 + break; 2879 2885 case MSR_KERNEL_GS_BASE: 2880 2886 msr_info->data = svm->vmcb01.ptr->save.kernel_gs_base; 2881 2887 break; ··· 3106 3100 break; 3107 3101 case MSR_CSTAR: 3108 3102 svm->vmcb01.ptr->save.cstar = data; 3103 + break; 3104 + case MSR_GS_BASE: 3105 + svm->vmcb01.ptr->save.gs.base = data; 3106 + break; 3107 + case MSR_FS_BASE: 3108 + svm->vmcb01.ptr->save.fs.base = data; 3109 3109 break; 3110 3110 case MSR_KERNEL_GS_BASE: 3111 3111 svm->vmcb01.ptr->save.kernel_gs_base = data; ··· 5236 5224 5237 5225 /* CPUID 0x8000001F (SME/SEV features) */ 5238 5226 sev_set_cpu_caps(); 5227 + 5228 + /* Don't advertise Bus Lock Detect to guest if SVM support is absent */ 5229 + kvm_cpu_cap_clear(X86_FEATURE_BUS_LOCK_DETECT); 5239 5230 } 5240 5231 5241 5232 static __init int svm_hardware_setup(void)
+5 -1
arch/x86/kvm/x86.c
··· 4656 4656 case KVM_CAP_ASYNC_PF_INT: 4657 4657 case KVM_CAP_GET_TSC_KHZ: 4658 4658 case KVM_CAP_KVMCLOCK_CTRL: 4659 - case KVM_CAP_READONLY_MEM: 4660 4659 case KVM_CAP_IOAPIC_POLARITY_IGNORED: 4661 4660 case KVM_CAP_TSC_DEADLINE_TIMER: 4662 4661 case KVM_CAP_DISABLE_QUIRKS: ··· 4813 4814 break; 4814 4815 case KVM_CAP_VM_TYPES: 4815 4816 r = kvm_caps.supported_vm_types; 4817 + break; 4818 + case KVM_CAP_READONLY_MEM: 4819 + r = kvm ? kvm_arch_has_readonly_mem(kvm) : 1; 4816 4820 break; 4817 4821 default: 4818 4822 break; ··· 6042 6040 if (copy_from_user(&events, argp, sizeof(struct kvm_vcpu_events))) 6043 6041 break; 6044 6042 6043 + kvm_vcpu_srcu_read_lock(vcpu); 6045 6044 r = kvm_vcpu_ioctl_x86_set_vcpu_events(vcpu, &events); 6045 + kvm_vcpu_srcu_read_unlock(vcpu); 6046 6046 break; 6047 6047 } 6048 6048 case KVM_GET_DEBUGREGS: {
+4
arch/x86/mm/init_64.c
··· 958 958 int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages, 959 959 struct mhp_params *params) 960 960 { 961 + unsigned long end = ((start_pfn + nr_pages) << PAGE_SHIFT) - 1; 961 962 int ret; 963 + 964 + if (WARN_ON_ONCE(end > PHYSMEM_END)) 965 + return -ERANGE; 962 966 963 967 ret = __add_pages(nid, start_pfn, nr_pages, params); 964 968 WARN_ON_ONCE(ret);
+27 -7
arch/x86/mm/kaslr.c
··· 47 47 */ 48 48 static __initdata struct kaslr_memory_region { 49 49 unsigned long *base; 50 + unsigned long *end; 50 51 unsigned long size_tb; 51 52 } kaslr_regions[] = { 52 - { &page_offset_base, 0 }, 53 - { &vmalloc_base, 0 }, 54 - { &vmemmap_base, 0 }, 53 + { 54 + .base = &page_offset_base, 55 + .end = &physmem_end, 56 + }, 57 + { 58 + .base = &vmalloc_base, 59 + }, 60 + { 61 + .base = &vmemmap_base, 62 + }, 55 63 }; 64 + 65 + /* The end of the possible address space for physical memory */ 66 + unsigned long physmem_end __ro_after_init; 56 67 57 68 /* Get size in bytes used by the memory region */ 58 69 static inline unsigned long get_padding(struct kaslr_memory_region *region) ··· 93 82 BUILD_BUG_ON(vaddr_end != CPU_ENTRY_AREA_BASE); 94 83 BUILD_BUG_ON(vaddr_end > __START_KERNEL_map); 95 84 85 + /* Preset the end of the possible address space for physical memory */ 86 + physmem_end = ((1ULL << MAX_PHYSMEM_BITS) - 1); 96 87 if (!kaslr_memory_enabled()) 97 88 return; 98 89 ··· 141 128 vaddr += entropy; 142 129 *kaslr_regions[i].base = vaddr; 143 130 144 - /* 145 - * Jump the region and add a minimum padding based on 146 - * randomization alignment. 147 - */ 131 + /* Calculate the end of the region */ 148 132 vaddr += get_padding(&kaslr_regions[i]); 133 + /* 134 + * KASLR trims the maximum possible size of the 135 + * direct-map. Update the physmem_end boundary. 136 + * No rounding required as the region starts 137 + * PUD aligned and size is in units of TB. 138 + */ 139 + if (kaslr_regions[i].end) 140 + *kaslr_regions[i].end = __pa_nodebug(vaddr - 1); 141 + 142 + /* Add a minimum padding based on randomization alignment. */ 149 143 vaddr = round_up(vaddr + 1, PUD_SIZE); 150 144 remain_entropy -= entropy; 151 145 }
-4
block/bio-integrity.c
··· 167 167 struct request_queue *q = bdev_get_queue(bio->bi_bdev); 168 168 struct bio_integrity_payload *bip = bio_integrity(bio); 169 169 170 - if (((bip->bip_iter.bi_size + len) >> SECTOR_SHIFT) > 171 - queue_max_hw_sectors(q)) 172 - return 0; 173 - 174 170 if (bip->bip_vcnt > 0) { 175 171 struct bio_vec *bv = &bip->bip_vec[bip->bip_vcnt - 1]; 176 172 bool same_page = false;
+1
drivers/android/binder.c
··· 3422 3422 */ 3423 3423 copy_size = object_offset - user_offset; 3424 3424 if (copy_size && (user_offset > object_offset || 3425 + object_offset > tr->data_size || 3425 3426 binder_alloc_copy_user_to_buffer( 3426 3427 &target_proc->alloc, 3427 3428 t->buffer, user_offset,
+3 -1
drivers/ata/libata-core.c
··· 5593 5593 } 5594 5594 5595 5595 dr = devres_alloc(ata_devres_release, 0, GFP_KERNEL); 5596 - if (!dr) 5596 + if (!dr) { 5597 + kfree(host); 5597 5598 goto err_out; 5599 + } 5598 5600 5599 5601 devres_add(dev, dr); 5600 5602 dev_set_drvdata(dev, host);
+2
drivers/block/ublk_drv.c
··· 2663 2663 mutex_lock(&ub->mutex); 2664 2664 if (!ublk_can_use_recovery(ub)) 2665 2665 goto out_unlock; 2666 + if (!ub->nr_queues_ready) 2667 + goto out_unlock; 2666 2668 /* 2667 2669 * START_RECOVERY is only allowd after: 2668 2670 *
+1
drivers/bluetooth/hci_qca.c
··· 1091 1091 qca->memdump_state = QCA_MEMDUMP_COLLECTED; 1092 1092 cancel_delayed_work(&qca->ctrl_memdump_timeout); 1093 1093 clear_bit(QCA_MEMDUMP_COLLECTION, &qca->flags); 1094 + clear_bit(QCA_IBS_DISABLED, &qca->flags); 1094 1095 mutex_unlock(&qca->hci_memdump_lock); 1095 1096 return; 1096 1097 }
+22 -3
drivers/clk/qcom/clk-alpha-pll.c
··· 40 40 41 41 #define PLL_USER_CTL(p) ((p)->offset + (p)->regs[PLL_OFF_USER_CTL]) 42 42 # define PLL_POST_DIV_SHIFT 8 43 - # define PLL_POST_DIV_MASK(p) GENMASK((p)->width, 0) 43 + # define PLL_POST_DIV_MASK(p) GENMASK((p)->width - 1, 0) 44 + # define PLL_ALPHA_MSB BIT(15) 44 45 # define PLL_ALPHA_EN BIT(24) 45 46 # define PLL_ALPHA_MODE BIT(25) 46 47 # define PLL_VCO_SHIFT 20 ··· 1553 1552 } 1554 1553 1555 1554 return regmap_update_bits(regmap, PLL_USER_CTL(pll), 1556 - PLL_POST_DIV_MASK(pll) << PLL_POST_DIV_SHIFT, 1557 - val << PLL_POST_DIV_SHIFT); 1555 + PLL_POST_DIV_MASK(pll) << pll->post_div_shift, 1556 + val << pll->post_div_shift); 1558 1557 } 1559 1558 1560 1559 const struct clk_ops clk_alpha_pll_postdiv_trion_ops = { ··· 2118 2117 regmap_write(regmap, PLL_OPMODE(pll), 0x0); 2119 2118 } 2120 2119 2120 + static void zonda_pll_adjust_l_val(unsigned long rate, unsigned long prate, u32 *l) 2121 + { 2122 + u64 remainder, quotient; 2123 + 2124 + quotient = rate; 2125 + remainder = do_div(quotient, prate); 2126 + *l = quotient; 2127 + 2128 + if ((remainder * 2) / prate) 2129 + *l = *l + 1; 2130 + } 2131 + 2121 2132 static int clk_zonda_pll_set_rate(struct clk_hw *hw, unsigned long rate, 2122 2133 unsigned long prate) 2123 2134 { ··· 2146 2133 if (ret < 0) 2147 2134 return ret; 2148 2135 2136 + if (a & PLL_ALPHA_MSB) 2137 + zonda_pll_adjust_l_val(rate, prate, &l); 2138 + 2149 2139 regmap_write(pll->clkr.regmap, PLL_ALPHA_VAL(pll), a); 2150 2140 regmap_write(pll->clkr.regmap, PLL_L_VAL(pll), l); 2141 + 2142 + if (!clk_hw_is_enabled(hw)) 2143 + return 0; 2151 2144 2152 2145 /* Wait before polling for the frequency latch */ 2153 2146 udelay(5);
+1
drivers/clk/qcom/clk-rcg.h
··· 198 198 extern const struct clk_ops clk_pixel_ops; 199 199 extern const struct clk_ops clk_gfx3d_ops; 200 200 extern const struct clk_ops clk_rcg2_shared_ops; 201 + extern const struct clk_ops clk_rcg2_shared_no_init_park_ops; 201 202 extern const struct clk_ops clk_dp_ops; 202 203 203 204 struct clk_rcg_dfs_data {
+30
drivers/clk/qcom/clk-rcg2.c
··· 1348 1348 }; 1349 1349 EXPORT_SYMBOL_GPL(clk_rcg2_shared_ops); 1350 1350 1351 + static int clk_rcg2_shared_no_init_park(struct clk_hw *hw) 1352 + { 1353 + struct clk_rcg2 *rcg = to_clk_rcg2(hw); 1354 + 1355 + /* 1356 + * Read the config register so that the parent is properly mapped at 1357 + * registration time. 1358 + */ 1359 + regmap_read(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG, &rcg->parked_cfg); 1360 + 1361 + return 0; 1362 + } 1363 + 1364 + /* 1365 + * Like clk_rcg2_shared_ops but skip the init so that the clk frequency is left 1366 + * unchanged at registration time. 1367 + */ 1368 + const struct clk_ops clk_rcg2_shared_no_init_park_ops = { 1369 + .init = clk_rcg2_shared_no_init_park, 1370 + .enable = clk_rcg2_shared_enable, 1371 + .disable = clk_rcg2_shared_disable, 1372 + .get_parent = clk_rcg2_shared_get_parent, 1373 + .set_parent = clk_rcg2_shared_set_parent, 1374 + .recalc_rate = clk_rcg2_shared_recalc_rate, 1375 + .determine_rate = clk_rcg2_determine_rate, 1376 + .set_rate = clk_rcg2_shared_set_rate, 1377 + .set_rate_and_parent = clk_rcg2_shared_set_rate_and_parent, 1378 + }; 1379 + EXPORT_SYMBOL_GPL(clk_rcg2_shared_no_init_park_ops); 1380 + 1351 1381 /* Common APIs to be used for DFS based RCGR */ 1352 1382 static void clk_rcg2_dfs_populate_freq(struct clk_hw *hw, unsigned int l, 1353 1383 struct freq_tbl *f)
+6 -6
drivers/clk/qcom/gcc-ipq9574.c
··· 68 68 69 69 static struct clk_alpha_pll gpll0_main = { 70 70 .offset = 0x20000, 71 - .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], 71 + .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO], 72 72 .clkr = { 73 73 .enable_reg = 0x0b000, 74 74 .enable_mask = BIT(0), ··· 96 96 97 97 static struct clk_alpha_pll_postdiv gpll0 = { 98 98 .offset = 0x20000, 99 - .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], 99 + .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO], 100 100 .width = 4, 101 101 .clkr.hw.init = &(const struct clk_init_data) { 102 102 .name = "gpll0", ··· 110 110 111 111 static struct clk_alpha_pll gpll4_main = { 112 112 .offset = 0x22000, 113 - .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], 113 + .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO], 114 114 .clkr = { 115 115 .enable_reg = 0x0b000, 116 116 .enable_mask = BIT(2), ··· 125 125 126 126 static struct clk_alpha_pll_postdiv gpll4 = { 127 127 .offset = 0x22000, 128 - .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], 128 + .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO], 129 129 .width = 4, 130 130 .clkr.hw.init = &(const struct clk_init_data) { 131 131 .name = "gpll4", ··· 139 139 140 140 static struct clk_alpha_pll gpll2_main = { 141 141 .offset = 0x21000, 142 - .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], 142 + .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO], 143 143 .clkr = { 144 144 .enable_reg = 0x0b000, 145 145 .enable_mask = BIT(1), ··· 154 154 155 155 static struct clk_alpha_pll_postdiv gpll2 = { 156 156 .offset = 0x21000, 157 - .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT], 157 + .regs = clk_alpha_pll_regs[CLK_ALPHA_PLL_TYPE_DEFAULT_EVO], 158 158 .width = 4, 159 159 .clkr.hw.init = &(const struct clk_init_data) { 160 160 .name = "gpll2",
+24 -24
drivers/clk/qcom/gcc-sc8280xp.c
··· 1500 1500 .parent_data = gcc_parent_data_0, 1501 1501 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1502 1502 .flags = CLK_SET_RATE_PARENT, 1503 - .ops = &clk_rcg2_shared_ops, 1503 + .ops = &clk_rcg2_ops, 1504 1504 }; 1505 1505 1506 1506 static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = { ··· 1517 1517 .parent_data = gcc_parent_data_0, 1518 1518 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1519 1519 .flags = CLK_SET_RATE_PARENT, 1520 - .ops = &clk_rcg2_shared_ops, 1520 + .ops = &clk_rcg2_ops, 1521 1521 }; 1522 1522 1523 1523 static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = { ··· 1534 1534 .parent_data = gcc_parent_data_0, 1535 1535 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1536 1536 .flags = CLK_SET_RATE_PARENT, 1537 - .ops = &clk_rcg2_shared_ops, 1537 + .ops = &clk_rcg2_ops, 1538 1538 }; 1539 1539 1540 1540 static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = { ··· 1551 1551 .parent_data = gcc_parent_data_0, 1552 1552 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1553 1553 .flags = CLK_SET_RATE_PARENT, 1554 - .ops = &clk_rcg2_shared_ops, 1554 + .ops = &clk_rcg2_ops, 1555 1555 }; 1556 1556 1557 1557 static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = { ··· 1568 1568 .parent_data = gcc_parent_data_0, 1569 1569 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1570 1570 .flags = CLK_SET_RATE_PARENT, 1571 - .ops = &clk_rcg2_shared_ops, 1571 + .ops = &clk_rcg2_ops, 1572 1572 }; 1573 1573 1574 1574 static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = { ··· 1585 1585 .parent_data = gcc_parent_data_0, 1586 1586 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1587 1587 .flags = CLK_SET_RATE_PARENT, 1588 - .ops = &clk_rcg2_shared_ops, 1588 + .ops = &clk_rcg2_ops, 1589 1589 }; 1590 1590 1591 1591 static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = { ··· 1617 1617 .parent_data = gcc_parent_data_0, 1618 1618 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1619 1619 .flags = CLK_SET_RATE_PARENT, 1620 - .ops = &clk_rcg2_shared_ops, 1620 + .ops = &clk_rcg2_ops, 1621 1621 }; 1622 1622 1623 1623 static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = { ··· 1634 1634 .parent_data = gcc_parent_data_0, 1635 1635 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1636 1636 .flags = CLK_SET_RATE_PARENT, 1637 - .ops = &clk_rcg2_shared_ops, 1637 + .ops = &clk_rcg2_ops, 1638 1638 }; 1639 1639 1640 1640 static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = { ··· 1651 1651 .parent_data = gcc_parent_data_0, 1652 1652 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1653 1653 .flags = CLK_SET_RATE_PARENT, 1654 - .ops = &clk_rcg2_shared_ops, 1654 + .ops = &clk_rcg2_ops, 1655 1655 }; 1656 1656 1657 1657 static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = { ··· 1668 1668 .parent_data = gcc_parent_data_0, 1669 1669 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1670 1670 .flags = CLK_SET_RATE_PARENT, 1671 - .ops = &clk_rcg2_shared_ops, 1671 + .ops = &clk_rcg2_ops, 1672 1672 }; 1673 1673 1674 1674 static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = { ··· 1685 1685 .parent_data = gcc_parent_data_0, 1686 1686 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1687 1687 .flags = CLK_SET_RATE_PARENT, 1688 - .ops = &clk_rcg2_shared_ops, 1688 + .ops = &clk_rcg2_ops, 1689 1689 }; 1690 1690 1691 1691 static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = { ··· 1702 1702 .parent_data = gcc_parent_data_0, 1703 1703 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1704 1704 .flags = CLK_SET_RATE_PARENT, 1705 - .ops = &clk_rcg2_shared_ops, 1705 + .ops = &clk_rcg2_ops, 1706 1706 }; 1707 1707 1708 1708 static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = { ··· 1719 1719 .parent_data = gcc_parent_data_0, 1720 1720 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1721 1721 .flags = CLK_SET_RATE_PARENT, 1722 - .ops = &clk_rcg2_shared_ops, 1722 + .ops = &clk_rcg2_ops, 1723 1723 }; 1724 1724 1725 1725 static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = { ··· 1736 1736 .parent_data = gcc_parent_data_0, 1737 1737 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1738 1738 .flags = CLK_SET_RATE_PARENT, 1739 - .ops = &clk_rcg2_shared_ops, 1739 + .ops = &clk_rcg2_ops, 1740 1740 }; 1741 1741 1742 1742 static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = { ··· 1753 1753 .parent_data = gcc_parent_data_0, 1754 1754 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1755 1755 .flags = CLK_SET_RATE_PARENT, 1756 - .ops = &clk_rcg2_shared_ops, 1756 + .ops = &clk_rcg2_ops, 1757 1757 }; 1758 1758 1759 1759 static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = { ··· 1770 1770 .parent_data = gcc_parent_data_0, 1771 1771 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1772 1772 .flags = CLK_SET_RATE_PARENT, 1773 - .ops = &clk_rcg2_shared_ops, 1773 + .ops = &clk_rcg2_ops, 1774 1774 }; 1775 1775 1776 1776 static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = { ··· 1787 1787 .parent_data = gcc_parent_data_0, 1788 1788 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1789 1789 .flags = CLK_SET_RATE_PARENT, 1790 - .ops = &clk_rcg2_shared_ops, 1790 + .ops = &clk_rcg2_ops, 1791 1791 }; 1792 1792 1793 1793 static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = { ··· 1804 1804 .parent_data = gcc_parent_data_0, 1805 1805 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1806 1806 .flags = CLK_SET_RATE_PARENT, 1807 - .ops = &clk_rcg2_shared_ops, 1807 + .ops = &clk_rcg2_ops, 1808 1808 }; 1809 1809 1810 1810 static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = { ··· 1821 1821 .parent_data = gcc_parent_data_0, 1822 1822 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1823 1823 .flags = CLK_SET_RATE_PARENT, 1824 - .ops = &clk_rcg2_shared_ops, 1824 + .ops = &clk_rcg2_ops, 1825 1825 }; 1826 1826 1827 1827 static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = { ··· 1838 1838 .parent_data = gcc_parent_data_0, 1839 1839 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1840 1840 .flags = CLK_SET_RATE_PARENT, 1841 - .ops = &clk_rcg2_shared_ops, 1841 + .ops = &clk_rcg2_ops, 1842 1842 }; 1843 1843 1844 1844 static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = { ··· 1855 1855 .parent_data = gcc_parent_data_0, 1856 1856 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1857 1857 .flags = CLK_SET_RATE_PARENT, 1858 - .ops = &clk_rcg2_shared_ops, 1858 + .ops = &clk_rcg2_ops, 1859 1859 }; 1860 1860 1861 1861 static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = { ··· 1872 1872 .parent_data = gcc_parent_data_0, 1873 1873 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1874 1874 .flags = CLK_SET_RATE_PARENT, 1875 - .ops = &clk_rcg2_shared_ops, 1875 + .ops = &clk_rcg2_ops, 1876 1876 }; 1877 1877 1878 1878 static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = { ··· 1889 1889 .parent_data = gcc_parent_data_0, 1890 1890 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1891 1891 .flags = CLK_SET_RATE_PARENT, 1892 - .ops = &clk_rcg2_shared_ops, 1892 + .ops = &clk_rcg2_ops, 1893 1893 }; 1894 1894 1895 1895 static struct clk_rcg2 gcc_qupv3_wrap2_s6_clk_src = { ··· 1906 1906 .parent_data = gcc_parent_data_0, 1907 1907 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1908 1908 .flags = CLK_SET_RATE_PARENT, 1909 - .ops = &clk_rcg2_shared_ops, 1909 + .ops = &clk_rcg2_ops, 1910 1910 }; 1911 1911 1912 1912 static struct clk_rcg2 gcc_qupv3_wrap2_s7_clk_src = {
+27 -27
drivers/clk/qcom/gcc-sm8550.c
··· 536 536 .parent_data = gcc_parent_data_0, 537 537 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 538 538 .flags = CLK_SET_RATE_PARENT, 539 - .ops = &clk_rcg2_shared_ops, 539 + .ops = &clk_rcg2_ops, 540 540 }, 541 541 }; 542 542 ··· 551 551 .parent_data = gcc_parent_data_0, 552 552 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 553 553 .flags = CLK_SET_RATE_PARENT, 554 - .ops = &clk_rcg2_shared_ops, 554 + .ops = &clk_rcg2_ops, 555 555 }, 556 556 }; 557 557 ··· 566 566 .parent_data = gcc_parent_data_0, 567 567 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 568 568 .flags = CLK_SET_RATE_PARENT, 569 - .ops = &clk_rcg2_shared_ops, 569 + .ops = &clk_rcg2_ops, 570 570 }, 571 571 }; 572 572 ··· 581 581 .parent_data = gcc_parent_data_0, 582 582 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 583 583 .flags = CLK_SET_RATE_PARENT, 584 - .ops = &clk_rcg2_shared_ops, 584 + .ops = &clk_rcg2_ops, 585 585 }, 586 586 }; 587 587 ··· 596 596 .parent_data = gcc_parent_data_0, 597 597 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 598 598 .flags = CLK_SET_RATE_PARENT, 599 - .ops = &clk_rcg2_shared_ops, 599 + .ops = &clk_rcg2_ops, 600 600 }, 601 601 }; 602 602 ··· 611 611 .parent_data = gcc_parent_data_0, 612 612 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 613 613 .flags = CLK_SET_RATE_PARENT, 614 - .ops = &clk_rcg2_shared_ops, 614 + .ops = &clk_rcg2_ops, 615 615 }, 616 616 }; 617 617 ··· 626 626 .parent_data = gcc_parent_data_0, 627 627 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 628 628 .flags = CLK_SET_RATE_PARENT, 629 - .ops = &clk_rcg2_shared_ops, 629 + .ops = &clk_rcg2_ops, 630 630 }, 631 631 }; 632 632 ··· 641 641 .parent_data = gcc_parent_data_0, 642 642 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 643 643 .flags = CLK_SET_RATE_PARENT, 644 - .ops = &clk_rcg2_shared_ops, 644 + .ops = &clk_rcg2_ops, 645 645 }, 646 646 }; 647 647 ··· 656 656 .parent_data = gcc_parent_data_0, 657 657 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 658 658 .flags = CLK_SET_RATE_PARENT, 659 - .ops = &clk_rcg2_shared_ops, 659 + .ops = &clk_rcg2_ops, 660 660 }, 661 661 }; 662 662 ··· 671 671 .parent_data = gcc_parent_data_0, 672 672 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 673 673 .flags = CLK_SET_RATE_PARENT, 674 - .ops = &clk_rcg2_shared_ops, 674 + .ops = &clk_rcg2_ops, 675 675 }, 676 676 }; 677 677 ··· 700 700 .parent_data = gcc_parent_data_0, 701 701 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 702 702 .flags = CLK_SET_RATE_PARENT, 703 - .ops = &clk_rcg2_shared_ops, 703 + .ops = &clk_rcg2_ops, 704 704 }; 705 705 706 706 static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = { ··· 717 717 .parent_data = gcc_parent_data_0, 718 718 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 719 719 .flags = CLK_SET_RATE_PARENT, 720 - .ops = &clk_rcg2_shared_ops, 720 + .ops = &clk_rcg2_ops, 721 721 }; 722 722 723 723 static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = { ··· 750 750 .parent_data = gcc_parent_data_0, 751 751 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 752 752 .flags = CLK_SET_RATE_PARENT, 753 - .ops = &clk_rcg2_shared_ops, 753 + .ops = &clk_rcg2_ops, 754 754 }; 755 755 756 756 static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = { ··· 767 767 .parent_data = gcc_parent_data_0, 768 768 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 769 769 .flags = CLK_SET_RATE_PARENT, 770 - .ops = &clk_rcg2_shared_ops, 770 + .ops = &clk_rcg2_ops, 771 771 }; 772 772 773 773 static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = { ··· 784 784 .parent_data = gcc_parent_data_0, 785 785 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 786 786 .flags = CLK_SET_RATE_PARENT, 787 - .ops = &clk_rcg2_shared_ops, 787 + .ops = &clk_rcg2_ops, 788 788 }; 789 789 790 790 static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = { ··· 801 801 .parent_data = gcc_parent_data_0, 802 802 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 803 803 .flags = CLK_SET_RATE_PARENT, 804 - .ops = &clk_rcg2_shared_ops, 804 + .ops = &clk_rcg2_ops, 805 805 }; 806 806 807 807 static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = { ··· 818 818 .parent_data = gcc_parent_data_0, 819 819 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 820 820 .flags = CLK_SET_RATE_PARENT, 821 - .ops = &clk_rcg2_shared_ops, 821 + .ops = &clk_rcg2_ops, 822 822 }; 823 823 824 824 static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = { ··· 835 835 .parent_data = gcc_parent_data_0, 836 836 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 837 837 .flags = CLK_SET_RATE_PARENT, 838 - .ops = &clk_rcg2_shared_ops, 838 + .ops = &clk_rcg2_ops, 839 839 }; 840 840 841 841 static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = { ··· 852 852 .parent_data = gcc_parent_data_0, 853 853 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 854 854 .flags = CLK_SET_RATE_PARENT, 855 - .ops = &clk_rcg2_shared_ops, 855 + .ops = &clk_rcg2_ops, 856 856 }; 857 857 858 858 static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = { ··· 869 869 .parent_data = gcc_parent_data_0, 870 870 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 871 871 .flags = CLK_SET_RATE_PARENT, 872 - .ops = &clk_rcg2_shared_ops, 872 + .ops = &clk_rcg2_ops, 873 873 }; 874 874 875 875 static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = { ··· 886 886 .parent_data = gcc_parent_data_0, 887 887 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 888 888 .flags = CLK_SET_RATE_PARENT, 889 - .ops = &clk_rcg2_shared_ops, 889 + .ops = &clk_rcg2_ops, 890 890 }; 891 891 892 892 static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = { ··· 903 903 .parent_data = gcc_parent_data_0, 904 904 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 905 905 .flags = CLK_SET_RATE_PARENT, 906 - .ops = &clk_rcg2_shared_ops, 906 + .ops = &clk_rcg2_ops, 907 907 }; 908 908 909 909 static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = { ··· 920 920 .parent_data = gcc_parent_data_0, 921 921 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 922 922 .flags = CLK_SET_RATE_PARENT, 923 - .ops = &clk_rcg2_shared_ops, 923 + .ops = &clk_rcg2_ops, 924 924 }; 925 925 926 926 static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = { ··· 937 937 .parent_data = gcc_parent_data_0, 938 938 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 939 939 .flags = CLK_SET_RATE_PARENT, 940 - .ops = &clk_rcg2_shared_ops, 940 + .ops = &clk_rcg2_ops, 941 941 }; 942 942 943 943 static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = { ··· 975 975 .parent_data = gcc_parent_data_8, 976 976 .num_parents = ARRAY_SIZE(gcc_parent_data_8), 977 977 .flags = CLK_SET_RATE_PARENT, 978 - .ops = &clk_rcg2_shared_ops, 978 + .ops = &clk_rcg2_ops, 979 979 }; 980 980 981 981 static struct clk_rcg2 gcc_qupv3_wrap2_s6_clk_src = { ··· 992 992 .parent_data = gcc_parent_data_0, 993 993 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 994 994 .flags = CLK_SET_RATE_PARENT, 995 - .ops = &clk_rcg2_shared_ops, 995 + .ops = &clk_rcg2_ops, 996 996 }; 997 997 998 998 static struct clk_rcg2 gcc_qupv3_wrap2_s7_clk_src = { ··· 1159 1159 .parent_data = gcc_parent_data_0, 1160 1160 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1161 1161 .flags = CLK_SET_RATE_PARENT, 1162 - .ops = &clk_rcg2_shared_ops, 1162 + .ops = &clk_rcg2_shared_no_init_park_ops, 1163 1163 }, 1164 1164 1165 1165
+28 -28
drivers/clk/qcom/gcc-sm8650.c
··· 713 713 .parent_data = gcc_parent_data_0, 714 714 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 715 715 .flags = CLK_SET_RATE_PARENT, 716 - .ops = &clk_rcg2_shared_ops, 716 + .ops = &clk_rcg2_ops, 717 717 }, 718 718 }; 719 719 ··· 728 728 .parent_data = gcc_parent_data_0, 729 729 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 730 730 .flags = CLK_SET_RATE_PARENT, 731 - .ops = &clk_rcg2_shared_ops, 731 + .ops = &clk_rcg2_ops, 732 732 }, 733 733 }; 734 734 ··· 743 743 .parent_data = gcc_parent_data_0, 744 744 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 745 745 .flags = CLK_SET_RATE_PARENT, 746 - .ops = &clk_rcg2_shared_ops, 746 + .ops = &clk_rcg2_ops, 747 747 }, 748 748 }; 749 749 ··· 758 758 .parent_data = gcc_parent_data_0, 759 759 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 760 760 .flags = CLK_SET_RATE_PARENT, 761 - .ops = &clk_rcg2_shared_ops, 761 + .ops = &clk_rcg2_ops, 762 762 }, 763 763 }; 764 764 ··· 773 773 .parent_data = gcc_parent_data_0, 774 774 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 775 775 .flags = CLK_SET_RATE_PARENT, 776 - .ops = &clk_rcg2_shared_ops, 776 + .ops = &clk_rcg2_ops, 777 777 }, 778 778 }; 779 779 ··· 788 788 .parent_data = gcc_parent_data_0, 789 789 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 790 790 .flags = CLK_SET_RATE_PARENT, 791 - .ops = &clk_rcg2_shared_ops, 791 + .ops = &clk_rcg2_ops, 792 792 }, 793 793 }; 794 794 ··· 803 803 .parent_data = gcc_parent_data_0, 804 804 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 805 805 .flags = CLK_SET_RATE_PARENT, 806 - .ops = &clk_rcg2_shared_ops, 806 + .ops = &clk_rcg2_ops, 807 807 }, 808 808 }; 809 809 ··· 818 818 .parent_data = gcc_parent_data_0, 819 819 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 820 820 .flags = CLK_SET_RATE_PARENT, 821 - .ops = &clk_rcg2_shared_ops, 821 + .ops = &clk_rcg2_ops, 822 822 }, 823 823 }; 824 824 ··· 833 833 .parent_data = gcc_parent_data_0, 834 834 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 835 835 .flags = CLK_SET_RATE_PARENT, 836 - .ops = 
&clk_rcg2_shared_ops, 836 + .ops = &clk_rcg2_ops, 837 837 }, 838 838 }; 839 839 ··· 848 848 .parent_data = gcc_parent_data_0, 849 849 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 850 850 .flags = CLK_SET_RATE_PARENT, 851 - .ops = &clk_rcg2_shared_ops, 851 + .ops = &clk_rcg2_ops, 852 852 }, 853 853 }; 854 854 ··· 863 863 .parent_data = gcc_parent_data_0, 864 864 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 865 865 .flags = CLK_SET_RATE_PARENT, 866 - .ops = &clk_rcg2_shared_ops, 866 + .ops = &clk_rcg2_ops, 867 867 }; 868 868 869 869 static struct clk_rcg2 gcc_qupv3_wrap1_qspi_ref_clk_src = { ··· 899 899 .parent_data = gcc_parent_data_0, 900 900 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 901 901 .flags = CLK_SET_RATE_PARENT, 902 - .ops = &clk_rcg2_shared_ops, 902 + .ops = &clk_rcg2_ops, 903 903 }; 904 904 905 905 static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = { ··· 916 916 .parent_data = gcc_parent_data_0, 917 917 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 918 918 .flags = CLK_SET_RATE_PARENT, 919 - .ops = &clk_rcg2_shared_ops, 919 + .ops = &clk_rcg2_ops, 920 920 }; 921 921 922 922 static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = { ··· 948 948 .parent_data = gcc_parent_data_0, 949 949 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 950 950 .flags = CLK_SET_RATE_PARENT, 951 - .ops = &clk_rcg2_shared_ops, 951 + .ops = &clk_rcg2_ops, 952 952 }; 953 953 954 954 static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = { ··· 980 980 .parent_data = gcc_parent_data_0, 981 981 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 982 982 .flags = CLK_SET_RATE_PARENT, 983 - .ops = &clk_rcg2_shared_ops, 983 + .ops = &clk_rcg2_ops, 984 984 }; 985 985 986 986 static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = { ··· 997 997 .parent_data = gcc_parent_data_0, 998 998 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 999 999 .flags = CLK_SET_RATE_PARENT, 1000 - .ops = &clk_rcg2_shared_ops, 1000 + .ops = &clk_rcg2_ops, 1001 1001 }; 1002 1002 1003 1003 static struct clk_rcg2 
gcc_qupv3_wrap1_s5_clk_src = { ··· 1014 1014 .parent_data = gcc_parent_data_0, 1015 1015 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1016 1016 .flags = CLK_SET_RATE_PARENT, 1017 - .ops = &clk_rcg2_shared_ops, 1017 + .ops = &clk_rcg2_ops, 1018 1018 }; 1019 1019 1020 1020 static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = { ··· 1031 1031 .parent_data = gcc_parent_data_0, 1032 1032 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1033 1033 .flags = CLK_SET_RATE_PARENT, 1034 - .ops = &clk_rcg2_shared_ops, 1034 + .ops = &clk_rcg2_ops, 1035 1035 }; 1036 1036 1037 1037 static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = { ··· 1059 1059 .parent_data = gcc_parent_data_2, 1060 1060 .num_parents = ARRAY_SIZE(gcc_parent_data_2), 1061 1061 .flags = CLK_SET_RATE_PARENT, 1062 - .ops = &clk_rcg2_shared_ops, 1062 + .ops = &clk_rcg2_ops, 1063 1063 }, 1064 1064 }; 1065 1065 ··· 1068 1068 .parent_data = gcc_parent_data_0, 1069 1069 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1070 1070 .flags = CLK_SET_RATE_PARENT, 1071 - .ops = &clk_rcg2_shared_ops, 1071 + .ops = &clk_rcg2_ops, 1072 1072 }; 1073 1073 1074 1074 static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = { ··· 1085 1085 .parent_data = gcc_parent_data_0, 1086 1086 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1087 1087 .flags = CLK_SET_RATE_PARENT, 1088 - .ops = &clk_rcg2_shared_ops, 1088 + .ops = &clk_rcg2_ops, 1089 1089 }; 1090 1090 1091 1091 static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = { ··· 1102 1102 .parent_data = gcc_parent_data_0, 1103 1103 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1104 1104 .flags = CLK_SET_RATE_PARENT, 1105 - .ops = &clk_rcg2_shared_ops, 1105 + .ops = &clk_rcg2_ops, 1106 1106 }; 1107 1107 1108 1108 static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = { ··· 1119 1119 .parent_data = gcc_parent_data_0, 1120 1120 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1121 1121 .flags = CLK_SET_RATE_PARENT, 1122 - .ops = &clk_rcg2_shared_ops, 1122 + .ops = &clk_rcg2_ops, 1123 1123 }; 1124 1124 1125 1125 
static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = { ··· 1136 1136 .parent_data = gcc_parent_data_0, 1137 1137 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1138 1138 .flags = CLK_SET_RATE_PARENT, 1139 - .ops = &clk_rcg2_shared_ops, 1139 + .ops = &clk_rcg2_ops, 1140 1140 }; 1141 1141 1142 1142 static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = { ··· 1153 1153 .parent_data = gcc_parent_data_0, 1154 1154 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1155 1155 .flags = CLK_SET_RATE_PARENT, 1156 - .ops = &clk_rcg2_shared_ops, 1156 + .ops = &clk_rcg2_ops, 1157 1157 }; 1158 1158 1159 1159 static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = { ··· 1186 1186 .parent_data = gcc_parent_data_10, 1187 1187 .num_parents = ARRAY_SIZE(gcc_parent_data_10), 1188 1188 .flags = CLK_SET_RATE_PARENT, 1189 - .ops = &clk_rcg2_shared_ops, 1189 + .ops = &clk_rcg2_ops, 1190 1190 }; 1191 1191 1192 1192 static struct clk_rcg2 gcc_qupv3_wrap2_s6_clk_src = { ··· 1203 1203 .parent_data = gcc_parent_data_0, 1204 1204 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1205 1205 .flags = CLK_SET_RATE_PARENT, 1206 - .ops = &clk_rcg2_shared_ops, 1206 + .ops = &clk_rcg2_ops, 1207 1207 }; 1208 1208 1209 1209 static struct clk_rcg2 gcc_qupv3_wrap2_s7_clk_src = { ··· 1226 1226 .parent_data = gcc_parent_data_0, 1227 1227 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1228 1228 .flags = CLK_SET_RATE_PARENT, 1229 - .ops = &clk_rcg2_shared_ops, 1229 + .ops = &clk_rcg2_ops, 1230 1230 }; 1231 1231 1232 1232 static struct clk_rcg2 gcc_qupv3_wrap3_qspi_ref_clk_src = {
+26 -26
drivers/clk/qcom/gcc-x1e80100.c
··· 670 670 .parent_data = gcc_parent_data_0, 671 671 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 672 672 .flags = CLK_SET_RATE_PARENT, 673 - .ops = &clk_rcg2_shared_ops, 673 + .ops = &clk_rcg2_ops, 674 674 }; 675 675 676 676 static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = { ··· 687 687 .parent_data = gcc_parent_data_0, 688 688 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 689 689 .flags = CLK_SET_RATE_PARENT, 690 - .ops = &clk_rcg2_shared_ops, 690 + .ops = &clk_rcg2_ops, 691 691 }; 692 692 693 693 static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = { ··· 719 719 .parent_data = gcc_parent_data_0, 720 720 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 721 721 .flags = CLK_SET_RATE_PARENT, 722 - .ops = &clk_rcg2_shared_ops, 722 + .ops = &clk_rcg2_ops, 723 723 }; 724 724 725 725 static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = { ··· 736 736 .parent_data = gcc_parent_data_0, 737 737 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 738 738 .flags = CLK_SET_RATE_PARENT, 739 - .ops = &clk_rcg2_shared_ops, 739 + .ops = &clk_rcg2_ops, 740 740 }; 741 741 742 742 static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = { ··· 768 768 .parent_data = gcc_parent_data_0, 769 769 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 770 770 .flags = CLK_SET_RATE_PARENT, 771 - .ops = &clk_rcg2_shared_ops, 771 + .ops = &clk_rcg2_ops, 772 772 }; 773 773 774 774 static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = { ··· 785 785 .parent_data = gcc_parent_data_0, 786 786 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 787 787 .flags = CLK_SET_RATE_PARENT, 788 - .ops = &clk_rcg2_shared_ops, 788 + .ops = &clk_rcg2_ops, 789 789 }; 790 790 791 791 static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = { ··· 802 802 .parent_data = gcc_parent_data_0, 803 803 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 804 804 .flags = CLK_SET_RATE_PARENT, 805 - .ops = &clk_rcg2_shared_ops, 805 + .ops = &clk_rcg2_ops, 806 806 }; 807 807 808 808 static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = { ··· 819 819 .parent_data 
= gcc_parent_data_0, 820 820 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 821 821 .flags = CLK_SET_RATE_PARENT, 822 - .ops = &clk_rcg2_shared_ops, 822 + .ops = &clk_rcg2_ops, 823 823 }; 824 824 825 825 static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = { ··· 836 836 .parent_data = gcc_parent_data_0, 837 837 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 838 838 .flags = CLK_SET_RATE_PARENT, 839 - .ops = &clk_rcg2_shared_ops, 839 + .ops = &clk_rcg2_ops, 840 840 }; 841 841 842 842 static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = { ··· 853 853 .parent_data = gcc_parent_data_0, 854 854 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 855 855 .flags = CLK_SET_RATE_PARENT, 856 - .ops = &clk_rcg2_shared_ops, 856 + .ops = &clk_rcg2_ops, 857 857 }; 858 858 859 859 static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = { ··· 870 870 .parent_data = gcc_parent_data_0, 871 871 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 872 872 .flags = CLK_SET_RATE_PARENT, 873 - .ops = &clk_rcg2_shared_ops, 873 + .ops = &clk_rcg2_ops, 874 874 }; 875 875 876 876 static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = { ··· 887 887 .parent_data = gcc_parent_data_0, 888 888 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 889 889 .flags = CLK_SET_RATE_PARENT, 890 - .ops = &clk_rcg2_shared_ops, 890 + .ops = &clk_rcg2_ops, 891 891 }; 892 892 893 893 static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = { ··· 904 904 .parent_data = gcc_parent_data_0, 905 905 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 906 906 .flags = CLK_SET_RATE_PARENT, 907 - .ops = &clk_rcg2_shared_ops, 907 + .ops = &clk_rcg2_ops, 908 908 }; 909 909 910 910 static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = { ··· 921 921 .parent_data = gcc_parent_data_0, 922 922 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 923 923 .flags = CLK_SET_RATE_PARENT, 924 - .ops = &clk_rcg2_shared_ops, 924 + .ops = &clk_rcg2_ops, 925 925 }; 926 926 927 927 static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = { ··· 938 938 .parent_data = gcc_parent_data_0, 939 
939 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 940 940 .flags = CLK_SET_RATE_PARENT, 941 - .ops = &clk_rcg2_shared_ops, 941 + .ops = &clk_rcg2_ops, 942 942 }; 943 943 944 944 static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = { ··· 955 955 .parent_data = gcc_parent_data_0, 956 956 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 957 957 .flags = CLK_SET_RATE_PARENT, 958 - .ops = &clk_rcg2_shared_ops, 958 + .ops = &clk_rcg2_ops, 959 959 }; 960 960 961 961 static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = { ··· 972 972 .parent_data = gcc_parent_data_0, 973 973 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 974 974 .flags = CLK_SET_RATE_PARENT, 975 - .ops = &clk_rcg2_shared_ops, 975 + .ops = &clk_rcg2_ops, 976 976 }; 977 977 978 978 static struct clk_rcg2 gcc_qupv3_wrap2_s0_clk_src = { ··· 989 989 .parent_data = gcc_parent_data_0, 990 990 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 991 991 .flags = CLK_SET_RATE_PARENT, 992 - .ops = &clk_rcg2_shared_ops, 992 + .ops = &clk_rcg2_ops, 993 993 }; 994 994 995 995 static struct clk_rcg2 gcc_qupv3_wrap2_s1_clk_src = { ··· 1006 1006 .parent_data = gcc_parent_data_0, 1007 1007 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1008 1008 .flags = CLK_SET_RATE_PARENT, 1009 - .ops = &clk_rcg2_shared_ops, 1009 + .ops = &clk_rcg2_ops, 1010 1010 }; 1011 1011 1012 1012 static struct clk_rcg2 gcc_qupv3_wrap2_s2_clk_src = { ··· 1023 1023 .parent_data = gcc_parent_data_0, 1024 1024 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1025 1025 .flags = CLK_SET_RATE_PARENT, 1026 - .ops = &clk_rcg2_shared_ops, 1026 + .ops = &clk_rcg2_ops, 1027 1027 }; 1028 1028 1029 1029 static struct clk_rcg2 gcc_qupv3_wrap2_s3_clk_src = { ··· 1040 1040 .parent_data = gcc_parent_data_0, 1041 1041 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1042 1042 .flags = CLK_SET_RATE_PARENT, 1043 - .ops = &clk_rcg2_shared_ops, 1043 + .ops = &clk_rcg2_ops, 1044 1044 }; 1045 1045 1046 1046 static struct clk_rcg2 gcc_qupv3_wrap2_s4_clk_src = { ··· 1057 1057 .parent_data = 
gcc_parent_data_0, 1058 1058 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1059 1059 .flags = CLK_SET_RATE_PARENT, 1060 - .ops = &clk_rcg2_shared_ops, 1060 + .ops = &clk_rcg2_ops, 1061 1061 }; 1062 1062 1063 1063 static struct clk_rcg2 gcc_qupv3_wrap2_s5_clk_src = { ··· 1074 1074 .parent_data = gcc_parent_data_8, 1075 1075 .num_parents = ARRAY_SIZE(gcc_parent_data_8), 1076 1076 .flags = CLK_SET_RATE_PARENT, 1077 - .ops = &clk_rcg2_shared_ops, 1077 + .ops = &clk_rcg2_ops, 1078 1078 }; 1079 1079 1080 1080 static struct clk_rcg2 gcc_qupv3_wrap2_s6_clk_src = { ··· 1091 1091 .parent_data = gcc_parent_data_0, 1092 1092 .num_parents = ARRAY_SIZE(gcc_parent_data_0), 1093 1093 .flags = CLK_SET_RATE_PARENT, 1094 - .ops = &clk_rcg2_shared_ops, 1094 + .ops = &clk_rcg2_ops, 1095 1095 }; 1096 1096 1097 1097 static struct clk_rcg2 gcc_qupv3_wrap2_s7_clk_src = { ··· 6203 6203 .pd = { 6204 6204 .name = "gcc_usb_0_phy_gdsc", 6205 6205 }, 6206 - .pwrsts = PWRSTS_OFF_ON, 6206 + .pwrsts = PWRSTS_RET_ON, 6207 6207 .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE, 6208 6208 }; 6209 6209 ··· 6215 6215 .pd = { 6216 6216 .name = "gcc_usb_1_phy_gdsc", 6217 6217 }, 6218 - .pwrsts = PWRSTS_OFF_ON, 6218 + .pwrsts = PWRSTS_RET_ON, 6219 6219 .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE, 6220 6220 }; 6221 6221
+30 -1
drivers/clk/starfive/clk-starfive-jh7110-sys.c
··· 385 385 } 386 386 EXPORT_SYMBOL_GPL(jh7110_reset_controller_register); 387 387 388 + /* 389 + * This clock notifier is called when the rate of PLL0 clock is to be changed. 390 + * The cpu_root clock should save the curent parent clock and switch its parent 391 + * clock to osc before PLL0 rate will be changed. Then switch its parent clock 392 + * back after the PLL0 rate is completed. 393 + */ 394 + static int jh7110_pll0_clk_notifier_cb(struct notifier_block *nb, 395 + unsigned long action, void *data) 396 + { 397 + struct jh71x0_clk_priv *priv = container_of(nb, struct jh71x0_clk_priv, pll_clk_nb); 398 + struct clk *cpu_root = priv->reg[JH7110_SYSCLK_CPU_ROOT].hw.clk; 399 + int ret = 0; 400 + 401 + if (action == PRE_RATE_CHANGE) { 402 + struct clk *osc = clk_get(priv->dev, "osc"); 403 + 404 + priv->original_clk = clk_get_parent(cpu_root); 405 + ret = clk_set_parent(cpu_root, osc); 406 + clk_put(osc); 407 + } else if (action == POST_RATE_CHANGE) { 408 + ret = clk_set_parent(cpu_root, priv->original_clk); 409 + } 410 + 411 + return notifier_from_errno(ret); 412 + } 413 + 388 414 static int __init jh7110_syscrg_probe(struct platform_device *pdev) 389 415 { 390 416 struct jh71x0_clk_priv *priv; ··· 439 413 if (IS_ERR(priv->pll[0])) 440 414 return PTR_ERR(priv->pll[0]); 441 415 } else { 442 - clk_put(pllclk); 416 + priv->pll_clk_nb.notifier_call = jh7110_pll0_clk_notifier_cb; 417 + ret = clk_notifier_register(pllclk, &priv->pll_clk_nb); 418 + if (ret) 419 + return ret; 443 420 priv->pll[0] = NULL; 444 421 } 445 422
+2
drivers/clk/starfive/clk-starfive-jh71x0.h
··· 114 114 spinlock_t rmw_lock; 115 115 struct device *dev; 116 116 void __iomem *base; 117 + struct clk *original_clk; 118 + struct notifier_block pll_clk_nb; 117 119 struct clk_hw *pll[3]; 118 120 struct jh71x0_clk reg[]; 119 121 };
+12 -4
drivers/clocksource/timer-imx-tpm.c
··· 83 83 static int tpm_set_next_event(unsigned long delta, 84 84 struct clock_event_device *evt) 85 85 { 86 - unsigned long next, now; 86 + unsigned long next, prev, now; 87 87 88 - next = tpm_read_counter(); 89 - next += delta; 88 + prev = tpm_read_counter(); 89 + next = prev + delta; 90 90 writel(next, timer_base + TPM_C0V); 91 91 now = tpm_read_counter(); 92 + 93 + /* 94 + * Need to wait CNT increase at least 1 cycle to make sure 95 + * the C0V has been updated into HW. 96 + */ 97 + if ((next & 0xffffffff) != readl(timer_base + TPM_C0V)) 98 + while (now == tpm_read_counter()) 99 + ; 92 100 93 101 /* 94 102 * NOTE: We observed in a very small probability, the bus fabric ··· 104 96 * of writing CNT registers which may cause the min_delta event got 105 97 * missed, so we need add a ETIME check here in case it happened. 106 98 */ 107 - return (int)(next - now) <= 0 ? -ETIME : 0; 99 + return (now - prev) >= delta ? -ETIME : 0; 108 100 } 109 101 110 102 static int tpm_set_state_oneshot(struct clock_event_device *evt)
+4 -13
drivers/clocksource/timer-of.c
··· 25 25 26 26 struct clock_event_device *clkevt = &to->clkevt; 27 27 28 - if (of_irq->percpu) 29 - free_percpu_irq(of_irq->irq, clkevt); 30 - else 31 - free_irq(of_irq->irq, clkevt); 28 + free_irq(of_irq->irq, clkevt); 32 29 } 33 30 34 31 /** ··· 38 41 * 39 42 * - Get interrupt number by name 40 43 * - Get interrupt number by index 41 - * 42 - * When the interrupt is per CPU, 'request_percpu_irq()' is called, 43 - * otherwise 'request_irq()' is used. 44 44 * 45 45 * Returns 0 on success, < 0 otherwise 46 46 */ ··· 63 69 return -EINVAL; 64 70 } 65 71 66 - ret = of_irq->percpu ? 67 - request_percpu_irq(of_irq->irq, of_irq->handler, 68 - np->full_name, clkevt) : 69 - request_irq(of_irq->irq, of_irq->handler, 70 - of_irq->flags ? of_irq->flags : IRQF_TIMER, 71 - np->full_name, clkevt); 72 + ret = request_irq(of_irq->irq, of_irq->handler, 73 + of_irq->flags ? of_irq->flags : IRQF_TIMER, 74 + np->full_name, clkevt); 72 75 if (ret) { 73 76 pr_err("Failed to request irq %d for %pOF\n", of_irq->irq, np); 74 77 return ret;
-1
drivers/clocksource/timer-of.h
··· 11 11 struct of_timer_irq { 12 12 int irq; 13 13 int index; 14 - int percpu; 15 14 const char *name; 16 15 unsigned long flags; 17 16 irq_handler_t handler;
+24 -10
drivers/cpufreq/amd-pstate.c
··· 1834 1834 } 1835 1835 1836 1836 /* 1837 - * If the CPPC feature is disabled in the BIOS for processors that support MSR-based CPPC, 1838 - * the AMD Pstate driver may not function correctly. 1839 - * Check the CPPC flag and display a warning message if the platform supports CPPC. 1840 - * Note: below checking code will not abort the driver registeration process because of 1841 - * the code is added for debugging purposes. 1837 + * If the CPPC feature is disabled in the BIOS for processors 1838 + * that support MSR-based CPPC, the AMD Pstate driver may not 1839 + * function correctly. 1840 + * 1841 + * For such processors, check the CPPC flag and display a 1842 + * warning message if the platform supports CPPC. 1843 + * 1844 + * Note: The code check below will not abort the driver 1845 + * registration process because of the code is added for 1846 + * debugging purposes. Besides, it may still be possible for 1847 + * the driver to work using the shared-memory mechanism. 1842 1848 */ 1843 1849 if (!cpu_feature_enabled(X86_FEATURE_CPPC)) { 1844 - if (cpu_feature_enabled(X86_FEATURE_ZEN1) || cpu_feature_enabled(X86_FEATURE_ZEN2)) { 1845 - if (c->x86_model > 0x60 && c->x86_model < 0xaf) 1850 + if (cpu_feature_enabled(X86_FEATURE_ZEN2)) { 1851 + switch (c->x86_model) { 1852 + case 0x60 ... 0x6F: 1853 + case 0x80 ... 0xAF: 1846 1854 warn = true; 1847 - } else if (cpu_feature_enabled(X86_FEATURE_ZEN3) || cpu_feature_enabled(X86_FEATURE_ZEN4)) { 1848 - if ((c->x86_model > 0x10 && c->x86_model < 0x1F) || 1849 - (c->x86_model > 0x40 && c->x86_model < 0xaf)) 1855 + break; 1856 + } 1857 + } else if (cpu_feature_enabled(X86_FEATURE_ZEN3) || 1858 + cpu_feature_enabled(X86_FEATURE_ZEN4)) { 1859 + switch (c->x86_model) { 1860 + case 0x10 ... 0x1F: 1861 + case 0x40 ... 0xAF: 1850 1862 warn = true; 1863 + break; 1864 + } 1851 1865 } else if (cpu_feature_enabled(X86_FEATURE_ZEN5)) { 1852 1866 warn = true; 1853 1867 }
+1
drivers/gpio/gpio-rockchip.c
··· 713 713 return -ENODEV; 714 714 715 715 pctldev = of_pinctrl_get(pctlnp); 716 + of_node_put(pctlnp); 716 717 if (!pctldev) 717 718 return -EPROBE_DEFER; 718 719
+1
drivers/gpio/gpio-zynqmp-modepin.c
··· 146 146 { .compatible = "xlnx,zynqmp-gpio-modepin", }, 147 147 { } 148 148 }; 149 + MODULE_DEVICE_TABLE(of, modepin_platform_id); 149 150 150 151 static struct platform_driver modepin_platform_driver = { 151 152 .driver = {
-1
drivers/gpu/drm/Makefile
··· 128 128 drm_kms_helper-y := \ 129 129 drm_atomic_helper.o \ 130 130 drm_atomic_state_helper.o \ 131 - drm_bridge_connector.o \ 132 131 drm_crtc_helper.o \ 133 132 drm_damage_helper.o \ 134 133 drm_encoder_slave.o \
+3
drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
··· 348 348 return -EINVAL; 349 349 } 350 350 351 + /* always clear VRAM */ 352 + flags |= AMDGPU_GEM_CREATE_VRAM_CLEARED; 353 + 351 354 /* create a gem object to contain this object in */ 352 355 if (args->in.domains & (AMDGPU_GEM_DOMAIN_GDS | 353 356 AMDGPU_GEM_DOMAIN_GWS | AMDGPU_GEM_DOMAIN_OA)) {
+2 -2
drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
··· 657 657 uint64_t queue_mask = 0; 658 658 int r, i, j; 659 659 660 - if (adev->enable_mes) 660 + if (adev->mes.enable_legacy_queue_map) 661 661 return amdgpu_gfx_mes_enable_kcq(adev, xcc_id); 662 662 663 663 if (!kiq->pmf || !kiq->pmf->kiq_map_queues || !kiq->pmf->kiq_set_resources) ··· 719 719 720 720 amdgpu_device_flush_hdp(adev, NULL); 721 721 722 - if (adev->enable_mes) { 722 + if (adev->mes.enable_legacy_queue_map) { 723 723 for (i = 0; i < adev->gfx.num_gfx_rings; i++) { 724 724 j = i + xcc_id * adev->gfx.num_gfx_rings; 725 725 r = amdgpu_mes_map_legacy_queue(adev,
+1
drivers/gpu/drm/amd/amdgpu/amdgpu_mes.h
··· 75 75 76 76 uint32_t sched_version; 77 77 uint32_t kiq_version; 78 + bool enable_legacy_queue_map; 78 79 79 80 uint32_t total_max_queue; 80 81 uint32_t max_doorbell_slices;
+34 -15
drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
··· 693 693 (void **)&adev->mes.ucode_fw_ptr[pipe]); 694 694 } 695 695 696 + static void mes_v11_0_get_fw_version(struct amdgpu_device *adev) 697 + { 698 + int pipe; 699 + 700 + /* get MES scheduler/KIQ versions */ 701 + mutex_lock(&adev->srbm_mutex); 702 + 703 + for (pipe = 0; pipe < AMDGPU_MAX_MES_PIPES; pipe++) { 704 + soc21_grbm_select(adev, 3, pipe, 0, 0); 705 + 706 + if (pipe == AMDGPU_MES_SCHED_PIPE) 707 + adev->mes.sched_version = 708 + RREG32_SOC15(GC, 0, regCP_MES_GP3_LO); 709 + else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq) 710 + adev->mes.kiq_version = 711 + RREG32_SOC15(GC, 0, regCP_MES_GP3_LO); 712 + } 713 + 714 + soc21_grbm_select(adev, 0, 0, 0, 0); 715 + mutex_unlock(&adev->srbm_mutex); 716 + } 717 + 696 718 static void mes_v11_0_enable(struct amdgpu_device *adev, bool enable) 697 719 { 698 720 uint64_t ucode_addr; ··· 1084 1062 mes_v11_0_queue_init_register(ring); 1085 1063 } 1086 1064 1087 - /* get MES scheduler/KIQ versions */ 1088 - mutex_lock(&adev->srbm_mutex); 1089 - soc21_grbm_select(adev, 3, pipe, 0, 0); 1090 - 1091 - if (pipe == AMDGPU_MES_SCHED_PIPE) 1092 - adev->mes.sched_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO); 1093 - else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq) 1094 - adev->mes.kiq_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO); 1095 - 1096 - soc21_grbm_select(adev, 0, 0, 0, 0); 1097 - mutex_unlock(&adev->srbm_mutex); 1098 - 1099 1065 return 0; 1100 1066 } 1101 1067 ··· 1330 1320 1331 1321 mes_v11_0_enable(adev, true); 1332 1322 1323 + mes_v11_0_get_fw_version(adev); 1324 + 1333 1325 mes_v11_0_kiq_setting(&adev->gfx.kiq[0].ring); 1334 1326 1335 1327 r = mes_v11_0_queue_init(adev, AMDGPU_MES_KIQ_PIPE); 1336 1328 if (r) 1337 1329 goto failure; 1338 1330 1339 - r = mes_v11_0_hw_init(adev); 1340 - if (r) 1341 - goto failure; 1331 + if ((adev->mes.sched_version & AMDGPU_MES_VERSION_MASK) >= 0x47) 1332 + adev->mes.enable_legacy_queue_map = true; 1333 + else 1334 + adev->mes.enable_legacy_queue_map = 
false; 1335 + 1336 + if (adev->mes.enable_legacy_queue_map) { 1337 + r = mes_v11_0_hw_init(adev); 1338 + if (r) 1339 + goto failure; 1340 + } 1342 1341 1343 1342 return r; 1344 1343
+6 -3
drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
··· 1266 1266 adev->mes.funcs = &mes_v12_0_funcs; 1267 1267 adev->mes.kiq_hw_init = &mes_v12_0_kiq_hw_init; 1268 1268 adev->mes.kiq_hw_fini = &mes_v12_0_kiq_hw_fini; 1269 + adev->mes.enable_legacy_queue_map = true; 1269 1270 1270 1271 adev->mes.event_log_size = AMDGPU_MES_LOG_BUFFER_SIZE; 1271 1272 ··· 1423 1422 mes_v12_0_set_hw_resources_1(&adev->mes, AMDGPU_MES_KIQ_PIPE); 1424 1423 } 1425 1424 1426 - r = mes_v12_0_hw_init(adev); 1427 - if (r) 1428 - goto failure; 1425 + if (adev->mes.enable_legacy_queue_map) { 1426 + r = mes_v12_0_hw_init(adev); 1427 + if (r) 1428 + goto failure; 1429 + } 1429 1430 1430 1431 return r; 1431 1432
+37 -2
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
··· 1752 1752 return bb; 1753 1753 } 1754 1754 1755 + static enum dmub_ips_disable_type dm_get_default_ips_mode( 1756 + struct amdgpu_device *adev) 1757 + { 1758 + /* 1759 + * On DCN35 systems with Z8 enabled, it's possible for IPS2 + Z8 to 1760 + * cause a hard hang. A fix exists for newer PMFW. 1761 + * 1762 + * As a workaround, for non-fixed PMFW, force IPS1+RCG as the deepest 1763 + * IPS state in all cases, except for s0ix and all displays off (DPMS), 1764 + * where IPS2 is allowed. 1765 + * 1766 + * When checking pmfw version, use the major and minor only. 1767 + */ 1768 + if (amdgpu_ip_version(adev, DCE_HWIP, 0) == IP_VERSION(3, 5, 0) && 1769 + (adev->pm.fw_version & 0x00FFFF00) < 0x005D6300) 1770 + return DMUB_IPS_RCG_IN_ACTIVE_IPS2_IN_OFF; 1771 + 1772 + if (amdgpu_ip_version(adev, DCE_HWIP, 0) >= IP_VERSION(3, 5, 0)) 1773 + return DMUB_IPS_ENABLE; 1774 + 1775 + /* ASICs older than DCN35 do not have IPSs */ 1776 + return DMUB_IPS_DISABLE_ALL; 1777 + } 1778 + 1755 1779 static int amdgpu_dm_init(struct amdgpu_device *adev) 1756 1780 { 1757 1781 struct dc_init_data init_data; ··· 1887 1863 if (amdgpu_dc_debug_mask & DC_DISABLE_IPS) 1888 1864 init_data.flags.disable_ips = DMUB_IPS_DISABLE_ALL; 1889 1865 else 1890 - init_data.flags.disable_ips = DMUB_IPS_ENABLE; 1866 + init_data.flags.disable_ips = dm_get_default_ips_mode(adev); 1891 1867 1892 1868 init_data.flags.disable_ips_in_vpb = 0; 1893 1869 ··· 4516 4492 struct amdgpu_dm_backlight_caps caps; 4517 4493 struct dc_link *link; 4518 4494 u32 brightness; 4519 - bool rc; 4495 + bool rc, reallow_idle = false; 4520 4496 4521 4497 amdgpu_dm_update_backlight_caps(dm, bl_idx); 4522 4498 caps = dm->backlight_caps[bl_idx]; ··· 4529 4505 link = (struct dc_link *)dm->backlight_link[bl_idx]; 4530 4506 4531 4507 /* Change brightness based on AUX property */ 4508 + mutex_lock(&dm->dc_lock); 4509 + if (dm->dc->caps.ips_support && dm->dc->ctx->dmub_srv->idle_allowed) { 4510 + dc_allow_idle_optimizations(dm->dc, false); 4511 + 
reallow_idle = true; 4512 + } 4513 + 4532 4514 if (caps.aux_support) { 4533 4515 rc = dc_link_set_backlight_level_nits(link, true, brightness, 4534 4516 AUX_BL_DEFAULT_TRANSITION_TIME_MS); ··· 4545 4515 if (!rc) 4546 4516 DRM_DEBUG("DM: Failed to update backlight on eDP[%d]\n", bl_idx); 4547 4517 } 4518 + 4519 + if (dm->dc->caps.ips_support && reallow_idle) 4520 + dc_allow_idle_optimizations(dm->dc, true); 4521 + 4522 + mutex_unlock(&dm->dc_lock); 4548 4523 4549 4524 if (rc) 4550 4525 dm->actual_brightness[bl_idx] = user_brightness;
+2 -1
drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_pmo/dml2_pmo_dcn4_fams2.c
··· 811 811 for (j = i + 1; j < display_config->display_config.num_streams; j++) { 812 812 if (memcmp(master_timing, 813 813 &display_config->display_config.stream_descriptors[j].timing, 814 - sizeof(struct dml2_timing_cfg)) == 0) { 814 + sizeof(struct dml2_timing_cfg)) == 0 && 815 + display_config->display_config.stream_descriptors[i].output.output_encoder == display_config->display_config.stream_descriptors[j].output.output_encoder) { 815 816 set_bit_in_bitfield(&pmo->scratch.pmo_dcn4.synchronized_timing_group_masks[timing_group_idx], j); 816 817 set_bit_in_bitfield(&stream_mapped_mask, j); 817 818 }
+4 -2
drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
··· 2266 2266 smu_dpm_ctx->dpm_level = level; 2267 2267 } 2268 2268 2269 - if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) { 2269 + if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL && 2270 + smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) { 2270 2271 index = fls(smu->workload_mask); 2271 2272 index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0; 2272 2273 workload[0] = smu->workload_setting[index]; ··· 2346 2345 workload[0] = smu->workload_setting[index]; 2347 2346 } 2348 2347 2349 - if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) 2348 + if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL && 2349 + smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) 2350 2350 smu_bump_power_profile_mode(smu, workload, 0); 2351 2351 2352 2352 return 0;
+7 -3
drivers/gpu/drm/arm/display/komeda/komeda_kms.c
··· 160 160 struct drm_plane *plane; 161 161 struct list_head zorder_list; 162 162 int order = 0, err; 163 + u32 slave_zpos = 0; 163 164 164 165 DRM_DEBUG_ATOMIC("[CRTC:%d:%s] calculating normalized zpos values\n", 165 166 crtc->base.id, crtc->name); ··· 200 199 plane_st->zpos, plane_st->normalized_zpos); 201 200 202 201 /* calculate max slave zorder */ 203 - if (has_bit(drm_plane_index(plane), kcrtc->slave_planes)) 202 + if (has_bit(drm_plane_index(plane), kcrtc->slave_planes)) { 203 + slave_zpos = plane_st->normalized_zpos; 204 + if (to_kplane_st(plane_st)->layer_split) 205 + slave_zpos++; 204 206 kcrtc_st->max_slave_zorder = 205 - max(plane_st->normalized_zpos, 206 - kcrtc_st->max_slave_zorder); 207 + max(slave_zpos, kcrtc_st->max_slave_zorder); 208 + } 207 209 } 208 210 209 211 crtc_st->zpos_changed = true;
+1
drivers/gpu/drm/bridge/Kconfig
··· 390 390 depends on OF 391 391 select DRM_DISPLAY_DP_HELPER 392 392 select DRM_DISPLAY_HELPER 393 + select DRM_BRIDGE_CONNECTOR 393 394 select DRM_KMS_HELPER 394 395 select REGMAP_I2C 395 396 select DRM_PANEL
+14 -10
drivers/gpu/drm/display/Kconfig
··· 1 1 # SPDX-License-Identifier: MIT 2 2 3 + config DRM_DISPLAY_DP_AUX_BUS 4 + tristate 5 + depends on DRM 6 + depends on OF || COMPILE_TEST 7 + 3 8 config DRM_DISPLAY_HELPER 4 9 tristate 5 10 depends on DRM 6 11 help 7 12 DRM helpers for display adapters. 8 13 9 - config DRM_DISPLAY_DP_AUX_BUS 10 - tristate 11 - depends on DRM 12 - depends on OF || COMPILE_TEST 14 + if DRM_DISPLAY_HELPER 15 + 16 + config DRM_BRIDGE_CONNECTOR 17 + bool 18 + select DRM_DISPLAY_HDMI_STATE_HELPER 19 + help 20 + DRM connector implementation terminating DRM bridge chains. 13 21 14 22 config DRM_DISPLAY_DP_AUX_CEC 15 23 bool "Enable DisplayPort CEC-Tunneling-over-AUX HDMI support" 16 - depends on DRM && DRM_DISPLAY_HELPER 17 24 select DRM_DISPLAY_DP_HELPER 18 25 select CEC_CORE 19 26 help ··· 32 25 33 26 config DRM_DISPLAY_DP_AUX_CHARDEV 34 27 bool "DRM DP AUX Interface" 35 - depends on DRM && DRM_DISPLAY_HELPER 36 28 select DRM_DISPLAY_DP_HELPER 37 29 help 38 30 Choose this option to enable a /dev/drm_dp_auxN node that allows to ··· 40 34 41 35 config DRM_DISPLAY_DP_HELPER 42 36 bool 43 - depends on DRM_DISPLAY_HELPER 44 37 help 45 38 DRM display helpers for DisplayPort. 46 39 ··· 66 61 67 62 config DRM_DISPLAY_HDCP_HELPER 68 63 bool 69 - depends on DRM_DISPLAY_HELPER 70 64 help 71 65 DRM display helpers for HDCP. 72 66 73 67 config DRM_DISPLAY_HDMI_HELPER 74 68 bool 75 - depends on DRM_DISPLAY_HELPER 76 69 help 77 70 DRM display helpers for HDMI. 78 71 79 72 config DRM_DISPLAY_HDMI_STATE_HELPER 80 73 bool 81 - depends on DRM_DISPLAY_HELPER 82 74 select DRM_DISPLAY_HDMI_HELPER 83 75 help 84 76 DRM KMS state helpers for HDMI. 77 + 78 + endif # DRM_DISPLAY_HELPER
+2
drivers/gpu/drm/display/Makefile
··· 3 3 obj-$(CONFIG_DRM_DISPLAY_DP_AUX_BUS) += drm_dp_aux_bus.o 4 4 5 5 drm_display_helper-y := drm_display_helper_mod.o 6 + drm_display_helper-$(CONFIG_DRM_BRIDGE_CONNECTOR) += \ 7 + drm_bridge_connector.o 6 8 drm_display_helper-$(CONFIG_DRM_DISPLAY_DP_HELPER) += \ 7 9 drm_dp_dual_mode_helper.o \ 8 10 drm_dp_helper.o \
+12 -1
drivers/gpu/drm/drm_bridge_connector.c drivers/gpu/drm/display/drm_bridge_connector.c
··· 216 216 } 217 217 } 218 218 219 + static void drm_bridge_connector_reset(struct drm_connector *connector) 220 + { 221 + struct drm_bridge_connector *bridge_connector = 222 + to_drm_bridge_connector(connector); 223 + 224 + drm_atomic_helper_connector_reset(connector); 225 + if (bridge_connector->bridge_hdmi) 226 + __drm_atomic_helper_connector_hdmi_reset(connector, 227 + connector->state); 228 + } 229 + 219 230 static const struct drm_connector_funcs drm_bridge_connector_funcs = { 220 - .reset = drm_atomic_helper_connector_reset, 231 + .reset = drm_bridge_connector_reset, 221 232 .detect = drm_bridge_connector_detect, 222 233 .fill_modes = drm_helper_probe_single_connector_modes, 223 234 .atomic_duplicate_state = drm_atomic_helper_connector_duplicate_state,
+64 -19
drivers/gpu/drm/drm_fbdev_dma.c
··· 36 36 return 0; 37 37 } 38 38 39 - FB_GEN_DEFAULT_DEFERRED_DMAMEM_OPS(drm_fbdev_dma, 40 - drm_fb_helper_damage_range, 41 - drm_fb_helper_damage_area); 42 - 43 39 static int drm_fbdev_dma_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) 44 40 { 45 41 struct drm_fb_helper *fb_helper = info->par; 46 - struct drm_framebuffer *fb = fb_helper->fb; 47 - struct drm_gem_dma_object *dma = drm_fb_dma_get_gem_obj(fb, 0); 48 42 49 - if (!dma->map_noncoherent) 50 - vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot); 51 - 52 - return fb_deferred_io_mmap(info, vma); 43 + return drm_gem_prime_mmap(fb_helper->buffer->gem, vma); 53 44 } 54 45 55 46 static void drm_fbdev_dma_fb_destroy(struct fb_info *info) ··· 64 73 .owner = THIS_MODULE, 65 74 .fb_open = drm_fbdev_dma_fb_open, 66 75 .fb_release = drm_fbdev_dma_fb_release, 76 + __FB_DEFAULT_DMAMEM_OPS_RDWR, 77 + DRM_FB_HELPER_DEFAULT_OPS, 78 + __FB_DEFAULT_DMAMEM_OPS_DRAW, 79 + .fb_mmap = drm_fbdev_dma_fb_mmap, 80 + .fb_destroy = drm_fbdev_dma_fb_destroy, 81 + }; 82 + 83 + FB_GEN_DEFAULT_DEFERRED_DMAMEM_OPS(drm_fbdev_dma, 84 + drm_fb_helper_damage_range, 85 + drm_fb_helper_damage_area); 86 + 87 + static int drm_fbdev_dma_deferred_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) 88 + { 89 + struct drm_fb_helper *fb_helper = info->par; 90 + struct drm_framebuffer *fb = fb_helper->fb; 91 + struct drm_gem_dma_object *dma = drm_fb_dma_get_gem_obj(fb, 0); 92 + 93 + if (!dma->map_noncoherent) 94 + vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot); 95 + 96 + return fb_deferred_io_mmap(info, vma); 97 + } 98 + 99 + static const struct fb_ops drm_fbdev_dma_deferred_fb_ops = { 100 + .owner = THIS_MODULE, 101 + .fb_open = drm_fbdev_dma_fb_open, 102 + .fb_release = drm_fbdev_dma_fb_release, 67 103 __FB_DEFAULT_DEFERRED_OPS_RDWR(drm_fbdev_dma), 68 104 DRM_FB_HELPER_DEFAULT_OPS, 69 105 __FB_DEFAULT_DEFERRED_OPS_DRAW(drm_fbdev_dma), 70 106 + .fb_mmap = drm_fbdev_dma_deferred_fb_mmap, 71 107 .fb_destroy = drm_fbdev_dma_fb_destroy, 72 108 }; 73 109 ··· 107 89 { 108 90 struct drm_client_dev *client = &fb_helper->client; 109 91 struct drm_device *dev = fb_helper->dev; 92 + bool use_deferred_io = false; 110 93 struct drm_client_buffer *buffer; 111 94 struct drm_gem_dma_object *dma_obj; 112 95 struct drm_framebuffer *fb; ··· 130 111 131 112 fb = buffer->fb; 132 113 114 + /* 115 + * Deferred I/O requires struct page for framebuffer memory, 116 + * which is not guaranteed for all DMA ranges. We thus only 117 + * install deferred I/O if we have a framebuffer that requires 118 + * it. 119 + */ 120 + if (fb->funcs->dirty) 121 + use_deferred_io = true; 122 + 133 123 ret = drm_client_buffer_vmap(buffer, &map); 134 124 if (ret) { 135 125 goto err_drm_client_buffer_delete; ··· 158 130 159 131 drm_fb_helper_fill_info(info, fb_helper, sizes); 160 132 161 - info->fbops = &drm_fbdev_dma_fb_ops; 133 + if (use_deferred_io) 134 + info->fbops = &drm_fbdev_dma_deferred_fb_ops; 135 + else 136 + info->fbops = &drm_fbdev_dma_fb_ops; 162 137 163 138 /* screen */ 164 139 info->flags |= FBINFO_VIRTFB; /* system memory */ ··· 175 144 } 176 145 info->fix.smem_len = info->screen_size; 177 146 178 - /* deferred I/O */ 179 - fb_helper->fbdefio.delay = HZ / 20; 180 - fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io; 147 + /* 148 + * Only set up deferred I/O if the screen buffer supports 149 + * it. If this disagrees with the previous test for ->dirty, 150 + * mmap on the /dev/fb file might not work correctly. 151 + */ 152 + if (!is_vmalloc_addr(info->screen_buffer) && info->fix.smem_start) { 153 + unsigned long pfn = info->fix.smem_start >> PAGE_SHIFT; 181 154 182 - info->fbdefio = &fb_helper->fbdefio; 183 - ret = fb_deferred_io_init(info); 184 - if (ret) 185 - goto err_drm_fb_helper_release_info; 155 + if (drm_WARN_ON(dev, !pfn_to_page(pfn))) 156 + use_deferred_io = false; 157 + } 158 + 159 + /* deferred I/O */ 160 + if (use_deferred_io) { 161 + fb_helper->fbdefio.delay = HZ / 20; 162 + fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io; 163 + 164 + info->fbdefio = &fb_helper->fbdefio; 165 + ret = fb_deferred_io_init(info); 166 + if (ret) 167 + goto err_drm_fb_helper_release_info; 168 + } 186 169 187 170 return 0; 188 171
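The drm_fbdev_dma.c hunk above installs deferred I/O only when the framebuffer has a ->dirty callback and the screen memory is page-backed. The decision can be sketched in isolation, with booleans standing in for the kernel-side checks (fb->funcs->dirty, is_vmalloc_addr()/pfn_to_page()):

```c
#include <assert.h>
#include <stdbool.h>

enum fbops_choice { FBOPS_DIRECT, FBOPS_DEFERRED };

/* Sketch of the fb_ops selection: deferred I/O is requested by a
 * ->dirty callback, but must be dropped again when the screen buffer
 * has no struct page backing, since fb_deferred_io needs pages. */
static enum fbops_choice pick_fbops(bool has_dirty_cb, bool page_backed)
{
    bool use_deferred_io = has_dirty_cb;

    /* A framebuffer that wants deferred I/O but is not page-backed
     * cannot support it; fall back to the direct ops. */
    if (use_deferred_io && !page_backed)
        use_deferred_io = false;

    return use_deferred_io ? FBOPS_DEFERRED : FBOPS_DIRECT;
}
```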
+1 -1
drivers/gpu/drm/i915/display/intel_alpm.c
··· 228 228 int tfw_exit_latency = 20; /* eDP spec */ 229 229 int phy_wake = 4; /* eDP spec */ 230 230 int preamble = 8; /* eDP spec */ 231 - int precharge = intel_dp_aux_fw_sync_len() - preamble; 231 + int precharge = intel_dp_aux_fw_sync_len(intel_dp) - preamble; 232 232 u8 max_wake_lines; 233 233 234 234 io_wake_time = max(precharge, io_buffer_wake_time(crtc_state)) +
+4
drivers/gpu/drm/i915/display/intel_display_types.h
··· 1885 1885 } alpm_parameters; 1886 1886 1887 1887 u8 alpm_dpcd; 1888 + 1889 + struct { 1890 + unsigned long mask; 1891 + } quirks; 1888 1892 }; 1889 1893 1890 1894 enum lspcon_vendor {
+4
drivers/gpu/drm/i915/display/intel_dp.c
··· 82 82 #include "intel_pch_display.h" 83 83 #include "intel_pps.h" 84 84 #include "intel_psr.h" 85 + #include "intel_quirks.h" 85 86 #include "intel_tc.h" 86 87 #include "intel_vdsc.h" 87 88 #include "intel_vrr.h" ··· 3953 3952 3954 3953 drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc, 3955 3954 drm_dp_is_branch(intel_dp->dpcd)); 3955 + intel_init_dpcd_quirks(intel_dp, &intel_dp->desc.ident); 3956 3956 3957 3957 /* 3958 3958 * Read the eDP display control registers. ··· 4065 4063 if (!intel_dp_is_edp(intel_dp)) { 4066 4064 drm_dp_read_desc(&intel_dp->aux, &intel_dp->desc, 4067 4065 drm_dp_is_branch(intel_dp->dpcd)); 4066 + 4067 + intel_init_dpcd_quirks(intel_dp, &intel_dp->desc.ident); 4068 4068 4069 4069 intel_dp_update_sink_caps(intel_dp); 4070 4070 }
+11 -5
drivers/gpu/drm/i915/display/intel_dp_aux.c
··· 13 13 #include "intel_dp_aux.h" 14 14 #include "intel_dp_aux_regs.h" 15 15 #include "intel_pps.h" 16 + #include "intel_quirks.h" 16 17 #include "intel_tc.h" 17 18 18 19 #define AUX_CH_NAME_BUFSIZE 6 ··· 143 142 return precharge + preamble; 144 143 } 145 144 146 - int intel_dp_aux_fw_sync_len(void) 145 + int intel_dp_aux_fw_sync_len(struct intel_dp *intel_dp) 147 146 { 147 + int precharge = 10; /* 10-16 */ 148 + int preamble = 8; 149 + 148 150 /* 149 151 * We faced some glitches on Dell Precision 5490 MTL laptop with panel: 150 152 * "Manufacturer: AUO, Model: 63898" when using HW default 18. Using 20 151 153 * is fixing these problems with the panel. It is still within range 152 - * mentioned in eDP specification. 154 + * mentioned in eDP specification. Increasing Fast Wake sync length is 155 + * causing problems with other panels: increase length as a quirk for 156 + * this specific laptop. 153 157 */ 154 - int precharge = 12; /* 10-16 */ 155 - int preamble = 8; 158 + if (intel_has_dpcd_quirk(intel_dp, QUIRK_FW_SYNC_LEN)) 159 + precharge += 2; 156 160 157 161 return precharge + preamble; 158 162 } ··· 217 211 DP_AUX_CH_CTL_TIME_OUT_MAX | 218 212 DP_AUX_CH_CTL_RECEIVE_ERROR | 219 213 DP_AUX_CH_CTL_MESSAGE_SIZE(send_bytes) | 220 - DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(intel_dp_aux_fw_sync_len()) | 214 + DP_AUX_CH_CTL_FW_SYNC_PULSE_SKL(intel_dp_aux_fw_sync_len(intel_dp)) | 221 215 DP_AUX_CH_CTL_SYNC_PULSE_SKL(intel_dp_aux_sync_len()); 222 216 223 217 if (intel_tc_port_in_tbt_alt_mode(dig_port))
+1 -1
drivers/gpu/drm/i915/display/intel_dp_aux.h
··· 20 20 21 21 void intel_dp_aux_irq_handler(struct drm_i915_private *i915); 22 22 u32 intel_dp_aux_pack(const u8 *src, int src_bytes); 23 - int intel_dp_aux_fw_sync_len(void); 23 + int intel_dp_aux_fw_sync_len(struct intel_dp *intel_dp); 24 24 25 25 #endif /* __INTEL_DP_AUX_H__ */
+26 -5
drivers/gpu/drm/i915/display/intel_modeset_setup.c
··· 326 326 327 327 static void intel_crtc_copy_hw_to_uapi_state(struct intel_crtc_state *crtc_state) 328 328 { 329 + struct drm_i915_private *i915 = to_i915(crtc_state->uapi.crtc->dev); 330 + 329 331 if (intel_crtc_is_joiner_secondary(crtc_state)) 330 332 return; 331 333 ··· 339 337 crtc_state->uapi.adjusted_mode = crtc_state->hw.adjusted_mode; 340 338 crtc_state->uapi.scaling_filter = crtc_state->hw.scaling_filter; 341 339 342 - /* assume 1:1 mapping */ 343 - drm_property_replace_blob(&crtc_state->hw.degamma_lut, 344 - crtc_state->pre_csc_lut); 345 - drm_property_replace_blob(&crtc_state->hw.gamma_lut, 346 - crtc_state->post_csc_lut); 340 + if (DISPLAY_INFO(i915)->color.degamma_lut_size) { 341 + /* assume 1:1 mapping */ 342 + drm_property_replace_blob(&crtc_state->hw.degamma_lut, 343 + crtc_state->pre_csc_lut); 344 + drm_property_replace_blob(&crtc_state->hw.gamma_lut, 345 + crtc_state->post_csc_lut); 346 + } else { 347 + /* 348 + * ilk/snb hw may be configured for either pre_csc_lut 349 + * or post_csc_lut, but we don't advertise degamma_lut as 350 + * being available in the uapi since there is only one 351 + * hardware LUT. Always assign the result of the readout 352 + * to gamma_lut as that is the only valid source of LUTs 353 + * in the uapi. 354 + */ 355 + drm_WARN_ON(&i915->drm, crtc_state->post_csc_lut && 356 + crtc_state->pre_csc_lut); 357 + 358 + drm_property_replace_blob(&crtc_state->hw.degamma_lut, 359 + NULL); 360 + drm_property_replace_blob(&crtc_state->hw.gamma_lut, 361 + crtc_state->post_csc_lut ?: 362 + crtc_state->pre_csc_lut); 363 + } 347 364 348 365 drm_property_replace_blob(&crtc_state->uapi.degamma_lut, 349 366 crtc_state->hw.degamma_lut);
+68
drivers/gpu/drm/i915/display/intel_quirks.c
··· 14 14 display->quirks.mask |= BIT(quirk); 15 15 } 16 16 17 + static void intel_set_dpcd_quirk(struct intel_dp *intel_dp, enum intel_quirk_id quirk) 18 + { 19 + intel_dp->quirks.mask |= BIT(quirk); 20 + } 21 + 17 22 /* 18 23 * Some machines (Lenovo U160) do not work with SSC on LVDS for some reason 19 24 */ ··· 70 65 drm_info(display->drm, "Applying no pps backlight power quirk\n"); 71 66 } 72 67 68 + static void quirk_fw_sync_len(struct intel_dp *intel_dp) 69 + { 70 + struct intel_display *display = to_intel_display(intel_dp); 71 + 72 + intel_set_dpcd_quirk(intel_dp, QUIRK_FW_SYNC_LEN); 73 + drm_info(display->drm, "Applying Fast Wake sync pulse count quirk\n"); 74 + } 75 + 73 76 struct intel_quirk { 74 77 int device; 75 78 int subsystem_vendor; 76 79 int subsystem_device; 77 80 void (*hook)(struct intel_display *display); 78 81 }; 82 + 83 + struct intel_dpcd_quirk { 84 + int device; 85 + int subsystem_vendor; 86 + int subsystem_device; 87 + u8 sink_oui[3]; 88 + u8 sink_device_id[6]; 89 + void (*hook)(struct intel_dp *intel_dp); 90 + }; 91 + 92 + #define SINK_OUI(first, second, third) { (first), (second), (third) } 93 + #define SINK_DEVICE_ID(first, second, third, fourth, fifth, sixth) \ 94 + { (first), (second), (third), (fourth), (fifth), (sixth) } 95 + 96 + #define SINK_DEVICE_ID_ANY SINK_DEVICE_ID(0, 0, 0, 0, 0, 0) 79 97 80 98 /* For systems that don't have a meaningful PCI subdevice/subvendor ID */ 81 99 struct intel_dmi_quirk { ··· 231 203 { 0x0f31, 0x103c, 0x220f, quirk_invert_brightness }, 232 204 }; 233 205 206 + static struct intel_dpcd_quirk intel_dpcd_quirks[] = { 207 + /* Dell Precision 5490 */ 208 + { 209 + .device = 0x7d55, 210 + .subsystem_vendor = 0x1028, 211 + .subsystem_device = 0x0cc7, 212 + .sink_oui = SINK_OUI(0x38, 0xec, 0x11), 213 + .hook = quirk_fw_sync_len, 214 + }, 215 + 216 + }; 217 + 234 218 void intel_init_quirks(struct intel_display *display) 235 219 { 236 220 struct pci_dev *d = to_pci_dev(display->drm->dev); ··· 264 224 } 265 225 } 266 226 227 + void intel_init_dpcd_quirks(struct intel_dp *intel_dp, 228 + const struct drm_dp_dpcd_ident *ident) 229 + { 230 + struct intel_display *display = to_intel_display(intel_dp); 231 + struct pci_dev *d = to_pci_dev(display->drm->dev); 232 + int i; 233 + 234 + for (i = 0; i < ARRAY_SIZE(intel_dpcd_quirks); i++) { 235 + struct intel_dpcd_quirk *q = &intel_dpcd_quirks[i]; 236 + 237 + if (d->device == q->device && 238 + (d->subsystem_vendor == q->subsystem_vendor || 239 + q->subsystem_vendor == PCI_ANY_ID) && 240 + (d->subsystem_device == q->subsystem_device || 241 + q->subsystem_device == PCI_ANY_ID) && 242 + !memcmp(q->sink_oui, ident->oui, sizeof(ident->oui)) && 243 + (!memcmp(q->sink_device_id, ident->device_id, 244 + sizeof(ident->device_id)) || 245 + !memchr_inv(q->sink_device_id, 0, sizeof(q->sink_device_id)))) 246 + q->hook(intel_dp); 247 + } 248 + } 249 + 267 250 bool intel_has_quirk(struct intel_display *display, enum intel_quirk_id quirk) 268 251 { 269 252 return display->quirks.mask & BIT(quirk); 253 + } 254 + 255 + bool intel_has_dpcd_quirk(struct intel_dp *intel_dp, enum intel_quirk_id quirk) 256 + { 257 + return intel_dp->quirks.mask & BIT(quirk); 270 258 }
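The intel_quirks.c hunk above matches DPCD quirks on the sink's OUI plus an optional device ID, where an all-zero `sink_device_id` (SINK_DEVICE_ID_ANY) acts as a wildcard via `memchr_inv()`. A standalone sketch of that wildcard match, with a hypothetical stand-in for the kernel's `memchr_inv()` helper:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for the kernel's memchr_inv(): returns NULL if
 * every byte of the buffer equals c, otherwise a pointer to the first
 * byte that differs. */
static const void *memchr_inv_sketch(const void *buf, int c, size_t n)
{
    const unsigned char *p = buf;

    for (size_t i = 0; i < n; i++)
        if (p[i] != (unsigned char)c)
            return &p[i];
    return NULL;
}

/* Match rule used by intel_init_dpcd_quirks(): the quirk's device ID
 * matches if it is byte-identical to the sink's, or if the quirk entry
 * is all zeroes (the SINK_DEVICE_ID_ANY wildcard). */
static int device_id_matches(const unsigned char quirk_id[6],
                             const unsigned char sink_id[6])
{
    return !memcmp(quirk_id, sink_id, 6) ||
           !memchr_inv_sketch(quirk_id, 0, 6);
}
```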
+6
drivers/gpu/drm/i915/display/intel_quirks.h
··· 9 9 #include <linux/types.h> 10 10 11 11 struct intel_display; 12 + struct intel_dp; 13 + struct drm_dp_dpcd_ident; 12 14 13 15 enum intel_quirk_id { 14 16 QUIRK_BACKLIGHT_PRESENT, ··· 19 17 QUIRK_INVERT_BRIGHTNESS, 20 18 QUIRK_LVDS_SSC_DISABLE, 21 19 QUIRK_NO_PPS_BACKLIGHT_POWER_HOOK, 20 + QUIRK_FW_SYNC_LEN, 22 21 }; 23 22 24 23 void intel_init_quirks(struct intel_display *display); 24 + void intel_init_dpcd_quirks(struct intel_dp *intel_dp, 25 + const struct drm_dp_dpcd_ident *ident); 25 26 bool intel_has_quirk(struct intel_display *display, enum intel_quirk_id quirk); 27 + bool intel_has_dpcd_quirk(struct intel_dp *intel_dp, enum intel_quirk_id quirk); 26 28 27 29 #endif /* __INTEL_QUIRKS_H__ */
+1 -1
drivers/gpu/drm/i915/gt/uc/intel_gsc_uc.c
··· 302 302 { 303 303 struct intel_gt *gt = gsc_uc_to_gt(gsc); 304 304 305 - if (!intel_uc_fw_is_loadable(&gsc->fw)) 305 + if (!intel_uc_fw_is_loadable(&gsc->fw) || intel_uc_fw_is_in_error(&gsc->fw)) 306 306 return; 307 307 308 308 if (intel_gsc_uc_fw_init_done(gsc))
+5
drivers/gpu/drm/i915/gt/uc/intel_uc_fw.h
··· 258 258 return __intel_uc_fw_status(uc_fw) == INTEL_UC_FIRMWARE_RUNNING; 259 259 } 260 260 261 + static inline bool intel_uc_fw_is_in_error(struct intel_uc_fw *uc_fw) 262 + { 263 + return intel_uc_fw_status_to_error(__intel_uc_fw_status(uc_fw)) != 0; 264 + } 265 + 261 266 static inline bool intel_uc_fw_is_overridden(const struct intel_uc_fw *uc_fw) 262 267 { 263 268 return uc_fw->user_overridden;
+4 -4
drivers/gpu/drm/i915/i915_sw_fence.c
··· 51 51 debug_object_init(fence, &i915_sw_fence_debug_descr); 52 52 } 53 53 54 - static inline void debug_fence_init_onstack(struct i915_sw_fence *fence) 54 + static inline __maybe_unused void debug_fence_init_onstack(struct i915_sw_fence *fence) 55 55 { 56 56 debug_object_init_on_stack(fence, &i915_sw_fence_debug_descr); 57 57 } ··· 77 77 debug_object_destroy(fence, &i915_sw_fence_debug_descr); 78 78 } 79 79 80 - static inline void debug_fence_free(struct i915_sw_fence *fence) 80 + static inline __maybe_unused void debug_fence_free(struct i915_sw_fence *fence) 81 81 { 82 82 debug_object_free(fence, &i915_sw_fence_debug_descr); 83 83 smp_wmb(); /* flush the change in state before reallocation */ ··· 94 94 { 95 95 } 96 96 97 - static inline void debug_fence_init_onstack(struct i915_sw_fence *fence) 97 + static inline __maybe_unused void debug_fence_init_onstack(struct i915_sw_fence *fence) 98 98 { 99 99 } 100 100 ··· 115 115 { 116 116 } 117 117 118 - static inline void debug_fence_free(struct i915_sw_fence *fence) 118 + static inline __maybe_unused void debug_fence_free(struct i915_sw_fence *fence) 119 119 { 120 120 } 121 121
+4
drivers/gpu/drm/imagination/pvr_vm.c
··· 114 114 struct drm_gpuva base; 115 115 }; 116 116 117 + #define to_pvr_vm_gpuva(va) container_of_const(va, struct pvr_vm_gpuva, base) 118 + 117 119 enum pvr_vm_bind_type { 118 120 PVR_VM_BIND_TYPE_MAP, 119 121 PVR_VM_BIND_TYPE_UNMAP, ··· 388 386 389 387 drm_gpuva_unmap(&op->unmap); 390 388 drm_gpuva_unlink(op->unmap.va); 389 + kfree(to_pvr_vm_gpuva(op->unmap.va)); 391 390 392 391 return 0; 393 392 } ··· 436 433 } 437 434 438 435 drm_gpuva_unlink(op->remap.unmap->va); 436 + kfree(to_pvr_vm_gpuva(op->remap.unmap->va)); 439 437 440 438 return 0; 441 439 }
+2
drivers/gpu/drm/imx/dcss/Kconfig
··· 2 2 tristate "i.MX8MQ DCSS" 3 3 select IMX_IRQSTEER 4 4 select DRM_KMS_HELPER 5 + select DRM_DISPLAY_HELPER 6 + select DRM_BRIDGE_CONNECTOR 5 7 select DRM_GEM_DMA_HELPER 6 8 select VIDEOMODE_HELPERS 7 9 depends on DRM && ARCH_MXC && ARM64
+2
drivers/gpu/drm/imx/lcdc/Kconfig
··· 3 3 depends on DRM && (ARCH_MXC || COMPILE_TEST) 4 4 select DRM_GEM_DMA_HELPER 5 5 select DRM_KMS_HELPER 6 + select DRM_DISPLAY_HELPER 7 + select DRM_BRIDGE_CONNECTOR 6 8 help 7 9 Found on i.MX1, i.MX21, i.MX25 and i.MX27.
+2
drivers/gpu/drm/ingenic/Kconfig
··· 8 8 select DRM_BRIDGE 9 9 select DRM_PANEL_BRIDGE 10 10 select DRM_KMS_HELPER 11 + select DRM_DISPLAY_HELPER 12 + select DRM_BRIDGE_CONNECTOR 11 13 select DRM_GEM_DMA_HELPER 12 14 select REGMAP 13 15 select REGMAP_MMIO
+2
drivers/gpu/drm/kmb/Kconfig
··· 3 3 depends on DRM 4 4 depends on ARCH_KEEMBAY || COMPILE_TEST 5 5 select DRM_KMS_HELPER 6 + select DRM_DISPLAY_HELPER 7 + select DRM_BRIDGE_CONNECTOR 6 8 select DRM_GEM_DMA_HELPER 7 9 select DRM_MIPI_DSI 8 10 help
+2
drivers/gpu/drm/mediatek/Kconfig
··· 9 9 depends on MTK_MMSYS 10 10 select DRM_GEM_DMA_HELPER if DRM_FBDEV_EMULATION 11 11 select DRM_KMS_HELPER 12 + select DRM_DISPLAY_HELPER 13 + select DRM_BRIDGE_CONNECTOR 12 14 select DRM_MIPI_DSI 13 15 select DRM_PANEL 14 16 select MEMORY
+2
drivers/gpu/drm/meson/Kconfig
··· 4 4 depends on DRM && OF && (ARM || ARM64) 5 5 depends on ARCH_MESON || COMPILE_TEST 6 6 select DRM_KMS_HELPER 7 + select DRM_DISPLAY_HELPER 8 + select DRM_BRIDGE_CONNECTOR 7 9 select DRM_GEM_DMA_HELPER 8 10 select DRM_DISPLAY_CONNECTOR 9 11 select VIDEOMODE_HELPERS
+1
drivers/gpu/drm/msm/Kconfig
··· 17 17 select DRM_DISPLAY_DP_AUX_BUS 18 18 select DRM_DISPLAY_DP_HELPER 19 19 select DRM_DISPLAY_HELPER 20 + select DRM_BRIDGE_CONNECTOR 20 21 select DRM_EXEC 21 22 select DRM_KMS_HELPER 22 23 select DRM_PANEL
+1 -1
drivers/gpu/drm/nouveau/nvkm/subdev/gsp/fwsec.c
··· 324 324 return ret; 325 325 326 326 /* Verify. */ 327 - err = nvkm_rd32(device, 0x001400 + (0xf * 4)) & 0x0000ffff; 327 + err = nvkm_rd32(device, 0x001400 + (0x15 * 4)) & 0x0000ffff; 328 328 if (err) { 329 329 nvkm_error(subdev, "fwsec-sb: 0x%04x\n", err); 330 330 return -EIO;
+2
drivers/gpu/drm/omapdrm/Kconfig
··· 5 5 depends on DRM && OF 6 6 depends on ARCH_OMAP2PLUS || (COMPILE_TEST && PAGE_SIZE_LESS_THAN_64KB) 7 7 select DRM_KMS_HELPER 8 + select DRM_DISPLAY_HELPER 9 + select DRM_BRIDGE_CONNECTOR 8 10 select FB_DMAMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION 9 11 select VIDEOMODE_HELPERS 10 12 select HDMI
+1 -1
drivers/gpu/drm/panel/panel-newvision-nv3052c.c
··· 925 925 static const struct of_device_id nv3052c_of_match[] = { 926 926 { .compatible = "leadtek,ltk035c5444t", .data = &ltk035c5444t_panel_info }, 927 927 { .compatible = "fascontek,fs035vg158", .data = &fs035vg158_panel_info }, 928 - { .compatible = "wl-355608-a8", .data = &wl_355608_a8_panel_info }, 928 + { .compatible = "anbernic,rg35xx-plus-panel", .data = &wl_355608_a8_panel_info }, 929 929 { /* sentinel */ } 930 930 }; 931 931 MODULE_DEVICE_TABLE(of, nv3052c_of_match);
+23
drivers/gpu/drm/panthor/panthor_drv.c
··· 10 10 #include <linux/platform_device.h> 11 11 #include <linux/pm_runtime.h> 12 12 13 + #include <drm/drm_auth.h> 13 14 #include <drm/drm_debugfs.h> 14 15 #include <drm/drm_drv.h> 15 16 #include <drm/drm_exec.h> ··· 997 996 return panthor_group_destroy(pfile, args->group_handle); 998 997 } 999 998 999 + static int group_priority_permit(struct drm_file *file, 1000 + u8 priority) 1001 + { 1002 + /* Ensure that priority is valid */ 1003 + if (priority > PANTHOR_GROUP_PRIORITY_HIGH) 1004 + return -EINVAL; 1005 + 1006 + /* Medium priority and below are always allowed */ 1007 + if (priority <= PANTHOR_GROUP_PRIORITY_MEDIUM) 1008 + return 0; 1009 + 1010 + /* Higher priorities require CAP_SYS_NICE or DRM_MASTER */ 1011 + if (capable(CAP_SYS_NICE) || drm_is_current_master(file)) 1012 + return 0; 1013 + 1014 + return -EACCES; 1015 + } 1016 + 1000 1017 static int panthor_ioctl_group_create(struct drm_device *ddev, void *data, 1001 1018 struct drm_file *file) 1002 1019 { ··· 1027 1008 return -EINVAL; 1028 1009 1029 1010 ret = PANTHOR_UOBJ_GET_ARRAY(queue_args, &args->queues); 1011 + if (ret) 1012 + return ret; 1013 + 1014 + ret = group_priority_permit(file, args->priority); 1030 1015 if (ret) 1031 1016 return ret; 1032 1017
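The new `group_priority_permit()` above gates priorities higher than medium behind CAP_SYS_NICE or DRM-master status. A minimal sketch of that decision table, with a plain flag standing in for the `capable(CAP_SYS_NICE) || drm_is_current_master(file)` check and the priority levels assumed to be ordered low to high:

```c
#include <assert.h>

/* Assumed mirror of the panthor priority ordering for this sketch. */
enum group_priority {
    GROUP_PRIORITY_LOW,
    GROUP_PRIORITY_MEDIUM,
    GROUP_PRIORITY_HIGH,
};

/* Sketch of group_priority_permit(): returns 0 when the request is
 * allowed, negative errno-style codes otherwise. */
static int group_priority_permit_sketch(int priority, int privileged)
{
    /* Reject values outside the known range. */
    if (priority > GROUP_PRIORITY_HIGH)
        return -22; /* -EINVAL */

    /* Medium priority and below are always allowed. */
    if (priority <= GROUP_PRIORITY_MEDIUM)
        return 0;

    /* Higher priorities need CAP_SYS_NICE or DRM master. */
    return privileged ? 0 : -13; /* -EACCES */
}
```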
+7 -1
drivers/gpu/drm/panthor/panthor_fw.c
··· 1089 1089 panthor_fw_stop(ptdev); 1090 1090 ptdev->fw->fast_reset = false; 1091 1091 drm_err(&ptdev->base, "FW fast reset failed, trying a slow reset"); 1092 + 1093 + ret = panthor_vm_flush_all(ptdev->fw->vm); 1094 + if (ret) { 1095 + drm_err(&ptdev->base, "FW slow reset failed (couldn't flush FW's AS l2cache)"); 1096 + return ret; 1097 + } 1092 1098 } 1093 1099 1094 1100 /* Reload all sections, including RO ones. We're not supposed ··· 1105 1099 1106 1100 ret = panthor_fw_start(ptdev); 1107 1101 if (ret) { 1108 - drm_err(&ptdev->base, "FW slow reset failed"); 1102 + drm_err(&ptdev->base, "FW slow reset failed (couldn't start the FW)"); 1109 1103 return ret; 1110 1104 } 1111 1105
+18 -3
drivers/gpu/drm/panthor/panthor_mmu.c
··· 576 576 if (as_nr < 0) 577 577 return 0; 578 578 579 + /* 580 + * If the AS number is greater than zero, then we can be sure 581 + * the device is up and running, so we don't need to explicitly 582 + * power it up 583 + */ 584 + 579 585 if (op != AS_COMMAND_UNLOCK) 580 586 lock_region(ptdev, as_nr, iova, size); 581 587 ··· 880 874 if (!drm_dev_enter(&ptdev->base, &cookie)) 881 875 return 0; 882 876 883 - /* Flush the PTs only if we're already awake */ 884 - if (pm_runtime_active(ptdev->base.dev)) 885 - ret = mmu_hw_do_operation(vm, iova, size, AS_COMMAND_FLUSH_PT); 877 + ret = mmu_hw_do_operation(vm, iova, size, AS_COMMAND_FLUSH_PT); 886 878 887 879 drm_dev_exit(cookie); 888 880 return ret; 881 + } 882 + 883 + /** 884 + * panthor_vm_flush_all() - Flush L2 caches for the entirety of a VM's AS 885 + * @vm: VM whose cache to flush 886 + * 887 + * Return: 0 on success, a negative error code if flush failed. 888 + */ 889 + int panthor_vm_flush_all(struct panthor_vm *vm) 890 + { 891 + return panthor_vm_flush_range(vm, vm->base.mm_start, vm->base.mm_range); 889 892 } 890 893 891 894 static int panthor_vm_unmap_pages(struct panthor_vm *vm, u64 iova, u64 size)
+1
drivers/gpu/drm/panthor/panthor_mmu.h
··· 31 31 int panthor_vm_active(struct panthor_vm *vm); 32 32 void panthor_vm_idle(struct panthor_vm *vm); 33 33 int panthor_vm_as(struct panthor_vm *vm); 34 + int panthor_vm_flush_all(struct panthor_vm *vm); 34 35 35 36 struct panthor_heap_pool * 36 37 panthor_vm_get_heap_pool(struct panthor_vm *vm, bool create);
+1 -1
drivers/gpu/drm/panthor/panthor_sched.c
··· 3092 3092 if (group_args->pad) 3093 3093 return -EINVAL; 3094 3094 3095 - if (group_args->priority > PANTHOR_CSG_PRIORITY_HIGH) 3095 + if (group_args->priority >= PANTHOR_CSG_PRIORITY_COUNT) 3096 3096 return -EINVAL; 3097 3097 3098 3098 if ((group_args->compute_core_mask & ~ptdev->gpu_info.shader_present) ||
+2
drivers/gpu/drm/renesas/rcar-du/Kconfig
··· 5 5 depends on ARM || ARM64 || COMPILE_TEST 6 6 depends on ARCH_RENESAS || COMPILE_TEST 7 7 select DRM_KMS_HELPER 8 + select DRM_DISPLAY_HELPER 9 + select DRM_BRIDGE_CONNECTOR 8 10 select DRM_GEM_DMA_HELPER 9 11 select VIDEOMODE_HELPERS 10 12 help
+2
drivers/gpu/drm/renesas/rz-du/Kconfig
··· 6 6 depends on VIDEO_RENESAS_VSP1 7 7 select DRM_GEM_DMA_HELPER 8 8 select DRM_KMS_HELPER 9 + select DRM_DISPLAY_HELPER 10 + select DRM_BRIDGE_CONNECTOR 9 11 select VIDEOMODE_HELPERS 10 12 help 11 13 Choose this option if you have an RZ/G2L alike chipset.
+2
drivers/gpu/drm/renesas/shmobile/Kconfig
··· 5 5 depends on ARCH_RENESAS || ARCH_SHMOBILE || COMPILE_TEST 6 6 select BACKLIGHT_CLASS_DEVICE 7 7 select DRM_KMS_HELPER 8 + select DRM_DISPLAY_HELPER 9 + select DRM_BRIDGE_CONNECTOR 8 10 select DRM_GEM_DMA_HELPER 9 11 select VIDEOMODE_HELPERS 10 12 help
+4
drivers/gpu/drm/rockchip/Kconfig
··· 86 86 bool "Rockchip LVDS support" 87 87 depends on DRM_ROCKCHIP 88 88 depends on PINCTRL && OF 89 + select DRM_DISPLAY_HELPER 90 + select DRM_BRIDGE_CONNECTOR 89 91 help 90 92 Choose this option to enable support for Rockchip LVDS controllers. 91 93 Rockchip rk3288 SoC has LVDS TX Controller can be used, and it ··· 98 96 bool "Rockchip RGB support" 99 97 depends on DRM_ROCKCHIP 100 98 depends on PINCTRL 99 + select DRM_DISPLAY_HELPER 100 + select DRM_BRIDGE_CONNECTOR 101 101 help 102 102 Choose this option to enable support for Rockchip RGB output. 103 103 Some Rockchip CRTCs, like rv1108, can directly output parallel
+1
drivers/gpu/drm/tegra/Kconfig
··· 8 8 select DRM_DISPLAY_DP_HELPER 9 9 select DRM_DISPLAY_HDMI_HELPER 10 10 select DRM_DISPLAY_HELPER 11 + select DRM_BRIDGE_CONNECTOR 11 12 select DRM_DISPLAY_DP_AUX_BUS 12 13 select DRM_KMS_HELPER 13 14 select DRM_MIPI_DSI
+2
drivers/gpu/drm/tidss/Kconfig
··· 3 3 depends on DRM && OF 4 4 depends on ARM || ARM64 || COMPILE_TEST 5 5 select DRM_KMS_HELPER 6 + select DRM_DISPLAY_HELPER 7 + select DRM_BRIDGE_CONNECTOR 6 8 select DRM_GEM_DMA_HELPER 7 9 help 8 10 The TI Keystone family SoCs introduced a new generation of
+4 -4
drivers/gpu/drm/xe/compat-i915-headers/intel_pcode.h
··· 13 13 snb_pcode_write_timeout(struct intel_uncore *uncore, u32 mbox, u32 val, 14 14 int fast_timeout_us, int slow_timeout_ms) 15 15 { 16 - return xe_pcode_write_timeout(__compat_uncore_to_gt(uncore), mbox, val, 16 + return xe_pcode_write_timeout(__compat_uncore_to_tile(uncore), mbox, val, 17 17 slow_timeout_ms ?: 1); 18 18 } 19 19 ··· 21 21 snb_pcode_write(struct intel_uncore *uncore, u32 mbox, u32 val) 22 22 { 23 23 24 - return xe_pcode_write(__compat_uncore_to_gt(uncore), mbox, val); 24 + return xe_pcode_write(__compat_uncore_to_tile(uncore), mbox, val); 25 25 } 26 26 27 27 static inline int 28 28 snb_pcode_read(struct intel_uncore *uncore, u32 mbox, u32 *val, u32 *val1) 29 29 { 30 - return xe_pcode_read(__compat_uncore_to_gt(uncore), mbox, val, val1); 30 + return xe_pcode_read(__compat_uncore_to_tile(uncore), mbox, val, val1); 31 31 } 32 32 33 33 static inline int ··· 35 35 u32 request, u32 reply_mask, u32 reply, 36 36 int timeout_base_ms) 37 37 { 38 - return xe_pcode_request(__compat_uncore_to_gt(uncore), mbox, request, reply_mask, reply, 38 + return xe_pcode_request(__compat_uncore_to_tile(uncore), mbox, request, reply_mask, reply, 39 39 timeout_base_ms); 40 40 } 41 41
+7
drivers/gpu/drm/xe/compat-i915-headers/intel_uncore.h
··· 17 17 return xe_root_mmio_gt(xe); 18 18 } 19 19 20 + static inline struct xe_tile *__compat_uncore_to_tile(struct intel_uncore *uncore) 21 + { 22 + struct xe_device *xe = container_of(uncore, struct xe_device, uncore); 23 + 24 + return xe_device_get_root_tile(xe); 25 + } 26 + 20 27 static inline u32 intel_uncore_read(struct intel_uncore *uncore, 21 28 i915_reg_t i915_reg) 22 29 {
+17 -6
drivers/gpu/drm/xe/display/xe_display.c
··· 315 315 * properly. 316 316 */ 317 317 intel_power_domains_disable(xe); 318 - if (has_display(xe)) 318 + intel_fbdev_set_suspend(&xe->drm, FBINFO_STATE_SUSPENDED, true); 319 + if (has_display(xe)) { 319 320 drm_kms_helper_poll_disable(&xe->drm); 321 + if (!runtime) 322 + intel_display_driver_disable_user_access(xe); 323 + } 320 324 321 325 if (!runtime) 322 326 intel_display_driver_suspend(xe); ··· 331 327 332 328 intel_hpd_cancel_work(xe); 333 329 334 - intel_encoder_suspend_all(&xe->display); 330 + if (!runtime && has_display(xe)) { 331 + intel_display_driver_suspend_access(xe); 332 + intel_encoder_suspend_all(&xe->display); 333 + } 335 334 336 335 intel_opregion_suspend(xe, s2idle ? PCI_D1 : PCI_D3cold); 337 - 338 - intel_fbdev_set_suspend(&xe->drm, FBINFO_STATE_SUSPENDED, true); 339 336 340 337 intel_dmc_suspend(xe); 341 338 } ··· 375 370 intel_display_driver_init_hw(xe); 376 371 intel_hpd_init(xe); 377 372 373 + if (!runtime && has_display(xe)) 374 + intel_display_driver_resume_access(xe); 375 + 378 376 /* MST sideband requires HPD interrupts enabled */ 379 377 intel_dp_mst_resume(xe); 380 378 if (!runtime) 381 379 intel_display_driver_resume(xe); 382 380 383 - intel_hpd_poll_disable(xe); 384 - if (has_display(xe)) 381 + if (has_display(xe)) { 385 382 drm_kms_helper_poll_enable(&xe->drm); 383 + if (!runtime) 384 + intel_display_driver_enable_user_access(xe); 385 + } 386 + intel_hpd_poll_disable(xe); 386 387 387 388 intel_opregion_resume(xe); 388 389
+6
drivers/gpu/drm/xe/xe_device_types.h
··· 203 203 } vf; 204 204 } sriov; 205 205 206 + /** @pcode: tile's PCODE */ 207 + struct { 208 + /** @pcode.lock: protecting tile's PCODE mailbox data */ 209 + struct mutex lock; 210 + } pcode; 211 + 206 212 /** @migrate: Migration helper for vram blits and clearing */ 207 213 struct xe_migrate *migrate; 208 214
+12
drivers/gpu/drm/xe/xe_gsc.c
··· 519 519 void xe_gsc_load_start(struct xe_gsc *gsc) 520 520 { 521 521 struct xe_gt *gt = gsc_to_gt(gsc); 522 + struct xe_device *xe = gt_to_xe(gt); 522 523 523 524 if (!xe_uc_fw_is_loadable(&gsc->fw) || !gsc->q) 525 + return; 526 + 527 + /* 528 + * The GSC HW is only reset by driver FLR or D3cold entry. We don't 529 + * support the former at runtime, while the latter is only supported on 530 + * DGFX, for which we don't support GSC. Therefore, if GSC failed to 531 + * load previously there is no need to try again because the HW is 532 + * stuck in the error state. 533 + */ 534 + xe_assert(xe, !IS_DGFX(xe)); 535 + if (xe_uc_fw_is_in_error_state(&gsc->fw)) 524 536 return; 525 537 526 538 /* GSC FW survives GT reset and D3Hot */
+3 -4
drivers/gpu/drm/xe/xe_gt.c
··· 47 47 #include "xe_migrate.h" 48 48 #include "xe_mmio.h" 49 49 #include "xe_pat.h" 50 - #include "xe_pcode.h" 51 50 #include "xe_pm.h" 52 51 #include "xe_mocs.h" 53 52 #include "xe_reg_sr.h" ··· 386 387 xe_tuning_process_gt(gt); 387 388 388 389 xe_force_wake_init_gt(gt, gt_to_fw(gt)); 389 - xe_pcode_init(gt); 390 390 spin_lock_init(&gt->global_invl_lock); 391 391 392 392 return 0; ··· 753 755 754 756 xe_gt_info(gt, "reset started\n"); 755 757 758 + xe_pm_runtime_get(gt_to_xe(gt)); 759 + 756 760 if (xe_fault_inject_gt_reset()) { 757 761 err = -ECANCELED; 758 762 goto err_fail; 759 763 } 760 764 761 - xe_pm_runtime_get(gt_to_xe(gt)); 762 765 xe_gt_sanitize(gt); 763 766 764 767 err = xe_force_wake_get(gt_to_fw(gt), XE_FORCEWAKE_ALL); ··· 794 795 XE_WARN_ON(xe_force_wake_put(gt_to_fw(gt), XE_FORCEWAKE_ALL)); 795 796 err_msg: 796 797 XE_WARN_ON(xe_uc_start(&gt->uc)); 797 - xe_pm_runtime_put(gt_to_xe(gt)); 798 798 err_fail: 799 799 xe_gt_err(gt, "reset failed (%pe)\n", ERR_PTR(err)); 800 800 801 801 xe_device_declare_wedged(gt_to_xe(gt)); 802 + xe_pm_runtime_put(gt_to_xe(gt)); 802 803 803 804 return err; 804 805 }
-6
drivers/gpu/drm/xe/xe_gt_types.h
··· 310 310 /** @eclass: per hardware engine class interface on the GT */ 311 311 struct xe_hw_engine_class_intf eclass[XE_ENGINE_CLASS_MAX]; 312 312 313 - /** @pcode: GT's PCODE */ 314 - struct { 315 - /** @pcode.lock: protecting GT's PCODE mailbox data */ 316 - struct mutex lock; 317 - } pcode; 318 - 319 313 /** @sysfs: sysfs' kobj used by xe_gt_sysfs */ 320 314 struct kobject *sysfs; 321 315
+1 -1
drivers/gpu/drm/xe/xe_guc_pc.c
··· 915 915 u32 min = DIV_ROUND_CLOSEST(pc->rpn_freq, GT_FREQUENCY_MULTIPLIER); 916 916 u32 max = DIV_ROUND_CLOSEST(pc->rp0_freq, GT_FREQUENCY_MULTIPLIER); 917 917 918 - XE_WARN_ON(xe_pcode_init_min_freq_table(pc_to_gt(pc), min, max)); 918 + XE_WARN_ON(xe_pcode_init_min_freq_table(gt_to_tile(pc_to_gt(pc)), min, max)); 919 919 } 920 920 921 921 static int pc_init_freqs(struct xe_guc_pc *pc)
+2 -2
drivers/gpu/drm/xe/xe_hwmon.c
··· 441 441 if (gt_to_xe(gt)->info.platform == XE_DG2) 442 442 return -ENXIO; 443 443 444 - return xe_pcode_read(gt, PCODE_MBOX(PCODE_POWER_SETUP, 444 + return xe_pcode_read(gt_to_tile(gt), PCODE_MBOX(PCODE_POWER_SETUP, 445 445 POWER_SETUP_SUBCOMMAND_READ_I1, 0), 446 446 uval, NULL); 447 447 } 448 448 449 449 static int xe_hwmon_pcode_write_i1(struct xe_gt *gt, u32 uval) 450 450 { 451 - return xe_pcode_write(gt, PCODE_MBOX(PCODE_POWER_SETUP, 451 + return xe_pcode_write(gt_to_tile(gt), PCODE_MBOX(PCODE_POWER_SETUP, 452 452 POWER_SETUP_SUBCOMMAND_WRITE_I1, 0), 453 453 (uval & POWER_SETUP_I1_DATA_MASK)); 454 454 }
+52 -52
drivers/gpu/drm/xe/xe_pcode.c
··· 12 12 13 13 #include "xe_assert.h" 14 14 #include "xe_device.h" 15 - #include "xe_gt.h" 16 15 #include "xe_mmio.h" 17 16 #include "xe_pcode_api.h" 18 17
··· 29 30 * - PCODE for display operations 30 31 */ 31 32 32 - static int pcode_mailbox_status(struct xe_gt *gt) 33 + static int pcode_mailbox_status(struct xe_tile *tile) 33 34 { 34 35 u32 err; 35 36 static const struct pcode_err_decode err_decode[] = {
··· 44 45 [PCODE_ERROR_MASK] = {-EPROTO, "Unknown"}, 45 46 }; 46 47 47 - err = xe_mmio_read32(gt, PCODE_MAILBOX) & PCODE_ERROR_MASK; 48 + err = xe_mmio_read32(tile->primary_gt, PCODE_MAILBOX) & PCODE_ERROR_MASK; 48 49 if (err) { 49 - drm_err(&gt_to_xe(gt)->drm, "PCODE Mailbox failed: %d %s", err, 50 + drm_err(&tile_to_xe(tile)->drm, "PCODE Mailbox failed: %d %s", err, 50 51 err_decode[err].str ?: "Unknown"); 51 52 return err_decode[err].errno ?: -EPROTO; 52 53 }
··· 54 55 return 0; 55 56 } 56 57 57 - static int __pcode_mailbox_rw(struct xe_gt *gt, u32 mbox, u32 *data0, u32 *data1, 58 + static int __pcode_mailbox_rw(struct xe_tile *tile, u32 mbox, u32 *data0, u32 *data1, 58 59 unsigned int timeout_ms, bool return_data, 59 60 bool atomic) 60 61 { 62 + struct xe_gt *mmio = tile->primary_gt; 61 63 int err; 62 64 63 - if (gt_to_xe(gt)->info.skip_pcode) 65 + if (tile_to_xe(tile)->info.skip_pcode) 64 66 return 0; 65 67 66 - if ((xe_mmio_read32(gt, PCODE_MAILBOX) & PCODE_READY) != 0) 68 + if ((xe_mmio_read32(mmio, PCODE_MAILBOX) & PCODE_READY) != 0) 67 69 return -EAGAIN; 68 70 69 - xe_mmio_write32(gt, PCODE_DATA0, *data0); 70 - xe_mmio_write32(gt, PCODE_DATA1, data1 ? *data1 : 0); 71 - xe_mmio_write32(gt, PCODE_MAILBOX, PCODE_READY | mbox); 71 + xe_mmio_write32(mmio, PCODE_DATA0, *data0); 72 + xe_mmio_write32(mmio, PCODE_DATA1, data1 ? *data1 : 0); 73 + xe_mmio_write32(mmio, PCODE_MAILBOX, PCODE_READY | mbox); 72 74 73 - err = xe_mmio_wait32(gt, PCODE_MAILBOX, PCODE_READY, 0, 75 + err = xe_mmio_wait32(mmio, PCODE_MAILBOX, PCODE_READY, 0, 74 76 timeout_ms * USEC_PER_MSEC, NULL, atomic); 75 77 if (err) 76 78 return err; 77 79 78 80 if (return_data) { 79 - *data0 = xe_mmio_read32(gt, PCODE_DATA0); 81 + *data0 = xe_mmio_read32(mmio, PCODE_DATA0); 80 82 if (data1) 81 - *data1 = xe_mmio_read32(gt, PCODE_DATA1); 83 + *data1 = xe_mmio_read32(mmio, PCODE_DATA1); 82 84 } 83 85 84 - return pcode_mailbox_status(gt); 86 + return pcode_mailbox_status(tile); 85 87 } 86 88 87 - static int pcode_mailbox_rw(struct xe_gt *gt, u32 mbox, u32 *data0, u32 *data1, 89 + static int pcode_mailbox_rw(struct xe_tile *tile, u32 mbox, u32 *data0, u32 *data1, 88 90 unsigned int timeout_ms, bool return_data, 89 91 bool atomic) 90 92 { 91 - if (gt_to_xe(gt)->info.skip_pcode) 93 + if (tile_to_xe(tile)->info.skip_pcode) 92 94 return 0; 93 95 94 - lockdep_assert_held(&gt->pcode.lock); 96 + lockdep_assert_held(&tile->pcode.lock); 95 97 96 - return __pcode_mailbox_rw(gt, mbox, data0, data1, timeout_ms, return_data, atomic); 98 + return __pcode_mailbox_rw(tile, mbox, data0, data1, timeout_ms, return_data, atomic); 97 99 } 98 100 99 - int xe_pcode_write_timeout(struct xe_gt *gt, u32 mbox, u32 data, int timeout) 101 + int xe_pcode_write_timeout(struct xe_tile *tile, u32 mbox, u32 data, int timeout) 100 102 { 101 103 int err; 102 104 103 - mutex_lock(&gt->pcode.lock); 104 - err = pcode_mailbox_rw(gt, mbox, &data, NULL, timeout, false, false); 105 - mutex_unlock(&gt->pcode.lock); 105 + mutex_lock(&tile->pcode.lock); 106 + err = pcode_mailbox_rw(tile, mbox, &data, NULL, timeout, false, false); 107 + mutex_unlock(&tile->pcode.lock); 106 108 107 109 return err; 108 110 } 109 111 110 - int xe_pcode_read(struct xe_gt *gt, u32 mbox, u32 *val, u32 *val1) 112 + int xe_pcode_read(struct xe_tile *tile, u32 mbox, u32 *val, u32 *val1) 111 113 { 112 114 int err; 113 115 114 - mutex_lock(&gt->pcode.lock); 115 - err = pcode_mailbox_rw(gt, mbox, val, val1, 1, true, false); 116 - mutex_unlock(&gt->pcode.lock); 116 + mutex_lock(&tile->pcode.lock); 117 + err = pcode_mailbox_rw(tile, mbox, val, val1, 1, true, false); 118 + mutex_unlock(&tile->pcode.lock); 117 119 118 120 return err; 119 121 } 120 122 121 - static int pcode_try_request(struct xe_gt *gt, u32 mbox, 123 + static int pcode_try_request(struct xe_tile *tile, u32 mbox, 122 124 u32 request, u32 reply_mask, u32 reply, 123 125 u32 *status, bool atomic, int timeout_us, bool locked) 124 126 { 125 127 int slept, wait = 10; 126 128 127 - xe_gt_assert(gt, timeout_us > 0); 129 + xe_tile_assert(tile, timeout_us > 0); 128 130 129 131 for (slept = 0; slept < timeout_us; slept += wait) { 130 132 if (locked) 131 - *status = pcode_mailbox_rw(gt, mbox, &request, NULL, 1, true, 133 + *status = pcode_mailbox_rw(tile, mbox, &request, NULL, 1, true, 132 134 atomic); 133 135 else 134 - *status = __pcode_mailbox_rw(gt, mbox, &request, NULL, 1, true, 136 + *status = __pcode_mailbox_rw(tile, mbox, &request, NULL, 1, true, 135 137 atomic); 136 138 if ((*status == 0) && ((request & reply_mask) == reply)) 137 139 return 0;
··· 149 149 150 150 /** 151 151 * xe_pcode_request - send PCODE request until acknowledgment 152 - * @gt: gt 152 + * @tile: tile 153 153 * @mbox: PCODE mailbox ID the request is targeted for 154 154 * @request: request ID 155 155 * @reply_mask: mask used to check for request acknowledgment
··· 166 166 * Returns 0 on success, %-ETIMEDOUT in case of a timeout, <0 in case of some 167 167 * other error as reported by PCODE.
168 168 */ 169 - int xe_pcode_request(struct xe_gt *gt, u32 mbox, u32 request, 170 - u32 reply_mask, u32 reply, int timeout_base_ms) 169 + int xe_pcode_request(struct xe_tile *tile, u32 mbox, u32 request, 170 + u32 reply_mask, u32 reply, int timeout_base_ms) 171 171 { 172 172 u32 status; 173 173 int ret; 174 174 175 - xe_gt_assert(gt, timeout_base_ms <= 3); 175 + xe_tile_assert(tile, timeout_base_ms <= 3); 176 176 177 - mutex_lock(&gt->pcode.lock); 177 + mutex_lock(&tile->pcode.lock); 178 178 179 - ret = pcode_try_request(gt, mbox, request, reply_mask, reply, &status, 179 + ret = pcode_try_request(tile, mbox, request, reply_mask, reply, &status, 180 180 false, timeout_base_ms * 1000, true); 181 181 if (!ret) 182 182 goto out; ··· 191 191 * requests, and for any quirks of the PCODE firmware that delays 192 192 * the request completion. 193 193 */ 194 - drm_err(&gt_to_xe(gt)->drm, 194 + drm_err(&tile_to_xe(tile)->drm, 195 195 "PCODE timeout, retrying with preemption disabled\n"); 196 196 preempt_disable(); 197 - ret = pcode_try_request(gt, mbox, request, reply_mask, reply, &status, 197 + ret = pcode_try_request(tile, mbox, request, reply_mask, reply, &status, 198 198 true, 50 * 1000, true); 199 199 preempt_enable(); 200 200 201 201 out: 202 - mutex_unlock(&gt->pcode.lock); 202 + mutex_unlock(&tile->pcode.lock); 203 203 return status ? status : ret; 204 204 } 205 205 /** 206 206 * xe_pcode_init_min_freq_table - Initialize PCODE's QOS frequency table 207 - * @gt: gt instance 207 + * @tile: tile instance 208 208 * @min_gt_freq: Minimal (RPn) GT frequency in units of 50MHz. 209 209 * @max_gt_freq: Maximal (RP0) GT frequency in units of 50MHz. 
210 210 *
··· 227 227 * - -EACCES, "PCODE Rejected" 228 228 * - -EPROTO, "Unknown" 229 229 */ 230 - int xe_pcode_init_min_freq_table(struct xe_gt *gt, u32 min_gt_freq, 230 + int xe_pcode_init_min_freq_table(struct xe_tile *tile, u32 min_gt_freq, 231 231 u32 max_gt_freq) 232 232 { 233 233 int ret; 234 234 u32 freq; 235 235 236 - if (!gt_to_xe(gt)->info.has_llc) 236 + if (!tile_to_xe(tile)->info.has_llc) 237 237 return 0; 238 238 239 239 if (max_gt_freq <= min_gt_freq) 240 240 return -EINVAL; 241 241 242 - mutex_lock(&gt->pcode.lock); 242 + mutex_lock(&tile->pcode.lock); 243 243 for (freq = min_gt_freq; freq <= max_gt_freq; freq++) { 244 244 u32 data = freq << PCODE_FREQ_RING_RATIO_SHIFT | freq; 245 245 246 - ret = pcode_mailbox_rw(gt, PCODE_WRITE_MIN_FREQ_TABLE, 246 + ret = pcode_mailbox_rw(tile, PCODE_WRITE_MIN_FREQ_TABLE, 247 247 &data, NULL, 1, false, false); 248 248 if (ret) 249 249 goto unlock; 250 250 } 251 251 252 252 unlock: 253 - mutex_unlock(&gt->pcode.lock); 253 + mutex_unlock(&tile->pcode.lock); 254 254 return ret; 255 255 }
··· 270 270 int xe_pcode_ready(struct xe_device *xe, bool locked) 271 271 { 272 272 u32 status, request = DGFX_GET_INIT_STATUS; 273 - struct xe_gt *gt = xe_root_mmio_gt(xe); 273 + struct xe_tile *tile = xe_device_get_root_tile(xe); 274 274 int timeout_us = 180000000; /* 3 min */ 275 275 int ret; 276 276
··· 281 281 return 0; 282 282 283 283 if (locked) 284 - mutex_lock(&gt->pcode.lock); 284 + mutex_lock(&tile->pcode.lock); 285 285 286 - ret = pcode_try_request(gt, DGFX_PCODE_STATUS, request, 286 + ret = pcode_try_request(tile, DGFX_PCODE_STATUS, request, 287 287 DGFX_INIT_STATUS_COMPLETE, 288 288 DGFX_INIT_STATUS_COMPLETE, 289 289 &status, false, timeout_us, locked); 290 290 291 291 if (locked) 292 - mutex_unlock(&gt->pcode.lock); 292 + mutex_unlock(&tile->pcode.lock); 293 293 294 294 if (ret) 295 295 drm_err(&xe->drm,
··· 300 300 301 301 /** 302 302 * xe_pcode_init: initialize components of PCODE 303 + * @tile: tile instance 304 304 * 305 305 * This function initializes the xe_pcode component. 306 306 * To be called once only during probe. 307 307 */ 308 - void xe_pcode_init(struct xe_gt *gt) 308 + void xe_pcode_init(struct xe_tile *tile) 309 309 { 310 - drmm_mutex_init(&gt_to_xe(gt)->drm, &gt->pcode.lock); 310 + drmm_mutex_init(&tile_to_xe(tile)->drm, &tile->pcode.lock); 311 311 } 312 312 313 313 /**
+8 -8
drivers/gpu/drm/xe/xe_pcode.h
··· 7 7 #define _XE_PCODE_H_ 8 8 9 9 #include <linux/types.h> 10 - struct xe_gt; 10 + struct xe_tile; 11 11 struct xe_device; 12 12 13 - void xe_pcode_init(struct xe_gt *gt); 13 + void xe_pcode_init(struct xe_tile *tile); 14 14 int xe_pcode_probe_early(struct xe_device *xe); 15 15 int xe_pcode_ready(struct xe_device *xe, bool locked); 16 - int xe_pcode_init_min_freq_table(struct xe_gt *gt, u32 min_gt_freq, 16 + int xe_pcode_init_min_freq_table(struct xe_tile *tile, u32 min_gt_freq, 17 17 u32 max_gt_freq); 18 - int xe_pcode_read(struct xe_gt *gt, u32 mbox, u32 *val, u32 *val1); 19 - int xe_pcode_write_timeout(struct xe_gt *gt, u32 mbox, u32 val, 18 + int xe_pcode_read(struct xe_tile *tile, u32 mbox, u32 *val, u32 *val1); 19 + int xe_pcode_write_timeout(struct xe_tile *tile, u32 mbox, u32 val, 20 20 int timeout_ms); 21 - #define xe_pcode_write(gt, mbox, val) \ 22 - xe_pcode_write_timeout(gt, mbox, val, 1) 21 + #define xe_pcode_write(tile, mbox, val) \ 22 + xe_pcode_write_timeout(tile, mbox, val, 1) 23 23 24 - int xe_pcode_request(struct xe_gt *gt, u32 mbox, u32 request, 24 + int xe_pcode_request(struct xe_tile *tile, u32 mbox, u32 request, 25 25 u32 reply_mask, u32 reply, int timeout_ms); 26 26 27 27 #define PCODE_MBOX(mbcmd, param1, param2)\
+3
drivers/gpu/drm/xe/xe_tile.c
··· 9 9 #include "xe_ggtt.h" 10 10 #include "xe_gt.h" 11 11 #include "xe_migrate.h" 12 + #include "xe_pcode.h" 12 13 #include "xe_sa.h" 13 14 #include "xe_tile.h" 14 15 #include "xe_tile_sysfs.h" ··· 124 123 tile->primary_gt = xe_gt_alloc(tile); 125 124 if (IS_ERR(tile->primary_gt)) 126 125 return PTR_ERR(tile->primary_gt); 126 + 127 + xe_pcode_init(tile); 127 128 128 129 return 0; 129 130 }
+7 -2
drivers/gpu/drm/xe/xe_uc_fw.h
··· 65 65 return "<invalid>"; 66 66 } 67 67 68 - static inline int xe_uc_fw_status_to_error(enum xe_uc_fw_status status) 68 + static inline int xe_uc_fw_status_to_error(const enum xe_uc_fw_status status) 69 69 { 70 70 switch (status) { 71 71 case XE_UC_FIRMWARE_NOT_SUPPORTED: ··· 108 108 } 109 109 110 110 static inline enum xe_uc_fw_status 111 - __xe_uc_fw_status(struct xe_uc_fw *uc_fw) 111 + __xe_uc_fw_status(const struct xe_uc_fw *uc_fw) 112 112 { 113 113 /* shouldn't call this before checking hw/blob availability */ 114 114 XE_WARN_ON(uc_fw->status == XE_UC_FIRMWARE_UNINITIALIZED); ··· 154 154 static inline bool xe_uc_fw_is_overridden(const struct xe_uc_fw *uc_fw) 155 155 { 156 156 return uc_fw->user_overridden; 157 + } 158 + 159 + static inline bool xe_uc_fw_is_in_error_state(const struct xe_uc_fw *uc_fw) 160 + { 161 + return xe_uc_fw_status_to_error(__xe_uc_fw_status(uc_fw)) < 0; 157 162 } 158 163 159 164 static inline void xe_uc_fw_sanitize(struct xe_uc_fw *uc_fw)
+2 -4
drivers/gpu/drm/xe/xe_vram_freq.c
··· 34 34 char *buf) 35 35 { 36 36 struct xe_tile *tile = dev_to_tile(dev); 37 - struct xe_gt *gt = tile->primary_gt; 38 37 u32 val, mbox; 39 38 int err; 40 39 ··· 41 42 | REG_FIELD_PREP(PCODE_MB_PARAM1, PCODE_MBOX_FC_SC_READ_FUSED_P0) 42 43 | REG_FIELD_PREP(PCODE_MB_PARAM2, PCODE_MBOX_DOMAIN_HBM); 43 44 44 - err = xe_pcode_read(gt, mbox, &val, NULL); 45 + err = xe_pcode_read(tile, mbox, &val, NULL); 45 46 if (err) 46 47 return err; 47 48 ··· 56 57 char *buf) 57 58 { 58 59 struct xe_tile *tile = dev_to_tile(dev); 59 - struct xe_gt *gt = tile->primary_gt; 60 60 u32 val, mbox; 61 61 int err; 62 62 ··· 63 65 | REG_FIELD_PREP(PCODE_MB_PARAM1, PCODE_MBOX_FC_SC_READ_FUSED_PN) 64 66 | REG_FIELD_PREP(PCODE_MB_PARAM2, PCODE_MBOX_DOMAIN_HBM); 65 67 66 - err = xe_pcode_read(gt, mbox, &val, NULL); 68 + err = xe_pcode_read(tile, mbox, &val, NULL); 67 69 if (err) 68 70 return err; 69 71
+1
drivers/gpu/drm/xlnx/Kconfig
··· 8 8 select DMA_ENGINE 9 9 select DRM_DISPLAY_DP_HELPER 10 10 select DRM_DISPLAY_HELPER 11 + select DRM_BRIDGE_CONNECTOR 11 12 select DRM_GEM_DMA_HELPER 12 13 select DRM_KMS_HELPER 13 14 select GENERIC_PHY
+1
drivers/hv/vmbus_drv.c
··· 1952 1952 */ 1953 1953 device_unregister(&device_obj->device); 1954 1954 } 1955 + EXPORT_SYMBOL_GPL(vmbus_device_unregister); 1955 1956 1956 1957 #ifdef CONFIG_ACPI 1957 1958 /*
+2
drivers/hwmon/hp-wmi-sensors.c
··· 1637 1637 goto out_unlock; 1638 1638 1639 1639 wobj = out.pointer; 1640 + if (!wobj) 1641 + goto out_unlock; 1640 1642 1641 1643 err = populate_event_from_wobj(dev, &event, wobj); 1642 1644 if (err) {
+3 -3
drivers/hwmon/ltc2991.c
··· 42 42 #define LTC2991_V7_V8_FILT_EN BIT(7) 43 43 #define LTC2991_V7_V8_TEMP_EN BIT(5) 44 44 #define LTC2991_V7_V8_DIFF_EN BIT(4) 45 - #define LTC2991_V5_V6_FILT_EN BIT(7) 46 - #define LTC2991_V5_V6_TEMP_EN BIT(5) 47 - #define LTC2991_V5_V6_DIFF_EN BIT(4) 45 + #define LTC2991_V5_V6_FILT_EN BIT(3) 46 + #define LTC2991_V5_V6_TEMP_EN BIT(1) 47 + #define LTC2991_V5_V6_DIFF_EN BIT(0) 48 48 49 49 #define LTC2991_REPEAT_ACQ_EN BIT(4) 50 50 #define LTC2991_T_INT_FILT_EN BIT(3)
+17 -13
drivers/iio/adc/ad7124.c
··· 147 147 struct ad7124_channel_config { 148 148 bool live; 149 149 unsigned int cfg_slot; 150 - enum ad7124_ref_sel refsel; 151 - bool bipolar; 152 - bool buf_positive; 153 - bool buf_negative; 154 - unsigned int vref_mv; 155 - unsigned int pga_bits; 156 - unsigned int odr; 157 - unsigned int odr_sel_bits; 158 - unsigned int filter_type; 150 + /* Following fields are used to compare equality. */ 151 + struct_group(config_props, 152 + enum ad7124_ref_sel refsel; 153 + bool bipolar; 154 + bool buf_positive; 155 + bool buf_negative; 156 + unsigned int vref_mv; 157 + unsigned int pga_bits; 158 + unsigned int odr; 159 + unsigned int odr_sel_bits; 160 + unsigned int filter_type; 161 + ); 159 162 }; 160 163 161 164 struct ad7124_channel { ··· 337 334 ptrdiff_t cmp_size; 338 335 int i; 339 336 340 - cmp_size = (u8 *)&cfg->live - (u8 *)cfg; 337 + cmp_size = sizeof_field(struct ad7124_channel_config, config_props); 341 338 for (i = 0; i < st->num_channels; i++) { 342 339 cfg_aux = &st->channels[i].cfg; 343 340 344 - if (cfg_aux->live && !memcmp(cfg, cfg_aux, cmp_size)) 341 + if (cfg_aux->live && 342 + !memcmp(&cfg->config_props, &cfg_aux->config_props, cmp_size)) 345 343 return cfg_aux; 346 344 } 347 345 ··· 768 764 if (ret < 0) 769 765 return ret; 770 766 767 + fsleep(200); 771 768 timeout = 100; 772 769 do { 773 770 ret = ad_sd_read_reg(&st->sd, AD7124_STATUS, 1, &readval); ··· 844 839 st->channels = channels; 845 840 846 841 device_for_each_child_node_scoped(dev, child) { 847 - cfg = &st->channels[channel].cfg; 848 - 849 842 ret = fwnode_property_read_u32(child, "reg", &channel); 850 843 if (ret) 851 844 return ret; ··· 861 858 st->channels[channel].ain = AD7124_CHANNEL_AINP(ain[0]) | 862 859 AD7124_CHANNEL_AINM(ain[1]); 863 860 861 + cfg = &st->channels[channel].cfg; 864 862 cfg->bipolar = fwnode_property_read_bool(child, "bipolar"); 865 863 866 864 ret = fwnode_property_read_u32(child, "adi,reference-select", &tmp);
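The ad7124 hunk above replaces a fragile pointer-arithmetic `cmp_size` (everything after `live`) with `struct_group()`, so only the comparable fields are covered by `memcmp()` regardless of member ordering. A minimal user-space sketch of that pattern, using a simplified stand-in for the kernel's macro from `<linux/stddef.h>` and illustrative field names:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Simplified stand-in for the kernel's struct_group(): a union of an
 * anonymous struct and a named struct with the same members, so the
 * group is addressable as one object (&x.config_props) while the
 * members remain directly accessible (x.odr).
 */
#define struct_group(NAME, ...) \
	union { \
		struct { __VA_ARGS__ }; \
		struct { __VA_ARGS__ } NAME; \
	}

/*
 * Toy channel config mirroring the ad7124 change: 'live' and 'cfg_slot'
 * sit outside the group and are excluded from equality checks.
 */
struct channel_config {
	bool live;
	unsigned int cfg_slot;
	struct_group(config_props,
		bool bipolar;
		unsigned int vref_mv;
		unsigned int odr;
	);
};

static bool config_equal(const struct channel_config *a,
			 const struct channel_config *b)
{
	/* Compare only the grouped members, as the driver now does. */
	return !memcmp(&a->config_props, &b->config_props,
		       sizeof(a->config_props));
}
```

Callers should zero the structs (e.g. with `memset()`) before filling them in, so padding bytes inside the group cannot make `memcmp()` report a spurious mismatch.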
+5 -8
drivers/iio/adc/ad7173.c
··· 302 302 .num_configs = 8, 303 303 .num_voltage_in = 16, 304 304 .num_gpios = 4, 305 - .higher_gpio_bits = true, 306 305 .has_vincom_input = true, 307 306 .has_temp = true, 308 307 .has_input_buf = true, ··· 319 320 .num_configs = 8, 320 321 .num_voltage_in = 16, 321 322 .num_gpios = 4, 322 - .higher_gpio_bits = true, 323 323 .has_vincom_input = true, 324 324 .has_temp = true, 325 325 .has_input_buf = true, ··· 336 338 .num_configs = 8, 337 339 .num_voltage_in = 16, 338 340 .num_gpios = 4, 339 - .higher_gpio_bits = true, 340 341 .has_vincom_input = true, 341 342 .has_temp = true, 342 343 .has_input_buf = true, ··· 1432 1435 } 1433 1436 1434 1437 static const struct of_device_id ad7173_of_match[] = { 1435 - { .compatible = "ad4111", .data = &ad4111_device_info }, 1436 - { .compatible = "ad4112", .data = &ad4112_device_info }, 1437 - { .compatible = "ad4114", .data = &ad4114_device_info }, 1438 - { .compatible = "ad4115", .data = &ad4115_device_info }, 1439 - { .compatible = "ad4116", .data = &ad4116_device_info }, 1438 + { .compatible = "adi,ad4111", .data = &ad4111_device_info }, 1439 + { .compatible = "adi,ad4112", .data = &ad4112_device_info }, 1440 + { .compatible = "adi,ad4114", .data = &ad4114_device_info }, 1441 + { .compatible = "adi,ad4115", .data = &ad4115_device_info }, 1442 + { .compatible = "adi,ad4116", .data = &ad4116_device_info }, 1440 1443 { .compatible = "adi,ad7172-2", .data = &ad7172_2_device_info }, 1441 1444 { .compatible = "adi,ad7172-4", .data = &ad7172_4_device_info }, 1442 1445 { .compatible = "adi,ad7173-8", .data = &ad7173_8_device_info },
+2 -26
drivers/iio/adc/ad7606.c
··· 49 49 1, 2, 4, 8, 16, 32, 64, 128, 50 50 }; 51 51 52 - static int ad7606_reset(struct ad7606_state *st) 52 + int ad7606_reset(struct ad7606_state *st) 53 53 { 54 54 if (st->gpio_reset) { 55 55 gpiod_set_value(st->gpio_reset, 1); ··· 60 60 61 61 return -ENODEV; 62 62 } 63 + EXPORT_SYMBOL_NS_GPL(ad7606_reset, IIO_AD7606); 63 64 64 65 static int ad7606_reg_access(struct iio_dev *indio_dev, 65 66 unsigned int reg, ··· 89 88 { 90 89 unsigned int num = st->chip_info->num_channels - 1; 91 90 u16 *data = st->data; 92 - int ret; 93 - 94 - /* 95 - * The frstdata signal is set to high while and after reading the sample 96 - * of the first channel and low for all other channels. This can be used 97 - * to check that the incoming data is correctly aligned. During normal 98 - * operation the data should never become unaligned, but some glitch or 99 - * electrostatic discharge might cause an extra read or clock cycle. 100 - * Monitoring the frstdata signal allows to recover from such failure 101 - * situations. 102 - */ 103 - 104 - if (st->gpio_frstdata) { 105 - ret = st->bops->read_block(st->dev, 1, data); 106 - if (ret) 107 - return ret; 108 - 109 - if (!gpiod_get_value(st->gpio_frstdata)) { 110 - ad7606_reset(st); 111 - return -EIO; 112 - } 113 - 114 - data++; 115 - num--; 116 - } 117 91 118 92 return st->bops->read_block(st->dev, num, data); 119 93 }
+2
drivers/iio/adc/ad7606.h
··· 151 151 const char *name, unsigned int id, 152 152 const struct ad7606_bus_ops *bops); 153 153 154 + int ad7606_reset(struct ad7606_state *st); 155 + 154 156 enum ad7606_supported_device_ids { 155 157 ID_AD7605_4, 156 158 ID_AD7606_8,
+44 -2
drivers/iio/adc/ad7606_par.c
··· 7 7 8 8 #include <linux/mod_devicetable.h> 9 9 #include <linux/module.h> 10 + #include <linux/gpio/consumer.h> 10 11 #include <linux/platform_device.h> 11 12 #include <linux/types.h> 12 13 #include <linux/err.h> ··· 22 21 struct iio_dev *indio_dev = dev_get_drvdata(dev); 23 22 struct ad7606_state *st = iio_priv(indio_dev); 24 23 25 - insw((unsigned long)st->base_address, buf, count); 26 24 25 + /* 26 + * On the parallel interface, the frstdata signal is set to high while 27 + * and after reading the sample of the first channel and low for all 28 + * other channels. This can be used to check that the incoming data is 29 + * correctly aligned. During normal operation the data should never 30 + * become unaligned, but some glitch or electrostatic discharge might 31 + * cause an extra read or clock cycle. Monitoring the frstdata signal 32 + * allows to recover from such failure situations. 33 + */ 34 + int num = count; 35 + u16 *_buf = buf; 36 + 37 + if (st->gpio_frstdata) { 38 + insw((unsigned long)st->base_address, _buf, 1); 39 + if (!gpiod_get_value(st->gpio_frstdata)) { 40 + ad7606_reset(st); 41 + return -EIO; 42 + } 43 + _buf++; 44 + num--; 45 + } 46 + insw((unsigned long)st->base_address, _buf, num); 27 47 return 0; 28 48 } 29 49 ··· 57 35 { 58 36 struct iio_dev *indio_dev = dev_get_drvdata(dev); 59 37 struct ad7606_state *st = iio_priv(indio_dev); 38 + /* 39 + * On the parallel interface, the frstdata signal is set to high while 40 + * and after reading the sample of the first channel and low for all 41 + * other channels. This can be used to check that the incoming data is 42 + * correctly aligned. During normal operation the data should never 43 + * become unaligned, but some glitch or electrostatic discharge might 44 + * cause an extra read or clock cycle. Monitoring the frstdata signal 45 + * allows to recover from such failure situations. 
46 + */ 47 + int num = count; 48 + u16 *_buf = buf; 60 49 61 - insb((unsigned long)st->base_address, buf, count * 2); 50 + if (st->gpio_frstdata) { 51 + insb((unsigned long)st->base_address, _buf, 2); 52 + if (!gpiod_get_value(st->gpio_frstdata)) { 53 + ad7606_reset(st); 54 + return -EIO; 55 + } 56 + _buf++; 57 + num--; 58 + } 59 + insb((unsigned long)st->base_address, _buf, num * 2); 62 60 63 61 return 0; 64 62 }
+1 -1
drivers/iio/adc/ad_sigma_delta.c
··· 569 569 static int devm_ad_sd_probe_trigger(struct device *dev, struct iio_dev *indio_dev) 570 570 { 571 571 struct ad_sigma_delta *sigma_delta = iio_device_get_drvdata(indio_dev); 572 - unsigned long irq_flags = irq_get_trigger_type(sigma_delta->spi->irq); 572 + unsigned long irq_flags = irq_get_trigger_type(sigma_delta->irq_line); 573 573 int ret; 574 574 575 575 if (dev != &sigma_delta->spi->dev) {
+1 -1
drivers/iio/adc/ti-ads1119.c
··· 735 735 if (client->irq > 0) { 736 736 ret = devm_request_threaded_irq(dev, client->irq, 737 737 ads1119_irq_handler, 738 - NULL, IRQF_TRIGGER_FALLING, 738 + NULL, IRQF_ONESHOT, 739 739 "ads1119", indio_dev); 740 740 if (ret) 741 741 return dev_err_probe(dev, ret,
+3 -1
drivers/iio/buffer/industrialio-buffer-dmaengine.c
··· 237 237 238 238 ret = dma_get_slave_caps(chan, &caps); 239 239 if (ret < 0) 240 - goto err_free; 240 + goto err_release; 241 241 242 242 /* Needs to be aligned to the maximum of the minimums */ 243 243 if (caps.src_addr_widths) ··· 263 263 264 264 return &dmaengine_buffer->queue.buffer; 265 265 266 + err_release: 267 + dma_release_channel(chan); 266 268 err_free: 267 269 kfree(dmaengine_buffer); 268 270 return ERR_PTR(ret);
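The dmaengine-buffer hunk above adds an `err_release` label so that a failing `dma_get_slave_caps()` releases the channel acquired just before it, instead of jumping straight to `kfree()` and leaking the channel. A toy sketch of that staged-unwind pattern, with hypothetical counters standing in for the real request/release and alloc/free calls:

```c
#include <assert.h>

/* Hypothetical resource counters in place of dma_request_chan()/
 * dma_release_channel() and kzalloc()/kfree(). */
static int chan_held, buf_held;

static int acquire_chan(void)  { chan_held++; return 0; }
static void release_chan(void) { chan_held--; }

static int setup(int caps_ok)
{
	int ret;

	buf_held++;			/* kzalloc(...)               */
	ret = acquire_chan();		/* dma_request_chan(...)      */
	if (ret)
		goto err_free;

	if (!caps_ok) {			/* dma_get_slave_caps() fails */
		ret = -1;
		goto err_release;	/* was "goto err_free": leak  */
	}
	return 0;

err_release:
	release_chan();			/* undo in reverse order      */
err_free:
	buf_held--;			/* kfree(...)                 */
	return ret;
}
```

The labels unwind in reverse acquisition order, so each failure point jumps to the label that releases exactly what has been acquired so far.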
+11 -2
drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
··· 248 248 int result; 249 249 250 250 switch (st->chip_type) { 251 + case INV_MPU6000: 251 252 case INV_MPU6050: 253 + case INV_MPU9150: 254 + /* 255 + * WoM is not supported and interrupt status read seems to be broken for 256 + * some chips. Since data ready is the only interrupt, bypass interrupt 257 + * status read and always assert data ready bit. 258 + */ 259 + wom_bits = 0; 260 + int_status = INV_MPU6050_BIT_RAW_DATA_RDY_INT; 261 + goto data_ready_interrupt; 252 262 case INV_MPU6500: 253 263 case INV_MPU6515: 254 264 case INV_MPU6880: 255 - case INV_MPU6000: 256 - case INV_MPU9150: 257 265 case INV_MPU9250: 258 266 case INV_MPU9255: 259 267 wom_bits = INV_MPU6500_BIT_WOM_INT; ··· 287 279 } 288 280 } 289 281 282 + data_ready_interrupt: 290 283 /* handle raw data interrupt */ 291 284 if (int_status & INV_MPU6050_BIT_RAW_DATA_RDY_INT) { 292 285 indio_dev->pollfunc->timestamp = st->it_timestamp;
+4 -4
drivers/iio/inkern.c
··· 647 647 break; 648 648 case IIO_VAL_INT_PLUS_MICRO: 649 649 if (scale_val2 < 0) 650 - *processed = -raw64 * scale_val; 650 + *processed = -raw64 * scale_val * scale; 651 651 else 652 - *processed = raw64 * scale_val; 652 + *processed = raw64 * scale_val * scale; 653 653 *processed += div_s64(raw64 * (s64)scale_val2 * scale, 654 654 1000000LL); 655 655 break; 656 656 case IIO_VAL_INT_PLUS_NANO: 657 657 if (scale_val2 < 0) 658 - *processed = -raw64 * scale_val; 658 + *processed = -raw64 * scale_val * scale; 659 659 else 660 - *processed = raw64 * scale_val; 660 + *processed = raw64 * scale_val * scale; 661 661 *processed += div_s64(raw64 * (s64)scale_val2 * scale, 662 662 1000000000LL); 663 663 break;
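The inkern.c hunk above applies the caller's `scale` multiplier to the integer part of the channel scale as well; previously only the fractional (`scale_val2`) term was scaled. A hedged arithmetic sketch of the `IIO_VAL_INT_PLUS_MICRO` case (illustrative helper name, not the kernel API, and plain 64-bit division in place of `div_s64()`):

```c
#include <assert.h>
#include <stdint.h>

/*
 * A channel scale of {scale_val = 1, scale_val2 = 500000} means
 * 1.5 units per raw count; 'scale' is the consumer's extra multiplier
 * (e.g. 1000 to get milli-units).
 */
static int64_t process_int_plus_micro(int64_t raw64, int scale_val,
				      int scale_val2, int scale)
{
	int64_t processed;

	/* integer part: must also be multiplied by 'scale' (the fix) */
	if (scale_val2 < 0)
		processed = -raw64 * scale_val * scale;
	else
		processed = raw64 * scale_val * scale;

	/* fractional part: raw * (scale_val2 / 1e6), then * 'scale' */
	processed += raw64 * (int64_t)scale_val2 * scale / 1000000LL;
	return processed;
}
```

With raw = 10, scale = {1, 500000} and a milli-unit multiplier of 1000, this yields 15000 (10 counts × 1.5 units × 1000); the pre-fix code returned 5010 because the integer term was left unscaled.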
+3 -3
drivers/irqchip/irq-gic-v2m.c
··· 407 407 408 408 ret = gicv2m_init_one(&child->fwnode, spi_start, nr_spis, 409 409 &res, 0); 410 - if (ret) { 411 - of_node_put(child); 410 + if (ret) 412 411 break; 413 - } 414 412 } 415 413 414 + if (ret && child) 415 + of_node_put(child); 416 416 if (!ret) 417 417 ret = gicv2m_allocate_domains(parent); 418 418 if (ret)
+10 -6
drivers/irqchip/irq-gic-v3-its.c
··· 1330 1330 } 1331 1331 1332 1332 /* 1333 - * Protect against concurrent updates of the mapping state on 1334 - * individual VMs. 1335 - */ 1336 - guard(raw_spinlock_irqsave)(&vpe->its_vm->vmapp_lock); 1337 - 1338 - /* 1339 1333 * Yet another marvel of the architecture. If using the 1340 1334 * its_list "feature", we need to make sure that all ITSs 1341 1335 * receive all VMOVP commands in the same order. The only way ··· 3818 3824 * protect us, and that we must ensure nobody samples vpe->col_idx 3819 3825 * during the update, hence the lock below which must also be 3820 3826 * taken on any vLPI handling path that evaluates vpe->col_idx. 3827 + * 3828 + * Finally, we must protect ourselves against concurrent updates of 3829 + * the mapping state on this VM should the ITS list be in use (see 3830 + * the shortcut in its_send_vmovp() otherewise). 3821 3831 */ 3832 + if (its_list_map) 3833 + raw_spin_lock(&vpe->its_vm->vmapp_lock); 3834 + 3822 3835 from = vpe_to_cpuid_lock(vpe, &flags); 3823 3836 table_mask = gic_data_rdist_cpu(from)->vpe_table_mask; 3824 3837 ··· 3854 3853 out: 3855 3854 irq_data_update_effective_affinity(d, cpumask_of(cpu)); 3856 3855 vpe_to_cpuid_unlock(vpe, flags); 3856 + 3857 + if (its_list_map) 3858 + raw_spin_unlock(&vpe->its_vm->vmapp_lock); 3857 3859 3858 3860 return IRQ_SET_MASK_OK_DONE; 3859 3861 }
+14 -7
drivers/irqchip/irq-gic-v3.c
··· 1154 1154 gic_data.rdists.has_vpend_valid_dirty ? "Valid+Dirty " : ""); 1155 1155 } 1156 1156 1157 - static void gic_cpu_sys_reg_init(void) 1157 + static void gic_cpu_sys_reg_enable(void) 1158 1158 { 1159 - int i, cpu = smp_processor_id(); 1160 - u64 mpidr = gic_cpu_to_affinity(cpu); 1161 - u64 need_rss = MPIDR_RS(mpidr); 1162 - bool group0; 1163 - u32 pribits; 1164 - 1165 1159 /* 1166 1160 * Need to check that the SRE bit has actually been set. If 1167 1161 * not, it means that SRE is disabled at EL2. We're going to ··· 1165 1171 */ 1166 1172 if (!gic_enable_sre()) 1167 1173 pr_err("GIC: unable to set SRE (disabled at EL2), panic ahead\n"); 1174 + 1175 + } 1176 + 1177 + static void gic_cpu_sys_reg_init(void) 1178 + { 1179 + int i, cpu = smp_processor_id(); 1180 + u64 mpidr = gic_cpu_to_affinity(cpu); 1181 + u64 need_rss = MPIDR_RS(mpidr); 1182 + bool group0; 1183 + u32 pribits; 1168 1184 1169 1185 pribits = gic_get_pribits(); 1170 1186 ··· 1337 1333 1338 1334 static int gic_starting_cpu(unsigned int cpu) 1339 1335 { 1336 + gic_cpu_sys_reg_enable(); 1340 1337 gic_cpu_init(); 1341 1338 1342 1339 if (gic_dist_supports_lpis()) ··· 1503 1498 if (cmd == CPU_PM_EXIT) { 1504 1499 if (gic_dist_security_disabled()) 1505 1500 gic_enable_redist(true); 1501 + gic_cpu_sys_reg_enable(); 1506 1502 gic_cpu_sys_reg_init(); 1507 1503 } else if (cmd == CPU_PM_ENTER && gic_dist_security_disabled()) { 1508 1504 gic_write_grpen1(0); ··· 2076 2070 2077 2071 gic_update_rdist_properties(); 2078 2072 2073 + gic_cpu_sys_reg_enable(); 2079 2074 gic_prio_init(); 2080 2075 gic_dist_init(); 2081 2076 gic_cpu_init();
+4 -1
drivers/irqchip/irq-msi-lib.c
··· 128 128 const struct msi_parent_ops *ops = d->msi_parent_ops; 129 129 u32 busmask = BIT(bus_token); 130 130 131 + if (!ops) 132 + return 0; 133 + 131 134 if (fwspec->fwnode != d->fwnode || fwspec->param_count != 0) 132 135 return 0; 133 136 ··· 138 135 if (bus_token == ops->bus_select_token) 139 136 return 1; 140 137 141 - return ops && !!(ops->bus_select_mask & busmask); 138 + return !!(ops->bus_select_mask & busmask); 142 139 } 143 140 EXPORT_SYMBOL_GPL(msi_lib_irq_domain_select);
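The msi-lib hunk above hoists the `ops` NULL test ahead of the first dereference; the old code only tested `ops` in the final `return ops && ...`, after `ops->bus_select_token` had already been read. A minimal model of the corrected ordering (simplified types, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>

struct parent_ops {
	unsigned int bus_select_token;
	unsigned int bus_select_mask;
};

static int domain_select(const struct parent_ops *ops, unsigned int bus_token)
{
	unsigned int busmask = 1U << bus_token;

	if (!ops)		/* guard added by the patch: bail before   */
		return 0;	/* any ops-> dereference can happen        */

	if (bus_token == ops->bus_select_token)
		return 1;

	return !!(ops->bus_select_mask & busmask);
}
```

A trailing `ops && ...` guard is useless once an earlier statement has already dereferenced the pointer; the check has to dominate every use.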
+2 -2
drivers/irqchip/irq-riscv-aplic-main.c
··· 175 175 176 176 /* Map the MMIO registers */ 177 177 regs = devm_platform_ioremap_resource(pdev, 0); 178 - if (!regs) { 178 + if (IS_ERR(regs)) { 179 179 dev_err(dev, "failed map MMIO registers\n"); 180 - return -ENOMEM; 180 + return PTR_ERR(regs); 181 181 } 182 182 183 183 /*
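The aplic hunk above matters because `devm_platform_ioremap_resource()` reports failure as an ERR_PTR-encoded pointer, never NULL, so the old `if (!regs)` test could not fire. A user-space sketch of the `<linux/err.h>` convention it relies on, where errno values are encoded into the last page of the address space:

```c
#include <assert.h>
#include <stdint.h>

/* Minimal model of the kernel's ERR_PTR()/IS_ERR()/PTR_ERR() helpers. */
#define MAX_ERRNO	4095

static inline void *ERR_PTR(intptr_t error)
{
	/* e.g. -ENOMEM becomes 0xffff...fff4: non-NULL, but flagged */
	return (void *)error;
}

static inline intptr_t PTR_ERR(const void *ptr)
{
	return (intptr_t)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	/* error pointers occupy the top MAX_ERRNO addresses */
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

Hence the corrected driver code checks `IS_ERR(regs)` and propagates `PTR_ERR(regs)`, exactly as the hunk shows, instead of testing for NULL.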
+71 -44
drivers/irqchip/irq-sifive-plic.c
··· 3 3 * Copyright (C) 2017 SiFive 4 4 * Copyright (C) 2018 Christoph Hellwig 5 5 */ 6 + #define pr_fmt(fmt) "riscv-plic: " fmt 6 7 #include <linux/cpu.h> 7 8 #include <linux/interrupt.h> 8 9 #include <linux/io.h> ··· 64 63 #define PLIC_QUIRK_EDGE_INTERRUPT 0 65 64 66 65 struct plic_priv { 67 - struct device *dev; 66 + struct fwnode_handle *fwnode; 68 67 struct cpumask lmask; 69 68 struct irq_domain *irqdomain; 70 69 void __iomem *regs; ··· 379 378 int err = generic_handle_domain_irq(handler->priv->irqdomain, 380 379 hwirq); 381 380 if (unlikely(err)) { 382 - dev_warn_ratelimited(handler->priv->dev, 383 - "can't find mapping for hwirq %lu\n", hwirq); 381 + pr_warn_ratelimited("%pfwP: can't find mapping for hwirq %lu\n", 382 + handler->priv->fwnode, hwirq); 384 383 } 385 384 } 386 385 ··· 409 408 enable_percpu_irq(plic_parent_irq, 410 409 irq_get_trigger_type(plic_parent_irq)); 411 410 else 412 - dev_warn(handler->priv->dev, "cpu%d: parent irq not available\n", cpu); 411 + pr_warn("%pfwP: cpu%d: parent irq not available\n", 412 + handler->priv->fwnode, cpu); 413 413 plic_set_threshold(handler, PLIC_ENABLE_THRESHOLD); 414 414 415 415 return 0; ··· 426 424 {} 427 425 }; 428 426 429 - static int plic_parse_nr_irqs_and_contexts(struct platform_device *pdev, 427 + static int plic_parse_nr_irqs_and_contexts(struct fwnode_handle *fwnode, 430 428 u32 *nr_irqs, u32 *nr_contexts) 431 429 { 432 - struct device *dev = &pdev->dev; 433 430 int rc; 434 431 435 432 /* 436 433 * Currently, only OF fwnode is supported so extend this 437 434 * function for ACPI support. 
438 435 */ 439 - if (!is_of_node(dev->fwnode)) 436 + if (!is_of_node(fwnode)) 440 437 return -EINVAL; 441 438 442 - rc = of_property_read_u32(to_of_node(dev->fwnode), "riscv,ndev", nr_irqs); 439 + rc = of_property_read_u32(to_of_node(fwnode), "riscv,ndev", nr_irqs); 443 440 if (rc) { 444 - dev_err(dev, "riscv,ndev property not available\n"); 441 + pr_err("%pfwP: riscv,ndev property not available\n", fwnode); 445 442 return rc; 446 443 } 447 444 448 - *nr_contexts = of_irq_count(to_of_node(dev->fwnode)); 445 + *nr_contexts = of_irq_count(to_of_node(fwnode)); 449 446 if (WARN_ON(!(*nr_contexts))) { 450 - dev_err(dev, "no PLIC context available\n"); 447 + pr_err("%pfwP: no PLIC context available\n", fwnode); 451 448 return -EINVAL; 452 449 } 453 450 454 451 return 0; 455 452 } 456 453 457 - static int plic_parse_context_parent(struct platform_device *pdev, u32 context, 454 + static int plic_parse_context_parent(struct fwnode_handle *fwnode, u32 context, 458 455 u32 *parent_hwirq, int *parent_cpu) 459 456 { 460 - struct device *dev = &pdev->dev; 461 457 struct of_phandle_args parent; 462 458 unsigned long hartid; 463 459 int rc; ··· 464 464 * Currently, only OF fwnode is supported so extend this 465 465 * function for ACPI support. 
466 466 */ 467 - if (!is_of_node(dev->fwnode)) 467 + if (!is_of_node(fwnode)) 468 468 return -EINVAL; 469 469 470 - rc = of_irq_parse_one(to_of_node(dev->fwnode), context, &parent); 470 + rc = of_irq_parse_one(to_of_node(fwnode), context, &parent); 471 471 if (rc) 472 472 return rc; 473 473 ··· 480 480 return 0; 481 481 } 482 482 483 - static int plic_probe(struct platform_device *pdev) 483 + static int plic_probe(struct fwnode_handle *fwnode) 484 484 { 485 485 int error = 0, nr_contexts, nr_handlers = 0, cpu, i; 486 - struct device *dev = &pdev->dev; 487 486 unsigned long plic_quirks = 0; 488 487 struct plic_handler *handler; 489 488 u32 nr_irqs, parent_hwirq; 490 489 struct plic_priv *priv; 491 490 irq_hw_number_t hwirq; 491 + void __iomem *regs; 492 492 493 - if (is_of_node(dev->fwnode)) { 493 + if (is_of_node(fwnode)) { 494 494 const struct of_device_id *id; 495 495 496 - id = of_match_node(plic_match, to_of_node(dev->fwnode)); 496 + id = of_match_node(plic_match, to_of_node(fwnode)); 497 497 if (id) 498 498 plic_quirks = (unsigned long)id->data; 499 + 500 + regs = of_iomap(to_of_node(fwnode), 0); 501 + if (!regs) 502 + return -ENOMEM; 503 + } else { 504 + return -ENODEV; 499 505 } 500 506 501 - error = plic_parse_nr_irqs_and_contexts(pdev, &nr_irqs, &nr_contexts); 507 + error = plic_parse_nr_irqs_and_contexts(fwnode, &nr_irqs, &nr_contexts); 502 508 if (error) 503 - return error; 509 + goto fail_free_regs; 504 510 505 - priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 506 - if (!priv) 507 - return -ENOMEM; 511 + priv = kzalloc(sizeof(*priv), GFP_KERNEL); 512 + if (!priv) { 513 + error = -ENOMEM; 514 + goto fail_free_regs; 515 + } 508 516 509 - priv->dev = dev; 517 + priv->fwnode = fwnode; 510 518 priv->plic_quirks = plic_quirks; 511 519 priv->nr_irqs = nr_irqs; 520 + priv->regs = regs; 512 521 513 - priv->regs = devm_platform_ioremap_resource(pdev, 0); 514 - if (WARN_ON(!priv->regs)) 515 - return -EIO; 516 - 517 - priv->prio_save = devm_bitmap_zalloc(dev, 
nr_irqs, GFP_KERNEL); 518 - if (!priv->prio_save) 519 - return -ENOMEM; 522 + priv->prio_save = bitmap_zalloc(nr_irqs, GFP_KERNEL); 523 + if (!priv->prio_save) { 524 + error = -ENOMEM; 525 + goto fail_free_priv; 526 + } 520 527 521 528 for (i = 0; i < nr_contexts; i++) { 522 - error = plic_parse_context_parent(pdev, i, &parent_hwirq, &cpu); 529 + error = plic_parse_context_parent(fwnode, i, &parent_hwirq, &cpu); 523 530 if (error) { 524 - dev_warn(dev, "hwirq for context%d not found\n", i); 531 + pr_warn("%pfwP: hwirq for context%d not found\n", fwnode, i); 525 532 continue; 526 533 } 527 534 ··· 550 543 } 551 544 552 545 if (cpu < 0) { 553 - dev_warn(dev, "Invalid cpuid for context %d\n", i); 546 + pr_warn("%pfwP: Invalid cpuid for context %d\n", fwnode, i); 554 547 continue; 555 548 } 556 549 ··· 561 554 */ 562 555 handler = per_cpu_ptr(&plic_handlers, cpu); 563 556 if (handler->present) { 564 - dev_warn(dev, "handler already present for context %d.\n", i); 557 + pr_warn("%pfwP: handler already present for context %d.\n", fwnode, i); 565 558 plic_set_threshold(handler, PLIC_DISABLE_THRESHOLD); 566 559 goto done; 567 560 } ··· 575 568 i * CONTEXT_ENABLE_SIZE; 576 569 handler->priv = priv; 577 570 578 - handler->enable_save = devm_kcalloc(dev, DIV_ROUND_UP(nr_irqs, 32), 579 - sizeof(*handler->enable_save), GFP_KERNEL); 571 + handler->enable_save = kcalloc(DIV_ROUND_UP(nr_irqs, 32), 572 + sizeof(*handler->enable_save), GFP_KERNEL); 580 573 if (!handler->enable_save) 581 574 goto fail_cleanup_contexts; 582 575 done: ··· 588 581 nr_handlers++; 589 582 } 590 583 591 - priv->irqdomain = irq_domain_add_linear(to_of_node(dev->fwnode), nr_irqs + 1, 584 + priv->irqdomain = irq_domain_add_linear(to_of_node(fwnode), nr_irqs + 1, 592 585 &plic_irqdomain_ops, priv); 593 586 if (WARN_ON(!priv->irqdomain)) 594 587 goto fail_cleanup_contexts; ··· 626 619 } 627 620 } 628 621 629 - dev_info(dev, "mapped %d interrupts with %d handlers for %d contexts.\n", 630 - nr_irqs, nr_handlers, 
nr_contexts); 622 + pr_info("%pfwP: mapped %d interrupts with %d handlers for %d contexts.\n", 623 + fwnode, nr_irqs, nr_handlers, nr_contexts); 631 624 return 0; 632 625 633 626 fail_cleanup_contexts: 634 627 for (i = 0; i < nr_contexts; i++) { 635 - if (plic_parse_context_parent(pdev, i, &parent_hwirq, &cpu)) 628 + if (plic_parse_context_parent(fwnode, i, &parent_hwirq, &cpu)) 636 629 continue; 637 630 if (parent_hwirq != RV_IRQ_EXT || cpu < 0) 638 631 continue; ··· 641 634 handler->present = false; 642 635 handler->hart_base = NULL; 643 636 handler->enable_base = NULL; 637 + kfree(handler->enable_save); 644 638 handler->enable_save = NULL; 645 639 handler->priv = NULL; 646 640 } 647 - return -ENOMEM; 641 + bitmap_free(priv->prio_save); 642 + fail_free_priv: 643 + kfree(priv); 644 + fail_free_regs: 645 + iounmap(regs); 646 + return error; 647 + } 648 + 649 + static int plic_platform_probe(struct platform_device *pdev) 650 + { 651 + return plic_probe(pdev->dev.fwnode); 648 652 } 649 653 650 654 static struct platform_driver plic_driver = { 651 655 .driver = { 652 656 .name = "riscv-plic", 653 657 .of_match_table = plic_match, 658 + .suppress_bind_attrs = true, 654 659 }, 655 - .probe = plic_probe, 660 + .probe = plic_platform_probe, 656 661 }; 657 662 builtin_platform_driver(plic_driver); 663 + 664 + static int __init plic_early_probe(struct device_node *node, 665 + struct device_node *parent) 666 + { 667 + return plic_probe(&node->fwnode); 668 + } 669 + 670 + IRQCHIP_DECLARE(riscv, "allwinner,sun20i-d1-plic", plic_early_probe);
+2 -3
drivers/misc/fastrpc.c
··· 1910 1910 &args[0]); 1911 1911 if (err) { 1912 1912 dev_err(dev, "mmap error (len 0x%08llx)\n", buf->size); 1913 - goto err_invoke; 1913 + fastrpc_buf_free(buf); 1914 + return err; 1914 1915 } 1915 1916 1916 1917 /* update the buffer to be able to deallocate the memory on the DSP */ ··· 1949 1948 1950 1949 err_assign: 1951 1950 fastrpc_req_munmap_impl(fl, buf); 1952 - err_invoke: 1953 - fastrpc_buf_free(buf); 1954 1951 1955 1952 return err; 1956 1953 }
+4 -10
drivers/misc/keba/cp500.c
··· 212 212 } 213 213 static DEVICE_ATTR_RW(keep_cfg); 214 214 215 - static struct attribute *attrs[] = { 215 + static struct attribute *cp500_attrs[] = { 216 216 &dev_attr_version.attr, 217 217 &dev_attr_keep_cfg.attr, 218 218 NULL 219 219 }; 220 - static const struct attribute_group attrs_group = { .attrs = attrs }; 220 + ATTRIBUTE_GROUPS(cp500); 221 221 222 222 static void cp500_i2c_release(struct device *dev) 223 223 { ··· 396 396 397 397 pci_set_drvdata(pci_dev, cp500); 398 398 399 - ret = sysfs_create_group(&pci_dev->dev.kobj, &attrs_group); 400 - if (ret != 0) 401 - goto out_free_irq; 402 399 403 400 ret = cp500_enable(cp500); 404 401 if (ret != 0) 405 - goto out_remove_group; 402 + goto out_free_irq; 406 403 407 404 cp500_register_auxiliary_devs(cp500); 408 405 409 406 return 0; 410 407 411 - out_remove_group: 412 - sysfs_remove_group(&pci_dev->dev.kobj, &attrs_group); 413 408 out_free_irq: 414 409 pci_free_irq_vectors(pci_dev); 415 410 out_disable: ··· 421 426 cp500_unregister_auxiliary_devs(cp500); 422 427 423 428 cp500_disable(cp500); 424 - 425 - sysfs_remove_group(&pci_dev->dev.kobj, &attrs_group); 426 429 427 430 pci_set_drvdata(pci_dev, 0); 428 431 ··· 443 450 .id_table = cp500_ids, 444 451 .probe = cp500_probe, 445 452 .remove = cp500_remove, 453 + .dev_groups = cp500_groups, 446 454 }; 447 455 module_pci_driver(cp500_driver); 448 456
+2 -1
drivers/misc/vmw_vmci/vmci_resource.c
··· 144 144 spin_lock(&vmci_resource_table.lock); 145 145 146 146 hlist_for_each_entry(r, &vmci_resource_table.entries[idx], node) { 147 - if (vmci_handle_is_equal(r->handle, resource->handle)) { 147 + if (vmci_handle_is_equal(r->handle, resource->handle) && 148 + resource->type == r->type) { 148 149 hlist_del_init_rcu(&r->node); 149 150 break; 150 151 }
+13 -9
drivers/mmc/core/quirks.h
··· 15 15 16 16 #include "card.h" 17 17 18 + static const struct mmc_fixup __maybe_unused mmc_sd_fixups[] = { 19 + /* 20 + * Kingston Canvas Go! Plus microSD cards never finish SD cache flush. 21 + * This has so far only been observed on cards from 11/2019, while new 22 + * cards from 2023/05 do not exhibit this behavior. 23 + */ 24 + _FIXUP_EXT("SD64G", CID_MANFID_KINGSTON_SD, 0x5449, 2019, 11, 25 + 0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd, 26 + MMC_QUIRK_BROKEN_SD_CACHE, EXT_CSD_REV_ANY), 27 + 28 + END_FIXUP 29 + }; 30 + 18 31 static const struct mmc_fixup __maybe_unused mmc_blk_fixups[] = { 19 32 #define INAND_CMD38_ARG_EXT_CSD 113 20 33 #define INAND_CMD38_ARG_ERASE 0x00 ··· 65 52 MMC_QUIRK_BLK_NO_CMD23), 66 53 MMC_FIXUP("MMC32G", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc, 67 54 MMC_QUIRK_BLK_NO_CMD23), 68 - 69 - /* 70 - * Kingston Canvas Go! Plus microSD cards never finish SD cache flush. 71 - * This has so far only been observed on cards from 11/2019, while new 72 - * cards from 2023/05 do not exhibit this behavior. 73 - */ 74 - _FIXUP_EXT("SD64G", CID_MANFID_KINGSTON_SD, 0x5449, 2019, 11, 75 - 0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd, 76 - MMC_QUIRK_BROKEN_SD_CACHE, EXT_CSD_REV_ANY), 77 55 78 56 /* 79 57 * Some SD cards lockup while using CMD23 multiblock transfers.
+4
drivers/mmc/core/sd.c
··· 26 26 #include "host.h" 27 27 #include "bus.h" 28 28 #include "mmc_ops.h" 29 + #include "quirks.h" 29 30 #include "sd.h" 30 31 #include "sd_ops.h" 31 32 ··· 1475 1474 if (err) 1476 1475 goto free_card; 1477 1476 } 1477 + 1478 + /* Apply quirks prior to card setup */ 1479 + mmc_fixup_device(card, mmc_sd_fixups); 1478 1480 1479 1481 err = mmc_sd_setup_card(host, card, oldcard != NULL); 1480 1482 if (err)
+1 -1
drivers/mmc/host/cqhci-core.c
··· 617 617 cqhci_writel(cq_host, 0, CQHCI_CTL); 618 618 mmc->cqe_on = true; 619 619 pr_debug("%s: cqhci: CQE on\n", mmc_hostname(mmc)); 620 - if (cqhci_readl(cq_host, CQHCI_CTL) && CQHCI_HALT) { 620 + if (cqhci_readl(cq_host, CQHCI_CTL) & CQHCI_HALT) { 621 621 pr_err("%s: cqhci: CQE failed to exit halt state\n", 622 622 mmc_hostname(mmc)); 623 623 }
+2 -2
drivers/mmc/host/dw_mmc.c
··· 2957 2957 if (host->use_dma == TRANS_MODE_IDMAC) { 2958 2958 mmc->max_segs = host->ring_size; 2959 2959 mmc->max_blk_size = 65535; 2960 - mmc->max_seg_size = 0x1000; 2961 - mmc->max_req_size = mmc->max_seg_size * host->ring_size; 2960 + mmc->max_req_size = DW_MCI_DESC_DATA_LENGTH * host->ring_size; 2961 + mmc->max_seg_size = mmc->max_req_size; 2962 2962 mmc->max_blk_count = mmc->max_req_size / 512; 2963 2963 } else if (host->use_dma == TRANS_MODE_EDMAC) { 2964 2964 mmc->max_segs = 64;
+1
drivers/mmc/host/sdhci-of-aspeed.c
··· 510 510 { .compatible = "aspeed,ast2600-sdhci", .data = &ast2600_sdhci_pdata, }, 511 511 { } 512 512 }; 513 + MODULE_DEVICE_TABLE(of, aspeed_sdhci_of_match); 513 514 514 515 static struct platform_driver aspeed_sdhci_driver = { 515 516 .driver = {
+11 -11
drivers/net/bareudp.c
··· 83 83 84 84 if (skb_copy_bits(skb, BAREUDP_BASE_HLEN, &ipversion, 85 85 sizeof(ipversion))) { 86 - bareudp->dev->stats.rx_dropped++; 86 + DEV_STATS_INC(bareudp->dev, rx_dropped); 87 87 goto drop; 88 88 } 89 89 ipversion >>= 4; ··· 93 93 } else if (ipversion == 6 && bareudp->multi_proto_mode) { 94 94 proto = htons(ETH_P_IPV6); 95 95 } else { 96 - bareudp->dev->stats.rx_dropped++; 96 + DEV_STATS_INC(bareudp->dev, rx_dropped); 97 97 goto drop; 98 98 } 99 99 } else if (bareudp->ethertype == htons(ETH_P_MPLS_UC)) { ··· 107 107 ipv4_is_multicast(tunnel_hdr->daddr)) { 108 108 proto = htons(ETH_P_MPLS_MC); 109 109 } else { 110 - bareudp->dev->stats.rx_dropped++; 110 + DEV_STATS_INC(bareudp->dev, rx_dropped); 111 111 goto drop; 112 112 } 113 113 } else { ··· 123 123 (addr_type & IPV6_ADDR_MULTICAST)) { 124 124 proto = htons(ETH_P_MPLS_MC); 125 125 } else { 126 - bareudp->dev->stats.rx_dropped++; 126 + DEV_STATS_INC(bareudp->dev, rx_dropped); 127 127 goto drop; 128 128 } 129 129 } ··· 135 135 proto, 136 136 !net_eq(bareudp->net, 137 137 dev_net(bareudp->dev)))) { 138 - bareudp->dev->stats.rx_dropped++; 138 + DEV_STATS_INC(bareudp->dev, rx_dropped); 139 139 goto drop; 140 140 } 141 141 ··· 143 143 144 144 tun_dst = udp_tun_rx_dst(skb, family, key, 0, 0); 145 145 if (!tun_dst) { 146 - bareudp->dev->stats.rx_dropped++; 146 + DEV_STATS_INC(bareudp->dev, rx_dropped); 147 147 goto drop; 148 148 } 149 149 skb_dst_set(skb, &tun_dst->dst); ··· 169 169 &((struct ipv6hdr *)oiph)->saddr); 170 170 } 171 171 if (err > 1) { 172 - ++bareudp->dev->stats.rx_frame_errors; 173 - ++bareudp->dev->stats.rx_errors; 172 + DEV_STATS_INC(bareudp->dev, rx_frame_errors); 173 + DEV_STATS_INC(bareudp->dev, rx_errors); 174 174 goto drop; 175 175 } 176 176 } ··· 467 467 dev_kfree_skb(skb); 468 468 469 469 if (err == -ELOOP) 470 - dev->stats.collisions++; 470 + DEV_STATS_INC(dev, collisions); 471 471 else if (err == -ENETUNREACH) 472 - dev->stats.tx_carrier_errors++; 472 + DEV_STATS_INC(dev, 
tx_carrier_errors); 473 473 474 - dev->stats.tx_errors++; 474 + DEV_STATS_INC(dev, tx_errors); 475 475 return NETDEV_TX_OK; 476 476 } 477 477
+8 -10
drivers/net/can/kvaser_pciefd.c
··· 1686 1686 const struct kvaser_pciefd_irq_mask *irq_mask = pcie->driver_data->irq_mask; 1687 1687 u32 pci_irq = ioread32(KVASER_PCIEFD_PCI_IRQ_ADDR(pcie)); 1688 1688 u32 srb_irq = 0; 1689 + u32 srb_release = 0; 1689 1690 int i; 1690 1691 1691 1692 if (!(pci_irq & irq_mask->all)) ··· 1700 1699 kvaser_pciefd_transmit_irq(pcie->can[i]); 1701 1700 } 1702 1701 1703 - if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD0) { 1704 - /* Reset DMA buffer 0, may trigger new interrupt */ 1705 - iowrite32(KVASER_PCIEFD_SRB_CMD_RDB0, 1706 - KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG); 1707 - } 1702 + if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD0) 1703 + srb_release |= KVASER_PCIEFD_SRB_CMD_RDB0; 1708 1704 1709 - if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD1) { 1710 - /* Reset DMA buffer 1, may trigger new interrupt */ 1711 - iowrite32(KVASER_PCIEFD_SRB_CMD_RDB1, 1712 - KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG); 1713 - } 1705 + if (srb_irq & KVASER_PCIEFD_SRB_IRQ_DPD1) 1706 + srb_release |= KVASER_PCIEFD_SRB_CMD_RDB1; 1707 + 1708 + if (srb_release) 1709 + iowrite32(srb_release, KVASER_PCIEFD_SRB_ADDR(pcie) + KVASER_PCIEFD_SRB_CMD_REG); 1714 1710 1715 1711 return IRQ_HANDLED; 1716 1712 }
+70 -46
drivers/net/can/m_can/m_can.c
··· 483 483 { 484 484 m_can_coalescing_disable(cdev); 485 485 m_can_write(cdev, M_CAN_ILE, 0x0); 486 - cdev->active_interrupts = 0x0; 487 486 488 487 if (!cdev->net->irq) { 489 488 dev_dbg(cdev->dev, "Stop hrtimer\n"); 490 - hrtimer_cancel(&cdev->hrtimer); 489 + hrtimer_try_to_cancel(&cdev->hrtimer); 491 490 } 492 491 } 493 492 ··· 1036 1037 return work_done; 1037 1038 } 1038 1039 1039 - static int m_can_rx_peripheral(struct net_device *dev, u32 irqstatus) 1040 - { 1041 - struct m_can_classdev *cdev = netdev_priv(dev); 1042 - int work_done; 1043 - 1044 - work_done = m_can_rx_handler(dev, NAPI_POLL_WEIGHT, irqstatus); 1045 - 1046 - /* Don't re-enable interrupts if the driver had a fatal error 1047 - * (e.g., FIFO read failure). 1048 - */ 1049 - if (work_done < 0) 1050 - m_can_disable_all_interrupts(cdev); 1051 - 1052 - return work_done; 1053 - } 1054 - 1055 1040 static int m_can_poll(struct napi_struct *napi, int quota) 1056 1041 { 1057 1042 struct net_device *dev = napi->dev; ··· 1200 1217 HRTIMER_MODE_REL); 1201 1218 } 1202 1219 1203 - static irqreturn_t m_can_isr(int irq, void *dev_id) 1220 + /* This interrupt handler is called either from the interrupt thread or a 1221 + * hrtimer. This has implications like cancelling a timer won't be possible 1222 + * blocking. 
1223 + */ 1224 + static int m_can_interrupt_handler(struct m_can_classdev *cdev) 1204 1225 { 1205 - struct net_device *dev = (struct net_device *)dev_id; 1206 - struct m_can_classdev *cdev = netdev_priv(dev); 1226 + struct net_device *dev = cdev->net; 1207 1227 u32 ir; 1228 + int ret; 1208 1229 1209 - if (pm_runtime_suspended(cdev->dev)) { 1210 - m_can_coalescing_disable(cdev); 1230 + if (pm_runtime_suspended(cdev->dev)) 1211 1231 return IRQ_NONE; 1212 - } 1213 1232 1214 1233 ir = m_can_read(cdev, M_CAN_IR); 1215 1234 m_can_coalescing_update(cdev, ir); ··· 1235 1250 m_can_disable_all_interrupts(cdev); 1236 1251 napi_schedule(&cdev->napi); 1237 1252 } else { 1238 - int pkts; 1239 - 1240 - pkts = m_can_rx_peripheral(dev, ir); 1241 - if (pkts < 0) 1242 - goto out_fail; 1253 + ret = m_can_rx_handler(dev, NAPI_POLL_WEIGHT, ir); 1254 + if (ret < 0) 1255 + return ret; 1243 1256 } 1244 1257 } 1245 1258 ··· 1255 1272 } else { 1256 1273 if (ir & (IR_TEFN | IR_TEFW)) { 1257 1274 /* New TX FIFO Element arrived */ 1258 - if (m_can_echo_tx_event(dev) != 0) 1259 - goto out_fail; 1275 + ret = m_can_echo_tx_event(dev); 1276 + if (ret != 0) 1277 + return ret; 1260 1278 } 1261 1279 } 1262 1280 ··· 1265 1281 can_rx_offload_threaded_irq_finish(&cdev->offload); 1266 1282 1267 1283 return IRQ_HANDLED; 1284 + } 1268 1285 1269 - out_fail: 1270 - m_can_disable_all_interrupts(cdev); 1271 - return IRQ_HANDLED; 1286 + static irqreturn_t m_can_isr(int irq, void *dev_id) 1287 + { 1288 + struct net_device *dev = (struct net_device *)dev_id; 1289 + struct m_can_classdev *cdev = netdev_priv(dev); 1290 + int ret; 1291 + 1292 + ret = m_can_interrupt_handler(cdev); 1293 + if (ret < 0) { 1294 + m_can_disable_all_interrupts(cdev); 1295 + return IRQ_HANDLED; 1296 + } 1297 + 1298 + return ret; 1272 1299 } 1273 1300 1274 1301 static enum hrtimer_restart m_can_coalescing_timer(struct hrtimer *timer) 1275 1302 { 1276 1303 struct m_can_classdev *cdev = container_of(timer, struct m_can_classdev, hrtimer); 1304 
+ 1305 + if (cdev->can.state == CAN_STATE_BUS_OFF || 1306 + cdev->can.state == CAN_STATE_STOPPED) 1307 + return HRTIMER_NORESTART; 1277 1308 1278 1309 irq_wake_thread(cdev->net->irq, cdev->net); 1279 1310 ··· 1541 1542 else 1542 1543 interrupts &= ~(IR_ERR_LEC_31X); 1543 1544 } 1545 + cdev->active_interrupts = 0; 1544 1546 m_can_interrupt_enable(cdev, interrupts); 1545 1547 1546 1548 /* route all interrupts to INT0 */ ··· 1991 1991 { 1992 1992 struct m_can_classdev *cdev = container_of(timer, struct 1993 1993 m_can_classdev, hrtimer); 1994 + int ret; 1994 1995 1995 - m_can_isr(0, cdev->net); 1996 + if (cdev->can.state == CAN_STATE_BUS_OFF || 1997 + cdev->can.state == CAN_STATE_STOPPED) 1998 + return HRTIMER_NORESTART; 1999 + 2000 + ret = m_can_interrupt_handler(cdev); 2001 + 2002 + /* On error or if napi is scheduled to read, stop the timer */ 2003 + if (ret < 0 || napi_is_scheduled(&cdev->napi)) 2004 + return HRTIMER_NORESTART; 1996 2005 1997 2006 hrtimer_forward_now(timer, ms_to_ktime(HRTIMER_POLL_INTERVAL_MS)); 1998 2007 ··· 2061 2052 /* start the m_can controller */ 2062 2053 err = m_can_start(dev); 2063 2054 if (err) 2064 - goto exit_irq_fail; 2055 + goto exit_start_fail; 2065 2056 2066 2057 if (!cdev->is_peripheral) 2067 2058 napi_enable(&cdev->napi); ··· 2070 2061 2071 2062 return 0; 2072 2063 2064 + exit_start_fail: 2065 + if (cdev->is_peripheral || dev->irq) 2066 + free_irq(dev->irq, dev); 2073 2067 exit_irq_fail: 2074 2068 if (cdev->is_peripheral) 2075 2069 destroy_workqueue(cdev->tx_wq); ··· 2184 2172 return 0; 2185 2173 } 2186 2174 2187 - static const struct ethtool_ops m_can_ethtool_ops = { 2175 + static const struct ethtool_ops m_can_ethtool_ops_coalescing = { 2188 2176 .supported_coalesce_params = ETHTOOL_COALESCE_RX_USECS_IRQ | 2189 2177 ETHTOOL_COALESCE_RX_MAX_FRAMES_IRQ | 2190 2178 ETHTOOL_COALESCE_TX_USECS_IRQ | ··· 2195 2183 .set_coalesce = m_can_set_coalesce, 2196 2184 }; 2197 2185 2198 - static const struct ethtool_ops 
m_can_ethtool_ops_polling = { 2186 + static const struct ethtool_ops m_can_ethtool_ops = { 2199 2187 .get_ts_info = ethtool_op_get_ts_info, 2200 2188 }; 2201 2189 2202 - static int register_m_can_dev(struct net_device *dev) 2190 + static int register_m_can_dev(struct m_can_classdev *cdev) 2203 2191 { 2192 + struct net_device *dev = cdev->net; 2193 + 2204 2194 dev->flags |= IFF_ECHO; /* we support local echo */ 2205 2195 dev->netdev_ops = &m_can_netdev_ops; 2206 - if (dev->irq) 2207 - dev->ethtool_ops = &m_can_ethtool_ops; 2196 + if (dev->irq && cdev->is_peripheral) 2197 + dev->ethtool_ops = &m_can_ethtool_ops_coalescing; 2208 2198 else 2209 - dev->ethtool_ops = &m_can_ethtool_ops_polling; 2199 + dev->ethtool_ops = &m_can_ethtool_ops; 2210 2200 2211 2201 return register_candev(dev); 2212 2202 } ··· 2394 2380 if (ret) 2395 2381 goto rx_offload_del; 2396 2382 2397 - ret = register_m_can_dev(cdev->net); 2383 + ret = register_m_can_dev(cdev); 2398 2384 if (ret) { 2399 2385 dev_err(cdev->dev, "registering %s failed (err=%d)\n", 2400 2386 cdev->net->name, ret); ··· 2441 2427 netif_device_detach(ndev); 2442 2428 2443 2429 /* leave the chip running with rx interrupt enabled if it is 2444 - * used as a wake-up source. 2430 + * used as a wake-up source. Coalescing needs to be reset then, 2431 + * the timer is cancelled here, interrupts are done in resume. 2445 2432 */ 2446 - if (cdev->pm_wake_source) 2433 + if (cdev->pm_wake_source) { 2434 + hrtimer_cancel(&cdev->hrtimer); 2447 2435 m_can_write(cdev, M_CAN_IE, IR_RF0N); 2448 - else 2436 + } else { 2449 2437 m_can_stop(ndev); 2438 + } 2450 2439 2451 2440 m_can_clk_stop(cdev); 2452 2441 } ··· 2479 2462 return ret; 2480 2463 2481 2464 if (cdev->pm_wake_source) { 2465 + /* Restore active interrupts but disable coalescing as 2466 + * we may have missed important waterlevel interrupts 2467 + * between suspend and resume. Timers are already 2468 + * stopped in suspend. Here we enable all interrupts 2469 + * again. 
2470 + */ 2471 + cdev->active_interrupts |= IR_RF0N | IR_TEFN; 2482 2472 m_can_write(cdev, M_CAN_IE, cdev->active_interrupts); 2483 2473 } else { 2484 2474 ret = m_can_start(ndev);
+1 -1
drivers/net/can/spi/mcp251x.c
··· 752 752 int ret; 753 753 754 754 /* Force wakeup interrupt to wake device, but don't execute IST */ 755 - disable_irq(spi->irq); 755 + disable_irq_nosync(spi->irq); 756 756 mcp251x_write_2regs(spi, CANINTE, CANINTE_WAKIE, CANINTF_WAKIF); 757 757 758 758 /* Wait for oscillator startup timer after wake up */
+10 -1
drivers/net/can/spi/mcp251xfd/mcp251xfd-ram.c
··· 97 97 if (ring) { 98 98 u8 num_rx_coalesce = 0, num_tx_coalesce = 0; 99 99 100 - num_rx = can_ram_rounddown_pow_of_two(config, &config->rx, 0, ring->rx_pending); 100 + /* If the ring parameters have been configured in 101 + * CAN-CC mode, but we are in CAN-FD mode now, 102 + * they might be too big. Use the default CAN-FD values 103 + * in this case. 104 + */ 105 + num_rx = ring->rx_pending; 106 + if (num_rx > layout->max_rx) 107 + num_rx = layout->default_rx; 108 + 109 + num_rx = can_ram_rounddown_pow_of_two(config, &config->rx, 0, num_rx); 101 110 102 111 /* The ethtool doc says: 103 112 * To disable coalescing, set usecs = 0 and max_frames = 1.
+28 -6
drivers/net/can/spi/mcp251xfd/mcp251xfd-ring.c
··· 290 290 const struct mcp251xfd_rx_ring *rx_ring; 291 291 u16 base = 0, ram_used; 292 292 u8 fifo_nr = 1; 293 - int i; 293 + int err = 0, i; 294 294 295 295 netdev_reset_queue(priv->ndev); 296 296 ··· 386 386 netdev_err(priv->ndev, 387 387 "Error during ring configuration, using more RAM (%u bytes) than available (%u bytes).\n", 388 388 ram_used, MCP251XFD_RAM_SIZE); 389 - return -ENOMEM; 389 + err = -ENOMEM; 390 390 } 391 391 392 - return 0; 392 + if (priv->tx_obj_num_coalesce_irq && 393 + priv->tx_obj_num_coalesce_irq * 2 != priv->tx->obj_num) { 394 + netdev_err(priv->ndev, 395 + "Error during ring configuration, number of TEF coalescing buffers (%u) must be half of TEF buffers (%u).\n", 396 + priv->tx_obj_num_coalesce_irq, priv->tx->obj_num); 397 + err = -EINVAL; 398 + } 399 + 400 + return err; 393 401 } 394 402 395 403 void mcp251xfd_ring_free(struct mcp251xfd_priv *priv) ··· 477 469 478 470 /* switching from CAN-2.0 to CAN-FD mode or vice versa */ 479 471 if (fd_mode != test_bit(MCP251XFD_FLAGS_FD_MODE, priv->flags)) { 472 + const struct ethtool_ringparam ring = { 473 + .rx_pending = priv->rx_obj_num, 474 + .tx_pending = priv->tx->obj_num, 475 + }; 476 + const struct ethtool_coalesce ec = { 477 + .rx_coalesce_usecs_irq = priv->rx_coalesce_usecs_irq, 478 + .rx_max_coalesced_frames_irq = priv->rx_obj_num_coalesce_irq, 479 + .tx_coalesce_usecs_irq = priv->tx_coalesce_usecs_irq, 480 + .tx_max_coalesced_frames_irq = priv->tx_obj_num_coalesce_irq, 481 + }; 480 482 struct can_ram_layout layout; 481 483 482 - can_ram_get_layout(&layout, &mcp251xfd_ram_config, NULL, NULL, fd_mode); 483 - priv->rx_obj_num = layout.default_rx; 484 - tx_ring->obj_num = layout.default_tx; 484 + can_ram_get_layout(&layout, &mcp251xfd_ram_config, &ring, &ec, fd_mode); 485 + 486 + priv->rx_obj_num = layout.cur_rx; 487 + priv->rx_obj_num_coalesce_irq = layout.rx_coalesce; 488 + 489 + tx_ring->obj_num = layout.cur_tx; 490 + priv->tx_obj_num_coalesce_irq = layout.tx_coalesce; 485 491 } 486 
492 487 493 if (fd_mode) {
+8 -2
drivers/net/dsa/vitesse-vsc73xx-core.c
··· 36 36 #define VSC73XX_BLOCK_ANALYZER 0x2 /* Only subblock 0 */ 37 37 #define VSC73XX_BLOCK_MII 0x3 /* Subblocks 0 and 1 */ 38 38 #define VSC73XX_BLOCK_MEMINIT 0x3 /* Only subblock 2 */ 39 - #define VSC73XX_BLOCK_CAPTURE 0x4 /* Only subblock 2 */ 39 + #define VSC73XX_BLOCK_CAPTURE 0x4 /* Subblocks 0-4, 6, 7 */ 40 40 #define VSC73XX_BLOCK_ARBITER 0x5 /* Only subblock 0 */ 41 41 #define VSC73XX_BLOCK_SYSTEM 0x7 /* Only subblock 0 */ 42 42 ··· 410 410 break; 411 411 412 412 case VSC73XX_BLOCK_MII: 413 - case VSC73XX_BLOCK_CAPTURE: 414 413 case VSC73XX_BLOCK_ARBITER: 415 414 switch (subblock) { 416 415 case 0 ... 1: 416 + return 1; 417 + } 418 + break; 419 + case VSC73XX_BLOCK_CAPTURE: 420 + switch (subblock) { 421 + case 0 ... 4: 422 + case 6 ... 7: 417 423 return 1; 418 424 } 419 425 break;
+2
drivers/net/ethernet/intel/ice/ice.h
··· 318 318 ICE_VSI_UMAC_FLTR_CHANGED, 319 319 ICE_VSI_MMAC_FLTR_CHANGED, 320 320 ICE_VSI_PROMISC_CHANGED, 321 + ICE_VSI_REBUILD_PENDING, 321 322 ICE_VSI_STATE_NBITS /* must be last */ 322 323 }; 323 324 ··· 412 411 struct ice_tx_ring **xdp_rings; /* XDP ring array */ 413 412 u16 num_xdp_txq; /* Used XDP queues */ 414 413 u8 xdp_mapping_mode; /* ICE_MAP_MODE_[CONTIG|SCATTER] */ 414 + struct mutex xdp_state_lock; 415 415 416 416 struct net_device **target_netdevs; 417 417
+3 -8
drivers/net/ethernet/intel/ice/ice_base.c
··· 190 190 } 191 191 q_vector = vsi->q_vectors[v_idx]; 192 192 193 - ice_for_each_tx_ring(tx_ring, q_vector->tx) { 194 - ice_queue_set_napi(vsi, tx_ring->q_index, NETDEV_QUEUE_TYPE_TX, 195 - NULL); 193 + ice_for_each_tx_ring(tx_ring, vsi->q_vectors[v_idx]->tx) 196 194 tx_ring->q_vector = NULL; 197 - } 198 - ice_for_each_rx_ring(rx_ring, q_vector->rx) { 199 - ice_queue_set_napi(vsi, rx_ring->q_index, NETDEV_QUEUE_TYPE_RX, 200 - NULL); 195 + 196 + ice_for_each_rx_ring(rx_ring, vsi->q_vectors[v_idx]->rx) 201 197 rx_ring->q_vector = NULL; 202 - } 203 198 204 199 /* only VSI with an associated netdev is set up with NAPI */ 205 200 if (vsi->netdev)
+71 -130
drivers/net/ethernet/intel/ice/ice_lib.c
··· 447 447 448 448 ice_vsi_free_stats(vsi); 449 449 ice_vsi_free_arrays(vsi); 450 + mutex_destroy(&vsi->xdp_state_lock); 450 451 mutex_unlock(&pf->sw_mutex); 451 452 devm_kfree(dev, vsi); 452 453 } ··· 626 625 /* prepare pf->next_vsi for next use */ 627 626 pf->next_vsi = ice_get_free_slot(pf->vsi, pf->num_alloc_vsi, 628 627 pf->next_vsi); 628 + 629 + mutex_init(&vsi->xdp_state_lock); 629 630 630 631 unlock_pf: 631 632 mutex_unlock(&pf->sw_mutex); ··· 2289 2286 2290 2287 ice_vsi_map_rings_to_vectors(vsi); 2291 2288 2292 - /* Associate q_vector rings to napi */ 2293 - ice_vsi_set_napi_queues(vsi); 2294 - 2295 2289 vsi->stat_offsets_loaded = false; 2296 2290 2297 2291 /* ICE_VSI_CTRL does not need RSS so skip RSS processing */ ··· 2426 2426 dev_err(ice_pf_to_dev(pf), "Failed to remove RDMA scheduler config for VSI %u, err %d\n", 2427 2427 vsi->vsi_num, err); 2428 2428 2429 - if (ice_is_xdp_ena_vsi(vsi)) 2429 + if (vsi->xdp_rings) 2430 2430 /* return value check can be skipped here, it always returns 2431 2431 * 0 if reset is in progress 2432 2432 */ ··· 2528 2528 for (q = 0; q < q_vector->num_ring_tx; q++) { 2529 2529 ice_write_itr(&q_vector->tx, 0); 2530 2530 wr32(hw, QINT_TQCTL(vsi->txq_map[txq]), 0); 2531 - if (ice_is_xdp_ena_vsi(vsi)) { 2531 + if (vsi->xdp_rings) { 2532 2532 u32 xdp_txq = txq + vsi->num_xdp_txq; 2533 2533 2534 2534 wr32(hw, QINT_TQCTL(vsi->txq_map[xdp_txq]), 0); ··· 2628 2628 if (!test_and_set_bit(ICE_VSI_DOWN, vsi->state)) 2629 2629 ice_down(vsi); 2630 2630 2631 + ice_vsi_clear_napi_queues(vsi); 2631 2632 ice_vsi_free_irq(vsi); 2632 2633 ice_vsi_free_tx_rings(vsi); 2633 2634 ice_vsi_free_rx_rings(vsi); ··· 2672 2671 */ 2673 2672 void ice_dis_vsi(struct ice_vsi *vsi, bool locked) 2674 2673 { 2675 - if (test_bit(ICE_VSI_DOWN, vsi->state)) 2676 - return; 2674 + bool already_down = test_bit(ICE_VSI_DOWN, vsi->state); 2677 2675 2678 2676 set_bit(ICE_VSI_NEEDS_RESTART, vsi->state); 2679 2677 ··· 2680 2680 if (netif_running(vsi->netdev)) { 2681 2681 
if (!locked) 2682 2682 rtnl_lock(); 2683 - 2684 - ice_vsi_close(vsi); 2683 + already_down = test_bit(ICE_VSI_DOWN, vsi->state); 2684 + if (!already_down) 2685 + ice_vsi_close(vsi); 2685 2686 2686 2687 if (!locked) 2687 2688 rtnl_unlock(); 2688 - } else { 2689 + } else if (!already_down) { 2689 2690 ice_vsi_close(vsi); 2690 2691 } 2691 - } else if (vsi->type == ICE_VSI_CTRL) { 2692 + } else if (vsi->type == ICE_VSI_CTRL && !already_down) { 2692 2693 ice_vsi_close(vsi); 2693 2694 } 2694 2695 } 2695 2696 2696 2697 /** 2697 - * __ice_queue_set_napi - Set the napi instance for the queue 2698 - * @dev: device to which NAPI and queue belong 2699 - * @queue_index: Index of queue 2700 - * @type: queue type as RX or TX 2701 - * @napi: NAPI context 2702 - * @locked: is the rtnl_lock already held 2703 - * 2704 - * Set the napi instance for the queue. Caller indicates the lock status. 2705 - */ 2706 - static void 2707 - __ice_queue_set_napi(struct net_device *dev, unsigned int queue_index, 2708 - enum netdev_queue_type type, struct napi_struct *napi, 2709 - bool locked) 2710 - { 2711 - if (!locked) 2712 - rtnl_lock(); 2713 - netif_queue_set_napi(dev, queue_index, type, napi); 2714 - if (!locked) 2715 - rtnl_unlock(); 2716 - } 2717 - 2718 - /** 2719 - * ice_queue_set_napi - Set the napi instance for the queue 2720 - * @vsi: VSI being configured 2721 - * @queue_index: Index of queue 2722 - * @type: queue type as RX or TX 2723 - * @napi: NAPI context 2724 - * 2725 - * Set the napi instance for the queue. The rtnl lock state is derived from the 2726 - * execution path. 
2727 - */ 2728 - void 2729 - ice_queue_set_napi(struct ice_vsi *vsi, unsigned int queue_index, 2730 - enum netdev_queue_type type, struct napi_struct *napi) 2731 - { 2732 - struct ice_pf *pf = vsi->back; 2733 - 2734 - if (!vsi->netdev) 2735 - return; 2736 - 2737 - if (current_work() == &pf->serv_task || 2738 - test_bit(ICE_PREPARED_FOR_RESET, pf->state) || 2739 - test_bit(ICE_DOWN, pf->state) || 2740 - test_bit(ICE_SUSPENDED, pf->state)) 2741 - __ice_queue_set_napi(vsi->netdev, queue_index, type, napi, 2742 - false); 2743 - else 2744 - __ice_queue_set_napi(vsi->netdev, queue_index, type, napi, 2745 - true); 2746 - } 2747 - 2748 - /** 2749 - * __ice_q_vector_set_napi_queues - Map queue[s] associated with the napi 2750 - * @q_vector: q_vector pointer 2751 - * @locked: is the rtnl_lock already held 2752 - * 2753 - * Associate the q_vector napi with all the queue[s] on the vector. 2754 - * Caller indicates the lock status. 2755 - */ 2756 - void __ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector, bool locked) 2757 - { 2758 - struct ice_rx_ring *rx_ring; 2759 - struct ice_tx_ring *tx_ring; 2760 - 2761 - ice_for_each_rx_ring(rx_ring, q_vector->rx) 2762 - __ice_queue_set_napi(q_vector->vsi->netdev, rx_ring->q_index, 2763 - NETDEV_QUEUE_TYPE_RX, &q_vector->napi, 2764 - locked); 2765 - 2766 - ice_for_each_tx_ring(tx_ring, q_vector->tx) 2767 - __ice_queue_set_napi(q_vector->vsi->netdev, tx_ring->q_index, 2768 - NETDEV_QUEUE_TYPE_TX, &q_vector->napi, 2769 - locked); 2770 - /* Also set the interrupt number for the NAPI */ 2771 - netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq); 2772 - } 2773 - 2774 - /** 2775 - * ice_q_vector_set_napi_queues - Map queue[s] associated with the napi 2776 - * @q_vector: q_vector pointer 2777 - * 2778 - * Associate the q_vector napi with all the queue[s] on the vector 2779 - */ 2780 - void ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector) 2781 - { 2782 - struct ice_rx_ring *rx_ring; 2783 - struct ice_tx_ring *tx_ring; 
2784 - 2785 - ice_for_each_rx_ring(rx_ring, q_vector->rx) 2786 - ice_queue_set_napi(q_vector->vsi, rx_ring->q_index, 2787 - NETDEV_QUEUE_TYPE_RX, &q_vector->napi); 2788 - 2789 - ice_for_each_tx_ring(tx_ring, q_vector->tx) 2790 - ice_queue_set_napi(q_vector->vsi, tx_ring->q_index, 2791 - NETDEV_QUEUE_TYPE_TX, &q_vector->napi); 2792 - /* Also set the interrupt number for the NAPI */ 2793 - netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq); 2794 - } 2795 - 2796 - /** 2797 - * ice_vsi_set_napi_queues 2698 + * ice_vsi_set_napi_queues - associate netdev queues with napi 2798 2699 * @vsi: VSI pointer 2799 2700 * 2800 - * Associate queue[s] with napi for all vectors 2701 + * Associate queue[s] with napi for all vectors. 2702 + * The caller must hold rtnl_lock. 2801 2703 */ 2802 2704 void ice_vsi_set_napi_queues(struct ice_vsi *vsi) 2803 2705 { 2804 - int i; 2706 + struct net_device *netdev = vsi->netdev; 2707 + int q_idx, v_idx; 2805 2708 2806 - if (!vsi->netdev) 2709 + if (!netdev) 2807 2710 return; 2808 2711 2809 - ice_for_each_q_vector(vsi, i) 2810 - ice_q_vector_set_napi_queues(vsi->q_vectors[i]); 2712 + ice_for_each_rxq(vsi, q_idx) 2713 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, 2714 + &vsi->rx_rings[q_idx]->q_vector->napi); 2715 + 2716 + ice_for_each_txq(vsi, q_idx) 2717 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX, 2718 + &vsi->tx_rings[q_idx]->q_vector->napi); 2719 + /* Also set the interrupt number for the NAPI */ 2720 + ice_for_each_q_vector(vsi, v_idx) { 2721 + struct ice_q_vector *q_vector = vsi->q_vectors[v_idx]; 2722 + 2723 + netif_napi_set_irq(&q_vector->napi, q_vector->irq.virq); 2724 + } 2725 + } 2726 + 2727 + /** 2728 + * ice_vsi_clear_napi_queues - dissociate netdev queues from napi 2729 + * @vsi: VSI pointer 2730 + * 2731 + * Clear the association between all VSI queues queue[s] and napi. 2732 + * The caller must hold rtnl_lock. 
2733 + */ 2734 + void ice_vsi_clear_napi_queues(struct ice_vsi *vsi) 2735 + { 2736 + struct net_device *netdev = vsi->netdev; 2737 + int q_idx; 2738 + 2739 + if (!netdev) 2740 + return; 2741 + 2742 + ice_for_each_txq(vsi, q_idx) 2743 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_TX, NULL); 2744 + 2745 + ice_for_each_rxq(vsi, q_idx) 2746 + netif_queue_set_napi(netdev, q_idx, NETDEV_QUEUE_TYPE_RX, NULL); 2811 2747 } 2812 2748 2813 2749 /** ··· 2975 3039 if (WARN_ON(vsi->type == ICE_VSI_VF && !vsi->vf)) 2976 3040 return -EINVAL; 2977 3041 3042 + mutex_lock(&vsi->xdp_state_lock); 3043 + 2978 3044 ret = ice_vsi_realloc_stat_arrays(vsi); 2979 3045 if (ret) 2980 - goto err_vsi_cfg; 3046 + goto unlock; 2981 3047 2982 3048 ice_vsi_decfg(vsi); 2983 3049 ret = ice_vsi_cfg_def(vsi); 2984 3050 if (ret) 2985 - goto err_vsi_cfg; 3051 + goto unlock; 2986 3052 2987 3053 coalesce = kcalloc(vsi->num_q_vectors, 2988 3054 sizeof(struct ice_coalesce_stored), GFP_KERNEL); 2989 - if (!coalesce) 2990 - return -ENOMEM; 3055 + if (!coalesce) { 3056 + ret = -ENOMEM; 3057 + goto decfg; 3058 + } 2991 3059 2992 3060 prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce); 2993 3061 ··· 2999 3059 if (ret) { 3000 3060 if (vsi_flags & ICE_VSI_FLAG_INIT) { 3001 3061 ret = -EIO; 3002 - goto err_vsi_cfg_tc_lan; 3062 + goto free_coalesce; 3003 3063 } 3004 3064 3005 - kfree(coalesce); 3006 - return ice_schedule_reset(pf, ICE_RESET_PFR); 3065 + ret = ice_schedule_reset(pf, ICE_RESET_PFR); 3066 + goto free_coalesce; 3007 3067 } 3008 3068 3009 3069 ice_vsi_rebuild_set_coalesce(vsi, coalesce, prev_num_q_vectors); 3010 - kfree(coalesce); 3070 + clear_bit(ICE_VSI_REBUILD_PENDING, vsi->state); 3011 3071 3012 - return 0; 3013 - 3014 - err_vsi_cfg_tc_lan: 3015 - ice_vsi_decfg(vsi); 3072 + free_coalesce: 3016 3073 kfree(coalesce); 3017 - err_vsi_cfg: 3074 + decfg: 3075 + if (ret) 3076 + ice_vsi_decfg(vsi); 3077 + unlock: 3078 + mutex_unlock(&vsi->xdp_state_lock); 3018 3079 return ret; 3019 3080 
} 3020 3081
+2 -8
drivers/net/ethernet/intel/ice/ice_lib.h
···
44 44   struct ice_vsi *
45 45   ice_vsi_setup(struct ice_pf *pf, struct ice_vsi_cfg_params *params);
46 46   
47 -    void
48 -    ice_queue_set_napi(struct ice_vsi *vsi, unsigned int queue_index,
49 -                       enum netdev_queue_type type, struct napi_struct *napi);
50 -    
51 -    void __ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector, bool locked);
52 -    
53 -    void ice_q_vector_set_napi_queues(struct ice_q_vector *q_vector);
54 -    
55 47   void ice_vsi_set_napi_queues(struct ice_vsi *vsi);
48 +    
49 +    void ice_vsi_clear_napi_queues(struct ice_vsi *vsi);
56 50   
57 51   int ice_vsi_release(struct ice_vsi *vsi);
58 52   
+41 -13
drivers/net/ethernet/intel/ice/ice_main.c
··· 608 608 memset(&vsi->mqprio_qopt, 0, sizeof(vsi->mqprio_qopt)); 609 609 } 610 610 } 611 + 612 + if (vsi->netdev) 613 + netif_device_detach(vsi->netdev); 611 614 skip: 612 615 613 616 /* clear SW filtering DB */ 614 617 ice_clear_hw_tbls(hw); 615 618 /* disable the VSIs and their queues that are not already DOWN */ 619 + set_bit(ICE_VSI_REBUILD_PENDING, ice_get_main_vsi(pf)->state); 616 620 ice_pf_dis_all_vsi(pf, false); 617 621 618 622 if (test_bit(ICE_FLAG_PTP_SUPPORTED, pf->flags)) ··· 3005 3001 struct netlink_ext_ack *extack) 3006 3002 { 3007 3003 unsigned int frame_size = vsi->netdev->mtu + ICE_ETH_PKT_HDR_PAD; 3008 - bool if_running = netif_running(vsi->netdev); 3009 3004 int ret = 0, xdp_ring_err = 0; 3005 + bool if_running; 3010 3006 3011 3007 if (prog && !prog->aux->xdp_has_frags) { 3012 3008 if (frame_size > ice_max_xdp_frame_size(vsi)) { ··· 3017 3013 } 3018 3014 3019 3015 /* hot swap progs and avoid toggling link */ 3020 - if (ice_is_xdp_ena_vsi(vsi) == !!prog) { 3016 + if (ice_is_xdp_ena_vsi(vsi) == !!prog || 3017 + test_bit(ICE_VSI_REBUILD_PENDING, vsi->state)) { 3021 3018 ice_vsi_assign_bpf_prog(vsi, prog); 3022 3019 return 0; 3023 3020 } 3024 3021 3022 + if_running = netif_running(vsi->netdev) && 3023 + !test_and_set_bit(ICE_VSI_DOWN, vsi->state); 3024 + 3025 3025 /* need to stop netdev while setting up the program for Rx rings */ 3026 - if (if_running && !test_and_set_bit(ICE_VSI_DOWN, vsi->state)) { 3026 + if (if_running) { 3027 3027 ret = ice_down(vsi); 3028 3028 if (ret) { 3029 3029 NL_SET_ERR_MSG_MOD(extack, "Preparing device for XDP attach failed"); ··· 3093 3085 { 3094 3086 struct ice_netdev_priv *np = netdev_priv(dev); 3095 3087 struct ice_vsi *vsi = np->vsi; 3088 + int ret; 3096 3089 3097 3090 if (vsi->type != ICE_VSI_PF) { 3098 3091 NL_SET_ERR_MSG_MOD(xdp->extack, "XDP can be loaded only on PF VSI"); 3099 3092 return -EINVAL; 3100 3093 } 3101 3094 3095 + mutex_lock(&vsi->xdp_state_lock); 3096 + 3102 3097 switch (xdp->command) { 3103 
3098 case XDP_SETUP_PROG: 3104 - return ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack); 3099 + ret = ice_xdp_setup_prog(vsi, xdp->prog, xdp->extack); 3100 + break; 3105 3101 case XDP_SETUP_XSK_POOL: 3106 - return ice_xsk_pool_setup(vsi, xdp->xsk.pool, 3107 - xdp->xsk.queue_id); 3102 + ret = ice_xsk_pool_setup(vsi, xdp->xsk.pool, xdp->xsk.queue_id); 3103 + break; 3108 3104 default: 3109 - return -EINVAL; 3105 + ret = -EINVAL; 3110 3106 } 3107 + 3108 + mutex_unlock(&vsi->xdp_state_lock); 3109 + return ret; 3111 3110 } 3112 3111 3113 3112 /** ··· 3570 3555 if (!vsi->netdev) 3571 3556 return; 3572 3557 3573 - ice_for_each_q_vector(vsi, v_idx) { 3558 + ice_for_each_q_vector(vsi, v_idx) 3574 3559 netif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx]->napi, 3575 3560 ice_napi_poll); 3576 - __ice_q_vector_set_napi_queues(vsi->q_vectors[v_idx], false); 3577 - } 3578 3561 } 3579 3562 3580 3563 /** ··· 5550 5537 if (ret) 5551 5538 goto err_reinit; 5552 5539 ice_vsi_map_rings_to_vectors(pf->vsi[v]); 5540 + rtnl_lock(); 5553 5541 ice_vsi_set_napi_queues(pf->vsi[v]); 5542 + rtnl_unlock(); 5554 5543 } 5555 5544 5556 5545 ret = ice_req_irq_msix_misc(pf); ··· 5566 5551 5567 5552 err_reinit: 5568 5553 while (v--) 5569 - if (pf->vsi[v]) 5554 + if (pf->vsi[v]) { 5555 + rtnl_lock(); 5556 + ice_vsi_clear_napi_queues(pf->vsi[v]); 5557 + rtnl_unlock(); 5570 5558 ice_vsi_free_q_vectors(pf->vsi[v]); 5559 + } 5571 5560 5572 5561 return ret; 5573 5562 } ··· 5636 5617 ice_for_each_vsi(pf, v) { 5637 5618 if (!pf->vsi[v]) 5638 5619 continue; 5620 + rtnl_lock(); 5621 + ice_vsi_clear_napi_queues(pf->vsi[v]); 5622 + rtnl_unlock(); 5639 5623 ice_vsi_free_q_vectors(pf->vsi[v]); 5640 5624 } 5641 5625 ice_clear_interrupt_scheme(pf); ··· 7252 7230 if (tx_err) 7253 7231 netdev_err(vsi->netdev, "Failed stop Tx rings, VSI %d error %d\n", 7254 7232 vsi->vsi_num, tx_err); 7255 - if (!tx_err && ice_is_xdp_ena_vsi(vsi)) { 7233 + if (!tx_err && vsi->xdp_rings) { 7256 7234 tx_err = ice_vsi_stop_xdp_tx_rings(vsi); 
7257 7235 if (tx_err) 7258 7236 netdev_err(vsi->netdev, "Failed stop XDP rings, VSI %d error %d\n", ··· 7269 7247 ice_for_each_txq(vsi, i) 7270 7248 ice_clean_tx_ring(vsi->tx_rings[i]); 7271 7249 7272 - if (ice_is_xdp_ena_vsi(vsi)) 7250 + if (vsi->xdp_rings) 7273 7251 ice_for_each_xdp_txq(vsi, i) 7274 7252 ice_clean_tx_ring(vsi->xdp_rings[i]); 7275 7253 ··· 7474 7452 err = netif_set_real_num_rx_queues(vsi->netdev, vsi->num_rxq); 7475 7453 if (err) 7476 7454 goto err_set_qs; 7455 + 7456 + ice_vsi_set_napi_queues(vsi); 7477 7457 } 7478 7458 7479 7459 err = ice_up_complete(vsi); ··· 7613 7589 */ 7614 7590 static void ice_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type) 7615 7591 { 7592 + struct ice_vsi *vsi = ice_get_main_vsi(pf); 7616 7593 struct device *dev = ice_pf_to_dev(pf); 7617 7594 struct ice_hw *hw = &pf->hw; 7618 7595 bool dvm; ··· 7755 7730 7756 7731 ice_rebuild_arfs(pf); 7757 7732 } 7733 + 7734 + if (vsi && vsi->netdev) 7735 + netif_device_attach(vsi->netdev); 7758 7736 7759 7737 ice_update_pf_netdev_link(pf); 7760 7738
+5 -13
drivers/net/ethernet/intel/ice/ice_xsk.c
··· 39 39 sizeof(vsi_stat->rx_ring_stats[q_idx]->rx_stats)); 40 40 memset(&vsi_stat->tx_ring_stats[q_idx]->stats, 0, 41 41 sizeof(vsi_stat->tx_ring_stats[q_idx]->stats)); 42 - if (ice_is_xdp_ena_vsi(vsi)) 42 + if (vsi->xdp_rings) 43 43 memset(&vsi->xdp_rings[q_idx]->ring_stats->stats, 0, 44 44 sizeof(vsi->xdp_rings[q_idx]->ring_stats->stats)); 45 45 } ··· 52 52 static void ice_qp_clean_rings(struct ice_vsi *vsi, u16 q_idx) 53 53 { 54 54 ice_clean_tx_ring(vsi->tx_rings[q_idx]); 55 - if (ice_is_xdp_ena_vsi(vsi)) 55 + if (vsi->xdp_rings) 56 56 ice_clean_tx_ring(vsi->xdp_rings[q_idx]); 57 57 ice_clean_rx_ring(vsi->rx_rings[q_idx]); 58 58 } ··· 165 165 struct ice_q_vector *q_vector; 166 166 struct ice_tx_ring *tx_ring; 167 167 struct ice_rx_ring *rx_ring; 168 - int timeout = 50; 169 168 int fail = 0; 170 169 int err; 171 170 ··· 174 175 tx_ring = vsi->tx_rings[q_idx]; 175 176 rx_ring = vsi->rx_rings[q_idx]; 176 177 q_vector = rx_ring->q_vector; 177 - 178 - while (test_and_set_bit(ICE_CFG_BUSY, vsi->state)) { 179 - timeout--; 180 - if (!timeout) 181 - return -EBUSY; 182 - usleep_range(1000, 2000); 183 - } 184 178 185 179 synchronize_net(); 186 180 netif_carrier_off(vsi->netdev); ··· 186 194 err = ice_vsi_stop_tx_ring(vsi, ICE_NO_RESET, 0, tx_ring, &txq_meta); 187 195 if (!fail) 188 196 fail = err; 189 - if (ice_is_xdp_ena_vsi(vsi)) { 197 + if (vsi->xdp_rings) { 190 198 struct ice_tx_ring *xdp_ring = vsi->xdp_rings[q_idx]; 191 199 192 200 memset(&txq_meta, 0, sizeof(txq_meta)); ··· 253 261 netif_tx_start_queue(netdev_get_tx_queue(vsi->netdev, q_idx)); 254 262 netif_carrier_on(vsi->netdev); 255 263 } 256 - clear_bit(ICE_CFG_BUSY, vsi->state); 257 264 258 265 return fail; 259 266 } ··· 381 390 goto failure; 382 391 } 383 392 384 - if_running = netif_running(vsi->netdev) && ice_is_xdp_ena_vsi(vsi); 393 + if_running = !test_bit(ICE_VSI_DOWN, vsi->state) && 394 + ice_is_xdp_ena_vsi(vsi); 385 395 386 396 if (if_running) { 387 397 struct ice_rx_ring *rx_ring = vsi->rx_rings[qid];
+10
drivers/net/ethernet/intel/igb/igb_main.c
···
6960 6960   
6961 6961   static void igb_tsync_interrupt(struct igb_adapter *adapter)
6962 6962   {
6963 +            const u32 mask = (TSINTR_SYS_WRAP | E1000_TSICR_TXTS |
6964 +                              TSINTR_TT0 | TSINTR_TT1 |
6965 +                              TSINTR_AUTT0 | TSINTR_AUTT1);
6963 6966       struct e1000_hw *hw = &adapter->hw;
6964 6967       u32 tsicr = rd32(E1000_TSICR);
6965 6968       struct ptp_clock_event event;
6969 +    
6970 +            if (hw->mac.type == e1000_82580) {
6971 +                    /* 82580 has a hardware bug that requires an explicit
6972 +                     * write to clear the TimeSync interrupt cause.
6973 +                     */
6974 +                    wr32(E1000_TSICR, tsicr & mask);
6975 +            }
6966 6976   
6967 6977       if (tsicr & TSINTR_SYS_WRAP) {
6968 6978           event.type = PTP_CLOCK_PPS;
+1
drivers/net/ethernet/intel/igc/igc_main.c
···
7413 7413       rtnl_lock();
7414 7414       if (netif_running(netdev)) {
7415 7415           if (igc_open(netdev)) {
7416 +                        rtnl_unlock();
7416 7417               netdev_err(netdev, "igc_open failed after reset\n");
7417 7418               return;
7418 7419           }
+2 -12
drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
···
1442 1442       vcap_enable_lookups(&test_vctrl, &test_netdev, 0, 0,
1443 1443                           rule->cookie, false);
1444 1444   
1445 -            vcap_free_rule(rule);
1446 -    
1447 -            /* Check that the rule has been freed: tricky to access since this
1448 -             * memory should not be accessible anymore
1449 -             */
1450 -            KUNIT_EXPECT_PTR_NE(test, NULL, rule);
1451 -            ret = list_empty(&rule->keyfields);
1452 -            KUNIT_EXPECT_EQ(test, true, ret);
1453 -            ret = list_empty(&rule->actionfields);
1454 -            KUNIT_EXPECT_EQ(test, true, ret);
1455 -    
1456 -            vcap_del_rule(&test_vctrl, &test_netdev, id);
1445 +            ret = vcap_del_rule(&test_vctrl, &test_netdev, id);
1446 +            KUNIT_EXPECT_EQ(test, 0, ret);
1457 1447   }
1458 1448   
1459 1449   static void vcap_api_set_rule_counter_test(struct kunit *test)
+13 -9
drivers/net/ethernet/microsoft/mana/mana_en.c
··· 1872 1872 1873 1873 for (i = 0; i < apc->num_queues; i++) { 1874 1874 napi = &apc->tx_qp[i].tx_cq.napi; 1875 - napi_synchronize(napi); 1876 - napi_disable(napi); 1877 - netif_napi_del(napi); 1878 - 1875 + if (apc->tx_qp[i].txq.napi_initialized) { 1876 + napi_synchronize(napi); 1877 + napi_disable(napi); 1878 + netif_napi_del(napi); 1879 + apc->tx_qp[i].txq.napi_initialized = false; 1880 + } 1879 1881 mana_destroy_wq_obj(apc, GDMA_SQ, apc->tx_qp[i].tx_object); 1880 1882 1881 1883 mana_deinit_cq(apc, &apc->tx_qp[i].tx_cq); ··· 1933 1931 txq->ndev = net; 1934 1932 txq->net_txq = netdev_get_tx_queue(net, i); 1935 1933 txq->vp_offset = apc->tx_vp_offset; 1934 + txq->napi_initialized = false; 1936 1935 skb_queue_head_init(&txq->pending_skbs); 1937 1936 1938 1937 memset(&spec, 0, sizeof(spec)); ··· 2000 1997 2001 1998 netif_napi_add_tx(net, &cq->napi, mana_poll); 2002 1999 napi_enable(&cq->napi); 2000 + txq->napi_initialized = true; 2003 2001 2004 2002 mana_gd_ring_cq(cq->gdma_cq, SET_ARM_BIT); 2005 2003 } ··· 2012 2008 } 2013 2009 2014 2010 static void mana_destroy_rxq(struct mana_port_context *apc, 2015 - struct mana_rxq *rxq, bool validate_state) 2011 + struct mana_rxq *rxq, bool napi_initialized) 2016 2012 2017 2013 { 2018 2014 struct gdma_context *gc = apc->ac->gdma_dev->gdma_context; ··· 2027 2023 2028 2024 napi = &rxq->rx_cq.napi; 2029 2025 2030 - if (validate_state) 2026 + if (napi_initialized) { 2031 2027 napi_synchronize(napi); 2032 2028 2033 - napi_disable(napi); 2029 + napi_disable(napi); 2034 2030 2031 + netif_napi_del(napi); 2032 + } 2035 2033 xdp_rxq_info_unreg(&rxq->xdp_rxq); 2036 - 2037 - netif_napi_del(napi); 2038 2034 2039 2035 mana_destroy_wq_obj(apc, GDMA_RQ, rxq->rxobj); 2040 2036
+49 -33
drivers/net/ethernet/ti/am65-cpsw-nuss.c
··· 156 156 #define AM65_CPSW_CPPI_TX_PKT_TYPE 0x7 157 157 158 158 /* XDP */ 159 - #define AM65_CPSW_XDP_CONSUMED 2 160 - #define AM65_CPSW_XDP_REDIRECT 1 159 + #define AM65_CPSW_XDP_CONSUMED BIT(1) 160 + #define AM65_CPSW_XDP_REDIRECT BIT(0) 161 161 #define AM65_CPSW_XDP_PASS 0 162 162 163 163 /* Include headroom compatible with both skb and xdpf */ 164 - #define AM65_CPSW_HEADROOM (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN) 164 + #define AM65_CPSW_HEADROOM_NA (max(NET_SKB_PAD, XDP_PACKET_HEADROOM) + NET_IP_ALIGN) 165 + #define AM65_CPSW_HEADROOM ALIGN(AM65_CPSW_HEADROOM_NA, sizeof(long)) 165 166 166 167 static void am65_cpsw_port_set_sl_mac(struct am65_cpsw_port *slave, 167 168 const u8 *dev_addr) ··· 934 933 host_desc = k3_cppi_desc_pool_alloc(tx_chn->desc_pool); 935 934 if (unlikely(!host_desc)) { 936 935 ndev->stats.tx_dropped++; 937 - return -ENOMEM; 936 + return AM65_CPSW_XDP_CONSUMED; /* drop */ 938 937 } 939 938 940 939 am65_cpsw_nuss_set_buf_type(tx_chn, host_desc, buf_type); ··· 943 942 pkt_len, DMA_TO_DEVICE); 944 943 if (unlikely(dma_mapping_error(tx_chn->dma_dev, dma_buf))) { 945 944 ndev->stats.tx_dropped++; 946 - ret = -ENOMEM; 945 + ret = AM65_CPSW_XDP_CONSUMED; /* drop */ 947 946 goto pool_free; 948 947 } 949 948 ··· 978 977 /* Inform BQL */ 979 978 netdev_tx_completed_queue(netif_txq, 1, pkt_len); 980 979 ndev->stats.tx_errors++; 980 + ret = AM65_CPSW_XDP_CONSUMED; /* drop */ 981 981 goto dma_unmap; 982 982 } 983 983 ··· 998 996 int desc_idx, int cpu, int *len) 999 997 { 1000 998 struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns; 999 + struct am65_cpsw_ndev_priv *ndev_priv; 1001 1000 struct net_device *ndev = port->ndev; 1001 + struct am65_cpsw_ndev_stats *stats; 1002 1002 int ret = AM65_CPSW_XDP_CONSUMED; 1003 1003 struct am65_cpsw_tx_chn *tx_chn; 1004 1004 struct netdev_queue *netif_txq; ··· 1008 1004 struct bpf_prog *prog; 1009 1005 struct page *page; 1010 1006 u32 act; 1007 + int err; 1011 1008 1012 1009 prog = 
READ_ONCE(port->xdp_prog); 1013 1010 if (!prog) ··· 1017 1012 act = bpf_prog_run_xdp(prog, xdp); 1018 1013 /* XDP prog might have changed packet data and boundaries */ 1019 1014 *len = xdp->data_end - xdp->data; 1015 + 1016 + ndev_priv = netdev_priv(ndev); 1017 + stats = this_cpu_ptr(ndev_priv->stats); 1020 1018 1021 1019 switch (act) { 1022 1020 case XDP_PASS: ··· 1031 1023 1032 1024 xdpf = xdp_convert_buff_to_frame(xdp); 1033 1025 if (unlikely(!xdpf)) 1034 - break; 1026 + goto drop; 1035 1027 1036 1028 __netif_tx_lock(netif_txq, cpu); 1037 - ret = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf, 1029 + err = am65_cpsw_xdp_tx_frame(ndev, tx_chn, xdpf, 1038 1030 AM65_CPSW_TX_BUF_TYPE_XDP_TX); 1039 1031 __netif_tx_unlock(netif_txq); 1040 - if (ret) 1041 - break; 1032 + if (err) 1033 + goto drop; 1042 1034 1043 - ndev->stats.rx_bytes += *len; 1044 - ndev->stats.rx_packets++; 1035 + u64_stats_update_begin(&stats->syncp); 1036 + stats->rx_bytes += *len; 1037 + stats->rx_packets++; 1038 + u64_stats_update_end(&stats->syncp); 1045 1039 ret = AM65_CPSW_XDP_CONSUMED; 1046 1040 goto out; 1047 1041 case XDP_REDIRECT: 1048 1042 if (unlikely(xdp_do_redirect(ndev, xdp, prog))) 1049 - break; 1043 + goto drop; 1050 1044 1051 - ndev->stats.rx_bytes += *len; 1052 - ndev->stats.rx_packets++; 1045 + u64_stats_update_begin(&stats->syncp); 1046 + stats->rx_bytes += *len; 1047 + stats->rx_packets++; 1048 + u64_stats_update_end(&stats->syncp); 1053 1049 ret = AM65_CPSW_XDP_REDIRECT; 1054 1050 goto out; 1055 1051 default: 1056 1052 bpf_warn_invalid_xdp_action(ndev, prog, act); 1057 1053 fallthrough; 1058 1054 case XDP_ABORTED: 1055 + drop: 1059 1056 trace_xdp_exception(ndev, prog, act); 1060 1057 fallthrough; 1061 1058 case XDP_DROP: ··· 1069 1056 1070 1057 page = virt_to_head_page(xdp->data); 1071 1058 am65_cpsw_put_page(rx_chn, page, true, desc_idx); 1072 - 1073 1059 out: 1074 1060 return ret; 1075 1061 } ··· 1107 1095 } 1108 1096 1109 1097 static int am65_cpsw_nuss_rx_packets(struct 
am65_cpsw_common *common, 1110 - u32 flow_idx, int cpu) 1098 + u32 flow_idx, int cpu, int *xdp_state) 1111 1099 { 1112 1100 struct am65_cpsw_rx_chn *rx_chn = &common->rx_chns; 1113 1101 u32 buf_dma_len, pkt_len, port_id = 0, csum_info; ··· 1126 1114 void **swdata; 1127 1115 u32 *psdata; 1128 1116 1117 + *xdp_state = AM65_CPSW_XDP_PASS; 1129 1118 ret = k3_udma_glue_pop_rx_chn(rx_chn->rx_chn, flow_idx, &desc_dma); 1130 1119 if (ret) { 1131 1120 if (ret != -ENODATA) ··· 1174 1161 } 1175 1162 1176 1163 if (port->xdp_prog) { 1177 - xdp_init_buff(&xdp, AM65_CPSW_MAX_PACKET_SIZE, &port->xdp_rxq); 1178 - 1179 - xdp_prepare_buff(&xdp, page_addr, skb_headroom(skb), 1164 + xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq); 1165 + xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM, 1180 1166 pkt_len, false); 1181 - 1182 - ret = am65_cpsw_run_xdp(common, port, &xdp, desc_idx, 1183 - cpu, &pkt_len); 1184 - if (ret != AM65_CPSW_XDP_PASS) 1185 - return ret; 1167 + *xdp_state = am65_cpsw_run_xdp(common, port, &xdp, desc_idx, 1168 + cpu, &pkt_len); 1169 + if (*xdp_state != AM65_CPSW_XDP_PASS) 1170 + goto allocate; 1186 1171 1187 1172 /* Compute additional headroom to be reserved */ 1188 1173 headroom = (xdp.data - xdp.data_hard_start) - skb_headroom(skb); ··· 1204 1193 stats->rx_bytes += pkt_len; 1205 1194 u64_stats_update_end(&stats->syncp); 1206 1195 1196 + allocate: 1207 1197 new_page = page_pool_dev_alloc_pages(rx_chn->page_pool); 1208 - if (unlikely(!new_page)) 1198 + if (unlikely(!new_page)) { 1199 + dev_err(dev, "page alloc failed\n"); 1209 1200 return -ENOMEM; 1201 + } 1202 + 1210 1203 rx_chn->pages[desc_idx] = new_page; 1211 1204 1212 1205 if (netif_dormant(ndev)) { ··· 1244 1229 struct am65_cpsw_common *common = am65_cpsw_napi_to_common(napi_rx); 1245 1230 int flow = AM65_CPSW_MAX_RX_FLOWS; 1246 1231 int cpu = smp_processor_id(); 1247 - bool xdp_redirect = false; 1232 + int xdp_state_or = 0; 1248 1233 int cur_budget, ret; 1234 + int xdp_state; 1249 1235 int num_rx = 0; 1250 
1236 1251 1237 /* process every flow */ ··· 1254 1238 cur_budget = budget - num_rx; 1255 1239 1256 1240 while (cur_budget--) { 1257 - ret = am65_cpsw_nuss_rx_packets(common, flow, cpu); 1258 - if (ret) { 1259 - if (ret == AM65_CPSW_XDP_REDIRECT) 1260 - xdp_redirect = true; 1241 + ret = am65_cpsw_nuss_rx_packets(common, flow, cpu, 1242 + &xdp_state); 1243 + xdp_state_or |= xdp_state; 1244 + if (ret) 1261 1245 break; 1262 - } 1263 1246 num_rx++; 1264 1247 } 1265 1248 ··· 1266 1251 break; 1267 1252 } 1268 1253 1269 - if (xdp_redirect) 1254 + if (xdp_state_or & AM65_CPSW_XDP_REDIRECT) 1270 1255 xdp_do_flush(); 1271 1256 1272 1257 dev_dbg(common->dev, "%s num_rx:%d %d\n", __func__, num_rx, budget); ··· 1933 1918 static int am65_cpsw_ndo_xdp_xmit(struct net_device *ndev, int n, 1934 1919 struct xdp_frame **frames, u32 flags) 1935 1920 { 1921 + struct am65_cpsw_common *common = am65_ndev_to_common(ndev); 1936 1922 struct am65_cpsw_tx_chn *tx_chn; 1937 1923 struct netdev_queue *netif_txq; 1938 1924 int cpu = smp_processor_id(); 1939 1925 int i, nxmit = 0; 1940 1926 1941 - tx_chn = &am65_ndev_to_common(ndev)->tx_chns[cpu % AM65_CPSW_MAX_TX_QUEUES]; 1927 + tx_chn = &common->tx_chns[cpu % common->tx_ch_num]; 1942 1928 netif_txq = netdev_get_tx_queue(ndev, tx_chn->id); 1943 1929 1944 1930 __netif_tx_lock(netif_txq, cpu);
+3
drivers/net/ethernet/xilinx/xilinx_axienet.h
···
436 436    * @tx_bytes:  TX byte count for statistics
437 437    * @tx_stat_sync: Synchronization object for TX stats
438 438    * @dma_err_task: Work structure to process Axi DMA errors
439 +     * @stopping:  Set when @dma_err_task shouldn't do anything because we are
440 +     *             about to stop the device.
439 441    * @tx_irq:    Axidma TX IRQ number
440 442    * @rx_irq:    Axidma RX IRQ number
441 443    * @eth_irq:   Ethernet core IRQ number
···
509 507       struct u64_stats_sync tx_stat_sync;
510 508   
511 509       struct work_struct dma_err_task;
510 +          bool stopping;
512 511   
513 512       int tx_irq;
514 513       int rx_irq;
+8
drivers/net/ethernet/xilinx/xilinx_axienet_main.c
···
1460 1460       struct axienet_local *lp = netdev_priv(ndev);
1461 1461   
1462 1462       /* Enable worker thread for Axi DMA error handling */
1463 +            lp->stopping = false;
1463 1464       INIT_WORK(&lp->dma_err_task, axienet_dma_err_handler);
1464 1465   
1465 1466       napi_enable(&lp->napi_rx);
···
1581 1580       dev_dbg(&ndev->dev, "axienet_close()\n");
1582 1581   
1583 1582       if (!lp->use_dmaengine) {
1583 +                WRITE_ONCE(lp->stopping, true);
1584 +                flush_work(&lp->dma_err_task);
1585 +    
1584 1586           napi_disable(&lp->napi_tx);
1585 1587           napi_disable(&lp->napi_rx);
1586 1588       }
···
2157 2153       struct axienet_local *lp = container_of(work, struct axienet_local,
2158 2154                                               dma_err_task);
2159 2155       struct net_device *ndev = lp->ndev;
2156 +    
2157 +        /* Don't bother if we are going to stop anyway */
2158 +        if (READ_ONCE(lp->stopping))
2159 +                return;
2160 2160   
2161 2161       napi_disable(&lp->napi_tx);
2162 2162       napi_disable(&lp->napi_rx);
+5
drivers/net/mctp/Kconfig
···
21 21      Say y here if you need to connect to MCTP endpoints over serial. To
22 22      compile as a module, use m; the module will be called mctp-serial.
23 23   
24 +    config MCTP_SERIAL_TEST
25 +        bool "MCTP serial tests" if !KUNIT_ALL_TESTS
26 +        depends on MCTP_SERIAL=y && KUNIT=y
27 +        default KUNIT_ALL_TESTS
28 +    
24 29   config MCTP_TRANSPORT_I2C
25 30       tristate "MCTP SMBus/I2C transport"
26 31       # i2c-mux is optional, but we must build as a module if i2c-mux is a module
+111 -2
drivers/net/mctp/mctp-serial.c
··· 91 91 * will be those non-escaped bytes, and does not include the escaped 92 92 * byte. 93 93 */ 94 - for (i = 1; i + dev->txpos + 1 < dev->txlen; i++) { 95 - if (needs_escape(dev->txbuf[dev->txpos + i + 1])) 94 + for (i = 1; i + dev->txpos < dev->txlen; i++) { 95 + if (needs_escape(dev->txbuf[dev->txpos + i])) 96 96 break; 97 97 } 98 98 ··· 521 521 MODULE_LICENSE("GPL v2"); 522 522 MODULE_AUTHOR("Jeremy Kerr <jk@codeconstruct.com.au>"); 523 523 MODULE_DESCRIPTION("MCTP Serial transport"); 524 + 525 + #if IS_ENABLED(CONFIG_MCTP_SERIAL_TEST) 526 + #include <kunit/test.h> 527 + 528 + #define MAX_CHUNKS 6 529 + struct test_chunk_tx { 530 + u8 input_len; 531 + u8 input[MCTP_SERIAL_MTU]; 532 + u8 chunks[MAX_CHUNKS]; 533 + }; 534 + 535 + static void test_next_chunk_len(struct kunit *test) 536 + { 537 + struct mctp_serial devx; 538 + struct mctp_serial *dev = &devx; 539 + int next; 540 + 541 + const struct test_chunk_tx *params = test->param_value; 542 + 543 + memset(dev, 0x0, sizeof(*dev)); 544 + memcpy(dev->txbuf, params->input, params->input_len); 545 + dev->txlen = params->input_len; 546 + 547 + for (size_t i = 0; i < MAX_CHUNKS; i++) { 548 + next = next_chunk_len(dev); 549 + dev->txpos += next; 550 + KUNIT_EXPECT_EQ(test, next, params->chunks[i]); 551 + 552 + if (next == 0) { 553 + KUNIT_EXPECT_EQ(test, dev->txpos, dev->txlen); 554 + return; 555 + } 556 + } 557 + 558 + KUNIT_FAIL_AND_ABORT(test, "Ran out of chunks"); 559 + } 560 + 561 + static struct test_chunk_tx chunk_tx_tests[] = { 562 + { 563 + .input_len = 5, 564 + .input = { 0x00, 0x11, 0x22, 0x7e, 0x80 }, 565 + .chunks = { 3, 1, 1, 0}, 566 + }, 567 + { 568 + .input_len = 5, 569 + .input = { 0x00, 0x11, 0x22, 0x7e, 0x7d }, 570 + .chunks = { 3, 1, 1, 0}, 571 + }, 572 + { 573 + .input_len = 3, 574 + .input = { 0x7e, 0x11, 0x22, }, 575 + .chunks = { 1, 2, 0}, 576 + }, 577 + { 578 + .input_len = 3, 579 + .input = { 0x7e, 0x7e, 0x7d, }, 580 + .chunks = { 1, 1, 1, 0}, 581 + }, 582 + { 583 + .input_len = 4, 584 + 
.input = { 0x7e, 0x7e, 0x00, 0x7d, }, 585 + .chunks = { 1, 1, 1, 1, 0}, 586 + }, 587 + { 588 + .input_len = 6, 589 + .input = { 0x7e, 0x7e, 0x00, 0x7d, 0x10, 0x10}, 590 + .chunks = { 1, 1, 1, 1, 2, 0}, 591 + }, 592 + { 593 + .input_len = 1, 594 + .input = { 0x7e }, 595 + .chunks = { 1, 0 }, 596 + }, 597 + { 598 + .input_len = 1, 599 + .input = { 0x80 }, 600 + .chunks = { 1, 0 }, 601 + }, 602 + { 603 + .input_len = 3, 604 + .input = { 0x80, 0x80, 0x00 }, 605 + .chunks = { 3, 0 }, 606 + }, 607 + { 608 + .input_len = 7, 609 + .input = { 0x01, 0x00, 0x08, 0xc8, 0x00, 0x80, 0x02 }, 610 + .chunks = { 7, 0 }, 611 + }, 612 + { 613 + .input_len = 7, 614 + .input = { 0x01, 0x00, 0x08, 0xc8, 0x7e, 0x80, 0x02 }, 615 + .chunks = { 4, 1, 2, 0 }, 616 + }, 617 + }; 618 + 619 + KUNIT_ARRAY_PARAM(chunk_tx, chunk_tx_tests, NULL); 620 + 621 + static struct kunit_case mctp_serial_test_cases[] = { 622 + KUNIT_CASE_PARAM(test_next_chunk_len, chunk_tx_gen_params), 623 + }; 624 + 625 + static struct kunit_suite mctp_serial_test_suite = { 626 + .name = "mctp_serial", 627 + .test_cases = mctp_serial_test_cases, 628 + }; 629 + 630 + kunit_test_suite(mctp_serial_test_suite); 631 + 632 + #endif /* CONFIG_MCTP_SERIAL_TEST */
+2
drivers/net/phy/phy_device.c
···
3347 3347       err = of_phy_led(phydev, led);
3348 3348       if (err) {
3349 3349           of_node_put(led);
3350 +                of_node_put(leds);
3350 3351           phy_leds_unregister(phydev);
3351 3352           return err;
3352 3353       }
3353 3354   }
3354 3355   
3356 +        of_node_put(leds);
3355 3357   return 0;
3356 3358   }
3357 3359   
+13 -4
drivers/net/usb/r8152.c
···
5178 5178       data = (u8 *)mac;
5179 5179       data += __le16_to_cpu(mac->fw_offset);
5180 5180   
5181 -            generic_ocp_write(tp, __le16_to_cpu(mac->fw_reg), 0xff, length, data,
5182 -                              type);
5181 +            if (generic_ocp_write(tp, __le16_to_cpu(mac->fw_reg), 0xff, length,
5182 +                                  data, type) < 0) {
5183 +                    dev_err(&tp->intf->dev, "Write %s fw fail\n",
5184 +                            type ? "PLA" : "USB");
5185 +                    return;
5186 +            }
5183 5187   
5184 5188       ocp_write_word(tp, type, __le16_to_cpu(mac->bp_ba_addr),
5185 5189                      __le16_to_cpu(mac->bp_ba_value));
5186 5190   
5187 -            generic_ocp_write(tp, __le16_to_cpu(mac->bp_start), BYTE_EN_DWORD,
5188 -                              __le16_to_cpu(mac->bp_num) << 1, mac->bp, type);
5191 +            if (generic_ocp_write(tp, __le16_to_cpu(mac->bp_start), BYTE_EN_DWORD,
5192 +                                  ALIGN(__le16_to_cpu(mac->bp_num) << 1, 4),
5193 +                                  mac->bp, type) < 0) {
5194 +                    dev_err(&tp->intf->dev, "Write %s bp fail\n",
5195 +                            type ? "PLA" : "USB");
5196 +                    return;
5197 +            }
5189 5198   
5190 5199       bp_en_addr = __le16_to_cpu(mac->bp_en_addr);
5191 5200       if (bp_en_addr)
+3 -8
drivers/net/usb/usbnet.c
···
61 61   
62 62   /*-------------------------------------------------------------------------*/
63 63   
64 -    // randomly generated ethernet address
65 -    static u8 node_id [ETH_ALEN];
66 -    
67 64   /* use ethtool to change the level for any given device */
68 65   static int msg_level = -1;
69 66   module_param (msg_level, int, 0);
···
1722 1725   
1723 1726       dev->net = net;
1724 1727       strscpy(net->name, "usb%d", sizeof(net->name));
1725 -            eth_hw_addr_set(net, node_id);
1726 1728   
1727 1729       /* rx and tx sides can use different message sizes;
1728 1730        * bind() should set rx_urb_size in that case.
···
1797 1801           goto out4;
1798 1802       }
1799 1803   
1800 -            /* let userspace know we have a random address */
1801 -            if (ether_addr_equal(net->dev_addr, node_id))
1802 -                    net->addr_assign_type = NET_ADDR_RANDOM;
1804 +            /* this flags the device for user space */
1805 +            if (!is_valid_ether_addr(net->dev_addr))
1806 +                    eth_hw_addr_random(net);
1803 1807   
1804 1808       if ((dev->driver_info->flags & FLAG_WLAN) != 0)
1805 1809           SET_NETDEV_DEVTYPE(net, &wlan_type);
···
2207 2211       BUILD_BUG_ON(
2208 2212           sizeof_field(struct sk_buff, cb) < sizeof(struct skb_data));
2209 2213   
2210 -            eth_random_addr(node_id);
2211 2214       return 0;
2212 2215   }
2213 2216   module_init(usbnet_init);
+2 -2
drivers/net/wireless/ath/ath11k/ahb.c
···
413 413       return ret;
414 414   }
415 415   
416 -    static void ath11k_ahb_power_down(struct ath11k_base *ab, bool is_suspend)
416 +    static void ath11k_ahb_power_down(struct ath11k_base *ab)
417 417   {
418 418       struct ath11k_ahb *ab_ahb = ath11k_ahb_priv(ab);
419 419   
···
1280 1280       struct ath11k_base *ab = platform_get_drvdata(pdev);
1281 1281   
1282 1282       if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) {
1283 -                ath11k_ahb_power_down(ab, false);
1283 +                ath11k_ahb_power_down(ab);
1284 1284           ath11k_debugfs_soc_destroy(ab);
1285 1285           ath11k_qmi_deinit_service(ab);
1286 1286           goto qmi_fail;
+33 -86
drivers/net/wireless/ath/ath11k/core.c
··· 906 906 return ret; 907 907 } 908 908 909 + ret = ath11k_wow_enable(ab); 910 + if (ret) { 911 + ath11k_warn(ab, "failed to enable wow during suspend: %d\n", ret); 912 + return ret; 913 + } 914 + 909 915 ret = ath11k_dp_rx_pktlog_stop(ab, false); 910 916 if (ret) { 911 917 ath11k_warn(ab, "failed to stop dp rx pktlog during suspend: %d\n", ··· 922 916 ath11k_ce_stop_shadow_timers(ab); 923 917 ath11k_dp_stop_shadow_timers(ab); 924 918 925 - /* PM framework skips suspend_late/resume_early callbacks 926 - * if other devices report errors in their suspend callbacks. 927 - * However ath11k_core_resume() would still be called because 928 - * here we return success thus kernel put us on dpm_suspended_list. 929 - * Since we won't go through a power down/up cycle, there is 930 - * no chance to call complete(&ab->restart_completed) in 931 - * ath11k_core_restart(), making ath11k_core_resume() timeout. 932 - * So call it here to avoid this issue. This also works in case 933 - * no error happens thus suspend_late/resume_early get called, 934 - * because it will be reinitialized in ath11k_core_resume_early(). 935 - */ 936 - complete(&ab->restart_completed); 919 + ath11k_hif_irq_disable(ab); 920 + ath11k_hif_ce_irq_disable(ab); 921 + 922 + ret = ath11k_hif_suspend(ab); 923 + if (ret) { 924 + ath11k_warn(ab, "failed to suspend hif: %d\n", ret); 925 + return ret; 926 + } 937 927 938 928 return 0; 939 929 } 940 930 EXPORT_SYMBOL(ath11k_core_suspend); 941 - 942 - int ath11k_core_suspend_late(struct ath11k_base *ab) 943 - { 944 - struct ath11k_pdev *pdev; 945 - struct ath11k *ar; 946 - 947 - if (!ab->hw_params.supports_suspend) 948 - return -EOPNOTSUPP; 949 - 950 - /* so far single_pdev_only chips have supports_suspend as true 951 - * and only the first pdev is valid. 
952 - */ 953 - pdev = ath11k_core_get_single_pdev(ab); 954 - ar = pdev->ar; 955 - if (!ar || ar->state != ATH11K_STATE_OFF) 956 - return 0; 957 - 958 - ath11k_hif_irq_disable(ab); 959 - ath11k_hif_ce_irq_disable(ab); 960 - 961 - ath11k_hif_power_down(ab, true); 962 - 963 - return 0; 964 - } 965 - EXPORT_SYMBOL(ath11k_core_suspend_late); 966 - 967 - int ath11k_core_resume_early(struct ath11k_base *ab) 968 - { 969 - int ret; 970 - struct ath11k_pdev *pdev; 971 - struct ath11k *ar; 972 - 973 - if (!ab->hw_params.supports_suspend) 974 - return -EOPNOTSUPP; 975 - 976 - /* so far single_pdev_only chips have supports_suspend as true 977 - * and only the first pdev is valid. 978 - */ 979 - pdev = ath11k_core_get_single_pdev(ab); 980 - ar = pdev->ar; 981 - if (!ar || ar->state != ATH11K_STATE_OFF) 982 - return 0; 983 - 984 - reinit_completion(&ab->restart_completed); 985 - ret = ath11k_hif_power_up(ab); 986 - if (ret) 987 - ath11k_warn(ab, "failed to power up hif during resume: %d\n", ret); 988 - 989 - return ret; 990 - } 991 - EXPORT_SYMBOL(ath11k_core_resume_early); 992 931 993 932 int ath11k_core_resume(struct ath11k_base *ab) 994 933 { 995 934 int ret; 996 935 struct ath11k_pdev *pdev; 997 936 struct ath11k *ar; 998 - long time_left; 999 937 1000 938 if (!ab->hw_params.supports_suspend) 1001 939 return -EOPNOTSUPP; 1002 940 1003 - /* so far single_pdev_only chips have supports_suspend as true 941 + /* so far single_pdev_only chips have supports_suspend as true 1004 942 * and only the first pdev is valid. 
1005 943 */ 1006 944 pdev = ath11k_core_get_single_pdev(ab); ··· 952 1002 if (!ar || ar->state != ATH11K_STATE_OFF) 953 1003 return 0; 954 1004 955 - time_left = wait_for_completion_timeout(&ab->restart_completed, 956 - ATH11K_RESET_TIMEOUT_HZ); 957 - if (time_left == 0) { 958 - ath11k_warn(ab, "timeout while waiting for restart complete"); 959 - return -ETIMEDOUT; 1005 + ret = ath11k_hif_resume(ab); 1006 + if (ret) { 1007 + ath11k_warn(ab, "failed to resume hif during resume: %d\n", ret); 1008 + return ret; 960 1009 } 961 1010 962 - if (ab->hw_params.current_cc_support && 963 - ar->alpha2[0] != 0 && ar->alpha2[1] != 0) { 964 - ret = ath11k_reg_set_cc(ar); 965 - if (ret) { 966 - ath11k_warn(ab, "failed to set country code during resume: %d\n", 967 - ret); 968 - return ret; 969 - } 970 - } 1011 + ath11k_hif_ce_irq_enable(ab); 1012 + ath11k_hif_irq_enable(ab); 971 1013 972 1014 ret = ath11k_dp_rx_pktlog_start(ab); 973 - if (ret) 1015 + if (ret) { 974 1016 ath11k_warn(ab, "failed to start rx pktlog during resume: %d\n", 975 1017 ret); 1018 + return ret; 1019 + } 976 1020 977 - return ret; 1021 + ret = ath11k_wow_wakeup(ab); 1022 + if (ret) { 1023 + ath11k_warn(ab, "failed to wakeup wow during resume: %d\n", ret); 1024 + return ret; 1025 + } 1026 + 1027 + return 0; 978 1028 } 979 1029 EXPORT_SYMBOL(ath11k_core_resume); 980 1030 ··· 2069 2119 2070 2120 if (!ab->is_reset) 2071 2121 ath11k_core_post_reconfigure_recovery(ab); 2072 - 2073 - complete(&ab->restart_completed); 2074 2122 } 2075 2123 2076 2124 static void ath11k_core_reset(struct work_struct *work) ··· 2138 2190 ath11k_hif_irq_disable(ab); 2139 2191 ath11k_hif_ce_irq_disable(ab); 2140 2192 2141 - ath11k_hif_power_down(ab, false); 2193 + ath11k_hif_power_down(ab); 2142 2194 ath11k_hif_power_up(ab); 2143 2195 2144 2196 ath11k_dbg(ab, ATH11K_DBG_BOOT, "reset started\n"); ··· 2211 2263 2212 2264 mutex_unlock(&ab->core_lock); 2213 2265 2214 - ath11k_hif_power_down(ab, false); 2266 + ath11k_hif_power_down(ab); 2215 
2267 ath11k_mac_destroy(ab); 2216 2268 ath11k_core_soc_destroy(ab); 2217 2269 ath11k_fw_destroy(ab); ··· 2264 2316 timer_setup(&ab->rx_replenish_retry, ath11k_ce_rx_replenish_retry, 0); 2265 2317 init_completion(&ab->htc_suspend); 2266 2318 init_completion(&ab->wow.wakeup_completed); 2267 - init_completion(&ab->restart_completed); 2268 2319 2269 2320 ab->dev = dev; 2270 2321 ab->hif.bus = bus;
-4
drivers/net/wireless/ath/ath11k/core.h
··· 1036 1036 DECLARE_BITMAP(fw_features, ATH11K_FW_FEATURE_COUNT); 1037 1037 } fw; 1038 1038 1039 - struct completion restart_completed; 1040 - 1041 1039 #ifdef CONFIG_NL80211_TESTMODE 1042 1040 struct { 1043 1041 u32 data_pos; ··· 1235 1237 int ath11k_core_check_dt(struct ath11k_base *ath11k); 1236 1238 int ath11k_core_check_smbios(struct ath11k_base *ab); 1237 1239 void ath11k_core_halt(struct ath11k *ar); 1238 - int ath11k_core_resume_early(struct ath11k_base *ab); 1239 1240 int ath11k_core_resume(struct ath11k_base *ab); 1240 1241 int ath11k_core_suspend(struct ath11k_base *ab); 1241 - int ath11k_core_suspend_late(struct ath11k_base *ab); 1242 1242 void ath11k_core_pre_reconfigure_recovery(struct ath11k_base *ab); 1243 1243 bool ath11k_core_coldboot_cal_support(struct ath11k_base *ab); 1244 1244
+3 -9
drivers/net/wireless/ath/ath11k/hif.h
··· 18 18 int (*start)(struct ath11k_base *ab); 19 19 void (*stop)(struct ath11k_base *ab); 20 20 int (*power_up)(struct ath11k_base *ab); 21 - void (*power_down)(struct ath11k_base *ab, bool is_suspend); 21 + void (*power_down)(struct ath11k_base *ab); 22 22 int (*suspend)(struct ath11k_base *ab); 23 23 int (*resume)(struct ath11k_base *ab); 24 24 int (*map_service_to_pipe)(struct ath11k_base *ab, u16 service_id, ··· 67 67 68 68 static inline int ath11k_hif_power_up(struct ath11k_base *ab) 69 69 { 70 - if (!ab->hif.ops->power_up) 71 - return -EOPNOTSUPP; 72 - 73 70 return ab->hif.ops->power_up(ab); 74 71 } 75 72 76 - static inline void ath11k_hif_power_down(struct ath11k_base *ab, bool is_suspend) 73 + static inline void ath11k_hif_power_down(struct ath11k_base *ab) 77 74 { 78 - if (!ab->hif.ops->power_down) 79 - return; 80 - 81 - ab->hif.ops->power_down(ab, is_suspend); 75 + ab->hif.ops->power_down(ab); 82 76 } 83 77 84 78 static inline int ath11k_hif_suspend(struct ath11k_base *ab)
+1
drivers/net/wireless/ath/ath11k/mac.c
··· 7900 7900 } 7901 7901 7902 7902 if (psd) { 7903 + arvif->reg_tpc_info.is_psd_power = true; 7903 7904 arvif->reg_tpc_info.num_pwr_levels = psd->count; 7904 7905 7905 7906 for (i = 0; i < arvif->reg_tpc_info.num_pwr_levels; i++) {
+2 -10
drivers/net/wireless/ath/ath11k/mhi.c
··· 453 453 return 0; 454 454 } 455 455 456 - void ath11k_mhi_stop(struct ath11k_pci *ab_pci, bool is_suspend) 456 + void ath11k_mhi_stop(struct ath11k_pci *ab_pci) 457 457 { 458 - /* During suspend we need to use mhi_power_down_keep_dev() 459 - * workaround, otherwise ath11k_core_resume() will timeout 460 - * during resume. 461 - */ 462 - if (is_suspend) 463 - mhi_power_down_keep_dev(ab_pci->mhi_ctrl, true); 464 - else 465 - mhi_power_down(ab_pci->mhi_ctrl, true); 466 - 458 + mhi_power_down(ab_pci->mhi_ctrl, true); 467 459 mhi_unprepare_after_power_down(ab_pci->mhi_ctrl); 468 460 } 469 461
+2 -1
drivers/net/wireless/ath/ath11k/mhi.h
··· 18 18 #define MHICTRL_RESET_MASK 0x2 19 19 20 20 int ath11k_mhi_start(struct ath11k_pci *ar_pci); 21 - void ath11k_mhi_stop(struct ath11k_pci *ar_pci, bool is_suspend); 21 + void ath11k_mhi_stop(struct ath11k_pci *ar_pci); 22 22 int ath11k_mhi_register(struct ath11k_pci *ar_pci); 23 23 void ath11k_mhi_unregister(struct ath11k_pci *ar_pci); 24 24 void ath11k_mhi_set_mhictrl_reset(struct ath11k_base *ab); ··· 26 26 27 27 int ath11k_mhi_suspend(struct ath11k_pci *ar_pci); 28 28 int ath11k_mhi_resume(struct ath11k_pci *ar_pci); 29 + 29 30 #endif
+7 -37
drivers/net/wireless/ath/ath11k/pci.c
··· 638 638 return 0; 639 639 } 640 640 641 - static void ath11k_pci_power_down(struct ath11k_base *ab, bool is_suspend) 641 + static void ath11k_pci_power_down(struct ath11k_base *ab) 642 642 { 643 643 struct ath11k_pci *ab_pci = ath11k_pci_priv(ab); 644 644 ··· 649 649 650 650 ath11k_pci_msi_disable(ab_pci); 651 651 652 - ath11k_mhi_stop(ab_pci, is_suspend); 652 + ath11k_mhi_stop(ab_pci); 653 653 clear_bit(ATH11K_FLAG_DEVICE_INIT_DONE, &ab->dev_flags); 654 654 ath11k_pci_sw_reset(ab_pci->ab, false); 655 655 } ··· 970 970 ath11k_pci_set_irq_affinity_hint(ab_pci, NULL); 971 971 972 972 if (test_bit(ATH11K_FLAG_QMI_FAIL, &ab->dev_flags)) { 973 - ath11k_pci_power_down(ab, false); 973 + ath11k_pci_power_down(ab); 974 974 ath11k_debugfs_soc_destroy(ab); 975 975 ath11k_qmi_deinit_service(ab); 976 976 goto qmi_fail; ··· 998 998 struct ath11k_pci *ab_pci = ath11k_pci_priv(ab); 999 999 1000 1000 ath11k_pci_set_irq_affinity_hint(ab_pci, NULL); 1001 - ath11k_pci_power_down(ab, false); 1001 + ath11k_pci_power_down(ab); 1002 1002 } 1003 1003 1004 1004 static __maybe_unused int ath11k_pci_pm_suspend(struct device *dev) ··· 1035 1035 return ret; 1036 1036 } 1037 1037 1038 - static __maybe_unused int ath11k_pci_pm_suspend_late(struct device *dev) 1039 - { 1040 - struct ath11k_base *ab = dev_get_drvdata(dev); 1041 - int ret; 1042 - 1043 - ret = ath11k_core_suspend_late(ab); 1044 - if (ret) 1045 - ath11k_warn(ab, "failed to late suspend core: %d\n", ret); 1046 - 1047 - /* Similar to ath11k_pci_pm_suspend(), we return success here 1048 - * even error happens, to allow system suspend/hibernation survive. 
1049 - */ 1050 - return 0; 1051 - } 1052 - 1053 - static __maybe_unused int ath11k_pci_pm_resume_early(struct device *dev) 1054 - { 1055 - struct ath11k_base *ab = dev_get_drvdata(dev); 1056 - int ret; 1057 - 1058 - ret = ath11k_core_resume_early(ab); 1059 - if (ret) 1060 - ath11k_warn(ab, "failed to early resume core: %d\n", ret); 1061 - 1062 - return ret; 1063 - } 1064 - 1065 - static const struct dev_pm_ops __maybe_unused ath11k_pci_pm_ops = { 1066 - SET_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend, 1067 - ath11k_pci_pm_resume) 1068 - SET_LATE_SYSTEM_SLEEP_PM_OPS(ath11k_pci_pm_suspend_late, 1069 - ath11k_pci_pm_resume_early) 1070 - }; 1038 + static SIMPLE_DEV_PM_OPS(ath11k_pci_pm_ops, 1039 + ath11k_pci_pm_suspend, 1040 + ath11k_pci_pm_resume); 1071 1041 1072 1042 static struct pci_driver ath11k_pci_driver = { 1073 1043 .name = "ath11k_pci",
+1 -1
drivers/net/wireless/ath/ath11k/qmi.c
··· 2877 2877 } 2878 2878 2879 2879 /* reset the firmware */ 2880 - ath11k_hif_power_down(ab, false); 2880 + ath11k_hif_power_down(ab); 2881 2881 ath11k_hif_power_up(ab); 2882 2882 ath11k_dbg(ab, ATH11K_DBG_QMI, "exit wait for cold boot done\n"); 2883 2883 return 0;
+2 -1
drivers/nvme/host/core.c
··· 4437 4437 4438 4438 static void nvme_handle_aer_persistent_error(struct nvme_ctrl *ctrl) 4439 4439 { 4440 - dev_warn(ctrl->device, "resetting controller due to AER\n"); 4440 + dev_warn(ctrl->device, 4441 + "resetting controller due to persistent internal error\n"); 4441 4442 nvme_reset_ctrl(ctrl); 4442 4443 } 4443 4444
+3 -1
drivers/nvme/host/multipath.c
··· 616 616 blk_set_stacking_limits(&lim); 617 617 lim.dma_alignment = 3; 618 618 lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL; 619 - if (head->ids.csi != NVME_CSI_ZNS) 619 + if (head->ids.csi == NVME_CSI_ZNS) 620 + lim.features |= BLK_FEAT_ZONED; 621 + else 620 622 lim.max_zone_append_sectors = 0; 621 623 622 624 head->disk = blk_alloc_disk(&lim, ctrl->numa_node);
+17
drivers/nvme/host/pci.c
··· 2508 2508 2509 2509 static void nvme_pci_update_nr_queues(struct nvme_dev *dev) 2510 2510 { 2511 + if (!dev->ctrl.tagset) { 2512 + nvme_alloc_io_tag_set(&dev->ctrl, &dev->tagset, &nvme_mq_ops, 2513 + nvme_pci_nr_maps(dev), sizeof(struct nvme_iod)); 2514 + return; 2515 + } 2516 + 2511 2517 blk_mq_update_nr_hw_queues(&dev->tagset, dev->online_queues - 1); 2512 2518 /* free previously allocated queues that are no longer usable */ 2513 2519 nvme_free_queues(dev, dev->online_queues); ··· 2972 2966 dmi_match(DMI_BOARD_NAME, "NS5x_7xAU") || 2973 2967 dmi_match(DMI_BOARD_NAME, "NS5x_7xPU") || 2974 2968 dmi_match(DMI_BOARD_NAME, "PH4PRX1_PH6PRX1")) 2969 + return NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND; 2970 + } else if (pdev->vendor == 0x144d && pdev->device == 0xa80d) { 2971 + /* 2972 + * Exclude Samsung 990 Evo from NVME_QUIRK_SIMPLE_SUSPEND 2973 + * because of high power consumption (> 2 Watt) in s2idle 2974 + * sleep. Only some boards with Intel CPU are affected. 2975 + */ 2976 + if (dmi_match(DMI_BOARD_NAME, "GMxPXxx") || 2977 + dmi_match(DMI_BOARD_NAME, "PH4PG31") || 2978 + dmi_match(DMI_BOARD_NAME, "PH4PRX1_PH6PRX1") || 2979 + dmi_match(DMI_BOARD_NAME, "PH6PG01_PH6PG71")) 2975 2980 return NVME_QUIRK_FORCE_NO_SIMPLE_SUSPEND; 2976 2981 } 2977 2982
+10
drivers/nvme/target/admin-cmd.c
··· 587 587 u16 status = 0; 588 588 int i = 0; 589 589 590 + /* 591 + * NSID values 0xFFFFFFFE and NVME_NSID_ALL are invalid 592 + * See NVMe Base Specification, Active Namespace ID list (CNS 02h). 593 + */ 594 + if (min_nsid == 0xFFFFFFFE || min_nsid == NVME_NSID_ALL) { 595 + req->error_loc = offsetof(struct nvme_identify, nsid); 596 + status = NVME_SC_INVALID_NS | NVME_STATUS_DNR; 597 + goto out; 598 + } 599 + 590 600 list = kzalloc(buf_size, GFP_KERNEL); 591 601 if (!list) { 592 602 status = NVME_SC_INTERNAL;
+1 -1
drivers/nvme/target/debugfs.c
··· 13 13 #include "nvmet.h" 14 14 #include "debugfs.h" 15 15 16 - struct dentry *nvmet_debugfs; 16 + static struct dentry *nvmet_debugfs; 17 17 18 18 #define NVMET_DEBUGFS_ATTR(field) \ 19 19 static int field##_open(struct inode *inode, struct file *file) \
+3 -1
drivers/nvme/target/tcp.c
··· 2146 2146 } 2147 2147 2148 2148 queue->nr_cmds = sq->size * 2; 2149 - if (nvmet_tcp_alloc_cmds(queue)) 2149 + if (nvmet_tcp_alloc_cmds(queue)) { 2150 + queue->nr_cmds = 0; 2150 2151 return NVME_SC_INTERNAL; 2152 + } 2151 2153 return 0; 2152 2154 } 2153 2155
+3 -3
drivers/nvmem/core.c
··· 1276 1276 EXPORT_SYMBOL_GPL(nvmem_device_put); 1277 1277 1278 1278 /** 1279 - * devm_nvmem_device_get() - Get nvmem cell of device form a given id 1279 + * devm_nvmem_device_get() - Get nvmem device of device from a given id 1280 1280 * 1281 1281 * @dev: Device that requests the nvmem device. 1282 1282 * @id: name id for the requested nvmem device. 1283 1283 * 1284 - * Return: ERR_PTR() on error or a valid pointer to a struct nvmem_cell 1285 - * on success. The nvmem_cell will be freed by the automatically once the 1284 + * Return: ERR_PTR() on error or a valid pointer to a struct nvmem_device 1285 + * on success. The nvmem_device will be freed automatically once the 1286 1286 * device is freed. 1287 1287 */ 1288 1288 struct nvmem_device *devm_nvmem_device_get(struct device *dev, const char *id)
+7
drivers/nvmem/u-boot-env.c
··· 176 176 data_offset = offsetof(struct u_boot_env_image_broadcom, data); 177 177 break; 178 178 } 179 + 180 + if (dev_size < data_offset) { 181 + dev_err(dev, "Device too small for u-boot-env\n"); 182 + err = -EIO; 183 + goto err_kfree; 184 + } 185 + 179 186 crc32_addr = (__le32 *)(buf + crc32_offset); 180 187 crc32 = le32_to_cpu(*crc32_addr); 181 188 crc32_data_len = dev_size - crc32_data_offset;
+22 -34
drivers/opp/core.c
··· 1061 1061 return 0; 1062 1062 } 1063 1063 1064 + static int _set_opp_level(struct device *dev, struct dev_pm_opp *opp) 1065 + { 1066 + unsigned int level = 0; 1067 + int ret = 0; 1068 + 1069 + if (opp) { 1070 + if (opp->level == OPP_LEVEL_UNSET) 1071 + return 0; 1072 + 1073 + level = opp->level; 1074 + } 1075 + 1076 + /* Request a new performance state through the device's PM domain. */ 1077 + ret = dev_pm_domain_set_performance_state(dev, level); 1078 + if (ret) 1079 + dev_err(dev, "Failed to set performance state %u (%d)\n", level, 1080 + ret); 1081 + 1082 + return ret; 1083 + } 1084 + 1064 1085 /* This is only called for PM domain for now */ 1065 1086 static int _set_required_opps(struct device *dev, struct opp_table *opp_table, 1066 1087 struct dev_pm_opp *opp, bool up) ··· 1112 1091 if (devs[index]) { 1113 1092 required_opp = opp ? opp->required_opps[index] : NULL; 1114 1093 1115 - ret = dev_pm_opp_set_opp(devs[index], required_opp); 1094 + ret = _set_opp_level(devs[index], required_opp); 1116 1095 if (ret) 1117 1096 return ret; 1118 1097 } ··· 1121 1100 } 1122 1101 1123 1102 return 0; 1124 - } 1125 - 1126 - static int _set_opp_level(struct device *dev, struct dev_pm_opp *opp) 1127 - { 1128 - unsigned int level = 0; 1129 - int ret = 0; 1130 - 1131 - if (opp) { 1132 - if (opp->level == OPP_LEVEL_UNSET) 1133 - return 0; 1134 - 1135 - level = opp->level; 1136 - } 1137 - 1138 - /* Request a new performance state through the device's PM domain. 
*/ 1139 - ret = dev_pm_domain_set_performance_state(dev, level); 1140 - if (ret) 1141 - dev_err(dev, "Failed to set performance state %u (%d)\n", level, 1142 - ret); 1143 - 1144 - return ret; 1145 1103 } 1146 1104 1147 1105 static void _find_current_opp(struct device *dev, struct opp_table *opp_table) ··· 2455 2455 } else { 2456 2456 dev_pm_opp_put_opp_table(genpd_table); 2457 2457 } 2458 - } 2459 - 2460 - /* 2461 - * Add the virtual genpd device as a user of the OPP table, so 2462 - * we can call dev_pm_opp_set_opp() on it directly. 2463 - * 2464 - * This will be automatically removed when the OPP table is 2465 - * removed, don't need to handle that here. 2466 - */ 2467 - if (!_add_opp_dev(virt_dev, opp_table->required_opp_tables[index])) { 2468 - ret = -ENOMEM; 2469 - goto err; 2470 2458 } 2471 2459 2472 2460 opp_table->required_devs[index] = virt_dev;
+23 -3
drivers/pci/pwrctl/core.c
··· 48 48 return NOTIFY_DONE; 49 49 } 50 50 51 + static void rescan_work_func(struct work_struct *work) 52 + { 53 + struct pci_pwrctl *pwrctl = container_of(work, struct pci_pwrctl, work); 54 + 55 + pci_lock_rescan_remove(); 56 + pci_rescan_bus(to_pci_dev(pwrctl->dev->parent)->bus); 57 + pci_unlock_rescan_remove(); 58 + } 59 + 60 + /** 61 + * pci_pwrctl_init() - Initialize the PCI power control context struct 62 + * 63 + * @pwrctl: PCI power control data 64 + * @dev: Parent device 65 + */ 66 + void pci_pwrctl_init(struct pci_pwrctl *pwrctl, struct device *dev) 67 + { 68 + pwrctl->dev = dev; 69 + INIT_WORK(&pwrctl->work, rescan_work_func); 70 + } 71 + EXPORT_SYMBOL_GPL(pci_pwrctl_init); 72 + 51 73 /** 52 74 * pci_pwrctl_device_set_ready() - Notify the pwrctl subsystem that the PCI 53 75 * device is powered-up and ready to be detected. ··· 96 74 if (ret) 97 75 return ret; 98 76 99 - pci_lock_rescan_remove(); 100 - pci_rescan_bus(to_pci_dev(pwrctl->dev->parent)->bus); 101 - pci_unlock_rescan_remove(); 77 + schedule_work(&pwrctl->work); 102 78 103 79 return 0; 104 80 }
+1 -1
drivers/pci/pwrctl/pci-pwrctl-pwrseq.c
··· 50 50 if (ret) 51 51 return ret; 52 52 53 - data->ctx.dev = dev; 53 + pci_pwrctl_init(&data->ctx, dev); 54 54 55 55 ret = devm_pci_pwrctl_device_set_ready(dev, &data->ctx); 56 56 if (ret)
+17 -1
drivers/pci/remove.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 #include <linux/pci.h> 3 3 #include <linux/module.h> 4 + #include <linux/of.h> 4 5 #include <linux/of_platform.h> 6 + #include <linux/platform_device.h> 7 + 5 8 #include "pci.h" 6 9 7 10 static void pci_free_resources(struct pci_dev *dev) ··· 17 14 } 18 15 } 19 16 17 + static int pci_pwrctl_unregister(struct device *dev, void *data) 18 + { 19 + struct device_node *pci_node = data, *plat_node = dev_of_node(dev); 20 + 21 + if (dev_is_platform(dev) && plat_node && plat_node == pci_node) { 22 + of_device_unregister(to_platform_device(dev)); 23 + of_node_clear_flag(plat_node, OF_POPULATED); 24 + } 25 + 26 + return 0; 27 + } 28 + 20 29 static void pci_stop_dev(struct pci_dev *dev) 21 30 { 22 31 pci_pme_active(dev, false); 23 32 24 33 if (pci_dev_is_added(dev)) { 25 - of_platform_depopulate(&dev->dev); 34 + device_for_each_child(dev->dev.parent, dev_of_node(&dev->dev), 35 + pci_pwrctl_unregister); 26 36 device_release_driver(&dev->dev); 27 37 pci_proc_detach_device(dev); 28 38 pci_remove_sysfs_dev_files(dev);
+3 -1
drivers/pinctrl/qcom/pinctrl-x1e80100.c
··· 1839 1839 .ngroups = ARRAY_SIZE(x1e80100_groups), 1840 1840 .ngpios = 239, 1841 1841 .wakeirq_map = x1e80100_pdc_map, 1842 - .nwakeirq_map = ARRAY_SIZE(x1e80100_pdc_map), 1842 + /* TODO: Enabling PDC currently breaks GPIO interrupts */ 1843 + .nwakeirq_map = 0, 1844 + /* .nwakeirq_map = ARRAY_SIZE(x1e80100_pdc_map), */ 1843 1845 .egpio_func = 9, 1844 1846 }; 1845 1847
+1 -1
drivers/platform/x86/amd/pmf/pmf-quirks.c
··· 25 25 .ident = "ROG Zephyrus G14", 26 26 .matches = { 27 27 DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."), 28 - DMI_MATCH(DMI_PRODUCT_NAME, "GA403UV"), 28 + DMI_MATCH(DMI_PRODUCT_NAME, "GA403U"), 29 29 }, 30 30 .driver_data = &quirk_no_sps_bug, 31 31 },
+4 -1
drivers/platform/x86/dell/dell-smbios-base.c
··· 622 622 return 0; 623 623 624 624 fail_sysfs: 625 - free_group(platform_device); 625 + if (!wmi) 626 + exit_dell_smbios_wmi(); 627 + if (!smm) 628 + exit_dell_smbios_smm(); 626 629 627 630 fail_create_group: 628 631 platform_device_del(platform_device);
+101 -67
drivers/ptp/ptp_ocp.c
··· 316 316 #define OCP_SERIAL_LEN 6 317 317 #define OCP_SMA_NUM 4 318 318 319 + enum { 320 + PORT_GNSS, 321 + PORT_GNSS2, 322 + PORT_MAC, /* miniature atomic clock */ 323 + PORT_NMEA, 324 + 325 + __PORT_COUNT, 326 + }; 327 + 319 328 struct ptp_ocp { 320 329 struct pci_dev *pdev; 321 330 struct device dev; ··· 366 357 struct delayed_work sync_work; 367 358 int id; 368 359 int n_irqs; 369 - struct ptp_ocp_serial_port gnss_port; 370 - struct ptp_ocp_serial_port gnss2_port; 371 - struct ptp_ocp_serial_port mac_port; /* miniature atomic clock */ 372 - struct ptp_ocp_serial_port nmea_port; 360 + struct ptp_ocp_serial_port port[__PORT_COUNT]; 373 361 bool fw_loader; 374 362 u8 fw_tag; 375 363 u16 fw_version; ··· 661 655 }, 662 656 }, 663 657 { 664 - OCP_SERIAL_RESOURCE(gnss_port), 658 + OCP_SERIAL_RESOURCE(port[PORT_GNSS]), 665 659 .offset = 0x00160000 + 0x1000, .irq_vec = 3, 666 660 .extra = &(struct ptp_ocp_serial_port) { 667 661 .baud = 115200, 668 662 }, 669 663 }, 670 664 { 671 - OCP_SERIAL_RESOURCE(gnss2_port), 665 + OCP_SERIAL_RESOURCE(port[PORT_GNSS2]), 672 666 .offset = 0x00170000 + 0x1000, .irq_vec = 4, 673 667 .extra = &(struct ptp_ocp_serial_port) { 674 668 .baud = 115200, 675 669 }, 676 670 }, 677 671 { 678 - OCP_SERIAL_RESOURCE(mac_port), 672 + OCP_SERIAL_RESOURCE(port[PORT_MAC]), 679 673 .offset = 0x00180000 + 0x1000, .irq_vec = 5, 680 674 .extra = &(struct ptp_ocp_serial_port) { 681 675 .baud = 57600, 682 676 }, 683 677 }, 684 678 { 685 - OCP_SERIAL_RESOURCE(nmea_port), 679 + OCP_SERIAL_RESOURCE(port[PORT_NMEA]), 686 680 .offset = 0x00190000 + 0x1000, .irq_vec = 10, 687 681 }, 688 682 { ··· 746 740 .offset = 0x01000000, .size = 0x10000, 747 741 }, 748 742 { 749 - OCP_SERIAL_RESOURCE(gnss_port), 743 + OCP_SERIAL_RESOURCE(port[PORT_GNSS]), 750 744 .offset = 0x00160000 + 0x1000, .irq_vec = 3, 751 745 .extra = &(struct ptp_ocp_serial_port) { 752 746 .baud = 115200, ··· 845 839 }, 846 840 }, 847 841 { 848 - OCP_SERIAL_RESOURCE(mac_port), 842 + 
OCP_SERIAL_RESOURCE(port[PORT_MAC]), 849 843 .offset = 0x00190000, .irq_vec = 7, 850 844 .extra = &(struct ptp_ocp_serial_port) { 851 845 .baud = 9600, ··· 956 950 .offset = 0x00220000, .size = 0x1000, 957 951 }, 958 952 { 959 - OCP_SERIAL_RESOURCE(gnss_port), 953 + OCP_SERIAL_RESOURCE(port[PORT_GNSS]), 960 954 .offset = 0x00160000 + 0x1000, .irq_vec = 3, 961 955 .extra = &(struct ptp_ocp_serial_port) { 962 956 .baud = 9600, 963 957 }, 964 958 }, 965 959 { 966 - OCP_SERIAL_RESOURCE(mac_port), 960 + OCP_SERIAL_RESOURCE(port[PORT_MAC]), 967 961 .offset = 0x00180000 + 0x1000, .irq_vec = 5, 968 962 .extra = &(struct ptp_ocp_serial_port) { 969 963 .baud = 115200, ··· 1653 1647 if (idx >= ARRAY_SIZE(gnss_name)) 1654 1648 idx = ARRAY_SIZE(gnss_name) - 1; 1655 1649 return gnss_name[idx]; 1650 + } 1651 + 1652 + static const char * 1653 + ptp_ocp_tty_port_name(int idx) 1654 + { 1655 + static const char * const tty_name[] = { 1656 + "GNSS", "GNSS2", "MAC", "NMEA" 1657 + }; 1658 + return tty_name[idx]; 1656 1659 } 1657 1660 1658 1661 struct ptp_ocp_nvmem_match_info { ··· 3362 3347 static EXT_ATTR_RO(freq, frequency, 3); 3363 3348 3364 3349 static ssize_t 3350 + ptp_ocp_tty_show(struct device *dev, struct device_attribute *attr, char *buf) 3351 + { 3352 + struct dev_ext_attribute *ea = to_ext_attr(attr); 3353 + struct ptp_ocp *bp = dev_get_drvdata(dev); 3354 + 3355 + return sysfs_emit(buf, "ttyS%d", bp->port[(uintptr_t)ea->var].line); 3356 + } 3357 + 3358 + static umode_t 3359 + ptp_ocp_timecard_tty_is_visible(struct kobject *kobj, struct attribute *attr, int n) 3360 + { 3361 + struct ptp_ocp *bp = dev_get_drvdata(kobj_to_dev(kobj)); 3362 + struct ptp_ocp_serial_port *port; 3363 + struct device_attribute *dattr; 3364 + struct dev_ext_attribute *ea; 3365 + 3366 + if (strncmp(attr->name, "tty", 3)) 3367 + return attr->mode; 3368 + 3369 + dattr = container_of(attr, struct device_attribute, attr); 3370 + ea = container_of(dattr, struct dev_ext_attribute, attr); 3371 + port = 
&bp->port[(uintptr_t)ea->var]; 3372 + return port->line == -1 ? 0 : 0444; 3373 + } 3374 + 3375 + #define EXT_TTY_ATTR_RO(_name, _val) \ 3376 + struct dev_ext_attribute dev_attr_tty##_name = \ 3377 + { __ATTR(tty##_name, 0444, ptp_ocp_tty_show, NULL), (void *)_val } 3378 + 3379 + static EXT_TTY_ATTR_RO(GNSS, PORT_GNSS); 3380 + static EXT_TTY_ATTR_RO(GNSS2, PORT_GNSS2); 3381 + static EXT_TTY_ATTR_RO(MAC, PORT_MAC); 3382 + static EXT_TTY_ATTR_RO(NMEA, PORT_NMEA); 3383 + static struct attribute *ptp_ocp_timecard_tty_attrs[] = { 3384 + &dev_attr_ttyGNSS.attr.attr, 3385 + &dev_attr_ttyGNSS2.attr.attr, 3386 + &dev_attr_ttyMAC.attr.attr, 3387 + &dev_attr_ttyNMEA.attr.attr, 3388 + NULL, 3389 + }; 3390 + 3391 + static const struct attribute_group ptp_ocp_timecard_tty_group = { 3392 + .name = "tty", 3393 + .attrs = ptp_ocp_timecard_tty_attrs, 3394 + .is_visible = ptp_ocp_timecard_tty_is_visible, 3395 + }; 3396 + 3397 + static ssize_t 3365 3398 serialnum_show(struct device *dev, struct device_attribute *attr, char *buf) 3366 3399 { 3367 3400 struct ptp_ocp *bp = dev_get_drvdata(dev); ··· 3838 3775 3839 3776 static const struct ocp_attr_group fb_timecard_groups[] = { 3840 3777 { .cap = OCP_CAP_BASIC, .group = &fb_timecard_group }, 3778 + { .cap = OCP_CAP_BASIC, .group = &ptp_ocp_timecard_tty_group }, 3841 3779 { .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal0_group }, 3842 3780 { .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal1_group }, 3843 3781 { .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal2_group }, ··· 3878 3814 3879 3815 static const struct ocp_attr_group art_timecard_groups[] = { 3880 3816 { .cap = OCP_CAP_BASIC, .group = &art_timecard_group }, 3817 + { .cap = OCP_CAP_BASIC, .group = &ptp_ocp_timecard_tty_group }, 3881 3818 { }, 3882 3819 }; 3883 3820 ··· 3906 3841 3907 3842 static const struct ocp_attr_group adva_timecard_groups[] = { 3908 3843 { .cap = OCP_CAP_BASIC, .group = &adva_timecard_group }, 3844 + { .cap = OCP_CAP_BASIC, .group = 
&ptp_ocp_timecard_tty_group }, 3909 3845 { .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal0_group }, 3910 3846 { .cap = OCP_CAP_SIGNAL, .group = &fb_timecard_signal1_group }, 3911 3847 { .cap = OCP_CAP_FREQ, .group = &fb_timecard_freq0_group }, ··· 4026 3960 bp = dev_get_drvdata(dev); 4027 3961 4028 3962 seq_printf(s, "%7s: /dev/ptp%d\n", "PTP", ptp_clock_index(bp->ptp)); 4029 - if (bp->gnss_port.line != -1) 4030 - seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS1", 4031 - bp->gnss_port.line); 4032 - if (bp->gnss2_port.line != -1) 4033 - seq_printf(s, "%7s: /dev/ttyS%d\n", "GNSS2", 4034 - bp->gnss2_port.line); 4035 - if (bp->mac_port.line != -1) 4036 - seq_printf(s, "%7s: /dev/ttyS%d\n", "MAC", bp->mac_port.line); 4037 - if (bp->nmea_port.line != -1) 4038 - seq_printf(s, "%7s: /dev/ttyS%d\n", "NMEA", bp->nmea_port.line); 3963 + for (i = 0; i < __PORT_COUNT; i++) { 3964 + if (bp->port[i].line != -1) 3965 + seq_printf(s, "%7s: /dev/ttyS%d\n", ptp_ocp_tty_port_name(i), 3966 + bp->port[i].line); 3967 + } 4039 3968 4040 3969 memset(sma_val, 0xff, sizeof(sma_val)); 4041 3970 if (bp->sma_map1) { ··· 4340 4279 static int 4341 4280 ptp_ocp_device_init(struct ptp_ocp *bp, struct pci_dev *pdev) 4342 4281 { 4343 - int err; 4282 + int i, err; 4344 4283 4345 4284 mutex_lock(&ptp_ocp_lock); 4346 4285 err = idr_alloc(&ptp_ocp_idr, bp, 0, 0, GFP_KERNEL); ··· 4353 4292 4354 4293 bp->ptp_info = ptp_ocp_clock_info; 4355 4294 spin_lock_init(&bp->lock); 4356 - bp->gnss_port.line = -1; 4357 - bp->gnss2_port.line = -1; 4358 - bp->mac_port.line = -1; 4359 - bp->nmea_port.line = -1; 4295 + 4296 + for (i = 0; i < __PORT_COUNT; i++) 4297 + bp->port[i].line = -1; 4298 + 4360 4299 bp->pdev = pdev; 4361 4300 4362 4301 device_initialize(&bp->dev); ··· 4413 4352 struct pps_device *pps; 4414 4353 char buf[32]; 4415 4354 4416 - if (bp->gnss_port.line != -1) { 4417 - sprintf(buf, "ttyS%d", bp->gnss_port.line); 4418 - ptp_ocp_link_child(bp, buf, "ttyGNSS"); 4419 - } 4420 - if (bp->gnss2_port.line != -1) 
{ 4421 - sprintf(buf, "ttyS%d", bp->gnss2_port.line); 4422 - ptp_ocp_link_child(bp, buf, "ttyGNSS2"); 4423 - } 4424 - if (bp->mac_port.line != -1) { 4425 - sprintf(buf, "ttyS%d", bp->mac_port.line); 4426 - ptp_ocp_link_child(bp, buf, "ttyMAC"); 4427 - } 4428 - if (bp->nmea_port.line != -1) { 4429 - sprintf(buf, "ttyS%d", bp->nmea_port.line); 4430 - ptp_ocp_link_child(bp, buf, "ttyNMEA"); 4431 - } 4432 4355 sprintf(buf, "ptp%d", ptp_clock_index(bp->ptp)); 4433 4356 ptp_ocp_link_child(bp, buf, "ptp"); 4434 4357 ··· 4461 4416 }; 4462 4417 struct device *dev = &bp->pdev->dev; 4463 4418 u32 reg; 4419 + int i; 4464 4420 4465 4421 ptp_ocp_phc_info(bp); 4466 4422 4467 - ptp_ocp_serial_info(dev, "GNSS", bp->gnss_port.line, 4468 - bp->gnss_port.baud); 4469 - ptp_ocp_serial_info(dev, "GNSS2", bp->gnss2_port.line, 4470 - bp->gnss2_port.baud); 4471 - ptp_ocp_serial_info(dev, "MAC", bp->mac_port.line, bp->mac_port.baud); 4472 - if (bp->nmea_out && bp->nmea_port.line != -1) { 4473 - bp->nmea_port.baud = -1; 4423 + for (i = 0; i < __PORT_COUNT; i++) { 4424 + if (i == PORT_NMEA && bp->nmea_out && bp->port[PORT_NMEA].line != -1) { 4425 + bp->port[PORT_NMEA].baud = -1; 4474 4426 4475 - reg = ioread32(&bp->nmea_out->uart_baud); 4476 - if (reg < ARRAY_SIZE(nmea_baud)) 4477 - bp->nmea_port.baud = nmea_baud[reg]; 4478 - 4479 - ptp_ocp_serial_info(dev, "NMEA", bp->nmea_port.line, 4480 - bp->nmea_port.baud); 4427 + reg = ioread32(&bp->nmea_out->uart_baud); 4428 + if (reg < ARRAY_SIZE(nmea_baud)) 4429 + bp->port[PORT_NMEA].baud = nmea_baud[reg]; 4430 + } 4431 + ptp_ocp_serial_info(dev, ptp_ocp_tty_port_name(i), bp->port[i].line, 4432 + bp->port[i].baud); 4481 4433 } 4482 4434 } 4483 4435 ··· 4483 4441 { 4484 4442 struct device *dev = &bp->dev; 4485 4443 4486 - sysfs_remove_link(&dev->kobj, "ttyGNSS"); 4487 - sysfs_remove_link(&dev->kobj, "ttyGNSS2"); 4488 - sysfs_remove_link(&dev->kobj, "ttyMAC"); 4489 4444 sysfs_remove_link(&dev->kobj, "ptp"); 4490 4445 sysfs_remove_link(&dev->kobj, 
"pps"); 4491 4446 } ··· 4512 4473 for (i = 0; i < 4; i++) 4513 4474 if (bp->signal_out[i]) 4514 4475 ptp_ocp_unregister_ext(bp->signal_out[i]); 4515 - if (bp->gnss_port.line != -1) 4516 - serial8250_unregister_port(bp->gnss_port.line); 4517 - if (bp->gnss2_port.line != -1) 4518 - serial8250_unregister_port(bp->gnss2_port.line); 4519 - if (bp->mac_port.line != -1) 4520 - serial8250_unregister_port(bp->mac_port.line); 4521 - if (bp->nmea_port.line != -1) 4522 - serial8250_unregister_port(bp->nmea_port.line); 4476 + for (i = 0; i < __PORT_COUNT; i++) 4477 + if (bp->port[i].line != -1) 4478 + serial8250_unregister_port(bp->port[i].line); 4523 4479 platform_device_unregister(bp->spi_flash); 4524 4480 platform_device_unregister(bp->i2c_ctrl); 4525 4481 if (bp->i2c_clk)
+1 -1
drivers/pwm/pwm-stm32.c
··· 412 412 /* Enable channel */ 413 413 mask = TIM_CCER_CCxE(ch + 1); 414 414 if (priv->have_complementary_output) 415 - mask |= TIM_CCER_CCxNE(ch); 415 + mask |= TIM_CCER_CCxNE(ch + 1); 416 416 417 417 regmap_set_bits(priv->regmap, TIM_CCER, mask); 418 418
+1
drivers/spi/spi-bcm63xx.c
··· 472 472 { .compatible = "brcm,bcm6358-spi", .data = &bcm6358_spi_reg_offsets }, 473 473 { }, 474 474 }; 475 + MODULE_DEVICE_TABLE(of, bcm63xx_spi_of_match); 475 476 476 477 static int bcm63xx_spi_probe(struct platform_device *pdev) 477 478 {
+2 -2
drivers/spi/spi-fsl-lpspi.c
··· 136 136 }; 137 137 138 138 static struct fsl_lpspi_devtype_data imx7ulp_lpspi_devtype_data = { 139 - .prescale_max = 8, 139 + .prescale_max = 7, 140 140 }; 141 141 142 142 static const struct of_device_id fsl_lpspi_dt_ids[] = { ··· 336 336 337 337 div = DIV_ROUND_UP(perclk_rate, config.speed_hz); 338 338 339 - for (prescale = 0; prescale < prescale_max; prescale++) { 339 + for (prescale = 0; prescale <= prescale_max; prescale++) { 340 340 scldiv = div / (1 << prescale) - 2; 341 341 if (scldiv < 256) { 342 342 fsl_lpspi->config.prescale = prescale;
+3
drivers/spi/spi-intel.c
··· 1390 1390 1391 1391 pdata->name = devm_kasprintf(ispi->dev, GFP_KERNEL, "%s-chip1", 1392 1392 dev_name(ispi->dev)); 1393 + if (!pdata->name) 1394 + return -ENOMEM; 1395 + 1393 1396 pdata->nr_parts = 1; 1394 1397 parts = devm_kcalloc(ispi->dev, pdata->nr_parts, sizeof(*parts), 1395 1398 GFP_KERNEL);
+7 -16
drivers/spi/spi-rockchip.c
··· 945 945 { 946 946 int ret; 947 947 struct spi_controller *ctlr = dev_get_drvdata(dev); 948 - struct rockchip_spi *rs = spi_controller_get_devdata(ctlr); 949 948 950 949 ret = spi_controller_suspend(ctlr); 951 950 if (ret < 0) 952 951 return ret; 953 952 954 - clk_disable_unprepare(rs->spiclk); 955 - clk_disable_unprepare(rs->apb_pclk); 953 + ret = pm_runtime_force_suspend(dev); 954 + if (ret < 0) { 955 + spi_controller_resume(ctlr); 956 + return ret; 957 + } 956 958 957 959 pinctrl_pm_select_sleep_state(dev); 958 960 ··· 965 963 { 966 964 int ret; 967 965 struct spi_controller *ctlr = dev_get_drvdata(dev); 968 - struct rockchip_spi *rs = spi_controller_get_devdata(ctlr); 969 966 970 967 pinctrl_pm_select_default_state(dev); 971 968 972 - ret = clk_prepare_enable(rs->apb_pclk); 969 + ret = pm_runtime_force_resume(dev); 973 970 if (ret < 0) 974 971 return ret; 975 972 976 - ret = clk_prepare_enable(rs->spiclk); 977 - if (ret < 0) 978 - clk_disable_unprepare(rs->apb_pclk); 979 - 980 - ret = spi_controller_resume(ctlr); 981 - if (ret < 0) { 982 - clk_disable_unprepare(rs->spiclk); 983 - clk_disable_unprepare(rs->apb_pclk); 984 - } 985 - 986 - return 0; 973 + return spi_controller_resume(ctlr); 987 974 } 988 975 #endif /* CONFIG_PM_SLEEP */ 989 976
+2
drivers/spi/spidev.c
··· 702 702 static const struct spi_device_id spidev_spi_ids[] = { 703 703 { .name = "bh2228fv" }, 704 704 { .name = "dh2228fv" }, 705 + { .name = "jg10309-01" }, 705 706 { .name = "ltc2488" }, 706 707 { .name = "sx1301" }, 707 708 { .name = "bk4" }, ··· 732 731 static const struct of_device_id spidev_dt_ids[] = { 733 732 { .compatible = "cisco,spi-petra", .data = &spidev_of_check }, 734 733 { .compatible = "dh,dhcom-board", .data = &spidev_of_check }, 734 + { .compatible = "elgin,jg10309-01", .data = &spidev_of_check }, 735 735 { .compatible = "lineartechnology,ltc2488", .data = &spidev_of_check }, 736 736 { .compatible = "lwn,bk4", .data = &spidev_of_check }, 737 737 { .compatible = "menlo,m53cpld", .data = &spidev_of_check },
+1 -1
drivers/staging/iio/frequency/ad9834.c
··· 114 114 115 115 clk_freq = clk_get_rate(st->mclk); 116 116 117 - if (fout > (clk_freq / 2)) 117 + if (!clk_freq || fout > (clk_freq / 2)) 118 118 return -EINVAL; 119 119 120 120 regval = ad9834_calc_freqreg(clk_freq, fout);
+3
drivers/ufs/host/ufs-mediatek.c
··· 1026 1026 if (host->caps & UFS_MTK_CAP_DISABLE_AH8) 1027 1027 hba->caps |= UFSHCD_CAP_HIBERN8_WITH_CLK_GATING; 1028 1028 1029 + if (host->caps & UFS_MTK_CAP_DISABLE_MCQ) 1030 + hba->quirks |= UFSHCD_QUIRK_BROKEN_LSDBS_CAP; 1031 + 1029 1032 ufs_mtk_init_clocks(hba); 1030 1033 1031 1034 /*
+10 -1
drivers/uio/uio_hv_generic.c
··· 106 106 107 107 /* 108 108 * Callback from vmbus_event when channel is rescinded. 109 + * It is meant for rescind of primary channels only. 109 110 */ 110 111 static void hv_uio_rescind(struct vmbus_channel *channel) 111 112 { 112 - struct hv_device *hv_dev = channel->primary_channel->device_obj; 113 + struct hv_device *hv_dev = channel->device_obj; 113 114 struct hv_uio_private_data *pdata = hv_get_drvdata(hv_dev); 114 115 115 116 /* ··· 121 120 122 121 /* Wake up reader */ 123 122 uio_event_notify(&pdata->info); 123 + 124 + /* 125 + * With rescind callback registered, rescind path will not unregister the device 126 + * from vmbus when the primary channel is rescinded. 127 + * Without it, rescind handling is incomplete and next onoffer msg does not come. 128 + * Unregister the device from vmbus here. 129 + */ 130 + vmbus_device_unregister(channel->device_obj); 124 131 } 125 132 126 133 /* Sysfs API to allow mmap of the ring buffers
+15
drivers/usb/dwc3/core.c
··· 1387 1387 } 1388 1388 1389 1389 /* 1390 + * STAR 9001285599: This issue affects DWC_usb3 version 3.20a 1391 + * only. If the PM TIMER ECM is enabled through GUCTL2[19], the 1392 + * link compliance test (TD7.21) may fail. If the ECN is not 1393 + * enabled (GUCTL2[19] = 0), the controller will use the old timer 1394 + * value (5us), which is still acceptable for the link compliance 1395 + * test. Therefore, do not enable PM TIMER ECM in 3.20a by 1396 + * setting GUCTL2[19] by default; instead, use GUCTL2[19] = 0. 1397 + */ 1398 + if (DWC3_VER_IS(DWC3, 320A)) { 1399 + reg = dwc3_readl(dwc->regs, DWC3_GUCTL2); 1400 + reg &= ~DWC3_GUCTL2_LC_TIMER; 1401 + dwc3_writel(dwc->regs, DWC3_GUCTL2, reg); 1402 + } 1403 + 1404 + /* 1390 1405 * When configured in HOST mode, after issuing U3/L2 exit controller 1391 1406 * fails to send proper CRC checksum in CRC5 feild. Because of this 1392 1407 * behaviour Transaction Error is generated, resulting in reset and
+2
drivers/usb/dwc3/core.h
··· 421 421 422 422 /* Global User Control Register 2 */ 423 423 #define DWC3_GUCTL2_RST_ACTBITLATER BIT(14) 424 + #define DWC3_GUCTL2_LC_TIMER BIT(19) 424 425 425 426 /* Global User Control Register 3 */ 426 427 #define DWC3_GUCTL3_SPLITDISABLE BIT(14) ··· 1270 1269 #define DWC3_REVISION_290A 0x5533290a 1271 1270 #define DWC3_REVISION_300A 0x5533300a 1272 1271 #define DWC3_REVISION_310A 0x5533310a 1272 + #define DWC3_REVISION_320A 0x5533320a 1273 1273 #define DWC3_REVISION_330A 0x5533330a 1274 1274 1275 1275 #define DWC31_REVISION_ANY 0x0
+17 -24
drivers/usb/dwc3/gadget.c
··· 287 287 * 288 288 * Caller should handle locking. This function will issue @cmd with given 289 289 * @params to @dep and wait for its completion. 290 + * 291 + * According to the programming guide, if the link state is in L1/L2/U3, 292 + * then sending the Start Transfer command may not complete. The 293 + * programming guide suggested to bring the link state back to ON/U0 by 294 + * performing remote wakeup prior to sending the command. However, don't 295 + * initiate remote wakeup when the user/function does not send wakeup 296 + * request via wakeup ops. Send the command when it's allowed. 297 + * 298 + * Notes: 299 + * For L1 link state, issuing a command requires the clearing of 300 + * GUSB2PHYCFG.SUSPENDUSB2, which turns on the signal required to complete 301 + * the given command (usually within 50us). This should happen within the 302 + * command timeout set by driver. No additional step is needed. 303 + * 304 + * For L2 or U3 link state, the gadget is in USB suspend. Care should be 305 + * taken when sending Start Transfer command to ensure that it's done after 306 + * USB resume. 290 307 */ 291 308 int dwc3_send_gadget_ep_cmd(struct dwc3_ep *dep, unsigned int cmd, 292 309 struct dwc3_gadget_ep_cmd_params *params) ··· 342 325 343 326 if (saved_config) 344 327 dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg); 345 - } 346 - 347 - if (DWC3_DEPCMD_CMD(cmd) == DWC3_DEPCMD_STARTTRANSFER) { 348 - int link_state; 349 - 350 - /* 351 - * Initiate remote wakeup if the link state is in U3 when 352 - * operating in SS/SSP or L1/L2 when operating in HS/FS. If the 353 - * link state is in U1/U2, no remote wakeup is needed. The Start 354 - * Transfer command will initiate the link recovery. 
355 - */ 356 - link_state = dwc3_gadget_get_link_state(dwc); 357 - switch (link_state) { 358 - case DWC3_LINK_STATE_U2: 359 - if (dwc->gadget->speed >= USB_SPEED_SUPER) 360 - break; 361 - 362 - fallthrough; 363 - case DWC3_LINK_STATE_U3: 364 - ret = __dwc3_gadget_wakeup(dwc, false); 365 - dev_WARN_ONCE(dwc->dev, ret, "wakeup failed --> %d\n", 366 - ret); 367 - break; 368 - } 369 328 } 370 329 371 330 /*
+3 -9
drivers/usb/gadget/udc/cdns2/cdns2-gadget.c
··· 2251 2251 { 2252 2252 u32 max_speed; 2253 2253 void *buf; 2254 - int val; 2255 2254 int ret; 2256 2255 2257 2256 pdev->usb_regs = pdev->regs; ··· 2260 2261 pdev->adma_regs = pdev->regs + CDNS2_ADMA_REGS_OFFSET; 2261 2262 2262 2263 /* Reset controller. */ 2263 - set_reg_bit_8(&pdev->usb_regs->cpuctrl, CPUCTRL_SW_RST); 2264 - 2265 - ret = readl_poll_timeout_atomic(&pdev->usb_regs->cpuctrl, val, 2266 - !(val & CPUCTRL_SW_RST), 1, 10000); 2267 - if (ret) { 2268 - dev_err(pdev->dev, "Error: reset controller timeout\n"); 2269 - return -EINVAL; 2270 - } 2264 + writeb(CPUCTRL_SW_RST | CPUCTRL_UPCLK | CPUCTRL_WUEN, 2265 + &pdev->usb_regs->cpuctrl); 2266 + usleep_range(5, 10); 2271 2267 2272 2268 usb_initialize_gadget(pdev->dev, &pdev->gadget, NULL); 2273 2269
+9
drivers/usb/gadget/udc/cdns2/cdns2-gadget.h
··· 292 292 #define SPEEDCTRL_HSDISABLE BIT(7) 293 293 294 294 /* CPUCTRL- bitmasks. */ 295 + /* UP clock enable */ 296 + #define CPUCTRL_UPCLK BIT(0) 295 297 /* Controller reset bit. */ 296 298 #define CPUCTRL_SW_RST BIT(1) 299 + /** 300 + * If the wuen bit is ‘1’, the upclken is automatically set to ‘1’ after 301 + * detecting rising edge of wuintereq interrupt. If the wuen bit is ‘0’, 302 + * the wuintereq interrupt is ignored. 303 + */ 304 + #define CPUCTRL_WUEN BIT(7) 305 + 297 306 298 307 /** 299 308 * struct cdns2_adma_regs - ADMA controller registers.
+41 -39
drivers/usb/typec/ucsi/ucsi.c
··· 911 911 912 912 static int ucsi_register_cable(struct ucsi_connector *con) 913 913 { 914 + struct ucsi_cable_property cable_prop; 914 915 struct typec_cable *cable; 915 916 struct typec_cable_desc desc = {}; 917 + u64 command; 918 + int ret; 916 919 917 - switch (UCSI_CABLE_PROP_FLAG_PLUG_TYPE(con->cable_prop.flags)) { 920 + command = UCSI_GET_CABLE_PROPERTY | UCSI_CONNECTOR_NUMBER(con->num); 921 + ret = ucsi_send_command(con->ucsi, command, &cable_prop, sizeof(cable_prop)); 922 + if (ret < 0) { 923 + dev_err(con->ucsi->dev, "GET_CABLE_PROPERTY failed (%d)\n", ret); 924 + return ret; 925 + } 926 + 927 + switch (UCSI_CABLE_PROP_FLAG_PLUG_TYPE(cable_prop.flags)) { 918 928 case UCSI_CABLE_PROPERTY_PLUG_TYPE_A: 919 929 desc.type = USB_PLUG_TYPE_A; 920 930 break; ··· 941 931 942 932 if (con->ucsi->cap.features & UCSI_CAP_GET_PD_MESSAGE) 943 933 desc.identity = &con->cable_identity; 944 - desc.active = !!(UCSI_CABLE_PROP_FLAG_ACTIVE_CABLE & 945 - con->cable_prop.flags); 946 - desc.pd_revision = UCSI_CABLE_PROP_FLAG_PD_MAJOR_REV_AS_BCD( 947 - con->cable_prop.flags); 934 + desc.active = !!(UCSI_CABLE_PROP_FLAG_ACTIVE_CABLE & cable_prop.flags); 935 + 936 + if (con->ucsi->version >= UCSI_VERSION_2_1) 937 + desc.pd_revision = UCSI_CABLE_PROP_FLAG_PD_MAJOR_REV_AS_BCD(cable_prop.flags); 948 938 949 939 cable = typec_register_cable(con->port, &desc); 950 940 if (IS_ERR(cable)) { ··· 969 959 con->cable = NULL; 970 960 } 971 961 962 + static int ucsi_check_connector_capability(struct ucsi_connector *con) 963 + { 964 + u64 command; 965 + int ret; 966 + 967 + if (!con->partner || con->ucsi->version < UCSI_VERSION_2_1) 968 + return 0; 969 + 970 + command = UCSI_GET_CONNECTOR_CAPABILITY | UCSI_CONNECTOR_NUMBER(con->num); 971 + ret = ucsi_send_command(con->ucsi, command, &con->cap, sizeof(con->cap)); 972 + if (ret < 0) { 973 + dev_err(con->ucsi->dev, "GET_CONNECTOR_CAPABILITY failed (%d)\n", ret); 974 + return ret; 975 + } 976 + 977 + typec_partner_set_pd_revision(con->partner, 978 + UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV_AS_BCD(con->cap.flags)); 979 980 + return ret; 981 + } 982 + 972 983 static void ucsi_pwr_opmode_change(struct ucsi_connector *con) 973 984 { 974 985 switch (UCSI_CONSTAT_PWR_OPMODE(con->status.flags)) { ··· 999 968 ucsi_partner_task(con, ucsi_get_src_pdos, 30, 0); 1000 969 ucsi_partner_task(con, ucsi_check_altmodes, 30, HZ); 1001 970 ucsi_partner_task(con, ucsi_register_partner_pdos, 1, HZ); 971 + ucsi_partner_task(con, ucsi_check_connector_capability, 1, HZ); 1002 972 break; 1003 973 case UCSI_CONSTAT_PWR_OPMODE_TYPEC1_5: 1004 974 con->rdo = 0; ··· 1044 1012 if (con->ucsi->cap.features & UCSI_CAP_GET_PD_MESSAGE) 1045 1013 desc.identity = &con->partner_identity; 1046 1014 desc.usb_pd = pwr_opmode == UCSI_CONSTAT_PWR_OPMODE_PD; 1047 - desc.pd_revision = UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV_AS_BCD(con->cap.flags); 1048 1015 1049 1016 partner = typec_register_partner(con->port, &desc); 1050 1017 if (IS_ERR(partner)) { ··· 1120 1089 con->num, u_role); 1121 1090 } 1122 1091 1123 - static int ucsi_check_connector_capability(struct ucsi_connector *con) 1124 - { 1125 - u64 command; 1126 - int ret; 1127 - 1128 - if (!con->partner || con->ucsi->version < UCSI_VERSION_2_0) 1129 - return 0; 1130 - 1131 - command = UCSI_GET_CONNECTOR_CAPABILITY | UCSI_CONNECTOR_NUMBER(con->num); 1132 - ret = ucsi_send_command(con->ucsi, command, &con->cap, sizeof(con->cap)); 1133 - if (ret < 0) { 1134 - dev_err(con->ucsi->dev, "GET_CONNECTOR_CAPABILITY failed (%d)\n", ret); 1135 - return ret; 1136 - } 1137 - 1138 - typec_partner_set_pd_revision(con->partner, 1139 - UCSI_CONCAP_FLAG_PARTNER_PD_MAJOR_REV_AS_BCD(con->cap.flags)); 1140 - 1141 - return ret; 1142 - } 1143 - 1144 1092 static int ucsi_check_connection(struct ucsi_connector *con) 1145 1093 { 1146 1094 u8 prev_flags = con->status.flags; ··· 1151 1141 1152 1142 static int ucsi_check_cable(struct ucsi_connector *con) 1153 1143 { 1154 - u64 command; 1155 1144 int ret, num_plug_am; 1156 1145 1157 1146 if (con->cable) 1158 1147 return 0; 1159 - 1160 - command = UCSI_GET_CABLE_PROPERTY | UCSI_CONNECTOR_NUMBER(con->num); 1161 - ret = ucsi_send_command(con->ucsi, command, &con->cable_prop, 1162 - sizeof(con->cable_prop)); 1163 - if (ret < 0) { 1164 - dev_err(con->ucsi->dev, "GET_CABLE_PROPERTY failed (%d)\n", 1165 - ret); 1166 - return ret; 1167 - } 1168 1148 1169 1149 ret = ucsi_register_cable(con); 1170 1150 if (ret < 0) ··· 1231 1231 if (con->status.flags & UCSI_CONSTAT_CONNECTED) { 1232 1232 ucsi_register_partner(con); 1233 1233 ucsi_partner_task(con, ucsi_check_connection, 1, HZ); 1234 - ucsi_partner_task(con, ucsi_check_connector_capability, 1, HZ); 1235 1234 if (con->ucsi->cap.features & UCSI_CAP_GET_PD_MESSAGE) 1236 1235 ucsi_partner_task(con, ucsi_get_partner_identity, 1, HZ); 1237 1236 if (con->ucsi->cap.features & UCSI_CAP_CABLE_DETAILS) 1238 1237 ucsi_partner_task(con, ucsi_check_cable, 1, HZ); 1239 1238 1240 1239 if (UCSI_CONSTAT_PWR_OPMODE(con->status.flags) == 1241 - UCSI_CONSTAT_PWR_OPMODE_PD) 1240 + UCSI_CONSTAT_PWR_OPMODE_PD) { 1242 1241 ucsi_partner_task(con, ucsi_register_partner_pdos, 1, HZ); 1242 + ucsi_partner_task(con, ucsi_check_connector_capability, 1, HZ); 1243 + } 1243 1244 } else { 1244 1245 ucsi_unregister_partner(con); 1245 1246 } ··· 1669 1668 ucsi_register_device_pdos(con); 1670 1669 ucsi_get_src_pdos(con); 1671 1670 ucsi_check_altmodes(con); 1671 + ucsi_check_connector_capability(con); 1672 1672 } 1673 1673 1674 1674 trace_ucsi_register_port(con->num, &con->status);
-1
drivers/usb/typec/ucsi/ucsi.h
··· 435 435 436 436 struct ucsi_connector_status status; 437 437 struct ucsi_connector_capability cap; 438 - struct ucsi_cable_property cable_prop; 439 438 struct power_supply *psy; 440 439 struct power_supply_desc psy_desc; 441 440 u32 rdo;
+1 -1
fs/bcachefs/buckets.c
··· 876 876 need_rebalance_delta -= s != 0; 877 877 need_rebalance_sectors_delta -= s; 878 878 879 - s = bch2_bkey_sectors_need_rebalance(c, old); 879 + s = bch2_bkey_sectors_need_rebalance(c, new.s_c); 880 880 need_rebalance_delta += s != 0; 881 881 need_rebalance_sectors_delta += s; 882 882
+2 -1
fs/bcachefs/replicas.c
··· 82 82 } 83 83 84 84 for (unsigned i = 0; i < r->nr_devs; i++) 85 - if (!bch2_member_exists(sb, r->devs[i])) { 85 + if (r->devs[i] != BCH_SB_MEMBER_INVALID && 86 + !bch2_member_exists(sb, r->devs[i])) { 86 87 prt_printf(err, "invalid device %u in entry ", r->devs[i]); 87 88 goto bad; 88 89 }
+2 -1
fs/bcachefs/sb-members.c
··· 11 11 12 12 void bch2_dev_missing(struct bch_fs *c, unsigned dev) 13 13 { 14 - bch2_fs_inconsistent(c, "pointer to nonexistent device %u", dev); 14 + if (dev != BCH_SB_MEMBER_INVALID) 15 + bch2_fs_inconsistent(c, "pointer to nonexistent device %u", dev); 15 16 } 16 17 17 18 void bch2_dev_bucket_missing(struct bch_fs *c, struct bpos bucket)
+5
fs/bcachefs/sb-members_format.h
··· 8 8 */ 9 9 #define BCH_SB_MEMBERS_MAX 64 10 10 11 + /* 12 + * Sentinel value - indicates a device that does not exist 13 + */ 14 + #define BCH_SB_MEMBER_INVALID 255 15 + 11 16 #define BCH_MIN_NR_NBUCKETS (1 << 6) 12 17 13 18 #define BCH_IOPS_MEASUREMENTS() \
-1
fs/btrfs/ctree.h
··· 459 459 void *filldir_buf; 460 460 u64 last_index; 461 461 struct extent_state *llseek_cached_state; 462 - bool fsync_skip_inode_lock; 463 462 }; 464 463 465 464 static inline u32 BTRFS_LEAF_DATA_SIZE(const struct btrfs_fs_info *info)
+3 -13
fs/btrfs/direct-io.c
··· 864 864 if (IS_ERR_OR_NULL(dio)) { 865 865 ret = PTR_ERR_OR_ZERO(dio); 866 866 } else { 867 - struct btrfs_file_private stack_private = { 0 }; 868 - struct btrfs_file_private *private; 869 - const bool have_private = (file->private_data != NULL); 870 - 871 - if (!have_private) 872 - file->private_data = &stack_private; 873 - 874 867 /* 875 868 * If we have a synchronous write, we must make sure the fsync 876 869 * triggered by the iomap_dio_complete() call below doesn't ··· 872 879 * partial writes due to the input buffer (or parts of it) not 873 880 * being already faulted in. 874 881 */ 875 - private = file->private_data; 876 - private->fsync_skip_inode_lock = true; 882 + ASSERT(current->journal_info == NULL); 883 + current->journal_info = BTRFS_TRANS_DIO_WRITE_STUB; 877 884 ret = iomap_dio_complete(dio); 878 - private->fsync_skip_inode_lock = false; 879 - 880 - if (!have_private) 881 - file->private_data = NULL; 885 + current->journal_info = NULL; 882 886 } 883 887 884 888 /* No increment (+=) because iomap returns a cumulative value. */
+7 -2
fs/btrfs/file.c
··· 1603 1603 */ 1604 1604 int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync) 1605 1605 { 1606 - struct btrfs_file_private *private = file->private_data; 1607 1606 struct dentry *dentry = file_dentry(file); 1608 1607 struct btrfs_inode *inode = BTRFS_I(d_inode(dentry)); 1609 1608 struct btrfs_root *root = inode->root; ··· 1612 1613 int ret = 0, err; 1613 1614 u64 len; 1614 1615 bool full_sync; 1615 - const bool skip_ilock = (private ? private->fsync_skip_inode_lock : false); 1616 + bool skip_ilock = false; 1617 + 1618 + if (current->journal_info == BTRFS_TRANS_DIO_WRITE_STUB) { 1619 + skip_ilock = true; 1620 + current->journal_info = NULL; 1621 + lockdep_assert_held(&inode->vfs_inode.i_rwsem); 1622 + } 1616 1623 1617 1624 trace_btrfs_sync_file(file, datasync); 1618 1625
+1 -2
fs/btrfs/qgroup.c
··· 4346 4346 int ret; 4347 4347 4348 4348 if (btrfs_qgroup_mode(inode->root->fs_info) == BTRFS_QGROUP_MODE_DISABLED) { 4349 - extent_changeset_init(&changeset); 4350 4349 return clear_record_extent_bits(&inode->io_tree, start, 4351 4350 start + len - 1, 4352 - EXTENT_QGROUP_RESERVED, &changeset); 4351 + EXTENT_QGROUP_RESERVED, NULL); 4353 4352 } 4354 4353 4355 4354 /* In release case, we shouldn't have @reserved */
+6
fs/btrfs/transaction.h
··· 27 27 struct btrfs_root; 28 28 struct btrfs_path; 29 29 30 + /* 31 + * Signal that a direct IO write is in progress, to avoid deadlock for sync 32 + * direct IO writes when fsync is called during the direct IO write path. 33 + */ 34 + #define BTRFS_TRANS_DIO_WRITE_STUB ((void *) 1) 35 + 30 36 /* Radix-tree tag for roots that are part of the trasaction. */ 31 37 #define BTRFS_ROOT_TRANS_TAG 0 32 38
+25 -5
fs/btrfs/zoned.c
··· 1406 1406 return -EINVAL; 1407 1407 } 1408 1408 1409 + bg->zone_capacity = min_not_zero(zone_info[0].capacity, zone_info[1].capacity); 1410 + 1409 1411 if (zone_info[0].alloc_offset == WP_MISSING_DEV) { 1410 1412 btrfs_err(bg->fs_info, 1411 1413 "zoned: cannot recover write pointer for zone %llu", ··· 1434 1432 } 1435 1433 1436 1434 bg->alloc_offset = zone_info[0].alloc_offset; 1437 - bg->zone_capacity = min(zone_info[0].capacity, zone_info[1].capacity); 1438 1435 return 0; 1439 1436 } 1440 1437 ··· 1450 1449 btrfs_bg_type_to_raid_name(map->type)); 1451 1450 return -EINVAL; 1452 1451 } 1452 + 1453 + /* In case a device is missing we have a cap of 0, so don't use it. */ 1454 + bg->zone_capacity = min_not_zero(zone_info[0].capacity, zone_info[1].capacity); 1453 1455 1454 1456 for (i = 0; i < map->num_stripes; i++) { 1455 1457 if (zone_info[i].alloc_offset == WP_MISSING_DEV || ··· 1475 1471 if (test_bit(0, active)) 1476 1472 set_bit(BLOCK_GROUP_FLAG_ZONE_IS_ACTIVE, &bg->runtime_flags); 1477 1473 } 1478 - /* In case a device is missing we have a cap of 0, so don't use it. */ 1479 - bg->zone_capacity = min_not_zero(zone_info[0].capacity, 1480 - zone_info[1].capacity); 1481 1474 } 1482 1475 1483 1476 if (zone_info[0].alloc_offset != WP_MISSING_DEV) ··· 1564 1563 unsigned long *active = NULL; 1565 1564 u64 last_alloc = 0; 1566 1565 u32 num_sequential = 0, num_conventional = 0; 1566 + u64 profile; 1567 1567 1568 1568 if (!btrfs_is_zoned(fs_info)) 1569 1569 return 0; ··· 1625 1623 } 1626 1624 } 1627 1625 1628 - switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) { 1626 + profile = map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK; 1627 + switch (profile) { 1629 1628 case 0: /* single */ 1630 1629 ret = btrfs_load_block_group_single(cache, &zone_info[0], active); 1631 1630 break; ··· 1651 1648 btrfs_bg_type_to_raid_name(map->type)); 1652 1649 ret = -EINVAL; 1653 1650 goto out; 1651 + } 1652 + 1653 + if (ret == -EIO && profile != 0 && profile != BTRFS_BLOCK_GROUP_RAID0 && 1654 + profile != BTRFS_BLOCK_GROUP_RAID10) { 1655 + /* 1656 + * Detected broken write pointer. Make this block group 1657 + * unallocatable by setting the allocation pointer at the end of 1658 + * allocatable region. Relocating this block group will fix the 1659 + * mismatch. 1660 + * 1661 + * Currently, we cannot handle RAID0 or RAID10 case like this 1662 + * because we don't have a proper zone_capacity value. But, 1663 + * reading from this block group won't work anyway by a missing 1664 + * stripe. 1665 + */ 1666 + cache->alloc_offset = cache->zone_capacity; 1667 + ret = 0; 1654 1668 } 1655 1669 1656 1670 out:
+10 -4
fs/fuse/dev.c
··· 31 31 32 32 static struct kmem_cache *fuse_req_cachep; 33 33 34 + static void end_requests(struct list_head *head); 35 + 34 36 static struct fuse_dev *fuse_get_dev(struct file *file) 35 37 { 36 38 /* ··· 775 773 (folio->flags & PAGE_FLAGS_CHECK_AT_PREP & 776 774 ~(1 << PG_locked | 777 775 1 << PG_referenced | 778 - 1 << PG_uptodate | 779 776 1 << PG_lru | 780 777 1 << PG_active | 781 778 1 << PG_workingset | ··· 819 818 820 819 newfolio = page_folio(buf->page); 821 820 822 - if (!folio_test_uptodate(newfolio)) 823 - folio_mark_uptodate(newfolio); 824 - 821 + folio_clear_uptodate(newfolio); 825 822 folio_clear_mappedtodisk(newfolio); 826 823 827 824 if (fuse_check_folio(newfolio) != 0) ··· 1821 1822 } 1822 1823 1823 1824 spin_lock(&fiq->lock); 1825 + if (!fiq->connected) { 1826 + spin_unlock(&fiq->lock); 1827 + list_for_each_entry(req, &to_queue, list) 1828 + clear_bit(FR_PENDING, &req->flags); 1829 + end_requests(&to_queue); 1830 + return; 1831 + } 1824 1832 /* iq and pq requests are both oldest to newest */ 1825 1833 list_splice(&to_queue, &fiq->pending); 1826 1834 fiq->ops->wake_pending_and_unlock(fiq);
+1 -1
fs/fuse/dir.c
··· 670 670 671 671 err = get_create_ext(&args, dir, entry, mode); 672 672 if (err) 673 - goto out_put_forget_req; 673 + goto out_free_ff; 674 674 675 675 err = fuse_simple_request(fm, &args); 676 676 free_ext_value(&args);
+7 -1
fs/fuse/file.c
··· 1832 1832 fuse_writepage_finish(fm, wpa); 1833 1833 spin_unlock(&fi->lock); 1834 1834 1835 - /* After fuse_writepage_finish() aux request list is private */ 1835 + /* After rb_erase() aux request list is private */ 1836 1836 for (aux = wpa->next; aux; aux = next) { 1837 + struct backing_dev_info *bdi = inode_to_bdi(aux->inode); 1838 + 1837 1839 next = aux->next; 1838 1840 aux->next = NULL; 1841 + 1842 + dec_wb_stat(&bdi->wb, WB_WRITEBACK); 1843 + dec_node_page_state(aux->ia.ap.pages[0], NR_WRITEBACK_TEMP); 1844 + wb_writeout_inc(&bdi->wb); 1839 1845 fuse_writepage_free(aux); 1840 1846 } 1841 1847
+6 -1
fs/fuse/inode.c
··· 1332 1332 * on a stacked fs (e.g. overlayfs) themselves and with 1333 1333 * max_stack_depth == 1, FUSE fs can be stacked as the 1334 1334 * underlying fs of a stacked fs (e.g. overlayfs). 1335 + * 1336 + * Also don't allow the combination of FUSE_PASSTHROUGH 1337 + * and FUSE_WRITEBACK_CACHE, current design doesn't handle 1338 + * them together. 1335 1339 */ 1336 1340 if (IS_ENABLED(CONFIG_FUSE_PASSTHROUGH) && 1337 1341 (flags & FUSE_PASSTHROUGH) && 1338 1342 arg->max_stack_depth > 0 && 1339 - arg->max_stack_depth <= FILESYSTEM_MAX_STACK_DEPTH) { 1343 + arg->max_stack_depth <= FILESYSTEM_MAX_STACK_DEPTH && 1344 + !(flags & FUSE_WRITEBACK_CACHE)) { 1340 1345 fc->passthrough = 1; 1341 1346 fc->max_stack_depth = arg->max_stack_depth; 1342 1347 fm->sb->s_stack_depth = arg->max_stack_depth;
+2 -2
fs/fuse/xattr.c
··· 81 81 } 82 82 ret = fuse_simple_request(fm, &args); 83 83 if (!ret && !size) 84 - ret = min_t(ssize_t, outarg.size, XATTR_SIZE_MAX); 84 + ret = min_t(size_t, outarg.size, XATTR_SIZE_MAX); 85 85 if (ret == -ENOSYS) { 86 86 fm->fc->no_getxattr = 1; 87 87 ret = -EOPNOTSUPP; ··· 143 143 } 144 144 ret = fuse_simple_request(fm, &args); 145 145 if (!ret && !size) 146 - ret = min_t(ssize_t, outarg.size, XATTR_LIST_MAX); 146 + ret = min_t(size_t, outarg.size, XATTR_LIST_MAX); 147 147 if (ret > 0 && size) 148 148 ret = fuse_verify_xattr_list(list, ret); 149 149 if (ret == -ENOSYS) {
+3 -3
fs/libfs.c
··· 2117 2117 } 2118 2118 EXPORT_SYMBOL(simple_inode_init_ts); 2119 2119 2120 - static inline struct dentry *get_stashed_dentry(struct dentry *stashed) 2120 + static inline struct dentry *get_stashed_dentry(struct dentry **stashed) 2121 2121 { 2122 2122 struct dentry *dentry; 2123 2123 2124 2124 guard(rcu)(); 2125 - dentry = READ_ONCE(stashed); 2125 + dentry = rcu_dereference(*stashed); 2126 2126 if (!dentry) 2127 2127 return NULL; 2128 2128 if (!lockref_get_not_dead(&dentry->d_lockref)) ··· 2219 2219 const struct stashed_operations *sops = mnt->mnt_sb->s_fs_info; 2220 2220 2221 2221 /* See if dentry can be reused. */ 2222 - path->dentry = get_stashed_dentry(*stashed); 2222 + path->dentry = get_stashed_dentry(stashed); 2223 2223 if (path->dentry) { 2224 2224 sops->put_data(data); 2225 2225 goto out_path;
+1
fs/netfs/fscache_main.c
··· 103 103 104 104 kmem_cache_destroy(fscache_cookie_jar); 105 105 fscache_proc_cleanup(); 106 + timer_shutdown_sync(&fscache_cookie_lru_timer); 106 107 destroy_workqueue(fscache_wq); 107 108 pr_notice("FS-Cache unloaded\n"); 108 109 }
+1 -1
fs/netfs/io.c
··· 270 270 if (count == remaining) 271 271 return; 272 272 273 - _debug("R=%08x[%u] ITER RESUB-MISMATCH %zx != %zx-%zx-%llx %x\n", 273 + _debug("R=%08x[%u] ITER RESUB-MISMATCH %zx != %zx-%zx-%llx %x", 274 274 rreq->debug_id, subreq->debug_index, 275 275 iov_iter_count(&subreq->io_iter), subreq->transferred, 276 276 subreq->len, rreq->i_size,
+33 -2
fs/nilfs2/recovery.c
··· 716 716 } 717 717 718 718 /** 719 + * nilfs_abort_roll_forward - cleaning up after a failed rollforward recovery 720 + * @nilfs: nilfs object 721 + */ 722 + static void nilfs_abort_roll_forward(struct the_nilfs *nilfs) 723 + { 724 + struct nilfs_inode_info *ii, *n; 725 + LIST_HEAD(head); 726 + 727 + /* Abandon inodes that have read recovery data */ 728 + spin_lock(&nilfs->ns_inode_lock); 729 + list_splice_init(&nilfs->ns_dirty_files, &head); 730 + spin_unlock(&nilfs->ns_inode_lock); 731 + if (list_empty(&head)) 732 + return; 733 + 734 + set_nilfs_purging(nilfs); 735 + list_for_each_entry_safe(ii, n, &head, i_dirty) { 736 + spin_lock(&nilfs->ns_inode_lock); 737 + list_del_init(&ii->i_dirty); 738 + spin_unlock(&nilfs->ns_inode_lock); 739 + 740 + iput(&ii->vfs_inode); 741 + } 742 + clear_nilfs_purging(nilfs); 743 + } 744 + 745 + /** 719 746 * nilfs_salvage_orphan_logs - salvage logs written after the latest checkpoint 720 747 * @nilfs: nilfs object 721 748 * @sb: super block instance ··· 800 773 if (unlikely(err)) { 801 774 nilfs_err(sb, "error %d writing segment for recovery", 802 775 err); 803 - goto failed; 776 + goto put_root; 804 777 } 805 778 806 779 nilfs_finish_roll_forward(nilfs, ri); 807 780 } 808 781 809 - failed: 782 + put_root: 810 783 nilfs_put_root(root); 811 784 return err; 785 + 786 + failed: 787 + nilfs_abort_roll_forward(nilfs); 788 + goto put_root; 812 789 } 813 790 814 791 /**
+6 -4
fs/nilfs2/segment.c
··· 1812 1812 nilfs_abort_logs(&logs, ret ? : err); 1813 1813 1814 1814 list_splice_tail_init(&sci->sc_segbufs, &logs); 1815 + if (list_empty(&logs)) 1816 + return; /* if the first segment buffer preparation failed */ 1817 + 1815 1818 nilfs_cancel_segusage(&logs, nilfs->ns_sufile); 1816 1819 nilfs_free_incomplete_logs(&logs, nilfs); 1817 1820 ··· 2059 2056 2060 2057 err = nilfs_segctor_begin_construction(sci, nilfs); 2061 2058 if (unlikely(err)) 2062 - goto out; 2059 + goto failed; 2063 2060 2064 2061 /* Update time stamp */ 2065 2062 sci->sc_seg_ctime = ktime_get_real_seconds(); ··· 2123 2120 return err; 2124 2121 2125 2122 failed_to_write: 2126 - if (sci->sc_stage.flags & NILFS_CF_IFILE_STARTED) 2127 - nilfs_redirty_inodes(&sci->sc_dirty_files); 2128 - 2129 2123 failed: 2124 + if (mode == SC_LSEG_SR && nilfs_sc_cstage_get(sci) >= NILFS_ST_IFILE) 2125 + nilfs_redirty_inodes(&sci->sc_dirty_files); 2130 2126 if (nilfs_doing_gc()) 2131 2127 nilfs_redirty_inodes(&sci->sc_gc_inodes); 2132 2128 nilfs_segctor_abort_construction(sci, nilfs, err);
+33 -10
fs/nilfs2/sysfs.c
··· 836 836 struct the_nilfs *nilfs, 837 837 char *buf) 838 838 { 839 - struct nilfs_super_block **sbp = nilfs->ns_sbp; 840 - u32 major = le32_to_cpu(sbp[0]->s_rev_level); 841 - u16 minor = le16_to_cpu(sbp[0]->s_minor_rev_level); 839 + struct nilfs_super_block *raw_sb; 840 + u32 major; 841 + u16 minor; 842 + 843 + down_read(&nilfs->ns_sem); 844 + raw_sb = nilfs->ns_sbp[0]; 845 + major = le32_to_cpu(raw_sb->s_rev_level); 846 + minor = le16_to_cpu(raw_sb->s_minor_rev_level); 847 + up_read(&nilfs->ns_sem); 842 848 843 849 return sysfs_emit(buf, "%d.%d\n", major, minor); 844 850 } ··· 862 856 struct the_nilfs *nilfs, 863 857 char *buf) 864 858 { 865 - struct nilfs_super_block **sbp = nilfs->ns_sbp; 866 - u64 dev_size = le64_to_cpu(sbp[0]->s_dev_size); 859 + struct nilfs_super_block *raw_sb; 860 + u64 dev_size; 861 + 862 + down_read(&nilfs->ns_sem); 863 + raw_sb = nilfs->ns_sbp[0]; 864 + dev_size = le64_to_cpu(raw_sb->s_dev_size); 865 + up_read(&nilfs->ns_sem); 867 866 868 867 return sysfs_emit(buf, "%llu\n", dev_size); 869 868 } ··· 890 879 struct the_nilfs *nilfs, 891 880 char *buf) 892 881 { 893 - struct nilfs_super_block **sbp = nilfs->ns_sbp; 882 + struct nilfs_super_block *raw_sb; 883 + ssize_t len; 894 884 895 - return sysfs_emit(buf, "%pUb\n", sbp[0]->s_uuid); 885 + down_read(&nilfs->ns_sem); 886 + raw_sb = nilfs->ns_sbp[0]; 887 + len = sysfs_emit(buf, "%pUb\n", raw_sb->s_uuid); 888 + up_read(&nilfs->ns_sem); 889 + 890 + return len; 896 891 } 897 892 898 893 static ··· 906 889 struct the_nilfs *nilfs, 907 890 char *buf) 908 891 { 909 - struct nilfs_super_block **sbp = nilfs->ns_sbp; 892 + struct nilfs_super_block *raw_sb; 893 + ssize_t len; 910 894 911 - return scnprintf(buf, sizeof(sbp[0]->s_volume_name), "%s\n", 912 - sbp[0]->s_volume_name); 895 + down_read(&nilfs->ns_sem); 896 + raw_sb = nilfs->ns_sbp[0]; 897 + len = scnprintf(buf, sizeof(raw_sb->s_volume_name), "%s\n", 898 + raw_sb->s_volume_name); 899 + up_read(&nilfs->ns_sem); 900 + 901 + return len; 913 902 } 914 903 915 904 static const char dev_readme_str[] =
+46 -8
fs/smb/client/cifssmb.c
··· 1261 1261 return rc; 1262 1262 } 1263 1263 1264 + static void cifs_readv_worker(struct work_struct *work) 1265 + { 1266 + struct cifs_io_subrequest *rdata = 1267 + container_of(work, struct cifs_io_subrequest, subreq.work); 1268 + 1269 + netfs_subreq_terminated(&rdata->subreq, 1270 + (rdata->result == 0 || rdata->result == -EAGAIN) ? 1271 + rdata->got_bytes : rdata->result, true); 1272 + } 1273 + 1264 1274 static void 1265 1275 cifs_readv_callback(struct mid_q_entry *mid) 1266 1276 { 1267 1277 struct cifs_io_subrequest *rdata = mid->callback_data; 1278 + struct netfs_inode *ictx = netfs_inode(rdata->rreq->inode); 1268 1279 struct cifs_tcon *tcon = tlink_tcon(rdata->req->cfile->tlink); 1269 1280 struct TCP_Server_Info *server = tcon->ses->server; 1270 1281 struct smb_rqst rqst = { .rq_iov = rdata->iov, 1271 1282 .rq_nvec = 2, 1272 1283 .rq_iter = rdata->subreq.io_iter }; 1273 - struct cifs_credits credits = { .value = 1, .instance = 0 }; 1284 + struct cifs_credits credits = { 1285 + .value = 1, 1286 + .instance = 0, 1287 + .rreq_debug_id = rdata->rreq->debug_id, 1288 + .rreq_debug_index = rdata->subreq.debug_index, 1289 + }; 1274 1290 1275 1291 cifs_dbg(FYI, "%s: mid=%llu state=%d result=%d bytes=%zu\n", 1276 1292 __func__, mid->mid, mid->mid_state, rdata->result, ··· 1298 1282 if (server->sign) { 1299 1283 int rc = 0; 1300 1284 1285 + iov_iter_truncate(&rqst.rq_iter, rdata->got_bytes); 1301 1286 rc = cifs_verify_signature(&rqst, server, 1302 1287 mid->sequence_number); 1303 1288 if (rc) ··· 1323 1306 rdata->result = -EIO; 1324 1307 } 1325 1308 1326 - if (rdata->result == 0 || rdata->result == -EAGAIN) 1327 - iov_iter_advance(&rdata->subreq.io_iter, rdata->got_bytes); 1309 + if (rdata->result == -ENODATA) { 1310 + __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1311 + rdata->result = 0; 1312 + } else { 1313 + if (rdata->got_bytes < rdata->actual_len && 1314 + rdata->subreq.start + rdata->subreq.transferred + rdata->got_bytes == 1315 + ictx->remote_i_size) { 
1316 + __set_bit(NETFS_SREQ_HIT_EOF, &rdata->subreq.flags); 1317 + rdata->result = 0; 1318 + } 1319 + } 1320 + 1328 1321 rdata->credits.value = 0; 1329 - netfs_subreq_terminated(&rdata->subreq, 1330 - (rdata->result == 0 || rdata->result == -EAGAIN) ? 1331 - rdata->got_bytes : rdata->result, 1332 - false); 1322 + INIT_WORK(&rdata->subreq.work, cifs_readv_worker); 1323 + queue_work(cifsiod_wq, &rdata->subreq.work); 1333 1324 release_mid(mid); 1334 1325 add_credits(server, &credits, 0); 1335 1326 } ··· 1644 1619 cifs_writev_callback(struct mid_q_entry *mid) 1645 1620 { 1646 1621 struct cifs_io_subrequest *wdata = mid->callback_data; 1622 + struct TCP_Server_Info *server = wdata->server; 1647 1623 struct cifs_tcon *tcon = tlink_tcon(wdata->req->cfile->tlink); 1648 1624 WRITE_RSP *smb = (WRITE_RSP *)mid->resp_buf; 1649 - struct cifs_credits credits = { .value = 1, .instance = 0 }; 1625 + struct cifs_credits credits = { 1626 + .value = 1, 1627 + .instance = 0, 1628 + .rreq_debug_id = wdata->rreq->debug_id, 1629 + .rreq_debug_index = wdata->subreq.debug_index, 1630 + }; 1650 1631 ssize_t result; 1651 1632 size_t written; 1652 1633 ··· 1688 1657 break; 1689 1658 } 1690 1659 1660 + trace_smb3_rw_credits(credits.rreq_debug_id, credits.rreq_debug_index, 1661 + wdata->credits.value, 1662 + server->credits, server->in_flight, 1663 + 0, cifs_trace_rw_credits_write_response_clear); 1691 1664 wdata->credits.value = 0; 1692 1665 cifs_write_subrequest_terminated(wdata, result, true); 1693 1666 release_mid(mid); 1667 + trace_smb3_rw_credits(credits.rreq_debug_id, credits.rreq_debug_index, 0, 1668 + server->credits, server->in_flight, 1669 + credits.value, cifs_trace_rw_credits_write_response_add); 1694 1670 add_credits(tcon->ses->server, &credits, 0); 1695 1671 } 1696 1672
+13 -1
fs/smb/client/connect.c
··· 657 657 server_unresponsive(struct TCP_Server_Info *server) 658 658 { 659 659 /* 660 + * If we're in the process of mounting a share or reconnecting a session 661 + * and the server abruptly shut down (e.g. socket wasn't closed, packet 662 + * had been ACK'ed but no SMB response), don't wait longer than 20s to 663 + * negotiate protocol. 664 + */ 665 + spin_lock(&server->srv_lock); 666 + if (server->tcpStatus == CifsInNegotiate && 667 + time_after(jiffies, server->lstrp + 20 * HZ)) { 668 + spin_unlock(&server->srv_lock); 669 + cifs_reconnect(server, false); 670 + return true; 671 + } 672 + /* 660 673 * We need to wait 3 echo intervals to make sure we handle such 661 674 * situations right: 662 675 * 1s client sends a normal SMB request ··· 680 667 * 65s kernel_recvmsg times out, and we see that we haven't gotten 681 668 * a response in >60s. 682 669 */ 683 - spin_lock(&server->srv_lock); 684 670 if ((server->tcpStatus == CifsGood || 685 671 server->tcpStatus == CifsNeedNegotiate) && 686 672 (!server->ops->can_echo || server->ops->can_echo(server)) &&
+2
fs/smb/client/inode.c
··· 172 172 CIFS_I(inode)->time = 0; /* force reval */ 173 173 return -ESTALE; 174 174 } 175 + if (inode->i_state & I_NEW) 176 + CIFS_I(inode)->netfs.zero_point = fattr->cf_eof; 175 177 176 178 cifs_revalidate_cache(inode, fattr); 177 179
+3
fs/smb/client/smb2inode.c
··· 1106 1106 co, DELETE, SMB2_OP_RENAME, cfile, source_dentry); 1107 1107 if (rc == -EINVAL) { 1108 1108 cifs_dbg(FYI, "invalid lease key, resending request without lease"); 1109 + cifs_get_writable_path(tcon, from_name, 1110 + FIND_WR_WITH_DELETE, &cfile); 1109 1111 rc = smb2_set_path_attr(xid, tcon, from_name, to_name, cifs_sb, 1110 1112 co, DELETE, SMB2_OP_RENAME, cfile, NULL); 1111 1113 } ··· 1151 1149 cfile, NULL, NULL, dentry); 1152 1150 if (rc == -EINVAL) { 1153 1151 cifs_dbg(FYI, "invalid lease key, resending request without lease"); 1152 + cifs_get_writable_path(tcon, full_path, FIND_WR_ANY, &cfile); 1154 1153 rc = smb2_compound_op(xid, tcon, cifs_sb, 1155 1154 full_path, &oparms, &in_iov, 1156 1155 &(int){SMB2_OP_SET_EOF}, 1,
+5 -3
fs/smb/client/smb2ops.c
··· 316 316 cifs_trace_rw_credits_no_adjust_up); 317 317 trace_smb3_too_many_credits(server->CurrentMid, 318 318 server->conn_id, server->hostname, 0, credits->value - new_val, 0); 319 - cifs_server_dbg(VFS, "request has less credits (%d) than required (%d)", 319 + cifs_server_dbg(VFS, "R=%x[%x] request has less credits (%d) than required (%d)", 320 + subreq->rreq->debug_id, subreq->subreq.debug_index, 320 321 credits->value, new_val); 321 322 322 323 return -EOPNOTSUPP; ··· 339 338 trace_smb3_reconnect_detected(server->CurrentMid, 340 339 server->conn_id, server->hostname, scredits, 341 340 credits->value - new_val, in_flight); 342 - cifs_server_dbg(VFS, "trying to return %d credits to old session\n", 343 - credits->value - new_val); 341 + cifs_server_dbg(VFS, "R=%x[%x] trying to return %d credits to old session\n", 342 + subreq->rreq->debug_id, subreq->subreq.debug_index, 343 + credits->value - new_val); 344 344 return -EAGAIN; 345 345 } 346 346
+4
fs/smb/server/smb2pdu.c
··· 1690 1690 rc = ksmbd_session_register(conn, sess); 1691 1691 if (rc) 1692 1692 goto out_err; 1693 + 1694 + conn->binding = false; 1693 1695 } else if (conn->dialect >= SMB30_PROT_ID && 1694 1696 (server_conf.flags & KSMBD_GLOBAL_FLAG_SMB3_MULTICHANNEL) && 1695 1697 req->Flags & SMB2_SESSION_REQ_FLAG_BINDING) { ··· 1770 1768 sess = NULL; 1771 1769 goto out_err; 1772 1770 } 1771 + 1772 + conn->binding = false; 1773 1773 } 1774 1774 work->sess = sess; 1775 1775
+3 -1
fs/smb/server/transport_tcp.c
··· 624 624 for_each_netdev(&init_net, netdev) { 625 625 if (netif_is_bridge_port(netdev)) 626 626 continue; 627 - if (!alloc_iface(kstrdup(netdev->name, GFP_KERNEL))) 627 + if (!alloc_iface(kstrdup(netdev->name, GFP_KERNEL))) { 628 + rtnl_unlock(); 628 629 return -ENOMEM; 630 + } 629 631 } 630 632 rtnl_unlock(); 631 633 bind_additional_ifaces = 1;
+1 -1
fs/smb/server/xattr.h
··· 76 76 struct xattr_smb_acl { 77 77 int count; 78 78 int next; 79 - struct xattr_acl_entry entries[]; 79 + struct xattr_acl_entry entries[] __counted_by(count); 80 80 }; 81 81 82 82 /* 64bytes hash in xattr_ntacl is computed with sha256 */
+1 -1
fs/tracefs/event_inode.c
··· 862 862 list_for_each_entry(ei_child, &ei->children, list) 863 863 eventfs_remove_rec(ei_child, level + 1); 864 864 865 - list_del(&ei->list); 865 + list_del_rcu(&ei->list); 866 866 free_ei(ei); 867 867 } 868 868
+49
include/kunit/test.h
··· 28 28 #include <linux/types.h> 29 29 30 30 #include <asm/rwonce.h> 31 + #include <asm/sections.h> 31 32 32 33 /* Static key: true if any KUnit tests are currently running */ 33 34 DECLARE_STATIC_KEY_FALSE(kunit_running); ··· 480 479 { 481 480 return kunit_kmalloc_array(test, n, size, gfp | __GFP_ZERO); 482 481 } 482 + 483 + 484 + /** 485 + * kunit_kfree_const() - conditionally free test managed memory 486 + * @test: The test context object. 487 + * @x: pointer to the memory 488 + * 489 + * Calls kunit_kfree() only if @x is not in .rodata section. 490 + * See kunit_kstrdup_const() for more information. 491 + */ 492 + void kunit_kfree_const(struct kunit *test, const void *x); 493 + 494 + /** 495 + * kunit_kstrdup() - Duplicates a string into a test managed allocation. 496 + * 497 + * @test: The test context object. 498 + * @str: The NULL-terminated string to duplicate. 499 + * @gfp: flags passed to underlying kmalloc(). 500 + * 501 + * See kstrdup() and kunit_kmalloc_array() for more information. 502 + */ 503 + static inline char *kunit_kstrdup(struct kunit *test, const char *str, gfp_t gfp) 504 + { 505 + size_t len; 506 + char *buf; 507 + 508 + if (!str) 509 + return NULL; 510 + 511 + len = strlen(str) + 1; 512 + buf = kunit_kmalloc(test, len, gfp); 513 + if (buf) 514 + memcpy(buf, str, len); 515 + return buf; 516 + } 517 + 518 + /** 519 + * kunit_kstrdup_const() - Conditionally duplicates a string into a test managed allocation. 520 + * 521 + * @test: The test context object. 522 + * @str: The NULL-terminated string to duplicate. 523 + * @gfp: flags passed to underlying kmalloc(). 524 + * 525 + * Calls kunit_kstrdup() only if @str is not in the rodata section. Must be freed with 526 + * kunit_kfree_const() -- not kunit_kfree(). 527 + * See kstrdup_const() and kunit_kmalloc_array() for more information. 
528 + */ 529 + const char *kunit_kstrdup_const(struct kunit *test, const char *str, gfp_t gfp); 483 530 484 531 /** 485 532 * kunit_vm_mmap() - Allocate KUnit-tracked vm_mmap() area
-9
include/linux/bpf-cgroup.h
··· 390 390 __ret; \ 391 391 }) 392 392 393 - #define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen) \ 394 - ({ \ 395 - int __ret = 0; \ 396 - if (cgroup_bpf_enabled(CGROUP_GETSOCKOPT)) \ 397 - copy_from_sockptr(&__ret, optlen, sizeof(int)); \ 398 - __ret; \ 399 - }) 400 - 401 393 #define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, optlen, \ 402 394 max_optlen, retval) \ 403 395 ({ \ ··· 510 518 #define BPF_CGROUP_RUN_PROG_SOCK_OPS(sock_ops) ({ 0; }) 511 519 #define BPF_CGROUP_RUN_PROG_DEVICE_CGROUP(atype, major, minor, access) ({ 0; }) 512 520 #define BPF_CGROUP_RUN_PROG_SYSCTL(head,table,write,buf,count,pos) ({ 0; }) 513 - #define BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen) ({ 0; }) 514 521 #define BPF_CGROUP_RUN_PROG_GETSOCKOPT(sock, level, optname, optval, \ 515 522 optlen, max_optlen, retval) ({ retval; }) 516 523 #define BPF_CGROUP_RUN_PROG_GETSOCKOPT_KERN(sock, level, optname, optval, \
+4 -2
include/linux/context_tracking.h
··· 80 80 return context_tracking_enabled_this_cpu(); 81 81 } 82 82 83 - static __always_inline void context_tracking_guest_exit(void) 83 + static __always_inline bool context_tracking_guest_exit(void) 84 84 { 85 85 if (context_tracking_enabled()) 86 86 __ct_user_exit(CONTEXT_GUEST); 87 + 88 + return context_tracking_enabled_this_cpu(); 87 89 } 88 90 89 91 #define CT_WARN_ON(cond) WARN_ON(context_tracking_enabled() && (cond)) ··· 100 98 static inline int ct_state(void) { return -1; } 101 99 static inline int __ct_state(void) { return -1; } 102 100 static __always_inline bool context_tracking_guest_enter(void) { return false; } 103 - static __always_inline void context_tracking_guest_exit(void) { } 101 + static __always_inline bool context_tracking_guest_exit(void) { return false; } 104 102 #define CT_WARN_ON(cond) do { } while (0) 105 103 #endif /* !CONFIG_CONTEXT_TRACKING_USER */ 106 104
+9 -1
include/linux/kvm_host.h
··· 485 485 */ 486 486 static __always_inline void guest_context_exit_irqoff(void) 487 487 { 488 - context_tracking_guest_exit(); 488 + /* 489 + * Guest mode is treated as a quiescent state, see 490 + * guest_context_enter_irqoff() for more details. 491 + */ 492 + if (!context_tracking_guest_exit()) { 493 + instrumentation_begin(); 494 + rcu_virt_note_context_switch(); 495 + instrumentation_end(); 496 + } 489 497 } 490 498 491 499 /*
+4
include/linux/mm.h
··· 97 97 extern int mmap_rnd_compat_bits __read_mostly; 98 98 #endif 99 99 100 + #ifndef PHYSMEM_END 101 + # define PHYSMEM_END ((1ULL << MAX_PHYSMEM_BITS) - 1) 102 + #endif 103 + 100 104 #include <asm/page.h> 101 105 #include <asm/processor.h> 102 106
+3
include/linux/pci-pwrctl.h
··· 7 7 #define __PCI_PWRCTL_H__ 8 8 9 9 #include <linux/notifier.h> 10 + #include <linux/workqueue.h> 10 11 11 12 struct device; 12 13 struct device_link; ··· 42 41 /* Private: don't use. */ 43 42 struct notifier_block nb; 44 43 struct device_link *link; 44 + struct work_struct work; 45 45 }; 46 46 47 + void pci_pwrctl_init(struct pci_pwrctl *pwrctl, struct device *dev); 47 48 int pci_pwrctl_device_set_ready(struct pci_pwrctl *pwrctl); 48 49 void pci_pwrctl_device_unset_ready(struct pci_pwrctl *pwrctl); 49 50 int devm_pci_pwrctl_device_set_ready(struct device *dev,
+8
include/linux/regulator/consumer.h
··· 452 452 return 0; 453 453 } 454 454 455 + static inline int devm_regulator_bulk_get_const( 456 + struct device *dev, int num_consumers, 457 + const struct regulator_bulk_data *in_consumers, 458 + struct regulator_bulk_data **out_consumers) 459 + { 460 + return 0; 461 + } 462 + 455 463 static inline int regulator_bulk_enable(int num_consumers, 456 464 struct regulator_bulk_data *consumers) 457 465 {
+1
include/linux/resctrl.h
··· 248 248 249 249 /* The number of closid supported by this resource regardless of CDP */ 250 250 u32 resctrl_arch_get_num_closid(struct rdt_resource *r); 251 + u32 resctrl_arch_system_num_rmid_idx(void); 251 252 int resctrl_arch_update_domains(struct rdt_resource *r, u32 closid); 252 253 253 254 /*
-5
include/net/bluetooth/hci_core.h
··· 186 186 struct smp_csrk { 187 187 bdaddr_t bdaddr; 188 188 u8 bdaddr_type; 189 - u8 link_type; 190 189 u8 type; 191 190 u8 val[16]; 192 191 }; ··· 195 196 struct rcu_head rcu; 196 197 bdaddr_t bdaddr; 197 198 u8 bdaddr_type; 198 - u8 link_type; 199 199 u8 authenticated; 200 200 u8 type; 201 201 u8 enc_size; ··· 209 211 bdaddr_t rpa; 210 212 bdaddr_t bdaddr; 211 213 u8 addr_type; 212 - u8 link_type; 213 214 u8 val[16]; 214 215 }; 215 216 ··· 216 219 struct list_head list; 217 220 struct rcu_head rcu; 218 221 bdaddr_t bdaddr; 219 - u8 bdaddr_type; 220 - u8 link_type; 221 222 u8 type; 222 223 u8 val[HCI_LINK_KEY_SIZE]; 223 224 u8 pin_len;
+4
include/net/bluetooth/hci_sync.h
··· 73 73 void *data, hci_cmd_sync_work_destroy_t destroy); 74 74 int hci_cmd_sync_queue_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, 75 75 void *data, hci_cmd_sync_work_destroy_t destroy); 76 + int hci_cmd_sync_run(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, 77 + void *data, hci_cmd_sync_work_destroy_t destroy); 78 + int hci_cmd_sync_run_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, 79 + void *data, hci_cmd_sync_work_destroy_t destroy); 76 80 struct hci_cmd_sync_work_entry * 77 81 hci_cmd_sync_lookup_entry(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, 78 82 void *data, hci_cmd_sync_work_destroy_t destroy);
+2
include/net/mana/mana.h
··· 98 98 99 99 atomic_t pending_sends; 100 100 101 + bool napi_initialized; 102 + 101 103 struct mana_stats_tx stats; 102 104 }; 103 105
+1 -1
include/sound/sof/topology.h
··· 54 54 struct sof_ipc_comp { 55 55 struct sof_ipc_cmd_hdr hdr; 56 56 uint32_t id; 57 - enum sof_comp_type type; 57 + uint32_t type; 58 58 uint32_t pipeline_id; 59 59 uint32_t core; 60 60
+5 -1
include/uapi/drm/panthor_drm.h
··· 692 692 /** @PANTHOR_GROUP_PRIORITY_MEDIUM: Medium priority group. */ 693 693 PANTHOR_GROUP_PRIORITY_MEDIUM, 694 694 695 - /** @PANTHOR_GROUP_PRIORITY_HIGH: High priority group. */ 695 + /** 696 + * @PANTHOR_GROUP_PRIORITY_HIGH: High priority group. 697 + * 698 + * Requires CAP_SYS_NICE or DRM_MASTER. 699 + */ 696 700 PANTHOR_GROUP_PRIORITY_HIGH, 697 701 }; 698 702
+1 -1
include/uapi/sound/sof/abi.h
··· 29 29 /* SOF ABI version major, minor and patch numbers */ 30 30 #define SOF_ABI_MAJOR 3 31 31 #define SOF_ABI_MINOR 23 32 - #define SOF_ABI_PATCH 0 32 + #define SOF_ABI_PATCH 1 33 33 34 34 /* SOF ABI version number. Format within 32bit word is MMmmmppp */ 35 35 #define SOF_ABI_MAJOR_SHIFT 24
+4 -2
kernel/bpf/btf.c
··· 823 823 const char *src = btf_str_by_offset(btf, offset); 824 824 const char *src_limit; 825 825 826 + if (!*src) 827 + return false; 828 + 826 829 /* set a limit on identifier length */ 827 830 src_limit = src + KSYM_NAME_LEN; 828 - src++; 829 831 while (*src && src < src_limit) { 830 832 if (!isprint(*src)) 831 833 return false; ··· 6285 6283 6286 6284 errout: 6287 6285 btf_verifier_env_free(env); 6288 - if (base_btf != vmlinux_btf) 6286 + if (!IS_ERR(base_btf) && base_btf != vmlinux_btf) 6289 6287 btf_free(base_btf); 6290 6288 if (btf) { 6291 6289 kvfree(btf->data);
+12 -6
kernel/events/core.c
··· 1255 1255 * perf_event_context::mutex 1256 1256 * perf_event::child_mutex; 1257 1257 * perf_event_context::lock 1258 - * perf_event::mmap_mutex 1259 1258 * mmap_lock 1259 + * perf_event::mmap_mutex 1260 + * perf_buffer::aux_mutex 1260 1261 * perf_addr_filters_head::lock 1261 1262 * 1262 1263 * cpu_hotplug_lock ··· 6374 6373 event->pmu->event_unmapped(event, vma->vm_mm); 6375 6374 6376 6375 /* 6377 - * rb->aux_mmap_count will always drop before rb->mmap_count and 6378 - * event->mmap_count, so it is ok to use event->mmap_mutex to 6379 - * serialize with perf_mmap here. 6376 + * The AUX buffer is strictly a sub-buffer, serialize using aux_mutex 6377 + * to avoid complications. 6380 6378 */ 6381 6379 if (rb_has_aux(rb) && vma->vm_pgoff == rb->aux_pgoff && 6382 - atomic_dec_and_mutex_lock(&rb->aux_mmap_count, &event->mmap_mutex)) { 6380 + atomic_dec_and_mutex_lock(&rb->aux_mmap_count, &rb->aux_mutex)) { 6383 6381 /* 6384 6382 * Stop all AUX events that are writing to this buffer, 6385 6383 * so that we can free its AUX pages and corresponding PMU ··· 6395 6395 rb_free_aux(rb); 6396 6396 WARN_ON_ONCE(refcount_read(&rb->aux_refcount)); 6397 6397 6398 - mutex_unlock(&event->mmap_mutex); 6398 + mutex_unlock(&rb->aux_mutex); 6399 6399 } 6400 6400 6401 6401 if (atomic_dec_and_test(&rb->mmap_count)) ··· 6483 6483 struct perf_event *event = file->private_data; 6484 6484 unsigned long user_locked, user_lock_limit; 6485 6485 struct user_struct *user = current_user(); 6486 + struct mutex *aux_mutex = NULL; 6486 6487 struct perf_buffer *rb = NULL; 6487 6488 unsigned long locked, lock_limit; 6488 6489 unsigned long vma_size; ··· 6531 6530 rb = event->rb; 6532 6531 if (!rb) 6533 6532 goto aux_unlock; 6533 + 6534 + aux_mutex = &rb->aux_mutex; 6535 + mutex_lock(aux_mutex); 6534 6536 6535 6537 aux_offset = READ_ONCE(rb->user_page->aux_offset); 6536 6538 aux_size = READ_ONCE(rb->user_page->aux_size); ··· 6685 6681 atomic_dec(&rb->mmap_count); 6686 6682 } 6687 6683 aux_unlock: 6684 + 
if (aux_mutex) 6685 + mutex_unlock(aux_mutex); 6688 6686 mutex_unlock(&event->mmap_mutex); 6689 6687 6690 6688 /*
+1
kernel/events/internal.h
··· 40 40 struct user_struct *mmap_user; 41 41 42 42 /* AUX area */ 43 + struct mutex aux_mutex; 43 44 long aux_head; 44 45 unsigned int aux_nest; 45 46 long aux_wakeup; /* last aux_watermark boundary crossed by aux_head */
+2
kernel/events/ring_buffer.c
··· 337 337 */ 338 338 if (!rb->nr_pages) 339 339 rb->paused = 1; 340 + 341 + mutex_init(&rb->aux_mutex); 340 342 } 341 343 342 344 void perf_aux_output_flag(struct perf_output_handle *handle, u64 flags)
+1 -2
kernel/events/uprobes.c
··· 1489 1489 struct xol_area *area; 1490 1490 void *insns; 1491 1491 1492 - area = kmalloc(sizeof(*area), GFP_KERNEL); 1492 + area = kzalloc(sizeof(*area), GFP_KERNEL); 1493 1493 if (unlikely(!area)) 1494 1494 goto out; 1495 1495 ··· 1499 1499 goto free_area; 1500 1500 1501 1501 area->xol_mapping.name = "[uprobes]"; 1502 - area->xol_mapping.fault = NULL; 1503 1502 area->xol_mapping.pages = area->pages; 1504 1503 area->pages[0] = alloc_page(GFP_HIGHUSER); 1505 1504 if (!area->pages[0])
+1 -1
kernel/kexec_file.c
··· 752 752 753 753 #ifdef CONFIG_CRASH_HOTPLUG 754 754 /* Exclude elfcorehdr segment to allow future changes via hotplug */ 755 - if (j == image->elfcorehdr_index) 755 + if (i == image->elfcorehdr_index) 756 756 continue; 757 757 #endif 758 758
+5 -4
kernel/locking/rtmutex.c
··· 1644 1644 } 1645 1645 1646 1646 static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock, 1647 + struct rt_mutex_base *lock, 1647 1648 struct rt_mutex_waiter *w) 1648 1649 { 1649 1650 /* ··· 1657 1656 if (build_ww_mutex() && w->ww_ctx) 1658 1657 return; 1659 1658 1660 - /* 1661 - * Yell loudly and stop the task right here. 1662 - */ 1659 + raw_spin_unlock_irq(&lock->wait_lock); 1660 + 1663 1661 WARN(1, "rtmutex deadlock detected\n"); 1662 + 1664 1663 while (1) { 1665 1664 set_current_state(TASK_INTERRUPTIBLE); 1666 1665 rt_mutex_schedule(); ··· 1714 1713 } else { 1715 1714 __set_current_state(TASK_RUNNING); 1716 1715 remove_waiter(lock, waiter); 1717 - rt_mutex_handle_deadlock(ret, chwalk, waiter); 1716 + rt_mutex_handle_deadlock(ret, chwalk, lock, waiter); 1718 1717 } 1719 1718 1720 1719 /*
+2 -4
kernel/resource.c
··· 1826 1826 if (flags & GFR_DESCENDING) { 1827 1827 resource_size_t end; 1828 1828 1829 - end = min_t(resource_size_t, base->end, 1830 - (1ULL << MAX_PHYSMEM_BITS) - 1); 1829 + end = min_t(resource_size_t, base->end, PHYSMEM_END); 1831 1830 return end - size + 1; 1832 1831 } 1833 1832 ··· 1843 1844 * @size did not wrap 0. 1844 1845 */ 1845 1846 return addr > addr - size && 1846 - addr <= min_t(resource_size_t, base->end, 1847 - (1ULL << MAX_PHYSMEM_BITS) - 1); 1847 + addr <= min_t(resource_size_t, base->end, PHYSMEM_END); 1848 1848 } 1849 1849 1850 1850 static resource_size_t gfr_next(resource_size_t addr, resource_size_t size,
+18 -13
kernel/trace/fgraph.c
··· 1206 1206 read_unlock(&tasklist_lock); 1207 1207 } 1208 1208 1209 - static void ftrace_graph_enable_direct(bool enable_branch) 1209 + static void ftrace_graph_enable_direct(bool enable_branch, struct fgraph_ops *gops) 1210 1210 { 1211 1211 trace_func_graph_ent_t func = NULL; 1212 1212 trace_func_graph_ret_t retfunc = NULL; 1213 1213 int i; 1214 1214 1215 - for_each_set_bit(i, &fgraph_array_bitmask, 1216 - sizeof(fgraph_array_bitmask) * BITS_PER_BYTE) { 1217 - func = fgraph_array[i]->entryfunc; 1218 - retfunc = fgraph_array[i]->retfunc; 1219 - fgraph_direct_gops = fgraph_array[i]; 1220 - } 1215 + if (gops) { 1216 + func = gops->entryfunc; 1217 + retfunc = gops->retfunc; 1218 + fgraph_direct_gops = gops; 1219 + } else { 1220 + for_each_set_bit(i, &fgraph_array_bitmask, 1221 + sizeof(fgraph_array_bitmask) * BITS_PER_BYTE) { 1222 + func = fgraph_array[i]->entryfunc; 1223 + retfunc = fgraph_array[i]->retfunc; 1224 + fgraph_direct_gops = fgraph_array[i]; 1225 + } 1226 + } 1221 1227 if (WARN_ON_ONCE(!func)) 1222 1228 return; 1223 1229 ··· 1262 1256 ret = -ENOSPC; 1263 1257 goto out; 1264 1258 } 1265 - 1266 - fgraph_array[i] = gops; 1267 1259 gops->idx = i; 1268 1260 1269 1261 ftrace_graph_active++; ··· 1270 1266 ftrace_graph_disable_direct(true); 1271 1267 1272 1268 if (ftrace_graph_active == 1) { 1273 - ftrace_graph_enable_direct(false); 1269 + ftrace_graph_enable_direct(false, gops); 1274 1270 register_pm_notifier(&ftrace_suspend_notifier); 1275 1271 ret = start_graph_tracing(); 1276 1272 if (ret) ··· 1285 1281 } else { 1286 1282 init_task_vars(gops->idx); 1287 1283 } 1288 - 1289 1284 /* Always save the function, and reset at unregistering */ 1290 1285 gops->saved_func = gops->entryfunc; 1291 1286 1292 1287 ret = ftrace_startup_subops(&graph_ops, &gops->ops, command); 1288 + if (!ret) 1289 + fgraph_array[i] = gops; 1290 + 1293 1291 error: 1294 1292 if (ret) { 1295 - fgraph_array[i] = &fgraph_stub; 1296 1293 ftrace_graph_active--; 1297 1294 gops->saved_func = NULL; 
1298 1295 fgraph_lru_release_index(i); ··· 1329 1324 ftrace_shutdown_subops(&graph_ops, &gops->ops, command); 1330 1325 1331 1326 if (ftrace_graph_active == 1) 1332 - ftrace_graph_enable_direct(true); 1327 + ftrace_graph_enable_direct(true, NULL); 1333 1328 else if (!ftrace_graph_active) 1334 1329 ftrace_graph_disable_direct(false); 1335 1330
+2
kernel/trace/trace.c
··· 3958 3958 break; 3959 3959 entries++; 3960 3960 ring_buffer_iter_advance(buf_iter); 3961 + /* This could be a big loop */ 3962 + cond_resched(); 3961 3963 } 3962 3964 3963 3965 per_cpu_ptr(iter->array_buffer->data, cpu)->skipped_entries = entries;
+34 -16
kernel/trace/trace_osnoise.c
··· 253 253 } 254 254 255 255 /* 256 + * Protect the interface. 257 + */ 258 + static struct mutex interface_lock; 259 + 260 + /* 256 261 * tlat_var_reset - Reset the values of the given timerlat_variables 257 262 */ 258 263 static inline void tlat_var_reset(void) 259 264 { 260 265 struct timerlat_variables *tlat_var; 261 266 int cpu; 267 + 268 + /* Synchronize with the timerlat interfaces */ 269 + mutex_lock(&interface_lock); 262 270 /* 263 271 * So far, all the values are initialized as 0, so 264 272 * zeroing the structure is perfect. 265 273 */ 266 274 for_each_cpu(cpu, cpu_online_mask) { 267 275 tlat_var = per_cpu_ptr(&per_cpu_timerlat_var, cpu); 276 + if (tlat_var->kthread) 277 + hrtimer_cancel(&tlat_var->timer); 268 278 memset(tlat_var, 0, sizeof(*tlat_var)); 269 279 } 280 + mutex_unlock(&interface_lock); 270 281 } 271 282 #else /* CONFIG_TIMERLAT_TRACER */ 272 283 #define tlat_var_reset() do {} while (0) ··· 341 330 int context; /* timer context */ 342 331 }; 343 332 #endif 344 - 345 - /* 346 - * Protect the interface. 347 - */ 348 - static struct mutex interface_lock; 349 333 350 334 /* 351 335 * Tracer data. 
··· 1618 1612 1619 1613 static struct cpumask osnoise_cpumask; 1620 1614 static struct cpumask save_cpumask; 1615 + static struct cpumask kthread_cpumask; 1621 1616 1622 1617 /* 1623 1618 * osnoise_sleep - sleep until the next period ··· 1682 1675 */ 1683 1676 mutex_lock(&interface_lock); 1684 1677 this_cpu_osn_var()->kthread = NULL; 1678 + cpumask_clear_cpu(smp_processor_id(), &kthread_cpumask); 1685 1679 mutex_unlock(&interface_lock); 1686 1680 1687 1681 return 1; ··· 1953 1945 { 1954 1946 struct task_struct *kthread; 1955 1947 1948 + mutex_lock(&interface_lock); 1956 1949 kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread; 1957 1950 if (kthread) { 1958 - if (test_bit(OSN_WORKLOAD, &osnoise_options)) { 1951 + per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL; 1952 + mutex_unlock(&interface_lock); 1953 + 1954 + if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask) && 1955 + !WARN_ON(!test_bit(OSN_WORKLOAD, &osnoise_options))) { 1959 1956 kthread_stop(kthread); 1960 - } else { 1957 + } else if (!WARN_ON(test_bit(OSN_WORKLOAD, &osnoise_options))) { 1961 1958 /* 1962 1959 * This is a user thread waiting on the timerlat_fd. 
We need 1963 1960 * to close all users, and the best way to guarantee this is ··· 1971 1958 kill_pid(kthread->thread_pid, SIGKILL, 1); 1972 1959 put_task_struct(kthread); 1973 1960 } 1974 - per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL; 1975 1961 } else { 1962 + mutex_unlock(&interface_lock); 1976 1963 /* if no workload, just return */ 1977 1964 if (!test_bit(OSN_WORKLOAD, &osnoise_options)) { 1978 1965 /* ··· 1980 1967 */ 1981 1968 per_cpu(per_cpu_osnoise_var, cpu).sampling = false; 1982 1969 barrier(); 1983 - return; 1984 1970 } 1985 1971 } 1986 1972 } ··· 1994 1982 { 1995 1983 int cpu; 1996 1984 1997 - cpus_read_lock(); 1998 - 1999 - for_each_online_cpu(cpu) 1985 + for_each_possible_cpu(cpu) 2000 1986 stop_kthread(cpu); 2001 - 2002 - cpus_read_unlock(); 2003 1987 } 2004 1988 2005 1989 /* ··· 2029 2021 } 2030 2022 2031 2023 per_cpu(per_cpu_osnoise_var, cpu).kthread = kthread; 2024 + cpumask_set_cpu(cpu, &kthread_cpumask); 2032 2025 2033 2026 return 0; 2034 2027 } ··· 2057 2048 */ 2058 2049 cpumask_and(current_mask, cpu_online_mask, &osnoise_cpumask); 2059 2050 2060 - for_each_possible_cpu(cpu) 2051 + for_each_possible_cpu(cpu) { 2052 + if (cpumask_test_and_clear_cpu(cpu, &kthread_cpumask)) { 2053 + struct task_struct *kthread; 2054 + 2055 + kthread = per_cpu(per_cpu_osnoise_var, cpu).kthread; 2056 + if (!WARN_ON(!kthread)) 2057 + kthread_stop(kthread); 2058 + } 2061 2059 per_cpu(per_cpu_osnoise_var, cpu).kthread = NULL; 2060 + } 2062 2061 2063 2062 for_each_cpu(cpu, current_mask) { 2064 2063 retval = start_kthread(cpu); ··· 2596 2579 osn_var = per_cpu_ptr(&per_cpu_osnoise_var, cpu); 2597 2580 tlat_var = per_cpu_ptr(&per_cpu_timerlat_var, cpu); 2598 2581 2599 - hrtimer_cancel(&tlat_var->timer); 2582 + if (tlat_var->kthread) 2583 + hrtimer_cancel(&tlat_var->timer); 2600 2584 memset(tlat_var, 0, sizeof(*tlat_var)); 2601 2585 2602 2586 osn_var->sampling = 0;
+18 -5
kernel/trace/trace_selftest.c
··· 942 942 { 943 943 struct fgraph_fixture *fixture; 944 944 bool printed = false; 945 - int i, ret; 945 + int i, j, ret; 946 946 947 947 pr_cont("PASSED\n"); 948 948 pr_info("Testing multiple fgraph storage on a function: "); ··· 953 953 if (ret && ret != -ENODEV) { 954 954 pr_cont("*Could not set filter* "); 955 955 printed = true; 956 - goto out; 956 + goto out2; 957 957 } 958 + } 958 959 960 + for (j = 0; j < ARRAY_SIZE(store_bytes); j++) { 961 + fixture = &store_bytes[j]; 959 962 ret = register_ftrace_graph(&fixture->gops); 960 963 if (ret) { 961 964 pr_warn("Failed to init store_bytes fgraph tracing\n"); 962 965 printed = true; 963 - goto out; 966 + goto out1; 964 967 } 965 968 } 966 969 967 970 DYN_FTRACE_TEST_NAME(); 968 - out: 971 + out1: 972 + while (--j >= 0) { 973 + fixture = &store_bytes[j]; 974 + unregister_ftrace_graph(&fixture->gops); 975 + 976 + if (fixture->error_str && !printed) { 977 + pr_cont("*** %s ***", fixture->error_str); 978 + printed = true; 979 + } 980 + } 981 + out2: 969 982 while (--i >= 0) { 970 983 fixture = &store_bytes[i]; 971 - unregister_ftrace_graph(&fixture->gops); 984 + ftrace_free_filter(&fixture->gops.ops); 972 985 973 986 if (fixture->error_str && !printed) { 974 987 pr_cont("*** %s ***", fixture->error_str);
+11 -6
lib/codetag.c
··· 125 125 cttype->desc.tag_size; 126 126 } 127 127 128 - #ifdef CONFIG_MODULES 129 128 static void *get_symbol(struct module *mod, const char *prefix, const char *name) 130 129 { 131 130 DECLARE_SEQ_BUF(sb, KSYM_NAME_LEN); ··· 154 155 }; 155 156 } 156 157 158 + static const char *get_mod_name(__maybe_unused struct module *mod) 159 + { 160 + #ifdef CONFIG_MODULES 161 + if (mod) 162 + return mod->name; 163 + #endif 164 + return "(built-in)"; 165 + } 166 + 157 167 static int codetag_module_init(struct codetag_type *cttype, struct module *mod) 158 168 { 159 169 struct codetag_range range; ··· 172 164 range = get_section_range(mod, cttype->desc.section); 173 165 if (!range.start || !range.stop) { 174 166 pr_warn("Failed to load code tags of type %s from the module %s\n", 175 - cttype->desc.section, 176 - mod ? mod->name : "(built-in)"); 167 + cttype->desc.section, get_mod_name(mod)); 177 168 return -EINVAL; 178 169 } 179 170 ··· 206 199 return 0; 207 200 } 208 201 202 + #ifdef CONFIG_MODULES 209 203 void codetag_load_module(struct module *mod) 210 204 { 211 205 struct codetag_type *cttype; ··· 256 248 257 249 return unload_ok; 258 250 } 259 - 260 - #else /* CONFIG_MODULES */ 261 - static int codetag_module_init(struct codetag_type *cttype, struct module *mod) { return 0; } 262 251 #endif /* CONFIG_MODULES */ 263 252 264 253 struct codetag_type *
+5 -2
lib/kunit/device.c
··· 89 89 if (!driver) 90 90 return ERR_PTR(err); 91 91 92 - driver->name = name; 92 + driver->name = kunit_kstrdup_const(test, name, GFP_KERNEL); 93 93 driver->bus = &kunit_bus_type; 94 94 driver->owner = THIS_MODULE; 95 95 ··· 192 192 const struct device_driver *driver = to_kunit_device(dev)->driver; 193 193 194 194 kunit_release_action(test, device_unregister_wrapper, dev); 195 - if (driver) 195 + if (driver) { 196 + const char *driver_name = driver->name; 196 197 kunit_release_action(test, driver_unregister_wrapper, (void *)driver); 198 + kunit_kfree_const(test, driver_name); 199 + } 197 200 } 198 201 EXPORT_SYMBOL_GPL(kunit_device_unregister); 199 202
+19
lib/kunit/test.c
··· 874 874 } 875 875 EXPORT_SYMBOL_GPL(kunit_kfree); 876 876 877 + void kunit_kfree_const(struct kunit *test, const void *x) 878 + { 879 + #if !IS_MODULE(CONFIG_KUNIT) 880 + if (!is_kernel_rodata((unsigned long)x)) 881 + #endif 882 + kunit_kfree(test, x); 883 + } 884 + EXPORT_SYMBOL_GPL(kunit_kfree_const); 885 + 886 + const char *kunit_kstrdup_const(struct kunit *test, const char *str, gfp_t gfp) 887 + { 888 + #if !IS_MODULE(CONFIG_KUNIT) 889 + if (is_kernel_rodata((unsigned long)str)) 890 + return str; 891 + #endif 892 + return kunit_kstrdup(test, str, gfp); 893 + } 894 + EXPORT_SYMBOL_GPL(kunit_kstrdup_const); 895 + 877 896 void kunit_cleanup(struct kunit *test) 878 897 { 879 898 struct kunit_resource *res;
+2 -5
lib/maple_tree.c
··· 7566 7566 * 2. The gap is correctly set in the parents 7567 7567 */ 7568 7568 void mt_validate(struct maple_tree *mt) 7569 + __must_hold(mas->tree->ma_lock) 7569 7570 { 7570 7571 unsigned char end; 7571 7572 7572 7573 MA_STATE(mas, mt, 0, 0); 7573 - rcu_read_lock(); 7574 7574 mas_start(&mas); 7575 7575 if (!mas_is_active(&mas)) 7576 - goto done; 7576 + return; 7577 7577 7578 7578 while (!mte_is_leaf(mas.node)) 7579 7579 mas_descend(&mas); ··· 7594 7594 mas_dfs_postorder(&mas, ULONG_MAX); 7595 7595 } 7596 7596 mt_validate_nulls(mt); 7597 - done: 7598 - rcu_read_unlock(); 7599 - 7600 7597 } 7601 7598 EXPORT_SYMBOL_GPL(mt_validate); 7602 7599
+1 -1
mm/filemap.c
··· 4231 4231 } 4232 4232 4233 4233 /* Wait for writeback to complete on all folios and discard. */ 4234 - truncate_inode_pages_range(mapping, start, end); 4234 + invalidate_inode_pages2_range(mapping, start / PAGE_SIZE, end / PAGE_SIZE); 4235 4235 4236 4236 unlock: 4237 4237 filemap_invalidate_unlock(mapping);
+9 -3
mm/memcontrol.c
··· 3613 3613 memcg1_soft_limit_reset(memcg); 3614 3614 #ifdef CONFIG_ZSWAP 3615 3615 memcg->zswap_max = PAGE_COUNTER_MAX; 3616 - WRITE_ONCE(memcg->zswap_writeback, 3617 - !parent || READ_ONCE(parent->zswap_writeback)); 3616 + WRITE_ONCE(memcg->zswap_writeback, true); 3618 3617 #endif 3619 3618 page_counter_set_high(&memcg->swap, PAGE_COUNTER_MAX); 3620 3619 if (parent) { ··· 5319 5320 bool mem_cgroup_zswap_writeback_enabled(struct mem_cgroup *memcg) 5320 5321 { 5321 5322 /* if zswap is disabled, do not block pages going to the swapping device */ 5322 - return !zswap_is_enabled() || !memcg || READ_ONCE(memcg->zswap_writeback); 5323 + if (!zswap_is_enabled()) 5324 + return true; 5325 + 5326 + for (; memcg; memcg = parent_mem_cgroup(memcg)) 5327 + if (!READ_ONCE(memcg->zswap_writeback)) 5328 + return false; 5329 + 5330 + return true; 5323 5331 } 5324 5332 5325 5333 static u64 zswap_current_read(struct cgroup_subsys_state *css,
+1 -1
mm/memory_hotplug.c
··· 1681 1681 1682 1682 struct range mhp_get_pluggable_range(bool need_mapping) 1683 1683 { 1684 - const u64 max_phys = (1ULL << MAX_PHYSMEM_BITS) - 1; 1684 + const u64 max_phys = PHYSMEM_END; 1685 1685 struct range mhp_range; 1686 1686 1687 1687 if (need_mapping) {
+7
mm/page_alloc.c
··· 1054 1054 reset_page_owner(page, order); 1055 1055 page_table_check_free(page, order); 1056 1056 pgalloc_tag_sub(page, 1 << order); 1057 + 1058 + /* 1059 + * The page is isolated and accounted for. 1060 + * Mark the codetag as empty to avoid accounting error 1061 + * when the page is freed by unpoison_memory(). 1062 + */ 1063 + clear_page_tag_ref(page); 1057 1064 return false; 1058 1065 } 1059 1066
+4
mm/slub.c
··· 2116 2116 if (!mem_alloc_profiling_enabled()) 2117 2117 return; 2118 2118 2119 + /* slab->obj_exts might not be NULL if it was created for MEMCG accounting. */ 2120 + if (s->flags & (SLAB_NO_OBJ_EXT | SLAB_NOLEAKTRACE)) 2121 + return; 2122 + 2119 2123 obj_exts = slab_obj_exts(slab); 2120 2124 if (!obj_exts) 2121 2125 return;
+1 -1
mm/sparse.c
··· 129 129 static void __meminit mminit_validate_memmodel_limits(unsigned long *start_pfn, 130 130 unsigned long *end_pfn) 131 131 { 132 - unsigned long max_sparsemem_pfn = 1UL << (MAX_PHYSMEM_BITS-PAGE_SHIFT); 132 + unsigned long max_sparsemem_pfn = (PHYSMEM_END + 1) >> PAGE_SHIFT; 133 133 134 134 /* 135 135 * Sanity checks - do not allow an architecture to pass
+16 -13
mm/userfaultfd.c
··· 787 787 } 788 788 789 789 dst_pmdval = pmdp_get_lockless(dst_pmd); 790 - /* 791 - * If the dst_pmd is mapped as THP don't 792 - * override it and just be strict. 793 - */ 794 - if (unlikely(pmd_trans_huge(dst_pmdval))) { 795 - err = -EEXIST; 796 - break; 797 - } 798 790 if (unlikely(pmd_none(dst_pmdval)) && 799 791 unlikely(__pte_alloc(dst_mm, dst_pmd))) { 800 792 err = -ENOMEM; 801 793 break; 802 794 } 803 - /* If an huge pmd materialized from under us fail */ 804 - if (unlikely(pmd_trans_huge(*dst_pmd))) { 795 + dst_pmdval = pmdp_get_lockless(dst_pmd); 796 + /* 797 + * If the dst_pmd is THP don't override it and just be strict. 798 + * (This includes the case where the PMD used to be THP and 799 + * changed back to none after __pte_alloc().) 800 + */ 801 + if (unlikely(!pmd_present(dst_pmdval) || pmd_trans_huge(dst_pmdval) || 802 + pmd_devmap(dst_pmdval))) { 803 + err = -EEXIST; 804 + break; 805 + } 806 + if (unlikely(pmd_bad(dst_pmdval))) { 805 807 err = -EFAULT; 806 808 break; 807 809 } 808 - 809 - BUG_ON(pmd_none(*dst_pmd)); 810 - BUG_ON(pmd_trans_huge(*dst_pmd)); 810 + /* 811 + * For shmem mappings, khugepaged is allowed to remove page 812 + * tables under us; pte_offset_map_lock() will deal with that. 813 + */ 811 814 812 815 err = mfill_atomic_pte(dst_pmd, dst_vma, dst_addr, 813 816 src_addr, flags, &folio);
+5 -2
mm/vmalloc.c
··· 2191 2191 { 2192 2192 struct vmap_node *vn = container_of(work, 2193 2193 struct vmap_node, purge_work); 2194 + unsigned long nr_purged_pages = 0; 2194 2195 struct vmap_area *va, *n_va; 2195 2196 LIST_HEAD(local_list); 2196 2197 ··· 2209 2208 kasan_release_vmalloc(orig_start, orig_end, 2210 2209 va->va_start, va->va_end); 2211 2210 2212 - atomic_long_sub(nr, &vmap_lazy_nr); 2211 + nr_purged_pages += nr; 2213 2212 vn->nr_purged++; 2214 2213 2215 2214 if (is_vn_id_valid(vn_id) && !vn->skip_populate) ··· 2219 2218 /* Go back to global. */ 2220 2219 list_add(&va->list, &local_list); 2221 2220 } 2221 + 2222 + atomic_long_sub(nr_purged_pages, &vmap_lazy_nr); 2222 2223 2223 2224 reclaim_list_global(&local_list); 2224 2225 } ··· 2629 2626 vb->dirty_max = 0; 2630 2627 bitmap_set(vb->used_map, 0, (1UL << order)); 2631 2628 INIT_LIST_HEAD(&vb->free_list); 2629 + vb->cpu = raw_smp_processor_id(); 2632 2630 2633 2631 xa = addr_to_vb_xa(va->va_start); 2634 2632 vb_idx = addr_to_vb_idx(va->va_start); ··· 2646 2642 * integrity together with list_for_each_rcu from read 2647 2643 * side. 2648 2644 */ 2649 - vb->cpu = raw_smp_processor_id(); 2650 2645 vbq = per_cpu_ptr(&vmap_block_queue, vb->cpu); 2651 2646 spin_lock(&vbq->lock); 2652 2647 list_add_tail_rcu(&vb->free_list, &vbq->free);
+2 -22
mm/vmscan.c
··· 1604 1604 1605 1605 } 1606 1606 1607 - #ifdef CONFIG_CMA 1608 - /* 1609 - * It is waste of effort to scan and reclaim CMA pages if it is not available 1610 - * for current allocation context. Kswapd can not be enrolled as it can not 1611 - * distinguish this scenario by using sc->gfp_mask = GFP_KERNEL 1612 - */ 1613 - static bool skip_cma(struct folio *folio, struct scan_control *sc) 1614 - { 1615 - return !current_is_kswapd() && 1616 - gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE && 1617 - folio_migratetype(folio) == MIGRATE_CMA; 1618 - } 1619 - #else 1620 - static bool skip_cma(struct folio *folio, struct scan_control *sc) 1621 - { 1622 - return false; 1623 - } 1624 - #endif 1625 - 1626 1607 /* 1627 1608 * Isolating page from the lruvec to fill in @dst list by nr_to_scan times. 1628 1609 * ··· 1650 1669 nr_pages = folio_nr_pages(folio); 1651 1670 total_scan += nr_pages; 1652 1671 1653 - if (folio_zonenum(folio) > sc->reclaim_idx || 1654 - skip_cma(folio, sc)) { 1672 + if (folio_zonenum(folio) > sc->reclaim_idx) { 1655 1673 nr_skipped[folio_zonenum(folio)] += nr_pages; 1656 1674 move_to = &folios_skipped; 1657 1675 goto move; ··· 4300 4320 } 4301 4321 4302 4322 /* ineligible */ 4303 - if (zone > sc->reclaim_idx || skip_cma(folio, sc)) { 4323 + if (zone > sc->reclaim_idx) { 4304 4324 gen = folio_inc_gen(lruvec, folio, false); 4305 4325 list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]); 4306 4326 return true;
+5 -1
net/bluetooth/hci_conn.c
··· 2952 2952 return 0; 2953 2953 } 2954 2954 2955 - return hci_cmd_sync_queue_once(hdev, abort_conn_sync, conn, NULL); 2955 + /* Run immediately if on cmd_sync_work since this may be called 2956 + * as a result to MGMT_OP_DISCONNECT/MGMT_OP_UNPAIR which does 2957 + * already queue its callback on cmd_sync_work. 2958 + */ 2959 + return hci_cmd_sync_run_once(hdev, abort_conn_sync, conn, NULL); 2956 2960 }
+40 -2
net/bluetooth/hci_sync.c
··· 112 112 skb_queue_tail(&req->cmd_q, skb); 113 113 } 114 114 115 - static int hci_cmd_sync_run(struct hci_request *req) 115 + static int hci_req_sync_run(struct hci_request *req) 116 116 { 117 117 struct hci_dev *hdev = req->hdev; 118 118 struct sk_buff *skb; ··· 169 169 170 170 hdev->req_status = HCI_REQ_PEND; 171 171 172 - err = hci_cmd_sync_run(&req); 172 + err = hci_req_sync_run(&req); 173 173 if (err < 0) 174 174 return ERR_PTR(err); 175 175 ··· 781 781 return hci_cmd_sync_queue(hdev, func, data, destroy); 782 782 } 783 783 EXPORT_SYMBOL(hci_cmd_sync_queue_once); 784 + 785 + /* Run HCI command: 786 + * 787 + * - hdev must be running 788 + * - if on cmd_sync_work then run immediately otherwise queue 789 + */ 790 + int hci_cmd_sync_run(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, 791 + void *data, hci_cmd_sync_work_destroy_t destroy) 792 + { 793 + /* Only queue command if hdev is running which means it had been opened 794 + * and is either on init phase or is already up. 795 + */ 796 + if (!test_bit(HCI_RUNNING, &hdev->flags)) 797 + return -ENETDOWN; 798 + 799 + /* If on cmd_sync_work then run immediately otherwise queue */ 800 + if (current_work() == &hdev->cmd_sync_work) 801 + return func(hdev, data); 802 + 803 + return hci_cmd_sync_submit(hdev, func, data, destroy); 804 + } 805 + EXPORT_SYMBOL(hci_cmd_sync_run); 806 + 807 + /* Run HCI command entry once: 808 + * 809 + * - Lookup if an entry already exist and only if it doesn't creates a new entry 810 + * and run it. 811 + * - if on cmd_sync_work then run immediately otherwise queue 812 + */ 813 + int hci_cmd_sync_run_once(struct hci_dev *hdev, hci_cmd_sync_work_func_t func, 814 + void *data, hci_cmd_sync_work_destroy_t destroy) 815 + { 816 + if (hci_cmd_sync_lookup_entry(hdev, func, data, destroy)) 817 + return 0; 818 + 819 + return hci_cmd_sync_run(hdev, func, data, destroy); 820 + } 821 + EXPORT_SYMBOL(hci_cmd_sync_run_once); 784 822 785 823 /* Lookup HCI command entry: 786 824 *
+67 -77
net/bluetooth/mgmt.c
··· 2830 2830 bt_dev_dbg(hdev, "debug_keys %u key_count %u", cp->debug_keys, 2831 2831 key_count); 2832 2832 2833 - for (i = 0; i < key_count; i++) { 2834 - struct mgmt_link_key_info *key = &cp->keys[i]; 2835 - 2836 - /* Considering SMP over BREDR/LE, there is no need to check addr_type */ 2837 - if (key->type > 0x08) 2838 - return mgmt_cmd_status(sk, hdev->id, 2839 - MGMT_OP_LOAD_LINK_KEYS, 2840 - MGMT_STATUS_INVALID_PARAMS); 2841 - } 2842 - 2843 2833 hci_dev_lock(hdev); 2844 2834 2845 2835 hci_link_keys_clear(hdev); ··· 2851 2861 key->val)) { 2852 2862 bt_dev_warn(hdev, "Skipping blocked link key for %pMR", 2853 2863 &key->addr.bdaddr); 2864 + continue; 2865 + } 2866 + 2867 + if (key->addr.type != BDADDR_BREDR) { 2868 + bt_dev_warn(hdev, 2869 + "Invalid link address type %u for %pMR", 2870 + key->addr.type, &key->addr.bdaddr); 2871 + continue; 2872 + } 2873 + 2874 + if (key->type > 0x08) { 2875 + bt_dev_warn(hdev, "Invalid link key type %u for %pMR", 2876 + key->type, &key->addr.bdaddr); 2854 2877 continue; 2855 2878 } 2856 2879 ··· 2924 2921 if (!conn) 2925 2922 return 0; 2926 2923 2927 - return hci_abort_conn_sync(hdev, conn, HCI_ERROR_REMOTE_USER_TERM); 2924 + /* Disregard any possible error since the likes of hci_abort_conn_sync 2925 + * will clean up the connection no matter the error.
2926 + */ 2927 + hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM); 2928 + 2929 + return 0; 2928 2930 } 2929 2931 2930 2932 static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data, ··· 3061 3053 return err; 3062 3054 } 3063 3055 3056 + static void disconnect_complete(struct hci_dev *hdev, void *data, int err) 3057 + { 3058 + struct mgmt_pending_cmd *cmd = data; 3059 + 3060 + cmd->cmd_complete(cmd, mgmt_status(err)); 3061 + mgmt_pending_free(cmd); 3062 + } 3063 + 3064 + static int disconnect_sync(struct hci_dev *hdev, void *data) 3065 + { 3066 + struct mgmt_pending_cmd *cmd = data; 3067 + struct mgmt_cp_disconnect *cp = cmd->param; 3068 + struct hci_conn *conn; 3069 + 3070 + if (cp->addr.type == BDADDR_BREDR) 3071 + conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, 3072 + &cp->addr.bdaddr); 3073 + else 3074 + conn = hci_conn_hash_lookup_le(hdev, &cp->addr.bdaddr, 3075 + le_addr_type(cp->addr.type)); 3076 + 3077 + if (!conn) 3078 + return -ENOTCONN; 3079 + 3080 + /* Disregard any possible error since the likes of hci_abort_conn_sync 3081 + * will clean up the connection no matter the error.
3082 + */ 3083 + hci_abort_conn(conn, HCI_ERROR_REMOTE_USER_TERM); 3084 + 3085 + return 0; 3086 + } 3087 + 3064 3088 static int disconnect(struct sock *sk, struct hci_dev *hdev, void *data, 3065 3089 u16 len) 3066 3090 { 3067 3091 struct mgmt_cp_disconnect *cp = data; 3068 3092 struct mgmt_rp_disconnect rp; 3069 3093 struct mgmt_pending_cmd *cmd; 3070 - struct hci_conn *conn; 3071 3094 int err; 3072 3095 3073 3096 bt_dev_dbg(hdev, "sock %p", sk); ··· 3121 3082 goto failed; 3122 3083 } 3123 3084 3124 - if (pending_find(MGMT_OP_DISCONNECT, hdev)) { 3125 - err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_DISCONNECT, 3126 - MGMT_STATUS_BUSY, &rp, sizeof(rp)); 3127 - goto failed; 3128 - } 3129 - 3130 - if (cp->addr.type == BDADDR_BREDR) 3131 - conn = hci_conn_hash_lookup_ba(hdev, ACL_LINK, 3132 - &cp->addr.bdaddr); 3133 - else 3134 - conn = hci_conn_hash_lookup_le(hdev, &cp->addr.bdaddr, 3135 - le_addr_type(cp->addr.type)); 3136 - 3137 - if (!conn || conn->state == BT_OPEN || conn->state == BT_CLOSED) { 3138 - err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_DISCONNECT, 3139 - MGMT_STATUS_NOT_CONNECTED, &rp, 3140 - sizeof(rp)); 3141 - goto failed; 3142 - } 3143 - 3144 - cmd = mgmt_pending_add(sk, MGMT_OP_DISCONNECT, hdev, data, len); 3085 + cmd = mgmt_pending_new(sk, MGMT_OP_DISCONNECT, hdev, data, len); 3145 3086 if (!cmd) { 3146 3087 err = -ENOMEM; 3147 3088 goto failed; ··· 3129 3110 3130 3111 cmd->cmd_complete = generic_cmd_complete; 3131 3112 3132 - err = hci_disconnect(conn, HCI_ERROR_REMOTE_USER_TERM); 3113 + err = hci_cmd_sync_queue(hdev, disconnect_sync, cmd, 3114 + disconnect_complete); 3133 3115 if (err < 0) 3134 - mgmt_pending_remove(cmd); 3116 + mgmt_pending_free(cmd); 3135 3117 3136 3118 failed: 3137 3119 hci_dev_unlock(hdev); ··· 7092 7072 7093 7073 for (i = 0; i < irk_count; i++) { 7094 7074 struct mgmt_irk_info *irk = &cp->irks[i]; 7095 - u8 addr_type = le_addr_type(irk->addr.type); 7096 7075 7097 7076 if (hci_is_blocked_key(hdev,
HCI_BLOCKED_KEY_TYPE_IRK, ··· 7101 7082 continue; 7102 7083 } 7103 7084 7104 - /* When using SMP over BR/EDR, the addr type should be set to BREDR */ 7105 - if (irk->addr.type == BDADDR_BREDR) 7106 - addr_type = BDADDR_BREDR; 7107 - 7108 7085 hci_add_irk(hdev, &irk->addr.bdaddr, 7109 - addr_type, irk->val, 7086 + le_addr_type(irk->addr.type), irk->val, 7110 7087 BDADDR_ANY); 7111 7088 } 7112 7089 ··· 7167 7152 7168 7153 bt_dev_dbg(hdev, "key_count %u", key_count); 7169 7154 7170 - for (i = 0; i < key_count; i++) { 7171 - struct mgmt_ltk_info *key = &cp->keys[i]; 7172 - 7173 - if (!ltk_is_valid(key)) 7174 - return mgmt_cmd_status(sk, hdev->id, 7175 - MGMT_OP_LOAD_LONG_TERM_KEYS, 7176 - MGMT_STATUS_INVALID_PARAMS); 7177 - } 7178 - 7179 7155 hci_dev_lock(hdev); 7180 7156 7181 7157 hci_smp_ltks_clear(hdev); ··· 7174 7168 for (i = 0; i < key_count; i++) { 7175 7169 struct mgmt_ltk_info *key = &cp->keys[i]; 7176 7170 u8 type, authenticated; 7177 - u8 addr_type = le_addr_type(key->addr.type); 7178 7171 7179 7172 if (hci_is_blocked_key(hdev, 7180 7173 HCI_BLOCKED_KEY_TYPE_LTK, 7181 7174 key->val)) { 7182 7175 bt_dev_warn(hdev, "Skipping blocked LTK for %pMR", 7176 + &key->addr.bdaddr); 7177 + continue; 7178 + } 7179 + 7180 + if (!ltk_is_valid(key)) { 7181 + bt_dev_warn(hdev, "Invalid LTK for %pMR", 7183 7182 &key->addr.bdaddr); 7184 7183 continue; 7185 7184 } ··· 7214 7203 continue; 7215 7204 } 7216 7205 7217 - /* When using SMP over BR/EDR, the addr type should be set to BREDR */ 7218 - if (key->addr.type == BDADDR_BREDR) 7219 - addr_type = BDADDR_BREDR; 7220 - 7221 7206 hci_add_ltk(hdev, &key->addr.bdaddr, 7222 - addr_type, type, authenticated, 7207 + le_addr_type(key->addr.type), type, authenticated, 7223 7208 key->val, key->enc_size, key->ediv, key->rand); 7224 7209 } 7225 7210 ··· 9509 9502 9510 9503 ev.store_hint = persistent; 9511 9504 bacpy(&ev.key.addr.bdaddr, &key->bdaddr); 9512 - ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type); 9505 +
ev.key.addr.type = BDADDR_BREDR; 9513 9506 ev.key.type = key->type; 9514 9507 memcpy(ev.key.val, key->val, HCI_LINK_KEY_SIZE); 9515 9508 ev.key.pin_len = key->pin_len; ··· 9560 9553 ev.store_hint = persistent; 9561 9554 9562 9555 bacpy(&ev.key.addr.bdaddr, &key->bdaddr); 9563 - ev.key.addr.type = link_to_bdaddr(key->link_type, key->bdaddr_type); 9556 + ev.key.addr.type = link_to_bdaddr(LE_LINK, key->bdaddr_type); 9564 9557 ev.key.type = mgmt_ltk_type(key); 9565 9558 ev.key.enc_size = key->enc_size; 9566 9559 ev.key.ediv = key->ediv; ··· 9589 9582 9590 9583 bacpy(&ev.rpa, &irk->rpa); 9591 9584 bacpy(&ev.irk.addr.bdaddr, &irk->bdaddr); 9592 - ev.irk.addr.type = link_to_bdaddr(irk->link_type, irk->addr_type); 9585 + ev.irk.addr.type = link_to_bdaddr(LE_LINK, irk->addr_type); 9593 9586 memcpy(ev.irk.val, irk->val, sizeof(irk->val)); 9594 9587 9595 9588 mgmt_event(MGMT_EV_NEW_IRK, hdev, &ev, sizeof(ev), NULL); ··· 9618 9611 ev.store_hint = persistent; 9619 9612 9620 9613 bacpy(&ev.key.addr.bdaddr, &csrk->bdaddr); 9621 - ev.key.addr.type = link_to_bdaddr(csrk->link_type, csrk->bdaddr_type); 9614 + ev.key.addr.type = link_to_bdaddr(LE_LINK, csrk->bdaddr_type); 9622 9615 ev.key.type = csrk->type; 9623 9616 memcpy(ev.key.val, csrk->val, sizeof(csrk->val)); 9624 9617 ··· 9696 9689 mgmt_event_skb(skb, NULL); 9697 9690 } 9698 9691 9699 - static void disconnect_rsp(struct mgmt_pending_cmd *cmd, void *data) 9700 - { 9701 - struct sock **sk = data; 9702 - 9703 - cmd->cmd_complete(cmd, 0); 9704 - 9705 - *sk = cmd->sk; 9706 - sock_hold(*sk); 9707 - 9708 - mgmt_pending_remove(cmd); 9709 - } 9710 - 9711 9692 static void unpair_device_rsp(struct mgmt_pending_cmd *cmd, void *data) 9712 9693 { 9713 9694 struct hci_dev *hdev = data; ··· 9739 9744 if (link_type != ACL_LINK && link_type != LE_LINK) 9740 9745 return; 9741 9746 9742 - mgmt_pending_foreach(MGMT_OP_DISCONNECT, hdev, disconnect_rsp, &sk); 9743 - 9744 9747 bacpy(&ev.addr.bdaddr, bdaddr); 9745 9748 ev.addr.type =
link_to_bdaddr(link_type, addr_type); 9746 9749 ev.reason = reason; ··· 9751 9758 9752 9759 if (sk) 9753 9760 sock_put(sk); 9754 - 9755 - mgmt_pending_foreach(MGMT_OP_UNPAIR_DEVICE, hdev, unpair_device_rsp, 9756 - hdev); 9757 9761 } 9758 9762 9759 9763 void mgmt_disconnect_failed(struct hci_dev *hdev, bdaddr_t *bdaddr,
-7
net/bluetooth/smp.c
··· 1060 1060 } 1061 1061 1062 1062 if (smp->remote_irk) { 1063 - smp->remote_irk->link_type = hcon->type; 1064 1063 mgmt_new_irk(hdev, smp->remote_irk, persistent); 1065 1064 1066 1065 /* Now that user space can be considered to know the ··· 1079 1080 } 1080 1081 1081 1082 if (smp->csrk) { 1082 - smp->csrk->link_type = hcon->type; 1083 1083 smp->csrk->bdaddr_type = hcon->dst_type; 1084 1084 bacpy(&smp->csrk->bdaddr, &hcon->dst); 1085 1085 mgmt_new_csrk(hdev, smp->csrk, persistent); 1086 1086 } 1087 1087 1088 1088 if (smp->responder_csrk) { 1089 - smp->responder_csrk->link_type = hcon->type; 1090 1089 smp->responder_csrk->bdaddr_type = hcon->dst_type; 1091 1090 bacpy(&smp->responder_csrk->bdaddr, &hcon->dst); 1092 1091 mgmt_new_csrk(hdev, smp->responder_csrk, persistent); 1093 1092 } 1094 1093 1095 1094 if (smp->ltk) { 1096 - smp->ltk->link_type = hcon->type; 1097 1095 smp->ltk->bdaddr_type = hcon->dst_type; 1098 1096 bacpy(&smp->ltk->bdaddr, &hcon->dst); 1099 1097 mgmt_new_ltk(hdev, smp->ltk, persistent); 1100 1098 } 1101 1099 1102 1100 if (smp->responder_ltk) { 1103 - smp->responder_ltk->link_type = hcon->type; 1104 1101 smp->responder_ltk->bdaddr_type = hcon->dst_type; 1105 1102 bacpy(&smp->responder_ltk->bdaddr, &hcon->dst); 1106 1103 mgmt_new_ltk(hdev, smp->responder_ltk, persistent); ··· 1116 1121 key = hci_add_link_key(hdev, smp->conn->hcon, &hcon->dst, 1117 1122 smp->link_key, type, 0, &persistent); 1118 1123 if (key) { 1119 - key->link_type = hcon->type; 1120 - key->bdaddr_type = hcon->dst_type; 1121 1124 mgmt_new_link_key(hdev, key, persistent); 1122 1125 1123 1126 /* Don't keep debug keys around if the relevant
+2 -4
net/bridge/br_fdb.c
··· 1469 1469 modified = true; 1470 1470 } 1471 1471 1472 - if (test_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) { 1472 + if (test_and_set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags)) { 1473 1473 /* Refresh entry */ 1474 1474 fdb->used = jiffies; 1475 - } else if (!test_bit(BR_FDB_ADDED_BY_USER, &fdb->flags)) { 1476 - /* Take over SW learned entry */ 1477 - set_bit(BR_FDB_ADDED_BY_EXT_LEARN, &fdb->flags); 1475 + } else { 1478 1476 modified = true; 1479 1477 } 1480 1478
+4
net/can/bcm.c
··· 1470 1470 1471 1471 /* remove device reference, if this is our bound device */ 1472 1472 if (bo->bound && bo->ifindex == dev->ifindex) { 1473 + #if IS_ENABLED(CONFIG_PROC_FS) 1474 + if (sock_net(sk)->can.bcmproc_dir && bo->bcm_proc_read) 1475 + remove_proc_entry(bo->procname, sock_net(sk)->can.bcmproc_dir); 1476 + #endif 1473 1477 bo->bound = 0; 1474 1478 bo->ifindex = 0; 1475 1479 notify_enodev = 1;
+1 -1
net/core/net-sysfs.c
··· 1524 1524 }; 1525 1525 #else 1526 1526 /* Fake declaration, all the code using it should be dead */ 1527 - extern const struct attribute_group dql_group; 1527 + static const struct attribute_group dql_group = {}; 1528 1528 #endif /* CONFIG_BQL */ 1529 1529 1530 1530 #ifdef CONFIG_XPS
+24 -5
net/ipv4/fou_core.c
··· 50 50 51 51 static inline struct fou *fou_from_sock(struct sock *sk) 52 52 { 53 - return sk->sk_user_data; 53 + return rcu_dereference_sk_user_data(sk); 54 54 } 55 55 56 56 static int fou_recv_pull(struct sk_buff *skb, struct fou *fou, size_t len) ··· 233 233 struct sk_buff *skb) 234 234 { 235 235 const struct net_offload __rcu **offloads; 236 - u8 proto = fou_from_sock(sk)->protocol; 236 + struct fou *fou = fou_from_sock(sk); 237 237 const struct net_offload *ops; 238 238 struct sk_buff *pp = NULL; 239 + u8 proto; 240 + 241 + if (!fou) 242 + goto out; 243 + 244 + proto = fou->protocol; 239 245 240 246 /* We can clear the encap_mark for FOU as we are essentially doing 241 247 * one of two possible things. We are either adding an L4 tunnel ··· 269 263 int nhoff) 270 264 { 271 265 const struct net_offload __rcu **offloads; 272 - u8 proto = fou_from_sock(sk)->protocol; 266 + struct fou *fou = fou_from_sock(sk); 273 267 const struct net_offload *ops; 274 - int err = -ENOSYS; 268 + u8 proto; 269 + int err; 270 + 271 + if (!fou) { 272 + err = -ENOENT; 273 + goto out; 274 + } 275 + 276 + proto = fou->protocol; 275 277 276 278 offloads = NAPI_GRO_CB(skb)->is_ipv6 ? inet6_offloads : inet_offloads; 277 279 ops = rcu_dereference(offloads[proto]); 278 - if (WARN_ON(!ops || !ops->callbacks.gro_complete)) 280 + if (WARN_ON(!ops || !ops->callbacks.gro_complete)) { 281 + err = -ENOSYS; 279 282 goto out; 283 + } 280 284 281 285 err = ops->callbacks.gro_complete(skb, nhoff); 282 286 ··· 335 319 struct fou *fou = fou_from_sock(sk); 336 320 struct gro_remcsum grc; 337 321 u8 proto; 322 + 323 + if (!fou) 324 + goto out; 338 325 339 326 skb_gro_remcsum_init(&grc); 340 327
+1 -1
net/ipv4/tcp_bpf.c
··· 577 577 err = sk_stream_error(sk, msg->msg_flags, err); 578 578 release_sock(sk); 579 579 sk_psock_put(sk, psock); 580 - return copied ? copied : err; 580 + return copied > 0 ? copied : err; 581 581 } 582 582 583 583 enum {
+1
net/ipv6/ila/ila.h
··· 108 108 void ila_lwt_fini(void); 109 109 110 110 int ila_xlat_init_net(struct net *net); 111 + void ila_xlat_pre_exit_net(struct net *net); 111 112 void ila_xlat_exit_net(struct net *net); 112 113 113 114 int ila_xlat_nl_cmd_add_mapping(struct sk_buff *skb, struct genl_info *info);
+6
net/ipv6/ila/ila_main.c
··· 71 71 return err; 72 72 } 73 73 74 + static __net_exit void ila_pre_exit_net(struct net *net) 75 + { 76 + ila_xlat_pre_exit_net(net); 77 + } 78 + 74 79 static __net_exit void ila_exit_net(struct net *net) 75 80 { 76 81 ila_xlat_exit_net(net); ··· 83 78 84 79 static struct pernet_operations ila_net_ops = { 85 80 .init = ila_init_net, 81 + .pre_exit = ila_pre_exit_net, 86 82 .exit = ila_exit_net, 87 83 .id = &ila_net_id, 88 84 .size = sizeof(struct ila_net),
+9 -4
net/ipv6/ila/ila_xlat.c
··· 619 619 return 0; 620 620 } 621 621 622 + void ila_xlat_pre_exit_net(struct net *net) 623 + { 624 + struct ila_net *ilan = net_generic(net, ila_net_id); 625 + 626 + if (ilan->xlat.hooks_registered) 627 + nf_unregister_net_hooks(net, ila_nf_hook_ops, 628 + ARRAY_SIZE(ila_nf_hook_ops)); 629 + } 630 + 622 631 void ila_xlat_exit_net(struct net *net) 623 632 { 624 633 struct ila_net *ilan = net_generic(net, ila_net_id); ··· 635 626 rhashtable_free_and_destroy(&ilan->xlat.rhash_table, ila_free_cb, NULL); 636 627 637 628 free_bucket_spinlocks(ilan->xlat.locks); 638 - 639 - if (ilan->xlat.hooks_registered) 640 - nf_unregister_net_hooks(net, ila_nf_hook_ops, 641 - ARRAY_SIZE(ila_nf_hook_ops)); 642 629 } 643 630 644 631 static int ila_xlat_addr(struct sk_buff *skb, bool sir2ila)
+7 -4
net/sched/sch_cake.c
··· 786 786 * queue, accept the collision, update the host tags. 787 787 */ 788 788 q->way_collisions++; 789 - if (q->flows[outer_hash + k].set == CAKE_SET_BULK) { 790 - q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--; 791 - q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--; 792 - } 793 789 allocate_src = cake_dsrc(flow_mode); 794 790 allocate_dst = cake_ddst(flow_mode); 791 + 792 + if (q->flows[outer_hash + k].set == CAKE_SET_BULK) { 793 + if (allocate_src) 794 + q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--; 795 + if (allocate_dst) 796 + q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--; 797 + } 795 798 found: 796 799 /* reserve queue for future packets in same flow */ 797 800 reduced_hash = outer_hash + k;
+4 -5
net/sched/sch_netem.c
··· 742 742 743 743 err = qdisc_enqueue(skb, q->qdisc, &to_free); 744 744 kfree_skb_list(to_free); 745 - if (err != NET_XMIT_SUCCESS && 746 - net_xmit_drop_count(err)) { 747 - qdisc_qstats_drop(sch); 748 - qdisc_tree_reduce_backlog(sch, 1, 749 - pkt_len); 745 + if (err != NET_XMIT_SUCCESS) { 746 + if (net_xmit_drop_count(err)) 747 + qdisc_qstats_drop(sch); 748 + qdisc_tree_reduce_backlog(sch, 1, pkt_len); 750 749 } 751 750 goto tfifo_dequeue; 752 751 }
+3
net/smc/smc.h
··· 284 284 285 285 struct smc_sock { /* smc sock container */ 286 286 struct sock sk; 287 + #if IS_ENABLED(CONFIG_IPV6) 288 + struct ipv6_pinfo *pinet6; 289 + #endif 287 290 struct socket *clcsock; /* internal tcp socket */ 288 291 void (*clcsk_state_change)(struct sock *sk); 289 292 /* original stat_change fct. */
+7 -1
net/smc/smc_inet.c
··· 60 60 }; 61 61 62 62 #if IS_ENABLED(CONFIG_IPV6) 63 + struct smc6_sock { 64 + struct smc_sock smc; 65 + struct ipv6_pinfo inet6; 66 + }; 67 + 63 68 static struct proto smc_inet6_prot = { 64 69 .name = "INET6_SMC", 65 70 .owner = THIS_MODULE, ··· 72 67 .hash = smc_hash_sk, 73 68 .unhash = smc_unhash_sk, 74 69 .release_cb = smc_release_cb, 75 - .obj_size = sizeof(struct smc_sock), 70 + .obj_size = sizeof(struct smc6_sock), 76 71 .h.smc_hash = &smc_v6_hashinfo, 77 72 .slab_flags = SLAB_TYPESAFE_BY_RCU, 73 + .ipv6_pinfo_offset = offsetof(struct smc6_sock, inet6), 78 74 }; 79 75 80 76 static const struct proto_ops smc_inet6_stream_ops = {
+2 -2
net/socket.c
··· 2362 2362 int do_sock_getsockopt(struct socket *sock, bool compat, int level, 2363 2363 int optname, sockptr_t optval, sockptr_t optlen) 2364 2364 { 2365 - int max_optlen __maybe_unused; 2365 + int max_optlen __maybe_unused = 0; 2366 2366 const struct proto_ops *ops; 2367 2367 int err; 2368 2368 ··· 2371 2371 return err; 2372 2372 2373 2373 if (!compat) 2374 - max_optlen = BPF_CGROUP_GETSOCKOPT_MAX_OPTLEN(optlen); 2374 + copy_from_sockptr(&max_optlen, optlen, sizeof(int)); 2375 2375 2376 2376 ops = READ_ONCE(sock->ops); 2377 2377 if (level == SOL_SOCKET) {
+1 -1
rust/Makefile
··· 305 305 quiet_cmd_exports = EXPORTS $@ 306 306 cmd_exports = \ 307 307 $(NM) -p --defined-only $< \ 308 - | awk '/ (T|R|D) / {printf "EXPORT_SYMBOL_RUST_GPL(%s);\n",$$3}' > $@ 308 + | awk '/ (T|R|D|B) / {printf "EXPORT_SYMBOL_RUST_GPL(%s);\n",$$3}' > $@ 309 309 310 310 $(obj)/exports_core_generated.h: $(obj)/core.o FORCE 311 311 $(call if_changed,exports)
+4 -2
rust/kernel/alloc/box_ext.rs
··· 21 21 22 22 impl<T> BoxExt<T> for Box<T> { 23 23 fn new(x: T, flags: Flags) -> Result<Self, AllocError> { 24 - let b = <Self as BoxExt<_>>::new_uninit(flags)?; 25 - Ok(Box::write(b, x)) 24 + let mut b = <Self as BoxExt<_>>::new_uninit(flags)?; 25 + b.write(x); 26 + // SAFETY: We just wrote to it. 27 + Ok(unsafe { b.assume_init() }) 26 28 } 27 29 28 30 #[cfg(any(test, testlib))]
+2 -4
rust/kernel/block/mq/gen_disk.rs
··· 6 6 //! C header: [`include/linux/blk_mq.h`](srctree/include/linux/blk_mq.h) 7 7 8 8 use crate::block::mq::{raw_writer::RawWriter, Operations, TagSet}; 9 - use crate::error; 10 9 use crate::{bindings, error::from_err_ptr, error::Result, sync::Arc}; 10 + use crate::{error, static_lock_class}; 11 11 use core::fmt::{self, Write}; 12 12 13 13 /// A builder for [`GenDisk`]. ··· 93 93 name: fmt::Arguments<'_>, 94 94 tagset: Arc<TagSet<T>>, 95 95 ) -> Result<GenDisk<T>> { 96 - let lock_class_key = crate::sync::LockClassKey::new(); 97 - 98 96 // SAFETY: `bindings::queue_limits` contain only fields that are valid when zeroed. 99 97 let mut lim: bindings::queue_limits = unsafe { core::mem::zeroed() }; 100 98 ··· 108 110 tagset.raw_tag_set(), 109 111 &mut lim, 110 112 core::ptr::null_mut(), 111 - lock_class_key.as_ptr(), 113 + static_lock_class!().as_ptr(), 112 114 ) 113 115 })?; 114 116
+2 -2
rust/kernel/init/macros.rs
··· 145 145 //! } 146 146 //! } 147 147 //! // Implement the internal `PinData` trait that marks the pin-data struct as a pin-data 148 - //! // struct. This is important to ensure that no user can implement a rouge `__pin_data` 148 + //! // struct. This is important to ensure that no user can implement a rogue `__pin_data` 149 149 //! // function without using `unsafe`. 150 150 //! unsafe impl<T> ::kernel::init::__internal::PinData for __ThePinData<T> { 151 151 //! type Datee = Bar<T>; ··· 156 156 //! // case no such fields exist, hence this is almost empty. The two phantomdata fields exist 157 157 //! // for two reasons: 158 158 //! // - `__phantom`: every generic must be used, since we cannot really know which generics 159 - //! // are used, we declere all and then use everything here once. 159 + //! // are used, we declare all and then use everything here once. 160 160 //! // - `__phantom_pin`: uses the `'__pin` lifetime and ensures that this struct is invariant 161 161 //! // over it. The lifetime is needed to work around the limitation that trait bounds must 162 162 //! // not be trivial, e.g. the user has a `#[pin] PhantomPinned` field -- this is
+1 -1
rust/kernel/net/phy.rs
··· 491 491 pub struct DriverVTable(Opaque<bindings::phy_driver>); 492 492 493 493 // SAFETY: `DriverVTable` doesn't expose any &self method to access internal data, so it's safe to 494 - // share `&DriverVTable` across execution context boundries. 494 + // share `&DriverVTable` across execution context boundaries. 495 495 unsafe impl Sync for DriverVTable {} 496 496 497 497 /// Creates a [`DriverVTable`] instance from [`Driver`].
+5 -1
rust/macros/module.rs
··· 217 217 // freed until the module is unloaded. 218 218 #[cfg(MODULE)] 219 219 static THIS_MODULE: kernel::ThisModule = unsafe {{ 220 - kernel::ThisModule::from_ptr(&kernel::bindings::__this_module as *const _ as *mut _) 220 + extern \"C\" {{ 221 + static __this_module: kernel::types::Opaque<kernel::bindings::module>; 222 + }} 223 + 224 + kernel::ThisModule::from_ptr(__this_module.get()) 221 225 }}; 222 226 #[cfg(not(MODULE))] 223 227 static THIS_MODULE: kernel::ThisModule = unsafe {{
+49 -17
scripts/gfp-translate
··· 62 62 fi 63 63 64 64 # Extract GFP flags from the kernel source 65 - TMPFILE=`mktemp -t gfptranslate-XXXXXX` || exit 1 66 - grep -q ___GFP $SOURCE/include/linux/gfp_types.h 67 - if [ $? -eq 0 ]; then 68 - grep "^#define ___GFP" $SOURCE/include/linux/gfp_types.h | sed -e 's/u$//' | grep -v GFP_BITS > $TMPFILE 69 - else 70 - grep "^#define __GFP" $SOURCE/include/linux/gfp_types.h | sed -e 's/(__force gfp_t)//' | sed -e 's/u)/)/' | grep -v GFP_BITS | sed -e 's/)\//) \//' > $TMPFILE 71 - fi 65 + TMPFILE=`mktemp -t gfptranslate-XXXXXX.c` || exit 1 72 66 73 - # Parse the flags 74 - IFS=" 75 - " 76 67 echo Source: $SOURCE 77 68 echo Parsing: $GFPMASK 78 - for LINE in `cat $TMPFILE`; do 79 - MASK=`echo $LINE | awk '{print $3}'` 80 - if [ $(($GFPMASK&$MASK)) -ne 0 ]; then 81 - echo $LINE 82 - fi 83 - done 84 69 85 - rm -f $TMPFILE 70 + ( 71 + cat <<EOF 72 + #include <stdint.h> 73 + #include <stdio.h> 74 + 75 + // Try to fool compiler.h into not including extra stuff 76 + #define __ASSEMBLY__ 1 77 + 78 + #include <generated/autoconf.h> 79 + #include <linux/gfp_types.h> 80 + 81 + static const char *masks[] = { 82 + EOF 83 + 84 + sed -nEe 's/^[[:space:]]+(___GFP_.*)_BIT,.*$/\1/p' $SOURCE/include/linux/gfp_types.h | 85 + while read b; do 86 + cat <<EOF 87 + #if defined($b) && ($b > 0) 88 + [${b}_BIT] = "$b", 89 + #endif 90 + EOF 91 + done 92 + 93 + cat <<EOF 94 + }; 95 + 96 + int main(int argc, char *argv[]) 97 + { 98 + unsigned long long mask = $GFPMASK; 99 + 100 + for (int i = 0; i < sizeof(mask) * 8; i++) { 101 + unsigned long long bit = 1ULL << i; 102 + if (mask & bit) 103 + printf("\t%-25s0x%llx\n", 104 + (i < ___GFP_LAST_BIT && masks[i]) ? 105 + masks[i] : "*** INVALID ***", 106 + bit); 107 + } 108 + 109 + return 0; 110 + } 111 + EOF 112 + ) > $TMPFILE 113 + 114 + ${CC:-gcc} -Wall -o ${TMPFILE}.bin -I $SOURCE/include $TMPFILE && ${TMPFILE}.bin 115 + 116 + rm -f $TMPFILE ${TMPFILE}.bin 117 + 86 118 exit 0
+11
sound/pci/hda/patch_conexant.c
··· 307 307 CXT_FIXUP_HEADSET_MIC, 308 308 CXT_FIXUP_HP_MIC_NO_PRESENCE, 309 309 CXT_PINCFG_SWS_JS201D, 310 + CXT_PINCFG_TOP_SPEAKER, 310 311 }; 311 312 312 313 /* for hda_fixup_thinkpad_acpi() */ ··· 975 974 .type = HDA_FIXUP_PINS, 976 975 .v.pins = cxt_pincfg_sws_js201d, 977 976 }, 977 + [CXT_PINCFG_TOP_SPEAKER] = { 978 + .type = HDA_FIXUP_PINS, 979 + .v.pins = (const struct hda_pintbl[]) { 980 + { 0x1d, 0x82170111 }, 981 + { } 982 + }, 983 + }, 978 984 }; 979 985 980 986 static const struct snd_pci_quirk cxt5045_fixups[] = { ··· 1078 1070 SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI), 1079 1071 SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004), 1080 1072 SND_PCI_QUIRK(0x1c06, 0x2012, "Lemote A1205", CXT_PINCFG_LEMOTE_A1205), 1073 + SND_PCI_QUIRK(0x2782, 0x12c3, "Sirius Gen1", CXT_PINCFG_TOP_SPEAKER), 1074 + SND_PCI_QUIRK(0x2782, 0x12c5, "Sirius Gen2", CXT_PINCFG_TOP_SPEAKER), 1081 1075 {} 1082 1076 }; 1083 1077 ··· 1099 1089 { .id = CXT_FIXUP_HP_MIC_NO_PRESENCE, .name = "hp-mic-fix" }, 1100 1090 { .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" }, 1101 1091 { .id = CXT_PINCFG_SWS_JS201D, .name = "sws-js201d" }, 1092 + { .id = CXT_PINCFG_TOP_SPEAKER, .name = "sirius-top-speaker" }, 1102 1093 {} 1103 1094 }; 1104 1095
+1
sound/pci/hda/patch_hdmi.c
··· 4639 4639 HDA_CODEC_ENTRY(0x8086281e, "Battlemage HDMI", patch_i915_adlp_hdmi), 4640 4640 HDA_CODEC_ENTRY(0x8086281f, "Raptor Lake P HDMI", patch_i915_adlp_hdmi), 4641 4641 HDA_CODEC_ENTRY(0x80862820, "Lunar Lake HDMI", patch_i915_adlp_hdmi), 4642 + HDA_CODEC_ENTRY(0x80862822, "Panther Lake HDMI", patch_i915_adlp_hdmi), 4642 4643 HDA_CODEC_ENTRY(0x80862880, "CedarTrail HDMI", patch_generic_hdmi), 4643 4644 HDA_CODEC_ENTRY(0x80862882, "Valleyview2 HDMI", patch_i915_byt_hdmi), 4644 4645 HDA_CODEC_ENTRY(0x80862883, "Braswell HDMI", patch_i915_byt_hdmi),
+21 -1
sound/pci/hda/patch_realtek.c
··· 7538 7538 ALC236_FIXUP_HP_GPIO_LED, 7539 7539 ALC236_FIXUP_HP_MUTE_LED, 7540 7540 ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF, 7541 + ALC236_FIXUP_LENOVO_INV_DMIC, 7541 7542 ALC298_FIXUP_SAMSUNG_AMP, 7542 7543 ALC298_FIXUP_SAMSUNG_AMP2, 7543 7544 ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, ··· 7638 7637 ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7, 7639 7638 ALC287_FIXUP_LENOVO_SSID_17AA3820, 7640 7639 ALCXXX_FIXUP_CS35LXX, 7640 + ALC245_FIXUP_CLEVO_NOISY_MIC, 7641 7641 }; 7642 7642 7643 7643 /* A special fixup for Lenovo C940 and Yoga Duet 7; ··· 9163 9161 .type = HDA_FIXUP_FUNC, 9164 9162 .v.func = alc236_fixup_hp_mute_led_micmute_vref, 9165 9163 }, 9164 + [ALC236_FIXUP_LENOVO_INV_DMIC] = { 9165 + .type = HDA_FIXUP_FUNC, 9166 + .v.func = alc_fixup_inv_dmic, 9167 + .chained = true, 9168 + .chain_id = ALC283_FIXUP_INT_MIC, 9169 + }, 9166 9170 [ALC298_FIXUP_SAMSUNG_AMP] = { 9167 9171 .type = HDA_FIXUP_FUNC, 9168 9172 .v.func = alc298_fixup_samsung_amp, ··· 9978 9970 .type = HDA_FIXUP_FUNC, 9979 9971 .v.func = cs35lxx_autodet_fixup, 9980 9972 }, 9973 + [ALC245_FIXUP_CLEVO_NOISY_MIC] = { 9974 + .type = HDA_FIXUP_FUNC, 9975 + .v.func = alc269_fixup_limit_int_mic_boost, 9976 + .chained = true, 9977 + .chain_id = ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE, 9978 + }, 9981 9979 }; 9982 9980 9983 9981 static const struct snd_pci_quirk alc269_fixup_tbl[] = { ··· 10232 10218 SND_PCI_QUIRK(0x103c, 0x87f5, "HP", ALC287_FIXUP_HP_GPIO_LED), 10233 10219 SND_PCI_QUIRK(0x103c, 0x87f6, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP), 10234 10220 SND_PCI_QUIRK(0x103c, 0x87f7, "HP Spectre x360 14", ALC245_FIXUP_HP_X360_AMP), 10221 + SND_PCI_QUIRK(0x103c, 0x87fd, "HP Laptop 14-dq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), 10235 10222 SND_PCI_QUIRK(0x103c, 0x87fe, "HP Laptop 15s-fq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2), 10236 10223 SND_PCI_QUIRK(0x103c, 0x8805, "HP ProBook 650 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED), 10237 10224 SND_PCI_QUIRK(0x103c, 0x880d, "HP EliteBook 830 G8 Notebook PC", 
ALC285_FIXUP_HP_GPIO_LED), ··· 10357 10342 SND_PCI_QUIRK(0x103c, 0x8c16, "HP Spectre 16", ALC287_FIXUP_CS35L41_I2C_2), 10358 10343 SND_PCI_QUIRK(0x103c, 0x8c17, "HP Spectre 16", ALC287_FIXUP_CS35L41_I2C_2), 10359 10344 SND_PCI_QUIRK(0x103c, 0x8c21, "HP Pavilion Plus Laptop 14-ey0XXX", ALC245_FIXUP_HP_X360_MUTE_LEDS), 10345 + SND_PCI_QUIRK(0x103c, 0x8c30, "HP Victus 15-fb1xxx", ALC245_FIXUP_HP_MUTE_LED_COEFBIT), 10360 10346 SND_PCI_QUIRK(0x103c, 0x8c46, "HP EliteBook 830 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 10361 10347 SND_PCI_QUIRK(0x103c, 0x8c47, "HP EliteBook 840 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), 10362 10348 SND_PCI_QUIRK(0x103c, 0x8c48, "HP EliteBook 860 G11", ALC245_FIXUP_CS35L41_SPI_2_HP_GPIO_LED), ··· 10495 10479 SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2), 10496 10480 SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502), 10497 10481 SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2), 10482 + SND_PCI_QUIRK(0x1043, 0x1e1f, "ASUS Vivobook 15 X1504VAP", ALC2XX_FIXUP_HEADSET_MIC), 10498 10483 SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS), 10499 10484 SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS), 10500 10485 SND_PCI_QUIRK(0x1043, 0x1e63, "ASUS H7606W", ALC285_FIXUP_CS35L56_I2C_2), ··· 10636 10619 SND_PCI_QUIRK(0x1558, 0xa600, "Clevo NL50NU", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10637 10620 SND_PCI_QUIRK(0x1558, 0xa650, "Clevo NP[567]0SN[CD]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10638 10621 SND_PCI_QUIRK(0x1558, 0xa671, "Clevo NP70SN[CDE]", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10639 - SND_PCI_QUIRK(0x1558, 0xa763, "Clevo V54x_6x_TU", ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10622 + SND_PCI_QUIRK(0x1558, 0xa741, "Clevo V54x_6x_TNE", ALC245_FIXUP_CLEVO_NOISY_MIC), 10623 + SND_PCI_QUIRK(0x1558, 0xa763, "Clevo V54x_6x_TU", ALC245_FIXUP_CLEVO_NOISY_MIC), 10640 10624 SND_PCI_QUIRK(0x1558, 
0xb018, "Clevo NP50D[BE]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10641 10625 SND_PCI_QUIRK(0x1558, 0xb019, "Clevo NH77D[BE]Q", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), 10642 10626 SND_PCI_QUIRK(0x1558, 0xb022, "Clevo NH77D[DC][QW]", ALC293_FIXUP_SYSTEM76_MIC_NO_PRESENCE), ··· 10760 10742 SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2), 10761 10743 SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2), 10762 10744 SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 10745 + SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC), 10763 10746 SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), 10764 10747 SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo B50-70", ALC269_FIXUP_DMIC_THINKPAD_ACPI), 10765 10748 SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K), ··· 11013 10994 {.id = ALC623_FIXUP_LENOVO_THINKSTATION_P340, .name = "alc623-lenovo-thinkstation-p340"}, 11014 10995 {.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"}, 11015 10996 {.id = ALC285_FIXUP_HP_GPIO_AMP_INIT, .name = "alc285-hp-amp-init"}, 10997 + {.id = ALC236_FIXUP_LENOVO_INV_DMIC, .name = "alc236-fixup-lenovo-inv-mic"}, 11016 10998 {} 11017 10999 }; 11018 11000 #define ALC225_STANDARD_PINS \
+7
sound/soc/amd/yc/acp6x-mach.c
··· 356 356 { 357 357 .driver_data = &acp6x_card, 358 358 .matches = { 359 + DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."), 360 + DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VEK"), 361 + } 362 + }, 363 + { 364 + .driver_data = &acp6x_card, 365 + .matches = { 359 366 DMI_MATCH(DMI_BOARD_VENDOR, "Alienware"), 360 367 DMI_MATCH(DMI_PRODUCT_NAME, "Alienware m17 R5 AMD"), 361 368 }
+1
sound/soc/codecs/chv3-codec.c
··· 26 26 { .compatible = "google,chv3-codec", }, 27 27 { } 28 28 }; 29 + MODULE_DEVICE_TABLE(of, chv3_codec_of_match); 29 30 30 31 static struct platform_driver chv3_codec_platform_driver = { 31 32 .driver = {
+10 -1
sound/soc/codecs/lpass-va-macro.c
··· 228 228 struct va_macro_data { 229 229 bool has_swr_master; 230 230 bool has_npl_clk; 231 + int version; 231 232 }; 232 233 233 234 static const struct va_macro_data sm8250_va_data = { 234 235 .has_swr_master = false, 235 236 .has_npl_clk = false, 237 + .version = LPASS_CODEC_VERSION_1_0, 236 238 }; 237 239 238 240 static const struct va_macro_data sm8450_va_data = { ··· 1589 1587 goto err_npl; 1590 1588 } 1591 1589 1592 - va_macro_set_lpass_codec_version(va); 1590 + /** 1591 + * old version of codecs do not have a reliable way to determine the 1592 + * version from registers, get them from soc specific data 1593 + */ 1594 + if (data->version) 1595 + lpass_macro_set_codec_version(data->version); 1596 + else /* read version from register */ 1597 + va_macro_set_lpass_codec_version(va); 1593 1598 1594 1599 if (va->has_swr_master) { 1595 1600 /* Set default CLK div to 1 */
+1
sound/soc/codecs/tda7419.c
··· 623 623 { .compatible = "st,tda7419" }, 624 624 { }, 625 625 }; 626 + MODULE_DEVICE_TABLE(of, tda7419_of_match); 626 627 627 628 static struct i2c_driver tda7419_driver = { 628 629 .driver = {
+1
sound/soc/google/chv3-i2s.c
··· 322 322 { .compatible = "google,chv3-i2s" }, 323 323 {}, 324 324 }; 325 + MODULE_DEVICE_TABLE(of, chv3_i2s_of_match); 325 326 326 327 static struct platform_driver chv3_i2s_driver = { 327 328 .probe = chv3_i2s_probe,
+1 -1
sound/soc/intel/boards/bxt_rt298.c
··· 605 605 int i; 606 606 607 607 for (i = 0; i < ARRAY_SIZE(broxton_rt298_dais); i++) { 608 - if (card->dai_link[i].codecs->name && 608 + if (card->dai_link[i].num_codecs && 609 609 !strncmp(card->dai_link[i].codecs->name, "i2c-INT343A:00", 610 610 I2C_NAME_SIZE)) { 611 611 if (!strncmp(card->name, "broxton-rt298",
+1 -1
sound/soc/intel/boards/bytcht_cx2072x.c
··· 241 241 242 242 /* fix index of codec dai */ 243 243 for (i = 0; i < ARRAY_SIZE(byt_cht_cx2072x_dais); i++) { 244 - if (byt_cht_cx2072x_dais[i].codecs->name && 244 + if (byt_cht_cx2072x_dais[i].num_codecs && 245 245 !strcmp(byt_cht_cx2072x_dais[i].codecs->name, 246 246 "i2c-14F10720:00")) { 247 247 dai_index = i;
+1 -1
sound/soc/intel/boards/bytcht_da7213.c
··· 245 245 246 246 /* fix index of codec dai */ 247 247 for (i = 0; i < ARRAY_SIZE(dailink); i++) { 248 - if (dailink[i].codecs->name && 248 + if (dailink[i].num_codecs && 249 249 !strcmp(dailink[i].codecs->name, "i2c-DLGS7213:00")) { 250 250 dai_index = i; 251 251 break;
+1 -1
sound/soc/intel/boards/bytcht_es8316.c
··· 546 546 547 547 /* fix index of codec dai */ 548 548 for (i = 0; i < ARRAY_SIZE(byt_cht_es8316_dais); i++) { 549 - if (byt_cht_es8316_dais[i].codecs->name && 549 + if (byt_cht_es8316_dais[i].num_codecs && 550 550 !strcmp(byt_cht_es8316_dais[i].codecs->name, 551 551 "i2c-ESSX8316:00")) { 552 552 dai_index = i;
+1 -1
sound/soc/intel/boards/bytcr_rt5640.c
··· 1677 1677 1678 1678 /* fix index of codec dai */ 1679 1679 for (i = 0; i < ARRAY_SIZE(byt_rt5640_dais); i++) { 1680 - if (byt_rt5640_dais[i].codecs->name && 1680 + if (byt_rt5640_dais[i].num_codecs && 1681 1681 !strcmp(byt_rt5640_dais[i].codecs->name, 1682 1682 "i2c-10EC5640:00")) { 1683 1683 dai_index = i;
+1 -1
sound/soc/intel/boards/bytcr_rt5651.c
··· 910 910 911 911 /* fix index of codec dai */ 912 912 for (i = 0; i < ARRAY_SIZE(byt_rt5651_dais); i++) { 913 - if (byt_rt5651_dais[i].codecs->name && 913 + if (byt_rt5651_dais[i].num_codecs && 914 914 !strcmp(byt_rt5651_dais[i].codecs->name, 915 915 "i2c-10EC5651:00")) { 916 916 dai_index = i;
+1 -1
sound/soc/intel/boards/bytcr_wm5102.c
··· 605 605 606 606 /* find index of codec dai */ 607 607 for (i = 0; i < ARRAY_SIZE(byt_wm5102_dais); i++) { 608 - if (byt_wm5102_dais[i].codecs->name && 608 + if (byt_wm5102_dais[i].num_codecs && 609 609 !strcmp(byt_wm5102_dais[i].codecs->name, 610 610 "wm5102-codec")) { 611 611 dai_index = i;
+1 -1
sound/soc/intel/boards/cht_bsw_rt5645.c
··· 569 569 570 570 /* set correct codec name */ 571 571 for (i = 0; i < ARRAY_SIZE(cht_dailink); i++) 572 - if (cht_dailink[i].codecs->name && 572 + if (cht_dailink[i].num_codecs && 573 573 !strcmp(cht_dailink[i].codecs->name, 574 574 "i2c-10EC5645:00")) { 575 575 dai_index = i;
+1 -1
sound/soc/intel/boards/cht_bsw_rt5672.c
··· 466 466 467 467 /* find index of codec dai */ 468 468 for (i = 0; i < ARRAY_SIZE(cht_dailink); i++) { 469 - if (cht_dailink[i].codecs->name && 469 + if (cht_dailink[i].num_codecs && 470 470 !strcmp(cht_dailink[i].codecs->name, RT5672_I2C_DEFAULT)) { 471 471 dai_index = i; 472 472 break;
-1
sound/soc/intel/common/soc-acpi-intel-cht-match.c
··· 84 84 /* Lenovo Yoga Tab 3 Pro YT3-X90, codec missing from DSDT */ 85 85 .matches = { 86 86 DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"), 87 - DMI_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"), 88 87 DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"), 89 88 }, 90 89 },
+1
sound/soc/intel/keembay/kmb_platform.c
··· 814 814 { .compatible = "intel,keembay-tdm", .data = &intel_kmb_tdm_dai}, 815 815 {} 816 816 }; 817 + MODULE_DEVICE_TABLE(of, kmb_plat_of_match); 817 818 818 819 static int kmb_plat_dai_probe(struct platform_device *pdev) 819 820 {
+13 -4
sound/soc/mediatek/mt8188/mt8188-mt6359.c
··· 734 734 struct mtk_soc_card_data *soc_card_data = snd_soc_card_get_drvdata(rtd->card); 735 735 struct snd_soc_jack *jack = &soc_card_data->card_data->jacks[MT8188_JACK_HEADSET]; 736 736 struct snd_soc_component *component = snd_soc_rtd_to_codec(rtd, 0)->component; 737 + struct mtk_platform_card_data *card_data = soc_card_data->card_data; 737 738 int ret; 738 739 739 740 ret = snd_soc_dapm_new_controls(&card->dapm, mt8188_nau8825_widgets, ··· 763 762 return ret; 764 763 } 765 764 766 - snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_PLAYPAUSE); 767 - snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOICECOMMAND); 768 - snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEUP); 769 - snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOLUMEDOWN); 765 + if (card_data->flags & ES8326_HS_PRESENT) { 766 + snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_PLAYPAUSE); 767 + snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOLUMEUP); 768 + snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEDOWN); 769 + snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOICECOMMAND); 770 + } else { 771 + snd_jack_set_key(jack->jack, SND_JACK_BTN_0, KEY_PLAYPAUSE); 772 + snd_jack_set_key(jack->jack, SND_JACK_BTN_1, KEY_VOICECOMMAND); 773 + snd_jack_set_key(jack->jack, SND_JACK_BTN_2, KEY_VOLUMEUP); 774 + snd_jack_set_key(jack->jack, SND_JACK_BTN_3, KEY_VOLUMEDOWN); 775 + } 776 + 770 777 ret = snd_soc_component_set_jack(component, jack, NULL); 771 778 772 779 if (ret) {
+1
sound/soc/soc-dapm.c
··· 4057 4057 4058 4058 case SND_SOC_DAPM_POST_PMD: 4059 4059 kfree(substream->runtime); 4060 + substream->runtime = NULL; 4060 4061 break; 4061 4062 4062 4063 default:
+2
sound/soc/sof/topology.c
··· 2050 2050 if (!slink) 2051 2051 return 0; 2052 2052 2053 + slink->link->platforms->name = NULL; 2054 + 2053 2055 kfree(slink->tuples); 2054 2056 list_del(&slink->list); 2055 2057 kfree(slink->hw_configs);
+73 -70
sound/soc/sunxi/sun4i-i2s.c
··· 100 100 #define SUN8I_I2S_CTRL_MODE_PCM (0 << 4) 101 101 102 102 #define SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK BIT(19) 103 - #define SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED (1 << 19) 104 - #define SUN8I_I2S_FMT0_LRCLK_POLARITY_NORMAL (0 << 19) 103 + #define SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH (1 << 19) 104 + #define SUN8I_I2S_FMT0_LRCLK_POLARITY_START_LOW (0 << 19) 105 105 #define SUN8I_I2S_FMT0_LRCK_PERIOD_MASK GENMASK(17, 8) 106 106 #define SUN8I_I2S_FMT0_LRCK_PERIOD(period) ((period - 1) << 8) 107 107 #define SUN8I_I2S_FMT0_BCLK_POLARITY_MASK BIT(7) ··· 729 729 static int sun8i_i2s_set_soc_fmt(const struct sun4i_i2s *i2s, 730 730 unsigned int fmt) 731 731 { 732 - u32 mode, val; 732 + u32 mode, lrclk_pol, bclk_pol, val; 733 733 u8 offset; 734 - 735 - /* 736 - * DAI clock polarity 737 - * 738 - * The setup for LRCK contradicts the datasheet, but under a 739 - * scope it's clear that the LRCK polarity is reversed 740 - * compared to the expected polarity on the bus. 741 - */ 742 - switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 743 - case SND_SOC_DAIFMT_IB_IF: 744 - /* Invert both clocks */ 745 - val = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED; 746 - break; 747 - case SND_SOC_DAIFMT_IB_NF: 748 - /* Invert bit clock */ 749 - val = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED | 750 - SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED; 751 - break; 752 - case SND_SOC_DAIFMT_NB_IF: 753 - /* Invert frame clock */ 754 - val = 0; 755 - break; 756 - case SND_SOC_DAIFMT_NB_NF: 757 - val = SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED; 758 - break; 759 - default: 760 - return -EINVAL; 761 - } 762 - 763 - regmap_update_bits(i2s->regmap, SUN4I_I2S_FMT0_REG, 764 - SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK | 765 - SUN8I_I2S_FMT0_BCLK_POLARITY_MASK, 766 - val); 767 734 768 735 /* DAI Mode */ 769 736 switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { 770 737 case SND_SOC_DAIFMT_DSP_A: 738 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH; 771 739 mode = SUN8I_I2S_CTRL_MODE_PCM; 772 740 offset = 1; 773 741 break; 774 742 
775 743 case SND_SOC_DAIFMT_DSP_B: 744 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH; 776 745 mode = SUN8I_I2S_CTRL_MODE_PCM; 777 746 offset = 0; 778 747 break; 779 748 780 749 case SND_SOC_DAIFMT_I2S: 750 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_LOW; 781 751 mode = SUN8I_I2S_CTRL_MODE_LEFT; 782 752 offset = 1; 783 753 break; 784 754 785 755 case SND_SOC_DAIFMT_LEFT_J: 756 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH; 786 757 mode = SUN8I_I2S_CTRL_MODE_LEFT; 787 758 offset = 0; 788 759 break; 789 760 790 761 case SND_SOC_DAIFMT_RIGHT_J: 762 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH; 791 763 mode = SUN8I_I2S_CTRL_MODE_RIGHT; 792 764 offset = 0; 793 765 break; ··· 776 804 regmap_update_bits(i2s->regmap, SUN8I_I2S_RX_CHAN_SEL_REG, 777 805 SUN8I_I2S_TX_CHAN_OFFSET_MASK, 778 806 SUN8I_I2S_TX_CHAN_OFFSET(offset)); 807 + 808 + /* DAI clock polarity */ 809 + bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_NORMAL; 810 + 811 + switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 812 + case SND_SOC_DAIFMT_IB_IF: 813 + /* Invert both clocks */ 814 + lrclk_pol ^= SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK; 815 + bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED; 816 + break; 817 + case SND_SOC_DAIFMT_IB_NF: 818 + /* Invert bit clock */ 819 + bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED; 820 + break; 821 + case SND_SOC_DAIFMT_NB_IF: 822 + /* Invert frame clock */ 823 + lrclk_pol ^= SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK; 824 + break; 825 + case SND_SOC_DAIFMT_NB_NF: 826 + /* No inversion */ 827 + break; 828 + default: 829 + return -EINVAL; 830 + } 831 + 832 + regmap_update_bits(i2s->regmap, SUN4I_I2S_FMT0_REG, 833 + SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK | 834 + SUN8I_I2S_FMT0_BCLK_POLARITY_MASK, 835 + lrclk_pol | bclk_pol); 779 836 780 837 /* DAI clock master masks */ 781 838 switch (fmt & SND_SOC_DAIFMT_CLOCK_PROVIDER_MASK) { ··· 837 836 static int sun50i_h6_i2s_set_soc_fmt(const struct sun4i_i2s *i2s, 838 837 unsigned int fmt) 839 838 { 840 - u32 mode, val; 839 + u32 
mode, lrclk_pol, bclk_pol, val; 841 840 u8 offset; 842 - 843 - /* 844 - * DAI clock polarity 845 - * 846 - * The setup for LRCK contradicts the datasheet, but under a 847 - * scope it's clear that the LRCK polarity is reversed 848 - * compared to the expected polarity on the bus. 849 - */ 850 - switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 851 - case SND_SOC_DAIFMT_IB_IF: 852 - /* Invert both clocks */ 853 - val = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED; 854 - break; 855 - case SND_SOC_DAIFMT_IB_NF: 856 - /* Invert bit clock */ 857 - val = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED | 858 - SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED; 859 - break; 860 - case SND_SOC_DAIFMT_NB_IF: 861 - /* Invert frame clock */ 862 - val = 0; 863 - break; 864 - case SND_SOC_DAIFMT_NB_NF: 865 - val = SUN8I_I2S_FMT0_LRCLK_POLARITY_INVERTED; 866 - break; 867 - default: 868 - return -EINVAL; 869 - } 870 - 871 - regmap_update_bits(i2s->regmap, SUN4I_I2S_FMT0_REG, 872 - SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK | 873 - SUN8I_I2S_FMT0_BCLK_POLARITY_MASK, 874 - val); 875 841 876 842 /* DAI Mode */ 877 843 switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) { 878 844 case SND_SOC_DAIFMT_DSP_A: 845 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH; 879 846 mode = SUN8I_I2S_CTRL_MODE_PCM; 880 847 offset = 1; 881 848 break; 882 849 883 850 case SND_SOC_DAIFMT_DSP_B: 851 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH; 884 852 mode = SUN8I_I2S_CTRL_MODE_PCM; 885 853 offset = 0; 886 854 break; 887 855 888 856 case SND_SOC_DAIFMT_I2S: 857 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_LOW; 889 858 mode = SUN8I_I2S_CTRL_MODE_LEFT; 890 859 offset = 1; 891 860 break; 892 861 893 862 case SND_SOC_DAIFMT_LEFT_J: 863 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH; 894 864 mode = SUN8I_I2S_CTRL_MODE_LEFT; 895 865 offset = 0; 896 866 break; 897 867 898 868 case SND_SOC_DAIFMT_RIGHT_J: 869 + lrclk_pol = SUN8I_I2S_FMT0_LRCLK_POLARITY_START_HIGH; 899 870 mode = SUN8I_I2S_CTRL_MODE_RIGHT; 900 871 offset = 0; 901 872 
break; ··· 884 911 regmap_update_bits(i2s->regmap, SUN50I_H6_I2S_RX_CHAN_SEL_REG, 885 912 SUN50I_H6_I2S_TX_CHAN_SEL_OFFSET_MASK, 886 913 SUN50I_H6_I2S_TX_CHAN_SEL_OFFSET(offset)); 914 + 915 + /* DAI clock polarity */ 916 + bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_NORMAL; 917 + 918 + switch (fmt & SND_SOC_DAIFMT_INV_MASK) { 919 + case SND_SOC_DAIFMT_IB_IF: 920 + /* Invert both clocks */ 921 + lrclk_pol ^= SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK; 922 + bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED; 923 + break; 924 + case SND_SOC_DAIFMT_IB_NF: 925 + /* Invert bit clock */ 926 + bclk_pol = SUN8I_I2S_FMT0_BCLK_POLARITY_INVERTED; 927 + break; 928 + case SND_SOC_DAIFMT_NB_IF: 929 + /* Invert frame clock */ 930 + lrclk_pol ^= SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK; 931 + break; 932 + case SND_SOC_DAIFMT_NB_NF: 933 + /* No inversion */ 934 + break; 935 + default: 936 + return -EINVAL; 937 + } 938 + 939 + regmap_update_bits(i2s->regmap, SUN4I_I2S_FMT0_REG, 940 + SUN8I_I2S_FMT0_LRCLK_POLARITY_MASK | 941 + SUN8I_I2S_FMT0_BCLK_POLARITY_MASK, 942 + lrclk_pol | bclk_pol); 943 + 887 944 888 945 /* DAI clock master masks */ 889 946 switch (fmt & SND_SOC_DAIFMT_CLOCK_PROVIDER_MASK) {
+7 -5
sound/soc/tegra/tegra210_ahub.c
··· 2 2 // 3 3 // tegra210_ahub.c - Tegra210 AHUB driver 4 4 // 5 - // Copyright (c) 2020-2022, NVIDIA CORPORATION. All rights reserved. 5 + // Copyright (c) 2020-2024, NVIDIA CORPORATION. All rights reserved. 6 6 7 7 #include <linux/clk.h> 8 8 #include <linux/device.h> ··· 1391 1391 return err; 1392 1392 } 1393 1393 1394 - err = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); 1395 - if (err) 1396 - return err; 1397 - 1398 1394 pm_runtime_enable(&pdev->dev); 1395 + 1396 + err = of_platform_populate(pdev->dev.of_node, NULL, NULL, &pdev->dev); 1397 + if (err) { 1398 + pm_runtime_disable(&pdev->dev); 1399 + return err; 1400 + } 1399 1401 1400 1402 return 0; 1401 1403 }
+4 -3
tools/net/ynl/lib/ynl.py
··· 388 388 389 389 def decode(self, ynl, nl_msg, op): 390 390 msg = self._decode(nl_msg) 391 + if op is None: 392 + op = ynl.rsp_by_value[msg.cmd()] 391 393 fixed_header_size = ynl._struct_size(op.fixed_header) 392 394 msg.raw_attrs = NlAttrs(msg.raw, fixed_header_size) 393 395 return msg ··· 923 921 print("Netlink done while checking for ntf!?") 924 922 continue 925 923 926 - op = self.rsp_by_value[nl_msg.cmd()] 927 - decoded = self.nlproto.decode(self, nl_msg, op) 924 + decoded = self.nlproto.decode(self, nl_msg, None) 928 925 if decoded.cmd() not in self.async_msg_ids: 929 926 print("Unexpected msg id done while checking for ntf", decoded) 930 927 continue ··· 981 980 if nl_msg.extack: 982 981 self._decode_extack(req_msg, op, nl_msg.extack) 983 982 else: 984 - op = self.rsp_by_value[nl_msg.cmd()] 983 + op = None 985 984 req_flags = [] 986 985 987 986 if nl_msg.error:
+4 -4
tools/perf/builtin-daemon.c
··· 691 691 692 692 fprintf(out, "%c%" PRIu64, 693 693 /* session up time */ 694 - csv_sep, (curr - daemon->start) / 60); 694 + csv_sep, (uint64_t)((curr - daemon->start) / 60)); 695 695 696 696 fprintf(out, "\n"); 697 697 } else { ··· 702 702 fprintf(out, " lock: %s/lock\n", 703 703 daemon->base); 704 704 fprintf(out, " up: %" PRIu64 " minutes\n", 705 - (curr - daemon->start) / 60); 705 + (uint64_t)((curr - daemon->start) / 60)); 706 706 } 707 707 } 708 708 ··· 730 730 731 731 fprintf(out, "%c%" PRIu64, 732 732 /* session up time */ 733 - csv_sep, (curr - session->start) / 60); 733 + csv_sep, (uint64_t)((curr - session->start) / 60)); 734 734 735 735 fprintf(out, "\n"); 736 736 } else { ··· 747 747 fprintf(out, " ack: %s/%s\n", 748 748 session->base, SESSION_ACK); 749 749 fprintf(out, " up: %" PRIu64 " minutes\n", 750 - (curr - session->start) / 60); 750 + (uint64_t)((curr - session->start) / 60)); 751 751 } 752 752 } 753 753
+3 -1
tools/perf/tests/pmu.c
··· 456 456 /** 457 457 * Test perf_pmu__match() that's used to search for a PMU given a name passed 458 458 * on the command line. The name that's passed may also be a filename type glob 459 - * match. 459 + * match. If the name does not match, perf_pmu__match() attempts to match the 460 + * alias of the PMU, if provided. 460 461 */ 461 462 static int test__pmu_match(struct test_suite *test __maybe_unused, int subtest __maybe_unused) 462 463 { 463 464 struct perf_pmu test_pmu; 465 + test_pmu.alias_name = NULL; 464 466 465 467 test_pmu.name = "pmuname"; 466 468 TEST_ASSERT_EQUAL("Exact match", perf_pmu__match(&test_pmu, "pmuname"), true);
+3
tools/perf/util/bpf_lock_contention.c
··· 286 286 goto next; 287 287 288 288 for (int i = 0; i < total_cpus; i++) { 289 + if (cpu_data[i].lock == 0) 290 + continue; 291 + 289 292 update_lock_stat(stat_fd, -1, end_ts, aggr_mode, 290 293 &cpu_data[i]); 291 294 }
+1
tools/perf/util/python.c
··· 20 20 #include "util/env.h" 21 21 #include "util/kvm-stat.h" 22 22 #include "util/kwork.h" 23 + #include "util/sample.h" 23 24 #include "util/lock-contention.h" 24 25 #include <internal/lib.h> 25 26 #include "../builtin.h"
+34
tools/testing/selftests/bpf/prog_tests/btf.c
··· 3551 3551 BTF_STR_SEC("\0x\0?.foo bar:buz"), 3552 3552 }, 3553 3553 { 3554 + .descr = "datasec: name with non-printable first char not is ok", 3555 + .raw_types = { 3556 + /* int */ 3557 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 3558 + /* VAR x */ /* [2] */ 3559 + BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1), 3560 + BTF_VAR_STATIC, 3561 + /* DATASEC ?.data */ /* [3] */ 3562 + BTF_TYPE_ENC(3, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4), 3563 + BTF_VAR_SECINFO_ENC(2, 0, 4), 3564 + BTF_END_RAW, 3565 + }, 3566 + BTF_STR_SEC("\0x\0\7foo"), 3567 + .err_str = "Invalid name", 3568 + .btf_load_err = true, 3569 + }, 3570 + { 3571 + .descr = "datasec: name '\\0' is not ok", 3572 + .raw_types = { 3573 + /* int */ 3574 + BTF_TYPE_INT_ENC(0, BTF_INT_SIGNED, 0, 32, 4), /* [1] */ 3575 + /* VAR x */ /* [2] */ 3576 + BTF_TYPE_ENC(1, BTF_INFO_ENC(BTF_KIND_VAR, 0, 0), 1), 3577 + BTF_VAR_STATIC, 3578 + /* DATASEC \0 */ /* [3] */ 3579 + BTF_TYPE_ENC(3, BTF_INFO_ENC(BTF_KIND_DATASEC, 0, 1), 4), 3580 + BTF_VAR_SECINFO_ENC(2, 0, 4), 3581 + BTF_END_RAW, 3582 + }, 3583 + BTF_STR_SEC("\0x\0"), 3584 + .err_str = "Invalid name", 3585 + .btf_load_err = true, 3586 + }, 3587 + { 3554 3588 .descr = "type name '?foo' is not ok", 3555 3589 .raw_types = { 3556 3590 /* union ?foo; */
+13 -24
tools/testing/selftests/mm/mseal_test.c
··· 81 81 return sret; 82 82 } 83 83 84 - static void *sys_mmap(void *addr, unsigned long len, unsigned long prot, 85 - unsigned long flags, unsigned long fd, unsigned long offset) 86 - { 87 - void *sret; 88 - 89 - errno = 0; 90 - sret = (void *) syscall(__NR_mmap, addr, len, prot, 91 - flags, fd, offset); 92 - return sret; 93 - } 94 - 95 84 static int sys_munmap(void *ptr, size_t size) 96 85 { 97 86 int sret; ··· 161 172 { 162 173 void *ptr; 163 174 164 - ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); 175 + ptr = mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); 165 176 *ptrOut = ptr; 166 177 } 167 178 ··· 170 181 void *ptr; 171 182 unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE; 172 183 173 - ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0); 184 + ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0); 174 185 *ptrOut = ptr; 175 186 } 176 187 ··· 194 205 void *ptr; 195 206 unsigned long page_size = getpagesize(); 196 207 197 - ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); 208 + ptr = mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); 198 209 if (ptr == (void *) -1) 199 210 return false; 200 211 ··· 470 481 int prot; 471 482 472 483 /* use mmap to change protection. */ 473 - ptr = sys_mmap(0, size, PROT_NONE, 474 - MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); 484 + ptr = mmap(0, size, PROT_NONE, 485 + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); 475 486 FAIL_TEST_IF_FALSE(ptr == 0); 476 487 477 488 size = get_vma_size(ptr, &prot); ··· 1198 1209 } 1199 1210 1200 1211 /* use mmap to change protection. 
*/ 1201 - ret2 = sys_mmap(ptr, size, PROT_NONE, 1202 - MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); 1212 + ret2 = mmap(ptr, size, PROT_NONE, 1213 + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); 1203 1214 if (seal) { 1204 1215 FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); 1205 1216 FAIL_TEST_IF_FALSE(errno == EPERM); ··· 1229 1240 } 1230 1241 1231 1242 /* use mmap to expand. */ 1232 - ret2 = sys_mmap(ptr, size, PROT_READ, 1233 - MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); 1243 + ret2 = mmap(ptr, size, PROT_READ, 1244 + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); 1234 1245 if (seal) { 1235 1246 FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); 1236 1247 FAIL_TEST_IF_FALSE(errno == EPERM); ··· 1257 1268 } 1258 1269 1259 1270 /* use mmap to shrink. */ 1260 - ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ, 1261 - MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); 1271 + ret2 = mmap(ptr, 8 * page_size, PROT_READ, 1272 + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0); 1262 1273 if (seal) { 1263 1274 FAIL_TEST_IF_FALSE(ret2 == MAP_FAILED); 1264 1275 FAIL_TEST_IF_FALSE(errno == EPERM); ··· 1639 1650 ret = fallocate(fd, 0, 0, size); 1640 1651 FAIL_TEST_IF_FALSE(!ret); 1641 1652 1642 - ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0); 1653 + ptr = mmap(NULL, size, PROT_READ, mapflags, fd, 0); 1643 1654 FAIL_TEST_IF_FALSE(ptr != MAP_FAILED); 1644 1655 1645 1656 if (seal) { ··· 1669 1680 int ret; 1670 1681 unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED; 1671 1682 1672 - ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0); 1683 + ptr = mmap(NULL, size, PROT_READ, mapflags, -1, 0); 1673 1684 FAIL_TEST_IF_FALSE(ptr != (void *)-1); 1674 1685 1675 1686 if (seal) {
+1 -12
tools/testing/selftests/mm/seal_elf.c
··· 30 30 return sret; 31 31 } 32 32 33 - static void *sys_mmap(void *addr, unsigned long len, unsigned long prot, 34 - unsigned long flags, unsigned long fd, unsigned long offset) 35 - { 36 - void *sret; 37 - 38 - errno = 0; 39 - sret = (void *) syscall(__NR_mmap, addr, len, prot, 40 - flags, fd, offset); 41 - return sret; 42 - } 43 - 44 33 static inline int sys_mprotect(void *ptr, size_t size, unsigned long prot) 45 34 { 46 35 int sret; ··· 45 56 void *ptr; 46 57 unsigned long page_size = getpagesize(); 47 58 48 - ptr = sys_mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); 59 + ptr = mmap(NULL, page_size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); 49 60 if (ptr == (void *) -1) 50 61 return false; 51 62
+2 -1
tools/testing/selftests/net/Makefile
··· 85 85 TEST_PROGS += sctp_vrf.sh 86 86 TEST_GEN_FILES += sctp_hello 87 87 TEST_GEN_FILES += ip_local_port_range 88 - TEST_GEN_FILES += bind_wildcard 88 + TEST_GEN_PROGS += bind_wildcard 89 + TEST_GEN_PROGS += bind_timewait 89 90 TEST_PROGS += test_vxlan_mdb.sh 90 91 TEST_PROGS += test_bridge_neigh_suppress.sh 91 92 TEST_PROGS += test_vxlan_nolocalbypass.sh
-2
tools/testing/selftests/riscv/mm/mmap_bottomup.c
···
7 7 TEST(infinite_rlimit)
8 8 {
9 9 	EXPECT_EQ(BOTTOM_UP, memory_layout());
10 - 
11 - 	TEST_MMAPS;
12 10 }
13 11 
14 12 TEST_HARNESS_MAIN
-2
tools/testing/selftests/riscv/mm/mmap_default.c
···
7 7 TEST(default_rlimit)
8 8 {
9 9 	EXPECT_EQ(TOP_DOWN, memory_layout());
10 - 
11 - 	TEST_MMAPS;
12 10 }
13 11 
14 12 TEST_HARNESS_MAIN
-67
tools/testing/selftests/riscv/mm/mmap_test.h
···
10 10 #define TOP_DOWN 0
11 11 #define BOTTOM_UP 1
12 12 
13 - #if __riscv_xlen == 64
14 - uint64_t random_addresses[] = {
15 - 	0x19764f0d73b3a9f0, 0x016049584cecef59, 0x3580bdd3562f4acd,
16 - 	0x1164219f20b17da0, 0x07d97fcb40ff2373, 0x76ec528921272ee7,
17 - 	0x4dd48c38a3de3f70, 0x2e11415055f6997d, 0x14b43334ac476c02,
18 - 	0x375a60795aff19f6, 0x47f3051725b8ee1a, 0x4e697cf240494a9f,
19 - 	0x456b59b5c2f9e9d1, 0x101724379d63cb96, 0x7fe9ad31619528c1,
20 - 	0x2f417247c495c2ea, 0x329a5a5b82943a5e, 0x06d7a9d6adcd3827,
21 - 	0x327b0b9ee37f62d5, 0x17c7b1851dfd9b76, 0x006ebb6456ec2cd9,
22 - 	0x00836cd14146a134, 0x00e5c4dcde7126db, 0x004c29feadf75753,
23 - 	0x00d8b20149ed930c, 0x00d71574c269387a, 0x0006ebe4a82acb7a,
24 - 	0x0016135df51f471b, 0x00758bdb55455160, 0x00d0bdd949b13b32,
25 - 	0x00ecea01e7c5f54b, 0x00e37b071b9948b1, 0x0011fdd00ff57ab3,
26 - 	0x00e407294b52f5ea, 0x00567748c200ed20, 0x000d073084651046,
27 - 	0x00ac896f4365463c, 0x00eb0d49a0b26216, 0x0066a2564a982a31,
28 - 	0x002e0d20237784ae, 0x0000554ff8a77a76, 0x00006ce07a54c012,
29 - 	0x000009570516d799, 0x00000954ca15b84d, 0x0000684f0d453379,
30 - 	0x00002ae5816302b5, 0x0000042403fb54bf, 0x00004bad7392bf30,
31 - 	0x00003e73bfa4b5e3, 0x00005442c29978e0, 0x00002803f11286b6,
32 - 	0x000073875d745fc6, 0x00007cede9cb8240, 0x000027df84cc6a4f,
33 - 	0x00006d7e0e74242a, 0x00004afd0b836e02, 0x000047d0e837cd82,
34 - 	0x00003b42405efeda, 0x00001531bafa4c95, 0x00007172cae34ac4,
35 - };
36 - #else
37 - uint32_t random_addresses[] = {
38 - 	0x8dc302e0, 0x929ab1e0, 0xb47683ba, 0xea519c73, 0xa19f1c90, 0xc49ba213,
39 - 	0x8f57c625, 0xadfe5137, 0x874d4d95, 0xaa20f09d, 0xcf21ebfc, 0xda7737f1,
40 - 	0xcedf392a, 0x83026c14, 0xccedca52, 0xc6ccf826, 0xe0cd9415, 0x997472ca,
41 - 	0xa21a44c1, 0xe82196f5, 0xa23fd66b, 0xc28d5590, 0xd009cdce, 0xcf0be646,
42 - 	0x8fc8c7ff, 0xe2a85984, 0xa3d3236b, 0x89a0619d, 0xc03db924, 0xb5d4cc1b,
43 - 	0xb96ee04c, 0xd191da48, 0xb432a000, 0xaa2bebbc, 0xa2fcb289, 0xb0cca89b,
44 - 	0xb0c18d6a, 0x88f58deb, 0xa4d42d1c, 0xe4d74e86, 0x99902b09, 0x8f786d31,
45 - 	0xbec5e381, 0x9a727e65, 0xa9a65040, 0xa880d789, 0x8f1b335e, 0xfc821c1e,
46 - 	0x97e34be4, 0xbbef84ed, 0xf447d197, 0xfd7ceee2, 0xe632348d, 0xee4590f4,
47 - 	0x958992a5, 0xd57e05d6, 0xfd240970, 0xc5b0dcff, 0xd96da2c2, 0xa7ae041d,
48 - };
49 - #endif
50 - 
51 - // Only works on 64 bit
52 - #if __riscv_xlen == 64
53 13 #define PROT (PROT_READ | PROT_WRITE)
54 14 #define FLAGS (MAP_PRIVATE | MAP_ANONYMOUS)
55 - 
56 - /* mmap must return a value that doesn't use more bits than the hint address. */
57 - static inline unsigned long get_max_value(unsigned long input)
58 - {
59 - 	unsigned long max_bit = (1UL << (((sizeof(unsigned long) * 8) - 1 -
60 - 					  __builtin_clzl(input))));
61 - 
62 - 	return max_bit + (max_bit - 1);
63 - }
64 - 
65 - #define TEST_MMAPS \
66 - 	({ \
67 - 		void *mmap_addr; \
68 - 		for (int i = 0; i < ARRAY_SIZE(random_addresses); i++) { \
69 - 			mmap_addr = mmap((void *)random_addresses[i], \
70 - 					 5 * sizeof(int), PROT, FLAGS, 0, 0); \
71 - 			EXPECT_NE(MAP_FAILED, mmap_addr); \
72 - 			EXPECT_GE((void *)get_max_value(random_addresses[i]), \
73 - 				  mmap_addr); \
74 - 			mmap_addr = mmap((void *)random_addresses[i], \
75 - 					 5 * sizeof(int), PROT, FLAGS, 0, 0); \
76 - 			EXPECT_NE(MAP_FAILED, mmap_addr); \
77 - 			EXPECT_GE((void *)get_max_value(random_addresses[i]), \
78 - 				  mmap_addr); \
79 - 		} \
80 - 	})
81 - #endif /* __riscv_xlen == 64 */
82 15 
83 16 static inline int memory_layout(void)
84 17 {